In radio astronomy we use several GPU kernels for signal processing, such as correlator, beamformer, and filtering algorithms. For historical reasons, many of these were developed in CUDA. The current generation of AMD GPUs is becoming competitive, but the effort required to port existing CUDA code to a programming language supported by AMD accelerators (in this case HIP) is prohibitive.
Large language models are deep neural networks trained on both natural language and other text found on the internet, such as source code. Specialized models have already demonstrated the ability to generate code, and products such as GitHub Copilot promise to revolutionize the way we develop code in the future.
In this project we investigate the ability of current large language models to port code from CUDA to HIP. The results are compared to those of the AMD-developed HIPify tool.
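To illustrate the kind of translation involved, the sketch below shows a minimal, hypothetical vector-add kernel (not taken from the project's codebase) in CUDA and its HIP counterpart. The device code is typically unchanged; the port mostly consists of renaming runtime API calls (cudaMalloc becomes hipMalloc, and so on) and, in HIPify's output style, replacing the `<<<...>>>` launch syntax with hipLaunchKernelGGL:

```cuda
// Original CUDA version (illustrative example):
__global__ void add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}
// Host side:
//   cudaMalloc(&d_a, n * sizeof(float));
//   add<<<blocks, threads>>>(d_a, d_b, d_c, n);
//   cudaDeviceSynchronize();

// HIP equivalent: the kernel body is identical, only the host API changes:
//   hipMalloc(&d_a, n * sizeof(float));
//   hipLaunchKernelGGL(add, blocks, threads, 0, 0, d_a, d_b, d_c, n);
//   hipDeviceSynchronize();
```

For a toy kernel like this the mapping is nearly mechanical; real signal-processing kernels that use vendor-specific features (tensor cores, warp-level intrinsics, libraries such as cuFFT) are where both HIPify and language models are expected to struggle.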
Technologies used in this project
- Large language models, both commercial online services and publicly available offline models
- Existing GPU codes for radio astronomy, written in CUDA for Nvidia GPUs
- Nvidia and AMD GPUs for testing
- CUDA and HIP programming environments
- HIPify tool (https://github.com/ROCm-Developer-Tools/HIPIFY)
Goals of this project
The high-level objectives for this project are:
- investigate whether existing large language models can help port GPU code between vendors
- validate whether the results are correct and performant
- compare the quality of the produced code (per language model) to the existing HIPify tool (and the original CUDA version)
- run the hipified code on both AMD and Nvidia GPUs