Last weekend I was searching the internet for news on CUDA acceleration for PixInsight. I didn't find any news on that, but I stumbled over this.
As I use Starnet++ V2 a lot in my workflow for narrowband images (see here) and my PC is equipped with an NVIDIA GeForce GTX 1050 Ti, I was wondering how much this would accelerate my image processing.
I followed William Li's instructions and... it was amazing. Calculation time dropped from almost 2 minutes to about 15 s.
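For a rough sense of scale, those two timings correspond to about an 8x speedup (a back-of-the-envelope figure, since both times are approximate):

```python
# Approximate timings from the run described above (in seconds).
cpu_time = 120   # "almost 2 minutes" without CUDA
gpu_time = 15    # "about 15 s" with CUDA

speedup = cpu_time / gpu_time
print(f"Speedup: ~{speedup:.0f}x")  # roughly 8x
```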
Images to the right from top to bottom:
- CPU usage (AMD FX-8320 Eight-Core 3.5 GHz) without CUDA
- GPU usage (GTX 1050 Ti) without CUDA
- CPU usage with CUDA
- GPU usage with CUDA
So if you are using Starnet++ V1 or V2 - standalone, GUI or with PixInsight - and you own an NVIDIA graphics card with CUDA capabilities, you should give it a try!