For instance, you can write Python without using CUDA. CUDA's existence doesn't make Python less useful. But what do you do when you bump into a bug in Mojo? You have no ability to fix it yourself. At best, you can report it to the authors and hope they care enough about it to put in the work and release an update. If you run into a Python problem, you, or someone in your org, or a paid consultant, can fix it even if the Python core team doesn't care about it.
But somehow it is a problem when Modular merely promises to open-source the Mojo compiler?
> For instance, you can write Python without using CUDA. CUDA's existence doesn't make Python less useful. But what do you do when you bump into a bug in Mojo? You have no ability to fix it yourself.
Nobody does AI training or inference with bare Python anyway.
PyTorch (which almost all AI researchers use) defaults to CUDA, and it is less useful without it, since every other backend is slower. If there is a bug anywhere between PyTorch and the silicon, you have to work out whether it is a PyTorch problem (on the C++ side, the Python side, or both) or a CUDA driver issue.
So it's a bug in one place (Mojo) versus a bug in any of four different places, one of which (CUDA) will never be open source. The latter is worse.
> At best, you can report it to the authors and hope they care enough about it to put in the work and release an update. If you run into a Python problem, you, or someone in your org, or a paid consultant, can fix it even if the Python core team doesn't care about it.
You are assuming Modular will never open-source the Mojo compiler, while Nvidia has been openly hostile to opening anything related to CUDA and its compiler.