> Since CFD (Computational Fluid Dynamics) and CEM (Computational Electrodynamics) codes generally run on C, C++, and Fortran, or whatever runs numerical linear algebra fastest, I doubt Python will take this spot anytime soon.
You are welcome to harbor your doubts. :) There will continue to be a place for scientists who want to write software that takes advantage of the intricacies of low-level memory transfer. (Fundamentally, this is the control that C gives you.)
However, as hardware becomes more heterogeneous (Xeon Phi, GPU, SSD/hybrid drives, Fusion-io, 40Gb/100Gb interconnects, etc.), it's going to get harder and harder for a programmer to acquire the knowledge required to truly optimize data transfer and compute within a single node or across a cluster, and still have any time left to do real science. Newer approaches to software development are needed, to maximize the potential of all this great new hardware while also minimizing the pain and the level of expertise required of the programmer trying to use it.
The key saving grace is that many of these hardware innovations are really geared toward data-parallel problems, and it just so happens that parallel data structures are also easy for scientists and analysts to reason about. The goal of Blaze is to extend the conceptual triumphs of FORTRAN, Matlab, APL, etc. and build an efficient programming model around this data parallelism.
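The data-parallel style in question can be sketched in plain Python. This toy `ParArray` class (an illustration only, not Blaze's actual API) shows why whole-array expressions are easy to reason about and, because every element is independent, easy for a backend to map onto parallel hardware:

```python
# Toy sketch of a data-parallel array type (not Blaze's real API):
# arithmetic operators apply element-wise over the whole array.
class ParArray:
    def __init__(self, data):
        self.data = list(data)

    def _zip(self, other, op):
        # Element-wise combination. Each position is independent of the
        # others, so a backend is free to compute them in any order --
        # or all at once on a GPU or vector unit.
        other = other.data if isinstance(other, ParArray) else [other] * len(self.data)
        return ParArray(op(a, b) for a, b in zip(self.data, other))

    def __add__(self, other):
        return self._zip(other, lambda a, b: a + b)

    def __mul__(self, other):
        return self._zip(other, lambda a, b: a * b)

# The scientist writes the whole-array formula, not the loop:
x = ParArray([1.0, 2.0, 3.0])
v = x * 2.0 + ParArray([0.5, 0.5, 0.5])
print(v.data)  # [2.5, 4.5, 6.5]
```

The point is the programming model, not this implementation: the user states *what* to compute per element, and the system decides *where and how* to run it.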
If you look at what's already been achieved with NumbaPro (the Python-to-CUDA compiler) in just a few months' work, I think the future is very promising.
The underlying premise for all of Continuum's work is: "Better performance through appropriate high-level abstractions".
I work with Python, but mainly for light statistics (numpy, scipy, pandas, and some other libs); SAS is used for large datasets (mainly because that's the policy of the bank I work at).
I do not work with CFD anymore; I left the field five years ago and returned to Brazil. CUDA/OpenCL was just a promise back then: it was fast, but no one had included the technology in their solvers.
The problem for CFD is that some simulations can take weeks to run; using Python there would just add more weeks because of the overhead the language has. If time or energy consumption is a concern, people will just stay with what they have right now. For small stuff people use whatever they want (I used Python back then; these days they use OpenFOAM), but for performance it will be C, Fortran, or C++ compiled with the best optimizing compilers for at least another decade. In these large problems, data transport and storage were a big issue.
Generally people use C++ these days; Fortran is only important in old codebases (although Coarray Fortran was a buzzword, just like CUDA, when I left the field).