Do you have numerical code written in Python and Numpy? Do you wish it ran faster, using the full potential of your CPU?
Then you should try Numba, a JIT compiler that translates a subset of Python and Numpy code into fast machine code.
This talk will explain how Numba works, and when and how to use it for numerical algorithms, focusing on how to get very good performance on the CPU.
To understand this talk, only a basic knowledge of Python and Numpy is needed.
You will learn how Python compiles functions to bytecode and how Numba compiles that bytecode to machine code. You will see why algorithms implemented with Numpy sometimes don't yield great performance, and how to do better using Numba. You will learn about the @numba.jit and @numba.vectorize decorators, and how to create functions that use the CPU well through, e.g., multi-threading (several CPU cores), vector instructions (single instruction, multiple data), and fast math (trading float accuracy for speed).
You will also learn when it does and doesn't make sense to use Numba, by contrasting it briefly with some other options for high-performance computing from Python: PyPy, C, C++, Cython, NumExpr, Dask, PyTorch, TensorFlow, and Google JAX.