Installing pre-built conda packages
numba-dpex along with its dependencies can be installed using conda. It is
recommended to use conda packages from the intel channel to get the latest
production releases.
conda create -n numba-dpex-env \
    numba-dpex dpnp dpctl dpcpp-llvm-spirv \
    -c intel -c conda-forge
To try out the bleeding edge, the latest packages built from the tip of the
main branch can be installed from the dppy/label/dev conda channel.
conda create -n numba-dpex-env \
    numba-dpex dpnp dpctl dpcpp-llvm-spirv \
    -c dppy/label/dev -c intel -c conda-forge
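After creating and activating either environment, a quick way to confirm that the key packages are importable is a short Python check. This is a minimal sketch: it only reports whether each module can be located, and it assumes that the conda package numba-dpex is imported under the module name numba_dpex.

```python
import importlib.util

def is_importable(module_name: str) -> bool:
    """Return True if the named module can be found on the current Python path."""
    return importlib.util.find_spec(module_name) is not None

# After a successful install, each of these should report True:
for name in ("numba_dpex", "dpctl", "dpnp"):
    print(name, "importable:", is_importable(name))
```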
Building from source
numba-dpex can be built from source using either conda-build or setuptools.

Steps to build using conda-build:

Ensure that conda-build is installed in the base conda environment, or create a
new conda environment with it installed:
conda create -n build-env conda-build
conda activate build-env
Build using the vendored conda recipe
conda build conda-recipe -c intel -c conda-forge
Install the conda package
conda install -c local numba-dpex
Steps to build using setuptools:
As before, a conda environment with all necessary dependencies is the suggested first step.
# Create a conda environment that has needed dependencies installed
conda create -n numba-dpex-env \
    scikit-build cmake dpctl dpnp numba dpcpp-llvm-spirv llvmdev pytest \
    -c intel -c conda-forge

# Activate the environment
conda activate numba-dpex-env

# Clone the numba-dpex repository
git clone https://github.com/IntelPython/numba-dpex.git
cd numba-dpex
python setup.py develop
Building inside Docker
A Dockerfile is provided in the GitHub repository to build numba-dpex as well
as its direct dependencies, dpctl and dpnp. Users can either use one of the
pre-built images on the numba-dpex GitHub page or use the bundled Dockerfile
to build numba-dpex from source. Using the Dockerfile also ensures that all
device drivers and runtime libraries are pre-installed.
numba-dpex ships with a multi-stage Dockerfile, which means several build
targets are available. The most useful ones are the runtime and builder
targets.
To build the docker image:
docker build --target runtime -t numba-dpex:runtime ./
To run the docker image:
docker run -it --rm numba-dpex:runtime
When trying to build a docker image with Intel GPU support, the Dockerfile
will attempt to use the GitHub API to get the latest Intel GPU drivers.
Users may run into an issue related to GitHub API call limits. The issue
can be bypassed by providing valid GitHub credentials using the GITHUB_USER
and GITHUB_PASSWORD build args to increase the call limit. A GitHub access
token can also be used instead of the password.
When building the docker image behind a firewall, the proxy server settings
should be provided using the http_proxy and https_proxy build args.
These build args must be specified in lowercase.
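As an illustration, a build behind a proxy might look like the following sketch; the proxy URL is a placeholder that must be replaced with your actual proxy server, and the runtime target mirrors the build command shown earlier.

```shell
# Placeholder proxy address; substitute your own proxy server and port.
docker build --target runtime \
    --build-arg http_proxy=http://proxy.example.com:8080 \
    --build-arg https_proxy=http://proxy.example.com:8080 \
    -t numba-dpex:runtime ./
```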
The bundled Dockerfile supports different Python versions, which can be
selected with the PYTHON_VERSION build arg. By default, the docker image is
based on the official slim Debian Python image. The requested Python version
must be one of the available Python docker images.

The BASE_IMAGE build arg can be used to build the docker image from a
custom image. Note that, as the Dockerfile is Debian-based, any custom base
image should also be Debian-based, like debian or ubuntu.
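As a sketch, either build arg can be passed on the docker build command line. The version and image tags below are illustrative assumptions and must correspond to images actually available on Docker Hub.

```shell
# Select a specific Python version (must match an available python image tag):
docker build --target runtime --build-arg PYTHON_VERSION=3.10 -t numba-dpex:runtime ./

# Or build from a custom debian-based base image:
docker build --target runtime --build-arg BASE_IMAGE=ubuntu:22.04 -t numba-dpex:runtime ./
```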
Several other build args are available; please refer to the Dockerfile for the
full, up-to-date list.
Using the pre-built images
There are several pre-built docker images available:

- runtime: provides a pre-built environment with numba-dpex already installed.
  It is ideal for quickly setting up and trying out numba-dpex.
- builder: has all required dependencies pre-installed and is ideal for
  building numba-dpex from source.
- stages: primarily meant for creating a new docker image that is built on top
  of one of the pre-built images.
After setting up the docker image, the following snippet can be used to run
numba-dpex:
docker run --rm -it ghcr.io/intelpython/numba-dpex/runtime:0.20.0-py3.10 bash
It is advisable to verify the SYCL runtime and driver installation within the
container by running:
python -m dpctl -f
To enable a GPU device, the docker --device argument should be used, together
with one of the *-gpu images.
For passing a GPU into the container on Linux, use the argument
--device=/dev/dri.
However, if you are using WSL, you need to pass
--device=/dev/dxg -v /usr/lib/wsl:/usr/lib/wsl instead.
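As an illustration, a GPU-enabled container on a native Linux host might be started as follows. This is a sketch that assumes the 0.20.0-py3.10-gpu image tag used elsewhere in this guide and the standard /dev/dri device node for Intel GPUs on Linux.

```shell
# Pass the host GPU device node into the container (native Linux, not WSL).
docker run --rm -it \
    --device=/dev/dri \
    ghcr.io/intelpython/numba-dpex/runtime:0.20.0-py3.10-gpu
```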
For example, to run
numba-dpex with GPU support on WSL:
docker run --rm -it \
    --device=/dev/dxg -v /usr/lib/wsl:/usr/lib/wsl \
    ghcr.io/intelpython/numba-dpex/runtime:0.20.0-py3.10-gpu
numba-dpex uses pytest for unit testing. The following example shows how to
run the unit tests:
python -m pytest --pyargs numba_dpex.tests
A set of examples on how to use numba-dpex can be found in the source
repository.