Deployment Environment Configuration
Configure the deployment environment before deploying the model.
If you need to deploy an inference model in a container, install Docker (version 24.x.x or later) on the host and start a container running the required OS. Before deploying an inference model in a container, contact Huawei technical support to obtain the MindIE Turbo Developer Guide.
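When starting such a container, the NPU devices and driver libraries usually have to be passed through from the host. The sketch below assembles a typical docker run invocation and prints it for review; the image name, device paths, and mount points are assumptions to adapt to your host, not a definitive command:

```shell
# Sketch only: image name and device paths are assumptions; adapt to your host.
IMAGE=ubuntu:24.04                      # hypothetical base image of the required OS
RUN_CMD="docker run -it --name mindie-dev \
  --device=/dev/davinci0 \
  --device=/dev/davinci_manager \
  --device=/dev/devmm_svm \
  --device=/dev/hisi_hdc \
  -v /usr/local/Ascend/driver:/usr/local/Ascend/driver:ro \
  -v /usr/local/dcmi:/usr/local/dcmi:ro \
  $IMAGE /bin/bash"
echo "$RUN_CMD"                         # review the command before running it
```

Add one --device entry per NPU you want visible in the container (davinci0, davinci1, and so on).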
Installing Base Software Packages
This document uses Ubuntu 24.04 or openEuler 22.03 LTS SP4 as an example. Install either of them directly or in a container.
- Install basic software packages.
- Ubuntu
apt-get install -y gcc g++ make zlib1g zlib1g-dev openssl libreadline-dev git wget libsqlite3-dev libssl-dev libffi-dev unzip pciutils net-tools libblas-dev gfortran libblas3 libopenblas-dev libxml2-dev libxml2 vim sudo flex bison libdigest-md5-perl perl* cpanminus cron curl
- openEuler
yum install -y gcc-c++ make zlib-devel readline-devel git wget sqlite-devel openssl-devel libffi-devel unzip pciutils net-tools blas-devel gcc-gfortran openblas-devel libxml2-devel libxml2 vim-enhanced sudo flex bison perl-Digest-MD5 perl cronie curl
- Run the following commands to set the basic environment variables and write them to the ~/.bashrc file:
export GIT_SSL_NO_VERIFY=1
export LD_LIBRARY_PATH=/usr/local/Ascend/driver/lib64/driver:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/usr/local/Ascend/driver/lib64/common:$LD_LIBRARY_PATH
echo 'export GIT_SSL_NO_VERIFY=1' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/Ascend/driver/lib64/driver:$LD_LIBRARY_PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/Ascend/driver/lib64/common:$LD_LIBRARY_PATH' >> ~/.bashrc
Installing CMake
- Download the .sh installer of a stable CMake release. You can download the file to a local PC and upload it to the server, or run the following command to download it on the server directly (version 3.27.1 or later is recommended; version 3.31.5 for Arm is used as an example):
wget --no-check-certificate https://github.com/Kitware/CMake/releases/download/v3.31.5/cmake-3.31.5-linux-aarch64.sh
- Install CMake and add its bin directory to the PATH, persisting the setting in ~/.bashrc so that cmake remains available in new sessions.
bash cmake-3.31.5-linux-aarch64.sh --prefix=/usr/bin
export PATH=/usr/bin/cmake-3.31.5-linux-aarch64/bin:${PATH}
echo 'export PATH=/usr/bin/cmake-3.31.5-linux-aarch64/bin:${PATH}' >> ~/.bashrc
Installing Python
Install Python 3.10 or later. Anaconda with Python 3.11 is used as an example.
- Obtain the Anaconda installation package. Anaconda3 2024.02-1 for Linux aarch64 is used as an example. You can download the package to a local PC and upload it to the server, or run the following command to download it on the server:
wget --no-check-certificate https://repo.anaconda.com/archive/Anaconda3-2024.02-1-Linux-aarch64.sh
- Install Anaconda.
bash Anaconda3-2024.02-1-Linux-aarch64.sh -b -p /opt/conda
. /opt/conda/etc/profile.d/conda.sh
export PATH=/opt/conda/bin:/opt/conda/condabin:$PATH
echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc
echo 'export PATH=/opt/conda/bin:/opt/conda/condabin:$PATH' >> ~/.bashrc
- Create a conda environment. The following uses Python 3.11 as an example.
conda create --name test python=3.11
- Activate the conda environment.
conda activate test
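After activation, it is worth confirming that the environment's interpreter is the expected version. A quick check (inside the activated environment, python points at the env's Python 3.11; python3 is used here for portability):

```shell
# Print the interpreter version of the active environment.
# Inside the conda env created above, this should report Python 3.11.x.
python3 --version
```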
- Write the following content to the ~/.pip/pip.conf file to configure a pip mirror source. If the file does not exist, create it.
[global]
index-url = http://mirrors.huaweicloud.com/repository/pypi/simple/
trusted-host = mirrors.huaweicloud.com
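One way to create and sanity-check the file in a single step is sketched below. It writes to a temp file for illustration; for real use, point CONF at ~/.pip/pip.conf (creating the ~/.pip directory first if needed):

```shell
# Illustration: write the pip mirror config and verify the key lines landed.
# Replace CONF with ~/.pip/pip.conf (after mkdir -p ~/.pip) on the server.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[global]
index-url = http://mirrors.huaweicloud.com/repository/pypi/simple/
trusted-host = mirrors.huaweicloud.com
EOF
grep -q "^index-url" "$CONF" && echo "pip mirror configured"
```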
Installing CANN
- Before the installation, run the npu-smi info command and check its output to determine whether the Ascend NPU driver is installed.
- If it is not installed, contact Huawei technical support to obtain the CANN Commercial Edition Software Installation Guide, install the driver as described there, and then go to the next step.
- If the NPU driver has been installed, go to the next step.
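npu-smi info prints a per-device table that includes a Health column. A hedged sketch of checking that every device reports OK, demonstrated on an illustrative sample (the real table layout varies by driver version, so adjust the column index to your output):

```shell
# Illustrative sample; on the server, substitute the output of `npu-smi info`.
SAMPLE='NPU   Name      Health
0     910B      OK
1     910B      OK'
# Fail if any device line reports a Health value other than OK.
if echo "$SAMPLE" | awk 'NR>1 && $3!="OK"{bad=1} END{exit bad}'; then
  echo "all NPUs healthy"
fi
```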
- Contact Huawei engineers to obtain the CANN software installation package.
- Go to the directory containing the packages and install CANN. (Grant the .run files execute permission using the chmod command.)
cd /home/packages
chmod -R 755 .
./Ascend-cann-toolkit_8.1.RC1_linux-aarch64.run --install
source /usr/local/Ascend/ascend-toolkit/set_env.sh
echo 'source /usr/local/Ascend/ascend-toolkit/set_env.sh' >> ~/.bashrc
./Ascend-cann-kernels-910b_8.1.RC1_linux-aarch64.run --install
./Ascend-cann-nnal_8.1.RC1_linux-aarch64.run --install
source /usr/local/Ascend/nnal/atb/set_env.sh
echo 'source /usr/local/Ascend/nnal/atb/set_env.sh' >> ~/.bashrc
- Query the CANN version. If it matches the version of the installed package, the installation succeeded.
- Go to the directory where the software package installation information file is stored. Replace it with the actual installation path.
cd /usr/local/Ascend/ascend-toolkit/latest/arm64-linux/
- Run the following command to query the version:
cat ascend_toolkit_install.info
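To compare the version programmatically rather than by eye, the version field can be extracted from the info file. The sketch below runs against an illustrative sample of the file's key=value contents (field names are assumptions based on the file shown above; check your actual file):

```shell
# Illustrative contents; on the server, read ascend_toolkit_install.info
# from the install directory instead of this sample string.
INFO='package_name=Ascend-cann-toolkit
version=8.1.RC1
arch=aarch64'
VER=$(printf '%s\n' "$INFO" | awk -F= '$1=="version"{print $2}')
echo "installed CANN toolkit version: $VER"
```

Compare VER against the version embedded in the .run package file name (8.1.RC1 in this example).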

Installing Ascend Extension for PyTorch
Install torch and torch_npu (version 2.5.1 is used as an example). To obtain other versions, contact Huawei technical support for the Ascend Extension for PyTorch 7.0.0 Software Installation Guide.
- Download the Torch software package.
wget https://download.pytorch.org/whl/cpu/torch-2.5.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl --no-check-certificate
- Perform the installation.
pip3 install torch-2.5.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
- Download the torch_npu plugin package.
wget https://gitee.com/ascend/pytorch/releases/download/v7.0.0-pytorch2.5.1/torch_npu-2.5.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl --no-check-certificate
- Perform the installation.
pip3 install torch_npu-2.5.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
- Query the torch and torch_npu versions. If they match the versions of the installed packages, the installation succeeded.
pip show torch
pip show torch_npu
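The Version: line of pip show can also be extracted for an automated comparison. The sketch below parses an illustrative pip show torch output; on the server, pipe the real command output instead:

```shell
# Illustrative `pip show torch` output; on the server use:
#   pip show torch | awk '/^Version:/{print $2}'
SHOW='Name: torch
Version: 2.5.1
Summary: Tensors and Dynamic neural networks'
TORCH_VER=$(printf '%s\n' "$SHOW" | awk '/^Version:/{print $2}')
echo "torch version: $TORCH_VER"   # should match the wheel installed above
```

Repeat the same check with pip show torch_npu; both should report 2.5.1 for this example.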