Verification
Obtain the model files and run a test to verify that the environment was deployed successfully.
- Obtain the DeepSeek-R1-Distill-Llama-70B model files.
- Use "vllm/examples/offline_inference/basic/basic.py" for the test. A sample of the basic.py test code is shown below; replace the model path with the path to your local model files.
```python
# SPDX-License-Identifier: Apache-2.0
from vllm import LLM, SamplingParams

# Sample prompts.
prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
# Create a sampling params object.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
# Create an LLM. Replace the model path with your local model path, and
# adjust tensor_parallel_size to the number of NPUs available so the model can run.
llm = LLM(model="/home/models/DeepSeek-R1-Distill-Llama-70B/",
          tensor_parallel_size=8)
# Generate texts from the prompts. The output is a list of RequestOutput objects
# that contain the prompt, generated text, and other information.
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
- Run the test command.
python3 basic.py
If the model runs normally, the output contains no garbled characters, and the generated sentences read fluently, the environment can be considered correctly configured.
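The "no garbled characters" check above can also be partially automated. The sketch below is a hypothetical helper (not part of vLLM; the function name `looks_coherent` and its thresholds are illustrative assumptions) that flags empty output or a high proportion of non-printable or replacement characters, which often indicates a misconfigured environment:

```python
# Hypothetical sanity check for generated text. This is an illustrative
# sketch, not a vLLM API: it only catches empty or visibly corrupted
# output, not semantically incoherent text.

def looks_coherent(text: str, max_bad_ratio: float = 0.1) -> bool:
    """Return True if `text` is non-empty and mostly printable."""
    if not text:
        return False
    # Count Unicode replacement characters and other non-printable,
    # non-whitespace characters as "bad".
    bad = sum(
        1 for ch in text
        if ch == "\ufffd" or (not ch.isprintable() and not ch.isspace())
    )
    return bad / len(text) <= max_bad_ratio

# Example usage against mock outputs:
samples = ["Hello, my name is Alice.", "\ufffd\ufffd\ufffd\ufffd"]
print([looks_coherent(s) for s in samples])  # [True, False]
```

In practice, such a check could be applied to each `output.outputs[0].text` from the test script; final judgment of fluency still requires a human reader.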