Build Your Own Distribution
This guide walks you through building your own Llama Stack distribution from scratch, with the API providers of your choice.
Setting your log level
To set the desired log level, use the LLAMA_STACK_LOGGING environment variable, with the following format:
LLAMA_STACK_LOGGING=server=debug;core=info
where each category in the list below:
all
core
server
router
inference
agents
safety
eval
tools
client
can be set to any of the following log levels:
debug
info
warning
error
critical
The default global log level is info. all sets the log level for all components.
You can also set LLAMA_STACK_LOG_FILE, which will write logs to the specified path as well as to the terminal. For example: export LLAMA_STACK_LOG_FILE=server.log
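Putting these together, a typical setup before starting the server might look like the following sketch (the run target tgi is just an example; quoting the value keeps the shell from interpreting the semicolon):
export LLAMA_STACK_LOGGING="server=debug;core=info"
export LLAMA_STACK_LOG_FILE=server.log
llama stack run tgi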
Llama Stack Build
To build your own distribution, we recommend cloning the llama-stack repository.
git clone git@github.com:meta-llama/llama-stack.git
cd llama-stack
pip install -e .
Use the CLI to build your distribution. The main points to consider are:
Image Type - Do you want a Conda / venv environment or a container (e.g. Docker)?
Template - Do you want to use a template to build your distribution, or start from scratch?
Config - Do you want to use an existing config file to build your distribution?
llama stack build -h
usage: llama stack build [-h] [--config CONFIG] [--template TEMPLATE] [--list-templates] [--image-type {conda,container,venv}] [--image-name IMAGE_NAME] [--print-deps-only] [--run]
Build a Llama stack container
options:
-h, --help show this help message and exit
--config CONFIG Path to a config file to use for the build. You can find example configs in llama_stack/distributions/**/build.yaml. If this argument is not provided, you will
be prompted to enter information interactively (default: None)
--template TEMPLATE Name of the example template config to use for build. You may use `llama stack build --list-templates` to check out the available templates (default: None)
--list-templates Show the available templates for building a Llama Stack distribution (default: False)
--image-type {conda,container,venv}
Image Type to use for the build. This can be either conda or container or venv. If not specified, will use the image type from the template config. (default:
conda)
--image-name IMAGE_NAME
[for image-type=conda|container|venv] Name of the conda or virtual environment to use for the build. If not specified, currently active Conda environment will be used if
found. (default: None)
--print-deps-only Print the dependencies for the stack only, without building the stack (default: False)
--run Run the stack after building using the same image type, name, and other applicable arguments (default: False)
After this step is complete, a file named <name>-build.yaml and a template file <name>-run.yaml will be generated and saved at the output file path specified at the end of the command.
To build from alternative API providers, we provide distribution templates so you can get started building a distribution backed by different providers.
The following command lets you see the available templates and their corresponding providers.
llama stack build --list-templates
+------------------------------+-----------------------------------------------------------------------------+
| Template Name                | Description                                                                 |
+------------------------------+-----------------------------------------------------------------------------+
| hf-serverless                | Use (an external) Hugging Face Inference Endpoint for running LLM inference |
+------------------------------+-----------------------------------------------------------------------------+
| together                     | Use Together.AI for running LLM inference                                   |
+------------------------------+-----------------------------------------------------------------------------+
| vllm-gpu                     | Use a built-in vLLM engine for running LLM inference                        |
+------------------------------+-----------------------------------------------------------------------------+
| experimental-post-training   | Experimental template for post training                                     |
+------------------------------+-----------------------------------------------------------------------------+
| remote-vllm                  | Use (an external) vLLM server for running LLM inference                     |
+------------------------------+-----------------------------------------------------------------------------+
| fireworks                    | Use Fireworks.AI for running LLM inference                                  |
+------------------------------+-----------------------------------------------------------------------------+
| tgi                          | Use (an external) TGI server for running LLM inference                      |
+------------------------------+-----------------------------------------------------------------------------+
| bedrock                      | Use AWS Bedrock for running LLM inference and safety                        |
+------------------------------+-----------------------------------------------------------------------------+
| meta-reference-gpu           | Use Meta Reference for running LLM inference                                |
+------------------------------+-----------------------------------------------------------------------------+
| nvidia                       | Use NVIDIA NIM for running LLM inference                                    |
+------------------------------+-----------------------------------------------------------------------------+
| cerebras                     | Use Cerebras for running LLM inference                                      |
+------------------------------+-----------------------------------------------------------------------------+
| ollama                       | Use (an external) Ollama server for running LLM inference                   |
+------------------------------+-----------------------------------------------------------------------------+
| hf-endpoint                  | Use (an external) Hugging Face Inference Endpoint for running LLM inference |
+------------------------------+-----------------------------------------------------------------------------+
You can then pick a template to build your distribution with the providers you prefer.
For example, to build a distribution with TGI as the inference provider, you can run:
$ llama stack build --template tgi
...
You can now edit ~/.llama/distributions/llamastack-tgi/tgi-run.yaml and run `llama stack run ~/.llama/distributions/llamastack-tgi/tgi-run.yaml`
If the provided templates do not fit your use case, you can start with llama stack build, which launches an interactive wizard that prompts you for the build configuration.
It is best to start with a template first, to understand the structure of the config file and the various concepts (APIs, providers, resources, etc.), before building from scratch.
llama stack build
> Enter a name for your Llama Stack (e.g. my-local-stack): my-stack
> Enter the image type you want your Llama Stack to be built as (container or conda or venv): conda
Llama Stack is composed of several APIs working together. Let's select
the provider types (implementations) you want to use for these APIs.
Tip: use <TAB> to see options for the providers.
> Enter provider for API inference: inline::meta-reference
> Enter provider for API safety: inline::llama-guard
> Enter provider for API agents: inline::meta-reference
> Enter provider for API memory: inline::faiss
> Enter provider for API datasetio: inline::meta-reference
> Enter provider for API scoring: inline::meta-reference
> Enter provider for API eval: inline::meta-reference
> Enter provider for API telemetry: inline::meta-reference
> (Optional) Enter a short description for your Llama Stack:
You can now edit ~/.llama/distributions/llamastack-my-local-stack/my-local-stack-run.yaml and run `llama stack run ~/.llama/distributions/llamastack-my-local-stack/my-local-stack-run.yaml`
Besides templates, you can also customize the build by editing a config file and building from it with the command below.
The contents of the config file will be similar to the files in
llama_stack/templates/*build.yaml
.
$ cat llama_stack/templates/ollama/build.yaml
name: ollama
distribution_spec:
  description: Like local, but use ollama for running LLM inference
  providers:
    inference: remote::ollama
    memory: inline::faiss
    safety: inline::llama-guard
    agents: inline::meta-reference
    telemetry: inline::meta-reference
image_name: ollama
image_type: conda
# If some providers are external, you can specify the path to the implementation
external_providers_dir: /etc/llama-stack/providers.d
llama stack build --config llama_stack/templates/ollama/build.yaml
Llama Stack supports external providers that live outside of the main codebase. This allows you to create and maintain your own providers independently, or use providers shared by the community.
To build a distribution with external providers, you need to configure external_providers_dir in your build configuration file:
# Example my-external-stack.yaml with external providers
version: '2'
distribution_spec:
  description: Custom distro for CI tests
  providers:
    inference:
      - remote::custom_ollama
    # Add more providers as needed
image_type: container
image_name: ci-test
# Path to external provider implementations
external_providers_dir: /etc/llama-stack/providers.d
Here is an example of a custom Ollama provider:
adapter:
  adapter_type: custom_ollama
  pip_packages:
    - ollama
    - aiohttp
    - llama-stack-provider-ollama # This is the provider package
  config_class: llama_stack_ollama_provider.config.OllamaImplConfig
  module: llama_stack_ollama_provider
api_dependencies: []
optional_api_dependencies: []
The pip_packages section lists the Python packages required by the provider, as well as the provider package itself. The package must be available on PyPI, or it can be provided from a local directory or a git repository (git must be installed in the build environment).
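For instance, if the provider package is not published to PyPI, the pip_packages entries can point at a git repository or a local checkout using standard pip requirement syntax. This is only an illustrative sketch; the URL and path below are placeholders, not real packages:
  pip_packages:
    - ollama
    - aiohttp
    # installed from a git repository (placeholder URL)
    - llama-stack-provider-ollama @ git+https://github.com/example/llama-stack-provider-ollama.git
    # or from a local checkout (placeholder path)
    # - /path/to/llama-stack-provider-ollama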
Build your distribution using the config file:
llama stack build --config my-external-stack.yaml
For more information on external providers, including the directory structure, provider types, and implementation requirements, see the external providers documentation.
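As a rough sketch (an assumption for illustration, not a normative layout; the external providers documentation is authoritative), the adapter spec shown above would typically be saved as a YAML file under the configured external_providers_dir, organized by provider kind and API:
/etc/llama-stack/providers.d/
  remote/
    inference/
      custom_ollama.yaml   # contains the adapter spec shown above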
Podman Alternative
Podman is supported as an alternative to Docker. Set CONTAINER_BINARY to podman in your environment to use Podman.
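For example, a minimal sketch (pair this with the container build command shown below):
export CONTAINER_BINARY=podman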
To build a container image, you can start from a template and use the --image-type container flag to specify container as the build image type.
llama stack build --template ollama --image-type container
$ llama stack build --template ollama --image-type container
...
Containerfile created successfully in /tmp/tmp.viA3a3Rdsg/Containerfile
FROM python:3.10-slim
...
You can now edit ~/meta-llama/llama-stack/tmp/configs/ollama-run.yaml and run `llama stack run ~/meta-llama/llama-stack/tmp/configs/ollama-run.yaml`
After this step succeeds, you should be able to find the built container image and test it with llama stack run <path/to/run.yaml>.
Running your Stack server
Now, let's start the Llama Stack distribution server. You will need the YAML configuration file that was written out at the end of the llama stack build step.
llama stack run -h
usage: llama stack run [-h] [--port PORT] [--image-name IMAGE_NAME] [--disable-ipv6] [--env KEY=VALUE] [--tls-keyfile TLS_KEYFILE] [--tls-certfile TLS_CERTFILE]
[--image-type {conda,container,venv}]
config
Start the server for a Llama Stack Distribution. You should have already built (or downloaded) and configured the distribution.
positional arguments:
config Path to config file to use for the run
options:
-h, --help show this help message and exit
--port PORT Port to run the server on. It can also be passed via the env var LLAMA_STACK_PORT. (default: 8321)
--image-name IMAGE_NAME
Name of the image to run. Defaults to the current environment (default: None)
--disable-ipv6 Disable IPv6 support (default: False)
--env KEY=VALUE Environment variables to pass to the server in KEY=VALUE format. Can be specified multiple times. (default: [])
--tls-keyfile TLS_KEYFILE
Path to TLS key file for HTTPS (default: None)
--tls-certfile TLS_CERTFILE
Path to TLS certificate file for HTTPS (default: None)
--image-type {conda,container,venv}
Image Type used during the build. This can be either conda or container or venv. (default: conda)
# Start using template name
llama stack run tgi
# Start using config file
llama stack run ~/.llama/distributions/llamastack-my-local-stack/my-local-stack-run.yaml
# Start using a venv
llama stack run --image-type venv ~/.llama/distributions/llamastack-my-local-stack/my-local-stack-run.yaml
# Start using a conda environment
llama stack run --image-type conda ~/.llama/distributions/llamastack-my-local-stack/my-local-stack-run.yaml
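Environment variables referenced by the run config can also be passed on the command line with --env. A sketch, in which the variable name INFERENCE_MODEL is illustrative and depends on what your template expects:
# Start with an environment variable override
llama stack run ~/.llama/distributions/llamastack-my-local-stack/my-local-stack-run.yaml --env INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct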
$ llama stack run ~/.llama/distributions/llamastack-my-local-stack/my-local-stack-run.yaml
Serving API inspect
GET /health
GET /providers/list
GET /routes/list
Serving API inference
POST /inference/chat_completion
POST /inference/completion
POST /inference/embeddings
...
Serving API agents
POST /agents/create
POST /agents/session/create
POST /agents/turn/create
POST /agents/delete
POST /agents/session/delete
POST /agents/session/get
POST /agents/step/get
POST /agents/turn/get
Listening on ['::', '0.0.0.0']:8321
INFO: Started server process [2935911]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://['::', '0.0.0.0']:8321 (Press CTRL+C to quit)
INFO: 2401:db00:35c:2d2b:face:0:c9:0:54678 - "GET /models/list HTTP/1.1" 200 OK
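Once the server reports that it is listening, a quick smoke test is to hit one of the inspect routes listed above, assuming the default port 8321 (the exact route prefix may differ across versions):
curl http://localhost:8321/health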
Troubleshooting
If you experience any issues, ask a question in our Discord, search our GitHub issues, or file a new one.