Guide to Integrating Azure AI Model Inference with Lobe Chat #6191
R3pl4c3r started this conversation in General
Replies: 1 comment
POST https://.services.ai.azure.com/models/chat/completions?api-version=2024-05-01-preview (the official documentation uses the Authorization header version)
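As a hedged sketch, a call to the endpoint mentioned in the reply might look as follows. The resource host is elided in the original and shown here as the placeholder `<resource>`; the token variable and request body are assumptions for illustration:

```shell
# Sketch only: <resource>, <deployment-name>, and $TOKEN are placeholders,
# not values from the original discussion.
curl -X POST \
  "https://<resource>.services.ai.azure.com/models/chat/completions?api-version=2024-05-01-preview" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "<deployment-name>",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```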
Guide to Integrating Azure AI Model Inference with Lobe Chat
1️⃣ Differences in Authentication Methods
When using the Azure AI Model Inference API, note that its authentication header differs from OpenAI's:

api-key: <your-api-key>

(instead of OpenAI's Authorization: Bearer <token> format)

2️⃣ Configuring Azure Permissions
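A hedged sketch of granting access with the Azure CLI; the role name and scope below are assumptions for illustration, not taken from the original discussion:

```shell
# Assumption: the built-in "Cognitive Services User" role grants inference
# access. <principal-id>, <sub-id>, <rg>, and <account> are placeholders.
az role assignment create \
  --assignee "<principal-id>" \
  --role "Cognitive Services User" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.CognitiveServices/accounts/<account>"
```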
3️⃣ Obtaining Access Tokens
Execute the following commands via Azure Cloud Shell:
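As a hedged example of obtaining a token in Cloud Shell with the standard Azure CLI command (the --resource value is an assumption; adjust it to your endpoint's audience):

```shell
# Sketch: request an access token for a Cognitive Services resource and
# print only the token string.
az account get-access-token \
  --resource https://cognitiveservices.azure.com \
  --query accessToken -o tsv
```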
4️⃣ Deploying and Configuring one-api
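A hedged deployment sketch for one-api with Docker; the image name, port, and volume path are assumptions, so check the one-api README for the current values:

```shell
# Sketch: run one-api locally on port 3000 (image name is an assumption).
docker run -d --name one-api \
  -p 3000:3000 \
  -v ./one-api-data:/data \
  justsong/one-api
```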
The xxx in "model=xxx" is the model name displayed in AI Foundry.

5️⃣ Integrating with Lobe Chat
Set the API endpoint to http://localhost:3000/v1 (the local one-api address).
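As a hedged sketch, pointing a self-hosted Lobe Chat at the local one-api proxy might look like this; the environment variable names, port, and image are assumptions based on common Lobe Chat deployments, so verify them against the Lobe Chat documentation:

```shell
# Sketch: variable names and image are assumptions; <one-api-token> is a
# placeholder for the token issued by your one-api instance.
docker run -d --name lobe-chat \
  -p 3210:3210 \
  -e OPENAI_PROXY_URL=http://localhost:3000/v1 \
  -e OPENAI_API_KEY=<one-api-token> \
  lobehub/lobe-chat
```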