✨ feat: add new model provider PPIO (lobehub#6133)
* feat: add new model provider PPIO

* feat: add usage docs; fix model configs

* fix: fix ppio runtime; fix model configs

* fix: fix default model list

* fix

* fix: fix locales providers.json

---------

Co-authored-by: Jason <[email protected]>
Co-authored-by: arvinxx <[email protected]>
3 people authored Mar 3, 2025
1 parent f472643 commit 23a3fda
Showing 25 changed files with 1,107 additions and 3 deletions.
4 changes: 4 additions & 0 deletions .env.example
@@ -127,6 +127,10 @@ OPENAI_API_KEY=sk-xxxxxxxxx

# TENCENT_CLOUD_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

### PPIO ####

# PPIO_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

########################################
############ Market Service ############
########################################
2 changes: 2 additions & 0 deletions Dockerfile
@@ -197,6 +197,8 @@ ENV \
OPENROUTER_API_KEY="" OPENROUTER_MODEL_LIST="" \
# Perplexity
PERPLEXITY_API_KEY="" PERPLEXITY_MODEL_LIST="" PERPLEXITY_PROXY_URL="" \
# PPIO
PPIO_API_KEY="" PPIO_MODEL_LIST="" \
# Qwen
QWEN_API_KEY="" QWEN_MODEL_LIST="" QWEN_PROXY_URL="" \
# SambaNova
2 changes: 2 additions & 0 deletions Dockerfile.database
@@ -240,6 +240,8 @@ ENV \
OPENROUTER_API_KEY="" OPENROUTER_MODEL_LIST="" \
# Perplexity
PERPLEXITY_API_KEY="" PERPLEXITY_MODEL_LIST="" PERPLEXITY_PROXY_URL="" \
# PPIO
PPIO_API_KEY="" PPIO_MODEL_LIST="" \
# Qwen
QWEN_API_KEY="" QWEN_MODEL_LIST="" QWEN_PROXY_URL="" \
# SambaNova
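Both Dockerfiles and `.env.example` above introduce the same pair of variables. The snippet below is a minimal sketch of how they might be read at runtime; it is not the repository's actual configuration code.

```ts
// Minimal illustration of reading the new PPIO environment variables;
// this is not the project's actual configuration module.
const ppioEnv = {
  apiKey: process.env.PPIO_API_KEY ?? '',
  modelList: process.env.PPIO_MODEL_LIST ?? '',
};

// The provider can only be enabled when a key is present.
if (!ppioEnv.apiKey) {
  console.warn('PPIO_API_KEY is not set; the PPIO provider will remain disabled.');
}

export default ppioEnv;
```

When deploying with Docker, passing `-e PPIO_API_KEY=...` (and optionally `-e PPIO_MODEL_LIST=...`) populates these values the same way a local `.env` file does.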
5 changes: 3 additions & 2 deletions README.md
@@ -198,6 +198,7 @@ We have implemented support for the following model service providers:

<details><summary><kbd>See more providers (+26)</kbd></summary>

- **[PPIO](https://lobechat.com/discover/provider/ppio)**: PPIO offers stable and cost-efficient open-source LLM APIs, supporting models such as DeepSeek, Llama, and Qwen. [Learn more](https://ppinfra.com/llm-api?utm_source=github_lobe-chat&utm_medium=github_readme&utm_campaign=link)
- **[Novita](https://lobechat.com/discover/provider/novita)**: Novita AI is a platform providing a variety of large language models and AI image generation API services, flexible, reliable, and cost-effective. It supports the latest open-source models like Llama3 and Mistral, offering a comprehensive, user-friendly, and auto-scaling API solution for generative AI application development, suitable for the rapid growth of AI startups.
- **[Together AI](https://lobechat.com/discover/provider/togetherai)**: Together AI is dedicated to achieving leading performance through innovative AI models, offering extensive customization capabilities, including rapid scaling support and intuitive deployment processes to meet various enterprise needs.
- **[Fireworks AI](https://lobechat.com/discover/provider/fireworksai)**: Fireworks AI is a leading provider of advanced language model services, focusing on functional calling and multimodal processing. Its latest model, Firefunction V2, is based on Llama-3, optimized for function calling, conversation, and instruction following. The visual language model FireLLaVA-13B supports mixed input of images and text. Other notable models include the Llama series and Mixtral series, providing efficient multilingual instruction following and generation support.
@@ -668,7 +669,7 @@ If you would like to learn more details, please feel free to look at our [📘 D

## 🤝 Contributing

- Contributions of all types are more than welcome; if you are interested in contributing code, feel free to check out our GitHub [Issues][github-issues-link] and [Projects][github-project-link] to get stuck in to show us what youre made of.
+ Contributions of all types are more than welcome; if you are interested in contributing code, feel free to check out our GitHub [Issues][github-issues-link] and [Projects][github-project-link] to get stuck in to show us what you're made of.

> \[!TIP]
>
@@ -889,7 +890,7 @@ This project is [Apache 2.0](./LICENSE) licensed.
[profile-link]: https://github.com/lobehub
[share-linkedin-link]: https://linkedin.com/feed
[share-linkedin-shield]: https://img.shields.io/badge/-share%20on%20linkedin-black?labelColor=black&logo=linkedin&logoColor=white&style=flat-square
- [share-mastodon-link]: https://mastodon.social/share?text=Check%20this%20GitHub%20repository%20out%20%F0%9F%A4%AF%20LobeChat%20-%20An%20open-source,%20extensible%20(Function%20Calling),%20high-performance%20chatbot%20framework.%20It%20supports%20one-click%20free%20deployment%20of%20your%20private%20ChatGPT/LLM%20web%20application.%20https://github.com/lobehub/lobe-chat%20#chatbot%20#chatGPT%20#openAI
+ [share-mastodon-link]: https://mastodon.social/share?text=Check%20this%20GitHub%20repository%20out%20%F0%9F%A4%AF%20LobeChat%20-%20An%20open-source,%20extensible%20%28Function%20Calling%29,%20high-performance%20chatbot%20framework.%20It%20supports%20one-click%20free%20deployment%20of%20your%20private%20ChatGPT%2FLLM%20web%20application.%20https://github.com/lobehub/lobe-chat%20#chatbot%20#chatGPT%20#openAI
[share-mastodon-shield]: https://img.shields.io/badge/-share%20on%20mastodon-black?labelColor=black&logo=mastodon&logoColor=white&style=flat-square
[share-reddit-link]: https://www.reddit.com/submit?title=Check%20this%20GitHub%20repository%20out%20%F0%9F%A4%AF%20LobeChat%20-%20An%20open-source%2C%20extensible%20%28Function%20Calling%29%2C%20high-performance%20chatbot%20framework.%20It%20supports%20one-click%20free%20deployment%20of%20your%20private%20ChatGPT%2FLLM%20web%20application.%20%23chatbot%20%23chatGPT%20%23openAI&url=https%3A%2F%2Fgithub.com%2Flobehub%2Flobe-chat
[share-reddit-shield]: https://img.shields.io/badge/-share%20on%20reddit-black?labelColor=black&logo=reddit&logoColor=white&style=flat-square
2 changes: 1 addition & 1 deletion README.zh-CN.md
@@ -197,7 +197,7 @@ LobeChat 支持文件上传与知识库功能,你可以上传文件、图片
- **[GitHub](https://lobechat.com/discover/provider/github)**: 通过 GitHub 模型,开发人员可以成为 AI 工程师,并使用行业领先的 AI 模型进行构建。

<details><summary><kbd>See more providers (+26)</kbd></summary>

- **[PPIO](https://lobechat.com/discover/provider/ppio)**: PPIO 派欧云提供稳定、高性价比的开源模型 API 服务,支持 DeepSeek 全系列、Llama、Qwen 等行业领先大模型。[了解更多](https://ppinfra.com/llm-api?utm_source=github_lobe-chat&utm_medium=github_readme&utm_campaign=link)
- **[Novita](https://lobechat.com/discover/provider/novita)**: Novita AI 是一个提供多种大语言模型与 AI 图像生成的 API 服务的平台,灵活、可靠且具有成本效益。它支持 Llama3、Mistral 等最新的开源模型,并为生成式 AI 应用开发提供了全面、用户友好且自动扩展的 API 解决方案,适合 AI 初创公司的快速发展。
- **[Together AI](https://lobechat.com/discover/provider/togetherai)**: Together AI 致力于通过创新的 AI 模型实现领先的性能,提供广泛的自定义能力,包括快速扩展支持和直观的部署流程,满足企业的各种需求。
- **[Fireworks AI](https://lobechat.com/discover/provider/fireworksai)**: Fireworks AI 是一家领先的高级语言模型服务商,专注于功能调用和多模态处理。其最新模型 Firefunction V2 基于 Llama-3,优化用于函数调用、对话及指令跟随。视觉语言模型 FireLLaVA-13B 支持图像和文本混合输入。其他 notable 模型包括 Llama 系列和 Mixtral 系列,提供高效的多语言指令跟随与生成支持。
16 changes: 16 additions & 0 deletions docs/self-hosting/environment-variables/model-provider.mdx
@@ -217,6 +217,22 @@ If you need to use Azure OpenAI to provide model services, you can refer to the
- Default: `-`
- Example: `-all,+01-ai/yi-34b-chat,+huggingfaceh4/zephyr-7b-beta`

## PPIO

### `PPIO_API_KEY`

- Type: Required
- Description: This is your PPIO API key.
- Default: -
- Example: `sk_xxxxxxxxxx`

### `PPIO_MODEL_LIST`

- Type: Optional
- Description: Used to control the model list: use `+` to add a model, `-` to hide a model, and `model_name=display_name` to customize a model's display name, separated by commas. For the definition syntax rules, see [model-list][model-list].
- Default: `-`
- Example: `-all,+deepseek/deepseek-v3/community,+deepseek/deepseek-r1-distill-llama-70b`

## Github

### `GITHUB_TOKEN`
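The `PPIO_MODEL_LIST` value documented above follows the same `+` / `-` / `=` conventions as the other providers. The parser below is only an illustration of how such a value breaks down; it is hypothetical, not the project's implementation:

```ts
// Illustrative parser for the PPIO_MODEL_LIST syntax described above.
interface ModelListEntry {
  action: 'add' | 'remove';
  id: string;
  displayName?: string;
}

const parseModelList = (value: string): ModelListEntry[] =>
  value
    .split(',')
    .filter(Boolean)
    .map((raw) => {
      const action = raw.startsWith('-') ? 'remove' : 'add';
      const body = raw.replace(/^[+-]/, '');
      const [id, displayName] = body.split('=');
      return { action, displayName, id };
    });

// The example value from the docs: one `remove` entry for `all`,
// two `add` entries for the DeepSeek models.
console.log(parseModelList('-all,+deepseek/deepseek-v3/community,+deepseek/deepseek-r1-distill-llama-70b'));
```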
16 changes: 16 additions & 0 deletions docs/self-hosting/environment-variables/model-provider.zh-CN.mdx
@@ -215,6 +215,22 @@ LobeChat 在部署时提供了丰富的模型服务商相关的环境变量,
- 默认值:`-`
- 示例:`-all,+01-ai/yi-34b-chat,+huggingfaceh4/zephyr-7b-beta`

## PPIO

### `PPIO_API_KEY`

- 类型:必选
- 描述:这是你在 PPIO 网站申请的 API 密钥
- 默认值:-
- 示例:`sk_xxxxxxxxxxxx`

### `PPIO_MODEL_LIST`

- 类型:可选
- 描述:用来控制模型列表,使用 `+` 增加一个模型,使用 `-` 来隐藏一个模型,使用 `模型名=展示名<扩展配置>` 来自定义模型的展示名,用英文逗号隔开。模型定义语法规则见 [模型列表][model-list]
- 默认值:`-`
- 示例:`-all,+deepseek/deepseek-v3/community,+deepseek/deepseek-r1-distill-llama-70b`

## Github

### `GITHUB_TOKEN`
57 changes: 57 additions & 0 deletions docs/usage/providers/ppio.mdx
@@ -0,0 +1,57 @@
---
title: Using PPIO API Key in LobeChat
description: >-
Learn how to integrate PPIO's language model APIs into LobeChat. Follow the
steps to register, create a PPIO API key, configure settings, and chat with
its various AI models.
tags:
- PPIO
- DeepSeek
- Llama
- Qwen
- uncensored
- API key
- Web UI
---

# Using PPIO in LobeChat

<Image alt={'Using PPIO in LobeChat'} cover src={''} />

[PPIO](https://ppinfra.com?utm_source=github_lobe-chat&utm_medium=github_readme&utm_campaign=link) offers stable and cost-efficient open-source LLM APIs, supporting models such as DeepSeek, Llama, and Qwen.

This document will guide you on how to integrate PPIO in LobeChat:

<Steps>
### Step 1: Register and Log in to PPIO

- Visit [PPIO](https://ppinfra.com?utm_source=github_lobe-chat&utm_medium=github_readme&utm_campaign=link) and create an account
- Upon registration, PPIO will provide a ¥5 credit (about 5M tokens).

<Image alt={'Register PPIO'} height={457} inStep src={'https://github.com/user-attachments/assets/7cb3019b-78c1-48e0-a64c-a6a4836affd9'} />

### Step 2: Obtain the API Key

- Visit PPIO's [key management page](https://ppinfra.com/settings/key-management), then create and copy an API key.

<Image alt={'Obtain PPIO API key'} inStep src={'https://github.com/user-attachments/assets/5abcf21d-5a6c-4fc8-8de6-bc47d4d2fa98'} />

### Step 3: Configure PPIO in LobeChat

- Visit the `Settings` interface in LobeChat
- Find the setting for `PPIO` under `Language Model`

<Image alt={'Enter PPIO API key in LobeChat'} inStep src={'https://github.com/user-attachments/assets/000d6a5b-f8d4-4fd5-84cd-31556c5c1efd'} />

- Enable PPIO and enter the API key you obtained
- Choose a PPIO model for your assistant to start the conversation

<Image alt={'Select and use PPIO model'} inStep src={'https://github.com/user-attachments/assets/207888f1-df21-4063-8e66-97b0d9cfa02e'} />

<Callout type={'warning'}>
During usage, you may need to pay the API service provider; please refer to PPIO's [pricing
policy](https://ppinfra.com/llm-api?utm_source=github_lobe-chat&utm_medium=github_readme&utm_campaign=link).
</Callout>
</Steps>

You can now engage in conversations using the models provided by PPIO in LobeChat.
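If you want to confirm that a key works before entering it in LobeChat, a single request against PPIO's OpenAI-compatible endpoint is enough. The base URL below is an assumption rather than something taken from this commit, and the model id reuses the example from the environment-variable docs; adjust both to whatever your account exposes.

```ts
// Quick sanity check for a PPIO key, assuming an OpenAI-compatible
// chat-completions endpoint; the base URL is an assumption.
const checkPpioKey = async () => {
  const response = await fetch('https://api.ppinfra.com/v3/openai/chat/completions', {
    body: JSON.stringify({
      messages: [{ content: 'Say hello in one short sentence.', role: 'user' }],
      model: 'deepseek/deepseek-v3/community',
    }),
    headers: {
      Authorization: `Bearer ${process.env.PPIO_API_KEY}`,
      'Content-Type': 'application/json',
    },
    method: 'POST',
  });

  if (!response.ok) throw new Error(`PPIO request failed: ${response.status}`);

  const data = await response.json();
  console.log(data.choices?.[0]?.message?.content);
};

checkPpioKey();
```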
55 changes: 55 additions & 0 deletions docs/usage/providers/ppio.zh-CN.mdx
@@ -0,0 +1,55 @@
---
title: 在 LobeChat 中使用 PPIO 派欧云 API Key
description: >-
学习如何将 PPIO 派欧云的 LLM API 集成到 LobeChat 中。跟随以下步骤注册 PPIO 账号、创建 API
Key、并在 LobeChat 中进行设置。
tags:
- PPIO
- PPInfra
- DeepSeek
- Qwen
- Llama3
- API key
- Web UI
---

# 在 LobeChat 中使用 PPIO 派欧云

<Image alt={'在 LobeChat 中使用 PPIO'} cover src={''} />

[PPIO 派欧云](https://ppinfra.com?utm_source=github_lobe-chat&utm_medium=github_readme&utm_campaign=link)提供稳定、高性价比的开源模型 API 服务,支持 DeepSeek 全系列、Llama、Qwen 等行业领先大模型。

本文档将指导你如何在 LobeChat 中使用 PPIO:

<Steps>
### 步骤一:注册 PPIO 派欧云账号并登录

- 访问 [PPIO 派欧云](https://ppinfra.com?utm_source=github_lobe-chat&utm_medium=github_readme&utm_campaign=link) 并注册账号
- 注册后,PPIO 会赠送 5 元(约 500 万 tokens)的使用额度

<Image alt={'注册 PPIO'} height={457} inStep src={'https://github.com/user-attachments/assets/7cb3019b-78c1-48e0-a64c-a6a4836affd9'} />

### 步骤二:创建 API 密钥

- 访问 PPIO 派欧云的 [密钥管理页面](https://ppinfra.com/settings/key-management),创建并且复制一个 API 密钥。

<Image alt={'创建 PPIO API 密钥'} inStep src={'https://github.com/user-attachments/assets/5abcf21d-5a6c-4fc8-8de6-bc47d4d2fa98'} />

### 步骤三:在 LobeChat 中配置 PPIO 派欧云

- 访问 LobeChat 的 `设置` 界面
- 在 `语言模型` 下找到 `PPIO` 的设置项
- 打开 PPIO 并填入获得的 API 密钥

<Image alt={'在 LobeChat 中输入 PPIO API 密钥'} inStep src={'https://github.com/user-attachments/assets/4eaadac7-595c-41ad-a6e0-64c3105577d7'} />

- 为你的助手选择一个 PPIO 模型即可开始对话

<Image alt={'选择并使用 PPIO 模型'} inStep src={'https://github.com/user-attachments/assets/8cf66e00-04fe-4bad-9e3d-35afc7d9aa58'} />

<Callout type={'warning'}>
在使用过程中你可能需要向 API 服务提供商付费,PPIO 的 API 费用参考[这里](https://ppinfra.com/llm-api?utm_source=github_lobe-chat&utm_medium=github_readme&utm_campaign=link)。
</Callout>
</Steps>

至此你已经可以在 LobeChat 中使用 PPIO 派欧云提供的模型进行对话了。
3 changes: 3 additions & 0 deletions locales/en-US/providers.json
@@ -139,5 +139,8 @@
},
"zhipu": {
"description": "Zhipu AI offers an open platform for multimodal and language models, supporting a wide range of AI application scenarios, including text processing, image understanding, and programming assistance."
},
"ppio": {
"description": "PPIO supports stable and cost-efficient open-source LLM APIs, such as DeepSeek, Llama, Qwen etc."
}
}
4 changes: 4 additions & 0 deletions locales/zh-CN/providers.json
@@ -139,5 +139,9 @@
},
"zhipu": {
"description": "智谱 AI 提供多模态与语言模型的开放平台,支持广泛的AI应用场景,包括文本处理、图像理解与编程辅助等。"
},
"ppio": {
"description": "PPIO 派欧云提供稳定、高性价比的开源模型 API 服务,支持 DeepSeek 全系列、Llama、Qwen 等行业领先大模型。"
}
}

Expand Up @@ -21,6 +21,7 @@ import {
NvidiaProviderCard,
OpenRouterProviderCard,
PerplexityProviderCard,
PPIOProviderCard,
QwenProviderCard,
SambaNovaProviderCard,
SenseNovaProviderCard,
@@ -98,6 +99,7 @@ export const useProviderList = (): ProviderItem[] => {
SiliconCloudProviderCard,
HigressProviderCard,
GiteeAIProviderCard,
PPIOProviderCard,
],
[
AzureProvider,
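For context, a provider card such as the `PPIOProviderCard` registered above is essentially a static descriptor that the settings UI renders. The shape below is a hypothetical sketch with illustrative field names, not the project's actual provider type:

```ts
// Hypothetical descriptor for a provider entry; field names are illustrative
// and may not match the project's actual provider card type.
interface ProviderCardSketch {
  checkModel: string; // model used for the connectivity check
  description: string;
  id: string;
  name: string;
  url: string;
}

const ppioProviderCardSketch: ProviderCardSketch = {
  checkModel: 'deepseek/deepseek-r1-distill-llama-70b',
  description:
    'PPIO offers stable and cost-efficient open-source LLM APIs, supporting models such as DeepSeek, Llama, and Qwen.',
  id: 'ppio',
  name: 'PPIO',
  url: 'https://ppinfra.com',
};

export default ppioProviderCardSketch;
```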
3 changes: 3 additions & 0 deletions src/config/aiModels/index.ts
@@ -30,6 +30,7 @@ import { default as ollama } from './ollama';
import { default as openai } from './openai';
import { default as openrouter } from './openrouter';
import { default as perplexity } from './perplexity';
import { default as ppio } from './ppio';
import { default as qwen } from './qwen';
import { default as sambanova } from './sambanova';
import { default as sensenova } from './sensenova';
@@ -98,6 +99,7 @@ export const LOBE_DEFAULT_MODEL_LIST = buildDefaultModelList({
openai,
openrouter,
perplexity,
ppio,
qwen,
sambanova,
sensenova,
@@ -147,6 +149,7 @@ export { default as ollama } from './ollama';
export { default as openai } from './openai';
export { default as openrouter } from './openrouter';
export { default as perplexity } from './perplexity';
export { default as ppio } from './ppio';
export { default as qwen } from './qwen';
export { default as sambanova } from './sambanova';
export { default as sensenova } from './sensenova';
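The `./ppio` module imported above is not shown in this excerpt. A plausible shape, sketched here with an illustrative type rather than the project's actual schema, is a default-exported list of model cards:

```ts
// Hypothetical sketch of src/config/aiModels/ppio.ts; the type and field
// values are illustrative, not copied from the actual commit.
interface ChatModelCardSketch {
  contextWindowTokens: number;
  displayName: string;
  enabled: boolean;
  id: string;
}

const ppioChatModels: ChatModelCardSketch[] = [
  {
    contextWindowTokens: 64_000, // assumed value for illustration
    displayName: 'DeepSeek V3 (Community)',
    enabled: true,
    id: 'deepseek/deepseek-v3/community',
  },
  {
    contextWindowTokens: 32_000, // assumed value for illustration
    displayName: 'DeepSeek R1 Distill Llama 70B',
    enabled: true,
    id: 'deepseek/deepseek-r1-distill-llama-70b',
  },
];

export default ppioChatModels;
```

With a default export of that form, the `buildDefaultModelList({ ..., ppio, ... })` call shown in the hunk above can fold the PPIO entries into `LOBE_DEFAULT_MODEL_LIST` alongside the other providers.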
