
feat: add Qiniu provider support#8025

Open
JackChiang233 wants to merge 2 commits into AstrBotDevs:master from JackChiang233:feat/add-qiniu-provider

Conversation


@JackChiang233 JackChiang233 commented May 6, 2026

Summary

Added support for Qiniu AI (七牛云) as a new model provider.

Modifications

  • astrbot/core/config/default.py - Adds the default Qiniu provider template in the global provider config schema.
  • astrbot/core/provider/manager.py - Registers dynamic import handling for the qiniu_chat_completion provider type.
  • astrbot/core/provider/sources/qiniu_source.py - Implements the Qiniu provider adapter with model-list fallback behavior.
  • dashboard/src/composables/useProviderSources.ts - Wires provider-source UI logic to recognize and manage Qiniu source entries.
  • dashboard/src/utils/providerUtils.js - Provides the Qiniu icon mapping used by provider-related dashboard views.
  • tests/test_openai_source.py - Adds regression tests covering Qiniu model listing fallback scenarios.


  • This is NOT a breaking change.
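The manager.py change registers a new provider type for dynamic import. A minimal sketch of how such a registry might look, assuming a type-string-to-module mapping (the names and module paths here are illustrative, not AstrBot's actual API):

```python
# Hypothetical sketch of dynamic provider-type registration; the registry
# layout and resolve_provider() helper are illustrative assumptions.
import importlib

# Map each provider type string to the module and class implementing it.
PROVIDER_REGISTRY = {
    "openai_chat_completion": ("sources.openai_source", "ProviderOpenAIOfficial"),
    "qiniu_chat_completion": ("sources.qiniu_source", "ProviderQiniu"),
}


def resolve_provider(provider_type: str):
    """Dynamically import and return the provider class for a type string."""
    module_path, class_name = PROVIDER_REGISTRY[provider_type]
    module = importlib.import_module(module_path)
    return getattr(module, class_name)
```

Because `qiniu_chat_completion` reuses the OpenAI-compatible implementation, registration is the only runtime wiring the new type needs besides the adapter module itself.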

Testing

```
pytest tests --tb=short
# 1206 passed, 0 failed

pytest tests/test_openai_source.py --tb=short
# 50 passed, 0 failed
```

Screenshots or Test Results

(Screenshots: 2026-05-06 104221, 2026-05-06 104206)

Checklist

  • 😊 If the PR adds new features, I have discussed them with the authors via issues, email, etc.

  • 👀 My changes are well tested, and "Verification Steps" and "Screenshots" are provided above.

  • 🤓 I have ensured that no new dependencies are introduced, or, if new dependencies are introduced, they have been added to the appropriate locations in requirements.txt and pyproject.toml.

  • 😮 My changes do not introduce malicious code.

Summary by Sourcery

Add Qiniu as an OpenAI-compatible chat completion provider with configuration, runtime, and dashboard support.

New Features:

  • Introduce a Qiniu chat completion provider adapter that reuses the OpenAI-compatible implementation.
  • Add default Qiniu provider configuration and dashboard icon mapping so Qiniu can be managed from the UI.

Enhancements:

  • Register the Qiniu chat completion provider type in the dynamic provider manager and map it to the generic chat_completion type in legacy provider-type handling.

Tests:

  • Extend provider source tests to cover Qiniu model listing fallback behavior when the remote API is unavailable or returns no models.

@auto-assign auto-assign Bot requested review from LIghtJUNction and anka-afk May 6, 2026 03:10
@dosubot dosubot Bot added size:M This PR changes 30-99 lines, ignoring generated files. area:provider The bug / feature is about AI Provider, Models, LLM Agent, LLM Agent Runner. area:webui The bug / feature is about webui(dashboard) of astrbot. labels May 6, 2026

@sourcery-ai sourcery-ai Bot left a comment


Hey - I've found 2 issues and left some high-level feedback:

  • The hard-coded fallback model list in ProviderQiniu.get_models (["deepseek-v3"]) would be more flexible if sourced from provider_config or settings so deployments can adjust it without code changes.
  • The new Qiniu-specific tests currently live in tests/test_openai_source.py; consider moving them into a dedicated test_qiniu_source.py to keep provider-specific behavior isolated and the test suite easier to navigate.
## Individual Comments

### Comment 1
<location path="astrbot/core/provider/sources/qiniu_source.py" line_range="12-21" />
<code_context>
+    "Qiniu Chat Completion Provider Adapter",
+)
+class ProviderQiniu(ProviderOpenAIOfficial):
+    async def get_models(self):
+        try:
+            models = await super().get_models()
+            if models:
+                return models
+        except Exception as e:
+            logger.debug(
+                "Qiniu 列举模型不可用,退回占位列表: %s",
+                e,
+                exc_info=True,
+            )
+        return ["deepseek-v3"]
</code_context>
<issue_to_address>
**issue (bug_risk):** Catching `Exception` broadly here may hide real integration issues with Qiniu.

The blanket `except Exception` will turn any upstream failure (auth issues, API/HTTP errors, client bugs) into the placeholder list, making real Qiniu problems hard to detect. Please catch only the expected error types from the Qiniu/OpenAI client, and/or log at warning/error level when the fallback is used so operational issues are visible.
</issue_to_address>
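The reviewer's point above can be sketched as follows. This is an illustrative stand-alone version of the fallback logic, not the PR's actual code: the stand-in exception classes approximate the OpenAI client's error types (e.g. `APIConnectionError`, `APIStatusError`), and the function takes the listing callable as a parameter so the control flow is visible in isolation.

```python
# Sketch of the suggested narrowing: catch only expected client errors and
# log the fallback at warning level so operational issues stay visible.
# The exception classes are stand-ins for the real client's error types.
import logging

logger = logging.getLogger("qiniu_provider")


class APIConnectionError(Exception):
    """Stand-in for the client's connection error type."""


class APIStatusError(Exception):
    """Stand-in for the client's HTTP status error type."""


EXPECTED_ERRORS = (APIConnectionError, APIStatusError)
FALLBACK_MODELS = ["deepseek-v3"]


def get_models_with_fallback(list_models):
    """Return list_models(); on an expected client error or an empty
    result, fall back to the placeholder list. Unexpected exceptions
    (e.g. programming bugs) propagate instead of being swallowed."""
    try:
        models = list_models()
        if models:
            return models
    except EXPECTED_ERRORS as e:
        # Warning level, not debug: the fallback should show up in logs.
        logger.warning("Qiniu model listing unavailable, using fallback: %s", e)
    return FALLBACK_MODELS
```

With a blanket `except Exception`, an authentication failure or a client bug would silently become the placeholder list; narrowing the tuple makes those surface as real errors.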

### Comment 2
<location path="astrbot/core/provider/sources/qiniu_source.py" line_range="23" />
<code_context>
+                e,
+                exc_info=True,
+            )
+        return ["deepseek-v3"]
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Hard-coding the fallback model ID may make this adapter brittle if Qiniu’s default model changes.

Using a hard-coded "deepseek-v3" couples this adapter to a specific model name; if Qiniu renames or deprecates it, the UI will still offer it and calls may fail at runtime. Prefer deriving the fallback from configuration (e.g., the default provider config) or at least defining it as a module-level constant so it’s easy to update without touching control flow.

Suggested implementation:

```python
# Default fallback model for Qiniu when listing models fails or returns empty.
# Keep this in sync with Qiniu's documented defaults or provider configuration.
DEFAULT_QINIU_FALLBACK_MODEL = "deepseek-v3"

        try:
            models = await super().get_models()
            if models:
                return models
        except Exception as e:
            logger.debug(
                "Qiniu 列举模型不可用,退回占位列表: %s",
                e,
                exc_info=True,
            )
        return [DEFAULT_QINIU_FALLBACK_MODEL]

```

If this module already defines constants or has a configuration-driven default model, you may want to:
1. Move `DEFAULT_QINIU_FALLBACK_MODEL` to that existing configuration/constant section instead of directly above this method.
2. Optionally derive `DEFAULT_QINIU_FALLBACK_MODEL` from provider configuration (e.g., `settings.qiniu.default_model`) and have this constant simply reference that value.
</issue_to_address>
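Going one step further than the module-level constant, the fallback could be read from the provider configuration itself. A minimal sketch, assuming a hypothetical `fallback_models` config key (not an existing AstrBot setting):

```python
# Illustrative only: derive the fallback list from provider_config rather
# than hard-coding it. The "fallback_models" key is a hypothetical name.
DEFAULT_QINIU_FALLBACK_MODEL = "deepseek-v3"


def fallback_models(provider_config: dict) -> list[str]:
    """Prefer an explicit fallback list from config, then the configured
    default model, then the module-level constant."""
    configured = provider_config.get("fallback_models")
    if configured:
        return list(configured)
    default = provider_config.get("model") or DEFAULT_QINIU_FALLBACK_MODEL
    return [default]
```

This keeps deployments able to track Qiniu's model catalog without a code change, which addresses both review comments at once.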



@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request introduces support for the Qiniu Chat Completion provider. Key changes include adding Qiniu to the default configuration, implementing the ProviderQiniu class with a fallback mechanism for model listing, and updating the dashboard with the appropriate provider mapping and icon. Feedback suggests improving consistency by using the @latest tag for the Qiniu icon CDN URL and refactoring the new unit tests to use a helper function for provider setup to reduce code duplication.

Comment thread on dashboard/src/utils/providerUtils.js (outdated)
Comment on lines +340 to +385

```python
async def test_qiniu_get_models_fallback_on_failure(monkeypatch):
    provider = ProviderQiniu(
        provider_config={
            "id": "test-qiniu",
            "type": "qiniu_chat_completion",
            "model": "deepseek-v3",
            "key": ["k"],
            "api_base": "https://api.qnaigc.com/v1",
        },
        provider_settings={},
    )
    try:

        async def fail_list():
            raise RuntimeError("unavailable")

        monkeypatch.setattr(provider.client.models, "list", fail_list)
        models = await provider.get_models()
        assert models == ["deepseek-v3"]
    finally:
        await provider.terminate()


@pytest.mark.asyncio
async def test_qiniu_get_models_fallback_when_empty(monkeypatch):
    provider = ProviderQiniu(
        provider_config={
            "id": "test-qiniu-empty",
            "type": "qiniu_chat_completion",
            "model": "deepseek-v3",
            "key": ["k"],
            "api_base": "https://api.qnaigc.com/v1",
        },
        provider_settings={},
    )
    try:

        async def empty_list():
            return SimpleNamespace(data=[])

        monkeypatch.setattr(provider.client.models, "list", empty_list)
        models = await provider.get_models()
        assert models == ["deepseek-v3"]
    finally:
        await provider.terminate()
```
Severity: medium

There is some code duplication in the setup for test_qiniu_get_models_fallback_on_failure and test_qiniu_get_models_fallback_when_empty. To improve maintainability and follow the pattern used elsewhere in this file (like _make_provider), you could extract the provider creation logic into a new helper function (e.g., _make_qiniu_provider). This aligns with the repository rule to refactor similar functionality into shared helper functions to avoid code duplication.

References
  1. When implementing similar functionality for different cases (e.g., direct vs. quoted attachments), refactor the logic into a shared helper function to avoid code duplication.

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
