Initial commit: FastGPT Python SDK Phase 1
Implement core infrastructure:
- BaseClientMixin with retry logic and validation
- FastGPTClient base class with httpx
- ChatClient with 11 chat operation methods
- AppClient for analytics and logs
- Custom exceptions (APIError, AuthenticationError, etc.)
- Package configuration (pyproject.toml, setup.py)
- Documentation (README.md, CLAUDE.md)
- Basic usage examples

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
88 .gitignore vendored Normal file
@@ -0,0 +1,88 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
Pipfile.lock

# PyInstaller
*.manifest
*.spec

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# IDEs
.vscode/
.idea/
*.swp
*.swo
*~

# OS
.DS_Store
Thumbs.db

# Project specific
*.log
*.db
*.sqlite

# Python
.pytest_cache/
.mypy_cache/
.dmypy.json
dmypy.json
.pyre/
.pytype/

# Jupyter Notebook
.ipynb_checkpoints

# Ruff
.ruff_cache/

# uv
.uv_cache/
194 CLAUDE.md Normal file
@@ -0,0 +1,194 @@
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Project Overview

This is the **FastGPT Python SDK**, a Python client library for interacting with FastGPT's OpenAPI. The SDK is designed following the architecture patterns from the [Dify Python SDK](https://github.com/langgenius/dify-python-sdk), adapted for FastGPT's API structure.

### Key Design Principles

1. **Base Client + Specialized Clients**: A `FastGPTClient` base class handles HTTP communication, retry logic, error handling, and validation. Specialized clients (`ChatClient`, `AppClient`) inherit from it.

2. **Synchronous + Asynchronous**: All clients have async variants using `httpx.AsyncClient`.

3. **Context Manager Support**: Clients should be usable with `with` statements for automatic resource cleanup.

4. **OpenAI-Compatible API**: FastGPT's `/api/v1/chat/completions` endpoint is OpenAI-compatible, but with FastGPT-specific extensions.

## API Architecture

### FastGPT API Structure

- **Base URL**: User-configured (default: `http://localhost:3000`)
- **Authentication**: `Authorization: Bearer {api_key}`
- **Chat Completions**: `/api/v1/chat/completions` (OpenAI-compatible)
- **Chat History**: `/api/core/chat/*` endpoints
- **App Analytics**: `/api/proApi/core/app/logs/*` endpoints

### Key Concepts

- **chatId**: Identifier for a conversation window (similar to Dify's `conversation_id`)
- **dataId**: Identifier for a specific message within a chat
- **appId**: Application identifier (from the FastGPT app details URL)
- **variables**: Template variables that replace `[key]` placeholders in workflows
- **detail mode**: When `detail=true`, responses include `responseData` with module-level execution details

### SSE Event Types

FastGPT uses Server-Sent Events with multiple event types:

- `answer` - Main chat response content
- `fastAnswer` - Quick reply content
- `flowNodeStatus` - Workflow node status (`running`, `completed`, `error`)
- `flowResponses` - Complete node response data (module execution details)
- `interactive` - Interactive node (user input/form selection)
- `updateVariables` - Variable updates during execution
- `error` - Error events
- `toolCall`, `toolParams`, `toolResponse` - Tool/agent operations
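These named events can be demultiplexed with a small parser. The sketch below is illustrative only (the SDK's own parsing lives in `utils/response_parser.py`); the sample payloads are invented:

```python
import json

def parse_sse_events(lines):
    """Yield (event, payload) pairs from raw SSE lines.

    Pairs each `event:` name with the `data:` line that follows it,
    falling back to the SSE default event name when none is given.
    """
    event = "message"  # SSE default event name
    for line in lines:
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data = line[len("data:"):].strip()
            if data and data != "[DONE]":
                yield event, json.loads(data)
            event = "message"  # reset for the next event

# Invented fragment of a stream, for illustration:
raw = [
    "event: flowNodeStatus",
    'data: {"status": "running", "name": "AI Chat"}',
    "event: answer",
    'data: {"choices": [{"delta": {"content": "Hi"}}]}',
]
events = list(parse_sse_events(raw))
```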
## Project Structure

```
fastgpt-python-sdk/
├── fastgpt_client/
│   ├── __init__.py          # Export all clients
│   ├── client.py            # Base FastGPTClient
│   ├── async_client.py      # AsyncFastGPTClient + async variants
│   ├── base_client.py       # BaseClientMixin (retry, validation)
│   ├── chat_client.py       # ChatClient (sync)
│   ├── app_client.py        # AppClient for analytics/logs
│   ├── exceptions.py        # Custom exceptions
│   └── utils/
│       ├── validation.py        # Parameter validation
│       ├── response_parser.py   # Parse SSE events
│       └── types.py             # Type definitions
├── tests/
├── examples/
├── setup.py
├── pyproject.toml
└── README.md
```

## Client Classes

### FastGPTClient (Base)

- Uses `httpx.Client` for connection pooling
- Implements retry logic with configurable `max_retries` and `retry_delay`
- Custom exceptions: `APIError`, `AuthenticationError`, `RateLimitError`, `ValidationError`
- Parameter validation via `_validate_params()`
- Methods: `_send_request()`, `_handle_error_response()`, `close()`
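The retry schedule implied by `max_retries` and `retry_delay` is exponential. A quick way to see the resulting sleep durations (a sketch mirroring `_retry_request`, which sleeps `retry_delay * 2 ** attempt` before every retry and never after the final attempt):

```python
def backoff_delays(retry_delay: float, max_retries: int) -> list[float]:
    """Sleep durations before each retry; no sleep follows the last attempt."""
    return [retry_delay * (2 ** attempt) for attempt in range(max_retries - 1)]

# With the defaults (retry_delay=1.0, max_retries=3) the client
# sleeps 1s, then 2s, between its three attempts.
delays = backoff_delays(1.0, 3)  # → [1.0, 2.0]
```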
### ChatClient

Inherits from `FastGPTClient`. Key methods:

- `create_chat_completion()` - Send messages (blocking/streaming), supports `chatId`, `variables`, `detail`
- `get_chat_histories()` - List chat histories for an app
- `get_chat_init()` - Get chat initialization info
- `get_chat_records()` - Get messages for a specific chat
- `get_record_detail()` - Get execution details for a message
- `update_chat_history()` - Update title or pin/unpin
- `delete_chat_history()` - Delete a chat
- `clear_chat_histories()` - Clear all histories
- `delete_chat_record()` - Delete a single message
- `send_feedback()` - Like/dislike a message
- `get_suggested_questions()` - Generate suggested questions

### AppClient

Inherits from `FastGPTClient`:

- `get_app_logs_chart()` - Analytics data (users, chats, app metrics)
- `get_app_info()` - App metadata

### Async Variants

- `AsyncFastGPTClient`, `AsyncChatClient`, `AsyncAppClient`
- All methods are `async def` and use `await`
- Use `async with` for context manager support

## Development Commands

```bash
# Install package in development mode
pip install -e .

# Run tests
pytest

# Run a single test file
pytest tests/test_chat_client.py

# Run with coverage
pytest --cov=fastgpt_client

# Lint with ruff
ruff check fastgpt_client/

# Format with ruff
ruff format fastgpt_client/
```

## Key Differences from Dify SDK

| Aspect | Dify SDK | FastGPT SDK |
|--------|----------|-------------|
| Chat ID | `conversation_id` | `chatId` |
| Message input | `inputs` + `query` | `messages` array (OpenAI format) |
| Variables | `inputs` dict | `variables` dict + `messages` |
| Streaming events | Single `data:` type | Multiple `event:` types |
| Detail mode | N/A | `detail=true` returns `responseData` |
| Response format | Custom Dify format | OpenAI-compatible (`choices`, `usage`) |

## Request/Response Patterns

### Chat Completion Request

```python
{
    "chatId": "optional_chat_id",   # Omit for stateless, provide for context
    "stream": False,
    "detail": False,
    "variables": {"key": "value"},  # Template variable substitution
    "messages": [
        {"role": "user", "content": "Hello"}
    ]
}
```

### Chat Completion Response (blocking, detail=false)

OpenAI-compatible format with `choices`, `usage`, `id`, `model`.

### Chat Completion Response (detail=true)

Includes a `responseData` array with module execution details:

- `moduleName` - Node name
- `moduleType` - Node type (chatNode, datasetSearchNode, etc.)
- `tokens`, `price`, `runningTime`
- `quoteList` - Knowledge base citations
- `completeMessages` - Full context
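As an illustration of consuming `responseData`, a small helper might summarize each module. This is a sketch only; the field names follow the list above and the sample payload is invented:

```python
def summarize_modules(response_data: list[dict]) -> list[str]:
    """One human-readable line per executed workflow module."""
    return [
        f"{m.get('moduleName', '?')} ({m.get('moduleType', '?')}): "
        f"{m.get('runningTime', 0)}s, {len(m.get('quoteList', []))} quotes"
        for m in response_data
    ]

# Invented detail=true payload fragment:
sample = [
    {"moduleName": "KB Search", "moduleType": "datasetSearchNode",
     "runningTime": 0.4, "quoteList": [{"q": "...", "a": "..."}]},
    {"moduleName": "AI Chat", "moduleType": "chatNode", "runningTime": 2.1},
]
summary = summarize_modules(sample)
```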
### Interactive Node Response

When the workflow hits an interactive node:

```python
{
    "interactive": {
        "type": "userSelect" | "userInput",
        "params": {
            "description": "...",
            "userSelectOptions": [...],  # for userSelect
            "inputForm": [...]           # for userInput
        }
    }
}
```

## References

- [FastGPT Chat API Documentation](https://doc.fastgpt.io/docs/introduction/development/openapi/chat)
- [FastGPT App Logs Documentation](https://doc.fastgpt.io/docs/introduction/development/openapi/app)
- [Dify Python SDK](https://github.com/langgenius/dify-python-sdk) (architecture reference)
183 README.md Normal file
@@ -0,0 +1,183 @@
# FastGPT Python SDK

Python SDK for the FastGPT OpenAPI.

## Installation

```bash
pip install fastgpt-client
```

## Quick Start

### Basic Chat Completion

```python
from fastgpt_client import ChatClient

# Initialize client
with ChatClient(api_key="fastgpt-xxxxx", base_url="http://localhost:3000") as client:
    # Send a message
    response = client.create_chat_completion(
        messages=[{"role": "user", "content": "Hello!"}],
        stream=False
    )
    response.raise_for_status()
    result = response.json()
    print(result['choices'][0]['message']['content'])
```

### Streaming Chat

```python
import json

with ChatClient(api_key="fastgpt-xxxxx") as client:
    response = client.create_chat_completion(
        messages=[{"role": "user", "content": "Tell me a story"}],
        stream=True
    )

    for line in response.iter_lines():
        if line.startswith("data:"):
            data = line[5:].strip()
            if data and data != "[DONE]":
                chunk = json.loads(data)
                if "choices" in chunk and chunk["choices"]:
                    delta = chunk["choices"][0].get("delta", {})
                    content = delta.get("content", "")
                    if content:
                        print(content, end="", flush=True)
```

### Chat with Context (chatId)

```python
with ChatClient(api_key="fastgpt-xxxxx") as client:
    # First message
    response = client.create_chat_completion(
        messages=[{"role": "user", "content": "What's AI?"}],
        chatId="my_conversation_123",
        stream=False
    )

    # Second message (continues the conversation)
    response = client.create_chat_completion(
        messages=[{"role": "user", "content": "Tell me more"}],
        chatId="my_conversation_123",  # Same chatId
        stream=False
    )
```

### Using Variables

```python
with ChatClient(api_key="fastgpt-xxxxx") as client:
    response = client.create_chat_completion(
        messages=[{"role": "user", "content": "Hello [name]!"}],
        variables={"name": "Alice"},  # Replaces the [name] placeholder
        stream=False
    )
```

### Get Chat Histories

```python
with ChatClient(api_key="fastgpt-xxxxx") as client:
    histories = client.get_chat_histories(
        appId="your-app-id",
        offset=0,
        pageSize=20,
        source="api"
    )
    histories.raise_for_status()
    data = histories.json()
    for chat in data['data']['list']:
        print(f"{chat['title']}: {chat['chatId']}")
```

### Send Feedback

```python
with ChatClient(api_key="fastgpt-xxxxx") as client:
    # Like a message
    client.send_feedback(
        appId="app-123",
        chatId="chat-123",
        dataId="msg-123",
        userGoodFeedback="Great answer!"
    )

    # Dislike a message
    client.send_feedback(
        appId="app-123",
        chatId="chat-123",
        dataId="msg-123",
        userBadFeedback="Not helpful"
    )
```

### App Analytics

```python
from fastgpt_client import AppClient

with AppClient(api_key="fastgpt-xxxxx") as client:
    logs = client.get_app_logs_chart(
        appId="your-app-id",
        dateStart="2024-01-01T00:00:00.000Z",
        dateEnd="2024-12-31T23:59:59.999Z",
        source=["api", "online"]
    )
    logs.raise_for_status()
    print(logs.json())
```

## API Reference

### ChatClient

| Method | Description |
|--------|-------------|
| `create_chat_completion()` | Create chat completion (blocking/streaming) |
| `get_chat_histories()` | List chat histories for an app |
| `get_chat_init()` | Get chat initialization info |
| `get_chat_records()` | Get messages for a chat |
| `get_record_detail()` | Get execution details |
| `update_chat_history()` | Update title or pin status |
| `delete_chat_history()` | Delete a chat |
| `clear_chat_histories()` | Clear all chats |
| `delete_chat_record()` | Delete single record |
| `send_feedback()` | Like/dislike a message |
| `get_suggested_questions()` | Get suggested questions |

### AppClient

| Method | Description |
|--------|-------------|
| `get_app_logs_chart()` | Get app analytics data |

## Development

```bash
# Install in development mode
pip install -e ".[dev]"

# Run tests
pytest

# Lint
ruff check fastgpt_client/

# Format
ruff format fastgpt_client/
```

## License

MIT

## Links

- [FastGPT Documentation](https://doc.fastgpt.io/)
- [Chat API Documentation](https://doc.fastgpt.io/docs/introduction/development/openapi/chat)
4 examples/.env.example Normal file
@@ -0,0 +1,4 @@
API_KEY=""
BASE_URL=""
CHAT_ID=""
APP_ID=""
125 examples/basic_usage.py Normal file
@@ -0,0 +1,125 @@
"""Basic usage example for FastGPT Python SDK."""

import os

from dotenv import load_dotenv

from fastgpt_client import ChatClient

load_dotenv()

# Configure your API key and base URL
API_KEY = os.getenv("API_KEY")
BASE_URL = os.getenv("BASE_URL")


def simple_chat():
    """Simple chat completion example."""
    with ChatClient(api_key=API_KEY, base_url=BASE_URL) as client:
        response = client.create_chat_completion(
            messages=[{"role": "user", "content": "Hello! What's AI?"}],
            stream=False
        )
        response.raise_for_status()
        result = response.json()

        print("Response:", result['choices'][0]['message']['content'])


def streaming_chat():
    """Streaming chat completion example."""
    import json

    with ChatClient(api_key=API_KEY, base_url=BASE_URL) as client:
        response = client.create_chat_completion(
            messages=[{"role": "user", "content": "Tell me a short story"}],
            stream=True
        )

        print("Streaming response: ", end="")
        for line in response.iter_lines():
            if line.startswith("data:"):
                data = line[5:].strip()
                if data and data != "[DONE]":
                    chunk = json.loads(data)
                    if "choices" in chunk and chunk["choices"]:
                        delta = chunk["choices"][0].get("delta", {})
                        content = delta.get("content", "")
                        if content:
                            print(content, end="", flush=True)
        print()


def chat_with_context():
    """Chat with context using a chatId."""
    with ChatClient(api_key=API_KEY, base_url=BASE_URL) as client:
        chat_id = os.getenv("CHAT_ID")

        # First message
        print("User: What's AI?")
        response = client.create_chat_completion(
            messages=[{"role": "user", "content": "What's AI?"}],
            chatId=chat_id,
            stream=False
        )
        response.raise_for_status()
        result = response.json()
        print(f"AI: {result['choices'][0]['message']['content']}\n")

        # Second message (continues the conversation)
        print("User: Tell me more about it")
        response = client.create_chat_completion(
            messages=[{"role": "user", "content": "Tell me more about it"}],
            chatId=chat_id,  # Same chatId maintains context
            stream=False
        )
        response.raise_for_status()
        result = response.json()
        print(f"AI: {result['choices'][0]['message']['content']}")


def get_histories():
    """Get chat histories example."""
    with ChatClient(api_key=API_KEY, base_url=BASE_URL) as client:
        # Replace this with your actual app ID
        app_id = os.getenv("APP_ID")

        try:
            histories = client.get_chat_histories(
                appId=app_id,
                offset=0,
                pageSize=20,
                source="api"
            )
            histories.raise_for_status()
            data = histories.json()

            print(f"Total chats: {data['data']['total']}")
            for chat in data['data']['list']:
                print(f"  - {chat['title']}: {chat['chatId']}")
        except Exception as e:
            print(f"Error: {e}")


if __name__ == "__main__":
    print("=== Simple Chat ===")
    try:
        simple_chat()
    except Exception as e:
        print(f"Error: {e}")

    print("\n=== Streaming Chat ===")
    try:
        streaming_chat()
    except Exception as e:
        print(f"Error: {e}")

    print("\n=== Chat with Context ===")
    try:
        chat_with_context()
    except Exception as e:
        print(f"Error: {e}")

    print("\n=== Get Histories ===")
    try:
        get_histories()
    except Exception as e:
        print(f"Error: {e}")
30 fastgpt_client/__init__.py Normal file
@@ -0,0 +1,30 @@
"""FastGPT Python SDK

A Python client library for interacting with FastGPT's OpenAPI.
"""

from fastgpt_client.client import AppClient, ChatClient, FastGPTClient
from fastgpt_client.exceptions import (
    APIError,
    AuthenticationError,
    FastGPTError,
    RateLimitError,
    StreamParseError,
    ValidationError,
)

__all__ = [
    # Synchronous clients
    "FastGPTClient",
    "ChatClient",
    "AppClient",
    # Exceptions
    "FastGPTError",
    "APIError",
    "AuthenticationError",
    "RateLimitError",
    "ValidationError",
    "StreamParseError",
]

__version__ = "0.1.0"
114 fastgpt_client/base_client.py Normal file
@@ -0,0 +1,114 @@
"""Base client mixin with retry logic and validation."""

import logging
import time


class BaseClientMixin:
    """Mixin class providing retry logic, validation, and logging for FastGPT clients."""

    def __init__(
        self,
        api_key: str,
        base_url: str,
        timeout: float = 60.0,
        max_retries: int = 3,
        retry_delay: float = 1.0,
        enable_logging: bool = False,
    ):
        """Initialize base client.

        Args:
            api_key: FastGPT API key
            base_url: Base URL for FastGPT API
            timeout: Request timeout in seconds
            max_retries: Maximum number of retry attempts
            retry_delay: Delay between retries in seconds
            enable_logging: Whether to enable request logging
        """
        self.api_key = api_key
        self.base_url = base_url
        self.timeout = timeout
        self.max_retries = max_retries
        self.retry_delay = retry_delay
        self.enable_logging = enable_logging
        self.logger = logging.getLogger(__name__)

    def _validate_params(self, **params) -> None:
        """Validate request parameters.

        Args:
            **params: Parameters to validate

        Raises:
            ValidationError: If any parameter is invalid
        """
        for key, value in params.items():
            if value is None:
                continue
            # Check for empty strings that should be non-empty
            if isinstance(value, str) and key in ("query", "chatId", "appId", "dataId", "content"):
                if not value.strip():
                    from .exceptions import ValidationError
                    raise ValidationError(f"{key} must be a non-empty string")
            # Check for valid lists/dicts
            elif isinstance(value, (list, dict)) and not value:
                # Empty lists/dicts are usually valid, but log for debugging
                if self.enable_logging and self.logger.isEnabledFor(logging.DEBUG):
                    self.logger.debug(f"Parameter {key} is empty")

    def _retry_request(self, request_func, request_context: str):
        """Execute a request with retry logic.

        Args:
            request_func: Function that executes the HTTP request
            request_context: Description of the request for logging

        Returns:
            Response from the request

        Raises:
            APIError: If all retries are exhausted
        """
        last_exception = None

        for attempt in range(self.max_retries):
            try:
                response = request_func()

                # Success on non-5xx responses
                if response.status_code < 500:
                    return response

                # Server error - will retry
                if self.enable_logging:
                    self.logger.warning(
                        f"{request_context} failed with status {response.status_code} "
                        f"(attempt {attempt + 1}/{self.max_retries})"
                    )

                if attempt < self.max_retries - 1:
                    # Exponential backoff
                    sleep_time = self.retry_delay * (2 ** attempt)
                    time.sleep(sleep_time)

            except Exception as e:
                last_exception = e
                if self.enable_logging:
                    self.logger.warning(
                        f"{request_context} raised exception: {e} "
                        f"(attempt {attempt + 1}/{self.max_retries})"
                    )

                if attempt < self.max_retries - 1:
                    sleep_time = self.retry_delay * (2 ** attempt)
                    time.sleep(sleep_time)

        # All retries exhausted
        from .exceptions import APIError
        if last_exception:
            raise APIError(
                f"Request failed after {self.max_retries} attempts: {last_exception}"
            ) from last_exception
        raise APIError(f"Request failed after {self.max_retries} attempts")
487
fastgpt_client/client.py
Normal file
487
fastgpt_client/client.py
Normal file
@@ -0,0 +1,487 @@
|
||||
"""FastGPT Client - Main synchronous client."""
|
||||
|
||||
import logging
|
||||
from typing import Any, Dict, Literal, Union
|
||||
|
||||
import httpx
|
||||
|
||||
from .base_client import BaseClientMixin
|
||||
from .exceptions import APIError, AuthenticationError, RateLimitError, ValidationError
|
||||
|
||||
|
||||
class FastGPTClient(BaseClientMixin):
|
||||
"""Synchronous FastGPT API client.
|
||||
|
||||
This client uses httpx.Client for efficient connection pooling and resource management.
|
||||
It's recommended to use this client as a context manager.
|
||||
|
||||
Example:
|
||||
with FastGPTClient(api_key="your-key") as client:
|
||||
response = client.get_app_info(app_id="app-123")
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
api_key: str,
|
||||
base_url: str = "http://localhost:3000",
|
||||
timeout: float = 60.0,
|
||||
max_retries: int = 3,
|
||||
retry_delay: float = 1.0,
|
||||
enable_logging: bool = False,
|
||||
):
|
||||
"""Initialize the FastGPT client.
|
||||
|
||||
Args:
|
||||
api_key: Your FastGPT API key
|
||||
base_url: Base URL for the FastGPT API
|
||||
timeout: Request timeout in seconds (default: 60.0)
|
||||
max_retries: Maximum number of retry attempts (default: 3)
|
||||
retry_delay: Delay between retries in seconds (default: 1.0)
|
||||
enable_logging: Whether to enable request logging (default: False)
|
||||
"""
|
||||
# Initialize base client functionality
|
||||
super().__init__(api_key, base_url, timeout, max_retries, retry_delay, enable_logging)
|
||||
|
||||
self._client = httpx.Client(
|
||||
base_url=base_url,
|
||||
timeout=httpx.Timeout(timeout, connect=5.0),
|
||||
)
|
||||
|
||||
def __enter__(self):
|
||||
"""Support context manager protocol."""
|
||||
return self
|
||||
|
||||
def __exit__(self, exc_type, exc_val, exc_tb):
|
||||
"""Clean up resources when exiting context."""
|
||||
self.close()
|
||||
|
||||
def close(self):
|
||||
"""Close the HTTP client and release resources."""
|
||||
if hasattr(self, "_client"):
|
||||
self._client.close()
|
||||
|
||||
def _send_request(
|
||||
self,
|
||||
method: str,
|
||||
endpoint: str,
|
||||
json: Dict[str, Any] | None = None,
|
||||
params: Dict[str, Any] | None = None,
|
||||
stream: bool = False,
|
||||
**kwargs,
|
||||
):
|
||||
"""Send an HTTP request to the FastGPT API with retry logic.
|
||||
|
||||
Args:
|
||||
method: HTTP method (GET, POST, PUT, PATCH, DELETE)
|
||||
endpoint: API endpoint path
|
||||
json: JSON request body
|
||||
params: Query parameters
|
||||
stream: Whether to stream the response
|
||||
**kwargs: Additional arguments to pass to httpx.request
|
||||
|
||||
Returns:
|
||||
httpx.Response object
|
||||
"""
|
||||
# Validate parameters
|
||||
if json:
|
||||
self._validate_params(**json)
|
||||
if params:
|
||||
self._validate_params(**params)
|
||||
|
||||
headers = {
|
||||
"Authorization": f"Bearer {self.api_key}",
|
||||
"Content-Type": "application/json",
|
||||
}
|
||||
|
||||
def make_request():
|
||||
"""Inner function to perform the actual HTTP request."""
|
||||
# Log request if logging is enabled
|
||||
if self.enable_logging:
|
||||
self.logger.info(f"Sending {method} request to {endpoint}")
|
||||
|
||||
# Debug logging for detailed information
|
||||
if self.logger.isEnabledFor(logging.DEBUG):
|
||||
if json:
|
||||
self.logger.debug(f"Request body: {json}")
|
||||
if params:
|
||||
self.logger.debug(f"Request params: {params}")
|
||||
|
||||
# httpx.Client automatically prepends base_url
|
||||
response = self._client.request(
|
||||
method,
|
||||
endpoint,
|
||||
json=json,
|
||||
params=params,
|
||||
headers=headers,
|
||||
**kwargs,
|
||||
)
|
||||
|
||||
# Log response if logging is enabled
|
||||
if self.enable_logging:
|
||||
self.logger.info(f"Received response: {response.status_code}")
|
||||
|
||||
return response
|
||||
|
||||
# Use the retry mechanism from base client
|
||||
request_context = f"{method} {endpoint}"
|
||||
response = self._retry_request(make_request, request_context)
|
||||
|
||||
# Handle error responses (API errors don't retry)
|
||||
self._handle_error_response(response)
|
||||
|
||||
return response
|
||||
|
||||
def _handle_error_response(self, response) -> None:
|
||||
"""Handle HTTP error responses and raise appropriate exceptions.
|
||||
|
||||
Args:
|
||||
response: httpx.Response object
|
||||
|
||||
Raises:
|
||||
AuthenticationError: If status code is 401
|
||||
RateLimitError: If status code is 429
|
||||
ValidationError: If status code is 422
|
||||
APIError: For other 4xx and 5xx errors
|
||||
"""
|
||||
if response.status_code < 400:
|
||||
return # Success response
|
||||
|
||||
try:
|
||||
error_data = response.json()
|
||||
message = error_data.get("message", f"HTTP {response.status_code}")
|
||||
except (ValueError, KeyError):
|
||||
message = f"HTTP {response.status_code}"
|
||||
error_data = None
|
||||
|
||||
# Log error response if logging is enabled
|
||||
if self.enable_logging:
|
||||
self.logger.error(f"API error: {response.status_code} - {message}")
|
||||
|
||||
if response.status_code == 401:
|
||||
raise AuthenticationError(message, response.status_code, error_data)
|
||||
elif response.status_code == 429:
|
||||
retry_after = response.headers.get("Retry-After")
|
||||
raise RateLimitError(message, retry_after, response.status_code, error_data)
|
||||
elif response.status_code == 422:
|
||||
raise ValidationError(message, response.status_code, error_data)
|
||||
elif response.status_code >= 400:
|
||||
raise APIError(message, response.status_code, error_data)
|
||||
|
||||
|
||||
class ChatClient(FastGPTClient):
|
||||
"""Client for chat-related operations.
|
||||
|
||||
Example:
|
||||
with ChatClient(api_key="fastgpt-xxxxx") as client:
|
||||
response = client.create_chat_completion(
|
||||
messages=[{"role": "user", "content": "Hello!"}],
|
||||
stream=False
|
||||
)
|
||||
"""
|
||||
|
||||
def create_chat_completion(
|
||||
self,
|
||||
messages: list[dict],
|
||||
stream: bool = False,
|
||||
chatId: str | None = None,
|
||||
detail: bool = False,
|
||||
variables: dict[str, Any] | None = None,
|
||||
responseChatItemId: str | None = None,
|
||||
):
|
||||
"""Create a chat completion.
|
||||
|
||||
Args:
|
||||
messages: Array of message objects with role and content
|
||||
stream: Whether to stream the response
|
||||
chatId: Chat ID for conversation context (optional)
|
||||
detail: Whether to return detailed response data
|
||||
variables: Template variables for substitution
|
||||
responseChatItemId: Custom ID for the response message
|
||||
|
||||
Returns:
|
||||
httpx.Response object
|
||||
"""
|
||||
self._validate_params(messages=messages)
|
||||
|
||||
data = {
|
||||
"messages": messages,
|
||||
"stream": stream,
|
||||
"detail": detail,
|
||||
}
|
||||
|
||||
if chatId:
|
||||
data["chatId"] = chatId
|
||||
if variables:
|
||||
data["variables"] = variables
|
||||
if responseChatItemId:
|
||||
data["responseChatItemId"] = responseChatItemId
|
||||
|
||||
return self._send_request(
|
||||
"POST",
|
||||
"/api/v1/chat/completions",
|
||||
json=data,
|
||||
stream=stream,
|
||||
)
|
||||
|
||||
def get_chat_histories(
|
||||
self,
|
||||
appId: str,
|
||||
offset: int = 0,
|
||||
pageSize: int = 20,
|
||||
source: Literal["api", "online", "share", "test"] = "api",
|
||||
):
|
||||
"""Get chat histories for an application.
|
||||
|
||||
Args:
|
||||
appId: Application ID
|
||||
offset: Offset for pagination
|
||||
pageSize: Number of records per page
|
||||
source: Source filter (api, online, share, test)
|
||||
|
||||
Returns:
|
||||
httpx.Response object
|
||||
"""
|
||||
data = {
|
||||
"appId": appId,
|
||||
"offset": offset,
|
||||
"pageSize": pageSize,
|
||||
"source": source,
|
||||
}
|
||||
|
||||
return self._send_request("POST", "/api/core/chat/getHistories", json=data)
|
||||
|

    def get_chat_init(self, appId: str, chatId: str):
        """Get chat initialization information.

        Args:
            appId: Application ID
            chatId: Chat ID

        Returns:
            httpx.Response object
        """
        params = {"appId": appId, "chatId": chatId}
        return self._send_request("GET", "/api/core/chat/init", params=params)

    def get_chat_records(
        self,
        appId: str,
        chatId: str,
        offset: int = 0,
        pageSize: int = 10,
        loadCustomFeedbacks: bool = False,
    ):
        """Get chat records for a specific chat.

        Args:
            appId: Application ID
            chatId: Chat ID
            offset: Offset for pagination
            pageSize: Number of records per page
            loadCustomFeedbacks: Whether to load custom feedbacks

        Returns:
            httpx.Response object
        """
        data = {
            "appId": appId,
            "chatId": chatId,
            "offset": offset,
            "pageSize": pageSize,
            "loadCustomFeedbacks": loadCustomFeedbacks,
        }

        return self._send_request("POST", "/api/core/chat/getPaginationRecords", json=data)

    def get_record_detail(self, appId: str, chatId: str, dataId: str):
        """Get detailed execution data for a specific record.

        Args:
            appId: Application ID
            chatId: Chat ID
            dataId: Record ID

        Returns:
            httpx.Response object
        """
        params = {"appId": appId, "chatId": chatId, "dataId": dataId}
        return self._send_request("GET", "/api/core/chat/getResData", params=params)

    def update_chat_history(
        self,
        appId: str,
        chatId: str,
        customTitle: str | None = None,
        top: bool | None = None,
    ):
        """Update chat history (title or pin status).

        Args:
            appId: Application ID
            chatId: Chat ID
            customTitle: Custom title for the chat
            top: Whether to pin the chat

        Returns:
            httpx.Response object
        """
        data = {
            "appId": appId,
            "chatId": chatId,
        }

        if customTitle is not None:
            data["customTitle"] = customTitle
        if top is not None:
            data["top"] = top

        return self._send_request("POST", "/api/core/chat/updateHistory", json=data)

    def delete_chat_history(self, appId: str, chatId: str):
        """Delete a chat history.

        Args:
            appId: Application ID
            chatId: Chat ID

        Returns:
            httpx.Response object
        """
        params = {"appId": appId, "chatId": chatId}
        return self._send_request("DELETE", "/api/core/chat/delHistory", params=params)

    def clear_chat_histories(self, appId: str):
        """Clear all chat histories for an application.

        Args:
            appId: Application ID

        Returns:
            httpx.Response object
        """
        params = {"appId": appId}
        return self._send_request("DELETE", "/api/core/chat/clearHistories", params=params)

    def delete_chat_record(self, appId: str, chatId: str, contentId: str):
        """Delete a single chat record.

        Args:
            appId: Application ID
            chatId: Chat ID
            contentId: Content ID of the record

        Returns:
            httpx.Response object
        """
        params = {"appId": appId, "chatId": chatId, "contentId": contentId}
        return self._send_request("DELETE", "/api/core/chat/item/delete", params=params)

    def send_feedback(
        self,
        appId: str,
        chatId: str,
        dataId: str,
        userGoodFeedback: str | None = None,
        userBadFeedback: str | None = None,
    ):
        """Send feedback for a chat message (like/dislike).

        Args:
            appId: Application ID
            chatId: Chat ID
            dataId: Message ID
            userGoodFeedback: Positive feedback text (pass None to cancel like)
            userBadFeedback: Negative feedback text (pass None to cancel dislike)

        Returns:
            httpx.Response object
        """
        data = {
            "appId": appId,
            "chatId": chatId,
            "dataId": dataId,
        }

        if userGoodFeedback is not None:
            data["userGoodFeedback"] = userGoodFeedback
        if userBadFeedback is not None:
            data["userBadFeedback"] = userBadFeedback

        return self._send_request("POST", "/api/core/chat/feedback/updateUserFeedback", json=data)

    def get_suggested_questions(
        self,
        appId: str,
        chatId: str,
        questionGuide: dict[str, Any] | None = None,
    ):
        """Get suggested questions based on chat context.

        Args:
            appId: Application ID
            chatId: Chat ID
            questionGuide: Optional custom configuration for question guide

        Returns:
            httpx.Response object
        """
        data = {
            "appId": appId,
            "chatId": chatId,
        }

        if questionGuide:
            data["questionGuide"] = questionGuide

        return self._send_request("POST", "/api/core/ai/agent/v2/createQuestionGuide", json=data)


class AppClient(FastGPTClient):
    """Client for application analytics and logs.

    Example:
        with AppClient(api_key="fastgpt-xxxxx") as client:
            logs = client.get_app_logs_chart(appId="app-123")
    """

    def get_app_logs_chart(
        self,
        appId: str,
        dateStart: str,
        dateEnd: str,
        offset: int = 1,
        source: list[str] | None = None,
        userTimespan: str = "day",
        chatTimespan: str = "day",
        appTimespan: str = "day",
    ):
        """Get application analytics chart data.

        Args:
            appId: Application ID
            dateStart: Start date (ISO 8601 format)
            dateEnd: End date (ISO 8601 format)
            offset: Offset value
            source: List of sources (test, online, share, api, etc.)
            userTimespan: User data timespan (day, week, month)
            chatTimespan: Chat data timespan (day, week, month)
            appTimespan: App data timespan (day, week, month)

        Returns:
            httpx.Response object
        """
        if source is None:
            source = ["api"]

        data = {
            "appId": appId,
            "dateStart": dateStart,
            "dateEnd": dateEnd,
            "offset": offset,
            "source": source,
            "userTimespan": userTimespan,
            "chatTimespan": chatTimespan,
            "appTimespan": appTimespan,
        }

        return self._send_request("POST", "/api/proApi/core/app/logs/getChartData", json=data)
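`dateStart`/`dateEnd` are ISO 8601 strings, and a rolling window is the common way to build them. A small sketch of constructing a seven-day window; the exact timestamp precision FastGPT expects is an assumption to check against the API docs.

```python
from datetime import datetime, timedelta, timezone


def last_n_days(n, now=None):
    """Return (dateStart, dateEnd) as ISO 8601 strings covering the last n days."""
    end = now or datetime.now(timezone.utc)
    start = end - timedelta(days=n)
    return start.isoformat(), end.isoformat()


dateStart, dateEnd = last_n_days(7)
# e.g. client.get_app_logs_chart(appId="app-123", dateStart=dateStart, dateEnd=dateEnd)
```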
43  fastgpt_client/exceptions.py  Normal file
@@ -0,0 +1,43 @@
"""FastGPT Client Exceptions."""

from __future__ import annotations


class FastGPTError(Exception):
    """Base exception for all FastGPT errors."""

    def __init__(
        self,
        message: str,
        status_code: int | None = None,
        response_data: dict | None = None,
    ):
        self.message = message
        self.status_code = status_code
        self.response_data = response_data or {}
        super().__init__(self.message)


class APIError(FastGPTError):
    """General API error (4xx, 5xx responses)."""


class AuthenticationError(FastGPTError):
    """Authentication failed (401)."""


class RateLimitError(FastGPTError):
    """Rate limit exceeded (429)."""

    def __init__(
        self,
        message: str,
        retry_after: str | None = None,
        status_code: int | None = None,
        response_data: dict | None = None,
    ):
        super().__init__(message, status_code, response_data)
        self.retry_after = retry_after


class ValidationError(FastGPTError):
    """Invalid request parameters (422)."""


class StreamParseError(FastGPTError):
    """Error parsing streaming response."""
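Carrying `retry_after` on `RateLimitError` lets caller code implement a simple backoff loop. A sketch with a stand-in exception class (the real one lives in `fastgpt_client.exceptions`); the wait honours `retry_after` when the server provided it and otherwise falls back to exponential backoff:

```python
import time


class RateLimitError(Exception):
    """Stand-in for fastgpt_client.exceptions.RateLimitError."""

    def __init__(self, message, retry_after=None):
        super().__init__(message)
        self.retry_after = retry_after


def with_retries(call, max_attempts=3, sleep=time.sleep):
    """Retry `call` on RateLimitError, honouring retry_after when present."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RateLimitError as exc:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            delay = float(exc.retry_after) if exc.retry_after else 2 ** attempt
            sleep(delay)


# Demo: fail twice with Retry-After: 1, then succeed.
attempts = {"n": 0}

def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimitError("429", retry_after="1")
    return "ok"

waits = []
result = with_retries(flaky, sleep=waits.append)
print(result, waits)  # -> ok [1.0, 1.0]
```

The `sleep` parameter is injected only so the demo (and tests) can run without real waiting; production callers would leave the default.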
59  pyproject.toml  Normal file
@@ -0,0 +1,59 @@
[build-system]
requires = ["setuptools>=68.0", "wheel"]
build-backend = "setuptools.build_meta"

[project]
name = "fastgpt-client"
version = "0.1.0"
description = "Python SDK for FastGPT OpenAPI"
readme = "README.md"
requires-python = ">=3.8"
license = {text = "MIT"}
authors = [
    {name = "Your Name", email = "your.email@example.com"}
]
keywords = ["fastgpt", "ai", "chatbot", "llm", "openapi"]
classifiers = [
    "Development Status :: 3 - Alpha",
    "Intended Audience :: Developers",
    "License :: OSI Approved :: MIT License",
    "Programming Language :: Python :: 3",
    "Programming Language :: Python :: 3.8",
    "Programming Language :: Python :: 3.9",
    "Programming Language :: Python :: 3.10",
    "Programming Language :: Python :: 3.11",
    "Programming Language :: Python :: 3.12",
    "Topic :: Software Development :: Libraries :: Python Modules",
]
dependencies = [
    "httpx>=0.25.0",
]

[project.optional-dependencies]
dev = [
    "pytest>=7.0",
    "pytest-cov>=4.0",
    "ruff>=0.1.0",
]

[project.urls]
Homepage = "https://github.com/yourusername/fastgpt-python-sdk"
Documentation = "https://github.com/yourusername/fastgpt-python-sdk#readme"
Repository = "https://github.com/yourusername/fastgpt-python-sdk"
Issues = "https://github.com/yourusername/fastgpt-python-sdk/issues"

[tool.setuptools.packages.find]
where = ["."]
include = ["fastgpt_client*"]

[tool.ruff]
line-length = 100
target-version = "py38"

[tool.ruff.lint]
select = ["E", "F", "I", "N", "W"]
ignore = ["E501"]

[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py"]