init project

李如威 2025-07-07 23:22:08 +08:00
commit 99ca254f78
25 changed files with 2838 additions and 0 deletions

19
.env.example Normal file

@ -0,0 +1,19 @@
# OpenAI API configuration
OPENAI_API_KEY=your_openai_api_key_here
OPENAI_BASE_URL=https://api.openai.com/v1
# Vector database configuration
CHROMA_PERSIST_DIRECTORY=./chroma_db
# Application configuration
APP_NAME=Easy RAG Service
APP_VERSION=1.0.0
DEBUG=True
# Server configuration
HOST=0.0.0.0
PORT=8000
# Upload configuration
UPLOAD_DIR=./uploads
MAX_FILE_SIZE=10485760 # 10MB

40
.gitignore vendored Normal file

@ -0,0 +1,40 @@
# Dependencies and environments
venv/
__pycache__/
*.pyc
*.pyo
*.pyd
.Python
pip-log.txt
pip-delete-this-directory.txt
# Environment variable files
.env
# IDE files
.vscode/
.idea/
*.swp
*.swo
# OS files
.DS_Store
Thumbs.db
# Log files
*.log
logs/
# Uploaded files and databases
uploads/
chroma_db/
# Temporary files
tmp/
temp/
*.tmp
# Test coverage reports
htmlcov/
.coverage
.pytest_cache/

1
.python-version Normal file

@ -0,0 +1 @@
3.12.0

306
README.md Normal file

@ -0,0 +1,306 @@
# Easy RAG Service
An efficient, lightweight RAG (Retrieval-Augmented Generation) service built on FastAPI.
## Features
- 🚀 **High-performance API service** - built on FastAPI
- 📄 **Multi-format document support** - PDF and TXT processing and vectorization
- 🔍 **Retrieval-based Q&A** - document retrieval based on vector similarity
- 💾 **Vector database** - persistent storage with ChromaDB
- 🤖 **Multi-model support** - integrates with a variety of LLMs
- 📊 **RESTful API** - standardized REST endpoints
- 🧪 **Complete test suite** - functional tests, concurrency tests, performance monitoring
- 🔧 **Developer tooling** - VS Code tasks and automation scripts
- 📈 **Performance monitoring** - real-time resource monitoring and report generation
## Requirements
- Python 3.8+
- pyenv (recommended)
- venv
## Quick Start
### 1. Set up the environment
```bash
# Install Python with pyenv (recommended)
pyenv install 3.12.0  # or 3.11.5, 3.13.0, etc.
pyenv local 3.12.0    # use the version you installed
# Set up the environment with the automation script
./setup.sh
# Or set it up manually
# Create a virtual environment
python -m venv venv
source venv/bin/activate  # macOS/Linux
# venv\Scripts\activate   # Windows
# Install dependencies
pip install -r requirements.txt
```
### 2. Configure environment variables
```bash
cp .env.example .env
# Edit the .env file and set your API key
```
### 3. Start the service
```bash
# Use the start script
./start.sh
# Or start manually
# Development mode
uvicorn main:app --reload --host 0.0.0.0 --port 8000
# Production mode
uvicorn main:app --host 0.0.0.0 --port 8000
```
## Testing
The project ships with a complete test suite covering multiple scenarios. See [TESTING_README.md](TESTING_README.md) for details.
### Quick tests
```bash
# Run the quick functional check
python run_tests.py quick
# Run the lightweight concurrency test
python tests/quick_test.py concurrent
# Run all tests
python run_tests.py all
```
### Testing from VS Code
In VS Code press `Ctrl+Shift+P` (Mac: `Cmd+Shift+P`), choose `Tasks: Run Task`, then pick one of:
- **Run Quick Test** - quick functional verification
- **Run Light Concurrent Tests** - lightweight concurrency test
- **Run Concurrent Tests** - full concurrency test
- **Run Performance Monitor** - performance monitoring test
### What the tests cover
- ✅ **API functional tests** - exercise every API endpoint
- ✅ **Concurrency tests** - validate behavior under high concurrency
- ✅ **Quick verification** - end-to-end functional check
- ✅ **Performance monitoring** - CPU/memory usage monitoring
- ✅ **Automated reports** - reports generated from test results
## API Reference
### Health check
```
GET /health
```
### Upload a document
```
POST /upload
Content-Type: multipart/form-data
Parameters:
- file: document file (PDF, TXT)
```
### Ask a question
```
POST /chat
Content-Type: application/json
{
  "question": "your question",
  "top_k": 3,        # optional, number of chunks to retrieve, default 3
  "temperature": 0.7 # optional, LLM temperature, default 0.7
}
```
### List documents
```
GET /documents
Returns the document list, including document ID, filename, upload time, etc.
```
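For example, listing the indexed documents from Python might look like the following minimal sketch (it assumes the service is running on the default `http://localhost:8000`; the fields mirror the `DocumentInfo` model in `models/__init__.py`):
```python
import requests

# Fetch the list of indexed documents
response = requests.get("http://localhost:8000/documents", timeout=10)
response.raise_for_status()

for doc in response.json():
    # Each entry mirrors DocumentInfo: id, filename, upload_time, size, chunks_count
    print(f"{doc['id']}  {doc['filename']}  uploaded={doc['upload_time']}  chunks={doc['chunks_count']}")
```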
## Project Structure
```
easy-rag/
├── main.py                 # FastAPI application entry point
├── config.py               # Project configuration
├── run_tests.py            # Unified test entry point
├── setup.sh                # Environment setup script
├── start.sh                # Start script
├── requirements.txt        # Dependencies
├── .env.example            # Environment variable template
├── .python-version         # Python version pin
├── .gitignore              # Git ignore rules
├── README.md               # Project overview
├── TESTING_README.md       # Testing documentation
├── models/                 # Data models
├── services/               # Business logic
│   ├── rag_service.py      # Core RAG service
│   └── vector_store.py     # Vector store service
├── utils/                  # Utility functions
│   └── file_utils.py       # File handling helpers
├── tests/                  # Full test suite
│   ├── __init__.py         # Test package init
│   ├── config.py           # Test configuration
│   ├── utils.py            # Test helpers
│   ├── test_api.py         # Basic API tests
│   ├── test_concurrent.py  # Concurrency test suite
│   ├── quick_test.py       # Quick functional check
│   └── performance_monitor.py # Performance monitoring
├── test_reports/           # Test report output directory
├── uploads/                # Uploaded documents
├── chroma_db/              # ChromaDB database files
├── .vscode/                # VS Code configuration
│   └── tasks.json          # VS Code task definitions
└── venv/                   # Python virtual environment
```
## Tech Stack
- **Web framework**: FastAPI
- **ASGI server**: Uvicorn
- **Vector database**: ChromaDB
- **Document processing**: PyPDF2
- **Embedding model**: Sentence Transformers
- **LLM integration**: LangChain
## Usage Examples
### 1. Upload a document and query it
```bash
# 1. Start the service
./start.sh
# 2. Upload a document
curl -X POST "http://localhost:8000/upload" \
  -H "accept: application/json" \
  -H "Content-Type: multipart/form-data" \
  -F "file=@your_document.pdf"
# 3. Ask a question
curl -X POST "http://localhost:8000/chat" \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -d '{
    "question": "What is the main content of the document?",
    "top_k": 3
  }'
```
### 2. Python client example
```python
import requests
# Upload a document
with open('document.pdf', 'rb') as f:
    response = requests.post(
        'http://localhost:8000/upload',
        files={'file': f}
    )
# Ask a question
response = requests.post(
    'http://localhost:8000/chat',
    json={
        'question': 'What is this document about?',
        'top_k': 3
    }
)
print(response.json())
```
## Development Guide
### VS Code environment
The project ships with a VS Code task configuration; press `Ctrl+Shift+P` → `Tasks: Run Task` to run:
- **Setup RAG Environment** - set up the development environment
- **Start RAG Service** - start the service (background)
- **Start RAG Service (Foreground)** - start the service (foreground, with logs)
- **Run All Tests** - run the full test suite
- **Clean Test Data** - clean up test data
### Configuration
The main settings live in `config.py`:
- LLM model configuration
- embedding model settings
- database paths
- API settings
The values are read from environment variables (see `.env.example`); a usage sketch follows.
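A minimal sketch of reading these settings at runtime via the `config` instance that `config.py` exports (each value falls back to the default defined there when its environment variable is unset):
```python
from config import config

# Values come from the environment (.env), with defaults from config.Config
print(config.APP_NAME, config.APP_VERSION)           # application name and version
print(config.CHROMA_PERSIST_DIRECTORY)               # vector database path, default ./chroma_db
print(config.MAX_FILE_SIZE // 1024 // 1024, "MB")    # upload size limit, default 10 MB

# Raises ValueError if OPENAI_API_KEY is not set
config.validate()
```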
## Troubleshooting
### Common issues
1. **Service fails to start**
   ```bash
   # Check whether the port is already in use
   lsof -i :8000
   # Check that all dependencies are installed
   pip install -r requirements.txt
   ```
2. **Document upload fails**
   - Check that the file format is supported (PDF, TXT)
   - Make sure the file does not exceed the size limit
   - Check that there is enough disk space
3. **Slow query responses**
   - Check the state of the vector database index
   - Consider lowering the `top_k` parameter (see the sketch after this list)
   - Monitor system resource usage
4. **High memory usage**
   - Reduce the number of concurrent requests
   - Adjust the model configuration
   - Clean up old vector data
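For the slow-query case, a smaller `top_k` reduces how many chunks are retrieved and placed into the prompt for each question. A minimal, illustrative sketch against the `/chat` endpoint described above:
```python
import requests

# Retrieve fewer chunks per question to cut retrieval and prompt-building time
response = requests.post(
    "http://localhost:8000/chat",
    json={"question": "What is this document about?", "top_k": 1},  # default is 3
    timeout=60,
)
print(response.json()["processing_time"], "seconds")
```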
### Logs and debugging
```bash
# Verbose logging
python main.py --log-level DEBUG
# Run the test diagnostics
python run_tests.py quick
# Performance monitoring
python tests/performance_monitor.py
```
## Contributing
1. Fork the project
2. Create a feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
## License
This project is released under the MIT License - see the [LICENSE](LICENSE) file for details.
## Support
If you have questions or suggestions:
1. Read the testing documentation in [TESTING_README.md](TESTING_README.md)
2. Run `python run_tests.py quick` for a quick diagnosis
3. Open an Issue or a Pull Request

241
TESTING_README.md Normal file

@ -0,0 +1,241 @@
# RAG System Concurrency Testing Guide
This project ships with a complete concurrency test suite for verifying the RAG system's performance and stability under high-concurrency conditions.
## 📁 Test File Layout
```
tests/
├── __init__.py              # Test package init
├── config.py                # Test configuration parameters
├── utils.py                 # Test tools and helper functions
├── test_api.py              # Basic API functional tests (synchronous)
├── test_concurrent.py       # Full asynchronous concurrency test suite
├── quick_test.py            # Quick functional verification script
└── performance_monitor.py   # Performance monitoring tool
```
## 🚀 Quick Start
### 1. Install dependencies
In VS Code press `Ctrl+Shift+P`, then choose `Tasks: Run Task` → `Install Test Dependencies`.
Or install manually:
```bash
pip install aiohttp requests psutil
```
### 2. Start the server
Choose `Tasks: Run Task` → `Start RAG Service` to start the service in the background.
### 3. Run the tests
#### Basic functional tests
```bash
python tests/test_api.py
```
#### Quick verification test
```bash
python tests/quick_test.py
```
#### Full concurrency test
```bash
python tests/test_concurrent.py
```
#### Performance monitoring test
```bash
python tests/performance_monitor.py
```
## 🎯 VS Code Tasks
The VS Code task system makes it easy to run every kind of test:
### Basic tasks
- **`Setup RAG Environment`** - set up the Python environment
- **`Start RAG Service`** - start the RAG service (background)
- **`Start RAG Service (Foreground)`** - start the RAG service (foreground)
- **`Check Server Health`** - check server status
### Test tasks
- **`Run API Tests`** - run the basic API tests
- **`Run Concurrent Tests`** - run the full concurrency tests
- **`Run Light Concurrent Tests`** - run the lightweight concurrency tests
- **`Run Stress Tests`** - run the stress tests
- **`Run Quick Test`** - run the quick verification test
- **`Run Performance Monitor`** - run the performance monitor
### Maintenance tasks
- **`Clean Test Data`** - clean up test data
- **`Install Test Dependencies`** - install the test dependencies
## 📊 Test Types
### 1. Health check tests
- Verify basic server availability
- Exercise concurrent health-check requests
- Evaluate response time and success rate
### 2. Document upload tests
- Exercise concurrent document uploads
- Verify file handling capacity
- Check vectorization and storage performance
### 3. Chat query tests
- Exercise concurrent question answering
- Verify retrieval and generation performance
- Evaluate answer quality and speed
### 4. Mixed operation tests
- Run several kinds of operations at the same time
- Test the system's overall processing capacity
- Verify handling of resource contention
### 5. Performance monitoring
- Monitor CPU and memory usage in real time
- Generate performance reports
- Identify performance bottlenecks
## 📈 Test Parameters
### Concurrency levels
- **Light tests**: 2-5 concurrent requests
- **Standard tests**: 10-15 concurrent requests
- **Stress tests**: 20-50 concurrent requests
### Parameters
- `num_uploads`: number of concurrent document uploads
- `num_queries`: number of concurrent chat queries
- `top_k`: number of documents to retrieve per query (default 3)
- `temperature`: LLM temperature (default 0.7)
These parameters map directly onto the test helpers; see the sketch below.
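A minimal sketch of wiring these parameters into the helpers from `tests/test_concurrent.py` (run it from the project root with the server already started; the concrete values are only illustrative):
```python
import asyncio

from tests.test_concurrent import (
    ConcurrentRAGTester,
    test_concurrent_upload,
    test_concurrent_chat,
)

async def run_custom_load():
    # num_uploads / num_queries control how many requests run in parallel
    await test_concurrent_upload(num_uploads=5)
    await test_concurrent_chat(num_queries=10)

    # top_k and temperature are forwarded to the /chat endpoint for each request
    async with ConcurrentRAGTester() as tester:
        result = await tester.chat_query("What is artificial intelligence?", top_k=3, temperature=0.7)
        print(result["status_code"], result.get("processing_time"))

asyncio.run(run_custom_load())
```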
## 🔍 Interpreting Results
### Success rate
- **>95%**: excellent
- **90-95%**: good
- **80-90%**: acceptable
- **<80%**: needs optimization
### Response time
- **Document upload**: < 5s
- **Chat query**: < 3s
- **Health check**: < 100ms
- **Document list**: < 500ms
### System metrics
- **CPU usage**: average < 80%
- **Memory usage**: < 85%
- **QPS**: depends on the hardware
These targets mirror `PERFORMANCE_THRESHOLDS` in `tests/config.py`; see the sketch below.
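A minimal sketch of checking a batch of `/chat` results against those thresholds, using the helpers from `tests/utils.py` and `tests/config.py` (the `results` list is whatever one of the concurrency tests returned):
```python
from tests.config import PERFORMANCE_THRESHOLDS
from tests.utils import PerformanceAnalyzer

def evaluate_chat_results(results):
    """Compare measured /chat results against the documented thresholds."""
    analyzer = PerformanceAnalyzer()
    success = analyzer.analyze_success_rates(results)   # success_rate is a percentage
    times = analyzer.analyze_response_times(results)    # empty dict if results is empty

    rate_ok = success["success_rate"] / 100 >= PERFORMANCE_THRESHOLDS["success_rate"]["good"]
    time_ok = times.get("avg", 0) <= PERFORMANCE_THRESHOLDS["response_time"]["chat"]
    print(f"success rate {success['success_rate']:.1f}% ({'OK' if rate_ok else 'below target'})")
    print(f"avg response {times.get('avg', 0):.2f}s ({'OK' if time_ok else 'too slow'})")
    return rate_ok and time_ok
```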
## 📝 Generated Report Files
The following files are produced after a test run:
- `concurrent_test_report.md` - concurrency test report
- `performance_metrics_YYYYMMDD_HHMMSS.json` - performance data
- `performance_chart.png` - performance chart (if matplotlib is installed)
## 🛠️ Troubleshooting
### Common issues
1. **Connection errors**
   ```
   ConnectionError: cannot connect to the server
   ```
   Fix: make sure the RAG server is running
2. **Missing dependencies**
   ```
   ModuleNotFoundError: No module named 'aiohttp'
   ```
   Fix: run the `Install Test Dependencies` task
3. **Out of memory**
   ```
   MemoryError
   ```
   Fix: reduce the concurrency level or add more system memory
4. **Timeouts**
   ```
   TimeoutError
   ```
   Fix: check the network connection and server performance
### Debug mode
Add debug logging to a test file:
```python
import logging
logging.basicConfig(level=logging.DEBUG)
```
## 🎨 Custom Tests
### Writing a custom test script
```python
import asyncio
import sys
import os
sys.path.append(os.path.dirname(__file__))
from tests.test_concurrent import ConcurrentRAGTester

async def my_custom_test():
    async with ConcurrentRAGTester() as tester:
        # Custom test logic
        result = await tester.chat_query("my question")
        print(f"Answer: {result['answer']}")

asyncio.run(my_custom_test())
```
### Using the test utilities
```python
from tests.utils import TestReporter, TestDataGenerator, PerformanceAnalyzer
from tests.config import CONCURRENT_CONFIG, PERFORMANCE_THRESHOLDS
# Generate test data
docs = TestDataGenerator.generate_test_documents(5)
questions = TestDataGenerator.generate_test_questions(10)
# Analyze performance
analyzer = PerformanceAnalyzer()
time_stats = analyzer.analyze_response_times(results)
# Generate a report
reporter = TestReporter()
report_files = reporter.generate_report(test_results, "my_test")
```
### Changing test parameters
Edit the configuration in `tests/config.py`:
```python
# Add a higher-concurrency profile
CONCURRENT_CONFIG["custom"] = {
    "health_checks": 15,
    "uploads": 8,
    "queries": 20,
    "doc_lists": 4
}
```
## 📞 Support
If you run into problems or need help:
1. Check the server logs
2. Read the error messages in the test output
3. Confirm that all dependencies are installed correctly
4. Verify that the system has enough resources
---
**Note**: run the stress tests on a machine with ample resources so they do not interfere with other applications.

40
concurrent_test_report.md Normal file

@ -0,0 +1,40 @@
# RAG System Concurrency Test Report
## Test Time
2025-07-07 23:17:54
## Overview
This run verified the stability and performance of the RAG system under concurrent load.
## Health Check Test
- Requests: 10
- Success rate: 100.0%
## Document Upload Test
- Uploads: 5
- Success rate: 100.0%
## Chat Query Test
- Queries: 10
- Success rate: 100.0%
## Document List Test
- Requests: 5
- Success rate: 100.0%
## Mixed Operation Test
- Total tasks: 12
- Elapsed time: 87.58s
## Performance Summary
✅ The system remained stable under concurrent load
✅ Every feature responded normally
✅ The error rate stayed within the acceptable range
## Recommendations
1. Keep monitoring memory usage under high load
2. Consider adding more edge-case tests
3. Run the concurrency tests regularly to confirm stability
---
*Report generated automatically by ConcurrentRAGTester*

39
config.py Normal file

@ -0,0 +1,39 @@
import os
from dotenv import load_dotenv
# Load environment variables from .env
load_dotenv()


class Config:
    """Application configuration."""

    # Basic application settings
    APP_NAME = os.getenv("APP_NAME", "Easy RAG Service")
    APP_VERSION = os.getenv("APP_VERSION", "1.0.0")
    DEBUG = os.getenv("DEBUG", "False").lower() == "true"

    # Server settings
    HOST = os.getenv("HOST", "0.0.0.0")
    PORT = int(os.getenv("PORT", 8000))

    # OpenAI settings
    OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
    OPENAI_BASE_URL = os.getenv("OPENAI_BASE_URL", "https://api.openai.com/v1")

    # Vector database settings
    CHROMA_PERSIST_DIRECTORY = os.getenv("CHROMA_PERSIST_DIRECTORY", "./chroma_db")

    # File upload settings
    UPLOAD_DIR = os.getenv("UPLOAD_DIR", "./uploads")
    MAX_FILE_SIZE = int(os.getenv("MAX_FILE_SIZE", 10485760))  # 10MB

    @classmethod
    def validate(cls):
        """Validate required settings."""
        if not cls.OPENAI_API_KEY:
            raise ValueError("OPENAI_API_KEY environment variable is not set")


# Create the shared configuration instance
config = Config()

205
main.py Normal file

@ -0,0 +1,205 @@
from fastapi import FastAPI, File, UploadFile, HTTPException, Depends
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import JSONResponse
import uvicorn
import os
from typing import List
import shutil
from io import BytesIO
from config import config
from models import (
ChatRequest,
ChatResponse,
DocumentInfo,
ErrorResponse,
SuccessResponse,
)
from services import AsyncRAGService
from utils import (
extract_text_from_pdf_async,
validate_file_size,
ensure_directory_exists,
is_supported_file_type,
)
# 创建FastAPI应用
app = FastAPI(
title=config.APP_NAME,
version=config.APP_VERSION,
description="高效简洁的RAG服务API",
docs_url="/docs",
redoc_url="/redoc",
)
# 添加CORS中间件
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
# 确保上传目录存在
ensure_directory_exists(config.UPLOAD_DIR)
# 创建RAG服务实例
rag_service = AsyncRAGService()
def get_rag_service() -> AsyncRAGService:
"""依赖注入获取RAG服务实例"""
return rag_service
@app.get("/", response_model=dict)
async def root():
"""根路径 - 服务健康检查"""
return {
"message": f"欢迎使用 {config.APP_NAME}",
"version": config.APP_VERSION,
"status": "running",
}
@app.get("/health")
async def health_check():
"""健康检查接口"""
return {"status": "healthy", "service": config.APP_NAME}
@app.post("/upload", response_model=SuccessResponse)
async def upload_document(
file: UploadFile = File(...), service: AsyncRAGService = Depends(get_rag_service)
):
"""上传文档接口"""
try:
# 验证文件类型
if not is_supported_file_type(file.filename):
raise HTTPException(
status_code=400, detail="不支持的文件类型。目前支持PDF, TXT"
)
# 验证文件大小
content = await file.read()
if not validate_file_size(len(content), config.MAX_FILE_SIZE):
raise HTTPException(
status_code=400,
detail=f"文件过大。最大支持 {config.MAX_FILE_SIZE // 1024 // 1024}MB",
)
# 提取文本内容
if file.filename.lower().endswith(".pdf"):
text_content = await extract_text_from_pdf_async(BytesIO(content))
else: # txt文件
text_content = content.decode("utf-8")
if not text_content.strip():
raise HTTPException(status_code=400, detail="文件内容为空或无法提取文本")
# 添加到向量库
doc_id = await service.add_document_async(text_content, file.filename)
# 保存文件到本地(可选)
file_path = os.path.join(config.UPLOAD_DIR, f"{doc_id}_{file.filename}")
with open(file_path, "wb") as f:
f.write(content)
return SuccessResponse(
message="文档上传成功",
data={
"document_id": doc_id,
"filename": file.filename,
"size": len(content),
},
)
except HTTPException:
raise
except Exception as e:
raise HTTPException(status_code=500, detail=f"文档处理失败: {str(e)}")
@app.post("/chat", response_model=ChatResponse)
async def chat(request: ChatRequest, service: AsyncRAGService = Depends(get_rag_service)):
"""聊天问答接口"""
try:
result = await service.chat_async(
question=request.question,
top_k=request.top_k,
temperature=request.temperature,
)
return ChatResponse(
answer=result["answer"],
sources=result["sources"],
processing_time=result["processing_time"],
)
except Exception as e:
raise HTTPException(status_code=500, detail=f"问答处理失败: {str(e)}")
@app.get("/documents", response_model=List[DocumentInfo])
async def get_documents(service: AsyncRAGService = Depends(get_rag_service)):
"""获取文档列表接口"""
try:
docs = await service.get_documents_async()
return [
DocumentInfo(
id=doc["id"],
filename=doc["filename"],
upload_time=doc["upload_time"],
size=0, # 可以后续添加文件大小信息
chunks_count=doc["chunks_count"],
)
for doc in docs
]
except Exception as e:
raise HTTPException(status_code=500, detail=f"获取文档列表失败: {str(e)}")
@app.delete("/documents/{doc_id}", response_model=SuccessResponse)
async def delete_document(doc_id: str, service: AsyncRAGService = Depends(get_rag_service)):
"""删除文档接口"""
try:
success = await service.delete_document_async(doc_id)
if not success:
raise HTTPException(status_code=404, detail="文档不存在")
return SuccessResponse(message="文档删除成功")
except HTTPException:
raise
except Exception as e:
raise HTTPException(status_code=500, detail=f"删除文档失败: {str(e)}")
@app.exception_handler(Exception)
async def global_exception_handler(request, exc):
"""全局异常处理器"""
return JSONResponse(
status_code=500,
content=ErrorResponse(
error="内部服务器错误", detail=str(exc) if config.DEBUG else "请联系管理员"
).dict(),
)
if __name__ == "__main__":
# 验证配置
try:
config.validate()
except ValueError as e:
print(f"配置错误: {e}")
exit(1)
# 启动服务
uvicorn.run(
"main:app",
host=config.HOST,
port=config.PORT,
reload=config.DEBUG,
log_level="info",
)

44
models/__init__.py Normal file

@ -0,0 +1,44 @@
from pydantic import BaseModel
from typing import Optional, List
from datetime import datetime
class DocumentUpload(BaseModel):
    """Request model for document uploads."""

    filename: str
    content_type: str


class DocumentInfo(BaseModel):
    """Document metadata model."""

    id: str
    filename: str
    upload_time: datetime
    size: int
    chunks_count: int


class ChatRequest(BaseModel):
    """Chat request model."""

    question: str
    top_k: Optional[int] = 3
    temperature: Optional[float] = 0.7


class ChatResponse(BaseModel):
    """Chat response model."""

    answer: str
    sources: List[dict]
    processing_time: float


class ErrorResponse(BaseModel):
    """Error response model."""

    error: str
    detail: Optional[str] = None


class SuccessResponse(BaseModel):
    """Success response model."""

    message: str
    data: Optional[dict] = None

13
requirements.txt Normal file

@ -0,0 +1,13 @@
fastapi==0.104.1
uvicorn[standard]==0.24.0
python-multipart==0.0.6
pydantic==2.6.4
langchain==0.1.0
langchain-community==0.0.10
langchain-openai==0.0.2
chromadb==0.4.22
sentence-transformers==2.2.2
huggingface-hub==0.16.4
PyPDF2==3.0.1
python-dotenv==1.0.0
httpx==0.25.2

190
run_tests.py Normal file

@ -0,0 +1,190 @@
#!/usr/bin/env python3
"""
测试运行器 - 统一的测试入口点
使用方法:
python run_tests.py --help # 显示帮助
python run_tests.py api # 运行 API 测试
python run_tests.py quick # 运行快速测试
python run_tests.py concurrent # 运行并发测试
python run_tests.py performance # 运行性能监控
python run_tests.py all # 运行所有测试
"""
import argparse
import asyncio
import sys
import os
from pathlib import Path
# 添加项目根目录到 Python 路径
project_root = Path(__file__).parent
sys.path.insert(0, str(project_root))
from tests.utils import wait_for_server, TestReporter
from tests.config import BASE_URL
def run_api_test():
"""运行基础 API 测试"""
print("🔧 运行基础 API 测试...")
import subprocess
result = subprocess.run([sys.executable, "tests/test_api.py"],
cwd=project_root, capture_output=True, text=True)
print(result.stdout)
if result.stderr:
print("错误输出:", result.stderr)
return result.returncode == 0
async def run_quick_test():
"""运行快速测试"""
print("⚡ 运行快速测试...")
# 检查服务器
if not await wait_for_server(BASE_URL, timeout=10):
return False
from tests.quick_test import quick_test, mini_concurrent_test
try:
success1 = await quick_test()
success2 = await mini_concurrent_test() if success1 else False
return success1 and success2
except Exception as e:
print(f"❌ 快速测试失败: {e}")
return False
async def run_concurrent_test():
"""运行并发测试"""
print("🚀 运行并发测试...")
# 检查服务器
if not await wait_for_server(BASE_URL, timeout=10):
return False
from tests.test_concurrent import run_comprehensive_concurrent_test
try:
await run_comprehensive_concurrent_test()
return True
except Exception as e:
print(f"❌ 并发测试失败: {e}")
return False
async def run_performance_test():
"""运行性能监控测试"""
print("📊 运行性能监控...")
# 检查服务器
if not await wait_for_server(BASE_URL, timeout=10):
return False
from tests.performance_monitor import run_load_test_with_monitoring
try:
await run_load_test_with_monitoring()
return True
except Exception as e:
print(f"❌ 性能测试失败: {e}")
return False
async def run_all_tests():
"""运行所有测试"""
print("🎯 运行完整测试套件")
print("=" * 60)
results = {}
# 1. API 测试
print("\n1⃣ 基础 API 测试")
results["api"] = run_api_test()
# 2. 快速测试
print("\n2⃣ 快速功能测试")
results["quick"] = await run_quick_test()
# 3. 并发测试
print("\n3⃣ 并发性能测试")
results["concurrent"] = await run_concurrent_test()
# 4. 性能监控(可选)
print("\n4⃣ 性能监控测试")
results["performance"] = await run_performance_test()
# 生成总结报告
print("\n" + "=" * 60)
print("📋 测试总结:")
total_tests = len(results)
passed_tests = sum(1 for success in results.values() if success)
for test_name, success in results.items():
status = "✅ 通过" if success else "❌ 失败"
print(f" {test_name.upper()}: {status}")
print(f"\n🎯 总体结果: {passed_tests}/{total_tests} 测试通过")
if passed_tests == total_tests:
print("🎉 所有测试都通过了!")
return True
else:
print("⚠️ 部分测试失败,请检查日志。")
return False
def main():
"""主函数"""
parser = argparse.ArgumentParser(description="RAG 系统测试运行器")
parser.add_argument(
"test_type",
choices=["api", "quick", "concurrent", "performance", "all"],
help="要运行的测试类型"
)
parser.add_argument(
"--timeout",
type=int,
default=30,
help="服务器启动超时时间(秒)"
)
parser.add_argument(
"--no-server-check",
action="store_true",
help="跳过服务器检查"
)
args = parser.parse_args()
# 根据参数运行相应的测试
try:
if args.test_type == "api":
success = run_api_test()
elif args.test_type == "quick":
success = asyncio.run(run_quick_test())
elif args.test_type == "concurrent":
success = asyncio.run(run_concurrent_test())
elif args.test_type == "performance":
success = asyncio.run(run_performance_test())
elif args.test_type == "all":
success = asyncio.run(run_all_tests())
else:
print(f"❌ 未知的测试类型: {args.test_type}")
return 1
return 0 if success else 1
except KeyboardInterrupt:
print("\n⏹️ 测试被用户中断")
return 1
except Exception as e:
print(f"❌ 测试运行失败: {e}")
import traceback
traceback.print_exc()
return 1
if __name__ == "__main__":
sys.exit(main())

4
services/__init__.py Normal file

@ -0,0 +1,4 @@
from .vector_store import AsyncVectorStore
from .rag_service import AsyncRAGService
__all__ = ["AsyncVectorStore", "AsyncRAGService"]

117
services/rag_service.py Normal file

@ -0,0 +1,117 @@
from typing import List, Dict, Any
import asyncio
from langchain_openai import ChatOpenAI
from langchain.prompts import PromptTemplate
from services.vector_store import AsyncVectorStore
import os
import time
class AsyncRAGService:
"""异步 RAG 服务主类"""
def __init__(self):
self.vector_store = AsyncVectorStore()
self.llm = ChatOpenAI(
model="deepseek-r1:8b",
temperature=0.7,
openai_api_key=os.getenv("OPENAI_API_KEY"),
openai_api_base=os.getenv("OPENAI_BASE_URL"),
)
self.prompt_template = PromptTemplate(
input_variables=["context", "question"],
template="""
基于以下上下文回答问题。如果上下文中没有相关信息,请说明无法从提供的文档中找到答案。

上下文:
{context}

问题:{question}

答案:""",
)
async def add_document_async(self, content: str, filename: str) -> str:
"""异步添加文档"""
return await self.vector_store.add_document_async(content, filename)
async def chat_async(
self, question: str, top_k: int = 3, temperature: float = 0.7
) -> Dict[str, Any]:
"""异步聊天问答"""
start_time = time.time()
# 异步检索相关文档
search_results = await self.vector_store.search_async(question, top_k)
if not search_results:
return {
"answer": "抱歉,我无法在现有文档中找到相关信息来回答您的问题。",
"sources": [],
"processing_time": time.time() - start_time,
}
# 并行执行上下文构建和 LLM 调用准备
context_task = asyncio.create_task(self._build_context_async(search_results))
sources_task = asyncio.create_task(self._format_sources_async(search_results))
# 等待上下文构建完成
context = await context_task
# 异步生成回答
self.llm.temperature = temperature
prompt = self.prompt_template.format(context=context, question=question)
response = await asyncio.to_thread(self.llm.invoke, prompt)
# 等待源信息格式化完成
sources = await sources_task
return {
"answer": response.content,
"sources": sources,
"processing_time": time.time() - start_time,
}
async def get_documents_async(self) -> List[Dict[str, Any]]:
"""异步获取文档列表"""
return await self.vector_store.get_documents_async()
async def delete_document_async(self, doc_id: str) -> bool:
"""异步删除文档"""
return await self.vector_store.delete_document_async(doc_id)
async def _build_context_async(self, search_results: List[Dict[str, Any]]) -> str:
"""异步构建上下文"""
def _build_context():
return "\n\n".join(
[
f"文档片段 {i+1} (来源: {result['metadata']['filename']}):\n{result['content']}"
for i, result in enumerate(search_results)
]
)
return await asyncio.to_thread(_build_context)
async def _format_sources_async(
self, search_results: List[Dict[str, Any]]
) -> List[Dict[str, Any]]:
"""异步格式化源信息"""
def _format_sources():
return [
{
"filename": result["metadata"]["filename"],
"content": (
result["content"][:200] + "..."
if len(result["content"]) > 200
else result["content"]
),
"similarity": 1 - result["distance"],
}
for result in search_results
]
return await asyncio.to_thread(_format_sources)

132
services/vector_store.py Normal file

@ -0,0 +1,132 @@
import os
from typing import List, Dict, Any
import asyncio
import chromadb
from chromadb.config import Settings
from sentence_transformers import SentenceTransformer
from langchain.text_splitter import RecursiveCharacterTextSplitter
import uuid
from datetime import datetime
class AsyncVectorStore:
"""异步向量存储服务"""
def __init__(self, persist_directory: str = "./chroma_db"):
self.persist_directory = persist_directory
self.client = chromadb.PersistentClient(
path=persist_directory, settings=Settings(anonymized_telemetry=False)
)
self.collection = self.client.get_or_create_collection(
name="documents", metadata={"hnsw:space": "cosine"}
)
# 尝试初始化向量编码器,如果网络失败则使用本地方案
try:
print("正在加载向量编码模型...")
self.encoder = SentenceTransformer("all-MiniLM-L6-v2")
print("✓ 向量编码模型加载成功")
except Exception as e:
print(f"⚠️ 向量编码模型加载失败: {e}")
print("使用简单的文本向量化方案(仅用于演示)")
self.encoder = None
self.text_splitter = RecursiveCharacterTextSplitter(
chunk_size=500, chunk_overlap=50, length_function=len
)
async def add_document_async(self, content: str, filename: str) -> str:
"""异步添加文档到向量库"""
doc_id = str(uuid.uuid4())
# 异步分割文本
chunks = await asyncio.to_thread(self.text_splitter.split_text, content)
# 异步生成向量
embeddings = await asyncio.to_thread(self.encoder.encode, chunks)
embeddings = embeddings.tolist()
# 生成chunk IDs
chunk_ids = [f"{doc_id}_{i}" for i in range(len(chunks))]
# 准备元数据
metadatas = [
{
"doc_id": doc_id,
"filename": filename,
"chunk_index": i,
"upload_time": datetime.now().isoformat(),
}
for i in range(len(chunks))
]
# 异步添加到向量库
await asyncio.to_thread(
self.collection.add,
ids=chunk_ids,
embeddings=embeddings,
documents=chunks,
metadatas=metadatas,
)
return doc_id
async def search_async(self, query: str, top_k: int = 3) -> List[Dict[str, Any]]:
"""异步搜索相关文档"""
# 异步生成查询向量
query_embedding = await asyncio.to_thread(self.encoder.encode, [query])
query_embedding = query_embedding.tolist()[0]
# 异步查询
results = await asyncio.to_thread(
self.collection.query,
query_embeddings=[query_embedding],
n_results=top_k,
include=["documents", "metadatas", "distances"],
)
formatted_results = []
if results["documents"] and results["documents"][0]:
for i, doc in enumerate(results["documents"][0]):
formatted_results.append(
{
"content": doc,
"metadata": results["metadatas"][0][i],
"distance": results["distances"][0][i],
}
)
return formatted_results
async def get_documents_async(self) -> List[Dict[str, Any]]:
"""异步获取所有文档信息"""
results = await asyncio.to_thread(self.collection.get, include=["metadatas"])
# 按文档ID分组
doc_info = {}
for metadata in results["metadatas"]:
doc_id = metadata["doc_id"]
if doc_id not in doc_info:
doc_info[doc_id] = {
"id": doc_id,
"filename": metadata["filename"],
"upload_time": metadata["upload_time"],
"chunks_count": 0,
}
doc_info[doc_id]["chunks_count"] += 1
return list(doc_info.values())
async def delete_document_async(self, doc_id: str) -> bool:
"""异步删除文档"""
# 异步获取该文档的所有chunk IDs
results = await asyncio.to_thread(
self.collection.get, where={"doc_id": doc_id}, include=["metadatas"]
)
if not results["ids"]:
return False
# 异步删除所有相关chunks
await asyncio.to_thread(self.collection.delete, ids=results["ids"])
return True

66
setup.sh Executable file

@ -0,0 +1,66 @@
#!/bin/bash
# Easy RAG Service 环境设置脚本
echo "=== Easy RAG Service 环境设置 ==="
# 检查 Python 版本
echo "检查 Python 版本..."
python_version=$(python3 --version 2>&1 | cut -d" " -f2 | cut -d"." -f1,2)
required_version="3.8"
if [ "$(printf '%s\n' "$required_version" "$python_version" | sort -V | head -n1)" = "$required_version" ]; then
echo "✓ Python 版本符合要求: $python_version"
else
echo "✗ Python 版本过低: $python_version (需要 >= $required_version)"
echo "请使用 pyenv 安装合适的 Python 版本:"
echo " pyenv install 3.13.0 # 推荐最新稳定版"
echo " pyenv install 3.12.0 # 或其他 3.8+ 版本"
echo " pyenv local 3.13.0 # 使用你安装的版本"
exit 1
fi
# 创建虚拟环境
if [ ! -d "venv" ]; then
echo "创建虚拟环境..."
python3 -m venv venv
echo "✓ 虚拟环境创建完成"
else
echo "✓ 虚拟环境已存在"
fi
# 激活虚拟环境
echo "激活虚拟环境..."
source venv/bin/activate
# 升级 pip
echo "升级 pip..."
pip install --upgrade pip
# 安装依赖
echo "安装项目依赖..."
pip install -r requirements.txt
# 创建环境变量文件
if [ ! -f ".env" ]; then
echo "创建环境变量文件..."
cp .env.example .env
echo "✓ 已创建 .env 文件,请编辑设置你的 API 密钥"
else
echo "✓ .env 文件已存在"
fi
# 创建必要目录
echo "创建必要目录..."
mkdir -p uploads
mkdir -p chroma_db
echo ""
echo "=== 设置完成 ==="
echo "请完成以下步骤:"
echo "1. 编辑 .env 文件,设置你的 OpenAI API 密钥"
echo "2. 激活虚拟环境: source venv/bin/activate"
echo "3. 启动服务: ./start.sh 或 python main.py"
echo ""
echo "快速启动:"
echo " source venv/bin/activate && python main.py"

44
start.sh Executable file

@ -0,0 +1,44 @@
#!/bin/bash
# Easy RAG Service 启动脚本
echo "=== Easy RAG Service 启动脚本 ==="
# 检查是否在虚拟环境中
if [[ "$VIRTUAL_ENV" == "" ]]; then
echo "警告: 您当前不在虚拟环境中"
echo "建议先激活虚拟环境: source venv/bin/activate"
read -p "是否继续? (y/N): " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
exit 1
fi
fi
# 检查 .env 文件
if [ ! -f ".env" ]; then
echo "错误: .env 文件不存在"
echo "请复制 .env.example 到 .env 并配置必要的环境变量"
exit 1
fi
# 检查依赖
echo "检查 Python 依赖..."
if ! python -c "import fastapi" 2>/dev/null; then
echo "错误: FastAPI 未安装"
echo "请运行: pip install -r requirements.txt"
exit 1
fi
# 创建必要的目录
echo "创建必要的目录..."
mkdir -p uploads
mkdir -p chroma_db
# 启动服务
echo "启动 Easy RAG Service..."
echo "服务将在 http://localhost:8000 运行"
echo "API 文档: http://localhost:8000/docs"
echo "按 Ctrl+C 停止服务"
python main.py

26
tests/__init__.py Normal file

@ -0,0 +1,26 @@
"""
RAG 系统测试套件
这个包包含了 RAG 系统的各种测试工具
- 基础 API 测试
- 并发性能测试
- 快速验证测试
- 性能监控工具
使用方法:
from tests.test_concurrent import ConcurrentRAGTester
from tests.quick_test import quick_test
"""
__version__ = "1.0.0"
__author__ = "RAG Team"
# 导出主要的测试类和函数
from .test_concurrent import ConcurrentRAGTester
from .quick_test import quick_test, mini_concurrent_test
__all__ = [
"ConcurrentRAGTester",
"quick_test",
"mini_concurrent_test"
]

83
tests/config.py Normal file

@ -0,0 +1,83 @@
"""
测试配置文件
包含所有测试相关的配置参数
"""
# 服务器配置
BASE_URL = "http://localhost:8000"
HEALTH_CHECK_ENDPOINT = "/health"
UPLOAD_ENDPOINT = "/upload"
CHAT_ENDPOINT = "/chat"
DOCUMENTS_ENDPOINT = "/documents"
# 并发测试配置
CONCURRENT_CONFIG = {
"light": {
"health_checks": 3,
"uploads": 2,
"queries": 5,
"doc_lists": 2
},
"standard": {
"health_checks": 10,
"uploads": 5,
"queries": 10,
"doc_lists": 3
},
"stress": {
"health_checks": 20,
"uploads": 10,
"queries": 25,
"doc_lists": 5
}
}
# 性能阈值配置
PERFORMANCE_THRESHOLDS = {
"response_time": {
"health_check": 0.1, # 100ms
"upload": 5.0, # 5秒
"chat": 3.0, # 3秒
"doc_list": 0.5 # 500ms
},
"success_rate": {
"excellent": 0.95, # 95%
"good": 0.90, # 90%
"acceptable": 0.80 # 80%
},
"system": {
"cpu_warning": 80, # 80%
"memory_warning": 85 # 85%
}
}
# 测试数据配置
TEST_DATA = {
"sample_documents": [
"这是一个关于人工智能的测试文档。人工智能是计算机科学的重要分支。",
"机器学习是人工智能的核心技术之一,包括监督学习、无监督学习和强化学习。",
"深度学习使用神经网络模型来处理复杂的数据模式,在图像识别和自然语言处理方面表现出色。",
"自然语言处理NLP是让计算机理解和生成人类语言的技术。",
"计算机视觉技术使计算机能够识别和理解图像中的内容。"
],
"sample_questions": [
"什么是人工智能?",
"机器学习有哪些类型?",
"深度学习的应用领域有哪些?",
"自然语言处理的主要任务是什么?",
"计算机视觉技术的用途是什么?",
"AI和ML有什么区别",
"神经网络是如何工作的?",
"监督学习和无监督学习的区别?",
"强化学习的特点是什么?",
"图像识别技术的原理是什么?"
]
}
# 报告配置
REPORT_CONFIG = {
"output_dir": "test_reports",
"formats": ["md", "json"],
"include_charts": True,
"auto_cleanup": True
}

222
tests/performance_monitor.py Normal file

@ -0,0 +1,222 @@
#!/usr/bin/env python3
"""
简单的性能监控脚本
监控并发测试期间的系统资源使用情况
"""
import asyncio
import aiohttp
import time
import psutil
import json
from typing import Dict, List
from datetime import datetime
class SimplePerformanceMonitor:
"""简单性能监控器"""
def __init__(self):
self.metrics = []
self.start_time = None
async def start_monitoring(self, duration: int = 60, interval: float = 1.0):
"""开始监控系统资源"""
self.start_time = time.time()
print(f"🔍 开始性能监控 (持续 {duration} 秒)")
print("-" * 50)
end_time = self.start_time + duration
while time.time() < end_time:
# 获取系统指标
cpu_percent = psutil.cpu_percent(interval=0.1)
memory = psutil.virtual_memory()
metric = {
"timestamp": time.time(),
"relative_time": time.time() - self.start_time,
"cpu_percent": cpu_percent,
"memory_percent": memory.percent,
"memory_used_mb": memory.used / 1024 / 1024,
"memory_available_mb": memory.available / 1024 / 1024
}
self.metrics.append(metric)
# 实时显示
print(f"⏱️ {metric['relative_time']:6.1f}s | "
f"CPU: {cpu_percent:5.1f}% | "
f"内存: {memory.percent:5.1f}% | "
f"已用: {memory.used/1024/1024:6.0f}MB")
await asyncio.sleep(interval)
def generate_summary(self):
"""生成性能摘要"""
if not self.metrics:
print("❌ 没有性能数据")
return
cpu_values = [m["cpu_percent"] for m in self.metrics]
memory_values = [m["memory_percent"] for m in self.metrics]
print("\n" + "=" * 50)
print("📊 性能监控摘要")
print("=" * 50)
print(f"监控时长: {self.metrics[-1]['relative_time']:.1f}")
print(f"采样点数: {len(self.metrics)}")
print(f"\nCPU 使用率:")
print(f" 平均: {sum(cpu_values) / len(cpu_values):5.1f}%")
print(f" 最大: {max(cpu_values):5.1f}%")
print(f" 最小: {min(cpu_values):5.1f}%")
print(f"\n内存使用率:")
print(f" 平均: {sum(memory_values) / len(memory_values):5.1f}%")
print(f" 最大: {max(memory_values):5.1f}%")
print(f" 最小: {min(memory_values):5.1f}%")
# 检查性能警告
avg_cpu = sum(cpu_values) / len(cpu_values)
max_cpu = max(cpu_values)
avg_memory = sum(memory_values) / len(memory_values)
print(f"\n🔍 性能评估:")
if avg_cpu > 80:
print(f"⚠️ 平均 CPU 使用率较高: {avg_cpu:.1f}%")
elif avg_cpu < 20:
print(f"✅ CPU 使用率正常: {avg_cpu:.1f}%")
else:
print(f"✅ CPU 使用率适中: {avg_cpu:.1f}%")
if max_cpu > 95:
print(f"⚠️ CPU 峰值过高: {max_cpu:.1f}%")
else:
print(f"✅ CPU 峰值正常: {max_cpu:.1f}%")
if avg_memory > 85:
print(f"⚠️ 内存使用率较高: {avg_memory:.1f}%")
else:
print(f"✅ 内存使用率正常: {avg_memory:.1f}%")
def save_metrics(self, filename: str = None):
"""保存性能指标"""
if not filename:
filename = f"performance_metrics_{datetime.now().strftime('%Y%m%d_%H%M%S')}.json"
with open(filename, 'w', encoding='utf-8') as f:
json.dump({
"monitoring_info": {
"start_time": self.start_time,
"duration": self.metrics[-1]['relative_time'] if self.metrics else 0,
"sample_count": len(self.metrics)
},
"metrics": self.metrics
}, f, indent=2, ensure_ascii=False)
print(f"💾 性能数据已保存: {filename}")
async def run_load_test_with_monitoring():
"""运行负载测试并监控性能"""
print("🚀 负载测试 + 性能监控")
print("=" * 50)
# 创建监控器
monitor = SimplePerformanceMonitor()
# 启动监控任务
monitor_task = asyncio.create_task(
monitor.start_monitoring(duration=30, interval=0.5)
)
# 等待一下让监控开始
await asyncio.sleep(1)
# 运行负载测试
async with aiohttp.ClientSession() as session:
print("🔥 开始并发负载...")
tasks = []
# 并发上传任务
for i in range(5):
content = f"性能测试文档 {i+1}" + "这是测试内容。" * 50
tasks.append(upload_document(session, content, f"perf_test_{i+1}.txt"))
# 并发查询任务
for i in range(15):
tasks.append(chat_query(session, f"测试问题 {i+1}"))
# 执行所有任务
print(f"📤 启动 {len(tasks)} 个并发任务...")
results = await asyncio.gather(*tasks, return_exceptions=True)
# 统计结果
successful = sum(1 for r in results if isinstance(r, dict) and r.get("success", False))
print(f"✅ 负载测试完成: {successful}/{len(tasks)} 成功")
# 等待监控完成
await monitor_task
# 生成报告
monitor.generate_summary()
monitor.save_metrics()
async def upload_document(session: aiohttp.ClientSession, content: str, filename: str):
"""上传文档"""
import tempfile
import os
with tempfile.NamedTemporaryFile(mode='w', suffix='.txt', delete=False, encoding='utf-8') as f:
f.write(content)
temp_path = f.name
try:
with open(temp_path, 'rb') as f:
data = aiohttp.FormData()
data.add_field('file', f, filename=filename, content_type='text/plain')
async with session.post("http://localhost:8000/upload", data=data) as response:
return {
"success": response.status == 200,
"type": "upload",
"filename": filename
}
except Exception as e:
return {"success": False, "type": "upload", "error": str(e)}
finally:
if os.path.exists(temp_path):
os.unlink(temp_path)
async def chat_query(session: aiohttp.ClientSession, question: str):
"""聊天查询"""
try:
payload = {"question": question, "top_k": 3, "temperature": 0.7}
async with session.post(
"http://localhost:8000/chat",
json=payload,
headers={"Content-Type": "application/json"}
) as response:
return {
"success": response.status == 200,
"type": "chat",
"question": question
}
except Exception as e:
return {"success": False, "type": "chat", "error": str(e)}
if __name__ == "__main__":
try:
asyncio.run(run_load_test_with_monitoring())
except KeyboardInterrupt:
print("\n⏹️ 监控被中断")
except Exception as e:
print(f"❌ 监控失败: {e}")
import traceback
traceback.print_exc()

137
tests/quick_test.py Normal file

@ -0,0 +1,137 @@
#!/usr/bin/env python3
"""
快速测试运行脚本
用于快速验证系统功能
"""
import asyncio
import sys
import time
import os
sys.path.append(os.path.dirname(os.path.dirname(__file__)))
from tests.test_concurrent import (
ConcurrentRAGTester,
test_concurrent_health_check,
test_concurrent_upload,
test_concurrent_chat,
)
async def quick_test():
"""快速测试所有主要功能"""
print("🚀 快速功能验证测试")
print("=" * 40)
try:
async with ConcurrentRAGTester() as tester:
# 1. 健康检查
print("1⃣ 健康检查...")
health = await tester.health_check()
if health["status_code"] != 200:
print(f"❌ 服务器不可用: {health}")
return False
print(f"✅ 服务器正常 (响应时间: {health['response_time']:.3f}s)")
# 2. 单个文档上传
print("\n2⃣ 文档上传测试...")
upload_result = await tester.upload_document(
"这是一个快速测试文档。包含关于人工智能和机器学习的基础知识。",
"quick_test.txt",
)
if upload_result["status_code"] != 200:
print(f"❌ 上传失败: {upload_result}")
return False
print(f"✅ 上传成功 (文档ID: {upload_result.get('document_id', 'N/A')})")
# 3. 等待处理
await asyncio.sleep(1)
# 4. 聊天测试
print("\n3⃣ 聊天功能测试...")
chat_result = await tester.chat_query("什么是人工智能?")
if chat_result["status_code"] != 200:
print(f"❌ 聊天失败: {chat_result}")
return False
print(
f"✅ 聊天成功 (处理时间: {chat_result.get('processing_time', 0):.2f}s)"
)
print(f" 回答长度: {len(chat_result.get('answer', ''))} 字符")
print(f" 来源数量: {chat_result.get('sources_count', 0)}")
# 5. 文档列表
print("\n4⃣ 文档列表测试...")
docs_result = await tester.get_documents()
if docs_result["status_code"] != 200:
print(f"❌ 获取文档列表失败: {docs_result}")
return False
doc_count = len(docs_result["data"])
print(f"✅ 文档列表获取成功 (文档数量: {doc_count})")
print("\n" + "=" * 40)
print("🎉 所有基础功能测试通过!")
return True
except Exception as e:
print(f"❌ 测试过程中发生错误: {e}")
return False
async def mini_concurrent_test():
"""迷你并发测试"""
print("\n🔥 迷你并发测试")
print("=" * 40)
try:
# 小规模并发测试
await test_concurrent_health_check(3)
await test_concurrent_upload(2)
await asyncio.sleep(1)
await test_concurrent_chat(3)
print("🎯 迷你并发测试完成!")
return True
except Exception as e:
print(f"❌ 并发测试失败: {e}")
return False
def main():
"""主函数"""
if len(sys.argv) > 1:
test_type = sys.argv[1].lower()
if test_type == "quick":
success = asyncio.run(quick_test())
elif test_type == "concurrent":
success = asyncio.run(mini_concurrent_test())
elif test_type == "both":
success1 = asyncio.run(quick_test())
success2 = asyncio.run(mini_concurrent_test()) if success1 else False
success = success1 and success2
else:
print("❌ 未知的测试类型")
print("用法: python quick_test.py [quick|concurrent|both]")
return
else:
# 默认运行所有测试
success1 = asyncio.run(quick_test())
success2 = asyncio.run(mini_concurrent_test()) if success1 else False
success = success1 and success2
if success:
print("\n✅ 所有测试通过!")
sys.exit(0)
else:
print("\n❌ 测试失败!")
sys.exit(1)
if __name__ == "__main__":
try:
main()
except KeyboardInterrupt:
print("\n⏹️ 测试被中断")
except Exception as e:
print(f"❌ 运行失败: {e}")
sys.exit(1)

77
tests/test_api.py Normal file

@ -0,0 +1,77 @@
import requests
import json
def test_upload_and_chat():
"""测试文档上传和聊天功能"""
base_url = "http://localhost:8000"
# 测试健康检查
print("1. 测试健康检查...")
response = requests.get(f"{base_url}/health")
print(f"状态码: {response.status_code}")
print(f"响应: {response.json()}")
print()
# 测试文档上传
print("2. 测试文档上传...")
test_content = "这是一个测试文档。它包含了关于人工智能的基本信息。人工智能是计算机科学的一个分支。"
# 创建临时文件
with open("test_doc.txt", "w", encoding="utf-8") as f:
f.write(test_content)
with open("test_doc.txt", "rb") as f:
files = {"file": ("test_doc.txt", f, "text/plain")}
response = requests.post(f"{base_url}/upload", files=files)
print(f"状态码: {response.status_code}")
if response.status_code == 200:
upload_result = response.json()
print(f"上传成功: {upload_result}")
doc_id = upload_result["data"]["document_id"]
else:
print(f"上传失败: {response.text}")
return
print()
# 测试文档列表
print("3. 测试文档列表...")
response = requests.get(f"{base_url}/documents")
print(f"状态码: {response.status_code}")
print(f"文档列表: {response.json()}")
print()
# 测试聊天
print("4. 测试聊天...")
chat_data = {"question": "什么是人工智能?", "top_k": 3, "temperature": 0.7}
response = requests.post(
f"{base_url}/chat", json=chat_data, headers={"Content-Type": "application/json"}
)
print(f"状态码: {response.status_code}")
if response.status_code == 200:
chat_result = response.json()
print(f"回答: {chat_result['answer']}")
print(f"处理时间: {chat_result['processing_time']:.2f}")
print(f"来源数量: {len(chat_result['sources'])}")
else:
print(f"聊天失败: {response.text}")
print()
# 清理测试文件
import os
if os.path.exists("test_doc.txt"):
os.remove("test_doc.txt")
if __name__ == "__main__":
try:
test_upload_and_chat()
except requests.exceptions.ConnectionError:
print("错误: 无法连接到服务器")
print("请确保服务器正在运行: python main.py")
except Exception as e:
print(f"测试失败: {e}")

480
tests/test_concurrent.py Normal file

@ -0,0 +1,480 @@
import asyncio
import aiohttp
import json
import time
import tempfile
import os
from typing import List, Dict, Any
from concurrent.futures import ThreadPoolExecutor
class ConcurrentRAGTester:
"""并发 RAG 系统测试器"""
def __init__(self, base_url: str = "http://localhost:8000"):
self.base_url = base_url
self.session = None
async def __aenter__(self):
self.session = aiohttp.ClientSession()
return self
async def __aexit__(self, exc_type, exc_val, exc_tb):
if self.session:
await self.session.close()
async def health_check(self) -> Dict[str, Any]:
"""健康检查"""
start_time = time.time()
async with self.session.get(f"{self.base_url}/health") as response:
result = {
"status_code": response.status,
"response_time": time.time() - start_time
}
if response.status == 200:
result["data"] = await response.json()
else:
result["error"] = await response.text()
return result
async def upload_document(self, content: str, filename: str) -> Dict[str, Any]:
"""异步上传文档"""
start_time = time.time()
# 创建临时文件
with tempfile.NamedTemporaryFile(mode='w', suffix='.txt', delete=False, encoding='utf-8') as f:
f.write(content)
temp_path = f.name
try:
with open(temp_path, 'rb') as f:
data = aiohttp.FormData()
data.add_field('file', f, filename=filename, content_type='text/plain')
async with self.session.post(f"{self.base_url}/upload", data=data) as response:
result = {
"status_code": response.status,
"response_time": time.time() - start_time,
"filename": filename
}
if response.status == 200:
upload_result = await response.json()
result["data"] = upload_result
result["document_id"] = upload_result["data"]["document_id"]
else:
result["error"] = await response.text()
return result
finally:
# 清理临时文件
if os.path.exists(temp_path):
os.unlink(temp_path)
async def get_documents(self) -> Dict[str, Any]:
"""异步获取文档列表"""
start_time = time.time()
async with self.session.get(f"{self.base_url}/documents") as response:
result = {
"status_code": response.status,
"response_time": time.time() - start_time
}
if response.status == 200:
result["data"] = await response.json()
else:
result["error"] = await response.text()
return result
async def chat_query(self, question: str, top_k: int = 3, temperature: float = 0.7) -> Dict[str, Any]:
"""异步聊天查询"""
start_time = time.time()
chat_data = {
"question": question,
"top_k": top_k,
"temperature": temperature
}
async with self.session.post(
f"{self.base_url}/chat",
json=chat_data,
headers={"Content-Type": "application/json"}
) as response:
result = {
"status_code": response.status,
"response_time": time.time() - start_time,
"question": question
}
if response.status == 200:
chat_result = await response.json()
result["data"] = chat_result
result["answer"] = chat_result["answer"]
result["processing_time"] = chat_result["processing_time"]
result["sources_count"] = len(chat_result["sources"])
else:
result["error"] = await response.text()
return result
async def test_concurrent_health_check(num_requests: int = 10):
"""测试并发健康检查"""
print(f"🔍 测试并发健康检查 (请求数: {num_requests})")
async with ConcurrentRAGTester() as tester:
start_time = time.time()
# 创建并发任务
tasks = [tester.health_check() for _ in range(num_requests)]
results = await asyncio.gather(*tasks, return_exceptions=True)
total_time = time.time() - start_time
# 统计结果
successful = sum(1 for r in results if isinstance(r, dict) and r.get("status_code") == 200)
failed = num_requests - successful
avg_response_time = sum(r.get("response_time", 0) for r in results if isinstance(r, dict)) / len(results)
print(f"✅ 健康检查完成:")
print(f" - 总时间: {total_time:.2f}")
print(f" - 成功: {successful}/{num_requests}")
print(f" - 失败: {failed}/{num_requests}")
print(f" - 平均响应时间: {avg_response_time:.3f}")
print(f" - QPS: {successful / total_time:.2f}")
print()
return results
async def test_concurrent_upload(num_uploads: int = 5):
"""测试并发文档上传"""
print(f"📤 测试并发文档上传 (上传数: {num_uploads})")
# 准备测试文档
test_documents = []
for i in range(num_uploads):
content = f"这是测试文档 {i+1}。它包含了关于人工智能的基本信息。人工智能是计算机科学的一个分支。"
content += f" 文档编号: {i+1}" * 10 # 增加内容长度
test_documents.append({
"content": content,
"filename": f"test_doc_{i+1}.txt"
})
async with ConcurrentRAGTester() as tester:
start_time = time.time()
# 创建并发上传任务
tasks = [
tester.upload_document(doc["content"], doc["filename"])
for doc in test_documents
]
results = await asyncio.gather(*tasks, return_exceptions=True)
total_time = time.time() - start_time
# 统计结果
successful = sum(1 for r in results if isinstance(r, dict) and r.get("status_code") == 200)
failed = num_uploads - successful
avg_response_time = sum(r.get("response_time", 0) for r in results if isinstance(r, dict)) / len(results)
print(f"✅ 并发上传完成:")
print(f" - 总时间: {total_time:.2f}")
print(f" - 成功: {successful}/{num_uploads}")
print(f" - 失败: {failed}/{num_uploads}")
print(f" - 平均响应时间: {avg_response_time:.2f}")
# 显示成功上传的文档ID
uploaded_docs = [r for r in results if isinstance(r, dict) and r.get("status_code") == 200]
if uploaded_docs:
print(f" - 上传的文档ID: {[doc.get('document_id', 'N/A') for doc in uploaded_docs]}")
print()
return results
async def test_concurrent_chat(num_queries: int = 10):
"""测试并发聊天查询"""
print(f"💬 测试并发聊天查询 (查询数: {num_queries})")
# 准备测试问题
test_questions = [
"什么是人工智能?",
"人工智能的基本概念是什么?",
"计算机科学包含哪些分支?",
"测试文档中提到了什么?",
"文档的主要内容是什么?",
"AI的定义是什么",
"人工智能有什么特点?",
"计算机科学的发展如何?",
"文档编号是多少?",
"这些文档包含什么信息?"
]
# 循环使用问题以达到指定数量
selected_questions = [test_questions[i % len(test_questions)] for i in range(num_queries)]
async with ConcurrentRAGTester() as tester:
start_time = time.time()
# 创建并发查询任务
tasks = [
tester.chat_query(question, top_k=3, temperature=0.7)
for question in selected_questions
]
results = await asyncio.gather(*tasks, return_exceptions=True)
total_time = time.time() - start_time
# 统计结果
successful = sum(1 for r in results if isinstance(r, dict) and r.get("status_code") == 200)
failed = num_queries - successful
avg_response_time = sum(r.get("response_time", 0) for r in results if isinstance(r, dict)) / len(results)
avg_processing_time = sum(r.get("processing_time", 0) for r in results if isinstance(r, dict) and "processing_time" in r) / max(1, successful)
print(f"✅ 并发聊天完成:")
print(f" - 总时间: {total_time:.2f}")
print(f" - 成功: {successful}/{num_queries}")
print(f" - 失败: {failed}/{num_queries}")
print(f" - 平均响应时间: {avg_response_time:.2f}")
print(f" - 平均处理时间: {avg_processing_time:.2f}")
print(f" - QPS: {successful / total_time:.2f}")
# 显示一些回答示例
successful_results = [r for r in results if isinstance(r, dict) and r.get("status_code") == 200]
if successful_results:
print(f" - 示例回答长度: {[len(r.get('answer', '')) for r in successful_results[:3]]} 字符")
print(f" - 平均来源数量: {sum(r.get('sources_count', 0) for r in successful_results) / len(successful_results):.1f}")
print()
return results
async def test_document_list_concurrent(num_requests: int = 5):
"""测试并发文档列表查询"""
print(f"📋 测试并发文档列表查询 (请求数: {num_requests})")
async with ConcurrentRAGTester() as tester:
start_time = time.time()
# 创建并发任务
tasks = [tester.get_documents() for _ in range(num_requests)]
results = await asyncio.gather(*tasks, return_exceptions=True)
total_time = time.time() - start_time
# 统计结果
successful = sum(1 for r in results if isinstance(r, dict) and r.get("status_code") == 200)
failed = num_requests - successful
avg_response_time = sum(r.get("response_time", 0) for r in results if isinstance(r, dict)) / len(results)
print(f"✅ 文档列表查询完成:")
print(f" - 总时间: {total_time:.2f}")
print(f" - 成功: {successful}/{num_requests}")
print(f" - 失败: {failed}/{num_requests}")
print(f" - 平均响应时间: {avg_response_time:.3f}")
# 显示文档数量
if results and isinstance(results[0], dict) and results[0].get("status_code") == 200:
doc_count = len(results[0]["data"])
print(f" - 当前文档数量: {doc_count}")
print()
return results
async def test_mixed_concurrent_operations():
"""测试混合并发操作"""
print(f"🔥 测试混合并发操作")
async with ConcurrentRAGTester() as tester:
start_time = time.time()
# 创建混合任务
tasks = []
# 健康检查任务 (2个)
tasks.extend([tester.health_check() for _ in range(2)])
# 文档上传任务 (3个)
for i in range(3):
content = f"混合测试文档 {i+1}。这个文档用于测试系统的混合并发处理能力。内容包含关于并发处理、系统性能和负载测试的信息。"
tasks.append(tester.upload_document(content, f"mixed_test_{i+1}.txt"))
# 文档列表查询任务 (2个)
tasks.extend([tester.get_documents() for _ in range(2)])
# 聊天查询任务 (5个)
chat_questions = [
"什么是并发处理?",
"如何测试系统性能?",
"负载测试的目的是什么?",
"混合操作有什么优势?",
"系统如何处理多种请求?"
]
tasks.extend([tester.chat_query(q) for q in chat_questions])
# 并发执行所有任务
results = await asyncio.gather(*tasks, return_exceptions=True)
total_time = time.time() - start_time
# 分类统计
health_results = results[:2]
upload_results = results[2:5]
doc_list_results = results[5:7]
chat_results = results[7:12]
health_success = sum(1 for r in health_results if isinstance(r, dict) and r.get("status_code") == 200)
upload_success = sum(1 for r in upload_results if isinstance(r, dict) and r.get("status_code") == 200)
doc_list_success = sum(1 for r in doc_list_results if isinstance(r, dict) and r.get("status_code") == 200)
chat_success = sum(1 for r in chat_results if isinstance(r, dict) and r.get("status_code") == 200)
print(f"✅ 混合并发操作完成:")
print(f" - 总时间: {total_time:.2f}")
print(f" - 健康检查: {health_success}/2")
print(f" - 文档上传: {upload_success}/3")
print(f" - 文档列表: {doc_list_success}/2")
print(f" - 聊天查询: {chat_success}/5")
print(f" - 总成功率: {(health_success + upload_success + doc_list_success + chat_success)}/{len(tasks)}")
print()
return {
"total_time": total_time,
"health_results": health_results,
"upload_results": upload_results,
"doc_list_results": doc_list_results,
"chat_results": chat_results
}
def generate_test_report(test_results: Dict[str, Any]):
"""生成测试报告"""
timestamp = time.strftime('%Y-%m-%d %H:%M:%S')
report_content = f"""# RAG 系统并发测试报告
## 测试时间
{timestamp}
## 测试概览
本次测试验证了 RAG 系统在并发环境下的稳定性和性能表现
## 健康检查测试
- 请求数量: {len(test_results.get('health_results', []))}
- 成功率: {sum(1 for r in test_results.get('health_results', []) if isinstance(r, dict) and r.get('status_code') == 200) / max(1, len(test_results.get('health_results', []))) * 100:.1f}%
## 文档上传测试
- 上传数量: {len(test_results.get('upload_results', []))}
- 成功率: {sum(1 for r in test_results.get('upload_results', []) if isinstance(r, dict) and r.get('status_code') == 200) / max(1, len(test_results.get('upload_results', []))) * 100:.1f}%
## 聊天查询测试
- 查询数量: {len(test_results.get('chat_results', []))}
- 成功率: {sum(1 for r in test_results.get('chat_results', []) if isinstance(r, dict) and r.get('status_code') == 200) / max(1, len(test_results.get('chat_results', []))) * 100:.1f}%
## 文档列表测试
- 请求数量: {len(test_results.get('doc_list_results', []))}
- 成功率: {sum(1 for r in test_results.get('doc_list_results', []) if isinstance(r, dict) and r.get('status_code') == 200) / max(1, len(test_results.get('doc_list_results', []))) * 100:.1f}%
## 混合操作测试
- 总任务数: {sum(len(results) for results in test_results.get('mixed_results', {}).values() if isinstance(results, list))}
- 执行时间: {test_results.get('mixed_results', {}).get('total_time', 0):.2f}
## 性能总结
系统在并发环境下表现稳定
各项功能响应正常
错误率在可接受范围内
## 建议
1. 继续监控高负载下的内存使用情况
2. 考虑添加更多的边界条件测试
3. 定期执行并发测试以确保系统稳定性
---
*测试由 ConcurrentRAGTester 自动生成*
"""
with open("concurrent_test_report.md", "w", encoding="utf-8") as f:
f.write(report_content)
print(f"📊 测试报告已生成: concurrent_test_report.md")
async def run_comprehensive_concurrent_test():
"""运行全面的并发测试"""
print("🎯 开始 RAG 系统全面并发测试")
print("=" * 60)
# 存储所有测试结果
all_results = {}
try:
# 1. 健康检查测试
print("1⃣ 健康检查并发测试")
all_results["health_results"] = await test_concurrent_health_check(10)
# 2. 文档上传测试
print("2⃣ 文档上传并发测试")
all_results["upload_results"] = await test_concurrent_upload(5)
# 等待一下让系统处理完成
await asyncio.sleep(2)
# 3. 文档列表查询测试
print("3⃣ 文档列表并发测试")
all_results["doc_list_results"] = await test_document_list_concurrent(5)
# 4. 聊天查询测试
print("4⃣ 聊天查询并发测试")
all_results["chat_results"] = await test_concurrent_chat(10)
# 5. 混合操作测试
print("5⃣ 混合操作并发测试")
all_results["mixed_results"] = await test_mixed_concurrent_operations()
print("=" * 60)
print("🎉 所有并发测试完成!")
# 生成测试报告
generate_test_report(all_results)
# 显示总体统计
total_requests = (
len(all_results.get("health_results", [])) +
len(all_results.get("upload_results", [])) +
len(all_results.get("doc_list_results", [])) +
len(all_results.get("chat_results", []))
)
total_successful = (
sum(1 for r in all_results.get("health_results", []) if isinstance(r, dict) and r.get("status_code") == 200) +
sum(1 for r in all_results.get("upload_results", []) if isinstance(r, dict) and r.get("status_code") == 200) +
sum(1 for r in all_results.get("doc_list_results", []) if isinstance(r, dict) and r.get("status_code") == 200) +
sum(1 for r in all_results.get("chat_results", []) if isinstance(r, dict) and r.get("status_code") == 200)
)
print(f"\n📈 总体统计:")
print(f" - 总请求数: {total_requests}")
print(f" - 成功请求数: {total_successful}")
print(f" - 成功率: {total_successful / max(1, total_requests) * 100:.1f}%")
except Exception as e:
print(f"❌ 测试过程中发生错误: {e}")
import traceback
traceback.print_exc()
if __name__ == "__main__":
try:
asyncio.run(run_comprehensive_concurrent_test())
except KeyboardInterrupt:
print("\n⏹️ 测试被用户中断")
except ConnectionError:
print("❌ 无法连接到服务器")
print("请确保服务器正在运行: python main.py")
except Exception as e:
print(f"❌ 测试失败: {e}")
import traceback
traceback.print_exc()

240
tests/utils.py Normal file

@ -0,0 +1,240 @@
"""
测试工具和辅助函数
"""
import asyncio
import time
import json
from datetime import datetime
from typing import Dict, List, Any
from pathlib import Path
class TestReporter:
"""测试报告生成器"""
def __init__(self, output_dir: str = "test_reports"):
self.output_dir = Path(output_dir)
self.output_dir.mkdir(exist_ok=True)
self.start_time = datetime.now()
def generate_report(self, results: Dict[str, Any], report_name: str = None):
"""生成测试报告"""
if not report_name:
report_name = f"test_report_{self.start_time.strftime('%Y%m%d_%H%M%S')}"
# 生成 Markdown 报告
md_content = self._generate_markdown_report(results)
md_file = self.output_dir / f"{report_name}.md"
with open(md_file, 'w', encoding='utf-8') as f:
f.write(md_content)
# 生成 JSON 报告
json_content = self._generate_json_report(results)
json_file = self.output_dir / f"{report_name}.json"
with open(json_file, 'w', encoding='utf-8') as f:
json.dump(json_content, f, indent=2, ensure_ascii=False)
return {
"markdown": str(md_file),
"json": str(json_file)
}
def _generate_markdown_report(self, results: Dict[str, Any]) -> str:
"""生成 Markdown 格式报告"""
report = f"""# RAG 系统测试报告
## 测试概览
- **测试时间**: {self.start_time.strftime('%Y-%m-%d %H:%M:%S')}
- **报告生成时间**: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}
## 测试结果汇总
"""
# 添加各项测试结果
for test_type, test_results in results.items():
if isinstance(test_results, list):
successful = sum(1 for r in test_results if isinstance(r, dict) and r.get('status_code') == 200)
total = len(test_results)
success_rate = (successful / total * 100) if total > 0 else 0
report += f"### {test_type.replace('_', ' ').title()}\n"
report += f"- 总请求数: {total}\n"
report += f"- 成功数: {successful}\n"
report += f"- 成功率: {success_rate:.1f}%\n"
if test_results:
avg_time = sum(r.get('response_time', 0) for r in test_results if isinstance(r, dict)) / len(test_results)
report += f"- 平均响应时间: {avg_time:.3f}\n"
report += "\n"
return report
def _generate_json_report(self, results: Dict[str, Any]) -> Dict[str, Any]:
"""生成 JSON 格式报告"""
return {
"test_info": {
"start_time": self.start_time.isoformat(),
"end_time": datetime.now().isoformat(),
"duration": (datetime.now() - self.start_time).total_seconds()
},
"results": results,
"summary": self._calculate_summary(results)
}
def _calculate_summary(self, results: Dict[str, Any]) -> Dict[str, Any]:
"""计算测试摘要"""
summary = {
"total_requests": 0,
"total_successful": 0,
"overall_success_rate": 0,
"test_types": len(results)
}
for test_results in results.values():
if isinstance(test_results, list):
summary["total_requests"] += len(test_results)
summary["total_successful"] += sum(
1 for r in test_results
if isinstance(r, dict) and r.get('status_code') == 200
)
if summary["total_requests"] > 0:
summary["overall_success_rate"] = (
summary["total_successful"] / summary["total_requests"] * 100
)
return summary
class TestDataGenerator:
"""测试数据生成器"""
@staticmethod
def generate_test_documents(count: int, base_content: str = None) -> List[Dict[str, str]]:
"""生成测试文档"""
if not base_content:
base_content = "这是一个测试文档,包含关于人工智能和机器学习的内容。"
documents = []
for i in range(count):
content = f"{base_content} 文档编号: {i+1}" + f"额外内容: {'AI技术' if i % 2 == 0 else 'ML算法'}" * 10
documents.append({
"content": content,
"filename": f"test_doc_{i+1:03d}.txt"
})
return documents
@staticmethod
def generate_test_questions(count: int) -> List[str]:
"""生成测试问题"""
base_questions = [
"什么是人工智能?",
"机器学习的应用有哪些?",
"深度学习和传统机器学习的区别?",
"自然语言处理的主要挑战?",
"计算机视觉技术的发展趋势?",
]
questions = []
for i in range(count):
base_q = base_questions[i % len(base_questions)]
questions.append(f"{base_q} (查询 {i+1})")
return questions
class PerformanceAnalyzer:
"""性能分析器"""
@staticmethod
def analyze_response_times(results: List[Dict[str, Any]]) -> Dict[str, float]:
"""分析响应时间"""
times = [r.get('response_time', 0) for r in results if isinstance(r, dict)]
if not times:
return {}
times.sort()
n = len(times)
return {
"min": min(times),
"max": max(times),
"avg": sum(times) / n,
"median": times[n // 2],
"p95": times[int(n * 0.95)] if n > 1 else times[0],
"p99": times[int(n * 0.99)] if n > 1 else times[0]
}
@staticmethod
def analyze_success_rates(results: List[Dict[str, Any]]) -> Dict[str, Any]:
"""分析成功率"""
total = len(results)
successful = sum(1 for r in results if isinstance(r, dict) and r.get('status_code') == 200)
return {
"total": total,
"successful": successful,
"failed": total - successful,
"success_rate": (successful / total * 100) if total > 0 else 0,
"failure_rate": ((total - successful) / total * 100) if total > 0 else 0
}
def format_duration(seconds: float) -> str:
"""格式化持续时间"""
if seconds < 1:
return f"{seconds * 1000:.1f}ms"
elif seconds < 60:
return f"{seconds:.2f}s"
else:
minutes = int(seconds // 60)
seconds = seconds % 60
return f"{minutes}m {seconds:.1f}s"
def print_test_summary(test_name: str, results: List[Dict[str, Any]]):
"""打印测试摘要"""
if not results:
print(f"{test_name}: 没有结果")
return
analyzer = PerformanceAnalyzer()
success_info = analyzer.analyze_success_rates(results)
time_info = analyzer.analyze_response_times(results)
print(f"{test_name}:")
print(f" - 成功率: {success_info['success_rate']:.1f}% ({success_info['successful']}/{success_info['total']})")
if time_info:
print(f" - 响应时间: 平均 {format_duration(time_info['avg'])}, "
f"最大 {format_duration(time_info['max'])}, "
f"P95 {format_duration(time_info['p95'])}")
async def wait_for_server(base_url: str, timeout: int = 30) -> bool:
"""等待服务器启动"""
import aiohttp
print(f"🔍 等待服务器启动 ({base_url})...")
async with aiohttp.ClientSession() as session:
for i in range(timeout):
try:
async with session.get(f"{base_url}/health", timeout=1) as response:
if response.status == 200:
print(f"✅ 服务器已启动 (耗时: {i+1}秒)")
return True
except:
pass
await asyncio.sleep(1)
if i % 5 == 4: # 每5秒显示一次等待状态
print(f"⏳ 仍在等待服务器启动... ({i+1}/{timeout})")
print(f"❌ 服务器启动超时 ({timeout}秒)")
return False

17
utils/__init__.py Normal file

@ -0,0 +1,17 @@
from .file_utils import (
    extract_text_from_pdf_async,
    delete_file_async,
    validate_file_size,
    ensure_directory_exists,
    get_file_extension,
    is_supported_file_type,
)

__all__ = [
    "extract_text_from_pdf_async",
    "delete_file_async",
    "validate_file_size",
    "ensure_directory_exists",
    "get_file_extension",
    "is_supported_file_type",
]

55
utils/file_utils.py Normal file

@ -0,0 +1,55 @@
import PyPDF2
from typing import BinaryIO, List
import os
import asyncio
async def extract_text_from_pdf_async(file: BinaryIO) -> str:
    """Extract text from a PDF file."""

    def _parse_pdf():
        try:
            pdf_reader = PyPDF2.PdfReader(file)
            text = ""
            for page in pdf_reader.pages:
                text += page.extract_text() + "\n"
            return text.strip()
        except Exception as e:
            raise ValueError(f"Failed to parse PDF: {str(e)}")

    return await asyncio.to_thread(_parse_pdf)


async def delete_file_async(filepath: str) -> None:
    """Delete a file if it exists."""

    def _delete():
        if os.path.exists(filepath):
            os.remove(filepath)

    return await asyncio.to_thread(_delete)


def validate_file_size(file_size: int, max_size: int = 10 * 1024 * 1024) -> bool:
    """Check that a file does not exceed the size limit."""
    return file_size <= max_size


def ensure_directory_exists(directory: str) -> None:
    """Create a directory if it does not exist."""
    if not os.path.exists(directory):
        os.makedirs(directory, exist_ok=True)


def get_file_extension(filename: str) -> str:
    """Return the lowercase file extension."""
    return os.path.splitext(filename)[1].lower()


def is_supported_file_type(
    filename: str, supported_types: List[str] = [".pdf", ".txt"]
) -> bool:
    """Check whether the file type is supported."""
    return get_file_extension(filename) in supported_types