project reborn

This commit is contained in:
王性驊 2026-04-18 22:08:01 +08:00
parent bf18e7958e
commit f084b9e3fb
38 changed files with 5489 additions and 0 deletions

DIAGNOSTIC_RESULTS.md Normal file

@@ -0,0 +1,82 @@
# cursor-adapter SSE Diagnostic Results
> **Status: RESOLVED 2026-04-18.** This file is kept for history. The bugs
> listed below have been fixed. Current behavior is captured by regression
> tests in `internal/server/messages_test.go` and `internal/converter/convert_test.go`.
---
## Originally reported (2026-04-15)
When used as an OpenAI-compatible endpoint for SDKs like
`@ai-sdk/openai-compatible` (OpenCode), cursor-adapter had five issues:
1. **Non-streaming is completely broken** — server hung for 30s and never
wrote a response body.
2. **SSE content was double-JSON-encoded** — each `delta.content` field
held the *entire* Cursor CLI JSON line (including `type:"system"`,
`type:"user"`, `type:"result"`) serialized as a string, instead of plain
assistant text.
3. **Missing `role` in first delta.**
4. **Missing `finish_reason` in final chunk.**
5. **Usage not at the top level** — embedded inside a stringified JSON
payload instead of `chunk.usage`.
---
## Root cause
Two separate bugs plus one latent one, all landed together:
- **Non-stream hang / `exit status 1`:** the chat-only isolation ported from
`cursor-api-proxy` was overriding `HOME` → temp dir. On macOS with
keychain login, the `agent` CLI resolves its session token via
`~/.cursor/` + the real keychain, so a fake `HOME` made `agent` exit
immediately with "Authentication required. Please run 'agent login'".
The adapter surfaced this as either a hang (when timeouts swallowed the
exit) or as `exit status 1` once the error bubbled up.
- **Content wrapping / leaked system-user chunks:** older pre-parser code
forwarded raw Cursor JSON lines as `delta.content`. The parser had
already been rewritten by the time this diagnostic was taken, but the
report caught an earlier build.
- **Duplicate final delta (discovered during this pass):** the stream
parser's accumulator was *reassigned* (`p.accumulated = content`) even
when the new fragment did not start with the accumulated prefix. With
Cursor CLI's incremental output mode (one fragment per message), that
meant the "you said the full text" final message looked different from
accumulated and was emitted as a second copy of the whole response.
---
## Fix summary
- `internal/workspace/workspace.go` — only override `CURSOR_CONFIG_DIR` by
default. `HOME`/`XDG_CONFIG_HOME`/`APPDATA` are only isolated when
`CURSOR_API_KEY` is set (which bypasses keychain auth anyway).
- `internal/converter/convert.go` — stream parser now handles both
cumulative and incremental Cursor output modes. In the non-prefix
branch it appends to accumulated instead of replacing it, so the final
duplicate is correctly detected via `content == accumulated` and
skipped.
- `internal/server/handlers.go` + `anthropic_handlers.go` — already emit
`role:"assistant"` in the first delta, `finish_reason:"stop"` in the
final chunk, and `usage` at the top level. Regression tests added to
`messages_test.go` lock this in.
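The append-vs-replace fix described above can be sketched as follows. Names such as `streamParser.next` are illustrative, not the adapter's actual API; the point is that the non-prefix branch appends, so the final cumulative line compares equal to the accumulator and is skipped:

```go
package main

import (
	"fmt"
	"strings"
)

// streamParser accumulates assistant text across fragments. Cursor CLI may
// emit either cumulative fragments (each restating the full text so far)
// or incremental fragments (one new piece per message).
type streamParser struct {
	accumulated string
}

// next returns the new text to emit for a fragment, or "" when the fragment
// duplicates everything already emitted (the final cumulative line).
func (p *streamParser) next(content string) string {
	switch {
	case content == p.accumulated:
		// Final "full text" message: already emitted, skip.
		return ""
	case strings.HasPrefix(content, p.accumulated):
		// Cumulative mode: emit only the new suffix.
		delta := content[len(p.accumulated):]
		p.accumulated = content
		return delta
	default:
		// Incremental mode: append instead of replacing, so a later
		// cumulative final line still matches accumulated exactly.
		p.accumulated += content
		return content
	}
}

func main() {
	p := &streamParser{}
	fmt.Println(p.next("Hel"))         // incremental fragment
	fmt.Println(p.next("Hello"))       // cumulative restatement: emits "lo"
	fmt.Println(p.next(" world"))      // incremental fragment
	fmt.Println(p.next("Hello world")) // final duplicate: emits nothing
}
```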
## Verified manually
```
$ curl -sN http://localhost:8765/v1/chat/completions \
-H 'Content-Type: application/json' \
-d '{"model":"auto","stream":true,"messages":[{"role":"user","content":"count 1 to 5"}]}'
data: {"id":"chatcmpl-…","choices":[{"index":0,"delta":{"role":"assistant"},"finish_reason":null}]}
data: {"id":"chatcmpl-…","choices":[{"index":0,"delta":{"content":"\n1、2、3、4、5。"},"finish_reason":null}]}
data: {"id":"chatcmpl-…","choices":[{"index":0,"delta":{},"finish_reason":"stop"}],"usage":{"prompt_tokens":6,"completion_tokens":5,"total_tokens":11}}
data: [DONE]
```
Non-streaming returns `chat.completion` JSON with `finish_reason:"stop"` and
`usage` populated. Anthropic `/v1/messages` emits `message_start` →
`content_block_delta`* → `message_stop` without duplicating the final
cumulative fragment.

README.md Normal file

@@ -0,0 +1,80 @@
# cursor-adapter
A local proxy that serves an **OpenAI-compatible Chat Completions API** and talks to Cursor through the **Cursor CLI** (`agent` by default) or, optionally, the **ACP** transport. An **Anthropic Messages**-compatible endpoint is also provided.
## Requirements
- Go **1.26.1** (see `go.mod`)
- **Cursor CLI** installed and runnable from `PATH` (name or path configurable via `cursor_cli_path`)
## Build & run
```bash
go build -o cursor-adapter .
./cursor-adapter
```
If neither `-c` nor `--config` is given, `~/.cursor-adapter/config.yaml` is read; built-in defaults apply when the file does not exist.
For first-time setup, copy the example and edit it:
```bash
mkdir -p ~/.cursor-adapter
cp config.example.yaml ~/.cursor-adapter/config.yaml
```
## Command-line flags
| Flag | Description |
|------|------|
| `-c`, `--config` | Config file path |
| `-p`, `--port` | Listen port (overrides the config file) |
| `--debug` | Debug-level logging |
| `--use-acp` | Use the Cursor ACP transport instead of the default CLI stream-json mode |
| `--chat-only-workspace` | Defaults to `true`: run in a temp workspace with `HOME`, `CURSOR_CONFIG_DIR`, etc. overridden so the child process cannot read the launch directory or `~/.cursor` rules; set to `false` to make the proxy's working directory visible to the Cursor agent |
On startup the Cursor CLI is checked for availability; the program exits with an error if the check fails.
## Config file (YAML)
Fields match `config.example.yaml`, for example:
- `port`: HTTP port (default `8976`)
- `cursor_cli_path`: CLI executable name or path
- `default_model`, `available_models`, `timeout` (seconds), `max_concurrent`
- `use_acp`, `chat_only_workspace`, `log_level`
## HTTP endpoints
The server binds to **127.0.0.1** (localhost only). Routes enable CORS (`Access-Control-Allow-Origin: *`) and allow headers such as `Authorization`, `X-Cursor-Session-ID`, and `X-Cursor-Workspace`.
| Method | Path | Description |
|------|------|------|
| GET | `/health` | Health check |
| GET | `/v1/models` | Model list |
| POST | `/v1/chat/completions` | OpenAI-compatible chat completions (including streaming SSE) |
| POST | `/v1/messages` | Anthropic Messages compatible |
## Development & testing
```bash
go test ./...
```
Helper script: `scripts/test_cursor_cli.sh` (verifies the local Cursor CLI is callable).
## Project layout (overview)
- `main.go`: CLI entrypoint, config and bridge wiring, server startup
- `internal/config`: config loading and validation
- `internal/server`: HTTP routes, CORS, OpenAI/Anthropic handlers, SSE, sessions
- `internal/bridge`: bridging and concurrency control for the Cursor CLI/ACP
- `internal/converter`: request/response and model mapping
- `internal/types`: shared types and API structs
- `internal/workspace`: temp workspace and isolation behavior
- `internal/sanitize`: input sanitization
- `docs/`: plan, architecture, PRD, and Cursor CLI format docs
## Security & privacy
Keep `chat_only_workspace: true` unless you intentionally want the Cursor agent to access the proxy process's working directory and local Cursor settings. See the comments in `config.example.yaml` and the `internal/config` implementation.

bin/cursor-adapter Executable file

Binary file not shown.

config.example.yaml Normal file

@@ -0,0 +1,20 @@
port: 8976
cursor_cli_path: agent
default_model: claude-sonnet-4-20250514
timeout: 300
max_concurrent: 5
use_acp: false
# Isolate Cursor CLI / ACP child in an empty temp workspace with
# HOME / CURSOR_CONFIG_DIR / XDG_CONFIG_HOME overridden so the agent can
# neither read the adapter's cwd nor load global rules from ~/.cursor.
# Recommended: true. Set to false only if you intentionally want the
# Cursor agent to see the adapter's working directory.
chat_only_workspace: true
log_level: INFO
available_models:
- claude-sonnet-4-20250514
- claude-opus-4-20250514
- gpt-5.2
- gemini-3.1-pro

config.yaml Normal file

@@ -0,0 +1,20 @@
port: 8765
cursor_cli_path: agent
default_model: claude-sonnet-4-20250514
timeout: 300
max_concurrent: 5
use_acp: false
# Isolate Cursor CLI / ACP child in an empty temp workspace with
# HOME / CURSOR_CONFIG_DIR / XDG_CONFIG_HOME overridden so the agent can
# neither read the adapter's cwd nor load global rules from ~/.cursor.
# Recommended: true. Set to false only if you intentionally want the
# Cursor agent to see the adapter's working directory.
chat_only_workspace: false
log_level: INFO
available_models:
- claude-sonnet-4-20250514
- claude-opus-4-20250514
- gpt-5.2
- gemini-3.1-pro

docs/.DS_Store vendored Normal file

Binary file not shown.

docs/architecture/2026-04-14-cursor-adapter.md Normal file

@@ -0,0 +1,343 @@
# Architecture: Cursor Adapter
## Overview
Cursor Adapter is a local HTTP proxy server that converts OpenAI-compatible API requests into Cursor CLI headless-mode commands, and converts the Cursor CLI's streaming JSON output back into OpenAI SSE for the calling CLI tool.
Core architecture: **single binary, stateless, spawn-a-subprocess model**.
### Requirement Traceability
| PRD Requirement | Architectural Component |
|----------------|------------------------|
| FR1: OpenAI-compatible API | HTTP Server (net/http + chi router) |
| FR2: Cursor CLI Integration | CLI Bridge (os/exec subprocess) |
| FR3: Streaming Response Conversion | Stream Converter (goroutine pipeline) |
| FR4: Model Listing | Model Registry |
| FR5: Configuration | Config Module (YAML) |
| FR6: Error Handling | Error Handler (middleware) |
| NFR1: Performance < 500ms overhead | goroutine pipeline, zero-copy streaming |
| NFR2: Concurrent requests ≤ 5 | semaphore (buffered channel) |
## System Architecture
### Technology Stack
| Layer | Technology | Justification |
|-------|-----------|---------------|
| Language | Go 1.22+ | Single binary, solid subprocess management, goroutines fit streaming naturally |
| HTTP Router | go-chi/chi v5 | Lightweight, net/http compatible, good middleware support |
| Config | gopkg.in/yaml.v3 | Standard YAML parsing |
| CLI | spf13/cobra | The standard Go CLI framework |
| Testing | testify + stdlib | Table-driven tests + assertion helpers |
### Component Architecture
```
┌─────────────────────────────────────────────────┐
│ cursor-adapter (single binary) │
│ │
│ cmd/ │
│ └── cursor-adapter/main.go (cobra entrypoint) │
│ │
│ internal/ │
│ ├── server/ HTTP Server (chi router) │
│ │ ├── handler.go route handlers │
│ │ ├── middleware.go error + logging │
│ │ └── sse.go SSE writer helpers │
│ ├── bridge/ CLI Bridge │
│ │ ├── bridge.go spawn subprocess │
│ │ └── scanner.go stdout line reader │
│ ├── converter/ Stream Converter │
│ │ └── convert.go cursor-json → OpenAI SSE │
│ └── config/ Config Module │
│ └── config.go YAML loading + defaults │
│ │
└─────────────────────────────────────────────────┘
│ os/exec.CommandContext
┌──────────────────┐
│ Cursor CLI │
│ agent -p ... │
│ --model ... │
│ --output-format│
│ stream-json │
└──────────────────┘
```
## Service Boundaries
A single binary with four internal packages (plus the `cmd` entrypoint):
| Package | Responsibility | Exported |
|---------|---------------|----------|
| cmd/cursor-adapter | CLI entrypoint, wiring | main() |
| internal/server | HTTP routes + middleware | NewServer(), Server.Run() |
| internal/bridge | spawn/manage Cursor CLI subprocess | Bridge interface + CLIBridge |
| internal/converter | stream-json → OpenAI SSE conversion | Convert() functions |
| internal/config | YAML config loading/validation | Config struct, Load() |
### Communication Matrix
| From | To | Pattern | Purpose |
|------|----|---------|---------|
| server/handler | bridge | interface call | spawn the subprocess |
| bridge | converter | channel (chan string) | pass stdout line by line |
| converter | server/handler | channel (chan SSEChunk) | return converted chunks |
| server/handler | client | HTTP SSE | stream back to the CLI tool |
## Data Flow
### Chat Completion (Streaming)
```
1. Client → POST /v1/chat/completions (stream: true)
2. handler → validate request body
3. handler → assemble prompt from messages[]
4. bridge → ctx, cancel := context.WithTimeout(...)
5. bridge → cmd := exec.CommandContext(ctx, "agent", "-p", prompt, "--model", model, "--output-format", "stream-json")
6. bridge → cmd.Stdout pipe → goroutine scanner reads line by line
7. scanner → each line sent to outputChan (chan string)
8. converter → read outputChan, convert to SSEChunk, send to sseChan
9. handler → flush SSE chunk to client
10. bridge → process exits → close channels → handler sends [DONE]
```
## Database Schema
N/A. Stateless design; no database needed.
## API Contract
### POST /v1/chat/completions
Request:
```json
{
"model": "claude-sonnet-4-20250514",
"messages": [
{"role": "user", "content": "hello"}
],
"stream": true
}
```
Response (SSE when stream: true):
```
data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"role":"assistant","content":""},"finish_reason":null}]}
data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}
data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}
data: [DONE]
```
Response (JSON when stream: false):
```json
{
"id": "chatcmpl-xxx",
"object": "chat.completion",
"choices": [{"index": 0, "message": {"role": "assistant", "content": "Hello!"}, "finish_reason": "stop"}],
"usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0}
}
```
### GET /v1/models
Response:
```json
{
"object": "list",
"data": [
{"id": "claude-sonnet-4-20250514", "object": "model", "created": 0, "owned_by": "cursor"}
]
}
```
### GET /health
Response:
```json
{"status": "ok", "cursor_cli": "available", "version": "0.1.0"}
```
### Error Codes
| Status | Code | When |
|--------|------|------|
| 400 | invalid_request | messages empty or malformed |
| 404 | model_not_found | requested model does not exist |
| 500 | internal_error | Cursor CLI subprocess crashed |
| 504 | timeout | Cursor CLI did not respond in time |
## Async / Queue Design
N/A. No queue needed; goroutines and channels are wired directly.
## Consistency Model
N/A. Stateless proxy; each request is independent.
## Error Model
| Category | Examples | Handling |
|----------|---------|----------|
| Client Error | invalid request, unknown model | 4xx; no subprocess spawned |
| CLI Spawn Error | agent not found, not logged in | 500 + stderr message |
| Timeout | model thinking too long | kill subprocess → 504 |
| Crash | unexpected exit | 500 + exit code |
## Security Boundaries
N/A. Local personal tool; binds to 127.0.0.1; no authentication.
## Integration Boundaries
### Cursor CLI
| Property | Value |
|----------|-------|
| Integration Pattern | subprocess (os/exec.CommandContext) |
| Protocol | CLI binary (stdout pipe) |
| Authentication | local `agent login` state |
| Failure Mode | binary not found / not logged in |
| Data Contract | `--output-format stream-json` |
| Timeout | configurable, default 300s |
## Observability
- Structured logging via slog: INFO for requests/completions, ERROR for failures/timeouts
- `/health` endpoint
- At DEBUG level, raw Cursor CLI stdout is printed
## Scaling Strategy
N/A. Personal local tool, single instance. A semaphore caps concurrent subprocesses (default 5).
## Non-Functional Requirements
| NFR | Requirement | Decision | Verification |
|-----|-------------|----------|-------------|
| Performance | overhead < 500ms | goroutine pipeline, streaming pipe | measured in practice |
| Reliability | concurrency ≤ 5 | buffered-channel semaphore | concurrency tests |
| Usability | one-command startup | cobra CLI, sensible defaults | manual testing |
| Distribution | single binary | Go cross-compile | `go build` |
## Mermaid Diagrams
### System Architecture
```mermaid
graph LR
CLI[Hermes/OpenCode/Claude] -->|POST /v1/chat/completions| Adapter[Cursor Adapter]
Adapter -->|exec: agent -p ... --output-format stream-json| Cursor[Cursor CLI]
Cursor -->|streaming JSON stdout| Adapter
Adapter -->|SSE streaming| CLI
```
### Sequence Diagram
```mermaid
sequenceDiagram
participant C as CLI Tool
participant H as HTTP Handler
participant B as CLI Bridge
participant A as Cursor CLI
C->>H: POST /v1/chat/completions
H->>H: validate, extract prompt
H->>B: Execute(ctx, prompt, model)
B->>A: exec.CommandContext("agent", "-p", ...)
loop streaming
A-->>B: stdout line (JSON)
B-->>H: outputChan <- line
H->>H: convert to SSE chunk
H-->>C: data: {...}\n\n
end
A-->>B: process exit
B-->>H: close channels
H-->>C: data: [DONE]
```
### Data Flow Diagram
```mermaid
flowchart TD
A[Client Request] --> B{Validate}
B -->|invalid| C[400]
B -->|valid| D[Extract Prompt]
D --> E[exec.CommandContext]
E --> F{spawn OK?}
F -->|no| G[500]
F -->|yes| H[goroutine: scan stdout]
H --> I[outputChan]
I --> J[converter: JSON→SSE]
J --> K[flush to client]
K --> L{more?}
L -->|yes| H
L -->|no| M[send DONE]
```
## ADR
### ADR-001: Go rather than Python
**Context**: Choice of implementation language; candidates were Go and Python (FastAPI).
**Decision**: Go 1.22+.
**Consequences**:
- + Single binary; users do not need Python/pip
- + `os/exec.CommandContext` subprocess management is more direct than Python `asyncio`
- + goroutines + channels are a natural fit for a streaming pipeline
- + cross-compile: one `go build` covers macOS/Linux/Windows
- - SSE handled manually (but it is not complex)
**Alternatives**:
- Python + FastAPI: strong ecosystem, but needs a runtime and is harder to distribute
- Rust: best performance, but slower to develop
### ADR-002: chi router rather than stdlib mux
**Context**: Go 1.22's `net/http` already supports method-based routing.
**Decision**: use chi v5.
**Consequences**:
- + Good middleware ecosystem (logger, recoverer, timeout)
- + Cleaner route grouping
- + Compatible with net/http handlers
- - One extra dependency
**Alternatives**:
- stdlib net/http: sufficient, but middleware must be hand-written
- gin: too heavy; overkill at this scale
### ADR-003: spawn a subprocess rather than ACP
**Context**: Cursor CLI supports both headless print mode and ACP (JSON-RPC).
**Decision**: headless print mode (`agent -p --output-format stream-json`).
**Consequences**:
- + Simple implementation: spawn + read stdout
- + No JSON-RPC needed
- - No tool use (not required by the PRD)
**Alternatives**:
- ACP (JSON-RPC over stdio): full-featured, but far more complex
## Risks
| Risk | Impact | Likelihood | Mitigation |
|------|--------|-----------|------------|
| Cursor CLI stream-json format changes | High | Medium | abstract the converter; define the format in consts |
| Cursor CLI does not support concurrent instances | Medium | Low | semaphore + queue |
| subprocess zombies | Medium | Low | CommandContext + Wait() |
## Open Questions
1. Exact Cursor CLI stream-json schema (needs real-world testing)
2. Can the Cursor CLI run multiple headless instances at once?

docs/code-design/2026-04-14-cursor-adapter.md Normal file

@@ -0,0 +1,456 @@
# Code Design: Cursor Adapter
## Overview
Turns the architecture document into a Go code-level design, based on `docs/architecture/2026-04-14-cursor-adapter.md`.
Language: Go 1.22+. Follows the design conventions of the `language-go` skill.
## Project Structure
```
cursor-adapter/
├── go.mod
├── go.sum
├── main.go               # cobra CLI entrypoint + wiring
├── config.example.yaml
├── Makefile
├── README.md
├── internal/
│   ├── config/
│   │   └── config.go     # Config struct + Load()
│   ├── bridge/
│   │   ├── bridge.go     # CLIBridge struct + methods
│   │   └── bridge_test.go
│   ├── converter/
│   │   ├── convert.go    # Cursor JSON → OpenAI SSE conversion
│   │   └── convert_test.go
│   └── server/
│       ├── server.go     # chi router + handler wiring
│       ├── handler.go    # route handler functions
│       ├── handler_test.go
│       ├── sse.go        # SSE write helpers
│       └── models.go     # request/response structs (JSON)
└── scripts/
    └── test_cursor_cli.sh  # exploratory test of Cursor CLI output
```
### Package Responsibilities
| Package | Responsibility | Exports |
|---------|---------------|---------|
| main (root) | CLI entrypoint, wiring | main() |
| internal/config | YAML config loading + defaults | Config, Load() |
| internal/bridge | spawn/manage Cursor CLI subprocess | Bridge interface, CLIBridge |
| internal/converter | stream-json → OpenAI SSE conversion | ToOpenAIChunk(), ToOpenAIResponse() |
| internal/server | HTTP routes, handlers, SSE | New(), Server.Run() |
## Layer Architecture
```
main.go (wiring)
  ↓ builds config, bridge, server
internal/server (HTTP layer)
  ↓ handlers call bridge
internal/bridge (CLI layer)
  ↓ spawns subprocess, reads stdout
internal/converter (conversion layer)
  ↓ converts JSON to SSE
Cursor CLI (external)
```
## Interface Definitions
```go
// internal/bridge/bridge.go
// Bridge defines the integration interface to external CLI tools.
type Bridge interface {
	// Execute runs the prompt and streams the Cursor CLI's stdout line by
	// line over a channel. The subprocess is killed on context timeout.
	Execute(ctx context.Context, prompt string, model string) (<-chan string, <-chan error)
	// ListModels returns the list of available models.
	ListModels(ctx context.Context) ([]string, error)
	// CheckHealth verifies that the Cursor CLI is usable.
	CheckHealth(ctx context.Context) error
}
// CLIBridge implements Bridge by spawning the Cursor CLI via os/exec.
type CLIBridge struct {
	cursorPath string        // "agent" or the path from config
	semaphore  chan struct{} // caps concurrency
	timeout    time.Duration
}
func NewCLIBridge(cursorPath string, maxConcurrent int, timeout time.Duration) *CLIBridge
func (b *CLIBridge) Execute(ctx context.Context, prompt string, model string) (<-chan string, <-chan error)
func (b *CLIBridge) ListModels(ctx context.Context) ([]string, error)
func (b *CLIBridge) CheckHealth(ctx context.Context) error
```
```go
// internal/converter/convert.go
// CursorLine represents one line of Cursor CLI stream-json.
type CursorLine struct {
	Type    string `json:"type"`    // "assistant", "result", "error", etc.
	Content string `json:"content"` // text content (when type=assistant)
}
// ToOpenAIChunk converts one line of Cursor JSON into an OpenAI SSE chunk struct.
// The actual JSON schema is pinned down after the P2 exploration confirms it.
func ToOpenAIChunk(line string, chatID string) (*OpenAIChunk, error)
// ToOpenAIResponse combines multiple lines of Cursor output into a complete response.
func ToOpenAIResponse(lines []string, chatID string) (*OpenAIResponse, error)
// SSE formatting
func FormatSSE(data any) string // "data: {json}\n\n"
func FormatDone() string        // "data: [DONE]\n\n"
```
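The two SSE helpers declared above are small enough to sketch in full. This is an assumed implementation, with error handling simplified:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// FormatSSE marshals data and wraps it in the SSE "data:" framing.
func FormatSSE(data any) string {
	b, err := json.Marshal(data)
	if err != nil {
		return ""
	}
	return fmt.Sprintf("data: %s\n\n", b)
}

// FormatDone emits the OpenAI stream terminator.
func FormatDone() string { return "data: [DONE]\n\n" }

func main() {
	fmt.Print(FormatSSE(map[string]string{"content": "Hello"}))
	fmt.Print(FormatDone())
}
```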
```go
// internal/config/config.go
type Config struct {
	Port            int      `yaml:"port"`
	CursorCLIPath   string   `yaml:"cursor_cli_path"`
	DefaultModel    string   `yaml:"default_model"`
	Timeout         int      `yaml:"timeout"` // seconds
	MaxConcurrent   int      `yaml:"max_concurrent"`
	LogLevel        string   `yaml:"log_level"`
	AvailableModels []string `yaml:"available_models"` // optional
}
// Load loads configuration from a YAML file and applies defaults.
// When path is empty, the default path ~/.cursor-adapter/config.yaml is used.
func Load(path string) (*Config, error)
// Defaults returns the default configuration.
func Defaults() Config
```
## Domain Models
```go
// internal/server/models.go
// Request
type ChatMessage struct {
Role string `json:"role"`
Content string `json:"content"`
}
type ChatCompletionRequest struct {
Model string `json:"model"`
Messages []ChatMessage `json:"messages"`
Stream bool `json:"stream"`
Temperature *float64 `json:"temperature,omitempty"`
}
// Response (non-streaming)
type ChatCompletionResponse struct {
ID string `json:"id"`
Object string `json:"object"` // "chat.completion"
Choices []Choice `json:"choices"`
Usage Usage `json:"usage"`
}
type Choice struct {
Index int `json:"index"`
Message ChatMessage `json:"message"`
FinishReason string `json:"finish_reason"`
}
type Usage struct {
PromptTokens int `json:"prompt_tokens"`
CompletionTokens int `json:"completion_tokens"`
TotalTokens int `json:"total_tokens"`
}
// Streaming chunk
type ChatCompletionChunk struct {
ID string `json:"id"`
Object string `json:"object"` // "chat.completion.chunk"
Choices []ChunkChoice `json:"choices"`
}
type ChunkChoice struct {
Index int `json:"index"`
Delta Delta `json:"delta"`
FinishReason string `json:"finish_reason,omitempty"`
}
type Delta struct {
Role *string `json:"role,omitempty"`
Content *string `json:"content,omitempty"`
}
// Models list
type ModelList struct {
Object string `json:"object"` // "list"
Data []ModelInfo `json:"data"`
}
type ModelInfo struct {
ID string `json:"id"`
Object string `json:"object"` // "model"
Created int64 `json:"created"`
OwnedBy string `json:"owned_by"`
}
// Error response
type ErrorResponse struct {
Error ErrorBody `json:"error"`
}
type ErrorBody struct {
Message string `json:"message"`
Type string `json:"type"`
Code string `json:"code,omitempty"`
}
```
## Database Implementation Design
N/A. No database.
## Error Design
```go
// internal/server/handler.go defines the sentinel errors + error-response logic
var (
	ErrInvalidRequest  = errors.New("invalid_request")
	ErrModelNotFound   = errors.New("model_not_found")
	ErrCLITimeout      = errors.New("cli_timeout")
	ErrCLICrash        = errors.New("cli_crash")
	ErrCLINotAvailable = errors.New("cli_not_available")
)
// writeError converts an error into a JSON error response and writes it.
func writeError(w http.ResponseWriter, err error) {
	var status int
	var errType string
	switch {
	case errors.Is(err, ErrInvalidRequest):
		status = http.StatusBadRequest
		errType = "invalid_request"
	case errors.Is(err, ErrModelNotFound):
		status = http.StatusNotFound
		errType = "model_not_found"
	case errors.Is(err, ErrCLITimeout):
		status = http.StatusGatewayTimeout
		errType = "timeout"
	default:
		status = http.StatusInternalServerError
		errType = "internal_error"
	}
	// Write the ErrorResponse JSON
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(status)
	_ = json.NewEncoder(w).Encode(ErrorResponse{Error: ErrorBody{Message: err.Error(), Type: errType}})
}
```
### Error-to-HTTP Mapping
| Sentinel Error | HTTP Status | Error Type |
|---------------|-------------|------------|
| ErrInvalidRequest | 400 | invalid_request |
| ErrModelNotFound | 404 | model_not_found |
| ErrCLITimeout | 504 | timeout |
| ErrCLICrash | 500 | internal_error |
| ErrCLINotAvailable | 500 | internal_error |
## Dependency Injection
Manual wiring in `main.go`, using constructor injection:
```go
// main.go
func run(cmd *cobra.Command, args []string) error {
// 1. Load config
cfg, err := config.Load(configPath)
if err != nil {
return fmt.Errorf("load config: %w", err)
}
// 2. Create bridge
br := bridge.NewCLIBridge(
cfg.CursorCLIPath,
cfg.MaxConcurrent,
time.Duration(cfg.Timeout)*time.Second,
)
// 3. Check CLI availability
if err := br.CheckHealth(context.Background()); err != nil {
return fmt.Errorf("cursor cli not available: %w", err)
}
// 4. Create and run server
srv := server.New(cfg, br)
return srv.Run()
}
```
## Configuration
```yaml
# config.example.yaml
port: 8976
cursor_cli_path: agent
default_model: claude-sonnet-4-20250514
timeout: 300
max_concurrent: 5
log_level: INFO
# optional: manually pin the list of available models
available_models:
- claude-sonnet-4-20250514
- claude-opus-4-20250514
- gpt-5.2
- gemini-3.1-pro
```
Config load order: defaults → YAML → CLI flags.
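That load order can be sketched as repeated overlay of non-zero fields. A trimmed, illustrative example (not the real `Load()`; only two fields shown):

```go
package main

import "fmt"

// Config mirrors the struct above, trimmed to two fields for the sketch.
type Config struct {
	Port     int
	LogLevel string
}

// layer applies non-zero values from src on top of dst, giving the
// defaults → YAML → CLI-flags precedence described above.
func layer(dst *Config, src Config) {
	if src.Port != 0 {
		dst.Port = src.Port
	}
	if src.LogLevel != "" {
		dst.LogLevel = src.LogLevel
	}
}

func main() {
	cfg := Config{Port: 8976, LogLevel: "INFO"} // defaults
	layer(&cfg, Config{Port: 8765})             // from the YAML file
	layer(&cfg, Config{LogLevel: "DEBUG"})      // from CLI flags
	fmt.Printf("%+v\n", cfg)
}
```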
## Testing Architecture
### Unit Tests (table-driven)
```go
// internal/converter/convert_test.go
func TestToOpenAIChunk(t *testing.T) {
tests := []struct {
name string
input string
expected ChatCompletionChunk
}{
{"assistant line", `{"type":"assistant","content":"Hello"}`, ...},
{"result line", `{"type":"result","content":"..."}`, ...},
{"empty line", "", ...},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
// ...
})
}
}
// internal/config/config_test.go
func TestLoad(t *testing.T) { ... }
func TestDefaults(t *testing.T) { ... }
// internal/server/handler_test.go (uses httptest)
func TestHealthEndpoint(t *testing.T) { ... }
func TestChatCompletionInvalid(t *testing.T) { ... }
```
### Mock Strategy
```go
// internal/bridge/mock_test.go
type MockBridge struct {
OutputLines []string // stdout lines to return by default
Err error
}
func (m *MockBridge) Execute(ctx context.Context, prompt, model string) (<-chan string, <-chan error) {
outCh := make(chan string)
errCh := make(chan error, 1)
go func() {
defer close(outCh)
defer close(errCh)
for _, line := range m.OutputLines {
outCh <- line
}
if m.Err != nil {
errCh <- m.Err
}
}()
return outCh, errCh
}
```
### Integration Tests
```go
// internal/bridge/bridge_test.go (requires a real Cursor CLI environment)
func TestExecuteSimple(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping integration test")
	}
	// actually spawns agent
}
```
## Build & Deployment
### Makefile
```makefile
.PHONY: build run test lint fmt
build:
go build -o bin/cursor-adapter .
run: build
./bin/cursor-adapter --port 8976
test:
go test ./... -v -short
test-integration:
go test ./... -v
lint:
golangci-lint run
fmt:
gofmt -w .
goimports -w .
cross:
GOOS=darwin GOARCH=arm64 go build -o bin/cursor-adapter-darwin-arm64 .
GOOS=darwin GOARCH=amd64 go build -o bin/cursor-adapter-darwin-amd64 .
GOOS=linux GOARCH=amd64 go build -o bin/cursor-adapter-linux-amd64 .
```
### go.mod
```
module github.com/daniel/cursor-adapter
go 1.22
require (
github.com/go-chi/chi/v5 v5.x.x
github.com/spf13/cobra v1.x.x
gopkg.in/yaml.v3 v3.x.x
)
```
## Architecture Traceability
| Architecture Element | Code Design Element |
|---------------------|-------------------|
| HTTP Server (chi) | internal/server/server.go + handler.go |
| CLI Bridge | internal/bridge/bridge.go — CLIBridge |
| Stream Converter | internal/converter/convert.go |
| Config Module | internal/config/config.go |
| Error Handler | internal/server/handler.go — writeError() |
| Request/Response | internal/server/models.go |
| SSE | internal/server/sse.go |
## Code Design Review
N/A (first design).
## Open Questions
1. Exact Cursor CLI stream-json JSON schema (to be confirmed via scripts/test_cursor_cli.sh)
2. Whether `-ldflags` is needed to shrink the binary
3. Whether Goreleaser is needed for releases

docs/cursor-cli-format.md Normal file

@@ -0,0 +1,47 @@
# Cursor CLI stream-json format
## Actual output format (confirmed)
NDJSON (one JSON object per line):
### 1. System Init
```json
{"type":"system","subtype":"init","apiKeySource":"login","cwd":"/path","session_id":"uuid","model":"Auto","permissionMode":"default"}
```
### 2. User Message
```json
{"type":"user","message":{"role":"user","content":[{"type":"text","text":"prompt text"}]},"session_id":"uuid"}
```
### 3. Assistant Message (may appear multiple times)
```json
{"type":"assistant","message":{"role":"assistant","content":[{"type":"text","text":"response text"}]},"session_id":"uuid","timestamp_ms":1776157308323}
```
### 4. Result (final line)
```json
{"type":"result","subtype":"success","duration_ms":10208,"duration_api_ms":10208,"is_error":false,"result":"OK","session_id":"uuid","request_id":"uuid","usage":{"inputTokens":0,"outputTokens":122,"cacheReadTokens":5120,"cacheWriteTokens":14063}}
```
## Conversion rules
| Cursor type | Behavior |
|-------------|------|
| system | ignore (initialization message) |
| user | ignore (echo of the user message) |
| assistant | extract message.content[].text → OpenAI delta.content |
| result (success) | extract usage → OpenAI usage; send finish_reason:"stop" |
| result (error) | send an error chunk |
## CLI arguments
```bash
agent -p "prompt" \
--output-format stream-json \
--stream-partial-output \
--trust \
--model "model-name"
```
Note: `--trust` is required to run in non-interactive mode.
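The conversion rules above can be sketched as a small NDJSON filter. Field names follow the confirmed samples, but the code itself is illustrative, not the adapter's parser:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// cursorLine models just the fields the conversion rules need.
type cursorLine struct {
	Type    string `json:"type"`
	Message struct {
		Content []struct {
			Text string `json:"text"`
		} `json:"content"`
	} `json:"message"`
}

// extractText applies the table: assistant lines yield their text;
// system/user/result lines yield nothing.
func extractText(raw string) (string, bool) {
	var l cursorLine
	if err := json.Unmarshal([]byte(raw), &l); err != nil || l.Type != "assistant" {
		return "", false
	}
	var out string
	for _, c := range l.Message.Content {
		out += c.Text
	}
	return out, true
}

func main() {
	lines := []string{
		`{"type":"system","subtype":"init"}`,
		`{"type":"assistant","message":{"content":[{"type":"text","text":"Hi!"}]}}`,
		`{"type":"result","subtype":"success"}`,
	}
	for _, raw := range lines {
		if text, ok := extractText(raw); ok {
			fmt.Println(text) // only the assistant line prints
		}
	}
}
```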

@@ -0,0 +1,215 @@
# Plan: Cursor Adapter
## Overview
Implement a local OpenAI-compatible proxy in Go that uses Cursor's models by spawning the Cursor CLI in headless mode.
## Inputs
- Architecture: `docs/architecture/2026-04-14-cursor-adapter.md`
- Code Design: `docs/code-design/2026-04-14-cursor-adapter.md`
## Planning Assumptions
- Go 1.22+
- chi v5 as the HTTP router
- cobra for the CLI
- Project directory: `~/Documents/projects/cursor-adapter/`
- Explore the Cursor CLI output format first, then implement the conversion logic
## Task Breakdown
### Task P1: Go module init + project structure
- Objective: `go mod init`, create the directory structure and empty files
- Inputs Used: Code Design — Project Structure
- Design References: code-design §Project Structure
- Dependencies: None
- Deliverables: go.mod, directory structure, empty package files
- Completion Criteria: `go build` succeeds (even doing nothing yet)
### Task P2: Explore the Cursor CLI output format
- Objective: actually run `agent -p "hello" --output-format stream-json` and record the structure of each JSON line
- Inputs Used: PRD — Open Question 1
- Design References: architecture §Open Questions
- Dependencies: P1
- Deliverables: scripts/test_cursor_cli.sh; a document recording the Cursor stream-json schema
- Completion Criteria: the type/content structure of every JSON line is known
### Task P3: Config module
- Objective: internal/config/config.go (YAML loading, defaults, validation)
- Inputs Used: Code Design — Configuration
- Design References: code-design §Config struct, §Load()
- Dependencies: P1
- Deliverables: internal/config/config.go, internal/config/config_test.go, config.example.yaml
- Completion Criteria: table-driven tests pass
### Task P4: Models (request/response structs)
- Objective: internal/server/models.go (all JSON structs + JSON tags)
- Inputs Used: Code Design — Domain Models
- Design References: code-design §Domain Models
- Dependencies: P1
- Deliverables: internal/server/models.go
- Completion Criteria: structs defined, JSON tags correct
### Task P5: SSE helper
- Objective: internal/server/sse.go (SSE formatting, flush helpers)
- Inputs Used: Architecture — SSE streaming
- Design References: architecture §API Contract
- Dependencies: P4
- Deliverables: internal/server/sse.go
- Completion Criteria: FormatSSE() / FormatDone() are correct
### Task P6: CLI Bridge
- Objective: internal/bridge/bridge.go (Bridge interface + CLIBridge implementation)
- Inputs Used: Code Design — Interface Definitions
- Design References: code-design §Bridge interface, §CLIBridge
- Dependencies: P2, P3
- Deliverables: internal/bridge/bridge.go, internal/bridge/bridge_test.go
- Completion Criteria: Execute() can spawn agent, yield output line by line, and kill on timeout
### Task P7: Stream Converter
- Objective: internal/converter/convert.go (Cursor JSON → OpenAI SSE conversion)
- Inputs Used: Code Design — Converter interface
- Design References: code-design §ToOpenAIChunk(), §ToOpenAIResponse()
- Dependencies: P2, P4
- Deliverables: internal/converter/convert.go, internal/converter/convert_test.go
- Completion Criteria: table-driven tests pass; conversion format is correct
### Task P8: HTTP server + handlers
- Objective: internal/server/server.go + handler.go (chi router, 3 endpoints, error middleware)
- Inputs Used: Code Design — Layer Architecture
- Design References: code-design §handler.go, §writeError()
- Dependencies: P3, P4, P5, P6, P7
- Deliverables: internal/server/server.go, internal/server/handler.go, internal/server/handler_test.go
- Completion Criteria: /health, /v1/models, /v1/chat/completions respond (tested with a mock bridge)
### Task P9: CLI entrypoint (main.go)
- Objective: main.go (cobra command, wiring, server startup)
- Inputs Used: Code Design — Dependency Injection
- Design References: code-design §main.go wiring
- Dependencies: P3, P8
- Deliverables: main.go
- Completion Criteria: `go build && ./cursor-adapter` starts successfully
### Task P10: Integration testing
- Objective: test the full flow with curl and Hermes
- Inputs Used: PRD — Acceptance Criteria
- Design References: architecture §API Contract
- Dependencies: P9
- Deliverables: recorded test results
- Completion Criteria: AC1-AC5 pass
### Task P11: README
- Objective: installation, configuration, usage
- Inputs Used: PRD, Architecture
- Dependencies: P10
- Deliverables: README.md
- Completion Criteria: a new user can get it running from the README alone
## Dependency Graph
```mermaid
graph TD
P1 --> P2
P1 --> P3
P1 --> P4
P2 --> P6
P2 --> P7
P3 --> P6
P3 --> P9
P4 --> P5
P4 --> P7
P4 --> P8
P5 --> P8
P6 --> P8
P7 --> P8
P8 --> P9
P9 --> P10
P10 --> P11
```
## Execution Order
### Phase 1: Foundation (parallelizable)
- P1 Go module initialization
- P3 Config module
- P4 Models
### Phase 2: Exploration (needs P1)
- P2 Explore the Cursor CLI output format
- P5 SSE helper (needs P4)
### Phase 3: Core Logic (needs P2 + Phase 1)
- P6 CLI Bridge
- P7 Stream Converter
### Phase 4: Integration (needs Phase 3)
- P8 HTTP Server + Handlers
- P9 CLI entry point
### Phase 5: Validation (needs P9)
- P10 Integration testing
- P11 README
## Milestones
### Milestone M1: Foundation Ready
- Included Tasks: P1, P3, P4
- Exit Criteria: go.mod exists, config loads, model structs are defined
### Milestone M2: Core Logic Complete
- Included Tasks: P2, P5, P6, P7
- Exit Criteria: CLI Bridge can spawn Cursor; converter output is correct
### Milestone M3: MVP Ready
- Included Tasks: P8, P9, P10, P11
- Exit Criteria: with `cursor-adapter` running, curl-driven AC1-AC5 pass
## Deliverables
| Task | Deliverable | Source Design Reference |
|------|-------------|------------------------|
| P1 | go.mod + project structure | code-design §Project Structure |
| P2 | test script + format docs | architecture §Open Questions |
| P3 | internal/config/config.go + test | code-design §Configuration |
| P4 | internal/server/models.go | code-design §Domain Models |
| P5 | internal/server/sse.go | architecture §SSE |
| P6 | internal/bridge/bridge.go + test | code-design §Bridge interface |
| P7 | internal/converter/convert.go + test | code-design §Converter |
| P8 | server.go + handler.go + test | code-design §Layer Architecture |
| P9 | main.go | code-design §Dependency Injection |
| P10 | test results | PRD §Acceptance Criteria |
| P11 | README.md | — |
## Design Traceability
| Upstream Design Element | Planned Task(s) |
|-------------------------|-----------------|
| Architecture: HTTP Server | P8 |
| Architecture: CLI Bridge | P6 |
| Architecture: Stream Converter | P7 |
| Architecture: SSE | P5 |
| Architecture: Config | P3 |
| Architecture: Error Model | P8 |
| Code Design: Project Structure | P1 |
| Code Design: Interface — Bridge | P6 |
| Code Design: Interface — Converter | P7 |
| Code Design: Domain Models | P4 |
| Code Design: Configuration | P3 |
| Code Design: DI — main.go | P9 |
## Risks And Sequencing Notes
- P2探索 Cursor CLI 格式)是 critical path — P6 和 P7 依賴此結果
- P6CLI Bridge是最複雜的任務 — goroutine、subprocess、timeout、semaphore
- 如果 Cursor CLI stream-json 格式複雜P7 可能需要迭代
- P10 可能發現 edge case需要回頭修 P6/P7/P8
## Planning Review
N/A首次規劃
## Open Questions
1. Cursor CLI stream-json 的確切格式P2 回答)
2. Cursor CLI 並發限制P10 確認)


@ -0,0 +1,163 @@
## Research Inputs
N/A. This is a personal tool; no market research is needed.
## Problem
I use several CLI AI tools side by side (Hermes Agent, OpenCode, Claude Code), all of which support a custom API base URL and model. My company has a Cursor account; I have already authenticated locally via `agent login` and can use the various models Cursor offers.
The problem: each CLI tool wants its own API key or provider setup, even though I already have quota on the Cursor account. I need an adapter that lets these CLI tools use Cursor's models through the Cursor CLI's headless mode, avoiding extra API spend.
## Goals
- Run a local HTTP server exposing an OpenAI-compatible API (`/v1/chat/completions`)
- On each request, spawn a Cursor CLI `agent` subprocess to handle it
- Convert the Cursor CLI's streaming JSON output into OpenAI SSE format and return it
- Support selecting among multiple Cursor models
- Zero extra authentication setup: reuse the machine's existing Cursor login state
## Non Goals
- No support for CLI tools that do not speak the OpenAI format
- No API key management or multi-user authentication
- No metering, tracking, or billing
- No model load balancing or failover
- No proxying of Cursor IDE features; only the headless CLI mode is proxied
## Scope
A local personal proxy server (single user, deployed on their own machine).
### In Scope
- OpenAI-compatible API: `/v1/chat/completions`, `/v1/models`
- SSE streaming responses
- Model selection (passed to the Cursor CLI via `--model`)
- Simple YAML config file
- Error handling and CLI subprocess lifecycle management
- Health check endpoint
### Out of Scope
- Non-OpenAI format support
- Multi-user / API key management
- Usage metering
- GUI
- Docker deployment
## Success Metrics
- Hermes Agent, OpenCode, and Claude Code can all use Cursor models by pointing their base URL at this proxy
- Streaming response latency < 2 seconds, excluding model thinking time
- Zero configuration once the proxy is up (only the CLI tools' configs need changing)
## User Stories
1. As a user, I want to start the proxy server so that my CLI tools can connect to it
2. As a user, I want to set `base_url = http://localhost:8976` in Hermes Agent so that it can use Cursor's models
3. As a user, I want to specify `model = claude-sonnet-4-20250514` in a CLI tool and have the proxy pass it to the Cursor CLI
4. As a user, I want to see the model's thinking process streamed to my terminal in real time
5. As a user, I want to list the available models via `/v1/models`
6. As a user, I want to set the proxy's port and other options via a config file
## Functional Requirements
### FR1: OpenAI-Compatible API
- Support `POST /v1/chat/completions`
- Accept OpenAI-format request bodies (`model`, `messages`, `stream`)
- When `stream: true`, return SSE-formatted `data: {...}\n\n` chunks
- When `stream: false`, return a complete JSON response
### FR2: Cursor CLI Integration
- On each request, assemble a prompt from the messages array
- Spawn an `agent -p "{prompt}" --model "{model}" --output-format stream-json` subprocess
- Read the subprocess's streaming JSON output from stdout
- Manage the subprocess lifecycle (start, run, exit, kill on timeout)
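One plausible way to assemble the single prompt string for `agent -p` from the messages array. The role-prefix convention and the `buildPrompt` name are illustrative assumptions, not the adapter's confirmed format:

```go
package main

import (
	"fmt"
	"strings"
)

// message mirrors one entry of the OpenAI messages array.
type message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

// buildPrompt flattens the messages array into one prompt string for
// `agent -p`. Prefixing each line with its role is an assumption here;
// the real adapter may pick a different convention.
func buildPrompt(msgs []message) string {
	var b strings.Builder
	for _, m := range msgs {
		fmt.Fprintf(&b, "%s: %s\n", m.Role, m.Content)
	}
	return strings.TrimSpace(b.String())
}

func main() {
	fmt.Println(buildPrompt([]message{
		{Role: "system", Content: "You are terse."},
		{Role: "user", Content: "hello"},
	}))
}
```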
### FR3: Streaming Response Conversion
- Convert the Cursor CLI's `stream-json` output into OpenAI SSE format
- Every SSE chunk must carry `id`, `object: "chat.completion.chunk"`, and `choices[0].delta.content`
- The final chunk must carry `finish_reason: "stop"`
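A minimal sketch of the chunk shape these bullets require, combined with the `data: {...}\n\n` framing from FR1. Struct and helper names here are illustrative, not the adapter's actual types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// chunk carries the fields FR3 requires on every SSE chunk.
type chunk struct {
	ID      string   `json:"id"`
	Object  string   `json:"object"`
	Choices []choice `json:"choices"`
}

type choice struct {
	Index        int               `json:"index"`
	Delta        map[string]string `json:"delta"`
	FinishReason *string           `json:"finish_reason"`
}

// sseFrame serializes one chunk using SSE `data: {...}\n\n` framing.
func sseFrame(c chunk) (string, error) {
	b, err := json.Marshal(c)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("data: %s\n\n", b), nil
}

func main() {
	// A content chunk, then the final chunk with finish_reason: "stop".
	mid, _ := sseFrame(chunk{ID: "chatcmpl-1", Object: "chat.completion.chunk",
		Choices: []choice{{Delta: map[string]string{"content": "hello"}}}})
	stop := "stop"
	last, _ := sseFrame(chunk{ID: "chatcmpl-1", Object: "chat.completion.chunk",
		Choices: []choice{{Delta: map[string]string{}, FinishReason: &stop}}})
	fmt.Print(mid, last, "data: [DONE]\n\n")
}
```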
### FR4: Model Listing
- Support `GET /v1/models`, returning the list of available models
- Source the model list from the Cursor CLI (`agent --list-models` or a config definition)
### FR5: Configuration
- YAML config file (default `~/.cursor-adapter/config.yaml`)
- Configurable: port, cursor_cli_path, default_model, timeout
### FR6: Error Handling
- Cursor CLI timeout (configurable, default 5 minutes) → return 504
- Cursor CLI error → return 500 with the error message
- Invalid request body → return 400
- Unknown model → return 404
## Acceptance Criteria
### AC1: Basic Chat Completion
Given the proxy is running on port 8976, When I send `POST /v1/chat/completions` via curl with `{"model": "claude-sonnet-4-20250514", "messages": [{"role": "user", "content": "hello"}], "stream": true}`, Then I receive an SSE streaming response whose content is the Cursor CLI's reply converted to OpenAI format.
### AC2: Streaming Display
Given a CLI tool is connected to the proxy and has sent a streaming request, When the model is generating a response, Then text appears in the CLI tool's terminal in real time (without waiting for the complete response).
### AC3: Model Selection
Given the proxy is running, When a request specifies `model: "gpt-5.2"`, Then the proxy spawns the Cursor CLI with `--model gpt-5.2`.
### AC4: Health Check
Given the proxy is running, When `GET /health` is sent, Then it returns `{"status": "ok", "cursor_cli": "available"}`.
### AC5: Model Listing
Given the proxy is running, When `GET /v1/models` is sent, Then it returns Cursor's available models in a format matching the OpenAI models API.
## Edge Cases
- Cursor CLI subprocess crashes unexpectedly → the proxy returns 500 and cleans up resources
- Request timeout (model thinks too long) → the proxy kills the subprocess and returns 504
- Concurrent requests → each request spawns its own subprocess
- Cursor CLI not installed or not on PATH → the proxy checks at startup and fails with a clear error
- Cursor CLI not logged in → the proxy returns an error telling the user to run `agent login` first
- Empty messages array → return 400
- With `stream: false`, wait for the Cursor CLI's complete output before responding
## Non Functional Requirements
### NFR1: Performance
- The proxy's own overhead < 500ms (excluding model thinking time)
- First-token streaming latency no more than the Cursor CLI's own latency + 200ms
### NFR2: Reliability
- Concurrent requests ≤ 5 (personal use)
- Subprocesses are cleaned up correctly after timeout, leaving no zombie processes
### NFR3: Usability
- One-command startup: `cursor-adapter` or `cursor-adapter --port 8976`
- Simple config file format with sensible defaults
- Show the available model list at startup
## Risks
| Risk | Impact | Likelihood | Mitigation |
|------|--------|-----------|------------|
| Cursor CLI output format changes | High | Medium | Abstract the output-parsing layer for easy adaptation |
| Cursor CLI does not support some models | Medium | Low | Validate model availability at startup |
| Too many concurrent subprocesses exhaust resources | Medium | Low | Cap maximum concurrency |
| Cursor's headless mode has limitations | High | Medium | Test headless mode first; fall back to ACP if needed |
## Assumptions
- The Cursor CLI is installed and on PATH
- The Cursor CLI is authenticated via `agent login`
- The user's CLI tools all support the OpenAI-compatible API format
- The user only needs the `/v1/chat/completions` and `/v1/models` endpoints
## Dependencies
- Cursor CLI (`agent` command)
- Python 3.10+ or Node.js 18+ (depending on the implementation language chosen)
## Open Questions
1. What is the exact JSON schema of the Cursor CLI's `--output-format stream-json`? Needs an actual run to confirm.
2. Does the Cursor CLI support running multiple headless instances concurrently?
3. Do we need function calling / tool use support? (Not in this PRD, but it could be added if the Cursor CLI supports it.)

docs/test-output-log.md Normal file
@ -0,0 +1,37 @@
# Cursor CLI Test Output
# Date: $(date)
# Script: scripts/test_cursor_cli.sh
## Test 1: agent --version
```
2026.04.13-a9d7fb5
```
## Test 2: agent -p "say hello in one word" --output-format stream-json --trust
- Requires the --trust flag to run in non-interactive mode
- --force/-f is disabled by the team admin
- Still waiting on a response after launch (initialization can take a while)
## Test 3: agent --help (output-format 相關)
```
--output-format <format> Output format (only works with --print): text |
json | stream-json (default: "text")
--stream-partial-output Stream partial output as individual text deltas
(only works with --print and stream-json format)
(default: false)
```
## Test 4: agent models (partial results)
- auto (default)
- composer-2-fast, composer-2, composer-1.5
- gpt-5.3-codex family (low/normal/high/xhigh)
- claude-4-sonnet, claude-4.5-sonnet
- grok-4-20
- gemini-3-flash
- kimi-k2.5
## Conclusions
1. The Cursor CLI is installed and working
2. stream-json must be combined with --print (-p) and --trust
3. --stream-partial-output provides per-token text deltas
4. The actual JSON format can only be analyzed after a response completes

go.mod Normal file
@ -0,0 +1,14 @@
module github.com/daniel/cursor-adapter
go 1.26.1
require (
github.com/go-chi/chi/v5 v5.2.5
github.com/spf13/cobra v1.10.2
gopkg.in/yaml.v3 v3.0.1
)
require (
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/spf13/pflag v1.0.9 // indirect
)

go.sum Normal file
@ -0,0 +1,15 @@
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/go-chi/chi/v5 v5.2.5 h1:Eg4myHZBjyvJmAFjFvWgrqDTXFyOzjj7YIm3L3mu6Ug=
github.com/go-chi/chi/v5 v5.2.5/go.mod h1:X7Gx4mteadT3eDOMTsXzmI4/rwUpOwBHLpAfupzFJP0=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/spf13/cobra v1.10.2 h1:DMTTonx5m65Ic0GOoRY2c16WCbHxOOw6xxezuLaBpcU=
github.com/spf13/cobra v1.10.2/go.mod h1:7C1pvHqHw5A4vrJfjNwvOdzYu0Gml16OCs2GRiTUUS4=
github.com/spf13/pflag v1.0.9 h1:9exaQaMOCwffKiiiYk6/BndUBv+iRViNW+4lEMi0PvY=
github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

internal/.DS_Store vendored Normal file
Binary file not shown.
internal/bridge/bridge.go Normal file

@ -0,0 +1,930 @@
package bridge
import (
"bufio"
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"log/slog"
"os"
"os/exec"
"regexp"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/daniel/cursor-adapter/internal/workspace"
)
// Bridge defines the integration interface to the Cursor CLI.
type Bridge interface {
Execute(ctx context.Context, prompt string, model string, sessionKey string) (<-chan string, <-chan error)
ExecuteSync(ctx context.Context, prompt string, model string, sessionKey string) (string, error)
ListModels(ctx context.Context) ([]string, error)
CheckHealth(ctx context.Context) error
}
// NewBridge constructs a Bridge. With chatOnly=true every subprocess runs
// inside an empty temp workspace, with env overrides redirecting HOME /
// CURSOR_CONFIG_DIR to that temp dir so the Cursor agent cannot read any
// real project files or global rules.
func NewBridge(cursorPath string, logger *slog.Logger, useACP bool, chatOnly bool, maxConcurrent int, timeout time.Duration) Bridge {
if useACP {
return NewACPBridge(cursorPath, logger, chatOnly, maxConcurrent, timeout)
}
return NewCLIBridge(cursorPath, chatOnly, maxConcurrent, timeout)
}
// --- CLI Bridge ---
type CLIBridge struct {
cursorPath string
semaphore chan struct{}
timeout time.Duration
chatOnly bool
}
func buildCLICommandArgs(prompt, model, workspaceDir string, stream, chatOnly bool) []string {
args := []string{"--print", "--mode", "ask"}
if chatOnly {
args = append(args, "--trust")
}
if workspaceDir != "" {
args = append(args, "--workspace", workspaceDir)
}
if model != "" {
args = append(args, "--model", model)
}
if stream {
args = append(args, "--stream-partial-output", "--output-format", "stream-json")
} else {
args = append(args, "--output-format", "text")
}
args = append(args, prompt)
return args
}
func NewCLIBridge(cursorPath string, chatOnly bool, maxConcurrent int, timeout time.Duration) *CLIBridge {
if maxConcurrent <= 0 {
maxConcurrent = 1
}
return &CLIBridge{
cursorPath: cursorPath,
semaphore: make(chan struct{}, maxConcurrent),
timeout: timeout,
chatOnly: chatOnly,
}
}
// prepareWorkspace returns (workspaceDir, envOverrides, cleanup). When
// chatOnly is enabled, workspaceDir is a fresh temp dir and cleanup removes
// it. Otherwise workspaceDir falls back to the adapter's cwd with no
// cleanup.
func (b *CLIBridge) prepareWorkspace() (string, map[string]string, func()) {
if !b.chatOnly {
ws, _ := os.Getwd()
return ws, nil, func() {}
}
dir, env, err := workspace.ChatOnly("")
if err != nil {
slog.Warn("chat-only workspace setup failed, falling back to cwd", "err", err)
ws, _ := os.Getwd()
return ws, nil, func() {}
}
return dir, env, func() { _ = os.RemoveAll(dir) }
}
func (b *CLIBridge) Execute(ctx context.Context, prompt string, model string, sessionKey string) (<-chan string, <-chan error) {
outputChan := make(chan string, 64)
errChan := make(chan error, 1)
go func() {
defer close(outputChan)
defer close(errChan)
select {
case b.semaphore <- struct{}{}:
defer func() { <-b.semaphore }()
case <-ctx.Done():
errChan <- ctx.Err()
return
}
execCtx, cancel := context.WithTimeout(ctx, b.timeout)
defer cancel()
ws, envOverrides, cleanup := b.prepareWorkspace()
defer cleanup()
cmd := exec.CommandContext(execCtx, b.cursorPath, buildCLICommandArgs(prompt, model, ws, true, b.chatOnly)...)
cmd.Dir = ws
cmd.Env = workspace.MergeEnv(os.Environ(), envOverrides)
stdoutPipe, err := cmd.StdoutPipe()
if err != nil {
errChan <- fmt.Errorf("stdout pipe: %w", err)
return
}
defer stdoutPipe.Close()
if err := cmd.Start(); err != nil {
errChan <- fmt.Errorf("start command: %w", err)
return
}
scanner := bufio.NewScanner(stdoutPipe)
// Match the ACP readLoop's buffer size so long stream-json lines are not
// truncated by bufio.Scanner's 64KB default token limit.
scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024)
for scanner.Scan() {
line := scanner.Text()
line = strings.TrimSpace(line)
if line == "" {
continue
}
outputChan <- line
}
if err := scanner.Err(); err != nil {
errChan <- fmt.Errorf("stdout scanner: %w", err)
}
if err := cmd.Wait(); err != nil {
errChan <- err
}
}()
return outputChan, errChan
}
func (b *CLIBridge) ExecuteSync(ctx context.Context, prompt string, model string, sessionKey string) (string, error) {
select {
case b.semaphore <- struct{}{}:
defer func() { <-b.semaphore }()
case <-ctx.Done():
return "", ctx.Err()
}
execCtx, cancel := context.WithTimeout(ctx, b.timeout)
defer cancel()
ws, envOverrides, cleanup := b.prepareWorkspace()
defer cleanup()
cmd := exec.CommandContext(execCtx, b.cursorPath, buildCLICommandArgs(prompt, model, ws, false, b.chatOnly)...)
cmd.Dir = ws
cmd.Env = workspace.MergeEnv(os.Environ(), envOverrides)
var stdout, stderr bytes.Buffer
cmd.Stdout = &stdout
cmd.Stderr = &stderr
if err := cmd.Run(); err != nil {
return "", fmt.Errorf("run command: %w (stderr: %s)", err, strings.TrimSpace(stderr.String()))
}
return strings.TrimSpace(stdout.String()), nil
}
func (b *CLIBridge) ListModels(ctx context.Context) ([]string, error) {
select {
case b.semaphore <- struct{}{}:
defer func() { <-b.semaphore }()
case <-ctx.Done():
return nil, ctx.Err()
}
cmd := exec.CommandContext(ctx, b.cursorPath, "models")
output, err := cmd.CombinedOutput()
if err != nil {
return nil, fmt.Errorf("list models: %w", err)
}
return parseModelsOutput(string(output)), nil
}
func (b *CLIBridge) CheckHealth(ctx context.Context) error {
cmd := exec.CommandContext(ctx, b.cursorPath, "--version")
_, err := cmd.CombinedOutput()
if err != nil {
return fmt.Errorf("health check: %w", err)
}
return nil
}
var ansiEscapeRe = regexp.MustCompile(`\x1b\[[0-9;]*[A-Za-z]`)
func parseModelsOutput(output string) []string {
clean := ansiEscapeRe.ReplaceAllString(output, "")
var models []string
for _, line := range strings.Split(clean, "\n") {
line = strings.TrimSpace(line)
if line == "" {
continue
}
if strings.Contains(line, "Loading") || strings.Contains(line, "Available") ||
strings.Contains(line, "Tip:") || strings.Contains(line, "Error") {
continue
}
id := line
if idx := strings.Index(line, " - "); idx > 0 {
id = line[:idx]
}
id = strings.TrimSpace(id)
if id != "" {
models = append(models, id)
}
}
return models
}
// --- ACP Bridge (full per-request flow, modeled on cursor-api-proxy) ---
type ACPBridge struct {
cursorPath string
logger *slog.Logger
timeout time.Duration
chatOnly bool
workers []*acpWorker
nextWorker atomic.Uint32
sessionsMu sync.Mutex
sessions map[string]acpSessionHandle
sessionTTL time.Duration
}
type acpSessionHandle struct {
WorkerIndex int
SessionID string
Model string
Generation uint64
LastUsedAt time.Time
}
func NewACPBridge(cursorPath string, logger *slog.Logger, chatOnly bool, maxConcurrent int, timeout time.Duration) *ACPBridge {
if maxConcurrent <= 0 {
maxConcurrent = 1
}
bridge := &ACPBridge{
cursorPath: cursorPath,
logger: logger,
timeout: timeout,
chatOnly: chatOnly,
sessions: make(map[string]acpSessionHandle),
sessionTTL: 30 * time.Minute,
}
for i := 0; i < maxConcurrent; i++ {
bridge.workers = append(bridge.workers, newACPWorker(cursorPath, logger, chatOnly, timeout))
}
return bridge
}
func buildACPCommandArgs(workspace, model string) []string {
args := []string{"--workspace", workspace}
if model != "" && model != "auto" && model != "default" {
args = append(args, "--model", model)
}
args = append(args, "acp")
return args
}
// acpMessage is the ACP JSON-RPC message format.
type acpMessage struct {
JSONRPC string `json:"jsonrpc"`
ID *int `json:"id,omitempty"`
Method string `json:"method,omitempty"`
Params json.RawMessage `json:"params,omitempty"`
Result json.RawMessage `json:"result,omitempty"`
Error *acpError `json:"error,omitempty"`
}
type acpError struct {
Code int `json:"code"`
Message string `json:"message"`
}
type acpSession struct {
SessionID string `json:"sessionId"`
}
type acpResponse struct {
result json.RawMessage
err error
}
type acpWorker struct {
cursorPath string
logger *slog.Logger
timeout time.Duration
chatOnly bool
reqMu sync.Mutex
writeMu sync.Mutex
stateMu sync.Mutex
workspace string
envOverrides map[string]string
currentModel string
cmd *exec.Cmd
stdin io.WriteCloser
pending map[int]chan acpResponse
nextID int
readerErr error
readerDone chan struct{}
activeSink func(string)
generation atomic.Uint64
}
func newACPWorker(cursorPath string, logger *slog.Logger, chatOnly bool, timeout time.Duration) *acpWorker {
return &acpWorker{
cursorPath: cursorPath,
logger: logger,
timeout: timeout,
chatOnly: chatOnly,
}
}
func (b *ACPBridge) Execute(ctx context.Context, prompt string, model string, sessionKey string) (<-chan string, <-chan error) {
outputChan := make(chan string, 64)
errChan := make(chan error, 1)
go func() {
defer close(outputChan)
defer close(errChan)
worker, sessionID := b.resolveSession(sessionKey, model)
finalSessionID, err := worker.run(ctx, prompt, model, sessionID, func(text string) {
if text == "" {
return
}
select {
case outputChan <- text:
case <-ctx.Done():
}
})
if err == nil {
b.storeSession(sessionKey, model, worker, finalSessionID)
}
if err != nil {
errChan <- err
}
}()
return outputChan, errChan
}
func (b *ACPBridge) ExecuteSync(ctx context.Context, prompt string, model string, sessionKey string) (string, error) {
var content strings.Builder
worker, sessionID := b.resolveSession(sessionKey, model)
finalSessionID, err := worker.run(ctx, prompt, model, sessionID, func(text string) {
content.WriteString(text)
})
if err != nil {
return "", err
}
b.storeSession(sessionKey, model, worker, finalSessionID)
return strings.TrimSpace(content.String()), nil
}
func (b *ACPBridge) pickWorker() *acpWorker {
if len(b.workers) == 0 {
return newACPWorker(b.cursorPath, b.logger, b.chatOnly, b.timeout)
}
idx := int(b.nextWorker.Add(1)-1) % len(b.workers)
return b.workers[idx]
}
func (b *ACPBridge) resolveSession(sessionKey, model string) (*acpWorker, string) {
normalizedModel := normalizeModel(model)
if sessionKey == "" {
return b.pickWorker(), ""
}
b.sessionsMu.Lock()
defer b.sessionsMu.Unlock()
b.cleanupExpiredSessionsLocked()
handle, ok := b.sessions[sessionKey]
if !ok {
return b.pickWorker(), ""
}
if handle.Model != normalizedModel {
delete(b.sessions, sessionKey)
return b.workers[handle.WorkerIndex], ""
}
if handle.WorkerIndex < 0 || handle.WorkerIndex >= len(b.workers) {
delete(b.sessions, sessionKey)
return b.pickWorker(), ""
}
worker := b.workers[handle.WorkerIndex]
if worker.Generation() != handle.Generation {
delete(b.sessions, sessionKey)
return worker, ""
}
handle.LastUsedAt = time.Now()
b.sessions[sessionKey] = handle
return worker, handle.SessionID
}
func (b *ACPBridge) storeSession(sessionKey, model string, worker *acpWorker, sessionID string) {
if sessionKey == "" || sessionID == "" {
return
}
workerIndex := -1
for i, candidate := range b.workers {
if candidate == worker {
workerIndex = i
break
}
}
if workerIndex == -1 {
return
}
b.sessionsMu.Lock()
defer b.sessionsMu.Unlock()
b.cleanupExpiredSessionsLocked()
b.sessions[sessionKey] = acpSessionHandle{
WorkerIndex: workerIndex,
SessionID: sessionID,
Model: normalizeModel(model),
Generation: worker.Generation(),
LastUsedAt: time.Now(),
}
}
func (b *ACPBridge) cleanupExpiredSessionsLocked() {
if b.sessionTTL <= 0 {
return
}
cutoff := time.Now().Add(-b.sessionTTL)
for key, handle := range b.sessions {
if handle.LastUsedAt.Before(cutoff) {
delete(b.sessions, key)
}
}
}
func (b *ACPBridge) ListModels(ctx context.Context) ([]string, error) {
cmd := exec.CommandContext(ctx, b.cursorPath, "models")
output, err := cmd.CombinedOutput()
if err != nil {
return nil, fmt.Errorf("list models: %w", err)
}
return parseModelsOutput(string(output)), nil
}
func (b *ACPBridge) CheckHealth(ctx context.Context) error {
cmd := exec.CommandContext(ctx, b.cursorPath, "--version")
_, err := cmd.CombinedOutput()
if err != nil {
return fmt.Errorf("health check: %w", err)
}
return nil
}
// sendACP sends a JSON-RPC request.
func sendACP(stdin io.Writer, id int, method string, params interface{}) error {
msg := acpMessage{
JSONRPC: "2.0",
ID: &id,
Method: method,
}
if params != nil {
data, _ := json.Marshal(params)
msg.Params = data
}
data, _ := json.Marshal(msg)
_, err := fmt.Fprintf(stdin, "%s\n", data)
return err
}
func respondACP(stdin io.Writer, id int, result interface{}) error {
msg := map[string]interface{}{
"jsonrpc": "2.0",
"id": id,
"result": result,
}
data, _ := json.Marshal(msg)
_, err := fmt.Fprintf(stdin, "%s\n", data)
return err
}
func extractACPText(params json.RawMessage) string {
var update map[string]interface{}
_ = json.Unmarshal(params, &update)
updateMap, _ := update["update"].(map[string]interface{})
sessionUpdate, _ := updateMap["sessionUpdate"].(string)
if sessionUpdate != "" && !strings.HasPrefix(sessionUpdate, "agent_message") {
return ""
}
content := updateMap["content"]
switch v := content.(type) {
case map[string]interface{}:
if t, ok := v["text"].(string); ok {
return t
}
case []interface{}:
var parts []string
for _, item := range v {
if m, ok := item.(map[string]interface{}); ok {
if nested, ok := m["content"].(map[string]interface{}); ok {
if t, ok := nested["text"].(string); ok {
parts = append(parts, t)
continue
}
}
if t, ok := m["text"].(string); ok {
parts = append(parts, t)
}
}
}
return strings.Join(parts, "")
case string:
return v
}
return ""
}
func (w *acpWorker) Generation() uint64 {
return w.generation.Load()
}
func normalizeModel(m string) string {
m = strings.TrimSpace(m)
if m == "" || m == "default" {
return "auto"
}
return m
}
func (w *acpWorker) run(ctx context.Context, prompt string, model string, sessionID string, sink func(string)) (string, error) {
w.reqMu.Lock()
defer w.reqMu.Unlock()
t0 := time.Now()
wantModel := normalizeModel(model)
if w.cmd != nil && w.currentModel != wantModel {
slog.Debug("acp: model changed, restarting worker", "from", w.currentModel, "to", wantModel)
w.resetLocked()
sessionID = ""
}
if err := w.ensureStartedLocked(ctx, wantModel); err != nil {
return "", err
}
slog.Debug("acp: ensureStarted", "model", wantModel, "elapsed", time.Since(t0))
newSession := sessionID == ""
if newSession {
var err error
sessionID, err = w.createSessionLocked(ctx)
if err != nil {
w.resetLocked()
return "", err
}
slog.Debug("acp: session created", "elapsed", time.Since(t0))
}
w.setActiveSinkLocked(sink)
defer w.setActiveSinkLocked(nil)
slog.Debug("acp: sending prompt", "newSession", newSession, "elapsed", time.Since(t0))
if _, err := w.sendRequestLocked(ctx, "session/prompt", map[string]interface{}{
"sessionId": sessionID,
"prompt": []interface{}{map[string]interface{}{
"type": "text",
"text": prompt,
}},
}); err != nil {
w.resetLocked()
return "", err
}
slog.Debug("acp: prompt complete", "elapsed", time.Since(t0))
return sessionID, nil
}
func (w *acpWorker) ensureStartedLocked(ctx context.Context, model string) error {
if w.cmd != nil {
return nil
}
if w.workspace == "" {
var (
dir string
env map[string]string
err error
)
if w.chatOnly {
dir, env, err = workspace.ChatOnly("")
if err != nil {
return fmt.Errorf("chat-only workspace: %w", err)
}
} else {
dir, err = os.MkdirTemp("", "cursor-acp-worker-*")
if err != nil {
return fmt.Errorf("temp workspace: %w", err)
}
}
w.workspace = dir
w.envOverrides = env
}
w.currentModel = model
cmd := exec.Command(w.cursorPath, buildACPCommandArgs(w.workspace, model)...)
cmd.Dir = w.workspace
cmd.Env = workspace.MergeEnv(os.Environ(), w.envOverrides)
stdin, err := cmd.StdinPipe()
if err != nil {
return fmt.Errorf("stdin pipe: %w", err)
}
stdoutPipe, err := cmd.StdoutPipe()
if err != nil {
_ = stdin.Close()
return fmt.Errorf("stdout pipe: %w", err)
}
if err := cmd.Start(); err != nil {
_ = stdin.Close()
_ = stdoutPipe.Close()
return fmt.Errorf("start acp: %w", err)
}
w.cmd = cmd
w.stdin = stdin
w.pending = make(map[int]chan acpResponse)
w.nextID = 1
w.readerDone = make(chan struct{})
w.readerErr = nil
w.generation.Add(1)
go w.readLoop(stdoutPipe)
if _, err := w.sendRequestLocked(ctx, "initialize", map[string]interface{}{
"protocolVersion": 1,
"clientCapabilities": map[string]interface{}{
"promptCapabilities": map[string]interface{}{"text": true},
"fs": map[string]interface{}{"readTextFile": false, "writeTextFile": false},
"terminal": false,
},
"clientInfo": map[string]interface{}{
"name": "cursor-adapter",
"version": "0.2.0",
},
}); err != nil {
return fmt.Errorf("initialize: %w", err)
}
if _, err := w.sendRequestLocked(ctx, "authenticate", map[string]interface{}{
"methodId": "cursor_login",
}); err != nil {
return fmt.Errorf("authenticate: %w", err)
}
return nil
}
func (w *acpWorker) createSessionLocked(ctx context.Context) (string, error) {
resp, err := w.sendRequestLocked(ctx, "session/new", map[string]interface{}{
"cwd": w.workspace,
"mcpServers": []interface{}{},
})
if err != nil {
return "", fmt.Errorf("session/new: %w", err)
}
var session acpSession
if err := json.Unmarshal(resp, &session); err != nil || session.SessionID == "" {
return "", fmt.Errorf("session/new invalid response: %s", string(resp))
}
return session.SessionID, nil
}
func (w *acpWorker) setConfigLocked(ctx context.Context, sessionID, configID string, value interface{}) error {
_, err := w.sendRequestLocked(ctx, "session/set_config_option", map[string]interface{}{
"sessionId": sessionID,
"configId": configID,
"value": value,
})
if err != nil {
return fmt.Errorf("session/set_config_option(%s=%v): %w", configID, value, err)
}
return nil
}
func (w *acpWorker) sendRequestLocked(ctx context.Context, method string, params interface{}) (json.RawMessage, error) {
if w.stdin == nil {
return nil, fmt.Errorf("acp stdin unavailable")
}
id := w.nextID
w.nextID++
respCh := make(chan acpResponse, 1)
w.stateMu.Lock()
w.pending[id] = respCh
readerDone := w.readerDone
w.stateMu.Unlock()
if err := w.writeJSONRPCLocked(id, method, params); err != nil {
w.removePending(id)
return nil, err
}
timer := time.NewTimer(w.timeout)
defer timer.Stop()
select {
case resp := <-respCh:
return resp.result, resp.err
case <-ctx.Done():
w.removePending(id)
return nil, ctx.Err()
case <-readerDone:
return nil, w.getReaderErr()
case <-timer.C:
w.removePending(id)
return nil, fmt.Errorf("acp %s timed out after %s", method, w.timeout)
}
}
func (w *acpWorker) writeJSONRPCLocked(id int, method string, params interface{}) error {
w.writeMu.Lock()
defer w.writeMu.Unlock()
return sendACP(w.stdin, id, method, params)
}
func (w *acpWorker) readLoop(stdout io.ReadCloser) {
defer stdout.Close()
scanner := bufio.NewScanner(stdout)
scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024)
for scanner.Scan() {
line := strings.TrimSpace(scanner.Text())
if line == "" {
continue
}
var msg acpMessage
if err := json.Unmarshal([]byte(line), &msg); err != nil {
continue
}
if msg.ID != nil && (msg.Result != nil || msg.Error != nil) {
w.deliverResponse(*msg.ID, msg)
continue
}
w.handleACPNotification(msg)
}
err := scanner.Err()
w.stateMu.Lock()
w.readerErr = err
done := w.readerDone
pending := w.pending
w.pending = make(map[int]chan acpResponse)
w.stateMu.Unlock()
for _, ch := range pending {
ch <- acpResponse{err: w.getReaderErr()}
close(ch)
}
if w.cmd != nil {
_ = w.cmd.Wait()
}
if done != nil {
close(done)
}
}
func (w *acpWorker) deliverResponse(id int, msg acpMessage) {
w.stateMu.Lock()
ch := w.pending[id]
delete(w.pending, id)
w.stateMu.Unlock()
if ch == nil {
return
}
resp := acpResponse{result: msg.Result}
if msg.Error != nil {
resp.err = fmt.Errorf("%s", msg.Error.Message)
}
ch <- resp
close(ch)
}
func (w *acpWorker) handleACPNotification(msg acpMessage) bool {
switch msg.Method {
case "session/update":
text := extractACPText(msg.Params)
if text != "" {
w.stateMu.Lock()
sink := w.activeSink
w.stateMu.Unlock()
if sink != nil {
sink(text)
}
}
return true
case "session/request_permission":
if msg.ID != nil {
w.writeMu.Lock()
_ = respondACP(w.stdin, *msg.ID, map[string]interface{}{
"outcome": map[string]interface{}{
"outcome": "selected",
"optionId": "reject-once",
},
})
w.writeMu.Unlock()
}
return true
}
if msg.ID != nil && strings.HasPrefix(msg.Method, "cursor/") {
var params map[string]interface{}
_ = json.Unmarshal(msg.Params, &params)
w.writeMu.Lock()
defer w.writeMu.Unlock()
switch msg.Method {
case "cursor/ask_question":
selectedID := ""
if options, ok := params["options"].([]interface{}); ok && len(options) > 0 {
if first, ok := options[0].(map[string]interface{}); ok {
if id, ok := first["id"].(string); ok {
selectedID = id
}
}
}
_ = respondACP(w.stdin, *msg.ID, map[string]interface{}{"selectedId": selectedID})
case "cursor/create_plan":
_ = respondACP(w.stdin, *msg.ID, map[string]interface{}{"approved": true})
default:
_ = respondACP(w.stdin, *msg.ID, map[string]interface{}{})
}
return true
}
return false
}
func (w *acpWorker) setActiveSinkLocked(sink func(string)) {
w.stateMu.Lock()
w.activeSink = sink
w.stateMu.Unlock()
}
func (w *acpWorker) removePending(id int) {
w.stateMu.Lock()
delete(w.pending, id)
w.stateMu.Unlock()
}
func (w *acpWorker) getReaderErr() error {
w.stateMu.Lock()
defer w.stateMu.Unlock()
if w.readerErr != nil {
return w.readerErr
}
return fmt.Errorf("acp process exited")
}
func (w *acpWorker) resetLocked() {
w.stateMu.Lock()
cmd := w.cmd
stdin := w.stdin
done := w.readerDone
w.cmd = nil
w.stdin = nil
w.pending = make(map[int]chan acpResponse)
w.nextID = 1
w.readerDone = nil
w.readerErr = nil
w.activeSink = nil
w.stateMu.Unlock()
w.generation.Add(1)
if stdin != nil {
_ = stdin.Close()
}
if cmd != nil && cmd.Process != nil {
_ = cmd.Process.Kill()
}
if done != nil {
select {
case <-done:
case <-time.After(2 * time.Second):
}
}
if w.workspace != "" {
_ = os.RemoveAll(w.workspace)
w.workspace = ""
}
w.envOverrides = nil
}


@ -0,0 +1,309 @@
package bridge
import (
"context"
"encoding/json"
"io"
"log/slog"
"os/exec"
"strings"
"testing"
"time"
)
func TestNewBridge(t *testing.T) {
b := NewCLIBridge("/usr/bin/agent", false, 4, 30*time.Second)
if b == nil {
t.Fatal("NewCLIBridge returned nil")
}
if b.cursorPath != "/usr/bin/agent" {
t.Errorf("cursorPath = %q, want %q", b.cursorPath, "/usr/bin/agent")
}
if cap(b.semaphore) != 4 {
t.Errorf("semaphore capacity = %d, want 4", cap(b.semaphore))
}
if b.timeout != 30*time.Second {
t.Errorf("timeout = %v, want 30s", b.timeout)
}
}
func TestNewBridge_DefaultConcurrency(t *testing.T) {
b := NewCLIBridge("agent", false, 0, 10*time.Second)
if cap(b.semaphore) != 1 {
t.Errorf("semaphore capacity = %d, want 1 (default)", cap(b.semaphore))
}
}
func TestNewBridge_NegativeConcurrency(t *testing.T) {
b := NewCLIBridge("agent", false, -5, 10*time.Second)
if cap(b.semaphore) != 1 {
t.Errorf("semaphore capacity = %d, want 1 (default for negative)", cap(b.semaphore))
}
}
func TestNewBridge_UsesACPWhenRequested(t *testing.T) {
logger := slog.New(slog.NewTextHandler(io.Discard, nil))
b := NewBridge("agent", logger, true, false, 2, 10*time.Second)
if _, ok := b.(*ACPBridge); !ok {
t.Fatalf("expected ACPBridge, got %T", b)
}
}
func TestBuildACPCommandArgs_NoModel(t *testing.T) {
got := buildACPCommandArgs("/tmp/workspace", "auto")
want := []string{"--workspace", "/tmp/workspace", "acp"}
if len(got) != len(want) {
t.Fatalf("len(args) = %d, want %d (%v)", len(got), len(want), got)
}
for i := range want {
if got[i] != want[i] {
t.Fatalf("args[%d] = %q, want %q (all=%v)", i, got[i], want[i], got)
}
}
}
func TestBuildACPCommandArgs_WithModel(t *testing.T) {
got := buildACPCommandArgs("/tmp/workspace", "sonnet-4.6")
want := []string{"--workspace", "/tmp/workspace", "--model", "sonnet-4.6", "acp"}
if len(got) != len(want) {
t.Fatalf("len(args) = %d, want %d (%v)", len(got), len(want), got)
}
for i := range want {
if got[i] != want[i] {
t.Fatalf("args[%d] = %q, want %q (all=%v)", i, got[i], want[i], got)
}
}
}
func TestBuildCLICommandArgs_UsesAskMode(t *testing.T) {
got := buildCLICommandArgs("hello", "auto", "/tmp/workspace", true, false)
wantPrefix := []string{
"--print",
"--mode", "ask",
"--workspace", "/tmp/workspace",
"--model", "auto",
"--stream-partial-output", "--output-format", "stream-json",
}
if len(got) != len(wantPrefix)+1 {
t.Fatalf("unexpected arg length: %v", got)
}
for i := range wantPrefix {
if got[i] != wantPrefix[i] {
t.Fatalf("args[%d] = %q, want %q (all=%v)", i, got[i], wantPrefix[i], got)
}
}
if got[len(got)-1] != "hello" {
t.Fatalf("last arg = %q, want prompt", got[len(got)-1])
}
}
func TestBuildCLICommandArgs_ChatOnlyAddsTrust(t *testing.T) {
got := buildCLICommandArgs("hi", "", "/tmp/ws", false, true)
found := false
for _, a := range got {
if a == "--trust" {
found = true
break
}
}
if !found {
t.Fatalf("expected --trust when chatOnly=true, got args: %v", got)
}
}
// mockCmdBridge builds a bridge that executes a fake command for channel-logic testing.
func mockCmdBridge(t *testing.T) *CLIBridge {
t.Helper()
// Use "echo" as a mock command that outputs valid JSON lines
// We'll override Execute logic by using a custom cursorPath that is "echo"
return NewCLIBridge("echo", false, 2, 5*time.Second)
}
func TestExecute_ContextCancelled(t *testing.T) {
b := NewCLIBridge("/bin/sleep", false, 1, 30*time.Second)
ctx, cancel := context.WithCancel(context.Background())
cancel() // cancel immediately
outputChan, errChan := b.Execute(ctx, "test prompt", "gpt-4", "")
// Should receive error due to cancelled context
select {
case err := <-errChan:
if err == nil {
t.Error("expected error from cancelled context, got nil")
}
case <-time.After(2 * time.Second):
t.Fatal("timeout waiting for error from cancelled context")
}
// outputChan should be closed
select {
case _, ok := <-outputChan:
if ok {
t.Error("expected outputChan to be closed")
}
case <-time.After(2 * time.Second):
t.Fatal("timeout waiting for outputChan to close")
}
}
func TestExecute_SemaphoreBlocking(t *testing.T) {
b := NewCLIBridge("/bin/sleep", false, 1, 30*time.Second)
// Fill the semaphore
b.semaphore <- struct{}{}
ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
defer cancel()
_, errChan := b.Execute(ctx, "test", "model", "")
// Should get error because semaphore is full and context times out
select {
case err := <-errChan:
if err == nil {
t.Error("expected error, got nil")
}
case <-time.After(2 * time.Second):
t.Fatal("timeout waiting for semaphore blocking error")
}
// Release the semaphore
<-b.semaphore
}
func TestExecute_InvalidCommand(t *testing.T) {
b := NewCLIBridge("/nonexistent/command", false, 1, 5*time.Second)
ctx := context.Background()
outputChan, errChan := b.Execute(ctx, "test", "model", "")
var outputs []string
for line := range outputChan {
outputs = append(outputs, line)
}
// Should have error from starting invalid command
select {
case err := <-errChan:
if err == nil {
t.Error("expected error for invalid command, got nil")
}
case <-time.After(2 * time.Second):
t.Fatal("timeout waiting for error")
}
}
func TestExecute_ValidJSONOutput(t *testing.T) {
// Use "printf" to simulate JSON line output
b := NewCLIBridge("printf", false, 2, 5*time.Second)
ctx := context.Background()
// printf with JSON lines
outputChan, errChan := b.Execute(ctx, `{"type":"assistant","message":{"content":[{"text":"hello"}]}}\n{"type":"result"}`, "model", "")
var outputs []string
for line := range outputChan {
outputs = append(outputs, line)
}
// Check errChan for any errors
select {
case err := <-errChan:
if err != nil {
t.Logf("error (may be expected): %v", err)
}
default:
}
if len(outputs) == 0 {
t.Log("no outputs received (printf may not handle newlines as expected)")
} else {
t.Logf("received %d output lines", len(outputs))
}
}
func TestHandleACPNotification_ForwardsAgentMessageChunk(t *testing.T) {
w := &acpWorker{}
var got strings.Builder
w.setActiveSinkLocked(func(text string) {
got.WriteString(text)
})
params, err := json.Marshal(map[string]interface{}{
"sessionId": "s1",
"update": map[string]interface{}{
"sessionUpdate": "agent_message_chunk",
"content": map[string]interface{}{
"type": "text",
"text": "嗨",
},
},
})
if err != nil {
t.Fatalf("marshal params: %v", err)
}
handled := w.handleACPNotification(acpMessage{
Method: "session/update",
Params: params,
})
if !handled {
t.Fatal("expected session/update to be handled")
}
if got.String() != "嗨" {
t.Fatalf("sink text = %q, want %q", got.String(), "嗨")
}
}
func TestHandleACPNotification_IgnoresNonAssistantContentUpdate(t *testing.T) {
w := &acpWorker{}
var got strings.Builder
w.setActiveSinkLocked(func(text string) {
got.WriteString(text)
})
params, err := json.Marshal(map[string]interface{}{
"sessionId": "s1",
"update": map[string]interface{}{
"sessionUpdate": "planner_thought_chunk",
"content": map[string]interface{}{
"type": "text",
"text": "Handling user greetings",
},
},
})
if err != nil {
t.Fatalf("marshal params: %v", err)
}
handled := w.handleACPNotification(acpMessage{
Method: "session/update",
Params: params,
})
if !handled {
t.Fatal("expected session/update to be handled")
}
if got.String() != "" {
t.Fatalf("sink text = %q, want empty", got.String())
}
}
func TestReadLoop_DoesNotPanicWhenReaderDoneIsNil(t *testing.T) {
w := &acpWorker{
pending: make(map[int]chan acpResponse),
}
defer func() {
if r := recover(); r != nil {
t.Fatalf("readLoop should not panic when readerDone is nil, got %v", r)
}
}()
w.readLoop(io.NopCloser(strings.NewReader("")))
}
// Ensure exec is used (imported but may appear unused without integration tests)
var _ = exec.Command

internal/config/config.go Normal file
@@ -0,0 +1,88 @@
package config
import (
"fmt"
"os"
"path/filepath"
"gopkg.in/yaml.v3"
)
type Config struct {
Port int `yaml:"port"`
CursorCLIPath string `yaml:"cursor_cli_path"`
DefaultModel string `yaml:"default_model"`
Timeout int `yaml:"timeout"`
MaxConcurrent int `yaml:"max_concurrent"`
UseACP bool `yaml:"use_acp"`
ChatOnlyWorkspace bool `yaml:"chat_only_workspace"`
LogLevel string `yaml:"log_level"`
AvailableModels []string `yaml:"available_models,omitempty"`
}
// Defaults returns a Config populated with default values.
//
// ChatOnlyWorkspace defaults to true. This is the cursor-api-proxy posture:
// every Cursor CLI / ACP child is spawned in an empty temp directory with
// HOME / CURSOR_CONFIG_DIR overridden so it cannot see the host user's real
// project files or global ~/.cursor rules. Set to false only if you really
// want the Cursor agent to have access to the cwd where cursor-adapter
// started.
func Defaults() Config {
return Config{
Port: 8976,
CursorCLIPath: "agent",
DefaultModel: "auto",
Timeout: 300,
MaxConcurrent: 5,
UseACP: false,
ChatOnlyWorkspace: true,
LogLevel: "INFO",
}
}
// Load reads a YAML config file from path. If path is empty it defaults to
// ~/.cursor-adapter/config.yaml. When the file does not exist, a config with
// default values is returned without an error.
func Load(path string) (*Config, error) {
if path == "" {
home, err := os.UserHomeDir()
if err != nil {
return nil, fmt.Errorf("resolving home directory: %w", err)
}
path = filepath.Join(home, ".cursor-adapter", "config.yaml")
}
data, err := os.ReadFile(path)
if err != nil {
if os.IsNotExist(err) {
c := Defaults()
return &c, nil
}
return nil, fmt.Errorf("reading config file %s: %w", path, err)
}
cfg := Defaults()
if err := yaml.Unmarshal(data, &cfg); err != nil {
return nil, fmt.Errorf("parsing config file %s: %w", path, err)
}
if err := cfg.validate(); err != nil {
return nil, fmt.Errorf("validating config: %w", err)
}
return &cfg, nil
}
func (c *Config) validate() error {
if c.Port <= 0 {
return fmt.Errorf("port must be > 0, got %d", c.Port)
}
if c.CursorCLIPath == "" {
return fmt.Errorf("cursor_cli_path must not be empty")
}
if c.Timeout <= 0 {
return fmt.Errorf("timeout must be > 0, got %d", c.Timeout)
}
return nil
}

@@ -0,0 +1,157 @@
package config
import (
"os"
"path/filepath"
"reflect"
"testing"
)
func TestDefaults(t *testing.T) {
d := Defaults()
if d.Port != 8976 {
t.Errorf("expected port 8976, got %d", d.Port)
}
if d.CursorCLIPath != "agent" {
t.Errorf("expected cursor_cli_path 'agent', got %q", d.CursorCLIPath)
}
if d.DefaultModel != "auto" {
t.Errorf("expected default_model 'auto', got %q", d.DefaultModel)
}
if d.Timeout != 300 {
t.Errorf("expected timeout 300, got %d", d.Timeout)
}
if d.MaxConcurrent != 5 {
t.Errorf("expected max_concurrent 5, got %d", d.MaxConcurrent)
}
if d.LogLevel != "INFO" {
t.Errorf("expected log_level 'INFO', got %q", d.LogLevel)
}
if d.UseACP {
t.Errorf("expected use_acp false by default, got true")
}
}
func TestLoadValidYAML(t *testing.T) {
content := `port: 9000
cursor_cli_path: mycli
default_model: gpt-5.2
timeout: 60
max_concurrent: 10
use_acp: true
log_level: DEBUG
available_models:
- gpt-5.2
- claude-sonnet-4-20250514
`
dir := t.TempDir()
path := filepath.Join(dir, "config.yaml")
if err := os.WriteFile(path, []byte(content), 0644); err != nil {
t.Fatal(err)
}
cfg, err := Load(path)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
want := Config{
Port: 9000,
CursorCLIPath: "mycli",
DefaultModel: "gpt-5.2",
Timeout: 60,
MaxConcurrent: 10,
UseACP: true,
ChatOnlyWorkspace: true,
LogLevel: "DEBUG",
AvailableModels: []string{"gpt-5.2", "claude-sonnet-4-20250514"},
}
if !reflect.DeepEqual(*cfg, want) {
t.Errorf("mismatch\n got: %+v\nwant: %+v", *cfg, want)
}
}
func TestLoadMissingFile(t *testing.T) {
path := filepath.Join(t.TempDir(), "does-not-exist.yaml")
cfg, err := Load(path)
if err != nil {
t.Fatalf("expected no error for missing file, got: %v", err)
}
d := Defaults()
if !reflect.DeepEqual(*cfg, d) {
t.Errorf("expected defaults\n got: %+v\nwant: %+v", *cfg, d)
}
}
func TestLoadInvalidYAML(t *testing.T) {
content := `{{not valid yaml`
dir := t.TempDir()
path := filepath.Join(dir, "bad.yaml")
if err := os.WriteFile(path, []byte(content), 0644); err != nil {
t.Fatal(err)
}
_, err := Load(path)
if err == nil {
t.Fatal("expected error for invalid YAML, got nil")
}
}
func TestLoadValidation(t *testing.T) {
tests := []struct {
name string
yaml string
wantErr string
}{
{
name: "zero port",
yaml: "port: 0\ncursor_cli_path: agent\ntimeout: 10\n",
wantErr: "port must be > 0",
},
{
name: "empty cursor_cli_path",
yaml: "port: 80\ncursor_cli_path: \"\"\ntimeout: 10\n",
wantErr: "cursor_cli_path must not be empty",
},
{
name: "zero timeout",
yaml: "port: 80\ncursor_cli_path: agent\ntimeout: 0\n",
wantErr: "timeout must be > 0",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
dir := t.TempDir()
path := filepath.Join(dir, "config.yaml")
if err := os.WriteFile(path, []byte(tt.yaml), 0644); err != nil {
t.Fatal(err)
}
_, err := Load(path)
if err == nil {
t.Fatal("expected error, got nil")
}
if got := err.Error(); !contains(got, tt.wantErr) {
t.Errorf("error %q should contain %q", got, tt.wantErr)
}
})
}
}
func contains(s, substr string) bool {
return len(s) >= len(substr) && searchSubstring(s, substr)
}
func searchSubstring(s, substr string) bool {
for i := 0; i <= len(s)-len(substr); i++ {
if s[i:i+len(substr)] == substr {
return true
}
}
return false
}

@@ -0,0 +1,183 @@
package converter
import (
"encoding/json"
"fmt"
"strings"
"github.com/daniel/cursor-adapter/internal/types"
)
// CursorLine represents one line of Cursor CLI stream-json output.
type CursorLine struct {
Type string `json:"type"`
Subtype string `json:"subtype,omitempty"`
Message *CursorMessage `json:"message,omitempty"`
Result string `json:"result,omitempty"`
IsError bool `json:"is_error,omitempty"`
Usage *CursorUsage `json:"usage,omitempty"`
}
// FlexibleContent decodes from either a JSON string or a []CursorContent array.
type FlexibleContent []CursorContent
func (fc *FlexibleContent) UnmarshalJSON(data []byte) error {
var s string
if err := json.Unmarshal(data, &s); err == nil {
*fc = []CursorContent{{Type: "text", Text: s}}
return nil
}
var items []CursorContent
if err := json.Unmarshal(data, &items); err != nil {
return err
}
*fc = items
return nil
}
type CursorMessage struct {
Role string `json:"role"`
Content FlexibleContent `json:"content"`
}
type CursorContent struct {
Type string `json:"type"`
Text string `json:"text"`
}
type CursorUsage struct {
InputTokens int `json:"inputTokens"`
OutputTokens int `json:"outputTokens"`
}
// ConvertResult is the outcome of converting a single line.
type ConvertResult struct {
Chunk *types.ChatCompletionChunk
Done bool
Skip bool
Error error
Usage *CursorUsage
}
// StreamParser tracks accumulated assistant output so the OpenAI stream only
// emits newly appended text, not the full Cursor message each time.
type StreamParser struct {
chatID string
accumulated string
lastRawText string
}
func NewStreamParser(chatID string) *StreamParser {
return &StreamParser{chatID: chatID}
}
func (p *StreamParser) Parse(line string) ConvertResult {
trimmed := strings.TrimSpace(line)
if trimmed == "" {
return ConvertResult{Skip: true}
}
var cl CursorLine
if err := json.Unmarshal([]byte(trimmed), &cl); err != nil {
return ConvertResult{Error: fmt.Errorf("unmarshal error: %w", err)}
}
switch cl.Type {
case "system", "user":
return ConvertResult{Skip: true}
case "assistant":
content := ExtractContent(cl.Message)
return p.emitAssistantDelta(content)
case "result":
if cl.IsError {
errMsg := cl.Result
if errMsg == "" {
errMsg = "unknown cursor error"
}
return ConvertResult{Error: fmt.Errorf("cursor error: %s", errMsg)}
}
return ConvertResult{
Done: true,
Usage: cl.Usage,
}
default:
return ConvertResult{Skip: true}
}
}
func (p *StreamParser) ParseRawText(content string) ConvertResult {
content = strings.TrimSpace(content)
if content == "" {
return ConvertResult{Skip: true}
}
if content == p.lastRawText {
return ConvertResult{Skip: true}
}
p.lastRawText = content
chunk := types.NewChatCompletionChunk(p.chatID, 0, "", types.Delta{
Content: &content,
})
return ConvertResult{Chunk: &chunk}
}
// emitAssistantDelta handles both output modes the Cursor CLI can use:
//
// - CUMULATIVE: each assistant message contains the full text so far
// (text.startsWith(accumulated)). We emit only the new suffix and
// replace accumulated with the full text.
// - INCREMENTAL: each assistant message contains just the new fragment.
// We emit the fragment verbatim and append it to accumulated.
//
// Either way, the duplicate final "assistant" message that Cursor CLI emits
// at the end of a session is caught by the content == accumulated check and
// skipped.
func (p *StreamParser) emitAssistantDelta(content string) ConvertResult {
if content == "" {
return ConvertResult{Skip: true}
}
if content == p.accumulated {
return ConvertResult{Skip: true}
}
var delta string
if p.accumulated != "" && strings.HasPrefix(content, p.accumulated) {
delta = content[len(p.accumulated):]
p.accumulated = content
} else {
delta = content
p.accumulated += content
}
if delta == "" {
return ConvertResult{Skip: true}
}
chunk := types.NewChatCompletionChunk(p.chatID, 0, "", types.Delta{
Content: &delta,
})
return ConvertResult{Chunk: &chunk}
}
// ConvertLine converts one line of Cursor stream-json into an OpenAI SSE chunk.
func ConvertLine(line string, chatID string) ConvertResult {
return NewStreamParser(chatID).Parse(line)
}
// ExtractContent extracts all text content from a CursorMessage.
func ExtractContent(msg *CursorMessage) string {
if msg == nil {
return ""
}
var parts []string
for _, c := range msg.Content {
if c.Text != "" {
parts = append(parts, c.Text)
}
}
return strings.Join(parts, "")
}

@@ -0,0 +1,305 @@
package converter
import (
"fmt"
"strings"
"testing"
)
func TestConvertLineAssistant(t *testing.T) {
line := `{"type":"assistant","message":{"role":"assistant","content":[{"type":"text","text":"Hello, world!"}]}}`
result := ConvertLine(line, "chat-123")
if result.Skip {
t.Error("expected not Skip")
}
if result.Done {
t.Error("expected not Done")
}
if result.Error != nil {
t.Fatalf("unexpected error: %v", result.Error)
}
if result.Chunk == nil {
t.Fatal("expected Chunk, got nil")
}
if result.Chunk.ID != "chat-123" {
t.Errorf("Chunk.ID = %q, want %q", result.Chunk.ID, "chat-123")
}
if result.Chunk.Object != "chat.completion.chunk" {
t.Errorf("Chunk.Object = %q, want %q", result.Chunk.Object, "chat.completion.chunk")
}
if len(result.Chunk.Choices) != 1 {
t.Fatalf("len(Choices) = %d, want 1", len(result.Chunk.Choices))
}
if *result.Chunk.Choices[0].Delta.Content != "Hello, world!" {
t.Errorf("Delta.Content = %q, want %q", *result.Chunk.Choices[0].Delta.Content, "Hello, world!")
}
}
func TestConvertLineSystem(t *testing.T) {
line := `{"type":"system","message":{"role":"system","content":"init"}}`
result := ConvertLine(line, "chat-123")
if !result.Skip {
t.Error("expected Skip for system line")
}
if result.Chunk != nil {
t.Error("expected nil Chunk for system line")
}
if result.Error != nil {
t.Errorf("unexpected error: %v", result.Error)
}
}
func TestConvertLineUser(t *testing.T) {
line := `{"type":"user","message":{"role":"user","content":"hello"}}`
result := ConvertLine(line, "chat-123")
if !result.Skip {
t.Error("expected Skip for user line")
}
if result.Chunk != nil {
t.Error("expected nil Chunk for user line")
}
if result.Error != nil {
t.Errorf("unexpected error: %v", result.Error)
}
}
func TestConvertLineResultSuccess(t *testing.T) {
line := `{"type":"result","subtype":"success","result":"done","usage":{"inputTokens":100,"outputTokens":50}}`
result := ConvertLine(line, "chat-123")
if !result.Done {
t.Error("expected Done")
}
if result.Skip {
t.Error("expected not Skip")
}
if result.Error != nil {
t.Fatalf("unexpected error: %v", result.Error)
}
if result.Usage == nil {
t.Fatal("expected Usage, got nil")
}
if result.Usage.InputTokens != 100 {
t.Errorf("Usage.InputTokens = %d, want 100", result.Usage.InputTokens)
}
if result.Usage.OutputTokens != 50 {
t.Errorf("Usage.OutputTokens = %d, want 50", result.Usage.OutputTokens)
}
}
func TestConvertLineResultError(t *testing.T) {
line := `{"type":"result","is_error":true,"result":"something went wrong"}`
result := ConvertLine(line, "chat-123")
if result.Error == nil {
t.Fatal("expected error, got nil")
}
if !strings.Contains(result.Error.Error(), "something went wrong") {
t.Errorf("error = %q, want to contain %q", result.Error.Error(), "something went wrong")
}
if result.Done {
t.Error("expected not Done for error")
}
}
func TestConvertLineEmpty(t *testing.T) {
tests := []string{"", " ", "\n", " \n "}
for _, line := range tests {
result := ConvertLine(line, "chat-123")
if !result.Skip {
t.Errorf("expected Skip for empty/whitespace line %q", line)
}
if result.Error != nil {
t.Errorf("unexpected error for empty line: %v", result.Error)
}
}
}
func TestConvertLineInvalidJSON(t *testing.T) {
line := `{"type":"assistant", invalid json}`
result := ConvertLine(line, "chat-123")
if result.Error == nil {
t.Fatal("expected error for invalid JSON, got nil")
}
if !strings.Contains(result.Error.Error(), "unmarshal error") {
t.Errorf("error = %q, want to contain %q", result.Error.Error(), "unmarshal error")
}
}
func TestExtractContent(t *testing.T) {
t.Run("nil message", func(t *testing.T) {
result := ExtractContent(nil)
if result != "" {
t.Errorf("ExtractContent(nil) = %q, want empty", result)
}
})
t.Run("single content", func(t *testing.T) {
msg := &CursorMessage{
Role: "assistant",
Content: []CursorContent{
{Type: "text", Text: "Hello"},
},
}
result := ExtractContent(msg)
if result != "Hello" {
t.Errorf("ExtractContent() = %q, want %q", result, "Hello")
}
})
t.Run("multiple content", func(t *testing.T) {
msg := &CursorMessage{
Role: "assistant",
Content: []CursorContent{
{Type: "text", Text: "Hello"},
{Type: "text", Text: ", "},
{Type: "text", Text: "world!"},
},
}
result := ExtractContent(msg)
if result != "Hello, world!" {
t.Errorf("ExtractContent() = %q, want %q", result, "Hello, world!")
}
})
t.Run("empty content", func(t *testing.T) {
msg := &CursorMessage{
Role: "assistant",
Content: []CursorContent{},
}
result := ExtractContent(msg)
if result != "" {
t.Errorf("ExtractContent() = %q, want empty", result)
}
})
}
func TestStreamParser_OnlyEmitsNewDeltaFromAccumulatedAssistantMessages(t *testing.T) {
parser := NewStreamParser("chat-123")
first := parser.Parse(`{"type":"assistant","message":{"role":"assistant","content":[{"type":"text","text":"Hel"}]}}`)
if first.Error != nil {
t.Fatalf("unexpected error on first chunk: %v", first.Error)
}
if first.Chunk == nil || first.Chunk.Choices[0].Delta.Content == nil {
t.Fatal("expected first chunk content")
}
if got := *first.Chunk.Choices[0].Delta.Content; got != "Hel" {
t.Fatalf("first delta = %q, want %q", got, "Hel")
}
second := parser.Parse(`{"type":"assistant","message":{"role":"assistant","content":[{"type":"text","text":"Hello"}]}}`)
if second.Error != nil {
t.Fatalf("unexpected error on second chunk: %v", second.Error)
}
if second.Chunk == nil || second.Chunk.Choices[0].Delta.Content == nil {
t.Fatal("expected second chunk content")
}
if got := *second.Chunk.Choices[0].Delta.Content; got != "lo" {
t.Fatalf("second delta = %q, want %q", got, "lo")
}
}
func TestStreamParser_SkipsFinalDuplicateAssistantMessage(t *testing.T) {
parser := NewStreamParser("chat-123")
first := parser.Parse(`{"type":"assistant","message":{"role":"assistant","content":[{"type":"text","text":"Hello"}]}}`)
if first.Skip || first.Error != nil || first.Chunk == nil {
t.Fatalf("expected first assistant chunk, got %+v", first)
}
duplicate := parser.Parse(`{"type":"assistant","message":{"role":"assistant","content":[{"type":"text","text":"Hello"}]}}`)
if !duplicate.Skip {
t.Fatalf("expected duplicate assistant message to be skipped, got %+v", duplicate)
}
}
func TestStreamParser_ResultIncludesUsage(t *testing.T) {
parser := NewStreamParser("chat-123")
result := parser.Parse(`{"type":"result","subtype":"success","usage":{"inputTokens":10,"outputTokens":4}}`)
if !result.Done {
t.Fatal("expected result.Done")
}
if result.Usage == nil {
t.Fatal("expected usage")
}
if result.Usage.InputTokens != 10 || result.Usage.OutputTokens != 4 {
t.Fatalf("unexpected usage: %+v", result.Usage)
}
}
func TestStreamParser_CanReconstructFinalContentFromIncrementalAssistantMessages(t *testing.T) {
parser := NewStreamParser("chat-123")
lines := []string{
`{"type":"assistant","message":{"role":"assistant","content":[{"type":"text","text":"你"}]}}`,
`{"type":"assistant","message":{"role":"assistant","content":[{"type":"text","text":"你好"}]}}`,
`{"type":"assistant","message":{"role":"assistant","content":[{"type":"text","text":"你好,世界"}]}}`,
}
var content strings.Builder
for i, line := range lines {
result := parser.Parse(line)
if result.Error != nil {
t.Fatalf("line %d unexpected error: %v", i, result.Error)
}
if result.Skip {
continue
}
if result.Chunk == nil || result.Chunk.Choices[0].Delta.Content == nil {
t.Fatalf("line %d expected chunk content, got %+v", i, result)
}
content.WriteString(*result.Chunk.Choices[0].Delta.Content)
}
if got := content.String(); got != "你好,世界" {
t.Fatalf("reconstructed content = %q, want %q", got, "你好,世界")
}
}
func TestStreamParser_RawTextFallbackSkipsExactDuplicates(t *testing.T) {
parser := NewStreamParser("chat-123")
first := parser.ParseRawText("plain chunk")
if first.Skip || first.Chunk == nil || first.Chunk.Choices[0].Delta.Content == nil {
t.Fatalf("expected raw text chunk, got %+v", first)
}
duplicate := parser.ParseRawText("plain chunk")
if !duplicate.Skip {
t.Fatalf("expected duplicate raw text to be skipped, got %+v", duplicate)
}
}
func TestStreamParser_IncrementalFragmentsAccumulateAndSkipFinalDuplicate(t *testing.T) {
parser := NewStreamParser("chat-123")
fragments := []string{"你", "好,", "世", "界!"}
var got strings.Builder
for i, fr := range fragments {
line := fmt.Sprintf(`{"type":"assistant","message":{"role":"assistant","content":[{"type":"text","text":%q}]}}`, fr)
res := parser.Parse(line)
if res.Skip || res.Error != nil || res.Chunk == nil {
t.Fatalf("fragment %d: expected delta chunk, got %+v", i, res)
}
got.WriteString(*res.Chunk.Choices[0].Delta.Content)
}
if got.String() != "你好,世界!" {
t.Fatalf("reconstructed = %q, want 你好,世界!", got.String())
}
final := parser.Parse(`{"type":"assistant","message":{"role":"assistant","content":[{"type":"text","text":"你好,世界!"}]}}`)
if !final.Skip {
t.Fatalf("expected final duplicate cumulative message to be skipped, got %+v", final)
}
}

@@ -0,0 +1,109 @@
package converter
import "strings"
// Short-name aliases → actual Cursor model IDs.
// Allows users to configure friendly names in OpenCode instead of memorising
// exact Cursor IDs like "claude-4.6-sonnet-medium".
var shortAlias = map[string]string{
// Claude 4.6
"sonnet-4.6": "claude-4.6-sonnet-medium",
"sonnet-4.6-thinking": "claude-4.6-sonnet-medium-thinking",
"opus-4.6": "claude-4.6-opus-high",
"opus-4.6-thinking": "claude-4.6-opus-high-thinking",
"opus-4.6-max": "claude-4.6-opus-max",
// Claude 4.5
"sonnet-4.5": "claude-4.5-sonnet",
"sonnet-4.5-thinking": "claude-4.5-sonnet-thinking",
"opus-4.5": "claude-4.5-opus-high",
"opus-4.5-thinking": "claude-4.5-opus-high-thinking",
// Claude 4
"sonnet-4": "claude-4-sonnet",
"sonnet-4-thinking": "claude-4-sonnet-thinking",
// Anthropic API-style names
"claude-opus-4-6": "claude-4.6-opus-high",
"claude-opus-4.6": "claude-4.6-opus-high",
"claude-sonnet-4-6": "claude-4.6-sonnet-medium",
"claude-sonnet-4.6": "claude-4.6-sonnet-medium",
"claude-opus-4-5": "claude-4.5-opus-high",
"claude-opus-4.5": "claude-4.5-opus-high",
"claude-sonnet-4-5": "claude-4.5-sonnet",
"claude-sonnet-4.5": "claude-4.5-sonnet",
"claude-sonnet-4": "claude-4-sonnet",
"claude-opus-4-6-thinking": "claude-4.6-opus-high-thinking",
"claude-sonnet-4-6-thinking": "claude-4.6-sonnet-medium-thinking",
"claude-opus-4-5-thinking": "claude-4.5-opus-high-thinking",
"claude-sonnet-4-5-thinking": "claude-4.5-sonnet-thinking",
"claude-sonnet-4-thinking": "claude-4-sonnet-thinking",
// Old Anthropic date-based names
"claude-sonnet-4-20250514": "claude-4-sonnet",
"claude-opus-4-20250514": "claude-4.5-opus-high",
// GPT shortcuts
"gpt-5.4": "gpt-5.4-medium",
"gpt-5.4-fast": "gpt-5.4-medium-fast",
// Gemini
"gemini-3.1": "gemini-3.1-pro",
}
// ResolveToCursorModel maps a user-supplied model name to its Cursor model ID.
// If the name is already a valid Cursor ID, it passes through unchanged.
func ResolveToCursorModel(requested string) string {
if requested == "" {
return ""
}
key := strings.ToLower(strings.TrimSpace(requested))
if mapped, ok := shortAlias[key]; ok {
return mapped
}
return requested
}
type aliasEntry struct {
CursorID string
AliasID string
Name string
}
var reverseAliases = []aliasEntry{
{"claude-4.6-opus-high", "claude-opus-4-6", "Claude 4.6 Opus"},
{"claude-4.6-opus-high-thinking", "claude-opus-4-6-thinking", "Claude 4.6 Opus (Thinking)"},
{"claude-4.6-sonnet-medium", "claude-sonnet-4-6", "Claude 4.6 Sonnet"},
{"claude-4.6-sonnet-medium-thinking", "claude-sonnet-4-6-thinking", "Claude 4.6 Sonnet (Thinking)"},
{"claude-4.5-opus-high", "claude-opus-4-5", "Claude 4.5 Opus"},
{"claude-4.5-opus-high-thinking", "claude-opus-4-5-thinking", "Claude 4.5 Opus (Thinking)"},
{"claude-4.5-sonnet", "claude-sonnet-4-5", "Claude 4.5 Sonnet"},
{"claude-4.5-sonnet-thinking", "claude-sonnet-4-5-thinking", "Claude 4.5 Sonnet (Thinking)"},
{"claude-4-sonnet", "claude-sonnet-4", "Claude 4 Sonnet"},
{"claude-4-sonnet-thinking", "claude-sonnet-4-thinking", "Claude 4 Sonnet (Thinking)"},
}
// GetAnthropicModelAliases returns alias entries for models available in Cursor,
// so that /v1/models shows both Cursor IDs and friendly Anthropic-style names.
func GetAnthropicModelAliases(availableCursorIDs []string) []struct {
ID string
Name string
} {
set := make(map[string]bool, len(availableCursorIDs))
for _, id := range availableCursorIDs {
set[id] = true
}
var result []struct {
ID string
Name string
}
for _, a := range reverseAliases {
if set[a.CursorID] {
result = append(result, struct {
ID string
Name string
}{ID: a.AliasID, Name: a.Name})
}
}
return result
}

@@ -0,0 +1,48 @@
// Package sanitize strips third-party AI branding, telemetry headers, and
// identifying metadata from prompts before they are forwarded to the Cursor
// CLI. Without this, the Cursor agent sees "You are Claude Code..." style
// system prompts and behaves confusingly (trying to use tools it doesn't
// own, reasoning about being "Anthropic's CLI", etc.).
//
// Ported from cursor-api-proxy/src/lib/sanitize.ts.
package sanitize
import "regexp"
type rule struct {
pattern *regexp.Regexp
replace string
}
// Note: Go regexp is RE2, no lookbehind/lookahead, but these rules don't need any.
// (?i) enables case-insensitive for that single rule.
var rules = []rule{
// Strip x-anthropic-billing-header line (injected by Claude Code CLI).
{regexp.MustCompile(`(?i)x-anthropic-billing-header:[^\n]*\n?`), ""},
// Strip individual telemetry tokens that may appear in headers or text.
{regexp.MustCompile(`(?i)\bcc_version=[^\s;,\n]+[;,]?\s*`), ""},
{regexp.MustCompile(`(?i)\bcc_entrypoint=[^\s;,\n]+[;,]?\s*`), ""},
{regexp.MustCompile(`(?i)\bcch=[a-f0-9]+[;,]?\s*`), ""},
// Replace "Claude Code" product name with "Cursor" (case-sensitive on purpose).
{regexp.MustCompile(`\bClaude Code\b`), "Cursor"},
// Replace full Anthropic CLI description. Handle both straight and curly apostrophes.
{regexp.MustCompile(`(?i)Anthropic['\x{2019}]s official CLI for Claude`), "Cursor AI assistant"},
// Replace remaining Anthropic brand references.
{regexp.MustCompile(`\bAnthropic\b`), "Cursor"},
// Known Anthropic domains.
{regexp.MustCompile(`(?i)anthropic\.com`), "cursor.com"},
{regexp.MustCompile(`(?i)claude\.ai`), "cursor.sh"},
// Normalise leftover leading semicolons/whitespace at start of content.
{regexp.MustCompile(`^[;,\s]+`), ""},
}
// Text applies all sanitization rules to s.
func Text(s string) string {
if s == "" {
return s
}
for _, r := range rules {
s = r.pattern.ReplaceAllString(s, r.replace)
}
return s
}

@@ -0,0 +1,47 @@
package sanitize
import "testing"
func TestText_StripsBillingHeader(t *testing.T) {
in := "x-anthropic-billing-header: cc_version=1.0.8; cch=abc123\nhello world"
out := Text(in)
if out != "hello world" {
t.Errorf("got %q, want %q", out, "hello world")
}
}
func TestText_StripsTelemetryTokens(t *testing.T) {
in := "request: cc_version=2.3; cc_entrypoint=cli; cch=deadbeef the rest"
out := Text(in)
if got, want := out, "request: the rest"; got != want {
t.Errorf("got %q, want %q", got, want)
}
}
func TestText_ReplacesClaudeCodeBranding(t *testing.T) {
cases := map[string]string{
"You are Claude Code, Anthropic's official CLI for Claude.": "You are Cursor, Cursor AI assistant.",
"Powered by Anthropic.": "Powered by Cursor.",
"Visit https://claude.ai/docs or https://anthropic.com for more": "Visit https://cursor.sh/docs or https://cursor.com for more",
}
for in, want := range cases {
if got := Text(in); got != want {
t.Errorf("Text(%q)\n got: %q\n want: %q", in, got, want)
}
}
}
func TestText_EmptyStringPassesThrough(t *testing.T) {
if got := Text(""); got != "" {
t.Errorf("Text(\"\") = %q, want empty", got)
}
}
func TestText_IsIdempotent(t *testing.T) {
in := "Claude Code says hi at anthropic.com"
first := Text(in)
second := Text(first)
if first != second {
t.Errorf("sanitize is not idempotent:\n first: %q\n second: %q", first, second)
}
}


@@ -0,0 +1,170 @@
package server
import (
"encoding/json"
"fmt"
"strings"
"github.com/daniel/cursor-adapter/internal/sanitize"
"github.com/daniel/cursor-adapter/internal/types"
)
// buildPromptFromAnthropicMessages flattens an Anthropic Messages request into
// a single prompt string suitable for `agent --print`. It:
// - renders tool_use / tool_result blocks as readable pseudo-XML so the
// model can follow the trajectory of previous tool calls
// - embeds the `tools` schema as part of the System block via
// toolsToSystemText, so the model knows what tools the outer agent (e.g.
// Claude Code) has available
// - runs every piece of free text through sanitize.Text to strip Claude Code
// branding and telemetry headers that would confuse the Cursor agent
func buildPromptFromAnthropicMessages(req types.AnthropicMessagesRequest) string {
var systemParts []string
for _, block := range req.System {
if block.Type == "text" && strings.TrimSpace(block.Text) != "" {
systemParts = append(systemParts, sanitize.Text(block.Text))
}
}
if tools := toolsToSystemText(req.Tools); tools != "" {
systemParts = append(systemParts, tools)
}
var convo []string
for _, msg := range req.Messages {
text := anthropicContentToText(msg.Content)
if text == "" {
continue
}
switch msg.Role {
case "assistant":
convo = append(convo, "Assistant: "+text)
default:
convo = append(convo, "User: "+text)
}
}
var prompt strings.Builder
if len(systemParts) > 0 {
prompt.WriteString("System:\n")
prompt.WriteString(strings.Join(systemParts, "\n\n"))
prompt.WriteString("\n\n")
}
prompt.WriteString(strings.Join(convo, "\n\n"))
prompt.WriteString("\n\nAssistant:")
return prompt.String()
}
// anthropicContentToText renders a single message's content blocks as a
// single string. Unlike the old implementation, this one preserves tool_use
// and tool_result blocks so the model sees the full conversation trajectory
// rather than mysterious gaps.
func anthropicContentToText(content types.AnthropicContent) string {
var parts []string
for _, block := range content {
switch block.Type {
case "text":
if block.Text != "" {
parts = append(parts, sanitize.Text(block.Text))
}
case "tool_use":
input := strings.TrimSpace(string(block.Input))
if input == "" {
input = "{}"
}
parts = append(parts, fmt.Sprintf(
"<tool_use id=%q name=%q>\n%s\n</tool_use>",
block.ID, block.Name, input,
))
case "tool_result":
body := toolResultBody(block.Content)
errAttr := ""
if block.IsError {
errAttr = ` is_error="true"`
}
parts = append(parts, fmt.Sprintf(
"<tool_result tool_use_id=%q%s>\n%s\n</tool_result>",
block.ToolUseID, errAttr, body,
))
case "image":
parts = append(parts, "[Image]")
case "document":
title := block.Title
if title == "" {
title = "Document"
}
parts = append(parts, "[Document: "+title+"]")
}
}
return strings.Join(parts, "\n")
}
// toolResultBody flattens the `content` field of a tool_result block, which
// can be either a plain string or an array of `{type, text}` content parts.
func toolResultBody(raw json.RawMessage) string {
if len(raw) == 0 {
return ""
}
var asString string
if err := json.Unmarshal(raw, &asString); err == nil {
return sanitize.Text(asString)
}
var parts []struct {
Type string `json:"type"`
Text string `json:"text"`
}
if err := json.Unmarshal(raw, &parts); err == nil {
var out []string
for _, p := range parts {
if p.Type == "text" && p.Text != "" {
out = append(out, sanitize.Text(p.Text))
}
}
return strings.Join(out, "\n")
}
return string(raw)
}
// toolsToSystemText renders a tools schema array into a system-prompt chunk
// describing each tool. The idea (from cursor-api-proxy) is that since the
// Cursor CLI does not expose native tool_call deltas over the proxy, we tell
// the model what tools exist so it can reference them in its text output.
//
// NOTE: This is a one-way passthrough. The proxy cannot turn the model's
// textual "I would call Write with {...}" back into structured tool_use
// blocks. Callers that need real tool-use routing (e.g. Claude Code's coding
// agent) should run tools client-side and feed tool_result back in.
func toolsToSystemText(tools []types.AnthropicTool) string {
if len(tools) == 0 {
return ""
}
var lines []string
lines = append(lines,
"Available tools (they belong to the caller, not to you; describe your",
"intended call in plain text and the caller will execute it):",
"",
)
for _, t := range tools {
schema := strings.TrimSpace(string(t.InputSchema))
if schema == "" {
schema = "{}"
} else {
var pretty any
if err := json.Unmarshal(t.InputSchema, &pretty); err == nil {
if out, err := json.MarshalIndent(pretty, "", " "); err == nil {
schema = string(out)
}
}
}
lines = append(lines,
"Function: "+t.Name,
"Description: "+sanitize.Text(t.Description),
"Parameters: "+schema,
"",
)
}
return strings.TrimRight(strings.Join(lines, "\n"), "\n")
}
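The pseudo-XML rendering used above is easy to illustrate in isolation. The sketch below reimplements just the `tool_use` branch of `anthropicContentToText` as a standalone program (the id/name/input values are invented, and sanitization is omitted):

```go
package main

import "fmt"

// renderToolUse mirrors the tool_use branch of anthropicContentToText:
// the block is rendered as readable pseudo-XML, with an empty input
// normalized to "{}". Standalone sketch; values below are invented.
func renderToolUse(id, name, input string) string {
	if input == "" {
		input = "{}"
	}
	return fmt.Sprintf("<tool_use id=%q name=%q>\n%s\n</tool_use>", id, name, input)
}

func main() {
	fmt.Println(renderToolUse("tu_1", "Read", `{"path":"main.go"}`))
	// <tool_use id="tu_1" name="Read">
	// {"path":"main.go"}
	// </tool_use>
}
```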


@@ -0,0 +1,190 @@
package server
import (
"context"
"encoding/json"
"fmt"
"net/http"
"strings"
"time"
"github.com/daniel/cursor-adapter/internal/converter"
"github.com/daniel/cursor-adapter/internal/types"
)
func (s *Server) handleAnthropicMessages(w http.ResponseWriter, r *http.Request) {
var req types.AnthropicMessagesRequest
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
writeJSON(w, http.StatusBadRequest, types.NewErrorResponse("invalid request body: "+err.Error(), "invalid_request_error", ""))
return
}
defer r.Body.Close()
if req.MaxTokens <= 0 {
writeJSON(w, http.StatusBadRequest, types.NewErrorResponse("max_tokens is required", "invalid_request_error", ""))
return
}
if len(req.Messages) == 0 {
writeJSON(w, http.StatusBadRequest, types.NewErrorResponse("messages must not be empty", "invalid_request_error", ""))
return
}
model := req.Model
if model == "" {
model = s.cfg.DefaultModel
}
cursorModel := converter.ResolveToCursorModel(model)
sessionKey := ensureSessionHeader(w, r)
msgID := fmt.Sprintf("msg_%d", time.Now().UnixNano())
prompt := buildPromptFromAnthropicMessages(req)
if req.Stream {
s.streamAnthropicMessages(w, r, prompt, cursorModel, model, msgID, sessionKey)
return
}
s.nonStreamAnthropicMessages(w, r, prompt, cursorModel, model, msgID, sessionKey)
}
func (s *Server) streamAnthropicMessages(w http.ResponseWriter, r *http.Request, prompt, cursorModel, displayModel, msgID, sessionKey string) {
sse := NewSSEWriter(w)
parser := converter.NewStreamParser(msgID)
ctx, cancel := context.WithTimeout(r.Context(), time.Duration(s.cfg.Timeout)*time.Second)
defer cancel()
outputChan, errChan := s.br.Execute(ctx, prompt, cursorModel, sessionKey)
writeAnthropicSSE(sse, map[string]interface{}{
"type": "message_start",
"message": map[string]interface{}{
"id": msgID,
"type": "message",
"role": "assistant",
"model": displayModel,
"content": []interface{}{},
},
})
writeAnthropicSSE(sse, map[string]interface{}{
"type": "content_block_start",
"index": 0,
"content_block": map[string]interface{}{"type": "text", "text": ""},
})
var accumulated strings.Builder
for line := range outputChan {
result := parser.Parse(line)
if result.Skip {
continue
}
if result.Error != nil {
parseErr := result.Error
if strings.Contains(parseErr.Error(), "unmarshal error") {
result = parser.ParseRawText(line)
if result.Skip {
continue
}
if result.Chunk != nil && len(result.Chunk.Choices) > 0 {
if c := result.Chunk.Choices[0].Delta.Content; c != nil {
accumulated.WriteString(*c)
writeAnthropicSSE(sse, map[string]interface{}{
"type": "content_block_delta",
"index": 0,
"delta": map[string]interface{}{"type": "text_delta", "text": *c},
})
continue
}
}
}
// Use the original parse error: ParseRawText may have reset
// result.Error to nil, so result.Error.Error() here could panic.
writeAnthropicSSE(sse, map[string]interface{}{
"type": "error",
"error": map[string]interface{}{"type": "api_error", "message": parseErr.Error()},
})
return
}
if result.Chunk != nil && len(result.Chunk.Choices) > 0 {
if c := result.Chunk.Choices[0].Delta.Content; c != nil {
accumulated.WriteString(*c)
writeAnthropicSSE(sse, map[string]interface{}{
"type": "content_block_delta",
"index": 0,
"delta": map[string]interface{}{"type": "text_delta", "text": *c},
})
}
}
if result.Done {
break
}
}
outTokens := maxInt(1, accumulated.Len()/4)
writeAnthropicSSE(sse, map[string]interface{}{
"type": "content_block_stop",
"index": 0,
})
writeAnthropicSSE(sse, map[string]interface{}{
"type": "message_delta",
"delta": map[string]interface{}{"stop_reason": "end_turn", "stop_sequence": nil},
"usage": map[string]interface{}{"output_tokens": outTokens},
})
writeAnthropicSSE(sse, map[string]interface{}{
"type": "message_stop",
})
// Drain any late bridge error so the producer goroutine can exit; the
// SSE stream has already been finalized, so there is nothing useful
// left to surface to the client here.
select {
case <-errChan:
default:
}
}
func (s *Server) nonStreamAnthropicMessages(w http.ResponseWriter, r *http.Request, prompt, cursorModel, displayModel, msgID, sessionKey string) {
ctx, cancel := context.WithTimeout(r.Context(), time.Duration(s.cfg.Timeout)*time.Second)
defer cancel()
content, err := s.br.ExecuteSync(ctx, prompt, cursorModel, sessionKey)
if err != nil {
writeJSON(w, http.StatusInternalServerError, types.NewErrorResponse(err.Error(), "api_error", ""))
return
}
usage := estimateUsage(prompt, content)
resp := types.AnthropicMessagesResponse{
ID: msgID,
Type: "message",
Role: "assistant",
Content: []types.AnthropicTextBlock{{Type: "text", Text: content}},
Model: displayModel,
StopReason: "end_turn",
Usage: types.AnthropicUsage{
InputTokens: usage.PromptTokens,
OutputTokens: usage.CompletionTokens,
},
}
writeJSON(w, http.StatusOK, resp)
}
func writeAnthropicSSE(sse *SSEWriter, event interface{}) {
data, err := json.Marshal(event)
if err != nil {
return
}
fmt.Fprintf(sse.w, "data: %s\n\n", data)
if sse.flush != nil {
sse.flush.Flush()
}
}

internal/server/handlers.go Normal file

@@ -0,0 +1,268 @@
package server
import (
"context"
"encoding/json"
"fmt"
"log/slog"
"math"
"net/http"
"strings"
"sync"
"time"
"github.com/daniel/cursor-adapter/internal/converter"
"github.com/daniel/cursor-adapter/internal/sanitize"
"github.com/daniel/cursor-adapter/internal/types"
)
var (
modelCacheMu sync.Mutex
modelCacheData []string
modelCacheAt time.Time
modelCacheTTL = 5 * time.Minute
)
func (s *Server) handleListModels(w http.ResponseWriter, r *http.Request) {
models, err := s.cachedListModels(r.Context())
if err != nil {
writeJSON(w, http.StatusInternalServerError, types.NewErrorResponse(err.Error(), "internal_error", ""))
return
}
ts := time.Now().Unix()
data := make([]types.ModelInfo, 0, len(models)*2)
for _, m := range models {
data = append(data, types.ModelInfo{ID: m, Object: "model", Created: ts, OwnedBy: "cursor"})
}
aliases := converter.GetAnthropicModelAliases(models)
for _, a := range aliases {
data = append(data, types.ModelInfo{ID: a.ID, Object: "model", Created: ts, OwnedBy: "cursor"})
}
writeJSON(w, http.StatusOK, types.ModelList{Object: "list", Data: data})
}
func (s *Server) cachedListModels(ctx context.Context) ([]string, error) {
modelCacheMu.Lock()
defer modelCacheMu.Unlock()
if modelCacheData != nil && time.Since(modelCacheAt) < modelCacheTTL {
return modelCacheData, nil
}
models, err := s.br.ListModels(ctx)
if err != nil {
return nil, err
}
modelCacheData = models
modelCacheAt = time.Now()
return models, nil
}
func (s *Server) handleChatCompletions(w http.ResponseWriter, r *http.Request) {
var req types.ChatCompletionRequest
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
writeJSON(w, http.StatusBadRequest, types.NewErrorResponse("invalid request body: "+err.Error(), "invalid_request_error", ""))
return
}
defer r.Body.Close()
if len(req.Messages) == 0 {
writeJSON(w, http.StatusBadRequest, types.NewErrorResponse("messages must not be empty", "invalid_request_error", ""))
return
}
var parts []string
for _, m := range req.Messages {
text := sanitize.Text(string(m.Content))
parts = append(parts, fmt.Sprintf("%s: %s", m.Role, text))
}
prompt := strings.Join(parts, "\n")
model := req.Model
if model == "" {
model = s.cfg.DefaultModel
}
cursorModel := converter.ResolveToCursorModel(model)
sessionKey := ensureSessionHeader(w, r)
chatID := fmt.Sprintf("chatcmpl-%d", time.Now().UnixNano())
created := time.Now().Unix()
if req.Stream {
s.streamChat(w, r, prompt, cursorModel, model, chatID, created, sessionKey)
return
}
s.nonStreamChat(w, r, prompt, cursorModel, model, chatID, created, sessionKey)
}
func (s *Server) streamChat(w http.ResponseWriter, r *http.Request, prompt, cursorModel, displayModel, chatID string, created int64, sessionKey string) {
sse := NewSSEWriter(w)
parser := converter.NewStreamParser(chatID)
ctx, cancel := context.WithTimeout(r.Context(), time.Duration(s.cfg.Timeout)*time.Second)
defer cancel()
outputChan, errChan := s.br.Execute(ctx, prompt, cursorModel, sessionKey)
roleAssistant := "assistant"
initChunk := types.NewChatCompletionChunk(chatID, created, displayModel, types.Delta{
Role: &roleAssistant,
})
if err := sse.WriteChunk(initChunk); err != nil {
return
}
var accumulated strings.Builder
for line := range outputChan {
result := parser.Parse(line)
if result.Skip {
continue
}
if result.Error != nil {
parseErr := result.Error
if strings.Contains(parseErr.Error(), "unmarshal error") {
result = parser.ParseRawText(line)
if result.Skip {
continue
}
if result.Chunk != nil {
result.Chunk.Created = created
result.Chunk.Model = displayModel
if len(result.Chunk.Choices) > 0 {
if c := result.Chunk.Choices[0].Delta.Content; c != nil {
accumulated.WriteString(*c)
}
}
if err := sse.WriteChunk(*result.Chunk); err != nil {
return
}
continue
}
}
// Report the original parse error: ParseRawText may have cleared
// result.Error, so dereferencing it here could panic.
sse.WriteError(parseErr.Error())
return
}
if result.Chunk != nil {
result.Chunk.Created = created
result.Chunk.Model = displayModel
if len(result.Chunk.Choices) > 0 {
if c := result.Chunk.Choices[0].Delta.Content; c != nil {
accumulated.WriteString(*c)
}
}
if err := sse.WriteChunk(*result.Chunk); err != nil {
return
}
}
if result.Done {
break
}
}
est := estimateUsage(prompt, accumulated.String())
usage := &est
select {
case err := <-errChan:
if err != nil {
slog.Error("stream bridge error", "err", err)
sse.WriteError(err.Error())
return
}
default:
}
stopReason := "stop"
finalChunk := types.NewChatCompletionChunk(chatID, created, displayModel, types.Delta{})
finalChunk.Choices[0].FinishReason = &stopReason
finalChunk.Usage = usage
sse.WriteChunk(finalChunk)
sse.WriteDone()
}
func (s *Server) nonStreamChat(w http.ResponseWriter, r *http.Request, prompt, cursorModel, displayModel, chatID string, created int64, sessionKey string) {
ctx, cancel := context.WithTimeout(r.Context(), time.Duration(s.cfg.Timeout)*time.Second)
defer cancel()
content, err := s.br.ExecuteSync(ctx, prompt, cursorModel, sessionKey)
if err != nil {
writeJSON(w, http.StatusInternalServerError, types.NewErrorResponse(err.Error(), "internal_error", ""))
return
}
usage := estimateUsage(prompt, content)
stopReason := "stop"
resp := types.ChatCompletionResponse{
ID: chatID,
Object: "chat.completion",
Created: created,
Model: displayModel,
Choices: []types.Choice{
{
Index: 0,
Message: types.ChatMessage{Role: "assistant", Content: types.ChatMessageContent(content)},
FinishReason: &stopReason,
},
},
Usage: usage,
}
writeJSON(w, http.StatusOK, resp)
}
func estimateUsage(prompt, content string) types.Usage {
promptTokens := maxInt(1, int(math.Round(float64(len(prompt))/4.0)))
completionTokens := maxInt(1, int(math.Round(float64(len(content))/4.0)))
return types.Usage{
PromptTokens: promptTokens,
CompletionTokens: completionTokens,
TotalTokens: promptTokens + completionTokens,
}
}
func maxInt(a, b int) int {
if a > b {
return a
}
return b
}
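The chars/4 heuristic in `estimateUsage` is worth illustrating: `len()` counts bytes, not runes, so multi-byte UTF-8 text (e.g. CJK) is estimated from its byte length. A standalone sketch of the same round-and-clamp logic:

```go
package main

import (
	"fmt"
	"math"
)

// estimateTokens mirrors the adapter's heuristic: round(byte length / 4),
// clamped to at least 1. Note len(s) is bytes, not runes, so CJK text
// counts three bytes per character.
func estimateTokens(s string) int {
	n := int(math.Round(float64(len(s)) / 4.0))
	if n < 1 {
		return 1
	}
	return n
}

func main() {
	fmt.Println(estimateTokens("hello world")) // 11 bytes -> 3
	fmt.Println(estimateTokens(""))            // clamped to 1
	fmt.Println(estimateTokens("你好"))          // 6 bytes -> 2
}
```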
func (s *Server) handleHealth(w http.ResponseWriter, r *http.Request) {
ctx, cancel := context.WithTimeout(r.Context(), 5*time.Second)
defer cancel()
status := "ok"
cliStatus := "available"
if err := s.br.CheckHealth(ctx); err != nil {
status = "degraded"
cliStatus = fmt.Sprintf("unavailable: %v", err)
}
writeJSON(w, http.StatusOK, map[string]string{
"status": status,
"cursor_cli": cliStatus,
"version": "0.2.0",
})
}
func writeJSON(w http.ResponseWriter, status int, v interface{}) {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(status)
json.NewEncoder(w).Encode(v)
}


@@ -0,0 +1,361 @@
package server
import (
"context"
"encoding/json"
"net/http"
"net/http/httptest"
"strings"
"testing"
"github.com/daniel/cursor-adapter/internal/config"
)
type mockBridge struct {
executeLines []string
executeErr error
executeSync string
executeSyncErr error
lastPrompt string
lastSessionKey string
models []string
healthErr error
}
func (m *mockBridge) Execute(ctx context.Context, prompt string, model string, sessionKey string) (<-chan string, <-chan error) {
m.lastPrompt = prompt
m.lastSessionKey = sessionKey
outputChan := make(chan string, len(m.executeLines))
errChan := make(chan error, 1)
go func() {
defer close(outputChan)
defer close(errChan)
for _, line := range m.executeLines {
select {
case <-ctx.Done():
errChan <- ctx.Err()
return
case outputChan <- line:
}
}
if m.executeErr != nil {
errChan <- m.executeErr
}
}()
return outputChan, errChan
}
func (m *mockBridge) ListModels(ctx context.Context) ([]string, error) {
return m.models, nil
}
func (m *mockBridge) ExecuteSync(ctx context.Context, prompt string, model string, sessionKey string) (string, error) {
m.lastPrompt = prompt
m.lastSessionKey = sessionKey
if m.executeSyncErr != nil {
return "", m.executeSyncErr
}
if m.executeSync != "" {
return m.executeSync, nil
}
return "", nil
}
func (m *mockBridge) CheckHealth(ctx context.Context) error {
return m.healthErr
}
func TestAnthropicMessages_NonStreamingResponse(t *testing.T) {
cfg := config.Defaults()
srv := New(&cfg, &mockBridge{
executeSync: "Hello",
})
req := httptest.NewRequest(http.MethodPost, "/v1/messages", strings.NewReader(`{
"model":"auto",
"max_tokens":128,
"messages":[{"role":"user","content":"Say hello"}],
"stream":false
}`))
req.Header.Set("Content-Type", "application/json")
rec := httptest.NewRecorder()
srv.mux.ServeHTTP(rec, req)
if rec.Code != http.StatusOK {
t.Fatalf("status = %d, want %d, body=%s", rec.Code, http.StatusOK, rec.Body.String())
}
var resp struct {
ID string `json:"id"`
Type string `json:"type"`
Role string `json:"role"`
Model string `json:"model"`
StopReason string `json:"stop_reason"`
Content []struct {
Type string `json:"type"`
Text string `json:"text"`
} `json:"content"`
Usage struct {
InputTokens int `json:"input_tokens"`
OutputTokens int `json:"output_tokens"`
} `json:"usage"`
}
if err := json.Unmarshal(rec.Body.Bytes(), &resp); err != nil {
t.Fatalf("unmarshal response: %v", err)
}
if resp.Type != "message" {
t.Fatalf("type = %q, want %q", resp.Type, "message")
}
if resp.Role != "assistant" {
t.Fatalf("role = %q, want %q", resp.Role, "assistant")
}
if len(resp.Content) != 1 || resp.Content[0].Text != "Hello" {
t.Fatalf("content = %+v, want single text block 'Hello'", resp.Content)
}
if resp.StopReason != "end_turn" {
t.Fatalf("stop_reason = %q, want %q", resp.StopReason, "end_turn")
}
if resp.Usage.InputTokens <= 0 || resp.Usage.OutputTokens <= 0 {
t.Fatalf("usage should be estimated and > 0, got %+v", resp.Usage)
}
}
func TestAnthropicMessages_StreamingResponse(t *testing.T) {
cfg := config.Defaults()
srv := New(&cfg, &mockBridge{
executeLines: []string{
`{"type":"assistant","message":{"role":"assistant","content":[{"type":"text","text":"Hi"}]}}`,
`{"type":"result","subtype":"success","usage":{"inputTokens":9,"outputTokens":1}}`,
},
})
req := httptest.NewRequest(http.MethodPost, "/v1/messages", strings.NewReader(`{
"model":"auto",
"max_tokens":128,
"messages":[{"role":"user","content":"Say hi"}],
"stream":true
}`))
req.Header.Set("Content-Type", "application/json")
rec := httptest.NewRecorder()
srv.mux.ServeHTTP(rec, req)
if rec.Code != http.StatusOK {
t.Fatalf("status = %d, want %d, body=%s", rec.Code, http.StatusOK, rec.Body.String())
}
body := rec.Body.String()
for _, want := range []string{
`"type":"message_start"`,
`"type":"content_block_start"`,
`"type":"content_block_delta"`,
`"text":"Hi"`,
`"type":"content_block_stop"`,
`"type":"message_delta"`,
`"stop_reason":"end_turn"`,
`"type":"message_stop"`,
} {
if !strings.Contains(body, want) {
t.Fatalf("stream body missing %q: %s", want, body)
}
}
}
func TestChatCompletions_ForwardsProvidedSessionHeader(t *testing.T) {
cfg := config.Defaults()
br := &mockBridge{executeSync: "Hello"}
srv := New(&cfg, br)
req := httptest.NewRequest(http.MethodPost, "/v1/chat/completions", strings.NewReader(`{
"model":"auto",
"messages":[{"role":"user","content":"hello"}],
"stream":false
}`))
req.Header.Set("Content-Type", "application/json")
req.Header.Set(sessionHeaderName, "sess_frontend_123")
rec := httptest.NewRecorder()
srv.mux.ServeHTTP(rec, req)
if rec.Code != http.StatusOK {
t.Fatalf("status = %d, want %d, body=%s", rec.Code, http.StatusOK, rec.Body.String())
}
if br.lastSessionKey != "sess_frontend_123" {
t.Fatalf("bridge session key = %q, want %q", br.lastSessionKey, "sess_frontend_123")
}
if got := rec.Header().Get(sessionHeaderName); got != "sess_frontend_123" {
t.Fatalf("response session header = %q, want %q", got, "sess_frontend_123")
}
}
func TestChatCompletions_AcceptsArrayContentBlocks(t *testing.T) {
cfg := config.Defaults()
br := &mockBridge{executeSync: "Hello"}
srv := New(&cfg, br)
req := httptest.NewRequest(http.MethodPost, "/v1/chat/completions", strings.NewReader(`{
"model":"auto",
"messages":[
{"role":"system","content":[{"type":"text","text":"You are terse."}]},
{"role":"user","content":[{"type":"text","text":"hello"},{"type":"text","text":" world"}]}
],
"stream":false
}`))
req.Header.Set("Content-Type", "application/json")
rec := httptest.NewRecorder()
srv.mux.ServeHTTP(rec, req)
if rec.Code != http.StatusOK {
t.Fatalf("status = %d, want %d, body=%s", rec.Code, http.StatusOK, rec.Body.String())
}
if !strings.Contains(br.lastPrompt, "system: You are terse.") {
t.Fatalf("prompt = %q, want system text content", br.lastPrompt)
}
if !strings.Contains(br.lastPrompt, "user: hello world") {
t.Fatalf("prompt = %q, want concatenated user text content", br.lastPrompt)
}
}
func TestChatCompletions_StreamingEmitsRoleFinishReasonAndUsage(t *testing.T) {
cfg := config.Defaults()
srv := New(&cfg, &mockBridge{
executeLines: []string{
// system + user chunks should be skipped entirely, never echoed as content
`{"type":"system","subtype":"init","session_id":"abc","cwd":"/tmp"}`,
`{"type":"user","message":{"role":"user","content":[{"type":"text","text":"user: hello"}]}}`,
// incremental assistant fragments
`{"type":"assistant","message":{"role":"assistant","content":[{"type":"text","text":"你"}]}}`,
`{"type":"assistant","message":{"role":"assistant","content":[{"type":"text","text":"好"}]}}`,
// cumulative duplicate (Cursor CLI sometimes finalises with the full text)
`{"type":"assistant","message":{"role":"assistant","content":[{"type":"text","text":"你好"}]}}`,
`{"type":"result","subtype":"success","result":"你好","usage":{"inputTokens":3,"outputTokens":2}}`,
},
})
req := httptest.NewRequest(http.MethodPost, "/v1/chat/completions", strings.NewReader(`{
"model":"auto",
"messages":[{"role":"user","content":"hi"}],
"stream":true
}`))
req.Header.Set("Content-Type", "application/json")
rec := httptest.NewRecorder()
srv.mux.ServeHTTP(rec, req)
if rec.Code != http.StatusOK {
t.Fatalf("status = %d, want 200, body=%s", rec.Code, rec.Body.String())
}
body := rec.Body.String()
// Must never leak system/user JSON as "content"
if strings.Contains(body, `"subtype":"init"`) || strings.Contains(body, `"type":"user"`) {
t.Fatalf("stream body leaked system/user JSON into SSE content: %s", body)
}
// First delta chunk must carry role:assistant (not content)
if !strings.Contains(body, `"delta":{"role":"assistant"}`) {
t.Fatalf("first chunk missing role=assistant delta: %s", body)
}
// Content deltas must be plain text — not JSON-stringified Cursor lines
if !strings.Contains(body, `"delta":{"content":"你"}`) {
t.Fatalf("first content delta not plain text: %s", body)
}
if !strings.Contains(body, `"delta":{"content":"好"}`) {
t.Fatalf("second content delta missing: %s", body)
}
// Final cumulative message that equals accumulated text must be suppressed
// (accumulated = "你好" after the two fragments; final "你好" should be Skip'd)
count := strings.Count(body, `"你好"`)
if count > 0 {
t.Fatalf("duplicate final cumulative message should have been skipped (found %d occurrences of full text as delta): %s", count, body)
}
// Final chunk must have finish_reason=stop and usage at top level
if !strings.Contains(body, `"finish_reason":"stop"`) {
t.Fatalf("final chunk missing finish_reason=stop: %s", body)
}
if !strings.Contains(body, `"usage":{`) {
t.Fatalf("final chunk missing usage: %s", body)
}
if !strings.Contains(body, `data: [DONE]`) {
t.Fatalf("stream missing [DONE] terminator: %s", body)
}
}
func TestAnthropicMessages_StreamingEmitsNoDuplicateFinalText(t *testing.T) {
cfg := config.Defaults()
srv := New(&cfg, &mockBridge{
executeLines: []string{
`{"type":"assistant","message":{"role":"assistant","content":[{"type":"text","text":"你"}]}}`,
`{"type":"assistant","message":{"role":"assistant","content":[{"type":"text","text":"好"}]}}`,
`{"type":"assistant","message":{"role":"assistant","content":[{"type":"text","text":"你好"}]}}`,
`{"type":"result","subtype":"success","usage":{"inputTokens":3,"outputTokens":2}}`,
},
})
req := httptest.NewRequest(http.MethodPost, "/v1/messages", strings.NewReader(`{
"model":"auto",
"max_tokens":128,
"messages":[{"role":"user","content":"hi"}],
"stream":true
}`))
req.Header.Set("Content-Type", "application/json")
rec := httptest.NewRecorder()
srv.mux.ServeHTTP(rec, req)
if rec.Code != http.StatusOK {
t.Fatalf("status = %d, want 200, body=%s", rec.Code, rec.Body.String())
}
body := rec.Body.String()
if strings.Count(body, `"text":"你好"`) > 0 {
t.Fatalf("final cumulative duplicate should be suppressed: %s", body)
}
for _, want := range []string{
`"text":"你"`,
`"text":"好"`,
`"type":"message_stop"`,
} {
if !strings.Contains(body, want) {
t.Fatalf("missing %q in stream: %s", want, body)
}
}
}
func TestAnthropicMessages_GeneratesSessionHeaderWhenMissing(t *testing.T) {
cfg := config.Defaults()
br := &mockBridge{executeSync: "Hello"}
srv := New(&cfg, br)
req := httptest.NewRequest(http.MethodPost, "/v1/messages", strings.NewReader(`{
"model":"auto",
"max_tokens":128,
"messages":[{"role":"user","content":"hello"}],
"stream":false
}`))
req.Header.Set("Content-Type", "application/json")
rec := httptest.NewRecorder()
srv.mux.ServeHTTP(rec, req)
if rec.Code != http.StatusOK {
t.Fatalf("status = %d, want %d, body=%s", rec.Code, http.StatusOK, rec.Body.String())
}
if br.lastSessionKey == "" {
t.Fatal("expected generated session key to be forwarded to bridge")
}
if got := rec.Header().Get(sessionHeaderName); got == "" {
t.Fatal("expected generated session header in response")
}
if got := rec.Header().Get(exposeHeadersName); !strings.Contains(got, sessionHeaderName) {
t.Fatalf("expose headers = %q, want to contain %q", got, sessionHeaderName)
}
}

internal/server/server.go Normal file

@@ -0,0 +1,58 @@
package server
import (
"fmt"
"net/http"
"github.com/go-chi/chi/v5"
"github.com/go-chi/chi/v5/middleware"
"github.com/daniel/cursor-adapter/internal/bridge"
"github.com/daniel/cursor-adapter/internal/config"
)
type Server struct {
cfg *config.Config
br bridge.Bridge
mux *chi.Mux
}
func New(cfg *config.Config, br bridge.Bridge) *Server {
s := &Server{cfg: cfg, br: br}
s.mux = s.buildRouter()
return s
}
func corsMiddleware(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Access-Control-Allow-Origin", "*")
w.Header().Set("Access-Control-Allow-Methods", "GET, POST, OPTIONS")
w.Header().Set("Access-Control-Allow-Headers", "Content-Type, Authorization, X-Cursor-Session-ID, X-Cursor-Workspace")
w.Header().Set("Access-Control-Expose-Headers", "X-Cursor-Session-ID")
if r.Method == "OPTIONS" {
w.WriteHeader(http.StatusNoContent)
return
}
next.ServeHTTP(w, r)
})
}
func (s *Server) buildRouter() *chi.Mux {
r := chi.NewRouter()
r.Use(middleware.Recoverer)
r.Use(middleware.Logger)
r.Use(corsMiddleware)
r.Get("/v1/models", s.handleListModels)
r.Post("/v1/chat/completions", s.handleChatCompletions)
r.Post("/v1/messages", s.handleAnthropicMessages)
r.Get("/health", s.handleHealth)
return r
}
func (s *Server) Run() error {
addr := fmt.Sprintf("127.0.0.1:%d", s.cfg.Port)
return http.ListenAndServe(addr, s.mux)
}


@@ -0,0 +1,28 @@
package server
import (
"fmt"
"net/http"
"strings"
"time"
)
const sessionHeaderName = "X-Cursor-Session-ID"
const exposeHeadersName = "Access-Control-Expose-Headers"
func ensureSessionHeader(w http.ResponseWriter, r *http.Request) string {
sessionKey := strings.TrimSpace(r.Header.Get(sessionHeaderName))
if sessionKey == "" {
sessionKey = fmt.Sprintf("csess_%d", time.Now().UnixNano())
}
w.Header().Set(sessionHeaderName, sessionKey)
existing := w.Header().Get(exposeHeadersName)
if existing == "" {
w.Header().Set(exposeHeadersName, sessionHeaderName)
} else if !strings.Contains(strings.ToLower(existing), strings.ToLower(sessionHeaderName)) {
w.Header().Set(exposeHeadersName, existing+", "+sessionHeaderName)
}
return sessionKey
}
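The `Access-Control-Expose-Headers` merge above can be sketched as a pure function (illustrative only; `mergeExpose` is not part of the adapter's API):

```go
package main

import (
	"fmt"
	"strings"
)

// mergeExpose mirrors ensureSessionHeader's logic for growing the
// Access-Control-Expose-Headers value: append the header name unless a
// case-insensitive match is already present. Standalone sketch.
func mergeExpose(existing, name string) string {
	if existing == "" {
		return name
	}
	if strings.Contains(strings.ToLower(existing), strings.ToLower(name)) {
		return existing
	}
	return existing + ", " + name
}

func main() {
	fmt.Println(mergeExpose("", "X-Cursor-Session-ID"))
	fmt.Println(mergeExpose("Content-Length", "X-Cursor-Session-ID"))
	fmt.Println(mergeExpose("x-cursor-session-id", "X-Cursor-Session-ID"))
}
```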

internal/server/sse.go Normal file

@@ -0,0 +1,57 @@
package server
import (
"encoding/json"
"fmt"
"net/http"
"github.com/daniel/cursor-adapter/internal/types"
)
// SSEWriter wraps an http.ResponseWriter for SSE streaming.
type SSEWriter struct {
w http.ResponseWriter
flush http.Flusher
}
// NewSSEWriter creates an SSEWriter and sets the headers SSE requires.
func NewSSEWriter(w http.ResponseWriter) *SSEWriter {
w.Header().Set("Content-Type", "text/event-stream")
w.Header().Set("Cache-Control", "no-cache")
w.Header().Set("Connection", "keep-alive")
w.Header().Set("X-Accel-Buffering", "no")
flusher, _ := w.(http.Flusher)
return &SSEWriter{w: w, flush: flusher}
}
// WriteChunk writes a single SSE chunk.
func (s *SSEWriter) WriteChunk(chunk types.ChatCompletionChunk) error {
data, err := json.Marshal(chunk)
if err != nil {
return fmt.Errorf("marshal chunk: %w", err)
}
fmt.Fprintf(s.w, "data: %s\n\n", data)
if s.flush != nil {
s.flush.Flush()
}
return nil
}
// WriteDone writes the SSE stream terminator.
func (s *SSEWriter) WriteDone() {
fmt.Fprint(s.w, "data: [DONE]\n\n")
if s.flush != nil {
s.flush.Flush()
}
}
// WriteError writes an error as an SSE-formatted chunk.
func (s *SSEWriter) WriteError(errMsg string) {
stopReason := "stop"
chunk := types.NewChatCompletionChunk("error", 0, "", types.Delta{Content: &errMsg})
chunk.Choices[0].FinishReason = &stopReason
s.WriteChunk(chunk)
s.WriteDone()
}
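On the wire, each WriteChunk call produces one `data:` frame terminated by a blank line, and WriteDone emits the `[DONE]` sentinel. A minimal sketch of that framing (the chunk field values below are invented):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// frame renders one SSE data frame the way WriteChunk does:
// "data: " + JSON + blank-line terminator.
func frame(v any) string {
	data, _ := json.Marshal(v)
	return fmt.Sprintf("data: %s\n\n", data)
}

func main() {
	fmt.Print(frame(map[string]any{
		"id":     "chatcmpl-1",
		"object": "chat.completion.chunk",
	}))
	fmt.Print("data: [DONE]\n\n") // stream terminator, as in WriteDone
}
```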


@ -0,0 +1,77 @@
package server
import (
"encoding/json"
"net/http/httptest"
"strings"
"testing"
"github.com/daniel/cursor-adapter/internal/types"
)
func TestNewSSEWriter(t *testing.T) {
rec := httptest.NewRecorder()
sse := NewSSEWriter(rec)
if sse == nil {
t.Fatal("NewSSEWriter returned nil")
}
headers := rec.Header()
if got := headers.Get("Content-Type"); got != "text/event-stream" {
t.Errorf("Content-Type = %q, want %q", got, "text/event-stream")
}
if got := headers.Get("Cache-Control"); got != "no-cache" {
t.Errorf("Cache-Control = %q, want %q", got, "no-cache")
}
if got := headers.Get("Connection"); got != "keep-alive" {
t.Errorf("Connection = %q, want %q", got, "keep-alive")
}
if got := headers.Get("X-Accel-Buffering"); got != "no" {
t.Errorf("X-Accel-Buffering = %q, want %q", got, "no")
}
}
func TestWriteChunk(t *testing.T) {
rec := httptest.NewRecorder()
sse := NewSSEWriter(rec)
content := "hello"
chunk := types.NewChatCompletionChunk("test-id", 0, "", types.Delta{Content: &content})
if err := sse.WriteChunk(chunk); err != nil {
t.Fatalf("WriteChunk returned error: %v", err)
}
body := rec.Body.String()
if !strings.HasPrefix(body, "data: ") {
t.Errorf("WriteChunk output missing 'data: ' prefix, got %q", body)
}
if !strings.HasSuffix(body, "\n\n") {
t.Errorf("WriteChunk output missing trailing newlines, got %q", body)
}
jsonStr := strings.TrimPrefix(body, "data: ")
jsonStr = strings.TrimSuffix(jsonStr, "\n\n")
var parsed types.ChatCompletionChunk
if err := json.Unmarshal([]byte(jsonStr), &parsed); err != nil {
t.Fatalf("failed to unmarshal chunk JSON: %v", err)
}
if parsed.ID != "test-id" {
t.Errorf("parsed chunk ID = %q, want %q", parsed.ID, "test-id")
}
if len(parsed.Choices) != 1 || *parsed.Choices[0].Delta.Content != "hello" {
t.Errorf("parsed chunk content mismatch, got %v", parsed)
}
}
func TestWriteDone(t *testing.T) {
rec := httptest.NewRecorder()
sse := NewSSEWriter(rec)
sse.WriteDone()
body := rec.Body.String()
want := "data: [DONE]\n\n"
if body != want {
t.Errorf("WriteDone output = %q, want %q", body, want)
}
}


@ -0,0 +1,98 @@
package types
import "encoding/json"
// AnthropicBlock is a single content block in Anthropic Messages API. It can
// be text, image, document, tool_use, or tool_result. We decode the raw JSON
// once and keep the rest as RawData so downstream code (prompt building) can
// render every type faithfully.
type AnthropicBlock struct {
Type string `json:"type"`
// type=text
Text string `json:"text,omitempty"`
// type=tool_use
ID string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
Input json.RawMessage `json:"input,omitempty"`
// type=tool_result
ToolUseID string `json:"tool_use_id,omitempty"`
Content json.RawMessage `json:"content,omitempty"`
IsError bool `json:"is_error,omitempty"`
// type=image / document
Source json.RawMessage `json:"source,omitempty"`
Title string `json:"title,omitempty"`
}
// AnthropicTextBlock kept for response serialisation (proxy always returns
// text blocks back to the client; it does not emit tool_use natively).
type AnthropicTextBlock struct {
Type string `json:"type"`
Text string `json:"text,omitempty"`
}
// AnthropicContent is a flexible field: it can be a plain string OR an array
// of blocks. Claude Code always sends the array form.
type AnthropicContent []AnthropicBlock
func (c *AnthropicContent) UnmarshalJSON(data []byte) error {
var text string
if err := json.Unmarshal(data, &text); err == nil {
*c = []AnthropicBlock{{Type: "text", Text: text}}
return nil
}
var blocks []AnthropicBlock
if err := json.Unmarshal(data, &blocks); err != nil {
return err
}
*c = blocks
return nil
}
// AnthropicSystem accepts the same string-or-array shapes as AnthropicContent;
// the top-level `system` field may be either form.
type AnthropicSystem AnthropicContent
func (s *AnthropicSystem) UnmarshalJSON(data []byte) error {
return (*AnthropicContent)(s).UnmarshalJSON(data)
}
type AnthropicMessage struct {
Role string `json:"role"`
Content AnthropicContent `json:"content"`
}
// AnthropicTool mirrors Anthropic's `tools` entry shape. InputSchema is left
// as RawMessage so we can render it verbatim in a system prompt without
// caring about the exact JSON Schema structure.
type AnthropicTool struct {
Name string `json:"name"`
Description string `json:"description,omitempty"`
InputSchema json.RawMessage `json:"input_schema,omitempty"`
}
type AnthropicMessagesRequest struct {
Model string `json:"model"`
MaxTokens int `json:"max_tokens"`
Messages []AnthropicMessage `json:"messages"`
System AnthropicSystem `json:"system,omitempty"`
Stream bool `json:"stream"`
Tools []AnthropicTool `json:"tools,omitempty"`
}
type AnthropicMessagesResponse struct {
ID string `json:"id"`
Type string `json:"type"`
Role string `json:"role"`
Content []AnthropicTextBlock `json:"content"`
Model string `json:"model"`
StopReason string `json:"stop_reason"`
Usage AnthropicUsage `json:"usage"`
}
type AnthropicUsage struct {
InputTokens int `json:"input_tokens"`
OutputTokens int `json:"output_tokens"`
}

internal/types/types.go

@ -0,0 +1,142 @@
package types
import (
"encoding/json"
"strings"
)
func StringPtr(s string) *string { return &s }
// Request
type ChatMessageContent string
type ChatMessageContentPart struct {
Type string `json:"type"`
Text string `json:"text,omitempty"`
}
func (c *ChatMessageContent) UnmarshalJSON(data []byte) error {
var text string
if err := json.Unmarshal(data, &text); err == nil {
*c = ChatMessageContent(text)
return nil
}
var parts []ChatMessageContentPart
if err := json.Unmarshal(data, &parts); err != nil {
return err
}
var content strings.Builder
for _, part := range parts {
if part.Type == "text" {
content.WriteString(part.Text)
}
}
*c = ChatMessageContent(content.String())
return nil
}
type ChatMessage struct {
Role string `json:"role"`
Content ChatMessageContent `json:"content"`
}
type ChatCompletionRequest struct {
Model string `json:"model"`
Messages []ChatMessage `json:"messages"`
Stream bool `json:"stream"`
Temperature *float64 `json:"temperature,omitempty"`
}
// Response (non-streaming)
type ChatCompletionResponse struct {
ID string `json:"id"`
Object string `json:"object"`
Created int64 `json:"created"`
Model string `json:"model"`
Choices []Choice `json:"choices"`
Usage Usage `json:"usage"`
}
type Choice struct {
Index int `json:"index"`
Message ChatMessage `json:"message"`
FinishReason *string `json:"finish_reason"`
}
type Usage struct {
PromptTokens int `json:"prompt_tokens"`
CompletionTokens int `json:"completion_tokens"`
TotalTokens int `json:"total_tokens"`
}
// Streaming chunk
type ChatCompletionChunk struct {
ID string `json:"id"`
Object string `json:"object"`
Created int64 `json:"created"`
Model string `json:"model,omitempty"`
Choices []ChunkChoice `json:"choices"`
Usage *Usage `json:"usage,omitempty"`
}
type ChunkChoice struct {
Index int `json:"index"`
Delta Delta `json:"delta"`
FinishReason *string `json:"finish_reason"`
}
type Delta struct {
Role *string `json:"role,omitempty"`
Content *string `json:"content,omitempty"`
}
// Models list
type ModelList struct {
Object string `json:"object"`
Data []ModelInfo `json:"data"`
}
type ModelInfo struct {
ID string `json:"id"`
Object string `json:"object"`
Created int64 `json:"created"`
OwnedBy string `json:"owned_by"`
}
// Error response
type ErrorResponse struct {
Error ErrorBody `json:"error"`
}
type ErrorBody struct {
Message string `json:"message"`
Type string `json:"type"`
Code string `json:"code,omitempty"`
}
func NewErrorResponse(message, errType, code string) ErrorResponse {
return ErrorResponse{
Error: ErrorBody{
Message: message,
Type: errType,
Code: code,
},
}
}
func NewChatCompletionChunk(id string, created int64, model string, delta Delta) ChatCompletionChunk {
return ChatCompletionChunk{
ID: id,
Object: "chat.completion.chunk",
Created: created,
Model: model,
Choices: []ChunkChoice{
{
Index: 0,
Delta: delta,
},
},
}
}


@ -0,0 +1,132 @@
// Package workspace sets up an isolated temp directory for each Cursor CLI /
// ACP child. It pre-populates a minimal .cursor config so the agent does not
// load the real user's global rules from ~/.cursor, and returns a set of
// environment overrides (HOME, CURSOR_CONFIG_DIR, XDG_CONFIG_HOME, APPDATA…)
// so the child cannot escape back to the real profile.
//
// Ported from cursor-api-proxy/src/lib/workspace.ts.
package workspace
import (
"encoding/json"
"fmt"
"os"
"path/filepath"
"runtime"
)
// ChatOnly prepares an isolated temp workspace and returns:
// - dir: the absolute path of the new temp directory (caller is responsible
// for removing it when the child process exits).
// - env: a map of environment variables to override on the child.
//
// Auth is the tricky part. The Cursor CLI resolves login tokens from the
// real user profile (macOS keychain on darwin, ~/.cursor/agent-cli-state.json
// elsewhere); if we override HOME to the temp dir, `agent --print` dies with
// "Authentication required. Please run 'agent login' first…". So we keep
// HOME untouched unless either:
// - CURSOR_API_KEY is set (the CLI uses the env var and doesn't need HOME), or
// - authConfigDir is non-empty (account-pool mode, not used here yet).
//
// We *do* always override CURSOR_CONFIG_DIR → tempDir/.cursor. That's the
// setting Cursor uses to locate rules/, cli-config.json, and per-project
// state, so this single override is enough to stop the agent from loading
// the user's real ~/.cursor/rules/* into the prompt.
func ChatOnly(authConfigDir string) (dir string, env map[string]string, err error) {
tempDir, err := os.MkdirTemp("", "cursor-adapter-ws-*")
if err != nil {
return "", nil, fmt.Errorf("mkdtemp: %w", err)
}
cursorDir := filepath.Join(tempDir, ".cursor")
if err := os.MkdirAll(filepath.Join(cursorDir, "rules"), 0o755); err != nil {
_ = os.RemoveAll(tempDir)
return "", nil, fmt.Errorf("mkdir .cursor/rules: %w", err)
}
minimalConfig := map[string]any{
"version": 1,
"editor": map[string]any{"vimMode": false},
"permissions": map[string]any{
"allow": []any{},
"deny": []any{},
},
}
cfgBytes, _ := json.Marshal(minimalConfig)
if err := os.WriteFile(filepath.Join(cursorDir, "cli-config.json"), cfgBytes, 0o644); err != nil {
_ = os.RemoveAll(tempDir)
return "", nil, fmt.Errorf("write cli-config.json: %w", err)
}
env = map[string]string{}
if authConfigDir != "" {
env["CURSOR_CONFIG_DIR"] = authConfigDir
return tempDir, env, nil
}
env["CURSOR_CONFIG_DIR"] = cursorDir
// Only fully isolate HOME if the child will auth via CURSOR_API_KEY.
// With keychain/home-based auth, replacing HOME makes agent exit 1.
if os.Getenv("CURSOR_API_KEY") != "" {
env["HOME"] = tempDir
env["USERPROFILE"] = tempDir
if runtime.GOOS == "windows" {
appDataRoaming := filepath.Join(tempDir, "AppData", "Roaming")
appDataLocal := filepath.Join(tempDir, "AppData", "Local")
_ = os.MkdirAll(appDataRoaming, 0o755)
_ = os.MkdirAll(appDataLocal, 0o755)
env["APPDATA"] = appDataRoaming
env["LOCALAPPDATA"] = appDataLocal
} else {
xdg := filepath.Join(tempDir, ".config")
_ = os.MkdirAll(xdg, 0o755)
env["XDG_CONFIG_HOME"] = xdg
}
}
return tempDir, env, nil
}
// MergeEnv takes the current process env (as "KEY=VALUE" strings) and
// overlays overrides on top, returning a new slice suitable for exec.Cmd.Env.
// Keys from overrides replace any existing entries with the same key.
func MergeEnv(base []string, overrides map[string]string) []string {
if len(overrides) == 0 {
return base
}
out := make([]string, 0, len(base)+len(overrides))
seen := make(map[string]bool, len(overrides))
for _, kv := range base {
eq := indexOf(kv, '=')
if eq < 0 {
out = append(out, kv)
continue
}
key := kv[:eq]
if v, ok := overrides[key]; ok {
out = append(out, key+"="+v)
seen[key] = true
} else {
out = append(out, kv)
}
}
for k, v := range overrides {
if !seen[k] {
out = append(out, k+"="+v)
}
}
return out
}
// indexOf is a minimal strings.IndexByte stand-in, kept local so the package
// does not need an extra import.
func indexOf(s string, c byte) int {
for i := 0; i < len(s); i++ {
if s[i] == c {
return i
}
}
return -1
}


@ -0,0 +1,103 @@
package workspace
import (
"encoding/json"
"os"
"path/filepath"
"runtime"
"strings"
"testing"
)
func TestChatOnly_NoApiKey_KeepsHome(t *testing.T) {
t.Setenv("CURSOR_API_KEY", "")
dir, env, err := ChatOnly("")
if err != nil {
t.Fatalf("ChatOnly: %v", err)
}
t.Cleanup(func() { _ = os.RemoveAll(dir) })
if !filepath.IsAbs(dir) {
t.Errorf("expected absolute path, got %q", dir)
}
cfgPath := filepath.Join(dir, ".cursor", "cli-config.json")
data, err := os.ReadFile(cfgPath)
if err != nil {
t.Fatalf("expected cli-config.json to exist: %v", err)
}
var parsed map[string]any
if err := json.Unmarshal(data, &parsed); err != nil {
t.Fatalf("cli-config.json is not valid JSON: %v", err)
}
if env["CURSOR_CONFIG_DIR"] != filepath.Join(dir, ".cursor") {
t.Errorf("CURSOR_CONFIG_DIR override wrong: %q", env["CURSOR_CONFIG_DIR"])
}
if _, ok := env["HOME"]; ok {
t.Errorf("HOME should NOT be overridden without CURSOR_API_KEY, got %q (would break keychain auth)", env["HOME"])
}
}
func TestChatOnly_WithApiKey_IsolatesHome(t *testing.T) {
t.Setenv("CURSOR_API_KEY", "sk-fake-for-test")
dir, env, err := ChatOnly("")
if err != nil {
t.Fatalf("ChatOnly: %v", err)
}
t.Cleanup(func() { _ = os.RemoveAll(dir) })
if env["HOME"] != dir {
t.Errorf("HOME override = %q, want %q", env["HOME"], dir)
}
if runtime.GOOS != "windows" && env["XDG_CONFIG_HOME"] != filepath.Join(dir, ".config") {
t.Errorf("XDG_CONFIG_HOME override wrong: %q", env["XDG_CONFIG_HOME"])
}
}
func TestChatOnly_WithAuthConfigDir_OnlySetsCursorConfigDir(t *testing.T) {
dir, env, err := ChatOnly("/tmp/fake-auth")
if err != nil {
t.Fatalf("ChatOnly: %v", err)
}
t.Cleanup(func() { _ = os.RemoveAll(dir) })
if env["CURSOR_CONFIG_DIR"] != "/tmp/fake-auth" {
t.Errorf("expected CURSOR_CONFIG_DIR to be the auth dir, got %q", env["CURSOR_CONFIG_DIR"])
}
if _, ok := env["HOME"]; ok {
t.Errorf("HOME should not be overridden when authConfigDir is set, got %q", env["HOME"])
}
}
func TestMergeEnv_OverridesExistingKeys(t *testing.T) {
base := []string{"FOO=1", "HOME=/old", "BAR=2"}
out := MergeEnv(base, map[string]string{
"HOME": "/new",
"BAZ": "3",
})
joined := strings.Join(out, "\n")
if !strings.Contains(joined, "HOME=/new") {
t.Errorf("expected HOME=/new in merged env, got: %v", out)
}
if strings.Contains(joined, "HOME=/old") {
t.Errorf("old HOME should have been replaced, got: %v", out)
}
if !strings.Contains(joined, "FOO=1") || !strings.Contains(joined, "BAR=2") {
t.Errorf("unchanged keys should pass through, got: %v", out)
}
if !strings.Contains(joined, "BAZ=3") {
t.Errorf("new key should be appended, got: %v", out)
}
}
func TestMergeEnv_EmptyOverridesReturnsSameSlice(t *testing.T) {
base := []string{"FOO=1"}
out := MergeEnv(base, nil)
if len(out) != 1 || out[0] != "FOO=1" {
t.Errorf("expected base unchanged, got %v", out)
}
}

main.go

@ -0,0 +1,98 @@
package main
import (
"context"
"fmt"
"log/slog"
"os"
"time"
"github.com/spf13/cobra"
"github.com/daniel/cursor-adapter/internal/bridge"
"github.com/daniel/cursor-adapter/internal/config"
"github.com/daniel/cursor-adapter/internal/server"
)
var (
configPath string
port int
debug bool
useACP bool
chatOnlySet bool
chatOnlyFlag bool
)
func main() {
rootCmd := &cobra.Command{
Use: "cursor-adapter",
Short: "OpenAI-compatible proxy for Cursor CLI",
RunE: run,
}
rootCmd.Flags().StringVarP(&configPath, "config", "c", "", "config file path (default: ~/.cursor-adapter/config.yaml)")
rootCmd.Flags().IntVarP(&port, "port", "p", 0, "server port (overrides config)")
rootCmd.Flags().BoolVar(&debug, "debug", false, "enable debug logging")
rootCmd.Flags().BoolVar(&useACP, "use-acp", false, "use Cursor ACP transport instead of CLI stream-json")
rootCmd.Flags().BoolVar(&chatOnlyFlag, "chat-only-workspace", true, "isolate Cursor CLI in an empty temp workspace with overridden HOME/CURSOR_CONFIG_DIR (set to false to let Cursor agent see the adapter's cwd)")
rootCmd.PreRun = func(cmd *cobra.Command, args []string) {
chatOnlySet = cmd.Flags().Changed("chat-only-workspace")
}
if err := rootCmd.Execute(); err != nil {
os.Exit(1)
}
}
func run(cmd *cobra.Command, args []string) error {
var logLevel slog.Level
if debug {
logLevel = slog.LevelDebug
} else {
logLevel = slog.LevelInfo
}
logger := slog.New(slog.NewTextHandler(os.Stderr, &slog.HandlerOptions{Level: logLevel}))
slog.SetDefault(logger)
cfg, err := config.Load(configPath)
if err != nil {
return fmt.Errorf("load config: %w", err)
}
if port > 0 {
cfg.Port = port
}
if useACP {
cfg.UseACP = true
}
if chatOnlySet {
cfg.ChatOnlyWorkspace = chatOnlyFlag
}
br := bridge.NewBridge(
cfg.CursorCLIPath,
logger,
cfg.UseACP,
cfg.ChatOnlyWorkspace,
cfg.MaxConcurrent,
time.Duration(cfg.Timeout)*time.Second,
)
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := br.CheckHealth(ctx); err != nil {
return fmt.Errorf("cursor cli not available: %w", err)
}
logger.Info("Cursor CLI OK")
srv := server.New(cfg, br)
mode := "CLI"
if cfg.UseACP {
mode = "ACP"
}
logger.Info("Starting cursor-adapter",
"port", cfg.Port,
"mode", mode,
"chat_only_workspace", cfg.ChatOnlyWorkspace,
)
return srv.Run()
}

scripts/test_cursor_cli.sh (executable)

@ -0,0 +1,39 @@
#!/bin/bash
# Explore the Cursor CLI stream-json output format
echo "=== Testing Cursor CLI stream-json output ==="
echo ""
# Check that the agent CLI is available
if ! command -v agent &> /dev/null; then
echo "ERROR: agent command not found"
echo "Please install Cursor CLI: curl https://cursor.com/install -fsS | bash"
exit 1
fi
echo "--- agent --version ---"
agent --version 2>&1 || echo "(version check failed)"
echo ""
echo "--- Simple prompt with stream-json ---"
echo 'Running: agent -p "say hello in one word" --output-format stream-json --trust'
agent -p "say hello in one word" --output-format stream-json --trust 2>&1 | head -30
echo ""
echo "--- With model flag ---"
echo 'Running: agent -p "say hello" --model "claude-sonnet-4-20250514" --output-format stream-json --trust'
agent -p "say hello" --model "claude-sonnet-4-20250514" --output-format stream-json --trust 2>&1 | head -30
echo ""
echo "--- With stream-partial-output ---"
echo 'Running: agent -p "say hello" --output-format stream-json --stream-partial-output --trust'
agent -p "say hello" --output-format stream-json --stream-partial-output --trust 2>&1 | head -30
echo ""
echo "--- Available output formats (from help) ---"
agent --help 2>&1 | grep -A2 "output-format"
echo ""
echo "--- Available models ---"
echo 'Running: agent models'
agent models 2>&1 | head -20