Compare commits

...

10 Commits

Author SHA1 Message Date
Blizzard 0d607ad4ed init: cc leaked source 2026-04-28 10:40:28 +08:00
程序员阿江(Relakkes) ef635d6d4d docs: add GitHub issue templates for bug reports and questions
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-03 13:18:13 +08:00
程序员阿江(Relakkes) 7d40dbf48c docs: add table of contents with anchor links to READMEs
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-03 01:30:02 +08:00
程序员阿江(Relakkes) 62bb0169bb docs: add FAQ section for common issues
Cover three frequently reported problems:
- usage.input_tokens error from misconfigured ANTHROPIC_BASE_URL
- Cannot find package 'bundle' from outdated Bun version
- How to use non-Anthropic models (OpenAI/DeepSeek/Ollama)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-03 01:27:46 +08:00
程序员阿江-Relakkes 6b72f3357c Merge pull request #27 from studyzy/main
Enable and fix /buddy command
2026-04-03 00:56:23 +08:00
程序员阿江(Relakkes) 3c705f491b docs: add third-party model integration guide (OpenAI/DeepSeek/Ollama via LiteLLM)
Add comprehensive documentation for using non-Anthropic models through
protocol translation proxies like LiteLLM. Also document ~/.claude/settings.json
env configuration as an alternative to .env files.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-03 00:45:06 +08:00
DevinZeng 9233518b11 Enable and fix /buddy command
The /buddy command was completely disabled by the bun:bundle feature('BUDDY')
flag, which evaluates to false at runtime. Removed all feature('BUDDY')
checks across the codebase to register the command, and added keyboard
event handling (q/Enter to dismiss), which was missing from the UI.
2026-04-03 00:30:56 +08:00
Relakkes Yang 430502e7bb docs: add Windows startup instructions to README
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-01 13:58:32 +08:00
程序员阿江-Relakkes d166eb89b6 Merge pull request #3 from NanmiCoder/docs/bilingual-readme-bun-install
docs: add bilingual README and Bun install guide
2026-04-01 12:09:17 +08:00
程序员阿江(Relakkes) 919dd34a7a docs: add bilingual README and Bun install guide 2026-04-01 12:08:36 +08:00
22 changed files with 1634 additions and 59 deletions
+49 -8
View File
@@ -1,9 +1,50 @@
-ANTHROPIC_AUTH_TOKEN=your_token_here
-ANTHROPIC_BASE_URL=https://api.minimaxi.com/anthropic
-ANTHROPIC_DEFAULT_HAIKU_MODEL=MiniMax-M2.7-highspeed
-ANTHROPIC_DEFAULT_OPUS_MODEL=MiniMax-M2.7-highspeed
-ANTHROPIC_DEFAULT_SONNET_MODEL=MiniMax-M2.7-highspeed
-ANTHROPIC_MODEL=MiniMax-M2.7-highspeed
-API_TIMEOUT_MS=3000000
-CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1
+# ============================================================
+# MiniMax (direct Anthropic-compatible endpoint)
+# ============================================================
+# ANTHROPIC_AUTH_TOKEN=your_token_here
+# ANTHROPIC_BASE_URL=https://api.minimaxi.com/anthropic
+# ANTHROPIC_MODEL=MiniMax-M2.7-highspeed
+# ANTHROPIC_DEFAULT_SONNET_MODEL=MiniMax-M2.7-highspeed
+# ANTHROPIC_DEFAULT_HAIKU_MODEL=MiniMax-M2.7-highspeed
+# ANTHROPIC_DEFAULT_OPUS_MODEL=MiniMax-M2.7-highspeed
+# API_TIMEOUT_MS=3000000
+# ============================================================
+# OpenAI (via LiteLLM proxy)
+# Start first: litellm --config litellm_config.yaml --port 4000
+# ============================================================
+# ANTHROPIC_AUTH_TOKEN=sk-anything
+# ANTHROPIC_BASE_URL=http://localhost:4000
+# ANTHROPIC_MODEL=gpt-4o
+# ANTHROPIC_DEFAULT_SONNET_MODEL=gpt-4o
+# ANTHROPIC_DEFAULT_HAIKU_MODEL=gpt-4o
+# ANTHROPIC_DEFAULT_OPUS_MODEL=gpt-4o
+# API_TIMEOUT_MS=3000000
+# ============================================================
+# DeepSeek (via LiteLLM proxy)
+# Start first: litellm --config litellm_config.yaml --port 4000
+# ============================================================
+# ANTHROPIC_AUTH_TOKEN=sk-anything
+# ANTHROPIC_BASE_URL=http://localhost:4000
+# ANTHROPIC_MODEL=deepseek-chat
+# ANTHROPIC_DEFAULT_SONNET_MODEL=deepseek-chat
+# ANTHROPIC_DEFAULT_HAIKU_MODEL=deepseek-chat
+# ANTHROPIC_DEFAULT_OPUS_MODEL=deepseek-chat
+# API_TIMEOUT_MS=3000000
+# ============================================================
+# OpenRouter (direct Anthropic-compatible endpoint)
+# ============================================================
+# ANTHROPIC_AUTH_TOKEN=sk-or-v1-xxx
+# ANTHROPIC_BASE_URL=https://openrouter.ai/api/v1
+# ANTHROPIC_MODEL=openai/gpt-4o
+# ANTHROPIC_DEFAULT_SONNET_MODEL=openai/gpt-4o
+# ANTHROPIC_DEFAULT_HAIKU_MODEL=openai/gpt-4o-mini
+# ANTHROPIC_DEFAULT_OPUS_MODEL=openai/gpt-4o
+# ============================================================
+# General settings (recommended to keep enabled)
+# ============================================================
 DISABLE_TELEMETRY=1
+CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1
+40
View File
@@ -0,0 +1,40 @@
---
name: Bug report
about: Create a bug report to help improve the cc-haha project
title: '[BUG] '
labels: bug
assignees: ''
---
## 🔍 Pre-submission checklist
<!-- Please confirm the following before filing an issue -->
- [ ] I have carefully read the [README FAQ](https://github.com/NanmiCoder/cc-haha#常见问题) section
- [ ] I have searched the [closed issues](https://github.com/NanmiCoder/cc-haha/issues?q=is%3Aissue+is%3Aclosed)
- [ ] I have confirmed this is not caused by a common issue such as a misconfigured API key, an incompatible API endpoint, or an outdated Bun version
## 🐛 Problem description
<!-- Describe the problem you encountered in detail -->
## 📝 Steps to reproduce
1.
2.
3.
## 💻 Environment
- Operating system:
- Bun version (`bun --version`):
- Node version (`node --version`):
- API provider (e.g. MiniMax / OpenRouter / official Anthropic):
- Model used:
- Launch mode (TUI / --print / Recovery CLI):
## 📋 Error logs
<!-- Please provide the complete error log -->
```shell
Paste the error log here
```
## 📷 Screenshots
<!-- If available, attach screenshots of the error -->
+36
View File
@@ -0,0 +1,36 @@
---
name: Usage question
about: Ask about a problem encountered while using the project
title: '[Question] '
labels: question
assignees: ''
---
## ⚠️ Pre-submission checklist
<!-- Please confirm the following -->
- [ ] I have carefully read the [README FAQ](https://github.com/NanmiCoder/cc-haha#常见问题) section
- [ ] I have read the [Third-Party Models Guide](https://github.com/NanmiCoder/cc-haha/blob/main/docs/third-party-models.md)
- [ ] I have searched the [closed issues](https://github.com/NanmiCoder/cc-haha/issues?q=is%3Aissue+is%3Aclosed)
## ❓ Problem description
<!-- Describe the problem clearly and concisely -->
## 🔍 Usage scenario
<!-- Which feature were you using when the problem occurred? -->
- Feature in use: (e.g. TUI interaction / --print headless mode / MCP server / Skills)
- API provider: (e.g. MiniMax / OpenRouter / official Anthropic / LiteLLM proxy)
## 💻 Environment
- Operating system:
- Bun version (`bun --version`):
- Node version (`node --version`):
- API provider:
- Model used:
## 📋 Error logs
```shell
Paste the complete error log here
```
## 📷 Screenshots
<!-- If available, attach screenshots of the error -->
+3
View File
@@ -0,0 +1,3 @@
# Default ignored files
/shelf/
/workspace.xml
+8
View File
@@ -0,0 +1,8 @@
<?xml version="1.0" encoding="UTF-8"?>
<module type="WEB_MODULE" version="4">
<component name="NewModuleRootManager">
<content url="file://$MODULE_DIR$" />
<orderEntry type="inheritedJdk" />
<orderEntry type="sourceFolder" forTests="false" />
</component>
</module>
+8
View File
@@ -0,0 +1,8 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="ProjectModuleManager">
<modules>
<module fileurl="file://$PROJECT_DIR$/.idea/claude-code.iml" filepath="$PROJECT_DIR$/.idea/claude-code.iml" />
</modules>
</component>
</project>
Generated
+6
View File
@@ -0,0 +1,6 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="VcsDirectoryMappings">
<mapping directory="" vcs="Git" />
</component>
</project>
+307
View File
@@ -0,0 +1,307 @@
# Claude Code Haha
<p align="right"><a href="./README.md">中文</a> | <strong>English</strong></p>
A **locally runnable version** repaired from the leaked Claude Code source, with support for any Anthropic-compatible API endpoint such as MiniMax and OpenRouter.
> The original leaked source does not run as-is. This repository fixes multiple blocking issues in the startup path so the full Ink TUI can work locally.
<p align="center">
<img src="docs/00runtime.png" alt="Runtime screenshot" width="800">
</p>
## Table of Contents
- [Features](#features)
- [Architecture Overview](#architecture-overview)
- [Quick Start](#quick-start)
- [Environment Variables](#environment-variables)
- [Fallback Mode](#fallback-mode)
- [FAQ](#faq)
- [Fixes Compared with the Original Leaked Source](#fixes-compared-with-the-original-leaked-source)
- [Project Structure](#project-structure)
- [Tech Stack](#tech-stack)
---
## Features
- Full Ink TUI experience (matching the official Claude Code interface)
- `--print` headless mode for scripts and CI
- MCP server, plugin, and Skills support
- Custom API endpoint and model support ([Third-Party Models Guide](docs/third-party-models.en.md))
- Fallback Recovery CLI mode
---
## Architecture Overview
<table>
<tr>
<td align="center" width="25%"><img src="docs/01-overall-architecture.png" alt="Overall architecture"><br><b>Overall architecture</b></td>
<td align="center" width="25%"><img src="docs/02-request-lifecycle.png" alt="Request lifecycle"><br><b>Request lifecycle</b></td>
<td align="center" width="25%"><img src="docs/03-tool-system.png" alt="Tool system"><br><b>Tool system</b></td>
<td align="center" width="25%"><img src="docs/04-multi-agent.png" alt="Multi-agent architecture"><br><b>Multi-agent architecture</b></td>
</tr>
<tr>
<td align="center" width="25%"><img src="docs/05-terminal-ui.png" alt="Terminal UI"><br><b>Terminal UI</b></td>
<td align="center" width="25%"><img src="docs/06-permission-security.png" alt="Permissions and security"><br><b>Permissions and security</b></td>
<td align="center" width="25%"><img src="docs/07-services-layer.png" alt="Services layer"><br><b>Services layer</b></td>
<td align="center" width="25%"><img src="docs/08-state-data-flow.png" alt="State and data flow"><br><b>State and data flow</b></td>
</tr>
</table>
---
## Quick Start
### 1. Install Bun
This project requires [Bun](https://bun.sh). If Bun is not installed on the target machine yet, use one of the following methods first:
```bash
# macOS / Linux (official install script)
curl -fsSL https://bun.sh/install | bash
```
If a minimal Linux image reports `unzip is required to install bun`, install `unzip` first:
```bash
# Ubuntu / Debian
apt update && apt install -y unzip
```
```bash
# macOS (Homebrew)
brew install bun
```
```powershell
# Windows (PowerShell)
powershell -c "irm bun.sh/install.ps1 | iex"
```
After installation, reopen the terminal and verify:
```bash
bun --version
```
### 2. Install project dependencies
```bash
bun install
```
### 3. Configure environment variables
Copy the example file and fill in your API key:
```bash
cp .env.example .env
```
Edit `.env` (the example below uses [MiniMax](https://platform.minimaxi.com/subscribe/token-plan?code=1TG2Cseab2&source=link) as the API provider — you can replace it with any compatible service):
```env
# API authentication (choose one)
ANTHROPIC_API_KEY=sk-xxx # Standard API key via x-api-key header
ANTHROPIC_AUTH_TOKEN=sk-xxx # Bearer token via Authorization header
# API endpoint (optional, defaults to Anthropic)
ANTHROPIC_BASE_URL=https://api.minimaxi.com/anthropic
# Model configuration
ANTHROPIC_MODEL=MiniMax-M2.7-highspeed
ANTHROPIC_DEFAULT_SONNET_MODEL=MiniMax-M2.7-highspeed
ANTHROPIC_DEFAULT_HAIKU_MODEL=MiniMax-M2.7-highspeed
ANTHROPIC_DEFAULT_OPUS_MODEL=MiniMax-M2.7-highspeed
# Timeout in milliseconds
API_TIMEOUT_MS=3000000
# Disable telemetry and non-essential network traffic
DISABLE_TELEMETRY=1
CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1
```
> **Tip**: You can also configure environment variables via the `env` field in `~/.claude/settings.json`. This is consistent with the official Claude Code configuration:
>
> ```json
> {
> "env": {
> "ANTHROPIC_AUTH_TOKEN": "sk-xxx",
> "ANTHROPIC_BASE_URL": "https://api.minimaxi.com/anthropic",
> "ANTHROPIC_MODEL": "MiniMax-M2.7-highspeed"
> }
> }
> ```
>
> Priority: Environment variables > `.env` file > `~/.claude/settings.json`
### 4. Start
#### macOS / Linux
```bash
# Interactive TUI mode (full interface)
./bin/claude-haha
# Headless mode (single prompt)
./bin/claude-haha -p "your prompt here"
# Pipe input
echo "explain this code" | ./bin/claude-haha -p
# Show all options
./bin/claude-haha --help
```
#### Windows
> **Prerequisite**: [Git for Windows](https://git-scm.com/download/win) must be installed (provides Git Bash, which the project's internal shell execution depends on).
The startup script `bin/claude-haha` is a bash script and cannot run directly in cmd or PowerShell. Use one of the following methods:
**Option 1: PowerShell / cmd — call Bun directly (recommended)**
```powershell
# Interactive TUI mode
bun --env-file=.env ./src/entrypoints/cli.tsx
# Headless mode
bun --env-file=.env ./src/entrypoints/cli.tsx -p "your prompt here"
# Fallback Recovery CLI
bun --env-file=.env ./src/localRecoveryCli.ts
```
**Option 2: Run inside Git Bash**
```bash
# Same usage as macOS / Linux
./bin/claude-haha
```
> **Note**: Some features (voice input, Computer Use, sandbox isolation, etc.) are not available on Windows. This does not affect the core TUI interaction.
---
## Environment Variables
| Variable | Required | Description |
|------|------|------|
| `ANTHROPIC_API_KEY` | One of two | API key sent via the `x-api-key` header |
| `ANTHROPIC_AUTH_TOKEN` | One of two | Auth token sent via the `Authorization: Bearer` header |
| `ANTHROPIC_BASE_URL` | No | Custom API endpoint, defaults to Anthropic |
| `ANTHROPIC_MODEL` | No | Default model |
| `ANTHROPIC_DEFAULT_SONNET_MODEL` | No | Sonnet-tier model mapping |
| `ANTHROPIC_DEFAULT_HAIKU_MODEL` | No | Haiku-tier model mapping |
| `ANTHROPIC_DEFAULT_OPUS_MODEL` | No | Opus-tier model mapping |
| `API_TIMEOUT_MS` | No | API request timeout, default `600000` (10min) |
| `DISABLE_TELEMETRY` | No | Set to `1` to disable telemetry |
| `CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC` | No | Set to `1` to disable non-essential network traffic |
---
## Fallback Mode
If the full TUI has issues, use the simplified readline-based interaction mode:
```bash
CLAUDE_CODE_FORCE_RECOVERY_CLI=1 ./bin/claude-haha
```
---
## Fixes Compared with the Original Leaked Source
The leaked source could not run directly. This repository mainly fixes the following issues:
| Issue | Root cause | Fix |
|------|------|------|
| TUI does not start | The entry script routed no-argument startup to the recovery CLI | Restored the full `cli.tsx` entry |
| Startup hangs | The `verify` skill imports a missing `.md` file, causing Bun's text loader to hang indefinitely | Added stub `.md` files |
| `--print` hangs | `filePersistence/types.ts` was missing | Added type stub files |
| `--print` hangs | `ultraplan/prompt.txt` was missing | Added resource stub files |
| **Enter key does nothing** | The `modifiers-napi` native package was missing, `isModifierPressed()` threw, `handleEnter` was interrupted, and `onSubmit` never ran | Added try/catch fault tolerance |
| Setup was skipped | `preload.ts` automatically set `LOCAL_RECOVERY=1`, skipping all initialization | Removed the default setting |
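The Enter-key fix in the table above amounts to wrapping the native call so a missing binary can no longer abort submission. An illustrative sketch of that shape (not the repository's exact code; `isModifierPressed` stands in for the `modifiers-napi` export):

```typescript
// Hypothetical shape of the fix: tolerate a missing native module.
declare function isModifierPressed(): boolean // native call from modifiers-napi

function isModifierPressedSafe(): boolean {
  try {
    return isModifierPressed()
  } catch {
    return false // package missing → assume no modifier is held, let handleEnter proceed
  }
}
```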
---
## Project Structure
```text
bin/claude-haha # Entry script
preload.ts # Bun preload (sets MACRO globals)
.env.example # Environment variable template
src/
├── entrypoints/cli.tsx # Main CLI entry
├── main.tsx # Main TUI logic (Commander.js + React/Ink)
├── localRecoveryCli.ts # Fallback Recovery CLI
├── setup.ts # Startup initialization
├── screens/REPL.tsx # Interactive REPL screen
├── ink/ # Ink terminal rendering engine
├── components/ # UI components
├── tools/ # Agent tools (Bash, Edit, Grep, etc.)
├── commands/ # Slash commands (/commit, /review, etc.)
├── skills/ # Skill system
├── services/ # Service layer (API, MCP, OAuth, etc.)
├── hooks/ # React hooks
└── utils/ # Utility functions
```
---
## Tech Stack
| Category | Technology |
|------|------|
| Runtime | [Bun](https://bun.sh) |
| Language | TypeScript |
| Terminal UI | React + [Ink](https://github.com/vadimdemedes/ink) |
| CLI parsing | Commander.js |
| API | Anthropic SDK |
| Protocols | MCP, LSP |
---
## FAQ
### Q: `undefined is not an object (evaluating 'usage.input_tokens')`
**Cause**: `ANTHROPIC_BASE_URL` is misconfigured. The API endpoint is returning HTML or another non-JSON format instead of a valid Anthropic protocol response.
This project uses the **Anthropic Messages API protocol**. `ANTHROPIC_BASE_URL` must point to an endpoint compatible with Anthropic's `/v1/messages` interface. The Anthropic SDK automatically appends `/v1/messages` to the base URL, so:
- MiniMax: `ANTHROPIC_BASE_URL=https://api.minimaxi.com/anthropic`
- OpenRouter: `ANTHROPIC_BASE_URL=https://openrouter.ai/api`
- OpenRouter (wrong): `ANTHROPIC_BASE_URL=https://openrouter.ai/anthropic` ❌ (returns HTML)
If your model provider only supports the OpenAI protocol, you need a proxy like LiteLLM for protocol translation. See the [Third-Party Models Guide](docs/third-party-models.en.md).
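A quick way to sanity-check an endpoint is to call `/v1/messages` by hand (a sketch; substitute your own token and model name) and confirm the response is JSON containing a `usage` object rather than HTML:

```bash
curl -sS "$ANTHROPIC_BASE_URL/v1/messages" \
  -H "Authorization: Bearer $ANTHROPIC_AUTH_TOKEN" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{"model":"MiniMax-M2.7-highspeed","max_tokens":32,"messages":[{"role":"user","content":"ping"}]}'
```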
### Q: `Cannot find package 'bundle'`
```
error: Cannot find package 'bundle' from '.../claude-code-haha/src/entrypoints/cli.tsx'
```
**Cause**: Your Bun version is too old and doesn't support the required `bun:bundle` built-in module.
**Fix**: Upgrade Bun to the latest version:
```bash
bun upgrade
```
### Q: How to use OpenAI / DeepSeek / Ollama or other non-Anthropic models?
This project only supports the Anthropic protocol. If your model provider doesn't natively support the Anthropic protocol, you need a proxy like [LiteLLM](https://github.com/BerriAI/litellm) for protocol translation (OpenAI → Anthropic).
See the [Third-Party Models Guide](docs/third-party-models.en.md) for detailed setup instructions.
---
## Disclaimer
This repository is based on the Claude Code source leaked from the Anthropic npm registry on 2026-03-31. All original source code copyrights belong to [Anthropic](https://www.anthropic.com). It is provided for learning and research purposes only.
+133 -7
View File
@@ -1,5 +1,7 @@
 # Claude Code Haha
+<p align="right"><strong>中文</strong> | <a href="./README.en.md">English</a></p>
 A **locally runnable version** repaired from the leaked Claude Code source, supporting any Anthropic-compatible API (e.g. MiniMax, OpenRouter).
 > The original leaked source cannot run as-is. This repository fixes several blocking issues in the startup path so the full Ink TUI works locally.
@@ -8,12 +10,26 @@
 <img src="docs/00runtime.png" alt="Runtime screenshot" width="800">
 </p>
+## Table of Contents
+- [Features](#功能)
+- [Architecture Overview](#架构概览)
+- [Quick Start](#快速开始)
+- [Environment Variables](#环境变量说明)
+- [Fallback Mode](#降级模式)
+- [FAQ](#常见问题)
+- [Fixes Compared with the Original Leaked Source](#相对于原始泄露源码的修复)
+- [Project Structure](#项目结构)
+- [Tech Stack](#技术栈)
+---
 ## Features
 - Full Ink TUI experience (matching the official Claude Code interface)
 - `--print` headless mode (for scripts/CI)
 - MCP server, plugin, and Skills support
-- Custom API endpoint and model support
+- Custom API endpoint and model support ([Third-Party Models Guide](docs/third-party-models.md))
 - Fallback Recovery CLI mode
 ---
@@ -39,15 +55,45 @@
 ## Quick Start
-### 1. Install dependencies
-Requires [Bun](https://bun.sh) >= 1.1 and Node.js >= 18.
-```bash
-npm install
-```
-### 2. Configure environment variables
+### 1. Install Bun
+This project runs on [Bun](https://bun.sh). If Bun is not installed on your machine yet, use any of the following methods first:
+```bash
+# macOS / Linux (official install script)
+curl -fsSL https://bun.sh/install | bash
+```
+If a minimal Linux image reports `unzip is required to install bun`, install `unzip` first:
+```bash
+# Ubuntu / Debian
+apt update && apt install -y unzip
+```
+```bash
+# macOS (Homebrew)
+brew install bun
+```
+```powershell
+# Windows (PowerShell)
+powershell -c "irm bun.sh/install.ps1 | iex"
+```
+After installation, reopen the terminal and verify:
+```bash
+bun --version
+```
+### 2. Install project dependencies
+```bash
+bun install
+```
+### 3. Configure environment variables
 Copy the example file and fill in your API key:
@@ -55,7 +101,7 @@ npm install
 cp .env.example .env
 ```
-Edit `.env`:
+Edit `.env` (the example below uses [MiniMax](https://platform.minimaxi.com/subscribe/token-plan?code=1TG2Cseab2&source=link) as the API provider; any compatible service works):
 ```env
 # API authentication (choose one)
@@ -79,7 +125,23 @@ DISABLE_TELEMETRY=1
 CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1
 ```
-### 3. Start
+> **Tip**: Besides the `.env` file, you can configure environment variables via the `env` field of `~/.claude/settings.json`, matching the official Claude Code configuration:
+>
+> ```json
+> {
+>   "env": {
+>     "ANTHROPIC_AUTH_TOKEN": "sk-xxx",
+>     "ANTHROPIC_BASE_URL": "https://api.minimaxi.com/anthropic",
+>     "ANTHROPIC_MODEL": "MiniMax-M2.7-highspeed"
+>   }
+> }
+> ```
+>
+> Priority: environment variables > `.env` file > `~/.claude/settings.json`
+### 4. Start
+#### macOS / Linux
 ```bash
 # Interactive TUI mode (full interface)
@@ -95,6 +157,34 @@ echo "explain this code" | ./bin/claude-haha -p
 ./bin/claude-haha --help
 ```
+#### Windows
+> **Prerequisite**: [Git for Windows](https://git-scm.com/download/win) must be installed (it provides Git Bash, which the project's internal shell execution depends on).
+On Windows the startup script `bin/claude-haha` is a bash script and cannot run directly in cmd / PowerShell. Use one of the following methods:
+**Option 1: Call Bun directly from PowerShell / cmd (recommended)**
+```powershell
+# Interactive TUI mode
+bun --env-file=.env ./src/entrypoints/cli.tsx
+# Headless mode
+bun --env-file=.env ./src/entrypoints/cli.tsx -p "your prompt here"
+# Fallback Recovery CLI
+bun --env-file=.env ./src/localRecoveryCli.ts
+```
+**Option 2: Run inside Git Bash**
+```bash
+# Same usage as macOS / Linux in a Git Bash terminal
+./bin/claude-haha
+```
+> **Note**: Some features (voice input, Computer Use, sandbox isolation, etc.) are unavailable on Windows; this does not affect the core TUI interaction.
 ---
 ## Environment Variables
@@ -176,6 +266,42 @@
 ---
+## FAQ
+### Q: `undefined is not an object (evaluating 'usage.input_tokens')`
+**Cause**: `ANTHROPIC_BASE_URL` is misconfigured; the API endpoint is returning HTML or another non-JSON format instead of an Anthropic-protocol response.
+This project uses the **Anthropic Messages API protocol**. `ANTHROPIC_BASE_URL` must point to an endpoint compatible with Anthropic's `/v1/messages` interface. The Anthropic SDK automatically appends `/v1/messages` to the base URL, so:
+- MiniMax: `ANTHROPIC_BASE_URL=https://api.minimaxi.com/anthropic`
+- OpenRouter: `ANTHROPIC_BASE_URL=https://openrouter.ai/api`
+- OpenRouter (wrong): `ANTHROPIC_BASE_URL=https://openrouter.ai/anthropic` ❌ (returns HTML)
+If your model provider only supports the OpenAI protocol, use a proxy such as LiteLLM for protocol translation; see the [Third-Party Models Guide](docs/third-party-models.md).
+### Q: `Cannot find package 'bundle'`
+```
+error: Cannot find package 'bundle' from '.../claude-code-haha/src/entrypoints/cli.tsx'
+```
+**Cause**: Your Bun version is too old and lacks built-in modules the project requires, such as `bun:bundle`.
+**Fix**: Upgrade Bun to the latest version:
+```bash
+bun upgrade
+```
+### Q: How do I use OpenAI / DeepSeek / Ollama or other non-Anthropic models?
+This project only speaks the Anthropic protocol. If your model provider does not support it natively, use a proxy such as [LiteLLM](https://github.com/BerriAI/litellm) for protocol translation (OpenAI → Anthropic).
+See the [Third-Party Models Guide](docs/third-party-models.md) for detailed setup steps.
+---
 ## Disclaimer
 This repository is based on the Claude Code source leaked from the Anthropic npm registry on 2026-03-31. All original source code copyrights belong to [Anthropic](https://www.anthropic.com). For learning and research purposes only.
+240
View File
@@ -0,0 +1,240 @@
# Claude Code Haha: Project Code Analysis
## Overview
This project is a locally runnable version repaired from the **Anthropic Claude Code source leaked on 2026-03-31**, named **Claude Code Haha**. It is a fully featured **AI coding assistant CLI** that supports any Anthropic-compatible API (e.g. MiniMax, OpenRouter).
> [!NOTE]
> The leaked source cannot run as-is. This repository fixes several blocking issues in the startup path (unresponsive Enter key, TUI not starting, missing stub files, etc.) so the full Ink TUI works locally.
---
## Tech Stack
| Category | Technology | Notes |
|------|------|------|
| **Runtime** | [Bun](https://bun.sh) | High-performance JS/TS runtime used in place of Node.js |
| **Language** | TypeScript (ESNext) | All TS, `moduleResolution: bundler` |
| **Terminal UI** | React 19 + [Ink 6](https://github.com/vadimdemedes/ink) | React-based terminal TUI rendering |
| **CLI parsing** | Commander.js (`@commander-js/extra-typings`) | Command-line option parsing and subcommands |
| **AI API** | `@anthropic-ai/sdk` | Anthropic Messages API communication |
| **MCP protocol** | `@modelcontextprotocol/sdk` | Model Context Protocol client |
| **LSP** | `vscode-jsonrpc` / `vscode-languageserver-types` | Language Server Protocol integration |
| **Observability** | OpenTelemetry (logs/metrics/tracing) | Telemetry collection |
| **Feature flags** | GrowthBook (`@growthbook/growthbook`) | A/B testing and feature gating |
| **Build-time features** | `bun:bundle` (`feature()`) | Compile-time dead-code elimination |
| **Validation** | Zod v4 | Input schema definition and checking |
| **Cloud auth** | `google-auth-library`, `@aws-sdk/client-bedrock-runtime` | Google/AWS multi-cloud authentication |
| **Markdown** | `marked` + `highlight.js` | Markdown rendering and code highlighting |
| **Search** | `fuse.js` | Fuzzy search (commands/files) |
| **Misc** | `chalk`, `diff`, `chokidar`, `yaml`, `zod`, `ws` | Coloring, diffs, file watching, YAML parsing, WebSocket |
---
## Architecture Overview
### Startup chain
```
bin/claude-haha (Bash)
└─ bun --env-file=.env src/entrypoints/cli.tsx
   ├─ Fast paths: --version, --daemon, --bridge, ps/logs/attach/kill
   └─ Full CLI: src/main.tsx (Commander.js parsing → React/Ink TUI)
      └─ src/screens/REPL.tsx (main interactive REPL loop)
```
- **`preload.ts`**: Bun preload script that injects the `MACRO` globals (version number, etc.)
- **`cli.tsx`**: bootstrap entry; fast paths speed up loading of individual subcommands
- **`main.tsx`** (~800KB): the core logic; Commander.js defines every command-line option
- **`REPL.tsx`** (~900KB): the interactive REPL screen; handles user input, message rendering, and tool execution
### Core module hierarchy
```mermaid
graph TD
    A[CLI Entry - cli.tsx] --> B[Main - main.tsx]
    B --> C[REPL Screen - REPL.tsx]
    C --> D[Query Engine - query.ts]
    D --> E[API Service - services/api]
    D --> F[Tool System - tools/]
    D --> G[Auto Compact - services/compact]
    C --> H[UI Components - components/]
    C --> I[React Hooks - hooks/]
    F --> J[Permission System - utils/permissions]
    C --> K[MCP Clients - services/mcp]
    C --> L[Slash Commands - commands/]
```
---
## Core Systems in Detail
### 1. Query Engine (`query.ts`, `QueryEngine.ts`)
**Core loop**: `query()` is an `AsyncGenerator` that drives the entire AI conversation loop:
1. **Context management**: auto compact, microcompact, and snip (history trimming)
2. **API calls**: streaming calls to the Anthropic Messages API
3. **Tool execution**: parse the `tool_use` blocks the model returns and run the matching tools
4. **Loop continuation**: splice the tool results back into the message list and run the next round of inference
Key features:
- **Task Budget**: caps the total token spend of a single agentic turn
- **Reactive Compact**: automatically triggers compression on prompt-too-long errors
- **Streaming Tool Execution**: starts tool execution in parallel while still streaming model output
- **Fallback Model**: automatically falls back to a backup model when the primary model fails
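As a rough illustration of that loop shape (a sketch only; `callModel` and `runTool` are hypothetical stand-ins, and the real `query.ts` does far more):

```typescript
// Illustrative sketch of the agentic loop, not the project's actual query.ts.
type Message = { role: 'user' | 'assistant'; content: unknown }
type ToolUse = { id: string; name: string; input: unknown }
type ModelTurn = { message: Message; toolUses: ToolUse[] }

declare function callModel(history: Message[]): Promise<ModelTurn>
declare function runTool(use: ToolUse): Promise<unknown>

export async function* query(history: Message[]): AsyncGenerator<Message> {
  for (;;) {
    const turn = await callModel(history)    // streaming Messages API call
    history.push(turn.message)
    yield turn.message
    if (turn.toolUses.length === 0) return   // no tool_use blocks → the turn is done
    for (const use of turn.toolUses) {
      const result = await runTool(use)      // permission check, then the tool's call()
      const toolResult: Message = {
        role: 'user',                        // tool results go back as user-side content
        content: [{ type: 'tool_result', tool_use_id: use.id, content: result }],
      }
      history.push(toolResult)               // spliced back in for the next round
      yield toolResult
    }
  }
}
```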
### 2. Tool System (`Tool.ts`, `tools.ts`, `tools/`)
Tools follow a **`buildTool()` factory pattern** that unifies the tool definition interface. Each tool provides (see the sketch after this list):
- `inputSchema` (Zod): input validation
- `checkPermissions()`: permission checks
- `call()`: the execution logic
- `renderToolUseMessage()` / `renderToolResultMessage()`: UI rendering
- `prompt()`: the tool description shown to the model
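A hypothetical definition under this pattern might look as follows (names, fields, and the `buildTool` signature are illustrative, not the codebase's exact API):

```typescript
import { z } from 'zod'

// Stand-in for the real factory; the actual signature in Tool.ts will differ.
declare function buildTool<I>(def: {
  name: string
  inputSchema: z.ZodType<I>
  checkPermissions: (input: I) => Promise<{ allowed: boolean }>
  call: (input: I) => Promise<string>
  renderToolUseMessage: (input: I) => string
  prompt: () => string
}): unknown

const WordCountTool = buildTool({
  name: 'WordCount',
  inputSchema: z.object({ path: z.string() }),
  checkPermissions: async () => ({ allowed: true }),           // read-only, so always allowed
  call: async ({ path }) => {
    const text = await Bun.file(path).text()
    return String(text.split(/\s+/).filter(Boolean).length)    // the tool_result payload
  },
  renderToolUseMessage: ({ path }) => `Count words in ${path}`, // what the TUI transcript shows
  prompt: () => 'Counts whitespace-separated words in a file.', // description the model sees
})
```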
#### Built-in tool inventory (40+)
| Category | Tools | Notes |
|------|------|------|
| **File operations** | `FileReadTool`, `FileWriteTool`, `FileEditTool`, `NotebookEditTool` | Read/write/edit files and Jupyter notebooks |
| **Search** | `GrepTool`, `GlobTool`, `WebSearchTool`, `ToolSearchTool` | Ripgrep search, glob matching, web search, tool discovery |
| **Shell** | `BashTool`, `PowerShellTool`, `REPLTool` | Bash/PowerShell command execution, REPL VM mode |
| **Task management** | `TaskCreateTool`, `TaskGetTool`, `TaskUpdateTool`, `TaskListTool`, `TaskStopTool`, `TaskOutputTool` | Parallel multi-task management |
| **Agent** | `AgentTool`, `TeamCreateTool`, `TeamDeleteTool`, `SendMessageTool` | Sub-agent spawning, team collaboration |
| **Plan mode** | `EnterPlanModeTool`, `ExitPlanModeTool`, `EnterWorktreeTool`, `ExitWorktreeTool` | Entering/exiting plan mode, Git worktree isolation |
| **Network** | `WebFetchTool`, `WebBrowserTool` | HTTP fetching, browser automation |
| **MCP** | `MCPTool`, `ListMcpResourcesTool`, `ReadMcpResourceTool` | MCP server tool proxying |
| **Misc** | `TodoWriteTool`, `SkillTool`, `AskUserQuestionTool`, `LSPTool`, `ConfigTool`, `SleepTool`, `BriefTool` | TODO management, skill invocation, user interaction, LSP diagnostics |
| **Special** | `ScheduleCronTool`, `RemoteTriggerTool`, `WorkflowTool`, `MonitorTool` | Scheduled jobs, remote triggers, workflow scripts |
### 3. Slash command system (`commands/`)
The project contains **87+ slash command directories**, covering:
| Category | Example commands |
|------|----------|
| **Git operations** | `/commit`, `/diff`, `/branch`, `/review`, `/pr_comments` |
| **Session management** | `/session`, `/resume`, `/compact`, `/clear`, `/export`, `/share` |
| **Configuration** | `/config`, `/model`, `/theme`, `/keybindings`, `/permissions`, `/vim` |
| **Agent/collaboration** | `/agents`, `/buddy`, `/tasks`, `/teleport` |
| **Developer tools** | `/doctor`, `/stats`, `/debug-tool-call`, `/heapdump`, `/ctx_viz` |
| **Integrations** | `/mcp`, `/plugin`, `/skills`, `/chrome`, `/ide`, `/desktop` |
| **Misc** | `/help`, `/version`, `/upgrade`, `/feedback`, `/voice`, `/login`, `/logout` |
### 4. Services layer (`services/`)
| Service | Description |
|------|------|
| `api/` | API call wrapper (retries, fallback, mocks, prompt dumps) |
| `mcp/` | MCP client management (connections, tool registration, resource reads) |
| `compact/` | Automatic compression (auto compact, reactive compact, snip) |
| `oauth/` | OAuth authentication flows |
| `lsp/` | LSP server integration |
| `analytics/` | Analytics tracking (GrowthBook, event reporting) |
| `plugins/` | Plugin loading and management |
| `voice.ts` | Voice input (STT) |
| `tips/` | Usage tip delivery |
| `policyLimits/` | Organization policy limits |
### 5. UI components (`components/`)
A terminal UI component system built on React + Ink, comprising **113 component files** + **31 subdirectories**:
- **Core interaction**: `PromptInput/`, `TextInput`, `VimTextInput`, `BaseTextInput`
- **Message rendering**: `Message`, `MessageRow`, `Messages`, `VirtualMessageList` (virtual scrolling)
- **Diff display**: `StructuredDiff/`, `FileEditToolDiff`, `diff/`
- **Dialogs**: `ModelPicker`, `ThemePicker`, `GlobalSearchDialog`, `HistorySearchDialog`
- **Status display**: `StatusLine`, `Spinner`, `AgentProgressLine`, `CoordinatorAgentStatus`
- **Markdown**: `Markdown`, `MarkdownTable`, `HighlightedCode/`
- **Design system**: `design-system/`, `ui/`
### 6. Skill system (`skills/`)
Skills are an extensible instruction-file system:
- `bundledSkills.ts`: registers the built-in skills
- `loadSkillsDir.ts`: loads custom skills from the filesystem
- `bundled/`: the built-in skill directory
- Supports `.md` files with YAML frontmatter (see the example after this list)
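As an illustration, a minimal custom skill file might look like this (the frontmatter keys shown are an assumption, not verified against `loadSkillsDir.ts`):

```md
---
name: changelog
description: Summarize recent commits into a CHANGELOG entry
---

Run `git log --oneline -20`, group the commits by type (feat/fix/docs),
and append a dated section to CHANGELOG.md.
```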
### 7. Multi-agent architecture (`coordinator/`)
- **Coordinator Mode**: a central agent assigns tasks to worker agents
- **Agent Tool**: spawns sub-agents to run independent tasks
- **Team System**: creates/deletes team members
- **Worktree**: isolated execution in Git worktrees
### 8. Permissions and security (`utils/permissions/`)
A multi-layer permission control system (a hypothetical config sketch follows this list):
- **PermissionMode**: `default`, `plan`, `auto`, `bypass`
- **Rule sources**: project CLAUDE.md, user settings, CLI flags
- **Rule types**: `alwaysAllow`, `alwaysDeny`, `alwaysAsk`
- **Hooks**: `PreToolUse`, `PostToolUse` hooks
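Purely as an illustration of how those rule types could be expressed in a settings file (a hypothetical fragment; the actual key names and matcher syntax in `utils/permissions/` are not verified here):

```json
{
  "permissions": {
    "alwaysAllow": ["FileRead", "Bash(git status:*)"],
    "alwaysDeny": ["WebFetch"],
    "alwaysAsk": ["FileEdit", "Bash(rm:*)"]
  }
}
```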
---
## Key Data Flow
```
User input → PromptInput → handlePromptSubmit()
  → slash command? → run the command handler
  → otherwise → query() async generator
      → auto compact / microcompact / snip
      → prependUserContext() + appendSystemContext()
      → deps.callModel() (streaming API call)
      → parse tool_use blocks → StreamingToolExecutor
          → checkPermissions() → call() → ToolResult
      → splice in tool_result → continue the loop
      → stop_reason ≠ tool_use → done
```
---
## Project File Statistics
| Directory | Approx. count | Notes |
|------|-----------|------|
| `src/tools/` | 44 directories | Built-in tool implementations |
| `src/commands/` | 87 directories + 15 files | Slash commands |
| `src/components/` | 113 files + 31 directories | UI components |
| `src/hooks/` | 83 files + 2 directories | React hooks |
| `src/utils/` | 298 files + 31 directories | Utility functions |
| `src/services/` | 16 files + 20 directories | Services layer |
| **Total** | ~700+ files | **A large TypeScript project** |
> [!IMPORTANT]
> The core files are enormous: `main.tsx` (~800KB), `REPL.tsx` (~900KB), `messages.ts` (~193KB), `query.ts` (~69KB). This is a complete, industrial-grade implementation of an AI coding assistant.
---
## Local Fix List
| Issue | Fix |
|------|----------|
| TUI does not start | Restored the full `cli.tsx` entry (the original wrongly routed to the recovery CLI) |
| Startup hangs | Created stub `.md` files (the `verify` skill imports a missing one) |
| `--print` hangs | Created `filePersistence/types.ts` and `ultraplan/prompt.txt` stub files |
| Enter key does nothing | The `modifiers-napi` native package is missing; added try/catch fault tolerance |
| Setup skipped | Removed the `LOCAL_RECOVERY=1` default set in `preload.ts` |
---
## Summary
This is an **exceptionally complete AI coding assistant** implementation, covering:
1. ✅ Full terminal TUI (React/Ink)
2. ✅ 40+ built-in tools (file operations, search, shell, agents, MCP)
3. ✅ 87+ slash commands
4. ✅ Parallel multi-agent execution (Coordinator/Worker architecture)
5. ✅ MCP protocol support
6. ✅ Skill/plugin extension system
7. ✅ Automatic context compression (auto compact, reactive compact, snip)
8. ✅ Streaming tool execution
9. ✅ Multi-layer permission control
10. ✅ Voice input support
11. ✅ Isolated execution in Git worktrees
12. ✅ IDE integration (VS Code, JetBrains)
13. ✅ OAuth / API key authentication
14. ✅ Session persistence and recovery
+254
View File
@@ -0,0 +1,254 @@
# Using Third-Party Models (OpenAI / DeepSeek / Local Models)
This project communicates with LLMs via the Anthropic protocol. By using a protocol translation proxy, you can use any model including OpenAI, DeepSeek, Ollama, etc.
## How It Works
```
claude-code-haha ──Anthropic protocol──▶ LiteLLM Proxy ──OpenAI protocol──▶ Target Model API
(translation)
```
This project sends Anthropic Messages API requests. The LiteLLM proxy automatically translates them to OpenAI Chat Completions API format and forwards them to the target model.
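Once the proxy from Option 1 below is running, you can watch the translation happen by sending a hand-written Anthropic-format request at it (a sketch; the model name must match your `litellm_config.yaml`):

```bash
curl -sS http://localhost:4000/v1/messages \
  -H "Authorization: Bearer sk-anything" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{"model":"gpt-4o","max_tokens":32,"messages":[{"role":"user","content":"ping"}]}'
```

The reply should come back in Anthropic Messages format even though the upstream call was OpenAI-style.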
---
## Option 1: LiteLLM Proxy (Recommended)
[LiteLLM](https://github.com/BerriAI/litellm) is a unified proxy gateway supporting 100+ LLMs (41k+ GitHub Stars), with native support for receiving Anthropic protocol requests.
### 1. Install LiteLLM
```bash
pip install 'litellm[proxy]'
```
### 2. Create Configuration File
Create `litellm_config.yaml`:
#### Using OpenAI Models
```yaml
model_list:
- model_name: gpt-4o
litellm_params:
model: openai/gpt-4o
api_key: os.environ/OPENAI_API_KEY
litellm_settings:
drop_params: true # Drop Anthropic-specific params (thinking, etc.)
```
#### Using DeepSeek Models
```yaml
model_list:
- model_name: deepseek-chat
litellm_params:
model: deepseek/deepseek-chat
api_key: os.environ/DEEPSEEK_API_KEY
api_base: https://api.deepseek.com
litellm_settings:
drop_params: true
```
#### Using Ollama Local Models
```yaml
model_list:
- model_name: llama3
litellm_params:
model: ollama/llama3
api_base: http://localhost:11434
litellm_settings:
drop_params: true
```
#### Using Multiple Models (switchable after startup)
```yaml
model_list:
- model_name: gpt-4o
litellm_params:
model: openai/gpt-4o
api_key: os.environ/OPENAI_API_KEY
- model_name: deepseek-chat
litellm_params:
model: deepseek/deepseek-chat
api_key: os.environ/DEEPSEEK_API_KEY
api_base: https://api.deepseek.com
- model_name: llama3
litellm_params:
model: ollama/llama3
api_base: http://localhost:11434
litellm_settings:
drop_params: true
```
### 3. Start the Proxy
```bash
# Set your target model's API key
export OPENAI_API_KEY=sk-xxx
# or
export DEEPSEEK_API_KEY=sk-xxx
# Start the proxy
litellm --config litellm_config.yaml --port 4000
```
The proxy will listen on `http://localhost:4000` and expose an Anthropic-compatible `/v1/messages` endpoint.
### 4. Configure This Project
Choose one of two configuration methods:
#### Method A: Via `.env` File
```env
ANTHROPIC_AUTH_TOKEN=sk-anything
ANTHROPIC_BASE_URL=http://localhost:4000
ANTHROPIC_MODEL=gpt-4o
ANTHROPIC_DEFAULT_SONNET_MODEL=gpt-4o
ANTHROPIC_DEFAULT_HAIKU_MODEL=gpt-4o
ANTHROPIC_DEFAULT_OPUS_MODEL=gpt-4o
API_TIMEOUT_MS=3000000
DISABLE_TELEMETRY=1
CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1
```
#### Method B: Via `~/.claude/settings.json`
```json
{
"env": {
"ANTHROPIC_AUTH_TOKEN": "sk-anything",
"ANTHROPIC_BASE_URL": "http://localhost:4000",
"ANTHROPIC_MODEL": "gpt-4o",
"ANTHROPIC_DEFAULT_SONNET_MODEL": "gpt-4o",
"ANTHROPIC_DEFAULT_HAIKU_MODEL": "gpt-4o",
"ANTHROPIC_DEFAULT_OPUS_MODEL": "gpt-4o",
"API_TIMEOUT_MS": "3000000",
"DISABLE_TELEMETRY": "1",
"CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC": "1"
}
}
```
> **Note**: The `ANTHROPIC_AUTH_TOKEN` value can be any string when using the LiteLLM proxy (LiteLLM uses its own configured key for forwarding), unless you've set a `master_key` on the LiteLLM side.
### 5. Start and Verify
```bash
./bin/claude-haha
```
If everything is configured correctly, you should see the normal chat interface, with your configured target model handling the requests.
---
## Option 2: Direct Connection to Anthropic-Compatible Services
Some third-party services directly support the Anthropic Messages API, no proxy needed:
### OpenRouter
```env
ANTHROPIC_AUTH_TOKEN=sk-or-v1-xxx
ANTHROPIC_BASE_URL=https://openrouter.ai/api/v1
ANTHROPIC_MODEL=openai/gpt-4o
ANTHROPIC_DEFAULT_SONNET_MODEL=openai/gpt-4o
ANTHROPIC_DEFAULT_HAIKU_MODEL=openai/gpt-4o-mini
ANTHROPIC_DEFAULT_OPUS_MODEL=openai/gpt-4o
DISABLE_TELEMETRY=1
CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1
```
### MiniMax (pre-configured in .env.example)
```env
ANTHROPIC_AUTH_TOKEN=your_token_here
ANTHROPIC_BASE_URL=https://api.minimaxi.com/anthropic
ANTHROPIC_MODEL=MiniMax-M2.7-highspeed
ANTHROPIC_DEFAULT_SONNET_MODEL=MiniMax-M2.7-highspeed
ANTHROPIC_DEFAULT_HAIKU_MODEL=MiniMax-M2.7-highspeed
ANTHROPIC_DEFAULT_OPUS_MODEL=MiniMax-M2.7-highspeed
API_TIMEOUT_MS=3000000
DISABLE_TELEMETRY=1
CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1
```
---
## Option 3: Other Proxy Tools
The community has built several proxy tools specifically for Claude Code:
| Tool | Description | Link |
|------|-------------|------|
| **a2o** | Anthropic → OpenAI single binary, zero dependencies | [Twitter](https://x.com/mantou543/status/2018846154855940200) |
| **Empero Proxy** | Full Anthropic Messages API to OpenAI translation | [Twitter](https://x.com/EmperoAI/status/2036840854065762551) |
| **Alma** | Client with built-in OpenAI → Anthropic proxy | [Twitter](https://x.com/yetone/status/2003508782127833332) |
| **Chutes** | Docker container supporting 60+ open-source models | [Twitter](https://x.com/chutes_ai/status/2027039742915662232) |
---
## Known Limitations
### 1. `drop_params: true` Is Essential
This project sends Anthropic-specific parameters (e.g., `thinking`, `cache_control`) that don't exist in the OpenAI API. You must set `drop_params: true` in the LiteLLM config; otherwise requests will fail.
### 2. Extended Thinking Unavailable
Anthropic's Extended Thinking is a proprietary feature not supported by other models. It is automatically disabled when using third-party models.
### 3. Prompt Caching Unavailable
`cache_control` is an Anthropic-specific feature. Prompt caching won't work with third-party models (but won't cause errors — it's silently ignored by `drop_params`).
### 4. Tool Calling Compatibility
This project heavily uses tool calling (tool_use). LiteLLM automatically translates Anthropic's tool_use format to OpenAI's function_calling format. This works in most cases, but some complex tool calls may have compatibility issues. If you encounter problems, try using a more capable model (e.g., GPT-4o).
### 5. Telemetry and Non-Essential Requests
Configure these environment variables to avoid unnecessary network requests:
```
DISABLE_TELEMETRY=1
CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1
```
---
## FAQ
### Q: LiteLLM proxy returns `/v1/responses` not found?
Some OpenAI-compatible services only support `/v1/chat/completions`. Add this to your LiteLLM config:
```yaml
litellm_settings:
use_chat_completions_url_for_anthropic_messages: true
```
### Q: What's the difference between `ANTHROPIC_API_KEY` and `ANTHROPIC_AUTH_TOKEN`?
- `ANTHROPIC_API_KEY` → Sent via `x-api-key` header
- `ANTHROPIC_AUTH_TOKEN` → Sent via `Authorization: Bearer` header
LiteLLM proxy accepts Bearer Token format by default, so `ANTHROPIC_AUTH_TOKEN` is recommended.
### Q: Can I configure multiple models?
Yes. Define multiple `model_name` entries in `litellm_config.yaml`, then switch by changing the `ANTHROPIC_MODEL` value.
### Q: Local Ollama models don't work well?
This project's system prompts and tool calls require strong model capabilities. Use larger models (e.g., Llama 3 70B+, Qwen 72B+). Smaller models may fail to handle tool calling correctly.
+254
View File
@@ -0,0 +1,254 @@
# Using Third-Party Models (OpenAI / DeepSeek / Local Models)
This project talks to LLMs over the Anthropic protocol. With a protocol translation proxy you can use any model, including OpenAI, DeepSeek, and Ollama.
## How It Works
```
claude-code-haha ──Anthropic protocol──▶ LiteLLM Proxy ──OpenAI protocol──▶ Target model API
                                         (protocol translation)
```
This project emits Anthropic Messages API requests; the LiteLLM proxy automatically converts them to OpenAI Chat Completions API format and forwards them to the target model.
---
## Option 1: LiteLLM Proxy (Recommended)
[LiteLLM](https://github.com/BerriAI/litellm) is a unified proxy gateway supporting 100+ LLMs (41k+ GitHub Stars), with native support for receiving Anthropic-protocol requests.
### 1. Install LiteLLM
```bash
pip install 'litellm[proxy]'
```
### 2. Create the configuration file
Create `litellm_config.yaml`:
#### Using OpenAI models
```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
litellm_settings:
  drop_params: true  # Drop Anthropic-specific params (thinking, etc.)
```
#### Using DeepSeek models
```yaml
model_list:
  - model_name: deepseek-chat
    litellm_params:
      model: deepseek/deepseek-chat
      api_key: os.environ/DEEPSEEK_API_KEY
      api_base: https://api.deepseek.com
litellm_settings:
  drop_params: true
```
#### Using Ollama local models
```yaml
model_list:
  - model_name: llama3
    litellm_params:
      model: ollama/llama3
      api_base: http://localhost:11434
litellm_settings:
  drop_params: true
```
#### Using multiple models (switchable after startup)
```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  - model_name: deepseek-chat
    litellm_params:
      model: deepseek/deepseek-chat
      api_key: os.environ/DEEPSEEK_API_KEY
      api_base: https://api.deepseek.com
  - model_name: llama3
    litellm_params:
      model: ollama/llama3
      api_base: http://localhost:11434
litellm_settings:
  drop_params: true
```
### 3. Start the proxy
```bash
# Set the target model's API key
export OPENAI_API_KEY=sk-xxx
# or
export DEEPSEEK_API_KEY=sk-xxx
# Start the proxy
litellm --config litellm_config.yaml --port 4000
```
Once started, the proxy listens on `http://localhost:4000` and exposes an Anthropic-compatible `/v1/messages` endpoint.
### 4. Configure this project
Choose either of two configuration methods:
#### Method A: via the `.env` file
```env
ANTHROPIC_AUTH_TOKEN=sk-anything
ANTHROPIC_BASE_URL=http://localhost:4000
ANTHROPIC_MODEL=gpt-4o
ANTHROPIC_DEFAULT_SONNET_MODEL=gpt-4o
ANTHROPIC_DEFAULT_HAIKU_MODEL=gpt-4o
ANTHROPIC_DEFAULT_OPUS_MODEL=gpt-4o
API_TIMEOUT_MS=3000000
DISABLE_TELEMETRY=1
CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1
```
#### Method B: via `~/.claude/settings.json`
```json
{
  "env": {
    "ANTHROPIC_AUTH_TOKEN": "sk-anything",
    "ANTHROPIC_BASE_URL": "http://localhost:4000",
    "ANTHROPIC_MODEL": "gpt-4o",
    "ANTHROPIC_DEFAULT_SONNET_MODEL": "gpt-4o",
    "ANTHROPIC_DEFAULT_HAIKU_MODEL": "gpt-4o",
    "ANTHROPIC_DEFAULT_OPUS_MODEL": "gpt-4o",
    "API_TIMEOUT_MS": "3000000",
    "DISABLE_TELEMETRY": "1",
    "CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC": "1"
  }
}
```
> **Note**: When using the LiteLLM proxy, the `ANTHROPIC_AUTH_TOKEN` value can be any string (LiteLLM forwards with its own configured key), unless you have set a `master_key` on the LiteLLM side.
### 5. Start and verify
```bash
./bin/claude-haha
```
If everything is configured correctly, you should see the normal chat interface, with requests actually served by the target model you configured.
---
## Option 2: Direct Connection to Anthropic-Compatible Third-Party Services
Some third-party services are directly compatible with the Anthropic Messages API and need no extra proxy:
### OpenRouter
```env
ANTHROPIC_AUTH_TOKEN=sk-or-v1-xxx
ANTHROPIC_BASE_URL=https://openrouter.ai/api/v1
ANTHROPIC_MODEL=openai/gpt-4o
ANTHROPIC_DEFAULT_SONNET_MODEL=openai/gpt-4o
ANTHROPIC_DEFAULT_HAIKU_MODEL=openai/gpt-4o-mini
ANTHROPIC_DEFAULT_OPUS_MODEL=openai/gpt-4o
DISABLE_TELEMETRY=1
CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1
```
### MiniMax (pre-configured in .env.example)
```env
ANTHROPIC_AUTH_TOKEN=your_token_here
ANTHROPIC_BASE_URL=https://api.minimaxi.com/anthropic
ANTHROPIC_MODEL=MiniMax-M2.7-highspeed
ANTHROPIC_DEFAULT_SONNET_MODEL=MiniMax-M2.7-highspeed
ANTHROPIC_DEFAULT_HAIKU_MODEL=MiniMax-M2.7-highspeed
ANTHROPIC_DEFAULT_OPUS_MODEL=MiniMax-M2.7-highspeed
API_TIMEOUT_MS=3000000
DISABLE_TELEMETRY=1
CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1
```
---
## Option 3: Other Proxy Tools
The community has built several proxy tools specifically for Claude Code:
| Tool | Description | Link |
|------|------|------|
| **a2o** | Anthropic → OpenAI single binary, zero dependencies | [Twitter](https://x.com/mantou543/status/2018846154855940200) |
| **Empero Proxy** | Complete Anthropic Messages API → OpenAI proxy | [Twitter](https://x.com/EmperoAI/status/2036840854065762551) |
| **Alma** | Client with a built-in OpenAI → Anthropic translation proxy | [Twitter](https://x.com/yetone/status/2003508782127833332) |
| **Chutes** | Docker container supporting 60+ open-source models | [Twitter](https://x.com/chutes_ai/status/2027039742915662232) |
---
## Caveats and Known Limitations
### 1. `drop_params: true` matters
This project sends Anthropic-specific parameters (such as `thinking` and `cache_control`) that do not exist in the OpenAI API. The LiteLLM config must set `drop_params: true`; otherwise requests will fail.
### 2. Extended Thinking unavailable
Anthropic's Extended Thinking is a proprietary feature that other models do not support. It is automatically disabled when using third-party models.
### 3. Prompt Caching unavailable
`cache_control` is an Anthropic-specific feature. With third-party models, prompt caching does not take effect (it causes no errors; `drop_params` silently discards it).
### 4. Tool-calling compatibility
This project relies heavily on tool calling (tool_use). LiteLLM automatically converts the Anthropic tool_use format to the OpenAI function_calling format. This works in most cases, but some complex tool calls may hit compatibility issues; if you run into problems, try a more capable model (e.g., GPT-4o).
### 5. Telemetry and non-essential network requests
Configure the following environment variables to avoid unnecessary network requests:
```
DISABLE_TELEMETRY=1
CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1
```
---
## FAQ
### Q: The LiteLLM proxy reports `/v1/responses` not found?
Some OpenAI-compatible services only support `/v1/chat/completions`. Add this to the LiteLLM config:
```yaml
litellm_settings:
  use_chat_completions_url_for_anthropic_messages: true
```
### Q: What is the difference between `ANTHROPIC_API_KEY` and `ANTHROPIC_AUTH_TOKEN`?
- `ANTHROPIC_API_KEY` → sent via the `x-api-key` header
- `ANTHROPIC_AUTH_TOKEN` → sent via the `Authorization: Bearer` header
The LiteLLM proxy accepts the Bearer token format by default, so `ANTHROPIC_AUTH_TOKEN` is recommended.
### Q: Can I configure multiple models at once?
Yes. Define multiple `model_name` entries in `litellm_config.yaml`, then switch by changing the `ANTHROPIC_MODEL` value.
### Q: What if a local Ollama model performs poorly?
This project's system prompts and tool calls demand strong model capability. Use larger models (e.g., Llama 3 70B+, Qwen 72B+); small models may fail to handle tool calling correctly.
+1 -4
View File
@@ -1,5 +1,4 @@
 import { c as _c } from "react/compiler-runtime";
-import { feature } from 'bun:bundle';
 import figures from 'figures';
 import React, { useEffect, useRef, useState } from 'react';
 import { useTerminalSize } from '../hooks/useTerminalSize.js';
@@ -165,7 +164,6 @@ function spriteColWidth(nameWidth: number): number {
 // Narrow terminals: 0 — REPL.tsx stacks the one-liner on its own row
 // (above input in fullscreen, below in scrollback), so no reservation.
 export function companionReservedColumns(terminalColumns: number, speaking: boolean): number {
-  if (!feature('BUDDY')) return 0;
   const companion = getCompanion();
   if (!companion || getGlobalConfig().companionMuted) return 0;
   if (terminalColumns < MIN_COLS_FOR_FULL_SPRITE) return 0;
@@ -212,7 +210,6 @@ export function CompanionSprite(): React.ReactNode {
     return () => clearTimeout(timer);
     // eslint-disable-next-line react-hooks/exhaustive-deps -- tick intentionally captured at reaction-change, not tracked
   }, [reaction, setAppState]);
-  if (!feature('BUDDY')) return null;
   const companion = getCompanion();
   if (!companion || getGlobalConfig().companionMuted) return null;
   const color = RARITY_COLORS[companion.rarity];
@@ -337,7 +334,7 @@ export function CompanionFloatingBubble() {
     t3 = $[4];
   }
   useEffect(t2, t3);
-  if (!feature("BUDDY") || !reaction) {
+  if (!reaction) {
     return null;
   }
   const companion = getCompanion();
+67
View File
@@ -0,0 +1,67 @@
import type { Message } from '../types/message.js'
import { getCompanion } from './companion.js'
import { getGlobalConfig } from '../utils/config.js'
// Simple companion observer: picks a reaction based on the last assistant message.
// This is a lightweight placeholder that generates fun reactions without an LLM call.
const DEBUGGING_QUIPS = [
'Found it!',
'Interesting...',
'Have you tried rubber duck debugging?',
'Stack trace time!',
'I see what happened.',
]
const GENERAL_QUIPS = [
'Looking good!',
'Keep it up!',
'Nice work!',
'I believe in you!',
'You got this!',
]
const CODE_QUIPS = [
'Fancy!',
'Clean code!',
'Elegant solution!',
'Ship it!',
]
function pickQuip(messages: Message[]): string | undefined {
const lastAssistant = [...messages].reverse().find(m => m.role === 'assistant')
if (!lastAssistant) return undefined
const content = Array.isArray(lastAssistant.content)
? lastAssistant.content.map(c => (typeof c === 'string' ? c : c.type === 'text' ? c.text : '')).join('')
: typeof lastAssistant.content === 'string'
? lastAssistant.content
: ''
if (!content) return undefined
// Only react occasionally (1 in 5 turns)
if (Math.random() > 0.2) return undefined
const lower = content.toLowerCase()
if (lower.includes('error') || lower.includes('bug') || lower.includes('fix') || lower.includes('debug')) {
return DEBUGGING_QUIPS[Math.floor(Math.random() * DEBUGGING_QUIPS.length)]
}
if (lower.includes('function') || lower.includes('class') || lower.includes('const') || lower.includes('```')) {
return CODE_QUIPS[Math.floor(Math.random() * CODE_QUIPS.length)]
}
return GENERAL_QUIPS[Math.floor(Math.random() * GENERAL_QUIPS.length)]
}
export async function fireCompanionObserver(
messages: Message[],
onReaction: (reaction: string) => void,
): Promise<void> {
const companion = getCompanion()
if (!companion || getGlobalConfig().companionMuted) return
const quip = pickQuip(messages)
if (quip) {
onReaction(quip)
}
}
-2
View File
@@ -1,4 +1,3 @@
-import { feature } from 'bun:bundle'
 import type { Message } from '../types/message.js'
 import type { Attachment } from '../utils/attachments.js'
 import { getGlobalConfig } from '../utils/config.js'
@@ -15,7 +14,6 @@ When the user addresses ${name} directly (by name), its bubble will answer. Your
 export function getCompanionIntroAttachment(
   messages: Message[] | undefined,
 ): Attachment[] {
-  if (!feature('BUDDY')) return []
   const companion = getCompanion()
   if (!companion || getGlobalConfig().companionMuted) return []
-5
View File
@@ -1,5 +1,4 @@
 import { c as _c } from "react/compiler-runtime";
-import { feature } from 'bun:bundle';
 import React, { useEffect } from 'react';
 import { useNotifications } from '../context/notifications.js';
 import { Text } from '../ink.js';
@@ -50,9 +49,6 @@ export function useBuddyNotification() {
   let t1;
   if ($[0] !== addNotification || $[1] !== removeNotification) {
     t0 = () => {
-      if (!feature("BUDDY")) {
-        return;
-      }
       const config = getGlobalConfig();
       if (config.companion || !isBuddyTeaserWindow()) {
         return;
@@ -80,7 +76,6 @@ export function findBuddyTriggerPositions(text: string): Array<{
   start: number;
   end: number;
 }> {
-  if (!feature('BUDDY')) return [];
   const triggers: Array<{
     start: number;
     end: number;
+4 -6
View File
@@ -115,11 +115,9 @@ const forkCmd = feature('FORK_SUBAGENT')
     require('./commands/fork/index.js') as typeof import('./commands/fork/index.js')
   ).default
   : null
-const buddy = feature('BUDDY')
-  ? (
-      require('./commands/buddy/index.js') as typeof import('./commands/buddy/index.js')
-    ).default
-  : null
+const buddy = (
+  require('./commands/buddy/index.js') as typeof import('./commands/buddy/index.js')
+).default
 /* eslint-enable @typescript-eslint/no-require-imports */
 import thinkback from './commands/thinkback/index.js'
 import thinkbackPlay from './commands/thinkback-play/index.js'
@@ -319,7 +317,7 @@ const COMMANDS = memoize((): Command[] => [
   vim,
   ...(webCmd ? [webCmd] : []),
   ...(forkCmd ? [forkCmd] : []),
-  ...(buddy ? [buddy] : []),
+  buddy,
   ...(proactive ? [proactive] : []),
   ...(briefCommand ? [briefCommand] : []),
   ...(assistantCommand ? [assistantCommand] : []),
+198
View File
@@ -0,0 +1,198 @@
import * as React from 'react'
import { Box, Text } from '../../ink.js'
import type { LocalJSXCommandCall } from '../../types/command.js'
import {
getCompanion,
roll,
companionUserId,
} from '../../buddy/companion.js'
import { renderSprite } from '../../buddy/sprites.js'
import {
RARITY_COLORS,
RARITY_STARS,
STAT_NAMES,
type StoredCompanion,
} from '../../buddy/types.js'
import { saveGlobalConfig } from '../../utils/config.js'
function CompanionCard({
onDone,
args,
setAppState,
}: {
onDone: (result?: string, options?: { display?: string }) => void
args: string
setAppState: (updater: (prev: any) => any) => void
}) {
const trimmed = args.trim().toLowerCase()
const companion = getCompanion()
// Handle keyboard input to dismiss
const handleKeyDown = (e: any) => {
if (e.key === 'q' || e.key === 'Enter') {
e.preventDefault()
onDone()
}
}
// Handle subcommands
React.useEffect(() => {
if (trimmed === 'mute') {
saveGlobalConfig(c => ({ ...c, companionMuted: true }))
onDone(`${companion?.name ?? 'Companion'} is now muted.`, {
display: 'system',
})
return
}
if (trimmed === 'unmute') {
saveGlobalConfig(c => ({ ...c, companionMuted: false }))
onDone(`${companion?.name ?? 'Companion'} says hello!`, {
display: 'system',
})
return
}
if (trimmed === 'pet') {
if (!companion) {
onDone('You need to hatch a companion first! Use /buddy hatch', {
display: 'system',
})
return
}
setAppState((prev: any) => ({ ...prev, companionPetAt: Date.now() }))
onDone(`You pet ${companion.name}! ♥`, { display: 'system' })
return
}
if (trimmed === 'hatch') {
if (companion) {
onDone(
`You already have ${companion.name}! Use /buddy info to see them.`,
{ display: 'system' },
)
return
}
// Hatch a new companion with a generated name
const { bones } = roll(companionUserId())
const adjectives = [
'Bright', 'Cozy', 'Swift', 'Calm', 'Wise', 'Bold',
'Fuzzy', 'Lucky', 'Snappy', 'Quirky',
]
const nouns = [
'Spark', 'Pixel', 'Ember', 'Glitch', 'Byte',
'Flux', 'Drift', 'Blip', 'Quip', 'Zap',
]
const adj = adjectives[Math.floor(Math.random() * adjectives.length)]!
const noun = nouns[Math.floor(Math.random() * nouns.length)]!
const name = `${adj} ${noun}`
const soul: StoredCompanion = {
name,
personality: `A ${bones.rarity} ${bones.species} who loves debugging and hanging out.`,
hatchedAt: Date.now(),
}
saveGlobalConfig(c => ({ ...c, companion: soul }))
onDone(
`✨ You hatched ${name} the ${bones.rarity} ${bones.species}! Say hello!`,
{ display: 'system' },
)
return
}
if (trimmed === 'release') {
if (!companion) {
onDone('No companion to release.', { display: 'system' })
return
}
const name = companion.name
saveGlobalConfig(c => {
const next = { ...c }
delete next.companion
return next
})
onDone(`Goodbye, ${name}! You'll be missed.`, { display: 'system' })
return
}
}, [])
// Render companion info
if (!companion) {
const { bones } = roll(companionUserId())
const preview = renderSprite(bones, 0)
const color = RARITY_COLORS[bones.rarity]
return (
<Box flexDirection="column" paddingX={1} paddingY={1} autoFocus={true} onKeyDown={handleKeyDown} tabIndex={0}>
<Text bold>You haven't hatched a companion yet!</Text>
<Text dimColor>Here's a preview of yours:</Text>
<Box flexDirection="column" marginY={1}>
{preview.map((line, i) => (
<Text key={i} color={color}>
{line}
</Text>
))}
<Text italic dimColor>
A {bones.rarity} {bones.species} {RARITY_STARS[bones.rarity]}
</Text>
</Box>
<Text>Run <Text bold>/buddy hatch</Text> to bring them to life!</Text>
<Text dimColor>Or type <Text bold>q</Text> to dismiss.</Text>
</Box>
)
}
const sprite = renderSprite(companion, 0)
const color = RARITY_COLORS[companion.rarity]
return (
<Box flexDirection="column" paddingX={1} paddingY={1} autoFocus={true} onKeyDown={handleKeyDown} tabIndex={0}>
<Box flexDirection="row" gap={2}>
<Box flexDirection="column">
{sprite.map((line, i) => (
<Text key={i} color={color}>
{line}
</Text>
))}
<Text italic bold color={color}>
{companion.name}
</Text>
</Box>
<Box flexDirection="column" justifyContent="center">
<Text>
<Text bold>Species:</Text>{' '}
<Text color={color}>{companion.species}</Text>
</Text>
<Text>
<Text bold>Rarity:</Text>{' '}
<Text color={color}>
{companion.rarity} {RARITY_STARS[companion.rarity]}
</Text>
</Text>
{companion.shiny && <Text color="warning"> Shiny!</Text>}
<Text dimColor>{'─'.repeat(20)}</Text>
<Text bold>Stats:</Text>
{STAT_NAMES.map(stat => (
<Text key={stat}>
<Text dimColor>{stat}:</Text>{' '}
<Text color={color}>{companion.stats[stat]}</Text>
</Text>
))}
</Box>
</Box>
<Text dimColor>{'─'.repeat(40)}</Text>
<Text dimColor>
/buddy pet · /buddy mute · /buddy unmute · /buddy release
</Text>
<Text dimColor>Press q or Enter to dismiss</Text>
</Box>
)
}
export const call: LocalJSXCommandCall = async (onDone, context, args = '') => {
return (
<CompanionCard
onDone={onDone}
args={args}
setAppState={context.setAppState}
/>
)
}
+11
View File
@@ -0,0 +1,11 @@
import type { Command } from '../../commands.js'
const buddyCommand = {
type: 'local-jsx',
name: 'buddy',
description: 'Meet your companion',
argumentHint: '[hatch|pet|mute|unmute|info]',
load: () => import('./buddy.js'),
} satisfies Command
export default buddyCommand
+5 -10
View File
@@ -309,10 +309,7 @@ function PromptInput({
   const {
     companion: _companion,
     companionMuted
-  } = feature('BUDDY') ? getGlobalConfig() : {
-    companion: undefined,
-    companionMuted: undefined
-  };
+  } = getGlobalConfig();
   const companionFooterVisible = !!_companion && !companionMuted;
   // Brief mode: BriefSpinner/BriefIdleStatus own the 2-row footprint above
   // the input. Dropping marginTop here lets the spinner sit flush against
@@ -1786,10 +1783,8 @@ function PromptInput({
     }
     switch (footerItemSelected) {
       case 'companion':
-        if (feature('BUDDY')) {
-          selectFooterItem(null);
-          void onSubmit('/buddy');
-        }
+        selectFooterItem(null);
+        void onSubmit('/buddy');
         break;
       case 'tasks':
         if (isTeammateMode) {
@@ -1981,9 +1976,9 @@ function PromptInput({
     });
   }, [effortNotificationText, addNotification, removeNotification]);
   useBuddyNotification();
-  const companionSpeaking = feature('BUDDY') ?
+  const companionSpeaking =
     // biome-ignore lint/correctness/useHookAtTopLevel: feature() is a compile-time constant
-    useAppState(s => s.companionReaction !== undefined) : false;
+    useAppState(s => s.companionReaction !== undefined);
   const {
     columns,
     rows
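All three hunks remove the same guard. As the biome-ignore comment notes, `feature()` is a compile-time constant; with `BUDDY` baked in as false, every guarded branch was dead code, so the footer shortcut and `companionSpeaking` could never activate. A reduced sketch of that flag pattern, with `FEATURES` and this local `feature()` as illustrative stand-ins for the `bun:bundle` mechanism:

```ts
// Sketch of a compile-time feature flag. A bundler that inlines
// FEATURES at build time folds feature('BUDDY') to `false` and strips
// the dead branch entirely.
const FEATURES = { BUDDY: false } as const

function feature(name: keyof typeof FEATURES): boolean {
  return FEATURES[name]
}

function companionConfig(getGlobalConfig: () => { companion?: string }) {
  // Before the patch: with BUDDY inlined as false, this always took the
  // empty branch, so the companion never reached the footer.
  return feature('BUDDY') ? getGlobalConfig() : { companion: undefined }
}
```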
+7 -10
@@ -274,6 +274,7 @@ const WebBrowserPanelModule = feature('WEB_BROWSER_TOOL') ? require('../tools/We
 import { IssueFlagBanner } from '../components/PromptInput/IssueFlagBanner.js';
 import { useIssueFlagBanner } from '../hooks/useIssueFlagBanner.js';
 import { CompanionSprite, CompanionFloatingBubble, MIN_COLS_FOR_FULL_SPRITE } from '../buddy/CompanionSprite.js';
+import { fireCompanionObserver } from '../buddy/observer.js';
 import { DevBar } from '../components/DevBar.js';
 // Session manager removed - using AppState now
 import type { RemoteSessionConfig } from '../remote/RemoteSessionManager.js';
@@ -1299,12 +1300,10 @@ export function REPL({
     // Dismiss the companion bubble on scroll — it's absolute-positioned
     // at bottom-right and covers transcript content. Scrolling = user is
     // trying to read something under it.
-    if (feature('BUDDY')) {
-      setAppState(prev => prev.companionReaction === undefined ? prev : {
+    setAppState(prev => prev.companionReaction === undefined ? prev : {
        ...prev,
        companionReaction: undefined
      });
-    }
   }
 }, [onRepin, onScrollAway, maybeLoadOlder, setAppState]);
 // Deferred SessionStart hook messages — REPL renders immediately and
@@ -2801,12 +2800,10 @@ export function REPL({
     })) {
       onQueryEvent(event);
     }
-    if (feature('BUDDY')) {
-      void fireCompanionObserver(messagesRef.current, reaction => setAppState(prev => prev.companionReaction === reaction ? prev : {
+    void fireCompanionObserver(messagesRef.current, reaction => setAppState(prev => prev.companionReaction === reaction ? prev : {
        ...prev,
        companionReaction: reaction
      }));
-    }
     queryCheckpoint('query_end');

     // Capture ant-only API metrics before resetLoadingState clears the ref.
@@ -4562,7 +4559,7 @@ export function REPL({
     {feature('MESSAGE_ACTIONS') && isFullscreenEnvEnabled() && !disableMessageActions ? <MessageActionsKeybindings handlers={messageActionHandlers} isActive={cursor !== null} /> : null}
     <CancelRequestHandler {...cancelRequestProps} />
     <MCPConnectionManager key={remountKey} dynamicMcpConfig={dynamicMcpConfig} isStrictMcpConfig={strictMcpConfig}>
-      <FullscreenLayout scrollRef={scrollRef} overlay={toolPermissionOverlay} bottomFloat={feature('BUDDY') && companionVisible && !companionNarrow ? <CompanionFloatingBubble /> : undefined} modal={centeredModal} modalScrollRef={modalScrollRef} dividerYRef={dividerYRef} hidePill={!!viewedAgentTask} hideSticky={!!viewedTeammateTask} newMessageCount={unseenDivider?.count ?? 0} onPillClick={() => {
+      <FullscreenLayout scrollRef={scrollRef} overlay={toolPermissionOverlay} bottomFloat={companionVisible && !companionNarrow ? <CompanionFloatingBubble /> : undefined} modal={centeredModal} modalScrollRef={modalScrollRef} dividerYRef={dividerYRef} hidePill={!!viewedAgentTask} hideSticky={!!viewedTeammateTask} newMessageCount={unseenDivider?.count ?? 0} onPillClick={() => {
         setCursor(null);
         jumpToNew(scrollRef.current);
       }} scrollable={<>
@@ -4587,8 +4584,8 @@ export function REPL({
         {showSpinner && <SpinnerWithVerb mode={streamMode} spinnerTip={spinnerTip} responseLengthRef={responseLengthRef} apiMetricsRef={apiMetricsRef} overrideMessage={spinnerMessage} spinnerSuffix={stopHookSpinnerSuffix} verbose={verbose} loadingStartTimeRef={loadingStartTimeRef} totalPausedMsRef={totalPausedMsRef} pauseStartTimeRef={pauseStartTimeRef} overrideColor={spinnerColor} overrideShimmerColor={spinnerShimmerColor} hasActiveTools={inProgressToolUseIDs.size > 0} leaderIsIdle={!isLoading} />}
         {!showSpinner && !isLoading && !userInputOnProcessing && !hasRunningTeammates && isBriefOnly && !viewedAgentTask && <BriefIdleStatus />}
         {isFullscreenEnvEnabled() && <PromptInputQueuedCommands />}
-      </>} bottom={<Box flexDirection={feature('BUDDY') && companionNarrow ? 'column' : 'row'} width="100%" alignItems={feature('BUDDY') && companionNarrow ? undefined : 'flex-end'}>
-        {feature('BUDDY') && companionNarrow && isFullscreenEnvEnabled() && companionVisible ? <CompanionSprite /> : null}
+      </>} bottom={<Box flexDirection={companionNarrow ? 'column' : 'row'} width="100%" alignItems={companionNarrow ? undefined : 'flex-end'}>
+        {companionNarrow && isFullscreenEnvEnabled() && companionVisible ? <CompanionSprite /> : null}
         <Box flexDirection="column" flexGrow={1}>
           {permissionStickyFooter}
           {/* Immediate local-jsx commands (/btw, /sandbox, /assistant,
@@ -4992,7 +4989,7 @@ export function REPL({
           }} />}
           {"external" === 'ant' && <DevBar />}
         </Box>
-        {feature('BUDDY') && !(companionNarrow && isFullscreenEnvEnabled()) && companionVisible ? <CompanionSprite /> : null}
+        {!(companionNarrow && isFullscreenEnvEnabled()) && companionVisible ? <CompanionSprite /> : null}
       </Box>} />
     </MCPConnectionManager>
   </KeybindingSetup>;
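Both REPL call sites keep the updater's bail-out shape: when the incoming reaction equals the current one, the updater returns `prev` itself, and React skips re-rendering because the state object is referentially unchanged. The pattern in isolation, with a reduced `AppState` type standing in for the repo's:

```ts
import { useCallback, useState } from 'react'

type AppState = { companionReaction?: string }

function useCompanionReaction() {
  const [state, setState] = useState<AppState>({})
  const setReaction = useCallback((reaction?: string) => {
    // Identity check first: if the value is already current, return the
    // same object so React bails out and subscribers don't re-render.
    setState(prev =>
      prev.companionReaction === reaction
        ? prev
        : { ...prev, companionReaction: reaction },
    )
  }, [])
  return [state, setReaction] as const
}
```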
+3 -7
@@ -861,13 +861,9 @@ export async function getAttachments(
         ),
       ),
     ),
-    ...(feature('BUDDY')
-      ? [
-          maybe('companion_intro', () =>
-            Promise.resolve(getCompanionIntroAttachment(messages)),
-          ),
-        ]
-      : []),
+    maybe('companion_intro', () =>
+      Promise.resolve(getCompanionIntroAttachment(messages)),
+    ),
     maybe('changed_files', () => getChangedFiles(context)),
     maybe('nested_memory', () => getNestedMemoryAttachments(context)),
     // relevant_memories moved to async prefetch (startRelevantMemoryPrefetch)
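With the conditional spread-into-array gone, `companion_intro` is collected like every other optional attachment. The `maybe()` helper's definition isn't shown in this diff; a hypothetical shape inferred from the call sites, where it labels a producer and fails soft so one empty or throwing source doesn't sink the batch:

```ts
// Hypothetical reconstruction of maybe(), inferred from usage only —
// not the repo's actual code.
type Attachment = { kind: string; content: string }

async function maybe(
  name: string,
  produce: () => Promise<Attachment | Attachment[] | undefined>,
): Promise<Attachment[]> {
  try {
    const result = await produce()
    if (!result) return []
    return Array.isArray(result) ? result : [result]
  } catch {
    return [] // optional sources fail soft
  }
}
```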