novelty-seeking/CLAUDE.md
gbanyan 43c025e060 feat: Add experiments framework and novelty-driven agent loop
- Add complete experiments directory with pilot study infrastructure
  - 5 experimental conditions (direct, expert-only, attribute-only, full-pipeline, random-perspective)
  - Human assessment tool with React frontend and FastAPI backend
  - AUT flexibility analysis with jump signal detection
  - Result visualization and metrics computation

- Add novelty-driven agent loop module (experiments/novelty_loop/)
  - NoveltyDrivenTaskAgent with expert perspective perturbation
  - Three termination strategies: breakthrough, exhaust, coverage
  - Interactive CLI demo with colored output
  - Embedding-based novelty scoring

- Add DDC knowledge domain classification data (en/zh)
- Add CLAUDE.md project documentation
- Update research report with experiment findings

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 10:16:21 +08:00

# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
This is a creative ideation system that uses LLMs to break "semantic gravity" (the tendency of LLMs to generate ideas clustered around high-probability training distributions). The system analyzes objects through multiple attribute dimensions and transforms them using expert perspectives to generate novel ideas.
## Development Commands
### Starting the Application
```bash
./start.sh # Starts both backend (port 8001) and frontend (port 5173)
./stop.sh # Stops all services
```
### Backend (FastAPI + Python)
```bash
cd backend
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
uvicorn app.main:app --host 0.0.0.0 --port 8001 --reload
```
### Frontend (React + Vite + TypeScript)
```bash
cd frontend
npm install
npm run dev # Development server
npm run build # TypeScript check + production build
npm run lint # ESLint
```
## Architecture
### Multi-Agent Pipeline
The system uses three interconnected agents; each streams its progress to the frontend in real time via Server-Sent Events (SSE):
```
Query → Attribute Agent → Expert Transformation Agent → Deduplication Agent
Patent Search (optional)
```
**1. Attribute Agent** (`/api/analyze`)
- Analyzes a query (e.g., "bicycle") through configurable category dimensions
- Step 0: Category analysis (5 modes: FIXED_ONLY, FIXED_PLUS_CUSTOM, FIXED_PLUS_DYNAMIC, CUSTOM_ONLY, DYNAMIC_AUTO)
- Step 1: Generate attributes per category
- Step 2: Build DAG relationships between attributes across categories
- Output: `AttributeDAG` with nodes and edges
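
The DAG output has roughly the following shape; this is a minimal sketch with illustrative field names, not the project's actual schema (which lives in the backend models and `types/index.ts`):
```python
# Hypothetical shape of the Attribute Agent output; field names are
# illustrative placeholders, not the real backend schema.
from pydantic import BaseModel


class AttributeNode(BaseModel):
    id: str          # e.g. "material:carbon-fiber"
    category: str    # the category dimension this attribute belongs to
    label: str       # human-readable attribute text


class AttributeEdge(BaseModel):
    source: str      # id of the upstream attribute node
    target: str      # id of the related attribute in another category
    relation: str    # short description of the cross-category link


class AttributeDAG(BaseModel):
    nodes: list[AttributeNode]
    edges: list[AttributeEdge]
```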
**2. Expert Transformation Agent** (`/api/expert-transformation/category`)
- Takes attributes and transforms them through diverse expert perspectives
- Step 0: Generate expert team (sources: `llm`, `curated`, `dbpedia`, `wikidata`)
- Step 1: Each expert generates keywords for each attribute
- Step 2: Generate descriptions for each keyword
- Formula: `total_keywords = attributes × expert_count × keywords_per_expert`
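
For example, with purely illustrative numbers (not defaults from the codebase), 6 attributes, 5 experts, and 3 keywords per expert yield 90 keyword candidates before deduplication:
```python
# Worked example of the keyword-count formula; numbers are illustrative.
attributes = 6
expert_count = 5
keywords_per_expert = 3

total_keywords = attributes * expert_count * keywords_per_expert
print(total_keywords)  # 90
```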
**3. Deduplication Agent** (`/api/deduplication/deduplicate`)
- Consolidates similar ideas using embedding similarity or LLM judgment
- Groups duplicates while preserving representative descriptions
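
A minimal sketch of the embedding-based path, assuming a generic `embed()` callable and a cosine-similarity threshold (both placeholders, not the embedding service's actual API):
```python
# Greedy grouping by cosine similarity; embed() and the 0.85 threshold are
# placeholders for whatever the embedding service actually exposes.
import numpy as np


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def deduplicate(ideas: list[str], embed, threshold: float = 0.85) -> list[list[str]]:
    groups: list[tuple[np.ndarray, list[str]]] = []
    for idea in ideas:
        vec = embed(idea)
        for rep_vec, members in groups:
            if cosine(vec, rep_vec) >= threshold:
                members.append(idea)      # fold into an existing group
                break
        else:
            groups.append((vec, [idea]))  # idea becomes a new representative
    return [members for _, members in groups]
```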
### Backend Structure (`backend/app/`)
- `routers/` - FastAPI endpoints with SSE streaming
- `services/` - LLM service (Ollama/OpenAI), embedding service, expert source service
- `prompts/` - Bilingual prompt templates (zh/en) for each agent step
- `data/` - Curated occupation lists for expert sourcing (210 professions)
### Frontend Structure (`frontend/src/`)
- `hooks/` - React hooks matching backend agents (`useAttribute`, `useExpertTransformation`, `useDeduplication`)
- `components/` - UI panels for each stage + DAG visualization (D3.js, @xyflow/react)
- `services/api.ts` - SSE stream parsing and API calls
- `types/index.ts` - TypeScript interfaces mirroring backend schemas
### Key Patterns
**SSE Event Flow**: All agent operations stream progress via SSE events:
```typescript
// Frontend callback pattern
onStep0Start → onStep0Complete → onStep1Start → onStep1Complete → onDone
```
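On the backend side, each router streams these steps as SSE events. The following FastAPI sketch mirrors the callback names above, but the event names, payloads, and helper function are assumptions rather than the project's exact wire format:
```python
# Illustrative SSE streaming endpoint; event names echo the frontend
# callbacks above but are not taken from the actual routers.
import json
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()


async def run_attribute_agent(query: str):
    yield {"event": "step0_start", "data": {"query": query}}
    # ... category analysis ...
    yield {"event": "step0_complete", "data": {"categories": ["form", "material"]}}
    yield {"event": "step1_start", "data": {}}
    # ... attribute generation ...
    yield {"event": "done", "data": {}}


@app.get("/api/analyze")
async def analyze(query: str):
    async def sse():
        async for msg in run_attribute_agent(query):
            yield f"event: {msg['event']}\ndata: {json.dumps(msg['data'])}\n\n"
    return StreamingResponse(sse(), media_type="text/event-stream")
```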
**Bilingual Support**: All prompts and UI support `PromptLanguage = 'zh' | 'en'`. Language flows through the entire pipeline from request to response messages.
**Expert Source Fallback**: If external sources (DBpedia, Wikidata) fail, the system automatically falls back to LLM-based expert generation.
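
A sketch of that fallback order, with the fetcher callables injected as hypothetical parameters rather than the expert source service's real API:
```python
# Hypothetical fallback chain; the fetcher signatures are placeholders.
from typing import Callable

ExpertFetcher = Callable[[str, int], list[str]]


def get_experts(
    topic: str,
    fetch_external: ExpertFetcher,   # e.g. a DBpedia or Wikidata lookup
    fetch_llm: ExpertFetcher,        # LLM-based generation, always available
    count: int = 5,
) -> list[str]:
    """Try the external source first; fall back to the LLM on failure or empty result."""
    try:
        experts = fetch_external(topic, count)
        if experts:
            return experts
    except Exception:
        pass  # external source unavailable → fall through
    return fetch_llm(topic, count)
```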
### Configuration
Backend requires `.env` file:
```
OLLAMA_BASE_URL=http://localhost:11435 # Default Ollama endpoint
DEFAULT_MODEL=qwen3:8b # Default LLM model
OPENAI_API_KEY= # Optional: for OpenAI-compatible APIs
LENS_API_TOKEN= # Optional: for patent search
```
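These variables can be read however the backend prefers; a minimal loader sketch using `os.getenv` with the same defaults shown above (the settings class itself is illustrative, not the project's actual config module):
```python
# Illustrative settings loader; the backend's real config module may differ.
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class Settings:
    ollama_base_url: str = os.getenv("OLLAMA_BASE_URL", "http://localhost:11435")
    default_model: str = os.getenv("DEFAULT_MODEL", "qwen3:8b")
    openai_api_key: str = os.getenv("OPENAI_API_KEY", "")
    lens_api_token: str = os.getenv("LENS_API_TOKEN", "")


settings = Settings()
```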
### Dual-Path Mode
The system supports analyzing two queries in parallel (`PathA` and `PathB`) with attribute crossover functionality for comparing and combining ideas across different objects.