diff --git a/CLAUDE.md b/CLAUDE.md
new file mode 100644
index 0000000..d9c6249
--- /dev/null
+++ b/CLAUDE.md
@@ -0,0 +1,101 @@
+# CLAUDE.md
+
+This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
+
+## Project Overview
+
+This is a creative ideation system that uses LLMs to break "semantic gravity" (the tendency of LLMs to generate ideas clustered in high-probability regions of their training distribution). The system analyzes objects through multiple attribute dimensions and transforms them using expert perspectives to generate novel ideas.
+
+## Development Commands
+
+### Starting the Application
+```bash
+./start.sh # Starts both backend (port 8001) and frontend (port 5173)
+./stop.sh # Stops all services
+```
+
+### Backend (FastAPI + Python)
+```bash
+cd backend
+python3 -m venv venv
+source venv/bin/activate
+pip install -r requirements.txt
+uvicorn app.main:app --host 0.0.0.0 --port 8001 --reload
+```
+
+### Frontend (React + Vite + TypeScript)
+```bash
+cd frontend
+npm install
+npm run dev # Development server
+npm run build # TypeScript check + production build
+npm run lint # ESLint
+```
+
+## Architecture
+
+### Multi-Agent Pipeline
+
+The system chains three interconnected agents, each streaming its progress to the client in real time via Server-Sent Events (SSE):
+
+```
+Query → Attribute Agent → Expert Transformation Agent → Deduplication Agent
+ ↓
+ Patent Search (optional)
+```
+
+**1. Attribute Agent** (`/api/analyze`)
+- Analyzes a query (e.g., "bicycle") through configurable category dimensions
+- Step 0: Category analysis (5 modes: FIXED_ONLY, FIXED_PLUS_CUSTOM, FIXED_PLUS_DYNAMIC, CUSTOM_ONLY, DYNAMIC_AUTO)
+- Step 1: Generate attributes per category
+- Step 2: Build DAG relationships between attributes across categories
+- Output: `AttributeDAG` with nodes and edges (illustrative shape sketched below)
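+
+A rough illustration of that output (field names are hypothetical, not the actual schema; see `frontend/src/types/index.ts` and the backend schemas for the real definitions):
+
+```python
+# Hypothetical minimal AttributeDAG shape (illustrative field names only)
+attribute_dag = {
+    "nodes": [
+        {"id": "a1", "category": "material", "label": "carbon frame"},
+        {"id": "a2", "category": "function", "label": "shock absorption"},
+    ],
+    "edges": [
+        {"source": "a1", "target": "a2", "relation": "enables"},
+    ],
+}
+```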
+
+**2. Expert Transformation Agent** (`/api/expert-transformation/category`)
+- Takes attributes and transforms them through diverse expert perspectives
+- Step 0: Generate expert team (sources: `llm`, `curated`, `dbpedia`, `wikidata`)
+- Step 1: Each expert generates keywords for each attribute
+- Step 2: Generate descriptions for each keyword
+- Formula: `total_keywords = attributes × expert_count × keywords_per_expert`
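+- Example: with 5 attributes, 4 experts, and 3 keywords per expert, one category yields 5 × 4 × 3 = 60 keywords (illustrative numbers)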
+
+**3. Deduplication Agent** (`/api/deduplication/deduplicate`)
+- Consolidates similar ideas using embedding similarity or LLM judgment
+- Groups duplicates while preserving representative descriptions (see the sketch below)
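+
+A minimal sketch of the embedding-similarity path (greedy grouping; the threshold and names are illustrative, not the actual service API):
+
+```python
+import numpy as np
+
+def group_duplicates(embeddings: np.ndarray, threshold: float = 0.9) -> list[list[int]]:
+    """Greedily group ideas whose cosine similarity to a group's representative exceeds the threshold."""
+    groups: list[list[int]] = []
+    for i, vec in enumerate(embeddings):
+        for group in groups:
+            rep = embeddings[group[0]]  # first member serves as the representative
+            sim = float(vec @ rep) / (np.linalg.norm(vec) * np.linalg.norm(rep))
+            if sim >= threshold:
+                group.append(i)
+                break
+        else:
+            groups.append([i])  # no close group found: start a new one
+    return groups
+```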
+
+### Backend Structure (`backend/app/`)
+- `routers/` - FastAPI endpoints with SSE streaming
+- `services/` - LLM service (Ollama/OpenAI), embedding service, expert source service
+- `prompts/` - Bilingual prompt templates (zh/en) for each agent step
+- `data/` - Curated occupation lists for expert sourcing (210 professions)
+
+### Frontend Structure (`frontend/src/`)
+- `hooks/` - React hooks matching backend agents (`useAttribute`, `useExpertTransformation`, `useDeduplication`)
+- `components/` - UI panels for each stage + DAG visualization (D3.js, @xyflow/react)
+- `services/api.ts` - SSE stream parsing and API calls
+- `types/index.ts` - TypeScript interfaces mirroring backend schemas
+
+### Key Patterns
+
+**SSE Event Flow**: All agent operations stream progress via SSE events:
+```typescript
+// Frontend callback pattern
+onStep0Start → onStep0Complete → onStep1Start → onStep1Complete → onDone
+```
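+
+On the wire, each step is one SSE frame. A hypothetical server-side counterpart (event names mirror the callbacks above; payloads are illustrative):
+
+```python
+# Hypothetical SSE generator for a FastAPI StreamingResponse
+async def event_stream():
+    yield "event: step0_start\ndata: {}\n\n"  # one frame: event name + JSON payload
+    yield 'event: step0_complete\ndata: {"categories": []}\n\n'
+    yield 'event: done\ndata: {"status": "ok"}\n\n'
+```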
+
+**Bilingual Support**: All prompts and UI support `PromptLanguage = 'zh' | 'en'`. Language flows through the entire pipeline from request to response messages.
+
+**Expert Source Fallback**: If external sources (DBpedia, Wikidata) fail, system automatically falls back to LLM-based expert generation.
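+
+A minimal sketch of that fallback, assuming hypothetical helper names (the real logic lives in the expert source service):
+
+```python
+async def get_experts(source: str, topic: str, count: int) -> list[str]:
+    """Try the requested external source first; fall back to LLM generation on failure."""
+    if source in ("dbpedia", "wikidata"):
+        try:
+            return await fetch_external_experts(source, topic, count)  # hypothetical helper
+        except Exception:
+            pass  # external source unavailable; fall through to the LLM
+    return await generate_experts_with_llm(topic, count)  # hypothetical helper
+```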
+
+### Configuration
+
+Backend requires `.env` file:
+```
+OLLAMA_BASE_URL=http://localhost:11435 # Default Ollama endpoint
+DEFAULT_MODEL=qwen3:8b # Default LLM model
+OPENAI_API_KEY= # Optional: for OpenAI-compatible APIs
+LENS_API_TOKEN= # Optional: for patent search
+```
+
+### Dual-Path Mode
+
+The system supports analyzing two queries in parallel (`PathA` and `PathB`) with attribute crossover functionality for comparing and combining ideas across different objects.
diff --git a/backend/app/data/ddc_domains_en.json b/backend/app/data/ddc_domains_en.json
new file mode 100644
index 0000000..f8441b5
--- /dev/null
+++ b/backend/app/data/ddc_domains_en.json
@@ -0,0 +1,120 @@
+{
+ "metadata": {
+ "source": "ddc",
+ "language": "en",
+ "created_at": "2026-01-20",
+ "total_count": 100,
+ "description": "Dewey Decimal Classification knowledge domains (10 main classes + 90 divisions)"
+ },
+ "domains": [
+ {"code": "000", "name": "Computer Science, Information & General Works", "level": "class", "parent": null},
+ {"code": "010", "name": "Bibliographies", "level": "division", "parent": "000"},
+ {"code": "020", "name": "Library & Information Sciences", "level": "division", "parent": "000"},
+ {"code": "030", "name": "Encyclopedias & Books of Facts", "level": "division", "parent": "000"},
+ {"code": "040", "name": "Unassigned", "level": "division", "parent": "000"},
+ {"code": "050", "name": "Magazines, Journals & Serials", "level": "division", "parent": "000"},
+ {"code": "060", "name": "Associations, Organizations & Museums", "level": "division", "parent": "000"},
+ {"code": "070", "name": "News Media, Journalism & Publishing", "level": "division", "parent": "000"},
+ {"code": "080", "name": "Quotations", "level": "division", "parent": "000"},
+ {"code": "090", "name": "Manuscripts & Rare Books", "level": "division", "parent": "000"},
+
+ {"code": "100", "name": "Philosophy & Psychology", "level": "class", "parent": null},
+ {"code": "110", "name": "Metaphysics", "level": "division", "parent": "100"},
+ {"code": "120", "name": "Epistemology", "level": "division", "parent": "100"},
+ {"code": "130", "name": "Parapsychology & Occultism", "level": "division", "parent": "100"},
+ {"code": "140", "name": "Philosophical Schools of Thought", "level": "division", "parent": "100"},
+ {"code": "150", "name": "Psychology", "level": "division", "parent": "100"},
+ {"code": "160", "name": "Logic", "level": "division", "parent": "100"},
+ {"code": "170", "name": "Ethics", "level": "division", "parent": "100"},
+ {"code": "180", "name": "Ancient, Medieval & Eastern Philosophy", "level": "division", "parent": "100"},
+ {"code": "190", "name": "Modern Western Philosophy", "level": "division", "parent": "100"},
+
+ {"code": "200", "name": "Religion", "level": "class", "parent": null},
+ {"code": "210", "name": "Philosophy & Theory of Religion", "level": "division", "parent": "200"},
+ {"code": "220", "name": "Bible", "level": "division", "parent": "200"},
+ {"code": "230", "name": "Christianity & Christian Theology", "level": "division", "parent": "200"},
+ {"code": "240", "name": "Christian Practice & Observance", "level": "division", "parent": "200"},
+ {"code": "250", "name": "Christian Orders & Local Churches", "level": "division", "parent": "200"},
+ {"code": "260", "name": "Christian Social & Ecclesiastical Theology", "level": "division", "parent": "200"},
+ {"code": "270", "name": "History of Christianity", "level": "division", "parent": "200"},
+ {"code": "280", "name": "Christian Denominations", "level": "division", "parent": "200"},
+ {"code": "290", "name": "Other Religions", "level": "division", "parent": "200"},
+
+ {"code": "300", "name": "Social Sciences", "level": "class", "parent": null},
+ {"code": "310", "name": "Statistics", "level": "division", "parent": "300"},
+ {"code": "320", "name": "Political Science", "level": "division", "parent": "300"},
+ {"code": "330", "name": "Economics", "level": "division", "parent": "300"},
+ {"code": "340", "name": "Law", "level": "division", "parent": "300"},
+ {"code": "350", "name": "Public Administration & Military Science", "level": "division", "parent": "300"},
+ {"code": "360", "name": "Social Problems & Services", "level": "division", "parent": "300"},
+ {"code": "370", "name": "Education", "level": "division", "parent": "300"},
+ {"code": "380", "name": "Commerce, Communications & Transportation", "level": "division", "parent": "300"},
+ {"code": "390", "name": "Customs, Etiquette & Folklore", "level": "division", "parent": "300"},
+
+ {"code": "400", "name": "Language", "level": "class", "parent": null},
+ {"code": "410", "name": "Linguistics", "level": "division", "parent": "400"},
+ {"code": "420", "name": "English & Old English Languages", "level": "division", "parent": "400"},
+ {"code": "430", "name": "German & Related Languages", "level": "division", "parent": "400"},
+ {"code": "440", "name": "French & Related Languages", "level": "division", "parent": "400"},
+ {"code": "450", "name": "Italian, Romanian & Related Languages", "level": "division", "parent": "400"},
+ {"code": "460", "name": "Spanish, Portuguese & Galician", "level": "division", "parent": "400"},
+ {"code": "470", "name": "Latin & Italic Languages", "level": "division", "parent": "400"},
+ {"code": "480", "name": "Classical & Modern Greek Languages", "level": "division", "parent": "400"},
+ {"code": "490", "name": "Other Languages", "level": "division", "parent": "400"},
+
+ {"code": "500", "name": "Science", "level": "class", "parent": null},
+ {"code": "510", "name": "Mathematics", "level": "division", "parent": "500"},
+ {"code": "520", "name": "Astronomy", "level": "division", "parent": "500"},
+ {"code": "530", "name": "Physics", "level": "division", "parent": "500"},
+ {"code": "540", "name": "Chemistry", "level": "division", "parent": "500"},
+ {"code": "550", "name": "Earth Sciences & Geology", "level": "division", "parent": "500"},
+ {"code": "560", "name": "Paleontology", "level": "division", "parent": "500"},
+ {"code": "570", "name": "Biology & Life Sciences", "level": "division", "parent": "500"},
+ {"code": "580", "name": "Botany", "level": "division", "parent": "500"},
+ {"code": "590", "name": "Zoology", "level": "division", "parent": "500"},
+
+ {"code": "600", "name": "Technology", "level": "class", "parent": null},
+ {"code": "610", "name": "Medicine & Health", "level": "division", "parent": "600"},
+ {"code": "620", "name": "Engineering", "level": "division", "parent": "600"},
+ {"code": "630", "name": "Agriculture", "level": "division", "parent": "600"},
+ {"code": "640", "name": "Home & Family Management", "level": "division", "parent": "600"},
+ {"code": "650", "name": "Management & Public Relations", "level": "division", "parent": "600"},
+ {"code": "660", "name": "Chemical Engineering", "level": "division", "parent": "600"},
+ {"code": "670", "name": "Manufacturing", "level": "division", "parent": "600"},
+ {"code": "680", "name": "Manufacture for Specific Uses", "level": "division", "parent": "600"},
+ {"code": "690", "name": "Construction & Building", "level": "division", "parent": "600"},
+
+ {"code": "700", "name": "Arts & Recreation", "level": "class", "parent": null},
+ {"code": "710", "name": "Landscape & Area Planning", "level": "division", "parent": "700"},
+ {"code": "720", "name": "Architecture", "level": "division", "parent": "700"},
+ {"code": "730", "name": "Sculpture, Ceramics & Metalwork", "level": "division", "parent": "700"},
+ {"code": "740", "name": "Drawing & Decorative Arts", "level": "division", "parent": "700"},
+ {"code": "750", "name": "Painting", "level": "division", "parent": "700"},
+ {"code": "760", "name": "Graphic Arts & Printmaking", "level": "division", "parent": "700"},
+ {"code": "770", "name": "Photography & Computer Art", "level": "division", "parent": "700"},
+ {"code": "780", "name": "Music", "level": "division", "parent": "700"},
+ {"code": "790", "name": "Sports, Games & Entertainment", "level": "division", "parent": "700"},
+
+ {"code": "800", "name": "Literature", "level": "class", "parent": null},
+ {"code": "810", "name": "American Literature in English", "level": "division", "parent": "800"},
+ {"code": "820", "name": "English & Old English Literature", "level": "division", "parent": "800"},
+ {"code": "830", "name": "German & Related Literature", "level": "division", "parent": "800"},
+ {"code": "840", "name": "French & Related Literature", "level": "division", "parent": "800"},
+ {"code": "850", "name": "Italian, Romanian & Related Literature", "level": "division", "parent": "800"},
+ {"code": "860", "name": "Spanish, Portuguese & Galician Literature", "level": "division", "parent": "800"},
+ {"code": "870", "name": "Latin & Italic Literature", "level": "division", "parent": "800"},
+ {"code": "880", "name": "Classical & Modern Greek Literature", "level": "division", "parent": "800"},
+ {"code": "890", "name": "Other Literatures", "level": "division", "parent": "800"},
+
+ {"code": "900", "name": "History & Geography", "level": "class", "parent": null},
+ {"code": "910", "name": "Geography & Travel", "level": "division", "parent": "900"},
+ {"code": "920", "name": "Biography & Genealogy", "level": "division", "parent": "900"},
+ {"code": "930", "name": "History of Ancient World", "level": "division", "parent": "900"},
+ {"code": "940", "name": "History of Europe", "level": "division", "parent": "900"},
+ {"code": "950", "name": "History of Asia", "level": "division", "parent": "900"},
+ {"code": "960", "name": "History of Africa", "level": "division", "parent": "900"},
+ {"code": "970", "name": "History of North America", "level": "division", "parent": "900"},
+ {"code": "980", "name": "History of South America", "level": "division", "parent": "900"},
+ {"code": "990", "name": "History of Other Areas", "level": "division", "parent": "900"}
+ ]
+}
diff --git a/backend/app/data/ddc_domains_zh.json b/backend/app/data/ddc_domains_zh.json
new file mode 100644
index 0000000..dec7d60
--- /dev/null
+++ b/backend/app/data/ddc_domains_zh.json
@@ -0,0 +1,120 @@
+{
+ "metadata": {
+ "source": "ddc",
+ "language": "zh",
+ "created_at": "2026-01-20",
+ "total_count": 100,
+ "description": "杜威十進位圖書分類法知識領域(10個大類 + 90個細類)"
+ },
+ "domains": [
+ {"code": "000", "name": "電腦科學、資訊與總類", "level": "class", "parent": null},
+ {"code": "010", "name": "書目學", "level": "division", "parent": "000"},
+ {"code": "020", "name": "圖書資訊學", "level": "division", "parent": "000"},
+ {"code": "030", "name": "百科全書與常識書", "level": "division", "parent": "000"},
+ {"code": "040", "name": "未分配", "level": "division", "parent": "000"},
+ {"code": "050", "name": "雜誌、期刊與連續出版品", "level": "division", "parent": "000"},
+ {"code": "060", "name": "協會、組織與博物館", "level": "division", "parent": "000"},
+ {"code": "070", "name": "新聞媒體、新聞學與出版", "level": "division", "parent": "000"},
+ {"code": "080", "name": "引用語錄", "level": "division", "parent": "000"},
+ {"code": "090", "name": "手稿與珍本", "level": "division", "parent": "000"},
+
+ {"code": "100", "name": "哲學與心理學", "level": "class", "parent": null},
+ {"code": "110", "name": "形上學", "level": "division", "parent": "100"},
+ {"code": "120", "name": "知識論", "level": "division", "parent": "100"},
+ {"code": "130", "name": "超心理學與神秘學", "level": "division", "parent": "100"},
+ {"code": "140", "name": "哲學流派", "level": "division", "parent": "100"},
+ {"code": "150", "name": "心理學", "level": "division", "parent": "100"},
+ {"code": "160", "name": "邏輯學", "level": "division", "parent": "100"},
+ {"code": "170", "name": "倫理學", "level": "division", "parent": "100"},
+ {"code": "180", "name": "古代、中世紀與東方哲學", "level": "division", "parent": "100"},
+ {"code": "190", "name": "近代西方哲學", "level": "division", "parent": "100"},
+
+ {"code": "200", "name": "宗教", "level": "class", "parent": null},
+ {"code": "210", "name": "宗教哲學與理論", "level": "division", "parent": "200"},
+ {"code": "220", "name": "聖經", "level": "division", "parent": "200"},
+ {"code": "230", "name": "基督教與基督神學", "level": "division", "parent": "200"},
+ {"code": "240", "name": "基督教實踐與禮儀", "level": "division", "parent": "200"},
+ {"code": "250", "name": "基督教修會與地方教會", "level": "division", "parent": "200"},
+ {"code": "260", "name": "基督教社會與教會神學", "level": "division", "parent": "200"},
+ {"code": "270", "name": "基督教歷史", "level": "division", "parent": "200"},
+ {"code": "280", "name": "基督教教派", "level": "division", "parent": "200"},
+ {"code": "290", "name": "其他宗教", "level": "division", "parent": "200"},
+
+ {"code": "300", "name": "社會科學", "level": "class", "parent": null},
+ {"code": "310", "name": "統計學", "level": "division", "parent": "300"},
+ {"code": "320", "name": "政治學", "level": "division", "parent": "300"},
+ {"code": "330", "name": "經濟學", "level": "division", "parent": "300"},
+ {"code": "340", "name": "法律", "level": "division", "parent": "300"},
+ {"code": "350", "name": "公共行政與軍事學", "level": "division", "parent": "300"},
+ {"code": "360", "name": "社會問題與服務", "level": "division", "parent": "300"},
+ {"code": "370", "name": "教育", "level": "division", "parent": "300"},
+ {"code": "380", "name": "商業、通訊與運輸", "level": "division", "parent": "300"},
+ {"code": "390", "name": "風俗、禮儀與民俗", "level": "division", "parent": "300"},
+
+ {"code": "400", "name": "語言", "level": "class", "parent": null},
+ {"code": "410", "name": "語言學", "level": "division", "parent": "400"},
+ {"code": "420", "name": "英語與古英語", "level": "division", "parent": "400"},
+ {"code": "430", "name": "德語及相關語言", "level": "division", "parent": "400"},
+ {"code": "440", "name": "法語及相關語言", "level": "division", "parent": "400"},
+ {"code": "450", "name": "義大利語、羅馬尼亞語及相關語言", "level": "division", "parent": "400"},
+ {"code": "460", "name": "西班牙語、葡萄牙語與加利西亞語", "level": "division", "parent": "400"},
+ {"code": "470", "name": "拉丁語及義大利語族", "level": "division", "parent": "400"},
+ {"code": "480", "name": "古典與現代希臘語", "level": "division", "parent": "400"},
+ {"code": "490", "name": "其他語言", "level": "division", "parent": "400"},
+
+ {"code": "500", "name": "自然科學", "level": "class", "parent": null},
+ {"code": "510", "name": "數學", "level": "division", "parent": "500"},
+ {"code": "520", "name": "天文學", "level": "division", "parent": "500"},
+ {"code": "530", "name": "物理學", "level": "division", "parent": "500"},
+ {"code": "540", "name": "化學", "level": "division", "parent": "500"},
+ {"code": "550", "name": "地球科學與地質學", "level": "division", "parent": "500"},
+ {"code": "560", "name": "古生物學", "level": "division", "parent": "500"},
+ {"code": "570", "name": "生物學與生命科學", "level": "division", "parent": "500"},
+ {"code": "580", "name": "植物學", "level": "division", "parent": "500"},
+ {"code": "590", "name": "動物學", "level": "division", "parent": "500"},
+
+ {"code": "600", "name": "應用科學與技術", "level": "class", "parent": null},
+ {"code": "610", "name": "醫學與健康", "level": "division", "parent": "600"},
+ {"code": "620", "name": "工程學", "level": "division", "parent": "600"},
+ {"code": "630", "name": "農業", "level": "division", "parent": "600"},
+ {"code": "640", "name": "家政與家庭管理", "level": "division", "parent": "600"},
+ {"code": "650", "name": "管理與公共關係", "level": "division", "parent": "600"},
+ {"code": "660", "name": "化學工程", "level": "division", "parent": "600"},
+ {"code": "670", "name": "製造業", "level": "division", "parent": "600"},
+ {"code": "680", "name": "特定用途製造", "level": "division", "parent": "600"},
+ {"code": "690", "name": "建築與營造", "level": "division", "parent": "600"},
+
+ {"code": "700", "name": "藝術與休閒", "level": "class", "parent": null},
+ {"code": "710", "name": "景觀與區域規劃", "level": "division", "parent": "700"},
+ {"code": "720", "name": "建築學", "level": "division", "parent": "700"},
+ {"code": "730", "name": "雕塑、陶瓷與金工", "level": "division", "parent": "700"},
+ {"code": "740", "name": "繪畫與裝飾藝術", "level": "division", "parent": "700"},
+ {"code": "750", "name": "繪畫藝術", "level": "division", "parent": "700"},
+ {"code": "760", "name": "版畫與印刷藝術", "level": "division", "parent": "700"},
+ {"code": "770", "name": "攝影與電腦藝術", "level": "division", "parent": "700"},
+ {"code": "780", "name": "音樂", "level": "division", "parent": "700"},
+ {"code": "790", "name": "運動、遊戲與娛樂", "level": "division", "parent": "700"},
+
+ {"code": "800", "name": "文學", "level": "class", "parent": null},
+ {"code": "810", "name": "美國英語文學", "level": "division", "parent": "800"},
+ {"code": "820", "name": "英語與古英語文學", "level": "division", "parent": "800"},
+ {"code": "830", "name": "德語及相關文學", "level": "division", "parent": "800"},
+ {"code": "840", "name": "法語及相關文學", "level": "division", "parent": "800"},
+ {"code": "850", "name": "義大利語、羅馬尼亞語及相關文學", "level": "division", "parent": "800"},
+ {"code": "860", "name": "西班牙語、葡萄牙語與加利西亞語文學", "level": "division", "parent": "800"},
+ {"code": "870", "name": "拉丁語及義大利語族文學", "level": "division", "parent": "800"},
+ {"code": "880", "name": "古典與現代希臘文學", "level": "division", "parent": "800"},
+ {"code": "890", "name": "其他文學", "level": "division", "parent": "800"},
+
+ {"code": "900", "name": "歷史與地理", "level": "class", "parent": null},
+ {"code": "910", "name": "地理與旅遊", "level": "division", "parent": "900"},
+ {"code": "920", "name": "傳記與家譜", "level": "division", "parent": "900"},
+ {"code": "930", "name": "古代世界史", "level": "division", "parent": "900"},
+ {"code": "940", "name": "歐洲史", "level": "division", "parent": "900"},
+ {"code": "950", "name": "亞洲史", "level": "division", "parent": "900"},
+ {"code": "960", "name": "非洲史", "level": "division", "parent": "900"},
+ {"code": "970", "name": "北美洲史", "level": "division", "parent": "900"},
+ {"code": "980", "name": "南美洲史", "level": "division", "parent": "900"},
+ {"code": "990", "name": "其他地區史", "level": "division", "parent": "900"}
+ ]
+}
diff --git a/backend/app/services/embedding_service.py b/backend/app/services/embedding_service.py
index 90908ad..0ac7bfc 100644
--- a/backend/app/services/embedding_service.py
+++ b/backend/app/services/embedding_service.py
@@ -26,7 +26,7 @@ class EmbeddingService:
def __init__(self):
self.base_url = settings.ollama_base_url
- self.default_model = "nomic-embed-text" # Ollama 預設的 embedding 模型
+ self.default_model = "qwen3-embedding:4b" # Qwen3 embedding model for better semantic understanding
self.client = httpx.AsyncClient(timeout=120.0)
async def get_embedding(self, text: str, model: Optional[str] = None) -> List[float]:
diff --git a/experiments/__init__.py b/experiments/__init__.py
new file mode 100644
index 0000000..ce13dfe
--- /dev/null
+++ b/experiments/__init__.py
@@ -0,0 +1,7 @@
+"""
+Experiment module for 5-condition idea generation study.
+
+This module implements a 2×2 factorial design + control to test
+the contributions of attribute decomposition and expert perspectives
+to creative ideation quality.
+"""
diff --git a/experiments/analyze_results.py b/experiments/analyze_results.py
new file mode 100644
index 0000000..ba7ee54
--- /dev/null
+++ b/experiments/analyze_results.py
@@ -0,0 +1,546 @@
+"""
+Statistical analysis for experiment results.
+
+Performs:
+- 2×2 ANOVA for main effects (attributes, experts) and interaction
+- Post-hoc tests (Tukey HSD)
+- Effect sizes (Cohen's d)
+- Control comparison (C2 vs C5)
+
+Usage:
+ python -m experiments.analyze_results --input results/experiment_xxx_metrics.json
+"""
+
+import sys
+import json
+import argparse
+from pathlib import Path
+from typing import List, Dict, Any, Optional
+from dataclasses import dataclass
+
+import numpy as np
+
+
+class NumpyEncoder(json.JSONEncoder):
+ """JSON encoder that handles numpy types."""
+ def default(self, obj):
+ if isinstance(obj, (np.integer, np.int64, np.int32)):
+ return int(obj)
+ if isinstance(obj, (np.floating, np.float64, np.float32)):
+ return float(obj)
+ if isinstance(obj, (np.bool_, bool)):
+ return bool(obj)
+ if isinstance(obj, np.ndarray):
+ return obj.tolist()
+ return super().default(obj)
+
+
+# Ensure the repo root is on sys.path so the `experiments` package imports when run as a script
+sys.path.insert(0, str(Path(__file__).parent.parent))
+
+from experiments.config import RESULTS_DIR
+
+# Try to import statistical libraries
+try:
+ from scipy import stats
+ SCIPY_AVAILABLE = True
+except ImportError:
+ SCIPY_AVAILABLE = False
+ print("Warning: scipy not installed. Some statistical tests will be unavailable.")
+
+try:
+ import pandas as pd
+ PANDAS_AVAILABLE = True
+except ImportError:
+ PANDAS_AVAILABLE = False
+
+
+@dataclass
+class EffectSize:
+ """Cohen's d effect size with interpretation."""
+ d: float
+ interpretation: str # small, medium, large
+
+ @staticmethod
+ def from_groups(group1: List[float], group2: List[float]) -> 'EffectSize':
+ """Calculate Cohen's d from two groups."""
+ n1, n2 = len(group1), len(group2)
+ if n1 < 2 or n2 < 2:
+ return EffectSize(d=0, interpretation="insufficient data")
+
+ mean1, mean2 = np.mean(group1), np.mean(group2)
+ var1, var2 = np.var(group1, ddof=1), np.var(group2, ddof=1)
+
+ # Pooled standard deviation
+ pooled_std = np.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
+
+ if pooled_std == 0:
+ return EffectSize(d=0, interpretation="no variance")
+
+ d = (mean1 - mean2) / pooled_std
+
+ # Interpretation (Cohen's conventions)
+ abs_d = abs(d)
+ if abs_d < 0.2:
+ interpretation = "negligible"
+ elif abs_d < 0.5:
+ interpretation = "small"
+ elif abs_d < 0.8:
+ interpretation = "medium"
+ else:
+ interpretation = "large"
+
+ return EffectSize(d=round(d, 4), interpretation=interpretation)
+
+
+@dataclass
+class TTestResult:
+ """Independent samples t-test result."""
+ t_statistic: float
+ p_value: float
+ effect_size: EffectSize
+ significant: bool # p < 0.05
+ group1_mean: float
+ group2_mean: float
+ group1_std: float
+ group2_std: float
+ group1_n: int
+ group2_n: int
+
+
+@dataclass
+class ANOVAResult:
+ """2×2 ANOVA result."""
+ main_effect_attributes: Dict[str, float] # F, p
+ main_effect_experts: Dict[str, float] # F, p
+ interaction: Dict[str, float] # F, p
+ significant_effects: List[str]
+
+
+def extract_metric_values(
+ metrics: Dict[str, Any],
+ metric_path: str
+) -> Dict[str, List[float]]:
+ """
+ Extract values for a specific metric across all queries.
+
+ Args:
+ metrics: Full metrics dict from compute_metrics.py
+ metric_path: Dot-separated path like "post_dedup_diversity.mean_pairwise_distance"
+
+ Returns:
+ Dict mapping condition name to list of values
+ """
+ by_condition = {}
+
+ for query_metrics in metrics.get("metrics_by_query", []):
+ for condition, cond_metrics in query_metrics.get("conditions", {}).items():
+ if condition not in by_condition:
+ by_condition[condition] = []
+
+ # Navigate the metric path
+ value = cond_metrics
+ for key in metric_path.split("."):
+ if value is None:
+ break
+ if isinstance(value, dict):
+ value = value.get(key)
+ else:
+ value = None
+
+ if value is not None and isinstance(value, (int, float)):
+ by_condition[condition].append(float(value))
+
+ return by_condition
+
+
+def perform_ttest(
+ group1: List[float],
+ group2: List[float],
+ group1_name: str = "Group 1",
+ group2_name: str = "Group 2"
+) -> Optional[TTestResult]:
+ """Perform independent samples t-test."""
+ if not SCIPY_AVAILABLE:
+ return None
+
+ if len(group1) < 2 or len(group2) < 2:
+ return None
+
+ t_stat, p_value = stats.ttest_ind(group1, group2)
+ effect = EffectSize.from_groups(group1, group2)
+
+ return TTestResult(
+ t_statistic=round(t_stat, 4),
+ p_value=round(p_value, 4),
+ effect_size=effect,
+ significant=p_value < 0.05,
+ group1_mean=round(np.mean(group1), 4),
+ group2_mean=round(np.mean(group2), 4),
+ group1_std=round(np.std(group1, ddof=1), 4),
+ group2_std=round(np.std(group2, ddof=1), 4),
+ group1_n=len(group1),
+ group2_n=len(group2)
+ )
+
+
+def perform_2x2_anova(
+ c1_direct: List[float], # No attributes, No experts
+ c2_expert: List[float], # No attributes, With experts
+ c3_attribute: List[float], # With attributes, No experts
+ c4_full: List[float] # With attributes, With experts
+) -> Optional[ANOVAResult]:
+ """
+ Perform 2×2 factorial ANOVA.
+
+ Factors:
+ - Attributes: Without (C1, C2) vs With (C3, C4)
+ - Experts: Without (C1, C3) vs With (C2, C4)
+ """
+ if not SCIPY_AVAILABLE:
+ return None
+
+ # Check minimum data
+ min_n = min(len(c1_direct), len(c2_expert), len(c3_attribute), len(c4_full))
+ if min_n < 2:
+ return None
+
+ # For a proper 2×2 ANOVA, we'd use statsmodels or similar
+ # Here we'll compute main effects and interaction manually
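+    # For reference, a full 2×2 ANOVA with statsmodels (assuming it is installed
+    # and observations are in a long-format DataFrame) would look like:
+    #   import statsmodels.formula.api as smf
+    #   from statsmodels.stats.anova import anova_lm
+    #   model = smf.ols("value ~ C(has_attributes) * C(has_experts)", data=df).fit()
+    #   anova_lm(model, typ=2)  # F and p for both main effects and the interaction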
+
+ # Main effect of Attributes: (C3 + C4) vs (C1 + C2)
+ no_attr = c1_direct + c2_expert
+ with_attr = c3_attribute + c4_full
+ f_attr, p_attr = stats.f_oneway(no_attr, with_attr)
+
+ # Main effect of Experts: (C2 + C4) vs (C1 + C3)
+ no_expert = c1_direct + c3_attribute
+ with_expert = c2_expert + c4_full
+ f_expert, p_expert = stats.f_oneway(no_expert, with_expert)
+
+ # Interaction: Compare the difference of differences
+ # (C4 - C3) - (C2 - C1) = interaction term
+ # Simplified approach: compare all 4 groups
+ f_all, p_all = stats.f_oneway(c1_direct, c2_expert, c3_attribute, c4_full)
+
+ # Estimate interaction by checking if combination is super-additive
+ mean1, mean2, mean3, mean4 = np.mean(c1_direct), np.mean(c2_expert), np.mean(c3_attribute), np.mean(c4_full)
+ expected_additive = mean1 + (mean2 - mean1) + (mean3 - mean1) # Additive prediction
+ actual_combination = mean4
+ interaction_strength = actual_combination - expected_additive
+
+ significant_effects = []
+ if p_attr < 0.05:
+ significant_effects.append("Attributes")
+ if p_expert < 0.05:
+ significant_effects.append("Experts")
+ if p_all < 0.05 and abs(interaction_strength) > 0.01:
+ significant_effects.append("Interaction")
+
+ return ANOVAResult(
+ main_effect_attributes={"F": round(f_attr, 4), "p": round(p_attr, 4)},
+ main_effect_experts={"F": round(f_expert, 4), "p": round(p_expert, 4)},
+ interaction={
+ "F_all_groups": round(f_all, 4),
+ "p_all_groups": round(p_all, 4),
+ "interaction_strength": round(interaction_strength, 4),
+ "super_additive": interaction_strength > 0
+ },
+ significant_effects=significant_effects
+ )
+
+
+def analyze_experiment(metrics: Dict[str, Any]) -> Dict[str, Any]:
+ """
+ Perform full statistical analysis on experiment metrics.
+
+ Returns analysis results for multiple metrics.
+ """
+ results = {
+ "analysis_metrics": [],
+ "research_questions": {}
+ }
+
+ # Define metrics to analyze
+ metrics_to_analyze = [
+ ("Survival Rate", "survival_rate"),
+ ("Post-Dedup Diversity", "post_dedup_diversity.mean_pairwise_distance"),
+ ("Normalized Diversity", "normalized_diversity.mean_pairwise_distance"),
+ ("Query Distance", "post_dedup_query_distance.mean_distance"),
+ ("Cluster Count", "post_dedup_clusters.optimal_clusters"),
+ ]
+
+ for metric_name, metric_path in metrics_to_analyze:
+ print(f"\n{'='*60}")
+ print(f"Analyzing: {metric_name}")
+ print(f"{'='*60}")
+
+ # Extract values by condition
+ by_condition = extract_metric_values(metrics, metric_path)
+
+ if not by_condition:
+ print(f" No data available for {metric_name}")
+ continue
+
+ metric_results = {
+ "metric_name": metric_name,
+ "metric_path": metric_path,
+ "descriptive": {},
+ "comparisons": {},
+ "anova": None
+ }
+
+ # Descriptive statistics
+ print(f"\nDescriptive Statistics:")
+ print(f"{'Condition':<25} {'Mean':<10} {'Std':<10} {'N':<5}")
+ print("-" * 50)
+
+ for cond, values in sorted(by_condition.items()):
+ if values:
+ mean = np.mean(values)
+ std = np.std(values, ddof=1) if len(values) > 1 else 0
+ metric_results["descriptive"][cond] = {
+ "mean": round(mean, 4),
+ "std": round(std, 4),
+ "n": len(values)
+ }
+ print(f"{cond:<25} {mean:<10.4f} {std:<10.4f} {len(values):<5}")
+
+ # Key comparisons
+ comparisons = []
+
+ # 1. C1 (Direct) vs C4 (Full Pipeline) - Main comparison
+ if "c1_direct" in by_condition and "c4_full_pipeline" in by_condition:
+ result = perform_ttest(
+ by_condition["c4_full_pipeline"],
+ by_condition["c1_direct"],
+ "Full Pipeline", "Direct"
+ )
+ if result:
+ comparisons.append(("C4 vs C1 (Full vs Direct)", result))
+ metric_results["comparisons"]["c4_vs_c1"] = {
+ "t": result.t_statistic,
+ "p": result.p_value,
+ "d": result.effect_size.d,
+ "interpretation": result.effect_size.interpretation,
+ "significant": result.significant
+ }
+
+ # 2. C2 (Expert) vs C5 (Random) - Control comparison
+ if "c2_expert_only" in by_condition and "c5_random_perspective" in by_condition:
+ result = perform_ttest(
+ by_condition["c2_expert_only"],
+ by_condition["c5_random_perspective"],
+ "Expert", "Random"
+ )
+ if result:
+ comparisons.append(("C2 vs C5 (Expert vs Random)", result))
+ metric_results["comparisons"]["c2_vs_c5"] = {
+ "t": result.t_statistic,
+ "p": result.p_value,
+ "d": result.effect_size.d,
+ "interpretation": result.effect_size.interpretation,
+ "significant": result.significant
+ }
+
+ # 3. C2 (Expert-Only) vs C1 (Direct) - Effect of experts alone
+ if "c2_expert_only" in by_condition and "c1_direct" in by_condition:
+ result = perform_ttest(
+ by_condition["c2_expert_only"],
+ by_condition["c1_direct"],
+ "Expert-Only", "Direct"
+ )
+ if result:
+ comparisons.append(("C2 vs C1 (Expert effect)", result))
+ metric_results["comparisons"]["c2_vs_c1"] = {
+ "t": result.t_statistic,
+ "p": result.p_value,
+ "d": result.effect_size.d,
+ "interpretation": result.effect_size.interpretation,
+ "significant": result.significant
+ }
+
+ # 4. C3 (Attribute-Only) vs C1 (Direct) - Effect of attributes alone
+ if "c3_attribute_only" in by_condition and "c1_direct" in by_condition:
+ result = perform_ttest(
+ by_condition["c3_attribute_only"],
+ by_condition["c1_direct"],
+ "Attribute-Only", "Direct"
+ )
+ if result:
+ comparisons.append(("C3 vs C1 (Attribute effect)", result))
+ metric_results["comparisons"]["c3_vs_c1"] = {
+ "t": result.t_statistic,
+ "p": result.p_value,
+ "d": result.effect_size.d,
+ "interpretation": result.effect_size.interpretation,
+ "significant": result.significant
+ }
+
+ # Print comparisons
+ if comparisons:
+ print(f"\nPairwise Comparisons:")
+ print(f"{'Comparison':<30} {'t':<10} {'p':<10} {'d':<10} {'Sig?':<8}")
+ print("-" * 68)
+ for name, result in comparisons:
+ sig = "Yes*" if result.significant else "No"
+ print(f"{name:<30} {result.t_statistic:<10.3f} {result.p_value:<10.4f} "
+ f"{result.effect_size.d:<10.3f} {sig:<8}")
+
+ # 2×2 ANOVA (if all conditions available)
+ if all(c in by_condition for c in ["c1_direct", "c2_expert_only", "c3_attribute_only", "c4_full_pipeline"]):
+ anova = perform_2x2_anova(
+ by_condition["c1_direct"],
+ by_condition["c2_expert_only"],
+ by_condition["c3_attribute_only"],
+ by_condition["c4_full_pipeline"]
+ )
+ if anova:
+ metric_results["anova"] = {
+ "main_effect_attributes": anova.main_effect_attributes,
+ "main_effect_experts": anova.main_effect_experts,
+ "interaction": anova.interaction,
+ "significant_effects": anova.significant_effects
+ }
+
+ print(f"\n2×2 ANOVA Results:")
+ print(f" Main Effect (Attributes): F={anova.main_effect_attributes['F']:.3f}, "
+ f"p={anova.main_effect_attributes['p']:.4f}")
+ print(f" Main Effect (Experts): F={anova.main_effect_experts['F']:.3f}, "
+ f"p={anova.main_effect_experts['p']:.4f}")
+ print(f" Interaction Strength: {anova.interaction['interaction_strength']:.4f} "
+ f"({'super-additive' if anova.interaction['super_additive'] else 'sub-additive'})")
+ print(f" Significant Effects: {', '.join(anova.significant_effects) or 'None'}")
+
+ results["analysis_metrics"].append(metric_results)
+
+ # Summarize research questions
+ results["research_questions"] = summarize_research_questions(results["analysis_metrics"])
+
+ return results
+
+
+def summarize_research_questions(analysis_metrics: List[Dict]) -> Dict[str, str]:
+ """Summarize findings for each research question."""
+ rq = {}
+
+ # Find the diversity metric results
+ diversity_results = None
+ for m in analysis_metrics:
+ if "Diversity" in m["metric_name"] and "Normalized" in m["metric_name"]:
+ diversity_results = m
+ break
+ if diversity_results is None:
+ for m in analysis_metrics:
+ if "Diversity" in m["metric_name"]:
+ diversity_results = m
+ break
+
+ if diversity_results:
+ anova = diversity_results.get("anova", {})
+ comparisons = diversity_results.get("comparisons", {})
+
+ # RQ1: Does attribute decomposition improve diversity?
+ if anova and "main_effect_attributes" in anova:
+ p = anova["main_effect_attributes"]["p"]
+ rq["RQ1_attributes"] = f"Main effect p={p:.4f}. " + \
+ ("Significant effect of attributes." if p < 0.05 else "No significant effect.")
+
+ # RQ2: Do expert perspectives improve diversity?
+ if anova and "main_effect_experts" in anova:
+ p = anova["main_effect_experts"]["p"]
+ rq["RQ2_experts"] = f"Main effect p={p:.4f}. " + \
+ ("Significant effect of experts." if p < 0.05 else "No significant effect.")
+
+ # RQ3: Interaction effect?
+ if anova and "interaction" in anova:
+ strength = anova["interaction"]["interaction_strength"]
+ super_add = anova["interaction"]["super_additive"]
+ rq["RQ3_interaction"] = f"Interaction strength={strength:.4f}. " + \
+ ("Super-additive (combination better than sum)." if super_add else "Sub-additive or additive.")
+
+ # RQ5: Expert vs Random (C2 vs C5)
+ if "c2_vs_c5" in comparisons:
+ comp = comparisons["c2_vs_c5"]
+ rq["RQ5_expert_vs_random"] = f"d={comp['d']:.3f} ({comp['interpretation']}), p={comp['p']:.4f}. " + \
+ ("Expert knowledge matters." if comp["significant"] and comp["d"] > 0 else "No significant difference from random perspectives.")
+
+ return rq
+
+
+def print_research_summary(results: Dict[str, Any]):
+ """Print summary of research question findings."""
+ print("\n" + "=" * 70)
+ print("RESEARCH QUESTIONS SUMMARY")
+ print("=" * 70)
+
+ rq = results.get("research_questions", {})
+
+ print("\nRQ1: Does attribute decomposition improve semantic diversity?")
+ print(f" → {rq.get('RQ1_attributes', 'Insufficient data')}")
+
+ print("\nRQ2: Do expert perspectives improve semantic diversity?")
+ print(f" → {rq.get('RQ2_experts', 'Insufficient data')}")
+
+ print("\nRQ3: Is there an interaction effect (Full Pipeline > sum of parts)?")
+ print(f" → {rq.get('RQ3_interaction', 'Insufficient data')}")
+
+ print("\nRQ5: Do experts beat random perspectives? (C2 vs C5)")
+ print(f" → {rq.get('RQ5_expert_vs_random', 'Insufficient data')}")
+
+ print("\n" + "=" * 70)
+ print("Note: With pilot data (n=1 query), statistical power is limited.")
+ print("Full experiment (n=10+ queries) needed for reliable conclusions.")
+ print("=" * 70)
+
+
+def main():
+ parser = argparse.ArgumentParser(
+ description="Statistical analysis for experiment results"
+ )
+ parser.add_argument(
+ "--input",
+ type=str,
+ required=True,
+ help="Input metrics JSON file"
+ )
+ parser.add_argument(
+ "--output",
+ type=str,
+ help="Output file path (default: input_analysis.json)"
+ )
+
+ args = parser.parse_args()
+
+ input_path = Path(args.input)
+ if not input_path.exists():
+ input_path = RESULTS_DIR / args.input
+ if not input_path.exists():
+ print(f"Error: Input file not found: {args.input}")
+ sys.exit(1)
+
+ # Load metrics
+ with open(input_path, "r", encoding="utf-8") as f:
+ metrics = json.load(f)
+
+ # Run analysis
+ results = analyze_experiment(metrics)
+
+ # Print research summary
+ print_research_summary(results)
+
+ # Save results
+ if args.output:
+ output_path = Path(args.output)
+ else:
+ stem = input_path.stem.replace("_metrics", "")
+ output_path = input_path.parent / f"{stem}_analysis.json"
+
+ with open(output_path, "w", encoding="utf-8") as f:
+ json.dump(results, f, indent=2, ensure_ascii=False, cls=NumpyEncoder)
+
+ print(f"\nAnalysis saved to: {output_path}")
+
+
+if __name__ == "__main__":
+ main()
diff --git a/experiments/assessment/README.md b/experiments/assessment/README.md
new file mode 100644
index 0000000..adac262
--- /dev/null
+++ b/experiments/assessment/README.md
@@ -0,0 +1,314 @@
+# Human Assessment Web Interface
+
+A standalone web application for human assessment of generated ideas using Torrance-inspired creativity metrics.
+
+## Overview
+
+This tool enables blind evaluation of creative ideas generated by the novelty-seeking experiment. Raters assess ideas on four dimensions without knowing which experimental condition produced each idea, ensuring unbiased evaluation.
+
+## Quick Start
+
+```bash
+cd experiments/assessment
+
+# 1. Prepare assessment data (if not already done)
+python3 prepare_data.py
+
+# 2. Start the system
+./start.sh
+
+# 3. Open browser
+open http://localhost:5174
+```
+
+## Directory Structure
+
+```
+assessment/
+├── backend/
+│ ├── app.py # FastAPI backend API
+│ ├── database.py # SQLite database operations
+│ ├── models.py # Pydantic models & dimension definitions
+│ └── requirements.txt # Python dependencies
+├── frontend/
+│ ├── src/
+│ │ ├── components/ # React UI components
+│ │ ├── hooks/ # React state management
+│ │ ├── services/ # API client
+│ │ └── types/ # TypeScript definitions
+│ └── package.json
+├── data/
+│ └── assessment_items.json # Prepared ideas for rating
+├── results/
+│ └── ratings.db # SQLite database with ratings
+├── prepare_data.py # Data preparation script
+├── analyze_ratings.py # Inter-rater reliability analysis
+├── start.sh # Start both servers
+├── stop.sh # Stop all services
+└── README.md # This file
+```
+
+## Data Preparation
+
+### List Available Experiment Files
+
+```bash
+python3 prepare_data.py --list
+```
+
+Output:
+```
+Available experiment files (most recent first):
+ experiment_20260119_165650_deduped.json (1571.3 KB)
+ experiment_20260119_163040_deduped.json (156.4 KB)
+```
+
+### Prepare Assessment Data
+
+```bash
+# Use all ideas (not recommended for human assessment)
+python3 prepare_data.py
+
+# RECOMMENDED: Stratified sampling - 4 ideas per condition per query
+# Results in ~200 ideas (5 conditions × 4 ideas × 10 queries)
+python3 prepare_data.py --per-condition 4
+
+# Alternative: Sample 150 ideas total (proportionally across queries)
+python3 prepare_data.py --sample 150
+
+# Limit per query (20 ideas max per query)
+python3 prepare_data.py --per-query 20
+
+# Combined: 4 per condition, max 15 per query
+python3 prepare_data.py --per-condition 4 --per-query 15
+
+# Specify a different experiment file
+python3 prepare_data.py experiment_20260119_163040_deduped.json --per-condition 4
+```
+
+### Sampling Options
+
+| Option | Description | Example |
+|--------|-------------|---------|
+| `--per-condition N` | Max N ideas per condition per query (stratified) | `--per-condition 4` → ~200 ideas |
+| `--per-query N` | Max N ideas per query | `--per-query 20` |
+| `--sample N` | Total N ideas (proportionally distributed) | `--sample 150` |
+| `--seed N` | Random seed for reproducibility | `--seed 42` (default) |
+
+**Recommendation**: Use `--per-condition 4` for balanced assessment across conditions.
+
+The script:
+1. Loads the deduped experiment results
+2. Extracts all unique ideas with hidden metadata (condition, expert, keyword)
+3. Assigns stable IDs to each idea
+4. Shuffles ideas within each query (reproducible with seed=42; sketched below)
+5. Outputs `data/assessment_items.json`
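+
+Step 4 is what makes the blind ordering reproducible across raters; conceptually (a sketch, not the script's exact code):
+
+```python
+import random
+
+rng = random.Random(42)          # fixed seed: identical order on every run
+for query in queries:            # `queries` as loaded from the deduped results
+    rng.shuffle(query["ideas"])  # shuffle ideas only within their own query
+```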
+
+## Assessment Dimensions
+
+Raters evaluate each idea on four dimensions using a 1-5 Likert scale:
+
+### Originality
+*How unexpected or surprising is this idea?*
+
+| Score | Description |
+|-------|-------------|
+| 1 | Very common/obvious idea anyone would suggest |
+| 2 | Somewhat common, slight variation on expected ideas |
+| 3 | Moderately original, some unexpected elements |
+| 4 | Quite original, notably different approach |
+| 5 | Highly unexpected, truly novel concept |
+
+### Elaboration
+*How detailed and well-developed is this idea?*
+
+| Score | Description |
+|-------|-------------|
+| 1 | Vague, minimal detail, just a concept |
+| 2 | Basic idea with little specificity |
+| 3 | Moderately detailed, some specifics provided |
+| 4 | Well-developed with clear implementation hints |
+| 5 | Highly specific, thoroughly developed concept |
+
+### Coherence
+*Does this idea make logical sense and relate to the query object?*
+
+| Score | Description |
+|-------|-------------|
+| 1 | Nonsensical, irrelevant, or incomprehensible |
+| 2 | Mostly unclear, weak connection to query |
+| 3 | Partially coherent, some logical gaps |
+| 4 | Mostly coherent with minor issues |
+| 5 | Fully coherent, clearly relates to query |
+
+### Usefulness
+*Could this idea have practical value or inspire real innovation?*
+
+| Score | Description |
+|-------|-------------|
+| 1 | No practical value whatsoever |
+| 2 | Minimal usefulness, highly impractical |
+| 3 | Some potential value with major limitations |
+| 4 | Useful idea with realistic applications |
+| 5 | Highly useful, clear practical value |
+
+## Running the System
+
+### Start
+
+```bash
+./start.sh
+```
+
+This will:
+1. Check for `data/assessment_items.json` (runs `prepare_data.py` if missing)
+2. Install frontend dependencies if needed
+3. Start backend API on port 8002
+4. Start frontend dev server on port 5174
+
+### Stop
+
+```bash
+./stop.sh
+```
+
+Or press `Ctrl+C` in the terminal running `start.sh`.
+
+### Manual Start (Development)
+
+```bash
+# Terminal 1: Backend
+cd backend
+../../../backend/venv/bin/uvicorn app:app --host 0.0.0.0 --port 8002 --reload
+
+# Terminal 2: Frontend
+cd frontend
+npm run dev
+```
+
+## API Endpoints
+
+| Endpoint | Method | Description |
+|----------|--------|-------------|
+| `/api/health` | GET | Health check |
+| `/api/info` | GET | Experiment info (total ideas, queries, conditions) |
+| `/api/dimensions` | GET | Dimension definitions for UI |
+| `/api/raters` | GET | List all raters |
+| `/api/raters` | POST | Register/login rater |
+| `/api/queries` | GET | List all queries |
+| `/api/queries/{id}` | GET | Get query with all ideas |
+| `/api/queries/{id}/unrated?rater_id=X` | GET | Get unrated ideas for rater |
+| `/api/ratings` | POST | Submit a rating |
+| `/api/progress/{rater_id}` | GET | Get rater's progress |
+| `/api/statistics` | GET | Overall statistics |
+| `/api/export` | GET | Export all ratings with metadata |
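+
+For example, submitting a rating from Python (a sketch; the request fields assume the `RatingSubmit` model mirrors the ratings table in the schema below):
+
+```python
+import requests
+
+resp = requests.post("http://localhost:8002/api/ratings", json={
+    "rater_id": "rater_01",  # hypothetical IDs, for illustration only
+    "idea_id": "idea_0042",
+    "query_id": "query_01",
+    "originality": 4,
+    "elaboration": 3,
+    "coherence": 5,
+    "usefulness": 4,
+})
+resp.raise_for_status()
+```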
+
+## Analysis
+
+After collecting ratings from multiple raters:
+
+```bash
+python3 analyze_ratings.py
+```
+
+This calculates:
+- **Krippendorff's alpha**: Inter-rater reliability for ordinal data
+- **ICC(2,1)**: Intraclass Correlation Coefficient with 95% CI
+- **Mean ratings per condition**: Compare experimental conditions
+- **Kruskal-Wallis test**: Statistical significance between conditions
+
+Output is saved to `results/analysis_results.json`.
+
+## Database Schema
+
+SQLite database (`results/ratings.db`):
+
+```sql
+-- Raters
+CREATE TABLE raters (
+ rater_id TEXT PRIMARY KEY,
+ name TEXT,
+ created_at TIMESTAMP
+);
+
+-- Ratings
+CREATE TABLE ratings (
+ id INTEGER PRIMARY KEY,
+ rater_id TEXT,
+ idea_id TEXT,
+ query_id TEXT,
+ originality INTEGER CHECK(originality BETWEEN 1 AND 5),
+ elaboration INTEGER CHECK(elaboration BETWEEN 1 AND 5),
+ coherence INTEGER CHECK(coherence BETWEEN 1 AND 5),
+ usefulness INTEGER CHECK(usefulness BETWEEN 1 AND 5),
+ skipped INTEGER DEFAULT 0,
+ timestamp TIMESTAMP,
+ UNIQUE(rater_id, idea_id)
+);
+
+-- Progress tracking
+CREATE TABLE progress (
+ rater_id TEXT,
+ query_id TEXT,
+ completed_count INTEGER,
+ total_count INTEGER,
+ PRIMARY KEY (rater_id, query_id)
+);
+```
+
+## Blind Assessment Design
+
+To ensure unbiased evaluation:
+
+1. **Randomization**: Ideas are shuffled within each query using a fixed seed (42) for reproducibility
+2. **Hidden metadata**: Condition, expert name, and keywords are stored but not shown to raters
+3. **Consistent ordering**: All raters see the same randomized order
+4. **Context provided**: Only the query text is shown (e.g., "Chair", "Bicycle")
+
+## Workflow for Raters
+
+1. **Login**: Enter a unique rater ID
+2. **Instructions**: Read dimension definitions (shown before first rating)
+3. **Rate ideas**: For each idea:
+ - Read the idea text
+ - Rate all 4 dimensions (1-5)
+ - Click "Submit & Next" or "Skip"
+4. **Progress**: Track completion per query and overall
+5. **Completion**: Summary shown when all ideas are rated
+
+## Troubleshooting
+
+### Backend won't start
+```bash
+# Check if port 8002 is in use
+lsof -i :8002
+
+# Check backend logs
+cat /tmp/assessment_backend.log
+```
+
+### Frontend won't start
+```bash
+# Reinstall dependencies
+cd frontend
+rm -rf node_modules
+npm install
+```
+
+### Reset database
+```bash
+rm results/ratings.db
+# Database is auto-created on next backend start
+```
+
+### Regenerate assessment data
+```bash
+rm data/assessment_items.json
+python3 prepare_data.py
+```
+
+## Tech Stack
+
+- **Backend**: Python 3.11+, FastAPI, SQLite, Pydantic
+- **Frontend**: React 19, TypeScript, Vite, Ant Design 6.0
+- **Analysis**: NumPy, SciPy (for statistical tests)
diff --git a/experiments/assessment/analyze_ratings.py b/experiments/assessment/analyze_ratings.py
new file mode 100755
index 0000000..f484b52
--- /dev/null
+++ b/experiments/assessment/analyze_ratings.py
@@ -0,0 +1,356 @@
+#!/usr/bin/env python3
+"""
+Analyze assessment ratings for inter-rater reliability and condition comparisons.
+
+This script:
+1. Loads ratings from the SQLite database
+2. Joins with hidden metadata (condition, expert)
+3. Calculates inter-rater reliability metrics
+4. Computes mean ratings per dimension per condition
+5. Performs statistical comparisons between conditions
+"""
+
+import json
+import sqlite3
+from collections import defaultdict
+from datetime import datetime, timezone
+from pathlib import Path
+from typing import Any
+
+import numpy as np
+from scipy import stats
+
+
+# Paths
+RESULTS_DIR = Path(__file__).parent / 'results'
+DATA_DIR = Path(__file__).parent / 'data'
+DB_PATH = RESULTS_DIR / 'ratings.db'
+ASSESSMENT_DATA_PATH = DATA_DIR / 'assessment_items.json'
+
+
+def load_assessment_data() -> dict[str, Any]:
+ """Load the assessment items data with hidden metadata."""
+ with open(ASSESSMENT_DATA_PATH, 'r', encoding='utf-8') as f:
+ return json.load(f)
+
+
+def load_ratings_from_db() -> list[dict[str, Any]]:
+ """Load all ratings from the SQLite database."""
+ if not DB_PATH.exists():
+ print(f"Database not found at {DB_PATH}")
+ return []
+
+ conn = sqlite3.connect(DB_PATH)
+ conn.row_factory = sqlite3.Row
+ cursor = conn.cursor()
+
+ cursor.execute('''
+ SELECT r.*, rat.name as rater_name
+ FROM ratings r
+ LEFT JOIN raters rat ON r.rater_id = rat.rater_id
+ WHERE r.skipped = 0
+ ''')
+
+ ratings = [dict(row) for row in cursor.fetchall()]
+ conn.close()
+
+ return ratings
+
+
+def build_idea_lookup(assessment_data: dict[str, Any]) -> dict[str, dict[str, Any]]:
+ """Build a lookup table from idea_id to metadata."""
+ lookup = {}
+ for query in assessment_data['queries']:
+ for idea in query['ideas']:
+ lookup[idea['idea_id']] = {
+ 'text': idea['text'],
+ 'query_id': query['query_id'],
+ 'query_text': query['query_text'],
+ **idea['_hidden']
+ }
+ return lookup
+
+
+def calculate_krippendorff_alpha(ratings_matrix: np.ndarray) -> float:
+ """
+    Calculate a simplified Krippendorff's alpha (squared-difference metric) for Likert ratings.
+
+ Args:
+ ratings_matrix: 2D array where rows are items and columns are raters.
+ NaN values indicate missing ratings.
+
+ Returns:
+ Krippendorff's alpha coefficient
+ """
+    # Drop items with no ratings; items with only one rating are skipped in the pair loop below
+ valid_items = ~np.all(np.isnan(ratings_matrix), axis=1)
+ ratings_matrix = ratings_matrix[valid_items]
+
+ if ratings_matrix.shape[0] < 2:
+ return np.nan
+
+ n_items, n_raters = ratings_matrix.shape
+
+ # Observed disagreement
+ observed_disagreement = 0
+ n_pairs = 0
+
+ for i in range(n_items):
+ values = ratings_matrix[i, ~np.isnan(ratings_matrix[i])]
+ if len(values) < 2:
+ continue
+        # Squared difference (interval metric), a common simplification for ordinal Likert data
+ for j in range(len(values)):
+ for k in range(j + 1, len(values)):
+ observed_disagreement += (values[j] - values[k]) ** 2
+ n_pairs += 1
+
+ if n_pairs == 0:
+ return np.nan
+
+ observed_disagreement /= n_pairs
+
+ # Expected disagreement (based on marginal distribution)
+ all_values = ratings_matrix[~np.isnan(ratings_matrix)]
+ if len(all_values) < 2:
+ return np.nan
+
+ expected_disagreement = 0
+ n_total_pairs = 0
+ for i in range(len(all_values)):
+ for j in range(i + 1, len(all_values)):
+ expected_disagreement += (all_values[i] - all_values[j]) ** 2
+ n_total_pairs += 1
+
+ if n_total_pairs == 0:
+ return np.nan
+
+ expected_disagreement /= n_total_pairs
+
+ if expected_disagreement == 0:
+ return 1.0
+
+ alpha = 1 - (observed_disagreement / expected_disagreement)
+ return alpha
+
+
+def calculate_icc(ratings_matrix: np.ndarray) -> tuple[float, float, float]:
+ """
+ Calculate Intraclass Correlation Coefficient (ICC(2,1)).
+
+ Args:
+ ratings_matrix: 2D array where rows are items and columns are raters.
+
+ Returns:
+ Tuple of (ICC, lower_bound, upper_bound)
+ """
+ # Remove rows with any NaN
+ valid_rows = ~np.any(np.isnan(ratings_matrix), axis=1)
+ ratings_matrix = ratings_matrix[valid_rows]
+
+ if ratings_matrix.shape[0] < 2 or ratings_matrix.shape[1] < 2:
+ return np.nan, np.nan, np.nan
+
+ n, k = ratings_matrix.shape
+
+ # Grand mean
+ grand_mean = np.mean(ratings_matrix)
+
+ # Row means (item means)
+ row_means = np.mean(ratings_matrix, axis=1)
+
+ # Column means (rater means)
+ col_means = np.mean(ratings_matrix, axis=0)
+
+ # Sum of squares
+ ss_total = np.sum((ratings_matrix - grand_mean) ** 2)
+ ss_rows = k * np.sum((row_means - grand_mean) ** 2)
+ ss_cols = n * np.sum((col_means - grand_mean) ** 2)
+ ss_error = ss_total - ss_rows - ss_cols
+
+ # Mean squares
+ ms_rows = ss_rows / (n - 1) if n > 1 else 0
+ ms_cols = ss_cols / (k - 1) if k > 1 else 0
+ ms_error = ss_error / ((n - 1) * (k - 1)) if (n > 1 and k > 1) else 0
+
+    # ICC(2,1) - two-way random, absolute agreement, single rater
+    denominator = ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
+    if denominator == 0:
+        return np.nan, np.nan, np.nan
+
+    icc = (ms_rows - ms_error) / denominator
+
+ # Confidence interval (approximate)
+ # Using F distribution
+ df1 = n - 1
+ df2 = (n - 1) * (k - 1)
+
+ if ms_error == 0:
+ return icc, np.nan, np.nan
+
+ f_value = ms_rows / ms_error
+ f_lower = f_value / stats.f.ppf(0.975, df1, df2)
+ f_upper = f_value / stats.f.ppf(0.025, df1, df2)
+
+ icc_lower = (f_lower - 1) / (f_lower + k - 1)
+ icc_upper = (f_upper - 1) / (f_upper + k - 1)
+
+ return icc, icc_lower, icc_upper
+
+
+def analyze_ratings():
+ """Main analysis function."""
+ print("=" * 60)
+ print("CREATIVE IDEA ASSESSMENT ANALYSIS")
+ print("=" * 60)
+ print()
+
+ # Load data
+ assessment_data = load_assessment_data()
+ ratings = load_ratings_from_db()
+ idea_lookup = build_idea_lookup(assessment_data)
+
+ if not ratings:
+ print("No ratings found in database.")
+ return
+
+ print(f"Loaded {len(ratings)} ratings from database")
+ print(f"Experiment ID: {assessment_data['experiment_id']}")
+ print()
+
+ # Get unique raters
+ raters = list(set(r['rater_id'] for r in ratings))
+ print(f"Raters: {raters}")
+ print()
+
+ # Join ratings with metadata
+ enriched_ratings = []
+ for r in ratings:
+ idea_meta = idea_lookup.get(r['idea_id'], {})
+ enriched_ratings.append({
+ **r,
+ 'condition': idea_meta.get('condition', 'unknown'),
+ 'expert_name': idea_meta.get('expert_name', ''),
+ 'keyword': idea_meta.get('keyword', ''),
+ 'query_text': idea_meta.get('query_text', ''),
+ 'idea_text': idea_meta.get('text', '')
+ })
+
+ # Dimensions
+ dimensions = ['originality', 'elaboration', 'coherence', 'usefulness']
+
+ # ================================
+ # Inter-rater reliability
+ # ================================
+ print("-" * 60)
+ print("INTER-RATER RELIABILITY")
+ print("-" * 60)
+ print()
+
+ if len(raters) >= 2:
+ # Build ratings matrix per dimension
+ idea_ids = list(set(r['idea_id'] for r in enriched_ratings))
+
+ for dim in dimensions:
+ # Create matrix: rows = ideas, cols = raters
+ matrix = np.full((len(idea_ids), len(raters)), np.nan)
+ idea_to_idx = {idea: idx for idx, idea in enumerate(idea_ids)}
+ rater_to_idx = {rater: idx for idx, rater in enumerate(raters)}
+
+ for r in enriched_ratings:
+ if r[dim] is not None:
+ i = idea_to_idx[r['idea_id']]
+ j = rater_to_idx[r['rater_id']]
+ matrix[i, j] = r[dim]
+
+ # Calculate metrics
+ alpha = calculate_krippendorff_alpha(matrix)
+ icc, icc_low, icc_high = calculate_icc(matrix)
+
+ print(f"{dim.upper()}:")
+ print(f" Krippendorff's alpha: {alpha:.3f}")
+ print(f" ICC(2,1): {icc:.3f} (95% CI: {icc_low:.3f} - {icc_high:.3f})")
+ print()
+ else:
+ print("Need at least 2 raters for inter-rater reliability analysis.")
+ print()
+
+ # ================================
+ # Condition comparisons
+ # ================================
+ print("-" * 60)
+ print("MEAN RATINGS BY CONDITION")
+ print("-" * 60)
+ print()
+
+ # Group ratings by condition
+ condition_ratings: dict[str, dict[str, list[int]]] = defaultdict(lambda: defaultdict(list))
+
+ for r in enriched_ratings:
+ condition = r['condition']
+ for dim in dimensions:
+ if r[dim] is not None:
+ condition_ratings[condition][dim].append(r[dim])
+
+ # Calculate means and print
+ condition_stats = {}
+ for condition in sorted(condition_ratings.keys()):
+ print(f"\n{condition}:")
+ condition_stats[condition] = {}
+ for dim in dimensions:
+ values = condition_ratings[condition][dim]
+ if values:
+ mean = np.mean(values)
+ std = np.std(values)
+ n = len(values)
+ condition_stats[condition][dim] = {'mean': mean, 'std': std, 'n': n}
+ print(f" {dim}: {mean:.2f} (SD={std:.2f}, n={n})")
+ else:
+ print(f" {dim}: no data")
+
+ # ================================
+ # Statistical comparisons
+ # ================================
+ print()
+ print("-" * 60)
+ print("STATISTICAL COMPARISONS (Kruskal-Wallis)")
+ print("-" * 60)
+ print()
+
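+    # Kruskal-Wallis is an omnibus test: a significant H indicates that at
+    # least one condition differs, but not which pair (post-hoc tests such as
+    # Dunn's would be needed to localize the difference).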
+ conditions = sorted(condition_ratings.keys())
+ if len(conditions) >= 2:
+ for dim in dimensions:
+ groups = [condition_ratings[c][dim] for c in conditions if condition_ratings[c][dim]]
+ if len(groups) >= 2:
+ h_stat, p_value = stats.kruskal(*groups)
+ sig = "*" if p_value < 0.05 else ""
+ print(f"{dim}: H={h_stat:.2f}, p={p_value:.4f} {sig}")
+ else:
+ print(f"{dim}: insufficient data for comparison")
+ else:
+ print("Need at least 2 conditions with data for statistical comparison.")
+
+ # ================================
+ # Export results
+ # ================================
+ output = {
+ 'analysis_timestamp': datetime.utcnow().isoformat(),
+ 'experiment_id': assessment_data['experiment_id'],
+ 'total_ratings': len(ratings),
+ 'raters': raters,
+ 'rater_count': len(raters),
+ 'condition_stats': condition_stats,
+ 'enriched_ratings': enriched_ratings
+ }
+
+ output_path = RESULTS_DIR / 'analysis_results.json'
+ with open(output_path, 'w', encoding='utf-8') as f:
+ json.dump(output, f, ensure_ascii=False, indent=2, default=str)
+
+ print()
+ print("-" * 60)
+ print(f"Results exported to: {output_path}")
+ print("=" * 60)
+
+
+if __name__ == '__main__':
+ analyze_ratings()
diff --git a/experiments/assessment/backend/__init__.py b/experiments/assessment/backend/__init__.py
new file mode 100644
index 0000000..c0f963c
--- /dev/null
+++ b/experiments/assessment/backend/__init__.py
@@ -0,0 +1 @@
+"""Assessment backend package."""
diff --git a/experiments/assessment/backend/app.py b/experiments/assessment/backend/app.py
new file mode 100644
index 0000000..4a95f6a
--- /dev/null
+++ b/experiments/assessment/backend/app.py
@@ -0,0 +1,374 @@
+"""
+FastAPI backend for human assessment of creative ideas.
+"""
+
+import json
+from datetime import datetime
+from pathlib import Path
+from typing import Any
+
+from fastapi import FastAPI, HTTPException
+from fastapi.middleware.cors import CORSMiddleware
+
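+# Support running both as a package (uvicorn backend.app:app) and directly
+# from inside backend/ (uvicorn app:app)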
+try:
+ from . import database as db
+ from .models import (
+ DIMENSION_DEFINITIONS,
+ ExportData,
+ ExportRating,
+ IdeaForRating,
+ Progress,
+ QueryInfo,
+ QueryWithIdeas,
+ Rater,
+ RaterCreate,
+ RaterProgress,
+ Rating,
+ RatingSubmit,
+ Statistics,
+ )
+except ImportError:
+ import database as db
+ from models import (
+ DIMENSION_DEFINITIONS,
+ ExportData,
+ ExportRating,
+ IdeaForRating,
+ Progress,
+ QueryInfo,
+ QueryWithIdeas,
+ Rater,
+ RaterCreate,
+ RaterProgress,
+ Rating,
+ RatingSubmit,
+ Statistics,
+ )
+
+
+# Load assessment data
+DATA_PATH = Path(__file__).parent.parent / 'data' / 'assessment_items.json'
+
+
+def load_assessment_data() -> dict[str, Any]:
+ """Load the assessment items data."""
+ if not DATA_PATH.exists():
+ raise RuntimeError(f"Assessment data not found at {DATA_PATH}. Run prepare_data.py first.")
+ with open(DATA_PATH, 'r', encoding='utf-8') as f:
+ return json.load(f)
+
+
+# Initialize FastAPI app
+app = FastAPI(
+ title="Creative Idea Assessment API",
+ description="API for human assessment of creative ideas using Torrance-inspired metrics",
+ version="1.0.0"
+)
+
+# CORS middleware
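+# Note: browsers reject wildcard origins for credentialed requests, so this
+# permissive setup assumes a local, same-machine assessment session.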
+app.add_middleware(
+ CORSMiddleware,
+ allow_origins=["*"],
+ allow_credentials=True,
+ allow_methods=["*"],
+ allow_headers=["*"],
+)
+
+
+# Cache for assessment data
+_assessment_data: dict[str, Any] | None = None
+
+
+def get_assessment_data() -> dict[str, Any]:
+ """Get cached assessment data."""
+ global _assessment_data
+ if _assessment_data is None:
+ _assessment_data = load_assessment_data()
+ return _assessment_data
+
+
+# Rater endpoints
+@app.get("/api/raters", response_model=list[Rater])
+def list_raters() -> list[dict[str, Any]]:
+ """List all registered raters."""
+ return db.list_raters()
+
+
+@app.post("/api/raters", response_model=Rater)
+def create_or_get_rater(rater_data: RaterCreate) -> dict[str, Any]:
+ """Register a new rater or get existing one."""
+ return db.create_rater(rater_data.rater_id, rater_data.name)
+
+
+@app.get("/api/raters/{rater_id}", response_model=Rater)
+def get_rater(rater_id: str) -> dict[str, Any]:
+ """Get a specific rater."""
+ rater = db.get_rater(rater_id)
+ if not rater:
+ raise HTTPException(status_code=404, detail="Rater not found")
+ return rater
+
+
+# Query endpoints
+@app.get("/api/queries", response_model=list[QueryInfo])
+def list_queries() -> list[dict[str, Any]]:
+ """List all queries available for assessment."""
+ data = get_assessment_data()
+ return [
+ {
+ 'query_id': q['query_id'],
+ 'query_text': q['query_text'],
+ 'category': q.get('category', ''),
+ 'idea_count': q['idea_count']
+ }
+ for q in data['queries']
+ ]
+
+
+@app.get("/api/queries/{query_id}", response_model=QueryWithIdeas)
+def get_query_with_ideas(query_id: str) -> QueryWithIdeas:
+ """Get a query with all its ideas for rating (without hidden metadata)."""
+ data = get_assessment_data()
+
+ for query in data['queries']:
+ if query['query_id'] == query_id:
+ ideas = [
+ IdeaForRating(
+ idea_id=idea['idea_id'],
+ text=idea['text'],
+ index=idx
+ )
+ for idx, idea in enumerate(query['ideas'])
+ ]
+ return QueryWithIdeas(
+ query_id=query['query_id'],
+ query_text=query['query_text'],
+ category=query.get('category', ''),
+ ideas=ideas,
+ total_count=len(ideas)
+ )
+
+ raise HTTPException(status_code=404, detail="Query not found")
+
+
+@app.get("/api/queries/{query_id}/unrated", response_model=QueryWithIdeas)
+def get_unrated_ideas(query_id: str, rater_id: str) -> QueryWithIdeas:
+ """Get unrated ideas for a query by a specific rater."""
+ data = get_assessment_data()
+
+ for query in data['queries']:
+ if query['query_id'] == query_id:
+ # Get already rated idea IDs
+ rated_ids = db.get_rated_idea_ids(rater_id, query_id)
+
+ # Filter to unrated ideas
+ unrated_ideas = [
+ IdeaForRating(
+ idea_id=idea['idea_id'],
+ text=idea['text'],
+ index=idx
+ )
+ for idx, idea in enumerate(query['ideas'])
+ if idea['idea_id'] not in rated_ids
+ ]
+
+ return QueryWithIdeas(
+ query_id=query['query_id'],
+ query_text=query['query_text'],
+ category=query.get('category', ''),
+ ideas=unrated_ideas,
+ total_count=query['idea_count']
+ )
+
+ raise HTTPException(status_code=404, detail="Query not found")
+
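+# Illustrative request (assuming uvicorn's default port 8000):
+#   curl "http://localhost:8000/api/queries/A1/unrated?rater_id=r1"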
+
+# Rating endpoints
+@app.post("/api/ratings", response_model=dict[str, Any])
+def submit_rating(rating: RatingSubmit) -> dict[str, Any]:
+ """Submit a rating for an idea."""
+ # Validate that rater exists
+ rater = db.get_rater(rating.rater_id)
+ if not rater:
+ raise HTTPException(status_code=404, detail="Rater not found. Please register first.")
+
+ # Validate idea exists
+ data = get_assessment_data()
+ idea_found = False
+ for query in data['queries']:
+ for idea in query['ideas']:
+ if idea['idea_id'] == rating.idea_id:
+ idea_found = True
+ break
+ if idea_found:
+ break
+
+ if not idea_found:
+ raise HTTPException(status_code=404, detail="Idea not found")
+
+ # If not skipped, require all ratings
+ if not rating.skipped:
+        if any(
+            score is None
+            for score in (rating.originality, rating.elaboration,
+                          rating.coherence, rating.usefulness)
+        ):
+ raise HTTPException(
+ status_code=400,
+ detail="All dimensions must be rated unless skipping"
+ )
+
+ # Save rating
+ return db.save_rating(
+ rater_id=rating.rater_id,
+ idea_id=rating.idea_id,
+ query_id=rating.query_id,
+ originality=rating.originality,
+ elaboration=rating.elaboration,
+ coherence=rating.coherence,
+ usefulness=rating.usefulness,
+ skipped=rating.skipped
+ )
+
+
+@app.get("/api/ratings/{rater_id}/{idea_id}", response_model=Rating | None)
+def get_rating(rater_id: str, idea_id: str) -> dict[str, Any] | None:
+ """Get a specific rating."""
+ return db.get_rating(rater_id, idea_id)
+
+
+@app.get("/api/ratings/rater/{rater_id}", response_model=list[Rating])
+def get_ratings_by_rater(rater_id: str) -> list[dict[str, Any]]:
+ """Get all ratings by a rater."""
+ return db.get_ratings_by_rater(rater_id)
+
+
+# Progress endpoints
+@app.get("/api/progress/{rater_id}", response_model=RaterProgress)
+def get_rater_progress(rater_id: str) -> RaterProgress:
+ """Get complete progress for a rater."""
+ rater = db.get_rater(rater_id)
+ if not rater:
+ raise HTTPException(status_code=404, detail="Rater not found")
+
+ data = get_assessment_data()
+
+ # Get rated idea counts per query
+ ratings = db.get_ratings_by_rater(rater_id)
+ ratings_per_query: dict[str, int] = {}
+ for r in ratings:
+ qid = r['query_id']
+ ratings_per_query[qid] = ratings_per_query.get(qid, 0) + 1
+
+ # Build progress list
+ query_progress = []
+ total_completed = 0
+ total_ideas = 0
+
+ for query in data['queries']:
+ qid = query['query_id']
+ completed = ratings_per_query.get(qid, 0)
+ total = query['idea_count']
+
+ query_progress.append(Progress(
+ rater_id=rater_id,
+ query_id=qid,
+ completed_count=completed,
+ total_count=total
+ ))
+
+ total_completed += completed
+ total_ideas += total
+
+ percentage = (total_completed / total_ideas * 100) if total_ideas > 0 else 0
+
+ return RaterProgress(
+ rater_id=rater_id,
+ queries=query_progress,
+ total_completed=total_completed,
+ total_ideas=total_ideas,
+ percentage=round(percentage, 1)
+ )
+
+
+# Statistics endpoint
+@app.get("/api/statistics", response_model=Statistics)
+def get_statistics() -> Statistics:
+ """Get overall assessment statistics."""
+ stats = db.get_statistics()
+ return Statistics(**stats)
+
+
+# Dimension definitions endpoint
+@app.get("/api/dimensions")
+def get_dimensions() -> dict[str, Any]:
+ """Get dimension definitions for the UI."""
+ return DIMENSION_DEFINITIONS
+
+
+# Export endpoint
+@app.get("/api/export", response_model=ExportData)
+def export_ratings() -> ExportData:
+ """Export all ratings with hidden metadata for analysis."""
+ data = get_assessment_data()
+ all_ratings = db.get_all_ratings()
+
+ # Build idea lookup with hidden metadata
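+    # Raters never see the _hidden block (condition / expert / keyword);
+    # it is re-joined here so the analysis can unblind the ratings.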
+ idea_lookup: dict[str, dict[str, Any]] = {}
+ query_lookup: dict[str, str] = {}
+
+ for query in data['queries']:
+ query_lookup[query['query_id']] = query['query_text']
+ for idea in query['ideas']:
+ idea_lookup[idea['idea_id']] = {
+ 'text': idea['text'],
+ 'condition': idea['_hidden']['condition'],
+ 'expert_name': idea['_hidden']['expert_name'],
+ 'keyword': idea['_hidden']['keyword']
+ }
+
+ # Build export ratings
+ export_ratings = []
+ for r in all_ratings:
+ idea_data = idea_lookup.get(r['idea_id'], {})
+ export_ratings.append(ExportRating(
+ rater_id=r['rater_id'],
+ idea_id=r['idea_id'],
+ query_id=r['query_id'],
+ query_text=query_lookup.get(r['query_id'], ''),
+ idea_text=idea_data.get('text', ''),
+ originality=r['originality'],
+ elaboration=r['elaboration'],
+ coherence=r['coherence'],
+ usefulness=r['usefulness'],
+ skipped=bool(r['skipped']),
+ condition=idea_data.get('condition', ''),
+ expert_name=idea_data.get('expert_name', ''),
+ keyword=idea_data.get('keyword', ''),
+ timestamp=r['timestamp']
+ ))
+
+ return ExportData(
+ experiment_id=data['experiment_id'],
+ export_timestamp=datetime.utcnow(),
+ rater_count=len(db.list_raters()),
+ rating_count=len(export_ratings),
+ ratings=export_ratings
+ )
+
+
+# Health check
+@app.get("/api/health")
+def health_check() -> dict[str, str]:
+ """Health check endpoint."""
+ return {"status": "healthy"}
+
+
+# Info endpoint
+@app.get("/api/info")
+def get_info() -> dict[str, Any]:
+ """Get assessment session info."""
+ data = get_assessment_data()
+ return {
+ 'experiment_id': data['experiment_id'],
+ 'total_ideas': data['total_ideas'],
+ 'query_count': data['query_count'],
+ 'conditions': data['conditions'],
+ 'randomization_seed': data['randomization_seed']
+ }
diff --git a/experiments/assessment/backend/database.py b/experiments/assessment/backend/database.py
new file mode 100644
index 0000000..62f2f92
--- /dev/null
+++ b/experiments/assessment/backend/database.py
@@ -0,0 +1,309 @@
+"""
+SQLite database setup and operations for assessment ratings storage.
+"""
+
+import sqlite3
+from contextlib import contextmanager
+from datetime import datetime
+from pathlib import Path
+from typing import Any, Generator
+
+
+# Database path
+DB_PATH = Path(__file__).parent.parent / 'results' / 'ratings.db'
+
+
+def get_db_path() -> Path:
+ """Get the database path, ensuring directory exists."""
+ DB_PATH.parent.mkdir(parents=True, exist_ok=True)
+ return DB_PATH
+
+
+@contextmanager
+def get_connection() -> Generator[sqlite3.Connection, None, None]:
+ """Get a database connection as a context manager."""
+ conn = sqlite3.connect(get_db_path())
+ conn.row_factory = sqlite3.Row
+ try:
+ yield conn
+ finally:
+ conn.close()
+
+
+def init_db() -> None:
+ """Initialize the database with required tables."""
+ with get_connection() as conn:
+ cursor = conn.cursor()
+
+ # Raters table
+ cursor.execute('''
+ CREATE TABLE IF NOT EXISTS raters (
+ rater_id TEXT PRIMARY KEY,
+ name TEXT,
+ created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
+ )
+ ''')
+
+ # Ratings table
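+        # Dimension scores are nullable so skipped items can be stored; in
+        # SQLite a NULL CHECK result does not violate the constraint.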
+ cursor.execute('''
+ CREATE TABLE IF NOT EXISTS ratings (
+ id INTEGER PRIMARY KEY AUTOINCREMENT,
+ rater_id TEXT NOT NULL,
+ idea_id TEXT NOT NULL,
+ query_id TEXT NOT NULL,
+ originality INTEGER CHECK(originality BETWEEN 1 AND 5),
+ elaboration INTEGER CHECK(elaboration BETWEEN 1 AND 5),
+ coherence INTEGER CHECK(coherence BETWEEN 1 AND 5),
+ usefulness INTEGER CHECK(usefulness BETWEEN 1 AND 5),
+ skipped INTEGER DEFAULT 0,
+ timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+ FOREIGN KEY (rater_id) REFERENCES raters(rater_id),
+ UNIQUE(rater_id, idea_id)
+ )
+ ''')
+
+ # Progress table
+ cursor.execute('''
+ CREATE TABLE IF NOT EXISTS progress (
+ rater_id TEXT NOT NULL,
+ query_id TEXT NOT NULL,
+ completed_count INTEGER DEFAULT 0,
+ total_count INTEGER DEFAULT 0,
+ started_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+ updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+ PRIMARY KEY (rater_id, query_id),
+ FOREIGN KEY (rater_id) REFERENCES raters(rater_id)
+ )
+ ''')
+
+ # Create indexes for common queries
+ cursor.execute('''
+ CREATE INDEX IF NOT EXISTS idx_ratings_rater
+ ON ratings(rater_id)
+ ''')
+ cursor.execute('''
+ CREATE INDEX IF NOT EXISTS idx_ratings_idea
+ ON ratings(idea_id)
+ ''')
+
+ conn.commit()
+
+
+# Rater operations
+def create_rater(rater_id: str, name: str | None = None) -> dict[str, Any]:
+ """Create a new rater."""
+ with get_connection() as conn:
+ cursor = conn.cursor()
+ try:
+ cursor.execute(
+ 'INSERT INTO raters (rater_id, name) VALUES (?, ?)',
+ (rater_id, name or rater_id)
+ )
+ conn.commit()
+ return {'rater_id': rater_id, 'name': name or rater_id, 'created': True}
+ except sqlite3.IntegrityError:
+ # Rater already exists
+ return get_rater(rater_id)
+
+
+def get_rater(rater_id: str) -> dict[str, Any] | None:
+ """Get a rater by ID."""
+ with get_connection() as conn:
+ cursor = conn.cursor()
+ cursor.execute('SELECT * FROM raters WHERE rater_id = ?', (rater_id,))
+ row = cursor.fetchone()
+ if row:
+ return dict(row)
+ return None
+
+
+def list_raters() -> list[dict[str, Any]]:
+ """List all raters."""
+ with get_connection() as conn:
+ cursor = conn.cursor()
+ cursor.execute('SELECT * FROM raters ORDER BY created_at')
+ return [dict(row) for row in cursor.fetchall()]
+
+
+# Rating operations
+def save_rating(
+ rater_id: str,
+ idea_id: str,
+ query_id: str,
+ originality: int | None,
+ elaboration: int | None,
+ coherence: int | None,
+ usefulness: int | None,
+ skipped: bool = False
+) -> dict[str, Any]:
+ """Save or update a rating."""
+ with get_connection() as conn:
+ cursor = conn.cursor()
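+        # UPSERT (SQLite >= 3.24): re-rating the same idea overwrites the
+        # earlier row via the UNIQUE(rater_id, idea_id) constraint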
+ cursor.execute('''
+ INSERT INTO ratings (rater_id, idea_id, query_id, originality, elaboration, coherence, usefulness, skipped, timestamp)
+ VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
+ ON CONFLICT(rater_id, idea_id) DO UPDATE SET
+ originality = excluded.originality,
+ elaboration = excluded.elaboration,
+ coherence = excluded.coherence,
+ usefulness = excluded.usefulness,
+ skipped = excluded.skipped,
+ timestamp = excluded.timestamp
+        ''', (rater_id, idea_id, query_id, originality, elaboration, coherence, usefulness, int(skipped), datetime.utcnow().isoformat(sep=' ')))
+ conn.commit()
+
+ # Update progress
+ update_progress(rater_id, query_id)
+
+ return {
+ 'rater_id': rater_id,
+ 'idea_id': idea_id,
+ 'saved': True
+ }
+
+
+def get_rating(rater_id: str, idea_id: str) -> dict[str, Any] | None:
+ """Get a specific rating."""
+ with get_connection() as conn:
+ cursor = conn.cursor()
+ cursor.execute(
+ 'SELECT * FROM ratings WHERE rater_id = ? AND idea_id = ?',
+ (rater_id, idea_id)
+ )
+ row = cursor.fetchone()
+ if row:
+ return dict(row)
+ return None
+
+
+def get_ratings_by_rater(rater_id: str) -> list[dict[str, Any]]:
+ """Get all ratings by a rater."""
+ with get_connection() as conn:
+ cursor = conn.cursor()
+ cursor.execute(
+ 'SELECT * FROM ratings WHERE rater_id = ? ORDER BY timestamp',
+ (rater_id,)
+ )
+ return [dict(row) for row in cursor.fetchall()]
+
+
+def get_ratings_by_idea(idea_id: str) -> list[dict[str, Any]]:
+ """Get all ratings for an idea."""
+ with get_connection() as conn:
+ cursor = conn.cursor()
+ cursor.execute(
+ 'SELECT * FROM ratings WHERE idea_id = ? ORDER BY rater_id',
+ (idea_id,)
+ )
+ return [dict(row) for row in cursor.fetchall()]
+
+
+def get_all_ratings() -> list[dict[str, Any]]:
+ """Get all ratings."""
+ with get_connection() as conn:
+ cursor = conn.cursor()
+ cursor.execute('SELECT * FROM ratings ORDER BY timestamp')
+ return [dict(row) for row in cursor.fetchall()]
+
+
+# Progress operations
+def update_progress(rater_id: str, query_id: str) -> None:
+ """Update progress for a rater on a query."""
+ with get_connection() as conn:
+ cursor = conn.cursor()
+
+ # Count completed ratings for this query
+ cursor.execute('''
+ SELECT COUNT(*) as count FROM ratings
+ WHERE rater_id = ? AND query_id = ?
+ ''', (rater_id, query_id))
+ completed = cursor.fetchone()['count']
+
+ # Update or insert progress
+ cursor.execute('''
+ INSERT INTO progress (rater_id, query_id, completed_count, updated_at)
+ VALUES (?, ?, ?, ?)
+ ON CONFLICT(rater_id, query_id) DO UPDATE SET
+ completed_count = excluded.completed_count,
+ updated_at = excluded.updated_at
+        ''', (rater_id, query_id, completed, datetime.utcnow().isoformat(sep=' ')))
+ conn.commit()
+
+
+def set_progress_total(rater_id: str, query_id: str, total: int) -> None:
+ """Set the total count for a query's progress."""
+ with get_connection() as conn:
+ cursor = conn.cursor()
+ cursor.execute('''
+ INSERT INTO progress (rater_id, query_id, total_count, completed_count)
+ VALUES (?, ?, ?, 0)
+ ON CONFLICT(rater_id, query_id) DO UPDATE SET
+ total_count = excluded.total_count
+ ''', (rater_id, query_id, total))
+ conn.commit()
+
+
+def get_progress(rater_id: str) -> list[dict[str, Any]]:
+ """Get progress for all queries for a rater."""
+ with get_connection() as conn:
+ cursor = conn.cursor()
+ cursor.execute(
+ 'SELECT * FROM progress WHERE rater_id = ? ORDER BY query_id',
+ (rater_id,)
+ )
+ return [dict(row) for row in cursor.fetchall()]
+
+
+def get_progress_for_query(rater_id: str, query_id: str) -> dict[str, Any] | None:
+ """Get progress for a specific query."""
+ with get_connection() as conn:
+ cursor = conn.cursor()
+ cursor.execute(
+ 'SELECT * FROM progress WHERE rater_id = ? AND query_id = ?',
+ (rater_id, query_id)
+ )
+ row = cursor.fetchone()
+ if row:
+ return dict(row)
+ return None
+
+
+def get_rated_idea_ids(rater_id: str, query_id: str) -> set[str]:
+ """Get the set of idea IDs already rated by a rater for a query."""
+ with get_connection() as conn:
+ cursor = conn.cursor()
+ cursor.execute(
+ 'SELECT idea_id FROM ratings WHERE rater_id = ? AND query_id = ?',
+ (rater_id, query_id)
+ )
+ return {row['idea_id'] for row in cursor.fetchall()}
+
+
+# Statistics
+def get_statistics() -> dict[str, Any]:
+ """Get overall statistics."""
+ with get_connection() as conn:
+ cursor = conn.cursor()
+
+ cursor.execute('SELECT COUNT(*) as count FROM raters')
+ rater_count = cursor.fetchone()['count']
+
+ cursor.execute('SELECT COUNT(*) as count FROM ratings WHERE skipped = 0')
+ rating_count = cursor.fetchone()['count']
+
+ cursor.execute('SELECT COUNT(*) as count FROM ratings WHERE skipped = 1')
+ skip_count = cursor.fetchone()['count']
+
+ cursor.execute('SELECT COUNT(DISTINCT idea_id) as count FROM ratings')
+ rated_ideas = cursor.fetchone()['count']
+
+ return {
+ 'rater_count': rater_count,
+ 'rating_count': rating_count,
+ 'skip_count': skip_count,
+ 'rated_ideas': rated_ideas
+ }
+
+
+# Initialize on import
+init_db()
diff --git a/experiments/assessment/backend/models.py b/experiments/assessment/backend/models.py
new file mode 100644
index 0000000..681fdf6
--- /dev/null
+++ b/experiments/assessment/backend/models.py
@@ -0,0 +1,183 @@
+"""
+Pydantic models for the assessment API.
+"""
+
+from datetime import datetime
+from pydantic import BaseModel, Field
+
+
+# Request models
+class RaterCreate(BaseModel):
+ """Request to create or login as a rater."""
+ rater_id: str = Field(..., min_length=1, max_length=50, description="Unique rater identifier")
+ name: str | None = Field(None, max_length=100, description="Optional display name")
+
+
+class RatingSubmit(BaseModel):
+ """Request to submit a rating."""
+ rater_id: str = Field(..., description="Rater identifier")
+ idea_id: str = Field(..., description="Idea identifier")
+ query_id: str = Field(..., description="Query identifier")
+ originality: int | None = Field(None, ge=1, le=5, description="Originality score 1-5")
+ elaboration: int | None = Field(None, ge=1, le=5, description="Elaboration score 1-5")
+ coherence: int | None = Field(None, ge=1, le=5, description="Coherence score 1-5")
+ usefulness: int | None = Field(None, ge=1, le=5, description="Usefulness score 1-5")
+ skipped: bool = Field(False, description="Whether the idea was skipped")
+
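+# Illustrative POST /api/ratings payload (IDs taken from the sample data):
+#   {"rater_id": "r1", "idea_id": "A1_I007", "query_id": "A1",
+#    "originality": 4, "elaboration": 3, "coherence": 5,
+#    "usefulness": 4, "skipped": false}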
+
+# Response models
+class Rater(BaseModel):
+ """Rater information."""
+ rater_id: str
+ name: str | None
+ created_at: datetime | None = None
+
+
+class Rating(BaseModel):
+ """A single rating."""
+ id: int
+ rater_id: str
+ idea_id: str
+ query_id: str
+ originality: int | None
+ elaboration: int | None
+ coherence: int | None
+ usefulness: int | None
+ skipped: int
+ timestamp: datetime | None
+
+
+class Progress(BaseModel):
+ """Progress for a rater on a query."""
+ rater_id: str
+ query_id: str
+ completed_count: int
+ total_count: int
+ started_at: datetime | None = None
+ updated_at: datetime | None = None
+
+
+class QueryInfo(BaseModel):
+ """Information about a query."""
+ query_id: str
+ query_text: str
+ category: str
+ idea_count: int
+
+
+class IdeaForRating(BaseModel):
+ """An idea presented for rating (without hidden metadata)."""
+ idea_id: str
+ text: str
+ index: int # Position in the randomized list for this query
+
+
+class QueryWithIdeas(BaseModel):
+ """A query with its ideas for rating."""
+ query_id: str
+ query_text: str
+ category: str
+ ideas: list[IdeaForRating]
+ total_count: int
+
+
+class Statistics(BaseModel):
+ """Overall statistics."""
+ rater_count: int
+ rating_count: int
+ skip_count: int
+ rated_ideas: int
+
+
+class RaterProgress(BaseModel):
+ """Complete progress summary for a rater."""
+ rater_id: str
+ queries: list[Progress]
+ total_completed: int
+ total_ideas: int
+ percentage: float
+
+
+# Export response models
+class ExportRating(BaseModel):
+ """Rating with hidden metadata for export."""
+ rater_id: str
+ idea_id: str
+ query_id: str
+ query_text: str
+ idea_text: str
+ originality: int | None
+ elaboration: int | None
+ coherence: int | None
+ usefulness: int | None
+ skipped: bool
+ condition: str
+ expert_name: str
+ keyword: str
+ timestamp: datetime | None
+
+
+class ExportData(BaseModel):
+ """Full export data structure."""
+ experiment_id: str
+ export_timestamp: datetime
+ rater_count: int
+ rating_count: int
+ ratings: list[ExportRating]
+
+
+# Dimension definitions (for frontend)
+DIMENSION_DEFINITIONS = {
+ "originality": {
+ "name": "Originality",
+ "question": "How unexpected or surprising is this idea? Would most people NOT think of this?",
+ "scale": {
+ 1: "Very common/obvious idea anyone would suggest",
+ 2: "Somewhat common, slight variation on expected ideas",
+ 3: "Moderately original, some unexpected elements",
+ 4: "Quite original, notably different approach",
+ 5: "Highly unexpected, truly novel concept"
+ },
+ "low_label": "Common",
+ "high_label": "Unexpected"
+ },
+ "elaboration": {
+ "name": "Elaboration",
+ "question": "How detailed and well-developed is this idea?",
+ "scale": {
+ 1: "Vague, minimal detail, just a concept",
+ 2: "Basic idea with little specificity",
+ 3: "Moderately detailed, some specifics provided",
+ 4: "Well-developed with clear implementation hints",
+ 5: "Highly specific, thoroughly developed concept"
+ },
+ "low_label": "Vague",
+ "high_label": "Detailed"
+ },
+ "coherence": {
+ "name": "Coherence",
+ "question": "Does this idea make logical sense and relate to the query object?",
+ "scale": {
+ 1: "Nonsensical, irrelevant, or incomprehensible",
+ 2: "Mostly unclear, weak connection to query",
+ 3: "Partially coherent, some logical gaps",
+ 4: "Mostly coherent with minor issues",
+ 5: "Fully coherent, clearly relates to query"
+ },
+ "low_label": "Nonsense",
+ "high_label": "Coherent"
+ },
+ "usefulness": {
+ "name": "Usefulness",
+ "question": "Could this idea have practical value or inspire real innovation?",
+ "scale": {
+ 1: "No practical value whatsoever",
+ 2: "Minimal usefulness, highly impractical",
+ 3: "Some potential value with major limitations",
+ 4: "Useful idea with realistic applications",
+ 5: "Highly useful, clear practical value"
+ },
+ "low_label": "Useless",
+ "high_label": "Useful"
+ }
+}
diff --git a/experiments/assessment/backend/requirements.txt b/experiments/assessment/backend/requirements.txt
new file mode 100644
index 0000000..fe012dc
--- /dev/null
+++ b/experiments/assessment/backend/requirements.txt
@@ -0,0 +1,3 @@
+fastapi>=0.109.0
+uvicorn>=0.27.0
+pydantic>=2.5.0
diff --git a/experiments/assessment/data/assessment_items.json b/experiments/assessment/data/assessment_items.json
new file mode 100644
index 0000000..e8fe3be
--- /dev/null
+++ b/experiments/assessment/data/assessment_items.json
@@ -0,0 +1,1832 @@
+{
+ "experiment_id": "20260119_165650",
+ "queries": [
+ {
+ "query_id": "A1",
+ "query_text": "Chair",
+ "category": "everyday",
+ "ideas": [
+ {
+ "idea_id": "A1_I039",
+ "text": "A chair that transforms into a floating cloud-like structure, offering a unique resting experience by mimicking the sensation of weightlessness and comfort through adaptive cushioning technology.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Resting"
+ }
+ },
+ {
+ "idea_id": "A1_I007",
+ "text": "Chair with adaptive cushioning that changes firmness based on user weight and posture.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "A1_I030",
+ "text": "Develop a lightweight, durable chair made from recycled materials to reduce environmental impact and support sustainable transportation practices.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Train Operator",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "A1_I043",
+ "text": "Interactive public chairs with built-in sensors that adapt seating based on visitor flow, offering personalized comfort and promoting social distancing through real-time occupancy feedback.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Public places visitors"
+ }
+ },
+ {
+ "idea_id": "A1_I024",
+ "text": "Modular Interlocking Chair: Combines interlocking components for easy assembly and disassembly, mimicking the principles of structural frameworks in architecture.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Structural Engineer",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "A1_I108",
+ "text": "An elephant-themed chair with a seat made from recycled materials, symbolizing the animal's role in environmental conservation, and featuring a subtle trunk-shaped backrest for added comfort.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "elephant",
+ "keyword": "elephant"
+ }
+ },
+ {
+ "idea_id": "A1_I006",
+ "text": "Chair with kinetic energy harvesting to power small electronics while users sit.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "A1_I044",
+ "text": "Create a modular chair frame using interlocking steel rings, allowing users to customize and reinforce the structure for enhanced sturdiness without compromising design flexibility.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Sturdy structure"
+ }
+ },
+ {
+ "idea_id": "A1_I059",
+ "text": "Chair transforms breaktime ambiance with adaptive lighting, ambient sounds, and ergonomic design, creating a rejuvenating oasis for guests between events.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Wedding Planner",
+ "keyword": "breaktime ambiance"
+ }
+ },
+ {
+ "idea_id": "A1_I026",
+ "text": "Integrate ergonomic seating with adjustable lumbar support for long-haul train travel, optimizing comfort and reducing fatigue during extended journeys.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Train Operator",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "A1_I074",
+ "text": "Transform chairs into interactive learning hubs with embedded sensors, projecting historical facts and ergonomic tips, blending comfort with curiosity-driven education in public spaces.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Food Critic",
+ "keyword": "educational engagement"
+ }
+ },
+ {
+ "idea_id": "A1_I036",
+ "text": "A chair with a dynamic weight-support system that adjusts its base width in real-time, providing stability and preventing tipping even when users shift their weight unexpectedly.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Supporting weight"
+ }
+ },
+ {
+ "idea_id": "A1_I095",
+ "text": "A self-regulating chair that adjusts to body heat, inspired by how organisms adapt to their environment for comfort and energy efficiency.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "organic",
+ "keyword": "organic"
+ }
+ },
+ {
+ "idea_id": "A1_I094",
+ "text": "Chairs shaped like tree roots, using sustainable materials and mimicking natural growth patterns for a grounded, earthy feel.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "organic",
+ "keyword": "organic"
+ }
+ },
+ {
+ "idea_id": "A1_I067",
+ "text": "Chair transforms into an interactive guest engagement hub, featuring QR codes for personalized messages, photo booths, and real-time feedback, fostering connection and memorable experiences.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Wedding Planner",
+ "keyword": "guest engagement strategy"
+ }
+ },
+ {
+ "idea_id": "A1_I009",
+ "text": "Chair with integrated smart home controls for lighting, temperature, and entertainment.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "A1_I005",
+ "text": "Solar-powered chair with built-in battery for charging devices and ambient lighting.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "A1_I055",
+ "text": "Chair transforms into a dynamic seating solution with Seating Dynamics, adapting to user preferences and space constraints for personalized comfort and functional flexibility.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Wedding Planner",
+ "keyword": "Seating Dynamics"
+ }
+ },
+ {
+ "idea_id": "A1_I103",
+ "text": "A chair powered by kinetic energy, converting the user's movements into electricity to power the chair and contribute to a sustainable energy grid.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "futuristic",
+ "keyword": "futuristic"
+ }
+ },
+ {
+ "idea_id": "A1_I027",
+ "text": "Design a modular chair that can be reconfigured for different train car configurations, enhancing space efficiency and adaptability in transit environments.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Train Operator",
+ "keyword": ""
+ }
+ }
+ ],
+ "idea_count": 20
+ },
+ {
+ "query_id": "A5",
+ "query_text": "Bicycle",
+ "category": "everyday",
+ "ideas": [
+ {
+ "idea_id": "A5_I044",
+ "text": "Create a student-led bicycle recycling program where teens design upcycled bike parts into art installations, promoting sustainability and fostering a sense of ownership in eco-friendly transportation.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Students"
+ }
+ },
+ {
+ "idea_id": "A5_I005",
+ "text": "Bike Helmet with Heads-Up Display and Safety Sensors",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "A5_I084",
+ "text": "Bicycle weather window optimization: Use real-time weather data to predict ideal riding windows, enhancing safety, efficiency, and enjoyment by aligning rides with favorable conditions.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Meteorologist",
+ "keyword": "weather window"
+ }
+ },
+ {
+ "idea_id": "A5_I031",
+ "text": "Create a performance art event where audiences ride bicycles through a choreographed path to experience a story through movement.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Actor",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "A5_I038",
+ "text": "Develop a bike-sharing system with AI-powered route optimization, dynamically adjusting bike distribution based on real-time demand to enhance urban mobility and reduce traffic congestion.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Transportation"
+ }
+ },
+ {
+ "idea_id": "A5_I099",
+ "text": "EchoRide: A bicycle with a smart system that echoes the rider's movements to nearby bikes, creating a synchronized echo of motion.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "echo",
+ "keyword": "echo"
+ }
+ },
+ {
+ "idea_id": "A5_I077",
+ "text": "Kickstart your day with a morning rush detox: ride your bicycle through urban landscapes, blending physical exercise with mindfulness, breathing in fresh air and exhaling stress, reviving your body and mind for the day ahead.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Content Creator",
+ "keyword": "morning rush detox"
+ }
+ },
+ {
+ "idea_id": "A5_I012",
+ "text": "Bicycle with Integrated Water Bottle Holder and Insulated Compartment",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "A5_I041",
+ "text": "Introduce a bike-sharing program with docking stations near office buildings, offering discounted rates for daily commuters to reduce traffic and promote eco-friendly travel.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Commute to work"
+ }
+ },
+ {
+ "idea_id": "A5_I043",
+ "text": "Develop a modular touring bike frame that adapts to different terrains via interchangeable components, allowing riders to switch between mountain, road, and gravel configurations on the fly for versatile long-distance adventures.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Touring"
+ }
+ },
+ {
+ "idea_id": "A5_I098",
+ "text": "EchoMirror: A bicycle mirror that uses sound waves to create an echo of the surroundings, helping riders navigate without turning their head.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "echo",
+ "keyword": "echo"
+ }
+ },
+ {
+ "idea_id": "A5_I018",
+ "text": "Bike with Automatic Gear Shifting Based on Speed and Terrain",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "A5_I014",
+ "text": "Bicycle with Built-In USB Ports and Wireless Charging Stations",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "A5_I025",
+ "text": "Incorporate a musical interface on the handlebars, allowing riders to compose melodies by adjusting gears and cadence.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Composer",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "A5_I104",
+ "text": "Molten Gear System: A gear mechanism that mimics liquid viscosity, allowing for seamless, fluid gear transitions and enhanced pedaling efficiency.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "liquid",
+ "keyword": "liquid"
+ }
+ },
+ {
+ "idea_id": "A5_I033",
+ "text": "Implement blockchain-based bike ownership tracking to enhance transparency in shared bike systems, reducing fraud and improving asset management through immutable ledger records.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Accountant",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "A5_I059",
+ "text": "Team Fitness Challenge: Compete in relay bike races, where each rider completes a unique obstacle course, fostering teamwork, endurance, and strategic planning for a dynamic, engaging experience.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Sports Coach",
+ "keyword": "Team Fitness Challenge"
+ }
+ },
+ {
+ "idea_id": "A5_I021",
+ "text": "Launch a 'Bicycle Empowerment Workshop' teaching budgeting, maintenance, and safety skills to marginalized communities, empowering self-sufficiency and healthy transportation choices.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Guidance Counselor",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "A5_I093",
+ "text": "Use storm front timing to optimize bicycle routes, avoiding hazardous weather by predicting front arrival, ensuring rider safety and efficiency through real-time meteorological data integration.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Meteorologist",
+ "keyword": "storm front timing"
+ }
+ },
+ {
+ "idea_id": "A5_I102",
+ "text": "Liquefy Handlebars: Ergonomic handlebars shaped like liquid streams, offering adaptive grip and reducing hand fatigue through natural curvature.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "liquid",
+ "keyword": "liquid"
+ }
+ }
+ ],
+ "idea_count": 20
+ },
+ {
+ "query_id": "A7",
+ "query_text": "Smartphone",
+ "category": "everyday",
+ "ideas": [
+ {
+ "idea_id": "A7_I045",
+ "text": "Introduce a 'Photo Memory Lane' feature that uses AI to generate nostalgic photo collages from past snaps, helping users relive memories with a personalized, time-travel-like visual experience.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Taking Photos"
+ }
+ },
+ {
+ "idea_id": "A7_I061",
+ "text": "Mountain Mind Mode: A mental wellness app that simulates mountain meditation, using nature sounds and guided sessions to promote calm and focus.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "mountain",
+ "keyword": "mountain"
+ }
+ },
+ {
+ "idea_id": "A7_I048",
+ "text": "A student-focused smartphone app that transforms study sessions into interactive games, rewarding knowledge retention with real-world perks like discounts on textbooks and campus events.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Students"
+ }
+ },
+ {
+ "idea_id": "A7_I001",
+ "text": "Phone case that uses solar energy to charge the device during outdoor activities.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "A7_I066",
+ "text": "LiquidSync: A synchronization system that uses liquid-like data flow, ensuring seamless and ultra-fast transfer between devices through advanced quantum-liquid protocols.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "liquid",
+ "keyword": "liquid"
+ }
+ },
+ {
+ "idea_id": "A7_I042",
+ "text": "Introduce a 'Scene Memory' feature that learns user preferences, automatically adjusting camera settings for optimal photo quality based on past shots and environmental data.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Photography"
+ }
+ },
+ {
+ "idea_id": "A7_I032",
+ "text": "Implement a modular design allowing users to swap hardware components like cameras and batteries, promoting sustainability and customization.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Tutor",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "A7_I025",
+ "text": "Integrate tactile sculptural elements into the phone's form, allowing users to 'feel' digital content through 3D-printed surface variations that respond to touch.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Sculptor",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "A7_I067",
+ "text": "A smartphone with a built-in metronome that syncs with the user's heartbeat to create personalized rhythms for stress relief and focus.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "rhythm",
+ "keyword": "rhythm"
+ }
+ },
+ {
+ "idea_id": "A7_I038",
+ "text": "Phones with AI-powered ocean prediction models to forecast marine weather and aid in maritime safety.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Oceanographer",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "A7_I065",
+ "text": "Fluid Focus Camera: A camera system that uses liquid lens technology to adjust focus instantaneously, mimicking the way water bends light for ultra-sharp images.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "liquid",
+ "keyword": "liquid"
+ }
+ },
+ {
+ "idea_id": "A7_I005",
+ "text": "Phone that uses augmented reality for interactive shopping experiences in physical stores.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "A7_I009",
+ "text": "Smartphone with a magnetic charging system that aligns automatically with a charging pad.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "A7_I023",
+ "text": "Embed cybersecurity risk assessments directly into the OS, continuously evaluating app permissions and data flows to preempt potential breaches.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Risk Manager",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "A7_I010",
+ "text": "Phone that uses AI to generate personalized content recommendations based on location and mood.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "A7_I047",
+ "text": "A smartphone with AI-powered meeting summarization and real-time language translation, tailored for business professionals to streamline cross-border negotiations and enhance productivity during global calls.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Business Professionals"
+ }
+ }
+ ],
+ "idea_count": 16
+ },
+ {
+ "query_id": "B1",
+ "query_text": "Solar panel",
+ "category": "technology",
+ "ideas": [
+ {
+ "idea_id": "B1_I096",
+ "text": "Legislative endurance ensures long-term solar panel incentives through adaptive policies, fostering sustainable energy adoption despite technological and market shifts.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Legislator",
+ "keyword": "legislative endurance"
+ }
+ },
+ {
+ "idea_id": "B1_I034",
+ "text": "Leverage influencer partnerships with eco-conscious creators to showcase solar panels in real-life scenarios, humanizing sustainability and driving consumer engagement through authentic storytelling.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "PR Specialist",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "B1_I049",
+ "text": "Develop a solar panel system that uses AI to predict and adapt to weather patterns, optimizing energy capture by adjusting panel angles and prioritizing energy storage during overcast conditions.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Weather-dependent efficiency"
+ }
+ },
+ {
+ "idea_id": "B1_I035",
+ "text": "Launch a 'Solar Storytelling' campaign where users share their solar-powered life journeys, using user-generated content to build brand loyalty and community around renewable energy.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "PR Specialist",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "B1_I006",
+ "text": "Incorporate solar panels into urban infrastructure like bus stops and streetlights.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "B1_I048",
+ "text": "Develop transparent solar panels using perovskite materials to convert visible light into electricity, enabling integration into windows and smart glass for energy-efficient buildings.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Solar energy conversion"
+ }
+ },
+ {
+ "idea_id": "B1_I105",
+ "text": "Beat-Driven Solar Tiles: Integrate audio sensors to adjust panel angles based on sound rhythms, optimizing energy capture during peak 'energy beats' in urban environments.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "rhythm",
+ "keyword": "rhythm"
+ }
+ },
+ {
+ "idea_id": "B1_I069",
+ "text": "Utilize remote terrain mapping to optimize solar panel placement by analyzing elevation, slope, and shading patterns, maximizing energy output in rugged or inaccessible areas.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Surveyor",
+ "keyword": "remote terrain mapping"
+ }
+ },
+ {
+ "idea_id": "B1_I036",
+ "text": "Create an interactive digital experience where users can 'design' their own solar panel setup, combining gamification with educational content to boost adoption and brand awareness.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "PR Specialist",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "B1_I013",
+ "text": "Create solar-powered electric vehicle charging stations along highways.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "B1_I092",
+ "text": "Solar panels can be integrated with smart grid systems to ensure real-time regulatory compliance through automated data reporting and dynamic energy dispatch, enhancing grid stability and reducing non-compliance penalties.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Legislator",
+ "keyword": "regulatory compliance"
+ }
+ },
+ {
+ "idea_id": "B1_I104",
+ "text": "Rhythmic Solar Grids: Use pulse-like patterns to synchronize solar panels with local energy demands, creating a dynamic, responsive energy network that mirrors natural rhythms.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "rhythm",
+ "keyword": "rhythm"
+ }
+ },
+ {
+ "idea_id": "B1_I010",
+ "text": "Integrate solar panels into fashion items like jackets and hats for wearable energy.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "B1_I118",
+ "text": "Solar panel installations that mimic the sun's ascent with modular, tiered structures that follow the sun's path for optimal energy capture.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "sunrise",
+ "keyword": "sunrise"
+ }
+ },
+ {
+ "idea_id": "B1_I042",
+ "text": "Develop AI-integrated solar panels that dynamically adjust to optimize home energy use, storing excess power in smart batteries for peak efficiency during high-demand hours.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Power homes"
+ }
+ },
+ {
+ "idea_id": "B1_I000",
+ "text": "Integrate solar panels with kinetic energy harvesting for hybrid charging in portable devices.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "B1_I046",
+ "text": "Utility companies could partner with local governments to install solar panels on public buildings, reducing costs and increasing community adoption through shared savings and collective energy goals.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Utility companies"
+ }
+ },
+ {
+ "idea_id": "B1_I026",
+ "text": "Create wearable solar-powered devices for eye health monitoring, using lightweight, flexible panels to track intraocular pressure and retinal changes in real-time.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Ophthalmologist",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "B1_I106",
+ "text": "Solar Rhythm Clocks: Design solar-powered clocks that visually represent daily energy cycles, using kinetic rhythms to educate users on solar patterns and usage.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "rhythm",
+ "keyword": "rhythm"
+ }
+ },
+ {
+ "idea_id": "B1_I079",
+ "text": "Solar panels integrate a Regulatory Compliance Framework to ensure adherence to safety, environmental, and energy standards, fostering sustainable adoption through automated reporting and real-time monitoring.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Legislator",
+ "keyword": "Regulatory Compliance Framework"
+ }
+ }
+ ],
+ "idea_count": 20
+ },
+ {
+ "query_id": "B3",
+ "query_text": "3D printer",
+ "category": "technology",
+ "ideas": [
+ {
+ "idea_id": "B3_I021",
+ "text": "Develop a 3D printer that uses deep-sea mineral slurries and biodegradable polymers to construct underwater habitats with self-repairing properties, inspired by coral growth patterns.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Oceanographer",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "B3_I051",
+ "text": "A modular 3D printer with interchangeable material cartridges, allowing users to switch between metals, plastics, and biocomposites in seconds for diverse prototyping needs.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Varied material compatibility"
+ }
+ },
+ {
+ "idea_id": "B3_I053",
+ "text": "Transform 3D printers into immersive narrative hubs by integrating spatial storytelling, allowing users to design and print interactive 3D environments that evolve with user engagement, creating dynamic, multi-sensory experiences.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Editor",
+ "keyword": "Immersive Narrative Layouts"
+ }
+ },
+ {
+ "idea_id": "B3_I104",
+ "text": "Explosion-Inspired 3D Printer: Uses rapid material dispersion via pressurized nozzles to create complex geometries in seconds.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "explosion",
+ "keyword": "explosion"
+ }
+ },
+ {
+ "idea_id": "B3_I031",
+ "text": "Create a 3D printed training tool that adapts in real-time to an athlete's biomechanics for personalized resistance and form correction.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Sports Coach",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "B3_I012",
+ "text": "Printed biocompatible scaffolds for regenerative medicine and tissue engineering.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "B3_I100",
+ "text": "OctoPrint-3D: A 3D printer with 8 flexible arms that can print from multiple angles, mimicking an octopus's adaptability and precision.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "octopus",
+ "keyword": "octopus"
+ }
+ },
+ {
+ "idea_id": "B3_I024",
+ "text": "Implement a 3D printer with adaptive hydrodynamic nozzles that can print structures in high-pressure, low-visibility environments, mimicking the way deep-sea organisms build complex habitats.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Oceanographer",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "B3_I048",
+ "text": "Create a Manufacturer Collaboration Hub where 3D printer manufacturers share CAD libraries, printer specs, and user feedback to accelerate innovation and reduce duplication of efforts in the industry.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Manufacturers"
+ }
+ },
+ {
+ "idea_id": "B3_I017",
+ "text": "Printed kinetic sculptures that transform with motion and light.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "B3_I045",
+ "text": "Develop a 3D printer designed specifically for schools, with preloaded educational models and interactive tutorials to teach STEM concepts through hands-on creation and collaborative projects.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Educational purposes"
+ }
+ },
+ {
+ "idea_id": "B3_I089",
+ "text": "A 3D printer transforms digital blueprints into tangible reality, weaving structured narratives of innovation, creativity, and precision through layer-by-layer storytelling.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Editor",
+ "keyword": "structured narrative"
+ }
+ },
+ {
+ "idea_id": "B3_I037",
+ "text": "Implement multi-material printing with precision control using advanced sensors and material-specific extrusion systems.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Automation Engineer",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "B3_I112",
+ "text": "WhisperCraft: A 3D printer that mimics the soft, flowing motion of a whisper, enabling the creation of organic, fluid forms with a unique aesthetic.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "whisper",
+ "keyword": "whisper"
+ }
+ },
+ {
+ "idea_id": "B3_I015",
+ "text": "Disposable 3D-printed lab equipment for low-cost scientific experimentation.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "B3_I000",
+ "text": "3D-printed modular architecture for rapid, sustainable urban development using recycled materials.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "B3_I075",
+ "text": "Our 3D printer ensures service quality standards by precisely replicating customized food items, maintaining consistency, hygiene, and customer satisfaction in every print.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Restaurant Manager",
+ "keyword": "service quality standards"
+ }
+ },
+ {
+ "idea_id": "B3_I057",
+ "text": "Revolutionize manufacturing with AI-powered 3D printers that optimize print processes in real-time, reducing waste and enhancing precision through adaptive material flow and dynamic temperature control.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Editor",
+ "keyword": "process optimization"
+ }
+ },
+ {
+ "idea_id": "B3_I117",
+ "text": "Fog-assisted 3D printer integrates aerosolized support structures that dissolve upon printing completion, reducing post-processing and mimicking fog's transient, supportive role.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "fog",
+ "keyword": "fog"
+ }
+ },
+ {
+ "idea_id": "B3_I047",
+ "text": "Create a student-led 3D printing club where participants design and print educational models, fostering collaboration, creativity, and hands-on learning through real-world problem-solving projects.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Students"
+ }
+ }
+ ],
+ "idea_count": 20
+ },
+ {
+ "query_id": "B4",
+ "query_text": "Drone",
+ "category": "technology",
+ "ideas": [
+ {
+ "idea_id": "B4_I069",
+ "text": "Drone equipped with optimized image sensors captures high-resolution thermal and visual data, enabling real-time structural health monitoring of bridges and buildings with unmatched precision.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Electronics Engineer",
+ "keyword": "image sensor optimization"
+ }
+ },
+ {
+ "idea_id": "B4_I077",
+ "text": "Drone equipped with adaptive signal filtering technology mitigates interference from 5G networks, ensuring stable communication for real-time aerial delivery and surveillance in urban environments.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Electronics Engineer",
+ "keyword": "signal interference mitigation"
+ }
+ },
+ {
+ "idea_id": "B4_I052",
+ "text": "A drone equipped with AI-powered crop sensors and weather forecasting tools, tailored for farmers to optimize planting schedules, reduce water usage, and increase yield through real-time data analysis.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Farmers"
+ }
+ },
+ {
+ "idea_id": "B4_I047",
+ "text": "Deploy AI-powered drones with real-time thermal imaging and gas sensors to rapidly locate and assist victims in disaster zones, while coordinating with emergency teams via encrypted communication.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Emergency response"
+ }
+ },
+ {
+ "idea_id": "B4_I045",
+ "text": "Develop a swarm of micro-drones equipped with AI to autonomously map enemy terrain, identify targets, and relay real-time data while evading detection through adaptive camouflage and encrypted communication.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Military reconnaissance"
+ }
+ },
+ {
+ "idea_id": "B4_I036",
+ "text": "Design a drone as a visual metaphor for surveillance, using long takes and static shots to evoke unease and paranoia.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Film Director",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "B4_I009",
+ "text": "Drone-powered micro-farming systems for vertical gardening in urban spaces.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "B4_I042",
+ "text": "Integrate AI-driven thermal vision with real-time environmental data to prioritize targets based on heat signatures and movement patterns, enhancing precision in dynamic scenarios.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Target acquisition"
+ }
+ },
+ {
+ "idea_id": "B4_I021",
+ "text": "Develop AI-driven drones equipped with blockchain technology for secure, transparent asset tracking in supply chains, enhancing logistics efficiency and reducing fraud.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Securities Analyst",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "B4_I014",
+ "text": "Drone-enabled disaster communication relays using mesh networks in crisis zones.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "B4_I038",
+ "text": "Stage a drone as a character in a tragic love story, using choreography and lighting to express unrequited longing.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Film Director",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "B4_I003",
+ "text": "Modular drone kits for STEM education, enabling students to build and program their own drones.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "B4_I017",
+ "text": "Drone-powered smart irrigation systems using soil moisture sensors for efficient water use.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "B4_I090",
+ "text": "Drone serves as an aerial smart home assistant, autonomously monitoring, controlling, and optimizing home systems via IoT integration, enhancing security, energy efficiency, and user convenience.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Electronics Engineer",
+ "keyword": "smart home integration"
+ }
+ },
+ {
+ "idea_id": "B4_I134",
+ "text": "Neural-Integrated Drones: Futuristic drones equipped with neural interfaces, allowing direct brain-computer interaction for intuitive control and immersive AR/VR experiences.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "futuristic",
+ "keyword": "futuristic"
+ }
+ },
+ {
+ "idea_id": "B4_I138",
+ "text": "A lens-inspired drone that uses adaptive optics to focus on targets, adjusting its perspective in real-time for precision surveillance.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "lens",
+ "keyword": "lens"
+ }
+ },
+ {
+ "idea_id": "B4_I148",
+ "text": "Drone with butterfly-like metamorphosis: modular design allowing reconfiguration for different tasks, like pollination or surveillance.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "butterfly",
+ "keyword": "butterfly"
+ }
+ },
+ {
+ "idea_id": "B4_I029",
+ "text": "Drone-powered vineyard tours with live-streaming capabilities, allowing remote guests to explore and taste wines from different regions in real-time.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Sommelier",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "B4_I145",
+ "text": "Drone equipped with a 'sand battery' - a compact, high-capacity energy storage system using desert sand as a conductive medium.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "desert",
+ "keyword": "desert"
+ }
+ },
+ {
+ "idea_id": "B4_I111",
+ "text": "Drone equipped with biosensors and AI analyzes real-time vital signs of remote patients, enabling early health threat detection in disaster zones or isolated areas.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Electronics Engineer",
+ "keyword": "vital signs monitoring"
+ }
+ }
+ ],
+ "idea_count": 20
+ },
+ {
+ "query_id": "C1",
+ "query_text": "Food delivery service",
+ "category": "services",
+ "ideas": [
+ {
+ "idea_id": "C1_I026",
+ "text": "Integrate AR glasses for visually impaired users to enhance food recognition and navigation during delivery, improving accessibility and independence.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Ophthalmologist",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C1_I002",
+ "text": "Create a subscription box for seasonal, locally-sourced ingredients paired with recipe cards.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C1_I009",
+ "text": "Launch a carbon-neutral delivery option with electric vehicles and carbon offsetting.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C1_I031",
+ "text": "Design delivery routes as temporary urban art installations, transforming vehicles into mobile galleries that promote local cuisine and foster community engagement.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Urban Planner",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C1_I000",
+ "text": "Introduce a zero-waste delivery system using biodegradable containers and composting partnerships for each order.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C1_I037",
+ "text": "Launch a 'Culinary Improvisation' service where chefs improvise dishes based on customer mood, captured as digital art for future meals.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Conductor",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C1_I061",
+ "text": "Organic Food Forest Delivery: Curate meals from forest-grown ingredients like mushrooms and wild plants, educating customers on foraging and sustainable wild harvesting practices.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "organic",
+ "keyword": "organic"
+ }
+ },
+ {
+ "idea_id": "C1_I055",
+ "text": "Ant-Team: Enable gig workers to form temporary, task-based delivery teams, adapting like ants during resource scarcity.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "ant",
+ "keyword": "ant"
+ }
+ },
+ {
+ "idea_id": "C1_I054",
+ "text": "Ant-Heap: Create a dynamic, user-generated food rating system where customers 'store' their favorites like ants store food.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "ant",
+ "keyword": "ant"
+ }
+ },
+ {
+ "idea_id": "C1_I051",
+ "text": "Develop a unified API hub that connects food delivery platforms with restaurant POS systems, enabling real-time order sync, inventory updates, and dynamic pricing adjustments based on demand and supply.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Requires integration with restaurant systems"
+ }
+ },
+ {
+ "idea_id": "C1_I049",
+ "text": "Introduce AI-powered recipe customization on the platform, allowing users to input dietary restrictions and preferences to receive personalized meal suggestions and delivery options.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Digital platform for food delivery"
+ }
+ },
+ {
+ "idea_id": "C1_I043",
+ "text": "Create a restaurant-specific app that allows customers to order food directly from the kitchen, with real-time updates on order status and custom delivery windows for each restaurant.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Enable customers to order food from restaurants"
+ }
+ },
+ {
+ "idea_id": "C1_I027",
+ "text": "Create a dark adaptation training program via delivery apps, using specialized lighting and food items to help users with night blindness adjust to low-light environments.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Ophthalmologist",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C1_I069",
+ "text": "Foldable Menus: Origami-style folded menus that unfold into interactive digital screens for real-time order customization.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "origami",
+ "keyword": "origami"
+ }
+ },
+ {
+ "idea_id": "C1_I045",
+ "text": "Implement AI-powered dynamic routing for restaurant delivery, optimizing routes in real-time to reduce wait times and fuel costs, enhancing both efficiency and customer satisfaction through smart, adaptive delivery paths.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Facilitate last-mile delivery for restaurants"
+ }
+ },
+ {
+ "idea_id": "C1_I006",
+ "text": "Offer a 'cook along' feature where delivery includes pre-measured ingredients and live cooking guidance.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ }
+ ],
+ "idea_count": 16
+ },
+ {
+ "query_id": "C2",
+ "query_text": "Online education platform",
+ "category": "services",
+ "ideas": [
+ {
+ "idea_id": "C2_I106",
+ "text": "PrismPulse: Real-time analytics refract user data into insights, creating dynamic, responsive learning experiences.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "prism",
+ "keyword": "prism"
+ }
+ },
+ {
+ "idea_id": "C2_I087",
+ "text": "Our platform transforms skill development into a strategic investment, using ROI analytics to quantify career growth, salary potential, and time-to-employment, empowering users to make data-driven decisions on their learning journey.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Financial Analyst",
+ "keyword": "ROI in skill development"
+ }
+ },
+ {
+ "idea_id": "C2_I009",
+ "text": "Mobile-first learning apps with offline access and voice-activated note-taking features.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C2_I041",
+ "text": "Implement a real-time collaborative coding environment where students debug together, share live code changes, and receive instant feedback from AI tutors during programming exercises.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Interactive learning tools"
+ }
+ },
+ {
+ "idea_id": "C2_I119",
+ "text": "EagleFeather: Micro-credential system where learners 'grow' skills, each feather representing a milestone in their educational flight.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "eagle",
+ "keyword": "eagle"
+ }
+ },
+ {
+ "idea_id": "C2_I095",
+ "text": "Leverage financial scalability metrics to optimize online education platform pricing tiers, user acquisition costs, and LTV:CAC ratios for rapid, sustainable growth.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Financial Analyst",
+ "keyword": "financial scalability metrics"
+ }
+ },
+ {
+ "idea_id": "C2_I045",
+ "text": "Introduce AI-powered tutoring sessions that adapt to student performance in real-time, offering personalized feedback and interactive exercises to enhance learning outcomes.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Tutoring services"
+ }
+ },
+ {
+ "idea_id": "C2_I048",
+ "text": "Parental Insight Dashboard: A real-time platform where parents track their child's learning progress, receive tailored feedback, and collaborate with teachers to customize lesson plans and set achievable educational goals.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Parents"
+ }
+ },
+ {
+ "idea_id": "C2_I115",
+ "text": "EagleVision: AI-powered adaptive learning that 'soars' with personalized paths, adjusting like an eagle's flight to student progress.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "eagle",
+ "keyword": "eagle"
+ }
+ },
+ {
+ "idea_id": "C2_I016",
+ "text": "AI-generated study schedules that optimize time management and retention based on cognitive science.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C2_I020",
+ "text": "Implement real-time logistics tracking in courses to teach supply chain management with live warehouse operations simulations.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Warehouse Manager",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C2_I064",
+ "text": "Leverage digital engagement through AI-driven interactive modules, real-time collaborative projects, and gamified content to enhance student participation and knowledge retention in online aquaculture education.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Aquaculture Specialist",
+ "keyword": "digital engagement"
+ }
+ },
+ {
+ "idea_id": "C2_I111",
+ "text": "BeatLearn: A gamified platform where students 'learn to the beat' by aligning their study sessions with musical rhythms to boost memory and engagement.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "rhythm",
+ "keyword": "rhythm"
+ }
+ },
+ {
+ "idea_id": "C2_I046",
+ "text": "Create a peer-to-peer learning hub where students mentor each other through real-world projects, fostering collaboration and leadership skills while building a community-driven knowledge network.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Students"
+ }
+ },
+ {
+ "idea_id": "C2_I028",
+ "text": "Utilize digital ethnography tools to analyze student interaction patterns, enabling real-time adjustments to pedagogical strategies based on sociocultural dynamics.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Sociologist",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C2_I031",
+ "text": "Create a dynamic curriculum that evolves with seasonal trends, ensuring students stay ahead in the ever-changing fashion industry.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Fashion Designer",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C2_I035",
+ "text": "Integrate dynamic lighting simulations to teach photography techniques, allowing students to visualize how different lighting conditions affect their compositions in real-time.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Photographer",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C2_I007",
+ "text": "AI chatbots that provide instant feedback and answer student questions during live sessions.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C2_I004",
+ "text": "AI-driven content creation tools that generate interactive quizzes and study materials for instructors.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C2_I061",
+ "text": "Our platform tracks participant performance in real-time, using AI to personalize learning paths and provide actionable insights, ensuring each learner progresses efficiently and effectively.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Volunteer Coordinator",
+ "keyword": "participant performance tracking"
+ }
+ }
+ ],
+ "idea_count": 20
+ },
+ {
+ "query_id": "C4",
+ "query_text": "Public transportation",
+ "category": "services",
+ "ideas": [
+ {
+ "idea_id": "C4_I009",
+ "text": "Introduce a smart ticketing system with contactless payment and multi-modal integration.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C4_I001",
+ "text": "Implement solar-powered buses with integrated charging stations along routes.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C4_I030",
+ "text": "Utilize predictive analytics to dynamically adjust bus routes based on real-time passenger data and traffic patterns, optimizing efficiency and reducing wait times.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Mathematician",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C4_I049",
+ "text": "Implement solar-powered bus stops with integrated charging stations, reducing reliance on grid infrastructure and promoting sustainable urban mobility.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Reliant on infrastructure"
+ }
+ },
+ {
+ "idea_id": "C4_I004",
+ "text": "Create a carpooling platform linked to public transit schedules for shared rides.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C4_I042",
+ "text": "Implement solar-powered bus shelters with kinetic energy generators to charge electric buses, reducing reliance on non-renewable energy sources while enhancing passenger comfort.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Environmental sustainability"
+ }
+ },
+ {
+ "idea_id": "C4_I032",
+ "text": "Apply graph theory to design multi-modal transit networks, minimizing transfers and maximizing accessibility through optimal connectivity.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Mathematician",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C4_I041",
+ "text": "Implement a dynamic traffic light system that prioritizes public transit vehicles, reducing stop-and-go congestion and encouraging more people to use buses and trains.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Reduction of traffic congestion"
+ }
+ },
+ {
+ "idea_id": "C4_I092",
+ "text": "A subscription framework for public transit offers personalized travel plans, real-time updates, and integrated payment, transforming commuting into a seamless, tailored experience.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Software Engineer",
+ "keyword": "subscription framework"
+ }
+ },
+ {
+ "idea_id": "C4_I005",
+ "text": "Install solar-powered bus shelters with wireless charging for passengers' devices.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C4_I113",
+ "text": "Mountain Resilience System: Build transit systems inspired by mountain stability, using reinforced structures and adaptive routing to withstand natural disasters.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "mountain",
+ "keyword": "mountain"
+ }
+ },
+ {
+ "idea_id": "C4_I059",
+ "text": "Economically sustainable mobility in public transport involves integrating smart pricing models, energy-efficient systems, and data-driven route optimization to reduce costs, enhance accessibility, and foster long-term financial viability.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Economist",
+ "keyword": "economically sustainable mobility"
+ }
+ },
+ {
+ "idea_id": "C4_I109",
+ "text": "Rainbow Transit Hubs: Multi-modal stations with vibrant, colorful architecture inspired by rainbows, promoting community engagement and enhancing user experience through aesthetic appeal.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "rainbow",
+ "keyword": "rainbow"
+ }
+ },
+ {
+ "idea_id": "C4_I100",
+ "text": "Introduce 'Forest Map' - an interactive digital map showing real-time transit info, inspired by a forest's natural navigation through trails and landmarks.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "forest",
+ "keyword": "forest"
+ }
+ },
+ {
+ "idea_id": "C4_I044",
+ "text": "Create a city-to-city express bus network with real-time tracking and priority lanes, offering seamless transfers and integrated ticketing for multi-city journeys.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Traveling between cities"
+ }
+ },
+ {
+ "idea_id": "C4_I066",
+ "text": "Intercity logistics optimization can revolutionize public transport by integrating real-time data analytics to dynamically route buses and trains, reducing congestion and enhancing multi-modal connectivity for seamless urban mobility.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Economist",
+ "keyword": "intercity logistics optimization"
+ }
+ },
+ {
+ "idea_id": "C4_I036",
+ "text": "Wireless power transfer for tram systems enabling seamless charging without overhead lines.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Electrical Engineer",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C4_I039",
+ "text": "Modular electric trolley systems with plug-and-play components for rapid deployment and scalability.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Electrical Engineer",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C4_I103",
+ "text": "Develop 'Rhythm Route' algorithms that adjust bus frequencies based on peak hour rhythms, reducing congestion through adaptive scheduling.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "rhythm",
+ "keyword": "rhythm"
+ }
+ },
+ {
+ "idea_id": "C4_I093",
+ "text": "Leverage real-time data analytics to dynamically adjust bus frequencies during peak hours, optimizing capacity utilization and reducing overcrowding through predictive algorithms and adaptive scheduling.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Economist",
+ "keyword": "peak capacity management"
+ }
+ }
+ ],
+ "idea_count": 20
+ },
+ {
+ "query_id": "C9",
+ "query_text": "Elderly care service",
+ "category": "services",
+ "ideas": [
+ {
+ "idea_id": "C9_I016",
+ "text": "Mobile app for family members to track care progress and receive alerts.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C9_I045",
+ "text": "Create a 'Community Care Hubs' where seniors can access medical services, social activities, and volunteer opportunities, fostering independence and reducing isolation through local engagement.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Enabling community-based care options"
+ }
+ },
+ {
+ "idea_id": "C9_I108",
+ "text": "Interactive Kaleidoscope Gardens: Design gardens with mirrored panels and shifting patterns that respond to elders' movements, creating a calming, ever-changing environment for reflection and interaction.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "kaleidoscope",
+ "keyword": "kaleidoscope"
+ }
+ },
+ {
+ "idea_id": "C9_I088",
+ "text": "Adaptive road networks can dynamically adjust traffic signals and routes to prioritize elderly care vehicles, ensuring timely medical access and enhancing safety for senior citizens in urban areas.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Traffic Engineer",
+ "keyword": "adaptive road networks"
+ }
+ },
+ {
+ "idea_id": "C9_I071",
+ "text": "Harmonic Community Resonance transforms elderly care into a symphony of connection: residents collaborate on music therapy, creating collective rhythms that foster emotional bonds, memory recall, and a vibrant, intergenerational cultural tapestry.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Musician",
+ "keyword": "Harmonic Community Resonance"
+ }
+ },
+ {
+ "idea_id": "C9_I005",
+ "text": "Gamified memory training apps with progress tracking and social sharing.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C9_I067",
+ "text": "Craft personalized sonic environments for aging, blending nature sounds, soft melodies, and speech to soothe, engage, and enhance cognitive health in elderly care through immersive, adaptive audio therapy.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Musician",
+ "keyword": "sonic environment for aging"
+ }
+ },
+ {
+ "idea_id": "C9_I036",
+ "text": "Implement AI-driven skill assessments for caregivers to match them with residents based on cognitive and emotional needs, enhancing personalized care through data analytics.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "HR Manager",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C9_I000",
+ "text": "Virtual reality social clubs for elderly to experience travel, events, and group activities from home.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C9_I105",
+ "text": "Kaleidoscope Care Pathways: Create dynamic, ever-changing care plans that reflect each elder's unique journey, adapting to their evolving needs with colorful, personalized patterns.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "kaleidoscope",
+ "keyword": "kaleidoscope"
+ }
+ },
+ {
+ "idea_id": "C9_I038",
+ "text": "Introduce performance-based incentive programs for caregivers, using financial rewards to boost motivation and retention, aligned with HR best practices.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "HR Manager",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C9_I043",
+ "text": "Create a community-based tech hub where elderly can learn digital skills, access telehealth, and connect with peers, fostering autonomy and reducing isolation through shared learning and virtual support networks.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Supporting independent living for the elderly"
+ }
+ },
+ {
+ "idea_id": "C9_I022",
+ "text": "Design therapeutic animation sessions using guided imagery and symbolic animation to help elders process grief, memory loss, and emotional trauma.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Animator",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C9_I019",
+ "text": "Digital storytelling platforms for seniors to share life experiences with family and friends.",
+ "_hidden": {
+ "condition": "c1_direct",
+ "expert_name": "direct",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C9_I046",
+ "text": "Memory Lane Social Club: Weekly gatherings where elderly individuals share stories and artifacts, fostering connection through nostalgia and intergenerational dialogue.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Elderly individuals"
+ }
+ },
+ {
+ "idea_id": "C9_I103",
+ "text": "Design 'Desert Harmony' wellness retreats combining yoga, meditation, and minimalistic living to help seniors find inner peace and resilience, inspired by desert survival techniques.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "desert",
+ "keyword": "desert"
+ }
+ },
+ {
+ "idea_id": "C9_I026",
+ "text": "Create a smart home system with automated fall detection and emergency response using motion sensors and edge computing for immediate alerts.",
+ "_hidden": {
+ "condition": "c2_expert_only",
+ "expert_name": "Electronics Engineer",
+ "keyword": ""
+ }
+ },
+ {
+ "idea_id": "C9_I061",
+ "text": "Legal emotional equity in elderly care ensures dignified, rights-respecting services, integrating legal protections with emotional support to foster empathy, accountability, and holistic well-being for seniors.",
+ "_hidden": {
+ "condition": "c4_full_pipeline",
+ "expert_name": "Legal Scholar",
+ "keyword": "legal emotional equity"
+ }
+ },
+ {
+ "idea_id": "C9_I111",
+ "text": "Fog-Like Privacy Filters: Deploy translucent, fog-like barriers in care facilities to create private spaces for sensitive conversations, ensuring dignity and comfort.",
+ "_hidden": {
+ "condition": "c5_random_perspective",
+ "expert_name": "fog",
+ "keyword": "fog"
+ }
+ },
+ {
+ "idea_id": "C9_I051",
+ "text": "Smart wearables with real-time biometric monitoring and automatic alert integration with local emergency services, enabling rapid response to falls or medical emergencies through AI-driven risk prediction.",
+ "_hidden": {
+ "condition": "c3_attribute_only",
+ "expert_name": "direct",
+ "keyword": "Integration with emergency response systems"
+ }
+ }
+ ],
+ "idea_count": 20
+ }
+ ],
+ "total_ideas": 192,
+ "query_count": 10,
+ "conditions": [
+ "c1_direct",
+ "c2_expert_only",
+ "c3_attribute_only",
+ "c4_full_pipeline",
+ "c5_random_perspective"
+ ],
+ "randomization_seed": 42,
+ "sampling": {
+ "sample_total": null,
+ "per_query": null,
+ "per_condition": 4
+ },
+ "metadata": {
+ "source_file": "experiment_20260119_165650_deduped.json",
+ "prepared_for": "human_assessment"
+ }
+}
\ No newline at end of file
diff --git a/experiments/assessment/frontend/index.html b/experiments/assessment/frontend/index.html
new file mode 100644
index 0000000..979a87a
--- /dev/null
+++ b/experiments/assessment/frontend/index.html
@@ -0,0 +1,13 @@
+<!doctype html>
+<html lang="en">
+  <head>
+    <meta charset="UTF-8" />
+    <link rel="icon" type="image/svg+xml" href="/vite.svg" />
+    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+    <title>Creative Idea Assessment</title>
+  </head>
+  <body>
+    <div id="root"></div>
+    <script type="module" src="/src/main.tsx"></script>
+  </body>
+</html>
diff --git a/experiments/assessment/frontend/package-lock.json b/experiments/assessment/frontend/package-lock.json
new file mode 100644
index 0000000..82eb758
--- /dev/null
+++ b/experiments/assessment/frontend/package-lock.json
@@ -0,0 +1,4221 @@
+{
+ "name": "assessment-frontend",
+ "version": "1.0.0",
+ "lockfileVersion": 3,
+ "requires": true,
+ "packages": {
+ "": {
+ "name": "assessment-frontend",
+ "version": "1.0.0",
+ "dependencies": {
+ "@ant-design/icons": "^6.1.0",
+ "antd": "^6.0.0",
+ "react": "^19.2.0",
+ "react-dom": "^19.2.0"
+ },
+ "devDependencies": {
+ "@eslint/js": "^9.39.1",
+ "@types/node": "^24.10.1",
+ "@types/react": "^19.2.5",
+ "@types/react-dom": "^19.2.3",
+ "@vitejs/plugin-react": "^5.1.1",
+ "eslint": "^9.39.1",
+ "eslint-plugin-react-hooks": "^7.0.1",
+ "eslint-plugin-react-refresh": "^0.4.24",
+ "globals": "^16.5.0",
+ "typescript": "~5.9.3",
+ "typescript-eslint": "^8.46.4",
+ "vite": "^7.2.4"
+ }
+ },
+ "node_modules/@ant-design/colors": {
+ "version": "8.0.1",
+ "resolved": "https://registry.npmjs.org/@ant-design/colors/-/colors-8.0.1.tgz",
+ "integrity": "sha512-foPVl0+SWIslGUtD/xBr1p9U4AKzPhNYEseXYRRo5QSzGACYZrQbe11AYJbYfAWnWSpGBx6JjBmSeugUsD9vqQ==",
+ "license": "MIT",
+ "dependencies": {
+ "@ant-design/fast-color": "^3.0.0"
+ }
+ },
+ "node_modules/@ant-design/cssinjs": {
+ "version": "2.0.3",
+ "resolved": "https://registry.npmjs.org/@ant-design/cssinjs/-/cssinjs-2.0.3.tgz",
+ "integrity": "sha512-HAo8SZ3a6G8v6jT0suCz1270na6EA3obeJWM4uzRijBhdwdoMAXWK2f4WWkwB28yUufsfk3CAhN1coGPQq4kNQ==",
+ "license": "MIT",
+ "dependencies": {
+ "@babel/runtime": "^7.11.1",
+ "@emotion/hash": "^0.8.0",
+ "@emotion/unitless": "^0.7.5",
+ "@rc-component/util": "^1.4.0",
+ "clsx": "^2.1.1",
+ "csstype": "^3.1.3",
+ "stylis": "^4.3.4"
+ },
+ "peerDependencies": {
+ "react": ">=16.0.0",
+ "react-dom": ">=16.0.0"
+ }
+ },
+ "node_modules/@ant-design/cssinjs-utils": {
+ "version": "2.0.2",
+ "resolved": "https://registry.npmjs.org/@ant-design/cssinjs-utils/-/cssinjs-utils-2.0.2.tgz",
+ "integrity": "sha512-Mq3Hm6fJuQeFNKSp3+yT4bjuhVbdrsyXE2RyfpJFL0xiYNZdaJ6oFaE3zFrzmHbmvTd2Wp3HCbRtkD4fU+v2ZA==",
+ "license": "MIT",
+ "dependencies": {
+ "@ant-design/cssinjs": "^2.0.1",
+ "@babel/runtime": "^7.23.2",
+ "@rc-component/util": "^1.4.0"
+ },
+ "peerDependencies": {
+ "react": ">=18",
+ "react-dom": ">=18"
+ }
+ },
+ "node_modules/@ant-design/fast-color": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/@ant-design/fast-color/-/fast-color-3.0.0.tgz",
+ "integrity": "sha512-eqvpP7xEDm2S7dUzl5srEQCBTXZMmY3ekf97zI+M2DHOYyKdJGH0qua0JACHTqbkRnD/KHFQP9J1uMJ/XWVzzA==",
+ "license": "MIT",
+ "engines": {
+ "node": ">=8.x"
+ }
+ },
+ "node_modules/@ant-design/icons": {
+ "version": "6.1.0",
+ "resolved": "https://registry.npmjs.org/@ant-design/icons/-/icons-6.1.0.tgz",
+ "integrity": "sha512-KrWMu1fIg3w/1F2zfn+JlfNDU8dDqILfA5Tg85iqs1lf8ooyGlbkA+TkwfOKKgqpUmAiRY1PTFpuOU2DAIgSUg==",
+ "license": "MIT",
+ "dependencies": {
+ "@ant-design/colors": "^8.0.0",
+ "@ant-design/icons-svg": "^4.4.0",
+ "@rc-component/util": "^1.3.0",
+ "clsx": "^2.1.1"
+ },
+ "engines": {
+ "node": ">=8"
+ },
+ "peerDependencies": {
+ "react": ">=16.0.0",
+ "react-dom": ">=16.0.0"
+ }
+ },
+ "node_modules/@ant-design/icons-svg": {
+ "version": "4.4.2",
+ "resolved": "https://registry.npmjs.org/@ant-design/icons-svg/-/icons-svg-4.4.2.tgz",
+ "integrity": "sha512-vHbT+zJEVzllwP+CM+ul7reTEfBR0vgxFe7+lREAsAA7YGsYpboiq2sQNeQeRvh09GfQgs/GyFEvZpJ9cLXpXA==",
+ "license": "MIT"
+ },
+ "node_modules/@ant-design/react-slick": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/@ant-design/react-slick/-/react-slick-2.0.0.tgz",
+ "integrity": "sha512-HMS9sRoEmZey8LsE/Yo6+klhlzU12PisjrVcydW3So7RdklyEd2qehyU6a7Yp+OYN72mgsYs3NFCyP2lCPFVqg==",
+ "license": "MIT",
+ "dependencies": {
+ "@babel/runtime": "^7.28.4",
+ "clsx": "^2.1.1",
+ "json2mq": "^0.2.0",
+ "throttle-debounce": "^5.0.0"
+ },
+ "peerDependencies": {
+ "react": "^0.14.0 || ^15.0.1 || ^16.0.0 || ^17.0.0 || ^18.0.0 || ^19.0.0",
+ "react-dom": "^0.14.0 || ^15.0.1 || ^16.0.0 || ^17.0.0 || ^18.0.0 || ^19.0.0"
+ }
+ },
+ "node_modules/@babel/code-frame": {
+ "version": "7.28.6",
+ "resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.28.6.tgz",
+ "integrity": "sha512-JYgintcMjRiCvS8mMECzaEn+m3PfoQiyqukOMCCVQtoJGYJw8j/8LBJEiqkHLkfwCcs74E3pbAUFNg7d9VNJ+Q==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@babel/helper-validator-identifier": "^7.28.5",
+ "js-tokens": "^4.0.0",
+ "picocolors": "^1.1.1"
+ },
+ "engines": {
+ "node": ">=6.9.0"
+ }
+ },
+ "node_modules/@babel/compat-data": {
+ "version": "7.28.6",
+ "resolved": "https://registry.npmjs.org/@babel/compat-data/-/compat-data-7.28.6.tgz",
+ "integrity": "sha512-2lfu57JtzctfIrcGMz992hyLlByuzgIk58+hhGCxjKZ3rWI82NnVLjXcaTqkI2NvlcvOskZaiZ5kjUALo3Lpxg==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=6.9.0"
+ }
+ },
+ "node_modules/@babel/core": {
+ "version": "7.28.6",
+ "resolved": "https://registry.npmjs.org/@babel/core/-/core-7.28.6.tgz",
+ "integrity": "sha512-H3mcG6ZDLTlYfaSNi0iOKkigqMFvkTKlGUYlD8GW7nNOYRrevuA46iTypPyv+06V3fEmvvazfntkBU34L0azAw==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@babel/code-frame": "^7.28.6",
+ "@babel/generator": "^7.28.6",
+ "@babel/helper-compilation-targets": "^7.28.6",
+ "@babel/helper-module-transforms": "^7.28.6",
+ "@babel/helpers": "^7.28.6",
+ "@babel/parser": "^7.28.6",
+ "@babel/template": "^7.28.6",
+ "@babel/traverse": "^7.28.6",
+ "@babel/types": "^7.28.6",
+ "@jridgewell/remapping": "^2.3.5",
+ "convert-source-map": "^2.0.0",
+ "debug": "^4.1.0",
+ "gensync": "^1.0.0-beta.2",
+ "json5": "^2.2.3",
+ "semver": "^6.3.1"
+ },
+ "engines": {
+ "node": ">=6.9.0"
+ },
+ "funding": {
+ "type": "opencollective",
+ "url": "https://opencollective.com/babel"
+ }
+ },
+ "node_modules/@babel/generator": {
+ "version": "7.28.6",
+ "resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.28.6.tgz",
+ "integrity": "sha512-lOoVRwADj8hjf7al89tvQ2a1lf53Z+7tiXMgpZJL3maQPDxh0DgLMN62B2MKUOFcoodBHLMbDM6WAbKgNy5Suw==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@babel/parser": "^7.28.6",
+ "@babel/types": "^7.28.6",
+ "@jridgewell/gen-mapping": "^0.3.12",
+ "@jridgewell/trace-mapping": "^0.3.28",
+ "jsesc": "^3.0.2"
+ },
+ "engines": {
+ "node": ">=6.9.0"
+ }
+ },
+ "node_modules/@babel/helper-compilation-targets": {
+ "version": "7.28.6",
+ "resolved": "https://registry.npmjs.org/@babel/helper-compilation-targets/-/helper-compilation-targets-7.28.6.tgz",
+ "integrity": "sha512-JYtls3hqi15fcx5GaSNL7SCTJ2MNmjrkHXg4FSpOA/grxK8KwyZ5bubHsCq8FXCkua6xhuaaBit+3b7+VZRfcA==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@babel/compat-data": "^7.28.6",
+ "@babel/helper-validator-option": "^7.27.1",
+ "browserslist": "^4.24.0",
+ "lru-cache": "^5.1.1",
+ "semver": "^6.3.1"
+ },
+ "engines": {
+ "node": ">=6.9.0"
+ }
+ },
+ "node_modules/@babel/helper-globals": {
+ "version": "7.28.0",
+ "resolved": "https://registry.npmjs.org/@babel/helper-globals/-/helper-globals-7.28.0.tgz",
+ "integrity": "sha512-+W6cISkXFa1jXsDEdYA8HeevQT/FULhxzR99pxphltZcVaugps53THCeiWA8SguxxpSp3gKPiuYfSWopkLQ4hw==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=6.9.0"
+ }
+ },
+ "node_modules/@babel/helper-module-imports": {
+ "version": "7.28.6",
+ "resolved": "https://registry.npmjs.org/@babel/helper-module-imports/-/helper-module-imports-7.28.6.tgz",
+ "integrity": "sha512-l5XkZK7r7wa9LucGw9LwZyyCUscb4x37JWTPz7swwFE/0FMQAGpiWUZn8u9DzkSBWEcK25jmvubfpw2dnAMdbw==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@babel/traverse": "^7.28.6",
+ "@babel/types": "^7.28.6"
+ },
+ "engines": {
+ "node": ">=6.9.0"
+ }
+ },
+ "node_modules/@babel/helper-module-transforms": {
+ "version": "7.28.6",
+ "resolved": "https://registry.npmjs.org/@babel/helper-module-transforms/-/helper-module-transforms-7.28.6.tgz",
+ "integrity": "sha512-67oXFAYr2cDLDVGLXTEABjdBJZ6drElUSI7WKp70NrpyISso3plG9SAGEF6y7zbha/wOzUByWWTJvEDVNIUGcA==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@babel/helper-module-imports": "^7.28.6",
+ "@babel/helper-validator-identifier": "^7.28.5",
+ "@babel/traverse": "^7.28.6"
+ },
+ "engines": {
+ "node": ">=6.9.0"
+ },
+ "peerDependencies": {
+ "@babel/core": "^7.0.0"
+ }
+ },
+ "node_modules/@babel/helper-plugin-utils": {
+ "version": "7.28.6",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.28.6.tgz",
+ "integrity": "sha512-S9gzZ/bz83GRysI7gAD4wPT/AI3uCnY+9xn+Mx/KPs2JwHJIz1W8PZkg2cqyt3RNOBM8ejcXhV6y8Og7ly/Dug==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=6.9.0"
+ }
+ },
+ "node_modules/@babel/helper-string-parser": {
+ "version": "7.27.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-string-parser/-/helper-string-parser-7.27.1.tgz",
+ "integrity": "sha512-qMlSxKbpRlAridDExk92nSobyDdpPijUq2DW6oDnUqd0iOGxmQjyqhMIihI9+zv4LPyZdRje2cavWPbCbWm3eA==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=6.9.0"
+ }
+ },
+ "node_modules/@babel/helper-validator-identifier": {
+ "version": "7.28.5",
+ "resolved": "https://registry.npmjs.org/@babel/helper-validator-identifier/-/helper-validator-identifier-7.28.5.tgz",
+ "integrity": "sha512-qSs4ifwzKJSV39ucNjsvc6WVHs6b7S03sOh2OcHF9UHfVPqWWALUsNUVzhSBiItjRZoLHx7nIarVjqKVusUZ1Q==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=6.9.0"
+ }
+ },
+ "node_modules/@babel/helper-validator-option": {
+ "version": "7.27.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-validator-option/-/helper-validator-option-7.27.1.tgz",
+ "integrity": "sha512-YvjJow9FxbhFFKDSuFnVCe2WxXk1zWc22fFePVNEaWJEu8IrZVlda6N0uHwzZrUM1il7NC9Mlp4MaJYbYd9JSg==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=6.9.0"
+ }
+ },
+ "node_modules/@babel/helpers": {
+ "version": "7.28.6",
+ "resolved": "https://registry.npmjs.org/@babel/helpers/-/helpers-7.28.6.tgz",
+ "integrity": "sha512-xOBvwq86HHdB7WUDTfKfT/Vuxh7gElQ+Sfti2Cy6yIWNW05P8iUslOVcZ4/sKbE+/jQaukQAdz/gf3724kYdqw==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@babel/template": "^7.28.6",
+ "@babel/types": "^7.28.6"
+ },
+ "engines": {
+ "node": ">=6.9.0"
+ }
+ },
+ "node_modules/@babel/parser": {
+ "version": "7.28.6",
+ "resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.28.6.tgz",
+ "integrity": "sha512-TeR9zWR18BvbfPmGbLampPMW+uW1NZnJlRuuHso8i87QZNq2JRF9i6RgxRqtEq+wQGsS19NNTWr2duhnE49mfQ==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@babel/types": "^7.28.6"
+ },
+ "bin": {
+ "parser": "bin/babel-parser.js"
+ },
+ "engines": {
+ "node": ">=6.0.0"
+ }
+ },
+ "node_modules/@babel/plugin-transform-react-jsx-self": {
+ "version": "7.27.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-react-jsx-self/-/plugin-transform-react-jsx-self-7.27.1.tgz",
+ "integrity": "sha512-6UzkCs+ejGdZ5mFFC/OCUrv028ab2fp1znZmCZjAOBKiBK2jXD1O+BPSfX8X2qjJ75fZBMSnQn3Rq2mrBJK2mw==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@babel/helper-plugin-utils": "^7.27.1"
+ },
+ "engines": {
+ "node": ">=6.9.0"
+ },
+ "peerDependencies": {
+ "@babel/core": "^7.0.0-0"
+ }
+ },
+ "node_modules/@babel/plugin-transform-react-jsx-source": {
+ "version": "7.27.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-react-jsx-source/-/plugin-transform-react-jsx-source-7.27.1.tgz",
+ "integrity": "sha512-zbwoTsBruTeKB9hSq73ha66iFeJHuaFkUbwvqElnygoNbj/jHRsSeokowZFN3CZ64IvEqcmmkVe89OPXc7ldAw==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@babel/helper-plugin-utils": "^7.27.1"
+ },
+ "engines": {
+ "node": ">=6.9.0"
+ },
+ "peerDependencies": {
+ "@babel/core": "^7.0.0-0"
+ }
+ },
+ "node_modules/@babel/runtime": {
+ "version": "7.28.6",
+ "resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.28.6.tgz",
+ "integrity": "sha512-05WQkdpL9COIMz4LjTxGpPNCdlpyimKppYNoJ5Di5EUObifl8t4tuLuUBBZEpoLYOmfvIWrsp9fCl0HoPRVTdA==",
+ "license": "MIT",
+ "engines": {
+ "node": ">=6.9.0"
+ }
+ },
+ "node_modules/@babel/template": {
+ "version": "7.28.6",
+ "resolved": "https://registry.npmjs.org/@babel/template/-/template-7.28.6.tgz",
+ "integrity": "sha512-YA6Ma2KsCdGb+WC6UpBVFJGXL58MDA6oyONbjyF/+5sBgxY/dwkhLogbMT2GXXyU84/IhRw/2D1Os1B/giz+BQ==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@babel/code-frame": "^7.28.6",
+ "@babel/parser": "^7.28.6",
+ "@babel/types": "^7.28.6"
+ },
+ "engines": {
+ "node": ">=6.9.0"
+ }
+ },
+ "node_modules/@babel/traverse": {
+ "version": "7.28.6",
+ "resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.28.6.tgz",
+ "integrity": "sha512-fgWX62k02qtjqdSNTAGxmKYY/7FSL9WAS1o2Hu5+I5m9T0yxZzr4cnrfXQ/MX0rIifthCSs6FKTlzYbJcPtMNg==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@babel/code-frame": "^7.28.6",
+ "@babel/generator": "^7.28.6",
+ "@babel/helper-globals": "^7.28.0",
+ "@babel/parser": "^7.28.6",
+ "@babel/template": "^7.28.6",
+ "@babel/types": "^7.28.6",
+ "debug": "^4.3.1"
+ },
+ "engines": {
+ "node": ">=6.9.0"
+ }
+ },
+ "node_modules/@babel/types": {
+ "version": "7.28.6",
+ "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.28.6.tgz",
+ "integrity": "sha512-0ZrskXVEHSWIqZM/sQZ4EV3jZJXRkio/WCxaqKZP1g//CEWEPSfeZFcms4XeKBCHU0ZKnIkdJeU/kF+eRp5lBg==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@babel/helper-string-parser": "^7.27.1",
+ "@babel/helper-validator-identifier": "^7.28.5"
+ },
+ "engines": {
+ "node": ">=6.9.0"
+ }
+ },
+ "node_modules/@emotion/hash": {
+ "version": "0.8.0",
+ "resolved": "https://registry.npmjs.org/@emotion/hash/-/hash-0.8.0.tgz",
+ "integrity": "sha512-kBJtf7PH6aWwZ6fka3zQ0p6SBYzx4fl1LoZXE2RrnYST9Xljm7WfKJrU4g/Xr3Beg72MLrp1AWNUmuYJTL7Cow==",
+ "license": "MIT"
+ },
+ "node_modules/@emotion/unitless": {
+ "version": "0.7.5",
+ "resolved": "https://registry.npmjs.org/@emotion/unitless/-/unitless-0.7.5.tgz",
+ "integrity": "sha512-OWORNpfjMsSSUBVrRBVGECkhWcULOAJz9ZW8uK9qgxD+87M7jHRcvh/A96XXNhXTLmKcoYSQtBEX7lHMO7YRwg==",
+ "license": "MIT"
+ },
+ "node_modules/@esbuild/aix-ppc64": {
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/aix-ppc64/-/aix-ppc64-0.27.2.tgz",
+ "integrity": "sha512-GZMB+a0mOMZs4MpDbj8RJp4cw+w1WV5NYD6xzgvzUJ5Ek2jerwfO2eADyI6ExDSUED+1X8aMbegahsJi+8mgpw==",
+ "cpu": [
+ "ppc64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "aix"
+ ],
+ "engines": {
+ "node": ">=18"
+ }
+ },
+ "node_modules/@esbuild/android-arm": {
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/android-arm/-/android-arm-0.27.2.tgz",
+ "integrity": "sha512-DVNI8jlPa7Ujbr1yjU2PfUSRtAUZPG9I1RwW4F4xFB1Imiu2on0ADiI/c3td+KmDtVKNbi+nffGDQMfcIMkwIA==",
+ "cpu": [
+ "arm"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "android"
+ ],
+ "engines": {
+ "node": ">=18"
+ }
+ },
+ "node_modules/@esbuild/android-arm64": {
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/android-arm64/-/android-arm64-0.27.2.tgz",
+ "integrity": "sha512-pvz8ZZ7ot/RBphf8fv60ljmaoydPU12VuXHImtAs0XhLLw+EXBi2BLe3OYSBslR4rryHvweW5gmkKFwTiFy6KA==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "android"
+ ],
+ "engines": {
+ "node": ">=18"
+ }
+ },
+ "node_modules/@esbuild/android-x64": {
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/android-x64/-/android-x64-0.27.2.tgz",
+ "integrity": "sha512-z8Ank4Byh4TJJOh4wpz8g2vDy75zFL0TlZlkUkEwYXuPSgX8yzep596n6mT7905kA9uHZsf/o2OJZubl2l3M7A==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "android"
+ ],
+ "engines": {
+ "node": ">=18"
+ }
+ },
+ "node_modules/@esbuild/darwin-arm64": {
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/darwin-arm64/-/darwin-arm64-0.27.2.tgz",
+ "integrity": "sha512-davCD2Zc80nzDVRwXTcQP/28fiJbcOwvdolL0sOiOsbwBa72kegmVU0Wrh1MYrbuCL98Omp5dVhQFWRKR2ZAlg==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "darwin"
+ ],
+ "engines": {
+ "node": ">=18"
+ }
+ },
+ "node_modules/@esbuild/darwin-x64": {
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/darwin-x64/-/darwin-x64-0.27.2.tgz",
+ "integrity": "sha512-ZxtijOmlQCBWGwbVmwOF/UCzuGIbUkqB1faQRf5akQmxRJ1ujusWsb3CVfk/9iZKr2L5SMU5wPBi1UWbvL+VQA==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "darwin"
+ ],
+ "engines": {
+ "node": ">=18"
+ }
+ },
+ "node_modules/@esbuild/freebsd-arm64": {
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/freebsd-arm64/-/freebsd-arm64-0.27.2.tgz",
+ "integrity": "sha512-lS/9CN+rgqQ9czogxlMcBMGd+l8Q3Nj1MFQwBZJyoEKI50XGxwuzznYdwcav6lpOGv5BqaZXqvBSiB/kJ5op+g==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "freebsd"
+ ],
+ "engines": {
+ "node": ">=18"
+ }
+ },
+ "node_modules/@esbuild/freebsd-x64": {
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/freebsd-x64/-/freebsd-x64-0.27.2.tgz",
+ "integrity": "sha512-tAfqtNYb4YgPnJlEFu4c212HYjQWSO/w/h/lQaBK7RbwGIkBOuNKQI9tqWzx7Wtp7bTPaGC6MJvWI608P3wXYA==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "freebsd"
+ ],
+ "engines": {
+ "node": ">=18"
+ }
+ },
+ "node_modules/@esbuild/linux-arm": {
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/linux-arm/-/linux-arm-0.27.2.tgz",
+ "integrity": "sha512-vWfq4GaIMP9AIe4yj1ZUW18RDhx6EPQKjwe7n8BbIecFtCQG4CfHGaHuh7fdfq+y3LIA2vGS/o9ZBGVxIDi9hw==",
+ "cpu": [
+ "arm"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "engines": {
+ "node": ">=18"
+ }
+ },
+ "node_modules/@esbuild/linux-arm64": {
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/linux-arm64/-/linux-arm64-0.27.2.tgz",
+ "integrity": "sha512-hYxN8pr66NsCCiRFkHUAsxylNOcAQaxSSkHMMjcpx0si13t1LHFphxJZUiGwojB1a/Hd5OiPIqDdXONia6bhTw==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "engines": {
+ "node": ">=18"
+ }
+ },
+ "node_modules/@esbuild/linux-ia32": {
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/linux-ia32/-/linux-ia32-0.27.2.tgz",
+ "integrity": "sha512-MJt5BRRSScPDwG2hLelYhAAKh9imjHK5+NE/tvnRLbIqUWa+0E9N4WNMjmp/kXXPHZGqPLxggwVhz7QP8CTR8w==",
+ "cpu": [
+ "ia32"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "engines": {
+ "node": ">=18"
+ }
+ },
+ "node_modules/@esbuild/linux-loong64": {
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/linux-loong64/-/linux-loong64-0.27.2.tgz",
+ "integrity": "sha512-lugyF1atnAT463aO6KPshVCJK5NgRnU4yb3FUumyVz+cGvZbontBgzeGFO1nF+dPueHD367a2ZXe1NtUkAjOtg==",
+ "cpu": [
+ "loong64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "engines": {
+ "node": ">=18"
+ }
+ },
+ "node_modules/@esbuild/linux-mips64el": {
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/linux-mips64el/-/linux-mips64el-0.27.2.tgz",
+ "integrity": "sha512-nlP2I6ArEBewvJ2gjrrkESEZkB5mIoaTswuqNFRv/WYd+ATtUpe9Y09RnJvgvdag7he0OWgEZWhviS1OTOKixw==",
+ "cpu": [
+ "mips64el"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "engines": {
+ "node": ">=18"
+ }
+ },
+ "node_modules/@esbuild/linux-ppc64": {
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/linux-ppc64/-/linux-ppc64-0.27.2.tgz",
+ "integrity": "sha512-C92gnpey7tUQONqg1n6dKVbx3vphKtTHJaNG2Ok9lGwbZil6DrfyecMsp9CrmXGQJmZ7iiVXvvZH6Ml5hL6XdQ==",
+ "cpu": [
+ "ppc64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "engines": {
+ "node": ">=18"
+ }
+ },
+ "node_modules/@esbuild/linux-riscv64": {
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/linux-riscv64/-/linux-riscv64-0.27.2.tgz",
+ "integrity": "sha512-B5BOmojNtUyN8AXlK0QJyvjEZkWwy/FKvakkTDCziX95AowLZKR6aCDhG7LeF7uMCXEJqwa8Bejz5LTPYm8AvA==",
+ "cpu": [
+ "riscv64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "engines": {
+ "node": ">=18"
+ }
+ },
+ "node_modules/@esbuild/linux-s390x": {
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/linux-s390x/-/linux-s390x-0.27.2.tgz",
+ "integrity": "sha512-p4bm9+wsPwup5Z8f4EpfN63qNagQ47Ua2znaqGH6bqLlmJ4bx97Y9JdqxgGZ6Y8xVTixUnEkoKSHcpRlDnNr5w==",
+ "cpu": [
+ "s390x"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "engines": {
+ "node": ">=18"
+ }
+ },
+ "node_modules/@esbuild/linux-x64": {
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/linux-x64/-/linux-x64-0.27.2.tgz",
+ "integrity": "sha512-uwp2Tip5aPmH+NRUwTcfLb+W32WXjpFejTIOWZFw/v7/KnpCDKG66u4DLcurQpiYTiYwQ9B7KOeMJvLCu/OvbA==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "engines": {
+ "node": ">=18"
+ }
+ },
+ "node_modules/@esbuild/netbsd-arm64": {
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/netbsd-arm64/-/netbsd-arm64-0.27.2.tgz",
+ "integrity": "sha512-Kj6DiBlwXrPsCRDeRvGAUb/LNrBASrfqAIok+xB0LxK8CHqxZ037viF13ugfsIpePH93mX7xfJp97cyDuTZ3cw==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "netbsd"
+ ],
+ "engines": {
+ "node": ">=18"
+ }
+ },
+ "node_modules/@esbuild/netbsd-x64": {
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/netbsd-x64/-/netbsd-x64-0.27.2.tgz",
+ "integrity": "sha512-HwGDZ0VLVBY3Y+Nw0JexZy9o/nUAWq9MlV7cahpaXKW6TOzfVno3y3/M8Ga8u8Yr7GldLOov27xiCnqRZf0tCA==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "netbsd"
+ ],
+ "engines": {
+ "node": ">=18"
+ }
+ },
+ "node_modules/@esbuild/openbsd-arm64": {
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/openbsd-arm64/-/openbsd-arm64-0.27.2.tgz",
+ "integrity": "sha512-DNIHH2BPQ5551A7oSHD0CKbwIA/Ox7+78/AWkbS5QoRzaqlev2uFayfSxq68EkonB+IKjiuxBFoV8ESJy8bOHA==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "openbsd"
+ ],
+ "engines": {
+ "node": ">=18"
+ }
+ },
+ "node_modules/@esbuild/openbsd-x64": {
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/openbsd-x64/-/openbsd-x64-0.27.2.tgz",
+ "integrity": "sha512-/it7w9Nb7+0KFIzjalNJVR5bOzA9Vay+yIPLVHfIQYG/j+j9VTH84aNB8ExGKPU4AzfaEvN9/V4HV+F+vo8OEg==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "openbsd"
+ ],
+ "engines": {
+ "node": ">=18"
+ }
+ },
+ "node_modules/@esbuild/openharmony-arm64": {
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/openharmony-arm64/-/openharmony-arm64-0.27.2.tgz",
+ "integrity": "sha512-LRBbCmiU51IXfeXk59csuX/aSaToeG7w48nMwA6049Y4J4+VbWALAuXcs+qcD04rHDuSCSRKdmY63sruDS5qag==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "openharmony"
+ ],
+ "engines": {
+ "node": ">=18"
+ }
+ },
+ "node_modules/@esbuild/sunos-x64": {
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/sunos-x64/-/sunos-x64-0.27.2.tgz",
+ "integrity": "sha512-kMtx1yqJHTmqaqHPAzKCAkDaKsffmXkPHThSfRwZGyuqyIeBvf08KSsYXl+abf5HDAPMJIPnbBfXvP2ZC2TfHg==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "sunos"
+ ],
+ "engines": {
+ "node": ">=18"
+ }
+ },
+ "node_modules/@esbuild/win32-arm64": {
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/win32-arm64/-/win32-arm64-0.27.2.tgz",
+ "integrity": "sha512-Yaf78O/B3Kkh+nKABUF++bvJv5Ijoy9AN1ww904rOXZFLWVc5OLOfL56W+C8F9xn5JQZa3UX6m+IktJnIb1Jjg==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "win32"
+ ],
+ "engines": {
+ "node": ">=18"
+ }
+ },
+ "node_modules/@esbuild/win32-ia32": {
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/win32-ia32/-/win32-ia32-0.27.2.tgz",
+ "integrity": "sha512-Iuws0kxo4yusk7sw70Xa2E2imZU5HoixzxfGCdxwBdhiDgt9vX9VUCBhqcwY7/uh//78A1hMkkROMJq9l27oLQ==",
+ "cpu": [
+ "ia32"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "win32"
+ ],
+ "engines": {
+ "node": ">=18"
+ }
+ },
+ "node_modules/@esbuild/win32-x64": {
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/win32-x64/-/win32-x64-0.27.2.tgz",
+ "integrity": "sha512-sRdU18mcKf7F+YgheI/zGf5alZatMUTKj/jNS6l744f9u3WFu4v7twcUI9vu4mknF4Y9aDlblIie0IM+5xxaqQ==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "win32"
+ ],
+ "engines": {
+ "node": ">=18"
+ }
+ },
+ "node_modules/@eslint-community/eslint-utils": {
+ "version": "4.9.1",
+ "resolved": "https://registry.npmjs.org/@eslint-community/eslint-utils/-/eslint-utils-4.9.1.tgz",
+ "integrity": "sha512-phrYmNiYppR7znFEdqgfWHXR6NCkZEK7hwWDHZUjit/2/U0r6XvkDl0SYnoM51Hq7FhCGdLDT6zxCCOY1hexsQ==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "eslint-visitor-keys": "^3.4.3"
+ },
+ "engines": {
+ "node": "^12.22.0 || ^14.17.0 || >=16.0.0"
+ },
+ "funding": {
+ "url": "https://opencollective.com/eslint"
+ },
+ "peerDependencies": {
+ "eslint": "^6.0.0 || ^7.0.0 || >=8.0.0"
+ }
+ },
+ "node_modules/@eslint-community/eslint-utils/node_modules/eslint-visitor-keys": {
+ "version": "3.4.3",
+ "resolved": "https://registry.npmjs.org/eslint-visitor-keys/-/eslint-visitor-keys-3.4.3.tgz",
+ "integrity": "sha512-wpc+LXeiyiisxPlEkUzU6svyS1frIO3Mgxj1fdy7Pm8Ygzguax2N3Fa/D/ag1WqbOprdI+uY6wMUl8/a2G+iag==",
+ "dev": true,
+ "license": "Apache-2.0",
+ "engines": {
+ "node": "^12.22.0 || ^14.17.0 || >=16.0.0"
+ },
+ "funding": {
+ "url": "https://opencollective.com/eslint"
+ }
+ },
+ "node_modules/@eslint-community/regexpp": {
+ "version": "4.12.2",
+ "resolved": "https://registry.npmjs.org/@eslint-community/regexpp/-/regexpp-4.12.2.tgz",
+ "integrity": "sha512-EriSTlt5OC9/7SXkRSCAhfSxxoSUgBm33OH+IkwbdpgoqsSsUg7y3uh+IICI/Qg4BBWr3U2i39RpmycbxMq4ew==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": "^12.0.0 || ^14.0.0 || >=16.0.0"
+ }
+ },
+ "node_modules/@eslint/config-array": {
+ "version": "0.21.1",
+ "resolved": "https://registry.npmjs.org/@eslint/config-array/-/config-array-0.21.1.tgz",
+ "integrity": "sha512-aw1gNayWpdI/jSYVgzN5pL0cfzU02GT3NBpeT/DXbx1/1x7ZKxFPd9bwrzygx/qiwIQiJ1sw/zD8qY/kRvlGHA==",
+ "dev": true,
+ "license": "Apache-2.0",
+ "dependencies": {
+ "@eslint/object-schema": "^2.1.7",
+ "debug": "^4.3.1",
+ "minimatch": "^3.1.2"
+ },
+ "engines": {
+ "node": "^18.18.0 || ^20.9.0 || >=21.1.0"
+ }
+ },
+ "node_modules/@eslint/config-helpers": {
+ "version": "0.4.2",
+ "resolved": "https://registry.npmjs.org/@eslint/config-helpers/-/config-helpers-0.4.2.tgz",
+ "integrity": "sha512-gBrxN88gOIf3R7ja5K9slwNayVcZgK6SOUORm2uBzTeIEfeVaIhOpCtTox3P6R7o2jLFwLFTLnC7kU/RGcYEgw==",
+ "dev": true,
+ "license": "Apache-2.0",
+ "dependencies": {
+ "@eslint/core": "^0.17.0"
+ },
+ "engines": {
+ "node": "^18.18.0 || ^20.9.0 || >=21.1.0"
+ }
+ },
+ "node_modules/@eslint/core": {
+ "version": "0.17.0",
+ "resolved": "https://registry.npmjs.org/@eslint/core/-/core-0.17.0.tgz",
+ "integrity": "sha512-yL/sLrpmtDaFEiUj1osRP4TI2MDz1AddJL+jZ7KSqvBuliN4xqYY54IfdN8qD8Toa6g1iloph1fxQNkjOxrrpQ==",
+ "dev": true,
+ "license": "Apache-2.0",
+ "dependencies": {
+ "@types/json-schema": "^7.0.15"
+ },
+ "engines": {
+ "node": "^18.18.0 || ^20.9.0 || >=21.1.0"
+ }
+ },
+ "node_modules/@eslint/eslintrc": {
+ "version": "3.3.3",
+ "resolved": "https://registry.npmjs.org/@eslint/eslintrc/-/eslintrc-3.3.3.tgz",
+ "integrity": "sha512-Kr+LPIUVKz2qkx1HAMH8q1q6azbqBAsXJUxBl/ODDuVPX45Z9DfwB8tPjTi6nNZ8BuM3nbJxC5zCAg5elnBUTQ==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "ajv": "^6.12.4",
+ "debug": "^4.3.2",
+ "espree": "^10.0.1",
+ "globals": "^14.0.0",
+ "ignore": "^5.2.0",
+ "import-fresh": "^3.2.1",
+ "js-yaml": "^4.1.1",
+ "minimatch": "^3.1.2",
+ "strip-json-comments": "^3.1.1"
+ },
+ "engines": {
+ "node": "^18.18.0 || ^20.9.0 || >=21.1.0"
+ },
+ "funding": {
+ "url": "https://opencollective.com/eslint"
+ }
+ },
+ "node_modules/@eslint/eslintrc/node_modules/globals": {
+ "version": "14.0.0",
+ "resolved": "https://registry.npmjs.org/globals/-/globals-14.0.0.tgz",
+ "integrity": "sha512-oahGvuMGQlPw/ivIYBjVSrWAfWLBeku5tpPE2fOPLi+WHffIWbuh2tCjhyQhTBPMf5E9jDEH4FOmTYgYwbKwtQ==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=18"
+ },
+ "funding": {
+ "url": "https://github.com/sponsors/sindresorhus"
+ }
+ },
+ "node_modules/@eslint/js": {
+ "version": "9.39.2",
+ "resolved": "https://registry.npmjs.org/@eslint/js/-/js-9.39.2.tgz",
+ "integrity": "sha512-q1mjIoW1VX4IvSocvM/vbTiveKC4k9eLrajNEuSsmjymSDEbpGddtpfOoN7YGAqBK3NG+uqo8ia4PDTt8buCYA==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": "^18.18.0 || ^20.9.0 || >=21.1.0"
+ },
+ "funding": {
+ "url": "https://eslint.org/donate"
+ }
+ },
+ "node_modules/@eslint/object-schema": {
+ "version": "2.1.7",
+ "resolved": "https://registry.npmjs.org/@eslint/object-schema/-/object-schema-2.1.7.tgz",
+ "integrity": "sha512-VtAOaymWVfZcmZbp6E2mympDIHvyjXs/12LqWYjVw6qjrfF+VK+fyG33kChz3nnK+SU5/NeHOqrTEHS8sXO3OA==",
+ "dev": true,
+ "license": "Apache-2.0",
+ "engines": {
+ "node": "^18.18.0 || ^20.9.0 || >=21.1.0"
+ }
+ },
+ "node_modules/@eslint/plugin-kit": {
+ "version": "0.4.1",
+ "resolved": "https://registry.npmjs.org/@eslint/plugin-kit/-/plugin-kit-0.4.1.tgz",
+ "integrity": "sha512-43/qtrDUokr7LJqoF2c3+RInu/t4zfrpYdoSDfYyhg52rwLV6TnOvdG4fXm7IkSB3wErkcmJS9iEhjVtOSEjjA==",
+ "dev": true,
+ "license": "Apache-2.0",
+ "dependencies": {
+ "@eslint/core": "^0.17.0",
+ "levn": "^0.4.1"
+ },
+ "engines": {
+ "node": "^18.18.0 || ^20.9.0 || >=21.1.0"
+ }
+ },
+ "node_modules/@humanfs/core": {
+ "version": "0.19.1",
+ "resolved": "https://registry.npmjs.org/@humanfs/core/-/core-0.19.1.tgz",
+ "integrity": "sha512-5DyQ4+1JEUzejeK1JGICcideyfUbGixgS9jNgex5nqkW+cY7WZhxBigmieN5Qnw9ZosSNVC9KQKyb+GUaGyKUA==",
+ "dev": true,
+ "license": "Apache-2.0",
+ "engines": {
+ "node": ">=18.18.0"
+ }
+ },
+ "node_modules/@humanfs/node": {
+ "version": "0.16.7",
+ "resolved": "https://registry.npmjs.org/@humanfs/node/-/node-0.16.7.tgz",
+ "integrity": "sha512-/zUx+yOsIrG4Y43Eh2peDeKCxlRt/gET6aHfaKpuq267qXdYDFViVHfMaLyygZOnl0kGWxFIgsBy8QFuTLUXEQ==",
+ "dev": true,
+ "license": "Apache-2.0",
+ "dependencies": {
+ "@humanfs/core": "^0.19.1",
+ "@humanwhocodes/retry": "^0.4.0"
+ },
+ "engines": {
+ "node": ">=18.18.0"
+ }
+ },
+ "node_modules/@humanwhocodes/module-importer": {
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/@humanwhocodes/module-importer/-/module-importer-1.0.1.tgz",
+ "integrity": "sha512-bxveV4V8v5Yb4ncFTT3rPSgZBOpCkjfK0y4oVVVJwIuDVBRMDXrPyXRL988i5ap9m9bnyEEjWfm5WkBmtffLfA==",
+ "dev": true,
+ "license": "Apache-2.0",
+ "engines": {
+ "node": ">=12.22"
+ },
+ "funding": {
+ "type": "github",
+ "url": "https://github.com/sponsors/nzakas"
+ }
+ },
+ "node_modules/@humanwhocodes/retry": {
+ "version": "0.4.3",
+ "resolved": "https://registry.npmjs.org/@humanwhocodes/retry/-/retry-0.4.3.tgz",
+ "integrity": "sha512-bV0Tgo9K4hfPCek+aMAn81RppFKv2ySDQeMoSZuvTASywNTnVJCArCZE2FWqpvIatKu7VMRLWlR1EazvVhDyhQ==",
+ "dev": true,
+ "license": "Apache-2.0",
+ "engines": {
+ "node": ">=18.18"
+ },
+ "funding": {
+ "type": "github",
+ "url": "https://github.com/sponsors/nzakas"
+ }
+ },
+ "node_modules/@jridgewell/gen-mapping": {
+ "version": "0.3.13",
+ "resolved": "https://registry.npmjs.org/@jridgewell/gen-mapping/-/gen-mapping-0.3.13.tgz",
+ "integrity": "sha512-2kkt/7niJ6MgEPxF0bYdQ6etZaA+fQvDcLKckhy1yIQOzaoKjBBjSj63/aLVjYE3qhRt5dvM+uUyfCg6UKCBbA==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@jridgewell/sourcemap-codec": "^1.5.0",
+ "@jridgewell/trace-mapping": "^0.3.24"
+ }
+ },
+ "node_modules/@jridgewell/remapping": {
+ "version": "2.3.5",
+ "resolved": "https://registry.npmjs.org/@jridgewell/remapping/-/remapping-2.3.5.tgz",
+ "integrity": "sha512-LI9u/+laYG4Ds1TDKSJW2YPrIlcVYOwi2fUC6xB43lueCjgxV4lffOCZCtYFiH6TNOX+tQKXx97T4IKHbhyHEQ==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@jridgewell/gen-mapping": "^0.3.5",
+ "@jridgewell/trace-mapping": "^0.3.24"
+ }
+ },
+ "node_modules/@jridgewell/resolve-uri": {
+ "version": "3.1.2",
+ "resolved": "https://registry.npmjs.org/@jridgewell/resolve-uri/-/resolve-uri-3.1.2.tgz",
+ "integrity": "sha512-bRISgCIjP20/tbWSPWMEi54QVPRZExkuD9lJL+UIxUKtwVJA8wW1Trb1jMs1RFXo1CBTNZ/5hpC9QvmKWdopKw==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=6.0.0"
+ }
+ },
+ "node_modules/@jridgewell/sourcemap-codec": {
+ "version": "1.5.5",
+ "resolved": "https://registry.npmjs.org/@jridgewell/sourcemap-codec/-/sourcemap-codec-1.5.5.tgz",
+ "integrity": "sha512-cYQ9310grqxueWbl+WuIUIaiUaDcj7WOq5fVhEljNVgRfOUhY9fy2zTvfoqWsnebh8Sl70VScFbICvJnLKB0Og==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/@jridgewell/trace-mapping": {
+ "version": "0.3.31",
+ "resolved": "https://registry.npmjs.org/@jridgewell/trace-mapping/-/trace-mapping-0.3.31.tgz",
+ "integrity": "sha512-zzNR+SdQSDJzc8joaeP8QQoCQr8NuYx2dIIytl1QeBEZHJ9uW6hebsrYgbz8hJwUQao3TWCMtmfV8Nu1twOLAw==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@jridgewell/resolve-uri": "^3.1.0",
+ "@jridgewell/sourcemap-codec": "^1.4.14"
+ }
+ },
+ "node_modules/@rc-component/async-validator": {
+ "version": "5.1.0",
+ "resolved": "https://registry.npmjs.org/@rc-component/async-validator/-/async-validator-5.1.0.tgz",
+ "integrity": "sha512-n4HcR5siNUXRX23nDizbZBQPO0ZM/5oTtmKZ6/eqL0L2bo747cklFdZGRN2f+c9qWGICwDzrhW0H7tE9PptdcA==",
+ "license": "MIT",
+ "dependencies": {
+ "@babel/runtime": "^7.24.4"
+ },
+ "engines": {
+ "node": ">=14.x"
+ }
+ },
+ "node_modules/@rc-component/cascader": {
+ "version": "1.11.0",
+ "resolved": "https://registry.npmjs.org/@rc-component/cascader/-/cascader-1.11.0.tgz",
+ "integrity": "sha512-VDiEsskThWi8l0/1Nquc9I4ytcMKQYAb9Jkm6wiX5O5fpcMRsm+b8OulBMbr/b4rFTl/2y2y4GdKqQ+2whD+XQ==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/select": "~1.5.0",
+ "@rc-component/tree": "~1.1.0",
+ "@rc-component/util": "^1.4.0",
+ "clsx": "^2.1.1"
+ },
+ "peerDependencies": {
+ "react": ">=18.0.0",
+ "react-dom": ">=18.0.0"
+ }
+ },
+ "node_modules/@rc-component/checkbox": {
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/@rc-component/checkbox/-/checkbox-1.0.1.tgz",
+ "integrity": "sha512-08yTH8m+bSm8TOqbybbJ9KiAuIATti6bDs2mVeSfu4QfEnyeF6X0enHVvD1NEAyuBWEAo56QtLe++MYs2D9XiQ==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/util": "^1.3.0",
+ "clsx": "^2.1.1"
+ },
+ "peerDependencies": {
+ "react": ">=16.9.0",
+ "react-dom": ">=16.9.0"
+ }
+ },
+ "node_modules/@rc-component/collapse": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/@rc-component/collapse/-/collapse-1.2.0.tgz",
+ "integrity": "sha512-ZRYSKSS39qsFx93p26bde7JUZJshsUBEQRlRXPuJYlAiNX0vyYlF5TsAm8JZN3LcF8XvKikdzPbgAtXSbkLUkw==",
+ "license": "MIT",
+ "dependencies": {
+ "@babel/runtime": "^7.10.1",
+ "@rc-component/motion": "^1.1.4",
+ "@rc-component/util": "^1.3.0",
+ "clsx": "^2.1.1"
+ },
+ "peerDependencies": {
+ "react": ">=18.0.0",
+ "react-dom": ">=18.0.0"
+ }
+ },
+ "node_modules/@rc-component/color-picker": {
+ "version": "3.0.3",
+ "resolved": "https://registry.npmjs.org/@rc-component/color-picker/-/color-picker-3.0.3.tgz",
+ "integrity": "sha512-V7gFF9O7o5XwIWafdbOtqI4BUUkEUkgdBwp6favy3xajMX/2dDqytFaiXlcwrpq6aRyPLp5dKLAG5RFKLXMeGA==",
+ "license": "MIT",
+ "dependencies": {
+ "@ant-design/fast-color": "^3.0.0",
+ "@rc-component/util": "^1.3.0",
+ "clsx": "^2.1.1"
+ },
+ "peerDependencies": {
+ "react": ">=16.9.0",
+ "react-dom": ">=16.9.0"
+ }
+ },
+ "node_modules/@rc-component/context": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/@rc-component/context/-/context-2.0.1.tgz",
+ "integrity": "sha512-HyZbYm47s/YqtP6pKXNMjPEMaukyg7P0qVfgMLzr7YiFNMHbK2fKTAGzms9ykfGHSfyf75nBbgWw+hHkp+VImw==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/util": "^1.3.0"
+ },
+ "peerDependencies": {
+ "react": ">=16.9.0",
+ "react-dom": ">=16.9.0"
+ }
+ },
+ "node_modules/@rc-component/dialog": {
+ "version": "1.8.0",
+ "resolved": "https://registry.npmjs.org/@rc-component/dialog/-/dialog-1.8.0.tgz",
+ "integrity": "sha512-zGksezfULKixYCIWctIhUC2M3zUJrc81JKWbi9dJrQdPaM7J+8vSOrhLoOHHkZFpBpb2Ri6JqnSuGYb2N+FrRA==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/motion": "^1.1.3",
+ "@rc-component/portal": "^2.1.0",
+ "@rc-component/util": "^1.5.0",
+ "clsx": "^2.1.1"
+ },
+ "peerDependencies": {
+ "react": ">=18.0.0",
+ "react-dom": ">=18.0.0"
+ }
+ },
+ "node_modules/@rc-component/drawer": {
+ "version": "1.4.0",
+ "resolved": "https://registry.npmjs.org/@rc-component/drawer/-/drawer-1.4.0.tgz",
+ "integrity": "sha512-Zr1j1LRLDauz4a5JXHEmeYQfvEzfh4CddNa7tszyJnfd5GySYdZ5qLO63Tt2tgG4k+qi6tkFDKmcT46ikZfzbQ==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/motion": "^1.1.4",
+ "@rc-component/portal": "^2.1.3",
+ "@rc-component/util": "^1.2.1",
+ "clsx": "^2.1.1"
+ },
+ "peerDependencies": {
+ "react": ">=18.0.0",
+ "react-dom": ">=18.0.0"
+ }
+ },
+ "node_modules/@rc-component/dropdown": {
+ "version": "1.0.2",
+ "resolved": "https://registry.npmjs.org/@rc-component/dropdown/-/dropdown-1.0.2.tgz",
+ "integrity": "sha512-6PY2ecUSYhDPhkNHHb4wfeAya04WhpmUSKzdR60G+kMNVUCX2vjT/AgTS0Lz0I/K6xrPMJ3enQbwVpeN3sHCgg==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/trigger": "^3.0.0",
+ "@rc-component/util": "^1.2.1",
+ "clsx": "^2.1.1"
+ },
+ "peerDependencies": {
+ "react": ">=16.11.0",
+ "react-dom": ">=16.11.0"
+ }
+ },
+ "node_modules/@rc-component/form": {
+ "version": "1.6.2",
+ "resolved": "https://registry.npmjs.org/@rc-component/form/-/form-1.6.2.tgz",
+ "integrity": "sha512-OgIn2RAoaSBqaIgzJf/X6iflIa9LpTozci1lagLBdURDFhGA370v0+T0tXxOi8YShMjTha531sFhwtnrv+EJaQ==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/async-validator": "^5.1.0",
+ "@rc-component/util": "^1.6.2",
+ "clsx": "^2.1.1"
+ },
+ "engines": {
+ "node": ">=8.x"
+ },
+ "peerDependencies": {
+ "react": ">=16.9.0",
+ "react-dom": ">=16.9.0"
+ }
+ },
+ "node_modules/@rc-component/image": {
+ "version": "1.6.0",
+ "resolved": "https://registry.npmjs.org/@rc-component/image/-/image-1.6.0.tgz",
+ "integrity": "sha512-tSfn2ZE/oP082g4QIOxeehkmgnXB7R+5AFj/lIFr4k7pEuxHBdyGIq9axoCY9qea8NN0DY6p4IB/F07tLqaT5A==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/motion": "^1.0.0",
+ "@rc-component/portal": "^2.1.2",
+ "@rc-component/util": "^1.3.0",
+ "clsx": "^2.1.1"
+ },
+ "peerDependencies": {
+ "react": ">=16.9.0",
+ "react-dom": ">=16.9.0"
+ }
+ },
+ "node_modules/@rc-component/input": {
+ "version": "1.1.2",
+ "resolved": "https://registry.npmjs.org/@rc-component/input/-/input-1.1.2.tgz",
+ "integrity": "sha512-Q61IMR47piUBudgixJ30CciKIy9b1H95qe7GgEKOmSJVJXvFRWJllJfQry9tif+MX2cWFXWJf/RXz4kaCeq/Fg==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/util": "^1.4.0",
+ "clsx": "^2.1.1"
+ },
+ "peerDependencies": {
+ "react": ">=16.0.0",
+ "react-dom": ">=16.0.0"
+ }
+ },
+ "node_modules/@rc-component/input-number": {
+ "version": "1.6.2",
+ "resolved": "https://registry.npmjs.org/@rc-component/input-number/-/input-number-1.6.2.tgz",
+ "integrity": "sha512-Gjcq7meZlCOiWN1t1xCC+7/s85humHVokTBI7PJgTfoyw5OWF74y3e6P8PHX104g9+b54jsodFIzyaj6p8LI9w==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/mini-decimal": "^1.0.1",
+ "@rc-component/util": "^1.4.0",
+ "clsx": "^2.1.1"
+ },
+ "peerDependencies": {
+ "react": ">=16.9.0",
+ "react-dom": ">=16.9.0"
+ }
+ },
+ "node_modules/@rc-component/mentions": {
+ "version": "1.6.0",
+ "resolved": "https://registry.npmjs.org/@rc-component/mentions/-/mentions-1.6.0.tgz",
+ "integrity": "sha512-KIkQNP6habNuTsLhUv0UGEOwG67tlmE7KNIJoQZZNggEZl5lQJTytFDb69sl5CK3TDdISCTjKP3nGEBKgT61CQ==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/input": "~1.1.0",
+ "@rc-component/menu": "~1.2.0",
+ "@rc-component/textarea": "~1.1.0",
+ "@rc-component/trigger": "^3.0.0",
+ "@rc-component/util": "^1.3.0",
+ "clsx": "^2.1.1"
+ },
+ "peerDependencies": {
+ "react": ">=16.9.0",
+ "react-dom": ">=16.9.0"
+ }
+ },
+ "node_modules/@rc-component/menu": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/@rc-component/menu/-/menu-1.2.0.tgz",
+ "integrity": "sha512-VWwDuhvYHSnTGj4n6bV3ISrLACcPAzdPOq3d0BzkeiM5cve8BEYfvkEhNoM0PLzv51jpcejeyrLXeMVIJ+QJlg==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/motion": "^1.1.4",
+ "@rc-component/overflow": "^1.0.0",
+ "@rc-component/trigger": "^3.0.0",
+ "@rc-component/util": "^1.3.0",
+ "clsx": "^2.1.1"
+ },
+ "peerDependencies": {
+ "react": ">=16.9.0",
+ "react-dom": ">=16.9.0"
+ }
+ },
+ "node_modules/@rc-component/mini-decimal": {
+ "version": "1.1.0",
+ "resolved": "https://registry.npmjs.org/@rc-component/mini-decimal/-/mini-decimal-1.1.0.tgz",
+ "integrity": "sha512-jS4E7T9Li2GuYwI6PyiVXmxTiM6b07rlD9Ge8uGZSCz3WlzcG5ZK7g5bbuKNeZ9pgUuPK/5guV781ujdVpm4HQ==",
+ "license": "MIT",
+ "dependencies": {
+ "@babel/runtime": "^7.18.0"
+ },
+ "engines": {
+ "node": ">=8.x"
+ }
+ },
+ "node_modules/@rc-component/motion": {
+ "version": "1.1.6",
+ "resolved": "https://registry.npmjs.org/@rc-component/motion/-/motion-1.1.6.tgz",
+ "integrity": "sha512-aEQobs/YA0kqRvHIPjQvOytdtdRVyhf/uXAal4chBjxDu6odHckExJzjn2D+Ju1aKK6hx3pAs6BXdV9+86xkgQ==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/util": "^1.2.0",
+ "clsx": "^2.1.1"
+ },
+ "peerDependencies": {
+ "react": ">=16.9.0",
+ "react-dom": ">=16.9.0"
+ }
+ },
+ "node_modules/@rc-component/mutate-observer": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/@rc-component/mutate-observer/-/mutate-observer-2.0.1.tgz",
+ "integrity": "sha512-AyarjoLU5YlxuValRi+w8JRH2Z84TBbFO2RoGWz9d8bSu0FqT8DtugH3xC3BV7mUwlmROFauyWuXFuq4IFbH+w==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/util": "^1.2.0"
+ },
+ "engines": {
+ "node": ">=8.x"
+ },
+ "peerDependencies": {
+ "react": ">=16.9.0",
+ "react-dom": ">=16.9.0"
+ }
+ },
+ "node_modules/@rc-component/notification": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/@rc-component/notification/-/notification-1.2.0.tgz",
+ "integrity": "sha512-OX3J+zVU7rvoJCikjrfW7qOUp7zlDeFBK2eA3SFbGSkDqo63Sl4Ss8A04kFP+fxHSxMDIS9jYVEZtU1FNCFuBA==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/motion": "^1.1.4",
+ "@rc-component/util": "^1.2.1",
+ "clsx": "^2.1.1"
+ },
+ "engines": {
+ "node": ">=8.x"
+ },
+ "peerDependencies": {
+ "react": ">=16.9.0",
+ "react-dom": ">=16.9.0"
+ }
+ },
+ "node_modules/@rc-component/overflow": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/@rc-component/overflow/-/overflow-1.0.0.tgz",
+ "integrity": "sha512-GSlBeoE0XTBi5cf3zl8Qh7Uqhn7v8RrlJ8ajeVpEkNe94HWy5l5BQ0Mwn2TVUq9gdgbfEMUmTX7tJFAg7mz0Rw==",
+ "license": "MIT",
+ "dependencies": {
+ "@babel/runtime": "^7.11.1",
+ "@rc-component/resize-observer": "^1.0.1",
+ "@rc-component/util": "^1.4.0",
+ "clsx": "^2.1.1"
+ },
+ "peerDependencies": {
+ "react": ">=16.9.0",
+ "react-dom": ">=16.9.0"
+ }
+ },
+ "node_modules/@rc-component/pagination": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/@rc-component/pagination/-/pagination-1.2.0.tgz",
+ "integrity": "sha512-YcpUFE8dMLfSo6OARJlK6DbHHvrxz7pMGPGmC/caZSJJz6HRKHC1RPP001PRHCvG9Z/veD039uOQmazVuLJzlw==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/util": "^1.3.0",
+ "clsx": "^2.1.1"
+ },
+ "peerDependencies": {
+ "react": ">=16.9.0",
+ "react-dom": ">=16.9.0"
+ }
+ },
+ "node_modules/@rc-component/picker": {
+ "version": "1.9.0",
+ "resolved": "https://registry.npmjs.org/@rc-component/picker/-/picker-1.9.0.tgz",
+ "integrity": "sha512-OLisdk8AWVCG9goBU1dWzuH5QlBQk8jktmQ6p0/IyBFwdKGwyIZOSjnBYo8hooHiTdl0lU+wGf/OfMtVBw02KQ==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/overflow": "^1.0.0",
+ "@rc-component/resize-observer": "^1.0.0",
+ "@rc-component/trigger": "^3.6.15",
+ "@rc-component/util": "^1.3.0",
+ "clsx": "^2.1.1"
+ },
+ "engines": {
+ "node": ">=12.x"
+ },
+ "peerDependencies": {
+ "date-fns": ">= 2.x",
+ "dayjs": ">= 1.x",
+ "luxon": ">= 3.x",
+ "moment": ">= 2.x",
+ "react": ">=16.9.0",
+ "react-dom": ">=16.9.0"
+ },
+ "peerDependenciesMeta": {
+ "date-fns": {
+ "optional": true
+ },
+ "dayjs": {
+ "optional": true
+ },
+ "luxon": {
+ "optional": true
+ },
+ "moment": {
+ "optional": true
+ }
+ }
+ },
+ "node_modules/@rc-component/portal": {
+ "version": "2.2.0",
+ "resolved": "https://registry.npmjs.org/@rc-component/portal/-/portal-2.2.0.tgz",
+ "integrity": "sha512-oc6FlA+uXCMiwArHsJyHcIkX4q6uKyndrPol2eWX8YPkAnztHOPsFIRtmWG4BMlGE5h7YIRE3NiaJ5VS8Lb1QQ==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/util": "^1.2.1",
+ "clsx": "^2.1.1"
+ },
+ "engines": {
+ "node": ">=12.x"
+ },
+ "peerDependencies": {
+ "react": ">=18.0.0",
+ "react-dom": ">=18.0.0"
+ }
+ },
+ "node_modules/@rc-component/progress": {
+ "version": "1.0.2",
+ "resolved": "https://registry.npmjs.org/@rc-component/progress/-/progress-1.0.2.tgz",
+ "integrity": "sha512-WZUnH9eGxH1+xodZKqdrHke59uyGZSWgj5HBM5Kwk5BrTMuAORO7VJ2IP5Qbm9aH3n9x3IcesqHHR0NWPBC7fQ==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/util": "^1.2.1",
+ "clsx": "^2.1.1"
+ },
+ "peerDependencies": {
+ "react": ">=16.9.0",
+ "react-dom": ">=16.9.0"
+ }
+ },
+ "node_modules/@rc-component/qrcode": {
+ "version": "1.1.1",
+ "resolved": "https://registry.npmjs.org/@rc-component/qrcode/-/qrcode-1.1.1.tgz",
+ "integrity": "sha512-LfLGNymzKdUPjXUbRP+xOhIWY4jQ+YMj5MmWAcgcAq1Ij8XP7tRmAXqyuv96XvLUBE/5cA8hLFl9eO1JQMujrA==",
+ "license": "MIT",
+ "dependencies": {
+ "@babel/runtime": "^7.24.7"
+ },
+ "engines": {
+ "node": ">=8.x"
+ },
+ "peerDependencies": {
+ "react": ">=16.9.0",
+ "react-dom": ">=16.9.0"
+ }
+ },
+ "node_modules/@rc-component/rate": {
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/@rc-component/rate/-/rate-1.0.1.tgz",
+ "integrity": "sha512-bkXxeBqDpl5IOC7yL7GcSYjQx9G8H+6kLYQnNZWeBYq2OYIv1MONd6mqKTjnnJYpV0cQIU2z3atdW0j1kttpTw==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/util": "^1.3.0",
+ "clsx": "^2.1.1"
+ },
+ "engines": {
+ "node": ">=8.x"
+ },
+ "peerDependencies": {
+ "react": ">=16.9.0",
+ "react-dom": ">=16.9.0"
+ }
+ },
+ "node_modules/@rc-component/resize-observer": {
+ "version": "1.1.1",
+ "resolved": "https://registry.npmjs.org/@rc-component/resize-observer/-/resize-observer-1.1.1.tgz",
+ "integrity": "sha512-NfXXMmiR+SmUuKE1NwJESzEUYUFWIDUn2uXpxCTOLwiRUUakd62DRNFjRJArgzyFW8S5rsL4aX5XlyIXyC/vRA==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/util": "^1.2.0"
+ },
+ "peerDependencies": {
+ "react": ">=16.9.0",
+ "react-dom": ">=16.9.0"
+ }
+ },
+ "node_modules/@rc-component/segmented": {
+ "version": "1.3.0",
+ "resolved": "https://registry.npmjs.org/@rc-component/segmented/-/segmented-1.3.0.tgz",
+ "integrity": "sha512-5J/bJ01mbDnoA6P/FW8SxUvKn+OgUSTZJPzCNnTBntG50tzoP7DydGhqxp7ggZXZls7me3mc2EQDXakU3iTVFg==",
+ "license": "MIT",
+ "dependencies": {
+ "@babel/runtime": "^7.11.1",
+ "@rc-component/motion": "^1.1.4",
+ "@rc-component/util": "^1.3.0",
+ "clsx": "^2.1.1"
+ },
+ "peerDependencies": {
+ "react": ">=16.0.0",
+ "react-dom": ">=16.0.0"
+ }
+ },
+ "node_modules/@rc-component/select": {
+ "version": "1.5.0",
+ "resolved": "https://registry.npmjs.org/@rc-component/select/-/select-1.5.0.tgz",
+ "integrity": "sha512-Zz0hpToAfOdWo/1jj3dW5iooBNU8F6fVgVaYN4Jy1SL3Xcx2OO+IqiQnxqk/PjY6hg1HVt7LjkkrYvpJQyZxoQ==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/overflow": "^1.0.0",
+ "@rc-component/trigger": "^3.0.0",
+ "@rc-component/util": "^1.3.0",
+ "@rc-component/virtual-list": "^1.0.1",
+ "clsx": "^2.1.1"
+ },
+ "engines": {
+ "node": ">=8.x"
+ },
+ "peerDependencies": {
+ "react": "*",
+ "react-dom": "*"
+ }
+ },
+ "node_modules/@rc-component/slider": {
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/@rc-component/slider/-/slider-1.0.1.tgz",
+ "integrity": "sha512-uDhEPU1z3WDfCJhaL9jfd2ha/Eqpdfxsn0Zb0Xcq1NGQAman0TWaR37OWp2vVXEOdV2y0njSILTMpTfPV1454g==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/util": "^1.3.0",
+ "clsx": "^2.1.1"
+ },
+ "engines": {
+ "node": ">=8.x"
+ },
+ "peerDependencies": {
+ "react": ">=16.9.0",
+ "react-dom": ">=16.9.0"
+ }
+ },
+ "node_modules/@rc-component/steps": {
+ "version": "1.2.2",
+ "resolved": "https://registry.npmjs.org/@rc-component/steps/-/steps-1.2.2.tgz",
+ "integrity": "sha512-/yVIZ00gDYYPHSY0JP+M+s3ZvuXLu2f9rEjQqiUDs7EcYsUYrpJ/1bLj9aI9R7MBR3fu/NGh6RM9u2qGfqp+Nw==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/util": "^1.2.1",
+ "clsx": "^2.1.1"
+ },
+ "engines": {
+ "node": ">=8.x"
+ },
+ "peerDependencies": {
+ "react": ">=16.9.0",
+ "react-dom": ">=16.9.0"
+ }
+ },
+ "node_modules/@rc-component/switch": {
+ "version": "1.0.3",
+ "resolved": "https://registry.npmjs.org/@rc-component/switch/-/switch-1.0.3.tgz",
+ "integrity": "sha512-Jgi+EbOBquje/XNdofr7xbJQZPYJP+BlPfR0h+WN4zFkdtB2EWqEfvkXJWeipflwjWip0/17rNbxEAqs8hVHfw==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/util": "^1.3.0",
+ "clsx": "^2.1.1"
+ },
+ "peerDependencies": {
+ "react": ">=16.9.0",
+ "react-dom": ">=16.9.0"
+ }
+ },
+ "node_modules/@rc-component/table": {
+ "version": "1.9.1",
+ "resolved": "https://registry.npmjs.org/@rc-component/table/-/table-1.9.1.tgz",
+ "integrity": "sha512-FVI5ZS/GdB3BcgexfCYKi3iHhZS3Fr59EtsxORszYGrfpH1eWr33eDNSYkVfLI6tfJ7vftJDd9D5apfFWqkdJg==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/context": "^2.0.1",
+ "@rc-component/resize-observer": "^1.0.0",
+ "@rc-component/util": "^1.1.0",
+ "@rc-component/virtual-list": "^1.0.1",
+ "clsx": "^2.1.1"
+ },
+ "engines": {
+ "node": ">=8.x"
+ },
+ "peerDependencies": {
+ "react": ">=18.0.0",
+ "react-dom": ">=18.0.0"
+ }
+ },
+ "node_modules/@rc-component/tabs": {
+ "version": "1.7.0",
+ "resolved": "https://registry.npmjs.org/@rc-component/tabs/-/tabs-1.7.0.tgz",
+ "integrity": "sha512-J48cs2iBi7Ho3nptBxxIqizEliUC+ExE23faspUQKGQ550vaBlv3aGF8Epv/UB1vFWeoJDTW/dNzgIU0Qj5i/w==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/dropdown": "~1.0.0",
+ "@rc-component/menu": "~1.2.0",
+ "@rc-component/motion": "^1.1.3",
+ "@rc-component/resize-observer": "^1.0.0",
+ "@rc-component/util": "^1.3.0",
+ "clsx": "^2.1.1"
+ },
+ "engines": {
+ "node": ">=8.x"
+ },
+ "peerDependencies": {
+ "react": ">=16.9.0",
+ "react-dom": ">=16.9.0"
+ }
+ },
+ "node_modules/@rc-component/textarea": {
+ "version": "1.1.2",
+ "resolved": "https://registry.npmjs.org/@rc-component/textarea/-/textarea-1.1.2.tgz",
+ "integrity": "sha512-9rMUEODWZDMovfScIEHXWlVZuPljZ2pd1LKNjslJVitn4SldEzq5vO1CL3yy3Dnib6zZal2r2DPtjy84VVpF6A==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/input": "~1.1.0",
+ "@rc-component/resize-observer": "^1.0.0",
+ "@rc-component/util": "^1.3.0",
+ "clsx": "^2.1.1"
+ },
+ "peerDependencies": {
+ "react": ">=16.9.0",
+ "react-dom": ">=16.9.0"
+ }
+ },
+ "node_modules/@rc-component/tooltip": {
+ "version": "1.4.0",
+ "resolved": "https://registry.npmjs.org/@rc-component/tooltip/-/tooltip-1.4.0.tgz",
+ "integrity": "sha512-8Rx5DCctIlLI4raR0I0xHjVTf1aF48+gKCNeAAo5bmF5VoR5YED+A/XEqzXv9KKqrJDRcd3Wndpxh2hyzrTtSg==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/trigger": "^3.7.1",
+ "@rc-component/util": "^1.3.0",
+ "clsx": "^2.1.1"
+ },
+ "peerDependencies": {
+ "react": ">=18.0.0",
+ "react-dom": ">=18.0.0"
+ }
+ },
+ "node_modules/@rc-component/tour": {
+ "version": "2.3.0",
+ "resolved": "https://registry.npmjs.org/@rc-component/tour/-/tour-2.3.0.tgz",
+ "integrity": "sha512-K04K9r32kUC+auBSQfr+Fss4SpSIS9JGe56oq/ALAX0p+i2ylYOI1MgR83yBY7v96eO6ZFXcM/igCQmubps0Ow==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/portal": "^2.2.0",
+ "@rc-component/trigger": "^3.0.0",
+ "@rc-component/util": "^1.7.0",
+ "clsx": "^2.1.1"
+ },
+ "engines": {
+ "node": ">=8.x"
+ },
+ "peerDependencies": {
+ "react": ">=16.9.0",
+ "react-dom": ">=16.9.0"
+ }
+ },
+ "node_modules/@rc-component/tree": {
+ "version": "1.1.0",
+ "resolved": "https://registry.npmjs.org/@rc-component/tree/-/tree-1.1.0.tgz",
+ "integrity": "sha512-HZs3aOlvFgQdgrmURRc/f4IujiNBf4DdEeXUlkS0lPoLlx9RoqsZcF0caXIAMVb+NaWqKtGQDnrH8hqLCN5zlA==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/motion": "^1.0.0",
+ "@rc-component/util": "^1.2.1",
+ "@rc-component/virtual-list": "^1.0.1",
+ "clsx": "^2.1.1"
+ },
+ "engines": {
+ "node": ">=10.x"
+ },
+ "peerDependencies": {
+ "react": "*",
+ "react-dom": "*"
+ }
+ },
+ "node_modules/@rc-component/tree-select": {
+ "version": "1.6.0",
+ "resolved": "https://registry.npmjs.org/@rc-component/tree-select/-/tree-select-1.6.0.tgz",
+ "integrity": "sha512-UvEGmZT+gcVvRwImAZg3/sXw9nUdn4FmCs1rSIMWjEXEIAo0dTGmIyWuLCvs+1rGe9AZ7CHMPiQUEbdadwV0fw==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/select": "~1.5.0",
+ "@rc-component/tree": "~1.1.0",
+ "@rc-component/util": "^1.4.0",
+ "clsx": "^2.1.1"
+ },
+ "peerDependencies": {
+ "react": "*",
+ "react-dom": "*"
+ }
+ },
+ "node_modules/@rc-component/trigger": {
+ "version": "3.9.0",
+ "resolved": "https://registry.npmjs.org/@rc-component/trigger/-/trigger-3.9.0.tgz",
+ "integrity": "sha512-X8btpwfrT27AgrZVOz4swclhEHTZcqaHeQMXXBgveagOiakTa36uObXbdwerXffgV8G9dH1fAAE0DHtVQs8EHg==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/motion": "^1.1.4",
+ "@rc-component/portal": "^2.2.0",
+ "@rc-component/resize-observer": "^1.1.1",
+ "@rc-component/util": "^1.2.1",
+ "clsx": "^2.1.1"
+ },
+ "engines": {
+ "node": ">=8.x"
+ },
+ "peerDependencies": {
+ "react": ">=18.0.0",
+ "react-dom": ">=18.0.0"
+ }
+ },
+ "node_modules/@rc-component/upload": {
+ "version": "1.1.0",
+ "resolved": "https://registry.npmjs.org/@rc-component/upload/-/upload-1.1.0.tgz",
+ "integrity": "sha512-LIBV90mAnUE6VK5N4QvForoxZc4XqEYZimcp7fk+lkE4XwHHyJWxpIXQQwMU8hJM+YwBbsoZkGksL1sISWHQxw==",
+ "license": "MIT",
+ "dependencies": {
+ "@rc-component/util": "^1.3.0",
+ "clsx": "^2.1.1"
+ },
+ "peerDependencies": {
+ "react": ">=16.9.0",
+ "react-dom": ">=16.9.0"
+ }
+ },
+ "node_modules/@rc-component/util": {
+ "version": "1.7.0",
+ "resolved": "https://registry.npmjs.org/@rc-component/util/-/util-1.7.0.tgz",
+ "integrity": "sha512-tIvIGj4Vl6fsZFvWSkYw9sAfiCKUXMyhVz6kpKyZbwyZyRPqv2vxYZROdaO1VB4gqTNvUZFXh6i3APUiterw5g==",
+ "license": "MIT",
+ "dependencies": {
+ "is-mobile": "^5.0.0",
+ "react-is": "^18.2.0"
+ },
+ "peerDependencies": {
+ "react": ">=18.0.0",
+ "react-dom": ">=18.0.0"
+ }
+ },
+ "node_modules/@rc-component/virtual-list": {
+ "version": "1.0.2",
+ "resolved": "https://registry.npmjs.org/@rc-component/virtual-list/-/virtual-list-1.0.2.tgz",
+ "integrity": "sha512-uvTol/mH74FYsn5loDGJxo+7kjkO4i+y4j87Re1pxJBs0FaeuMuLRzQRGaXwnMcV1CxpZLi2Z56Rerj2M00fjQ==",
+ "license": "MIT",
+ "dependencies": {
+ "@babel/runtime": "^7.20.0",
+ "@rc-component/resize-observer": "^1.0.1",
+ "@rc-component/util": "^1.4.0",
+ "clsx": "^2.1.1"
+ },
+ "engines": {
+ "node": ">=8.x"
+ },
+ "peerDependencies": {
+ "react": ">=16.9.0",
+ "react-dom": ">=16.9.0"
+ }
+ },
+ "node_modules/@rolldown/pluginutils": {
+ "version": "1.0.0-beta.53",
+ "resolved": "https://registry.npmjs.org/@rolldown/pluginutils/-/pluginutils-1.0.0-beta.53.tgz",
+ "integrity": "sha512-vENRlFU4YbrwVqNDZ7fLvy+JR1CRkyr01jhSiDpE1u6py3OMzQfztQU2jxykW3ALNxO4kSlqIDeYyD0Y9RcQeQ==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/@rollup/rollup-android-arm-eabi": {
+ "version": "4.55.2",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-android-arm-eabi/-/rollup-android-arm-eabi-4.55.2.tgz",
+ "integrity": "sha512-21J6xzayjy3O6NdnlO6aXi/urvSRjm6nCI6+nF6ra2YofKruGixN9kfT+dt55HVNwfDmpDHJcaS3JuP/boNnlA==",
+ "cpu": [
+ "arm"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "android"
+ ]
+ },
+ "node_modules/@rollup/rollup-android-arm64": {
+ "version": "4.55.2",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-android-arm64/-/rollup-android-arm64-4.55.2.tgz",
+ "integrity": "sha512-eXBg7ibkNUZ+sTwbFiDKou0BAckeV6kIigK7y5Ko4mB/5A1KLhuzEKovsmfvsL8mQorkoincMFGnQuIT92SKqA==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "android"
+ ]
+ },
+ "node_modules/@rollup/rollup-darwin-arm64": {
+ "version": "4.55.2",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-darwin-arm64/-/rollup-darwin-arm64-4.55.2.tgz",
+ "integrity": "sha512-UCbaTklREjrc5U47ypLulAgg4njaqfOVLU18VrCrI+6E5MQjuG0lSWaqLlAJwsD7NpFV249XgB0Bi37Zh5Sz4g==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "darwin"
+ ]
+ },
+ "node_modules/@rollup/rollup-darwin-x64": {
+ "version": "4.55.2",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-darwin-x64/-/rollup-darwin-x64-4.55.2.tgz",
+ "integrity": "sha512-dP67MA0cCMHFT2g5XyjtpVOtp7y4UyUxN3dhLdt11at5cPKnSm4lY+EhwNvDXIMzAMIo2KU+mc9wxaAQJTn7sQ==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "darwin"
+ ]
+ },
+ "node_modules/@rollup/rollup-freebsd-arm64": {
+ "version": "4.55.2",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-freebsd-arm64/-/rollup-freebsd-arm64-4.55.2.tgz",
+ "integrity": "sha512-WDUPLUwfYV9G1yxNRJdXcvISW15mpvod1Wv3ok+Ws93w1HjIVmCIFxsG2DquO+3usMNCpJQ0wqO+3GhFdl6Fow==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "freebsd"
+ ]
+ },
+ "node_modules/@rollup/rollup-freebsd-x64": {
+ "version": "4.55.2",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-freebsd-x64/-/rollup-freebsd-x64-4.55.2.tgz",
+ "integrity": "sha512-Ng95wtHVEulRwn7R0tMrlUuiLVL/HXA8Lt/MYVpy88+s5ikpntzZba1qEulTuPnPIZuOPcW9wNEiqvZxZmgmqQ==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "freebsd"
+ ]
+ },
+ "node_modules/@rollup/rollup-linux-arm-gnueabihf": {
+ "version": "4.55.2",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-arm-gnueabihf/-/rollup-linux-arm-gnueabihf-4.55.2.tgz",
+ "integrity": "sha512-AEXMESUDWWGqD6LwO/HkqCZgUE1VCJ1OhbvYGsfqX2Y6w5quSXuyoy/Fg3nRqiwro+cJYFxiw5v4kB2ZDLhxrw==",
+ "cpu": [
+ "arm"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ]
+ },
+ "node_modules/@rollup/rollup-linux-arm-musleabihf": {
+ "version": "4.55.2",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-arm-musleabihf/-/rollup-linux-arm-musleabihf-4.55.2.tgz",
+ "integrity": "sha512-ZV7EljjBDwBBBSv570VWj0hiNTdHt9uGznDtznBB4Caj3ch5rgD4I2K1GQrtbvJ/QiB+663lLgOdcADMNVC29Q==",
+ "cpu": [
+ "arm"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ]
+ },
+ "node_modules/@rollup/rollup-linux-arm64-gnu": {
+ "version": "4.55.2",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-arm64-gnu/-/rollup-linux-arm64-gnu-4.55.2.tgz",
+ "integrity": "sha512-uvjwc8NtQVPAJtq4Tt7Q49FOodjfbf6NpqXyW/rjXoV+iZ3EJAHLNAnKT5UJBc6ffQVgmXTUL2ifYiLABlGFqA==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ]
+ },
+ "node_modules/@rollup/rollup-linux-arm64-musl": {
+ "version": "4.55.2",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-arm64-musl/-/rollup-linux-arm64-musl-4.55.2.tgz",
+ "integrity": "sha512-s3KoWVNnye9mm/2WpOZ3JeUiediUVw6AvY/H7jNA6qgKA2V2aM25lMkVarTDfiicn/DLq3O0a81jncXszoyCFA==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ]
+ },
+ "node_modules/@rollup/rollup-linux-loong64-gnu": {
+ "version": "4.55.2",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-loong64-gnu/-/rollup-linux-loong64-gnu-4.55.2.tgz",
+ "integrity": "sha512-gi21faacK+J8aVSyAUptML9VQN26JRxe484IbF+h3hpG+sNVoMXPduhREz2CcYr5my0NE3MjVvQ5bMKX71pfVA==",
+ "cpu": [
+ "loong64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ]
+ },
+ "node_modules/@rollup/rollup-linux-loong64-musl": {
+ "version": "4.55.2",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-loong64-musl/-/rollup-linux-loong64-musl-4.55.2.tgz",
+ "integrity": "sha512-qSlWiXnVaS/ceqXNfnoFZh4IiCA0EwvCivivTGbEu1qv2o+WTHpn1zNmCTAoOG5QaVr2/yhCoLScQtc/7RxshA==",
+ "cpu": [
+ "loong64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ]
+ },
+ "node_modules/@rollup/rollup-linux-ppc64-gnu": {
+ "version": "4.55.2",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-ppc64-gnu/-/rollup-linux-ppc64-gnu-4.55.2.tgz",
+ "integrity": "sha512-rPyuLFNoF1B0+wolH277E780NUKf+KoEDb3OyoLbAO18BbeKi++YN6gC/zuJoPPDlQRL3fIxHxCxVEWiem2yXw==",
+ "cpu": [
+ "ppc64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ]
+ },
+ "node_modules/@rollup/rollup-linux-ppc64-musl": {
+ "version": "4.55.2",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-ppc64-musl/-/rollup-linux-ppc64-musl-4.55.2.tgz",
+ "integrity": "sha512-g+0ZLMook31iWV4PvqKU0i9E78gaZgYpSrYPed/4Bu+nGTgfOPtfs1h11tSSRPXSjC5EzLTjV/1A7L2Vr8pJoQ==",
+ "cpu": [
+ "ppc64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ]
+ },
+ "node_modules/@rollup/rollup-linux-riscv64-gnu": {
+ "version": "4.55.2",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-riscv64-gnu/-/rollup-linux-riscv64-gnu-4.55.2.tgz",
+ "integrity": "sha512-i+sGeRGsjKZcQRh3BRfpLsM3LX3bi4AoEVqmGDyc50L6KfYsN45wVCSz70iQMwPWr3E5opSiLOwsC9WB4/1pqg==",
+ "cpu": [
+ "riscv64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ]
+ },
+ "node_modules/@rollup/rollup-linux-riscv64-musl": {
+ "version": "4.55.2",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-riscv64-musl/-/rollup-linux-riscv64-musl-4.55.2.tgz",
+ "integrity": "sha512-C1vLcKc4MfFV6I0aWsC7B2Y9QcsiEcvKkfxprwkPfLaN8hQf0/fKHwSF2lcYzA9g4imqnhic729VB9Fo70HO3Q==",
+ "cpu": [
+ "riscv64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ]
+ },
+ "node_modules/@rollup/rollup-linux-s390x-gnu": {
+ "version": "4.55.2",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-s390x-gnu/-/rollup-linux-s390x-gnu-4.55.2.tgz",
+ "integrity": "sha512-68gHUK/howpQjh7g7hlD9DvTTt4sNLp1Bb+Yzw2Ki0xvscm2cOdCLZNJNhd2jW8lsTPrHAHuF751BygifW4bkQ==",
+ "cpu": [
+ "s390x"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ]
+ },
+ "node_modules/@rollup/rollup-linux-x64-gnu": {
+ "version": "4.55.2",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-x64-gnu/-/rollup-linux-x64-gnu-4.55.2.tgz",
+ "integrity": "sha512-1e30XAuaBP1MAizaOBApsgeGZge2/Byd6wV4a8oa6jPdHELbRHBiw7wvo4dp7Ie2PE8TZT4pj9RLGZv9N4qwlw==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ]
+ },
+ "node_modules/@rollup/rollup-linux-x64-musl": {
+ "version": "4.55.2",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-x64-musl/-/rollup-linux-x64-musl-4.55.2.tgz",
+ "integrity": "sha512-4BJucJBGbuGnH6q7kpPqGJGzZnYrpAzRd60HQSt3OpX/6/YVgSsJnNzR8Ot74io50SeVT4CtCWe/RYIAymFPwA==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ]
+ },
+ "node_modules/@rollup/rollup-openbsd-x64": {
+ "version": "4.55.2",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-openbsd-x64/-/rollup-openbsd-x64-4.55.2.tgz",
+ "integrity": "sha512-cT2MmXySMo58ENv8p6/O6wI/h/gLnD3D6JoajwXFZH6X9jz4hARqUhWpGuQhOgLNXscfZYRQMJvZDtWNzMAIDw==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "openbsd"
+ ]
+ },
+ "node_modules/@rollup/rollup-openharmony-arm64": {
+ "version": "4.55.2",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-openharmony-arm64/-/rollup-openharmony-arm64-4.55.2.tgz",
+ "integrity": "sha512-sZnyUgGkuzIXaK3jNMPmUIyJrxu/PjmATQrocpGA1WbCPX8H5tfGgRSuYtqBYAvLuIGp8SPRb1O4d1Fkb5fXaQ==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "openharmony"
+ ]
+ },
+ "node_modules/@rollup/rollup-win32-arm64-msvc": {
+ "version": "4.55.2",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-win32-arm64-msvc/-/rollup-win32-arm64-msvc-4.55.2.tgz",
+ "integrity": "sha512-sDpFbenhmWjNcEbBcoTV0PWvW5rPJFvu+P7XoTY0YLGRupgLbFY0XPfwIbJOObzO7QgkRDANh65RjhPmgSaAjQ==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "win32"
+ ]
+ },
+ "node_modules/@rollup/rollup-win32-ia32-msvc": {
+ "version": "4.55.2",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-win32-ia32-msvc/-/rollup-win32-ia32-msvc-4.55.2.tgz",
+ "integrity": "sha512-GvJ03TqqaweWCigtKQVBErw2bEhu1tyfNQbarwr94wCGnczA9HF8wqEe3U/Lfu6EdeNP0p6R+APeHVwEqVxpUQ==",
+ "cpu": [
+ "ia32"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "win32"
+ ]
+ },
+ "node_modules/@rollup/rollup-win32-x64-gnu": {
+ "version": "4.55.2",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-win32-x64-gnu/-/rollup-win32-x64-gnu-4.55.2.tgz",
+ "integrity": "sha512-KvXsBvp13oZz9JGe5NYS7FNizLe99Ny+W8ETsuCyjXiKdiGrcz2/J/N8qxZ/RSwivqjQguug07NLHqrIHrqfYw==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "win32"
+ ]
+ },
+ "node_modules/@rollup/rollup-win32-x64-msvc": {
+ "version": "4.55.2",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-win32-x64-msvc/-/rollup-win32-x64-msvc-4.55.2.tgz",
+ "integrity": "sha512-xNO+fksQhsAckRtDSPWaMeT1uIM+JrDRXlerpnWNXhn1TdB3YZ6uKBMBTKP0eX9XtYEP978hHk1f8332i2AW8Q==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "win32"
+ ]
+ },
+ "node_modules/@types/babel__core": {
+ "version": "7.20.5",
+ "resolved": "https://registry.npmjs.org/@types/babel__core/-/babel__core-7.20.5.tgz",
+ "integrity": "sha512-qoQprZvz5wQFJwMDqeseRXWv3rqMvhgpbXFfVyWhbx9X47POIA6i/+dXefEmZKoAgOaTdaIgNSMqMIU61yRyzA==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@babel/parser": "^7.20.7",
+ "@babel/types": "^7.20.7",
+ "@types/babel__generator": "*",
+ "@types/babel__template": "*",
+ "@types/babel__traverse": "*"
+ }
+ },
+ "node_modules/@types/babel__generator": {
+ "version": "7.27.0",
+ "resolved": "https://registry.npmjs.org/@types/babel__generator/-/babel__generator-7.27.0.tgz",
+ "integrity": "sha512-ufFd2Xi92OAVPYsy+P4n7/U7e68fex0+Ee8gSG9KX7eo084CWiQ4sdxktvdl0bOPupXtVJPY19zk6EwWqUQ8lg==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@babel/types": "^7.0.0"
+ }
+ },
+ "node_modules/@types/babel__template": {
+ "version": "7.4.4",
+ "resolved": "https://registry.npmjs.org/@types/babel__template/-/babel__template-7.4.4.tgz",
+ "integrity": "sha512-h/NUaSyG5EyxBIp8YRxo4RMe2/qQgvyowRwVMzhYhBCONbW8PUsg4lkFMrhgZhUe5z3L3MiLDuvyJ/CaPa2A8A==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@babel/parser": "^7.1.0",
+ "@babel/types": "^7.0.0"
+ }
+ },
+ "node_modules/@types/babel__traverse": {
+ "version": "7.28.0",
+ "resolved": "https://registry.npmjs.org/@types/babel__traverse/-/babel__traverse-7.28.0.tgz",
+ "integrity": "sha512-8PvcXf70gTDZBgt9ptxJ8elBeBjcLOAcOtoO/mPJjtji1+CdGbHgm77om1GrsPxsiE+uXIpNSK64UYaIwQXd4Q==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@babel/types": "^7.28.2"
+ }
+ },
+ "node_modules/@types/estree": {
+ "version": "1.0.8",
+ "resolved": "https://registry.npmjs.org/@types/estree/-/estree-1.0.8.tgz",
+ "integrity": "sha512-dWHzHa2WqEXI/O1E9OjrocMTKJl2mSrEolh1Iomrv6U+JuNwaHXsXx9bLu5gG7BUWFIN0skIQJQ/L1rIex4X6w==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/@types/json-schema": {
+ "version": "7.0.15",
+ "resolved": "https://registry.npmjs.org/@types/json-schema/-/json-schema-7.0.15.tgz",
+ "integrity": "sha512-5+fP8P8MFNC+AyZCDxrB2pkZFPGzqQWUzpSeuuVLvm8VMcorNYavBqoFcxK8bQz4Qsbn4oUEEem4wDLfcysGHA==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/@types/node": {
+ "version": "24.10.9",
+ "resolved": "https://registry.npmjs.org/@types/node/-/node-24.10.9.tgz",
+ "integrity": "sha512-ne4A0IpG3+2ETuREInjPNhUGis1SFjv1d5asp8MzEAGtOZeTeHVDOYqOgqfhvseqg/iXty2hjBf1zAOb7RNiNw==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "undici-types": "~7.16.0"
+ }
+ },
+ "node_modules/@types/react": {
+ "version": "19.2.8",
+ "resolved": "https://registry.npmjs.org/@types/react/-/react-19.2.8.tgz",
+ "integrity": "sha512-3MbSL37jEchWZz2p2mjntRZtPt837ij10ApxKfgmXCTuHWagYg7iA5bqPw6C8BMPfwidlvfPI/fxOc42HLhcyg==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "csstype": "^3.2.2"
+ }
+ },
+ "node_modules/@types/react-dom": {
+ "version": "19.2.3",
+ "resolved": "https://registry.npmjs.org/@types/react-dom/-/react-dom-19.2.3.tgz",
+ "integrity": "sha512-jp2L/eY6fn+KgVVQAOqYItbF0VY/YApe5Mz2F0aykSO8gx31bYCZyvSeYxCHKvzHG5eZjc+zyaS5BrBWya2+kQ==",
+ "dev": true,
+ "license": "MIT",
+ "peerDependencies": {
+ "@types/react": "^19.2.0"
+ }
+ },
+ "node_modules/@typescript-eslint/eslint-plugin": {
+ "version": "8.53.0",
+ "resolved": "https://registry.npmjs.org/@typescript-eslint/eslint-plugin/-/eslint-plugin-8.53.0.tgz",
+ "integrity": "sha512-eEXsVvLPu8Z4PkFibtuFJLJOTAV/nPdgtSjkGoPpddpFk3/ym2oy97jynY6ic2m6+nc5M8SE1e9v/mHKsulcJg==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@eslint-community/regexpp": "^4.12.2",
+ "@typescript-eslint/scope-manager": "8.53.0",
+ "@typescript-eslint/type-utils": "8.53.0",
+ "@typescript-eslint/utils": "8.53.0",
+ "@typescript-eslint/visitor-keys": "8.53.0",
+ "ignore": "^7.0.5",
+ "natural-compare": "^1.4.0",
+ "ts-api-utils": "^2.4.0"
+ },
+ "engines": {
+ "node": "^18.18.0 || ^20.9.0 || >=21.1.0"
+ },
+ "funding": {
+ "type": "opencollective",
+ "url": "https://opencollective.com/typescript-eslint"
+ },
+ "peerDependencies": {
+ "@typescript-eslint/parser": "^8.53.0",
+ "eslint": "^8.57.0 || ^9.0.0",
+ "typescript": ">=4.8.4 <6.0.0"
+ }
+ },
+ "node_modules/@typescript-eslint/eslint-plugin/node_modules/ignore": {
+ "version": "7.0.5",
+ "resolved": "https://registry.npmjs.org/ignore/-/ignore-7.0.5.tgz",
+ "integrity": "sha512-Hs59xBNfUIunMFgWAbGX5cq6893IbWg4KnrjbYwX3tx0ztorVgTDA6B2sxf8ejHJ4wz8BqGUMYlnzNBer5NvGg==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">= 4"
+ }
+ },
+ "node_modules/@typescript-eslint/parser": {
+ "version": "8.53.0",
+ "resolved": "https://registry.npmjs.org/@typescript-eslint/parser/-/parser-8.53.0.tgz",
+ "integrity": "sha512-npiaib8XzbjtzS2N4HlqPvlpxpmZ14FjSJrteZpPxGUaYPlvhzlzUZ4mZyABo0EFrOWnvyd0Xxroq//hKhtAWg==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@typescript-eslint/scope-manager": "8.53.0",
+ "@typescript-eslint/types": "8.53.0",
+ "@typescript-eslint/typescript-estree": "8.53.0",
+ "@typescript-eslint/visitor-keys": "8.53.0",
+ "debug": "^4.4.3"
+ },
+ "engines": {
+ "node": "^18.18.0 || ^20.9.0 || >=21.1.0"
+ },
+ "funding": {
+ "type": "opencollective",
+ "url": "https://opencollective.com/typescript-eslint"
+ },
+ "peerDependencies": {
+ "eslint": "^8.57.0 || ^9.0.0",
+ "typescript": ">=4.8.4 <6.0.0"
+ }
+ },
+ "node_modules/@typescript-eslint/project-service": {
+ "version": "8.53.0",
+ "resolved": "https://registry.npmjs.org/@typescript-eslint/project-service/-/project-service-8.53.0.tgz",
+ "integrity": "sha512-Bl6Gdr7NqkqIP5yP9z1JU///Nmes4Eose6L1HwpuVHwScgDPPuEWbUVhvlZmb8hy0vX9syLk5EGNL700WcBlbg==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@typescript-eslint/tsconfig-utils": "^8.53.0",
+ "@typescript-eslint/types": "^8.53.0",
+ "debug": "^4.4.3"
+ },
+ "engines": {
+ "node": "^18.18.0 || ^20.9.0 || >=21.1.0"
+ },
+ "funding": {
+ "type": "opencollective",
+ "url": "https://opencollective.com/typescript-eslint"
+ },
+ "peerDependencies": {
+ "typescript": ">=4.8.4 <6.0.0"
+ }
+ },
+ "node_modules/@typescript-eslint/scope-manager": {
+ "version": "8.53.0",
+ "resolved": "https://registry.npmjs.org/@typescript-eslint/scope-manager/-/scope-manager-8.53.0.tgz",
+ "integrity": "sha512-kWNj3l01eOGSdVBnfAF2K1BTh06WS0Yet6JUgb9Cmkqaz3Jlu0fdVUjj9UI8gPidBWSMqDIglmEXifSgDT/D0g==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@typescript-eslint/types": "8.53.0",
+ "@typescript-eslint/visitor-keys": "8.53.0"
+ },
+ "engines": {
+ "node": "^18.18.0 || ^20.9.0 || >=21.1.0"
+ },
+ "funding": {
+ "type": "opencollective",
+ "url": "https://opencollective.com/typescript-eslint"
+ }
+ },
+ "node_modules/@typescript-eslint/tsconfig-utils": {
+ "version": "8.53.0",
+ "resolved": "https://registry.npmjs.org/@typescript-eslint/tsconfig-utils/-/tsconfig-utils-8.53.0.tgz",
+ "integrity": "sha512-K6Sc0R5GIG6dNoPdOooQ+KtvT5KCKAvTcY8h2rIuul19vxH5OTQk7ArKkd4yTzkw66WnNY0kPPzzcmWA+XRmiA==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": "^18.18.0 || ^20.9.0 || >=21.1.0"
+ },
+ "funding": {
+ "type": "opencollective",
+ "url": "https://opencollective.com/typescript-eslint"
+ },
+ "peerDependencies": {
+ "typescript": ">=4.8.4 <6.0.0"
+ }
+ },
+ "node_modules/@typescript-eslint/type-utils": {
+ "version": "8.53.0",
+ "resolved": "https://registry.npmjs.org/@typescript-eslint/type-utils/-/type-utils-8.53.0.tgz",
+ "integrity": "sha512-BBAUhlx7g4SmcLhn8cnbxoxtmS7hcq39xKCgiutL3oNx1TaIp+cny51s8ewnKMpVUKQUGb41RAUWZ9kxYdovuw==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@typescript-eslint/types": "8.53.0",
+ "@typescript-eslint/typescript-estree": "8.53.0",
+ "@typescript-eslint/utils": "8.53.0",
+ "debug": "^4.4.3",
+ "ts-api-utils": "^2.4.0"
+ },
+ "engines": {
+ "node": "^18.18.0 || ^20.9.0 || >=21.1.0"
+ },
+ "funding": {
+ "type": "opencollective",
+ "url": "https://opencollective.com/typescript-eslint"
+ },
+ "peerDependencies": {
+ "eslint": "^8.57.0 || ^9.0.0",
+ "typescript": ">=4.8.4 <6.0.0"
+ }
+ },
+ "node_modules/@typescript-eslint/types": {
+ "version": "8.53.0",
+ "resolved": "https://registry.npmjs.org/@typescript-eslint/types/-/types-8.53.0.tgz",
+ "integrity": "sha512-Bmh9KX31Vlxa13+PqPvt4RzKRN1XORYSLlAE+sO1i28NkisGbTtSLFVB3l7PWdHtR3E0mVMuC7JilWJ99m2HxQ==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": "^18.18.0 || ^20.9.0 || >=21.1.0"
+ },
+ "funding": {
+ "type": "opencollective",
+ "url": "https://opencollective.com/typescript-eslint"
+ }
+ },
+ "node_modules/@typescript-eslint/typescript-estree": {
+ "version": "8.53.0",
+ "resolved": "https://registry.npmjs.org/@typescript-eslint/typescript-estree/-/typescript-estree-8.53.0.tgz",
+ "integrity": "sha512-pw0c0Gdo7Z4xOG987u3nJ8akL9093yEEKv8QTJ+Bhkghj1xyj8cgPaavlr9rq8h7+s6plUJ4QJYw2gCZodqmGw==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@typescript-eslint/project-service": "8.53.0",
+ "@typescript-eslint/tsconfig-utils": "8.53.0",
+ "@typescript-eslint/types": "8.53.0",
+ "@typescript-eslint/visitor-keys": "8.53.0",
+ "debug": "^4.4.3",
+ "minimatch": "^9.0.5",
+ "semver": "^7.7.3",
+ "tinyglobby": "^0.2.15",
+ "ts-api-utils": "^2.4.0"
+ },
+ "engines": {
+ "node": "^18.18.0 || ^20.9.0 || >=21.1.0"
+ },
+ "funding": {
+ "type": "opencollective",
+ "url": "https://opencollective.com/typescript-eslint"
+ },
+ "peerDependencies": {
+ "typescript": ">=4.8.4 <6.0.0"
+ }
+ },
+ "node_modules/@typescript-eslint/typescript-estree/node_modules/brace-expansion": {
+ "version": "2.0.2",
+ "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-2.0.2.tgz",
+ "integrity": "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "balanced-match": "^1.0.0"
+ }
+ },
+ "node_modules/@typescript-eslint/typescript-estree/node_modules/minimatch": {
+ "version": "9.0.5",
+ "resolved": "https://registry.npmjs.org/minimatch/-/minimatch-9.0.5.tgz",
+ "integrity": "sha512-G6T0ZX48xgozx7587koeX9Ys2NYy6Gmv//P89sEte9V9whIapMNF4idKxnW2QtCcLiTWlb/wfCabAtAFWhhBow==",
+ "dev": true,
+ "license": "ISC",
+ "dependencies": {
+ "brace-expansion": "^2.0.1"
+ },
+ "engines": {
+ "node": ">=16 || 14 >=14.17"
+ },
+ "funding": {
+ "url": "https://github.com/sponsors/isaacs"
+ }
+ },
+ "node_modules/@typescript-eslint/typescript-estree/node_modules/semver": {
+ "version": "7.7.3",
+ "resolved": "https://registry.npmjs.org/semver/-/semver-7.7.3.tgz",
+ "integrity": "sha512-SdsKMrI9TdgjdweUSR9MweHA4EJ8YxHn8DFaDisvhVlUOe4BF1tLD7GAj0lIqWVl+dPb/rExr0Btby5loQm20Q==",
+ "dev": true,
+ "license": "ISC",
+ "bin": {
+ "semver": "bin/semver.js"
+ },
+ "engines": {
+ "node": ">=10"
+ }
+ },
+ "node_modules/@typescript-eslint/utils": {
+ "version": "8.53.0",
+ "resolved": "https://registry.npmjs.org/@typescript-eslint/utils/-/utils-8.53.0.tgz",
+ "integrity": "sha512-XDY4mXTez3Z1iRDI5mbRhH4DFSt46oaIFsLg+Zn97+sYrXACziXSQcSelMybnVZ5pa1P6xYkPr5cMJyunM1ZDA==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@eslint-community/eslint-utils": "^4.9.1",
+ "@typescript-eslint/scope-manager": "8.53.0",
+ "@typescript-eslint/types": "8.53.0",
+ "@typescript-eslint/typescript-estree": "8.53.0"
+ },
+ "engines": {
+ "node": "^18.18.0 || ^20.9.0 || >=21.1.0"
+ },
+ "funding": {
+ "type": "opencollective",
+ "url": "https://opencollective.com/typescript-eslint"
+ },
+ "peerDependencies": {
+ "eslint": "^8.57.0 || ^9.0.0",
+ "typescript": ">=4.8.4 <6.0.0"
+ }
+ },
+ "node_modules/@typescript-eslint/visitor-keys": {
+ "version": "8.53.0",
+ "resolved": "https://registry.npmjs.org/@typescript-eslint/visitor-keys/-/visitor-keys-8.53.0.tgz",
+ "integrity": "sha512-LZ2NqIHFhvFwxG0qZeLL9DvdNAHPGCY5dIRwBhyYeU+LfLhcStE1ImjsuTG/WaVh3XysGaeLW8Rqq7cGkPCFvw==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@typescript-eslint/types": "8.53.0",
+ "eslint-visitor-keys": "^4.2.1"
+ },
+ "engines": {
+ "node": "^18.18.0 || ^20.9.0 || >=21.1.0"
+ },
+ "funding": {
+ "type": "opencollective",
+ "url": "https://opencollective.com/typescript-eslint"
+ }
+ },
+ "node_modules/@vitejs/plugin-react": {
+ "version": "5.1.2",
+ "resolved": "https://registry.npmjs.org/@vitejs/plugin-react/-/plugin-react-5.1.2.tgz",
+ "integrity": "sha512-EcA07pHJouywpzsoTUqNh5NwGayl2PPVEJKUSinGGSxFGYn+shYbqMGBg6FXDqgXum9Ou/ecb+411ssw8HImJQ==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@babel/core": "^7.28.5",
+ "@babel/plugin-transform-react-jsx-self": "^7.27.1",
+ "@babel/plugin-transform-react-jsx-source": "^7.27.1",
+ "@rolldown/pluginutils": "1.0.0-beta.53",
+ "@types/babel__core": "^7.20.5",
+ "react-refresh": "^0.18.0"
+ },
+ "engines": {
+ "node": "^20.19.0 || >=22.12.0"
+ },
+ "peerDependencies": {
+ "vite": "^4.2.0 || ^5.0.0 || ^6.0.0 || ^7.0.0"
+ }
+ },
+ "node_modules/acorn": {
+ "version": "8.15.0",
+ "resolved": "https://registry.npmjs.org/acorn/-/acorn-8.15.0.tgz",
+ "integrity": "sha512-NZyJarBfL7nWwIq+FDL6Zp/yHEhePMNnnJ0y3qfieCrmNvYct8uvtiV41UvlSe6apAfk0fY1FbWx+NwfmpvtTg==",
+ "dev": true,
+ "license": "MIT",
+ "bin": {
+ "acorn": "bin/acorn"
+ },
+ "engines": {
+ "node": ">=0.4.0"
+ }
+ },
+ "node_modules/acorn-jsx": {
+ "version": "5.3.2",
+ "resolved": "https://registry.npmjs.org/acorn-jsx/-/acorn-jsx-5.3.2.tgz",
+ "integrity": "sha512-rq9s+JNhf0IChjtDXxllJ7g41oZk5SlXtp0LHwyA5cejwn7vKmKp4pPri6YEePv2PU65sAsegbXtIinmDFDXgQ==",
+ "dev": true,
+ "license": "MIT",
+ "peerDependencies": {
+ "acorn": "^6.0.0 || ^7.0.0 || ^8.0.0"
+ }
+ },
+ "node_modules/ajv": {
+ "version": "6.12.6",
+ "resolved": "https://registry.npmjs.org/ajv/-/ajv-6.12.6.tgz",
+ "integrity": "sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "fast-deep-equal": "^3.1.1",
+ "fast-json-stable-stringify": "^2.0.0",
+ "json-schema-traverse": "^0.4.1",
+ "uri-js": "^4.2.2"
+ },
+ "funding": {
+ "type": "github",
+ "url": "https://github.com/sponsors/epoberezkin"
+ }
+ },
+ "node_modules/ansi-styles": {
+ "version": "4.3.0",
+ "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.3.0.tgz",
+ "integrity": "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "color-convert": "^2.0.1"
+ },
+ "engines": {
+ "node": ">=8"
+ },
+ "funding": {
+ "url": "https://github.com/chalk/ansi-styles?sponsor=1"
+ }
+ },
+ "node_modules/antd": {
+ "version": "6.2.0",
+ "resolved": "https://registry.npmjs.org/antd/-/antd-6.2.0.tgz",
+ "integrity": "sha512-fwETatwHYExjfzKcV41fBtgPo4kp+g+9gp5YOSSGxwnJHljps8TbXef8WP7ZnaOn5dkcA9xIC0TyUecIybBG7w==",
+ "license": "MIT",
+ "dependencies": {
+ "@ant-design/colors": "^8.0.1",
+ "@ant-design/cssinjs": "^2.0.1",
+ "@ant-design/cssinjs-utils": "^2.0.2",
+ "@ant-design/fast-color": "^3.0.0",
+ "@ant-design/icons": "^6.1.0",
+ "@ant-design/react-slick": "~2.0.0",
+ "@babel/runtime": "^7.28.4",
+ "@rc-component/cascader": "~1.11.0",
+ "@rc-component/checkbox": "~1.0.1",
+ "@rc-component/collapse": "~1.2.0",
+ "@rc-component/color-picker": "~3.0.3",
+ "@rc-component/dialog": "~1.8.0",
+ "@rc-component/drawer": "~1.4.0",
+ "@rc-component/dropdown": "~1.0.2",
+ "@rc-component/form": "~1.6.2",
+ "@rc-component/image": "~1.6.0",
+ "@rc-component/input": "~1.1.2",
+ "@rc-component/input-number": "~1.6.2",
+ "@rc-component/mentions": "~1.6.0",
+ "@rc-component/menu": "~1.2.0",
+ "@rc-component/motion": "~1.1.6",
+ "@rc-component/mutate-observer": "^2.0.1",
+ "@rc-component/notification": "~1.2.0",
+ "@rc-component/pagination": "~1.2.0",
+ "@rc-component/picker": "~1.9.0",
+ "@rc-component/progress": "~1.0.2",
+ "@rc-component/qrcode": "~1.1.1",
+ "@rc-component/rate": "~1.0.1",
+ "@rc-component/resize-observer": "^1.0.1",
+ "@rc-component/segmented": "~1.3.0",
+ "@rc-component/select": "~1.5.0",
+ "@rc-component/slider": "~1.0.1",
+ "@rc-component/steps": "~1.2.2",
+ "@rc-component/switch": "~1.0.3",
+ "@rc-component/table": "~1.9.1",
+ "@rc-component/tabs": "~1.7.0",
+ "@rc-component/textarea": "~1.1.2",
+ "@rc-component/tooltip": "~1.4.0",
+ "@rc-component/tour": "~2.3.0",
+ "@rc-component/tree": "~1.1.0",
+ "@rc-component/tree-select": "~1.6.0",
+ "@rc-component/trigger": "^3.8.2",
+ "@rc-component/upload": "~1.1.0",
+ "@rc-component/util": "^1.7.0",
+ "clsx": "^2.1.1",
+ "dayjs": "^1.11.11",
+ "scroll-into-view-if-needed": "^3.1.0",
+ "throttle-debounce": "^5.0.2"
+ },
+ "funding": {
+ "type": "opencollective",
+ "url": "https://opencollective.com/ant-design"
+ },
+ "peerDependencies": {
+ "react": ">=18.0.0",
+ "react-dom": ">=18.0.0"
+ }
+ },
+ "node_modules/argparse": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/argparse/-/argparse-2.0.1.tgz",
+ "integrity": "sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q==",
+ "dev": true,
+ "license": "Python-2.0"
+ },
+ "node_modules/balanced-match": {
+ "version": "1.0.2",
+ "resolved": "https://registry.npmjs.org/balanced-match/-/balanced-match-1.0.2.tgz",
+ "integrity": "sha512-3oSeUO0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/baseline-browser-mapping": {
+ "version": "2.9.15",
+ "resolved": "https://registry.npmjs.org/baseline-browser-mapping/-/baseline-browser-mapping-2.9.15.tgz",
+ "integrity": "sha512-kX8h7K2srmDyYnXRIppo4AH/wYgzWVCs+eKr3RusRSQ5PvRYoEFmR/I0PbdTjKFAoKqp5+kbxnNTFO9jOfSVJg==",
+ "dev": true,
+ "license": "Apache-2.0",
+ "bin": {
+ "baseline-browser-mapping": "dist/cli.js"
+ }
+ },
+ "node_modules/brace-expansion": {
+ "version": "1.1.12",
+ "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.12.tgz",
+ "integrity": "sha512-9T9UjW3r0UW5c1Q7GTwllptXwhvYmEzFhzMfZ9H7FQWt+uZePjZPjBP/W1ZEyZ1twGWom5/56TF4lPcqjnDHcg==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "balanced-match": "^1.0.0",
+ "concat-map": "0.0.1"
+ }
+ },
+ "node_modules/browserslist": {
+ "version": "4.28.1",
+ "resolved": "https://registry.npmjs.org/browserslist/-/browserslist-4.28.1.tgz",
+ "integrity": "sha512-ZC5Bd0LgJXgwGqUknZY/vkUQ04r8NXnJZ3yYi4vDmSiZmC/pdSN0NbNRPxZpbtO4uAfDUAFffO8IZoM3Gj8IkA==",
+ "dev": true,
+ "funding": [
+ {
+ "type": "opencollective",
+ "url": "https://opencollective.com/browserslist"
+ },
+ {
+ "type": "tidelift",
+ "url": "https://tidelift.com/funding/github/npm/browserslist"
+ },
+ {
+ "type": "github",
+ "url": "https://github.com/sponsors/ai"
+ }
+ ],
+ "license": "MIT",
+ "dependencies": {
+ "baseline-browser-mapping": "^2.9.0",
+ "caniuse-lite": "^1.0.30001759",
+ "electron-to-chromium": "^1.5.263",
+ "node-releases": "^2.0.27",
+ "update-browserslist-db": "^1.2.0"
+ },
+ "bin": {
+ "browserslist": "cli.js"
+ },
+ "engines": {
+ "node": "^6 || ^7 || ^8 || ^9 || ^10 || ^11 || ^12 || >=13.7"
+ }
+ },
+ "node_modules/callsites": {
+ "version": "3.1.0",
+ "resolved": "https://registry.npmjs.org/callsites/-/callsites-3.1.0.tgz",
+ "integrity": "sha512-P8BjAsXvZS+VIDUI11hHCQEv74YT67YUi5JJFNWIqL235sBmjX4+qx9Muvls5ivyNENctx46xQLQ3aTuE7ssaQ==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=6"
+ }
+ },
+ "node_modules/caniuse-lite": {
+ "version": "1.0.30001765",
+ "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001765.tgz",
+ "integrity": "sha512-LWcNtSyZrakjECqmpP4qdg0MMGdN368D7X8XvvAqOcqMv0RxnlqVKZl2V6/mBR68oYMxOZPLw/gO7DuisMHUvQ==",
+ "dev": true,
+ "funding": [
+ {
+ "type": "opencollective",
+ "url": "https://opencollective.com/browserslist"
+ },
+ {
+ "type": "tidelift",
+ "url": "https://tidelift.com/funding/github/npm/caniuse-lite"
+ },
+ {
+ "type": "github",
+ "url": "https://github.com/sponsors/ai"
+ }
+ ],
+ "license": "CC-BY-4.0"
+ },
+ "node_modules/chalk": {
+ "version": "4.1.2",
+ "resolved": "https://registry.npmjs.org/chalk/-/chalk-4.1.2.tgz",
+ "integrity": "sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "ansi-styles": "^4.1.0",
+ "supports-color": "^7.1.0"
+ },
+ "engines": {
+ "node": ">=10"
+ },
+ "funding": {
+ "url": "https://github.com/chalk/chalk?sponsor=1"
+ }
+ },
+ "node_modules/clsx": {
+ "version": "2.1.1",
+ "resolved": "https://registry.npmjs.org/clsx/-/clsx-2.1.1.tgz",
+ "integrity": "sha512-eYm0QWBtUrBWZWG0d386OGAw16Z995PiOVo2B7bjWSbHedGl5e0ZWaq65kOGgUSNesEIDkB9ISbTg/JK9dhCZA==",
+ "license": "MIT",
+ "engines": {
+ "node": ">=6"
+ }
+ },
+ "node_modules/color-convert": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz",
+ "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "color-name": "~1.1.4"
+ },
+ "engines": {
+ "node": ">=7.0.0"
+ }
+ },
+ "node_modules/color-name": {
+ "version": "1.1.4",
+ "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz",
+ "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/compute-scroll-into-view": {
+ "version": "3.1.1",
+ "resolved": "https://registry.npmjs.org/compute-scroll-into-view/-/compute-scroll-into-view-3.1.1.tgz",
+ "integrity": "sha512-VRhuHOLoKYOy4UbilLbUzbYg93XLjv2PncJC50EuTWPA3gaja1UjBsUP/D/9/juV3vQFr6XBEzn9KCAHdUvOHw==",
+ "license": "MIT"
+ },
+ "node_modules/concat-map": {
+ "version": "0.0.1",
+ "resolved": "https://registry.npmjs.org/concat-map/-/concat-map-0.0.1.tgz",
+ "integrity": "sha512-/Srv4dswyQNBfohGpz9o6Yb3Gz3SrUDqBH5rTuhGR7ahtlbYKnVxw2bCFMRljaA7EXHaXZ8wsHdodFvbkhKmqg==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/convert-source-map": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/convert-source-map/-/convert-source-map-2.0.0.tgz",
+ "integrity": "sha512-Kvp459HrV2FEJ1CAsi1Ku+MY3kasH19TFykTz2xWmMeq6bk2NU3XXvfJ+Q61m0xktWwt+1HSYf3JZsTms3aRJg==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/cross-spawn": {
+ "version": "7.0.6",
+ "resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-7.0.6.tgz",
+ "integrity": "sha512-uV2QOWP2nWzsy2aMp8aRibhi9dlzF5Hgh5SHaB9OiTGEyDTiJJyx0uy51QXdyWbtAHNua4XJzUKca3OzKUd3vA==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "path-key": "^3.1.0",
+ "shebang-command": "^2.0.0",
+ "which": "^2.0.1"
+ },
+ "engines": {
+ "node": ">= 8"
+ }
+ },
+ "node_modules/csstype": {
+ "version": "3.2.3",
+ "resolved": "https://registry.npmjs.org/csstype/-/csstype-3.2.3.tgz",
+ "integrity": "sha512-z1HGKcYy2xA8AGQfwrn0PAy+PB7X/GSj3UVJW9qKyn43xWa+gl5nXmU4qqLMRzWVLFC8KusUX8T/0kCiOYpAIQ==",
+ "license": "MIT"
+ },
+ "node_modules/dayjs": {
+ "version": "1.11.19",
+ "resolved": "https://registry.npmjs.org/dayjs/-/dayjs-1.11.19.tgz",
+ "integrity": "sha512-t5EcLVS6QPBNqM2z8fakk/NKel+Xzshgt8FFKAn+qwlD1pzZWxh0nVCrvFK7ZDb6XucZeF9z8C7CBWTRIVApAw==",
+ "license": "MIT"
+ },
+ "node_modules/debug": {
+ "version": "4.4.3",
+ "resolved": "https://registry.npmjs.org/debug/-/debug-4.4.3.tgz",
+ "integrity": "sha512-RGwwWnwQvkVfavKVt22FGLw+xYSdzARwm0ru6DhTVA3umU5hZc28V3kO4stgYryrTlLpuvgI9GiijltAjNbcqA==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "ms": "^2.1.3"
+ },
+ "engines": {
+ "node": ">=6.0"
+ },
+ "peerDependenciesMeta": {
+ "supports-color": {
+ "optional": true
+ }
+ }
+ },
+ "node_modules/deep-is": {
+ "version": "0.1.4",
+ "resolved": "https://registry.npmjs.org/deep-is/-/deep-is-0.1.4.tgz",
+ "integrity": "sha512-oIPzksmTg4/MriiaYGO+okXDT7ztn/w3Eptv/+gSIdMdKsJo0u4CfYNFJPy+4SKMuCqGw2wxnA+URMg3t8a/bQ==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/electron-to-chromium": {
+ "version": "1.5.267",
+ "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.5.267.tgz",
+ "integrity": "sha512-0Drusm6MVRXSOJpGbaSVgcQsuB4hEkMpHXaVstcPmhu5LIedxs1xNK/nIxmQIU/RPC0+1/o0AVZfBTkTNJOdUw==",
+ "dev": true,
+ "license": "ISC"
+ },
+ "node_modules/esbuild": {
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/esbuild/-/esbuild-0.27.2.tgz",
+ "integrity": "sha512-HyNQImnsOC7X9PMNaCIeAm4ISCQXs5a5YasTXVliKv4uuBo1dKrG0A+uQS8M5eXjVMnLg3WgXaKvprHlFJQffw==",
+ "dev": true,
+ "hasInstallScript": true,
+ "license": "MIT",
+ "bin": {
+ "esbuild": "bin/esbuild"
+ },
+ "engines": {
+ "node": ">=18"
+ },
+ "optionalDependencies": {
+ "@esbuild/aix-ppc64": "0.27.2",
+ "@esbuild/android-arm": "0.27.2",
+ "@esbuild/android-arm64": "0.27.2",
+ "@esbuild/android-x64": "0.27.2",
+ "@esbuild/darwin-arm64": "0.27.2",
+ "@esbuild/darwin-x64": "0.27.2",
+ "@esbuild/freebsd-arm64": "0.27.2",
+ "@esbuild/freebsd-x64": "0.27.2",
+ "@esbuild/linux-arm": "0.27.2",
+ "@esbuild/linux-arm64": "0.27.2",
+ "@esbuild/linux-ia32": "0.27.2",
+ "@esbuild/linux-loong64": "0.27.2",
+ "@esbuild/linux-mips64el": "0.27.2",
+ "@esbuild/linux-ppc64": "0.27.2",
+ "@esbuild/linux-riscv64": "0.27.2",
+ "@esbuild/linux-s390x": "0.27.2",
+ "@esbuild/linux-x64": "0.27.2",
+ "@esbuild/netbsd-arm64": "0.27.2",
+ "@esbuild/netbsd-x64": "0.27.2",
+ "@esbuild/openbsd-arm64": "0.27.2",
+ "@esbuild/openbsd-x64": "0.27.2",
+ "@esbuild/openharmony-arm64": "0.27.2",
+ "@esbuild/sunos-x64": "0.27.2",
+ "@esbuild/win32-arm64": "0.27.2",
+ "@esbuild/win32-ia32": "0.27.2",
+ "@esbuild/win32-x64": "0.27.2"
+ }
+ },
+ "node_modules/escalade": {
+ "version": "3.2.0",
+ "resolved": "https://registry.npmjs.org/escalade/-/escalade-3.2.0.tgz",
+ "integrity": "sha512-WUj2qlxaQtO4g6Pq5c29GTcWGDyd8itL8zTlipgECz3JesAiiOKotd8JU6otB3PACgG6xkJUyVhboMS+bje/jA==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=6"
+ }
+ },
+ "node_modules/escape-string-regexp": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/escape-string-regexp/-/escape-string-regexp-4.0.0.tgz",
+ "integrity": "sha512-TtpcNJ3XAzx3Gq8sWRzJaVajRs0uVxA2YAkdb1jm2YkPz4G6egUFAyA3n5vtEIZefPk5Wa4UXbKuS5fKkJWdgA==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=10"
+ },
+ "funding": {
+ "url": "https://github.com/sponsors/sindresorhus"
+ }
+ },
+ "node_modules/eslint": {
+ "version": "9.39.2",
+ "resolved": "https://registry.npmjs.org/eslint/-/eslint-9.39.2.tgz",
+ "integrity": "sha512-LEyamqS7W5HB3ujJyvi0HQK/dtVINZvd5mAAp9eT5S/ujByGjiZLCzPcHVzuXbpJDJF/cxwHlfceVUDZ2lnSTw==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@eslint-community/eslint-utils": "^4.8.0",
+ "@eslint-community/regexpp": "^4.12.1",
+ "@eslint/config-array": "^0.21.1",
+ "@eslint/config-helpers": "^0.4.2",
+ "@eslint/core": "^0.17.0",
+ "@eslint/eslintrc": "^3.3.1",
+ "@eslint/js": "9.39.2",
+ "@eslint/plugin-kit": "^0.4.1",
+ "@humanfs/node": "^0.16.6",
+ "@humanwhocodes/module-importer": "^1.0.1",
+ "@humanwhocodes/retry": "^0.4.2",
+ "@types/estree": "^1.0.6",
+ "ajv": "^6.12.4",
+ "chalk": "^4.0.0",
+ "cross-spawn": "^7.0.6",
+ "debug": "^4.3.2",
+ "escape-string-regexp": "^4.0.0",
+ "eslint-scope": "^8.4.0",
+ "eslint-visitor-keys": "^4.2.1",
+ "espree": "^10.4.0",
+ "esquery": "^1.5.0",
+ "esutils": "^2.0.2",
+ "fast-deep-equal": "^3.1.3",
+ "file-entry-cache": "^8.0.0",
+ "find-up": "^5.0.0",
+ "glob-parent": "^6.0.2",
+ "ignore": "^5.2.0",
+ "imurmurhash": "^0.1.4",
+ "is-glob": "^4.0.0",
+ "json-stable-stringify-without-jsonify": "^1.0.1",
+ "lodash.merge": "^4.6.2",
+ "minimatch": "^3.1.2",
+ "natural-compare": "^1.4.0",
+ "optionator": "^0.9.3"
+ },
+ "bin": {
+ "eslint": "bin/eslint.js"
+ },
+ "engines": {
+ "node": "^18.18.0 || ^20.9.0 || >=21.1.0"
+ },
+ "funding": {
+ "url": "https://eslint.org/donate"
+ },
+ "peerDependencies": {
+ "jiti": "*"
+ },
+ "peerDependenciesMeta": {
+ "jiti": {
+ "optional": true
+ }
+ }
+ },
+ "node_modules/eslint-plugin-react-hooks": {
+ "version": "7.0.1",
+ "resolved": "https://registry.npmjs.org/eslint-plugin-react-hooks/-/eslint-plugin-react-hooks-7.0.1.tgz",
+ "integrity": "sha512-O0d0m04evaNzEPoSW+59Mezf8Qt0InfgGIBJnpC0h3NH/WjUAR7BIKUfysC6todmtiZ/A0oUVS8Gce0WhBrHsA==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@babel/core": "^7.24.4",
+ "@babel/parser": "^7.24.4",
+ "hermes-parser": "^0.25.1",
+ "zod": "^3.25.0 || ^4.0.0",
+ "zod-validation-error": "^3.5.0 || ^4.0.0"
+ },
+ "engines": {
+ "node": ">=18"
+ },
+ "peerDependencies": {
+ "eslint": "^3.0.0 || ^4.0.0 || ^5.0.0 || ^6.0.0 || ^7.0.0 || ^8.0.0-0 || ^9.0.0"
+ }
+ },
+ "node_modules/eslint-plugin-react-refresh": {
+ "version": "0.4.26",
+ "resolved": "https://registry.npmjs.org/eslint-plugin-react-refresh/-/eslint-plugin-react-refresh-0.4.26.tgz",
+ "integrity": "sha512-1RETEylht2O6FM/MvgnyvT+8K21wLqDNg4qD51Zj3guhjt433XbnnkVttHMyaVyAFD03QSV4LPS5iE3VQmO7XQ==",
+ "dev": true,
+ "license": "MIT",
+ "peerDependencies": {
+ "eslint": ">=8.40"
+ }
+ },
+ "node_modules/eslint-scope": {
+ "version": "8.4.0",
+ "resolved": "https://registry.npmjs.org/eslint-scope/-/eslint-scope-8.4.0.tgz",
+ "integrity": "sha512-sNXOfKCn74rt8RICKMvJS7XKV/Xk9kA7DyJr8mJik3S7Cwgy3qlkkmyS2uQB3jiJg6VNdZd/pDBJu0nvG2NlTg==",
+ "dev": true,
+ "license": "BSD-2-Clause",
+ "dependencies": {
+ "esrecurse": "^4.3.0",
+ "estraverse": "^5.2.0"
+ },
+ "engines": {
+ "node": "^18.18.0 || ^20.9.0 || >=21.1.0"
+ },
+ "funding": {
+ "url": "https://opencollective.com/eslint"
+ }
+ },
+ "node_modules/eslint-visitor-keys": {
+ "version": "4.2.1",
+ "resolved": "https://registry.npmjs.org/eslint-visitor-keys/-/eslint-visitor-keys-4.2.1.tgz",
+ "integrity": "sha512-Uhdk5sfqcee/9H/rCOJikYz67o0a2Tw2hGRPOG2Y1R2dg7brRe1uG0yaNQDHu+TO/uQPF/5eCapvYSmHUjt7JQ==",
+ "dev": true,
+ "license": "Apache-2.0",
+ "engines": {
+ "node": "^18.18.0 || ^20.9.0 || >=21.1.0"
+ },
+ "funding": {
+ "url": "https://opencollective.com/eslint"
+ }
+ },
+ "node_modules/espree": {
+ "version": "10.4.0",
+ "resolved": "https://registry.npmjs.org/espree/-/espree-10.4.0.tgz",
+ "integrity": "sha512-j6PAQ2uUr79PZhBjP5C5fhl8e39FmRnOjsD5lGnWrFU8i2G776tBK7+nP8KuQUTTyAZUwfQqXAgrVH5MbH9CYQ==",
+ "dev": true,
+ "license": "BSD-2-Clause",
+ "dependencies": {
+ "acorn": "^8.15.0",
+ "acorn-jsx": "^5.3.2",
+ "eslint-visitor-keys": "^4.2.1"
+ },
+ "engines": {
+ "node": "^18.18.0 || ^20.9.0 || >=21.1.0"
+ },
+ "funding": {
+ "url": "https://opencollective.com/eslint"
+ }
+ },
+ "node_modules/esquery": {
+ "version": "1.7.0",
+ "resolved": "https://registry.npmjs.org/esquery/-/esquery-1.7.0.tgz",
+ "integrity": "sha512-Ap6G0WQwcU/LHsvLwON1fAQX9Zp0A2Y6Y/cJBl9r/JbW90Zyg4/zbG6zzKa2OTALELarYHmKu0GhpM5EO+7T0g==",
+ "dev": true,
+ "license": "BSD-3-Clause",
+ "dependencies": {
+ "estraverse": "^5.1.0"
+ },
+ "engines": {
+ "node": ">=0.10"
+ }
+ },
+ "node_modules/esrecurse": {
+ "version": "4.3.0",
+ "resolved": "https://registry.npmjs.org/esrecurse/-/esrecurse-4.3.0.tgz",
+ "integrity": "sha512-KmfKL3b6G+RXvP8N1vr3Tq1kL/oCFgn2NYXEtqP8/L3pKapUA4G8cFVaoF3SU323CD4XypR/ffioHmkti6/Tag==",
+ "dev": true,
+ "license": "BSD-2-Clause",
+ "dependencies": {
+ "estraverse": "^5.2.0"
+ },
+ "engines": {
+ "node": ">=4.0"
+ }
+ },
+ "node_modules/estraverse": {
+ "version": "5.3.0",
+ "resolved": "https://registry.npmjs.org/estraverse/-/estraverse-5.3.0.tgz",
+ "integrity": "sha512-MMdARuVEQziNTeJD8DgMqmhwR11BRQ/cBP+pLtYdSTnf3MIO8fFeiINEbX36ZdNlfU/7A9f3gUw49B3oQsvwBA==",
+ "dev": true,
+ "license": "BSD-2-Clause",
+ "engines": {
+ "node": ">=4.0"
+ }
+ },
+ "node_modules/esutils": {
+ "version": "2.0.3",
+ "resolved": "https://registry.npmjs.org/esutils/-/esutils-2.0.3.tgz",
+ "integrity": "sha512-kVscqXk4OCp68SZ0dkgEKVi6/8ij300KBWTJq32P/dYeWTSwK41WyTxalN1eRmA5Z9UU/LX9D7FWSmV9SAYx6g==",
+ "dev": true,
+ "license": "BSD-2-Clause",
+ "engines": {
+ "node": ">=0.10.0"
+ }
+ },
+ "node_modules/fast-deep-equal": {
+ "version": "3.1.3",
+ "resolved": "https://registry.npmjs.org/fast-deep-equal/-/fast-deep-equal-3.1.3.tgz",
+ "integrity": "sha512-f3qQ9oQy9j2AhBe/H9VC91wLmKBCCU/gDOnKNAYG5hswO7BLKj09Hc5HYNz9cGI++xlpDCIgDaitVs03ATR84Q==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/fast-json-stable-stringify": {
+ "version": "2.1.0",
+ "resolved": "https://registry.npmjs.org/fast-json-stable-stringify/-/fast-json-stable-stringify-2.1.0.tgz",
+ "integrity": "sha512-lhd/wF+Lk98HZoTCtlVraHtfh5XYijIjalXck7saUtuanSDyLMxnHhSXEDJqHxD7msR8D0uCmqlkwjCV8xvwHw==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/fast-levenshtein": {
+ "version": "2.0.6",
+ "resolved": "https://registry.npmjs.org/fast-levenshtein/-/fast-levenshtein-2.0.6.tgz",
+ "integrity": "sha512-DCXu6Ifhqcks7TZKY3Hxp3y6qphY5SJZmrWMDrKcERSOXWQdMhU9Ig/PYrzyw/ul9jOIyh0N4M0tbC5hodg8dw==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/fdir": {
+ "version": "6.5.0",
+ "resolved": "https://registry.npmjs.org/fdir/-/fdir-6.5.0.tgz",
+ "integrity": "sha512-tIbYtZbucOs0BRGqPJkshJUYdL+SDH7dVM8gjy+ERp3WAUjLEFJE+02kanyHtwjWOnwrKYBiwAmM0p4kLJAnXg==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=12.0.0"
+ },
+ "peerDependencies": {
+ "picomatch": "^3 || ^4"
+ },
+ "peerDependenciesMeta": {
+ "picomatch": {
+ "optional": true
+ }
+ }
+ },
+ "node_modules/file-entry-cache": {
+ "version": "8.0.0",
+ "resolved": "https://registry.npmjs.org/file-entry-cache/-/file-entry-cache-8.0.0.tgz",
+ "integrity": "sha512-XXTUwCvisa5oacNGRP9SfNtYBNAMi+RPwBFmblZEF7N7swHYQS6/Zfk7SRwx4D5j3CH211YNRco1DEMNVfZCnQ==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "flat-cache": "^4.0.0"
+ },
+ "engines": {
+ "node": ">=16.0.0"
+ }
+ },
+ "node_modules/find-up": {
+ "version": "5.0.0",
+ "resolved": "https://registry.npmjs.org/find-up/-/find-up-5.0.0.tgz",
+ "integrity": "sha512-78/PXT1wlLLDgTzDs7sjq9hzz0vXD+zn+7wypEe4fXQxCmdmqfGsEPQxmiCSQI3ajFV91bVSsvNtrJRiW6nGng==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "locate-path": "^6.0.0",
+ "path-exists": "^4.0.0"
+ },
+ "engines": {
+ "node": ">=10"
+ },
+ "funding": {
+ "url": "https://github.com/sponsors/sindresorhus"
+ }
+ },
+ "node_modules/flat-cache": {
+ "version": "4.0.1",
+ "resolved": "https://registry.npmjs.org/flat-cache/-/flat-cache-4.0.1.tgz",
+ "integrity": "sha512-f7ccFPK3SXFHpx15UIGyRJ/FJQctuKZ0zVuN3frBo4HnK3cay9VEW0R6yPYFHC0AgqhukPzKjq22t5DmAyqGyw==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "flatted": "^3.2.9",
+ "keyv": "^4.5.4"
+ },
+ "engines": {
+ "node": ">=16"
+ }
+ },
+ "node_modules/flatted": {
+ "version": "3.3.3",
+ "resolved": "https://registry.npmjs.org/flatted/-/flatted-3.3.3.tgz",
+ "integrity": "sha512-GX+ysw4PBCz0PzosHDepZGANEuFCMLrnRTiEy9McGjmkCQYwRq4A/X786G/fjM/+OjsWSU1ZrY5qyARZmO/uwg==",
+ "dev": true,
+ "license": "ISC"
+ },
+ "node_modules/fsevents": {
+ "version": "2.3.3",
+ "resolved": "https://registry.npmjs.org/fsevents/-/fsevents-2.3.3.tgz",
+ "integrity": "sha512-5xoDfX+fL7faATnagmWPpbFtwh/R77WmMMqqHGS65C3vvB0YHrgF+B1YmZ3441tMj5n63k0212XNoJwzlhffQw==",
+ "dev": true,
+ "hasInstallScript": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "darwin"
+ ],
+ "engines": {
+ "node": "^8.16.0 || ^10.6.0 || >=11.0.0"
+ }
+ },
+ "node_modules/gensync": {
+ "version": "1.0.0-beta.2",
+ "resolved": "https://registry.npmjs.org/gensync/-/gensync-1.0.0-beta.2.tgz",
+ "integrity": "sha512-3hN7NaskYvMDLQY55gnW3NQ+mesEAepTqlg+VEbj7zzqEMBVNhzcGYYeqFo/TlYz6eQiFcp1HcsCZO+nGgS8zg==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=6.9.0"
+ }
+ },
+ "node_modules/glob-parent": {
+ "version": "6.0.2",
+ "resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-6.0.2.tgz",
+ "integrity": "sha512-XxwI8EOhVQgWp6iDL+3b0r86f4d6AX6zSU55HfB4ydCEuXLXc5FcYeOu+nnGftS4TEju/11rt4KJPTMgbfmv4A==",
+ "dev": true,
+ "license": "ISC",
+ "dependencies": {
+ "is-glob": "^4.0.3"
+ },
+ "engines": {
+ "node": ">=10.13.0"
+ }
+ },
+ "node_modules/globals": {
+ "version": "16.5.0",
+ "resolved": "https://registry.npmjs.org/globals/-/globals-16.5.0.tgz",
+ "integrity": "sha512-c/c15i26VrJ4IRt5Z89DnIzCGDn9EcebibhAOjw5ibqEHsE1wLUgkPn9RDmNcUKyU87GeaL633nyJ+pplFR2ZQ==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=18"
+ },
+ "funding": {
+ "url": "https://github.com/sponsors/sindresorhus"
+ }
+ },
+ "node_modules/has-flag": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz",
+ "integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=8"
+ }
+ },
+ "node_modules/hermes-estree": {
+ "version": "0.25.1",
+ "resolved": "https://registry.npmjs.org/hermes-estree/-/hermes-estree-0.25.1.tgz",
+ "integrity": "sha512-0wUoCcLp+5Ev5pDW2OriHC2MJCbwLwuRx+gAqMTOkGKJJiBCLjtrvy4PWUGn6MIVefecRpzoOZ/UV6iGdOr+Cw==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/hermes-parser": {
+ "version": "0.25.1",
+ "resolved": "https://registry.npmjs.org/hermes-parser/-/hermes-parser-0.25.1.tgz",
+ "integrity": "sha512-6pEjquH3rqaI6cYAXYPcz9MS4rY6R4ngRgrgfDshRptUZIc3lw0MCIJIGDj9++mfySOuPTHB4nrSW99BCvOPIA==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "hermes-estree": "0.25.1"
+ }
+ },
+ "node_modules/ignore": {
+ "version": "5.3.2",
+ "resolved": "https://registry.npmjs.org/ignore/-/ignore-5.3.2.tgz",
+ "integrity": "sha512-hsBTNUqQTDwkWtcdYI2i06Y/nUBEsNEDJKjWdigLvegy8kDuJAS8uRlpkkcQpyEXL0Z/pjDy5HBmMjRCJ2gq+g==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">= 4"
+ }
+ },
+ "node_modules/import-fresh": {
+ "version": "3.3.1",
+ "resolved": "https://registry.npmjs.org/import-fresh/-/import-fresh-3.3.1.tgz",
+ "integrity": "sha512-TR3KfrTZTYLPB6jUjfx6MF9WcWrHL9su5TObK4ZkYgBdWKPOFoSoQIdEuTuR82pmtxH2spWG9h6etwfr1pLBqQ==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "parent-module": "^1.0.0",
+ "resolve-from": "^4.0.0"
+ },
+ "engines": {
+ "node": ">=6"
+ },
+ "funding": {
+ "url": "https://github.com/sponsors/sindresorhus"
+ }
+ },
+ "node_modules/imurmurhash": {
+ "version": "0.1.4",
+ "resolved": "https://registry.npmjs.org/imurmurhash/-/imurmurhash-0.1.4.tgz",
+ "integrity": "sha512-JmXMZ6wuvDmLiHEml9ykzqO6lwFbof0GG4IkcGaENdCRDDmMVnny7s5HsIgHCbaq0w2MyPhDqkhTUgS2LU2PHA==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=0.8.19"
+ }
+ },
+ "node_modules/is-extglob": {
+ "version": "2.1.1",
+ "resolved": "https://registry.npmjs.org/is-extglob/-/is-extglob-2.1.1.tgz",
+ "integrity": "sha512-SbKbANkN603Vi4jEZv49LeVJMn4yGwsbzZworEoyEiutsN3nJYdbO36zfhGJ6QEDpOZIFkDtnq5JRxmvl3jsoQ==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=0.10.0"
+ }
+ },
+ "node_modules/is-glob": {
+ "version": "4.0.3",
+ "resolved": "https://registry.npmjs.org/is-glob/-/is-glob-4.0.3.tgz",
+ "integrity": "sha512-xelSayHH36ZgE7ZWhli7pW34hNbNl8Ojv5KVmkJD4hBdD3th8Tfk9vYasLM+mXWOZhFkgZfxhLSnrwRr4elSSg==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "is-extglob": "^2.1.1"
+ },
+ "engines": {
+ "node": ">=0.10.0"
+ }
+ },
+ "node_modules/is-mobile": {
+ "version": "5.0.0",
+ "resolved": "https://registry.npmjs.org/is-mobile/-/is-mobile-5.0.0.tgz",
+ "integrity": "sha512-Tz/yndySvLAEXh+Uk8liFCxOwVH6YutuR74utvOcu7I9Di+DwM0mtdPVZNaVvvBUM2OXxne/NhOs1zAO7riusQ==",
+ "license": "MIT"
+ },
+ "node_modules/isexe": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/isexe/-/isexe-2.0.0.tgz",
+ "integrity": "sha512-RHxMLp9lnKHGHRng9QFhRCMbYAcVpn69smSGcq3f36xjgVVWThj4qqLbTLlq7Ssj8B+fIQ1EuCEGI2lKsyQeIw==",
+ "dev": true,
+ "license": "ISC"
+ },
+ "node_modules/js-tokens": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/js-tokens/-/js-tokens-4.0.0.tgz",
+ "integrity": "sha512-RdJUflcE3cUzKiMqQgsCu06FPu9UdIJO0beYbPhHN4k6apgJtifcoCtT9bcxOpYBtpD2kCM6Sbzg4CausW/PKQ==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/js-yaml": {
+ "version": "4.1.1",
+ "resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-4.1.1.tgz",
+ "integrity": "sha512-qQKT4zQxXl8lLwBtHMWwaTcGfFOZviOJet3Oy/xmGk2gZH677CJM9EvtfdSkgWcATZhj/55JZ0rmy3myCT5lsA==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "argparse": "^2.0.1"
+ },
+ "bin": {
+ "js-yaml": "bin/js-yaml.js"
+ }
+ },
+ "node_modules/jsesc": {
+ "version": "3.1.0",
+ "resolved": "https://registry.npmjs.org/jsesc/-/jsesc-3.1.0.tgz",
+ "integrity": "sha512-/sM3dO2FOzXjKQhJuo0Q173wf2KOo8t4I8vHy6lF9poUp7bKT0/NHE8fPX23PwfhnykfqnC2xRxOnVw5XuGIaA==",
+ "dev": true,
+ "license": "MIT",
+ "bin": {
+ "jsesc": "bin/jsesc"
+ },
+ "engines": {
+ "node": ">=6"
+ }
+ },
+ "node_modules/json-buffer": {
+ "version": "3.0.1",
+ "resolved": "https://registry.npmjs.org/json-buffer/-/json-buffer-3.0.1.tgz",
+ "integrity": "sha512-4bV5BfR2mqfQTJm+V5tPPdf+ZpuhiIvTuAB5g8kcrXOZpTT/QwwVRWBywX1ozr6lEuPdbHxwaJlm9G6mI2sfSQ==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/json-schema-traverse": {
+ "version": "0.4.1",
+ "resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-0.4.1.tgz",
+ "integrity": "sha512-xbbCH5dCYU5T8LcEhhuh7HJ88HXuW3qsI3Y0zOZFKfZEHcpWiHU/Jxzk629Brsab/mMiHQti9wMP+845RPe3Vg==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/json-stable-stringify-without-jsonify": {
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/json-stable-stringify-without-jsonify/-/json-stable-stringify-without-jsonify-1.0.1.tgz",
+ "integrity": "sha512-Bdboy+l7tA3OGW6FjyFHWkP5LuByj1Tk33Ljyq0axyzdk9//JSi2u3fP1QSmd1KNwq6VOKYGlAu87CisVir6Pw==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/json2mq": {
+ "version": "0.2.0",
+ "resolved": "https://registry.npmjs.org/json2mq/-/json2mq-0.2.0.tgz",
+ "integrity": "sha512-SzoRg7ux5DWTII9J2qkrZrqV1gt+rTaoufMxEzXbS26Uid0NwaJd123HcoB80TgubEppxxIGdNxCx50fEoEWQA==",
+ "license": "MIT",
+ "dependencies": {
+ "string-convert": "^0.2.0"
+ }
+ },
+ "node_modules/json5": {
+ "version": "2.2.3",
+ "resolved": "https://registry.npmjs.org/json5/-/json5-2.2.3.tgz",
+ "integrity": "sha512-XmOWe7eyHYH14cLdVPoyg+GOH3rYX++KpzrylJwSW98t3Nk+U8XOl8FWKOgwtzdb8lXGf6zYwDUzeHMWfxasyg==",
+ "dev": true,
+ "license": "MIT",
+ "bin": {
+ "json5": "lib/cli.js"
+ },
+ "engines": {
+ "node": ">=6"
+ }
+ },
+ "node_modules/keyv": {
+ "version": "4.5.4",
+ "resolved": "https://registry.npmjs.org/keyv/-/keyv-4.5.4.tgz",
+ "integrity": "sha512-oxVHkHR/EJf2CNXnWxRLW6mg7JyCCUcG0DtEGmL2ctUo1PNTin1PUil+r/+4r5MpVgC/fn1kjsx7mjSujKqIpw==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "json-buffer": "3.0.1"
+ }
+ },
+ "node_modules/levn": {
+ "version": "0.4.1",
+ "resolved": "https://registry.npmjs.org/levn/-/levn-0.4.1.tgz",
+ "integrity": "sha512-+bT2uH4E5LGE7h/n3evcS/sQlJXCpIp6ym8OWJ5eV6+67Dsql/LaaT7qJBAt2rzfoa/5QBGBhxDix1dMt2kQKQ==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "prelude-ls": "^1.2.1",
+ "type-check": "~0.4.0"
+ },
+ "engines": {
+ "node": ">= 0.8.0"
+ }
+ },
+ "node_modules/locate-path": {
+ "version": "6.0.0",
+ "resolved": "https://registry.npmjs.org/locate-path/-/locate-path-6.0.0.tgz",
+ "integrity": "sha512-iPZK6eYjbxRu3uB4/WZ3EsEIMJFMqAoopl3R+zuq0UjcAm/MO6KCweDgPfP3elTztoKP3KtnVHxTn2NHBSDVUw==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "p-locate": "^5.0.0"
+ },
+ "engines": {
+ "node": ">=10"
+ },
+ "funding": {
+ "url": "https://github.com/sponsors/sindresorhus"
+ }
+ },
+ "node_modules/lodash.merge": {
+ "version": "4.6.2",
+ "resolved": "https://registry.npmjs.org/lodash.merge/-/lodash.merge-4.6.2.tgz",
+ "integrity": "sha512-0KpjqXRVvrYyCsX1swR/XTK0va6VQkQM6MNo7PqW77ByjAhoARA8EfrP1N4+KlKj8YS0ZUCtRT/YUuhyYDujIQ==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/lru-cache": {
+ "version": "5.1.1",
+ "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-5.1.1.tgz",
+ "integrity": "sha512-KpNARQA3Iwv+jTA0utUVVbrh+Jlrr1Fv0e56GGzAFOXN7dk/FviaDW8LHmK52DlcH4WP2n6gI8vN1aesBFgo9w==",
+ "dev": true,
+ "license": "ISC",
+ "dependencies": {
+ "yallist": "^3.0.2"
+ }
+ },
+ "node_modules/minimatch": {
+ "version": "3.1.2",
+ "resolved": "https://registry.npmjs.org/minimatch/-/minimatch-3.1.2.tgz",
+ "integrity": "sha512-J7p63hRiAjw1NDEww1W7i37+ByIrOWO5XQQAzZ3VOcL0PNybwpfmV/N05zFAzwQ9USyEcX6t3UO+K5aqBQOIHw==",
+ "dev": true,
+ "license": "ISC",
+ "dependencies": {
+ "brace-expansion": "^1.1.7"
+ },
+ "engines": {
+ "node": "*"
+ }
+ },
+ "node_modules/ms": {
+ "version": "2.1.3",
+ "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz",
+ "integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/nanoid": {
+ "version": "3.3.11",
+ "resolved": "https://registry.npmjs.org/nanoid/-/nanoid-3.3.11.tgz",
+ "integrity": "sha512-N8SpfPUnUp1bK+PMYW8qSWdl9U+wwNWI4QKxOYDy9JAro3WMX7p2OeVRF9v+347pnakNevPmiHhNmZ2HbFA76w==",
+ "dev": true,
+ "funding": [
+ {
+ "type": "github",
+ "url": "https://github.com/sponsors/ai"
+ }
+ ],
+ "license": "MIT",
+ "bin": {
+ "nanoid": "bin/nanoid.cjs"
+ },
+ "engines": {
+ "node": "^10 || ^12 || ^13.7 || ^14 || >=15.0.1"
+ }
+ },
+ "node_modules/natural-compare": {
+ "version": "1.4.0",
+ "resolved": "https://registry.npmjs.org/natural-compare/-/natural-compare-1.4.0.tgz",
+ "integrity": "sha512-OWND8ei3VtNC9h7V60qff3SVobHr996CTwgxubgyQYEpg290h9J0buyECNNJexkFm5sOajh5G116RYA1c8ZMSw==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/node-releases": {
+ "version": "2.0.27",
+ "resolved": "https://registry.npmjs.org/node-releases/-/node-releases-2.0.27.tgz",
+ "integrity": "sha512-nmh3lCkYZ3grZvqcCH+fjmQ7X+H0OeZgP40OierEaAptX4XofMh5kwNbWh7lBduUzCcV/8kZ+NDLCwm2iorIlA==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/optionator": {
+ "version": "0.9.4",
+ "resolved": "https://registry.npmjs.org/optionator/-/optionator-0.9.4.tgz",
+ "integrity": "sha512-6IpQ7mKUxRcZNLIObR0hz7lxsapSSIYNZJwXPGeF0mTVqGKFIXj1DQcMoT22S3ROcLyY/rz0PWaWZ9ayWmad9g==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "deep-is": "^0.1.3",
+ "fast-levenshtein": "^2.0.6",
+ "levn": "^0.4.1",
+ "prelude-ls": "^1.2.1",
+ "type-check": "^0.4.0",
+ "word-wrap": "^1.2.5"
+ },
+ "engines": {
+ "node": ">= 0.8.0"
+ }
+ },
+ "node_modules/p-limit": {
+ "version": "3.1.0",
+ "resolved": "https://registry.npmjs.org/p-limit/-/p-limit-3.1.0.tgz",
+ "integrity": "sha512-TYOanM3wGwNGsZN2cVTYPArw454xnXj5qmWF1bEoAc4+cU/ol7GVh7odevjp1FNHduHc3KZMcFduxU5Xc6uJRQ==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "yocto-queue": "^0.1.0"
+ },
+ "engines": {
+ "node": ">=10"
+ },
+ "funding": {
+ "url": "https://github.com/sponsors/sindresorhus"
+ }
+ },
+ "node_modules/p-locate": {
+ "version": "5.0.0",
+ "resolved": "https://registry.npmjs.org/p-locate/-/p-locate-5.0.0.tgz",
+ "integrity": "sha512-LaNjtRWUBY++zB5nE/NwcaoMylSPk+S+ZHNB1TzdbMJMny6dynpAGt7X/tl/QYq3TIeE6nxHppbo2LGymrG5Pw==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "p-limit": "^3.0.2"
+ },
+ "engines": {
+ "node": ">=10"
+ },
+ "funding": {
+ "url": "https://github.com/sponsors/sindresorhus"
+ }
+ },
+ "node_modules/parent-module": {
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/parent-module/-/parent-module-1.0.1.tgz",
+ "integrity": "sha512-GQ2EWRpQV8/o+Aw8YqtfZZPfNRWZYkbidE9k5rpl/hC3vtHHBfGm2Ifi6qWV+coDGkrUKZAxE3Lot5kcsRlh+g==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "callsites": "^3.0.0"
+ },
+ "engines": {
+ "node": ">=6"
+ }
+ },
+ "node_modules/path-exists": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/path-exists/-/path-exists-4.0.0.tgz",
+ "integrity": "sha512-ak9Qy5Q7jYb2Wwcey5Fpvg2KoAc/ZIhLSLOSBmRmygPsGwkVVt0fZa0qrtMz+m6tJTAHfZQ8FnmB4MG4LWy7/w==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=8"
+ }
+ },
+ "node_modules/path-key": {
+ "version": "3.1.1",
+ "resolved": "https://registry.npmjs.org/path-key/-/path-key-3.1.1.tgz",
+ "integrity": "sha512-ojmeN0qd+y0jszEtoY48r0Peq5dwMEkIlCOu6Q5f41lfkswXuKtYrhgoTpLnyIcHm24Uhqx+5Tqm2InSwLhE6Q==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=8"
+ }
+ },
+ "node_modules/picocolors": {
+ "version": "1.1.1",
+ "resolved": "https://registry.npmjs.org/picocolors/-/picocolors-1.1.1.tgz",
+ "integrity": "sha512-xceH2snhtb5M9liqDsmEw56le376mTZkEX/jEb/RxNFyegNul7eNslCXP9FDj/Lcu0X8KEyMceP2ntpaHrDEVA==",
+ "dev": true,
+ "license": "ISC"
+ },
+ "node_modules/picomatch": {
+ "version": "4.0.3",
+ "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-4.0.3.tgz",
+ "integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=12"
+ },
+ "funding": {
+ "url": "https://github.com/sponsors/jonschlinkert"
+ }
+ },
+ "node_modules/postcss": {
+ "version": "8.5.6",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.5.6.tgz",
+ "integrity": "sha512-3Ybi1tAuwAP9s0r1UQ2J4n5Y0G05bJkpUIO0/bI9MhwmD70S5aTWbXGBwxHrelT+XM1k6dM0pk+SwNkpTRN7Pg==",
+ "dev": true,
+ "funding": [
+ {
+ "type": "opencollective",
+ "url": "https://opencollective.com/postcss/"
+ },
+ {
+ "type": "tidelift",
+ "url": "https://tidelift.com/funding/github/npm/postcss"
+ },
+ {
+ "type": "github",
+ "url": "https://github.com/sponsors/ai"
+ }
+ ],
+ "license": "MIT",
+ "dependencies": {
+ "nanoid": "^3.3.11",
+ "picocolors": "^1.1.1",
+ "source-map-js": "^1.2.1"
+ },
+ "engines": {
+ "node": "^10 || ^12 || >=14"
+ }
+ },
+ "node_modules/prelude-ls": {
+ "version": "1.2.1",
+ "resolved": "https://registry.npmjs.org/prelude-ls/-/prelude-ls-1.2.1.tgz",
+ "integrity": "sha512-vkcDPrRZo1QZLbn5RLGPpg/WmIQ65qoWWhcGKf/b5eplkkarX0m9z8ppCat4mlOqUsWpyNuYgO3VRyrYHSzX5g==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">= 0.8.0"
+ }
+ },
+ "node_modules/punycode": {
+ "version": "2.3.1",
+ "resolved": "https://registry.npmjs.org/punycode/-/punycode-2.3.1.tgz",
+ "integrity": "sha512-vYt7UD1U9Wg6138shLtLOvdAu+8DsC/ilFtEVHcH+wydcSpNE20AfSOduf6MkRFahL5FY7X1oU7nKVZFtfq8Fg==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=6"
+ }
+ },
+ "node_modules/react": {
+ "version": "19.2.3",
+ "resolved": "https://registry.npmjs.org/react/-/react-19.2.3.tgz",
+ "integrity": "sha512-Ku/hhYbVjOQnXDZFv2+RibmLFGwFdeeKHFcOTlrt7xplBnya5OGn/hIRDsqDiSUcfORsDC7MPxwork8jBwsIWA==",
+ "license": "MIT",
+ "engines": {
+ "node": ">=0.10.0"
+ }
+ },
+ "node_modules/react-dom": {
+ "version": "19.2.3",
+ "resolved": "https://registry.npmjs.org/react-dom/-/react-dom-19.2.3.tgz",
+ "integrity": "sha512-yELu4WmLPw5Mr/lmeEpox5rw3RETacE++JgHqQzd2dg+YbJuat3jH4ingc+WPZhxaoFzdv9y33G+F7Nl5O0GBg==",
+ "license": "MIT",
+ "dependencies": {
+ "scheduler": "^0.27.0"
+ },
+ "peerDependencies": {
+ "react": "^19.2.3"
+ }
+ },
+ "node_modules/react-is": {
+ "version": "18.3.1",
+ "resolved": "https://registry.npmjs.org/react-is/-/react-is-18.3.1.tgz",
+ "integrity": "sha512-/LLMVyas0ljjAtoYiPqYiL8VWXzUUdThrmU5+n20DZv+a+ClRoevUzw5JxU+Ieh5/c87ytoTBV9G1FiKfNJdmg==",
+ "license": "MIT"
+ },
+ "node_modules/react-refresh": {
+ "version": "0.18.0",
+ "resolved": "https://registry.npmjs.org/react-refresh/-/react-refresh-0.18.0.tgz",
+ "integrity": "sha512-QgT5//D3jfjJb6Gsjxv0Slpj23ip+HtOpnNgnb2S5zU3CB26G/IDPGoy4RJB42wzFE46DRsstbW6tKHoKbhAxw==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=0.10.0"
+ }
+ },
+ "node_modules/resolve-from": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/resolve-from/-/resolve-from-4.0.0.tgz",
+ "integrity": "sha512-pb/MYmXstAkysRFx8piNI1tGFNQIFA3vkE3Gq4EuA1dF6gHp/+vgZqsCGJapvy8N3Q+4o7FwvquPJcnZ7RYy4g==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=4"
+ }
+ },
+ "node_modules/rollup": {
+ "version": "4.55.2",
+ "resolved": "https://registry.npmjs.org/rollup/-/rollup-4.55.2.tgz",
+ "integrity": "sha512-PggGy4dhwx5qaW+CKBilA/98Ql9keyfnb7lh4SR6shQ91QQQi1ORJ1v4UinkdP2i87OBs9AQFooQylcrrRfIcg==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@types/estree": "1.0.8"
+ },
+ "bin": {
+ "rollup": "dist/bin/rollup"
+ },
+ "engines": {
+ "node": ">=18.0.0",
+ "npm": ">=8.0.0"
+ },
+ "optionalDependencies": {
+ "@rollup/rollup-android-arm-eabi": "4.55.2",
+ "@rollup/rollup-android-arm64": "4.55.2",
+ "@rollup/rollup-darwin-arm64": "4.55.2",
+ "@rollup/rollup-darwin-x64": "4.55.2",
+ "@rollup/rollup-freebsd-arm64": "4.55.2",
+ "@rollup/rollup-freebsd-x64": "4.55.2",
+ "@rollup/rollup-linux-arm-gnueabihf": "4.55.2",
+ "@rollup/rollup-linux-arm-musleabihf": "4.55.2",
+ "@rollup/rollup-linux-arm64-gnu": "4.55.2",
+ "@rollup/rollup-linux-arm64-musl": "4.55.2",
+ "@rollup/rollup-linux-loong64-gnu": "4.55.2",
+ "@rollup/rollup-linux-loong64-musl": "4.55.2",
+ "@rollup/rollup-linux-ppc64-gnu": "4.55.2",
+ "@rollup/rollup-linux-ppc64-musl": "4.55.2",
+ "@rollup/rollup-linux-riscv64-gnu": "4.55.2",
+ "@rollup/rollup-linux-riscv64-musl": "4.55.2",
+ "@rollup/rollup-linux-s390x-gnu": "4.55.2",
+ "@rollup/rollup-linux-x64-gnu": "4.55.2",
+ "@rollup/rollup-linux-x64-musl": "4.55.2",
+ "@rollup/rollup-openbsd-x64": "4.55.2",
+ "@rollup/rollup-openharmony-arm64": "4.55.2",
+ "@rollup/rollup-win32-arm64-msvc": "4.55.2",
+ "@rollup/rollup-win32-ia32-msvc": "4.55.2",
+ "@rollup/rollup-win32-x64-gnu": "4.55.2",
+ "@rollup/rollup-win32-x64-msvc": "4.55.2",
+ "fsevents": "~2.3.2"
+ }
+ },
+ "node_modules/scheduler": {
+ "version": "0.27.0",
+ "resolved": "https://registry.npmjs.org/scheduler/-/scheduler-0.27.0.tgz",
+ "integrity": "sha512-eNv+WrVbKu1f3vbYJT/xtiF5syA5HPIMtf9IgY/nKg0sWqzAUEvqY/xm7OcZc/qafLx/iO9FgOmeSAp4v5ti/Q==",
+ "license": "MIT"
+ },
+ "node_modules/scroll-into-view-if-needed": {
+ "version": "3.1.0",
+ "resolved": "https://registry.npmjs.org/scroll-into-view-if-needed/-/scroll-into-view-if-needed-3.1.0.tgz",
+ "integrity": "sha512-49oNpRjWRvnU8NyGVmUaYG4jtTkNonFZI86MmGRDqBphEK2EXT9gdEUoQPZhuBM8yWHxCWbobltqYO5M4XrUvQ==",
+ "license": "MIT",
+ "dependencies": {
+ "compute-scroll-into-view": "^3.0.2"
+ }
+ },
+ "node_modules/semver": {
+ "version": "6.3.1",
+ "resolved": "https://registry.npmjs.org/semver/-/semver-6.3.1.tgz",
+ "integrity": "sha512-BR7VvDCVHO+q2xBEWskxS6DJE1qRnb7DxzUrogb71CWoSficBxYsiAGd+Kl0mmq/MprG9yArRkyrQxTO6XjMzA==",
+ "dev": true,
+ "license": "ISC",
+ "bin": {
+ "semver": "bin/semver.js"
+ }
+ },
+ "node_modules/shebang-command": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/shebang-command/-/shebang-command-2.0.0.tgz",
+ "integrity": "sha512-kHxr2zZpYtdmrN1qDjrrX/Z1rR1kG8Dx+gkpK1G4eXmvXswmcE1hTWBWYUzlraYw1/yZp6YuDY77YtvbN0dmDA==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "shebang-regex": "^3.0.0"
+ },
+ "engines": {
+ "node": ">=8"
+ }
+ },
+ "node_modules/shebang-regex": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/shebang-regex/-/shebang-regex-3.0.0.tgz",
+ "integrity": "sha512-7++dFhtcx3353uBaq8DDR4NuxBetBzC7ZQOhmTQInHEd6bSrXdiEyzCvG07Z44UYdLShWUyXt5M/yhz8ekcb1A==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=8"
+ }
+ },
+ "node_modules/source-map-js": {
+ "version": "1.2.1",
+ "resolved": "https://registry.npmjs.org/source-map-js/-/source-map-js-1.2.1.tgz",
+ "integrity": "sha512-UXWMKhLOwVKb728IUtQPXxfYU+usdybtUrK/8uGE8CQMvrhOpwvzDBwj0QhSL7MQc7vIsISBG8VQ8+IDQxpfQA==",
+ "dev": true,
+ "license": "BSD-3-Clause",
+ "engines": {
+ "node": ">=0.10.0"
+ }
+ },
+ "node_modules/string-convert": {
+ "version": "0.2.1",
+ "resolved": "https://registry.npmjs.org/string-convert/-/string-convert-0.2.1.tgz",
+ "integrity": "sha512-u/1tdPl4yQnPBjnVrmdLo9gtuLvELKsAoRapekWggdiQNvvvum+jYF329d84NAa660KQw7pB2n36KrIKVoXa3A==",
+ "license": "MIT"
+ },
+ "node_modules/strip-json-comments": {
+ "version": "3.1.1",
+ "resolved": "https://registry.npmjs.org/strip-json-comments/-/strip-json-comments-3.1.1.tgz",
+ "integrity": "sha512-6fPc+R4ihwqP6N/aIv2f1gMH8lOVtWQHoqC4yK6oSDVVocumAsfCqjkXnqiYMhmMwS/mEHLp7Vehlt3ql6lEig==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=8"
+ },
+ "funding": {
+ "url": "https://github.com/sponsors/sindresorhus"
+ }
+ },
+ "node_modules/stylis": {
+ "version": "4.3.6",
+ "resolved": "https://registry.npmjs.org/stylis/-/stylis-4.3.6.tgz",
+ "integrity": "sha512-yQ3rwFWRfwNUY7H5vpU0wfdkNSnvnJinhF9830Swlaxl03zsOjCfmX0ugac+3LtK0lYSgwL/KXc8oYL3mG4YFQ==",
+ "license": "MIT"
+ },
+ "node_modules/supports-color": {
+ "version": "7.2.0",
+ "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-7.2.0.tgz",
+ "integrity": "sha512-qpCAvRl9stuOHveKsn7HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "has-flag": "^4.0.0"
+ },
+ "engines": {
+ "node": ">=8"
+ }
+ },
+ "node_modules/throttle-debounce": {
+ "version": "5.0.2",
+ "resolved": "https://registry.npmjs.org/throttle-debounce/-/throttle-debounce-5.0.2.tgz",
+ "integrity": "sha512-B71/4oyj61iNH0KeCamLuE2rmKuTO5byTOSVwECM5FA7TiAiAW+UqTKZ9ERueC4qvgSttUhdmq1mXC3kJqGX7A==",
+ "license": "MIT",
+ "engines": {
+ "node": ">=12.22"
+ }
+ },
+ "node_modules/tinyglobby": {
+ "version": "0.2.15",
+ "resolved": "https://registry.npmjs.org/tinyglobby/-/tinyglobby-0.2.15.tgz",
+ "integrity": "sha512-j2Zq4NyQYG5XMST4cbs02Ak8iJUdxRM0XI5QyxXuZOzKOINmWurp3smXu3y5wDcJrptwpSjgXHzIQxR0omXljQ==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "fdir": "^6.5.0",
+ "picomatch": "^4.0.3"
+ },
+ "engines": {
+ "node": ">=12.0.0"
+ },
+ "funding": {
+ "url": "https://github.com/sponsors/SuperchupuDev"
+ }
+ },
+ "node_modules/ts-api-utils": {
+ "version": "2.4.0",
+ "resolved": "https://registry.npmjs.org/ts-api-utils/-/ts-api-utils-2.4.0.tgz",
+ "integrity": "sha512-3TaVTaAv2gTiMB35i3FiGJaRfwb3Pyn/j3m/bfAvGe8FB7CF6u+LMYqYlDh7reQf7UNvoTvdfAqHGmPGOSsPmA==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=18.12"
+ },
+ "peerDependencies": {
+ "typescript": ">=4.8.4"
+ }
+ },
+ "node_modules/type-check": {
+ "version": "0.4.0",
+ "resolved": "https://registry.npmjs.org/type-check/-/type-check-0.4.0.tgz",
+ "integrity": "sha512-XleUoc9uwGXqjWwXaUTZAmzMcFZ5858QA2vvx1Ur5xIcixXIP+8LnFDgRplU30us6teqdlskFfu+ae4K79Ooew==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "prelude-ls": "^1.2.1"
+ },
+ "engines": {
+ "node": ">= 0.8.0"
+ }
+ },
+ "node_modules/typescript": {
+ "version": "5.9.3",
+ "resolved": "https://registry.npmjs.org/typescript/-/typescript-5.9.3.tgz",
+ "integrity": "sha512-jl1vZzPDinLr9eUt3J/t7V6FgNEw9QjvBPdysz9KfQDD41fQrC2Y4vKQdiaUpFT4bXlb1RHhLpp8wtm6M5TgSw==",
+ "dev": true,
+ "license": "Apache-2.0",
+ "bin": {
+ "tsc": "bin/tsc",
+ "tsserver": "bin/tsserver"
+ },
+ "engines": {
+ "node": ">=14.17"
+ }
+ },
+ "node_modules/typescript-eslint": {
+ "version": "8.53.0",
+ "resolved": "https://registry.npmjs.org/typescript-eslint/-/typescript-eslint-8.53.0.tgz",
+ "integrity": "sha512-xHURCQNxZ1dsWn0sdOaOfCSQG0HKeqSj9OexIxrz6ypU6wHYOdX2I3D2b8s8wFSsSOYJb+6q283cLiLlkEsBYw==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@typescript-eslint/eslint-plugin": "8.53.0",
+ "@typescript-eslint/parser": "8.53.0",
+ "@typescript-eslint/typescript-estree": "8.53.0",
+ "@typescript-eslint/utils": "8.53.0"
+ },
+ "engines": {
+ "node": "^18.18.0 || ^20.9.0 || >=21.1.0"
+ },
+ "funding": {
+ "type": "opencollective",
+ "url": "https://opencollective.com/typescript-eslint"
+ },
+ "peerDependencies": {
+ "eslint": "^8.57.0 || ^9.0.0",
+ "typescript": ">=4.8.4 <6.0.0"
+ }
+ },
+ "node_modules/undici-types": {
+ "version": "7.16.0",
+ "resolved": "https://registry.npmjs.org/undici-types/-/undici-types-7.16.0.tgz",
+ "integrity": "sha512-Zz+aZWSj8LE6zoxD+xrjh4VfkIG8Ya6LvYkZqtUQGJPZjYl53ypCaUwWqo7eI0x66KBGeRo+mlBEkMSeSZ38Nw==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/update-browserslist-db": {
+ "version": "1.2.3",
+ "resolved": "https://registry.npmjs.org/update-browserslist-db/-/update-browserslist-db-1.2.3.tgz",
+ "integrity": "sha512-Js0m9cx+qOgDxo0eMiFGEueWztz+d4+M3rGlmKPT+T4IS/jP4ylw3Nwpu6cpTTP8R1MAC1kF4VbdLt3ARf209w==",
+ "dev": true,
+ "funding": [
+ {
+ "type": "opencollective",
+ "url": "https://opencollective.com/browserslist"
+ },
+ {
+ "type": "tidelift",
+ "url": "https://tidelift.com/funding/github/npm/browserslist"
+ },
+ {
+ "type": "github",
+ "url": "https://github.com/sponsors/ai"
+ }
+ ],
+ "license": "MIT",
+ "dependencies": {
+ "escalade": "^3.2.0",
+ "picocolors": "^1.1.1"
+ },
+ "bin": {
+ "update-browserslist-db": "cli.js"
+ },
+ "peerDependencies": {
+ "browserslist": ">= 4.21.0"
+ }
+ },
+ "node_modules/uri-js": {
+ "version": "4.4.1",
+ "resolved": "https://registry.npmjs.org/uri-js/-/uri-js-4.4.1.tgz",
+ "integrity": "sha512-7rKUyy33Q1yc98pQ1DAmLtwX109F7TIfWlW1Ydo8Wl1ii1SeHieeh0HHfPeL2fMXK6z0s8ecKs9frCuLJvndBg==",
+ "dev": true,
+ "license": "BSD-2-Clause",
+ "dependencies": {
+ "punycode": "^2.1.0"
+ }
+ },
+ "node_modules/vite": {
+ "version": "7.3.1",
+ "resolved": "https://registry.npmjs.org/vite/-/vite-7.3.1.tgz",
+ "integrity": "sha512-w+N7Hifpc3gRjZ63vYBXA56dvvRlNWRczTdmCBBa+CotUzAPf5b7YMdMR/8CQoeYE5LX3W4wj6RYTgonm1b9DA==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "esbuild": "^0.27.0",
+ "fdir": "^6.5.0",
+ "picomatch": "^4.0.3",
+ "postcss": "^8.5.6",
+ "rollup": "^4.43.0",
+ "tinyglobby": "^0.2.15"
+ },
+ "bin": {
+ "vite": "bin/vite.js"
+ },
+ "engines": {
+ "node": "^20.19.0 || >=22.12.0"
+ },
+ "funding": {
+ "url": "https://github.com/vitejs/vite?sponsor=1"
+ },
+ "optionalDependencies": {
+ "fsevents": "~2.3.3"
+ },
+ "peerDependencies": {
+ "@types/node": "^20.19.0 || >=22.12.0",
+ "jiti": ">=1.21.0",
+ "less": "^4.0.0",
+ "lightningcss": "^1.21.0",
+ "sass": "^1.70.0",
+ "sass-embedded": "^1.70.0",
+ "stylus": ">=0.54.8",
+ "sugarss": "^5.0.0",
+ "terser": "^5.16.0",
+ "tsx": "^4.8.1",
+ "yaml": "^2.4.2"
+ },
+ "peerDependenciesMeta": {
+ "@types/node": {
+ "optional": true
+ },
+ "jiti": {
+ "optional": true
+ },
+ "less": {
+ "optional": true
+ },
+ "lightningcss": {
+ "optional": true
+ },
+ "sass": {
+ "optional": true
+ },
+ "sass-embedded": {
+ "optional": true
+ },
+ "stylus": {
+ "optional": true
+ },
+ "sugarss": {
+ "optional": true
+ },
+ "terser": {
+ "optional": true
+ },
+ "tsx": {
+ "optional": true
+ },
+ "yaml": {
+ "optional": true
+ }
+ }
+ },
+ "node_modules/which": {
+ "version": "2.0.2",
+ "resolved": "https://registry.npmjs.org/which/-/which-2.0.2.tgz",
+ "integrity": "sha512-BLI3Tl1TW3Pvl70l3yq3Y64i+awpwXqsGBYWkkqMtnbXgrMD+yj7rhW0kuEDxzJaYXGjEW5ogapKNMEKNMjibA==",
+ "dev": true,
+ "license": "ISC",
+ "dependencies": {
+ "isexe": "^2.0.0"
+ },
+ "bin": {
+ "node-which": "bin/node-which"
+ },
+ "engines": {
+ "node": ">= 8"
+ }
+ },
+ "node_modules/word-wrap": {
+ "version": "1.2.5",
+ "resolved": "https://registry.npmjs.org/word-wrap/-/word-wrap-1.2.5.tgz",
+ "integrity": "sha512-BN22B5eaMMI9UMtjrGd5g5eCYPpCPDUy0FJXbYsaT5zYxjFOckS53SQDE3pWkVoWpHXVb3BrYcEN4Twa55B5cA==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=0.10.0"
+ }
+ },
+ "node_modules/yallist": {
+ "version": "3.1.1",
+ "resolved": "https://registry.npmjs.org/yallist/-/yallist-3.1.1.tgz",
+ "integrity": "sha512-a4UGQaWPH59mOXUYnAG2ewncQS4i4F43Tv3JoAM+s2VDAmS9NsK8GpDMLrCHPksFT7h3K6TOoUNn2pb7RoXx4g==",
+ "dev": true,
+ "license": "ISC"
+ },
+ "node_modules/yocto-queue": {
+ "version": "0.1.0",
+ "resolved": "https://registry.npmjs.org/yocto-queue/-/yocto-queue-0.1.0.tgz",
+ "integrity": "sha512-rVksvsnNCdJ/ohGc6xgPwyN8eheCxsiLM8mxuE/t/mOVqJewPuO1miLpTHQiRgTKCLexL4MeAFVagts7HmNZ2Q==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=10"
+ },
+ "funding": {
+ "url": "https://github.com/sponsors/sindresorhus"
+ }
+ },
+ "node_modules/zod": {
+ "version": "4.3.5",
+ "resolved": "https://registry.npmjs.org/zod/-/zod-4.3.5.tgz",
+ "integrity": "sha512-k7Nwx6vuWx1IJ9Bjuf4Zt1PEllcwe7cls3VNzm4CQ1/hgtFUK2bRNG3rvnpPUhFjmqJKAKtjV576KnUkHocg/g==",
+ "dev": true,
+ "license": "MIT",
+ "funding": {
+ "url": "https://github.com/sponsors/colinhacks"
+ }
+ },
+ "node_modules/zod-validation-error": {
+ "version": "4.0.2",
+ "resolved": "https://registry.npmjs.org/zod-validation-error/-/zod-validation-error-4.0.2.tgz",
+ "integrity": "sha512-Q6/nZLe6jxuU80qb/4uJ4t5v2VEZ44lzQjPDhYJNztRQ4wyWc6VF3D3Kb/fAuPetZQnhS3hnajCf9CsWesghLQ==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=18.0.0"
+ },
+ "peerDependencies": {
+ "zod": "^3.25.0 || ^4.0.0"
+ }
+ }
+ }
+}
diff --git a/experiments/assessment/frontend/package.json b/experiments/assessment/frontend/package.json
new file mode 100644
index 0000000..ef1aa11
--- /dev/null
+++ b/experiments/assessment/frontend/package.json
@@ -0,0 +1,32 @@
+{
+ "name": "assessment-frontend",
+ "private": true,
+ "version": "1.0.0",
+ "type": "module",
+ "scripts": {
+ "dev": "vite",
+ "build": "tsc -b && vite build",
+ "lint": "eslint .",
+ "preview": "vite preview"
+ },
+ "dependencies": {
+ "@ant-design/icons": "^6.1.0",
+ "antd": "^6.0.0",
+ "react": "^19.2.0",
+ "react-dom": "^19.2.0"
+ },
+ "devDependencies": {
+ "@eslint/js": "^9.39.1",
+ "@types/node": "^24.10.1",
+ "@types/react": "^19.2.5",
+ "@types/react-dom": "^19.2.3",
+ "@vitejs/plugin-react": "^5.1.1",
+ "eslint": "^9.39.1",
+ "eslint-plugin-react-hooks": "^7.0.1",
+ "eslint-plugin-react-refresh": "^0.4.24",
+ "globals": "^16.5.0",
+ "typescript": "~5.9.3",
+ "typescript-eslint": "^8.46.4",
+ "vite": "^7.2.4"
+ }
+}
diff --git a/experiments/assessment/frontend/src/App.tsx b/experiments/assessment/frontend/src/App.tsx
new file mode 100644
index 0000000..05c3010
--- /dev/null
+++ b/experiments/assessment/frontend/src/App.tsx
@@ -0,0 +1,109 @@
+/**
+ * Main application component for the assessment interface.
+ */
+
+import { ConfigProvider, theme, Spin } from 'antd';
+import { useAssessment } from './hooks/useAssessment';
+import { RaterLogin } from './components/RaterLogin';
+import { InstructionsPage } from './components/InstructionsPage';
+import { AssessmentPage } from './components/AssessmentPage';
+import { CompletionPage } from './components/CompletionPage';
+
+function App() {
+ const assessment = useAssessment();
+
+ const renderContent = () => {
+ // Show loading spinner for initial load
+ if (assessment.loading && !assessment.rater) {
+      return (
+        <div style={{ display: 'flex', justifyContent: 'center', alignItems: 'center', minHeight: '100vh' }}>
+          <Spin size="large" />
+        </div>
+      );
+    }
+
+    switch (assessment.view) {
+      case 'login':
+        return (
+          <RaterLogin
+            onLogin={assessment.login}
+            loading={assessment.loading}
+            error={assessment.error}
+          />
+        );
+
+      case 'instructions':
+        return (
+          <InstructionsPage
+            dimensions={assessment.dimensions}
+            onStart={assessment.startAssessment}
+            loading={assessment.loading}
+          />
+        );
+
+      case 'assessment':
+        if (!assessment.rater || !assessment.currentQuery || !assessment.currentIdea || !assessment.dimensions) {
+          return (
+            <div style={{ display: 'flex', justifyContent: 'center', alignItems: 'center', minHeight: '100vh' }}>
+              <Spin size="large" />
+            </div>
+          );
+        }
+        return (
+          <AssessmentPage
+            raterId={assessment.rater.rater_id}
+            queryId={assessment.currentQuery.query_id}
+            queryText={assessment.currentQuery.text}
+            idea={assessment.currentIdea}
+            ideaIndex={assessment.ideaIndex}
+            totalIdeas={assessment.totalIdeas}
+            dimensions={assessment.dimensions}
+            progress={assessment.progress}
+            onNext={assessment.nextIdea}
+            onPrev={assessment.prevIdea}
+            onShowDefinitions={assessment.showDefinitions}
+            onLogout={assessment.logout}
+            canGoPrev={assessment.ideaIndex > 0}
+          />
+        );
+
+      case 'completion':
+        return (
+          <CompletionPage
+            raterId={assessment.rater?.rater_id ?? ''}
+            progress={assessment.progress}
+            onLogout={assessment.logout}
+          />
+        );
+
+      default:
+        return null;
+    }
+  };
+
+  return (
+    <ConfigProvider theme={{ algorithm: theme.defaultAlgorithm }}>
+      {renderContent()}
+    </ConfigProvider>
+  );
+}
+
+export default App;
diff --git a/experiments/assessment/frontend/src/components/AssessmentPage.tsx b/experiments/assessment/frontend/src/components/AssessmentPage.tsx
new file mode 100644
index 0000000..4deda56
--- /dev/null
+++ b/experiments/assessment/frontend/src/components/AssessmentPage.tsx
@@ -0,0 +1,199 @@
+/**
+ * Main assessment page for rating ideas.
+ */
+
+import { Card, Button, Space, Alert, Typography } from 'antd';
+import {
+ ArrowLeftOutlined,
+ ArrowRightOutlined,
+ ForwardOutlined,
+ BookOutlined,
+ LogoutOutlined
+} from '@ant-design/icons';
+import type { IdeaForRating, DimensionDefinitions, RaterProgress } from '../types';
+import { useRatings } from '../hooks/useRatings';
+import { IdeaCard } from './IdeaCard';
+import { RatingSlider } from './RatingSlider';
+import { ProgressBar } from './ProgressBar';
+
+const { Text } = Typography;
+
+interface AssessmentPageProps {
+ raterId: string;
+ queryId: string;
+ queryText: string;
+ idea: IdeaForRating;
+ ideaIndex: number;
+ totalIdeas: number;
+ dimensions: DimensionDefinitions;
+ progress: RaterProgress | null;
+ onNext: () => void;
+ onPrev: () => void;
+ onShowDefinitions: () => void;
+ onLogout: () => void;
+ canGoPrev: boolean;
+}
+
+export function AssessmentPage({
+ raterId,
+ queryId,
+ queryText,
+ idea,
+ ideaIndex,
+ totalIdeas,
+ dimensions,
+ progress,
+ onNext,
+ onPrev,
+ onShowDefinitions,
+ onLogout,
+ canGoPrev
+}: AssessmentPageProps) {
+ const {
+ ratings,
+ setRating,
+ isComplete,
+ submit,
+ skip,
+ submitting,
+ error
+ } = useRatings({
+ raterId,
+ queryId,
+ ideaId: idea.idea_id,
+ onSuccess: onNext
+ });
+
+ const handleSubmit = async () => {
+ await submit();
+ };
+
+ const handleSkip = async () => {
+ await skip();
+ };
+
+ // Calculate query progress
+ const queryProgress = progress?.queries.find(q => q.query_id === queryId);
+ const queryCompleted = queryProgress?.completed_count ?? ideaIndex;
+ const queryTotal = totalIdeas;
+
+  return (
+    <div style={{ maxWidth: 800, margin: '0 auto', padding: 24 }}>
+      {/* Header with query info and overall progress */}
+      <Card size="small" style={{ marginBottom: 16 }}>
+        <Space style={{ width: '100%', justifyContent: 'space-between' }}>
+          <Text strong>Query: "{queryText}"</Text>
+          <Space>
+            <Button
+              icon={<BookOutlined />}
+              onClick={onShowDefinitions}
+              size="small"
+            >
+              Definitions
+            </Button>
+            <Button
+              icon={<LogoutOutlined />}
+              onClick={onLogout}
+              size="small"
+              danger
+            >
+              Exit
+            </Button>
+          </Space>
+        </Space>
+
+        <ProgressBar
+          completed={queryCompleted}
+          total={queryTotal}
+          label="Current query"
+        />
+        {progress && (
+          <ProgressBar
+            completed={progress.total_completed}
+            total={progress.total_ideas}
+            label="Overall progress"
+          />
+        )}
+      </Card>
+
+      {/* Error display */}
+      {error && (
+        <Alert type="error" message={error} showIcon style={{ marginBottom: 16 }} />
+      )}
+
+      {/* Idea card */}
+      <IdeaCard
+        ideaNumber={ideaIndex + 1}
+        text={idea.text}
+        queryText={queryText}
+      />
+
+      {/* Rating inputs */}
+      <Card style={{ marginBottom: 16 }}>
+        <RatingSlider
+          dimension={dimensions.originality}
+          value={ratings.originality}
+          onChange={(v) => setRating('originality', v)}
+          disabled={submitting}
+        />
+        <RatingSlider
+          dimension={dimensions.elaboration}
+          value={ratings.elaboration}
+          onChange={(v) => setRating('elaboration', v)}
+          disabled={submitting}
+        />
+        <RatingSlider
+          dimension={dimensions.coherence}
+          value={ratings.coherence}
+          onChange={(v) => setRating('coherence', v)}
+          disabled={submitting}
+        />
+        <RatingSlider
+          dimension={dimensions.usefulness}
+          value={ratings.usefulness}
+          onChange={(v) => setRating('usefulness', v)}
+          disabled={submitting}
+        />
+      </Card>
+
+      {/* Navigation buttons */}
+      <Space style={{ width: '100%', justifyContent: 'space-between' }}>
+        <Button
+          icon={<ArrowLeftOutlined />}
+          onClick={onPrev}
+          disabled={!canGoPrev || submitting}
+        >
+          Back
+        </Button>
+        <Space>
+          <Button
+            icon={<ForwardOutlined />}
+            onClick={handleSkip}
+            loading={submitting}
+          >
+            Skip
+          </Button>
+          <Button
+            type="primary"
+            icon={<ArrowRightOutlined />}
+            onClick={handleSubmit}
+            loading={submitting}
+            disabled={!isComplete()}
+          >
+            Submit & Next
+          </Button>
+        </Space>
+      </Space>
+    </div>
+  );
+}
diff --git a/experiments/assessment/frontend/src/components/CompletionPage.tsx b/experiments/assessment/frontend/src/components/CompletionPage.tsx
new file mode 100644
index 0000000..04e0e0a
--- /dev/null
+++ b/experiments/assessment/frontend/src/components/CompletionPage.tsx
@@ -0,0 +1,105 @@
+/**
+ * Completion page shown when all ideas have been rated.
+ */
+
+import { Card, Button, Typography, Space, Result, Statistic, Row, Col } from 'antd';
+import { CheckCircleOutlined, BarChartOutlined, LogoutOutlined } from '@ant-design/icons';
+import type { RaterProgress } from '../types';
+
+const { Title, Text } = Typography;
+
+interface CompletionPageProps {
+ raterId: string;
+ progress: RaterProgress | null;
+ onLogout: () => void;
+}
+
+export function CompletionPage({ raterId, progress, onLogout }: CompletionPageProps) {
+ const completed = progress?.total_completed ?? 0;
+ const total = progress?.total_ideas ?? 0;
+ const percentage = progress?.percentage ?? 0;
+
+ const isFullyComplete = completed >= total;
+
+  return (
+    <div style={{ maxWidth: 600, margin: '0 auto', padding: 24 }}>
+      <Card>
+        <Result
+          icon={isFullyComplete ? <CheckCircleOutlined /> : <BarChartOutlined />}
+          title={isFullyComplete ? 'Assessment Complete!' : 'Session Summary'}
+          subTitle={
+            isFullyComplete
+              ? 'Thank you for completing the assessment.'
+              : 'You have made progress on the assessment.'
+          }
+          extra={[
+            <Button
+              key="exit"
+              type="primary"
+              icon={<LogoutOutlined />}
+              onClick={onLogout}
+            >
+              Exit
+            </Button>
+          ]}
+        />
+
+        <Text type="secondary">Rater: {raterId}</Text>
+
+        <Row gutter={16} style={{ marginTop: 16, textAlign: 'center' }}>
+          <Col span={8}>
+            <Statistic title="Rated" value={completed} />
+          </Col>
+          <Col span={8}>
+            <Statistic title="Total" value={total} />
+          </Col>
+          <Col span={8}>
+            <Statistic title="Progress" value={percentage} suffix="%" />
+          </Col>
+        </Row>
+
+        {progress && progress.queries.length > 0 && (
+          <div style={{ marginTop: 24 }}>
+            <Title level={5}>Progress by Query</Title>
+            <Space direction="vertical" style={{ width: '100%' }}>
+              {progress.queries.map((q) => (
+                <Space key={q.query_id} style={{ width: '100%', justifyContent: 'space-between' }}>
+                  <Text>{q.query_id}</Text>
+                  <Text type={q.completed_count >= q.total_count ? 'success' : 'secondary'}>
+                    {q.completed_count} / {q.total_count}
+                    {q.completed_count >= q.total_count && ' ✓'}
+                  </Text>
+                </Space>
+              ))}
+            </Space>
+          </div>
+        )}
+      </Card>
+    </div>
+  );
+}
diff --git a/experiments/assessment/frontend/src/components/IdeaCard.tsx b/experiments/assessment/frontend/src/components/IdeaCard.tsx
new file mode 100644
index 0000000..b7ad5a6
--- /dev/null
+++ b/experiments/assessment/frontend/src/components/IdeaCard.tsx
@@ -0,0 +1,36 @@
+/**
+ * Card displaying a single idea for rating.
+ */
+
+import { Card, Typography, Tag } from 'antd';
+
+const { Text, Paragraph } = Typography;
+
+interface IdeaCardProps {
+ ideaNumber: number;
+ text: string;
+ queryText: string;
+}
+
+export function IdeaCard({ ideaNumber, text, queryText }: IdeaCardProps) {
+  return (
+    <Card
+      title={
+        <span>
+          <Tag color="blue">IDEA #{ideaNumber}</Tag>
+          <Text type="secondary">Query: {queryText}</Text>
+        </span>
+      }
+      style={{ marginBottom: 24 }}
+    >
+      <Paragraph style={{ fontSize: 16, marginBottom: 0 }}>
+        "{text}"
+      </Paragraph>
+    </Card>
+  );
+}
diff --git a/experiments/assessment/frontend/src/components/InstructionsPage.tsx b/experiments/assessment/frontend/src/components/InstructionsPage.tsx
new file mode 100644
index 0000000..627bbb4
--- /dev/null
+++ b/experiments/assessment/frontend/src/components/InstructionsPage.tsx
@@ -0,0 +1,134 @@
+/**
+ * Instructions page showing dimension definitions.
+ */
+
+import { useState } from 'react';
+import { Card, Button, Typography, Space, Checkbox, Divider, Tag } from 'antd';
+import { PlayCircleOutlined } from '@ant-design/icons';
+import type { DimensionDefinitions } from '../types';
+
+const { Title, Text, Paragraph } = Typography;
+
+interface InstructionsPageProps {
+ dimensions: DimensionDefinitions | null;
+ onStart: () => void;
+ onBack?: () => void;
+ loading: boolean;
+ isReturning?: boolean;
+}
+
+export function InstructionsPage({
+ dimensions,
+ onStart,
+ onBack,
+ loading,
+ isReturning = false
+}: InstructionsPageProps) {
+ const [acknowledged, setAcknowledged] = useState(isReturning);
+
+ if (!dimensions) {
+    return (
+      <Card style={{ maxWidth: 800, margin: '24px auto' }}>
+        Loading instructions...
+      </Card>
+    );
+ }
+
+ const dimensionOrder = ['originality', 'elaboration', 'coherence', 'usefulness'] as const;
+
+  return (
+    <div style={{ maxWidth: 800, margin: '0 auto', padding: 24 }}>
+      <Card>
+        <Space direction="vertical" size="large" style={{ width: '100%' }}>
+          <div style={{ textAlign: 'center' }}>
+            <Title level={2}>Assessment Instructions</Title>
+            <Paragraph>
+              You will rate creative ideas on 4 dimensions using a 1-5 scale.
+              Please read each definition carefully before beginning.
+            </Paragraph>
+          </div>
+
+          <Divider />
+
+          {dimensionOrder.map((key) => {
+            const dim = dimensions[key];
+            return (
+              <Card
+                key={key}
+                type="inner"
+                title={
+                  <Space>
+                    <Tag color="blue">{dim.name}</Tag>
+                    <Text type="secondary">{dim.question}</Text>
+                  </Space>
+                }
+                style={{ marginBottom: 16 }}
+              >
+                <div style={{ display: 'grid', gridTemplateColumns: '40px 1fr', rowGap: 8 }}>
+                  {([1, 2, 3, 4, 5] as const).map((score) => (
+                    <>
+                      <Tag style={{ textAlign: 'center' }}>
+                        {score}
+                      </Tag>
+                      <Text>
+                        {dim.scale[score]}
+                      </Text>
+                    </>
+                  ))}
+                </div>
+
+                <div style={{ display: 'flex', justifyContent: 'space-between', marginTop: 12 }}>
+                  <Text type="secondary">{dim.low_label}</Text>
+                  <Text type="secondary">{dim.high_label}</Text>
+                </div>
+              </Card>
+            );
+          })}
+
+          <Divider />
+
+          {!isReturning && (
+            <Checkbox
+              checked={acknowledged}
+              onChange={(e) => setAcknowledged(e.target.checked)}
+            >
+              I have read and understood the instructions
+            </Checkbox>
+          )}
+
+          <Space>
+            {onBack && (
+              <Button onClick={onBack}>
+                Back to Assessment
+              </Button>
+            )}
+            <Button
+              type="primary"
+              icon={<PlayCircleOutlined />}
+              onClick={onStart}
+              loading={loading}
+              disabled={!acknowledged}
+            >
+              {isReturning ? 'Continue Rating' : 'Begin Rating'}
+            </Button>
+          </Space>
+        </Space>
+      </Card>
+    </div>
+  );
+}
diff --git a/experiments/assessment/frontend/src/components/ProgressBar.tsx b/experiments/assessment/frontend/src/components/ProgressBar.tsx
new file mode 100644
index 0000000..32615b2
--- /dev/null
+++ b/experiments/assessment/frontend/src/components/ProgressBar.tsx
@@ -0,0 +1,39 @@
+/**
+ * Progress bar component showing assessment progress.
+ */
+
+import { Progress, Typography, Space } from 'antd';
+
+const { Text } = Typography;
+
+interface ProgressBarProps {
+ completed: number;
+ total: number;
+ label?: string;
+}
+
+export function ProgressBar({ completed, total, label }: ProgressBarProps) {
+ const percentage = total > 0 ? Math.round((completed / total) * 100) : 0;
+
+  return (
+    <Space direction="vertical" size={4} style={{ width: '100%' }}>
+      {label && (
+        <div style={{ display: 'flex', justifyContent: 'space-between' }}>
+          <Text>{label}</Text>
+          <Text type="secondary">
+            {completed}/{total} ({percentage}%)
+          </Text>
+        </div>
+      )}
+      <Progress percent={percentage} showInfo={false} />
+    </Space>
+  );
+}
diff --git a/experiments/assessment/frontend/src/components/RaterLogin.tsx b/experiments/assessment/frontend/src/components/RaterLogin.tsx
new file mode 100644
index 0000000..05a5a3d
--- /dev/null
+++ b/experiments/assessment/frontend/src/components/RaterLogin.tsx
@@ -0,0 +1,116 @@
+/**
+ * Rater login component.
+ */
+
+import { useState, useEffect } from 'react';
+import { Card, Input, Button, Typography, Space, List, Alert } from 'antd';
+import { UserOutlined, LoginOutlined } from '@ant-design/icons';
+import * as api from '../services/api';
+import type { Rater } from '../types';
+
+const { Title, Text } = Typography;
+
+interface RaterLoginProps {
+ onLogin: (raterId: string, name?: string) => void;
+ loading: boolean;
+ error: string | null;
+}
+
+export function RaterLogin({ onLogin, loading, error }: RaterLoginProps) {
+ const [raterId, setRaterId] = useState('');
+  const [existingRaters, setExistingRaters] = useState<Rater[]>([]);
+
+ useEffect(() => {
+ api.listRaters()
+ .then(setExistingRaters)
+ .catch(console.error);
+ }, []);
+
+ const handleLogin = () => {
+ if (raterId.trim()) {
+ onLogin(raterId.trim());
+ }
+ };
+
+ const handleQuickLogin = (rater: Rater) => {
+ onLogin(rater.rater_id);
+ };
+
+  return (
+    <div style={{ minHeight: '100vh', display: 'flex', alignItems: 'center', justifyContent: 'center' }}>
+      <Card style={{ width: 420 }}>
+        <Space direction="vertical" size="large" style={{ width: '100%' }}>
+          <div style={{ textAlign: 'center' }}>
+            <Title level={3}>
+              Creative Idea Assessment
+            </Title>
+            <Text type="secondary">
+              Enter your rater ID to begin
+            </Text>
+          </div>
+
+          {error && (
+            <Alert type="error" message={error} showIcon />
+          )}
+
+          <Input
+            size="large"
+            placeholder="Rater ID"
+            prefix={<UserOutlined />}
+            value={raterId}
+            onChange={(e) => setRaterId(e.target.value)}
+            onPressEnter={handleLogin}
+            disabled={loading}
+          />
+
+          <Button
+            type="primary"
+            size="large"
+            icon={<LoginOutlined />}
+            onClick={handleLogin}
+            loading={loading}
+            disabled={!raterId.trim()}
+            block
+          >
+            Start Assessment
+          </Button>
+
+          {existingRaters.length > 0 && (
+            <div>
+              <Text type="secondary">
+                Existing raters:
+              </Text>
+              <List
+                size="small"
+                dataSource={existingRaters}
+                renderItem={(rater) => (
+                  <List.Item
+                    style={{ cursor: 'pointer' }}
+                    onClick={() => handleQuickLogin(rater)}
+                  >
+                    {rater.rater_id}
+                    {rater.name && rater.name !== rater.rater_id && (
+                      <Text type="secondary">
+                        ({rater.name})
+                      </Text>
+                    )}
+                  </List.Item>
+                )}
+              />
+            </div>
+          )}
+        </Space>
+      </Card>
+    </div>
+  );
+}
diff --git a/experiments/assessment/frontend/src/components/RatingSlider.tsx b/experiments/assessment/frontend/src/components/RatingSlider.tsx
new file mode 100644
index 0000000..30a6974
--- /dev/null
+++ b/experiments/assessment/frontend/src/components/RatingSlider.tsx
@@ -0,0 +1,74 @@
+/**
+ * Rating input component with radio buttons for 1-5 scale.
+ */
+
+import { Radio, Typography, Space, Tooltip, Button } from 'antd';
+import { QuestionCircleOutlined } from '@ant-design/icons';
+import type { DimensionDefinition } from '../types';
+
+const { Text } = Typography;
+
+interface RatingSliderProps {
+ dimension: DimensionDefinition;
+ value: number | null;
+ onChange: (value: number | null) => void;
+ disabled?: boolean;
+}
+
+export function RatingSlider({ dimension, value, onChange, disabled }: RatingSliderProps) {
+  return (
+    <div style={{ marginBottom: 24 }}>
+      <Space style={{ marginBottom: 8 }}>
+        <Text strong>
+          {dimension.name.toUpperCase()}
+        </Text>
+        <Tooltip
+          title={
+            <div>
+              <div>{dimension.question}</div>
+              {([1, 2, 3, 4, 5] as const).map((score) => (
+                <div key={score}>
+                  {score}: {dimension.scale[score]}
+                </div>
+              ))}
+            </div>
+          }
+          placement="right"
+          overlayStyle={{ maxWidth: 400 }}
+        >
+          <Button
+            type="text"
+            icon={<QuestionCircleOutlined />}
+            style={{ padding: 0, height: 'auto' }}
+          />
+        </Tooltip>
+      </Space>
+
+      <div style={{ display: 'flex', alignItems: 'center', gap: 12 }}>
+        <Text type="secondary">
+          {dimension.low_label}
+        </Text>
+
+        <Radio.Group
+          value={value}
+          onChange={(e) => onChange(e.target.value)}
+          disabled={disabled}
+          style={{ flex: 1 }}
+        >
+          {[1, 2, 3, 4, 5].map((score) => (
+            <Radio.Button key={score} value={score}>
+              {score}
+            </Radio.Button>
+          ))}
+        </Radio.Group>
+
+        <Text type="secondary">
+          {dimension.high_label}
+        </Text>
+      </div>
+    </div>
+  );
+}
diff --git a/experiments/assessment/frontend/src/hooks/useAssessment.ts b/experiments/assessment/frontend/src/hooks/useAssessment.ts
new file mode 100644
index 0000000..71be5b4
--- /dev/null
+++ b/experiments/assessment/frontend/src/hooks/useAssessment.ts
@@ -0,0 +1,272 @@
+/**
+ * Hook for managing the assessment session state.
+ */
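+// The hook drives a simple view state machine:
+// 'login' → 'instructions' → 'assessment' → 'completion' (see AppView in ../types).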
+
+import { useState, useCallback, useEffect } from 'react';
+import type {
+ AppView,
+ DimensionDefinitions,
+ QueryInfo,
+ QueryWithIdeas,
+ Rater,
+ RaterProgress,
+} from '../types';
+import * as api from '../services/api';
+
+interface AssessmentState {
+ view: AppView;
+ rater: Rater | null;
+ queries: QueryInfo[];
+ currentQueryIndex: number;
+ currentQuery: QueryWithIdeas | null;
+ currentIdeaIndex: number;
+ progress: RaterProgress | null;
+ dimensions: DimensionDefinitions | null;
+ loading: boolean;
+ error: string | null;
+}
+
+const initialState: AssessmentState = {
+ view: 'login',
+ rater: null,
+ queries: [],
+ currentQueryIndex: 0,
+ currentQuery: null,
+ currentIdeaIndex: 0,
+ progress: null,
+ dimensions: null,
+ loading: false,
+ error: null,
+};
+
+export function useAssessment() {
+  const [state, setState] = useState<AssessmentState>(initialState);
+
+ // Load dimension definitions on mount
+ useEffect(() => {
+ api.getDimensionDefinitions()
+ .then((dimensions) => setState((s) => ({ ...s, dimensions })))
+ .catch((err) => console.error('Failed to load dimensions:', err));
+ }, []);
+
+ // Login as a rater
+ const login = useCallback(async (raterId: string, name?: string) => {
+ setState((s) => ({ ...s, loading: true, error: null }));
+ try {
+ const rater = await api.createOrGetRater({ rater_id: raterId, name });
+ const queries = await api.listQueries();
+ const progress = await api.getRaterProgress(raterId);
+
+ setState((s) => ({
+ ...s,
+ rater,
+ queries,
+ progress,
+ view: 'instructions',
+ loading: false,
+ }));
+ } catch (err) {
+ setState((s) => ({
+ ...s,
+ error: err instanceof Error ? err.message : 'Login failed',
+ loading: false,
+ }));
+ }
+ }, []);
+
+ // Start assessment (move from instructions to assessment)
+ const startAssessment = useCallback(async () => {
+ if (!state.rater || state.queries.length === 0) return;
+
+ setState((s) => ({ ...s, loading: true }));
+ try {
+ // Find first query with unrated ideas
+ let queryIndex = 0;
+ let queryData: QueryWithIdeas | null = null;
+
+ for (let i = 0; i < state.queries.length; i++) {
+ const unrated = await api.getUnratedIdeas(state.queries[i].query_id, state.rater.rater_id);
+ if (unrated.ideas.length > 0) {
+ queryIndex = i;
+ queryData = unrated;
+ break;
+ }
+ }
+
+ if (!queryData) {
+ // All done
+ setState((s) => ({
+ ...s,
+ view: 'completion',
+ loading: false,
+ }));
+ return;
+ }
+
+ setState((s) => ({
+ ...s,
+ view: 'assessment',
+ currentQueryIndex: queryIndex,
+ currentQuery: queryData,
+ currentIdeaIndex: 0,
+ loading: false,
+ }));
+ } catch (err) {
+ setState((s) => ({
+ ...s,
+ error: err instanceof Error ? err.message : 'Failed to start assessment',
+ loading: false,
+ }));
+ }
+ }, [state.rater, state.queries]);
+
+ // Move to next idea
+ const nextIdea = useCallback(async () => {
+ if (!state.currentQuery || !state.rater) return;
+
+ const nextIndex = state.currentIdeaIndex + 1;
+
+ if (nextIndex < state.currentQuery.ideas.length) {
+ // More ideas in current query
+ setState((s) => ({ ...s, currentIdeaIndex: nextIndex }));
+ } else {
+ // Query complete, try to move to next query
+ const nextQueryIndex = state.currentQueryIndex + 1;
+
+ if (nextQueryIndex < state.queries.length) {
+ setState((s) => ({ ...s, loading: true }));
+ try {
+ const unrated = await api.getUnratedIdeas(
+ state.queries[nextQueryIndex].query_id,
+ state.rater.rater_id
+ );
+
+ if (unrated.ideas.length > 0) {
+ setState((s) => ({
+ ...s,
+ currentQueryIndex: nextQueryIndex,
+ currentQuery: unrated,
+ currentIdeaIndex: 0,
+ loading: false,
+ }));
+ } else {
+ // Try to find next query with unrated ideas
+ for (let i = nextQueryIndex + 1; i < state.queries.length; i++) {
+ const nextUnrated = await api.getUnratedIdeas(
+ state.queries[i].query_id,
+ state.rater.rater_id
+ );
+ if (nextUnrated.ideas.length > 0) {
+ setState((s) => ({
+ ...s,
+ currentQueryIndex: i,
+ currentQuery: nextUnrated,
+ currentIdeaIndex: 0,
+ loading: false,
+ }));
+ return;
+ }
+ }
+ // All queries complete
+ setState((s) => ({
+ ...s,
+ view: 'completion',
+ loading: false,
+ }));
+ }
+ } catch (err) {
+ setState((s) => ({
+ ...s,
+ error: err instanceof Error ? err.message : 'Failed to load next query',
+ loading: false,
+ }));
+ }
+ } else {
+ // All queries complete
+ setState((s) => ({ ...s, view: 'completion' }));
+ }
+ }
+
+ // Refresh progress
+ try {
+ const progress = await api.getRaterProgress(state.rater.rater_id);
+ setState((s) => ({ ...s, progress }));
+ } catch (err) {
+ console.error('Failed to refresh progress:', err);
+ }
+ }, [state.currentQuery, state.currentIdeaIndex, state.currentQueryIndex, state.queries, state.rater]);
+
+ // Move to previous idea
+ const prevIdea = useCallback(() => {
+ if (state.currentIdeaIndex > 0) {
+ setState((s) => ({ ...s, currentIdeaIndex: s.currentIdeaIndex - 1 }));
+ }
+ }, [state.currentIdeaIndex]);
+
+ // Jump to a specific query
+ const jumpToQuery = useCallback(async (queryIndex: number) => {
+ if (!state.rater || queryIndex < 0 || queryIndex >= state.queries.length) return;
+
+ setState((s) => ({ ...s, loading: true }));
+ try {
+ const queryData = await api.getQueryWithIdeas(state.queries[queryIndex].query_id);
+ setState((s) => ({
+ ...s,
+ currentQueryIndex: queryIndex,
+ currentQuery: queryData,
+ currentIdeaIndex: 0,
+ view: 'assessment',
+ loading: false,
+ }));
+ } catch (err) {
+ setState((s) => ({
+ ...s,
+ error: err instanceof Error ? err.message : 'Failed to load query',
+ loading: false,
+ }));
+ }
+ }, [state.rater, state.queries]);
+
+ // Refresh progress
+ const refreshProgress = useCallback(async () => {
+ if (!state.rater) return;
+ try {
+ const progress = await api.getRaterProgress(state.rater.rater_id);
+ setState((s) => ({ ...s, progress }));
+ } catch (err) {
+ console.error('Failed to refresh progress:', err);
+ }
+ }, [state.rater]);
+
+ // Show definitions
+ const showInstructions = useCallback(() => {
+ setState((s) => ({ ...s, view: 'instructions' }));
+ }, []);
+
+ // Return to assessment
+ const returnToAssessment = useCallback(() => {
+ setState((s) => ({ ...s, view: 'assessment' }));
+ }, []);
+
+ // Logout
+ const logout = useCallback(() => {
+ setState(initialState);
+ }, []);
+
+ // Get current idea
+ const currentIdea = state.currentQuery?.ideas[state.currentIdeaIndex] ?? null;
+
+ return {
+ ...state,
+ currentIdea,
+ login,
+ startAssessment,
+ nextIdea,
+ prevIdea,
+ jumpToQuery,
+ refreshProgress,
+ showInstructions,
+ returnToAssessment,
+ logout,
+ };
+}
diff --git a/experiments/assessment/frontend/src/hooks/useRatings.ts b/experiments/assessment/frontend/src/hooks/useRatings.ts
new file mode 100644
index 0000000..80929bb
--- /dev/null
+++ b/experiments/assessment/frontend/src/hooks/useRatings.ts
@@ -0,0 +1,133 @@
+/**
+ * Hook for managing rating submission.
+ */
+
+import { useState, useCallback } from 'react';
+import type { RatingState, DimensionKey } from '../types';
+import * as api from '../services/api';
+
+interface UseRatingsOptions {
+ raterId: string | null;
+ queryId: string | null;
+ ideaId: string | null;
+ onSuccess?: () => void;
+}
+
+export function useRatings({ raterId, queryId, ideaId, onSuccess }: UseRatingsOptions) {
+  const [ratings, setRatings] = useState<RatingState>({
+ originality: null,
+ elaboration: null,
+ coherence: null,
+ usefulness: null,
+ });
+ const [submitting, setSubmitting] = useState(false);
+ const [error, setError] = useState(null);
+
+ // Set a single rating
+ const setRating = useCallback((dimension: DimensionKey, value: number | null) => {
+ setRatings((prev) => ({ ...prev, [dimension]: value }));
+ }, []);
+
+ // Reset all ratings
+ const resetRatings = useCallback(() => {
+ setRatings({
+ originality: null,
+ elaboration: null,
+ coherence: null,
+ usefulness: null,
+ });
+ setError(null);
+ }, []);
+
+ // Check if all ratings are set
+ const isComplete = useCallback(() => {
+ return (
+ ratings.originality !== null &&
+ ratings.elaboration !== null &&
+ ratings.coherence !== null &&
+ ratings.usefulness !== null
+ );
+ }, [ratings]);
+
+ // Submit rating
+ const submit = useCallback(async () => {
+ if (!raterId || !queryId || !ideaId) {
+ setError('Missing required information');
+ return false;
+ }
+
+ if (!isComplete()) {
+ setError('Please rate all dimensions');
+ return false;
+ }
+
+ setSubmitting(true);
+ setError(null);
+
+ try {
+ await api.submitRating({
+ rater_id: raterId,
+ idea_id: ideaId,
+ query_id: queryId,
+ originality: ratings.originality,
+ elaboration: ratings.elaboration,
+ coherence: ratings.coherence,
+ usefulness: ratings.usefulness,
+ skipped: false,
+ });
+
+ resetRatings();
+ onSuccess?.();
+ return true;
+ } catch (err) {
+ setError(err instanceof Error ? err.message : 'Failed to submit rating');
+ return false;
+ } finally {
+ setSubmitting(false);
+ }
+ }, [raterId, queryId, ideaId, ratings, isComplete, resetRatings, onSuccess]);
+
+ // Skip idea
+ const skip = useCallback(async () => {
+ if (!raterId || !queryId || !ideaId) {
+ setError('Missing required information');
+ return false;
+ }
+
+ setSubmitting(true);
+ setError(null);
+
+ try {
+ await api.submitRating({
+ rater_id: raterId,
+ idea_id: ideaId,
+ query_id: queryId,
+ originality: null,
+ elaboration: null,
+ coherence: null,
+ usefulness: null,
+ skipped: true,
+ });
+
+ resetRatings();
+ onSuccess?.();
+ return true;
+ } catch (err) {
+ setError(err instanceof Error ? err.message : 'Failed to skip idea');
+ return false;
+ } finally {
+ setSubmitting(false);
+ }
+ }, [raterId, queryId, ideaId, resetRatings, onSuccess]);
+
+ return {
+ ratings,
+ setRating,
+ resetRatings,
+ isComplete,
+ submit,
+ skip,
+ submitting,
+ error,
+ };
+}
diff --git a/experiments/assessment/frontend/src/index.css b/experiments/assessment/frontend/src/index.css
new file mode 100644
index 0000000..fe8d42f
--- /dev/null
+++ b/experiments/assessment/frontend/src/index.css
@@ -0,0 +1,43 @@
+:root {
+ font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, 'Helvetica Neue', Arial, sans-serif;
+ line-height: 1.5;
+ font-weight: 400;
+
+ color-scheme: light;
+ color: rgba(0, 0, 0, 0.88);
+ background-color: #f5f5f5;
+
+ font-synthesis: none;
+ text-rendering: optimizeLegibility;
+ -webkit-font-smoothing: antialiased;
+ -moz-osx-font-smoothing: grayscale;
+}
+
+body {
+ margin: 0;
+ min-height: 100vh;
+}
+
+#root {
+ min-height: 100vh;
+}
+
+/* Custom scrollbar */
+::-webkit-scrollbar {
+ width: 8px;
+ height: 8px;
+}
+
+::-webkit-scrollbar-track {
+ background: #f1f1f1;
+ border-radius: 4px;
+}
+
+::-webkit-scrollbar-thumb {
+ background: #c1c1c1;
+ border-radius: 4px;
+}
+
+::-webkit-scrollbar-thumb:hover {
+ background: #a8a8a8;
+}
diff --git a/experiments/assessment/frontend/src/main.tsx b/experiments/assessment/frontend/src/main.tsx
new file mode 100644
index 0000000..db032b7
--- /dev/null
+++ b/experiments/assessment/frontend/src/main.tsx
@@ -0,0 +1,10 @@
+import { StrictMode } from 'react'
+import { createRoot } from 'react-dom/client'
+import './index.css'
+import App from './App'
+
+createRoot(document.getElementById('root')!).render(
+  <StrictMode>
+    <App />
+  </StrictMode>,
+)
diff --git a/experiments/assessment/frontend/src/services/api.ts b/experiments/assessment/frontend/src/services/api.ts
new file mode 100644
index 0000000..025bef1
--- /dev/null
+++ b/experiments/assessment/frontend/src/services/api.ts
@@ -0,0 +1,116 @@
+/**
+ * API client for the assessment backend.
+ */
+
+import type {
+ DimensionDefinitions,
+ QueryInfo,
+ QueryWithIdeas,
+ Rater,
+ RaterCreate,
+ RaterProgress,
+ Rating,
+ RatingSubmit,
+ SessionInfo,
+ Statistics,
+} from '../types';
+
+const API_BASE = '/api';
+
+async function fetchJson<T>(url: string, options?: RequestInit): Promise<T> {
+ const response = await fetch(`${API_BASE}${url}`, {
+ headers: {
+ 'Content-Type': 'application/json',
+ ...options?.headers,
+ },
+ ...options,
+ });
+
+ if (!response.ok) {
+ const error = await response.json().catch(() => ({ detail: response.statusText }));
+ throw new Error(error.detail || 'API request failed');
+ }
+
+ return response.json();
+}
+
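+// Each wrapper below instantiates fetchJson<T> with the endpoint's response type,
+// so call sites get typed results, e.g. `const raters: Rater[] = await listRaters();`.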
+// Rater API
+export async function listRaters(): Promise<Rater[]> {
+  return fetchJson<Rater[]>('/raters');
+}
+
+export async function createOrGetRater(data: RaterCreate): Promise<Rater> {
+  return fetchJson<Rater>('/raters', {
+    method: 'POST',
+    body: JSON.stringify(data),
+  });
+}
+
+export async function getRater(raterId: string): Promise<Rater> {
+  return fetchJson<Rater>(`/raters/${encodeURIComponent(raterId)}`);
+}
+
+// Query API
+export async function listQueries(): Promise<QueryInfo[]> {
+  return fetchJson<QueryInfo[]>('/queries');
+}
+
+export async function getQueryWithIdeas(queryId: string): Promise<QueryWithIdeas> {
+  return fetchJson<QueryWithIdeas>(`/queries/${encodeURIComponent(queryId)}`);
+}
+
+export async function getUnratedIdeas(queryId: string, raterId: string): Promise<QueryWithIdeas> {
+  return fetchJson<QueryWithIdeas>(
+    `/queries/${encodeURIComponent(queryId)}/unrated?rater_id=${encodeURIComponent(raterId)}`
+  );
+}
+
+// Rating API
+export async function submitRating(rating: RatingSubmit): Promise<{ saved: boolean }> {
+  return fetchJson<{ saved: boolean }>('/ratings', {
+    method: 'POST',
+    body: JSON.stringify(rating),
+  });
+}
+
+export async function getRating(raterId: string, ideaId: string): Promise<Rating | null> {
+  try {
+    return await fetchJson<Rating>(`/ratings/${encodeURIComponent(raterId)}/${encodeURIComponent(ideaId)}`);
+  } catch {
+    return null;
+  }
+}
+
+export async function getRatingsByRater(raterId: string): Promise<Rating[]> {
+  return fetchJson<Rating[]>(`/ratings/rater/${encodeURIComponent(raterId)}`);
+}
+
+// Progress API
+export async function getRaterProgress(raterId: string): Promise<RaterProgress> {
+  return fetchJson<RaterProgress>(`/progress/${encodeURIComponent(raterId)}`);
+}
+
+// Statistics API
+export async function getStatistics(): Promise<Statistics> {
+  return fetchJson<Statistics>('/statistics');
+}
+
+// Dimension definitions API
+export async function getDimensionDefinitions(): Promise<DimensionDefinitions> {
+  return fetchJson<DimensionDefinitions>('/dimensions');
+}
+
+// Session info API
+export async function getSessionInfo(): Promise<SessionInfo> {
+  return fetchJson<SessionInfo>('/info');
+}
+
+// Health check
+export async function healthCheck(): Promise<boolean> {
+  try {
+    await fetchJson<{ status: string }>('/health');
+    return true;
+  } catch {
+    return false;
+  }
+}
diff --git a/experiments/assessment/frontend/src/types/index.ts b/experiments/assessment/frontend/src/types/index.ts
new file mode 100644
index 0000000..d8f00d3
--- /dev/null
+++ b/experiments/assessment/frontend/src/types/index.ts
@@ -0,0 +1,142 @@
+/**
+ * TypeScript types for the assessment frontend.
+ */
+
+// Rater types
+export interface Rater {
+ rater_id: string;
+ name: string | null;
+ created_at?: string;
+}
+
+export interface RaterCreate {
+ rater_id: string;
+ name?: string;
+}
+
+// Query types
+export interface QueryInfo {
+ query_id: string;
+ query_text: string;
+ category: string;
+ idea_count: number;
+}
+
+export interface IdeaForRating {
+ idea_id: string;
+ text: string;
+ index: number;
+}
+
+export interface QueryWithIdeas {
+ query_id: string;
+ query_text: string;
+ category: string;
+ ideas: IdeaForRating[];
+ total_count: number;
+}
+
+// Rating types
+export interface RatingSubmit {
+ rater_id: string;
+ idea_id: string;
+ query_id: string;
+ originality: number | null;
+ elaboration: number | null;
+ coherence: number | null;
+ usefulness: number | null;
+ skipped: boolean;
+}
+
+export interface Rating {
+ id: number;
+ rater_id: string;
+ idea_id: string;
+ query_id: string;
+ originality: number | null;
+ elaboration: number | null;
+ coherence: number | null;
+ usefulness: number | null;
+ skipped: number;
+ timestamp: string | null;
+}
+
+// Progress types
+export interface QueryProgress {
+ rater_id: string;
+ query_id: string;
+ completed_count: number;
+ total_count: number;
+ started_at?: string;
+ updated_at?: string;
+}
+
+export interface RaterProgress {
+ rater_id: string;
+ queries: QueryProgress[];
+ total_completed: number;
+ total_ideas: number;
+ percentage: number;
+}
+
+// Statistics types
+export interface Statistics {
+ rater_count: number;
+ rating_count: number;
+ skip_count: number;
+ rated_ideas: number;
+}
+
+// Dimension definition types
+export interface DimensionScale {
+ 1: string;
+ 2: string;
+ 3: string;
+ 4: string;
+ 5: string;
+}
+
+export interface DimensionDefinition {
+ name: string;
+ question: string;
+ scale: DimensionScale;
+ low_label: string;
+ high_label: string;
+}
+
+export interface DimensionDefinitions {
+ originality: DimensionDefinition;
+ elaboration: DimensionDefinition;
+ coherence: DimensionDefinition;
+ usefulness: DimensionDefinition;
+}
+
+// Session info
+export interface SessionInfo {
+ experiment_id: string;
+ total_ideas: number;
+ query_count: number;
+ conditions: string[];
+ randomization_seed: number;
+}
+
+// UI State types
+export type AppView = 'login' | 'instructions' | 'assessment' | 'completion';
+
+export interface RatingState {
+ originality: number | null;
+ elaboration: number | null;
+ coherence: number | null;
+ usefulness: number | null;
+}
+
+export const EMPTY_RATING_STATE: RatingState = {
+ originality: null,
+ elaboration: null,
+ coherence: null,
+ usefulness: null,
+};
+
+export type DimensionKey = keyof RatingState;
+
+export const DIMENSION_KEYS: DimensionKey[] = ['originality', 'elaboration', 'coherence', 'usefulness'];
diff --git a/experiments/assessment/frontend/tsconfig.json b/experiments/assessment/frontend/tsconfig.json
new file mode 100644
index 0000000..109f0ac
--- /dev/null
+++ b/experiments/assessment/frontend/tsconfig.json
@@ -0,0 +1,20 @@
+{
+ "compilerOptions": {
+ "target": "ES2020",
+ "useDefineForClassFields": true,
+ "lib": ["ES2020", "DOM", "DOM.Iterable"],
+ "module": "ESNext",
+ "skipLibCheck": true,
+ "moduleResolution": "bundler",
+ "allowImportingTsExtensions": true,
+ "isolatedModules": true,
+ "moduleDetection": "force",
+ "noEmit": true,
+ "jsx": "react-jsx",
+ "strict": true,
+ "noUnusedLocals": true,
+ "noUnusedParameters": true,
+ "noFallthroughCasesInSwitch": true
+ },
+ "include": ["src"]
+}
diff --git a/experiments/assessment/frontend/vite.config.ts b/experiments/assessment/frontend/vite.config.ts
new file mode 100644
index 0000000..67307d7
--- /dev/null
+++ b/experiments/assessment/frontend/vite.config.ts
@@ -0,0 +1,16 @@
+import { defineConfig } from 'vite'
+import react from '@vitejs/plugin-react'
+
+export default defineConfig({
+ plugins: [react()],
+ server: {
+ host: '0.0.0.0',
+ port: 5174,
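+    // Forward /api requests to the FastAPI backend (start.sh runs it on port 8002)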
+ proxy: {
+ '/api': {
+ target: 'http://localhost:8002',
+ changeOrigin: true
+ }
+ }
+ },
+})
diff --git a/experiments/assessment/prepare_data.py b/experiments/assessment/prepare_data.py
new file mode 100755
index 0000000..9507782
--- /dev/null
+++ b/experiments/assessment/prepare_data.py
@@ -0,0 +1,375 @@
+#!/usr/bin/env python3
+"""
+Prepare assessment data from experiment results.
+
+Extracts unique ideas from deduped experiment results, assigns stable IDs,
+and randomizes the order within each query for unbiased human assessment.
+
+Usage:
+ python prepare_data.py # Use latest, all ideas
+ python prepare_data.py --sample 100 # Sample 100 ideas total
+ python prepare_data.py --per-query 10 # 10 ideas per query
+ python prepare_data.py --per-condition 5 # 5 ideas per condition per query
+ python prepare_data.py --list # List available files
+"""
+
+import argparse
+import json
+import random
+from pathlib import Path
+from typing import Any
+
+
+def load_experiment_data(filepath: Path) -> dict[str, Any]:
+ """Load experiment data from JSON file."""
+ with open(filepath, 'r', encoding='utf-8') as f:
+ return json.load(f)
+
+
+def sample_ideas_stratified(
+ ideas: list[dict[str, Any]],
+ per_condition: int | None = None,
+ total_limit: int | None = None,
+ rng: random.Random | None = None
+) -> list[dict[str, Any]]:
+ """
+ Sample ideas with stratification by condition.
+
+ Args:
+ ideas: List of ideas with _hidden.condition metadata
+ per_condition: Max ideas per condition (stratified sampling)
+ total_limit: Max total ideas (after stratified sampling)
+ rng: Random number generator for reproducibility
+
+ Returns:
+ Sampled list of ideas
+ """
+ if rng is None:
+ rng = random.Random()
+
+ if per_condition is None and total_limit is None:
+ return ideas
+
+ # Group by condition
+ by_condition: dict[str, list[dict[str, Any]]] = {}
+ for idea in ideas:
+ condition = idea['_hidden']['condition']
+ if condition not in by_condition:
+ by_condition[condition] = []
+ by_condition[condition].append(idea)
+
+ # Sample per condition
+ sampled = []
+ for condition, cond_ideas in by_condition.items():
+ rng.shuffle(cond_ideas)
+ if per_condition is not None:
+ cond_ideas = cond_ideas[:per_condition]
+ sampled.extend(cond_ideas)
+
+ # Apply total limit if specified
+ if total_limit is not None and len(sampled) > total_limit:
+ rng.shuffle(sampled)
+ sampled = sampled[:total_limit]
+
+ return sampled
+
+
+def extract_ideas_from_condition(
+ query_id: str,
+ condition_name: str,
+ condition_data: dict[str, Any],
+ idea_counter: dict[str, int]
+) -> list[dict[str, Any]]:
+ """Extract ideas from a single condition with hidden metadata."""
+ ideas = []
+
+ dedup_data = condition_data.get('dedup', {})
+ unique_ideas_with_source = dedup_data.get('unique_ideas_with_source', [])
+
+ for item in unique_ideas_with_source:
+ idea_text = item.get('idea', '')
+ if not idea_text:
+ continue
+
+ # Generate stable idea ID
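+        # e.g. the 8th idea extracted for query "Q01" gets idea_id "Q01_I007"
+        # ("Q01" is an illustrative query_id; real ids come from the experiment file)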
+ current_count = idea_counter.get(query_id, 0)
+ idea_id = f"{query_id}_I{current_count:03d}"
+ idea_counter[query_id] = current_count + 1
+
+ ideas.append({
+ 'idea_id': idea_id,
+ 'text': idea_text,
+ '_hidden': {
+ 'condition': condition_name,
+ 'expert_name': item.get('expert_name', ''),
+ 'keyword': item.get('keyword', '')
+ }
+ })
+
+ return ideas
+
+
+def prepare_assessment_data(
+ experiment_filepath: Path,
+ output_filepath: Path,
+ seed: int = 42,
+ sample_total: int | None = None,
+ per_query: int | None = None,
+ per_condition: int | None = None
+) -> dict[str, Any]:
+ """
+ Prepare assessment data from experiment results.
+
+ Args:
+ experiment_filepath: Path to deduped experiment JSON
+ output_filepath: Path to write assessment items JSON
+ seed: Random seed for reproducible shuffling
+ sample_total: Total number of ideas to sample (across all queries)
+ per_query: Maximum ideas per query
+ per_condition: Maximum ideas per condition per query (stratified)
+
+ Returns:
+ Assessment data structure
+ """
+ rng = random.Random(seed)
+
+ # Load experiment data
+ data = load_experiment_data(experiment_filepath)
+ experiment_id = data.get('experiment_id', 'unknown')
+ conditions = data.get('conditions', [])
+ results = data.get('results', [])
+
+ print(f"Loading experiment: {experiment_id}")
+ print(f"Conditions: {conditions}")
+ print(f"Number of queries: {len(results)}")
+
+ # Show sampling config
+ if sample_total or per_query or per_condition:
+ print(f"Sampling config: total={sample_total}, per_query={per_query}, per_condition={per_condition}")
+
+ assessment_queries = []
+ total_ideas = 0
+ idea_counter: dict[str, int] = {}
+
+ for result in results:
+ query_id = result.get('query_id', '')
+ query_text = result.get('query', '')
+ category = result.get('category', '')
+
+ query_ideas = []
+
+ # Extract ideas from all conditions
+ conditions_data = result.get('conditions', {})
+ for condition_name, condition_data in conditions_data.items():
+ ideas = extract_ideas_from_condition(
+ query_id, condition_name, condition_data, idea_counter
+ )
+ query_ideas.extend(ideas)
+
+ # Apply stratified sampling if per_condition is specified
+ if per_condition is not None:
+ query_ideas = sample_ideas_stratified(
+ query_ideas,
+ per_condition=per_condition,
+ rng=rng
+ )
+
+ # Apply per-query limit
+ if per_query is not None and len(query_ideas) > per_query:
+ rng.shuffle(query_ideas)
+ query_ideas = query_ideas[:per_query]
+
+ # Shuffle ideas within this query
+ rng.shuffle(query_ideas)
+
+ assessment_queries.append({
+ 'query_id': query_id,
+ 'query_text': query_text,
+ 'category': category,
+ 'ideas': query_ideas,
+ 'idea_count': len(query_ideas)
+ })
+
+ total_ideas += len(query_ideas)
+ print(f" Query '{query_text}' ({query_id}): {len(query_ideas)} ideas")
+
+ # Apply total sample limit across all queries (proportionally)
+ if sample_total is not None and total_ideas > sample_total:
+ print(f"\nApplying total sample limit: {sample_total} (from {total_ideas})")
+ # Calculate proportion to keep
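+        # e.g. sample_total=150 with 300 ideas gives keep_ratio=0.5, so each query
+        # keeps about half of its ideas (but never fewer than one)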
+ keep_ratio = sample_total / total_ideas
+ new_total = 0
+
+ for query in assessment_queries:
+ n_keep = max(1, int(len(query['ideas']) * keep_ratio))
+ rng.shuffle(query['ideas'])
+ query['ideas'] = query['ideas'][:n_keep]
+ query['idea_count'] = len(query['ideas'])
+ new_total += len(query['ideas'])
+
+ total_ideas = new_total
+
+ # Build output structure
+ assessment_data = {
+ 'experiment_id': experiment_id,
+ 'queries': assessment_queries,
+ 'total_ideas': total_ideas,
+ 'query_count': len(assessment_queries),
+ 'conditions': conditions,
+ 'randomization_seed': seed,
+ 'sampling': {
+ 'sample_total': sample_total,
+ 'per_query': per_query,
+ 'per_condition': per_condition
+ },
+ 'metadata': {
+ 'source_file': str(experiment_filepath.name),
+ 'prepared_for': 'human_assessment'
+ }
+ }
+
+ # Write output
+ output_filepath.parent.mkdir(parents=True, exist_ok=True)
+ with open(output_filepath, 'w', encoding='utf-8') as f:
+ json.dump(assessment_data, f, ensure_ascii=False, indent=2)
+
+ print(f"\nTotal ideas for assessment: {total_ideas}")
+ print(f"Output written to: {output_filepath}")
+
+ return assessment_data
+
+
+def list_experiment_files(results_dir: Path) -> list[Path]:
+ """List available deduped experiment files."""
+ return sorted(results_dir.glob('*_deduped.json'), key=lambda p: p.stat().st_mtime, reverse=True)
+
+
+def main():
+ """Main entry point."""
+ parser = argparse.ArgumentParser(
+ description='Prepare assessment data from experiment results.',
+ formatter_class=argparse.RawDescriptionHelpFormatter,
+ epilog="""
+Examples:
+ python prepare_data.py # Use latest, all ideas
+ python prepare_data.py --sample 100 # Sample 100 ideas total
+ python prepare_data.py --per-query 20 # Max 20 ideas per query
+ python prepare_data.py --per-condition 4 # 4 ideas per condition per query
+ python prepare_data.py --per-condition 4 --per-query 15 # Combined limits
+ python prepare_data.py --list # List available files
+
+Recommended for human assessment:
+ # 5 conditions × 4 ideas × 10 queries = 200 ideas (balanced)
+ python prepare_data.py --per-condition 4
+
+ # Or limit total to ~150 ideas
+ python prepare_data.py --sample 150
+ """
+ )
+ parser.add_argument(
+ 'experiment_file',
+ nargs='?',
+ default=None,
+ help='Experiment file name (e.g., experiment_20260119_165650_deduped.json)'
+ )
+ parser.add_argument(
+ '--list', '-l',
+ action='store_true',
+ help='List available experiment files'
+ )
+ parser.add_argument(
+ '--sample',
+ type=int,
+ default=None,
+ metavar='N',
+ help='Total number of ideas to sample (proportionally across queries)'
+ )
+ parser.add_argument(
+ '--per-query',
+ type=int,
+ default=None,
+ metavar='N',
+ help='Maximum ideas per query'
+ )
+ parser.add_argument(
+ '--per-condition',
+ type=int,
+ default=None,
+ metavar='N',
+ help='Maximum ideas per condition per query (stratified sampling)'
+ )
+ parser.add_argument(
+ '--seed', '-s',
+ type=int,
+ default=42,
+ help='Random seed for shuffling (default: 42)'
+ )
+ args = parser.parse_args()
+
+ # Paths
+ base_dir = Path(__file__).parent.parent
+ results_dir = base_dir / 'results'
+ output_file = Path(__file__).parent / 'data' / 'assessment_items.json'
+
+ # List available files
+ available_files = list_experiment_files(results_dir)
+
+ if args.list:
+ print("Available experiment files (most recent first):")
+ for f in available_files:
+ size_kb = f.stat().st_size / 1024
+ print(f" {f.name} ({size_kb:.1f} KB)")
+ return
+
+ # Determine which file to use
+ if args.experiment_file:
+ experiment_file = results_dir / args.experiment_file
+ if not experiment_file.exists():
+ # Try without .json extension
+ experiment_file = results_dir / f"{args.experiment_file}.json"
+ else:
+ # Use the latest deduped file
+ if not available_files:
+ print("Error: No deduped experiment files found in results directory.")
+ return
+ experiment_file = available_files[0]
+ print(f"Using latest experiment file: {experiment_file.name}")
+
+ if not experiment_file.exists():
+ print(f"Error: Experiment file not found: {experiment_file}")
+ print("\nAvailable files:")
+ for f in available_files:
+ print(f" {f.name}")
+ return
+
+ prepare_assessment_data(
+ experiment_file,
+ output_file,
+ seed=args.seed,
+ sample_total=args.sample,
+ per_query=args.per_query,
+ per_condition=args.per_condition
+ )
+
+ # Verify output
+ with open(output_file, 'r') as f:
+ data = json.load(f)
+
+ print("\n--- Verification ---")
+ print(f"Queries: {data['query_count']}")
+ print(f"Total ideas: {data['total_ideas']}")
+
+ # Show distribution by condition (from hidden metadata)
+ condition_counts: dict[str, int] = {}
+ for query in data['queries']:
+ for idea in query['ideas']:
+ condition = idea['_hidden']['condition']
+ condition_counts[condition] = condition_counts.get(condition, 0) + 1
+
+ print("\nIdeas per condition:")
+ for condition, count in sorted(condition_counts.items()):
+ print(f" {condition}: {count}")
+
+
+if __name__ == '__main__':
+ main()
diff --git a/experiments/assessment/results/ratings.db b/experiments/assessment/results/ratings.db
new file mode 100644
index 0000000..351ec10
Binary files /dev/null and b/experiments/assessment/results/ratings.db differ
diff --git a/experiments/assessment/start.sh b/experiments/assessment/start.sh
new file mode 100755
index 0000000..0254a2c
--- /dev/null
+++ b/experiments/assessment/start.sh
@@ -0,0 +1,101 @@
+#!/bin/bash
+
+# Human Assessment Web Interface Start Script
+# This script starts both the backend API and frontend dev server
+
+set -e
+
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+cd "$SCRIPT_DIR"
+
+# Colors for output
+RED='\033[0;31m'
+GREEN='\033[0;32m'
+YELLOW='\033[1;33m'
+NC='\033[0m' # No Color
+
+echo -e "${GREEN}================================${NC}"
+echo -e "${GREEN}Creative Idea Assessment System${NC}"
+echo -e "${GREEN}================================${NC}"
+echo
+
+# Find Python with FastAPI (use project venv or system)
+VENV_PYTHON="$SCRIPT_DIR/../../backend/venv/bin/python"
+if [ -x "$VENV_PYTHON" ]; then
+ PYTHON_CMD="$VENV_PYTHON"
+ UVICORN_CMD="$SCRIPT_DIR/../../backend/venv/bin/uvicorn"
+else
+ PYTHON_CMD="python3"
+ UVICORN_CMD="uvicorn"
+fi
+
+# Check if assessment data exists
+if [ ! -f "data/assessment_items.json" ]; then
+ echo -e "${YELLOW}Assessment data not found. Running prepare_data.py...${NC}"
+ $PYTHON_CMD prepare_data.py
+ echo
+fi
+
+# Check if node_modules exist in frontend
+if [ ! -d "frontend/node_modules" ]; then
+ echo -e "${YELLOW}Installing frontend dependencies...${NC}"
+ cd frontend
+ npm install
+ cd ..
+ echo
+fi
+
+# Function to cleanup background processes on exit
+cleanup() {
+ echo
+ echo -e "${YELLOW}Shutting down...${NC}"
+ kill $BACKEND_PID 2>/dev/null || true
+ kill $FRONTEND_PID 2>/dev/null || true
+ exit 0
+}
+
+trap cleanup SIGINT SIGTERM
+
+# Start backend
+echo -e "${GREEN}Starting backend API on port 8002...${NC}"
+cd backend
+$UVICORN_CMD app:app --host 0.0.0.0 --port 8002 --reload &
+BACKEND_PID=$!
+cd ..
+
+# Wait for backend to start
+echo "Waiting for backend to initialize..."
+sleep 2
+
+# Check if backend is running
+if ! curl -s http://localhost:8002/api/health > /dev/null 2>&1; then
+ echo -e "${RED}Backend failed to start. Check for errors above.${NC}"
+ kill $BACKEND_PID 2>/dev/null || true
+ exit 1
+fi
+echo -e "${GREEN}Backend is running.${NC}"
+echo
+
+# Start frontend
+echo -e "${GREEN}Starting frontend on port 5174...${NC}"
+cd frontend
+npm run dev &
+FRONTEND_PID=$!
+cd ..
+
+# Wait for frontend to start
+sleep 3
+
+echo
+echo -e "${GREEN}================================${NC}"
+echo -e "${GREEN}Assessment system is running!${NC}"
+echo -e "${GREEN}================================${NC}"
+echo
+echo -e "Backend API: ${YELLOW}http://localhost:8002${NC}"
+echo -e "Frontend UI: ${YELLOW}http://localhost:5174${NC}"
+echo
+echo -e "Press Ctrl+C to stop all services"
+echo
+
+# Wait for any process to exit
+wait
diff --git a/experiments/assessment/stop.sh b/experiments/assessment/stop.sh
new file mode 100755
index 0000000..4c914aa
--- /dev/null
+++ b/experiments/assessment/stop.sh
@@ -0,0 +1,13 @@
+#!/bin/bash
+
+# Stop the assessment system
+
+echo "Stopping assessment system..."
+
+# Kill backend (uvicorn on port 8002)
+pkill -f "uvicorn app:app.*8002" 2>/dev/null && echo "Backend stopped" || echo "Backend not running"
+
+# Kill frontend (vite on port 5174)
+pkill -f "vite.*5174" 2>/dev/null && echo "Frontend stopped" || echo "Frontend not running"
+
+echo "Done"
diff --git a/experiments/aut_flexibility_analysis.py b/experiments/aut_flexibility_analysis.py
new file mode 100755
index 0000000..3b20f51
--- /dev/null
+++ b/experiments/aut_flexibility_analysis.py
@@ -0,0 +1,1342 @@
+#!/usr/bin/env python3
+"""
+AUT Flexibility Analysis for Creative Ideas
+
+Implements creativity evaluation metrics based on the Alternative Uses Task (AUT) framework:
+
+1. Lexical Diversity - Type-token ratio, vocabulary richness
+2. Concept Extraction - Key concepts and domain coverage
+3. Embedding Visualization - t-SNE/PCA scatter plots by condition
+4. Novelty Scores - Distance from global centroid (semantic novelty)
+5. Cross-condition Cohesion - Nearest neighbor overlap analysis
+6. AUT Flexibility Analysis - Category-based divergent thinking metrics
+ - LLM-based flexibility: Two-phase category generation (Hadas & Hershkovitz 2024)
+ - Embedding-based flexibility: Hierarchical clustering (arXiv:2405.00899)
+ - Jump signal: Category switch ratio in sequential generation
+
+References:
+- Hadas & Hershkovitz (2024). "Using LLMs to Evaluate AUT Flexibility Score"
+- arXiv:2405.00899 - "Characterising Creative Process in Humans and LLMs"
+- Torrance (1974). Torrance Tests of Creative Thinking
+
+Usage:
+ python aut_flexibility_analysis.py # Analyze latest experiment
+ python aut_flexibility_analysis.py experiment_xxx_deduped.json # Specific file
+ python aut_flexibility_analysis.py --skip-viz # Skip visualization (faster)
+"""
+
+import argparse
+import asyncio
+import json
+import re
+import math
+from collections import Counter, defaultdict
+from datetime import datetime, timezone
+from pathlib import Path
+from typing import Any
+
+import numpy as np
+
+# Optional imports with fallbacks
+try:
+ from sklearn.manifold import TSNE
+ from sklearn.decomposition import PCA
+ HAS_SKLEARN = True
+except ImportError:
+ HAS_SKLEARN = False
+ print("Warning: sklearn not available, visualization will be limited")
+
+try:
+ from scipy.cluster.hierarchy import linkage, fcluster
+ from scipy.spatial.distance import pdist, squareform
+ HAS_SCIPY = True
+except ImportError:
+ HAS_SCIPY = False
+ print("Warning: scipy not available, hierarchical clustering will be limited")
+
+try:
+    import matplotlib
+    matplotlib.use('Agg')  # Select the non-interactive backend before pyplot is imported
+    import matplotlib.pyplot as plt
+ HAS_MATPLOTLIB = True
+except ImportError:
+ HAS_MATPLOTLIB = False
+ print("Warning: matplotlib not available, no plots will be generated")
+
+try:
+ import httpx
+ HAS_HTTPX = True
+except ImportError:
+ HAS_HTTPX = False
+ print("Warning: httpx not available, will use cached embeddings only")
+
+
+# ============================================================================
+# Configuration
+# ============================================================================
+
+RESULTS_DIR = Path(__file__).parent / 'results'
+OLLAMA_BASE_URL = "http://localhost:11435"
+EMBEDDING_MODEL = "qwen3-embedding:4b"
+LLM_MODEL = "qwen3:8b" # Model for flexibility category generation
+
+
+# ============================================================================
+# 1. Lexical Diversity Analysis
+# ============================================================================
+
+def tokenize(text: str) -> list[str]:
+ """Simple word tokenization."""
+ # Lowercase and extract words
+ words = re.findall(r'\b[a-zA-Z]+\b', text.lower())
+ return words
+
+
+def calculate_lexical_diversity(text: str) -> dict[str, Any]:
+ """
+ Calculate lexical diversity metrics for a text.
+
+ Returns:
+ - type_token_ratio: unique words / total words
+ - vocabulary_size: number of unique words
+ - total_words: total word count
+ - avg_word_length: average word length
+ - hapax_ratio: words appearing only once / total unique words
+ """
+ words = tokenize(text)
+
+ if not words:
+ return {
+ 'type_token_ratio': 0,
+ 'vocabulary_size': 0,
+ 'total_words': 0,
+ 'avg_word_length': 0,
+ 'hapax_ratio': 0
+ }
+
+ word_counts = Counter(words)
+ unique_words = set(words)
+ hapax = sum(1 for w, c in word_counts.items() if c == 1)
+
+ return {
+ 'type_token_ratio': len(unique_words) / len(words),
+ 'vocabulary_size': len(unique_words),
+ 'total_words': len(words),
+ 'avg_word_length': sum(len(w) for w in words) / len(words),
+ 'hapax_ratio': hapax / len(unique_words) if unique_words else 0
+ }
+
+
+def analyze_lexical_diversity_by_condition(ideas_by_condition: dict[str, list[str]]) -> dict[str, Any]:
+ """Analyze lexical diversity for each condition."""
+ results = {}
+
+ for condition, ideas in ideas_by_condition.items():
+ # Concatenate all ideas for overall metrics
+ all_text = ' '.join(ideas)
+ overall = calculate_lexical_diversity(all_text)
+
+ # Per-idea metrics
+ per_idea_metrics = [calculate_lexical_diversity(idea) for idea in ideas]
+
+ results[condition] = {
+ 'overall': overall,
+ 'per_idea_mean': {
+ 'type_token_ratio': np.mean([m['type_token_ratio'] for m in per_idea_metrics]),
+ 'vocabulary_size': np.mean([m['vocabulary_size'] for m in per_idea_metrics]),
+ 'total_words': np.mean([m['total_words'] for m in per_idea_metrics]),
+ 'avg_word_length': np.mean([m['avg_word_length'] for m in per_idea_metrics]),
+ },
+ 'idea_count': len(ideas)
+ }
+
+ return results
+
+
+# ============================================================================
+# 2. Concept Extraction
+# ============================================================================
+
+# Common English stopwords
+STOPWORDS = {
+ 'a', 'an', 'the', 'and', 'or', 'but', 'in', 'on', 'at', 'to', 'for', 'of',
+ 'with', 'by', 'from', 'as', 'is', 'was', 'are', 'were', 'been', 'be', 'have',
+ 'has', 'had', 'do', 'does', 'did', 'will', 'would', 'could', 'should', 'may',
+ 'might', 'must', 'shall', 'can', 'need', 'dare', 'ought', 'used', 'that',
+ 'which', 'who', 'whom', 'this', 'these', 'those', 'it', 'its', 'they', 'them',
+ 'their', 'we', 'us', 'our', 'you', 'your', 'i', 'me', 'my', 'he', 'him', 'his',
+ 'she', 'her', 'not', 'no', 'nor', 'so', 'than', 'too', 'very', 'just', 'also',
+ 'only', 'own', 'same', 'into', 'over', 'such', 'through', 'during', 'before',
+ 'after', 'above', 'below', 'between', 'under', 'again', 'further', 'then',
+ 'once', 'here', 'there', 'when', 'where', 'why', 'how', 'all', 'each', 'few',
+ 'more', 'most', 'other', 'some', 'any', 'both', 'being', 'about', 'against',
+ 'while', 'using', 'based', 'allows', 'features', 'includes', 'provides'
+}
+
+# Domain keywords for classification
+DOMAIN_KEYWORDS = {
+ 'technology': {'smart', 'digital', 'ai', 'sensor', 'app', 'software', 'algorithm',
+ 'wireless', 'bluetooth', 'iot', 'data', 'automated', 'electronic'},
+ 'sustainability': {'eco', 'green', 'sustainable', 'renewable', 'solar', 'recycled',
+ 'biodegradable', 'energy', 'environmental', 'carbon', 'organic'},
+ 'health': {'health', 'medical', 'therapy', 'wellness', 'ergonomic', 'posture',
+ 'fitness', 'therapeutic', 'rehabilitation', 'mental', 'physical'},
+ 'social': {'community', 'social', 'sharing', 'collaborative', 'inclusive',
+ 'accessible', 'elderly', 'children', 'family', 'public'},
+ 'design': {'modular', 'customizable', 'aesthetic', 'minimalist', 'portable',
+ 'foldable', 'compact', 'lightweight', 'adjustable', 'convertible'},
+ 'materials': {'material', 'fabric', 'wood', 'metal', 'plastic', 'carbon',
+ 'fiber', 'composite', 'bamboo', 'leather', 'textile'}
+}
+
+
+def extract_concepts(text: str) -> dict[str, Any]:
+ """
+ Extract key concepts from text.
+
+ Returns:
+ - keywords: list of significant words (non-stopwords)
+ - bigrams: common two-word phrases
+ - domains: detected domain categories
+ """
+ words = tokenize(text)
+
+ # Filter stopwords and short words
+ keywords = [w for w in words if w not in STOPWORDS and len(w) > 2]
+
+ # Extract bigrams
+ bigrams = []
+ for i in range(len(words) - 1):
+ if words[i] not in STOPWORDS and words[i+1] not in STOPWORDS:
+ bigrams.append(f"{words[i]} {words[i+1]}")
+
+    # Detect domains (match whole words, not substrings, so e.g. 'ai' does not match 'chair')
+    word_set = set(words)
+    detected_domains = []
+    for domain, domain_words in DOMAIN_KEYWORDS.items():
+        if word_set & domain_words:
+            detected_domains.append(domain)
+
+ return {
+ 'keywords': keywords,
+ 'bigrams': bigrams,
+ 'domains': detected_domains
+ }
+
+
+def analyze_concepts_by_condition(ideas_by_condition: dict[str, list[str]]) -> dict[str, Any]:
+ """Analyze concept extraction for each condition."""
+ results = {}
+
+ for condition, ideas in ideas_by_condition.items():
+ all_keywords = []
+ all_bigrams = []
+ domain_counts = Counter()
+
+ for idea in ideas:
+ concepts = extract_concepts(idea)
+ all_keywords.extend(concepts['keywords'])
+ all_bigrams.extend(concepts['bigrams'])
+ for domain in concepts['domains']:
+ domain_counts[domain] += 1
+
+ keyword_counts = Counter(all_keywords)
+ bigram_counts = Counter(all_bigrams)
+
+ results[condition] = {
+ 'unique_keywords': len(set(all_keywords)),
+ 'total_keywords': len(all_keywords),
+ 'top_keywords': keyword_counts.most_common(20),
+ 'top_bigrams': bigram_counts.most_common(10),
+ 'domain_distribution': dict(domain_counts),
+ 'domain_coverage': len(domain_counts),
+ 'idea_count': len(ideas)
+ }
+
+ return results
+
+
+# ============================================================================
+# 3. Embedding-based Analysis (Visualization, Novelty, Overlap)
+# ============================================================================
+
+async def get_embeddings_from_ollama(texts: list[str], batch_size: int = 50) -> list[list[float]] | None:
+ """Get embeddings from Ollama API."""
+ if not HAS_HTTPX:
+ return None
+
+ embeddings = []
+
+ async with httpx.AsyncClient(timeout=120.0) as client:
+ for i in range(0, len(texts), batch_size):
+ batch = texts[i:i+batch_size]
+ try:
+ response = await client.post(
+ f"{OLLAMA_BASE_URL}/api/embed",
+ json={"model": EMBEDDING_MODEL, "input": batch}
+ )
+ response.raise_for_status()
+ result = response.json()
+ embeddings.extend(result["embeddings"])
+ print(f" Embedded {len(embeddings)}/{len(texts)} ideas...")
+ except Exception as e:
+ print(f" Embedding error: {e}")
+ return None
+
+ return embeddings
+
+
+def load_cached_embeddings(experiment_id: str) -> dict[str, list[float]] | None:
+ """Try to load embeddings from metrics file."""
+ metrics_file = RESULTS_DIR / f"experiment_{experiment_id}_metrics.json"
+ if not metrics_file.exists():
+ return None
+
+ # The metrics file doesn't store raw embeddings, so we can't load them
+ return None
+
+
+def compute_centroid(embeddings: np.ndarray) -> np.ndarray:
+ """Compute centroid of embeddings."""
+ return np.mean(embeddings, axis=0)
+
+
+def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
+ """Compute cosine distance between two vectors."""
+ dot = np.dot(a, b)
+ norm_a = np.linalg.norm(a)
+ norm_b = np.linalg.norm(b)
+ if norm_a == 0 or norm_b == 0:
+ return 1.0
+ return 1 - dot / (norm_a * norm_b)
+
+
+def analyze_embeddings(
+ ideas_by_condition: dict[str, list[str]],
+ embeddings_by_condition: dict[str, np.ndarray],
+ output_dir: Path,
+ skip_viz: bool = False
+) -> dict[str, Any]:
+ """
+ Analyze embeddings for visualization, novelty, and overlap.
+ """
+ results = {
+ 'novelty_scores': {},
+ 'cross_condition_overlap': {},
+ 'centroid_distances': {}
+ }
+
+ # Compute centroids for each condition
+ centroids = {}
+ for condition, embeddings in embeddings_by_condition.items():
+ centroids[condition] = compute_centroid(embeddings)
+
+ # Global centroid (all ideas)
+ all_embeddings = np.vstack(list(embeddings_by_condition.values()))
+ global_centroid = compute_centroid(all_embeddings)
+
+ # 4. Perplexity-based Novelty (approximated as distance from global centroid)
+ print("Computing novelty scores...")
+ for condition, embeddings in embeddings_by_condition.items():
+ distances = [cosine_distance(emb, global_centroid) for emb in embeddings]
+ results['novelty_scores'][condition] = {
+ 'mean': float(np.mean(distances)),
+ 'std': float(np.std(distances)),
+ 'min': float(np.min(distances)),
+ 'max': float(np.max(distances))
+ }
+
+ # 5. Cross-condition Overlap
+ print("Computing cross-condition overlap...")
+ conditions = list(embeddings_by_condition.keys())
+
+ # Centroid distances between conditions
+ for i, c1 in enumerate(conditions):
+ for c2 in conditions[i+1:]:
+ dist = cosine_distance(centroids[c1], centroids[c2])
+ results['centroid_distances'][f"{c1}_vs_{c2}"] = float(dist)
+
+ # Overlap analysis: for each idea, find if nearest neighbor is same or different condition
+ print("Computing nearest neighbor overlap...")
+ overlap_stats = defaultdict(lambda: {'same_condition': 0, 'diff_condition': 0})
+
+ # Build flat arrays with condition labels
+ all_emb_list = []
+ all_labels = []
+ for condition, embeddings in embeddings_by_condition.items():
+ for emb in embeddings:
+ all_emb_list.append(emb)
+ all_labels.append(condition)
+
+ all_emb_array = np.array(all_emb_list)
+
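+    # Brute-force O(n^2) nearest-neighbour scan; acceptable for the few hundred
+    # ideas a typical experiment produces.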
+ for i, (emb, label) in enumerate(zip(all_emb_array, all_labels)):
+ # Find nearest neighbor (excluding self)
+ distances = np.array([cosine_distance(emb, other) for other in all_emb_array])
+ distances[i] = float('inf') # Exclude self
+ nearest_idx = np.argmin(distances)
+ nearest_label = all_labels[nearest_idx]
+
+ if nearest_label == label:
+ overlap_stats[label]['same_condition'] += 1
+ else:
+ overlap_stats[label]['diff_condition'] += 1
+
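+    # cohesion_ratio near 1.0 means a condition's ideas form their own semantic
+    # cluster; near 0.0 means they interleave with ideas from other conditions.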
+ for condition in conditions:
+ total = overlap_stats[condition]['same_condition'] + overlap_stats[condition]['diff_condition']
+ results['cross_condition_overlap'][condition] = {
+ 'same_condition_nn': overlap_stats[condition]['same_condition'],
+ 'diff_condition_nn': overlap_stats[condition]['diff_condition'],
+ 'cohesion_ratio': overlap_stats[condition]['same_condition'] / total if total > 0 else 0
+ }
+
+ # 3. Embedding Visualization
+ if not skip_viz and HAS_SKLEARN and HAS_MATPLOTLIB:
+ print("Generating visualizations...")
+ generate_visualizations(embeddings_by_condition, output_dir)
+
+ return results
+
+
+def generate_visualizations(
+ embeddings_by_condition: dict[str, np.ndarray],
+ output_dir: Path
+):
+ """Generate t-SNE and PCA visualizations."""
+
+ # Prepare data
+ all_embeddings = []
+ all_labels = []
+ for condition, embeddings in embeddings_by_condition.items():
+ all_embeddings.extend(embeddings)
+ all_labels.extend([condition] * len(embeddings))
+
+ all_embeddings = np.array(all_embeddings)
+
+ # Color map for conditions
+ conditions = list(embeddings_by_condition.keys())
+ colors = plt.cm.tab10(np.linspace(0, 1, len(conditions)))
+ color_map = {c: colors[i] for i, c in enumerate(conditions)}
+ point_colors = [color_map[label] for label in all_labels]
+
+ # PCA visualization
+ print(" Running PCA...")
+ pca = PCA(n_components=2, random_state=42)
+ pca_result = pca.fit_transform(all_embeddings)
+
+ plt.figure(figsize=(12, 8))
+ for condition in conditions:
+ mask = [l == condition for l in all_labels]
+ plt.scatter(
+ pca_result[mask, 0],
+ pca_result[mask, 1],
+ c=[color_map[condition]],
+ label=condition,
+ alpha=0.6,
+ s=30
+ )
+ plt.xlabel(f'PC1 ({pca.explained_variance_ratio_[0]:.1%} variance)')
+ plt.ylabel(f'PC2 ({pca.explained_variance_ratio_[1]:.1%} variance)')
+ plt.title('Ideas by Condition (PCA)')
+ plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left')
+ plt.tight_layout()
+ plt.savefig(output_dir / 'embedding_pca.png', dpi=150)
+ plt.close()
+
+ # t-SNE visualization
+ print(" Running t-SNE...")
+ tsne = TSNE(n_components=2, random_state=42, perplexity=min(30, len(all_embeddings)-1))
+ tsne_result = tsne.fit_transform(all_embeddings)
+
+ plt.figure(figsize=(12, 8))
+ for condition in conditions:
+ mask = [l == condition for l in all_labels]
+ plt.scatter(
+ tsne_result[mask, 0],
+ tsne_result[mask, 1],
+ c=[color_map[condition]],
+ label=condition,
+ alpha=0.6,
+ s=30
+ )
+ plt.xlabel('t-SNE 1')
+ plt.ylabel('t-SNE 2')
+ plt.title('Ideas by Condition (t-SNE)')
+ plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left')
+ plt.tight_layout()
+ plt.savefig(output_dir / 'embedding_tsne.png', dpi=150)
+ plt.close()
+
+ print(f" Saved visualizations to {output_dir}")
+
+
+# ============================================================================
+# 6. AUT Flexibility Analysis (Category-based Divergent Thinking)
+# ============================================================================
+
+async def call_llm(prompt: str, model: str = LLM_MODEL) -> str | None:
+ """Call Ollama LLM for text generation."""
+ if not HAS_HTTPX:
+ return None
+
+ async with httpx.AsyncClient(timeout=120.0) as client:
+ try:
+ response = await client.post(
+ f"{OLLAMA_BASE_URL}/api/generate",
+ json={
+ "model": model,
+ "prompt": prompt,
+ "stream": False,
+ "options": {"temperature": 0.3} # Lower temperature for consistency
+ }
+ )
+ response.raise_for_status()
+ result = response.json()
+ return result.get("response", "")
+ except Exception as e:
+ print(f" LLM call error: {e}")
+ return None
+
+
+async def compute_flexibility_llm(
+ ideas: list[str],
+ query: str = "bicycle"
+) -> dict[str, Any]:
+ """
+ Compute flexibility score using LLM-based category generation.
+
+ Two-phase approach (Hadas & Hershkovitz 2024):
+ 1. Generate semantic categories from all ideas
+ 2. Classify each idea into a category
+ 3. Flexibility = number of unique categories used
+
+ Returns:
+ - categories: list of generated categories
+ - assignments: mapping of idea index to category
+ - flexibility_score: count of unique categories
+ """
+ # Phase 1: Generate categories
+ ideas_text = "\n".join(f"{i+1}. {idea}" for i, idea in enumerate(ideas))
+
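+    # "/no_think" is a Qwen3 soft switch that disables the model's thinking phase,
+    # keeping the reply to just the requested JSON.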
+ prompt1 = f"""/no_think
+You are analyzing creative ideas for alternative uses of a {query}.
+
+Examine these ideas and determine the distinct SEMANTIC CATEGORIES they fall into.
+Categories should represent fundamentally different ways of thinking about using the object.
+
+Ideas:
+{ideas_text}
+
+Output ONLY a JSON array of category names (5-15 categories typically).
+Example: ["Transportation", "Art/Decoration", "Tool/Equipment", "Recreation", "Storage"]
+
+JSON array:"""
+
+ response1 = await call_llm(prompt1)
+ if not response1:
+ return {"error": "LLM call failed for category generation"}
+
+ # Parse categories from response
+ try:
+ # Try to extract JSON array from response
+ match = re.search(r'\[.*?\]', response1, re.DOTALL)
+ if match:
+ categories = json.loads(match.group())
+ else:
+ # Fallback: split by newlines or commas
+ categories = [c.strip().strip('"\'') for c in response1.split('\n') if c.strip()]
+ categories = [c for c in categories if c and not c.startswith('[')]
+ except json.JSONDecodeError:
+ categories = [c.strip().strip('"\'') for c in response1.split(',') if c.strip()]
+
+ if not categories:
+ return {"error": "Failed to parse categories", "raw_response": response1}
+
+ # Phase 2: Classify each idea
+ categories_text = ", ".join(f'"{c}"' for c in categories)
+
+ prompt2 = f"""/no_think
+Classify each idea into exactly ONE of these categories: [{categories_text}]
+
+Ideas:
+{ideas_text}
+
+Output a JSON object mapping idea number (as string) to category name.
+Example: {{"1": "Transportation", "2": "Art/Decoration", "3": "Tool/Equipment"}}
+
+JSON object:"""
+
+ response2 = await call_llm(prompt2)
+ if not response2:
+ return {"error": "LLM call failed for classification", "categories": categories}
+
+ # Parse assignments
+ try:
+ match = re.search(r'\{.*?\}', response2, re.DOTALL)
+ if match:
+ assignments = json.loads(match.group())
+ else:
+ assignments = {}
+ except json.JSONDecodeError:
+ assignments = {}
+
+ # Calculate flexibility
+ used_categories = set(assignments.values())
+ flexibility_score = len(used_categories)
+
+ # Category distribution
+ category_counts = Counter(assignments.values())
+
+ return {
+ "categories": categories,
+ "assignments": assignments,
+ "flexibility_score": flexibility_score,
+ "category_distribution": dict(category_counts),
+ "total_ideas_classified": len(assignments)
+ }
+
+
+def compute_flexibility_embedding(
+ embeddings: np.ndarray,
+ ideas: list[str],
+ distance_threshold: float = 0.5
+) -> dict[str, Any]:
+ """
+ Compute flexibility score using embedding-based hierarchical clustering.
+
+ Method from arXiv:2405.00899:
+ 1. Encode ideas as embeddings
+ 2. Hierarchical clustering with average linkage
+ 3. Cut tree at distance threshold (higher threshold = fewer clusters)
+
+ Args:
+ embeddings: numpy array of shape (n_ideas, embedding_dim)
+ ideas: list of idea texts for reference
+ distance_threshold: cosine distance threshold for cutting dendrogram
+ (0.5 = cut when similarity drops below 0.5)
+
+ Returns:
+ - cluster_assignments: list of cluster IDs
+ - flexibility_score: number of clusters
+ - cluster_sizes: distribution of cluster sizes
+ - mean_pairwise_similarity: average similarity within condition
+ """
+ if not HAS_SCIPY:
+ return {"error": "scipy not available for hierarchical clustering"}
+
+ n_ideas = len(embeddings)
+ if n_ideas < 2:
+ return {
+ "cluster_assignments": [0] * n_ideas,
+ "flexibility_score": 1,
+ "cluster_sizes": {0: n_ideas},
+ "mean_pairwise_similarity": 1.0
+ }
+
+ # Normalize embeddings for cosine similarity
+ norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
+ norms[norms == 0] = 1 # Avoid division by zero
+ normalized = embeddings / norms
+
+ # Compute pairwise cosine distances
+ distances = pdist(normalized, metric='cosine')
+
+ # Calculate mean pairwise similarity for reporting
+ mean_pairwise_sim = 1 - np.mean(distances)
+
+    # Hierarchical clustering with average linkage (a middle ground between
+    # single-linkage chaining and complete-linkage compactness)
+ Z = linkage(distances, method='average')
+
+    # Cut at the distance threshold: merging stops once the average linkage
+    # distance exceeds it, so clusters are joined at cophenetic distance
+    # below the threshold (individual pairs may still exceed it)
+ clusters = fcluster(Z, distance_threshold, criterion='distance')
+
+ n_clusters = len(set(clusters))
+ cluster_sizes = Counter(clusters)
+
+ # Convert numpy keys to Python ints for JSON serialization
+ cluster_sizes_dict = {int(k): int(v) for k, v in cluster_sizes.items()}
+
+ # Calculate mean intra-cluster similarity
+ total_sim = 0
+ total_pairs = 0
+ for c in set(clusters):
+ mask = clusters == c
+ cluster_points = normalized[mask]
+ if len(cluster_points) > 1:
+ for i in range(len(cluster_points)):
+ for j in range(i + 1, len(cluster_points)):
+ sim = np.dot(cluster_points[i], cluster_points[j])
+ total_sim += sim
+ total_pairs += 1
+
+ mean_intra_sim = total_sim / total_pairs if total_pairs > 0 else None
+
+ return {
+ "cluster_assignments": [int(c) for c in clusters],
+ "flexibility_score": int(n_clusters),
+ "cluster_sizes": cluster_sizes_dict,
+ "mean_pairwise_similarity": float(mean_pairwise_sim),
+        "mean_intra_cluster_similarity": float(mean_intra_sim) if mean_intra_sim is not None else None
+ }
+
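+# Minimal sketch of the threshold's effect (hypothetical data): two
+# near-duplicate ideas at cosine distance 0.1 merge into one cluster, while
+# a third idea at distance 0.8 from both stays separate, so three ideas
+# yield flexibility_score = 2 at the default threshold of 0.5.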
+
+def compute_jump_signal(
+ cluster_assignments: list[int],
+ embeddings: np.ndarray | None = None,
+ similarity_threshold: float = 0.7
+) -> dict[str, Any]:
+ """
+ Compute jump signal - measures category switches in sequential idea generation.
+
+ Enhanced method from arXiv:2405.00899:
+ - Combined jump signal: jump = jumpcat ∧ jumpSS (logical AND)
+ - A "true" jump requires BOTH category change AND semantic dissimilarity
+
+ This reduces false positives where switching words within same concept space
+ would incorrectly count as a jump.
+
+ Args:
+ cluster_assignments: list of cluster IDs for each idea (in generation order)
+ embeddings: optional, for computing semantic-similarity-based jumps
+ similarity_threshold: threshold for semantic similarity jump detection (default 0.7)
+
+ Returns:
+ - category_jump_count: number of category switches (jumpcat)
+ - semantic_jump_count: number of semantic dissimilarity jumps (jumpSS)
+ - combined_jump_count: jumps where BOTH conditions are true
+ - combined_jump_ratio: proportion of combined jumps (paper metric)
+ - jump_positions: indices where combined jumps occur
+ """
+ if len(cluster_assignments) < 2:
+ return {
+ "category_jump_count": 0,
+ "semantic_jump_count": 0,
+ "combined_jump_count": 0,
+ "combined_jump_ratio": 0.0,
+ "category_jump_positions": [],
+ "semantic_jump_positions": [],
+ "combined_jump_positions": [],
+ "total_transitions": 0,
+ # Legacy fields for backward compatibility
+ "jump_count": 0,
+ "jump_ratio": 0.0,
+ "jump_positions": []
+ }
+
+ category_jumps = []
+ semantic_jumps = []
+ combined_jumps = []
+
+ for i in range(1, len(cluster_assignments)):
+ # Category-based jump (jumpcat)
+ is_category_jump = cluster_assignments[i] != cluster_assignments[i-1]
+ if is_category_jump:
+ category_jumps.append(i)
+
+ # Semantic similarity-based jump (jumpSS)
+ is_semantic_jump = False
+ if embeddings is not None:
+ sim = np.dot(embeddings[i], embeddings[i-1]) / (
+ np.linalg.norm(embeddings[i]) * np.linalg.norm(embeddings[i-1]) + 1e-10
+ )
+ is_semantic_jump = sim < similarity_threshold
+ if is_semantic_jump:
+ semantic_jumps.append(i)
+
+ # Combined jump: both must be true (paper method)
+ if is_category_jump and (is_semantic_jump if embeddings is not None else True):
+ combined_jumps.append(i)
+
+ total_transitions = len(cluster_assignments) - 1
+
+ result = {
+ "category_jump_count": len(category_jumps),
+ "semantic_jump_count": len(semantic_jumps) if embeddings is not None else 0,
+ "combined_jump_count": len(combined_jumps),
+ "combined_jump_ratio": len(combined_jumps) / total_transitions if total_transitions > 0 else 0.0,
+ "category_jump_ratio": len(category_jumps) / total_transitions if total_transitions > 0 else 0.0,
+ "semantic_jump_ratio": len(semantic_jumps) / total_transitions if total_transitions > 0 and embeddings is not None else 0.0,
+ "category_jump_positions": category_jumps,
+ "semantic_jump_positions": semantic_jumps if embeddings is not None else [],
+ "combined_jump_positions": combined_jumps,
+ "total_transitions": total_transitions,
+ # Legacy fields for backward compatibility
+ "jump_count": len(combined_jumps), # Now uses combined
+ "jump_ratio": len(combined_jumps) / total_transitions if total_transitions > 0 else 0.0,
+ "jump_positions": combined_jumps
+ }
+
+ return result
+
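+# Worked example (hypothetical values): for cluster assignments [1, 1, 2, 2, 3]
+# where only the final transition's embedding similarity drops below the
+# threshold, jumpcat fires at positions 2 and 4, jumpSS only at position 4,
+# so combined_jump_count is 1 and combined_jump_ratio is 1/4 = 0.25.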
+
+def classify_flexibility_profile(jump_count: int, idea_count: int) -> str:
+ """
+ Classify creativity style into Persistent/Flexible/Mixed based on jump count.
+
+ Based on arXiv:2405.00899 findings:
+ - Persistent: Deep exploration within categories (low jump ratio)
+ - Flexible: Broad exploration across categories (high jump ratio)
+ - Mixed: Intermediate pattern
+
+    Thresholds applied to the jump ratio (jumps / transitions):
+ - Persistent: jump_ratio < 0.30
+ - Flexible: jump_ratio > 0.45
+ - Mixed: 0.30 <= jump_ratio <= 0.45
+
+ Args:
+ jump_count: Number of category jumps
+ idea_count: Total number of ideas
+
+ Returns:
+ Profile name: "Persistent", "Flexible", "Mixed", or "Undefined"
+ """
+ if idea_count <= 1:
+ return "Undefined"
+
+ jump_ratio = jump_count / (idea_count - 1)
+
+ if jump_ratio < 0.30:
+ return "Persistent"
+ elif jump_ratio > 0.45:
+ return "Flexible"
+ else:
+ return "Mixed"
+
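+# Worked example: 10 ideas give 9 transitions, so 2 combined jumps yield a
+# ratio of 2/9 ≈ 0.22 ("Persistent"), 4 jumps give 4/9 ≈ 0.44 ("Mixed"),
+# and 5 jumps give 5/9 ≈ 0.56 ("Flexible").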
+
+def compute_cumulative_jump_profile(
+ jump_positions: list[int],
+ total_ideas: int
+) -> list[int]:
+ """
+ Compute cumulative jump count at each response position.
+
+ This visualization shows exploration patterns over the generation sequence,
+ revealing whether participants explore steadily or in bursts.
+
+ Args:
+ jump_positions: Indices where jumps occurred (1-indexed)
+ total_ideas: Total number of ideas generated
+
+ Returns:
+        List where index i holds the cumulative jump count through response i+1
+ """
+ if total_ideas <= 0:
+ return []
+
+ cumulative = [0] * total_ideas
+ current_jumps = 0
+
+ for i in range(total_ideas):
+ if (i + 1) in jump_positions: # Positions are 1-indexed
+ current_jumps += 1
+ cumulative[i] = current_jumps
+
+ return cumulative
+
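+# Worked example: jump_positions=[2, 3, 7] with total_ideas=8 produces
+# [0, 1, 2, 2, 2, 2, 3, 3]: jumps register at responses 2, 3, and 7, and the
+# count holds steady in between.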
+
+def analyze_originality_flexibility_correlation(
+ novelty_scores: dict[str, float],
+ flexibility_scores: dict[str, int]
+) -> dict[str, Any]:
+ """
+ Analyze correlation between novelty (originality) and flexibility across conditions.
+
+ Paper finding from arXiv:2405.00899:
+ - Humans: No correlation between flexibility and originality (r ≈ 0)
+ - LLMs: Positive correlation - flexible LLMs score higher on originality
+
+ Research question: Does our attribute+expert pipeline break this LLM pattern?
+ - If C4 (Full Pipeline) shows high novelty but moderate flexibility → breaks pattern
+ - If correlation is near zero → human-like creative behavior
+
+ Args:
+ novelty_scores: Mean novelty score per condition
+ flexibility_scores: Combined jump count (or flexibility score) per condition
+
+ Returns:
+ - pearson_r: Correlation coefficient
+ - interpretation: What the correlation means
+ - per_condition: Novelty and flexibility values per condition
+ """
+ conditions = list(novelty_scores.keys())
+ novelties = [novelty_scores[c] for c in conditions if c in flexibility_scores]
+ flexibilities = [flexibility_scores[c] for c in conditions if c in flexibility_scores]
+ valid_conditions = [c for c in conditions if c in flexibility_scores]
+
+ if len(novelties) < 3:
+ return {
+ "pearson_r": None,
+ "interpretation": "Insufficient data (need at least 3 conditions)",
+ "conditions": valid_conditions,
+ "novelty_values": novelties,
+ "flexibility_values": flexibilities
+ }
+
+ # Check for zero variance
+ if np.std(novelties) == 0 or np.std(flexibilities) == 0:
+ return {
+ "pearson_r": 0.0,
+ "interpretation": "Zero variance in one variable",
+ "conditions": valid_conditions,
+ "novelty_values": novelties,
+ "flexibility_values": flexibilities
+ }
+
+ correlation = np.corrcoef(novelties, flexibilities)[0, 1]
+
+ # Interpret the correlation
+ if correlation > 0.3:
+ interpretation = "Positive correlation (typical LLM pattern)"
+ elif correlation < -0.3:
+ interpretation = "Negative correlation (atypical - high novelty with low flexibility)"
+ else:
+ interpretation = "No significant correlation (human-like pattern)"
+
+ return {
+ "pearson_r": float(correlation),
+ "interpretation": interpretation,
+ "conditions": valid_conditions,
+ "novelty_values": novelties,
+ "flexibility_values": flexibilities,
+ "per_condition": {c: {"novelty": novelties[i], "flexibility": flexibilities[i]}
+ for i, c in enumerate(valid_conditions)}
+ }
+
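+# Reading the result (illustrative): if the full pipeline condition shows the
+# highest novelty but only mid-range flexibility, the correlation weakens
+# toward zero and the interpretation reads as the human-like pattern; if
+# novelty rises monotonically with flexibility across conditions, r exceeds
+# 0.3 and the typical LLM pattern is reported.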
+
+def plot_cumulative_jump_profiles(
+ profiles_by_condition: dict[str, list[int]],
+ output_path: Path
+):
+ """
+ Plot cumulative jump profiles for each condition.
+
+ Shows exploration patterns over generation sequence - steep slopes indicate
+ rapid category switching, flat regions indicate persistent exploration.
+
+ Args:
+ profiles_by_condition: Cumulative jump counts per condition
+ output_path: Directory to save the plot
+ """
+ if not HAS_MATPLOTLIB:
+ print(" Skipping cumulative jump plot (matplotlib not available)")
+ return
+
+ plt.figure(figsize=(12, 6))
+
+ # Color scheme for conditions
+ colors = plt.cm.tab10(np.linspace(0, 1, len(profiles_by_condition)))
+
+ for (condition, profile), color in zip(profiles_by_condition.items(), colors):
+ if profile: # Only plot if there's data
+ x = range(1, len(profile) + 1)
+ plt.plot(x, profile, label=condition, linewidth=2, color=color, marker='o',
+ markersize=3, alpha=0.8)
+
+ plt.xlabel('Response Position', fontsize=12)
+ plt.ylabel('Cumulative Jumps', fontsize=12)
+ plt.title('Exploration Patterns by Condition\n(Cumulative Category Jumps)', fontsize=14)
+ plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left')
+ plt.grid(True, alpha=0.3)
+ plt.tight_layout()
+ plt.savefig(output_path / 'cumulative_jump_profiles.png', dpi=150, bbox_inches='tight')
+ plt.close()
+
+ print(f" Saved cumulative jump profiles to {output_path / 'cumulative_jump_profiles.png'}")
+
+
+async def analyze_flexibility_by_condition(
+ ideas_by_condition: dict[str, list[str]],
+ embeddings_by_condition: dict[str, np.ndarray] | None,
+ query: str = "bicycle",
+ output_dir: Path | None = None
+) -> dict[str, Any]:
+ """
+ Analyze AUT flexibility for each condition using both LLM and embedding methods.
+
+ Enhanced with arXiv:2405.00899 metrics:
+ - Combined jump signal (jumpcat ∧ jumpSS)
+ - Flexibility profile classification (Persistent/Flexible/Mixed)
+ - Cumulative jump profiles for visualization
+
+ Returns flexibility scores, category distributions, jump signals, and profiles.
+ """
+ results = {
+ "llm_flexibility": {},
+ "embedding_flexibility": {},
+ "jump_analysis": {},
+ "flexibility_profiles": {},
+ "cumulative_jump_profiles": {},
+ "method_correlation": {}
+ }
+
+ # LLM-based flexibility analysis
+ print("\nComputing LLM-based flexibility scores...")
+ for condition, ideas in ideas_by_condition.items():
+ print(f" {condition}...")
+ llm_result = await compute_flexibility_llm(ideas, query)
+ results["llm_flexibility"][condition] = llm_result
+
+ # Embedding-based flexibility analysis
+ if embeddings_by_condition is not None:
+ print("\nComputing embedding-based flexibility scores...")
+ for condition, embeddings in embeddings_by_condition.items():
+ ideas = ideas_by_condition[condition]
+ emb_result = compute_flexibility_embedding(embeddings, ideas)
+ results["embedding_flexibility"][condition] = emb_result
+
+ # Jump signal analysis (enhanced with combined jump)
+ if "cluster_assignments" in emb_result:
+ jump_result = compute_jump_signal(
+ emb_result["cluster_assignments"],
+ embeddings
+ )
+ results["jump_analysis"][condition] = jump_result
+
+ # Classify flexibility profile
+ profile = classify_flexibility_profile(
+ jump_result["combined_jump_count"],
+ len(ideas)
+ )
+ results["flexibility_profiles"][condition] = profile
+
+ # Compute cumulative jump profile
+ cumulative = compute_cumulative_jump_profile(
+ jump_result["combined_jump_positions"],
+ len(ideas)
+ )
+ results["cumulative_jump_profiles"][condition] = cumulative
+
+ # Generate cumulative jump profile visualization
+ if output_dir is not None and results["cumulative_jump_profiles"]:
+ print("\nGenerating cumulative jump profile visualization...")
+ plot_cumulative_jump_profiles(results["cumulative_jump_profiles"], output_dir)
+
+ # Calculate correlation between methods (if both available)
+ llm_scores = []
+ emb_scores = []
+ conditions_order = []
+
+ for condition in ideas_by_condition.keys():
+ if condition in results["llm_flexibility"] and condition in results["embedding_flexibility"]:
+ llm_flex = results["llm_flexibility"][condition].get("flexibility_score")
+ emb_flex = results["embedding_flexibility"][condition].get("flexibility_score")
+ if llm_flex is not None and emb_flex is not None:
+ llm_scores.append(llm_flex)
+ emb_scores.append(emb_flex)
+ conditions_order.append(condition)
+
+ if len(llm_scores) >= 3:
+ # Pearson correlation
+ if np.std(llm_scores) > 0 and np.std(emb_scores) > 0:
+ correlation = np.corrcoef(llm_scores, emb_scores)[0, 1]
+ results["method_correlation"] = {
+ "pearson_r": float(correlation),
+ "llm_scores": dict(zip(conditions_order, llm_scores)),
+ "embedding_scores": dict(zip(conditions_order, emb_scores))
+ }
+
+ return results
+
+
+# ============================================================================
+# Main Analysis
+# ============================================================================
+
+async def run_analysis(
+ experiment_file: Path,
+ output_dir: Path,
+ skip_viz: bool = False,
+ skip_embeddings: bool = False
+):
+ """Run all analyses on an experiment file."""
+
+ print("=" * 60)
+ print("ADVANCED AUTOMATIC ANALYSIS")
+ print("=" * 60)
+
+ # Load experiment data
+ print(f"\nLoading: {experiment_file.name}")
+ with open(experiment_file, 'r', encoding='utf-8') as f:
+ data = json.load(f)
+
+ experiment_id = data.get('experiment_id', 'unknown')
+ print(f"Experiment ID: {experiment_id}")
+
+ # Extract ideas by condition
+ ideas_by_condition: dict[str, list[str]] = defaultdict(list)
+ idea_texts: list[str] = []
+ idea_conditions: list[str] = []
+
+ for result in data.get('results', []):
+ for condition_name, condition_data in result.get('conditions', {}).items():
+ dedup = condition_data.get('dedup', {})
+ unique_ideas = dedup.get('unique_ideas', [])
+ for idea in unique_ideas:
+ ideas_by_condition[condition_name].append(idea)
+ idea_texts.append(idea)
+ idea_conditions.append(condition_name)
+
+ total_ideas = len(idea_texts)
+ print(f"Total ideas: {total_ideas}")
+ print(f"Conditions: {list(ideas_by_condition.keys())}")
+
+ results = {
+ 'experiment_id': experiment_id,
+ 'analysis_timestamp': datetime.now(timezone.utc).isoformat(),
+ 'total_ideas': total_ideas,
+ 'conditions': list(ideas_by_condition.keys())
+ }
+
+ # 1. Lexical Diversity
+ print("\n" + "-" * 40)
+ print("1. LEXICAL DIVERSITY ANALYSIS")
+ print("-" * 40)
+
+ lexical_results = analyze_lexical_diversity_by_condition(ideas_by_condition)
+ results['lexical_diversity'] = lexical_results
+
+ for condition, metrics in lexical_results.items():
+ print(f"\n{condition}:")
+ print(f" Overall TTR: {metrics['overall']['type_token_ratio']:.3f}")
+ print(f" Vocabulary size: {metrics['overall']['vocabulary_size']}")
+ print(f" Avg words/idea: {metrics['per_idea_mean']['total_words']:.1f}")
+
+ # 2. Concept Extraction
+ print("\n" + "-" * 40)
+ print("2. CONCEPT EXTRACTION")
+ print("-" * 40)
+
+ concept_results = analyze_concepts_by_condition(ideas_by_condition)
+ results['concept_extraction'] = concept_results
+
+ for condition, metrics in concept_results.items():
+ print(f"\n{condition}:")
+ print(f" Unique keywords: {metrics['unique_keywords']}")
+ print(f" Domain coverage: {metrics['domain_coverage']} domains")
+ print(f" Top keywords: {[k for k, _ in metrics['top_keywords'][:5]]}")
+
+ # 3-5. Embedding-based Analysis
+ if not skip_embeddings:
+ print("\n" + "-" * 40)
+ print("3-5. EMBEDDING-BASED ANALYSIS")
+ print("-" * 40)
+
+ # Try to get embeddings
+ print("Getting embeddings from Ollama...")
+ embeddings = await get_embeddings_from_ollama(idea_texts)
+
+ if embeddings is not None:
+ # Organize embeddings by condition
+            # Collected as lists, then converted to per-condition np.ndarray below
+            embeddings_by_condition: dict[str, Any] = defaultdict(list)
+ for emb, condition in zip(embeddings, idea_conditions):
+ embeddings_by_condition[condition].append(emb)
+
+ for condition in embeddings_by_condition:
+ embeddings_by_condition[condition] = np.array(embeddings_by_condition[condition])
+
+ embedding_results = analyze_embeddings(
+ ideas_by_condition,
+ embeddings_by_condition,
+ output_dir,
+ skip_viz=skip_viz
+ )
+
+ results['novelty_scores'] = embedding_results['novelty_scores']
+ results['cross_condition_overlap'] = embedding_results['cross_condition_overlap']
+ results['centroid_distances'] = embedding_results['centroid_distances']
+
+ # Print novelty scores
+ print("\nNovelty Scores (distance from global centroid):")
+ for condition, scores in embedding_results['novelty_scores'].items():
+ print(f" {condition}: mean={scores['mean']:.4f}, std={scores['std']:.4f}")
+
+ # Print overlap
+ print("\nCross-condition Cohesion (% nearest neighbors from same condition):")
+ for condition, overlap in embedding_results['cross_condition_overlap'].items():
+ print(f" {condition}: {overlap['cohesion_ratio']:.1%}")
+
+ # Print centroid distances
+ print("\nCentroid Distances (lower = more similar):")
+ for pair, dist in sorted(embedding_results['centroid_distances'].items()):
+ print(f" {pair}: {dist:.4f}")
+
+            # 6. AUT Flexibility Analysis (Enhanced with arXiv:2405.00899 metrics)
+ print("\n" + "-" * 40)
+ print("6. AUT FLEXIBILITY ANALYSIS (arXiv:2405.00899)")
+ print("-" * 40)
+
+ # Extract query from experiment data
+ query = "bicycle" # Default
+ if data.get('results') and len(data['results']) > 0:
+ first_result = data['results'][0]
+ if 'query' in first_result:
+ query = first_result['query']
+
+ print(f"Query object: {query}")
+
+ flexibility_results = await analyze_flexibility_by_condition(
+ ideas_by_condition,
+ embeddings_by_condition,
+ query,
+ output_dir=output_dir if not skip_viz else None
+ )
+
+ results['flexibility_analysis'] = flexibility_results
+
+ # Print flexibility scores
+ print("\nLLM-based Flexibility Scores (semantic categories):")
+ for condition, flex_data in flexibility_results['llm_flexibility'].items():
+ if 'flexibility_score' in flex_data:
+ print(f" {condition}: {flex_data['flexibility_score']} categories")
+ if 'category_distribution' in flex_data:
+ top_cats = sorted(flex_data['category_distribution'].items(),
+ key=lambda x: x[1], reverse=True)[:3]
+ print(f" Top categories: {[c[0] for c in top_cats]}")
+
+ print("\nEmbedding-based Flexibility Scores (hierarchical clustering):")
+ for condition, flex_data in flexibility_results['embedding_flexibility'].items():
+ if 'flexibility_score' in flex_data:
+ print(f" {condition}: {flex_data['flexibility_score']} clusters")
+
+ # Enhanced Jump Signal Analysis (Combined Jump from paper)
+ print("\nCombined Jump Signal Analysis (jumpcat ∧ jumpSS):")
+ print(" Condition | Cat-Only | Sem-Only | Combined | Profile")
+ print(" " + "-" * 60)
+ for condition, jump_data in flexibility_results['jump_analysis'].items():
+ profile = flexibility_results.get('flexibility_profiles', {}).get(condition, "N/A")
+ cat_jumps = jump_data.get('category_jump_count', 0)
+ sem_jumps = jump_data.get('semantic_jump_count', 0)
+ combined = jump_data.get('combined_jump_count', 0)
+ print(f" {condition:16} | {cat_jumps:8} | {sem_jumps:8} | {combined:8} | {profile}")
+
+ # Print flexibility profiles summary
+ print("\nFlexibility Profiles (based on combined jump ratio):")
+ for condition, profile in flexibility_results.get('flexibility_profiles', {}).items():
+ jump_data = flexibility_results['jump_analysis'].get(condition, {})
+ ratio = jump_data.get('combined_jump_ratio', 0)
+ print(f" {condition}: {profile} (ratio={ratio:.2%})")
+
+ # 7. Originality-Flexibility Correlation Analysis
+ print("\n" + "-" * 40)
+ print("7. ORIGINALITY-FLEXIBILITY CORRELATION")
+ print("-" * 40)
+
+ # Extract novelty means and flexibility scores for correlation
+ novelty_means = {c: scores['mean'] for c, scores in embedding_results['novelty_scores'].items()}
+ flexibility_jumps = {c: jump_data.get('combined_jump_count', 0)
+ for c, jump_data in flexibility_results['jump_analysis'].items()}
+
+ correlation_result = analyze_originality_flexibility_correlation(
+ novelty_means,
+ flexibility_jumps
+ )
+ results['originality_flexibility_correlation'] = correlation_result
+
+ print(f"\nPearson r: {correlation_result.get('pearson_r', 'N/A')}")
+ print(f"Interpretation: {correlation_result.get('interpretation', 'N/A')}")
+
+ if correlation_result.get('per_condition'):
+ print("\nPer-Condition Values:")
+ for condition, vals in correlation_result['per_condition'].items():
+ print(f" {condition}: Novelty={vals['novelty']:.4f}, Flexibility={vals['flexibility']}")
+
+ # Print method correlation
+ if flexibility_results.get('method_correlation', {}).get('pearson_r') is not None:
+ print(f"\nLLM vs Embedding Flexibility Correlation: r={flexibility_results['method_correlation']['pearson_r']:.3f}")
+
+ else:
+ print("Could not get embeddings. Skipping embedding-based analysis.")
+ print("Make sure Ollama is running with the embedding model.")
+
+ # Save results
+ output_file = output_dir / f"aut_flexibility_{experiment_id}.json"
+ with open(output_file, 'w', encoding='utf-8') as f:
+ # Convert numpy types to Python types for JSON serialization
+ def convert(obj):
+ if isinstance(obj, np.ndarray):
+ return obj.tolist()
+ if isinstance(obj, (np.int64, np.int32)):
+ return int(obj)
+ if isinstance(obj, (np.float64, np.float32)):
+ return float(obj)
+ return obj
+
+ json.dump(results, f, ensure_ascii=False, indent=2, default=convert)
+
+ print("\n" + "=" * 60)
+ print(f"Results saved to: {output_file}")
+ if not skip_viz and HAS_MATPLOTLIB:
+ print(f"Visualizations saved to: {output_dir}")
+ print("=" * 60)
+
+ return results
+
+
+def list_experiment_files() -> list[Path]:
+ """List available deduped experiment files."""
+ return sorted(RESULTS_DIR.glob('*_deduped.json'), key=lambda p: p.stat().st_mtime, reverse=True)
+
+
+def main():
+ parser = argparse.ArgumentParser(
+ description='Run advanced automatic analysis on experiment results.',
+ formatter_class=argparse.RawDescriptionHelpFormatter
+ )
+ parser.add_argument(
+ 'experiment_file',
+ nargs='?',
+ default=None,
+ help='Experiment file name'
+ )
+ parser.add_argument(
+ '--list', '-l',
+ action='store_true',
+ help='List available experiment files'
+ )
+ parser.add_argument(
+ '--skip-viz',
+ action='store_true',
+ help='Skip visualization generation'
+ )
+ parser.add_argument(
+ '--skip-embeddings',
+ action='store_true',
+ help='Skip embedding-based analysis (faster)'
+ )
+ args = parser.parse_args()
+
+ available_files = list_experiment_files()
+
+ if args.list:
+ print("Available experiment files:")
+ for f in available_files:
+ print(f" {f.name}")
+ return
+
+ # Determine which file to use
+ if args.experiment_file:
+ experiment_file = RESULTS_DIR / args.experiment_file
+ if not experiment_file.exists():
+ experiment_file = RESULTS_DIR / f"{args.experiment_file}.json"
+ else:
+ if not available_files:
+ print("Error: No deduped experiment files found.")
+ return
+ experiment_file = available_files[0]
+ print(f"Using latest: {experiment_file.name}")
+
+ if not experiment_file.exists():
+ print(f"Error: File not found: {experiment_file}")
+ return
+
+ # Run analysis
+ asyncio.run(run_analysis(
+ experiment_file,
+ RESULTS_DIR,
+ skip_viz=args.skip_viz,
+ skip_embeddings=args.skip_embeddings
+ ))
+
+
+if __name__ == '__main__':
+ main()
diff --git a/experiments/compute_metrics.py b/experiments/compute_metrics.py
new file mode 100644
index 0000000..3918b76
--- /dev/null
+++ b/experiments/compute_metrics.py
@@ -0,0 +1,666 @@
+"""
+Compute metrics for experiment results.
+
+Computes metrics BOTH before and after deduplication:
+- Pre-dedup: Measures raw generation capability
+- Post-dedup: Measures quality of unique ideas
+
+Also normalizes idea counts for fair cross-condition comparison.
+
+Usage:
+ python -m experiments.compute_metrics --input results/experiment_xxx_deduped.json
+"""
+
+import sys
+import json
+import argparse
+import asyncio
+import logging
+import random
+from pathlib import Path
+from typing import List, Dict, Any, Optional, Tuple
+from dataclasses import dataclass, asdict
+
+import numpy as np
+
+# Add backend to path for imports
+sys.path.insert(0, str(Path(__file__).parent.parent / "backend"))
+
+from app.services.embedding_service import embedding_service
+from app.services.llm_service import ollama_provider, extract_json_from_response
+from experiments.config import RESULTS_DIR, MODEL, RANDOM_SEED
+
+# Configure logging
+logging.basicConfig(
+ level=logging.INFO,
+ format='%(asctime)s - %(levelname)s - %(message)s'
+)
+logger = logging.getLogger(__name__)
+
+
+@dataclass
+class DiversityMetrics:
+ """Semantic diversity metrics for a set of ideas."""
+ mean_pairwise_distance: float
+ std_pairwise_distance: float
+ min_pairwise_distance: float
+ max_pairwise_distance: float
+ idea_count: int
+
+
+@dataclass
+class ClusterMetrics:
+ """Cluster analysis metrics."""
+ optimal_clusters: int
+ silhouette_score: float
+ cluster_sizes: List[int]
+
+
+@dataclass
+class QueryDistanceMetrics:
+ """Distance from original query metrics."""
+ mean_distance: float
+ std_distance: float
+ min_distance: float
+ max_distance: float
+ distances: List[float]
+
+
+@dataclass
+class RelevanceMetrics:
+ """LLM-as-judge relevance metrics (for hallucination detection)."""
+ relevance_rate: float # Score >= 2
+ nonsense_rate: float # Score == 1
+ mean_score: float
+ score_distribution: Dict[int, int] # {1: count, 2: count, 3: count}
+
+
+@dataclass
+class ConditionMetrics:
+ """All metrics for a single condition."""
+ condition: str
+ query: str
+
+ # Idea counts
+ raw_count: int
+ unique_count: int
+ survival_rate: float
+
+ # Pre-dedup metrics (on raw ideas)
+ pre_dedup_diversity: Optional[DiversityMetrics]
+
+ # Post-dedup metrics (on unique ideas)
+ post_dedup_diversity: Optional[DiversityMetrics]
+ post_dedup_clusters: Optional[ClusterMetrics]
+ post_dedup_query_distance: Optional[QueryDistanceMetrics]
+
+ # Normalized metrics (on equal-sized samples)
+ normalized_diversity: Optional[DiversityMetrics]
+ normalized_sample_size: int
+
+ # Relevance/hallucination (post-dedup only)
+ relevance: Optional[RelevanceMetrics]
+
+
+# ============================================================
+# Embedding-based metrics
+# ============================================================
+
+async def get_embeddings(texts: List[str]) -> List[List[float]]:
+ """Get embeddings for a list of texts."""
+ if not texts:
+ return []
+ return await embedding_service.get_embeddings_batch(texts)
+
+
+def compute_pairwise_distances(embeddings: List[List[float]]) -> List[float]:
+ """Compute all pairwise cosine distances."""
+ n = len(embeddings)
+ if n < 2:
+ return []
+
+ distances = []
+ for i in range(n):
+ for j in range(i + 1, n):
+ sim = embedding_service.cosine_similarity(embeddings[i], embeddings[j])
+ dist = 1 - sim # Convert similarity to distance
+ distances.append(dist)
+
+ return distances
+
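+# Note: for n ideas this computes n * (n - 1) / 2 pairs (e.g. 20 ideas yield
+# 190 distances), so cost grows quadratically with the idea count.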
+
+async def compute_diversity_metrics(ideas: List[str]) -> Optional[DiversityMetrics]:
+ """Compute semantic diversity metrics for a set of ideas."""
+ if len(ideas) < 2:
+ return None
+
+ embeddings = await get_embeddings(ideas)
+ distances = compute_pairwise_distances(embeddings)
+
+ if not distances:
+ return None
+
+ return DiversityMetrics(
+ mean_pairwise_distance=float(np.mean(distances)),
+ std_pairwise_distance=float(np.std(distances)),
+ min_pairwise_distance=float(np.min(distances)),
+ max_pairwise_distance=float(np.max(distances)),
+ idea_count=len(ideas)
+ )
+
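+# Worked example (hypothetical distances): three ideas with pairwise cosine
+# distances [0.2, 0.6, 0.7] give mean 0.5, std ≈ 0.216, min 0.2, max 0.7,
+# and idea_count 3.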
+
+async def compute_query_distance_metrics(
+ query: str,
+ ideas: List[str]
+) -> Optional[QueryDistanceMetrics]:
+ """Compute distance of ideas from the original query."""
+ if not ideas:
+ return None
+
+ # Get query embedding
+ query_emb = await embedding_service.get_embedding(query)
+ idea_embs = await get_embeddings(ideas)
+
+ distances = []
+ for emb in idea_embs:
+ sim = embedding_service.cosine_similarity(query_emb, emb)
+ dist = 1 - sim
+ distances.append(dist)
+
+ return QueryDistanceMetrics(
+ mean_distance=float(np.mean(distances)),
+ std_distance=float(np.std(distances)),
+ min_distance=float(np.min(distances)),
+ max_distance=float(np.max(distances)),
+ distances=distances
+ )
+
+
+async def compute_cluster_metrics(ideas: List[str]) -> Optional[ClusterMetrics]:
+ """Compute cluster analysis metrics."""
+ if len(ideas) < 3:
+ return None
+
+ try:
+ from sklearn.cluster import KMeans
+ from sklearn.metrics import silhouette_score
+ except ImportError:
+ logger.warning("sklearn not installed, skipping cluster metrics")
+ return None
+
+ embeddings = await get_embeddings(ideas)
+ embeddings_np = np.array(embeddings)
+
+ # Find optimal k using silhouette score
+ max_k = min(len(ideas) - 1, 10)
+ if max_k < 2:
+ return None
+
+ best_k = 2
+ best_score = -1
+
+ for k in range(2, max_k + 1):
+ try:
+ kmeans = KMeans(n_clusters=k, random_state=RANDOM_SEED, n_init=10)
+ labels = kmeans.fit_predict(embeddings_np)
+ score = silhouette_score(embeddings_np, labels)
+ if score > best_score:
+ best_score = score
+ best_k = k
+ except Exception as e:
+ logger.warning(f"Clustering failed for k={k}: {e}")
+ continue
+
+ # Get cluster sizes for optimal k
+ kmeans = KMeans(n_clusters=best_k, random_state=RANDOM_SEED, n_init=10)
+ labels = kmeans.fit_predict(embeddings_np)
+ cluster_sizes = [int(np.sum(labels == i)) for i in range(best_k)]
+
+ return ClusterMetrics(
+ optimal_clusters=best_k,
+ silhouette_score=float(best_score),
+ cluster_sizes=sorted(cluster_sizes, reverse=True)
+ )
+
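+# Selection sketch: k is searched over 2..min(n - 1, 10) and the k with the
+# best silhouette score (range -1..1, higher = cleaner separation) wins;
+# e.g. 12 unique ideas mean k is tried from 2 through 10.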
+
+# ============================================================
+# LLM-as-Judge relevance metrics
+# ============================================================
+
+async def judge_relevance(query: str, idea: str, model: Optional[str] = None) -> Dict[str, Any]:
+ """Use LLM to judge if an idea is relevant to the query."""
+ model = model or MODEL
+
+ prompt = f"""/no_think
+You are evaluating whether a generated idea is relevant and applicable to an original query.
+
+Original query: {query}
+Generated idea: {idea}
+
+Rate the relevance on a scale of 1-3:
+1 = Nonsense/completely irrelevant (no logical connection to the query)
+2 = Weak but valid connection (requires stretch but has some relevance)
+3 = Clearly relevant and applicable (directly relates to the query)
+
+Return JSON only:
+{{"score": N, "reason": "brief explanation (10-20 words)"}}
+"""
+
+ try:
+ response = await ollama_provider.generate(
+ prompt=prompt,
+ model=model,
+ temperature=0.3 # Lower temperature for more consistent judgments
+ )
+ result = extract_json_from_response(response)
+ return {
+ "score": result.get("score", 2),
+ "reason": result.get("reason", "")
+ }
+ except Exception as e:
+ logger.warning(f"Relevance judgment failed: {e}")
+ return {"score": 2, "reason": "judgment failed"}
+
+
+async def compute_relevance_metrics(
+ query: str,
+ ideas: List[str],
+    model: Optional[str] = None,
+    sample_size: Optional[int] = None
+) -> Optional[RelevanceMetrics]:
+ """Compute LLM-as-judge relevance metrics for ideas."""
+ if not ideas:
+ return None
+
+ # Optionally sample to reduce API calls
+ if sample_size and len(ideas) > sample_size:
+ rng = random.Random(RANDOM_SEED)
+ ideas_to_judge = rng.sample(ideas, sample_size)
+ else:
+ ideas_to_judge = ideas
+
+ scores = []
+ for idea in ideas_to_judge:
+ result = await judge_relevance(query, idea, model)
+ scores.append(result["score"])
+
+ # Compute distribution
+ distribution = {1: 0, 2: 0, 3: 0}
+ for s in scores:
+ if s in distribution:
+ distribution[s] += 1
+
+ nonsense_count = distribution[1]
+ relevant_count = distribution[2] + distribution[3]
+
+ return RelevanceMetrics(
+ relevance_rate=relevant_count / len(scores) if scores else 0,
+ nonsense_rate=nonsense_count / len(scores) if scores else 0,
+ mean_score=float(np.mean(scores)) if scores else 0,
+ score_distribution=distribution
+ )
+
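+# Worked example (hypothetical scores): a distribution of {1: 1, 2: 3, 3: 6}
+# over 10 judged ideas gives relevance_rate 0.9, nonsense_rate 0.1, and
+# mean_score (1*1 + 2*3 + 3*6) / 10 = 2.5.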
+
+# ============================================================
+# Main metrics computation
+# ============================================================
+
+async def compute_condition_metrics(
+ query: str,
+ condition: str,
+ raw_ideas: List[str],
+ unique_ideas: List[str],
+ normalized_sample_size: int,
+ compute_relevance: bool = False
+) -> ConditionMetrics:
+ """Compute all metrics for a single condition."""
+
+ raw_count = len(raw_ideas)
+ unique_count = len(unique_ideas)
+ survival_rate = unique_count / raw_count if raw_count > 0 else 1.0
+
+ logger.info(f" Computing metrics for {condition}...")
+ logger.info(f" Raw: {raw_count}, Unique: {unique_count}, Survival: {survival_rate:.1%}")
+
+ # Pre-dedup diversity (on raw ideas)
+ logger.info(f" Computing pre-dedup diversity...")
+ pre_dedup_diversity = await compute_diversity_metrics(raw_ideas)
+
+ # Post-dedup diversity (on unique ideas)
+ logger.info(f" Computing post-dedup diversity...")
+ post_dedup_diversity = await compute_diversity_metrics(unique_ideas)
+
+ # Cluster analysis (post-dedup)
+ logger.info(f" Computing cluster metrics...")
+ post_dedup_clusters = await compute_cluster_metrics(unique_ideas)
+
+ # Query distance (post-dedup)
+ logger.info(f" Computing query distance...")
+ post_dedup_query_distance = await compute_query_distance_metrics(query, unique_ideas)
+
+ # Normalized diversity (equal-sized sample for fair comparison)
+ normalized_diversity = None
+ if len(unique_ideas) >= normalized_sample_size and normalized_sample_size > 1:
+ logger.info(f" Computing normalized diversity (n={normalized_sample_size})...")
+ rng = random.Random(RANDOM_SEED)
+ sampled_ideas = rng.sample(unique_ideas, normalized_sample_size)
+ normalized_diversity = await compute_diversity_metrics(sampled_ideas)
+
+ # Relevance metrics (optional, expensive)
+ relevance = None
+ if compute_relevance and unique_ideas:
+ logger.info(f" Computing relevance metrics (LLM-as-judge)...")
+ # Sample up to 10 ideas to reduce cost
+ relevance = await compute_relevance_metrics(
+ query, unique_ideas, sample_size=min(10, len(unique_ideas))
+ )
+
+ return ConditionMetrics(
+ condition=condition,
+ query=query,
+ raw_count=raw_count,
+ unique_count=unique_count,
+ survival_rate=survival_rate,
+ pre_dedup_diversity=pre_dedup_diversity,
+ post_dedup_diversity=post_dedup_diversity,
+ post_dedup_clusters=post_dedup_clusters,
+ post_dedup_query_distance=post_dedup_query_distance,
+ normalized_diversity=normalized_diversity,
+ normalized_sample_size=normalized_sample_size,
+ relevance=relevance
+ )
+
+
+async def process_experiment_results(
+ input_file: Path,
+ output_file: Optional[Path] = None,
+ compute_relevance: bool = False
+) -> Dict[str, Any]:
+ """
+ Process experiment results and compute all metrics.
+
+ Args:
+ input_file: Path to deduped experiment results JSON
+ output_file: Path for output (default: input with _metrics suffix)
+ compute_relevance: Whether to compute LLM-as-judge relevance
+
+ Returns:
+ Results with computed metrics
+ """
+ # Load experiment results
+ with open(input_file, "r", encoding="utf-8") as f:
+ experiment = json.load(f)
+
+ logger.info(f"Processing experiment: {experiment.get('experiment_id', 'unknown')}")
+
+ # Determine normalized sample size (minimum unique count across all conditions)
+ min_unique_count = float('inf')
+ for query_result in experiment["results"]:
+ for condition, cond_result in query_result["conditions"].items():
+ if cond_result.get("success", False):
+ dedup = cond_result.get("dedup", {})
+ unique_count = len(dedup.get("unique_ideas", cond_result.get("ideas", [])))
+ if unique_count > 0:
+ min_unique_count = min(min_unique_count, unique_count)
+
+ normalized_sample_size = min(int(min_unique_count), 10) if min_unique_count != float('inf') else 5
+ logger.info(f"Normalized sample size: {normalized_sample_size}")
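+    # Example: unique counts of {C1: 12, C2: 8, C4: 25} give min 8, so the
+    # normalized sample size is 8; if every condition had 25+ unique ideas,
+    # the size would cap at 10.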
+
+ # Process each query
+ all_metrics = []
+
+ for query_result in experiment["results"]:
+ query = query_result["query"]
+ query_id = query_result["query_id"]
+
+ logger.info(f"\nProcessing query: {query} ({query_id})")
+
+ query_metrics = {
+ "query_id": query_id,
+ "query": query,
+ "conditions": {}
+ }
+
+ for condition, cond_result in query_result["conditions"].items():
+ if not cond_result.get("success", False):
+ logger.warning(f" Skipping failed condition: {condition}")
+ continue
+
+ # Get raw and unique ideas
+ raw_ideas = cond_result.get("ideas", [])
+ dedup = cond_result.get("dedup", {})
+ unique_ideas = dedup.get("unique_ideas", raw_ideas)
+
+ # Compute metrics
+ metrics = await compute_condition_metrics(
+ query=query,
+ condition=condition,
+ raw_ideas=raw_ideas,
+ unique_ideas=unique_ideas,
+ normalized_sample_size=normalized_sample_size,
+ compute_relevance=compute_relevance
+ )
+
+ # Convert to dict for JSON serialization
+ query_metrics["conditions"][condition] = asdict(metrics)
+
+ all_metrics.append(query_metrics)
+
+ # Calculate aggregate statistics
+ aggregate = calculate_aggregate_metrics(all_metrics)
+
+ # Build output
+ output = {
+ "experiment_id": experiment.get("experiment_id"),
+ "config": experiment.get("config"),
+ "normalized_sample_size": normalized_sample_size,
+ "metrics_by_query": all_metrics,
+ "aggregate": aggregate
+ }
+
+ # Save results
+ if output_file is None:
+ stem = input_file.stem.replace("_deduped", "").replace("_complete", "")
+ output_file = input_file.parent / f"{stem}_metrics.json"
+
+ with open(output_file, "w", encoding="utf-8") as f:
+ json.dump(output, f, indent=2, ensure_ascii=False)
+
+ logger.info(f"\nMetrics saved to: {output_file}")
+
+ return output
+
+
+def calculate_aggregate_metrics(all_metrics: List[Dict]) -> Dict[str, Any]:
+ """Calculate aggregate statistics across all queries."""
+ aggregate = {}
+
+ # Collect metrics by condition
+ by_condition = {}
+
+ for query_metrics in all_metrics:
+ for condition, metrics in query_metrics["conditions"].items():
+ if condition not in by_condition:
+ by_condition[condition] = {
+ "raw_counts": [],
+ "unique_counts": [],
+ "survival_rates": [],
+ "pre_dedup_diversity": [],
+ "post_dedup_diversity": [],
+ "normalized_diversity": [],
+ "query_distances": [],
+ "cluster_counts": [],
+ "silhouette_scores": [],
+ "relevance_rates": [],
+ "nonsense_rates": []
+ }
+
+ bc = by_condition[condition]
+ bc["raw_counts"].append(metrics["raw_count"])
+ bc["unique_counts"].append(metrics["unique_count"])
+ bc["survival_rates"].append(metrics["survival_rate"])
+
+ if metrics.get("pre_dedup_diversity"):
+ bc["pre_dedup_diversity"].append(
+ metrics["pre_dedup_diversity"]["mean_pairwise_distance"]
+ )
+
+ if metrics.get("post_dedup_diversity"):
+ bc["post_dedup_diversity"].append(
+ metrics["post_dedup_diversity"]["mean_pairwise_distance"]
+ )
+
+ if metrics.get("normalized_diversity"):
+ bc["normalized_diversity"].append(
+ metrics["normalized_diversity"]["mean_pairwise_distance"]
+ )
+
+ if metrics.get("post_dedup_query_distance"):
+ bc["query_distances"].append(
+ metrics["post_dedup_query_distance"]["mean_distance"]
+ )
+
+ if metrics.get("post_dedup_clusters"):
+ bc["cluster_counts"].append(
+ metrics["post_dedup_clusters"]["optimal_clusters"]
+ )
+ bc["silhouette_scores"].append(
+ metrics["post_dedup_clusters"]["silhouette_score"]
+ )
+
+ if metrics.get("relevance"):
+ bc["relevance_rates"].append(metrics["relevance"]["relevance_rate"])
+ bc["nonsense_rates"].append(metrics["relevance"]["nonsense_rate"])
+
+ # Calculate means and stds
+ for condition, data in by_condition.items():
+ aggregate[condition] = {}
+
+ for metric_name, values in data.items():
+ if values:
+ aggregate[condition][metric_name] = {
+ "mean": float(np.mean(values)),
+ "std": float(np.std(values)),
+ "min": float(np.min(values)),
+ "max": float(np.max(values)),
+ "n": len(values)
+ }
+
+ return aggregate
+
+
+def print_metrics_summary(metrics: Dict[str, Any]):
+ """Print a formatted summary of computed metrics."""
+ print("\n" + "=" * 80)
+ print("METRICS SUMMARY")
+ print("=" * 80)
+
+ print(f"\nNormalized sample size: {metrics.get('normalized_sample_size', 'N/A')}")
+
+ aggregate = metrics.get("aggregate", {})
+
+ # Idea counts
+ print("\n--- Idea Counts ---")
+ print(f"{'Condition':<25} {'Raw':<10} {'Unique':<10} {'Survival':<10}")
+ print("-" * 55)
+ for cond, data in aggregate.items():
+ raw = data.get("raw_counts", {}).get("mean", 0)
+ unique = data.get("unique_counts", {}).get("mean", 0)
+ survival = data.get("survival_rates", {}).get("mean", 0)
+ print(f"{cond:<25} {raw:<10.1f} {unique:<10.1f} {survival:<10.1%}")
+
+ # Diversity metrics
+ print("\n--- Semantic Diversity (Mean Pairwise Distance) ---")
+ print(f"{'Condition':<25} {'Pre-Dedup':<12} {'Post-Dedup':<12} {'Normalized':<12}")
+ print("-" * 61)
+ for cond, data in aggregate.items():
+ pre = data.get("pre_dedup_diversity", {}).get("mean", 0)
+ post = data.get("post_dedup_diversity", {}).get("mean", 0)
+ norm = data.get("normalized_diversity", {}).get("mean", 0)
+ print(f"{cond:<25} {pre:<12.4f} {post:<12.4f} {norm:<12.4f}")
+
+ # Query distance
+ print("\n--- Query Distance (Novelty) ---")
+ print(f"{'Condition':<25} {'Mean Distance':<15} {'Std':<10}")
+ print("-" * 50)
+ for cond, data in aggregate.items():
+ dist = data.get("query_distances", {})
+ mean = dist.get("mean", 0)
+ std = dist.get("std", 0)
+ print(f"{cond:<25} {mean:<15.4f} {std:<10.4f}")
+
+ # Cluster metrics
+ print("\n--- Cluster Analysis ---")
+ print(f"{'Condition':<25} {'Clusters':<12} {'Silhouette':<12}")
+ print("-" * 49)
+ for cond, data in aggregate.items():
+ clusters = data.get("cluster_counts", {}).get("mean", 0)
+ silhouette = data.get("silhouette_scores", {}).get("mean", 0)
+ print(f"{cond:<25} {clusters:<12.1f} {silhouette:<12.4f}")
+
+ # Relevance (if computed)
+ has_relevance = any(
+ "relevance_rates" in data and data["relevance_rates"].get("n", 0) > 0
+ for data in aggregate.values()
+ )
+ if has_relevance:
+ print("\n--- Relevance (LLM-as-Judge) ---")
+ print(f"{'Condition':<25} {'Relevance':<12} {'Nonsense':<12}")
+ print("-" * 49)
+ for cond, data in aggregate.items():
+ rel = data.get("relevance_rates", {}).get("mean", 0)
+ non = data.get("nonsense_rates", {}).get("mean", 0)
+ print(f"{cond:<25} {rel:<12.1%} {non:<12.1%}")
+
+ print("\n" + "=" * 80)
+ print("Interpretation:")
+ print("- Higher pairwise distance = more diverse ideas")
+ print("- Higher query distance = more novel (farther from original)")
+ print("- More clusters = more distinct themes")
+ print("- Higher silhouette = cleaner cluster separation")
+ print("=" * 80)
+
+
+async def main():
+ parser = argparse.ArgumentParser(
+ description="Compute metrics for experiment results"
+ )
+ parser.add_argument(
+ "--input",
+ type=str,
+ required=True,
+ help="Input deduped experiment results JSON file"
+ )
+ parser.add_argument(
+ "--output",
+ type=str,
+ help="Output file path (default: input_metrics.json)"
+ )
+ parser.add_argument(
+ "--relevance",
+ action="store_true",
+ help="Compute LLM-as-judge relevance metrics (expensive)"
+ )
+
+ args = parser.parse_args()
+
+ input_path = Path(args.input)
+ if not input_path.exists():
+ input_path = RESULTS_DIR / args.input
+ if not input_path.exists():
+ print(f"Error: Input file not found: {args.input}")
+ sys.exit(1)
+
+ output_path = Path(args.output) if args.output else None
+
+ metrics = await process_experiment_results(
+ input_file=input_path,
+ output_file=output_path,
+ compute_relevance=args.relevance
+ )
+
+ print_metrics_summary(metrics)
+
+
+if __name__ == "__main__":
+ asyncio.run(main())
diff --git a/experiments/conditions/__init__.py b/experiments/conditions/__init__.py
new file mode 100644
index 0000000..bf60191
--- /dev/null
+++ b/experiments/conditions/__init__.py
@@ -0,0 +1,23 @@
+"""
+Condition implementations for the 5-condition experiment.
+
+C1: Direct generation (baseline)
+C2: Expert-only (no attributes)
+C3: Attribute-only (no experts)
+C4: Full pipeline (attributes + experts)
+C5: Random-perspective (random words instead of experts)
+"""
+
+from .c1_direct import generate_ideas as c1_generate
+from .c2_expert_only import generate_ideas as c2_generate
+from .c3_attribute_only import generate_ideas as c3_generate
+from .c4_full_pipeline import generate_ideas as c4_generate
+from .c5_random_perspective import generate_ideas as c5_generate
+
+__all__ = [
+ "c1_generate",
+ "c2_generate",
+ "c3_generate",
+ "c4_generate",
+ "c5_generate",
+]
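+
+# Usage sketch (illustrative): every condition exposes the same async call
+# shape, e.g. `result = await c1_generate("bicycle")`, returning a dict that
+# includes "condition", "query", "ideas", "idea_count", and "metadata".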
diff --git a/experiments/conditions/c1_direct.py b/experiments/conditions/c1_direct.py
new file mode 100644
index 0000000..ad7e6de
--- /dev/null
+++ b/experiments/conditions/c1_direct.py
@@ -0,0 +1,111 @@
+"""
+Condition 1: Direct Generation (Baseline)
+
+Single LLM call asking for creative ideas directly.
+No attribute decomposition, no expert perspectives.
+"""
+
+import sys
+from pathlib import Path
+
+# Add backend to path for imports
+sys.path.insert(0, str(Path(__file__).parent.parent.parent / "backend"))
+
+from typing import List, Dict, Any, Optional
+from app.services.llm_service import ollama_provider, extract_json_from_response
+from experiments.config import MODEL, TEMPERATURE, IDEAS_DIRECT, PROMPT_LANGUAGE
+
+
+def get_direct_generation_prompt(query: str, idea_count: int, lang: str = "en") -> str:
+ """Generate prompt for direct idea generation."""
+ if lang == "en":
+ return f"""/no_think
+Generate {idea_count} creative and innovative ideas for "{query}".
+
+Requirements:
+1. Each idea should be specific and actionable
+2. Ideas should be diverse, covering different aspects and applications
+3. Include both practical improvements and creative innovations
+4. Ideas should be 15-30 words each
+
+Return JSON only:
+{{"ideas": ["idea 1", "idea 2", "idea 3", ...]}}
+
+Generate exactly {idea_count} ideas."""
+ else:
+ return f"""/no_think
+為「{query}」生成 {idea_count} 個創意點子。
+
+要求:
+1. 每個點子要具體可行
+2. 點子要多元,涵蓋不同面向和應用
+3. 包含實用改進和創意創新
+4. 每個點子 15-30 字
+
+只回傳 JSON:
+{{"ideas": ["點子1", "點子2", "點子3", ...]}}
+
+生成正好 {idea_count} 個點子。"""
+
+
+async def generate_ideas(
+ query: str,
+    model: Optional[str] = None,
+    temperature: Optional[float] = None,
+    idea_count: Optional[int] = None,
+    lang: Optional[str] = None
+) -> Dict[str, Any]:
+ """
+ Generate ideas using direct LLM generation (C1 baseline).
+
+ Args:
+ query: The object/concept to generate ideas for
+ model: LLM model to use (default from config)
+ temperature: Generation temperature (default from config)
+ idea_count: Number of ideas to generate (default from config)
+ lang: Language for prompts (default from config)
+
+ Returns:
+ Dict with ideas and metadata
+ """
+ model = model or MODEL
+    temperature = TEMPERATURE if temperature is None else temperature  # honor an explicit 0.0
+ idea_count = idea_count or IDEAS_DIRECT
+ lang = lang or PROMPT_LANGUAGE
+
+ prompt = get_direct_generation_prompt(query, idea_count, lang)
+
+ response = await ollama_provider.generate(
+ prompt=prompt,
+ model=model,
+ temperature=temperature
+ )
+
+ result = extract_json_from_response(response)
+ ideas = result.get("ideas", [])
+
+ return {
+ "condition": "c1_direct",
+ "query": query,
+ "ideas": ideas,
+ "idea_count": len(ideas),
+ "metadata": {
+ "model": model,
+ "temperature": temperature,
+ "prompt_language": lang,
+ "mechanism": "direct_llm_generation"
+ }
+ }
+
+
+# For testing
+if __name__ == "__main__":
+ import asyncio
+
+ async def test():
+ result = await generate_ideas("Chair")
+ print(f"Generated {result['idea_count']} ideas:")
+ for i, idea in enumerate(result['ideas'], 1):
+ print(f" {i}. {idea}")
+
+ asyncio.run(test())
diff --git a/experiments/conditions/c2_expert_only.py b/experiments/conditions/c2_expert_only.py
new file mode 100644
index 0000000..96cb848
--- /dev/null
+++ b/experiments/conditions/c2_expert_only.py
@@ -0,0 +1,176 @@
+"""
+Condition 2: Expert-Only (No Attributes)
+
+Uses expert perspectives to generate ideas, but without
+attribute decomposition. Each expert generates ideas directly
+for the query from their professional perspective.
+"""
+
+import sys
+from pathlib import Path
+
+# Add backend to path for imports
+sys.path.insert(0, str(Path(__file__).parent.parent.parent / "backend"))
+
+from typing import List, Dict, Any, Optional
+from app.services.llm_service import ollama_provider, extract_json_from_response
+from app.services.expert_source_service import expert_source_service
+from experiments.config import (
+ MODEL, TEMPERATURE, EXPERT_COUNT, EXPERT_SOURCE,
+ IDEAS_PER_EXPERT, PROMPT_LANGUAGE
+)
+
+
+def get_expert_idea_generation_prompt(
+ query: str,
+ expert_name: str,
+ expert_domain: str,
+ idea_count: int,
+ lang: str = "en"
+) -> str:
+ """Generate prompt for expert-based idea generation."""
+ if lang == "en":
+ domain_text = f" ({expert_domain} field)" if expert_domain else ""
+ return f"""/no_think
+You are a {expert_name}{domain_text}.
+
+Task: Generate {idea_count} creative and innovative ideas for "{query}" from your professional perspective.
+
+Requirements:
+1. Each idea should reflect your professional expertise and unique viewpoint
+2. Think about how concepts from your field could improve or reimagine "{query}"
+3. Ideas should be specific and actionable (15-30 words each)
+4. Combine your professional knowledge with creative thinking
+
+Return JSON only:
+{{"ideas": ["idea 1", "idea 2", "idea 3", ...]}}
+
+Generate exactly {idea_count} ideas from your perspective as a {expert_name}."""
+ else:
+ domain_text = f"({expert_domain}領域)" if expert_domain else ""
+ return f"""/no_think
+你是一位{expert_name}{domain_text}。
+
+任務:從你的專業角度,為「{query}」生成 {idea_count} 個創意點子。
+
+要求:
+1. 每個點子要反映你的專業知識和獨特觀點
+2. 思考你領域的概念如何改進或重新想像「{query}」
+3. 點子要具體可行(每個 15-30 字)
+4. 結合專業知識和創意思維
+
+只回傳 JSON:
+{{"ideas": ["點子1", "點子2", "點子3", ...]}}
+
+從你作為{expert_name}的角度生成正好 {idea_count} 個點子。"""
+
+
+async def generate_ideas(
+ query: str,
+    model: Optional[str] = None,
+    temperature: Optional[float] = None,
+    expert_count: Optional[int] = None,
+    expert_source: Optional[str] = None,
+    ideas_per_expert: Optional[int] = None,
+    lang: Optional[str] = None
+) -> Dict[str, Any]:
+ """
+ Generate ideas using expert perspectives only (C2).
+
+ Args:
+ query: The object/concept to generate ideas for
+ model: LLM model to use
+ temperature: Generation temperature
+ expert_count: Number of experts to use
+ expert_source: Source of experts (curated, dbpedia, etc.)
+ ideas_per_expert: Ideas each expert generates
+ lang: Language for prompts
+
+ Returns:
+ Dict with ideas and metadata
+ """
+ model = model or MODEL
+    temperature = TEMPERATURE if temperature is None else temperature  # honor an explicit 0.0
+ expert_count = expert_count or EXPERT_COUNT
+ expert_source = expert_source or EXPERT_SOURCE
+ ideas_per_expert = ideas_per_expert or IDEAS_PER_EXPERT
+ lang = lang or PROMPT_LANGUAGE
+
+ # Get experts from curated source
+ experts, actual_source = expert_source_service.get_experts(
+ source=expert_source,
+ count=expert_count,
+ language=lang
+ )
+
+ all_ideas = []
+ expert_details = []
+
+ for expert in experts:
+ expert_name = expert.get("name", "Expert")
+ expert_domain = expert.get("domain", "")
+
+ prompt = get_expert_idea_generation_prompt(
+ query=query,
+ expert_name=expert_name,
+ expert_domain=expert_domain,
+ idea_count=ideas_per_expert,
+ lang=lang
+ )
+
+ response = await ollama_provider.generate(
+ prompt=prompt,
+ model=model,
+ temperature=temperature
+ )
+
+ result = extract_json_from_response(response)
+ ideas = result.get("ideas", [])
+
+ # Tag ideas with expert source
+ for idea in ideas:
+ all_ideas.append({
+ "idea": idea,
+ "expert_name": expert_name,
+ "expert_domain": expert_domain
+ })
+
+ expert_details.append({
+ "name": expert_name,
+ "domain": expert_domain,
+ "ideas_generated": len(ideas)
+ })
+
+ return {
+ "condition": "c2_expert_only",
+ "query": query,
+ "ideas": [item["idea"] for item in all_ideas],
+ "ideas_with_source": all_ideas,
+ "idea_count": len(all_ideas),
+ "metadata": {
+ "model": model,
+ "temperature": temperature,
+ "prompt_language": lang,
+ "expert_count": expert_count,
+ "expert_source": actual_source,
+ "ideas_per_expert": ideas_per_expert,
+ "experts": expert_details,
+ "mechanism": "expert_perspectives_only"
+ }
+ }
+
+
+# For testing
+if __name__ == "__main__":
+ import asyncio
+
+ async def test():
+ result = await generate_ideas("Chair")
+ print(f"Generated {result['idea_count']} ideas from {len(result['metadata']['experts'])} experts:")
+ for exp in result['metadata']['experts']:
+ print(f" - {exp['name']}: {exp['ideas_generated']} ideas")
+ print("\nSample ideas:")
+ for i, item in enumerate(result['ideas_with_source'][:5], 1):
+ print(f" {i}. [{item['expert_name']}] {item['idea']}")
+
+ asyncio.run(test())
diff --git a/experiments/conditions/c3_attribute_only.py b/experiments/conditions/c3_attribute_only.py
new file mode 100644
index 0000000..cc64e3c
--- /dev/null
+++ b/experiments/conditions/c3_attribute_only.py
@@ -0,0 +1,181 @@
+"""
+Condition 3: Attribute-Only (No Experts)
+
+Uses attribute decomposition to break down the query into
+structured categories, then generates ideas from each attribute.
+No expert perspectives involved.
+"""
+
+import sys
+from pathlib import Path
+
+# Add backend to path for imports
+sys.path.insert(0, str(Path(__file__).parent.parent.parent / "backend"))
+
+from typing import List, Dict, Any, Optional
+from app.services.llm_service import ollama_provider, extract_json_from_response
+from app.prompts.attribute_prompt import get_step1_dynamic_attributes_prompt
+from experiments.config import (
+ MODEL, TEMPERATURE, FIXED_CATEGORIES, PROMPT_LANGUAGE
+)
+
+
+def get_attribute_idea_generation_prompt(
+ query: str,
+ category: str,
+ attribute: str,
+ idea_count: int,
+ lang: str = "en"
+) -> str:
+ """Generate prompt for attribute-based idea generation."""
+ if lang == "en":
+ return f"""/no_think
+Generate {idea_count} creative ideas for "{query}" focusing on the attribute "{attribute}" (Category: {category}).
+
+Requirements:
+1. Each idea should be directly inspired by the attribute "{attribute}"
+2. Think about how this attribute could be improved, reimagined, or applied in new ways
+3. Ideas should be specific and actionable (15-30 words each)
+4. Be creative while maintaining relevance to the attribute
+
+Return JSON only:
+{{"ideas": ["idea 1", "idea 2", ...]}}
+
+Generate exactly {idea_count} ideas based on the attribute "{attribute}"."""
+ else:
+ return f"""/no_think
+為「{query}」生成 {idea_count} 個創意點子,聚焦於屬性「{attribute}」(類別:{category})。
+
+要求:
+1. 每個點子要直接受屬性「{attribute}」啟發
+2. 思考如何改進、重新想像或以新方式應用這個屬性
+3. 點子要具體可行(每個 15-30 字)
+4. 保持創意同時與屬性相關
+
+只回傳 JSON:
+{{"ideas": ["點子1", "點子2", ...]}}
+
+基於屬性「{attribute}」生成正好 {idea_count} 個點子。"""
+
+
+async def generate_ideas(
+    model: Optional[str] = None,
+    temperature: Optional[float] = None,
+    categories: Optional[List[str]] = None,
+    ideas_per_attribute: int = 1,
+    lang: Optional[str] = None
+ lang: str = None
+) -> Dict[str, Any]:
+ """
+ Generate ideas using attribute decomposition only (C3).
+
+ Args:
+ query: The object/concept to generate ideas for
+ model: LLM model to use
+ temperature: Generation temperature
+ categories: Categories to use for decomposition
+ ideas_per_attribute: Ideas to generate per attribute
+ lang: Language for prompts
+
+ Returns:
+ Dict with ideas and metadata
+ """
+ model = model or MODEL
+    # Explicit None check so a valid temperature of 0.0 is not overridden
+    temperature = TEMPERATURE if temperature is None else temperature
+ categories = categories or FIXED_CATEGORIES
+ lang = lang or PROMPT_LANGUAGE
+
+ # Step 1: Generate attributes using existing prompt
+ # Build category definitions for the prompt
+ category_defs = [
+ {"name": cat, "description": f"Related {cat.lower()} of the object", "order": i}
+ for i, cat in enumerate(categories)
+ ]
+
+ attr_prompt = get_step1_dynamic_attributes_prompt(
+ query=query,
+ categories=category_defs,
+ lang=lang
+ )
+
+ attr_response = await ollama_provider.generate(
+ prompt=attr_prompt,
+ model=model,
+ temperature=temperature
+ )
+
+ attributes_by_category = extract_json_from_response(attr_response)
+
+ # Step 2: Generate ideas for each attribute
+ all_ideas = []
+ attribute_details = []
+
+ for category in categories:
+ attrs = attributes_by_category.get(category, [])
+
+ for attr in attrs:
+ prompt = get_attribute_idea_generation_prompt(
+ query=query,
+ category=category,
+ attribute=attr,
+ idea_count=ideas_per_attribute,
+ lang=lang
+ )
+
+ response = await ollama_provider.generate(
+ prompt=prompt,
+ model=model,
+ temperature=temperature
+ )
+
+ result = extract_json_from_response(response)
+ ideas = result.get("ideas", [])
+
+ # Tag ideas with attribute source
+ for idea in ideas:
+ all_ideas.append({
+ "idea": idea,
+ "category": category,
+ "attribute": attr
+ })
+
+ attribute_details.append({
+ "category": category,
+ "attribute": attr,
+ "ideas_generated": len(ideas)
+ })
+
+ return {
+ "condition": "c3_attribute_only",
+ "query": query,
+ "ideas": [item["idea"] for item in all_ideas],
+ "ideas_with_source": all_ideas,
+ "idea_count": len(all_ideas),
+ "metadata": {
+ "model": model,
+ "temperature": temperature,
+ "prompt_language": lang,
+ "categories": categories,
+ "attributes_by_category": attributes_by_category,
+ "attribute_count": sum(len(v) for v in attributes_by_category.values()),
+ "ideas_per_attribute": ideas_per_attribute,
+ "attributes": attribute_details,
+ "mechanism": "attribute_decomposition_only"
+ }
+ }
+
+
+# For testing
+if __name__ == "__main__":
+ import asyncio
+
+ async def test():
+ result = await generate_ideas("Chair")
+ print(f"Generated {result['idea_count']} ideas from {result['metadata']['attribute_count']} attributes:")
+ for cat, attrs in result['metadata']['attributes_by_category'].items():
+ print(f" {cat}: {', '.join(attrs)}")
+ print("\nSample ideas:")
+ for i, item in enumerate(result['ideas_with_source'][:5], 1):
+ print(f" {i}. [{item['category']}/{item['attribute']}] {item['idea']}")
+
+ asyncio.run(test())
diff --git a/experiments/conditions/c4_full_pipeline.py b/experiments/conditions/c4_full_pipeline.py
new file mode 100644
index 0000000..c59709e
--- /dev/null
+++ b/experiments/conditions/c4_full_pipeline.py
@@ -0,0 +1,214 @@
+"""
+Condition 4: Full Pipeline (Attributes + Experts)
+
+The complete novelty-seeking system:
+1. Attribute decomposition into categories
+2. Expert team generation
+3. Expert keyword generation for each attribute
+4. Description generation for each keyword
+"""
+
+import sys
+from pathlib import Path
+
+# Add backend to path for imports
+sys.path.insert(0, str(Path(__file__).parent.parent.parent / "backend"))
+
+from typing import List, Dict, Any, Optional
+from app.services.llm_service import ollama_provider, extract_json_from_response
+from app.services.expert_source_service import expert_source_service
+from app.prompts.attribute_prompt import get_step1_dynamic_attributes_prompt
+from app.prompts.expert_transformation_prompt import (
+ get_expert_keyword_generation_prompt,
+ get_single_description_prompt
+)
+from experiments.config import (
+ MODEL, TEMPERATURE, FIXED_CATEGORIES, EXPERT_COUNT,
+ EXPERT_SOURCE, KEYWORDS_PER_EXPERT, PROMPT_LANGUAGE
+)
+
+
+async def generate_ideas(
+    query: str,
+    model: Optional[str] = None,
+    temperature: Optional[float] = None,
+    categories: Optional[List[str]] = None,
+    expert_count: Optional[int] = None,
+    expert_source: Optional[str] = None,
+    keywords_per_expert: Optional[int] = None,
+    lang: Optional[str] = None
+) -> Dict[str, Any]:
+ """
+ Generate ideas using the full pipeline (C4).
+
+ Args:
+ query: The object/concept to generate ideas for
+ model: LLM model to use
+ temperature: Generation temperature
+ categories: Categories for attribute decomposition
+ expert_count: Number of experts
+ expert_source: Source of experts
+ keywords_per_expert: Keywords each expert generates per attribute
+ lang: Language for prompts
+
+ Returns:
+ Dict with ideas and metadata
+ """
+ model = model or MODEL
+    # Explicit None check so a valid temperature of 0.0 is not overridden
+    temperature = TEMPERATURE if temperature is None else temperature
+ categories = categories or FIXED_CATEGORIES
+ expert_count = expert_count or EXPERT_COUNT
+ expert_source = expert_source or EXPERT_SOURCE
+ keywords_per_expert = keywords_per_expert or KEYWORDS_PER_EXPERT
+ lang = lang or PROMPT_LANGUAGE
+
+ # Step 0: Get experts from curated source
+ experts_data, actual_source = expert_source_service.get_experts(
+ source=expert_source,
+ count=expert_count,
+ language=lang
+ )
+
+ # Convert to expected format
+ experts = [
+ {
+ "id": f"expert-{i}",
+ "name": exp.get("name", "Expert"),
+ "domain": exp.get("domain", ""),
+ "perspective": exp.get("perspective", "")
+ }
+ for i, exp in enumerate(experts_data)
+ ]
+
+ # Step 1: Generate attributes
+ category_defs = [
+ {"name": cat, "description": f"Related {cat.lower()} of the object", "order": i}
+ for i, cat in enumerate(categories)
+ ]
+
+ attr_prompt = get_step1_dynamic_attributes_prompt(
+ query=query,
+ categories=category_defs,
+ lang=lang
+ )
+
+ attr_response = await ollama_provider.generate(
+ prompt=attr_prompt,
+ model=model,
+ temperature=temperature
+ )
+
+ attributes_by_category = extract_json_from_response(attr_response)
+
+ # Step 2: Expert keyword generation for each category/attribute
+ all_keywords = []
+
+ for category in categories:
+ attrs = attributes_by_category.get(category, [])
+
+ for attr in attrs:
+ # Generate keywords from all experts for this attribute
+ keyword_prompt = get_expert_keyword_generation_prompt(
+ category=category,
+ attribute=attr,
+ experts=experts,
+ keywords_per_expert=keywords_per_expert,
+ lang=lang
+ )
+
+ keyword_response = await ollama_provider.generate(
+ prompt=keyword_prompt,
+ model=model,
+ temperature=temperature
+ )
+
+ keyword_result = extract_json_from_response(keyword_response)
+ keywords = keyword_result.get("keywords", [])
+
+ for kw in keywords:
+ all_keywords.append({
+ "category": category,
+ "attribute": attr,
+ "keyword": kw.get("keyword", ""),
+ "expert_id": kw.get("expert_id", ""),
+ "expert_name": kw.get("expert_name", "")
+ })
+
+ # Step 3: Generate descriptions for each keyword
+ all_ideas = []
+
+ for kw_info in all_keywords:
+ # Find expert details
+ expert = next(
+ (e for e in experts if e["id"] == kw_info["expert_id"]),
+ {"name": kw_info["expert_name"], "domain": "", "id": kw_info["expert_id"]}
+ )
+
+ desc_prompt = get_single_description_prompt(
+ query=query,
+ keyword=kw_info["keyword"],
+ expert_id=expert["id"],
+ expert_name=expert["name"],
+ expert_domain=expert.get("domain", ""),
+ lang=lang
+ )
+
+ desc_response = await ollama_provider.generate(
+ prompt=desc_prompt,
+ model=model,
+ temperature=temperature
+ )
+
+ desc_result = extract_json_from_response(desc_response)
+ description = desc_result.get("description", "")
+
+ all_ideas.append({
+ "idea": description,
+ "keyword": kw_info["keyword"],
+ "category": kw_info["category"],
+ "attribute": kw_info["attribute"],
+ "expert_name": expert["name"],
+ "expert_domain": expert.get("domain", "")
+ })
+
+ return {
+ "condition": "c4_full_pipeline",
+ "query": query,
+ "ideas": [item["idea"] for item in all_ideas],
+ "ideas_with_source": all_ideas,
+ "idea_count": len(all_ideas),
+ "metadata": {
+ "model": model,
+ "temperature": temperature,
+ "prompt_language": lang,
+ "categories": categories,
+ "attributes_by_category": attributes_by_category,
+ "attribute_count": sum(len(v) for v in attributes_by_category.values()),
+ "expert_count": expert_count,
+ "expert_source": actual_source,
+ "keywords_per_expert": keywords_per_expert,
+ "total_keywords": len(all_keywords),
+ "experts": [{"name": e["name"], "domain": e["domain"]} for e in experts],
+ "mechanism": "full_pipeline_attributes_plus_experts"
+ }
+ }
+
+
+# For testing
+if __name__ == "__main__":
+ import asyncio
+
+ async def test():
+ result = await generate_ideas("Chair")
+ print(f"Generated {result['idea_count']} ideas using full pipeline:")
+ print(f" Attributes: {result['metadata']['attribute_count']}")
+ print(f" Experts: {result['metadata']['expert_count']}")
+ print(f" Keywords: {result['metadata']['total_keywords']}")
+ print("\nExperts used:")
+ for exp in result['metadata']['experts']:
+ print(f" - {exp['name']} ({exp['domain']})")
+ print("\nSample ideas:")
+ for i, item in enumerate(result['ideas_with_source'][:5], 1):
+ print(f" {i}. [{item['expert_name']}] {item['keyword']}: {item['idea']}")
+
+ asyncio.run(test())
diff --git a/experiments/conditions/c5_random_perspective.py b/experiments/conditions/c5_random_perspective.py
new file mode 100644
index 0000000..3337cc7
--- /dev/null
+++ b/experiments/conditions/c5_random_perspective.py
@@ -0,0 +1,178 @@
+"""
+Condition 5: Random-Perspective Control
+
+Uses random words as "perspectives" instead of domain experts.
+Tests whether the benefit from expert perspectives comes from
+domain knowledge or simply from any perspective shift.
+"""
+
+import sys
+import json
+import random
+import zlib
+from pathlib import Path
+
+# Add backend to path for imports
+sys.path.insert(0, str(Path(__file__).parent.parent.parent / "backend"))
+
+from typing import List, Dict, Any, Optional
+from app.services.llm_service import ollama_provider, extract_json_from_response
+from experiments.config import (
+ MODEL, TEMPERATURE, EXPERT_COUNT, IDEAS_PER_EXPERT,
+ PROMPT_LANGUAGE, RANDOM_SEED, DATA_DIR
+)
+
+
+def load_random_words() -> List[str]:
+ """Load the random word pool from data file."""
+ words_file = DATA_DIR / "random_words.json"
+ with open(words_file, "r", encoding="utf-8") as f:
+ data = json.load(f)
+ return data.get("words", [])
+
+
+def get_random_perspective_prompt(
+ query: str,
+ perspective_word: str,
+ idea_count: int,
+ lang: str = "en"
+) -> str:
+ """Generate prompt for random-perspective idea generation."""
+ if lang == "en":
+ return f"""/no_think
+Generate {idea_count} creative and innovative ideas for "{query}" inspired by the concept of "{perspective_word}".
+
+Requirements:
+1. Each idea should draw inspiration from "{perspective_word}" - its qualities, characteristics, or associations
+2. Think about how concepts related to "{perspective_word}" could improve or reimagine "{query}"
+3. Ideas should be specific and actionable (15-30 words each)
+4. Be creative in connecting "{perspective_word}" to "{query}"
+
+Return JSON only:
+{{"ideas": ["idea 1", "idea 2", "idea 3", ...]}}
+
+Generate exactly {idea_count} ideas inspired by "{perspective_word}"."""
+ else:
+ return f"""/no_think
+為「{query}」生成 {idea_count} 個創意點子,靈感來自「{perspective_word}」這個概念。
+
+要求:
+1. 每個點子要從「{perspective_word}」獲得靈感——它的特質、特徵或聯想
+2. 思考與「{perspective_word}」相關的概念如何改進或重新想像「{query}」
+3. 點子要具體可行(每個 15-30 字)
+4. 創意地連接「{perspective_word}」和「{query}」
+
+只回傳 JSON:
+{{"ideas": ["點子1", "點子2", "點子3", ...]}}
+
+生成正好 {idea_count} 個受「{perspective_word}」啟發的點子。"""
+
+
+async def generate_ideas(
+    query: str,
+    model: Optional[str] = None,
+    temperature: Optional[float] = None,
+    word_count: Optional[int] = None,
+    ideas_per_word: Optional[int] = None,
+    lang: Optional[str] = None,
+    seed: Optional[int] = None
+) -> Dict[str, Any]:
+ """
+ Generate ideas using random word perspectives (C5 control).
+
+ Args:
+ query: The object/concept to generate ideas for
+ model: LLM model to use
+ temperature: Generation temperature
+ word_count: Number of random words to use (matches expert count)
+ ideas_per_word: Ideas to generate per word
+ lang: Language for prompts
+ seed: Random seed for reproducibility
+
+ Returns:
+ Dict with ideas and metadata
+ """
+ model = model or MODEL
+    # Explicit None check so a valid temperature of 0.0 is not overridden
+    temperature = TEMPERATURE if temperature is None else temperature
+ word_count = word_count or EXPERT_COUNT
+ ideas_per_word = ideas_per_word or IDEAS_PER_EXPERT
+ lang = lang or PROMPT_LANGUAGE
+    seed = RANDOM_SEED if seed is None else seed  # keep seed=0 valid
+
+ # Load word pool and sample random words
+ word_pool = load_random_words()
+
+    # Use a seeded RNG for reproducibility. Derive a stable per-query seed so
+    # different queries get different words but the same query gets the same
+    # words across runs. The built-in hash() is salted per process
+    # (PYTHONHASHSEED), so use a stable CRC32 digest instead.
+    query_seed = seed + zlib.crc32(query.encode("utf-8")) % 10000
+    rng = random.Random(query_seed)
+ selected_words = rng.sample(word_pool, min(word_count, len(word_pool)))
+
+ all_ideas = []
+ word_details = []
+
+ for word in selected_words:
+ prompt = get_random_perspective_prompt(
+ query=query,
+ perspective_word=word,
+ idea_count=ideas_per_word,
+ lang=lang
+ )
+
+ response = await ollama_provider.generate(
+ prompt=prompt,
+ model=model,
+ temperature=temperature
+ )
+
+ result = extract_json_from_response(response)
+ ideas = result.get("ideas", [])
+
+ # Tag ideas with perspective word source
+ for idea in ideas:
+ all_ideas.append({
+ "idea": idea,
+ "perspective_word": word
+ })
+
+ word_details.append({
+ "word": word,
+ "ideas_generated": len(ideas)
+ })
+
+ return {
+ "condition": "c5_random_perspective",
+ "query": query,
+ "ideas": [item["idea"] for item in all_ideas],
+ "ideas_with_source": all_ideas,
+ "idea_count": len(all_ideas),
+ "metadata": {
+ "model": model,
+ "temperature": temperature,
+ "prompt_language": lang,
+ "word_count": word_count,
+ "ideas_per_word": ideas_per_word,
+ "random_seed": seed,
+ "query_seed": query_seed,
+ "selected_words": selected_words,
+ "word_details": word_details,
+ "word_pool_size": len(word_pool),
+ "mechanism": "random_perspective_control"
+ }
+ }
+
+
+# For testing
+if __name__ == "__main__":
+ import asyncio
+
+ async def test():
+ result = await generate_ideas("Chair")
+ print(f"Generated {result['idea_count']} ideas from {len(result['metadata']['selected_words'])} random words:")
+ print(f" Words used: {', '.join(result['metadata']['selected_words'])}")
+ print(f" Seed: {result['metadata']['random_seed']}, Query seed: {result['metadata']['query_seed']}")
+ print("\nSample ideas:")
+ for i, item in enumerate(result['ideas_with_source'][:5], 1):
+ print(f" {i}. [{item['perspective_word']}] {item['idea']}")
+
+ asyncio.run(test())
diff --git a/experiments/config.py b/experiments/config.py
new file mode 100644
index 0000000..7586feb
--- /dev/null
+++ b/experiments/config.py
@@ -0,0 +1,72 @@
+"""
+Experiment configuration for 5-condition idea generation study.
+"""
+
+from typing import Literal
+from pathlib import Path
+
+# Paths
+EXPERIMENTS_DIR = Path(__file__).parent
+DATA_DIR = EXPERIMENTS_DIR / "data"
+RESULTS_DIR = EXPERIMENTS_DIR / "results"
+DOCS_DIR = EXPERIMENTS_DIR / "docs"
+
+# LLM Settings
+MODEL = "qwen3:8b"
+TEMPERATURE = 0.9
+
+# Expert Settings
+EXPERT_COUNT = 4
+EXPERT_SOURCE: Literal["curated", "llm", "dbpedia", "wikidata"] = "curated"
+KEYWORDS_PER_EXPERT = 1
+
+# Language Settings
+PROMPT_LANGUAGE: Literal["en", "zh"] = "en"
+
+# Attribute Settings
+FIXED_CATEGORIES = ["Functions", "Usages", "User Groups", "Characteristics"]
+
+# Deduplication Settings
+DEDUP_THRESHOLD = 0.85
+DEDUP_METHOD: Literal["embedding", "llm"] = "embedding"
+
+# Reproducibility
+RANDOM_SEED = 42
+
+# Idea Generation Settings
+IDEAS_PER_EXPERT = 5 # For C2 and C5
+IDEAS_DIRECT = 20 # For C1
+
+# Condition Names
+CONDITIONS = [
+ "c1_direct",
+ "c2_expert_only",
+ "c3_attribute_only",
+ "c4_full_pipeline",
+ "c5_random_perspective",
+]
+
+# Condition Display Names
+CONDITION_NAMES = {
+ "c1_direct": "C1: Direct Generation",
+ "c2_expert_only": "C2: Expert-Only",
+ "c3_attribute_only": "C3: Attribute-Only",
+ "c4_full_pipeline": "C4: Full Pipeline",
+ "c5_random_perspective": "C5: Random-Perspective",
+}
+
+# Summary Config Dict (for logging/reporting)
+EXPERIMENT_CONFIG = {
+ "model": MODEL,
+ "temperature": TEMPERATURE,
+ "expert_count": EXPERT_COUNT,
+ "expert_source": EXPERT_SOURCE,
+ "keywords_per_expert": KEYWORDS_PER_EXPERT,
+ "prompt_language": PROMPT_LANGUAGE,
+ "random_seed": RANDOM_SEED,
+ "categories": FIXED_CATEGORIES,
+ "dedup_threshold": DEDUP_THRESHOLD,
+ "dedup_method": DEDUP_METHOD,
+ "ideas_per_expert": IDEAS_PER_EXPERT,
+ "ideas_direct": IDEAS_DIRECT,
+}
diff --git a/experiments/data/queries.json b/experiments/data/queries.json
new file mode 100644
index 0000000..1c3d22f
--- /dev/null
+++ b/experiments/data/queries.json
@@ -0,0 +1,66 @@
+{
+ "description": "10 pilot queries for the 5-condition experiment, balanced across categories",
+ "version": "1.0",
+ "queries": [
+ {
+ "id": "A1",
+ "query": "Chair",
+ "category": "everyday",
+ "description": "Common household furniture"
+ },
+ {
+ "id": "A5",
+ "query": "Bicycle",
+ "category": "everyday",
+ "description": "Personal transportation device"
+ },
+ {
+ "id": "A7",
+ "query": "Smartphone",
+ "category": "everyday",
+ "description": "Mobile communication device"
+ },
+ {
+ "id": "B1",
+ "query": "Solar panel",
+ "category": "technology",
+ "description": "Renewable energy technology"
+ },
+ {
+ "id": "B3",
+ "query": "3D printer",
+ "category": "technology",
+ "description": "Additive manufacturing device"
+ },
+ {
+ "id": "B4",
+ "query": "Drone",
+ "category": "technology",
+ "description": "Unmanned aerial vehicle"
+ },
+ {
+ "id": "C1",
+ "query": "Food delivery service",
+ "category": "services",
+ "description": "Restaurant meal delivery platform"
+ },
+ {
+ "id": "C2",
+ "query": "Online education platform",
+ "category": "services",
+ "description": "Digital learning service"
+ },
+ {
+ "id": "C4",
+ "query": "Public transportation",
+ "category": "services",
+ "description": "Mass transit system"
+ },
+ {
+ "id": "C9",
+ "query": "Elderly care service",
+ "category": "services",
+ "description": "Senior citizen support service"
+ }
+ ]
+}
diff --git a/experiments/data/random_words.json b/experiments/data/random_words.json
new file mode 100644
index 0000000..3dfd2a6
--- /dev/null
+++ b/experiments/data/random_words.json
@@ -0,0 +1,28 @@
+{
+ "description": "Word pool for C5 random-perspective condition",
+ "version": "1.0",
+ "selection_criteria": [
+ "Concrete and evocative (easy to generate associations)",
+ "Diverse domains (no overlap with typical expert knowledge)",
+ "No obvious connection to test queries",
+ "Equal representation across conceptual categories"
+ ],
+ "categories": {
+ "nature": ["ocean", "mountain", "forest", "desert", "cave"],
+ "optics": ["microscope", "telescope", "kaleidoscope", "prism", "lens"],
+ "animals": ["butterfly", "elephant", "octopus", "eagle", "ant"],
+ "weather": ["sunrise", "thunderstorm", "rainbow", "fog", "aurora"],
+ "art": ["clockwork", "origami", "mosaic", "symphony", "ballet"],
+ "temporal": ["ancient", "futuristic", "organic", "crystalline", "liquid"],
+ "sensory": ["whisper", "explosion", "rhythm", "silence", "echo"]
+ },
+ "words": [
+ "ocean", "mountain", "forest", "desert", "cave",
+ "microscope", "telescope", "kaleidoscope", "prism", "lens",
+ "butterfly", "elephant", "octopus", "eagle", "ant",
+ "sunrise", "thunderstorm", "rainbow", "fog", "aurora",
+ "clockwork", "origami", "mosaic", "symphony", "ballet",
+ "ancient", "futuristic", "organic", "crystalline", "liquid",
+ "whisper", "explosion", "rhythm", "silence", "echo"
+ ]
+}
diff --git a/experiments/deduplication.py b/experiments/deduplication.py
new file mode 100644
index 0000000..5d9fee6
--- /dev/null
+++ b/experiments/deduplication.py
@@ -0,0 +1,328 @@
+"""
+Post-generation deduplication for experiment results.
+
+Applies embedding-based deduplication uniformly to all conditions
+to normalize idea counts and measure "dedup survival rate".
+
+Usage:
+ python -m experiments.deduplication --input results/experiment_xxx.json
+"""
+
+import sys
+import json
+import argparse
+import asyncio
+import logging
+from pathlib import Path
+from typing import List, Dict, Any, Optional
+from dataclasses import dataclass
+
+# Add backend to path for imports
+sys.path.insert(0, str(Path(__file__).parent.parent / "backend"))
+
+from app.services.embedding_service import embedding_service
+from app.models.schemas import ExpertTransformationDescription
+from experiments.config import DEDUP_THRESHOLD, RESULTS_DIR
+
+# Configure logging
+logging.basicConfig(
+ level=logging.INFO,
+ format='%(asctime)s - %(levelname)s - %(message)s'
+)
+logger = logging.getLogger(__name__)
+
+
+@dataclass
+class DedupStats:
+ """Deduplication statistics for a single condition."""
+ condition: str
+ pre_dedup_count: int
+ post_dedup_count: int
+ duplicates_removed: int
+ survival_rate: float
+ groups: List[Dict[str, Any]]
+
+
+def ideas_to_descriptions(
+ ideas: List[str],
+ ideas_with_source: Optional[List[Dict[str, Any]]] = None
+) -> List[ExpertTransformationDescription]:
+ """
+ Convert experiment ideas to ExpertTransformationDescription format
+ for compatibility with the embedding service.
+ """
+ descriptions = []
+
+ if ideas_with_source:
+ # Use source information if available
+ for i, item in enumerate(ideas_with_source):
+ desc = ExpertTransformationDescription(
+ keyword=item.get("keyword", item.get("attribute", item.get("perspective_word", ""))),
+ expert_id=f"source-{i}",
+ expert_name=item.get("expert_name", item.get("perspective_word", "direct")),
+ description=item.get("idea", "")
+ )
+ descriptions.append(desc)
+ else:
+ # Simple conversion for ideas without source
+ for i, idea in enumerate(ideas):
+ desc = ExpertTransformationDescription(
+ keyword="",
+ expert_id=f"idea-{i}",
+ expert_name="direct",
+ description=idea
+ )
+ descriptions.append(desc)
+
+ return descriptions
+
+
+async def deduplicate_condition(
+ ideas: List[str],
+ ideas_with_source: Optional[List[Dict[str, Any]]] = None,
+ threshold: float = DEDUP_THRESHOLD
+) -> Dict[str, Any]:
+ """
+ Apply deduplication to ideas from a single condition.
+
+ Returns:
+ Dict with deduplicated ideas and statistics
+ """
+ if not ideas:
+ return {
+ "unique_ideas": [],
+ "unique_ideas_with_source": [],
+ "groups": [],
+ "stats": {
+ "pre_dedup_count": 0,
+ "post_dedup_count": 0,
+ "duplicates_removed": 0,
+ "survival_rate": 1.0
+ }
+ }
+
+ # Convert to description format
+ descriptions = ideas_to_descriptions(ideas, ideas_with_source)
+
+ # Run deduplication
+ result = await embedding_service.deduplicate(
+ descriptions=descriptions,
+ threshold=threshold
+ )
+
+ # Extract unique ideas (representatives from each group)
+ unique_ideas = []
+ unique_ideas_with_source = []
+ groups_info = []
+
+ for group in result.groups:
+ rep = group.representative
+ unique_ideas.append(rep.description)
+
+ # Reconstruct source info
+ source_info = {
+ "idea": rep.description,
+ "keyword": rep.keyword,
+ "expert_name": rep.expert_name
+ }
+ unique_ideas_with_source.append(source_info)
+
+ # Group info for analysis
+ group_info = {
+ "representative": rep.description,
+ "duplicates": [d.description for d in group.duplicates],
+ "duplicate_count": len(group.duplicates),
+ "similarity_scores": group.similarity_scores
+ }
+ groups_info.append(group_info)
+
+ pre_count = len(ideas)
+ post_count = len(unique_ideas)
+ survival_rate = post_count / pre_count if pre_count > 0 else 1.0
+
+ return {
+ "unique_ideas": unique_ideas,
+ "unique_ideas_with_source": unique_ideas_with_source,
+ "groups": groups_info,
+ "stats": {
+ "pre_dedup_count": pre_count,
+ "post_dedup_count": post_count,
+ "duplicates_removed": pre_count - post_count,
+ "survival_rate": survival_rate
+ }
+ }
+
+
+async def process_experiment_results(
+ input_file: Path,
+ output_file: Optional[Path] = None,
+ threshold: float = DEDUP_THRESHOLD
+) -> Dict[str, Any]:
+ """
+ Process an experiment results file and apply deduplication.
+
+ Args:
+ input_file: Path to experiment results JSON
+ output_file: Path for output (default: input_file with _deduped suffix)
+ threshold: Similarity threshold for deduplication
+
+ Returns:
+ Processed results with deduplication applied
+ """
+ # Load experiment results
+ with open(input_file, "r", encoding="utf-8") as f:
+ experiment = json.load(f)
+
+ logger.info(f"Processing experiment: {experiment.get('experiment_id', 'unknown')}")
+ logger.info(f"Deduplication threshold: {threshold}")
+
+ # Process each query's conditions
+ dedup_summary = {
+ "threshold": threshold,
+ "conditions": {}
+ }
+
+ for query_result in experiment["results"]:
+ query = query_result["query"]
+ query_id = query_result["query_id"]
+ logger.info(f"\nProcessing query: {query} ({query_id})")
+
+ for condition, cond_result in query_result["conditions"].items():
+ if not cond_result.get("success", False):
+ logger.warning(f" Skipping failed condition: {condition}")
+ continue
+
+ logger.info(f" Deduplicating {condition}...")
+
+ ideas = cond_result.get("ideas", [])
+ ideas_with_source = cond_result.get("ideas_with_source", [])
+
+ dedup_result = await deduplicate_condition(
+ ideas=ideas,
+ ideas_with_source=ideas_with_source,
+ threshold=threshold
+ )
+
+ # Add dedup results to condition
+ cond_result["dedup"] = dedup_result
+
+ # Update summary stats
+ if condition not in dedup_summary["conditions"]:
+ dedup_summary["conditions"][condition] = {
+ "total_pre_dedup": 0,
+ "total_post_dedup": 0,
+ "total_removed": 0,
+ "query_stats": []
+ }
+
+ stats = dedup_result["stats"]
+ cond_summary = dedup_summary["conditions"][condition]
+ cond_summary["total_pre_dedup"] += stats["pre_dedup_count"]
+ cond_summary["total_post_dedup"] += stats["post_dedup_count"]
+ cond_summary["total_removed"] += stats["duplicates_removed"]
+ cond_summary["query_stats"].append({
+ "query_id": query_id,
+ "query": query,
+ **stats
+ })
+
+ logger.info(f" {stats['pre_dedup_count']} -> {stats['post_dedup_count']} "
+ f"(survival: {stats['survival_rate']:.1%})")
+
+ # Calculate overall survival rates
+ for condition, cond_stats in dedup_summary["conditions"].items():
+ if cond_stats["total_pre_dedup"] > 0:
+ cond_stats["overall_survival_rate"] = (
+ cond_stats["total_post_dedup"] / cond_stats["total_pre_dedup"]
+ )
+ else:
+ cond_stats["overall_survival_rate"] = 1.0
+
+ # Add dedup summary to experiment
+ experiment["dedup_summary"] = dedup_summary
+
+ # Save results
+ if output_file is None:
+ stem = input_file.stem.replace("_complete", "").replace("_intermediate", "")
+ output_file = input_file.parent / f"{stem}_deduped.json"
+
+ with open(output_file, "w", encoding="utf-8") as f:
+ json.dump(experiment, f, indent=2, ensure_ascii=False)
+
+ logger.info(f"\nResults saved to: {output_file}")
+
+ return experiment
+
+
+def print_dedup_summary(experiment: Dict[str, Any]):
+ """Print formatted deduplication summary."""
+ dedup = experiment.get("dedup_summary", {})
+
+ print("\n" + "=" * 70)
+ print("DEDUPLICATION SUMMARY")
+ print("=" * 70)
+ print(f"Threshold: {dedup.get('threshold', 'N/A')}")
+
+ print("\nResults by condition:")
+ print("-" * 70)
+ print(f"{'Condition':<30} {'Pre-Dedup':<12} {'Post-Dedup':<12} {'Survival':<10}")
+ print("-" * 70)
+
+ for condition, stats in dedup.get("conditions", {}).items():
+ pre = stats.get("total_pre_dedup", 0)
+ post = stats.get("total_post_dedup", 0)
+ survival = stats.get("overall_survival_rate", 1.0)
+ print(f"{condition:<30} {pre:<12} {post:<12} {survival:<10.1%}")
+
+ print("-" * 70)
+
+ print("\nInterpretation:")
+ print("- Higher survival rate = more diverse/unique ideas")
+ print("- Lower survival rate = more redundant ideas removed")
+
+
+async def main():
+ parser = argparse.ArgumentParser(
+ description="Apply deduplication to experiment results"
+ )
+ parser.add_argument(
+ "--input",
+ type=str,
+ required=True,
+ help="Input experiment results JSON file"
+ )
+ parser.add_argument(
+ "--output",
+ type=str,
+ help="Output file path (default: input_deduped.json)"
+ )
+ parser.add_argument(
+ "--threshold",
+ type=float,
+ default=DEDUP_THRESHOLD,
+ help=f"Similarity threshold (default: {DEDUP_THRESHOLD})"
+ )
+
+ args = parser.parse_args()
+
+ input_path = Path(args.input)
+ if not input_path.exists():
+ # Try relative to results dir
+ input_path = RESULTS_DIR / args.input
+ if not input_path.exists():
+ print(f"Error: Input file not found: {args.input}")
+ sys.exit(1)
+
+ output_path = Path(args.output) if args.output else None
+
+ experiment = await process_experiment_results(
+ input_file=input_path,
+ output_file=output_path,
+ threshold=args.threshold
+ )
+
+ print_dedup_summary(experiment)
+
+
+if __name__ == "__main__":
+ asyncio.run(main())
diff --git a/experiments/docs/aut_flexibility_explanation_zh.md b/experiments/docs/aut_flexibility_explanation_zh.md
new file mode 100644
index 0000000..bda6115
--- /dev/null
+++ b/experiments/docs/aut_flexibility_explanation_zh.md
@@ -0,0 +1,301 @@
+# AUT 彈性評估方法說明
+
+## 什麼是 AUT(替代用途任務)?
+
+AUT(Alternative Uses Task,替代用途任務)是一個經典的**發散性思維測試**,由 Guilford 在 1967 年提出。
+
+**測試方式:**
+```
+問題:「請列出磚塊的所有可能用途」
+
+典型回答:
+1. 蓋房子
+2. 當門擋
+3. 壓紙張
+4. 當武器
+5. 墊高東西
+...
+```
+
+---
+
+## Torrance 創造力四維度
+
+| 維度 | 中文 | 定義 | 測量方式 |
+|------|------|------|----------|
+| **Fluency** | 流暢性 | 產生多少想法 | 計算數量 |
+| **Flexibility** | 彈性/靈活性 | 想法涵蓋多少不同類別 | 計算類別數 |
+| **Originality** | 原創性 | 想法的稀有程度 | 統計罕見度 |
+| **Elaboration** | 精緻性 | 想法的詳細程度 | 評估細節 |
+
+---
+
+## 我們實作的三種彈性評估方法
+
+### 方法一:LLM 雙階段分類法(Hadas & Hershkovitz 2024)
+
+**原理:** 讓大型語言模型識別想法的語義類別,然後計算類別數量
+
+```
+第一階段:讓 LLM 識別所有想法的語義類別
+輸入:「椅子」的 195 個創意想法
+輸出:["交通運輸", "藝術裝飾", "醫療健康", "教育", "儲存", ...]
+
+第二階段:將每個想法分配到類別
+想法 1:「太陽能充電椅」→ 科技類
+想法 2:「椅子改裝成擔架」→ 醫療類
+想法 3:「椅腳當鼓棒」→ 藝術類
+
+彈性分數 = 使用的不同類別數量
+```
+
+**優點:** 類別名稱有語義意義,可解釋性強
+**缺點:** 依賴 LLM 的一致性,可能有解析錯誤
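+
+以下是一個極簡示意(假設性範例,非系統實作):假設第二階段已完成,
+由「想法 → 類別」的對應計算彈性分數;此處以寫死的分類結果代替實際的 LLM 呼叫。
+
+```python
+from typing import Dict
+
+def flexibility_score(idea_categories: Dict[str, str]) -> int:
+    """彈性分數 = 想法使用的不同語義類別數量。"""
+    return len(set(idea_categories.values()))
+
+# 第二階段的「想法 → 類別」對應(示意資料;實際應由 LLM 產生)
+assignments = {
+    "太陽能充電椅": "科技",
+    "椅子改裝成擔架": "醫療",
+    "椅腳當鼓棒": "藝術",
+    "智慧溫控座椅": "科技",
+}
+print(flexibility_score(assignments))  # 3
+```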
+
+---
+
+### 方法二:嵌入向量階層式聚類法(arXiv:2405.00899)
+
+**原理:** 將想法轉換成向量,用數學方法自動分群
+
+```
+步驟 1:將每個想法轉換成向量(embedding)
+ 「太陽能充電椅」→ [0.12, -0.34, 0.56, ...](1024 維)
+
+步驟 2:使用 Ward 連結法進行階層式聚類
+ 計算所有想法之間的餘弦距離
+ 由下而上合併最相似的群組
+
+步驟 3:在相似度 ≥ 0.7 的閾值切割樹狀圖
+ 確保同一群內的想法夠相似
+
+彈性分數 = 產生的群集數量
+```
+
+**優點:** 客觀、可重現、不依賴 LLM 判斷
+**缺點:** 群集沒有語義標籤,需要人工解讀
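+
+以下為上述步驟的極簡示意(假設性範例):以隨機向量代替真實 embedding;
+注意 Ward 連結在數學上假設歐氏距離,此處依文中描述直接套用於餘弦距離。
+
+```python
+import numpy as np
+from scipy.cluster.hierarchy import linkage, fcluster
+from scipy.spatial.distance import pdist
+
+embeddings = np.random.randn(20, 1024)      # 示意資料;實際為 qwen3-embedding 輸出
+dist = pdist(embeddings, metric="cosine")   # 所有想法兩兩的餘弦距離
+Z = linkage(dist, method="ward")            # 由下而上階層合併
+clusters = fcluster(Z, t=0.3, criterion="distance")  # 距離 0.3 ⇔ 相似度 0.7
+print("彈性分數 =", len(np.unique(clusters)))
+```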
+
+---
+
+### 方法三:組合跳躍信號分析(Combined Jump Signal, arXiv:2405.00899)
+
+**原理:** 使用更嚴格的「真正跳躍」定義,減少假陽性
+
+```
+組合跳躍 = 類別跳躍 ∧ 語義跳躍
+
+類別跳躍(jumpcat):連續想法屬於不同的 embedding 群集
+語義跳躍(jumpSS):連續想法的語義相似度 < 0.7
+
+真正跳躍 = 兩個條件都必須成立
+```
+
+**為什麼需要組合跳躍?**
+```
+問題:單獨使用類別跳躍可能產生假陽性
+例如:「人體工學椅」和「可調節椅」
+ - 可能被分到不同群集(類別跳躍 = True)
+ - 但語義上很相似(語義跳躍 = False)
+ - 不應該算作真正的「創意跳躍」
+
+解決:組合跳躍要求兩者同時成立,更準確
+```
+
+| 跳躍比率 | 探索模式 | 含義 |
+|----------|----------|------|
+| 高(>45%) | 靈活探索(Flexible) | 廣泛切換類別,思維跳躍 |
+| 中(30-45%) | 混合模式(Mixed) | 適度切換 |
+| 低(<30%) | 持續探索(Persistent) | 深入單一領域,專注發展 |
+
+**應用:** 區分 LLM 與人類的創意模式差異
+
+---
+
+## 研究發現
+
+### 發現一:新穎性(Novelty)與彈性(Flexibility)是獨立維度
+
+| 條件 | 新穎性分數 | 彈性(群集數) | 平均相似度 | 模式 |
+|------|:----------:|:--------------:|:----------:|------|
+| C4 完整管線 | **0.395**(最高) | 10 | 0.583 | 高新穎、中等彈性 |
+| C5 隨機視角 | 0.365 | **15**(最高) | 0.521 | 高新穎、高彈性 |
+| C2 專家視角 | 0.315 | 13 | 0.517 | 中等新穎、高彈性 |
+| C3 屬性分解 | 0.337 | 12 | - | 中等新穎、中等彈性 |
+| C1 直接生成 | 0.273(最低) | **1**(最低) | 0.647 | 低新穎、低彈性 |
+
+**視覺化解讀:**
+
+```
+C1 直接生成的想法:
+┌─────────────────────────────────────┐
+│ ○○○○○○○○○○○○○○○○○○○○○○○○○○○○○○○○ │ ← 所有想法集中在一個「普通領域」
+│ (彼此相似,且都很典型) │ (低新穎性 + 低彈性)
+└─────────────────────────────────────┘
+
+C5 隨機視角的想法:
+┌───┐ ┌───┐ ┌───┐ ┌───┐ ┌───┐
+│ ★ │ │ ★ │ │ ★ │ │ ★ │ │ ★ │ ← 分散在多個「新穎領域」
+└───┘ └───┘ └───┘ └───┘ └───┘ (高新穎性 + 高彈性)
+ ↑ ↑ ↑ ↑ ↑
+ 交通 醫療 藝術 教育 科技
+
+C4 完整管線的想法:
+ ┌─────────────────┐
+ ┌──┤ ★★★★★★★★★★★★ ├──┐ ← 集中在一個「新穎領域」但有多個子類別
+ │ └─────────────────┘ │ (最高新穎性 + 中等彈性)
+ │ ↓ │
+ └── 10 個語義群集 ───────┘
+```
+
+### 發現二:組合跳躍信號分析結果
+
+| 條件 | 類別跳躍 | 語義跳躍 | **組合跳躍** | 彈性檔案 |
+|------|:--------:|:--------:|:------------:|:--------:|
+| C2 專家視角 | 54 | 125 | **48** | 持續探索 |
+| C3 屬性分解 | 34 | 107 | **33** | 持續探索 |
+| C5 隨機視角 | 22 | 116 | **20** | 持續探索 |
+| C4 完整管線 | 13 | 348 | **13** | 持續探索 |
+| C1 直接生成 | 0 | 104 | **0** | 持續探索 |
+
+**組合跳躍比率:**
+
+| 條件 | 組合跳躍比率 | 彈性檔案 | 解讀 |
+|------|:------------:|:--------:|------|
+| C3 屬性分解 | **26.6%** | Persistent | 適度類別切換 |
+| C2 專家視角 | **24.4%** | Persistent | 適度類別切換 |
+| C5 隨機視角 | 10.1% | Persistent | 較低類別切換 |
+| C4 完整管線 | **3.2%** | Persistent | 非常專注的探索 |
+| C1 直接生成 | 0.0% | Persistent | 單一群集(無跳躍) |
+
+**關鍵洞察:** 組合跳躍 ≤ 類別跳躍(符合預期)。所有條件都呈現「持續探索」模式。
+
+---
+
+### 發現三:🔑 原創性-彈性相關性(關鍵發現)
+
+**論文發現(arXiv:2405.00899):**
+- **人類:** 原創性與彈性**無相關**(r ≈ 0)
+- **典型 LLM:** **正相關** — 靈活的 LLM 原創性更高
+
+**我們的結果:**
+
+| 指標 | 數值 | 解讀 |
+|------|:----:|------|
+| **Pearson r** | **0.071** | 接近零的相關性 |
+| 模式 | **類似人類** | 打破典型 LLM 模式 |
+
+**各條件數據:**
+
+| 條件 | 新穎性分數 | 彈性(組合跳躍數) |
+|------|:----------:|:------------------:|
+| C4 完整管線 | **0.395**(最高) | **13**(除 C1 外最低) |
+| C5 隨機視角 | 0.365 | 20 |
+| C3 屬性分解 | 0.337 | 33 |
+| C2 專家視角 | 0.315 | 48(最高) |
+| C1 直接生成 | 0.273(最低) | 0 |
+
+**重大發現:** 屬性+專家管線(C4)實現**最高新穎性**,且彈性在各結構化條件中最低
+(C1 的 0 屬於所有想法落在單一群集、無跳躍可言的退化情況),
+證明結構化的無上下文生成能產生**聚焦的新穎性**而非分散的探索。
+
+**這意味著什麼?**
+```
+典型 LLM 模式:
+ 彈性高 → 新穎性高(正相關)
+ 想法越分散,越可能遇到新穎概念
+
+我們的管線(C4):
+ 彈性低 + 新穎性高(打破模式)
+ 專注探索一個新穎領域,而非到處跳躍
+
+這是「類似人類」的創意模式!
+ 人類專家通常深入探索一個領域,而非廣泛但淺薄地涉獵
+```
+
+---
+
+## 這對創意研究的意義
+
+1. **創造力是多維度的**
+ - 新穎性(Novelty)和彈性(Flexibility)是**獨立維度**
+ - 高新穎不代表高彈性,反之亦然
+ - 需要同時考慮流暢性、彈性、原創性、精緻性
+
+2. **管線設計的取捨**
+ | 策略 | 新穎性 | 彈性 | 特點 |
+ |------|:------:|:----:|------|
+ | 直接生成(C1) | 低 | 低 | 快速但普通 |
+ | 專家視角(C2) | 中 | 高 | 多元觀點 |
+ | 隨機視角(C5) | 高 | **最高** | 強迫跳躍 |
+ | 完整管線(C4) | **最高** | 中 | 結構化新穎 |
+
+3. **為什麼專家/隨機視角產生更多類別?**
+ ```
+ C1 直接生成:
+ LLM 沒有外部刺激 → 停留在「家具改良」單一領域
+ 平均相似度 0.647(最高)→ 想法彼此很像
+
+ C2 專家視角:
+ 4 個不同領域專家 → 引入不同思維框架
+ 平均相似度 0.517(較低)→ 想法更分散
+
+ C5 隨機視角:
+ 隨機詞彙強迫跳躍 → 意外的連結
+ 平均相似度 0.521 → 最多語義類別(15 個)
+ ```
+
+4. **實務建議**
+ - 若需要**高新穎性**:使用完整管線(C4)
+ - 若需要**高彈性/多元性**:使用隨機視角(C5)或專家視角(C2)
+ - 若需要**兩者兼顧**:可能需要混合策略
+
+---
+
+## 方法論修正說明
+
+### 原始演算法的問題
+
+最初的聚類演算法有邏輯錯誤:
+
+```
+原本的邏輯(錯誤):
+ 目標:找到群內相似度 >= 0.7 的群集
+
+ 問題:當想法很分散時(低相似度),
+ 無法形成符合閾值的緊密群集
+ → 演算法放棄,回傳 1 個群集
+
+ 結果:C2/C5 的分散想法被錯誤標記為「1 個群集」
+```
+
+### 修正後的演算法
+
+```
+修正後的邏輯(正確):
+ 方法:使用 average linkage 階層式聚類
+ 閾值:在距離 0.5 處切割樹狀圖
+ (即相似度 < 0.5 時分開)
+
+ 結果:分散的想法正確地被分成多個群集
+```
+
+### 結果對比
+
+| 條件 | 修正前群集數 | 修正後群集數 | 平均相似度 |
+|------|:------------:|:------------:|:----------:|
+| C1 直接生成 | 29 | **1** | 0.647(高) |
+| C2 專家視角 | 1 | **13** | 0.517(低) |
+| C5 隨機視角 | 1 | **15** | 0.521(低) |
+
+**關鍵洞察:** 低相似度 = 高多元性 = 高彈性分數
+
+---
+
+## 參考文獻
+
+1. Hadas & Hershkovitz (2024). "Using Large Language Models to Evaluate Alternative Uses Task Flexibility Score." *Thinking Skills and Creativity*, Vol. 52.
+
+2. arXiv:2405.00899 - "Characterising Creative Process in Humans and LLMs" - Jump signal methodology
+
+3. Guilford, J.P. (1967). *The Nature of Human Intelligence*. McGraw-Hill.
+
+4. Torrance, E.P. (1974). *Torrance Tests of Creative Thinking*. Scholastic Testing Service.
diff --git a/experiments/docs/creative_process_metrics_zh.md b/experiments/docs/creative_process_metrics_zh.md
new file mode 100644
index 0000000..a43b9c9
--- /dev/null
+++ b/experiments/docs/creative_process_metrics_zh.md
@@ -0,0 +1,477 @@
+# 創意過程特徵化指標詳解
+
+## 基於 arXiv:2405.00899 論文的方法論
+
+**論文標題:** "Characterising the Creative Process in Humans and Large Language Models"
+**來源:** [arXiv:2405.00899](https://arxiv.org/html/2405.00899v2)
+
+本文檔詳細解釋我們從該論文引入的創意過程評估指標,以及這些指標在我們實驗中揭示的重要發現。
+
+---
+
+## 一、組合跳躍信號(Combined Jump Signal)
+
+### 1.1 什麼是「跳躍」?
+
+在創意發散思維中,「跳躍」指的是連續產生的想法之間的**語義類別切換**。
+
+```
+想法序列範例:
+ 1. 太陽能充電椅 → 科技類
+ 2. 智慧溫控座椅 → 科技類(無跳躍)
+ 3. 椅子改裝成擔架 → 醫療類(跳躍!)
+ 4. 輪椅輔助站立功能 → 醫療類(無跳躍)
+ 5. 椅腳當鼓棒 → 藝術類(跳躍!)
+```
+
+### 1.2 為什麼需要「組合」跳躍?
+
+**原始方法的問題:**
+
+單純使用類別跳躍(jumpcat)可能產生**假陽性**:
+
+```
+問題情境:
+ 想法 A:「可折疊露營椅」 → 群集 1
+ 想法 B:「便攜式野餐椅」 → 群集 2
+
+ 類別跳躍 = True(不同群集)
+ 但這兩個想法語義上非常相似!
+ 這不應該算作真正的「創意跳躍」
+```
+
+**論文的解決方案:組合跳躍信號**
+
+```
+組合跳躍 = 類別跳躍 ∧ 語義跳躍
+
+其中:
+ 類別跳躍(jumpcat):連續想法屬於不同的 embedding 群集
+ 語義跳躍(jumpSS):連續想法的餘弦相似度 < 0.7
+
+真正跳躍 = 兩個條件都必須成立
+```
+
+### 1.3 數學定義
+
+對於連續的想法 $i$ 和 $i-1$:
+
+$$
+\text{jump}_i = \text{jump}_{cat,i} \land \text{jump}_{SS,i}
+$$
+
+其中:
+- $\text{jump}_{cat,i} = \mathbb{1}[c_i \neq c_{i-1}]$(類別是否改變)
+- $\text{jump}_{SS,i} = \mathbb{1}[\text{sim}(e_i, e_{i-1}) < 0.7]$(相似度是否低於閾值)
+
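+以下為組合跳躍的極簡示意(假設性範例;`clusters` 與 `embeddings` 為虛構輸入,
+實際分別來自方法二的聚類結果與嵌入服務):
+
+```python
+import numpy as np
+
+def combined_jumps(clusters: np.ndarray, embeddings: np.ndarray,
+                   sim_threshold: float = 0.7) -> np.ndarray:
+    """回傳長度 n-1 的布林陣列:想法 i 與 i+1 之間是否為組合跳躍。"""
+    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
+    sims = np.sum(unit[:-1] * unit[1:], axis=1)  # 相鄰想法的餘弦相似度
+    jump_cat = clusters[:-1] != clusters[1:]     # 類別跳躍
+    jump_ss = sims < sim_threshold               # 語義跳躍
+    return jump_cat & jump_ss                    # 兩者皆成立才算真正跳躍
+
+jumps = combined_jumps(np.array([1, 1, 2, 2, 3]), np.random.randn(5, 8))
+print(int(jumps.sum()), "/", len(jumps))         # 組合跳躍數 / 可能的跳躍位置數
+```
+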
+### 1.4 我們的實驗結果
+
+| 條件 | 類別跳躍 | 語義跳躍 | **組合跳躍** | 組合比率 |
+|------|:--------:|:--------:|:------------:|:--------:|
+| C2 專家視角 | 54 | 125 | **48** | 24.4% |
+| C3 屬性分解 | 34 | 107 | **33** | 26.6% |
+| C5 隨機視角 | 22 | 116 | **20** | 10.1% |
+| C4 完整管線 | 13 | 348 | **13** | 3.2% |
+| C1 直接生成 | 0 | 104 | **0** | 0.0% |
+
+**關鍵觀察:**
+- 組合跳躍 ≤ 類別跳躍(驗證方法有效性)
+- C4 的語義跳躍很高(348)但類別跳躍很低(13)→ 想法在語義上分散但停留在相似類別
+- C1 沒有類別跳躍 → 所有想法在單一語義群集內
+
+---
+
+## 二、彈性檔案分類(Flexibility Profile Classification)
+
+### 2.1 三種創意探索模式
+
+根據論文研究,創意探索可分為三種模式:
+
+| 檔案 | 英文 | 跳躍比率 | 特徵 |
+|------|------|:--------:|------|
+| **持續探索** | Persistent | < 30% | 深入單一領域,專注發展想法 |
+| **混合模式** | Mixed | 30-45% | 適度切換,平衡深度與廣度 |
+| **靈活探索** | Flexible | > 45% | 頻繁跳躍,廣泛涉獵不同領域 |
+
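+附錄提到的 `classify_flexibility_profile()` 大致可寫成如下示意
+(閾值取自上表;實際實作見 `experiments/aut_flexibility_analysis.py`):
+
+```python
+def classify_flexibility_profile(jump_ratio: float) -> str:
+    """依組合跳躍比率分類探索模式(閾值 0.30 / 0.45 取自原論文)。"""
+    if jump_ratio > 0.45:
+        return "Flexible"
+    if jump_ratio >= 0.30:
+        return "Mixed"
+    return "Persistent"
+
+print(classify_flexibility_profile(0.032))  # C4 → Persistent
+```
+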
+### 2.2 視覺化理解
+
+```
+持續探索(Persistent):
+┌─────────────────────────────────────┐
+│ ●→●→●→●→●→●→●→●→●→● │ 深入探索一個領域
+│ 科技類 │ 偶爾切換(<30%)
+│ ↓ │
+│ ●→●→●→● │
+│ 醫療類 │
+└─────────────────────────────────────┘
+
+靈活探索(Flexible):
+┌─────────────────────────────────────┐
+│ ●→ ●→ ●→ ●→ ●→ ●→ ●→ ● │ 頻繁在不同領域間跳躍
+│ 科 醫 藝 教 科 社 環 科 │ 每個領域停留很短
+│ 技 療 術 育 技 會 保 技 │ (>45% 跳躍)
+└─────────────────────────────────────┘
+
+混合模式(Mixed):
+┌─────────────────────────────────────┐
+│ ●→●→●→●→ ●→●→●→ ●→●→●→● │ 適度平衡
+│ 科技類 醫療類 藝術類 │ (30-45% 跳躍)
+└─────────────────────────────────────┘
+```
+
+### 2.3 我們的實驗結果
+
+| 條件 | 組合跳躍比率 | 彈性檔案 | 解讀 |
+|------|:------------:|:--------:|------|
+| C3 屬性分解 | 26.6% | Persistent | 接近 Mixed 的邊界 |
+| C2 專家視角 | 24.4% | Persistent | 適度的類別切換 |
+| C5 隨機視角 | 10.1% | Persistent | 較少切換 |
+| **C4 完整管線** | **3.2%** | **Persistent** | 非常專注的探索 |
+| C1 直接生成 | 0.0% | Persistent | 單一群集 |
+
+**重要發現:** 所有條件都呈現「持續探索」模式,但程度不同。
+
+---
+
+## 三、原創性-彈性相關性分析(Originality-Flexibility Correlation)
+
+### 3.1 論文的核心發現
+
+arXiv:2405.00899 論文發現了一個關鍵差異:
+
+| 主體 | 原創性與彈性的關係 | 解讀 |
+|------|:------------------:|------|
+| **人類** | r ≈ 0(無相關) | 原創性和彈性是獨立的能力 |
+| **典型 LLM** | r > 0(正相關) | 越靈活的 LLM 越原創 |
+
+**為什麼會有這種差異?**
+
+```
+人類創意模式:
+ - 有些人善於深入探索(低彈性、高原創)
+ - 有些人善於廣泛聯想(高彈性、高原創)
+ - 兩種能力是獨立的維度
+
+典型 LLM 模式:
+ - LLM 透過「隨機性」產生多樣性
+ - 高 temperature → 更多跳躍 → 更多意外發現
+ - 彈性和原創性被「隨機性」綁定在一起
+```
+
+### 3.2 我們的實驗結果
+
+**Pearson 相關係數:r = 0.071**
+
+| 指標 | 數值 | 解讀 |
+|------|:----:|------|
+| **Pearson r** | **0.071** | 接近零 |
+| 統計意義 | 無顯著相關 | 兩個維度獨立 |
+| **模式判定** | **類似人類** | 打破典型 LLM 模式 |
+
+**各條件詳細數據:**
+
+| 條件 | 新穎性(距離質心) | 彈性(組合跳躍數) | 組合 |
+|------|:------------------:|:------------------:|------|
+| C4 完整管線 | **0.395**(最高) | **13**(除 C1 外最低) | 高新穎 + 低彈性 |
+| C5 隨機視角 | 0.365 | 20 | 高新穎 + 低彈性 |
+| C3 屬性分解 | 0.337 | 33 | 中新穎 + 中彈性 |
+| C2 專家視角 | 0.315 | **48**(最高) | 中新穎 + 高彈性 |
+| C1 直接生成 | 0.273(最低) | 0 | 低新穎 + 低彈性 |
+
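+以相同思路做一個極簡示意(文中的 r = 0.071 由原始分析程式以更細粒度的資料點計算;
+以下僅以上表的條件層級彙總值示意,結果同樣接近零,約 0.07):
+
+```python
+import numpy as np
+
+novelty = np.array([0.395, 0.365, 0.337, 0.315, 0.273])  # C4, C5, C3, C2, C1
+jumps = np.array([13, 20, 33, 48, 0], dtype=float)       # 組合跳躍數
+r = np.corrcoef(novelty, jumps)[0, 1]
+print(f"Pearson r ≈ {r:.3f}")                            # ≈ 0.07,接近零
+```
+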
+### 3.3 這個發現的重大意義
+
+```
+┌─────────────────────────────────────────────────────────────┐
+│ 原創性-彈性空間 │
+│ │
+│ 高原創 │ C4● │
+│ │ C5● │
+│ │ C3● │
+│ │ C2● │
+│ │ │
+│ 低原創 │ C1● │
+│ └──────────────────────────────────────────────── │
+│ 低彈性 高彈性 │
+│ │
+│ r = 0.071 → 幾乎垂直於對角線 → 無相關 → 類似人類! │
+└─────────────────────────────────────────────────────────────┘
+
+對比典型 LLM(r > 0.3):
+┌─────────────────────────────────────────────────────────────┐
+│ 高原創 │ ● │
+│ │ ● │
+│ │ ● │
+│ │ ● │
+│ 低原創 │ ● │
+│ └──────────────────────────────────────────────── │
+│ 低彈性 高彈性 │
+│ │
+│ r > 0.3 → 沿對角線分布 → 正相關 → 典型 LLM 模式 │
+└─────────────────────────────────────────────────────────────┘
+```
+
+---
+
+## 四、累積跳躍輪廓(Cumulative Jump Profile)
+
+### 4.1 什麼是累積跳躍輪廓?
+
+追蹤在想法生成過程中,跳躍次數如何隨時間累積。
+
+```
+想法位置: 1 2 3 4 5 6 7 8 9 10
+跳躍發生: - - ✓ - ✓ - ✓ ✓ - ✓
+累積計數: 0 0 1 1 2 2 3 4 4 5
+
+輪廓線:
+ 5 │ ●
+ 4 │ ●────●
+ 3 │ ●────●
+ 2 │ ●────●
+ 1 │ ●────●
+ 0 │●────●
+ └────────────────────────────────────────
+ 1 2 3 4 5 6 7 8 9 10
+ 想法位置
+```
+
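+以下為極簡示意,重現上例的累積計數:
+
+```python
+import numpy as np
+
+jumps = np.array([0, 0, 1, 0, 1, 0, 1, 1, 0, 1])  # 上例的跳躍序列
+print(np.cumsum(jumps).tolist())  # [0, 0, 1, 1, 2, 2, 3, 4, 4, 5]
+```
+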
+### 4.2 輪廓線的解讀
+
+| 輪廓特徵 | 含義 | 創意模式 |
+|----------|------|----------|
+| **陡峭斜率** | 快速累積跳躍 | 頻繁切換類別 |
+| **平緩區域** | 跳躍暫停 | 深入探索當前類別 |
+| **階梯狀** | 突然爆發跳躍 | 類別耗盡後轉移 |
+| **近乎水平** | 幾乎沒有跳躍 | 持續在單一領域 |
+
+### 4.3 我們的實驗視覺化
+
+
+(圖:各條件的累積跳躍輪廓,由 `plot_cumulative_jump_profiles()` 產生)
+
+**各條件輪廓解讀:**
+
+| 條件 | 輪廓特徵 | 創意策略 |
+|------|----------|----------|
+| C2 專家視角 | 穩定上升 | 持續的類別切換 |
+| C3 屬性分解 | 穩定上升 | 持續的類別切換 |
+| C5 隨機視角 | 緩慢上升 | 較少切換 |
+| C4 完整管線 | 幾乎水平 | 非常專注的單一領域探索 |
+| C1 直接生成 | 完全水平 | 無任何類別切換 |
+
+---
+
+## 五、實驗發現的綜合意義
+
+### 5.1 核心發現總結
+
+| 發現 | 內容 | 意義 |
+|------|------|------|
+| **發現一** | 原創性-彈性相關 r = 0.071 | 管線產生「類似人類」的創意模式 |
+| **發現二** | C4 最高新穎性 + 最低彈性 | 結構化方法產生聚焦的新穎性 |
+| **發現三** | 所有條件都是 Persistent | LLM 傾向深度探索而非廣度 |
+| **發現四** | 組合跳躍 < 類別跳躍 | 驗證方法學的有效性 |
+
+### 5.2 為什麼 C4 能打破 LLM 模式?
+
+```
+典型 LLM 的問題:
+┌─────────────────────────────────────────────────────────────┐
+│ 直接生成:「給我椅子的創新用途」 │
+│ │
+│ LLM 依賴 temperature 產生多樣性 │
+│ → 高 temperature = 更多隨機性 │
+│ → 更多隨機性 = 更多跳躍(高彈性) │
+│ → 更多跳躍 = 更可能遇到新穎想法(高原創) │
+│ │
+│ 結果:彈性和原創性被綁定(正相關) │
+└─────────────────────────────────────────────────────────────┘
+
+C4 管線的突破:
+┌─────────────────────────────────────────────────────────────┐
+│ 結構化生成: │
+│ │
+│ Step 1: 屬性分解 │
+│ 「椅子」→ [便攜, 可堆疊, 人體工學, ...] │
+│ │
+│ Step 2: 專家無上下文關鍵字 │
+│ 會計師 + 「便攜」→ 「流動資產」(不知道是椅子!) │
+│ │
+│ Step 3: 重新結合 │
+│ 「椅子」+ 「流動資產」+ 會計師視角 │
+│ → 「帶 RFID 資產追蹤的企業椅子」 │
+│ │
+│ 關鍵機制: │
+│ - 結構強制「跳出」典型語義空間(高新穎性) │
+│ - 但所有想法都錨定在相同屬性集(低彈性) │
+│ - 新穎性來自「強制bisociation」而非「隨機探索」 │
+│ │
+│ 結果:高新穎性 + 低彈性 → 打破正相關 → 類似人類 │
+└─────────────────────────────────────────────────────────────┘
+```
+
+### 5.3 這對創意 AI 研究的意義
+
+**理論貢獻:**
+
+1. **證明 LLM 可以產生「類似人類」的創意模式**
+ - 不是透過模仿人類數據
+ - 而是透過結構化的創意管線設計
+
+2. **原創性和彈性是可以獨立控制的**
+ - 傳統認為需要高隨機性才能高原創
+ - 我們證明結構化約束也能達到高原創
+
+3. **「專注的新穎性」vs「分散的探索」**
+ - C4:深入一個新穎領域(專家策略)
+ - C5:廣泛接觸多個領域(通才策略)
+ - 兩種都有價值,但機制不同
+
+**實務應用:**
+
+| 目標 | 推薦策略 | 原因 |
+|------|----------|------|
+| 最大化新穎性 | C4 完整管線 | 最高距離質心分數 |
+| 最大化類別多樣性 | C2 專家視角 | 最多組合跳躍 |
+| 平衡新穎與多樣 | C3 屬性分解 | 中等水平 |
+| 快速生成 | C1 直接生成 | 最少 API 調用 |
+
+---
+
+## 六、方法論驗證
+
+### 6.1 組合跳躍 ≤ 類別跳躍
+
+這是方法學的必要條件驗證:
+
+```
+邏輯推導:
+ 組合跳躍 = 類別跳躍 ∧ 語義跳躍
+
+ 當類別跳躍 = False 時:
+ 組合跳躍 = False ∧ ? = False
+
+ 當類別跳躍 = True 時:
+ 組合跳躍 = True ∧ 語義跳躍 = 語義跳躍(可能 True 或 False)
+
+ 因此:組合跳躍 ≤ 類別跳躍(必然成立)
+```
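+
+也可以用幾行程式窮舉驗證這個必然性(示意):
+
+```python
+from itertools import product
+
+# 對(類別跳躍, 語義跳躍)的四種組合窮舉:組合跳躍必不大於類別跳躍
+assert all((cat and ss) <= cat for cat, ss in product([False, True], repeat=2))
+print("驗證通過:組合跳躍 ≤ 類別跳躍")
+```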
+
+**實驗驗證:**
+
+| 條件 | 類別跳躍 | 組合跳躍 | 驗證 |
+|------|:--------:|:--------:|:----:|
+| C2 | 54 | 48 | ✓ |
+| C3 | 34 | 33 | ✓ |
+| C5 | 22 | 20 | ✓ |
+| C4 | 13 | 13 | ✓ |
+| C1 | 0 | 0 | ✓ |
+
+### 6.2 彈性檔案閾值的選擇
+
+論文使用的閾值(30%、45%)基於人類實驗數據的分布。我們的 LLM 實驗中,所有條件都落在 Persistent 區間,這本身就是一個發現:
+
+```
+人類分布(論文數據):
+ Persistent: ~33%
+ Mixed: ~34%
+ Flexible: ~33%
+
+我們的 LLM 分布:
+ Persistent: 100%(所有條件)
+ Mixed: 0%
+ Flexible: 0%
+
+解讀:
+ LLM(即使加入專家/屬性引導)仍傾向持續探索模式
+ 這可能是 LLM 架構的固有特性
+```
+
+---
+
+## 七、與其他指標的整合
+
+### 7.1 完整指標體系
+
+| 維度 | 指標 | 來源 | C4 表現 |
+|------|------|------|:-------:|
+| **流暢性** | 想法數量 | Torrance | 402(最多) |
+| **彈性** | 組合跳躍數 | arXiv:2405.00899 | 13(除 C1 外最低) |
+| **原創性** | 距離質心 | 本研究 | 0.395(最高) |
+| **精緻性** | 平均字數 | Torrance | 26.2 |
+
+### 7.2 C4 的獨特位置
+
+```
+創意空間定位:
+
+ 高原創性
+ │
+ C4 ●│
+ │ C5●
+ │ C3●
+ │ C2●
+ │
+ C1 ●│
+ └──────────────────── 高彈性
+ 低原創性
+
+C4 占據了「高原創性 + 低彈性」的獨特位置
+這在人類創意者中常見(專家型),但在 LLM 中罕見
+```
+
+---
+
+## 八、未來研究方向
+
+基於這些發現,建議的後續研究:
+
+1. **跨模型驗證**
+ - 在 GPT-4、Claude、Llama-3 上重複實驗
+ - 確認發現是否為通用現象
+
+2. **Temperature 敏感度測試**
+ - 論文發現 LLM 對 temperature 不敏感
+ - 測試我們的管線是否也有此特性
+
+3. **人類基準比較**
+ - 收集人類在相同任務上的數據
+ - 直接比較彈性檔案分布
+
+4. **管線變體測試**
+ - 調整屬性數量、專家數量
+ - 找到最佳平衡點
+
+---
+
+## 參考文獻
+
+1. **arXiv:2405.00899** - "Characterising the Creative Process in Humans and Large Language Models"
+ - 組合跳躍信號、彈性檔案分類的原始論文
+
+2. **Hadas & Hershkovitz (2024)** - "Using LLMs to Evaluate AUT Flexibility Score"
+ - LLM 雙階段分類法的來源
+
+3. **Torrance (1974)** - *Torrance Tests of Creative Thinking*
+ - 創造力四維度框架
+
+4. **Koestler (1964)** - *The Act of Creation*
+ - Bisociation 理論基礎
+
+---
+
+## 附錄:程式碼參考
+
+相關分析程式碼位於:
+- `experiments/aut_flexibility_analysis.py`
+ - `compute_jump_signal()` - 組合跳躍計算
+ - `classify_flexibility_profile()` - 彈性檔案分類
+ - `analyze_originality_flexibility_correlation()` - 相關性分析
+ - `compute_cumulative_jump_profile()` - 累積跳躍輪廓
+ - `plot_cumulative_jump_profiles()` - 視覺化
+
+執行分析:
+```bash
+cd experiments
+source ../backend/venv/bin/activate
+python aut_flexibility_analysis.py experiment_20260119_165650_deduped.json
+```
diff --git a/experiments/docs/experiment_design_2026-01-19.md b/experiments/docs/experiment_design_2026-01-19.md
new file mode 100644
index 0000000..5956aec
--- /dev/null
+++ b/experiments/docs/experiment_design_2026-01-19.md
@@ -0,0 +1,259 @@
+# Experiment Design: 5-Condition Idea Generation Study
+
+**Date:** January 19, 2026
+**Version:** 1.0
+**Status:** Pilot Implementation
+
+## Overview
+
+This experiment tests whether the novelty-seeking system's two key mechanisms—**attribute decomposition** and **expert transformation**—independently and jointly improve creative ideation quality compared to direct LLM generation.
+
+## Research Questions
+
+1. Does decomposing a query into structured attributes improve idea diversity?
+2. Do expert perspectives improve idea novelty?
+3. Do these mechanisms have synergistic effects when combined?
+4. Is the benefit from experts due to domain knowledge, or simply perspective-shifting?
+
+## Experimental Design
+
+### 2×2 Factorial Design + Control
+
+| | No Attributes | With Attributes |
+|--------------------|---------------|-----------------|
+| **No Experts** | C1: Direct | C3: Attr-Only |
+| **With Experts** | C2: Expert-Only | C4: Full Pipeline |
+
+**Plus:** C5: Random-Perspective (tests perspective-shifting without domain knowledge)
+
+### Condition Descriptions
+
+#### C1: Direct Generation (Baseline)
+- Single LLM call: "Generate 20 creative ideas for [query]"
+- No attribute decomposition
+- No expert perspectives
+- Purpose: Baseline for standard LLM ideation
+
+#### C2: Expert-Only
+- 4 experts from curated occupations
+- Each expert generates 5 ideas directly for the query
+- No attribute decomposition
+- Purpose: Isolate expert contribution
+
+#### C3: Attribute-Only
+- Decompose query into 4 fixed categories
+- Generate attributes per category
+- Direct idea generation per attribute (no expert framing)
+- Purpose: Isolate attribute decomposition contribution
+
+#### C4: Full Pipeline
+- Full attribute decomposition (4 categories)
+- Expert transformation (4 experts × 1 keyword per attribute)
+- Purpose: Test combined mechanism (main system)
+
+#### C5: Random-Perspective
+- 4 random words per query (from curated pool)
+- Each word used as a "perspective" to generate 5 ideas
+- Purpose: Control for perspective-shifting vs. expert knowledge
+
+---
+
+## Key Design Decisions & Rationale
+
+### 1. Why 5 Conditions?
+
+C1-C4 form a 2×2 factorial design that isolates the independent contributions of:
+- **Attribute decomposition** (C1 vs C3, C2 vs C4)
+- **Expert perspectives** (C1 vs C2, C3 vs C4)
+
+C5 addresses a critical confound: if experts improve ideation, is it because of their **domain knowledge** or simply because any **perspective shift** helps? By using random words instead of domain experts, C5 tests whether the perspective-taking mechanism alone provides benefits.
+
+### 2. Why Random Words in C5 (Not Fixed)?
+
+**Decision:** Use randomly sampled words (with seed) rather than a fixed set.
+
+**Rationale:**
+- Stronger generalization: results hold across many word combinations
+- Avoids cherry-picking accusation ("you just picked easy words")
+- Reproducible via random seed (seed=42)
+- Each query gets different random words, increasing robustness
+
+### 3. Why Apply Deduplication Uniformly?
+
+**Decision:** Apply embedding-based deduplication (threshold=0.85) to ALL conditions after generation.
+
+**Rationale:**
+- Fair comparison: all conditions normalized to unique ideas
+- Creates "dedup survival rate" as an additional metric
+- Hypothesis: Full Pipeline ideas are diverse (low redundancy), not just numerous
+- Direct generation may produce many similar ideas that collapse after dedup
+
+### 4. Why FIXED_ONLY Categories?
+
+**Decision:** Use 4 fixed categories: Functions, Usages, User Groups, Characteristics
+
+**Rationale:**
+- Best for proof power: isolates "attribute decomposition" effect
+- No confound from dynamic category selection variability
+- Universal applicability: these 4 categories apply to objects, technology, and services
+- Dropped "Materials" category as it doesn't apply well to services
+
+### 5. Why Curated Expert Source?
+
+**Decision:** Use curated occupations (210 professions) rather than LLM-generated experts.
+
+**Rationale:**
+- Reproducibility: same occupation pool across runs
+- Consistency: no variance from LLM expert generation
+- Control: we know exactly which experts are available
+- Validation: occupations were manually curated for diversity
+
+### 6. Why Temperature 0.9?
+
+**Decision:** Use temperature=0.9 for all conditions.
+
+**Rationale:**
+- Higher temperature encourages more diverse/creative outputs
+- Matches typical creative task settings
+- Consistent across conditions for fair comparison
+- Lower temperatures (0.7) showed more repetitive outputs in testing
+
+### 7. Why 10 Pilot Queries?
+
+**Decision:** Start with 10 queries before scaling to full 30.
+
+**Rationale:**
+- Validate pipeline works before full investment
+- Catch implementation bugs early
+- Balanced across categories (3 everyday, 3 technology, 4 services)
+- Sufficient for initial pattern detection
+
+---
+
+## Configuration Summary
+
+| Setting | Value | Rationale |
+|---------|-------|-----------|
+| **LLM Model** | qwen3:8b | Local, fast, consistent |
+| **Temperature** | 0.9 | Encourages creativity |
+| **Expert Count** | 4 | Balance diversity vs. cost |
+| **Expert Source** | Curated | Reproducibility |
+| **Keywords/Expert** | 1 | Simplifies analysis |
+| **Language** | English | Consistency |
+| **Categories** | Functions, Usages, User Groups, Characteristics | Universal applicability |
+| **Dedup Threshold** | 0.85 | Standard similarity cutoff |
+| **Random Seed** | 42 | Reproducibility |
+| **Pilot Queries** | 10 | Validation before scaling |
+
+---
+
+## Query Selection
+
+### Pilot Queries (10)
+
+| ID | Query | Category |
+|----|-------|----------|
+| A1 | Chair | Everyday |
+| A5 | Bicycle | Everyday |
+| A7 | Smartphone | Everyday |
+| B1 | Solar panel | Technology |
+| B3 | 3D printer | Technology |
+| B4 | Drone | Technology |
+| C1 | Food delivery service | Services |
+| C2 | Online education platform | Services |
+| C4 | Public transportation | Services |
+| C9 | Elderly care service | Services |
+
+### Selection Criteria
+- Balanced across 3 domains (everyday objects, technology, services)
+- Varying complexity levels
+- Different user familiarity levels
+- Subset from full 30-query experimental protocol
+
+---
+
+## Random Word Pool (C5)
+
+35 words selected across 7 conceptual categories:
+
+| Category | Words |
+|----------|-------|
+| Nature | ocean, mountain, forest, desert, cave |
+| Optics | microscope, telescope, kaleidoscope, prism, lens |
+| Animals | butterfly, elephant, octopus, eagle, ant |
+| Weather | sunrise, thunderstorm, rainbow, fog, aurora |
+| Art | clockwork, origami, mosaic, symphony, ballet |
+| Temporal | ancient, futuristic, organic, crystalline, liquid |
+| Sensory | whisper, explosion, rhythm, silence, echo |
+
+**Selection Criteria:**
+- Concrete and evocative (easy to generate associations)
+- Diverse domains (no overlap with typical expert knowledge)
+- No obvious connection to test queries
+- Equal representation across categories
+
+---
+
+## Expected Outputs
+
+### Per Condition Per Query
+
+| Condition | Expected Ideas (pre-dedup) | Mechanism |
+|-----------|---------------------------|-----------|
+| C1 | 20 | Direct request |
+| C2 | 20 | 4 experts × 5 ideas |
+| C3 | ~20 | Varies by attribute count |
+| C4 | ~20 | 4 experts × ~5 keywords × 1 description |
+| C5 | 20 | 4 words × 5 ideas |
+
+### Metrics to Collect
+
+1. **Pre-deduplication count**: Raw ideas generated
+2. **Post-deduplication count**: Unique ideas after similarity filtering
+3. **Dedup survival rate**: post/pre ratio
+4. **Generation metadata**: Experts/words used, attributes generated
+
+---
+
+## File Structure
+
+```
+experiments/
+├── __init__.py
+├── config.py # Experiment configuration
+├── docs/
+│ └── experiment_design_2026-01-19.md # This file
+├── conditions/
+│ ├── __init__.py
+│ ├── c1_direct.py
+│ ├── c2_expert_only.py
+│ ├── c3_attribute_only.py
+│ ├── c4_full_pipeline.py
+│ └── c5_random_perspective.py
+├── data/
+│ ├── queries.json # 10 pilot queries
+│ └── random_words.json # Word pool for C5
+├── generate_ideas.py # Main runner
+├── deduplication.py # Post-processing
+└── results/ # Output (gitignored)
+```
+
+---
+
+## Verification Checklist
+
+- [ ] Each condition produces expected number of ideas
+- [ ] Deduplication reduces count meaningfully
+- [ ] Results JSON contains all required metadata
+- [ ] Random seed produces reproducible C5 results
+- [ ] No runtime errors on all 10 pilot queries
+
+---
+
+## Next Steps After Pilot
+
+1. Analyze pilot results for obvious issues
+2. Adjust parameters if needed (idea count normalization, etc.)
+3. Scale to full 30 queries
+4. Human evaluation of idea quality (novelty, usefulness, feasibility)
+5. Statistical analysis of condition differences
diff --git a/experiments/docs/experiment_report_2026-01-19.md b/experiments/docs/experiment_report_2026-01-19.md
new file mode 100644
index 0000000..f2c112b
--- /dev/null
+++ b/experiments/docs/experiment_report_2026-01-19.md
@@ -0,0 +1,813 @@
+---
+marp: true
+theme: default
+paginate: true
+backgroundColor: #fff
+style: |
+ section {
+ font-size: 24px;
+ }
+ h1 {
+ color: #2c3e50;
+ }
+ h2 {
+ color: #34495e;
+ }
+ table {
+ font-size: 18px;
+ }
+ .columns {
+ display: grid;
+ grid-template-columns: 1fr 1fr;
+ gap: 1rem;
+ }
+---
+
+# Breaking Semantic Gravity in LLM-Based Creative Ideation
+
+## A Pilot Study on Attribute Decomposition and Expert Perspectives
+
+**Date:** January 19, 2026
+**Model:** Qwen3:8b (Temperature: 0.9)
+**Queries:** 10 pilot queries
+
+---
+
+# Research Problem
+
+## The "Semantic Gravity" Challenge
+
+LLMs tend to generate ideas clustered around **high-probability training distributions**
+
+```
+Query: "Chair"
+Typical LLM output:
+ - Ergonomic office chair
+ - Comfortable reading chair
+ - Foldable portable chair
+ ← All within "furniture comfort" semantic cluster
+```
+
+**Goal:** Break this gravitational pull toward obvious solutions
+
+---
+
+# Theoretical Framework
+
+## Bisociation Theory (Koestler, 1964)
+
+Creative thinking occurs when two unrelated "matrices of thought" collide
+
+**Our Approach:**
+1. **Attribute Decomposition** → Break object into structural components
+2. **Expert Perspectives** → Introduce distant domain knowledge
+3. **Context-Free Keywords** → Force unexpected conceptual leaps
+
+---
+
+# Experimental Design
+
+## 2×2 Factorial + Control
+
+| Condition | Attributes | Experts | Description |
+|-----------|:----------:|:-------:|-------------|
+| **C1** Direct | - | - | Baseline: Direct LLM generation |
+| **C2** Expert-Only | - | ✓ | Expert perspectives without structure |
+| **C3** Attribute-Only | ✓ | - | Structure without expert knowledge |
+| **C4** Full Pipeline | ✓ | ✓ | Combined approach |
+| **C5** Random-Perspective | - | Random | Control: Random words as "experts" |
+
+---
+
+# Research Questions
+
+1. **RQ1:** Does attribute decomposition increase idea diversity?
+
+2. **RQ2:** Do expert perspectives increase idea diversity?
+
+3. **RQ3:** Is there a synergistic (super-additive) interaction effect?
+
+4. **RQ4:** Do domain-relevant experts outperform random perspectives?
+
+---
+
+# Pipeline Architecture
+
+## C4: Full Pipeline Process
+
+```
+Query: "Chair"
+ ↓
+Step 1: Attribute Decomposition
+ → "portable", "stackable", "ergonomic", ...
+ ↓
+Step 2: Context-Free Keyword Generation (Expert sees ONLY attribute)
+ → Accountant + "portable" → "mobile assets"
+ → Architect + "portable" → "modular units"
+ ↓
+Step 3: Idea Synthesis (Reunite with query)
+ → "Chair" + "mobile assets" + Accountant perspective
+ → "Asset-tracking chairs for corporate inventory management"
+```
+
+---
+
+# Key Design Decision
+
+## Context-Free Keyword Generation
+
+The expert **never sees the original query** when generating keywords
+
+```python
+# Step 2: Expert sees only attribute
+prompt = f"As a {expert}, what keyword comes to mind for '{attribute}'?"
+# Input: "portable" (NOT "portable chair")
+
+# Step 3: Reunite with query
+prompt = f"Apply '{keyword}' to '{query}' from {expert}'s perspective"
+# Input: "mobile assets" + "Chair" + "Accountant"
+```
+
+**Purpose:** Force bisociation by preventing obvious associations
+
+---
+
+# Pilot Study Parameters
+
+## Model & Generation Settings
+
+| Parameter | Value |
+|-----------|-------|
+| LLM Model | Qwen3:8b (Ollama) |
+| Temperature | 0.9 |
+| Ollama Endpoint | localhost:11435 |
+| Language | English |
+| Random Seed | 42 |
+
+---
+
+# Pilot Study Parameters (cont.)
+
+## Pipeline Configuration
+
+| Parameter | Value |
+|-----------|-------|
+| Queries | 10 (Chair, Bicycle, Smartphone, Solar panel, 3D printer, Drone, Food delivery, Online education, Public transport, Elderly care) |
+| Attribute Categories | 4 (Functions, Usages, User Groups, Characteristics) |
+| Attributes per Category | 5 |
+| Expert Source | Curated (210 occupations) |
+| Experts per Query | 4 |
+| Keywords per Expert | 1 |
+
+---
+
+# Pilot Study Parameters (cont.)
+
+## Output & Evaluation
+
+| Parameter | Value |
+|-----------|-------|
+| Total Ideas Generated | 1,119 (after deduplication) |
+| Ideas by Condition | C1: 195, C2: 198, C3: 125, C4: 402, C5: 199 |
+| Deduplication Threshold | 0.90 (cosine similarity) |
+| Embedding Model | qwen3-embedding:4b (1024D) |
+
+---
+
+# Background: Embedding Models Evolution
+
+## From Static to Contextual Representations
+
+| Generation | Model | Characteristics | Limitation |
+|------------|-------|-----------------|------------|
+| **1st Gen** | Word2Vec, GloVe | Static vectors, one vector per word | "bank" = same vector (river vs finance) |
+| **2nd Gen** | BERT, Sentence-BERT | Contextual, transformer-based | Limited context window, older training |
+| **3rd Gen** | Qwen3-embedding | LLM-based, instruction-tuned | Requires more compute |
+
+---
+
+# Background: Transformer vs LLM-based Embedding
+
+## Architecture Differences
+
+| Aspect | Transformer (BERT) | LLM-based (Qwen3) |
+|--------|-------------------|-------------------|
+| **Architecture** | Encoder-only | Decoder-only (GPT-style) |
+| **Training objective** | MLM (masked language modeling) | Next-token prediction |
+| **Training data** | ~16GB (Wikipedia + Books) | ~several TB (web, code, books) |
+| **Parameters** | 110M - 340M | 4B+ |
+| **Context window** | 512 tokens | 8K - 128K tokens |
+
+---
+
+# Background
+
+## Key Comparison
+
+```
+1. Broader knowledge coverage
+   BERT: only knows concepts from before 2019
+   Qwen3: knows modern concepts like "drone delivery", "AI-powered", "IoT"
+
+2. Richer semantic understanding
+   BERT: "chair for elderly" ≈ "elderly chair" (bag-of-words similarity)
+   Qwen3: understands the difference between "mobility assistance" and "comfort seating"
+
+3. Instruction tuning
+   Traditional models: cannot adapt to task intent
+   Qwen3: can follow instructions like "find semantic differences between creative ideas"
+```
+
+---
+
+# Background: Why Qwen3-Embedding?
+
+## Comparison with Traditional Methods
+
+```
+Traditional Sentence-BERT (all-MiniLM-L6-v2):
+  - 384-dimensional vectors
+  - Trained on pre-2021 data
+  - Strong on short sentences, limited long-text understanding
+  - Encoder-only, MLM training
+
+Qwen3-Embedding (qwen3-embedding:4b):
+  - 1024-dimensional vectors (richer semantic representation)
+  - Built on the Qwen3 LLM (2024+ training data)
+  - Long-context support (8K tokens)
+  - Instruction-tuned → adapts to task intent
+  - Inherits part of the LLM's capabilities
+```
+
+**Why this model:** Creative ideas tend to be long and semantically complex, requiring stronger contextual understanding
+
+---
+
+# Background: How Embedding Works
+
+## Semantic Similarity via Vector Space
+
+```
+Step 1: Convert text to a vector
+  "Solar-powered charging chair" → [0.12, -0.34, 0.56, ..., 0.78] (1024D)
+
+Step 2: Compute cosine similarity
+  similarity = cos(θ) = (A · B) / (|A| × |B|)
+
+Step 3: Interpret the similarity
+  1.0 = identical
+  0.9 = very similar (likely a duplicate idea)
+  0.5 = moderately related
+  0.0 = unrelated
+```
+
+**Applications:** Deduplication (similarity > 0.9), flexibility analysis (clustering), novelty (centroid distance)
+
+---
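+
+# Method Sketch: Cosine Similarity
+
+A minimal sketch of the similarity computation above, assuming each idea is already embedded as a 1024-D vector; the random vectors stand in for real qwen3-embedding outputs:
+
+```python
+import numpy as np
+
+def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
+    """cos(θ) = (A · B) / (|A| × |B|)"""
+    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
+
+a, b = np.random.rand(1024), np.random.rand(1024)  # placeholder embeddings
+is_duplicate = cosine_similarity(a, b) > 0.9       # dedup rule used in this study
+```
+
+---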
+
+# Results: Semantic Diversity
+
+## Mean Pairwise Distance (Higher = More Diverse)
+
+> **Method:** We convert each idea into a vector embedding (qwen3-embedding:4b), then calculate the average cosine distance between all pairs of ideas within each condition. Higher values indicate ideas are more spread out in semantic space.
+
+| Condition | Mean | SD | vs C1 (Cohen's d) |
+|-----------|:----:|:--:|:-----------------:|
+| C1 Direct | 0.294 | 0.039 | - |
+| C2 Expert-Only | 0.400 | 0.028 | **3.15*** |
+| C3 Attribute-Only | 0.377 | 0.036 | **2.20*** |
+| C4 Full Pipeline | 0.395 | 0.019 | **3.21*** |
+| C5 Random | 0.405 | 0.062 | **2.72*** |
+
+*p < 0.001, Large effect sizes (d > 0.8)
+
+> **Cohen's d:** Measures effect size (how big the difference is). d > 0.8 = large effect, d > 0.5 = medium, d > 0.2 = small.
+
+---
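+
+# Method Sketch: Mean Pairwise Distance
+
+A minimal sketch of the diversity metric above, assuming one condition's ideas are stacked as rows of a NumPy array (random placeholders here, real embeddings in the pipeline):
+
+```python
+import numpy as np
+from sklearn.metrics.pairwise import cosine_distances
+
+def mean_pairwise_distance(embeddings: np.ndarray) -> float:
+    """Average cosine distance over all unordered idea pairs."""
+    d = cosine_distances(embeddings)       # (n, n) distance matrix
+    iu = np.triu_indices_from(d, k=1)      # upper triangle, excludes diagonal
+    return float(d[iu].mean())
+
+ideas = np.random.rand(20, 1024)           # ~20 ideas in one condition
+print(f"diversity = {mean_pairwise_distance(ideas):.3f}")
+```
+
+---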
+
+# Results: ANOVA Summary
+
+## Normalized Diversity Metric
+
+> **Method:** Two-way ANOVA tests whether Attributes and Experts each have independent effects on diversity, and whether combining them produces extra benefit (interaction). F-statistic measures variance between groups vs within groups.
+
+| Effect | F | p | Significant |
+|--------|:-:|:-:|:-----------:|
+| **Attributes (RQ1)** | 5.31 | 0.027 | Yes |
+| **Experts (RQ2)** | 26.07 | <0.001 | Yes |
+| **Interaction (RQ3)** | - | - | Sub-additive |
+
+**Key Finding:** Both factors work, but combination is **not synergistic**
+
+---
+
+# Results: Expert vs Random (RQ4)
+
+## C2 (Expert-Only) vs C5 (Random-Perspective)
+
+| Metric | C2 Expert | C5 Random | p-value | Effect |
+|--------|:---------:|:---------:|:-------:|:------:|
+| Diversity | 0.399 | 0.414 | 0.463 | n.s. |
+| Query Distance | 0.448 | 0.437 | 0.654 | n.s. |
+
+**Finding:** Random words perform as well as domain experts
+
+Implication: The value may be in **perspective shift itself**, not expert knowledge
+
+---
+
+# Results: Efficiency Analysis
+
+## Diversity per Idea Generated
+
+| Condition | Mean Ideas | Diversity | Efficiency |
+|-----------|:----------:|:---------:|:----------:|
+| C1 Direct | 20.0 | 0.293 | 1.46 |
+| C2 Expert-Only | 20.0 | 0.399 | **1.99** |
+| C3 Attribute-Only | 12.8 | 0.376 | **3.01** |
+| C4 Full Pipeline | 51.9 | 0.393 | 0.78 |
+| C5 Random | 20.0 | 0.405 | 2.02 |
+
+**C4 produces 2.6× more ideas but achieves same diversity**
+
+---
+
+# Visualization: Diversity by Condition
+
+
+
+---
+
+# Visualization: Query Distance
+
+
+
+---
+
+# Advanced Analysis: Lexical Diversity
+
+## Type-Token Ratio & Vocabulary Richness
+
+> **Method:** Type-Token Ratio (TTR) = unique words ÷ total words. High TTR means more varied vocabulary; low TTR means more word repetition. Vocabulary size counts total unique words across all ideas in a condition.
+
+| Condition | TTR | Vocabulary | Avg Words/Idea |
+|-----------|:---:|:----------:|:--------------:|
+| C1 Direct | **0.382** | 853 | 11.5 |
+| C2 Expert-Only | 0.330 | 1,358 | 20.8 |
+| C3 Attribute-Only | 0.330 | 1,098 | 26.6 |
+| C4 Full Pipeline | 0.189 | **1,992** | 26.2 |
+| C5 Random | 0.320 | 1,331 | 20.9 |
+
+**Finding:** C4 has largest vocabulary (1,992) but lowest TTR (0.189)
+→ More words but more repetition across ideas
+
+---
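+
+# Method Sketch: Type-Token Ratio
+
+A minimal sketch of the lexical metrics above; the tokenization details (lowercasing, a simple word regex, no stopword removal) are assumptions, not necessarily the exact preprocessing used:
+
+```python
+import re
+
+def lexical_stats(ideas: list[str]) -> tuple[float, int]:
+    """Return (TTR, vocabulary size) over all ideas in one condition."""
+    tokens = [w for text in ideas for w in re.findall(r"[a-z']+", text.lower())]
+    types = set(tokens)
+    ttr = len(types) / len(tokens) if tokens else 0.0
+    return ttr, len(types)
+
+ttr, vocab = lexical_stats(["Ergonomic office chair", "Foldable camping chair"])
+```
+
+---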
+
+# Advanced Analysis: Concept Extraction
+
+## Top Keywords by Condition
+
+> **Method:** Extract meaningful keywords from idea texts using NLP (removing stopwords, lemmatization). Top keywords show most frequent concepts; unique keywords count distinct terms. Domain coverage checks if ideas span different knowledge areas.
+
+| Condition | Top Keywords | Unique Keywords |
+|-----------|--------------|:---------------:|
+| C1 Direct | solar, powered, smart, delivery, drone | 805 |
+| C2 Expert | real, create, design, time, develop | 1,306 |
+| C3 Attribute | real, time, create, develop, powered | 1,046 |
+| C4 Pipeline | time, real, data, ensuring, enhancing | **1,937** |
+| C5 Random | like, solar, inspired, energy, uses | 1,286 |
+
+**Finding:** C5 Random shows "inspired" → suggests analogical thinking
+All conditions cover 6 domain categories
+
+---
+
+# Advanced Analysis: Novelty Scores
+
+## Distance from Global Centroid (Higher = More Novel)
+
+> **Method:** Compute the centroid (average vector) of ALL ideas across all conditions. Then measure each idea's distance from this "typical idea" center. Ideas far from the centroid are semantically unusual compared to the overall pool.
+
+| Condition | Mean | Std | Interpretation |
+|-----------|:----:|:---:|----------------|
+| C1 Direct | 0.273 | 0.037 | Closest to "typical" ideas |
+| C2 Expert-Only | 0.315 | 0.062 | Moderate novelty |
+| C3 Attribute-Only | 0.337 | 0.066 | Moderate novelty |
+| C5 Random | 0.365 | 0.069 | High novelty |
+| **C4 Full Pipeline** | **0.395** | 0.083 | **Highest novelty** |
+
+**Finding:** C4 produces ideas furthest from the "average" idea space
+
+---
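+
+# Method Sketch: Centroid-Distance Novelty
+
+A minimal sketch of the novelty score above, assuming the full idea pool is one array; normalizing rows makes the dot product equal cosine similarity:
+
+```python
+import numpy as np
+
+def novelty_scores(all_embeddings: np.ndarray) -> np.ndarray:
+    """Cosine distance of every idea from the global centroid."""
+    centroid = all_embeddings.mean(axis=0)
+    centroid /= np.linalg.norm(centroid)
+    unit = all_embeddings / np.linalg.norm(all_embeddings, axis=1, keepdims=True)
+    return 1.0 - unit @ centroid   # higher = further from the "typical" idea
+
+pool = np.random.rand(1119, 1024)  # placeholder for all 1,119 ideas
+scores = novelty_scores(pool)
+```
+
+---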
+
+# Advanced Analysis: Cross-Condition Cohesion
+
+## % Nearest Neighbors from Same Condition
+
+> **Method:** For each idea, find its K nearest neighbors in embedding space. Cohesion = percentage of neighbors from the same condition. High cohesion means ideas from that condition cluster together; low cohesion means they're scattered among other conditions.
+
+| Condition | Cohesion | Interpretation |
+|-----------|:--------:|----------------|
+| **C4 Full Pipeline** | **88.6%** | Highly distinct idea cluster |
+| C2 Expert-Only | 72.7% | Moderate clustering |
+| C5 Random | 71.4% | Moderate clustering |
+| C1 Direct | 70.8% | Moderate clustering |
+| C3 Attribute-Only | 51.2% | Ideas scattered, overlap with others |
+
+**Finding:** C4 ideas form a distinct cluster in semantic space
+
+---
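+
+# Method Sketch: Cross-Condition Cohesion
+
+A minimal sketch of the nearest-neighbor cohesion metric; K=10 is an assumption, since the slide does not state the neighborhood size used:
+
+```python
+import numpy as np
+from sklearn.neighbors import NearestNeighbors
+
+def cohesion(embeddings: np.ndarray, labels: np.ndarray, k: int = 10) -> dict:
+    """Per-condition share of k nearest neighbors sharing the same label."""
+    nn = NearestNeighbors(n_neighbors=k + 1, metric="cosine").fit(embeddings)
+    _, idx = nn.kneighbors(embeddings)
+    same = labels[idx[:, 1:]] == labels[:, None]   # column 0 is the idea itself
+    frac = same.mean(axis=1)
+    return {c: float(frac[labels == c].mean()) for c in np.unique(labels)}
+```
+
+---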
+
+# Advanced Analysis: AUT Flexibility
+
+## Semantic Category Diversity (Hadas & Hershkovitz 2024)
+
+> **Method:** Uses the Alternative Uses Task (AUT) flexibility framework. Embedding-based: Hierarchical clustering with average linkage, cut at distance threshold 0.5. Higher cluster count = more semantic categories covered = higher flexibility.
+
+| Condition | Embedding Clusters | Mean Pairwise Similarity |
+|-----------|:------------------:|:------------------------:|
+| **C5 Random** | **15** | 0.521 (most diverse) |
+| **C2 Expert-Only** | **13** | 0.517 |
+| C3 Attribute-Only | 12 | - |
+| C4 Full Pipeline | 10 | 0.583 |
+| C1 Direct | **1** | 0.647 (most similar) |
+
+**Finding:** Expert perspectives (C2, C5) produce more diverse categories than direct generation (C1)
+
+---
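+
+# Method Sketch: Embedding-Based Flexibility
+
+A minimal sketch of the clustering step above, following the stated method (average-linkage hierarchical clustering on cosine distances, cut at 0.5):
+
+```python
+import numpy as np
+from scipy.cluster.hierarchy import fcluster, linkage
+from scipy.spatial.distance import pdist
+
+def count_categories(embeddings: np.ndarray, cut: float = 0.5) -> int:
+    """Number of semantic clusters = AUT flexibility proxy."""
+    d = pdist(embeddings, metric="cosine")         # condensed distance matrix
+    z = linkage(d, method="average")               # average-linkage dendrogram
+    labels = fcluster(z, t=cut, criterion="distance")
+    return int(labels.max())
+```
+
+---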
+
+# Advanced Analysis: Combined Jump Signal
+
+## Enhanced Method from arXiv:2405.00899
+
+> **Method:** Combined jump signal uses logical AND of two conditions:
+> - **jumpcat:** Category changes between consecutive ideas (from embedding clustering)
+> - **jumpSS:** Semantic similarity < 0.7 (ideas are semantically dissimilar)
+>
+> **True jump = jumpcat ∧ jumpSS** — reduces false positives where similar ideas happen to be in different clusters.
+
+| Condition | Cat-Only | Sem-Only | **Combined** | Profile |
+|-----------|:--------:|:--------:|:------------:|---------|
+| C2 Expert-Only | 54 | 125 | **48** | Persistent |
+| C3 Attribute-Only | 34 | 107 | **33** | Persistent |
+| C5 Random | 22 | 116 | **20** | Persistent |
+| C4 Full Pipeline | 13 | 348 | **13** | Persistent |
+| C1 Direct | 0 | 104 | **0** | Persistent |
+
+**Finding:** Combined jumps ≤ category jumps (as expected). All conditions show "Persistent" exploration pattern.
+
+---
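+
+# Method Sketch: Combined Jump Signal
+
+A minimal sketch of the jumpcat ∧ jumpSS rule above, assuming ideas are in generation order with one cluster label each:
+
+```python
+import numpy as np
+
+def combined_jumps(embeddings: np.ndarray, categories: np.ndarray,
+                   sim_threshold: float = 0.7) -> int:
+    """True jump = category change AND consecutive similarity < 0.7."""
+    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
+    sims = (unit[:-1] * unit[1:]).sum(axis=1)   # cos sim of consecutive ideas
+    jump_cat = categories[:-1] != categories[1:]
+    jump_ss = sims < sim_threshold
+    return int(np.sum(jump_cat & jump_ss))
+```
+
+---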
+
+# Advanced Analysis: Flexibility Profiles
+
+## Classification Based on Combined Jump Ratio
+
+> **Method:** Classify creativity style based on normalized jump ratio (jumps / transitions):
+> - **Persistent:** ratio < 0.30 (deep exploration within categories)
+> - **Flexible:** ratio > 0.45 (broad exploration across categories)
+> - **Mixed:** 0.30 ≤ ratio ≤ 0.45
+
+| Condition | Combined Jump Ratio | Profile | Interpretation |
+|-----------|:-------------------:|:-------:|----------------|
+| C3 Attribute-Only | **26.6%** | Persistent | Moderate category switching |
+| C2 Expert-Only | **24.4%** | Persistent | Moderate category switching |
+| C5 Random | 10.1% | Persistent | Low category switching |
+| **C4 Full Pipeline** | **3.2%** | Persistent | Very deep within-category exploration |
+| C1 Direct | 0.0% | Persistent | Single semantic cluster |
+
+**Key Insight:** C4's low jump ratio indicates focused, persistent exploration within novel semantic territory
+
+---
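+
+# Method Sketch: Profile Classification
+
+A minimal sketch of the threshold rule above; with C4's 402 ideas there are 401 transitions, so 13 jumps reproduce the reported ~3.2% ratio:
+
+```python
+def flexibility_profile(jumps: int, transitions: int) -> str:
+    """Classify exploration style from the normalized jump ratio."""
+    ratio = jumps / transitions if transitions else 0.0
+    if ratio < 0.30:
+        return "Persistent"
+    if ratio > 0.45:
+        return "Flexible"
+    return "Mixed"
+
+assert flexibility_profile(13, 401) == "Persistent"   # C4: 13 / 401 ≈ 3.2%
+```
+
+---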
+
+# Key Finding: Originality-Flexibility Correlation
+
+## Does Our Pipeline Break the Typical LLM Pattern?
+
+> **Paper Finding (arXiv:2405.00899):**
+> - **Humans:** No correlation between flexibility and originality (r ≈ 0)
+> - **LLMs:** Positive correlation — flexible LLMs score higher on originality
+
+**Our Results:**
+
+| Metric | Value | Interpretation |
+|--------|:-----:|----------------|
+| **Pearson r** | **0.071** | Near zero correlation |
+| Interpretation | **Human-like pattern** | Breaks typical LLM pattern |
+
+**Per-Condition Breakdown:**
+
+| Condition | Novelty | Flexibility (combined jumps) |
+|-----------|:-------:|:----------------------------:|
+| C4 Full Pipeline | **0.395** (highest) | **13** (lowest) |
+| C5 Random | 0.365 | 20 |
+| C3 Attribute-Only | 0.337 | 33 |
+| C2 Expert-Only | 0.315 | 48 (highest) |
+| C1 Direct | 0.273 (lowest) | 0 |
+
+**Critical Finding:** The attribute+expert pipeline (C4) achieves **highest novelty with lowest flexibility**, demonstrating that structured context-free generation produces **focused novelty** rather than scattered exploration.
+
+---
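+
+# Method Sketch: Originality-Flexibility Correlation
+
+A minimal sketch of the correlation test above; how observations are paired (per query, per condition) is not stated on the slide, so the arrays below are placeholders only:
+
+```python
+import numpy as np
+from scipy.stats import pearsonr
+
+novelty = np.random.rand(50)        # placeholder per-observation novelty
+flexibility = np.random.rand(50)    # placeholder per-observation jump counts
+r, p = pearsonr(novelty, flexibility)
+# the study reports r = 0.071 (near zero, human-like)
+```
+
+---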
+
+# Cumulative Jump Profile Visualization
+
+## Exploration Patterns Over Generation Sequence
+
+> **Method:** Track cumulative jump count at each response position. Steep slopes indicate rapid category switching; flat regions indicate persistent exploration within categories.
+
+
+
+**Visual Pattern:**
+- C2/C3 show steady accumulation of jumps → regular category switching
+- C4/C5 show flatter profiles → persistent within-category exploration
+- C1 is flat (0 jumps) → all ideas in single cluster
+
+---
+
+# Flexibility vs Novelty: Key Insight
+
+## Novelty and Flexibility are Orthogonal Dimensions
+
+| Condition | Novelty (centroid dist) | Flexibility (combined jumps) | Pattern |
+|-----------|:-----------------------:|:----------------------------:|---------|
+| C4 Pipeline | **0.395** (highest) | **13** (lowest) | High novel, low flex |
+| C5 Random | 0.365 | 20 | High novel, low flex |
+| C2 Expert | 0.315 | **48** (highest) | Moderate novel, high flex |
+| C3 Attribute | 0.337 | 33 | Moderate both |
+| C1 Direct | 0.273 (lowest) | 0 | Typical, single category |
+
+**Interpretation:**
+- **C1 Direct** produces similar ideas within one typical category (low novelty, no jumps)
+- **C4 Full Pipeline** produces the most novel ideas with focused exploration (low jump ratio)
+- **C2 Expert-Only** produces the most category switching but moderate novelty
+- **r = 0.071** confirms these are orthogonal dimensions (human-like pattern)
+
+---
+
+# Embedding Visualization: PCA
+
+> **Method:** Principal Component Analysis reduces high-dimensional embeddings (1024D) to 2D for visualization by finding directions of maximum variance. Points close together = semantically similar ideas. Colors represent conditions.
+
+
+
+---
+
+# Embedding Visualization: t-SNE
+
+> **Method:** t-SNE (t-distributed Stochastic Neighbor Embedding) preserves local neighborhood structure when reducing to 2D. Better at revealing clusters than PCA, but distances between clusters are less meaningful. Good for seeing if conditions form distinct groups.
+
+
+
+---
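+
+# Method Sketch: 2D Projections
+
+A minimal sketch of both projections above using scikit-learn; the t-SNE parameters shown (perplexity defaults, PCA init, seed 42) are assumptions, not necessarily those used for the actual figures:
+
+```python
+import numpy as np
+from sklearn.decomposition import PCA
+from sklearn.manifold import TSNE
+
+embeddings = np.random.rand(1119, 1024)     # placeholder for real vectors
+
+pca_2d = PCA(n_components=2).fit_transform(embeddings)
+tsne_2d = TSNE(n_components=2, metric="cosine", init="pca",
+               random_state=42).fit_transform(embeddings)
+# scatter-plot each projection, colored by condition
+```
+
+---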
+
+# Integrated Findings
+
+## What the Advanced Analysis Reveals
+
+| Analysis | C4 Full Pipeline Characteristic |
+|----------|--------------------------------|
+| Lexical | Largest vocabulary (1,992 words) |
+| Novelty | Highest distance from centroid (0.395) |
+| Cohesion | Tightest cluster (88.6% same-condition NN) |
+| Diversity | High pairwise distance (0.395) |
+| **Flexibility** | **Lowest combined jumps (13) = focused exploration** |
+
+**Interpretation:** C4 creates a **distinct semantic territory** -
+novel ideas that are internally coherent but far from other conditions.
+Low flexibility (3.2% jump ratio) indicates deep, focused exploration within a novel space.
+
+## Understanding Novelty vs Flexibility
+
+| Condition | Novelty | Flexibility (jumps) | Strategy |
+|-----------|:-------:|:-------------------:|----------|
+| C1 Direct | Low | Lowest (0) | Typical, single category |
+| C2 Expert | Medium | **Highest (48)** | Experts = diverse exploration |
+| C3 Attribute | Medium | Medium (33) | Structured exploration |
+| C5 Random | High | Low (20) | Random but focused |
+| **C4 Pipeline** | **Highest** | **Low (13)** | **Focused novelty** |
+
+---
+
+# Critical Limitation
+
+## Embedding Distance ≠ True Novelty
+
+Current metrics measure **semantic spread**, not **creative value**
+
+| What We Measure | What We Miss |
+|-----------------|--------------|
+| Vector distance | Practical usefulness |
+| Cluster spread | Conceptual surprise |
+| Query distance | Non-obviousness |
+| | Feasibility |
+
+```
+"Quantum entanglement chair" → High distance, Low novelty
+"Chair legs as drumsticks" → Low distance, High novelty
+```
+
+---
+
+# Torrance Creativity Framework
+
+## What True Novelty Assessment Requires
+
+| Dimension | Definition | Our Coverage |
+|-----------|------------|:------------:|
+| **Fluency** | Number of ideas | ✓ Measured |
+| **Flexibility** | Category diversity | ✓ Measured (LLM + embedding) |
+| **Originality** | Statistical rarity | Not measured |
+| **Elaboration** | Detail & development | Not measured |
+
+**Originality requires human judgment or LLM-as-Judge**
+
+---
+
+# Discussion: The Attribute Anchoring Effect
+
+## Why C4 Has Highest Novelty but Lowest Flexibility
+
+```
+C2 (Expert-Only): HIGHEST FLEXIBILITY (48 combined jumps)
+ Architect → "load-bearing furniture"
+ Chef → "dining experience design"
+ ← Each expert explores freely, frequent category switching
+
+C4 (Full Pipeline): LOWEST FLEXIBILITY (13 combined jumps, 3.2% ratio)
+ All experts respond to same attribute set
+ Architect + "portable" → "modular portable"
+ Chef + "portable" → "portable serving"
+ ← Attribute anchoring constrains category switching
+ ← BUT forced bisociation produces HIGHEST NOVELTY
+```
+
+**Key Mechanism:** Attributes anchor experts to similar conceptual space (low flexibility),
+but context-free keyword generation forces novel associations (high novelty).
+
+**Result:** "Focused novelty" — deep exploration in a distant semantic territory
+
+---
+
+# Key Findings Summary
+
+| RQ | Question | Answer |
+|----|----------|--------|
+| RQ1 | Attributes increase diversity? | **Yes** (p=0.027) |
+| RQ2 | Experts increase diversity? | **Yes** (p<0.001) |
+| RQ3 | Synergistic interaction? | **No** (sub-additive) |
+| RQ4 | Experts > Random? | **No** (p=0.463) |
+
+**Additional Findings (arXiv:2405.00899 Metrics):**
+- Full Pipeline (C4) has **highest novelty** but **lowest flexibility**
+- **Originality-Flexibility correlation r=0.071** (human-like, breaks typical LLM pattern)
+- Novelty and Flexibility are **orthogonal dimensions**
+- All conditions show **Persistent** exploration profile (combined jump ratio < 30%)
+- Direct generation (C1) produces ideas in a **single semantic cluster**
+
+---
+
+# Limitations
+
+1. **Sample Size:** 10 queries (pilot study)
+
+2. **Novelty Measurement:** Embedding-based metrics only measure semantic distance, not true creative value
+
+3. **Single Model:** Results may vary with different LLMs
+
+4. **No Human Evaluation:** No validation of idea quality or usefulness
+
+5. **Fixed Categories:** 4 attribute categories may limit exploration
+
+---
+
+# Future Work
+
+## Immediate Next Steps
+
+1. **Human Assessment Interface** (Built)
+ - Web-based rating tool with Torrance dimensions
+ - Stratified sampling: 200 ideas (4 per condition × 10 queries)
+ - 4 dimensions: Originality, Elaboration, Coherence, Usefulness
+
+2. **Multi-Model Validation** (Priority)
+ - Replicate on GPT-4, Claude, Llama-3
+ - Verify findings generalize across LLMs
+
+3. **LLM-as-Judge evaluation** for full-scale scoring
+
+4. **Scale to 30 queries** for statistical power
+
+5. **Alternative pipeline designs** to address attribute anchoring
+
+**Documentation:**
+- `experiments/docs/future_research_plan_zh.md` - Detailed research plan
+- `experiments/docs/creative_process_metrics_zh.md` - arXiv:2405.00899 metrics explanation
+
+---
+
+# Conclusion
+
+## Key Takeaways
+
+1. **Both attribute decomposition and expert perspectives significantly increase semantic diversity** compared to direct generation
+
+2. **The combination is sub-additive**, suggesting attribute structure may constrain expert creativity
+
+3. **Random perspectives work as well as domain experts**, implying the value is in perspective shift, not expert knowledge
+
+4. **Novelty and Flexibility are orthogonal creativity dimensions** - high novelty ≠ high flexibility
+ - C4 Full Pipeline: Highest novelty, lowest flexibility
+ - C5 Random: Higher flexibility, moderate novelty
+
+5. **🔑 Key Finding:** The pipeline produces **human-like originality-flexibility patterns** (r=0.071)
+ - Typical LLMs show positive correlation (flexible → more original)
+ - Our method breaks this pattern: high novelty with focused exploration
+
+6. **True novelty assessment requires judgment-based evaluation** beyond embedding metrics
+
+---
+
+# Appendix: Statistical Details
+
+## T-test Results (vs C1 Baseline)
+
+| Comparison | t | p | Cohen's d |
+|------------|:-:|:-:|:---------:|
+| C4 vs C1 | 8.55 | <0.001 | 4.05 |
+| C2 vs C1 | 7.67 | <0.001 | 3.43 |
+| C3 vs C1 | 4.23 | <0.001 | 1.89 |
+
+All experimental conditions significantly outperform baseline
+
+---
+
+# Appendix: Experiment Configuration
+
+```python
+EXPERIMENT_CONFIG = {
+ "model": "qwen3:8b",
+ "temperature": 0.9,
+ "expert_count": 4,
+ "expert_source": "curated", # 210 occupations
+ "keywords_per_expert": 1,
+ "categories": ["Functions", "Usages",
+ "User Groups", "Characteristics"],
+ "dedup_threshold": 0.90,
+ "random_seed": 42
+}
+```
+
+---
+
+# Thank You
+
+## Questions?
+
+**Repository:** novelty-seeking
+**Experiment Date:** January 19, 2026
+**Contact:** [Your Email]
+
+---
+
+# Backup Slides
+
+---
+
+# Backup: Deduplication Threshold Analysis
+
+Original threshold (0.85) was too aggressive:
+- 40.5% of removed pairs were borderline (0.85-0.87)
+- Many genuinely different concepts were grouped
+
+Raised to 0.90:
+- RQ1 (Attributes) became significant (p: 0.052 → 0.027)
+- Preserved ~103 additional unique ideas
+
+---
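+
+# Backup: Deduplication Sketch
+
+A minimal greedy sketch of threshold-based deduplication; the backend's actual grouping logic may differ (this version simply keeps the first idea of each near-duplicate group):
+
+```python
+import numpy as np
+from sklearn.metrics.pairwise import cosine_similarity
+
+def dedup(embeddings: np.ndarray, threshold: float = 0.90) -> list[int]:
+    """Indices of ideas kept after cosine-similarity deduplication."""
+    sims = cosine_similarity(embeddings)
+    kept: list[int] = []
+    for i in range(len(embeddings)):
+        if all(sims[i, j] < threshold for j in kept):
+            kept.append(i)
+    return kept
+```
+
+---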
+
+# Backup: Sample Ideas by Condition
+
+## Query: "Chair"
+
+**C1 Direct:**
+- Ergonomic office chair with lumbar support
+- Foldable camping chair
+
+**C2 Expert-Only (Architect):**
+- Load-bearing furniture integrated into building structure
+
+**C4 Full Pipeline:**
+- Asset-tracking chairs with RFID for corporate inventory
+- (Accountant + "portable" → "mobile assets")
+
+---
+
+# Backup: Efficiency Calculation
+
+$$\text{Efficiency} = \frac{\text{Mean Pairwise Distance}}{\text{Idea Count}} \times 100$$
+
+| Condition | Calculation | Result |
+|-----------|-------------|:------:|
+| C3 Attribute | 0.376 / 12.8 × 100 | 3.01 |
+| C4 Pipeline | 0.393 / 51.9 × 100 | 0.78 |
+
+C3 achieves 96% of C4's diversity with 25% of the ideas
diff --git a/experiments/docs/future_research_plan_zh.md b/experiments/docs/future_research_plan_zh.md
new file mode 100644
index 0000000..48007c2
--- /dev/null
+++ b/experiments/docs/future_research_plan_zh.md
@@ -0,0 +1,342 @@
+# Research Publication Plan and Future Work
+
+**Created:** 2026-01-19
+**Project:** Breaking Semantic Gravity in LLM-Based Creative Ideation
+
+---
+
+## 1. Publication Feasibility Assessment
+
+### Coverage of Existing Research
+
+| Topic | Representative Work | Our Difference |
+|------|----------|------------|
+| LLM creativity evaluation | Organisciak et al. (2023) | They evaluate LLM creativity; we **enhance** it |
+| AUT flexibility scoring | Hadas & Hershkovitz (2024) | Theirs is an evaluation method; ours is a **generation method** |
+| Prompt engineering | Zhou et al. (2023) | They optimize prompts; we build a **structured pipeline** |
+| LLM-as-Judge | Zheng et al. (2023) | An evaluation tool, not a core contribution |
+
+### Unique Contributions of This Study
+
+| Contribution | Description | Academic Value |
+|--------|------|----------|
+| Context-Free Keyword Generation | Experts never see the original query, forcing bisociation | Methodological innovation |
+| Sub-additive interaction | Attributes × Experts = sub-additive | Empirical finding |
+| Random perspectives ≈ domain experts | The perspective shift itself matters more than domain expertise | Theoretical contribution |
+| Novelty-flexibility orthogonality | First verified in LLM creative generation | Theoretical validation |
+---
+
+## 2. Current Research Status
+
+### Completed ✓
+
+| Element | Status | Details |
+|------|:----:|------|
+| Theoretical framework | ✓ | Bisociation Theory + Torrance Creativity Framework |
+| Experimental design | ✓ | 2×2 factorial + control (5 conditions) |
+| Pipeline implementation | ✓ | Attribute decomposition → expert transformation → deduplication |
+| Automatic metrics | ✓ | Novelty, flexibility, diversity, cohesion, jump signals |
+| Human assessment interface | ✓ | Web-based Torrance rating tool |
+| Statistical analysis | ✓ | ANOVA, effect sizes, correlation analysis |
+| Pilot experiment | ✓ | 10 queries, Qwen3:8b, 1119 ideas |
+
+### Still Needed ✗
+
+| Gap | Importance | Notes |
+|------|:------:|------|
+| Multi-model validation | **High** | Currently Qwen3:8b only |
+| Human evaluation data | **High** | Interface built, but no data collected yet |
+| Larger sample | **Medium** | 10 → 30-50 queries |
+| Baseline comparison | **Medium** | Compare against other creativity-enhancement methods |
+| LLM-as-Judge | Medium | Validate correlation with human ratings |
+
+---
+
+## 3. Publication Strategy Options
+
+### Option A: Full Paper (Top Conference / Journal)
+
+**Target venues:**
+- ACL / EMNLP (top NLP conferences)
+- CHI (top HCI conference)
+- Creativity Research Journal
+- Thinking Skills and Creativity
+
+**Suggested title:**
+> "Breaking Semantic Gravity: Context-Free Expert Perspectives for LLM Creative Ideation"
+
+**Remaining work:**
+
+| Task | Est. Time | Priority |
+|----------|:--------:|:------:|
+| GPT-4 experiments | 1 week | P0 |
+| Claude experiments | 1 week | P0 |
+| Llama-3 experiments | 1 week | P1 |
+| Human evaluation collection | 2-3 weeks | P0 |
+| Sample expansion (30 queries) | 1 week | P1 |
+| Baseline comparison experiments | 1-2 weeks | P1 |
+| Paper writing | 2-3 weeks | - |
+
+**Total estimated time:** 2-3 months
+
+---
+
+### Option B: Short Paper / Workshop Paper
+
+**Targets:**
+- ACL/EMNLP Workshop on Creativity and AI
+- NeurIPS Workshop on Creativity and Design
+- ICCC (International Conference on Computational Creativity)
+
+**Remaining work:**
+
+| Task | Est. Time | Priority |
+|----------|:--------:|:------:|
+| GPT-4 experiments | 1 week | P0 |
+| Small-scale human evaluation (50-100 ideas) | 1 week | P0 |
+| Paper writing | 1 week | - |
+
+**Total estimated time:** 2-4 weeks
+
+---
+
+## 4. Supplementary Experiment Plan
+
+### Phase 1: Multi-Model Validation (Priority P0)
+
+```
+Goal: verify that the method generalizes
+
+Model list:
+  □ GPT-4 / GPT-4o (OpenAI)
+  □ Claude 3.5 Sonnet (Anthropic)
+  □ Llama-3-70B (Meta)
+  □ Gemini Pro (Google) [optional]
+
+Experimental design:
+  - Same 10 queries
+  - Same 5 conditions
+  - Same evaluation metrics
+
+Expected outcomes:
+  - Cross-model consistency analysis
+  - Identification of model-specific effects
+```
+
+### Phase 2: Human Evaluation (Priority P0)
+
+```
+Goal: validate the correlation between automatic metrics and human judgment
+
+Rating dimensions (Torrance framework):
+  1. Originality - 1-5 Likert
+  2. Elaboration - 1-5 Likert
+  3. Feasibility - 1-5 Likert
+  4. Nonsense - binary
+
+Sampling strategy:
+  - Stratified sampling: 4 ideas per condition × query
+  - Total: 5 × 10 × 4 = 200 ideas
+  - Raters: 3-5 people (compute ICC)
+
+Interface:
+  - Built: experiments/assessment/
+  - Needed: recruit raters, collect data
+```
+
+### Phase 3: Sample Size Expansion (Priority P1)
+
+```
+Goal: increase statistical power
+
+Expansion plan:
+  - Current: 10 queries
+  - Target: 30-50 queries
+
+Query sources:
+  - Objects: furniture, tools, appliances, vehicles
+  - Concepts: services, systems, processes
+  - Hybrids: combining physical and digital elements
+
+Power analysis:
+  - Current effect sizes d ≈ 2-3 (large)
+  - 30 queries should be enough for power > 0.95
+```
+
+### Phase 4: Baseline Comparison (Priority P1)
+
+```
+Goal: compare against existing methods
+
+Baseline methods:
+  1. Vanilla Prompting
+     "Generate creative uses for [object]"
+
+  2. Chain-of-Thought (CoT)
+     "Think step by step about creative uses..."
+
+  3. Few-shot Examples
+     Provide 3-5 creative examples
+
+  4. Role-Playing (Standard)
+     "As a [expert], suggest uses for [object]"
+     (the expert sees the full query)
+
+Comparison metrics:
+  - Novelty, flexibility, diversity
+  - Idea count, generation time
+  - Human evaluation scores
+```
+
+---
+
+## 5. Draft Paper Outline
+
+### Title
+"Breaking Semantic Gravity: Context-Free Expert Perspectives for Enhanced LLM Creative Ideation"
+
+### Abstract
+- Problem: LLMs generate ideas clustered around training distributions
+- Method: Attribute decomposition + context-free expert transformation
+- Results: Sub-additive interaction, random ≈ expert, novelty ⊥ flexibility
+- Contribution: Novel pipeline + empirical findings
+
+### 1. Introduction
+- Semantic gravity problem in LLM creativity
+- Bisociation theory and creative thinking
+- Research questions (RQ1-4)
+
+### 2. Related Work
+- LLM creativity evaluation
+- Prompt engineering for creativity
+- Computational creativity methods
+
+### 3. Method
+- Pipeline architecture
+- Context-free keyword generation
+- Experimental design (2×2 + control)
+
+### 4. Evaluation Framework
+- Automatic metrics (novelty, flexibility, diversity)
+- Human evaluation (Torrance dimensions)
+- LLM-as-Judge validation
+
+### 5. Results
+- RQ1: Attribute effect
+- RQ2: Expert effect
+- RQ3: Interaction effect
+- RQ4: Expert vs Random
+- Cross-model validation
+
+### 6. Discussion
+- Attribute anchoring effect
+- Value of perspective shift
+- Novelty vs flexibility orthogonality
+
+### 7. Conclusion
+- Contributions
+- Limitations
+- Future work
+
+---
+
+## 6. Timeline
+
+### Fast Track (Workshop Paper)
+
+```
+Week 1-2: Multi-model experiments (GPT-4, Claude)
+Week 2-3: Small-scale human evaluation
+Week 3-4: Paper writing and submission
+
+Target: 2026 Q1 workshop deadline
+```
+
+### Full Track (Full Paper)
+
+```
+Month 1:
+  - Week 1-2: Multi-model experiments
+  - Week 3-4: Sample size expansion
+
+Month 2:
+  - Week 1-2: Human evaluation collection
+  - Week 3-4: Baseline comparison experiments
+
+Month 3:
+  - Week 1-2: Data analysis and statistics
+  - Week 3-4: Paper writing
+
+Target: ACL 2026 / EMNLP 2026
+```
+
+---
+
+## 7. Risks and Mitigations
+
+| Risk | Likelihood | Impact | Mitigation |
+|------|:------:|:----:|----------|
+| Inconsistent cross-model results | Medium | High | Report as "model-specific findings" |
+| Low ICC in human ratings | Medium | Medium | Add raters, refine rating guidelines |
+| Effects vanish with larger samples | Low | High | Current effect sizes are large, so the risk is low |
+| Competing paper publishes first | Low | High | Submit to a workshop first to establish priority |
+
+---
+
+## 8. Resource Requirements
+
+### Compute
+
+| Resource | Purpose | Est. Cost |
+|------|------|:--------:|
+| OpenAI API | GPT-4 experiments | ~$50-100 |
+| Anthropic API | Claude experiments | ~$50-100 |
+| Local GPU | Llama experiments | Already available |
+| Ollama | Embedding | Already available |
+
+### Personnel
+
+| Role | Need | Notes |
+|------|------|------|
+| Human raters | 3-5 people | Recruit classmates or crowdsource |
+| Statistics consultant | Optional | For advising on complex analyses |
+
+---
+
+## 9. Success Criteria
+
+### Short term (within 1 month)
+
+- [ ] Complete GPT-4 experiments
+- [ ] Complete Claude experiments
+- [ ] Collect at least 100 human rating samples
+
+### Medium term (within 3 months)
+
+- [ ] Complete all model experiments
+- [ ] Complete human evaluation (200+ samples, ICC > 0.7)
+- [ ] Complete baseline comparison
+- [ ] Submit first paper
+
+### Long term (within 6 months)
+
+- [ ] Paper accepted
+- [ ] Open-source the code and dataset
+- [ ] Extend to other creative tasks
+
+---
+
+## 10. References
+
+1. Hadas, S., & Hershkovitz, A. (2024). Using Large Language Models to Evaluate Alternative Uses Task Flexibility Score. *Thinking Skills and Creativity*, 52, 101549.
+
+2. Organisciak, P., et al. (2023). Beyond Semantic Distance: Automated Scoring of Divergent Thinking Greatly Improves with Large Language Models. *Thinking Skills and Creativity*, 49, 101356.
+
+3. Koestler, A. (1964). *The Act of Creation*. Hutchinson.
+
+4. Torrance, E.P. (1974). *Torrance Tests of Creative Thinking*. Scholastic Testing Service.
+
+5. Stevenson, C., et al. (2024). Characterizing Creative Processes in Humans and Large Language Models. *arXiv:2405.00899*.
+
+6. Zheng, L., et al. (2023). Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. *NeurIPS 2023*.
diff --git a/experiments/docs/presentation_notes_zh.md b/experiments/docs/presentation_notes_zh.md
new file mode 100644
index 0000000..09743cd
--- /dev/null
+++ b/experiments/docs/presentation_notes_zh.md
@@ -0,0 +1,178 @@
+# Presentation Notes
+
+---
+
+## Opening (1-2 minutes)
+
+**Problem:** LLMs suffer from "semantic gravity" when generating creative ideas
+- Ask for "innovative uses for a chair" → you get "ergonomic chairs" and "folding chairs"
+- Ideas cluster in the high-frequency regions of the training data
+
+**Our solution:** Bisociation
+- Attribute decomposition + expert perspectives + context-free keywords
+- Force unexpected connections
+
+---
+
+## Experimental Design (1 minute)
+
+**Five conditions, 2×2 + control:**
+
+| Condition | Shorthand | Focus |
+|------|------|------|
+| C1 | Direct generation | Baseline |
+| C2 | Experts only | Experts explore freely |
+| C3 | Attributes only | Structure without experts |
+| C4 | Full pipeline | Attributes + experts |
+| C5 | Random words | Control: random vs expert |
+
+**Key design:** experts **never see the original query** when generating keywords
+- Accountant + "portable" → "mobile assets" (without knowing it is a chair)
+- Then recombine "mobile assets" + "chair"
+
+---
+
+## Answers to the Four Research Questions
+
+| RQ | Question | Answer | One-liner |
+|----|------|:----:|--------|
+| RQ1 | Do attributes help? | ✓ Yes | p=0.027 |
+| RQ2 | Do experts help? | ✓ Yes | p<0.001 |
+| RQ3 | Synergistic effect? | ✗ No | Sub-additive |
+| RQ4 | Experts > random? | ✗ No | p=0.463 |
+
+**Surprise finding:** random words work as well as experts → the value lies in the perspective shift itself
+
+---
+
+## Core Numbers (remember these)
+
+### Novelty (centroid distance; higher = more novel)
+```
+C4: 0.395 ← highest!
+C5: 0.365
+C3: 0.337
+C2: 0.315
+C1: 0.273 ← lowest (most typical)
+```
+
+### Flexibility (combined jump count; higher = more scattered)
+```
+C2: 48 ← highest! (experts explore freely)
+C3: 33
+C5: 20
+C4: 13 ← lowest! (focused exploration)
+C1: 0  ← single cluster
+```
+
+---
+
+## 🔑 Key Findings (the core message)
+
+### Finding 1: Originality-Flexibility Correlation
+
+**The reference paper says:**
+- Humans: r ≈ 0 (no correlation)
+- Typical LLMs: r > 0 (positive correlation)
+
+**Our result: r = 0.071 (near zero)**
+
+→ **We produce a "human-like" creativity pattern!**
+
+### Finding 2: C4's Unique Position
+
+```
+C4 = highest novelty + lowest flexibility
+
+This means "focused novelty":
+- Not jumping all over the place (high flexibility)
+- But digging deep into one novel territory (low flexibility, high novelty)
+- Like the creative pattern of a human expert
+```
+
+### Finding 3: Why does this happen?
+
+```
+Attribute anchoring effect:
+  All experts respond to the same attribute set
+  → Ideas are anchored in a similar conceptual space (low flexibility)
+  → But context-free keywords force novel associations (high novelty)
+
+Result: focused novelty
+```
+
+---
+
+## Methodological Highlights
+
+### Combined Jump Signal
+- Old method: category switches only
+- New method: category switch **and** semantic dissimilarity
+- Fewer false positives, more accurate
+
+### Flexibility Profile Classification
+| Profile | Jump Ratio | Our Result |
+|------|:--------:|:----------:|
+| Persistent | <30% | All conditions |
+| Mixed | 30-45% | None |
+| Flexible | >45% | None |
+
+→ LLMs lean toward "persistent exploration" rather than "flexible jumping"
+
+---
+
+## Limitations (be upfront)
+
+1. **Small sample:** 10 queries (pilot study)
+2. **No human evaluation:** embedding metrics only
+3. **Single model:** only Qwen3:8b tested
+4. **Semantic distance ≠ true novelty:** a "quantum entanglement chair" is far away yet not novel
+
+---
+
+## Next Steps (if asked)
+
+1. **Human assessment interface** (already built)
+2. **Multi-model validation** (GPT-4, Claude)
+3. **LLM-as-Judge** for large-scale scoring
+4. **30 queries** for more statistical power
+
+---
+
+## One-Sentence Summary
+
+> **Our attribute + expert pipeline gives LLMs a "human-expert-like" creativity pattern:
+> high novelty with focused exploration, breaking the typical LLM "flexibility = novelty" correlation.**
+
+---
+
+## Quick Q&A
+
+**Q: Why do random words work as well as experts?**
+A: The value lies in the perspective shift itself, not in domain knowledge
+
+**Q: Why does C4 have the lowest flexibility but the highest novelty?**
+A: Attributes anchor the experts in one conceptual space, while context-free keywords force novel connections
+
+**Q: What does r=0.071 mean?**
+A: Novelty and flexibility are uncorrelated, as in humans, breaking the typical LLM positive correlation
+
+**Q: Is a Persistent profile good or bad?**
+A: Neither; it is an exploration strategy. C4 shows you can be persistent and still novel
+
+**Q: What is the practical takeaway?**
+A: For high novelty → use C4; for diverse categories → use C2
+
+---
+
+## Numbers Cheat Sheet
+
+| Metric | C1 | C2 | C3 | C4 | C5 |
+|------|:--:|:--:|:--:|:--:|:--:|
+| Idea count | 195 | 198 | 125 | **402** | 199 |
+| Novelty | 0.273 | 0.315 | 0.337 | **0.395** | 0.365 |
+| Flexibility (jumps) | 0 | **48** | 33 | 13 | 20 |
+| Jump ratio | 0% | 24% | 27% | **3%** | 10% |
+| Cohesion | 71% | 73% | 51% | **89%** | 71% |
+
+**Mnemonic:** C4 is the most novel, most cohesive, least flexible = "focused novelty"
diff --git a/experiments/generate_ideas.py b/experiments/generate_ideas.py
new file mode 100644
index 0000000..b8e9a58
--- /dev/null
+++ b/experiments/generate_ideas.py
@@ -0,0 +1,290 @@
+"""
+Main experiment runner for the 5-condition idea generation study.
+
+Usage:
+ # Run single query through all conditions
+ python -m experiments.generate_ideas --pilot --query "Chair"
+
+ # Run all pilot queries
+ python -m experiments.generate_ideas --pilot
+
+ # Run specific conditions
+ python -m experiments.generate_ideas --query "Bicycle" --conditions c1_direct c4_full_pipeline
+"""
+
+import sys
+import json
+import argparse
+import asyncio
+import logging
+from datetime import datetime
+from pathlib import Path
+from typing import List, Dict, Any, Optional
+
+# Add backend to path for imports
+sys.path.insert(0, str(Path(__file__).parent.parent / "backend"))
+
+from experiments.config import (
+ CONDITIONS, CONDITION_NAMES, DATA_DIR, RESULTS_DIR, EXPERIMENT_CONFIG
+)
+from experiments.conditions import (
+ c1_generate, c2_generate, c3_generate, c4_generate, c5_generate
+)
+
+# Configure logging
+logging.basicConfig(
+ level=logging.INFO,
+ format='%(asctime)s - %(levelname)s - %(message)s'
+)
+logger = logging.getLogger(__name__)
+
+
+# Condition function mapping
+CONDITION_FUNCTIONS = {
+ "c1_direct": c1_generate,
+ "c2_expert_only": c2_generate,
+ "c3_attribute_only": c3_generate,
+ "c4_full_pipeline": c4_generate,
+ "c5_random_perspective": c5_generate,
+}
+
+
+def load_queries() -> List[Dict[str, Any]]:
+ """Load pilot queries from data file."""
+ queries_file = DATA_DIR / "queries.json"
+ with open(queries_file, "r", encoding="utf-8") as f:
+ data = json.load(f)
+ return data.get("queries", [])
+
+
+def save_results(results: List[Dict[str, Any]], filename: str) -> Path:
+ """Save results to JSON file."""
+ RESULTS_DIR.mkdir(parents=True, exist_ok=True)
+ output_path = RESULTS_DIR / filename
+ with open(output_path, "w", encoding="utf-8") as f:
+ json.dump(results, f, indent=2, ensure_ascii=False)
+ return output_path
+
+
+async def run_condition(
+ query: str,
+ condition: str
+) -> Dict[str, Any]:
+ """Run a single condition for a query."""
+ if condition not in CONDITION_FUNCTIONS:
+ raise ValueError(f"Unknown condition: {condition}")
+
+ generate_fn = CONDITION_FUNCTIONS[condition]
+ result = await generate_fn(query)
+ return result
+
+
+async def run_experiment(
+ queries: Optional[List[str]] = None,
+ conditions: Optional[List[str]] = None,
+ save_intermediate: bool = True
+) -> Dict[str, Any]:
+ """
+ Run the full experiment.
+
+ Args:
+ queries: List of queries to run (None = all pilot queries)
+ conditions: List of conditions to run (None = all conditions)
+ save_intermediate: Whether to save results after each query
+
+ Returns:
+ Complete experiment results
+ """
+ # Load queries if not provided
+ if queries is None:
+ query_data = load_queries()
+ queries_to_run = [(q["id"], q["query"], q["category"]) for q in query_data]
+ else:
+ queries_to_run = [(f"Q{i}", q, "custom") for i, q in enumerate(queries)]
+
+ # Default to all conditions
+ conditions = conditions or CONDITIONS
+
+ logger.info(f"Starting experiment with {len(queries_to_run)} queries and {len(conditions)} conditions")
+ logger.info(f"Conditions: {', '.join(conditions)}")
+
+ experiment_results = {
+ "experiment_id": datetime.now().strftime("%Y%m%d_%H%M%S"),
+ "config": EXPERIMENT_CONFIG,
+ "conditions": conditions,
+ "query_count": len(queries_to_run),
+ "results": [],
+ "summary": {}
+ }
+
+ for query_id, query, category in queries_to_run:
+ logger.info(f"\n{'='*60}")
+ logger.info(f"Processing query: {query} (ID: {query_id}, Category: {category})")
+ logger.info(f"{'='*60}")
+
+ query_results = {
+ "query_id": query_id,
+ "query": query,
+ "category": category,
+ "conditions": {}
+ }
+
+ for condition in conditions:
+ logger.info(f"\n Running {CONDITION_NAMES.get(condition, condition)}...")
+
+ try:
+ result = await run_condition(query, condition)
+
+ query_results["conditions"][condition] = {
+ "success": True,
+ "idea_count": result["idea_count"],
+ "ideas": result["ideas"],
+ "ideas_with_source": result.get("ideas_with_source", []),
+ "metadata": result["metadata"]
+ }
+
+ logger.info(f" Generated {result['idea_count']} ideas")
+
+ except Exception as e:
+ logger.error(f" Error in {condition}: {e}")
+ query_results["conditions"][condition] = {
+ "success": False,
+ "error": str(e),
+ "idea_count": 0,
+ "ideas": []
+ }
+
+ experiment_results["results"].append(query_results)
+
+ # Save intermediate results
+ if save_intermediate:
+ save_results(
+ experiment_results,
+ f"experiment_{experiment_results['experiment_id']}_intermediate.json"
+ )
+
+ # Calculate summary statistics
+ experiment_results["summary"] = calculate_summary(experiment_results)
+
+ # Save final results
+ output_path = save_results(
+ experiment_results,
+ f"experiment_{experiment_results['experiment_id']}_complete.json"
+ )
+
+ logger.info(f"\n{'='*60}")
+ logger.info("Experiment complete!")
+ logger.info(f"Results saved to: {output_path}")
+ logger.info(f"{'='*60}")
+
+ return experiment_results
+
+
+def calculate_summary(results: Dict[str, Any]) -> Dict[str, Any]:
+ """Calculate summary statistics for the experiment."""
+ summary = {
+ "total_queries": len(results["results"]),
+ "conditions": {}
+ }
+
+ for condition in results["conditions"]:
+ condition_stats = {
+ "total_ideas": 0,
+ "successful_queries": 0,
+ "failed_queries": 0,
+ "avg_ideas_per_query": 0
+ }
+
+ for query_result in results["results"]:
+ cond_result = query_result["conditions"].get(condition, {})
+ if cond_result.get("success", False):
+ condition_stats["successful_queries"] += 1
+ condition_stats["total_ideas"] += cond_result.get("idea_count", 0)
+ else:
+ condition_stats["failed_queries"] += 1
+
+ if condition_stats["successful_queries"] > 0:
+ condition_stats["avg_ideas_per_query"] = (
+ condition_stats["total_ideas"] / condition_stats["successful_queries"]
+ )
+
+ summary["conditions"][condition] = condition_stats
+
+ return summary
+
+
+def print_summary(results: Dict[str, Any]):
+ """Print a formatted summary of the experiment."""
+ print("\n" + "=" * 70)
+ print("EXPERIMENT SUMMARY")
+ print("=" * 70)
+
+ summary = results.get("summary", {})
+ print(f"\nTotal queries processed: {summary.get('total_queries', 0)}")
+
+ print("\nResults by condition:")
+ print("-" * 70)
+ print(f"{'Condition':<30} {'Success':<10} {'Total Ideas':<15} {'Avg/Query':<10}")
+ print("-" * 70)
+
+ for condition, stats in summary.get("conditions", {}).items():
+ name = CONDITION_NAMES.get(condition, condition)
+ success = stats.get("successful_queries", 0)
+ total = stats.get("total_ideas", 0)
+ avg = stats.get("avg_ideas_per_query", 0)
+ print(f"{name:<30} {success:<10} {total:<15} {avg:<10.1f}")
+
+ print("-" * 70)
+
+
+async def main():
+ parser = argparse.ArgumentParser(
+ description="Run the 5-condition idea generation experiment"
+ )
+ parser.add_argument(
+ "--pilot",
+ action="store_true",
+ help="Run pilot experiment with all 10 queries"
+ )
+ parser.add_argument(
+ "--query",
+ type=str,
+ help="Run single query (e.g., 'Chair')"
+ )
+ parser.add_argument(
+ "--conditions",
+ nargs="+",
+ choices=CONDITIONS,
+ help="Specific conditions to run"
+ )
+ parser.add_argument(
+ "--no-save-intermediate",
+ action="store_true",
+ help="Don't save intermediate results"
+ )
+
+ args = parser.parse_args()
+
+ # Determine queries to run
+ if args.query:
+ queries = [args.query]
+ elif args.pilot:
+ queries = None # Will load all pilot queries
+ else:
+ parser.print_help()
+ print("\nError: Must specify --pilot or --query")
+ sys.exit(1)
+
+ # Run experiment
+ results = await run_experiment(
+ queries=queries,
+ conditions=args.conditions,
+ save_intermediate=not args.no_save_intermediate
+ )
+
+ # Print summary
+ print_summary(results)
+
+
+if __name__ == "__main__":
+ asyncio.run(main())
diff --git a/experiments/novelty_loop/README.md b/experiments/novelty_loop/README.md
new file mode 100644
index 0000000..3136f63
--- /dev/null
+++ b/experiments/novelty_loop/README.md
@@ -0,0 +1,253 @@
+# Novelty-Driven LLM Agent Loop
+
+An autonomous LLM agent that generates tasks in a while loop, using **novelty assessment as the termination condition** to help the agent "jump out" of its training data distribution (semantic gravity).
+
+## Concept
+
+Traditional LLM-based idea generation tends to produce outputs clustered around high-probability regions of the training distribution. This "semantic gravity" limits creative exploration.
+
+This module implements a novel approach: use **novelty scores** to dynamically control when the agent should stop. Instead of fixed iteration counts, the agent continues until it finds something truly novel (a "breakthrough").
+
+```
+Seed Problem → Expert Sample → Task Generation → Novelty Assessment → Continue/Stop
+```
+
+## Research Foundation
+
+This work builds on established research:
+
+- **Novelty Search** (Lehman & Stanley): Reward novelty, not objectives
+- **Curiosity-driven Exploration** (Pathak et al.): Intrinsic motivation via prediction error
+- **Quality-Diversity** (MAP-Elites): Maintain diverse high-quality solutions
+- **Open-ended Learning**: Endless innovation through novelty pressure
+
+The unique contribution is using **novelty as a termination condition** rather than just a reward signal.
+
+## Architecture
+
+```
+┌──────────────────────────────────────────────────────────────────┐
+│ Novelty-Driven Task Generation Loop │
+├──────────────────────────────────────────────────────────────────┤
+│ │
+│ ┌──────────┐ │
+│ │ Seed │ "Design a better bicycle" │
+│ │ Problem │ │
+│ └────┬─────┘ │
+│ │ │
+│ ▼ │
+│ ┌─────────────────────────────────────────────────────────┐ │
+│ │ WHILE novelty < threshold AND iterations < max: │ │
+│ │ │ │
+│ │ 1. Sample random expert (curated occupations) │ │
+│ │ e.g., "marine biologist", "choreographer" │ │
+│ │ │ │
+│ │ 2. Generate task from expert perspective │ │
+│ │ "What task would a {expert} assign to improve │ │
+│ │ {seed_problem}?" │ │
+│ │ │ │
+│ │ 3. Embed task, compute novelty vs. centroid │ │
+│ │ │ │
+│ │ 4. If novelty > threshold → STOP (breakthrough!) │ │
+│ │ │ │
+│ └─────────────────────────────────────────────────────────┘ │
+│ │ │
+│ ▼ │
+│ ┌──────────┐ │
+│ │ Output: │ Novel task that "jumped out" of typical space │
+│ │ Task │ + trajectory of exploration │
+│ └──────────┘ │
+│ │
+└──────────────────────────────────────────────────────────────────┘
+```
+
+## Installation
+
+The module uses the existing project infrastructure. Ensure you have:
+
+1. **Ollama** running with the required models:
+ ```bash
+ ollama pull qwen3:8b
+ ollama pull qwen3-embedding:4b
+ ```
+
+2. **Python dependencies** (from project root):
+ ```bash
+ cd backend
+ source venv/bin/activate
+ pip install httpx numpy
+ ```
+
+## Quick Start
+
+### Basic Usage
+
+```bash
+cd experiments/novelty_loop
+python demo.py "Improve urban transportation"
+```
+
+### Example Output
+
+```
+Iteration 1
+ Expert: Architect (Architecture & Design)
+ Task: Design multi-modal transit hubs that integrate pedestrian, cycling, and public transport seamlessly
+ Novelty: [████████░░░░░░░░░░░░] 0.1234
+
+Iteration 2
+ Expert: Chef (Culinary)
+ Task: Create food delivery route optimization algorithms inspired by kitchen workflow efficiency
+ Novelty: [███████████░░░░░░░░░] 0.1823
+
+Iteration 3
+ Expert: Marine Biologist (Science)
+ Task: Study fish schooling behavior to develop organic traffic flow algorithms
+ Novelty: [██████████████░░░░░░] 0.3521
+
+Iteration 4
+ Expert: Choreographer (Performing Arts)
+ Task: Design pedestrian movement as urban dance, creating rhythmic crossing patterns
+ Novelty: [████████████████████] 0.5234
+ ★ BREAKTHROUGH! ★
+```
+
+## Termination Strategies
+
+### 1. Seek Breakthrough (Default)
+
+Stop when novelty exceeds threshold. Finds the first truly novel task.
+
+```bash
+python demo.py "Your problem" --strategy breakthrough --threshold 0.4
+```
+
+### 2. Exhaust Frontier
+
+Continue while novelty is high, stop when average novelty drops. Explores more thoroughly.
+
+```bash
+python demo.py "Your problem" --strategy exhaust --exhaust-threshold 0.15
+```
+
+### 3. Coverage Target
+
+Continue until N distinct conceptual clusters are covered. Ensures diversity.
+
+```bash
+python demo.py "Your problem" --strategy coverage --clusters 5
+```
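+
+The three strategies differ only in their stopping predicate. A minimal sketch of the decision logic (illustrative only; the names and defaults below are assumptions, not the actual internals of `agent.py`):
+
+```python
+def should_stop(strategy: str, novelty: float, history: list[float],
+                clusters: set[int], threshold: float = 0.4,
+                exhaust_floor: float = 0.15, target_clusters: int = 5) -> bool:
+    """Illustrative stopping predicates for the three strategies."""
+    if strategy == "breakthrough":
+        return novelty > threshold                 # first novel-enough task
+    if strategy == "exhaust":
+        recent = history[-5:]                      # rolling window (assumed)
+        return len(recent) == 5 and sum(recent) / 5 < exhaust_floor
+    if strategy == "coverage":
+        return len(clusters) >= target_clusters    # enough distinct clusters
+    raise ValueError(f"unknown strategy: {strategy}")
+```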
+
+## API Usage
+
+```python
+import asyncio
+from experiments.novelty_loop.agent import NoveltyDrivenTaskAgent
+
+async def main():
+ agent = NoveltyDrivenTaskAgent(
+ novelty_threshold=0.4,
+ max_iterations=20,
+ language="en"
+ )
+
+ result = await agent.run("Design a better bicycle")
+
+ print(f"Found breakthrough: {result.breakthrough_task.task}")
+ print(f"Novelty score: {result.breakthrough_task.novelty_score}")
+ print(f"From expert: {result.breakthrough_task.expert}")
+
+ await agent.close()
+
+asyncio.run(main())
+```
+
+## Novelty Metrics
+
+The `novelty_metrics.py` module provides:
+
+- **Centroid Distance**: Primary novelty metric - how far from the average of all previous outputs
+- **Min Distance**: Distance to nearest neighbor (detect duplicates)
+- **Jump Detection**: Identifies significant semantic shifts between consecutive outputs
+- **Trajectory Tracking**: Cumulative novelty, jump ratio, etc.
+
+```python
+from experiments.novelty_loop.novelty_metrics import NoveltyMetrics
+
+metrics = NoveltyMetrics(similarity_threshold=0.7)
+
+# Add embeddings one by one
+for embedding in embeddings:
+ novelty = metrics.compute_novelty(embedding)
+ metrics.add_embedding(embedding, novelty)
+ print(f"Novelty: {novelty.score:.4f}, Is Jump: {novelty.is_jump}")
+
+# Get trajectory stats
+print(f"Mean novelty: {metrics.trajectory.mean_novelty}")
+print(f"Max novelty: {metrics.trajectory.max_novelty}")
+print(f"Jump ratio: {metrics.trajectory.jump_ratio}")
+```
+
+## CLI Options
+
+```
+positional arguments:
+ seed_problem The seed problem or challenge to explore
+
+options:
+ --strategy {breakthrough,exhaust,coverage}
+ Termination strategy (default: breakthrough)
+ --threshold, -t Novelty threshold for breakthrough (default: 0.4)
+ --max-iter, -m Maximum iterations (default: 20)
+ --language, -l {en,zh}
+ Language for prompts and experts (default: en)
+ --model LLM model for task generation (default: qwen3:8b)
+ --embedding-model Embedding model (default: qwen3-embedding:4b)
+ --temperature LLM temperature (default: 0.7)
+ --output, -o Save results to JSON file
+ --quiet, -q Suppress iteration output
+ --verbose, -v Enable verbose logging
+```
+
+## File Structure
+
+```
+experiments/novelty_loop/
+├── README.md # This file
+├── agent.py # Core NoveltyDrivenTaskAgent and variants
+├── novelty_metrics.py # Novelty computation utilities
+└── demo.py # Interactive CLI demo
+```
+
+## Design Decisions
+
+| Question | Decision | Rationale |
+|----------|----------|-----------|
+| Output Type | **Tasks** | Self-generated sub-goals for autonomous problem decomposition |
+| Termination | **Seek Breakthrough** | Stop when novelty exceeds threshold - find truly novel task |
+| Perturbation | **Expert Perspectives** | Experts have task-oriented knowledge; more natural than abstract domains |
+| Novelty Reference | **Centroid** | Dynamic, adapts as exploration progresses |
+
+## Connection to Main Project
+
+This module integrates with the main novelty-seeking project:
+
+- Uses the same **curated occupation data** (`backend/app/data/curated_occupations_*.json`)
+- Uses the same **embedding model** (qwen3-embedding:4b)
+- Builds on the **AUT flexibility analysis** metrics for novelty computation
+- Can use **DDC domain data** for alternative perturbation strategies
+
+## Future Work
+
+1. **Hybrid Perturbation**: Combine expert + domain perspectives
+2. **Contrastive Prompting**: Explicitly ask for outputs unlike recent ones
+3. **Semantic Steering**: Guide generation away from centroid direction
+4. **Multi-Agent Exploration**: Parallel agents with different strategies
+5. **Quality-Diversity Archive**: Maintain diverse high-quality solutions
+
+## References
+
+- Lehman, J., & Stanley, K. O. (2011). Abandoning objectives: Evolution through the search for novelty alone.
+- Pathak, D., et al. (2017). Curiosity-driven exploration by self-supervised prediction.
+- Mouret, J. B., & Clune, J. (2015). Illuminating search spaces by mapping elites.
+- arXiv:2405.00899 - Characterising Creative Process in Humans and LLMs
diff --git a/experiments/novelty_loop/__init__.py b/experiments/novelty_loop/__init__.py
new file mode 100644
index 0000000..fdae51e
--- /dev/null
+++ b/experiments/novelty_loop/__init__.py
@@ -0,0 +1,42 @@
+"""
+Novelty-Driven LLM Agent Loop
+
+An autonomous agent that generates tasks using novelty as the termination condition.
+"""
+
+from .agent import (
+ NoveltyDrivenTaskAgent,
+ ExhaustFrontierAgent,
+ CoverageTargetAgent,
+ GeneratedTask,
+ TaskGenerationResult,
+ ExpertProvider,
+ DomainProvider,
+)
+
+from .novelty_metrics import (
+ NoveltyMetrics,
+ NoveltyScore,
+ NoveltyTrajectory,
+ compute_batch_novelty,
+ find_most_novel,
+)
+
+__all__ = [
+ # Agents
+ "NoveltyDrivenTaskAgent",
+ "ExhaustFrontierAgent",
+ "CoverageTargetAgent",
+ # Data classes
+ "GeneratedTask",
+ "TaskGenerationResult",
+ "NoveltyScore",
+ "NoveltyTrajectory",
+ # Providers
+ "ExpertProvider",
+ "DomainProvider",
+ # Metrics
+ "NoveltyMetrics",
+ "compute_batch_novelty",
+ "find_most_novel",
+]
diff --git a/experiments/novelty_loop/agent.py b/experiments/novelty_loop/agent.py
new file mode 100644
index 0000000..293f387
--- /dev/null
+++ b/experiments/novelty_loop/agent.py
@@ -0,0 +1,725 @@
+"""
+Novelty-Driven Task Agent - An autonomous agent that generates tasks using novelty as termination condition.
+
+This agent operates in a while loop, generating tasks from diverse expert perspectives,
+and terminates when it finds a task that exceeds the novelty threshold (a "breakthrough").
+
+The core innovation is using novelty assessment to help the agent "jump out" of its
+training data distribution (semantic gravity), finding truly novel ideas.
+
+Architecture:
+ Seed Problem → Expert Sample → Task Generation → Novelty Assessment → Continue/Stop
+
+Termination Strategy: "Seek Breakthrough"
+ - Continue until novelty > threshold
+ - Find the first truly novel task and stop
+
+Research Foundation:
+ - Novelty Search (Lehman & Stanley): Reward novelty, not objectives
+ - Curiosity-driven Exploration (Pathak et al.): Intrinsic motivation via prediction error
+ - Quality-Diversity (MAP-Elites): Maintain diverse high-quality solutions
+"""
+
+import asyncio
+import json
+import logging
+import random
+from dataclasses import dataclass, field
+from datetime import datetime, timezone
+from pathlib import Path
+from typing import Any, Callable, List, Optional
+
+import httpx
+import numpy as np
+
+from .novelty_metrics import NoveltyMetrics, NoveltyScore, NoveltyTrajectory
+
+logger = logging.getLogger(__name__)
+
+
+# ============================================================================
+# Data Classes
+# ============================================================================
+
+@dataclass
+class GeneratedTask:
+ """A single generated task with metadata."""
+ task: str
+ expert: str
+ expert_domain: str
+ novelty_score: float
+ iteration: int
+ is_breakthrough: bool = False
+ embedding: Optional[np.ndarray] = None
+
+
+@dataclass
+class TaskGenerationResult:
+ """Result of a complete novelty-driven task generation session."""
+ seed_problem: str
+ breakthrough_task: Optional[GeneratedTask] = None
+ trajectory: List[GeneratedTask] = field(default_factory=list)
+ total_iterations: int = 0
+ terminated_by: str = "unknown" # "breakthrough", "max_iterations", "error"
+ novelty_trajectory: Optional[NoveltyTrajectory] = None
+ start_time: Optional[str] = None
+ end_time: Optional[str] = None
+ config: dict = field(default_factory=dict)
+
+ def to_dict(self) -> dict:
+ """Convert to dictionary for JSON serialization."""
+ return {
+ "seed_problem": self.seed_problem,
+ "breakthrough_task": {
+ "task": self.breakthrough_task.task,
+ "expert": self.breakthrough_task.expert,
+ "expert_domain": self.breakthrough_task.expert_domain,
+ "novelty_score": self.breakthrough_task.novelty_score,
+ "iteration": self.breakthrough_task.iteration
+ } if self.breakthrough_task else None,
+ "trajectory": [
+ {
+ "task": t.task,
+ "expert": t.expert,
+ "expert_domain": t.expert_domain,
+ "novelty_score": t.novelty_score,
+ "iteration": t.iteration,
+ "is_breakthrough": t.is_breakthrough
+ }
+ for t in self.trajectory
+ ],
+ "total_iterations": self.total_iterations,
+ "terminated_by": self.terminated_by,
+ "novelty_stats": {
+ "mean_novelty": self.novelty_trajectory.mean_novelty if self.novelty_trajectory else 0,
+ "max_novelty": self.novelty_trajectory.max_novelty if self.novelty_trajectory else 0,
+ "jump_ratio": self.novelty_trajectory.jump_ratio if self.novelty_trajectory else 0,
+ "cumulative_novelty": self.novelty_trajectory.final_cumulative_novelty if self.novelty_trajectory else 0
+ },
+ "start_time": self.start_time,
+ "end_time": self.end_time,
+ "config": self.config
+ }
+
+
+# ============================================================================
+# Expert/Domain Providers
+# ============================================================================
+
+class ExpertProvider:
+ """Provides random experts from curated occupation lists."""
+
+ def __init__(self, data_dir: Optional[Path] = None, language: str = "en"):
+ """
+ Args:
+ data_dir: Path to data directory containing occupation JSON files
+ language: Language code ("en" or "zh")
+ """
+ if data_dir is None:
+ # Default to backend data directory
+ data_dir = Path(__file__).parent.parent.parent / "backend" / "app" / "data"
+
+ self.data_dir = data_dir
+ self.language = language
+ self._occupations: List[dict] = []
+ self._load_occupations()
+
+ def _load_occupations(self):
+ """Load occupations from JSON file."""
+ file_path = self.data_dir / f"curated_occupations_{self.language}.json"
+
+ if not file_path.exists():
+ logger.warning(f"Occupation file not found: {file_path}")
+ # Fallback to some default experts
+ self._occupations = [
+ {"name": "Marine Biologist", "domain": "Science"},
+ {"name": "Choreographer", "domain": "Arts"},
+ {"name": "Urban Planner", "domain": "Architecture"},
+ {"name": "Chef", "domain": "Culinary"},
+ {"name": "Astronomer", "domain": "Science"},
+ ]
+ return
+
+ try:
+ with open(file_path, "r", encoding="utf-8") as f:
+ data = json.load(f)
+ self._occupations = data.get("occupations", [])
+ logger.info(f"Loaded {len(self._occupations)} occupations from {file_path.name}")
+ except Exception as e:
+ logger.error(f"Error loading occupations: {e}")
+ self._occupations = []
+
+ def get_random_expert(self) -> dict:
+ """Get a random expert with name and domain."""
+ if not self._occupations:
+ return {"name": "Expert", "domain": "General"}
+ return random.choice(self._occupations)
+
+ def get_random_experts(self, count: int) -> List[dict]:
+ """Get multiple random experts without replacement."""
+ if len(self._occupations) <= count:
+ return self._occupations.copy()
+ return random.sample(self._occupations, count)
+
+
+class DomainProvider:
+ """Provides random knowledge domains from DDC classification."""
+
+ def __init__(self, data_dir: Optional[Path] = None, language: str = "en"):
+ if data_dir is None:
+ data_dir = Path(__file__).parent.parent.parent / "backend" / "app" / "data"
+
+ self.data_dir = data_dir
+ self.language = language
+ self._domains: List[dict] = []
+ self._load_domains()
+
+ def _load_domains(self):
+ """Load domains from JSON file."""
+ file_path = self.data_dir / f"ddc_domains_{self.language}.json"
+
+ if not file_path.exists():
+ logger.warning(f"Domain file not found: {file_path}")
+ self._domains = []
+ return
+
+ try:
+ with open(file_path, "r", encoding="utf-8") as f:
+ data = json.load(f)
+ self._domains = data.get("domains", [])
+ logger.info(f"Loaded {len(self._domains)} domains from {file_path.name}")
+ except Exception as e:
+ logger.error(f"Error loading domains: {e}")
+
+ def get_random_domain(self, level: Optional[str] = None) -> dict:
+ """Get a random domain, optionally filtered by level."""
+ domains = self._domains
+ if level:
+ domains = [d for d in domains if d.get("level") == level]
+
+ if not domains:
+ return {"name": "General Knowledge", "code": "000"}
+ return random.choice(domains)
+
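+# Usage sketch (illustrative; assumes the curated/DDC JSON data files exist,
+# otherwise the providers fall back to their built-in defaults):
+#   experts = ExpertProvider(language="en").get_random_experts(3)
+#   domain = DomainProvider(language="en").get_random_domain()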
+
+# ============================================================================
+# Novelty-Driven Task Agent
+# ============================================================================
+
+class NoveltyDrivenTaskAgent:
+ """
+ An autonomous agent that generates tasks using novelty as the termination condition.
+
+ The agent operates in a loop:
+ 1. Sample a random expert perspective
+ 2. Generate a task from that expert's viewpoint
+ 3. Compute the task's novelty (distance from centroid of previous tasks)
+ 4. If novelty > threshold → STOP (found breakthrough!)
+ 5. Otherwise → Continue with next expert
+
+ Example:
+ agent = NoveltyDrivenTaskAgent(novelty_threshold=0.4)
+ result = await agent.run("Improve urban transportation")
+
+ # result.breakthrough_task contains the novel task found
+ # result.trajectory shows the exploration path
+ """
+
+ def __init__(
+ self,
+ novelty_threshold: float = 0.4,
+ max_iterations: int = 20,
+ ollama_base_url: str = "http://localhost:11435",
+ llm_model: str = "qwen3:8b",
+ embedding_model: str = "qwen3-embedding:4b",
+ language: str = "en",
+ data_dir: Optional[Path] = None,
+ on_iteration: Optional[Callable[[GeneratedTask], None]] = None,
+ temperature: float = 0.7
+ ):
+ """
+ Args:
+ novelty_threshold: Novelty score threshold for breakthrough (0.0-1.0)
+ max_iterations: Maximum iterations before stopping
+ ollama_base_url: Ollama API endpoint
+ llm_model: Model for task generation
+ embedding_model: Model for embeddings
+ language: Language for prompts and experts ("en" or "zh")
+ data_dir: Path to data directory for expert/domain files
+ on_iteration: Callback function called after each iteration
+ temperature: LLM temperature for generation
+ """
+ self.novelty_threshold = novelty_threshold
+ self.max_iterations = max_iterations
+ self.ollama_base_url = ollama_base_url
+ self.llm_model = llm_model
+ self.embedding_model = embedding_model
+ self.language = language
+ self.temperature = temperature
+ self.on_iteration = on_iteration
+
+ # Initialize providers
+ self.expert_provider = ExpertProvider(data_dir, language)
+ self.domain_provider = DomainProvider(data_dir, language)
+
+ # Initialize novelty metrics
+ self.novelty_metrics = NoveltyMetrics(
+ similarity_threshold=0.7,
+ jump_detection_enabled=True
+ )
+
+ # HTTP client
+ self._client: Optional[httpx.AsyncClient] = None
+
+ async def _get_client(self) -> httpx.AsyncClient:
+ """Get or create HTTP client."""
+ if self._client is None:
+ self._client = httpx.AsyncClient(timeout=120.0)
+ return self._client
+
+ async def close(self):
+ """Close HTTP client."""
+ if self._client is not None:
+ await self._client.aclose()
+ self._client = None
+
+ async def _generate_text(self, prompt: str) -> str:
+ """Generate text using Ollama LLM."""
+ client = await self._get_client()
+ url = f"{self.ollama_base_url}/api/generate"
+
+ # Add /no_think prefix for qwen models to disable thinking
+ if self.llm_model.lower().startswith("qwen"):
+ prompt = f"/no_think\n{prompt}"
+
+ try:
+ response = await client.post(url, json={
+ "model": self.llm_model,
+ "prompt": prompt,
+ "stream": False,
+ "options": {
+ "temperature": self.temperature
+ }
+ })
+ response.raise_for_status()
+ result = response.json()
+ return result.get("response", "").strip()
+ except Exception as e:
+ logger.error(f"LLM generation error: {e}")
+ raise
+
+ async def _get_embedding(self, text: str) -> np.ndarray:
+ """Get embedding vector for text."""
+ client = await self._get_client()
+ url = f"{self.ollama_base_url}/api/embed"
+
+ try:
+ response = await client.post(url, json={
+ "model": self.embedding_model,
+ "input": text
+ })
+ response.raise_for_status()
+ result = response.json()
+ return np.array(result["embeddings"][0])
+ except Exception as e:
+ logger.error(f"Embedding error: {e}")
+ raise
+
+ def _build_task_prompt(
+ self,
+ seed_problem: str,
+ expert: dict,
+ previous_tasks: List[str]
+ ) -> str:
+ """Build the prompt for task generation."""
+ expert_name = expert.get("name", "Expert")
+ expert_domain = expert.get("domain", "General")
+
+ # Build context from previous tasks (if any)
+ context = ""
+ if previous_tasks:
+ recent = previous_tasks[-3:] # Last 3 tasks
+ context = "\n\nPrevious suggestions (generate something DIFFERENT):\n"
+ for t in recent:
+ context += f"- {t}\n"
+
+ if self.language == "zh":
+ prompt = f"""你是一位 {expert_name}({expert_domain})。
+
+给定问题:{seed_problem}
+
+请从你的专业角度出发,提出一个独特的改进任务或探索方向。
+这个任务应该结合你的专业知识,提供一个非传统但有价值的视角。
+{context}
+请直接给出任务描述,不要添加解释。任务应该具体、可行、且与众不同。
+
+任务:"""
+ else:
+ prompt = f"""You are a {expert_name} ({expert_domain}).
+
+Given problem: {seed_problem}
+
+From your professional perspective, propose a unique task or exploration direction to improve or innovate on this problem.
+The task should leverage your domain expertise to provide an unconventional but valuable angle.
+{context}
+Provide just the task description without explanation. The task should be specific, actionable, and distinctive.
+
+Task:"""
+
+ return prompt
+
+ async def _generate_task(
+ self,
+ seed_problem: str,
+ expert: dict,
+ previous_tasks: List[str]
+ ) -> str:
+ """Generate a task from an expert's perspective."""
+ prompt = self._build_task_prompt(seed_problem, expert, previous_tasks)
+ task = await self._generate_text(prompt)
+
+ # Clean up the response
+ task = task.strip()
+ # Remove common prefixes (include both ASCII and fullwidth colons for the Chinese label)
+ for prefix in ["Task:", "任务:", "任务:", "Here's", "I suggest", "Based on"]:
+ if task.lower().startswith(prefix.lower()):
+ task = task[len(prefix):].strip()
+
+ return task
+
+ async def run(
+ self,
+ seed_problem: str,
+ used_experts: Optional[List[dict]] = None
+ ) -> TaskGenerationResult:
+ """
+ Run the novelty-driven task generation loop.
+
+ Args:
+ seed_problem: The initial problem/challenge to explore
+ used_experts: Optional list of experts to avoid (for multi-run scenarios)
+
+ Returns:
+ TaskGenerationResult with breakthrough task (if found) and full trajectory
+ """
+ # Reset state
+ self.novelty_metrics.reset()
+
+ result = TaskGenerationResult(
+ seed_problem=seed_problem,
+ start_time=datetime.now(timezone.utc).isoformat(),
+ config={
+ "novelty_threshold": self.novelty_threshold,
+ "max_iterations": self.max_iterations,
+ "llm_model": self.llm_model,
+ "embedding_model": self.embedding_model,
+ "language": self.language
+ }
+ )
+
+ used_expert_names = set()
+ if used_experts:
+ used_expert_names = {e["name"] for e in used_experts}
+
+ previous_tasks: List[str] = []
+
+ logger.info(f"Starting novelty loop: '{seed_problem}' (threshold={self.novelty_threshold})")
+
+ try:
+ for iteration in range(self.max_iterations):
+ # 1. Sample a random expert (avoid duplicates)
+ attempts = 0
+ expert = self.expert_provider.get_random_expert()
+ while expert["name"] in used_expert_names and attempts < 10:
+ expert = self.expert_provider.get_random_expert()
+ attempts += 1
+ used_expert_names.add(expert["name"])
+
+ logger.info(f"Iteration {iteration + 1}: Expert = {expert['name']} ({expert['domain']})")
+
+ # 2. Generate task
+ task = await self._generate_task(seed_problem, expert, previous_tasks)
+ previous_tasks.append(task)
+
+ # 3. Get embedding
+ embedding = await self._get_embedding(task)
+
+ # 4. Compute novelty
+ novelty = self.novelty_metrics.compute_novelty(embedding)
+ self.novelty_metrics.add_embedding(embedding, novelty)
+
+ # 5. Create task record
+ generated_task = GeneratedTask(
+ task=task,
+ expert=expert["name"],
+ expert_domain=expert["domain"],
+ novelty_score=novelty.score,
+ iteration=iteration + 1,
+ is_breakthrough=novelty.score > self.novelty_threshold,
+ embedding=embedding
+ )
+ result.trajectory.append(generated_task)
+
+ logger.info(f" Task: {task[:80]}...")
+ logger.info(f" Novelty: {novelty.score:.4f} (threshold: {self.novelty_threshold})")
+
+ # Callback
+ if self.on_iteration:
+ self.on_iteration(generated_task)
+
+ # 6. Check for breakthrough
+ if novelty.score > self.novelty_threshold:
+ result.breakthrough_task = generated_task
+ result.terminated_by = "breakthrough"
+ result.total_iterations = iteration + 1
+ logger.info(f" BREAKTHROUGH! Stopping after {iteration + 1} iterations")
+ break
+
+ else:
+ # Max iterations reached without breakthrough
+ result.terminated_by = "max_iterations"
+ result.total_iterations = self.max_iterations
+ logger.info(f"Max iterations ({self.max_iterations}) reached without breakthrough")
+
+ # Find the most novel task as a fallback
+ if result.trajectory:
+ best_task = max(result.trajectory, key=lambda t: t.novelty_score)
+ best_task.is_breakthrough = True # Mark as best found
+ result.breakthrough_task = best_task
+
+ except Exception as e:
+ logger.error(f"Error during generation: {e}")
+ result.terminated_by = f"error: {str(e)}"
+ result.total_iterations = len(result.trajectory)
+
+ # Finalize
+ result.end_time = datetime.now(timezone.utc).isoformat()
+ result.novelty_trajectory = self.novelty_metrics.trajectory
+
+ return result
+
+
+# ============================================================================
+# Alternative Termination Strategies
+# ============================================================================
+
+class ExhaustFrontierAgent(NoveltyDrivenTaskAgent):
+ """
+ Alternative strategy: Continue while novelty is high, stop when it drops.
+
+ This explores the "novelty frontier" more thoroughly, finding multiple novel
+ ideas before stopping when exploration becomes repetitive.
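+
+ Example (mirrors the parent-class usage; the seed string is illustrative):
+ agent = ExhaustFrontierAgent(exhaustion_threshold=0.15, window_size=3)
+ result = await agent.run("Sustainable energy solutions")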
+ """
+
+ def __init__(
+ self,
+ exhaustion_threshold: float = 0.15,
+ window_size: int = 3,
+ min_iterations: int = 5,
+ **kwargs
+ ):
+ """
+ Args:
+ exhaustion_threshold: Stop when recent average novelty drops below this
+ window_size: Number of recent iterations to average
+ min_iterations: Minimum iterations before checking exhaustion
+ **kwargs: Passed to parent class
+ """
+ super().__init__(**kwargs)
+ self.exhaustion_threshold = exhaustion_threshold
+ self.window_size = window_size
+ self.min_iterations = min_iterations
+
+ async def run(self, seed_problem: str, **kwargs) -> TaskGenerationResult:
+ """Override to use exhaustion-based termination."""
+ # Reset state
+ self.novelty_metrics.reset()
+
+ result = TaskGenerationResult(
+ seed_problem=seed_problem,
+ start_time=datetime.now(timezone.utc).isoformat(),
+ config={
+ "strategy": "exhaust_frontier",
+ "exhaustion_threshold": self.exhaustion_threshold,
+ "window_size": self.window_size,
+ "min_iterations": self.min_iterations,
+ "max_iterations": self.max_iterations,
+ "llm_model": self.llm_model
+ }
+ )
+
+ used_expert_names = set()
+ previous_tasks: List[str] = []
+ novelty_history: List[float] = []
+
+ try:
+ for iteration in range(self.max_iterations):
+ # Sample expert (bounded retries so a small fallback list cannot loop forever)
+ attempts = 0
+ expert = self.expert_provider.get_random_expert()
+ while expert["name"] in used_expert_names and attempts < 10:
+ expert = self.expert_provider.get_random_expert()
+ attempts += 1
+ used_expert_names.add(expert["name"])
+
+ # Generate and evaluate
+ task = await self._generate_task(seed_problem, expert, previous_tasks)
+ previous_tasks.append(task)
+ embedding = await self._get_embedding(task)
+ novelty = self.novelty_metrics.compute_novelty(embedding)
+ self.novelty_metrics.add_embedding(embedding, novelty)
+
+ novelty_history.append(novelty.score)
+
+ generated_task = GeneratedTask(
+ task=task,
+ expert=expert["name"],
+ expert_domain=expert["domain"],
+ novelty_score=novelty.score,
+ iteration=iteration + 1
+ )
+ result.trajectory.append(generated_task)
+
+ if self.on_iteration:
+ self.on_iteration(generated_task)
+
+ # Check exhaustion condition
+ if iteration >= self.min_iterations:
+ recent_avg = np.mean(novelty_history[-self.window_size:])
+ if recent_avg < self.exhaustion_threshold:
+ result.terminated_by = f"exhaustion (avg={recent_avg:.3f})"
+ result.total_iterations = iteration + 1
+ break
+
+ else:
+ result.terminated_by = "max_iterations"
+ result.total_iterations = self.max_iterations
+
+ # Find all "novel" tasks
+ novel_tasks = [t for t in result.trajectory if t.novelty_score > self.exhaustion_threshold]
+ if novel_tasks:
+ result.breakthrough_task = max(novel_tasks, key=lambda t: t.novelty_score)
+ result.breakthrough_task.is_breakthrough = True
+
+ except Exception as e:
+ result.terminated_by = f"error: {str(e)}"
+ result.total_iterations = len(result.trajectory)
+
+ result.end_time = datetime.now(timezone.utc).isoformat()
+ result.novelty_trajectory = self.novelty_metrics.trajectory
+
+ return result
+
+
+class CoverageTargetAgent(NoveltyDrivenTaskAgent):
+ """
+ Alternative strategy: Continue until N distinct clusters are covered.
+
+ This ensures a diverse portfolio of ideas across different conceptual areas.
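+
+ Example (illustrative; the defaults are shown explicitly):
+ agent = CoverageTargetAgent(target_clusters=5, cluster_threshold=0.7)
+ result = await agent.run("Future of education")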
+ """
+
+ def __init__(
+ self,
+ target_clusters: int = 5,
+ cluster_threshold: float = 0.7,
+ **kwargs
+ ):
+ """
+ Args:
+ target_clusters: Target number of distinct clusters to find
+ cluster_threshold: Similarity threshold for cluster membership
+ **kwargs: Passed to parent class
+ """
+ super().__init__(**kwargs)
+ self.target_clusters = target_clusters
+ self.cluster_threshold = cluster_threshold
+
+ def _count_clusters(self, embeddings: List[np.ndarray]) -> int:
+ """Count distinct clusters using greedy clustering."""
+ if not embeddings:
+ return 0
+
+ clusters = []
+ for emb in embeddings:
+ found_cluster = False
+ for cluster_centroid in clusters:
+ similarity = NoveltyMetrics.cosine_similarity(emb, cluster_centroid)
+ if similarity >= self.cluster_threshold:
+ found_cluster = True
+ break
+
+ if not found_cluster:
+ clusters.append(emb)
+
+ return len(clusters)
+
+ async def run(self, seed_problem: str, **kwargs) -> TaskGenerationResult:
+ """Override to use coverage-based termination."""
+ self.novelty_metrics.reset()
+
+ result = TaskGenerationResult(
+ seed_problem=seed_problem,
+ start_time=datetime.now(timezone.utc).isoformat(),
+ config={
+ "strategy": "coverage_target",
+ "target_clusters": self.target_clusters,
+ "cluster_threshold": self.cluster_threshold,
+ "max_iterations": self.max_iterations
+ }
+ )
+
+ used_expert_names = set()
+ previous_tasks: List[str] = []
+ all_embeddings: List[np.ndarray] = []
+
+ try:
+ for iteration in range(self.max_iterations):
+ # Bounded retries so a small fallback list cannot loop forever
+ attempts = 0
+ expert = self.expert_provider.get_random_expert()
+ while expert["name"] in used_expert_names and attempts < 10:
+ expert = self.expert_provider.get_random_expert()
+ attempts += 1
+ used_expert_names.add(expert["name"])
+
+ task = await self._generate_task(seed_problem, expert, previous_tasks)
+ previous_tasks.append(task)
+ embedding = await self._get_embedding(task)
+ all_embeddings.append(embedding)
+
+ novelty = self.novelty_metrics.compute_novelty(embedding)
+ self.novelty_metrics.add_embedding(embedding, novelty)
+
+ generated_task = GeneratedTask(
+ task=task,
+ expert=expert["name"],
+ expert_domain=expert["domain"],
+ novelty_score=novelty.score,
+ iteration=iteration + 1
+ )
+ result.trajectory.append(generated_task)
+
+ if self.on_iteration:
+ self.on_iteration(generated_task)
+
+ # Check coverage
+ cluster_count = self._count_clusters(all_embeddings)
+ if cluster_count >= self.target_clusters:
+ result.terminated_by = f"coverage ({cluster_count} clusters)"
+ result.total_iterations = iteration + 1
+ break
+
+ else:
+ final_clusters = self._count_clusters(all_embeddings)
+ result.terminated_by = f"max_iterations ({final_clusters} clusters)"
+ result.total_iterations = self.max_iterations
+
+ # Find most novel task
+ if result.trajectory:
+ best_task = max(result.trajectory, key=lambda t: t.novelty_score)
+ best_task.is_breakthrough = True
+ result.breakthrough_task = best_task
+
+ except Exception as e:
+ result.terminated_by = f"error: {str(e)}"
+ result.total_iterations = len(result.trajectory)
+
+ result.end_time = datetime.now(timezone.utc).isoformat()
+ result.novelty_trajectory = self.novelty_metrics.trajectory
+
+ return result
diff --git a/experiments/novelty_loop/demo.py b/experiments/novelty_loop/demo.py
new file mode 100755
index 0000000..cc4c523
--- /dev/null
+++ b/experiments/novelty_loop/demo.py
@@ -0,0 +1,313 @@
+#!/usr/bin/env python3
+"""
+Novelty-Driven Task Generation Demo
+
+Interactive CLI for exploring the novelty-driven task generation agent.
+
+Examples:
+ # Basic usage with default settings
+ python demo.py "Improve urban transportation"
+
+ # Custom threshold and iterations
+ python demo.py "Design a better bicycle" --threshold 0.35 --max-iter 15
+
+ # Use Chinese language
+ python demo.py "改进城市交通" --language zh
+
+ # Use exhaustion strategy (explore until stuck)
+ python demo.py "Sustainable energy solutions" --strategy exhaust
+
+ # Use coverage strategy (find N distinct clusters)
+ python demo.py "Future of education" --strategy coverage --clusters 5
+
+ # Save results to file
+ python demo.py "Smart home innovations" --output results.json
+
+ # Verbose mode with detailed logging
+ python demo.py "Healthcare improvements" --verbose
+"""
+
+import argparse
+import asyncio
+import json
+import logging
+import sys
+from pathlib import Path
+
+# Add parent directory to path for imports
+sys.path.insert(0, str(Path(__file__).parent.parent.parent))
+
+from experiments.novelty_loop.agent import (
+ NoveltyDrivenTaskAgent,
+ ExhaustFrontierAgent,
+ CoverageTargetAgent,
+ GeneratedTask,
+ TaskGenerationResult
+)
+
+# ANSI color codes for terminal output
+class Colors:
+ HEADER = '\033[95m'
+ BLUE = '\033[94m'
+ CYAN = '\033[96m'
+ GREEN = '\033[92m'
+ YELLOW = '\033[93m'
+ RED = '\033[91m'
+ BOLD = '\033[1m'
+ UNDERLINE = '\033[4m'
+ END = '\033[0m'
+
+
+def print_header(text: str):
+ """Print a styled header."""
+ print(f"\n{Colors.BOLD}{Colors.HEADER}{'='*60}{Colors.END}")
+ print(f"{Colors.BOLD}{Colors.HEADER}{text.center(60)}{Colors.END}")
+ print(f"{Colors.BOLD}{Colors.HEADER}{'='*60}{Colors.END}\n")
+
+
+def print_iteration(task: GeneratedTask):
+ """Print iteration result with colors."""
+ status_color = Colors.GREEN if task.is_breakthrough else Colors.CYAN
+
+ print(f"\n{Colors.BOLD}Iteration {task.iteration}{Colors.END}")
+ print(f" {Colors.YELLOW}Expert:{Colors.END} {task.expert} ({task.expert_domain})")
+ print(f" {Colors.YELLOW}Task:{Colors.END} {task.task}")
+
+ novelty_bar = "█" * int(task.novelty_score * 20) + "░" * (20 - int(task.novelty_score * 20))
+ print(f" {Colors.YELLOW}Novelty:{Colors.END} [{novelty_bar}] {task.novelty_score:.4f}")
+
+ if task.is_breakthrough:
+ print(f" {Colors.GREEN}{Colors.BOLD}★ BREAKTHROUGH! ★{Colors.END}")
+
+
+def print_result(result: TaskGenerationResult):
+ """Print final result summary."""
+ print_header("RESULTS")
+
+ print(f"{Colors.BOLD}Seed Problem:{Colors.END} {result.seed_problem}")
+ print(f"{Colors.BOLD}Total Iterations:{Colors.END} {result.total_iterations}")
+ print(f"{Colors.BOLD}Terminated By:{Colors.END} {result.terminated_by}")
+
+ if result.novelty_trajectory:
+ print(f"\n{Colors.BOLD}Novelty Statistics:{Colors.END}")
+ print(f" Mean Novelty: {result.novelty_trajectory.mean_novelty:.4f}")
+ print(f" Max Novelty: {result.novelty_trajectory.max_novelty:.4f}")
+ print(f" Jump Ratio: {result.novelty_trajectory.jump_ratio:.2%}")
+
+ if result.breakthrough_task:
+ print(f"\n{Colors.GREEN}{Colors.BOLD}{'='*60}{Colors.END}")
+ print(f"{Colors.GREEN}{Colors.BOLD}BREAKTHROUGH TASK{Colors.END}")
+ print(f"{Colors.GREEN}{Colors.BOLD}{'='*60}{Colors.END}")
+ print(f"\n{Colors.BOLD}Expert:{Colors.END} {result.breakthrough_task.expert}")
+ print(f"{Colors.BOLD}Domain:{Colors.END} {result.breakthrough_task.expert_domain}")
+ print(f"{Colors.BOLD}Task:{Colors.END}")
+ print(f" {Colors.CYAN}{result.breakthrough_task.task}{Colors.END}")
+ print(f"\n{Colors.BOLD}Novelty Score:{Colors.END} {result.breakthrough_task.novelty_score:.4f}")
+ print(f"{Colors.BOLD}Found at Iteration:{Colors.END} {result.breakthrough_task.iteration}")
+
+ # Show trajectory summary
+ print(f"\n{Colors.BOLD}Exploration Trajectory:{Colors.END}")
+ for task in result.trajectory:
+ marker = "★" if task.is_breakthrough else "○"
+ novelty_indicator = "█" * int(task.novelty_score * 10)
+ print(f" {marker} [{task.iteration:2d}] {task.expert:20s} | {novelty_indicator:10s} {task.novelty_score:.3f}")
+
+
+def save_result(result: TaskGenerationResult, output_path: str):
+ """Save result to JSON file."""
+ with open(output_path, "w", encoding="utf-8") as f:
+ json.dump(result.to_dict(), f, ensure_ascii=False, indent=2)
+ print(f"\n{Colors.GREEN}Results saved to: {output_path}{Colors.END}")
+
+
+async def run_demo(args):
+ """Run the novelty-driven task generation demo."""
+
+ print_header("NOVELTY-DRIVEN TASK GENERATION")
+
+ print(f"{Colors.BOLD}Configuration:{Colors.END}")
+ print(f" Seed Problem: {args.seed_problem}")
+ print(f" Strategy: {args.strategy}")
+ print(f" Novelty Threshold: {args.threshold}")
+ print(f" Max Iterations: {args.max_iter}")
+ print(f" Language: {args.language}")
+ print(f" LLM Model: {args.model}")
+
+ # Create appropriate agent based on strategy
+ common_kwargs = {
+ "max_iterations": args.max_iter,
+ "llm_model": args.model,
+ "embedding_model": args.embedding_model,
+ "language": args.language,
+ "temperature": args.temperature,
+ "on_iteration": print_iteration if not args.quiet else None
+ }
+
+ if args.strategy == "breakthrough":
+ agent = NoveltyDrivenTaskAgent(
+ novelty_threshold=args.threshold,
+ **common_kwargs
+ )
+ elif args.strategy == "exhaust":
+ agent = ExhaustFrontierAgent(
+ exhaustion_threshold=args.exhaust_threshold,
+ window_size=args.window_size,
+ min_iterations=args.min_iter,
+ **common_kwargs
+ )
+ elif args.strategy == "coverage":
+ agent = CoverageTargetAgent(
+ target_clusters=args.clusters,
+ cluster_threshold=args.cluster_threshold,
+ **common_kwargs
+ )
+ else:
+ print(f"{Colors.RED}Unknown strategy: {args.strategy}{Colors.END}")
+ return
+
+ print(f"\n{Colors.BOLD}Starting generation loop...{Colors.END}")
+ print("-" * 60)
+
+ try:
+ result = await agent.run(args.seed_problem)
+ print_result(result)
+
+ if args.output:
+ save_result(result, args.output)
+
+ except Exception as e:
+ print(f"\n{Colors.RED}Error: {e}{Colors.END}")
+ if args.verbose:
+ import traceback
+ traceback.print_exc()
+ finally:
+ await agent.close()
+
+
+def main():
+ parser = argparse.ArgumentParser(
+ description="Novelty-Driven Task Generation Demo",
+ formatter_class=argparse.RawDescriptionHelpFormatter,
+ epilog=__doc__
+ )
+
+ # Required argument
+ parser.add_argument(
+ "seed_problem",
+ help="The seed problem or challenge to explore"
+ )
+
+ # Strategy selection
+ parser.add_argument(
+ "--strategy", "-s",
+ choices=["breakthrough", "exhaust", "coverage"],
+ default="breakthrough",
+ help="Termination strategy (default: breakthrough)"
+ )
+
+ # Common options
+ parser.add_argument(
+ "--threshold", "-t",
+ type=float,
+ default=0.4,
+ help="Novelty threshold for breakthrough (default: 0.4)"
+ )
+ parser.add_argument(
+ "--max-iter", "-m",
+ type=int,
+ default=20,
+ help="Maximum iterations (default: 20)"
+ )
+ parser.add_argument(
+ "--language", "-l",
+ choices=["en", "zh"],
+ default="en",
+ help="Language for prompts and experts (default: en)"
+ )
+
+ # Model options
+ parser.add_argument(
+ "--model",
+ default="qwen3:8b",
+ help="LLM model for task generation (default: qwen3:8b)"
+ )
+ parser.add_argument(
+ "--embedding-model",
+ default="qwen3-embedding:4b",
+ help="Embedding model (default: qwen3-embedding:4b)"
+ )
+ parser.add_argument(
+ "--temperature",
+ type=float,
+ default=0.7,
+ help="LLM temperature (default: 0.7)"
+ )
+
+ # Exhaust strategy options
+ parser.add_argument(
+ "--exhaust-threshold",
+ type=float,
+ default=0.15,
+ help="Exhaustion threshold for 'exhaust' strategy (default: 0.15)"
+ )
+ parser.add_argument(
+ "--window-size",
+ type=int,
+ default=3,
+ help="Window size for exhaustion check (default: 3)"
+ )
+ parser.add_argument(
+ "--min-iter",
+ type=int,
+ default=5,
+ help="Minimum iterations before exhaustion check (default: 5)"
+ )
+
+ # Coverage strategy options
+ parser.add_argument(
+ "--clusters",
+ type=int,
+ default=5,
+ help="Target clusters for 'coverage' strategy (default: 5)"
+ )
+ parser.add_argument(
+ "--cluster-threshold",
+ type=float,
+ default=0.7,
+ help="Cluster similarity threshold (default: 0.7)"
+ )
+
+ # Output options
+ parser.add_argument(
+ "--output", "-o",
+ help="Save results to JSON file"
+ )
+ parser.add_argument(
+ "--quiet", "-q",
+ action="store_true",
+ help="Suppress iteration output"
+ )
+ parser.add_argument(
+ "--verbose", "-v",
+ action="store_true",
+ help="Enable verbose logging"
+ )
+
+ args = parser.parse_args()
+
+ # Configure logging
+ if args.verbose:
+ logging.basicConfig(
+ level=logging.DEBUG,
+ format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
+ )
+ else:
+ logging.basicConfig(level=logging.WARNING)
+
+ # Run the demo
+ asyncio.run(run_demo(args))
+
+
+if __name__ == "__main__":
+ main()
diff --git a/experiments/novelty_loop/novelty_metrics.py b/experiments/novelty_loop/novelty_metrics.py
new file mode 100644
index 0000000..ac15e97
--- /dev/null
+++ b/experiments/novelty_loop/novelty_metrics.py
@@ -0,0 +1,269 @@
+"""
+Novelty Metrics Module - Compute novelty scores for generated outputs.
+
+This module provides embedding-based novelty metrics adapted from the AUT
+(Alternate Uses Task) flexibility analysis framework for use in novelty-driven agent loops.
+
+Key Metrics:
+- Centroid Distance: Measures how far a new output is from the centroid of previous outputs
+- Cumulative Novelty: Tracks novelty over the generation sequence
+- Jump Detection: Identifies significant semantic shifts between consecutive outputs
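+
+Example (a minimal sketch; `emb` stands for an embedding from your own model):
+ metrics = NoveltyMetrics(similarity_threshold=0.7)
+ score = metrics.compute_novelty(emb)
+ metrics.add_embedding(emb, score)
+ print(metrics.get_current_state())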
+"""
+
+from dataclasses import dataclass, field
+from typing import List, Optional
+import numpy as np
+
+
+@dataclass
+class NoveltyScore:
+ """Result of novelty computation for a single output."""
+ score: float # Main novelty score (0.0 = identical to centroid, 1.0 = maximally distant)
+ distance_from_centroid: float
+ min_distance_to_existing: float # Nearest neighbor distance
+ is_jump: bool # Whether this represents a significant semantic jump
+ jump_magnitude: Optional[float] = None # Similarity to previous output (if applicable)
+
+
+@dataclass
+class NoveltyTrajectory:
+ """Tracks novelty scores over a generation sequence."""
+ scores: List[float] = field(default_factory=list)
+ cumulative_novelty: List[float] = field(default_factory=list)
+ jump_positions: List[int] = field(default_factory=list)
+ centroid_history: List[np.ndarray] = field(default_factory=list)
+
+ @property
+ def mean_novelty(self) -> float:
+ """Average novelty across all outputs."""
+ return float(np.mean(self.scores)) if self.scores else 0.0
+
+ @property
+ def max_novelty(self) -> float:
+ """Maximum novelty achieved."""
+ return float(max(self.scores)) if self.scores else 0.0
+
+ @property
+ def jump_ratio(self) -> float:
+ """Proportion of transitions that were jumps."""
+ if len(self.scores) < 2:
+ return 0.0
+ return len(self.jump_positions) / (len(self.scores) - 1)
+
+ @property
+ def final_cumulative_novelty(self) -> float:
+ """Total accumulated novelty."""
+ return self.cumulative_novelty[-1] if self.cumulative_novelty else 0.0
+
+
+class NoveltyMetrics:
+ """
+ Computes novelty metrics for embeddings in a streaming fashion.
+
+ Designed for use in an agent loop where outputs are generated one at a time
+ and we need to assess novelty incrementally.
+ """
+
+ def __init__(
+ self,
+ similarity_threshold: float = 0.7,
+ jump_detection_enabled: bool = True
+ ):
+ """
+ Args:
+ similarity_threshold: Threshold for semantic similarity (below = jump)
+ jump_detection_enabled: Whether to track semantic jumps
+ """
+ self.similarity_threshold = similarity_threshold
+ self.jump_detection_enabled = jump_detection_enabled
+
+ # State
+ self.embeddings: List[np.ndarray] = []
+ self.trajectory = NoveltyTrajectory()
+ self._centroid: Optional[np.ndarray] = None
+
+ def reset(self):
+ """Reset all state for a new generation session."""
+ self.embeddings = []
+ self.trajectory = NoveltyTrajectory()
+ self._centroid = None
+
+ @staticmethod
+ def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
+ """Compute cosine similarity between two vectors."""
+ norm_a = np.linalg.norm(a)
+ norm_b = np.linalg.norm(b)
+ if norm_a == 0 or norm_b == 0:
+ return 0.0
+ return float(np.dot(a, b) / (norm_a * norm_b))
+
+ @staticmethod
+ def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
+ """Compute cosine distance (1 - similarity) between two vectors."""
+ return 1.0 - NoveltyMetrics.cosine_similarity(a, b)
+
+ def compute_centroid(self) -> Optional[np.ndarray]:
+ """Compute centroid of all current embeddings."""
+ if not self.embeddings:
+ return None
+ return np.mean(self.embeddings, axis=0)
+
+ def compute_novelty(self, embedding: np.ndarray) -> NoveltyScore:
+ """
+ Compute novelty score for a new embedding.
+
+ This does NOT add the embedding to the history - call add_embedding() for that.
+
+ Args:
+ embedding: The embedding vector to evaluate
+
+ Returns:
+ NoveltyScore with computed metrics
+ """
+ embedding = np.array(embedding)
+
+ # First output is maximally novel (nothing to compare to)
+ if not self.embeddings:
+ return NoveltyScore(
+ score=1.0,
+ distance_from_centroid=1.0,
+ min_distance_to_existing=1.0,
+ is_jump=False,
+ jump_magnitude=None
+ )
+
+ # Distance from centroid (primary novelty metric)
+ centroid = self.compute_centroid()
+ distance_from_centroid = self.cosine_distance(embedding, centroid)
+
+ # Minimum distance to any existing embedding (nearest neighbor)
+ min_distance = min(
+ self.cosine_distance(embedding, existing)
+ for existing in self.embeddings
+ )
+
+ # Jump detection (similarity to previous output)
+ is_jump = False
+ jump_magnitude = None
+ if self.jump_detection_enabled and self.embeddings:
+ similarity_to_prev = self.cosine_similarity(embedding, self.embeddings[-1])
+ jump_magnitude = similarity_to_prev
+ is_jump = similarity_to_prev < self.similarity_threshold
+
+ # Primary novelty score is the distance from the centroid. Cosine distance
+ # lies in [0, 2] in general; typical text embeddings have non-negative
+ # similarity, so in practice the score stays in [0, 1] (higher = more novel).
+ novelty_score = distance_from_centroid
+
+ return NoveltyScore(
+ score=novelty_score,
+ distance_from_centroid=distance_from_centroid,
+ min_distance_to_existing=min_distance,
+ is_jump=is_jump,
+ jump_magnitude=jump_magnitude
+ )
+
+ def add_embedding(self, embedding: np.ndarray, novelty: Optional[NoveltyScore] = None):
+ """
+ Add an embedding to the history and update trajectory.
+
+ Args:
+ embedding: The embedding to add
+ novelty: Pre-computed novelty score (computed if not provided)
+ """
+ embedding = np.array(embedding)
+
+ if novelty is None:
+ novelty = self.compute_novelty(embedding)
+
+ # Update state
+ self.embeddings.append(embedding)
+ self._centroid = self.compute_centroid()
+
+ # Update trajectory
+ self.trajectory.scores.append(novelty.score)
+
+ # Cumulative novelty
+ prev_cumulative = self.trajectory.cumulative_novelty[-1] if self.trajectory.cumulative_novelty else 0.0
+ self.trajectory.cumulative_novelty.append(prev_cumulative + novelty.score)
+
+ # Track jumps
+ if novelty.is_jump:
+ self.trajectory.jump_positions.append(len(self.embeddings) - 1)
+
+ # Store centroid history
+ if self._centroid is not None:
+ self.trajectory.centroid_history.append(self._centroid.copy())
+
+ def get_current_state(self) -> dict:
+ """Get current state as a dictionary for logging/debugging."""
+ return {
+ "num_embeddings": len(self.embeddings),
+ "mean_novelty": self.trajectory.mean_novelty,
+ "max_novelty": self.trajectory.max_novelty,
+ "jump_ratio": self.trajectory.jump_ratio,
+ "cumulative_novelty": self.trajectory.final_cumulative_novelty,
+ "recent_scores": self.trajectory.scores[-5:] if self.trajectory.scores else []
+ }
+
+
+def compute_batch_novelty(
+ embeddings: List[np.ndarray],
+ reference_embeddings: Optional[List[np.ndarray]] = None
+) -> List[float]:
+ """
+ Compute novelty scores for a batch of embeddings.
+
+ Useful for post-hoc analysis of generated outputs.
+
+ Args:
+ embeddings: List of embeddings to evaluate
+ reference_embeddings: Optional reference set; if omitted, the centroid of the batch itself is used
+
+ Returns:
+ List of novelty scores (distance from centroid)
+ """
+ if not embeddings:
+ return []
+
+ embeddings_arr = np.array(embeddings)
+
+ if reference_embeddings is not None:
+ centroid = np.mean(reference_embeddings, axis=0)
+ else:
+ centroid = np.mean(embeddings_arr, axis=0)
+
+ scores = []
+ for emb in embeddings_arr:
+ distance = NoveltyMetrics.cosine_distance(emb, centroid)
+ scores.append(distance)
+
+ return scores
+
+
+def find_most_novel(
+ embeddings: List[np.ndarray],
+ texts: List[str],
+ top_k: int = 5
+) -> List[tuple]:
+ """
+ Find the most novel outputs from a batch.
+
+ Args:
+ embeddings: List of embeddings
+ texts: Corresponding text outputs
+ top_k: Number of top results to return
+
+ Returns:
+ List of (text, novelty_score, index) tuples, sorted by novelty descending
+ """
+ scores = compute_batch_novelty(embeddings)
+
+ indexed_results = [
+ (texts[i], scores[i], i)
+ for i in range(len(texts))
+ ]
+
+ # Sort by novelty score descending
+ indexed_results.sort(key=lambda x: x[1], reverse=True)
+
+ return indexed_results[:top_k]
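+
+
+# Usage sketch for the batch helpers (illustrative; `embeddings` and `texts`
+# come from elsewhere in the pipeline):
+#   scores = compute_batch_novelty(embeddings)
+#   top = find_most_novel(embeddings, texts, top_k=3)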
diff --git a/experiments/results/.gitignore b/experiments/results/.gitignore
new file mode 100644
index 0000000..af4380d
--- /dev/null
+++ b/experiments/results/.gitignore
@@ -0,0 +1,5 @@
+# Ignore all experiment result files
+*.json
+
+# But keep this .gitignore
+!.gitignore
diff --git a/experiments/results/cumulative_jump_profiles.png b/experiments/results/cumulative_jump_profiles.png
new file mode 100644
index 0000000..b94b707
Binary files /dev/null and b/experiments/results/cumulative_jump_profiles.png differ
diff --git a/experiments/results/embedding_pca.png b/experiments/results/embedding_pca.png
new file mode 100644
index 0000000..b0065af
Binary files /dev/null and b/experiments/results/embedding_pca.png differ
diff --git a/experiments/results/embedding_tsne.png b/experiments/results/embedding_tsne.png
new file mode 100644
index 0000000..04e911f
Binary files /dev/null and b/experiments/results/embedding_tsne.png differ
diff --git a/experiments/results/figures/20260119_163040_diversity_boxplot.png b/experiments/results/figures/20260119_163040_diversity_boxplot.png
new file mode 100644
index 0000000..41d4091
Binary files /dev/null and b/experiments/results/figures/20260119_163040_diversity_boxplot.png differ
diff --git a/experiments/results/figures/20260119_163040_idea_counts.png b/experiments/results/figures/20260119_163040_idea_counts.png
new file mode 100644
index 0000000..4469c65
Binary files /dev/null and b/experiments/results/figures/20260119_163040_idea_counts.png differ
diff --git a/experiments/results/figures/20260119_163040_interaction_diversity.png b/experiments/results/figures/20260119_163040_interaction_diversity.png
new file mode 100644
index 0000000..8335f9d
Binary files /dev/null and b/experiments/results/figures/20260119_163040_interaction_diversity.png differ
diff --git a/experiments/results/figures/20260119_163040_interaction_novelty.png b/experiments/results/figures/20260119_163040_interaction_novelty.png
new file mode 100644
index 0000000..a2e2f99
Binary files /dev/null and b/experiments/results/figures/20260119_163040_interaction_novelty.png differ
diff --git a/experiments/results/figures/20260119_163040_metrics_comparison.png b/experiments/results/figures/20260119_163040_metrics_comparison.png
new file mode 100644
index 0000000..822edb1
Binary files /dev/null and b/experiments/results/figures/20260119_163040_metrics_comparison.png differ
diff --git a/experiments/results/figures/20260119_163040_query_distance_boxplot.png b/experiments/results/figures/20260119_163040_query_distance_boxplot.png
new file mode 100644
index 0000000..4662d47
Binary files /dev/null and b/experiments/results/figures/20260119_163040_query_distance_boxplot.png differ
diff --git a/experiments/results/figures/20260119_163040_survival_rates.png b/experiments/results/figures/20260119_163040_survival_rates.png
new file mode 100644
index 0000000..62aff8d
Binary files /dev/null and b/experiments/results/figures/20260119_163040_survival_rates.png differ
diff --git a/experiments/results/figures/20260119_165650_diversity_boxplot.png b/experiments/results/figures/20260119_165650_diversity_boxplot.png
new file mode 100644
index 0000000..c9232ab
Binary files /dev/null and b/experiments/results/figures/20260119_165650_diversity_boxplot.png differ
diff --git a/experiments/results/figures/20260119_165650_idea_counts.png b/experiments/results/figures/20260119_165650_idea_counts.png
new file mode 100644
index 0000000..6ef3ba3
Binary files /dev/null and b/experiments/results/figures/20260119_165650_idea_counts.png differ
diff --git a/experiments/results/figures/20260119_165650_interaction_diversity.png b/experiments/results/figures/20260119_165650_interaction_diversity.png
new file mode 100644
index 0000000..803e043
Binary files /dev/null and b/experiments/results/figures/20260119_165650_interaction_diversity.png differ
diff --git a/experiments/results/figures/20260119_165650_interaction_novelty.png b/experiments/results/figures/20260119_165650_interaction_novelty.png
new file mode 100644
index 0000000..0fe251f
Binary files /dev/null and b/experiments/results/figures/20260119_165650_interaction_novelty.png differ
diff --git a/experiments/results/figures/20260119_165650_metrics_comparison.png b/experiments/results/figures/20260119_165650_metrics_comparison.png
new file mode 100644
index 0000000..0a1fa26
Binary files /dev/null and b/experiments/results/figures/20260119_165650_metrics_comparison.png differ
diff --git a/experiments/results/figures/20260119_165650_query_distance_boxplot.png b/experiments/results/figures/20260119_165650_query_distance_boxplot.png
new file mode 100644
index 0000000..3e9ea59
Binary files /dev/null and b/experiments/results/figures/20260119_165650_query_distance_boxplot.png differ
diff --git a/experiments/results/figures/20260119_165650_survival_rates.png b/experiments/results/figures/20260119_165650_survival_rates.png
new file mode 100644
index 0000000..eb14708
Binary files /dev/null and b/experiments/results/figures/20260119_165650_survival_rates.png differ
diff --git a/experiments/visualize.py b/experiments/visualize.py
new file mode 100644
index 0000000..8940c57
--- /dev/null
+++ b/experiments/visualize.py
@@ -0,0 +1,521 @@
+"""
+Visualization for experiment results.
+
+Generates:
+- Box plots of diversity by condition
+- 2×2 interaction plots
+- Bar charts of survival rates
+- t-SNE/UMAP of idea embeddings (optional)
+
+Usage:
+ python -m experiments.visualize --input results/experiment_xxx_metrics.json
+"""
+
+import sys
+import json
+import argparse
+from pathlib import Path
+from typing import List, Dict, Any, Optional
+
+import numpy as np
+
+# Add experiments to path
+sys.path.insert(0, str(Path(__file__).parent.parent))
+
+from experiments.config import RESULTS_DIR
+
+# Try to import visualization libraries
+try:
+ import matplotlib.pyplot as plt
+ import matplotlib.patches as mpatches
+ MATPLOTLIB_AVAILABLE = True
+except ImportError:
+ MATPLOTLIB_AVAILABLE = False
+ print("Warning: matplotlib not installed. Visualization unavailable.")
+ print("Install with: pip install matplotlib")
+
+# Condition display names and colors
+CONDITION_LABELS = {
+ "c1_direct": "C1: Direct",
+ "c2_expert_only": "C2: Expert-Only",
+ "c3_attribute_only": "C3: Attr-Only",
+ "c4_full_pipeline": "C4: Full Pipeline",
+ "c5_random_perspective": "C5: Random"
+}
+
+CONDITION_COLORS = {
+ "c1_direct": "#808080", # Gray (baseline)
+ "c2_expert_only": "#2196F3", # Blue
+ "c3_attribute_only": "#FF9800", # Orange
+ "c4_full_pipeline": "#4CAF50", # Green (main)
+ "c5_random_perspective": "#9C27B0" # Purple (control)
+}
+
+# 2×2 factorial structure
+FACTORIAL_2X2 = {
+ "no_attr_no_expert": "c1_direct",
+ "no_attr_with_expert": "c2_expert_only",
+ "with_attr_no_expert": "c3_attribute_only",
+ "with_attr_with_expert": "c4_full_pipeline"
+}
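+# Documents how the 2×2 factorial design maps onto conditions; kept for
+# reference (plot_interaction_2x2 hard-codes the same four conditions).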
+
+
+def extract_metric_values(
+ metrics: Dict[str, Any],
+ metric_path: str
+) -> Dict[str, List[float]]:
+ """Extract values for a specific metric across all queries."""
+ by_condition = {}
+
+ for query_metrics in metrics.get("metrics_by_query", []):
+ for condition, cond_metrics in query_metrics.get("conditions", {}).items():
+ if condition not in by_condition:
+ by_condition[condition] = []
+
+ value = cond_metrics
+ for key in metric_path.split("."):
+ if value is None:
+ break
+ if isinstance(value, dict):
+ value = value.get(key)
+ else:
+ value = None
+
+ if value is not None and isinstance(value, (int, float)):
+ by_condition[condition].append(float(value))
+
+ return by_condition
+
+
+def plot_box_comparison(
+ metrics: Dict[str, Any],
+ metric_path: str,
+ title: str,
+ ylabel: str,
+ output_path: Path,
+ figsize: tuple = (10, 6)
+):
+ """Create box plot comparing conditions."""
+ if not MATPLOTLIB_AVAILABLE:
+ return
+
+ by_condition = extract_metric_values(metrics, metric_path)
+
+ # Order conditions
+ ordered_conditions = [
+ "c1_direct", "c2_expert_only", "c3_attribute_only",
+ "c4_full_pipeline", "c5_random_perspective"
+ ]
+ conditions = [c for c in ordered_conditions if c in by_condition]
+
+ if not conditions:
+ print(f"No data for {metric_path}")
+ return
+
+ fig, ax = plt.subplots(figsize=figsize)
+
+ # Prepare data
+ data = [by_condition[c] for c in conditions]
+ labels = [CONDITION_LABELS.get(c, c) for c in conditions]
+ colors = [CONDITION_COLORS.get(c, "#888888") for c in conditions]
+
+ # Create box plot
+ bp = ax.boxplot(data, labels=labels, patch_artist=True)
+
+ # Color boxes
+ for patch, color in zip(bp['boxes'], colors):
+ patch.set_facecolor(color)
+ patch.set_alpha(0.7)
+
+ # Add individual points
+ for i, (cond, values) in enumerate(zip(conditions, data)):
+ x = np.random.normal(i + 1, 0.04, size=len(values))
+ ax.scatter(x, values, alpha=0.6, color=colors[i], edgecolor='black', s=50)
+
+ ax.set_ylabel(ylabel)
+ ax.set_title(title)
+ ax.grid(axis='y', alpha=0.3)
+
+ # Rotate labels if needed
+ plt.xticks(rotation=15, ha='right')
+ plt.tight_layout()
+
+ plt.savefig(output_path, dpi=150, bbox_inches='tight')
+ plt.close()
+ print(f"Saved: {output_path}")
+
+
+def plot_interaction_2x2(
+ metrics: Dict[str, Any],
+ metric_path: str,
+ title: str,
+ ylabel: str,
+ output_path: Path,
+ figsize: tuple = (8, 6)
+):
+ """Create 2×2 factorial interaction plot."""
+ if not MATPLOTLIB_AVAILABLE:
+ return
+
+ by_condition = extract_metric_values(metrics, metric_path)
+
+ # Check if all 2×2 conditions available
+ required = ["c1_direct", "c2_expert_only", "c3_attribute_only", "c4_full_pipeline"]
+ if not all(c in by_condition and by_condition[c] for c in required):
+ print(f"Insufficient data for 2×2 plot of {metric_path}")
+ return
+
+ fig, ax = plt.subplots(figsize=figsize)
+
+ # Calculate means
+ means = {c: np.mean(by_condition[c]) for c in required}
+ stds = {c: np.std(by_condition[c], ddof=1) if len(by_condition[c]) > 1 else 0 for c in required}
+
+ # X positions: No Experts, With Experts
+ x = [0, 1]
+ x_labels = ["Without Experts", "With Experts"]
+
+ # Line 1: Without Attributes (C1 -> C2)
+ y_no_attr = [means["c1_direct"], means["c2_expert_only"]]
+ err_no_attr = [stds["c1_direct"], stds["c2_expert_only"]]
+ ax.errorbar(x, y_no_attr, yerr=err_no_attr, marker='o', markersize=10,
+ linewidth=2, capsize=5, label="Without Attributes",
+ color="#FF9800", linestyle='--')
+
+ # Line 2: With Attributes (C3 -> C4)
+ y_with_attr = [means["c3_attribute_only"], means["c4_full_pipeline"]]
+ err_with_attr = [stds["c3_attribute_only"], stds["c4_full_pipeline"]]
+ ax.errorbar(x, y_with_attr, yerr=err_with_attr, marker='s', markersize=10,
+ linewidth=2, capsize=5, label="With Attributes",
+ color="#4CAF50", linestyle='-')
+
+ # Annotate points
+ ax.annotate("C1", (x[0], y_no_attr[0]), textcoords="offset points",
+ xytext=(-15, -15), fontsize=9)
+ ax.annotate("C2", (x[1], y_no_attr[1]), textcoords="offset points",
+ xytext=(5, -15), fontsize=9)
+ ax.annotate("C3", (x[0], y_with_attr[0]), textcoords="offset points",
+ xytext=(-15, 10), fontsize=9)
+ ax.annotate("C4", (x[1], y_with_attr[1]), textcoords="offset points",
+ xytext=(5, 10), fontsize=9)
+
+ ax.set_xticks(x)
+ ax.set_xticklabels(x_labels)
+ ax.set_ylabel(ylabel)
+ ax.set_title(title)
+ ax.legend(loc='best')
+ ax.grid(axis='y', alpha=0.3)
+
+ # Check for interaction (non-parallel lines)
+ slope_no_attr = y_no_attr[1] - y_no_attr[0]
+ slope_with_attr = y_with_attr[1] - y_with_attr[0]
+ interaction = slope_with_attr - slope_no_attr
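+ # Equivalent to the difference-in-differences contrast (C4 - C3) - (C2 - C1);
+ # positive means experts add more value when attributes are present.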
+
+ interaction_text = f"Interaction: {interaction:+.4f}"
+ if interaction > 0.01:
+ interaction_text += " (super-additive)"
+ elif interaction < -0.01:
+ interaction_text += " (sub-additive)"
+ else:
+ interaction_text += " (additive)"
+
+ ax.text(0.02, 0.98, interaction_text, transform=ax.transAxes,
+ fontsize=10, verticalalignment='top',
+ bbox=dict(boxstyle='round', facecolor='wheat', alpha=0.5))
+
+ plt.tight_layout()
+ plt.savefig(output_path, dpi=150, bbox_inches='tight')
+ plt.close()
+ print(f"Saved: {output_path}")
+
+
+def plot_survival_rates(
+ metrics: Dict[str, Any],
+ output_path: Path,
+ figsize: tuple = (10, 6)
+):
+ """Create bar chart of deduplication survival rates."""
+ if not MATPLOTLIB_AVAILABLE:
+ return
+
+ by_condition = extract_metric_values(metrics, "survival_rate")
+
+ ordered_conditions = [
+ "c1_direct", "c2_expert_only", "c3_attribute_only",
+ "c4_full_pipeline", "c5_random_perspective"
+ ]
+ conditions = [c for c in ordered_conditions if c in by_condition]
+
+ if not conditions:
+ print("No survival rate data")
+ return
+
+ fig, ax = plt.subplots(figsize=figsize)
+
+ # Calculate means and stds
+ means = [np.mean(by_condition[c]) * 100 for c in conditions] # Convert to percentage
+ stds = [np.std(by_condition[c], ddof=1) * 100 if len(by_condition[c]) > 1 else 0 for c in conditions]
+ labels = [CONDITION_LABELS.get(c, c) for c in conditions]
+ colors = [CONDITION_COLORS.get(c, "#888888") for c in conditions]
+
+ x = np.arange(len(conditions))
+ bars = ax.bar(x, means, yerr=stds, capsize=5, color=colors, alpha=0.8, edgecolor='black')
+
+ # Add value labels on bars
+ for bar, mean in zip(bars, means):
+ height = bar.get_height()
+ ax.annotate(f'{mean:.1f}%',
+ xy=(bar.get_x() + bar.get_width() / 2, height),
+ xytext=(0, 3), textcoords="offset points",
+ ha='center', va='bottom', fontsize=10)
+
+ ax.set_xticks(x)
+ ax.set_xticklabels(labels, rotation=15, ha='right')
+ ax.set_ylabel("Survival Rate (%)")
+ ax.set_title("Deduplication Survival Rate by Condition\n(Higher = More Diverse Generation)")
+ ax.set_ylim(0, 110)
+ ax.grid(axis='y', alpha=0.3)
+
+ plt.tight_layout()
+ plt.savefig(output_path, dpi=150, bbox_inches='tight')
+ plt.close()
+ print(f"Saved: {output_path}")
+
+
+def plot_idea_counts(
+ metrics: Dict[str, Any],
+ output_path: Path,
+ figsize: tuple = (10, 6)
+):
+ """Create stacked bar chart of raw vs unique idea counts."""
+ if not MATPLOTLIB_AVAILABLE:
+ return
+
+ raw_counts = extract_metric_values(metrics, "raw_count")
+ unique_counts = extract_metric_values(metrics, "unique_count")
+
+ ordered_conditions = [
+ "c1_direct", "c2_expert_only", "c3_attribute_only",
+ "c4_full_pipeline", "c5_random_perspective"
+ ]
+ conditions = [c for c in ordered_conditions if c in raw_counts and c in unique_counts]
+
+ if not conditions:
+ print("No count data")
+ return
+
+ fig, ax = plt.subplots(figsize=figsize)
+
+ # Calculate means
+ raw_means = [np.mean(raw_counts[c]) for c in conditions]
+ unique_means = [np.mean(unique_counts[c]) for c in conditions]
+ removed_means = [r - u for r, u in zip(raw_means, unique_means)]
+
+ labels = [CONDITION_LABELS.get(c, c) for c in conditions]
+ x = np.arange(len(conditions))
+ width = 0.6
+
+ # Stacked bars: unique (bottom) + removed (top)
+ bars1 = ax.bar(x, unique_means, width, label='Unique Ideas',
+ color=[CONDITION_COLORS.get(c, "#888888") for c in conditions], alpha=0.9)
+ bars2 = ax.bar(x, removed_means, width, bottom=unique_means, label='Duplicates Removed',
+ color='lightgray', alpha=0.7, hatch='//')
+
+ # Add value labels
+ for i, (unique, raw) in enumerate(zip(unique_means, raw_means)):
+ ax.annotate(f'{unique:.0f}', xy=(x[i], unique / 2),
+ ha='center', va='center', fontsize=10, fontweight='bold')
+ ax.annotate(f'({raw:.0f})', xy=(x[i], raw + 1),
+ ha='center', va='bottom', fontsize=9, color='gray')
+
+ ax.set_xticks(x)
+ ax.set_xticklabels(labels, rotation=15, ha='right')
+ ax.set_ylabel("Number of Ideas")
+ ax.set_title("Idea Counts by Condition\n(Unique ideas shown, raw total in parentheses)")
+ ax.legend(loc='upper right')
+ ax.grid(axis='y', alpha=0.3)
+
+ plt.tight_layout()
+ plt.savefig(output_path, dpi=150, bbox_inches='tight')
+ plt.close()
+ print(f"Saved: {output_path}")
+
+
+def plot_metrics_comparison(
+ metrics: Dict[str, Any],
+ output_path: Path,
+ figsize: tuple = (12, 8)
+):
+ """Create multi-panel comparison of key metrics."""
+ if not MATPLOTLIB_AVAILABLE:
+ return
+
+ fig, axes = plt.subplots(2, 2, figsize=figsize)
+
+ # Extract metrics
+ metrics_to_plot = [
+ ("survival_rate", "Survival Rate", axes[0, 0], True),
+ ("post_dedup_diversity.mean_pairwise_distance", "Semantic Diversity", axes[0, 1], False),
+ ("post_dedup_query_distance.mean_distance", "Query Distance (Novelty)", axes[1, 0], False),
+ ("post_dedup_clusters.optimal_clusters", "Number of Clusters", axes[1, 1], False),
+ ]
+
+ ordered_conditions = [
+ "c1_direct", "c2_expert_only", "c3_attribute_only",
+ "c4_full_pipeline", "c5_random_perspective"
+ ]
+
+ for metric_path, title, ax, is_percentage in metrics_to_plot:
+ by_condition = extract_metric_values(metrics, metric_path)
+ conditions = [c for c in ordered_conditions if c in by_condition and by_condition[c]]
+
+ if not conditions:
+ ax.text(0.5, 0.5, "No data", ha='center', va='center', transform=ax.transAxes)
+ ax.set_title(title)
+ continue
+
+ means = [np.mean(by_condition[c]) for c in conditions]
+ if is_percentage:
+ means = [m * 100 for m in means]
+
+ colors = [CONDITION_COLORS.get(c, "#888888") for c in conditions]
+ x = np.arange(len(conditions))
+
+ bars = ax.bar(x, means, color=colors, alpha=0.8, edgecolor='black')
+
+ # Simplified labels
+ short_labels = ["C1", "C2", "C3", "C4", "C5"][:len(conditions)]
+ ax.set_xticks(x)
+ ax.set_xticklabels(short_labels)
+ ax.set_title(title)
+ ax.grid(axis='y', alpha=0.3)
+
+ if is_percentage:
+ ax.set_ylim(0, 110)
+
+ # Add legend
+ legend_elements = [
+ mpatches.Patch(facecolor=CONDITION_COLORS[c], label=CONDITION_LABELS[c])
+ for c in ordered_conditions if c in CONDITION_COLORS
+ ]
+ fig.legend(handles=legend_elements, loc='lower center', ncol=3, bbox_to_anchor=(0.5, -0.02))
+
+ plt.tight_layout()
+ plt.subplots_adjust(bottom=0.15)
+ plt.savefig(output_path, dpi=150, bbox_inches='tight')
+ plt.close()
+ print(f"Saved: {output_path}")
+
+
+def generate_all_visualizations(
+ metrics: Dict[str, Any],
+ output_dir: Path
+):
+ """Generate all visualization figures."""
+ if not MATPLOTLIB_AVAILABLE:
+ print("matplotlib not available. Cannot generate visualizations.")
+ return
+
+ output_dir.mkdir(parents=True, exist_ok=True)
+ experiment_id = metrics.get("experiment_id", "experiment")
+
+ print(f"\nGenerating visualizations for {experiment_id}...")
+
+ # 1. Survival rates bar chart
+ plot_survival_rates(
+ metrics,
+ output_dir / f"{experiment_id}_survival_rates.png"
+ )
+
+ # 2. Idea counts stacked bar
+ plot_idea_counts(
+ metrics,
+ output_dir / f"{experiment_id}_idea_counts.png"
+ )
+
+ # 3. Diversity box plot
+ plot_box_comparison(
+ metrics,
+ "post_dedup_diversity.mean_pairwise_distance",
+ "Semantic Diversity by Condition (Post-Dedup)",
+ "Mean Pairwise Distance",
+ output_dir / f"{experiment_id}_diversity_boxplot.png"
+ )
+
+ # 4. Query distance box plot
+ plot_box_comparison(
+ metrics,
+ "post_dedup_query_distance.mean_distance",
+ "Query Distance by Condition (Novelty)",
+ "Distance from Original Query",
+ output_dir / f"{experiment_id}_query_distance_boxplot.png"
+ )
+
+ # 5. 2×2 interaction plot for diversity
+ plot_interaction_2x2(
+ metrics,
+ "post_dedup_diversity.mean_pairwise_distance",
+ "2×2 Factorial: Semantic Diversity",
+ "Mean Pairwise Distance",
+ output_dir / f"{experiment_id}_interaction_diversity.png"
+ )
+
+ # 6. 2×2 interaction plot for query distance
+ plot_interaction_2x2(
+ metrics,
+ "post_dedup_query_distance.mean_distance",
+ "2×2 Factorial: Query Distance (Novelty)",
+ "Distance from Original Query",
+ output_dir / f"{experiment_id}_interaction_novelty.png"
+ )
+
+ # 7. Multi-panel comparison
+ plot_metrics_comparison(
+ metrics,
+ output_dir / f"{experiment_id}_metrics_comparison.png"
+ )
+
+ print(f"\nAll visualizations saved to: {output_dir}")
+
+
+def main():
+ parser = argparse.ArgumentParser(
+ description="Generate visualizations for experiment results"
+ )
+ parser.add_argument(
+ "--input",
+ type=str,
+ required=True,
+ help="Input metrics JSON file"
+ )
+ parser.add_argument(
+ "--output-dir",
+ type=str,
+ help="Output directory for figures (default: results/figures/)"
+ )
+
+ args = parser.parse_args()
+
+ input_path = Path(args.input)
+ if not input_path.exists():
+ input_path = RESULTS_DIR / args.input
+ if not input_path.exists():
+ print(f"Error: Input file not found: {args.input}")
+ sys.exit(1)
+
+ # Load metrics
+ with open(input_path, "r", encoding="utf-8") as f:
+ metrics = json.load(f)
+
+ # Output directory
+ if args.output_dir:
+ output_dir = Path(args.output_dir)
+ else:
+ output_dir = RESULTS_DIR / "figures"
+
+ generate_all_visualizations(metrics, output_dir)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/research/research_report.md b/research/research_report.md
index 197b91d..94c0f52 100644
--- a/research/research_report.md
+++ b/research/research_report.md
@@ -162,7 +162,6 @@ Result: Novel ideas like "pressure-adaptive seating"
| **Curated** | 210 pre-selected high-quality occupations | Controlled |
| **DBpedia** | 2,164 occupations from database | Broad |
-Note: use the domain list (try adding two levels of the Dewey Decimal Classification? Future work?)
---
@@ -470,3 +469,10 @@ Our Approach: Query → Attributes → (Attributes × Experts) → Ideas
- `research/experimental_protocol.md`
- `research/paper_outline.md`
- `research/references.md`
+
+---
+
+# Discussion
+
+- Future work: extend the domain list using the Dewey Decimal Classification (two levels).
+