feat: Add experiments framework and novelty-driven agent loop

- Add complete experiments directory with pilot study infrastructure
  - 5 experimental conditions (direct, expert-only, attribute-only, full-pipeline, random-perspective)
  - Human assessment tool with React frontend and FastAPI backend
  - AUT flexibility analysis with jump signal detection (sketched below)
  - Result visualization and metrics computation
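
A minimal sketch of the jump-signal idea, for orientation only (the embed callable and threshold below are illustrative assumptions, not the committed implementation): consecutive AUT responses are embedded, and a "jump" is flagged whenever the semantic distance between neighbouring responses exceeds a cutoff.

# Hypothetical sketch of jump-signal detection over an ordered list of AUT responses.
# `embed` and JUMP_THRESHOLD are placeholders, not the API added in this commit.
from typing import Callable, List

import numpy as np

JUMP_THRESHOLD = 0.35  # assumed cutoff on cosine distance


def detect_jumps(responses: List[str], embed: Callable[[str], np.ndarray]) -> List[int]:
    """Return indices where a response jumps semantically away from its predecessor."""
    unit = [v / np.linalg.norm(v) for v in (embed(r) for r in responses)]
    jumps = []
    for i in range(1, len(unit)):
        distance = 1.0 - float(unit[i - 1] @ unit[i])  # cosine distance between neighbours
        if distance > JUMP_THRESHOLD:
            jumps.append(i)
    return jumps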

- Add novelty-driven agent loop module (experiments/novelty_loop/)
  - NoveltyDrivenTaskAgent with expert perspective perturbation
  - Three termination strategies: breakthrough, exhaust, coverage
  - Interactive CLI demo with colored output
  - Embedding-based novelty scoring
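
A minimal sketch of embedding-based novelty scoring under the same caveat (the embed callable is a placeholder and the scoring rule is an assumption): each candidate idea is scored by its distance to the closest idea already accepted in the loop.

# Hypothetical novelty score: distance of a new idea to its nearest prior idea.
# `embed` stands in for whatever embedding model the loop actually uses.
from typing import Callable, List

import numpy as np


def novelty_score(idea: str, history: List[str], embed: Callable[[str], np.ndarray]) -> float:
    """Return 1 - max cosine similarity to any earlier idea (1.0 if history is empty)."""
    if not history:
        return 1.0
    v = embed(idea)
    v = v / np.linalg.norm(v)
    sims = []
    for prior in history:
        p = embed(prior)
        p = p / np.linalg.norm(p)
        sims.append(float(v @ p))
    return 1.0 - max(sims)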

- Add DDC knowledge domain classification data (en/zh)
- Add CLAUDE.md project documentation
- Update research report with experiment findings

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Date:   2026-01-20 10:16:21 +08:00
Parent: 26a56a2a07
Commit: 43c025e060
Changed: 81 files, 18766 additions, 2 deletions


@@ -0,0 +1,23 @@
"""
Condition implementations for the 5-condition experiment.
C1: Direct generation (baseline)
C2: Expert-only (no attributes)
C3: Attribute-only (no experts)
C4: Full pipeline (attributes + experts)
C5: Random-perspective (random words instead of experts)
"""
from .c1_direct import generate_ideas as c1_generate
from .c2_expert_only import generate_ideas as c2_generate
from .c3_attribute_only import generate_ideas as c3_generate
from .c4_full_pipeline import generate_ideas as c4_generate
from .c5_random_perspective import generate_ideas as c5_generate
__all__ = [
"c1_generate",
"c2_generate",
"c3_generate",
"c4_generate",
"c5_generate",
]
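
For context, a hypothetical driver over the package above; the import path and the generate_ideas signature (an object name plus an idea count) are assumptions, since the diff only shows that each condition module exposes a generate_ideas function.

# Hypothetical usage sketch; the keyword arguments below are assumed, not shown in the diff.
from conditions import (
    c1_generate,
    c2_generate,
    c3_generate,
    c4_generate,
    c5_generate,
)

CONDITIONS = {
    "C1-direct": c1_generate,
    "C2-expert-only": c2_generate,
    "C3-attribute-only": c3_generate,
    "C4-full-pipeline": c4_generate,
    "C5-random-perspective": c5_generate,
}

for name, generate in CONDITIONS.items():
    ideas = generate("paperclip", n_ideas=10)  # assumed signature
    print(f"{name}: {len(ideas)} ideas")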