feat: Enhance patent search and update research documentation

- Improve patent search service with expanded functionality
- Update PatentSearchPanel UI component
- Add new research_report.md
- Update experimental protocol, literature review, paper outline, and theoretical framework

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 15:52:33 +08:00
parent ec48709755
commit 26a56a2a07
13 changed files with 1446 additions and 537 deletions

@@ -59,6 +59,27 @@ With Marine Biologist Expert:
Result: Novel ideas like "pressure-adaptive seating" or "coral-inspired structural support"
```
#### The Semantic Distance Tradeoff
However, semantic distance is not always beneficial. There exists a tradeoff:
```
Semantic Distance Spectrum:
Too Close                    Optimal Zone                      Too Far
(Semantic Gravity)           (Creative)                        (Hallucination)
├────────────────────────────┼─────────────────────────────────┼────────────────────────────┤
"Ergonomic office chair"     "Pressure-adaptive seating"       "Quantum-entangled
                             "Coral-inspired support"           chair consciousness"
High usefulness              High novelty + useful             High novelty, nonsense
Low novelty                                                    Low usefulness
```
**Our Design Choice**: Context-free keyword generation (Stage 1 excludes the original query) intentionally pushes toward the "far" end to maximize novelty. Stage 2 re-introduces the query context to ground the ideas.
**Research Question**: What is the hallucination/nonsense rate of this approach, and is the tradeoff worthwhile?
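
One way to operationalize this spectrum is to embed each generated idea together with the original query and bucket the cosine distance into the three zones. The sketch below is illustrative only: the embedding model and the zone thresholds are assumptions, not values from this project.

```python
# Minimal sketch: placing an idea on the semantic distance spectrum.
# Assumptions: sentence-transformers is installed; the model name and the
# zone thresholds (0.35 / 0.75) are illustrative, not tuned project values.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def distance_zone(query: str, idea: str,
                  near: float = 0.35, far: float = 0.75) -> str:
    """Classify an idea as too close, in the optimal zone, or too far from the query."""
    q_emb, i_emb = model.encode([query, idea], convert_to_tensor=True)
    dist = 1.0 - util.cos_sim(q_emb, i_emb).item()  # cosine distance
    if dist < near:
        return "too close (semantic gravity)"
    if dist > far:
        return "too far (possible hallucination)"
    return "optimal zone"

print(distance_zone("ergonomic office chair", "pressure-adaptive seating"))
```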
#### 2. Conceptual Blending Theory (Fauconnier & Turner, 2002)
> "Creative products emerge from blending elements of two input spaces into a novel integrated space."
@@ -136,12 +157,22 @@ Our "Inner Crowd":
Aggregation → Diverse idea pool (simulated crowd)
```
### Why This Approach Works: Two Complementary Mechanisms
**Factor 1: Attribute Decomposition**
- Structures the problem space before creative exploration
- Prevents premature fixation on holistic solutions
- Ensures coverage across different aspects of the target concept
**Factor 2: Expert Perspectives**
- Different experts activate different semantic regions
- Forces semantic jumps that LLMs wouldn't naturally make
- Each expert provides a distinct input space for conceptual blending
**Combined Effect (Interaction)**
- Experts are more effective when given structured attributes to transform
- Attributes without expert perspectives still generate predictable ideas
- The combination creates systematic exploration of remote conceptual spaces
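
A minimal sketch of how the two factors compose in practice: every expert transforms every attribute, so coverage of the attribute × expert matrix is systematic rather than left to chance. `crossed_ideation` and `generate` are illustrative names; `generate` stands in for whatever LLM call the pipeline actually uses.

```python
# Sketch of the attribute × expert interaction (illustrative, not the actual service code).
from typing import Callable, Dict, List, Tuple

def crossed_ideation(query: str,
                     attributes: List[str],
                     experts: List[str],
                     generate: Callable[[str], str]) -> Dict[Tuple[str, str], str]:
    """Return one idea per (attribute, expert) cell of the design matrix."""
    ideas: Dict[Tuple[str, str], str] = {}
    for attribute in attributes:      # Factor 1: structured problem space
        for expert in experts:        # Factor 2: forced semantic jump
            prompt = (f"You are a {expert}. Propose one novel idea for the "
                      f"'{attribute}' aspect of: {query}")
            ideas[(attribute, expert)] = generate(prompt)
    return ideas

# Usage with a stub generator (swap in a real LLM client):
ideas = crossed_ideation(
    query="office chair",
    attributes=["materials", "adjustment mechanism", "posture support"],
    experts=["marine biologist", "aerospace engineer"],
    generate=lambda prompt: "<LLM output>",
)
```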
---
@@ -231,32 +262,43 @@ Output:
---
## Testable Hypotheses (2×2 Factorial Design)
Our experimental design manipulates two independent factors:
1. **Attribute Decomposition**: With / Without
2. **Expert Perspectives**: With / Without
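
Crossing the two factors yields the four conditions referred to throughout the hypotheses below; a minimal encoding (condition labels follow the document, the dictionary itself is illustrative):

```python
# The 2×2 design: each condition is a combination of the two boolean factors.
CONDITIONS = {
    "Direct":         {"attributes": False, "experts": False},
    "Attribute-Only": {"attributes": True,  "experts": False},
    "Expert-Only":    {"attributes": False, "experts": True},
    "Full Pipeline":  {"attributes": True,  "experts": True},
}
```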
### H1: Main Effect of Attribute Decomposition
> Conditions with attribute decomposition produce higher semantic diversity than those without.
**Prediction**: (Attribute-Only + Full Pipeline) > (Direct + Expert-Only)
**Measurement**: Mean pairwise cosine distance between idea embeddings
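
A sketch of this diversity metric (H2 uses the same measurement); the embedding model is an illustrative assumption:

```python
# Sketch: semantic diversity = mean pairwise cosine distance between idea embeddings.
from itertools import combinations
from typing import List
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

def semantic_diversity(ideas: List[str]) -> float:
    """Higher values mean the ideas are spread more widely in embedding space."""
    emb = model.encode(ideas, convert_to_tensor=True)
    dists = [1.0 - util.cos_sim(emb[i], emb[j]).item()
             for i, j in combinations(range(len(ideas)), 2)]
    return sum(dists) / len(dists)
```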
### H2: Main Effect of Expert Perspectives
> Conditions with expert perspectives produce higher semantic diversity than those without.
**Prediction**: (Expert-Only + Full Pipeline) > (Direct + Attribute-Only)
**Measurement**: Mean pairwise cosine distance between idea embeddings
### H3: Interaction Effect
> The combination of attributes and experts produces super-additive benefits.
**Prediction**: Full Pipeline > (Attribute-Only + Expert-Only - Direct)
**Rationale**: Experts are more effective when given structured problem decomposition to work with.
**Measurement**: Interaction term in 2×2 ANOVA
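
A sketch of the H3 test, assuming one diversity score per generation run collected in a long-format table (column names and layout are assumptions):

```python
# Sketch: 2×2 ANOVA; the C(attributes):C(experts) row is the H3 interaction term.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def test_interaction(runs: pd.DataFrame) -> pd.DataFrame:
    """`runs` has one row per run: boolean 'attributes' and 'experts' factors
    plus a numeric 'diversity' score (e.g., from semantic_diversity above)."""
    fit = smf.ols("diversity ~ C(attributes) * C(experts)", data=runs).fit()
    return anova_lm(fit, typ=2)
```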
### H4: Novelty
> The Full Pipeline produces ideas with the lowest patent overlap.
**Prediction**: Full Pipeline has the highest novelty rate across all conditions
**Measurement**: Percentage of ideas without existing patent matches
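
The novelty rate can be computed as the share of ideas for which the patent search returns no match; `has_patent_match` below is a hypothetical predicate standing in for the patent search service, not an actual API of this repository.

```python
# Sketch: novelty rate = percentage of ideas with no existing patent match.
from typing import Callable, List

def novelty_rate(ideas: List[str],
                 has_patent_match: Callable[[str], bool]) -> float:
    """`has_patent_match` is a hypothetical hook into a patent search backend."""
    unmatched = sum(1 for idea in ideas if not has_patent_match(idea))
    return 100.0 * unmatched / len(ideas)
```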
### H5: Expert vs. Random Control
> Expert perspectives outperform random-word perspectives.
**Prediction**: Expert-Only > Random-Perspective
**Rationale**: Validates that domain knowledge, not just any perspective shift, drives the improvement
**Measurement**: Semantic diversity and human creativity ratings
---
@@ -271,10 +313,29 @@ Output:
## Positioning Against Related Work
### Key Differentiator: Attribute Decomposition
```
PersonaFlow (2024): Query → Experts → Ideas
Our Approach: Query → Attributes → (Attributes × Experts) → Ideas
```
**Why this matters**: Attribute decomposition provides **scaffolding** that makes expert perspectives more effective. An expert seeing "chair materials" generates more focused ideas than an expert seeing just "chair."
### Comparison Table
| Approach | Limitation | Our Advantage |
|----------|------------|---------------|
| Direct LLM generation | Semantic gravity, fixation | Two-factor enhancement (attributes + experts) |
| **PersonaFlow (2024)** | **No problem structure, experts see whole query** | **Attribute decomposition amplifies expert effect** |
| PopBlends (2023) | Two-concept blending only | Systematic attribute × expert exploration |
| BILLY (2025) | Cannot isolate what helps | 2×2 factorial design isolates contributions |
| Persona prompting alone | Random coverage | Systematic coverage via attribute × expert matrix |
### What We Can Answer That PersonaFlow Cannot
1. **Does problem structure alone help?** (Attribute-Only vs Direct)
2. **Do experts help beyond structure?** (Full Pipeline vs Attribute-Only)
3. **Is there an interaction effect?** (Full Pipeline > Attribute-Only + Expert-Only - Direct)
PersonaFlow showed that experts help, but it never tested whether **structuring the problem first** makes experts more effective.