feat: Enhance patent search and update research documentation

- Improve patent search service with expanded functionality
- Update PatentSearchPanel UI component
- Add new research_report.md
- Update experimental protocol, literature review, paper outline, and theoretical framework

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@@ -10,29 +10,47 @@ This document outlines a comprehensive experimental design to test the hypothesis

| ID | Research Question |
|----|-------------------|
| **RQ1** | Does multi-expert generation produce higher semantic diversity than direct LLM generation? |
| **RQ2** | Does multi-expert generation produce ideas with lower patent overlap (higher novelty)? |
| **RQ3** | What is the optimal number of experts for maximizing diversity? |
| **RQ4** | How do different expert sources (LLM vs Curated vs DBpedia) affect idea quality? |
| **RQ5** | Does structured attribute decomposition enhance the multi-expert effect? |
| **RQ1** | Does attribute decomposition improve semantic diversity of generated ideas? |
| **RQ2** | Does expert perspective transformation improve semantic diversity of generated ideas? |
| **RQ3** | Is there an interaction effect between attribute decomposition and expert perspectives? |
| **RQ4** | Which combination produces the highest patent novelty (lowest overlap)? |
| **RQ5** | How do different expert sources (LLM vs Curated vs External) affect idea quality? |
| **RQ6** | Does context-free keyword generation (current design) increase hallucination/nonsense rate? |
### Design Note: Context-Free Keyword Generation

Our system intentionally excludes the original query during keyword generation (Stage 1):

```
Stage 1 (Keyword): Expert sees "木質" (wood) + "會計師" (accountant)
                   Expert does NOT see "椅子" (chair)
                   → Generates: "資金流動" (cash flow)

Stage 2 (Description): Expert sees "椅子" + "資金流動"
                       → Applies keyword to original query
```

**Rationale**: This forces maximum semantic distance in keyword generation.
**Risk**: Some keywords may be too distant, resulting in nonsensical or unusable ideas.
**RQ6 investigates**: What is the hallucination/nonsense rate, and is the tradeoff worthwhile?
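To make the two-stage flow concrete, here is a minimal sketch of the prompting logic; `call_llm` is an injected helper and the prompt wording is illustrative, not the system's exact prompts:

```python
def generate_keyword(attribute: str, expert: str, call_llm) -> str:
    # Stage 1: the expert sees only the attribute, never the original query.
    prompt = (
        f"You are a {expert}. Given the attribute '{attribute}', "
        "name one concept from your professional domain that it evokes. "
        "Reply with the concept only."
    )
    return call_llm(prompt).strip()

def generate_description(query: str, keyword: str, expert: str, call_llm) -> str:
    # Stage 2: the original query is re-introduced to ground the keyword.
    prompt = (
        f"You are a {expert}. Apply the concept '{keyword}' to '{query}' "
        "and describe one concrete product idea in two sentences."
    )
    return call_llm(prompt).strip()

# Example usage with any LLM client wrapped as call_llm:
#   kw = generate_keyword("wood", "accountant", call_llm)       # e.g. "cash flow"
#   idea = generate_description("chair", kw, "accountant", call_llm)
```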
---

## 2. Experimental Design Overview

### 2.1 Design Type
**Mixed Design**: Between-subjects for main conditions × Within-subjects for queries
**2×2 Factorial Design**: Attribute Decomposition (With/Without) × Expert Perspectives (With/Without)
- Within-subjects for queries (all queries tested across all conditions)

### 2.2 Variables

#### Independent Variables (Manipulated)

| Variable | Levels | Your System Parameter |
|----------|--------|----------------------|
| **Generation Method** | 5 levels (see conditions) | Condition-dependent |
| **Expert Count** | 1, 2, 4, 6, 8 | `expert_count` |
| **Expert Source** | LLM, Curated, DBpedia | `expert_source` |
| **Attribute Structure** | With/Without decomposition | Pipeline inclusion |

| Variable | Levels | Description |
|----------|--------|-------------|
| **Attribute Decomposition** | 2 levels: With / Without | Whether to decompose the query into structured attributes |
| **Expert Perspectives** | 2 levels: With / Without | Whether to use expert personas for idea generation |
| **Expert Source** (secondary) | LLM, Curated, External | Source of expert occupations (tested within Expert=With conditions) |

#### Dependent Variables (Measured)
@@ -61,34 +79,28 @@ This document outlines a comprehensive experimental design to test the hypothesis

## 3. Experimental Conditions

### 3.1 Main Study: Generation Method Comparison
### 3.1 Main Study: 2×2 Factorial Design

| Condition | Description | Implementation |
|-----------|-------------|----------------|
| **C1: Direct** | Direct LLM generation | Prompt: "Generate 20 creative ideas for [query]" |
| **C2: Single-Expert** | 1 expert × 20 ideas | `expert_count=1`, `keywords_per_expert=20` |
| **C3: Multi-Expert-4** | 4 experts × 5 ideas each | `expert_count=4`, `keywords_per_expert=5` |
| **C4: Multi-Expert-8** | 8 experts × 2-3 ideas each | `expert_count=8`, `keywords_per_expert=2-3` |
| **C5: Random-Perspective** | 4 random words as "perspectives" | Custom prompt with random nouns |

| Condition | Attributes | Experts | Description |
|-----------|------------|---------|-------------|
| **C1: Direct** | ❌ Without | ❌ Without | Baseline: "Generate 20 creative ideas for [query]" |
| **C2: Expert-Only** | ❌ Without | ✅ With | Expert personas generate for whole query |
| **C3: Attribute-Only** | ✅ With | ❌ Without | Decompose query, direct generate per attribute |
| **C4: Full Pipeline** | ✅ With | ✅ With | Decompose query, experts generate per attribute |

### 3.2 Expert Count Study
### 3.2 Control Condition

| Condition | Expert Count | Ideas per Expert |
|-----------|--------------|------------------|
| **E1** | 1 | 20 |
| **E2** | 2 | 10 |
| **E4** | 4 | 5 |
| **E6** | 6 | 3-4 |
| **E8** | 8 | 2-3 |

| Condition | Description | Purpose |
|-----------|-------------|---------|
| **C5: Random-Perspective** | 4 random words as "perspectives" | Tests whether ANY perspective shift helps, or whether EXPERT knowledge specifically matters |

### 3.3 Expert Source Study
### 3.3 Expert Source Study (Secondary, within Expert=With conditions)

| Condition | Source | Implementation |
|-----------|--------|----------------|
| **S-LLM** | LLM-generated | `expert_source=ExpertSource.LLM` |
| **S-Curated** | Curated 210 occupations | `expert_source=ExpertSource.CURATED` |
| **S-DBpedia** | DBpedia 2164 occupations | `expert_source=ExpertSource.DBPEDIA` |
| **S-Random** | Random word "experts" | Custom implementation |
| **S-LLM** | LLM-generated | Query-specific experts generated by LLM |
| **S-Curated** | Curated occupations | Pre-selected high-quality occupations |
| **S-External** | External sources | Wikidata/ConceptNet occupations |

---
@@ -251,7 +263,69 @@ def compute_patent_novelty(ideas: List[str], query: str) -> dict:
    }
```
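Only the tail of `compute_patent_novelty` appears in this hunk. For orientation, a hedged sketch of a function consistent with the "1 - (matches / total)" metric; the patent search backend is project-specific, so it is injected as a callable rather than assumed:

```python
from typing import Callable, List

def patent_novelty(ideas: List[str], query: str,
                   search_patents: Callable[[str], list]) -> dict:
    # Sketch only: any prior-art hit for an idea counts as a match.
    matches = sum(1 for idea in ideas if search_patents(f"{query} {idea}"))
    return {
        'novelty_rate': 1 - matches / len(ideas),  # higher = more novel
        'matches': matches,
        'total': len(ideas),
    }
```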
### 5.3 Metrics Summary Table
### 5.3 Hallucination/Nonsense Metrics (RQ6)

Since our design intentionally excludes the original query during keyword generation, we need to measure the "cost" of this approach.

#### 5.3.1 LLM-as-Judge for Relevance
```python
from typing import List

def compute_relevance_score(query: str, ideas: List[str], judge_model: str) -> dict:
    """
    Use an LLM to judge whether each idea is relevant/applicable to the original query.
    """
    # llm_judge is the project's judge helper, assumed to return parsed JSON.
    relevant_count = 0
    nonsense_count = 0
    results = []

    for idea in ideas:
        prompt = f"""
Original query: {query}
Generated idea: {idea}

Is this idea relevant and applicable to the original query?
Rate: 1 (nonsense/irrelevant), 2 (weak connection), 3 (relevant)

Return JSON: {{"score": N, "reason": "brief explanation"}}
"""
        result = llm_judge(prompt, model=judge_model)
        results.append(result)
        if result['score'] == 1:
            nonsense_count += 1
        elif result['score'] >= 2:
            relevant_count += 1

    return {
        'relevance_rate': relevant_count / len(ideas),
        'nonsense_rate': nonsense_count / len(ideas),
        'details': results
    }
```

#### 5.3.2 Semantic Distance Threshold Analysis
```python
from typing import List

def analyze_distance_threshold(query: str, ideas: List[str], embedding_model: str) -> dict:
    """
    Analyze which ideas exceed a "too far" semantic distance threshold.
    Ideas beyond the threshold may be creative OR nonsensical.
    """
    # get_embedding / get_embeddings / cosine_similarity are project helpers.
    query_emb = get_embedding(query, model=embedding_model)
    idea_embs = get_embeddings(ideas, model=embedding_model)

    distances = [1 - cosine_similarity(query_emb, e) for e in idea_embs]

    # Define thresholds (to be calibrated)
    CREATIVE_THRESHOLD = 0.6   # Ideas at least this far are "creative"
    NONSENSE_THRESHOLD = 0.85  # Ideas this far may be "nonsense"

    return {
        'creative_zone': sum(1 for d in distances if CREATIVE_THRESHOLD <= d < NONSENSE_THRESHOLD),
        'potential_nonsense': sum(1 for d in distances if d >= NONSENSE_THRESHOLD),
        'safe_zone': sum(1 for d in distances if d < CREATIVE_THRESHOLD),
        'distance_distribution': distances
    }
```

### 5.4 Metrics Summary Table

| Metric | Formula | Interpretation |
|--------|---------|----------------|
@@ -261,6 +335,18 @@ def compute_patent_novelty(ideas: List[str], query: str) -> dict:
| **Query Distance** | 1 - cos_sim(query, idea) | Higher = farther from original |
| **Patent Novelty Rate** | 1 - (matches / total) | Higher = more novel |
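For reference, a minimal sketch of how the embedding-based metrics above could be computed, assuming `sentence-transformers` and scikit-learn are available (the embedding model name and cluster count are illustrative choices, not the system's fixed configuration):

```python
from itertools import combinations
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def diversity_metrics(query: str, ideas: list[str], n_clusters: int = 5) -> dict:
    model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice
    embs = model.encode([query] + ideas, normalize_embeddings=True)
    q, X = embs[0], embs[1:]
    # Cosine distance reduces to 1 - dot product for L2-normalized vectors.
    pairwise = [1 - float(np.dot(X[i], X[j]))
                for i, j in combinations(range(len(X)), 2)]
    labels = KMeans(n_clusters=n_clusters, n_init="auto", random_state=0).fit_predict(X)
    return {
        "mean_pairwise_distance": float(np.mean(pairwise)),
        "mean_query_distance": float(np.mean([1 - float(np.dot(q, x)) for x in X])),
        "silhouette": float(silhouette_score(X, labels)),
    }
```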

### 5.5 Nonsense/Hallucination Analysis (RQ6) - Three Methods

| Method | Metric | How it works | Pros/Cons |
|--------|--------|--------------|-----------|
| **Automatic** | Semantic Distance Threshold | Ideas with distance > 0.85 flagged as "potential nonsense" | Fast, cheap; may miss contextual nonsense |
| **LLM-as-Judge** | Relevance Score (1-3) | GPT-4 rates whether an idea is relevant to the original query | Moderate cost; good balance |
| **Human Evaluation** | Relevance Rating (1-7 Likert) | Humans rate coherence/relevance | Gold standard; most expensive |

**Triangulation**: Compare all three methods to validate findings:
- If automatic + LLM + human agree → high confidence
- If they disagree → investigate why (interesting edge cases)
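One way to operationalize this comparison is to correlate the three per-idea signals (scipy assumed available; note the automatic signal is a distance, so it should correlate negatively with the two relevance scales):

```python
from scipy.stats import spearmanr

def triangulate(auto_distance: list[float], llm_scores: list[int],
                human_ratings: list[float]) -> dict:
    # Rank correlations between the three nonsense signals, per idea.
    r_auto_llm, _ = spearmanr(auto_distance, llm_scores)
    r_auto_human, _ = spearmanr(auto_distance, human_ratings)
    r_llm_human, _ = spearmanr(llm_scores, human_ratings)
    return {
        "auto_vs_llm": r_auto_llm,      # expected negative
        "auto_vs_human": r_auto_human,  # expected negative
        "llm_vs_human": r_llm_human,    # expected positive
    }
```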

---

## 6. Human Evaluation Protocol
@@ -306,6 +392,22 @@ How creative is this idea overall?
7 = Extremely creative
```

#### 6.2.4 Relevance/Coherence (7-point Likert) - For RQ6
```
How relevant and coherent is this idea to the original query?
1 = Nonsense/completely irrelevant (no logical connection)
2 = Very weak connection (hard to see relevance)
3 = Weak connection (requires a stretch to see relevance)
4 = Moderate connection (somewhat relevant)
5 = Good connection (clearly relevant)
6 = Strong connection (directly applicable)
7 = Perfect fit (highly relevant and coherent)
```

**Note**: This scale specifically measures the "cost" of context-free generation.
- Ideas with high novelty but low relevance (1-3) = potential hallucination
- Ideas with high novelty AND high relevance (5-7) = successful creative leap
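A small sketch of the classification this note implies; the thresholds are illustrative, reusing the 0.6 creative-distance threshold from §5.3.2 and the 1-3 / 5-7 relevance bands above:

```python
def classify_idea(novelty: float, relevance: int) -> str:
    # novelty: query distance in [0, 1]; relevance: 1-7 Likert rating.
    if novelty >= 0.6 and relevance <= 3:
        return "potential hallucination"
    if novelty >= 0.6 and relevance >= 5:
        return "successful creative leap"
    return "conventional"
```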

### 6.3 Procedure

1. **Introduction** (5 min)

@@ -361,21 +463,27 @@ For each query Q in QuerySet:
    For each condition C in Conditions:

        If C == "Direct":
            # No attributes, no experts
            ideas = direct_llm_generation(Q, n=20)

        Elif C == "Single-Expert":
            expert = generate_expert(Q, n=1)
            ideas = expert_transformation(Q, expert, ideas_per_expert=20)

        Elif C == "Multi-Expert-4":
        Elif C == "Expert-Only":
            # No attributes, with experts
            experts = generate_experts(Q, n=4)
            ideas = expert_transformation(Q, experts, ideas_per_expert=5)
            ideas = expert_generation_whole_query(Q, experts, ideas_per_expert=5)

        Elif C == "Multi-Expert-8":
            experts = generate_experts(Q, n=8)
            ideas = expert_transformation(Q, experts, ideas_per_expert=2-3)
        Elif C == "Attribute-Only":
            # With attributes, no experts
            attributes = decompose_attributes(Q)
            ideas = direct_generation_per_attribute(Q, attributes, ideas_per_attr=5)

        Elif C == "Full-Pipeline":
            # With attributes, with experts
            attributes = decompose_attributes(Q)
            experts = generate_experts(Q, n=4)
            ideas = expert_transformation(Q, attributes, experts, ideas_per_combo=1-2)

        Elif C == "Random-Perspective":
            # Control: random words instead of experts
            perspectives = random.sample(RANDOM_WORDS, 4)
            ideas = perspective_generation(Q, perspectives, ideas_per=5)
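The factorial structure of this dispatch can be made explicit in code. A runnable sketch of the condition grid (the generation calls themselves would plug into the branches of the pseudocode above):

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Condition:
    name: str
    use_attributes: bool
    use_experts: bool

# The 2x2 grid; the Random-Perspective control sits outside the factorial design.
GRID = {
    (False, False): "Direct",
    (False, True): "Expert-Only",
    (True, False): "Attribute-Only",
    (True, True): "Full-Pipeline",
}
CONDITIONS = [Condition(GRID[(a, e)], a, e)
              for a, e in product([False, True], repeat=2)]

for cond in CONDITIONS:
    print(f"{cond.name}: attributes={cond.use_attributes}, experts={cond.use_experts}")
```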

@@ -469,20 +577,34 @@ Plot: Expert count vs diversity curve

## 9. Expected Results & Hypotheses

### 9.1 Primary Hypotheses
### 9.1 Primary Hypotheses (2×2 Factorial)

| Hypothesis | Prediction | Metric |
|------------|------------|--------|
| **H1** | Multi-Expert-4 > Single-Expert > Direct | Semantic diversity |
| **H2** | Multi-Expert-8 ≈ Multi-Expert-4 (diminishing returns) | Semantic diversity |
| **H3** | Multi-Expert > Direct | Patent novelty rate |
| **H4** | LLM experts > Curated > DBpedia | Unconventionality |
| **H5** | With attributes > Without attributes | Overall diversity |
| **H1: Main Effect of Attributes** | Attribute-Only > Direct | Semantic diversity |
| **H2: Main Effect of Experts** | Expert-Only > Direct | Semantic diversity |
| **H3: Interaction Effect** | Full Pipeline > (Attribute-Only + Expert-Only - Direct) | Semantic diversity |
| **H4: Novelty** | Full Pipeline > all other conditions | Patent novelty rate |
| **H5: Expert vs Random** | Expert-Only > Random-Perspective | Validates expert knowledge matters |
| **H6: Novelty-Usefulness Tradeoff** | Full Pipeline has higher nonsense rate than Direct, but acceptable (<20%) | Nonsense rate |
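A sketch of how H1-H3 could be tested once results are collected into a long-format table (statsmodels; the column names are illustrative, not a fixed schema):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

def run_factorial_anova(df: pd.DataFrame) -> pd.DataFrame:
    # df columns (illustrative): 'diversity' (one score per query x condition cell),
    # 'attributes' in {"with", "without"}, 'experts' in {"with", "without"}.
    model = ols("diversity ~ C(attributes) * C(experts)", data=df).fit()
    # Type-II ANOVA table: two main effects (H1, H2) plus the interaction term (H3).
    return sm.stats.anova_lm(model, typ=2)
```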

### 9.2 Expected Effect Sizes
### 9.2 Expected Pattern

```
                     Without Experts       With Experts
                     ---------------       ------------
Without Attributes   Direct (low)          Expert-Only (medium)
With Attributes      Attr-Only (medium)    Full Pipeline (high)
```

**Expected interaction**: The combination (Full Pipeline) should produce super-additive effects - the benefit of experts is amplified when combined with structured attributes.
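The pattern above can be visualized with a standard interaction plot; a matplotlib sketch with illustrative (not measured) cell means, where non-parallel lines indicate the hypothesized interaction:

```python
import matplotlib.pyplot as plt

# Illustrative diversity means for the 2x2 cells: [without attrs, with attrs].
cells = {"Without Experts": [0.30, 0.45],
         "With Experts":    [0.45, 0.70]}
for label, (lo, hi) in cells.items():
    plt.plot(["Without Attributes", "With Attributes"], [lo, hi],
             marker="o", label=label)
plt.ylabel("Mean pairwise distance (diversity)")
plt.title("Expected super-additive interaction")
plt.legend()
plt.show()
```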

### 9.3 Expected Effect Sizes

Based on related work:
- Diversity increase: d = 0.5-0.8 (medium to large)
- Main effect of attributes: d = 0.3-0.5 (small to medium)
- Main effect of experts: d = 0.4-0.6 (medium)
- Interaction effect: d = 0.2-0.4 (small)
- Patent novelty increase: 20-40% improvement
- Human creativity rating: d = 0.3-0.5 (small to medium)

@@ -14,7 +14,26 @@ Groups of people tend to generate more diverse ideas than individuals because each

PersonaFlow provides multiple perspectives by using LLMs to simulate domain-specific experts. User studies showed it increased the perceived relevance and creativity of ideated research directions and promoted users' critical thinking activities without increasing perceived cognitive load.

**Gap for our work**: PersonaFlow focuses on research ideation. Our system applies to product/innovation ideation with structured attribute decomposition.
**Critical Gap - Our Key Differentiation**:

```
PersonaFlow approach:
    Query → Experts → Ideas
    (Experts see the WHOLE query, no problem structure)

Our approach:
    Query → Attribute Decomposition → (Attributes × Experts) → Ideas
    (Experts see SPECIFIC attributes, systematic coverage)
```

| Limitation of PersonaFlow | Our Solution |
|---------------------------|--------------|
| No problem structure | Attribute decomposition structures the problem space |
| Experts applied to whole query | Experts applied to specific attributes |
| Cannot test what helps (experts vs structure) | 2×2 factorial isolates each contribution |
| Implicit/random coverage of idea space | Systematic coverage via attribute × expert matrix |

**Our unique contribution**: We hypothesize that attribute decomposition **amplifies** expert effectiveness (interaction effect). PersonaFlow cannot test this because they never decomposed the problem.

### 1.3 PopBlends: Conceptual Blending with LLMs
**PopBlends: Strategies for Conceptual Blending with Large Language Models** (CHI 2023)

@@ -11,7 +11,7 @@

## Abstract (Draft)

Large Language Models (LLMs) are increasingly used for creative ideation, yet they exhibit a phenomenon we term "semantic gravity" - the tendency to generate outputs clustered around high-probability regions of their training distribution. This limits the novelty and diversity of generated ideas. We propose a multi-expert transformation framework that systematically activates diverse semantic regions by conditioning LLM generation on simulated expert perspectives. Our system decomposes concepts into structured attributes, generates ideas through multiple domain-expert viewpoints, and employs semantic deduplication to ensure genuine diversity. Through experiments comparing multi-expert generation against direct LLM generation and single-expert baselines, we demonstrate that our approach produces ideas with [X]% higher semantic diversity and [Y]% lower patent overlap. We contribute a theoretical framework explaining LLM creativity limitations and an open-source system for innovation ideation.
Large Language Models (LLMs) are increasingly used for creative ideation, yet they exhibit a phenomenon we term "semantic gravity" - the tendency to generate outputs clustered around high-probability regions of their training distribution. This limits the novelty and diversity of generated ideas. We investigate two complementary strategies to overcome this limitation: (1) **attribute decomposition**, which structures the problem space before creative exploration, and (2) **expert perspective transformation**, which conditions LLM generation on simulated domain-expert viewpoints. Through a 2×2 factorial experiment comparing Direct generation, Expert-Only, Attribute-Only, and Full Pipeline (both factors combined), we demonstrate that each factor independently improves semantic diversity, with the combination producing super-additive effects. Our Full Pipeline achieves [X]% higher semantic diversity and [Y]% lower patent overlap compared to direct generation. We contribute a theoretical framework explaining LLM creativity limitations and an open-source system for innovation ideation.

---

@@ -61,8 +61,17 @@ Large Language Models (LLMs) are increasingly used for creative ideation, yet they
- Evaluation methods (CAT, semantic distance)

### 2.5 Positioning Our Work
- Gap: No end-to-end system combining structured decomposition + multi-expert transformation + deduplication
- Distinction from PersonaFlow: product innovation focus, attribute structure

**Key distinction from PersonaFlow (closest related work)**:
```
PersonaFlow:  Query → Experts → Ideas (no problem structure)
Our approach: Query → Attributes → (Attributes × Experts) → Ideas
```

- PersonaFlow applies experts to the whole query; we apply experts to decomposed attributes
- PersonaFlow cannot isolate what helps; our 2×2 factorial design tests each factor
- We hypothesize attribute decomposition **amplifies** expert effectiveness (interaction effect)
- PersonaFlow showed experts help; we test whether **structuring the problem first** makes experts more effective

---

@@ -102,30 +111,41 @@ Large Language Models (LLMs) are increasingly used for creative ideation, yet they

## 4. Experiments

### 4.1 Research Questions
- RQ1: Does multi-expert generation increase semantic diversity?
- RQ2: Does multi-expert generation reduce patent overlap?
- RQ3: What is the optimal number of experts?
- RQ4: How do expert sources affect output quality?
- RQ1: Does attribute decomposition improve semantic diversity?
- RQ2: Does expert perspective transformation improve semantic diversity?
- RQ3: Is there an interaction effect between the two factors?
- RQ4: Which combination produces the highest patent novelty?
- RQ5: How do expert sources (LLM vs Curated vs External) affect quality?
- RQ6: What is the hallucination/nonsense rate of context-free keyword generation?

### 4.1.1 Design Note: Context-Free Keyword Generation
Our system intentionally excludes the original query during keyword generation:
- Stage 1: Expert sees attribute only (e.g., "wood" + "accountant"), NOT the query ("chair")
- Stage 2: Expert applies keyword to original query with context
- Rationale: Maximize semantic distance for novelty
- Risk: Some ideas may be too distant (nonsense/hallucination)
- RQ6 investigates this tradeoff

### 4.2 Experimental Setup

#### 4.2.1 Dataset
- N concepts/queries for ideation
- Selection criteria (diverse domains, complexity levels)
- 30 queries for ideation (see experimental_protocol.md)
- Selection criteria: diverse domains, complexity levels
- Categories: everyday objects, technology/tools, services/systems

#### 4.2.2 Conditions
| Condition | Description |
|-----------|-------------|
| Baseline | Direct LLM: "Generate 20 creative ideas for X" |
| Single-Expert | 1 expert × 20 ideas |
| Multi-Expert-4 | 4 experts × 5 ideas each |
| Multi-Expert-8 | 8 experts × 2-3 ideas each |
| Random-Perspective | 4 random words as "perspectives" |

#### 4.2.2 Conditions (2×2 Factorial Design)
| Condition | Attributes | Experts | Description |
|-----------|------------|---------|-------------|
| **C1: Direct** | ❌ | ❌ | Baseline: "Generate 20 creative ideas for [query]" |
| **C2: Expert-Only** | ❌ | ✅ | Expert personas generate for whole query |
| **C3: Attribute-Only** | ✅ | ❌ | Decompose query, direct generate per attribute |
| **C4: Full Pipeline** | ✅ | ✅ | Decompose query, experts generate per attribute |
| **C5: Random-Perspective** | ❌ | (random) | Control: 4 random words as "perspectives" |

#### 4.2.3 Controls
- Same LLM model (specify version)
- Same temperature settings
- Same total idea count per condition
- Same total idea count per condition (20 ideas)

### 4.3 Metrics

@@ -142,8 +162,18 @@ Large Language Models (LLMs) are increasingly used for creative ideation, yet they
- Novelty rating (1-7 Likert)
- Usefulness rating (1-7 Likert)
- Creativity rating (1-7 Likert)
- **Relevance rating (1-7 Likert) - for RQ6**
- Interrater reliability (Cronbach's alpha), see the sketch after this list

#### 4.3.4 Nonsense/Hallucination Analysis (RQ6) - Three Methods
| Method | Metric | Purpose |
|--------|--------|---------|
| Automatic | Semantic distance threshold (>0.85) | Fast screening |
| LLM-as-Judge | GPT-4 relevance score (1-3) | Scalable evaluation |
| Human | Relevance rating (1-7 Likert) | Gold standard validation |

Triangulate all three to validate findings.
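For the interrater reliability bullet above, a minimal Cronbach's alpha sketch (numpy only; raters are treated as the "items" of the standard formula):

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    # ratings: shape (n_ideas, n_raters); raters treated as items.
    # alpha = k/(k-1) * (1 - sum of per-rater variances / variance of totals)
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1).sum()
    total_var = ratings.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)
```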

### 4.4 Procedure
- Idea generation process
- Evaluation process

@@ -153,27 +183,44 @@ Large Language Models (LLMs) are increasingly used for creative ideation, yet they

## 5. Results

### 5.1 Semantic Diversity (RQ1)
### 5.1 Main Effect of Attribute Decomposition (RQ1)
- Compare: (Attribute-Only + Full Pipeline) vs (Direct + Expert-Only)
- Quantitative results
- Visualization (t-SNE/UMAP of idea embeddings)
- Statistical significance tests
- Statistical significance (ANOVA main effect)

### 5.2 Patent Novelty (RQ2)
### 5.2 Main Effect of Expert Perspectives (RQ2)
- Compare: (Expert-Only + Full Pipeline) vs (Direct + Attribute-Only)
- Quantitative results
- Statistical significance (ANOVA main effect)

### 5.3 Interaction Effect (RQ3)
- 2×2 interaction analysis
- Visualization: interaction plot
- Evidence for super-additive vs additive effects

### 5.4 Patent Novelty (RQ4)
- Overlap rates by condition
- Full Pipeline vs other conditions
- Examples of high-novelty ideas

### 5.3 Expert Count Analysis (RQ3)
- Diversity vs. expert count curve
- Diminishing returns analysis
- Optimal expert count recommendation

### 5.4 Expert Source Comparison (RQ4)
- LLM-generated vs. curated vs. random
### 5.5 Expert Source Comparison (RQ5)
- LLM-generated vs curated vs external
- Unconventionality metrics
- Within Expert=With conditions only

### 5.5 Human Evaluation Results
- Rating distributions
- Condition comparisons
### 5.6 Control Condition Analysis
- Expert-Only vs Random-Perspective
- Validates that expert knowledge matters

### 5.7 Hallucination/Nonsense Analysis (RQ6)
- Nonsense rate by condition (LLM-as-judge)
- Semantic distance threshold analysis
- Novelty-usefulness tradeoff visualization
- Is the context-free design worth the hallucination cost?

### 5.8 Human Evaluation Results
- Rating distributions by condition
- 2×2 pattern in human judgments
- Correlation with automatic metrics

---

@@ -181,14 +228,14 @@ Large Language Models (LLMs) are increasingly used for creative ideation, yet they

## 6. Discussion

### 6.1 Interpreting the Results
- Why multi-expert works
- The role of structured decomposition
- Deduplication importance
- Why each factor contributes independently
- The interaction: why attributes amplify expert effectiveness
- Theoretical explanation via conceptual blending

### 6.2 Theoretical Implications
- Semantic gravity as framework for LLM creativity
- Expert perspectives as productive constraints
- Inner crowd wisdom
- Two complementary escape mechanisms
- Structured decomposition as "scaffolding" for creative exploration

### 6.3 Practical Implications
- When to use multi-expert approach

research/research_report.md (new file, 472 lines)
@@ -0,0 +1,472 @@
---
marp: true
theme: default
paginate: true
size: 16:9
style: |
  section {
    font-size: 24px;
  }
  h1 {
    color: #2563eb;
  }
  h2 {
    color: #1e40af;
  }
  table {
    font-size: 20px;
  }
  .columns {
    display: grid;
    grid-template-columns: 1fr 1fr;
    gap: 1rem;
  }
---

# Breaking Semantic Gravity
## Expert-Augmented LLM Ideation for Enhanced Creativity

**Research Progress Report**

January 2026

---

# Agenda

1. Research Problem & Motivation
2. Theoretical Framework: "Semantic Gravity"
3. Proposed Solution: Expert-Augmented Ideation
4. Experimental Design
5. Implementation Progress
6. Timeline & Next Steps

---

# 1. Research Problem

## The Myth and Problem of LLM Creativity

**Myth**: LLMs enable infinite idea generation for creative tasks

**Problem**: Generated ideas lack **diversity** and **novelty**

- Ideas cluster around high-probability training distributions
- Limited exploration of distant conceptual spaces
- "Creative" outputs are **interpolations**, not **extrapolations**

---

# The "Semantic Gravity" Phenomenon

```
Direct LLM Generation:
Input: "Generate creative ideas for a chair"

Result:
- "Ergonomic office chair" (high probability)
- "Foldable portable chair" (high probability)
- "Eco-friendly bamboo chair" (moderate probability)

Problem:
→ Ideas cluster in predictable semantic neighborhoods
→ Limited exploration of distant conceptual spaces
```

---

# Why Does Semantic Gravity Occur?

| Factor | Description |
|--------|-------------|
| **Statistical Pattern Learning** | LLMs learn co-occurrence patterns from training data |
| **Model Collapse** (to revisit) | Sampling from "creative ideas" distribution seen in training |
| **Relevance Trap** (to revisit) | Strong associations dominate weak ones |
| **Domain Bias** | Outputs gravitate toward category prototypes |

---

# 2. Theoretical Framework

## Three Key Foundations

1. **Semantic Distance Theory** (Mednick, 1962)
   - Creativity correlates with conceptual "jump" distance

2. **Conceptual Blending Theory** (Fauconnier & Turner, 2002)
   - Creative products emerge from blending input spaces

3. **Design Fixation** (Jansson & Smith, 1991)
   - Blind adherence to initial ideas limits creativity

---

# Semantic Distance in Action

```
Without Expert:
"Chair" → furniture, sitting, comfort, design
Semantic distance: SHORT

With Marine Biologist Expert:
"Chair" → underwater pressure, coral structure, buoyancy
Semantic distance: LONG

Result: Novel ideas like "pressure-adaptive seating"
```

**Key Insight**: Expert perspectives force semantic jumps that LLMs wouldn't naturally make.

---

# 3. Proposed Solution

## Expert-Augmented LLM Ideation Pipeline

```
┌──────────────┐   ┌──────────────┐   ┌──────────────┐
│  Attribute   │ → │    Expert    │ → │    Expert    │
│ Decomposition│   │  Generation  │   │Transformation│
└──────────────┘   └──────────────┘   └──────────────┘
                                              │
                                              ▼
┌──────────────┐   ┌──────────────┐
│   Novelty    │ ← │ Deduplication│
│  Validation  │   │              │
└──────────────┘   └──────────────┘
```

---

# From "Wisdom of Crowds" to "Inner Crowd"

**Traditional Crowd**:
- Person 1 → Ideas from perspective 1
- Person 2 → Ideas from perspective 2
- Aggregation → Diverse idea pool

**Our "Inner Crowd"**:
- LLM + Expert 1 Persona → Ideas from perspective 1
- LLM + Expert 2 Persona → Ideas from perspective 2
- Aggregation → Diverse idea pool (simulated crowd)

---

# Expert Sources

| Source | Description | Coverage |
|--------|-------------|----------|
| **LLM-Generated** | Query-specific, prioritizes unconventional | Flexible |
| **Curated** | 210 pre-selected high-quality occupations | Controlled |
| **DBpedia** | 2,164 occupations from database | Broad |

Note: use the domain list (try adding two levels of the Dewey Decimal Classification? Future work?)

---

# 4. Research Questions (2×2 Factorial Design)

| ID | Research Question |
|----|-------------------|
| **RQ1** | Does attribute decomposition improve semantic diversity? |
| **RQ2** | Does expert perspective transformation improve semantic diversity? |
| **RQ3** | Is there an interaction effect between the two factors? |
| **RQ4** | Which combination produces the highest patent novelty? |
| **RQ5** | How do expert sources (LLM vs Curated vs External) affect quality? |
| **RQ6** | What is the hallucination/nonsense rate of context-free generation? |

---

# Design Choice: Context-Free Keyword Generation

Our system intentionally excludes the original query during keyword generation:

```
Stage 1 (Keyword): Expert sees "木質" (wood) + "會計師" (accountant)
                   Expert does NOT see "椅子" (chair)
                   → Generates: "資金流動" (cash flow)

Stage 2 (Description): Expert sees "椅子" + "資金流動"
                       → Applies keyword to original query
```

**Rationale**: Forces maximum semantic distance for novelty
**Risk**: Some keywords may be too distant → nonsense/hallucination
**RQ6**: Measure this tradeoff

---

# The Semantic Distance Tradeoff

```
Too Close                  Optimal Zone                  Too Far
(Semantic Gravity)         (Creative)                    (Hallucination)
├─────────────────────────┼──────────────────────────────┼─────────────────────────┤
"Ergonomic office chair"   "Pressure-adaptive seating"    "Quantum chair consciousness"

High usefulness            High novelty + useful          High novelty, nonsense
Low novelty                                               Low usefulness
```

**H6**: Full Pipeline has higher nonsense rate than Direct, but acceptable (<20%)

---

# Measuring Nonsense/Hallucination (RQ6) - Three Methods

| Method | Metric | Pros | Cons |
|--------|--------|------|------|
| **Automatic** | Semantic distance > 0.85 | Fast, cheap | May miss contextual nonsense |
| **LLM-as-Judge** | GPT-4 relevance score (1-3) | Moderate cost, scalable | Potential LLM bias |
| **Human Evaluation** | Relevance rating (1-7 Likert) | Gold standard | Expensive, slow |

**Triangulation**: Compare all three methods
- Agreement → high confidence in nonsense detection
- Disagreement → interesting edge cases to analyze

---

# Core Hypotheses (2×2 Factorial)

| Hypothesis | Prediction | Metric |
|------------|------------|--------|
| **H1: Attributes** | (Attr-Only + Full) > (Direct + Expert-Only) | Semantic diversity |
| **H2: Experts** | (Expert-Only + Full) > (Direct + Attr-Only) | Semantic diversity |
| **H3: Interaction** | Full > (Attr-Only + Expert-Only - Direct) | Super-additive effect |
| **H4: Novelty** | Full Pipeline > all others | Patent novelty rate |
| **H5: Control** | Expert-Only > Random-Perspective | Validates expert knowledge |
| **H6: Tradeoff** | Full Pipeline nonsense rate < 20% | Nonsense rate |

---

# Experimental Conditions (2×2 Factorial)

| Condition | Attributes | Experts | Description |
|-----------|------------|---------|-------------|
| **C1: Direct** | ❌ | ❌ | Baseline: "Generate 20 ideas for [query]" |
| **C2: Expert-Only** | ❌ | ✅ | Expert personas generate for whole query |
| **C3: Attribute-Only** | ✅ | ❌ | Decompose query, direct generate per attribute |
| **C4: Full Pipeline** | ✅ | ✅ | Decompose query, experts generate per attribute |
| **C5: Random-Perspective** | ❌ | (random) | Control: random words as "perspectives" |

---

# Expected 2×2 Pattern

```
                     Without Experts       With Experts
                     ---------------       ------------
Without Attributes   Direct (low)          Expert-Only (medium)

With Attributes      Attr-Only (medium)    Full Pipeline (high)
```

**Key prediction**: The combination (Full Pipeline) produces **super-additive** effects
- Experts are more effective when given structured attributes to transform
- The interaction term should be statistically significant

---

# Query Dataset (30 Queries)

**Category A: Everyday Objects (10)**
- Chair, Umbrella, Backpack, Coffee mug, Bicycle...

**Category B: Technology & Tools (10)**
- Solar panel, Electric vehicle, 3D printer, Drone...

**Category C: Services & Systems (10)**
- Food delivery, Online education, Healthcare appointment...

**Total**: 30 queries × 5 conditions (4 factorial + 1 control) × 20 ideas = **3,000 ideas**

---

# Metrics: Statistical Evaluation

| Metric | Formula | Interpretation |
|--------|---------|----------------|
| **Mean Pairwise Distance** | avg(1 - cos_sim(i, j)) | Higher = more diverse |
| **Silhouette Score** | Cluster cohesion vs separation | Higher = clearer clusters |
| **Query Distance** | 1 - cos_sim(query, idea) | Higher = farther from original |
| **Patent Novelty Rate** | 1 - (matches / total) | Higher = more novel |

---

# Metrics: Human Evaluation

**Participants**: 60 evaluators (Prolific/MTurk)

**Rating Scales** (7-point Likert):

- **Novelty**: How novel/surprising is this idea?
- **Usefulness**: How practical is this idea?
- **Creativity**: How creative is this idea overall?
- **Relevance**: How relevant/coherent is this idea to the query? **(RQ6)**
- Nonsense?

**Quality Control**:

- Attention checks, completion time monitoring
- Inter-rater reliability (Cronbach's α > 0.7)

---

# What is Prolific/MTurk?

Online platforms for recruiting human participants for research studies.

| Platform | Description | Best For |
|----------|-------------|----------|
| **Prolific** | Academic-focused crowdsourcing | Research studies (higher quality) |
| **MTurk** | Amazon Mechanical Turk | Large-scale tasks (lower cost) |

**How it works for our study**:
1. Upload 600 ideas to evaluate (subset of generated ideas)
2. Recruit 60 participants (~$8-15/hour compensation)
3. Each participant rates ~30 ideas (novelty, usefulness, creativity)
4. Download ratings → statistical analysis

**Cost estimate**: 60 participants × 30 min × $12/hr = ~$360

---

# Alternative: LLM-as-Judge

If human evaluation is too expensive or time-consuming:

| Approach | Pros | Cons |
|----------|------|------|
| **Human (Prolific/MTurk)** | Gold standard, publishable | Cost, time, IRB approval |
| **LLM-as-Judge (GPT-4)** | Fast, cheap, reproducible | Less rigorous, potential bias |
| **Automatic metrics only** | No human cost | Missing subjective quality |

**Recommendation**: Start with automatic metrics, add human evaluation for final paper submission.

---

# 5. Implementation Status

## System Components (Implemented)

- Attribute decomposition pipeline
- Expert team generation (LLM, Curated, DBpedia sources)
- Expert transformation with parallel processing
- Semantic deduplication (embedding + LLM methods)
- Patent search integration
- Web-based visualization interface

---

# Implementation Checklist

### Experiment Scripts (To Do)
- [ ] `experiments/generate_ideas.py` - Idea generation
- [ ] `experiments/compute_metrics.py` - Automatic metrics
- [ ] `experiments/export_for_evaluation.py` - Human evaluation prep
- [ ] `experiments/analyze_results.py` - Statistical analysis
- [ ] `experiments/visualize.py` - Generate figures

---

# 6. Timeline

| Phase | Activity |
|-------|----------|
| **Phase 1** | Implement idea generation scripts |
| **Phase 2** | Generate all ideas (5 conditions × 30 queries) |
| **Phase 3** | Compute automatic metrics |
| **Phase 4** | Design and pilot human evaluation |
| **Phase 5** | Run human evaluation (60 participants) |
| **Phase 6** | Analyze results and write paper |

---

# Target Venues

### Tier 1 (Recommended)
- **CHI** - ACM Conference on Human Factors (Sept deadline)
- **CSCW** - Computer-Supported Cooperative Work (Apr/Jan deadline)
- **Creativity & Cognition** - Specialized computational creativity

### Journal Options
- **IJHCS** - International Journal of Human-Computer Studies
- **TOCHI** - ACM Transactions on CHI

---

# Key Contributions

1. **Theoretical**: "Semantic gravity" framework + two-factor solution

2. **Methodological**: 2×2 factorial design isolates attribute vs expert contributions

3. **Empirical**: Quantitative evidence for interaction effects in LLM creativity

4. **Practical**: Open-source system with both factors for maximum diversity

---

# Key Differentiator vs PersonaFlow

```
PersonaFlow (2024): Query → Experts → Ideas
                    (Experts see WHOLE query, no structure)

Our Approach:       Query → Attributes → (Attributes × Experts) → Ideas
                    (Experts see SPECIFIC attributes, systematic)
```

**What we can answer that PersonaFlow cannot:**
1. Does problem structure alone help? (Attribute-Only vs Direct)
2. Do experts help beyond structure? (Full vs Attribute-Only)
3. Is there an interaction effect? (amplification hypothesis)

---

# Related Work Comparison

| Approach | Limitation | Our Advantage |
|----------|------------|---------------|
| Direct LLM | Semantic gravity | Two-factor enhancement |
| **PersonaFlow** | **No problem structure** | **Attribute decomposition amplifies experts** |
| PopBlends | Two-concept only | Systematic attribute × expert matrix |
| BILLY | Cannot isolate factors | 2×2 factorial isolates contributions |

---

# References (Key Papers)

1. Siangliulue et al. (2017) - Wisdom of Crowds via Role Assumption
2. Liu et al. (2024) - PersonaFlow: LLM-Simulated Expert Perspectives
3. Choi et al. (2023) - PopBlends: Conceptual Blending with LLMs
4. Wadinambiarachchi et al. (2024) - Effects of Generative AI on Design Fixation
5. Mednick (1962) - Semantic Distance Theory
6. Fauconnier & Turner (2002) - Conceptual Blending Theory

*Full reference list: 55+ papers in `research/references.md`*

---

# Questions & Discussion

## Next Steps
1. Finalize experimental design details
2. Implement experiment scripts
3. Collect pilot data for validation
4. Submit IRB for human evaluation (if needed)

---

# Thank You

**Project Repository**: novelty-seeking

**Research Materials**:
- `research/literature_review.md`
- `research/theoretical_framework.md`
- `research/experimental_protocol.md`
- `research/paper_outline.md`
- `research/references.md`
@@ -59,6 +59,27 @@ With Marine Biologist Expert:
Result: Novel ideas like "pressure-adaptive seating" or "coral-inspired structural support"
```

#### The Semantic Distance Tradeoff

However, semantic distance is not always beneficial. There exists a tradeoff:

```
Semantic Distance Spectrum:

Too Close                    Optimal Zone                     Too Far
(Semantic Gravity)           (Creative)                       (Hallucination)
├────────────────────────────┼────────────────────────────────┼────────────────────────────┤
"Ergonomic office chair"     "Pressure-adaptive seating"      "Quantum-entangled
                             "Coral-inspired support"          chair consciousness"

High usefulness              High novelty + useful            High novelty, nonsense
Low novelty                                                   Low usefulness
```

**Our Design Choice**: Context-free keyword generation (Stage 1 excludes the original query) intentionally pushes toward the "far" end to maximize novelty. Stage 2 re-introduces query context to ground the ideas.

**Research Question**: What is the hallucination/nonsense rate of this approach, and is the tradeoff worthwhile?

#### 2. Conceptual Blending Theory (Fauconnier & Turner, 2002)

> "Creative products emerge from blending elements of two input spaces into a novel integrated space."

@@ -136,12 +157,22 @@ Our "Inner Crowd":
Aggregation → Diverse idea pool (simulated crowd)
```

### Why Multiple Experts Work
### Why This Approach Works: Two Complementary Mechanisms

1. **Coverage**: Different experts activate different semantic regions
2. **Redundancy Reduction**: Deduplication removes overlapping ideas
3. **Diversity by Design**: Expert selection can be optimized for maximum diversity
4. **Diminishing Returns**: Beyond ~4-6 experts, marginal diversity gains decrease

**Factor 1: Attribute Decomposition**
- Structures the problem space before creative exploration
- Prevents premature fixation on holistic solutions
- Ensures coverage across different aspects of the target concept

**Factor 2: Expert Perspectives**
- Different experts activate different semantic regions
- Forces semantic jumps that LLMs wouldn't naturally make
- Each expert provides a distinct input space for conceptual blending

**Combined Effect (Interaction)**
- Experts are more effective when given structured attributes to transform
- Attributes without expert perspectives still generate predictable ideas
- The combination creates systematic exploration of remote conceptual spaces

---

@@ -231,32 +262,43 @@ Output:

---

## Testable Hypotheses
## Testable Hypotheses (2×2 Factorial Design)

### H1: Semantic Diversity
> Multi-expert generation produces higher semantic diversity than single-expert or direct generation.

Our experimental design manipulates two independent factors:
1. **Attribute Decomposition**: With / Without
2. **Expert Perspectives**: With / Without

### H1: Main Effect of Attribute Decomposition
> Conditions with attribute decomposition produce higher semantic diversity than those without.

**Prediction**: (Attribute-Only + Full Pipeline) > (Direct + Expert-Only)
**Measurement**: Mean pairwise cosine distance between idea embeddings

### H2: Novelty
> Ideas from multi-expert generation have lower patent overlap than direct generation.
### H2: Main Effect of Expert Perspectives
> Conditions with expert perspectives produce higher semantic diversity than those without.

**Measurement**: Percentage of ideas with existing patent matches
**Prediction**: (Expert-Only + Full Pipeline) > (Direct + Attribute-Only)
**Measurement**: Mean pairwise cosine distance between idea embeddings

### H3: Expert Count Effect
> Semantic diversity increases with expert count, with diminishing returns beyond 4-6 experts.
### H3: Interaction Effect
> The combination of attributes and experts produces super-additive benefits.

**Measurement**: Diversity vs. expert count curve
**Prediction**: Full Pipeline > (Attribute-Only + Expert-Only - Direct)
**Rationale**: Experts are more effective when given a structured problem decomposition to work with.
**Measurement**: Interaction term in 2×2 ANOVA

### H4: Expert Source Effect
> LLM-generated experts produce more unconventional ideas than curated/database experts.
### H4: Novelty
> The Full Pipeline produces ideas with the lowest patent overlap.

**Measurement**: Semantic distance from query centroid
**Prediction**: Full Pipeline has the highest novelty rate across all conditions
**Measurement**: Percentage of ideas without existing patent matches

### H5: Fixation Breaking
> Multi-expert approach produces more ideas outside the top-3 semantic clusters than direct generation.
### H5: Expert vs Random Control
> Expert perspectives outperform random word perspectives.

**Measurement**: Cluster distribution analysis
**Prediction**: Expert-Only > Random-Perspective
**Rationale**: Validates that domain knowledge (not just any perspective shift) drives improvement
**Measurement**: Semantic diversity and human creativity ratings

---

@@ -271,10 +313,29 @@ Output:

## Positioning Against Related Work

### Key Differentiator: Attribute Decomposition

```
PersonaFlow (2024): Query → Experts → Ideas
Our Approach:       Query → Attributes → (Attributes × Experts) → Ideas
```

**Why this matters**: Attribute decomposition provides **scaffolding** that makes expert perspectives more effective. An expert seeing "chair materials" generates more focused ideas than an expert seeing just "chair."

### Comparison Table

| Approach | Limitation | Our Advantage |
|----------|------------|---------------|
| Direct LLM generation | Semantic gravity, fixation | Expert-forced semantic jumps |
| Human brainstorming | Cognitive fatigue, social dynamics | Tireless LLM generation |
| PersonaFlow (2024) | Research-focused, no attribute structure | Product innovation, structured decomposition |
| PopBlends (2023) | Two-concept blending only | Multi-expert, multi-attribute blending |
| BILLY (2025) | Vector fusion less interpretable | Sequential generation, explicit control |
| Direct LLM generation | Semantic gravity, fixation | Two-factor enhancement (attributes + experts) |
| **PersonaFlow (2024)** | **No problem structure, experts see whole query** | **Attribute decomposition amplifies expert effect** |
| PopBlends (2023) | Two-concept blending only | Systematic attribute × expert exploration |
| BILLY (2025) | Cannot isolate what helps | 2×2 factorial design isolates contributions |
| Persona prompting alone | Random coverage | Systematic coverage via attribute × expert matrix |

### What We Can Answer That PersonaFlow Cannot

1. **Does problem structure alone help?** (Attribute-Only vs Direct)
2. **Do experts help beyond structure?** (Full Pipeline vs Attribute-Only)
3. **Is there an interaction effect?** (Full Pipeline > Attribute-Only + Expert-Only - Direct)

PersonaFlow showed experts help, but never tested whether **structuring the problem first** makes experts more effective.