feat: Enhance patent search and update research documentation
- Improve patent search service with expanded functionality
- Update PatentSearchPanel UI component
- Add new research_report.md
- Update experimental protocol, literature review, paper outline, and theoretical framework

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@@ -14,7 +14,26 @@ Groups of people tend to generate more diverse ideas than individuals because ea
PersonaFlow provides multiple perspectives by using LLMs to simulate domain-specific experts. User studies showed it increased the perceived relevance and creativity of ideated research directions and promoted users' critical thinking activities without increasing perceived cognitive load.
**Gap for our work**: PersonaFlow focuses on research ideation. Our system applies to product/innovation ideation with structured attribute decomposition.
**Critical Gap - Our Key Differentiation**:
```
PersonaFlow approach:
Query → Experts → Ideas
(Experts see the WHOLE query, no problem structure)

Our approach:
Query → Attribute Decomposition → (Attributes × Experts) → Ideas
(Experts see SPECIFIC attributes, systematic coverage)
```
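The attribute × expert crossing sketched above can be illustrated in a few lines (a hypothetical sketch: the function name, personas, and prompt wording are ours for illustration, not the system's actual implementation):

```python
from itertools import product

def build_prompt_matrix(attributes, experts):
    """Cross every problem attribute with every expert persona,
    so each LLM call targets exactly one (attribute, expert) cell."""
    return {
        (attr, expert): (
            f"Role: {expert}. Propose ideas that improve "
            f"the '{attr}' attribute of the product."
        )
        for attr, expert in product(attributes, experts)
    }

# 2 attributes x 2 experts -> 4 cells, each covered exactly once
matrix = build_prompt_matrix(
    ["portability", "battery life"],
    ["industrial designer", "materials scientist"],
)
```

Each cell yields one focused prompt, which is what produces the systematic coverage that a single whole-query prompt cannot guarantee.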
| Limitation of PersonaFlow | Our Solution |
|---------------------------|--------------|
| No problem structure | Attribute decomposition structures the problem space |
| Experts applied to whole query | Experts applied to specific attributes |
| Cannot test what helps (experts vs. structure) | 2×2 factorial isolates each contribution |
| Implicit/random coverage of idea space | Systematic coverage via attribute × expert matrix |
**Our unique contribution**: We hypothesize that attribute decomposition **amplifies** expert effectiveness (an interaction effect). PersonaFlow cannot test this because it never decomposes the problem.
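The hypothesized amplification corresponds to the standard interaction contrast in a 2×2 factorial design. A minimal sketch, where `m00`–`m11` stand for hypothetical condition means (not measured data):

```python
def interaction_contrast(m00, m01, m10, m11):
    """Interaction effect in a 2x2 factorial design.

    m_de = mean outcome with decomposition at level d (0=off, 1=on)
           and expert personas at level e (0=off, 1=on).
    Returns how much the benefit of experts changes when
    attribute decomposition is switched on.
    """
    expert_gain_without_decomposition = m01 - m00
    expert_gain_with_decomposition = m11 - m10
    return expert_gain_with_decomposition - expert_gain_without_decomposition
```

A positive contrast would support the amplification hypothesis; a design with experts only (as in PersonaFlow's evaluation) has no decomposition-off/on pair from which to compute it.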
### 1.3 PopBlends: Conceptual Blending with LLMs
**PopBlends: Strategies for Conceptual Blending with Large Language Models** (CHI 2023)