| phase | plan | type | wave | depends_on | files_modified | autonomous | must_haves |
|---|---|---|---|---|---|---|---|
| 05-output-cli | 03 | execute | 2 | | | true | |
Purpose: This is the user-facing entry point that ties together all output modules into a single invocation. After running usher-pipeline score, the user runs usher-pipeline report to get all deliverables.
Output: src/usher_pipeline/cli/report_cmd.py registered in main.py, with CliRunner integration tests.
<execution_context> @/Users/gbanyan/.claude/get-shit-done/workflows/execute-plan.md @/Users/gbanyan/.claude/get-shit-done/templates/summary.md </execution_context>
@.planning/PROJECT.md @.planning/ROADMAP.md @.planning/STATE.md @.planning/phases/05-output-cli/05-01-SUMMARY.md @.planning/phases/05-output-cli/05-02-SUMMARY.md @src/usher_pipeline/cli/main.py @src/usher_pipeline/cli/score_cmd.py @src/usher_pipeline/cli/evidence_cmd.py @src/usher_pipeline/persistence/duckdb_store.py @src/usher_pipeline/config/schema.py

Task 1: Report CLI command

Files: src/usher_pipeline/cli/report_cmd.py, src/usher_pipeline/cli/main.py

**report_cmd.py**: Create the CLI report command following the established pattern from score_cmd.py and evidence_cmd.py.

```python
@click.command('report')
@click.option('--output-dir', type=click.Path(path_type=Path), default=None, help='Output directory (default: {data_dir}/report)')
@click.option('--force', is_flag=True, help='Overwrite existing report files')
@click.option('--skip-viz', is_flag=True, help='Skip visualization generation')
@click.option('--skip-report', is_flag=True, help='Skip reproducibility report generation')
@click.option('--high-threshold', type=float, default=0.7, help='Minimum score for HIGH tier (default: 0.7)')
@click.option('--medium-threshold', type=float, default=0.4, help='Minimum score for MEDIUM tier (default: 0.4)')
@click.option('--low-threshold', type=float, default=0.2, help='Minimum score for LOW tier (default: 0.2)')
@click.option('--min-evidence-high', type=int, default=3, help='Minimum evidence layers for HIGH tier (default: 3)')
@click.option('--min-evidence-medium', type=int, default=2, help='Minimum evidence layers for MEDIUM tier (default: 2)')
@click.pass_context
def report(ctx, output_dir, force, skip_viz, skip_report, high_threshold, medium_threshold, low_threshold, min_evidence_high, min_evidence_medium):
    ...
```
Follow the established CLI command pattern: load config -> init store/provenance -> check prerequisites -> execute steps -> display summary -> cleanup.
Pipeline steps (echoed with click.style like score_cmd.py):
- Load configuration and initialize storage (same pattern as score_cmd.py)
- Check scored_genes table exists (error if not: "Run 'usher-pipeline score' first")
- Load scored_genes DataFrame from DuckDB via store.load_dataframe('scored_genes')
- Build tier thresholds from CLI options into dict: {"high": {"score": high_threshold, "evidence_count": min_evidence_high}, "medium": {"score": medium_threshold, "evidence_count": min_evidence_medium}, "low": {"score": low_threshold}}
- Apply tiering: tiered_df = assign_tiers(scored_df, thresholds=thresholds)
- Add evidence summary: tiered_df = add_evidence_summary(tiered_df)
- Write dual-format output: paths = write_candidate_output(tiered_df, output_dir, "candidates")
- Echo tier counts: "HIGH: N, MEDIUM: N, LOW: N (total: N candidates from M scored genes)"
- If not --skip-viz: plot_paths = generate_all_plots(tiered_df, output_dir / "plots") -- echo each plot file created
- If not --skip-report: Load validation result if available (try store.load_dataframe for validation metadata, or call validate_known_gene_ranking if scored_genes has known gene data). Call report = generate_reproducibility_report(config, tiered_df, provenance, validation_result). Write report.to_json() and report.to_markdown() to output_dir.
- Save provenance sidecar for the report command itself
- Display final summary: output directory, file list, tier counts
Default output_dir: Path(config.data_dir) / "report" if not specified via --output-dir.
If output files already exist and --force not set, echo warning and skip (checkpoint pattern).
Ensure store.close() in finally block.
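A minimal skeleton of how these steps could hang together. The step order, helper names (assign_tiers, add_evidence_summary, write_candidate_output, generate_all_plots, generate_reproducibility_report), and store methods (has_checkpoint, load_dataframe, close) come from the plan above; the import paths, the config/store/provenance wiring, and the exact file names are assumptions, not a confirmed implementation:

```python
# Sketch only: import paths and the store/provenance wiring are assumptions;
# only the step order and helper names come from the plan above.
from pathlib import Path

import click

from usher_pipeline.output.tiering import assign_tiers, add_evidence_summary       # assumed path
from usher_pipeline.output.writers import write_candidate_output                   # assumed path
from usher_pipeline.output.plots import generate_all_plots                         # assumed path
from usher_pipeline.output.reproducibility import generate_reproducibility_report  # assumed path


def run_report(config, store, provenance, output_dir, force, skip_viz, skip_report,
               high_threshold, medium_threshold, low_threshold,
               min_evidence_high, min_evidence_medium):
    output_dir = output_dir or Path(config.data_dir) / "report"
    try:
        # Prerequisite: scoring must have run.
        if not store.has_checkpoint("scored_genes"):
            raise click.ClickException("Run 'usher-pipeline score' first")

        scored_df = store.load_dataframe("scored_genes")

        # Tier thresholds assembled from the CLI options.
        thresholds = {
            "high": {"score": high_threshold, "evidence_count": min_evidence_high},
            "medium": {"score": medium_threshold, "evidence_count": min_evidence_medium},
            "low": {"score": low_threshold},
        }

        tiered_df = assign_tiers(scored_df, thresholds=thresholds)
        tiered_df = add_evidence_summary(tiered_df)

        # Checkpoint pattern: skip if output already exists and --force not set.
        output_dir.mkdir(parents=True, exist_ok=True)
        if (output_dir / "candidates.tsv").exists() and not force:
            click.echo(click.style("Report files exist; use --force to overwrite", fg="yellow"))
            return

        write_candidate_output(tiered_df, output_dir, "candidates")

        if not skip_viz:
            generate_all_plots(tiered_df, output_dir / "plots")
        if not skip_report:
            # validation_result passed as None when no validation data is available (assumption).
            report = generate_reproducibility_report(config, tiered_df, provenance, None)
            (output_dir / "report.json").write_text(report.to_json())
            (output_dir / "report.md").write_text(report.to_markdown())

        # (provenance sidecar for the report command itself would be saved here)
        click.echo(click.style(f"Report written to {output_dir}", fg="green"))
    finally:
        store.close()
```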
**main.py**: Add report command registration.
Add import: from usher_pipeline.cli.report_cmd import report
Add registration: cli.add_command(report)
The CLI now has 4 top-level commands: setup, evidence, score, report (plus the existing info command).
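For context, the registration in main.py would look roughly like this (assuming the existing click group is named cli, as the add_command call above implies; the real group almost certainly carries its own options and docstring):

```python
import click

from usher_pipeline.cli.report_cmd import report


@click.group()
def cli():
    """usher-pipeline entry point (sketch; the real group likely defines shared options)."""


cli.add_command(report)  # alongside setup, evidence, score, and info
```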
Run: cd /Users/gbanyan/Project/usher-exploring && usher-pipeline report --help -- should show all options including --output-dir, --force, --skip-viz, --skip-report, tier thresholds
Run: usher-pipeline --help -- should list report in available commands
report command is registered and shows all expected options in --help output. CLI entry point lists setup, evidence, score, report, and info commands.
Follow the established test pattern from test_scoring_integration.py: create synthetic data in a tmp_path DuckDB, invoke CLI commands via CliRunner.
Create test fixtures:
test_configfixture: Write minimal config YAML to tmp_path, pointing duckdb_path and data_dir to tmp_pathpopulated_dbfixture: Create DuckDB at tmp_path, populate with:- gene_universe table (20 synthetic genes with gene_id and gene_symbol)
- scored_genes table with all required columns (gene_id, gene_symbol, composite_score, evidence_count, quality_flag, all 6 layer score columns + 6 contribution columns, available_weight, weighted_sum)
- Design data so: 3 genes HIGH tier (score 0.7-0.95, evidence_count 3-5), 5 MEDIUM, 5 LOW, 4 EXCLUDED (score < 0.2), 3 NULL composite_score
- Register in _checkpoints table so has_checkpoint('scored_genes') returns True
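One possible shape for the populated_db fixture, assuming the duckdb Python package and the table names above; the real store's schema (layer/contribution columns, _checkpoints layout) is not confirmed here and would need to match duckdb_store.py:

```python
import duckdb
import pytest


@pytest.fixture
def populated_db(tmp_path):
    """Synthetic DuckDB with scored_genes plus a checkpoint row (sketch; schema assumed)."""
    db_path = tmp_path / "pipeline.duckdb"
    con = duckdb.connect(str(db_path))
    con.execute("CREATE TABLE gene_universe (gene_id VARCHAR, gene_symbol VARCHAR)")
    con.execute(
        "CREATE TABLE scored_genes (gene_id VARCHAR, gene_symbol VARCHAR, "
        "composite_score DOUBLE, evidence_count INTEGER, quality_flag VARCHAR)"
    )  # layer score / contribution columns omitted for brevity
    rows = [
        ("G1", "GENE1", 0.90, 4, "ok"),   # HIGH
        ("G2", "GENE2", 0.50, 2, "ok"),   # MEDIUM
        ("G3", "GENE3", 0.30, 1, "ok"),   # LOW
        ("G4", "GENE4", 0.10, 1, "ok"),   # EXCLUDED
        ("G5", "GENE5", None, 0, "ok"),   # NULL composite_score
    ]
    con.executemany("INSERT INTO scored_genes VALUES (?, ?, ?, ?, ?)", rows)
    con.execute("INSERT INTO gene_universe SELECT gene_id, gene_symbol FROM scored_genes")
    # Checkpoint table name and columns are assumptions about the store's internals.
    con.execute("CREATE TABLE _checkpoints (name VARCHAR)")
    con.execute("INSERT INTO _checkpoints VALUES ('scored_genes')")
    con.close()
    return db_path
```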
Tests:
- test_report_help: Invoke report --help, assert exit_code 0, assert "--output-dir" in output
- test_report_generates_files: Invoke report with populated_db and test_config, assert exit_code 0, verify candidates.tsv exists, candidates.parquet exists, candidates.provenance.yaml exists
- test_report_tier_counts_in_output: Invoke report, assert "HIGH: 3" (or similar) appears in CLI output
- test_report_with_viz: Invoke report (no --skip-viz), verify plots/ directory contains score_distribution.png, layer_contributions.png, tier_breakdown.png
- test_report_skip_viz: Invoke report with --skip-viz, verify no plots/ directory created
- test_report_skip_report: Invoke report with --skip-report, verify no reproducibility .json/.md files
- test_report_custom_thresholds: Invoke with --high-threshold 0.8 --medium-threshold 0.5, verify different tier counts
- test_report_no_scored_genes_error: Invoke report with empty DuckDB (no scored_genes table), assert exit_code != 0, assert "Run 'usher-pipeline score' first" in output
- test_report_output_dir_option: Invoke with --output-dir custom_path, verify files created in custom_path
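An example of the CliRunner pattern these tests follow, shown here for the --help case since it needs no database; the import path of the cli group is an assumption based on src/usher_pipeline/cli/main.py:

```python
from click.testing import CliRunner

from usher_pipeline.cli.main import cli  # assumed location of the click group


def test_report_help():
    """--help should exit cleanly and document the key options."""
    runner = CliRunner()
    result = runner.invoke(cli, ["report", "--help"])
    assert result.exit_code == 0
    assert "--output-dir" in result.output
    assert "--skip-viz" in result.output
```

The database-backed tests would follow the same invoke pattern, adding the populated_db and test_config fixtures and asserting on the files written under the output directory.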
Run: cd /Users/gbanyan/Project/usher-exploring && python -m pytest tests/test_report_cmd.py -v
All 9 CliRunner integration tests pass. The report command correctly generates tiered candidates in TSV+Parquet, visualizations (unless --skip-viz), and a reproducibility report (unless --skip-report). Custom tier thresholds work. A missing scored_genes table produces a clear error message. All file paths are verified.
<success_criteria>
- CLI report command orchestrates the full output pipeline in one invocation
- Supports --output-dir, --force, --skip-viz, --skip-report, and configurable tier thresholds
- Follows established CLI patterns (config loading, store init, checkpoint, provenance, summary, cleanup)
- All CliRunner integration tests pass
- Unified CLI has all subcommands: setup, evidence, score, report, info </success_criteria>