Phase 6 round-3 codex-review fixes: blockers + majors + minors

Resolved Codex review (gpt-5.5 xhigh) findings against b6913d2.

BLOCKERS:
- Appendix B reference mismatch: rewrote all main-text "Appendix B" references
  to "supplementary materials" since Appendix B is now a redirect stub. Affected
  the SSIM design-argument pointer, threshold provenance, byte-level
  decomposition, MC band capture-rate, and backbone-ablation table references
  across §III-F / §III-H.1 / §III-H.2 / §III-K / §III-L.4 / §III-M / §IV-F /
  §IV-J / §IV-K / §IV-L / §V-C / §V-H.
- Table rendering: un-commented Tables I-IV (Dataset Summary, YOLO Detection,
  Extraction Results, Cosine Distribution Statistics) which were inside HTML
  comment blocks and would not have rendered in the submission.
- Table numbering out of order: Table XIX appeared before Tables XVI-XVIII.
  Renumbered XIX -> XVI (document-level worst-case counts), XVI -> XVII (Firm x
  K=3 cross-tab), XVII -> XVIII (K=3 component comparison), XVIII -> XIX
  (Spearman correlation). Cross-references updated in §IV-J / §IV-K and §V-C.
- Table V mis-citation: §IV-C said "KDE crossover ... (Table V)" but Table V is
  the dip test. Dropped the (Table V) tag; crossover is a textual finding.
- Submission cleanup: wrapped the archived Impact Statement section heading and
  body inside the existing HTML comment (it was rendering). Funding placeholder
  wrapped in an HTML comment with a TO-DO note (won't render but is preserved
  as a reminder).

MAJORS:
- Line 1077 numerical conflation: rewrote the §V-C / §III-L.4 paragraph that
  labelled Firm A's per-document HC+MC inter-CPA proxy ICCR of 0.6201 as a rate
  "on real same-CPA pools." 0.6201 is a counterfactual proxy under inter-CPA
  candidate-pool replacement, not the observed rate. Added an explicit disambiguation:
  the corresponding observed rate from Table XVI (formerly XIX) is 97.5%
  HC+MC for Firm A; the proxy and observed rates measure different quantities.
- Residual "validation" language softened: "Dual-descriptor verification" ->
  "Dual-descriptor similarity"; "we validate the backbone choice" -> "we
  support the backbone choice"; "pixel-identity validation" -> "pixel-identity
  positive-anchor check"; "## M. Validation Strategy and Limitations under
  Unsupervised Setting" -> "## M. Unsupervised Diagnostic Strategy and Limits".
- "Specificity behaviour" overclaim: "characterises the cosine threshold's
  specificity behaviour" -> "specificity-proxy behaviour" (methodology §III-L.0
  and discussion §V-F).
- "Prior published / prior calibration" ambiguity: replaced "prior published
  per-comparison rate" with "the corpus-wide rate reported in §IV-I"; replaced
  "(prior published operating point)" with "(alternative operating point from
  supplementary calibration evidence)" in Table XXI; replaced "prior reporting
  and the existing literature" with "the existing literature and the
  supplementary calibration evidence."

MINORS:
- Line 116 Bayes-optimal qualifier: "the local density minimum ... is the
  Bayes-optimal decision boundary under equal priors" -> "In idealized
  two-class mixture settings with equal priors and equal misclassification
  costs, the local density minimum ... coincides with the Bayes-optimal
  decision boundary."
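The revised qualifier can be illustrated with a toy numeric check (a sketch: the two unit-variance Gaussians N(0,1) and N(4,1) and the search grid are illustrative choices, not values from the manuscript). Under equal priors and costs the Bayes boundary sits where the class densities cross, which for this symmetric pair is x = 2, and the mixture's local density minimum lands there too:

```python
import math

def gauss(x, mu, sigma=1.0):
    """Normal density N(mu, sigma^2)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Equal-prior two-component mixture: 0.5*N(0,1) + 0.5*N(4,1).
def mixture(x):
    return 0.5 * gauss(x, 0.0) + 0.5 * gauss(x, 4.0)

# Grid search for the local density minimum (antimode) between the modes.
grid = [i / 1000 for i in range(500, 3500)]  # x in [0.5, 3.5]
antimode = min(grid, key=mixture)

# Under equal priors/costs the Bayes boundary is where the class
# densities cross: gauss(x, 0) == gauss(x, 4), i.e. x = 2 by symmetry.
print(antimode)  # → 2.0
```

With unequal priors the crossing point (and hence the Bayes boundary) shifts away from the antimode, which is exactly why the qualifier was added.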
- Stale section refs: §V-G for the fine-tuning caveat retargeted to §V-H
  Engineering-level caveats (where it lives after the §V-H reorganisation);
  §III-L for the worst-case rule retargeted to §III-H.1; "Section IV-D.2"
  (nonexistent) retargeted to "Section IV-D Table VI."
- Abstract / Introduction "after pool-size adjustment": separated the
  document-level D2 proxy ICCR claim from the per-signature logistic regression
  claim. Now: "Per-document D2 inter-CPA proxy ICCRs differ by an order of
  magnitude across firms ... a per-signature logistic regression confirms the
  firm gap persists after pool-size control."

NIT:
- Related Work HTML comment "(see paper_a_references_v3.md for full list)"
  -> "(full list in the References section)"; removes the version-coded
  filename reference from the source.

Artefacts:
- Combined manuscript regenerated: paper_a_v4_combined.md, 1312 lines.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-15 18:28:14 +08:00
parent b6913d2f93
commit 9e68f2e1d3
10 changed files with 108 additions and 112 deletions
@@ -30,16 +30,16 @@ The contributions of this paper are:
 2. **End-to-end pipeline.** We present a pipeline that processes raw PDF audit reports through VLM-based page identification, YOLO-based signature detection, ResNet-50 feature extraction, and dual-descriptor similarity computation, with automated inference and no manual intervention after initial training.
-3. **Dual-descriptor verification.** We demonstrate that combining deep-feature cosine similarity with independent-minimum dHash resolves the ambiguity between *style consistency* and *image reproduction*, and we validate the backbone choice through a feature-backbone ablation.
+3. **Dual-descriptor similarity.** We demonstrate that combining deep-feature cosine similarity with independent-minimum dHash resolves the ambiguity between *style consistency* and *image reproduction*, and we support the backbone choice through a feature-backbone ablation.
 4. **Composition decomposition disproves the distributional-threshold path.** We show via a 2×2 factorial diagnostic (firm-mean centring × integer-tie jitter) that the apparent multimodality of the Big-4 accountant-level descriptor distribution is fully attributable to between-firm location shifts and integer mass-point artefacts. The descriptor distributions contain no within-population bimodal antimode; a distributional "natural threshold" reading of the operating points is not empirically supported.
 5. **Anchor-based multi-level inter-CPA coincidence-rate calibration.** We characterise the deployed five-way classifier at three units of analysis: per-comparison ICCR (cos$>0.95$: $0.0006$; dHash$\leq 5$: $0.0013$; joint: $0.00014$), pool-normalised per-signature ICCR ($0.11$ for the deployed any-pair high-confidence rule), and per-document ICCR ($0.34$ for the operational HC$+$MC alarm). We adopt "inter-CPA coincidence rate" as the metric name throughout and reserve "False Acceptance Rate" for terminology that requires ground-truth negative labels, which the corpus does not provide.
-6. **Firm heterogeneity quantification and within-firm cross-CPA collision concentration.** Per-firm rates differ by an order of magnitude after pool-size adjustment (Firm A's per-document HC$+$MC alarm at $0.62$ versus Firms B/C/D at $0.09$–$0.16$). Cross-firm hit matrix analysis shows within-firm collision concentrations of $98.8\%$ at Firm A and $76.7$–$83.7\%$ at Firms B/C/D under the deployed any-pair rule (the stricter same-pair joint event saturates at $97.0$–$99.96\%$ within-firm across all four firms); the pattern is consistent with firm-specific template, stamp, or document-production reuse mechanisms — a descriptive finding about deployed-rule behaviour, not a claim of deliberate template sharing.
+6. **Firm heterogeneity quantification and within-firm cross-CPA collision concentration.** Per-document D2 inter-CPA proxy ICCRs differ by an order of magnitude across firms (Firm A: $0.62$ versus Firms B/C/D: $0.09$–$0.16$); a per-signature logistic regression of the any-pair HC hit indicator on firm dummies and centred log pool size confirms the firm gap persists after pool-size control. Cross-firm hit matrix analysis shows within-firm collision concentrations of $98.8\%$ at Firm A and $76.7$–$83.7\%$ at Firms B/C/D under the deployed any-pair rule (the stricter same-pair joint event saturates at $97.0$–$99.96\%$ within-firm across all four firms); the pattern is consistent with firm-specific template, stamp, or document-production reuse mechanisms — a descriptive finding about deployed-rule behaviour, not a claim of deliberate template sharing.
 7. **K=3 as descriptive firm-compositional partition; three-score convergent internal consistency.** We fit a K=3 Gaussian mixture as a descriptive partition of the Big-4 accountant-level distribution (interpreted as firm-compositional structure, not as three mechanism clusters). Three feature-derived scores agree on the per-CPA descriptor-position ranking at Spearman $\rho \geq 0.879$; we report this as internal consistency rather than external validation, given that the scores share the underlying descriptor pair.
 8. **Annotation-free positive-anchor capture check and unsupervised-setting disclosure.** We achieve $0\%$ positive-anchor miss rate (Wilson 95% upper bound $1.45\%$) on 262 byte-identical Big-4 signatures, with the conservative-subset caveat that byte-identical pairs are by construction near cos$=1$ and dHash$=0$. Each supporting diagnostic in §III-M addresses one specific failure mode of an unsupervised screening classifier — composition artefacts, inter-CPA coincidence, pool-size confounding, firm heterogeneity, threshold sensitivity, or positive-anchor capture — with an explicitly disclosed untested assumption. We do not claim a validated forensic detector; we position the system as a specificity-proxy-anchored screening framework with human-in-the-loop review.
-The remainder of the paper is organized as follows. Section II reviews related work on signature verification, document forensics, perceptual hashing, and the statistical methods used. Section III describes the proposed methodology. Section IV presents the experimental results — distributional characterisation, mixture fits, convergent internal-consistency checks, leave-one-firm-out reproducibility, pixel-identity validation, and full-dataset robustness. Section V discusses the implications and limitations. Section VI concludes with directions for future work.
+The remainder of the paper is organized as follows. Section II reviews related work on signature verification, document forensics, perceptual hashing, and the statistical methods used. Section III describes the proposed methodology. Section IV presents the experimental results — distributional characterisation, mixture fits, convergent internal-consistency checks, leave-one-firm-out reproducibility, pixel-identity positive-anchor check, and full-dataset robustness. Section V discusses the implications and limitations. Section VI concludes with directions for future work.
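For reference, the Wilson 95% upper bound quoted in contribution 8 (0 positive-anchor misses among 262 byte-identical signatures, upper bound 1.45%) reproduces from the standard score-interval formula (a sketch; z = 1.96 is the assumed normal quantile for a 95% interval):

```python
import math

def wilson_upper(k, n, z=1.96):
    """Upper limit of the Wilson score interval for k successes in n trials."""
    phat = k / n
    centre = phat + z * z / (2 * n)
    margin = z * math.sqrt(phat * (1 - phat) / n + z * z / (4 * n * n))
    return (centre + margin) / (1 + z * z / n)

# 0 positive-anchor misses among 262 byte-identical Big-4 signatures.
print(round(100 * wilson_upper(0, 262), 2))  # → 1.45
```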