Paper A v3.16: remove unsupported visual-inspection / sanity-sample claims

User review of the v3.15 Sanity Sample subsection revealed that the
paper's claim of "inter-rater agreement with the classifier in all 30
cases" (Results IV-G.4) was not backed by any data artifact in the
repository. Script 19 exports a 30-signature stratified sample to
reports/pixel_validation/sanity_sample.csv, but that CSV contains
only classifier output fields (stratum, sig_id, cosine, dhash_indep,
pixel_identical, closest_match) and no human-annotation column, and
no subsequent script computes any human--classifier agreement metric.
User confirmed that the only human annotation in the project was
the YOLO training-set bounding-box labeling; signature classification
(stamped vs hand-signed) was done entirely by automated numerical
methods. The 30/30 sanity-sample claim was therefore factually
unsupported and has been removed.
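
The absence of a human-annotation column is mechanically checkable from
the CSV header alone. A minimal sketch of such a check follows
(illustrative only; it is not one of the repository's numbered scripts):

    # Sketch: confirm sanity_sample.csv carries only classifier output fields.
    # Illustrative only; not part of the repository's scripts.
    import csv

    CLASSIFIER_FIELDS = {"stratum", "sig_id", "cosine", "dhash_indep",
                         "pixel_identical", "closest_match"}

    with open("reports/pixel_validation/sanity_sample.csv", newline="") as f:
        header = set(next(csv.reader(f)))

    # Any human-annotation column would surface as an unexpected field here.
    print("unexpected columns:", header - CLASSIFIER_FIELDS or "none")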

Investigation additionally revealed that the "independent visual
inspection of randomly sampled Firm A reports reveals pixel-identical
signature images...for many of the sampled partners" framing used as
the first strand of Firm A's replication-dominated evidence (Section
III-H first strand, Section V-C first strand, and the Conclusion
fourth contribution) had the same provenance problem: no human
visual inspection was performed. The underlying FACT (that Firm A
contains many byte-identical same-CPA signature pairs) is correct
and fully supported by automated byte-level pair analysis (Script 19),
but the "visual inspection" phrasing misrepresents the provenance.
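
Byte-identity is straightforward to establish mechanically, which is what
makes the automated evidence sound. A minimal sketch of that kind of check,
assuming one CPA's signature crops sit in a directory of PNG files (the
layout and function name are hypothetical, not Script 19's actual code):

    # Sketch of byte-identical same-CPA pair detection. Illustrative only:
    # the real analysis is Script 19's; the directory layout is assumed.
    import hashlib
    from collections import defaultdict
    from pathlib import Path

    def byte_identical_groups(cpa_crop_dir):
        """Group one CPA's signature crops by the SHA-256 of their raw bytes."""
        groups = defaultdict(list)
        for path in sorted(Path(cpa_crop_dir).glob("*.png")):
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            groups[digest].append(path.name)
        # Groups with more than one member are byte-identical signature sets,
        # which independent hand-signing events cannot produce.
        return {h: names for h, names in groups.items() if len(names) > 1}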

Changes:

1. Results IV-G.4 "Sanity Sample" subsection deleted entirely
   (results_v3.md L271-273).

2. Methodology III-K penultimate paragraph describing the 30-signature
   manual visual sanity inspection deleted (methodology_v3.md L259).

3. Methodology Section III-H first strand (L152) rewritten from
   "independent visual inspection of randomly sampled Firm A reports
   reveals pixel-identical signature images...for many of the sampled
   partners" to "automated byte-level pair analysis (Section IV-G.1)
   identifies 145 Firm A signatures that are byte-identical to at
   least one other same-CPA signature from a different audit report,
   distributed across 50 distinct Firm A partners (of 180 registered);
   35 of these byte-identical matches span different fiscal years."
   All four numbers were verified directly from the signature_analysis.db
   database via a pixel_identical_to_closest = 1 filter joined to
   accountants.firm; an illustrative query sketch follows this change list.

4. Discussion V-C first strand (L41) rewritten analogously to refer
   to byte-level pair evidence with the same four verified numbers.

5. Conclusion fourth contribution (L21) rewritten to "byte-level
   pair analysis finding of 145 pixel-identical calibration-firm
   signatures across 50 distinct partners (Section IV-G.1)."

6. Abstract (L5): "visual inspection and accountant-level mixture
   evidence..." rewritten as "byte-level pixel-identity evidence
   (145 signatures across 50 partners) and accountant-level mixture
   evidence..." Abstract now at 250/250 words.

7. Introduction (L55): "visual-inspection evidence" relabeled
   "byte-level pixel-identity evidence" for internal consistency.

8. Methodology III-H penultimate (L164): "validation role is played
   by the visual inspection" relabeled "validation role is played
   by the byte-level pixel-identity evidence" for consistency.
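
The verification query behind items 3-5 can be sketched as below.
pixel_identical_to_closest and accountants.firm are the fields named above;
the signatures table name, its cpa_id column, and the firm label are
assumptions made for illustration only:

    # Sketch of the number-verification query (items 3-5). Illustrative only:
    # pixel_identical_to_closest and accountants.firm are real per the text
    # above; the "signatures" table name, its cpa_id column, and the firm
    # label are assumptions for this sketch.
    import sqlite3

    con = sqlite3.connect("signature_analysis.db")
    n_sigs, n_partners = con.execute("""
        SELECT COUNT(*), COUNT(DISTINCT s.cpa_id)
        FROM signatures AS s
        JOIN accountants AS a ON a.id = s.cpa_id
        WHERE a.firm = 'Firm A' AND s.pixel_identical_to_closest = 1
    """).fetchone()
    # The 35 cross-fiscal-year matches additionally compare fiscal_year
    # against the closest_match row (not shown here).
    print(n_sigs, n_partners)  # expected 145 and 50 per the changes above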

All substantive claims are preserved and now back-traceable to
Script 19 output and the signature_analysis.db pixel_identical_to_closest
flag. This correction brings the paper's descriptive language into
strict alignment with its actual methodology, which is fully
automated (except for YOLO training annotation, disclosed in
Methodology Section III-B).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

commit 0471e36fd4
parent 1dfbc5f000
Date: 2026-04-25 01:14:13 +08:00
6 changed files with 7 additions and 11 deletions
@@ -38,7 +38,7 @@ A recurring theme in prior work that treats Firm A or an analogous reference gro
 Our evidence across multiple analyses rules out that assumption for Firm A while affirming its utility as a calibration reference.
 Three convergent strands of evidence support the replication-dominated framing.
-First, the visual-inspection evidence: randomly sampled Firm A reports exhibit pixel-identical signature images across different audit engagements and fiscal years for many of the sampled partners---a physical impossibility under independent hand-signing events.
+First, the byte-level pair evidence: 145 Firm A signatures (from 50 distinct partners of 180 registered) have a byte-identical same-CPA match in a different audit report, with 35 of these matches spanning different fiscal years. Independent hand-signing cannot produce byte-identical images across distinct reports, so these pairs directly establish image reuse within Firm A.
 Second, the signature-level statistical evidence: Firm A's per-signature cosine distribution is unimodal long-tail rather than a tight single peak; 92.5% of Firm A signatures exceed cosine 0.95, with the remaining 7.5% forming the left tail.
 Third, the accountant-level evidence: of the 171 Firm A CPAs with enough signatures ($\geq 10$) to enter the accountant-level GMM, 32 (19%) fall into the middle-band C2 cluster rather than the high-replication C1 cluster---consistent with within-firm heterogeneity in signing output (potentially spanning hand-signing partners, multi-template replication workflows, CPAs undergoing mid-sample mechanism transitions, and CPAs whose pooled coordinates reflect mixed-quality replication; we do not disaggregate these mechanisms---see Section III-G for the scope of claims) rather than a pure replication population.
 Of the 178 valid Firm A CPAs (the 180 registered CPAs minus two excluded for disambiguation ties in the registry; Section IV-G.2), seven are outside the GMM for having fewer than 10 signatures, so we cannot place them in a cluster from the cross-sectional analysis alone.