Paper A v3.16: remove unsupported visual-inspection / sanity-sample claims
User review of the v3.15 Sanity Sample subsection revealed that the paper's claim of "inter-rater agreement with the classifier in all 30 cases" (Results IV-G.4) was not backed by any data artifact in the repository. Script 19 exports a 30-signature stratified sample to reports/pixel_validation/sanity_sample.csv, but that CSV contains only classifier output fields (stratum, sig_id, cosine, dhash_indep, pixel_identical, closest_match) and no human-annotation column, and no subsequent script computes any human-classifier agreement metric. User confirmed that the only human annotation in the project was the YOLO training-set bounding-box labeling; signature classification (stamped vs hand-signed) was done entirely by automated numerical methods. The 30/30 sanity-sample claim was therefore factually unsupported and has been removed.

Investigation additionally revealed that the "independent visual inspection of randomly sampled Firm A reports reveals pixel-identical signature images...for many of the sampled partners" framing used as the first strand of Firm A's replication-dominated evidence (Section III-H first strand, Section V-C first strand, and the Conclusion fourth contribution) had the same provenance problem: no human visual inspection was performed. The underlying FACT (that Firm A contains many byte-identical same-CPA signature pairs) is correct and fully supported by automated byte-level pair analysis (Script 19), but the "visual inspection" phrasing misrepresents the provenance.

Changes:

1. Results IV-G.4 "Sanity Sample" subsection deleted entirely (results_v3.md L271-273).
2. Methodology III-K penultimate paragraph describing the 30-signature manual visual sanity inspection deleted (methodology_v3.md L259).
3. Methodology Section III-H first strand (L152) rewritten from "independent visual inspection of randomly sampled Firm A reports reveals pixel-identical signature images...for many of the sampled partners" to "automated byte-level pair analysis (Section IV-G.1) identifies 145 Firm A signatures that are byte-identical to at least one other same-CPA signature from a different audit report, distributed across 50 distinct Firm A partners (of 180 registered); 35 of these byte-identical matches span different fiscal years." All four numbers verified directly from the signature_analysis.db database via the pixel_identical_to_closest = 1 filter joined to accountants.firm.
4. Discussion V-C first strand (L41) rewritten analogously to refer to the byte-level pair evidence with the same four verified numbers.
5. Conclusion fourth contribution (L21) rewritten to "byte-level pair analysis finding of 145 pixel-identical calibration-firm signatures across 50 distinct partners (Section IV-G.1)."
6. Abstract (L5): "visual inspection and accountant-level mixture evidence..." rewritten as "byte-level pixel-identity evidence (145 signatures across 50 partners) and accountant-level mixture evidence..." Abstract now at 250/250 words.
7. Introduction (L55): "visual-inspection evidence" relabeled "byte-level pixel-identity evidence" for internal consistency.
8. Methodology III-H penultimate paragraph (L164): "validation role is played by the visual inspection" relabeled "validation role is played by the byte-level pixel-identity evidence" for consistency.

All substantive claims are preserved and now back-traceable to Script 19 output and the signature_analysis.db pixel_identical_to_closest flag. This correction brings the paper's descriptive language into strict alignment with its actual methodology, which is fully automated (except for YOLO training annotation, disclosed in Methodology Section III-B).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
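The database verification in change 3 (filtering on pixel_identical_to_closest = 1, joined to accountants.firm) would have roughly the following shape. This is a minimal sketch against a toy in-memory database: only the column names quoted in the commit message (pixel_identical_to_closest, firm) come from the source; the table names, keys, and sample rows are illustrative assumptions about the signature_analysis.db schema.

```python
import sqlite3

# Toy in-memory stand-in for signature_analysis.db.
# Table/key names and rows are assumptions; only the columns
# pixel_identical_to_closest and accountants.firm are from the commit.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE accountants (accountant_id INTEGER PRIMARY KEY, firm TEXT);
CREATE TABLE signatures (
    sig_id INTEGER PRIMARY KEY,
    accountant_id INTEGER REFERENCES accountants(accountant_id),
    pixel_identical_to_closest INTEGER
);
INSERT INTO accountants VALUES (1, 'Firm A'), (2, 'Firm A'), (3, 'Firm B');
INSERT INTO signatures VALUES
    (10, 1, 1), (11, 1, 1), (12, 2, 1), (13, 2, 0), (14, 3, 1);
""")

# The two headline counts: byte-identical Firm A signatures, and the
# distinct partners they belong to (145 / 50 in the real database).
n_sigs, n_partners = conn.execute("""
    SELECT COUNT(*), COUNT(DISTINCT s.accountant_id)
    FROM signatures s
    JOIN accountants a ON a.accountant_id = s.accountant_id
    WHERE a.firm = 'Firm A' AND s.pixel_identical_to_closest = 1
""").fetchone()
print(n_sigs, n_partners)  # toy data: 3 signatures across 2 partners
```

The same query pattern extends to the other two verified numbers (registered-partner denominator, cross-fiscal-year matches) given the relevant columns.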
@@ -52,7 +52,7 @@ A second distinctive feature is our framing of the calibration reference.
One major Big-4 accounting firm in Taiwan (hereafter "Firm A") is widely recognized within the audit profession as making substantial use of non-hand-signing for the majority of its certifying partners, while not ruling out that a minority may continue to hand-sign some reports.
We therefore treat Firm A as a *replication-dominated* calibration reference rather than a pure positive class.
This framing is important because the statistical signature of a replication-dominated population is visible in our data: Firm A's per-signature cosine distribution is unimodal with a long left tail, 92.5% of Firm A signatures exceed cosine 0.95 but 7.5% fall below, and 32 of the 171 Firm A CPAs with enough signatures to enter our accountant-level analysis (of 180 Firm A CPAs in the registry; 178 after excluding two with disambiguation ties, see Section IV-G.2) cluster into an accountant-level "middle band" rather than the high-replication mode.
-Adopting the replication-dominated framing---rather than a near-universal framing that would have to absorb these residuals as noise---ensures internal coherence among the visual-inspection evidence, the signature-level statistics, and the accountant-level mixture.
+Adopting the replication-dominated framing---rather than a near-universal framing that would have to absorb these residuals as noise---ensures internal coherence among the byte-level pixel-identity evidence, the signature-level statistics, and the accountant-level mixture.
A third distinctive feature is our unit-of-analysis treatment.
Our threshold-framework analysis reveals an informative asymmetry between the signature level and the accountant level: per-signature similarity forms a continuous quality spectrum for which no two-mechanism mixture provides a good fit, whereas per-accountant aggregates are clustered into three recognizable groups (BIC-best $K = 3$).
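The accountant-level clustering above (per-accountant aggregates grouped with BIC-best $K = 3$) can be illustrated with a minimal 1-D Gaussian-mixture fit plus BIC model selection. Everything here is invented for illustration: the synthetic data, cluster locations, and the hand-rolled EM loop are a sketch of the technique, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-accountant similarity aggregates with three groups,
# standing in for the accountant-level data (sizes/locations invented).
data = np.concatenate([
    rng.normal(0.60, 0.02, 40),   # low-similarity group
    rng.normal(0.80, 0.02, 40),   # "middle band"
    rng.normal(0.97, 0.01, 120),  # high-replication mode
])

def fit_gmm_1d(x, k, n_iter=200):
    """Fit a 1-D Gaussian mixture by EM; return the final log-likelihood."""
    n = len(x)
    mu = np.linspace(x.min(), x.max(), k)      # spread initial means
    var = np.full(k, x.var() / k + 1e-6)
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: per-point component responsibilities.
        dens = (w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                / np.sqrt(2 * np.pi * var))
        total = np.maximum(dens.sum(axis=1, keepdims=True), 1e-300)
        r = dens / total
        # M-step: update weights, means, variances from responsibilities.
        nk = np.maximum(r.sum(axis=0), 1e-12)
        w = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return np.log(total).sum()

def bic(loglik, k, n):
    # A 1-D GMM with k components has 3k - 1 free parameters.
    return (3 * k - 1) * np.log(n) - 2 * loglik

scores = {k: bic(fit_gmm_1d(data, k), k, len(data)) for k in (1, 2, 3, 4)}
best_k = min(scores, key=scores.get)
print(best_k)  # K = 3 is expected to minimize BIC on this toy data
```

The BIC penalty grows with the parameter count, so the extra fourth component must buy a substantial likelihood gain to be selected; with three well-separated groups it does not, which is the sense in which $K = 3$ is "BIC-best."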