pdf_signature_extraction/paper/paper_a_abstract_v3.md
gbanyan 0471e36fd4 Paper A v3.16: remove unsupported visual-inspection / sanity-sample claims
User review of the v3.15 Sanity Sample subsection revealed that the
paper's claim of "inter-rater agreement with the classifier in all 30
cases" (Results IV-G.4) was not backed by any data artifact in the
repository. Script 19 exports a 30-signature stratified sample to
reports/pixel_validation/sanity_sample.csv, but that CSV contains
only classifier output fields (stratum, sig_id, cosine, dhash_indep,
pixel_identical, closest_match) and no human-annotation column, and
no subsequent script computes any human--classifier agreement metric.
User confirmed that the only human annotation in the project was
the YOLO training-set bounding-box labeling; signature classification
(stamped vs hand-signed) was done entirely by automated numerical
methods. The 30/30 sanity-sample claim was therefore factually
unsupported and has been removed.
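
The missing-annotation finding can be reproduced mechanically. A minimal sketch, assuming the six column names quoted above (the real export lives at reports/pixel_validation/sanity_sample.csv; a toy in-memory copy is used here):

```python
import csv
import io

# Toy stand-in for reports/pixel_validation/sanity_sample.csv, with the
# six classifier-output columns quoted above (header order assumed).
sample_csv = io.StringIO(
    "stratum,sig_id,cosine,dhash_indep,pixel_identical,closest_match\n"
    "high,101,0.993,2,1,100\n"
    "low,202,0.871,14,0,199\n"
)

fieldnames = csv.DictReader(sample_csv).fieldnames

# Look for any column that could plausibly hold a human judgment.
human_cols = [c for c in fieldnames
              if any(k in c.lower() for k in ("human", "rater", "manual", "annot"))]

print(human_cols)  # [] -> no human-annotation column in the export
```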

Investigation additionally revealed that the "independent visual
inspection of randomly sampled Firm A reports reveals pixel-identical
signature images...for many of the sampled partners" framing used as
the first strand of Firm A's replication-dominated evidence (Section
III-H first strand, Section V-C first strand, and the Conclusion
fourth contribution) had the same provenance problem: no human
visual inspection was performed. The underlying FACT (that Firm A
contains many byte-identical same-CPA signature pairs) is correct
and fully supported by automated byte-level pair analysis (Script 19),
but the "visual inspection" phrasing misrepresents the provenance.

Changes:

1. Results IV-G.4 "Sanity Sample" subsection deleted entirely
   (results_v3.md L271-273).

2. Methodology III-K penultimate paragraph describing the 30-signature
   manual visual sanity inspection deleted (methodology_v3.md L259).

3. Methodology Section III-H first strand (L152) rewritten from
   "independent visual inspection of randomly sampled Firm A reports
   reveals pixel-identical signature images...for many of the sampled
   partners" to "automated byte-level pair analysis (Section IV-G.1)
   identifies 145 Firm A signatures that are byte-identical to at
   least one other same-CPA signature from a different audit report,
   distributed across 50 distinct Firm A partners (of 180 registered); 35 of these byte-identical matches span different fiscal years."
   All four numbers verified directly from the signature_analysis.db
   database via pixel_identical_to_closest = 1 filter joined to
   accountants.firm.

4. Discussion V-C first strand (L41) rewritten analogously to refer
   to byte-level pair evidence with the same four verified numbers.

5. Conclusion fourth contribution (L21) rewritten to "byte-level
   pair analysis finding of 145 pixel-identical calibration-firm
   signatures across 50 distinct partners (Section IV-G.1)."

6. Abstract (L5): "visual inspection and accountant-level mixture
   evidence..." rewritten as "byte-level pixel-identity evidence
   (145 signatures across 50 partners) and accountant-level mixture
   evidence..." Abstract now at 250/250 words.

7. Introduction (L55): "visual-inspection evidence" relabeled
   "byte-level pixel-identity evidence" for internal consistency.

8. Methodology III-H penultimate (L164): "validation role is played
   by the visual inspection" relabeled "validation role is played
   by the byte-level pixel-identity evidence" for consistency.
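
The verified counts in items 3-6 come from a query of the shape described in item 3. A minimal sketch against a toy in-memory database; only the pixel_identical_to_closest flag and the accountants.firm column are named above, so the rest of the signature_analysis.db schema here is an assumption:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE accountants (cpa_id INTEGER PRIMARY KEY, firm TEXT);
CREATE TABLE signatures (
    sig_id  INTEGER PRIMARY KEY,
    cpa_id  INTEGER REFERENCES accountants(cpa_id),
    fiscal_year INTEGER,
    pixel_identical_to_closest INTEGER  -- 1 = byte-identical to its closest same-CPA match
);
-- Toy data: two Firm A partners with flagged signatures, one Firm B partner,
-- and one unflagged Firm A signature that the WHERE clause must exclude.
INSERT INTO accountants VALUES (1, 'Firm A'), (2, 'Firm A'), (3, 'Firm B');
INSERT INTO signatures VALUES
    (10, 1, 2019, 1),
    (11, 1, 2020, 1),
    (12, 2, 2021, 1),
    (13, 3, 2021, 1),
    (14, 1, 2022, 0);
""")

n_sigs, n_partners = conn.execute("""
    SELECT COUNT(*), COUNT(DISTINCT s.cpa_id)
    FROM signatures AS s
    JOIN accountants AS a ON s.cpa_id = a.cpa_id
    WHERE s.pixel_identical_to_closest = 1
      AND a.firm = 'Firm A'
""").fetchone()

print(n_sigs, n_partners)  # on the toy data: 3 flagged signatures across 2 partners
```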

All substantive claims are preserved and now back-traceable to
Script 19 output and the signature_analysis.db pixel_identical_to_closest
flag. This correction brings the paper's descriptive language into
strict alignment with its actual methodology, which is fully
automated (except for YOLO training annotation, disclosed in
Methodology Section III-B).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-25 01:14:13 +08:00


Abstract

Regulations require Certified Public Accountants (CPAs) to attest to each audit report by affixing a signature. Digitization makes reusing a stored signature image across reports trivial---through administrative stamping or firm-level electronic signing---potentially undermining individualized attestation. Unlike forgery, non-hand-signed reproduction reuses the legitimate signer's own stored image, making it undetectable by report users and infeasible to audit manually at scale. We present a pipeline integrating a Vision-Language Model for signature-page identification, YOLOv11 for signature detection, and ResNet-50 for feature extraction, followed by dual-descriptor verification combining cosine similarity and difference hashing. For threshold determination we apply two estimators---kernel-density antimode with a Hartigan unimodality test and an EM-fitted Beta mixture with a logit-Gaussian robustness check---plus a Burgstahler-Dichev/McCrary density-smoothness diagnostic, at the signature and accountant levels. Applied to 90,282 audit reports filed in Taiwan over 2013-2023 (182,328 signatures from 758 CPAs), the methods reveal a level asymmetry: signature-level similarity is a continuous quality spectrum that no two-component mixture separates, while accountant-level aggregates cluster into three smoothly mixed groups with the antimode and two mixture estimators converging within $\sim$0.006 at cosine $\approx$ 0.975. A major Big-4 firm is used as a replication-dominated (not pure) calibration anchor, with byte-level pixel-identity evidence (145 signatures across 50 partners) and accountant-level mixture evidence supporting majority non-hand-signing alongside residual within-firm heterogeneity; capture rates on both 70/30 calibration and held-out folds are reported with Wilson 95% intervals to make fold-level variance visible.
Validation against 310 byte-identical positives and a $\sim$50,000-pair inter-CPA negative anchor yields FAR $\leq$ 0.001 at all accountant-level thresholds.
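
The dual-descriptor step named in the abstract can be sketched in a few lines. This is an illustrative reduction, not the paper's implementation: it assumes a generic row-wise difference hash on a tiny grayscale grid and plain cosine similarity, standing in for the ResNet-50 embeddings and production dHash:

```python
import math

def cosine_sim(u, v):
    """Cosine similarity between two feature vectors (e.g. CNN embeddings)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def dhash_bits(gray):
    """Row-wise difference hash of a 2-D grayscale grid:
    1 where a pixel is brighter than its right-hand neighbour."""
    return [int(row[i] > row[i + 1]) for row in gray for i in range(len(row) - 1)]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# Two tiny "signature crops" with slightly different pixel values but the
# same brightness pattern: the difference hash is identical (distance 0).
a = [[10, 50, 20], [30, 30, 60]]
b = [[12, 55, 18], [35, 36, 61]]
print(hamming(dhash_bits(a), dhash_bits(b)))  # 0: same structure survives noise
```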