Paper A v3.18: remove accountant-level analysis + replication-dominated calibration reframe + Gemini 2.5 Pro review minor fixes
Major changes (per partner red-pen + user decision):
- Delete entire accountant-level analysis (III.J, IV.E, Tables VI/VII/VIII, Fig 4) -- cross-year pooling assumption unjustified, removes the implicit "habitually stamps = always stamps" reading.
- Renumber sections III.J/K/L (was K/L/M) and IV.E/F/G/H/I (was F/G/H/I/J).
- Title: "Three-Method Convergent Thresholding" -> "Replication-Dominated Calibration" (the three diagnostics do NOT converge at signature level).
- Operational cosine cut anchored on whole-sample Firm A P7.5 (cos > 0.95).
- Three statistical diagnostics (Hartigan/Beta/BD-McCrary) reframed as descriptive characterisation, not threshold estimators.
- Firm A replication-dominated framing: 3 evidence strands -> 2.
- Discussion limitation list: drop accountant-level cross-year pooling and BD/McCrary diagnostic; add auditor-year longitudinal tracking as future work.
- Tone shift: "we do not claim / do not derive" -> "we find / motivates".

Reference verification (independent web-search audit of all 41 refs):
- Fix [5] author hallucination: Hadjadj et al. -> Kao & Wen (real authors of Appl. Sci. 10:11:3716; report at paper/reference_verification_v3.md).
- Polish [16] [21] [22] [25] (year/volume/page-range/model-name).

Gemini 2.5 Pro peer review (Minor Revision verdict, A-F all positive):
- Neutralize script-path references in tables/appendix -> "supplementary materials".
- Move conflict-of-interest declaration from III-L to new Declarations section before References (paper_a_declarations_v3.md).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@@ -26,13 +26,13 @@ This detection problem differs fundamentally from forgery detection: while it do
 A secondary methodological concern shapes the research design.
 Many prior similarity-based classification studies rely on ad-hoc thresholds---declaring two images equivalent above a hand-picked cosine cutoff, for example---without principled statistical justification.
 Such thresholds are fragile and invite reviewer skepticism, particularly in an archival-data setting where the cost of misclassification propagates into downstream inference.
-A defensible approach requires (i) a statistically principled threshold-determination procedure, ideally anchored to an empirical reference population drawn from the target corpus; (ii) convergent validation across multiple threshold-determination methods that rest on different distributional assumptions; and (iii) external validation against naturally-occurring anchor populations---byte-level identical pairs as a conservative gold positive subset and large random inter-CPA pairs as a gold negative population---reported with Wilson 95% confidence intervals on per-rule capture / FAR rates, since precision and $F_1$ are not meaningful when the positive and negative anchor populations are sampled from different units.
+A defensible approach requires (i) a transparent threshold anchored to an empirical reference population drawn from the target corpus; (ii) statistical diagnostics that characterise the *shape* of the underlying similarity distribution and so motivate the choice of anchor; and (iii) external validation against naturally-occurring anchor populations---byte-level identical pairs as a conservative gold positive subset and large random inter-CPA pairs as a gold negative population---reported with Wilson 95% confidence intervals on per-rule capture / FAR rates, since precision and $F_1$ are not meaningful when the positive and negative anchor populations are sampled from different units.
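Both the old and new wording keep the Wilson 95% interval for per-rule capture / FAR rates. A minimal sketch of that interval, assuming a hypothetical rule that captures `k` of `n` gold-positive anchor pairs:

```python
import math

def wilson_interval(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion k/n (z = 1.96 -> 95%)."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical example: a rule captures 1830 of 1900 gold-positive pairs.
lo, hi = wilson_interval(1830, 1900)
print(f"capture rate 95% CI: [{lo:.4f}, {hi:.4f}]")
```

The Wilson form matters here because capture rates sit near 1, where the naive Wald interval degenerates.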
 
 Despite the significance of the problem for audit quality and regulatory oversight, no prior work has specifically addressed non-hand-signing detection in financial audit documents at scale with these methodological safeguards.
 Woodruff et al. [9] developed an automated pipeline for signature analysis in corporate filings for anti-money-laundering investigations, but their work focused on author clustering (grouping signatures by signer identity) rather than detecting reuse of a stored image.
 Copy-move forgery detection methods [10], [11] address duplicated regions within or across images but are designed for natural images and do not account for the specific characteristics of scanned document signatures, where legitimate visual similarity between a signer's authentic signatures is expected and must be distinguished from image reproduction.
 Research on near-duplicate image detection using perceptual hashing combined with deep learning [12], [13] provides relevant methodological foundations but has not been applied to document forensics or signature analysis.
-From the statistical side, the methods we adopt for threshold determination---the Hartigan dip test [37] and finite mixture modelling via the EM algorithm [40], [41], complemented by a Burgstahler-Dichev / McCrary density-smoothness diagnostic [38], [39]---have been developed in statistics and accounting-econometrics but have not, to our knowledge, been combined as a convergent threshold framework for document-forensics threshold selection.
+From the statistical side, the methods we adopt for distributional characterisation---the Hartigan dip test [37] and finite mixture modelling via the EM algorithm [40], [41], complemented by a Burgstahler-Dichev / McCrary density-smoothness diagnostic [38], [39]---have been developed in statistics and accounting-econometrics but have not, to our knowledge, been combined as a joint diagnostic toolkit for document-forensics threshold selection.
 
 In this paper, we present a fully automated, end-to-end pipeline for detecting non-hand-signed CPA signatures in audit reports at scale.
 Our approach processes raw PDF documents through the following stages:
@@ -40,7 +40,7 @@ Our approach processes raw PDF documents through the following stages:
 (2) signature region detection using a trained YOLOv11 object detector;
 (3) deep feature extraction via a pre-trained ResNet-50 convolutional neural network;
 (4) dual-descriptor similarity computation combining cosine similarity on deep embeddings with difference hash (dHash) distance;
-(5) threshold determination using two methodologically distinct estimators---KDE antimode with a Hartigan unimodality test and finite Beta mixture via EM with a logit-Gaussian robustness check---complemented by a Burgstahler-Dichev / McCrary density-smoothness diagnostic, all applied at both the signature level and the accountant level; and
+(5) signature-level distributional characterisation using two threshold estimators---KDE antimode with a Hartigan unimodality test and finite Beta mixture via EM with a logit-Gaussian robustness check---complemented by a Burgstahler-Dichev / McCrary density-smoothness diagnostic, used to read the structure of the per-signature similarity distribution and to motivate a percentile-based operational anchor rather than a mixture-fit crossing (sketched after this list); and
 (6) validation against a pixel-identical anchor, a low-similarity anchor, and a replication-dominated Big-4 calibration firm.
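A minimal sketch of the stage (5) diagnostics, assuming per-signature cosine similarities in a NumPy array and the third-party `diptest` package for Hartigan's test; the file name and search window are illustrative:

```python
import numpy as np
from scipy.stats import gaussian_kde
import diptest  # third-party package implementing Hartigan's dip test

cos = np.load("firm_a_cosine.npy")  # hypothetical per-signature cosines

# Hartigan dip test: a small p-value rejects unimodality.
dip, pval = diptest.diptest(cos)

# KDE antimode: the density minimum between candidate modes; it is a
# meaningful threshold only if the distribution is actually bimodal.
grid = np.linspace(cos.min(), cos.max(), 2000)
dens = gaussian_kde(cos)(grid)
window = (grid > 0.5) & (grid < 0.99)  # illustrative interior search window
antimode = grid[window][np.argmin(dens[window])]
print(f"dip p = {pval:.3f}, KDE antimode at cos = {antimode:.3f}")
```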
 
 The dual-descriptor verification is central to our contribution.
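A minimal sketch of the two descriptors in stage (4), assuming torchvision for the ResNet-50 embedding and Pillow for dHash; the paths and crop handling are illustrative:

```python
import numpy as np
from PIL import Image
import torch
from torchvision import models, transforms

def dhash(path: str, size: int = 8) -> int:
    """Difference hash: one bit per horizontally adjacent pixel pair (64 bits)."""
    img = Image.open(path).convert("L").resize((size + 1, size))
    px = np.asarray(img, dtype=np.int16)
    bits = (px[:, 1:] > px[:, :-1]).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(h1: int, h2: int) -> int:
    return bin(h1 ^ h2).count("1")

# ResNet-50 penultimate-layer embedding for cosine similarity.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()
prep = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(path: str) -> np.ndarray:
    x = prep(Image.open(path).convert("RGB")).unsqueeze(0)
    v = backbone(x).squeeze().numpy()
    return v / np.linalg.norm(v)

a, b = "sig_a.png", "sig_b.png"  # hypothetical signature crops
cos_sim = float(embed(a) @ embed(b))
dh_dist = hamming(dhash(a), dhash(b))
```

High cosine combined with low Hamming distance points to image reproduction; high cosine with a larger Hamming distance is consistent with a stable personal style rendered as distinct physical impressions.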
@@ -51,14 +51,13 @@ By requiring convergent evidence from both descriptors, we can differentiate *st
 A second distinctive feature is our framing of the calibration reference.
 One major Big-4 accounting firm in Taiwan (hereafter "Firm A") is widely recognized within the audit profession as making substantial use of non-hand-signing for the majority of its certifying partners, while not ruling out that a minority may continue to hand-sign some reports.
 We therefore treat Firm A as a *replication-dominated* calibration reference rather than a pure positive class.
-This framing is important because the statistical signature of a replication-dominated population is visible in our data: Firm A's per-signature cosine distribution is unimodal with a long left tail, 92.5% of Firm A signatures exceed cosine 0.95 but 7.5% fall below, and 32 of the 171 Firm A CPAs with enough signatures to enter our accountant-level analysis (of 180 Firm A CPAs in the registry; 178 after excluding two with disambiguation ties, see Section IV-G.2) cluster into an accountant-level "middle band" rather than the high-replication mode.
-Adopting the replication-dominated framing---rather than a near-universal framing that would have to absorb these residuals as noise---ensures internal coherence among the byte-level pixel-identity evidence, the signature-level statistics, and the accountant-level mixture.
+This framing is important because the statistical signature of a replication-dominated population is visible in our data: Firm A's per-signature cosine distribution is unimodal with a long left tail (Hartigan dip $p = 0.17$), 92.5% of Firm A signatures exceed cosine 0.95 with the remaining 7.5% forming the left tail, and 145 Firm A signatures across 50 distinct partners are byte-identical to a same-CPA match in a different audit report (35 spanning different fiscal years).
+Adopting the replication-dominated framing---rather than a near-universal framing that would have to absorb the 7.5% residual as noise---ensures internal coherence between the byte-level pixel-identity evidence and the signature-level distributional shape.
 
-A third distinctive feature is our unit-of-analysis treatment.
-Our threshold-framework analysis reveals an informative asymmetry between the signature level and the accountant level: per-signature similarity forms a continuous quality spectrum for which no two-mechanism mixture provides a good fit, whereas per-accountant aggregates are clustered into three recognizable groups (BIC-best $K = 3$).
-The substantive reading is that *pixel-level output quality* is a continuous spectrum shaped by firm-specific reproduction technologies and scan conditions, while *accountant-level aggregate behaviour* is clustered but not sharply discrete: each CPA's cross-year-pooled coordinates sit closest to one of three recognizable groups (high-replication, middle-band, or hand-signed-tendency), reflecting a pooled observed tendency rather than a time-invariant regime, with smooth rather than discontinuous boundaries between groups.
-At the accountant level, the KDE antimode and the two mixture-based estimators (Beta-2 crossing and its logit-Gaussian robustness counterpart) converge within $\sim 0.006$ on a cosine threshold of approximately $0.975$, while the Burgstahler-Dichev / McCrary density-smoothness diagnostic finds no significant transition---an outcome (robust across a bin-width sweep, Appendix A) consistent with smoothly mixed clusters.
-The two-dimensional GMM marginal crossings (cosine $= 0.945$, dHash $= 8.10$) are reported as a complementary cross-check rather than as the primary accountant-level threshold.
+A third distinctive feature is the empirical reading we take from the per-signature distributional analysis.
+Three diagnostics applied to the per-signature similarity distribution---the Hartigan dip test, an EM-fitted Beta mixture (with logit-Gaussian robustness check), and the Burgstahler-Dichev / McCrary density-smoothness procedure---jointly indicate that no two-mechanism mixture cleanly explains per-signature similarity: the dip test fails to reject unimodality for Firm A, BIC strongly prefers a 3-component over a 2-component Beta fit, and the BD/McCrary candidate transition lies *inside* the non-hand-signed mode rather than between modes (and is not bin-width-stable; Appendix A).
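A minimal sketch of the model-order comparison behind the "BIC strongly prefers a 3-component fit" statement, shown in the logit-Gaussian robustness form with scikit-learn (the Beta-mixture EM fit itself is not reproduced here); the input name is illustrative:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

cos = np.load("per_signature_cosine.npy")  # hypothetical input
c = cos.clip(1e-6, 1 - 1e-6)
X = np.log(c / (1 - c)).reshape(-1, 1)     # logit transform to the real line

for k in (1, 2, 3, 4):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
    print(k, round(gm.bic(X), 1))          # lower BIC = preferred model order
```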
+The substantive reading is that *pixel-level output quality* is a continuous spectrum shaped by firm-specific reproduction technologies (administrative stamping in early years, firm-level e-signing later) and scan conditions, rather than a discrete class cleanly separated from hand-signing.
+This reading motivates anchoring the operational classifier on a percentile heuristic over the Firm A reference distribution rather than on a mixture-fit crossing, and it motivates the byte-level pixel-identity anchor (Section IV-F.1) as a threshold-free positive reference that does not depend on resolving signature-level mixture structure.
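A minimal sketch of the threshold-free byte-identity anchor, assuming extracted signature crops stored as files and grouped by content digest; the directory layout and grouping key are illustrative:

```python
import hashlib
from collections import defaultdict
from pathlib import Path

groups: dict[str, list[Path]] = defaultdict(list)
for crop in Path("signature_crops").glob("*.png"):  # hypothetical layout
    digest = hashlib.sha256(crop.read_bytes()).hexdigest()
    groups[digest].append(crop)

# Byte-identical pairs: the same digest appearing in more than one report.
anchors = {d: ps for d, ps in groups.items() if len(ps) > 1}
print(f"{len(anchors)} digests with a byte-identical match")
```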
 
 We apply this pipeline to 90,282 audit reports filed by publicly listed companies in Taiwan between 2013 and 2023, extracting and analyzing 182,328 individual CPA signatures from 758 unique accountants.
 To our knowledge, this represents the largest-scale forensic analysis of signature authenticity in financial documents reported in the literature.
@@ -71,17 +70,17 @@ The contributions of this paper are summarized as follows:
 
 3. **Dual-descriptor verification.** We demonstrate that combining deep-feature cosine similarity with perceptual hashing resolves the fundamental ambiguity between style consistency and image reproduction, and we validate the backbone choice through an ablation study comparing three feature-extraction architectures.
 
-4. **Convergent threshold framework with a smoothness diagnostic.** We introduce a threshold-selection framework that applies two methodologically distinct estimators---KDE antimode with Hartigan unimodality test and EM-fitted Beta mixture with a logit-Gaussian robustness check---at both the signature and accountant levels, and uses a Burgstahler-Dichev / McCrary density-smoothness diagnostic to characterize the local density structure. The convergence of the two estimators, combined with the presence or absence of a BD/McCrary transition, is used as evidence about the mixture structure of the data.
+4. **Percentile-anchored operational threshold.** We anchor the operational classifier's cosine cut on the whole-sample Firm A P7.5 percentile (cos $> 0.95$), a transparent and reproducible reference drawn from a known-majority-positive population, and complement it with dHash structural cuts derived from the same reference distribution. Operational thresholds are therefore explained by an empirical reference rather than asserted (a sketch of the P7.5 anchor follows this list).
 
-5. **Continuous-quality / clustered-accountant finding.** We empirically establish that per-signature similarity is a continuous quality spectrum poorly approximated by any two-mechanism mixture, whereas per-accountant aggregates cluster into three recognizable groups with smoothly mixed rather than sharply discrete boundaries---an asymmetry with direct implications for how threshold selection and mixture modelling should be applied in document forensics.
+5. **Distributional characterisation of per-signature similarity.** We apply three statistical diagnostics---a Hartigan dip test, an EM-fitted Beta mixture with logit-Gaussian robustness check, and a Burgstahler-Dichev / McCrary density-smoothness procedure---to characterise the shape of the per-signature similarity distribution. The three diagnostics jointly find that per-signature similarity forms a continuous quality spectrum, which both motivates the percentile-based operational anchor over a mixture-fit crossing and is itself a substantive finding for the document-forensics literature on similarity-threshold selection.
 
 6. **Replication-dominated calibration methodology.** We introduce a calibration strategy using a known-majority-positive reference group, distinguishing *replication-dominated* from *replication-pure* anchors; and we validate classification using byte-level pixel identity as an annotation-free gold positive, requiring no manual labeling.
 
 7. **Large-scale empirical analysis.** We report findings from the analysis of over 90,000 audit reports spanning a decade, providing the first large-scale empirical evidence on non-hand-signing practices in financial reporting under a methodology designed for peer-review defensibility.
 
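The P7.5 anchor named in contribution 4, as a minimal sketch assuming the whole-sample Firm A cosine array is available; the file name is illustrative:

```python
import numpy as np

firm_a_cos = np.load("firm_a_cosine.npy")       # hypothetical input
anchor = np.percentile(firm_a_cos, 7.5)         # whole-sample Firm A P7.5
print(f"operational cosine cut: cos > {anchor:.2f}")  # ~0.95 in the paper
```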
 The remainder of this paper is organized as follows.
-Section II reviews related work on signature verification, document forensics, perceptual hashing, and the statistical methods we adopt for threshold determination.
+Section II reviews related work on signature verification, document forensics, perceptual hashing, and the statistical methods we adopt for distributional characterisation.
 Section III describes the proposed methodology.
-Section IV presents experimental results including the convergent threshold analysis, accountant-level mixture, pixel-identity validation, and backbone ablation study.
+Section IV presents experimental results including the signature-level distributional characterisation, pixel-identity validation, and backbone ablation study.
 Section V discusses the implications and limitations of our findings.
 Section VI concludes with directions for future work.