# I. Introduction

<!-- Target: ~1.5 pages double-column IEEE format. Double-blind: no author/institution info. -->

Financial audit reports serve as a critical mechanism for ensuring corporate accountability and investor protection.
In Taiwan, the Certified Public Accountant Act (會計師法 §4) and the Financial Supervisory Commission's attestation regulations (查核簽證核准準則 §6) require that certifying CPAs affix their signature or seal (簽名或蓋章) to each audit report [1].
While the law permits either a handwritten signature or a seal, the CPA's attestation on each report is intended to represent a deliberate, individual act of professional endorsement for that specific audit engagement [2].

The digitization of financial reporting has introduced a practice that complicates this intent.
As audit reports are now routinely generated, transmitted, and archived as PDF documents, it is technically and operationally straightforward to reproduce a CPA's stored signature image across many reports rather than re-executing the signing act for each one.
This reproduction can occur either through an administrative stamping workflow---in which scanned signature images are affixed by staff as part of the report-assembly process---or through a firm-level electronic signing system that automates the same step.
From the perspective of the output image, the two workflows are equivalent: both can reproduce one or more stored signature images, producing same-CPA signatures that are identical or near-identical up to reproduction, scanning, compression, and template-variant noise.
We refer to signatures produced by either workflow collectively as *non-hand-signed*.
Although this practice may fall within the literal statutory requirement of "signature or seal," it raises substantive concerns about audit quality, as an identically reproduced signature applied across hundreds of reports may not represent meaningful individual attestation for each engagement.
The accounting literature has long examined the audit-quality consequences of partner-level engagement transparency: studies of partner-signature mandates in the United Kingdom find measurable downstream effects [31], cross-jurisdictional evidence on individual partner signature requirements highlights similar quality channels [32], and Taiwan-specific evidence on mandatory partner rotation documents how individual-partner identification interacts with audit-quality outcomes [33].
Unlike traditional signature forgery, where a third party attempts to imitate another person's handwriting, non-hand-signing involves the legitimate signer's own stored signature being reused.
This practice, while potentially widespread, is invisible to report users and virtually undetectable through manual inspection at scale: regulatory agencies overseeing thousands of publicly listed companies cannot feasibly examine each signature for evidence of image reproduction.

The distinction between *non-hand-signing detection* and *signature forgery detection* is both conceptually and technically important.
The extensive body of research on offline signature verification [3]--[8] has focused almost exclusively on forgery detection---determining whether a questioned signature was produced by its purported author or by an impostor.
This framing presupposes that the central threat is identity fraud.
In our context, identity is not in question; the CPA is indeed the legitimate signer.
The question is whether the physical act of signing occurred for each individual report, or whether a single signing event was reproduced as an image across many reports.
This detection problem differs fundamentally from forgery detection: while it does not require modeling skilled-forger variability, it introduces the distinct challenge of separating legitimate intra-signer consistency from image-level reproduction, requiring an analytical framework focused on detecting abnormally high similarity across documents.

A secondary methodological concern shapes the research design.
Many prior similarity-based classification studies rely on ad hoc thresholds---declaring two images equivalent above a hand-picked cosine cutoff, for example---without principled statistical justification.
Such thresholds are fragile and invite reviewer skepticism, particularly in an archival-data setting where the cost of misclassification propagates into downstream inference.
A defensible approach requires (i) a transparent threshold anchored to an empirical reference population drawn from the target corpus; (ii) statistical diagnostics that characterize the *shape* of the underlying similarity distribution and so motivate the choice of anchor; and (iii) external validation against naturally occurring anchor populations---byte-level identical pairs as a conservative gold positive subset and large random inter-CPA pairs as a gold negative population---reported with Wilson 95% confidence intervals on per-rule capture / FAR rates, since precision and $F_1$ are not meaningful when the positive and negative anchor populations are sampled from different units.

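To make requirement (iii) concrete, the sketch below computes a Wilson score interval for a per-rule capture rate; the anchor size of 145 matches the byte-identical gold-positive subset reported later, while the captured count of 140 is hypothetical and used only for illustration.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (z = 1.96 for 95%)."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1.0 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1.0 - p) / n + z * z / (4 * n * n))
    return (center - half, center + half)

# Hypothetical illustration: a rule captures 140 of 145 gold-positive signatures.
lo, hi = wilson_ci(140, 145)
print(f"capture = {140 / 145:.3f}, Wilson 95% CI [{lo:.3f}, {hi:.3f}]")
```
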
Despite the significance of the problem for audit quality and regulatory oversight, no prior work has specifically addressed non-hand-signing detection in financial audit documents at scale with these methodological safeguards.
Woodruff et al. [9] developed an automated pipeline for signature analysis in corporate filings for anti-money-laundering investigations, but their work focused on author clustering (grouping signatures by signer identity) rather than detecting reuse of a stored image.
Copy-move forgery detection methods [10], [11] address duplicated regions within or across images but are designed for natural images and do not account for the specific characteristics of scanned document signatures, where legitimate visual similarity between a signer's authentic signatures is expected and must be distinguished from image reproduction.
Research on near-duplicate image detection using perceptual hashing combined with deep learning [12], [13] provides relevant methodological foundations but has not been applied to document forensics or signature analysis.
From the statistical side, the methods we adopt for distributional characterization---the Hartigan dip test [37] and finite mixture modeling via the EM algorithm [40], [41], complemented by a Burgstahler-Dichev / McCrary density-smoothness diagnostic [38], [39]---have been developed in statistics and accounting econometrics but have not, to our knowledge, been combined as a joint diagnostic toolkit for document-forensics threshold selection.

In this paper, we present a fully automated, end-to-end pipeline for detecting non-hand-signed CPA signatures in audit reports at scale.
Our approach processes raw PDF documents through the following stages (a short sketch of stages (2) and (3) follows the list):
(1) signature page identification using a Vision-Language Model (VLM);
(2) signature region detection using a trained YOLOv11 object detector;
(3) deep feature extraction via a pre-trained ResNet-50 convolutional neural network;
(4) dual-descriptor similarity computation combining cosine similarity on deep embeddings with difference hash (dHash) distance;
(5) signature-level distributional characterization using two threshold estimators---KDE antimode with a Hartigan unimodality test and finite Beta mixture via EM with a logit-Gaussian robustness check---complemented by a Burgstahler-Dichev / McCrary density-smoothness diagnostic, used to read the structure of the per-signature similarity distribution and to motivate a percentile-based operational anchor rather than a mixture-fit crossing; and
(6) validation against a pixel-identical anchor, a low-similarity anchor, and a replication-dominated Big-4 calibration firm.

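A minimal sketch of stages (2) and (3), assuming a fine-tuned YOLO detector (the weights file name is a placeholder, not a released artifact) and a standard torchvision ResNet-50 with the classification head removed; this is illustrative scaffolding, not the production implementation:

```python
import torch
from PIL import Image
from torchvision import models, transforms
from ultralytics import YOLO  # pip install ultralytics

# Hypothetical fine-tuned weights file; placeholder name for illustration only.
detector = YOLO("signature_yolo11.pt")

# ResNet-50 trunk with the classifier replaced by identity -> 2048-d embeddings.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()
backbone.eval()

prep = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed_signatures(page: Image.Image) -> torch.Tensor:
    """Detect signature boxes on a rendered page; return one embedding per box."""
    boxes = detector(page)[0].boxes.xyxy.tolist()
    crops = [page.crop(tuple(int(v) for v in b)).convert("RGB") for b in boxes]
    if not crops:
        return torch.empty(0, 2048)
    with torch.no_grad():
        return torch.stack([backbone(prep(c).unsqueeze(0)).squeeze(0) for c in crops])
```
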
The dual-descriptor verification is central to our contribution.
Cosine similarity of deep feature embeddings captures high-level visual style similarity---it can identify signatures that share similar stroke patterns and spatial layouts---but cannot distinguish between a CPA who signs consistently and one whose signature is reproduced from a stored image.
Perceptual hashing (specifically, difference hashing) encodes structural-level image gradients into compact binary fingerprints that are robust to scan noise but sensitive to substantive content differences.
By requiring convergent evidence from both descriptors, we can differentiate *style consistency* (high cosine but divergent dHash) from *image reproduction* (high cosine with low dHash), resolving an ambiguity that neither descriptor can address alone.

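A self-contained sketch of the two descriptors and the convergence rule follows; the 0.95 cosine anchor is the value reported in this paper, while the dHash cut is stated only qualitatively here because its exact value is derived from the reference distribution in Section III.

```python
import numpy as np
from PIL import Image

def dhash(img: Image.Image, size: int = 8) -> int:
    """64-bit difference hash: one bit per horizontal luminance gradient sign."""
    g = img.convert("L").resize((size + 1, size), Image.LANCZOS)
    a = np.asarray(g, dtype=np.int16)
    bits = (a[:, 1:] > a[:, :-1]).flatten()   # compare each pixel to its left neighbor
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(h1: int, h2: int) -> int:
    """Number of differing bits between two dHash fingerprints."""
    return bin(h1 ^ h2).count("1")

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two deep-feature embeddings."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Convergence rule, schematically:
#   cosine > 0.95 AND small Hamming distance  -> image reproduction
#   cosine > 0.95 BUT divergent dHash         -> style consistency only
```
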
A second distinctive feature is our framing of the calibration reference.
One major Big-4 accounting firm in Taiwan (hereafter "Firm A") was selected as a candidate calibration reference on practitioner-knowledge grounds; its benchmark status is then evaluated against the image evidence reported in this paper rather than asserted from that prior motivation alone.
We therefore treat Firm A as a *replication-dominated* calibration reference rather than a pure positive class.
This framing is important because the statistical signature of a replication-dominated population is visible in our data: Firm A's per-signature cosine distribution is unimodal with a long left tail (Hartigan dip $p = 0.17$), 92.5% of Firm A signatures exceed cosine 0.95 with the remaining 7.5% forming the left tail, and 145 Firm A signatures across 50 distinct partners are byte-identical to a same-CPA match in a different audit report (35 spanning different fiscal years).
Adopting the replication-dominated framing---rather than a near-universal framing that would have to absorb the 7.5% residual as noise---ensures internal coherence between the byte-level pixel-identity evidence and the signature-level distributional shape.

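To illustrate how the percentile anchor is read off the reference distribution, the sketch below recovers a cosine cut of roughly 0.95 from synthetic data shaped like the reported Firm A distribution (a tight high-similarity mode holding about 92.5% of mass plus a 7.5% left tail); the generating parameters are invented for the illustration.

```python
import numpy as np

def operational_anchor(reference_cosines: np.ndarray, pct: float = 7.5) -> float:
    """Read the operational cosine cut off the replication-dominated reference."""
    return float(np.percentile(reference_cosines, pct))

# Synthetic stand-in shaped like the reported Firm A distribution.
rng = np.random.default_rng(0)
reference = np.clip(np.concatenate([
    rng.normal(0.98, 0.012, 9250),   # replication-dominated mode (~92.5%)
    rng.normal(0.90, 0.040, 750),    # long left tail (~7.5%)
]), 0.0, 1.0)
print(round(operational_anchor(reference), 3))   # approx. 0.95 on this sample
```
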
A third distinctive feature is the empirical reading we take from the per-signature distributional analysis.
Three diagnostics applied to the per-signature similarity distribution---the Hartigan dip test, an EM-fitted Beta mixture (with logit-Gaussian robustness check), and the Burgstahler-Dichev / McCrary density-smoothness procedure---jointly indicate that no two-mechanism mixture cleanly explains per-signature similarity: the dip test fails to reject unimodality for Firm A, BIC strongly prefers a 3-component over a 2-component Beta fit, and the BD/McCrary candidate transition lies *inside* the non-hand-signed mode rather than between modes (and is not bin-width-stable; Appendix A).
The substantive reading is that *pixel-level output quality* is a continuous spectrum shaped by firm-specific reproduction technologies (administrative stamping in early years, firm-level e-signing later) and scan conditions, rather than a discrete class cleanly separated from hand-signing.
This reading motivates anchoring the operational classifier on a percentile heuristic over the Firm A reference distribution rather than on a mixture-fit crossing, and it motivates the byte-level pixel-identity anchor (Section IV-F.1) as a threshold-free positive reference that does not depend on resolving signature-level mixture structure.

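Two of the three diagnostics are easy to sketch on the logit scale. The snippet below uses the open-source `diptest` package for the Hartigan dip test and, as a stand-in for the primary Beta-mixture EM fit, the logit-Gaussian robustness variant named above; the BD/McCrary smoothness check is omitted for brevity.

```python
import numpy as np
import diptest                              # pip install diptest
from sklearn.mixture import GaussianMixture

def shape_diagnostics(s: np.ndarray) -> None:
    """Dip test plus mixture-order selection on logit-transformed similarities."""
    eps = 1e-6
    z = np.log((s + eps) / (1.0 - s + eps))      # logit of cosines in (0, 1)

    dip, pval = diptest.diptest(z)               # Hartigan test of unimodality
    print(f"dip statistic = {dip:.4f}, p = {pval:.2f}")   # paper reports p = 0.17

    X = z.reshape(-1, 1)
    for k in (2, 3):                             # compare 2- vs 3-component fits
        gm = GaussianMixture(n_components=k, random_state=0).fit(X)
        print(f"k = {k}: BIC = {gm.bic(X):.1f}") # lower is better; paper's Beta fit prefers k = 3
```
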
We apply this pipeline to 90,282 audit reports filed by publicly listed companies in Taiwan between 2013 and 2023, extracting and analyzing 182,328 individual CPA signatures from 758 unique accountants.
To our knowledge, this represents the largest-scale forensic analysis of signature authenticity in financial documents reported in the literature.

The contributions of this paper are summarized as follows:

1. **Problem formulation.** We formally define non-hand-signing detection as distinct from signature forgery detection and argue that it requires an analytical framework focused on intra-signer similarity distributions rather than genuine-versus-forged classification.

2. **End-to-end pipeline.** We present a pipeline that processes raw PDF audit reports through VLM-based page identification, YOLO-based signature detection, deep feature extraction, and dual-descriptor similarity computation, with automated inference requiring no manual intervention after initial training and annotation.

3. **Dual-descriptor verification.** We demonstrate that combining deep-feature cosine similarity with perceptual hashing resolves the fundamental ambiguity between style consistency and image reproduction, and we validate the backbone choice through an ablation study comparing three feature-extraction architectures.

4. **Percentile-anchored operational threshold.** We anchor the operational classifier's cosine cut on the whole-sample Firm A P7.5 percentile (cos $> 0.95$), a transparent and reproducible reference drawn from a replication-dominated reference population, and complement it with dHash structural cuts derived from the same reference distribution. Operational thresholds are therefore explained by an empirical reference rather than asserted.

5. **Distributional characterization of per-signature similarity.** We apply three statistical diagnostics---a Hartigan dip test, an EM-fitted Beta mixture with logit-Gaussian robustness check, and a Burgstahler-Dichev / McCrary density-smoothness procedure---to characterize the shape of the per-signature similarity distribution. The three diagnostics jointly find that per-signature similarity forms a continuous quality spectrum, which both motivates the percentile-based operational anchor over a mixture-fit crossing and is itself a substantive finding for the document-forensics literature on similarity-threshold selection.

6. **Replication-dominated calibration methodology.** We introduce a calibration strategy using a replication-dominated reference group, distinguishing *replication-dominated* from *replication-pure* anchors, and we validate classification using byte-level pixel identity as an annotation-free gold positive, requiring no manual labeling (a minimal sketch of this byte-identity check follows the list).

7. **Large-scale empirical analysis.** We report findings from the analysis of over 90,000 audit reports spanning a decade, providing the first large-scale empirical evidence on non-hand-signing practices in financial reporting under a methodology designed for peer-review defensibility.

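For contribution 6, the byte-identity gold positive reduces to exact-duplicate grouping. A minimal sketch, assuming signature crops are available as (cpa_id, report_id, bytes) triples; the data layout and the choice of SHA-256 are illustrative assumptions, not the released implementation:

```python
import hashlib
from collections import defaultdict

def byte_identical_groups(crops: list[tuple[str, str, bytes]]) -> dict[str, list[str]]:
    """Group signature crops by exact byte content within each CPA.

    Any digest bucket containing two or more distinct reports for the same CPA
    is a threshold-free gold positive: the image was reproduced byte-for-byte.
    """
    buckets: dict[tuple[str, str], set[str]] = defaultdict(set)
    for cpa_id, report_id, blob in crops:
        digest = hashlib.sha256(blob).hexdigest()
        buckets[(cpa_id, digest)].add(report_id)
    return {f"{cpa}:{dig[:12]}": sorted(reports)
            for (cpa, dig), reports in buckets.items() if len(reports) >= 2}
```
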
The remainder of this paper is organized as follows.
Section II reviews related work on signature verification, document forensics, perceptual hashing, and the statistical methods we adopt for distributional characterization.
Section III describes the proposed methodology.
Section IV presents experimental results, including the signature-level distributional characterization, pixel-identity validation, and backbone ablation study.
Section V discusses the implications and limitations of our findings.
Section VI concludes with directions for future work.