Paper A v3.18: remove accountant-level + replication-dominated calibration + Gemini 2.5 Pro review minor fixes

Major changes (per partner red-pen + user decision):
- Delete entire accountant-level analysis (III.J, IV.E, Tables VI/VII/VIII,
  Fig 4) -- cross-year pooling assumption unjustified, removes the implicit
  "habitually stamps = always stamps" reading.
- Renumber sections III.J/K/L (was K/L/M) and IV.E/F/G/H/I (was F/G/H/I/J).
- Title: "Three-Method Convergent Thresholding" -> "Replication-Dominated
  Calibration" (the three diagnostics do NOT converge at signature level).
- Operational cosine cut anchored on whole-sample Firm A P7.5 (cos > 0.95).
- Three statistical diagnostics (Hartigan/Beta/BD-McCrary) reframed as
  descriptive characterisation, not threshold estimators.
- Firm A replication-dominated framing: 3 evidence strands -> 2.
- Discussion limitation list: drop accountant-level cross-year pooling and
  BD/McCrary diagnostic; add auditor-year longitudinal tracking as future work.
- Tone-shift: "we do not claim / do not derive" -> "we find / motivates".

Reference verification (independent web-search audit of all 41 refs):
- Fix [5] author hallucination: Hadjadj et al. -> Kao & Wen (real authors of
  Appl. Sci. 10:11:3716; report at paper/reference_verification_v3.md).
- Polish [16] [21] [22] [25] (year/volume/page-range/model-name).

Gemini 2.5 Pro peer review (Minor Revision verdict, A-F all positive):
- Neutralize script-path references in tables/appendix -> "supplementary
  materials".
- Move conflict-of-interest declaration from III-L to new Declarations
  section before References (paper_a_declarations_v3.md).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 17:43:09 +08:00
parent 6ab6e19137
commit 16e90bab20
13 changed files with 430 additions and 264 deletions
+61 -70
@@ -4,7 +4,7 @@
We propose a six-stage pipeline for large-scale non-hand-signed auditor signature detection in scanned financial documents.
Fig. 1 illustrates the overall architecture.
-The pipeline takes as input a corpus of PDF audit reports and produces, for each document, a classification of its CPA signatures along a confidence continuum supported by convergent evidence from two methodologically distinct threshold estimators complemented by a density-smoothness diagnostic and a pixel-identity anchor.
+The pipeline takes as input a corpus of PDF audit reports and produces, for each document, a classification of its CPA signatures along a confidence continuum anchored on whole-sample Firm A percentile heuristics and validated against a byte-level pixel-identity positive anchor and a large random inter-CPA negative anchor.
Throughout this paper we use the term *non-hand-signed* rather than "digitally replicated" to denote any signature produced by reproducing a previously stored image of the partner's signature---whether by administrative stamping workflows (dominant in the early years of the sample) or firm-level electronic signing systems (dominant in the later years).
From the perspective of the output image the two workflows are equivalent: both reproduce a single stored image so that signatures on different reports from the same partner are identical up to reproduction noise.
@@ -14,9 +14,9 @@ From the perspective of the output image the two workflows are equivalent: both
90,282 PDFs → VLM Pre-screening → 86,072 PDFs
→ YOLOv11 Detection → 182,328 signatures
→ ResNet-50 Features → 2048-dim embeddings
-→ Dual-Method Verification (Cosine + dHash)
-Three-Method Threshold (KDE / BD-McCrary / Beta mixture) → Classification
-→ Pixel-identity + Firm A + Accountant-level GMM validation
+→ Dual-Descriptor Verification (Cosine + dHash)
+Firm A P7.5-anchored Classifier → Five-way classification
+→ Pixel-identity + Inter-CPA + Held-Out Firm A validation
-->
## B. Data Collection
@@ -84,7 +84,7 @@ Preprocessing consisted of resizing to 224×224 pixels with aspect-ratio preserv
All feature vectors were L2-normalized, ensuring that cosine similarity equals the dot product.
The choice of ResNet-50 without fine-tuning was motivated by three considerations: (1) the task is similarity comparison rather than classification, making general-purpose discriminative features sufficient; (2) ImageNet features have been shown to transfer effectively to document analysis tasks [20], [21]; and (3) avoiding domain-specific fine-tuning reduces the risk of overfitting to dataset-specific artifacts, though we note that a fine-tuned model could potentially improve discriminative performance (see Section V-G).
-This design choice is validated by an ablation study (Section IV-J) comparing ResNet-50 against VGG-16 and EfficientNet-B0.
+This design choice is validated by an ablation study (Section IV-I) comparing ResNet-50 against VGG-16 and EfficientNet-B0.
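The L2-normalization property noted above (cosine similarity equals the dot product for unit vectors) can be sketched in a few lines; the embeddings here are random toy stand-ins for the 2048-dim ResNet-50 features, not the pipeline's actual output:

```python
import numpy as np

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 2048))                    # toy stand-ins for ResNet-50 embeddings
emb /= np.linalg.norm(emb, axis=1, keepdims=True)   # L2-normalize each feature vector

# For unit vectors, cosine similarity reduces to a plain dot product:
cos = emb @ emb.T
assert np.allclose(np.diag(cos), 1.0)               # self-similarity is exactly 1
```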
## F. Dual-Method Similarity Descriptors
@@ -113,31 +113,29 @@ Cosine similarity and dHash are both robust to the noise introduced by the print
## G. Unit of Analysis and Summary Statistics
-Three unit-of-analysis choices are relevant for this study, ordered from finest to coarsest: (i) the *signature*---one signature image extracted from one report; (ii) the *auditor-year*---all signatures by one CPA within one fiscal year; and (iii) the *accountant*---the collection of all signatures attributed to a single CPA across the full sample period.
-All three are well-defined as descriptive groupings without additional assumptions; the distinction that matters for *regime interpretation*---i.e., reading a unit's summary as "this CPA's signing mechanism for that unit"---is that the auditor-year is the smallest CPA-level aggregation that is coherent under the stipulations below without additional across-year homogeneity, whereas the accountant unit is a deliberate cross-year pooling that may blend distinct signing-mechanism regimes when a CPA's practice changes over the sample period.
-We use all three units in the paper and specify the role of each at the point of use.
+Two unit-of-analysis choices are relevant for this study, ordered from finest to coarsest: (i) the *signature*---one signature image extracted from one report; and (ii) the *auditor-year*---all signatures by one CPA within one fiscal year.
+The signature is the operational unit of classification (Section III-K) and of all primary statistical analyses (Section IV-D, IV-F, IV-G).
+The auditor-year is used in the partner-level similarity ranking of Section IV-F.2 as a deliberately within-year aggregation that avoids cross-year pooling.
+We do not use a coarser CPA-level cross-year unit, because pooling a CPA's signatures across the full 2013--2023 sample period would conflate distinct signing-mechanism regimes whenever a CPA's practice changes during the sample, and we make no claim about the within-CPA stability of signing mechanisms over time.
For per-signature classification we compute, for each signature, the maximum pairwise cosine similarity and the minimum dHash Hamming distance against every other signature attributed to the same CPA (over the full same-CPA set, not restricted to the same fiscal year).
The max/min (rather than mean) formulation reflects the identification logic for non-hand-signing: if even one other signature of the same CPA is a pixel-level reproduction, that pair will dominate the extremes and reveal the non-hand-signed mechanism.
Mean statistics would dilute this signal.
+For the dHash dimension we use the *independent minimum dHash*: the minimum Hamming distance from a signature to *any* other signature of the same CPA (over the full same-CPA set).
+The independent minimum is unconditional on the cosine-nearest pair and is therefore the conservative structural-similarity statistic; it is the dHash statistic used throughout the operational classifier (Section III-K) and all reported capture-rate analyses.
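The max-cosine / independent-minimum-dHash extremes can be sketched as follows. The helper name and the precomputed pairwise matrices are illustrative assumptions, not the paper's code; the point is that an extreme statistic lets a single replicated pair dominate, which a mean would dilute:

```python
import numpy as np

def per_signature_extremes(cos_sim, dhash_dist):
    """Per-signature max best-match cosine and independent minimum dHash.

    cos_sim, dhash_dist: (n, n) pairwise matrices over all signatures of one
    CPA, pooled across fiscal years; self-pairs on the diagonal are excluded.
    """
    n = cos_sim.shape[0]
    off = ~np.eye(n, dtype=bool)                               # mask out self-pairs
    max_cos = np.where(off, cos_sim, -np.inf).max(axis=1)      # extreme, not mean
    min_dhash = np.where(off, dhash_dist, np.inf).min(axis=1)  # unconditional minimum
    return max_cos, min_dhash
```

Even one near-identical pair (cosine 0.99, dHash 1) pushes both extremes into the replication regime for the two signatures involved, regardless of how the CPA's other signatures look.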
We make one stipulation about same-CPA pair detectability.
-**(A1) Pair-detectability** is a statistical assumption scoped to the same-CPA pool (pooled across fiscal years, matching the max/min computation above): if a CPA uses image replication anywhere in the corpus, at least one pair of same-CPA signatures is near-identical after reproduction noise, so that max cosine / min dHash detects the replication.
-This is plausible for high-volume stamping or firm-level electronic-signing workflows---where a stored image is typically reused many times under similar scan and compression conditions---but is not guaranteed in sparse CPA-corpora with only one observed replicated report, when multiple template variants are in use, or when scan-stage noise pushes a replicated pair outside the detection regime.
-A1 is what the per-signature detector requires to be sensitive to replication; it is a cross-year pair-existence property, not a within-year uniformity claim.
+**(A1) Pair-detectability.** *If a CPA uses image replication anywhere in the corpus, then at least one same-CPA signature pair is near-identical (after reproduction noise) within the cross-year same-CPA pool used by the max-cosine / min-dHash computation above.*
+This is plausible for high-volume stamping or firm-level electronic-signing workflows---where a stored image is typically reused many times under similar scan and compression conditions---but it is *not* guaranteed when (i) the corpus contains only one observed replicated report for a CPA, (ii) multiple template variants are in use simultaneously, or (iii) scan-stage noise pushes a replicated pair outside the detection regime.
+A1 is a *cross-year pair-existence* property, not a within-year uniformity claim, and is the only assumption the per-signature detector requires to be sensitive to replication.
We make *no* within-year or across-year uniformity assumption about CPA signing mechanisms.
Per-signature labels are signature-level quantities throughout this paper; we do not translate them to per-report or per-partner mechanism assignments, and we abstain from partner-level frequency inferences (such as "X% of CPAs hand-sign") that would require such a translation.
A CPA's signing output within a single fiscal year may reflect a single replication template, multiple templates used in parallel (e.g., different stored images for different engagement positions or reporting pipelines), within-year mechanism mixing, or a combination; our signature-level analyses remain valid under all of these regimes, since they do not attempt mechanism attribution at the partner or report level.
-The accountant-level summary statistics of Section III-J are likewise cross-year pooled quantities by construction, and may blend distinct signing-mechanism regimes when a CPA's practice changes over the sample period; we treat this as a design choice, not an identification assumption, and the accountant-level aggregates are to be read as characterizing each CPA's pooled observed tendency over the full sample period rather than a single time-invariant regime.
-The intra-report consistency analysis in Section IV-H.3 is a firm-level homogeneity check---whether the *two co-signing CPAs on the same report* receive the same signature-level label under the operational classifier---rather than a test of within-partner or within-year uniformity.
-For accountant-level analysis we additionally aggregate these per-signature statistics to the CPA level by computing the mean best-match cosine and the mean *independent minimum dHash* across all signatures of that CPA.
-The *independent minimum dHash* of a signature is defined as the minimum Hamming distance to *any* other signature of the same CPA (over the full same-CPA set).
-The independent minimum is unconditional on the cosine-nearest pair and is therefore the conservative structural-similarity statistic; it is the dHash statistic used throughout the operational classifier (Section III-L) and all reported capture-rate analyses.
-These accountant-level aggregates are the input to the mixture model described in Section III-J and to the accountant-level threshold analysis in Section III-I.5.
+The intra-report consistency analysis in Section IV-F.3 is a firm-level homogeneity check---whether the *two co-signing CPAs on the same report* receive the same signature-level label under the operational classifier---rather than a test of within-partner or within-year uniformity.
## H. Calibration Reference: Firm A as a Replication-Dominated Population
@@ -147,40 +145,49 @@ Rather than treating Firm A as a synthetic or laboratory positive control, we tr
The background context for this choice is practitioner knowledge about Firm A's signing practice: industry practice at the firm is widely understood among practitioners to involve reproducing a stored signature image for the majority of certifying partners---originally via administrative stamping workflows and later via firm-level electronic signing systems---while not ruling out that a minority of partners may continue to hand-sign some or all of their reports.
We use this only as background context for why Firm A is a plausible calibration candidate; the *evidence* for Firm A's replication-dominated status comes entirely from the paper's own analyses, which do not depend on any claim about signing practice beyond what the audit-report images themselves show.
-We establish Firm A's replication-dominated status through three primary independent quantitative analyses plus a fourth strand comprising three complementary checks, each of which can be reproduced from the public audit-report corpus alone:
+We establish Firm A's replication-dominated status through two primary independent quantitative analyses plus a third strand comprising three complementary checks, each of which can be reproduced from the public audit-report corpus alone:
-First, *automated byte-level pair analysis* (Section IV-G.1) identifies 145 Firm A signatures that are byte-identical to at least one other same-CPA signature from a different audit report, distributed across 50 distinct Firm A partners (of 180 registered); 35 of these byte-identical matches span different fiscal years.
-Byte-identity implies pixel-identity by construction, and independent hand-signing cannot produce pixel-identical images across distinct reports---these pairs therefore establish image reuse as a concrete, threshold-free phenomenon within Firm A.
+First, *automated byte-level pair analysis* (Section IV-F.1) identifies 145 Firm A signatures that are byte-identical to at least one other same-CPA signature from a different audit report, distributed across 50 distinct Firm A partners (of 180 registered); 35 of these byte-identical matches span different fiscal years.
+Byte-identity implies pixel-identity by construction, and independent hand-signing cannot produce pixel-identical images across distinct reports---these pairs therefore establish image reuse as a concrete, threshold-free phenomenon within Firm A and confirm that replication is widespread (50 of 180 registered partners) rather than confined to a handful of CPAs.
-Second, *whole-sample signature-level rates*: 92.5% of Firm A's per-signature best-match cosine similarities exceed 0.95, consistent with non-hand-signing as the dominant mechanism, while the remaining 7.5% form a long left tail reflecting within-firm heterogeneity in signing output (we do not disaggregate partner-level mechanism here; see Section III-G for the scope of claims).
+Second, *signature-level distributional evidence*: Firm A's per-signature best-match cosine distribution is unimodal with a long left tail (Hartigan dip test $p = 0.17$ at $n \geq 10$ signatures; Section IV-D), consistent with a single dominant mechanism (non-hand-signing) plus residual within-firm heterogeneity rather than two cleanly separated mechanisms.
+92.5% of Firm A's per-signature best-match cosine similarities exceed 0.95 and the remaining 7.5% form the long left tail (we do not disaggregate partner-level mechanism here; see Section III-G for the scope of claims).
+The unimodal-long-tail shape, not the precise 92.5/7.5 split, is the structural evidence: it predicts that Firm A is replication-dominated rather than a clean two-class population, and a noise-only explanation of the left tail would predict a shrinking share as scan/PDF technology matured over 2013--2023, which is not what we observe (Section IV-F.1).
-Third, *accountant-level mixture analysis* (Section IV-E): a BIC-selected three-component Gaussian mixture over per-accountant mean cosine and mean dHash places 139 of the 171 Firm A CPAs (with $\geq 10$ signatures) in the high-replication C1 cluster and 32 in the middle-band C2 cluster, directly quantifying the within-firm heterogeneity.
+Third, we additionally validate the Firm A benchmark through three complementary analyses reported in Section IV-F. Only the partner-level ranking is fully threshold-free; the longitudinal-stability and intra-report analyses use the operational classifier and are interpreted as consistency checks on its firm-level output:
+(a) *Longitudinal stability (Section IV-F.1).* The share of Firm A per-signature best-match cosine values below 0.95 is stable at 6-13% across 2013-2023, with the lowest share in 2023. The 0.95 cutoff is the whole-sample Firm A P7.5 heuristic (Section III-K; 92.5% of whole-sample Firm A signatures exceed this cutoff); the substantive finding here is the *temporal stability* of the rate, not the absolute rate at any single year.
+(b) *Partner-level similarity ranking (Section IV-F.2).* When every auditor-year is ranked globally by its per-auditor-year mean best-match cosine (across all firms: Big-4 and Non-Big-4), Firm A auditor-years account for 95.9% of the top decile against a baseline share of 27.8% (a 3.5$\times$ concentration ratio), and this over-representation is stable across 2013-2023. This analysis uses only the ordinal ranking and is independent of any absolute cutoff.
+(c) *Intra-report consistency (Section IV-F.3).* Because each Taiwanese statutory audit report is co-signed by two engagement partners, firm-wide stamping practice predicts that both signers on a given Firm A report should receive the same signature-level label under the classifier. Firm A exhibits 89.9% intra-report agreement against 62-67% at the other Big-4 firms. This test uses the operational classifier and is therefore a *consistency* check on the classifier's firm-level output rather than a threshold-free test; the cross-firm gap (not the absolute rate) is the substantive finding.
-Fourth, we additionally validate the Firm A benchmark through three complementary analyses reported in Section IV-H. Only the partner-level ranking is fully threshold-free; the longitudinal-stability and intra-report analyses use the operational classifier and are interpreted as consistency checks on its firm-level output:
-(a) *Longitudinal stability (Section IV-H.1).* The share of Firm A per-signature best-match cosine values below 0.95 is stable at 6-13% across 2013-2023, with the lowest share in 2023. The 0.95 cutoff is the whole-sample Firm A P7.5 heuristic (Section III-L; 92.5% of whole-sample Firm A signatures exceed this cutoff); the substantive finding here is the *temporal stability* of the rate, not the absolute rate at any single year.
-(b) *Partner-level similarity ranking (Section IV-H.2).* When every auditor-year is ranked globally by its per-auditor-year mean best-match cosine (across all firms: Big-4 and Non-Big-4), Firm A auditor-years account for 95.9% of the top decile against a baseline share of 27.8% (a 3.5$\times$ concentration ratio), and this over-representation is stable across 2013-2023. This analysis uses only the ordinal ranking and is independent of any absolute cutoff.
-(c) *Intra-report consistency (Section IV-H.3).* Because each Taiwanese statutory audit report is co-signed by two engagement partners, firm-wide stamping practice predicts that both signers on a given Firm A report should receive the same signature-level label under the classifier. Firm A exhibits 89.9% intra-report agreement against 62-67% at the other Big-4 firms. This test uses the operational classifier and is therefore a *consistency* check on the classifier's firm-level output rather than a threshold-free test; the cross-firm gap (not the absolute rate) is the substantive finding.
-We emphasize that the 92.5% figure is a within-sample consistency check rather than an independent validation of Firm A's status; the validation role is played by the byte-level pixel-identity evidence, the accountant-level mixture, the three complementary analyses above, and the held-out Firm A fold (which confirms the qualitative replication-dominated framing; fold-level rate differences are disclosed in Section IV-G.2) described in Section III-K.
+We emphasize that the 92.5% figure is a within-sample consistency check rather than an independent validation of Firm A's status; the validation role is played by the byte-level pixel-identity evidence, the unimodal-long-tail dip-test result, the three complementary analyses above, and the held-out Firm A fold (described in Section III-J; fold-level rate differences are disclosed in Section IV-F.2).
We emphasize that Firm A's replication-dominated status was *not* derived from the thresholds we calibrate against it.
-Its identification rests on visual evidence and accountant-level clustering that is independent of the statistical pipeline.
+Its identification rests on the byte-level pair evidence and the dip-test-confirmed unimodal-long-tail shape, both of which are independent of any threshold choice.
The "replication-dominated, not pure" framing is important both for internal consistency---it predicts and explains the long left tail observed in Firm A's cosine distribution (Section IV-D)---and for avoiding overclaim in downstream inference.
-## I. Convergent Threshold Determination with a Density-Smoothness Diagnostic
+## I. Signature-Level Threshold Characterisation
-Direct assignment of thresholds based on prior intuition (e.g., cosine $\geq 0.95$ for non-hand-signed) is analytically convenient but methodologically vulnerable: reviewers can reasonably ask why these particular cutoffs rather than others.
-To place threshold selection on a statistically principled and data-driven footing, we apply *two methodologically distinct* threshold estimators---KDE antimode with a Hartigan dip test, and a finite Beta mixture (with a logit-Gaussian robustness check)---whose underlying assumptions decrease in strength (KDE antimode requires only smoothness; the Beta mixture additionally requires a parametric specification, and the logit-Gaussian cross-check reports sensitivity to that form).
-We complement these estimators with a Burgstahler-Dichev / McCrary density-smoothness diagnostic applied to the same distributions.
-The BD/McCrary procedure is *not* a third threshold estimator in our application---we show in Appendix A that the signature-level BD transitions are not bin-width-robust and that the accountant-level BD null survives a bin-width sweep---but it is informative about *how* the accountant-level distribution fails to exhibit a sharp density discontinuity even though it is clustered.
-The methods are applied to the same sample rather than to independent experiments, so their estimates are not statistically independent; convergence between the two threshold estimators is therefore a diagnostic of distributional structure rather than a formal statistical guarantee.
-When the two estimates agree, the decision boundary is robust to the choice of method; when the BD/McCrary diagnostic finds no significant transition at the same level, that pattern is evidence for clustered-but-smoothly-mixed rather than sharply discontinuous distributional structure.
+This section describes how we set the operational classifier's similarity threshold and how we characterise the per-signature similarity distribution that supports it.
+The two roles are kept separate by design.
+> **Operational threshold (used by the classifier).** The cosine cut is anchored on the whole-sample Firm A P7.5 percentile (cos $> 0.95$; Section III-K).
+>
+> **Statistical characterisation (used to motivate the choice of anchor and to describe the distributional structure).** A Hartigan dip test, an EM-fitted Beta mixture (with logit-Gaussian robustness check), and a Burgstahler-Dichev / McCrary density-smoothness procedure---all applied at the per-signature level (Section IV-D).
+The reason for the split is empirical.
+The three statistical diagnostics jointly find that per-signature similarity forms a continuous quality spectrum (Section IV-D, summarised below): the dip test fails to reject unimodality for Firm A; BIC strongly prefers a 3-component over a 2-component Beta fit, so the 2-component crossing is a forced fit; and the BD/McCrary candidate transition lies inside the non-hand-signed mode rather than between modes (and is not bin-width-stable; Appendix A).
+Under these conditions the natural anchor for an operational cosine cut is a transparent percentile of a known-majority-positive reference population (Firm A) rather than a mixture-fit crossing whose location depends on parametric assumptions the data do not support.
+We describe the three diagnostics and the assumptions underlying each in the subsections below.
+The two threshold estimators rest on decreasing-in-strength assumptions: the KDE antimode/crossover requires only smoothness; the Beta mixture additionally requires a parametric specification, and the logit-Gaussian cross-check reports sensitivity to that form.
+The Burgstahler-Dichev / McCrary procedure is applied to the same distribution as a *density-smoothness diagnostic*: it would identify a sharp local density discontinuity if one existed at the boundary between two cleanly separated mechanisms.
+Because all three diagnostics are applied to the same sample rather than to independent experiments, agreement or disagreement among them is read as evidence about distributional structure rather than as a formal statistical guarantee.
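The mechanics of the whole-sample P7.5 percentile anchor can be illustrated on synthetic data. This is only an editor's sketch of the arithmetic: the 0.95 cutoff and the 92.5/7.5 split are properties of the real Firm A sample, not of this toy draw:

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic stand-in for Firm A's per-signature best-match cosine values:
# a high-similarity bulk with a left tail, clipped to the valid cosine range.
firm_a_cos = np.clip(rng.normal(0.985, 0.02, 10_000), 0.0, 1.0)

threshold = np.percentile(firm_a_cos, 7.5)            # whole-sample P7.5 anchor
share_above = float((firm_a_cos > threshold).mean())  # ~92.5% by construction
```

The anchor is transparent in exactly the sense the text describes: given the reference population, the cut is a one-line percentile computation with no parametric fit.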
### 1) Method 1: KDE Antimode / Crossover with Unimodality Test
We use two closely related KDE-based threshold estimators and apply each where it is appropriate.
When two labeled populations are available (e.g., the all-pairs intra-class and inter-class similarity distributions of Section IV-C), the *KDE crossover* is the intersection point of the two kernel density estimates under Scott's rule for bandwidth selection [28]; under equal priors and symmetric misclassification costs it approximates the Bayes-optimal decision boundary between the two classes.
-When a single distribution is analyzed (e.g., the per-accountant cosine mean of Section IV-E) the *KDE antimode* is the local density minimum between two modes of the fitted density; it serves the same decision-theoretic role when the distribution is multimodal but is undefined when the distribution is unimodal.
+When a single distribution is analysed (e.g., the per-signature best-match cosine distribution of Section IV-D) the *KDE antimode* is the local density minimum between two modes of the fitted density; it serves the same decision-theoretic role when the distribution is multimodal but is undefined when the distribution is unimodal.
In either case we use the Hartigan & Hartigan dip test [37] as a formal test of unimodality (rejecting the null of unimodality is consistent with but does not directly establish bimodality specifically), and perform a sensitivity analysis varying the bandwidth over $\pm 50\%$ of the Scott's-rule value to verify threshold stability.
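A minimal sketch of the two-population KDE crossover under these definitions (SciPy's `gaussian_kde` uses Scott's rule by default; the two-class data here are synthetic, not the paper's similarity distributions):

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_crossover(intra, inter, num=2001):
    """First intersection of the two Scott's-rule KDEs between the class means;
    under equal priors and symmetric costs this approximates the Bayes boundary."""
    lo, hi = sorted((float(np.mean(inter)), float(np.mean(intra))))
    grid = np.linspace(lo, hi, num)
    diff = gaussian_kde(intra)(grid) - gaussian_kde(inter)(grid)
    crossings = np.nonzero(np.diff(np.sign(diff)))[0]   # sign changes of the gap
    return float(grid[crossings[0]]) if crossings.size else None

rng = np.random.default_rng(1)
intra = rng.normal(0.97, 0.010, 500)   # toy intra-class (same-CPA) cosines
inter = rng.normal(0.70, 0.050, 500)   # toy inter-class (inter-CPA) cosines
boundary = kde_crossover(intra, inter)
```

Returning `None` when no crossing exists mirrors the antimode case in the text: the estimator is simply undefined on a unimodal distribution. A bandwidth sensitivity check would rerun the fit with `gaussian_kde(x, bw_method=f)` for factors spanning ±50% of the Scott value.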
### 2) Method 2: Finite Mixture Model via EM
@@ -207,33 +214,19 @@
$$Z_i = \frac{n_i - \tfrac{1}{2}(n_{i-1} + n_{i+1})}{\sqrt{N p_i (1-p_i) + \tfrac{1}{4} N (p_{i-1} + p_{i+1})(1 - p_{i-1} - p_{i+1})}}$$
which is approximately $N(0,1)$ under the null of distributional smoothness.
A candidate transition is identified at an adjacent bin pair where $Z_{i-1}$ is significantly negative and $Z_i$ is significantly positive (cosine) or the reverse (dHash).
-Appendix A reports a bin-width sensitivity sweep covering $\text{bin} \in \{0.003, 0.005, 0.010, 0.015\}$ for cosine and $\text{bin} \in \{1, 2, 3\}$ for dHash; the sweep shows that signature-level BD transitions are not bin-width-stable and that accountant-level BD transitions are largely absent, consistent with clustered-but-smoothly-mixed accountant-level aggregates.
+Appendix A reports a bin-width sensitivity sweep covering $\text{bin} \in \{0.003, 0.005, 0.010, 0.015\}$ for cosine and $\text{bin} \in \{1, 2, 3\}$ for dHash; the sweep shows that signature-level BD transitions are not bin-width-stable, consistent with histogram-resolution artifacts rather than a genuine cross-mode density discontinuity.
We therefore do not treat the BD/McCrary procedure as a threshold estimator in our application but as diagnostic evidence about distributional smoothness.
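The adjacent-bin statistic can be sketched directly from the $Z_i$ definition, assuming the standard Burgstahler–Dichev variance term; the bin counts below are toy values, not the paper's histograms:

```python
import numpy as np

def bd_z(counts):
    """Burgstahler-Dichev / McCrary adjacent-bin Z scores for interior bins.

    counts: histogram bin counts of the similarity statistic.
    Each Z_i compares bin i's count with the mean of its two neighbours,
    standardized under the null of a smooth density; edge bins are NaN.
    """
    n = np.asarray(counts, dtype=float)
    N = n.sum()
    p = n / N
    z = np.full(len(n), np.nan)
    for i in range(1, len(n) - 1):
        var = (N * p[i] * (1 - p[i])
               + 0.25 * N * (p[i - 1] + p[i + 1]) * (1 - p[i - 1] - p[i + 1]))
        z[i] = (n[i] - 0.5 * (n[i - 1] + n[i + 1])) / np.sqrt(var)
    return z
```

A candidate transition requires $Z_{i-1}$ significantly negative and $Z_i$ significantly positive (or the reverse for dHash); rerunning `bd_z` over several bin widths is exactly the sensitivity sweep reported in Appendix A.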
-### 4) Convergent Validation and Level-Shift Framing
+### 4) Reading the Three Diagnostics Together
-The two threshold estimators rest on decreasing-in-strength assumptions: the KDE antimode/crossover requires only smoothness; the Beta mixture additionally requires a parametric specification (with logit-Gaussian as a robustness cross-check against that form).
-If the two estimated thresholds differ by less than a practically meaningful margin, the classification is robust to the choice of method.
+If the two estimated thresholds were to differ by less than a practically meaningful margin and the BD/McCrary procedure were to identify a sharp transition at the same level, that pattern would constitute convergent evidence for a clean two-mechanism boundary at that location.
-Equally informative is the *level at which the methods agree or disagree*.
-Applied to the per-signature similarity distribution the two estimators yield thresholds spread across a wide range because per-signature similarity is not a cleanly bimodal population (Section IV-D).
-Applied to the per-accountant cosine mean, the KDE antimode and the Beta-mixture crossing (together with its logit-Gaussian counterpart) converge within a narrow band, while the BD/McCrary diagnostic finds no significant transition at the same level; this pattern is consistent with a clustered but smoothly mixed accountant-level distribution rather than a sharply discrete discontinuity, and we interpret it accordingly in Section V rather than treating the BD null as a failure of the test.
+This is *not* the pattern we observe at the per-signature level.
+The two threshold estimators yield crossings spread across a wide range (Section IV-D); the BIC clearly prefers a 3-component over a 2-component Beta fit, indicating that the 2-component crossing is a forced fit and should be read as an upper bound rather than a definitive cut; and the BD/McCrary procedure locates its candidate transition *inside* the non-hand-signed mode rather than between modes (Appendix A).
+We interpret this jointly as evidence that per-signature similarity is a continuous quality spectrum rather than a clean two-mechanism mixture, and we accordingly anchor the operational classifier's cosine cut on whole-sample Firm A percentile heuristics (Section III-K) rather than on a mixture-fit crossing.
-### 5) Accountant-Level Application
-In addition to applying the two threshold estimators and the BD/McCrary diagnostic at the per-signature level (Section IV-D), we apply them to the per-accountant aggregates (mean best-match cosine, mean independent minimum dHash) for the 686 CPAs with $\geq 10$ signatures.
-The accountant-level estimates from the two threshold estimators (together with their convergence) provide the methodologically defensible threshold reference used in the per-document classification of Section III-L; the BD/McCrary accountant-level null is reported alongside as a smoothness diagnostic.
-## J. Accountant-Level Mixture Model
-In addition to the signature-level analysis, we fit a Gaussian mixture model in two dimensions to the per-accountant aggregates (mean best-match cosine, mean independent minimum dHash).
-The motivation is that an individual CPA's cross-year-pooled signing *tendency*---their full-sample distribution of best-match statistics---is expected to cluster with other CPAs of similar tendency, even when the output pixel-level *quality* at the signature level lies on a continuous spectrum.
-Cluster membership in the accountant-level fit is accordingly best read as a *pooled observed tendency* over the CPA's full sample-period signature set rather than as a time-invariant signing regime; where a CPA switched mechanisms during the sample period, their accountant-level coordinates reflect a weighted mix of the corresponding regimes.
-We fit mixtures with $K \in \{1, 2, 3, 4, 5\}$ components under full covariance, selecting $K^*$ by BIC with 15 random initializations per $K$.
-For the selected $K^*$ we report component means, weights, per-component firm composition, and the marginal-density crossing points from the two-component fit, which serve as the natural per-accountant thresholds.
## J. Pixel-Identity, Inter-CPA, and Held-Out Firm A Validation (No Manual Annotation)
Rather than construct a stratified manual-annotation validation set, we validate the classifier using four naturally occurring reference populations that require no human labeling:
Inter-CPA pairs cannot arise from reuse of a single signer's stored signature image, so this population is a reliable negative class for threshold sweeps.
This anchor is substantially larger than a simple low-similarity-same-CPA negative and yields tight Wilson 95% confidence intervals on FAR at each candidate threshold.
3. **Firm A anchor (replication-dominated prior positive):** Firm A signatures, treated as a majority-positive reference with within-firm heterogeneity in the left tail, as evidenced by the 7.5% of Firm A signatures whose per-signature best-match cosine falls at or below 0.95 (Section III-H, Section IV-D).
Because Firm A is used both for empirical percentile calibration in Section III-H and as a validation anchor, we make the within-Firm-A sampling variance visible by splitting Firm A CPAs randomly (at the CPA level, not the signature level) into a 70% *calibration* fold and a 30% *heldout* fold.
The calibration-fold percentiles used in thresholding---cosine median, P1, and P5 (lower-tail, since higher cosine indicates greater similarity), and dHash_indep median and P95 (upper-tail, since lower dHash indicates greater similarity)---are derived from the 70% calibration fold only.
The heldout fold is used exclusively to report post-hoc capture rates with Wilson 95% confidence intervals.
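The CPA-level fold split and calibration-fold percentiles can be sketched as follows; the record layout and similarity values are hypothetical stand-ins (`cpa_ids` and `cosine` are not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-signature records: 100 CPAs x 20 signatures each.
cpa_ids = np.repeat(np.arange(100), 20)
cosine = rng.beta(30, 2, size=cpa_ids.size)   # stand-in similarity values

# Split at the CPA level, not the signature level, so no CPA straddles folds.
cpas = np.unique(cpa_ids)
rng.shuffle(cpas)
n_cal = int(round(0.7 * cpas.size))
cal_cpas = set(cpas[:n_cal].tolist())
cal_mask = np.array([c in cal_cpas for c in cpa_ids])

# Lower-tail calibration percentiles come from the 70% fold only.
p1, p5, median = np.percentile(cosine[cal_mask], [1, 5, 50])

# The held-out fold is used only for post-hoc capture reporting.
heldout_capture = float(np.mean(cosine[~cal_mask] >= p5))
```

Splitting on CPA identifiers rather than on signatures prevents one CPA's (correlated) signatures from leaking across folds.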
From these anchors we report FAR with Wilson 95% confidence intervals against the inter-CPA negative anchor.
We do not report an Equal Error Rate or FRR column against the byte-identical positive anchor, because byte-identical pairs have cosine $\approx 1$ by construction and any FRR computed against that subset is trivially $0$ at every threshold below $1$; the conservative-subset role of the byte-identical anchor is instead discussed qualitatively in Section V-F.
Precision and $F_1$ are not meaningful in this anchor-based evaluation because the positive and negative anchors are constructed from different sampling units (intra-CPA byte-identical pairs vs random inter-CPA pairs), so their relative prevalence in the combined set is an arbitrary construction rather than a population parameter; we therefore omit precision and $F_1$ from Table X.
The 70/30 held-out Firm A fold of Section IV-F.2 additionally reports capture rates with Wilson 95% confidence intervals computed within the held-out fold, which is a valid population for rate inference.
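The Wilson 95% interval used for the FAR and capture-rate reports has a closed form; a self-contained sketch (the counts below are invented for illustration):

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (95% at z = 1.96)."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - half, centre + half)

# e.g. 3 false accepts among 10,000 inter-CPA pairs at a candidate threshold
lo, hi = wilson_interval(3, 10_000)
```

Unlike the normal-approximation interval, the Wilson interval remains well-behaved near 0, which matters for the very low FARs observed against the inter-CPA negative anchor.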
## K. Per-Document Classification
The per-signature classifier operates at the signature level with operational thresholds anchored on whole-sample Firm A percentile heuristics: cos $> 0.95$ (Firm A P7.5) for the cosine dimension and dHash$_\text{indep} \leq 5$ / $> 15$ (Firm A median+P75 / style-consistency ceiling) for the structural dimension.
This percentile-based anchor is the natural choice given the continuous-spectrum shape of the per-signature similarity distribution documented in Section IV-D; sensitivity to nearby alternatives is reported in Section IV-F.3.
All dHash references in this section refer to the *independent-minimum* dHash defined in Section III-G---the smallest Hamming distance from a signature to any other same-CPA signature.
We use a single dHash statistic throughout the operational classifier and the supporting capture-rate analyses (Tables IX, XI, XII, XVI), which keeps the classifier definition and its empirical evaluation arithmetically consistent.
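A minimal sketch of the independent-minimum dHash statistic, assuming 64-bit perceptual hashes represented as Python integers (the toy hash values are made up):

```python
def hamming64(a: int, b: int) -> int:
    """Hamming distance between two 64-bit perceptual hashes."""
    return bin(a ^ b).count("1")

def independent_min_dhash(hashes: list[int]) -> list[int]:
    """For each signature, the smallest Hamming distance to any OTHER
    same-CPA signature (the independent-minimum dHash of Section III-G)."""
    out = []
    for i, h in enumerate(hashes):
        out.append(min(hamming64(h, g) for j, g in enumerate(hashes) if j != i))
    return out

# Toy same-CPA set: two near-identical hashes plus one distant outlier.
hashes = [0xFFFF0000FFFF0000, 0xFFFF0000FFFF0003, 0x0F0F0F0F0F0F0F0F]
mins = independent_min_dhash(hashes)
```

Excluding the signature itself (`j != i`) is what makes the minimum "independent": a signature can never match its own hash at distance 0.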
We note three conventions about the thresholds.
First, the cosine cutoff $0.95$ corresponds approximately to the whole-sample Firm A P7.5 of the per-signature best-match cosine distribution: 92.5% of whole-sample Firm A signatures exceed this cutoff and 7.5% fall at or below it (Section III-H). It was chosen as a round-number lower-tail boundary whose complement (92.5% above) has a transparent interpretation in the whole-sample reference distribution. The cosine crossover $0.837$ is the all-pairs intra/inter KDE crossover; both values are derived from whole-sample distributions rather than from the 70% calibration fold, so the classifier inherits its operational cosine cuts from the whole-sample Firm A and all-pairs distributions.
Section IV-F.3 reports a sensitivity check confirming that replacing $0.95$ with the slightly stricter Firm A P5 percentile $0.941$ alters aggregate firm-level capture rates by at most $\approx 1.2$ percentage points, so the round-number heuristic is robust to nearby percentile-based alternatives.
Section IV-F.2 reports both calibration-fold and held-out-fold capture rates for this classifier so that fold-level sampling variance is visible.
Second, the dHash cutoffs $\leq 5$ and $> 15$ are chosen from the whole-sample Firm A $\text{dHash}_\text{indep}$ distribution: $\leq 5$ captures the upper tail of the high-similarity mode (whole-sample Firm A median $\text{dHash}_\text{indep} = 2$, P75 $\approx 4$, so $\leq 5$ is the band immediately above median), while $> 15$ marks the regime in which independent-minimum structural similarity is no longer indicative of image reproduction.
Third, the signature-level threshold-estimator outputs of Section IV-D (KDE antimode, Beta-mixture and logit-Gaussian crossings, BD/McCrary diagnostic) are *not* the operational thresholds of this classifier: they are descriptive characterisation of the per-signature similarity distribution, and Section IV-D shows they do not converge to a clean two-mechanism boundary at the per-signature level---which is why the operational cosine cut is anchored on the whole-sample Firm A percentile rather than on any mixture-fit crossing.
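The percentile anchoring and the P7.5-vs-P5 sensitivity comparison reduce to plain percentile queries on the whole-sample cosine vector; a sketch on a synthetic stand-in for the Firm A distribution (the Beta parameters are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
# Stand-in for the whole-sample Firm A per-signature best-match cosines.
firm_a_cos = rng.beta(40, 1.5, 20_000)

# Lower-tail percentile anchors: P7.5 (operational) and P5 (sensitivity).
p7_5, p5 = np.percentile(firm_a_cos, [7.5, 5.0])

def capture(cos: np.ndarray, cut: float) -> float:
    """Share of signatures passing the cosine arm of the rule (cos > cut)."""
    return float(np.mean(cos > cut))

cap_p7_5 = capture(firm_a_cos, p7_5)   # ~92.5% by construction
cap_p5 = capture(firm_a_cos, p5)       # ~95.0% by construction
shift_pp = 100 * abs(cap_p5 - cap_p7_5)  # sensitivity, percentage points
```

By construction the capture rate at a Pk anchor is about $(100-k)\%$ within the reference sample itself; the paper's reported sensitivity concerns the full dual rule on real data, which this sketch does not reproduce.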
Because each audit report typically carries two certifying-CPA signatures (Section III-D), we aggregate signature-level outcomes to document-level labels using a worst-case rule: the document inherits the *most-replication-consistent* signature label, i.e., under the rank order High-confidence $>$ Moderate-confidence $>$ Style-consistency $>$ Uncertain $>$ Likely-hand-signed, the higher-ranked of the two signature labels determines the document's classification.
This rule is consistent with the detection goal of flagging any potentially non-hand-signed report rather than requiring all signatures on the report to converge.
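The worst-case rule is simply a maximum over a fixed label ranking; a minimal sketch (the label strings are hypothetical identifiers for the paper's five categories):

```python
# Higher rank = more replication-consistent = worse case for the document.
RANK = {
    "likely_hand_signed": 0,
    "uncertain": 1,
    "style_consistency": 2,
    "moderate_confidence": 3,
    "high_confidence": 4,
}

def document_label(signature_labels: list[str]) -> str:
    """A document inherits its most-replication-consistent signature label."""
    return max(signature_labels, key=RANK.__getitem__)

label = document_label(["uncertain", "moderate_confidence"])
```

The rule generalises unchanged to reports carrying more than two signatures.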
## L. Data Source and Firm Anonymization
**Audit-report corpus.** The 90,282 audit-report PDFs analyzed in this study were obtained from the Market Observation Post System (MOPS) operated by the Taiwan Stock Exchange Corporation.
MOPS is the statutory public-disclosure platform for Taiwan-listed companies; every audit report filed on MOPS is already a publicly accessible regulatory document.
**Firm-level anonymization.** Although all audit reports and CPA identities in the corpus are public, we report firm-level results under the pseudonyms Firm A / B / C / D throughout this paper to avoid naming specific accounting firms in descriptive rate comparisons.
Readers with domain familiarity may still infer Firm A from contextual descriptors (Big-4 status, replication-dominated behavior); we disclose this residual identifiability explicitly and note that none of the paper's conclusions depend on the specific firm's name.
## Declarations
**Conflict of interest.** The authors declare no conflict of interest with Firm A, Firm B, Firm C, or Firm D.