
IV. Experiments and Results

A. Experimental Setup

Experiments used mixed hardware: YOLOv11n training and inference for signature detection, and ResNet-50 forward inference for feature extraction over all 182,328 detected signatures, were performed on an Nvidia RTX 4090 (CUDA); the downstream statistical analyses (KDE antimode, Hartigan dip test, Beta-mixture EM with logit-Gaussian robustness check, Burgstahler-Dichev/McCrary density-smoothness diagnostic, and pairwise cosine/dHash computations) were performed on an Apple Silicon workstation with Metal Performance Shaders (MPS) acceleration. Feature extraction used PyTorch 2.9 with torchvision model implementations. The complete pipeline---from raw PDF processing through final classification---was implemented in Python. Because all steps rely on deterministic forward inference over fixed pre-trained weights (no fine-tuning) plus fixed-seed numerical procedures, reported results are platform-independent to within floating-point precision.
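
For concreteness, a minimal sketch of the deterministic embedding step, assuming ImageNet-pretrained torchvision weights with the classification head removed; the weight and transform choices shown here are illustrative assumptions, not the pipeline's verbatim code (the exact crop preprocessing is specified in Section III):

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Load ImageNet-pretrained ResNet-50 and drop the classification head,
# keeping the 2048-dim global-average-pooled descriptor. Eval mode +
# no fine-tuning makes the forward pass deterministic.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(path: str) -> torch.Tensor:
    """Return the L2-normalised 2048-dim descriptor of one signature crop."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return F.normalize(backbone(x), dim=1).squeeze(0)

# Cosine similarity of two L2-normalised descriptors is their dot product:
# cos_sim = float(embed("a.png") @ embed("b.png"))
```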

B. Signature Detection Performance

The YOLOv11n model achieved high detection performance on the validation set (Table II), with all loss components converging by epoch 60 and no significant overfitting despite the relatively small training set (425 images). We note that Table II reports validation-set metrics, as no separate hold-out test set was reserved given the small annotation budget (500 images total). However, the subsequent production deployment provides practical validation: batch inference on 86,071 documents yielded 182,328 extracted signatures (Table III), with an average of 2.14 signatures per document, consistent with the standard practice of two certifying CPAs per audit report. The high VLM--YOLO agreement rate (98.8%) further corroborates detection reliability at scale.

C. All-Pairs Intra-vs-Inter Class Distribution Analysis

Fig. 2 presents the cosine similarity distributions computed over the full set of pairwise comparisons under two groupings: intra-class (all signature pairs belonging to the same CPA) and inter-class (signature pairs from different CPAs). This all-pairs analysis is a different unit from the per-signature best-match statistics used in Sections IV-D onward; we report it first because it supplies the reference point for the KDE crossover used in per-document classification (Section III-K). Table IV summarizes the distributional statistics.

Both distributions are left-skewed and leptokurtic. Shapiro-Wilk and Kolmogorov-Smirnov tests rejected normality for both (p < 0.001), confirming that parametric thresholds based on normality assumptions would be inappropriate. Distribution fitting identified the lognormal distribution as the best parametric fit (lowest AIC) for both classes, though we use this result only descriptively; all subsequent threshold-estimator outputs reported in Section IV-D are derived via the methods of Section III-I to avoid single-family distributional assumptions.

The KDE crossover---where the two density functions intersect---was located at 0.837 (Table V). Under equal prior probabilities and equal misclassification costs, this crossover approximates the Bayes-optimal boundary between the two classes. Statistical tests confirmed significant separation between the two distributions (Cohen's d = 0.669, Mann-Whitney [36] p < 0.001, K-S 2-sample p < 0.001).
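
A minimal sketch of the crossover search, assuming two arrays of intra- and inter-class cosine values; the paper's script may differ in grid and bandwidth choices:

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_crossover(intra, inter, lo=0.5, hi=1.0, n=2001):
    """Locate where the intra- and inter-class KDEs intersect.

    Scans a fixed grid for sign changes of the density difference and
    returns the crossing nearest the midpoint of the two class means
    (a simple tie-break when the densities cross more than once).
    """
    f_intra, f_inter = gaussian_kde(intra), gaussian_kde(inter)
    grid = np.linspace(lo, hi, n)
    diff = f_intra(grid) - f_inter(grid)
    sign_flips = np.where(np.diff(np.sign(diff)) != 0)[0]
    crossings = grid[sign_flips]
    target = (np.mean(intra) + np.mean(inter)) / 2
    return crossings[np.argmin(np.abs(crossings - target))]
```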

We emphasize that pairwise observations are not independent---the same signature participates in multiple pairs---so the nominal pair count greatly overstates the effective sample size and renders $p$-values unreliable as measures of evidence strength. We therefore rely primarily on Cohen's d as an effect-size measure that is less sensitive to sample size. A Cohen's d of 0.669 indicates a medium effect size [29], confirming that the distributional difference is practically meaningful, not merely an artifact of the large sample count.
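
For reference, Cohen's d here is the standardized mean difference; a pooled-standard-deviation form is shown below (the exact pooling convention used by the analysis script is an assumption):

```latex
d = \frac{\bar{x}_{\text{intra}} - \bar{x}_{\text{inter}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^{2} + (n_2 - 1)\,s_2^{2}}{n_1 + n_2 - 2}}
```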

D. Signature-Level Distributional Characterisation

This section applies the threshold-estimator and density-smoothness diagnostic of Section III-I to the per-signature similarity distribution. The joint reading is that per-signature similarity is a continuous quality spectrum rather than a clean two-mechanism mixture, which is why the operational classifier (Section III-K) anchors its cosine cut on the whole-sample Firm A P7.5 percentile rather than on any mixture-fit crossing.

1) Hartigan Dip Test: Unimodality at the Signature Level

Applying the Hartigan & Hartigan dip test [37] to the per-signature best-match distributions reveals a critical structural finding (Table V). The N = 168{,}740 count used in Table V and in the downstream same-CPA per-signature best-match analyses (Tables V and XII, and the Firm-A per-signature rows of Tables XIII and XVIII) is 15 signatures smaller than the 168{,}755 CPA-matched count reported in Table III: these 15 signatures belong to CPAs with exactly one signature in the entire corpus, for whom no same-CPA pairwise best-match statistic can be computed, and are therefore excluded from all same-CPA similarity analyses.

Firm A's per-signature cosine distribution fails to reject unimodality (p = 0.17), a pattern consistent with a dominant high-similarity regime plus a long left tail attributable to within-firm heterogeneity in signing outputs (Section III-G discusses the scope of partner-level claims). The all-CPA cosine distribution, which mixes many firms with heterogeneous signing practices, is multimodal (p < 0.001). The Firm A unimodal-long-tail finding is, in conjunction with the byte-identity, partner-ranking, and intra-report evidence reported below, consistent with the replication-dominated framing (Section III-H): a dominant high-similarity regime plus residual within-firm heterogeneity, rather than two cleanly separated mechanisms.
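
A minimal sketch of the dip-test call, assuming the `diptest` Python package (whether this is the implementation used by the pipeline is an assumption):

```python
import numpy as np
import diptest  # pip install diptest

def dip_pvalue(best_match_cosines: np.ndarray) -> tuple[float, float]:
    """Hartigan & Hartigan dip statistic and p-value for one population.

    p > 0.05 fails to reject unimodality (the Firm A case, p = 0.17);
    p < 0.001 indicates multimodality (the pooled all-CPA case).
    """
    dip, pval = diptest.diptest(best_match_cosines)
    return dip, pval
```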

2) Burgstahler-Dichev / McCrary Density-Smoothness Diagnostic

Applying the BD/McCrary procedure (Section III-I.3) to the per-signature cosine distribution yields a nominally significant Z^- \rightarrow Z^+ transition at cosine 0.985 for both Firm A and the full sample; the min-dHash distributions exhibit a transition at Hamming distance 2 for both Firm A and the full sample under the bin widths used here (0.005 for cosine, 1 for dHash). Two cautions, however, prevent us from treating these signature-level transitions as thresholds. First, the cosine transition at 0.985 lies inside the non-hand-signed mode rather than at the separation between two mechanisms, consistent with the dip-test finding that per-signature cosine is not cleanly bimodal. Second, Appendix A documents that the signature-level transition locations are not bin-width-stable (the Firm A cosine transition drifts across 0.987, 0.985, 0.980, 0.975 as the bin width is widened from 0.003 to 0.015, and the full-sample dHash transition drifts across 2, 10, 9 as the bin width grows from 1 to 3), which is characteristic of a histogram-resolution artifact rather than of a genuine density discontinuity between two mechanisms. We therefore use BD/McCrary as a density-smoothness diagnostic rather than as an independent threshold estimator.
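
A minimal sketch of the standardized-difference scan, following the Burgstahler-Dichev (1997) variance approximation; the binning and edge conventions shown here are illustrative assumptions:

```python
import numpy as np

def bd_standardized_differences(x, lo, hi, width):
    """Burgstahler-Dichev standardized difference per histogram bin.

    For each interior bin i, the expected count is the mean of the two
    adjacent bins; Z_i = (n_i - E[n_i]) / sd_i. A Z^- -> Z^+ sign flip
    marks a candidate density discontinuity. The variance approximation
    follows Burgstahler & Dichev (1997); binning conventions are ours.
    """
    edges = np.arange(lo, hi + width, width)
    n, _ = np.histogram(x, bins=edges)
    N = len(x)
    p = n / N
    z = np.full(len(n), np.nan)
    for i in range(1, len(n) - 1):
        expected = (n[i - 1] + n[i + 1]) / 2
        pn = p[i - 1] + p[i + 1]
        var = N * p[i] * (1 - p[i]) + 0.25 * N * pn * (1 - pn)
        z[i] = (n[i] - expected) / np.sqrt(var)
    # Bin centres where Z flips from negative to positive:
    centres = (edges[:-1] + edges[1:]) / 2
    flips = [centres[i] for i in range(2, len(n) - 1)
             if z[i - 1] < 0 and z[i] > 0]
    return z, flips
```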

3) Beta Mixture at Signature Level: A Forced Fit

Fitting 2- and 3-component Beta mixtures to Firm A's per-signature cosine via EM yields a clear BIC preference for the 3-component fit (\Delta\text{BIC} = 381), with a parallel preference under the logit-GMM robustness check. For the full-sample cosine the 3-component fit is likewise strongly preferred (\Delta\text{BIC} = 10{,}175). Under the forced 2-component fit the Firm A Beta crossing lies at 0.977 and the logit-GMM crossing at 0.999---values sharply inconsistent with each other, indicating that the 2-component parametric structure is not supported by the data. Under the full-sample 2-component forced fit no Beta crossing is identified; the logit-GMM crossing is at 0.980.
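
A minimal sketch of the logit-Gaussian robustness check, assuming scikit-learn's GaussianMixture on logit-transformed cosines; initialisation and convergence settings are assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def logit_gmm_bic(cosines, eps=1e-6, seed=0):
    """Compare 2- vs 3-component Gaussian mixtures on logit(cosine).

    The logit transform maps (0, 1) similarities to the real line so a
    Gaussian mixture is well-posed; a large positive delta_bic favours
    the 3-component fit (cf. the Firm A Delta BIC = 381).
    """
    c = np.clip(np.asarray(cosines), eps, 1 - eps)
    z = np.log(c / (1 - c)).reshape(-1, 1)
    fits = {k: GaussianMixture(n_components=k, random_state=seed).fit(z)
            for k in (2, 3)}
    delta_bic = fits[2].bic(z) - fits[3].bic(z)  # > 0 favours 3 components
    return delta_bic, fits
```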

4) Joint Reading of the Three Diagnostics

The three diagnostics agree that per-signature similarity does not form a clean two-mechanism mixture: (i) the Hartigan dip test fails to reject unimodality for Firm A and rejects it for the heterogeneous-firm pooled sample; (ii) BIC strongly prefers a 3-component over a 2-component Beta fit, so the 2-component crossing is a forced fit and the Beta-vs-logit-Gaussian disagreement (0.977 vs 0.999 for Firm A) reflects parametric-form sensitivity rather than a stable two-mechanism boundary; (iii) the BD/McCrary procedure locates its candidate transition inside the non-hand-signed mode rather than between modes, and the transition is not bin-width-stable.

Table VI summarises the signature-level threshold-estimator outputs for cross-method comparison.

Non-hand-signed replication quality is therefore best read as a continuous spectrum produced by firm-specific reproduction technologies (administrative stamping in early years, firm-level e-signing later) acting on a common stored exemplar. This finding has a direct methodological pay-off: it is why the operational cosine cut is anchored on the whole-sample Firm A P7.5 percentile (Section III-K), and it is why the byte-level pixel-identity anchor (Section IV-F.1) is the natural threshold-free positive reference for downstream validation.

E. Calibration Validation with Firm A

Fig. 3 presents the per-signature cosine and dHash distributions of Firm A compared to the overall population. Table IX reports the proportion of Firm A signatures crossing each candidate threshold; these rates play the role of calibration-validation metrics (what fraction of a known replication-dominated population does each threshold capture?).

Table IX is a whole-sample consistency check rather than an external validation: the cosine cut 0.95 and the operational dHash band edges (\leq 5 high-confidence cap and \leq 15 style-consistency boundary) are themselves anchored to the whole-sample Firm A distribution described in Section III-K (the 70/30 calibration-fold thresholds of Table XI are separate and slightly different, e.g., calibration-fold cosine P5 = 0.9407 rather than the whole-sample heuristic 0.95). The operational dual rule used by the five-way classifier of Section III-K---cosine > 0.95 AND \text{dHash}_\text{indep} \leq 15 (the union of the high-confidence and moderate-confidence non-hand-signed buckets)---captures 92.46% of Firm A; the high-confidence component alone (cosine > 0.95 AND \text{dHash}_\text{indep} \leq 5) captures 81.70%. For continuity with prior calibration-fold reporting (Section IV-F.2 reports the calibration-fold rate at the calibration-fold-P95-adjacent cut \text{dHash}_\text{indep} \leq 8), Table IX also lists the cosine > 0.95 AND \text{dHash}_\text{indep} \leq 8 rate of 89.95%; this is not the operational classifier rule but a cross-reference value. Both operational rates are consistent with the dip-test-confirmed unimodal-long-tail shape of Firm A's per-signature cosine distribution (Section IV-D.1) and the 92.5% / 7.5% signature-level split (Section III-H). Section IV-F.2 reports the corresponding rates on the 30% Firm A hold-out fold, which provides the external check these whole-sample rates cannot.
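
The three quoted rates are simple marginal proportions over Firm A signatures; a minimal pandas sketch, with hypothetical column names `best_cosine` and `dhash_indep`:

```python
import pandas as pd

def capture_rates(df: pd.DataFrame) -> dict:
    """Firm A capture under the three rules quoted in Table IX.

    Assumes one row per Firm A signature with hypothetical columns
    `best_cosine` (same-CPA best-match cosine) and `dhash_indep`
    (independent-min Hamming distance).
    """
    high = (df.best_cosine > 0.95) & (df.dhash_indep <= 5)
    ref = (df.best_cosine > 0.95) & (df.dhash_indep <= 8)
    dual = (df.best_cosine > 0.95) & (df.dhash_indep <= 15)
    return {
        "high_confidence (<=5)": high.mean(),    # 81.70% in Table IX
        "reference (<=8)": ref.mean(),           # 89.95%
        "operational dual (<=15)": dual.mean(),  # 92.46%
    }
```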

F. Pixel-Identity, Inter-CPA, and Held-Out Firm A Validation

We report three validation analyses corresponding to the anchors of Section III-J.

1) Pixel-Identity Positive Anchor with Inter-CPA Negative Anchor

Of the 182,328 extracted signatures, 310 have a same-CPA nearest match that is byte-identical after crop and normalization (pixel-identical-to-closest = 1); these form the byte-identity positive anchor---a pair-level proof of image reuse that serves as conservative ground truth for non-hand-signed signatures, subject to the source-template edge case discussed in Section V-G. Within Firm A specifically, 145 of these byte-identical signatures are distributed across 50 distinct partners (of 180 registered Firm A partners), with 35 of the byte-identical pairs spanning different fiscal years; reproduction artifact for this Firm A decomposition is listed in Appendix B.

As the gold-negative anchor we sample 50,000 i.i.d. random cross-CPA signature pairs from the full 168,755-signature matched corpus (inter-CPA cosine: mean = 0.763, P_{95} = 0.886, P_{99} = 0.915, max = 0.992). Because the positive and negative anchor populations are constructed from different sampling units (byte-identical same-CPA pairs vs random inter-CPA pairs), their relative prevalence in the combined anchor set is arbitrary, and precision / F_1 / recall therefore have no meaningful population interpretation. We accordingly report FAR with Wilson 95% confidence intervals against the large inter-CPA negative anchor in Table X. The primary quantity reported by Table X is FAR: the probability that a random pair of signatures from different CPAs exceeds the candidate threshold.

We do not report an Equal Error Rate: EER is meaningful only when the positive and negative error-rate curves cross in a nontrivial interior region, but byte-identical positives all sit at cosine \approx 1 by construction, so FRR against that subset is trivially 0 at every threshold below 1. An EER calculation against this anchor would be arithmetic tautology rather than biometric performance, and we therefore omit it.

Two caveats apply. First, the byte-identical positive anchor referenced above is a conservative subset of the true non-hand-signed population: it captures only those non-hand-signed signatures whose nearest match happens to be byte-identical, not those that are near-identical but not bytewise identical. A would-be FRR computed against this subset is definitionally 0 at every threshold below 1 (since byte-identical pairs have cosine \approx 1), so such an FRR is a mathematical boundary check rather than an empirical miss-rate estimate; we discuss the generalization limits of this conservative-subset framing in Section V-F. Second, the 0.945 / 0.95 thresholds are derived from the Firm A whole-sample and calibration-fold percentiles rather than from this anchor set, so the FAR values in Table X are post-hoc-fit-free evaluations of thresholds that were not chosen to optimize Table X. The very low FAR at the operational cut is therefore informative about specificity against a realistic inter-CPA negative population.
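
A minimal sketch of the Table X interval computation, assuming statsmodels' Wilson interval:

```python
from statsmodels.stats.proportion import proportion_confint

def far_with_wilson_ci(false_accepts: int, n_pairs: int = 50_000):
    """FAR point estimate with Wilson 95% CI over the inter-CPA anchor.

    E.g., 25 of 50,000 random cross-CPA pairs above the cosine cut gives
    FAR = 0.0005, with a Wilson interval that respects the boundary at 0.
    """
    far = false_accepts / n_pairs
    lo, hi = proportion_confint(false_accepts, n_pairs,
                                alpha=0.05, method="wilson")
    return far, (lo, hi)
```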

2) Held-Out Firm A Validation (within-Firm-A sampling variance disclosure)

We split Firm A CPAs randomly 70 / 30 at the CPA level into a calibration fold (124 CPAs, 45,116 signatures) and a held-out fold (54 CPAs, 15,332 signatures). The total of 178 Firm A CPAs differs from the 180 in the Firm A registry by two registered Firm A partners whose signatures in the corpus are singletons (only one signature each, so the per-signature best-match cosine is undefined and they do not appear in the same-CPA matched-signature table that script 24_validation_recalibration.py reads); they are therefore not represented in either fold by construction rather than by an explicit exclusion rule. Thresholds are re-derived from calibration-fold percentiles only. Table XI reports both calibration-fold and held-out-fold capture rates with Wilson 95% CIs and a two-proportion $z$-test.

We report fold-versus-fold comparisons rather than fold-versus-whole-sample comparisons, because the whole-sample rate is a weighted average of the two folds and therefore cannot, in general, fall inside the Wilson CI of either fold when the folds differ in rate; the correct generalization reference is the calibration fold, which produced the thresholds.

Under this proper test the two extreme rules agree across folds (cosine > 0.837 and \text{dHash}_\text{indep} \leq 15; both p > 0.7). The operationally relevant rules in the 85-95% capture band differ between folds by 1-5 percentage points (p < 0.001 given the n \approx 45\text{k}/15\text{k} fold sizes). Both folds nevertheless sit in the same replication-dominated regime: every calibration-fold rate in the 85-99% range has a held-out counterpart in the 87-99% range, and the calibration-fold-adjacent reference rule cosine > 0.95 AND \text{dHash}_\text{indep} \leq 8 (the integer cut immediately below the calibration-fold dHash P95 of 9) captures 89.40% of the calibration fold and 91.54% of the held-out fold; the operational classifier rule cosine > 0.95 AND \text{dHash}_\text{indep} \leq 15 used by the five-way classifier of Section III-K captures still higher rates in both folds (calibration 92.09%, 41,548 / 45,116; held-out 93.56%, 14,344 / 15,332). The modest fold gap is consistent with within-Firm-A heterogeneity in replication intensity: the random 30% CPA sample evidently contained proportionally more high-replication CPAs. We therefore interpret the held-out fold as confirming the qualitative finding (Firm A is strongly replication-dominated across both folds) while cautioning that exact rates carry fold-level sampling noise that a single 30% split cannot eliminate; the threshold-independent partner-ranking analysis (Section IV-G.2) is the cross-check that is robust to this fold variance.
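
A minimal sketch of the fold-versus-fold comparison, assuming statsmodels' pooled two-proportion z-test:

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

def fold_vs_fold(k_cal, n_cal, k_hold, n_hold):
    """Two-proportion z-test: calibration-fold vs held-out capture rate.

    E.g., the operational dual rule: 41,548/45,116 vs 14,344/15,332.
    """
    stat, pval = proportions_ztest(np.array([k_cal, k_hold]),
                                   np.array([n_cal, n_hold]))
    return stat, pval
```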

3) Operational-Threshold Sensitivity: cos > 0.95 vs cos > 0.945

The per-signature classifier (Section III-K) uses cos > 0.95 as its operational cosine cut, anchored on the whole-sample Firm A P7.5 heuristic (i.e., 7.5% of whole-sample Firm A signatures lie at or below 0.95; see Section III-H). We report a sensitivity check in which this round-number cut is replaced by the slightly stricter calibration-fold P5 rounded value cos > 0.945 (calibration-fold P5 = 0.9407, see Table XI). Table XII reports the five-way classifier output under each cut.

At the aggregate firm-level, the calibration-fold-adjacent reference dual rule cos > 0.95 AND \text{dHash}_\text{indep} \leq 8 captures 89.95% of whole Firm A under the 0.95 cut and 91.14% under the 0.945 cut---a shift of 1.19 percentage points. The operational classifier rule cos > 0.95 AND \text{dHash}_\text{indep} \leq 15 used by the five-way classifier of Section III-K captures 92.46% under the 0.95 cut and 93.97% under the 0.945 cut---a shift of 1.51 percentage points.

Reading the wider grid in Table XII: the High-confidence and Moderate-confidence shares shift by less than 5 percentage points across the 0.940-0.950 neighbourhood, while pushing the cosine cut to 0.970 or 0.985 produces qualitatively different classifier behaviour (Moderate-confidence collapses from 26.02% at 0.95 to 8.81% at 0.97 and 1.32% at 0.985, with the displaced mass landing in Uncertain rather than reclassifying out of the corpus). The classifier output is therefore robust to small (~0.005-cosine) perturbations of the operational cut but not to wholesale reanchoring at the threshold-estimator outputs of Section IV-D, which is consistent with our reading that those outputs are not classifier thresholds. At the per-signature categorization level, replacing 0.95 by 0.945 reclassifies 8,508 signatures (5.04% of the corpus) out of the Uncertain band; 6,095 of them migrate to Moderate-confidence non-hand-signed, 2,294 to High-confidence non-hand-signed, and 119 to High style consistency. The Likely-hand-signed category is unaffected because it depends only on the fixed all-pairs KDE crossover cosine = 0.837. The High-confidence non-hand-signed share grows from 45.62% to 46.98%.

We interpret this sensitivity pattern as indicating that the classifier's aggregate and high-confidence output is robust to the choice of operational cut within a 0.005-cosine neighbourhood of the Firm A P7.5 anchor, and that the movement is concentrated at the Uncertain/Moderate-confidence boundary.

To make the operating-point selection (Section III-K) auditable rather than presented as a single fixed value, Table XII-B reports the capture-vs-FAR tradeoff over the candidate threshold grid spanning the calibration-fold P5 (0.9407), its rounded value (0.945), the operational anchor (0.95), the Firm A Beta-2 forced-fit crossing from Section IV-D.3 (0.977), and the BD/McCrary candidate transition from Section IV-D.2 (0.985). For each grid point we report Firm A capture (under both the cosine-only marginal and the operational dual rule cos > t AND \text{dHash}_\text{indep} \leq 15 used by the five-way classifier of Section III-K), non-Firm-A capture (the cosine-only marginal in the 108,292 non-Firm-A matched signatures), and inter-CPA FAR with Wilson 95% CI against the 50,000-pair anchor of Section IV-F.1.

Reading Table XII-B, three patterns motivate the choice of 0.95 as the operating point. First, Firm A capture on the operational dual rule decays smoothly from 95.09% at t = 0.9407 to 55.26% at t = 0.985. Relaxing the cut from 0.95 to 0.945 buys 1.51 percentage points of additional Firm A capture, and to 0.9407 buys 2.63 percentage points; tightening from 0.95 to 0.977 costs 17.96 percentage points and to 0.985 costs 37.20 percentage points. The selected cut at 0.95 is the strictest cut on this grid at which Firm A capture remains above 90\% on the operational dual rule.

Second, inter-CPA FAR is small in absolute terms across the entire candidate grid (0.00126 at 0.9407, falling to 0.00004 at 0.985): under any of these operating points the classifier's specificity against random cross-CPA pairs is in the per-mille range or better, so FAR alone does not determine the choice. The marginal FAR cost of relaxing from 0.95 to 0.945 is +0.00032 (25 \to 41 false positives per 50,000 pairs) and to 0.9407 is +0.00076 (25 \to 63); the marginal FAR savings from tightening to 0.977 and 0.985 are -0.00036 and -0.00046 respectively. The FAR savings from going stricter are small in absolute terms compared with the corresponding Firm A capture loss, which makes 0.95 a balanced operating point on this grid rather than a uniquely optimal one.

Third, non-Firm-A capture (the cosine-only marginal in the 108,292 non-Firm-A signatures) decays from 67.51% at 0.945 to 60.50% at 0.95, 13.14% at 0.977, and 5.73% at 0.985. The Firm-A-minus-non-Firm-A gap widens with strictness through 0.977 and then contracts (22.41 percentage points at 0.9407; 26.46 at 0.945; 31.97 at 0.95; 61.36 at 0.977; 49.54 at 0.985): on the 0.95 \to 0.977 segment non-Firm-A capture falls faster than Firm A capture in absolute terms (-47.35 vs -17.96 percentage points), so the widening is dominated by non-Firm-A removal rather than by an intrinsic property of Firm A; on the 0.977 \to 0.985 segment Firm A capture falls faster than non-Firm-A's already-low residual, so the gap contracts. We do not read the gap pattern as evidence for a particular cut; it is reported here as cross-firm replication heterogeneity rather than as a selection criterion.

The operating point at 0.95 is therefore a defensible---not unique---selection in this neighbourhood, motivated by (i) keeping Firm A capture above 90\% on the operational dual rule, (ii) achieving an FAR of 0.0005 at which marginal further savings from tightening are small relative to the corresponding capture loss, and (iii) preserving the interpretive transparency of the whole-sample Firm A P7.5 reading. It is not derived from the threshold-estimator outputs of Section IV-D, which the data do not support as classifier thresholds.

The paper therefore retains cos > 0.95 as the primary operational cut and reports the 0.945 result of Table XII as a sensitivity check rather than as a deployed alternative; downstream document-level rates (Table XVII) and intra-report agreement (Table XVI) are robust to moderate cutoff shifts within the 0.945--0.95 neighbourhood as long as the same cutoff is applied uniformly across firms.

G. Additional Firm A Benchmark Validation

Before presenting the three threshold-robust analyses, Fig. 4 summarises the per-firm yearly per-signature best-match cosine distribution that motivates them. The left panel reports the mean per-signature best-match cosine within each firm bucket and fiscal year (a threshold-free statistic); the right panel reports the share of each firm-bucket-year with per-signature best-match cosine \geq 0.95 (the operational cut of Section III-K). Both panels show Firm A above the other Big-4 firms in every year of the 2013-2023 sample, with non-Big-4 firms below all four Big-4 firms throughout, and the cross-firm ordering is stable across the sample period. The mean-cosine separation between Firm A and the other Big-4 firms is on the order of 0.02-0.04 throughout the sample (e.g., 2013: Firm A 0.9733 vs Firm B 0.9498, Firm C 0.9464, Firm D 0.9395, Non-Big-4 0.9227; 2023: 0.9860 vs 0.9668, 0.9662, 0.9525, 0.9346); the share-above-0.95 separation is wider (2013: Firm A 87.2\% vs 61.8\%, 56.2\%, 38.5\%, 27.5\%). This visual is the most direct cross-firm evidence in the paper that Firm A's high-similarity behaviour is firm-specific rather than corpus-wide; the three subsections below decompose this gap along three threshold-free or threshold-robust dimensions.

The capture rates of Section IV-E are an internal consistency check: they ask "how much of Firm A does our threshold capture?", but the threshold was itself derived from Firm A's percentiles, so a high capture rate is not surprising. To go beyond this circular check, we report three further analyses, each chosen so that the informative quantity does not depend on the threshold's absolute value:

- Section IV-G.1 (year-by-year stability). Holds the cosine cutoff fixed at 0.95 and asks whether the share of Firm A below the cutoff is stable across years. The information is in the temporal trend, not in the absolute rate; under a noise-only explanation of the left tail, the share should shrink as scan/PDF technology matured.
- Section IV-G.2 (partner-level similarity ranking). Uses no threshold at all: every auditor-year is ranked by mean similarity, and we measure Firm A's share of the top decile against its baseline share. The information is in the concentration ratio, which is invariant to the choice of cutoff.
- Section IV-G.3 (intra-report agreement). Applies the calibrated classifier and measures whether the two co-signing CPAs on the same Firm A report receive the same classifier label, then compares Firm A's intra-report agreement rate to the other firms'. The information is in the cross-firm gap; the absolute agreement rate at any one firm depends on the cutoff, but the gap is robust to moderate cutoff shifts as long as the same cutoff is applied uniformly across firms.

Together these three analyses provide threshold-free or threshold-robust evidence that complements the within-sample capture rates of Section IV-E.

1) Year-by-Year Stability of the Firm A Left Tail

Table XIII reports the proportion of Firm A signatures with per-signature best-match cosine below 0.95, disaggregated by fiscal year. Under the replication-dominated interpretation (Section III-H), this signature-level left-tail rate reflects within-firm heterogeneity in signing outputs at Firm A. Consistent with the scope-of-claims framing in Section III-G, we report the rate as a signature-level quantity without disaggregating the underlying mechanism (which may span a minority of hand-signing partners, multi-template replication workflows within the firm, or a combination); partner-level mechanism attribution is not attempted. Under the alternative hypothesis that the left tail is an artifact of scan or compression noise, the share should shrink as scanning and PDF-compression technology improved over 2013-2023.

The left tail sits at 6-13% for most of the sample period and shows no pre/post-2020 level shift: the 2013-2019 mean left-tail share is 8.26% and the 2020-2023 mean is 6.96%. The lowest observed share is in 2023 (3.75%), consistent with firm-level electronic signing systems producing more uniform output than earlier manual scanning-and-stamping, not less. This stability supports the replication-dominated framing: a persistent within-firm heterogeneity component is consistent with a Beta left tail that is stable across production technologies, whereas a noise-only explanation would predict a shrinking share as technology improved.

2) Partner-Level Similarity Ranking

If Firm A applies firm-wide stamping while the other Big-4 firms use stamping only for a subset of partners, Firm A auditor-years should disproportionately occupy the top of the similarity distribution among all auditor-years (across all firms). We test this prediction directly.

For each auditor-year (CPA \times fiscal year) with at least 5 signatures we compute the mean best-match cosine similarity across the year's signatures, yielding 4,629 auditor-years across 2013-2023. Firm A accounts for 1,287 of these (27.8% baseline share). Table XIV reports per-firm occupancy of the top K\% of the ranked distribution. The per-signature best-match cosine underlying each auditor-year mean is taken over the full same-CPA pool, consistent with the unit-of-analysis framing in Section III-G.
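
A minimal sketch of the occupancy computation, with hypothetical column names `mean_best_cosine` and `firm`:

```python
import pandas as pd

def top_bracket_shares(df: pd.DataFrame,
                       brackets=(0.10, 0.20, 0.25, 0.30, 0.50)):
    """Firm A's occupancy of the top-K% of auditor-years by mean cosine.

    Assumes one row per auditor-year (>= 5 signatures) with hypothetical
    columns `mean_best_cosine` and `firm` ('A' marking Firm A rows).
    Threshold-free: only the ranking matters, not any cosine cutoff.
    """
    ranked = df.sort_values("mean_best_cosine", ascending=False)
    out = {}
    for k in brackets:
        top = ranked.head(int(round(k * len(ranked))))
        out[f"top_{int(k * 100)}%"] = (top.firm == "A").mean()
    return out  # paper: 95.9%, 94.8%, 90.1%, 81.3%, 52.7%
```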

Firm A occupies 95.9% of the top 10%, 94.8% of the top 20%, 90.1% of the top 25%, and 81.3% of the top 30% of auditor-years by similarity, against its baseline share of 27.8%---a concentration ratio of 3.5\times at the top decile, 3.4\times at the top quintile, and 2.9\times at the top-30% bracket. Firm A's share decays monotonically as the bracket widens (95.9% \to 94.8% \to 90.1% \to 81.3% \to 52.7% across top-10/20/25/30/50%), and only at the top 50% does its share begin to fall toward its baseline; the over-representation is therefore concentrated in the very top of the distribution rather than spread uniformly through the upper half. Year-by-year (Table XV), the top-10% Firm A share ranges from 88.4% (2020) to 100% (2013, 2014, 2017, 2018, 2019), showing that the concentration is stable across the sample period.

This over-representation is consistent with firm-wide non-hand-signing practice at Firm A and is not derived from any threshold we subsequently calibrate. It therefore constitutes genuine cross-firm evidence for Firm A's benchmark status.

3) Intra-Report Consistency

Taiwanese statutory audit reports are co-signed by two engagement partners (a primary and a secondary signer). Under firm-wide stamping practice at a given firm, both signers on the same report should receive the same signature-level classification. Disagreement between the two signers on a report is informative about whether the stamping practice is firm-wide or partner-specific.

For each report with exactly two signatures and complete per-signature data (84,354 reports total: 83,970 single-firm reports, in which both signers are at the same firm, and 384 mixed-firm reports, in which the two signers are at different firms), we classify each signature using the dual-descriptor rules of Section III-K and record whether the two classifications agree. Table XVI reports per-firm intra-report agreement for the 83,970 single-firm reports only (firm-assignment defined by the common firm identity of both signers); the 384 mixed-firm reports (0.46% of the 2-signature corpus) are excluded from the intra-report analysis because firm-level agreement is not well defined when the two signers are at different firms.
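
A minimal sketch of the agreement computation, with hypothetical per-report columns `firm`, `label_primary`, and `label_secondary`:

```python
import pandas as pd

def intra_report_agreement(df: pd.DataFrame) -> pd.Series:
    """Per-firm share of two-signer reports whose labels agree.

    Assumes one row per single-firm two-signature report with
    hypothetical columns `firm`, `label_primary`, `label_secondary`
    (five-way classifier labels from Section III-K).
    """
    agree = df.label_primary == df.label_secondary
    return agree.groupby(df.firm).mean()  # Firm A: 89.9%; others: 62-67%
```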

Firm A achieves 89.9% intra-report agreement, with 87.5% of Firm A reports having both signers classified as non-hand-signed and only 4 reports (0.01%) having both classified as likely hand-signed. The other Big-4 firms (B, C, D) and non-Big-4 firms cluster at 62-67% agreement. This 23-28 percentage-point gap in intra-report agreement is consistent with firm-wide (rather than partner-specific) non-hand-signing practice; we do not claim a sharp discontinuity in the formal sense, since classifier calibration, firm-specific document-production pipelines, and signer-mix differences could each contribute to the gap magnitude.

We note that this test uses the calibrated classifier of Section III-K rather than a threshold-free statistic; the substantive evidence lies in the cross-firm gap between Firm A and the other firms rather than in the absolute agreement rate at any single firm, and that gap is robust to moderate shifts in the absolute cutoff so long as the cutoff is applied uniformly across firms.

H. Classification Results

Table XVII presents the final classification results under the dual-descriptor framework with Firm A-calibrated thresholds for 84,386 documents (656 documents excluded from the 85,042-document YOLO-detection cohort because no signature on the document could be matched to a registered CPA; see Table XVII note). We emphasize that the document-level proportions below reflect the worst-case aggregation rule of Section III-K: a report carrying one stamped signature and one hand-signed signature is labeled with the most-replication-consistent of the two signature-level verdicts. Document-level rates therefore represent the share of reports in which at least one signature is non-hand-signed rather than the share in which both are; the intra-report agreement analysis of Section IV-G.3 (Table XVI) reports how frequently the two co-signers share the same signature-level label within each firm, so that readers can judge what fraction of the non-hand-signed document-level share corresponds to fully non-hand-signed reports versus mixed reports.

Within the 71,656 documents exceeding cosine 0.95, the dHash dimension stratifies them into three distinct populations: 29,529 (41.2%) show converging structural evidence of non-hand-signing (dHash \leq 5); 36,994 (51.7%) show partial structural similarity (dHash in [6, 15]) consistent with replication degraded by scan variations; and 5,133 (7.2%) show no structural corroboration (dHash > 15), suggesting high signing consistency rather than image reproduction. A cosine-only classifier would treat all 71,656 identically; the dual-descriptor framework separates them into populations with fundamentally different interpretations.
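
For reference, a minimal sketch of the standard 64-bit difference-hash construction underlying the \text{dHash}_\text{indep} bands; whether the pipeline's resize parameters match is an assumption, and its crop normalisation (Section III) precedes this step:

```python
from PIL import Image

def dhash(path: str, hash_size: int = 8) -> int:
    """Difference hash: signs of 8x8 horizontal gradients -> 64-bit int."""
    img = Image.open(path).convert("L").resize(
        (hash_size + 1, hash_size), Image.Resampling.LANCZOS)
    px = list(img.getdata())  # row-major, width = hash_size + 1
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = px[row * (hash_size + 1) + col]
            right = px[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (left > right)
    return bits

def hamming(h1: int, h2: int) -> int:
    """Number of bit positions at which two dHashes differ (0-64)."""
    return bin(h1 ^ h2).count("1")
```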

1) Firm A Capture Profile (Consistency Check)

96.9% of Firm A's documents fall into the high- or moderate-confidence non-hand-signed categories, 0.6% into high-style-consistency, and 2.5% into uncertain. This pattern is consistent with the replication-dominated framing: the large majority is captured by non-hand-signed rules, while the small residual is consistent with the within-firm heterogeneity implied by the dip-test-confirmed unimodal-long-tail shape of Firm A's per-signature cosine distribution (Section IV-D.1) and the 7.5% signature-level left tail (Section III-H). The near-zero "likely hand-signed" rate (4 of 30,226 Firm A documents, 0.013%; the 30,226 denominator is documents with at least one Firm A signer under the 84,386-document classification cohort, which differs from the 30,222 single-firm two-signer subset of Table XVI by 4 mixed-firm reports excluded from the firm-level intra-report comparison) indicates that this left-tail heterogeneity does not project into the lowest-cosine document-level category under the dual-descriptor rules; it is absorbed instead into the uncertain or high-style-consistency categories at this threshold set. We note that because the non-hand-signed thresholds are themselves calibrated to Firm A's empirical percentiles (Section III-H), these rates are an internal consistency check rather than an external validation; the held-out Firm A validation of Section IV-F.2 is the corresponding external check.

2) Cross-Firm Comparison of Dual-Descriptor Convergence

Among the 65,514 non-Firm-A signatures with per-signature best-match cosine > 0.95, 42.12% have \text{dHash}_\text{indep} \leq 5, compared to 88.32% of the 55,922 Firm A signatures meeting the same cosine condition---a \sim 2.1\times difference that the structural-verification layer makes visible. The Firm A denominator (55,922) matches Table IX exactly: both Table IX and the cross-firm decomposition define Firm A membership via the CPA registry (accountants.firm), and the cross-firm analysis additionally requires a non-null independent-min dHash record, which all 55,922 Firm A cosine-eligible signatures have in the current database. This cross-firm gap is consistent with firm-wide non-hand-signing practice at Firm A versus partner-specific or per-engagement replication at other firms; it complements the partner-level ranking (Section IV-G.2) and intra-report consistency (Section IV-G.3) findings. Reproduction artifact for these counts is listed in Appendix B.

I. Ablation Study: Feature Backbone Comparison

To validate the choice of ResNet-50 as the feature extraction backbone, we conducted an ablation study comparing three pre-trained architectures: ResNet-50 (2048-dim), VGG-16 (4096-dim), and EfficientNet-B0 (1280-dim). All models used ImageNet pre-trained weights without fine-tuning, with identical preprocessing and L2 normalization. Table XVIII presents the comparison.

EfficientNet-B0 achieves the highest Cohen's d (0.707), indicating the greatest statistical separation between intra-class and inter-class distributions. However, it also exhibits the widest distributional spread (intra std = 0.123 vs. ResNet-50's 0.098), resulting in lower per-sample classification confidence. VGG-16 performs worst on all key metrics despite having the highest feature dimensionality (4096), suggesting that additional dimensions do not contribute discriminative information for this task.

ResNet-50 provides the best overall balance: (1) Cohen's d of 0.669 is competitive with EfficientNet-B0's 0.707; (2) its tighter distributions yield more reliable individual classifications; (3) the highest Firm A all-pairs 1st percentile (0.543) indicates that known-replication signatures are least likely to produce low-similarity outlier pairs under this backbone; and (4) its 2048-dimensional features offer a practical compromise between discriminative capacity and computational/storage efficiency for processing 182K+ signatures.