Paper A v3.7: demote BD/McCrary to density-smoothness diagnostic; add Appendix A

Implements codex gpt-5.4 recommendation (paper/codex_bd_mccrary_opinion.md,
"option (c) hybrid"): demote BD/McCrary in the main text from a co-equal
threshold estimator to a density-smoothness diagnostic, and add a
bin-width sensitivity appendix as an audit trail.

Why: the bin-width sweep (Script 25) confirms that at the signature
level the BD transition behaves like a histogram-resolution artifact:
the Firm A cosine transition drifts monotonically (0.987 -> 0.985 ->
0.980 -> 0.975 as bin width widens 0.003 -> 0.015); the full-sample
dHash transition jumps erratically from 2 to 10 to 9 across bin widths
1 / 2 / 3; and Z statistics inflate superlinearly with bin width. At
the accountant level the BD null is robust across the sweep. The
paper's earlier "three methodologically distinct estimators" framing
therefore could not be defended to an IEEE Access reviewer once the
sweep was run.
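For context, the statistic behind the sweep compares each histogram
bin's count with the mean of its two neighbours, standardized by a
Burgstahler-Dichev-style variance term. A minimal sketch, not Script 25
itself (the variance form and the smooth toy sample are illustrative):

```python
import numpy as np

def bd_z_scores(values, bin_width, lo=0.0, hi=1.0):
    """BD-style standardized differences over a histogram of `values`.

    Each interior bin's count is compared with the mean of its two
    neighbours; the variance follows the Burgstahler-Dichev form.
    """
    edges = np.arange(lo, hi + bin_width, bin_width)
    counts, _ = np.histogram(values, bins=edges)
    n = counts.sum()
    p = counts / n
    z = np.full(len(counts), np.nan)
    for i in range(1, len(counts) - 1):
        expected = (counts[i - 1] + counts[i + 1]) / 2.0
        var = (n * p[i] * (1 - p[i])
               + 0.25 * n * (p[i - 1] + p[i + 1]) * (1 - p[i - 1] - p[i + 1]))
        if var > 0:
            z[i] = (counts[i] - expected) / np.sqrt(var)
    return edges, z

# On a smooth density, the deterministic part of z scales roughly with
# bin width ** 2.5, which is one way Z can inflate as bins widen even
# when no true discontinuity exists.
rng = np.random.default_rng(0)
smooth = rng.beta(8, 2, size=20_000)  # smooth, unimodal: no real transition
for w in (0.003, 0.006, 0.010, 0.015):
    _, z = bd_z_scores(smooth, w)
    print(f"bin width {w}: max |Z| = {np.nanmax(np.abs(z)):.2f}")
```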

Added
- signature_analysis/25_bd_mccrary_sensitivity.py: bin-width sweep
  across 6 variants (Firm A / full-sample / accountant-level, each
  cosine + dHash_indep) and 3-4 bin widths per variant. Reports
  Z_below, Z_above, p-values, and number of significant transitions
  per cell. Writes reports/bd_sensitivity/bd_sensitivity.{json,md}.
- paper/paper_a_appendix_v3.md: new "Appendix A. BD/McCrary Bin-Width
  Sensitivity" with Table A.I (all 20 sensitivity cells) and
  interpretation linking the empirical pattern to the main-text
  framing decision.
- export_v3.py: appendix inserted into SECTIONS between conclusion
  and references.
- paper/codex_bd_mccrary_opinion.md: codex gpt-5.4 recommendation
  captured verbatim for audit trail.
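In outline, a sweep like Script 25's reduces to a small loop over
variant x bin-width cells. The sketch below is hypothetical (simplified
Poisson-scale variance, illustrative names, cosine-style [0, 1] support;
the dHash variants would use integer bin edges instead):

```python
import json
import numpy as np

def transition_cells(variants, bin_widths=(0.003, 0.006, 0.010, 0.015),
                     z_crit=1.96):
    """One result dict per (variant, bin width) cell of the sweep."""
    rows = []
    for name, values in variants.items():
        for w in bin_widths:
            counts, edges = np.histogram(values, bins=np.arange(0.0, 1.0 + w, w))
            # BD-style score: observed count vs. mean of the two
            # neighbours, on a rough Poisson scale (simplified variance).
            inner = counts[1:-1]
            expected = (counts[:-2] + counts[2:]) / 2.0
            z = (inner - expected) / np.sqrt(np.maximum(expected, 1.0))
            sig = np.flatnonzero(np.abs(z) > z_crit) + 1  # full-bin indices
            rows.append({
                "variant": name,
                "bin_width": w,
                "n_significant": int(sig.size),
                "first_transition": float(edges[sig[0]]) if sig.size else None,
            })
    return rows

rows = transition_cells({"demo": np.random.default_rng(1).beta(8, 2, 20_000)})
print(json.dumps(rows[:2], indent=2))  # same rows would feed the .json/.md reports
```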

Main-text reframing
- Abstract: "three methodologically distinct estimators" ->
  "two estimators plus a Burgstahler-Dichev/McCrary density-
  smoothness diagnostic". Trimmed to 243 words.
- Introduction: related-work summary, pipeline step 5, accountant-
  level convergence sentence, contribution 4, and section-outline
  line all updated. Contribution 4 renamed to "Convergent threshold
  framework with a smoothness diagnostic".
- Methodology III-I: section renamed to "Convergent Threshold
  Determination with a Density-Smoothness Diagnostic". "Method 2:
  BD/McCrary Discontinuity" converted to "Density-Smoothness
  Diagnostic" in a new subsection; Method 3 (Beta mixture) renumbered
  to Method 2. Subsections 4 and 5 updated to refer to "two threshold
  estimators" with BD as diagnostic.
- Methodology III-A pipeline overview: "three methodologically
  distinct statistical methods" -> "two methodologically distinct
  threshold estimators complemented by a density-smoothness
  diagnostic".
- Methodology III-L: "three-method analysis" -> "accountant-level
  threshold analysis (KDE antimode, Beta-2 crossing, logit-Gaussian
  robustness crossing)".
- Results IV-D.1 heading: "BD/McCrary Discontinuity" ->
  "BD/McCrary Density-Smoothness Diagnostic". Prose now notes the
  Appendix-A bin-width instability explicitly.
- Results IV-E: Table VIII restructured to label BD rows
  "(diagnostic only; bin-unstable)" and "(diagnostic; null across
  Appendix A)". Summary sentence rewritten to frame BD null as
  evidence for clustered-but-smoothly-mixed rather than as a
  convergence failure. Table cosine P5 row corrected from 0.941 to
  0.9407 to match III-K.
- Results IV-G.3 and IV-I.2: "three-method convergence/thresholds"
  -> "accountant-level convergent thresholds" (clarifies the 3
  converging estimates are KDE antimode, Beta-2, logit-Gaussian,
  not KDE/BD/Beta).
- Discussion V-B: "three-method framework" -> "convergent threshold
  framework".
- Conclusion: "three methodologically distinct methods" -> "two
  threshold estimators and a density-smoothness diagnostic";
  contribution 3 restated; future-work sentence updated.
- Impact Statement (archived): "three methodologically distinct
  threshold-selection methods" -> "two methodologically distinct
  threshold estimators plus a density-smoothness diagnostic" so the
  archived text is internally consistent if reused.
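For reference, the KDE-antimode estimator that stays in the main text
as a threshold method can be sketched as follows; the bimodal toy
sample and the search window are illustrative, not the paper's data:

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_antimode(values, lo=0.90, hi=1.00, grid=2001):
    """Threshold = deepest local minimum of the KDE on [lo, hi]."""
    kde = gaussian_kde(values)
    xs = np.linspace(lo, hi, grid)
    dens = kde(xs)
    # interior local minima: strictly lower than both neighbours
    is_min = (dens[1:-1] < dens[:-2]) & (dens[1:-1] < dens[2:])
    idx = np.flatnonzero(is_min) + 1
    if idx.size == 0:
        return None  # unimodal on this window: no antimode
    return float(xs[idx[np.argmin(dens[idx])]])

# Toy bimodal sample: a "middle-band" mode and a "high-replication" mode.
rng = np.random.default_rng(2)
toy = np.concatenate([rng.normal(0.93, 0.010, 5000),
                      rng.normal(0.99, 0.004, 5000)])
print(round(kde_antimode(toy), 3))  # antimode lands between the two modes
```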

Discussion V-B / V-G already framed BD as a diagnostic in v3.5
(unchanged in this commit). The reframing therefore brings Abstract /
Introduction / Methodology / Results / Conclusion into alignment with
the Discussion framing that codex had already endorsed.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
commit 552b6b80d4 (parent 6946baa096)
2026-04-21 14:32:50 +08:00
11 changed files with 458 additions and 63 deletions
@@ -32,7 +32,7 @@ Despite the significance of the problem for audit quality and regulatory oversig
Woodruff et al. [9] developed an automated pipeline for signature analysis in corporate filings for anti-money-laundering investigations, but their work focused on author clustering (grouping signatures by signer identity) rather than detecting reuse of a stored image.
Copy-move forgery detection methods [10], [11] address duplicated regions within or across images but are designed for natural images and do not account for the specific characteristics of scanned document signatures, where legitimate visual similarity between a signer's authentic signatures is expected and must be distinguished from image reproduction.
Research on near-duplicate image detection using perceptual hashing combined with deep learning [12], [13] provides relevant methodological foundations but has not been applied to document forensics or signature analysis.
-From the statistical side, the methods we adopt for threshold determination---the Hartigan dip test [37], Burgstahler-Dichev / McCrary discontinuity testing [38], [39], and finite mixture modelling via the EM algorithm [40], [41]---have been developed in statistics and accounting-econometrics but have not, to our knowledge, been combined as a three-method convergent framework for document-forensics threshold selection.
+From the statistical side, the methods we adopt for threshold determination---the Hartigan dip test [37] and finite mixture modelling via the EM algorithm [40], [41], complemented by a Burgstahler-Dichev / McCrary density-smoothness diagnostic [38], [39]---have been developed in statistics and accounting-econometrics but have not, to our knowledge, been combined as a convergent threshold framework for document-forensics threshold selection.
In this paper, we present a fully automated, end-to-end pipeline for detecting non-hand-signed CPA signatures in audit reports at scale.
Our approach processes raw PDF documents through the following stages:
@@ -40,7 +40,7 @@ Our approach processes raw PDF documents through the following stages:
(2) signature region detection using a trained YOLOv11 object detector;
(3) deep feature extraction via a pre-trained ResNet-50 convolutional neural network;
(4) dual-descriptor similarity computation combining cosine similarity on deep embeddings with difference hash (dHash) distance;
-(5) threshold determination using three methodologically distinct methods---KDE antimode with a Hartigan unimodality test, Burgstahler-Dichev / McCrary discontinuity, and finite Beta mixture via EM with a logit-Gaussian robustness check---applied at both the signature level and the accountant level; and
+(5) threshold determination using two methodologically distinct estimators---KDE antimode with a Hartigan unimodality test and finite Beta mixture via EM with a logit-Gaussian robustness check---complemented by a Burgstahler-Dichev / McCrary density-smoothness diagnostic, all applied at both the signature level and the accountant level; and
(6) validation against a pixel-identical anchor, a low-similarity anchor, and a replication-dominated Big-4 calibration firm.
The dual-descriptor verification is central to our contribution.
@@ -55,9 +55,9 @@ This framing is important because the statistical signature of a replication-dom
Adopting the replication-dominated framing---rather than a near-universal framing that would have to absorb these residuals as noise---ensures internal coherence among the visual-inspection evidence, the signature-level statistics, and the accountant-level mixture.
A third distinctive feature is our unit-of-analysis treatment.
-Our three-method framework reveals an informative asymmetry between the signature level and the accountant level: per-signature similarity forms a continuous quality spectrum for which no two-mechanism mixture provides a good fit, whereas per-accountant aggregates are clustered into three recognizable groups (BIC-best $K = 3$).
+Our threshold-framework analysis reveals an informative asymmetry between the signature level and the accountant level: per-signature similarity forms a continuous quality spectrum for which no two-mechanism mixture provides a good fit, whereas per-accountant aggregates are clustered into three recognizable groups (BIC-best $K = 3$).
The substantive reading is that *pixel-level output quality* is a continuous spectrum shaped by firm-specific reproduction technologies and scan conditions, while *accountant-level aggregate behaviour* is clustered but not sharply discrete---a given CPA tends to cluster into a dominant regime (high-replication, middle-band, or hand-signed-tendency), though the boundaries between regimes are smooth rather than discontinuous.
-Among the three accountant-level methods, KDE antimode and the two mixture-based estimators converge within $\sim 0.006$ on a cosine threshold of approximately $0.975$, while the Burgstahler-Dichev / McCrary discontinuity test finds no significant transition at the accountant level---an outcome consistent with smoothly mixed clusters rather than a failure of the method.
+At the accountant level, the KDE antimode and the two mixture-based estimators (Beta-2 crossing and its logit-Gaussian robustness counterpart) converge within $\sim 0.006$ on a cosine threshold of approximately $0.975$, while the Burgstahler-Dichev / McCrary density-smoothness diagnostic finds no significant transition---an outcome (robust across a bin-width sweep, Appendix A) consistent with smoothly mixed clusters.
The two-dimensional GMM marginal crossings (cosine $= 0.945$, dHash $= 8.10$) are reported as a complementary cross-check rather than as the primary accountant-level threshold.
We apply this pipeline to 90,282 audit reports filed by publicly listed companies in Taiwan between 2013 and 2023, extracting and analyzing 182,328 individual CPA signatures from 758 unique accountants.
@@ -71,7 +71,7 @@ The contributions of this paper are summarized as follows:
3. **Dual-descriptor verification.** We demonstrate that combining deep-feature cosine similarity with perceptual hashing resolves the fundamental ambiguity between style consistency and image reproduction, and we validate the backbone choice through an ablation study comparing three feature-extraction architectures.
-4. **Three-method convergent threshold framework.** We introduce a threshold-selection framework that applies three methodologically distinct methods---KDE antimode with Hartigan unimodality test, Burgstahler-Dichev / McCrary discontinuity, and EM-fitted Beta mixture with a logit-Gaussian robustness check---at both the signature and accountant levels, using the convergence (or principled divergence) across methods as diagnostic evidence about the mixture structure of the data.
+4. **Convergent threshold framework with a smoothness diagnostic.** We introduce a threshold-selection framework that applies two methodologically distinct estimators---KDE antimode with Hartigan unimodality test and EM-fitted Beta mixture with a logit-Gaussian robustness check---at both the signature and accountant levels, and uses a Burgstahler-Dichev / McCrary density-smoothness diagnostic to characterize the local density structure. The convergence of the two estimators, combined with the presence or absence of a BD/McCrary transition, is used as evidence about the mixture structure of the data.
5. **Continuous-quality / clustered-accountant finding.** We empirically establish that per-signature similarity is a continuous quality spectrum poorly approximated by any two-mechanism mixture, whereas per-accountant aggregates cluster into three recognizable groups with smoothly mixed rather than sharply discrete boundaries---an asymmetry with direct implications for how threshold selection and mixture modelling should be applied in document forensics.
@@ -82,6 +82,6 @@ The contributions of this paper are summarized as follows:
The remainder of this paper is organized as follows.
Section II reviews related work on signature verification, document forensics, perceptual hashing, and the statistical methods we adopt for threshold determination.
Section III describes the proposed methodology.
-Section IV presents experimental results including the three-method convergent threshold analysis, accountant-level mixture, pixel-identity validation, and backbone ablation study.
+Section IV presents experimental results including the convergent threshold analysis, accountant-level mixture, pixel-identity validation, and backbone ablation study.
Section V discusses the implications and limitations of our findings.
Section VI concludes with directions for future work.