Phase 6 manuscript splice (1/2): Abstract / §I / §II / §III spliced
Splices v4 drafts into v3.20.0 master sub-files. Drops the "paper/v4/" working drafts and lands the v4.0 content in the master file structure. Internal draft notes / close-out checklists / open-questions blocks stripped at splice (per round-1 through round-6 deferral).

Abstract (paper_a_abstract_v3.md):
- Replaced v3.20.0 abstract (240w) with v4.0 abstract (247w).

§I Introduction (paper_a_introduction_v3.md):
- Replaced v3.20.0 §I with v4.0 §I (16 paragraphs + 8-item contributions list).

§II Related Work (paper_a_related_work_v3.md):
- Inserted v4.0 LOOO addition paragraph after the existing finite-mixture paragraph; added refs [42]-[44] to the internal reference annotation list.

§III Methodology (paper_a_methodology_v3.md):
- §III-A..F (Pipeline / Data / Page ID / Detection / Features / Dual Descriptors): kept v3.20.0 content unchanged.
- §III-G..M: replaced v3.20.0 §III-G..K with v4.0 §III-G..M (Unit & Scope / Reference Populations / Distributional Diagnostics + composition decomposition / K=3 descriptive / Convergent internal-consistency / Anchor-based ICCR L.0-L.7 / Validation strategy + Table XXVII ten-tool collection).
- §III-N Data Source & Anonymization: kept v3.20.0 §III-L content, renumbered to §III-N (after v4 §III-M).
- §III-E ablation cross-reference: updated "§IV-I" -> "§IV-L" to match the renumbered §IV.
- §III-F pixel-identity cross-reference: updated "§III-J" -> "§III-K".

Gemini round-2 artifact paper/gemini_review_v4_round2.md also added (was uncommitted from the parallel-review batch).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@@ -2,6 +2,6 @@
<!-- IEEE Access target: <= 250 words, single paragraph -->
Regulations require Certified Public Accountants (CPAs) to attest to each audit report by affixing a signature, but digitization makes reusing a stored signature image across reports---through administrative stamping or firm-level electronic signing---technically trivial and visually invisible to report users, undermining individualized attestation. We build an end-to-end pipeline that detects such *non-hand-signed* signatures at scale: a Vision-Language Model identifies signature pages, a YOLOv11 detector localizes signature regions, ResNet-50 supplies deep features, and a dual-descriptor verification layer combines deep-feature cosine similarity with perceptual hashing (difference hash, dHash) to separate *style consistency* (high cosine, divergent dHash) from *image reproduction* (high cosine, low dHash). The operational classifier outputs a five-way verdict per signature with a worst-case document-level aggregation; the cosine cut is anchored on a transparent whole-sample Firm A P7.5 percentile (cos $> 0.95$), and the dHash cuts on the same reference. Applied to 90,282 audit reports filed in Taiwan over 2013-2023 (182,328 signatures from 758 CPAs), the operational dual rule cos $> 0.95$ AND $\text{dHash}_\text{indep} \leq 15$ captures 92.46\% of Firm A and yields FAR = 0.0005 against a $\sim$50,000-pair inter-CPA negative anchor; intra-report agreement is 89.9\% at Firm A versus 62-67\% at the other Big-4 firms (a 23-28 percentage-point cross-firm gap). Validation uses three annotation-free anchors (310 byte-identical positives, $\sim$50,000 inter-CPA negatives, and a 70/30 held-out Firm A fold) reported with Wilson 95\% intervals. 
Three statistical diagnostics applied to the per-signature similarity distribution (Hartigan dip test, EM-fitted Beta mixture with logit-Gaussian robustness check, Burgstahler-Dichev / McCrary density-smoothness procedure) jointly characterise the distribution as a continuous quality spectrum, which motivates the percentile-based anchor and is itself a substantive finding for similarity-threshold selection in document forensics.
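The v3 abstract above reports anchor results with Wilson 95% intervals. For reference, a minimal sketch of that interval for a binomial proportion — the function name and defaults are illustrative, not taken from the paper's codebase:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """Wilson score interval for a binomial proportion (z=1.96 -> ~95% CI).

    Unlike the naive normal interval, it stays inside [0, 1] and behaves
    sensibly at p near 0 or 1 (e.g. the 310/310 byte-identical anchor).
    """
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (centre - half, centre + half)
```

For a 50/100 outcome this gives roughly (0.404, 0.596); for a perfect 310/310 anchor the lower bound stays below 1 (about 0.988), which is exactly why the interval is preferred when a proportion saturates.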
Regulations require Certified Public Accountants (CPAs) to attest each audit report with a signature, but digitization makes reusing a stored signature image across reports — through administrative stamping or firm-level electronic signing — technically trivial, undermining individualized attestation. We build an end-to-end pipeline detecting such *non-hand-signed* signatures at scale: a Vision-Language Model identifies signature pages, YOLOv11 localizes signatures, ResNet-50 supplies deep features, and a dual-descriptor layer combines cosine similarity with an independent-minimum perceptual hash (dHash) to separate *style consistency* from *image reproduction*. Applied to 90,282 Taiwan audit reports (2013–2023), the pipeline yields 182,328 signatures from 758 CPAs; primary analyses are scoped to the Big-4 sub-corpus (437 CPAs; 150,442 signatures). Distributional diagnostics show that the apparent multimodality of the descriptor distribution dissolves under joint firm-mean centring and integer-tie jitter ($p$ rises to $0.35$), so no within-population bimodal antimode anchors the operational thresholds. We instead adopt an anchor-based inter-CPA coincidence-rate (ICCR) calibration at three units: per-comparison ($0.0006$ at cos$>0.95$; $0.0013$ at dHash$\leq 5$; $0.00014$ jointly), pool-normalised per-signature ($0.11$ under the deployed any-pair high-confidence rule), and per-document ($0.34$ for the operational HC+MC alarm). Firm heterogeneity is decisive: Firm A's per-document HC+MC alarm rate is $0.62$ versus $0.09$–$0.16$ at Firms B/C/D after pool-size adjustment, and under the deployed any-pair rule $77$–$99\%$ of inter-CPA collisions concentrate within the source firm — consistent with firm-level template-like reuse. We position the system as a specificity-proxy-anchored screening framework with human-in-the-loop review, not as a validated forensic detector; no calibrated error rates are reportable without signature-level ground truth.
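The dual-descriptor idea in both abstract versions — high deep-feature cosine signals style consistency, while an additionally low dHash Hamming distance signals image reproduction — can be sketched as below. This is a minimal illustration, not the paper's implementation: the block-averaged downsampling, helper names, and verdict labels are assumptions, and the default cuts follow the v3 abstract's operational rule (cos $>$ 0.95, dHash $\leq$ 15).

```python
import numpy as np

def dhash_bits(img: np.ndarray, hash_size: int = 8) -> np.ndarray:
    """Difference hash: block-average a grayscale image down to
    (hash_size, hash_size + 1), then compare horizontally adjacent
    cells -> hash_size * hash_size bits (64 for the default)."""
    h, w = img.shape
    rows = np.linspace(0, h, hash_size + 1, dtype=int)
    cols = np.linspace(0, w, hash_size + 2, dtype=int)
    small = np.array([
        [img[rows[i]:rows[i + 1], cols[j]:cols[j + 1]].mean()
         for j in range(hash_size + 1)]
        for i in range(hash_size)
    ])
    return (small[:, 1:] > small[:, :-1]).ravel()

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Hamming distance between two bit vectors (the dHash distance)."""
    return int(np.sum(a != b))

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two deep-feature vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def dual_verdict(cos_sim: float, dhash_dist: int,
                 cos_cut: float = 0.95, dhash_cut: int = 15) -> str:
    """Illustrative dual-descriptor separation (labels are hypothetical):
    high cosine + low dHash -> likely image reproduction;
    high cosine + divergent dHash -> style consistency only."""
    if cos_sim > cos_cut and dhash_dist <= dhash_cut:
        return "image-reproduction"
    if cos_sim > cos_cut:
        return "style-consistency"
    return "no-match"
```

The point of the decomposition is that a reused image file scores high on both descriptors, whereas two genuine hand signings by the same CPA keep the cosine high but let pixel-level differences push the dHash distance past the cut.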
<!-- Target word count: 240 -->
<!-- Word count: 247 -->