Paper A v3.19.0: address Gemini 3.1 Pro round-19 Major Revision findings
Gemini 3.1 Pro round-19 (paper/gemini_review_v3_18_4.md) caught four
serious issues that all 18 prior AI review rounds missed: three
fabricated rationalizations and one genuine statistical flaw. All four
were verified by direct DB / script inspection. Verdict: Major Revision;
this commit closes every flagged item.
Fabricated rationalization corrections (text only, numbers unchanged):
- Section IV-H "656 documents excluded" rewritten. The previous text
  claimed the exclusion was because "single-signature documents have no
  same-CPA pairwise comparison" -- a fabricated explanation that
  contradicts the paper's cross-document matching methodology. The
  actual reason, verified against
  signature_analysis/09_pdf_signature_verdict.py L44
  (`WHERE s.is_valid = 1 AND s.assigned_accountant IS NOT NULL`): the
  656 documents are excluded because none of their detected signatures
  could be matched to a registered CPA name (assigned_accountant IS
  NULL).
- Section IV-F.2 "two CPAs excluded for disambiguation ties" rewritten.
No disambiguation logic exists in script 24; the 178 vs 180 difference
comes from two registered Firm A partners being singletons in the
corpus (one signature each, so per-signature best-match cosine is
undefined and they do not appear in the matched-signature table that
feeds the 70/30 split).
- Appendix B Table XIII provenance corrected. The previous attribution
to 13_deloitte_distribution_analysis.py / accountant_similarity_analysis.json
was wrong: neither artifact has year_month grouping. New script
29_firm_a_yearly_distribution.py reproduces Table XIII exactly from
the database via accountants.firm + signatures.year_month grouping.
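  As a sanity check on the new provenance: the Table XIII reproduction
  reduces to a single firm + year_month aggregation. A minimal sketch
  against an in-memory SQLite mirror follows; the two-table schema, the
  join key, and the sample rows are illustrative assumptions -- only
  `accountants.firm`, `signatures.year_month`, `assigned_accountant`,
  `is_valid`, and the script name come from this commit.

  ```python
  import sqlite3

  # Toy stand-in for the real database read by 29_firm_a_yearly_distribution.py.
  # Schema and rows are assumptions for illustration only.
  con = sqlite3.connect(":memory:")
  con.executescript("""
  CREATE TABLE accountants (name TEXT PRIMARY KEY, firm TEXT);
  CREATE TABLE signatures  (assigned_accountant TEXT, year_month TEXT, is_valid INTEGER);
  INSERT INTO accountants VALUES ('cpa_1', 'Firm A'), ('cpa_2', 'Firm B');
  INSERT INTO signatures  VALUES
    ('cpa_1', '2019-03', 1),
    ('cpa_1', '2019-03', 1),
    ('cpa_1', '2020-03', 1),
    ('cpa_2', '2019-03', 1);
  """)

  # Table XIII analogue: per-year_month signature counts restricted to Firm A.
  rows = con.execute("""
      SELECT s.year_month, COUNT(*) AS n_signatures
      FROM signatures  AS s
      JOIN accountants AS a ON a.name = s.assigned_accountant
      WHERE a.firm = 'Firm A' AND s.is_valid = 1
      GROUP BY s.year_month
      ORDER BY s.year_month
  """).fetchall()
  print(rows)  # [('2019-03', 2), ('2020-03', 1)]
  ```

  Neither 13_deloitte_distribution_analysis.py nor
  accountant_similarity_analysis.json contains such a year_month
  grouping, which is why the old attribution could not have produced
  the table.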
Statistical flaw corrections (numbers updated):
- Inter-CPA negative anchor rewritten in 21_expanded_validation.py. The
prior implementation drew 50,000 random cross-CPA pairs from a
LIMIT-3000 random subsample, reusing each signature ~33 times and
artificially tightening Wilson FAR confidence intervals on Table X.
The corrected implementation samples 50,000 i.i.d. pairs uniformly
across the full 168,755-signature matched corpus.
- Re-run script 21. Table X numbers are close to v3.18.4 but no longer
rest on the inflated-precision artifact:
cos > 0.837: FAR 0.2101 (was 0.2062), CI [0.2066, 0.2137]
cos > 0.900: FAR 0.0250 (was 0.0233), CI [0.0237, 0.0264]
cos > 0.945: FAR 0.0008 (unchanged at this resolution)
cos > 0.950: FAR 0.0005 (was 0.0007), CI [0.0003, 0.0007]
cos > 0.973: FAR 0.0002 (was 0.0003), CI [0.0001, 0.0004]
cos > 0.979: FAR 0.0001 (was 0.0002), CI [0.0001, 0.0003]
- Inter-CPA cosine summary stats also updated:
mean 0.763 (was 0.762)
P95 0.886 (was 0.884)
P99 0.915 (was 0.913)
max 0.992 (was 0.988)
- Manuscript IV-F.1 prose updated to reflect the i.i.d. full-corpus
sampling.
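  For reference, the corrected negative-anchor construction and the
  Wilson intervals quoted in Table X can be sketched as below. The
  rejection-sampling helper and the tiny cpa_of array are illustrative
  assumptions standing in for script 21 and the 168,755-signature
  matched corpus; the false-accept count 10,505 is implied by FAR
  0.2101 on 50,000 pairs.

  ```python
  import math
  import random

  def wilson_ci(successes, n, z=1.959963985):
      """95% Wilson score interval for a binomial proportion (Table X FAR CIs)."""
      p = successes / n
      denom = 1 + z * z / n
      center = (p + z * z / (2 * n)) / denom
      half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
      return center - half, center + half

  def sample_cross_cpa_pairs(cpa_of, n_pairs, rng):
      """Draw i.i.d. signature-index pairs uniformly over the FULL corpus,
      keeping only cross-CPA pairs -- no fixed-size subsample, so no
      signature is systematically reused across the 50,000 pairs."""
      n = len(cpa_of)
      pairs = []
      while len(pairs) < n_pairs:
          i, j = rng.randrange(n), rng.randrange(n)
          if i != j and cpa_of[i] != cpa_of[j]:
              pairs.append((i, j))
      return pairs

  # Wilson CI for the 0.837-threshold row: FAR 0.2101 on 50,000 pairs
  # implies 10,505 false accepts.
  lo, hi = wilson_ci(10505, 50000)  # ~(0.2066, 0.2137)
  ```

  The Wilson width shrinks as 1/sqrt(n) only if the 50,000 pairs are
  genuinely independent; drawing them from a LIMIT-3000 subsample (each
  signature reused ~33 times) correlates the pairs, so the nominal CI
  understates the true sampling variance -- the flaw this commit fixes.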
Rebuild Paper_A_IEEE_Access_Draft_v3.docx.
Note: this is v3.19.0 because v3.19 closes both fabricated
rationalizations and a genuine statistical flaw, not just provenance
polish.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@@ -150,7 +150,7 @@ We report three validation analyses corresponding to the anchors of Section III-
 Of the 182,328 extracted signatures, 310 have a same-CPA nearest match that is byte-identical after crop and normalization (pixel-identical-to-closest = 1); these form the byte-identity positive anchor---a pair-level proof of image reuse that serves as conservative ground truth for non-hand-signed signatures, subject to the source-template edge case discussed in Section V-G.
 Within Firm A specifically, 145 of these byte-identical signatures are distributed across 50 distinct partners (of 180 registered Firm A partners), with 35 of the byte-identical pairs spanning different fiscal years; this Firm A decomposition is reproduced by `signature_analysis/28_byte_identity_decomposition.py` and reported in `reports/byte_identity_decomp/byte_identity_decomposition.json` (Appendix B).
 
-As the gold-negative anchor we sample 50,000 random cross-CPA signature pairs (inter-CPA cosine: mean $= 0.762$, $P_{95} = 0.884$, $P_{99} = 0.913$, max $= 0.988$).
+As the gold-negative anchor we sample 50,000 i.i.d. random cross-CPA signature pairs from the full 168,755-signature matched corpus (inter-CPA cosine: mean $= 0.763$, $P_{95} = 0.886$, $P_{99} = 0.915$, max $= 0.992$).
 Because the positive and negative anchor populations are constructed from different sampling units (byte-identical same-CPA pairs vs random inter-CPA pairs), their relative prevalence in the combined anchor set is arbitrary, and precision / $F_1$ / recall therefore have no meaningful population interpretation.
 We accordingly report FAR with Wilson 95% confidence intervals against the large inter-CPA negative anchor in Table X.
 The primary quantity reported by Table X is FAR: the probability that a random pair of signatures from *different* CPAs exceeds the candidate threshold.
@@ -159,12 +159,12 @@ We do not report an Equal Error Rate: EER is meaningful only when the positive a
 <!-- TABLE X: Cosine Threshold Sweep — FAR Against 50,000 Inter-CPA Negative Pairs
 | Threshold | FAR | FAR 95% Wilson CI |
 |-----------|-----|-------------------|
-| 0.837 (all-pairs KDE crossover) | 0.2062 | [0.2027, 0.2098] |
-| 0.900 | 0.0233 | [0.0221, 0.0247] |
+| 0.837 (all-pairs KDE crossover) | 0.2101 | [0.2066, 0.2137] |
+| 0.900 | 0.0250 | [0.0237, 0.0264] |
 | 0.945 (calibration-fold P5 rounded) | 0.0008 | [0.0006, 0.0011] |
-| 0.950 (whole-sample Firm A P7.5; operational cut) | 0.0007 | [0.0005, 0.0009] |
-| 0.973 (signature-level Beta/KDE upper bound) | 0.0003 | [0.0002, 0.0004] |
-| 0.979 (signature-level Beta-2 forced-fit crossing) | 0.0002 | [0.0001, 0.0004] |
+| 0.950 (whole-sample Firm A P7.5; operational cut) | 0.0005 | [0.0003, 0.0007] |
+| 0.973 (signature-level Beta/KDE upper bound) | 0.0002 | [0.0001, 0.0004] |
+| 0.979 (signature-level Beta-2 forced-fit crossing) | 0.0001 | [0.0001, 0.0003] |
 
 Table note: We do not include FRR against the byte-identical positive anchor as a column here: the byte-identical subset has cosine $\approx 1$ by construction, so FRR against that subset is trivially $0$ at every threshold below $1$ and carries no biometric information beyond verifying that the threshold does not exceed $1$. The conservative-subset FRR role of the byte-identical anchor is instead discussed qualitatively in Section V-F.
 -->
@@ -178,7 +178,7 @@ The very low FAR at the operational cut is therefore informative about specifici
 ### 2) Held-Out Firm A Validation (within-Firm-A sampling variance disclosure)
 
 We split Firm A CPAs randomly 70 / 30 at the CPA level into a calibration fold (124 CPAs, 45,116 signatures) and a held-out fold (54 CPAs, 15,332 signatures).
-The total of 178 Firm A CPAs differs from the 180 in the Firm A registry by two CPAs whose signatures could not be matched to a single assigned-accountant record because of disambiguation ties in the CPA registry and which we therefore exclude from both folds; this handling is made explicit here.
+The total of 178 Firm A CPAs differs from the 180 in the Firm A registry by two registered Firm A partners whose signatures in the corpus are singletons (only one signature each, so the per-signature best-match cosine is undefined and they do not appear in the same-CPA matched-signature table that script `24_validation_recalibration.py` reads); they are therefore not represented in either fold by construction rather than by an explicit exclusion rule.
 Thresholds are re-derived from calibration-fold percentiles only.
 Table XI reports both calibration-fold and held-out-fold capture rates with Wilson 95% CIs and a two-proportion $z$-test.
 
@@ -340,7 +340,7 @@ We note that this test uses the calibrated classifier of Section III-K rather th
 ## H. Classification Results
 
 Table XVII presents the final classification results under the dual-descriptor framework with Firm A-calibrated thresholds for 84,386 documents.
-The document count (84,386) differs from the 85,042 documents with any YOLO detection (Table III) because 656 documents carry only a single detected signature, for which no same-CPA pairwise comparison and therefore no best-match cosine / min dHash statistic is available; those documents are excluded from the classification reported here.
+The document count (84,386) differs from the 85,042 documents with any YOLO detection (Table III) because 656 documents have no signature whose extracted handwriting could be matched to a registered CPA name (every such signature has `assigned_accountant IS NULL` in the database, typically because the auditor's report page deviates from the standard two-signature layout or the OCRed printed CPA name was not present in the registry); the per-document classifier requires at least one CPA-matched signature so that a same-CPA best-match similarity exists, and these documents are therefore excluded from the classification reported here.
 We emphasize that the document-level proportions below reflect the *worst-case aggregation rule* of Section III-K: a report carrying one stamped signature and one hand-signed signature is labeled with the most-replication-consistent of the two signature-level verdicts.
 Document-level rates therefore represent the share of reports in which *at least one* signature is non-hand-signed rather than the share in which *both* are; the intra-report agreement analysis of Section IV-G.3 (Table XVI) reports how frequently the two co-signers share the same signature-level label within each firm, so that readers can judge what fraction of the non-hand-signed document-level share corresponds to fully non-hand-signed reports versus mixed reports.