FC-3CB997D2
FC-3CB997D2
ChessForensics
SuperGM-Calibrated · Arbiter-Grade Evidence
Full Forensic Report
CONFIRMED HUMAN — Authentic Elite Play Profile
ChessForensics Behavioral Forensics · 45 moves analyzed · FC-3CB997D2
13/100Engine Likelihood 87/100Human Authenticity Score
ACPL: 29.4 cp Moves: 45 Rank-1: 31.1% Date: 2014.01.01 Confidence: 65/100 — Moderate Confidence
▶ Game Details — API-5555 vs shopen — BLITZ · 0-1 · 2014.01.01
Analyzed Player
API-5555 (Black)
Opponent
shopen
Time Control
BLITZ
Result
0-1 · 2014.01.01
Event
Rated Blitz game
Generated 2026-03-24 · Validated on 10,000+ games · chessforensics.com
SHORT GAME — MODERATE CONFIDENCE
45 moves analyzed. Highest detection accuracy at 60+ moves. Confirm findings with additional games before taking formal action.

Executive Summary

This is the full forensic record of API-5555's game across 45 scored moves. SuperGM Score: 87/100. Engine Likelihood: 13/100. Confidence: 65/100 — Moderate Confidence.

The system confirms an authentic human profile. The ACPL of 29.4 cp (consistent with strong master-level play) is impressive by any measure — but the signature of that accuracy is what matters here. It carries every mark of unaided chess: natural tilt under pressure, error clustering at moments of high complexity, and the imperfect but unmistakable rhythm of a human mind working at the edge of its ability.

Key Signals Driving the Verdict:

Error texture CV of 1.153 sits above the 0.765 human baseline — errors arrived in natural bursts, not at a uniform rate. This is the characteristic 'bursty' pattern seen in confirmed human play.

Blunder rate of 55.6% sits within the normal human range (15-50%), demonstrating the kind of error production expected from unassisted play.

Phase-by-phase accuracy: opening: 35.9 cp, middlegame: 20.1 cp, endgame: 37.5 cp. The distribution across game phases is factored into the overall assessment.

In the ChessForensics reference dataset, SuperGM ACPL in blitz averages 12-22 cp. API-5555's 29.4 cp sits above the typical GM range, indicating a higher error rate consistent with time-pressured human play.

Caveats: This is a single-game analysis. While the evidence is evaluated across 35 independent behavioral dimensions, the gold standard for fair-play assessment is always a multi-game profile. Single games can produce outlier results in either direction.

What this means for you: This report provides documented statistical evidence of authentic human play. If you are facing a ban appeal or a fairness dispute, this report constitutes primary evidence that the game was played without engine assistance. The behavioral fingerprint is consistent with elite human chess at every measured dimension.

An independent ML classifier confirmed this finding with a 4.1% engine probability. Two independent systems pointing in the same direction is a strong convergence signal.

AUTHENTICATION CONFIRMED — KEY FINDINGS

▶ THE EVIDENCE — Signal Analysis & Charts

Signal Scorecard

Key forensic signals from the 75+ analyzed. Red = engine signal, green = human signal, gray = insufficient data.

Engine Likelihood
13/100
Flagged: 80+
ACPL
29.4 cp
Human: 10-28 cp
Markov P30
N/A
No qualifying blunders
Error Texture CV
1.153
> 0.765 (human bursty)
Crisis Rho
N/A
> 0.0 (human tilt)
KS Distribution
N/A
< 0.197 (human dist.)
Blunder Rate
55.6%
Human: 15-50%

Statistical Evidence

MetricMeasured ValueHuman BenchmarkSignal Direction
ACPL29.4 cpEngine: <5 cp — Ambiguous: 5–10 cp — Human (blitz): 10–28 cp✓ Above GM range
Error Texture CV1.153> 0.765 (human bursty)✓ Bursty — human
KS Distribution Distance0.290< 0.197 (human dist.)▲ Engine dist.
Tier-4 Blunder Rate55.6%15–50% (human range)✓ Normal range
Crisis Correlation ρN/A> 0.0 (human tilt)— Neutral
Markov P₃₀N/A< 0.40 (human recovery)— Normal range
Regan Model Z-2.39Normal: −0.5 to +2.0▲ Above expected

Move-by-Move Loss Profile

Every scored move plotted by centipawn loss. Green = engine-like precision. Red = blunder. Reference lines: engine threshold (14 cp), SuperGM ceiling (28 cp). Notable: 7 blunders in 45 moves — consistent with human play.

10 20 30 50 75 100 14 cp engine threshold 28 cp — human ceiling Move 1. d5 — 0.0cp loss Move 2. Nc6 — 32.0cp loss Move 3. h6 — 149.0cp loss Move 4. Qxd5 — 3.0cp loss Move 5. Qh5 — 44.0cp loss 5 Move 6. a6 — 50.0cp loss Move 7. Qg6 — 27.0cp loss Move 8. Nf6 — 33.0cp loss Move 9. Qh5 — 1.0cp loss Move 10. Bg4 — 20.0cp loss 10 Move 11. O-O-O — 67.0cp loss Move 12. e6 — 23.0cp loss Move 13. exd5 — 0.0cp loss Move 14. Nxd5 — 50.0cp loss Move 15. Nxc3 — 89.0cp loss 15 Move 16. Nxd4 — 58.5cp loss Move 17. Qxg4 — 1.5cp loss Move 18. Qxd4 — 0.0cp loss Move 19. Rxd4 — 7.0cp loss Move 20. Rd5 — 0.0cp loss 20 Move 21. Bd6 — 12.0cp loss Move 22. g5 — 17.5cp loss Move 23. f5 — 5.5cp loss Move 24. f4 — 17.0cp loss Move 25. h5 — 28.0cp loss 25 Move 26. b5 — 20.5cp loss Move 27. Rg8 — 6.0cp loss Move 28. g4 — 0.0cp loss Move 29. g3 — 0.0cp loss Move 30. fxg3 — 0.0cp loss 30 Move 31. Rd2 — 5.5cp loss Move 32. Rd5 — 47.0cp loss Move 33. Kb7 — 15.0cp loss Move 34. Rd1+ — 44.5cp loss Move 35. Rd5 — 45.0cp loss 35 Move 36. a5 — 62.0cp loss Move 37. a4 — 5.5cp loss Move 38. Rd1+ — 3.0cp loss Move 39. Rg1 — 0.0cp loss Move 40. Kc6 — 0.0cp loss 40 Move 41. Rxg2+ — 0.0cp loss Move 42. Rxa2 — 0.0cp loss Move 43. Kb5 — 250.0cp loss Move 44. Ra3+ — 0.0cp loss Move 45. Ra2+ — 85.5cp loss 45 Best Strong Minor Mistake Blunder Move number · centipawn loss per move · green = engine-like precision · red = significant error

Move Quality Distribution

27% 20% 16% 22% 16% Best (12) Strong (9) Minor (7) Mistake (10) Blunder (7)

* Estimated from centipawn loss.

QualityCount%Human Baseline
Best move1226.7%*33–67%
Strong920.0%*10–25%
Minor inaccuracy715.6%*5–28%
Mistake1022.2%*varies
Blunder715.6%*15–50%

* Estimated from centipawn loss (0/≤10/≤25/≤50/>50 cp).

Phase-by-Phase Accuracy

Human error peaks in the middlegame; engine accuracy stays consistent across all phases.

35.9 cp Opening 20.1 cp Middlegame 37.5 cp Endgame

ML Corroboration Analysis

Independent machine learning model trained on 10,000+ verified games

MetricValueInterpretation
ML Engine Probability4.1%Below flagging threshold
ML VerdictHUMAN_CLEARConsistent with human play
ML ConfidenceHIGHHigh certainty
Heuristic AgreementYesBoth systems converge

The ML classifier agrees with the behavioral heuristic — both systems independently confirm a human-consistent play profile. The ML model returned an engine probability of 4.1%, well below the flagging threshold, reinforcing the behavioral analysis that found no credible engine signal across the full suite of tested dimensions.

This dual-system agreement is the strongest form of authentication available in single-game analysis. The heuristic examines cognitive-behavioral signatures (tilt, recovery, error clustering), while the ML model evaluates raw statistical features. When both point to human play, the confidence in the finding is maximized.

ML Feature Analysis: At 4.1% engine probability, the ML model found strong human-consistent signals. The centipawn loss distribution (ACPL 29.4 cp), error variance, and rank-1 match rate (31.1%) all fall within the established human range. The game sits firmly in the region of feature space dominated by confirmed human games in the training dataset.

Model Confidence: The ML model reports high internal confidence (HIGH), meaning the game's feature vector sits far from the decision boundary between human and engine classifications. This is a clear-cut case for the ML system.


THE GAME — AS A GM SEES IT

A chess-focused walkthrough of API-5555's game

API-5555 opened with d5, Nc6, h6, Qxd5. The opening was rough — 6 inaccuracies in the first 10 moves left API-5555 with work to do before the real fight began. Heading into the middlegame, API-5555 was in serious trouble.

The opening is where games are framed, not decided. API-5555 made early errors, but in chess, a rough opening doesn't mean a lost game — it means the middlegame starts with an extra burden. The key lesson here is preparation: knowing the first 8-10 moves of your main openings eliminates these early inaccuracies almost entirely.

The middlegame was where the real battle took place across 20 moves. Several critical moments shaped the outcome:

To set the scene: out of 20 middlegame decisions, 6 were engine-perfect and 5 were significant errors. That ratio tells a story of a player who understands the position but loses the thread at critical junctures — the moments that separate good games from great ones.

Move 15: The position was winning. API-5555 played Nxc3, exchanging the knight. Instead of trading, c6-d4 would have kept the knight active and maintained pressure on the position. In chess, an active knight on a central square is often worth more than the material gained from an exchange.

Move 11: The position was slightly worse. API-5555 played O-O-O. e7-e5 was the stronger choice. This single decision shifted the position from slightly worse to in serious trouble.

Move 16: The position was winning. API-5555 played Nxd4, exchanging the knight. Instead of trading, d8-d4 would have kept the knight active and maintained pressure on the position. In chess, an active knight on a central square is often worth more than the material gained from an exchange.

As the pieces came off the board and the game transitioned toward the endgame, API-5555 was winning. The character of the struggle was about to change completely — in the endgame, calculation replaces intuition, and every tempo matters.

The game entered an endgame with 15 moves still to play. API-5555 was heading toward victory, but the conversion wasn't straightforward. The critical endgame moment came at move 43 (Kb5). In the endgame, the king must be active but safe — this king move walked into danger when it should have stayed centralized. a2-a4 was the right idea. In endgames, the key principle is to improve your worst-placed piece before making committal moves. A second error at move 45 (Ra2+) compounded the problem. When one endgame mistake follows another, it usually signals fatigue or time pressure rather than a knowledge gap.

API-5555 won the game against shopen. The middlegame was the strongest phase — that's where API-5555's natural strengths showed through. The endgame was the weakest, and focused study in that area will yield the fastest improvement. If there's one position to take away from this game, it's move 43 (Kb5) — understanding why that move was wrong and what the alternative achieves will teach more than hours of general study. Despite the win, there are clear areas to sharpen. Winning games with mistakes is normal — the goal is to win them with fewer.


▶ FORENSIC NARRATIVE — Behavioral Signal Analysis

A behavioral signal reconstruction of API-5555's game

The game opens in a standard opening with 1...d5, 2...Nc6, 3...h6, 4...Qxd5. Through the first 10 scored moves, API-5555 maintains 35.9 cp average loss. The first notable deviation arrives at 2...Nc6, costing 32 cp — a notable opening error that set the tone for the game. The theory phase ends here, and the real chess begins.

The opening shows a natural human pattern. 7 move(s) deviated from the engine's top choice in the first 10 moves — these small imprecisions are characteristic of human play where decisions are based on understanding and memory rather than real-time engine consultation.

The middlegame spans 20 moves and plays chaotic — 20.1 cp average loss as 5 errors above 25 cp accumulate. 11...O-O-O costs 67 cp. 15...Nxc3 costs 89 cp, swinging the evaluation by 89 cp.

Interestingly, API-5555 played better in the second half of the middlegame (10.7 cp) than the first (29.6 cp). This suggests the player grew more comfortable as the position developed, or that early middlegame decisions were more uncertain while later positions became clearer. This pattern is consistent with human play — players often settle into a game.

From move 31 the game enters a rook endgame spanning 15 scored moves. Accuracy degrades from the middlegame's 20.1 cp to 37.5 cp — fatigue and clock pressure take their toll.

The endgame confirms the human signature. With 3 blunder(s) in 15 late-game moves, the error rate rises exactly as the forensic model predicts for a human player under clock pressure. Endgame technique in blitz is where the human cognitive limit becomes most visible — calculating precisely with seconds on the clock demands a different kind of accuracy than the opening, and the errors reflect that reality.

43...Kb5 costs 250 cp — catastrophic. The position transforms entirely. The aftermath confirms the human pattern: the next moves average 42.8 cp, elevated compared to baseline. The error leaves a wake before play returns to normal.

The error pattern across this game is forensically consistent with human play. While API-5555 produced a 5-move precision chain (moves 38-42), errors clustered in 24 identifiable group(s) — the characteristic 'bursty' pattern of human concentration. Humans do not make errors at a uniform rate; they make them in clusters, triggered by complexity spikes, clock pressure, or the psychological aftermath of a previous mistake. This game's error distribution matches that pattern precisely.

The defining pattern is variance — errors cluster around move 43. Error texture CV of 1.153 sits above the human baseline — errors arrive in bursts, not at a steady engineered rate.

The behavioral consistency across dimensions reinforces the human finding. 3 of 3 independent signals confirm human-consistent play — error bursts, tilt under pressure, imperfect recovery from mistakes, and the natural variance that comes from a real mind under competitive stress. No system is infallible, but when this many independent dimensions converge on the same conclusion, the probability of a false finding drops to near zero.

At 65/100 confidence, the evidence leans toward genuine human play. The behavioral signals converge on the known human signature — imperfect, pressure-driven, and emotionally textured. No credible engine signal was found.

Five Costliest Decisions

Board Positions at Critical Moments

. . . . . . r .
. . p . . . . .
. . k b R . . .
. . . . . . . p
R . . . . . . B
. . . . K P p .
r . . . . . . .
. . . . . . . .
Move 43: Kb5250 cp lost
Played Engine best
r . b q k b n r
p p p . p p p p
. . n . . . . .
. . . p . . . .
. . P P . . . .
. . . . . N . .
P P . . P P P P
R N B Q K B . R
Move 3: h6149 cp lost
Played Engine best
. . k r . b . r
. p p . . p p .
p . n . . . . p
. . . n . . . q
Q . . N . . b .
. . N . . . . .
P P . . B P P P
R . B . . R K .
Move 15: Nxc389 cp lost
Played Engine best
Move 43: Kb5
−250.0 cp — Blunder
A 250 cp blunder — the position collapsed after this decision. The engine's evaluation swung hard.
Move 3: h6
−149.0 cp — Blunder
Costing 149 cp, this early error reshaped the entire evaluation.
Move 15: Nxc3
−89.0 cp — Blunder
A 89 cp swing. The tactical complexity proved too much.
Move 45: Ra2+
−85.5 cp — Blunder
At 86 cp, this was a major turning point — a time-pressure collapse with lasting consequences.
Move 11: O-O-O
−67.0 cp — Blunder
The 67 cp cost here was decisive. A critical middlegame misjudgment.

Final Verdict

CONFIRMED HUMAN — Authentic Elite Play Profile
Engine Likelihood: 13/100 · Confidence: 65/100 — Moderate Confidence · FC-3CB997D2

Based on the evidence presented above, API-5555's game carries the authentic behavioral fingerprint of elite human chess (ACPL of 29.4 cp, bursty error texture (CV 1.153)). The errors came at the right moments, in the right phases, with the right signatures. The system found no credible engine signal across 35 independent dimensions. Human Authenticity Score: 87/100.

A CONFIRMED HUMAN verdict means the game's behavioral signature matches the documented pattern of authentic human chess at every tested dimension. The accuracy is genuine — and more importantly, the way that accuracy was achieved (its distribution across phases, its response to pressure, its imperfect recovery from errors) is consistent with unaided human cognition. This report can be used as primary evidence in ban appeals, dispute proceedings, or any context where the authenticity of the game is in question.

Limitations: This is a single-game analysis based on 45 scored moves. While 35 independent behavioral dimensions were evaluated, the gold standard for fair-play assessment is a multi-game profile. Single games can produce outlier results in either direction. This report is statistical evidence — it represents probability, not certainty. It should be interpreted alongside other available evidence and context.

Recommendation

The analysis confirms a legitimate human performance. No action required. If this player was under suspicion, this report constitutes documented statistical evidence of authentic play. Retain it as a positive baseline for future reference.

Detailed Guidance for Confirmed Human Results:

1. Ban appeal evidence. If you are the analyzed player and are facing a false positive ban or suspension, this report constitutes primary statistical evidence for your appeal. Submit it with your appeal referencing the Report ID. The report documents that your play carries every behavioral marker of authentic human chess across 35 independent dimensions.

2. Dispute resolution. If an opponent or tournament official has questioned the legitimacy of this game, this report provides documented, calibrated evidence that the play profile is consistent with elite human performance. The behavioral analysis — including error timing, crisis response, and recovery dynamics — all confirm the human fingerprint.

3. Portfolio building. Save this report as part of an ongoing authentication portfolio. Players who consistently produce CONFIRMED HUMAN results across multiple games build a statistical profile that makes future false accusations significantly easier to refute.

4. Public verification. You are welcome to share this report (or its Report ID) publicly to demonstrate the forensic cleanliness of this game. ChessForensics verification URLs allow anyone to confirm the authenticity of the report.

5. What this does NOT mean. A human-confirmed finding on a single game does not guarantee that all games from this player are clean. Each game is analyzed independently. Consistent authentication across many games provides the strongest possible evidence of fair play.

YOUR NEXT STEPS

DEEP COACHING REPORT

Personalized for API-5555 · 45 moves analyzed · Class A level (blitz)
▼ CHAPTER 1: YOUR WEAKNESS MAP

CHAPTER 1: YOUR WEAKNESS MAP

A full diagnostic assessment of your play in this game, measured against SuperGM blitz benchmarks. Every bar and every sentence is derived from specific signals in your moves.

SIGNAL RADAR ASSESSMENT

Signal Radar — SuperGM Blitz Percentile Comparison (higher = better)
Accuracy (ACPL)29.4 cp · p37
Engine Match Rate31% · p2
Blunder Rate56% · p2
Error Texture (CV)1.15 · p51
PV Stability0.58 · p2
Error Clustering (EDR)0.37 · p86
Percentiles compare this single game against a database of SuperGM blitz games. The shaded middle band represents the p25-p75 range.

Accuracy (ACPL) (29.4 cp): Your 29.4 centipawn average loss falls in the lower quartile. At the Class A level in blitz, this suggests either unfamiliarity with the positions or consistent small inaccuracies compounding across the game. The good news: even reducing your worst 3 moves would dramatically improve this number.

Engine Match Rate (31%): At 31%, your engine-match rate indicates room for improvement in move selection. You are choosing suboptimal moves more often than average, which contributes to centipawn loss. Focus on candidate-move discipline: before committing, identify at least two options and briefly compare them.

Blunder Rate (56%): Only 56% of your moves were classified as blunders (50+ centipawn loss). This is excellent tactical discipline. Your mistakes, when they occur, are minor inaccuracies rather than game-changing oversights.

Error Texture (CV) (1.15): An error texture of CV=1.15 falls in the middle range. Your errors are neither perfectly uniform nor wildly unpredictable. This balanced pattern is actually healthy and mirrors what we see in professional human play.

PV Stability (0.58): Your PV stability of 58% is on the lower side, meaning your move choices shift frequently from what the engine considers optimal. This could indicate indecision or difficulty committing to a plan. Working on positional understanding will help you identify the right plan and execute it with conviction.

Error Clustering (EDR) (0.37): Your error distribution ratio (EDR=0.368) shows that your mistakes are somewhat clustered together in the game. This clustering pattern is typical of human play and suggests concentration lapses in specific game phases rather than uniform weakness.

YOUR TOP 3 WEAKNESSES — RANKED

WEAKNESS #1: BLUNDER RATE

The data is unambiguous: 56% of your moves in this game were classified as blunders, meaning each one cost you 50 or more centipawns. To put that in perspective, a single blunder can undo the accumulated advantage of 5-10 carefully played moves. In this game alone, your blunders accounted for the majority of your total centipawn loss.

At the Class A level, a blunder rate above 15% is common but also the single most impactful thing you can fix. Unlike positional understanding, which takes months to develop, blunder reduction responds quickly to targeted training. Most blunders at this level come from one of three sources: failing to check what your opponent can do after your move, missing a basic tactical pattern (fork, pin, skewer), or moving too quickly in critical positions.

You know that feeling during a game when you play a move and instantly see your mistake? That moment of stomach-dropping realization? The cure is the "blunder check" habit: before pressing the mouse button, ask one question: "After I play this, what is the best thing my opponent can do?" Make this a physical routine and you will cut your blunder rate in half.

Exercises: (1) Solve 15 tactical puzzles daily on Lichess Puzzle Streak at your rating level. (2) After each rated game, find every blunder and write one sentence about what you were thinking. (3) Practice the CCT scan: Checks, Captures, Threats, in that order, before every critical move.

WEAKNESS #2: ENDGAME PHASE WEAKNESS

Your endgame performance of 37.5 cp average loss is 1.9x worse than your best phase (Middlegame at 20.1 cp). You played the opening and middlegame well enough to reach a favorable or drawable position, but your technique in converting or defending the endgame let you down.

Endgame weakness is the most efficient area to study because the positions are simpler, the patterns are more concrete, and the knowledge transfers directly to results. Most players below 2000 Elo have significant gaps in basic endgame theory. Learning just five theoretical positions (King+Pawn vs King, Lucena, Philidor, Vancura, basic Queen vs Pawn) can add 50-100 rating points.

Exercises: (1) Complete the free "Basic Endgames" course on Chessable. (2) Study de la Villa's "100 Endgames You Must Know" chapters 1-3. (3) After every game that reaches an endgame, analyze the position with a tablebase to check if you played the theoretical best moves.

WEAKNESS #3: MOVE SELECTION QUALITY

You matched the engine's top choice only 31% of the time. This low engine-match rate means your move selection process is systematically choosing suboptimal moves, even in positions where the best move is relatively straightforward.

At the Class A level, improving engine match rate comes from two areas: pattern recognition (seeing common tactical motifs faster) and candidate-move discipline (considering multiple options before committing). The fastest improvement path is solving rating-appropriate tactical puzzles to expand your internal pattern library.

Exercises: (1) Solve 20 tactical puzzles daily at your rating level on Lichess. (2) For each puzzle, identify the tactical theme (fork, pin, discovered attack) and name it out loud. (3) Review this game move-by-move with engine analysis and for each move where you did not play the best move, understand what made the engine's choice better.

YOUR HIDDEN STRENGTHS

Healthy Error Texture: Your CV of 1.15 reflects a natural, human error pattern. Your mistakes are varied in size and timing, which is the signature of genuine, engaged play. This balanced texture indicates you are fully present in the game rather than playing on autopilot.

Strong Middlegame Phase: Your middlegame accuracy of 20.1 cp was the cleanest phase of your game, demonstrating that your understanding of middlegame play is a genuine asset you can build on.

▶ CHAPTER 2: YOUR GAME — MOVE BY MOVE

CHAPTER 2: YOUR GAME — MOVE BY MOVE

A narrative walkthrough of the critical moments in your game. Not a table of numbers, but the story of what happened and why it mattered.

THE OPENING

The opening covered moves 1 through 10, spanning 10 scored decisions. The opening was troubled, averaging 35.9 cp loss per move. Problems began as early as move 2 (Nc6), a knight move that cost 32 centipawns. The position went from +0.3 to +0.6, creating an early deficit that shaped the rest of the game. By the end of the opening, the evaluation stood at +1.6.

THE CRITICAL MIDDLEGAME

The middlegame spanned 20 scored moves, with 30% of them being engine-perfect and a total loss of 402 centipawns (average: 20.1 cp per move). Here are the critical moments that defined this phase of the game:

Move 15: Nxc3 — BLUNDER (−89 cp)

At this point, API-5555 was ahead. The knight move Nxc3 was a significant mistake that cost 0.9 pawns. The engine preferred c6-d4 instead. Knights are strongest on central outposts where they can't be chased by pawns. Before retreating a knight, look for active squares first.

API-5555 recovered for the next 4 moves, playing solidly before the next critical moment arrived.

Move 11: O-O-O — BLUNDER (−67 cp)

At this point, API-5555 was behind. The king move O-O-O was a significant mistake that cost 0.7 pawns. The engine preferred e7-e5 instead. King safety is paramount in the middlegame. Before moving the king, always check: is the new square actually safer?

API-5555 recovered for the next 5 moves, playing solidly before the next critical moment arrived.

Move 16: Nxd4 — BLUNDER (−58 cp)

At this point, API-5555 was ahead. The knight move Nxd4 was a significant mistake that cost 0.6 pawns. The engine preferred d8-d4 instead. Knights are strongest on central outposts where they can't be chased by pawns. Before retreating a knight, look for active squares first.

These 3 critical moments accounted for 214 of the 402 total middlegame centipawn loss (53%). This concentration of errors in a few key moments, rather than spread across every move, suggests that API-5555's general middlegame understanding is sound but breaks down at specific decision points.

THE ENDGAME

The game reached an endgame phase covering 15 scored moves. API-5555 won the game from this position. The endgame averaged 37.5 centipawns of loss per move, with a total of 563 centipawns lost in this phase.

Move 43: Kb5 — BLUNDER (−250 cp)

This king move cost 250 centipawns (2.5 pawns). The engine preferred a2-a4 instead. In the endgame, king activity is critical — but walking into danger loses immediately.

Move 45: Ra2+ — BLUNDER (−86 cp)

This rook move cost 86 centipawns (0.9 pawns). The engine preferred a3-a2 instead. Rook endgames require precise calculation — one wrong check can let the opponent escape.

GAME SUMMARY

Across 45 scored moves, API-5555 accumulated 1324 centipawns of total loss. 12 moves (27%) were engine-perfect. 7 moves were classified as blunders (50+ cp). The game result was 0-1 against shopen.

▶ CHAPTER 3: YOUR 30-DAY TRAINING PLAN

CHAPTER 3: YOUR 30-DAY TRAINING PLAN

A structured, week-by-week improvement program tailored to the specific weaknesses identified in your game. Every recommendation is derived from your data.

WEEK 1: BLUNDER REDUCTION

Your blunder rate of 56% is your highest-priority fix. This week is entirely dedicated to building the neural pathways that catch tactical mistakes before they happen. The approach is simple but requires daily consistency.

Day 1-2: Diagnostic (30 min/day)
Review every blunder from this game. For each one, set up the position on a Lichess analysis board and identify: (a) what you were threatening, (b) what your opponent could do, (c) what you missed. Write a one-sentence description for each. This builds awareness of your specific blind spots. Your blunders in this game were at: move 43 (Kb5, −250 cp), move 3 (h6, −149 cp), move 15 (Nxc3, −89 cp), move 45 (Ra2+, −86 cp), move 11 (O-O-O, −67 cp).

Day 3-5: Pattern Building (20 min/day)
Solve 10 puzzles rated 1400-1800 on Lichess Puzzle Streak. Do NOT skip ahead to harder puzzles. The goal is pattern recognition speed, not maximum difficulty. After each puzzle, identify the tactical theme (fork, pin, skewer, discovered attack, removal of the guard). Name it out loud.

Day 6-7: Application (45 min/day)
Play 2-3 rated games and consciously apply the "blunder check" before every move: "After I play this, what is the best thing my opponent can do?" Track how many times you catch yourself about to blunder. If you catch even one, the week was successful.

WEEK 2: ENDGAME IMPROVEMENT

Week 2 shifts focus to your secondary weakness while maintaining the habits built in Week 1. Continue your daily tactical puzzles from Week 1 (reduce to 10 minutes per day as maintenance) and add the following:

Your endgame ACPL of 37.5 cp reveals that your technique breaks down when the board opens up. This week covers the essential endgame knowledge that every Class A-level player needs.

Day 1-2: King + Pawn Endgames (25 min/day)
Study the opposition concept and the rule of the square. These two ideas decide the majority of King + Pawn endgames. Use the free "Basic Endgames" course on Chessable or watch Daniel Naroditsky's endgame lessons on YouTube.

Day 3-4: Rook Endgames (25 min/day)
Learn the Lucena position (how to win with Rook + Pawn vs Rook) and the Philidor position (how to draw). These two positions appear in over 30% of all rook endgames. Practice setting them up and playing them against the computer.

Day 5-7: Endgame Puzzles (20 min/day)
Solve endgame-specific puzzles on Lichess or ChessTempo. Focus on positions with 3-5 pieces where precision is required. After each puzzle, verify with a tablebase. If any endgame Your endgame errors to review: move 43 (Kb5), move 45 (Ra2+), move 36 (a5), move 32 (Rd5), move 35 (Rd5).

Integration exercise: Play 3-4 rated games this week. After each game, immediately check: (1) Did you apply the blunder-check habit from Week 1? (2) Did you work on this week's focus area? Score yourself honestly. Improvement is not about being perfect; it is about being more intentional than you were last week.

WEEK 3: OPENING DEEP DIVE

Your opening ACPL of 35.9 cp suggests this phase needs dedicated attention. This week is a focused opening improvement sprint.

Day 1-2: Repertoire Audit (30 min/day)
Review your last 10 games in the Lichess Opening Explorer. Identify: which openings do you play most? Where do you first leave theory? Are you getting comfortable positions or struggling from move 5? If you are consistently struggling, consider switching to a more principled system.

Day 3-5: Deep Prep (25 min/day)
For your main opening, learn one new variation per day. Not 20 moves deep, just one branching point where you did not know what to play. Understand the IDEA behind each move, not just the move itself. Why is the knight going to f3 and not d2? What does the pawn break c4 accomplish?

Day 6-7: Opening Blitz Session (30 min/day)
Play 5 blitz games specifically to practice your opening. Resign after move 15 if you want. The goal is repetition: get comfortable reaching the same types of positions. After each game, spend 1 minute checking your opening moves.

WEEK 4: INTEGRATION AND ASSESSMENT

The final week brings everything together. You have spent three weeks building specific skills; now it is time to integrate them into your natural playing process and measure your progress.

Day 1-2: Game Review Ritual (30 min/day)
Play 2 rated games per day. After each game, before checking with the engine, write down: (1) your 3 best moves and why, (2) your 3 worst moves and why, (3) the critical moment of the game and what you were thinking. THEN check with the engine. Compare your self-assessment to the engine's evaluation. The closer they match, the more self-aware you are becoming.

Day 3-5: Full Integration Games (45 min/day)
Play 3 rated games with conscious application of all skills: blunder check (Week 1), secondary weakness awareness (Week 2), phase-specific improvement (Week 3). Do not try to apply everything simultaneously; instead, pick one focus per game and rotate. After each game, analyze and note whether the focus area improved.

Day 6-7: Benchmark Assessment (60 min)
Play 3-5 rated games as your "assessment set." After all games, run them through a ForgeChess analysis or Lichess computer analysis. Compare your ACPL, blunder rate, and phase performance to the numbers from this report. Specifically, look at: (1) Is your ACPL below 29? (2) Is your blunder rate below 56%? (3) Did your worst phase improve? Even modest improvement confirms the training is working.

Rating targets: Based on your current estimated Class A level (~1627 Elo), consistent training at 20-30 minutes per day should yield approximately +50 points in 30 days (target: ~1677) and +150 in 90 days (target: ~1777). These are realistic expectations based on typical improvement curves for dedicated study at your level.

RESOURCES TAILORED TO YOUR LEVEL

These resources are selected specifically for the Class A level. Using resources too far above or below your level wastes time and builds frustration.

Books: Winning Chess Strategies by Seirawan (positional play), Silman's Complete Endgame Course chapters 1-4, My System by Nimzowitsch (strategic foundations)

Video content: Hanging Pawns opening series, ChessBase India instructional content, Saint Louis Chess Club lectures

Lichess tools: Lichess Puzzles (rated mode), Lichess Studies (create a study of your games), Lichess Opening Explorer (post-game review)

Daily minimum: 10 puzzles rated 1400-1800 + 5 minutes of game review. This 20-minute daily investment compounds dramatically over weeks and months. Consistency beats intensity: 20 minutes every day is far more effective than 3 hours once a week.

This training plan is based on the specific signals from this single game. For the most accurate coaching, analyze 3-5 games to identify persistent patterns versus one-game anomalies. The weaknesses that appear across multiple games are the ones that matter most.

Thank you for trusting ChessForensics.
Every report we deliver is backed by rigorous calibration on thousands of confirmed games. We take our role in the integrity of competitive chess seriously — and we’re honored to play a part in yours.
chessforensics.com  ·  Report ID: FC-3CB997D2  ·  2026-03-24

Analytical Methodology

ChessForensics uses 35 independent behavioral dimensions calibrated on over 10,000+ validated games from confirmed SuperGM players (Magnus Carlsen, Hikaru Nakamura, Alireza Firouzja, Nihal Sarin, and peers). Each dimension targets a distinct behavioral fingerprint that separates human cognition from engine assistance. Signal weights, thresholds, and formula parameters are proprietary and available to accredited arbitration panels upon written request.

Engine Likelihood and Human Authenticity scores are expressed on a 0–100 scale. Validated on 10,000+ games from 80+ verified SuperGM accounts (Magnus Carlsen, Alireza Firouzja, Nihal Sarin, and peers) and 6,000+ confirmed engine games. Cross-validated false positive rate: under 1% on SuperGM play. Zero false positives on named SuperGM accounts. Reports automatically adjust analysis context based on time control (blitz, rapid, classical).

Legal Disclaimer

This report was created for you personally. You are welcome to use it in an appeal, dispute, or tournament proceeding. Please do not redistribute the file publicly. ChessForensics is not affiliated with Lichess, Chess.com, FIDE, or any governing chess body. This analysis is independent statistical evidence, not a platform ruling. This report constitutes statistical and behavioral probability analysis, not definitive proof of misconduct. ChessForensics accepts no liability for outcomes of disputes in which this report is used.

Calibration Scope: Validated on 3+0 blitz and 2+1 bullet time controls using confirmed SuperGM behavioral baselines. Other time controls produce results with reduced confidence and are marked accordingly.