This is the full forensic record of Hans Moke Niemann's game across 57 scored moves. ACPL: 8.3 cp (solid GM-level accuracy for classical play). Engine Likelihood: 38/100. Confidence: 60/100 — Moderate Confidence.
The behavioral fingerprint is consistent with authentic human play — no strong engine indicator was returned across the full signal suite. The sections below document each tested signal and its finding.
Note: An isolated ML classifier returned an elevated 100.0% engine probability based on raw precision metrics. However, the full behavioral analysis — including tilt patterns, error clustering, crisis response, and recovery dynamics — conclusively overrides this signal. Raw accuracy alone cannot distinguish elite human preparation from engine assistance; the behavioral dimensions can, and they confirm human play.
Six independent signals evaluated. Red = engine signal, green = human signal, gray = insufficient data.
| Metric | Measured Value | Human Benchmark | Signal Direction |
|---|---|---|---|
| ACPL | 8.3 cp | Engine: <5 cp — Ambiguous: 5–10 cp — Human (blitz): 10–28 cp | ▲ Below human floor |
| Error Texture CV | 1.950 | > 0.765 (human bursty) | ✓ Bursty — human |
| KS Distribution Distance | N/A | < 0.197 (human dist.) | — N/A |
| Tier-4 Blunder Rate | 8.8% | 15–50% (human range) | ▲ Below floor |
| Crisis Correlation ρ | N/A | > 0.0 (human tilt) | — Neutral |
| Markov P₃₀ | N/A | < 0.40 (human recovery) | — Normal range |
| Regan Model Z | -1.53 | Normal: −0.5 to +2.0 | ▲ Above expected |
Every scored move plotted by centipawn loss. ■ Green = engine-like precision. ■ Red = blunder. Reference lines: engine threshold (14 cp), SuperGM ceiling (28 cp). Notable: 4 blunders in 57 moves — consistent with human play.
* Estimated from centipawn loss.
| Quality | Count | % | Human Baseline |
|---|---|---|---|
| Best move | 35 | 61.4%* | 33–67% |
| Strong | 14 | 24.6%* | 10–25% |
| Minor inaccuracy | 3 | 5.3%* | 5–28% |
| Mistake | 1 | 1.8%* | varies |
| Blunder | 4 | 7.0%* | 15–50% |
* Estimated from centipawn loss (0/≤10/≤25/≤50/>50 cp).
Human error peaks in the middlegame; engine accuracy stays consistent across all phases.
| Metric | Value |
|---|---|
| ML Engine Probability | 100.0% |
| ML Verdict | ENGINE_STRONG |
| ML Confidence | HIGH |
The ML model diverges from the behavioral analysis. When the two systems disagree, the confidence score is reduced accordingly. The ML model is trained on 10,000+ games including verified SuperGM play.
A move-by-move behavioral reconstruction of Hans Moke Niemann's game
The game opens in a standard opening with 1...Nf6, 2...e6, 3...Bb4, 4...O-O. Through the first 10 scored moves, Hans Moke Niemann maintains engine-level precision. The theory phase ends here, and the real chess begins.
The middlegame spans 20 moves and plays a grinding battle — 7.3 cp average loss across 20 moves. 21...Bxc4 costs 28 cp. 29...Nc4 costs 72 cp.
From move 31 the game enters a rook-and-minor-piece endgame spanning 27 scored moves. Average loss holds at 11.6 cp, virtually unchanged from the middlegame — flat precision across phases.
31...fxg4 costs 111 cp — catastrophic. The position transforms entirely. The aftermath confirms the human pattern: the next moves average 29.7 cp, elevated compared to baseline. The error leaves a wake before play returns to normal.
The defining pattern is variance — accuracy drops from 1.4 cp in the opening to 11.6 cp in the endgame, errors cluster around move 31. Error texture CV of 1.950 sits above the human baseline — errors arrive in bursts, not at a steady engineered rate.
At 60/100 confidence, the evidence is consistent with genuine human play. The behavioral signals converge on the known human signature — imperfect, pressure-driven, and emotionally textured. No credible engine signal was found.
Based on the evidence presented above, the analysis of Hans Moke Niemann's game did not produce a strong signal in either direction. Additional games are recommended before drawing any conclusion.
The analysis was inconclusive. Recommended action: Submit additional games for analysis. A minimum of 3 games from the same player — each analyzed independently — provides the statistical foundation for a reliable conclusion.
A full diagnostic assessment of your play in this game, measured against SuperGM blitz benchmarks. Every bar and every sentence is derived from specific signals in your moves.
Accuracy (ACPL) (8.3 cp): Your average centipawn loss of 8.3 places you in the upper echelon of accuracy for this game. At the SuperGM level, this indicates strong move selection throughout and suggests deep familiarity with the position types that arose.
Engine Match Rate (74%): You matched the engine's top choice 74% of the time, which is exceptionally high. This means your intuitive move selection aligns with computer analysis on more than two-thirds of decisions. For a SuperGM player, this signals strong pattern recognition.
Blunder Rate (9%): Your blunder rate of 9% is above average and represents your highest-impact improvement opportunity. Each blunder costs 50+ centipawns, which means a single blunder can undo 5-10 good moves. Tactical puzzle training is the fastest path to reducing this number.
Error Texture (CV) (1.95): Your error consistency (CV=1.95) shows high variance in your mistake sizes. You alternate between very clean stretches and sudden large errors. This bursty pattern typically indicates lapses of concentration rather than systematic weakness in any one area.
PV Stability (0.86): Your PV stability of 86% means you consistently stuck with the same plan across consecutive moves. High stability indicates confident, committed play where you execute a strategy rather than improvising move-to-move.
Error Clustering (EDR) (0.10): Your EDR of 0.105 shows highly dispersed errors. Your mistakes are evenly spread rather than clustered, which suggests consistent low-level inaccuracies across all phases rather than isolated concentration drops.
Your error texture coefficient of CV=1.95 indicates high variance in your play. You oscillate between stretches of near-perfect moves and sudden, significant errors. This feast-or-famine pattern suggests that your best play is very good, but you cannot sustain it consistently across an entire game.
High variance typically comes from relying on intuition without a structured decision process. When your intuition is right, you play brilliantly. When it fails, there is no safety net to catch the error. The goal is not to suppress your intuitive play but to add a verification step that catches the worst mistakes without slowing down your good decisions.
Exercises: (1) Before every move, spend 3 seconds checking for immediate threats. (2) Maintain a consistent tempo: do not rush through easy positions and then spend all your time on hard ones. (3) Track your error distribution across games to identify which game phases produce the most variance.
Your blunder rate of 9% is moderate. You are not hemorrhaging centipawns through tactical oversights, but there is a meaningful gap between where you are and where you could be. Most of these blunders likely occurred in positions where calculation depth mattered most.
At the SuperGM level, reducing blunders from 9% to under 8% is achievable within 4-6 weeks of targeted practice. The key insight: you do not need to calculate deeper, you need to calculate more carefully. The "candidate moves" technique, where you identify 2-3 options before committing to one, eliminates the most common source of moderate blunders.
Exercises: (1) Lichess Puzzle Storm: 3-minute sessions, twice daily. (2) For each blunder in this game, set up the position and find the best move without engine help, then check. (3) In your next 5 games, consciously pause for 3 seconds before every move in the middlegame.
Exceptional Accuracy: Your overall ACPL of 8.3 cp is outstanding. This level of precision means your move selection process is well-calibrated and you rarely drift far from the optimal path. In practical terms, this means opponents must play very accurately to beat you, because you give them few opportunities to capitalize on your errors.
Strategic Commitment: Your PV stability of 86% indicates you follow through on your plans with conviction. Rather than changing direction every few moves, you identify a strategy and execute it. This is a crucial skill for converting advantages into wins.
Opening Preparation: Your opening ACPL of 1.4 cp demonstrates strong theoretical knowledge or excellent positional intuition in the opening phase. You consistently reach playable middlegame positions, which is the primary objective of the opening.
A narrative walkthrough of the critical moments in your game. Not a table of numbers, but the story of what happened and why it mattered.
The opening covered moves 1 through 10, spanning 10 scored decisions. Hans Moke Niemann navigated the opening with excellent precision, averaging only 1.4 centipawns of loss per move. The position at the end of the opening stood at +0.0, indicating a well-played theoretical phase.
The middlegame spanned 20 scored moves, with 60% of them being engine-perfect and a total loss of 146 centipawns (average: 7.3 cp per move). Here are the critical moments that defined this phase of the game:
Move 29: Nc4 — BLUNDER (−72 cp)At this point, Hans Moke Niemann was ahead. The knight move Nc4 was a significant mistake, losing 72 centipawns. The position shifted noticeably — about 0.7 pawns worth. The engine preferred f5-g4 instead, which would have saved 72 centipawns (0.7 pawns). Knights are strongest on central outposts where they can't be chased by pawns. Before retreating a knight, look for active squares first.
Move 21: Bxc4 — MISTAKE (−28 cp)At this point, Hans Moke Niemann was in a roughly equal position. The bishop move Bxc4 was an inaccuracy costing 28 centipawns. The engine preferred b7-b6 instead, which would have saved 28 centipawns (0.3 pawns). Bishop placement defines long-term strategic potential. A bishop locked behind its own pawns is far less effective than one commanding open diagonals.
These 2 critical moments accounted for 100 of the 146 total middlegame centipawn loss (68%). This concentration of errors in a few key moments, rather than spread across every move, suggests that Hans Moke Niemann's general middlegame understanding is sound but breaks down at specific decision points.
The game reached an endgame phase covering 27 scored moves. Hans Moke Niemann won the game from this position. The endgame averaged 11.6 centipawns of loss per move, with a total of 312 centipawns lost in this phase.
Move 31: fxg4 — BLUNDER (−111 cp)This pawn move cost 111 centipawns (1.1 pawns). The engine preferred c5-c2 instead. Pawn endgames are the most concrete — every tempo counts.
Move 34: Rc1+ — BLUNDER (−89 cp)This rook move cost 89 centipawns (0.9 pawns). The engine preferred c5-f5 instead. Rook endgames require precise calculation — one wrong check can let the opponent escape.
Across 57 scored moves, Hans Moke Niemann accumulated 472 centipawns of total loss. 35 moves (61%) were engine-perfect. 4 moves were classified as blunders (50+ cp). The game result was 0-1 against Magnus Carlsen.
A structured, week-by-week improvement program tailored to the specific weaknesses identified in your game. Every recommendation is derived from your data.
Your error variance (CV=1.95) shows feast-or-famine play. This week targets building a consistent decision-making process to reduce your worst moments.
Day 1-3: Tempo Training (in casual games)
Play 3 casual games per day at a deliberate, even tempo. Do not rush through easy moves and do not agonize over hard ones. The goal is a steady rhythm: roughly the same time per move (5-8 seconds in blitz, 30-60 seconds in rapid).
Day 4-5: Safety-First Puzzles (15 min/day)
Solve puzzles but before playing the "winning" move, check: is this move also safe? Does it leave anything undefended? This trains the habit of verifying before executing.
Day 6-7: Game Review (30 min total)
After your weekend games, identify every move where your loss exceeded 30 cp. For each one, determine: were you rushing? Were you distracted? Were you frustrated? Pattern recognition of WHEN you lose consistency is the key to fixing it.
Week 2 shifts focus to your secondary weakness while maintaining the habits built in Week 1. Continue your daily tactical puzzles from Week 1 (reduce to 10 minutes per day as maintenance) and add the following:
Your game showed no single dominant weakness, which means you are a well-rounded player. This week focuses on building your overall chess fitness through balanced training.
Day 1-3: Tactical Maintenance (15 min/day)
Solve 5 complex positions from recent SuperGM games per day on Lichess to keep your tactical vision sharp. Alternate between rated puzzles (for challenge) and Puzzle Streak (for speed).
Day 4-5: Game Analysis (30 min/day)
Review your recent games. For each, identify the critical moment and your thought process. Were you accurate? Were you confident? Understanding your decision-making pattern is more valuable than memorizing engine lines.
Day 6-7: Play Strong Opponents (45 min/day)
Seek games against players 100-200 rating points above you. Losing to stronger players and analyzing the games afterward is one of the fastest improvement methods available.
Integration exercise: Play 3-4 rated games this week. After each game, immediately check: (1) Did you apply the blunder-check habit from Week 1? (2) Did you work on this week's focus area? Score yourself honestly. Improvement is not about being perfect; it is about being more intentional than you were last week.
Your endgame ACPL of 11.6 cp reveals significant room for improvement in the final phase. This week provides a structured endgame curriculum.
Day 1-2: Essential Positions (30 min/day)
Study these five positions until you can play them perfectly from memory: (1) King + Pawn vs King with opposition, (2) Lucena position (rook + pawn win), (3) Philidor position (rook + pawn draw), (4) Vancura position (rook vs rook + a-pawn draw), (5) Queen vs passed pawn on 7th rank. Resources: de la Villa "100 Endgames You Must Know" chapters 1-2, or free Chessable courses.
Day 3-5: Endgame Drills (20 min/day)
Use Lichess "Play from Position" to practice endgame positions. Set up a position from the positions you studied and play it against the computer (Level 5-6). Repeat until you can win or draw (depending on the theoretical result) consistently. Key principle: activate your king early. In the endgame, the king is a fighting piece worth roughly 4 points.
Day 6-7: Practical Endgames (30 min/day)
Play 3-4 rapid games and consciously aim to reach endgames. Even if you have a winning middlegame attack, try to convert through simplification instead. This forces endgame practice in real game conditions and builds the habit of confident endgame play.
The final week brings everything together. You have spent three weeks building specific skills; now it is time to integrate them into your natural playing process and measure your progress.
Day 1-2: Game Review Ritual (30 min/day)
Play 2 rated games per day. After each game, before checking with the engine, write down: (1) your 3 best moves and why, (2) your 3 worst moves and why, (3) the critical moment of the game and what you were thinking. THEN check with the engine. Compare your self-assessment to the engine's evaluation. The closer they match, the more self-aware you are becoming.
Day 3-5: Full Integration Games (45 min/day)
Play 3 rated games with conscious application of all skills: blunder check (Week 1), secondary weakness awareness (Week 2), phase-specific improvement (Week 3). Do not try to apply everything simultaneously; instead, pick one focus per game and rotate. After each game, analyze and note whether the focus area improved.
Day 6-7: Benchmark Assessment (60 min)
Play 3-5 rated games as your "assessment set." After all games, run them through a ForgeChess analysis or Lichess computer analysis. Compare your ACPL, blunder rate, and phase performance to the numbers from this report. Specifically, look at: (1) Is your ACPL below 8? (2) Is your blunder rate below 9%? (3) Did your worst phase improve? Even modest improvement confirms the training is working.
Rating targets: Based on your current estimated SuperGM level (~2688 Elo), consistent training at 20-30 minutes per day should yield approximately +50 points in 30 days (target: ~2738) and +150 in 90 days (target: ~2838). These are realistic expectations based on typical improvement curves for dedicated study at your level.
These resources are selected specifically for the SuperGM level. Using resources too far above or below your level wastes time and builds frustration.
Books: Dvoretsky's Analytical Manual, Endgame Virtuoso by Smyslov, My Great Predecessors by Kasparov
Video content: ChessBase MegaDatabase analysis, Original analysis of your own games with deep engine time (30+ sec/move)
Lichess tools: Lichess cloud analysis at depth 40+, Lichess studies with annotation, Syzygy tablebase for endgame verification
Daily minimum: 5 complex positions from recent SuperGM games + 5 minutes of game review. This 20-minute daily investment compounds dramatically over weeks and months. Consistency beats intensity: 20 minutes every day is far more effective than 3 hours once a week.
This training plan is based on the specific signals from this single game. For the most accurate coaching, analyze 3-5 games to identify persistent patterns versus one-game anomalies. The weaknesses that appear across multiple games are the ones that matter most.
ChessForensics uses 35 independent behavioral dimensions calibrated on over 10,000+ validated games from confirmed SuperGM players (Magnus Carlsen, Hikaru Nakamura, Alireza Firouzja, Nihal Sarin, and peers). Each dimension targets a distinct behavioral fingerprint that separates human cognition from engine assistance. Signal weights, thresholds, and formula parameters are proprietary and available to accredited arbitration panels upon written request.
Engine Likelihood and Human Authenticity scores are expressed on a 0–100 scale. Validated on 10,000+ games from 80+ verified SuperGM accounts (Magnus Carlsen, Alireza Firouzja, Nihal Sarin, and peers) and 6,000+ confirmed engine games. Cross-validated false positive rate: under 1% on SuperGM play. Zero false positives on named SuperGM accounts. Reports automatically adjust analysis context based on time control (blitz, rapid, classical).
This report was created for you personally. You are welcome to use it in an appeal, dispute, or tournament proceeding. Please do not redistribute the file publicly. ChessForensics is not affiliated with Lichess, Chess.com, FIDE, or any governing chess body. This analysis is independent statistical evidence, not a platform ruling. This report constitutes statistical and behavioral probability analysis, not definitive proof of misconduct. ChessForensics accepts no liability for outcomes of disputes in which this report is used.
Calibration Scope: Validated on 3+0 blitz and 2+1 bullet time controls using confirmed SuperGM behavioral baselines. Other time controls produce results with reduced confidence and are marked accordingly.