Why Beating 100 Unknowns No Longer Beats Beating 30 Elites: Strength of Schedule in the Jits Rating Engine
The Jits rating engine now weights who you beat, not just how many. Three new mechanisms sharpen the strength-of-schedule signal. Here is the exact math, the reasoning, and every trade-off.
A 30-0 fighter ranked below a 136-21 fighter. That ranking was wrong. This article explains exactly what changed, the math behind every adjustment, and why the leaderboard looks different today.
This week, 85,361 fighters had their Jits rating recalculated. Win-loss records are identical — no matches were added, removed, or modified. But the rating engine now evaluates something it previously underweighted: the strength of the opponents behind those records.
The core Glicko-2 engine is unchanged. Three post-processing mechanisms were added to sharpen how the engine weighs who you beat, not just how many you beat. The result: a leaderboard that reflects competitive quality, not accumulated volume.
If a rating moved and you want to understand why, this is the document. If a tier changed and you want to know what to do, jump to Section 05.
The Volume Trap
When the rating engine encounters an opponent with no established rating — a new competitor, someone from an untracked organization, or a fighter with too few matches — it cannot calculate true opponent quality. The previous model handled this with a quality floor: unknown opponents were assigned a minimum quality of 0.9 on a 0–1.5 scale. That floor meant beating an unknown opponent earned 81% of the credit of beating an elite.
For a fighter with 10 matches, this barely matters. The quality floor fires once or twice, and subsequent results against rated opponents quickly calibrate the rating to its correct level.
For a fighter with 136 matches, it compounds. Every win against an unrated or weakly rated opponent deposited nearly full credit into the rating account. Over 100+ matches, those deposits accumulated into a rating that looked elite — but was built on a foundation of opponents the engine knew almost nothing about.
Here is what that looked like in practice:
| Fighter | Record | Previous Rating | Orgs |
|---|---|---|---|
| Gabriel Morais Ferreira | 136-21 | 9,969 Jits | Mostly Grappling Industries |
| Hadley Elizabeth Weber | 30-0 | 7,905 Jits | Exclusively IBJJF and ADCC |
Gabriel's record is exceptional. 136 wins is not an accident — it takes years of consistent competition. But the rating of 9,969 implied he was the strongest fighter in the dataset. Meanwhile, Hadley — undefeated across 30 matches at the two most competitive organizations in the sport — ranked below him.
The engine was not wrong about the wins. It was wrong about what those wins proved. Beating 136 opponents whose average final rating is 3,031 Jits is impressive. It is not the same as beating 30 opponents whose average final rating is 4,008 Jits. The first proves volume. The second proves level.
The model now distinguishes between the two.
Three Mechanisms
Three post-processing mechanisms were added to the rating engine. Each runs after all 351,781 matches are processed through the standard Glicko-2 algorithm. They modify the final rating — they do not change the core math.
Mechanism 1: Strength-of-Schedule Final Ceiling
After all matches are processed, the engine takes a second pass. For every fighter with 8 or more matches, it calculates the average final rating of every opponent they have beaten. The ceiling is 2.0 times that average.
The word "final" is critical. At the time of each match, most opponents are still early in their rating trajectory — clustered near the 1,000 baseline. By the end of the recalculation, those same opponents have been rated across all of their matches. A fighter who lost to Hadley in January had a rating of ~1,100 at the time. By March, after the engine processed all of that fighter's matches, their final rating might be 4,500.
This is the key insight. At-time-of-match ratings have almost no spread — everyone starts near 1,000. Final ratings have enormous spread. Hadley's opponents finish the recalculation at an average of ~4,008. A local-circuit fighter's opponents finish at ~1,090. The difference is invisible during processing and obvious afterward.
The 2.0 multiplier is generous. A fighter whose opponents average 2,000 Jits has a ceiling of 4,000 — twice the level of their proven competition. The ceiling only binds when the gap between accumulated rating and opponent quality is extreme.
Eight matches is the minimum threshold. One tournament is typically 2–3 matches. By 8 matches, the engine has enough opponent data to compute a meaningful average.
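The second pass can be sketched as follows. The function name and data shapes are hypothetical; only the 8-match threshold and the 2.0 multiplier come from the mechanism as described.

```python
MIN_MATCHES = 8          # minimum matches before a ceiling is computed
CEILING_MULTIPLIER = 2.0 # ceiling = 2.0x average beaten-opponent final rating

def apply_sos_ceiling(raw_rating: float,
                      beaten_opponent_final_ratings: list[float],
                      total_matches: int) -> float:
    """Mechanism 1: cap a fighter's rating at 2.0x the average
    FINAL rating of the opponents they have beaten."""
    if total_matches < MIN_MATCHES or not beaten_opponent_final_ratings:
        return raw_rating  # not enough evidence to compute a ceiling
    avg_opponent = (sum(beaten_opponent_final_ratings)
                    / len(beaten_opponent_final_ratings))
    return min(raw_rating, CEILING_MULTIPLIER * avg_opponent)

# Opponents averaging 4,008 -> ceiling 8,016: a raw 7,694 is untouched.
# Opponents averaging 3,031 -> ceiling 6,062: a raw 9,969 is capped.
```

Note that the ceiling uses final ratings, computed after all 351,781 matches are processed, which is what gives the average its spread.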
Mechanism 2: Elite Loss Amplification
When an established fighter (25+ matches, Rating Deviation below 200) loses to someone rated 2,000 or more Jits below them, the loss is amplified:
The amplifier is min(3.0, 1.0 + gap/4,000), where gap is the rating difference. A fighter rated 8,000 losing to a fighter rated 2,000 produces a gap of 6,000, so the amplifier is min(3.0, 1.0 + 6,000/4,000) = 2.5×. The loss counts 2.5 times as hard.
This only fires for fighters with low RD — meaning the engine is confident about their rating. If the engine is confident you are at 8,000 and you lose to someone at 2,000, that is a strong signal the 8,000 is wrong. The loss should correct the rating faster.
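The gate and the formula can be sketched directly from the numbers above. The function name and argument order are hypothetical; the thresholds (25+ matches, RD below 200, gap of 2,000 or more) and the min(3.0, 1.0 + gap/4,000) formula are from the mechanism as described.

```python
def loss_amplifier(fighter_rating: float, opponent_rating: float,
                   matches: int, rd: float) -> float:
    """Mechanism 2: multiplier applied to a loss against a much
    lower-rated opponent. Returns 1.0 when the mechanism does not fire."""
    gap = fighter_rating - opponent_rating
    established = matches >= 25 and rd < 200  # engine is confident
    if not established or gap < 2000:
        return 1.0
    return min(3.0, 1.0 + gap / 4000)

# 8,000 losing to 2,000: gap 6,000 -> min(3.0, 2.5) = 2.5x
```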
Mechanism 3: SOS-Gated Floor Decay
If a fighter's average win-opponent final rating is below approximately 2,000 Jits, a retroactive penalty applies — up to 40% of the rating above baseline.
This is not floor decay for everyone. The quality floor of 0.9 remains in effect for all fighters during match processing. This mechanism only fires after processing, and only for fighters whose final opponent quality proves the floor was systematically inflating their rating. A 58-15 fighter whose opponents average 3,500 Jits is untouched. A 135-67 fighter whose opponents average 1,400 Jits gets the penalty.
Volume at scale is still rewarded. This penalty only hits fighters whose SOS proves they accumulated rating against consistently weak opposition.
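The text specifies the ~2,000 Jits threshold and the 40% maximum penalty but not the shape of the curve between them. The sketch below assumes a linear ramp proportional to how far the opponent average falls below the threshold; that ramp, the function name, and the 1,000 baseline constant are assumptions.

```python
BASELINE = 1000.0       # all fighters start at 1,000 Jits
SOS_THRESHOLD = 2000.0  # approximate threshold below which decay fires
MAX_DECAY = 0.40        # up to 40% of the rating above baseline

def sos_floor_decay(rating: float, avg_win_opponent_rating: float) -> float:
    """Mechanism 3: retroactively reduce rating accumulated against
    consistently weak opposition. Linear ramp is an assumption."""
    if avg_win_opponent_rating >= SOS_THRESHOLD:
        return rating  # opponent quality justifies the rating
    shortfall = (SOS_THRESHOLD - avg_win_opponent_rating) / SOS_THRESHOLD
    penalty = MAX_DECAY * min(1.0, shortfall)
    return BASELINE + (rating - BASELINE) * (1.0 - penalty)

# Opponents averaging 3,500 -> untouched.
# Opponents averaging 1,400 -> 30% shortfall -> 12% of above-baseline removed.
```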
Three Fighters, Three Outcomes
All numbers below are from the March 22 recalculation, verified against the production database. Records reflect match data only.
Hadley Elizabeth Weber — 30-0, IBJJF and ADCC
Average opponent final rating: ~4,008 Jits. SOS ceiling: 4,008 × 2.0 = 8,016. Raw rating after Glicko-2 processing: ~7,694. The ceiling does not bind — Hadley's raw rating is below the ceiling. Elite loss amplification: irrelevant (never lost). SOS floor decay: irrelevant (opponent average far above 2,000).
| | Before | After |
|---|---|---|
| Rating | 7,905 | 7,694 |
| Tier | S-Tier | S-Tier |
| Record | 30-0 | 30-0 |
Hadley remains S-Tier under the new model. The previous model undervalued her because she had "only" 30 matches — the quality floor compressed the gap between her opponents and everyone else's. With the SOS ceiling computed from final opponent ratings, her opponents' true strength is visible, and the ceiling confirms her rating is justified.
Gabriel Morais Ferreira — 136-21, mostly Grappling Industries
Average opponent final rating: ~3,031 Jits. SOS ceiling: 3,031 × 2.0 = 6,062. Raw rating after Glicko-2 processing: ~9,969. The ceiling binds hard — raw rating is capped from 9,969 to 6,062. Additional adjustments (Bayesian WR, finishing bonus, experience) bring the final to 5,644.
| | Before | After |
|---|---|---|
| Rating | 9,969 | 5,644 |
| Tier | S-Tier | Prodigy |
| Record | 136-21 | 136-21 |
Gabriel's record is still 136-21. His Prodigy tier reflects that 136 wins against opponents averaging 3,031 Jits is genuinely excellent — but 9,969 overstated the evidence. The SOS ceiling does not punish volume. It contextualizes it.
Red Tora Matsumoto — 139-69, 66.8% win rate
Average opponent final rating: below 2,000 Jits. SOS ceiling binds. SOS floor decay also fires — the engine retroactively reduces the gains accumulated from wins against weak opponents.
| | Before | After |
|---|---|---|
| Rating | 7,120 | 4,120 |
| Tier | Prodigy | Elite |
| Record | 139-69 | 139-69 |
Red Tora's volume is genuine — 208 total matches across years of competition. The Elite tier reflects that this is a fighter who shows up consistently. The previous rating of 7,120 reflected accumulated volume; the new rating of 4,120 reflects the quality of that volume.
A New Scale
The SOS ceiling compresses the top of the rating distribution. Ratings that previously reached 9,000–10,000 are now capped at 6,000–8,500. The old tier thresholds were calibrated for the old distribution — keeping them would have pushed most of the leaderboard into lower tiers regardless of quality.
The thresholds were recalibrated to match the new distribution. An eighth tier — Unranked — was added at the bottom for fighters with provisional or insufficient data.
| Tier | Percentile | Meaning |
|---|---|---|
| S-Tier | Top 0.01% | Dominant at the highest level of competition |
| Prodigy | Top 0.2% | Exceptional record against strong opponents |
| Elite | Top 2% | Proven competitor with quality wins |
| Advanced | Top 8% | Consistently winning against rated opponents |
| Prospect | Top 23% | Building competitive experience |
| Novice | Top 48% | Active competitor |
| Beginner | Top 76% | Early competitive career |
| Unranked | Below top 76% | Provisional or insufficient data |
Tiers are now assigned by percentile rank among published fighters — not by fixed rating thresholds. The top 0.01% of rated fighters are S-Tier regardless of what rating number that maps to. As the fighter pool grows or the rating distribution shifts, tier boundaries move automatically.
The current S-Tier has 10 fighters. The tier is meant to be small — it represents dominance at the highest level of youth jiu-jitsu competition.
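Percentile-based assignment can be sketched as follows. The boundaries come from the tier table above; the function and data shapes are hypothetical, and ties are broken arbitrarily in this sketch.

```python
# Top-percentile boundaries from the tier table, as fractions.
TIER_BOUNDS = [
    (0.0001, "S-Tier"),
    (0.002,  "Prodigy"),
    (0.02,   "Elite"),
    (0.08,   "Advanced"),
    (0.23,   "Prospect"),
    (0.48,   "Novice"),
    (0.76,   "Beginner"),
]

def assign_tiers(ratings: dict[str, float]) -> dict[str, str]:
    """Assign tiers by percentile rank among published fighters.
    Boundaries float with the distribution, not with raw ratings."""
    ordered = sorted(ratings, key=ratings.get, reverse=True)
    n = len(ordered)
    tiers = {}
    for rank, fighter in enumerate(ordered):  # rank 0 = highest rated
        fraction = (rank + 1) / n             # top fraction of the pool
        for bound, tier in TIER_BOUNDS:
            if fraction <= bound:
                tiers[fighter] = tier
                break
        else:
            tiers[fighter] = "Unranked"
    return tiers
```

Because boundaries are fractions of the pool, a 100-fighter pool would have no S-Tier at all (1/100 is already past the 0.01% cutoff), while a 92,778-fighter pool yields roughly ten.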
For reference, the current tier distribution across 92,778 published fighters:
| Tier | Fighters |
|---|---|
| S-Tier | 10 |
| Prodigy | 176 |
| Elite | 1,670 |
| Advanced | 5,567 |
| Prospect | 13,916 |
| Novice | 23,195 |
| Beginner | 25,978 |
| Unranked | 26,858 |
What to Do If Your Rating Changed
If a rating dropped under this update, here is what happened and the path forward.
What happened: The engine added a second pass that evaluates the final quality of the opponents behind each win. Fighters whose accumulated rating exceeded what their opponent quality justified had their rating capped. Records are unchanged. The engine reweighted existing evidence.
What did not happen: No wins were removed. No matches were deleted. No records were altered. The competitive record is exactly the same as it was yesterday.
The path forward: compete against rated opponents.
Every win against an opponent the engine can evaluate at their true level counts at full value. The SOS ceiling rises with opponent quality — beating higher-rated opponents directly raises the ceiling. There is no penalty for facing strong competition.
The most direct way to raise a rating under the new model is to enter events with deep fields. IBJJF, ADCC, and national-level organizations draw the highest concentration of rated opponents. Regional Smoothcomp events in neighboring states bring different pools. Every new rated opponent is full-value evidence.
Look up any fighter's current rating and full match history at jits.gg/leaderboard. Every match, every opponent, every rating change is there. The Jits Rating page explains each tier and how the system works.
If geography or cost limits travel, the model is not penalizing anyone. A fighter who has beaten every available local opponent has proven what can be proven locally. The rating reflects that proven level accurately — it cannot project beyond it. When the opportunity to compete against stronger fields arrives, the rating will respond immediately.
What This Model Gets Wrong
Every rating model has limitations. Here are the ones we know about.
Gi/nogi mixing at global level. The global rating combines gi and nogi results. A fighter who is elite in gi and average in nogi gets one blended number. Segment ratings (belt × gender × format) exist for this — they are visible on each fighter's profile — but the headline number blends both.
Geographic isolation. A strong fighter in a rural area with no rated opponents nearby will have a low SOS through no fault of their own. The SOS ceiling will cap their rating below what their skill might justify. The model cannot distinguish between a fighter who has not faced strong opponents because none exist nearby and a fighter who has not faced strong opponents because they cannot beat them. Travel to larger events is the only way to generate the evidence the engine needs. This is not a bug — the rating reflects what has been proven, not what might be true.
Historical data coverage. Pre-2026 IBJJF tournaments have placement data (1st, 2nd, 3rd) but fewer individual match records. The engine rates fighters based on the matches it can see. Fighters who competed heavily before 2026 may have incomplete match histories compared to fighters whose careers started with full bracket data available.
New fighter cold start. Everyone starts at 1,000 Jits with high uncertainty. A fighter's first 5–8 matches produce volatile ratings as the engine calibrates. The SOS ceiling helps here — it caps runaway early ratings — but early-career numbers should be read as provisional.
Common Questions
Did any wins get removed?
No. Every win is still in the competitive record. Win-loss numbers are unchanged. The rating engine added a post-processing evaluation of opponent strength — it did not modify any match data.
Why did my tier change?
Two reasons: the SOS ceiling may have adjusted the rating, and the tier thresholds were recalibrated to match the new distribution. Even fighters whose rating barely moved may have changed tiers because the threshold moved.
Will the rating recover?
Every win against a rated opponent counts at full value. The SOS ceiling rises as opponent quality rises. Competing against stronger fields directly raises both the rating and the ceiling.
What if I only compete locally?
Local wins count. The rating reflects what those wins prove. The SOS ceiling reflects the strength of local opponents. If the local field averages 1,500 Jits, the ceiling is 3,000 — which may be lower than a fighter's accumulated raw rating. Travel to larger events is the most direct path to a higher ceiling.
Will this happen again?
The model will continue to evolve as the dataset grows. Every change will be documented with the same transparency — full math, full reasoning, full impact numbers.