Game theory simulator — multiplayer public goods with dynamic synergy, lobbying and endogenous erosion.
16 champions (algorithms) compete turn by turn, choosing how much to contribute to a common pot. The pot is multiplied by a dynamic synergy factor S, then redistributed according to a hybrid ratio alpha between equal and proportional shares.
```bash
# Clone and setup
git clone https://github.com/renaudcepre/the-commons-arena.git
cd the-commons-arena
uv venv .venv
source .venv/bin/activate
uv pip install -e .

# Run a game
python cli.py run --seed 42

# Run tests
python -m pytest tests/ -q
```

Requirements: Python 3.12+, uv
For LLM players, set the relevant API keys as environment variables
(e.g. MISTRAL_API_KEY, OPENAI_API_KEY).
1. Each champion receives an endowment M (default: $100)
2. Each champion decides how much to contribute (0% to 100% of M)
3. [Lobbying] Each champion may spend capital to shift alpha
4. Common pot = sum of contributions
5. The pot is multiplied by S: total_pot = pot * S
6. The pot is redistributed according to alpha:
   `share_i = alpha * (total_pot / N) + (1 - alpha) * (contrib_i / sum_contribs) * total_pot`
7. [Erosion] Kept money (not contributed) is degraded if S < S_initial
8. Champion's capital += kept money (eroded) + their share of the pot
9. S is updated based on the turn's cooperation rate
10. The Shadow of the Future decides whether the game continues
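The redistribution step (6) can be sketched in a few lines. This is an illustrative standalone function, not the engine's actual API:

```python
# Illustrative sketch of the hybrid redistribution rule:
# share_i = alpha * (total_pot / N) + (1 - alpha) * (contrib_i / pot) * total_pot
def redistribute(contribs: list[float], alpha: float, synergy: float) -> list[float]:
    """Split the synergy-multiplied pot between equal and proportional shares."""
    pot = sum(contribs)
    total_pot = pot * synergy
    n = len(contribs)
    shares = []
    for c in contribs:
        equal_part = total_pot / n
        prop_part = (c / pot) * total_pot if pot > 0 else 0.0
        shares.append(alpha * equal_part + (1 - alpha) * prop_part)
    return shares
```

For two players with contributions 100 and 0 at S=1.8 and alpha=0.5, this returns shares of 135 and 45, matching the worked example further down.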
The alpha parameter controls how the multiplied pot is redistributed:
| Alpha | Mode | Effect |
|---|---|---|
| 1.0 | Egalitarian | Everyone receives pot/N |
| 0.5 | Hybrid (default) | 50/50 between equal and proportional |
| 0.0 | Proportional | Each receives in proportion to their contribution |
Example — 2 players, endowment=$100, S=1.8, one cooperates (100%), the other defects (0%):
- Pot = $100, multiplied_pot = $180
- alpha=1.0: Coop $90, Defect $90 → no difference, free-riding is optimal
- alpha=0.5: Coop $135, Defect $45 → significant gap
- alpha=0.0: Coop $180, Defect $0 → maximum punishment
The multiplier S represents the "health of the economy". It evolves based on the cooperation rate:
- If coop_rate >= 0.8 (golden age): S += 0.1 (trust, growth)
- If coop_rate < 0.5 (erosion): S -= 0.05 (distrust, contraction)
- Otherwise (neutral zone): S unchanged
- S is clamped between s_min=1.2 and s_max=3.0
- Initial value: s_initial=1.8
Three emergent regimes:
- Golden age (coop >= 80%) — S rises, the pot grows, everyone benefits
- Erosion (coop < 50%) — S falls, the economy contracts, returns shrink
- Status quo (50–80%) — S remains stable
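The update rule above fits in a few lines. A minimal sketch, with illustrative names (not the actual `synergy.py` API):

```python
# Illustrative sketch of the synergy update rule.
S_MIN, S_MAX = 1.2, 3.0

def update_synergy(s: float, coop_rate: float) -> float:
    if coop_rate >= 0.8:      # golden age: trust, growth
        s += 0.1
    elif coop_rate < 0.5:     # erosion: distrust, contraction
        s -= 0.05
    # 0.5 <= coop_rate < 0.8: neutral zone, S unchanged
    return max(S_MIN, min(S_MAX, s))
```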
When S drops below S_initial, kept (non-contributed) money is degraded:
```
erosion_factor = (S / S_initial) ^ intensity
kept_money = (1 - contribution) * endowment * erosion_factor
```

- With intensity=1 (default): if S drops to 1.2 (= 66% of S_initial=1.8), you keep only 66% of non-contributed money
- With intensity=2: same scenario, you keep only 44%
Erosion punishes free-riders through the game itself: when cooperation declines, hoarding money yields less. This is an endogenous mechanism — not an external punishment, but a natural consequence of economic degradation.
Subtle point: with intensity=1, AlwaysBetray still wins in large groups ($85 vs $79 for a cooperator). This is the expected outcome — the public goods dilemma is a real dilemma. The interest lies in showing the conditions under which cooperation emerges (small groups, repetition, tournaments).
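The percentages in the erosion example can be checked directly. A sketch with an illustrative function name:

```python
# Illustrative check of the erosion formula: kept money is scaled by
# (S / S_initial) ** intensity whenever S has fallen below S_initial.
def kept_money(contribution: float, endowment: float, s: float,
               s_initial: float = 1.8, intensity: float = 1.0) -> float:
    erosion_factor = (s / s_initial) ** intensity if s < s_initial else 1.0
    return (1 - contribution) * endowment * erosion_factor

# Full defector (contribution 0) with S fallen to 1.2:
print(kept_money(0.0, 100, 1.2, intensity=1))  # ≈ 66.7
print(kept_money(0.0, 100, 1.2, intensity=2))  # ≈ 44.4
```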
Alpha is dynamic. Each turn, champions can spend their accumulated capital to push alpha:
```
delta_alpha = sensitivity * sum(lobby_i)
alpha = clamp(alpha + delta_alpha, 0.0, 1.0)
cost = abs(lobby_amount)    # deducted from capital
```

Default sensitivity: 0.001
- Positive lobby → alpha toward 1.0 (egalitarian redistribution, benefits free-riders)
- Negative lobby → alpha toward 0.0 (proportional redistribution, benefits contributors)
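The lobbying step above can be sketched as follows, assuming all lobby amounts are summed before the clamp (names are illustrative, not the engine's API):

```python
# Illustrative sketch of the lobbying step: net lobby shifts alpha,
# and each lobbyist pays the absolute value of their own amount.
def apply_lobbying(alpha: float, lobbies: list[float],
                   sensitivity: float = 0.001) -> tuple[float, list[float]]:
    delta = sensitivity * sum(lobbies)
    new_alpha = max(0.0, min(1.0, alpha + delta))
    costs = [abs(a) for a in lobbies]
    return new_alpha, costs
```

For example, with alpha=0.5 and lobbies of +200 and -100, the net push is `0.001 * 100 = 0.1`, moving alpha to 0.6 while both lobbyists pay their full stake.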
Empirical observation: lobbying is a resource trap. Capitaliste and Parasite ruin themselves in a tug-of-war, while non-lobbyists (TitForTat, Rancunier...) preserve their capital. Tournament results: with lobbying ON, AlwaysBetray drops from 100% win rate to 72%.
The game has no fixed duration. Each turn, the stopping probability increases:
```
p(t) = 1 - exp(-k * t)
```
k is calibrated for ~80% chance of ending by the target turn (default: 100)
Players don't know when it ends — they plan under uncertainty. This prevents "endgame" strategies (defecting on the last turn).
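Solving `p(T) = 0.8` for k gives the calibration directly. A sketch with illustrative function names:

```python
import math
import random

# Illustrative calibration: choose k so the stopping probability
# p(t) = 1 - exp(-k * t) reaches p_end at the target turn.
def calibrate_k(target_turns: int = 100, p_end: float = 0.8) -> float:
    return -math.log(1 - p_end) / target_turns

def game_continues(turn: int, k: float, rng: random.Random) -> bool:
    """Draw once per turn against the current stopping probability."""
    p_stop = 1 - math.exp(-k * turn)
    return rng.random() >= p_stop
```

With the defaults, `calibrate_k()` gives k ≈ 0.0161, so that p(100) ≈ 0.8.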
| Champion | Strategy |
|---|---|
| AlwaysCoop | Always contributes 100% |
| AlwaysBetray | Always contributes 0% |
| TitForTat | Copies the median contribution from the previous turn |
| GenerousTFT | TFT but never drops below 20% |
| Rancunier | Cooperates until a defection is detected (<30%), then 0% forever |
| Pavlov | If previous gain >= endowment, repeat; otherwise switch |
| RandomPlayer | Uniform random contribution [0, 1] |
| Detective | Plays [C, D, C, C] then exploits or imitates based on reactions |
| Gradual | Cooperates, punishes proportionally to defections, then forgives |
| MajorityDetector | Follows the majority: cooperates if >50% cooperated last turn |
| Champion | Contribution | Lobby | Logic |
|---|---|---|---|
| Capitaliste | 90% | 5% of capital toward alpha=0 | Big contributor, wants contributions to pay off |
| Parasite | 0% | 5% of capital toward alpha=1 | Pure free-rider, wants equal share |
| Pragmatique | Variable | 3% of capital, adaptive direction | Free-rides if alpha>0.7, contributes if alpha<0.3 |
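Pragmatique's logic can be sketched from the table. Only the 0.7/0.3 thresholds and the 3% budget come from the table; the middle-ground contribution and the lobby direction are assumptions for illustration:

```python
# Sketch of Pragmatique. The alpha thresholds (0.7 / 0.3) and the 3%
# lobby budget are from the table; the rest is assumed.
def pragmatique_decide(alpha: float) -> float:
    if alpha > 0.7:          # near-equal split: free-riding pays
        return 0.0
    if alpha < 0.3:          # near-proportional split: contributing pays
        return 1.0
    return 0.5               # assumed middle-ground behavior

def pragmatique_lobby(alpha: float, capital: float) -> float:
    # Assumed direction: reinforce whichever regime it currently exploits
    # (positive pushes alpha toward 1, negative toward 0).
    budget = 0.03 * capital
    return budget if alpha > 0.5 else -budget
```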
| Champion | Strategy |
|---|---|
| Apprenant | Multi-armed bandit (epsilon-greedy). 6 levels [0, 0.2, 0.4, 0.6, 0.8, 1.0]. 15% exploration, 10-turn sliding window |
| Justicier | Votes to exclude detected free-riders |
| Opportuniste | Adapts to synergy and context |
LLM-powered champions use pydantic-ai for structured decision-making. Any model supported by pydantic-ai works (Mistral, OpenAI, Anthropic, etc.).
Each turn, the LLM receives the full game state as JSON (synergy, capitals, previous contributions) and returns a structured decision (contribution + reasoning + optional lobby amount). Conversation history is maintained across turns for multi-turn reasoning.
Player names are replaced with random pseudonyms so the LLM cannot infer strategies from champion names.
Bots are defined as JSON files in `bots/`:

```json
{
  "model": "mistral:mistral-small-latest",
  "personality": "You must become the richest player at all costs.",
  "temperature": 0.3
}
```

| Field | Required | Description |
|---|---|---|
| `model` | yes | pydantic-ai model string (e.g. `mistral:mistral-small-latest`, `openai:gpt-4o-mini`) |
| `personality` | no | Custom system prompt injected before the game rules |
| `temperature` | no | LLM temperature (default: 0.3) |
```bash
# Using a bot file (loads model + personality + temperature from bots/*.json)
python cli.py run --include mistral-neutre --seed 42

# Multiple instances of the same bot
python cli.py run --include mistral-neutre:3 --seed 42

# Direct model string (default personality)
python cli.py run --llm "mistral:mistral-small-latest" --seed 42

# Mix algorithmic champions and LLM players
python cli.py run --include Pavlov:3 --include mistral-neutre --llm "openai:gpt-4o-mini"

# List available bots
python cli.py list
```

Create a JSON file in `bots/` — no code changes needed. The bot name is derived
from the filename (e.g. `bots/my-bot.json` → `--include my-bot`).
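The name derivation might look like this. The loader shown is a sketch, not the CLI's actual code:

```python
import json
from pathlib import Path

# Illustrative bot-config loader: the bot name is the filename stem,
# and the documented temperature default (0.3) is filled in if absent.
def load_bot(path: str) -> tuple[str, dict]:
    p = Path(path)
    name = p.stem                          # bots/my-bot.json -> "my-bot"
    config = json.loads(p.read_text())
    config.setdefault("temperature", 0.3)  # documented default
    return name, config
```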
```bash
# Install
uv venv .venv && source .venv/bin/activate && uv pip install -e .

# Tests
python -m pytest tests/ -q

# List champions
python cli.py list

# Simple game (erosion + lobbying ON by default)
python cli.py run --seed 42
python cli.py run --seed 42 -v               # turn-by-turn detail
python cli.py run --no-lobbying --seed 42    # without lobbying

# Custom compositions (--include with multipliers)
python cli.py run --include Apprenant:5 --output data/5app.json
python cli.py run --include Pavlov:3 --include AlwaysBetray --output data/pav_vs_betray.json

# Round-robin tournament
python cli.py tournament --games 50 --seed 42
python cli.py tournament --group-size 4 --games 20 --seed 42
python cli.py tournament --no-lobbying --no-erosion --seed 42   # "pure" rules

# JSON export for the viewer
python cli.py run --seed 42 --output data/partie.json
```

| Option | Default | Description |
|---|---|---|
| `--duration` | 100 | Target turns (shadow of the future calibration) |
| `--endowment` | 100 | Endowment per turn |
| `--alpha` | 0.5 | Initial alpha (0=proportional, 1=equal) |
| `--lobbying/--no-lobbying` | ON | Dynamic lobbying on alpha |
| `--erosion/--no-erosion` | ON | Endogenous erosion of kept gains |
| `--erosion-intensity` | 1.0 | Erosion exponent (>1 = more severe) |
| `--include` | — | Select champions or bots (supports multipliers: `Apprenant:3`) |
| `--exclude` | — | Exclude champions |
| `--llm` | — | Add an LLM player by model string (e.g. `mistral:mistral-small-latest`) |
| `--seed` | random | Seed for reproducibility |
| `--output` | — | JSON export |
`web/viewer.html` — open in a browser, drag and drop a JSON file.
Displays: capital evolution, synergy, cooperation rate, contribution heatmap, dynamic leaderboard, events. Playback controls (play/pause/step/speed).
```
arene/              Game engine
  engine.py         Game loop (contributions → lobbying → redistribution → erosion → vote)
  models.py         Dataclasses (GameConfig, RoundContext, RoundResult, etc.)
  synergy.py        Synergy S computation
  shadow.py         Shadow of the future (stopping probability)
  champion.py       Abstract Champion class (decide, lobby, vote_exclusion)
  registry.py       @register_champion + global registry
  tournament.py     Round-robin tournament
  report.py         Report (JSON + text)
champions/          16 algorithmic strategies + LLM player
  llm_player.py     LLM-powered champion (pydantic-ai)
bots/               LLM bot configs (JSON: model + personality + temperature)
cli.py              Typer CLI (run, tournament, list)
web/                HTML/JS viewer (Chart.js)
data/               Pre-generated JSON scenarios
tests/              pytest
```
```python
# champions/my_champion.py
from arene.champion import Champion
from arene.models import RoundContext
from arene.registry import register_champion

@register_champion
class MyChampion(Champion):
    description = "My custom strategy"

    def decide(self, ctx: RoundContext) -> float:
        """Return 0.0 (keep everything) to 1.0 (give everything)."""
        return 0.5

    def lobby(self, ctx: RoundContext) -> float:
        """Optional. Positive = alpha toward 1, negative = alpha toward 0. Cost = abs(amount)."""
        return 0.0
```

Add the import in `champions/__init__.py`.
