
Managing Row-Level TTL in CockroachDB for High-Traffic MMORPGs

Last edited on May 9, 2025


    Massively Multiplayer Online Role-Playing Games (MMORPGs) are more than just a big initialism – they also generate an enormous amount of transient data. 

    From session logs and temporary inventories to event-based records, handling this data efficiently is crucial to maintaining performance and scalability. Without proper data management, however, databases can become bloated, leading to degraded query performance and increased storage costs.

    CockroachDB’s row-level Time-To-Live (TTL) feature provides an automated way to manage transient data by ensuring it expires and is deleted efficiently. This is particularly useful during high-traffic or ingestion-heavy periods, such as: 

    • in-game events 

    • seasonal updates 

    • peak user activity times 

    By implementing row-level TTL, MMORPG developers can ensure that old or unnecessary data does not impact database performance.

    Additionally, for multi-region deployments, the distributed SQL architecture of CockroachDB supports REGIONAL BY ROW locality. This ensures that game data is optimally stored and accessed per region, reducing latency and improving player experience.

    This guide provides a step-by-step approach to implementing row-level TTL in CockroachDB to optimize MMORPG data management during high-traffic scenarios.


    What is TTL?

    Let’s take a moment to go deeper into TTL, or Time-To-Live, which is a concept widely used in computing to manage the lifespan of data. 

    TTL refers to the duration for which data remains valid before it expires and is automatically deleted. It’s an important mechanism for ensuring that temporary or unnecessary data doesn’t consume valuable resources, such as storage and processing power, beyond its useful life. TTL is typically used in systems like caches, databases, and networking protocols to enhance performance and efficiency by automatically purging outdated data without manual intervention.

    TTL in MMORPG Data Management

    In the context of MMORPGs, TTL is especially important for managing the vast amount of transient data generated during gameplay. In addition to what we noted above, this data can also include session logs, temporary inventories, event-based records, and other time-sensitive information. 

    For example, data related to temporary buffs, completed auction house transactions, or matchmaking queues can all be governed by TTL policies. This frees up storage and maintains optimal database performance during high-traffic events. 

    Step 1: Understand What Data Needs TTL

    Before implementing row-level TTL, it’s important to determine which data should have an expiration policy. Common examples in MMORPGs include:

    • Session Logs: Temporary session data that is only useful for a limited time.

    • Auction House Listings: Expired or completed transactions that no longer serve a purpose.

    • Temporary Buffs & Power-Ups: Data related to time-limited effects applied to players.

    • Matchmaking & Lobby Data: Records for players in matchmaking queues that should be removed after a set duration.
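    To make the idea concrete, here is a sketch of a temporary-buffs table with TTL applied at creation time. The schema is hypothetical and intended only to illustrate the pattern:

    CREATE TABLE player_buffs (
        player_id UUID NOT NULL,
        buff_id STRING NOT NULL,
        applied_at TIMESTAMPTZ NOT NULL DEFAULT now(),
        PRIMARY KEY (player_id, buff_id)
    ) WITH (ttl_expire_after = '1h');

    With this policy, a buff row is automatically eligible for deletion one hour after it is written, so expired effects never accumulate in the table.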

    Step 2: Enable Row-Level TTL in CockroachDB

    CockroachDB allows TTL expiration on a per-table basis using the ttl_expire_after parameter. To enable it, follow these steps:

    1. Create a Table with TTL

    If you’re storing temporary event logs, create a table with a TTL policy like this:

    CREATE TABLE event_logs (
        created_at DATE NOT NULL,
        id UUID DEFAULT gen_random_uuid(),
        player_id UUID NOT NULL,
        event_type STRING NOT NULL,
        event_data JSONB,
        PRIMARY KEY (created_at, id)
    ) WITH (ttl_expire_after = '24h');

    This ensures that event logs older than 24 hours are automatically deleted.
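    Under the hood, setting ttl_expire_after adds a hidden crdb_internal_expiration column to the table, which you can query to see when rows are due to expire:

    SELECT id, crdb_internal_expiration
    FROM event_logs
    ORDER BY crdb_internal_expiration
    LIMIT 10;

    This is a handy sanity check that rows are receiving the expiration timestamps you expect.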

    2. Add TTL to an Existing Table

    If you already have a table and want to add TTL, use:

    ALTER TABLE matchmaking_queue SET (ttl_expire_after = '30m');

    This ensures that players who have been in the matchmaking queue for over 30 minutes are removed automatically.
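    To confirm the storage parameter took effect, SHOW CREATE TABLE displays the TTL settings alongside the table schema:

    SHOW CREATE TABLE matchmaking_queue;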

    Step 3: Monitor and Optimize TTL Performance

    1. Monitor TTL Job Execution

    CockroachDB runs TTL deletions as background jobs. To monitor them, use:

    SELECT * FROM [SHOW JOBS] WHERE job_type = 'ROW LEVEL TTL';

    This allows you to track whether TTL jobs are running smoothly or experiencing delays.

    2. Adjust TTL Settings for High Traffic

    During peak game events, ingestion rates may spike, requiring adjustments to TTL settings. You can modify TTL dynamically:

    ALTER TABLE session_data SET (ttl_expire_after = '12h');

    If necessary, consider reducing the TTL duration temporarily to free up space faster.
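    Once the event winds down, the retention window can be lengthened again, or TTL can be removed from the table entirely with RESET:

    -- Restore a longer retention window after the peak
    ALTER TABLE session_data SET (ttl_expire_after = '24h');

    -- Or disable TTL for the table altogether
    ALTER TABLE session_data RESET (ttl);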

    Step 4: Handle TTL During High Ingestion Periods

    During heavy load scenarios, TTL deletions may compete for resources. To mitigate this:

    1. Prioritize Indexing for Efficient Deletions

    Ensure that TTL-affected tables have appropriate indexing. Example:

    CREATE INDEX ON event_logs (created_at);

    This speeds up TTL deletions by allowing efficient scans.

    2. Stagger TTL Deletions

    CockroachDB lets you control how often TTL jobs run via the ttl_job_cron storage parameter. Scheduling deletions at a fixed interval prevents TTL operations from overwhelming the system:

    ALTER TABLE event_logs SET (
        ttl_expire_after = '24h',
        ttl_job_cron = '*/10 * * * *'
    );

    This schedules TTL deletions every 10 minutes rather than on the default hourly cadence.
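    Beyond scheduling, CockroachDB also exposes storage parameters that throttle how aggressively each TTL job reads and deletes rows. The values below are illustrative starting points, not tuned recommendations:

    ALTER TABLE event_logs SET (
        ttl_select_batch_size = 200,   -- rows scanned per SELECT batch
        ttl_delete_batch_size = 50,    -- rows removed per DELETE batch
        ttl_delete_rate_limit = 500    -- max rows deleted per second, per node
    );

    Smaller batches and a rate limit trade slower cleanup for less contention with foreground game traffic, which is usually the right trade during a live event.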

    3. Use Table Partitioning for Load Distribution

    For extremely high-ingestion tables, partitioning can help distribute load:

    ALTER TABLE event_logs PARTITION BY RANGE (created_at) (
        PARTITION p1 VALUES FROM (MINVALUE) TO ('2025-01-01'),
        PARTITION p2 VALUES FROM ('2025-01-01') TO (MAXVALUE)
    );

    This ensures that deletions target specific partitions instead of scanning the entire table.

    Note: PARTITION BY cannot be set on a table in a multi-region enabled database, and context-dependent operators (such as now()) are not allowed in partition boundary values.

    Step 5: Use REGIONAL BY ROW for Multi-Region Deployments

    For MMORPGs operating in multiple regions, CockroachDB supports per-row regional distribution, improving performance by ensuring data is stored close to players.

    1. Enable Regional Locality

    ALTER TABLE event_logs SET LOCALITY REGIONAL BY ROW;

    This ensures that rows are automatically placed in the appropriate region, reducing latency and improving query performance.
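    Note that REGIONAL BY ROW requires the database itself to be multi-region. As a sketch (the database and region names here are placeholders), the database first needs a primary region and any additional regions:

    ALTER DATABASE game_db SET PRIMARY REGION "us-east1";
    ALTER DATABASE game_db ADD REGION "europe-west1";

    Once regions are configured, each row's placement is driven by its hidden region column, so players in each region read and write data stored near them.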

    Effortless, efficient scale for MMORPGs

    Managing transient data efficiently is critical for MMORPGs, especially during peak traffic events. CockroachDB’s row-level TTL automates data cleanup, ensuring that unnecessary data does not degrade performance. For multi-region deployments, REGIONAL BY ROW locality ensures that data is stored optimally across regions, further enhancing speed and availability.

    Game developers can maintain a smooth player experience while keeping storage costs under control by: 

    • properly configuring TTL settings 

    • monitoring job execution 

    • leveraging partitions for high-ingestion scenarios 

    • utilizing regional locality 

    With proper TTL and multi-region optimization, MMORPGs can scale effortlessly, allowing devs to focus on the fun of gameplay rather than the drudgery of database maintenance. Start leveraging CockroachDB’s TTL and regional capabilities today to keep your game’s data lean, fast, and efficient!

    Get in the game! Experience CockroachDB’s free cloud offering today.

    Ben Sherrill is a Sales Engineer for Cockroach Labs.