Uber / Ride Sharing
VERY COMMON
Ride sharing design is asked at Uber, Lyft, Google, Amazon, and Meta because it tests real-time geospatial indexing, dispatch optimization, and streaming infrastructure in one problem. It is the system behind Uber's 5.4 billion trips per year across 10,000 cities. You will design an H3 hexagonal geo index that locates nearby drivers in O(1), a Kafka pipeline ingesting 167K GPS updates per second, and a dispatch algorithm that ranks candidates by ETA, direction, and rating rather than raw distance.
- Design an H3 hexagonal geo index for O(1) nearby-driver lookup
- Build a Kafka pipeline ingesting 167K location updates per second
- Avoid the dispatch race condition that double-books drivers
Visual Solutions
Step-by-step animated walkthroughs with capacity estimation, API design, database schema, and failure modes built in.
Cheat Sheet
Key concepts, trade-offs, and quick-reference notes for Ride Sharing. Everything you need at a glance.
Anti-Patterns
Common design mistakes candidates make. Wrong approaches vs correct approaches for each trap.
Failure Modes
What breaks in production, how to detect it, and how to fix it. Detection metrics, mitigations, and severity ratings.
Start simple. Build to staff-level.
“Ride-sharing for 1M daily rides connecting 1M riders with 500K drivers. Drivers broadcast GPS every 3 seconds: 167K location updates/sec flow through Kafka into Redis using H3 hexagonal cells. Full driver set fits in 16.5 MB. Dispatch queries 7 H3 cells, ranks by ETA, direction, and rating. First accept wins via optimistic locking. WebSocket pushes 40-byte updates during trips; 100K concurrent rides = 1.3 MB/sec egress. Trip records at 500 MB/day in MySQL sharded by rider_id. Surge pricing per H3 cell with EMA smoothing.”
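The headline numbers in that pitch follow from back-of-the-envelope arithmetic. A minimal sketch of the math, using the figures quoted above (the ~500-byte trip row is implied by 500 MB/day over 1M rides):

```python
# Back-of-the-envelope capacity math for the summary above.
DRIVERS = 500_000
GPS_INTERVAL_S = 3                # each driver reports every 3 seconds
BYTES_PER_DRIVER_STATE = 33       # per-driver record size from the notes
BYTES_PER_TRIP_UPDATE = 40        # WebSocket payload size from the notes
CONCURRENT_RIDES = 100_000
DAILY_RIDES = 1_000_000
BYTES_PER_TRIP_RECORD = 500       # implied by 500 MB/day over 1M rides

updates_per_sec = DRIVERS / GPS_INTERVAL_S                              # ~167K/sec
driver_state_mb = DRIVERS * BYTES_PER_DRIVER_STATE / 1e6                # ~16.5 MB in Redis
egress_mb_per_sec = CONCURRENT_RIDES * BYTES_PER_TRIP_UPDATE / GPS_INTERVAL_S / 1e6  # ~1.3 MB/s
trip_storage_mb_per_day = DAILY_RIDES * BYTES_PER_TRIP_RECORD / 1e6     # ~500 MB/day

print(updates_per_sec, driver_state_mb, egress_mb_per_sec, trip_storage_mb_per_day)
```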
Geospatial Indexing (H3)
STANDARD
We chose H3 hexagonal grid at resolution 9 (not QuadTree) because cell assignment is a pure function with no write contention at 167K/sec. Hexagons provide uniform neighbor distances. K-ring query (cell + 6 neighbors) finds nearby drivers in under 10ms. Trade-off: we gave up adaptive density.
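A minimal sketch of cell assignment and the k-ring lookup, assuming the h3-py library (v3 function names) and a Redis set per cell; the key scheme and TTL are illustrative, not a production layout:

```python
import h3          # h3-py; names below follow the v3.x API (v4 renamed them)
import redis

r = redis.Redis()
RESOLUTION = 9     # resolution 9 hexagons, per the design above

def on_driver_location(driver_id: str, lat: float, lng: float) -> None:
    """Cell assignment is a pure function of (lat, lng): no tree, no splits, no locks."""
    cell = h3.geo_to_h3(lat, lng, RESOLUTION)
    r.sadd(f"cell:{cell}:drivers", driver_id)        # illustrative key scheme
    r.set(f"driver:{driver_id}:cell", cell, ex=15)   # expire stale locations
    # (removing the driver from its previous cell's set is omitted here)

def nearby_drivers(lat: float, lng: float, k: int = 1) -> set:
    """K-ring query: the rider's cell plus k rings of neighbors (k=1 -> 7 cells)."""
    center = h3.geo_to_h3(lat, lng, RESOLUTION)
    cells = h3.k_ring(center, k)
    return r.sunion([f"cell:{c}:drivers" for c in cells])
```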
Core Feature Design: Ride Matching / Dispatch
TRICKY
We rank candidates by ETA (50%), direction (20%), rating (15%), acceptance rate (15%) because the closest driver is often not the fastest to arrive. Optimistic locking via CAS prevents double-booking. We chose CAS (not pessimistic locking) because holding a lock for 15 seconds would block other dispatchers.
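A sketch of the weighted ranking plus a compare-and-set claim, assuming pre-normalized component scores and Redis SET NX as the CAS primitive; the field names and key scheme are assumptions:

```python
import redis

r = redis.Redis()

# Ranking weights from the design: ETA 50%, direction 20%, rating 15%, acceptance 15%.
WEIGHTS = {"eta_score": 0.50, "direction_score": 0.20,
           "rating_score": 0.15, "acceptance_score": 0.15}

def score(candidate: dict) -> float:
    # Components are assumed pre-normalized to [0, 1], higher = better.
    return sum(w * candidate[k] for k, w in WEIGHTS.items())

def claim_driver(driver_id: str, ride_id: str) -> bool:
    """SET NX acts as a compare-and-set on the driver's assignment key, so two
    concurrent dispatchers cannot double-book the same driver. The 30s TTL
    releases the claim automatically if the offer is never confirmed."""
    return bool(r.set(f"driver:{driver_id}:ride", ride_id, nx=True, ex=30))

def dispatch(ride_id: str, candidates: list[dict]) -> str | None:
    for c in sorted(candidates, key=score, reverse=True):
        if claim_driver(c["driver_id"], ride_id):
            return c["driver_id"]
    return None
```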
Core Feature Design: Real-Time Location Tracking
EASY
We chose WebSocket during active rides (not HTTP polling) because one connection replaces 300 poll requests per ride. 40-byte updates every 3 seconds. HTTP polling for pre-match map view where update frequency is lower. Trade-off: we accepted stateful connection complexity for 99% lower request volume.
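The notes give only the ~40-byte figure, not the wire format, so the layout below is a hypothetical fixed-width packing that happens to total 40 bytes:

```python
import struct
import time

# Hypothetical layout: trip_id(8) + driver_id(8) + lat(8) + lng(8)
#                      + timestamp(4) + heading(2) + speed(2) = 40 bytes
UPDATE_FMT = "!QQddIHH"
assert struct.calcsize(UPDATE_FMT) == 40

def encode_update(trip_id, driver_id, lat, lng, heading_deg, speed_kmh):
    return struct.pack(UPDATE_FMT, trip_id, driver_id, lat, lng,
                       int(time.time()), heading_deg, speed_kmh)

def decode_update(payload: bytes):
    return struct.unpack(UPDATE_FMT, payload)
```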
High Level Design: QuadTree vs H3 Grid
STANDARD
QuadTree rebalances on every insert (167K/sec), causing lock contention. We chose H3 because cell ID is a pure function of (lat, lng): no tree, no splits, no locks. Trade-off: H3 cells are fixed-size. We compensate with variable k-ring radius for sparse suburban areas.
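One way to implement the variable k-ring radius: widen the ring until enough drivers are covered. The thresholds and the driver_count_for callback are assumptions for illustration:

```python
import h3   # h3-py, v3 API names

def candidate_cells(lat, lng, driver_count_for, min_candidates=8, max_k=4):
    """Widen the k-ring until enough drivers are covered: dense downtown usually
    stops at k=1, sparse suburbs expand further. `driver_count_for` is an assumed
    callback returning the driver count across a set of cells."""
    center = h3.geo_to_h3(lat, lng, 9)
    for k in range(1, max_k + 1):
        cells = h3.k_ring(center, k)
        if driver_count_for(cells) >= min_candidates:
            return cells
    return h3.k_ring(center, max_k)
```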
Core Feature Design: Surge Pricing
STANDARD
We compute the supply/demand ratio per H3 cell every 60 seconds (not globally) because geographic isolation prevents cross-neighborhood price distortion. We smooth with an EMA to prevent oscillation. Trade-off: 2-3 minute response delay for price stability. Cap at 5.0x.
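A sketch of the per-cell surge computation with EMA smoothing and the 5.0x cap; the smoothing factor and the 1.0x floor are not specified in these notes, so the values below are assumptions:

```python
ALPHA = 0.3        # EMA smoothing factor; assumed, not given in the notes
SURGE_CAP = 5.0

smoothed = {}      # H3 cell id -> smoothed demand/supply ratio

def update_surge(cell: str, demand: int, supply: int) -> float:
    """Recomputed per H3 cell every 60s: raw ratio, EMA smoothing, then cap."""
    raw = demand / max(supply, 1)
    prev = smoothed.get(cell, raw)
    smoothed[cell] = ALPHA * raw + (1 - ALPHA) * prev
    return min(max(smoothed[cell], 1.0), SURGE_CAP)   # never below 1.0x, capped at 5.0x
```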
Core Feature Design: ETA Estimation
TRICKY
We use three layers (not Dijkstra alone): Dijkstra on the road graph (25% error), + real-time traffic from driver GPS (11% error), + ML correction (<5% error). Trade-off: 3x compute cost per query for 5x accuracy improvement. Fallback: straight-line distance at 25 km/h. We show a range, not a point estimate.
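When the routing and ML layers are unavailable, the fallback reduces to straight-line (haversine) distance at an assumed 25 km/h average speed, reported as a range. A minimal sketch (the range multipliers are an illustrative choice):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lng1, lat2, lng2):
    lat1, lng1, lat2, lng2 = map(radians, (lat1, lng1, lat2, lng2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lng2 - lng1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))   # Earth radius ~6371 km

def fallback_eta_minutes(lat1, lng1, lat2, lng2, speed_kmh=25.0):
    """Last-resort estimate: straight-line distance at an assumed city speed,
    returned as a (low, high) range rather than a point estimate."""
    eta = haversine_km(lat1, lng1, lat2, lng2) / speed_kmh * 60
    return (0.8 * eta, 1.3 * eta)
```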
Core Feature Design: Driver Location Service
EASY
We buffer 167K writes/sec through Kafka (not direct Redis writes) because a Redis pause at that rate queues 83K updates in app memory. Each update: 33 bytes. Total driver state: 16.5 MB in Redis. Trade-off: 1-2 seconds added latency for zero data loss during downstream slowdowns.
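A sketch of the producer side, assuming the kafka-python client; the topic name and payload shape are illustrative. Keying by driver_id keeps each driver's updates ordered within one partition, and the Kafka buffer absorbs any Redis slowdown downstream:

```python
import json
from kafka import KafkaProducer   # kafka-python

producer = KafkaProducer(
    bootstrap_servers="kafka:9092",
    value_serializer=lambda v: json.dumps(v).encode(),
    linger_ms=50,                  # small batching window for high-throughput ingest
)

def publish_location(driver_id: str, lat: float, lng: float, ts: int) -> None:
    # A separate consumer group reads this topic and writes driver state to Redis.
    producer.send("driver-locations",
                  key=driver_id.encode(),
                  value={"driver_id": driver_id, "lat": lat, "lng": lng, "ts": ts})
```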
High Level Design: WebSocket Connection Management
EASY
We store the connection registry in Redis (not local memory) so it survives server crashes. Heartbeat every 10s, reconnect on 3 missed pings. Graceful draining on deploy: stop new connections, keep existing for 60s. Trade-off: one extra network hop per routing lookup for crash-resilient connection tracking. Peak: 100K connections.
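A sketch of the Redis-backed connection registry with heartbeat-driven expiry (entries vanish after 3 missed 10-second pings); key names are illustrative:

```python
import redis

r = redis.Redis()
HEARTBEAT_S = 10
MISSED_PINGS = 3

def register_connection(user_id: str, server_id: str) -> None:
    """Registry lives in Redis (not local memory) so routing survives a crashed
    WebSocket server; TTL equals 3 missed heartbeats."""
    r.set(f"ws:conn:{user_id}", server_id, ex=HEARTBEAT_S * MISSED_PINGS)

def heartbeat(user_id: str) -> None:
    r.expire(f"ws:conn:{user_id}", HEARTBEAT_S * MISSED_PINGS)

def route(user_id: str) -> str | None:
    """The one extra network hop per routing lookup: which server holds the socket?"""
    server = r.get(f"ws:conn:{user_id}")
    return server.decode() if server else None
```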