
Resumable Upload

The constraint: a user uploads a 5 GB video to Google Drive and their Wi-Fi drops at 80%. Without resumability, they lose 4 GB of progress and start over.
The tus protocol (an open standard for resumable uploads) solves this. The client sends a POST to create an upload session, receiving a unique session URI, then PUTs 4 MB chunks to that URI sequentially.
The server records the byte offset of each confirmed chunk in Redis with a 24-hour TTL. On interruption, the client sends a HEAD request to learn the last confirmed offset and resumes from there.
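The flow above can be sketched end to end with an in-memory stand-in for the session URI. All names here are illustrative, and the wire format is simplified to match the PUT framing in this walkthrough (the actual tus core protocol transfers chunks with PATCH and an `Upload-Offset` header; HEAD reporting the confirmed offset is the same):

```python
CHUNK = 4  # 4 bytes for illustration; 4 MB in the scenario above

class FakeServer:
    """In-memory stand-in for the upload session URI (illustration only)."""
    def __init__(self):
        self.offset = 0          # last confirmed byte offset
        self.data = bytearray()

    def head(self):
        return self.offset       # HEAD: report the resume point

    def put(self, chunk):
        self.data += chunk       # PUT: append one sequential chunk
        self.offset += len(chunk)

def upload(server, blob, fail_after=None):
    """Send sequential chunks; optionally simulate a dropped connection."""
    sent = 0
    offset = server.head()       # learn the confirmed offset, resume from it
    while offset < len(blob):
        if fail_after is not None and sent == fail_after:
            raise ConnectionError("wifi dropped")
        server.put(blob[offset:offset + CHUNK])
        offset = server.head()
        sent += 1

blob = b"0123456789abcdef0123"   # stands in for the 5 GB video
srv = FakeServer()
try:
    upload(srv, blob, fail_after=3)  # connection drops at 60%
except ConnectionError:
    pass                             # 12 bytes already confirmed server-side
upload(srv, blob)                    # HEAD says offset 12; resume from there
assert bytes(srv.data) == blob
```

The second `upload` call never re-sends the first three chunks, which is the whole point: progress survives the interruption because the offset lives on the server, not in the client's memory.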
The system sounds straightforward, but the real challenge is idempotent chunk acceptance: if the client retries a chunk that was already received (because the network ACK was lost), the server must recognize the duplicate by its byte range and return success without writing it a second time. Without this safeguard, interrupted uploads corrupt files with duplicated data.
Google Cloud Storage and Dropbox both implement this pattern. We chose the tus protocol (not a custom implementation) because it is an open standard with battle-tested client libraries for every platform.
Trade-off: tus adds one extra HEAD request per retry, plus 100 bytes of session state in Redis. At 100K concurrent uploads, the session store holds only 10 MB, a negligible cost for guaranteed resumability.
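The back-of-envelope arithmetic behind that claim, using the figures above:

```python
# Session-store footprint at peak concurrency (figures from the text).
state_bytes_per_session = 100        # offset + metadata per in-flight upload
concurrent_uploads = 100_000
total_mb = state_bytes_per_session * concurrent_uploads / 1_000_000
print(total_mb)  # 10.0 MB
```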
Why it matters in interviews
Skipping resumable uploads breaks the system for any file over a few hundred MB. Mentioning the tus protocol and idempotent chunk acceptance shows we know the standard approach and the edge case that corrupts files.