LeetCode 981. Time Based Key-Value Store
Design a time-based key-value store that records multiple values per key with timestamps and returns the value whose timestamp is the largest <= the queried timestamp. With timestamps per key strictly increasing and up to 2e5 operations, the typical solution uses per-key ordered timestamp/value lists and binary search to find the floor timestamp efficiently.
Asked at:
OpenAI
Apple
Amazon
Meta
DESCRIPTION
Input:
operations = [["set", "foo", "bar", 1], ["get", "foo", 1], ["get", "foo", 3], ["set", "foo", "baz", 4], ["get", "foo", 4], ["get", "foo", 5]]
Output:
[null, "bar", "bar", null, "baz", "baz"]
Explanation: set('foo','bar',1) stores bar at timestamp 1. get('foo',1) returns 'bar' (exact match). get('foo',3) returns 'bar' (largest timestamp ≤ 3 is 1). set('foo','baz',4) stores baz at timestamp 4. get('foo',4) returns 'baz' (exact match). get('foo',5) returns 'baz' (largest timestamp ≤ 5 is 4).
Constraints:
- 1 <= key.length, value.length <= 100
- key and value consist of lowercase English letters and digits
- 1 <= timestamp <= 10^7
- All timestamps for set operations are strictly increasing per key
- At most 2 * 10^5 calls total to set and get
Understanding the Problem
Let's understand what we're being asked to design. We need a time-based key-value store that can store multiple values for the same key, each with a different timestamp.
For example, if we do set('foo', 'bar', 1) and later set('foo', 'baz', 3), the key 'foo' now has two values: 'bar' at timestamp 1 and 'baz' at timestamp 3.
When we call get('foo', 2), we need to return the value whose timestamp is the largest timestamp less than or equal to 2. In this case, that's 'bar' (timestamp 1), because timestamp 3 is too large.
Important constraint: Timestamps for each key are strictly increasing - we never set the same key with an earlier timestamp than a previous set. This means for each key, the timestamps are naturally sorted!
Edge cases to consider: What if we get() a key that doesn't exist? What if we get() with a timestamp smaller than all stored timestamps for that key? What if we have up to 2e5 operations - how do we keep get() efficient?
Brute Force Approach
Store a hash map where each key maps to a list of (timestamp, value) pairs. For set(), append the new pair to the key's list. For get(), iterate through the entire list for that key, checking each timestamp to find the largest one that is ≤ the query timestamp. This approach is simple to implement, but each get() scans every timestamp stored for the key, costing O(n) time and becoming slow when a key has many historical values.
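A minimal sketch of this brute-force approach in Python (the class name is illustrative; get() returns an empty string on a miss, following LeetCode 981's interface):

```python
class TimeMapBruteForce:
    """Brute force: linear scan over each key's history on get()."""

    def __init__(self):
        self.store = {}  # key -> list of (timestamp, value) pairs

    def set(self, key, value, timestamp):
        # Appending keeps each list sorted, because timestamps
        # per key are strictly increasing.
        self.store.setdefault(key, []).append((timestamp, value))

    def get(self, key, timestamp):
        # Scan the stored pairs, tracking the best floor match: O(n) per call.
        best = ""
        for ts, value in self.store.get(key, []):
            if ts <= timestamp:
                best = value  # a later entry with ts <= timestamp replaces this
            else:
                break  # list is sorted, so no later entry can qualify
        return best
```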
Building Intuition
For each key, we're storing a list of (timestamp, value) pairs that grows over time. Because timestamps are strictly increasing, this list is automatically sorted by timestamp.
When we get(key, timestamp), we need to find the largest timestamp in the list that is ≤ our query timestamp. This is a classic 'floor' search problem.
Searching for a floor value in a sorted list can be done in O(log n) time using binary search, rather than O(n) time with linear search.
With up to 2e5 operations and potentially many get() calls, this logarithmic speedup is crucial for performance.
Think of organizing historical stock prices by timestamp. If someone asks 'What was the price at 2:30 PM?', you look through your sorted list of timestamps and find the most recent price before or at 2:30 PM.
You don't need to check every single timestamp - you can use the sorted property to quickly narrow down to the right time range. The sorted timestamps are what make this efficient lookup possible.
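The floor search described above can be sketched with Python's bisect module (floor_index is a hypothetical helper; the timestamps are illustrative):

```python
import bisect

def floor_index(timestamps, query):
    """Return the index of the largest timestamp <= query, or -1 if none.

    bisect_right returns the insertion point to the right of any equal
    element, so subtracting 1 lands exactly on the floor entry.
    """
    return bisect.bisect_right(timestamps, query) - 1

# The stock-price analogy: prices recorded at sorted times.
times = [1, 3, 4, 9]
print(floor_index(times, 2))  # 0 -> timestamp 1 is the floor of 2
print(floor_index(times, 4))  # 2 -> exact match at timestamp 4
print(floor_index(times, 0))  # -1 -> every stored timestamp exceeds 0
```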
Optimal Solution
Use a hash map where each key maps to a list of (timestamp, value) pairs. Since timestamps are strictly increasing per key, each list is automatically sorted by timestamp. For set(), append the new pair to the key's list in O(1) time. For get(), perform binary search on the key's timestamp list to find the largest timestamp ≤ the query timestamp in O(log n) time. This takes advantage of the sorted property to achieve logarithmic lookup time.
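A sketch of this optimal solution in Python, keeping two parallel per-key lists so bisect can search the timestamps directly (returning an empty string on a miss, per LeetCode 981's convention):

```python
import bisect
from collections import defaultdict

class TimeMap:
    """Hash map of per-key parallel lists, binary-searched on get()."""

    def __init__(self):
        self.times = defaultdict(list)   # key -> sorted timestamps
        self.values = defaultdict(list)  # key -> values aligned with times

    def set(self, key, value, timestamp):
        # O(1) append; strictly increasing timestamps keep the list sorted.
        self.times[key].append(timestamp)
        self.values[key].append(value)

    def get(self, key, timestamp):
        # O(log n) floor search: index of the largest timestamp <= query.
        i = bisect.bisect_right(self.times[key], timestamp) - 1
        return self.values[key][i] if i >= 0 else ""

# Tracing the example from the problem statement:
tm = TimeMap()
tm.set("foo", "bar", 1)
print(tm.get("foo", 1))  # bar (exact match)
print(tm.get("foo", 3))  # bar (floor timestamp is 1)
tm.set("foo", "baz", 4)
print(tm.get("foo", 5))  # baz (floor timestamp is 4)
```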
Complexity Analysis
Time Complexity: O(1) for set and O(log n) for get, where n is the number of timestamps stored for that key. set() appends to a list in O(1). get() uses binary search on a sorted list of timestamps, which takes O(log n) time.
Space Complexity: O(m), where m is the total number of set operations across all keys. We store every (timestamp, value) pair that was ever set, so space grows linearly with the number of set operations.
What We've Learned
- HashMap + Sorted List hybrid structure: When you need to store multiple time-series values per key, combine a HashMap for O(1) key lookup with per-key sorted lists for temporal ordering - this dual-layer structure enables efficient access by both key and time dimension.
- Binary search on timestamps: Since timestamps are strictly increasing per key, use binary search (bisect_right/upper_bound - 1) to find the largest timestamp ≤ query in O(log n) time - this is the floor-finding pattern essential for time-based lookups.
- Monotonic property exploitation: When a problem guarantees strictly increasing timestamps, you can skip insertion sorting and simply append to the list - recognizing monotonic guarantees saves O(n) per insertion and simplifies implementation significantly.
- Off-by-one in binary search: The critical mistake is using bisect_left instead of bisect_right - you need bisect_right then subtract 1 to find the floor timestamp, or check if bisect_left's result has an exact match before using it as the answer.
- Time-series data pattern: This HashMap-of-sorted-arrays pattern applies broadly to versioned data, event logs, stock prices, sensor readings - whenever you need point-in-time queries on historical data with temporal ordering per entity.
- Space-time tradeoff awareness: Storing all historical values costs O(n) space but enables O(log n) queries - understand that precomputing and storing sorted data trades memory for query speed, which is optimal when reads vastly outnumber writes.
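The off-by-one pitfall above can be shown concretely with Python's bisect (the timestamps here are illustrative):

```python
import bisect

times = [1, 4, 4, 7]

# Floor of 5: bisect_right counts the entries <= 5, so subtract 1.
assert bisect.bisect_right(times, 5) - 1 == 2   # times[2] == 4, the correct floor

# bisect_left alone is wrong when the query falls between stored timestamps:
i = bisect.bisect_left(times, 5)                # i == 3
assert times[i] > 5  # times[3] == 7 is not a floor; an exact-match check is needed
```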
Related Concepts and Problems to Practice
Medium: This problem involves finding elements closest to a target value in a sorted array, which shares the core pattern of searching for floor/ceiling values in ordered data structures. Both problems require understanding how to efficiently query sorted data with comparison-based lookups.
Medium: This problem combines hashmap storage with binary search on timestamps to find the latest non-overlapping job. It demonstrates a similar pattern of maintaining time-indexed data and using binary search to query based on time constraints, making it excellent practice for the time-based lookup pattern.
Test Your Understanding
Why is a hash map the right data structure for this problem?
A hash map provides the optimal access pattern
It's the only data structure that works
It's the easiest to implement
It uses the least memory
Question Timeline
See when this question was last asked and where, including any notes left by other candidates.
Late November, 2025
Apple
Mid-level
Mid October, 2025
OpenAI
Senior
Early August, 2025
Meta
Staff