3 tricks of Python’s `lru_cache`

arekusandr · Mar 10, 2020

CacheInfo

The decorator adds a tiny helper method, `cache_info()`, that reports hit/miss statistics for the cached function and is handy for troubleshooting and debugging on each call.
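For context, a minimal `get_pep` in the spirit of the example from the `functools` documentation might look like this (the PEP URL and `maxsize=32` here are assumptions for illustration):

```python
from functools import lru_cache
from urllib import error, request

@lru_cache(maxsize=32)
def get_pep(num):
    """Retrieve the text of a Python Enhancement Proposal."""
    resource = f'https://peps.python.org/pep-{num:04d}/'
    try:
        with request.urlopen(resource) as s:
            return s.read()
    except error.HTTPError:
        return 'Not Found'
```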

```python
for n in 8, 290, 308, 320, 8, 218, 320, 279, 289, 320, 9991:
    pep = get_pep(n)
    print(n, len(pep))

print(get_pep.cache_info())
# CacheInfo(hits=3, misses=8, maxsize=32, currsize=8)
```

TTL

I remember writing a caching bug a few years back, when I used an LRU cache decorator on some of the larger functions deployed to AWS Lambda. The problem was that the cache needed to be invalidated over time, and Python’s `lru_cache` has no built-in argument for that. A simple workaround, though, is to pass a time-based value as an extra argument to the function under the `lru_cache` decorator.

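A minimal sketch of the pattern (the `ttl_hash` parameter name and the fetch helper are mine; any extra argument that changes once per time window works):

```python
import time
from functools import lru_cache

@lru_cache(maxsize=32)
def get_pep(num, ttl_hash=None):
    # ttl_hash takes part in the cache key but is otherwise unused,
    # so identical calls within the same time bucket hit the cache.
    del ttl_hash
    return fetch_pep_text(num)  # hypothetical fetch helper
```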

At call time we then pass the time-based parameter:

```python
pep = get_pep(320, time.time() // 3600)         # cached for an hour
pep = get_pep(320, time.time() // (24 * 3600))  # cached for a day
```

This is an implicit TTL implementation without any external library. The trick is quite clever, and I always find it satisfying how simple it is. (One caveat: entries expire when the time bucket rolls over, not exactly one TTL after they were cached.) It is easy to understand, and I really hope something like it becomes standard in Python.

maxsize=None

The default `maxsize=128` is far from universal. With `maxsize=None` you can put an unlimited number of key-value pairs in the cache. RAM is cheap nowadays, and Kubernetes can restart your container on OOM, so `maxsize=None` must be the new default value for `lru_cache` (sarcasm).
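For example, an unbounded memoized Fibonacci (since Python 3.9, `functools.cache` is shorthand for exactly this):

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded: entries are never evicted
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(100))          # fast thanks to memoization
print(fib.cache_info())  # maxsize=None, currsize=101
```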

Happy hacking!
