Background:
Memoization is an optimization technique widely used in functional programming, recursive computations, and dynamic programming whenever the same function is called repeatedly with the same arguments. In Python it became a common idiom with the functools.lru_cache decorator (added in Python 3.2) and with hand-rolled caches for computation results.
Problem:
Without caching, recursive functions such as Fibonacci number calculations are very slow: the same subproblems are recomputed over and over, so the number of calls grows exponentially with the argument.
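To make the blow-up concrete, here is a minimal sketch (the call counter is ours, not part of any standard API) that counts how many calls the naive recursion makes:

```python
calls = 0

def naive_fib(n):
    # Naive recursion: recomputes the same subproblems repeatedly.
    global calls
    calls += 1
    if n <= 1:
        return n
    return naive_fib(n - 1) + naive_fib(n - 2)

naive_fib(20)
print(calls)  # 21891 calls for a single naive_fib(20)
```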
Solution:
Memoization saves already computed values (for example, in a dictionary) to reuse them when necessary:
Manual implementation example:
```python
memo = {}

def fib(n):
    if n in memo:
        return memo[n]        # reuse a previously computed value
    if n <= 1:
        result = n
    else:
        result = fib(n - 1) + fib(n - 2)
    memo[n] = result          # cache the result before returning
    return result
```
Since Python 3.2 you can simply use the built-in decorator:
```python
from functools import lru_cache

@lru_cache(maxsize=128)
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)
```
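lru_cache also exposes introspection helpers, cache_info() and cache_clear(); a quick check (fib is written out again so the snippet runs standalone):

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

fib(30)
info = fib.cache_info()
print(info)        # CacheInfo(hits=28, misses=31, maxsize=128, currsize=31)
fib.cache_clear()  # reset the cache when needed
```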
Key questions:
Can you memoize a function with mutable arguments (e.g., lists)?
No. lru_cache requires all arguments to be hashable (in practice, immutable); otherwise it raises a TypeError.
```python
from functools import lru_cache

@lru_cache
def f(a):
    return a[0]

f([1, 2, 3])  # TypeError: unhashable type: 'list'
```
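A common workaround, sketched here, is to convert the mutable argument to a hashable tuple before it reaches the cached function (the wrapper and helper names are ours):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def _first_cached(items):
    # items arrives as a tuple here, so it is hashable.
    return items[0]

def first(a):
    # Public wrapper: accepts a list, caches on its tuple form.
    return _first_cached(tuple(a))

print(first([1, 2, 3]))  # 1 -- no TypeError
```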
Are memoized values automatically removed?
Yes, if maxsize is set: once the limit is exceeded, the least recently used (LRU) entries are evicted. An unbounded cache (maxsize=None) never evicts anything and can exhaust memory.
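A small sketch of eviction in action with maxsize=2:

```python
from functools import lru_cache

@lru_cache(maxsize=2)
def square(x):
    return x * x

square(1)
square(2)
square(3)              # evicts the entry for 1 (least recently used)
info = square.cache_info()
print(info.currsize)   # 2 -- the cache never exceeds maxsize
square(1)              # recomputed, because it was evicted
print(square.cache_info().misses)  # 4 misses in total
```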
What happens if a memoized function calls itself with new arguments?
Memoization still works: each unique set of arguments is computed only once and cached.
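This can be verified by recording every real computation (the `computed` list is ours, added purely for illustration):

```python
from functools import lru_cache

computed = []

@lru_cache(maxsize=None)
def fib(n):
    computed.append(n)  # runs only on a cache miss
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

fib(25)
print(len(computed))       # 26: each n in 0..25 is computed exactly once
print(len(set(computed)))  # 26: no argument was ever computed twice
```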
Writing a custom cache without size limits:
```python
cache = {}

def foo(x):
    if x not in cache:
        cache[x] = long_heavy_computation(x)
    return cache[x]
```
Over time the program's memory use grows without bound: nothing is ever evicted, so a long-running process can end up holding gigabytes of cached results.
Pros:
- Trivial to implement and easy to inspect, since it is just a dictionary.
- Full control over the cache: you choose the keys and can invalidate entries yourself.

Cons:
- No size limit: memory use grows for as long as the process runs.
- No eviction policy, no statistics, no built-in invalidation; all bookkeeping is on you.
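The unbounded growth is easy to observe directly; in this sketch long_heavy_computation is replaced by a trivial stand-in so the snippet runs:

```python
cache = {}

def long_heavy_computation(x):
    return x * x  # stand-in for real expensive work

def foo(x):
    if x not in cache:
        cache[x] = long_heavy_computation(x)
    return cache[x]

for i in range(100_000):
    foo(i)
print(len(cache))  # 100000 entries -- nothing is ever evicted
```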
Using lru_cache with maxsize:
```python
from functools import lru_cache

@lru_cache(maxsize=1000)
def foo(x):
    return long_heavy_computation(x)
```
Pros:
- Memory use is bounded: once the limit is reached, entries are evicted in LRU order.
- One line to enable, plus cache_info() and cache_clear() for introspection.

Cons:
- All arguments must be hashable.
- A too-small maxsize causes still-useful values to be evicted and recomputed.
- The cache is per function: individual entries cannot be invalidated selectively.
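For contrast with the unbounded dictionary, the same workload through lru_cache stays capped (again with a trivial stand-in for long_heavy_computation):

```python
from functools import lru_cache

@lru_cache(maxsize=1000)
def foo(x):
    return x * x  # stand-in for long_heavy_computation(x)

for i in range(100_000):
    foo(i)
print(foo.cache_info().currsize)  # 1000 -- memory use is capped at maxsize
```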