In Python, creating an object of a class is a two-step process:
1. __new__ is called; it is responsible for allocating and returning a new instance of the class.
2. __init__ is called; it initializes the already created object.

__new__ handles creation (it can return any object; its mandatory first parameter is the class). __init__ handles initialization (it works with self, the already allocated instance). __new__ is typically overridden for subclasses of immutable types (tuple, str, int), for singleton creation, and for the "factory" pattern.

```python
class MyStr(str):
    def __new__(cls, value):
        print("__new__ called")
        # modify the value before the string object is created
        instance = super().__new__(cls, value.upper())
        return instance

    def __init__(self, value):
        print("__init__ called", value)

s = MyStr('abc')  # __new__ called -> __init__ called abc
print(s)          # 'ABC'
```
Is it possible to initialize an immutable object through __init__ if __new__ is not implemented in the class?
No. For immutable types (str, int, tuple), the value must be set in __new__; by the time __init__ runs, the object already exists and its value can no longer be changed.
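A minimal sketch of the failure mode (the class name LowerStr is illustrative): without an overridden __new__, any attempt to alter the value inside __init__ silently does nothing, because the string content was already fixed during allocation.

```python
class LowerStr(str):
    def __init__(self, value):
        # Too late: the string content was fixed in str.__new__.
        # This rebinds the local name 'value' only; the instance is untouched.
        value = value.lower()

s = LowerStr('ABC')
print(s)  # 'ABC' -- still uppercase, the __init__ "change" had no effect
```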
```python
class MyTuple(tuple):
    def __new__(cls, items):
        print(f'__new__! {items}')
        # convert the items before the immutable tuple is created
        return super().__new__(cls, map(str, items))

    def __init__(self, items):
        print(f'__init__! {items}')

t = MyTuple([1, 2, 3])
print(t)  # ('1', '2', '3')
```
History
In a large Django project, a subclass of str was used to store special kinds of strings. The developers tried to modify the value in __init__, but the result never changed, since str is immutable. The bug was fixed only after __new__ was studied and overridden.
History
When implementing the Singleton pattern, the developers forgot to add the logic in __new__ that returns the existing instance: each call to the class still created a new object, wasting memory and breaking the singleton guarantee.
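A minimal sketch of the missing logic: a correct singleton checks for an existing instance in __new__ and returns it instead of allocating a new object.

```python
class Singleton:
    _instance = None

    def __new__(cls):
        # Return the cached instance instead of allocating a new one.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

a = Singleton()
b = Singleton()
print(a is b)  # True -- both names refer to the same object
```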
History
In a data serialization library, instance caching was implemented in a class that only overrode __init__, overlooking the fact that returning a cached object requires overriding __new__. As a result, the cache was useless: every call created a fresh object.
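A minimal sketch of instance caching done in __new__ (the Point class and its cache key are illustrative): the cache lookup must happen at creation time, before a new object is allocated. Note that __init__ still runs on every call, so it should be idempotent.

```python
class Point:
    _cache = {}

    def __new__(cls, x, y):
        # Look up the cache before allocating; reuse the object if present.
        key = (x, y)
        if key not in cls._cache:
            cls._cache[key] = super().__new__(cls)
        return cls._cache[key]

    def __init__(self, x, y):
        # Runs on every call, even for cached instances -- keep it idempotent.
        self.x, self.y = x, y

p1 = Point(1, 2)
p2 = Point(1, 2)
print(p1 is p2)  # True -- the second call returned the cached instance
```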