Django ID mapper

Modified for Evennia to ensure that no model instances leave the cache unexpectedly (no use of weakrefs).

Also adds cache_size() for monitoring the size of the cache.

class evennia.utils.idmapper.models.SharedMemoryModelBase(name, bases, attrs)[source]

Bases: django.db.models.base.ModelBase

class evennia.utils.idmapper.models.SharedMemoryModel(*args, **kwargs)[source]

Bases: django.db.models.base.Model

Base class for idmapped objects. Inherit from this.

class Meta[source]

Bases: object

abstract = False
classmethod get_cached_instance(id)[source]

Retrieve a cached instance by its pk value. Returns None when not found, which will always be the case when caching is disabled for this class. Note that the lookup is still performed even when instance caching is disabled.

classmethod cache_instance(instance, new=False)[source]

Method to store an instance in the cache.

  • instance (Class instance) – the instance to cache.

  • new (bool, optional) – whether this is the first time this instance is cached (i.e. not an update operation, such as after a db save).
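The caching contract of get_cached_instance/cache_instance can be illustrated with a simplified, pure-Python sketch of the idmapper pattern. The class and method names mirror the API above, but this is not the actual Evennia implementation (which also handles Django model state):

```python
class IdMapped:
    """Minimal stand-in for a SharedMemoryModel subclass: a class-level
    dict maps pk -> instance so repeated lookups return the same object."""

    __instance_cache__ = {}

    def __init__(self, pk):
        self.pk = pk

    @classmethod
    def get_cached_instance(cls, id):
        # Returns None when the pk is not in the cache.
        return cls.__instance_cache__.get(id)

    @classmethod
    def cache_instance(cls, instance, new=False):
        # Store (or update) the instance under its pk.
        cls.__instance_cache__[instance.pk] = instance


obj = IdMapped(1)
IdMapped.cache_instance(obj, new=True)
assert IdMapped.get_cached_instance(1) is obj   # identical object back
assert IdMapped.get_cached_instance(2) is None  # uncached pk
```

The point of the pattern is identity: two lookups of the same pk return the very same Python object, so in-memory state attached to it is shared everywhere it is referenced.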

classmethod get_all_cached_instances()[source]

Return the objects so far cached by idmapper for this class.

classmethod flush_cached_instance(instance, force=True)[source]

Method to flush an instance from the cache. The instance will always be flushed from the cache, since this is most likely called from delete(), and we want to make sure we don’t cache dead objects.

classmethod flush_instance_cache(force=False)[source]

This will clean safe objects from the cache. Use force keyword to remove all objects, safe or not.


at_idmapper_flush()[source]

This is called when the idmapper cache is flushed and allows customized actions when this happens.

Returns:

  • do_flush (bool) – if True, flush this object as normal. If False, don’t flush and expect this object to handle the flushing on its own.

flush_from_cache(force=False)[source]

Flush this instance from the instance cache. Use force to override the result of at_idmapper_flush() for the object.
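The interplay between the at_idmapper_flush() hook and a non-forced cache flush can be sketched in plain Python. This is illustrative only (the flag and constructor below are invented for the sketch; the real SharedMemoryModel also tracks Django state):

```python
class Cached:
    """Toy cache illustrating the at_idmapper_flush() opt-out."""

    _cache = {}

    def __init__(self, pk, handles_own_flush=False):
        self.pk = pk
        self.handles_own_flush = handles_own_flush
        Cached._cache[pk] = self

    def at_idmapper_flush(self):
        # True: flush this object as normal.
        # False: keep it cached; the object handles flushing on its own.
        return not self.handles_own_flush

    @classmethod
    def flush_instance_cache(cls, force=False):
        if force:
            # force removes everything, ignoring the hook
            cls._cache.clear()
        else:
            # only "safe" objects (hook returns True) are removed
            cls._cache = {pk: obj for pk, obj in cls._cache.items()
                          if not obj.at_idmapper_flush()}


a = Cached(1)
b = Cached(2, handles_own_flush=True)
Cached.flush_instance_cache()
assert 1 not in Cached._cache and 2 in Cached._cache
Cached.flush_instance_cache(force=True)
assert not Cached._cache
```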

delete(*args, **kwargs)[source]

Delete the object, clearing cache.

save(*args, **kwargs)[source]

Central database save operation.


Arguments as per Django documentation. Calls self.at_<fieldname>_postsave(new) (this is a wrapper set by oobhandler: self._oob_at_<fieldname>_postsave())
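A hypothetical sketch of the per-field postsave dispatch described above. The hook-name pattern follows the docstring; the `_changed` bookkeeping and the `new` parameter are invented here purely for illustration and are not the actual Evennia/oobhandler machinery:

```python
class SketchModel:
    """Illustrates dispatching at_<fieldname>_postsave hooks after a save."""

    def __init__(self):
        self._changed = []  # field names modified since last save (invented)

    def save(self, new=False):
        # ... the database write would happen here ...
        for fieldname in self._changed:
            # look up an optional per-field hook by naming convention
            hook = getattr(self, f"at_{fieldname}_postsave", None)
            if hook:
                hook(new)
        self._changed.clear()


calls = []

class Character(SketchModel):
    def at_name_postsave(self, new):
        calls.append(("name", new))

char = Character()
char._changed = ["name", "location"]  # only "name" has a hook defined
char.save(new=True)
assert calls == [("name", True)]
```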

path = 'evennia.utils.idmapper.models.SharedMemoryModel'
typename = 'SharedMemoryModelBase'
class evennia.utils.idmapper.models.WeakSharedMemoryModelBase(name, bases, attrs)[source]

Bases: evennia.utils.idmapper.models.SharedMemoryModelBase

Uses a WeakValue dictionary for caching instead of a regular one.

class evennia.utils.idmapper.models.WeakSharedMemoryModel(*args, **kwargs)[source]

Bases: evennia.utils.idmapper.models.SharedMemoryModel

Uses a WeakValue dictionary for caching instead of a regular one.

class Meta[source]

Bases: object

abstract = False
path = 'evennia.utils.idmapper.models.WeakSharedMemoryModel'
typename = 'WeakSharedMemoryModelBase'

evennia.utils.idmapper.models.flush_cache(**kwargs)[source]

Flush idmapper cache. When doing so, the cache will fire the at_idmapper_flush hook to allow each object to optionally handle its own flushing.

Uses a signal so we make sure to catch cascades.

evennia.utils.idmapper.models.flush_cached_instance(sender, instance, **kwargs)[source]

Flush the idmapper cache only for a given instance.

evennia.utils.idmapper.models.update_cached_instance(sender, instance, **kwargs)[source]

Re-cache the given instance in the idmapper cache.

evennia.utils.idmapper.models.conditional_flush(max_rmem, force=False)[source]

Flush the cache if the estimated memory usage exceeds max_rmem.

The flusher has a timeout to avoid flushing over and over in rapid succession. This means that for some setups the memory usage may still exceed the given limit; in that case a server with more memory is probably required for the given game.

  • max_rmem (int) – memory-usage estimation threshold after which the cache is flushed.

  • force (bool, optional) – forces a flush, regardless of timeout. Defaults to False.
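The threshold-plus-timeout behaviour can be sketched as follows. The real conditional_flush measures the server's actual resident memory; here the current estimate, the flush action and the timeout are passed in so the sketch is self-contained (those parameters are assumptions, not the actual API):

```python
import time

_LAST_FLUSH = 0.0
_FLUSH_TIMEOUT = 5.0  # minimum seconds between non-forced flushes


def conditional_flush(rmem_estimate, max_rmem, flush_func, force=False):
    """Flush via flush_func if the memory estimate exceeds max_rmem,
    but at most once per timeout window unless force is True."""
    global _LAST_FLUSH
    if rmem_estimate < max_rmem:
        return False  # memory usage is below the threshold
    now = time.time()
    if not force and now - _LAST_FLUSH < _FLUSH_TIMEOUT:
        return False  # within the timeout window: skip to avoid flush storms
    _LAST_FLUSH = now
    flush_func()
    return True
```

The timeout guard is the point: when a game's working set simply does not fit in max_rmem, the cache would otherwise be flushed on every check, destroying the benefit of caching entirely.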


evennia.utils.idmapper.models.cache_size(mb=True)[source]

Calculate statistics about the cache.

Note: we cannot get reliable memory statistics from the cache. While we could call sys.getsizeof on each object in the cache, the result is highly imprecise: for a large number of objects the total comes out many times larger than the actual memory usage of the entire server. Python is clearly reusing memory behind the scenes in ways we cannot easily measure here. Ideas are appreciated. /Griatch


Returns: total_num, {objclass: total_num, …}
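The return shape (a grand total plus a per-class breakdown) can be sketched like this; the computation from a plain per-class cache dict is a simplified assumption, not the actual implementation:

```python
def cache_size(caches):
    """Toy version: caches maps class name -> {pk: instance}.
    Returns (total_num, {objclass: num, ...})."""
    per_class = {name: len(instances) for name, instances in caches.items()}
    return sum(per_class.values()), per_class


total, breakdown = cache_size({"ObjectDB": {1: object(), 2: object()},
                               "AccountDB": {7: object()}})
assert total == 3
assert breakdown == {"ObjectDB": 2, "AccountDB": 1}
```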