
The art of cache limiting


There are several places in blender where results are cached to speed up the display of the end result.

Notably, for a given frame number, the old sequencer calculated the results of all tracks, regardless of whether they were needed or not.

Even worse, it never freed them afterwards; you had to click on refresh now and then to get rid of the cache. The final renderer, on the other hand, freed the cache automatically after each frame, thereby disabling the advantages of the cache completely. You will notice this misbehaviour if you use scene tracks for titles on the timeline: calling the renderer again and again for each frame can be somewhat slow...

We have to limit the cache usage to a user-provided upper limit (32 MB by default, changeable in the user preferences).

The generic solution: a cache limiter

So we have limited resources and want to free memory at some point when the memory pressure gets too high. Obviously we want to delete those elements in our cache that are used most seldom.

Mathematically speaking, there is an ordering relation on the cache elements that tells us which elements have to be kicked out first. This is implemented using a "deletion chain".

Since this ordering depends deeply on the data in the cache, we give control over it to the user of the cache. That also means that there is not one global cache, but one for each problem.

This gives us four basic operations:

  • insert: Insert an element into the cache management system. Newly inserted elements go to the end of the deletion chain, i.e. they are deleted last.
  • touch: Move a given element to the end of the deletion chain. This tells the cache manager that we just used the element.
  • enforce_constraints: (On the imbuf side this is integrated into the insert operation.) Delete elements from the head of the deletion chain as long as global memory usage is larger than the cache limit.
  • unmanage: Remove an element from the cache without deleting it! If you want to delete the element yourself, you obviously have to tell the cache about it first to prevent double frees.
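A minimal sketch of these four operations, with std::list standing in for the deletion chain (the names here are illustrative, not the real memutil API):

```cpp
#include <cassert>
#include <list>

// Illustrative sketch only: the deletion chain is a std::list whose
// front holds the next eviction candidate.
template <typename T>
class CacheSketch {
public:
    typedef typename std::list<T *>::iterator Handle;

    // insert: new elements go to the end of the deletion chain,
    // i.e. they are deleted last.
    Handle insert(T *elem) {
        chain.push_back(elem);
        return --chain.end();
    }

    // touch: move an element to the end, marking it recently used.
    void touch(Handle h) {
        chain.splice(chain.end(), chain, h);
    }

    // enforce_constraints: delete from the head of the chain while we
    // exceed the limit (an element count stands in for memory usage).
    void enforce_constraints(std::size_t limit) {
        while (chain.size() > limit) {
            delete chain.front();
            chain.pop_front();
        }
    }

    // unmanage: forget the element without deleting it.
    void unmanage(Handle h) {
        chain.erase(h);
    }

    std::size_t size() const { return chain.size(); }

private:
    std::list<T *> chain;
};
```

Note that touch is a constant-time splice: the element is relinked to the tail of the chain without any copying.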

Limiting ourselves to this set of operations would lead to obscure side effects in real life. Most likely we want to use several elements at the same time, and maybe the memory pressure is so high that the cache would delete an element immediately after it was inserted. We missed something: the cache limit has to be a soft limit! We can't blame the user for setting the cache limit 20 KB below his maximum memory usage. There must be some notion of locking elements in the cache, preventing the cache manager from deleting them. This is most easily done using a reference counter in each element.

This gives us three additional operations:

  • ref: Increase the reference counter.
  • unref: Decrease the reference counter.
  • get_refcount: Get the refcounter to check for cache zombies lurking around.
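How the eviction loop can honour these counters is sketched below (illustrative names, not the real memutil code): locked elements are simply skipped, which is exactly what makes the limit soft.

```cpp
#include <cassert>
#include <list>

// Each managed element carries a reference counter; the eviction loop
// may only delete entries whose refcount is zero.
struct Entry {
    int refcount;
    Entry() : refcount(0) {}
    void ref()   { ++refcount; }
    void unref() { --refcount; }
    int get_refcount() const { return refcount; }
};

// Walk the deletion chain from the front; skip locked entries instead
// of deleting them, even if that leaves us over the limit (soft limit).
inline void enforce(std::list<Entry *> &chain, std::size_t limit) {
    std::list<Entry *>::iterator it = chain.begin();
    while (chain.size() > limit && it != chain.end()) {
        if ((*it)->get_refcount() == 0) {
            delete *it;
            it = chain.erase(it);
        } else {
            ++it; // locked: cannot be evicted right now
        }
    }
}
```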

Elements with a refcount different from zero must not be deleted!

C++-Interface: memutil/MEM_CacheLimiter.h

  class BigFatImage {
  public:
      ~BigFatImage() { tell_everyone_we_are_gone(this); }
  };

  MEM_Cache<BigFatImage> BigFatImages;

  void doit(BigFatImage * b) {
      MEM_Cache_Handle<BigFatImage>* h = BigFatImages.insert(b);
      // ... work with the image via h ...
      // leave the image in the cache: do not delete b here
  }

C-Interface: memutil/MEM_CacheLimiterC-Api.h

MEM_CacheLimiterC * new_MEM_CacheLimiter( void (*data_destructor) (void * data))

Create a new cache limiter for "data" elements, which are deleted using the destructor "data_destructor".

void delete_MEM_CacheLimiter(MEM_CacheLimiterC * This)

Delete the cache limiter, keeping its elements untouched.

MEM_CacheLimiterHandleC * MEM_CacheLimiter_insert( MEM_CacheLimiterC * This, void * data)

Add an element to the cache management system and return a handle to control its behaviour in the cache.

void MEM_CacheLimiter_enforce_limits(MEM_CacheLimiterC * This)

Enforce limits on the cache.

void MEM_CacheLimiter_unmanage(MEM_CacheLimiterHandleC * handle)

Unmanage an element without deleting it!

void MEM_CacheLimiter_touch(MEM_CacheLimiterHandleC * handle)

Touch an element.

void MEM_CacheLimiter_ref(MEM_CacheLimiterHandleC * handle)

Increase the reference count of an element, thereby preventing it from being deleted by enforce_limits().

void MEM_CacheLimiter_unref(MEM_CacheLimiterHandleC * handle)

Decrease the reference count of an element.

int MEM_CacheLimiter_get_refcount(MEM_CacheLimiterHandleC * handle)

Get the current reference count for debugging purposes.

void * MEM_CacheLimiter_get(MEM_CacheLimiterHandleC * handle)

Get the raw data pointer from the cache limiter handle.

IMBUF-Interface

void IMB_cache_limiter_insert(struct ImBuf * i)

Add the given ImBuf to the cache management system, checking whether it is already in there. Cache limits are enforced in the same step.

void IMB_cache_limiter_unmanage(struct ImBuf * i)

Release this ImBuf from the manager.

void IMB_cache_limiter_touch(struct ImBuf * i)

Touch the ImBuf.

void IMB_cache_limiter_ref(struct ImBuf * i)

Increase the reference count of an element, thereby preventing it from being deleted by insert(). This has nothing to do with the function IMB_refImBuf()!

void IMB_cache_limiter_unref(struct ImBuf * i)

Decrease the reference counter.

int IMB_cache_limiter_get_refcount(struct ImBuf * i)

Get the reference counter for debugging purposes.
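To see how the documented cache limiter C functions fit together, here is a hedged end-to-end sketch. The signatures are the ones documented above, but the bodies are toy stand-ins: an element count replaces the real byte-based limit, and the real implementations live in memutil.

```cpp
#include <cassert>
#include <cstdlib>
#include <list>

// Toy stand-ins for the documented C API -- illustrative only.
struct MEM_CacheLimiterHandleC {
    void *data;
    int refcount;
};

struct MEM_CacheLimiterC {
    void (*data_destructor)(void *);
    std::list<MEM_CacheLimiterHandleC *> chain; // front = deleted first
    std::size_t limit;                          // element count stand-in
};

MEM_CacheLimiterC *new_MEM_CacheLimiter(void (*data_destructor)(void *)) {
    MEM_CacheLimiterC *c = new MEM_CacheLimiterC();
    c->data_destructor = data_destructor;
    c->limit = 1; // tiny limit so the example actually evicts something
    return c;
}

MEM_CacheLimiterHandleC *MEM_CacheLimiter_insert(MEM_CacheLimiterC *c,
                                                 void *data) {
    MEM_CacheLimiterHandleC *h = new MEM_CacheLimiterHandleC();
    h->data = data;
    h->refcount = 0;
    c->chain.push_back(h); // new elements are deleted last
    return h;
}

void MEM_CacheLimiter_ref(MEM_CacheLimiterHandleC *h)   { ++h->refcount; }
void MEM_CacheLimiter_unref(MEM_CacheLimiterHandleC *h) { --h->refcount; }

void MEM_CacheLimiter_enforce_limits(MEM_CacheLimiterC *c) {
    std::list<MEM_CacheLimiterHandleC *>::iterator it = c->chain.begin();
    while (c->chain.size() > c->limit && it != c->chain.end()) {
        if ((*it)->refcount == 0) { // soft limit: locked elements survive
            c->data_destructor((*it)->data);
            delete *it;
            it = c->chain.erase(it);
        } else {
            ++it;
        }
    }
}

// Sample destructor that counts how many elements were evicted.
static int g_freed = 0;
static void count_free(void *data) { std::free(data); ++g_freed; }
```

The IMB_cache_limiter_* functions above are thin wrappers around the same call sequence, with the ImBuf pointer as the managed data.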

STL goodie: An allocator for guardedalloc: memutil/MEM_Allocator.h

If you use the STL within blender you will most likely stumble across the problem that your favourite STL containers use their own memory management. Blender, however, has its own memory manager for the C part, which generates "electric fence" blocks around the allocated areas and notifies you about double frees.

Luckily the designers of the STL thought of this possibility and added the feature of "allocators" to the STL.

If you use the STL within blender you are encouraged to use the allocator provided by memutil/MEM_Allocator.h.

You have to replace

  std::list<int>

with

  std::list<int, MEM_Allocator<int> >

You may want to add typedefs to save yourself from the pain of writing this all the time... ;-)
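For illustration, here is a minimal stand-in with the same shape as such a C++98 allocator. It routes through std::malloc/std::free because guardedalloc's MEM_mallocN/MEM_freeN are not available outside blender, so treat it purely as a sketch of how the allocator is wired up, not as the real MEM_Allocator:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <list>
#include <new>

// Illustrative allocator: the real MEM_Allocator routes these calls
// through blender's guarded memory manager instead of malloc/free.
template <typename T>
struct ToyAllocator {
    typedef T value_type;
    typedef T *pointer;
    typedef const T *const_pointer;
    typedef T &reference;
    typedef const T &const_reference;
    typedef std::size_t size_type;
    typedef std::ptrdiff_t difference_type;

    template <typename U> struct rebind { typedef ToyAllocator<U> other; };

    ToyAllocator() {}
    template <typename U> ToyAllocator(const ToyAllocator<U> &) {}

    pointer allocate(size_type n, const void * = 0) {
        void *p = std::malloc(n * sizeof(T)); // real version: MEM_mallocN
        if (!p) throw std::bad_alloc();
        return static_cast<pointer>(p);
    }
    void deallocate(pointer p, size_type) { std::free(p); } // real: MEM_freeN

    void construct(pointer p, const T &v) { new (p) T(v); }
    void destroy(pointer p) { p->~T(); }
    size_type max_size() const { return size_type(-1) / sizeof(T); }
};

template <typename T, typename U>
bool operator==(const ToyAllocator<T> &, const ToyAllocator<U> &) { return true; }
template <typename T, typename U>
bool operator!=(const ToyAllocator<T> &, const ToyAllocator<U> &) { return false; }
```

Usage is exactly as shown above: std::list<int, ToyAllocator<int> > behaves like a plain std::list<int>, but every node allocation goes through the custom allocator.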