Memcached Memory Allocation

Memcached is a distributed memory caching system used to improve the performance and availability of hosted applications by reducing database load. It acts as a shared cache for all application nodes, serving as your application’s short-term memory.

Here’s how memory allocation in Memcached works and how it prevents memory fragmentation:

  • Slab Allocation System: Instead of allocating memory item by item, Memcached uses a slab allocation system. This improves memory usage and prevents the fragmentation that would otherwise build up as cached data expires.
  • Slabs and Pages: Each slab class is made up of 1 MB pages, and each page is divided into equal-sized blocks, or chunks.
  • Storing Data: When data is stored, Memcached determines its size and looks for a chunk that fits it. If a suitable free chunk is available, the data is written to it; if not, a new page is allocated and divided into chunks of the required size (a sketch of this matching step follows this list).
  • Updating Data: If an item is updated and its new size exceeds its current chunk, Memcached moves it to a chunk from a larger slab class.
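
To make the storing step concrete, here is a minimal Python sketch of how an item is matched to a slab class: it picks the smallest chunk size that still fits the item. The chunk sizes are illustrative values, not Memcached's exact defaults.

# Minimal sketch: match an item to the slab class with the smallest chunk
# that can hold it. Chunk sizes below are example values only.
chunk_sizes = [96, 120, 152, 192, 240, 304, 384, 480, 600]  # bytes, example classes

def pick_slab_class(item_size):
    """Return the index of the smallest chunk that can hold the item."""
    for class_id, chunk in enumerate(chunk_sizes):
        if item_size <= chunk:
            return class_id
    raise ValueError("item larger than the biggest chunk")

# A 250-byte item lands in the 304-byte class; the 54 unused bytes inside
# the chunk are the price paid for avoiding fragmentation.
selected = pick_slab_class(250)
print(selected, chunk_sizes[selected])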

This system ensures efficient memory use and reduces fragmentation.

Each Memcached instance holds many such pages allocated across its memory. This scheme prevents memory fragmentation, but it can waste memory when there are not enough items of a given size, leaving many partially filled pages. The size distribution of stored items therefore matters.
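
As a rough illustration of that waste, the short Python calculation below shows how a page that is only partially filled still occupies a full 1 MB; the 320-byte chunk size and 113 stored items are borrowed from the measurements later in this article.

# Rough illustration of per-page waste when only a few items share a chunk size.
PAGE_SIZE = 1024 * 1024   # 1 MB slab page
chunk_size = 320          # bytes per chunk in this slab class
stored_items = 113        # items actually sitting in the page

chunks_per_page = PAGE_SIZE // chunk_size
used_bytes = stored_items * chunk_size
print(f"{chunks_per_page} chunks available, {stored_items} in use")
print(f"page utilisation: {used_bytes / PAGE_SIZE:.1%}")
# -> 3276 chunks available, 113 in use; page utilisation: 3.4%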

You can adjust the slab growth factor while your application is running. To do this, click the Config button next to the Memcached node, go to the conf directory, and open the Memcached file. Edit it as follows:

OPTIONS="-vv 2>> /var/log/memcached/memcached.log -f 2 -n 32"

In this example:

-f 2 sets the chunk growth factor: each slab class's chunk size is double the previous one, which yields 14 slab classes.

The value after -n (32 here) defines the minimum space allocated for an item's key, flags, and value.
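
To see where the 14 slab classes come from, the Python sketch below grows the chunk size geometrically until it reaches the 1 MB page limit. The 80-byte starting chunk is an assumption (Memcached's per-item header on top of the -n 32 minimum, rounded for alignment); exact sizes depend on the build.

# Sketch: how many slab classes a chunk growth factor of 2 produces.
PAGE_SIZE = 1024 * 1024    # 1 MB slab page
growth_factor = 2          # the -f value
chunk = 80                 # assumed smallest chunk size

classes = []
while chunk < PAGE_SIZE:
    classes.append(chunk)
    chunk *= growth_factor

print(len(classes), "slab classes:", classes)
# -> 14 slab classes: 80, 160, 320, 640, ..., 655360

These assumed sizes are at least consistent with the 320 B and 640 B chunks reported for classes 3 and 4 in the results below.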

Here are the results we obtained:

Chunk details:

#  Item_Size  Max_age  Pages  Count  Full?  Evicted  Evict_Time  OOM
3  320B       550s     1      113    yes    0        0           0
4  640B       681s     1      277    yes    0        0           0

Memory usage:

                   total  used  free  shared  buffers  cached
Mem:                 128    84    43       0        0      70
-/+ buffers/cache:          14   113
Swap:                  0     0
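
Statistics like the chunk table above can also be pulled from a running instance over Memcached's text protocol with the stats slabs command (memcached-tool prints similar output). Below is a minimal Python sketch that queries it over a raw socket; the host and port are assumptions.

# Minimal sketch: read slab statistics over Memcached's text protocol.
import socket

def stats_slabs(host="127.0.0.1", port=11211):
    """Send "stats slabs" and return the reported key/value pairs."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(b"stats slabs\r\n")
        data = b""
        while not data.endswith(b"END\r\n"):
            part = sock.recv(4096)
            if not part:
                break
            data += part
    stats = {}
    for line in data.decode().splitlines():
        if line.startswith("STAT "):
            key, value = line.split()[1:3]
            stats[key] = value
    return stats

# Print the chunk size and page count of each slab class.
for key, value in sorted(stats_slabs().items()):
    if key.endswith((":chunk_size", ":total_pages")):
        print(key, value)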

Now, let’s revert to the default settings and see what values we’ll obtain.

OPTIONS="-vv 2>> /var/log/memcached/memcached.log"

Chunk details:

#  Item_Size  Max_age  Pages  Count  Full?  Evicted  Evict_Time  OOM
5  240B       765s     1      27     yes    0        0           0
6  304B       634s     1      93     yes    0        0           0
7  384B       634s     1      106    yes    0        0           0
8  480B       703s     1      133    yes    0        0           0
9  600B       634s     1      57     yes    0        0           0

Memory usage:

                   total  used  free  shared  buffers  cached
Mem:                 128    87    40       0        0      70
-/+ buffers/cache:          17   110
Swap:                  0     0     0
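
One way to read the two result sets is to compare how many 1 MB slab pages each run needed: the tuned run keeps its items in 2 pages across two slab classes, while the default run spreads them over 5 pages across five classes. A back-of-the-envelope Python comparison based on the Pages columns above:

# Back-of-the-envelope comparison using the Pages columns above.
PAGE_SIZE_MB = 1

tuned_pages = 1 + 1                  # classes 3 and 4 with -f 2 -n 32
default_pages = 1 + 1 + 1 + 1 + 1    # classes 5-9 with the default settings

print("tuned run:  ", tuned_pages * PAGE_SIZE_MB, "MB of slab pages")
print("default run:", default_pages * PAGE_SIZE_MB, "MB of slab pages")
# The 3 MB gap matches the difference in "used" memory (14 MB vs 17 MB)
# reported above.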

You can also use the -L parameter to enable large memory pages (where the OS supports them), which increases the memory page size, reduces TLB misses, and can improve performance.

This simple optimization helps us make better use of allocated memory.
