Caching Strategy

Abhilash Ranjan
5 min read · Sep 7, 2020


Caching Strategies

Before jumping into caching strategies, let us first understand caching itself. Accessing data from RAM is always faster than accessing other media such as a hard disk or an object store. With a cache, we keep frequently used data in RAM and let the application read it from there, which is faster than fetching it from the database.

Caching adds a few benefits to any web application:

· It reduces latency, because data is fetched from RAM rather than from disk.

· It reduces the load on the database, because not every request goes directly to the DB.

· It reduces cost, since serving data directly from the cache avoids repeated, more expensive database calls.

Caching is available at different layers: database, web proxy, CDN/DNS, search level, query level, and so on.

Eviction / Update policy in caching

An eviction policy decides which objects to keep in the cache and which to replace, typically using an algorithm such as LRU (Least Recently Used). Eviction can also work on the TTL (time to live) of an object: when the time expires, the key is removed from the cache and the next read falls back to the DB. Different eviction strategies exist, based on size, time, frequency, and so on.
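As a rough illustration, here is a minimal LRU cache with a per-entry TTL, sketched in Python (the class and parameter names are just for illustration, not from any particular library):

```python
import time
from collections import OrderedDict

class LRUCacheWithTTL:
    """Minimal sketch of an LRU cache whose entries also expire after a TTL."""

    def __init__(self, capacity=128, ttl_seconds=60):
        self.capacity = capacity
        self.ttl = ttl_seconds
        self.store = OrderedDict()  # key -> (value, expiry_timestamp)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None                      # cache miss
        value, expires_at = entry
        if time.time() > expires_at:
            del self.store[key]              # TTL expired, evict the key
            return None
        self.store.move_to_end(key)          # mark as most recently used
        return value

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = (value, time.time() + self.ttl)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict the least recently used key
```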

Distributed cache

In a distributed cache, the cache is spread over a network across several nodes. A distributed hash table is responsible for mapping keys to the nodes that hold their values. Therefore, when a resource ri is requested, the hash table tells us which of the machines m1…mn is responsible for caching ri, and the request is directed to that machine mi. At machine mi, things work just like a local cache: mi may need to fetch the value and update its cache if ri is not in memory, and it then returns the cached value back to the requesting server.
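A very simplified sketch of how a key can be mapped to one of the cache nodes m1…mn (node names are made up; a real system would use consistent hashing so that adding or removing nodes moves fewer keys):

```python
import hashlib

nodes = ["cache-node-1", "cache-node-2", "cache-node-3"]  # the machines m1..mn

def node_for_key(key: str) -> str:
    """Pick the node responsible for caching this key."""
    digest = hashlib.md5(key.encode()).hexdigest()
    index = int(digest, 16) % len(nodes)   # same key always maps to the same node
    return nodes[index]

# A request for resource "ri" is routed to the node returned here.
print(node_for_key("ri"))
```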

Use Cases Of Distributed Caches

Database Caching

A cache layer in front of a database keeps frequently accessed data in memory to cut down latency and unnecessary load on the database. With the cache in place, far fewer requests reach the DB, which relieves the database bottleneck.

User Sessions Store

User sessions can be stored in the cache so that user state, such as email drafts, partially filled forms, data entry, or workflow creations, is not lost if any of the nodes goes down.

If an instance goes down, a new instance spins up, reads the user state from the cache, and continues the session without the user noticing anything amiss.
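For example, with a Redis-backed session store, any instance can pick up a session by its ID. A rough sketch using the redis-py client (key names and TTL are illustrative):

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379)

def save_session(session_id, state, ttl_seconds=1800):
    # Any app instance can write the user's state under the session key.
    r.setex(f"session:{session_id}", ttl_seconds, json.dumps(state))

def load_session(session_id):
    # A freshly started instance reads the same key and resumes the session.
    raw = r.get(f"session:{session_id}")
    return json.loads(raw) if raw else None

save_session("abc123", {"draft": "Hello team,", "step": 2})
print(load_session("abc123"))
```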

Cross-Module Communication & Shared Storage

In-memory distributed caching is also used for message passing between different micro-services running in conjunction with each other, for example sharing data between a Spring Batch job and an API.

It stores shared data that is commonly accessed by all the services and acts as a backbone for micro-service communication.
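As a sketch, two services can exchange data simply by agreeing on a shared key in the cache (the service roles and key names here are made up for illustration):

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379)

# Service A (say, a batch job) publishes the result it wants to share.
def publish_batch_result(job_id, result):
    r.set(f"batch:result:{job_id}", json.dumps(result))

# Service B (say, an API) reads the shared data without calling Service A directly.
def read_batch_result(job_id):
    raw = r.get(f"batch:result:{job_id}")
    return json.loads(raw) if raw else None
```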

In-memory Data Stream Processing & Analytics

As opposed to the traditional approach of storing data in batches and then running analytics on it, in-memory data stream processing processes the data and runs analytics on it as it streams in, in real time.

This is helpful in many situations such as anomaly detection, fraud monitoring, real-time online gaming stats, real-time recommendations, payment processing, etc.

Proxy Caching

Proxy caching is one of the most common methods for serving HTTP content: a proxy sitting between the client and the origin server keeps copies of frequently requested responses such as HTML and CSS. The first request is fetched from the origin, and subsequent requests for the same resource are served from the cache. The Google home page is a familiar example: once it has been fetched, opening the browser and typing google.com typically loads most of the page from a cache rather than from the origin.
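A toy version of that behaviour, fetching a URL from the origin only on the first request and serving it from an in-memory cache afterwards (purely illustrative; real proxies also honour headers such as Cache-Control):

```python
import requests

proxy_cache = {}  # url -> response body

def fetch(url):
    if url in proxy_cache:
        return proxy_cache[url]          # served from the proxy cache
    body = requests.get(url).text        # first request goes to the origin server
    proxy_cache[url] = body
    return body
```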

Distributed Caching Strategies

There are different caching strategies we can follow depending on the needs of the web app. Let's discuss them one by one.

Cache Aside

In this strategy, the cache sits alongside the database. The application first requests the data from the cache. If the data exists (we call this a ‘cache hit’), the app retrieves it directly. If not (we call this a ‘cache miss’), the app requests the data from the database and writes it to the cache so that it can be served from the cache on the next read.

Writes in this strategy go directly to the database, which means the cache and the database can become inconsistent.
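A minimal cache-aside sketch, using plain dictionaries as stand-ins for a real cache client (such as Redis or Memcached) and the database:

```python
database = {"user:1": {"name": "Abhilash"}}   # stand-in for the real DB
cache = {}                                    # stand-in for Redis/Memcached

def get_user(key):
    value = cache.get(key)
    if value is not None:          # cache hit: return directly
        return value
    value = database.get(key)      # cache miss: read from the database
    if value is not None:
        cache[key] = value         # write it to the cache for the next read
    return value

def update_user(key, value):
    database[key] = value          # writes go straight to the database,
                                   # so the cached copy can become stale
```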

Read-Through

Unlike cache aside, here the cache sits between the application and the database. The application only requests data from the cache. If a ‘cache miss’ occurs, the cache is responsible for retrieving the data from the database, updating itself, and returning the data to the application. The information in this strategy, too, is lazily loaded into the cache, only when the user requests it.
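In code, the difference from cache aside is that the load-from-database logic lives inside the cache, not in the application. A rough sketch (the loader function stands in for whatever query your database layer exposes):

```python
class ReadThroughCache:
    """The application only ever talks to this cache; on a miss the
    cache loads the value from the database itself (lazy loading)."""

    def __init__(self, loader):
        self.loader = loader    # e.g. a function that runs a DB query
        self.store = {}

    def get(self, key):
        if key not in self.store:
            self.store[key] = self.loader(key)   # cache miss: cache fetches from the DB
        return self.store[key]

# Application side: no database code at all.
cache = ReadThroughCache(loader=lambda key: f"value for {key} from DB")
print(cache.get("user:1"))
```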

Write-Through

In this strategy, before the data is written to the DB, the cache is updated with it.

This keeps the cache and the database highly consistent, though it adds a little latency to write operations since the data has to be updated in the cache as well. It works well for write-heavy workloads such as massively multiplayer online games.

This strategy is used with other caching strategies to achieve optimized performance.
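A write-through write path, sketched with in-memory stand-ins for the cache and the database; both are updated synchronously inside the same call:

```python
cache = {}
database = {}

def write_through(key, value):
    cache[key] = value       # update the cache first...
    database[key] = value    # ...then write to the DB in the same operation,
                             # so reads from the cache are never stale

write_through("score:player42", 1500)
assert cache["score:player42"] == database["score:player42"]
```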

Write-Back

In the write-back caching strategy, data is written directly to the cache instead of the database. The cache then writes the data to the database after some delay, as defined by the business logic: it only flushes the accumulated updates to the DB once in a while.

A risk with this approach is that if the cache fails before the DB is updated, the data may be lost. This strategy, too, is usually combined with cache aside to get the most out of both.
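A write-back sketch: writes land only in the cache, and a periodic flush pushes the accumulated changes to the database (the names and flush trigger are arbitrary; anything not yet flushed is lost if the cache dies):

```python
cache = {}
dirty_keys = set()   # keys written to the cache but not yet persisted
database = {}

def write_back(key, value):
    cache[key] = value       # the write goes to the cache only
    dirty_keys.add(key)

def flush_to_db():
    # Called on a schedule (e.g. every few seconds), per the business logic.
    for key in list(dirty_keys):
        database[key] = cache[key]
    dirty_keys.clear()

write_back("counter:views", 10)
write_back("counter:views", 11)
flush_to_db()                # only now does the DB see the latest value
```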

Popular Distributed Caches

Memcached

Redis Cluster
