One good thing about microservices is that they are loosely coupled: you can easily make changes to one and deploy it without affecting other services. Being loosely coupled also means that each microservice has its own database (or schema).

With each service owning its own data, we often need to rely on other services for information such as configuration or core business data. Usually we get this information by issuing HTTP requests. These are definitely slower than having the data locally, they introduce uncertainty because HTTP requests can fail, and, depending on the volume of requests, they can become a real problem.

One possible solution to this is caching. We can cache the information close to the caller for a period of time (say one hour), which can dramatically reduce the number of HTTP calls to another service. Alternatively, we can cache the information after fetching it from the database and reduce the number of queries.

In one real-world case, we reduced calls from one service to another from 2.2 million per week to 47 thousand just by caching the information in the caller for one hour.

The most common and widespread caching pattern is Cache-Aside, described by the image below.

from A Hitchhiker’s Guide to Caching Patterns


In this approach, we first try to get the information from the cache. If it is not there, we perform the real call and cache the result. The next call with the same key (within the TTL) will find the key in the cache and fetch the value from there instead of performing the real call.

Cache types

The two most common types of caching that I know of are the in-memory cache (eg. Hazelcast) and the remote data structure server (eg. Redis). An in-memory cache, as the name says, stores the data in the application's own random-access memory (RAM), while a remote data structure server stores the data in a separate service reached over the network via its own protocol (Redis uses RESP, for example).

Caching period

The caching period will depend on how well your business case can tolerate stale data. If the information rarely changes (like settings), we can go for a longer period; if it changes more frequently, start by trying a shorter period.

Typical time to live (TTL) values are one minute, one hour, one day, or even no expiration at all.

One approach is to have a different TTL for each method, according to the business need.
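A per-method TTL can be expressed as a small decorator. This is a sketch, not a specific library's API; the `get_settings` and `get_price` functions (and their return values) are illustrative placeholders for real calls:

```python
import time
from functools import wraps

def cached(ttl_seconds):
    """Cache-aside decorator with a TTL chosen per method."""
    def decorator(fn):
        store = {}  # args -> (value, expires_at)
        @wraps(fn)
        def wrapper(*args):
            entry = store.get(args)
            if entry is not None and time.monotonic() < entry[1]:
                return entry[0]  # cache hit
            value = fn(*args)    # cache miss: real call
            store[args] = (value, time.monotonic() + ttl_seconds)
            return value
        return wrapper
    return decorator

@cached(ttl_seconds=3600)   # settings rarely change: one hour
def get_settings():
    return {"feature_x": True}   # placeholder for the real fetch

@cached(ttl_seconds=60)     # prices change more often: one minute
def get_price(product_id):
    return 9.99                  # placeholder for the real fetch
```

Each decorated method gets its own store and its own expiry, so business needs can be tuned independently.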

You can also evict or update cache entries when the underlying information changes, instead of waiting for the TTL to expire.

Error handling

Errors are going to happen, that is for sure. Even with a 0.01% error rate, they will happen, so you have to have a way to handle them. The cache service may be offline, the connection can break, the service can run out of memory, and so on.

One approach is to use a short timeout (like 500 ms or one second): if anything goes wrong when fetching the data from the cache, you fall back to calling the real method/endpoint. With this approach, you get the benefits of the cache while reducing the chances that it will break your system.
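A minimal sketch of that fallback. In practice the short timeout would be configured on the cache client itself (for example, a Redis client's socket timeout); here `get_user_cached` is a hypothetical cache lookup that fails, standing in for any cache error:

```python
def get_user_cached(user_id):
    # Hypothetical cache lookup with a short timeout (e.g. 500 ms).
    # Here it always fails, to demonstrate the fallback path.
    raise TimeoutError("cache unreachable")

def fetch_user(user_id):
    # The real call (HTTP request or database query).
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    try:
        return get_user_cached(user_id)
    except Exception:
        # Any cache failure (timeout, connection error, serialization
        # problem) degrades to the real call instead of failing the request.
        return fetch_user(user_id)
```

The caller never sees the cache outage; the only symptom is the latency of the real call.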


Happy coding!