
Introduction to Caching and Redis

Published Mar 01, 2018 · Last updated Aug 28, 2018

What is this Redis thing?
What exactly is Caching and how is it done?

This article is not a deep dive into Redis; it is a simple introduction to Redis and to how caching works. The next article in the series will, by means of an example, show you how to take advantage of Redis for caching in order to improve your application's speed and performance.

What is Redis?

According to the official definition on redis.io, Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache, and message broker.

Redis stores data as key-value pairs, so retrieving data is very easy: there are no complicated relationships or other operations of the kind that make relational databases slow.

Redis supports a number of datatypes (strings, hashes, lists, sets, and sorted sets) and it stores data in memory, which makes it very fast.
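To make those data types a little more concrete, here is a small sketch using the redis-py client. It assumes a Redis server is running locally on the default port, and the key names ("greeting", "user:1", and so on) are just placeholders for illustration:

```python
import redis

# Connect to a local Redis server (assumed to be running on the default port).
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# String
r.set("greeting", "hello")
print(r.get("greeting"))                  # "hello"

# Hash: field/value pairs stored under one key
r.hset("user:1", mapping={"name": "Jide", "city": "Lagos"})
print(r.hgetall("user:1"))                # {"name": "Jide", "city": "Lagos"}

# List: ordered, allows duplicates
r.rpush("recent_pages", "/home", "/profile")
print(r.lrange("recent_pages", 0, -1))    # ["/home", "/profile"]

# Set: unordered, unique members
r.sadd("tags", "python", "redis", "python")
print(r.smembers("tags"))                 # {"python", "redis"}

# Sorted set: members ordered by their score
r.zadd("leaderboard", {"jide": 120, "ada": 95})
print(r.zrange("leaderboard", 0, -1, withscores=True))
```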

What is Redis Good for?

Redis can be used for a number of things, such as the following (a short sketch of a couple of these follows the list):

Caching
Counting
Queues
Pub and Sub
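
Here is a small, hedged sketch of two of the uses listed above, counting and queueing, plus a pub/sub publish. It again assumes redis-py and a local server, and the key and channel names are made up:

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Counting: INCR is atomic, so it stays correct even with many concurrent clients.
r.incr("page:home:views")          # 1
r.incr("page:home:views")          # 2

# Queue: push jobs onto one end of a list, pop them off the other.
r.rpush("jobs", "send-welcome-email")
job = r.lpop("jobs")               # "send-welcome-email"

# Pub/Sub: publish a message to a channel; any subscriber to it receives the message.
r.publish("notifications", "profile updated")
```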

Caching with Redis

To start with, let's look into what caching is and how it can make your web application faster.

What is Caching?

Caching is the process of storing data in a cache. A cache is a temporary data store where data is kept for later use.

A cache is easier for the client (or server) to reach than a permanent data store, such as a database or an external API endpoint, which might be located on a different service and take more time and resources to reach.

How Caching Works

The image below is not 100% accurate, since there is a lot going on under the hood and I had to keep it as simple as possible.
[Diagram: two request/response flows, one where the server queries the database for every request and one where it checks the Redis cache first]

In the first illustration, the server queries the database every time a request for profile information comes from the client (Mr. Jide).

Let's assume Mr. Jide requests this data 30 times during his browsing session, each request takes 10 seconds to complete, and the response time remains constant for every request.

10 secs * 30 requests = 300 secs = 5 minutes.

In the second illustration, the server looks into the cache whenever Mr. Jide requests profile information and queries the database only if the data is not available in the cache (Redis).

One advantage of the second design over the first is that the response time for subsequent requests for the same data is shorter: the data has been cached in memory (Redis), so there is no need to query the database, which is an expensive operation.

We also use fewer server resources, leading to an improvement in application performance.

Compared to querying the database, getting data from the cache is easier and faster for the server.
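
Here is one possible sketch of the pattern described above (often called cache-aside). The function name, the 10-minute expiry, and the `query_database_for_profile` helper are all assumptions made for illustration, not a fixed recipe:

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def get_profile(user_id):
    cache_key = f"profile:{user_id}"

    # 1. Look in the cache first.
    cached = r.get(cache_key)
    if cached is not None:
        return json.loads(cached)          # cache hit: no database query needed

    # 2. Cache miss: fall back to the (slower) database.
    profile = query_database_for_profile(user_id)   # hypothetical database call

    # 3. Store the result in Redis for later requests, with an expiry
    #    so stale data eventually disappears (10 minutes here, as an example).
    r.set(cache_key, json.dumps(profile), ex=600)
    return profile
```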

For the second design, let's assume the request-response cycle takes 5 seconds (after the first call), and Mr. Jide requests this data 30 times. How much time will the 30 requests take?

10 secs * 1 request = 10 secs (initial request)
5 secs * 29 requests = 145 secs (subsequent requests)
Total: 10 secs + 145 secs = 155 secs = 2 mins 35 secs

Using the first design, the requests took a whopping 5 minutes, while using the second design, the total trip time was 2 mins 35 secs.

This isn't just about speed! Imagine what difference it would make to your users and how much money you could save on server costs.

In the next article in this series, I will show you how you can implement caching using Redis.

Was this article informative? Kindly like, comment, and share with others.
