
AWS ElastiCache for Redis: How to Use the AWS Redis Service

Caching is a powerful tool engineers have at their disposal to store and access data very quickly. With caching, engineers are able to scale data sources, mitigate unpredictable workload spikes, and contribute to application availability by continuing to serve data when external data sources experience failures.

The most common use cases for a distributed cache are to store and retrieve user session data in highly scalable web services or placing a cache in front of a relational or non-relational database to accelerate queries and avoid overloading the primary database. Redis, an open-source NoSQL datastore, is a popular distributed cache store that can play an integral part in AWS Big Data and AWS database deployment.
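To make the database-caching pattern concrete, here is a minimal cache-aside sketch in Python using the open-source redis-py client. The endpoint, key names, and the fetch_product_from_db helper are hypothetical placeholders rather than code from any particular application:

import json
import redis

# Connect to a Redis instance (host/port are placeholders for your own endpoint).
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_product_from_db(product_id):
    # Placeholder for a query against the primary database.
    return {"id": product_id, "name": "example product"}

def get_product(product_id):
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        # Cache hit: the primary database is never touched.
        return json.loads(cached)
    # Cache miss: query the database, then store the result with a 60-second TTL.
    product = fetch_product_from_db(product_id)
    cache.set(key, json.dumps(product), ex=60)
    return product

On a cache hit the database is bypassed entirely; on a miss the result is written to Redis with a short TTL so subsequent reads are served from memory.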

In this blog, we’ll take a closer look at Redis and how AWS ElastiCache Redis deployments can be set up so you can take advantage of the low latency and high performance of a distributed cache in your deployment.


What Is Redis?

Redis is an open-source, in-memory data structure cache and store, developed and maintained by an enthusiastic Redis community. It is widely adopted as a fast in-memory database or application cache solution. What sets Redis apart from many other solutions is that it can serve as a cache and a database at the same time.

Redis keeps datasets in memory and only uses slower disk-based storage to periodically persist data so it survives machine restarts. This enables Redis to read and write data with sub-millisecond latency while supporting multiple data structures, with built-in replication and clustering capabilities.
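As a rough illustration of those data structures, the short redis-py sketch below writes to a string, a hash, and a sorted set against a local Redis instance; the key names are made up for the example:

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Strings: simple key/value pairs.
r.set("greeting", "hello")

# Hashes: field/value maps, handy for storing objects.
r.hset("user:42", mapping={"name": "Ada", "plan": "pro"})

# Sorted sets: members ranked by a score, e.g. a leaderboard.
r.zadd("leaderboard", {"ada": 120, "bob": 95})

print(r.get("greeting"))                                  # hello
print(r.hgetall("user:42"))                               # {'name': 'Ada', 'plan': 'pro'}
print(r.zrange("leaderboard", 0, -1, withscores=True))    # ranked members with scores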

From an operational point of view, Redis can be challenging to deploy, monitor, and scale on your own. These challenges are much easier to overcome by using a managed service such as AWS ElastiCache for Redis.

What Is Amazon ElastiCache for Redis?

AWS ElastiCache is a managed caching service compatible with both Redis and Memcached. When it comes to Redis, ElastiCache offers a fully managed platform that makes it easy to deploy, manage, and scale a high-performance distributed in-memory data store cluster. These capabilities can significantly decrease the operational overhead of maintaining machines, patching software, monitoring, recovering from failures, and taking backups.

AWS ElastiCache is fully compatible with the usual Redis data structures, APIs, and clients, allowing your existing applications that already use Redis to start using ElastiCache without any code changes. It supports both Redis cluster and non-cluster modes, providing enhanced high availability and reliability with automatic failover across availability zones.

With the support of online cluster resizing, ElastiCache makes it easy to scale in or out in order to adapt to changing system demands without any downtime. From a security and compliance point of view, AWS ElastiCache has built-in VPC support and offers encryption in-transit and at-rest capabilities, combined with the native Redis AUTH feature for authentication and authorization support.
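If you enable in-transit encryption and Redis AUTH on a cluster, connecting from redis-py might look like the following sketch; the endpoint and token are placeholders you would replace with your own values:

import redis

# Placeholders: substitute your cluster's primary endpoint and AUTH token.
r = redis.Redis(
    host="my-redis-cluster.xxxxxx.ng.0001.use1.cache.amazonaws.com",
    port=6379,
    ssl=True,                   # required when in-transit encryption is enabled
    password="my-auth-token",   # the Redis AUTH token configured on the cluster
    decode_responses=True,
)
r.ping()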

Getting Started with Amazon ElastiCache for Redis

Step 1: Setting up the AWS Environment

Before you can begin using the Amazon ElastiCache Redis service, you’ll need to set up your AWS environment. Here’s a quick walkthrough on how to do that:

1.1. Start by deploying an EC2 instance in your AWS environment. The instance will be used to run a sample application that leverages a Redis cluster. For simplicity, we will use the same Amazon VPC for both the EC2 instance and the ElastiCache cluster so that additional configuration isn’t required.

If you are unfamiliar with this step, you can find out more on how to deploy an EC2 instance here.
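If you prefer to script this step instead of using the console, a boto3 sketch along these lines can launch the instance; the AMI ID, key pair name, and subnet ID are placeholder values you would replace with your own:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # use your own region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",       # placeholder Amazon Linux AMI ID
    InstanceType="t2.micro",
    KeyName="my-key-pair",                 # placeholder key pair name
    SubnetId="subnet-0123456789abcdef0",   # placeholder subnet in your VPC
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])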

1.2. Once your EC2 instance is up and running, go into its details and copy the “Public IPv4 DNS” URL. It will be used later to access the example web application.

EC2 Instance Details

1.3. The security group assigned to the EC2 instance should allow two custom TCP inbound rules. The first rule is for the Redis default TCP port (6379) and uses the same security group as its source, allowing connections on this port from any instance within that security group. The second rule is for the example application's TCP port (5000) and uses 0.0.0.0/0 as its source, so you can access the sample web application from your computer.

Security Group Inbound Rules
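If you want to add these rules programmatically rather than through the console, a boto3 sketch like the one below should do it; the security group ID is a placeholder for the group attached to your EC2 instance:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # use your own region
sg_id = "sg-0123456789abcdef0"  # placeholder: security group attached to the EC2 instance

ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[
        {   # Redis default port, reachable only from instances in the same security group
            "IpProtocol": "tcp", "FromPort": 6379, "ToPort": 6379,
            "UserIdGroupPairs": [{"GroupId": sg_id}],
        },
        {   # Sample web application port, open to the internet for this demo
            "IpProtocol": "tcp", "FromPort": 5000, "ToPort": 5000,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        },
    ],
)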

Step 2: Create an AWS ElastiCache Cluster for Redis

Now we’ll see how to create an AWS ElastiCache cluster for Redis.

2.1. Open the ElastiCache Dashboard in the AWS Console and click on the “Get Started Now” button. Keep in mind that the AWS Region selected in the top right corner will be used as a location for your AWS Redis cache cluster deployment. Use the same region where your EC2 instance is located.

ElastiCache Dashboard

2.2. Start by creating a new cluster and selecting “Redis” as the ElastiCache engine. Notice that AWS Redis Cluster Mode can optionally be enabled. While not required for this example, it is good to keep in mind that Cluster Mode enables you to horizontally scale the AWS Redis cluster and provision up to 500 primary nodes, making it highly available and increasing its fault tolerance.

ElastiCache Engine

2.3. Use the default Amazon Cloud location for the cluster and provide a unique name to identify the cluster (e.g. “elasticache-redis”).

2.4. ElastiCache provides a variety of cache instance types to choose from, each targeting different performance and storage needs. For this example, you can select the “cache.t2.micro” type which is enough for this demonstration and is free tier eligible.

2.5. Proceed with the default options, but take note of some advanced features available such as different engine versions, multi-replica and multi-AZ support for enhanced high availability and compatibility.

Redis Settings

2.6. Under Advanced settings, create a new subnet group, providing a unique name to identify it. Remember to select the same VPC as your EC2 Instance, and at least two subnets in case you want to try the Multi-AZ feature.

2.7. Under the Security panel, select the previously created security group that was assigned to the EC2 instance.

Advanced Redis Settings

2.8. The automatic backup feature is selected by default, but you can uncheck that box for this example since you won't be needing it.

2.9. Once you review all the settings, click the “Create” button.

Import, Backup and Maintenance
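If you would rather script steps 2.1 through 2.9, a rough boto3 equivalent is sketched below. It creates a single-node, cluster-mode-disabled Redis cluster that mirrors the console settings above; the subnet IDs and security group ID are placeholders:

import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")  # use your own region

# Subnet group in the same VPC as the EC2 instance (placeholder subnet IDs).
elasticache.create_cache_subnet_group(
    CacheSubnetGroupName="redis-subnet-group",
    CacheSubnetGroupDescription="Subnets for the elasticache-redis demo cluster",
    SubnetIds=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
)

# Single-node Redis cluster, roughly matching the console choices above.
elasticache.create_cache_cluster(
    CacheClusterId="elasticache-redis",
    Engine="redis",
    CacheNodeType="cache.t2.micro",
    NumCacheNodes=1,
    CacheSubnetGroupName="redis-subnet-group",
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder security group ID
    SnapshotRetentionLimit=0,                   # disable automatic backups, as in step 2.8
)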

2.10. Once the Redis Cluster becomes available, click on the arrow to display its configuration details, and copy the "Primary Endpoint" URL to be used later on by the example web application.

Redis Cluster Details
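As an alternative to copying the endpoint from the console, you can look it up with boto3. For a single-node cluster like the one created above, the node endpoint is the address your application connects to:

import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")  # use your own region

response = elasticache.describe_cache_clusters(
    CacheClusterId="elasticache-redis",
    ShowCacheNodeInfo=True,
)
node = response["CacheClusters"][0]["CacheNodes"][0]
print(node["Endpoint"]["Address"], node["Endpoint"]["Port"])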

Step 3: How to Use Redis as a Session Store for a Web Application

Redis is commonly used as a session store in scalable web applications that need to store and manage users' session data. In this example, we will use a simple web application that enables visitors to log in and log out and uses Redis to store their session data. The application is part of AWS's own example collection for ElastiCache, and you can find its code among other examples here.

3.1. Establish an interactive remote SSH session with the EC2 instance you deployed earlier, and run the following commands in order to install the required tools and dependencies needed by the application:

# Install Git and Python 3
sudo yum install git -y
sudo yum install python3 -y
# Install virtualenv to create an isolated Python environment
sudo pip3 install virtualenv
# Fetch the AWS ElastiCache samples and move into the session store example
git clone https://github.com/aws-samples/amazon-elasticache-samples/
cd amazon-elasticache-samples/session-store
# Create and activate a virtual environment, then install the app's dependencies
virtualenv venv
source ./venv/bin/activate
pip3 install -r requirements.txt

3.2. Define the following environment variables for the application: REDIS_URL, FLASK_APP, and SECRET_KEY.

The REDIS_URL value (shown as <your_redis_endpoint> below) should be set to the ElastiCache Primary Endpoint value you saved earlier. The FLASK_APP value should point to the Python file (example-4.py) of the web application example we are going to run. The SECRET_KEY value can be any random string, since it's only used by the example application as a seed to generate the session cookie.

export REDIS_URL="redis://<your_redis_endpoint>"
export FLASK_APP=example-4.py
export SECRET_KEY=some_random_value

3.3. Run the example web application using the command:

flask run -h 0.0.0.0 -p 5000 --reload

3.4. With the application running, use the Public DNS name from your EC2 instance details to access the web application in your browser on port 5000: the URL will be http://<your_ec2_public_dns>:5000. By default, three endpoints are made available: "/", "/login" and "/logout".

Different Endpoints

3.5. Log in to the application through the /login endpoint using any random credentials and refresh the page a few times. You will notice that the number of visits increments every time the same user visits, and if you allow 10 seconds to elapse without refreshing the page, the counter resets to 1.

Under the hood, upon login the application generates a unique token that represents the Redis key under which the user's session data is stored in the cluster. Every visit to the web page increments a counter in Redis that tracks the number of visits. Since the web application sets a Time to Live (TTL) of 10 seconds on the user data stored in Redis, the session and visit counter are automatically reset once that time elapses.
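In redis-py terms, the pattern looks roughly like the sketch below. The helper functions are hypothetical and simplified rather than the sample application's exact code, but the 10-second TTL and counter behavior mirror the description above:

import uuid
import redis

r = redis.Redis(host="<your_redis_endpoint>", port=6379, decode_responses=True)

def create_session():
    # Generate a unique token used as the Redis key for this user's session.
    token = uuid.uuid4().hex
    r.setex(token, 10, 1)  # store the first visit with a 10-second TTL
    return token

def record_visit(token):
    # Increment the visit counter and renew the 10-second TTL on each page view.
    visits = r.incr(token)
    r.expire(token, 10)
    return visits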

Conclusion

Implementing a caching solution can be extremely beneficial for your cloud-native applications, significantly improving overall performance while reducing costs related to data access latency and overall inefficiencies.

With AWS ElastiCache for Redis, you will benefit from the speed, simplicity, and versatility of a managed Redis service. In the end, you should choose the combination of components that delivers the best balance of cost and performance for your business goals.

A solution such as AWS ElastiCache and Redis can be that key component to lowering your operational overhead while providing better performance and user experience.

DevOps users can get even more flexibility in their deployments by using NetApp Cloud Volumes ONTAP. Cloud Volumes ONTAP delivers enterprise-grade storage management services on AWS, Azure and Google Cloud for use cases such as file services, databases, DevOps or any other enterprise workload, with a strong set of features including high availability, data protection, storage efficiencies, Kubernetes integration, and more.

In particular, Cloud Volumes ONTAP supports advanced features for managing SAN storage in the cloud, catering to NoSQL database systems, as well as NFS shares that can be accessed directly from cloud big data analytics clusters, making it easier to manage disk-based data persistence in the most complex Redis cluster deployments.

Yifat Perry, Technical Content Manager