It's true that organizations migrate to the cloud for its scalability, flexibility, performance, and security. But ultimately, the deciding factor is often the cost savings the move offers.
However, if an organization moves to the cloud without understanding the pricing mechanism, or without adjusting it for their use case, they may end up paying the same as, or even more than, they did with a traditional infrastructure.
One common source of extra cost is the underutilization and over-provisioning of Elastic Block Store (EBS) volumes.
A good solution for this problem is to find and delete the unused EBS volumes.
An Over-Provisioning Use Case
One of the key cloud storage offerings on Amazon Web Services is the EBS volume.
EBS offers persistent storage, and each EBS volume carries a “DeleteOnTermination” flag that, when set to false, prevents the volume from being deleted when its instance is terminated.
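As a sketch of where this flag lives, here is a hypothetical block-device mapping of the kind passed to an EC2 launch request (the device name, size, and volume type below are illustrative, not from the use case):

```python
# Hypothetical block-device mapping for an EC2 launch request: with
# DeleteOnTermination set to False, the 200 GB volume outlives the instance.
block_device_mappings = [
    {
        'DeviceName': '/dev/xvda',
        'Ebs': {
            'VolumeSize': 200,             # size in GiB
            'VolumeType': 'gp2',
            'DeleteOnTermination': False,  # keep the volume after termination
        },
    }
]

# This structure would be passed to boto3's run_instances call,
# e.g. (not executed here; AMI ID is a placeholder):
# import boto3
# ec2 = boto3.client('ec2', region_name='us-east-1')
# ec2.run_instances(ImageId='ami-...', InstanceType='t2.micro',
#                   MinCount=1, MaxCount=1,
#                   BlockDeviceMappings=block_device_mappings)
```

When the flag is left false, every terminated instance leaves its volume behind, which is exactly what happened in the outage described next.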
Take, for example, a company that had set up Auto Scaling and was hit by a major database outage. The outage stopped their app server from responding, so Auto Scaling, which had been health-checking the instances, began terminating them and launching replacements.
This process continued for a few hours, resulting in the launch of more than 50 new instances. Each instance had an EBS volume of 200 GB and none of them were deleted on instance termination.
As a result, at the end of the outage, the organization had more than 10,000 GB of unutilized storage. These unattached volumes were costing an additional $1,000 per month without contributing anything to the organization’s operations.
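The arithmetic behind those figures (assuming the standard gp2 rate of roughly $0.10 per GB-month in us-east-1; exact pricing varies by region and volume type):

```
50 instances × 200 GB        = 10,000 GB of orphaned storage
10,000 GB × ~$0.10/GB-month  ≈ $1,000 per month
```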
The Unutilized Volumes Solution
To solve a non-utilization problem such as the one described above, the extra volumes have to be deleted. Below, you can read how to write a script that automates the task of finding unutilized volumes (any volume in an “available” state) and deleting them.
Before you begin to delete any volumes though, it is necessary to verify which volumes are important and which ones are not.
If there is an important volume, you should create a backup with a snapshot. You can also assign a special tag for the volume, such as “DND” (Do Not Delete).
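The tag check that the cleanup script relies on can be isolated as a small helper; the snapshot and tagging calls are sketched as comments because they require boto3 and live AWS credentials (the volume ID below is a placeholder):

```python
def is_protected(tags):
    """Return True when a volume carries the 'DND' (Do Not Delete) marker.

    `tags` is the list of {'Key': ..., 'Value': ...} dicts that boto3
    returns for a volume (None when the volume is untagged).
    """
    for tag in (tags or []):
        if tag['Key'] == 'Name' and tag['Value'] == 'DND':
            return True
    return False

# Backing up an important volume before any cleanup would look like this
# (illustrative only; requires boto3 and credentials):
# import boto3
# ec2 = boto3.resource('ec2', region_name='us-east-1')
# vol = ec2.Volume('vol-0123456789abcdef0')
# vol.create_snapshot(Description='Backup before cleanup')
# vol.create_tags(Tags=[{'Key': 'Name', 'Value': 'DND'}])
```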
In the script demonstrated below, we are going to run a program that finds all volumes in an “available” (unattached) state, filters out the volumes with the “DND” tag, and deletes the rest.
One way to perform this task is to write a script using the AWS Command Line Interface (CLI), packaged as a shell script (.sh on Linux) that can be scheduled with a cron job.
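A minimal sketch of that CLI approach, with the filtering logic kept testable and the AWS calls shown as comments (the JMESPath query and volume ID are illustrative; a configured AWS CLI is assumed):

```shell
#!/bin/sh
# Reads "volume-id  name-tag" pairs on stdin and prints the IDs that are
# safe to delete (everything not tagged DND).
filter_deletable() {
    while read -r vol_id name_tag; do
        if [ "$name_tag" != "DND" ]; then
            echo "$vol_id"
        fi
    done
}

# In the real script, the pairs would come from the AWS CLI, e.g.:
# aws ec2 describe-volumes --filters Name=status,Values=available \
#     --query 'Volumes[].[VolumeId, Tags[?Key==`Name`].Value | [0]]' \
#     --output text | filter_deletable |
# while read -r id; do aws ec2 delete-volume --volume-id "$id"; done
```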
Another solution is to use an AWS SDK (such as boto3 for Python), with a CloudWatch scheduled trigger and execution by Lambda.
How to Automatically Filter and Delete EBS Volumes with Lambda Functions and CloudWatch
Step 1: Get Started by Opening AWS Lambda
Step 2: Create a Lambda Function
Step 3: Click on the Empty Box and Select CloudWatch Schedule
Step 4: Schedule the Function by Specifying Cron Expression
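For reference, CloudWatch schedule expressions take either a rate form or a six-field cron form; either of the following (illustrative) would run the cleanup once a day, the cron variant at 02:00 UTC:

```
rate(1 day)
cron(0 2 * * ? *)
```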
Step 5: Assign a Role with Necessary Permissions
Step 6: Paste the Following Code Snippet After the Trigger Is Created
import boto3

# Connect to EC2 in the target region
ec2 = boto3.resource('ec2', region_name='us-east-1')

def lambda_handler(event, context):
    for vol in ec2.volumes.all():
        # Only unattached volumes are candidates for deletion
        if vol.state != 'available':
            continue
        # vol.tags is None for untagged volumes
        tags = {t['Key']: t['Value'] for t in (vol.tags or [])}
        if tags.get('Name') != 'DND':
            vid = vol.id
            vol.delete()
            print("Deleted " + vid)
Important Note: This code deletes every volume in an available state unless its “Name” tag is “DND”. The code is written in Python, so be sure to preserve the indentation.
Saving Costs by Using Automation
The script shown above can be scheduled to run regularly, removing unutilized resources and saving money over time.
In the use case discussed above, the organization incurred a huge cost; after automating this cleanup, they were able to stop wasting resources and funds.
It is important to remember that, while automation helps reduce costs, data management will remain a challenge. Automation is the best way to eliminate manual errors and dependencies when deleting unutilized volumes.
Knowing the data life cycle and performing necessary checks on the process integration will let you get a good night’s sleep knowing that you aren’t paying for services you don’t use.