
Are You Getting Everything You Can from Your AWS EBS Volumes?: Optimizing Amazon EBS Usage

Amazon EBS (Elastic Block Store) plays a crucial role in supporting applications running in the cloud. But not every company is getting the most out of the volumes it uses.

What if your company is running Amazon EBS with Provisioned IOPS (io1) volumes, but winds up with poor performance because of under-provisioned IOPS or an incorrect design? What if your company is paying a premium for Amazon EBS volumes without efficiently utilizing the IOPS and storage it has allocated? Some companies pay too much because they use the wrong disk type for their requirements; others pay too much for unattached volumes and old snapshots that are still being charged for even though they are not being used at all.

In this article, we’ll go over the major ways you can optimize your Amazon EBS usage, including a look at how NetApp’s Cloud Volumes ONTAP for AWS storage can help.

Enhancing Performance


Underperforming volumes can lead to real disruptions throughout your business operations. The performance of an Amazon EBS volume can be improved in a few different ways.

Use Amazon EBS-optimized Instances:


Amazon EBS-optimized instances are designed to provide dedicated capacity for the volume’s I/O. Why is that necessary? On a regular instance, network traffic and Amazon EBS traffic share the same bandwidth: the higher the network traffic, the less room is available for the Amazon EBS volume’s I/O traffic, which increases latency. Amazon EBS-optimized instances deliver dedicated bandwidth between Amazon EC2 and Amazon EBS, ranging from 425 Mbps to 14,000 Mbps depending on the Amazon EC2 instance type. It is recommended to use Amazon EBS-optimized instances with Provisioned IOPS volumes.

It should be noted that each Amazon EBS-optimized instance has a maximum number of IOPS it can deliver. That means if a specific instance supports a maximum of 12,000 IOPS and you provision io1 volumes with 20,000 IOPS, you will still get at most 12,000 IOPS.
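The interplay between volume provisioning and the instance cap can be sketched in a few lines. The figures used below are illustrative, not a lookup of real instance limits:

```python
def effective_iops(provisioned_iops: int, instance_max_iops: int) -> int:
    # The instance's EBS-optimized limit caps whatever the volume itself
    # is provisioned to deliver: you get the smaller of the two.
    return min(provisioned_iops, instance_max_iops)

# An io1 volume provisioned at 20,000 IOPS on an instance whose
# EBS-optimized limit is 12,000 IOPS still tops out at 12,000.
print(effective_iops(20_000, 12_000))  # 12000
```

Checking your instance type's limit before provisioning high-IOPS volumes avoids paying for IOPS the instance can never consume.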

Understanding Performance Metrics:


In order to optimize, it is necessary to understand the units in which Amazon EBS performance is monitored. Here are a few of the key metrics:

IOPS:


This is the unit of measurement for the number of read and write operations to non-contiguous storage locations. In AWS, I/O size is measured in KiB: a single operation is defined as up to 256 KiB for SSD volumes and up to 1,024 KiB for HDD volumes. Amazon offers SSD volumes as General Purpose and Provisioned IOPS volumes. General Purpose volumes offer 3 IOPS per GB. For Provisioned IOPS volumes, you can define the IOPS you need and procure that at extra cost (see the pricing schema here for details). For HDD, there are three offerings: Magnetic, Throughput Optimized, and Cold HDD. Throughput Optimized and Cold HDD offer a maximum of 500 and 250 IOPS, respectively. The important thing when tuning performance is to select the volume type that best satisfies your IOPS needs.
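The 3-IOPS-per-GB rule for General Purpose volumes can be sketched as a small calculation. The floor of 100 IOPS and the per-volume cap are assumptions based on AWS's published gp2 behavior (the cap has changed over the years, so verify the current figure for your region):

```python
def gp2_baseline_iops(size_gib: int, floor: int = 100, cap: int = 16_000) -> int:
    # General Purpose (gp2) volumes earn 3 IOPS per GiB of size,
    # subject to a minimum floor and a per-volume cap.
    return min(max(3 * size_gib, floor), cap)

print(gp2_baseline_iops(20))      # 100  -> small volumes get the floor
print(gp2_baseline_iops(500))     # 1500
print(gp2_baseline_iops(10_000))  # 16000 -> capped
```

This is why sizing a volume purely for capacity can silently under- or over-provision its IOPS.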

Volume Queue Length:


The volume queue length is the number of pending I/O operations on the volume. Latency is the time between sending an I/O request to the disk and receiving the acknowledgement for the completed operation. The more I/O requests sitting in the queue, the higher the latency. The volume queue length must be set properly, taking into consideration the available IOPS and the latency you can tolerate. Performance can be improved by provisioning higher IOPS and keeping the queue length low. For SSD volumes, AWS recommends a queue length of 1 for every 500 available IOPS.
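The 1-per-500 guideline above translates directly into a target queue depth for a given IOPS budget. A minimal sketch:

```python
import math

def recommended_queue_length(available_iops: int,
                             iops_per_queue_slot: int = 500) -> int:
    # AWS guidance for SSD volumes cited above: roughly one outstanding
    # I/O per 500 available IOPS, with a minimum depth of 1.
    return max(1, math.ceil(available_iops / iops_per_queue_slot))

print(recommended_queue_length(3_000))  # 6
print(recommended_queue_length(100))    # 1
```

Tuning the application or OS to keep its queue depth near this value helps avoid latency from queued I/O the volume cannot absorb.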

Prewarming Volumes Created from Snapshots:


New volumes created from snapshots load their data lazily from Amazon S3, which means the first read of each block carries a significant latency penalty. To improve the performance of these volumes, it is important to prewarm (initialize) them by reading every block once before putting them into production. If you do not prewarm an Amazon EBS volume created from a snapshot, the first access to each block costs extra I/O and latency at exactly the moment your application needs the data, which is especially painful under heavy traffic.
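Prewarming amounts to touching every block once. In practice this is usually done with tools like dd or fio against the block device; the sketch below shows the same idea in Python. The device path in the comment is an assumption and varies by instance:

```python
def prewarm(device_path: str, block_size: int = 1024 * 1024) -> int:
    """Sequentially read every block of the device once, so data lazily
    loaded from the snapshot is pulled down before production traffic
    arrives. Returns the number of bytes read."""
    bytes_read = 0
    with open(device_path, "rb") as dev:
        while True:
            chunk = dev.read(block_size)
            if not chunk:
                break
            bytes_read += len(chunk)
    return bytes_read

# In production you would point this at the raw block device, e.g.
# prewarm("/dev/xvdf")  -- device name is an assumption; dd or fio
# accomplish the same initialization.
```

Running this once after attaching the volume shifts the first-read cost to a controlled window instead of live traffic.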

Cost Optimization


It is important to avoid underutilizing your Amazon EBS volumes because that means you’re paying for resources you don’t use. Optimizing your Amazon EBS volume also means optimizing for costs.

One of the major cost issues is the size of the Amazon EBS disks required. Many times, an organization will procure large Amazon EBS volumes, planning for a future need to scale. However, that scaling may never materialize. The result is more volume space than needed, and higher associated costs. It’s a best practice to start with a smaller Amazon EBS volume and increase its size only as required. AWS now also provides dynamic scaling of Amazon EBS volumes.
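The cost of overprovisioning is easy to estimate. The per-GB price below is passed in as a parameter because pricing varies by region and volume type; the $0.10/GB-month figure in the example is an illustrative assumption, not a quoted rate:

```python
def monthly_overprovision_cost(provisioned_gb: float,
                               used_gb: float,
                               price_per_gb_month: float) -> float:
    # You pay for provisioned capacity, not used capacity, so every
    # unused GB is a recurring monthly cost.
    unused_gb = max(0.0, provisioned_gb - used_gb)
    return unused_gb * price_per_gb_month

# A 1 TB volume holding 250 GB of data, at an assumed $0.10/GB-month:
print(monthly_overprovision_cost(1000, 250, 0.10))  # 75.0
```

Starting small and growing the volume as needed turns that recurring waste into a one-time resize operation.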

Another useful tip is to always check for volumes left in the available state. There are many cases where an instance is terminated but its attached Amazon EBS volume is not deleted, leaving the volume available and still billed. It also happens that custom snapshot scripts lack proper logic to delete older snapshots, leaving a lot of unneeded snapshots behind. Both unattached volumes and undeleted snapshots can rack up storage costs significantly, and tracking down those unused resources may not be easy. To help, NetApp provides a Cloud Assessment tool for AWS storage. This tool analyzes your AWS storage, discovers unattached or unused Amazon EBS volumes and unused snapshots, and pinpoints exactly where you can save.
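The core of such an audit is a simple filter over volume and snapshot metadata. The dict shapes below mirror the EC2 DescribeVolumes/DescribeSnapshots responses; that structure, and the 90-day staleness threshold, are assumptions for illustration (in practice you would fetch the data with boto3 or the AWS CLI):

```python
from datetime import datetime, timedelta, timezone

def find_waste(volumes, snapshots, max_snapshot_age_days=90):
    # Unattached volumes report State == "available"; snapshots older
    # than the retention window are candidates for deletion.
    unattached = [v["VolumeId"] for v in volumes if v["State"] == "available"]
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_snapshot_age_days)
    stale = [s["SnapshotId"] for s in snapshots if s["StartTime"] < cutoff]
    return unattached, stale

vols = [{"VolumeId": "vol-1", "State": "in-use"},
        {"VolumeId": "vol-2", "State": "available"}]
snaps = [{"SnapshotId": "snap-1",
          "StartTime": datetime(2020, 1, 1, tzinfo=timezone.utc)}]
print(find_waste(vols, snaps))  # (['vol-2'], ['snap-1'])
```

Running a report like this on a schedule catches orphaned resources before they accumulate into a significant bill.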

Another design question arises when multiple volumes are mounted at different mount points on the same Amazon EC2 instance: which performs better, mounting one large volume or mounting several small ones? The basic advantage of a larger volume is the higher number of IOPS available. A larger volume also has a higher probability of containing contiguous storage blocks, which decreases latency and increases performance.

For both large and small volumes, Cloud Volumes ONTAP can further improve I/O performance by transparently striping data across multiple Amazon EBS disks. Note that for volumes smaller than 1 TB, AWS uses a credit bucket to determine burst performance, while volumes larger than 1 TB always deliver consistent performance.
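The credit-bucket behavior can be made concrete with the published gp2 model: a full bucket of 5.4 million I/O credits, a burst ceiling of 3,000 IOPS, and a baseline of 3 IOPS per GiB. A sketch of how long a freshly-filled bucket sustains a full burst, under those assumptions:

```python
def gp2_burst_duration_seconds(size_gib: int,
                               bucket_credits: int = 5_400_000,
                               burst_iops: int = 3_000) -> float:
    # While bursting, the volume spends (burst_iops - baseline) credits
    # per second; the baseline refills credits at 3 IOPS per GiB.
    baseline = max(100, 3 * size_gib)
    if baseline >= burst_iops:
        # At ~1 TiB and above, baseline meets the burst ceiling,
        # so performance is consistent and no credits are needed.
        return float("inf")
    return bucket_credits / (burst_iops - baseline)

# A 100 GiB gp2 volume (baseline 300 IOPS) bursting at 3,000 IOPS:
print(gp2_burst_duration_seconds(100))  # 2000.0 seconds
```

This is why a small volume can look fast in testing and then degrade under sustained load once its credits run out.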

For more storage optimization, you can make use of NetApp’s powerful storage efficiency features, which reduce the volume of data being stored. These include data compression, data deduplication, thin provisioning, and data compaction, all of which ensure the minimum amount of space is being used. Data tiering from Amazon EBS to less-expensive Amazon S3 saves costs by automatically and seamlessly shifting infrequently used data, such as DR environments, snapshots, or cold data from active workloads, to object storage until that data is needed, at which point it is automatically brought back to Amazon EBS.

Getting Higher Availability


To optimize storage availability, you can make use of Cloud Volumes ONTAP’s high availability configuration, which uses a dual-node, shared-nothing architecture that replicates the environment to another Availability Zone. Cloud Volumes ONTAP HA also keeps data synchronously mirrored with write consistency between the two nodes, ensuring an RPO of zero, RTOs of under 60 seconds, and automatic, seamless failover and failback between the nodes. These are features that aren’t available natively on AWS.

Optimizing Snapshot Policies


Amazon EBS offers point-in-time snapshots, which play a crucial role in minimizing downtime and increasing data availability. Snapshots also make it possible to create a number of volumes from a single Amazon EBS volume and to replicate them across different regions or zones. Although point-in-time snapshots shorten recovery time, taking them does add latency on the volume, so snapshotting too frequently can have a noticeable effect on Amazon EBS performance. Amazon EBS snapshots are stored in Amazon S3, with the initial snapshot constituting a complete copy of the source data, which can noticeably increase cloud storage costs. Because snapshots also consume IOPS, it is advisable to set an optimum interval between snapshots, or to schedule them for periods of low disk traffic.
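Because snapshots after the first are incremental, their storage footprint grows with the rate of change on the volume, not the volume size. The sketch below is a rough capacity model under that assumption, not AWS's actual billing logic:

```python
def snapshot_storage_gb(initial_gb: float,
                        daily_change_gb: float,
                        retention_days: int) -> float:
    # First snapshot is a full copy; subsequent snapshots store only
    # the blocks that changed since the previous one, so retained
    # incremental data is roughly (daily change x retention window).
    return initial_gb + daily_change_gb * retention_days

# A 500 GB volume with ~10 GB of changed blocks per day, 30-day retention:
print(snapshot_storage_gb(500, 10, 30))  # 800.0
```

Estimates like this help pick a snapshot interval and retention window that balance recovery goals against S3 storage costs.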

Cloud Volumes ONTAP’s answer to the performance degradation caused by taking regular native snapshots is NetApp Snapshots™ technology. NetApp Snapshots are created instantly, with no performance penalty and in a space-efficient manner, because they do not rely on a full copy of the source data. They are an integral part of Cloud Volumes ONTAP.

NetApp Snapshots can also be shared and synced between on-prem and cloud locations with SnapMirror® data replication, used to create application-consistent backups with SnapCenter®, kept for long-term retention with SnapVault® (at huge cost savings when combined with data tiering), and instantly restored using SnapRestore®.

Conclusion


As we’ve seen above, if you want your operations on AWS to run efficiently without going over budget, it’s necessary to optimize your Amazon EBS deployments. This can be achieved with various native AWS techniques discussed above, but is further enhanced with the help of Cloud Volumes ONTAP.

To try Cloud Volumes ONTAP for free, sign up for a 30-day trial on AWS today.