And save a bunch of money!
Disclaimer: if you are considering using this script in production, I really advise you to test it in real life first, and make sure you know how your application consumes disk space. Even though saving some money sounds like a great idea, it is better to have your business running than a service impaired because of disk issues.
On my side, this script only runs on non-prod environments.
Please also carefully read the AWS warning before considering using this script.

Running multiple environments can cost a lot of money if you don't optimize the cost at every single level. A typical example: you don't need to leave an environment running 24/7 when, most of the time, your users aren't at work.
But there are also many other optimization tips you can use to save a little bit more. We will go through one of them today: the unused disk space on EBS volumes.
Before going further, we have to assume that you meet a few prerequisites to be eligible for the logic we will discuss:
- You are using GP3 volumes, which means you get 3,000 IOPS by default. That was not possible with GP2 volumes, for which the total IOPS was calculated by AWS (total disk space * 3).
- You are running on Linux (I believe the same logic can be applied to Windows, but I'm not really familiar with Windows Server). The official resize documentation can be found here.
- You only have one EBS volume attached per EC2 instance. The script currently works for a single partition (which you can configure), but it could be improved to handle multiple drives (I don't have that need on my side).
- Side note: you can only request an EBS size increase for the same volume once every six hours. So, choose your threshold carefully!

The schema above describes how the script works:
- A cron job triggers our script every X minutes (let's say 10 minutes); see the crontab example right after this list.
- When triggered, the script evaluates whether our partition needs to be extended. We have two options here:
  1. No extension is needed, so the script simply waits for the next execution.
  2. Need a little stretch? Then we ask AWS to extend our EBS volume, then grow our local disk/partition, and redo the same logic on the next run.
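For instance, a crontab entry along these lines would run the script every 10 minutes (the interpreter path, script path and log file are placeholders, not values from the article):

```
# Run the EBS auto-extend script every 10 minutes and keep its output
*/10 * * * * /usr/bin/python3 /opt/ebs-autoextend/extend.py >> /var/log/ebs-autoextend.log 2>&1
```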
The code we will discuss is all Python. Note that I didn't find any good way to play with filesystems natively, so I'm mostly using the os module and a few awk commands. Any alternative would be appreciated!
NB: if you're only interested in the script itself, please jump to the end of the article.
I will split the script into three parts to give you a better understanding of the engine: one related to FS disk usage, the second one to AWS, and the last one about extending our local drive.
We have three functions for that purpose. Here they are:
- check_usage => this function takes a filesystem as a parameter (here /) and runs a shell command (df -h / | tail -1 | awk '{sub("%","");print $5}') to grab the current usage. Note that I'm only dealing with Go here…
- get_current_partition_size => similar to the first one, takes an FS as a parameter, but this time returns the partition size. It will be used in the third method.
- calculate_value_to_provision => applies a simple mathematical formula, (current_fs_usage / minimum_percentage * 100) - current_fs_size, to extend your partition back to the predefined threshold. Note that if the space needed is less than 5 Go, we bump it to 5 Go anyway, to avoid increasing by a very small amount.
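Here is a minimal sketch of what these three functions could look like, assuming os.popen for the shell calls. The article only gives the exact command for check_usage; the df -BG variant and the FS_MOUNT / MIN_PERCENTAGE names are placeholders of mine, not the author's code.

```python
import os

FS_MOUNT = "/"        # hypothetical config: the partition we watch
MIN_PERCENTAGE = 80   # hypothetical threshold: extend once usage goes above 80%


def check_usage(fs=FS_MOUNT):
    """Return the current filesystem usage as an integer percentage."""
    cmd = f"df -h {fs} | tail -1 | awk '{{sub(\"%\",\"\");print $5}}'"
    return int(os.popen(cmd).read().strip())


def get_current_partition_size(fs=FS_MOUNT):
    """Return the current partition size in Go (df -BG prints sizes like '40G')."""
    cmd = f"df -BG {fs} | tail -1 | awk '{{sub(\"G\",\"\");print $2}}'"
    return int(os.popen(cmd).read().strip())


def calculate_value_to_provision(usage_percent, current_fs_size, minimum_percentage=MIN_PERCENTAGE):
    """How many Go to add so that usage falls back under the threshold.

    Implements (current_fs_usage / minimum_percentage * 100) - current_fs_size,
    with current_fs_usage converted to Go first. Never returns less than 5 Go.
    """
    current_fs_usage = usage_percent * current_fs_size / 100  # used space in Go
    to_provision = (current_fs_usage / minimum_percentage * 100) - current_fs_size
    return max(5, int(round(to_provision)))
```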
More methods are defined here, still following the same AWS logic that we are starting to grasp. Here they are:
- get_instance_id => uses the EC2 metadata to retrieve the instance ID
- get_region => uses the EC2 metadata to return the region where the instance is running
- init_ec2_client => returns an EC2 client, with the instance region as parameter
- identify_ebs_volume => queries the AWS EC2 API with our instance ID as a parameter and returns the EBS volume ID. Note that this only works if we have one single drive attached
- get_volume_size => returns our EBS volume size. It will be used to check whether an extension has already been made from an AWS point of view but wasn't successful on the OS side
- extend_volume => requests more Go for our EBS volume from AWS
- wait_volume_modified => waits until our new storage is available locally. Note that there is no native waiter in boto3 after an EBS modification (poke AWS if you ever want to implement that!), so I had to tweak things a little, based on the documentation, which says that we can use an EBS volume as soon as its modification state reaches optimizing.
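Here is a sketch of that AWS part, assuming boto3 and the plain instance metadata endpoint (IMDSv1). The function names match the list above, but the exact signatures and the polling delay are my assumptions, not the author's code.

```python
import time
import urllib.request

import boto3

IMDS = "http://169.254.169.254/latest/meta-data"


def _metadata(path):
    """Tiny helper around the EC2 instance metadata service (IMDSv1 style;
    an IMDSv2-only instance would need a session token first)."""
    with urllib.request.urlopen(f"{IMDS}/{path}", timeout=2) as resp:
        return resp.read().decode()


def get_instance_id():
    return _metadata("instance-id")


def get_region():
    return _metadata("placement/region")


def init_ec2_client(region):
    return boto3.client("ec2", region_name=region)


def identify_ebs_volume(ec2, instance_id):
    """Return the ID of the volume attached to this instance (assumes a single volume)."""
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
    )["Volumes"]
    return volumes[0]["VolumeId"]


def get_volume_size(ec2, volume_id):
    """Current EBS volume size, as seen by AWS (in GiB)."""
    return ec2.describe_volumes(VolumeIds=[volume_id])["Volumes"][0]["Size"]


def extend_volume(ec2, volume_id, new_size):
    """Ask AWS for a bigger volume (remember: once every six hours per volume)."""
    ec2.modify_volume(VolumeId=volume_id, Size=new_size)


def wait_volume_modified(ec2, volume_id, delay=15):
    """Poll the modification until it reaches 'optimizing' or 'completed',
    since boto3 has no native waiter for volume modifications."""
    while True:
        state = ec2.describe_volumes_modifications(VolumeIds=[volume_id])[
            "VolumesModifications"
        ][0]["ModificationState"]
        if state in ("optimizing", "completed"):
            return
        time.sleep(delay)
```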
Again, three functions here, let's keep it simple!
- get_main_disk => we need to determine the disk name based on the partition. A little bit of lsblk and we're good! Returns the disk name as a string (like nvme0n1)
- extend_disk => extends the newly found disk to the EBS size. If your EBS is 50 Go and your disk is 40, your new disk size becomes 50.
- extend_partition => the last step is extending the partition. As a disk can have multiple partitions, this step is also required even though we're only running one partition in our example. In any case, the disk and the partition have to be managed separately.
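A minimal sketch of that local part, assuming growpart for the disk/partition step and resize2fs for the filesystem step, as in the official AWS resize documentation. The exact mapping of those commands onto the author's three functions is my guess (an XFS filesystem would need xfs_growfs instead of resize2fs).

```python
import os


def get_main_disk(partition="/"):
    """Resolve the disk backing the mount point with lsblk, e.g. 'nvme0n1'."""
    cmd = f"lsblk -no pkname $(df {partition} | tail -1 | awk '{{print $1}}')"
    return os.popen(cmd).read().strip()


def extend_disk(disk, partition_number=1):
    """Grow the partition so it fills the (now larger) disk, e.g. growpart /dev/nvme0n1 1."""
    return os.system(f"growpart /dev/{disk} {partition_number}")


def extend_partition(device):
    """Grow the ext4 filesystem itself to the new partition size."""
    return os.system(f"resize2fs {device}")
```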
And we're done! The last step is to write your algorithm logic under the __main__ entry point, for instance, and you're good to go!
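Putting it all together, the __main__ block could look roughly like this; again, a sketch reusing the hypothetical helpers from the previous snippets, not the author's exact script.

```python
if __name__ == "__main__":
    usage = check_usage(FS_MOUNT)
    if usage < MIN_PERCENTAGE:
        # Option 1: no extension needed, wait for the next cron run
        raise SystemExit(0)

    # Option 2: ask AWS for more Go, then propagate the change locally
    ec2 = init_ec2_client(get_region())
    volume_id = identify_ebs_volume(ec2, get_instance_id())

    fs_size = get_current_partition_size(FS_MOUNT)
    ebs_size = get_volume_size(ec2, volume_id)

    if ebs_size <= fs_size:
        # The volume has not been extended yet from the AWS side
        extra = calculate_value_to_provision(usage, fs_size)
        extend_volume(ec2, volume_id, ebs_size + extra)
        wait_volume_modified(ec2, volume_id)

    # The volume is bigger on the AWS side: grow the partition, then the filesystem
    disk = get_main_disk(FS_MOUNT)
    extend_disk(disk)
    extend_partition(f"/dev/{disk}p1")  # assumes the first NVMe partition
```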

The full script is available here on git if you need it. Feel free to comment with any advice or question, I will be very happy to help!
If you liked (or disliked) this article, let me know!
See you in the next AWS article.