Category: Amazon Web Services
-
AWS Tag Checker
I wrote this script this morning as I was creating a new web server. I realized that I had been forgetting to add my “Backup” tag to my instances so that they would automatically be backed up via AWS Backup.
This one is pretty straightforward. Using Boto3, the script iterates over all of your instances and checks each one for the tag defined in the tag_to_check variable. If the tag is not present, it adds the tag defined by the JSON passed to create_tags (the response variable).
After that is all done, it iterates over the instances again to verify that the tag has been added. If a new instance has appeared or the tag failed to apply, it prints out the instance IDs that still do not have the tag.
Here is the script:
import boto3

ec2 = boto3.resource('ec2')
inst_describe = ec2.instances.all()
tag_to_check = 'Backup'

for instance in inst_describe:
    # instance.tags is None when an instance has no tags at all
    tags = instance.tags or []
    if tag_to_check not in [t['Key'] for t in tags]:
        print("This instance is not tagged: ", instance.instance_id)
        response = ec2.create_tags(
            Resources=[instance.instance_id],
            Tags=[
                {
                    'Key': 'Backup',
                    'Value': 'Yes'
                }
            ]
        )

# Double check that there are no other instances without the tag
for instance in inst_describe:
    if tag_to_check not in [t['Key'] for t in (instance.tags or [])]:
        print("Failed to assign tag, or new instance: ", instance.instance_id)

The script is also available on GitHub here:
https://github.com/avansledright/awsTagCheck
If you find this script helpful feel free to share it with your friends and let me know in the comments!
-
Lambda Function Post to Slack
I wrote this script out of a need to practice my Python skills. The idea is that if a file gets uploaded to an S3 bucket then the function will trigger and a message with that file name will be posted to a Slack channel of your choosing.
To use this you will need to include the slack pip package as well as the slackclient pip package in the deployment package when you upload the function to the AWS console.
You will also need to create an OAuth token for a Slack application. If you are unfamiliar with this process, feel free to drop a comment below or shoot me a message and I can walk you through it or write a second part of this guide.
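To give a sense of the overall shape, here is a minimal sketch of such a handler. It is not the exact code from the repository; the environment variable, channel name, and slackclient 1.x API usage are my assumptions.

import os
import urllib.parse

from slackclient import SlackClient  # slackclient 1.x

# Assumes the Slack OAuth token lives in a SLACK_TOKEN environment variable
slack_client = SlackClient(os.environ['SLACK_TOKEN'])
SLACK_CHANNEL = '#file-uploads'  # hypothetical channel name


def lambda_handler(event, context):
    # S3 put events carry the bucket name and object key in the Records list
    record = event['Records'][0]['s3']
    bucket = record['bucket']['name']
    key = urllib.parse.unquote_plus(record['object']['key'])

    # Post a simple message naming the uploaded file
    slack_client.api_call(
        "chat.postMessage",
        channel=SLACK_CHANNEL,
        text=f"New file uploaded to {bucket}: {key}"
    )
    return {'statusCode': 200}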
Here is a link to the project:
https://github.com/avansledright/posttoSlackLambda
If this helps you please share this post on your favorite social media platform!
-

Automatically Transcribing Audio Files with Amazon Web Services
I wrote this Lambda function to automatically transcribe audio files that are uploaded to an S3 bucket. This is written in Python3 and utilizes the Boto3 library.
You will need to give your Lambda function permissions to access S3, Transcribe and CloudWatch.
The script will create an AWS Transcribe job with a name in the format:
'filetranscription' + YYYYMMDD-HHMMSS
I will be iterating on the script to hopefully add a web front end, as well as potentially branching out to do voice call transcriptions for phone calls and Amazon Connect.
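As an illustration of the approach, here is a stripped-down sketch of what such a handler could look like. The media format and variable names are assumptions on my part rather than a copy of the linked code.

import urllib.parse
from datetime import datetime

import boto3

transcribe = boto3.client('transcribe')


def lambda_handler(event, context):
    # Pull the bucket name and object key out of the S3 trigger event
    record = event['Records'][0]['s3']
    bucket = record['bucket']['name']
    key = urllib.parse.unquote_plus(record['object']['key'])

    # Job name in the 'filetranscription' + YYYYMMDD-HHMMSS format
    job_name = 'filetranscription' + datetime.utcnow().strftime('%Y%m%d-%H%M%S')

    transcribe.start_transcription_job(
        TranscriptionJobName=job_name,
        Media={'MediaFileUri': f's3://{bucket}/{key}'},
        MediaFormat='mp3',  # assumption: change to match your audio files
        LanguageCode='en-US'
    )
    return {'statusCode': 200}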
You can view the code here
If you have questions or comments feel free to reach out to me here or on any Social Media.
-
Slack’s New Nebula Network Overlay
I was turned on to this new tool that the Slack team had built. As an avid Slack user, I was immediately intrigued to test this out.
My use case is going to be relatively simple for the sake of this post. I am going to create a Lighthouse, or parent node, on an EC2 instance in my Amazon Web Services account. It will have an Elastic IP so we can route traffic to it publicly. I will also need to create a security group that allows inbound UDP traffic on port 4242, and allow the same port inbound on my local firewall.
Clone the Git repository for Nebula and also download the binaries. I put everything into /etc/nebula.
Once you have all of the files downloaded, you can generate your certificate authority by running the command:
./nebula-cert ca -name "Your Company"
You will want to make a backup of the ca.key and ca.crt files generated by this command.
Once you have your certificate of authority you can create certificates for your hosts. In my case I am only generating one for my local server. The following command will generate the certificate and keys:
./nebula-cert sign -name "Something Memorable" -ip "192.168.100.2/24"
Where it says “Something Memorable” I used the hostname of the server so that I remember which host the certificate belongs to. One thing the documentation doesn’t go over is assigning the IP for your Lighthouse. Because I think of the Lighthouse as more of a gateway, I assigned it 192.168.100.1 in the config file. This will be covered shortly.
There is a pre-generated configuration file located here. I simply copied this into a file inside of /etc/nebula/.
Edit the file as needed. Lines 7-9 will need to be modified for each host, as each host will have its own certificate.
Line 20 will need to be the IP address of your Lighthouse and this will remain the same on every host. On line 26 you will need to change this to true for your Lighthouse. On all other hosts, this will remain false.
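For reference, the relevant pieces of the config look roughly like this; the certificate paths and the Lighthouse’s public IP and port are placeholders you will need to change to match your setup:

pki:
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/host.crt
  key: /etc/nebula/host.key

static_host_map:
  "192.168.100.1": ["LIGHTHOUSE_PUBLIC_IP:4242"]

lighthouse:
  am_lighthouse: false   # set to true on the Lighthouse itself
  hosts:
    - "192.168.100.1"    # on the Lighthouse, leave this list empty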
The other major thing I changed was to allow SSH traffic. There is an entire section about SSH in the configuration that I ignored; I simply added a firewall rule to the bottom of the file as follows:
  - port: 22
    proto: tcp
    host: any
This rule is added below the 443 rule for HTTPS. Be sure to follow normal YAML notation practices.
Once this is all in place you can execute your Nebula network by using the following command:
/etc/nebula/nebula -config /etc/nebula/config.yml
Start Nebula on your Lighthouse first and ensure it is up and running. Once it is running on your Lighthouse, run it on your host and you should see a connection handshake. Test by pinging your Lighthouse from your host and your host from your Lighthouse. I also tested file transfer using SCP, which verifies SSH connectivity.
Now, the most important thing that Slack doesn’t discuss is creating a systemd unit for automatic startup. So I have included a basic one for you here:
[Unit]
Description=Nebula Service

[Service]
Restart=always
RestartSec=1
User=root
ExecStart=/etc/nebula/nebula -config /etc/nebula/config.yml

[Install]
WantedBy=multi-user.target

That’s it! I would love to hear about your implementations in the comments below!
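If you want Nebula to start on boot, save the unit above as a service file (I put it at /etc/systemd/system/nebula.service, which is my choice of path rather than anything from Slack’s documentation) and enable it:

sudo systemctl daemon-reload
sudo systemctl enable --now nebula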
-
Monitoring Disk Space with CloudWatch
I recently had a request to monitor disk space. Since I don’t use a traditional monitoring platform, but rather send all of my alerting to Slack, I wondered how this would work.
There is no built-in disk space metric in CloudWatch, so we will utilize the scripts available in this guide.
You can follow along with the Amazon guide or follow some simple steps here geared towards Ubuntu-based Linux distributions.
First, let’s install some dependencies:
sudo apt-get install unzip
sudo apt-get install libwww-perl libdatetime-perl
Next, we will download the scripts from Amazon:
curl https://aws-cloudwatch.s3.amazonaws.com/downloads/CloudWatchMonitoringScripts-1.2.2.zip -O
Once downloaded you can unpack the ZIP file:
unzip CloudWatchMonitoringScripts-1.2.2.zip && \
rm CloudWatchMonitoringScripts-1.2.2.zip && \
cd aws-scripts-mon
This will put the scripts into a directory called aws-scripts-mon inside of whatever directory you are currently in. I recommend doing this inside of /home/your-user.
There are a few ways to give the scripts permission to publish to CloudWatch. I preferred the awscreds.conf method, but you can also give your instance an IAM role or specify the credentials inline. If you are unsure of how to create IAM policies or roles, feel free to message me and we can chat more about that.
Inside the directory there is a template file that you can use to generate your awscreds.conf file:
cp awscreds.template awscreds.conf && vi awscreds.conf
Modify the file as needed, then save and close it.
Now let’s test the scripts to ensure functionality:
./mon-put-instance-data.pl --disk-space-util --disk-path=/ --verify --verbose
You should see a “Completed Successfully” message. If not, troubleshoot as needed.
The scripts have a lot of functionality but we are specifically looking at disk usage. I added the following line as a Cron Job:
0 * * * * /home/ubuntu/aws-scripts-mon/mon-put-instance-data.pl --disk-space-util --disk-path=/
This runs the script every hour on the hour and reports the data to CloudWatch.
Now that our data is being put into CloudWatch we need to alert on any issues. For the purpose of testing, I created an alarm with a threshold below my actual usage so I could verify that the alerting worked. You can adjust it as you need to.

Log in to your AWS Management Console and navigate to the CloudWatch console. Your data will be placed under the “Metrics” tab. Once the Metrics tab is open you will see a section called “Linux Systems”. Navigate to this and you should see metrics called “Filesystem, InstanceId, MountPath”. This is where your metrics live. You can navigate around here and view your metrics in the graphing utility. Once you have verified that the data is accurate you can create an alarm based on this metric.
Navigate to the Alarms section of CloudWatch. Click “Create alarm” in the top right corner. Follow the steps to create your Alarm. For Metric navigate to the metric we found in the previous step. For Conditions, I chose the following:
Threshold Type: Static
Whenever DiskSpaceUtilization is…: Greater than the threshold
Than…: 45% (Note: this value will change based on your actual usage. For testing I recommend setting it to a value lower than your actual usage percentage so that your alarm will fire.)
Click Next to continue. On the following page you can set up your notifications. I covered creating an AWS Chatbot here. I have all of my CloudWatch alarms sent to an SNS topic called aws-alerts. You can create something similar and have your AWS Chatbot monitor that topic as well. Once the alarm fires you should get an alert in your specified Slack channel.
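If you would rather script the alarm instead of clicking through the console, a CLI equivalent could look something like the following. The System/Linux namespace and the dimension names are what the monitoring scripts publish by default; the instance ID, filesystem, and SNS topic ARN are placeholders.

aws cloudwatch put-metric-alarm \
  --alarm-name disk-space-utilization-root \
  --alarm-description "Alarm when / is more than 45% full" \
  --namespace System/Linux \
  --metric-name DiskSpaceUtilization \
  --dimensions Name=InstanceId,Value=YOUR-INSTANCE-ID Name=MountPath,Value=/ Name=Filesystem,Value=/dev/xvda1 \
  --statistic Average --period 3600 \
  --comparison-operator GreaterThanThreshold --threshold 45 \
  --evaluation-periods 1 \
  --alarm-actions YOUR-SNS-TOPIC-ARN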
Once your alarm is firing you can fine-tune your thresholds to notify you as you need!
-
Setting Up AWS Chatbot
Amazon Web Services pushed their new Chatbot into beta recently. This simple bot will allow you to get alerts and notifications sent to either Slack or Amazon Chime. Because I use Slack for alerting I thought this would be a great tool. Previously I utilized Marbot to accommodate a similar function. Marbot is a great product for teams as it allows a user to acknowledge or pass an incident. I am a team of one so this feature is nice but ultimately not useful for me at this time.
Let’s get started!
Navigate to the new AWS Chatbot in the console

On the right-hand side click the drop-down menu to choose your chat client. I am going to choose Slack because that is what I use. I assume the process would be the same for Chime. You will be prompted by Slack to authorize the application. Go ahead and hit “Install”.
On the next screen, we will get to our configuration options. The first being to choose our Slack Channel:

I chose the public channel that I already have created for Marbot #aws-alerts. You can do what you want here. Maybe you want a private channel so only you can see alerts for your development environment!
The next section is IAM Permissions

I chose to create an IAM role using the predefined template and just made up a role name, “aws-chatbot-alerts”.
The last configuration option is for SNS topics.

You can have your bot subscribe to SNS topics to receive notifications to publish there as well. I don’t currently use any, so I skipped this section, but this could be super useful in the future! Look for future posts about this idea!
I will update this post soon with how to create the chatbot using the CLI and/or CloudFormation
-

Copying Files To & From an AWS S3 Bucket
Recently I needed to download an entire bucket worth of data for an offsite backup. Easy right? Go to the Amazon Web Services Console and hit download! WRONG.
You can download individual files but not an entire bucket. Seems silly. Luckily there is an easy way to do it via the Amazon Web Services CLI. Enter simple commands:
$ aws s3 cp s3://YOUR_BUCKET/ /LOCAL_DIRECTORY --recursive
Let’s dissect this just a little bit. The first couple of options in the command should be pretty self-explanatory: we are using the AWS CLI, we chose S3 as our service, and ‘cp’ means we are going to copy. There are a bunch of other options that you can use here; I suggest taking a look at the documentation here to learn more. After that, you simply add in your bucket name, note the trailing forward slash, and then where you want to put your files on your local machine. Finally, I added the --recursive flag so that it would read through all the lower directories.
Ultimately a very simple solution to transfer some data quickly! The AWS S3 CLI functions very similarly to your standard directory commands. So, feel free to poke around and see what it can do!
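For example, copying in the other direction (local machine to bucket) is just the same command with the source and destination swapped, and aws s3 sync is handy when you only want to move files that have changed; the bucket and paths below are placeholders:

$ aws s3 cp /LOCAL_DIRECTORY s3://YOUR_BUCKET/ --recursive
$ aws s3 sync /LOCAL_DIRECTORY s3://YOUR_BUCKET/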
-

AWS CLI For CPU Credit Balance
Here is how you create a CloudWatch alarm that fires when an instance’s CPU credit balance drops below a certain amount:
aws cloudwatch put-metric-alarm --alarm-name YOUR-ALARM-NAME --alarm-description "Alarm when CPU credits are below 200" --metric-name CPUCreditBalance --namespace AWS/EC2 --statistic Average --period 300 --threshold 200 --comparison-operator LessThanThreshold --dimensions Name=InstanceId,Value=YOUR-INSTANCE-ID --evaluation-periods 2 --alarm-actions YOUR-SNS-TOPIC-ARN
CloudFormation template:
https://github.com/avansledright/CloudFormation-CPU-CREDIT-BALANCE -

Encrypt an Existing EBS Volume
Say you have an existing EBS volume on Amazon Web Services that you wanted to encrypt. How would you do that? The following guide shows you how to do so via the AWS Management Console.
1. Log in to your console.
2. Navigate to the EBS volume you would like to encrypt.
3. Right-click on your volume and create a snapshot.
4. I always give my snapshots descriptions, but we are going to end up deleting this one.
5. Make a copy of the snapshot you created in step 3.
6. In the copy settings you simply need to choose to encrypt the volume. You can specify the encryption key to use; for this guide we will just use the standard EBS encryption key.

Once you have your new encrypted snapshot you can easily create a volume from that snapshot and then re-attach it to your instance!
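If you prefer the CLI, the same flow might look roughly like this; the IDs, region, Availability Zone, and device name are all placeholders:

# 1. Snapshot the unencrypted volume
aws ec2 create-snapshot --volume-id vol-UNENCRYPTED --description "temporary pre-encryption snapshot"
# 2. Copy the snapshot with encryption enabled (uses the default EBS key)
aws ec2 copy-snapshot --source-region us-east-1 --source-snapshot-id snap-ORIGINAL --encrypted
# 3. Create a volume from the encrypted copy and attach it to your instance
aws ec2 create-volume --snapshot-id snap-ENCRYPTED-COPY --availability-zone us-east-1a
aws ec2 attach-volume --volume-id vol-NEW-ENCRYPTED --instance-id i-YOUR-INSTANCE --device /dev/xvdf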
-

AWS Backup
Recently Amazon Web Services announced its new service called AWS Backup. The goal is to create a simple, automated backup solution for resources within the AWS Cloud.
There have been plenty of other solutions out there for backups but most are quite costly. Here is a look at the pricing for the AWS Backup solution:

AWS Backup Pricing Snapshot
The pricing for an EBS snapshot is the same as the pricing for manual snapshots, so it is quite a compelling argument to set this up.
Let’s look at a quick example of how to setup a simple recurring EBS Snapshot. In this example I have a Linux EC2 instance with a single EBS volume attached to it.
Log in to your AWS console and search for “Backup” in the services menu. You will see AWS Backup.

AWS Console Menu – AWS Backup
Once you are in the console for AWS Backup, choose “Manage Backup Plans”.

Manage AWS Backup Plans
To get the full experience of AWS Backup I chose to make my own plan. You could also choose to use one of their existing plans.

AWS Backup Options
Give your backup plan a name, something that will help you remember what the plan is going to be doing. For my example I named my plan “7Day-Snapshot”. The plan will take a snapshot of the EBS volume and store it for 7 days before discarding it.
Inside of your plan you are going to create a rule. In the example we only need one rule.

I filled the fields out as follows:
Rule Name: 7DayRetention
Frequency: Daily
Backup Window: Use Backup Window Defaults
Transition to Cold Storage: Never
Expire: 7 Days
Backup Vault: Default – You can create different vaults with various options. I would suggest this if you want to separate your projects or customers.
Tags: You can add various tags but I didn’t set any up for this example.
Once you have all the options filled out hit “Create Plan” to save your new plan. You can now assign resources to your plan which is how you actually choose what is going to be backed up!
In Resource Assignments click “Assign resources”

You will need to define a few things in the next step which is choosing your resources.

Resource assignment name: I used the hostname of my Linux Server
IAM Role: I used default
Assign Resources: This is where you can get creative. One thing I am going to set up going forward is that every EBS volume tagged with a key of Backup and a value of Yes will fit this resource assignment. Then I don’t have to add each volume individually. Feel free to explore. What I did was choose “Assign by” Resource ID, then a Resource Type of EBS Volume, and then found my resource in the list.
Hit Assign Resources when you are done.
That’s it! You now have a backup plan that will take a snapshot of your EBS volume during the backup window every day. It will then store each snapshot for one week and then delete it.
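For anyone who would rather automate this, a rough boto3 sketch of the same plan plus a tag-based selection (every resource tagged Backup=Yes) might look like the following. The role ARN, names, and schedule are placeholders, and the request shapes come from my reading of the Backup API, so double-check them against the documentation.

import boto3

backup = boto3.client('backup')

# A daily rule that keeps each snapshot for 7 days in the Default vault
plan = backup.create_backup_plan(
    BackupPlan={
        'BackupPlanName': '7Day-Snapshot',
        'Rules': [
            {
                'RuleName': '7DayRetention',
                'TargetBackupVaultName': 'Default',
                'ScheduleExpression': 'cron(0 5 * * ? *)',  # daily; adjust to your backup window
                'Lifecycle': {'DeleteAfterDays': 7}
            }
        ]
    }
)

# Select resources by tag instead of listing each volume individually
backup.create_backup_selection(
    BackupPlanId=plan['BackupPlanId'],
    BackupSelection={
        'SelectionName': 'tagged-backup-yes',
        'IamRoleArn': 'arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole',
        'ListOfTags': [
            {
                'ConditionType': 'STRINGEQUALS',
                'ConditionKey': 'Backup',
                'ConditionValue': 'Yes'
            }
        ]
    }
)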
This service by AWS should solve a myriad of problems for many organizations.
If you have questions feel free to reach out!