Amazon Web Services Cloud Architecting Python

Where Is It 5 O’Clock Pt: 2

So I spent the evening deploying this web application to Amazon Web Services. In my test environment, everything appeared to be working great because every time I reloaded the page the function ran again as well.

When I transferred this over to a live environment, I realized the Python function only ran when I committed a change and the application was re-deployed to my Elastic Beanstalk environment.

This poses a new problem. If the function doesn't fire every time the page is refreshed, the time won't update and the site will show the wrong areas where it is 5 O'Clock. Ugh.

So, over the next few weeks, in my spare time, I will be re-writing this entire application to function the way I intended it to.

I think to do this I will write each function as an AWS Lambda function and then write a frontend that calls these functions on page load. Alternatively, the entire thing could be a single function that returns all of the information in one API call.
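That single-call approach could be sketched roughly like this. This is just a minimal sketch, not the final code: it uses the standard-library zoneinfo module rather than pytz, and the API Gateway-style response shape is an assumption on my part.

```python
import json
from datetime import datetime
from zoneinfo import ZoneInfo, available_timezones

def lambda_handler(event, context):
    # Current time in UTC, shifted into every known timezone
    now = datetime.now(ZoneInfo("UTC"))
    five_oclock = sorted(
        tz for tz in available_timezones()
        if now.astimezone(ZoneInfo(tz)).hour >= 17
    )
    # API Gateway proxy-style response (shape is an assumption)
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"five_oclock": five_oclock}),
    }
```

The frontend would then only need one fetch on page load instead of calling several functions.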

I also really want to display a map that shows the areas that it is 5PM or later but I think this will come in a later revision once the project is actually functioning correctly. Along with some more CSS to make it pretty and responsive so it works on all devices.

The punch list is getting long…

Follow along here:

Amazon Web Services Cloud Architecting Networking Python Technology

Where Is It Five O’Clock Pt: 1

I bought the domain a while back and have been sitting on it for quite some time. I had an idea to make a web application that would tell you where it is five o’clock. Yes, this is a drinking website.

I saw this project as a way to learn more Python skills, as well as some more AWS skills, and boy, has it put me to the test. So I’m going to write this series of posts as a way to document my progress in building this application.

Part One: Building The Application

I know that I want to use Python because it is my language of choice. I then researched which libraries I could use to build the frontend. I came across Flask as an option and decided to run with that. The next step was to actually find out where it was 5PM.

In my head, I came up with this process: if I could first get a list of all the timezones and identify the current time in each of them, I could filter out the timezones where it was 5PM. Once I established where it was 5PM, I could pass that information to Flask and figure out a way to display it.

Here is the function for identifying the current hour in all timezones and storing each pair of {Timezone : Current_Hour}:

import pytz
from datetime import datetime
from pytz import timezone

def getTime():
    now_utc ='UTC'))
    #print('UTC:', now_utc)
    timezones = pytz.all_timezones
    # get the current hour in every timezone and store it in a list
    tz_array = []
    for tz in timezones:
        current_time = now_utc.astimezone(timezone(tz))
        tz_array.append({tz: current_time.hour})
    return tz_array

Once everything was stored in tz_array, I passed it through the following function to identify where it was 5PM. I have another function that identifies everywhere it is NOT 5PM.

def find5PM(tz_array):
    its5pm = []
    for tz in tz_array:
        for tz_name, hour in tz.items():
            if hour >= 17:
                its5pm.append(tz_name)
    return its5pm

I made a new list, stored just the timezone names in it, and returned it.

Once I had all of these together, I passed them through as variables to Flask. This is where I first started to struggle. In my original revisions of the functions, I was only returning one of the values rather than ALL of the values. This resulted in hours of struggling to identify the cause of the problem. Eventually, I had to start over and completely re-work the code until I ended up with what you see above.

The code was finally functional and I was ready to deploy it to Amazon Web Services for public access. I will discuss my design and deployment in Part Two.

Amazon Web Services Cloud Architecting Python

EC2 Action Slack Notification

I took a brief break from my Lambda function creation journey to go on vacation, but now I'm back!

This function will notify a Slack channel of your choosing when an EC2 instance enters the "running", "stopping", "stopped", or "shutting-down" state. I thought this might be useful for instances that reside under a load balancer. It would be useful to see when your load balancer is scaling up or down in real-time via Slack notification.

In order to use this function, you will need to create a Slack Application with an OAuth key and set that key as an environment variable in your Lambda function. If you are unsure of how to do this I can walk you through it!

Please review the function below:

import os

from slack import WebClient
from slack.errors import SlackApiError

# Map each EC2 state to the message text posted to Slack
STATE_MESSAGES = {
    'running': "has started",
    'shutting-down': "is shutting down",
    'stopped': "has stopped",
    'stopping': "is stopping",
}

# Check EC2 status and post to Slack
def lambda_handler(event, context):
    detail = event['detail']
    instance_id = detail['instance-id']
    state = detail['state']

    # Slack variables
    slack_token = os.environ["slackBot"]
    client = WebClient(token=slack_token)
    channel_string = "XXXXXXXXXXXXXXXXXXXX"

    # Post to Slack that the instance changed state
    if state in STATE_MESSAGES:
        response_string = f"The instance: {instance_id} {STATE_MESSAGES[state]}"
        try:
            client.chat_postMessage(
                channel=channel_string,
                text=f"An Instance {STATE_MESSAGES[state]}",
                blocks=[{"type": "section",
                         "text": {"type": "plain_text", "text": response_string}}],
            )
        except SlackApiError as e:
            assert e.response["error"]

As always the function is available on GitHub as well:

If you find this function helpful please share it with your friends or repost it on your favorite social media platform!

Amazon Web Services Cloud Architecting Python

Check EC2 Instance Tags on Launch

In my ever-growing quest to automate my AWS infrastructure deployments, I realized that just checking my tags wasn’t good enough. I should force myself to put tags in otherwise my instances won’t launch at all.

I find this particularly useful because I utilize AWS Backup to do automated snapshots nightly of all of my instances. If I don’t put the “Backup” tag onto my instance it will not be included in the rule. This concept of forced tagging could be utilized across many different applications including tagging for development, production, or testing environments.

To do this I created the Lambda function below. Utilizing EventBridge, I have this function run every time an EC2 instance enters the "running" state.

import boto3

def lambda_handler(event, context):
    detail = event['detail']
    instance_id = detail['instance-id']
    state = detail['state']
    ec2 = boto3.resource('ec2')

    if state == 'running':
        # Check to see if the Backup tag is added to the instance
        tag_to_check = 'Backup'
        instance = ec2.Instance(instance_id)
        if tag_to_check not in [t['Key'] for t in (instance.tags or [])]:
            print("Stopping Instance: ", instance_id)
            instance.stop()
            # Wait for the shutdown to finish before the function returns
            instance.wait_until_stopped()
            print("Instance is stopped")

The function then waits until the instance has actually reached the stopped state before it finishes.

You can clone the repository from GitHub here:

If you utilize the script please share it with your friends. Feel free to modify it as you please and let me know how it works for you! As always, if you have any questions feel free to reach out here or on any other platform!

Amazon Web Services Cloud Architecting Python

AWS Tag Checker

I wrote this script this morning as I was creating a new web server. I realized that I had been forgetting to add my “Backup” tag to my instances so that they would automatically be backed up via AWS Backup.

This one is pretty straightforward. Utilizing Boto3, this script will iterate over all of your instances and check them for the tag specified in tag_to_check. If the tag is not present, it will add the tag defined in the create_tags call.

After that is all done, it will iterate over the instances again to check that the tag has been added. If a new instance has appeared or the tag failed to apply, it will print out a list of instance IDs that do not have the tag.

Here is the script:

import boto3

ec2 = boto3.resource('ec2')
inst_describe = ec2.instances.all()

tag_to_check = 'Backup'

for instance in inst_describe:
    if tag_to_check not in [t['Key'] for t in instance.tags]:
        print("This instance is not tagged: ", instance.instance_id)
        response = ec2.create_tags(
            Resources=[instance.instance_id],
            Tags=[
                {
                    'Key': 'Backup',
                    'Value': 'Yes'
                }
            ]
        )

# Double check that there are no other instances without tags
for instance in inst_describe:
    if tag_to_check not in [t['Key'] for t in instance.tags]:
        print("Failed to assign tag, or new instance: ", instance.instance_id)

The script is also available on GitHub here:

If you find this script helpful feel free to share it with your friends and let me know in the comments!

Amazon Web Services Cloud Architecting Python

Lambda Function Post to Slack

I wrote this script out of a need to practice my Python skills. The idea is that if a file gets uploaded to an S3 bucket then the function will trigger and a message with that file name will be posted to a Slack channel of your choosing.

To utilize this you will need to include the Slack pip package as well as the slackclient pip package when you upload the function to the AWS Console.

You will also need to create an OAuth key for a Slack application. If you are unfamiliar with this process, feel free to drop a comment below or shoot me a message and I can walk you through the process or write a second part of the guide.
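In the meantime, here is a rough sketch of the idea. Note that this is not the original function: it swaps the Slack SDK for a plain incoming-webhook POST so it only needs the standard library, and SLACK_WEBHOOK_URL is a hypothetical environment variable.

```python
import json
import os
import urllib.request
from urllib.parse import unquote_plus

def object_keys(event):
    """Pull the uploaded object keys out of an S3 event payload."""
    # S3 delivers object keys URL-encoded (spaces arrive as '+')
    return [unquote_plus(r["s3"]["object"]["key"]) for r in event["Records"]]

def lambda_handler(event, context):
    # SLACK_WEBHOOK_URL is a hypothetical env var pointing at your webhook
    for key in object_keys(event):
        payload = json.dumps({"text": f"A new file was uploaded: {key}"}).encode()
        req = urllib.request.Request(
            os.environ["SLACK_WEBHOOK_URL"],
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)
```

Wire the S3 bucket's "ObjectCreated" event to the function and each upload produces one Slack message.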

Here is a link to the project:

If this helps you please share this post on your favorite social media platform!


Working From Home Tips

I’ve been working from home for some time now and have gotten into a pretty good routine that keeps me sane, healthy and happy.

  1. Create a schedule. You need a routine that you stick to, starting with waking up at a decent time. You don't have to commute to an office, which is nice, but you should still plan on waking up before 9AM.
  2. Get dressed. A lot of people I know don’t get out of their pajamas when they work from home. This is a HUGE mistake. Get up, take a shower and get dressed as if you were going to your office. Maybe you can dress down a little bit and wear jeans instead of dress pants but put real pants on!
  3. Create a distraction-free workspace. If you have a home office, now is the time to use it. Clean it up and get yourself set up like you would in your real office. If you need an extra monitor then go get one!
  4. Eat regular meals. When you get up have your breakfast like normal. For me that is usually just a protein bar and a glass of water. Eat a small but filling lunch to keep your body happy.
  5. Take breaks. I can't stress this one enough. When you aren't working from home, you often take breaks without even realizing it: chatting with coworkers, going to get coffee. I often take breaks to stretch or walk around. The most important thing is to stop working for a few minutes and give yourself a chance to recharge.

I hope these tips help some of you if you are new to working from home. If you have any other tips feel free to add them below in the comments!

Amazon Web Services Cloud Architecting Python Technology

Automatically Transcribing Audio Files with Amazon Web Services

I wrote this Lambda function to automatically transcribe audio files that are uploaded to an S3 bucket. This is written in Python3 and utilizes the Boto3 library.

You will need to give your Lambda function permissions to access S3, Transcribe and CloudWatch.

The script will create an AWS Transcribe job with the format: 'filetranscription'+YYYYMMDD-HHMMSS
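A sketch of what that job-creation step might look like is below. The S3 event shape, media format handling, and language code are assumptions on my part; only the 'filetranscription'+YYYYMMDD-HHMMSS naming comes from the script itself.

```python
import datetime

def job_name(now=None):
    """Build a Transcribe job name like filetranscription20200102-030405."""
    now = now or datetime.datetime.utcnow()
    return "filetranscription" + now.strftime("%Y%m%d-%H%M%S")

def lambda_handler(event, context):
    import boto3  # deferred import so the naming helper stays stdlib-only
    transcribe = boto3.client("transcribe")
    for record in event["Records"]:
        # Location of the uploaded audio file that triggered the function
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        transcribe.start_transcription_job(
            TranscriptionJobName=job_name(),
            Media={"MediaFileUri": f"s3://{bucket}/{key}"},
            MediaFormat=key.rsplit(".", 1)[-1].lower(),  # e.g. mp3, wav
            LanguageCode="en-US",
        )
```

Transcribe then writes the finished transcript to its own output location, which you can poll or route onward.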

I will be iterating over the script to hopefully add in a web front end as well as potentially branching to do voice call transcriptions for phone calls and Amazon Connect.

You can view the code here

If you have questions or comments feel free to reach out to me here or on any Social Media.

Amazon Web Services Linux Networking Technology

Slack’s New Nebula Network Overlay

I was turned on to this new tool that the Slack team had built. As an avid Slack user, I was immediately intrigued to test this out.

My use case is going to be relatively simple for the sake of this post. I am going to create a Lighthouse, or parent node, on an EC2 instance in my Amazon Web Services account. It will have an Elastic IP so we can route traffic to it publicly. I will also need to create a security group to allow traffic on port 4242 UDP, and I will allow this port inbound on my local firewall as well.

Clone the Git repository for Nebula and also download the binaries. I put everything into /etc/nebula.

Once you have all of the files downloaded you can generate your certificate of authority by running the command:

./nebula-cert ca -name "Your Company"

You will want to make a backup of the ca.key and files that are generated by this command.

Once you have your certificate of authority you can create certificates for your hosts. In my case I am only generating one for my local server. The following command will generate the certificate and keys:

./nebula-cert sign -name "Something Memorable" -ip ""

Where it says “Something Memorable” I placed the hostname of the server I am using so that I remember. One thing that the documentation doesn’t go over is assigning the IP for your Lighthouse. Because I recognize the Lighthouse as more of a gateway I assigned it to in the config file. This will be covered soon.

There is a pre-generated configuration file located here. I simply copied this into a file inside of /etc/nebula/

Edit the file as needed. Lines 7-9 will need to be modified for each host as each host will have its own certificate.

Line 20 will need to be the IP address of your Lighthouse and this will remain the same on every host. On line 26 you will need to change this to true for your Lighthouse. On all other hosts, this will remain false.

The other major thing I changed was to allow SSH traffic. There is an entire section about SSH in the configuration that I ignored and simply added the firewall to the bottom of the file as follows:

- port: 22
  proto: tcp
  host: any

This code is added below the 443 rule for HTTPS. Be sure to follow normal YAML notation practices.

Once this is all in place you can execute your Nebula network by using the following command:

/etc/nebula/nebula -config /etc/nebula/config.yml

Execute your Lighthouse first and ensure it is up and running. Once it is running on your Lighthouse you can run it on your host and you should see a connection handshake. Test by pinging your Lighthouse from your host and from your Lighthouse to your host. I also tested file transfer as well using SCP. This verifies SSH connectivity.

Now, the most important thing that Slack doesn't discuss is creating a systemd service unit for automatic startup. So I have included a basic one for you here:

Description=Nebula Service

ExecStart=/etc/nebula/nebula -config /etc/nebula/config.yml

WantedBy=multi-user.target

That’s it! I would love to hear about your implementations in the comments below!

Linux Networking Technology

Discovering DHCP Servers with NMAP

I was working at a client site where a device would receive a new IP address via DHCP nearly every second. It was the only device on the network that had this issue, but I decided to test for rogue DHCP servers anyway. (If someone knows of a GUI tool to do this, let me know in the comments.) I utilized the command-line utility NMAP to scan the network.

sudo nmap --script broadcast-dhcp-discover

The output should look something like:

Starting Nmap 7.70 ( ) at 2019-11-25 15:52 EST
Pre-scan script results:
| broadcast-dhcp-discover:
| Response 1 of 1:
| IP Offered:
| DHCP Message Type: DHCPOFFER
| Server Identifier:
| IP Address Lease Time: 7d00h00m00s
| Subnet Mask:
| Time Offset: 4294949296
| Router:
| Domain Name Server:
| Renewal Time Value: 3d12h00m00s
|_ Rebinding Time Value: 6d03h00m00s

This test ran on my local network and verified that there is only one DHCP server. If there were multiple, we would see additional responses.

Ultimately this was not the issue at my client site but this is a new function of NMAP that I had not used.

Let me know your experiences with rogue DHCP in the comments!