Where Is It 5 O’Clock Pt: 4

As much as I've scratched my head working on this project, it has been fun to learn some new things and build something that isn't infrastructure automation. I've learned some frontend web development and some backend development, and utilized some new Amazon Web Services products.

With all that nice stuff said, I'm proud to announce that I have built a fully functioning project that is finally working the way I intended. You can visit the website here:

www.whereisitfiveoclock.net

To recap, I bought this domain one night as a joke and thought, "Hey, maybe one day I'll build something." I started off building a pure Python application backed by Flask. You can read about that in Part 1. This did not work out the way I intended, as it did not refresh the timezones on page load. In Part 3 I discussed how I was re-architecting the project to include an API that would be called on page load.

The API worked great and delivered two JSON objects into my frontend. I then parsed the two JSON objects into two separate tables that display where you can be drinking and where you probably shouldn’t be drinking.

This is a snippet of the JavaScript I wrote to iterate over the JSON objects while adding them into the appropriate table:

function buildTable(someinfo) {
    // The two tables: one for where it is 5PM, one for where it is not
    var table1 = document.getElementById('its5pmsomewhere');
    var table2 = document.getElementById('itsnot5here');
    // The API returns two JSON strings; parse them into objects
    var its5_json = JSON.parse(someinfo[0]);
    var not5_json = JSON.parse(someinfo[1]);

    // Add a row to the "acceptable" table for each timezone
    its5_json['its5'].forEach((value) => {
        var row = `<tr>
                     <td>${value}</td>
                     <td></td>
                   </tr>`;
        table1.innerHTML += row;
    });

    // Add a row to the "not yet" table for the rest
    not5_json['not5'].forEach((value) => {
        var row = `<tr>
                     <td></td>
                     <td>${value}</td>
                   </tr>`;
        table2.innerHTML += row;
    });
}

First I reference the two HTML tables. I then parse the two JSON objects from the API and iterate over them, building a row for each timezone and appending it to the appropriate table.

If you want more information on how I did this feel free to reach out.

I want to continue iterating on this application to add new features. I need to do some standard things like adding Google Analytics so I can track traffic. I also want to add a search feature and a map that displays the different areas of drinking acceptability.

I am also open to requests. One of my friends suggested that I add a countdown timer for each location where it is not yet acceptable to be drinking.

Feel free to reach out in the comments or on your favorite social media platform! And as always, if you liked this project please share it with your friends.

Where Is It Five O’Clock Pt: 3

So I left this project at a point where I felt it needed to be re-architected, given that Flask only executes the function once and not every time the page loads.

I re-architected the application in my head to include an API that calls the Lambda function and returns lists of places where it is and is not acceptable to be drinking based on the 5 O'Clock rules. These two lists will be JSON objects, each with a single key and multiple values; the values are the timezones where it is (or is not) appropriate to be drinking.

After the JSON objects are generated I can reference them through the web frontend and display them in an appropriate way.
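
Conceptually, the Lambda behind the API boils down to something like the sketch below, reusing the helper functions from Part 1. The handler shape and the exact return format are my reconstruction rather than the deployed code; the 'its5' and 'not5' keys match what the frontend parses in Part 4.

import json

def lambda_handler(event, context):
    # Build the two lists and serialize each as its own JSON object,
    # each having a single key that maps to a list of timezone names
    tz_array = getTime()
    its5 = json.dumps({'its5': find5PM(tz_array)})
    not5 = json.dumps({'not5': findNot5PM(tz_array)})
    return [its5, not5]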

At this point I have the API built out and fully functioning the way I think I want it. You can use it by executing the following:
curl https://5xztnem7v4.execute-api.us-west-2.amazonaws.com/whereisit5

I will probably only have this publicly accessible for a few days before locking it back down.

Hopefully, in part 4 of this series, I will have a frontend demo to show!

Where Is It Five O’Clock Pt: 1

I bought the domain whereisitfiveoclock.net a while back and have been sitting on it for quite some time. I had an idea to make a web application that would tell you where it is five o’clock. Yes, this is a drinking website.

I saw this project as a way to learn more Python skills, as well as some more AWS skills, and boy, has it put me to the test. So I’m going to write this series of posts as a way to document my progress in building this application.

Part One: Building The Application

I know that I want to use Python because it is my language of choice. I then researched what libraries I could use to build the frontend. I came across Flask as an option and decided to run with it. The next step was to actually find out where it was 5PM.

In my head, the process was this: if I could first get a list of all the timezones and identify the current time in each, I could filter out the timezones where it was 5PM. Once I established where it was 5PM, I could then pass that information to Flask and figure out a way to display it.

Here is the function for identifying the current hour in all timezones and storing each key/value pair of {Timezone: Current_Hour}:

from datetime import datetime
import pytz
from pytz import timezone

def getTime():
    # Current time in UTC as the reference point
    now_utc = datetime.now(timezone('UTC'))
    timezones = pytz.all_timezones
    # Get the current hour in every timezone and store it as {tz_name: hour}
    tz_array = []
    for tz in timezones:
        current_time = now_utc.astimezone(timezone(tz))
        values = {tz: current_time.hour}
        tz_array.append(values)

    return tz_array

Once everything was stored in tz_array, I passed that info through the following function to identify where it was 5PM. I have another function that identifies everywhere it is NOT 5PM.

def find5PM(tz_array):
    # Collect the names of the timezones where it is 5PM or later
    its5pm = []
    for tz in tz_array:
        for tz_name, hour in tz.items():
            if hour >= 17:
                its5pm.append(tz_name)
    return its5pm

This builds a new list holding just the timezone names and returns it.
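
The NOT-5PM function isn't shown in the post, but it is just the mirror image of find5PM; a minimal sketch (the function name is my own):

def findNot5PM(tz_array):
    # Mirror of find5PM: collect timezones where it is not yet 5PM
    not5pm = []
    for tz in tz_array:
        for tz_name, hour in tz.items():
            if hour < 17:
                not5pm.append(tz_name)
    return not5pm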

Once I had all these together I passed them through as variables to Flask. This is where I first started to struggle. In my original revisions of the functions, I was only returning one of the values rather than returning ALL of the values. This resulted in hours of struggling to identify the cause of the problem. Eventually, I had to start over and completely re-work the code until I ended up with what you see above.
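
For illustration, the wiring looked roughly like the sketch below (the template name is a placeholder). Worth noting: because the lists are computed when the module loads, they are only evaluated once, which is exactly the page-refresh problem that forced the re-architecture in Part 3.

from flask import Flask, render_template

app = Flask(__name__)

# Computed once at import time -- the root cause of the
# stale-timezone behavior described in Parts 3 and 4
tz_array = getTime()
its5 = find5PM(tz_array)
not5 = findNot5PM(tz_array)

@app.route('/')
def index():
    return render_template('index.html', its5=its5, not5=not5)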

The code was finally functional and I was ready to deploy it to Amazon Web Services for public access. I will discuss my design and deployment in Part Two.

http://whereisitfiveoclock.net

Automatically Transcribing Audio Files with Amazon Web Services

I wrote this Lambda function to automatically transcribe audio files that are uploaded to an S3 bucket. This is written in Python3 and utilizes the Boto3 library.

You will need to give your Lambda function permissions to access S3, Transcribe and CloudWatch.

The script will create an AWS Transcribe job with the format: 'filetranscription'+YYYYMMDD-HHMMSS
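
The heart of the function is a single Boto3 call. A trimmed sketch of how such a Lambda might look (the media format and the event parsing are assumptions on my part; the actual code is linked below):

import boto3
from datetime import datetime

transcribe = boto3.client('transcribe')

def lambda_handler(event, context):
    # Pull the uploaded object's location out of the S3 event
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    # Job name format: 'filetranscription' + YYYYMMDD-HHMMSS
    job_name = 'filetranscription' + datetime.now().strftime('%Y%m%d-%H%M%S')
    transcribe.start_transcription_job(
        TranscriptionJobName=job_name,
        Media={'MediaFileUri': f's3://{bucket}/{key}'},
        MediaFormat='mp3',  # assumption; depends on what you upload
        LanguageCode='en-US'
    )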

I will be iterating on the script to hopefully add a web frontend, as well as potentially branching out to transcribe voice calls from phone systems and Amazon Connect.

You can view the code here

If you have questions or comments feel free to reach out to me here or on any Social Media.

Slack’s New Nebula Network Overlay

I was turned on to this new tool that the Slack team had built. As an avid Slack user, I was immediately intrigued to test this out.

My use case is going to be relatively simple for the sake of this post. I am going to create a Lighthouse, or parent node, on an EC2 instance in my Amazon Web Services account. It will have an Elastic IP so we can route traffic to it publicly. I will need to create a security group to allow traffic to port 4242 UDP, and I will also allow this port inbound on my local firewall.

Clone the Git repository for Nebula and also download the binaries. I put everything into /etc/nebula.

Once you have all of the files downloaded you can generate your certificate authority by running the command:

./nebula-cert ca -name "Your Company"

You will want to make a backup of the ca.key and ca.crt files generated by this command.

Once you have your certificate authority you can create certificates for your hosts. In my case I am only generating one for my local server. The following command will generate the certificate and keys:

./nebula-cert sign -name "Something Memorable" -ip "192.168.100.2/24"

Where it says "Something Memorable" I placed the hostname of the server I am using so that I remember it. One thing that the documentation doesn't cover is assigning the IP for your Lighthouse. Because I think of the Lighthouse as more of a gateway, I assigned it 192.168.100.1 in the config file; this is covered below.
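
The Lighthouse needs its own certificate carrying that IP as well, so its signing command would presumably be:

./nebula-cert sign -name "lighthouse" -ip "192.168.100.1/24"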

There is a pre-generated configuration file located here. I simply copied this into a file inside of /etc/nebula/.

Edit the file as needed. Lines 7-9 will need to be modified for each host as each host will have its own certificate.

Line 20 will need to be the IP address of your Lighthouse and this will remain the same on every host. On line 26 you will need to change this to true for your Lighthouse. On all other hosts, this will remain false.

The other major thing I changed was to allow SSH traffic. There is an entire section about SSH in the configuration that I ignored; I simply added a firewall rule to the bottom of the file as follows:

- port: 22
  proto: tcp
  host: any

This code is added below the 443 rule for HTTPS. Be sure to follow normal YAML notation practices.

Once this is all in place you can execute your Nebula network by using the following command:

/etc/nebula/nebula -config /etc/nebula/config.yml

Execute your Lighthouse first and ensure it is up and running. Once it is running, start Nebula on your host and you should see a connection handshake. Test by pinging your Lighthouse from your host and your host from your Lighthouse. I also tested file transfer using SCP, which verifies SSH connectivity.

Now, the most important thing that Slack doesn't discuss is creating a systemd unit for automatic startup. So I have included a basic one for you here:

[Unit]
Description=Nebula Service

[Service]
Restart=always
RestartSec=1
User=root
ExecStart=/etc/nebula/nebula -config /etc/nebula/config.yml

[Install]
WantedBy=multi-user.target
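
Assuming you save this as /etc/systemd/system/nebula.service, you can enable and start it with:

systemctl daemon-reload
systemctl enable --now nebula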

That’s it! I would love to hear about your implementations in the comments below!

Discovering DHCP Servers with NMAP

I was working at a client site where a device would receive a new IP address via DHCP nearly every second. It was the only device on the network with this issue, but I decided to test for rogue DHCP servers anyway. (If someone knows of a GUI tool to do this, let me know in the comments.) I utilized the command line utility NMAP to scan the network.

sudo nmap --script broadcast-dhcp-discover

The output should look something like:

Starting Nmap 7.70 ( https://nmap.org ) at 2019-11-25 15:52 EST
Pre-scan script results:
| broadcast-dhcp-discover:
| Response 1 of 1:
| IP Offered: 172.20.1.82
| DHCP Message Type: DHCPOFFER
| Server Identifier: 172.20.1.2
| IP Address Lease Time: 7d00h00m00s
| Subnet Mask: 255.255.255.0
| Time Offset: 4294949296
| Router: 172.20.1.2
| Domain Name Server: 8.8.8.8
| Renewal Time Value: 3d12h00m00s
|_ Rebinding Time Value: 6d03h00m00s

This test ran on my local network and verified there is only one DHCP server. If there were multiple, we would see additional responses.

Ultimately this was not the issue at my client site, but this is a function of NMAP that I had not used before.

Let me know your experiences with rogue DHCP in the comments!

Amazon S3 Backup from FreeNAS

I was chatting with my Dad about storage for his documents. He mentioned wanting to store them on my home NAS. I chuckled and stated that I would just push them up to the cloud because it would be cheaper and more reliable. When I got home that day I thought to myself how I would actually complete this task.

There are plenty of obvious tools to accomplish offsite backup. I want to push all of my home videos and pictures to an S3 bucket in my AWS environment. I could:

  1. Mount the S3 bucket using the drivers provided by AWS and then RSYNC the data across on a cron job.
  2. Utilize a FreeNAS plugin to drive the backup.
  3. Build my own custom solution to the problem and re-invent the wheel!

It is clear the choice is going to be 3.

With the help of the Internet, I put together a simple Python script that will back up my data. I can then run it on a cron job to upload the files periodically. OR! I could Dockerize the script and run it as a container! Cue more overkill.

The result is something over-complicated for a simple backup task, but I like it and it works for my environment. One of the most important things is that I can point the script at one directory that houses many symlinks to other directories, so I only have to manage one backup point.
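
The core of the script is just a directory walk that follows those symlinks and pushes each file to S3. A simplified sketch of the idea (bucket name and paths are placeholders, not my real ones):

import os
import boto3

s3 = boto3.client('s3')
BACKUP_ROOT = '/mnt/backup'   # the one directory full of symlinks
BUCKET = 'my-backup-bucket'   # placeholder bucket name

def backup():
    # followlinks=True is what makes the single-backup-point trick work
    for dirpath, dirnames, filenames in os.walk(BACKUP_ROOT, followlinks=True):
        for name in filenames:
            local_path = os.path.join(dirpath, name)
            key = os.path.relpath(local_path, BACKUP_ROOT)
            s3.upload_file(local_path, BUCKET, key)

if __name__ == '__main__':
    backup()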

Take a look at the GitHub link below and let me know your thoughts!

[GitHub]

Lessons Learned from Migrating 17TB of Data

I finally pulled the trigger on some new hard drives for my home NAS. I am migrating from a 5U server down to a small desktop-size NAS. Ultimately this removes the need for my 42U standing rack.

I did this transfer a year or so ago when I did a full rebuild of my server, but forgot to take any notes on the process I used. Instant regret. I remembered utilizing Rsync to do the actual transfer, and I assumed that I had mounted both the existing NAS and the new NAS via NFS. Both of these mounts would reside inside a throwaway virtual machine on my application server.

I used the following Rsync command to start.

rsync --ignore-existing -ahzrvvv --progress {Source} {Destination}

To break this down a little bit:

--ignore-existing: This skips any files that already exist at the destination.

-a: Archive flag. This preserves my data structure.

-h: Human readable. If this flag exists for a command, use it. It makes the output much easier to read.

-z: Compression. There are a bunch of different compression options for Rsync. This one does enough for me.

-r: This makes Rsync copy files recursively through the directories.

-vvv: I put triple verbose on because I was having so many issues.

--progress: This will show the number of files and the progress of the file that is currently being copied. Especially useful when copying large files.

Now, my command changed over time, but ultimately this is what I ended on. My source and destination were set to the respective NFS mounts and I hit [enter] to start the transfer. I left it running on the console of my virtual machine and walked away after I saw a handful of successful transfers. Assuming everything was going fine, I went about my day, as 17TB was going to take a while.

A few hours later I decided to check in on my transfer and saw that it had gotten stuck on a file after only 37KB of data transfer! Frustrated, I restarted the process, only to see the same results later on.

After updating, downgrading, and modifying my command structure, I came to the realization that there must be an issue with transferring between two NFS shares.

I am still researching why this happens, but it seems as though when the transfer starts, the files are brought into a buffer somewhere within the Linux filesystem, which gets maxed out and causes the file transfer to stall. Almost as if the buffer can't flush the new files fast enough.

When I switched the transfer to use SSH instead of NFS-to-NFS, the transfer completed successfully.
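
For reference, the SSH variant of the command looks something like this (host and paths are placeholders):

rsync --ignore-existing -ahzrvvv --progress {Source} user@newnas:{Destination}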

If someone has some information regarding how this works I would love to learn more.

Encrypt an Existing EBS Volume

Say you have an existing EBS volume on Amazon Web Services that you want to encrypt. How would you do that? The following guide shows you how via the AWS Management Console.

  1. Log in to your console.
  2. Navigate to the EBS volume you would like to encrypt.
  3. Right-click on your volume and create a snapshot.
  4. Give the snapshot a description. I always do, even though we are going to end up deleting this one.
  5. Make a copy of the snapshot you created in step 3.
  6. In the copy settings you simply need to choose to encrypt the volume. You can specify the encryption key to use; for this guide we will just use the standard EBS encryption key.

Once you have your new encrypted snapshot you can easily create a volume from that snapshot and then re-attach it to your instance!
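
If you have many volumes to encrypt, the same flow can be scripted with Boto3. A rough sketch (volume ID, region, and availability zone are placeholders):

import boto3

ec2 = boto3.client('ec2', region_name='us-west-2')  # placeholder region

# 1. Snapshot the unencrypted volume
snap = ec2.create_snapshot(VolumeId='vol-0123456789abcdef0',
                           Description='Temporary snapshot for encryption')
ec2.get_waiter('snapshot_completed').wait(SnapshotIds=[snap['SnapshotId']])

# 2. Copy the snapshot with encryption enabled (default EBS key)
copy = ec2.copy_snapshot(SourceSnapshotId=snap['SnapshotId'],
                         SourceRegion='us-west-2',
                         Encrypted=True)
ec2.get_waiter('snapshot_completed').wait(SnapshotIds=[copy['SnapshotId']])

# 3. Create an encrypted volume from the copied snapshot, ready to attach
ec2.create_volume(SnapshotId=copy['SnapshotId'],
                  AvailabilityZone='us-west-2a')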

Fixing Unadoptable Unifi Devices

I wrote an article about this before that utilizes Robo3T. I figured I should also have a version for those of you who use SSH and the command line.

DISCLAIMER: I am not responsible if you break anything. If you need help let me know before you create a big mess!

EDIT: I wrote a Python script that can handle all of this for you; just enter your MAC address. Grab it here: https://github.com/avansledright/unifideletedevice

SSH into your Unifi Controller utilizing whatever means you typically use.

Connect to MongoDB by issuing the command:
mongo --port 27117

If you are utilizing a different port, change the port flag accordingly.

Once connected select the Unifi Database:

use ace

Then you can utilize the following queries to perform actions:

Find device:
db.device.find({ 'mac' : 'XX:XX:XX:XX:XX:XX' })
Remove device:
db.device.remove({ 'mac' : 'XX:XX:XX:XX:XX:XX' })

Should you want to find which site a device is registered to, you can utilize the "Find Device" query from above. In the JSON output, locate the site ID. Then utilize the query below, replacing the X's with the site ID you found. The result should be a nice JSON output with the name of the site.

Find site query:
db.site.find({ '_id': ObjectId('XXXXXX') })
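
And if you would rather script it, the Python script linked above boils down to something like this pymongo sketch (the site_id field name is an assumption based on the controller's device documents):

from pymongo import MongoClient

def delete_device(mac):
    # Connect to the controller's bundled MongoDB instance
    client = MongoClient('localhost', 27117)
    db = client['ace']  # the Unifi database
    device = db.device.find_one({'mac': mac})
    if device:
        print('Removing device registered to site', device.get('site_id'))
        db.device.delete_one({'mac': mac})

delete_device('XX:XX:XX:XX:XX:XX')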