Categories
Random

Working From Home Tips

I’ve been working from home for some time now and have gotten into a pretty good routine that keeps me sane, healthy and happy.

  1. Create a schedule. You need a routine that you stick to, starting with waking up at a decent time. Not having to commute to an office is nice, but you should still plan on waking up before 9 AM.
  2. Get dressed. A lot of people I know don’t get out of their pajamas when they work from home. This is a HUGE mistake. Get up, take a shower and get dressed as if you were going to your office. Maybe you can dress down a little bit and wear jeans instead of dress pants but put real pants on!
  3. Create a distraction-free workspace. If you have a home office, now is the time to use it. Clean it up and get yourself set up like you would in your real office. If you need an extra monitor, go get one!
  4. Eat regular meals. When you get up have your breakfast like normal. For me that is usually just a protein bar and a glass of water. Eat a small but filling lunch to keep your body happy.
  5. Take breaks. I can’t stress this one enough. When you aren’t working from home you often take breaks without even realizing it: chatting with coworkers, going to get coffee. I often take breaks to stretch or walk around. The most important thing is to stop working for a few minutes and let yourself recharge.

I hope these tips help some of you if you are new to working from home. If you have any other tips feel free to add them below in the comments!

Categories
Amazon Web Services Cloud Architecting Python Technology

Automatically Transcribing Audio Files with Amazon Web Services

I wrote this Lambda function to automatically transcribe audio files that are uploaded to an S3 bucket. It is written in Python 3 and uses the Boto3 library.

You will need to give your Lambda function permissions to access S3, Transcribe and CloudWatch.

The script will create an AWS Transcribe job with the format: 'filetranscription'+YYYYMMDD-HHMMSS
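For reference, here is a minimal sketch of what a handler like this can look like. This is not the exact code from the repository linked below; the S3 event trigger, language code, and return value are assumptions for illustration.

import datetime
import urllib.parse

import boto3

transcribe = boto3.client("transcribe")


def lambda_handler(event, context):
    # Pull the bucket and object key out of the S3 event that triggered the function
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["object"]["key"])

    # Job name format: 'filetranscription' + YYYYMMDD-HHMMSS
    job_name = "filetranscription" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")

    transcribe.start_transcription_job(
        TranscriptionJobName=job_name,
        Media={"MediaFileUri": f"s3://{bucket}/{key}"},
        MediaFormat=key.rsplit(".", 1)[-1].lower(),  # e.g. "mp3" or "wav"
        LanguageCode="en-US",
    )

    return {"TranscriptionJobName": job_name}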

I will be iterating over the script to hopefully add in a web front end as well as potentially branching to do voice call transcriptions for phone calls and Amazon Connect.

You can view the code here

If you have questions or comments feel free to reach out to me here or on any Social Media.

Categories
Amazon Web Services Linux Networking Technology

Slack’s New Nebula Network Overlay

I was recently turned on to Nebula, a new overlay networking tool that the Slack team built. As an avid Slack user, I was immediately intrigued to test it out.

My use case is going to be relatively simple for the sake of this post. I am going to create a Lighthouse, or parent node, on an EC2 instance in my Amazon Web Services account. It will have an Elastic IP so we can route traffic to it publicly. I will also need a security group that allows inbound UDP traffic on port 4242, and I will allow the same port inbound on my local firewall.

Clone the Git repository for Nebula and download the release binaries. I put everything into /etc/nebula.
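Something like the following works, assuming the linux-amd64 build; the version number and asset name may have changed, so check the releases page for the current ones.

sudo mkdir -p /etc/nebula
cd /etc/nebula
sudo git clone https://github.com/slackhq/nebula.git
sudo curl -LO https://github.com/slackhq/nebula/releases/download/v1.0.0/nebula-linux-amd64.tar.gz
sudo tar -xzf nebula-linux-amd64.tar.gz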

Once you have all of the files downloaded, you can generate your certificate authority by running:

./nebula-cert ca -name "Your Company"

You will want to make a backup of the ca.key and ca.crt files generated by this command.

Once you have your certificate authority, you can create certificates for your hosts. In my case I am only generating one for my local server. The following command will generate the certificate and keys:

./nebula-cert sign -name "Something Memorable" -ip "192.168.100.2/24"

Where it says “Something Memorable” I used the hostname of the server so that I remember which host the certificate belongs to. One thing the documentation doesn’t cover is assigning the IP for your Lighthouse. Because I think of the Lighthouse as more of a gateway, I assigned it 192.168.100.1 in the config file. This will be covered soon.

There is a pre-generated configuration file located here. I simply copied it into a file at /etc/nebula/config.yml.

Edit the file as needed. Lines 7-9 will need to be modified for each host as each host will have its own certificate.

Line 20 will need to be the IP address of your Lighthouse, and it stays the same on every host. On line 26, change the value to true on your Lighthouse; on all other hosts it stays false.
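For reference, the pieces of the config I am referring to look roughly like this; the certificate paths and the Elastic IP are placeholders for your own values.

pki:
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/host.crt
  key: /etc/nebula/host.key

static_host_map:
  "192.168.100.1": ["YOUR.ELASTIC.IP:4242"]

lighthouse:
  am_lighthouse: false   # set this to true on the Lighthouse itself
  hosts:
    - "192.168.100.1"    # leave this list empty on the Lighthouse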

The other major thing I changed was allowing SSH traffic. There is an entire section about SSH in the configuration that I ignored; I simply added a firewall rule to the bottom of the file as follows:

    - port: 22
      proto: tcp
      host: any

This rule goes below the 443 rule for HTTPS in the inbound section. Be sure to follow normal YAML indentation.

Once this is all in place you can execute your Nebula network by using the following command:

/etc/nebula/nebula -config /etc/nebula/config.yml

Execute your Lighthouse first and ensure it is up and running. Once it is running on your Lighthouse, run it on your host and you should see a connection handshake. Test by pinging your Lighthouse from your host and your host from your Lighthouse. I also tested file transfer using SCP, which verifies SSH connectivity.

Now, the most important thing that Slack doesn’t discuss is creating a systemd unit for automatic startup, so I have included a basic one for you here:

[Unit]
Description=Nebula Service

[Service]
Restart=always
RestartSec=1
User=root
ExecStart=/etc/nebula/nebula -config /etc/nebula/config.yml
[Install]
WantedBy=multi-user.target
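Save that as something like /etc/systemd/system/nebula.service, then reload systemd and enable the service so it starts at boot:

sudo systemctl daemon-reload
sudo systemctl enable nebula
sudo systemctl start nebula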

That’s it! I would love to hear about your implementations in the comments below!

Categories
Linux Networking Technology

Discovering DHCP Servers with NMAP

I was working at a client site where a device would receive a new IP address via DHCP nearly every second. It was the only device on the network with this issue, so I decided to test for rogue DHCP servers. If someone knows of a GUI tool to do this, let me know in the comments. I used the command-line utility Nmap to scan the network.

sudo nmap --script broadcast-dhcp-discover

The output should look something like:

Starting Nmap 7.70 ( https://nmap.org ) at 2019-11-25 15:52 EST
Pre-scan script results:
| broadcast-dhcp-discover:
|   Response 1 of 1:
|     IP Offered: 172.20.1.82
|     DHCP Message Type: DHCPOFFER
|     Server Identifier: 172.20.1.2
|     IP Address Lease Time: 7d00h00m00s
|     Subnet Mask: 255.255.255.0
|     Time Offset: 4294949296
|     Router: 172.20.1.2
|     Domain Name Server: 8.8.8.8
|     Renewal Time Value: 3d12h00m00s
|_    Rebinding Time Value: 6d03h00m00s

This is the result of running the test on my local network, verifying that there is only one DHCP server. If there were multiple, we would see additional responses.
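If the machine running the scan has more than one network interface, you can tell Nmap which one to broadcast on with the -e flag (eth0 here is just an example):

sudo nmap -e eth0 --script broadcast-dhcp-discover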

Ultimately this was not the issue at my client site, but it is a function of Nmap that I had not used before.

Let me know your experiences with rogue DHCP in the comments!

Categories
Amazon Web Services Cloud Architecting Uncategorized

Monitoring Disk Space with CloudWatch

I recently had a request to monitor disk space. Since I don’t use a traditional monitoring platform but instead send all of my alerting to Slack, I wondered how this would work.

There is no built-in disk space metric in CloudWatch, so we will use the scripts available in this guide.

You can follow along on the Amazon guide or follow some simple steps here geared towards Ubuntu based Linux distributions.

First, let’s install some dependencies:

sudo apt-get install unzip

sudo apt-get install libwww-perl libdatetime-perl

Next, we will download the scripts from Amazon:

curl https://aws-cloudwatch.s3.amazonaws.com/downloads/CloudWatchMonitoringScripts-1.2.2.zip -O

Once downloaded you can unpack the ZIP file:

unzip CloudWatchMonitoringScripts-1.2.2.zip && \
rm CloudWatchMonitoringScripts-1.2.2.zip && \
cd aws-scripts-mon

This will put the scripts into a directory called aws-scripts-mon inside of whatever directory you are currently in. I recommend doing this inside of /home/your-user.

There are a few ways to give the scripts permission to publish to CloudWatch. I preferred the awscreds.conf method, but you can also give your instance an IAM role or specify the credentials inline. If you are unsure how to create IAM policies or roles, feel free to message me and we can chat more about that.

Inside the directory there is a template file that you can utilize to generate your awscreds.conf file.

cp awscreds.template awscreds.conf && vi awscreds.conf

Modify the file as needed and save and close it.
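For reference, the finished file is just a pair of values; the ones below are placeholders, not real credentials.

AWSAccessKeyId=YOUR_ACCESS_KEY_ID
AWSSecretKey=YOUR_SECRET_ACCESS_KEY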

Now let’s test the scripts to ensure functionality:

./mon-put-instance-data.pl --disk-space-util --disk-path=/ --verify --verbose

You should see a “Completed Successfully” message. If not, troubleshoot as needed.

The scripts have a lot of functionality, but we are specifically looking at disk usage. I added the following line as a cron job:

0 * * * * /home/ubuntu/aws-scripts-mon/mon-put-instance-data.pl --disk-space-util --disk-path=/

This runs the script every hour on the hour and reports the data to CloudWatch.

Now that our data is being put into CloudWatch we need to alert on any issues. For the purpose of testing I created an alarm that was below my threshold so I could verify the alerting worked. You can adjust as you need to.

Log in to your AWS Management Console and navigate to the CloudWatch console. Your data will be placed under the “Metrics” tab. Once the Metrics tab is open you will see a section called “Linux Systems”. Navigate to this and you should see metrics grouped by “Filesystem, InstanceId, MountPath”. This is where your metrics live. You can navigate around here and view your metrics in the graphing utility. Once you have verified that the data is accurate, you can create an alarm based on this metric.

Navigate to the Alarms section of CloudWatch. Click “Create alarm” in the top right corner. Follow the steps to create your Alarm. For Metric navigate to the metric we found in the previous step. For Conditions, I chose the following:

Threshold Type: Static
Whenever DiskSpaceUtilization is…: Greater than the threshold
Than…: 45% (Note: this value will depend on your actual usage. For testing, I recommend setting it to a value lower than your actual usage percentage so that the alarm will fire.)

Click Next to continue. On the following page you can set up your notifications. I covered creating an AWS Chatbot here. I have all of my CloudWatch alarms sent to an SNS topic called aws-alerts. You can create something similar and have your AWS Chatbot monitor that topic as well. Once the alarm fires, you should get an alert in your specified Slack channel that looks something like this:

Once your alarm is firing you can fine tune your thresholds to notify you as you need!
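If you would rather script the alarm than click through the console, the CLI can create roughly the same thing. The alarm name, instance ID, filesystem, SNS topic ARN, and threshold below are placeholders for your own values.

aws cloudwatch put-metric-alarm \
  --alarm-name disk-space-root \
  --alarm-description "Alarm when the root volume is over 45% used" \
  --namespace System/Linux \
  --metric-name DiskSpaceUtilization \
  --dimensions Name=InstanceId,Value=YOUR_INSTANCE_ID Name=Filesystem,Value=/dev/xvda1 Name=MountPath,Value=/ \
  --statistic Average \
  --period 3600 \
  --evaluation-periods 1 \
  --threshold 45 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:REGION:ACCOUNT_ID:aws-alerts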

Categories
Linux Python

Amazon S3 Backup from FreeNAS

I was chatting with my Dad about storage for his documents. He mentioned wanting to store them on my home NAS. I chuckled and stated that I would just push them up to the cloud because it would be cheaper and more reliable. When I got home that day I thought to myself how I would actually complete this task.

There are plenty of obvious tools to accomplish offsite backup. I want to push all of my home videos and pictures to an S3 bucket in my AWS environment. I could:

  1. Mount the S3 bucket using the drivers provided by AWS and then rsync the data across on a cron job.
  2. Utilize a FreeNAS plugin to drive the backup.
  3. Build my own custom solution to the problem and re-invent the wheel!

It is clear the choice is going to be 3.

With the help of the Internet, I put together a simple Python script that will back up my data. I can then run it on a cron job to upload the files periodically. OR! I could Dockerize the script and run it as a container! Cue more overkill.

The result is something complicated for a simple backup task. But I like it and it works for my environment. One of the most important things is that I can point the script at one directory that houses many symlinks to other directories, so I only have to manage one backup point.
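To give an idea of the approach before you dig into the repository, here is a stripped-down sketch. The bucket name and source directory are placeholders, and the real script linked below handles the configuration and Dockerization.

import os

import boto3

# Placeholder values for illustration only
BUCKET = "my-backup-bucket"
SOURCE_DIR = "/mnt/tank/backup"  # a directory of symlinks to everything I want backed up

s3 = boto3.client("s3")


def backup(source_dir, bucket):
    # followlinks=True lets os.walk descend into symlinked directories
    for root, _dirs, files in os.walk(source_dir, followlinks=True):
        for name in files:
            local_path = os.path.join(root, name)
            # The S3 key mirrors the path relative to the backup root
            key = os.path.relpath(local_path, source_dir)
            s3.upload_file(local_path, bucket, key)
            print(f"Uploaded {local_path} -> s3://{bucket}/{key}")


if __name__ == "__main__":
    backup(SOURCE_DIR, BUCKET)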

Take a look at the GitHub link below and let me know your thoughts!

[GitHub]

Categories
Amazon Web Services

Setting Up AWS Chatbot

Amazon Web Services recently pushed their new Chatbot into beta. This simple bot will allow you to get alerts and notifications sent to either Slack or Amazon Chime. Because I use Slack for alerting, I thought this would be a great tool. Previously I used Marbot to serve a similar function. Marbot is a great product for teams as it allows a user to acknowledge or pass an incident. I am a team of one, so this feature is nice but ultimately not useful for me at this time.

Let’s get started!

Navigate to the new AWS Chatbot in the console

On the right-hand side click the drop-down menu to choose your chat client. I am going to choose Slack because that is what I use. I assume the process would be the same for Chime. You will be prompted by Slack to authorize the application. Go ahead and hit “Install”.

On the next screen, we get to our configuration options, the first being to choose our Slack channel:

I chose the public channel that I already have created for Marbot #aws-alerts. You can do what you want here. Maybe you want a private channel so only you can see alerts for your development environment!

The next section is IAM Permissions

I chose to create an IAM role using a template, kept the predefined policy template, and made up a role name called “aws-chatbot-alerts”.

The last configuration option is for SNS topics.

You can have your bot subscribe to SNS topics so that notifications published there are sent to your channel as well. I don’t currently use any, so I skipped this section, but this could be super useful in the future! Look for future posts about this idea!

I will update this post soon with how to create the Chatbot using the CLI and/or CloudFormation.

Categories
Amazon Web Services

Copying Files To & From an AWS S3 Bucket

Recently I needed to download an entire bucket worth of data for an offsite backup. Easy right? Go to the Amazon Web Services Console and hit download! WRONG.

You can download individual files but not an entire bucket. Seems silly. Luckily there is an easy way to do it via the Amazon Web Services CLI with one simple command:

$ aws s3 cp s3://YOUR_BUCKET/ /LOCAL_DIRECTORY --recursive

Let’s dissect this just a little bit. The first couple of options in the command should be pretty self-explanatory: we are using the AWS CLI, we chose S3 as our service, and the ‘cp’ means we are going to copy. There are a bunch of other options available here; I suggest taking a look at the documentation to learn more. After that, you simply add your bucket name (note the trailing forward slash), then where you want to put the files on your local machine. Finally, I added the --recursive flag so that it copies everything in the lower directories as well.
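The same command works in the other direction for uploads, and if you only want to copy what has changed, ‘aws s3 sync’ is handy. The paths and bucket name below are placeholders.

$ aws s3 cp /LOCAL_DIRECTORY s3://YOUR_BUCKET/ --recursive

$ aws s3 sync /LOCAL_DIRECTORY s3://YOUR_BUCKET/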

Ultimately a very simple solution to transfer some data quickly! The AWS S3 CLI functions very similarly to that of your standard directory functions. So, feel free to poke around and see what it can do!

Categories
Amazon Web Services Cloud Architecting

AWS CLI For CPU Credit Balance

Here is how you create a CloudWatch alarm to monitor CPU Credit Balances less than a certain amount:

aws cloudwatch put-metric-alarm \
  --alarm-name YOUR_ALARM_NAME \
  --alarm-description "Alarm when CPU Credits is below 200" \
  --metric-name CPUCreditBalance \
  --namespace AWS/EC2 \
  --statistic Average \
  --period 300 \
  --threshold 200 \
  --comparison-operator LessThanThreshold \
  --dimensions Name=InstanceId,Value=INSTANCEIDHERE \
  --evaluation-periods 2 \
  --alarm-actions ARN:YOURSNSTOPIC

CloudFormation Template:
https://github.com/avansledright/CloudFormation-CPU-CREDIT-BALANCE

Categories
Linux Technology

Lessons Learned from Migrating 17TB of Data

I finally pulled the trigger on some new hard drives for my home NAS. I am migrating from a 5U server down to a small desktop-size NAS. Ultimately this removes the need for my 42U standing rack.

I did this transfer a year or so ago when I did a full rebuild of my server but forgot to take any notes on the process that I used. Instant regret. I remembered using Rsync for the actual transfer, and I assumed that I had mounted both the existing NAS and the new NAS over NFS. Both of these mounts would reside inside a throwaway virtual machine on my application server.

I used the following Rsync command to start.

rsync --ignore-existing -ahzrvvv --progress {Source} {Destination}

To break this down a little bit:

--ignore-existing: This will skip any files that already exist at the destination.

-a: Archive flag. This preserves my data structure.

-h: Human readable. If this flag exists for a command, use it. It makes things much easier to read.

-z: Compression. There are a bunch of different compression options for Rsync. This one does enough for me.

-r: This makes Rsync copy files recursively through the directories.

-vvv: I put triple verbose on because I was having so many issues.

--progress: This will show the number of files and the progress of the file that is currently being copied. Especially useful when copying large files.

Now, my command changed over time but ultimately this is what I ended on. My source and destination were set to the respective NFS mounts and I hit [enter] to start the transfer. I left it running on the console of my Virtual Machine and walked away after I saw a handful of successful transfers. Assuming everything was going fine I went about my day as 17TB is going to take a while.

A few hours later I decided to check in on my transfer and saw that it had gotten stuck on a file after only 37KB of data transfer! Frustrated, I restarted the process. Only to see the same results later on.

After updating, downgrading, and modifying my command structure, I came to the realization that there must be an issue with transferring between two NFS shares.

I am still researching why this happens, but, to me, it seems as though when the transfer starts the files are brought into a buffer somewhere within the Linux filesystem, which gets maxed out and causes the transfer to stall. Almost as if the buffer can’t send the new files fast enough.

When I switched the transfer to use SSH instead of going NFS to NFS, it completed successfully.
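For anyone hitting the same wall, the working command looked roughly like this, with the new NAS reached over SSH instead of a second NFS mount; the host and paths are placeholders.

rsync --ignore-existing -ahz --progress /mnt/old-nas/ user@new-nas:/mnt/tank/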

If someone has some information regarding how this works I would love to learn more.