Copying Files To & From an AWS S3 Bucket

Recently I needed to download an entire bucket worth of data for an offsite backup. Easy right? Go to the Amazon Web Services Console and hit download! WRONG.

You can download individual files but not an entire bucket. Seems silly. Luckily there is an easy way to do it via the Amazon Web Services CLI. Enter one simple command:

$ aws s3 cp s3://YOUR_BUCKET/ /LOCAL_DIRECTORY --recursive

Let's dissect this just a little bit. The first couple of options in the command should be pretty self-explanatory: we are using the AWS CLI, we chose S3 as our service, and 'cp' means we are going to copy. There are a bunch of other options you can use here; I suggest taking a look at the AWS CLI documentation to learn more. After that, you simply add in your bucket name, noting the trailing forward slash, then where you want to put your files on your local machine. Finally, I added the --recursive flag so that it would read through all the lower directories.
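
The same command works in the other direction if you ever need to push the data back up. And if you expect to run the copy more than once, aws s3 sync only transfers files that are new or have changed. Both examples below are sketches using the same placeholder bucket and directory as above:

$ aws s3 cp /LOCAL_DIRECTORY s3://YOUR_BUCKET/ --recursive
$ aws s3 sync /LOCAL_DIRECTORY s3://YOUR_BUCKET/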

Ultimately a very simple solution to transfer some data quickly! The AWS S3 CLI functions very similarly to that of your standard directory functions. So, feel free to poke around and see what it can do!

AWS CLI For CPU Credit Balance

Here is how you create a CloudWatch alarm to monitor CPU Credit Balances less than a certain amount:

aws cloudwatch put-metric-alarm --alarm-name YOUR_NAME_HERE --alarm-description "Alarm when CPU Credits is below 200" --metric-name CPUCreditBalance --namespace AWS/EC2 --statistic Average --period 300 --threshold 200 --comparison-operator LessThanThreshold --dimensions Name=InstanceId,Value=INSTANCE_ID_HERE --evaluation-periods 2 --alarm-actions YOUR_SNS_TOPIC_ARN
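
To double-check that the alarm was created and see its current state, you can read it back with describe-alarms (using the same placeholder alarm name as above):

aws cloudwatch describe-alarms --alarm-names YOUR_NAME_HERE --query 'MetricAlarms[].StateValue'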

CloudFormation Template:
https://github.com/avansledright/CloudFormation-CPU-CREDIT-BALANCE

Lessons Learned from Migrating 17TB of Data

I finally pulled the trigger on some new hard drives for my home NAS. I am migrating from a 5U server down to a small desktop-sized NAS. Ultimately this removes the need for my 42U standing rack.

I did this transfer a year or so ago when I did a full rebuild of my server, but forgot to take any notes on the process that I used. Instant regret. I remembered utilizing Rsync to do the actual transfer, and I assumed that I had mounted both the existing NAS and the new NAS over NFS. Both of these mounts would reside inside a throwaway virtual machine on my application server.

I used the following Rsync command to start:

rsync --ignore-existing -ahzrvvv --progress {Source} {Destination}

To break this down a little bit:

--ignore-existing: This skips any files that already exist at the destination.

-a: Archive flag. This preserves permissions, ownership, timestamps, and symlinks so my data structure carries over intact.

-h: Human readable. If this flag exists for a command, use it. It makes the output much easier to read.

-z: Compression. There are a bunch of different compression options for Rsync. This one does enough for me.

-r: This makes Rsync copy files recursively through the directories. (Technically -a already implies -r, but it doesn't hurt to be explicit.)

-vvv: I put triple verbose on because I was having so many issues.

--progress: This will show the number of files and the progress of the file that is currently being copied. Especially useful when copying large files.

Now, my command changed over time but ultimately this is what I ended on. My source and destination were set to the respective NFS mounts and I hit [enter] to start the transfer. I left it running on the console of my virtual machine and walked away after I saw a handful of successful transfers. Assuming everything was going fine, I went about my day, as 17TB was going to take a while.

A few hours later I decided to check in on my transfer and saw that it had gotten stuck on a file after only 37KB of data transfer! Frustrated, I restarted the process, only to see the same result later on.

After updating, downgrading, and modifying my command structure, I came to the realization that there must be an issue with transferring between two NFS shares.

I am still researching why this happens but to me, it seems as though when the transfer starts the files are brought into a buffer somewhere within the Linux filesystem which gets maxed out causing the file transfer to stall. Almost as if the buffer can’t send the new files fast enough.

When I switched the transfer to utilize SSH instead of NFS to NFS the transfer completed successfully.
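
For anyone who hits the same wall, the shape of the fix is to keep the source mounted locally and push to the new NAS over SSH instead of writing into a second NFS mount. A sketch with a hypothetical user and host, reusing the same flags as above:

rsync --ignore-existing -ahz --progress {Source} USER@NEW_NAS:{Destination}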

If someone has some information regarding how this works I would love to learn more.

Encrypt an Existing EBS Volume

Say you have an existing EBS volume on Amazon Web Services that you want to encrypt. How would you do that? The following guide shows you how via the AWS Management Console.

  1. Log in to your console.
  2. Navigate to the EBS volume you would like to encrypt.
  3. Right click on your volume and create a snapshot.
  4. Give the snapshot a description. I always do, even though we are going to end up deleting this one.
  5. Make a copy of the snapshot you created in step 3.
  6. In the copy settings, choose to encrypt the volume. You can specify the encryption key to use; for this guide we will just use the standard EBS encryption key.

Once you have your new encrypted snapshot you can easily create a volume from that snapshot and then re-attach it to your instance!
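
If you prefer the command line, the same flow can be scripted with the AWS CLI. This is a rough sketch with placeholder IDs, region, and device name; substitute your own values:

aws ec2 create-snapshot --volume-id vol-XXXXXXXX --description "Pre-encryption snapshot"
aws ec2 copy-snapshot --source-region us-east-1 --source-snapshot-id snap-XXXXXXXX --encrypted --description "Encrypted copy"
aws ec2 create-volume --snapshot-id snap-YYYYYYYY --availability-zone us-east-1a
aws ec2 attach-volume --volume-id vol-YYYYYYYY --instance-id i-XXXXXXXX --device /dev/xvdf

Here snap-YYYYYYYY stands for the encrypted copy and vol-YYYYYYYY for the volume created from it; detach the old unencrypted volume before attaching the new one at the same device name.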

Fixing Unadoptable Unifi Devices

I wrote an article about this before that utilizes Robo3T. I figured I should also have a version for those of you who utilize SSH and Command Line.

DISCLAIMER: I am not responsible if you break anything. If you need help let me know before you create a big mess!

EDIT: I wrote a Python Script that can handle all of this for you just enter in your MAC address. Grab it here: https://github.com/avansledright/unifideletedevice

SSH into your Unifi Controller utilizing whatever means you typically use.

Connect to MongoDB by issuing the command:
mongo --port 27117

If you are utilizing a different port, change the port flag accordingly.

Once connected select the Unifi Database:

use ace

Then you can utilize the following queries to perform actions:

Find device:
db.device.find({ 'mac' : 'XX:XX:XX:XX:XX:XX' })
Remove device:
db.device.remove({ 'mac' : 'XX:XX:XX:XX:XX:XX' })

Should you want to find what site a device is registered to, you can utilize the "Find device" query from above. In the JSON output, locate the Site ID. Then utilize the query below, replacing the X's with the site ID you found. The result should be a nice JSON output with the name of the site.

Find site query:
db.site.find({ '_id': ObjectId('XXXXXX') })
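
If you would rather not open an interactive Mongo shell at all, the lookup can also be run as a one-liner with --eval (same default port assumption as above; the .forEach(printjson) is needed so the matching documents actually print):

mongo --port 27117 ace --eval "db.device.find({ 'mac' : 'xx:xx:xx:xx:xx:xx' }).forEach(printjson)"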

Dialpad – A Review

Dialpad is an online voice over IP phone system focused on being the simplest phone system you have ever used. Is it? So far I sure think so.

Full disclosure: One of the businesses that I am employed by sells phone systems. It isn’t Dialpad.

When you run a business, people inevitably want to call you. For the longest time I avoided having a phone number or giving out my cell phone number; I just wanted to avoid phone calls altogether. Life is so much easier over email. But eventually you need to move on and be able to accept a phone call.

I started using Twilio. Twilio was born in the cloud and runs on Amazon Web Services, which seemed right up my alley! But it is not a phone system. It is very basic unless you want to spend hours programming against it to get it to do what you want. I didn't have time for that. But it did allow me to have a phone number, forward calls, and forward text messages to my existing cell phone. Good enough for now.

My business is growing though. I need more features. With Twilio I still have to respond with my personal cell phone number. This is not great for a number of reasons. Most notably, I don’t want to give out my personal number anymore! This is where Dialpad comes in. Upon sign up I received a new business phone number, a personal phone number AND a conference line.

So I modified my existing Twilio number to forward to my new business line. You can port numbers into Dialpad if you pay for a more advanced plan; as I am unsure if I will stay with this software, I opted to leave my number at Twilio. I then added myself as a forwarding user so that calls come to my cell phone if I am away from my desk. All of this is done through a very user-friendly web interface. You can also link it up to your G Suite account to automatically add new users to Dialpad and put them into their respective call groups.

After all of this was set up, I recorded some greetings and downloaded the desktop app. It works exactly as you would expect without any issues. The mobile app functions quite well. It has some quirks on the messaging side, but overall it does what I need it to do.

One of the most interesting aspects of Dialpad is their Voice AI feature. While you are on a call, it can live transcribe the call for you in the desktop app. Once the call is over, it will analyze it and give you feedback. It just so happens the call I was on was with a client who was unhappy with the way their sales were going for the year, so it flagged the call for lots of "Negative sentiments". This is a very interesting feature that I will be keeping tabs on going forward.

Overall: If you want an easy-to-set-up, full-featured phone system at a decent price and don't care about having a physical desk phone, Dialpad is a great option!

Counting Web Requests

I manage a ton of web servers. Occasionally I see attempts at flooding the servers with traffic, typically in a malicious way. Generally these are just small attacks and nothing to write home about. But I wanted a way to see how many times a server was getting requests from a specific IP address.

Obviously this would be very challenging to accomplish by just looking at the logs. So, I put together a small Linux command that will read and count Apache requests based on unique IP addresses.

cat access.* | awk '{ print $1 }' | sort | uniq -c | sort -n
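
If you only care about the worst offenders, reversing the final sort and piping to head gives the ten busiest IP addresses:

cat access.* | awk '{ print $1 }' | sort | uniq -c | sort -rn | head -n 10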

Try it out and let me know what you think!

AWS Backup

Recently Amazon Web Services announced its new service called AWS Backup. The goal is to create a simple, automated backup solution for resources within the AWS Cloud.

There have been plenty of other solutions out there for backups but most are quite costly. Here is a look at the pricing for the AWS Backup solution:

AWS Backup Pricing Snapshot

The pricing for an EBS Snapshot is the same as the pricing for manual snapshots so it is quite a compelling argument to set this up.

Let’s look at a quick example of how to setup a simple recurring EBS Snapshot. In this example I have a Linux EC2 instance with a single EBS volume attached to it.

Log in to your AWS console and search for "Backup" in the services menu. You will see AWS Backup.

AWS Console Menu – AWS Backup

Once you are in the console for AWS Backup, choose "Manage Backup Plans".

Manage AWS Backup Plans

To get the full experience of AWS Backup, I chose to make my own plan. You could also choose to use one of their existing plans.

AWS Backup Options

Give your backup plan a name. Something so you can remember what the plan is going to be doing. For my example I named my plan “7Day-Snapshot”. My plan will take a snapshot of the EBS volume and store it for 7 days before discarding it.

Inside your plan you are going to create a rule. In this example we only need one rule.


I filled the fields out as follows:

Rule Name: 7DayRetention

Frequency: Daily

Backup Window: Use Backup Window Defaults

Transition to Cold Storage: Never

Expire: 7 Days

Backup Vault: Default. You can create different vaults with various options; I would suggest that if you want to separate your projects or customers.

Tags: You can add various tags but I didn’t set any up for this example.

Once you have all the options filled out, hit "Create Plan" to save your new plan. You can now assign resources to your plan, which is how you actually choose what gets backed up!

In Resource Assignments click “Assign resources”

You will need to define a few things in the next step which is choosing your resources.

Resource assignment name: I used the hostname of my Linux Server

IAM Role: I used default

Assign Resources: This is where you can get creative. One thing I am going to set up going forward is tagging every EBS volume with Key: Backup and Value: Yes so that it automatically fits this resource assignment; then I don't have to add each volume individually. Feel free to explore. What I did was choose "Assign By" Resource ID, then a Resource Type of EBS Volume, and then found my resource in the list.

Hit Assign Resources when you are done.

That's it! You now have a backup plan that will take a snapshot of your EBS volume during the backup window every day, store it for one week, and then delete it.
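
If you would rather script this than click through the console, the same plan can be built with the AWS CLI. This is just a sketch: the file names plan.json and selection.json are hypothetical, and the schedule below assumes a daily run at 5:00 UTC rather than the console's default backup window.

aws backup create-backup-plan --backup-plan file://plan.json

Where plan.json mirrors the settings above:

{
  "BackupPlanName": "7Day-Snapshot",
  "Rules": [
    {
      "RuleName": "7DayRetention",
      "TargetBackupVaultName": "Default",
      "ScheduleExpression": "cron(0 5 ? * * *)",
      "Lifecycle": { "DeleteAfterDays": 7 }
    }
  ]
}

aws backup create-backup-selection --backup-plan-id YOUR_PLAN_ID --backup-selection file://selection.json

And selection.json implements the tag idea from above, picking up every resource tagged Backup=Yes (the IAM role ARN is a placeholder for the default AWS Backup service role):

{
  "SelectionName": "tagged-volumes",
  "IamRoleArn": "arn:aws:iam::YOUR_ACCOUNT_ID:role/service-role/AWSBackupDefaultServiceRole",
  "ListOfTags": [
    { "ConditionType": "STRINGEQUALS", "ConditionKey": "Backup", "ConditionValue": "Yes" }
  ]
}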

This service by AWS should solve a myriad of problems for many organizations.

If you have questions feel free to reach out!

Referral Programs

So I've been debating referral programs for a number of different projects. I see businesses use them with great success, but I haven't implemented one before.

For 45Squared, my web development company, I have no sales team besides myself. I rely solely on word of mouth right now. I’ve been mulling around the idea of doing a $50 referral program for any new customer that someone brings in. But, how do I control it? 

Last night I decided to do a trial run with only my close friends to see if they can bring in some people. In my friend group we have quite an extensive reach, so it could be successful. At $50 per referral, my margin is still large enough, especially when I can sell hosting.


For my other business, there are too many people involved to implement a successful program. I find that most people tend to complicate things with aggressive equations to calculate the percentage of a deal that the referrer will receive.

Ultimately I think that this takes away from the program and makes it unsuccessful. But, I have no experience with it, yet. 

So what are your thoughts on referral programs? Comment below.

Fixing Unifi Controller Errors

Recently I was working on a device that for the life of me I could not get to attach to my Unifi Controller. Repeatedly I would get

used default key in INFORM_ERROR state, reject it!

error on my server. The other error that I kept getting on the device itself was

Decrypt Error

when running the Inform Command.

Quite frustrated, I spent a lot of time removing and re-adding my SSL certificate, thinking that had something to do with it. I was wrong.

The real issue arises when someone deletes a whole site without first removing the devices inside it. The devices stay in the database associated with a site that no longer exists, which meant I was unable to adopt them into a new site.

So Let’s Fix It

To resolve this issue we need to delete the device out of the controller by accessing the MongoDB database that stores all of our information. While most of you are probably fluent enough in writing Mongo queries to do this from the command line, I preferred a GUI solution so that I could understand what I was doing.

Enter Robo 3T. This is a GUI connector for MongoDB.  Depending on your setup you will need to modify your connection type. I used SSH with my private key.

Once connected you should see a list of your databases in the left column.

The Unifi database (unless you changed it) will be called ace. Go ahead and expand out ace and then Collections to display all of your site information. You will see a collection called "device". This collection stores all the specific information about our devices and how they are programmed.

We now need to find our specific device, so using the built-in shell in Robo 3T, run the following query, replacing the X's with your MAC address.

db.device.find({ 'mac' : 'XX:XX:XX:XX:XX:XX' })

The MAC address string must be all lower case.

NOTE: Please backup your database before you do any of the following!

Once you find your device, verify that the MAC address does, in fact, match your device.

Right click on the ObjectID block of the document you found.

In the right click menu you can choose to delete the document. This will permanently remove the device from your controller's database.

Once you have deleted the document, run your Inform command again and your device should populate into your controller like normal!

If you have any questions let me know!