I was recently turned on to Nebula, a new overlay networking tool the Slack team built. As an avid Slack user, I was immediately intrigued and wanted to test it out.
My use case is relatively simple for the sake of this post. I am going to create a Lighthouse, or parent node, on an EC2 instance in my Amazon Web Services account. It will have an Elastic IP so we can route traffic to it publicly. I will also need a security group that allows inbound traffic on UDP port 4242, and I will allow the same port inbound on my local firewall.
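If you prefer the CLI to the console for that security group rule, it is roughly a one-liner; the security group ID below is a placeholder for your own:

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol udp \
  --port 4242 \
  --cidr 0.0.0.0/0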
Once you have all of the files downloaded, you can generate your certificate authority (CA) by running the command:
./nebula-cert ca -name "Your Company"
You will want to make a backup of the ca.key and ca.crt files generated by this command.
Once you have your certificate authority you can create certificates for your hosts. In my case I am only generating one, for my local server. The following command will generate the certificate and keys:
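Mine looked roughly like this (the /24 address is just the Nebula overlay IP I chose for this host):

./nebula-cert sign -name "Something Memorable" -ip "192.168.100.2/24"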
Where it says “Something Memorable” I placed the hostname of the server I am using so that I remember which certificate belongs to which machine. One thing that the documentation doesn’t go over is assigning the IP for your Lighthouse. Because I think of the Lighthouse as more of a gateway, I assigned it 192.168.100.1 in the config file. This will be covered soon.
There is a pre-generated configuration file located here. I simply copied this into a file inside of /etc/nebula/
Edit the file as needed. Lines 7-9 will need to be modified for each host as each host will have its own certificate.
Line 20 will need to be the IP address of your Lighthouse and this will remain the same on every host. On line 26 you will need to change this to true for your Lighthouse. On all other hosts, this will remain false.
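For reference, the parts of the example config that those line numbers point to look roughly like this in my copy (the file paths, overlay IP, and public address here are mine, so yours will differ):

pki:
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/host.crt
  key: /etc/nebula/host.key

static_host_map:
  "192.168.100.1": ["<lighthouse-elastic-ip>:4242"]

lighthouse:
  am_lighthouse: false   # set to true on the Lighthouse only
  hosts:
    - "192.168.100.1"    # leave this list empty on the Lighthouse itself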
The other major thing I changed was to allow SSH traffic. There is an entire section about SSH in the configuration that I ignored and simply added the firewall to the bottom of the file as follows:
- port: 22
  proto: tcp
  host: any
This code is added below the 443 rule for HTTPS. Be sure to follow normal YAML notation practices.
Once this is all in place you can bring up your Nebula network by using the following command:
/etc/nebula/nebula -config /etc/nebula/config.yml
Start Nebula on your Lighthouse first and ensure it is up and running. Once it is running on your Lighthouse you can run it on your host, and you should see a connection handshake. Test by pinging your Lighthouse from your host and your host from your Lighthouse. I also tested a file transfer using SCP, which verifies SSH connectivity.
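My quick checks from the host side looked something like this (the user and file name are just placeholders):

# ping the Lighthouse over the overlay network
ping 192.168.100.1

# copy a file over the overlay to confirm SSH works end to end
scp testfile.txt someuser@192.168.100.1:/tmp/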
Now, the most important thing that Slack doesn’t discuss is creating a systemd unit for automatic startup. So I have included a basic one for you here:
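This is only a minimal sketch, assuming the /etc/nebula paths used above:

[Unit]
Description=Nebula overlay network
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/etc/nebula/nebula -config /etc/nebula/config.yml
Restart=always

[Install]
WantedBy=multi-user.target

Save it as /etc/systemd/system/nebula.service, then run systemctl daemon-reload and systemctl enable --now nebula so Nebula starts at boot.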
unzip CloudWatchMonitoringScripts-1.2.2.zip && \
rm CloudWatchMonitoringScripts-1.2.2.zip && \
cd aws-scripts-mon
This will put the scripts into a directory called aws-scripts-mon inside of whatever directory you are currently in. I recommend doing this inside of /home/your-user.
There are a few ways to give the scripts permission to publish to CloudWatch. I preferred the awscreds.conf method, but you can also give your instance an IAM role or specify the credentials inline. If you are unsure of how to create IAM policies or roles feel free to message me and we can chat more about that.
Inside the directory there is a template file that you can utilize to generate your awscreds.conf file.
cp awscreds.template awscreds.conf && vi awscreds.conf
Modify the file as needed and save and close it.
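The file itself only needs two fields; the values below are placeholders for your own access key pair:

AWSAccessKeyId=YOUR_ACCESS_KEY_ID
AWSSecretKey=YOUR_SECRET_ACCESS_KEY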
Now let’s test the scripts to ensure functionality:
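A dry run similar to the one below verifies everything is wired up without publishing anything, and a crontab entry along these lines handles the ongoing reporting (the paths assume the scripts live in /home/your-user/aws-scripts-mon):

# one-off check: gather disk utilization for / and verify the setup without publishing
./mon-put-instance-data.pl --disk-space-util --disk-path=/ --verify --verbose

# crontab entry for ongoing reporting
0 * * * * /home/your-user/aws-scripts-mon/mon-put-instance-data.pl --disk-space-util --disk-path=/ --from-cron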
This runs the script every hour on the hour and reports the data to CloudWatch.
Now that our data is flowing into CloudWatch we need to alert on any issues. For the purpose of testing I created an alarm with a threshold below my actual usage so I could verify the alerting worked. You can adjust it as you need to.
Log in to your AWS Management Console and navigate to the CloudWatch console. Your data will be placed under the “Metrics” tab. Once the Metrics tab is open you will see a section called “Linux Systems”. Navigate to this and you should see metrics grouped by “Filesystem, InstanceId, MountPath”. This is where your metrics live. You can navigate around here and view your metrics in the graphing utility. Once you have verified that the data is accurate you can create an alarm based on this metric.
Navigate to the Alarms section of CloudWatch. Click “Create alarm” in the top right corner. Follow the steps to create your Alarm. For Metric navigate to the metric we found in the previous step. For Conditions, I chose the following:
Threshold type: Static
Whenever DiskSpaceUtilization is…: Greater than the threshold
than…: 45% (Note: this value will depend on your actual usage. For testing I recommend setting it to a value lower than your actual usage percentage so that your alarm will fire.)
Click Next to continue. On the following page you can set up your notifications. I covered creating an AWS Chatbot here. I have all of my CloudWatch alarms sent to an SNS topic called aws-alerts. You can create something similar and have your AWS Chatbot monitor that topic as well. Once the alarm fires you should get an alert in your specified Slack channel that looks something like this:
Once your alarm is firing you can fine-tune your thresholds to notify you as you need!
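If you would rather script the alarm than click through the console, the same setup translates to a put-metric-alarm call along these lines; the instance ID, filesystem, account number, and region here are all placeholders:

aws cloudwatch put-metric-alarm \
  --alarm-name disk-space-utilization-high \
  --namespace System/Linux \
  --metric-name DiskSpaceUtilization \
  --dimensions Name=Filesystem,Value=/dev/xvda1 Name=InstanceId,Value=i-0123456789abcdef0 Name=MountPath,Value=/ \
  --statistic Average \
  --period 3600 \
  --evaluation-periods 1 \
  --threshold 45 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:aws-alerts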
Amazon Web Services recently pushed its new Chatbot into beta. This simple bot allows you to get alerts and notifications sent to either Slack or Amazon Chime. Because I use Slack for alerting I thought this would be a great tool. Previously I utilized Marbot for a similar function. Marbot is a great product for teams as it allows a user to acknowledge or pass an incident. I am a team of one, so while this feature is nice, it is ultimately not useful for me at this time.
Let’s get started!
Navigate to the new AWS Chatbot in the console
On the right-hand side click the drop-down menu to choose your chat client. I am going to choose Slack because that is what I use. I assume the process would be the same for Chime. You will be prompted by Slack to authorize the application. Go ahead and hit “Install”.
On the next screen, we get to our configuration options, the first of which is choosing our Slack channel:
I chose the public channel that I had already created for Marbot, #aws-alerts. You can do what you want here. Maybe you want a private channel so only you can see alerts for your development environment!
The next section is IAM Permissions
I chose to create an IAM role using the predefined template and just made up a role name called “aws-chatbot-alerts”.
The last configuration option is for SNS topics
You can have your bot subscribe to SNS topics so that notifications published there are forwarded as well. I don’t currently use any, so I skipped this section, but this could be super useful in the future! Look for future posts about this idea!
I will update this post soon with how to create the chatbot using the CLI and/or CloudFormation.
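The command I used to pull everything down from a bucket to my local machine looked something like this (the bucket name and destination path are placeholders):

aws s3 cp s3://my-bucket-name/ /home/your-user/restored-files --recursive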
Let’s dissect this just a little bit. The first couple of options in the command should be pretty self-explanatory: we are using the AWS CLI, we chose S3 as our service, and ‘cp’ means we are going to copy. Now, there are a bunch of other options that you can use here; I suggest taking a look at the documentation here to learn more. After that, you simply add in your bucket name, note the trailing forward slash, and then where you want to put your files on your local machine. Finally, I added the --recursive flag so that it would read through all the lower directories.
Ultimately, a very simple solution for transferring some data quickly! The AWS S3 CLI functions very similarly to your standard directory commands, so feel free to poke around and see what it can do!
Recently Amazon Web Services announced its new service called AWS Backup. The goal is to create a simple, automated backup solution for resources within the AWS Cloud.
There have been plenty of other solutions out there for backups but most are quite costly. Here is a look at the pricing for the AWS Backup solution:
The pricing for an EBS Snapshot is the same as the pricing for manual snapshots so it is quite a compelling argument to set this up.
Let’s look at a quick example of how to set up a simple recurring EBS snapshot. In this example I have a Linux EC2 instance with a single EBS volume attached to it.
Log in to your AWS console and search for “Backup” in the services menu. You will see AWS Backup.
Once you are in the console for AWS Backup, choose “Manage Backup Plans”
To get the full experience of AWS Backup I chose to make my own plan. You could also choose to use one of the existing plans.
Give your backup plan a name. Something so you can remember what the plan is going to be doing. For my example I named my plan “7Day-Snapshot”. My plan will take a snapshot of the EBS volume and store it for 7 days before discarding it.
Inside of your plan you are going to create a rule. In the example we only need one rule.
I filled the fields out as follows:
Rule Name: 7DayRetention
Backup Window: Use Backup Window Defaults
Transition to Cold Storage: Never
Expire: 7 Days
Backup Vault: Default – You can create different vaults with various options. I would suggest this if you are wanting to separate your projects or customers.
Tags: You can add various tags but I didn’t set any up for this example.
Once you have all the options filled out hit “Create Plan” to save your new plan. You can now assign resources to your plan which is how you actually choose what is going to be backed up!
In Resource Assignments click “Assign resources”
You will need to define a few things in the next step which is choosing your resources.
Resource assignment name: I used the hostname of my Linux Server
IAM Role: I used default
Assign Resources: This is where you can get creative. One thing I am going to set up going forward is that every EBS volume with the tag key “Backup” and value “Yes” will fall under this resource assignment. Then I don’t have to add each volume individually. Feel free to explore. What I did was choose “Assign By” Resource ID, then a Resource Type of EBS Volume, and then found my volume in the list.
Hit Assign Resources when you are done.
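For anyone who prefers the CLI, the same plan and resource assignment can be sketched out roughly like this; the schedule expression, account number, role, and volume ARN are placeholders for your own values:

# create the plan: one rule, 7-day retention, default vault
aws backup create-backup-plan --backup-plan '{
  "BackupPlanName": "7Day-Snapshot",
  "Rules": [{
    "RuleName": "7DayRetention",
    "TargetBackupVaultName": "Default",
    "ScheduleExpression": "cron(0 5 ? * * *)",
    "Lifecycle": { "DeleteAfterDays": 7 }
  }]
}'

# assign the EBS volume using the plan ID returned above
aws backup create-backup-selection --backup-plan-id <plan-id-from-above> --backup-selection '{
  "SelectionName": "my-linux-server",
  "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
  "Resources": ["arn:aws:ec2:us-east-1:123456789012:volume/vol-0123456789abcdef0"]
}'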
That’s it! You now have a backup plan that will take a snapshot of your EBS volume during the backup window each day. It will then store each snapshot for one week and then delete it.
This service by AWS should solve a myriad of problems for many organizations.
Today I sat the AWS Security Specialty exam. While I didn’t pass, I thought I would provide some commentary on the experience in relation to the training I sought out to prepare for it.
I have been a big fan of ACloudGuru. They helped me pass my Solutions Architect exam last year so, naturally, I returned to train and learn from them again. Much of the content in this course felt like a repeat of what I saw in the Solutions Architect material. I didn’t think much of it because I assumed this was the correct curriculum.
Boy was I wrong.
Upon sitting down at the exam center I used my standard method of test taking: answer the questions you know first, then go back and hammer out the harder ones using process of elimination and what you know.
While Ryan Kroonenburg does a great job of explaining all the features of AWS and how to use them in a lab environment, the course misses the level of real-world application that AWS is asking for in the exam. Now, I’m not saying that Ryan doesn’t know what he is talking about. Quite the contrary. Nor am I blaming my failure on ACloudGuru.
On top of learning all the content outlined in ACloudGuru or LinuxAcademy or whichever training resource you choose, you really need to seek out real-life applications of these topics.
I will be going back over all the labs in the training material and applying them to my production environments (after testing). I think that this is the only way to truly learn what is needed.
Current Exam Rankings
Hardest to Easiest (based on what I’ve taken):
Solutions Architect Associate
If you have any questions regarding the exams feel free to reach out!
This was my second year attending Amazon Web Services Summit. Both times I have headed down to Chicago for a few days to network, learn, and get excited about new AWS developments.
This year, the summit was scheduled for only one day. Since the summit started early in the morning, I decided to head down early. By happenstance, I was invited to attend a workshop put on by Dynatrace.
Dynatrace is a monitoring and logging platform that integrates with nearly any piece of technology you can think of, including AWS. For me, monitoring is important for the web servers that I manage for my customers. In this workshop, we learned how to create a continuous delivery pipeline: we deployed an application through various staging and production environments while Dynatrace monitored each stage to ensure successful deployments.
After the workshop, Dynatrace hosted a lovely rooftop cocktail party. Thanks again for the invitation!
The summit began early the next morning. I spent the morning visiting some vendor booths and getting the lay of the land before attending the keynote.
This year’s keynote was centered around the concept of “Builders”. Amazon wants all of its customers to be builders. By that, they mean that they want us to explore and be curious with their platform. When we see a problem, they want us to solve it within Amazon Web Services. While this concept is great fundamentally, I do believe it is catered more towards developers and people who code than infrastructure gurus like myself. Nevertheless, I still found the concept compelling in my adventures.
The day continued with various sessions. I spent a good amount of time working through the business executive track which focuses on migrations and security.
Overall the summit was good, though I did miss the two-day format. By the end it was a very long day of travel and learning.
If you or someone you know is interested in cloud computing, AWS Summit is a great place to get excited about all the possibilities!