Category: Networking
Updating AWS Managed Prefix Lists
I was working with a customer the other day, trying to come up with a way to import a batch of IP addresses into a whitelist on AWS. We came up with the approach of using Managed Prefix Lists in VPC, and I wrote some Python to grab the addresses from an API and automatically put them into a prefix list.
The code takes input from an API that is managed by a third party. We first parse the returned values into meaningful lists, then pass each IP to a function that checks whether an entry already exists in the prefix list. If it does, the IP is skipped; if it doesn't, the IP is added automatically.
import requests
import json
import boto3
from botocore.exceptions import ClientError

def check_for_existing(list_id, ip):
    """Return True if the CIDR is already in the prefix list."""
    client = boto3.client("ec2", region_name="us-west-2")
    try:
        response = client.get_managed_prefix_list_entries(
            PrefixListId=list_id,
            MaxResults=100,
        )
        for entry in response['Entries']:
            if entry['Cidr'] == ip:
                return True
        return False
    except ClientError as e:
        print(e)
        return False

def get_prefix_list_id(list_name):
    """Look up the prefix list by name and return its ID and current version."""
    client = boto3.client("ec2", region_name="us-west-2")
    response = client.describe_managed_prefix_lists(
        MaxResults=100,
        Filters=[
            {"Name": "prefix-list-name", "Values": [list_name]}
        ]
    )
    for p_list in response['PrefixLists']:
        return {"ID": p_list['PrefixListId'], "VERSION": p_list['Version']}

def update_managed_prefix_list(list_name, ip):
    """Add the CIDR to the prefix list unless it is already present."""
    client = boto3.client("ec2", region_name="us-west-2")
    prefix_list = get_prefix_list_id(list_name)
    if check_for_existing(prefix_list['ID'], ip):
        print("Rule already exists")
        return False
    try:
        client.modify_managed_prefix_list(
            DryRun=False,
            PrefixListId=prefix_list['ID'],
            CurrentVersion=prefix_list['VERSION'],
            AddEntries=[
                {"Cidr": ip}
            ]
        )
        return True
    except ClientError as e:
        print(e)
        print("Failed to update list")
        return False

if __name__ == "__main__":
    url = "https://<my IP address URL>"
    headers = {}
    r = requests.get(url, headers=headers)
    json_ips = json.loads(r.content)
    ip = ""
    list_name = ""
    result = update_managed_prefix_list(list_name, ip)
    if result:
        print("Successfully updated lists")
    else:
        print("Failed to update lists")
If you are going to use this code, it will need some modifications. I ultimately did not deploy it, but I had planned to run it as a Lambda function on a schedule so the lists would always be up to date.
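For anyone curious, here is a minimal sketch of what that Lambda version might look like. It assumes the code above is packaged as a module named prefix_updater, and that the feed URL and list name come in through environment variables; those names are my placeholders, not anything from the original script.

import json
import os
import requests

# Hypothetical module containing the functions from the script above
from prefix_updater import update_managed_prefix_list

def lambda_handler(event, context):
    # FEED_URL and PREFIX_LIST_NAME are assumed to be set on the function
    url = os.environ["FEED_URL"]
    list_name = os.environ["PREFIX_LIST_NAME"]
    r = requests.get(url)
    ips = json.loads(r.content)  # assumes the API returns a JSON array of CIDRs
    added = sum(1 for ip in ips if update_managed_prefix_list(list_name, ip))
    return {"added": added, "checked": len(ips)}

Wired to an EventBridge schedule (for example, rate(1 day)), the list stays current without any manual work.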
If this code is helpful to you please share it with your friends!
GitHub
Building Dynamic DNS with Route53 and PFSense
I use PFSense as my home router, firewall, VPN, and much more, and I'm sure a lot of my readers do as well. One thing I have always set up is an entry in Route53 that points to the public IP address of my PFSense box. However, I use Comcast, so my IP address changes every so often.
Typically this isn't a big deal because only a few applications use the DNS entry I have set up. But what if I could automate the changes with a scheduled job that checks the IP address on the PFSense side and then updates the Route53 record automatically?
A couple of requirements:
– PFSense with the API package installed
– A subdomain set up in Route53 that points to your PFSense box
Some Python to do some magic:
import requests
import json
import boto3

clientid = "<pfsense clientID here>"
key = "<pfsense api key here>"

route53 = boto3.client('route53')
zoneID = "<route53 hosted zone here>"

# Be sure to include a trailing "." as this is how Route53 formats record names
# EX: "home.example.com."
pfsenseDNS = "<Your subdomain>"

headers = {
    "Authorization": f"{clientid} {key}",
    "Content-type": 'application/json'
}

# Get the PFSense WAN IP from the ARP table
def getWanIP():
    response = requests.get('https://<your subdomain>/api/v1/system/arp', headers=headers)
    arptable = json.loads(response.content)
    entries = arptable['data']
    wan = []
    for entry in entries:
        # Change the interface code if necessary
        if entry['interface'] == 'igb0':
            wan.append(entry)
    for entry in wan:
        if entry['status'] == 'permanent':
            return entry['ip']

wanIP = getWanIP()
record_set = route53.list_resource_record_sets(
    HostedZoneId=zoneID
)
for record in record_set['ResourceRecordSets']:
    if record['Name'] == pfsenseDNS and record['Type'] == 'A':
        for entry in record['ResourceRecords']:
            if entry['Value'] != wanIP:
                print("The Records Do Not Match")
                response = route53.change_resource_record_sets(
                    HostedZoneId=zoneID,
                    ChangeBatch={
                        'Changes': [
                            {
                                'Action': 'UPSERT',
                                'ResourceRecordSet': {
                                    'Name': pfsenseDNS,
                                    'Type': 'A',
                                    'ResourceRecords': [
                                        {'Value': wanIP}
                                    ],
                                    'TTL': 300,
                                },
                            }
                        ]
                    }
                )
What this code does is pretty simple. First, there is a function that gets the WAN IP from the ARP table of the PFSense box. We use that function's result later when we fetch our record sets and check them against the current IP address.
If the addresses do not match, the script will automatically change the entry in Route53 for you!
To test it out, modify your Route53 entry to some bogus IP address and then run the script. If everything goes as planned, you should see your DNS entry change!
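To keep the record current without manual runs, a cron job is the simplest option. A sketch, assuming the script above is saved as /usr/local/bin/pfsense_ddns.py (a placeholder path) and should run every 15 minutes:

*/15 * * * * /usr/bin/python3 /usr/local/bin/pfsense_ddns.py >> /var/log/pfsense-ddns.log 2>&1

Add the line with crontab -e on whatever machine has your AWS credentials configured.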
If you found this helpful please share it with your friends. If you have questions feel free to comment or reach out to me via any method.
Where Is It Five O’Clock Pt: 1
I bought the domain whereisitfiveoclock.net a while back and have been sitting on it for quite some time. I had an idea to make a web application that would tell you where it is five o’clock. Yes, this is a drinking website.
I saw this project as a way to learn more Python skills, as well as some more AWS skills, and boy, has it put me to the test. So I’m going to write this series of posts as a way to document my progress in building this application.
Part One: Building The Application
I knew that I wanted to use Python because it is my language of choice. I then researched what libraries I could use to build the frontend and came across Flask, so I decided to run with that. The next step was to actually find out where it was 5PM.
In my head, the process was this: if I could get a list of all the timezones and identify the current time in each of them, I could filter out the timezones where it was 5PM. Once I had established where it was 5PM, I could pass that information to Flask and figure out a way to display it.
Here is the function for identifying the current time in all timezones, storing each as a key pair of {Timezone: Current_Time}:
from datetime import datetime
import pytz
from pytz import timezone

def getTime():
    now_utc = datetime.now(timezone('UTC'))
    timezones = pytz.all_timezones
    # Get the current hour in every timezone and store it in a list
    tz_array = []
    for tz in timezones:
        current_time = now_utc.astimezone(timezone(tz))
        values = {tz: current_time.hour}
        tz_array.append(values)
    return tz_array
Once everything was stored in tz_array, I passed that info through the following function to identify where it was 5PM. I have another function that identifies everywhere it is NOT 5PM.
def find5PM(tz_array):
    its5pm = []
    for tz in tz_array:
        # Each element is a single {timezone_name: hour} pair
        for tz_name, hour in tz.items():
            if hour >= 17:
                its5pm.append(tz_name)
    return its5pm
I made a new list, stored just the timezone names in it, and returned it.
Once I had all of these together, I passed them through as variables to Flask. This is where I first started to struggle. In my original revisions of the functions, I was only returning one of the values rather than ALL of them, which led to hours of struggling to identify the cause of the problem. Eventually, I had to start over and completely rework the code until I ended up with what you see above.
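For context, here is a minimal sketch of how the results can be handed to Flask. The route, template name, and variable names are placeholders of mine, not necessarily what the finished site uses, and it assumes getTime and find5PM from above are in scope:

from flask import Flask, render_template

app = Flask(__name__)

@app.route("/")
def index():
    tz_array = getTime()
    happy_hours = find5PM(tz_array)
    # render_template hands the list to a Jinja2 template such as index.html
    return render_template("index.html", its5pm=happy_hours)

if __name__ == "__main__":
    app.run(debug=True)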
The code was finally functional and I was ready to deploy it to Amazon Web Services for public access. I will discuss my design and deployment in Part Two.
Slack’s New Nebula Network Overlay
I was turned on to this new tool that the Slack team had built. As an avid Slack user, I was immediately intrigued to test this out.
My use case is going to be relatively simple for the sake of this post. I am going to create a Lighthouse, or parent node, on an EC2 instance in my Amazon Web Services account. It will have an Elastic IP so we can route traffic to it publicly. I also need to create a security group that allows traffic on port 4242 UDP, and I will allow that port inbound on my local firewall as well.
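If you are setting up the security group from the AWS CLI, the ingress rule looks something like this; the group ID is a placeholder, and you may want a tighter source range than 0.0.0.0/0:

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol udp \
    --port 4242 \
    --cidr 0.0.0.0/0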
Clone the Git repository for Nebula and also download the binaries. I put everything into
/etc/nebula
Once you have all of the files downloaded, you can generate your certificate authority by running the command:
./nebula-cert ca -name "Your Company"
You will want to make a backup of the ca.key and ca.crt files generated by this command.
Once you have your certificate authority, you can create certificates for your hosts. In my case, I am only generating one for my local server. The following command will generate the certificate and keys:
./nebula-cert sign -name "Something Memorable" -ip "192.168.100.2/24"
Where it says “Something Memorable”, I placed the hostname of the server I am using so that I remember it. One thing the documentation doesn't go over is assigning the IP for your Lighthouse. Because I think of the Lighthouse as more of a gateway, I assigned it 192.168.100.1 in the config file. This is covered shortly.
There is a pre-generated configuration file located here. I simply copied this into a file inside of
/etc/nebula/
Edit the file as needed. Lines 7-9 will need to be modified for each host as each host will have its own certificate.
Line 20 will need to be the IP address of your Lighthouse and this will remain the same on every host. On line 26 you will need to change this to true for your Lighthouse. On all other hosts, this will remain false.
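For reference, here is roughly what those sections of the example config look like. The paths and addresses are from my setup and will differ in yours:

pki:
  # Lines 7-9: per-host certificate material
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/host.crt
  key: /etc/nebula/host.key

static_host_map:
  # Nebula IP of the Lighthouse mapped to its public (Elastic) IP and port
  "192.168.100.1": ["<elastic IP here>:4242"]

lighthouse:
  am_lighthouse: false   # set to true on the Lighthouse itself
  hosts:
    - "192.168.100.1"    # all other hosts point at the Lighthouse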
The other major thing I changed was to allow SSH traffic. There is an entire section about SSH in the configuration that I ignored; I simply added the rule to the firewall section at the bottom of the file, as follows:
- port: 22
  proto: tcp
  host: any
This rule is added below the 443 rule for HTTPS. Be sure to follow normal YAML notation practices.
Once this is all in place you can execute your Nebula network by using the following command:
/etc/nebula/nebula -config /etc/nebula/config.yml
Start your Lighthouse first and ensure it is up and running. Once it is, run the same command on your host and you should see a connection handshake. Test by pinging your Lighthouse from your host, and your host from your Lighthouse. I also tested file transfer over SCP, which verifies SSH connectivity.
Now, the most important thing that Slack doesn't discuss is creating a systemd unit for automatic startup, so I have included a basic one for you here:
[Unit]
Description=Nebula Service
[Service]
Restart=always
RestartSec=1
User=root
ExecStart=/etc/nebula/nebula -config /etc/nebula/config.yml
[Install]
WantedBy=multi-user.target
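Assuming you save the unit as /etc/systemd/system/nebula.service, enabling it at boot looks like this:

sudo systemctl daemon-reload
sudo systemctl enable --now nebula
sudo systemctl status nebula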
That’s it! I would love to hear about your implementations in the comments below!
Discovering DHCP Servers with NMAP
I was working at a client site where a device would receive a new IP address via DHCP nearly every second. It was the only device on the network that had this issue, but I decided to test for rogue DHCP servers anyway. (If someone knows of a GUI tool to do this, let me know in the comments.) I utilized the command-line utility NMAP to scan the network.
sudo nmap --script broadcast-dhcp-discover
The output should look something like:
Starting Nmap 7.70 ( https://nmap.org ) at 2019-11-25 15:52 EST
Pre-scan script results:
| broadcast-dhcp-discover:
| Response 1 of 1:
| IP Offered: 172.20.1.82
| DHCP Message Type: DHCPOFFER
| Server Identifier: 172.20.1.2
| IP Address Lease Time: 7d00h00m00s
| Subnet Mask: 255.255.255.0
| Time Offset: 4294949296
| Router: 172.20.1.2
| Domain Name Server: 8.8.8.8
| Renewal Time Value: 3d12h00m00s
|_ Rebinding Time Value: 6d03h00m00s
This was the test run on my local network, verifying only one DHCP server. If there were multiple, we would see another response.
Ultimately this was not the issue at my client site, but it is a function of NMAP that I had not used before.
Let me know your experiences with rogue DHCP in the comments!