Blog
-
An AI Fantasy Football Draft Assistant
Last year I attempted to program a Fantasy Football draft assistant which took live data from ESPN’s Fantasy Platform. Boy was that a mistake…
First of all, shame on ESPN for not having an API for their Fantasy sports applications. The reverse-engineered methods were neither fast enough nor reliable. So, this year I took a new approach to building out a system for getting draft pick recommendations for my team.
I also wanted to put the example architecture and code I wrote the other day for the Strands SDK to work, so I used it to build an API that uses the AWS Bedrock platform to analyze data and ultimately return the best possible picks.
Here is a simple workflow of how the tool works (I generated the diagram with Claude AI; it is pretty OK). The first problem I encountered was getting data. I needed two things:
1. Historical data for players
2. Projected fantasy data for the upcoming season
The historical data covers each player’s past seasons and the projections cover the upcoming season. The projections are especially useful for accounting for incoming rookies.
In the repository linked below I included scripts to scrape FantasyPros for both the historical and projected data. They store the results in separate files in case you want to use them in a different way. There is also a script to combine them into one data source and ultimately load it into a DynamoDB table.
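That loading step is essentially a bulk write. Here is a minimal sketch of what it might look like with boto3; the table name and file name are placeholders, not necessarily what the repository uses:

import json
from decimal import Decimal

import boto3

# Placeholder names: swap in the table and combined data file your setup actually uses
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("fantasy-football-players")

# DynamoDB rejects Python floats, so parse any numbers as Decimal
with open("combined_player_data.json") as f:
    players = json.load(f, parse_float=Decimal)

# batch_writer batches the individual put_item calls and handles retries
with table.batch_writer() as batch:
    for player in players:
        batch.put_item(Item=player)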
The most important piece of the puzzle was actually simulating the draft. I needed a program that could track the other teams’ draft picks, track my own team’s picks, and give me suggestions. This is the heart of the repository, and I will be using it to get suggestions and track the draft this coming season. When you issue the “next” command, the application sends a request to the API with the current state of the draft. The payload looks like this:
payload = {
    "team_needs": team_needs,
    "your_roster": your_roster,
    "already_drafted": all_drafted_players,
    "scoring_format": self.session.scoring_format if self.session else "ppr",
    "league_size": self.session.league_size if self.session else 12
}
The “team_needs” key holds the number of roster slots still to fill for each position. The “your_roster” key lists all of the current players on my team. The other important key is “already_drafted”, which sends every drafted player to the AI agent so it knows who NOT to recommend.
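For illustration, the values behind those keys might look something like this (the positions, counts, and names are made up):

# Hypothetical example values for the payload fields above
team_needs = {"QB": 1, "RB": 2, "WR": 3, "TE": 1, "K": 1, "DST": 1}
your_roster = ["Example Running Back", "Example Wide Receiver"]
all_drafted_players = ["Example Running Back", "Example Wide Receiver", "Example Quarterback"]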
The application steps through every pick, and you manually enter each of the other teams’ picks until the draft is complete.
I’ll post an update after my draft on August 24th with the team I end up with! I will still probably lose in my league, but this was fun to build. I hope to add some sort of week-to-week management of my team as well as a trade analysis tool in the future. It would also be cool to add analysis that could send updates to my Slack or Discord.
If you have other ideas message me on any platform you can find me on!
GitHub: https://github.com/avansledright/fantasy-football-agent
-
Deploying a Strands Agent on AWS Lambda using Terraform
Recently I’ve been exploring the AI space a lot more as I’m sure a lot of you are doing as well. I’ve been looking at the Strands Agent SDK. I see this SDK as being very helpful in building out agents in the future (follow the blog to see what I come up with!).
One thing that is not included in the SDK is the ability to deploy with Terraform. The SDK includes examples of how to package and deploy with the Amazon Web Services CDK, so I adapted that to use Terraform.
I took my adaptation a step further and added an API Gateway layer so that you have the beginnings of a very simple AI agent deployed with the Strands SDK.
Check out the code here: https://github.com/avansledright/terraform-strands-agent-api
The code in the repository is fairly simple and includes everything you need to build an API Gateway, Lambda function, and some other useful resources just to help out.
The key to all of this is packaging the required dependencies inside a Lambda layer. Without it, the function will not work.
File structure:
terraform-strands-agent-api/
├── lambda_code/
│   ├── lambda_function.py   # Your Strands agent logic
│   └── requirements.txt     # strands-agents + dependencies
├── api_gateway.tf           # API Gateway configuration
├── iam.tf                   # IAM roles and policies
├── lambda.tf                # Lambda function setup
├── locals.tf                # Environment variables
├── logs.tf                  # CloudWatch logging
├── s3.tf                    # Deployment artifacts
├── variables.tf             # Configurable inputs
└── outputs.tf               # API endpoints and resource IDs
You shouldn’t have to change much in any of these files until you want to start fully customizing the actual functionality of the agent.
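For reference, here is a minimal sketch of what lambda_code/lambda_function.py could look like. It assumes the Agent interface from the strands-agents package and a plain API Gateway proxy event; the repository’s actual agent logic may differ:

import json

from strands import Agent  # provided by the strands-agents package in requirements.txt

# Assumption: a default agent with no custom tools or model configuration
agent = Agent()

def lambda_handler(event, context):
    # API Gateway proxy integrations deliver the request body as a JSON string
    body = json.loads(event.get("body") or "{}")
    prompt = body.get("prompt", "Hello!")

    # Invoke the agent with the prompt and return its response as JSON
    result = agent(prompt)

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"response": str(result)}),
    }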
To get started follow the instructions below!
git clone https://github.com/avansledright/terraform-strands-agent-api
cd terraform-strands-agent-api

# Configure your settings. Add other values as needed
echo 'aws_region = "us-west-2"' > terraform.tfvars

# Deploy everything
terraform init
terraform plan
terraform apply
If everything goes as planned, the output will include a curl command that you can use to test the demo code.
If you run into any issues feel free to let me know! I’d be happy to help you get this up and running.
If this has helped you in any way, please share it on your social media and with any of your friends!
-
Creating a List of API Gateway resources using Terraform
For some reason, when you use Terraform with AWS and want to get a list of API Gateway resources, that data element simply does not exist. Below is a relatively quick solution that creates a comma-separated list of API Gateway names so that you can iterate through them.
In order to use this element you need to have the AWS CLI set up within your preferred deployment method. Personally, I love GitHub Actions, so I needed to add another stage to my deployment to install the CLI.
The way this works is to create a data element that you can trigger as needed to execute a simple shell script.
data "external" "apis" { program = ["sh", "-c", "aws apigateway get-rest-apis --query 'items[?starts_with(name,`${var.prefix}`)].name' --output json | jq -r '{\"names\": (. | join(\",\"))}'"] }
We also create a variable called “prefix” so that you can filter as required by your project. Personally, I used this to create CloudWatch dashboards so I can easily monitor my resources.
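If you would rather skip the sh/jq pipeline, the external data source can point at any executable that prints a flat JSON object, so a small Python program works too. The sketch below is an assumed alternative rather than what I ship; it expects Terraform to pass the prefix through the data source’s query argument:

#!/usr/bin/env python3
import json
import sys

import boto3

# Terraform's external data source sends its query arguments as JSON on stdin
query = json.load(sys.stdin)
prefix = query.get("prefix", "")

client = boto3.client("apigateway")

# Page through all REST APIs and keep the names that match the prefix
names = []
for page in client.get_paginator("get_rest_apis").paginate():
    for item in page.get("items", []):
        if item.get("name", "").startswith(prefix):
            names.append(item["name"])

# The external data source expects a flat JSON object of string values
print(json.dumps({"names": ",".join(names)}))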
If this is helpful for you, please share it on your social media!
-
Automating Proper Terraform Formatting using Git Pre-Hooks
I’ve noticed lately that a lot of Terraform is formatted differently. Some developers use two indents, others one. As long as the Terraform is functional, most people overlook the formatting of their infrastructure-as-code files.
Personally, I don’t think we should ever push messy code into our repositories. How can we solve this problem? Well, Terraform has a built-in formatter: the terraform fmt command will automatically format your code.

#!/usr/bin/env bash

# Initialize variables
EXIT_CODE=0
AFFECTED_FILES=()

# Detect OS for cross-platform compatibility
OS=$(uname -s)
IS_WINDOWS=false
if [[ "$OS" == MINGW* ]] || [[ "$OS" == CYGWIN* ]] || [[ "$OS" == MSYS* ]]; then
    IS_WINDOWS=true
fi

# Find all .tf files - cross-platform compatible method
if [ "$IS_WINDOWS" = true ]; then
    # For Windows using Git Bash
    TF_FILES=$(find . -type f -name "*.tf" -not -path "*/\\.*" | sed 's/\\/\//g')
else
    # For Linux/Mac
    TF_FILES=$(find . -type f -name "*.tf" -not -path "*/\.*")
fi

# Check each file individually for better reporting
for file in $TF_FILES; do
    # Get the directory of the file
    dir=$(dirname "$file")
    filename=$(basename "$file")

    # Run terraform fmt check on the specific file - handle both OS formats
    terraform -chdir="$dir" fmt -check "$filename" >/dev/null 2>&1

    # If format check fails, record the file
    if [ $? -ne 0 ]; then
        AFFECTED_FILES+=("$file")
        EXIT_CODE=1
    fi
done

# If any files need formatting, list them and exit with error
if [ $EXIT_CODE -ne 0 ]; then
    echo "Error: The following Terraform files need formatting:"
    for file in "${AFFECTED_FILES[@]}"; do
        echo "  - $file"
    done
    echo ""
    echo "Please run the following command to format these files:"
    echo "terraform fmt -recursive"
    exit 1
fi

echo "All Terraform files are properly formatted"
exit 0
Save this script as a pre-push hook inside your “.git/hooks/” directory (and make it executable) so that it runs automatically when someone pushes. If there is badly formatted Terraform you should see something like:
Running Terraform format check...
Error: The following Terraform files need formatting:
  - ./main.tf

Please run the following command to format these files:
terraform fmt -recursive
error: failed to push some refs to 'github.com:avansledright/terraform-fmt-pre-hook.git'
After running terraform fmt -recursive, the push should succeed!
If this was helpful to you or your team, please share it across your social media!
-
Building out a reusable Terraform framework for Flask Applications
I find myself using the same architecture to deploy demo applications built on the great Python library Flask, and I’ve been reusing the same Terraform files over and over again to build out the infrastructure.
Last weekend I decided it was time to build a reusable framework for deploying these applications. So, I began building out the repository. The purpose of this repository is to give myself a jumping off point to quickly deploy applications for demonstrations or live environments.
Let’s take a look at the features:
- Customizable Environments within Terraform for managing the infrastructure across your development and production environments
- Modules for:
- Application Load Balancer
- Elastic Container Registry
- Elastic Container Service
- VPC & Networking components
- Dockerfile and Docker Compose file for launching and building the application
- Demo code for the Flask application (see the sketch after this list)
- Automated build and deploy for the container upon code changes
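To give a sense of what the demo application involves, a Flask app for this kind of framework can be as small as the sketch below. This is an illustrative placeholder with a health-check route for the load balancer, not necessarily the code in the repository:

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/health")
def health():
    # Target groups behind the Application Load Balancer can hit this for health checks
    return jsonify(status="ok"), 200

@app.route("/")
def index():
    return jsonify(message="Hello from the Flask framework demo!")

if __name__ == "__main__":
    # Bind to all interfaces so the container port mapping works
    app.run(host="0.0.0.0", port=5000)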
This module is built for any developer who wants to get started quickly and deploy applications fast. Using this framework will allow you to speed up your development time by being able to focus solely on the application rather than the infrastructure.
Upcoming features:
- CI/CD features using either GitHub Actions or AWS services like CodePipeline and CodeBuild
- Custom Domain Name support for your application
If there are other features you would like to see me add shoot me a message anytime!
Check out the repository here:
https://github.com/avansledright/terraform-flask-module
-
Create an Image Labeling Application using Artificial Intelligence
I have a PowerPoint party to go to soon. Yes you read that right. At this party everyone is required to present a short presentation about any topic they want. Last year I made a really cute presentation about a day in the life of my dog.
This year I have decided that I want to bore everyone to death and talk about technology, Python, Terraform, and Artificial Intelligence. Specifically, I built an application that lets a user upload an image and get back a renamed file, labeled based on the object or scene in the image. The architecture is fairly simple: a user connects to a load balancer, which routes traffic to our containers, and the containers connect to Bedrock and S3 for image analysis and storage.
If you want to try it out, the site is hosted at https://image-labeler.vansledright.com. It will be up for some time; I haven’t decided how long I will host it, but at least through this weekend!
Here is the code that interacts with Bedrock and S3 to process the image:
def process_image():
    if not request.is_json:
        return jsonify({'error': 'Content-Type must be application/json'}), 400

    data = request.json
    file_key = data.get('fileKey')

    if not file_key:
        return jsonify({'error': 'fileKey is required'}), 400

    try:
        # Get the image from S3
        response = s3.get_object(Bucket=app.config['S3_BUCKET_NAME'], Key=file_key)
        image_data = response['Body'].read()

        # Check if image is larger than 5MB
        if len(image_data) > 5 * 1024 * 1024:
            logger.info("File size too large. Compressing image")
            image_data = compress_image(image_data)

        # Convert image to base64
        base64_image = base64.b64encode(image_data).decode('utf-8')

        # Prepare prompt for Claude
        prompt = """Please analyze the image and identify the main object or subject.
        Respond with just the object name in lowercase, hyphenated format.
        For example: 'coca-cola-can' or 'golden-retriever'."""

        # Call Bedrock with Claude
        response = bedrock.invoke_model(
            modelId='anthropic.claude-3-sonnet-20240229-v1:0',
            body=json.dumps({
                "anthropic_version": "bedrock-2023-05-31",
                "max_tokens": 100,
                "messages": [
                    {
                        "role": "user",
                        "content": [
                            {
                                "type": "text",
                                "text": prompt
                            },
                            {
                                "type": "image",
                                "source": {
                                    "type": "base64",
                                    "media_type": response['ContentType'],
                                    "data": base64_image
                                }
                            }
                        ]
                    }
                ]
            })
        )

        response_body = json.loads(response['body'].read())
        object_name = response_body['content'][0]['text'].strip()
        logger.info(f"Object found is: {object_name}")

        if not object_name:
            return jsonify({'error': 'Could not identify object in image'}), 422

        # Get file extension and create new filename
        _, ext = os.path.splitext(unquote(file_key))
        new_file_name = f"{object_name}{ext}"
        new_file_key = f'processed/{new_file_name}'

        # Copy object to new location
        s3.copy_object(
            Bucket=app.config['S3_BUCKET_NAME'],
            CopySource={'Bucket': app.config['S3_BUCKET_NAME'], 'Key': file_key},
            Key=new_file_key
        )

        # Generate download URL
        download_url = s3.generate_presigned_url(
            'get_object',
            Params={
                'Bucket': app.config['S3_BUCKET_NAME'],
                'Key': new_file_key
            },
            ExpiresIn=3600
        )

        return jsonify({
            'downloadUrl': download_url,
            'newFileName': new_file_name
        })

    except json.JSONDecodeError as e:
        logger.error(f"Error decoding Bedrock response: {str(e)}")
        return jsonify({'error': 'Invalid response from AI service'}), 500
    except Exception as e:
        logger.error(f"Error processing image: {str(e)}")
        return jsonify({'error': 'Error processing image'}), 500
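The compress_image helper is not shown above. Here is a minimal sketch of how it could be implemented with Pillow, re-encoding at decreasing quality until the payload fits under the 5MB limit; this is an assumption about the helper, not the app’s actual code:

import io

from PIL import Image

def compress_image(image_data, max_bytes=5 * 1024 * 1024):
    # Re-encode the image as JPEG at decreasing quality until it fits under max_bytes
    img = Image.open(io.BytesIO(image_data)).convert("RGB")
    buffer = io.BytesIO()
    for quality in (85, 70, 55, 40):
        buffer = io.BytesIO()
        img.save(buffer, format="JPEG", quality=quality)
        if buffer.tell() <= max_bytes:
            break
    # Return the smallest version produced
    return buffer.getvalue()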
If you think this project is interesting, feel free to share it with your friends or message me if you want all of the code!
-
Building a Python Script to Export WordPress Posts: A Step-by-Step Database to CSV Guide
Today, I want to share a Python script I’ve been using to extract blog posts from WordPress databases. Whether you’re planning to migrate your content, create backups, or analyze your blog posts, this tool makes it straightforward to pull your content into a CSV file.
I originally created this script when I needed to analyze my blog’s content patterns, but it’s proven useful for various other purposes. Let’s dive into how you can use it yourself.
Prerequisites
Before we start, you’ll need a few things set up on your system:
- Python 3.x installed on your machine
- Access to your WordPress database credentials
- Basic familiarity with running Python scripts
Setting Up Your Environment
First, you’ll need to install the required Python packages. Open your terminal and run:
pip install mysql-connector-python pandas python-dotenv
Next, create a file named .env in your project directory. This will store your database credentials securely:

DB_HOST=your_database_host
DB_USERNAME=your_database_username
DB_PASS=your_database_password
DB_NAME=your_database_name
DB_PREFIX=wp  # Usually 'wp' unless you changed it during installation
The Script in Action
The script is pretty straightforward: it connects to your WordPress database, fetches all published posts, and saves them to a CSV file. Here’s what happens under the hood (a sketch of the core logic follows the list):
- Loads environment variables from your .env file
- Establishes a secure connection to your WordPress database
- Executes a SQL query to fetch all published posts
- Converts the results to a pandas DataFrame
- Saves everything to a CSV file named ‘wordpress_blog_posts.csv’
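At its core, that boils down to a single query against the posts table and a DataFrame export. Roughly speaking it looks like the sketch below; the column selection and query details are illustrative, so the repository version may differ:

import os

import mysql.connector
import pandas as pd
from dotenv import load_dotenv

# Pull database credentials from the .env file
load_dotenv()
prefix = os.getenv("DB_PREFIX", "wp")

connection = mysql.connector.connect(
    host=os.getenv("DB_HOST"),
    user=os.getenv("DB_USERNAME"),
    password=os.getenv("DB_PASS"),
    database=os.getenv("DB_NAME"),
)

# Fetch all published posts from the WordPress posts table
query = f"""
    SELECT ID, post_title, post_date, post_content
    FROM {prefix}_posts
    WHERE post_status = 'publish' AND post_type = 'post'
    ORDER BY post_date
"""

df = pd.read_sql(query, connection)
df.to_csv("wordpress_blog_posts.csv", index=False)
connection.close()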
Running the script is as simple as:
python main.py
Security Considerations
A quick but important note about security: never commit your .env file to version control. I’ve made this mistake before, and trust me, you don’t want your database credentials floating around in your Git history. Add .env to your .gitignore file right away.
Potential Use Cases
I wrote this script to feed my posts to AI to help with SEO optimization and also help with writing content for my other businesses. Here are some other ways I’ve found this script useful:
- Creating offline backups of blog content
- Analyzing post patterns and content strategy
- Preparing content for migration to other platforms
- Generating content reports
Room for Improvement
The script is intentionally simple, but there’s plenty of room for enhancement. You might want to add:
- Support for extracting post meta data
- Category and tag information
- Featured image URLs
- Comment data
Wrapping Up
This tool has saved me countless hours of manual work, and I hope it can do the same for you. Feel free to grab the code from my GitHub repository and adapt it to your needs. If you run into any issues or have ideas for improvements, drop a comment below.
Happy coding!
Get the code on GitHub
-
2024 Year in Review: A Journey Through Code and Creation
As another year wraps up, I wanted to take a moment to look back at what I’ve shared and built throughout 2024. While I might not have posted as frequently as in some previous years (like 2020’s 15 posts!), each post this year represents a significant technical exploration or project that I’m proud to have shared.
The Numbers
This year, I published 9 posts, maintaining a steady rhythm of about one post per month. April was my most productive month with 2 posts, and I managed to keep the blog active across eight different months of the year. Looking at the topics, I’ve written quite a bit about Python, Lambda functions, and building various tools and automation solutions. Security and Discord-related projects also featured prominently in my technical adventures.
Highlights and Major Projects
Looking back at my posts, a few major themes emerged:
- File Processing and Automation: I spent considerable time working with file processing systems, creating efficient workflows and sharing my experiences with different approaches to handling data at scale.
- Python Development: From Lambda functions to local tooling, Python remained a core focus of my technical work this year. I’ve shared both successes and challenges, including that Thanksgiving holiday project that consumed way more time than expected (but was totally worth it!).
- Security and Best Practices: Throughout the year, I maintained a strong focus on security considerations in development, sharing insights and implementations that prioritize robust security practices.
Community and Testing
One consistent theme in my posts has been the value of community feedback and testing. I’ve actively sought input on various projects, from interface design to data processing implementations. This collaborative approach has led to more robust solutions and better outcomes.
Looking Forward to 2025
As we head into 2025, I’m excited to increase my posting frequency while continuing to share technical insights, project experiences, and practical solutions to real-world development challenges. There are already several projects in the pipeline that I can’t wait to write about. I also hope to ride 6000 miles on my bike throughout Chicago this year.
For those interested, my most popular GitHub repositories were:
- bedrock-poc-public
- count-s3-objects
- delete-lambda-versions
- dynamo-user-manager
- genai-photo-processor
- lex-bot-local-tester
- presigned-url-gateway
- s3-object-re-encryption
Thank You
To everyone who’s read, commented, tested, or contributed to any of the projects I’ve written about this year – thank you. Your engagement and feedback have made these posts and projects better. While this year saw fewer posts than some previous years, each one represented a significant project or learning experience that I hope provided value to readers.
Here’s to another year of coding, learning, and sharing!
-
Converting DrawIO Diagrams to Terraform
I’m going to start this post off by saying that I need testers: people to test this process from an interface perspective as well as a data perspective. I’m limited in the amount of test data that I have to put through the process.
With that said, I spent my Thanksgiving holiday writing code, building this project, and putting in way more time than I thought I would, but boy is it cool.
If you’re like me and working in a Cloud Engineering capacity then you probably have built a DrawIO diagram at some point in your life to describe or define your AWS architecture. Then you have spent countless hours using that diagram to write your Terraform. I’ve built something that will save you those hours and get you started on your cloud journey.
Enter https://drawiototerraform.com, my new tool that allows you to convert your DrawIO AWS architecture diagrams to Terraform just by uploading them. The process uses a combination of Python and LLMs to identify the components in your diagram and their relationships, write the base Terraform, analyze the initial Terraform for syntax errors, and ultimately test the Terraform by generating a Terraform plan.
All this is then delivered to you as a ZIP file for you to review, modify and ultimately deploy to your environment. By no means is it perfect yet and that is why I am looking for people to test the platform.
If you, or someone you know, is interested in helping me test, reach out to me through the website’s support page and I will get you some free credits so that you can test out the platform with your own diagrams.
If you are interested in learning more about the project in any capacity, do not hesitate to reach out to me at any time.
Website: https://drawiototerraform.com
-
API For Pre-signed URLs
Pre-signed URLs are used for downloading objects from AWS S3 buckets. I’ve used them many times in the past for various reasons, but this idea was a new one: a proof of concept for an API that creates a pre-signed URL and returns it to the user.
This solution utilizes an API Gateway and an AWS Lambda function. The API Gateway takes two parameters “key” and “expiration”. Ultimately, you could add another parameter for “bucket” if you wanted the gateway to be able to get objects from multiple buckets.
I used Terraform to create the infrastructure and Python to program the Lambda.
Take a look at the Lambda code below:
import boto3
import json
import os
from botocore.exceptions import ClientError

def lambda_handler(event, context):
    # Get the query parameters
    query_params = event.get('queryStringParameters', {})

    if not query_params or 'key' not in query_params:
        return {
            'statusCode': 400,
            'body': json.dumps({'error': 'Missing required parameter: key'})
        }

    object_key = query_params['key']
    expiration = int(query_params.get('expiration', 3600))  # Default 1 hour

    # Initialize S3 client
    s3_client = boto3.client('s3')
    bucket_name = os.environ['BUCKET_NAME']

    try:
        # Generate presigned URL
        url = s3_client.generate_presigned_url(
            'get_object',
            Params={
                'Bucket': bucket_name,
                'Key': object_key
            },
            ExpiresIn=expiration
        )

        return {
            'statusCode': 200,
            'headers': {
                'Access-Control-Allow-Origin': '*',
                'Content-Type': 'application/json'
            },
            'body': json.dumps({
                'url': url,
                'expires_in': expiration
            })
        }

    except ClientError as e:
        return {
            'statusCode': 500,
            'body': json.dumps({'error': str(e)})
        }
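Once it is deployed, exercising the endpoint is a single GET request. The invoke URL and resource path below are placeholders for whatever your API Gateway outputs; only the "key" and "expiration" parameters come from the post above:

import requests

# Placeholder invoke URL; substitute the endpoint that API Gateway / Terraform outputs
API_URL = "https://abc123.execute-api.us-west-2.amazonaws.com/prod/presign"

response = requests.get(API_URL, params={"key": "reports/example.pdf", "expiration": 900})
response.raise_for_status()

data = response.json()
print("Download URL:", data["url"])
print("Expires in:", data["expires_in"], "seconds")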
The Terraform will also output a Postman collection JSON file so that you can immediately import it for testing. If this code and pattern are useful to you, check it out on my GitHub below.