Week 2 – Fantasy Football and AI
After a heartbreaking (lol) loss in week one, our agent is back with its picks for week two!
But, before we start talking about rosters and picks and how I think AI is going to lose week two, let’s talk about the overall architecture of the application.
Current Architecture diagram
You may notice that after my post on Tuesday I have substantially reduced the data storage. I’m now using three DynamoDB tables to handle everything:
- Current Roster – This table is populated by an automated scraper that pulls the rosters for all the teams in the league.
- Player Data Table – This table holds all the historical data from the draft as well as projected stats for the 2025 season. It also holds the actual points scored after the week has completed.
- Waiver Table – This is probably the most notable addition to the overall Agent. This table is populated with data from both ESPN and FantasyPros.
The waiver wire functionality is a massive addition to the Agent. It now knows which players are available for me to add to the team. If we combine that with the player stats in the Player Data Table, we can get a clear picture of how a player MIGHT perform on a week-to-week basis.
The waiver table is populated by a Lambda function that goes out and scrapes the ESPN Fantasy Platform. The code is quite involved because ESPN doesn’t offer an API. I’m still not sure why they don’t build one; it seems like an easy win for them, especially as they get deeper into sports gambling. You can read the code here. This Lambda function runs on a cron schedule every day so that the Agent always has up-to-date data.
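Stripped down, the scraping Lambda looks roughly like the sketch below. The table and field names here are placeholders (the real scraping logic lives in the linked repo), but the overall shape is a daily scheduled handler that writes the scraped players into DynamoDB.

import boto3

dynamodb = boto3.resource("dynamodb")
waiver_table = dynamodb.Table("fantasy-waiver-table")  # placeholder table name


def scrape_espn_waivers():
    # Placeholder for the real scraping logic, which lives in the linked repo
    # and parses the ESPN Fantasy pages since there is no public API.
    return []


def handler(event, context):
    players = scrape_espn_waivers()

    # Upsert today's available players into the waiver table
    with waiver_table.batch_writer() as batch:
        for player in players:
            batch.put_item(Item={
                "player_name": player["name"],
                "position": player["position"],
                "projected_points": str(player["projected_points"]),
            })

    return {"players_updated": len(players)}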
The other major addition is a web interface. I realized that accessing this via a terminal is great, but it would be way more interesting to have something to look at, especially if I am away from the computer.
The web interface consists of a display of the roster:
Roster Screenshot
There are a couple of things I need to fix. You’ll notice that a few players “have points”; this is a problem with the data in the Player Data Table from when I was merging all the sources. Ideally, this will display the points the player has received for the week. Eventually I would like to add live streaming of the point totals.
Bench Players
It also displays the bench (notice the same data glitch). On my list of things to do is making these boxes drag-and-drop and having them automatically update the roster table so that we can move players around. I also want to add their projections to each block so I can see the projected points per week for each player.
The BEST part (in my opinion) is the prediction functionality. There is a box where we can choose which week we want a prediction for, and then the output from the Agent is returned.
So, let’s take a look at week two!
Optimal Week 2 Lineup
Here is a screenshot of the week two optimal lineup. This is the lineup I will be starting for week two unless one of my tight ends becomes healthy. With George Kittle on injured reserve and Evan Engram also questionable, the AI suggested picking up Hunter Henry.
The Agent also provides us with some waiver targets as well as matchup analysis.
It will also output the full response for each week. Here is the full text of what the Agent has predicted for week two:
LINEUP OPTIMIZATION FOR WEEK 2
Starting Lineup Strategy:
– Josh Allen (QB): Elite QB1 with 38.8 points in Week 1, facing NYJ defense that allowed significant passing yards. His 2024 average of 31.85 and strong 2025 projection make him a must-start.
– Josh Jacobs (RB1): Top projected RB at 18.4 points, facing Washington at home. Green Bay’s new offensive system should benefit Jacobs significantly.
– Saquon Barkley (RB2): Projected 18.1 points against Kansas City. His 2024 average of 15.93 and strong 2025 projection of 273.2 season points make him a core play.
– DK Metcalf (WR1): Highest projected WR at 14.3 points. Pittsburgh vs Seattle should be a competitive game with passing volume.
– Marvin Harrison Jr. (WR2): Rookie with 14.2 projection facing Carolina. His 154.5 season projection suggests breakout potential.
– Hunter Henry (TE): Best available TE option at 9.3 projected points with Kittle on IR and Engram questionable.
– DJ Moore (FLEX): Solid 13.9 projection against Detroit. Chicago’s passing game should see volume in what could be a high-scoring affair.
– Breece Hall (OP): Critical OP selection at 14.9 projected points. Hall offers RB1 upside against Buffalo, and using an RB in the OP slot provides higher floor than WR options.
CRITICAL WAIVER WIRE TARGETS:
Roster analysis shows no critical positional needs. Your team has excellent depth across all positions with 19 total players and strong projections.
TOP DEFENSE WAIVER PICKUP:
No specific waiver targets identified in the data, but consider streaming defenses based on matchups. Look for teams facing backup QBs or offenses with high turnover rates.
INJURY CONCERNS:
– George Kittle: On Injury Reserve, unavailable for Week 2
– Evan Engram: Listed as Questionable, monitor his status throughout the week
MATCHUP ANALYSIS:
– Josh Allen vs NYJ: Excellent matchup for Allen, Jets defense struggled in Week 1
– Jacobs vs WSH: Home game advantage for Green Bay, Washington’s run defense showed vulnerabilities
– Barkley vs KC: High-scoring game potential, Eagles should stay competitive requiring rushing attempts
– DK Metcalf vs SEA: Revenge game narrative, Pittsburgh’s passing attack should be featured
BEST DEFENSE MATCHUPS:
Detroit DST faces Chicago at home, which presents a solid matchup. Other potential streaming options to consider:
– Teams facing rookie QBs or backup quarterbacks
– Defenses at home against struggling offensive lines
– Units facing teams with high turnover rates from Week 1
WAIVER PRIORITY:
1. Streaming Defense: Look for defenses with favorable Week 3 matchups
2. Handcuff RBs: Target backup RBs for your elite runners (Jacobs, Barkley, Hall)
3. High-Upside WRs: Monitor snap counts and target shares from Week 2 for emerging receivers
Final Comments:
Your roster is exceptionally deep and well-constructed. The OP position selection of Breece Hall provides excellent upside, as RBs typically offer higher floors than WRs in flex positions. Focus on matchup-based streaming for defense and monitor the waiver wire for breakout players rather than addressing positional needs. Your core lineup projects for strong Week 2 performance with multiple players having 15+ point upside.
What’s Next?
So we have a pretty solid week two prediction. Josh Allen and Saquon Barkley I have faith in. The rest of the team is hit or miss. Chicago faces Detroit at Ford Field (Go Lions!) and both teams lost week one. But Ben Johnson facing his old team for the first time has me nervous.
This brings up a few of my to-dos for the overall program.
- Defensive matchups – I need to get data for the defenses to find the best matchups week to week. Starting a good defense is an easy way to gain an advantage every week.
- Add authentication – I added a really simple authentication method to the code just for the time being, but it would be nice to have single sign-on or something a little more secure.
- Drag-n-drop interface – I need to add functionality to be able to modify the roster on the web interface. It would be nice if this could also update ESPN.
- Slow Output – I’m always looking for ways to speed up the Agent. Currently it takes about 45 seconds to a minute to return a response.
Thoughts? I hope this series is entertaining. If you have ideas for the Agent please comment below or shoot me a message somewhere!
-
AI Loses Its First Matchup – Fantasy Football Agentic AI
Straight to the point: AI lost its week one matchup by 2.28 points. I watched as many of the games as I could so that I could offer a bit of commentary.
First, a recap. If you haven’t been following along, I have built and am continuing to improve an Agentic AI solution for drafting and managing a Fantasy Football team for the 2025 season. The team is entirely AI selected and you can see its predictions for week 1 here.
There were a couple of concerns I had looking at the lineup, most notably Sam Darnold in the superflex (OP) position, as I thought some of the other players might have breakout games. And boy, was I right!
Here are the results from week 1:
Now, let’s comment on a few things. George Kittle left his game with an injury and is likely to miss a few weeks. AI can’t predict in-game injuries, yet. DJ Moore was the final hope Monday night, and he was either not targeted when he was open or Caleb Williams simply didn’t throw a good ball. AI can’t predict in-game performance, yet.
The Agent did hit on Josh Allen with his amazing performance against the Ravens. Breece Hall was also a great pick, beating his projections.
What’s Next?
So we have some clear things to work out.
- Injuries – the AI Coach needs to understand that Kittle is likely out for a few weeks.
- Waivers – Now that we have an injury, we need to replace a player. Engram is on the bench, but is he the best tight end available?
With these clear needs in mind, I am actively working on building out a waiver wire monitoring tool to grab available players from the ESPN Fantasy platform. Because ESPN doesn’t have a native API, this has been particularly challenging. I added a Lambda function that runs daily and updates the other teams’ rosters in a DynamoDB table so that we can compare them against lists of players from other sources. This gives us a subset of “available” players. I will also be adding an injury parameter to help the Agent determine the next lineup. Finally, I am scraping the fantasy points earned per team and storing them as another data set that the Agent can use to help make predictions.
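As a rough sketch (with hypothetical function and variable names), the “available players” idea boils down to a set difference between every player we know about and every player already on a league roster:

def get_available_players(all_players, league_rosters):
    """Return players that are not on any roster in the league.

    all_players: iterable of player names from the FantasyPros/ESPN scrapes
    league_rosters: list of rosters, each a list of player names
    """
    rostered = {name.lower() for roster in league_rosters for name in roster}
    return [name for name in all_players if name.lower() not in rostered]


# Example usage with made-up data
available = get_available_players(
    ["Hunter Henry", "Josh Allen", "Tyler Conklin"],
    [["Josh Allen", "Saquon Barkley"], ["DK Metcalf"]],
)
print(available)  # ['Hunter Henry', 'Tyler Conklin']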
Current architecture diagram:
I’m also looking heavily into how I can structure all the data more efficiently so there is less infrastructure to manage. Ideally, it would be nice to have a single table with the player as the primary key and all of the subsets of data underneath.
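For illustration only, a single-table item keyed by player might look something like this. The field names are hypothetical, and the numbers are just the Week 1 figures mentioned above:

item = {
    "player_name": "Josh Allen",               # partition key
    "position": "QB",
    "historical": {"2024_avg_points": "31.85"},
    "weekly_points": {"week_1": "38.8"},
    "projections": {},                         # 2025 season/weekly projections would live here
    "roster_status": "starter",
}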
I think the AI is close to dominating the rest of the league! I will be posting its predictions for next week sometime on Thursday before the game!
-
An AI Fantasy Football Draft Assistant
Last year I attempted to program a Fantasy Football draft assistant which took live data from ESPN’s Fantasy Platform. Boy was that a mistake…
First of all, shame on ESPN for not having an API for their Fantasy sports applications. The reverse-engineered methods were neither fast enough nor reliable. So, this year I took a new approach to building a system for getting draft pick recommendations for my team.
I also wanted to put the example architecture and code I wrote the other day for the Strands SDK to work, so I used it to build an API that utilizes the AWS Bedrock platform to analyze data and ultimately return the best possible picks.
Here is a simple workflow of how the tool works (I generated the diagram with Claude AI; it is pretty OK).
The first problem I encountered was getting data. I needed two things:
1. Historical data for players
2. Projected fantasy data for the upcoming season
The historical data provides information about each player’s past seasons, and the projections cover the upcoming season. The projections are especially useful for accounting for incoming rookies.
In the repository I link below, I put scripts to scrape FantasyPros for both the historical and projected data. They store the results in separate files in case you want to use them in a different way. There is also a script to combine them into one data source and ultimately load it into a DynamoDB table.
The most important piece of the puzzle was actually simulating the draft. I needed to create a program that could track the other teams’ draft picks as well as give me suggestions and track my team’s picks. This is the heart of the repository, and I will be using it to get suggestions and track the draft for this coming season.
When you issue the “next” command, the application sends a request to the API with the current state of the draft. The payload looks like this:
payload = {
    "team_needs": team_needs,
    "your_roster": your_roster,
    "already_drafted": all_drafted_players,
    "scoring_format": self.session.scoring_format if self.session else "ppr",
    "league_size": self.session.league_size if self.session else 12
}
The “team_needs” key represents the current number of players remaining for each position. The “your_roster” key holds all of the current players on my team. The other important key is “already_drafted”; it sends all of the drafted players to the AI agent so it knows who NOT to recommend.
The application goes through all of the picks, and you can manually enter each of the other teams’ picks until the draft is complete.
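Under the hood, sending that payload is just an HTTP POST to the recommendation API. The endpoint URL and the sample values below are placeholders; a minimal sketch looks like this:

import requests

API_URL = "https://example.execute-api.us-west-2.amazonaws.com/prod/recommend"  # placeholder URL

# In the real tool these values come from the draft tracker's state
payload = {
    "team_needs": {"RB": 2, "WR": 3, "TE": 1},
    "your_roster": ["Josh Allen"],
    "already_drafted": ["Saquon Barkley", "Josh Jacobs"],
    "scoring_format": "ppr",
    "league_size": 12,
}

# The Bedrock-backed analysis can take a while, so use a generous timeout
response = requests.post(API_URL, json=payload, timeout=120)
response.raise_for_status()
print(response.json())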
I’ll post an update after my draft on August 24th with the team I end up with! I will still probably lose in my league, but this was fun to build. I hope to add some sort of week-to-week management of my team as well as a trade analysis tool in the future. It would also be cool to add some sort of analysis that could send updates to my Slack or Discord.
If you have other ideas message me on any platform you can find me on!
GitHub: https://github.com/avansledright/fantasy-football-agent
-
Deploying a Strands Agent on AWS Lambda using Terraform
Recently I’ve been exploring the AI space a lot more as I’m sure a lot of you are doing as well. I’ve been looking at the Strands Agent SDK. I see this SDK as being very helpful in building out agents in the future (follow the blog to see what I come up with!).
One thing that is not included in the SDK is the ability to deploy with Terraform. The SDK includes examples of how to package and deploy with the AWS CDK, so I adapted that to use Terraform.
I took my adaptation a step further and added an API Gateway layer so that you have the beginnings of a very simple AI agent deployed with the Strands SDK.
Check out the code here: https://github.com/avansledright/terraform-strands-agent-api
The code in the repository is fairly simple and includes everything you need to build an API Gateway, Lambda function, and some other useful resources just to help out.
The key to all of this is packaging the required dependencies inside the Lambda Layer. Without this, the function will not work.
File structure:
terraform-strands-agent-api/
├── lambda_code/
│   ├── lambda_function.py   # Your Strands agent logic
│   └── requirements.txt     # strands-agents + dependencies
├── api_gateway.tf           # API Gateway configuration
├── iam.tf                   # IAM roles and policies
├── lambda.tf                # Lambda function setup
├── locals.tf                # Environment variables
├── logs.tf                  # CloudWatch logging
├── s3.tf                    # Deployment artifacts
├── variables.tf             # Configurable inputs
└── outputs.tf               # API endpoints and resource IDs
You shouldn’t have to change much in any of these files until you want to fully start customizing the actual functionality of the agent.
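For context, the handler in lambda_code/lambda_function.py can stay very small. The sketch below is an assumption about what the demo code looks like rather than a copy of it, and it uses the basic Agent interface from the strands-agents package (double-check the Strands docs for the exact API):

import json
from strands import Agent

# Created once per container so warm invocations reuse the agent
agent = Agent()  # defaults to a Bedrock-hosted model


def lambda_handler(event, context):
    # API Gateway proxy integration delivers the request body as a JSON string
    body = json.loads(event.get("body") or "{}")
    prompt = body.get("prompt", "Hello!")

    result = agent(prompt)  # run the agent against the incoming prompt

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"response": str(result)}),
    }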
To get started follow the instructions below!
git clone https://github.com/avansledright/terraform-strands-agent-api
cd terraform-strands-agent-api

# Configure your settings. Add other values as needed
echo 'aws_region = "us-west-2"' > terraform.tfvars

# Deploy everything
terraform init
terraform plan
terraform apply
If everything goes as planned, you should see a curl command in the Terraform output that you can use to test the demo code.
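If you would rather test from Python than curl, something like this should work. The URL and request shape are assumptions; adjust them to match the endpoint and payload the demo code expects:

import requests

# Substitute the invoke URL from the Terraform outputs
api_url = "https://abc123.execute-api.us-west-2.amazonaws.com/dev/agent"

resp = requests.post(api_url, json={"prompt": "What can you do?"}, timeout=60)
print(resp.status_code)
print(resp.text)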
If you run into any issues feel free to let me know! I’d be happy to help you get this up and running.
If this has helped you in any way, please share it on your social media and with any of your friends!
-
Automating Proper Terraform Formatting using Git Pre-Hooks
I’ve noticed lately that a lot of Terraform is formatted differently. Some developers use two indents, others one. As long as the Terraform is functional, most people overlook the formatting of their infrastructure-as-code files.
Personally, I don’t think we should ever push messy code into our repositories. How could we solve this problem? Well, Terraform has a built-in formatter: the terraform fmt command will automatically format your code. The Git pre-push hook below runs that check before anything leaves your machine:

#!/usr/bin/env bash

# Initialize variables
EXIT_CODE=0
AFFECTED_FILES=()

# Detect OS for cross-platform compatibility
OS=$(uname -s)
IS_WINDOWS=false
if [[ "$OS" == MINGW* ]] || [[ "$OS" == CYGWIN* ]] || [[ "$OS" == MSYS* ]]; then
  IS_WINDOWS=true
fi

# Find all .tf files - cross-platform compatible method
if [ "$IS_WINDOWS" = true ]; then
  # For Windows using Git Bash
  TF_FILES=$(find . -type f -name "*.tf" -not -path "*/\.*" | sed 's/\\/\//g')
else
  # For Linux/Mac
  TF_FILES=$(find . -type f -name "*.tf" -not -path "*/\.*")
fi

# Check each file individually for better reporting
for file in $TF_FILES; do
  # Get the directory of the file
  dir=$(dirname "$file")
  filename=$(basename "$file")

  # Run terraform fmt check on the specific file - handle both OS formats
  terraform -chdir="$dir" fmt -check "$filename" >/dev/null 2>&1

  # If format check fails, record the file
  if [ $? -ne 0 ]; then
    AFFECTED_FILES+=("$file")
    EXIT_CODE=1
  fi
done

# If any files need formatting, list them and exit with error
if [ $EXIT_CODE -ne 0 ]; then
  echo "Error: The following Terraform files need formatting:"
  for file in "${AFFECTED_FILES[@]}"; do
    echo "  - $file"
  done
  echo ""
  echo "Please run the following command to format these files:"
  echo "terraform fmt -recursive"
  exit 1
fi

echo "All Terraform files are properly formatted"
exit 0
Save this code as the pre-push hook inside your “.git/hooks/” directory (and make it executable) so that it automatically runs when someone does a push. If there is badly formatted Terraform, you should see something like:
Running Terraform format check...
Error: The following Terraform files need formatting:
  - ./main.tf

Please run the following command to format these files:
terraform fmt -recursive
error: failed to push some refs to 'github.com:avansledright/terraform-fmt-pre-hook.git'
After running terraform fmt -recursive, the push should succeed!
If this was helpful to you or your team, please share it across your social media!
-
Building a Python Script to Export WordPress Posts: A Step-by-Step Database to CSV Guide
Today, I want to share a Python script I’ve been using to extract blog posts from WordPress databases. Whether you’re planning to migrate your content, create backups, or analyze your blog posts, this tool makes it straightforward to pull your content into a CSV file.
I originally created this script when I needed to analyze my blog’s content patterns, but it’s proven useful for various other purposes. Let’s dive into how you can use it yourself.
Prerequisites
Before we start, you’ll need a few things set up on your system:
- Python 3.x installed on your machine
- Access to your WordPress database credentials
- Basic familiarity with running Python scripts
Setting Up Your Environment
First, you’ll need to install the required Python packages. Open your terminal and run:
pip install mysql-connector-python pandas python-dotenv
Next, create a file named .env in your project directory. This will store your database credentials securely:

DB_HOST=your_database_host
DB_USERNAME=your_database_username
DB_PASS=your_database_password
DB_NAME=your_database_name
DB_PREFIX=wp  # Usually 'wp' unless you changed it during installation
The Script in Action
The script is pretty straightforward – it connects to your WordPress database, fetches all published posts, and saves them to a CSV file. Here’s what happens under the hood:
- Loads environment variables from your .env file
- Establishes a secure connection to your WordPress database
- Executes a SQL query to fetch all published posts
- Converts the results to a pandas DataFrame
- Saves everything to a CSV file named ‘wordpress_blog_posts.csv’
Running the script is as simple as:
python main.py
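For a sense of what that looks like in code, the core of the script boils down to something like the sketch below. The exact columns and file layout in the repo may differ; this is just the shape of the query and export:

import os

import mysql.connector
import pandas as pd
from dotenv import load_dotenv

load_dotenv()  # pull credentials from the .env file

conn = mysql.connector.connect(
    host=os.getenv("DB_HOST"),
    user=os.getenv("DB_USERNAME"),
    password=os.getenv("DB_PASS"),
    database=os.getenv("DB_NAME"),
)

prefix = os.getenv("DB_PREFIX", "wp")
query = f"""
    SELECT ID, post_title, post_date, post_content
    FROM {prefix}_posts
    WHERE post_status = 'publish' AND post_type = 'post'
    ORDER BY post_date
"""

df = pd.read_sql(query, conn)
df.to_csv("wordpress_blog_posts.csv", index=False)
conn.close()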
Security Considerations
A quick but important note about security: never commit your .env file to version control. I’ve made this mistake before, and trust me, you don’t want your database credentials floating around in your Git history. Add .env to your .gitignore file right away.
Potential Use Cases
I wrote this script to feed my posts to AI to help with SEO optimization and also help with writing content for my other businesses. Here are some other ways I’ve found this script useful:
- Creating offline backups of blog content
- Analyzing post patterns and content strategy
- Preparing content for migration to other platforms
- Generating content reports
Room for Improvement
The script is intentionally simple, but there’s plenty of room for enhancement. You might want to add:
- Support for extracting post metadata
- Category and tag information
- Featured image URLs
- Comment data
Wrapping Up
This tool has saved me countless hours of manual work, and I hope it can do the same for you. Feel free to grab the code from my GitHub repository and adapt it to your needs. If you run into any issues or have ideas for improvements, drop a comment below.
Happy coding!
Get the code on GitHub
-
2024 Year in Review: A Journey Through Code and Creation
As another year wraps up, I wanted to take a moment to look back at what I’ve shared and built throughout 2024. While I might not have posted as frequently as in some previous years (like 2020’s 15 posts!), each post this year represents a significant technical exploration or project that I’m proud to have shared.
The Numbers
This year, I published 9 posts, maintaining a steady rhythm of about one post per month. April was my most productive month with 2 posts, and I managed to keep the blog active across eight different months of the year. Looking at the topics, I’ve written quite a bit about Python, Lambda functions, and building various tools and automation solutions. Security and Discord-related projects also featured prominently in my technical adventures.
Highlights and Major Projects
Looking back at my posts, a few major themes emerged:
- File Processing and Automation: I spent considerable time working with file processing systems, creating efficient workflows and sharing my experiences with different approaches to handling data at scale.
- Python Development: From Lambda functions to local tooling, Python remained a core focus of my technical work this year. I’ve shared both successes and challenges, including that Thanksgiving holiday project that consumed way more time than expected (but was totally worth it!).
- Security and Best Practices: Throughout the year, I maintained a strong focus on security considerations in development, sharing insights and implementations that prioritize robust security practices.
Community and Testing
One consistent theme in my posts has been the value of community feedback and testing. I’ve actively sought input on various projects, from interface design to data processing implementations. This collaborative approach has led to more robust solutions and better outcomes.
Looking Forward to 2025
As we head into 2025, I’m excited to increase my posting frequency while continuing to share technical insights, project experiences, and practical solutions to real-world development challenges. There are already several projects in the pipeline that I can’t wait to write about. I also hope to ride 6000 miles on my bike throughout Chicago this year.
For those interested, my most popular GitHub repositories were:
- bedrock-poc-public
- count-s3-objects
- delete-lambda-versions
- dynamo-user-manager
- genai-photo-processor
- lex-bot-local-tester
- presigned-url-gateway
- s3-object-re-encryption
Thank You
To everyone who’s read, commented, tested, or contributed to any of the projects I’ve written about this year – thank you. Your engagement and feedback have made these posts and projects better. While this year saw fewer posts than some previous years, each one represented a significant project or learning experience that I hope provided value to readers.
Here’s to another year of coding, learning, and sharing!
-
The Discord Bot Framework
I’m happy to announce the release of my Discord Bot Framework, a tool that I’ve spent a considerable amount of time working on to help people build and deploy Discord bots quickly within AWS.
Let me first start off by saying I’ve never released a product. I’ve run a service business and I’m a consultant but I’ve never been a product developer. This release marks my first codebase that I’ve packaged and put together for developers and hobbyists to utilize.
So let’s talk about what this framework does. First and foremost, it is not a fully working bot; there are prerequisites that you must complete. The framework holds some example code for commands and message context responses, which should be enough to get any Python developer started on building their bot. The framework also includes all of the required Terraform to deploy the bot within AWS.
When you launch the Terraform, it will build a Docker image for you, deploy that image to ECR, and launch the container within AWS Fargate. All of this lives behind a load balancer so that you can scale your bot’s resources as needed, although I haven’t seen a Discord bot ever require that many resources!
I plan on supporting this project personally and providing support via email for the time being for anyone who purchases the framework.
Roadmap:
– GitHub Actions template for CI/CD
– More Bot example code for commands
– Bolt-on packages for new functionality
I hope that this framework helps people get started on building bots for Discord. If you have any questions feel free to reach out to me at any time!
-
Convert Spotify Links to YouTube Links
In a continuation of my Discord Bot feature deployment, I found a need to convert Spotify links to YouTube links. I use YouTube Music for my music streaming needs and the rest of the Discord uses Spotify.
With the help of ChatGPT, I created a script that converts Spotify links to YouTube links! It uses both the Spotify and YouTube APIs to grab track information and format search queries that return a relevant YouTube link.
The code consists of two primary functions, which I have shared below: one to get the artist and track names, and another to query YouTube. Combined, they can return a YouTube link for a multitude of applications.
def get_spotify_track_info(spotify_url):
    track_id = sp.track(spotify_url)['id']
    track_info = sp.track(track_id)
    return {
        'name': track_info['name'],
        'artists': [artist['name'] for artist in track_info['artists']]
    }

def search_youtube_video(track_info):
    search_query = f"{track_info['name']} {track_info['artists'][0]} official video"
    request = youtube.search().list(q=search_query, part='snippet', type='video', maxResults=1)
    response = request.execute()
    video_id = response['items'][0]['id']['videoId']
    return f"https://www.youtube.com/watch?v={video_id}"
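Chaining the two together is straightforward. Assuming sp and youtube are already-authenticated Spotify and YouTube API clients (as set up in the repo), usage looks roughly like this:

spotify_url = "https://open.spotify.com/track/<track-id>"  # placeholder link

track_info = get_spotify_track_info(spotify_url)
youtube_link = search_youtube_video(track_info)
print(youtube_link)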
I took this code and incorporated it into my Discord bot so that anytime a user posts a Spotify link, it will automatically convert it to a YouTube link. Here is an example:
If you want to utilize this code check out the Github link below. As always, if you found this article helpful please share it across your social media.
Github – https://github.com/avansledright/spotify-to-youtube
-
SES Monitoring
I love AWS, but one thing they don’t do is build complete tools, and SES is one example. I recently started getting emails about high usage for one of the identities that I have set up for SES. I assumed there would be a way to track usage within CloudWatch, but for the life of me I couldn’t find one. So I guess that means I need to build something.
The idea here is pretty simple: within SES identities you can set up a notification. So, I created an SNS topic and subscribed all delivery notifications to it. Then I subscribed a Lambda function to the topic. The Lambda function acts as the processor for the records, formats them in a usable way, and puts them into DynamoDB. I used the identity as the primary key. The result is a simple application architecture like the image below.
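As a rough sketch of the processing side, the Lambda handler mostly just unwraps the SNS message and hands it to the DynamoDB writer shown further down. The field names pulled from the notification payload are my assumption about the SES delivery notification format, so verify them against your own events:

import json


def handler(event, context):
    # Each SNS record wraps a JSON-encoded SES delivery notification
    for record in event["Records"]:
        message = json.loads(record["Sns"]["Message"])
        mail = message.get("mail", {})

        dynamo_object = {
            # Keys below mirror what put_dynamo_object() expects;
            # the exact fields available depend on the notification type
            "caller_identity": mail.get("callerIdentity", ""),
            "source": mail.get("source", ""),
            "destination": ", ".join(mail.get("destination", [])),
        }
        put_dynamo_object(dynamo_object)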
Every time an email is delivered, the Lambda function processes the event and checks the DynamoDB table to see if we have an existing record. If the identity is already present in the table, it returns the “count” value so that we can increment it. The “destination” value appends the destination of the email being sent. Below is a sample of the code I used to put the object into the DynamoDB table.
def put_dynamo_object(dynamo_object):
    # Look up the existing count for this identity (None if there is no record yet)
    count = dynamo_get_item(dynamo_object)
    if count is None or count == 0:
        count = 1
    else:
        count = int(count) + 1

    # Pull the email address out of the longer source string
    source_string = dynamo_object['source']
    match = re.search(r'[\w.+-]+@[\w-]+\.[\w.-]+', source_string)
    email = match.group(0)

    try:
        table.update_item(
            Key={
                'identity': email
            },
            AttributeUpdates={
                'details': {
                    'Value': {
                        'caller_identity': dynamo_object['caller_identity'],
                        'source': dynamo_object['source'],
                        'destination': dynamo_object['destination'],
                        'count': str(count)
                    }
                }
            }
        )
        return True
    except ClientError as e:
        print("Failed to put record")
        print(e)
        return False
If you want to use this code feel free to reach out to me and I will share with you the Terraform to deploy the application and as always, reach out with questions or feedback!