Week 2 – Fantasy Football and AI
After a heartbreaking (lol) loss in week one, our agent is back with its picks for week two!
But before we start talking about rosters and picks (and how I think the AI is going to lose week two), let’s talk about the overall architecture of the application.
Current Architecture diagram

You may notice that, since my post on Tuesday, I have substantially reduced the data storage. I’m now using three DynamoDB tables to handle everything:
- Current Roster – This table is populated by an automated scraper that pulls the rosters for all the teams in the league.
- Player Data Table – This table holds all the historical data from the draft as well as projected stats for the 2025 season. It also holds the actual points scored once the week has completed.
- Waiver Table – This is probably the most notable addition to the overall Agent. It is populated with data from both ESPN and FantasyPros.
The waiver wire functionality is a massive addition to the Agent. It now knows which players are available for me to add to the team. If we combine that with the stats in the Player Data Table, we get a clear picture of how a player MIGHT perform on a week-to-week basis.
The waiver table is populated by a Lambda function that scrapes the ESPN Fantasy platform. The code is quite involved, as ESPN has no API. I’m still not sure why they don’t build one; it seems like an easy win for them, especially as they get deeper into sports gambling. You can read the code here. The Lambda function runs on a cron schedule every day so that the Agent always has up-to-date data.
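To give a rough sense of the write side of that scraper, here is a minimal sketch. This is not the actual repository code (that’s linked above); the table name, key schema, and the scrape_espn_waivers() helper are all hypothetical stand-ins.

import boto3

dynamodb = boto3.resource("dynamodb")
# Hypothetical table name and key schema -- the real ones live in the repo
waiver_table = dynamodb.Table("fantasy-waiver-wire")

def lambda_handler(event, context):
    # scrape_espn_waivers() stands in for the involved ESPN scraping logic
    for player in scrape_espn_waivers():
        waiver_table.put_item(
            Item={
                "player_name": player["name"],  # assumed partition key
                "position": player["position"],
                "pro_team": player["team"],
                "percent_owned": str(player["owned"]),
            }
        )
    return {"statusCode": 200}

The daily schedule itself can just be a scheduled EventBridge rule pointed at the function.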
The other major addition is a web interface. Accessing the Agent via a terminal works, but it’s way more interesting to have something to look at, especially if I am away from the computer.
The web interface consists of a display of the roster:
Roster Screenshot

There are a couple of things I need to fix. You’ll notice that a few players already “have points”; this is a problem with the data in the Player Data Table from when I was merging all the sources. Ideally, this will display the points the player has scored for the week. Eventually I would like to add some live streaming of the point totals.
Bench Players

It also displays the bench (notice the same data glitch). On my list of things to do is making these boxes drag-and-drop, auto-updating the roster table so that we can move players around. I also want to add projections to each block so I can see the projected points per week for each player.
The BEST part (in my opinion) is the prediction functionality. There is a box where we can choose which week to get a prediction for, and it returns the output from the Agent.
So, let’s take a look at week two!
Optimal Week 2 Lineup

Here is a screenshot of the week two optimal lineup. This is the lineup I will be starting for week two unless one of my tight ends gets healthy. With George Kittle on injured reserve and Evan Engram also questionable, the AI suggested picking up Hunter Henry.
The Agent also provides us with some waiver targets as well as matchup analysis.
It will also output the full response for each week. Here is the full text of what the Agent has predicted for week two:
LINEUP OPTIMIZATION FOR WEEK 2
Starting Lineup Strategy:
– Josh Allen (QB): Elite QB1 with 38.8 points in Week 1, facing NYJ defense that allowed significant passing yards. His 2024 average of 31.85 and strong 2025 projection make him a must-start.
– Josh Jacobs (RB1): Top projected RB at 18.4 points, facing Washington at home. Green Bay’s new offensive system should benefit Jacobs significantly.
– Saquon Barkley (RB2): Projected 18.1 points against Kansas City. His 2024 average of 15.93 and strong 2025 projection of 273.2 season points make him a core play.
– DK Metcalf (WR1): Highest projected WR at 14.3 points. Pittsburgh vs Seattle should be a competitive game with passing volume.
– Marvin Harrison Jr. (WR2): Rookie with 14.2 projection facing Carolina. His 154.5 season projection suggests breakout potential.
– Hunter Henry (TE): Best available TE option at 9.3 projected points with Kittle on IR and Engram questionable.
– DJ Moore (FLEX): Solid 13.9 projection against Detroit. Chicago’s passing game should see volume in what could be a high-scoring affair.
– Breece Hall (OP): Critical OP selection at 14.9 projected points. Hall offers RB1 upside against Buffalo, and using an RB in the OP slot provides higher floor than WR options.

CRITICAL WAIVER WIRE TARGETS:
Roster analysis shows no critical positional needs. Your team has excellent depth across all positions with 19 total players and strong projections.

TOP DEFENSE WAIVER PICKUP:
No specific waiver targets identified in the data, but consider streaming defenses based on matchups. Look for teams facing backup QBs or offenses with high turnover rates.

INJURY CONCERNS:
– George Kittle: On Injured Reserve, unavailable for Week 2
– Evan Engram: Listed as Questionable, monitor his status throughout the week

MATCHUP ANALYSIS:
– Josh Allen vs NYJ: Excellent matchup for Allen, Jets defense struggled in Week 1
– Jacobs vs WSH: Home game advantage for Green Bay, Washington’s run defense showed vulnerabilities
– Barkley vs KC: High-scoring game potential, Eagles should stay competitive requiring rushing attempts
– DK Metcalf vs SEA: Revenge game narrative, Pittsburgh’s passing attack should be featured

BEST DEFENSE MATCHUPS:
Detroit DST faces Chicago at home, which presents a solid matchup. Other potential streaming options to consider:
– Teams facing rookie QBs or backup quarterbacks
– Defenses at home against struggling offensive lines
– Units facing teams with high turnover rates from Week 1

WAIVER PRIORITY:
1. Streaming Defense: Look for defenses with favorable Week 3 matchups
2. Handcuff RBs: Target backup RBs for your elite runners (Jacobs, Barkley, Hall)
3. High-Upside WRs: Monitor snap counts and target shares from Week 2 for emerging receivers

Final Comments:
Your roster is exceptionally deep and well-constructed. The OP position selection of Breece Hall provides excellent upside, as RBs typically offer higher floors than WRs in flex positions. Focus on matchup-based streaming for defense and monitor the waiver wire for breakout players rather than addressing positional needs. Your core lineup projects for strong Week 2 performance with multiple players having 15+ point upside.

What’s Next?
So we have a pretty solid week two prediction. Josh Allen and Saquon Barkley I have faith in. The rest of the team is hit or miss. Chicago faces Detroit at Ford Field (Go Lions!) and both teams lost week one. But Ben Johnson facing his old team for the first time has me nervous.
This brings up a few of my to-dos for the overall program.
- Defensive matchups – I need data on defenses to find the best matchups week to week. Starting a good defense is an easy way to gain an advantage every week.
- Add authentication – I added a really simple authentication method to the code just for the time being, but it would be nice to have single sign-on or something a bit more secure.
- Drag-and-drop interface – I need to add the ability to modify the roster from the web interface. It would be nice if this could also update ESPN.
- Slow output – I’m always looking for ways to optimize the Agent’s output. Currently it takes about 45 seconds to a minute to return a response.
Thoughts? I hope this series is entertaining. If you have ideas for the Agent please comment below or shoot me a message somewhere!
-
An AI Fantasy Football Draft Assistant
Last year I attempted to program a Fantasy Football draft assistant which took live data from ESPN’s Fantasy Platform. Boy was that a mistake…
First of all, shame on ESPN for not having an API for their fantasy sports applications. The reverse-engineered methods were neither fast nor reliable. So this year I took a new approach to building a system that generates draft pick recommendations for my team.
I also wanted to put the example architecture and code I wrote the other day for the Strands SDK to use, so I built an API on the AWS Bedrock platform that analyzes the data and ultimately returns the best possible picks.
Here is a simple workflow of how the tool works:

Workflow diagram (generated with Claude AI – it’s pretty OK)

The first problem I encountered was getting data. I needed two things:
1. Historical data for players
2. Projected fantasy data for the upcoming season
The historical data provides information about each player’s past seasons, and the projections cover the upcoming season, obviously. The projections are especially useful for incoming rookies, who have no historical stats.
In the repository linked below I included scripts that scrape FantasyPros for both the historical and the projected data. They store the results in separate files in case you want to use them in a different way. There is also a script that combines them into one data source and ultimately loads it into a DynamoDB table.
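As a rough idea of what that combine-and-load step could look like, here is a sketch (the actual scripts are in the repo; the file format, field names, and table name below are assumptions):

import json
from decimal import Decimal

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("fantasy-players")  # placeholder table name

def combine_and_load(historical_path, projections_path):
    # Merge the two scraped files on player name, then bulk-load into DynamoDB.
    # DynamoDB rejects Python floats, so numbers are parsed as Decimal.
    with open(historical_path) as f:
        historical = {p["player_name"]: p for p in json.load(f, parse_float=Decimal)}
    with open(projections_path) as f:
        projections = json.load(f, parse_float=Decimal)

    with table.batch_writer() as batch:
        for proj in projections:
            record = historical.get(proj["player_name"], {})
            record.update(proj)  # projected fields win on collisions
            batch.put_item(Item=record)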
The most important piece of the puzzle was actually simulating the draft. I needed a program that could track the other teams’ draft picks as well as give me suggestions and track my team’s picks. This is the heart of the repository, and I will be using it to get suggestions and track the draft this coming season.

In the application, when you issue the “next” command, a request is sent to the API with the current state of the draft. The payload looks like this:
payload = {
    "team_needs": team_needs,
    "your_roster": your_roster,
    "already_drafted": all_drafted_players,
    "scoring_format": self.session.scoring_format if self.session else "ppr",
    "league_size": self.session.league_size if self.session else 12,
}
The “team_needs” key represents the number of players still needed at each position. The “your_roster” key contains all of the current players on my team. The other important key is “already_drafted”, which sends all of the drafted players to the AI agent so it knows who NOT to recommend.
The application walks through all of the picks, and you manually enter each of the other teams’ picks until the draft is complete.
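The request itself is straightforward. Assuming the API is a plain HTTPS endpoint (the URL below is a placeholder, not the real one), the call could be sketched like this:

import json
import urllib.request

API_URL = "https://example.execute-api.us-west-2.amazonaws.com/prod/draft"  # placeholder

def get_pick_recommendation(payload):
    # POST the current draft state and return the agent's suggested picks
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())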
I’ll post an update after my draft on August 24th with the team I end up with! I’ll still probably lose my league, but this was fun to build. I hope to add some sort of week-to-week management of my team as well as a trade analysis tool in the future. It would also be cool to add some analysis that could send updates to my Slack or Discord.
If you have other ideas message me on any platform you can find me on!
GitHub: https://github.com/avansledright/fantasy-football-agent
-
SES Monitoring
I love AWS. But one thing they don’t do is build complete tools, and SES is one of them. I recently started getting emails about high usage for one of the identities I have set up in SES. I assumed there would be a way to track usage within CloudWatch, but for the life of me I couldn’t find one. So I guess that means I need to build something.
The idea here is pretty simple: within SES identities you can set up notifications. So I created an SNS topic and subscribed all delivery notifications to it. Then I subscribed a Lambda function to the topic. The Lambda function acts as the processor for the records: it formats them in a usable way and puts them into DynamoDB, with the identity as the primary key. The result is a simple application architecture like the image below.
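For reference, the processing side might unpack the SNS envelope roughly like this (a sketch: SES delivers the notification as a JSON string inside the SNS message, though the exact fields I store may differ):

import json

def lambda_handler(event, context):
    # The SES delivery notification arrives as a JSON string in the SNS envelope
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    mail = message["mail"]

    dynamo_object = {
        "source": mail["source"],                     # the sending identity
        "caller_identity": mail["sendingAccountId"],  # account that sent the mail
        "destination": mail["destination"],           # list of recipient addresses
    }
    put_dynamo_object(dynamo_object)  # defined below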
Every time an email is delivered, the Lambda function processes the event and checks the DynamoDB table for an existing record. If the identity is already present in the table, it retrieves the “count” value so that we can increment it. The “destination” value records the destination of the email being sent. Below is a sample of the code I used to put the object into the DynamoDB table.
import re

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('YOUR_TABLE_NAME')

def put_dynamo_object(dynamo_object):
    # dynamo_get_item (defined elsewhere) returns the existing count, or None
    count = dynamo_get_item(dynamo_object)
    if count is None:
        count = 1
    else:
        count = int(count) + 1

    # Pull the bare email address out of the longer source string
    source_string = dynamo_object['source']
    match = re.search(r'[\w.+-]+@[\w-]+\.[\w.-]+', source_string)
    email = match.group(0)

    try:
        table.update_item(
            Key={
                'identity': email
            },
            AttributeUpdates={
                'details': {
                    'Value': {
                        'caller_identity': dynamo_object['caller_identity'],
                        'source': dynamo_object['source'],
                        'destination': dynamo_object['destination'],
                        'count': str(count)
                    }
                }
            }
        )
        return True
    except ClientError as e:
        print("Failed to put record")
        print(e)
        return False
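A design note: the read-then-increment pattern above works, but DynamoDB can also increment a counter atomically, which avoids race conditions when deliveries land close together. A minimal sketch, reusing the table and email from above and assuming the count were stored as a top-level numeric attribute instead of nested under “details”:

# Atomic increment: no prior read required, safe under concurrent deliveries
table.update_item(
    Key={"identity": email},
    UpdateExpression="ADD #c :one",
    ExpressionAttributeNames={"#c": "count"},  # "count" is a DynamoDB reserved word
    ExpressionAttributeValues={":one": 1},
)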
If you want to use this code feel free to reach out to me and I will share with you the Terraform to deploy the application and as always, reach out with questions or feedback!
-
Moving AWS CloudFront Logs to DynamoDB
I think it’s pretty obvious that I love DynamoDB. It has become one of my favorite AWS services; I use it almost every day at work and am getting better at using it for my personal projects as well.
I had a client approach me about getting logs from a CloudFront distribution. CloudFront has a native logging function that writes .gz files to an S3 bucket. My client doesn’t have any sort of log ingestion service, so rather than build one I decided we could parse the .gz files and store the data in a DynamoDB table. To accomplish this I created a simple Lambda:
import gzip
import time
import uuid
from datetime import datetime, timedelta

import boto3
from botocore.exceptions import ClientError

# Creates a time-to-live value 90 days out
def ttl_time():
    ttl_date = datetime.now() + timedelta(90)
    return str(time.mktime(ttl_date.timetuple()))

# Puts the parsed log record into DynamoDB
def put_to_dynamo(record):
    client = boto3.resource('dynamodb', region_name='us-west-2')
    table = client.Table('YOUR_TABLE_NAME')
    try:
        response = table.put_item(Item=record)
        print(response)
    except ClientError as e:
        print("Failed to put record")
        print(e)
        return False
    return True

def lambda_handler(event, context):
    print(event)
    s3_key = event['Records'][0]['s3']['object']['key']
    s3 = boto3.resource("s3")
    obj = s3.Object("YOUR_BUCKET", s3_key)

    # CloudFront logs are gzipped tab-separated text with comment headers
    with gzip.GzipFile(fileobj=obj.get()["Body"]) as gzipfile:
        content = gzipfile.read()

    my_dict = {}
    keys = []
    for line in content.decode('utf8').splitlines():
        if line.startswith("#Fields:"):
            # The header names the columns; drop the "#Fields:" token itself
            keys = line.split(" ")[1:]
        elif line.startswith("#"):
            continue  # skip other comment lines such as "#Version:"
        else:
            values = line.split("\t")
            for key, value in zip(keys, values):
                my_dict[key] = value

    print('- ' * 20)
    # Append a UUID to help track down errors, plus a TTL so records expire
    my_dict["uuid"] = str(uuid.uuid4())
    my_dict["ttl"] = ttl_time()
    print(my_dict)

    if put_to_dynamo(my_dict):
        print("Successfully imported item")
        return True
    print("Failed to put record")
    return False
This Lambda runs every time an S3 object is created. It grabs the .gz file and parses it into a dictionary that can be imported into DynamoDB. One other thing to note is that I append a UUID to each item to help track down errors.
I wrote a simple front end for the client that grabs records based on a date input and writes the logs to a CSV so they can parse them on their local machines. I have a feeling we will be implementing a log aggregation server soon!
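The front end itself isn’t shown here, but the export could be sketched along these lines (the table name and the per-item “date” attribute are assumptions based on the CloudFront #Fields header):

import csv

import boto3
from boto3.dynamodb.conditions import Attr

dynamodb = boto3.resource("dynamodb", region_name="us-west-2")
table = dynamodb.Table("YOUR_TABLE_NAME")

def export_logs_for_date(log_date, out_path):
    # CloudFront's #Fields header includes a "date" column, so each item has one
    kwargs = {"FilterExpression": Attr("date").eq(log_date)}
    items = []
    while True:
        response = table.scan(**kwargs)
        items.extend(response["Items"])
        if "LastEvaluatedKey" not in response:
            break
        kwargs["ExclusiveStartKey"] = response["LastEvaluatedKey"]

    if items:
        with open(out_path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=sorted(items[0].keys()))
            writer.writeheader()
            writer.writerows(items)
    return len(items)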
If this code helps you please share it with your friends and co-workers!
-
A Dynamo Data Migration Tool
Have you ever wanted to migrate data from one DynamoDB table to another? I haven’t seen an AWS tool to do this, so I wrote one using Python.
A quick walkthrough video

import sys

import boto3

## USAGE ############################################################################
## python3 dynamo.py <source_table> <destination_table>                            ##
## Requires two profiles in your AWS config file: "source" and "destination"       ##
#####################################################################################

def dynamo_bulk_reader():
    session = boto3.session.Session(profile_name='source')
    dynamodb = session.resource('dynamodb', region_name="us-west-2")
    table = dynamodb.Table(sys.argv[1])

    print("Exporting items from: " + str(sys.argv[1]))
    response = table.scan()
    data = response['Items']

    # Keep scanning until DynamoDB stops returning a pagination key
    while 'LastEvaluatedKey' in response:
        response = table.scan(ExclusiveStartKey=response['LastEvaluatedKey'])
        data.extend(response['Items'])

    print("Finished exporting: " + str(len(data)) + " items.")
    return data

def dynamo_bulk_writer():
    session = boto3.session.Session(profile_name='destination')
    dynamodb = session.resource('dynamodb', region_name='us-west-2')
    table = dynamodb.Table(sys.argv[2])

    print("Importing items into: " + str(sys.argv[2]))
    # One batch writer for the whole import, not one per item
    with table.batch_writer() as batch:
        for table_item in dynamo_bulk_reader():
            batch.put_item(Item=table_item)
    print("Finished importing items...")

if __name__ == '__main__':
    print("Starting Dynamo Migrator...")
    dynamo_bulk_writer()
    print("Exiting Dynamo Migrator")
The process is pretty simple. First, we get all of the data from the source table and store it in a list. Next, we iterate over that list and write each item to the destination table using the batch writer.
The program has been tested against tables containing over 300 items. Feel free to use it for your environments! If you do use it, please share it with your friends and link back to this article!
-
Querying and Editing a Single Dynamo Object
I have a workflow that creates a record inside a DynamoDB table as part of a pipeline in AWS. The record has a primary key of the CodePipeline job ID. Later in the pipeline I wanted to edit that object to append the status of resources created by the pipeline.
To do this, I created two functions: one that returns the item from the table, and a second that performs the update and puts the updated item back into the table. Take a look at the code below and utilize it if you need to!
import boto3
from boto3.dynamodb.conditions import Key

def query_table(id):
    dynamodb = boto3.resource('dynamodb')
    table = dynamodb.Table('XXXXXXXXXXXXXX')
    # Return every item whose primary key matches the pipeline job ID
    response = table.query(
        KeyConditionExpression=Key('PRIMARYKEY').eq(id)
    )
    return response['Items']

def update_dynamo_status(id, resource_name, status):
    dynamodb = boto3.resource('dynamodb')
    table = dynamodb.Table('XXXXXXXXXXXXX')
    items = query_table(id)
    for item in items:
        # Do your update here (e.g. append resource_name/status to the item)
        response = table.put_item(Item=item)
    return response
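Usage might look like the sketch below. The attribute name and status values are hypothetical; the appending logic is the part marked “do your update here” above.

def append_resource_status(item, resource_name, status):
    # Hypothetical attribute: a list of per-resource status entries
    item.setdefault("resource_status", []).append(
        {"resource": resource_name, "status": status}
    )
    return item

# Called later in the pipeline with the CodePipeline job ID as the key, e.g.:
# update_dynamo_status("my-codepipeline-job-id", "my-s3-bucket", "CREATE_COMPLETE")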