Aaron VanSledright

Tag: ai

  • Week 2 – Fantasy Football and AI

    After a heartbreaking (lol) loss in week one, our agent is back with its picks for week two!

    But, before we start talking about rosters and picks and how I think AI is going to lose week two, let’s talk about the overall architecture of the application.

    Current Architecture diagram

    You may notice that since my post on Tuesday I have substantially reduced the data storage footprint. I’m now using three DynamoDB tables to handle everything.

    1. Current Roster – This table is populated by an automated scraper that pulls the rosters for all the teams in the league.
    2. Player Data Table – This table holds all the historical data from the draft as well as projected stats for the 2025 season. It also holds the actual points scored after the week has completed.
    3. Waiver Table – This is probably the most notable addition to the overall Agent. It is populated with data from both ESPN and FantasyPros (a rough sketch of the table design follows below).
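
    For illustration, here is a minimal boto3 sketch of how tables like these could be defined. The table and key names are placeholders, not necessarily the real schema:

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Placeholder names -- one table per data set described above.
    for table_name in ("current-roster", "player-data", "waiver-wire"):
        dynamodb.create_table(
            TableName=table_name,
            # Keying on the player keeps per-player lookups simple.
            AttributeDefinitions=[{"AttributeName": "player_id", "AttributeType": "S"}],
            KeySchema=[{"AttributeName": "player_id", "KeyType": "HASH"}],
            BillingMode="PAY_PER_REQUEST",
        )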

    The waiver wire functionality is a massive addition to the Agent. It now knows which players are available for me to add to the team. If we combine that with the player stats in the Player Data Table, we get a clear picture of how a player MIGHT perform on a week-to-week basis.

    The waiver table is populated by a Lambda function that scrapes the ESPN Fantasy platform. The code is quite involved because ESPN offers no API. I’m still not sure why they don’t build one; it seems like an easy win for them, especially as they get into more sports gambling. You can read the code here. The Lambda function runs on a cron schedule every day so that the Agent always has fresh data.
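
    The scraper is too long to include inline, but here is a stripped-down sketch of the general shape of that Lambda. The table name and scraping function are placeholders; the linked code does the real work:

    from decimal import Decimal

    import boto3

    TABLE_NAME = "waiver-wire"  # placeholder name

    def scrape_espn_waivers():
        # Stand-in for the real ESPN scraping logic (there is no public API).
        raise NotImplementedError

    def handler(event, context):
        # Triggered daily by an EventBridge cron rule.
        table = boto3.resource("dynamodb").Table(TABLE_NAME)
        for player in scrape_espn_waivers():
            # One item per available player so the Agent can query by player_id.
            table.put_item(Item={
                "player_id": player["id"],
                "name": player["name"],
                "position": player["position"],
                # DynamoDB wants Decimal, not float, for numbers.
                "projected_points": Decimal(str(player["projection"])),
            })
        return {"statusCode": 200}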

    The other major addition is a web interface. Accessing everything via a terminal is great, but it’s way more interesting to have something to look at, especially if I am away from the computer.

    The web interface consists of a display of the roster:

    Roster Screenshot

    There are a couple of things I need to fix. You’ll notice that a few players “have points”; this is a problem with the data in the Player Data Table from when I was merging all the sources. Ideally, this will display the points each player has scored for the week. Eventually I would like to add live streaming of the point totals.

    Bench Players

    It also displays the bench (notice the same data glitch). On my list of things to do: make these boxes drag-and-drop and have them auto-update the roster table so that we can move players around. I also want to add projections to each block so I can see the projected points per week for each player.

    The BEST part (in my opinion) is the prediction functionality. There is a box where we can choose which week to get a prediction for, and the agent’s output is returned.
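
    I won’t walk through the web code here, but conceptually the prediction box is just a thin endpoint in front of the Agent. A hypothetical sketch; Flask and the run_agent helper are my stand-ins, not necessarily what the app actually uses:

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    def run_agent(week):
        # Stand-in for the actual call into the Agent.
        raise NotImplementedError

    @app.route("/predict")
    def predict():
        # The week number comes from the dropdown box on the page.
        week = int(request.args.get("week", 1))
        return jsonify({"week": week, "prediction": run_agent(week)})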

    So, let’s take a look at week two!

    Optimal Week 2 Lineup

    Here is a screenshot of the week two optimal lineup. This is the lineup I will be starting for week two unless one of my tight ends becomes healthy. With George Kittle on injured reserve and Evan Engram also questionable, the AI suggested picking up Hunter Henry.

    The Agent will also provide us with some waiver targets as well as matchup analysis.

    It will also output the full response for each week. Here is the full text of what the Agent has predicted for week two:

    LINEUP OPTIMIZATION FOR WEEK 2

    Starting Lineup Strategy:
    – Josh Allen (QB): Elite QB1 with 38.8 points in Week 1, facing NYJ defense that allowed significant passing yards. His 2024 average of 31.85 and strong 2025 projection make him a must-start.
    – Josh Jacobs (RB1): Top projected RB at 18.4 points, facing Washington at home. Green Bay’s new offensive system should benefit Jacobs significantly.
    – Saquon Barkley (RB2): Projected 18.1 points against Kansas City. His 2024 average of 15.93 and strong 2025 projection of 273.2 season points make him a core play.
    – DK Metcalf (WR1): Highest projected WR at 14.3 points. Pittsburgh vs Seattle should be a competitive game with passing volume.
    – Marvin Harrison Jr. (WR2): Rookie with 14.2 projection facing Carolina. His 154.5 season projection suggests breakout potential.
    – Hunter Henry (TE): Best available TE option at 9.3 projected points with Kittle on IR and Engram questionable.
    – DJ Moore (FLEX): Solid 13.9 projection against Detroit. Chicago’s passing game should see volume in what could be a high-scoring affair.
    – Breece Hall (OP): Critical OP selection at 14.9 projected points. Hall offers RB1 upside against Buffalo, and using an RB in the OP slot provides higher floor than WR options.

    CRITICAL WAIVER WIRE TARGETS:

    Roster analysis shows no critical positional needs. Your team has excellent depth across all positions with 19 total players and strong projections.

    TOP DEFENSE WAIVER PICKUP:
    No specific waiver targets identified in the data, but consider streaming defenses based on matchups. Look for teams facing backup QBs or offenses with high turnover rates.

    INJURY CONCERNS:
    – George Kittle: On Injury Reserve, unavailable for Week 2
    – Evan Engram: Listed as Questionable, monitor his status throughout the week

    MATCHUP ANALYSIS:
    – Josh Allen vs NYJ: Excellent matchup for Allen, Jets defense struggled in Week 1
    – Jacobs vs WSH: Home game advantage for Green Bay, Washington’s run defense showed vulnerabilities
    – Barkley vs KC: High-scoring game potential, Eagles should stay competitive requiring rushing attempts
    – DK Metcalf vs SEA: Revenge game narrative, Pittsburgh’s passing attack should be featured

    BEST DEFENSE MATCHUPS

    Detroit DST faces Chicago at home, which presents a solid matchup. Other potential streaming options to consider:
    – Teams facing rookie QBs or backup quarterbacks
    – Defenses at home against struggling offensive lines
    – Units facing teams with high turnover rates from Week 1

    WAIVER PRIORITY:
    1. Streaming Defense: Look for defenses with favorable Week 3 matchups
    2. Handcuff RBs: Target backup RBs for your elite runners (Jacobs, Barkley, Hall)
    3. High-Upside WRs: Monitor snap counts and target shares from Week 2 for emerging receivers

    Final Comments:
    Your roster is exceptionally deep and well-constructed. The OP position selection of Breece Hall provides excellent upside, as RBs typically offer higher floors than WRs in flex positions. Focus on matchup-based streaming for defense and monitor the waiver wire for breakout players rather than addressing positional needs. Your core lineup projects for strong Week 2 performance with multiple players having 15+ point upside.

    What’s Next?

    So we have a pretty solid week two prediction. I have faith in Josh Allen and Saquon Barkley; the rest of the team is hit or miss. Chicago faces Detroit at Ford Field (Go Lions!) and both teams lost in week one. But Ben Johnson facing his old team for the first time has me nervous.

    This brings up a few of my to-dos for the overall program.

    1. Defensive matchups – I need data on the defenses to find the best matchups week to week. Starting a good defense is an easy way to gain an advantage every week.
    2. Add authentication – I added a really simple authentication method to the code just for the time being, but it would be nice to have Single Sign-On or something a little more secure.
    3. Drag-n-drop interface – I need to add functionality to modify the roster from the web interface. It would be nice if this could also update ESPN.
    4. Slow output – I’m always looking for ways to optimize the Agent’s output. Currently it takes about 45 seconds to a minute to return a response.

    Thoughts? I hope this series is entertaining. If you have ideas for the Agent please comment below or shoot me a message somewhere!

  • AI Loses Its First Matchup – Fantasy Football Agentic AI

    Straight to the point: AI lost its week one matchup by 2.28 points. I watched as many of the games as I could so that I could offer a bit of commentary.

    First, a recap. If you haven’t been following along, I have built, and am continuing to improve upon, an Agentic AI solution for drafting and managing a Fantasy Football team for the 2025 season. The team is entirely AI-selected and you can see its predictions for week 1 here.

    There were a couple of concerns I had looking at the lineup, most notably Sam Darnold in the superflex (OP) position. I thought some of the other players might have breakout games, and boy was I right!

    Here are the results from week 1:

    Now, let’s comment on a few things. George Kittle left his game with an injury and is likely to miss a few weeks. AI can’t predict in-game injuries, yet. DJ Moore was the final hope Monday night, and he was either not targeted when he was open or Caleb Williams simply didn’t throw a good ball. AI can’t predict in-game performance, yet.

    Now, the Agent did hit on Josh Allen with his amazing performance against the Ravens. Breece Hall was also a great pick, beating his projections.

    What’s Next?

    So we have some clear things to work out.

    1. Injuries – the AI Coach needs to understand that Kittle is likely out for a few weeks.
    2. Waivers – Now that we have an injury we need to replace a player. Engram is on the bench but is he the best tight end?

    With these clear needs in mind, I am actively working on a waiver wire monitoring tool that grabs available players from the ESPN Fantasy platform. Because ESPN doesn’t have a native API, this has been particularly challenging. I added a Lambda function that runs daily and updates the other teams’ rosters in a DynamoDB table so that we can compare lists of players across sources. This gives us a subset of “available” players. I will also be adding an injury parameter to help the Agent determine the next lineup. Finally, I am scraping the fantasy points earned per team and storing them as another data set the Agent can use to make predictions.
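
    The “available players” idea boils down to a set difference: every player in the data, minus everyone already on a roster in the league. A minimal sketch with placeholder field names:

    def available_players(all_players, league_rosters):
        # Players who appear on no roster in the league are waiver-eligible.
        rostered = {p["player_id"] for roster in league_rosters for p in roster}
        return [p for p in all_players if p["player_id"] not in rostered]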

    Current architecture diagram:

    I’m also looking hard at how I can structure all the data more efficiently so there is less infrastructure to manage. Ideally, it would be a single table with the player as the primary key and all of the subsets of data underneath.
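
    In DynamoDB terms, that single-table idea would look roughly like this: the player as the partition key and each subset of data distinguished by a sort key. An illustrative item layout, not a final design:

    # One table keyed on (player_id, record_type).
    items = [
        {"player_id": "josh-allen", "record_type": "PROFILE", "team": "BUF", "position": "QB"},
        {"player_id": "josh-allen", "record_type": "AVERAGE#2024", "points": 31.85},
        {"player_id": "josh-allen", "record_type": "ACTUAL#2025-WK1", "points": 38.8},
    ]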

    I think the AI is close to dominating the rest of the league! I will be posting its predictions for next week sometime on Thursday before the game!

  • Converting DrawIO Diagrams to Terraform

    I’m going to start this post off by saying that I need testers: people to test this process from an interface perspective as well as a data perspective. I’m limited in the amount of test data I have to put through the process.

    With that said, I spent my Thanksgiving holiday writing code, building this project, and putting in way more time than I thought I would. But boy, is it cool.

    If you’re like me and work in a Cloud Engineering capacity, you have probably built a DrawIO diagram at some point to describe or define your AWS architecture. Then you have spent countless hours using that diagram to write your Terraform. I’ve built something that will save you those hours and get you started on your cloud journey.

    Enter https://drawiototerraform.com, my new tool that converts your DrawIO AWS architecture diagrams to Terraform just by uploading them. The process uses a combination of Python and LLMs to identify the components in your diagram and their relationships, write the base Terraform, analyze that Terraform for syntax errors, and ultimately test it by generating a Terraform plan.
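
    At a very high level, a pipeline like that could be sketched as follows. This is heavily simplified and assumes an uncompressed .drawio file; the real service does far more analysis and validation:

    import subprocess
    import xml.etree.ElementTree as ET

    def llm_generate_terraform(shapes):
        # Stand-in for the LLM step that writes the base Terraform.
        raise NotImplementedError

    def diagram_to_terraform(drawio_path):
        # DrawIO files are XML under the hood; pull out the shapes.
        root = ET.parse(drawio_path).getroot()
        shapes = [cell.get("style", "") for cell in root.iter("mxCell")]
        return llm_generate_terraform(shapes)

    def plan_succeeds(workdir):
        # A terraform plan catches errors before the ZIP is delivered.
        return subprocess.run(["terraform", "plan"], cwd=workdir).returncode == 0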

    All of this is then delivered to you as a ZIP file to review, modify, and ultimately deploy to your environment. It is by no means perfect yet, and that is why I am looking for people to test the platform.

    If you, or someone you know, is interested in helping me test, reach out through the website’s support page and I will set you up with some free credits to try the platform with your own diagrams.

    If you are interested in learning more about the project in any capacity, do not hesitate to reach out to me at any time.

    Website: https://drawiototerraform.com

  • Product Name Detection with AWS Bedrock & Anthropic Claude

    Well, my AWS bill was a bit larger than normal this month due to testing this script. I thoroughly enjoy using Generative AI to do work for me, and I had some spare time to tackle this problem this week.

    A client sent me a bunch of product images that were not named properly. All of the files were named something like “IMG_123.jpeg”. There were 63 files in total, so rather than going through them one by one, I decided to see if I could get one of Anthropic’s models to handle it for me, and lo and behold, it was very successful!

    I scripted the workflow in Python and used AWS Bedrock to handle the interactions with the Claude 3 Haiku model. Take a look at the code below to see how it works.

    import os
    import shutil
    import sys

    import bedrock_actions  # the author's Bedrock wrapper (module path assumed)
    from helpers import modify_product_name  # cleanup helper (module path assumed)

    if __name__ == "__main__":
        print("Processing images")
        files = os.listdir("photos")
        print(f"{len(files)} files found")
        for file in files:
            if file.endswith(".jpeg"):
                print(f"Sending {file} to Bedrock")
                with open(f"photos/{file}", "rb") as photo:
                    # Tell the model to answer with the product name and nothing else.
                    prompt = """
                        Looking at the image included, find and return the name of the product.
                        Rules:
                        1. Return only the product name that has been determined.
                        2. Do not include any other text in your response like "the product determined..."
                        """
                    model_response = bedrock_actions.converse(
                        prompt,
                        image_format="jpeg",
                        encoded_image=photo.read(),
                        max_tokens=2000,
                        temperature=0.01,
                        top_p=0.999,
                    )
                print(model_response["output"])
                # Strip characters that would break the script or the file name.
                product_name = modify_product_name(
                    model_response["output"]["message"]["content"][0]["text"]
                )
                # Copy under the product name, then move the original out of the queue.
                shutil.copy(f"photos/{file}", f"renamed_photos/{product_name}.jpeg")
                shutil.move(f"photos/{file}", f"finished/{file}")
        sys.exit(0)

    The code loops through all the files in a folder called “photos”, passing each one to Bedrock and getting a response. There were a lot of characters returned that would either break the script or simply weren’t needed, so I also wrote a function to handle those.
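
    That cleanup function amounts to stripping anything that isn’t safe in a file name. A hypothetical version; the real modify_product_name may differ:

    import re

    def modify_product_name(raw_name):
        # Make the model's answer safe to use as a file name.
        name = raw_name.strip().replace(" ", "_")
        # Drop quotes, slashes, newlines, and anything else a shell dislikes.
        return re.sub(r"[^A-Za-z0-9_-]", "", name)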

    Ultimately, the script will copy the photo to a file named after the product and then move the original file into a folder called “finished”.

    I’ve uploaded the code to GitHub and you can utilize it however you want!

  • Building a Generative AI Workflow with AWS Bedrock

    I’ve finally been tasked with a Generative AI project to work on. I’ve done this workflow manually with ChatGPT in the past and it works quite well, but for this project the requirement was to use Amazon Web Services’ new product, AWS Bedrock.

    The workflow takes in some code and writes a technical document to support a clear English understanding of what the code is going to accomplish. Using AWS Bedrock, the AI will write the document and output it to an S3 bucket.

    The architecture involves uploading the initial code to an S3 bucket, which sends a request to an SQS queue and ultimately triggers a Lambda to prompt the AI and upload the output to a separate S3 bucket. Because this was a proof of concept, the Lambda function was given significant compute resources; going forward, I am going to look at placing this code into a Docker container so that it can scale for larger code inputs.
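
    In code terms, and stripped of error handling, the Lambda at the center of that flow could look something like this. The bucket name and the prompt_bedrock helper are placeholders for illustration:

    import json

    import boto3

    s3 = boto3.client("s3")
    OUTPUT_BUCKET = "doc-output-bucket"  # placeholder name

    def prompt_bedrock(code):
        # Stand-in for the return_prompt + invoke_model flow shown below.
        raise NotImplementedError

    def handler(event, context):
        # SQS wraps the original S3 event notification in its message body.
        for record in event["Records"]:
            s3_event = json.loads(record["body"])["Records"][0]["s3"]
            bucket = s3_event["bucket"]["name"]
            key = s3_event["object"]["key"]
            code = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode()
            document = prompt_bedrock(code)
            # Write the generated document to the separate output bucket.
            s3.put_object(Bucket=OUTPUT_BUCKET, Key=f"{key}.md", Body=document)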

    Here is the architecture diagram:

    Let’s take a look at some of the important code. First is prompt management. I wrote a function that takes the code as input along with a “prompt_type” parameter. This allows the function to scale to accommodate other prompts in the future.

    def return_prompt(code, prompt_type):
        if prompt_type == "testPrompt":
            # Embed the code being documented directly in the prompt body.
            prompt1 = f"Human: <your prompt>\n\n{code}\n\nAssistant:"
            return prompt1

    The important thing to look at here is the format of the message: you have to include the “Human:” and “Assistant:” markers. Without this formatting, your API call will error.

    The next bit of code is what we use to prompt the Bedrock AI.

    prompt_to_send = prompts.return_prompt(report_file, "testPrompt")
    body = {
        "prompt": prompt_to_send,
        "max_tokens_to_sample": 300,
        "temperature": 0.1,
        "top_p": 0.9
    }
    accept = 'application/json'
    contentType = 'application/json'

    # Return pseudo code
    bedrock_response = h.bedrock_actions.invoke_model(
        json.dumps(body, indent=2).encode('utf-8'),
        contentType,
        accept,
        modelId=modelid
    )

    def invoke_model(body, contentType, accept, modelId):
        print(f"Body being sent: {body}")
        try:
            response = bedrock_runtime.invoke_model(
                body=body,
                contentType=contentType,
                accept=accept,
                modelId=modelId
            )
            return response
        except ClientError as e:
            print("Failed to invoke Bedrock model")
            print(e)
            return False

    The body of our request is what configures Bedrock to run and create a response. These values can be tweaked as follows:

    max_tokens_to_sample: Specifies the number of tokens to sample in your request. Amazon recommends setting this to 4000.
    top_p: Use a lower value to ignore less probable options.
    top_k: Specify the number of token choices the model uses to generate the next token.
    temperature: Use a lower value to decrease randomness in the response.

    You can read more about the inputs here.

    If you want to see more of this code take a look at my GitHub repository below. Feel free to use it wherever you want. If you have any questions be sure to reach out to me!

    GitHub: https://github.com/avansledright/bedrock-poc-public