Category: Cloud Architecting

  • How I utilize Claude Code and AI to build complex applications

    “A Fever You Can’t Sweat Out – 20th Anniversary Deluxe” is an album that came out? Wow. I remember seeing Panic! as a teenager…

    I stayed away from AI for a long time. I think a lot of people in my field were nervous about security, bad code, incorrect information, and much more. In the early days of ChatGPT it was easy to get the AI to hallucinate and come up with nonsense. While it's still possible for this to happen, I found a workflow that has helped me build applications and proof-of-concept work very quickly.

    First – I have always given AI tasks that I can do myself.
    Second – If I can’t do a task, I need to learn about it first.

    These aren't really rules, but things I think about when I'm building out projects. I won't fall victim to the robot uprising!

    Let’s talk about my workflows.

    Tools:
    – Claude (Web)
    – Claude Code
    – Gemini
    – Gemini CLI
    – ChatGPT
    – Todoist

    I pay for Claude and I have subscriptions to Gemini Pro through my various GSuite subscriptions. ChatGPT I use for free. Todoist is my to-do app of choice. I've had the subscription since back in my Genius Phone Repair days, when I used it to manage all of the stores and their various tasks.

    The Flow

    Like most of you, I'm sure, I get ideas or fragments of ideas at random times. I put these into Todoist, where I have a project called "Idea Board." It's basically a simplified Kanban board with three columns:

    Idea | In progress | Finished

    The point of this is to track things and get them out of my brain, freeing up space in there for everything else that happens in my life. I use the "In Progress" column when I'm researching or actually sitting down to process an idea in more detail. Finally, the "Finished" column is used for either ideas that I'm not going to work on or ideas that have turned into full projects. This is not the part of the process where I actually detail out the project. It's just a landing place for ideas.

    The next part of the flow is where I actually detail out what I want to do. If you have been using Claude Code, Gemini CLI, or Codex, you know that input is everything; it always has been since AI became consumer-ready. I generally make a folder on my computer and start drafting my ideas in more detail in markdown files. If we look at CrumbCounts.com as an example, I started by simply documenting the problem I was trying to solve:

    Calculate the cost for this recipe.

    In order to do that, we then need to put a bunch of pieces together. Because I am an AWS fanboy, most of my designs and architectures revolve around AWS, but some day I might actually learn another cloud and utilize that instead. Fit for purpose.

    Anyway, the markdown file continually grows as I build the idea into a mostly detailed document that lays out the architecture, design principles, technologies to utilize, user flow, and much more. The more detail the better!

    When I am satisfied with the initial idea markdown file, I provide it to Gemini. It's not my favorite AI model out there, but it possesses the ability to take in and track a large amount of context, which is useful when presenting big ideas.

    I assign Gemini the role of "Senior Technology Architect" and I assume the role of "stakeholder." Gemini's task is to review my idea and either validate it or create the architecture for it. I prompt it to return a markdown file containing the technical architecture and technical details for the idea. At this point we reach our first "human in the loop" checkpoint.

    Because I don't trust our AI overlords, this is the first point at which I fully review the document Gemini outputs. I need to make sure that what the AI is putting out is valid, will work, and uses tools and technology that I am familiar with. If the output proposes something I'm unsure of, I need to research it or ask the AI to utilize something else.

    After I am satisfied with the architecture document, I place it into the project directory. This is where we change AI models. You see, Gemini is good at big-picture stuff but not so good at specifics (in my opinion). I take the architecture document and provide it to Claude (Opus, web browser or app) and give it the role of Senior Technology Engineer. Its job is to review the architecture document, find any weak points, things that are missing, or, sometimes, things that just won't work, then build a report and an engineering plan. This plan details out SPECIFIC technologies, patterns, and resources to use.

    I usually repeat this process a few times, reviewing each LLM's output for things that might have been missed by either myself or the AI. Once I have both documents in a place I feel confident about, that's when I actually start building.

    Because I lack trust in AI, I create my own repository in GitHub and set up the repository on my local machine. I do allow the AI the ability to commit and push code to the repository. Once the repository has been created, I have Gemini CLI build out the application file structure. This could include:

    • Creating folders
    • Creating empty files
    • Creating base logic
    • Creating Terraform module structures

    But NOTHING specific. Gemini, once again, is not good at detailed work. Maybe I'm using it wrong. Either way, I now have all of the basic structure. Think of Gemini as a junior engineer: it knows enough to be dangerous, so it has many guardrails.

    # SAMPLE PROMPT FOR GEMINI
    You are a junior engineer working on your first project. Your current story is to review the architecture.md and the engineering.md. Then, create a plan.md file that details out how you would go about creating the structure of this application. You should detail out every file that you think needs to be created as well as the folder structure.

    Inside the architecture and engineering markdown files there is detail about how the application should be designed, coded, and architected. Essentially, a pure runbook for our junior engineer.

    Once Gemini has created its plan and I have reviewed it, I allow it to write files into our project directory. These are mostly placeholder files. I will allow it to write some basic functions and lay out some simple Terraform files.

    Once our junior engineer, Gemini, has completed its work, I go through and review all of the files against the plan it created. If anything is missing, I direct it to review the plan again and make corrections. Once the code is at a place where I am happy with it, I create my first commit and push this baseline into the repository.

    At this point it's time for the heavy lifting. Time to put my expensive Anthropic subscription to use. Our "Senior Developer," Claude (Opus model), is let loose on the code base to build out all the logic. Nine times out of ten I allow it to make all the edits it wants and just let it go while I work on something else (watching YouTube).

    # SAMPLE CLAUDE PROMPT
    You are a senior developer. You are experienced in many application development patterns, AWS, Python and Terraform. You love programming and it's all you ever want to do. Your story in this sprint is to first review the engineering.md, architecture.md and plan.md file. Then review the Junior Engineer's files in this project directory. Once you have a good grasp on the project write your own plan as developer-plan.md. Stop there and I, your manager, will review.

    After I review the plan I simply tell it to execute on the plan. Then I cringe as my usage starts to skyrocket.

    Claude will inevitably have an issue, so I take a look at it every now and then, respond to questions if it has any, or allow it to continue. Once it reaches a logical end, I start reviewing its work. At this point it should have built me some form of the application that I can run locally. I'll get this fired up and start poking around to make sure the application does what I want it to do.

    At this point we can take a step back from utilizing AI and start documenting bugs. If I think this is going to be a long project, this is where I build out a new project in Todoist so that I have a persistent place to take notes and track progress. It's essentially a rudimentary Jira instance where each "task" is a story. I separate them into Bugs, Features, In Progress, and Testing.

    My Claude Code utilizes the Todoist MCP so it can view/edit/complete tasks as needed. After I have documented as much as I can find I let Claude loose on fixing the bugs.

    I think the real magic comes with automation. Depending on the project, I will allow Claude Code access to my Jenkins server via MCP. This lets Claude Code monitor and troubleshoot builds, which in turn lets it operate independently. What happens is that it will create new branches and push them into a development environment, triggering an automated deployment. The development environment is simply my home lab; I don't care if anything breaks there, and it doesn't really cost any money. If the build fails, Claude can review the logs, produce a fix, and start the CI/CD cycle all over again.
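    For reference, wiring MCP servers into Claude Code is just a small JSON file in the project root. A sketch of the shape (the server packages, commands, and environment variables here are placeholders, not the actual integrations I run; each MCP server's own docs have the real invocation):

```json
{
  "mcpServers": {
    "todoist": {
      "command": "npx",
      "args": ["-y", "todoist-mcp-server"],
      "env": { "TODOIST_API_KEY": "your-key-here" }
    },
    "jenkins": {
      "command": "npx",
      "args": ["-y", "jenkins-mcp-server"],
      "env": {
        "JENKINS_URL": "https://jenkins.homelab.internal",
        "JENKINS_USER": "claude-bot",
        "JENKINS_TOKEN": "api-token-here"
      }
    }
  }
}
```

    With that in place, Claude Code can call the servers' tools (read tasks, check build status) the same way it edits files.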

    Ultimately, I repeat the bug fix process until I get to my minimal viable product state and then deploy the application or project into whatever is deemed the production environment.

    So, it's 2026 and we're using AI to build stuff. What is your workflow? Still copying and pasting? Not using AI at all? Is AI just a bubble? Feel free to comment below!

  • Cloudwatch Alarm AI Agent

    I think one of the biggest time sucks is getting a vague alert or issue and not having a clue on where to start with troubleshooting.

    I covered this in the past when I built an agent that can review your AWS bill and find practical ways to save money within your account. That application wasn't event driven, but rather a container you could spin up when you needed a review, or something you could leave running in your environment. If we take the same read-only approach to building an AWS agent, we can have a new event-driven teammate that helps us with our initial troubleshooting.

    The process flow is straightforward:

    1. Given a Cloudwatch Alarm
    2. Send a notification to SNS
    3. Subscribe a Lambda function to the topic (this is our teammate)
    4. The function utilizes the AWS Nova Lite model to investigate the contents of the alarm and utilizes its read only capabilities to find potential solutions
    5. The agent sends its findings to you on your preferred platform
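    A hedged sketch of what that Lambda teammate can look like. The model ID, Slack webhook wiring, and payload fields below are illustrative assumptions rather than the deployed code, and the real agent also exposes read-only AWS tools the model can call during its investigation:

```python
import json
import os
import urllib.request


def build_prompt(alarm: dict) -> str:
    """Turn the CloudWatch alarm payload into an investigation prompt."""
    return (
        f"CloudWatch alarm '{alarm.get('AlarmName')}' changed to "
        f"{alarm.get('NewStateValue')}: {alarm.get('NewStateReason')}. "
        "Using read-only AWS access, suggest likely root causes and the "
        "first troubleshooting steps."
    )


def handler(event, context):
    import boto3  # ships with the Lambda Python runtime

    # SNS delivers the alarm as a JSON string in the message body
    alarm = json.loads(event["Records"][0]["Sns"]["Message"])

    # Ask Nova Lite to investigate (model ID/region are assumptions)
    bedrock = boto3.client("bedrock-runtime")
    response = bedrock.converse(
        modelId="amazon.nova-lite-v1:0",
        messages=[{"role": "user", "content": [{"text": build_prompt(alarm)}]}],
    )
    findings = response["output"]["message"]["content"][0]["text"]

    # Post the findings to Slack via an incoming webhook
    request = urllib.request.Request(
        os.environ["SLACK_WEBHOOK_URL"],
        data=json.dumps({"text": findings}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)
```

    The important design point is the read-only boundary: the function's execution role only ever gets describe/get/list permissions, so the worst the agent can do is read.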

    For my environment I primarily utilize Slack for alerting and messaging so I built that integration. Here is an architecture diagram:

    When the alarm triggers we should see a message in Slack like:

    The AI is capable of providing you actionable steps to either find the root cause of the problem or in some cases, present you with steps to solve the problem.

    This workflow significantly reduces your troubleshooting time, and less time troubleshooting means less downtime.

    So, if this is something you are interested in deploying I have created a Terraform module so you can quickly deploy it into your own environment to reduce your troubleshooting steps!

    Check it out here: https://aiopscrew.com

    If you have questions feel free to reach out to me at anytime!

  • Fantasy Football and AI – Week 12

    Well, unfortunately we took a big loss and are now in a three way tie for first place. Here are the actual results:

    I think the biggest hit was how poorly Josh Allen played. What is interesting is that I was reviewing his past performance against Houston, and he has had some of the worst outings of his career there. This week was no different… The other interesting thing is that Saquon Barkley just isn't the same back as he was last year. He is trending down.

    In response to Josh Allen's poor outing, I added deviation and historical performance analysis against an opponent to the data set, so now we have a value like:

    "HOU": {
       "avg_actual": 11.6,
       "avg_deviation": -2.67,
       "avg_deviation_percentage": -18.69,
       "avg_projected": 14.27,
       "by_season": {
        "2024": {
         "avg_actual": 14.6,
         "avg_deviation": 0,
         "avg_deviation_percentage": 0,
         "avg_projected": 0,
         "games_played": 1,
         "games_with_projections": 0
        },
        "2025": {
         "avg_actual": 5.05,
         "avg_deviation": -5.65,
         "avg_deviation_percentage": -52.8,
         "avg_projected": 10.7,
         "games_played": 2,
         "total_actual": 10.1,
         "total_projected": 21.4
        }
       },

    I then coded a new tool inside the AI that references this data set to assist with its final calculations.
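    The core of that tool's calculation is simple. A sketch (the field names mirror the data set excerpt above; the function name is mine for illustration, not the actual tool's):

```python
def deviation_summary(avg_actual: float, avg_projected: float) -> dict:
    """How far a player's actual points fall from projections vs one opponent.

    The real tool builds these per season and per opponent before handing
    them to the model as context.
    """
    avg_deviation = round(avg_actual - avg_projected, 2)
    return {
        "avg_actual": avg_actual,
        "avg_projected": avg_projected,
        "avg_deviation": avg_deviation,
        "avg_deviation_percentage": round(avg_deviation / avg_projected * 100, 2),
    }
```

    Plugging in the 2025-vs-HOU numbers from the excerpt (average actual 5.05, average projected 10.7) reproduces the stored -5.65 deviation and -52.8%.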

    So, when we ask the tool about Josh Allen’s performance against a team we get something like:

    Found exactly what you're looking for in Josh Allen's performance data.
    
    Josh Allen vs Kansas City (Week 9, 2025):
    - 28.8 fantasy points - That's a STRONG performance!
    - He was projected for 22.0 points but exceeded by 6.8 points (+30.9%)
    - This was one of his better games this season
    
    Historical Context (2024 vs KC):
    - Week 11, 2024: 24.0 fantasy points vs Kansas City
    
    Bottom Line: Josh Allen has performed WELL against Kansas City in recent matchups. In their two most recent meetings:
    - 2025 (Week 9): 28.8 points
    - 2024 (Week 11): 24.0 points
    
    That's an average of 26.4 fantasy points against KC - well above his season average. Allen seems to rise to the occasion against elite competition like Kansas City. The Chiefs' defense can be vulnerable to mobile QBs who can extend plays, and Allen's dual-threat ability has historically given them problems.
    
    The takeaway: When Allen faces KC, he typically delivers QB1 numbers. That 28.8-point performance this season shows he can still light up even the defending champs!

    I still need to backfill some more historical data, but that is a problem for a different day. Looking forward, I hope to build my own model based on this data and set up an automated workflow that includes data ingestion and automated model building so that I can consistently generate predictions.

    So, on to week 13. We have a bunch of injuries to contend with this week, so this lineup will likely change once the injury reports come in. Here is the current lineup:

    In other news I built and launched https://gridirondata.com which is an API to reference all of the data I have collected so far.

    Unfortunately, it's not free. But if you message me about it I'll probably hook you up!

  • Building jsontotoon.io: A Free Tool to Cut Your LLM API Costs

    If you’re working with LLM APIs, you’re probably watching your token counts like a hawk. Every JSON object you send to Claude, GPT-4, or Gemini costs tokens, and those curly braces and quotes add up fast. I built https://jsontotoon.io to solve this exact problem—and it’s completely free to use.

    The Problem: JSON is Token-Inefficient

    Here’s the thing: JSON is fantastic for machine-to-machine communication. It’s ubiquitous, well-supported, and everyone knows how to work with it. But when you’re paying per token to send data to an LLM? It’s wasteful.

    Look at a simple example:

    [
      {"name": "Alice", "age": 30, "city": "NYC"},
      {"name": "Bob", "age": 25, "city": "LA"},
      {"name": "Carol", "age": 35, "city": "Chicago"}
    ]

    That’s 125 tokens. All those quotes, braces, and commas? The LLM doesn’t need them to understand the structure. You’re literally paying to send redundant syntax.

    Enter TOON Format

    TOON (Token-Oriented Object Notation) converts that same data to:

    name, age, city
    Alice, 30, NYC
    Bob, 25, LA
    Carol, 35, Chicago

    68 tokens. That’s a 46% reduction. The same information, fully reversible back to JSON, but nearly half the cost.

    I realize this sounds too good to be true, but the math checks out. I tested it across real-world datasets—API responses, database dumps, RAG context—and consistently saw 35-45% token reduction. Your mileage will vary depending on data structure, but the savings are real.
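    To make the idea concrete, here is a toy converter for the flat, uniform-array case shown above. This is only a sketch of the core trick; the real TOON format (and jsontotoon.io) also handles nesting, escaping, and non-uniform data:

```python
import json


def to_toon(records: list[dict]) -> str:
    """Collapse a uniform array of objects into one header row plus data rows.

    Every record is assumed to share the same keys, so the keys are emitted
    once instead of being repeated per object as JSON does.
    """
    fields = list(records[0].keys())
    rows = [", ".join(fields)]
    for record in records:
        rows.append(", ".join(str(record[field]) for field in fields))
    return "\n".join(rows)


people = json.loads(
    '[{"name": "Alice", "age": 30, "city": "NYC"},'
    ' {"name": "Bob", "age": 25, "city": "LA"},'
    ' {"name": "Carol", "age": 35, "city": "Chicago"}]'
)
```

    Running `to_toon(people)` yields exactly the header-plus-rows text above, and the output is noticeably shorter than the JSON it came from.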

    How I Built It

    The backend is straightforward Python running on AWS Lambda. The TOON parser itself is deterministic—same JSON always produces the same TOON output, and round-trip conversion is lossless. No data gets mangled, no weird edge cases (well, I fixed those during testing).

    Infrastructure-wise:

    • CloudFront + S3 for the static frontend
    • API Gateway + Lambda for the conversion endpoint
    • DynamoDB for API key storage (with email verification via SES)
    • WAF with rate limiting to prevent abuse (10 requests per 5 minutes on API endpoints)
    • CloudWatch dashboards for monitoring

    The whole setup costs me about $8-15/month in AWS fees, mostly for WAF. The conversion itself is so fast (< 100ms average) and cheap that I can offer unlimited free API keys without worrying about runaway costs.
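    For the curious, the rate limit is a single WAF rate-based rule. A rough Terraform sketch (resource names and scope are illustrative, not the deployed module; check the `aws_wafv2_web_acl` docs for your provider version):

```hcl
resource "aws_wafv2_web_acl" "api" {
  name  = "jsontotoon-api"
  scope = "REGIONAL" # attached in front of the API Gateway stage

  default_action {
    allow {}
  }

  rule {
    name     = "api-rate-limit"
    priority = 1

    action {
      block {}
    }

    statement {
      rate_based_statement {
        limit                 = 10  # requests per window, per source IP
        evaluation_window_sec = 300 # 5 minutes
        aggregate_key_type    = "IP"
      }
    }

    visibility_config {
      cloudwatch_metrics_enabled = true
      metric_name                = "api-rate-limit"
      sampled_requests_enabled   = true
    }
  }

  visibility_config {
    cloudwatch_metrics_enabled = true
    metric_name                = "jsontotoon-api"
    sampled_requests_enabled   = true
  }
}
```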

    Real Use Cases

    I built this because I was spending way too much on Claude API calls for my fantasy football AI agent project. Every week I send player stats, injury reports, and matchup data in prompts. Converting to TOON saved me about 38% on tokens—which adds up when you’re making hundreds of calls per week.

    But the use cases go beyond my specific problem:

    • RAG systems: Fit more context documents in your prompts without hitting limits
    • Data analysis agents: Send larger datasets for analysis at lower cost
    • Few-shot learning: Include more examples without token bloat
    • Structured outputs: LLMs can generate TOON that's easier to parse than JSON

    Try It Yourself

    The web interface at https://jsontotoon.io is free to use—no signup required. Just paste your JSON, get TOON. If you want to integrate it into your application, grab a free API key (also no cost, no expiration).

    Full API docs are available at https://jsontotoon.io/docs.html, with code examples in Python, JavaScript, Go, and cURL.

  • AI and Fantasy Football – Week 11

    Wow. Week 11 was filled with injuries. Josh Jacobs went down early with a knee injury and Aaron Rodgers went out with a wrist injury, but it all started off with an epic performance by TreVeyon Henderson, who put up 32.3 points. The end result of week 11? ANOTHER VICTORY FOR AI! The team is now in 1st place. With all those injuries you might be wondering how we pulled off another victory. Well, here are the final scores for the week:

    Josh Allen came through massively with a 51 point game. Riley Patterson put up a few good kicks over in Madrid and George Kittle had a great game as well.

    Looking forward to week 12, we will have to battle some injuries but I think the depth chart should be able to sustain the blows. Here is the current proposed lineup:

    So, tech and data stuff. I added deviations into the data set, so now we can see the difference between a player's projection and their actual points. This will help the AI determine how a player is performing. It is structured on a per-season, per-week basis as well as historically against an opponent. Next year this data will be valuable when looking at future matchups and draft choices.

    Next, I'm also working on launching an API for this entire project so that you can access the data and utilize it in your own applications. I hope to have a working beta by the end of the week! If you are interested in utilizing it, feel free to message me. I'm sure a few of you can receive some free keys once it's ready! I'll have a separate post about the API once it's live.

  • Fantasy Football and AI – Week 9

    A BIG win in week 9! Our team is now tied for 1st place with 6 wins and 3 losses. We currently have 1318.66 total fantasy points on the season. It was looking pretty grim going into the afternoon games on Sunday. The receivers the AI selected were not performing, and other players were barely hitting their projections. Josh Allen sparked some life into the team with his 30 points, and then Sam Darnold showed everyone how to play quarterback with a 37.2-point performance! Check out the full results below.

    So, now we're off to week 10. Currently, as I write this, Saquon Barkley is questionable to play. I expect that he does play, but the AI will not put him in the starting lineup. We picked up the Panthers' defense at the AI's request. I expect this is because they are playing the Saints, who just traded away Rashid Shaheed. The addition of DJ Moore into the OP slot is going to be a rough choice over playing a quarterback in that position. I'll be monitoring the roster throughout the week to see if there are any other suggestions we can make. Here is what we are currently fielding for week 10:

    Tune in next week for the results! Hopefully AI can get to 7-3!

  • Fantasy Football and AI – Week 7

    Well, our win streak was too good to be true. Unfortunately we lost a close one in week 6. It came down to the Monday night games, and Sam Darnold just wasn't able to get it going against the Texans, even though the Seahawks still pulled out the win.

    Our running back group also did not perform well outside of Josh Jacobs. The loss was by a difference of about 7 points, so if anyone had put up another touchdown we could have won.

    Anyway, on to week 8. A few byes to contend with, but otherwise most of our starters will be playing. The AI suggested grabbing the Colts defense and kicker as they are playing Tennessee. Breece Hall is currently questionable to play, so we will have to keep an eye on that, but he has a favorable matchup against the Bengals. The current roster is below.

    I promised to work on MCP this week but have only made a little bit of progress. I've been doing a lot of research on doing it in a cost-effective manner, as this project makes ZERO dollars and I can't afford to set up a bunch of expensive infrastructure. SO – this week I worked on combining the waiver table and the stats table into one table so that we can minimize DynamoDB calls throughout the application. The other thing I did was set up DynamoDB Streams, which are converted into text files for each player and placed into an S3 bucket. I think this will be the first step in setting up a RAG pipeline so that a model can become more "aware" of the current NFL and fantasy football landscape.
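    The stream-to-S3 step can be sketched as a small Lambda. This is an illustration of the pattern rather than my exact code, and the bucket name and text layout are placeholders:

```python
def render_player_text(player: dict) -> str:
    """Flatten one player item into a plain-text blob for the RAG bucket."""
    lines = [f"{player['player_name']} ({player['position']})"]
    for season, data in player.get("seasons", {}).items():
        for week, stats in data.get("weekly_stats", {}).items():
            lines.append(
                f"{season} week {week}: {stats['fantasy_points']} pts "
                f"vs {stats['opponent']}"
            )
    return "\n".join(lines)


def handler(event, context):
    # Lambda entry point subscribed to the DynamoDB stream.
    import boto3  # ships with the Lambda Python runtime
    from boto3.dynamodb.types import TypeDeserializer

    s3 = boto3.client("s3")
    deserialize = TypeDeserializer().deserialize
    for record in event["Records"]:
        image = record["dynamodb"].get("NewImage")
        if not image:
            continue  # deletes carry no new image to render
        player = {key: deserialize(value) for key, value in image.items()}
        s3.put_object(
            Bucket="my-knowledgebase-bucket",  # placeholder bucket name
            Key=f"players/{player['player_id']}.txt",
            Body=render_player_text(player).encode(),
        )
```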

    Here is an updated architecture diagram. You'll notice the S3 bucket on the right side. This is the eventual start of our knowledge base.

    You’ll also notice the waiver table removed. The new player structure looks like this:

    {
     "player_id": "George Kittle#TE",
     "espn_player_id": 3040151,
     "player_name": "George Kittle",
     "position": "TE",
     "seasons": {
      "2024": {
       "season_totals": {
        "MISC_FL": 0,
        "MISC_FPTS": 158.6,
        "MISC_FPTS/G": 10.6,
        "MISC_G": 15,
        "MISC_ROST": "99.4%",
        "Player": "George Kittle",
        "Rank": 1,
        "RECEIVING_20+": 21,
        "RECEIVING_LG": 43,
        "RECEIVING_REC": 78,
        "RECEIVING_TD": 8,
        "RECEIVING_TGT": 94,
        "RECEIVING_Y/R": 14.2,
        "RECEIVING_YDS": 1106,
        "RUSHING_ATT": 0,
        "RUSHING_TD": 0,
        "RUSHING_YDS": 0
       },
       "weekly_stats": {
        "1": {
         "fantasy_points": 4,
         "opponent": "NYJ"
        },
        "2": {
         "fantasy_points": 13.6,
         "opponent": "MIN"
        },
        "4": {
         "fantasy_points": 10.5,
         "opponent": "NE"
        },
        "5": {
         "fantasy_points": 12.4,
         "opponent": "ARI"
        },
        "6": {
         "fantasy_points": 17.8,
         "opponent": "SEA"
        },
        "7": {
         "fantasy_points": 9.2,
         "opponent": "KC"
        },
        "8": {
         "fantasy_points": 18.8,
         "opponent": "DAL"
        },
        "10": {
         "fantasy_points": 11.7,
         "opponent": "TB"
        },
        "12": {
         "fantasy_points": 14.2,
         "opponent": "GB"
        },
        "13": {
         "fantasy_points": 0.7,
         "opponent": "BUF"
        },
        "14": {
         "fantasy_points": 15.1,
         "opponent": "CHI"
        },
        "15": {
         "fantasy_points": 6.1,
         "opponent": "LA"
        },
        "16": {
         "fantasy_points": 10.6,
         "opponent": "MIA"
        },
        "17": {
         "fantasy_points": 11.2,
         "opponent": "DET"
        },
        "18": {
         "fantasy_points": 2.7,
         "opponent": "ARI"
        }
       }
      },
      "2025": {
       "injury_status": "ACTIVE",
       "jersey_number": "85",
       "percent_owned": 98.97,
       "pro_team_id": 25,
       "season_projections": {
        "MISC_FL": 0.5,
        "MISC_FPTS": 147.6,
        "RECEIVING_REC": 76,
        "RECEIVING_TD": 7.5,
        "RECEIVING_YDS": 1036.9
       },
       "team": "SF",
       "weekly_outlooks": {
        "1": "George Kittle is healthy and wealthy for the 49ers' Week 1 matchup against Seattle after signing a big four-year contract extension in the offseason. Kittle's role as a pass catcher should be intensified early on with WR Brandon Aiyuk (ACL) on the PUP list to begin the campaign and Jauan Jennings (calf, contract) uncertain to suit up against the Seahawks. Kittle is coming off a 78-catch, 1,106-yard, eight-TD 2024 campaign, further cementing his place as one of the NFL's elite producers at tight end. The Seahawks were middle-of-the-pack against the position last year, giving up an average of 51.5 receiving yards per game.",
        "2": "George Kittle won't play in San Francisco's Week 2 matchup against New Orleans due to a hamstring injury that landed him on IR. Luke Farrell and Jake Tonges, who caught a TD in Kittle's absence last week against the Seahawks, will be asked to step in at tight end for the 49ers.",
        "3": "George Kittle will miss his second straight game for the 49ers in Week 3 against the Cardinals while he remains on IR due to a hamstring injury. Jake Tonges and Luke Farrell should continue to hold down the fort at TE for Kittle until the latter is able to return. Kittle won't be eligible to suit up again until Week 6."
       },
       "weekly_projections": {
        "5": 12.7,
        "6": 13.1,
        "7": 13.4,
        "8": 11.7,
        "10": 14.6,
        "12": 11.6,
        "13": 13.2,
        "14": 13.4,
        "15": 14.1,
        "16": 14.1,
        "17": 14.5
       },
       "weekly_stats": {
        "1": {
         "fantasy_points": 12.5,
         "opponent": "SEA",
         "team": "SF",
         "updated_at": "2025-10-15T17:40:58.625370"
        },
        "2": {
         "fantasy_points": 12.5,
         "opponent": "NO",
         "team": "SF",
         "updated_at": "2025-09-16T17:08:05.179797"
        },
        "3": {
         "fantasy_points": 12.5,
         "opponent": "ARI",
         "team": "SF",
         "updated_at": "2025-09-23T15:00:13.907272"
        },
        "4": {
         "fantasy_points": 12.5,
         "opponent": "JAX",
         "team": "SF",
         "updated_at": "2025-09-30T15:00:14.035733"
        },
        "5": {
         "fantasy_points": 12.5,
         "opponent": "LAR",
         "team": "SF",
         "updated_at": "2025-10-07T15:00:13.665217"
        },
        "6": {
         "fantasy_points": 12.5,
         "opponent": "TB",
         "team": "SF",
         "updated_at": "2025-10-14T15:00:14.748804"
        }
       }
      }
     },
     "updated_at": "2025-10-22T18:11:05.039158"
    }

    I hope to continue refining this so that it can be used in future seasons. Then we can keep using the bot into the 2026 season.

    Anyway, hopefully I can figure out MCP and the knowledge base this week. Winter is coming, so it's time to hunker down and build AWS architectures!

  • Using Strands to build an AWS Cost Analysis Agent

    Taking a break from Fantasy Football today to talk about a quick weekend project I put together.

    A friend of mine was chatting about how their AWS costs are getting out of control and how they aren't sure where to start when it comes to cleaning up the account. This gave me an idea: utilize AI to build an agent that can interact with your AWS account to review resources, provide cost analysis, and give you clear CLI commands or console instructions to help clean up the account.

    In order to do this, I wanted to incur as little cost as possible, so I built a Docker image to run it locally. First, there is a shell script that builds an IAM user in your account with read-only access, Cost Explorer access, and access to Bedrock (to communicate with an AI model).
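    As a sketch, the Bedrock and Cost Explorer portion of that IAM policy might look like the following (the exact actions and ARN pattern are assumptions for illustration; a managed read-only policy covers the rest of the account access):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowModelInvocation",
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:*::foundation-model/amazon.nova-*"
    },
    {
      "Sid": "AllowCostExplorerReads",
      "Effect": "Allow",
      "Action": [
        "ce:GetCostAndUsage",
        "ce:GetCostForecast"
      ],
      "Resource": "*"
    }
  ]
}
```

    Keeping every statement read-only (plus invoke on the model) is what makes it safe to point the agent at a real account.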

    The Docker image runs and builds an agent that interacts with whichever model you want to utilize. I picked Amazon's Nova model just to keep costs down. The container then presents a web interface where the account's bill breakdown is displayed:

    It will also display some common costly resources and their counts:

    The next block is where things get very helpful. The AI presents suggestions for how to save some money, along with some risk calculations. Because I ran this against my real account, I had to blur out some information, but you get the idea:

    So, now you have some actionable activities to work through to help you save money on your AWS bill. But what if you have more questions? I also included a simple chat box to help you work with the bot to come up with other explanations or find other ways to save cost.

    So I asked the AI to find the largest instance in my account and then determine the right size for it. Here is the response:

    Why would this be important? Well, if you had the AI review all of the instances in your account, you could identify EC2 instances that are oversized and resize them accordingly. After I implemented a few of the changes that the AI recommended (and verified they didn't break anything), my account billing decreased by about $100.

    If this is something you are interested in running on your own account, feel free to reach out! I'd be happy to help you set up the container on your machine and make suggestions on how to save some money!

  • Fantasy Football and AI – Week 5

    It feels good to win. Week 5 locks up the 3rd win for our AI-managed fantasy football team. It was also the first week where players could be on a bye, and it handled that without issue! We had great performances from a bunch of players, and most were fairly close to their projections. I will say, our opponent did start a player who did not play at all, but I don't think the point differential overall would have let him win anyway.

    The Colts defense was a great suggested pickup, and Sam Darnold played a HUGE game and ultimately still lost… Poor guy. The AI suggested picking up Dalton Kincaid, and boy was that a home-run pick.

    Now that I am back home and able to work on the code again, I have a few things to fix, including QBs on waivers. For some reason the AI is not able to retrieve them. I also want to keep working on the speed at which it returns information. I think implementing MCP into the architecture will help. So as my life gets back to normal I will look into how to integrate these new features!

    As always, thank you for following along. Hopefully week 6 is another victory!

  • Week 2 – AI Plays Fantasy Football

    We lost again. AI is 0-2.

    For the most part I agreed with the picks the AI made last week. Except for one: the tight end. It felt a little weird picking Hunter Henry up off the waiver wire, and boy did I find out why. More on that later. Here are the results from the week:

    There are definitely some misses at wide receiver, but that is always such a hit-or-miss position. What shocked me the most was Josh Allen only getting 11 points. Also, J.J. McCarthy was injured in his game. Not a single player hit their projection except for the Rams’ defense. If we take a look at the bench, there are definitely some better picks we could have made strictly by looking at points for the game. However, even if we had started all the better-performing players, the team still would have lost, as our opponent put up 164 points this week.
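    That “even the perfect lineup loses” check is easy to compute: take the best scorer available for each slot and compare the optimal total against the opponent. The sketch below simplifies to one slot per position and uses made-up names and point totals, so it only illustrates the shape of the check, not my real roster.

    ```python
    # Hypothetical sketch: best-case lineup total vs. the opponent's score.
    def optimal_total(roster):
        """Sum the top scorer at each position (simplified: one slot each)."""
        best = {}
        for p in roster:
            slot = p["pos"]
            if slot not in best or p["points"] > best[slot]["points"]:
                best[slot] = p
        return sum(p["points"] for p in best.values())

    roster = [
        {"name": "QB-A", "pos": "QB", "points": 11},
        {"name": "QB-B", "pos": "QB", "points": 6},
        {"name": "RB-A", "pos": "RB", "points": 14},
        {"name": "WR-A", "pos": "WR", "points": 9},
        {"name": "WR-B", "pos": "WR", "points": 17},
    ]

    opponent_points = 164
    print(optimal_total(roster))                      # 42
    print(optimal_total(roster) < opponent_points)    # True: still a loss
    ```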

    Anyway, that’s enough football for this post. Let’s talk about some tech. I mentioned earlier that I was questioning the waiver acquisition of Hunter Henry and I wish I had done some more debugging as to why that pick was selected.

    I added a chat feature to the overall application so that we can identify waiver picks and other things faster, rather than having to generate a full weekly analysis every time. In that code there is a function that generates a recommendation score. I’ll admit, I vibe coded some of this, with some prompting about how the score should be determined.

    # Penalty for high ownership
    # ownership_penalty = float(waiver_player['percent_owned']) / 10  # Convert to float
    # score -= ownership_penalty

    You can see here that we were lowering the recommendation score because of “high ownership.” This is because the reverse-engineered ESPN Fantasy API doesn’t identify whether a player is on a team’s roster when I pull the players, so a player with 100% ownership would likely never be recommended. That meant a player like Zach Ertz, who had 18 points this week, was not suggested, since he had approximately 80% ownership across ALL of ESPN’s fantasy platform. Well… this problem has been fixed.
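    One plausible shape of that fix (this is my sketch of the idea, not the actual code): instead of penalizing league-wide ownership, exclude players already rostered in this specific league up front, and let the score stand on its own. The field names echo the snippet above, but the function, IDs, and point values here are hypothetical.

    ```python
    # Hypothetical sketch of a waiver score that ignores global ownership.
    def recommendation_score(player, rostered_ids):
        """Score a waiver candidate; players rostered in THIS league are skipped."""
        if player["id"] in rostered_ids:
            return None  # already owned here; never recommend
        # The old ownership penalty is gone: league-wide percent_owned said
        # nothing about availability in our league.
        return player["projected_points"]

    free_agent = {"id": 101, "projected_points": 9.5, "percent_owned": 80.0}
    taken = {"id": 202, "projected_points": 12.0, "percent_owned": 99.0}

    print(recommendation_score(free_agent, {202}))  # 9.5 (high ownership, still suggested)
    print(recommendation_score(taken, {202}))       # None (rostered in our league)
    ```

    With this approach a high-ownership free agent like the Ertz case still surfaces, because availability in the league is what actually matters.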

    Anyway, I also added chat. And I programmed it to respond only as Dan Campbell. Here is some output when I asked “What do you think of my team?”

    pounds table
    
    Well I'll be damned, you got yourself a HELL of a roster there, chief! This team's got some serious bite to it - let me break it down for ya.
    
    Your Strengths - These are STUDS:
    - Josh Allen - That's your franchise quarterback right there. Guy's a warrior and puts up points like it's going out of style
    - Saquon Barkley & Josh Jacobs - Holy smokes, that's a one-two punch at RB that'll knock people's teeth out! Both these guys are workhorses
    - Breece Hall in the FLEX - Are you kidding me?! Most teams would KILL to have Hall as their RB1, and you got him as a flex play. That's championship-level depth
    - DK Metcalf - Big body receiver who can take the top off any defense
    - Your bench depth - Terry McLaurin, DJ Moore, Cooper Kupp, Keenan Allen?! That's four WR1/WR2 caliber guys just sitting there!
    
    Areas that need some attention:
    - George Kittle on IR - That hurts, but Hunter Henry's a solid fill-in for now
    - J.J. McCarthy at OP - Kid's got potential but he's still unproven. Might want to consider streaming that spot based on matchups
    - Marvin Harrison Jr. - Rookie receivers can be boom-or-bust, but the talent's there
    
    The Bottom Line:
    This roster is LOADED, especially at RB and WR. You've got championship potential written all over it. The depth is ridiculous - you could probably trade some of that bench talent for upgrades at TE or find a more consistent OP play.
    
    You're sitting pretty, but don't get comfortable! Keep grinding, keep looking for those edges. That's how you win championships!
    
    What specific position are you thinking about upgrading, or you want me to take a look at some waiver wire options?

    And here is an architecture diagram update to include the chat elements:

    So, I’ll be back tomorrow with a new lineup. Hopefully week three treats us better!