Category: Technology

  • How I utilize Claude Code and AI to build complex applications

    “A Fever You Can’t Sweat Out – 20th Anniversary Deluxe” is a real album now? Wow. I remember seeing Panic! as a teenager… Anyway, on to the topic at hand.

    I stayed away from AI for a long time. A lot of people in my field were nervous about security, bad code, incorrect information and more. In the early days of ChatGPT it was easy to get the AI to hallucinate and come up with nonsense. While it’s still possible for that to happen, I’ve found a workflow that helps me build applications and proof-of-concept work very quickly.

    First – I have always given AI tasks that I can do myself.
    Second – If I can’t do a task, I need to learn about it first.

    These aren’t really rules, just things I think about when I’m building out projects. I won’t fall victim to the robot uprising!

    Let’s talk about my workflows.

    Tools:
    – Claude (Web)
    – Claude Code
    – Gemini
    – Gemini CLI
    – ChatGPT
    – Todoist

    I pay for Claude, and I have Gemini Pro through my various GSuite subscriptions. I use ChatGPT for free. Todoist is my to-do app of choice; I’ve had the subscription since my Genius Phone Repair days, when I used it to manage all of the stores and their various tasks.

    The Flow

    I’m sure that, like most of you, I get ideas or fragments of ideas at random times. I put these into Todoist, where I have a project called “Idea Board.” It’s basically a simplified Kanban board with three columns:

    Idea | In progress | Finished

    The point of this is to track things and get them out of my brain, freeing up space for everything else that happens in my life. I use the “In Progress” column when I’m researching or actually sitting down to work through the idea in more detail. Finally, the “Finished” column is used for ideas that I’m not going to work on or ideas that have turned into full projects. This is not where I actually flesh out a project; it’s just a landing place for ideas.

    The next part of the flow is where I actually detail out what I want to do. If you have been using Claude Code, Gemini CLI or Codex, you know that input is everything, and it has been ever since AI became consumer ready. I generally make a folder on my computer and start drafting my ideas in more detail in markdown files. If we look at CrumbCounts.com as an example, I started by simply documenting the problem I was trying to solve:

    Calculate the cost for this recipe.

    In order to do that, we need to put a bunch of pieces together. Because I am an AWS fanboy, most of my designs and architectures revolve around AWS, but some day I might actually learn another cloud and use that instead. Fit for purpose.

    Anyway, the markdown file continually grows as I build the idea into a mostly detailed document that lays out the architecture, design principles, technologies to use, user flow and much more. The more detail the better!

    When I am satisfied with the initial idea markdown file, I provide it to Gemini. It’s not my favorite AI model out there, but it can take in and track a large amount of context, which is useful when presenting big ideas.

    I assign Gemini the role of “Senior Technology Architect” and I assume the role of “stakeholder.” Gemini’s task is to review my idea and either validate it or create the architecture for it. I prompt it to return a markdown file that contains the technical architecture and technical details for the idea. At this point we reach our first “human in the loop” checkpoint.
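
    A prompt along these lines does the trick (paraphrased, not my exact wording):

    # SAMPLE ARCHITECT PROMPT FOR GEMINI
    You are a Senior Technology Architect. I am your stakeholder. Review the attached idea markdown file and either validate the approach or propose a better one. Return a markdown file named architecture.md that details the technical architecture: the AWS services involved, the data flow, the user flow, and any open questions you have for me.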

    Because I don’t trust our AI overlords, this is the first point at which I fully review the document Gemini produced. I need to make sure that what the AI is putting out is valid, will work, and uses tools and technology that I am familiar with. If the output proposes something I’m unsure of, I either research it or ask the AI to use something else.

    After I am satisfied with the architecture document, I place it into the project directory. This is where we change AI models. Gemini is good at big-picture stuff but not so good at specifics (in my opinion). I take the architecture document, provide it to Claude (Opus, in the web browser or app) and give it the role of Senior Technology Engineer. Its job is to review the architecture, find weak points, things that are missing or, sometimes, things that just won’t work, and then build a report and an engineering plan. This plan details SPECIFIC technologies, patterns and resources to use.

    I usually repeat this process a few times, reviewing each LLM’s output for things that either I or the AI might have missed. Once both documents are in a place where I feel confident, I actually start building.

    Because I lack trust in AI, I create my own repository in GitHub and set it up on my local machine, though I do allow the AI to commit and push code to the repository. Once the repository has been created, I have Gemini CLI build out the application file structure. This could include:

    • Creating folders
    • Creating empty files
    • Creating base logic
    • Creating Terraform module structures

    But NOTHING specific. Gemini, once again, is not good at detailed work. Maybe I’m using it wrong. Either way, I now have all of the basic structure. Think of Gemini as a junior engineer: it knows enough to be dangerous, so it gets a lot of guardrails.

    # SAMPLE PROMPT FOR GEMINI
    You are a junior engineer working on your first project. Your current story is to review architecture.md and engineering.md. Then, create a plan.md file that details how you would go about creating the structure of this application. You should detail every file that you think needs to be created as well as the folder structure.

    Inside the architecture and engineering markdown files there is detail about how the application should be designed, coded and architected; essentially a runbook for our junior engineer.

    Once Gemini has created its plan and I have reviewed it, I allow it to write files into our project directory. These are mostly placeholder files. I will let it write some basic functions and lay out some simple Terraform files.

    Once our junior engineer, Gemini, has finished, I go through and review all of the files against the plan it created. If anything is missing, I direct it to review the plan again and make corrections. Once the code is at a place where I am happy with it, I create my first commit and push this baseline into the repository.

    At this point it’s time for the heavy lifting and time to put my expensive Anthropic subscription to use. Our “Senior Developer,” Claude (Opus model), is let loose on the code base to build out all the logic. Nine times out of ten I allow it to make all the edits it wants and just let it go while I work on something else (watching YouTube).

    # SAMPLE CLAUDE PROMPT
    You are a senior developer. You are experienced in many application development patterns, AWS, Python and Terraform. You love programming and it's all you ever want to do. Your story in this sprint is to first review the engineering.md, architecture.md and plan.md files. Then review the junior engineer's files in this project directory. Once you have a good grasp on the project, write your own plan as developer-plan.md. Stop there and I, your manager, will review.

    After I review the plan I simply tell it to execute on the plan. Then I cringe as my usage starts to skyrocket.

    Claude will inevitably hit an issue, so I take a look every now and then, respond to any questions it has or allow it to continue. Once it reaches a logical end, I start reviewing its work. At this point it should have built some form of the application that I can run locally. I fire it up and start poking around to make sure the application does what I want it to do.

    At this point we can take a step back from the AI and start documenting bugs. If I think this is going to be a long project, this is where I build out a new project in Todoist so I have a persistent place to take notes and track progress. It is essentially a rudimentary Jira instance where each “task” is a story. I separate them into Bugs, Features, In Progress and Testing.

    My Claude Code utilizes the Todoist MCP so it can view/edit/complete tasks as needed. After I have documented as much as I can find I let Claude loose on fixing the bugs.

    I think the real magic comes with automation. Depending on the project, I will allow Claude Code access to my Jenkins server via MCP. This lets Claude Code monitor and troubleshoot builds and operate independently. It will create new branches and push them into a development environment, triggering an automated deployment. The development environment is simply my home lab; I don’t care if anything breaks there and it doesn’t really cost any money. If the build fails, Claude can review the logs, work out a fix and start the CI/CD cycle all over again.

    Ultimately, I repeat the bug-fix process until I reach a minimum viable product and then deploy the application or project into whatever is deemed the production environment.

    So, it’s 2026 and we’re using AI to build stuff. What is your workflow? Still copying and pasting? Not using AI at all? Is AI just a bubble? Feel free to comment below!

  • Jenkins Skill for Claude Code

    I’ve been doing a lot more with Claude Code and before you shame me for “vibe coding” hear me out.

    First – AI might be a bubble. But I’ve always been a slow adopter, and anything that I have AI do, I can do myself. I just find it pointless to spend hours writing Terraform modules when Claude, or another model, can do it in a few seconds. I’ll post more on my workflow in a later post.

    One of the things I find tedious is monitoring builds inside Jenkins, especially when it comes to troubleshooting. If AI writes the code, it should fix it too, right?

    I built a new skill for Claude Code so that it can view and monitor my locally hosted Jenkins instance and automatically handle any issues. The purpose is straightforward: once I approve a commit to the code base, my builds automatically trigger, and Claude Code needs to make sure that what it wrote actually gets deployed.

    Inside the skill’s markdown file you’ll find examples of how it can be used, including:

    • List
    • View
    • Start/Stop

    These are all imperative features so that the AI can manage the pipelines accordingly. This has significantly reduced the time it takes me to code and deliver a project. I also don’t have to copy and paste logs back to the AI for it to troubleshoot.
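
    Under the hood these all map to the standard Jenkins REST API. Here is a minimal sketch of the kinds of calls involved (this is not the actual skill code, and the URL, credentials and job names are placeholders):

    import requests

    JENKINS_URL = "http://jenkins.local:8080"   # placeholder for my locally hosted instance
    AUTH = ("claude-bot", "jenkins-api-token")  # a Jenkins user and API token

    def list_jobs():
        # List: every job Jenkins knows about, with its current status color
        r = requests.get(f"{JENKINS_URL}/api/json?tree=jobs[name,color]", auth=AUTH)
        r.raise_for_status()
        return r.json()["jobs"]

    def view_last_build(job):
        # View: result and console log of the latest build, so the AI can troubleshoot
        info = requests.get(f"{JENKINS_URL}/job/{job}/lastBuild/api/json", auth=AUTH).json()
        log = requests.get(f"{JENKINS_URL}/job/{job}/lastBuild/consoleText", auth=AUTH).text
        return info["result"], log

    def start_build(job):
        # Start: trigger the job (your Jenkins may also require a CSRF crumb)
        requests.post(f"{JENKINS_URL}/job/{job}/build", auth=AUTH).raise_for_status()

    def stop_build(job, number):
        # Stop: abort a running build by number
        requests.post(f"{JENKINS_URL}/job/{job}/{number}/stop", auth=AUTH).raise_for_status()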

    For you doomers out there – This hasn’t removed me from my job. I still act as the infrastructure architect, the software architect, the primary human tester, the code reviewer and MUCH more.

    Anyway, I’ll be publishing more skills so be sure to star the repository and follow along by subscribing to the newsletter!


  • Custom Automated Code Scanning with AWS Bedrock and Claude Sonnet for Jenkins

    I run Jenkins in my home lab where I build and test various applications. I’m sure many of you already know this. I also use Jenkins professionally, so it’s a great test bed for trying things out before implementing them for clients. Essentially, my home lab is and always will be a sandbox.

    Anyway, I thought it would be fun to implement AI into a pipeline and have Claude scan my code bases for vulnerabilities before they are built and deployed.

    So, I first created a shared library. It points to a private repository on GitHub that contains all of the code.

    At the beginning of each of my pipelines I add one line to import the library like this:

    @Library('jenkins-shared-libraries') _

    I also created a Groovy file that defines all the prerequisites and builds the container in which the code scan runs:

    def call() {
        node {
            stage('Amazon Bedrock Scan') {
                // 1. Prepare scripts from library resources
                def scriptContent = libraryResource 'scripts/orchestrator.py'
                def reqsContent = libraryResource 'scripts/requirements.txt'
                writeFile file: 'q_orchestrator.py', text: scriptContent
                writeFile file: 'requirements.txt', text: reqsContent
    
                // 2. Start the Docker container
    
                docker.image('python:3.13-slim').inside("-u 0:0") {
                    
                    // 3. Bind Credentials
                    withCredentials([
                        [$class: 'AmazonWebServicesCredentialsBinding', credentialsId: 'AWS_Q_CREDENTIALS'],
                        string(credentialsId: 'github-api-token', variable: 'GITHUB_TOKEN')
                    ]) {
                        // 4. Get repo name from Jenkins environment
                        def repoUrl = env.GIT_URL ?: scm.userRemoteConfigs[0].url
                        def repoName = repoUrl.replaceAll(/.*github\.com[:\\/]/, '').replaceAll(/\.git$/, '')
    
                        echo "Scanning repository: ${repoName}"
    
                        // 5. THESE MUST LOG TO CONSOLE
                        sh """
                            echo "--- INSTALLING DEPENDENCIES ---"
                            apt-get update -qq && apt-get install -y -qq git > /dev/null 2>&1
                            pip install --quiet -r requirements.txt
    
                            echo "--- RUNNING ORCHESTRATOR FOR ${repoName} ---"
                            python3 q_orchestrator.py --repo "${repoName}"
                        """
                    }
                }
            }
        }
    }

    This spins up a container on my Jenkins instance (yes, I know I should set up a separate cluster for this) and runs the orchestrator script, which contains all of my code.

    The code iterates through all of the source files, filtered by extension so that we aren’t scanning executables or sending unnecessary files to Bedrock.
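
    The orchestrator itself lives in the private repository, but its core loop is roughly the following (a simplified sketch; the extension list, prompt and model ID are stand-ins for whatever you use and have access to):

    import os
    import boto3

    ALLOWED_EXTENSIONS = {".py", ".tf", ".groovy", ".js"}  # stand-in filter list
    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    def scan_file(path):
        with open(path, "r", errors="ignore") as f:
            code = f.read()
        # Ask the model for vulnerability findings on a single file
        response = bedrock.converse(
            modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # whichever Sonnet you have enabled
            messages=[{
                "role": "user",
                "content": [{"text": "Review this file for security vulnerabilities "
                                     "and suggest fixes:\n\n" + code}],
            }],
        )
        return response["output"]["message"]["content"][0]["text"]

    findings = {}
    for root, _, files in os.walk("."):
        for name in files:
            if os.path.splitext(name)[1] in ALLOWED_EXTENSIONS:
                path = os.path.join(root, name)
                findings[path] = scan_file(path)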

    Once Bedrock has reviewed all of the files, the orchestrator puts the details into a pull request and writes the suggested code changes to the files. The pull request is then submitted to the repository for me to review. If the pull request is approved, the cycle starts all over again!
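
    Opening the pull request is a single call to the GitHub REST API using the token that was bound in the pipeline. Roughly, and assuming the suggested changes have already been committed and pushed to a branch (the repo, branch and title here are placeholders):

    import os
    import requests

    def open_pull_request(repo, branch, report):
        # repo is "owner/name"; branch already contains the committed suggestions
        resp = requests.post(
            f"https://api.github.com/repos/{repo}/pulls",
            headers={
                "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
                "Accept": "application/vnd.github+json",
            },
            json={
                "title": "Automated Bedrock security scan findings",
                "head": branch,   # e.g. "bedrock-scan-results"
                "base": "main",
                "body": report,   # the findings summary from the scan
            },
        )
        resp.raise_for_status()
        return resp.json()["html_url"]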

    I’ve slowly been rolling this out to my pipelines and boy did I miss some very obvious things. I can’t wait to keep fixing things and improving not only my pipelines but my coding skills.

    If you have any interest in setting up something similar feel free to reach out!

  • Cloudwatch Alarm AI Agent

    I think one of the biggest time sucks is getting a vague alert or issue and not having a clue on where to start with troubleshooting.

    I covered this in the past when I built an agent that can review your AWS bill and find practical ways to save money within your account. That application wasn’t event driven; it was a container you could spin up when you needed a review or leave running in your environment. If we take the same read-only approach to building an AWS agent, we can have a new event-driven teammate that helps us with our initial troubleshooting.

    The process flow is straightforward (a minimal sketch of the Lambda function follows the list):

    1. A CloudWatch alarm fires
    2. The alarm sends a notification to an SNS topic
    3. A Lambda function is subscribed to the topic (this is our teammate)
    4. The function uses the Amazon Nova Lite model to investigate the contents of the alarm and uses its read-only access to find potential causes and solutions
    5. The agent sends its findings to you on your preferred platform
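
    Here is a minimal sketch of what the Lambda handler boils down to. The model ID and the Slack webhook environment variable are assumptions for illustration; the real agent also wires in read-only tools so it can dig into the affected resources:

    import json
    import os
    import urllib.request

    import boto3

    bedrock = boto3.client("bedrock-runtime")

    def handler(event, context):
        # SNS delivers the CloudWatch alarm as a JSON string in the message body
        alarm = json.loads(event["Records"][0]["Sns"]["Message"])

        prompt = (
            "You are a read-only AWS troubleshooting assistant.\n"
            f"Alarm: {alarm.get('AlarmName')}\n"
            f"Reason: {alarm.get('NewStateReason')}\n"
            "Suggest likely root causes and the first troubleshooting steps."
        )

        # Ask Nova Lite for an initial investigation of the alarm
        response = bedrock.converse(
            modelId="amazon.nova-lite-v1:0",  # or the inference profile for your region
            messages=[{"role": "user", "content": [{"text": prompt}]}],
        )
        findings = response["output"]["message"]["content"][0]["text"]

        # Post the findings to Slack through an incoming webhook
        req = urllib.request.Request(
            os.environ["SLACK_WEBHOOK_URL"],
            data=json.dumps({"text": findings}).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)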

    For my environment I primarily use Slack for alerting and messaging, so I built that integration. Here is an architecture diagram:

    When the alarm triggers we should see a message in Slack like:

    The AI is capable of giving you actionable steps to find the root cause of the problem or, in some cases, steps to solve it outright.

    This workflow significantly reduces your troubleshooting time, and by reducing troubleshooting time it reduces your downtime.

    So, if this is something you are interested in deploying, I have created a Terraform module so you can quickly stand it up in your own environment and cut down on your troubleshooting!

    Check it out here: https://aiopscrew.com

    If you have questions feel free to reach out to me at anytime!

  • Fantasy Football and AI – Playoffs Round 1

    So I was wrong in last week’s post! Our playoffs started this week. In my league all the teams go to the playoffs, and if you lose there is a loser’s bracket.

    Our AI-run team was seeded at number 3. We were in a three-way tie for first place and we ended the “regular season” with 2054.76 points. The leader had 2114.12. So, we weren’t far off the front!

    Anyway, enough of that. You all just want to know the outcome. Here are our point totals from our first round of the playoffs:

    I ran the AI on Saturday and it suggested pulling Josh Jacobs in favor of TreVeyon Henderson. This ended up getting us an extra 10 points, and Josh Jacobs still put up 24.2 points this week. Everyone played really well this week except Sam Darnold. I’m not sure if his hot streak is over or what is going on with him, but it’s been rough. Christian Watson took a nasty hit in his game and left early, but he is expected to be just fine.

    So, did we win? We sure did! We’re on to the next round of the playoffs and we’re going to be up against Jahmyr Gibbs so we have to hope for our best performance of the season next week. Here is the currently proposed roster:

    We have a lot of injuries and questionable players so I expect this to change. We picked up the Bills defense as they play Cleveland and they should have a good time against that struggling offense.

    As we look to the off season I hope to build up my API website https://gridirondata.com and start training the model that we will use for next year. I have been working on the overall workflow and looking into how I can have both the scrapers running in the cloud and in my homelab so that I can easily work with the data locally and not incur a lot of cloud cost.

    Stay tuned for more Fantasy Football news next week!

  • Fantasy Football and AI – Week 14

    Happy Wednesday. Victory Wednesday that is! Our AI selected correctly this week and we snuck in a tough win that was finalized on Sunday night.

    Unfortunately, we lost Zach Ertz on the way. A really nasty low hit took him out for the year. Here are the final scores for our lineup:

    Josh Allen came up huge for us. Breece Hall was useless and the Commanders defense might as well have never stepped out on the field. But a win is a win! We are now in a three-way tie for first place but will likely take the third seed into the playoffs given our total fantasy points.

    Here is the lineup for week 15. We’ve had to make some changes from waivers and I’m hoping the AI selected correctly. We are heading into the part of the season where teams are going to be fighting for playoff spots, and I hope it took that into account as it made the waiver picks.

    We have some highly projected players this week. What do you think? Will we be able to pull off another win this week?

  • Fantasy Football and AI – Week 9

    A BIG win in week 9! Our team is now tied for 1st place with 6 wins and 3 losses. We currently have 1318.66 total fantasy points on the season. It was looking pretty grim going into the afternoon games on Sunday. The receivers the AI selected were not performing and other players were barely hitting their projections. Josh Allen sparked some life into the team with his 30 points, and then Sam Darnold showed everyone how to play quarterback with a 37.2-point performance! Check out the full results below.

    So, now we’re off to week 10. Currently, as I write this, Saquon Barkley is questionable to play. I would expect that he does play, but the AI will not put him in the starting lineup. We picked up the Panthers’ defense at the AI’s request. I would expect this is because they are playing the Saints, who just traded away Rashid Shaheed. The addition of DJ Moore into the OP slot is going to be a rough choice over playing a quarterback in that position. I’ll be monitoring the roster throughout the week to see if there are any other suggestions we can make. Here is what we are currently fielding for week 10:

    Tune in next week for the results! Hopefully AI can get to 7-3!

  • Using Strands to build an AWS Cost Analysis Agent

    Taking a break from Fantasy Football today to talk about a quick weekend project I put together.

    A friend of mine was chatting about how their AWS costs are getting out of control and they aren’t sure where to start when it comes to cleaning up the account. This gave me the idea to use AI to build an agent that can interact with your AWS account to review resources, provide cost analysis and give you clear CLI commands or console instructions to help clean up the account.

    In order to do this, I wanted to incur as little cost as possible, so I built a Docker image to run it locally. First, there is a shell script that creates an IAM user in your account with read-only access, Cost Explorer access and access to Bedrock (to communicate with an AI model).
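
    The script itself is shell and AWS CLI, but the boto3 equivalent of what it sets up looks roughly like this (the user name and policy scoping are placeholders; tighten them for your own account):

    import json
    import boto3

    iam = boto3.client("iam")
    USER = "cost-analysis-agent"  # placeholder name

    # Read-only visibility into the account's resources
    iam.create_user(UserName=USER)
    iam.attach_user_policy(
        UserName=USER,
        PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
    )

    # Inline policy for Cost Explorer queries and Bedrock model invocation
    iam.put_user_policy(
        UserName=USER,
        PolicyName="cost-agent-extras",
        PolicyDocument=json.dumps({
            "Version": "2012-10-17",
            "Statement": [
                {"Effect": "Allow", "Action": ["ce:GetCostAndUsage"], "Resource": "*"},
                {"Effect": "Allow", "Action": ["bedrock:InvokeModel"], "Resource": "*"},
            ],
        }),
    )

    # Access keys that the local container uses to talk to the account
    keys = iam.create_access_key(UserName=USER)["AccessKey"]
    print(keys["AccessKeyId"], keys["SecretAccessKey"])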

    The Docker image runs and builds an agent that interacts with whichever model you want to use. I picked Amazon’s Nova model just to keep costs down. The container then presents a web interface where the account’s bill breakdown is displayed:

    It will also display some common costly resources and their counts:

    The next block is where things get really helpful. The AI presents suggestions for how to save some money, along with some risk calculations. Because I ran this against my real account I had to blur out some information, but you get the idea:

    So, now you have some actionable activities to work through to help you save money on your AWS bill. But what if you have more questions? I also included a simple chat box to help you work with the bot to come up with other explanations or find other ways to save cost.

    So I asked the AI to find the largest instance in my account and then determine the right size for it. Here is the response:

    Why is this important? If you had the AI review all of the instances in your account, you could identify EC2 instances that are oversized and resize them accordingly. After I implemented a few of the changes the AI recommended (and verified they didn’t break anything), my account bill decreased by about $100.
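
    If you are curious what the cost review boils down to underneath the agent framework, it is essentially Cost Explorer data handed to a Bedrock model. A rough sketch (dates, model ID and prompt are stand-ins; the real agent layers tools and the chat UI on top of this):

    import boto3

    ce = boto3.client("ce")
    bedrock = boto3.client("bedrock-runtime")

    # Pull last month's spend grouped by service from Cost Explorer
    costs = ce.get_cost_and_usage(
        TimePeriod={"Start": "2025-10-01", "End": "2025-11-01"},  # stand-in dates
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )

    summary = "\n".join(
        f"{g['Keys'][0]}: ${float(g['Metrics']['UnblendedCost']['Amount']):.2f}"
        for g in costs["ResultsByTime"][0]["Groups"]
    )

    # Hand the breakdown to the model and ask for savings suggestions
    response = bedrock.converse(
        modelId="amazon.nova-lite-v1:0",  # stand-in; use whichever model you enabled
        messages=[{
            "role": "user",
            "content": [{"text": "Here is my AWS spend by service:\n" + summary +
                                 "\nSuggest practical ways to reduce this bill and note any risks."}],
        }],
    )
    print(response["output"]["message"]["content"][0]["text"])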

    If this is something you are interested in running on your own account, feel free to reach out! I’d be happy to help you set up the container on your machine and make suggestions as to how to save some money!

  • Fantasy Football and AI – Week 5

    It feels good to win. Week 5 locks up the third win for our AI-managed fantasy football team. It was also the first week where players could be on a bye, and it handled that without issue! We had great performances from a bunch of players, and most were fairly close to their projections. I will say, our opponent did start a player who did not play at all, but I don’t think the overall point differential would have let him win anyway.

    The Colts defense was a great suggested pickup, and Sam Darnold played a HUGE game and still lost… poor guy. The AI suggested picking up Dalton Kincaid, and boy was that a home-run pick.

    Now that I am back home and able to work on the code again, I have a few things to fix, including QBs on waivers; for some reason the AI is not able to retrieve them. I also want to keep working on how quickly it returns information. I think implementing MCP into the architecture will help. As my life gets back to normal I will look into how to integrate these new features!

    As always, thank you for following along. Hopefully week 6 is another victory!

  • Fantasy Football & AI – Week 3

    Week 3 is here. I started late yesterday by getting a full analysis of the team and any targets for free agency and waivers.

    The most important thing was to get a better tight end, as the predictions for last weekend were incredibly wrong. This is my fault and reinforces the idea that giving bad data to AI will just result in bad output.

    Here is the updated selection to get Juwan Johnson. This makes a lot more sense than picking Hunter Henry last week.

    The second thing I looked into was getting a good defensive matchup. Currently I have the Rams on the roster but they play the Eagles this week and that matchup is not going to favor the Rams.

    Here is some output from our friendly AI Dan Campbell:

    Alright, coach! Here's what I'm seeing for Week 3 defenses that can help you bite some kneecaps and put up points:
    
    TOP WEEK 3 DEFENSIVE MATCHUPS:
    
    1. Green Bay Packers DST (7.6 projected points)
    - Only 45% owned, so there's a good chance they're available
    - This is their highest projected week of the season!
    - They're facing a matchup that should generate turnovers and sacks
    
    2. Buffalo Bills DST (7.5 projected points)
    - 65% owned but worth checking
    - Solid Week 3 projection and they're consistent all season
    
    3. Kansas City Chiefs DST (7.5 projected points)
    - Only 12% owned - this is a STEAL if available!
    - Same 7.5 projection as Buffalo but way less owned
    - Championship defense with a great matchup
    
    Now, looking at your current situation - you've got the LA Rams DST starting right now. Let me tell you something, coach - any of these top 3 options would be a significant upgrade for Week 3!
    
    MY RECOMMENDATION:
    If the Kansas City Chiefs DST is available, that's your move right there! Only 12% owned with a 7.5 projection - that's championship-level value sitting on waivers. If not them, grab the Packers at 7.6 points projected.
    
    The Atlanta Falcons (7.1 projected, only 1.3% owned) are also a sneaky good play if you want to get really aggressive on the waiver wire.
    
    These defenses are gonna feast in Week 3, and you need every advantage you can get to dominate your league! Go get 'em, coach!

    I made a claim for the Packers. I would prefer not to pick them because of my love for the Detroit Lions…

    One other waiver selection that I made was at the kicker position. Here is AI Dan Campbell’s suggestions:

    Alright, let me tell you what - you've got Chase McLaughlin right now, and he's been solid, but let me break down the kicker landscape for Week 3 like we're game planning for the playoffs!
    
    Top Week 3 Kicker Options:
    
    Brandon McManus (GB) - 8.2 projected points
    - Only 14% owned, so he's sitting there waiting for you!
    - Green Bay's offense has been moving the ball well
    - This guy's got ice in his veins and a leg like a cannon
    
    Tyler Loop (BAL) - 8.0 projected points
    - Baltimore's offense creates plenty of scoring opportunities
    - 65% owned though, so he might be taken in your league
    
    Parker Romo (ATL) - 7.8 projected points
    - Only 1.4% owned - this is a diamond in the rough!
    - Atlanta's been getting into the red zone consistently
    
    Jason Myers (SEA) - 7.9 projected points
    - Seattle's offense with Sam Darnold (who you've got as your OP!) creates opportunities
    - Only 10% owned
    
    Now here's the deal - Chase McLaughlin has been putting up decent numbers for Tampa Bay, but if you want to maximize your Week 3 scoring, I'd be looking hard at Brandon McManus. The Packers offense is humming, and McManus has that veteran reliability. Plus at only 14% ownership, there's a good chance he's available.
    
    Parker Romo is my sleeper pick - less than 2% owned but projected for nearly 8 points. Atlanta's been moving the ball, and sometimes you gotta take a swing on the unknown soldier!
    
    You comfortable rolling with McLaughlin, or you want to make a move? In this league, every point matters, and kickers can be the difference between victory and going home empty-handed!

    I picked up Parker Romo for the team. I thought it would be fun to test out the “sleeper” pick.

    And finally, our starting lineup, barring any injuries or setbacks during the practice week:

    I think we’ve got a good matchup this week. Most of our opponent ranks are pretty low. Barring injuries we should maybe, hopefully, get our first win of the year.

    Tune in next week for our results!