Category: Cloud Architecting

  • An AI Fantasy Football Draft Assistant

    Last year I attempted to program a Fantasy Football draft assistant which took live data from ESPN’s Fantasy Platform. Boy was that a mistake…

    First of all, shame on ESPN for not having an API for their Fantasy sports applications. The reverse-engineered methods were neither fast enough nor reliable. So, this year I took a new approach to building out a system for getting draft pick recommendations for my team.

    I also wanted to put the example architecture and code I wrote the other day for the Strands SDK to use, so I built an API with it that leverages the AWS Bedrock platform to analyze the data and ultimately return the best possible picks.

    Here is a simple workflow of how the tool works:

    I generated this diagram with Claude AI. It is pretty OK.

    The first problem I encountered was getting data. I needed two things:
    1. Historical data for players
    2. Projected fantasy data for the upcoming season

    The historical data provides information about each player’s past seasons, and the projections are for the upcoming season. The projections are especially useful for incoming rookies, who have no historical data.

    In the repository I link below I included scripts to scrape FantasyPros for both the historical and projected data. They store the results in separate files in case you want to use them in a different way. There is also a script to combine them into one data source and ultimately load it into a DynamoDB table.
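
    As a rough illustration of that last step, a minimal sketch of the DynamoDB load with boto3 might look like the following (the table name, file name, and record shape are assumptions for illustration; the repository’s script is the source of truth):

    import json
    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("fantasy-football-players")  # hypothetical table name

    def load_players(combined_file):
        # Batch-write the combined historical + projection records into DynamoDB
        with open(combined_file) as f:
            players = json.load(f)

        with table.batch_writer() as batch:
            for player in players:
                # Assumes each record already contains the table's key attributes
                batch.put_item(Item=player)

    if __name__ == "__main__":
        load_players("combined_players.json")  # hypothetical file name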

    The most important piece of the puzzle was actually simulating the draft. I needed to create a program that would be able to track the other teams’ draft picks as well as give me suggestions and track my team’s picks. This is the heart of the repository and I will be using it to get suggestions and track the draft for this coming season.

    When you issue the “next” command, the application sends a request to the API with the current state of the draft. The payload looks like this:

    payload = {
        "team_needs": team_needs,
        "your_roster": your_roster,
        "already_drafted": all_drafted_players,
        "scoring_format": self.session.scoring_format if self.session else "ppr",
        "league_size": self.session.league_size if self.session else 12
    }

    The “team_needs” key represents the number of players still needed at each position. The “your_roster” key is all of the current players on my team. The other important key is “already_drafted”; it sends all of the drafted players to the AI agent so it knows who NOT to recommend.

    The application walks through all of the picks, and you manually enter each of the other teams’ picks until the draft is complete.
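
    To give a rough idea of the round trip, a minimal sketch of how the “next” command might call the API is shown below (the endpoint URL and the response field name are placeholders, not the repository’s actual values):

    import requests

    API_URL = "https://<api-id>.execute-api.us-west-2.amazonaws.com/prod/recommend"  # placeholder

    def get_recommendation(payload):
        # POST the current draft state and return the agent's suggested pick
        response = requests.post(API_URL, json=payload, timeout=30)
        response.raise_for_status()
        # The response shape is assumed; the repository defines the real contract
        return response.json().get("recommendation", "")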

    I’ll post an update after my draft on August 24th with the team I end up with! I still will probably lose in my league but this was fun to build. I hope to add in some sort of week-to-week management of my team as well as a trade analysis tool in the future. It would also be cool to add in some sort of analysis that could send updates to my Slack or Discord.

    If you have other ideas message me on any platform you can find me on!

    GitHub: https://github.com/avansledright/fantasy-football-agent

  • Deploying a Strands Agent on AWS Lambda using Terraform

    Recently I’ve been exploring the AI space a lot more, as I’m sure many of you are as well. I’ve been looking at the Strands Agent SDK, and I see it being very helpful for building out agents in the future (follow the blog to see what I come up with!).

    One thing that is not included in the SDK is the ability to deploy with Terraform. The SDK includes examples of how to package and deploy with the AWS CDK, so I adapted that approach to use Terraform.

    I took my adaptation a step further and added an API Gateway layer so that you have the beginnings of a very simple AI agent deployed with the Strands SDK.

    Check out the code here: https://github.com/avansledright/terraform-strands-agent-api

    The code in the repository is fairly simple and includes everything you need to build an API Gateway, Lambda function, and some other useful resources just to help out.

    The key to all of this is packaging the required dependencies inside of the Lambda Layer. Without this the function will not work.

    File structure:
    terraform-strands-agent-api/
    ├── lambda_code/
    │   ├── lambda_function.py   # Your Strands agent logic
    │   └── requirements.txt     # strands-agents + dependencies
    ├── api_gateway.tf           # API Gateway configuration
    ├── iam.tf                   # IAM roles and policies
    ├── lambda.tf                # Lambda function setup
    ├── locals.tf                # Environment variables
    ├── logs.tf                  # CloudWatch logging
    ├── s3.tf                    # Deployment artifacts
    ├── variables.tf             # Configurable inputs
    └── outputs.tf               # API endpoints and resource IDs
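
    For context, lambda_function.py only needs to expose a standard Lambda handler. A minimal sketch might look like the following (the Strands Agent usage is an assumption based on the strands-agents package, and the request/response shapes are placeholders rather than the repository’s exact code):

    import json
    from strands import Agent  # provided by the strands-agents dependency packaged in the layer

    agent = Agent()  # assumes the default Bedrock model configuration

    def lambda_handler(event, context):
        # Entry point invoked by API Gateway; forwards the prompt to the agent
        body = json.loads(event.get("body") or "{}")
        prompt = body.get("prompt", "Hello!")
        result = agent(prompt)
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"response": str(result)}),
        }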

    You shouldn’t have to change much in any of these files until you want to fully start customizing the actual functionality of the agent.

    To get started follow the instructions below!

    git clone https://github.com/avansledright/terraform-strands-agent-api
    cd terraform-strands-agent-api
    
    # Configure your settings. Add other values as needed
    echo 'aws_region = "us-west-2"' > terraform.tfvars
    
    # Deploy everything
    terraform init
    terraform plan
    terraform apply

    If everything goes as planned, you should see a curl command in the output that lets you test the demo code.

    If you run into any issues feel free to let me know! I’d be happy to help you get this up and running.

    GitHub: https://github.com/avansledright/terraform-strands-agent-api

    If this has helped you in any way, please share it on your social media and with any of your friends!

  • Creating a List of API Gateway resources using Terraform

    For some reason, when you use Terraform with AWS there is no data source that returns a list of API Gateway resources; that data element simply does not exist. Below is a relatively quick solution that creates a comma-separated list of API Gateway names so that you can iterate through them.

    In order to execute this data element you need to have the AWS CLI set up within your preferred deployment method. Personally, I love GitHub Actions, so I needed to add another stage to my deployment to install the CLI.

    The way this works is to create a data element that you can trigger as needed to execute a simple shell script.

    data "external" "apis" {
       program = ["sh", "-c", "aws apigateway get-rest-apis    --query 'items[?starts_with(name,`${var.prefix}`)].name' --output json | jq -r '{\"names\": (. | join(\",\"))}'"]
    }

    We are also creating a variable called “prefix” so that you can filter as required by your project. Personally, I used this to create CloudWatch dashboards so I can easily monitor my resources.

    If this is helpful for you, please share it on your social media!

  • Building out a reusable Terraform framework for Flask Applications

    I find myself utilizing the same architecture for deploying demo applications on the great Python library Flask. I’ve been using the same Terraform files over and over again to build out the infrastructure.

    Last weekend I decided it was time to build a reusable framework for deploying these applications. So, I began building out the repository. The purpose of this repository is to give myself a jumping off point to quickly deploy applications for demonstrations or live environments.

    Let’s take a look at the features:

    • Customizable Environments within Terraform for managing the infrastructure across your development and production environments
    • Modules for:
      • Application Load Balancer
      • Elastic Container Registry
      • Elastic Container Service
      • VPC & Networking components
    • Dockerfile and Docker Compose file for launching and building the application
    • Demo code for the Flask application (a minimal sketch follows this list)
    • Automated build and deploy for the container upon code changes
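
    To illustrate the kind of demo code the framework is meant to host, here is a minimal Flask application with a health-check route that an Application Load Balancer target group could point at (the route names and port are assumptions, not the repository’s exact demo code):

    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/")
    def index():
        return jsonify({"message": "Hello from the Flask framework demo"})

    @app.route("/health")
    def health():
        # The ALB target group health check can be pointed at this route
        return jsonify({"status": "ok"}), 200

    if __name__ == "__main__":
        # Inside the container this would typically run behind gunicorn instead
        app.run(host="0.0.0.0", port=5000)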

    This module is built for any developer who wants to get started quickly and deploy applications fast. It speeds up development time by letting you focus solely on the application rather than the infrastructure.

    Upcoming features:

    • CI/CD features using either GitHub Actions or AWS services like CodePipeline and CodeBuild
    • Custom Domain Name support for your application

    If there are other features you would like to see me add shoot me a message anytime!

    Check out the repository here:
    https://github.com/avansledright/terraform-flask-module

  • Create an Image Labeling Application using Artificial Intelligence

    I have a PowerPoint party to go to soon. Yes you read that right. At this party everyone is required to present a short presentation about any topic they want. Last year I made a really cute presentation about a day in the life of my dog.

    This year I have decided that I want to bore everyone to death and talk about technology, Python, Terraform and Artificial Intelligence. Specifically, I built an application that allows a user to upload an image and have it return to them a renamed file that is labeled based on the object or scene in the image.

    The architecture is fairly simple. We have a user connecting to a load balancer, which routes traffic to our containers. The containers connect to Bedrock and S3 for image analysis and storage.

    If you want to try it out, the site is hosted at https://image-labeler.vansledright.com. It will be up for some time; I haven’t decided how long I will host it, but at least through this weekend!

    Here is the code that interacts with Bedrock and S3 to process the image:

    def process_image():
        if not request.is_json:
            return jsonify({'error': 'Content-Type must be application/json'}), 400
    
        data = request.json
        file_key = data.get('fileKey')
    
        if not file_key:
            return jsonify({'error': 'fileKey is required'}), 400
    
        try:
            # Get the image from S3
            response = s3.get_object(Bucket=app.config['S3_BUCKET_NAME'], Key=file_key)
            image_data = response['Body'].read()
    
            # Check if image is larger than 5MB
            if len(image_data) > 5 * 1024 * 1024:
                logger.info("File size to large. Compressing image")
                image_data = compress_image(image_data)
    
            # Convert image to base64
            base64_image = base64.b64encode(image_data).decode('utf-8')

            # Prepare prompt for Claude
            prompt = """Please analyze the image and identify the main object or subject. 
            Respond with just the object name in lowercase, hyphenated format. For example: 'coca-cola-can' or 'golden-retriever'."""
            
            # Call Bedrock with Claude
            response = bedrock.invoke_model(
                modelId='anthropic.claude-3-sonnet-20240229-v1:0',
                body=json.dumps({
                    "anthropic_version": "bedrock-2023-05-31",
                    "max_tokens": 100,
                    "messages": [
                        {
                            "role": "user",
                            "content": [
                                {
                                    "type": "text",
                                    "text": prompt
                                },
                                {
                                    "type": "image",
                                    "source": {
                                        "type": "base64",
                                        "media_type": response['ContentType'],
                                        "data": base64_image
                                    }
                                }
                            ]
                        }
                    ]
                })
            )
            
            response_body = json.loads(response['body'].read())
            object_name = response_body['content'][0]['text'].strip()
            logging.info(f"Object found is: {object_name}")
            
            if not object_name:
                return jsonify({'error': 'Could not identify object in image'}), 422
    
            # Get file extension and create new filename
            _, ext = os.path.splitext(unquote(file_key))
            new_file_name = f"{object_name}{ext}"
            new_file_key = f'processed/{new_file_name}'
            
            # Copy object to new location
            s3.copy_object(
                Bucket=app.config['S3_BUCKET_NAME'],
                CopySource={'Bucket': app.config['S3_BUCKET_NAME'], 'Key': file_key},
                Key=new_file_key
            )
            
            # Generate download URL
            download_url = s3.generate_presigned_url(
                'get_object',
                Params={
                    'Bucket': app.config['S3_BUCKET_NAME'],
                    'Key': new_file_key
                },
                ExpiresIn=3600
            )
            
            return jsonify({
                'downloadUrl': download_url,
                'newFileName': new_file_name
            })
            
        except json.JSONDecodeError as e:
            logger.error(f"Error decoding Bedrock response: {str(e)}")
            return jsonify({'error': 'Invalid response from AI service'}), 500
        except Exception as e:
            logger.error(f"Error processing image: {str(e)}")
            return jsonify({'error': 'Error processing image'}), 500
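
    The compress_image helper referenced above isn’t shown here. A minimal sketch using Pillow might look like the following (the real application’s resizing or quality strategy may differ, and note that re-encoding to JPEG would also mean the media_type sent to Bedrock should be image/jpeg):

    import io
    from PIL import Image

    def compress_image(image_data, max_bytes=5 * 1024 * 1024):
        # Re-encode the image as JPEG, lowering quality until it fits under max_bytes
        image = Image.open(io.BytesIO(image_data)).convert("RGB")
        quality = 85
        while quality >= 30:
            buffer = io.BytesIO()
            image.save(buffer, format="JPEG", quality=quality)
            if buffer.tell() <= max_bytes:
                return buffer.getvalue()
            quality -= 10
        # Best effort: return the smallest version produced even if still over the limit
        return buffer.getvalue()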

    If you think this project is interesting, feel free to share it with your friends or message me if you want all of the code!

  • Converting DrawIO Diagrams to Terraform

    I’m going to start this post off by saying that I need testers: people to test this process from an interface perspective as well as a data perspective. I’m limited in the amount of test data that I have to put through the process.

    With that said, I spent my Thanksgiving holiday writing code, building this project, and putting in way more time than I thought I would, but boy is it cool.

    If you’re like me and work in a cloud engineering capacity, then you have probably built a DrawIO diagram at some point to describe or define your AWS architecture. Then you have spent countless hours using that diagram to write your Terraform. I’ve built something that will save you those hours and get you started on your cloud journey.

    Enter https://drawiototerraform.com, my new tool that allows you to convert your DrawIO AWS architecture diagrams to Terraform just by uploading them. The process uses a combination of Python and LLMs to identify the components in your diagram and their relationships, write the base Terraform, analyze the initial Terraform for syntax errors, and ultimately test the Terraform by generating a Terraform plan.

    All this is then delivered to you as a ZIP file for you to review, modify and ultimately deploy to your environment. By no means is it perfect yet and that is why I am looking for people to test the platform.

    If you, or someone you know, are interested in helping me test, reach out to me through the website’s support page and I will provide some free credits so that you can test out the platform with your own diagrams.

    If you are interested in learning more about the project in any capacity, do not hesitate to reach out to me at any time.

    Website: https://drawiototerraform.com

  • API For Pre-signed URLs

    Pre-signed URLs are used for downloading objects from AWS S3 buckets. I’ve used them many times in the past for various reasons, but this idea was a new one: a proof of concept for an API that creates a pre-signed URL and returns it to the user.

    This solution utilizes an API Gateway and an AWS Lambda function. The API Gateway takes two parameters “key” and “expiration”. Ultimately, you could add another parameter for “bucket” if you wanted the gateway to be able to get objects from multiple buckets.

    I used Terraform to create the infrastructure and Python to program the Lambda.

    Take a look at the Lambda code below:

    import boto3
    import json
    import os
    from botocore.exceptions import ClientError
    
    def lambda_handler(event, context):
        # Get the query parameters
        query_params = event.get('queryStringParameters', {})
        if not query_params or 'key' not in query_params:
            return {
                'statusCode': 400,
                'body': json.dumps({'error': 'Missing required parameter: key'})
            }
        
        object_key = query_params['key']
        expiration = int(query_params.get('expiration', 3600))  # Default 1 hour
        
        # Initialize S3 client
        s3_client = boto3.client('s3')
        bucket_name = os.environ['BUCKET_NAME']
        
        try:
            # Generate presigned URL
            url = s3_client.generate_presigned_url(
                'get_object',
                Params={
                    'Bucket': bucket_name,
                    'Key': object_key
                },
                ExpiresIn=expiration
            )
            
            return {
                'statusCode': 200,
                'headers': {
                    'Access-Control-Allow-Origin': '*',
                    'Content-Type': 'application/json'
                },
                'body': json.dumps({
                    'url': url,
                    'expires_in': expiration
                })
            }
            
        except ClientError as e:
            return {
                'statusCode': 500,
                'body': json.dumps({'error': str(e)})
            }
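
    For a quick test from Python, calling the endpoint might look like this (the invoke URL below is a placeholder for your deployed API Gateway stage and resource path):

    import requests

    api_url = "https://<api-id>.execute-api.us-west-2.amazonaws.com/prod/presign"  # placeholder

    resp = requests.get(api_url, params={"key": "reports/example.pdf", "expiration": 900})
    resp.raise_for_status()
    print(resp.json()["url"])         # the pre-signed download URL
    print(resp.json()["expires_in"])  # 900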

    The Terraform will also output a Postman collection JSON file so that you can immediately import it for testing. If this code and pattern is useful for you check it out on my GitHub below.

    GitHub

  • Securing AWS S3 Objects with Python: Implementing SSE-S3 Encryption

    In the cloud-native world, data security is paramount, and securing Amazon Web Services (AWS) S3 storage is a critical task for any developer. In this article, we dive into a Python script designed to ensure that all your S3 objects are encrypted using Server-Side Encryption with S3-Managed Keys (SSE-S3). This method provides robust security by encrypting S3 objects at the server level using keys managed by S3.

    Understanding the Python Script

    Using the code located at https://github.com/avansledright/s3-object-re-encryption, we have a good framework for re-encrypting our objects.

    The script utilizes the boto3 library, a Python SDK for AWS, enabling developers to integrate their applications with AWS services directly. It includes functions to list objects in an S3 bucket, check their encryption status, and apply SSE-S3 encryption if necessary.

    Key Functions:

    1. Listing Objects: Retrieves all objects within a specified bucket and prefix, managing pagination to handle large datasets.
    2. Checking Encryption: Examines if each object is encrypted with SSE-S3 by accessing its metadata.
    3. Applying Encryption: Updates objects not encrypted with SSE-S3, ensuring all data is securely encrypted using copy_object with the ServerSideEncryption parameter (sketched below).
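
    As a rough illustration of steps 2 and 3, checking and applying SSE-S3 with boto3 might look like the following (function and variable names are illustrative; the repository’s script is the authoritative version):

    import boto3

    s3 = boto3.client("s3")

    def is_sse_s3_encrypted(bucket, key):
        # Return True if the object is already encrypted with SSE-S3 (AES256)
        head = s3.head_object(Bucket=bucket, Key=key)
        return head.get("ServerSideEncryption") == "AES256"

    def apply_sse_s3(bucket, key):
        # Re-encrypt an object in place by copying it over itself with SSE-S3
        s3.copy_object(
            Bucket=bucket,
            Key=key,
            CopySource={"Bucket": bucket, "Key": key},
            ServerSideEncryption="AES256",
        )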

    Why Encrypt with SSE-S3?

    Encrypting your S3 objects with SSE-S3 ensures that data is automatically encrypted before being saved to disk and decrypted when accessed. This happens transparently, allowing you to secure your data without modifying your application code.

    Running the Script

    The script is executed via the command line, where users specify the S3 bucket and prefix. It then processes each object, ensuring encryption standards meet organizational and compliance requirements.

    Expanding the Script

    While this script provides a basic framework for S3 encryption, it can be expanded with additional error handling, logging, and perhaps integration into a larger AWS security auditing tool.

    AWS developers looking to enhance their application security will find this script a valuable starting point for implementing standard security practices within their S3 environments. By automating the encryption process, developers can ensure consistency and security across all stored data.

    For those who manage sensitive or regulated data in AWS, applying SSE-S3 encryption programmatically can help meet legal and compliance obligations while providing peace of mind about data security.

    If you find this article helpful please share it with your friends!

  • Building a Generative AI Workflow with AWS Bedrock

    I’ve finally been tasked with a Generative AI project to work on. I’ve done this workflow manually with ChatGPT in the past and it works quite well but, for this project, the requirement was to use Amazon Web Services’ new product “AWS Bedrock”.

    The workflow takes in some code and writes a technical document that gives a clear, plain-English explanation of what the code is going to accomplish. Using AWS Bedrock, the AI will write the document and output it to an S3 bucket.

    The architecture involves uploading the initial code to an S3 bucket, which sends a message to an SQS queue and ultimately triggers a Lambda to prompt the AI and upload the output to a separate S3 bucket. Because this was a proof of concept, the Lambda function was given a significant amount of compute; however, going forward I am going to look at placing this code into a Docker container so that it can scale to larger code inputs.

    Here is the architecture diagram:

    Let’s take a look at some of the important code. First is prompt management. I wrote a function that takes the code as input along with a “prompt_type” parameter. This allows the function to scale to accommodate other prompts in the future.

    def return_prompt(code, prompt_type):
        if prompt_type == "testPrompt":
            # The actual prompt text is omitted; note the required Human:/Assistant: framing
            # and that the code being documented is embedded in the prompt
            prompt1 = f"Human: <your prompt>\n\n{code}\n\nAssistant:"
            return prompt1

    The important thing to look at here is the format of the message. You have to include the “Human:” and the “Assistant:”. Without this formatting, your API call will error.

    The next bit of code is what we use to prompt the Bedrock AI.

    prompt_to_send = prompts.return_prompt(report_file, "testPrompt")
    body = {
        "prompt": prompt_to_send,
        "max_tokens_to_sample": 300,
        "temperature": 0.1,
        "top_p": 0.9
    }
    accept = 'application/json'
    contentType = 'application/json'

    # Return pseudo code
    bedrock_response = h.bedrock_actions.invoke_model(
        json.dumps(body, indent=2).encode('utf-8'),
        contentType,
        accept,
        modelId=modelid
    )

    def invoke_model(body, contentType, accept, modelId):
        print(f"Body being sent: {body}")
        try:
            response = bedrock_runtime.invoke_model(
                body=body,
                contentType=contentType,
                accept=accept,
                modelId=modelId
            )
            return response
        except ClientError as e:
            print("Failed to invoke Bedrock model")
            print(e)
            return False

    The body of our request is what configures Bedrock to run and create a response. These values can be tweaked as follows:

    • max_tokens_to_sample: Specifies the maximum number of tokens to sample in your request. Amazon recommends setting this to 4000.
    • top_p: Use a lower value to ignore less probable options.
    • top_k: Specifies the number of token choices the model uses to generate the next token.
    • temperature: Use a lower value to decrease randomness in the response.

    You can read more about the inputs here.
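
    After the model responds, the Lambda still needs to pull the generated text out of the response and drop it into the output bucket. A minimal sketch of that step is below (the bucket and key are placeholders, and the “completion” field assumes the Claude text-completions response format used with the prompt/max_tokens_to_sample body above):

    import json
    import boto3

    s3 = boto3.client("s3")

    def store_document(bedrock_response, output_bucket, output_key):
        # Extract the generated document from the Bedrock response and upload it to S3
        response_body = json.loads(bedrock_response["body"].read())
        document_text = response_body.get("completion", "")
        s3.put_object(Bucket=output_bucket, Key=output_key, Body=document_text.encode("utf-8"))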

    If you want to see more of this code take a look at my GitHub repository below. Feel free to use it wherever you want. If you have any questions be sure to reach out to me!

    GitHub: https://github.com/avansledright/bedrock-poc-public

  • Automated Lambda Testing

    Look, I know there are a bunch of test frameworks that you could use for your Lambda functions. But what if you wanted something simple? I spent an afternoon putting together what I would want in a testing pipeline that returns a simple “Success/Fail” type response to me via Email.

    An architecture diagram for your eyes:

    The idea is to create a JSON object whose keys and values pair each Lambda function’s name with the test event to pass to it. Once the file is uploaded to the S3 bucket, the pipeline is triggered and a CodeBuild job iterates through the Lambdas and their events. Each Lambda is invoked with its event and reports whether or not it was successful. The results are then sent to an SNS topic to be distributed to the developers. A rough sketch of that test runner is shown below.
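
    This is only a sketch of what the CodeBuild step might run (the events file name, JSON shape, and SNS topic ARN are assumptions; the packaged solution is the actual implementation):

    import json
    import boto3

    lambda_client = boto3.client("lambda")
    sns = boto3.client("sns")

    def run_tests(events_file, topic_arn):
        # Invoke each Lambda with its test event and publish a pass/fail summary
        with open(events_file) as f:
            tests = json.load(f)  # {"function-name": {...test event...}, ...}

        results = []
        for function_name, test_event in tests.items():
            response = lambda_client.invoke(
                FunctionName=function_name,
                Payload=json.dumps(test_event).encode("utf-8"),
            )
            status = "Success" if response.get("FunctionError") is None else "Fail"
            results.append(f"{function_name}: {status}")

        sns.publish(TopicArn=topic_arn, Subject="Lambda test results", Message="\n".join(results))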

    Going forward, I hope to automate adding new Lambda functions to the JSON file so that testing can also be scheduled.

    I spent time packaging this solution up with all the appropriate Terraform files and code. If you are interested in this solution feel free to reach out and I can deliver the packaged application to you!

    Sample Code: GitHub