Tag: automation

  • Custom Automated Code Scanning with AWS Bedrock and Claude Sonnet for Jenkins

    I run Jenkins in my home lab, where I build and test various applications. I also use Jenkins professionally, so it's a great test bed for trying things out before implementing them for clients. Essentially, my home lab is and always will be a sandbox.

    Anyway, I thought it would be fun to add AI to a pipeline and have Claude scan my codebases for vulnerabilities before they are built and deployed.

    First, I created a shared library that points to a private repository on GitHub containing all of the code.

    At the beginning of each of my pipelines I add one line to import the library like this:

    @Library('jenkins-shared-libraries') _

    I also created a Groovy file that defines all the prerequisites and builds the container in which the code scan runs:

    def call() {
        node {
            stage('Amazon Bedrock Scan') {
                // 1. Prepare scripts from library resources
                def scriptContent = libraryResource 'scripts/orchestrator.py'
                def reqsContent = libraryResource 'scripts/requirements.txt'
                writeFile file: 'q_orchestrator.py', text: scriptContent
                writeFile file: 'requirements.txt', text: reqsContent
    
                // 2. Start the Docker container
    
                docker.image('python:3.13-slim').inside("-u 0:0") {
                    
                    // 3. Bind Credentials
                    withCredentials([
                        [$class: 'AmazonWebServicesCredentialsBinding', credentialsId: 'AWS_Q_CREDENTIALS'],
                        string(credentialsId: 'github-api-token', variable: 'GITHUB_TOKEN')
                    ]) {
                        // 4. Get repo name from Jenkins environment
                        def repoUrl = env.GIT_URL ?: scm.userRemoteConfigs[0].url
                        def repoName = repoUrl.replaceAll(/.*github\.com[:\\/]/, '').replaceAll(/\.git$/, '')
    
                        echo "Scanning repository: ${repoName}"
    
                        // 5. Install dependencies and run the scan (output must log to the console)
                        sh """
                            echo "--- INSTALLING DEPENDENCIES ---"
                            apt-get update -qq && apt-get install -y -qq git > /dev/null 2>&1
                            pip install --quiet -r requirements.txt
    
                            echo "--- RUNNING ORCHESTRATOR FOR ${repoName} ---"
                            python3 q_orchestrator.py --repo "${repoName}"
                        """
                    }
                }
            }
        }
    }

    This spins up a container on my Jenkins instance (yes, I know I should set up a separate cluster for this) and runs the orchestrator script, which contains all of my code.

    The script iterates through all of the code files, filtered by extension so that executables and other unnecessary files aren't scanned or sent to Bedrock.
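
    The filtering step can be sketched roughly like this. This is my own illustration, not the actual orchestrator code; the extension allowlist and the helper name `collect_scannable_files` are assumptions:

    ```python
    from pathlib import Path

    # Illustrative allowlist of extensions worth sending to Bedrock
    SCANNABLE_EXTENSIONS = {".py", ".groovy", ".js", ".ts", ".java", ".go", ".sh", ".tf", ".yaml", ".yml"}

    def collect_scannable_files(repo_root):
        """Walk the checked-out repo and keep only code files by extension."""
        return sorted(
            p for p in Path(repo_root).rglob("*")
            if p.is_file()
            and p.suffix.lower() in SCANNABLE_EXTENSIONS
            and ".git" not in p.parts  # skip Git internals
        )
    ```

    An allowlist is safer here than a blocklist: anything unrecognized (binaries, images, archives) is excluded by default rather than accidentally uploaded.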

    Once Bedrock has reviewed all of the files, it puts the details into a pull request and writes suggested code changes to the files. The pull request is then submitted to the repository for me to review. If I approve it, the cycle starts all over again!
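
    For context, the per-file review call can be sketched like this. The model ID, prompt wording, and helper names are my assumptions rather than the actual orchestrator code, and the real script also opens the pull request via the GitHub API afterwards:

    ```python
    # Model ID is an assumption; use whichever Claude Sonnet ID is enabled in your region
    MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"

    def build_review_prompt(path, source):
        """Assemble the vulnerability-review prompt for one file (illustrative wording)."""
        return (
            "Review the following file for security vulnerabilities. "
            "Reply with concrete code change suggestions.\n\n"
            f"File: {path}\n\n{source}"
        )

    def review_file(path, source, region="us-east-1"):
        """Send one file to Claude via the Bedrock Converse API and return the review text."""
        import boto3  # imported lazily so the helpers above work without boto3 installed

        client = boto3.client("bedrock-runtime", region_name=region)
        response = client.converse(
            modelId=MODEL_ID,
            messages=[{"role": "user", "content": [{"text": build_review_prompt(path, source)}]}],
            inferenceConfig={"maxTokens": 2048, "temperature": 0},
        )
        return response["output"]["message"]["content"][0]["text"]
    ```

    The AWS credentials bound by `withCredentials` in the Groovy step are picked up automatically by boto3 from the environment, so the script itself never handles secrets directly.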

    I’ve slowly been rolling this out to my pipelines, and boy, did I miss some very obvious things. I can’t wait to keep fixing things and improving not only my pipelines but my coding skills.

    If you have any interest in setting up something similar feel free to reach out!