AI · 2026-04-14 · 6 min read

Claude Code × GitHub Actions: Integrating AI Agents into Your CI/CD Pipeline


A hands-on guide to integrating Claude Code with GitHub Actions — covering workflow setup, automated code review, and PR generation, with real configuration files you can drop in.

髙木 晃宏

Representative / Engineer


By embedding AI agents into your CI/CD pipeline, you can now automate much of the development process — code review, PR creation, issue triage, and more. Claude Code is Anthropic's coding agent, and through its official GitHub Actions integration, you can build a system where an AI responds autonomously to virtually any event on your repository.

If you've already picked up the basics from the Claude Code Complete Guide, this article should fit right in. Here, we'll walk through how to integrate Claude Code into a GitHub Actions workflow, implement automated code review and PR creation, and cover the security considerations — complete with working YAML examples.

What You Can Do with Claude Code × GitHub Actions

Combining Claude Code with GitHub Actions lets you hand many traditionally manual development tasks off to an AI agent. Specifically:

Automated Code Review

When a PR is opened, or when new code is pushed to it, Claude Code automatically analyzes the change and posts review comments. Style-guide conformance, potential bugs, performance suggestions — a lot of what a human reviewer covers can be automated.

Interactive Code Changes via PR Comments

Mentioning @claude in a PR or issue comment lets you give Claude Code direct instructions. Give it a natural-language prompt like "refactor this function" or "add tests," and it will actually change the code and push a commit.

Automated Issue Handling

You can also build flows where a new issue is automatically analyzed and a fix PR is generated. This could suggest a fix for a bug report, or kick off an initial implementation for a feature request — dramatically accelerating the first 80% of development.

Scheduled Reports

With the schedule trigger, you can analyze the repository state at a fixed time each day and post technical-debt or security-check reports as issues.

All of this is made possible by the official anthropics/claude-code-action@v1. Now out of beta and generally available, the action is stable enough for production use.

Supported Workflow Triggers

The main workflow triggers Claude Code Action supports:

| Trigger | Use |
| --- | --- |
| `issue_comment` | Respond to @claude mentions in PR/issue comments |
| `pull_request_review_comment` | Conversations via PR review comments |
| `pull_request` (opened / synchronize) | Auto-review on PR open/update |
| `issues` (opened) | Auto-respond when a new issue is created |
| `schedule` | Scheduled runs (reports, periodic checks, etc.) |

This flexibility means you can plug an AI agent into nearly any phase of your development workflow.

Configuring the GitHub Actions Workflow

Setup Options

There are two main paths for introducing Claude Code × GitHub Actions.

Option 1: Quick Setup (recommended)

Run this in the Claude Code terminal, and everything — GitHub App installation through workflow file placement — happens automatically:

```
/install-github-app
```

This is the easiest way in. It guides you through the required configuration interactively and generates the workflow files in your repo.

Option 2: Manual Setup

For a manual setup, follow these three steps:

  1. Install the Claude GitHub App on your repository
  2. Under Settings > Secrets and variables > Actions, register ANTHROPIC_API_KEY as a secret
  3. Place the workflow file in .github/workflows/

Manual setup makes sense when you want to step through each stage carefully to match your organization's security policies.

A Minimal Workflow File

The simplest viable config is a workflow that responds when someone mentions @claude in a PR or issue comment.

```yaml
name: Claude Code Assistant

on:
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]

permissions:
  contents: write
  pull-requests: write
  issues: write

jobs:
  claude:
    if: |
      (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude'))
    runs-on: ubuntu-latest
    steps:
      - name: Run Claude Code
        uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
```

This workflow sets three permissions. contents: write is needed to modify code and push commits. pull-requests: write is needed to post comments or perform review actions on PRs, and issues: write is needed to post comments on issues.

Configuration Parameters

Here are the main parameters for anthropics/claude-code-action@v1:

| Parameter | Description | Example |
| --- | --- | --- |
| `anthropic_api_key` | Anthropic API key (required) | `${{ secrets.ANTHROPIC_API_KEY }}` |
| `prompt` | Additional prompt passed to Claude Code | `"Perform a code review in English"` |
| `claude_args` | Additional args for the Claude Code CLI | `"--max-turns 10 --model claude-sonnet-4-20250514"` |
| `trigger_phrase` | Trigger phrase to react to | `"@claude"` (default) |
| `timeout_minutes` | Timeout in minutes | `30` |

The claude_args parameter lets you pass any Claude Code CLI options. Common ones:

| CLI option | Description |
| --- | --- |
| `--max-turns` | Cap the maximum number of agent turns |
| `--model` | Specify which model to use |
| `--allowedTools` | Restrict which tools the agent may use |
| `--mcp-config` | Specify an MCP (Model Context Protocol) server config file |

For example, to pin the model to Sonnet and cap the turns:

```yaml
- uses: anthropics/claude-code-action@v1
  with:
    anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
    claude_args: "--max-turns 5 --model claude-sonnet-4-20250514"
```

Integration with CLAUDE.md

If you place a CLAUDE.md at the repository root, Claude Code Action automatically loads it as context. This is how you give the AI consistent project-specific coding standards and review criteria.

Set up CLAUDE.md like this, for example, and these standards will apply automatically during reviews:

```markdown
# CLAUDE.md

## Coding Standards

- Enable TypeScript strict mode
- Keep functions under 30 lines
- Use camelCase for naming

## Review Criteria

- Always verify performance impact
- Confirm adequate error handling
- Ensure test coverage does not regress
```

CLAUDE.md applies not only to Claude Code's behavior inside GitHub Actions, but also to local Claude Code sessions — so the whole team gets a consistent development experience.

Implementing Automated Code Review

Auto-Review on PR Creation

Let's build a workflow that automatically runs a review when a PR is opened. We'll use the pull_request trigger with the opened and synchronize events.

```yaml
name: Automated Code Review

on:
  pull_request:
    types: [opened, synchronize]

permissions:
  contents: read
  pull-requests: write

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - name: Automated Review by Claude
        uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: |
            Please review the changes in this PR.

            Please verify the following:
            1. Correctness: Are there any logic errors?
            2. Security: Any potential vulnerabilities?
            3. Performance: Any inefficient processing?
            4. Readability: Are naming and code structure appropriate?
            5. Tests: Are there adequate tests?

            When you find issues, comment with a specific suggested fix.
            Group minor nits together; leave critical issues as individual inline comments.
          claude_args: "--max-turns 3"
```

Including synchronize re-runs the review on every new commit pushed to the PR. In my experience, re-running after addressing the first-pass feedback is effective at catching anything that slipped through.

Designing the prompt Parameter

How you design the instructions passed via prompt has a huge effect on review quality. A few principles:

Be explicit about what to review.

Instead of "please review," list concrete checks. It yields more thorough and consistent reviews.

Specify the output format.

Giving the reviewer a fixed output structure makes it much easier for humans to skim the AI's results. For example: "Classify severity as [Critical], [Warning], or [Info]."
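As an illustrative sketch (the exact labels and wording are up to your team), a severity-classified output instruction might look like this inside the `prompt` parameter:

```yaml
prompt: |
  Report each finding in the following format:
  - [Critical] Must be fixed before merge
  - [Warning] Should be fixed
  - [Info] Optional improvement suggestion
  End with a one-paragraph summary of the overall change.
```

A fixed structure like this also makes it easy to grep CI logs or PR comments for `[Critical]` findings.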

Invoke Skills.

You can also call Claude Code Skills from the prompt parameter. If your repo has pre-defined Skills, they work in GitHub Actions too — letting you share the same review criteria between local and CI.

Dialing In Review Accuracy

Here are a few things I've found helpful for bringing automated review quality to a production level.

Provide enough diff context.

Claude Code Action has access to the entire repo by default, so it can do comprehensive reviews that take related files into account. But if you set --max-turns too low, it may cut off before gathering enough context — tune it to the size of your project.

Add language- or framework-specific checks.

For a React project, check items like "unnecessary re-renders" or "appropriate useEffect dependency arrays" make the reviews much more useful.

Exclude auto-generated files to reduce noise.

Comments on lockfiles or generated type definitions are just noise. A rule in CLAUDE.md like "do not review changes to package-lock.json or *.d.ts files" can avoid the problem.
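A minimal sketch of such a rule in CLAUDE.md — the section name and file patterns here are just examples, so adapt them to whatever your build actually generates:

```markdown
## Review Exclusions

- Do not review changes to package-lock.json or other lockfiles
- Do not review generated files (*.d.ts, dist/, coverage/)
```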

Automating PR Creation and Issue Handling

Auto-Creating a PR from an Issue

Here's a workflow where Claude Code analyzes a newly opened issue and automatically creates a fix PR.

```yaml
name: Auto Fix Issues

on:
  issues:
    types: [opened]

permissions:
  contents: write
  pull-requests: write
  issues: write

jobs:
  auto-fix:
    if: contains(github.event.issue.labels.*.name, 'claude-fix')
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Create Fix PR
        uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: |
            Analyze the following issue and implement a fix.

            Process:
            1. Understand the issue
            2. Identify the related code
            3. Implement the fix
            4. Create a PR explaining the change

            Use the PR title format "fix: [summarized issue title]".
            In the PR description, explain what you changed and why it's the right fix.
          claude_args: "--max-turns 15"
```

This workflow filters so that only issues with the claude-fix label run the automation. Running auto-fix on every issue risks generating unintended changes. Gating by label is an important pattern for keeping automation scope under control.

Conversational Fixes via Comments

Mentioning @claude in a PR comment for conversational code changes is probably the most intuitive usage pattern for developers.

```yaml
name: Claude PR Assistant

on:
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]

permissions:
  contents: write
  pull-requests: write
  issues: write

jobs:
  respond:
    if: |
      (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude'))
    runs-on: ubuntu-latest
    steps:
      - name: Claude Response
        uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          trigger_phrase: "@claude"
```

Here we explicitly set the trigger phrase via trigger_phrase. The default is @claude, but you can customize it — @ai-review, say — to avoid collisions with other bots on your team.

What the Interaction Looks Like

A typical PR-comment flow might look like this. The developer posts in the PR comment thread:

```
@claude Please add error handling to this function. Return null when the API response is 404.
```

Claude Code picks that up, analyzes the relevant code, implements the change, and pushes a commit. Then it reports what it changed as a comment. If the result needs more work, you can iterate with another comment.

This interactive flow really pays off when you want to turn review feedback directly into fixes. Reviewer flags a problem → Claude Code fixes it → reviewer checks again. It's a significant productivity lift for team development.

Custom Trigger Phrases

By customizing trigger_phrase, you can wire up different workflows for different purposes:

```yaml
# Review-only workflow
- uses: anthropics/claude-code-action@v1
  with:
    anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
    trigger_phrase: "@claude-review"
    prompt: "Please perform a code review."

# Refactor-only workflow
- uses: anthropics/claude-code-action@v1
  with:
    anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
    trigger_phrase: "@claude-refactor"
    prompt: "Please refactor the specified section."
```

Separate trigger phrases let you tune both the prompt and claude_args per use case. You might set a small --max-turns for reviews and a larger one for refactoring, for instance.

Security Considerations

Security is one of the most important factors when integrating an AI agent into CI/CD. Because Claude Code Action is granted read/write access to your repository, the right protections are non-negotiable.

API Key Management

Never hardcode API keys. That's the most basic and most important rule. If you embed the key directly in the workflow file, it's exposed to anyone with repo-read access. Always use GitHub Secrets.

```yaml
# Correct: use GitHub Secrets
with:
  anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}

# Never do this: inline the key
# with:
#   anthropic_api_key: sk-ant-xxxxxxxxxxxxx
```

Keys registered in GitHub Secrets don't appear in workflow logs. By default, Secrets are also protected from PRs originating from forks, so forked PRs can't access them.

Alternative Providers

Instead of an Anthropic-direct API key, you can also use Claude Code via AWS Bedrock or Google Vertex AI. Those platforms support OpenID Connect (OIDC) token-based authentication, eliminating the need to manage a static API key.

In enterprise settings, going through Bedrock or Vertex AI is often recommended for the sake of integration with existing cloud infrastructure. OIDC-based auth issues short-lived tokens, which significantly reduces the risk of key leakage.
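As a rough sketch of the OIDC pattern on AWS: the job requests an `id-token: write` permission, assumes an IAM role via the official AWS credentials action, then points the Claude Code action at Bedrock. The `use_bedrock` input, the role ARN, and the Bedrock model ID below are assumptions to verify against the action's current documentation:

```yaml
permissions:
  id-token: write   # required for OIDC token exchange
  contents: read

steps:
  - name: Configure AWS credentials via OIDC
    uses: aws-actions/configure-aws-credentials@v4
    with:
      role-to-assume: arn:aws:iam::123456789012:role/claude-code-ci  # hypothetical role
      aws-region: us-east-1
  - uses: anthropics/claude-code-action@v1
    with:
      use_bedrock: "true"  # assumed input name; check the action's README
      claude_args: "--model anthropic.claude-sonnet-4-20250514-v1:0"  # hypothetical Bedrock model ID
```

Note there is no `anthropic_api_key` anywhere: the short-lived credentials issued by the role assumption are all the workflow holds.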

Principle of Least Privilege

Workflow permissions should be the minimum needed. For example, a review-only workflow doesn't need code-write access.

```yaml
# Review-only: read permissions only
permissions:
  contents: read
  pull-requests: write

# Includes code changes: write permissions needed
permissions:
  contents: write
  pull-requests: write
  issues: write
```

contents: write is required when Claude Code modifies code and pushes commits; for comment-only reviews, contents: read is enough.

Handling PRs from Forks

For open-source projects, you need to account for the security implications of Claude Code running on PRs from forks. A malicious fork may attempt to alter workflow behavior, so be especially careful with the pull_request_target trigger.

Constraining Execution with --max-turns

Capping the agent's turn count with --max-turns matters not only for cost control but for security. Unbounded turns risk unexpected operations running for an extended period.

Restricting Tools with --allowedTools

Restricting what tools Claude Code can use via --allowedTools prevents unintended file operations or command execution. For a review workflow, you can allow only read-only tools and disable write tools.
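For example, a review-only job might restrict the agent to read-only tools via `claude_args`. The tool names below (`Read`, `Grep`, `Glob`) follow Claude Code's built-in tool naming, but verify the exact names and quoting against the current CLI documentation:

```yaml
- uses: anthropics/claude-code-action@v1
  with:
    anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
    # Read-only tool set: no Bash, Edit, or Write
    claude_args: '--allowedTools "Read,Grep,Glob" --max-turns 5'
```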

We'll cover security in more depth in an upcoming article.

Realistic Workflow Examples

Building on what we've covered, here are three workflows you can actually drop into a project.

Example 1: Basic @claude Response Workflow

The most generic workflow — reacts to @claude in both PR and issue comments.

```yaml
name: Claude Assistant

on:
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]

permissions:
  contents: write
  pull-requests: write
  issues: write

jobs:
  claude-assistant:
    if: |
      (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude'))
    runs-on: ubuntu-latest
    timeout-minutes: 30
    steps:
      - name: Run Claude Code
        uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          claude_args: "--max-turns 10"
          prompt: |
            Please respond in English.
            When making code changes, clearly explain the rationale.
```

Setting timeout-minutes at the job level prevents a workflow from running indefinitely if Claude Code stalls for some reason.

Example 2: Automatic PR-Review Workflow

Runs a code review automatically every time a PR is opened or updated.

```yaml
name: Automated PR Review

on:
  pull_request:
    types: [opened, synchronize]

permissions:
  contents: read
  pull-requests: write

jobs:
  auto-review:
    runs-on: ubuntu-latest
    timeout-minutes: 20
    steps:
      - name: Review PR
        uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          claude_args: "--max-turns 5"
          prompt: |
            Please perform a comprehensive review of this PR.

            ## Review criteria
            - Correctness and logical soundness of the code
            - Security concerns (SQL injection, XSS, missing auth, etc.)
            - Performance impact
            - Quality of error handling
            - Test coverage

            ## Output format
            Report your review in the following format:

            ### Summary
            Briefly describe the change and overall assessment in 1-2 sentences.

            ### Issues
            Classify by severity:
            - [Critical] Must be fixed
            - [Warning] Should be fixed
            - [Info] Improvement suggestions or informational

            ### Positives
            Note anything notably well-designed or high quality.

            Please review in English.
```

Since this is review-only, permissions are limited to contents: read. Read access is enough for Claude Code to analyze the code and post review comments.

Example 3: Daily Report Workflow

Analyzes the repository state every morning at a set time and posts a report as an issue.

```yaml
name: Daily Repository Report

on:
  schedule:
    - cron: '0 0 * * 1-5'  # Mon-Fri at UTC 0:00 (JST 9:00)

permissions:
  contents: read
  issues: write

jobs:
  daily-report:
    runs-on: ubuntu-latest
    timeout-minutes: 30
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Generate Report
        uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          claude_args: "--max-turns 10"
          prompt: |
            Analyze the current state of the repository and produce a daily report.

            ## Analysis items
            1. Summary of commits in the last 24 hours
            2. Status of open PRs (awaiting review, conflicts, etc.)
            3. Number of unresolved issues and the high-priority ones
            4. Technical debt observations (TODO comments, deprecated API usage, etc.)

            ## Output
            Create a GitHub Issue with the analysis.
            Use the title format "Daily Report: [date]".
```

fetch-depth: 0 grabs the full history so commit analysis is accurate. The schedule trigger uses UTC, so adjust the cron expression for your local time zone.

Cost Considerations

Running Claude Code × GitHub Actions incurs two kinds of cost.

GitHub Actions runner minutes

If you exceed GitHub Actions' free quota (unlimited for public repos, 2,000 minutes per month for private repos), you pay for runner minutes. Claude Code workflows tend to run longer than typical CI pipelines, so it's important to cap them appropriately with timeout-minutes and --max-turns.

Anthropic API token costs

You pay Anthropic for the input/output tokens Claude Code processes. Large PR reviews or complex auto-fixes consume many tokens. --max-turns is effective as a cost-control measure too.

When rolling this out, I recommend a trial run on a handful of repositories to get concrete cost data before expanding scope.

Summary

Combining Claude Code with GitHub Actions gives you a way to integrate AI agents throughout the development workflow — automated code review, interactive code changes via PR comments, auto-generated fix PRs from issues, scheduled reports, and more.

The official anthropics/claude-code-action@v1 is GA, and supports quick setup via /install-github-app. Through workflow configuration, standardizing review criteria with CLAUDE.md, and tuning instructions via the prompt parameter, you can meaningfully lift your team's development velocity.

At the same time, there's a lot to consider on the security and operational side: API key management, minimizing workflow permissions, and controlling execution cost. My recommendation is to start small — a single repo with a basic @claude response workflow — and gradually expand into automated review and issue handling.

For the fundamentals of Claude Code, see the Claude Code Usage Guide. If you'd like help adopting AI agents or optimizing your CI/CD pipeline, feel free to reach out via our Contact page.
