Run Claude Code non-interactively in scripts, GitHub Actions, and automation pipelines
The -p (or --print) flag runs Claude Code non-interactively — it processes the prompt and exits. This is the foundation for scripting, CI/CD, and automation.
claude -p "Explain the auth module"
All CLI options work with -p.
claude -p "What does auth.py do?"
claude -p "Summarize this project" --output-format json
Returns a structured JSON envelope containing the result, session_id, and usage metadata.
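In a script, that envelope can be parsed with jq. A minimal sketch against a hand-written sample response (the exact field layout is assumed to match the envelope described above):

```shell
# Sample envelope standing in for real output of: claude -p "..." --output-format json
response='{"result":"Auth module handles login.","session_id":"abc-123","usage":{"input_tokens":42}}'

# Pull individual fields out with jq
echo "$response" | jq -r '.result'
echo "$response" | jq -r '.session_id'
```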
claude -p "Extract function names from auth.py" \
--output-format json \
--json-schema '{"type":"object","properties":{"functions":{"type":"array","items":{"type":"string"}}},"required":["functions"]}'
The response conforms to your schema in the structured_output field.
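Downstream, the schema-conforming data can be pulled out with jq. A sketch using a hand-written sample response (the envelope shape here is an assumption for illustration):

```shell
# Sample response; with --json-schema the conforming data lands in .structured_output
response='{"structured_output":{"functions":["login","logout","refresh_token"]}}'

# Iterate the extracted function names, one per line
echo "$response" | jq -r '.structured_output.functions[]'
```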
claude -p "Write a poem" \
--output-format stream-json \
--verbose \
--include-partial-messages | \
jq -rj 'select(.type == "stream_event" and .event.delta.type? == "text_delta") | .event.delta.text'
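To see what that jq filter selects, you can feed it hand-crafted stream events. The event shapes below are illustrative samples, not a specification of the stream format:

```shell
# Two sample stream-json lines: one text delta, one unrelated event
printf '%s\n' \
  '{"type":"stream_event","event":{"delta":{"type":"text_delta","text":"Roses are red"}}}' \
  '{"type":"system","subtype":"init"}' |
jq -rj 'select(.type == "stream_event" and .event.delta.type? == "text_delta") | .event.delta.text'
```

Only the text_delta event passes the select; the -j flag joins deltas without newlines so streamed text prints continuously.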
Q: When would you use --json-schema instead of regular JSON output?
Use --json-schema when you need the output to conform to a specific structure for downstream processing — like extracting function names into an array, or generating structured reports. Regular JSON output gives you Claude's response as free-form text inside a JSON wrapper. With --json-schema, the structured_output field is guaranteed to match your schema.
In headless mode, use --allowedTools to skip permission prompts:
claude -p "Run tests and fix failures" \
--allowedTools "Bash,Read,Edit"
# Allow specific commands
--allowedTools "Bash(npm run *)" "Bash(git diff *)" "Read"
# Allow everything (use with caution)
--dangerously-skip-permissions
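Patterns like Bash(npm run *) contain spaces and glob characters, so in scripts it is safest to keep them in an array so quoting survives word splitting. A minimal sketch:

```shell
# Keep tool patterns in an array; quoting protects spaces and globs
allowed=("Bash(npm run *)" "Bash(git diff *)" "Read")

# Each pattern expands as its own argument (shown bracketed for clarity)
printf '<%s>\n' "${allowed[@]}"

# Usage (not executed here): claude -p "Run tests" --allowedTools "${allowed[@]}"
```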
# Append to default prompt
claude -p "Review this code" \
--append-system-prompt "You are a security engineer. Focus on vulnerabilities."
# Replace entire prompt
claude -p "Analyze this" \
--system-prompt "You are a Python expert. Only write type-annotated code."
# Load from file
claude -p "Review this PR" \
--append-system-prompt-file ./prompts/security-review.txt
Fill in the blanks:
claude ___ → -p (or --print)
--json-___ → --json-schema
--___Tools → --allowedTools
# First request
claude -p "Review this codebase for performance issues"
# Continue the most recent conversation
claude -p "Now focus on database queries" --continue
# Generate summary
claude -p "Summarize all issues found" --continue
session_id=$(claude -p "Start a review" --output-format json | jq -r '.session_id')
claude -p "Continue the review" --resume "$session_id"
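The capture-and-resume wiring can be exercised end to end with a stub standing in for the real CLI. The stub and its canned output are illustrative only:

```shell
# Stub claude so the plumbing runs without the real CLI (illustration only)
claude() { echo '{"result":"started","session_id":"sess-42"}'; }

# Capture the session id from the first call, then resume with it
session_id=$(claude -p "Start a review" --output-format json | jq -r '.session_id')
claude -p "Continue the review" --resume "$session_id" >/dev/null
echo "resumed $session_id"
```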
# Max spending
claude -p "Refactor the auth module" --max-budget-usd 5.00
# Max agentic turns
claude -p "Fix all lint errors" --max-turns 10
# Fallback model when primary is overloaded
claude -p "Review code" --fallback-model sonnet
Q: What's the difference between --max-budget-usd and --max-turns?
--max-budget-usd sets a dollar spending limit — Claude stops when the session cost reaches that amount. --max-turns limits the number of agentic turns (tool calls + responses). Use budget limits for cost control and turn limits to prevent runaway loops. In CI/CD, using both is best practice.
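In a pipeline you can also check the reported spend after the fact. A sketch assuming the JSON envelope exposes a total_cost_usd field (field name is an assumption):

```shell
# Sample envelope; total_cost_usd is assumed to report the session's spend
response='{"result":"done","session_id":"abc","total_cost_usd":0.0312}'

cost=$(echo "$response" | jq -r '.total_cost_usd')
# Fail the step if the run cost $1 or more (awk handles the float compare)
awk -v c="$cost" 'BEGIN { exit !(c < 1.0) }' && echo "under budget"
```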
Inside Claude Code:
/install-github-app
# .github/workflows/claude.yml
name: Claude Code
on:
issue_comment:
types: [created]
pull_request_review_comment:
types: [created]
jobs:
claude:
runs-on: ubuntu-latest
steps:
- uses: anthropics/claude-code-action@v1
with:
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
Now in any PR or issue comment:
@claude implement this feature based on the issue description
@claude fix the TypeError in the dashboard component
name: Code Review
on:
pull_request:
types: [opened, synchronize]
jobs:
review:
runs-on: ubuntu-latest
steps:
- uses: anthropics/claude-code-action@v1
with:
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
prompt: "/review"
claude_args: "--max-turns 5"
name: Daily Report
on:
schedule:
- cron: "0 9 * * *"
jobs:
report:
runs-on: ubuntu-latest
steps:
- uses: anthropics/claude-code-action@v1
with:
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
prompt: "Generate a summary of yesterday's commits and open issues"
claude_args: "--model opus"
Q: What triggers the Claude Code GitHub Action in the basic @claude workflow?
It triggers on issue_comment and pull_request_review_comment events — meaning when someone comments on an issue or PR. The action looks for @claude mentions in the comment text and responds. It does NOT trigger on PR creation or pushes (you'd need separate workflow triggers for those).
Run Claude Code remotely from claude.ai/code:
Prefix with & to send a task to run in the cloud:
& Fix the authentication bug in src/auth/login.ts
Or from the command line:
claude --remote "Fix the auth bug"
Pull a web session back to your local terminal:
/teleport # Interactive picker
/tp # Short form
claude --teleport # From command line
Claude Code works with Unix pipes:
# Analyze logs
cat error.log | claude -p "What's causing these errors?"
# Process data
cat data.csv | claude -p "Find anomalies" --output-format json
# Chain with other tools
git diff | claude -p "Review these changes for security issues"
# Generate and pipe
claude -p "Generate test data for the user model" > test-fixtures.json
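When redirecting generated output to a file, it is worth validating it before downstream use. jq -e exits non-zero when the input is not the JSON you expect:

```shell
# Stand-in for: claude -p "Generate test data ..." > test-fixtures.json
echo '{"users": [{"id": 1, "name": "Ada"}]}' > /tmp/test-fixtures.json

# jq -e . fails (non-zero exit) on invalid JSON, so this gates the pipeline
if jq -e . /tmp/test-fixtures.json >/dev/null; then
  echo "fixtures OK"
else
  echo "invalid JSON" >&2
fi
```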
Fill in the blanks:
___ prefix sends a task to the cloud → &
/___ pulls a web session to your terminal → /teleport (or /tp)
anthropics/claude-code-___@v1 → anthropics/claude-code-action@v1
claude -p "List the 5 largest files in this project" --output-format json
git log --oneline -10 | claude -p "Summarize these recent commits"
claude -p "List all exported functions in src/" \
--output-format json \
--json-schema '{"type":"object","properties":{"functions":{"type":"array","items":{"type":"string"}}}}'
claude -p "What testing framework does this project use?"
claude -p "Show me an example test" --continue
Reflection: How could you integrate these patterns into your existing CI/CD pipeline?
Scenario: You want to set up automatic code review on every PR that checks for security issues, style violations, and test coverage. The review should cost less than $2 per PR.
name: PR Review
on:
pull_request:
types: [opened, synchronize]
jobs:
review:
runs-on: ubuntu-latest
steps:
- uses: anthropics/claude-code-action@v1
with:
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
prompt: |
Review this PR for:
1. Security vulnerabilities
2. Style violations
3. Missing tests for new code
Provide findings as a concise PR comment.
claude_args: |
--model sonnet
--max-turns 5
--max-budget-usd 2.00
Key decisions: --model sonnet keeps per-token cost low, --max-turns 5 caps agentic loops, and --max-budget-usd 2.00 enforces the $2-per-PR ceiling.
| Concept | One-Liner |
|---|---|
| Headless mode | claude -p "prompt" — processes and exits |
| Output formats | text, json, stream-json |
| Structured output | --json-schema for guaranteed structure |
| Auto-approve | --allowedTools "Bash,Read,Edit" |
| Cost control | --max-budget-usd and --max-turns |
| GitHub Action | anthropics/claude-code-action@v1 |
| @claude mentions | Comment on PRs/issues to trigger Claude |
| Remote execution | & prefix or --remote to run in cloud |
| Teleport | /tp pulls web session to terminal |
| Pipes | cat file \| claude -p "analyze" works natively |
Next up: IDE & Browser Integrations → — Use Claude Code in VS Code and Chrome.