AI-powered code reviews, vulnerability analysis, architecture drift detection, test gap analysis, and release readiness scoring — directly in your Jenkins pipeline. Supports OpenAI, Anthropic Claude, Ollama (local), LM Studio, vLLM, and any OpenAI-compatible endpoint.
Every CI/CD pipeline runs linters and tests — but they miss the architectural, strategic, and contextual issues that only experienced engineers catch. ForgeAI bridges that gap by embedding AI-powered intelligence directly into your Jenkins pipeline.
- 8 specialized analyzers, each with expert-level system prompts tuned for its domain
- Architecture-aware analysis that understands hexagonal, layered, CQRS, and microservice patterns
- Composite scoring that weighs security 3× and architecture 2× — because not all findings are equal
- Release readiness verdicts (SHIP_IT / CAUTION / HOLD / BLOCK) that synthesize all analyses
- Zero-cloud mode via Ollama for air-gapped and regulated environments
- A self-contained HTML report archived with every build
| Analyzer | ID | What It Does |
|---|---|---|
| AI Code Review | `code-review` | SOLID, DRY, naming, error handling, anti-patterns, readability |
| Vulnerability Analysis | `vulnerability` | OWASP Top 10, hardcoded secrets, injection, CWE mapping |
| Architecture Drift Detection | `architecture-drift` | Layer violations, circular deps, coupling, pattern enforcement |
| Test Gap Analysis | `test-gaps` | Untested paths, missing edge cases, test quality, concrete suggestions |
| Dependency Risk Scoring | `dependency-risk` | License conflicts, unmaintained deps, unpinned versions, duplication |
| Commit Intelligence | | Commit hygiene, breaking change detection, changelog & semver suggestions |
| Pipeline Optimizer | | Parallelization, caching, resource waste, failure resilience |
| Release Readiness | `release-readiness` | Composite verdict synthesizing all prior analyses |
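The analyzer IDs in the table are the values accepted by the `analyzers` parameter of the `forgeAI` step. A minimal sketch that runs only the security- and architecture-focused analyzers:

```groovy
// Run a reduced analyzer set; parameters as documented for the forgeAI step
def report = forgeAI(
    analyzers: ['vulnerability', 'architecture-drift'],
    sourceGlob: 'src/**/*.java'
)
```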
ForgeAI is provider-agnostic. Use whatever fits your security and budget requirements:
| Provider | Type | API Key Required | Air-Gapped |
|---|---|---|---|
| OpenAI (GPT-4o, GPT-4o-mini, o1) | Cloud API | Yes | No |
| Anthropic Claude (claude-opus-4-7, claude-sonnet-4-6) | Cloud API | Yes | No |
| Ollama (DeepSeek-Coder, CodeLlama, Llama 3, Mistral, Phi-3) | Local | No | Yes |
| LM Studio | Local | No | Yes |
| vLLM / LocalAI / text-generation-webui | Self-hosted | Optional | Yes |
| Any OpenAI-compatible endpoint | Varies | Varies | Varies |
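For a cloud provider, the global configuration uses the same fields as the local setups shown later; for example, with OpenAI (endpoint and model values taken from the configuration steps below):

```text
Provider:  OpenAI
Endpoint:  https://api.openai.com/
Model ID:  gpt-4o
API Key:   (Jenkins Secret Text credential)
```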
- Go to Manage Jenkins → Plugins → Available plugins
- Search for ForgeAI Pipeline Intelligence
- Install and restart Jenkins
Navigate to Manage Jenkins → System → ForgeAI Pipeline Intelligence:
- Select your LLM Provider (OpenAI / Anthropic / Ollama)
- Enter the Endpoint URL (e.g., `https://api.openai.com/`)
- Enter the Model ID (e.g., `gpt-4o`)
- Select or create an API Key credential (Jenkins Secret Text)
- Click Test Connection to verify
- Enable or disable individual analyzers
- Save
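If you prefer to create the Secret Text credential from the Jenkins script console rather than the UI, a sketch using the standard Credentials plugin API (the credential ID `forgeai-api-key` and the placeholder key are arbitrary choices, not values mandated by ForgeAI):

```groovy
import com.cloudbees.plugins.credentials.CredentialsScope
import com.cloudbees.plugins.credentials.SystemCredentialsProvider
import com.cloudbees.plugins.credentials.domains.Domain
import hudson.util.Secret
import org.jenkinsci.plugins.plaincredentials.impl.StringCredentialsImpl

// Create a global Secret Text credential holding the LLM API key
def creds = new StringCredentialsImpl(
    CredentialsScope.GLOBAL,
    'forgeai-api-key',              // credential ID (your choice)
    'ForgeAI LLM API key',          // description
    Secret.fromString('sk-...')     // paste your real key here
)
SystemCredentialsProvider.instance.store.addCredentials(Domain.global(), creds)
```

Select the resulting credential ID in the ForgeAI global configuration.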
| Setting | Description | Default |
|---|---|---|
| LLM Provider | OpenAI / Anthropic / Ollama | OpenAI |
| Endpoint URL | API base URL | |
| Model ID | Model to use | |
| API Key Credential | Jenkins Secret Text credential ID | — |
| Temperature | LLM creativity (0.0–1.0) | |
| Timeout | Request timeout in seconds | |
| Max Tokens | Maximum response length | |
| Per-analyzer toggles | Enable or disable each analyzer globally | All enabled |
| Publish HTML Report | Generate the HTML report artifact | |
| Fail on Low Score | Fail build below the threshold | |
| Score Threshold | Minimum passing composite score (1–10) | |
| Custom System Prompt | Text prepended to every LLM prompt | — |
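The Custom System Prompt is free-form text; a hypothetical example for a codebase with strict layering rules:

```text
This codebase follows hexagonal architecture on Java 17.
Flag any direct repository access from controllers, and treat
findings in payment-related packages as high severity.
```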
Runs multiple analyzers in sequence and produces a composite report.
```groovy
stage('ForgeAI Intelligence') {
    steps {
        script {
            def report = forgeAI(
                analyzers: ['code-review', 'vulnerability', 'architecture-drift',
                            'test-gaps', 'dependency-risk', 'release-readiness'],
                sourceGlob: 'src/**/*.java',
                contextInfo: 'Spring Boot microservice, hexagonal architecture',
                failOnCritical: true,
                criticalThreshold: 4
            )
            echo "Composite Score: ${report.compositeScore}/10"
        }
    }
    post {
        always {
            archiveArtifacts artifacts: 'forgeai-reports/**', allowEmptyArchive: true
            publishHTML(target: [
                reportDir: 'forgeai-reports',
                reportFiles: 'forgeai-report.html',
                reportName: 'ForgeAI Report'
            ])
        }
    }
}
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| `analyzers` | list of strings | All 7 analyzers | Which analyzers to run |
| `sourceGlob` | string | | Glob patterns for source files |
| `contextInfo` | string | | Project description, architecture, or constraints |
| `failOnCritical` | boolean | | Fail build if composite score falls below threshold |
| `criticalThreshold` | int | | Minimum composite score (1–10) |
Returns a Map with: `compositeScore`, `totalFindings`, `criticalCount`, `analyzerCount`, and per-analyzer scores (e.g., `code-reviewScore`, `vulnerabilityScore`).
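The returned map can drive gating logic directly; a sketch (the score threshold of 5 is an illustrative choice, not a plugin default):

```groovy
script {
    def report = forgeAI(analyzers: ['code-review', 'vulnerability'])
    // Mark the build unstable instead of failing outright
    if (report.criticalCount > 0 || report.vulnerabilityScore < 5) {
        unstable("ForgeAI: ${report.criticalCount} critical findings, " +
                 "vulnerability score ${report.vulnerabilityScore}/10")
    }
}
```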
Runs one analyzer against provided source code.
```groovy
def result = forgeAIScan(
    analyzer: 'vulnerability',
    source: readFile('src/main/java/App.java'),
    context: 'Java 17 REST API handling PII data'
)
if (result.criticalCount > 0) {
    error("Security scan found ${result.criticalCount} critical vulnerabilities")
}
```
```groovy
// Assumes 'src' was loaded earlier, e.g. def src = readFile('src/main/java/App.java')
stage('ForgeAI Parallel') {
    parallel {
        stage('Security')     { steps { script { forgeAIScan analyzer: 'vulnerability', source: src } } }
        stage('Architecture') { steps { script { forgeAIScan analyzer: 'architecture-drift', source: src } } }
        stage('Test Gaps')    { steps { script { forgeAIScan analyzer: 'test-gaps', source: src } } }
    }
}
```
ForgeAI supports fully offline operation — no data ever leaves your network.
```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a code-focused model
ollama pull deepseek-coder:6.7b   # Fast, good for most use cases (~4 GB)
ollama pull codellama:13b         # Meta's code model

# Verify it is running
curl http://localhost:11434/api/tags
```
Jenkins global config:
```text
Provider:  Ollama (Local)
Endpoint:  http://localhost:11434
Model ID:  deepseek-coder:6.7b
API Key:   (leave blank)
```
- Download from lmstudio.ai
- Load any GGUF model (e.g., `deepseek-coder-v2`)
- Start the local server (default: `http://localhost:1234`)
- In Jenkins, select OpenAI / OpenAI-Compatible, set the endpoint to `http://localhost:1234/`, and leave the API Key blank
Every build generates a self-contained HTML report with:
- Composite score and release verdict (SHIP_IT / CAUTION / HOLD / BLOCK)
- Per-analyzer breakdown with individual scores
- Detailed findings with severity, file location, and suggested fixes
- Dark theme optimized for readability
The report is written to `forgeai-reports/forgeai-report.html` in the workspace. Use the HTML Publisher plugin or `archiveArtifacts` to surface it on the build page.
| Requirement | Minimum |
|---|---|
| Jenkins | 2.528.3 LTS |
| Java (runtime) | 17 |
| LLM | OpenAI API key, Anthropic API key, or Ollama running locally |
- Report bugs and request features on the GitHub issue tracker
- See CONTRIBUTING.md for pull request guidelines