153 changes: 153 additions & 0 deletions .github/workflows/analyze.yml
@@ -0,0 +1,153 @@
name: AI Slop Gate Static Analysis

on:
  pull_request:
    branches: [ main ]
  push:
    branches: [ main ]
  workflow_dispatch:

permissions:
  pull-requests: write
  contents: read

jobs:
  static-analysis:
    runs-on: ubuntu-latest
    timeout-minutes: 20

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      # Restore dependencies so Syft/Trivy can see full metadata
      - name: Restore dependencies
        run: |
          if [ -f "requirements.txt" ]; then
            pip install -r requirements.txt --quiet || true
          fi
          if [ -f "package-lock.json" ]; then
            npm ci --quiet || true
          fi
          echo "✅ Dependency restore complete"

      - name: Cache ai-slop-gate data
        uses: actions/cache@v4
        with:
          path: ~/.cache/ai-slop-gate
          key: ai-slop-gate-cache-${{ runner.os }}-${{ hashFiles('**/*.py', 'policy.yml') }}

      - name: Run Static Analysis
        id: static_gate
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          mkdir -p ~/.cache/ai-slop-gate

          POLICY_FLAG=""
          if [ -f "policy.yml" ]; then
            POLICY_FLAG="--policy /data/policy.yml"
          fi

          # FIX: Run as root to avoid "Permission denied" on the /data mount.
          # Files created here (sbom.json etc.) live on the runner's disk.
          docker run --rm \
            --user root \
            -v "${{ github.workspace }}:/data" \
            -v ~/.cache/ai-slop-gate:/root/.cache/ai-slop-gate \
            -e GITHUB_TOKEN \
            ghcr.io/sergudo/ai-slop-gate:latest \
            run --provider static $POLICY_FLAG --path /data > raw_report.txt 2>&1 || true

          cat raw_report.txt

          # Parse outputs for the PR comment.
          # NOTE: an `|| echo` fallback after these pipelines never fires,
          # because awk/sed exit 0 even on empty input — default explicitly.
          VERDICT=$(grep -m1 "Policy Verdict:" raw_report.txt | awk '{print $NF}')
          VERDICT="${VERDICT:-UNKNOWN}"
          FINDINGS=$(grep -m1 "Total findings:" raw_report.txt | awk '{print $NF}')
          FINDINGS="${FINDINGS:-0}"
          COMP_COUNT=$(grep "Generated SBOM with" raw_report.txt | sed -E 's/.*with ([0-9]+) dependencies.*/\1/' | head -1)
          COMP_COUNT="${COMP_COUNT:-0}"
          CVE_COUNT=$(grep "Trivy Scan Complete. Found" raw_report.txt | sed -E 's/.*Found ([0-9]+) vulnerabilities.*/\1/' | head -1)
          CVE_COUNT="${CVE_COUNT:-0}"

          # Extract the top 10 components for PR visibility
          if [ -f "sbom.json" ]; then
            TOP10=$(jq -r '.artifacts[:10] | .[] | "- \(.name) (\(.version))"' sbom.json)
          else
            TOP10="No components found."
          fi

          {
            echo "verdict=$VERDICT"
            echo "findings=$FINDINGS"
            echo "components=$COMP_COUNT"
            echo "cves=$CVE_COUNT"
            # Multiline values must use the <<delimiter output syntax
            echo "top10<<TOP10_EOF"
            echo "$TOP10"
            echo "TOP10_EOF"
          } >> "$GITHUB_OUTPUT"

      - name: Upload SBOM artifacts
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: sbom-reports-${{ github.run_number }}
          path: |
            sbom*.json
          retention-days: 30

      - name: Post PR Report
        if: github.event_name == 'pull_request' && always()
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          # Extract the formatted report from the logs
          sed -n '/=== AI SLOP GATE REPORT ===/,/=== END OF REPORT ===/p' raw_report.txt > clean_report.md

          if [ ! -s clean_report.md ]; then
            echo "⚠️ No report found in logs" > clean_report.md
          fi

          VERDICT="${{ steps.static_gate.outputs.verdict }}"
          FINDINGS="${{ steps.static_gate.outputs.findings }}"

          # Determine emoji and status label
          EMOJI="❓"
          STATUS="UNKNOWN"
          if [ "$VERDICT" = "BLOCKING" ]; then EMOJI="🚨"; STATUS="**BLOCKING**"; fi
          if [ "$VERDICT" = "ADVISORY" ]; then EMOJI="⚠️"; STATUS="**ADVISORY**"; fi
          if [ "$VERDICT" = "ALLOW" ]; then EMOJI="✅"; STATUS="**PASSED**"; fi

          cat > pr_comment.md << EOF
          ## $EMOJI AI Slop Gate Analysis

          **Status:** $STATUS
          **Findings:** $FINDINGS issue(s) detected

          ---
          $(cat clean_report.md)

          ---
          ### Supply Chain Information (SBOM)
          - **Components detected:** ${{ steps.static_gate.outputs.components }}
          - **CVEs found (Trivy):** ${{ steps.static_gate.outputs.cves }}
          - **Standards:** SPDX 2.3, CycloneDX 1.6

          <details>
          <summary>Component Preview (Top 10)</summary>

          ${{ steps.static_gate.outputs.top10 }}

          </details>

          <sub>Report ID: \`${{ github.run_id }}\`</sub>
          EOF

          gh pr comment ${{ github.event.pull_request.number }} --body-file pr_comment.md --repo ${{ github.repository }}

      # Final step: fail the build if the policy verdict is BLOCKING
      - name: Final Verdict
        if: always()
        run: |
          if [ "${{ steps.static_gate.outputs.verdict }}" = "BLOCKING" ]; then
            echo "❌ FAIL: Blocking security issues or policy violations found."
            exit 1
          fi

          # Clean up generated files so they don't get committed by accident
          # (though GitHub checkout usually cleans up anyway)
          rm -f sbom*.json raw_report.txt clean_report.md pr_comment.md

1 change: 1 addition & 0 deletions .slop/supply_chain.json
@@ -0,0 +1 @@
{"dependencies": [{"name": "bad-licensed-pkg", "license": "GPL-3.0"}, {"name": "another-risk", "license": "AGPL-3.0"}]}
21 changes: 0 additions & 21 deletions Dockerfile

This file was deleted.

196 changes: 186 additions & 10 deletions README.md
@@ -150,16 +150,192 @@ It is divided into two sections:

---

# 🧨 Summary of Violations

| Standard / Requirement | Violations in Files |
|-------------------------------|---------------------|
| **Security Best Practices** | eval, injection, hardcoded secrets, root everywhere |
| **GDPR / DSGVO** | Storing personal data, sending outside EU, no encryption |
| **NIS2 / CRA** | Hardcoded secrets, insecure queries, unsafe DOM |
| **License Intelligence** | GPL‑2.0 / GPL‑3.0 contamination |
| **AI Hallucination Protection** | Import of non‑existent or typosquatted packages |
| **DevOps** | Bloated Dockerfile, unsafe permissions, invalid healthchecks |
# Kubernetes Silent Slop — Production Failure Edition
### *A deceptively clean manifest hiding catastrophic architectural flaws.*

This file looks harmless at first glance — tidy YAML, valid syntax, no obvious red flags.
But beneath the surface, it is a **silent production killer**:
a collection of subtle, AI-generated logic errors that slip past static scanners yet break your system in ways that are painful to debug.

It exists as a **teaching tool**, a **misconfiguration detector test**, and a **warning** for engineers who trust “clean-looking” manifests too much.

It contains the following classes of silent failure:

---

## Service & Deployment Mismatch
This manifest defines a Service that cannot route traffic to any Pod:

- Service selects `version=v2`
- Deployment labels Pods as `version=v2.1`
- Result: **0 endpoints**, 100% traffic black-holed
- Kubernetes still reports the Service as “healthy”

This is a silent outage waiting to happen.
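The full manifest is not reproduced in this README, but the mismatch has roughly this shape (resource names are illustrative, borrowing `billing-backend` from the HPA section below):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: billing-backend
spec:
  selector:
    app: billing-backend
    version: v2          # the Service looks for Pods labeled version=v2...
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: billing-backend
spec:
  selector:
    matchLabels:
      app: billing-backend
  template:
    metadata:
      labels:
        app: billing-backend
        version: "v2.1"  # ...but every Pod is labeled v2.1, so zero endpoints match
```

`kubectl get endpoints billing-backend` exposes the problem immediately: the endpoint list stays empty even while every Pod is Running.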

---

## Broken Port Mapping
The Service forwards traffic to:

- `targetPort: 9090`
- The container listens on `8080`

No warnings. No logs. No events.
Just a dead service.
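Sketched out (only the port numbers come from the description above; everything else is abridged and illustrative), the mismatch looks like this:

```yaml
apiVersion: v1
kind: Service
spec:
  ports:
    - port: 80
      targetPort: 9090   # traffic is forwarded to 9090...
---
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: app
          ports:
            - containerPort: 8080   # ...but the process listens on 8080
```

Both documents are syntactically valid on their own, so nothing flags the disagreement.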

---

## Readiness Probe on a Non-Existent Port
The readiness probe checks:

- `tcpSocket: 3000`
- The container exposes only `8080`

Consequences:

- Pods never become Ready
- Rollouts stall
- Autoscaling breaks
- Traffic never flows

Everything looks “up”, but nothing actually serves requests.
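A minimal sketch of the probe (delay and period values are illustrative; the ports come from the description above):

```yaml
containers:
  - name: app
    ports:
      - containerPort: 8080   # the only port the container actually exposes
    readinessProbe:
      tcpSocket:
        port: 3000            # probed port — nothing listens here
      initialDelaySeconds: 5
      periodSeconds: 10
```

`kubectl describe pod` will show readiness probe failures repeating forever, while the container process itself runs without error.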

---

## Impossible Resource Configuration
The container requests:

- `128Mi` memory

But limits it to:

- `64Mi` memory

Depending on the Kubernetes version and runtime, this can cause:

- Immediate scheduling failure
- Constant eviction and CrashLoopBackOff
- Node-level OOM storms

This is a production-blocking misconfiguration disguised as a normal resource block.
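The offending block, sketched from the values above:

```yaml
resources:
  requests:
    memory: "128Mi"   # what the scheduler is asked to reserve...
  limits:
    memory: "64Mi"    # ...yet the hard cap is lower: a limit must be >= its request
```

Recent API servers reject requests greater than limits at validation time, which is why the failure mode varies by Kubernetes version.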

---

## NetworkPolicy That Pretends to Be Secure
The manifest includes:

```yaml
ingress:
- from: []
```

An empty `from` list effectively allows **all** sources.
The name suggests security; the behavior does the opposite.

This is a stealth security hole that many reviewers will skim past.
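For contrast, a policy that genuinely denies all ingress selects the Pods but defines no ingress rules at all (the policy name here is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}      # applies to every Pod in the namespace
  policyTypes:
    - Ingress          # Ingress is restricted and no rules are listed,
                       # so no inbound traffic is allowed
```

The difference between "a rule that matches all sources" and "no rules at all" is exactly one list item, which is what makes the slop version so easy to miss.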

---

## HPA Targeting a Non-Existent Deployment
The HorizontalPodAutoscaler references:

- `billing-backend-v2`

But the actual Deployment is:

- `billing-backend`

Result:

- Autoscaling never triggers
- No scaling events
- No protection under load

The system appears configured for autoscaling, but it is not.
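The dangling reference, sketched with illustrative replica counts and HPA name:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: billing-backend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: billing-backend-v2   # no Deployment with this name exists
  minReplicas: 2
  maxReplicas: 10
```

`kubectl apply` accepts this without complaint; only the HPA's status conditions (via `kubectl describe hpa`) reveal that the scale target cannot be found.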

---

## HPA With Unrealistic Thresholds
The HPA uses:

- `averageUtilization: 10` for memory

This is an unrealistically low threshold and will:

- Cause constant scale up/down flapping
- Create pod churn and instability
- Amplify latency and error spikes under normal load

Autoscaling becomes a source of chaos instead of resilience.
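The metric block in question (only the `averageUtilization: 10` value comes from the manifest; the rest is the standard `autoscaling/v2` shape):

```yaml
metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 10   # scale-out triggers at 10% average memory use
```

Since an idle runtime's baseline memory usage typically already exceeds 10% of its request, this HPA is effectively pinned in scale-out mode, churning replicas for no benefit.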

---

## AI-Generated Metadata Contradictions
The manifest contains annotations like:

- `ai-slop-gate.check: "passed-by-internal-llm"`
- `security.policy: "strict-but-not-really"`

These provide no real guarantees and create a **false sense of safety**.
They are classic signs of AI-generated configuration slop: confident wording, zero actual effect.
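Annotations are arbitrary key/value strings; unless some controller reads and enforces them, they are inert documentation:

```yaml
metadata:
  annotations:
    # Free-form text — no admission controller or policy engine
    # acts on either of these keys.
    ai-slop-gate.check: "passed-by-internal-llm"
    security.policy: "strict-but-not-really"
```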

---

## Why This File Is Dangerous
This manifest:

- Passes YAML validation
- Applies cleanly with `kubectl`
- Looks “reasonable” in a quick code review
- Slips past many static scanners

But it fails at:

- Traffic routing
- Readiness and rollout behavior
- Autoscaling correctness
- Resource stability
- Network isolation
- Operational reliability

It is a textbook example of **silent Kubernetes failure** — the kind that only shows up at 3 AM when production is already down.

---

## Final Verdict
If you ever see a manifest like this in a real system:

- Stop the rollout
- Audit every selector
- Validate every probe
- Check every port mapping
- Verify every HPA target and threshold
- Never trust “clean YAML” without behavioral validation

This file is a warning.
A lesson.
A museum exhibit of AI-generated configuration slop.

Use it responsibly — or rather, **never use it at all**.

---

## Final Verdict
If you ever see code like this in a real project:

- Close the laptop
- Walk away
- Touch grass
- Reevaluate your life choices

This file is a warning.
A relic.
A cursed artifact.
A proud resident of the **Museum of Software Horrors**.

Use it responsibly — or rather, **don’t use it at all**.


---
