Look, I'll be real with you. After spending way too many hours at 2 AM debugging Kubernetes pods, I realized I was typing the same damn commands over and over again. You know the drill:
kubectl get secret my-secret -o yaml
# Copy that base64 string
echo "dGhpc19pc19hbm5veWluZw==" | base64 -d
# Repeat for each secret value... 😩

Or the classic pod debugging dance:
kubectl describe pod my-app-xyz
kubectl logs my-app-xyz
kubectl get events --field-selector involvedObject.name=my-app-xyz
kubectl get svc -l app=my-app
# ... 10 more commands later ...

So I built these plugins. They're written in Go, they're fast, and they just work.
The problem: Kubernetes secrets are base64 encoded. Every. Single. Value. Want to check what's in your database connection string? Better get ready to copy-paste into base64 -d like it's 1999.
The solution: Just decode everything automatically and show it to me like a normal human would want to see it.
# Instead of this nonsense
kubectl get secret db-creds -o yaml
# apiVersion: v1
# data:
# password: c3VwZXJzZWNyZXQxMjM= <-- what even is this?
# username: YWRtaW4=
# Just do this
kubectl decode db-creds
# apiVersion: v1
# data:
# password: supersecret123 <-- ah, there we go
# username: admin

Usage:
# Decode a secret (shows actual values, not base64 garbage)
kubectl decode my-secret
# From a specific namespace
kubectl decode my-secret -n production
# All secrets in current namespace
kubectl decode
# All secrets everywhere (because why not)
kubectl decode --all-namespaces
# Table view (when you just need a quick overview)
kubectl decode -o table

The problem: When a pod is broken (and let's be honest, they're always broken), you end up running like 15 kubectl commands just to figure out what's wrong. Is it crashing? Is the image wrong? Is it not getting scheduled? Is there a service? What about the ingress?
And you're frantically switching between describe, logs, get events, get svc, checking the node... It's exhausting.
The solution: One command that just tells you what's wrong.
kubectl diagnose my-broken-pod-abc123

It starts from the pod and works outward, showing you:
- Pod status (is it even running?)
- What controller owns it (Deployment, StatefulSet, etc.)
- Container states (and why they're crashing)
- Recent events (the good stuff)
- Which services route to it
- Ingress rules
- Storage (PVC issues)
- Node information
For healthy pods, you get a nice summary:
Pod: default/nginx-7d4b8c9f-abc12
Status:
Phase: Running ✓
Pod IP: 172.17.0.5
QoS Class: Burstable
Started: 2025-10-10 14:20:15
Controller:
Deployment: nginx-deployment
Containers:
✓ nginx
Image: nginx:1.21
Restarts: 0
Services:
nginx-service (Type: ClusterIP)
80:80/TCP
Node:
minikube
CPU: 2, Memory: 1982Mi, Pods: 110
For broken pods, it actually debugs them for you:
Pod: production/myapp-7d9f8b5c-xk2m9
Status:
Phase: Running
Pod IP: 10.244.1.5
QoS Class: Burstable
Controller:
Deployment: myapp
Containers:
✗ myapp (CrashLoopBackOff)
Image: myapp:latest
Restarts: 7
Message: back-off 5m0s restarting failed container
Error Analysis:
Container: myapp (CrashLoopBackOff)
Error logs:
2025-10-10 15:28:42 ERROR: Failed to connect to database
2025-10-10 15:28:42 Connection refused at postgres:5432
2025-10-10 15:28:42 HTTP 500 Internal Server Error
2025-10-10 15:28:43 FATAL: Application startup failed
Recent Events:
Warning BackOff: Back-off restarting failed container
Configuration:
ConfigMaps:
✓ app-config
✗ legacy-config (not found) ← probably why it's failing
Secrets:
✓ db-credentials
Services:
api-service (Type: ClusterIP)
80:8080/TCP
Node:
worker-node-2
CPU: 4, Memory: 8Gi, Pods: 110
What makes this actually useful:
- Shows exact status - No more generic "unhealthy". You see CrashLoopBackOff, ImagePullBackOff, Error, etc.
- Auto-fetches error logs - Grabs logs from crashed/failed containers and filters for actual errors (including HTTP 4xx/5xx codes)
- Shows missing configs - Lists ConfigMaps and Secrets the pod needs, with a red ✗ for missing ones (instant diagnosis for CreateContainerConfigError)
- Works with everything - Deployments, DaemonSets, StatefulSets, Jobs, CronJobs - it traces the ownership chain automatically
- Smart about Jobs - For one-off Jobs/CronJobs that fail, it knows to fetch current logs (not previous)
- No noise - Only shows sections that matter. No ConfigMaps? Doesn't show that section. It's clean.
Look, I got tired of running kubectl describe, then kubectl logs --previous, then checking if ConfigMaps exist, then looking at events... This does all of it. In one command.
Error log filtering catches:
- Keywords: error, exception, fatal, failed, failure, panic, critical, warn
- HTTP errors: 400, 401, 403, 404, 500, 502, 503, 504
- If no errors found, shows the last 15 lines anyway (because sometimes the problem isn't obvious)
Usage:
# Diagnose any pod (works with Deployments, Jobs, CronJobs, DaemonSets, StatefulSets)
kubectl diagnose my-pod
# With namespace
kubectl diagnose my-pod -n production
# Works great for failed Jobs/CronJobs
kubectl diagnose my-cronjob-12345 -n production
# That's it. One command, full diagnosis.

You need Go 1.19+ and kubectl.
# Clone this repo
git clone https://github.com/yourusername/kubectl-plugins.git
cd kubectl-plugins
# Build them
go build -o kubectl-decode kubectl-decode.go
go build -o kubectl-diagnose kubectl-diagnose.go
chmod +x kubectl-decode kubectl-diagnose
# Put them in your PATH (I use /usr/local/bin but you do you)
sudo cp kubectl-decode kubectl-diagnose /usr/local/bin/
# Or if you don't have sudo access
mkdir -p ~/bin
cp kubectl-decode kubectl-diagnose ~/bin/
echo 'export PATH=$PATH:~/bin' >> ~/.zshrc
source ~/.zshrc
# Check they work
kubectl decode --help
kubectl diagnose --help

That's it. kubectl will auto-discover them because they start with kubectl-.
The original kubectl-decode was a bash script. It worked, but:
- Bash is... bash. Error handling is a nightmare.
- The Go version is way faster (native Kubernetes client)
- One binary, no dependencies, works everywhere
- Type safety is nice when you're not debugging at 2 AM
- Can do smart things like fetching logs from the right place (current vs previous) based on pod state
- Easy to add features without shell script spaghetti
Something not working? Read the error message and figure it out - you're engineers.
The catch? There isn't one. These are tools I actually use. Every. Single. Day. They're MIT licensed, so do whatever you want. Fork them, sell them to your company for a million dollars (good luck with that), tattoo the code on your back, I don't care.
If you find bugs, let me know. If you have ideas for more plugins, PRs are welcome.
- kubectl tail - smarter log tailing with filtering (like diagnose but for live logs)
- kubectl exec-all - run a command in all pods of a deployment
- kubectl restart - restart deployments/daemonsets/statefulsets without typing the full command
- kubectl port-forward-all - port forward to all pods at once
But honestly, diagnose and decode cover most of my daily kubectl frustrations. Everything else is just nice-to-have.
Got a kubectl command you run 50 times a day and you're sick of typing it? Turn it into a plugin and send a PR.
What I'll merge:
- Solves a real problem (not "wouldn't it be cool if...")
- Clean output (no fancy ASCII art or emoji vomit)
- Fast enough that you don't notice
- Doesn't panic on errors like a poorly written Node.js app
What I won't merge:
- Stuff that already exists in kubectl
- "Improvements" that make the output worse
- Anything that requires explaining how to use it for more than 30 seconds
MIT. Do whatever you want. If these plugins save you from one 2 AM debugging session, that's payment enough.
Made with ☕ by someone who's spent way too much time with kubectl.
If this saved you time, star the repo. If it didn't, open an issue and let's fix it.