
KubeArmor Panic if uninitialised docker socket exists #1948

Open
daemon1024 opened this issue Jan 22, 2025 · 2 comments · May be fixed by #1958
Labels: bug (Something isn't working) · good first issue (Good for newcomers) · help wanted (Extra attention is needed)

Comments

@daemon1024 (Member)

Bug Report

KubeArmor Panic if uninitialised docker socket exists

General Information

	Docker, err = NewDockerHandler()
	if err != nil {
		dm.Logger.Errf("Failed to create new Docker client: %s", err)
	}
}

if containerList, err := Docker.DockerClient.ContainerList(context.Background(), container.ListOptions{}); err == nil {

We don't return if initialising the Docker handler fails, so we go on to call ContainerList on a nil client, which panics. We should return early from the function.
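A minimal sketch of the early-return fix, reusing the names from the excerpt above; the surrounding `if Docker == nil` guard and the exact placement inside dockerHandler.go are my assumptions, not the actual patch:

if Docker == nil {
	Docker, err = NewDockerHandler()
	if err != nil {
		dm.Logger.Errf("Failed to create new Docker client: %s", err)
		// bail out before any method is called on the nil Docker handler
		return
	}
}

// only reached with a usable client
if containerList, err := Docker.DockerClient.ContainerList(context.Background(), container.ListOptions{}); err == nil {
	// ... handle already-deployed containers ...
}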

  • Environment description: Ubuntu 18.04
  • Orchestration system version in use: none (unorchestrated)

To Reproduce

The Docker socket file (/var/run/docker.sock) exists, but the Docker daemon is down.
KubeArmor panics on init.

Expected behavior

No panic

Panic Trace

Jan 21 02:40:15 ip-172-31-28-212 kubearmor[6963]: 2025-01-21 02:40:15.673868        INFO        Verifying Docker API client version: 1.44
Jan 21 02:40:15 ip-172-31-28-212 kubearmor[6963]: 2025-01-21 02:40:15.673994        ERROR        Failed to create new Docker client: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Jan 21 02:40:15 ip-172-31-28-212 kubearmor[6963]: github.com/kubearmor/KubeArmor/KubeArmor/log.Err
Jan 21 02:40:15 ip-172-31-28-212 kubearmor[6963]:         /home/runner/work/KubeArmor/KubeArmor/KubeArmor/log/logger.go:103
Jan 21 02:40:15 ip-172-31-28-212 kubearmor[6963]: github.com/kubearmor/KubeArmor/KubeArmor/feeder.(*Feeder).Errf
Jan 21 02:40:15 ip-172-31-28-212 kubearmor[6963]:         /home/runner/work/KubeArmor/KubeArmor/KubeArmor/feeder/feeder.go:446
Jan 21 02:40:15 ip-172-31-28-212 kubearmor[6963]: github.com/kubearmor/KubeArmor/KubeArmor/core.(*KubeArmorDaemon).GetAlreadyDeployedDockerContainers
Jan 21 02:40:15 ip-172-31-28-212 kubearmor[6963]:         /home/runner/work/KubeArmor/KubeArmor/KubeArmor/core/dockerHandler.go:265
Jan 21 02:40:15 ip-172-31-28-212 kubearmor[6963]: github.com/kubearmor/KubeArmor/KubeArmor/core.KubeArmor
Jan 21 02:40:15 ip-172-31-28-212 kubearmor[6963]:         /home/runner/work/KubeArmor/KubeArmor/KubeArmor/core/kubeArmor.go:585
Jan 21 02:40:15 ip-172-31-28-212 kubearmor[6963]: main.main
Jan 21 02:40:15 ip-172-31-28-212 kubearmor[6963]:         /home/runner/work/KubeArmor/KubeArmor/KubeArmor/main.go:79
Jan 21 02:40:15 ip-172-31-28-212 kubearmor[6963]: runtime.main
Jan 21 02:40:15 ip-172-31-28-212 kubearmor[6963]:         /home/runner/go/pkg/mod/golang.org/[email protected]/src/runtime/proc.go:267
Jan 21 02:40:15 ip-172-31-28-212 kubearmor[6963]: panic: runtime error: invalid memory address or nil pointer dereference
Jan 21 02:40:15 ip-172-31-28-212 kubearmor[6963]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x19e0035]
Jan 21 02:40:15 ip-172-31-28-212 kubearmor[6963]: goroutine 1 [running]:
Jan 21 02:40:15 ip-172-31-28-212 kubearmor[6963]: github.com/kubearmor/KubeArmor/KubeArmor/core.(*KubeArmorDaemon).GetAlreadyDeployedDockerContainers(0xc00015b800)
Jan 21 02:40:15 ip-172-31-28-212 kubearmor[6963]:         /home/runner/work/KubeArmor/KubeArmor/KubeArmor/core/dockerHandler.go:269 +0xb5
Jan 21 02:40:15 ip-172-31-28-212 kubearmor[6963]: github.com/kubearmor/KubeArmor/KubeArmor/core.KubeArmor()
Jan 21 02:40:15 ip-172-31-28-212 kubearmor[6963]:         /home/runner/work/KubeArmor/KubeArmor/KubeArmor/core/kubeArmor.go:585 +0x1365
Jan 21 02:40:15 ip-172-31-28-212 kubearmor[6963]: main.main()
Jan 21 02:40:15 ip-172-31-28-212 kubearmor[6963]:         /home/runner/work/KubeArmor/KubeArmor/KubeArmor/main.go:79 +0x3ed
Jan 21 02:40:17 ip-172-31-28-212 systemd[1]: kubearmor.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Jan 21 02:40:17 ip-172-31-28-212 systemd[1]: kubearmor.service: Failed with result 'exit-code'.
Jan 21 02:40:27 ip-172-31-28-212 systemd[1]: kubearmor.service: Service hold-off time over, scheduling restart.
Jan 21 02:40:27 ip-172-31-28-212 systemd[1]: kubearmor.service: Scheduled restart job, restart counter is at 6.
Jan 21 02:40:27 ip-172-31-28-212 systemd[1]: Stopped KubeArmor.
Jan 21 02:40:27 ip-172-31-28-212 systemd[1]: Started KubeArmor.
daemon1024 added the bug, good first issue, and help wanted labels on Jan 22, 2025
@Manik2708

Would like to work on it!

@Manik2708

@daemon1024 Would it be fine to return an error from GetAlreadyDeployedDockerContainers? I can see 4 usages of it, and after each one, monitoring of Docker events is started, so we could terminate KubeArmor if a situation like this appears!
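A rough sketch of that idea, purely for illustration; the signature change and the nil check below are my assumptions (and assume the standard library errors package is imported), not necessarily what the linked PR implements:

// GetAlreadyDeployedDockerContainers returns an error instead of panicking
// when the Docker handler could not be initialised.
func (dm *KubeArmorDaemon) GetAlreadyDeployedDockerContainers() error {
	if Docker == nil || Docker.DockerClient == nil {
		// surface the initialisation failure to the caller
		return errors.New("docker client is not initialised")
	}
	// ... existing logic that lists and handles already-deployed containers ...
	return nil
}

Callers could then log the error and stop the daemon instead of starting Docker event monitoring with an unusable client.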

Manik2708 linked a pull request (#1958) on Feb 1, 2025 that will close this issue.