The Operator injected with proxy failed to start. #32
Comments
/assign @wu8685
Thanks for your report, @qiuming520. Please check:
If none of the above issues have occurred, I may need more information, such as the complete pod YAML, pod events, etc.
This is my Operator's pod:
Can you provide the last log of the orchestrator container when it failed to start with ctrlmesh-proxy injected? You can find it by command
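The command itself is elided in the capture; as an illustrative sketch (pod, container, and namespace names taken from elsewhere in this thread), the log of the last terminated run of a crashed container can be fetched with `kubectl logs --previous`:

```shell
# Illustrative only: names are assumed from this thread.
# --previous returns the log of the last terminated run of the container.
kubectl logs mysql-operator-0 -c orchestrator --previous -n test-mysql-1
```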
I have added a securityContext for my containers (operator and orchestrator):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-operator
  namespace: test-mysql-1
spec:
  template:
    ... ...
    spec:
      containers:
      - name: operator
        securityContext:
          privileged: true
        ... ...
      - name: orchestrator
        securityContext:
          privileged: true
        ... ...
      securityContext: {}
      ... ...
```

As you can see in the picture below, the file `/etc/kubernetes/kubeconfig/fake-kubeconfig.yaml` has been mounted. But the issue still exists~ @Eikykun
I tried to reproduce this issue using two sample operator containers, but was not successful. Can you provide a scenario that reproduces this issue in kind? @qiuming520
Of course! My Operator: https://github.com/bitpoke/mysql-operator

My scenario: some pods use the canary namespaces, and some pods use the normally released namespaces.

```yaml
apiVersion: ctrlmesh.kusionstack.io/v1alpha1
kind: ShardingConfig
metadata:
  name: sharding-root
  namespace: test-mysql-1
spec:
  root:
    prefix: mysql-operator
    targetStatefulSet: mysql-operator
    canary:
      replicas: 1
      inNamespaces:
      - mysql-01
      - mysql-02
      - mysql-03
    auto:
      everyShardReplicas: 1
      shardingSize: 1
    resourceSelector:
    - relateResources:
      - apiGroups:
        - '*'
        resources:
        - '*'
  controller:
    leaderElectionName: mysql-operator-leader-election
```
I briefly looked at the
Thanks for your reply, it helped me resolve my confusion~ @Eikykun
General Question
My Operator: mysql-operator
k8s version: v1.21.5
controller-mesh: v0.1.2
shardingconfig-root:
My Operator has 2 containers (operator and orchestrator). When I do not patch in the labels required by ctrlmesh (`ctrlmesh.kusionstack.io/enable-proxy: "true"`, `ctrlmesh.kusionstack.io/watching: "true"`), the pod starts successfully. After patching in the labels, the ctrlmesh-proxy and operator containers start normally, but orchestrator fails to start with the following error:
Error starting command: `--kubeconfig=/etc/kubernetes/kubeconfig/fake-kubeconfig.yaml` - fork/exec --kubeconfig=/etc/kubernetes/kubeconfig/fake-kubeconfig.yaml: no such file or directory
Checking the pod with `kubectl get pod/mysql-operator-0 -oyaml`, I find that the `/etc/kubernetes/kubeconfig/fake-kubeconfig.yaml` file has been mounted. Who can help me?
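An observation on the error text (my own reading, not confirmed by the maintainers): `fork/exec <name>: no such file or directory` is exactly the message Go's `os.StartProcess` produces when argv[0] itself cannot be executed. That is consistent with the container's entrypoint being lost during injection, so that the remaining `--kubeconfig=...` flag is exec'd as if it were the program. A minimal Go sketch reproduces the message:

```go
package main

import (
	"fmt"
	"os"
)

// tryStart fork/execs the given path directly (no PATH lookup) and
// returns the start error, mirroring what happens when a flag string
// ends up as argv[0].
func tryStart(path string) error {
	proc, err := os.StartProcess(path, []string{path}, &os.ProcAttr{})
	if err == nil {
		proc.Wait()
	}
	return err
}

func main() {
	err := tryStart("--kubeconfig=/etc/kubernetes/kubeconfig/fake-kubeconfig.yaml")
	// prints: fork/exec --kubeconfig=/etc/kubernetes/kubeconfig/fake-kubeconfig.yaml: no such file or directory
	fmt.Println(err)
}
```

If that reading is right, spelling out the orchestrator container's `command`/`args` explicitly in the StatefulSet, rather than relying on the image's entrypoint, would be worth trying.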