Cannot create workspace on self-hosted instance #9444
Comments
Hi @wo3hap. Your cluster doesn't meet the required specifications, which is almost certainly the root cause. Specifically, it MUST be running Ubuntu 18.04 or 20.04 (I'd suggest 20.04) and have kernel 5.4 or above. You're running CentOS with kernel 4.18. Can you change your cluster to meet the requirements and try again, please?
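As a quick way to verify the kernel requirement above, the running kernel version can be compared against 5.4 with a small script. This is only a sketch; the `version_ge` helper is mine, not part of any Gitpod tooling:

```shell
#!/bin/sh
# Compare dotted version strings using GNU sort's version sort (-V).
version_ge() {
  # Prints "yes" if $1 >= $2, else "no".
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ] \
    && echo yes || echo no
}

# Strip the distro suffix, e.g. "5.4.0-100-generic" -> "5.4.0".
kernel="$(uname -r | cut -d- -f1)"
echo "kernel $kernel meets the 5.4 requirement: $(version_ge "$kernel" 5.4)"
```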
Yes, I noticed that, so I later switched to Ubuntu 20.04 with a 5.4 kernel. The kernel-version warning in the validation is gone, but I'm still getting the same error...
In addition, my cloud server provider has a system image with k3s pre-installed on CentOS. I chose that image at first when purchasing the server. Later I switched to Ubuntu 20.04 without k3s pre-installed. @mrsimonemms
Next thing to check would be:

```yaml
workspace:
  runtime:
    containerdRuntimeDir: /var/lib/containerd/io.containerd.runtime.v2.task/k8s.io
    containerdSocket: /run/containerd/containerd.sock
```

Those values are not typical of a k3s installation - see my k3s guide for the values to use for these two properties.
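For comparison, k3s ships its own embedded containerd under `/run/k3s/containerd`, so on a k3s cluster these two properties typically look more like the sketch below. The exact paths are an assumption on my part; verify them on your node and against the k3s guide mentioned above:

```yaml
workspace:
  runtime:
    # Typical k3s locations (assumed - check that these paths exist on your node):
    containerdRuntimeDir: /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io
    containerdSocket: /run/k3s/containerd/containerd.sock
```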
I've been occupied these days. I probably messed up the containerd runtime. At first I did not get it working on my own; I checked

Then I turned to a fresh Ubuntu 20.04 and executed

I removed

So, if I got it right, the containerd runtime is a critical part, and this should explain the error.
New:

Validation:

I rendered
I really don't know. I can't see any smoking guns in your config. Can I suggest you use my k3s guide, as I know that works?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Bug description
Hello, I am trying to deploy Gitpod on K3S on my own server. After many attempts (the installation was somewhat confusing for me) I successfully got it deployed and accessible on ports 80 and 443. I added an integration linking to my self-hosted GitLab, just as I had done on gitpod.io. No authentication error occurred. However, when I tried to start a workspace from a repository in GitLab, I got this error:

I checked `gitlab_access.log` and could see my Gitpod instance successfully pulling repositories.

I looked up some similar issues here and questions on Stack Overflow, but I don't think any of them helps...
I am using the 2022.03.1 release. I have it deployed on CentOS 8.2, kernel version `4.18.0-305.3.1.el8.x86_64`, where K3S `v1.22.7+k3s1 (8432d7f2)` is installed. All three DNS records are live globally: one A record and two CNAME records. The Gitpod instance can be accessed through `online-ide.myrootdomain.xxx` (of course not the real domain), and my self-hosted GitLab instance through `repo.myrootdomain.xxx`. Both servers have a public Internet address and an internal network IP.

The TLS certificate for the Gitpod instance:
I chose the gitpod installer for deployment. I created a namespace named `gitpod`, initialized a `gitpod.config.yaml`, and filled the `domain` section with `online-ide.myrootdomain.xxx`. I did not use `cert-manager` because I wanted to use my own TLS certificate, created with `certbot`. I created a secret pointing to my certificate under the `gitpod` namespace.

The `gitpod.config.yaml`:

Validation:

```shell
gitpod-installer validate cluster --kubeconfig /etc/rancher/k3s/k3s.yaml --config gitpod.config.yaml --namespace gitpod
```
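For reference, the certificate secret mentioned above would be a manifest roughly like the following. This is a sketch only: the secret name `https-certificates` and the certbot file names are my assumptions, and the name must match whatever the config actually references:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: https-certificates   # assumed name - must match the name the config points to
  namespace: gitpod
type: kubernetes.io/tls
data:
  tls.crt: <base64 of certbot's fullchain.pem>
  tls.key: <base64 of certbot's privkey.pem>
```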
I executed these:

```shell
gitpod-installer render --config gitpod.config.yaml --namespace gitpod > gitpod.yaml
k3s kubectl apply -f gitpod.yaml
```
The node is definitely ready. Only one node exists, acting as `control-plane,master`, version `v1.22.7+k3s1`; I ran `k3s kubectl get nodes` to check it.

Secrets under the `gitpod` namespace (after deployment), listed with `k3s kubectl get secret -n gitpod`:
All pods under the `gitpod` namespace are running well (`k3s kubectl get pods -n gitpod`).

I ran `k3s kubectl get service -n gitpod` to list the services. All of them have a cluster IP except `mysql-headless` and `ws-daemon`. The `proxy` service, a LoadBalancer occupying ports 80 and 443, has an external IP and is the only one that does.

Later I checked the logs of some pods.
`k3s kubectl logs registry-facade-77mrz registry-facade -n gitpod`:

`k3s kubectl logs ws-manager-5496b997d4-7qkwf ws-manager -n gitpod`:

`k3s kubectl logs ws-daemon-nd5nz ws-daemon -n gitpod`:

`k3s kubectl logs image-builder-mk3-6d5bcf4598-dzpn9 image-builder-mk3 -n gitpod`:

I also looked at the server pod's log (`k3s kubectl logs server-5b5ff8cd75-7gs6j server -n gitpod`) and found this line:

It indicates an auth provider callback. In the payload I noticed the `hostname` is `null`. This probably explains the error, but I have no clue why, or how I could ever solve it.

I am running out of ideas. What am I missing? Any suggestions? I would appreciate any help!
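Given that the `hostname` in the auth callback payload is `null`, the auth-provider section of the config may be worth re-checking. Below is a hypothetical sketch of what such an entry can look like, based on older Gitpod self-hosted documentation; every field name and value here is an assumption to verify against the installer reference for your release:

```yaml
# Hypothetical sketch only - verify the schema against the installer docs.
authProviders:
  - id: GitLab
    host: repo.myrootdomain.xxx        # must match the GitLab instance's hostname
    type: GitLab
    oauth:
      clientId: <application ID of the GitLab OAuth application>
      clientSecret: <secret of the GitLab OAuth application>
```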
Steps to reproduce

- CentOS 8.2, kernel version `4.18.0-305.3.1.el8.x86_64`
- K3S `v1.22.7+k3s1 (8432d7f2)`
- Gitpod deployed at `online-ide.myrootdomain.xxx`. Activate the integration in my account's profile settings and choose to use the Gitpod IDE in my personal private project, thus creating a workspace in Gitpod.

Workspace affected
No response
Expected behavior
Create a workspace on Gitpod instance from my repositories on self-hosted Gitlab instance successfully.
Example repository
No response
Anything else?
No response