Cannot create workspace on self-hosted instance #9444

Closed

ghost opened this issue Apr 21, 2022 · 8 comments
Labels
meta: stale This issue/PR is stale and will be closed soon team: delivery Issue belongs to the self-hosted team

Comments

@ghost

ghost commented Apr 21, 2022

Bug description

Hello, I am trying to deploy Gitpod on K3S on my own server. After many attempts (the installation was somewhat confusing for me) I finally have it deployed and accessible on ports 80 and 443. I have added an integration that links to my self-hosted GitLab, just the same as what I had done on gitpod.io. No authentication error occurred. However, when I tried to start a workspace from a repository in GitLab, I got the error:
Unable to create workplace hostname required

I checked gitlab_access.log and could see my Gitpod instance successfully pulling repositories.

I looked up some similar issues here and questions on Stack Overflow, but none of them helped.

I am using the 2022.03.1 release. It is deployed on CentOS 8.2, kernel version 4.18.0-305.3.1.el8.x86_64, with K3S v1.22.7+k3s1 (8432d7f2) installed. All three DNS records are live globally: one A record and two CNAME records.

The Gitpod instance can be accessed through online-ide.myrootdomain.xxx (not the real domain, of course), and my self-hosted GitLab instance through repo.myrootdomain.xxx. Both servers have a public Internet address and an internal network IP.

The TLS certificate for the Gitpod instance:

DNS Name=*.online-ide.myrootdomain.xxx
DNS Name=*.ws.online-ide.myrootdomain.xxx
DNS Name=online-ide.myrootdomain.xxx
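
These SANs match the three DNS records mentioned above. A quick external check that each name actually resolves (foo is an arbitrary test label, and dig is assumed to be available):

dig +short online-ide.myrootdomain.xxx          # the A record
dig +short foo.online-ide.myrootdomain.xxx      # covered by the *.online-ide CNAME
dig +short foo.ws.online-ide.myrootdomain.xxx   # covered by the *.ws.online-ide CNAME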

I chose the Gitpod installer for deployment. I created a namespace named gitpod, initialized a gitpod.config.yaml, and filled the domain section with online-ide.myrootdomain.xxx. I did not use cert-manager, as I wanted to use my own TLS certificate created with certbot. I created a secret pointing to my certificate under the gitpod namespace.
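
For reference, a minimal sketch of how such a secret can be created from certbot output (the /etc/letsencrypt/live paths are certbot defaults and may differ):

kubectl create secret tls https-certificates \
  --cert=/etc/letsencrypt/live/online-ide.myrootdomain.xxx/fullchain.pem \
  --key=/etc/letsencrypt/live/online-ide.myrootdomain.xxx/privkey.pem \
  --namespace gitpod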

The gitpod.config.yaml:

apiVersion: v1
authProviders: []
blockNewUsers:
  enabled: false
  passlist: []
certificate:
  kind: secret
  name: https-certificates
containerRegistry:
  inCluster: true
database:
  inCluster: true
disableDefinitelyGp: false
domain: "online-ide.myrootdomain.xxx"
kind: Full
metadata:
  region: local
objectStorage:
  inCluster: true
observability:
  logLevel: info
openVSX:
  url: https://open-vsx.org
repository: eu.gcr.io/gitpod-core-dev/build
workspace:
  resources:
    requests:
      cpu: "1"
      memory: 2Gi
  runtime:
    containerdRuntimeDir: /var/lib/containerd/io.containerd.runtime.v2.task/k8s.io
    containerdSocket: /run/containerd/containerd.sock
    fsShiftMethod: fuse

Validation:
gitpod-installer validate cluster --kubeconfig /etc/rancher/k3s/k3s.yaml --config gitpod.config.yaml --namespace gitpod

{
  "status": "WARNING",
  "items": [
    {
      "name": "Linux kernel version",
      "description": "all cluster nodes run Linux \u003e= 5.4.0-0",
      "status": "WARNING",
      "errors": [
        {
          "message": "Invalid Semantic Version kernel version: 4.18.0-305.3.1.el8.x86_64",
          "type": "WARNING"
        }
      ]
    },
    {
      "name": "containerd enabled",
      "description": "all cluster nodes run containerd",
      "status": "OK"
    },
    {
      "name": "Kubernetes version",
      "description": "all cluster nodes run kubernetes version \u003e= 1.21.0-0",
      "status": "OK"
    },
    {
      "name": "affinity labels",
      "description": "all required affinity node labels [gitpod.io/workload_meta gitpod.io/workload_ide gitpod.io/workload_workspace_services gitpod.io/workload_workspace_regular gitpod.io/workload_workspace_headless] are present in the cluster",
      "status": "OK"
    },
    {
      "name": "cert-manager installed",
      "description": "cert-manager is installed and has available issuer",
      "status": "WARNING",
      "errors": [
        {
          "message": "no cluster issuers configured",
          "type": "WARNING"
        }
      ]
    },
    {
      "name": "Namespace exists",
      "description": "ensure that the target namespace exists",
      "status": "OK"
    },
    {
      "name": "https-certificates is present and valid",
      "description": "ensures the https-certificates secret is present and contains the required data",
      "status": "OK"
    }
  ]
}

I executed these commands:
gitpod-installer render --config gitpod.config.yaml --namespace gitpod > gitpod.yaml
k3s kubectl apply -f gitpod.yaml

The node is definitely ready. Only one node exists, acting as control-plane/master, version v1.22.7+k3s1. I ran k3s kubectl get nodes to check the node.
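
The affinity-labels check passed in the validation above, so the labels were already in place; for anyone reproducing this on a single-node cluster, a sketch of applying all five labels from that output (<node-name> is a placeholder):

k3s kubectl label node <node-name> \
  gitpod.io/workload_meta=true \
  gitpod.io/workload_ide=true \
  gitpod.io/workload_workspace_services=true \
  gitpod.io/workload_workspace_regular=true \
  gitpod.io/workload_workspace_headless=true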

Secrets under the gitpod namespace (after deployment):
k3s kubectl get secret -n gitpod

NAME                                  TYPE                                  DATA   AGE
default-token-cnzz7                   kubernetes.io/service-account-token   3      95m
https-certificates                    kubernetes.io/tls                     2      87m
workspace-token-ttxcl                 kubernetes.io/service-account-token   3      85m
ws-manager-token-wwhj9                kubernetes.io/service-account-token   3      85m
dashboard-token-4w8pl                 kubernetes.io/service-account-token   3      85m
registry-facade-token-h4p7n           kubernetes.io/service-account-token   3      85m
ws-daemon-token-925kc                 kubernetes.io/service-account-token   3      85m
gitpod-token-xbk7z                    kubernetes.io/service-account-token   3      85m
nobody-token-hz7vc                    kubernetes.io/service-account-token   3      85m
blobserve-token-dnx6s                 kubernetes.io/service-account-token   3      85m
db-token-xs9l8                        kubernetes.io/service-account-token   3      85m
migrations-token-cfjw4                kubernetes.io/service-account-token   3      85m
agent-smith-token-8sgbv               kubernetes.io/service-account-token   3      85m
ws-proxy-token-mqnkr                  kubernetes.io/service-account-token   3      85m
ca-issuer-ca                          kubernetes.io/tls                     3      85m
ws-manager-bridge-token-78xct         kubernetes.io/service-account-token   3      85m
minio                                 Opaque                                3      85m
registry-secret                       Opaque                                3      85m
rabbitmq                              Opaque                                2      85m
messagebus-certificates-secret-core   Opaque                                3      85m
load-definition                       Opaque                                1      85m
messagebus                            Opaque                                0      85m
messagebus-erlang-cookie              Opaque                                1      85m
builtin-registry-auth                 kubernetes.io/dockerconfigjson        3      85m
mysql                                 Opaque                                6      85m
db-password                           Opaque                                2      85m
server-token-x7xp7                    kubernetes.io/service-account-token   3      85m
minio-token-zqlbd                     kubernetes.io/service-account-token   3      85m
proxy-token-f5fhq                     kubernetes.io/service-account-token   3      85m
docker-registry-token-krwzt           kubernetes.io/service-account-token   3      85m
content-service-token-rv5ql           kubernetes.io/service-account-token   3      85m
openvsx-proxy-token-sfqxj             kubernetes.io/service-account-token   3      85m
rabbitmq-token-2qm2k                  kubernetes.io/service-account-token   3      85m
image-builder-mk3-token-kmpx9         kubernetes.io/service-account-token   3      85m
ide-proxy-token-x72tj                 kubernetes.io/service-account-token   3      85m
ws-manager-tls                        kubernetes.io/tls                     3      85m
builtin-registry-facade-cert          kubernetes.io/tls                     3      85m
ws-daemon-tls                         kubernetes.io/tls                     3      85m
builtin-registry-certs                kubernetes.io/tls                     3      85m
ws-manager-client-tls                 kubernetes.io/tls                     3      85m

All pods under the gitpod namespace are running well.
k3s kubectl get pods -n gitpod

NAME                                 READY   STATUS    RESTARTS      AGE
svclb-proxy-4jtpx                    3/3     Running   3 (76m ago)   106m
agent-smith-xwmc6                    2/2     Running   2 (76m ago)   106m
dashboard-74d756fcd9-sfvsm           1/1     Running   1 (76m ago)   106m
openvsx-proxy-0                      1/1     Running   1 (76m ago)   106m
blobserve-59cbd97c56-mc9ql           2/2     Running   2 (76m ago)   106m
image-builder-mk3-6d5bcf4598-dzpn9   2/2     Running   2 (76m ago)   106m
content-service-855fc6787d-sq27d     1/1     Running   1 (76m ago)   106m
ws-manager-5496b997d4-7qkwf          2/2     Running   2 (76m ago)   106m
ide-proxy-7488df7cfc-2psgt           1/1     Running   1 (76m ago)   106m
registry-facade-77mrz                2/2     Running   2 (76m ago)   106m
registry-ff6d8c4f4-6fllj             1/1     Running   1 (76m ago)   106m
ws-daemon-nd5nz                      2/2     Running   2 (76m ago)   106m
minio-68444c56b7-tgp54               1/1     Running   1 (76m ago)   106m
ws-proxy-59d455b97f-p2994            2/2     Running   5 (75m ago)   106m
proxy-5f8798bd99-g7gnv               2/2     Running   2 (76m ago)   106m
mysql-0                              1/1     Running   1 (76m ago)   106m
messagebus-0                         1/1     Running   1 (76m ago)   106m
server-5b5ff8cd75-7gs6j              2/2     Running   2 (76m ago)   106m
ws-manager-bridge-54ff4b8889-2w5pw   2/2     Running   2 (76m ago)   106m

I ran k3s kubectl get service -n gitpod to list the services; all of them have a cluster IP except mysql-headless and ws-daemon. The proxy service, a LoadBalancer occupying ports 80 and 443, is the only one with an external IP.
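
A quick sanity check that the proxy LoadBalancer really answers on 443 from outside (any HTTP status line, rather than a connection error, counts as a pass):

curl -sSI https://online-ide.myrootdomain.xxx/ | head -n 1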

Later I checked the logs of some pods.
k3s kubectl logs registry-facade-77mrz registry-facade -n gitpod:

{"addr":"127.0.0.1:9500","level":"info","message":"started Prometheus metrics server","serviceContext":{"service":"registry-facade","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:19Z"}
{"fn":"/mnt/pull-secret.json","level":"info","message":"using authentication for backing registries","serviceContext":{"service":"registry-facade","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:19Z"}
{"addr":":6060","level":"info","message":"serving pprof service","serviceContext":{"service":"registry-facade","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:19Z"}
{"level":"info","message":"preparing static layer","serviceContext":{"service":"registry-facade","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:19Z"}
{"level":"info","message":"🏪 registry facade is up and running","serviceContext":{"service":"registry-facade","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:46Z"}
{"addr":":32223","level":"info","message":"HTTPS registry server listening","serviceContext":{"service":"registry-facade","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:46Z"}

k3s kubectl logs ws-manager-5496b997d4-7qkwf ws-manager -n gitpod:

{"level":"info","message":"wsman configuration is valid","serviceContext":{"service":"ws-manager","version":"commit-abd108b30f9e5d8dfd1b1558f19c2f86cb0830d5"},"severity":"INFO","time":"2022-04-21T02:55:15Z"}
I0421 02:55:16.462570       1 request.go:665] Waited for 1.000339392s due to client-side throttling, not priority and fairness, request: GET:https://10.43.0.1:443/apis/storage.k8s.io/v1beta1?timeout=32s
{"addr":"127.0.0.1:9500","level":"info","logger":"controller-runtime.metrics","message":"Metrics server is starting to listen","serviceContext":{"service":"ws-manager","version":"commit-abd108b30f9e5d8dfd1b1558f19c2f86cb0830d5"},"severity":"INFO","time":"2022-04-21T02:55:17Z"}
{"addr":":8080","level":"info","message":"started gRPC server","serviceContext":{"service":"ws-manager","version":"commit-abd108b30f9e5d8dfd1b1558f19c2f86cb0830d5"},"severity":"INFO","time":"2022-04-21T02:55:17Z"}
{"interval":15000000000,"level":"info","message":"starting workspace monitor","serviceContext":{"service":"ws-manager","version":"commit-abd108b30f9e5d8dfd1b1558f19c2f86cb0830d5"},"severity":"INFO","time":"2022-04-21T02:55:17Z"}
{"level":"info","message":"workspace monitor is up and running","serviceContext":{"service":"ws-manager","version":"commit-abd108b30f9e5d8dfd1b1558f19c2f86cb0830d5"},"severity":"INFO","time":"2022-04-21T02:55:17Z"}
{"level":"info","message":"🦸  wsman is up and running. Stop with SIGINT or CTRL+C","serviceContext":{"service":"ws-manager","version":"commit-abd108b30f9e5d8dfd1b1558f19c2f86cb0830d5"},"severity":"INFO","time":"2022-04-21T02:55:17Z"}
{"addr":"localhost:6060","level":"info","message":"serving pprof service","serviceContext":{"service":"ws-manager","version":"commit-abd108b30f9e5d8dfd1b1558f19c2f86cb0830d5"},"severity":"INFO","time":"2022-04-21T02:55:17Z"}
{"addr":"{\"IP\":\"127.0.0.1\",\"Port\":9500,\"Zone\":\"\"}","kind":"metrics","level":"info","message":"Starting server","path":"/metrics","serviceContext":{"service":"ws-manager","version":"commit-abd108b30f9e5d8dfd1b1558f19c2f86cb0830d5"},"severity":"INFO","time":"2022-04-21T02:55:17Z"}
{"addr":"{\"IP\":\"::\",\"Port\":44217,\"Zone\":\"\"}","kind":"health probe","level":"info","message":"Starting server","serviceContext":{"service":"ws-manager","version":"commit-abd108b30f9e5d8dfd1b1558f19c2f86cb0830d5"},"severity":"INFO","time":"2022-04-21T02:55:17Z"}
{"level":"info","logger":"controller.pod","message":"Starting EventSource","reconciler group":"","reconciler kind":"Pod","serviceContext":{"service":"ws-manager","version":"commit-abd108b30f9e5d8dfd1b1558f19c2f86cb0830d5"},"severity":"INFO","source":"kind source: *v1.Pod","time":"2022-04-21T02:55:17Z"}
{"level":"info","logger":"controller.pod","message":"Starting Controller","reconciler group":"","reconciler kind":"Pod","serviceContext":{"service":"ws-manager","version":"commit-abd108b30f9e5d8dfd1b1558f19c2f86cb0830d5"},"severity":"INFO","time":"2022-04-21T02:55:17Z"}
{"level":"info","logger":"controller.pod","message":"Starting workers","reconciler group":"","reconciler kind":"Pod","serviceContext":{"service":"ws-manager","version":"commit-abd108b30f9e5d8dfd1b1558f19c2f86cb0830d5"},"severity":"INFO","time":"2022-04-21T02:55:17Z","worker count":1}
{"level":"info","message":"new subscriber","serviceContext":{"service":"ws-manager","version":"commit-abd108b30f9e5d8dfd1b1558f19c2f86cb0830d5"},"severity":"INFO","subscriberCount":1,"subscriberKey":"k10.42.0.51:60802@1650509742553170113","time":"2022-04-21T02:55:42Z"}
{"level":"info","message":"new subscriber","serviceContext":{"service":"ws-manager","version":"commit-abd108b30f9e5d8dfd1b1558f19c2f86cb0830d5"},"severity":"INFO","subscriberCount":2,"subscriberKey":"k10.42.0.61:51622@1650509789798017003","time":"2022-04-21T02:56:29Z"}

k3s kubectl logs ws-daemon-nd5nz ws-daemon -n gitpod:

{"level":"info","message":"containerd subscription established","serviceContext":{"service":"ws-daemon","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:21Z"}
{"level":"info","location":"/mnt/workingarea","message":"restored workspaces from disk","serviceContext":{"service":"ws-daemon","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:21Z","workspacesLoaded":0,"workspacesOnDisk":0}
{"clientAuth":4,"level":"info","message":"enabling client authentication","serviceContext":{"service":"ws-daemon","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:21Z"}
{"addr":":8080","level":"info","message":"started gRPC server","serviceContext":{"service":"ws-daemon","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:21Z"}
{"addr":"localhost:9500","level":"info","message":"started Prometheus metrics server","serviceContext":{"service":"ws-daemon","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:21Z"}
{"addr":"localhost:6060","level":"info","message":"serving pprof service","serviceContext":{"service":"ws-daemon","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:21Z"}
{"addr":":9999","level":"info","message":"started readiness signal","serviceContext":{"service":"ws-daemon","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:21Z"}
{"level":"info","message":"start hosts source","name":"registryFacade","serviceContext":{"service":"ws-daemon","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:21Z"}
{"level":"info","message":"🧫 ws-daemon is up and running. Stop with SIGINT or CTRL+C","serviceContext":{"service":"ws-daemon","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:21Z"}

k3s kubectl logs image-builder-mk3-6d5bcf4598-dzpn9 image-builder-mk3 -n gitpod:

{"addr":"127.0.0.1:9500","level":"info","message":"started Prometheus metrics server","serviceContext":{"service":"image-builder-mk3","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:14Z"}
{"addr":":6060","level":"info","message":"serving pprof service","serviceContext":{"service":"image-builder-mk3","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:14Z"}
{"component":"grpc","level":"warning","message":"2022/04/21 02:55:15 WARNING: [core] grpc: addrConn.createTransport failed to connect to {ws-manager:8080 ws-manager \u003cnil\u003e \u003cnil\u003e 0 \u003cnil\u003e}. Err: connection error: desc = \"transport: Error while dialing dial tcp: i/o timeout\"","serviceContext":{"service":"image-builder-mk3","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"WARNING","time":"2022-04-21T02:55:15Z"}
{"component":"grpc","level":"warning","message":"2022/04/21 02:55:18 WARNING: [core] grpc: addrConn.createTransport failed to connect to {ws-manager:8080 ws-manager \u003cnil\u003e \u003cnil\u003e 0 \u003cnil\u003e}. Err: connection error: desc = \"transport: Error while dialing dial tcp: i/o timeout\"","serviceContext":{"service":"image-builder-mk3","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"WARNING","time":"2022-04-21T02:55:18Z"}
{"component":"grpc","level":"warning","message":"2022/04/21 02:55:21 WARNING: [core] grpc: addrConn.createTransport failed to connect to {ws-manager:8080 ws-manager \u003cnil\u003e \u003cnil\u003e 0 \u003cnil\u003e}. Err: connection error: desc = \"transport: Error while dialing dial tcp: i/o timeout\"","serviceContext":{"service":"image-builder-mk3","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"WARNING","time":"2022-04-21T02:55:21Z"}
{"component":"grpc","level":"warning","message":"2022/04/21 02:55:28 WARNING: [core] grpc: addrConn.createTransport failed to connect to {ws-manager:8080 ws-manager \u003cnil\u003e \u003cnil\u003e 0 \u003cnil\u003e}. Err: connection error: desc = \"transport: Error while dialing dial tcp: i/o timeout\"","serviceContext":{"service":"image-builder-mk3","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"WARNING","time":"2022-04-21T02:55:28Z"}
{"component":"grpc","level":"warning","message":"2022/04/21 02:55:37 WARNING: [core] grpc: addrConn.createTransport failed to connect to {ws-manager:8080 ws-manager \u003cnil\u003e \u003cnil\u003e 0 \u003cnil\u003e}. Err: connection error: desc = \"transport: Error while dialing dial tcp: i/o timeout\"","serviceContext":{"service":"image-builder-mk3","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"WARNING","time":"2022-04-21T02:55:37Z"}
{"level":"warning","message":"no TLS configured - gRPC server will be unsecured","serviceContext":{"service":"image-builder-mk3","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"WARNING","time":"2022-04-21T02:55:42Z"}
{"addr":":8080","level":"info","message":"started workspace content server","serviceContext":{"service":"image-builder-mk3","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:42Z"}
{"interval":"6h0m0s","level":"info","message":"starting Docker ref pre-cache","refs":["docker.io/gitpod/workspace-full:latest"],"serviceContext":{"service":"image-builder-mk3","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:42Z"}
{"level":"info","message":"👷 image-builder is up and running. Stop with SIGINT or CTRL+C","serviceContext":{"service":"image-builder-mk3","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:42Z"}

I also looked at the server pod's log (k3s kubectl logs server-5b5ff8cd75-7gs6j server -n gitpod)
and found this line:

{"component":"server","severity":"INFO","time":"2022-04-21T02:26:25.218Z","message":"Auth Provider Callback. Path: /auth/repo.myrootdomain.xxx/callback","payload":"{\n  req: <ref *1> IncomingMessage {\n    _readableState: ReadableState {\n      objectMode: false,\n      highWaterMark: 16384,\n      buffer: BufferList { head: null, tail: null, length: 0 },\n      length: 0,\n      pipes: [],\n      flowing: null,\n      ended: true,\n      endEmitted: false,\n      reading: false,\n      constructed: true,\n      sync: true,\n      needReadable: false,\n      emittedReadable: false,\n      readableListening: false,\n      resumeScheduled: false,\n      errorEmitted: false,\n      emitClose: true,\n      autoDestroy: true,\n      destroyed: false,\n      errored: null,\n      closed: false,\n      closeEmitted: false,\n      defaultEncoding: 'utf8',\n      awaitDrainWriters: null,\n      multiAwaitDrain: false,\n      readingMore: true,\n      dataEmitted: false,\n      decoder: null,\n      encoding: null,\n      [Symbol(kPaused)]: null\n    },\n    _events: [Object: null prototype] { end: [Array] },\n    _eventsCount: 1,\n    _maxListeners: undefined,\n    socket: Socket {\n      connecting: false,\n      _hadError: false,\n      _parent: null,\n      _host: null,\n      _readableState: [ReadableState],\n      _events: [Object: null prototype],\n      _eventsCount: 8,\n      _maxListeners: undefined,\n      _writableState: [WritableState],\n      allowHalfOpen: true,\n      _sockname: null,\n      _pendingData: null,\n      _pendingEncoding: '',\n      server: [Server],\n      _server: [Server],\n      parser: [HTTPParser],\n      on: [Function: socketListenerWrap],\n      addListener: [Function: socketListenerWrap],\n      prependListener: [Function: socketListenerWrap],\n      setEncoding: [Function: socketSetEncoding],\n      _paused: false,\n      _httpMessage: [ServerResponse],\n      [Symbol(async_id_symbol)]: 39378,\n      [Symbol(kHandle)]: [TCP],\n      [Symbol(kSetNoDelay)]: false,\n      [Symbol(lastWriteQueueSize)]: 0,\n      [Symbol(timeout)]: null,\n      [Symbol(kBuffer)]: null,\n      [Symbol(kBufferCb)]: null,\n      [Symbol(kBufferGen)]: null,\n      [Symbol(kCapture)]: false,\n      [Symbol(kBytesRead)]: 0,\n      [Symbol(kBytesWritten)]: 0,\n      [Symbol(RequestTimeout)]: undefined\n    },\n    httpVersionMajor: 1,\n    httpVersionMinor: 1,\n    httpVersion: '1.1',\n    complete: true,\n    rawHeaders: [\n      'Host',\n      'online-ide.myrootdomain.xxx',\n      'User-Agent',\n      'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.127 Safari/537.36 Edg/100.0.1185.44',\n      'Accept',\n      'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',\n      'Accept-Encoding',\n      'gzip, deflate, br',\n      'Accept-Language',\n      'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6',\n      'Cookie',\n      'ajs_anonymous_id=f2240086-b9c4-49c3-8b2c-1df774f21f00; gitpod-user=true; _online_ide_myrootdomain_xxx_=s%3A2618104b-3d3e-4582-bf2e-1366a1971b95.KD5qzwFxtgrXanMxr%2FOz5NK7LtoNUXYvHw9JUHK8Hy8',\n      'Dnt',\n      '1',\n      'Referer',\n      'https://repo.myrootdomain.xxx/',\n      'Sec-Ch-Ua',\n      '\" Not A;Brand\";v=\"99\", \"Chromium\";v=\"100\", \"Microsoft Edge\";v=\"100\"',\n      'Sec-Ch-Ua-Mobile',\n      '?0',\n      'Sec-Ch-Ua-Platform',\n      '\"Windows\"',\n      'Sec-Fetch-Dest',\n      'document',\n      'Sec-Fetch-Mode',\n    
  'navigate',\n      'Sec-Fetch-Site',\n      'same-site',\n      'Upgrade-Insecure-Requests',\n      '1',\n      'X-Forwarded-For',\n      '10.42.0.1',\n      'X-Forwarded-Proto',\n      'https',\n      'X-Real-Ip',\n      '10.42.0.1'\n    ],\n    rawTrailers: [],\n    aborted: false,\n    upgrade: false,\n    url: '/auth/repo.myrootdomain.xxx/callback?code=6ed648b80bae72d2d238f97e9fc68b6d84c65317b80f6ecebe864b60cfeb2298',\n    method: 'GET',\n    statusCode: null,\n    statusMessage: null,\n    client: Socket {\n      connecting: false,\n      _hadError: false,\n      _parent: null,\n      _host: null,\n      _readableState: [ReadableState],\n      _events: [Object: null prototype],\n      _eventsCount: 8,\n      _maxListeners: undefined,\n      _writableState: [WritableState],\n      allowHalfOpen: true,\n      _sockname: null,\n      _pendingData: null,\n      _pendingEncoding: '',\n      server: [Server],\n      _server: [Server],\n      parser: [HTTPParser],\n      on: [Function: socketListenerWrap],\n      addListener: [Function: socketListenerWrap],\n      prependListener: [Function: socketListenerWrap],\n      setEncoding: [Function: socketSetEncoding],\n      _paused: false,\n      _httpMessage: [ServerResponse],\n      [Symbol(async_id_symbol)]: 39378,\n      [Symbol(kHandle)]: [TCP],\n      [Symbol(kSetNoDelay)]: false,\n      [Symbol(lastWriteQueueSize)]: 0,\n      [Symbol(timeout)]: null,\n      [Symbol(kBuffer)]: null,\n      [Symbol(kBufferCb)]: null,\n      [Symbol(kBufferGen)]: null,\n      [Symbol(kCapture)]: false,\n      [Symbol(kBytesRead)]: 0,\n      [Symbol(kBytesWritten)]: 0,\n      [Symbol(RequestTimeout)]: undefined\n    },\n    _consuming: false,\n    _dumped: false,\n    next: [Function: next],\n    baseUrl: '',\n    originalUrl: '/auth/repo.myrootdomain.xxx/callback?code=6ed648b80bae72d2d238f97e9fc68b6d84c65317b80f6ecebe864b60cfeb2298',\n    _parsedUrl: Url {\n      protocol: null,\n      slashes: null,\n      auth: null,\n      host: null,\n      port: null,\n      hostname: null,\n      hash: null,\n      search: '?code=6ed648b80bae72d2d238f97e9fc68b6d84c65317b80f6ecebe864b60cfeb2298',\n      query: 'code=6ed648b80bae72d2d238f97e9fc68b6d84c65317b80f6ecebe864b60cfeb2298',\n      pathname: '/auth/repo.myrootdomain.xxx/callback',\n      path: '/auth/repo.myrootdomain.xxx/callback?code=6ed648b80bae72d2d238f97e9fc68b6d84c65317b80f6ecebe864b60cfeb2298',\n      href: '/auth/repo.myrootdomain.xxx/callback?code=6ed648b80bae72d2d238f97e9fc68b6d84c65317b80f6ecebe864b60cfeb2298',\n      _raw: '/auth/repo.myrootdomain.xxx/callback?code=6ed648b80bae72d2d238f97e9fc68b6d84c65317b80f6ecebe864b60cfeb2298'\n    },\n    params: {},\n    query: {\n      code: '6ed648b80bae72d2d238f97e9fc68b6d84c65317b80f6ecebe864b60cfeb2298'\n    },\n    res: ServerResponse {\n      _events: [Object: null prototype],\n      _eventsCount: 1,\n      _maxListeners: undefined,\n      outputData: [],\n      outputSize: 0,\n      writable: true,\n      destroyed: false,\n      _last: false,\n      chunkedEncoding: false,\n      shouldKeepAlive: true,\n      maxRequestsOnConnectionReached: false,\n      _defaultKeepAlive: true,\n      useChunkedEncodingByDefault: true,\n      sendDate: true,\n      _removedConnection: false,\n      _removedContLen: false,\n      _removedTE: false,\n      _contentLength: null,\n      _hasBody: true,\n      _trailer: '',\n      finished: false,\n      _headerSent: false,\n      _closed: false,\n      socket: [Socket],\n      _header: null,\n      _keepAliveTimeout: 
5000,\n      _onPendingData: [Function: bound updateOutgoingData],\n      req: [Circular *1],\n      _sent100: false,\n      _expect_continue: false,\n      locals: [Object: null prototype] {},\n      writeHead: [Function: writeHead],\n      end: [Function: end],\n      [Symbol(kCapture)]: false,\n      [Symbol(kNeedDrain)]: false,\n      [Symbol(corked)]: 0,\n      [Symbol(kOutHeaders)]: [Object: null prototype]\n    },\n    body: {},\n    secret: undefined,\n    cookies: {\n      ajs_anonymous_id: 'f2240086-b9c4-49c3-8b2c-1df774f21f00',\n      'gitpod-user': 'true',\n      _online_ide_myrootdomain_xxx_: 's:2618104b-3d3e-4582-bf2e-1366a1971b95.KD5qzwFxtgrXanMxr/Oz5NK7LtoNUXYvHw9JUHK8Hy8'\n    },\n    signedCookies: [Object: null prototype] {},\n    _parsedOriginalUrl: Url {\n      protocol: null,\n      slashes: null,\n      auth: null,\n      host: null,\n      port: null,\n      hostname: null,\n      hash: null,\n      search: '?code=6ed648b80bae72d2d238f97e9fc68b6d84c65317b80f6ecebe864b60cfeb2298',\n      query: 'code=6ed648b80bae72d2d238f97e9fc68b6d84c65317b80f6ecebe864b60cfeb2298',\n      pathname: '/auth/repo.myrootdomain.xxx/callback',\n      path: '/auth/repo.myrootdomain.xxx/callback?code=6ed648b80bae72d2d238f97e9fc68b6d84c65317b80f6ecebe864b60cfeb2298',\n      href: '/auth/repo.myrootdomain.xxx/callback?code=6ed648b80bae72d2d238f97e9fc68b6d84c65317b80f6ecebe864b60cfeb2298',\n      _raw: '/auth/repo.myrootdomain.xxx/callback?code=6ed648b80bae72d2d238f97e9fc68b6d84c65317b80f6ecebe864b60cfeb2298'\n    },\n    sessionStore: MySQLStore {\n      connection: [Pool],\n      options: [Object],\n      generate: [Function (anonymous)],\n      _events: [Object: null prototype],\n      _eventsCount: 2,\n      _expirationInterval: Timeout {\n        _idleTimeout: 900000,\n        _idlePrev: [TimersList],\n        _idleNext: [TimersList],\n        _idleStart: 5066,\n        _onTimeout: [Function: bound ],\n        _timerArgs: undefined,\n        _repeat: 900000,\n        _destroyed: false,\n        [Symbol(refed)]: true,\n        [Symbol(kHasPrimitive)]: false,\n        [Symbol(asyncId)]: 85,\n        [Symbol(triggerId)]: 57\n      }\n    },\n    sessionID: '2618104b-3d3e-4582-bf2e-1366a1971b95',\n    session: Session { cookie: [Object], authFlow: [Object] },\n    _passport: { instance: [Authenticator] },\n    [Symbol(kCapture)]: false,\n    [Symbol(kHeaders)]: {\n      host: 'online-ide.myrootdomain.xxx',\n      'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.127 Safari/537.36 Edg/100.0.1185.44',\n      accept: 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',\n      'accept-encoding': 'gzip, deflate, br',\n      'accept-language': 'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6',\n      cookie: 'ajs_anonymous_id=f2240086-b9c4-49c3-8b2c-1df774f21f00; gitpod-user=true; _online_ide_myrootdomain_xxx_=s%3A2618104b-3d3e-4582-bf2e-1366a1971b95.KD5qzwFxtgrXanMxr%2FOz5NK7LtoNUXYvHw9JUHK8Hy8',\n      dnt: '1',\n      referer: 'https://repo.myrootdomain.xxx/',\n      'sec-ch-ua': '\" Not A;Brand\";v=\"99\", \"Chromium\";v=\"100\", \"Microsoft Edge\";v=\"100\"',\n      'sec-ch-ua-mobile': '?0',\n      'sec-ch-ua-platform': '\"Windows\"',\n      'sec-fetch-dest': 'document',\n      'sec-fetch-mode': 'navigate',\n      'sec-fetch-site': 'same-site',\n      'upgrade-insecure-requests': '1',\n      'x-forwarded-for': '10.42.0.1',\n      'x-forwarded-proto': 
'https',\n      'x-real-ip': '10.42.0.1'\n    },\n    [Symbol(kHeadersCount)]: 36,\n    [Symbol(kTrailers)]: null,\n    [Symbol(kTrailersCount)]: 0,\n    [Symbol(RequestTimeout)]: undefined\n  }\n}"}

It shows the auth provider callback. In the payload I noticed the hostname is null. This probably explains the error, but I have no clue why, or how I could solve it.
I am running out of ideas. What am I missing? Any suggestions? I would really appreciate any help!
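
One caveat when reading that dump: hostname: null inside _parsedUrl is normal for any path-only URL run through Node's legacy url.parse, so on its own it is not evidence of the bug. A quick check, assuming node is installed:

node -e "console.log(require('url').parse('/auth/repo.myrootdomain.xxx/callback?code=abc').hostname)"
# prints: null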

Steps to reproduce

  1. Gitpod 2022.03.1 release.
    CentOS 8.2 kernel version 4.18.0-305.3.1.el8.x86_64
    K3S v1.22.7+k3s1 (8432d7f2)
  2. Steps in the installer readme.md and this issue, but with K3S, as mentioned in the description above.
  3. Enable the Gitpod integration in my self-hosted GitLab instance: in the admin panel, fill in the section with online-ide.myrootdomain.xxx. Activate it in my account's profile settings and choose to use the Gitpod IDE in my personal private project, thus creating a workspace in Gitpod.
  4. The error shows up on my Gitpod instance, but on gitpod.io everything is fine.

Workspace affected

No response

Expected behavior

A workspace is created successfully on my Gitpod instance from my repositories on my self-hosted GitLab instance.

Example repository

No response

Anything else?

No response

@ghost ghost changed the title Cannot create workplaces on self-hosted instance Cannot create workspace on self-hosted instance Apr 21, 2022
@ghost ghost closed this as completed Apr 21, 2022
@ghost ghost moved this to Done in 🌌 Workspace Team Apr 21, 2022
@ghost ghost reopened this Apr 22, 2022
@mrsimonemms
Contributor

Hi @wo3hap. Your cluster doesn't meet the required specifications, which is almost certainly the root cause.

Specifically, it MUST be running on Ubuntu 18.04 or 20.04 (I'd suggest 20.04) and have kernel 5.4 or above. You're running on CentOS and the kernel is 4.18.

Can you change your cluster to meet the requirements and try again please?
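
A quick way to verify both requirements on each node (standard commands):

uname -r                           # kernel version, needs to be >= 5.4
grep PRETTY_NAME /etc/os-release   # expect Ubuntu 18.04 or 20.04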

@mrsimonemms mrsimonemms added the team: delivery Issue belongs to the self-hosted team label Apr 25, 2022
@ghost
Author

ghost commented Apr 25, 2022

> Hi @wo3hap. Your cluster doesn't meet the required specifications, which is almost certainly the root cause.
>
> Specifically, it MUST be running on Ubuntu 18.04 or 20.04 (I'd suggest 20.04) and have kernel 5.4 or above. You're running on CentOS and the kernel is 4.18.
>
> Can you change your cluster to meet the requirements and try again please?

Yes, I noticed that, so I have since switched to Ubuntu 20.04 with the 5.4 kernel. The kernel version warning in the validation is gone, but I am still getting the same error...

@ghost
Author

ghost commented Apr 25, 2022

> Hi @wo3hap. Your cluster doesn't meet the required specifications, which is almost certainly the root cause.
>
> Specifically, it MUST be running on Ubuntu 18.04 or 20.04 (I'd suggest 20.04) and have kernel 5.4 or above. You're running on CentOS and the kernel is 4.18.
>
> Can you change your cluster to meet the requirements and try again please?
>
> Yes, I noticed that, so I have since switched to Ubuntu 20.04 with the 5.4 kernel. The kernel version warning in the validation is gone, but I am still getting the same error...

In addition, my cloud server provider has a system image with K3S pre-installed on CentOS. I chose that image at first when purchasing the server. Later I switched to Ubuntu 20.04 without K3S pre-installed. @mrsimonemms

@mrsimonemms
Contributor

Next thing to check would be:

workspace:
  runtime:
    containerdRuntimeDir: /var/lib/containerd/io.containerd.runtime.v2.task/k8s.io
    containerdSocket: /run/containerd/containerd.sock

Those values are not typical of a k3s installation - see my k3s guide for the values to use for these two properties.

@ghost
Author

ghost commented Apr 28, 2022

> Next thing to check would be:
>
> workspace:
>   runtime:
>     containerdRuntimeDir: /var/lib/containerd/io.containerd.runtime.v2.task/k8s.io
>     containerdSocket: /run/containerd/containerd.sock
>
> Those values are not typical of a k3s installation - see my k3s guide for the values to use for these two properties.

I have been occupied these past few days.

I probably messed up the containerd runtime settings. At first I did not understand it on my own: I checked /var/lib/containerd/io.containerd.runtime.v2.task/k8s.io, but there was no such directory... so I created a directory named k8s.io myself!
I rolled back to the original CentOS image, looked in the /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/ directory, and found these containers:

7acafb6d7fa597dfef4c5462e0a60ba2cae1e950547cff996e81f9337e91eba2  bc56e544bfc5aa8eabf90214476351391e7dd496aea77fbfc03cad45651cd2e0
7c963d9b72e16c2f83eded9c5fee69ed6f330bd9ea5f2fc47ba1709270f936e6  c8becb1ea136a9358d25d420ff39e91110ad2d613b8cd6925a1afe1b7cb565dc
98fb8c6b25433539a707e0c654cc26d8ab192d48a4454db2d8ae04514f1d374e  f7039d4d7bef9a33716d1771bd061bc75f590cd602d2430592135030de1b5812
9976ffeaa9494cbd548310d4229570baf718b632e1b4c870d7e305c31bcf5868  fa31d0ab63877b819fb8fad2f2c6482586f640508962dc4d4ce50e21099a2880
9b402fe1a2046095faa15348659d15e555bb5410a8217871c8cff1e17ad125c5  fbe7fd7136d2b80ae05f78dcffbbc6c3a5c1c76c2eca5054a78e29be32fda85f
abac059eb3824e6754587b7d3df480f8c095b3b9b6e6954787112b5d09a8cd2d
CONTAINER                                                           IMAGE                                                                                                     RUNTIME                  
7acafb6d7fa597dfef4c5462e0a60ba2cae1e950547cff996e81f9337e91eba2    docker.io/rancher/library-traefik:1.7.19                                                                  io.containerd.runc.v2    
7c963d9b72e16c2f83eded9c5fee69ed6f330bd9ea5f2fc47ba1709270f936e6    docker.io/rancher/pause:3.1                                                                               io.containerd.runc.v2    
98fb8c6b25433539a707e0c654cc26d8ab192d48a4454db2d8ae04514f1d374e    docker.io/rancher/coredns-coredns:1.8.0                                                                   io.containerd.runc.v2    
9976ffeaa9494cbd548310d4229570baf718b632e1b4c870d7e305c31bcf5868    docker.io/rancher/pause:3.1                                                                               io.containerd.runc.v2    
9b402fe1a2046095faa15348659d15e555bb5410a8217871c8cff1e17ad125c5    docker.io/rancher/klipper-lb:v0.1.2                                                                       io.containerd.runc.v2    
abac059eb3824e6754587b7d3df480f8c095b3b9b6e6954787112b5d09a8cd2d    docker.io/rancher/local-path-provisioner:v0.0.19                                                          io.containerd.runc.v2    
bc56e544bfc5aa8eabf90214476351391e7dd496aea77fbfc03cad45651cd2e0    docker.io/rancher/pause:3.1                                                                               io.containerd.runc.v2    
c82ebe982ccd34195cae63042c78d39812b70a2a34308a942ca064907a530f9c    docker.io/rancher/pause:3.1                                                                               io.containerd.runc.v2    
c8becb1ea136a9358d25d420ff39e91110ad2d613b8cd6925a1afe1b7cb565dc    docker.io/rancher/pause:3.1                                                                               io.containerd.runc.v2    
e09c190d4e37c3643430a8a40c005421be67f7d1a7bc45b77c7463c4a0a1cc56    docker.io/rancher/klipper-helm@sha256:b319bce4802b8e42d46e251c7f9911011a16b4395a84fa58f1cf4c788df17139    io.containerd.runc.v2    
f7039d4d7bef9a33716d1771bd061bc75f590cd602d2430592135030de1b5812    docker.io/rancher/klipper-lb:v0.1.2                                                                       io.containerd.runc.v2    
fa31d0ab63877b819fb8fad2f2c6482586f640508962dc4d4ce50e21099a2880    docker.io/rancher/metrics-server:v0.3.6                                                                   io.containerd.runc.v2    
fbe7fd7136d2b80ae05f78dcffbbc6c3a5c1c76c2eca5054a78e29be32fda85f    docker.io/rancher/pause:3.1

Then I turned to a fresh Ubuntu 20.04 and executed curl -sfL https://get.k3s.io | sh - as root.
I even added the Docker repository shown in this document and installed containerd.io with sudo apt install containerd.io.
Here is what I found after the installation.

sudo find / -name containerd.sock
/run/containerd/containerd.sock
/run/k3s/containerd/containerd.sock
cd /var/lib/containerd/io.containerd.runtime.v2.task/k8s.io
-bash: cd: /var/lib/containerd/io.containerd.runtime.v2.task/k8s.io: No such file or directory
sudo ls /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/
2c9221466dbd24732571cc3061f89263105d3e6642c5dfa2053cff63eec4b1c3  7911bb873e124de44bd1403e71f7bb49a556ee5983aeb23cdfeb7bcabd6a6326
3866fd4dd79e7db9f00d5e87545e1158be965bb049c478f2f5d1826731bde7af  87a8934879e0b7770d38b4a3ad977f1fade1b331d57f5877d9fa6fc6a645821e
3c4bae284631fa074283a8073b149ae47c0647f8a44ea19976a39a7e530f28e6  99cd79de309f1f186eeddaad3f49b43847fdf785050c8d7e016cd20abef98d97
429ea492509b15a84c2aa9c03c895110db45696732f597f7f292d4f7ef856e23  c99799ce18da19892e36785b7bc67781c6e3eb41e46716e5b9837d312b381b4f
56274acb3895bed357e2bba22dcd0a01119d0aa21eab381facfb7b24e766823a  f18ab397439fd46675facb278e0697125bdd55a3565dc0a42b5bfe523824fcc6
63cfaa6fab9fc93ffc32af36f8b21cb37fa396209d1b9bf4bc1af3c7274805e2
sudo k3s ctr c ls
CONTAINER                                                           IMAGE                                                  RUNTIME                  
2c9221466dbd24732571cc3061f89263105d3e6642c5dfa2053cff63eec4b1c3    docker.io/rancher/mirrored-coredns-coredns:1.8.6       io.containerd.runc.v2    
3866fd4dd79e7db9f00d5e87545e1158be965bb049c478f2f5d1826731bde7af    docker.io/rancher/local-path-provisioner:v0.0.21       io.containerd.runc.v2    
3c4bae284631fa074283a8073b149ae47c0647f8a44ea19976a39a7e530f28e6    docker.io/rancher/mirrored-pause:3.6                   io.containerd.runc.v2    
429ea492509b15a84c2aa9c03c895110db45696732f597f7f292d4f7ef856e23    docker.io/rancher/mirrored-pause:3.6                   io.containerd.runc.v2    
498442d3a49c6e7811ec35c55e86f86d99dd709081e8909d7c7e3ac45d1f6b45    docker.io/rancher/mirrored-pause:3.6                   io.containerd.runc.v2    
56274acb3895bed357e2bba22dcd0a01119d0aa21eab381facfb7b24e766823a    docker.io/rancher/mirrored-pause:3.6                   io.containerd.runc.v2    
63cfaa6fab9fc93ffc32af36f8b21cb37fa396209d1b9bf4bc1af3c7274805e2    docker.io/rancher/klipper-lb:v0.3.4                    io.containerd.runc.v2    
7911bb873e124de44bd1403e71f7bb49a556ee5983aeb23cdfeb7bcabd6a6326    docker.io/rancher/klipper-lb:v0.3.4                    io.containerd.runc.v2    
87a8934879e0b7770d38b4a3ad977f1fade1b331d57f5877d9fa6fc6a645821e    docker.io/rancher/mirrored-metrics-server:v0.5.2       io.containerd.runc.v2    
91d91c7d38ab6eabe745821e4bcb4a11d5f14cc2267ee8557d74e5e9ffb1cfcf    docker.io/rancher/mirrored-pause:3.6                   io.containerd.runc.v2    
99cd79de309f1f186eeddaad3f49b43847fdf785050c8d7e016cd20abef98d97    docker.io/rancher/mirrored-pause:3.6                   io.containerd.runc.v2    
c99799ce18da19892e36785b7bc67781c6e3eb41e46716e5b9837d312b381b4f    docker.io/rancher/mirrored-pause:3.6                   io.containerd.runc.v2    
d66f4f9d20c85087369bc9ec7b00af4d3a0e23d6e5a1fe19fe538f0bfe4cefff    docker.io/rancher/klipper-helm:v0.6.6-build20211022    io.containerd.runc.v2    
da2270105432ab38f8ceeeac5618335fe378d82396f6e25cb2669169a8e8f75d    docker.io/rancher/klipper-helm:v0.6.6-build20211022    io.containerd.runc.v2    
f18ab397439fd46675facb278e0697125bdd55a3565dc0a42b5bfe523824fcc6    docker.io/rancher/mirrored-library-traefik:2.6.1       io.containerd.runc.v2

I removed containerd.io (sudo apt remove containerd.io), leaving only /run/k3s/containerd/containerd.sock.

So, if I got it right, the containerd runtime is a critical part, and this should explain the error.
In my first attempt I did not modify the containerdRuntimeDir section in the yaml file at all.
@mrsimonemms

@ghost
Author

ghost commented Apr 28, 2022

The new gitpod.config.yaml:

apiVersion: v1
authProviders: []
blockNewUsers:
  enabled: false
  passlist: []
certificate:
  kind: secret
  name: https-certificates
containerRegistry:
  inCluster: true
database:
  inCluster: true
disableDefinitelyGp: false
domain: "rootdomain.xxx"
kind: Full
metadata:
  region: local
objectStorage:
  inCluster: true
observability:
  logLevel: info
openVSX:
  url: https://open-vsx.org
repository: eu.gcr.io/gitpod-core-dev/build
workspace:
  resources:
    requests:
      cpu: "1"
      memory: 2Gi
  runtime:
    containerdRuntimeDir: /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io
    containerdSocket: /run/k3s/containerd/containerd.sock
    fsShiftMethod: fuse
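
With these k3s-specific paths, a quick pre-render check on the node that both actually exist (a sketch):

sudo test -S /run/k3s/containerd/containerd.sock && echo "socket OK"
sudo test -d /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io && echo "runtime dir OK"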

Validation:

sudo gitpod-installer validate cluster --kubeconfig /etc/rancher/k3s/k3s.yaml --config gitpod.config.yaml -n gitpod
{
  "status": "WARNING",
  "items": [
    {
      "name": "Linux kernel version",
      "description": "all cluster nodes run Linux \u003e= 5.4.0-0",
      "status": "OK"
    },
    {
      "name": "containerd enabled",
      "description": "all cluster nodes run containerd",
      "status": "OK"
    },
    {
      "name": "Kubernetes version",
      "description": "all cluster nodes run kubernetes version \u003e= 1.21.0-0",
      "status": "OK"
    },
    {
      "name": "affinity labels",
      "description": "all required affinity node labels [gitpod.io/workload_meta gitpod.io/workload_ide gitpod.io/workload_workspace_services gitpod.io/workload_workspace_regular gitpod.io/workload_workspace_headless] are present in the cluster",
      "status": "OK"
    },
    {
      "name": "cert-manager installed",
      "description": "cert-manager is installed and has available issuer",
      "status": "WARNING",
      "errors": [
        {
          "message": "no cluster issuers configured",
          "type": "WARNING"
        }
      ]
    },
    {
      "name": "Namespace exists",
      "description": "ensure that the target namespace exists",
      "status": "OK"
    },
    {
      "name": "https-certificates is present and valid",
      "description": "ensures the https-certificates secret is present and contains the required data",
      "status": "OK"
    }
  ]
}

I rendered gitpod.yaml under the gitpod namespace and applied it.
Unfortunately, still no luck; the same error again...

@mrsimonemms
Contributor

I really don't know. I can't see any smoking guns in your config. Can I suggest you use my k3s guide as I know that works?

@stale

stale bot commented Jul 31, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the meta: stale This issue/PR is stale and will be closed soon label Jul 31, 2022
@stale stale bot closed this as completed Aug 13, 2022
Repository owner moved this from Done to Awaiting Deployment in 🌌 Workspace Team Aug 13, 2022