
Description
Bug description
Hello, I am trying to deploy Gitpod in K3S on my own server. After many attempts (the installation is somewhat confusing for me) I finally have it deployed and accessible on ports 80 and 443. I added an integration pointing to my self-hosted Gitlab, the same way I did on gitpod.io, and no authentication error occurred. However, when I tried to start a workspace from a repository in Gitlab, I got this error:
I checked gitlab_access.log and could see my Gitpod instance successfully pulling repositories.
I looked up similar issues here and questions on Stack Overflow, but none of them seem to help.
I am using the 2022.03.1 release, deployed on CentOS 8.2 (kernel 4.18.0-305.3.1.el8.x86_64) with K3S v1.22.7+k3s1 (8432d7f2) installed. All three DNS records have propagated globally: one A record and two CNAME records.
The Gitpod instance is reachable at online-ide.myrootdomain.xxx (of course not the real domain) and my self-hosted Gitlab instance at repo.myrootdomain.xxx. Both servers have a public Internet address and an internal network IP.
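A quick way to sanity-check the three names is something like this (just a sketch; the foo. prefixes only exist to exercise the two wildcard entries):
# the A record on the root domain plus the two wildcard records, matching the certificate names below
dig +short online-ide.myrootdomain.xxx
dig +short foo.online-ide.myrootdomain.xxx
dig +short foo.ws.online-ide.myrootdomain.xxx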
The TLS certificate for the Gitpod instance covers:
DNS Name=*.online-ide.myrootdomain.xxx
DNS Name=*.ws.online-ide.myrootdomain.xxx
DNS Name=online-ide.myrootdomain.xxx
I chose the gitpod-installer for deployment. I created a namespace named gitpod, initialized a gitpod.config.yaml, and filled the domain section with online-ide.myrootdomain.xxx. I did not use cert-manager because I wanted to use my own TLS certificate, created with certbot. I created a secret for my certificate under the gitpod namespace.
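For reference, a sketch of how such a secret can be created (the file names are certbot's defaults; the live/ directory path is a placeholder):
# create the TLS secret referenced by certificate.kind/name in the installer config
k3s kubectl create secret tls https-certificates -n gitpod \
  --cert=/etc/letsencrypt/live/online-ide.myrootdomain.xxx/fullchain.pem \
  --key=/etc/letsencrypt/live/online-ide.myrootdomain.xxx/privkey.pem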
The gitpod.config.yaml:
apiVersion: v1
authProviders: []
blockNewUsers:
  enabled: false
  passlist: []
certificate:
  kind: secret
  name: https-certificates
containerRegistry:
  inCluster: true
database:
  inCluster: true
disableDefinitelyGp: false
domain: "online-ide.myrootdomain.xxx"
kind: Full
metadata:
  region: local
objectStorage:
  inCluster: true
observability:
  logLevel: info
openVSX:
  url: https://open-vsx.org
repository: eu.gcr.io/gitpod-core-dev/build
workspace:
  resources:
    requests:
      cpu: "1"
      memory: 2Gi
  runtime:
    containerdRuntimeDir: /var/lib/containerd/io.containerd.runtime.v2.task/k8s.io
    containerdSocket: /run/containerd/containerd.sock
    fsShiftMethod: fuse
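I took the containerd paths above as-is, and I am not sure whether they match where k3s actually keeps its embedded containerd (the /run/k3s/... locations below are where k3s usually puts it); a sketch of how this could be checked on the node:
# see which containerd socket / runtime dir actually exists on this node
ls -l /run/containerd/containerd.sock /run/k3s/containerd/containerd.sock 2>/dev/null
ls -d /var/lib/containerd/io.containerd.runtime.v2.task/k8s.io /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io 2>/dev/null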
Validation:
gitpod-installer validate cluster --kubeconfig /etc/rancher/k3s/k3s.yaml --config gitpod.config.yaml --namespace gitpod
{
  "status": "WARNING",
  "items": [
    {
      "name": "Linux kernel version",
      "description": "all cluster nodes run Linux \u003e= 5.4.0-0",
      "status": "WARNING",
      "errors": [
        {
          "message": "Invalid Semantic Version kernel version: 4.18.0-305.3.1.el8.x86_64",
          "type": "WARNING"
        }
      ]
    },
    {
      "name": "containerd enabled",
      "description": "all cluster nodes run containerd",
      "status": "OK"
    },
    {
      "name": "Kubernetes version",
      "description": "all cluster nodes run kubernetes version \u003e= 1.21.0-0",
      "status": "OK"
    },
    {
      "name": "affinity labels",
      "description": "all required affinity node labels [gitpod.io/workload_meta gitpod.io/workload_ide gitpod.io/workload_workspace_services gitpod.io/workload_workspace_regular gitpod.io/workload_workspace_headless] are present in the cluster",
      "status": "OK"
    },
    {
      "name": "cert-manager installed",
      "description": "cert-manager is installed and has available issuer",
      "status": "WARNING",
      "errors": [
        {
          "message": "no cluster issuers configured",
          "type": "WARNING"
        }
      ]
    },
    {
      "name": "Namespace exists",
      "description": "ensure that the target namespace exists",
      "status": "OK"
    },
    {
      "name": "https-certificates is present and valid",
      "description": "ensures the https-certificates secret is present and contains the required data",
      "status": "OK"
    }
  ]
}
Then I rendered and applied the manifests:
gitpod-installer render --config gitpod.config.yaml --namespace gitpod > gitpod.yaml
k3s kubectl apply -f gitpod.yaml
The node is definitely Ready. Only one node exists, acting as control-plane,master, version v1.22.7+k3s1; I ran k3s kubectl get nodes to check it.
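Just to be thorough, the workload labels the installer validation checks for can be listed on the node directly (a sketch):
# list the gitpod.io/workload_* labels on the cluster nodes
k3s kubectl get nodes --show-labels | tr ',' '\n' | grep 'gitpod.io/workload'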
Secrets under the gitpod namespace (after deployment):
k3s kubectl get secret -n gitpod
NAME TYPE DATA AGE
default-token-cnzz7 kubernetes.io/service-account-token 3 95m
https-certificates kubernetes.io/tls 2 87m
workspace-token-ttxcl kubernetes.io/service-account-token 3 85m
ws-manager-token-wwhj9 kubernetes.io/service-account-token 3 85m
dashboard-token-4w8pl kubernetes.io/service-account-token 3 85m
registry-facade-token-h4p7n kubernetes.io/service-account-token 3 85m
ws-daemon-token-925kc kubernetes.io/service-account-token 3 85m
gitpod-token-xbk7z kubernetes.io/service-account-token 3 85m
nobody-token-hz7vc kubernetes.io/service-account-token 3 85m
blobserve-token-dnx6s kubernetes.io/service-account-token 3 85m
db-token-xs9l8 kubernetes.io/service-account-token 3 85m
migrations-token-cfjw4 kubernetes.io/service-account-token 3 85m
agent-smith-token-8sgbv kubernetes.io/service-account-token 3 85m
ws-proxy-token-mqnkr kubernetes.io/service-account-token 3 85m
ca-issuer-ca kubernetes.io/tls 3 85m
ws-manager-bridge-token-78xct kubernetes.io/service-account-token 3 85m
minio Opaque 3 85m
registry-secret Opaque 3 85m
rabbitmq Opaque 2 85m
messagebus-certificates-secret-core Opaque 3 85m
load-definition Opaque 1 85m
messagebus Opaque 0 85m
messagebus-erlang-cookie Opaque 1 85m
builtin-registry-auth kubernetes.io/dockerconfigjson 3 85m
mysql Opaque 6 85m
db-password Opaque 2 85m
server-token-x7xp7 kubernetes.io/service-account-token 3 85m
minio-token-zqlbd kubernetes.io/service-account-token 3 85m
proxy-token-f5fhq kubernetes.io/service-account-token 3 85m
docker-registry-token-krwzt kubernetes.io/service-account-token 3 85m
content-service-token-rv5ql kubernetes.io/service-account-token 3 85m
openvsx-proxy-token-sfqxj kubernetes.io/service-account-token 3 85m
rabbitmq-token-2qm2k kubernetes.io/service-account-token 3 85m
image-builder-mk3-token-kmpx9 kubernetes.io/service-account-token 3 85m
ide-proxy-token-x72tj kubernetes.io/service-account-token 3 85m
ws-manager-tls kubernetes.io/tls 3 85m
builtin-registry-facade-cert kubernetes.io/tls 3 85m
ws-daemon-tls kubernetes.io/tls 3 85m
builtin-registry-certs kubernetes.io/tls 3 85m
ws-manager-client-tls kubernetes.io/tls 3 85m
All pods under the gitpod namespace are running well:
k3s kubectl get pods -n gitpod
NAME READY STATUS RESTARTS AGE
svclb-proxy-4jtpx 3/3 Running 3 (76m ago) 106m
agent-smith-xwmc6 2/2 Running 2 (76m ago) 106m
dashboard-74d756fcd9-sfvsm 1/1 Running 1 (76m ago) 106m
openvsx-proxy-0 1/1 Running 1 (76m ago) 106m
blobserve-59cbd97c56-mc9ql 2/2 Running 2 (76m ago) 106m
image-builder-mk3-6d5bcf4598-dzpn9 2/2 Running 2 (76m ago) 106m
content-service-855fc6787d-sq27d 1/1 Running 1 (76m ago) 106m
ws-manager-5496b997d4-7qkwf 2/2 Running 2 (76m ago) 106m
ide-proxy-7488df7cfc-2psgt 1/1 Running 1 (76m ago) 106m
registry-facade-77mrz 2/2 Running 2 (76m ago) 106m
registry-ff6d8c4f4-6fllj 1/1 Running 1 (76m ago) 106m
ws-daemon-nd5nz 2/2 Running 2 (76m ago) 106m
minio-68444c56b7-tgp54 1/1 Running 1 (76m ago) 106m
ws-proxy-59d455b97f-p2994 2/2 Running 5 (75m ago) 106m
proxy-5f8798bd99-g7gnv 2/2 Running 2 (76m ago) 106m
mysql-0 1/1 Running 1 (76m ago) 106m
messagebus-0 1/1 Running 1 (76m ago) 106m
server-5b5ff8cd75-7gs6j 2/2 Running 2 (76m ago) 106m
ws-manager-bridge-54ff4b8889-2w5pw 2/2 Running 2 (76m ago) 106m
I ran k3s kubectl get service -n gitpod to list the services. All of them have a cluster IP except mysql-headless and ws-daemon. The proxy service (type LoadBalancer, occupying ports 80 and 443) is the only one with an external IP.
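As far as I understand, a headless service intentionally has no cluster IP, so that on its own may not be a problem; a quick check (service names taken from the listing):
# ClusterIP "None" means the service is headless by design
k3s kubectl get service mysql-headless ws-daemon -n gitpod -o wide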
Later I checked logs in some pods.
k3s kubectl logs registry-facade-77mrz registry-facade -n gitpod:
{"addr":"127.0.0.1:9500","level":"info","message":"started Prometheus metrics server","serviceContext":{"service":"registry-facade","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:19Z"}
{"fn":"/mnt/pull-secret.json","level":"info","message":"using authentication for backing registries","serviceContext":{"service":"registry-facade","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:19Z"}
{"addr":":6060","level":"info","message":"serving pprof service","serviceContext":{"service":"registry-facade","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:19Z"}
{"level":"info","message":"preparing static layer","serviceContext":{"service":"registry-facade","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:19Z"}
{"level":"info","message":"🏪 registry facade is up and running","serviceContext":{"service":"registry-facade","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:46Z"}
{"addr":":32223","level":"info","message":"HTTPS registry server listening","serviceContext":{"service":"registry-facade","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:46Z"}
k3s kubectl logs ws-manager-5496b997d4-7qkwf ws-manager -n gitpod:
{"level":"info","message":"wsman configuration is valid","serviceContext":{"service":"ws-manager","version":"commit-abd108b30f9e5d8dfd1b1558f19c2f86cb0830d5"},"severity":"INFO","time":"2022-04-21T02:55:15Z"}
I0421 02:55:16.462570 1 request.go:665] Waited for 1.000339392s due to client-side throttling, not priority and fairness, request: GET:https://10.43.0.1:443/apis/storage.k8s.io/v1beta1?timeout=32s
{"addr":"127.0.0.1:9500","level":"info","logger":"controller-runtime.metrics","message":"Metrics server is starting to listen","serviceContext":{"service":"ws-manager","version":"commit-abd108b30f9e5d8dfd1b1558f19c2f86cb0830d5"},"severity":"INFO","time":"2022-04-21T02:55:17Z"}
{"addr":":8080","level":"info","message":"started gRPC server","serviceContext":{"service":"ws-manager","version":"commit-abd108b30f9e5d8dfd1b1558f19c2f86cb0830d5"},"severity":"INFO","time":"2022-04-21T02:55:17Z"}
{"interval":15000000000,"level":"info","message":"starting workspace monitor","serviceContext":{"service":"ws-manager","version":"commit-abd108b30f9e5d8dfd1b1558f19c2f86cb0830d5"},"severity":"INFO","time":"2022-04-21T02:55:17Z"}
{"level":"info","message":"workspace monitor is up and running","serviceContext":{"service":"ws-manager","version":"commit-abd108b30f9e5d8dfd1b1558f19c2f86cb0830d5"},"severity":"INFO","time":"2022-04-21T02:55:17Z"}
{"level":"info","message":"🦸 wsman is up and running. Stop with SIGINT or CTRL+C","serviceContext":{"service":"ws-manager","version":"commit-abd108b30f9e5d8dfd1b1558f19c2f86cb0830d5"},"severity":"INFO","time":"2022-04-21T02:55:17Z"}
{"addr":"localhost:6060","level":"info","message":"serving pprof service","serviceContext":{"service":"ws-manager","version":"commit-abd108b30f9e5d8dfd1b1558f19c2f86cb0830d5"},"severity":"INFO","time":"2022-04-21T02:55:17Z"}
{"addr":"{\"IP\":\"127.0.0.1\",\"Port\":9500,\"Zone\":\"\"}","kind":"metrics","level":"info","message":"Starting server","path":"/metrics","serviceContext":{"service":"ws-manager","version":"commit-abd108b30f9e5d8dfd1b1558f19c2f86cb0830d5"},"severity":"INFO","time":"2022-04-21T02:55:17Z"}
{"addr":"{\"IP\":\"::\",\"Port\":44217,\"Zone\":\"\"}","kind":"health probe","level":"info","message":"Starting server","serviceContext":{"service":"ws-manager","version":"commit-abd108b30f9e5d8dfd1b1558f19c2f86cb0830d5"},"severity":"INFO","time":"2022-04-21T02:55:17Z"}
{"level":"info","logger":"controller.pod","message":"Starting EventSource","reconciler group":"","reconciler kind":"Pod","serviceContext":{"service":"ws-manager","version":"commit-abd108b30f9e5d8dfd1b1558f19c2f86cb0830d5"},"severity":"INFO","source":"kind source: *v1.Pod","time":"2022-04-21T02:55:17Z"}
{"level":"info","logger":"controller.pod","message":"Starting Controller","reconciler group":"","reconciler kind":"Pod","serviceContext":{"service":"ws-manager","version":"commit-abd108b30f9e5d8dfd1b1558f19c2f86cb0830d5"},"severity":"INFO","time":"2022-04-21T02:55:17Z"}
{"level":"info","logger":"controller.pod","message":"Starting workers","reconciler group":"","reconciler kind":"Pod","serviceContext":{"service":"ws-manager","version":"commit-abd108b30f9e5d8dfd1b1558f19c2f86cb0830d5"},"severity":"INFO","time":"2022-04-21T02:55:17Z","worker count":1}
{"level":"info","message":"new subscriber","serviceContext":{"service":"ws-manager","version":"commit-abd108b30f9e5d8dfd1b1558f19c2f86cb0830d5"},"severity":"INFO","subscriberCount":1,"subscriberKey":"k10.42.0.51:60802@1650509742553170113","time":"2022-04-21T02:55:42Z"}
{"level":"info","message":"new subscriber","serviceContext":{"service":"ws-manager","version":"commit-abd108b30f9e5d8dfd1b1558f19c2f86cb0830d5"},"severity":"INFO","subscriberCount":2,"subscriberKey":"k10.42.0.61:51622@1650509789798017003","time":"2022-04-21T02:56:29Z"}
k3s kubectl logs ws-daemon-nd5nz ws-daemon -n gitpod:
{"level":"info","message":"containerd subscription established","serviceContext":{"service":"ws-daemon","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:21Z"}
{"level":"info","location":"/mnt/workingarea","message":"restored workspaces from disk","serviceContext":{"service":"ws-daemon","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:21Z","workspacesLoaded":0,"workspacesOnDisk":0}
{"clientAuth":4,"level":"info","message":"enabling client authentication","serviceContext":{"service":"ws-daemon","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:21Z"}
{"addr":":8080","level":"info","message":"started gRPC server","serviceContext":{"service":"ws-daemon","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:21Z"}
{"addr":"localhost:9500","level":"info","message":"started Prometheus metrics server","serviceContext":{"service":"ws-daemon","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:21Z"}
{"addr":"localhost:6060","level":"info","message":"serving pprof service","serviceContext":{"service":"ws-daemon","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:21Z"}
{"addr":":9999","level":"info","message":"started readiness signal","serviceContext":{"service":"ws-daemon","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:21Z"}
{"level":"info","message":"start hosts source","name":"registryFacade","serviceContext":{"service":"ws-daemon","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:21Z"}
{"level":"info","message":"🧫 ws-daemon is up and running. Stop with SIGINT or CTRL+C","serviceContext":{"service":"ws-daemon","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:21Z"}
k3s kubectl logs image-builder-mk3-6d5bcf4598-dzpn9 image-builder-mk3 -n gitpod:
{"addr":"127.0.0.1:9500","level":"info","message":"started Prometheus metrics server","serviceContext":{"service":"image-builder-mk3","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:14Z"}
{"addr":":6060","level":"info","message":"serving pprof service","serviceContext":{"service":"image-builder-mk3","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:14Z"}
{"component":"grpc","level":"warning","message":"2022/04/21 02:55:15 WARNING: [core] grpc: addrConn.createTransport failed to connect to {ws-manager:8080 ws-manager \u003cnil\u003e \u003cnil\u003e 0 \u003cnil\u003e}. Err: connection error: desc = \"transport: Error while dialing dial tcp: i/o timeout\"","serviceContext":{"service":"image-builder-mk3","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"WARNING","time":"2022-04-21T02:55:15Z"}
{"component":"grpc","level":"warning","message":"2022/04/21 02:55:18 WARNING: [core] grpc: addrConn.createTransport failed to connect to {ws-manager:8080 ws-manager \u003cnil\u003e \u003cnil\u003e 0 \u003cnil\u003e}. Err: connection error: desc = \"transport: Error while dialing dial tcp: i/o timeout\"","serviceContext":{"service":"image-builder-mk3","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"WARNING","time":"2022-04-21T02:55:18Z"}
{"component":"grpc","level":"warning","message":"2022/04/21 02:55:21 WARNING: [core] grpc: addrConn.createTransport failed to connect to {ws-manager:8080 ws-manager \u003cnil\u003e \u003cnil\u003e 0 \u003cnil\u003e}. Err: connection error: desc = \"transport: Error while dialing dial tcp: i/o timeout\"","serviceContext":{"service":"image-builder-mk3","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"WARNING","time":"2022-04-21T02:55:21Z"}
{"component":"grpc","level":"warning","message":"2022/04/21 02:55:28 WARNING: [core] grpc: addrConn.createTransport failed to connect to {ws-manager:8080 ws-manager \u003cnil\u003e \u003cnil\u003e 0 \u003cnil\u003e}. Err: connection error: desc = \"transport: Error while dialing dial tcp: i/o timeout\"","serviceContext":{"service":"image-builder-mk3","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"WARNING","time":"2022-04-21T02:55:28Z"}
{"component":"grpc","level":"warning","message":"2022/04/21 02:55:37 WARNING: [core] grpc: addrConn.createTransport failed to connect to {ws-manager:8080 ws-manager \u003cnil\u003e \u003cnil\u003e 0 \u003cnil\u003e}. Err: connection error: desc = \"transport: Error while dialing dial tcp: i/o timeout\"","serviceContext":{"service":"image-builder-mk3","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"WARNING","time":"2022-04-21T02:55:37Z"}
{"level":"warning","message":"no TLS configured - gRPC server will be unsecured","serviceContext":{"service":"image-builder-mk3","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"WARNING","time":"2022-04-21T02:55:42Z"}
{"addr":":8080","level":"info","message":"started workspace content server","serviceContext":{"service":"image-builder-mk3","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:42Z"}
{"interval":"6h0m0s","level":"info","message":"starting Docker ref pre-cache","refs":["docker.io/gitpod/workspace-full:latest"],"serviceContext":{"service":"image-builder-mk3","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:42Z"}
{"level":"info","message":"👷 image-builder is up and running. Stop with SIGINT or CTRL+C","serviceContext":{"service":"image-builder-mk3","version":"commit-32866ac354f896566e90ceb2f32a9aaf31eb1b42"},"severity":"INFO","time":"2022-04-21T02:55:42Z"}
I also looked at the server pod's log (k3s kubectl logs server-5b5ff8cd75-7gs6j server -n gitpod) and found this entry:
{"component":"server","severity":"INFO","time":"2022-04-21T02:26:25.218Z","message":"Auth Provider Callback. Path: /auth/repo.myrootdomain.xxx/callback","payload":"{\n req: <ref *1> IncomingMessage {\n _readableState: ReadableState {\n objectMode: false,\n highWaterMark: 16384,\n buffer: BufferList { head: null, tail: null, length: 0 },\n length: 0,\n pipes: [],\n flowing: null,\n ended: true,\n endEmitted: false,\n reading: false,\n constructed: true,\n sync: true,\n needReadable: false,\n emittedReadable: false,\n readableListening: false,\n resumeScheduled: false,\n errorEmitted: false,\n emitClose: true,\n autoDestroy: true,\n destroyed: false,\n errored: null,\n closed: false,\n closeEmitted: false,\n defaultEncoding: 'utf8',\n awaitDrainWriters: null,\n multiAwaitDrain: false,\n readingMore: true,\n dataEmitted: false,\n decoder: null,\n encoding: null,\n [Symbol(kPaused)]: null\n },\n _events: [Object: null prototype] { end: [Array] },\n _eventsCount: 1,\n _maxListeners: undefined,\n socket: Socket {\n connecting: false,\n _hadError: false,\n _parent: null,\n _host: null,\n _readableState: [ReadableState],\n _events: [Object: null prototype],\n _eventsCount: 8,\n _maxListeners: undefined,\n _writableState: [WritableState],\n allowHalfOpen: true,\n _sockname: null,\n _pendingData: null,\n _pendingEncoding: '',\n server: [Server],\n _server: [Server],\n parser: [HTTPParser],\n on: [Function: socketListenerWrap],\n addListener: [Function: socketListenerWrap],\n prependListener: [Function: socketListenerWrap],\n setEncoding: [Function: socketSetEncoding],\n _paused: false,\n _httpMessage: [ServerResponse],\n [Symbol(async_id_symbol)]: 39378,\n [Symbol(kHandle)]: [TCP],\n [Symbol(kSetNoDelay)]: false,\n [Symbol(lastWriteQueueSize)]: 0,\n [Symbol(timeout)]: null,\n [Symbol(kBuffer)]: null,\n [Symbol(kBufferCb)]: null,\n [Symbol(kBufferGen)]: null,\n [Symbol(kCapture)]: false,\n [Symbol(kBytesRead)]: 0,\n [Symbol(kBytesWritten)]: 0,\n [Symbol(RequestTimeout)]: undefined\n },\n httpVersionMajor: 1,\n httpVersionMinor: 1,\n httpVersion: '1.1',\n complete: true,\n rawHeaders: [\n 'Host',\n 'online-ide.myrootdomain.xxx',\n 'User-Agent',\n 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.127 Safari/537.36 Edg/100.0.1185.44',\n 'Accept',\n 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',\n 'Accept-Encoding',\n 'gzip, deflate, br',\n 'Accept-Language',\n 'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6',\n 'Cookie',\n 'ajs_anonymous_id=f2240086-b9c4-49c3-8b2c-1df774f21f00; gitpod-user=true; _online_ide_myrootdomain_xxx_=s%3A2618104b-3d3e-4582-bf2e-1366a1971b95.KD5qzwFxtgrXanMxr%2FOz5NK7LtoNUXYvHw9JUHK8Hy8',\n 'Dnt',\n '1',\n 'Referer',\n 'https://repo.myrootdomain.xxx/',\n 'Sec-Ch-Ua',\n '\" Not A;Brand\";v=\"99\", \"Chromium\";v=\"100\", \"Microsoft Edge\";v=\"100\"',\n 'Sec-Ch-Ua-Mobile',\n '?0',\n 'Sec-Ch-Ua-Platform',\n '\"Windows\"',\n 'Sec-Fetch-Dest',\n 'document',\n 'Sec-Fetch-Mode',\n 'navigate',\n 'Sec-Fetch-Site',\n 'same-site',\n 'Upgrade-Insecure-Requests',\n '1',\n 'X-Forwarded-For',\n '10.42.0.1',\n 'X-Forwarded-Proto',\n 'https',\n 'X-Real-Ip',\n '10.42.0.1'\n ],\n rawTrailers: [],\n aborted: false,\n upgrade: false,\n url: '/auth/repo.myrootdomain.xxx/callback?code=6ed648b80bae72d2d238f97e9fc68b6d84c65317b80f6ecebe864b60cfeb2298',\n method: 'GET',\n statusCode: null,\n statusMessage: null,\n client: Socket {\n connecting: false,\n _hadError: false,\n 
_parent: null,\n _host: null,\n _readableState: [ReadableState],\n _events: [Object: null prototype],\n _eventsCount: 8,\n _maxListeners: undefined,\n _writableState: [WritableState],\n allowHalfOpen: true,\n _sockname: null,\n _pendingData: null,\n _pendingEncoding: '',\n server: [Server],\n _server: [Server],\n parser: [HTTPParser],\n on: [Function: socketListenerWrap],\n addListener: [Function: socketListenerWrap],\n prependListener: [Function: socketListenerWrap],\n setEncoding: [Function: socketSetEncoding],\n _paused: false,\n _httpMessage: [ServerResponse],\n [Symbol(async_id_symbol)]: 39378,\n [Symbol(kHandle)]: [TCP],\n [Symbol(kSetNoDelay)]: false,\n [Symbol(lastWriteQueueSize)]: 0,\n [Symbol(timeout)]: null,\n [Symbol(kBuffer)]: null,\n [Symbol(kBufferCb)]: null,\n [Symbol(kBufferGen)]: null,\n [Symbol(kCapture)]: false,\n [Symbol(kBytesRead)]: 0,\n [Symbol(kBytesWritten)]: 0,\n [Symbol(RequestTimeout)]: undefined\n },\n _consuming: false,\n _dumped: false,\n next: [Function: next],\n baseUrl: '',\n originalUrl: '/auth/repo.myrootdomain.xxx/callback?code=6ed648b80bae72d2d238f97e9fc68b6d84c65317b80f6ecebe864b60cfeb2298',\n _parsedUrl: Url {\n protocol: null,\n slashes: null,\n auth: null,\n host: null,\n port: null,\n hostname: null,\n hash: null,\n search: '?code=6ed648b80bae72d2d238f97e9fc68b6d84c65317b80f6ecebe864b60cfeb2298',\n query: 'code=6ed648b80bae72d2d238f97e9fc68b6d84c65317b80f6ecebe864b60cfeb2298',\n pathname: '/auth/repo.myrootdomain.xxx/callback',\n path: '/auth/repo.myrootdomain.xxx/callback?code=6ed648b80bae72d2d238f97e9fc68b6d84c65317b80f6ecebe864b60cfeb2298',\n href: '/auth/repo.myrootdomain.xxx/callback?code=6ed648b80bae72d2d238f97e9fc68b6d84c65317b80f6ecebe864b60cfeb2298',\n _raw: '/auth/repo.myrootdomain.xxx/callback?code=6ed648b80bae72d2d238f97e9fc68b6d84c65317b80f6ecebe864b60cfeb2298'\n },\n params: {},\n query: {\n code: '6ed648b80bae72d2d238f97e9fc68b6d84c65317b80f6ecebe864b60cfeb2298'\n },\n res: ServerResponse {\n _events: [Object: null prototype],\n _eventsCount: 1,\n _maxListeners: undefined,\n outputData: [],\n outputSize: 0,\n writable: true,\n destroyed: false,\n _last: false,\n chunkedEncoding: false,\n shouldKeepAlive: true,\n maxRequestsOnConnectionReached: false,\n _defaultKeepAlive: true,\n useChunkedEncodingByDefault: true,\n sendDate: true,\n _removedConnection: false,\n _removedContLen: false,\n _removedTE: false,\n _contentLength: null,\n _hasBody: true,\n _trailer: '',\n finished: false,\n _headerSent: false,\n _closed: false,\n socket: [Socket],\n _header: null,\n _keepAliveTimeout: 5000,\n _onPendingData: [Function: bound updateOutgoingData],\n req: [Circular *1],\n _sent100: false,\n _expect_continue: false,\n locals: [Object: null prototype] {},\n writeHead: [Function: writeHead],\n end: [Function: end],\n [Symbol(kCapture)]: false,\n [Symbol(kNeedDrain)]: false,\n [Symbol(corked)]: 0,\n [Symbol(kOutHeaders)]: [Object: null prototype]\n },\n body: {},\n secret: undefined,\n cookies: {\n ajs_anonymous_id: 'f2240086-b9c4-49c3-8b2c-1df774f21f00',\n 'gitpod-user': 'true',\n _online_ide_myrootdomain_xxx_: 's:2618104b-3d3e-4582-bf2e-1366a1971b95.KD5qzwFxtgrXanMxr/Oz5NK7LtoNUXYvHw9JUHK8Hy8'\n },\n signedCookies: [Object: null prototype] {},\n _parsedOriginalUrl: Url {\n protocol: null,\n slashes: null,\n auth: null,\n host: null,\n port: null,\n hostname: null,\n hash: null,\n search: '?code=6ed648b80bae72d2d238f97e9fc68b6d84c65317b80f6ecebe864b60cfeb2298',\n query: 'code=6ed648b80bae72d2d238f97e9fc68b6d84c65317b80f6ecebe864b60cfeb2298',\n 
pathname: '/auth/repo.myrootdomain.xxx/callback',\n path: '/auth/repo.myrootdomain.xxx/callback?code=6ed648b80bae72d2d238f97e9fc68b6d84c65317b80f6ecebe864b60cfeb2298',\n href: '/auth/repo.myrootdomain.xxx/callback?code=6ed648b80bae72d2d238f97e9fc68b6d84c65317b80f6ecebe864b60cfeb2298',\n _raw: '/auth/repo.myrootdomain.xxx/callback?code=6ed648b80bae72d2d238f97e9fc68b6d84c65317b80f6ecebe864b60cfeb2298'\n },\n sessionStore: MySQLStore {\n connection: [Pool],\n options: [Object],\n generate: [Function (anonymous)],\n _events: [Object: null prototype],\n _eventsCount: 2,\n _expirationInterval: Timeout {\n _idleTimeout: 900000,\n _idlePrev: [TimersList],\n _idleNext: [TimersList],\n _idleStart: 5066,\n _onTimeout: [Function: bound ],\n _timerArgs: undefined,\n _repeat: 900000,\n _destroyed: false,\n [Symbol(refed)]: true,\n [Symbol(kHasPrimitive)]: false,\n [Symbol(asyncId)]: 85,\n [Symbol(triggerId)]: 57\n }\n },\n sessionID: '2618104b-3d3e-4582-bf2e-1366a1971b95',\n session: Session { cookie: [Object], authFlow: [Object] },\n _passport: { instance: [Authenticator] },\n [Symbol(kCapture)]: false,\n [Symbol(kHeaders)]: {\n host: 'online-ide.myrootdomain.xxx',\n 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.127 Safari/537.36 Edg/100.0.1185.44',\n accept: 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',\n 'accept-encoding': 'gzip, deflate, br',\n 'accept-language': 'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6',\n cookie: 'ajs_anonymous_id=f2240086-b9c4-49c3-8b2c-1df774f21f00; gitpod-user=true; _online_ide_myrootdomain_xxx_=s%3A2618104b-3d3e-4582-bf2e-1366a1971b95.KD5qzwFxtgrXanMxr%2FOz5NK7LtoNUXYvHw9JUHK8Hy8',\n dnt: '1',\n referer: 'https://repo.myrootdomain.xxx/',\n 'sec-ch-ua': '\" Not A;Brand\";v=\"99\", \"Chromium\";v=\"100\", \"Microsoft Edge\";v=\"100\"',\n 'sec-ch-ua-mobile': '?0',\n 'sec-ch-ua-platform': '\"Windows\"',\n 'sec-fetch-dest': 'document',\n 'sec-fetch-mode': 'navigate',\n 'sec-fetch-site': 'same-site',\n 'upgrade-insecure-requests': '1',\n 'x-forwarded-for': '10.42.0.1',\n 'x-forwarded-proto': 'https',\n 'x-real-ip': '10.42.0.1'\n },\n [Symbol(kHeadersCount)]: 36,\n [Symbol(kTrailers)]: null,\n [Symbol(kTrailersCount)]: 0,\n [Symbol(RequestTimeout)]: undefined\n }\n}"}
It indicates the auth provider callback. In the payload I noticed that hostname is null. This probably explains the error, but I have no clue why, or how I could solve it.
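One more thing I could try is verifying connectivity from the server pod to the Gitlab instance (a sketch; it assumes curl is available in the server container):
# any HTTP status code (even 401) would show that DNS, routing and TLS from inside the cluster are fine
k3s kubectl exec -n gitpod deploy/server -c server -- \
  curl -sS -o /dev/null -w '%{http_code}\n' https://repo.myrootdomain.xxx/api/v4/version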
I am running out of ideas. What am I missing? Any suggestions would be greatly appreciated!
Steps to reproduce
- Gitpod 2022.03.1 release, on CentOS 8.2 (kernel 4.18.0-305.3.1.el8.x86_64) with K3S v1.22.7+k3s1 (8432d7f2).
- Follow the steps in the installer readme.md and this issue, but with K3S, as described above.
- Enable the Gitpod integration in my self-hosted Gitlab instance: in the admin panel, fill the Gitpod URL field with online-ide.myrootdomain.xxx. Activate it in my account's profile settings and choose to open a personal private project in the Gitpod IDE, which creates a workspace on my Gitpod instance.
- The error shows on my Gitpod instance, but on gitpod.io everything works fine.
Workspace affected
No response
Expected behavior
Successfully create a workspace on my Gitpod instance from a repository on my self-hosted Gitlab instance.
Example repository
No response
Anything else?
No response