blog/_posts/2024-12-09-quickstart-uplink.md
Once you have services such as PostgreSQL, SSH, Ollama, the Kubernetes API server…
This means that all CLIs, tools, and products that work with whatever you've tunneled can be used without modification.
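For instance, with a Kubernetes API server exposed over a tunnel, standard tooling works unchanged once your kubeconfig points at the tunneled endpoint. A minimal sketch, where the kubeconfig path and file name are assumptions for illustration, not from the post:

```shell
# Point standard tooling at a hypothetical kubeconfig whose "server" field
# targets the tunneled Kubernetes API endpoint (path is an assumption):
export KUBECONFIG="$HOME/.kube/customer1-tunnel.yaml"

# The usual commands then run exactly as they would against a local cluster:
kubectl get nodes
helm list --all-namespaces
```

The same pattern applies to any other tunneled service: configure the client's host or endpoint once, then use the tool as normal.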
**Common use cases for inlets-uplink**
* Do you have an agent for your SaaS product that customers need to run on private networks? Access it via a tunnel.
* Perhaps you manage a number of remote databases? Use pg_dump and pg_restore to back up and restore databases.
* Do you deploy to Kubernetes? Use kubectl, Helm, ArgoCD, or Flux to deploy applications; just run them in-cluster.
* Do you write your own Kubernetes operators for customers? Just provide the updated KUBECONFIG to your Kubernetes operators and controllers.
* Do you want to access GPUs hosted on Lambda Labs, Paperspace, or your own datacenter? Command and control your GPU instances from your management cluster.
* Do you have a powerful GPU somewhere and want to infer against it using your central cluster? Run Ollama remotely, and tunnel its REST API back.
* Do you have many different edge devices? Tunnel SSHD and run Ansible, Puppet, or bash scripts against them just as if they were on your local network.
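To make the database and GPU cases concrete, here is a sketch of how the unmodified CLIs might look against tunneled endpoints. The hostnames and database name below are assumptions chosen for illustration; substitute the in-cluster service names your tunnels actually expose:

```shell
# Hypothetical tunneled endpoints inside the management cluster
# (these names are assumptions, not from the post):
PG_HOST="customer1-postgres.tunnels.svc.cluster.local"
OLLAMA_URL="http://gpu-node-ollama.tunnels.svc.cluster.local:11434"

# Back up a remote customer database over the tunnel, as if it were local:
pg_dump -h "$PG_HOST" -p 5432 -U postgres -Fc customers > customers.dump

# Query a remote Ollama instance through its tunneled REST API:
curl -s "$OLLAMA_URL/api/generate" \
  -d '{"model": "llama3.2", "prompt": "Say hello", "stream": false}'
```

Because the tunnel presents each service on an ordinary host and port, neither pg_dump nor curl needs any tunnel-specific flags.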
In the documentation you can learn more about managing, monitoring, and automating tunnels.