Commit 7fecf31

Include some use-cases
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <[email protected]>
1 parent 6459614 commit 7fecf31

1 file changed: +9 −6 lines


Diff for: blog/_posts/2024-12-09-quickstart-uplink.md

@@ -459,12 +459,15 @@ Once you have services such as Postgresql, SSH, Ollama, the Kubernetes API serve
 
 This means that all CLIs, tools, and products that work with whatever you've tunneled can be used without modification.
 
-* Perhaps you manage many databases? Use pgdump and pgrestore to backup and restore databases.
-* Do you deploy to Kubernetes? Use kubectl, Helm, ArgoCD, or Flux to deploy applications, just run them in-cluster
-* Do you write your own Kubernetes operators for customers? Just provide the updated KUBECONFIG to your Kubernetes operators and controllers
-* Do you want to access GPUs hosted on Lambda Labs, Paperspace, or your own datacenter? Command and control your GPU instances from your management cluster
-* Do you have a powerful GPU somewhere and want to infer against it using your central cluster? Run ollama remotely, and tunnel its REST API back
-* Do you have many different edge devices? Tunnel SSHD and run Ansible, Puppet, or bash scripts against them just as if they were on your local network
+**Common use-cases for inlets-uplink**
+
+* Do you have an agent for your SaaS product that customers need to run on private networks? Access it via a tunnel.
+* Perhaps you manage a number of remote databases? Use pg_dump and pg_restore to backup and restore databases.
+* Do you deploy to Kubernetes? Use kubectl, Helm, ArgoCD, or Flux to deploy applications, just run them in-cluster.
+* Do you write your own Kubernetes operators for customers? Just provide the updated KUBECONFIG to your Kubernetes operators and controllers.
+* Do you want to access GPUs hosted on Lambda Labs, Paperspace, or your own datacenter? Command and control your GPU instances from your management cluster.
+* Do you have a powerful GPU somewhere and want to infer against it using your central cluster? Run ollama remotely, and tunnel its REST API back.
+* Do you have many different edge devices? Tunnel SSHD and run Ansible, Puppet, or bash scripts against them just as if they were on your local network.
 
 In the documentation you can learn more about managing, monitoring and automating tunnels.
 
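
The use-cases added in this diff translate into ordinary CLI commands once a tunnel is reachable from inside the management cluster. A minimal sketch, assuming hypothetical tunnel service names (`customer1`, `gpu-box`) in a `tunnels` namespace and a kubeconfig exported for a tunneled Kubernetes API server — these names are illustrative, not taken from the post:

```shell
#!/usr/bin/env bash
# Sketch only: each command talks to a service exposed over an
# inlets-uplink tunnel. Hostnames and file names are hypothetical.

# Back up a customer's Postgres database over its tunnel
pg_dump -h customer1.tunnels.svc.cluster.local -p 5432 -U postgres \
  -Fc -f customer1.dump app_db

# Deploy against a tunneled Kubernetes API server, using a
# kubeconfig whose server address points at the tunnel
export KUBECONFIG=./customer1-kubeconfig.yaml
kubectl get nodes

# Query a remote Ollama instance through its tunneled REST API
curl -s http://gpu-box.tunnels.svc.cluster.local:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Hello"}'
```

The point made in the diff holds here: none of `pg_dump`, `kubectl`, or `curl` needs modification — they simply connect to the tunnel's in-cluster address instead of the private network.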
