
Commit 20c9840: Update paths

1 parent 824b814 · commit 20c9840

31 files changed: +164, -164 lines changed

README.md

Lines changed: 51 additions & 51 deletions
Large diffs are not rendered by default.

doc/applications-onboard/network-edge-applications-onboarding.md

Lines changed: 11 additions & 11 deletions
@@ -35,21 +35,21 @@ Copyright (c) 2019-2020 Intel Corporation
 This document aims to familiarize users with the Open Network Edge Services Software (OpenNESS) application on-boarding process for the Network Edge. This document provides instructions on how to deploy an application from the Edge Controller to Edge Nodes in the cluster; it also provides sample deployment scenarios and traffic configuration for the application. The applications will be deployed from the Edge Controller via the Kubernetes `kubectl` command-line utility. Sample specification files for application onboarding are also provided.

 # Installing OpenNESS
-The following application onboarding steps assume that OpenNESS was installed through [OpenNESS playbooks](https://github.com/otcshare/specs/blob/master/doc/getting-started/network-edge/controller-edge-node-setup.md).
+The following application onboarding steps assume that OpenNESS was installed through [OpenNESS playbooks](https://github.com/open-ness/specs/blob/master/doc/getting-started/network-edge/controller-edge-node-setup.md).

 # Building applications
 Users must provide the application to be deployed on the OpenNESS platform for Network Edge. The application must be provided in a Docker\* image format that is available either from an external Docker repository (Docker Hub) or a locally built Docker image. The image must be available on the Edge Node, which the application will be deployed on.

 > **Note**: The Harbor registry setup is out of scope for this document. If users already have a docker container image file and would like to copy it to the node manually, they can use the `docker load` command to add the image. The success of using a pre-built Docker image depends on the application dependencies that users must know.

-The OpenNESS [edgeapps](https://github.com/otcshare/edgeapps) repository provides images for OpenNESS supported applications. Pull the repository to your Edge Node to build the images.
+The OpenNESS [edgeapps](https://github.com/open-ness/edgeapps) repository provides images for OpenNESS supported applications. Pull the repository to your Edge Node to build the images.

 This document explains the build and deployment of two applications:
 1. Sample application: a simple “Hello, World!” reference application for OpenNESS
 2. OpenVINO™ application: A close to real-world inference application

 ## Building sample application images
-The sample application is available in [the edgeapps repository](https://github.com/otcshare/edgeapps/tree/master/sample-app); further information about the application is contained within the `Readme.md` file.
+The sample application is available in [the edgeapps repository](https://github.com/open-ness/edgeapps/tree/master/sample-app); further information about the application is contained within the `Readme.md` file.

 The following steps are required to build the sample application Docker images for testing the OpenNESS Edge Application Agent (EAA) with consumer and producer applications:
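
The build steps themselves are elided from this hunk. As a rough sketch only (the build script name is an assumption; only the `docker images | grep consumer` check is confirmed by the diff below), building and verifying the sample images looks like:

```shell
# Hypothetical sketch: build the sample producer/consumer images from the
# edgeapps checkout, then confirm they landed in the local Docker cache.
cd edgeapps/sample-app
./build-images.sh            # script name is an assumption
docker images | grep producer
docker images | grep consumer
```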

@@ -64,7 +64,7 @@ The following steps are required to build the sample application Docker images f
 docker images | grep consumer
 ```
 ## Building the OpenVINO application images
-The OpenVINO application is available in [the EdgeApps repository](https://github.com/otcshare/edgeapps/tree/master/openvino); further information about the application is contained within `Readme.md` file.
+The OpenVINO application is available in [the EdgeApps repository](https://github.com/open-ness/edgeapps/tree/master/openvino); further information about the application is contained within `Readme.md` file.

 The following steps are required to build the sample application Docker images for testing OpenVINO consumer and producer applications:

@@ -491,12 +491,12 @@ This section guides users through the complete process of onboarding the OpenVIN

 ## Deploying the Application

-1. An application `yaml` specification file for the OpenVINO producer that is used to deploy the K8s pod can be found in the Edge Apps repository at [./applications/openvino/producer/openvino-prod-app.yaml](https://github.com/otcshare/edgeapps/blob/master/applications/openvino/producer/openvino-prod-app.yaml). The pod will use the Docker image, which must be [built](#building-openvino-application-images) and available on the platform. Deploy the producer application by running:
+1. An application `yaml` specification file for the OpenVINO producer that is used to deploy the K8s pod can be found in the Edge Apps repository at [./applications/openvino/producer/openvino-prod-app.yaml](https://github.com/open-ness/edgeapps/blob/master/applications/openvino/producer/openvino-prod-app.yaml). The pod will use the Docker image, which must be [built](#building-openvino-application-images) and available on the platform. Deploy the producer application by running:
 ```
 kubectl apply -f openvino-prod-app.yaml
 kubectl certificate approve openvino-prod-app
 ```
-2. An application `yaml` specification file for the OpenVINO consumer that is used to deploy K8s pod can be found in the Edge Apps repository at [./applications/openvino/consumer/openvino-cons-app.yaml](https://github.com/otcshare/edgeapps/blob/master/applications/openvino/consumer/openvino-cons-app.yaml). The pod will use the Docker image, which must be [built](#building-openvino-application-images) and available on the platform. Deploy the consumer application by running:
+2. An application `yaml` specification file for the OpenVINO consumer that is used to deploy K8s pod can be found in the Edge Apps repository at [./applications/openvino/consumer/openvino-cons-app.yaml](https://github.com/open-ness/edgeapps/blob/master/applications/openvino/consumer/openvino-cons-app.yaml). The pod will use the Docker image, which must be [built](#building-openvino-application-images) and available on the platform. Deploy the consumer application by running:
 ```
 kubectl apply -f openvino-cons-app.yaml
 kubectl certificate approve openvino-cons-app
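
A quick status check after the two `kubectl apply` commands above can confirm the onboarding worked; a minimal sketch, assuming the pod names follow the app names in the yaml files:

```shell
# Watch the producer and consumer pods reach the Running state
kubectl get pods -o wide | grep openvino

# Tail the producer logs if a pod is stuck in Pending or CrashLoopBackOff
kubectl logs -f openvino-prod-app   # pod name is an assumption
```
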
@@ -597,7 +597,7 @@ The following is an example of how to set up DNS resolution for OpenVINO consume
 dig openvino.openness
 ```
 3. On the traffic generating host build the image for the [Client Simulator](#building-openvino-application-images)
-4. Run the following from [edgeapps/applications/openvino/clientsim](https://github.com/otcshare/edgeapps/blob/master/applications/openvino/clientsim/run-docker.sh) to start the video traffic via the containerized Client Simulator. A graphical user environment is required to view the results of the returning augmented videos stream.
+4. Run the following from [edgeapps/applications/openvino/clientsim](https://github.com/open-ness/edgeapps/blob/master/applications/openvino/clientsim/run-docker.sh) to start the video traffic via the containerized Client Simulator. A graphical user environment is required to view the results of the returning augmented videos stream.
 ```
 ./run_docker.sh
 ```
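
For the `dig openvino.openness` check in this hunk, it can help to confirm the answer actually resolves to the expected edge node; a hedged example, where the DNS server address is a placeholder:

```shell
# Query the edge DNS service directly; <edge-dns-ip> is a placeholder
dig +short openvino.openness @<edge-dns-ip>
```
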
@@ -726,15 +726,15 @@ kubectl interfaceservice get <officeX_host_name>

 ## Inter application communication
 The IAC is available via the default overlay network used by Kubernetes - Kube-OVN.
-For more information on Kube-OVN, refer to the Kube-OVN support in OpenNESS [documentation](https://github.com/otcshare/specs/blob/master/doc/dataplane/openness-interapp.md#interapp-communication-support-in-openness-network-edge)
+For more information on Kube-OVN, refer to the Kube-OVN support in OpenNESS [documentation](https://github.com/open-ness/specs/blob/master/doc/dataplane/openness-interapp.md#interapp-communication-support-in-openness-network-edge)

 # Enhanced Platform Awareness
-Enhanced platform awareness (EPA) is supported in OpenNESS via the use of the Kubernetes NFD plugin. This plugin is enabled in OpenNESS for Network Edge by default. Refer to the [NFD whitepaper](https://github.com/otcshare/specs/blob/master/doc/enhanced-platform-awareness/openness-node-feature-discovery.md) for information on how to make your application pods aware of the supported platform capabilities.
+Enhanced platform awareness (EPA) is supported in OpenNESS via the use of the Kubernetes NFD plugin. This plugin is enabled in OpenNESS for Network Edge by default. Refer to the [NFD whitepaper](https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-node-feature-discovery.md) for information on how to make your application pods aware of the supported platform capabilities.

-Refer to [<b>supported-epa.md</b>](https://github.com/otcshare/specs/blob/master/doc/getting-started/network-edge/supported-epa.md) for the list of supported EPA features on OpenNESS network edge.
+Refer to [<b>supported-epa.md</b>](https://github.com/open-ness/specs/blob/master/doc/getting-started/network-edge/supported-epa.md) for the list of supported EPA features on OpenNESS network edge.

 # VM support for Network Edge
-Support for VM deployment on OpenNESS for Network Edge is available and enabled by default, where certain configuration and prerequisites may need to be fulfilled to use all capabilities. For information on application deployment in VM, see [openness-network-edge-vm-support.md](https://github.com/otcshare/specs/blob/master/doc/applications-onboard/openness-network-edge-vm-support.md).
+Support for VM deployment on OpenNESS for Network Edge is available and enabled by default, where certain configuration and prerequisites may need to be fulfilled to use all capabilities. For information on application deployment in VM, see [openness-network-edge-vm-support.md](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/openness-network-edge-vm-support.md).

 # Troubleshooting
 This section covers steps for debugging edge applications in Network Edge.
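
As context for the EPA paragraph in this hunk: NFD advertises discovered capabilities as node labels, so a quick way to see what an application pod could select on is to list them; a sketch, assuming NFD's standard label prefix:

```shell
# List NFD-discovered feature labels on a node; <node-name> is a placeholder
kubectl describe node <node-name> | grep feature.node.kubernetes.io
```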

doc/applications-onboard/openness-interface-service.md

Lines changed: 2 additions & 2 deletions
@@ -35,7 +35,7 @@ Update the physical Ethernet interface with an IP from the `192.168.1.0/24` subn
 route add -net 10.16.0.0/16 gw 192.168.1.1 dev eth1
 ```

-> **NOTE**: The default OpenNESS network policy applies to pods in a `default` namespace and blocks all ingress traffic. Refer to [Kubernetes NetworkPolicies](https://github.com/otcshare/specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md#applying-kubernetes-network-policies) for an example policy allowing ingress traffic from the `192.168.1.0/24` subnet on a specific port.
+> **NOTE**: The default OpenNESS network policy applies to pods in a `default` namespace and blocks all ingress traffic. Refer to [Kubernetes NetworkPolicies](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md#applying-kubernetes-network-policies) for an example policy allowing ingress traffic from the `192.168.1.0/24` subnet on a specific port.

 > **NOTE**: The subnet `192.168.1.0/24` is allocated by the Ansible\* playbook to the physical interface, which is attached to the first edge node. The second edge node joined to the cluster is allocated to the next subnet `192.168.2.0/24` and so on.
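
A side note on the `route add` line shown in this hunk: `route(8)` is the legacy tool, and the same route can be expressed with iproute2; a small equivalent sketch:

```shell
# iproute2 equivalent of: route add -net 10.16.0.0/16 gw 192.168.1.1 dev eth1
ip route add 10.16.0.0/16 via 192.168.1.1 dev eth1
```
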
@@ -78,7 +78,7 @@ Currently, interface service supports the following values of the `driver` param

 ## Userspace (DPDK) bridge

-The default DPDK-enabled bridge `br-userspace` is only available if OpenNESS is deployed with support for [Userspace CNI](https://github.com/otcshare/specs/blob/master/doc/dataplane/openness-userspace-cni.md) and at least one pod was deployed using the Userspace CNI. You can check if the `br-userspace` bridge exists by running the following command on your node:
+The default DPDK-enabled bridge `br-userspace` is only available if OpenNESS is deployed with support for [Userspace CNI](https://github.com/open-ness/specs/blob/master/doc/dataplane/openness-userspace-cni.md) and at least one pod was deployed using the Userspace CNI. You can check if the `br-userspace` bridge exists by running the following command on your node:

 ```shell
 ovs-vsctl list-br
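
Since `ovs-vsctl list-br` (the last context line of this hunk) prints every OVS bridge, a scripted presence check may be handy; a sketch:

```shell
# Print a confirmation only when the DPDK bridge exists (exact-line match)
ovs-vsctl list-br | grep -qx br-userspace && echo "br-userspace present"
```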

doc/applications-onboard/openness-network-edge-vm-support.md

Lines changed: 1 addition & 1 deletion
@@ -132,7 +132,7 @@ The KubeVirt role responsible for bringing up KubeVirt components is enabled by

 ## VM deployment
 Provided below are sample deployment instructions for different types of VMs.
-Please use sample `.yaml` specification files provided in the OpenNESS Edge Controller directory, [edgenode/edgecontroller/kubevirt/examples/](https://github.com/otcshare/edgenode/tree/master/edgecontroller/kubevirt/examples), to deploy the workloads. Some of the files require modification to suit the environment they will be deployed in. Specific instructions on modifications are provided in the following steps:
+Please use sample `.yaml` specification files provided in the OpenNESS Edge Controller directory, [edgenode/edgecontroller/kubevirt/examples/](https://github.com/open-ness/edgenode/tree/master/edgecontroller/kubevirt/examples), to deploy the workloads. Some of the files require modification to suit the environment they will be deployed in. Specific instructions on modifications are provided in the following steps:

 ### Stateless VM deployment
 To deploy a sample stateless VM with containerDisk storage:
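
The actual deployment steps are elided from this hunk; as a rough illustration of a typical KubeVirt apply-and-verify flow (the spec file name is a hypothetical placeholder, not taken from the examples directory):

```shell
# Apply a VM spec and check it via KubeVirt's CRD-backed resources
kubectl apply -f statelessVM.yaml   # file name is a hypothetical placeholder
kubectl get vms                     # requires the KubeVirt CRDs to be installed
```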

doc/applications-onboard/using-openness-cnca.md

Lines changed: 2 additions & 2 deletions
@@ -40,7 +40,7 @@ Available management with `kube-cnca` against LTE CUPS OAM agent are:
 2. Deletion of LTE CUPS userplanes
 3. Updating (patching) LTE CUPS userplanes

-The `kube-cnca` plugin is installed automatically on the control plane during the installation phase of the [OpenNESS Experience Kit](https://github.com/otcshare/specs/blob/master/doc/getting-started/openness-experience-kits.md).
+The `kube-cnca` plugin is installed automatically on the control plane during the installation phase of the [OpenNESS Experience Kit](https://github.com/open-ness/specs/blob/master/doc/getting-started/openness-experience-kits.md).
 In the following sections, a detailed explanation with examples is provided about the CNCA management.

 Creation of the LTE CUPS userplane is performed based on the configuration provided by the given YAML file. The YAML configuration should follow the provided sample YAML in [Sample YAML LTE CUPS userplane configuration](#sample-yaml-lte-cups-userplane-configuration) section. Use the `apply` command to post a userplane creation request onto Application Function (AF):
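
The `apply` invocation itself falls outside this hunk; a minimal sketch of what it presumably looks like (the YAML file name is a placeholder, and only the `apply` subcommand is confirmed by the surrounding text):

```shell
# Post a userplane creation request to the AF via the kube-cnca plugin
kubectl cnca apply -f sample-userplane.yaml   # file name is a placeholder
```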
@@ -165,7 +165,7 @@ OpenNESS provides ansible scripts for setting up NGC components for two scenario

 For Network Edge mode, the CNCA provides a kubectl plugin to configure the 5G Core network. Kubernetes adopted plugin concepts to extend its functionality. The `kube-cnca` plugin executes CNCA related functions within the Kubernetes ecosystem. The plugin performs remote callouts against NGC OAM and AF microservice on the controller itself.

-The `kube-cnca` plugin is installed automatically on the control plane during the installation phase of the [OpenNESS Experience Kit](https://github.com/otcshare/specs/blob/master/doc/getting-started/network-edge/controller-edge-node-setup.md)
+The `kube-cnca` plugin is installed automatically on the control plane during the installation phase of the [OpenNESS Experience Kit](https://github.com/open-ness/specs/blob/master/doc/getting-started/network-edge/controller-edge-node-setup.md)

 #### Edge Node services operations with 5G Core (through OAM interface)

doc/applications/openness_appguide.md

Lines changed: 1 addition & 1 deletion
@@ -35,7 +35,7 @@ This guide is targeted at the <b><i>Cloud Application developers who want to</b>
 - Develop applications for Edge computing that take advantage of all the capabilities exposed through Edge Compute APIs in OpenNESS.
 - Port the existing applications and services in the public/private cloud to the edge unmodified.

-The document will describe how to develop applications from scratch using the template/example applications/services provided in the OpenNESS software release. All the OpenNESS Applications and services can be found in the [edgeapps repo](https://github.com/otcshare/edgeapps).
+The document will describe how to develop applications from scratch using the template/example applications/services provided in the OpenNESS software release. All the OpenNESS Applications and services can be found in the [edgeapps repo](https://github.com/open-ness/edgeapps).

 ## OpenNESS Edge Node Applications
 OpenNESS Applications can onboard and provision on the edge nodes only through the OpenNESS Controller. The first step in Onboarding involves uploading the application image to the controller through the web interface. Both VM and Container images are supported.

doc/applications/openness_openvino.md

Lines changed: 2 additions & 2 deletions
@@ -100,14 +100,14 @@ For more information about CSR, refer to [OpenNESS CertSigner](../applications-o

 Applications are deployed on the OpenNESS Edge Node as Docker containers. Three docker containers need to be built to get the OpenVINO pipeline working: `clientsim`, `producer`, and `consumer`. The `clientsim` Docker image must be built and executed on the client simulator machine while the `producer` and `consumer` containers/pods should be onboarded on the OpenNESS Edge Node.

-On the client simulator, clone the [OpenNESS edgeapps](https://github.com/otcshare/edgeapps) and execute the following command to build the `client-sim` container:
+On the client simulator, clone the [OpenNESS edgeapps](https://github.com/open-ness/edgeapps) and execute the following command to build the `client-sim` container:

 ```shell
 cd <edgeapps-repo>/openvino/clientsim
 ./build-image.sh
 ```

-On the OpenNESS Edge Node, clone the [OpenNESS edgeapps](https://github.com/otcshare/edgeapps) and execute the following command to build the `producer` and `consumer` containers:
+On the OpenNESS Edge Node, clone the [OpenNESS edgeapps](https://github.com/open-ness/edgeapps) and execute the following command to build the `producer` and `consumer` containers:
 ```shell
 cd <edgeapps-repo>/openvino/producer
 ./build-image.sh
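
After the build scripts in this hunk finish, the images can be checked in the local Docker cache; a small sketch, assuming the image names contain the respective directory names:

```shell
# Confirm the clientsim, producer, and consumer images were built
docker images | grep -E 'clientsim|producer|consumer'
```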
