
update version to v3.3.0-alpha.1
Signed-off-by: pixiake <[email protected]>
pixiake committed Apr 18, 2022
1 parent 041aef2 commit 84febc8
Showing 6 changed files with 10 additions and 10 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -45,8 +45,8 @@ If your Kubernetes cluster environment meets all requirements mentioned above, t
### Minimal Installation

```bash
-kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0-alpha.0/kubesphere-installer.yaml
-kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0-alpha.0/cluster-configuration.yaml
+kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0-alpha.1/kubesphere-installer.yaml
+kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0-alpha.1/cluster-configuration.yaml
```

Then inspect the installation logs.
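As a sketch, the log-inspection step mentioned above can be done with the same command the README_zh.md hunk below uses (the `app=ks-installer` label selector comes from that command):

```bash
# Follow the ks-installer pod logs until installation completes
kubectl logs -n kubesphere-system \
  "$(kubectl get pod -n kubesphere-system -l app=ks-installer -o jsonpath='{.items[0].metadata.name}')" -f
```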
4 changes: 2 additions & 2 deletions README_zh.md
@@ -45,8 +45,8 @@ glusterfs kubernetes.io/glusterfs 3d4h
### Minimal Quick Deployment

```bash
-kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0-alpha.0/kubesphere-installer.yaml
-kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0-alpha.0/cluster-configuration.yaml
+kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0-alpha.1/kubesphere-installer.yaml
+kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0-alpha.1/cluster-configuration.yaml

# Check the deployment progress and logs
$ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-installer -o jsonpath='{.items[0].metadata.name}') -f
2 changes: 1 addition & 1 deletion controller/installRunner.py
@@ -46,7 +46,7 @@
"name": "ks-installer",
"namespace": "kubesphere-system",
"labels": {
-"version": "v3.3.0-alpha.0"
+"version": "v3.3.0-alpha.1"
},
},
}
2 changes: 1 addition & 1 deletion deploy/cluster-configuration.yaml
@@ -5,7 +5,7 @@ metadata:
name: ks-installer
namespace: kubesphere-system
labels:
-version: v3.3.0-alpha.0
+version: v3.3.0-alpha.1
spec:
persistence:
storageClass: "" # If there is no default StorageClass in your cluster, you need to specify an existing StorageClass here.
2 changes: 1 addition & 1 deletion deploy/kubesphere-installer.yaml
@@ -287,7 +287,7 @@ spec:
serviceAccountName: ks-installer
containers:
- name: installer
-image: kubespheredev/ks-installer:v3.3.0-alpha.0
+image: kubespheredev/ks-installer:v3.3.0-alpha.1
imagePullPolicy: "Always"
resources:
limits:
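After applying the updated manifest, the running image can be checked; a sketch, assuming the Deployment is named `ks-installer` (the hunk above shows only the pod template, not the Deployment name) and has a single container:

```bash
# Print the image of the installer container; after the updated manifest
# is applied it should show the v3.3.0-alpha.1 tag
kubectl -n kubesphere-system get deployment ks-installer \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```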
6 changes: 3 additions & 3 deletions roles/download/defaults/main.yml
@@ -25,7 +25,7 @@ ks_version: >-
{%- if dev_tag is defined and dev_tag != "" -%}
{{ dev_tag }}
{%- else -%}
-v3.3.0-alpha.0
+v3.3.0-alpha.1
{%- endif %}
#KubeSphere:
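The `ks_version` template above resolves to `dev_tag` when that variable is defined and non-empty, and otherwise falls back to the pinned release tag. A minimal shell sketch of the same fallback (the function name is illustrative, not part of the repo):

```bash
resolve_ks_version() {
  # Mirror the Jinja conditional: a non-empty dev_tag wins,
  # otherwise fall back to the pinned release tag
  if [ -n "${dev_tag:-}" ]; then
    echo "$dev_tag"
  else
    echo "v3.3.0-alpha.1"
  fi
}

resolve_ks_version      # prints v3.3.0-alpha.1
dev_tag=dev-nightly
resolve_ks_version      # prints dev-nightly
```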
@@ -139,7 +139,7 @@ argocd_applicationset_repo: "{{ base_repo | default('quay.io/') }}{{ namespace_o
argocd_applicationset_tag: v0.4.1
argocd_dex_repo: "{{ base_repo | default('ghcr.io/') }}{{ namespace_override | default('dexidp') }}/dex"
argocd_dex_tag: v2.30.2
-argocd_redis_repo: "{{ base_repo }}{{ namespace_override }}/redis"
+argocd_redis_repo: "{{ base_library_repo }}redis"
argocd_redis_tag: 6.2.6-alpine

#ks-monitor:
@@ -625,7 +625,7 @@ images:

argocd_repo:
repo: "{{ argocd_repo }}"
-tag: "{{ argocd_repo_tag }}"
+tag: "{{ argocd_tag }}"
sha256: "{{ argocd_checksum|default(None) }}"
group: "kubesphere-devops-images"

