This repository has been archived by the owner on Jul 24, 2019. It is now read-only.

Add support for Cinder fake backend #270

Closed
4 changes: 4 additions & 0 deletions cinder/templates/deployment-volume.yaml
@@ -67,6 +67,7 @@ spec:
mountPath: /etc/cinder/conf/cinder.conf
subPath: cinder.conf
readOnly: true
{{- if .Values.ceph.enabled }}
- name: cephconf
mountPath: /etc/ceph/ceph.conf
subPath: ceph.conf
@@ -75,6 +76,7 @@ spec:
mountPath: /etc/ceph/ceph.client.{{ .Values.ceph.cinder_user }}.keyring
subPath: ceph.client.{{ .Values.ceph.cinder_user }}.keyring
readOnly: true
{{- end }}
volumes:
- name: pod-etc-cinder
emptyDir: {}
@@ -83,9 +85,11 @@ spec:
- name: cinderconf
configMap:
name: cinder-etc
{{- if .Values.ceph.enabled }}
- name: cephconf
configMap:
name: cinder-etc
- name: cephclientcinderkeyring
configMap:
name: cinder-etc
{{- end }}
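The net effect of this hunk is that the Ceph config and keyring mounts, and their matching volumes, are rendered only when `ceph.enabled` is true. A minimal sketch of a values override that exercises the new guard; the key comes from the `ceph` section of cinder/values.yaml later in this PR:

```
# Sketch: with this override, the {{- if .Values.ceph.enabled }} blocks
# above are skipped, so no cephconf or cephclientcinderkeyring entries
# appear in the rendered deployment.
ceph:
  enabled: false
```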
7 changes: 7 additions & 0 deletions cinder/templates/etc/_cinder.conf.tpl
@@ -64,6 +64,7 @@ rabbit_ha_queues = true
rabbit_hosts = {{ .Values.messaging.hosts }}

[rbd1]
volume_backend_name = rbd
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = {{ .Values.backends.rbd1.pool }}
rbd_ceph_conf = /etc/ceph/ceph.conf
@@ -78,3 +79,9 @@ rbd_secret_uuid = {{- include "secrets/ceph-client-key" . -}}
{{- end }}
rbd_secret_uuid = {{ .Values.backends.rbd1.secret }}
report_discard_supported = True

{{- if .Values.backends.fake.enabled }}
[fake]
volume_backend_name = fake
volume_driver = cinder.tests.fake_driver.FakeLoggingVolumeDriver
{{- end }}
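The new `[fake]` stanza is gated the same way. A sketch of the values that make it render, using the `backends.fake.enabled` key this template reads:

```
# Sketch: with this override, the rendered cinder.conf gains a [fake]
# section whose volume_driver is FakeLoggingVolumeDriver.
backends:
  fake:
    enabled: true
```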
37 changes: 19 additions & 18 deletions cinder/values.yaml
@@ -22,6 +22,25 @@ replicas:
volume: 1
scheduler: 1

backends:
enabled:
- rbd1
rbd1:
secret: null
user: "admin"
pool: "volumes"
fake:
enabled: false
Collaborator:
I'd like to see what @alanmeadows recommends, but I would imagine a structure that could allow multiple backends to be enabled simultaneously would be desired, like:

```
backends:
  fake:
    enabled: true
  rbd1:
    enabled: true
    secret: null
    user: "admin"
    pool: "volumes"
  lvm:
    enabled: true
...
```

Contributor Author:

I agree it looks a little unwieldy in its current state here. It can support multiple backends as is, but a developer would have to add another entry under the current `enabled:` list. Currently there's a function in Helm-toolkit that handles dissecting these lists and entering them in the service configuration files, which relies on them being scoped as they are now. I can look into handling them in the manner you suggest, because I agree it does appear more readable.
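For readers unfamiliar with that pattern, here is a hedged illustration of the kind of template being described. The actual Helm-toolkit helper is not shown in this PR, so the inline Sprig `join` below is an assumption standing in for it:

```
# Illustrative only: flatten the backends.enabled list from values.yaml
# into cinder.conf's enabled_backends option. For the default list this
# renders "enabled_backends = rbd1"; appending fake yields "rbd1,fake".
[DEFAULT]
enabled_backends = {{ join "," .Values.backends.enabled }}
```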

Collaborator:

Yup, ok... I get it. So a good thing to do would be to document the plans for how to use backends more generally (for anything). Let me come up with a doc structure that I think would cover this well, and then we can tackle it more generally. Sound good?


ceph:
enabled: true
monitors: []
cinder_user: "admin"
# a null value for the keyring will
# attempt to use the key from
# common/secrets/ceph-client-key
cinder_keyring: null

labels:
node_selector_key: openstack-control-plane
node_selector_value: enabled
@@ -74,23 +93,6 @@ database:
cinder_password: password
cinder_user: cinder

ceph:
enabled: true
monitors: []
cinder_user: "admin"
# a null value for the keyring will
# attempt to use the key from
# common/secrets/ceph-client-key
cinder_keyring: null

backends:
enabled:
- rbd1
rbd1:
secret: null
user: "admin"
pool: "volumes"

glance:
version: 2

@@ -99,7 +101,6 @@ messaging:
user: rabbitmq
password: password


api:
workers: 8

8 changes: 7 additions & 1 deletion docs/developer/minikube.md
@@ -189,12 +189,18 @@ To deploy Openstack-Helm in development mode, ensure you've created a minikube-a
/var/lib/docker
```

As a result of this guidence, we recommend creating the following for MariaDB like shown below.
As a result of this guidance, we recommend creating the following for MariaDB like shown below.

```
sudo mkdir -p /data/openstack-helm/mariadb
```


### Change Cinder Backend

Currently, Cinder uses Ceph for the RBD backend out of the box. To avoid Ceph in development mode, developers can use the Cinder 'Fake' backend by both adding the `fake` driver to the list of [enabled backends](https://github.com/att-comdev/openstack-helm/blob/master/cinder/values.yaml#L87) in Cinder's values.yaml file, and changing the `enabled` flag under `fake` to true. The Fake backend allows developers to create volumes and log events. In the future, the Fake backend will be replaced with the LVM backend in development mode to allow developers to create and attach volumes to instances.
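As a hedged example of that values change (key names come from cinder/values.yaml in this PR; replacing `rbd1` with `fake` in the enabled list and turning `ceph.enabled` off are assumptions consistent with avoiding Ceph entirely):

```
# Sketch of a development override: enable the fake backend, make it
# the only enabled backend, and disable Ceph so its conditional mounts
# in deployment-volume.yaml are skipped.
backends:
  enabled:
    - fake
  fake:
    enabled: true
ceph:
  enabled: false
```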


Collaborator:
I'd like to see a simpler one-liner for development as an example (something cut-and-paste for developers), i.e.:

```
helm install --name cinder --set backends.fake.enabled=true --set backends.rbd1.enabled=false local/cinder --namespace=openstack
```

It just makes things so much easier for developers who are looking for quick cut/paste instructions.

Contributor Author:

Ok, sounds good.

### Label Minikube Node

Be sure to label your minikube node according to the documentation in our installation guide (this remains exactly the same).