
Commit 084b320

Changed references to riak-admin and riak-repl

Changed references to riak-admin and riak-repl as these are replaced with `riak admin` and `riak repl`.

1 parent d6f7798 · commit 084b320
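The rename itself reads like a mechanical find-and-replace across the docs tree. A minimal sketch of that kind of substitution (hypothetical — the commit does not record the actual tooling, and GNU `sed -i` is assumed), run here against a scratch copy of one affected file:

```shell
# Scratch tree standing in for the docs checkout (hypothetical content).
mkdir -p content/riak/kv/2.9.1
printf '[use admin riak-admin]: riak/kv/2.9.1/using/admin/riak-admin/\n' \
  > content/riak/kv/2.9.1/_reference-links.md

# Blanket substitution over all markdown files, as the commit message
# describes. Note that it rewrites link paths too, which is why URLs
# like `using/admin/riak admin/` appear in the resulting diff.
grep -rl -e 'riak-admin' -e 'riak-repl' --include='*.md' content/ \
  | xargs -r sed -i -e 's/riak-admin/riak admin/g' -e 's/riak-repl/riak repl/g'

cat content/riak/kv/2.9.1/_reference-links.md
# → [use admin riak admin]: riak/kv/2.9.1/using/admin/riak admin/
```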

86 files changed: +740 -740 lines changed


content/riak/kv/2.9.1/_reference-links.md (+1 -1)

````diff
@@ -111,7 +111,7 @@
 [use admin index]: {{<baseurl>}}riak/kv/2.9.1/using/admin/
 [use admin commands]: {{<baseurl>}}riak/kv/2.9.1/using/admin/commands/
 [use admin riak cli]: {{<baseurl>}}riak/kv/2.9.1/using/admin/riak-cli/
-[use admin riak-admin]: {{<baseurl>}}riak/kv/2.9.1/using/admin/riak-admin/
+[use admin riak admin]: {{<baseurl>}}riak/kv/2.9.1/using/admin/riak admin/
 [use admin riak control]: {{<baseurl>}}riak/kv/2.9.1/using/admin/riak-control/
 
 ### Cluster Operations
````

content/riak/kv/2.9.1/add-ons/redis/developing-rra.md (+3 -3)

````diff
@@ -72,9 +72,9 @@ an opaque value, ie a `string`. The following command provides an example of
 creating the bucket-type `rra`:
 
 ```sh
-if ! riak-admin bucket-type status rra >/dev/null 2>&1; then
-  riak-admin bucket-type create rra '{"props":{}}'
-  riak-admin bucket-type activate rra
+if ! riak admin bucket-type status rra >/dev/null 2>&1; then
+  riak admin bucket-type create rra '{"props":{}}'
+  riak admin bucket-type activate rra
 fi
 ```
 
````
content/riak/kv/2.9.1/configuring/basic.md (+4 -4)

````diff
@@ -17,7 +17,7 @@ aliases:
 
 [config reference]: {{<baseurl>}}riak/kv/2.9.1/configuring/reference
 [use running cluster]: {{<baseurl>}}riak/kv/2.9.1/using/running-a-cluster
-[use admin riak-admin#member-status]: {{<baseurl>}}riak/kv/2.9.1/using/admin/riak-admin/#member-status
+[use admin riak admin#member-status]: {{<baseurl>}}riak/kv/2.9.1/using/admin/riak admin/#member-status
 [perf erlang]: {{<baseurl>}}riak/kv/2.9.1/using/performance/erlang
 [plan start]: {{<baseurl>}}riak/kv/2.9.1/setup/planning/start
 [plan best practices]: {{<baseurl>}}riak/kv/2.9.1/setup/planning/best-practices
@@ -61,7 +61,7 @@ We advise that you make as many of the changes below as practical
 _before_ joining the nodes together into a cluster. Once your
 configuration has been set on each node, follow the steps in [Basic Cluster Setup][use running cluster] to complete the clustering process.
 
-Use [`riak-admin member-status`][use admin riak-admin#member-status]
+Use [`riak admin member-status`][use admin riak admin#member-status]
 to determine whether any given node is a member of a cluster.
 
 ## Erlang VM Tunings
@@ -143,10 +143,10 @@ the location of this file)
 
 ### Verifying ring size
 
-You can use the `riak-admin` command can verify the ring size:
+You can use the `riak admin` command can verify the ring size:
 
 ```bash
-riak-admin status | grep ring
+riak admin status | grep ring
 ```
 
 Console output:
````

content/riak/kv/2.9.1/configuring/reference.md (+2 -2)

````diff
@@ -27,7 +27,7 @@ aliases:
 [plan backend multi]: ../../setup/planning/backend/multi
 [config backend multi]: ../../setup/planning/backend/multi/#configuring-multiple-backends-1
 [use admin riak cli]: ../../using/admin/riak-cli
-[use admin riak-admin]: ../../using/admin/riak-admin
+[use admin riak admin]: ../../using/admin/riak admin
 [glossary aae]: ../../learn/glossary/#active-anti-entropy-aae
 [use ref search 2i]: ../../using/reference/secondary-indexes
 [cluster ops bucket types]: ../../using/cluster-operations/bucket-types
@@ -194,7 +194,7 @@ parameters below.
 
 <tr>
 <td><code>platform_bin_dir</code></td>
-<td>The directory in which the <a href="../../using/admin/riak-admin"><code>riak-admin</code></a>,
+<td>The directory in which the <a href="../../using/admin/riak admin"><code>riak admin</code></a>,
 <code>riak-debug</code>, and now-deprecated <code>search-cmd</code>
 executables are stored.</td>
 <td><code>./bin</code></td>
````

content/riak/kv/2.9.1/configuring/search.md (+4 -4)

````diff
@@ -133,18 +133,18 @@ Enable this node in distributed query plans; defaults to `on`.
 
 If enabled, this node will participate in distributed Solr queries. If disabled, the node will be excluded from Riak search cover plans, and will therefore never be consulted in a distributed query. Note that this node may still be used to execute a query. Use this flag if you have a long running administrative operation (e.g. reindexing) which requires that the node be removed from query plans, and which would otherwise result in inconsistent search results.
 
-This setting can also be changed via `riak-admin` by issuing one of the following commands:
+This setting can also be changed via `riak admin` by issuing one of the following commands:
 
 ```
-riak-admin set search.dist_query=off
+riak admin set search.dist_query=off
 ```
 or
 
 ```
-riak-admin set search.dist_query=on
+riak admin set search.dist_query=on
 ```
 
-Setting this value in riak.conf is useful when you are restarting a node which was removed from search queries with the `riak-admin` feature. Setting `search.dis_query` in riak.conf will prevent the node from being included in search queries until it is fully spun up.
+Setting this value in riak.conf is useful when you are restarting a node which was removed from search queries with the `riak admin` feature. Setting `search.dis_query` in riak.conf will prevent the node from being included in search queries until it is fully spun up.
 
 Valid values: `on` or `off`
 
````

content/riak/kv/2.9.1/configuring/strong-consistency.md (+12 -12)

````diff
@@ -22,8 +22,8 @@ toc: true
 [glossary vnode]: {{<baseurl>}}riak/kv/2.9.1/learn/glossary/#vnode
 [concept buckets]: {{<baseurl>}}riak/kv/2.9.1/learn/concepts/buckets
 [cluster ops bucket types]: {{<baseurl>}}riak/kv/2.9.1/using/cluster-operations/bucket-types
-[use admin riak-admin#ensemble]: {{<baseurl>}}riak/kv/2.9.1/using/admin/riak-admin/#riak-admin-ensemble-status
-[use admin riak-admin]: {{<baseurl>}}riak/kv/2.9.1/using/admin/riak-admin
+[use admin riak admin#ensemble]: {{<baseurl>}}riak/kv/2.9.1/using/admin/riak admin/#riak admin-ensemble-status
+[use admin riak admin]: {{<baseurl>}}riak/kv/2.9.1/using/admin/riak admin
 [config reference#advanced]: {{<baseurl>}}riak/kv/2.9.1/configuring/reference/#advanced-configuration
 [plan cluster capacity]: {{<baseurl>}}riak/kv/2.9.1/setup/planning/cluster-capacity
 [cluster ops strong consistency]: {{<baseurl>}}riak/kv/2.9.1/using/cluster-operations/strong-consistency
@@ -99,7 +99,7 @@ than three nodes, strong consistency will be **enabled** but not yet
 state. Once at least three nodes with strong consistency enabled are
 detected in the cluster, the system will be activated and ready for use.
 You can check on the status of the strong consistency subsystem using
-the [`riak-admin ensemble-status`][use admin riak-admin#ensemble] command.
+the [`riak admin ensemble-status`][use admin riak admin#ensemble] command.
 
 ## Fault Tolerance
 
@@ -136,12 +136,12 @@ of N, i.e. `n_val`, for buckets
 can create and activate a bucket type with N set to 5 and strong
 consistency enabled---we'll call the bucket type
 `consistent_and_fault_tolerant`---using the following series of
-[commands][use admin riak-admin]:
+[commands][use admin riak admin]:
 
 ```bash
-riak-admin bucket-type create consistent_and_fault_tolerant \
+riak admin bucket-type create consistent_and_fault_tolerant \
   '{"props": {"consistent":true,"n_val":5}}'
-riak-admin bucket-type activate consistent_and_fault_tolerant
+riak admin bucket-type activate consistent_and_fault_tolerant
 ```
 
 If the `activate` command outputs `consistent_and_fault_tolerant has
@@ -244,7 +244,7 @@ unable to service strongly consistent operations. The best strategy is
 to reboot nodes one at a time and wait for each node to rejoin existing
 [ensembles][cluster ops strong consistency] before
 continuing to the next node. At any point in time, the state of
-currently existing ensembles can be checked using [`riak-admin ensemble-status`][admin riak-admin#ensemble].
+currently existing ensembles can be checked using [`riak admin ensemble-status`][admin riak admin#ensemble].
 
 ## Performance
 
@@ -263,9 +263,9 @@ can be found in [Adding and Removing Nodes][cluster ops add remove node].
 Your cluster's configuration can also affect strong consistency
 performance. See the section on [configuration][config reference#strong-cons] below.
 
-## riak-admin ensemble-status
+## riak admin ensemble-status
 
-The [`riak-admin`][use admin riak-admin] interface
+The [`riak admin`][use admin riak admin] interface
 used for general node/cluster management has an `ensemble-status`
 command that provides insight into the current status of the consensus
 subsystem undergirding strong consistency.
@@ -274,7 +274,7 @@ Running the command by itself will provide the current state of the
 subsystem:
 
 ```bash
-riak-admin ensemble-status
+riak admin ensemble-status
 ```
 
 If strong consistency is not currently enabled, you will see `Note: The
@@ -332,13 +332,13 @@ ensembles are displayed in the `Ensembles` section of the
 To inspect a specific ensemble, specify the ID:
 
 ```bash
-riak-admin ensemble-status <id>
+riak admin ensemble-status <id>
 ```
 
 The following would inspect ensemble 2:
 
 ```bash
-riak-admin ensemble-status 2
+riak admin ensemble-status 2
 ```
 
 Below is sample output for a single ensemble:
````

content/riak/kv/2.9.1/configuring/v2-multi-datacenter.md (+1 -1)

````diff
@@ -102,7 +102,7 @@ Setting | Options | Default | Description
 Setting | Options | Default | Description
 :-------|:--------|:--------|:-----------
 `data_root` | `path` (string) | `data/riak_repl` | Path (relative or absolute) to the working directory for the replication process
-`queue_size` | `bytes` (integer) | `104857600` (100 MiB) | The size of the replication queue in bytes before the replication leader will drop requests. If requests are dropped, a fullsync will be required. Information about dropped requests is available using the `riak-repl status` command
+`queue_size` | `bytes` (integer) | `104857600` (100 MiB) | The size of the replication queue in bytes before the replication leader will drop requests. If requests are dropped, a fullsync will be required. Information about dropped requests is available using the `riak repl status` command
 `server_max_pending` | `max` (integer) | `5` | The maximum number of objects the leader will wait to get an acknowledgment from, from the remote location, before queuing the request
 `vnode_gets` | `true`, `false` | `true` | If `true`, repl will do a direct get against the vnode, rather than use a `GET` finite state machine
 `shuffle_ring` | `true`, `false` | `true `| If `true`, the ring is shuffled randomly. If `false`, the ring is traversed in order. Useful when a sync is restarted to reduce the chance of syncing the same partitions.
````

content/riak/kv/2.9.1/configuring/v2-multi-datacenter/nat.md (+3 -3)

````diff
@@ -62,17 +62,17 @@ Server C is set up with a single internal IP address: `192.168.1.20`
 Configure a listener on Server A:
 
 ```bash
-riak-repl add-nat-listener [email protected] 192.168.1.10 9010 50.16.238.123 9011
+riak repl add-nat-listener [email protected] 192.168.1.10 9010 50.16.238.123 9011
 ```
 
 Configure a site (client) on Server B:
 
 ```bash
-riak-repl add-site 50.16.238.123 9011 server_a_to_b
+riak repl add-site 50.16.238.123 9011 server_a_to_b
 ```
 
 Configure a site (client) on Server C:
 
 ```bash
-riak-repl add-site 192.168.1.10 9010 server_a_to_c
+riak repl add-site 192.168.1.10 9010 server_a_to_c
 ```
````

content/riak/kv/2.9.1/configuring/v2-multi-datacenter/quick-start.md (+21 -21)

````diff
@@ -66,25 +66,25 @@ addresses would need to be routable over the public Internet.
 ### Set Up the Listeners on Cluster1 (Source cluster)
 
 On a node in Cluster1, `node1` for example, identify the nodes that will
-be listening to connections from replication clients with `riak-repl
+be listening to connections from replication clients with `riak repl
 add-listener <nodename> <listen_ip> <port>` for each node that will be
 listening for replication clients.
 
 ```bash
-riak-repl add-listener [email protected] 172.16.1.11 9010
-riak-repl add-listener [email protected] 172.16.1.12 9010
-riak-repl add-listener [email protected] 172.16.1.13 9010
+riak repl add-listener [email protected] 172.16.1.11 9010
+riak repl add-listener [email protected] 172.16.1.12 9010
+riak repl add-listener [email protected] 172.16.1.13 9010
 ```
 
 ### Set Up the Site on Cluster2 (Site cluster)
 
 On a node in Cluster2, `node4` for example, inform the replication
-clients where the Source Listeners are located with `riak-repl add-site
+clients where the Source Listeners are located with `riak repl add-site
 <ipaddr> <port> <sitename>`. Use the IP address(es) and port(s) you
 configured in the earlier step. For `sitename` enter `Cluster1`.
 
 ```bash
-riak-repl add-site 172.16.1.11 9010 Cluster1
+riak repl add-site 172.16.1.11 9010 Cluster1
 ```
 
 **Note**: While a Listener needs to be added to each node, only a single
@@ -94,10 +94,10 @@ Source cluster.
 
 ### Verify the Replication Configuration
 
-Verify the replication configuration using `riak-repl status` on both a
-Cluster1 node and a Cluster2 node. A full description of the `riak-repl
+Verify the replication configuration using `riak repl status` on both a
+Cluster1 node and a Cluster2 node. A full description of the `riak repl
 status` command's output can be found in the documentation for
-`riak-repl`'s [status output][cluster ops v2 mdc#status].
+`riak repl`'s [status output][cluster ops v2 mdc#status].
 
 On the Cluster1 node, verify that there are `listener_<nodename>`s for
 each listening node, and that `leader` and `server_stats` are populated.
@@ -198,33 +198,33 @@ above in the other direction, i.e. from Cluster2 to Cluster1.
 ### Set Up the Listeners on Cluster2 (Source cluster)
 
 On a node in Cluster2, `node4` for example, identify the nodes that will
-be listening to connections from replication clients with `riak-repl
+be listening to connections from replication clients with `riak repl
 add-listener <nodename> <listen_ip> <port>` for each node that will be
 listening for replication clients.
 
 ```bash
-riak-repl add-listener [email protected] 192.168.1.21 9010
-riak-repl add-listener [email protected] 192.168.1.22 9010
-riak-repl add-listener [email protected] 192.168.1.23 9010
+riak repl add-listener [email protected] 192.168.1.21 9010
+riak repl add-listener [email protected] 192.168.1.22 9010
+riak repl add-listener [email protected] 192.168.1.23 9010
 ```
 
 ### Set Up the Site on Cluster1 (Site cluster)
 
 On a node in Cluster1, `node1` for example, inform the replication
-clients where the Source Listeners are with `riak-repl add-site <ipaddr>
+clients where the Source Listeners are with `riak repl add-site <ipaddr>
 <port> <sitename>`. Use the IP address(es) and port(s) you configured in
 the earlier step. For `sitename` enter **Cluster2**.
 
 ```bash
-riak-repl add-site 192.168.1.21 9010 Cluster2
+riak repl add-site 192.168.1.21 9010 Cluster2
 ```
 
 ### Verify the Replication Configuration
 
-Verify the replication configuration using `riak-repl status` on a
-Cluster1 node and a Cluster2 node. A full description of the `riak-repl
+Verify the replication configuration using `riak repl status` on a
+Cluster1 node and a Cluster2 node. A full description of the `riak repl
 status` command's output can be found in the documentation for
-`riak-repl`'s [status output][cluster ops v2 mdc#status].
+`riak repl`'s [status output][cluster ops v2 mdc#status].
 
 On the Cluster1 node, verify that `Cluster2_ips`, `leader`, and
 `client_stats` are populated. They should look similar to the following:
@@ -350,16 +350,16 @@ To start a fullsync operation, issue the following command on your
 leader node:
 
 ```bash
-riak-repl start-fullsync
+riak repl start-fullsync
 ```
 
 A fullsync operation may also be cancelled. If a partition is in
 progress, synchronization will stop after that partition completes.
-During cancellation, `riak-repl status` will show 'cancelled' in the
+During cancellation, `riak repl status` will show 'cancelled' in the
 status.
 
 ```bash
-riak-repl cancel-fullsync
+riak repl cancel-fullsync
 ```
 
 Fullsync operations may also be paused, resumed, or scheduled for
````

content/riak/kv/2.9.1/configuring/v2-multi-datacenter/ssl.md (+2 -2)

````diff
@@ -32,10 +32,10 @@ Riak REPL SSL support consists of the following items:
 ## SSL Configuration
 
 To configure SSL, you will need to include the following four settings
-in the `riak-repl` section of your `advanced.config`:
+in the `riak repl` section of your `advanced.config`:
 
 ```advancedconfig
-{riak-repl, [
+{riak repl, [
   % ...
   {ssl_enabled, true},
   {certfile, "/full/path/to/site1-cert.pem"},
````
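One caveat worth flagging on this last hunk: the section name in `advanced.config` is an Erlang atom, and the application that ships with Riak's replication is named `riak_repl` (underscore). Neither the old `riak-repl` nor the new `riak repl` would be read as a single unquoted atom by the Erlang config parser. A sketch of the shape such a section conventionally takes (assuming the standard `riak_repl` application name; this is not part of the commit), with only the settings visible in the diff:

```advancedconfig
{riak_repl, [
  %% ... (remaining SSL settings elided, as in the hunk above)
  {ssl_enabled, true},
  {certfile, "/full/path/to/site1-cert.pem"}
]}
```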
