- May 16, 2019
-
-
Guillaume Abrioux authored
This commit renames the `docker_exec_cmd` variable to `container_exec_cmd` so it's more generic. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com>
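A minimal sketch of how such a generic exec prefix can be built (the task shape and the ceph-mon container name here are illustrative assumptions, not taken from the actual change):

    # build the exec prefix once; left undefined/empty, commands run on the host
    - name: set_fact container_exec_cmd
      set_fact:
        container_exec_cmd: "{{ container_binary }} exec ceph-mon-{{ ansible_hostname }}"
      when: containerized_deployment | bool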
-
- May 07, 2019
-
-
Rishabh Dave authored
Add code in ceph-mgr for creating a manager keyring, so that managers can be deployed on a separate node too. Signed-off-by:
Rishabh Dave <ridave@redhat.com>
-
Rishabh Dave authored
Except for some corner cases, it's not correct to access another node's copy of the variable docker_exec_cmd. Therefore, replace "hostvars[groups[mon_group_name][0]]['docker_exec_cmd']" with "docker_exec_cmd". Signed-off-by:
Rishabh Dave <ridave@redhat.com>
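The before/after shape of that substitution, sketched on a hypothetical task (the ceph -s command is illustrative):

    # before: every node reaches into the first mon's hostvars
    - command: "{{ hostvars[groups[mon_group_name][0]]['docker_exec_cmd'] }} ceph -s"
    # after: each node uses its own copy of the variable
    - command: "{{ docker_exec_cmd | default('') }} ceph -s"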
-
- Apr 23, 2019
-
-
Rishabh Dave authored
Keywords that take only one item shouldn't express it as a single-item list. Signed-off-by:
Rishabh Dave <ridave@redhat.com>
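For example, with the `when` keyword (the condition shown is illustrative):

    # discouraged: a list wrapping a single condition
    when:
      - containerized_deployment | bool
    # preferred: the scalar form
    when: containerized_deployment | bool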
-
- Apr 15, 2019
-
-
Guillaume Abrioux authored
When adding a new monitor, we must reuse the existing initial monitor keyring. Otherwise, the new monitor will issue its 'mkfs' with a new monitor keyring, resulting in a mismatch between the two, and the new monitor will be unable to join the quorum. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> Co-authored-by:
Rishabh Dave <ridave@redhat.com>
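A hedged sketch of the reuse pattern (the variable name monitor_keyring is an assumption for illustration):

    # reuse the keyring the first monitor was bootstrapped with instead of
    # generating a fresh one for the new monitor
    - name: get initial keyring from the first monitor
      set_fact:
        monitor_keyring: "{{ hostvars[groups[mon_group_name][0]]['monitor_keyring'] }}"
      when: monitor_keyring is not defined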
-
- Apr 10, 2019
-
-
Guillaume Abrioux authored
Let's use a condition to run this task only on the first mon. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com>
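The usual ceph-ansible idiom for that condition looks like this (the task body is illustrative):

    - name: a task that must run on the first mon only
      command: "{{ container_exec_cmd | default('') }} ceph --cluster {{ cluster }} -s"
      changed_when: false
      when: inventory_hostname == groups[mon_group_name][0]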
-
- Apr 03, 2019
-
-
fpantano authored
According to RDO testing (https://review.rdoproject.org/r/#/c/18721), a check on the ceph_health output is added so the playbook can make several attempts (governed by the retry/delay variables) while waiting for the cluster quorum or while the container bootstrap has not finished. This avoids failing the command execution when it doesn't receive a valid JSON object to decode (because the cluster is too slow to bootstrap compared to ceph-ansible task execution). Signed-off-by:
fpantano <fpantano@redhat.com>
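A sketch of the retry loop described here; the exact `until` test in the real task may differ (health_mon_check_retries/delay follow the repo's naming conventions):

    - name: waiting for the containerized monitor to join the quorum...
      command: "{{ docker_exec_cmd | default('') }} ceph --cluster {{ cluster }} -s --format json"
      register: ceph_health_raw
      # keep retrying while the cluster is still bootstrapping and the
      # command fails or returns not-yet-valid output
      until: ceph_health_raw.rc == 0 and 'quorum' in ceph_health_raw.stdout
      retries: "{{ health_mon_check_retries }}"
      delay: "{{ health_mon_check_delay }}"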
-
- Mar 29, 2019
-
-
Rishabh Dave authored
Signed-off-by:
Rishabh Dave <ridave@redhat.com>
-
Rishabh Dave authored
Otherwise the reader is forced to search for "when" when blocks are too long. Signed-off-by:
Rishabh Dave <ridave@redhat.com>
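I.e., put the `when` before the `block` so it is visible at a glance (a sketch; the task names are illustrative):

    - name: containerized-only tasks
      when: containerized_deployment | bool
      block:
        - name: first task of a potentially very long block
          debug:
            msg: "the condition above is the first thing the reader sees"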
-
- Mar 25, 2019
-
-
Guillaume Abrioux authored
Otherwise, the task copying the mgr keyring fails during rolling_update. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com>
-
Guillaume Abrioux authored
This prevents the packaging from restarting services before we need to restart them in the rolling update sequence. We want to handle service restarts in the rolling_update playbook. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com>
-
Guillaume Abrioux authored
As of Nautilus, the initial keyring list has changed; it means that when upgrading from Luminous or Mimic, a mismatch is expected between what is found on the cluster and the initial keyring list hardcoded in the ceph_key module. We shouldn't fail when upgrading to Nautilus. str_to_bool() is taken from ceph-volume. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> Co-Authored-by:
Alfredo Deza <adeza@redhat.com>
-
- Mar 14, 2019
-
-
Dimitri Savineau authored
Currently the default crush rule value is added to the ceph config on the mon nodes as an extra configuration applied after the template generation via the ansible ini module. This implies two behaviors: 1/ On each ceph-ansible run, the ceph.conf will be regenerated via ceph-config+template and then ceph-mon+ini_file, which leads to an unnecessary daemon restart. 2/ When other ceph daemons are collocated on the monitor nodes (like mgr or rgw), the default crush rule value will be erased by the ceph.conf template (mon -> mgr -> rgw). This patch adds the osd_pool_default_crush_rule config to the ceph template, and only for the monitor nodes (like crush_rules.yml). The default crush rule id is read (if it exists) from the current ceph configuration. The default configuration is -1 (ceph default). Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1638092 Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
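A sketch of the template-side change, as a guarded block in ceph.conf.j2 (the exact guard used by the role may differ):

    {% if inventory_hostname in groups.get(mon_group_name, []) %}
    osd pool default crush rule = {{ osd_pool_default_crush_rule }}
    {% endif %}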
-
- Mar 07, 2019
-
-
Dimitri Savineau authored
We don't need to set After=docker.service when the container_binary variable isn't set to docker. It doesn't break anything currently but it could be confusing when using podman. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
-
- Mar 05, 2019
-
-
Dimitri Savineau authored
Ceph daemons will set the CONTAINER_IMAGE environment variable value in the daemon metadata. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
-
- Mar 04, 2019
-
-
Kevin Coakley authored
The following lint issues have been resolved:

    [301] Commands should not change things if nothing needs doing
          /home/travis/build/ceph/ceph-ansible/roles/ceph-mon/tasks/ceph_keys.yml:2
    [305] Use shell only when shell functionality is required
          /home/travis/build/ceph/ceph-ansible/roles/ceph-osd/tasks/start_osds.yml:47
    [301] Commands should not change things if nothing needs doing
          /home/travis/build/ceph/ceph-ansible/roles/ceph-rgw/tasks/multisite/destroy.yml:2, :7, :14, :19, :24

Signed-off-by:
Kevin Coakley <kcoakley@sdsc.edu>
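The [301] items are typically resolved by telling Ansible the command is read-only, and [305] by switching `shell` to `command` when no shell features are used; a sketch (task bodies are illustrative, not the actual fixes):

    # [301]: a status probe never changes anything
    - name: waiting for the monitor(s) to form the quorum...
      command: ceph --cluster {{ cluster }} -n mon. mon_status --format json
      changed_when: false
    # [305]: no pipes, globs or redirects involved, so command is enough
    - name: list osd directories
      command: ls /var/lib/ceph/osd/
      changed_when: false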
-
- Feb 27, 2019
-
-
Dimitri Savineau authored
There's no need to set the client_admin_ceph_authtool_cap variable via a set_fact task. Instead we can set this in the role defaults. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
-
Dimitri Savineau authored
The administrator keyring needs full capabilities on mds, just as it has on mon, osd and mgr. Without this, the client.admin key won't be able to run commands against mds (like "ceph tell mds.0 session ls"). Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1672878 Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
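Together with the previous commit, the capability set can live directly in the role defaults; a sketch of what that variable can look like (the exact default may differ):

    # roles/ceph-mon/defaults/main.yml (sketch)
    client_admin_ceph_authtool_cap:
      mon: allow *
      osd: allow *
      mds: allow *
      mgr: allow *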
-
Guillaume Abrioux authored
Those variables are mandatory. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com>
-
Guillaume Abrioux authored
There's no need to generate the 'mgr.monX' keyrings when mgrs aren't collocated with monitors. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com>
-
Kevin Coakley authored
Set directories to 0755 and files to 0644 in /var/lib/ceph/mon/{{ cluster }}-{{ monitor_name }} recursively, instead of setting both files and directories to 0755 recursively. The ceph-mon process writes files to this path with permissions 644. This update stops ansible from rewriting the permissions in /var/lib/ceph/mon/{{ cluster }}-{{ monitor_name }} every time ceph-mon writes a file, and improves idempotency. Signed-off-by:
Kevin Coakley <kcoakley@sdsc.edu>
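One way to express that split, as a sketch (the real task layout may differ):

    # directories get 0755
    - name: ensure mon directory permissions
      file:
        path: "/var/lib/ceph/mon/{{ cluster }}-{{ monitor_name }}"
        state: directory
        owner: ceph
        group: ceph
        mode: '0755'
    # files written by ceph-mon keep their native 0644
    - name: find files under the mon directory
      find:
        paths: "/var/lib/ceph/mon/{{ cluster }}-{{ monitor_name }}"
        recurse: true
      register: mon_files
    - name: ensure file permissions are 0644
      file:
        path: "{{ item.path }}"
        mode: '0644'
      loop: "{{ mon_files.files }}"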
-
- Feb 20, 2019
-
-
Patrick Donnelly authored
Otherwise keys get scattered over the mons and the mgr key is not copied properly. With this ansible_inventory:

    [mdss]
    mds-000 ansible_ssh_host=192.168.129.110 ansible_ssh_port=22 ansible_ssh_user='root' ansible_ssh_private_key_file='/root/.ssh/id_rsa'

    [clients]
    client-000 ansible_ssh_host=192.168.143.94 ansible_ssh_port=22 ansible_ssh_user='root' ansible_ssh_private_key_file='/root/.ssh/id_rsa'

    [mgrs]
    mgr-000 ansible_ssh_host=192.168.222.195 ansible_ssh_port=22 ansible_ssh_user='root' ansible_ssh_private_key_file='/root/.ssh/id_rsa'

    [mons]
    mon-000 ansible_ssh_host=192.168.139.173 ansible_ssh_port=22 ansible_ssh_user='root' ansible_ssh_private_key_file='/root/.ssh/id_rsa' monitor_address=192.168.139.173
    mon-002 ansible_ssh_host=192.168.212.114 ansible_ssh_port=22 ansible_ssh_user='root' ansible_ssh_private_key_file='/root/.ssh/id_rsa' monitor_address=192.168.212.114
    mon-001 ansible_ssh_host=192.168.167.177 ansible_ssh_port=22 ansible_ssh_user='root' ansible_ssh_private_key_file='/root/.ssh/id_rsa' monitor_address=192.168.167.177

    [osds]
    osd-001 ansible_ssh_host=192.168.178.128 ansible_ssh_port=22 ansible_ssh_user='root' ansible_ssh_private_key_file='/root/.ssh/id_rsa'
    osd-000 ansible_ssh_host=192.168.138.233 ansible_ssh_port=22 ansible_ssh_user='root' ansible_ssh_private_key_file='/root/.ssh/id_rsa'
    osd-002 ansible_ssh_host=192.168.197.23 ansible_ssh_port=22 ansible_ssh_user='root' ansible_ssh_private_key_file='/root/.ssh/id_rsa'

we get this failure (per-host JSON results and the repeated conditional skips condensed):

    TASK [ceph-mon : include_tasks ceph_keys.yml]
    included: roles/ceph-mon/tasks/ceph_keys.yml for mon-000, mon-002, mon-001

    TASK [ceph-mon : waiting for the monitor(s) to form the quorum...]
    changed: [mon-000]  # mon_status reports state "leader", quorum [0,1,2], all three mons in the monmap

    TASK [ceph-mon : fetch ceph initial keys]
    changed: [mon-000], [mon-001], [mon-002]  # each mon exports client.bootstrap-rgw to /var/lib/ceph/bootstrap-rgw/ceph.keyring

    TASK [ceph-mon : create ceph mgr keyring(s)]
    skipping: [mon-000], [mon-002]  # conditional result was false
    changed: [mon-001]  # "ceph auth import" of ceph.mgr.{li547-145,li1166-30,li895-17,li985-128}.keyring

    TASK [ceph-mon : copy ceph mgr key(s) to the ansible server]
    skipping: [mon-000], [mon-002]
    changed: [mon-001]  # only mon-001's fetch/ directory receives ceph.mgr.li547-145.keyring

    TASK [ceph-mon : copy keys to the ansible server]
    skipping: [mon-000], [mon-002]
    changed: [mon-001]  # bootstrap-{osd,rgw,mds,rbd,rbd-mirror} keyrings and ceph.client.admin.keyring

    TASK [ceph-mgr : create mgr directory]
    changed: [mon-000], [mon-002], [mon-001]

    TASK [ceph-mgr : copy ceph keyring(s) if needed]
    failed: [mon-000], [mon-001], [mon-002] =>
    MSG: Could not find or access 'fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.<hostname>.keyring'
    on the Ansible Controller. If you are using a module and expect the file to exist on the remote, see the remote_src option

    PLAY RECAP
    mon-000 : ok=89 changed=21 unreachable=0 failed=1
    mon-001 : ok=84 changed=20 unreachable=0 failed=1
    mon-002 : ok=81 changed=17 unreachable=0 failed=1
    (client, mds, mgr and osd hosts: failed=0)

Also, create all keys on the first mon and copy those to the other mons to be consistent. Signed-off-by:
Patrick Donnelly <pdonnell@redhat.com>
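A hedged sketch of the "create on the first mon, copy to the rest" pattern; the command shape and caps are assumptions for illustration, not the actual change:

    - name: create ceph mgr keyring(s) on the first monitor only
      command: >
        {{ docker_exec_cmd | default('') }} ceph --cluster {{ cluster }}
        auth get-or-create mgr.{{ hostvars[item]['ansible_hostname'] }}
        mon 'allow profile mgr' osd 'allow *' mds 'allow *'
        -o /etc/ceph/{{ cluster }}.mgr.{{ hostvars[item]['ansible_hostname'] }}.keyring
      with_items: "{{ groups.get(mgr_group_name, []) }}"
      delegate_to: "{{ groups[mon_group_name][0] }}"
      run_once: true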
-
- Feb 13, 2019
-
-
Guillaume Abrioux authored
Instead of using the `RuntimeDirectory` parameter in systemd unit files, let's use a systemd `tmpfiles.d` configuration to ensure `/run/ceph` exists. Explanation: `podman` doesn't create `/var/run/ceph` at container run time if it doesn't exist, while `docker` used to create it. In the `switch_to_containers` scenario, `/run/ceph` has already been created by a tmpfiles.d systemd file, so when switching to containers the systemd unit file complains because `/run/ceph` already exists. The better fix would be to ensure `/usr/lib/tmpfiles.d/ceph-common.conf` is removed and to rely only on the `RuntimeDirectory` systemd unit parameter, but since we come from a non-containerized environment that is already running, `/run/ceph` is already created; when starting the unit to start the container, systemd will still complain, and we can't simply remove the directory if daemons are collocated. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com>
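A tmpfiles.d entry of the kind described here looks like the following (mode and ownership are assumptions):

    # /usr/lib/tmpfiles.d/ceph-common.conf: have systemd (re)create /run/ceph at boot
    d /run/ceph 0770 ceph ceph -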
-
Guillaume Abrioux authored
`ceph-mon` tries to redeploy monitors because it assumes it was not yet deployed since `mon_socket_stat` and `ceph_mon_container_stat` are undefined (indeed, we stop the daemon before calling `ceph-mon` in the switch_to_containers playbook). Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com>
-
- Feb 11, 2019
-
-
Sébastien Han authored
167 is the ceph uid on Red Hat based systems, so trying to deploy a monitor on Debian fails since the ceph user id on that system is 64045. This commit uses the ceph_uid variable, which contains the right uid based on system/container detection. Closes: https://github.com/ceph/ceph-ansible/issues/3589 Signed-off-by:
Sébastien Han <seb@redhat.com>
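A simplified sketch of the detection, covering only the two uids mentioned above (the real logic also inspects the container image):

    - name: set_fact ceph_uid
      set_fact:
        ceph_uid: "{{ 64045 if ansible_os_family == 'Debian' else 167 }}"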
-
- Feb 05, 2019
-
-
John Fulton authored
With 'podman version 1.0.0' on RHEL8 beta the 'get ceph version' and 'ceph monitor mkfs' commands fail [1] with "error configuring network namespace for container Missing CNI default network". When net=host is added these errors are resolved. net=host is used in many other calls (grep -R net=host | wc -l --> 38). Fixes: #3561 Signed-off-by:
John Fulton <fulton@redhat.com> (cherry picked from commit 410abd77455a92a85f0674577e22af0af894964f)
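A sketch of the kind of invocation that needs the flag; the image variables follow ceph-ansible conventions and the exact arguments are illustrative:

    - name: ceph monitor mkfs
      command: >
        {{ container_binary }} run --rm --net=host
        -v /var/lib/ceph:/var/lib/ceph:z
        -v /etc/ceph:/etc/ceph:z
        --entrypoint=ceph-mon
        {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}
        --mkfs -i {{ monitor_name }} --fsid {{ fsid }}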
-
Guillaume Abrioux authored
/var/run/ceph resides on a non-persistent filesystem (tmpfs). After a reboot, none of the daemons will start because this directory will be missing. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com>
-
Guillaume Abrioux authored
Add required changes to support podman on rhel8 Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1667101 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com>
-
- Jan 22, 2019
-
-
Sébastien Han authored
You can now use 'ceph_mon_container_listen_port' to change the port the monitor will listen on. Setting the default to 3300 (assigned by IANA) since Nautilus has released the messenger2 transport protocol. Signed-off-by:
Sébastien Han <seb@redhat.com>
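Usage is a plain group_vars override; the default now reflects the IANA-assigned msgr2 port:

    # group_vars/all.yml (sketch)
    ceph_mon_container_listen_port: 3300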
-
Sébastien Han authored
This reverts commit ee08d1f89a588e878324141dd0f80c65058a377d which was mostly to workaround a bug in ceph@master. Now, ceph@master is fixed so reverting this. Thanks to https://github.com/ceph/ceph/pull/25900 Signed-off-by:
Sébastien Han <seb@redhat.com>
-
- Jan 09, 2019
-
-
Sébastien Han authored
Something changed with the introduction of msgr2: we have to add each node as a peer so the monitors can form a quorum. This might be due to our CI environment, although adding this is completely harmless and solves monitors not being able to form quorum. It seems the initial monitor map wasn't containing the right information about the peers (addresses like 0.0.0.0/0r1 for each rank). Signed-off-by:
Sébastien Han <seb@redhat.com>
-
- Dec 17, 2018
-
-
Sébastien Han authored
These aliases have led to several issues, making users believe that ceph binaries are actually present on the host when running the command. It wasn't explicit that the commands were only run inside a container. This brought too much confusion, so we decided to remove them. Closes: https://github.com/ceph/ceph-ansible/issues/3445 Signed-off-by:
Sébastien Han <seb@redhat.com>
-
- Dec 11, 2018
-
-
Guillaume Abrioux authored
ceph-ansible@master requires the latest stable ansible version. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com>
-
- Dec 04, 2018
-
-
Sébastien Han authored
JSON is a structure that is always typed as a string; before this, we were declaring a dict, which is not a valid JSON structure. Signed-off-by:
Sébastien Han <seb@redhat.com>
-
- Dec 03, 2018
-
-
Sébastien Han authored
This commit unifies the container and non-container code, which in the meantime gives us the ability to deploy N mon containers at the same time without having to serialize the deployment. This drastically reduces the time needed to bootstrap the cluster. Note, this is only possible since Nautilus, because the monitors bootstrap the initial keys on their own once they reach quorum. In the Nautilus version of the ceph-container mon, we stopped generating the keys 'manually' from inside the container; for more detail see: https://github.com/ceph/ceph-container/pull/1238 Signed-off-by:
Sébastien Han <seb@redhat.com>
-
Sébastien Han authored
This will speed up the deployment and also deploy mon and mgr collocated, just as recommended. This won't prevent you from adding more, dedicated machines for mgr if needed. Signed-off-by:
Sébastien Han <seb@redhat.com>
-
Sébastien Han authored
This was probably a leftover/mistake so let's fix this and make the file consistent. Signed-off-by:
Sébastien Han <seb@redhat.com>
-
Sébastien Han authored
During the first iteration, the command won't return anything, or can simply fail and might not return a valid JSON structure. Ansible will fail parsing it in the `from_json` filter, so let's default that variable to an empty dictionary. Signed-off-by:
Sébastien Han <seb@redhat.com>
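The guard boils down to defaulting the registered stdout before parsing (a sketch; the surrounding task is the quorum wait shown in earlier entries):

    # '{}' parses cleanly, so from_json never chokes on empty output
    until: (ceph_health_raw.stdout | default('{}') | from_json).state | default('') in ['leader', 'peon']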
-
Sébastien Han authored
We don't support Ubuntu Precise, so this feature does not exist anymore. Signed-off-by:
Sébastien Han <seb@redhat.com>
-
Sébastien Han authored
Add the ability to protect pools on containerized clusters. Signed-off-by:
Sébastien Han <seb@redhat.com>
-