  1. Feb 19, 2019
    • ensure at least one osd is up · f776ef3e
      David Waiting authored
      
      The existing task checks that the number of OSDs is equal to the number of up OSDs before continuing.
      
      The problem is that if none of the OSDs have been discovered yet, the
      comparison trivially succeeds (num_osds = 0, num_up_osds = 0), so the
      task exits immediately and subsequent pool creation fails.
      
      This is related to Bugzilla 1578086.
      
      In this change, we also check that at least one OSD is present. In our testing, this results in the task correctly waiting for all OSDs to come up before continuing.
      
      Signed-off-by: David Waiting <david_waiting@comcast.com>
      (cherry picked from commit 3930791cb7d2872e3388d33713171d7a0c1951e8)
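      A minimal sketch of the strengthened check, assuming an Ansible retry
      loop around `ceph osd stat -f json` (whose JSON output includes
      num_osds and num_up_osds); this is illustrative, not the verbatim
      ceph-ansible task:

      ```yaml
      # Illustrative sketch: poll until at least one OSD has been discovered
      # AND every discovered OSD is up. Without the first condition, a
      # cluster with no OSDs yet passes instantly (0 == 0) and pool creation
      # fails afterwards.
      - name: wait for all OSDs to be up
        command: ceph --cluster ceph osd stat -f json
        register: osd_stat
        changed_when: false
        retries: 60
        delay: 10
        until: >
          (osd_stat.stdout | from_json).num_osds | int > 0 and
          (osd_stat.stdout | from_json).num_osds == (osd_stat.stdout | from_json).num_up_osds
      ```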
  2. Dec 04, 2018
    • osd: discover osd_objectstore on the fly · 32ca0b43
      Sébastien Han authored
      
      Applying and passing the OSD_BLUESTORE/FILESTORE flag on the fly is
      wrong for existing clusters, as it changes their configuration.
      
      Typically, if an OSD was prepared with ceph-disk on filestore and the
      default objectstore is later changed to bluestore, activation will
      fail. The osd_objectstore flag should only be used for preparation,
      not activation. Activation now detects the OSD's objectstore itself,
      which prevents failures like the one described above.
      
      Signed-off-by: Sébastien Han <seb@redhat.com>
      (cherry picked from commit 4c5113019893c92c4d75c9fc457b04158b86398b)
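      As a sketch of what on-the-fly discovery can look like: a prepared OSD
      records its objectstore in the type file of its data directory
      (containing bluestore or filestore), so activation can read it back
      instead of trusting the current default. Paths and task names below
      are illustrative, not the actual role code:

      ```yaml
      # Illustrative sketch: read the objectstore the OSD was actually
      # prepared with, rather than the cluster-wide osd_objectstore default.
      - name: detect the objectstore of an already-prepared OSD
        command: cat /var/lib/ceph/osd/ceph-0/type   # path is illustrative
        register: detected_objectstore
        changed_when: false

      - name: activate using the detected objectstore
        debug:
          msg: "activating OSD as {{ detected_objectstore.stdout | trim }}"
      ```

      With this, later changes to the osd_objectstore default cannot break
      activation of OSDs that were prepared under the old setting.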
    • ceph-osd: change jinja condition · 648c064d
      Sébastien Han authored
      
      If an existing cluster runs this config and has ceph-disk OSDs, the
      `expose_partitions` variable won't be reached by Jinja, since it sits
      inside the 'old' if branch. It needs to be part of the
      osd_scenario != 'lvm' condition instead.
      
      Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1640273
      
      
      Signed-off-by: Sébastien Han <seb@redhat.com>
      (cherry picked from commit bef522627e1e9827b86710c7a54f35a0cd596fbb)
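      A before/after sketch of the restructuring described above;
      old_condition is a placeholder for whatever the original 'old' if
      actually tested:

      ```jinja
      {# Before (sketch): expose_partitions sits inside an unrelated "old"
         branch, so existing ceph-disk clusters never evaluate it. #}
      {% if old_condition %}
        {{ expose_partitions }}
      {% endif %}

      {# After (sketch): gate it on the scenario instead, so every
         non-lvm (e.g. ceph-disk) cluster evaluates it. #}
      {% if osd_scenario != 'lvm' %}
        {{ expose_partitions }}
      {% endif %}
      ```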
  3. Aug 16, 2018
    • Revert "osd: generate device list for osd_auto_discovery on rolling_update" · c38226b7
      Sébastien Han authored
      
      This reverts commit e84f11e99ef42057cd1c3fbfab41ef66cda27302.
      
      That commit introduced a new failure later in the rolling_update
      process: it modified the list of devices, which started impacting the
      ceph-osd role itself. The modification to accommodate the
      osd_auto_discovery parameter should happen outside of the ceph-osd
      role.
      
      We are also trying not to play the ceph-osd role during the
      rolling_update process, so we can speed up the upgrade.
      
      Signed-off-by: Sébastien Han <seb@redhat.com>
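      For context, a hypothetical sketch of doing the osd_auto_discovery
      accommodation outside the ceph-osd role, e.g. in the rolling_update
      playbook itself; the variable names and the partition filter are
      assumptions, not the reverted code:

      ```yaml
      # Hypothetical sketch: derive a device list from Ansible facts in the
      # playbook, leaving the ceph-osd role's own `devices` variable alone.
      - name: build device list for osd_auto_discovery
        set_fact:
          discovered_devices: "{{ discovered_devices | default([]) + ['/dev/' + item.key] }}"
        loop: "{{ ansible_devices | dict2items }}"
        when:
          - osd_auto_discovery | default(false) | bool
          - item.value.partitions | length == 0   # skip disks that are already partitioned
      ```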