<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://storaged.org/feed.xml" rel="self" type="application/atom+xml" /><link href="https://storaged.org/" rel="alternate" type="text/html" /><updated>2026-02-18T09:04:00+00:00</updated><id>https://storaged.org/feed.xml</id><title type="html">Storaged Project</title><subtitle>Storaged project is a collection of storage management tools, libraries and APIs used for creating, managing and monitoring storage across the GNU/Linux ecosystem.</subtitle><entry><title type="html">Filtering Devices with LVM Devices File</title><link href="https://storaged.org/lvm/2026/02/18/Filtering-devices-with-LVM-device-file.html" rel="alternate" type="text/html" title="Filtering Devices with LVM Devices File" /><published>2026-02-18T07:13:00+00:00</published><updated>2026-02-18T07:13:00+00:00</updated><id>https://storaged.org/lvm/2026/02/18/Filtering-devices-with-LVM-device-file</id><content type="html" xml:base="https://storaged.org/lvm/2026/02/18/Filtering-devices-with-LVM-device-file.html"><![CDATA[<p>To control which devices LVM can work with, it was always possible to configure filtering in the <code class="language-plaintext highlighter-rouge">devices</code> section of the <code class="language-plaintext highlighter-rouge">/etc/lvm/lvm.conf</code> configuration file. But filtering devices this way was not very simple and could lead to problems when using unstable paths like <code class="language-plaintext highlighter-rouge">/dev/sda</code>. Many users also didn’t know this possibility existed, and while this type of filtering can be applied to a single command with the <code class="language-plaintext highlighter-rouge">--config</code> option, it is not very user friendly. 
This all changed recently with the introduction of the new configuration file <code class="language-plaintext highlighter-rouge">/etc/lvm/devices/system.devices</code> and the corresponding <code class="language-plaintext highlighter-rouge">lvmdevices</code> command in LVM 2.03.12. A new option <code class="language-plaintext highlighter-rouge">--devices</code> was also added to the existing LVM commands for a quick way to limit which devices one specific command can use.</p>

<h3 id="lvm-devices-file">LVM Devices File</h3>

<p>As was said above, there is a new <code class="language-plaintext highlighter-rouge">/etc/lvm/devices/system.devices</code> configuration file. When this file exists, it controls which devices LVM is allowed to scan. Instead of relying on matching the device path, the devices file uses stable identifiers like WWID, serial number or UUID.</p>

<p>A devices file on a simple system with a single physical volume on a partition would look like this:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># LVM uses devices listed in this file.
# Created by LVM command vgimportdevices pid 187757 at Fri Feb 13 16:44:45 2026
# HASH=1524312511
PRODUCT_UUID=4d58d0c1-8b67-4fa6-a937-035d2bfbb220
VERSION=1.1.1
IDTYPE=devname IDNAME=/dev/sda2 DEVNAME=/dev/sda2 PVID=rYeMgwy0mO0THDagB6k8mZkoOSqAWfte PART=2
</code></pre></div></div>

<p>When the devices file is enabled, LVM will only scan and operate on devices listed in it. Any device not present in the file is invisible to LVM, even if it has a valid PV header.</p>

<p>This is the biggest change brought in with this feature. The old <code class="language-plaintext highlighter-rouge">lvm.conf</code> based filters were always optional and LVM always scanned all devices in the system, unless told otherwise. This could cause problems on systems with many disks, where LVM (especially during boot) could take a long time scanning devices that did not even “belong” to it.</p>

<p>By default, the LVM devices file is enabled in the latest versions of LVM. On systems without preexisting volume groups, creating new LVM setups with commands like <code class="language-plaintext highlighter-rouge">pvcreate</code> or <code class="language-plaintext highlighter-rouge">vgcreate</code> will automatically add the new physical volumes to the devices file. If desired, this feature can be disabled by setting <code class="language-plaintext highlighter-rouge">use_devicesfile=0</code> in <code class="language-plaintext highlighter-rouge">lvm.conf</code> or by simply removing the existing devices file. On systems without the devices file, LVM will simply scan all devices in the system the same way it did before this configuration file was introduced.</p>
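<p>As a sketch, disabling the feature in <code class="language-plaintext highlighter-rouge">lvm.conf</code> might look like this (the section placement below is an assumption; consult <code class="language-plaintext highlighter-rouge">lvm.conf(5)</code> on your system for the authoritative location):</p>

```
# /etc/lvm/lvm.conf -- disable the LVM devices file
# (section placement assumed; see lvm.conf(5))
devices {
    use_devicesfile = 0
}
```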

<h3 id="managing-devices-with-lvmdevices-and-vgimportdevices">Managing Devices with <code class="language-plaintext highlighter-rouge">lvmdevices</code> and <code class="language-plaintext highlighter-rouge">vgimportdevices</code></h3>

<p>On most newly installed systems with LVM, the devices file should already be present and populated, but you might want to either create it later on systems installed with an older version of LVM, or manage some devices manually. It is possible to modify the <code class="language-plaintext highlighter-rouge">system.devices</code> file manually, but a new command <code class="language-plaintext highlighter-rouge">lvmdevices</code> was added for simple management of the file.</p>

<p>To import all devices in an existing volume group, use <code class="language-plaintext highlighter-rouge">vgimportdevices &lt;vgname&gt;</code>; to import devices for all volume groups in the system, use <code class="language-plaintext highlighter-rouge">vgimportdevices -a</code>.</p>

<p>A single physical volume can be added to the file with <code class="language-plaintext highlighter-rouge">lvmdevices --adddev</code> and removed with <code class="language-plaintext highlighter-rouge">lvmdevices --deldev</code>.</p>

<p>To check all entries in the devices file, <code class="language-plaintext highlighter-rouge">lvmdevices --check</code> can be used and any issues found by the check command can be fixed with <code class="language-plaintext highlighter-rouge">lvmdevices --update</code>.</p>
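<p>Put together, a typical session with these commands might look like this (the volume group name and device path are hypothetical):</p>

```shell
# Import devices used by one existing volume group, or by all VGs at once
vgimportdevices myvg
vgimportdevices -a

# Add or remove a single physical volume (hypothetical device path)
lvmdevices --adddev /dev/sdb1
lvmdevices --deldev /dev/sdb1

# Check the entries in the devices file and fix any issues found
lvmdevices --check
lvmdevices --update
```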

<h4 id="backups">Backups</h4>

<p>In the sample devices file above, you might have noticed the <code class="language-plaintext highlighter-rouge">VERSION</code> field. This is the current version of the file. LVM automatically makes a backup of the file with every change, and old versions of the file can be found in the <code class="language-plaintext highlighter-rouge">/etc/lvm/devices/backup</code> directory. So if you make a mistake when changing the file with <code class="language-plaintext highlighter-rouge">lvmdevices</code>, you can simply restore a previous version of the file.</p>
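<p>Restoring a backup is a plain file copy. The backup file name below is hypothetical; list the backup directory to see the real naming scheme on your system:</p>

```shell
# List available backups of the devices file
ls /etc/lvm/devices/backup/
# Restore one of them (hypothetical backup file name)
cp /etc/lvm/devices/backup/system.devices-0005 /etc/lvm/devices/system.devices
```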

<h3 id="overriding-the-devices-file-and-filtering-with-commands">Overriding the Devices File and Filtering with Commands</h3>

<p>Together with the devices file feature, a new option <code class="language-plaintext highlighter-rouge">--devices</code> was added to all LVM commands. This option allows specifying devices which are visible to the command. This overrides the existing devices file so it can be used either to restrict the command to work only on a subset of devices specified in the devices file or even to allow it to run on devices not specified in the file at all.</p>

<p>This option is also very useful when dealing with multiple volume groups with the same name. This is a known limitation of LVM – two volume groups with the same name cannot coexist in one system and LVM will refuse to work without renaming one of them. This can be a problem when dealing with cloned disks or backups. With <code class="language-plaintext highlighter-rouge">--devices</code>, commands like <code class="language-plaintext highlighter-rouge">vgs</code> can be restricted to “see” only one of the volume groups.</p>
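<p>For example, to inspect only the volume group living on specific disks, the devices can be passed to the command directly (the device paths are hypothetical; <code class="language-plaintext highlighter-rouge">--devices</code> is assumed here to take a comma-separated list, as described in the LVM man pages):</p>

```shell
# Restrict vgs to the given PVs, overriding the devices file
# (hypothetical device paths)
vgs --devices /dev/sdc1,/dev/sdd1
```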

<h3 id="issue-missing-volume-group">Issue: Missing Volume Group</h3>

<p>As mentioned above, when installing a new system with LVM, the devices used by the newly created volume groups will be added to the devices file. The Fedora (and RHEL) installer, Anaconda, will also add all other volume groups present during installation to the devices file, so these will also be visible in the installed system. The problems start when a device with a volume group is added to the system after installation: the volume group (and any logical volumes in it) is suddenly invisible. Even commands like <code class="language-plaintext highlighter-rouge">vgs</code> will simply ignore it, because its physical volumes are not listed in the devices file.</p>

<p>This can be a problem on dual boot systems with encryption. Because the second system’s volume group is “hidden” by the encryption layer, <a href="https://discussion.fedoraproject.org/t/luks-group-doesnt-show-up/97164">it is not visible during installation and not added to the devices file</a>. When the user unlocks the LUKS device in their newly installed system, they can’t access their second system. Unfortunately in this situation, the only solution is to manually add the second system’s volume group with <code class="language-plaintext highlighter-rouge">vgimportdevices</code> as described above.</p>
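<p>In practice, the fix after unlocking the LUKS device might look like this (the device path, mapping name and volume group name are all hypothetical):</p>

```shell
# Unlock the second system's LUKS container (if not unlocked already)
cryptsetup open /dev/sda3 second-system
# Import the volume group hidden inside it into the devices file
vgimportdevices fedora
# The VG and its logical volumes should now be visible again
vgs
```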

<h3 id="conclusion">Conclusion</h3>

<p>The LVM devices file provides a cleaner and more reliable way to control which devices LVM uses, replacing the old <code class="language-plaintext highlighter-rouge">lvm.conf</code> based filtering with stable device identifiers and simple management through the <code class="language-plaintext highlighter-rouge">lvmdevices</code> command. Overall, for most users the devices file should work transparently without any manual configuration needed.</p>]]></content><author><name>Vojtech Trefny</name></author><category term="lvm" /><summary type="html"><![CDATA[To control which devices LVM can work with, it was always possible to configure filtering in the devices section of the /etc/lvm/lvm.conf configuration file. But filtering devices this way was not very simple and could lead to problems when using paths like /dev/sda which are not stable. Many users also didn’t know this possibility exists and while using this type of filtering is possible for a single command with the --config option, it is not very user friendly. This all changed recently with the introduction of the new configuration file /etc/lvm/devices/system.devices and the corresponding lvmdevices command in LVM 2.03.12. A new option --devices was also added to the existing LVM commands for a quick way to limit which devices one specific command can use.]]></summary></entry><entry><title type="html">ATA SMART in libblockdev and UDisks</title><link href="https://storaged.org/udisks/2026/01/30/ATA-SMART-in-libblockdev-and-UDisks.html" rel="alternate" type="text/html" title="ATA SMART in libblockdev and UDisks" /><published>2026-01-30T17:00:00+00:00</published><updated>2026-01-30T17:00:00+00:00</updated><id>https://storaged.org/udisks/2026/01/30/ATA-SMART-in-libblockdev-and-UDisks</id><content type="html" xml:base="https://storaged.org/udisks/2026/01/30/ATA-SMART-in-libblockdev-and-UDisks.html"><![CDATA[<p>For a long time there was a need to modernize the UDisks’ way of ATA SMART data retrieval. 
The ageing <em>libatasmart</em> project went unmaintained over time, yet there was no alternative available. There was the <em>smartmontools</em> project with its <code class="language-plaintext highlighter-rouge">smartctl</code> command, but its console output was rather clumsy to parse. It became apparent that we needed to decouple the SMART functionality and create an abstraction.</p>

<p><em>libblockdev-3.2.0</em> introduced a new <code class="language-plaintext highlighter-rouge">smart</code> plugin API tailored for UDisks needs, first used by the <em>udisks-2.10.90</em> public beta release. We didn’t receive much feedback on the beta, so the code was released as the final <em>2.11.0</em> release about a year later.</p>

<p>While the <em>libblockdev-smart</em> plugin API is the single public interface, we created two plugin implementations right away - the existing <em>libatasmart</em>-based solution (plugin name <code class="language-plaintext highlighter-rouge">libbd_smart.so</code>) that was mostly a straight port of the existing UDisks code, and a new <code class="language-plaintext highlighter-rouge">libbd_smartmontools.so</code> plugin based around <code class="language-plaintext highlighter-rouge">smartctl</code> JSON output.</p>

<p>Furthermore, there’s a promising initiative going on: the <a href="https://github.com/smartmontools/smartmontools/issues/409">libsmartmon library</a>. If that ever materializes, we’d like to build a new plugin around it, likely deprecating the <code class="language-plaintext highlighter-rouge">smartctl</code> JSON-based implementation along with it. Contributions are welcome; this effort deserves more public attention.</p>

<p>Which plugin actually gets used is controlled by the libblockdev plugin configuration - see <code class="language-plaintext highlighter-rouge">/etc/libblockdev/3/conf.d/00-default.cfg</code> for example or, if that file is absent, have a look at the builtin defaults: <a href="https://github.com/storaged-project/libblockdev/blob/master/data/conf.d/00-default.cfg">https://github.com/storaged-project/libblockdev/blob/master/data/conf.d/00-default.cfg</a>. Distributors and sysadmins are free to change the preference, so be sure to check which plugin is active. Whenever you’re about to submit a bug report upstream, please specify which plugin you use.</p>
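<p>As a purely illustrative sketch, a plugin preference entry could look roughly like the following. The section and key names here are assumptions; treat the linked <code class="language-plaintext highlighter-rouge">00-default.cfg</code> as the authoritative format:</p>

```
# Sketch of a libblockdev plugin preference entry
# (section/key names are assumptions -- consult the linked 00-default.cfg)
[smart]
sonames=libbd_smartmontools.so;libbd_smart.so
```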

<h2 id="plugin-differences">Plugin differences</h2>

<h4 id="libatasmart-plugin">libatasmart plugin:</h4>

<ul>
  <li>small library, small runtime I/O footprint</li>
  <li>the preferred plugin, stable for decades</li>
  <li>libatasmart unmaintained upstream</li>
  <li>no internal drive/quirk database, possibly reporting false values for some attributes</li>
</ul>

<h4 id="smartmontools-plugin">smartmontools plugin:</h4>

<ul>
  <li>well-maintained upstream</li>
  <li>extensive <em>drivedb</em>, filtering out any false attribute interpretation</li>
  <li>experimental plugin, possibly to be dropped in the future</li>
  <li>heavy on runtime I/O due to additional device scanning and probing (ATA IDENTIFY)</li>
  <li>forking and calling <code class="language-plaintext highlighter-rouge">smartctl</code></li>
</ul>

<p>Naturally, the available features vary across plugin implementations, and though we tried to abstract the differences away as much as possible, certain gaps remain.</p>

<h2 id="the-libblockdev-smart-api">The libblockdev-smart API</h2>

<p>Please refer to our extensive public documentation: <a href="https://storaged.org/libblockdev/docs/libblockdev-SMART.html#libblockdev-SMART.description">https://storaged.org/libblockdev/docs/libblockdev-SMART.html#libblockdev-SMART.description</a></p>

<p>Apart from ATA SMART, we also laid the foundation for SCSI/SAS(?) SMART, though it is currently unused in UDisks and essentially untested. Note that <em>NVMe Health Information</em> has been available through the <em>libblockdev-nvme</em> plugin for a while and is not subject to this API.</p>

<h2 id="attribute-names--validation">Attribute names &amp; validation</h2>

<p>We spent a great deal of effort to provide unified attribute naming, consistent data type interpretation and attribute validation. While <em>libatasmart</em> mostly provides raw values, <em>smartmontools</em> benefits from its <code class="language-plaintext highlighter-rouge">drivedb</code> and provides a better interpretation of each attribute value.</p>

<p>For the public API we had to make a decision about attribute naming style. While <em>libatasmart</em> only provides a single style with no variations, we’ve discovered lots of inconsistencies just by grepping <code class="language-plaintext highlighter-rouge">drivedb.h</code>. For example, attribute ID 171 translates to <code class="language-plaintext highlighter-rouge">program-fail-count</code> with <em>libatasmart</em>, while <code class="language-plaintext highlighter-rouge">smartctl</code> may report variations such as <code class="language-plaintext highlighter-rouge">Program_Fail_Cnt</code>, <code class="language-plaintext highlighter-rouge">Program_Fail_Count</code>, <code class="language-plaintext highlighter-rouge">Program_Fail_Ct</code>, etc. And with UDisks historically providing untranslated <em>libatasmart</em> attribute names, we had to create a translation table from <code class="language-plaintext highlighter-rouge">drivedb.h</code> names to <em>libatasmart</em> names. Check this atrocity out in <a href="https://github.com/storaged-project/libblockdev/blob/master/src/plugins/smart/smart-private.h">https://github.com/storaged-project/libblockdev/blob/master/src/plugins/smart/smart-private.h</a>. This table is by no means complete, just a bunch of commonly used attributes.</p>

<p>Unknown attributes, or those that fail validation, are reported under a generic name such as <code class="language-plaintext highlighter-rouge">attribute-171</code>. For this reason consumers of the new UDisks release (e.g. <em>Gnome Disks</em>) may spot some differences and perhaps more attributes reported as unknown compared to previous UDisks releases. Feel free to submit fixes for the mapping table; we’ve only tested this on a limited set of drives.</p>

<p>Oh, and we also fixed the notoriously broken <em>libatasmart</em> drive temperature reporting, though the fix is not 100% bulletproof either.</p>

<p>We’ve also created an experimental <code class="language-plaintext highlighter-rouge">drivedb.h</code> validator on top of <em>libatasmart</em>, mixing the best of both worlds, with uncertain results. This feature can be turned on by the <code class="language-plaintext highlighter-rouge">--with-drivedb[=PATH]</code> configure option.</p>

<h2 id="disabling-ata-smart-functionality-in-udisks">Disabling ATA SMART functionality in UDisks</h2>

<p>The UDisks 2.10.90 release also brought a new configure option <code class="language-plaintext highlighter-rouge">--disable-smart</code> to disable ATA SMART completely. This was, exceptionally, possible without breaking the public ABI because the API provides the <a href="https://storaged.org/udisks/docs/gdbus-org.freedesktop.UDisks2.Drive.Ata.html#gdbus-property-org-freedesktop-UDisks2-Drive-Ata.SmartUpdated">Drive.Ata.SmartUpdated</a> property indicating the timestamp the data were last refreshed. When disabled at compile time, this property always remains set to zero.</p>

<p>We also made SMART data retrieval work with <code class="language-plaintext highlighter-rouge">dm-multipath</code> to avoid accessing particular device paths directly and tested that on a particularly large system.</p>

<h2 id="drive-access-methods">Drive access methods</h2>

<p>The <code class="language-plaintext highlighter-rouge">ID_ATA_SMART_ACCESS</code> udev property (see <a href="https://storaged.org/udisks/docs/udisks.8.html">man udisks(8)</a>) controls the access method for the drive. It was a very well hidden secret, only found by accident while reading the <em>libatasmart</em> code, even though it had been in place for over a decade. Only <em>udisks-2.11.0</em> learned to respect this property in general, no matter which <code class="language-plaintext highlighter-rouge">libblockdev-smart</code> plugin is actually used.</p>

<p>Those who prefer UDisks to avoid accessing their drives at all may want to set this <code class="language-plaintext highlighter-rouge">ID_ATA_SMART_ACCESS</code> udev property to <code class="language-plaintext highlighter-rouge">none</code>. The effect is similar to compiling UDisks with ATA SMART disabled, though this allows fine-grained control with the usual udev rule match constructions.</p>
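<p>For example, a udev rule along these lines could disable SMART access for one particular drive. The rule file name and the serial number value below are hypothetical:</p>

```
# /etc/udev/rules.d/99-no-smart.rules (hypothetical file name)
# Disable UDisks SMART access for a drive matched by its serial number
ACTION=="add|change", SUBSYSTEM=="block", ENV{ID_SERIAL}=="Example_Drive_SERIAL123", ENV{ID_ATA_SMART_ACCESS}="none"
```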

<h2 id="future-plans-nice-to-haves">Future plans, nice-to-haves</h2>

<p>Apart from high hopes for the aforementioned <a href="https://github.com/smartmontools/smartmontools/issues/409">libsmartmon library</a> effort there are some more rough edges in UDisks.</p>

<p>For example, housekeeping could use refactoring to allow arbitrary intervals for specific jobs or even particular drives, instead of the fixed 10-minute interval that is also used for SMART data polling. Furthermore, some kind of throttling or a constrained worker pool should be put in place to avoid spawning all jobs at once (think of spawning <code class="language-plaintext highlighter-rouge">smartctl</code> for a hundred drives at the same time) and to avoid bottlenecks where one slow housekeeping job blocks the rest of the queue.</p>

<p>At last, we would like to make SMART data retrieval via USB passthrough work. If that happened to work in the past, it was pure coincidence. After receiving dozens of bug reports citing spurious kernel failure messages that often led to a USB device being disconnected, we’ve disabled our ATA device probes for USB devices. As a result, the <code class="language-plaintext highlighter-rouge">org.freedesktop.UDisks2.Drive.Ata</code> D-Bus interface is never attached for USB devices.</p>]]></content><author><name>Tomáš Bžatek</name></author><category term="udisks" /><summary type="html"><![CDATA[For a long time there was a need to modernize the UDisks’ way of ATA SMART data retrieval. The ageing libatasmart project went unmaintained over time yet there was no other alternative available. There was the smartmontools project with its smartctl command whose console output was rather clumsy to parse. It became apparent we need to decouple the SMART functionality and create an abstraction.]]></summary></entry><entry><title type="html">Partitioning with Ansible Storage Role: Partitions</title><link href="https://storaged.org/storage-role/2025/12/15/Partitioning-with-Ansible-Storage-Role-Partitions.html" rel="alternate" type="text/html" title="Partitioning with Ansible Storage Role: Partitions" /><published>2025-12-15T09:13:00+00:00</published><updated>2025-12-15T09:13:00+00:00</updated><id>https://storaged.org/storage-role/2025/12/15/Partitioning-with-Ansible-Storage-Role-Partitions</id><content type="html" xml:base="https://storaged.org/storage-role/2025/12/15/Partitioning-with-Ansible-Storage-Role-Partitions.html"><![CDATA[<p>The <a href="https://linux-system-roles.github.io/storage/">storage role</a> always allowed creating and managing different storage technologies like LVM, LUKS encryption or MD RAID, but one technology seemed to be missing for a long time, and surprisingly, it was the most basic one, the actual partitioning. 
Support for partition management was always something that was planned for the storage role, but it was never a high priority. From the start, the role could create partitions. When creating a more complex storage setup on an empty disk, for example creating a new LVM volume group or adding a new physical volume to an existing LVM setup, the role would always automatically create a single partition on the disk. But that was all the role could do, just one single partition spanning the entire disk.</p>

<p>The reason for this limitation was simple: creating multiple partitions is something usually reserved for the OS installation process, where users need to have separate partitions required by the bootloader, like <code class="language-plaintext highlighter-rouge">/boot</code> and <code class="language-plaintext highlighter-rouge">/boot/efi</code>. The more advanced “partitioning” is then delegated to more complex storage technologies like LVM, which is where most of the changes are done in an existing system and where users will usually employ Ansible to make changes later.</p>

<p>But the requirement for more advanced partition management was always there, and since the <a href="https://github.com/linux-system-roles/storage/releases/tag/1.19.0">1.19 release</a>, the role can now create and manage partitions in the Ansible way.</p>

<h3 id="partition-management-with-storage-role">Partition Management with Storage Role</h3>

<p>The usage of the role for partition management is simple and follows the same logic as the other storage technologies, with the management divided into two parts: managing the <code class="language-plaintext highlighter-rouge">storage_pools</code>, which in the case of partitions is the underlying disk (or to be more precise, the partition table), and the <code class="language-plaintext highlighter-rouge">volumes</code>, which are the partitions themselves. A simple playbook to create two partitions on a disk can look like this:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>  roles:
    - name: linux-system-roles.storage
      storage_pools:
        - name: sdb
          type: partition
          disks: sdb
          volumes:
            - name: sdb1
              type: partition
              size: 1 GiB
              fs_type: ext4
            - name: sdb2
              type: partition
              size: 10 GiB
              fs_type: ext4
</code></pre></div></div>

<p>and the partitions it creates will look like this:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS FSTYPE
sdb      8:16   0  20G  0 disk
├─sdb1   8:17   0   1G  0 part             ext4
└─sdb2   8:18   0  10G  0 part             ext4
</code></pre></div></div>

<p>Other filesystem-related properties (like <code class="language-plaintext highlighter-rouge">mount_point</code> or <code class="language-plaintext highlighter-rouge">fs_label</code>) can be specified, and these work in the same way as for any other volume type.</p>

<p>The only property that is specific to partitions is <code class="language-plaintext highlighter-rouge">part_type</code>, which allows you to choose a partition type when using the MBR/MSDOS partition table. Supported types are <code class="language-plaintext highlighter-rouge">primary</code>, <code class="language-plaintext highlighter-rouge">logical</code> and <code class="language-plaintext highlighter-rouge">extended</code>. If you don’t specify the partition type, the role will create the first three partitions as primary and for the fourth one, add an extended partition and create it as a logical partition inside it. On GPT, which is used as the default partition table, the partition type is ignored.</p>

<p>Encrypted partitions can be created by adding the <code class="language-plaintext highlighter-rouge">encryption: true</code> option for the partition and setting the passphrase:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>  roles:
    - name: linux-system-roles.storage
      storage_pools:
        - name: sdb
          type: partition
          disks: sdb
          volumes:
            - name: sdb1
              type: partition
              size: 1 GiB
              fs_type: ext4
              encryption: true
              encryption_password: "aaaaaaaaa"
            - name: sdb2
              type: partition
              size: 10 GiB
              fs_type: ext4
              encryption: true
              encryption_password: "aaaaaaaaa"
</code></pre></div></div>

<p>Don’t forget that adding the encryption layer is a destructive operation – if you run the two playbooks above one after another, the filesystems created by the first one will be removed, and all data on them will be lost. Adding the LUKS encryption layer (so-called re-encryption) is currently not supported by the role.</p>

<h3 id="idempotency-and-partition-numbers">Idempotency and Partition Numbers</h3>

<p>One of the core principles of Ansible is idempotency, or the ability to re-run the same playbook, and if the system is in the state specified by the playbook, no changes will be made.</p>

<p>This is true for partitioning with the storage role as well. When running the playbook from our example above for the second time, the role will check the <code class="language-plaintext highlighter-rouge">sdb</code> disk and look for the two specified partitions. And if there are two partitions 1 and 10 GiB large, it won’t do anything. This is how the role works in general, but with partitions, there is a new challenge: partitions don’t have unique names and using partition numbers for idempotency can be tricky.</p>

<blockquote>
  <p>Did you know that partition numbers for logical partitions are not stable? If you have two logical partitions <code class="language-plaintext highlighter-rouge">sdb5</code> and <code class="language-plaintext highlighter-rouge">sdb6</code>, removing the <code class="language-plaintext highlighter-rouge">sdb5</code> partition will automatically re-number the <code class="language-plaintext highlighter-rouge">sdb6</code> partition to <code class="language-plaintext highlighter-rouge">sdb5</code>.</p>
</blockquote>

<p>Predicting the partition name is not always straightforward. For example, disks that end in a number (common with NVMe drives) require adding a <code class="language-plaintext highlighter-rouge">p</code> separator before the partition number (<code class="language-plaintext highlighter-rouge">nvme0n1</code> becomes <code class="language-plaintext highlighter-rouge">nvme0n1p1</code>).</p>

<p>For these reasons, the role requires explicitly using the <code class="language-plaintext highlighter-rouge">state: absent</code> option to remove a partition, and partitions can be referred to by their numbers in the playbooks as well as their full names. So, for example, the following playbook will resize the <code class="language-plaintext highlighter-rouge">sdb2</code> partition from our first example</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>  roles:
    - name: linux-system-roles.storage
      storage_pools:
        - name: sdb
          type: partition
          disks: sdb
          volumes:
            - name: 2
              type: partition
              size: 15 GiB
              fs_type: ext4
</code></pre></div></div>

<p>and the first partition won’t be removed, because it is not explicitly mentioned as <code class="language-plaintext highlighter-rouge">absent</code>, only omitted in the playbook:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS FSTYPE
sdb      8:16   0  20G  0 disk
├─sdb1   8:17   0   1G  0 part             ext4
└─sdb2   8:18   0  15G  0 part             ext4
</code></pre></div></div>
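<p>To actually delete a partition, it has to be listed explicitly with <code class="language-plaintext highlighter-rouge">state: absent</code>. A sketch removing the second partition from the example above (following the same playbook structure; not a tested playbook):</p>

```yaml
  roles:
    - name: linux-system-roles.storage
      storage_pools:
        - name: sdb
          type: partition
          disks: sdb
          volumes:
            - name: 2
              type: partition
              state: absent
```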

<h3 id="feedback-and-future-features">Feedback and Future Features</h3>

<p>With this change, the storage role can now manage all basic storage technologies. We are of course not yet covering all the potential features, but we are always looking for more ideas from our users. If you have any features you’d like to see in the role, please don’t hesitate and <a href="https://github.com/linux-system-roles/storage/issues">let us know</a>.</p>]]></content><author><name>Vojtech Trefny</name></author><category term="storage-role" /><summary type="html"><![CDATA[The storage role always allowed creating and managing different storage technologies like LVM, LUKS encryption or MD RAID, but one technology seemed to be missing for a long time, and surprisingly, it was the most basic one, the actual partitioning. Support for partition management was always something that was planned for the storage role, but it was never a high priority. From the start, the role could create partitions. When creating a more complex storage setup on an empty disk, for example creating a new LVM volume group or adding a new physical volume to an existing LVM setup, the role would always automatically create a single partition on the disk. But that was all the role could do, just one single partition spanning the entire disk.]]></summary></entry><entry><title type="html">Partitioning with Ansible Storage Role: VDO</title><link href="https://storaged.org/storage-role/2023/10/05/Partitioning-with-Ansible-Storage-Role-VDO.html" rel="alternate" type="text/html" title="Partitioning with Ansible Storage Role: VDO" /><published>2023-10-05T12:17:00+00:00</published><updated>2023-10-05T12:17:00+00:00</updated><id>https://storaged.org/storage-role/2023/10/05/Partitioning-with-Ansible-Storage-Role-VDO</id><content type="html" xml:base="https://storaged.org/storage-role/2023/10/05/Partitioning-with-Ansible-Storage-Role-VDO.html"><![CDATA[<p>This time we shall talk about Storage Role support of VDO. The abbreviation stands for Virtual Data Optimizer and that is exactly what it does. 
It reduces the size of stored data to save space. To be precise, the Storage Role uses the LVM variant of VDO, called (what a surprise) <a href="https://man7.org/linux/man-pages/man7/lvmvdo.7.html">LVM VDO</a>.</p>

<h3 id="how-does-vdo-do-it">How Does VDO Do It?</h3>

<p>VDO uses two main techniques to reduce data size:</p>

<ul>
  <li>Data compression</li>
  <li>Data deduplication</li>
</ul>

<p>Data compression works much like regular file compression. However, VDO compresses and decompresses blocks of data automatically and at a lower level, so the user never even notices it happening.</p>

<p>The same goes for data deduplication. VDO identifies duplicate blocks of data, removes the redundant copies, and keeps a single copy of the block that all references point to.</p>

<p>Both compression and deduplication can be turned on or off when the VDO device is created.</p>
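<p>Under the hood this is plain LVM VDO, so the equivalent manual setup looks roughly like this. This is just a sketch following the <code class="language-plaintext highlighter-rouge">lvmvdo(7)</code> manpage; the volume group <code class="language-plaintext highlighter-rouge">vg1</code> and the LV names are made-up examples:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Create a VDO pool LV taking 9 GiB of physical space in vg1,
# with a 12 GiB virtual LV named vdo0 on top of it:
lvcreate --type vdo --name vdo0 --size 9G --virtualsize 12G vg1/vdopool0

# Compression and deduplication can also be toggled later on the pool LV:
lvchange --compression n vg1/vdopool0
lvchange --deduplication y vg1/vdopool0
</code></pre></div></div>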

<h3 id="using-vdo-in-storage-role">Using VDO in Storage Role</h3>

<p>By now we should be pretty confident about how to use the Storage Role, and using VDO is not much different.</p>

<p>To use it, create a <code class="language-plaintext highlighter-rouge">storage_pool</code> and set one or both of the options <code class="language-plaintext highlighter-rouge">compression</code> and <code class="language-plaintext highlighter-rouge">deduplication</code> to <code class="language-plaintext highlighter-rouge">true</code> on one of its volumes. This tells the role to use VDO. <strong>Please note that the Storage Role currently supports only one VDO volume per <code class="language-plaintext highlighter-rouge">storage_pool</code></strong>.</p>

<p>You will also want to set both the <code class="language-plaintext highlighter-rouge">vdo_pool_size</code> and <code class="language-plaintext highlighter-rouge">size</code> options.</p>

<p>Why two sizes?
The first, <code class="language-plaintext highlighter-rouge">vdo_pool_size</code>, is the actual physical space reserved on disk for the compressed data.</p>

<p>The other option, <code class="language-plaintext highlighter-rouge">size</code>, determines how large the device presents itself to the outside. This value is virtual and can (and is supposed to) be larger than the reserved physical space. By how much is left to the user’s discretion and should be based on an estimate of how well the data will compress.</p>

<p>The playbook for creating a VDO volume should then look like this:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>---
- hosts: all
  become: true
  vars:
    storage_safe_mode: false

  tasks:
    - name: Create LVM VDO volume under volume group 'vg1'
      include_role:
        name: linux-system-roles.storage
      vars:
        storage_pools:
          - name: vg1
            disks:
              - "/dev/sda"
              - "/dev/sdb"
              - "/dev/sdc"
            volumes:
              - name: test1
                compression: true
                deduplication: true
                vdo_pool_size: "9 GiB"  # space taken on disk
                size: "12 GiB"          # virtual space
                mount_point: "/opt/test1"
                state: present
</code></pre></div></div>
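<p>Assuming the playbook above is saved as <code class="language-plaintext highlighter-rouge">vdo.yml</code> (the file and inventory names here are just examples), it runs like any other Ansible playbook, and the result can be inspected on the managed host with standard tools:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Run the playbook against the hosts in the inventory:
ansible-playbook -i inventory vdo.yml

# On the managed host, the new volume should show up under vg1:
lvs vg1
lsblk /dev/vg1/test1
</code></pre></div></div>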

<h3 id="things-to-know-before-creating-a-vdo-device">Things to Know Before Creating a VDO Device</h3>

<p>As with all of the more advanced features the Storage Role provides, VDO is meant for specific use cases.</p>

<p>Some data, such as logs, compress and deduplicate well, which makes them good candidates. On the other hand, using VDO on data that is frequently modified or already scrambled by encryption can result in nothing but additional strain on resources.</p>

<p>Data that cannot be easily deduplicated or compressed can also lead to a situation where the user runs out of physical storage space while VDO still reports plenty of free space left.</p>
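<p>To avoid being surprised by a full pool, it is worth watching the physical usage of the VDO pool rather than the virtual free space the filesystem reports. With plain LVM tools that can look like this (a sketch; the volume group name <code class="language-plaintext highlighter-rouge">vg1</code> matches the example above):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># data_percent shows how full the underlying VDO pool really is,
# independent of how much free space the filesystem claims to have:
lvs -o lv_name,lv_size,data_percent vg1
</code></pre></div></div>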

<p>Since the system has no way of telling what kind of data will eventually be put on which device, the responsibility of choosing wisely falls upon the user.</p>

<h3 id="couple-of-tips-at-the-end">A Couple of Tips at the End</h3>

<p>And that’s it. As I already mentioned, the Storage Role uses LVM VDO, so its <a href="https://man7.org/linux/man-pages/man7/lvmvdo.7.html">manpage</a> is a good place to start if you want to know more about it. For more general information about VDO you can also check the <a href="https://github.com/dm-vdo/vdo">VDO project on GitHub</a>.</p>]]></content><author><name>Jan Pokorny</name></author><category term="storage-role" /><summary type="html"><![CDATA[This time we shall talk about the Storage Role’s support for VDO. The abbreviation stands for Virtual Data Optimizer and that is exactly what it does. It reduces the size of stored data to save space. To be precise, the Storage Role uses the LVM variant of VDO, called (what a surprise) LVM VDO.]]></summary></entry></feed>