How thin volumes work
To really understand how thin volumes work, let's begin by talking about classical, non-thinly-provisioned volumes. The following picture depicts a normal LVM setup:
As you can see, multiple physical disks/partitions are aggregated to form one or more Volume Groups. Multiple Logical Volumes are then allocated inside the volume group. Please note how free space exists inside each logical volume: you cannot recover this free space until you shrink the volume, and this, in turn, requires a filesystem shrink. Even if your filesystem supports shrinking, this practice is generally avoided. XFS and some other filesystems do not support shrinking at all, effectively ruling out any volume shrink.
On the plus side, note that, precisely because their space is pre-allocated during volume creation, classical logical volumes are often mostly contiguous on-disk. This is an important consideration when dealing with mechanical storage, as a badly fragmented volume will hamper performance. By virtue of being mostly contiguous, normal logical volumes remain the preferred choice for performance-sensitive applications (eg: critical databases and virtual machines) running on top of spinning disks.
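As a concrete illustration, here is a minimal sketch of the classical setup described above. Device paths, the volume group name `vg0` and the volume name `data` are illustrative assumptions, and the commands must run as root:

```shell
# Classical (fully provisioned) LVM setup - illustrative names, run as root.
pvcreate /dev/sdb1 /dev/sdc1         # mark partitions as physical volumes
vgcreate vg0 /dev/sdb1 /dev/sdc1     # aggregate them into a volume group
lvcreate -L 20G -n data vg0          # pre-allocate a 20 GiB logical volume
mkfs.ext4 /dev/vg0/data              # create a filesystem on it

# Recovering unused space later means shrinking the filesystem together
# with the volume - something XFS, for example, cannot do at all:
# lvreduce --resizefs -L 10G vg0/data   # works for ext4; refused for XFS
```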
Now consider how thin logical volumes work:
Once again, physical disks and partitions are aggregated inside a Volume Group. Inside it, a Thin Pool is created. But hey – what is a Thin Pool? A thin pool is a semi-static entity (similar to a normal logical volume) which itself contains Thin Volumes. These thin volumes, in turn, are used to create filesystems and store data.
You may wonder where the advantage is: after all, the Thin Pool is a semi-static entity with real space allocated. The point is that the Thin Volumes created inside it share the same physical disk space, with the possibility of sharing common data and without free space fragmentation or waste. When a thin volume needs to grow, free space is subtracted from the Thin Pool and assigned (via metadata mapping) to the requesting thin volume.
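The thin setup can be sketched with the same tools. The names `vg0`, `pool0`, `thinvol1` and the sizes are assumptions for illustration; note that the virtual size of the thin volumes may exceed the real pool size (over-provisioning), since space is only consumed as data is written:

```shell
# Thin-provisioned setup - illustrative names, run as root.
vgcreate vg0 /dev/sdb1 /dev/sdc1                  # volume group, as before
lvcreate --type thin-pool -L 50G -n pool0 vg0     # semi-static thin pool
lvcreate --thin -V 100G -n thinvol1 vg0/pool0     # thin volume: virtual size,
lvcreate --thin -V 100G -n thinvol2 vg0/pool0     # may exceed the pool size
mkfs.ext4 /dev/vg0/thinvol1

# Real space is drawn from the pool only as data is written:
lvs vg0          # the Data% column shows actual pool usage
```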
Wait a moment – what does it mean that they “share common data”, and how is that possible? We'll answer both questions in the following page, when speaking about snapshots. For the moment, note that it is not done via deduplication – if you really want an on-line deduplication system, you have to go with ZFS.
Please note that this flexibility has its drawback: disk space fragmentation, specifically. The reason is simple: as disk space is dynamically allocated to thin logical volumes, you have no guarantee that they remain contiguous from the disk's perspective. So, when dealing with mechanical storage, you had better leave I/O-critical applications on normal logical volumes (which, by the way, can happily coexist in the same Volume Group where the Thin Pools reside).
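That coexistence is worth showing explicitly. Assuming the illustrative `vg0`/`pool0` names from before, a classical volume for an I/O-critical database can live right next to a thin pool in the same volume group:

```shell
# Normal and thin volumes side by side in one VG - illustrative names, run as root.
lvcreate -L 20G -n db_data vg0                 # pre-allocated, mostly contiguous
lvcreate --thin -V 100G -n scratch vg0/pool0   # thin volume for less critical data

# List both volume types living in the same volume group:
lvs -o lv_name,segtype,lv_size vg0
```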
Moreover, as new space is allocated to the requesting thin volume, it is zeroed – in other words, the disk chunk is first overwritten with zeroes and only then assigned to the thin volume where it is needed. This is a security feature: without it, anyone with raw access to the thin volume (eg: using a root account on the VM where the thin volume is mapped) could access sensitive data that was un-mapped from other thin volumes. Obviously this has a performance impact but, if you don't care about securing some specific pools, you can disable it.
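Zeroing is a per-pool setting, so it can be left on for sensitive pools and turned off where performance matters more. A sketch, again with assumed names (`vg0`, `pool0`, `fastpool`):

```shell
# Toggling zeroing of newly provisioned chunks - illustrative names, run as root.
lvcreate --type thin-pool -L 50G -Z n -n fastpool vg0   # create with zeroing off
lvchange -Z n vg0/pool0                                 # or disable it on an existing pool
lvs -o lv_name,zero vg0                                 # verify the setting per pool
```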