(Continued from last post on Performance in Virtualized Storage)
Storage issues in containers differ somewhat from those in hypervisors. Docker uses two kinds of storage: storage drivers, which back container images and their writable layers, and volume drivers, which back so-called data volumes.
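To make the split concrete (a minimal sketch: the `overlay` driver choice, the `mydata` volume name, and `myimage` are just example values, and the config is written to a local file here so the sketch runs unprivileged):

```shell
# The storage driver is selected daemon-wide, normally in
# /etc/docker/daemon.json; we write a local copy for illustration.
cat > daemon.json <<'EOF'
{
  "storage-driver": "overlay"
}
EOF
grep storage-driver daemon.json
# A data volume, by contrast, bypasses the storage driver entirely
# (shown, not run; "mydata" is a hypothetical volume name):
#   docker volume create mydata
#   docker run -v mydata:/var/lib/data myimage
```

The point of the split: image layers go through the storage driver's copy-on-write machinery, while volume I/O goes straight to the backing filesystem.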
Storage drivers are responsible for translating image layers into a container's root filesystem. Docker supports device-mapper, AUFS, OverlayFS, Btrfs, and, more recently, ZFS storage drivers. Most storage drivers support snapshots (natively or emulated) and thin provisioning.
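Thin provisioning is easy to see in action even without Docker: a sparse file has a large apparent size but consumes blocks only as data is actually written. A minimal sketch (the file name is arbitrary; `stat -c` assumes GNU coreutils):

```shell
# Create a 1 GiB sparse file: the apparent size is 1 GiB, but almost
# no blocks are allocated until something actually writes to it.
truncate -s 1G thin.img
stat -c 'apparent: %s bytes, allocated: %b blocks' thin.img
```

Thin-provisioned container storage applies the same trick at the block or filesystem level: capacity is promised up front and allocated lazily.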
Not all drivers are the same. My colleague Jeremy Eder has benchmarked storage drivers extensively on his blog.
Most of the performance issues, also described as Problem 6 in this LWN article, are caused by (false) sharing: one container's I/O activity is felt by the others on the shared underlying storage, a.k.a. the noisy-neighbor problem.
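Short of a new filesystem, the blunt instrument against noisy neighbors today is throttling via the blkio cgroup. A sketch under stated assumptions: the `8:0` device numbers and the 1 MiB/s limit are made-up examples, and applying the rule requires root, so the sketch only composes it:

```shell
# Compose a blkio throttle rule: cap writes on device 8:0 at 1 MiB/s.
DEV="8:0"
RULE="$DEV $((1024 * 1024))"
echo "$RULE"
# With root it would be applied as (shown, not run):
#   echo "$RULE" > /sys/fs/cgroup/blkio/<group>/blkio.throttle.write_bps_device
# Docker exposes the same knob per container:
#   docker run --device-write-bps /dev/sda:1mb myimage
```

Throttling caps the damage a neighbor can do, but it does not remove the sharing itself, which is what the filesystems below go after.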
Naturally, solutions invariably concentrate on jailbreaking shared storage.
IceFS, despite what the name suggests, was originally designed for hypervisors. Nonetheless, the idea is rich enough to shed light on container storage. IceFS provides physical and namespace isolation for hypervisor (and potentially container) consumers. Such isolation improves reliability and lessens the noisy-neighbor problem. I have yet to spot snapshot or thin-provisioning support for possible Docker adoption, though.
SpanFS is like IceFS on isolation but more aggressive: the I/O stacks themselves are also isolated, so buffer allocation and scheduling for different containers are completely independent (locks and noise? no more!). The results are astounding: one microbenchmark showed a 10x speedup over ext4.
Split-Level I/O is somewhat along the same line: I/O stacks are not only isolated but also tagged per process/container/VM, so priority annotation and resource accounting stay well under control. This corrects the priority inversion and accounting blind spots caused by noisy neighbors.
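Stock Linux already exposes a coarse, per-process slice of such accounting; Split-Level I/O's contribution is propagating the tags through the whole stack. A quick look (Linux-specific, assumes task I/O accounting is compiled in; the values will vary):

```shell
# Per-process I/O counters maintained by the kernel's task accounting.
# read_bytes/write_bytes count actual block-device traffic attributed
# to this shell process.
grep -E '^(read_bytes|write_bytes)' /proc/$$/io
```

What `/proc/<pid>/io` cannot do, and what Split-Level I/O adds, is keep that attribution intact once requests descend into shared filesystem and block layers, where today they lose their owner.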