A Very Rough Performance Comparison: File vs. TCM Loop vs. Loopback

Following up on my previous investigation of the loopback setup.

Test Environment:
Fedora 21 x86_64, 50GB RAM, 24-core Intel(R) Xeon(R) CPU X5650 @ 2.67GHz, Kernel 3.17.8-300.fc21.x86_64

The backing file used for TCM loop and loopback is 200GB in size, created on an ext4 filesystem.

XFS is built on top of the loopback and TCM loop devices.
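For reference, the setup was roughly along these lines; the paths, file names, creation method for the backing file, and the target WWN below are placeholders of mine, and the exact commands used for the original runs may have differed:

# Loopback variant: create a 200GB backing file on ext4 (creation method is a guess),
# attach it to a loop device, then put XFS on top of it
fallocate -l 200G /data/loop-backing.img
losetup --find --show /data/loop-backing.img   # prints the device name, e.g. /dev/loop0
mkfs.xfs /dev/loop0
mount /dev/loop0 /mnt/loop-xfs

# TCM loop variant: export a 200GB fileio backstore through LIO's loopback fabric
targetcli /backstores/fileio create name=bench file_or_dev=/data/tcm-backing.img size=200G
targetcli /loopback create                     # creates a target with an auto-generated naa. WWN
targetcli /loopback/naa.XXXXXXXXXXXXXXXX/luns create /backstores/fileio/bench
# a new local SCSI disk (e.g. /dev/sdX) appears; format it with mkfs.xfs and mount as above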

I am sure there are problems with these simplistic tests; I would love to see whether the results can be reproduced elsewhere.

Small IO

fio options: --ioengine=libaio --iodepth=4 --rw=rw --bs=4k --size=50G --numjobs=4
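A complete invocation would look something like the following; the job name, target directory, and --group_reporting are my additions (fio requires a job name, and I am assuming each run targeted the mounted filesystem under test):

fio --name=small-rw --directory=/mnt/test --ioengine=libaio --iodepth=4 --rw=rw --bs=4k --size=50G --numjobs=4 --group_reporting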

Type           ext4 File   TCM Loop + XFS   Loopback + XFS
RW Bandwidth   53 MB/s     66 MB/s          61 MB/s

Large IO

fio options: --ioengine=sync --iodepth=4 --rw=rw --bs=1m --size=50G --numjobs=4
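The corresponding full command, with the same assumed job name and directory; note that with the sync ioengine an iodepth greater than 1 has no effect, so this run is effectively queue depth 1:

fio --name=large-rw --directory=/mnt/test --ioengine=sync --iodepth=4 --rw=rw --bs=1m --size=50G --numjobs=4 --group_reporting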

Type           ext4 File   TCM Loop + XFS   Loopback + XFS
RW Bandwidth   112 MB/s    109 MB/s         95 MB/s

The loopback device suffers from the so-called double-caching problem, where the page cache is allocated twice for the same on-disk block: once for the backing file on ext4 and once for the filesystem mounted on top of the loop device. There have been attempts to fix this using O_DIRECT, but none has been merged into the kernel or into loopback mount yet. Parallels' ploop is an O_DIRECT-enabled loopback variant, but I haven't tested it.
