Towards a Universal Storage Control Plane

As time goes by, the goals for a universal storage control plane keep evolving.

  • libStorageMgmt is an SMI-S based framework with C and Python bindings.
  • OpenStack Cinder manages block storage across heterogeneous backends. It is mostly used in virtualized environments (Nova), although the undercloud can also consume Cinder volumes.
  • OpenStack Fuxi is a young project with the ambitious goal of managing storage for containers, VMs, and bare metal machines.
  • Dell/EMC projects that are not yet widely interoperable.
  • Virtualized backends, which present virtualized block storage through a well-defined image format or API. The control plane and data plane are blurred here, because both are specified by the image format or the API.
    • QCOW(2) from QEMU. Although it mostly targets VM storage, the recent qemu-tcmu proposal and related work enable QEMU to serve containers and bare metal as well.
    • VAAI from VMware. VM only?
  • Container engines and orchestrators
    • Kubernetes and Docker volumes support cloud, virtualized, and NAS backends: AWS, GCE, OpenStack, Azure, vSphere, Ceph, Gluster, iSCSI, NFS, Fibre Channel, Quobyte, Flocker, etc.
    • Storage features such as provisioning, attach/detach, snapshot, quota, resize, and protection are either implemented or on the roadmap.

Random Thoughts: Hardened Docker Container, Unikernel and SGX

Recently I read several interesting research and analyst papers on Docker container security.

SCONE: Secure Containers using Intel SGX describes an implementation that runs containers inside a secure enclave without a large footprint or significant performance loss.

Hardening Linux Containers is a very comprehensive analysis of containers, virtualization, and the underlying technologies. I found it rewarding enough to read twice, as is the other white paper, Abusing Privileged and Unprivileged Linux Containers.

So far all of these are based on Docker containers that run on top of a general-purpose OS (Linux). There are limitations and false claims.

What about running a unikernel in SGX? It is entirely possible, per Solo5. And what about the next step: making a unikernel running inside SGX a new container runtime for Kubernetes, just like Docker, rkt, and hyper?

ContainerCon 2016: Random Notes

There were quite a few good talks.


I wrote about hyper a while back. It has now grown into full-fledged Hypernetes: it runs Docker images in VMs, is Neutron enabled, and uses a modified Cinder volume plugin. And on the outside it is Kubernetes! The folks at hyper put in lots of hard work.

The talk itself got lots of attention. Dr. Zhang posted his slides. I must admit he did a good job walking through Kubernetes in a very short time.

Open SDS

It is a widely felt frustration across the industry that the look and feel of multi-vendor storage differs from vendor to vendor, and that management frameworks (e.g. Cinder, Kubernetes, and Mesos) have to take pains to abstract a common interface to deal with them.

Open SDS aims to end this by starting a cross-vendor collaboration. It is interesting to see Huawei and Dell EMC standing on the stage together.


I talked about the opportunities a converged QEMU and TCMU offers: think Cinder, but without Nova, to access storage. While there are efforts to bring Cinder (and hypervisor storage) to bare metal and containers, QEMU+TCMU is probably one of the most promising frameworks.


Brutal ssh attacks experienced

My VM running on one of the public clouds witnessed brutal ssh attacks. During its mere 12 days of uptime, a grand total of 101,329 failed ssh login attempts were logged.

Below is a sample of the attacking IPs and their frequencies (one way to extract them is sketched after the table).


# of Attacks IP
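
Counts like these can be pulled from the sshd log. The snippet below is one way to do it, assuming a Debian-style /var/log/auth.log; the log path and the field holding the source IP vary by distribution, so treat both as assumptions.

# total failed ssh login attempts
grep -c "Failed password" /var/log/auth.log
# attacking IPs ranked by attempt count
grep "Failed password" /var/log/auth.log | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn | head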

Create a VHD blob using Azure Go SDK

There are no good tutorials on how to create a VHD blob that can serve as a VM's Data Disk on Azure, so I am writing up some notes, based on my recent Kubernetes PR 30091, which is part of the effort to support Azure Data Disk dynamic provisioning.

This work uses both of Azure's ASM and ARM modes: ARM mode to extract the Storage Account name, key, SKU tier, and location; ASM mode to list, create, and delete Azure Page Blobs.

When an inquiry comes in to find an Azure Storage account with SKU tier Standard_LRS and location eastus, all Storage accounts are listed. This is accomplished by getStorageAccounts(), which calls ListByResourceGroup() in the Azure Go SDK. Each account returns its name, location, and SKU tier. Once a matching account is identified, its access key is retrieved via getStorageAccesskey(), which calls the Azure SDK's ListKeys().
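
As an illustration, here is a minimal Go sketch of that matching flow. The accountInfo and accountLister types are simplified stand-ins for the Azure SDK clients that the real code wraps, so take the names and signatures as assumptions rather than the actual PR code.

package azure

import "fmt"

// accountInfo is a simplified stand-in for the fields the ARM storage client
// returns for each account.
type accountInfo struct {
	Name     string
	Location string
	SkuTier  string
}

// accountLister abstracts the two SDK calls mentioned above; in the real code
// the ARM AccountsClient sits behind these methods.
type accountLister interface {
	ListByResourceGroup(group string) ([]accountInfo, error)
	ListKeys(group, account string) (string, error)
}

// findStorageAccount returns the name and access key of the first account in
// the resource group that matches the requested SKU tier and location,
// mirroring what getStorageAccounts() and getStorageAccesskey() do.
func findStorageAccount(c accountLister, group, skuTier, location string) (string, string, error) {
	accounts, err := c.ListByResourceGroup(group)
	if err != nil {
		return "", "", err
	}
	for _, a := range accounts {
		if a.SkuTier != skuTier || a.Location != location {
			continue
		}
		key, err := c.ListKeys(group, a.Name)
		if err != nil {
			return "", "", err
		}
		return a.Name, key, nil
	}
	return "", "", fmt.Errorf("no storage account with SKU %q in %q", skuTier, location)
}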

Creating a VHD blob must use the classic Storage API, which requires the account name and access key. createVhdBlob takes the account name and key and creates a VHD blob in the account's vhds Container, using the Azure SDK's PutPageBlob() method. Once the Page Blob is created, a VHD footer must be written at the end of the blob. This is currently accomplished by my forked go-vhd, which is also being upstreamed. The VHD footer is appended to the blob by calling the SDK's PutPage() method.
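
A sketch of the blob-creation step, in the same spirit: pageBlobWriter is an assumed abstraction over the classic blob client, and makeVhdFooter stands in for the footer generation that go-vhd performs.

package azure

// pageBlobWriter abstracts the two classic-storage calls used below; the real
// Azure SDK blob client is assumed to be wrapped behind it.
type pageBlobWriter interface {
	PutPageBlob(container, name string, sizeBytes int64) error
	PutPage(container, name string, start, end int64, data []byte) error
}

// A fixed VHD carries a 512-byte footer at the very end of the image.
const vhdFooterSize = 512

// createVhdBlob allocates a page blob large enough for the disk plus the
// footer, then writes the footer into the last 512 bytes of the blob.
// diskSizeBytes is assumed to be 512-byte aligned, since page blobs operate
// on 512-byte pages; makeVhdFooter stands in for go-vhd's footer generation.
func createVhdBlob(w pageBlobWriter, container, name string, diskSizeBytes int64,
	makeVhdFooter func(diskSizeBytes int64) []byte) error {
	total := diskSizeBytes + vhdFooterSize
	if err := w.PutPageBlob(container, name, total); err != nil {
		return err
	}
	footer := makeVhdFooter(diskSizeBytes)
	return w.PutPage(container, name, total-vhdFooterSize, total-1, footer)
}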

Now that the Page Blob is created, it can be used as an Azure VM's Data Disk.

A prototype of Data Disk dynamic provisioning on Kubernetes can be found at my branch. A quick demo follows.

First, create a Storage Class:

kind: StorageClass
apiVersion: extensions/v1beta1
metadata:
  name: slow
# Azure Disk provisioner
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Standard_LRS
  location: eastus

Then create a Persistent Volume Claim like this:
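
The claim asks the slow class for 3Gi. A minimal manifest along these lines should work; the beta storage-class annotation was how a class was selected at the time, so treat the exact keys as an approximation of the prototype.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
  annotations:
    volume.beta.kubernetes.io/storage-class: slow
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi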

Once created, it should look like this:

# _output/bin/kubectl describe pvc
Name: claim1
Namespace: default
Status: Bound
Volume: pvc-37ff3ec3-5ceb-11e6-88a3-000d3a12e034
Labels: <none>
Capacity: 3Gi
Access Modes: RWO
No events.

Create a Pod that uses the claim:

apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: nginx
        volumeMounts:
        - mountPath: /exports
          name: mypvc
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: claim1


The Pod should run with the dynamically provisioned Data Disk.

Azure Block Storage Coming to Kubernetes

Microsoft Azure offers both file (SMB) and block (VHD data disk) storage. Kubernetes already supports Azure file storage, and Azure became the latest cloud provider in Kubernetes 1.4. As a follow-up, development of Azure block storage support has also started, and a preliminary release is available.

Here is a quick tutorial on using a Data Disk in a Pod.

First, create a cloud config file (e.g. /etc/cloud.conf) and fill in your Azure credentials in the following format:

 "aadClientID" : 
 "aadClientSecret" : 
 "subscriptionID" : 
 "tenantID" : 

Use this cloud config file and tell the Kubernetes apiserver, controller-manager, and kubelet to use it by adding the option --cloud-config=/etc/cloud.conf.
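
For example, assuming the components run directly on the host, the kubelet invocation would look roughly like this; the apiserver and controller-manager take the same two flags, and the Azure cloud provider is selected with --cloud-provider.

# select the Azure cloud provider and point it at the config file
kubelet --cloud-provider=azure --cloud-config=/etc/cloud.conf ...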

Second, log in to your Azure portal and create some VHDs. This step will not be needed once dynamic provisioning is supported.

Then get the VHDs' names and URIs and use them in your Pod like the following:

apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
      - name: web
        image: nginx
        volumeMounts:
        - name: disk1
          mountPath: "/usr/share/nginx/html"
        - name: disk2
          mountPath: "/mnt"
      volumes:
      - name: disk1
        # volume type and field names follow the upstream Azure Disk volume plugin
        azureDisk:
          diskName: test7.vhd
          diskURI: <URI of the test7.vhd blob>
      - name: disk2
        azureDisk:
          diskName: test8.vhd
          diskURI: <URI of the test8.vhd blob>

Once the Pod is created, you should see mount output similar to the following:

/dev/sdd on /var/lib/kubelet/plugins/ type ext4 (rw,relatime,seclabel,data=ordered)
/dev/sdd on /var/lib/kubelet/pods/1ddf7491-57f9-11e6-94cd-000d3a12e034/volumes/ type ext4 (rw,relatime,seclabel,data=ordered)
/dev/sdc on /var/lib/kubelet/plugins/ type ext4 (rw,relatime,seclabel,data=ordered)
/dev/sdc on /var/lib/kubelet/pods/1ddf7491-57f9-11e6-94cd-000d3a12e034/volumes/ type ext4 (rw,relatime,seclabel,data=ordered)


Run a Single-Node Kubernetes Cluster on OpenStack

Simple HOWTOs for running Kubernetes on OpenStack are surprisingly lacking, so I cooked one up.

While there is tooling in Kubernetes that can (supposedly) spin up a full Kubernetes cluster on OpenStack, I find the easiest and quickest way is to use the local cluster script in the Kubernetes source tree.

First, spin up a Nova instance on OpenStack and make sure docker, golang, etcd, and openssl are installed.

Then follow the instructions from OpenStack to get the RC file:

“Download and source the OpenStack RC file

  1. Log in to the dashboard and from the drop-down list select the project for which you want to download the OpenStack RC file.

  2. On the Project tab, open the Compute tab and click Access & Security.

  3. On the API Access tab, click Download OpenStack RC File and save the file. The filename will be of the form PROJECT-openrc.sh, where PROJECT is the name of the project for which you downloaded the file.

  4. Copy the file to the computer from which you want to run OpenStack commands. “

Use the OpenStack RC file to create your OpenStack cloud config for Kubernetes in the following format:

# cat /etc/cloud.conf
[Global]
auth-url =
username =
password =
tenant-name =
region =

Then clone the Kubernetes source tree and apply my patch from PR 25750 (if it has not been merged yet).
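
One way to apply the patch is to fetch the PR as a local branch, since GitHub exposes pull requests as fetchable refs:

git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
# fetch PR 25750 into a local branch and switch to it
git fetch origin pull/25750/head:pr-25750
git checkout pr-25750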

Then you can spin up a local cluster from the Kubernetes source tree using the following command:

# find the Nova instance name and use it to override the hostname
ALLOW_PRIVILEGED=true CLOUD_PROVIDER=openstack CLOUD_CONFIG=/etc/cloud.conf HOSTNAME_OVERRIDE="rootfs-dev" hack/