Create a VHD blob using Azure Go SDK

There are no good tutorials on how to create a VHD blob that can serve as a VM’s Data Disk on Azure, so here are some notes on it, based on my recent Kubernetes PR 30091, part of the effort to support Azure Data Disk dynamic provisioning.

This work uses both of Azure’s management modes: ARM mode to extract the Storage Account name, key, SKU tier, and location; ASM (classic) mode to list, create, and delete Azure Page Blobs.

When an inquiry comes in to find an Azure Storage account with, for example, SKU tier Standard_LRS and location eastus, all Storage accounts in the resource group are listed. This is accomplished by getStorageAccounts(), which calls ListByResourceGroup() in the Azure Go SDK and returns each account’s name, location, and SKU tier. Once a matching account is identified, its access key is retrieved via getStorageAccesskey(), which calls the SDK’s ListKeys().
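Condensed, those two steps look roughly like this. This is a sketch rather than the exact PR code: it assumes the circa-2016 arm/storage package that Kubernetes vendored at the time, and findStorageAccount/getStorageAccessKey are illustrative names; field and method shapes differ in later SDK releases.

package azureutil

import (
	"fmt"

	"github.com/Azure/azure-sdk-for-go/arm/storage"
)

// findStorageAccount scans a resource group for the first Storage account
// whose SKU name and location match the request.
func findStorageAccount(client storage.AccountsClient, resourceGroup, skuName, location string) (string, error) {
	result, err := client.ListByResourceGroup(resourceGroup)
	if err != nil {
		return "", err
	}
	for _, acct := range *result.Value {
		if acct.Sku != nil && string(acct.Sku.Name) == skuName &&
			acct.Location != nil && *acct.Location == location {
			return *acct.Name, nil
		}
	}
	return "", fmt.Errorf("no storage account with SKU %s in %s", skuName, location)
}

// getStorageAccessKey fetches an access key for the matched account.
func getStorageAccessKey(client storage.AccountsClient, resourceGroup, account string) (string, error) {
	keys, err := client.ListKeys(resourceGroup, account)
	if err != nil {
		return "", err
	}
	if keys.Keys != nil && len(*keys.Keys) > 0 {
		return *(*keys.Keys)[0].Value, nil
	}
	return "", fmt.Errorf("no keys returned for account %s", account)
}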

Creating a VHD blob must use the classic Storage API, which requires the account name and access key. createVhdBlob() takes the account name and key and creates a VHD blob in the account’s vhds container, using the SDK’s PutPageBlob() method. Once the Page Blob is created, a VHD footer must be written at the end of the blob. This is currently accomplished with my forked go-vhd, which is also being upstreamed. The VHD footer is appended to the blob by calling the SDK’s PutPage() method.
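Condensed, the blob-creation path looks roughly like this. Again a sketch under stated assumptions: the classic storage client of that SDK vintage and the forked go-vhd package, whose VHD_HEADER_SIZE constant is the 512-byte footer size.

package azureutil

import (
	"bytes"
	"encoding/binary"

	azs "github.com/Azure/azure-sdk-for-go/storage"
	"github.com/rubiojr/go-vhd/vhd"
)

const vhdContainerName = "vhds"

// createVhdBlob creates a fixed-size VHD Page Blob in the "vhds" container
// and writes the 512-byte VHD footer as the very last bytes of the blob.
func createVhdBlob(accountName, accountKey, name string, sizeGB int64) error {
	client, err := azs.NewBasicClient(accountName, accountKey)
	if err != nil {
		return err
	}
	blobClient := client.GetBlobService()

	// The blob must hold the disk data plus the trailing 512-byte footer.
	size := sizeGB * 1024 * 1024 * 1024
	vhdSize := size + vhd.VHD_HEADER_SIZE
	blobName := name + ".vhd"
	if err := blobClient.PutPageBlob(vhdContainerName, blobName, vhdSize, nil); err != nil {
		return err
	}

	// Serialize a fixed-disk footer and append it via PutPage.
	footer := vhd.CreateFixedHeader(uint64(size), &vhd.VHDOptions{})
	buf := new(bytes.Buffer)
	if err := binary.Write(buf, binary.BigEndian, footer); err != nil {
		return err
	}
	return blobClient.PutPage(vhdContainerName, blobName, size, vhdSize-1,
		azs.PageWriteTypeUpdate, buf.Bytes()[:vhd.VHD_HEADER_SIZE], nil)
}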

Now that the Page Blob is created, it can be used as an Azure VM’s Data Disk.

A prototype of Data Disk dynamic provisioning on Kubernetes can be found at my branch. A quick demo follows.

First, create a StorageClass:

kind: StorageClass
apiVersion: extensions/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Standard_LRS
  location: eastus
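
Create it with kubectl (the manifest file name here is just an example):

# _output/bin/kubectl create -f azure-storageclass.yaml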

Then create a PersistentVolumeClaim like this.
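The manifest below is a minimal claim consistent with the kubectl output that follows; note that in this era the storage class was selected via the beta annotation rather than a spec field.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
  annotations:
    volume.beta.kubernetes.io/storage-class: slow
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi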

Once created and bound, it should look like this:


# _output/bin/kubectl describe pvc
Name: claim1
Namespace: default
Status: Bound
Volume: pvc-37ff3ec3-5ceb-11e6-88a3-000d3a12e034
Labels: <none>
Capacity: 3Gi
Access Modes: RWO
No events.

Create a ReplicationController whose Pod uses the claim:


apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: nginx
        volumeMounts:
        - mountPath: /exports
          name: mypvc
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: claim1


The Pod should run with the dynamically provisioned Data Disk.

10 thoughts on “Create a VHD blob using Azure Go SDK”

  1. Great article; really helped me to get things working on Azure. One thing that might be worth adding is that you need to create a storage account AND a container named ‘vhds’ inside it for this to work.

    Before doing this I was getting error messages saying:

    ErrorCode=ContainerNotFound, ErrorMessage=The specified container does not exist.

    Looking at the code I found the constants at the top of azure_blob.go that indicated it was looking for a container named vhds. After creating that, my PVC created as expected.
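
    [Editor’s note: for reference, here is a minimal sketch of creating that container with the same classic storage client, assuming the circa-2016 SDK; ensureVhdsContainer is an illustrative helper name.]

    ```go
    package azureutil

    import azs "github.com/Azure/azure-sdk-for-go/storage"

    // ensureVhdsContainer creates the private "vhds" container if it is missing.
    func ensureVhdsContainer(accountName, accountKey string) error {
    	client, err := azs.NewBasicClient(accountName, accountKey)
    	if err != nil {
    		return err
    	}
    	_, err = client.GetBlobService().CreateContainerIfNotExists("vhds", azs.ContainerAccessTypePrivate)
    	return err
    }
    ```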



  2. Thanks for this article – I managed to set up the azure storage but when claiming it in my deployment I see the following Warnings on `kubectl describe pod ….`

    ```
    Events:
    FirstSeen LastSeen Count From SubObjectPath Type Reason Message
    --------- -------- ----- ---- ------------- -------- ------ -------
    5m 5m 1 {default-scheduler } Normal Scheduled Successfully assigned my-postgres-2871964231-39klx to k8s-agent-308af468-2
    3m 3m 1 {kubelet k8s-agent-308af468-2} Warning FailedMount Unable to mount volumes for pod "my-postgres-2871964231-39klx_default(9cc909ba-0990-11e7-83ac-000d3ab6802a)": timeout expired waiting for volumes to attach/mount for pod "default"/"my-postgres-2871964231-39klx". list of unattached/unmounted volumes=[postgresdata]
    3m 3m 1 {kubelet k8s-agent-308af468-2} Warning FailedSync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "default"/"my-postgres-2871964231-39klx". list of unattached/unmounted volumes=[postgresdata]
    1m 1m 1 {kubelet k8s-agent-308af468-2} spec.containers{postgres} Normal Pulling pulling image "postgres:9.6.2-alpine"
    1m 1m 1 {kubelet k8s-agent-308af468-2} spec.containers{postgres} Normal Pulled Successfully pulled image "postgres:9.6.2-alpine"
    1m 1m 1 {kubelet k8s-agent-308af468-2} spec.containers{postgres} Normal Created Created container with docker id 37170147e5f1; Security:[seccomp=unconfined]
    1m 1m 1 {kubelet k8s-agent-308af468-2} spec.containers{postgres} Normal Started Started container with docker id 37170147e5f1
    ```

    I saw that timeout in your PR https://github.com/kubernetes/kubernetes/pull/30091 as well. Is that to be expected, and does it work anyway? Will my data now be persisted?

    Testing that now as we speak but would be nice to have some more info on that.



