Brutal SSH attacks experienced

My VM running on one of the public clouds witnessed brutal SSH attacks. During its mere 12 days of uptime, a grand total of 101,329 failed SSH login attempts were logged.

Below is a sample of the attacking IPs and their frequencies.


# of Attacks IP
10 180.101.185.9
10 58.20.125.166
10 82.85.187.101
11 112.83.192.246
11 13.93.146.130
11 40.69.27.198
14 101.4.137.29
14 162.209.75.137
14 61.178.42.242
19 14.170.249.105
20 220.181.167.188
22 185.110.132.201
23 155.94.142.13
23 173.242.121.52
24 155.94.163.14
25 154.16.199.47
28 184.106.69.36
29 185.2.31.10
29 58.213.69.180
30 163.172.201.33
32 185.110.132.89
37 211.144.95.195
42 117.135.131.60
48 91.224.160.106
60 91.224.160.131
72 80.148.4.58
72 91.224.160.108
73 91.201.236.155
84 91.224.160.184
90 222.186.21.36
146 180.97.239.9
146 91.201.236.158
9465 218.65.30.56
15885 116.31.116.18
17163 218.65.30.4
17164 182.100.67.173
17164 218.65.30.152
23006 116.31.116.11

Create a VHD blob using Azure Go SDK

There are no good tutorials on how to create a VHD blob that can serve as a VM's Data Disk on Azure, so I am writing some notes on it, based on my recent Kubernetes PR 30091, part of the effort to support Azure Data Disk dynamic provisioning.

This work uses both of Azure's ASM (classic) and ARM modes: ARM mode to obtain the Storage Account name, key, SKU, and location; ASM mode to list, create, and delete Azure Page Blobs.

When a request comes in to find an Azure Storage account with SKU Standard_LRS in location eastus, all Storage accounts in the resource group are listed. This is accomplished by getStorageAccounts(), which calls ListByResourceGroup() in the Azure Go SDK. Each account reports its name, location, and SKU. Once a matching account is identified, its access key is retrieved via getStorageAccesskey(), which calls the Azure SDK's ListKeys().
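To make that flow concrete, here is a minimal sketch against the circa-2016 arm/storage package. It is illustrative rather than the PR's verbatim code: findStorageAccount is a hypothetical helper, and field names such as Sku.Name and Key1 changed in later SDK versions.

import (
	"fmt"

	armstorage "github.com/Azure/azure-sdk-for-go/arm/storage"
)

// findStorageAccount lists all Storage accounts in the resource group and
// returns the name and a primary access key of the first account matching
// the requested SKU name and location.
func findStorageAccount(client armstorage.AccountsClient, resourceGroup, skuName, location string) (string, string, error) {
	// getStorageAccounts(): enumerate accounts in the resource group.
	accounts, err := client.ListByResourceGroup(resourceGroup)
	if err != nil {
		return "", "", err
	}
	for _, acct := range *accounts.Value {
		if string(acct.Sku.Name) == skuName && *acct.Location == location {
			// getStorageAccesskey(): fetch the access keys of the match.
			keys, err := client.ListKeys(resourceGroup, *acct.Name)
			if err != nil {
				return "", "", err
			}
			return *acct.Name, *keys.Key1, nil // later SDKs return a Keys slice instead
		}
	}
	return "", "", fmt.Errorf("no storage account with SKU %s in %s", skuName, location)
}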

Creating a VHD blob must use the classic Storage API, which requires the account name and an access key. createVhdBlob takes the account name and key and creates a VHD blob in the account's vhds container, using the Azure SDK's PutPageBlob() method. Once the Page Blob is created, a VHD footer must be written at the end of the blob. This is currently accomplished with my fork of go-vhd, which has also been upstreamed. The VHD footer is appended to the blob by calling the SDK's PutPage() method.
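Condensed, the blob creation looks roughly like the sketch below. It assumes the classic storage package and rubiojr/go-vhd as vendored at the time; createVhdBlob here is a simplified stand-in, not the PR's exact code.

import (
	"bytes"
	"encoding/binary"

	azstorage "github.com/Azure/azure-sdk-for-go/storage"
	"github.com/rubiojr/go-vhd/vhd"
)

const vhdFooterSize = 512 // a fixed VHD's footer occupies its last 512 bytes

func createVhdBlob(accountName, accountKey, containerName, blobName string, sizeGB int64) error {
	client, err := azstorage.NewBasicClient(accountName, accountKey)
	if err != nil {
		return err
	}
	blobSvc := client.GetBlobService()

	// The Page Blob must hold the requested disk size plus the VHD footer.
	size := sizeGB * 1024 * 1024 * 1024
	total := size + vhdFooterSize
	if err := blobSvc.PutPageBlob(containerName, blobName, total, nil); err != nil {
		return err
	}

	// Serialize a fixed-VHD footer (go-vhd names the struct "header") and
	// write it into the last 512 bytes of the blob via PutPage().
	footer := vhd.CreateFixedHeader(uint64(size), &vhd.VHDOptions{})
	buf := new(bytes.Buffer)
	if err := binary.Write(buf, binary.BigEndian, footer); err != nil {
		return err
	}
	return blobSvc.PutPage(containerName, blobName, size, total-1,
		azstorage.PageWriteTypeUpdate, buf.Bytes(), nil)
}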

Now that the Page Blob is created, it can be used as an Azure VM's Data Disk.

A prototype of data disk dynamic provisioning on Kubernetes can be found at my branch. A quick demo follows.

First create a StorageClass:

kind: StorageClass
apiVersion: extensions/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Standard_LRS
  location: eastus

Then create a Persistent Volume Claim like this:
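For example, a minimal claim that requests 3Gi from the slow class (using the beta storage-class annotation, which was the selection mechanism at the time):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
  annotations:
    volume.beta.kubernetes.io/storage-class: slow
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi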

Once created, it should look like this:


# _output/bin/kubectl describe pvc
Name: claim1
Namespace: default
Status: Bound
Volume: pvc-37ff3ec3-5ceb-11e6-88a3-000d3a12e034
Labels: <none>
Capacity: 3Gi
Access Modes: RWO
No events.

Create a Pod (via a ReplicationController) that uses the claim:


apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: nginx
        volumeMounts:
        - mountPath: /exports
          name: mypvc
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: claim1


The Pod should run with the dynamically provisioned Data Disk.

Azure Block Storage Coming to Kubernetes

Microsoft Azure offers both file (SMB) and block (VHD data disk) storage. Kubernetes already supports Azure file storage, and Azure becomes the latest cloud provider in Kubernetes 1.4. As a follow-up, development of Azure block storage support has also started. A preliminary release is at https://github.com/kubernetes/kubernetes/pull/29836

Here is a quick tutorial on using data disks in a Pod.

First, create a cloud config file (e.g. /etc/cloud.conf) and fill in your Azure credentials in the following format:


{
  "aadClientID": "<your AAD client ID>",
  "aadClientSecret": "<your AAD client secret>",
  "subscriptionID": "<your subscription ID>",
  "tenantID": "<your tenant ID>",
  "resourceGroup": "<your resource group>"
}

Use this cloud config file and tell the Kubernetes apiserver, controller-manager, and kubelet to use it by adding the options --cloud-provider=azure and --cloud-config=/etc/cloud.conf.
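For example, on the kubelet (the apiserver and controller-manager take the same two flags; your remaining flags are unchanged):

kubelet --cloud-provider=azure --cloud-config=/etc/cloud.conf ...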

Second, log in to your Azure portal and create some VHDs. This step will not be needed once dynamic provisioning is supported.

Then get each VHD's name and URI and use them in your Pod like the following:


apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
      - name: web
        image: nginx
        volumeMounts:
        - name: disk1
          mountPath: "/usr/share/nginx/html"
        - name: disk2
          mountPath: "/mnt"
      volumes:
      - name: disk1
        azureDisk:
          diskName: test7.vhd
          diskURI: https://openshiftstoragede1802.blob.core.windows.net/vhds/test7.vhd
      - name: disk2
        azureDisk:
          diskName: test8.vhd
          diskURI: https://openshiftstoragede1802.blob.core.windows.net/vhds/test8.vhd

Once the Pod is created, you should expect mount output similar to the following:


/dev/sdd on /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/test7.vhd type ext4 (rw,relatime,seclabel,data=ordered)
/dev/sdd on /var/lib/kubelet/pods/1ddf7491-57f9-11e6-94cd-000d3a12e034/volumes/kubernetes.io~azure-disk/disk1 type ext4 (rw,relatime,seclabel,data=ordered)
/dev/sdc on /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/test8.vhd type ext4 (rw,relatime,seclabel,data=ordered)
/dev/sdc on /var/lib/kubelet/pods/1ddf7491-57f9-11e6-94cd-000d3a12e034/volumes/kubernetes.io~azure-disk/disk2 type ext4 (rw,relatime,seclabel,data=ordered)