Brutal ssh attacks experienced

My VM running on one of the public clouds witnessed brutal ssh attacks. During its mere 12 days of uptime, a grand total of 101,329 failed ssh login attempts were logged.

Below is a sample of the attacking IPs and their frequencies.


# of Attacks IP
10 180.101.185.9
10 58.20.125.166
10 82.85.187.101
11 112.83.192.246
11 13.93.146.130
11 40.69.27.198
14 101.4.137.29
14 162.209.75.137
14 61.178.42.242
19 14.170.249.105
20 220.181.167.188
22 185.110.132.201
23 155.94.142.13
23 173.242.121.52
24 155.94.163.14
25 154.16.199.47
28 184.106.69.36
29 185.2.31.10
29 58.213.69.180
30 163.172.201.33
32 185.110.132.89
37 211.144.95.195
42 117.135.131.60
48 91.224.160.106
60 91.224.160.131
72 80.148.4.58
72 91.224.160.108
73 91.201.236.155
84 91.224.160.184
90 222.186.21.36
146 180.97.239.9
146 91.201.236.158
9465 218.65.30.56
15885 116.31.116.18
17163 218.65.30.4
17164 182.100.67.173
17164 218.65.30.152
23006 116.31.116.11

Create a VHD blob using Azure Go SDK

There are no good tutorials on how to create a VHD blob that can serve as a VM’s Data Disk on Azure, so I am writing some notes on it, based on my recent Kubernetes PR 30091, part of the effort to support Azure Data Disk dynamic provisioning.

This work uses both of Azure’s ASM and ARM modes: ARM mode to obtain the Storage Account name, key, Sku, and location; ASM mode to list, create, and delete Azure Page Blobs.

When a request comes in for an Azure Storage account with Sku Standard_LRS in location eastus, all Storage accounts in the resource group are listed. This is accomplished by getStorageAccounts(), which calls ListByResourceGroup() in the Azure Go SDK and records each account’s name, location, and Sku. Once a matching account is identified, its access key is retrieved via getStorageAccesskey(), which calls the SDK’s ListKeys().
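Condensed from the PR, the two helpers boil down to roughly the following sketch. It assumes the 2016-era arm/storage package of the Azure Go SDK; the accountWithLocation type is mine, and the result field names vary across SDK versions (older releases return Key1/Key2 from ListKeys instead of a Keys slice).

package azureutil

// Rough sketch only: field and struct names follow my recollection of the
// 2016-era SDK and may differ in other releases.

import (
	"fmt"

	"github.com/Azure/azure-sdk-for-go/arm/storage"
)

// accountWithLocation captures what is needed to pick a matching account.
type accountWithLocation struct {
	Name, StorageType, Location string
}

// getStorageAccounts lists every Storage account in the resource group and
// records its name, Sku, and location.
func getStorageAccounts(client storage.AccountsClient, resourceGroup string) ([]accountWithLocation, error) {
	result, err := client.ListByResourceGroup(resourceGroup)
	if err != nil {
		return nil, err
	}
	if result.Value == nil {
		return nil, fmt.Errorf("no storage accounts found in resource group %s", resourceGroup)
	}
	accounts := []accountWithLocation{}
	for _, acct := range *result.Value {
		if acct.Name == nil || acct.Location == nil || acct.Sku == nil {
			continue
		}
		accounts = append(accounts, accountWithLocation{
			Name:        *acct.Name,
			StorageType: string(acct.Sku.Name), // e.g. Standard_LRS
			Location:    *acct.Location,        // e.g. eastus
		})
	}
	return accounts, nil
}

// getStorageAccesskey returns the first non-empty access key of the account.
func getStorageAccesskey(client storage.AccountsClient, resourceGroup, account string) (string, error) {
	result, err := client.ListKeys(resourceGroup, account)
	if err != nil {
		return "", err
	}
	if result.Keys != nil {
		for _, k := range *result.Keys {
			if k.Value != nil && *k.Value != "" {
				return *k.Value, nil
			}
		}
	}
	return "", fmt.Errorf("no valid access key found for account %s", account)
}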

Creating the VHD blob must use the classic Storage API, which requires the account name and access key. createVhdBlob() takes the account name and key and creates a VHD blob in the account’s vhds Container, using the SDK’s PutPageBlob() method. Once the Page Blob is created, a VHD footer must be written at the end of the blob. The footer is currently built with my forked go-vhd, which is also being upstreamed, and is appended to the blob by calling the SDK’s PutPage() method.
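createVhdBlob() itself is roughly the following sketch, assuming the classic storage package of that SDK generation and the CreateFixedHeader helper from the go-vhd fork; exact signatures may differ in other versions.

package azureutil

// Rough sketch based on the 2016-era classic storage package and go-vhd;
// exact signatures may differ in other versions.

import (
	"bytes"
	"encoding/binary"

	azs "github.com/Azure/azure-sdk-for-go/storage"
	"github.com/rubiojr/go-vhd/vhd" // CreateFixedHeader comes from the fork at github.com/rootfs/go-vhd
)

const (
	vhdContainerName = "vhds"
	oneGB            = int64(1024 * 1024 * 1024)
)

// createVhdBlob creates a fixed-size VHD Page Blob in the account's vhds
// container and writes the 512-byte VHD footer at the end of the blob.
func createVhdBlob(accountName, accountKey, name string, sizeGB int64) error {
	client, err := azs.NewBasicClient(accountName, accountKey)
	if err != nil {
		return err
	}
	blobClient := client.GetBlobService()

	// The Page Blob is the disk size plus room for the VHD footer.
	size := sizeGB * oneGB
	vhdSize := size + vhd.VHD_HEADER_SIZE
	if err := blobClient.PutPageBlob(vhdContainerName, name, vhdSize, nil); err != nil {
		return err
	}

	// Build a fixed-disk VHD footer and serialize it in big-endian order.
	footer := vhd.CreateFixedHeader(uint64(size), &vhd.VHDOptions{})
	buf := new(bytes.Buffer)
	if err := binary.Write(buf, binary.BigEndian, footer); err != nil {
		return err
	}

	// Append the footer as the last page of the blob.
	return blobClient.PutPage(vhdContainerName, name, size, vhdSize-1,
		azs.PageWriteTypeUpdate, buf.Bytes()[:vhd.VHD_HEADER_SIZE], nil)
}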

Now that the Page Blob is created, it can be used as an Azure VM’s Data Disk.

A prototype of data disk dynamic provisioning on Kubernetes can be found in my branch. A quick demo follows.

First, create a Storage Class:

kind: StorageClass
apiVersion: extensions/v1beta1
metadata:
 name: slow
provisioner: kubernetes.io/azure-disk
parameters:
 skuName: Standard_LRS
 location: eastus

Then create a Persistent Volume Claim that requests storage from the slow class.
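A minimal claim that matches the describe output below should work; the beta storage-class annotation shown here is my assumption about how the prototype selects the class.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
  annotations:
    # assumption: the class is selected via the beta annotation
    volume.beta.kubernetes.io/storage-class: slow
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi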

Once created, it should look like this:


# _output/bin/kubectl describe pvc
Name: claim1
Namespace: default
Status: Bound
Volume: pvc-37ff3ec3-5ceb-11e6-88a3-000d3a12e034
Labels: <none>
Capacity: 3Gi
Access Modes: RWO
No events.

Create a Pod (here via a ReplicationController) that uses the claim:


apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: nginx
        volumeMounts:
        - mountPath: /exports
          name: mypvc
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: claim1


The Pod should run with the dynamically provisioned Data Disk.

Azure Block Storage Coming to Kubernetes

Microsoft Azure offers both file (SMB) and block (VHD data disk) storage. Kubernetes already supports Azure file storage, and Azure becomes the latest cloud provider supported in Kubernetes 1.4. As a follow-up, development of Azure block storage support has also started; a preliminary implementation is at https://github.com/kubernetes/kubernetes/pull/29836

Here is a quick tutorial on using a data disk in a Pod.

First, create a cloud config file (e.g. /etc/cloud.conf) and fill in your Azure credentials in the following format:


{
 "aadClientID" : "...",
 "aadClientSecret" : "...",
 "subscriptionID" : "...",
 "tenantID" : "...",
 "resourceGroup": "..."
}

Point the Kubernetes apiserver, controller-manager, and kubelet at this cloud config file by adding the option --cloud-config=/etc/cloud.conf to each of them.

Second, log in to your Azure portal and create some VHDs. This step will not be needed once dynamic provisioning is supported.

Then get each VHD’s name and URI and use them in your Pod like the following:


apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
      - name: web
        image: nginx
        volumeMounts:
        - name: disk1
          mountPath: "/usr/share/nginx/html"
        - name: disk2
          mountPath: "/mnt"
      volumes:
      - name: disk1
        azureDisk:
          diskName: test7.vhd
          diskURI: https://openshiftstoragede1802.blob.core.windows.net/vhds/test7.vhd
      - name: disk2
        azureDisk:
          diskName: test8.vhd
          diskURI: https://openshiftstoragede1802.blob.core.windows.net/vhds/test8.vhd

Once the Pod is created, you should expect mount output similar to the following:


/dev/sdd on /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/test7.vhd type ext4 (rw,relatime,seclabel,data=ordered)
/dev/sdd on /var/lib/kubelet/pods/1ddf7491-57f9-11e6-94cd-000d3a12e034/volumes/kubernetes.io~azure-disk/disk1 type ext4 (rw,relatime,seclabel,data=ordered)
/dev/sdc on /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/test8.vhd type ext4 (rw,relatime,seclabel,data=ordered)
/dev/sdc on /var/lib/kubelet/pods/1ddf7491-57f9-11e6-94cd-000d3a12e034/volumes/kubernetes.io~azure-disk/disk2 type ext4 (rw,relatime,seclabel,data=ordered)


Run Single Node Kubernetes Cluster on OpenStack

Simple HOWTOs on running Kubernetes on OpenStack are surprisingly hard to find, so I cooked one up.

While there is a kube-up.sh in Kubernetes that can (supposedly) spin up a Kubernetes cluster on OpenStack, I find the easiest and quickest way is to use local-up-cluster.sh in the Kubernetes source tree.

First, spin up a Nova instance on OpenStack and make sure docker, golang, etcd, and openssl are installed.

Then follow the instructions from OpenStack to get the RC file:

“Download and source the OpenStack RC file

  1. Log in to the dashboard and from the drop-down list select the project for which you want to download the OpenStack RC file.

  2. On the Project tab, open the Compute tab and click Access & Security.

  3. On the API Access tab, click Download OpenStack RC File and save the file. The filename will be of the form PROJECT-openrc.sh where PROJECT is the name of the project for which you downloaded the file.

  4. Copy the PROJECT-openrc.sh file to the computer from which you want to run OpenStack commands. “

Use the OpenStack RC file to create your OpenStack cloud config for Kubernetes in the following format:


# cat /etc/cloud.conf 
[Global] 
auth-url = 
username =
password =
tenant-name =
region =

Then clone the Kubernetes source tree and apply my patch from PR 25750 (if not yet merged).

Then you can spin up a local cluster under the Kubernetes source tree using the following command:

# find Nova instance name and override hostname
ALLOW_PRIVILEGED=true CLOUD_PROVIDER=openstack CLOUD_CONFIG=/etc/cloud.conf HOSTNAME_OVERRIDE="rootfs-dev" hack/local-up-cluster.sh


Start a Single Node Kubernetes Cluster on AWS EC2


# get a copy of kubernetes source

$ git clone https://github.com/rootfs/kubernetes; cd kubernetes

# put AWS access key id and secret in ~/.aws/credentials like the following
# ~/.aws/credentials 
#[default]
#aws_access_key_id = ......
#aws_secret_access_key = ....

# get the host name from EC2 management console and use host name as override
$ ALLOW_PRIVILEGED=true LOG_LEVEL=5 CLOUD_PROVIDER="aws" HOSTNAME_OVERRIDE="ip-172-18-14-238.ec2.internal" hack/local-up-cluster.sh

Run Azure CLI on RHEL 7

My usual bookkeeping.


yum install nodejs010-nodejs

source /opt/rh/nodejs010/enable

wget http://aka.ms/linux-azure-cli -O azure-cli.tgz

tar xzvf azure-cli.tgz

cd bin

npm install

# make sure azure account is available and follow the process to authenticate

./azure login

# should be ready to use azure cli now

./azure vm list

# switch to Azure Resource Manager (arm) mode

./azure config mode arm

Run Kubernetes End-to-end Volume Tests on CentOS

With a couple of fixes, Kubernetes can run volume e2e tests on a local CentOS cluster.

On Fedora/CentOS/RHEL, after a git clone of the latest Kubernetes source:

Start up a local cluster:

ALLOW_PRIVILEGED=true ALLOW_SECURITY_CONTEXT=true hack/local-up-cluster.sh

Run the volume e2e tests:

KUBERNETES_PROVIDER=centos KUBERNETES_CONFORMANCE_TEST=y hack/ginkgo-e2e.sh --ginkgo.focus=Volumes

That is it!

The volume e2e tests consist of tests for the volume plugins (NFS, Glusterfs, iSCSI, CephFS, Ceph RBD, OpenStack Cinder). Each test creates a containerized server and a client Pod whose mount path uses that volume type; the client expects to see a pre-created HTML file served from the volume. The Persistent Volumes test creates an NFS server, a Persistent Volume (PV) backed by NFS with the recycle policy, and a Persistent Volume Claim (PVC) that binds to the NFS PV. After the PVC is bound, it is immediately deleted and the NFS PV is recycled, deleting all the content on it.


More test cases are welcome!