Azure

Cloud API Adaptor (CAA) on Azure

This documentation will walk you through setting up CAA (a.k.a. Peer Pods) on Azure Kubernetes Service (AKS). It explains how to deploy:

  • An AKS cluster with a single worker node
  • CAA on that Kubernetes cluster
  • An Nginx pod backed by a CAA pod VM

Note: Run the following commands from the root of this repository.

Pre-requisites

Azure login

Several steps require you to be logged into your Azure account via az login. Retrieve your “Subscription ID” and set your preferred region:

export AZURE_SUBSCRIPTION_ID=$(az account show --query id --output tsv)
export AZURE_REGION="eastus"

Resource group

Note: Skip this step if you already have a resource group you want to use, and export its name in the AZURE_RESOURCE_GROUP environment variable instead.

Create an Azure resource group by running the following command:

export AZURE_RESOURCE_GROUP="caa-rg-$(date '+%Y%m%d%H%M%S')"

az group create \
  --name "${AZURE_RESOURCE_GROUP}" \
  --location "${AZURE_REGION}"

Build CAA pod-VM image

Note: If you have made changes to the CAA code that affects the Pod-VM image and you want to deploy those changes then follow these instructions to build the pod-vm image.

An automated job builds the pod-vm image each night at 00:00 UTC. You can use that image by exporting the following environment variable:

export AZURE_IMAGE_ID="/CommunityGalleries/cocopodvm-d0e4f35f-5530-4b9c-8596-112487cdea85/Images/podvm_image0/Versions/$(date -v -1d "+%Y.%m.%d" 2>/dev/null || date -d "yesterday" "+%Y.%m.%d")"

The image version is in the format YYYY.MM.DD, so to use the latest image, use yesterday's date.
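The version component of that image ID can be checked on its own. This sketch prints yesterday's date in the expected format, falling back from BSD date (the -v flag) to GNU date (the -d flag), just as the export above does:

```shell
# Yesterday's date in YYYY.MM.DD form: BSD date uses -v, GNU date uses -d.
version=$(date -v -1d "+%Y.%m.%d" 2>/dev/null || date -d "yesterday" "+%Y.%m.%d")
echo "$version"
```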

Build CAA container image

Note: If you have made changes to the CAA code and you want to deploy those changes then follow these instructions to build the container image from the root of this repository.

If you would like to deploy the latest code from the default branch (main) of this repository, then export the following environment variable:

export registry="quay.io/confidential-containers"

Deploy Kubernetes using AKS

Make changes to the following environment variables as you see fit:

export CLUSTER_NAME="caa-$(date '+%Y%m%d%H%M%S')"
export AKS_WORKER_USER_NAME="azuser"
export SSH_KEY=~/.ssh/id_rsa.pub
export AKS_RG="${AZURE_RESOURCE_GROUP}-aks"

Note: Optionally, deploy the worker nodes into an existing Azure Virtual Network (VNet) and Subnet by adding the following flag: --vnet-subnet-id $SUBNET_ID.

Deploy AKS with a single worker node into the resource group you created earlier:

az aks create \
  --resource-group "${AZURE_RESOURCE_GROUP}" \
  --node-resource-group "${AKS_RG}" \
  --name "${CLUSTER_NAME}" \
  --enable-oidc-issuer \
  --enable-workload-identity \
  --location "${AZURE_REGION}" \
  --node-count 1 \
  --node-vm-size Standard_F4s_v2 \
  --nodepool-labels node.kubernetes.io/worker= \
  --ssh-key-value "${SSH_KEY}" \
  --admin-username "${AKS_WORKER_USER_NAME}" \
  --os-sku Ubuntu

Download kubeconfig locally to access the cluster using kubectl:

az aks get-credentials \
  --resource-group "${AZURE_RESOURCE_GROUP}" \
  --name "${CLUSTER_NAME}"
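As a quick sanity check, you can confirm that the downloaded kubeconfig works and the worker node is Ready. This is an optional verification step; the snippet assumes kubectl is installed and skips silently if it isn't:

```shell
# List cluster nodes; the single worker should report STATUS Ready.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get nodes -o wide
fi
```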

Deploy CAA

Note: If you are using the Calico Container Network Interface (CNI) on a different Kubernetes cluster, then configure Virtual Extensible LAN (VXLAN) encapsulation for all inter-workload traffic.

User assigned identity and federated credentials

CAA needs privileges to talk to the Azure API. This privilege is granted to CAA by associating a workload identity with the CAA service account. This workload identity (a.k.a. user-assigned identity) is given permissions to create VMs, fetch images, and join networks in the next step.

Note: If you use an existing AKS cluster, it might need to be configured to support workload identity and OpenID Connect (OIDC); please refer to the instructions in this guide.

Start by creating an identity for CAA:

export AZURE_WORKLOAD_IDENTITY_NAME="caa-identity"

az identity create \
  --name "${AZURE_WORKLOAD_IDENTITY_NAME}" \
  --resource-group "${AZURE_RESOURCE_GROUP}" \
  --location "${AZURE_REGION}"

export USER_ASSIGNED_CLIENT_ID="$(az identity show \
  --resource-group "${AZURE_RESOURCE_GROUP}" \
  --name "${AZURE_WORKLOAD_IDENTITY_NAME}" \
  --query 'clientId' \
  -otsv)"

Annotate the CAA Service Account with the workload identity’s CLIENT_ID and make the CAA DaemonSet use workload identity for authentication:

cat <<EOF > install/overlays/azure/workload-identity.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cloud-api-adaptor-daemonset
  namespace: confidential-containers-system
spec:
  template:
    metadata:
      labels:
        azure.workload.identity/use: "true"
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cloud-api-adaptor
  namespace: confidential-containers-system
  annotations:
    azure.workload.identity/client-id: "$USER_ASSIGNED_CLIENT_ID"
EOF

AKS resource group permissions

For CAA to be able to manage VMs, assign the identity the Virtual Machine Contributor and Network Contributor roles: privileges to spawn VMs in $AZURE_RESOURCE_GROUP and to attach to a VNet in $AKS_RG.

az role assignment create \
  --role "Virtual Machine Contributor" \
  --assignee "$USER_ASSIGNED_CLIENT_ID" \
  --scope "/subscriptions/${AZURE_SUBSCRIPTION_ID}/resourcegroups/${AZURE_RESOURCE_GROUP}"

az role assignment create \
  --role "Reader" \
  --assignee "$USER_ASSIGNED_CLIENT_ID" \
  --scope "/subscriptions/${AZURE_SUBSCRIPTION_ID}/resourcegroups/${AZURE_RESOURCE_GROUP}"

az role assignment create \
  --role "Network Contributor" \
  --assignee "$USER_ASSIGNED_CLIENT_ID" \
  --scope "/subscriptions/${AZURE_SUBSCRIPTION_ID}/resourcegroups/${AKS_RG}"
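To double-check that the three assignments took effect, you can list the roles granted to the identity. This is an optional verification step (the exact output shape may vary with your az version); the snippet skips silently if az is not installed:

```shell
# List all role assignments held by the CAA identity across both resource groups.
if command -v az >/dev/null 2>&1; then
  az role assignment list \
    --assignee "$USER_ASSIGNED_CLIENT_ID" \
    --query "[].{role:roleDefinitionName,scope:scope}" \
    --output table
fi
```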

Create the federated credential for the CAA ServiceAccount using the OIDC endpoint from the AKS cluster:

export AKS_OIDC_ISSUER="$(az aks show \
  --name "$CLUSTER_NAME" \
  --resource-group "${AZURE_RESOURCE_GROUP}" \
  --query "oidcIssuerProfile.issuerUrl" \
  -otsv)"

az identity federated-credential create \
  --name caa-fedcred \
  --identity-name caa-identity \
  --resource-group "${AZURE_RESOURCE_GROUP}" \
  --issuer "${AKS_OIDC_ISSUER}" \
  --subject system:serviceaccount:confidential-containers-system:cloud-api-adaptor \
  --audience api://AzureADTokenExchange
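You can verify the federated credential was registered against the identity; the listed entry should show the AKS OIDC issuer URL and the CAA service account subject. This optional check skips silently if az is not installed:

```shell
# Show federated credentials attached to the CAA identity.
if command -v az >/dev/null 2>&1; then
  az identity federated-credential list \
    --identity-name "${AZURE_WORKLOAD_IDENTITY_NAME}" \
    --resource-group "${AZURE_RESOURCE_GROUP}" \
    --output table
fi
```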

AKS subnet ID

Fetch the name of the VNet that AKS created automatically:

export AZURE_VNET_NAME=$(az network vnet list \
  --resource-group "${AKS_RG}" \
  --query "[0].name" \
  --output tsv)

Export the subnet ID to be used for the CAA DaemonSet deployment:

export AZURE_SUBNET_ID=$(az network vnet subnet list \
  --resource-group "${AKS_RG}" \
  --vnet-name "${AZURE_VNET_NAME}" \
  --query "[0].id" \
  --output tsv)

Populate the kustomization.yaml file

Replace the values as needed for the following environment variables:

Note: For regular VMs use Standard_D2as_v5 for the AZURE_INSTANCE_SIZE.

Run the following command to update the kustomization.yaml file:

cat <<EOF > install/overlays/azure/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../yamls
images:
- name: cloud-api-adaptor
  newName: "${registry}/cloud-api-adaptor"
  newTag: latest
generatorOptions:
  disableNameSuffixHash: true
configMapGenerator:
- name: peer-pods-cm
  namespace: confidential-containers-system
  literals:
  - CLOUD_PROVIDER="azure"
  - AZURE_SUBSCRIPTION_ID="${AZURE_SUBSCRIPTION_ID}"
  - AZURE_REGION="${AZURE_REGION}"
  - AZURE_INSTANCE_SIZE="Standard_DC2as_v5"
  - AZURE_RESOURCE_GROUP="${AZURE_RESOURCE_GROUP}"
  - AZURE_SUBNET_ID="${AZURE_SUBNET_ID}"
  - AZURE_IMAGE_ID="${AZURE_IMAGE_ID}"
secretGenerator:
- name: peer-pods-secret
  namespace: confidential-containers-system
  literals: []
- name: ssh-key-secret
  namespace: confidential-containers-system
  files:
  - id_rsa.pub
patchesStrategicMerge:
- workload-identity.yaml
EOF

Copy the SSH public key so it is accessible to the kustomization.yaml file:

cp $SSH_KEY install/overlays/azure/id_rsa.pub
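Before deploying, you can render the overlay locally to catch YAML mistakes in the files generated above. This optional preview assumes kubectl is installed and skips silently otherwise:

```shell
# Render the Azure overlay without applying it; errors here mean the
# kustomization.yaml or workload-identity.yaml needs fixing.
if command -v kubectl >/dev/null 2>&1; then
  kubectl kustomize install/overlays/azure | head -n 40
fi
```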

Deploy CAA on the Kubernetes cluster

Run the following command to deploy CAA:

CLOUD_PROVIDER=azure make deploy

Generic CAA deployment instructions are also described here.
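After the deploy finishes, the CAA DaemonSet and the rest of the operator components should come up in the confidential-containers-system namespace. This optional check assumes kubectl is installed and skips silently otherwise:

```shell
# All pods in the CAA namespace should eventually report Running.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods -n confidential-containers-system
fi
```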

Run sample application

Ensure runtimeclass is present

Verify that the runtimeclass is created after deploying CAA:

kubectl get runtimeclass

Once you can see a runtimeclass named kata-remote, you can be sure that the deployment was successful. A successful deployment will look like this:

$ kubectl get runtimeclass
NAME          HANDLER       AGE
kata-remote   kata-remote   7m18s

Deploy workload

Create an nginx deployment:

echo '
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      runtimeClassName: kata-remote
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        imagePullPolicy: Always
' | kubectl apply -f -

Ensure that the pod is up and running:

kubectl get pods -n default
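Booting a pod VM takes longer than starting a regular container, so it can help to wait explicitly rather than polling by hand. This optional snippet assumes kubectl is installed and skips silently otherwise:

```shell
# Block until the nginx pod reports Ready, or fail after 5 minutes.
if command -v kubectl >/dev/null 2>&1; then
  kubectl wait --for=condition=Ready pod -l app=nginx -n default --timeout=300s
fi
```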

You can verify that the peer-pod-VM was created by running the following command:

az vm list \
  --resource-group "${AZURE_RESOURCE_GROUP}" \
  --output table

Here you should see the VM associated with the nginx pod. If you run into problems, check the troubleshooting guide here.

Cleanup

If you wish to clean up the whole set up, you can delete the resource group by running the following command:

az group delete \
  --name "${AZURE_RESOURCE_GROUP}" \
  --yes --no-wait

1 - Pod-VM image for Azure

This documentation will walk you through building the pod VM image for Azure.

Note: Run the following commands from the directory azure/image.

Pre-requisites

Azure login

The image build will use your local credentials, so make sure you have logged into your account via az login. Retrieve your Subscription ID and set your preferred region:

export AZURE_SUBSCRIPTION_ID=$(az account show --query id --output tsv)
export AZURE_REGION="eastus"

Resource group

Note: Skip this step if you already have a resource group you want to use, and export its name in the AZURE_RESOURCE_GROUP environment variable instead.

Create an Azure resource group by running the following command:

export AZURE_RESOURCE_GROUP="caa-rg-$(date '+%Y%m%d%H%M%S')"

az group create \
  --name "${AZURE_RESOURCE_GROUP}" \
  --location "${AZURE_REGION}"

Create a shared image gallery:

export GALLERY_NAME="caaubntcvmsGallery"
az sig create \
  --gallery-name "${GALLERY_NAME}" \
  --resource-group "${AZURE_RESOURCE_GROUP}" \
  --location "${AZURE_REGION}"

Create the “Image Definition” by running the following command:

Note: The flag --features SecurityType=ConfidentialVmSupported allows you to upload a custom image and boot it up as a Confidential Virtual Machine (CVM).

export GALLERY_IMAGE_DEF_NAME="cc-image"
az sig image-definition create \
  --resource-group "${AZURE_RESOURCE_GROUP}" \
  --gallery-name "${GALLERY_NAME}" \
  --gallery-image-definition "${GALLERY_IMAGE_DEF_NAME}" \
  --publisher GreatPublisher \
  --offer GreatOffer \
  --sku GreatSku \
  --os-type "Linux" \
  --os-state "Generalized" \
  --hyper-v-generation "V2" \
  --location "${AZURE_REGION}" \
  --architecture "x64" \
  --features SecurityType=ConfidentialVmSupported

Build pod-VM image

The Pod-VM image can be built in three ways:

  • Customize an existing marketplace image
  • Customize an existing marketplace image with pre-built binaries
  • Convert and upload a pre-built QCOW2 image

Customize an existing marketplace image

  • Install packer by following these instructions.

  • Create a custom Azure VM image based on Ubuntu 22.04, adding kata-agent, agent-protocol-forwarder, and other dependencies for Cloud API Adaptor (CAA):

export PKR_VAR_resource_group="${AZURE_RESOURCE_GROUP}"
export PKR_VAR_location="${AZURE_REGION}"
export PKR_VAR_subscription_id="${AZURE_SUBSCRIPTION_ID}"
export PKR_VAR_use_azure_cli_auth=true
export PKR_VAR_az_gallery_name="${GALLERY_NAME}"
export PKR_VAR_az_gallery_image_name="${GALLERY_IMAGE_DEF_NAME}"
export PKR_VAR_az_gallery_image_version="0.0.1"
export PKR_VAR_offer=0001-com-ubuntu-confidential-vm-jammy
export PKR_VAR_sku=22_04-lts-cvm

export AA_KBC="cc_kbc_az_snp_vtpm"
export CLOUD_PROVIDER=azure
PODVM_DISTRO=ubuntu make image

Note: If you want to disable cloud config then export DISABLE_CLOUD_CONFIG=true before building the image.

Use the ManagedImageSharedImageGalleryId field from the output of the above command to populate the following environment variable; it’s used while deploying cloud-api-adaptor:

# e.g. format: /subscriptions/.../resourceGroups/.../providers/Microsoft.Compute/galleries/.../images/.../versions/../
export AZURE_IMAGE_ID="/subscriptions/${AZURE_SUBSCRIPTION_ID}/resourceGroups/${AZURE_RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/${GALLERY_NAME}/images/${GALLERY_IMAGE_DEF_NAME}/versions/${PKR_VAR_az_gallery_image_version}"

Customize an image using prebuilt binaries via Docker

docker build -t azure-podvm-builder .
docker run --rm \
  -v "$HOME/.azure:/root/.azure" \
  -e AZURE_SUBSCRIPTION_ID \
  -e AZURE_RESOURCE_GROUP \
  -e GALLERY_NAME \
  -e GALLERY_IMAGE_DEF_NAME \
  azure-podvm-builder

If you want to use a different base image, you’ll need to export the PUBLISHER, OFFER, and SKU environment variables.

Sometimes using a marketplace image requires accepting a licensing agreement and using a published plan; the following link provides more detail. For example, using the CentOS 8.5 image from the “eurolinux” publisher requires a plan and a license agreement.

You’ll need to first get the Uniform Resource Name (URN):

az vm image list \
  --location ${AZURE_REGION} \
  --publisher eurolinuxspzoo1620639373013 \
  --offer centos-8-5-free \
  --sku centos-8-5-free \
  --all \
  --output table

Then you’ll need to accept the agreement:

az vm image terms accept \
    --urn eurolinuxspzoo1620639373013:centos-8-5-free:centos-8-5-free:8.5.5

Then you can use the following command line to build the image:

docker run --rm \
  -v "$HOME/.azure:/root/.azure" \
  -e AZURE_SUBSCRIPTION_ID \
  -e AZURE_RESOURCE_GROUP  \
  -e PUBLISHER=eurolinuxspzoo1620639373013 \
  -e SKU=centos-8-5-free \
  -e OFFER=centos-8-5-free \
  -e PLAN_NAME=centos-8-5-free \
  -e PLAN_PRODUCT=centos-8-5-free \
  -e PLAN_PUBLISHER=eurolinuxspzoo1620639373013 \
  -e PODVM_DISTRO=centos \
  azure-podvm-builder

Another example, building a Red Hat Enterprise Linux (RHEL) based image:

docker run --rm \
  -v "$HOME/.azure:/root/.azure" \
  -e AZURE_SUBSCRIPTION_ID \
  -e AZURE_RESOURCE_GROUP  \
  -e PUBLISHER=RedHat \
  -e SKU=9-lvm \
  -e OFFER=RHEL \
  -e PODVM_DISTRO=rhel \
  azure-podvm-builder

Using a pre-created QCOW2 image

quay.io/confidential-containers hosts pre-created pod-vm images as container images.

  • Download the QCOW2 image:

mkdir -p qcow2-img && cd qcow2-img

export QCOW2_IMAGE="quay.io/confidential-containers/podvm-generic-ubuntu-amd64:latest"
curl -LO https://raw.githubusercontent.com/confidential-containers/cloud-api-adaptor/staging/podvm/hack/download-image.sh

bash download-image.sh $QCOW2_IMAGE . -o podvm.qcow2

  • Convert the QCOW2 image to Virtual Hard Disk (VHD) format. You’ll need the qemu-img tool for the conversion:

qemu-img convert -O vpc -o subformat=fixed,force_size podvm.qcow2 podvm.vhd
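Before uploading, you can optionally inspect the converted disk; Azure expects a fixed-size VHD, which qemu-img reports as the vpc format. The snippet assumes qemu-img is installed and skips silently otherwise:

```shell
# Show format and virtual size of the converted disk; expect "file format: vpc".
if command -v qemu-img >/dev/null 2>&1; then
  qemu-img info podvm.vhd
fi
```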
  • Create a storage account if none exists; otherwise, use the existing storage account:

export AZURE_STORAGE_ACCOUNT=cocosa

az storage account create \
    --name $AZURE_STORAGE_ACCOUNT \
    --resource-group $AZURE_RESOURCE_GROUP \
    --location $AZURE_REGION \
    --sku Standard_ZRS \
    --encryption-services blob
  • Create a storage container if none exists; otherwise, use the existing storage container:

export AZURE_STORAGE_CONTAINER=vhd

az storage container create \
    --account-name $AZURE_STORAGE_ACCOUNT \
    --name $AZURE_STORAGE_CONTAINER \
    --auth-mode login
  • Get the storage key:

AZURE_STORAGE_KEY=$(az storage account keys list \
    --resource-group $AZURE_RESOURCE_GROUP \
    --account-name $AZURE_STORAGE_ACCOUNT \
    --query "[?keyName=='key1'].{Value:value}" \
    --output tsv)

echo $AZURE_STORAGE_KEY

  • Upload the VHD file to Azure Storage:

az storage blob upload \
    --container-name $AZURE_STORAGE_CONTAINER \
    --name podvm.vhd \
    --file podvm.vhd
  • Get the VHD URI:

AZURE_STORAGE_EP=$(az storage account list \
    -g $AZURE_RESOURCE_GROUP \
    --query "[].{uri:primaryEndpoints.blob} | [? contains(uri, '$AZURE_STORAGE_ACCOUNT')]" \
    --output tsv)

echo $AZURE_STORAGE_EP

export VHD_URI="${AZURE_STORAGE_EP}${AZURE_STORAGE_CONTAINER}/podvm.vhd"
  • Create the Azure VM image version:

az sig image-version create \
    --resource-group $AZURE_RESOURCE_GROUP \
    --gallery-name $GALLERY_NAME \
    --gallery-image-definition $GALLERY_IMAGE_DEF_NAME \
    --gallery-image-version 0.0.1 \
    --target-regions $AZURE_REGION \
    --os-vhd-uri "$VHD_URI" \
    --os-vhd-storage-account $AZURE_STORAGE_ACCOUNT

On success, the command will output the image ID. Set this image ID as the value of AZURE_IMAGE_ID in the peer-pods-cm ConfigMap.

You can also use the following command to retrieve the image id:

AZURE_IMAGE_ID=$(az sig image-version list \
    --resource-group $AZURE_RESOURCE_GROUP \
    --gallery-name $GALLERY_NAME \
    --gallery-image-definition $GALLERY_IMAGE_DEF_NAME \
    --query "[].{Id: id}" \
    --output tsv)

echo $AZURE_IMAGE_ID