
Setting up cloud ingest volumes
Now we’re ready to configure permissions on the Azure Storage Account so that the Edge Volume provider can upload data to the blob container.
Official Documentation
You can use the script below to get the extension identity and then assign the necessary role on the storage account:
```shell
RESOURCE_GROUP_NAME="YOUR_RESOURCE_GROUP_NAME"
CLUSTER_NAME="YOUR_CLUSTER_NAME"
export EXTENSION_TYPE=${1:-"microsoft.arc.containerstorage"}

# Look up the managed identity of the Arc container storage extension
EXTENSION_IDENTITY_PRINCIPAL_ID=$(az k8s-extension list \
  --cluster-name ${CLUSTER_NAME} \
  --resource-group ${RESOURCE_GROUP_NAME} \
  --cluster-type connectedClusters \
  | jq --arg extType ${EXTENSION_TYPE} 'map(select(.extensionType == $extType)) | .[] | .identity.principalId' -r)

STORAGE_ACCOUNT_NAME="YOUR_STORAGE_ACCOUNT_NAME"
STORAGE_ACCOUNT_RESOURCE_GROUP="YOUR_STORAGE_ACCOUNT_RESOURCE_GROUP"
STORAGE_ACCOUNT_ID=$(az storage account show --name ${STORAGE_ACCOUNT_NAME} --resource-group ${STORAGE_ACCOUNT_RESOURCE_GROUP} --query id --output tsv)

# Grant the extension identity write access to the storage account
az role assignment create --assignee ${EXTENSION_IDENTITY_PRINCIPAL_ID} --role "Storage Blob Data Contributor" --scope ${STORAGE_ACCOUNT_ID}
```
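The jq filter above selects the entry whose `extensionType` matches and prints its managed-identity principal ID. If you want to sanity-check the filter itself without touching Azure, you can run it against mock output (the JSON below is fabricated sample data, not real identities):

```shell
# Fabricated sample of `az k8s-extension list` output, for testing the jq filter only
MOCK='[{"extensionType":"microsoft.arc.containerstorage","identity":{"principalId":"00000000-0000-0000-0000-000000000000"}},
{"extensionType":"some.other.extension","identity":{"principalId":"11111111-1111-1111-1111-111111111111"}}]'
echo "$MOCK" | jq --arg extType "microsoft.arc.containerstorage" \
  'map(select(.extensionType == $extType)) | .[] | .identity.principalId' -r
# -> 00000000-0000-0000-0000-000000000000
```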
Create a deployment to test the cloud ingest volume
Now we can test transferring data from edge to cloud. I’m using the demo from Azure Arc Jumpstart: Deploy demo from Azure Arc Jumpstart
First off, create a container on the storage account to hold the data from the edge volume.
```shell
export STORAGE_ACCOUNT_NAME="YOUR_STORAGE_ACCOUNT_NAME"
export STORAGE_ACCOUNT_CONTAINER="fault-detection"
STORAGE_ACCOUNT_RESOURCE_GROUP="YOUR_STORAGE_ACCOUNT_RESOURCE_GROUP"

az storage container create --name ${STORAGE_ACCOUNT_CONTAINER} --account-name ${STORAGE_ACCOUNT_NAME} --resource-group ${STORAGE_ACCOUNT_RESOURCE_GROUP}
```
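The EdgeSubvolume manifest in the next step targets this container via the storage account’s blob endpoint. As a quick sanity check, you can assemble the URL it will upload to from the variables just exported (the account name here is still the placeholder):

```shell
export STORAGE_ACCOUNT_NAME="YOUR_STORAGE_ACCOUNT_NAME"
export STORAGE_ACCOUNT_CONTAINER="fault-detection"
# Blob endpoint plus container that the Edge Volume will upload into
echo "https://${STORAGE_ACCOUNT_NAME}.blob.core.windows.net/${STORAGE_ACCOUNT_CONTAINER}"
# -> https://YOUR_STORAGE_ACCOUNT_NAME.blob.core.windows.net/fault-detection
```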
Next, create a file called acsa-deployment.yaml using the following content:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  ### Create a name for your PVC ###
  name: acsa-pvc
  ### Use a namespace that matches your intended consuming pod, or "default" ###
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: cloud-backed-sc
---
apiVersion: "arccontainerstorage.azure.net/v1"
kind: EdgeSubvolume
metadata:
  name: faultdata
spec:
  edgevolume: acsa-pvc
  path: faultdata # If you change this path, line 33 in deploymentExample.yaml must be updated. Don't use a preceding slash.
  auth:
    authType: MANAGED_IDENTITY
  storageaccountendpoint: "https://${STORAGE_ACCOUNT_NAME}.blob.core.windows.net/"
  container: ${STORAGE_ACCOUNT_CONTAINER}
  ingestPolicy: edgeingestpolicy-default # Optional: See the following instructions if you want to update the ingestPolicy with your own configuration
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: acsa-webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: acsa-webserver
  template:
    metadata:
      labels:
        app: acsa-webserver
    spec:
      containers:
        - name: acsa-webserver
          image: mcr.microsoft.com/jumpstart/scenarios/acsa_ai_webserver:1.0.0
          resources:
            limits:
              cpu: "1"
              memory: "1Gi"
            requests:
              cpu: "200m"
              memory: "256Mi"
          ports:
            - containerPort: 8000
          env:
            - name: RTSP_URL
              value: rtsp://virtual-rtsp:8554/stream
            - name: LOCAL_STORAGE
              value: /app/acsa_storage/faultdata
          volumeMounts:
            ### This name must match the volumes.name attribute below ###
            - name: blob
              ### This mountPath is where the PVC will be attached to the pod's filesystem ###
              mountPath: "/app/acsa_storage"
      volumes:
        ### User-defined 'name' that will be used to link the volumeMounts. This name must match volumeMounts.name as specified above. ###
        - name: blob
          persistentVolumeClaim:
            ### This claimName must refer to the PVC resource 'name' as defined in the PVC config. This name will match what your PVC resource was actually named. ###
            claimName: acsa-pvc
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: virtual-rtsp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: virtual-rtsp
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: virtual-rtsp
    spec:
      initContainers:
        - name: init-samples
          image: busybox
          resources:
            limits:
              cpu: "200m"
              memory: "256Mi"
            requests:
              cpu: "100m"
              memory: "128Mi"
          command:
            - wget
            - "-O"
            - "/samples/bolt-detection.mp4"
            - https://github.com/ldabas-msft/jumpstart-resources/raw/main/bolt-detection.mp4
          volumeMounts:
            - name: tmp-samples
              mountPath: /samples
      containers:
        - name: virtual-rtsp
          image: "kerberos/virtual-rtsp"
          resources:
            limits:
              cpu: "500m"
              memory: "512Mi"
            requests:
              cpu: "200m"
              memory: "256Mi"
          imagePullPolicy: Always
          ports:
            - containerPort: 8554
          env:
            - name: SOURCE_URL
              value: "file:///samples/bolt-detection.mp4"
          volumeMounts:
            - name: tmp-samples
              mountPath: /samples
      volumes:
        - name: tmp-samples
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: virtual-rtsp
  labels:
    app: virtual-rtsp
spec:
  type: LoadBalancer
  ports:
    - port: 8554
      targetPort: 8554
      name: rtsp
      protocol: TCP
  selector:
    app: virtual-rtsp
---
apiVersion: v1
kind: Service
metadata:
  name: acsa-webserver-svc
  labels:
    app: acsa-webserver
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8000
      protocol: TCP
  selector:
    app: acsa-webserver
```
Once created, apply the deployment:
```shell
# Export the storage account name so envsubst can substitute it
export STORAGE_ACCOUNT_NAME="YOUR_STORAGE_ACCOUNT_NAME"
envsubst < acsa-deployment.yaml | kubectl apply -f -
```
> [!NOTE]
> This will deploy into the default namespace.

This creates the deployment and the volumes, substituting the storage account name and container references in the manifest with the environment variables set previously.