Jeff’s Note #
Unlike generic exam dumps, ADH analyzes this scenario through the lens of a Real-World Lead Developer.
For DVA-C02 candidates, the confusion often lies in choosing the right persistent storage type for containerized apps on Kubernetes. In production, this is about knowing exactly when to use network file systems versus block storage within EKS pods. Let’s drill down.
The Certification Drill (Simulated Question) #
Scenario #
Global MediaHub runs an image processing pipeline on containerized microservices deployed across Kubernetes clusters hosted in their on-premises data centers. These microservices share a Network File System (NFS) volume for persisting and accessing files during processing. Due to capacity constraints with the on-prem NFS, the engineering team must rapidly migrate this workload to AWS. The new environment must keep the Kubernetes clusters highly available and maintain the shared file access pattern for all containers.
The Requirement #
Design a migration strategy to AWS that supports shared file storage across containers and offers high availability for the Kubernetes clusters running these containerized apps.
The Options #
- A) Transfer the data from the on-premises NFS to an Amazon Elastic Block Store (Amazon EBS) volume. Upload the container images to Amazon Elastic Container Registry (Amazon ECR).
- B) Transfer the data from the NFS share to an Amazon Elastic File System (Amazon EFS) volume. Upload the container images to Amazon Elastic Container Registry (Amazon ECR).
- C) Create an Amazon Elastic Container Service (Amazon ECS) cluster to run the applications. Configure each container instance to mount the Amazon Elastic Block Store (Amazon EBS) volume at the required path.
- D) Create an Amazon Elastic Kubernetes Service (Amazon EKS) cluster to run the applications. Configure nodes to mount an Amazon Elastic Block Store (Amazon EBS) volume at the required path for the containers.
- E) Create an Amazon Elastic Kubernetes Service (Amazon EKS) cluster to run the applications. Configure nodes to mount an Amazon Elastic File System (Amazon EFS) volume at the required path for the containers.
Correct Answer #
B and E
Quick Insight: The DVA-C02 Developer Imperative #
- When running stateful containerized applications on Kubernetes that require shared file access, EFS provides a managed, scalable NFS-compatible storage solution allowing all pods to read and write concurrently across AZs.
- EBS volumes are block storage attached to a single EC2 instance (node) at a time; they cannot be shared across multiple nodes, which rules them out here.
- Uploading container images to ECR is a best practice for container image management and deployment reliability.
You’ve identified the answer. But do you know the implementation details that separate a Junior from a Senior?
The Expert’s Analysis #
Correct Answer #
Options B and E
The Winning Logic #
The key requirements are:
- Shared access to the same data for multiple containers (as the on-premises NFS share provided).
- High availability for the Kubernetes cluster across multiple Availability Zones.
- Migration from a POSIX-compliant network filesystem to AWS.
Amazon EFS is a fully managed, elastic NFS file system that multiple EC2 instances or Kubernetes pods can access concurrently across Availability Zones. This makes it the natural replacement for an on-premises NFS share that many pods read and write during processing.
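As a minimal provisioning sketch (the file system ID, subnet IDs, and security group ID are placeholders for your VPC resources), the replacement file system and a mount target in each Availability Zone could be created with the AWS CLI:

# Create the EFS file system (encrypted at rest):
aws efs create-file-system \
    --encrypted \
    --performance-mode generalPurpose \
    --tags Key=Name,Value=mediahub-shared

# Create one mount target per AZ so pods in any zone can reach the file system:
aws efs create-mount-target --file-system-id fs-XXXXXXXX --subnet-id subnet-aaa --security-groups sg-123
aws efs create-mount-target --file-system-id fs-XXXXXXXX --subnet-id subnet-bbb --security-groups sg-123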
Uploading container images to Amazon ECR is standard best practice for container registries and deployment.
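A typical push workflow, assuming a repository named image-processor and placeholder account/region values, looks roughly like this:

# Create the repository and authenticate Docker against ECR:
aws ecr create-repository --repository-name image-processor
aws ecr get-login-password --region us-east-1 \
    | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Tag and push the locally built image:
docker tag image-processor:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/image-processor:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/image-processor:latest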
EBS volumes, by contrast, are block storage devices that attach to a single EC2 instance at a time. While low-latency and performant for single-node workloads, they do not support concurrent multi-node access, which violates the requirement that all containers share the same files simultaneously.
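A quick way to see the mismatch (setting aside the niche io2 Multi-Attach block-mode case): an EBS-backed claim supports ReadWriteOnce, so requesting ReadWriteMany the way the EFS claim below does would leave the claim stuck in Pending. A minimal sketch, assuming an EBS-backed StorageClass named gp2 (the EKS default) exists:

# An EBS-backed claim is limited to ReadWriteOnce -- one node at a time:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-pvc
spec:
  storageClassName: gp2            # assumes the default EBS-backed StorageClass
  accessModes:
    - ReadWriteOnce                # ReadWriteMany is not offered for EBS volumes
  resources:
    requests:
      storage: 5Gi
EOF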
Amazon Elastic Kubernetes Service (Amazon EKS) is the managed Kubernetes service of choice for high availability. Configuring EKS worker nodes to mount an EFS volume gives pods shared, persistent data across nodes and Availability Zones.
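As a sketch (the cluster name, region, and zones are illustrative), eksctl can stand up a multi-AZ cluster in one command:

# Create an EKS cluster with worker nodes spread across two AZs:
eksctl create cluster \
    --name mediahub \
    --region us-east-1 \
    --zones us-east-1a,us-east-1b \
    --nodes 3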
The Trap (Distractor Analysis) #
- Why not A or D? Amazon EBS does not support concurrent mounts from multiple nodes, so it cannot back shared file storage for Kubernetes pods; if pods on different nodes try to use the same volume, the attachment fails. Using EBS here would break the shared file access model.
- Why not C? Amazon ECS is a capable container orchestration service, but the scenario explicitly runs Kubernetes clusters and requires high availability on Kubernetes, so ECS does not align with the requirements.
The Technical Blueprint #
Developer Code/CLI Snippet: Mounting EFS on EKS Nodes #
# Install the EFS CSI driver on the EKS cluster
# (check the driver repository for the current release tag):
kubectl apply -k "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/ecr/?ref=release-1.3"

# StorageClass for dynamic provisioning
# (replace fs-XXXXXXXX with your EFS file system ID):
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap        # create an EFS access point per volume
  fileSystemId: fs-XXXXXXXX       # placeholder: your EFS file system ID
  directoryPerms: "700"
EOF

# PersistentVolumeClaim with ReadWriteMany so multiple pods can share the volume:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-pvc
spec:
  storageClassName: efs-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
EOF
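To exercise the shared-access pattern end to end, a deployment sketch (the image URI is a placeholder) can mount the same claim from several replicas at once:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: image-processor
spec:
  replicas: 3                      # all replicas mount the same EFS-backed volume
  selector:
    matchLabels:
      app: image-processor
  template:
    metadata:
      labels:
        app: image-processor
    spec:
      containers:
        - name: processor
          image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/image-processor:latest  # placeholder image URI
          volumeMounts:
            - name: shared-media
              mountPath: /data     # shared path all replicas read and write
      volumes:
        - name: shared-media
          persistentVolumeClaim:
            claimName: efs-pvc
EOF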
The Comparative Analysis #
| Option | Suitability for Shared Storage | Kubernetes Compatibility | High Availability | Notes |
|---|---|---|---|---|
| A | Low (EBS is single-node) | Works with Kubernetes | Limited (single AZ per volume) | Violates shared access requirement |
| B | High (EFS supports multi-node) | Works with Kubernetes | Yes (multi-AZ) | Correct choice for shared NFS functionality |
| C | Low (ECS not Kubernetes) | N/A | Yes | Does not meet Kubernetes cluster requirement |
| D | Low (EBS single-node) | Works with Kubernetes | Limited | Single-node mount rule breaks shared file use case |
| E | High (EFS multi-node) | Works with Kubernetes | Yes | Correct choice to mount shared file system volumes |
Real-World Application (Practitioner Insight) #
Exam Rule #
For the exam, always pick Amazon EFS when you see shared persistent storage in Kubernetes requiring concurrent pod access.
Real World #
In production, many teams combine Amazon EFS for shared config/data files with Amazon EBS for high-performance local data depending on workload profiles. For stateless pods, ephemeral storage or S3 object storage may also be used.
Stop Guessing, Start Mastering #
Disclaimer #
This is a study note based on simulated scenarios for the DVA-C02 exam.