
AWS DVA-C02 Drill: Lambda Deployment Strategies - Managing Large Libraries

Jeff Taakey
Author | 21+ Year Enterprise Architect | AWS SAA/SAP & Multi-Cloud Expert.

Jeff’s Note

Unlike generic exam dumps, ADH analyzes this scenario through the lens of a Real-World Lead Developer.

For DVA-C02 candidates, the confusion often lies in how to efficiently manage large, ever-growing dependencies in Lambda deployments. In production, this means knowing exactly which deployment options support the size and update velocity of your code libraries without compromising cold start performance or maintainability. Let’s drill down.

The Certification Drill (Simulated Question)

Scenario

At Vertex Analytics, a team of developers is building a data processing application that leverages custom machine learning (ML) algorithms. The ML library used is rapidly growing and is currently around 15 GB in size. The application runs on multiple AWS Lambda functions, and all these Lambda functions need access to the ML library. The developers must choose a way to share the ML library across all Lambda functions while handling the increasing size efficiently.

The Requirement

Determine the most effective and maintainable solution to provide all Lambda functions access to the large and growing ML library.

The Options

  • A) Store the library as Lambda layers and attach these layers to the Lambda functions.
  • B) Store the library in Amazon S3 and download the library into the function’s /tmp storage at runtime.
  • C) Package the library into a Lambda container image and redeploy the Lambda functions when the image updates.
  • D) Store the library in an Amazon Elastic File System (Amazon EFS) and mount the EFS filesystem to all Lambda functions.


Correct Answer

D

Quick Insight: The Developer Deployment Imperative

Lambda layers have a hard limit on unzipped size (250 MB for the function code and all layers combined), making a 15 GB ML library impossible to deploy via layers. Downloading from S3 at runtime hurts latency, can blow up cold start duration, and may exhaust /tmp storage (512 MB by default, at most 10,240 MB, still below 15 GB). Container images max out at 10 GB, smaller than the 15+ GB library, and require a full redeploy on every update. Mounting EFS provides a scalable, persistent file system accessible concurrently by all Lambda functions without redeployment.


You’ve identified the answer. But do you know the implementation details that separate a Junior from a Senior?


The Expert’s Analysis
#

Correct Answer

Option D

The Winning Logic

Mounting an Amazon EFS file system to the Lambda functions lets them share a file system that scales well beyond the size limits of Lambda layers and container images. EFS supports concurrent access from multiple Lambda functions and can absorb a large, growing library without redeploying the functions every time the library changes. That makes it ideal for a 15 GB and still-growing ML library.

  • The EFS file system is mounted via Lambda’s EFS integration, which transparently lets your functions read the ML library as if it were local (see the handler sketch after this list).
  • The solution avoids Lambda package size limits (250 MB layers, 10 GB container images).
  • It also avoids runtime overhead and startup delays from downloading the library on each invocation.
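
For illustration, here is a minimal Python handler sketch. It assumes the library was unpacked onto the share under /mnt/efs/ml-lib (matching the mount path in the Technical Blueprint below); the package name vertex_ml and the model path are hypothetical, not part of the scenario.

import sys

# The EFS mount appears as a local directory; make its packages importable.
# Path and package name are illustrative only.
sys.path.insert(0, "/mnt/efs/ml-lib")

import vertex_ml  # hypothetical ML package stored on the EFS share

def lambda_handler(event, context):
    # The first import reads the library over NFS; warm sandboxes reuse
    # the already-loaded modules, so steady-state latency stays low.
    model = vertex_ml.load_model("/mnt/efs/models/latest")
    return {"prediction": model.predict(event["payload"])}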

The Trap (Distractor Analysis):

  • Why not A? Lambda layers have a hard unzipped size limit of 250 MB, far below 15 GB, so this option is technically impossible.
  • Why not B? Downloading the full library at cold start would introduce significant latency, exceed /tmp storage (512 MB by default, 10,240 MB at most), and be operationally expensive, as the sketch below illustrates.
  • Why not C? Container images have a maximum size of 10 GB, less than the current 15 GB, and require redeploying every Lambda function whenever the library updates, which is operationally costly for a rapidly growing library.
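
To make the Option B trap concrete, here is a sketch of the anti-pattern, with a hypothetical bucket and key; the module-scope download runs on every cold start.

import boto3

s3 = boto3.client("s3")

# Module scope executes once per cold start: a multi-gigabyte download
# here adds minutes of latency, and a 15 GB archive cannot fit in /tmp
# even at the 10,240 MB ephemeral storage maximum.
s3.download_file("vertex-ml-artifacts", "ml-lib.tar.gz", "/tmp/ml-lib.tar.gz")

def lambda_handler(event, context):
    # Every concurrent cold start pays the full download cost again
    # before this handler can do any useful work.
    ...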

The Technical Blueprint

# Example CLI to create and mount EFS for Lambda (IDs are placeholders)

# Create the EFS file system
aws efs create-file-system --performance-mode generalPurpose --throughput-mode bursting

# Create a mount target in each subnet the Lambda functions will use
aws efs create-mount-target --file-system-id fs-123456 --subnet-id subnet-abc123 --security-groups sg-01234

# Lambda mounts EFS through an access point, so create one for the library
aws efs create-access-point --file-system-id fs-123456 \
  --posix-user Uid=1000,Gid=1000 \
  --root-directory 'Path=/ml-lib,CreationInfo={OwnerUid=1000,OwnerGid=1000,Permissions=755}'

# Attach the access point to the function; LocalMountPath must start with /mnt
aws lambda update-function-configuration \
  --function-name my-function \
  --file-system-configurations Arn=arn:aws:elasticfilesystem:region:account-id:access-point/fsap-0123456789abcdef0,LocalMountPath=/mnt/efs
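
Two details are easy to miss here: Lambda’s file-system configuration takes an EFS access point ARN rather than the file system ARN, and the function must be attached to a VPC that can reach the mount targets. The access point pins the POSIX identity and root directory that every function sees, which keeps permissions consistent across the whole fleet.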

The Comparative Analysis

| Option | Deployment Complexity | Performance | Use Case Suitability |
| --- | --- | --- | --- |
| A) Lambda Layers | Easy to use, simple API | Fast cold start, but 250 MB unzipped limit | Small libraries; not suitable for 15 GB+ |
| B) S3 Download at Runtime | Custom download code needed | Slow cold start, high latency | Small, infrequently updated libraries; temporary downloads only |
| C) Container Image | Complex CI/CD pipeline | Fast cold start, but 10 GB image limit | Libraries under 10 GB with infrequent updates |
| D) EFS Mount | Requires VPC config and EFS management | Fast access, effectively no size limit, shared storage | Large, growing libraries needing shared access |

Real-World Application (Practitioner Insight)

Exam Rule

For the exam, always pick Amazon EFS when Lambda functions need shared access to large (over 10 GB) or frequently updated code or data.

Real World

In production, we might still choose container images for dependencies under 10 GB when we want immutable deployments and isolated environments. For huge datasets or models that change often, EFS provides flexibility without redeployment overhead.


Stop Guessing, Start Mastering


Disclaimer

This is a study note based on simulated scenarios for the AWS DVA-C02 exam.

The DevPro Network: Mission and Founder

A 21-Year Tech Leadership Journey

Jeff Taakey has driven complex systems for over two decades, serving in pivotal roles as an Architect, Technical Director, and startup Co-founder/CTO.

He holds an MBA and a Master's degree in Computer Science from an English-speaking university in Hong Kong. His expertise is further backed by multiple international certifications, including TOGAF, PMP, ITIL, and AWS SAA.

His experience spans diverse sectors and includes leading large, multidisciplinary teams of up to 86 people. He has also served as a Development Team Lead cooperating with global teams across North America, Europe, and Asia-Pacific, and he spearheaded the design of an industry cloud platform. Much of this work was conducted within global Fortune 500 environments such as IBM, Citi, and Panasonic.

He launched this platform to share advanced, practical technical knowledge with the global developer community.


About This Site: AWS.CertDevPro.com


AWS.CertDevPro.com focuses exclusively on mastering the Amazon Web Services ecosystem. We transform raw practice questions into strategic Decision Matrices. Led by Jeff Taakey (MBA & 21-year veteran of IBM/Citi), we provide the exclusive SAA and SAP Master Packs designed to move your cloud expertise from certification-ready to project-ready.