
AWS Certified Developer Associate - Scenario Based Questions Confusion

An expert guide for candidates who pass factual questions but fail scenario-based ones, with practical recovery advice for the AWS Certified Developer Associate exam.

Why You Pass Factual AWS Developer Questions But Fail Scenario-Based Ones

You’ve memorized that Lambda has a 15-minute timeout limit. You know DynamoDB uses partition keys. You can recite the IAM permission model. Yet when the exam presents a multi-service scenario, you’re selecting wrong answers with confidence. This gap between conceptual knowledge and applied problem-solving is the primary reason AWS Certified Developer Associate candidates fail on their second or third attempt.

Direct Answer

The AWS Certified Developer Associate (DVA-C02) exam tests your ability to architect solutions across interconnected services, not identify isolated facts. When you study concepts independently—Lambda timeouts, S3 versioning, SQS message retention—you’re only covering 40% of what the exam measures. Scenario questions require you to understand how IAM policies gate access to Lambda functions that process messages from SQS, write to DynamoDB, and expose data through API Gateway. AWS (which delivers the exam through Pearson VUE) deliberately weights scenario-based questions at 55-65% of the total exam because real development work never happens in isolation. Candidates who score 70%+ on factual questions but fail scenarios are missing the architectural layer where services interact, constrain each other, and create failure modes.

Why This Happens to AWS Certified Developer Associate Candidates

You’ve likely been studying from materials that organize content by service. A chapter on Lambda. A chapter on DynamoDB. A chapter on IAM. This structure is convenient for learning but catastrophic for exam performance because it trains your brain to answer “What is Lambda?” instead of “How does this Lambda function get triggered, authenticated, and limited in a production workflow?”

Scenario questions force a different cognitive task. They present a business requirement—“Your application needs to process 50,000 user uploads daily, store metadata, trigger validation logic, and notify downstream systems”—then ask which combination of services and configurations solves it. The wrong answers are deliberately designed to appeal to candidates who understand individual services but miss the architectural dependencies.

Here’s what happens: You see “Use Lambda with a 30-minute timeout” as an option and think it’s wrong because you know the Lambda timeout is 15 minutes. But the real issue isn’t the timeout itself—it’s that this scenario requires asynchronous processing, which means Lambda shouldn’t be timing out at all because SQS should be buffering the requests. A candidate studying in isolation catches the timeout mistake but misses the architectural mistake: invoking Lambda synchronously instead of in an event-driven design.
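
To make the event-driven version concrete, here is a minimal sketch of what an SQS-triggered Lambda handler looks like. The event shape is the standard SQS batch format; the processing step and the `upload_id` field are hypothetical stand-ins for real work. The point is that each invocation receives a batch of buffered messages, so no user request is ever waiting on the function to finish:

```python
import json

def handler(event, context):
    """SQS-triggered Lambda: each invocation receives a batch of
    buffered messages instead of one synchronous request."""
    failures = []
    for record in event["Records"]:
        try:
            body = json.loads(record["body"])
            # ... process one upload job here (hypothetical work) ...
            if "upload_id" not in body:
                raise ValueError("missing upload_id")
        except Exception:
            # Report only the failed message so SQS redelivers just that one
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}

# Sample SQS batch event: one valid message, one invalid
event = {"Records": [
    {"messageId": "1", "body": json.dumps({"upload_id": "a"})},
    {"messageId": "2", "body": json.dumps({})},
]}
result = handler(event, None)
```

Returning `batchItemFailures` lets SQS redeliver only the failed messages—another detail that only makes sense when you study Lambda and SQS together.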

The exam is testing whether you understand service orchestration—how to select the right tool for each part of the workflow and configure it correctly within that workflow. CloudFormation questions don’t just ask “How do you define an S3 bucket?” They ask “In this CloudFormation template with EC2, VPC security groups, and RDS, why is the application timing out?” The answer requires reading the entire template and understanding how VPC configuration affects database connectivity.

The Root Cause: Studying Concepts in Isolation Instead of in Architectural Context

Let’s isolate the actual problem. When you study a Lambda fact sheet, you learn:

  • 128 MB to 10,240 MB memory allocation
  • 15-minute timeout maximum
  • Can be triggered by 90+ event sources
  • Executes within a VPC (optional)
  • Returns JSON responses

This is all true and testable. But in a scenario, none of this matters unless you understand the broader question: “Why would a developer choose Lambda for this specific problem given these constraints?”

Example: A scenario describes an application that needs to process large video files uploaded to S3. The files are 2-4 GB each. A wrong answer suggests “Use Lambda to transcode the files in-place.” This appeals to candidates who know Lambda can be triggered by S3 events. But it fails because Lambda offers only 512 MB of ephemeral /tmp storage by default and a 15-minute timeout—transcoding a 4 GB file in place isn’t feasible. The question isn’t testing whether you remember the timeout; it’s testing whether you can reason through architectural constraints.
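
You can internalize this kind of reasoning as a pre-flight check. The sketch below is a hypothetical helper, not anything in the AWS SDK: it encodes Lambda's hard limits as constants and asks whether a given job fits before you reach for Lambda at all:

```python
# Hypothetical pre-flight check: does this job fit Lambda's limits?
LAMBDA_MAX_TIMEOUT_S = 15 * 60    # 15-minute hard cap
LAMBDA_DEFAULT_TMP_MB = 512       # default ephemeral /tmp storage

def fits_in_lambda(file_size_mb: float, est_runtime_s: float,
                   tmp_mb: int = LAMBDA_DEFAULT_TMP_MB) -> bool:
    """True only if the file fits in /tmp AND the job can finish
    before the timeout. Failing either check means Lambda is the
    wrong tool, regardless of how it gets triggered."""
    return file_size_mb <= tmp_mb and est_runtime_s <= LAMBDA_MAX_TIMEOUT_S

# A 4 GB, hour-long transcode fails both checks with default settings
print(fits_in_lambda(file_size_mb=4096, est_runtime_s=3600))  # False
```

The exam expects you to run exactly this check mentally when an answer option pairs Lambda with a workload that violates its limits.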

This reasoning only works if you study services in relational context. You need to know:

  • Lambda’s storage and timeout limits relative to typical workloads
  • DynamoDB’s write throttling patterns in relation to SQS queue processing rates
  • IAM permission models in the context of how services authenticate to each other
  • API Gateway rate limiting alongside Lambda concurrency limits
  • CloudFormation’s parameter resolution as it connects EC2 instances to RDS databases through security groups
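
The third bullet—IAM in the context of service-to-service authentication—is easiest to see in a concrete policy. Here is a sketch of a Lambda execution-role policy that gates the SQS → Lambda → DynamoDB flow; the account ID, queue name, and table name are placeholders:

```python
import json

# Sketch of a Lambda execution-role policy gating an
# SQS -> Lambda -> DynamoDB flow (ARNs are placeholders).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Permissions the SQS event-source mapping requires
            "Effect": "Allow",
            "Action": ["sqs:ReceiveMessage", "sqs:DeleteMessage",
                       "sqs:GetQueueAttributes"],
            "Resource": "arn:aws:sqs:us-east-1:123456789012:upload-queue",
        },
        {   # Write access scoped to a single table, nothing broader
            "Effect": "Allow",
            "Action": ["dynamodb:PutItem"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/uploads",
        },
    ],
}
print(json.dumps(policy, indent=2))
```

Studied in isolation, this is just IAM syntax. Studied in context, it explains why a Lambda function that "can't read from the queue" in a scenario question is usually missing one of those three SQS actions on its execution role.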

When you study in isolation, you’re building disconnected facts. When the exam presents a scenario, your brain searches for matching keywords instead of executing a mental architecture.

How the AWS Certified Developer Associate Exam Actually Tests This

AWS measures developer competency through an architectural lens. The exam vendor weights questions like this:

  • 15-20%: Single-service factual questions (what you’ve been passing)
  • 55-65%: Multi-service scenarios requiring architectural reasoning
  • 15-25%: Implementation questions about code, SDKs, and error handling

Your scores confirm this distribution. You’re likely hitting 80%+ on the factual layer but 40-50% on scenarios. Against a passing bar of roughly 72% (a scaled score of 720 out of 1,000), that mix fails because the scenario weight pulls your average down.

The testing logic is this: A developer who memorizes facts is a risk. A developer who understands how services interact, constrain each other, and fail is deployable. The exam is filtering for the latter.

Real exam scenarios look like this:

Example scenario:

Your company runs a web application where users upload documents. The application must:

  1. Accept uploads through an HTTPS endpoint
  2. Validate file type and size
  3. Store valid files in S3
  4. Trigger indexing for search functionality
  5. Return upload confirmation within 5 seconds
  6. Notify administrators if validation fails

Which architecture best satisfies these requirements?

A) Use API Gateway with a Lambda function that validates files, writes to S3, triggers another Lambda for indexing synchronously, and returns immediately. If indexing takes longer than 5 seconds, catch the timeout and return a partial response.

B) Use API Gateway with a Lambda function that validates files, writes to S3, publishes a message to SNS for indexing notification, and returns immediately. Index subscribers process asynchronously. Failed validations publish to a separate SNS topic for administrator notifications.

C) Use EC2 with a load balancer. Process uploads synchronously, validate, store in S3, and trigger indexing through a direct Lambda invocation before returning.

D) Use API Gateway with a Lambda function that stores files in S3, uses DynamoDB Streams to trigger validation, and another Lambda to handle indexing. Return immediately without validation confirmation.

Why B is correct: It separates concerns (validation is synchronous, indexing is asynchronous), guarantees the 5-second response (validation + S3 write can easily finish in that window), uses SNS for fan-out notification (efficient for notifying multiple admin systems), and decouples systems (indexing failures don’t affect user experience).

Why candidates fail:

  • A appeals to people who know Lambda can be chained. They miss that synchronous chaining defeats the timeout requirement. You can’t guarantee the indexing Lambda finishes in 5 seconds when you’re waiting for it.
  • C appeals to people who haven’t internalized that serverless is better for spiky workloads. It works architecturally but is operationally heavier and doesn’t scale elegantly.
  • D appeals to people who understand DynamoDB Streams. They miss that Streams aren’t designed for real-time triggering (there’s latency) and that DynamoDB validation is an anti-pattern when files live in S3.

The correct answer requires understanding that API Gateway + Lambda + S3 + SNS compose a specific pattern: synchronous entry point, asynchronous processing, event notification. This pattern only makes sense if you’ve studied these services as a system, not as individual tools.
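
The synchronous half of option B can be sketched in a few lines. Everything here is illustrative: the allowed file types, the size limit, and the topic names are hypothetical, and the S3/SNS calls are left as comments so only the decision logic runs. What matters is the shape—validate inline, hand slow work to SNS, return immediately:

```python
ALLOWED_TYPES = {"pdf", "docx", "txt"}  # hypothetical business rules
MAX_SIZE_MB = 25

def handle_upload(filename: str, size_mb: float) -> dict:
    """Synchronous entry point for option B: validate, hand off,
    return within the 5-second window."""
    ext = filename.rsplit(".", 1)[-1].lower()
    if ext not in ALLOWED_TYPES or size_mb > MAX_SIZE_MB:
        # sns.publish(TopicArn=ADMIN_TOPIC, Message=filename)  # alert admins
        return {"statusCode": 400, "body": "validation failed"}
    # s3.put_object(Bucket=UPLOAD_BUCKET, Key=filename, Body=...)
    # sns.publish(TopicArn=INDEX_TOPIC, Message=filename)  # async indexing
    return {"statusCode": 200, "body": "upload accepted"}
```

Notice that the function never waits on indexing: the SNS publish is fire-and-forget, which is exactly why option B can guarantee the 5-second response while option A cannot.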

How to Fix This Before Your Next Attempt

1. Rebuild your study model around workflows, not services.

Stop reading “DynamoDB: A Complete Guide.” Instead, study workflows: “User Authentication Flow” (IAM roles, Cognito, STS), “File Processing Pipeline” (S3 events, Lambda, SQS, SNS, DynamoDB), “Real-Time API” (API Gateway, Lambda, RDS, ElastiCache). For each workflow, document:

  • Which service handles which step
  • Why that service was chosen over alternatives
  • What happens when the service hits a limit

Ready to pass?

Start AWS Practice Exam on Certsqill →

1,000+ exam-accurate questions, AI Tutor explanations, and a performance dashboard that shows exactly which domains to fix.