
AWS SAA-C03 AI and Search Scenario Learning

What Most Candidates Get Wrong About This

You think the AI and search services on the SAA-C03 are a small topic. They’re not.

Most candidates see a question about Amazon OpenSearch or Amazon Kendra and either blank out or guess. That’s because they studied EC2, RDS, and S3 for weeks but skipped the AI/ML and search section. The exam weight doesn’t lie—these services show up, and they show up in scenario-based questions that require you to pick the right tool from three similar-sounding options.

The second mistake is treating these services like theory. You read that “Kendra is for enterprise search” and move on. Then on test day you get a scenario: “A financial services company needs to search through 500,000 PDF documents with complex permission-based access control. Budget is tight. What service?” You’ve seen Kendra mentioned, but you don’t know why it’s the answer over Elasticsearch or a custom Lambda solution.

The third mistake is confusing the services themselves. OpenSearch, Elasticsearch, Kendra, and even Bedrock-backed search features blur together in your head. You don’t know which one scales to which document count, which one has native PDF parsing, or which one bills per index versus per node.

On the SAA-C03, you will see at least 2–3 questions that touch AI, search, or both. At least one will be a scenario question where the answer depends on understanding the specific use case the service solves, not just its name.

The Specific Problem You’re Facing

You’re scoring in the 660–700 range on practice tests and you’re stuck. You pass some practice exams, fail others. When you look at your missed questions, a pattern emerges: anything involving OpenSearch, Kendra, or when to use AI services versus traditional databases—you’re getting those wrong.

Or worse, you’re getting them right by luck. You guessed between OpenSearch and a custom DynamoDB + Lambda solution and picked OpenSearch because it “sounded like search.” That won’t hold when the exam adds complexity.

The real problem is this: You’ve learned what these services are, but not when to use them or why they beat the alternative. That’s a scenario-learning problem, not a knowledge problem.

Here’s what the exam is actually testing: Can you read a business requirement and match it to the right AWS service in under two minutes? When a scenario says “the customer needs to index and search through unstructured documents with relevance-based ranking and role-based access,” can you immediately say “Kendra” instead of spending 45 seconds debating between three options?

The gap is bigger if you’re weak on cost. Kendra bills an hourly rate per index that depends on the edition, plus capacity units for extra storage and query throughput. OpenSearch bills per node per hour. Managed Elasticsearch doesn’t exist on AWS anymore; it’s Amazon OpenSearch Service now. Most candidates don’t know those details and lose points on cost-optimization questions.

A Step-By-Step Approach That Works

Step 1: Map the Services to Their Core Use Case (Not Their Names)

Stop memorizing features. Start with the problem each service solves:

  • Amazon Kendra: Enterprise search across unstructured documents (PDFs, Word, web pages, Slack messages). It indexes content, understands natural language queries, and ranks results by relevance. Use when: A company has 100,000+ documents scattered across file shares and needs Google-like search with minimal setup. Pricing is an hourly rate per index (Developer vs. Enterprise edition), plus capacity units.

  • Amazon OpenSearch: Managed Elasticsearch-compatible search and analytics. Full-text search, log analytics, real-time dashboards. Use when: You need sub-second search on structured or semi-structured data, or you’re ingesting millions of log events per day. Cost is per node/hour.

  • Amazon Bedrock: Managed foundation models (Claude, Llama, etc.) for text generation, summarization, and classification. Use when: You need AI to generate content or understand meaning, not search.

The difference: Kendra = “find this document.” OpenSearch = “search this structured dataset quickly.” Bedrock = “generate or classify this text.”
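One way to internalize that mapping is to drill it as a lookup. The sketch below is a hypothetical study aid, not an AWS API; the `SERVICE_FOR_NEED` table and `pick_service` helper are invented here to mirror the three one-liners above.

```python
# Hypothetical study aid: map a scenario's core need to the service
# this guide recommends. Not an AWS API, just a drill helper.
SERVICE_FOR_NEED = {
    "find this document": "Amazon Kendra",                          # natural-language search over unstructured docs
    "search this structured dataset quickly": "Amazon OpenSearch",  # full-text/log search and analytics
    "generate or classify this text": "Amazon Bedrock",             # managed foundation models
}

def pick_service(need: str) -> str:
    """Return the recommended service for a core need, or a prompt to re-read the scenario."""
    return SERVICE_FOR_NEED.get(need, "re-read the scenario: what is the customer actually doing?")

print(pick_service("find this document"))  # Amazon Kendra
```

Quiz yourself by covering the right-hand side of the table and naming the service from the need alone.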

Step 2: Practice With Real Scenario Questions

Write or find 5 scenario questions. Here’s one to start:

“A healthcare company has 2 million patient records in S3 as PDFs (intake forms, lab results, images). Doctors need to search for patients by symptom keywords, medical history, and date range. The company has no in-house search infrastructure. What service should they use?”

Answer: Kendra. Why? It handles unstructured documents (PDFs), understands natural language queries (“patients with diabetes diagnosed in 2022”), and requires no index management. OpenSearch would work but requires you to parse PDFs, structure the data, and manage nodes. Bedrock is wrong—it generates content, it doesn’t search.

Do this for 5 scenarios. Each time, write down:

  1. What does the customer have? (unstructured docs, logs, structured data)
  2. What do they need to do? (search, analyze, generate)
  3. What constraints matter? (cost, speed, setup complexity)
  4. Which service fits all three criteria?
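The four questions above can be drilled as a tiny triage function. This is a hypothetical sketch of the decision flow with simplified labels as inputs; real exam questions add constraints (cost, latency, compliance) that this ignores.

```python
def triage(data: str, task: str) -> str:
    """Sketch of the scenario triage: what the customer has, what they
    need to do, then match. Inputs are simplified labels, not exam text."""
    if task == "generate":
        return "Amazon Bedrock"            # AI generation/classification, not search
    if task == "search" and data == "unstructured docs":
        return "Amazon Kendra"             # turnkey natural-language document search
    if task in ("search", "analyze") and data in ("logs", "structured data"):
        return "Amazon OpenSearch"         # full-text search and analytics on nodes you manage
    return "unclear: re-read the scenario"

# The healthcare scenario from this step: unstructured PDFs, keyword search.
print(triage("unstructured docs", "search"))  # Amazon Kendra
```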

Step 3: Learn the Cost and Scale Models

On test day, you’ll see questions like: “Which of these options is most cost-effective?”

  • Kendra: You pay an hourly rate per index (roughly $800–$1,000+/month for an always-on Developer or Enterprise index), plus capacity units for extra storage and query throughput.
  • OpenSearch: You pay per node per hour. A small 3-node cluster of general-purpose instances in us-east-1 runs on the order of $500–$700/month.
  • RDS + full-text search: You pay for the database instance, and you build and maintain the search layer yourself.

A scenario that stresses fast setup and natural-language search over documents? Kendra. A scenario with 10 million documents, heavy query traffic, and custom analytics? OpenSearch makes sense because you size and tune the cluster yourself instead of paying Kendra’s flat index rate plus capacity units.
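To make the billing difference concrete, here is a back-of-the-envelope monthly comparison. The hourly rates below are illustrative placeholders, not current AWS prices; check the pricing pages before relying on them.

```python
HOURS_PER_MONTH = 730  # average hours in a month

# Illustrative placeholder rates, NOT current AWS prices.
kendra_index_per_hour = 1.125    # assumed Developer-edition-style hourly index rate
opensearch_node_per_hour = 0.27  # assumed rate for one general-purpose node
opensearch_nodes = 3

# Both services bill by the hour, so an always-on deployment is a flat monthly cost.
kendra_monthly = kendra_index_per_hour * HOURS_PER_MONTH
opensearch_monthly = opensearch_node_per_hour * opensearch_nodes * HOURS_PER_MONTH

print(f"Kendra index:      ~${kendra_monthly:,.0f}/month")
print(f"OpenSearch 3-node: ~${opensearch_monthly:,.0f}/month")
```

The takeaway for the exam: once you know both bill hourly, cost questions usually hinge on setup effort, features, and cluster sizing, not a per-query meter.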

Step 4: Drill the Distractor Patterns

AWS exam writers will give you:

  • OpenSearch when the answer is Kendra (looks similar)
  • DynamoDB + Lambda when the answer is Kendra (technically works but overengineered)
  • Bedrock when they mean semantic search (sounds AI-ish)
  • Elasticsearch (outdated name) when they mean OpenSearch

When you see a question, eliminate distractors first. If the scenario says “unstructured documents” and you see “DynamoDB,” cross it out. If it says “AI generation,” cross out “Kendra.”
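The elimination habit can be drilled the same way. A minimal sketch, with the cue phrases and cross-out sets invented here for illustration:

```python
# Hypothetical cue-word table for eliminating distractors.
# Keys are phrases spotted in the scenario; values are options to cross out.
ELIMINATE = {
    "unstructured documents": {"DynamoDB", "RDS"},
    "AI generation": {"Amazon Kendra", "Amazon OpenSearch"},
    "log analytics": {"Amazon Kendra", "Amazon Bedrock"},
}

def surviving_options(scenario_cues, options):
    """Cross out every option an observed cue eliminates; return what's left."""
    crossed = set()
    for cue in scenario_cues:
        crossed |= ELIMINATE.get(cue, set())
    return [o for o in options if o not in crossed]

opts = ["Amazon Kendra", "Amazon OpenSearch", "DynamoDB", "Amazon Bedrock"]
print(surviving_options(["unstructured documents"], opts))
# ['Amazon Kendra', 'Amazon OpenSearch', 'Amazon Bedrock']
```

After elimination you are usually choosing between two services, which is where the Step 1 mapping decides it.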

What To Focus On (And What To Skip)

Focus on these:

  • The difference between searching (Kendra, OpenSearch) and generating (Bedrock).
  • When to use Kendra for unstructured documents (permission-aware access control, relevance-based ranking).
  • When to use OpenSearch for structured data, logs, or analytics (sub-second latency, custom queries).
  • Cost comparison: per-request vs. per-node billing.
  • The fact that Elasticsearch is now OpenSearch on AWS.

Skip these (not on SAA-C03):

  • Deep OpenSearch configuration (node types, shard allocation).
  • Bedrock model fine-tuning or advanced prompt engineering.
  • Building a search system from scratch with Lambda + DynamoDB (you won’t be asked).
  • Kendra advanced features like custom synonyms or entity extraction.

The exam tests selection and decision-making, not implementation depth.

Your Next Move

Right now, do this:

  1. Find or write 3 scenario questions that mix Kendra, OpenSearch, and at least one distractor (RDS, Bedrock, Lambda). Spend 2 minutes on each. Write your answer and the reasoning.

  2. Take a practice test section focused on AI/search questions only. If you don’t have one, use the AWS Skills Builder or any SAA-C03 practice exam platform and filter for “machine learning” or “search” topics. Score it. Look at misses.

  3. Retake that section in 2 days after reviewing only the services you got wrong. You should see improvement within 3–4 attempts.

If you’re scoring under 700 on full practice exams, this gap (AI and search scenarios) might be costing you 20–40 points. Close it before test day.

Ready to pass?

Start AWS Practice Exam on Certsqill →

1,000+ exam-accurate questions, AI Tutor explanations, and a performance dashboard that shows exactly which domains to fix.