AWS Certified AI Practitioner AIF-C01

#6 Single Choice

A company uses Amazon SageMaker for its ML pipeline in a production environment. The company has large input payloads of up to 1 GB and processing times of up to 1 hour. The company needs near-real-time latency.
Which SageMaker inference option meets these requirements?

A. Real-time inference
B. Serverless inference
C. Asynchronous inference (Most Voted)
D. Batch transform
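The community-voted answer is C: asynchronous inference queues requests, accepts payloads up to 1 GB, and allows long processing times while still returning results in near real time. A minimal sketch of how a client invokes such an endpoint with boto3 (the endpoint and bucket names are made up; the dict maps to the `sagemaker-runtime` client's `invoke_endpoint_async` parameters):

```python
# Sketch (hypothetical endpoint/bucket names): with asynchronous inference,
# the payload is staged in Amazon S3 and the endpoint is invoked by
# reference, which is how requests up to 1 GB are supported.

def build_async_invoke_request(endpoint_name: str, s3_input_uri: str) -> dict:
    """Arguments for SageMakerRuntime.invoke_endpoint_async (boto3)."""
    return {
        "EndpointName": endpoint_name,
        "InputLocation": s3_input_uri,  # payload lives in S3, not in the HTTP request
        "ContentType": "application/json",
    }

request = build_async_invoke_request(
    "prod-ml-endpoint", "s3://example-bucket/async-inputs/payload.json"
)
```

The response is likewise written back to S3, so the caller polls or subscribes to a notification rather than holding the connection open for up to an hour.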

#7 Single Choice

A company is using domain-specific models. The company wants to avoid creating new models from scratch. Instead, the company wants to adapt pre-trained models to create models for new, related tasks.
Which ML strategy meets these requirements?

A. Increase the number of epochs.
B. Use transfer learning. (Most Voted)
C. Decrease the number of epochs.
D. Use unsupervised learning.
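The voted answer is B. The core idea of transfer learning can be shown with a framework-agnostic toy sketch (all layer and head names are illustrative): freeze the pre-trained base and train only a new task-specific head, rather than training from scratch.

```python
# Toy sketch of transfer learning (illustrative names, no real framework):
# pre-trained weights are kept frozen; only a newly attached head is
# marked trainable for the new, related task.

def apply_transfer_learning(pretrained_layers: dict, head_name: str) -> dict:
    model = {
        name: {"weights": weights, "trainable": False}  # freeze base layers
        for name, weights in pretrained_layers.items()
    }
    # Only the new head's parameters are updated during fine-tuning.
    model[head_name] = {"weights": None, "trainable": True}
    return model

model = apply_transfer_learning(
    {"embedding": [0.1], "encoder": [0.2]}, "new_task_head"
)
```

In a real framework this corresponds to loading pre-trained weights, setting the base layers to non-trainable, and attaching a fresh output layer; the epoch-count options (A and C) do not reuse any prior knowledge.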

#8 Single Choice

A company is building a solution to generate images for protective eyewear. The solution must have high accuracy and must minimize the risk of
incorrect annotations.
Which solution will meet these requirements?

A. Human-in-the-loop validation by using Amazon SageMaker Ground Truth Plus (Most Voted)
B. Data augmentation by using an Amazon Bedrock knowledge base
C. Image recognition by using Amazon Rekognition
D. Data summarization by using Amazon QuickSight Q

#9 Single Choice

A company wants to create a chatbot by using a foundation model (FM) on Amazon Bedrock. The FM needs to access encrypted data that is
stored in an Amazon S3 bucket. The data is encrypted with Amazon S3 managed keys (SSE-S3).
The FM encounters a failure when attempting to access the S3 bucket data.
Which solution will resolve this issue?

A. Ensure that the role that Amazon Bedrock assumes has permission to decrypt data with the correct encryption key. (Most Voted)
B. Set the access permissions for the S3 buckets to allow public access to enable access over the internet.
C. Use prompt engineering techniques to tell the model to look for information in Amazon S3.
D. Ensure that the S3 data does not contain sensitive information.
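For the voted answer, here is a sketch (with a hypothetical bucket name) of the permissions the role that Amazon Bedrock assumes would need. With SSE-S3, Amazon S3 decrypts objects transparently for any principal allowed to read them, so `s3:GetObject` is the key action; SSE-KMS would additionally require `kms:Decrypt` on the key.

```python
import json

# Hypothetical IAM policy for the Bedrock execution role. Because the data
# is encrypted with S3 managed keys (SSE-S3), no separate KMS permission
# is needed -- read access to the objects is sufficient.
bedrock_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-chatbot-data",    # bucket itself (ListBucket)
                "arn:aws:s3:::example-chatbot-data/*",  # objects (GetObject)
            ],
        }
    ],
}
print(json.dumps(bedrock_role_policy, indent=2))
```

Options B and D weaken or sidestep security rather than fixing the access failure, and option C cannot grant permissions through prompting.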

#10 Single Choice

A company wants to use language models to create an application for inference on edge devices. The inference must have the lowest latency
possible.
Which solution will meet these requirements?

A. Deploy optimized small language models (SLMs) on edge devices. (Most Voted)
B. Deploy optimized large language models (LLMs) on edge devices.
C. Incorporate a centralized small language model (SLM) API for asynchronous communication with edge devices.
D. Incorporate a centralized large language model (LLM) API for asynchronous communication with edge devices.
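The trade-off behind the voted answer can be made concrete with a toy latency model (all numbers below are illustrative assumptions, not benchmarks): an on-device SLM avoids the network round trip that any centralized API incurs, and an SLM is small enough to fit edge hardware, unlike an LLM.

```python
# Toy latency model with assumed, illustrative numbers (not measurements).
NETWORK_ROUND_TRIP_MS = 80   # assumed WAN latency to a centralized API
ON_DEVICE_SLM_MS = 40        # assumed local SLM inference time
CENTRAL_LLM_MS = 150         # assumed centralized LLM inference time

on_device_total = ON_DEVICE_SLM_MS                       # no network hop
centralized_total = NETWORK_ROUND_TRIP_MS + CENTRAL_LLM_MS
```

Under any plausible numbers, the centralized options (C and D) pay the network round trip on every request, which rules them out when the lowest possible latency is required.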

© 2026 CloudTechExam