
Amazon AIF-C01 Exam

AWS Certified AI Practitioner Online Practice

Last updated: November 17, 2025

You can use these online practice questions to find out how well you know the Amazon AIF-C01 exam material, and then decide whether to register for the exam.

If you want to pass the exam with a 100% success rate and save 35% of your preparation time, choose the AIF-C01 dumps (latest real exam questions), which currently contain 87 up-to-date questions and answers.


Question No : 1


A company wants to create a new solution by using AWS Glue. The company has minimal programming experience with AWS Glue.
Which AWS service can help the company use AWS Glue?

Answer: A (Amazon Q Developer)
Explanation:
AWS Glue is a serverless data integration service that enables users to extract, transform, and load (ETL) data. For a company with minimal programming experience, Amazon Q Developer provides an AI-powered assistant that can generate code, explain AWS services, and guide users through tasks like creating AWS Glue jobs. This makes it an ideal tool to help the company use AWS Glue effectively.
Exact Extract from AWS AI Documents:
From the AWS Documentation on Amazon Q Developer:
"Amazon Q Developer is an AI-powered assistant that helps developers by generating code, answering questions about AWS services, and providing step-by-step guidance for tasks such as building ETL pipelines with AWS Glue. It is designed to assist users with varying levels of expertise, including those with minimal programming experience."
(Source: AWS Documentation, Amazon Q Developer Overview)
Detailed analysis of the options:
Option A: Amazon Q Developer
This is the correct answer. Amazon Q Developer can assist the company by generating AWS Glue scripts, explaining Glue concepts, and providing guidance on setting up ETL jobs, which is particularly helpful for users with limited programming experience.
Option B: AWS Config
AWS Config is used for tracking and managing resource configurations and compliance, not for assisting with coding or using services like AWS Glue. This option is incorrect.
Option C: Amazon Personalize
Amazon Personalize is a machine learning service for building recommendation systems, not for assisting with data integration or AWS Glue. This option is irrelevant.
Option D: Amazon Comprehend
Amazon Comprehend is an NLP service for analyzing text, not for helping users write code or use AWS Glue. This option does not meet the requirements.
Reference: AWS Documentation: Amazon Q Developer Overview (https://aws.amazon.com/q/developer/)
AWS Glue Developer Guide: Introduction to AWS Glue (https://docs.aws.amazon.com/glue/latest/dg/what-is-glue.html)
AWS AI Practitioner Learning Path: Module on AWS Developer Tools and Services
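
For context, the kind of AWS Glue ETL script Amazon Q Developer can generate for a user looks roughly like the minimal PySpark sketch below. The bucket paths and column mappings are placeholders, and the script only runs inside a Glue job, not locally:

    import sys
    from awsglue.transforms import ApplyMapping
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from pyspark.context import SparkContext

    # Standard Glue job boilerplate: resolve arguments and create contexts.
    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext.getOrCreate())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read raw CSV data from S3 (placeholder bucket/prefix).
    source = glue_context.create_dynamic_frame.from_options(
        connection_type="s3",
        connection_options={"paths": ["s3://example-bucket/raw/"]},
        format="csv",
        format_options={"withHeader": True},
    )

    # Rename/cast columns, then write the result back to S3 as Parquet.
    mapped = ApplyMapping.apply(
        frame=source,
        mappings=[("id", "string", "id", "string"),
                  ("amount", "string", "amount", "double")],
    )
    glue_context.write_dynamic_frame.from_options(
        frame=mapped,
        connection_type="s3",
        connection_options={"path": "s3://example-bucket/curated/"},
        format="parquet",
    )
    job.commit()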

Question No : 2


A customer service team is developing an application to analyze customer feedback and automatically classify the feedback into different categories. The categories include product quality, customer service, and delivery experience.
Which AI concept does this scenario present?

Answer: B (Natural language processing (NLP))
Explanation:
The scenario involves analyzing customer feedback and automatically classifying it into categories such as product quality, customer service, and delivery experience. This task requires processing and understanding textual data, which is a core application of natural language processing (NLP). NLP encompasses techniques for analyzing, interpreting, and generating human language, including tasks like text classification, sentiment analysis, and topic modeling, all of which are relevant to this use case.
Exact Extract from AWS AI Documents:
From the AWS AI Practitioner Learning Path:
"Natural Language Processing (NLP) enables machines to understand and process human language. Common NLP tasks include text classification, sentiment analysis, named entity recognition, and topic modeling. Services like Amazon Comprehend can be used to classify text into predefined categories based on content."
(Source: AWS AI Practitioner Learning Path, Module on AI and ML Concepts)
Detailed analysis of the options:
Option A: Computer vision. Computer vision involves processing and analyzing visual data, such as images or videos. Since the scenario deals with textual customer feedback, computer vision is not applicable.
Option B: Natural language processing (NLP). This is the correct answer. The task of classifying customer feedback into categories requires understanding and processing text, which is an NLP task. AWS services like Amazon Comprehend are specifically designed for such text classification tasks.
Option C: Recommendation systems. Recommendation systems suggest items or content based on user preferences or behavior. The scenario does not involve recommending products or services but rather classifying feedback, so this option is incorrect.
Option D: Fraud detection. Fraud detection involves identifying anomalous or fraudulent activities, typically in financial or transactional data. The scenario focuses on text classification, not anomaly detection, making this option irrelevant.
Reference: AWS AI Practitioner Learning Path: Module on AI and ML Concepts
Amazon Comprehend Developer Guide: Text Classification (https://docs.aws.amazon.com/comprehend/latest/dg/how-classification.html)
AWS Documentation: Introduction to NLP (https://aws.amazon.com/what-is/natural-language-processing/)
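
A minimal sketch of this kind of text classification with Amazon Comprehend via boto3 is shown below. The endpoint ARN is a placeholder: a custom classifier must first be trained on labeled examples of each feedback category before it can be queried.

    import boto3

    # Classify one piece of feedback with a custom Amazon Comprehend classifier.
    # The endpoint ARN below is a placeholder, not a real resource.
    comprehend = boto3.client("comprehend", region_name="us-east-1")

    response = comprehend.classify_document(
        Text="The package arrived two weeks late and the box was damaged.",
        EndpointArn="arn:aws:comprehend:us-east-1:111122223333:"
                    "document-classifier-endpoint/feedback",
    )

    # Each candidate class comes back with a confidence score,
    # e.g. "delivery experience".
    for label in response["Classes"]:
        print(label["Name"], label["Score"])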

Question No : 3


A company is using a pre-trained large language model (LLM) to extract information from documents. The company noticed that a newer LLM from a different provider is available on Amazon Bedrock. The company wants to transition to the new LLM on Amazon Bedrock.
What does the company need to do to transition to the new LLM?

Answer: C (Adjust the prompt template)
Explanation:
Transitioning to a new large language model (LLM) on Amazon Bedrock typically involves minimal changes when the new model is pre-trained and available as a foundation model. Since the company is moving from one pre-trained LLM to another, the primary task is to ensure compatibility between the new model's input requirements and the existing application. Adjusting the prompt template is often necessary because different LLMs may have varying prompt formats, tokenization methods, or response behaviors, even for similar tasks like document extraction.
Exact Extract from AWS AI Documents:
From the AWS Bedrock User Guide:
"When switching between foundation models in Amazon Bedrock, you may need to adjust the prompt template to align with the new model’s expected input format and optimize its performance for your use case. Prompt engineering is critical to ensure the model understands the task and generates accurate outputs."
(Source: AWS Bedrock User Guide, Prompt Engineering for Foundation Models)
Detailed analysis of the options:
Option A: Create a new labeled dataset. Creating a new labeled dataset is unnecessary when transitioning to a new pre-trained LLM, as pre-trained models are already trained on large datasets. This option would only be relevant if the company were training a custom model from scratch, which is not the case here.
Option B: Perform feature engineering. Feature engineering is typically associated with traditional machine learning models, not pre-trained LLMs. LLMs process raw text inputs, and transitioning to a new LLM does not require restructuring input features. This option is incorrect.
Option C: Adjust the prompt template. This is the correct approach. Different LLMs may interpret prompts differently due to variations in training data, tokenization, or model architecture. Adjusting the prompt template ensures the new LLM understands the task (e.g., document extraction) and produces the desired output format. AWS documentation emphasizes prompt engineering as a key step when adopting a new foundation model.
Option D: Fine-tune the LLM. Fine-tuning is not required for transitioning to a new pre-trained LLM unless the company needs to customize the model for a highly specific task. Since the question does not indicate a need for customization beyond document extraction (a common LLM capability), fine-tuning is unnecessary.
Reference: AWS Bedrock User Guide: Prompt Engineering for Foundation Models (https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-engineering.html)
AWS AI Practitioner Learning Path: Module on Working with Foundation Models in Amazon Bedrock
Amazon Bedrock Developer Guide: Transitioning Between Models (https://docs.aws.amazon.com/bedrock/latest/devguide/)
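
To make the idea concrete, the sketch below shows the same extraction task wrapped in two different prompt templates. Both templates are illustrative only, not the official format of any specific provider; switching models then mostly means swapping the template:

    # The extraction task stays the same; each model family may expect
    # a different prompt layout.
    EXTRACTION_TASK = "Extract the invoice number and total amount from the document."

    PROMPT_TEMPLATE_MODEL_A = """Human: {task}

    Document:
    {document}

    Assistant:"""

    PROMPT_TEMPLATE_MODEL_B = """[INST] {task}

    {document} [/INST]"""

    def build_prompt(template: str, document: str) -> str:
        return template.format(task=EXTRACTION_TASK, document=document)

    # Transitioning to the new LLM: swap the template, keep the task.
    print(build_prompt(PROMPT_TEMPLATE_MODEL_B, "Invoice #123 ... Total: $45.00"))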

Question No : 4


A company's large language model (LLM) is experiencing hallucinations.
How can the company decrease hallucinations?

Answer: C (Decrease the temperature inference parameter for the model)
Explanation:
Hallucinations in large language models (LLMs) occur when the model generates outputs that are factually incorrect, irrelevant, or not grounded in the input data. To mitigate hallucinations, adjusting the model's inference parameters, particularly the temperature, is a well-documented approach in AWS AI Practitioner resources. The temperature parameter controls the randomness of the model's output. A lower temperature makes the model more deterministic, reducing the likelihood of generating creative but incorrect responses, which are often the cause of hallucinations.
Exact Extract from AWS AI Documents:
From the AWS documentation on Amazon Bedrock and LLMs:
"The temperature parameter controls the randomness of the generated text. Higher values (e.g., 0.8 or above) increase creativity but may lead to less coherent or factually incorrect outputs, while lower values (e.g., 0.2 or 0.3) make the output more focused and deterministic, reducing the likelihood of hallucinations."
(Source: AWS Bedrock User Guide, Inference Parameters for Text Generation)
Detailed analysis of the options:
Option A: Set up Agents for Amazon Bedrock to supervise the model training. Agents for Amazon Bedrock are used to automate tasks and integrate LLMs with external tools, not to supervise model training or directly address hallucinations. This option is incorrect as it does not align with the purpose of Agents in Bedrock.
Option B: Use data pre-processing and remove any data that causes hallucinations. While data pre-processing can improve model performance, identifying and removing specific data that causes hallucinations is impractical because hallucinations are often a result of the model's generative process rather than specific problematic data points. This approach is not directly supported by AWS documentation for addressing hallucinations.
Option C: Decrease the temperature inference parameter for the model. This is the correct approach. Lowering the temperature reduces the randomness in the model's output, making it more likely to stick to factual and contextually relevant responses. AWS documentation explicitly mentions adjusting inference parameters like temperature to control output quality and mitigate issues like hallucinations.
Option D: Use a foundation model (FM) that is trained to not hallucinate. No foundation model is explicitly trained to "not hallucinate," as hallucinations are an inherent challenge in LLMs. While some models may be fine-tuned for specific tasks to reduce hallucinations, this is not a standard feature of foundation models available on Amazon Bedrock.
Reference: AWS Bedrock User Guide: Inference Parameters for Text Generation (https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html)
AWS AI Practitioner Learning Path: Module on Large Language Models and Inference Configuration
Amazon Bedrock Developer Guide: Managing Model Outputs (https://docs.aws.amazon.com/bedrock/latest/devguide/)
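
A minimal sketch of lowering the temperature on an Amazon Bedrock call, assuming boto3 access to the Bedrock Converse API; the model ID is a placeholder for whichever foundation model the account can use:

    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
        messages=[{"role": "user",
                   "content": [{"text": "Summarize the attached policy."}]}],
        # temperature 0.2 keeps output focused and deterministic;
        # 0.8+ is more creative but more prone to hallucination.
        inferenceConfig={"temperature": 0.2, "maxTokens": 512},
    )
    print(response["output"]["message"]["content"][0]["text"])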

Question No : 5


Which AWS feature records details about ML instance data for governance and reporting?

Answer: Amazon SageMaker Model Cards
Explanation:
Amazon SageMaker Model Cards provide a centralized and standardized repository for documenting machine learning models. They capture key details such as the model's intended use, training and evaluation datasets, performance metrics, ethical considerations, and other relevant information. This documentation facilitates governance and reporting by ensuring that all stakeholders have access to consistent and comprehensive information about each model. While Amazon SageMaker Debugger is used for real-time debugging and monitoring during training, and Amazon SageMaker Model Monitor tracks deployed models for data and prediction quality, neither offers the comprehensive documentation capabilities of Model Cards. Amazon SageMaker JumpStart provides pre-built models and solutions but does not focus on governance documentation.
Reference: Amazon SageMaker Model Cards
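
A minimal sketch of creating a Model Card with the CreateModelCard API via boto3. The content shown is an abbreviated, assumed subset of the model card schema; see the Model Cards documentation for the full structure:

    import json
    import boto3

    sagemaker = boto3.client("sagemaker", region_name="us-east-1")

    # Abbreviated card content (placeholder descriptions).
    card_content = {
        "model_overview": {
            "model_description": "Demand forecasting model for retail inventory.",
        },
        "intended_uses": {
            "purpose_of_model": "Weekly demand prediction for planning.",
        },
    }

    sagemaker.create_model_card(
        ModelCardName="demand-forecast-card",
        Content=json.dumps(card_content),
        ModelCardStatus="Draft",  # can later move through review to Approved
    )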

Question No : 6


What does an F1 score measure in the context of foundation model (FM) performance?

Answer: A (a balance of precision and recall)
Explanation:
The F1 score is a metric used to evaluate the performance of a classification model by considering both precision and recall. Precision measures the accuracy of positive predictions (i.e., the proportion of true positive predictions among all positive predictions made by the model), while recall measures the model's ability to identify all relevant positive instances (i.e., the proportion of true positive predictions among all actual positive instances). The F1 score is the harmonic mean of precision and recall, providing a single metric that balances both concerns. This is particularly useful when dealing with imbalanced datasets or when the cost of false positives and false negatives is significant.
Options B, C, and D pertain to other aspects of model performance but are not related to the F1 score.
Reference: AWS Certified AI Practitioner Exam Guide
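
A short worked example of the metric, computed from made-up true-positive, false-positive, and false-negative counts:

    # F1 is the harmonic mean of precision and recall.
    tp, fp, fn = 80, 20, 40

    precision = tp / (tp + fp)   # 0.800: how many flagged positives were right
    recall = tp / (tp + fn)      # 0.667: how many actual positives were found
    f1 = 2 * precision * recall / (precision + recall)

    print(f"precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")  # f1 ≈ 0.727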

Question No : 7


A retail store wants to predict the demand for a specific product for the next few weeks by using the Amazon SageMaker DeepAR forecasting algorithm.
Which type of data will meet this requirement?

Answer: Time series data
Explanation:
Amazon SageMaker's DeepAR is a supervised learning algorithm designed for forecasting scalar (one-dimensional) time series data. Time series data consists of sequences of data points indexed in time order, typically with consistent intervals between them. In the context of a retail store aiming to predict product demand, relevant time series data might include historical sales figures, inventory levels, or related metrics recorded over regular time intervals (e.g., daily or weekly). By training the DeepAR model on this historical time series data, the store can generate forecasts for future product demand. This capability is particularly useful for inventory management, staffing, and supply chain optimization. Other data types, such as text, image, or binary data, are not suitable for time series forecasting tasks and would not be appropriate inputs for the DeepAR algorithm.
Reference: Amazon SageMaker DeepAR Algorithm
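
For reference, DeepAR consumes training data as JSON Lines, one time series per line, with a start timestamp and the ordered observations. A minimal sketch with placeholder sales figures:

    import json

    series = {
        "start": "2024-01-01 00:00:00",           # timestamp of first observation
        "target": [112, 98, 105, 120, 133, 127],  # e.g. daily units sold, in order
        "cat": [0],                               # optional: product category index
    }

    with open("train.jsonl", "w") as f:
        f.write(json.dumps(series) + "\n")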

Question No : 8


A company built an AI-powered resume screening system. The company used a large dataset to train the model. The dataset contained resumes that were not representative of all demographics.
Which core dimension of responsible AI does this scenario present?

Answer: Fairness
Explanation:
Fairness is the core dimension of responsible AI concerned with the absence of bias in AI models. Training on a dataset that does not represent all demographics leads to biased predictions that treat certain groups unfairly. Explainability, privacy, and transparency are also important dimensions, but they are not what this scenario presents.
Reference: AWS Responsible AI Framework.
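
One simple way to surface this kind of bias is a disparate-impact check on screening outcomes. The numbers below are made up; the point is comparing selection rates across groups:

    outcomes = {
        "group_a": {"selected": 60, "total": 100},
        "group_b": {"selected": 30, "total": 100},
    }

    rates = {g: v["selected"] / v["total"] for g, v in outcomes.items()}
    ratio = min(rates.values()) / max(rates.values())

    # A common rule of thumb flags ratios below 0.8 as a potential fairness issue.
    print(rates, f"impact ratio={ratio:.2f}")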

Question No : 9


A company deployed an AI/ML solution to help customer service agents respond to frequently asked questions. The questions can change over time. The company wants to give customer service agents the ability to ask questions and receive automatically generated answers to common customer questions.
Which strategy will meet these requirements MOST cost-effectively?

Answer: Use Retrieval Augmented Generation (RAG)
Explanation:
Retrieval Augmented Generation (RAG) combines a pre-trained large language model with a retrieval mechanism that fetches relevant context from a knowledge base at query time. This approach is cost-effective because it avoids frequent model retraining while keeping answers contextually accurate and up to date, even as the common questions change.
Reference: AWS RAG Techniques.
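
A minimal, self-contained RAG sketch: retrieve the most relevant FAQ entries by keyword overlap, then assemble them into the prompt sent to an LLM. A production system would use vector embeddings and a managed store such as Amazon Bedrock Knowledge Bases; both are simplified away here:

    FAQ = [
        "Refunds are processed within 5 business days.",
        "Shipping is free for orders over $50.",
        "Support is available 24/7 via chat.",
    ]

    def retrieve(question: str, k: int = 2) -> list[str]:
        # Rank FAQ entries by word overlap with the question (toy retriever).
        words = set(question.lower().split())
        return sorted(FAQ, key=lambda doc: -len(words & set(doc.lower().split())))[:k]

    def build_prompt(question: str) -> str:
        context = "\n".join(retrieve(question))
        return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

    print(build_prompt("How long do refunds take?"))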

Question No : 10


A company needs to train an ML model to classify images of different types of animals. The company has a large dataset of labeled images and will not label more data.
Which type of learning should the company use to train the model?

Answer: Supervised learning
Explanation:
Supervised learning is appropriate when the dataset is labeled. The model uses this data to learn patterns and classify images. Unsupervised learning, reinforcement learning, and active learning are not suitable since they either require unlabeled data or different problem settings.
Reference: AWS Machine Learning Best Practices.
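
A minimal sketch of supervised image classification, where scikit-learn's small digits dataset stands in for the company's labeled animal images:

    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)  # images plus their labels
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # The model learns the image-to-label mapping purely from labeled examples.
    model = LogisticRegression(max_iter=2000).fit(X_train, y_train)
    print("accuracy:", model.score(X_test, y_test))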

Question No : 11


What does an F1 score measure in the context of foundation model (FM) performance?

Answer: The harmonic mean of precision and recall
Explanation:
The F1 score is the harmonic mean of precision and recall, making it a balanced metric for evaluating model performance when there is an imbalance between false positives and false negatives. Speed, cost, and energy efficiency are unrelated to the F1 score.
Reference: AWS Foundation Models Guide.

Question No : 12


A pharmaceutical company wants to analyze user reviews of new medications and provide a concise overview for each medication.
Which solution meets these requirements?

Answer: Use a large language model (LLM) on Amazon Bedrock to summarize the reviews
Explanation:
Amazon Bedrock provides large language models (LLMs) that are optimized for natural language understanding and text summarization tasks, making it the best choice for creating concise summaries of user reviews. Time-series forecasting, classification, and image analysis (Rekognition) are not suitable for summarizing textual data.
Reference: AWS Bedrock Documentation.
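
A minimal sketch of the summarization call: concatenate the user reviews into one prompt and send it to a Bedrock model through the Converse API. The reviews and model ID are placeholders:

    import boto3

    reviews = [
        "Worked quickly, but the tablets are hard to swallow.",
        "No side effects so far; price is reasonable.",
    ]
    prompt = ("Summarize these medication reviews in two sentences:\n- "
              + "\n- ".join(reviews))

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = bedrock.converse(
        modelId="amazon.titan-text-express-v1",  # placeholder model ID
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    print(response["output"]["message"]["content"][0]["text"])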

Question No : 13


Which option is a benefit of using Amazon SageMaker Model Cards to document AI models?

Answer: Model Cards provide standardized, transparent documentation of a model's purpose, performance, and limitations
Explanation:
Amazon SageMaker Model Cards provide a standardized way to document important details about an AI model, such as its purpose, performance, intended usage, and known limitations. This enables transparency and compliance while fostering better communication between stakeholders. It does not store models physically or optimize computational requirements.
Reference: AWS SageMaker Model Cards Documentation.

Question No : 14


A company has thousands of customer support interactions per day and wants to analyze these interactions to identify frequently asked questions and develop insights.
Which AWS service can the company use to meet this requirement?

Answer: B (Amazon Comprehend)
Explanation:
Amazon Comprehend is the correct service to analyze customer support interactions and identify frequently asked questions and insights.
Amazon Comprehend:
A natural language processing (NLP) service that uses machine learning to uncover insights and relationships in text.
Capable of extracting key phrases, detecting entities, analyzing sentiment, and identifying topics from text data, making it ideal for analyzing customer support interactions.
Why Option B is Correct:
Text Analysis Capabilities: Can process large volumes of text to identify common topics, phrases, and sentiment, providing valuable insights.
Suitable for Customer Support Analysis: Specifically designed to understand the content and meaning of text, which is key for identifying frequently asked questions.
Why Other Options are Incorrect:
A. Amazon Lex: Used for building conversational interfaces, not for text analysis.
C. Amazon Transcribe: Converts speech to text but does not perform text analysis.
D. Amazon Translate: Used for translating text between languages, not for analyzing content.
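
A minimal sketch of this analysis with boto3: extract key phrases from support transcripts with Amazon Comprehend and count the most frequent ones as candidate FAQs. The transcripts are placeholders:

    from collections import Counter
    import boto3

    comprehend = boto3.client("comprehend", region_name="us-east-1")

    transcripts = [
        "How do I reset my password?",
        "I need to reset my password again.",
    ]

    phrases = Counter()
    for text in transcripts:
        result = comprehend.detect_key_phrases(Text=text, LanguageCode="en")
        phrases.update(p["Text"].lower() for p in result["KeyPhrases"])

    # The most common key phrases point at frequently asked questions.
    print(phrases.most_common(5))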

Question No : 15


A company has a database of petabytes of unstructured data from internal sources. The company wants to transform this data into a structured format so that its data scientists can perform machine learning (ML) tasks.
Which service will meet these requirements?

Answer: D (AWS Glue)
Explanation:
AWS Glue is the correct service for transforming petabytes of unstructured data into a structured format suitable for machine learning tasks.
AWS Glue:
A fully managed extract, transform, and load (ETL) service that makes it easy to prepare and transform unstructured data into a structured format.
Provides a range of tools for cleaning, enriching, and cataloging data, making it ready for data scientists to use in ML models.
Why Option D is Correct:
Data Transformation: AWS Glue can handle large volumes of data and transform unstructured data into structured formats efficiently.
Integrated ML Support: Glue integrates with other AWS services to support ML workflows.
Why Other Options are Incorrect:
A. Amazon Lex: Used for building chatbots, not for data transformation.
B. Amazon Rekognition: Used for image and video analysis, not for data transformation.
C. Amazon Kinesis Data Streams: Handles real-time data streaming, not suitable for batch transformation of large volumes of unstructured data.
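
As a first step in such a pipeline, a Glue crawler can catalog the raw data in S3 with an inferred schema before ETL jobs transform it. A minimal boto3 sketch, with placeholder bucket, role, and names:

    import boto3

    glue = boto3.client("glue", region_name="us-east-1")

    # Point a crawler at the raw data so it lands in the Glue Data Catalog.
    glue.create_crawler(
        Name="raw-data-crawler",
        Role="arn:aws:iam::111122223333:role/GlueServiceRole",  # placeholder role
        DatabaseName="raw_catalog",
        Targets={"S3Targets": [{"Path": "s3://example-bucket/raw/"}]},
    )
    glue.start_crawler(Name="raw-data-crawler")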
