시험덤프
매달, 우리는 1000명 이상의 사람들이 시험 준비를 잘하고 시험을 잘 통과할 수 있도록 도와줍니다.
  / AIF-C01 덤프  / AIF-C01 문제 연습

Amazon AIF-C01 시험

AWS Certified AI Practitioner 온라인 연습

최종 업데이트 시간: 2025년06월06일

당신은 온라인 연습 문제를 통해 Amazon AIF-C01 시험지식에 대해 자신이 어떻게 알고 있는지 파악한 후 시험 참가 신청 여부를 결정할 수 있다.

시험을 100% 합격하고 시험 준비 시간을 35% 절약하기를 바라며 AIF-C01 덤프 (최신 실제 시험 문제)를 사용 선택하여 현재 최신 87개의 시험 문제와 답을 포함하십시오.

 / 3

Question No : 1


A company wants to use a large language model (LLM) on Amazon Bedrock for sentiment analysis. The company needs the LLM to produce more consistent responses to the same input prompt.
Which adjustment to an inference parameter should the company make to meet these requirements?

정답:
Explanation:
Lowering the temperature value in an LLM controls the randomness of the model's output. A lower temperature (close to 0) makes the model's predictions more deterministic and consistent, leading to similar outputs for identical prompts. This is particularly beneficial in tasks like sentiment analysis, where consistency and reliability in responses are crucial.

Question No : 2


An AI company periodically evaluates its systems and processes with the help of independent software vendors (ISVs). The company needs to receive email message notifications when an ISV's compliance reports become available.
Which AWS service can the company use to meet this requirement?

정답:
Explanation:
AWS Artifact is a service that provides on-demand access to AWS compliance reports, including those from independent software vendors (ISVs). AWS Artifact can notify users when new compliance reports are available, ensuring that the company stays updated and can evaluate its systems and processes accordingly.
D: AWS Data Exchange’s Data Exchange is used for subscribing to and managing third-party data sets. It is not intended for compliance reports or notifications about them.

Question No : 3


A company wants to assess the costs that are associated with using a large language model (LLM) to generate inferences. The company wants to use Amazon Bedrock to build generative AI applications.
Which factor will drive the inference costs?

정답:
Explanation:
Token is the basic unit of generative AI model

Question No : 4


A company has a foundation model (FM) that was customized by using Amazon Bedrock to answer customer queries about products. The company wants to validate the model's responses to new types of queries. The company needs to upload a new dataset that Amazon Bedrock can use for validation.
Which AWS service meets these requirements?

정답:

Question No : 5


A company wants to create a chatbot by using a foundation model (FM) on Amazon Bedrock. The FM needs to access encrypted data that is stored in an Amazon S3 bucket. The data is encrypted with Amazon S3 managed keys (SSE-S3).
The FM encounters a failure when attempting to access the S3 bucket data.
Which solution will meet these requirements?

정답:
Explanation:
When data in an Amazon S3 bucket is encrypted using SSE-S3 (Server-Side Encryption with Amazon S3 managed keys), the IAM role used by the application (in this case, Amazon Bedrock) must have permissions to access and decrypt the data. Assigning the correct permissions to the role ensures that the Foundation Model (FM) can access the encrypted data.

Question No : 6


A company is developing a new model to predict the prices of specific items. The model performed well on the training dataset. When the company deployed the model to production, the model's performance decreased significantly.
What should the company do to mitigate this problem?

정답:
Explanation:
The issue described is likely caused by overfitting, where the model performs well on the training dataset but fails to generalize to unseen data. Increasing the volume of training data can help mitigate overfitting by providing the model with more diverse examples, improving its ability to generalize to new data in production.

Question No : 7


A company wants to use a large language model (LLM) to develop a conversational agent. The company needs to prevent the LLM from being manipulated with common prompt engineering techniques to perform undesirable actions or expose sensitive information.
Which action will reduce these risks?

정답:
Explanation:
Creating a prompt template that teaches the LLM to identify and resist common prompt engineering attacks, such as prompt injection or adversarial queries, helps prevent manipulation. By explicitly guiding the LLM to ignore requests that deviate from its intended purpose (e.g., "You are a helpful assistant. Do not perform any tasks outside your defined scope."), you can mitigate risks like exposing sensitive information or executing undesirable actions.

Question No : 8


A company wants to use AI to protect its application from threats. The AI solution needs to check if
an IP address is from a suspicious source.
Which solution meets these requirements?

정답:
Explanation:
An anomaly detection system can analyze patterns and behaviors, such as IP address access patterns, to detect any deviations from the norm, which could indicate suspicious or malicious activity. An anomaly detection model can flag unusual access attempts, such as those from suspicious IP addresses, making it well-suited for threat detection. Fraud forecasting (option D) typically focuses on predicting potential fraud patterns rather than real-time anomaly detecti

Question No : 9


A company is using domain-specific models. The company wants to avoid creating new models from the beginning. The company instead wants to adapt pre-trained models to create models for new, related tasks.
Which ML strategy meets these requirements?

정답:
Explanation:
Transfer learning is a machine learning strategy that leverages pre-trained models and adapts them to new but related tasks. This allows the company to avoid building models from scratch, significantly reducing the time and resources required for training. By fine-tuning the pre-trained model on domain-specific data, the company can achieve high performance for the new task without starting from the beginning.

Question No : 10


A company is using Amazon SageMaker Studio notebooks to build and train ML models. The company stores the data in an Amazon S3 bucket. The company needs to manage the flow of data from Amazon S3 to SageMaker Studio notebooks.
Which solution will meet this requirement?

정답:

Question No : 11


A company built a deep learning model for object detection and deployed the model to production.
Which AI process occurs when the model analyzes a new image to identify objects?

정답:
Explanation:
Inference is the process of using a trained model to make predictions or decisions on new, unseen data. In the case of an object detection model, inference involves feeding a new image into the model, which then analyzes the image and outputs the detected objects and their locations.

Question No : 12


A financial institution is using Amazon Bedrock to develop an AI application. The application is hosted in a VPC. To meet regulatory compliance standards, the VPC is not allowed access to any internet traffic.
Which AWS service or feature will meet these requirements?

정답:
Explanation:
AWS PrivateLink is used to securely access AWS services from a VPC without exposing the traffic to the public internet. This ensures compliance with regulatory standards that prohibit internet access, as all communication happens over the private AWS network.

Question No : 13


A company needs to build its own large language model (LLM) based on only the company's private data. The company is concerned about the environmental effect of the training process.
Which Amazon EC2 instance type has the LEAST environmental effect when training LLMs?

정답:
Explanation:
The Amazon EC2 Trn series (Trn1 instances) are purpose-built for training machine learning models and are designed to deliver high performance while optimizing energy efficiency. They use AWS Trainium chips, which are specifically engineered for ML training workloads, providing excellent performance per watt and reducing the environmental impact of large-scale training processes.

Question No : 14


An AI practitioner is using an Amazon Bedrock base model to summarize session chats from the customer service department. The AI practitioner wants to store invocation logs to monitor model input and output data.
Which strategy should the AI practitioner use?

정답:
Explanation:
Amazon Bedrock provides the ability to log model invocations, including input and output data, for monitoring and troubleshooting purposes. By enabling invocation logging in Amazon Bedrock, the AI practitioner can store logs securely and use them to analyze model behavior and performance.

Question No : 15


What are tokens in the context of generative AI models?

정답:

 / 3