
Amazon SOA-C03 Exam

AWS Certified CloudOps Engineer - Associate Online Practice

Last updated: November 17, 2025

You can use these online practice questions to gauge your knowledge of the Amazon SOA-C03 exam topics and then decide whether to register for the exam.

If you want to pass the exam on the first attempt and cut your preparation time by 35%, choose the SOA-C03 dumps (the latest real exam questions), which currently include 65 up-to-date questions and answers.


Question No : 1


A company uses Amazon ElastiCache (Redis OSS) to cache application data. A CloudOps engineer must implement a solution to increase the resilience of the cache. The solution also must minimize the recovery time objective (RTO).
Which solution will meet these requirements?

Answer:
Explanation:
Comprehensive and Detailed Explanation From Exact Extract of AWS CloudOps Documents:
For high availability and fast failover, ElastiCache for Redis supports replication groups with Multi-AZ and automatic failover. CloudOps guidance states that a primary node can be paired with one or more replicas across multiple Availability Zones; if the primary fails, Redis automatically promotes a replica to primary in seconds, thereby minimizing RTO. This architecture maintains in-memory data continuity without waiting for backup restore operations. Backups (Options B and D) provide durability but require restore and re-warm procedures that increase RTO and may impact application latency. Switching engines (Option A) to Memcached does not provide Redis replication/failover
semantics and would not inherently improve resilience for this use case. Therefore, creating a read replica in a different AZ and enabling Multi-AZ with automatic failover is the prescribed CloudOps pattern to increase resilience and achieve the lowest practical RTO for Redis caches.
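As a rough sketch, the replication-group settings described above could be expressed as boto3 parameters like the following (the group ID, node type, and replica count are illustrative assumptions, not taken from the question):

```python
# Hypothetical parameters for a Redis replication group with one cross-AZ
# replica, Multi-AZ, and automatic failover enabled.
replication_group_params = {
    "ReplicationGroupId": "app-cache",                     # placeholder ID
    "ReplicationGroupDescription": "App cache with Multi-AZ failover",
    "Engine": "redis",
    "CacheNodeType": "cache.r6g.large",                    # placeholder node type
    "NumNodeGroups": 1,
    "ReplicasPerNodeGroup": 1,                             # one replica in another AZ
    "MultiAZEnabled": True,                                # replicas span AZs
    "AutomaticFailoverEnabled": True,                      # promote a replica on primary failure
}

# With boto3, this dict would be passed to:
#   boto3.client("elasticache").create_replication_group(**replication_group_params)
```

Because failover promotes an existing in-memory replica rather than restoring a backup, the RTO is typically seconds rather than the minutes a restore-and-rewarm cycle would take.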
References (AWS CloudOps Documents / Study Guide):
• AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Reliability and Business Continuity
• Amazon ElastiCache for Redis - Replication Groups, Multi-AZ, and Automatic Failover
• AWS Well-Architected Framework - Reliability Pillar

Question No : 2


A CloudOps engineer configures an application to run on Amazon EC2 instances behind an Application Load Balancer (ALB) in a simple scaling Auto Scaling group with the default settings. The Auto Scaling group is configured to use the RequestCountPerTarget metric for scaling. The CloudOps engineer notices that the RequestCountPerTarget metric exceeded the specified limit twice in 180 seconds.
How will the number of EC2 instances in this Auto Scaling group be affected in this scenario?

Answer:
Explanation:
Comprehensive and Detailed Explanation From Exact Extract of AWS CloudOps Documents:
With simple scaling policies, an Auto Scaling group performs one scaling activity when the alarm condition is met, then observes a default cooldown period (300 seconds) before another scaling activity of the same type can begin. CloudOps guidance explains that cooldown prevents rapid successive scale-outs by allowing time for the newly launched instance(s) to register with the load balancer and impact the metric. Even if the alarm breaches multiple times during the cooldown
window, the group waits until the cooldown completes before evaluating and acting again. In this case, although RequestCountPerTarget exceeded the threshold twice within 180 seconds, the group will launch a single instance and then wait for cooldown before any additional scale-out can occur.
Options A, C, and D do not reflect the behavior of simple scaling with cooldowns; A describes step/target-tracking-like behavior, and C/D are not Auto Scaling mechanics.
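The cooldown behavior described above can be illustrated with a toy simulation (the 300-second default cooldown and the breach times are the only inputs; this is a sketch of the logic, not Auto Scaling's actual implementation):

```python
COOLDOWN_SECONDS = 300  # default simple-scaling cooldown

def scale_out_events(breach_times, cooldown=COOLDOWN_SECONDS):
    """Return the breach times that actually trigger a scale-out,
    given that breaches during an active cooldown are ignored."""
    triggered = []
    cooldown_until = float("-inf")
    for t in sorted(breach_times):
        if t >= cooldown_until:
            triggered.append(t)
            cooldown_until = t + cooldown
    return triggered

# Two breaches within 180 seconds -> only the first launches an instance.
print(scale_out_events([0, 120]))  # [0]
```

A second breach at t=400, after the cooldown expired, would trigger again; within the window it is simply absorbed.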
References (AWS CloudOps Documents / Study Guide):
• Amazon EC2 Auto Scaling - Simple Scaling Policies and Cooldown (User Guide)
• Elastic Load Balancing Metrics - ALB RequestCountPerTarget (CloudWatch Metrics)
• AWS Well-Architected Framework - Performance Efficiency & Operational Excellence

Question No : 3


A company hosts a production MySQL database on an Amazon Aurora single-node DB cluster. The database is queried heavily for reporting purposes. The DB cluster is experiencing periods of performance degradation because of high CPU utilization and maximum connections errors. A CloudOps engineer needs to improve the stability of the database.
Which solution will meet these requirements?

Answer:
Explanation:
Comprehensive and Detailed Explanation From Exact Extract of AWS CloudOps Documents:
Amazon Aurora supports up to 15 Aurora Replicas that share the same storage volume and provide read scaling and improved availability. Official guidance states that replicas “offload read traffic from the writer” and that you should direct read-only workloads to the reader endpoint, reducing CPU pressure and connection counts on the primary. Aurora also supports Replica Auto Scaling through Application Auto Scaling policies using metrics such as CPU utilization or connections to add or remove replicas automatically. This design addresses both high CPU and maximum connections by moving reporting traffic to read replicas while keeping a single write primary for OLTP.
Option B creates a separate cluster with independent storage, increasing operational overhead and data synchronization complexity.
Options C and D introduce application-layer caching changes that may not guarantee data freshness or relieve the write node directly. Therefore, adding read replicas and routing reporting to the reader endpoint, with auto scaling based on load, is the least intrusive, CloudOps-aligned way to stabilize performance.
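A minimal sketch of what the replica auto scaling setup might look like, expressed as Application Auto Scaling parameters (the cluster identifier, capacity limits, and target value are hypothetical):

```python
# Scalable target: the number of Aurora Replicas in a cluster.
scalable_target = {
    "ServiceNamespace": "rds",
    "ResourceId": "cluster:reporting-db",          # placeholder cluster ID
    "ScalableDimension": "rds:cluster:ReadReplicaCount",
    "MinCapacity": 1,
    "MaxCapacity": 15,                             # Aurora supports up to 15 replicas
}

# Target-tracking policy: add replicas when average reader CPU rises.
scaling_policy = {
    "PolicyName": "cpu-target-tracking",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization",
        },
        "TargetValue": 60.0,                       # illustrative threshold
    },
}

# With boto3 these would go to application-autoscaling's
# register_scalable_target() and put_scaling_policy().
```

Reporting queries are then pointed at the cluster's reader endpoint, which load-balances across whatever replicas currently exist.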
References (AWS CloudOps Documents / Study Guide):
• Amazon Aurora - Replicas and Reader Endpoint (Aurora User Guide)
• Aurora Replica Auto Scaling (Aurora & Application Auto Scaling Guides)
• AWS Well-Architected Framework - Reliability & Performance Efficiency

Question No : 4


A company runs a website on Amazon EC2 instances. Users can upload images to an Amazon S3 bucket and publish the images to the website. The company wants to deploy a serverless image-processing application that uses an AWS Lambda function to resize the uploaded images.
The company's development team has created the Lambda function. A CloudOps engineer must implement a solution to invoke the Lambda function when users upload new images to the S3 bucket.
Which solution will meet this requirement?

Answer:
Explanation:
Comprehensive and Detailed Explanation From Exact Extract of AWS CloudOps Documents:
Use Amazon S3 Event Notifications with AWS Lambda to trigger image processing on object creation. S3 natively supports invoking Lambda for events such as s3:ObjectCreated:*, providing a serverless, low-latency pipeline without managing additional services. AWS operational guidance states that “Amazon S3 can directly invoke a Lambda function in response to object-created events,” allowing you to pass event metadata (bucket/key) to the function for resizing and writing results back to S3. This approach minimizes operational overhead, scales automatically with upload volume, and integrates with standard retry semantics. SNS or SQS can be added for fan-out or buffering patterns,
but they are not required when the requirement is simply “invoke the Lambda function on upload.” CloudWatch alarms do not detect individual S3 object uploads and cannot directly satisfy per-object triggers. Therefore, configuring S3 → Lambda event notifications meets the requirement most directly and aligns with CloudOps best practices for event-driven, serverless automation.
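A minimal sketch of such a handler, showing only how the bucket and key are read from an S3 ObjectCreated event record (the bucket name and key are placeholders, and the actual resize step is omitted):

```python
def handler(event, context):
    """Extract (bucket, key) pairs from an S3 event notification.
    A real function would download each object, resize it,
    and write the result back to S3."""
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        results.append((bucket, key))
    return results

# Simplified shape of the event S3 delivers for s3:ObjectCreated:* notifications:
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "uploads-bucket"},
                "object": {"key": "images/photo.jpg"}}}
    ]
}
print(handler(sample_event, None))  # [('uploads-bucket', 'images/photo.jpg')]
```

The event notification itself is configured on the bucket (console, CLI, or CloudFormation) with the Lambda function ARN as the destination for the object-created event types.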
References (AWS CloudOps Documents / Study Guide):
• Using AWS Lambda with Amazon S3 (Lambda Developer Guide)
• Amazon S3 Event Notifications (S3 User Guide)
• AWS Well-Architected - Serverless Applications (Operational Excellence)

Question No : 5


Application A runs on Amazon EC2 instances behind a Network Load Balancer (NLB). The EC2 instances are in an Auto Scaling group and are in the same subnet that is associated with the NLB. Other applications from an on-premises environment cannot communicate with Application A on port 8080.
To troubleshoot the issue, a CloudOps engineer analyzes the flow logs. The flow logs include the following records:
ACCEPT from 192.168.0.13:59003 → 172.31.16.139:8080
REJECT from 172.31.16.139:8080 → 192.168.0.13:59003
What is the reason for the rejected traffic?

Answer:
Explanation:
Comprehensive and Detailed Explanation From Exact Extract of AWS CloudOps Documents:
VPC Flow Logs show the request arriving and being ACCEPTed on dstport 8080 and the corresponding response being REJECTed on the return path to the client's ephemeral port (59003). AWS networking guidance states that security groups are stateful (return traffic is automatically allowed) while network ACLs are stateless and require explicit inbound and outbound rules for both directions. CloudOps operational guidance for VPC networking further notes that when you allow an inbound request (for example, TCP 8080) through a subnet's network ACL, you must also allow the outbound ephemeral port range (typically 1024-65535) for the response traffic; otherwise, the return packets are dropped and appear as REJECT in flow logs. The observed pattern (request accepted to 8080, response rejected to 59003) matches a missing outbound ephemeral-range allow rule on the subnet's NACL. Therefore, the cause is the subnet NACL, not security groups or on-premises ACLs. The remediation is to add an outbound ALLOW rule on the NACL for the appropriate ephemeral TCP port range back to the on-premises CIDR (and the corresponding inbound rule if the path is asymmetric).
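The stateless behavior can be illustrated with a toy model of rule evaluation (the rule sets are hypothetical and reduced to port ranges; real NACLs also carry rule numbers, protocols, and CIDRs):

```python
# Stateless evaluation: each direction is checked independently,
# so return traffic to an ephemeral port needs its own outbound rule.
def nacl_allows(rules, port):
    """Return True if any (low, high) port-range rule covers the port."""
    return any(low <= port <= high for (low, high) in rules)

inbound = [(8080, 8080)]       # request to the app port is allowed
outbound = [(8080, 8080)]      # but no outbound ephemeral-range rule

# Flow-log pair from the question:
print(nacl_allows(inbound, 8080))     # True  -> ACCEPT (request)
print(nacl_allows(outbound, 59003))   # False -> REJECT (response)

# Fix: allow the ephemeral range outbound.
outbound.append((1024, 65535))
print(nacl_allows(outbound, 59003))   # True
```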
References (AWS CloudOps Documents / Study Guide):
• AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Networking and Content Delivery
• Amazon VPC - Network ACLs (stateless behavior and rule requirements)
• Amazon VPC - Security Groups (stateful return traffic)
• VPC Flow Logs - Record fields, ACCEPT/REJECT analysis

Question No : 6


A company has a microservice that runs on a set of Amazon EC2 instances. The EC2 instances run behind an Application Load Balancer (ALB).
A CloudOps engineer must use Amazon Route 53 to create a record that maps the ALB URL to example.com.
Which type of record will meet this requirement?

Answer:
Explanation:
An alias record is the recommended Route 53 record type for mapping a domain name (e.g., example.com) to AWS-managed resources such as an Application Load Balancer. Alias records are Route 53 extensions of A and AAAA records that point directly at AWS resources, providing automatic DNS integration at no additional query cost.
AWS documentation states:
“Use alias records to map your domain or subdomain to an AWS resource such as an Application Load Balancer, CloudFront distribution, or S3 website endpoint.”
A and AAAA records are used for static IP addresses, not load balancers. CNAME records cannot be used at the root domain (e.g., example.com). Thus, Option C is correct as it meets CloudOps networking best practices for scalable, managed DNS resolution to ALBs.
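A sketch of the Route 53 change batch for such an alias record (the hosted zone ID and ALB DNS name are placeholders; for an ALB alias, the HostedZoneId is the load balancer's region-specific canonical zone ID, not your own hosted zone):

```python
# Hypothetical UPSERT change batch for an alias A record at the zone apex.
change_batch = {
    "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "example.com.",
            "Type": "A",                    # alias records extend A/AAAA
            "AliasTarget": {
                "HostedZoneId": "Z35SXDOTRQ7X7K",  # placeholder canonical zone ID
                "DNSName": "my-alb-123456.us-east-1.elb.amazonaws.com.",
                "EvaluateTargetHealth": True,
            },
        },
    }]
}

# With boto3:
#   boto3.client("route53").change_resource_record_sets(
#       HostedZoneId="<your-zone-id>", ChangeBatch=change_batch)
```

Note the absence of a TTL and resource records: Route 53 resolves the alias to the ALB's current IP addresses itself, which is why this works at the zone apex where a CNAME cannot.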
References (AWS CloudOps Documents / Study Guide):
• AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Domain 5: Networking and Content Delivery
• Amazon Route 53 Developer Guide - Alias Records
• AWS Well-Architected Framework - Reliability and Performance Efficiency Pillars
• Elastic Load Balancing - Integrating with Route 53

Question No : 7


A company requires the rotation of administrative credentials for production workloads on a regular basis. A CloudOps engineer must implement this policy for an Amazon RDS DB instance's master user password.
Which solution will meet this requirement with the LEAST operational effort?

Answer:
Explanation:
AWS Secrets Manager natively supports credential management and automatic rotation for Amazon RDS master user passwords. When a secret is associated with an RDS instance, Secrets Manager automatically updates the password both in the secret and on the database, without downtime or manual scripting.
AWS documentation confirms:
“AWS Secrets Manager can automatically rotate the master user password for Amazon RDS databases. Rotation is fully managed and integrated, requiring no custom code or maintenance.”
Option A introduces unnecessary Lambda automation.
Options B and C use Parameter Store, which does not provide direct RDS password rotation. Therefore, Option D achieves secure, automatic credential rotation with the least operational effort, fully aligned with CloudOps security automation principles.
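One way this integration surfaces in the current API is the ManageMasterUserPassword flag on the RDS modify call, which hands the master password over to Secrets Manager for storage and rotation; a sketch (the instance identifier is a placeholder, and this flag is my illustration of the managed integration, not wording from the question):

```python
# Hypothetical parameters asking RDS to manage the master user password
# in AWS Secrets Manager (storage plus fully managed rotation).
modify_params = {
    "DBInstanceIdentifier": "prod-db",        # placeholder instance ID
    "ManageMasterUserPassword": True,         # RDS creates/rotates the secret
    "ApplyImmediately": True,
}

# With boto3:
#   boto3.client("rds").modify_db_instance(**modify_params)
```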
References (AWS CloudOps Documents / Study Guide):
• AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Domain 4: Security and Compliance
• AWS Secrets Manager - Rotating Secrets for Amazon RDS
• AWS Well-Architected Framework - Security Pillar
• Amazon RDS User Guide - Managing Master User Passwords

Question No : 8


A global gaming company is preparing to launch a new game on AWS. The game runs in multiple AWS Regions on a fleet of Amazon EC2 instances. The instances are in an Auto Scaling group behind an Application Load Balancer (ALB) in each Region. The company plans to use Amazon Route 53 for DNS services. The DNS configuration must direct users to the Region that is closest to them and must provide automated failover.
Which combination of steps should a CloudOps engineer take to configure Route 53 to meet these requirements? (Select TWO.)

Answer:
Explanation:
The combination of geoproximity routing and DNS failover health checks provides global low-latency routing with high availability.
Geoproximity routing in Route 53 routes users to the AWS Region closest to their geographic location, optimizing latency. For automatic failover, Route 53 health checks can monitor CloudWatch alarms tied to the health of the ALB in each Region. When a Region becomes unhealthy, Route 53 reroutes traffic to the next available Region automatically.
AWS documentation states:
“Use geoproximity routing to direct users to resources based on geographic location, and configure health checks to provide DNS failover for high availability.”
Option B incorrectly monitors EC2 instances directly, which is not efficient at scale.
Option C uses private IPs, which cannot be globally health-checked.
Option E (simple routing) does not support geographic or failover routing. Hence, A and D together meet both the proximity and failover requirements.
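A sketch of one Region's geoproximity record with a health check attached (the record name, health check ID, zone ID, and ALB DNS name are all placeholders; a matching record would exist per Region):

```python
# Hypothetical geoproximity alias record for the us-east-1 deployment.
us_east_record = {
    "Name": "game.example.com.",
    "Type": "A",
    "SetIdentifier": "us-east-1",                       # distinguishes the Region's record
    "GeoProximityLocation": {"AWSRegion": "us-east-1"}, # route users nearest this Region
    "HealthCheckId": "hc-1234",                         # placeholder; tied to the ALB's health
    "AliasTarget": {
        "HostedZoneId": "Z35SXDOTRQ7X7K",               # placeholder canonical zone ID
        "DNSName": "game-alb-us.us-east-1.elb.amazonaws.com.",
        "EvaluateTargetHealth": True,
    },
}
```

When the health check for a Region fails, Route 53 stops answering with that Region's record and users are served by the next-closest healthy Region.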
References (AWS CloudOps Documents / Study Guide):
• AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Domain 5: Networking and Content Delivery
• Amazon Route 53 Developer Guide - Geoproximity Routing and DNS Failover
• AWS Well-Architected Framework - Reliability Pillar
• Amazon CloudWatch Alarms - Integration with Route 53 Health Checks

Question No : 9


A CloudOps engineer needs to control access to groups of Amazon EC2 instances using AWS Systems Manager Session Manager. Specific tags on the EC2 instances have already been added.
Which additional actions should the CloudOps engineer take to control access? (Select TWO.)

Answer:
Explanation:
AWS Systems Manager Session Manager allows secure, auditable instance access without SSH keys
or inbound ports. To control access based on instance tags, CloudOps best practices require two configurations:
• Attach an IAM policy to users or groups granting ssm:StartSession, ssm:DescribeInstanceInformation, and ssm:DescribeSessions.
• Include a Condition element in the IAM policy referencing instance tags, for example {"StringEquals": {"ssm:resourceTag/Environment": "Production"}}.
This ensures users can start sessions only with instances that have matching tags, providing fine-grained access control.
AWS CloudOps documentation under Security and Compliance states:
“Use IAM policies with resource tags in the Condition element to restrict which managed instances users can access using Session Manager.”
Options B and D incorrectly suggest attaching roles or service accounts that are not relevant to user-level access control.
Option C (placement groups) pertains to networking and performance, not access management. Therefore, A and E together provide tag-based, least-privilege access as required.
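Put together, such a policy statement might look like the following (the tag key, tag value, and resource scope are illustrative):

```python
# Sketch of an IAM policy restricting Session Manager access by instance tag.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "ssm:StartSession",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {
            # Only instances tagged Environment=Production are reachable.
            "StringEquals": {"ssm:resourceTag/Environment": "Production"}
        },
    }],
}
```

Attempts to start a session with an instance lacking the matching tag are denied at the IAM layer, before any session is established.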
References (AWS CloudOps Documents / Study Guide):
• AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Domain 4: Security and Compliance
• AWS Systems Manager User Guide - Controlling Access to Session Manager Using Tags
• AWS IAM Policy Reference - Condition Keys for AWS Systems Manager
• AWS Well-Architected Framework - Security Pillar

Question No : 10


A company is running an application on premises and wants to use AWS for data backup. All of the data must be available locally. The backup application can write only to block-based storage that is compatible with the Portable Operating System Interface (POSIX).
Which backup solution will meet these requirements?

Answer:
Explanation:
The Storage Gateway service enables hybrid cloud backup by presenting local block storage that synchronizes with AWS cloud storage. For scenarios where all data must remain available locally while still backed up to AWS, the correct mode is gateway-stored volumes.
AWS documentation defines:
“Use stored volumes if you want to keep all your data locally while asynchronously backing up point-in-time snapshots to Amazon S3 for durable storage.”
These volumes expose an iSCSI interface compatible with POSIX file systems, allowing direct use by on-premises backup software.
Gateway-cached volumes (Option C) store primary data in AWS with limited local cache, violating the “all data must be available locally” requirement.
Options A and B are object-based storage solutions, not compatible with POSIX or block-based backup applications.
Therefore, Option D fully satisfies CloudOps reliability and continuity best practices by ensuring local availability, cloud durability, and POSIX compatibility for backups.
References (AWS CloudOps Documents / Study Guide):
• AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Domain 2: Reliability and Business Continuity
• AWS Storage Gateway User Guide - Stored Volumes Overview
• AWS Well-Architected Framework - Reliability Pillar
• AWS Hybrid Cloud Storage Best Practices

Question No : 11


A company has a workload that is sending log data to Amazon CloudWatch Logs. One of the fields includes a measure of application latency. A CloudOps engineer needs to monitor the p90 statistic of this field over time.
What should the CloudOps engineer do to meet this requirement?

Answer:
Explanation:
To analyze and visualize custom statistics such as the p90 latency (90th percentile), a CloudWatch metric must be generated from the log data. The correct method is to create a metric filter that extracts the latency value from each log event and publishes it as a CloudWatch metric. Once the metric is published, percentile statistics (p90, p95, etc.) can be displayed in CloudWatch dashboards or alarms.
AWS documentation states:
“You can use metric filters to extract numerical fields from log events and publish them as metrics in CloudWatch. CloudWatch supports percentile statistics such as p90 and p95 for these metrics.”
Contributor Insights (Option A) is for analyzing frequent contributors, not numeric distributions. Subscription filters (Option C) are used for log streaming, and Application Insights (Option D) provides monitoring of application health but not custom p90 statistics. Hence, Option B is the CloudOps-aligned, minimal-overhead solution for percentile latency monitoring.
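The percentile itself is computed by CloudWatch once the metric is published, but the statistic is easy to sanity-check locally; a minimal nearest-rank p90 over sample latency values (the values are made up for illustration):

```python
import math

def p90(values):
    """Nearest-rank 90th percentile, a simple approximation of the
    percentile statistics CloudWatch computes for a published metric."""
    ordered = sorted(values)
    rank = math.ceil(0.90 * len(ordered))   # 1-based nearest rank
    return ordered[rank - 1]

# Hypothetical latency samples (ms) extracted by a metric filter:
latencies_ms = [12, 15, 14, 90, 13, 16, 240, 14, 15, 13]
print(p90(latencies_ms))  # 90
```

In CloudWatch itself, the metric filter extracts the latency field from each log event, and the dashboard widget or alarm simply selects the p90 statistic on the resulting metric.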
References (AWS CloudOps Documents / Study Guide):
• AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Domain 1: Monitoring and Logging
• Amazon CloudWatch Logs - Metric Filters
• AWS Well-Architected Framework - Operational Excellence Pillar

Question No : 12


A user working in the Amazon EC2 console increased the size of an Amazon Elastic Block Store (Amazon EBS) volume attached to an Amazon EC2 Windows instance. The change is not reflected in the file system.
What should a CloudOps engineer do to resolve this issue?

Answer:
Explanation:
When an Amazon EBS volume is resized, the new storage capacity is immediately available to the attached EC2 instance. However, EBS does not automatically extend the file system. The CloudOps engineer must manually extend the file system within the operating system to utilize the additional space.
AWS documentation for EC2 and EBS specifies:
“After you increase the size of an EBS volume, use file system-specific tools to extend the file system so that the operating system can use the new storage capacity.”
On Windows instances, this can be achieved through Disk Management or diskpart commands. On Linux systems, utilities such as growpart and resize2fs are used.
Options B and C do not modify file system metadata and are ineffective.
Option D unnecessarily replaces the volume, which adds risk and downtime. Thus, Option A aligns with the Monitoring and Performance Optimization practices of AWS CloudOps by properly extending the file system to recognize the new capacity.
References (AWS CloudOps Documents / Study Guide):
• AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Domain 1
• Amazon EBS - Modifying EBS Volumes
• Amazon EC2 User Guide - Extending a File System After Resizing a Volume
• AWS Well-Architected Framework - Performance Efficiency Pillar

Question No : 13


A CloudOps engineer needs to ensure that AWS resources across multiple AWS accounts are tagged consistently. The company uses an organization in AWS Organizations to centrally manage the accounts. The company wants to implement cost allocation tags to accurately track the costs that are allocated to each business unit.
Which solution will meet these requirements with the LEAST operational overhead?

Answer:
Explanation:
Tagging is essential for governance, cost management, and automation in CloudOps operations. The AWS Organizations tag policies feature allows centralized definition and enforcement of required tag keys and accepted values across all accounts in an organization. According to the AWS CloudOps study guide under Deployment, Provisioning, and Automation, tag policies enable automatic validation of tags, ensuring consistency with minimal manual overhead.
Once tagging consistency is enforced, enabling cost allocation tags in the AWS Billing and Cost Management console allows accurate cost distribution per business unit. AWS documentation states:
“Use AWS Organizations tag policies to standardize tags across accounts. You can activate cost allocation tags in the Billing console to track and allocate costs.”
Option B introduces unnecessary complexity with Lambda automation.
Option C detects but does not enforce tagging.
Option D limits flexibility to Service Catalog resources only. Therefore, Option A provides a centrally managed, automated, and low-overhead solution that meets CloudOps tagging and cost-tracking requirements.
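A sketch of what such a tag policy might contain, shown here as the Python dict that would be serialized to the policy document (the tag key and the allowed values are illustrative assumptions):

```python
# Hypothetical AWS Organizations tag policy requiring a BusinessUnit tag
# with a fixed set of values, enforced on selected resource types.
tag_policy = {
    "tags": {
        "BusinessUnit": {
            "tag_key": {"@@assign": "BusinessUnit"},
            "tag_value": {"@@assign": ["Finance", "Marketing", "Engineering"]},
            "enforced_for": {"@@assign": ["ec2:instance", "s3:bucket"]},
        }
    }
}
```

Once tags are consistent, activating BusinessUnit as a cost allocation tag in the Billing console makes it available as a dimension in Cost Explorer and the Cost and Usage Report.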
References (AWS CloudOps Documents / Study Guide):
• AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Domain 3: Deployment, Provisioning and Automation
• AWS Organizations - Tag Policies
• AWS Billing and Cost Management - Cost Allocation Tags
• AWS Well-Architected Framework - Operational Excellence and Cost Optimization Pillars

Question No : 14


A CloudOps engineer creates an AWS CloudFormation template to define an application stack that
can be deployed in multiple AWS Regions. The CloudOps engineer also creates an Amazon CloudWatch dashboard by using the AWS Management Console. Each deployment of the application requires its own CloudWatch dashboard.
How can the CloudOps engineer automate the creation of the CloudWatch dashboard each time the application is deployed?

Answer:
Explanation:
According to CloudOps automation and monitoring best practices, CloudWatch dashboards should be provisioned as infrastructure-as-code (IaC) resources using AWS CloudFormation to ensure consistency, repeatability, and version control. AWS CloudFormation supports the AWS::CloudWatch::Dashboard resource, where the DashboardBody property accepts a JSON object describing widgets, metrics, and layout.
By exporting the existing dashboard configuration as JSON and embedding it into the CloudFormation template, every deployment of the application automatically creates its corresponding dashboard. This method aligns with the CloudOps requirement for automated deployment and operational visibility within the same stack lifecycle.
AWS documentation explicitly states:
“Use the AWS::CloudWatch::Dashboard resource to create a dashboard from your template. You can include the same JSON you use to define a dashboard in the console.”
Option A requires manual execution.
Options C and D incorrectly reference or reuse existing dashboards, failing to produce unique, deployment-specific dashboards.
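A minimal sketch of the resource, built in Python for readability (the widget definition, dashboard name, and region are illustrative; in practice the DashboardBody JSON is exported from the existing console dashboard):

```python
import json

# Hypothetical dashboard body, as exported from the console's JSON view.
dashboard_body = {
    "widgets": [{
        "type": "metric",
        "x": 0, "y": 0, "width": 12, "height": 6,
        "properties": {
            "metrics": [["AWS/EC2", "CPUUtilization"]],
            "region": "us-east-1",
            "title": "App CPU",
        },
    }]
}

# CloudFormation resource: each stack deployment creates its own dashboard.
template_resource = {
    "AppDashboard": {
        "Type": "AWS::CloudWatch::Dashboard",
        "Properties": {
            "DashboardName": {"Fn::Sub": "app-${AWS::Region}"},  # unique per deployment
            "DashboardBody": json.dumps(dashboard_body),         # body is a JSON string
        },
    }
}
```

Note that DashboardBody must be a JSON *string*, not a nested object, which is why the exported console JSON is serialized (or escaped) when embedded in the template.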
References (AWS CloudOps Documents / Study Guide):
• AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Domain 1: Monitoring and Logging
• AWS CloudFormation User Guide - Resource Type: AWS::CloudWatch::Dashboard
• AWS Well-Architected Framework - Operational Excellence Pillar
• Amazon CloudWatch - Automating Dashboards with Infrastructure as Code

Question No : 15


A company's AWS accounts are in an organization in AWS Organizations. The organization has all features enabled. The accounts use Amazon EC2 instances to host applications. The company manages the EC2 instances manually by using the AWS Management Console. The company applies updates to the EC2 instances by using an SSH connection to each EC2 instance.
The company needs a solution that uses AWS Systems Manager to manage all the organization's current and future EC2 instances. The latest version of Systems Manager Agent (SSM Agent) is running on the EC2 instances.
Which solution will meet these requirements?

Answer:
Explanation:
AWS CloudOps automation best practices recommend using AWS Systems Manager Quick Setup for organization-wide management and configuration of EC2 instances. The Default Host Management Configuration Quick Setup automatically enables Systems Manager capabilities such as Patch Manager, Inventory, Session Manager, and Automation across all managed instances within the organization.
When deployed from the management account, Quick Setup automatically integrates with AWS Organizations to propagate configuration and permissions to existing and future accounts. This meets the requirement for organization-wide management with no manual configuration or SSH access. AWS documentation notes:
“You can use Quick Setup in the management account of an organization in AWS Organizations to configure Systems Manager capabilities for all accounts and Regions. Quick Setup automatically keeps configurations up to date.”
Options B, C, and D require custom deployments or manual IAM updates, lacking centralized automation. Therefore, Option A fully satisfies CloudOps standards for automated provisioning and ongoing management of EC2 instances across an organization.
References (AWS CloudOps Documents / Study Guide):
• AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Domain 3: Deployment, Provisioning and Automation
• AWS Systems Manager - Quick Setup and Default Host Management Configuration
• AWS Organizations Integration with Systems Manager
• AWS Well-Architected Framework - Operational Excellence Pillar
