
EC-Council 312-40 Exam

Certified Cloud Security Engineer (CCSE) Online Practice

Last updated: June 6, 2025

You can use these online practice questions to gauge how well you know the EC-Council 312-40 exam material before deciding whether to register for the exam.

If you want to pass the exam on the first attempt and cut your preparation time by 35%, choose the 312-40 dumps (latest real exam questions), which currently include 125 up-to-date exam questions and answers.


Question No : 1


Rebecca Gibel has been working as a cloud security engineer in an IT company for the past 5 years. Her organization uses cloud-based services. Rebecca's organization stores personal information about its clients, which is encrypted and kept in the cloud environment. The CEO of her organization has asked Rebecca to delete the personal information of all clients who utilized their services between 2011 and 2015. Rebecca deleted the encryption keys that were used to encrypt the original data; this made the data unreadable and unrecoverable.
Based on the given information, which deletion method was implemented by Rebecca?

Answer: Crypto-Shredding
Explanation:
Crypto-shredding is the method of ‘deleting’ encrypted data by destroying the encryption keys. This method is particularly useful in cloud environments where physical destruction of storage media is not feasible. By deleting the keys used to encrypt the data, the data itself becomes inaccessible and is effectively considered deleted.
Here’s how crypto-shredding works:
Encryption: Data is encrypted using cryptographic keys, which are essential for decrypting the data to make it readable.
Key Management: The keys are managed separately from the data, often in a secure key management system.
Deletion of Keys: When instructed to delete the data, instead of trying to erase the actual data, the encryption keys are deleted.
Data Inaccessibility: Without the keys, the encrypted data cannot be decrypted, rendering it unreadable and unrecoverable.
Compliance: This method helps organizations comply with data protection regulations that require secure deletion of personal data.
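To make the idea concrete, here is a minimal sketch in Python using the cryptography package's Fernet recipe (the data and key handling are simplified assumptions, not part of the exam material): once every copy of the key is destroyed, the ciphertext can no longer be decrypted.
```python
# Minimal crypto-shredding sketch using the "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# Encryption: protect the client record with a symmetric key.
key = Fernet.generate_key()
cipher = Fernet(key)
ciphertext = cipher.encrypt(b"client PII collected between 2011 and 2015")

# Key management: in practice the key lives in a KMS/HSM, separate from the encrypted data.

# Deletion of keys (crypto-shredding): destroy every copy of the key instead of the data.
del cipher
key = None

# Data inaccessibility: without the key the ciphertext cannot be decrypted,
# so the stored bytes are unreadable and unrecoverable even though they still exist.
print(ciphertext[:16], "... can no longer be decrypted")
```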
Reference: A technical paper discussing the concept of crypto-shredding as a method for secure deletion of data in cloud environments.
An industry article explaining how crypto-shredding is used to meet data privacy requirements, especially in cloud storage scenarios.

Question No : 2


SevocSoft Private Ltd. is an IT company that develops software and applications for the banking sector. The security team of the organization found a security incident caused by a misconfiguration in Infrastructure-as-Code (IaC) templates. Upon further investigation, the security team found that the server configuration was built using a misconfigured IaC template, which resulted in a security breach and exploitation of the organization's cloud resources.
Which of the following would have prevented this security breach and exploitation?

Answer: Scanning the Infrastructure-as-Code (IaC) templates
Explanation:
Scanning Infrastructure-as-Code (IaC) templates is a preventive measure that can identify misconfigurations and potential security issues before the templates are deployed. This process involves analyzing the code to ensure it adheres to best practices and security standards. Here’s how scanning IaC templates could have prevented the security breach:
Early Detection: Scanning tools can detect misconfigurations in IaC templates early in the development cycle, before deployment.
Automated Scans: Automated scanning tools can be integrated into the CI/CD pipeline to continuously check for issues as code is written and updated.
Security Best Practices: Scanning ensures that IaC templates comply with security best practices and organizational policies.
Vulnerability Identification: It helps identify vulnerabilities that could be exploited if the infrastructure is deployed with those configurations.
Remediation Guidance: Scanning tools often provide guidance on how to fix identified issues, which can prevent exploitation.
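As an illustration of what such scanners do, the hedged Python sketch below parses a hypothetical CloudFormation template (template.json) and flags S3 buckets that allow public access or lack encryption; real tools such as Checkov or cfn-lint apply hundreds of rules of this kind.
```python
# Toy IaC scan: flag publicly readable or unencrypted S3 buckets in a CloudFormation template.
# Assumption: "template.json" is a placeholder path; production pipelines use dedicated scanners.
import json

with open("template.json") as fh:
    template = json.load(fh)

findings = []
for name, resource in template.get("Resources", {}).items():
    if resource.get("Type") != "AWS::S3::Bucket":
        continue
    props = resource.get("Properties", {})
    if props.get("AccessControl") in ("PublicRead", "PublicReadWrite"):
        findings.append(f"{name}: bucket ACL allows public access")
    if "BucketEncryption" not in props:
        findings.append(f"{name}: server-side encryption is not configured")

for finding in findings:
    print("MISCONFIGURATION:", finding)
```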
Reference: Microsoft documentation on scanning for misconfigurations in IaC templates.
Orca Security's blog on securing IaC templates and the importance of scanning them.
An article discussing common security risks with IaC and the need for scanning templates.

Question No : 3


Martin Sheen is a senior cloud security engineer in SecGlob Cloud Pvt. Ltd. Since 2012, his organization has been using AWS cloud-based services. Using an intrusion detection system and antivirus software, Martin noticed that an attacker was trying to breach the security of his organization. Therefore, Martin would like to identify and protect the sensitive data of his organization. He requires a fully managed data security service that supports S3 storage and provides an inventory of publicly shared buckets, unencrypted buckets, and buckets shared with AWS accounts outside his organization.
Which of the following Amazon services fulfills Martin's requirement?

Answer: Amazon Macie
Explanation:
Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect sensitive data in AWS. It is specifically designed to support Amazon S3 storage and provides an inventory of S3 buckets, helping organizations like SecGlob Cloud Pvt. Ltd. to identify and protect their sensitive data.
Here’s how Amazon Macie fulfills Martin’s requirements:
Sensitive Data Identification: Macie automatically and continuously discovers sensitive data, such as personally identifiable information (PII), in S3 buckets.
Inventory and Monitoring: It provides an inventory of S3 buckets, detailing which are publicly accessible, unencrypted, or shared with accounts outside the organization.
Alerts and Reporting: Macie generates detailed alerts and reports when it detects unauthorized access or inadvertent data leaks.
Data Security Posture: It helps improve the data security posture by providing actionable recommendations for securing S3 buckets.
Compliance Support: Macie aids in compliance efforts by monitoring data access patterns and ensuring that sensitive data is handled according to policy.
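As a hedged sketch, the boto3 snippet below asks Macie for its S3 bucket inventory and prints buckets that appear public or unencrypted; the field names follow the Macie2 DescribeBuckets API, but treat the exact response shape as an assumption to verify against the AWS documentation.
```python
# Sketch: list S3 buckets that Macie reports as publicly accessible or unencrypted.
# Assumption: response field names are taken from the Macie2 DescribeBuckets API and may vary.
import boto3

macie = boto3.client("macie2")
response = macie.describe_buckets()  # bucket inventory collected by Macie

for bucket in response.get("buckets", []):
    is_public = bucket.get("publicAccess", {}).get("effectivePermission") == "PUBLIC"
    unencrypted = bucket.get("serverSideEncryption", {}).get("type") == "NONE"
    if is_public or unencrypted:
        print(bucket.get("bucketName"), "- public" if is_public else "- unencrypted")
```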
Reference: AWS documentation on Amazon Macie, which outlines its capabilities for protecting sensitive data in S3.
An AWS blog post discussing how Macie can be used to identify and protect sensitive data in S3 buckets.

Question No : 4


SecAppSol Pvt. Ltd. is a cloud software and application development company located in Louisville, Kentucky. The security features provided by its previous cloud service provider were not satisfactory, and in 2012, the organization became a victim of eavesdropping. Therefore, SecAppSol Pvt. Ltd. changed its cloud service provider and adopted AWS cloud-based services owing to their robust and cost-effective security features.
How does SecAppSol Pvt. Ltd.'s security team encrypt the traffic between the load balancer and the clients that initiate SSL or TLS sessions?

Answer: By enabling an HTTPS listener on the load balancer
Explanation:
To encrypt the traffic between the load balancer and clients that initiate SSL or TLS sessions, SecAppSol Pvt. Ltd.'s security team would enable an HTTPS listener on their load balancer. This is a common method used in AWS to secure communication.
Here’s how it works:
HTTPS Listener Configuration: The security team configures the load balancer with an HTTPS listener, which listens for incoming SSL or TLS connections on a specified port (usually port 443).
SSL/TLS Certificates: They deploy SSL/TLS certificates on the load balancer. These certificates are used to establish a secure connection and encrypt the traffic.
Secure Communication: When a client initiates a session, the HTTPS listener uses the SSL/TLS certificate to perform a handshake, establish a secure connection, and encrypt the data in transit.
Backend Encryption: Optionally, the load balancer can also be configured to encrypt traffic to the backend servers, ensuring end-to-end encryption.
Security Policies: The security team sets security policies on the load balancer to define the ciphers and protocols used for SSL/TLS, further enhancing security.
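A hedged boto3 sketch of the listener configuration described above follows; the load balancer, certificate, and target group ARNs are placeholders, and the security policy name is only an example.
```python
# Sketch: add an HTTPS listener (port 443) with an SSL/TLS certificate to an existing ALB.
import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/app/example/123",  # placeholder
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:...:certificate/example"}],  # placeholder
    SslPolicy="ELBSecurityPolicy-TLS13-1-2-2021-06",  # example policy defining ciphers/protocols
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/example/456",  # placeholder
    }],
)
```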
Reference: AWS documentation on configuring end-to-end encryption in a load-balanced environment, which includes setting up an HTTPS listener.
AWS documentation on creating an HTTPS listener for your Application Load Balancer, detailing the process and requirements.

Question No : 5


Jerry Mulligan is employed by an IT company as a cloud security engineer. In 2014, his organization migrated all applications and data from on-premises to a cloud environment. Jerry would like to perform penetration testing to evaluate security across virtual machines, installed apps, and OSes in the cloud environment, including conducting various security assessment steps against cloud-specific risks that could expose the organization to serious threats.
Which of the following cloud computing service models does not allow cloud penetration testing (CPEN) to Jerry?

Answer: SaaS
Explanation:
In the cloud computing service models, SaaS (Software as a Service) typically does not allow customers to perform penetration testing. This is because SaaS applications are managed by the service provider, and the security of the application is the responsibility of the provider, not the customer.
Here’s why SaaS doesn’t allow penetration testing:
Managed Service: SaaS providers manage the security of their applications, including regular updates and patches.
Shared Environment: SaaS applications often run in a shared environment where multiple customers use the same infrastructure, making it impractical for individual customers to conduct penetration testing.
Provider’s Policies: Most SaaS providers have strict policies against unauthorized testing, as it could impact the service’s integrity and availability for other users.
Alternative Assessments: Instead of penetration testing, SaaS providers may offer security assessments or compliance certifications to demonstrate the security of their applications.
Reference: Oracle's FAQ on cloud security testing, which states that penetration and vulnerability testing are not allowed for Oracle SaaS offerings.
Cloud Security Alliance's article on pentesting in the cloud, mentioning that CSPs often have policies describing which tests can be performed and which cannot, especially in SaaS models.

Question No : 6


Rick Warren has been working as a cloud security engineer in an IT company for the past 4 years. Owing to the robust security features and various cost-effective services offered by AWS, in 2010, his organization migrated to the AWS cloud environment. While inspecting the intrusion detection system, Rick detected a security incident.
Which of the following AWS services collects logs from various data sources and stores them in a centralized location as log files that can be used during a forensic investigation in the event of a security incident?

Answer: Amazon CloudTrail
Explanation:
Amazon CloudTrail is a service that provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting.
In the context of forensic investigation, CloudTrail plays a crucial role:
Event Logging: CloudTrail collects logs from various AWS services and resources, recording every API call and user activity that alters the AWS environment.
Centralized Storage: It aggregates the logs and stores them in a centralized location, which can be an Amazon S3 bucket.
Forensic Investigation: The logs stored by CloudTrail are detailed and include information about the user, the time of the API call, the source IP address, and the response elements returned by the AWS service. This makes it an invaluable tool for forensic investigations.
Security Monitoring: CloudTrail logs can be continuously monitored and analyzed for suspicious activity, which is essential for detecting security incidents.
Compliance: The service helps with compliance audits by providing a history of changes in the AWS environment.
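For illustration, a hedged boto3 sketch that delivers CloudTrail logs to a central S3 bucket and then queries recent events during an investigation; the trail and bucket names are placeholders, and the bucket policy must already allow CloudTrail delivery.
```python
# Sketch: centralize CloudTrail logs in S3 and query recent API activity during an investigation.
import boto3

cloudtrail = boto3.client("cloudtrail")

# Collect management events from all regions into one S3 bucket (centralized log storage).
cloudtrail.create_trail(
    Name="forensics-trail",                  # placeholder
    S3BucketName="central-cloudtrail-logs",  # placeholder
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="forensics-trail")

# Pull recent events attributed to a suspicious principal.
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": "suspicious-user"}],
    MaxResults=50,
)
for event in events["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Username"))
```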
Reference: AWS's official documentation on CloudTrail, which outlines its capabilities and use cases for security and compliance.
An AWS blog post discussing the importance of CloudTrail logs in security incident investigations.
A third-party article explaining how CloudTrail is used for forensic analysis in AWS environments.

Question No : 7


Alice, a cloud forensic investigator, has located relevant evidence during her investigation of a security breach in an organization's Azure environment. As an investigator, she needs to sync different types of logs generated by Azure resources with Azure services for better monitoring.
Which Azure logging and auditing feature can enable Alice to record information on the Azure subscription layer and obtain the evidence (information related to the operations performed on a specific resource, timestamp, status of the operation, and the user responsible for it)?

Answer: Azure Activity Logs
Explanation:
Azure Activity Logs provide a record of operations performed on resources within an Azure subscription. They are essential for monitoring and auditing purposes, as they offer detailed information on the operations, including the timestamp, status, and the identity of the user responsible for the operation.
Here’s how Azure Activity Logs can be utilized by Alice:
Recording Operations: Azure Activity Logs record all control-plane activities, such as creating, updating, and deleting resources through Azure Resource Manager.
Evidence Collection: For forensic purposes, these logs are crucial as they provide evidence of the operations performed on specific resources.
Syncing Logs: Azure Activity Logs can be integrated with Azure services for better monitoring and can be synced with other tools for analysis.
Access and Management: Investigators like Alice can access these logs through the Azure portal, Azure CLI, or Azure Monitor REST API.
Security and Compliance: These logs are also used for security and compliance, helping organizations to meet regulatory requirements.
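As a hedged sketch using the Python SDK (azure-identity and azure-mgmt-monitor), the snippet below pulls Activity Log entries for a time window and prints the timestamp, operation, status, and caller; the subscription ID is a placeholder, and the filter syntax and attribute names should be verified against Microsoft's documentation.
```python
# Sketch: read Azure Activity Log entries (subscription-level operations) for evidence collection.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

credential = DefaultAzureCredential()
client = MonitorManagementClient(credential, subscription_id="<subscription-id>")  # placeholder

# Operations performed within a time window, including who performed them and their status.
filter_expr = (
    "eventTimestamp ge '2025-06-01T00:00:00Z' and "
    "eventTimestamp le '2025-06-06T00:00:00Z'"
)
for entry in client.activity_logs.list(filter=filter_expr):
    print(entry.event_timestamp, entry.operation_name.localized_value,
          entry.status.localized_value, entry.caller)
```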
Reference: Microsoft Learn documentation on Azure security logging and auditing, which includes details on Azure Activity Logs.
Azure Monitor documentation, which provides an overview of the monitoring solutions and mentions the use of Azure Activity Logs.

Question No : 8


Sandra, who works for SecAppSol Technologies, is on vacation. Her boss asked her to solve an urgent issue in an application. Sandra had to use applications on her office laptop to solve this issue, and she successfully rectified it. Despite being in a different location, she could securely use the application.
What type of service did the organization use to ensure that Sandra could access her office laptop from a remote area?

Answer: Amazon AppStream 2.0
Explanation:
Amazon AppStream 2.0 is a fully managed application streaming service that allows users to access desktop applications from anywhere, making it the service that enabled Sandra to access her office laptop applications remotely.
Here’s how it works:
Application Hosting: AppStream 2.0 hosts desktop applications on AWS and streams them to a web browser or a connected device.
Secure Access: Users can access these applications securely from any location, as the service provides a secure streaming session.
Resource Optimization: It eliminates the need for high-end user hardware since the processing is done on AWS servers.
Central Management: The organization can manage applications centrally, which simplifies software updates and security.
Integration: AppStream 2.0 integrates with existing identity providers and supports standard security protocols.
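As a hedged boto3 sketch, the call below generates a short-lived streaming URL so a user can start an application session from a browser anywhere; the stack, fleet, and user values are placeholders.
```python
# Sketch: create a short-lived AppStream 2.0 streaming URL for a remote user.
import boto3

appstream = boto3.client("appstream")

response = appstream.create_streaming_url(
    StackName="office-apps-stack",   # placeholder
    FleetName="office-apps-fleet",   # placeholder
    UserId="sandra@example.com",     # placeholder
    Validity=3600,                   # URL validity in seconds
)
print("Open this URL in a browser to start the session:", response["StreamingURL"])
```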
Reference: AWS documentation on Amazon AppStream 2.0, detailing how it enables remote access to applications.
An AWS blog post explaining the benefits of using Amazon AppStream 2.0 for remote application access.

Question No : 9


VenturiaCloud is a cloud service provider that offers robust and cost-effective cloud-based services to cloud consumers. The organization became a victim of a cybersecurity attack. An attacker performed a DDoS attack over the cloud that caused failure of the entire cloud environment. VenturiaCloud conducted a forensic investigation.
Who among the following serve as the first line of defense against cloud security attacks, with the primary role of responding immediately to any type of security incident?

Answer: Incident Handlers
Explanation:
Incident Handlers are typically the first line of defense against cloud security attacks, with their primary role being to respond immediately to any type of security incident. In the context of a cybersecurity attack such as a DDoS (Distributed Denial of Service), incident handlers are responsible for the initial response, which includes identifying, managing, recording, and analyzing security threats or incidents in real-time.
Here’s how Incident Handlers function as the first line of defense:
Immediate Response: They are trained to respond quickly to security incidents to minimize impact and manage the situation.
Incident Analysis: Incident Handlers analyze the nature and scope of the incident, including the type of attack and its origin.
Mitigation Strategies: They implement strategies to mitigate the attack, such as rerouting traffic or isolating affected systems.
Communication: They communicate with relevant stakeholders, including IT professionals, management, and possibly law enforcement.
Forensics and Recovery: After an attack, they work on forensics to understand how the breach occurred and on recovery processes to restore services.
Reference: An ISACA journal article discussing the roles of various functions in information security, highlighting the first line of defense.
An Australian Cyber Security Magazine article emphasizing the importance of identity and access management (IAM) as the first line of defense in securing the cloud.

Question No : 10


Thomas Gibson is a cloud security engineer who works in a multinational company. His organization wants to host critical elements of its applications; thus, if disaster strikes, applications can be restored quickly and completely. Moreover, his organization wants to achieve lower RTO and RPO values.
Which of the following disaster recovery approaches should be adopted by Thomas' organization?

Answer: Warm Standby
Explanation:
The Warm Standby approach in disaster recovery is designed to achieve lower Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO) values. This approach involves having a scaled-down version of a fully functional environment running at all times in the cloud. In the event of a disaster, the system can quickly switch over to the warm standby environment, which is already running and up-to-date, thus ensuring a quick and complete restoration of applications. Here’s how the Warm Standby approach works:
Prepared Environment: A duplicate of the production environment is running in the cloud, but at a reduced capacity.
Quick Activation: In case of a disaster, this environment can be quickly scaled up to handle the full production load.
Data Synchronization: Regular data synchronization ensures that the standby environment is always up-to-date, which contributes to a low RPO.
Reduced Downtime: Because the standby system is always running, the time to switch over is minimal, leading to a low RTO.
Cost-Efficiency: While more expensive than a cold standby, it is more cost-effective than a hot standby, balancing cost with readiness.
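For illustration only, a hedged boto3 sketch of the "quick activation" step: scaling a warm-standby Auto Scaling group from its reduced size up to production capacity. The group name and capacities are placeholders, and real failover is normally automated by health checks and DNS routing.
```python
# Sketch: "quick activation" of a warm standby - scale the standby fleet to production capacity.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="app-warm-standby",  # placeholder
    MinSize=4,
    DesiredCapacity=8,   # raise from the reduced standby size to full production capacity
    MaxSize=12,
)
```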
Reference: An article discussing the importance of RPO and RTO in disaster recovery and how different strategies, including Warm Standby, impact these metrics.
A guide explaining various disaster recovery strategies, including Warm Standby, and their relation to achieving lower RTO and RPO values.

Question No : 11


A BPO company would like to expand its business and provide 24 x 7 customer service. Therefore, the organization wants to migrate to a fully functional cloud environment that provides all features with minimum maintenance and administration.
Which cloud service model should it consider?

Answer: SaaS
Explanation:
SaaS, or Software as a Service, is the ideal cloud service model for a BPO company looking to expand its business and provide 24/7 customer service with minimal maintenance and administration. SaaS provides a complete software solution that is managed by the service provider and delivered over the internet, which aligns with the needs of a BPO company for several reasons:
Fully Managed Service: SaaS offers a fully managed service, which means the provider is responsible for the maintenance, updates, and security of the software.
Accessibility: It allows employees to access the software from anywhere at any time, which is essential for 24/7 customer service operations.
Scalability: SaaS solutions are highly scalable, allowing the BPO company to easily adjust its usage based on business demands without worrying about infrastructure limitations.
Cost-Effectiveness: With SaaS, the BPO company can avoid upfront costs associated with purchasing, managing, and upgrading hardware and software.
Integration and Customization: Many SaaS offerings provide options for integration with other services and customization to meet specific business needs.
Reference: An article discussing how cloud computing services are becoming the new BPO style, highlighting the benefits of SaaS for BPO companies.
A report on the impact of cloud services on BPOs, emphasizing the advantages of SaaS in terms of cost savings and quick response to customers.

Question No : 12


A new public web application is deployed on AWS that will run behind an Application Load Balancer (ALB). An AWS security expert needs to encrypt the newly deployed application at the edge with an SSL/TLS certificate issued by an external certificate authority. In addition, he needs to ensure the rotation of the certificate yearly before it expires.
Which of the following AWS services can be used to accomplish this?

Answer: AWS Certificate Manager (ACM)
Explanation:
AWS Certificate Manager (ACM) is the service that enables an AWS security expert to manage SSL/TLS certificates provided by AWS or an external certificate authority. It allows the deployment of the certificate on AWS services such as an Application Load Balancer (ALB) and also handles the renewal and rotation of certificates.
Here’s how ACM would be used for the web application:
Certificate Provisioning: The security expert can import an SSL/TLS certificate issued by an external certificate authority into ACM.
Integration with ALB: ACM integrates with ALB, allowing the certificate to be easily deployed to encrypt the application at the edge.
Automatic Renewal: ACM can be configured to automatically renew certificates provided by AWS. For certificates from external authorities, the expert can manually import a new certificate before the old one expires.
Yearly Rotation: While ACM does not automatically rotate externally provided certificates, it simplifies the process of replacing them by allowing the expert to import new certificates as needed.
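A hedged boto3 sketch of importing an externally issued certificate into ACM and later rotating it by re-importing a renewed certificate against the same ARN; the file paths are placeholders.
```python
# Sketch: import an externally issued certificate into ACM, then rotate it before expiry
# by re-importing a renewed certificate to the same ARN.
import boto3

acm = boto3.client("acm")

def read(path: str) -> bytes:
    with open(path, "rb") as fh:  # placeholder paths
        return fh.read()

# Initial import of the certificate issued by the external CA.
imported = acm.import_certificate(
    Certificate=read("cert.pem"),
    PrivateKey=read("key.pem"),
    CertificateChain=read("chain.pem"),
)
cert_arn = imported["CertificateArn"]  # attach this ARN to the ALB's HTTPS listener

# Yearly rotation: re-import the renewed certificate against the existing ARN.
acm.import_certificate(
    CertificateArn=cert_arn,
    Certificate=read("new-cert.pem"),
    PrivateKey=read("new-key.pem"),
    CertificateChain=read("new-chain.pem"),
)
```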
Reference: AWS documentation on ACM, which explains how to import certificates and use them with ALB.
An AWS blog post discussing the importance of rotating SSL/TLS certificates and how ACM facilitates this process.

Question No : 13


SecureSoftWorld Pvt. Ltd. is an IT company that develops software solutions catering to the needs of the healthcare industry. Most of its services are hosted in Google Cloud. In the cloud environment, to secure its applications and services, the organization uses the Google App Engine Firewall, which controls access to the App Engine application with a set of rules that allow or deny requests from specified IP ranges.
How many unique firewall rules can SecureSoftWorld Pvt. Ltd. define using the App Engine Firewall?

Answer: 1000
Explanation:
Google App Engine Firewall allows organizations to create a set of rules that control the access to their App Engine applications. These rules can either allow or deny requests from specified IP ranges, providing a robust mechanism for securing applications and services hosted on the Google Cloud. Here’s how the rule limit applies to SecureSoftWorld Pvt. Ltd:
Rule Creation: SecureSoftWorld Pvt. Ltd can create firewall rules that specify which IP ranges are allowed or denied access to their App Engine services.
Rule Limit: The company can define up to 1000 individual firewall rules.
Rule Priority: These rules are prioritized, meaning that rules with a lower priority number are evaluated before those with a higher number.
Default Rule: By default, any request that does not match a specific rule is allowed. However, this default action can be changed to deny, effectively blocking all traffic that does not match any of the defined rules.
Rule Management: The rules can be managed via the Google Cloud Console, the gcloud command-line tool, or the App Engine Admin API.
Reference: Google Cloud documentation explaining the App Engine firewall and the maximum number of rules.

Question No : 14


A document contains an organization's classified information. The organization's Azure cloud administrator has to send it to different recipients. If the email is not protected, it can be opened and read by any user, so the document must be protected so that only authorized users can open it.
In this scenario, which Azure service can enable the admin to share documents securely?

Answer: Azure Information Protection (AIP)
Explanation:
Azure Information Protection (AIP) is a cloud-based solution that helps organizations classify and protect documents and emails by applying labels. AIP can be used to protect both data at rest and in transit, making it suitable for securely sharing classified information.
Here’s how AIP secures document sharing:
Classification and Labeling: AIP allows administrators to classify data based on sensitivity and apply labels that carry protection settings.
Protection: It uses encryption, identity, and authorization policies to protect documents and emails.
Access Control: Only authorized users with the right permissions can access protected documents, even if the document is shared outside the organization.
Tracking and Revocation: Administrators can track activities on shared documents and revoke access if necessary.
Integration: AIP integrates with other Microsoft services and applications, ensuring a seamless protection experience across the organization’s data ecosystem.
Reference: Microsoft's overview of Azure Information Protection, which details how it helps secure document sharing.
A guide on how to configure and use Azure Information Protection for protecting sensitive information.

Question No : 15


An Azure organization wants to enforce its on-premises AD security and password policies to filter brute-force attacks. Instead of using legacy authentication, the users should sign in to on-premises and cloud-based applications using the same passwords in Azure AD.
Which Azure AD feature can enable users to access Azure resources?

Answer: Azure AD Pass-Through Authentication (PTA)
Explanation:
Azure AD Pass-Through Authentication (PTA) allows users to sign in to both on-premises and cloud-based applications using the same passwords. This feature is part of Azure Active Directory (AD) and helps organizations enforce their on-premises AD security and password policies in the cloud, thereby providing a seamless user experience while maintaining security.
Here’s how Azure AD PTA works:
Integration with On-Premises AD: Azure AD PTA integrates with an organization’s on-premises AD to apply the same security and password policies to cloud resources.
Authentication Request Handling: When a user signs in, the authentication request is passed through to the on-premises AD for validation.
Brute-Force Attack Protection: By enforcing the on-premises AD security policies, Azure AD PTA helps to filter out brute-force attacks.
No Passwords Stored in the Cloud: User passwords remain on-premises and are not stored in Azure AD, which enhances security.
Simple Sign-On Experience: Users enjoy a simple sign-on experience with the same set of credentials across on-premises and cloud services.
Reference: Microsoft's documentation on deploying on-premises Microsoft Entra Password Protection, which works with Azure AD PTA.
A step-by-step guide on implementing Azure AD Password Protection on-premises, which complements the PTA feature.
An overview of Azure AD Password Protection and Smart Lockout features, which are part of the broader Azure AD security framework.
