시험덤프
매달, 우리는 1000명 이상의 사람들이 시험 준비를 잘하고 시험을 잘 통과할 수 있도록 도와줍니다.
  / AAISM 덤프  / AAISM 문제 연습

ISACA AAISM 시험

ISACA Advanced in AI Security Management Exam 온라인 연습

최종 업데이트 시간: 2025년10월03일

당신은 온라인 연습 문제를 통해 ISACA AAISM 시험지식에 대해 자신이 어떻게 알고 있는지 파악한 후 시험 참가 신청 여부를 결정할 수 있다.

시험을 100% 합격하고 시험 준비 시간을 35% 절약하기를 바라며 AAISM 덤프 (최신 실제 시험 문제)를 사용 선택하여 현재 최신 90개의 시험 문제와 답을 포함하십시오.

 / 5

Question No : 1


An organization is facing a deepfake attack intended to manipulate stock prices. The organization’s crisis communication plan has been activated.
Which of the following is MOST important to include in the initial response?

정답:
Explanation:
AAISM guidance on crisis management and communication emphasizes that the initial priority in responding to a reputational or market manipulation attack is to provide accurate clarifying information to the public through a pre-approved statement. This ensures stakeholders and markets are given verified facts immediately, limiting the spread of misinformation. While forensic analysis, employee training, and monitoring activities are important, they occur after the immediate need for public trust and damage control is addressed. Pre-approved statements are a central control in AI-related incident response to ensure consistency, timeliness, and credibility in communications.
Reference: AAISM Study Guide C AI Governance and Program Management (Incident Response and Crisis Communication)
ISACA AI Security Management C Public Communication and Trust Preservation

Question No : 2


Which of the following information is MOST important to include in a centralized AI inventory?

정답:
Explanation:
AAISM governance practices identify ownership and accountability as the most critical element in any centralized AI inventory. An AI inventory provides oversight by cataloging all AI assets within an organization, and assigning responsibility ensures that each system has clear governance, monitoring, and compliance coverage. While use cases, training data, and registries are valuable metadata, they do not guarantee accountability. Without defined ownership, no party is responsible for addressing risk, bias, or incidents. Therefore, the most important information to include is ownership and accountability details for each AI system.
Reference: AAISM Exam Content Outline C AI Governance and Program Management (AI Inventories and Oversight)
AI Security Management Study Guide C Ownership and Accountability Structures

Question No : 3


Which of the following would BEST help to prevent the compromise of a facial recognition AI system through the use of alterations in facial appearance?

정답:
Explanation:
AAISM materials note that adversaries may attempt to bypass facial recognition by disguising or altering appearance. The most effective mitigation is to enhance training data with a wide range of variances in facial features, lighting, and disguises so the system can robustly detect authentic users despite adversarial attempts. Monitoring and secondary confirmation are supportive controls but are reactive. Fine-tuning to reduce hallucinations is irrelevant in this context, as hallucinations apply more to generative AI. The best preventive measure is strengthening the model with diverse, variance-rich training data.
Reference: AAISM Study Guide C AI Technologies and Controls (Robust Training Data Strategies)
ISACA AI Security Management C Biometric AI Security Risks

Question No : 4


How can an organization BEST protect itself from payment diversions caused by deepfake attacks impersonating management?

정답:
Explanation:
AAISM’s risk management framework stresses that the most effective defense against deepfake-enabled fraud, such as payment diversion, is resilient payment approval processes. This includes multi-step verification, segregation of duties, and independent confirmations for high-value transactions. Employee training, policies, or limiting payment frequency may reduce exposure, but they cannot guarantee prevention. Only process-based controls enforce structural safeguards that prevent fraudulent instructions from being executed, even if a deepfake impersonation attempt is successful.
Reference: AAISM Exam Content Outline C AI Risk Management (Fraud and Deepfake Risk)
AI Security Management Study Guide C Transactional Resilience and Controls

Question No : 5


Personal data used to train AI systems can BEST be protected by:

정답:
Explanation:
AAISM guidance on privacy-preserving AI highlights anonymization as the most effective means of protecting personal data used in training. By irreversibly removing or masking identifiable attributes, anonymization ensures that training data cannot be linked back to individuals, thereby meeting key privacy obligations under laws such as GDPR. Erasing data after training may limit exposure but does not protect it during the training process. Ensuring data quality improves accuracy but does not mitigate privacy risk. Hashing protects data integrity but does not guarantee anonymity, as hashes can sometimes be reversed or correlated. Therefore, anonymization is the recommended control for protecting personal data in AI training.
Reference: AAISM Study Guide C AI Technologies and Controls (Privacy-Preserving Methods)
ISACA AI Security Management C Data Anonymization Practices

Question No : 6


Which of the following is the MOST important consideration for an organization that has decided to adopt AI to leverage its competitive advantage?

정답:
Explanation:
AAISM’s governance guidance emphasizes that adopting AI for competitive advantage must begin with a comprehensive strategic roadmap for integration. This roadmap aligns AI adoption with business objectives, sets priorities, defines milestones, and ensures coordination across functions. Risk management, training, and tool procurement are essential, but they are tactical steps that follow once the strategic direction is defined. Without a roadmap, adoption becomes fragmented and risks misalignment with business strategy. The most important consideration at the adoption
stage is therefore creating a strategic integration roadmap.
Reference: AAISM Exam Content Outline C AI Governance and Program Management (Strategy and Roadmapping)
AI Security Management Study Guide C Business Alignment of AI Initiatives

Question No : 7


Which of the following is the MOST important course of action when implementing continuous monitoring and reporting for AI-based systems?

정답:
Explanation:
The AAISM governance framework specifies that the foundation of continuous monitoring is real-time tracking of key risk indicators. This ensures immediate detection of deviations, model drift, and operational anomalies. Automated alerts, dashboards, and reporting templates all support monitoring, but they rely on the presence of accurate, real-time KRI measurement as their source. Without live monitoring, the other controls are reactive rather than proactive. The most important course of action in establishing effective continuous monitoring is therefore real-time KRI tracking.
Reference: AAISM Study Guide C AI Governance and Program Management (Continuous Monitoring and Assurance)
ISACA AI Risk Guidance C Monitoring Key Risk Indicators

Question No : 8


Which of the following controls BEST mitigates the risk of bias in AI models?

정답:
Explanation:
Bias in AI models primarily stems from limitations or imbalances in training data. The AAISM study materials emphasize that the most effective way to mitigate this risk is through diverse data sourcing strategies that ensure coverage across demographics, scenarios, and contexts. Access controls protect data security, not fairness. Data reconciliation ensures accuracy but does not address representational imbalance. Cryptographic hashing preserves integrity but has no impact on bias mitigation. To reduce systemic unfairness, the critical control is sourcing diverse and representative data.
Reference: AAISM Exam Content Outline C AI Technologies and Controls (Bias and Fairness Management) AI Security Management Study Guide C Data Governance and Bias Reduction Strategies

Question No : 9


Which of the following is the MOST critical key risk indicator (KRI) for an AI system?

정답:
Explanation:
AAISM highlights that while accuracy and performance metrics are important, the rate of drift is the most critical KRI for AI systems. Model drift occurs when input data or environmental conditions shift, causing the system to degrade and produce unreliable outputs. This risk indicator directly reflects whether the AI continues to function as intended over time. Accuracy rates and response times are performance metrics, not primary risk signals. The amount of data in the model does not reliably indicate exposure to risk. Therefore, the greatest KRI for ongoing assurance and governance is the rate of drift.
Reference: AAISM Study Guide C AI Risk Management (Monitoring and Drift Detection)
ISACA AI Security Management C Key Risk Indicators for AI Systems

Question No : 10


Which of the following is the MOST important consideration when deciding how to compose an AI red team?

정답:
Explanation:
AAISM materials specify that the composition of an AI red team must be tailored to the organization’s AI use cases. The purpose of red-teaming is to simulate realistic adversarial conditions aligned with the actual applications of AI. For example, testing a generative model requires different expertise than testing a fraud detection system. While resource availability, compliance requirements, and time-to-market pressures are practical considerations, they are secondary to aligning team expertise with use case scenarios. The most important factor is therefore the AI use cases themselves.
Reference: AAISM Exam Content Outline C AI Risk Management (Red Teaming Considerations)
AI Security Management Study Guide C Tailoring Adversarial Testing to Use Cases

Question No : 11


In a new supply chain management system, AI models used by participating parties are interactively connected to generate advice in support of management decision making.
Which of the following is the GREATEST challenge related to this architecture?

정답:
Explanation:
The AAISM governance framework notes that in multi-party AI ecosystems, the greatest challenge is ensuring clear accountability for AI outputs. When models from different parties interact, responsibility for errors, bias, or harmful recommendations can be unclear, leading to disputes and compliance gaps. While aggregate risk assessment and error identification are significant, they are secondary to the fundamental governance requirement of establishing transparent lines of responsibility. Without defined accountability, no stakeholder can reliably manage or mitigate risks. Therefore, the greatest challenge in such a distributed architecture is responsibility for AI outputs.
Reference: AAISM Study Guide C AI Governance and Program Management (Accountability in Multi-Party Systems)
ISACA AI Governance Guidance C Roles and Responsibilities in AI Collaboration

Question No : 12


When integrating AI for innovation, which of the following can BEST help an organization manage security risk?

정답:
Explanation:
AAISM emphasizes that when introducing innovative AI systems, organizations reduce security and compliance risk by following a phased adoption approach. This allows incremental deployment, controlled testing, and gradual scaling while monitoring risks in real time. Re-evaluating risk appetite and evaluating compliance are important governance steps but do not directly mitigate risks during implementation. Seeking third-party advice can add expertise but does not provide the structured control that phased integration offers. The most effective risk management approach for AI innovation is to adopt a phased rollout strategy.
Reference: AAISM Exam Content Outline C AI Risk Management (Innovation and Risk Control)
AI Security Management Study Guide C Phased Implementation Strategies

Question No : 13


An organization needs large data sets to perform application testing.
Which of the following would BEST fulfill this need?

정답:
Explanation:
According to AAISM study guidance, the most direct and effective way to obtain large volumes of diverse data for application testing is through open-source data repositories. These repositories provide freely available, well-documented, and often standardized data that supports testing and benchmarking in a compliant manner. Model cards document AI behavior but do not provide data. Incorporating search content may introduce legal, privacy, and quality risks. Data augmentation is useful for expanding existing sets but does not provide the breadth or size required when starting
with insufficient data. The recommended best practice for sourcing large testing datasets is therefore the use of open-source repositories.
Reference: AAISM Study Guide C AI Technologies and Controls (Data Sources and Testing Practices)
ISACA AI Security Management C Data Governance and Compliance in AI Testing

Question No : 14


In the context of generative AI, which of the following would be the MOST likely goal of penetration testing during a red-teaming exercise?

정답:
Explanation:
AAISM’s risk management content describes red-teaming in generative AI as focused on deliberately crafting adversarial prompts to test whether the model produces unexpected or undesired outputs that violate safety, integrity, or compliance standards. The goal is not to stress system performance or randomly disrupt outputs, but rather to uncover vulnerabilities in how the model responds to manipulative inputs. This allows organizations to improve resilience against prompt injection, jailbreaking, or harmful content generation. The correct answer is therefore generate outputs that are unexpected using adversarial inputs.
Reference: AAISM Exam Content Outline C AI Risk Management (Red-Team Testing and Adversarial Exercises) AI Security Management Study Guide C Penetration Testing in Generative AI Contexts

Question No : 15


Which of the following BEST describes the role of risk documentation in an AI governance program?

정답:
Explanation:
In AAISM governance guidance, risk documentation is described as the structured record that defines the organization’s risk appetite and tolerance levels for AI initiatives. By outlining acceptable levels of risk, documentation ensures decision-makers can approve, monitor, and adjust AI projects within defined boundaries. While it may also serve audit functions, technical analysis, or communication to stakeholders, its primary role is to formalize risk acceptance thresholds and integrate them into governance and decision-making. This aligns directly with the governance requirement to align AI adoption with organizational risk appetite.
Reference: AAISM Study Guide C AI Governance and Program Management (Risk Documentation and Appetite) ISACA AI Security Management C Governance, Risk and Compliance Integration

 / 5
ISACA
CISA 덤프