Exam Dumps
Every month, we help more than 1,000 people prepare well for their exams and pass them successfully.

The Linux Foundation CNPA Exam

Certified Cloud Native Platform Engineering Associate Online Practice

Last updated: October 10, 2025

Through these online practice questions, you can gauge how well you know The Linux Foundation CNPA exam material and then decide whether to register for the exam.

To pass the exam with a 100% success rate and save 35% of your preparation time, use the CNPA dumps (latest real exam questions), which currently include the 85 most recent exam questions and answers.


Question No : 1


In the context of Agile methodology, which principle aligns best with DevOps practices in platform engineering?

Answer: B
Explanation:
Agile and DevOps share the principle of continuous improvement through rapid feedback and iteration.
Option B is correct because gathering feedback continuously and iterating aligns directly with DevOps practices such as CI/CD, observability-driven development, and platform engineering’s focus on developer experience. This ensures platforms and applications evolve quickly in response to real-world conditions.
Option A contradicts Agile, which emphasizes active customer collaboration.
Option C reflects rigid waterfall methodologies, not Agile or DevOps.
Option D enforces silos, which is the opposite of DevOps principles of cross-functional collaboration.
By embracing continuous feedback loops, both Agile and platform engineering accelerate delivery, improve resilience, and ensure that platforms deliver real value to developers and end users. This cultural alignment ensures both speed and quality in cloud native environments.
Reference:
― Agile Manifesto Principles
― CNCF Platforms Whitepaper
― Cloud Native Platform Engineering Study Guide

Question No : 2


What is a key cultural aspect that drives successful platform adoption in an organization?

Answer: D
Explanation:
Successful platform adoption depends heavily on cultural practices that foster collaboration and continuous improvement.
Option D is correct because feedback loops between developers and platform teams ensure that the platform evolves to meet developer needs while balancing security and governance. This aligns with the principle of treating the platform as a product, where developer experience is central.
Option A (mandates) often leads to resistance and shadow IT.
Option B isolates platform teams, creating silos and reducing alignment with developer workflows.
Option C is misleading: security is important, but overemphasizing it at the expense of usability hinders adoption.
Feedback-driven iteration creates trust, improves usability, and drives organic adoption. It transforms the platform into a valuable product that developers want to use, rather than one they are forced to adopt.
Reference:
― CNCF Platforms Whitepaper
― Team Topologies (Platform as a Product model)
― Cloud Native Platform Engineering Study Guide

Question No : 3


In a cloud native environment, which approach is effective for managing resources to ensure a balance between defined states and dynamic adjustments?

Answer: C
Explanation:
Declarative resource management is a core principle in Kubernetes and cloud native platforms.
Option C is correct because declarative systems define the desired state of resources (e.g., YAML manifests for Deployments, Services, or ConfigMaps), and controllers reconcile the actual state to match the desired state. This provides consistency, automation, and resilience, while also allowing dynamic adjustments such as scaling.
Option A (imperative management) requires step-by-step commands, which are error-prone and not scalable.
Option B (manual tracking) adds overhead and risk of drift.
Option D (static allocation) wastes resources and does not adapt to changing workloads.
Declarative management enables GitOps workflows, automated scaling, and consistent application of policies. This approach aligns with platform engineering principles by combining automation with governance, enabling efficiency and reliability at scale.
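To make the declarative pattern concrete, the following is a minimal sketch of a Deployment manifest (the application name and image are illustrative assumptions). The YAML declares the desired state, and the Deployment controller continuously reconciles the cluster toward it, for example after `kubectl apply -f web.yaml`.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # hypothetical application name
  labels:
    app: web
spec:
  replicas: 3                # desired state: keep three Pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27  # illustrative image; pin versions or digests in practice
          ports:
            - containerPort: 80
```

Scaling is then a matter of changing `replicas` (manually or via an autoscaler) and letting the controller reconcile, rather than issuing imperative commands.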
Reference:
― CNCF GitOps Principles
― Kubernetes Design Principles
― Cloud Native Platform Engineering Study Guide

Question No : 4


Which metric measures a cloud native platform's impact on developer productivity and deployment speed?

Answer: B
Explanation:
The Lead Time for Changes metric, one of the DORA (DevOps Research and Assessment) metrics, directly measures the impact of a platform on developer productivity and deployment speed.
Option B is correct because it reflects the average time taken from when code is committed until it is successfully deployed into production. A shorter lead time indicates that the platform enables faster feedback loops, quicker delivery of features, and overall improved developer experience.
Option A (infrastructure cost) and Option D (resource utilization) are important for operations but do not measure productivity or speed.
Option C (security vulnerabilities) relates to platform security posture, not productivity.
By tracking lead time, organizations can evaluate how effective their platform is in enabling self-service, automation, and streamlined CI/CD workflows. Improvements in this metric demonstrate that the platform is successfully reducing friction for developers and accelerating value delivery to end users.
Reference:
― CNCF Platforms Whitepaper
― State of DevOps Report (DORA Metrics)
― Cloud Native Platform Engineering Study Guide

Question No : 5


In a cloud native environment, how do policy engines facilitate a unified approach for teams to consume platform services?

Answer: D
Explanation:
Policy engines (such as Open Policy Agent (OPA) or Kyverno) play a critical role in enforcing governance, security, and compliance consistently across cloud native platforms.
Option D is correct because policy engines provide centralized, reusable policies that can be applied across clusters, services, and environments. This ensures that developers consume platform services in a compliant and secure manner, without needing to manage these controls manually.
Option A is partially correct but too narrow, as policies extend beyond compliance to include operational, security, and cost-control measures.
Option B is not the primary function of policy engines, though integration with CI/CD is possible.
Option C is incorrect because SLAs are business agreements, not enforced by policy engines directly.
Policy engines enforce guardrails like image signing, RBAC rules, resource quotas, and network policies automatically, reducing cognitive load for developers while giving platform teams confidence in compliance. This supports the platform engineering principle of combining self-service with governance.
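As an illustration, a minimal Kyverno ClusterPolicy sketch is shown below (the policy name and label requirement are hypothetical); a comparable guardrail could be written as an OPA/Rego policy instead.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label          # hypothetical policy name
spec:
  validationFailureAction: Enforce  # reject non-compliant resources at admission
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "All Pods must carry a 'team' label."
        pattern:
          metadata:
            labels:
              team: "?*"            # any non-empty value satisfies the rule
```

Because the policy is defined once and enforced centrally at admission time, every team consumes platform services under the same guardrail without per-team configuration.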
Reference:
― CNCF Platforms Whitepaper
― CNCF Security TAG (OPA, Kyverno)
― Cloud Native Platform Engineering Study Guide

Question No : 6


Which of the following is a primary benefit of using Kubernetes Custom Resource Definitions (CRDs) in a self-service platform model?

Answer: C
Explanation:
Kubernetes Custom Resource Definitions (CRDs) extend the Kubernetes API by allowing platform teams to create and expose custom APIs without modifying the core Kubernetes API server code.
Option C is correct because this extensibility enables teams to define new abstractions (e.g., Database, Application, or Environment resources) tailored to organizational needs, which developers can consume through a self-service model.
Option A is incorrect because scaling and failover are handled by controllers or operators, not CRDs themselves.
Option B is wrong because RBAC is still required for access control over custom resources.
Option D is misleading because multi-cloud support depends on how CRDs and their controllers are implemented; it is not a built-in CRD feature.
By leveraging CRDs, platform teams can standardize workflows, hide complexity, and implement guardrails, all while presenting developers with simplified abstractions. This is central to platform engineering, as it empowers developers with self-service APIs while maintaining operational control.
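For illustration, below is a sketch of a CRD exposing a hypothetical `Database` abstraction; the API group, schema fields, and naming are assumptions, and a separate controller or operator would implement the provisioning logic.

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databases.platform.example.com   # must be <plural>.<group>
spec:
  group: platform.example.com            # hypothetical API group
  scope: Namespaced
  names:
    kind: Database
    plural: databases
    singular: database
    shortNames:
      - db
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                engine:
                  type: string           # e.g. "postgres" or "mysql"
                sizeGB:
                  type: integer
```

Developers can then request `kind: Database` objects through self-service workflows, while the platform team retains control over how those requests are fulfilled.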
Reference:
― CNCF Platforms Whitepaper
― Kubernetes Extensibility Documentation
― Cloud Native Platform Engineering Study Guide

Question No : 7


In a cloud native environment, what is one of the security benefits of implementing a service mesh?

Answer: A
Explanation:
A key advantage of using a service mesh is its ability to secure service-to-service communication transparently, without requiring application code changes.
Option A is correct because service meshes (e.g., Istio, Linkerd) provide mutual TLS (mTLS) by default, ensuring both encryption in transit and authentication between services. This establishes a zero-trust networking model inside the cluster.
Option B (scaling) is managed by Kubernetes (e.g., the Horizontal Pod Autoscaler), not by the service mesh.
Option C (logging) may be supported as an observability feature, but it is not the primary security benefit.
Option D (IP allowlisting) is an outdated, less flexible mechanism compared to identity-based policies that meshes provide.
Service meshes enforce security consistently across all services, support fine-grained policies, and ensure compliance without burdening developers with complex configurations. This makes mTLS a foundational benefit in cloud native platform security.
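With Istio, for example, mesh-wide mTLS can be enabled with a single declarative resource; the sketch below assumes Istio's default root namespace of istio-system.

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # placing the policy in the root namespace makes it mesh-wide
spec:
  mtls:
    mode: STRICT            # only mutually authenticated, encrypted traffic is accepted
```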
Reference:
― CNCF Service Mesh Whitepaper
― CNCF Platforms Whitepaper
― Cloud Native Platform Engineering Study Guide

Question No : 8


A software development team is struggling to adopt a new cloud native platform efficiently.
How can a centralized developer portal, such as Backstage, help improve their adoption process?

Answer: A
Explanation:
Developer portals like Backstage act as the single entry point for platform services, APIs, golden paths, and documentation.
Option A is correct because centralizing access greatly reduces the friction developers face when trying to adopt a new platform. Instead of searching across fragmented systems or learning low-level Kubernetes details, developers can find everything in one place, including templates, service catalogs, automated workflows, and governance policies.
Option B is irrelevant to platform adoption.
Option C may foster community sharing but does not directly address adoption challenges.
Option D contradicts platform engineering principles, which emphasize democratizing access and self-service rather than restricting tools to senior developers.
By providing a unified experience, portals improve discoverability, consistency, and self-service. They reduce cognitive load and support the platform engineering principle of improving developer experience, making adoption of new platforms smoother and more efficient.
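In Backstage, services become discoverable by registering a catalog-info.yaml descriptor alongside the code; the sketch below uses hypothetical component, repository, and team names.

```yaml
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: payments-service                                   # hypothetical service name
  description: Handles payment processing for the storefront.
  annotations:
    github.com/project-slug: example-org/payments-service  # hypothetical repository
spec:
  type: service
  lifecycle: production
  owner: team-payments                                     # hypothetical owning team
```

Once registered, the service appears in the portal's catalog next to its documentation, APIs, and templates, giving developers a single place to discover and adopt platform capabilities.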
Reference:
― CNCF Platforms Whitepaper
― CNCF Platform Engineering Maturity Model
― Cloud Native Platform Engineering Study Guide

Question No : 9


What is the main benefit of using minimal base container images and SBOM attestation practices in CI/CD pipelines?

Answer: B
Explanation:
The use of minimal base container images and Software Bill of Materials (SBOM) attestation is a best practice for strengthening software supply chain security.
Option B is correct because smaller base images contain fewer components, which inherently reduces the attack surface and the number of potential vulnerabilities. SBOMs, meanwhile, provide a detailed inventory of included libraries and dependencies, enabling vulnerability scanning, license compliance, and traceability.
Option A is only a partial benefit, not the primary goal.
Option C (maximum flexibility) contradicts the principle of minimal images, which deliberately restrict included software.
Option D (reducing storage costs) may be a side effect but is not the core benefit in a security-focused context.
By combining minimal images with SBOM practices, platform teams ensure stronger compliance with supply chain security frameworks, enable early detection of vulnerabilities in CI/CD pipelines, and support fast remediation. This is emphasized in CNCF security and platform engineering guidance as a way to align with zero-trust principles.
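A hedged sketch of how this might look in a CI pipeline is shown below (GitHub Actions syntax; the image name is hypothetical, and the Syft and Grype CLIs are assumed to be available on the runner).

```yaml
name: build-and-attest
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image from a minimal base (e.g. a distroless or alpine Dockerfile)
        run: docker build -t registry.example.com/app:${{ github.sha }} .
      - name: Generate an SBOM for the image
        run: syft registry.example.com/app:${{ github.sha }} -o spdx-json > sbom.spdx.json
      - name: Scan the SBOM for known vulnerabilities
        run: grype sbom:sbom.spdx.json --fail-on high   # fail the pipeline on high-severity findings
```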
Reference:
― CNCF Supply Chain Security Whitepaper
― CNCF Platforms Whitepaper
― Cloud Native Platform Engineering Study Guide

Question No : 10


Which of the following would be considered an advantage of using abstract APIs when offering cloud service provisioning and management as platform services?

Answer: B
Explanation:
Abstract APIs are an essential component of platform engineering, providing a simplified interface for developers to consume infrastructure and cloud services without deep knowledge of provider-specific details.
Option B is correct because abstractions allow platform teams to curate services with built-in guardrails, ensuring compliance, security, and operational standards are enforced automatically. Developers get the benefit of self-service and flexibility while the platform team ensures governance.
Option A would slow down the process, defeating the purpose of abstraction.
Option C removes guardrails, which risks security and compliance violations.
Option D allows uncontrolled deployments, which can create chaos and undermine platform governance.
Abstract APIs strike the balance between developer experience and organizational control. They provide golden paths and opinionated defaults while maintaining the flexibility needed for developer productivity. This approach ensures efficient service provisioning at scale with reduced cognitive load on developers.
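A developer-facing abstract API often looks like the hypothetical claim below (for instance, backed by Crossplane compositions or a custom operator); the API group, kind, and fields are assumptions meant to show curated defaults rather than raw provider settings.

```yaml
apiVersion: platform.example.com/v1alpha1   # hypothetical platform API group
kind: PostgreSQLInstance                    # hypothetical abstraction over provider-specific databases
metadata:
  name: orders-db
  namespace: team-orders
spec:
  size: small                # curated t-shirt sizes instead of raw instance types
  highAvailability: true     # the platform maps this to compliant provider settings
  backupRetentionDays: 7     # guardrail defaults enforced behind the abstraction
```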
Reference:
― CNCF Platforms Whitepaper
― CNCF Platform Engineering Maturity Model
― Cloud Native Platform Engineering Study Guide

Question No : 11


In a GitOps approach, how should the desired state of a system be managed and integrated?

Answer: D
Explanation:
The GitOps model is built on the principle that the desired state of infrastructure and applications must be stored in Git as the single source of truth.
Option D is correct because Git provides versioning, immutability, and auditability, while reconciliation controllers (e.g., Argo CD or Flux) pull the desired state into the system continuously. This ensures that actual cluster state always matches the declared Git state.
Option A is partially correct but fails because GitOps eliminates manual push workflows; automation ensures changes are pulled and reconciled.
Option B describes Kubernetes CRDs, which may be part of the system but do not embody GitOps on their own.
Option C contradicts GitOps principles, which rely on pull-based reconciliation, not centralized push.
Storing desired state in Git provides full traceability, automated rollbacks, and continuous reconciliation, improving reliability and compliance. This makes GitOps a core practice for cloud native platform engineering.
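With Argo CD, for example, the link between the Git repository and the cluster is itself declared as an Application resource; the sketch below uses a hypothetical repository and paths.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: web-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/platform-config   # hypothetical Git repository
    targetRevision: main
    path: apps/web-app
  destination:
    server: https://kubernetes.default.svc
    namespace: web-app
  syncPolicy:
    automated:
      prune: true      # remove resources that were deleted from Git
      selfHeal: true   # revert manual drift back to the state declared in Git
```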
Reference:
― CNCF GitOps Principles
― CNCF Platforms Whitepaper
― Cloud Native Platform Engineering Study Guide

Question No : 12


Development teams frequently raise support tickets for short-term access to staging clusters, creating a growing burden on the platform team.
What's the best long-term solution to balance control, efficiency, and developer experience?

Answer: A
Explanation:
The most sustainable solution for managing developer access while balancing governance and self-service is to adopt GitOps-based RBAC management.
Option A is correct because it leverages Git as the source of truth for access permissions, allowing developers to request access through pull requests. For non-sensitive environments such as staging, approvals can be automated, ensuring efficiency while still maintaining auditability. This approach aligns with platform engineering principles of self-service, automation, and compliance.
Option B places the burden entirely on one engineer, which does not scale.
Option C introduces bottlenecks, delays, and reduces developer experience.
Option D bypasses governance and auditability, potentially creating security risks.
GitOps for RBAC not only improves developer experience but also ensures all changes are versioned, reviewed, and auditable. This model supports compliance while reducing manual intervention from the platform team, thus enhancing efficiency.
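In practice, the access itself can live in Git as an ordinary RBAC manifest; the sketch below (hypothetical group and namespace names) would be added via a pull request and applied by the GitOps controller.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: staging-debug-access
  namespace: staging
subjects:
  - kind: Group
    name: team-payments                 # hypothetical developer group requesting access
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                            # built-in read-only role for short-term debugging
  apiGroup: rbac.authorization.k8s.io
```

The pull request history becomes the audit trail, and merging (automated for staging, reviewed for sensitive environments) replaces the support ticket.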
Reference:
― CNCF GitOps Principles
― CNCF Platforms Whitepaper
― Cloud Native Platform Engineering Study Guide

Question No : 13


Which platform component enables one-click provisioning of sandbox environments, including both infrastructure and application code?

Answer: A
Explanation:
A CI/CD pipeline is the platform component that enables automated provisioning of sandbox environments with both infrastructure and application code.
Option A is correct because modern pipelines integrate Infrastructure as Code (IaC) with application deployment, enabling “one-click” or self-service provisioning of complete environments. This capability is central to platform engineering because it empowers developers to spin up temporary or permanent sandbox environments quickly for testing, experimentation, or demos.
Option B (service mesh) focuses on secure, observable service-to-service communication but does not provision environments.
Option C (service bus) is used for asynchronous communication between services, not environment provisioning.
Option D (observability pipeline) deals with collecting telemetry data, not provisioning.
By leveraging CI/CD pipelines integrated with GitOps and IaC tools (such as Terraform, Crossplane, or Kubernetes manifests), platform teams ensure consistency, compliance, and automation. Developers benefit from reduced friction, faster feedback cycles, and a better overall developer experience.
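A hedged sketch of such a pipeline is shown below (GitHub Actions syntax with a manual trigger; the Terraform directory, overlay path, and input name are illustrative assumptions).

```yaml
name: provision-sandbox
on:
  workflow_dispatch:                      # the "one click" entry point for developers
    inputs:
      sandbox_name:
        description: Name of the sandbox environment
        required: true
jobs:
  provision:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Provision infrastructure (IaC)
        run: |
          terraform -chdir=infra init
          terraform -chdir=infra apply -auto-approve -var="env=${{ github.event.inputs.sandbox_name }}"
      - name: Deploy the application code
        run: kubectl apply -k overlays/sandbox   # hypothetical Kustomize overlay for sandboxes
```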
Reference:
― CNCF Platforms Whitepaper
― CNCF GitOps Principles
― Cloud Native Platform Engineering Study Guide

Question No : 14


As a Cloud Native Platform Associate, you need to implement an observability strategy for your Kubernetes clusters.
Which of the following tools is most commonly used for collecting and monitoring metrics in cloud native environments?

Answer: D
Explanation:
Prometheus is the de facto standard for collecting and monitoring metrics in Kubernetes and other cloud native environments.
Option D is correct because Prometheus is a CNCF graduated project designed for multi-dimensional data collection, time-series storage, and powerful querying using PromQL. It integrates seamlessly with Kubernetes, automatically discovering targets such as Pods and Services through service discovery.
Option A (Grafana) is widely used for visualization but relies on Prometheus or other data sources to collect metrics.
Option B (ELK Stack) is better suited for log aggregation rather than real-time metrics.
Option C (OpenTelemetry) provides standardized instrumentation but is focused on generating and exporting metrics, logs, and traces rather than storage, querying, and alerting.
Prometheus plays a central role in platform observability strategies, often paired with Alertmanager for notifications and Grafana for dashboards. Together, they enable proactive monitoring, SLO/SLI measurement, and incident detection, making Prometheus indispensable in cloud native platform engineering.
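A minimal sketch of a Prometheus scrape configuration using Kubernetes service discovery is shown below; the annotation-based opt-in is a common community convention rather than a required setup.

```yaml
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod                 # discover scrape targets from the Kubernetes API
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep              # only scrape Pods annotated prometheus.io/scrape: "true"
        regex: "true"
```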
Reference:
― CNCF Observability Whitepaper
― Prometheus CNCF Project Documentation
― Cloud Native Platform Engineering Study Guide

Question No : 15


What is the primary purpose of using multiple environments (e.g., development, staging, production) in a cloud native platform?

Answer: A
Explanation:
The primary reason for implementing multiple environments in cloud native platforms is to isolate the different phases of the software development lifecycle.
Option A is correct because environments such as development, staging, and production enable testing and validation at each stage without impacting end users. Development environments allow rapid iteration, staging environments simulate production for integration and performance testing, and production environments serve real users.
Option B (reducing costs) may be a side effect but is not the main purpose.
Option C (distributing traffic) relates more to load balancing and high availability, not environment separation.
Option D is the opposite of the goal: different environments often require tailored infrastructure to meet their distinct purposes.
Isolation through multiple environments is fundamental to reducing risk, supporting continuous delivery, and ensuring stability. This practice also allows for compliance checks, automated testing, and user acceptance validation before changes reach production.
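One common way to express this isolation declaratively is a Kustomize base shared by all environments plus a thin overlay per environment; the staging overlay sketched below uses hypothetical paths.

```yaml
# overlays/staging/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: staging             # staging workloads stay isolated in their own namespace
resources:
  - ../../base                 # shared manifests reused by development, staging, and production
patches:
  - path: replica-count.yaml   # staging-specific overrides, e.g. fewer replicas than production
```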
Reference:
― CNCF Platforms Whitepaper
― Team Topologies & Platform Engineering Guidance
― Cloud Native Platform Engineering Study Guide
