Pure Certified Portworx Enterprise Professional (PEP) Exam Online Practice
Last updated: October 3, 2025
These online practice questions let you gauge your knowledge of the Pure Storage Portworx Enterprise Professional exam before deciding whether to register for it.
To pass the exam and save 35% of your preparation time, choose the Portworx Enterprise Professional dumps (latest real exam questions), which currently include 75 exam questions and answers.
Correct answer:
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
The DriveStateChange alert in Portworx indicates that free disk space on a storage device has fallen below the recommended threshold of 10%. This alert warns administrators that storage capacity on a particular disk is critically low and that immediate action may be needed to avoid performance degradation or failures. Monitoring disk space is essential to maintain cluster health and prevent data loss. Portworx automatically generates this alert as part of its proactive monitoring system, providing early warning so operators can add capacity, remove unnecessary data, or re-balance workloads. The alert documentation advises maintaining sufficient free space to ensure optimal performance and data durability in the Portworx cluster (Pure Storage Portworx Alerting Guide).
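As a quick sketch, active alerts such as DriveStateChange can be inspected from any node where the Portworx CLI is available; the `--type` filter shown here is an assumption and may differ by pxctl release:

```shell
# List all active Portworx alerts on the cluster.
pxctl alerts show

# Hypothetical: narrow the listing to drive/disk-related alerts
# (flag name assumed; check `pxctl alerts show --help` on your release).
pxctl alerts show --type drive
```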
Correct answer:
Explanation:
Stork (Storage Orchestrator for Kubernetes) is a Portworx component designed to enhance Kubernetes storage management. Its primary purpose is to orchestrate storage-aware operations, including volume scheduling, migration, backup, and disaster recovery. Stork integrates deeply with Kubernetes to provide application-aware scheduling decisions that respect storage constraints such as volume locality and affinity. It also facilitates migration of stateful workloads by coordinating volume replication and failover. Stork simplifies complex storage workflows in Kubernetes environments, enabling seamless backup and restore of applications and improving overall resilience. Portworx’s official documentation highlights Stork as a key enabler for business continuity by managing storage operations and migrations, making it essential for Kubernetes environments running critical stateful workloads with Portworx storage (Pure Storage Portworx Stork Guide).
Correct answer:
Explanation:
To view the status of only Portworx pods within the “portworx” namespace, administrators should use label selectors with kubectl. The command kubectl -n portworx get pods -l name=portworx filters pods by the label name=portworx, showing only pods related to the Portworx deployment. This is more precise than simply listing all pods with -o wide, which includes unrelated pods. Checking Portworx pods’ status is crucial for monitoring cluster health, identifying pod restarts, or troubleshooting failures. The Portworx installation manifests and documentation specify labels applied to Portworx pods, enabling operators to filter efficiently. Using this command supports focused operational monitoring and streamlined debugging within Kubernetes environments running Portworx (Pure Storage Portworx Kubernetes Guide).
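The filtering described above looks like this in practice (the "portworx" namespace name is taken from the question; some installations deploy into kube-system instead):

```shell
# Show only pods carrying the Portworx label in the portworx namespace.
kubectl -n portworx get pods -l name=portworx

# For comparison: this lists every pod in the namespace, related or not.
kubectl -n portworx get pods -o wide
```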
Correct answer:
Explanation:
Adding a new drive to an existing Portworx storage cluster involves bringing the physical device online for Portworx management. The correct command for this is pxctl service drive add --drive /dev/dm-1 --operation start. This command instructs Portworx to recognize and incorporate the new drive specified by the device path (e.g., /dev/dm-1) into its storage pool. After this operation, Portworx can use the drive for provisioning volumes or expanding capacity. The --operation start flag signals Portworx to initialize and prepare the drive for use. This method is part of Portworx’s dynamic storage management capabilities, allowing flexible scaling of storage resources without downtime. Official CLI documentation outlines this command as the supported approach to adding drives to running clusters safely and efficiently (Pure Storage Portworx CLI Guide).
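Run on the node that owns the new device, the operation sketched above would be (shown with double-dash long flags, as in recent pxctl releases; verify against your version's help output):

```shell
# Add the new device /dev/dm-1 to this node's Portworx storage pool.
pxctl service drive add --drive /dev/dm-1 --operation start

# Afterwards, confirm the drive was absorbed into a pool.
pxctl service pool show
```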
Correct answer:
Explanation:
Portworx recommends using Prometheus, Alertmanager, and Grafana as the core technologies for monitoring Portworx clusters within Kubernetes. Prometheus scrapes metrics exposed by Portworx components and stores time-series data for analysis. Alertmanager handles alert rules and notification delivery, enabling administrators to respond to critical events promptly. Grafana provides a powerful visualization platform to build dashboards from Prometheus data, helping teams visualize cluster health, performance metrics, and capacity trends. This combination is widely adopted due to its native Kubernetes integration, scalability, and extensibility. Portworx documentation includes detailed guidance on configuring these tools to monitor metrics such as volume latency, node health, pool usage, and snapshot status, forming a comprehensive monitoring and alerting solution for production environments (Pure Storage Portworx Monitoring Guide).
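A minimal sketch of the Prometheus side of this stack follows. Portworx nodes expose metrics over HTTP; the port (9001), target hostnames, and file path here are assumptions to adapt to your deployment (operator-managed installs typically wire this up via a ServiceMonitor instead):

```shell
# Write a minimal Prometheus scrape job for Portworx node metrics.
cat > portworx-scrape.yml <<'EOF'
scrape_configs:
  - job_name: portworx
    static_configs:
      - targets: ['px-node-1:9001', 'px-node-2:9001']   # hypothetical node names
EOF
```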
Correct answer:
Explanation:
To verify a Portworx upgrade on Kubernetes, administrators use the pxctl get storagenodes command. This Portworx CLI command lists all storage nodes with detailed information including version, status, and health. By inspecting the version column, administrators can confirm whether all nodes have been successfully upgraded to the desired Portworx release. This command specifically queries Portworx daemons for accurate cluster version details, unlike kubectl get nodes which shows Kubernetes node info but not Portworx versioning. Portworx upgrade best practices stress using pxctl commands for detailed verification after an upgrade to ensure consistent cluster software versions and successful upgrade completion (Pure Storage Portworx Upgrade Guide).
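In practice, post-upgrade verification might look like the following sketch; the first command is quoted from the explanation, and pxctl status is a commonly used complement that prints the local node's Portworx version:

```shell
# List storage nodes with their Portworx versions (per the explanation above).
pxctl get storagenodes

# Also prints the running Portworx version on the local node.
pxctl status
```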
Correct answer:
Explanation:
To view detailed information about storage pools in a Portworx cluster, including size, availability, usage, and health, administrators should use the command pxctl service pool show. This CLI command provides a comprehensive overview of all storage pools configured on cluster nodes, including pool IDs, device names, pool sizes, free space, and status. It helps administrators monitor resource utilization, detect degraded pools, and plan capacity expansions. While kubectl get storagecluster shows the overall cluster CRD status and pxctl cluster provision-status shows provisioning status, neither provides detailed pool-level insights. Portworx’s operational documentation recommends pxctl service pool show as the definitive command for monitoring pool resources and ensuring storage health across the cluster (Pure Storage Portworx CLI Guide).
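For contrast, the three commands discussed above, run from a node where pxctl is installed (only the last gives pool-level detail):

```shell
kubectl get storagecluster          # overall StorageCluster CRD status only
pxctl cluster provision-status      # provisioning status, not pool detail
pxctl service pool show             # pool IDs, backing drives, size, free space, status
```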
Correct answer:
Explanation:
Defragmentation in Portworx reorganizes storage blocks to reduce fragmentation caused by frequent write and delete operations. Scheduling defragmentation during low workload periods ensures minimal impact on application performance while improving storage efficiency and I/O throughput. This optimization leads to faster read/write operations and prolongs the lifespan of storage devices by minimizing random I/O. Portworx provides administrators the ability to define defragmentation windows and recurrence policies within cluster configurations to automate this process. The official Portworx documentation explains that carefully timed defragmentation is critical to maintaining optimal cluster performance without disrupting business-critical workloads, making it an essential part of ongoing cluster maintenance and operational health (Pure Storage Portworx Performance Guide).
Correct answer:
Explanation:
When troubleshooting a Portworx node that appears down, the first step is to verify the overall Kubernetes cluster health, particularly the node’s readiness. Running kubectl get node -o wide provides detailed information about all cluster nodes, including their status, roles, and network details. Ensuring the affected node is marked “Ready” or identifying any abnormal conditions helps isolate whether the problem is at the Kubernetes level or specific to Portworx. If the node is not Ready, issues may lie with Kubernetes components or node-level hardware/network problems. After confirming node status, further investigation using pxctl status or examining kubelet logs with journalctl can pinpoint Portworx-specific or system-level failures. Portworx operational best practices recommend starting with Kubernetes node health checks before delving into Portworx or system logs to effectively triage issues (Pure Storage Portworx Troubleshooting Guide).
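The triage order described above can be sketched as a three-step sequence (run the last two on, or against, the affected node):

```shell
# 1. Is the node Ready at the Kubernetes level?
kubectl get node -o wide

# 2. What does Portworx itself report on the affected node?
pxctl status

# 3. Check kubelet logs for node-level faults.
journalctl -u kubelet --since "1 hour ago"
```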
Correct answer:
Explanation:
Enabling security in Portworx without allowing guest access involves explicitly setting both enabled: true under the security section and guestAccess: false within the auth subsection of the StorageCluster spec. This configuration activates Portworx security features, enforcing authentication and encryption while preventing unauthenticated (guest) access to volumes. The guestAccess flag controls whether clients without valid credentials can access storage resources; setting it to false tightens security by requiring all access to be authenticated. This declarative setup is managed via the Kubernetes operator, ensuring consistent enforcement across cluster restarts and upgrades. Portworx’s security documentation stresses this dual setting to harden clusters against unauthorized access while maintaining operational capabilities for authorized users, aligning with enterprise security policies and compliance standards (Pure Storage Portworx Security Guide).
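A sketch of the spec fragment named above, using exactly the two fields from the explanation; merge it into your existing StorageCluster rather than applying it standalone (field values and placement may vary by operator version):

```shell
# StorageCluster fragment: security on, guest access off.
cat <<'EOF'
spec:
  security:
    enabled: true
    auth:
      guestAccess: false
EOF
```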
Correct answer:
Explanation:
Portworx alerts are generated for several resource types within the storage cluster environment, primarily including Nodes, Disks, Pods, Namespaces, and Volumes. These alerts provide real-time notifications of events such as node failures, disk health degradation, volume status changes, pod crashes, or namespace-level issues affecting storage consumption or performance. Monitoring these resource types helps administrators proactively manage cluster health, maintain high availability, and troubleshoot faults before they impact applications. The Portworx alerting framework aggregates data from these resources and integrates with external monitoring systems for centralized alert management. Official Portworx observability and alerting documentation list these resource categories as the core focus of Portworx alerting mechanisms, critical for operational awareness and automation (Pure Storage Portworx Observability Guide).
Correct answer:
Explanation:
Stork version 2.3 is the minimum version required to support Application Backup features in Portworx. Application Backup allows for consistent snapshots and restores of complex, multi-volume, and multi-pod stateful applications. This capability depends on enhancements introduced in Stork 2.3 that enable application-aware backup orchestration, coordination between Kubernetes and storage layers, and integration with backup policies. Earlier Stork versions lack these features, making them unsuitable for application-level backups. Portworx release notes and Stork documentation confirm that version 2.3 introduced key functionalities that underpin the reliable backup and restore workflows for stateful workloads, making it a baseline requirement for disaster recovery and business continuity implementations involving application backups (Pure Storage Portworx Backup Docs).
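One hedged way to confirm the running Stork version is to read the image tag off its deployment; the kube-system namespace and the deployment name "stork" are typical defaults and may differ in your installation:

```shell
# Print the Stork container image, whose tag carries the version.
kubectl -n kube-system get deployment stork \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
# An image tag of 2.3 or later indicates Application Backup support.
```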
Correct answer:
Explanation:
To ensure Portworx volumes for an ElasticSearch application are created only on specific Kubernetes nodes, the Volume Placement Strategy feature is used. This feature allows administrators to define node affinity or anti-affinity rules that restrict volume provisioning to a subset of nodes. By tagging the six nodes with appropriate labels and configuring the StorageClass or volume parameters to respect these labels, Portworx guarantees that volumes will only be provisioned on those nodes. This targeted volume placement is critical for performance optimization, data locality, and compliance with infrastructure constraints. Autopilot automates scaling, and Stork manages storage-aware scheduling, but neither directly controls volume-to-node placement. The Portworx deployment documentation highlights Volume Placement Strategy as the tool for precise volume-to-node mapping in Kubernetes clusters (Pure Storage Portworx Deployment Guide).
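A sketch of the label-then-restrict flow described above. The label key px/app=elasticsearch, the node and strategy names, and the apiVersion are hypothetical; adjust to your Portworx release:

```shell
# Tag the six nodes that should host the ElasticSearch volumes.
kubectl label nodes node1 node2 node3 node4 node5 node6 px/app=elasticsearch

# Define a VolumePlacementStrategy requiring replicas on those nodes.
kubectl apply -f - <<'EOF'
apiVersion: portworx.io/v1beta2
kind: VolumePlacementStrategy
metadata:
  name: es-on-labelled-nodes
spec:
  replicaAffinity:
    - enforcement: required
      matchExpressions:
        - key: px/app
          operator: In
          values:
            - elasticsearch
EOF
```

The strategy is then referenced from the ElasticSearch StorageClass, typically via a placement_strategy parameter naming the strategy, so that every volume provisioned from that class inherits the placement rule.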
Correct answer:
Explanation:
In Portworx, a volume is considered “Public” when guest access is enabled. Guest access allows users and applications without explicit Portworx authentication credentials to access the volume. This setting is typically used in less restrictive environments where access control is relaxed, but it reduces security by exposing data to potentially unauthorized entities. Public volumes can be accessed by any entity with network connectivity and basic permissions, which is why enabling guest access is carefully controlled in secure deployments. Portworx documentation on security models and access controls stresses that public volumes should be used sparingly and monitored closely due to the elevated risk of data exposure and compliance violations (Pure Storage Portworx Security Guide).
Correct answer:
Explanation:
Skinny Snapshots are a space-efficient snapshot technique used by Portworx for replicated volumes (Repl 2 or 3) when storage capacity is limited and no external Object Store is configured. Unlike full snapshots that duplicate data blocks, skinny snapshots capture only the differences (deltas) since the last snapshot, minimizing space consumption. This method allows administrators to take frequent snapshots without significantly impacting storage availability. Skinny Snapshots are particularly useful for on-premises environments or clusters without access to cloud object storage, balancing snapshot granularity with resource constraints. Official Portworx snapshot documentation explains how skinny snapshots work internally, improving backup and recovery capabilities under tight storage conditions without requiring cloud integration (Pure Storage Portworx Snapshot Guide).