SnowPro Advanced Administrator Online Practice
Last updated: June 18, 2025
You can use these online practice questions to gauge how well you know the Snowflake ADA-C01 exam material before deciding whether to register for the exam.
To pass the exam and cut your preparation time by 35%, choose the ADA-C01 dumps (the latest real exam questions), which currently include 72 questions and answers.
Answer: AC
Explanation:
USAGE on the schema DatabaseA_clone.Schema1. When a database is cloned, its schemas are cloned along with the privileges granted on the corresponding schemas in the original database, so the ANALYST role will have USAGE on Schema1 in DatabaseA_clone.
SELECT on all tables, and only non-secure views in DatabaseA_clone.Schema1. Cloning a database replicates the permission settings of the original, including SELECT privileges on tables and non-secure views. Permissions on secure views, however, are not automatically carried over to the corresponding views in the clone, even if they exist in the original database.
Option B (USAGE on the database DatabaseA_clone.Schema1) is inaccurate because it conflates database-level and schema-level permissions. Option D (SELECT on all tables, and only secure views in DatabaseA_clone.Schema1) is incorrect because permissions on secure views are not replicated by database cloning. Option E (SELECT on all tables and views in DatabaseA_clone.Schema1) is incorrect for the same reason: permissions on secure views do not automatically carry over to the cloned database.
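A minimal sketch of how this could be checked (object and role names are taken from the question; the verification step is illustrative):

-- Clone the database; privileges granted on child objects (schemas, tables,
-- non-secure views) carry over to the clone.
CREATE DATABASE DatabaseA_clone CLONE DatabaseA;

-- Inspect which privileges the ANALYST role now holds, including on the clone.
SHOW GRANTS TO ROLE ANALYST;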
Answer:
Explanation:
According to the Snowflake documentation, resource monitors help you manage and control Snowflake costs by monitoring and setting limits on your compute resources. Resource monitors do not consume any credits or add any load to the virtual warehouses they monitor. A resource monitor can have multiple triggers that specify different actions (such as suspending or notifying) when certain percentages of the credit quota are reached, and it can be applied either to the entire account or to a specific set of individual warehouses. The other options are not benefits of resource monitors: the cost of running a resource monitor is negligible, not 10% of a credit; multiple resource monitors cannot be applied to a single virtual warehouse, since only one resource monitor can be assigned to a warehouse at a time; and resource monitor governance is not tightly controlled, because account administrators can enable users with other roles to view and modify resource monitors using SQL.
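As a hedged illustration (monitor name, quota, thresholds, and warehouse name are made up), a monitor with multiple triggers assigned to a single warehouse might look like this:

CREATE RESOURCE MONITOR my_monitor WITH CREDIT_QUOTA = 100
  TRIGGERS ON 75 PERCENT DO NOTIFY    -- notify administrators at 75% of quota
           ON 100 PERCENT DO SUSPEND; -- suspend the warehouse at 100% of quota

-- A warehouse can have at most one resource monitor assigned at a time.
ALTER WAREHOUSE my_wh SET RESOURCE_MONITOR = my_monitor;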
Answer:
Explanation:
According to the Snowflake documentation, the authentication process for SCIM provisioning uses an OAuth Bearer token that is valid for six months. Customers must keep track of their authentication token and can generate a new token on demand. If the token expires, the SCIM provisioning process will fail, so the token must be regenerated before it expires. The other options are not required for SCIM provisioning.
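For example (the integration name is illustrative), a new token can be generated on demand:

-- Generate a fresh SCIM access token (valid for six months) for the named
-- security integration; run before the current token expires.
SELECT SYSTEM$GENERATE_SCIM_ACCESS_TOKEN('AAD_PROVISIONING');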
Answer: A
Explanation:
In Snowflake, a multi-cluster virtual warehouse auto-suspends when there has been no activity on any of its clusters for a specified period of time. This saves costs by automatically releasing the warehouse's compute resources when there is no query or data-loading activity. The auto-suspend timeout is configurable and can be set to any period of inactivity.
Option B (after a specified period of time once an additional cluster has started at the maximum number of clusters specified for the warehouse) does not trigger auto-suspend unless there is also no activity during that time. Option C (when the minimum number of clusters is running and there is no activity for the specified period of time) is inaccurate because auto-suspend considers the activity of all clusters, not just the minimum number. Option D (auto-suspend does not apply to multi-cluster warehouses) is incorrect, as multi-cluster warehouses are also subject to auto-suspend.
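A hedged sketch (warehouse name, cluster counts, and timeout are illustrative):

ALTER WAREHOUSE my_mc_wh SET
  MIN_CLUSTER_COUNT = 1,
  MAX_CLUSTER_COUNT = 4,
  AUTO_SUSPEND = 300;  -- suspend only after 300 seconds with no activity on any cluster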
Answer: A
Explanation:
According to the Replication considerations documentation, the Time Travel retention period for a secondary database can be different from the primary database. The retention period can be set at the database, schema, or table level using the DATA_RETENTION_TIME_IN_DAYS parameter. Therefore, to extend the Time Travel retention policy to 60 days on the secondary database only, the best option is to set the data retention policy on the secondary database to 60 days using the ALTER DATABASE command.
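A sketch of the endorsed option (database name is illustrative; this assumes, per the explanation, that the parameter can be set on the secondary independently of the primary):

-- Run in the secondary account only: extend Time Travel retention to 60 days.
ALTER DATABASE my_db SET DATA_RETENTION_TIME_IN_DAYS = 60;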
The other options are incorrect because:
• B. Setting the data retention policy on the schemas in the secondary database to 60 days will not affect the database-level retention period, which will remain at 30 days. The most specific setting overrides the more general ones, so the schema-level setting will apply to the tables in the schema, but not to the database itself.
• C. Setting the data retention policy on the primary database to 30 days and the schemas to 60 days will not affect the secondary database, which will have its own retention period. The replication process does not copy the retention period settings from the primary to the secondary database, so they can be configured independently.
• D. Setting the data retention policy on the primary database to 60 days will not affect the secondary database, which will have its own retention period. The replication process does not copy the retention period settings from the primary to the secondary database, so they can be configured independently.
Answer: D
Explanation:
In Snowflake, if an administrator wants to delegate the administration of the company's data exchange to users who do not have the ACCOUNTADMIN role, they can grant those users USAGE permission on the data exchange. This allows the specified role to view the data exchange and request data without granting them the ability to modify or own it. This meets the requirement to delegate administration while maintaining appropriate separation of privileges.
Options A (Grant imported privileges on data exchange) and C (Grant ownership on data exchange) provide more permissions than necessary and may not be appropriate just for managing the data exchange. Option B (Grant modify on data exchange) might also provide unnecessary additional permissions, depending on what administrative tasks need to be performed. Typically, granting USAGE permission is the minimum necessary to meet basic administrative requirements.
Answer: CD
Explanation:
According to the Accessing a Data Exchange documentation, a consumer account can request and get data from the Data Exchange using either the ACCOUNTADMIN role or a role with the IMPORT SHARE and CREATE DATABASE privileges. The ACCOUNTADMIN role is the top-level role that has all privileges on all objects in the account, including the ability to request and get data from the Data Exchange. A role with the IMPORT SHARE and CREATE DATABASE privileges can also request and get data from the Data Exchange, as these are the minimum privileges required to create a database from a share.
The other options are incorrect because:
• A. The SYSADMIN role does not have the privilege to request and get data from the Data Exchange unless it is also granted the IMPORT SHARE and CREATE DATABASE privileges. SYSADMIN is a pre-defined role for creating and managing objects such as warehouses, databases, and schemas; account-level operations such as managing users, roles, and shares are reserved for the ACCOUNTADMIN role.
• B. The SECURITYADMIN role does not have the privilege to request and get data from the Data Exchange, unless it is also granted the IMPORT SHARE and CREATE DATABASE privileges. The SECURITYADMIN role is a pre-defined role that has the privilege to manage security objects in the account, such as network policies, encryption keys, and security integrations, but not data objects, such as databases, schemas, and tables.
• E. The IMPORT PRIVILEGES and SHARED DATABASE are not valid privileges in Snowflake. The correct privilege names are IMPORT SHARE and CREATE DATABASE, as explained above.
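For illustration (role, share, and account names are made up), the minimum grants the explanation identifies, and the resulting import, could look like:

-- Grant the two account-level privileges required to consume a share.
GRANT IMPORT SHARE ON ACCOUNT TO ROLE data_consumer;
GRANT CREATE DATABASE ON ACCOUNT TO ROLE data_consumer;

-- The role can then create a database from an inbound share.
CREATE DATABASE shared_db FROM SHARE provider_account.sales_share;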
Answer: C
Explanation:
According to the Snowflake Warehouse Cost Optimization blog post, one of the strategies to reduce the cost of running a warehouse is to use a multi-cluster warehouse with auto-scaling enabled. This allows the warehouse to automatically adjust the number of clusters based on the concurrency demand and the queue size. A multi-cluster warehouse can also be configured with a minimum and maximum number of clusters, as well as a scaling policy to control the scaling behavior. This way, the warehouse can handle the parallel load queries efficiently without wasting resources or credits. The blog post also suggests using a smaller warehouse size, such as SMALL or XSMALL, for loading data, as it can perform better than a larger warehouse size for small INSERTs. Therefore, the best option to reduce the costs while minimizing the overall load times for migrating data warehouse history is to keep the warehouse as a SMALL or XSMALL and configure it as a multi-cluster warehouse to handle the parallel load queries.
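A hedged sketch of such a load warehouse (name, cluster counts, and timeout are illustrative):

CREATE WAREHOUSE load_wh WITH
  WAREHOUSE_SIZE = 'XSMALL',      -- a small size performs well for small INSERTs
  MIN_CLUSTER_COUNT = 1,
  MAX_CLUSTER_COUNT = 10,         -- scale out to handle the parallel load queries
  SCALING_POLICY = 'STANDARD',
  AUTO_SUSPEND = 60;              -- release credits quickly between load bursts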
The other options are incorrect because:
• A. Deploying another 2XL warehouse to handle a portion of the load queries will not reduce the costs, but increase them. It will also introduce complexity and potential inconsistency in managing the data loading process across multiple warehouses.
• B. Changing the 2XL warehouse to 4XL will not reduce the costs, but increase them. It will also provide more compute resources than needed for small INSERTs, which are not CPU-intensive but I/O-intensive.
• D. Converting the INSERTs to several tables will not reduce the costs, but increase them. It will also create unnecessary data duplication and fragmentation, which will degrade query performance and data quality.
Answer:
Explanation:
According to the CREATE ACCOUNT documentation, the account name must be specified when the account is created, and it must be unique within an organization, regardless of which Snowflake Region the account is in.
The other options are incorrect because:
• The account does not require at least one ORGADMIN role within one of the organization’s accounts. The account can be created by an organization administrator (i.e. a user with the ORGADMIN role) through the web interface or using SQL, but the new account does not inherit the ORGADMIN role from the existing account. The new account will have its own set of users, roles, databases, and warehouses.
• The account name is not immutable and can be changed. The account name can be modified by contacting Snowflake Support and requesting a name change. However, changing the account name may affect some features that depend on the account name, such as SSO or SCIM.
• The account name does not need to be unique among all Snowflake customers. The account name only needs to be unique within the organization, as the account URL also includes the region and cloud platform information. For example, two accounts with the same name can exist in different regions or cloud platforms, such as myaccount.us-east-1.snowflakecomputing.com and myaccount.eu-west-1.aws.snowflakecomputing.com.
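For example (all values are placeholders), an organization administrator could create the account with:

USE ROLE ORGADMIN;
CREATE ACCOUNT myaccount2              -- must be unique within the organization
  ADMIN_NAME = admin_user
  ADMIN_PASSWORD = 'ChangeMe123!'      -- placeholder only
  EMAIL = 'admin@example.com'
  EDITION = ENTERPRISE;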
Answer: C
Explanation:
According to the Using Persisted Query Results documentation, the RESULT_SCAN function allows you to query the result set of a previous command as if it were a table. The LAST_QUERY_ID function returns the query ID of the most recent statement executed in the current session. Therefore, the combination of these two functions can be used to access the output of the SHOW WAREHOUSES command, which returns the configurations of all the virtual warehouses in the account. However, to persist the warehouse data in JSON format in the table VWH_META, the OBJECT_CONSTRUCT function is needed to convert the output of the SHOW WAREHOUSES command into a VARIANT column. The OBJECT_CONSTRUCT function takes a list of key-value pairs and returns a single JSON object.
Therefore, the correct commands to execute are:
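A minimal sketch, assuming VWH_META has a single VARIANT column (the table layout is an assumption, not stated in the question):

-- Run SHOW WAREHOUSES so its output becomes the most recent result set.
SHOW WAREHOUSES;

-- Persist each warehouse row as a JSON object; OBJECT_CONSTRUCT(*) packs all
-- columns of the SHOW output into one VARIANT value.
INSERT INTO VWH_META
  SELECT OBJECT_CONSTRUCT(*)
  FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()));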
Answer:
Explanation:
According to the Understanding & Using Time Travel documentation, Time Travel is a feature that allows you to query, clone, and restore historical data in tables, schemas, and databases for up to 90 days.
To meet the requirements of the scenario, the following best practices should be followed:
• The fact and dimension tables should have the same DATA_RETENTION_TIME_IN_DAYS. This parameter specifies the number of days for which the historical data is preserved and can be accessed by Time Travel. To ensure that the fact and dimension tables can be reverted to a consistent state in case of any anomalies in the latest load, they should have the same retention period. Otherwise, some tables may lose their historical data before others, resulting in data inconsistency and quality issues.
• The fact and dimension tables should be cloned together using the same Time Travel options to reduce potential referential integrity issues with the restored data. Cloning is a way of creating a copy of an object (table, schema, or database) at a specific point in time using Time Travel. To ensure that the fact and dimension tables are cloned with the same data set, they should be cloned together using the same AT or BEFORE clause. This will avoid any referential integrity issues that may arise from cloning tables at different points in time.
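For example (table names are made up and the statement ID is a placeholder), cloning both tables with the same BEFORE clause keeps them consistent:

-- Both clones reference the same point in time, so the restored fact and
-- dimension tables stay referentially consistent with each other.
CREATE TABLE fact_sales_restore CLONE fact_sales
  BEFORE (STATEMENT => '<query_id_of_bad_load>');
CREATE TABLE dim_customer_restore CLONE dim_customer
  BEFORE (STATEMENT => '<query_id_of_bad_load>');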
The other options are incorrect because:
• Related data should not be placed together in the same schema. Facts and dimension tables should each have their own schemas. This is not a best practice for Time Travel, as it does not affect the ability to query, clone, or restore historical data. However, it may be a good practice for data modeling and organization, depending on the use case and design principles.
• The DATA_RETENTION_TIME_IN_DAYS should be kept at the account level and never used for lower level containers (databases and schemas). This is not a best practice for Time Travel, as it limits the flexibility and granularity of setting the retention period for different objects. The retention period can be set at the account, database, schema, or table level, and the most specific setting overrides the more general ones. This allows for customizing the retention period based on the data needs and characteristics of each object.
• Only TRANSIENT tables should be used to ensure referential integrity between the fact and dimension tables. This is not a best practice for Time Travel, as it does not affect the referential integrity between the tables. Transient tables are tables that do not have a Fail-safe period, which means that they cannot be recovered by Snowflake after the retention period ends. However, they still support Time Travel within the retention period, and can be queried, cloned, and restored like permanent tables. The choice of table type depends on the data durability and availability requirements, not on the referential integrity.
Answer:
Explanation:
According to the Using Dynamic Data Masking documentation, Dynamic Data Masking is a feature that allows you to alter sections of data in table and view columns at query time using a predefined masking strategy.
The following are some of the characteristics of Dynamic Data Masking:
• A single masking policy can be applied to columns in different tables. This means that you can write a policy once and have it apply to thousands of columns across databases and schemas.
• A single masking policy can be applied to columns with different data types. This means that you can use the same masking strategy for columns that store different kinds of data, such as strings, numbers, dates, etc.
• A masking policy that is currently set on a table can be dropped. This means that you can remove the masking policy from the table and restore the original data visibility.
• A masking policy can be applied to the VALUE column of an external table. This means that you can mask data that is stored in an external stage and queried through an external table.
• The claim that the role that creates the masking policy will always see unmasked data in query results is not true: the masking policy can also apply to the creator role, depending on the execution-context conditions defined in the policy. For example, if the policy specifies that only users with a certain custom entitlement can see the unmasked data, then the creator role will also need that entitlement to see it.
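A hedged sketch of one policy reused across tables (policy, role, table, and column names are made up; each column here is a STRING, matching the policy signature):

CREATE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() IN ('PII_READER') THEN val  -- entitled roles see raw data
    ELSE '*** MASKED ***'                           -- everyone else, including the creator
  END;

ALTER TABLE customers MODIFY COLUMN email SET MASKING POLICY email_mask;
ALTER TABLE leads MODIFY COLUMN contact_email SET MASKING POLICY email_mask;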
Answer: BDF
Explanation:
According to the AT | BEFORE documentation, the AT or BEFORE clause is used for Snowflake Time Travel, which allows you to query historical data from a table based on a specific point in the past.
The clause can use one of the following parameters to pinpoint the exact historical data you wish to access:
• TIMESTAMP: Specifies an exact date and time to use for Time Travel.
• OFFSET: Specifies the difference in seconds from the current time to use for Time Travel.
• STATEMENT: Specifies the query ID of a statement to use as the reference point for Time Travel.
Therefore, the queries that will allow the user to view the historical data that was in the table before the query was executed are:
• B. SELECT * FROM my_table AT (TIMESTAMP => ‘2021-01-01 07:00:00’ :: timestamp); This query uses the TIMESTAMP parameter to specify a point in time that is before the query execution time of 07:01.
• D. SELECT * FROM my_table PRIOR TO STATEMENT ‘8e5d0ca9-005e-44e6-b858-a8f5b37c5726’; This query uses the PRIOR TO STATEMENT keyword and the STATEMENT parameter to specify a point in time that is immediately preceding the query execution time of 07:01.
• F. SELECT * FROM my_table BEFORE (STATEMENT => ‘8e5d0ca9-005e-44e6-b858-a8f5b37c5726’); This query uses the BEFORE keyword and the STATEMENT parameter to specify a point in time that is immediately preceding the query execution time of 07:01.
The other queries are incorrect because:
• A. SELECT * FROM my_table WITH TIME_TRAVEL (OFFSET => -60*30); WITH TIME_TRAVEL is not valid Snowflake syntax; Time Travel uses the AT or BEFORE clause after the table name. Even taken at face value, the OFFSET parameter specifies a point in time 30 minutes before the current time, which is 07:30 in this scenario. That is after the query execution time of 07:01, so it will not show the historical data from before the query was executed.
• C. SELECT * FROM TIME_TRAVEL (‘MY_TABLE’, 2021-01-01 07:00:00); This query is not valid syntax for Time Travel. The TIME_TRAVEL function does not exist in Snowflake. The correct syntax is to use the AT or BEFORE clause after the table name in the FROM clause.
• E. SELECT * FROM my_table AT (OFFSET => -60*30); This query uses the AT keyword and the OFFSET parameter to specify a point in time 30 minutes before the current time. Depending on when it is run, that point can coincide with or fall after the query execution time of 07:01, so it is not guaranteed to show the historical data from before the query was executed. Moreover, the AT keyword is inclusive of any changes made by a statement or transaction whose timestamp equals the specified parameter; to exclude the changes made by the query, the BEFORE keyword should be used instead.
Answer:
Explanation:
According to the Network Policies documentation, a network policy can be applied to an account, a security integration, or a user. If there are network policies applied to more than one of these, the most specific network policy overrides more general network policies.
The following summarizes the order of precedence:
• Account: Network policies applied to an account are the most general network policies. They are overridden by network policies applied to a security integration or user.
• Security Integration: Network policies applied to a security integration override network policies applied to the account, but are overridden by a network policy applied to a user.
• User: Network policies applied to a user are the most specific network policies. They override both account-level and security integration-level policies.
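For illustration (policy and user names are made up):

-- Account-level policy: the most general; applies to everyone by default.
ALTER ACCOUNT SET NETWORK_POLICY = corp_policy;

-- User-level policy: the most specific; overrides the account policy for this user.
ALTER USER jsmith SET NETWORK_POLICY = vpn_only_policy;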
Therefore, if both the account_level and user_level network policies are defined, the user_level policy will take effect and the account_level policy will be ignored. The other options are incorrect because:
• The account_level policy will not override the user_level policy, as explained above.
• User-level network policies are supported; they are part of the network policy feature.
• A network policy error will not be generated, as there is no conflict between the account_level and user_level network policies.
Answer: C
Explanation:
When a user runs a complex SQL query on a dedicated virtual warehouse that reads a large amount of data from micro-partitions, the best action for optimal performance of a second query is to prevent the warehouse from suspending between the two queries. This keeps the warehouse's local disk cache "warm," so the second query may be able to read data from the cache instead of scanning the micro-partitions again. (The result cache persists independently of the warehouse, but it only helps when the exact same query is repeated.)
Option A (Assign additional clusters to the virtual warehouse) might not directly impact the performance of the second query unless the current size of the warehouse is already insufficient for the parallel processing needs. Option B (Increase the STATEMENT_TIMEOUT_IN_SECONDS parameter in the session) does not improve query performance; it simply increases the maximum time a query can run. Option D (Use the RESULT_SCAN function to post-process the output of the first query) is for accessing the cached results of a previous query but may not apply in this scenario, as it is for processing the exact same result set, not a new query based on the same data set.
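As a sketch (warehouse name and timeout are illustrative), the warehouse could be kept from suspending between the two queries by raising its auto-suspend timeout:

-- A longer AUTO_SUSPEND keeps the warehouse, and with it the local disk cache,
-- warm between the first and second query.
ALTER WAREHOUSE analytics_wh SET AUTO_SUSPEND = 1800;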