Authorization to execute CREATE
Primary role
Secondary role
Application role
Database role
In Snowflake, the authorization to execute CREATE <object> statements, such as creating tables, views, databases, etc., is determined by the role currently set as the session's primary role. The primary role defines the set of privileges, including creation privileges, in effect for the user's session. Although a user can be granted multiple roles, only the primary role is evaluated when authorizing CREATE statements; secondary roles extend privileges for other operations but are not used to authorize object creation.
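As a minimal sketch (role, database, and table names are hypothetical), the session's primary role governs which CREATE statements succeed:

```sql
-- Switch the session's primary role; CREATE privileges come from this role.
USE ROLE sysadmin;

-- Succeeds only if SYSADMIN holds CREATE TABLE on the target schema.
CREATE TABLE my_db.my_schema.orders (id NUMBER, amount NUMBER);

-- Secondary roles broaden privileges for other operations,
-- but CREATE is still authorized against the primary role alone.
USE SECONDARY ROLES ALL;
```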
Which type of role can be granted to a share?
Account role
Custom role
Database role
Secondary role
In Snowflake, shares are used to share data between Snowflake accounts. Privileges on shared objects are granted directly to the share itself rather than through account-level roles; the only type of role that can be granted to a share is a Database role. A database role is scoped to a single database, which lets a provider bundle grants on subsets of objects within the shared database so that each consumer sees only the objects granted to the database roles included in the share. Account roles (whether system-defined or custom) and secondary roles cannot be granted to a share.
Granting database roles to a share gives providers finer-grained control over what consumers can access in the shared database, and it is important to manage those grants carefully so that data sharing aligns with organizational policies and data governance standards.
References:
Snowflake Documentation on Shares: Shares
Snowflake Documentation on Roles: Access Control
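For illustration, Snowflake's syntax for granting a database role to a share looks like the following (all object names are hypothetical):

```sql
-- Create a database role scoped to the shared database and grant it object access.
CREATE DATABASE ROLE my_db.sales_read;
GRANT USAGE ON SCHEMA my_db.public TO DATABASE ROLE my_db.sales_read;
GRANT SELECT ON TABLE my_db.public.sales TO DATABASE ROLE my_db.sales_read;

-- Attach the database role to the share; consumers see only what it grants.
GRANT DATABASE ROLE my_db.sales_read TO SHARE my_share;
```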
Which activities are managed by Snowflake's Cloud Services layer? (Select TWO).
Authorisation
Access delegation
Data pruning
Data compression
Query parsing and optimization
Snowflake's Cloud Services layer is responsible for managing various aspects of the platform that are not directly related to computing or storage. Specifically, it handles authorisation, ensuring that users have appropriate access rights to perform actions or access data. Additionally, it takes care of query parsing and optimization, interpreting SQL queries and optimizing their execution plans for better performance. This layer abstracts much of the platform's complexity, allowing users to focus on their data and queries without managing the underlying infrastructure.
References:
Snowflake Architecture Documentation
How can staged files be removed during data loading once the files have loaded successfully?
Use the DROP command
Use the PURGE copy option.
Use the FORCE = TRUE parameter
Use the LOAD_UNCERTAIN_FILES copy option.
To remove staged files during data loading after they have been successfully loaded, the PURGE copy option is used in Snowflake.
PURGE Option: This option automatically deletes files from the stage after they have been successfully copied into the target table.
Usage:
COPY INTO my_table
FROM @my_stage
FILE_FORMAT = (TYPE = 'CSV')
PURGE = TRUE;
References:
Snowflake Documentation on COPY INTO
In Snowflake, what allows users to perform recursive queries?
QUALIFY
LATERAL
PIVOT
CONNECT BY
In Snowflake, the CONNECT BY clause allows users to perform recursive queries. Recursive queries are used to process hierarchical or tree-structured data, such as organizational charts or file systems. The CONNECT BY clause is used in conjunction with the START WITH clause to specify the starting point of the hierarchy and the relationship between parent and child rows.
References:
Snowflake Documentation: Hierarchical Queries
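A minimal sketch of a hierarchical query (table and column names are hypothetical):

```sql
-- Walk an employee hierarchy from the root manager downward.
SELECT employee_id, manager_id, title
FROM employees
START WITH manager_id IS NULL               -- root rows of the hierarchy
CONNECT BY manager_id = PRIOR employee_id;  -- link child rows to their parent
```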
Which Snowflake object can be used to record DML changes made to a table?
Snowpipe
Stage
Stream
Task
Snowflake Streams are used to track and record Data Manipulation Language (DML) changes made to a table. Streams capture changes such as inserts, updates, and deletes, which can then be processed by other Snowflake objects or external applications.
Creating a Stream:
CREATE OR REPLACE STREAM my_stream ON TABLE my_table;
Using Streams: Streams provide a way to process changes incrementally, making it easier to build efficient data pipelines.
Consuming Stream Data: The captured changes can be consumed using SQL queries or Snowflake tasks.
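A sketch of consuming a stream's captured changes (table names are hypothetical):

```sql
-- Reading a stream returns the changed rows plus metadata columns
-- (METADATA$ACTION, METADATA$ISUPDATE, METADATA$ROW_ID).
SELECT * FROM my_stream;

-- Consuming the stream in a DML statement advances its offset,
-- so the same changes are not processed twice.
INSERT INTO my_audit_table (id, value)
SELECT id, value FROM my_stream WHERE METADATA$ACTION = 'INSERT';
```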
References:
Snowflake Documentation: Using Streams
Snowflake Documentation: Change Data Capture (CDC) with Streams
When sharing data in Snowflake, what privileges does a Provider need to grant along with a share? (Select TWO).
USAGE on the specific tables in the database.
SELECT on the specific tables in the database.
MODIFY on the specific tables in the database.
USAGE on the database and the schema containing the tables to share.
OPERATE on the database and the schema containing the tables to share.
When sharing data in Snowflake, the provider needs to grant the following privileges to the share:
SELECT on the specific tables in the database: this privilege allows consumers of the share to query the tables included in the share.
USAGE on the database and the schema containing the tables to share: this privilege is necessary for consumers to resolve the database and schema levels, enabling them to reach the tables within those schemas.
These privileges are crucial for setting up secure and controlled access to the shared data, ensuring that only authorized users can access the specified resources.
Reference to Snowflake documentation on sharing data and managing access:
Data Sharing Overview
Privileges Required for Sharing Data
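The provider-side grant sequence might look like this (share, database, and account names are hypothetical):

```sql
CREATE SHARE my_share;

-- Consumers need USAGE on the containers...
GRANT USAGE ON DATABASE my_db TO SHARE my_share;
GRANT USAGE ON SCHEMA my_db.public TO SHARE my_share;

-- ...and SELECT on the tables being shared.
GRANT SELECT ON TABLE my_db.public.customers TO SHARE my_share;

-- Finally, make the share available to a consumer account.
ALTER SHARE my_share ADD ACCOUNTS = consumer_acct;
```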
Which function can be used with the COPY INTO <location> statement to convert rows from a relational table to a single VARIANT column, and to unload rows into a file?
FLATTEN
OBJECT_AS
OBJECT_CONSTRUCT
TO_VARIANT
The correct function to use with the COPY INTO <location> statement to convert rows from a relational table into a single VARIANT column, and to unload the rows into a JSON file, is OBJECT_CONSTRUCT. OBJECT_CONSTRUCT builds an OBJECT (a VARIANT-compatible key-value structure) from a list of key-value pairs, or from all columns of a row when called as OBJECT_CONSTRUCT(*). This makes it possible to collapse multiple relational columns into a single JSON-formatted column. TO_VARIANT, by contrast, casts a single value to the VARIANT type and cannot combine an entire row into one column.
In the context of unloading data, the COPY INTO <location> statement combined with OBJECT_CONSTRUCT converts structured data from Snowflake tables into a semi-structured VARIANT format, typically JSON, which can then be efficiently exported and stored. This approach is often utilized for data integration scenarios, backups, or when data needs to be shared in a format that is easily consumed by various applications or services that support JSON.
References:
Snowflake Documentation on Data Unloading: Unloading Data
Snowflake Documentation on VARIANT Data Type: Working with JSON
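Snowflake's documented pattern for unloading rows as JSON builds each row into a single object inside the COPY INTO query (stage and table names are hypothetical):

```sql
COPY INTO @my_stage/unload/
FROM (
  -- OBJECT_CONSTRUCT(*) packs all columns of each row into one key-value object.
  SELECT OBJECT_CONSTRUCT(*) FROM my_table
)
FILE_FORMAT = (TYPE = 'JSON');
```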
How can the outer array structure of a semi-structured file be removed?
Use the STRIP_OUTER_ARRAY = TRUE file format option in a COPY INTO command.
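A sketch of the option in use (stage and table names are hypothetical): when a JSON file wraps all its records in one outer array, STRIP_OUTER_ARRAY removes the array so each element loads as its own row:

```sql
COPY INTO my_json_table
FROM @my_stage/data.json
FILE_FORMAT = (TYPE = 'JSON' STRIP_OUTER_ARRAY = TRUE);
```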