Snowflake Quick Notes For Certification Help

1. Scale Up – increase the size of a warehouse (done manually).

2. Scale Down – decrease the size of a warehouse (done manually).

3. Scale Out – add clusters to a multi-cluster warehouse (done automatically).

4. Scale In/Scale Back – remove clusters from a multi-cluster warehouse (done automatically).

5. Maximized: same value for both maximum and minimum clusters. 

6. Auto-scale: different values for maximum and minimum clusters. In this mode, Snowflake starts and stops clusters as needed to dynamically manage the load on the warehouse. 

7. Snowflake provides scaling policies, which are used to determine when to start or shut down a cluster. 

8. The scaling policy for a multi-cluster warehouse only applies if it is running in Auto-scale mode. 

9. There are two scaling policies: "Standard" (the default) and "Economy".
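
The modes and policies above can be sketched in SQL as follows (the warehouse name and settings are illustrative):

```sql
-- Auto-scale mode: MIN_CLUSTER_COUNT differs from MAX_CLUSTER_COUNT
CREATE WAREHOUSE my_wh
  WAREHOUSE_SIZE = 'MEDIUM'
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 3        -- different min/max => Auto-scale mode
  SCALING_POLICY = 'ECONOMY'   -- default is 'STANDARD'
  AUTO_SUSPEND = 300
  AUTO_RESUME = TRUE;

-- Scale up manually by resizing:
ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'LARGE';
```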

10. The size of a micro-partition is 50 to 500 MB of uncompressed data.

11. Time Travel for permanent tables: 1 to 90 days (more than 1 day requires Enterprise edition or higher). For temporary and transient tables: a maximum of 1 day.

12. Fail-safe for permanent tables: 7 days. For temporary and transient tables: 0 days.

13. Multi-cluster warehouses and materialized views start from Enterprise edition.

14. The maximum limit on child tasks is 100.

15.There are 3 layers in snowflake architecture. 

a. Database/data storage.
b. Query Processing.
c. Cloud Service Layer/Service Layer. 

16. Cloud services usage is billed only if it exceeds 10% of the daily warehouse compute usage; this 10% adjustment is calculated daily.

17. Snowpipe, data replication, materialized view maintenance and automatic clustering do not need a user-managed virtual warehouse; they use Snowflake-provided compute.

18. There is no limit on the number of inbound and outbound shares.

19. External tables and internal stages cannot be cloned.

20. Snowflake credits are consumed by warehouse compute and the cloud services layer.

21. Business Critical edition offers a higher level of data protection.

22.VPS accounts do not share any resources with accounts outside the VPS. 

23. Standard edition supports 1 day of Time Travel.

24.Column/Row level security starts from Enterprise edition

25.Materialized views starts from Enterprise edition

26.The below data objects can be shared. 

 a. Tables
 b. External tables
 c. Secure views
 d. Secure materialized views
 e. Secure UDFs 

27. Shares are read-only; consumers cannot perform any DML operations on shared objects.

28.With Secure Data Sharing, no actual data is copied or transferred between accounts. 

29.All sharing is accomplished through Snowflake’s unique services layer and metadata store. 

30. Sharing does not consume any storage on the consumer side; the only charges are for the compute used to query the shared data.

31. VPS edition does not support Secure Data Sharing.

32.If the CREATE SHARE privilege is granted to a role, any user with the role can create a share. 

33.The account administrator (i.e users with the ACCOUNTADMIN system role) role is the most powerful role in the system

34. Custom roles are not granted to all users by default.

35.The Data sharing is currently only supported between accounts in the same region

36. Data from multiple tables can be shared via a secure view.

37. The web UI does not support adding/removing secure UDFs to/from shares; this must be done via SQL.

38. Snowflake loads semi-structured data into a VARIANT column.

39. Snowflake does not support the following data types:

 a. LOB
 b. CLOB
 c. ENUM
 d. User defined data Types. 

40. Elements in JSON objects can be accessed in the following formats:

 a. Dot Notation
 b. Bracket Notation 
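
Both notations in one query (table and column names are illustrative):

```sql
-- src is a VARIANT column holding JSON such as {"name": {"first": "Ada"}}
SELECT src:name.first        AS dot_notation,     -- dot notation
       src['name']['first'] AS bracket_notation  -- bracket notation
FROM my_json_table;
```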

41. The Snowflake architecture is a hybrid of shared-disk and shared-nothing architectures.

42.If AWS Private Link or Azure Private Link is enabled for your account and you wish to use private connectivity to connect to Snowflake, run the SYSTEM$GET_PRIVATELINK_CONFIG function to determine the private connectivity URL to use. 

43.Each Snowflake account is hosted in a single region. If you wish to use Snowflake across multiple regions, you must maintain a Snowflake account in each of the desired regions. 

44.The government regions are only supported for Snowflake accounts on Business-Critical Edition (or higher). 

45.Snowflake does not move data between accounts, so any data in an account in a region remains in the region unless users explicitly choose to copy, move, or replicate the data. 

46. The Snowflake domain name: <account_identifier>.snowflakecomputing.com.

47.Micro-partitioning is automatically performed on all Snowflake tables. Tables are transparently partitioned using the ordering of the data as it is inserted/loaded. 

48. The warehouse cache may be reset if a running warehouse is suspended and then resumed, or when the warehouse size is increased or decreased.

49. Snowflake offers tools to extract data from source systems – FALSE.

50. The types of caching used by Snowflake:

 a. Warehouse caching – query processing layer (the warehouse's local disk cache)
 b. Metadata caching – cloud services layer
 c. Query result caching – cloud services layer

51.When data is staged to a Snowflake internal staging area using the PUT command, the data is encrypted on the client’s machine – True. 

52.The Data Sharing is integrated with role-based access controls

53.The Stream can be created on Permanent, Temporary, Transient and External tables. 

54. Like temporary tables, temporary internal/external stages can be created.

55. Each clustering key consists of one or more table columns/expressions, which can be of any data type except VARIANT, OBJECT, or ARRAY. A clustering key can contain any of the following:

  • Base columns.
  • Expressions on base columns.
  • Expressions on paths in VARIANT columns.

56. When a table is cloned, the existing clustering key is copied to the cloned object.

57. When a table is created using CREATE TABLE … LIKE or CREATE TABLE … AS SELECT, the clustering key is not copied.

58.Defining a clustering key directly on top of VARIANT columns is not supported; however, you can specify a VARIANT column in a clustering key if you provide an expression consisting of the path and the target type. 
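A sketch of a clustering key over a VARIANT column using a path plus an explicit target type (table and path names are illustrative):

```sql
CREATE TABLE events (id NUMBER, payload VARIANT);

-- Not allowed: CLUSTER BY (payload)  -- a VARIANT column directly
-- Allowed: a path into the VARIANT plus a cast to a target type
ALTER TABLE events CLUSTER BY (payload:country::STRING);
```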

59. For a Snowflake session, more than one virtual warehouse can be specified at a time when executing a query – FALSE.

60.The Privileges provided by the SYSADMIN and SECURITYADMIN role are automatically contained in the ACCOUNTADMIN role since the ACCOUNTADMIN role sits on top of the role hierarchy – TRUE. 


62. The SHOW command does not need a running warehouse.

63.Load Meta data for a table expires after 64 Days. 


65. A warehouse provides CPU, memory and temporary storage.


67. Account-level storage comprises databases, Snowflake internal stages and historical data maintained in Fail-safe.

68.A Snowflake session can only have one current warehouse at a time.

69. Is Snowflake HIPAA compliant? – Yes.

70. A task needs a warehouse for compute.

71. A task tree can span multiple schemas – FALSE.

72. When a task is first created, its default status is SUSPENDED.
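
A minimal sketch (names are illustrative): a scheduled task is created in the SUSPENDED state and must be resumed explicitly.

```sql
CREATE TASK refresh_task
  WAREHOUSE = my_wh
  SCHEDULE = '60 MINUTE'
AS
  INSERT INTO target_tbl SELECT * FROM staging_tbl;

ALTER TASK refresh_task RESUME;   -- required before the task starts running
```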


74.Snowsight is enabled by default for account administrators (i.e. users with ACCOUNTADMIN role) only.

75.The History page allows you to view and drill into the details of all queries executed in the last 14 days.

76.Snowflake supports the following file formats for query export: 

 a. Comma-separated values (CSV)
 b. Tab-separated values (TSV)

77. The web interface only supports exporting results up to 100 MB in size.

78. A materialized view behaves more like a table.

79.The Streams are part of Continuous data loading.

80. Streams are schema-level data objects (they reside inside a schema).

81. Any number of streams can be created on a source table.

82.Stream objects can be cloned (The clone inherits the current offset).

83.Stream objects can be created for permanent, transient and temporary tables and external tables.

84.Stream can capture INSERT, UPDATE and DELETE on Permanent table.

85.Stream can capture only INSERT on External tables.

86. A stream becomes stale if it is not consumed within the source table's Time Travel retention period; when you describe the stream, the STALE field is set to TRUE for stale data.

87. Creating and managing streams requires a role with a minimum of the following privileges:

 a. Database – USAGE
 b. Schema – USAGE, CREATE STREAM
 c. Table – SELECT.

88.Querying a stream requires a role with a minimum of the following permissions. 

 a. Database – USAGE
 b. Schema – USAGE
 c. Stream – SELECT.
 d. Table – SELECT.

89. Types of streams:

 a. Standard: tracks all DML changes to the source table (including table truncates).
 b. Append-only: only INSERT actions are tracked.
 c. Insert-only: supported on external tables only.
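
The three stream types can be sketched as follows (table names are illustrative):

```sql
CREATE STREAM s_std ON TABLE orders;                                  -- standard
CREATE STREAM s_app ON TABLE orders APPEND_ONLY = TRUE;               -- append-only
CREATE STREAM s_ext ON EXTERNAL TABLE ext_orders INSERT_ONLY = TRUE;  -- insert-only
```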

90. When a stream is created on a table, three hidden columns are added to the source table and begin storing change-tracking metadata:

 a. METADATA$ACTION
 b. METADATA$ISUPDATE
 c. METADATA$ROW_ID

All these columns consume a small amount of storage.

91. When a stream is dropped and recreated using CREATE OR REPLACE, it loses all of its tracking history (offset).

92.A task can execute a single SQL statement, including a call to a stored procedure.

93.There is no event source that can trigger a task; instead, a task runs on a schedule.

94. A task tree is equivalent to a DAG (Directed Acyclic Graph) – FALSE (a task tree does not support fork and join).

95. What kind of objects will be used to consume data from stream automatically? – (Task and Task Tree).

96.Task runs are not associated with a user. Instead, each run is executed by a system service.

97. Privileges for tasks:

 a. Account – EXECUTE TASK (a task runs at the account level, so the role needs the EXECUTE TASK privilege at the account level)
 b. Database – USAGE
 c. Schema – USAGE, CREATE TASK
 d. Warehouse – USAGE

98. A simple tree of tasks is limited to a maximum of 1000 tasks in total (including the root task).

99.Whenever we create task by default the status is suspended.

100. A schedule cannot be specified for child tasks in a simple tree of tasks.

101. A stream can be shared like a table – FALSE.

102. If a stream is not consumed regularly, Snowflake temporarily extends the data retention period for the source table – TRUE.

103. What grant/privilege a role should have so that it can suspend or resume a task? – Operate.

104. A Task supports all session parameters but does not support user and account parameters -> TRUE.

105. Even a user with the ACCOUNTADMIN role cannot view the results for a query run by another user.

106. Any privileges granted on the source object do not transfer to the cloned object.

107. CREATE TABLE … LIKE (creates an empty copy of an existing table).

108. Snowflake supports defining and maintaining constraints, but does not enforce them, except for NOT NULL constraints, which are always enforced.

109. Snowflake supports defining constraints on permanent, transient, and temporary tables. Constraints can be defined on columns of all data types, and there are no limits on the number of columns that can be included in a constraint.

110. When a table is copied using CREATE TABLE … LIKE or CREATE TABLE … CLONE, all existing constraints on the table, including foreign keys, are copied to the new table.

111. Inline constraints are created as part of the column definition and can only be used for single-column constraints.

112. Out-of-line constraints are defined using a separate clause that specifies the column(s) on which the constraint is created. They can be used for creating either single-column or multi-column constraints, as well as creating constraints for existing columns.

113. To check a table's structure in Snowflake, use the GET_DDL function or the TABLE_CONSTRAINTS view.

114. A permanent table can be cloned to a transient or temporary table, but a transient or temporary table cannot be cloned to a permanent table.

115. FLATTEN can be used to convert semi-structured data to a relational representation.

116. FLATTEN is a table function that takes a VARIANT, OBJECT, or ARRAY column and produces a lateral view.
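
A sketch of LATERAL FLATTEN expanding a JSON array into rows (table and field names are illustrative):

```sql
-- items is a VARIANT column holding an array of objects
SELECT t.id, f.value:sku::STRING AS sku
FROM orders t,
     LATERAL FLATTEN(input => t.items) f;

-- Use OUTER => TRUE to keep rows whose array is NULL or empty:
-- LATERAL FLATTEN(input => t.items, OUTER => TRUE)
```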

117. UDFs can contain SQL or JavaScript; however, DDL and DML operations are not supported in UDFs.

118. Snowflake does not allow a view to be cloned directly. Cloning the entire schema or database that contains the view works, but there is no direct clone for a view.

119. Only the Database | Schema | Table | Stream | Task | Sequence can be cloned.

120. Default size is X-small for warehouses created using CREATE WAREHOUSE Command.

121. Default Size is X-Large for warehouses created in the web interface.

122. Snowflake utilizes per-second billing (with a 60-second minimum each time the warehouse starts).

123. For a multi-cluster warehouse, the number of credits billed is calculated based on the number of servers per cluster and the number of clusters that run within the time period.

124. For a multi-cluster warehouse, auto-suspend and auto-resume apply only to the entire warehouse, not to the individual clusters in the warehouse.

125. Similar to all DML operations in Snowflake, re-clustering consumes credits. The number of credits consumed depends on the size of the table and the amount of data that needs to be re-clustered.

126. Defining a clustering key directly on top of VARIANT columns is not supported; however, you can specify a VARIANT column in a clustering key if you provide an expression consisting of the path and the target type.

127. A retention period of 0 days for an object effectively disables Time Travel for the object.

128. The DATA_RETENTION_TIME_IN_DAYS object parameter can be used by users with the ACCOUNTADMIN role to set the default retention period for your account.

129. The data retention period for a database, schema, or table can be changed at any time.

130. Time Travel cannot be disabled for an account; however, it can be disabled for individual databases, schemas, and tables by specifying DATA_RETENTION_TIME_IN_DAYS with a value of 0 for the object.
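
A sketch of the retention-period parameter and a Time Travel query (names and the query ID placeholder are illustrative):

```sql
-- Disable Time Travel for one table
ALTER TABLE my_table SET DATA_RETENTION_TIME_IN_DAYS = 0;

-- Query historical data within the retention period
SELECT * FROM my_table AT (OFFSET => -60*5);               -- 5 minutes ago
SELECT * FROM my_table BEFORE (STATEMENT => '<query_id>'); -- before a statement
```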

131. If you change the data retention period for a table, the new retention period impacts all data that is active, as well as any data currently in Time Travel.

132. Fail-safe provides a (non-configurable) 7-day period during which historical data may be recoverable by Snowflake. This period starts immediately after the Time Travel retention period ends.

133. The fees are calculated for each 24-hour period (i.e. 1 day) from the time the data changed. The number of days historical data is maintained is based on the table type and the Time Travel retention period for the table.

134. Custom roles can be created by the SECURITYADMIN roles as well as by any role to which the CREATE ROLE privilege has been granted.

135. There is no concept of a “super-user” or “super-role” in Snowflake that can bypass authorization checks. All access requires appropriate access privileges.

136. You can suspend and resume Automatic Clustering for a clustered table at any time using ALTER TABLE … SUSPEND / RESUME RECLUSTER.

137. External tables are read-only, therefore no DML operations can be performed on them; however, external tables can be used for query and join operations. Views can be created against external tables.

138. The default VALUE and METADATA$FILENAME columns cannot be dropped in External tables.

139. Creating and managing external tables requires a role with a minimum of the following privileges:

 a. Database – USAGE
 b. Schema – USAGE, CREATE STAGE (if creating a new stage), CREATE EXTERNAL TABLE
 c. Stage (if using an existing stage) – USAGE

140. The warehouse is created initially in a SUSPENDED state.

141. When a database is replicated, the privileges granted on database objects are not replicated.

142. Time Travel and Fail-safe are not applicable to stages in Snowflake.

143. Staged files can be deleted from a Snowflake stage using the REMOVE command to remove the files in the stage after you are finished with them.

144. Snowpipe generally loads older files first, but there is no guarantee that files are loaded in the same order they are staged.

145. Snowpipe uses file loading metadata associated with each pipe object to prevent reloading the same files (and duplicating data) in a table. This metadata stores the path (i.e. prefix) and name of each loaded file, and prevents loading files with the same name even if they were later modified (i.e. have a different eTag).

146. Snowflake storage capacity can be pre-purchased for a lower price? - TRUE.

147. An organization building Business critical data application, can set up a single Business Critical Edition hosted in multiple regions to ensure Disaster recovery -> FALSE.

148. BOOLEAN can also have an “unknown” value, which is represented by NULL.

149. Boolean Conversion: 

a. String:
 I. True/t/yes/y/on/1 – TRUE
 II. False/f/no/n/off/0 – FALSE
b. Numeric:
 I. 0 – FALSE
 II. Any non Zero (+/-) – TRUE
c. The value NULL is converted to NULL.

150. In Snowflake, a string constant can be enclosed in single quotes or double-dollar signs ($$).

151. INTERVAL is not a data type; it can only be used in date/time arithmetic.

152. Max length of VARCHAR – 16MB Uncompressed.

153. Max length of BINARY – 8 MB.

154. ‘NaN’, ‘inf’ and ‘-inf’ are special values for FLOAT and must be written in quotes.

155. The CHAR data type equivalent to VARCHAR (1).

156. Snowflake displays FLOAT, FLOAT4, FLOAT8, DOUBLE and DOUBLE PRECISION as FLOAT.

157. In Literals at least one digit must follow the exponent (e/E) marker. EX: 1.23E42, 1.234E+2

158. The VARIANT data type imposes a 16 MB (compressed) size limit on individual rows.

159. A user cannot view the result set from a query that another user executed. This behavior is intentional. For security reasons, only the user who executed a query can access the query results.

160. We can identify an account using either its name in your organization or its Snowflake-assigned locator in the cloud region where the account is located.

161. The Organization Name and Account Name can be changed after creation but the locator for an account cannot be changed once the account is created.

162. Snowflake provides 3 types of parameters: ACCOUNT, SESSION and OBJECT parameters.

163. ACCOUNTADMIN should never be designated as a user’s default role. Instead, designate a lower-level administrative or custom role as their default.

164. Each time a warehouse is started or resized to a larger size, the warehouse is billed for 1 minute's worth of usage based on its hourly rate.

165. When a warehouse is increased in size, credits are billed only for the additional servers that are provisioned. For example, changing from Small (2) to Medium (4) results in billing charges for 1 minute’s worth of 2 credits.

166. Users with the ACCOUNTADMIN role can use the Snowflake web interface or SQL to view monthly and daily credit usage for all the warehouses in your account.

167. Periodic rekeying of encrypted data. – Enterprise edition.

168. Support for encrypting data using customer-managed keys. – Business critical and higher.

169. HITRUST CSF compliance/HIPAA compliance – Business critical.

170. Account-level network policy management can be performed through either the web interface or SQL.

171. User-level network policy management can be performed using SQL.

172. If a user is associated to both an account-level and user-level network policy, the user-level policy takes precedence.

173. Currently, we cannot guarantee that only one instance of a task with a defined predecessor task is running at a given time.

174. A brief lag occurs between a parent task finishing its run and any child task being executed.

175. If the role that a running task is executing under is dropped while the task is running, the task completes processing under the dropped role.

176. A task does not support account or user parameters.

177. If the definition of a stored procedure called by a task changes while the tree of tasks is executing, the new programming could be executed when the stored procedure is called by the task in the current run.

178. There is a 60-minute default limit on a single run of a task.

179. The organization administrator (ORGADMIN) system role is responsible for managing operations at the organization level.

180. Once an account is created, ORGADMIN can view the account properties but does not have access to the account data.

181. The ORGADMIN role exists at the account level.

182. If staged files are uncompressed, Snowflake compresses them using gzip by default.

183. A VARIANT column can store values of any other type, including OBJECT and ARRAY, up to a maximum size of 16 MB compressed.

184. A value of any data type can be implicitly cast to a VARIANT value, subject to size restrictions.

185. For JSON or Avro, the outer array structure can be removed using STRIP_OUTER_ARRAY.
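
A sketch of STRIP_OUTER_ARRAY in a COPY statement (table, stage and file names are illustrative):

```sql
-- Load a JSON file whose top level is an array of records
COPY INTO my_table
FROM @my_stage/data.json
FILE_FORMAT = (TYPE = 'JSON' STRIP_OUTER_ARRAY = TRUE);
```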


187. FLATTEN is a table function that takes an ARRAY column and produces a lateral view.

188. The FLATTEN function omits rows for NULL or empty input values; use the OUTER => TRUE argument to display all rows from the source table.

189. A user stage can be accessed by only a single user, but its files can be copied into multiple tables.

190. A table stage can be accessed by multiple users, but its files can be copied only into the one table it belongs to.

191. Neither user stages nor table stages can be altered or dropped, and a file format cannot be set on them.

192. Any number of shares can be created, but each share can include objects from only one database.

193. When you exit the interface, Snowflake cancels any queries that you submitted during this session and are still running.

194. The minimum amount of elapsed time between the early access and final stages is 24 hours.

195. Custom roles can be created by the SECURITYADMIN roles as well as by any role to which the CREATE ROLE privilege has been granted.

196. The top-most custom role should be assigned to the system role SYSADMIN.

197. This staged approach only applies to full releases. For patch releases, all accounts are moved on the same day.

198. A known issue in Snowflake displays FLOAT, FLOAT4, FLOAT8, REAL, DOUBLE, and DOUBLE PRECISION as FLOAT even though they are stored as DOUBLE.

199. The automatic maintenance of materialized views consumes credits.

200. Query results are cached for 24 hours; each time a result is reused, the 24-hour period is reset (up to a maximum of 31 days).



203. DQL Commands – SELECT.

204. Snowflake provides the following DDL commands for using SQL variables: SET, UNSET and SHOW VARIABLES.

205. The size of string or binary variables is limited to 256 bytes.

206. The $ sign is the prefix used to identify variables in SQL statements, it is treated as a special character when used in identifiers.

207. VPS accounts do not share any resources with accounts outside the VPS.

208. Database fail over and failback starts from business-critical Edition.

209. An external stage requires cloud storage credentials (or a storage integration) to access the files.

210. Transactions are never “nested”.

211. A transaction is associated with a single session. Multiple sessions cannot share the same transaction.

212. Snowflake transactions, like most database transactions, guarantee ACID properties.

213. Snowflake supports READ COMMITTED transaction isolation.

214. External tables are read-only, therefore no DML operations can be performed on them; however, external tables can be used for query and join operations. Views can be created against external tables.

215. Snowflake charges 0.06 credits per 1000 event notifications received.

216. Snowflake does not allow standard DML (e.g. INSERT, UPDATE, DELETE) on materialized views. Snowflake does not allow users to truncate materialized views.

217. A cloned table does not include the load history of the source table. Data files that were loaded into a source table can be loaded again into its clones.

218. A clone is writable and is independent of its source (i.e. changes made to the source or clone are not reflected in the other object).

219. To create a clone, your current role must have the following privilege(s) on the source object:

  • Tables - SELECT.
  • Pipes, Streams, Tasks - OWNERSHIP
  • Other objects - USAGE

220. If the COPY GRANTS keywords are used while creating clone object, then the new object inherits any explicit access privileges granted on the original table but does not inherit any future grants defined for the object type in the schema.

221. By default, the maximum number of accounts in an organization cannot exceed 25.

222. What functional category does Looker fall into? - Business Intelligence.

223. What functional category does Matillion fall into? - Data Integration.

224. What two Tech Partner types are available from in-account menu items? - Partner Connect, Programmatic Interfaces.

225. A VARIANT value can be missing (contain SQL NULL), which is different from a VARIANT null value, which is a real value used to represent a null value in semi-structured data.

226. Elements that contain even a single “null” value are not extracted into a column. Note that this applies to elements with “null” values and not to elements with missing values, which are represented in columnar form.

227. The PUT and GET commands are not supported in the Snowflake web UI.

228. A customer may have as many accounts as they want.

229. Each account is deployed on a single cloud provider platform (AWS/GCP/Azure).

230. Each account exists in a single geographic region.

231. Each account exists with a single Snowflake edition.

232. The Standard edition does not support multi-cluster warehouses, 90 days of Time Travel, or secure views.

233. Snowflake can be upgraded from Standard to other editions, but this is not possible via the UI.

234. Each database belongs to single snowflake account.

235. Database can be replicated to other accounts, but they cannot SPAN multiple accounts.

236. Objects belong to a single schema, a single database and a single account.

237. Users, roles, warehouses, databases, resource monitors and integrations belong to the account level.

238. We can share data between accounts only through share.

239. Whenever a Snowflake account is created, 5 default roles are assigned: ACCOUNTADMIN, SYSADMIN, USERADMIN, PUBLIC and SECURITYADMIN.

240. The default role assigned for newly created account is SYSADMIN.

241. A warehouse named COMPUTE_WH of size XS is assigned to a new account by default.

242. By default, the PUBLIC and INFORMATION_SCHEMA schemas are created.

243. By default, the DEMO_DB and UTIL_DB databases are created.

244. By default, only the ACCOUNTADMIN can access the shared SNOWFLAKE database.

245. Once the session ends, data stored in the temporary table is purged completely from the system and, therefore, is not recoverable, either by the user who created the table or Snowflake.

246. After creation, temporary tables cannot be converted to any other table type.

247. The data in the transient tables cannot be recovered after time travel retention period passes – TRUE.

248. Snowflake's security and authentication includes Snowflake Failures alerts? – FALSE.

249. B-tree type indexes are supported by Snowflake's performance optimizing query methods. – FALSE

250. One benefit of client-side encryption is that the data is encrypted before loading into storage layer. – FALSE.

251. Client-side encryption provides a secure system for managing data in cloud storage. Client-side encryption means that a user encrypts stored data before loading it into Snowflake.

252. The Transient/permanent/temporary tables can be cloned.

253. Either a (temporary and transient) or a (temporary and permanent) pair of tables can be created with the same name and the same/different columns. When a table with that name is dropped, the temporary table is always dropped first, then the other table.

254. After creation, transient tables cannot be converted to any other table type.

255. The Temp table can be cloned to temp/transient tables but not be cloned to permanent table.

256. When we drop the base table, the cloned objects will not get dropped.

257. The transient table can be cloned to temp/transient tables but not be cloned to permanent table.

258. The historical data for temporary tables can be extracted within the same session, before the Time Travel retention expires.

259. The permanent tables can be cloned to permanent/temp/transient.

260. To validate data in an uploaded file, execute COPY INTO <table> in validation mode using the VALIDATION_MODE parameter. The VALIDATION_MODE parameter returns any errors that it encounters in a file.
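
A sketch of validation mode (table and stage names are illustrative):

```sql
-- Validate staged files without loading any data
COPY INTO my_table
FROM @my_stage
VALIDATION_MODE = 'RETURN_ERRORS';  -- or RETURN_n_ROWS / RETURN_ALL_ERRORS
```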

261. The COPY INTO <location> command provides a copy option (SINGLE) for unloading data into a single file or multiple files. The default is SINGLE = FALSE (i.e. unload into multiple files).

262. By default, all unloaded data files are compressed using gzip, unless compression is explicitly disabled or one of the other supported compression methods is explicitly specified.
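
A sketch of an unload combining the two options above (stage and table names are illustrative):

```sql
COPY INTO @my_stage/out/
FROM (SELECT * FROM my_table)
FILE_FORMAT = (TYPE = 'CSV' COMPRESSION = 'GZIP')  -- gzip is the default
SINGLE = FALSE;                                    -- default: multiple files

-- The files can then be downloaded locally:
-- GET @my_stage/out/ file:///tmp/data/;
```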

263. All data files unloaded to Snowflake internal locations are automatically encrypted using 256/128-bit keys.

264. Data files unloaded to cloud storage can be encrypted if a security key (for encrypting the files) is provided to Snowflake.

265. The OBJECT_CONSTRUCT function can be used to convert rows of data into JSON format.
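
A sketch of OBJECT_CONSTRUCT (table and column names are illustrative):

```sql
-- Turn each row into a JSON object
SELECT OBJECT_CONSTRUCT('id', id, 'name', name) AS json_row
FROM customers;

-- OBJECT_CONSTRUCT(*) uses all columns, with column names as keys.
```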

266. The load history for snowpipe is stored in the metadata for 14 days.

267. The load history for bulk data load is stored in the metadata for 64 days.

268. Indexes cannot be migrated to Snowflake.

269. Stored procedures cannot be migrated or administered using the user interface in Snowflake.

270. A share object cannot span multiple databases; a data provider must create a separate share for each database.

271. Time Travel is not possible on a shared database.

272. Creating a clone of a shared database is not possible.

273. Editing comments on a shared database is also not possible.

274. A shared database and all other objects in the database cannot be forwarded (i.e. cannot be re-shared).

275. Views cannot be shared; only secure views can be shared.

276. In Maximized mode, all clusters run concurrently so there is no need to start or shut down individual clusters.

277. Object privileges are assigned to roles, which are in turn granted to users.

278. Only ACCOUNTADMIN/SECURITYADMIN can create/alter/drop a network policy.

279. All files stored in External stages are automatically encrypted – FALSE.

280. The data loaded to internal stages are automatically encrypted – TRUE.

281. Only one instance of the data needs to be replicated per cloud or region. Once the instance is replicated, more than one consumer can make use of the data.

282. The definition for a view cannot be updated (i.e. you cannot use ALTER VIEW or ALTER MATERIALIZED VIEW to change the definition of a view). To change a view definition, you must recreate the view with the new definition.



285. The pipes can be suspended and then resumed.

286. Pause or resume a pipe (using ALTER PIPE … SET PIPE_EXECUTION_PAUSED = TRUE | FALSE).

287. If you are in the middle of running queries when you refresh, they will resume running when the refresh is completed.

288. Click the context menu to select a different active warehouse for the worksheet. You can resume or suspend the selected warehouse or resize the warehouse.

289. A user can query stage objects – TRUE.

290. Only the ACCOUNTADMIN can access both the Account and Notifications options in the ribbon.

291. The SECURITYADMIN can access only the Account option in the ribbon.

292. The 3 panels of worksheet area. 

 a. Navigational Tree
 b. SQL pane
 c. Result Pane

293. Data unloading supports CSV, TSV, JSON and Parquet formats.

294. Files are first unloaded to a Snowflake internal location, then can be downloaded locally using GET.

295. All data files unloaded to Snowflake internal locations are automatically encrypted.

296. Data files unloaded to cloud storage can be encrypted if a security key (for encrypting the files) is provided to Snowflake.

297. Does a user own database objects? – FALSE. A role owns the objects.

298. Can a user view and modify resource monitors? – TRUE, but the ACCOUNTADMIN must first enable the user (by granting the privileges).

299. How many resource monitors can you have at the account level? – One.

300. Which Snowflake cache does the user/administrator have control over? – Warehouse Cache.

301. Resource monitors can be used to impose limits on the number of credits that are consumed by: 

  • User-managed virtual warehouses
  • Virtual warehouses used by cloud services.

302. Resource monitors can only be created by account administrators (i.e. users with the ACCOUNTADMIN role); however, account administrators can choose to enable users with other roles to view and modify resource monitors using SQL.

303. Credit quota accounts for credits consumed by both user-managed virtual warehouses and virtual warehouses used by cloud services.

304. The default schedule for a resource monitor specifies that it starts monitoring credit usage immediately and the used credits reset back to 0 at the beginning of each calendar month.

305. A single monitor can be set at the account level to control credit usage for all warehouses in your account.

306. A monitor can be assigned to one or more warehouses, thereby controlling the credit usage for each assigned warehouse.
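A minimal sketch of creating a resource monitor and assigning it to a warehouse, per the notes above (the names and quota are hypothetical; creation requires the ACCOUNTADMIN role):

```sql
-- Monitor with a 100-credit quota: notify at 80% usage,
-- suspend the assigned warehouse at 100%
CREATE RESOURCE MONITOR my_monitor
  WITH CREDIT_QUOTA = 100
  TRIGGERS ON 80  PERCENT DO NOTIFY
           ON 100 PERCENT DO SUSPEND;

-- Assign the monitor to a specific warehouse
ALTER WAREHOUSE my_wh SET RESOURCE_MONITOR = my_monitor;
```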

307. An account-level resource monitor does not control credit usage by the Snowflake-provided warehouses (used for Snowpipe, automatic reclustering, and materialized views); the monitor only controls the virtual warehouses created in your account.

308. Notifications can be received by account administrators through the web interface and/or email; however, by default, notifications are not enabled.

309. You cannot change the customized schedule for a resource monitor back to the default. You must drop the monitor and create a new monitor.

310. The standard retention period is 1 day (24 hours) and is automatically enabled for all Snowflake accounts.

311. The external table does not inherit the file format, if any, in the stage definition. You must explicitly specify any file format options for the external table using the FILE_FORMAT parameter.

312. Pipes that reference internal stages are not cloned.

313. Pipes that reference external stages are cloned.

314. When a table with a column with a default sequence is cloned, the cloned table still references the original sequence object.

315. When AUTO_INGEST = FALSE, a cloned pipe is paused by default.

316. When AUTO_INGEST = TRUE, a cloned pipe is set to the STOPPED_CLONED state. In this state, pipes do not accumulate event notifications as a result of newly staged files. When a pipe is explicitly resumed, it only processes data files triggered as a result of new event notifications.

317. Currently, when a database or schema that contains source tables and streams is cloned, any unconsumed records in the streams (in the clone) are inaccessible.

318. When a database or schema that contains tasks is cloned, the tasks in the clone are suspended by default. The tasks can be resumed individually.
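Resuming a cloned task individually, as described above, can be sketched as (the database, schema, and task names are hypothetical):

```sql
-- Tasks in a cloned database or schema are suspended by default;
-- each task must be resumed explicitly.
ALTER TASK my_clone_db.my_schema.my_task RESUME;
```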

319. External tables are read-only, therefore no DML operations can be performed on them; however, external tables can be used for query and join operations. Views can be created against external tables.

320. Querying data stored external to the database is likely to be slower than querying native database tables; however, materialized views based on external tables can improve query performance.

321. Partition columns can only be defined when an external table is created, using the CREATE EXTERNAL TABLE … PARTITION BY syntax with a list of column definitions for partitioning.
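A minimal sketch of a partitioned external table (the stage, path, and column names are hypothetical); partition columns are defined as expressions over the file path metadata and listed in PARTITION BY at creation time:

```sql
CREATE EXTERNAL TABLE sales_ext (
  -- derive the partition column from the file path, e.g. sales/2023-01-15/...
  sale_date DATE   AS TO_DATE(SPLIT_PART(METADATA$FILENAME, '/', 2)),
  amount    NUMBER AS (VALUE:amount::NUMBER)
)
PARTITION BY (sale_date)
LOCATION = @my_ext_stage/sales/
FILE_FORMAT = (TYPE = PARQUET);
```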

322. Snowflake charges 0.06 credits per 1000 event notifications received.

324. The following are not supported for external tables: 

a. Clustering keys
b. Cloning
c. Data in XML format

325. Time Travel is not supported for external tables.

326. Once an account is created, ORGADMIN can view the account properties but does not have access to the account data.

327. Organization name must be unique across all Snowflake organizations. It cannot include underscores or other delimiters.

328. Snowflake credits are charged based on the number of virtual warehouses you use, how long they run, and their size.

329. Stopping and restarting a warehouse within the first minute does not change the amount billed; the minimum billing charge is 1 minute.

Krsna (GCKR)
Database architect, SnowPro Certified, Trainer.
