- February 26, 2025
- Posted by: Darshini N
- Category: Snowflake
Data pipelines are the backbone of modern data systems, but simply moving data from one point to another isn’t enough. Without proper auditing, you might be operating blindly, risking errors and inefficiencies.
In this post, we explore Snowflake’s auditing process within the medallion architecture—how it meticulously tracks data movement across Bronze, Silver, and Gold stages using three powerful stored procedures.
Why Auditing Matters in Snowflake
Auditing ensures that your data is:
- Accurately processed from ingestion to transformation.
- Trackable at every stage with relevant metadata.
- Error-free and reliable for downstream analytics.
Whenever a new file lands in Snowflake (via external cloud storage or internal stage), an entry is created in the AUDIT_TABLE, capturing key insights:
- What file was loaded?
- How large was it?
- How many records made it through?
- Were there any processing errors?
These questions are answered by three auditing stored procedures, one for each layer, all writing to a shared AUDIT_TABLE.
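The exact audit schema varies from project to project. As a minimal sketch (every table and column name here is an illustrative assumption, not a fixed Snowflake standard), an AUDIT_TABLE supporting all three procedures might look like this:

```sql
-- Minimal audit table sketch; names and columns are illustrative assumptions
CREATE TABLE IF NOT EXISTS AUDIT_TABLE (
    FILE_NAME             VARCHAR,        -- file loaded into the Bronze layer
    TABLE_NAME            VARCHAR,        -- target Bronze table for this file
    FILE_SIZE_BYTES       NUMBER,         -- file size reported by COPY_HISTORY
    BRONZE_RECORD_COUNT   NUMBER,
    BRONZE_LOAD_STATUS    VARCHAR,
    BRONZE_ERROR_MESSAGE  VARCHAR,
    SILVER_RECORD_COUNT   NUMBER,
    SILVER_LOAD_STATUS    VARCHAR,
    SILVER_ERROR_MESSAGE  VARCHAR,
    GOLD_RECORD_COUNT     NUMBER,
    GOLD_LOAD_STATUS      VARCHAR,
    GOLD_ERROR_MESSAGE    VARCHAR,
    STEP                  VARCHAR,        -- e.g. 'Bronze Load Complete'
    LOAD_TIMESTAMP        TIMESTAMP_NTZ DEFAULT CURRENT_TIMESTAMP()
);
```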
1. Source Data Loading & Auditing: The Bronze Layer
The Bronze layer is where raw, unprocessed data first lands. This stage often contains duplicates, missing values, and unstandardized fields. Tracking data here is crucial to ensure nothing is lost or corrupted.
How It Works
Data is ingested using COPY INTO statements. Upon completion, a stored procedure is called that updates the Bronze layer's audit information in the AUDIT_TABLE with:
- File names
- Record counts
- Errors (if any)
- Step (Bronze Load Complete / Bronze Load Failed)

The procedure leverages INFORMATION_SCHEMA.COPY_HISTORY to automatically capture metadata about the loaded files.
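As a hedged illustration of this step (the stage, table, and file-format details below are assumptions), the Bronze audit entry can be built directly from the COPY_HISTORY table function once the COPY INTO finishes:

```sql
-- Load raw files into the Bronze table; stage and table names are assumptions
COPY INTO BRONZE.CUSTOMER_DATA
  FROM @RAW_STAGE/customer/
  FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
  ON_ERROR = 'CONTINUE';

-- Record the load in AUDIT_TABLE using COPY_HISTORY metadata from the last hour
INSERT INTO AUDIT_TABLE
    (FILE_NAME, TABLE_NAME, FILE_SIZE_BYTES, BRONZE_RECORD_COUNT,
     BRONZE_LOAD_STATUS, BRONZE_ERROR_MESSAGE, STEP)
SELECT
    FILE_NAME,
    TABLE_NAME,
    FILE_SIZE,
    ROW_COUNT,
    IFF(UPPER(STATUS) = 'LOADED', 'Completed', 'Failed'),
    FIRST_ERROR_MESSAGE,
    IFF(UPPER(STATUS) = 'LOADED', 'Bronze Load Complete', 'Bronze Load Failed')
FROM TABLE(INFORMATION_SCHEMA.COPY_HISTORY(
    TABLE_NAME => 'BRONZE.CUSTOMER_DATA',
    START_TIME => DATEADD('hour', -1, CURRENT_TIMESTAMP())));
```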
Example Audit Entry for Bronze:

| File Name | Bronze Record Count | Bronze Load Status | Bronze Error Message |
| --- | --- | --- | --- |
| customer_data.csv | 10,000 | Completed | None |
2. Transforming & Auditing: The Silver Layer
Once data is ingested, it moves to the Silver layer, where it is cleaned, deduplicated, and enriched with key fields such as Integration_ID, Target_Created_Dt, and Target_Updated_Dt. The Silver load can be a merge/update or a truncate-and-load, depending on your requirement. After the Silver load, the Silver audit procedure is called from within the same stored procedure script, and any error messages are passed as an argument to the audit procedure.
How It Works
The Silver audit procedure tracks:
- Silver record count (post-processing)
- Timestamp of ingestion
- Load status (Success/Error)
- Step (Silver Load Complete, or Bronze Load Complete if the load failed)
Sample code to update the Silver layer status in AUDIT_TABLE is sketched below, after the error-handling note.
The Silver table, Bronze table, and error message are passed as parameters so that the same audit procedure can be reused dynamically for all tables.

Using the file names populated in the AUDIT_TABLE during the previous step, the record count is calculated from the Silver table for the files loaded in the current run.
🔍 Error Handling: If an issue arises during the Silver load, it is captured and logged in the AUDIT_TABLE to help pinpoint and resolve failures efficiently. On error, the STEP field is reverted to Bronze Load Complete.
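A minimal Snowflake Scripting sketch of such a Silver audit procedure is shown below; the procedure name, parameter signature, audit columns, and the assumption that the Silver table carries a FILE_NAME column are all illustrative and should be adapted to your own schema:

```sql
-- Reusable Silver audit procedure (sketch); names and columns are assumptions
CREATE OR REPLACE PROCEDURE SP_AUDIT_SILVER(
    SILVER_TABLE VARCHAR,
    BRONZE_TABLE VARCHAR,
    ERROR_MSG    VARCHAR
)
RETURNS VARCHAR
LANGUAGE SQL
AS
$$
DECLARE
    silver_cnt NUMBER DEFAULT 0;
    cnt_query  STRING;
BEGIN
    IF (ERROR_MSG IS NULL OR ERROR_MSG = '') THEN
        -- Count Silver rows for the files recorded in the Bronze step of this run
        cnt_query := 'SELECT COUNT(*) FROM ' || SILVER_TABLE ||
                     ' WHERE FILE_NAME IN (SELECT FILE_NAME FROM AUDIT_TABLE' ||
                     ' WHERE TABLE_NAME = ''' || BRONZE_TABLE || '''' ||
                     ' AND STEP = ''Bronze Load Complete'')';
        LET rs  RESULTSET := (EXECUTE IMMEDIATE :cnt_query);
        LET cur CURSOR FOR rs;
        OPEN cur;
        FETCH cur INTO silver_cnt;
        CLOSE cur;

        UPDATE AUDIT_TABLE
           SET SILVER_RECORD_COUNT = :silver_cnt,
               SILVER_LOAD_STATUS  = 'Completed',
               STEP                = 'Silver Load Complete',
               LOAD_TIMESTAMP      = CURRENT_TIMESTAMP()
         WHERE TABLE_NAME = :BRONZE_TABLE
           AND STEP = 'Bronze Load Complete';
        RETURN 'Silver audit complete';
    ELSE
        -- On failure, log the error and leave STEP at Bronze Load Complete
        UPDATE AUDIT_TABLE
           SET SILVER_RECORD_COUNT  = 0,
               SILVER_LOAD_STATUS   = 'Failed',
               SILVER_ERROR_MESSAGE = :ERROR_MSG,
               STEP                 = 'Bronze Load Complete'
         WHERE TABLE_NAME = :BRONZE_TABLE
           AND STEP = 'Bronze Load Complete';
        RETURN 'Silver load failure logged';
    END IF;
END;
$$;
```

A typical call from the end of the Silver load procedure might be CALL SP_AUDIT_SILVER('SILVER.CUSTOMER_DATA', 'CUSTOMER_DATA', NULL);, with the caught error message passed instead of NULL when an exception occurs (again, the names here are illustrative).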
Example Audit Entry for Silver:

| File Name | Silver Record Count | Silver Load Status | Silver Error Message |
| --- | --- | --- | --- |
| customer_data.csv | 0 | Failed | Duplicate Records |
3. The Final Check: Gold Layer
The Gold layer represents refined, analytics-ready data. By this stage, transformations are complete, and data is prepared for reporting and decision-making.
How It Works
Once data enters Gold, the audit procedure for the Gold layer is called inside the Gold load procedure, and the AUDIT_TABLE is updated with:
- Record count for Gold tables
- Timestamp of final load
- Load status (Success/Error)
Sample code to update the Gold layer status in AUDIT_TABLE follows the same pattern:
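The following is a comparable sketch for the Gold layer; as before, the procedure name, columns, and the choice to leave STEP at Silver Load Complete on failure (mirroring the Silver behavior) are assumptions rather than a prescribed pattern:

```sql
-- Gold audit procedure (sketch); names and columns are assumptions
CREATE OR REPLACE PROCEDURE SP_AUDIT_GOLD(
    GOLD_TABLE   VARCHAR,
    BRONZE_TABLE VARCHAR,
    ERROR_MSG    VARCHAR
)
RETURNS VARCHAR
LANGUAGE SQL
AS
$$
DECLARE
    gold_cnt  NUMBER DEFAULT 0;
    cnt_query STRING;
BEGIN
    IF (ERROR_MSG IS NULL OR ERROR_MSG = '') THEN
        -- Record count of the analytics-ready Gold table
        cnt_query := 'SELECT COUNT(*) FROM ' || GOLD_TABLE;
        LET rs  RESULTSET := (EXECUTE IMMEDIATE :cnt_query);
        LET cur CURSOR FOR rs;
        OPEN cur;
        FETCH cur INTO gold_cnt;
        CLOSE cur;

        UPDATE AUDIT_TABLE
           SET GOLD_RECORD_COUNT = :gold_cnt,
               GOLD_LOAD_STATUS  = 'Completed',
               STEP              = 'Gold Load Complete',
               LOAD_TIMESTAMP    = CURRENT_TIMESTAMP()
         WHERE TABLE_NAME = :BRONZE_TABLE
           AND STEP = 'Silver Load Complete';
        RETURN 'Gold audit complete';
    ELSE
        -- On failure, log the error and leave STEP at Silver Load Complete
        UPDATE AUDIT_TABLE
           SET GOLD_LOAD_STATUS   = 'Failed',
               GOLD_ERROR_MESSAGE = :ERROR_MSG
         WHERE TABLE_NAME = :BRONZE_TABLE
           AND STEP = 'Silver Load Complete';
        RETURN 'Gold load failure logged';
    END IF;
END;
$$;
```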

🎯 Tracking Pipeline Completion: The STEP field in the AUDIT_TABLE tracks pipeline progress:
- “Bronze Load Complete”
- “Silver Load Complete”
- “Gold Load Complete”
✅ If all stages complete successfully, the data is fully trusted. If an error occurs, it can be immediately flagged for review.
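For example, a simple query on the STEP field (using the column names assumed earlier) can surface any file from the current day that has not made it all the way to Gold:

```sql
-- Flag files from today's run that have not reached the Gold layer
SELECT FILE_NAME,
       STEP,
       COALESCE(GOLD_ERROR_MESSAGE, SILVER_ERROR_MESSAGE, BRONZE_ERROR_MESSAGE) AS LATEST_ERROR
FROM AUDIT_TABLE
WHERE STEP <> 'Gold Load Complete'
  AND LOAD_TIMESTAMP >= CURRENT_DATE();
```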
Example Audit Entry:

| File Name | Gold Record Count | Gold Load Status | Gold Error Message |
| --- | --- | --- | --- |
| customer_data.csv | 9,750 | Completed | None |
The Power of These Auditing Procedures
By leveraging these procedures, organizations gain:
- Full transparency into data pipeline execution.
- Early error detection to prevent downstream issues.
- Improved data integrity for business intelligence and analytics.
💡 With real-time audit logs, you never have to wonder what happened in your pipeline—everything is traceable!
Conclusion: The Secret Weapon for Data Pipelines
If you’ve ever struggled with data pipeline reliability, these auditing procedures are the answer. They provide:
- Real-time tracking of data movement.
- Automated error detection to prevent bad data from propagating.
- Detailed audit logs for troubleshooting and optimization.
In today’s data-driven world, transparency is everything. With Snowflake’s auditing power, you’re not just moving data—you’re mastering your data pipeline.
Please feel free to reach out to us for your Snowflake solution needs. Cittabase is a Premier partner with Snowflake.