Snowflake data teams: How to boost your productivity and work safely with Cortex Code
4 minute read
15 April 2026
Enterprise data teams are often restricted in how they use public AI tools due to their organisation’s security and compliance requirements. In today’s AI-driven landscape, this can limit both productivity and the insights available to them. Even when teams find the right AI tools, getting them approved can take months due to procurement and budget processes. That’s where Snowflake’s AI coding agent, Cortex Code (CoCo), stands out.
For Snowflake users looking at Cortex Code, its appeal is strong. You can immediately start working with a secure, enterprise-compliant, and out-of-the-box AI solution. Your data engineers can work more efficiently and focus on higher-value development. Tasks that once took hours can now be completed in minutes. Cortex Code also speeds up the development to production cycle and reduces time spent on debugging and issue resolution.
We’ve put all of this to the test at Altis, and I’d love to share how my team have used Cortex Code in the last few months and all the benefits we’ve seen for ourselves.
What is Cortex Code?
Cortex Code (also known as CoCo) is a user-facing AI agent built into Snowflake itself. It helps with general coding and data engineering tasks, including SQL, dbt, and notebooks. Key strengths include:
- Secure: it runs entirely within Snowflake’s infrastructure, enforcing role-based access controls.
- Context-aware: it knows your tables, schemas, and objects.
- Interactive: it enables real-time interaction to execute queries, generate files, and explore documentation.
Speed up the data exploration and modelling lifecycle
One of the first places I noticed Cortex Code’s value was in the BRONZE (Raw Data) layer. This is where raw ingestion lands: files of varying formats and inconsistent schemas. There is often little to no documentation, and business SME knowledge is rarely available.
With Cortex Code, I asked questions like 'Explain this table and its columns' or 'Which column would make the best natural key?' I used it to quickly understand the structure, identify key relationships, and spot potential data quality issues before even starting the design. The answers appeared in under a minute, grounded in Snowflake’s actual metadata and tables. Instead of manually scrolling through each column, we saved hours, time we could redirect to the next step, such as designing data models.
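Behind a question like 'Which column would make the best natural key?', the agent can generate and run ordinary SQL against the raw tables. A minimal sketch of that kind of check, with illustrative table and column names:

```sql
-- Uniqueness check on a candidate natural key: if the two counts match,
-- ORDER_REF has no duplicates and is a viable natural key.
-- BRONZE.RAW_ORDERS and ORDER_REF are illustrative names.
SELECT
    COUNT(*)                  AS total_rows,
    COUNT(DISTINCT order_ref) AS distinct_order_refs,
    COUNT(*) - COUNT(order_ref) AS null_order_refs  -- nulls disqualify a key
FROM bronze.raw_orders;
```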
As new features appear in Snowflake, we are looking to further leverage Cortex Code and Cortex Analyst (Semantic Views) for the entire data modelling cycle. Cortex Code can recommend fact and dimension models, generate transformation logic, and construct semantic views directly on the physical tables we provide. In practice, this translates into simple prompts like “Generate a SQL model to create a DIM_BUYER table with deduplication logic” or “Create a semantic view to check our forecast accuracy by SKU velocity code”. As models evolve, it can also help validate joins and grain, and trace lineage and downstream impact. It doesn’t replace human judgment, but it is a powerful assistant across the end-to-end modelling workflow.
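For the DIM_BUYER prompt above, the generated model typically looks something like the following sketch, using Snowflake’s QUALIFY clause to keep the latest record per natural key. The source table and column names are assumptions for illustration:

```sql
-- Deduplicate buyers, keeping the most recent record per BUYER_ID.
-- BRONZE.RAW_BUYERS and its columns are illustrative names.
CREATE OR REPLACE TABLE dim_buyer AS
SELECT
    buyer_id,
    buyer_name,
    country,
    updated_at
FROM bronze.raw_buyers
QUALIFY ROW_NUMBER() OVER (
    PARTITION BY buyer_id      -- the natural key
    ORDER BY updated_at DESC   -- latest record wins
) = 1;
```

The value is less in typing this SQL and more in having the agent propose the grain and deduplication rule from the table’s actual metadata, which you then review.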
Debugging data pipeline failures
We’ve used AI to debug and solve pipeline problems, such as files showing up in the wrong format, an external stage referencing the wrong filename, encodings changing, or columns drifting in type from upstream systems. With Cortex Code, I fed it the error and its context and asked what was likely wrong. In the past, identifying these issues meant digging through logs, running ad-hoc queries, and iterating slowly. While it doesn’t always deliver a perfect fix, Cortex Code consistently points me in the right direction much faster than manual investigation. In some cases, it identifies the root cause immediately, such as detecting a UTF-16 encoded file causing decoding failures when UTF-8 was expected.
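For that UTF-16 case, the fix it suggests is usually to declare the encoding explicitly in the file format rather than relying on Snowflake’s UTF-8 default. A sketch, with illustrative stage, table, and format names:

```sql
-- Declare the file encoding explicitly so COPY INTO stops failing on
-- UTF-16 files. Object names here are illustrative.
CREATE OR REPLACE FILE FORMAT utf16_csv
    TYPE = CSV
    ENCODING = 'UTF-16'
    FIELD_OPTIONALLY_ENCLOSED_BY = '"'
    SKIP_HEADER = 1;

COPY INTO bronze.raw_orders
FROM @ext_stage/orders/
FILE_FORMAT = (FORMAT_NAME = utf16_csv);
```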
Validating RBAC hierarchies before Terraform deploys
Another area where Cortex Code truly excelled is role-based access control (RBAC). Anyone who has managed Snowflake access at scale knows how complex role hierarchies can become, with inherited privileges, future grants, and overlapping roles. Managing all of this through Terraform requires a rigorous and well-structured design approach.
Recently, we deployed a new environment. During end-to-end testing, my data factory jobs failed due to insufficient permissions. With Cortex Code, I was able to quickly identify which grants in the Managed Access Schema and database-scoped roles were not working as intended. Instead of scanning query history and Terraform logs, manually running multiple SHOW GRANTS queries, and piecing everything together, I asked Cortex Code to trace permissions, explain inheritance, and highlight potential gaps. What used to take a couple of hours now only took about 30 minutes.
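For reference, this is the manual trace Cortex Code automated for us. Role and schema names are illustrative:

```sql
-- What privileges does the role actually hold?
SHOW GRANTS TO ROLE adf_loader_role;

-- Which users and roles inherit it?
SHOW GRANTS OF ROLE adf_loader_role;

-- Current and future grants on the managed access schema.
SHOW GRANTS ON SCHEMA analytics.staging;
SHOW FUTURE GRANTS IN SCHEMA analytics.staging;
```

Piecing the output of these queries together by hand across a role hierarchy is exactly the slow part that the agent collapses into a single question.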
The takeaway for data teams
Before AI, data engineers spent most of their time writing and debugging code. This left little room for planning, strategy, or high-impact work. With AI handling the heavy lifting, you can now increase your productivity and focus on high-value, strategic tasks.
Cortex Code’s advantage lies in its context and security. It runs entirely within Snowflake with access to real data, roles, and metadata, preserving data privacy while delivering actionable insight. For Snowflake users, it’s an AI tool you can tap into immediately. The question isn’t ‘Should we try Cortex Code?’ but ‘How quickly can we adopt it effectively?’