**[v1.0.82-beta] 2026-05-13** ADDED: +2 MODIFIED: ~3
- developing-with-streamlit-in-snowflake (added): Use for Streamlit development tasks with a Snowflake angle: Snowflake-connected dashboards, Streamlit-in-Snowflake (SiS) deployment to warehouse / SPCS / Workspaces, applying Snowflake branding, st.connection('snowflake'). Also use for general Streamlit authoring (widgets, layouts, caching, theming, custom components) — this skill routes general OSS questions to version-matched content from a detected Streamlit ≥1.57 install, or to a bundled OSS snapshot when no install is available. Triggers: streamlit, st., dashboard, app.py, theme, beautify, style, CSS, color, background, button, custom component, st.components, snowflake dashboard, monitor snowflake, streamlit on snowflake, streamlit in snowflake, SiS, scaffold, snowflake theme, st.connection snowflake, snow streamlit deploy.
  Sub-skills:
  - developing-with-streamlit: **[REQUIRED]** Use for ALL Streamlit tasks: creating, editing, debuggi…
  - scaffolding-streamlit-in-snowflake: Use for any Streamlit task where Snowflake is in scope — building a da…
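  For orientation, a minimal sketch of the Snowflake-connected dashboard pattern this skill covers, assuming a `[connections.snowflake]` entry in `.streamlit/secrets.toml`; `MY_DB.PUBLIC.ORDERS` is a hypothetical table, not something the skill ships.

  ```python
  # app.py - minimal Snowflake-connected dashboard (sketch).
  # Assumes a [connections.snowflake] block in .streamlit/secrets.toml;
  # MY_DB.PUBLIC.ORDERS is a hypothetical table.
  import streamlit as st

  st.title("Orders overview")

  conn = st.connection("snowflake")  # the st.connection('snowflake') path named above
  df = conn.query(                   # query results cached for 10 minutes
      "SELECT order_date, SUM(amount) AS revenue "
      "FROM MY_DB.PUBLIC.ORDERS GROUP BY order_date ORDER BY order_date",
      ttl=600,
  )
  st.line_chart(df, x="ORDER_DATE", y="REVENUE")
  ```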
- marketplace-search (added): Search the public Snowflake Marketplace for third-party datasets, data shares, Native Apps, and Connected Apps using the `cortex search marketplace` CLI. Invoke this skill PROACTIVELY any time the user expresses intent to find, use, or obtain a third-party dataset, app, or external data product. For example "do you have weather data", "find a stock price dataset", "I need consumer spending data", "is there a Salesforce / HubSpot / Stripe connector", "what demographic data can I get for California", "anything for ESG ratings", "find a marketplace listing for X", "what is ", "what's the alternate source to X", "what about ", "find me stuff", "find me a table about ", "where is data stored", or even just a BARE third-party product/brand name on its own (e.g. "Tomorrow.io", "Fishbowl", "DV360", "Snowflake managed MCP servers"). When the user names ANY third-party product, vendor, brand, or external service — even as a fragment, even if framed as a product question, even if the prompt sounds like a documentation lookup — invoke this skill. **Erring toward over-firing is preferred over missing a marketplace opportunity.** If you are about to tell the user "I don't have that data" or "you'll need to bring your own data", STOP and invoke this skill first — the public marketplace likely has it.
  ALSO invoke this skill BEFORE writing any code, fetching from external APIs (e.g. clinicaltrials.gov, BLS, FRED, World Bank), or building dashboards/reports against third-party data sources — search the marketplace FIRST to see if the data is already available there as a share, even if the user explicitly named an external source. The marketplace listing is almost always preferable to a custom API integration.
  Do NOT use this skill for ONE specific listing referenced by global name (e.g. GZ2FQZ711TU) or exact title — use `get-marketplace-listing-details`. Do NOT use it when marketplace search results are already in hand and only need formatting. Do NOT use it for searching the user's own internal Snowflake catalog (tables, views, schemas, functions, semantic views) — use `cortex search object`. Do NOT use it for Snowflake product documentation or how-to questions — use `cortex search docs`.
  When ambiguous between marketplace and a sibling tool, prefer marketplace — it's cheap and missing a relevant listing is expensive — UNLESS the user has clearly moved past discovery, i.e. asks for: *integration syntax with a named mechanism* ("how to use to connect/fetch/integrate..."), *catalog inventory with no external qualifier* ("what is X tables", with no "external" / "from " / "third-party" hint), a *specific identifier value* ("what is the [code/symbol/SM ID] for [identifier]"), or an *educational deep-dive with explicit depth markers* ("explain X in detail", "to a beginner", "full overview"). In those cases the user already knows what they want; a marketplace search won't help.
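  As a rough illustration of the tool call behind this skill, a sketch of shelling out to the `cortex search marketplace` command named above; treating the query as a single positional argument is an assumption about the CLI's syntax, not documented behavior.

  ```python
  # Sketch: invoke the `cortex search marketplace` CLI from Python.
  # The subcommand name comes from the description above; the positional
  # query argument is an assumed shape.
  import subprocess

  def search_marketplace(query: str) -> str:
      result = subprocess.run(
          ["cortex", "search", "marketplace", query],
          capture_output=True, text=True, check=True,
      )
      return result.stdout

  print(search_marketplace("daily weather observations for California"))
  ```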
- data-cleanrooms (description changed): Use for ALL requests related to Snowflake Data Clean Rooms (DCR): clean room, cleanroom, DCR, collaboration(s), view/list collaborations, join/review collaboration, invitation, data offering(s), template(s), register, share table, run analysis, run activation, audience overlap, activation, export segment, create collaboration, create cleanroom, measure overlap, manage templates, add template, remove template, approve template, reject template, auto-approval, tear down, leave, drop collaboration, delete collaboration. Covers browsing, joining, registering, running analysis/activation, creating collaborations, managing templates, and leaving/tearing down collaborations via the DCR Collaboration API.
- semantic-view (description changed): Use for ALL requests that mention: create, build, debug, fix, troubleshoot, optimize, improve, or analyze a semantic view — AND for requests about VQR suggestions, verified queries, verified query representations, seeding/generating queries, suggesting metrics, suggesting filters, recommending metrics/filters/facts, importing Tableau (.twb/.twbx/.tds/.tdsx) or Power BI (.pbit/.pbix) files, or enriching a semantic view. This is the entry point - even if the request seems simple. DO NOT attempt to create, debug, or generate suggestions for semantic views manually - always invoke this skill first. This skill guides users through creation, setup, auditing, VQR suggestion generation, filter & metric suggestions, Tableau/Power BI imports, and SQL generation debugging workflows for semantic views with Cortex Analyst.
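  To make the target object concrete, a hedged sketch of a minimal semantic view; the TABLES / DIMENSIONS / METRICS clause names approximate Snowflake's CREATE SEMANTIC VIEW grammar, and the table, key, and metric names are placeholders. The skill, not this sketch, should generate real DDL.

  ```python
  # Sketch only: a minimal semantic view for Cortex Analyst.
  # Clause grammar is approximated; names are placeholders.
  from snowflake.snowpark import Session

  session = Session.builder.getOrCreate()  # assumes a configured default connection
  session.sql("""
      CREATE OR REPLACE SEMANTIC VIEW sales_sv
        TABLES (orders AS my_db.public.orders PRIMARY KEY (order_id))
        DIMENSIONS (orders.order_date AS order_date)
        METRICS (orders.total_revenue AS SUM(amount))
  """).collect()
  ```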
- skill_development (description changed): Create, document, audit, or refactor skills for Cortex Code. Use when: creating new skills, capturing session work as skills, reviewing skills, refactoring large skills. Triggers: create skill, build skill, new skill, summarize session, capture workflow, audit skill, review skill, refactor skill, triage skills.

**[v1.0.81-beta] 2026-05-12** ADDED: +2 REMOVED: -3 MODIFIED: ~1
- snowflake-apps (added): Build and deploy web applications on Snowflake. Use for ALL app requests: create, scaffold, build, deploy, publish, develop, test, operate, monitor, or troubleshoot a Snowflake App. A Snowflake App is a JS/Node web application (typically Next.js) deployed to SPCS via SnowCLI app commands (`snow app` preferred, `snow __app` fallback for older CLI versions). This is NOT a Streamlit app or Native App. Triggers: build me an app, new app, scaffold, react app, next.js app, dashboard, data app, deploy my app, push to snowflake, ship it, deploy failed, fix deploy, run locally, develop, app logs, app status, restart app.
  Sub-skills:
  - create: Create a new Snowflake App (Next.js) from scratch. Use when the user a…
  - deploy: Deploy an app to Snowflake. Summarises settings, gets approval, then b…
  - develop: Local development, testing, and iteration for Snowflake Apps. Use when…
  - operate: Post-deploy operations for Snowflake Apps: logs, status, restart, scal…
- team-workflow (added): Multi-phase team orchestration for feature implementation. Supports two entry paths: explicit user request for teammates, or autonomous complexity-based assessment after entering plan mode. HIGHEST PRIORITY — must be loaded FIRST (before any domain skills) when user asks to use teammates, teams, or parallel agents. Triggers: use teammates, use a team, work in parallel with agents, delegate to teammates, swarm this, swarm, team up on this, team up, orchestrate with subagents, subagent-orchestrated, gated workflow, multi-phase workflow, coordinate agents, spawn workers, worker/verifier, parallel agents, run as a team, investigate with agents, research with agents, explore with agents.
- build-react-app (removed)
- ctx-workflow (removed)
- developing-with-streamlit (removed)
- iceberg (description changed): Use for **ALL** Iceberg table requests in Snowflake. This is the **REQUIRED** entry point for catalog integrations, catalog-linked databases, external volumes, auto-refresh issues, Horizon IRC diagnostics, and Snowflake Intelligence. DO NOT work with Iceberg manually - invoke this skill first. Triggers: iceberg, iceberg table, apache iceberg, catalog integration, REST catalog, ICEBERG_REST, glue, AWS glue, glue IRC, lake formation, unity catalog, databricks, polaris, opencatalog, open catalog, onelake, OneLake, microsoft fabric, fabric, fabric lakehouse, onelake REST, SAP, SAP BDC, SAP Business Data Cloud, CLD, catalog-linked database, linked catalog, auto-discover tables, sync tables, LINKED_CATALOG, external volume, storage access, S3, Azure blob, GCS, IAM role, trust policy, Access Denied, 403 error, ALLOW_WRITES, storage permissions, auto-refresh, autorefresh, stale data, refresh stuck, delta direct, snowflake intelligence, text-to-SQL iceberg, query iceberg natural language, horizon IRC, horizon IRC setup, horizon IRC not working, test horizon IRC, diagnose horizon IRC, debug horizon IRC, horizon IRC connection, horizon IRC endpoint, horizon REST catalog, PAT authentication horizon.
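  For a rough picture of what the skill automates, a hedged sketch of a REST catalog integration plus a catalog-linked database; the ICEBERG_REST and LINKED_CATALOG keywords come from the trigger list above, but the surrounding option names are assumptions and the endpoint, token, and names are placeholders.

  ```python
  # Sketch: Iceberg REST catalog integration + catalog-linked database.
  # Option names approximate Snowflake's syntax; verify before use.
  from snowflake.snowpark import Session

  session = Session.builder.getOrCreate()  # assumes a configured default connection
  session.sql("""
      CREATE CATALOG INTEGRATION my_rest_cat
        CATALOG_SOURCE = ICEBERG_REST
        TABLE_FORMAT = ICEBERG
        REST_CONFIG = (CATALOG_URI = 'https://catalog.example.com/api/catalog')
        REST_AUTHENTICATION = (TYPE = BEARER, BEARER_TOKEN = '...')
        ENABLED = TRUE
  """).collect()
  # Catalog-linked database that auto-discovers and syncs tables.
  session.sql("""
      CREATE DATABASE my_linked_db
        LINKED_CATALOG = (CATALOG = 'my_rest_cat')
  """).collect()
  ```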

**[v1.0.79-beta] 2026-05-06** MODIFIED: ~2
- data-governance (description changed): **[REQUIRED]** for all Snowflake data governance tasks. Routes to six sub-skills: (1) horizon-catalog — access history, users, roles, grants, permissions, query history, compliance, catalog; (2) data-policy — [REQUIRED] masking, row access, projection, aggregation, join, and tokenization policies, tag-based masking, protect sensitive data, column/TIMESTAMP masking, the 2-stage UI create flow triggered by `/data-governance Create a new policy for me`; (3) sensitive-data-classification — [REQUIRED for ALL classification] PII, classify, data classification, manual/automatic classification, Classification Profile, auto_tag, custom classifiers, regex, semantic/privacy category, IDENTIFIER, QUASI_IDENTIFIER, SENSITIVE, SYSTEM$CLASSIFY, DATA_CLASSIFICATION_LATEST, GDPR/CCPA/PCI; (4) governance-maturity-score — governance posture, maturity score, assessment, recommendations; (5) observability-maturity-score — data observability, DMF coverage, quality monitoring maturity, lineage usage, observability assessment; (6) object-contacts — [REQUIRED] assign data steward, create contact, object contact, contact report, who owns this table, SET CONTACT, data stewardship. MUST be used for classification or masking tasks — do not answer from general knowledge. horizon-catalog is the fallback. Triggers: governance, access history, permissions, grants, roles, audit, compliance, catalog, masking policy, row access policy, projection policy, aggregation policy, join policy, JOIN_REQUIRED, tokenization policy, tokenize at write time, external tokenization, FPE, PII, sensitive data, classification, run classification, SYSTEM$CLASSIFY, classifier, classification profile, DATA_CLASSIFICATION_LATEST, detect PII, GDPR, CCPA, PCI, tag sensitive columns, governance maturity score, governance posture, how well governed, data observability, observability maturity, DMF coverage, lineage usage, observability assessment, data steward, object contact, assign contact, who owns this table, contact report, SET CONTACT, /data-governance Create a new policy.
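  To ground the data-policy sub-skill, a standard masking-policy example; the CASE-based CREATE MASKING POLICY shape is stock Snowflake DDL, while the role, table, and column names are placeholders.

  ```python
  # Standard Snowflake masking policy applied to a column (names are placeholders).
  from snowflake.snowpark import Session

  session = Session.builder.getOrCreate()  # assumes a configured default connection
  session.sql("""
      CREATE OR REPLACE MASKING POLICY email_mask AS (val STRING)
      RETURNS STRING ->
        CASE WHEN CURRENT_ROLE() IN ('PII_READER') THEN val
             ELSE '*** MASKED ***' END
  """).collect()
  session.sql(
      "ALTER TABLE my_db.public.users MODIFY COLUMN email "
      "SET MASKING POLICY email_mask"
  ).collect()
  ```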
- warehouse (description changed): Warehouse configuration, DDL, Gen2 creation/conversion, Gen2, adaptive, performance tuning, DML optimization, ETL workloads, sizing, credit-per-hour rates from the Credit Consumption Table, resume rates, resume behavior, region availability, Snowpark-optimized limitations. Not for cost analytics or historical warehouse spend (cost-intelligence) or org billing (billing).
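  A hedged sketch of the Gen2 creation and conversion this entry mentions; the `RESOURCE_CONSTRAINT = 'STANDARD_GEN_2'` parameter is a best-recollection assumption, so verify it (and region availability) through the skill before running.

  ```python
  # Sketch: create a Gen2 warehouse and convert an existing one.
  # RESOURCE_CONSTRAINT value is an assumption; names are placeholders.
  from snowflake.snowpark import Session

  session = Session.builder.getOrCreate()  # assumes a configured default connection
  session.sql("""
      CREATE WAREHOUSE my_gen2_wh
        WAREHOUSE_SIZE = 'MEDIUM'
        RESOURCE_CONSTRAINT = 'STANDARD_GEN_2'
        AUTO_SUSPEND = 60
  """).collect()
  session.sql(
      "ALTER WAREHOUSE my_old_wh SET RESOURCE_CONSTRAINT = 'STANDARD_GEN_2'"
  ).collect()
  ```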

**[v1.0.78-beta] 2026-05-05** ADDED: +1
- cortex-ai-function-studio (added): Create, evaluate, and optimize custom AI functions using Snowflake Cortex AI Complete. Supports text, image, and document inputs. Use when: building LLM-powered functions, evaluating AI function performance, tuning prompts, selecting models, checking async job status. Triggers: ai function builder, custom ai function, user defined ai function, build my own llm function, evaluate ai function, tune ai function, optimize ai function, demo ai function, resume ai function job, image classification, document analysis, multimodal ai function.
  Sub-skills:
  - create: Create a new custom AI function. Supports table-based or manual input …
  - demos: Interactive demos for custom AI functions. Use when: demo, example, wa…
    - classification: Quick Start demo: Build a toxicity classifier and evaluate it — the fa…
    - insurance-claim-routing: Interactive demo: Generate pseudo-labels from a strong teacher model, …
    - legal-doc-extraction: Interactive demo: Build a legal contract field extractor and create a …
    - pdf-field-extraction: Interactive demo: Extract structured fields from SEC 10-K filing PDFs …
    - policy-conditioned-routing: Interactive demo: Build a policy-conditioned ticket router where a see…
    - redaction: Interactive demo [Experimental]: Build a PII redaction function using …
  - evaluate: Evaluate an AI function's performance against a labeled dataset using …
  - optimize: Optimize an AI function through automated function body optimization, …
  - synthetic-data: Generate synthetic data or pseudo-label input-only tables for AI funct…

**[v1.0.77-beta] 2026-05-01** ADDED: +1 MODIFIED: ~1
- snowpipe-streaming (added): **[REQUIRED]** Use for ALL Snowpipe Streaming tasks: setup, configure, troubleshoot, monitor, optimize, or migrate streaming pipelines. Covers the High-Performance Architecture exclusively. Triggers: snowpipe streaming, streaming ingestion, low-latency ingestion, real-time ingestion, Snowpipe Streaming SDK, channel, insertRows, appendRows, streaming channel, PIPE object, streaming pipe, snowpipe v2, high-performance streaming, migrate classic streaming, troubleshoot streaming.
  Sub-skills:
  - migrate: Migrate from Snowpipe Streaming classic to High-Performance Architectu…
  - monitor: Monitor Snowpipe Streaming High-Performance Architecture pipeline heal…
  - optimize: Optimize Snowpipe Streaming High-Performance Architecture throughput, …
  - setup: Set up Snowpipe Streaming High-Performance Architecture pipelines from…
  - troubleshoot: Troubleshoot Snowpipe Streaming High-Performance Architecture pipeline…
- billing (description changed): Org-level Snowflake billing in dollars/currency. Use for: dollar spend in currency via SNOWFLAKE.ORGANIZATION_USAGE. Covers USAGE_IN_CURRENCY_DAILY, REMAINING_BALANCE_DAILY, CONTRACT_ITEMS, RATE_SHEET_DAILY. Invoices, charges, contracts, by service type, monthly spend trends, which services cost the most money, remaining balance, reconciliation, contract termination date, contract expiration date, contract start date, contract details, rate comparison, spend by account. Not for single-account credit-based analytics (cost-intelligence) or warehouse config DDL (warehouse). Key distinction: dollars/currency → billing, credits only → cost-intelligence.
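  A small example of the dollars-side query this skill owns, assuming the current role can read SNOWFLAKE.ORGANIZATION_USAGE; the column names follow the USAGE_IN_CURRENCY_DAILY view named above but should be treated as assumptions.

  ```python
  # Monthly dollar spend by usage type from ORGANIZATION_USAGE
  # (column names assumed from USAGE_IN_CURRENCY_DAILY).
  from snowflake.snowpark import Session

  session = Session.builder.getOrCreate()  # assumes a configured default connection
  rows = session.sql("""
      SELECT DATE_TRUNC('MONTH', usage_date) AS month,
             usage_type,
             SUM(usage_in_currency)          AS spend
      FROM SNOWFLAKE.ORGANIZATION_USAGE.USAGE_IN_CURRENCY_DAILY
      GROUP BY 1, 2
      ORDER BY 1, 3 DESC
  """).collect()
  for r in rows:
      print(r["MONTH"], r["USAGE_TYPE"], r["SPEND"])
  ```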

**[v1.0.76-beta] 2026-05-01** ADDED: +1 MODIFIED: ~4
- spark-migration (added): Run the SMA CLI to convert PySpark code to Snowpark, or generate dashboard and fix EWIs from existing SMA output. Supports both Snowpark API and Snowpark Connect (SCOS) conversion paths. Triggers: run sma, convert spark, migrate pyspark, sma conversion, migrate to snowpark, convert to snowpark, already ran sma, sma dashboard, fix ewis, stage conversion, snowpark connect, scos, scos migration, migrate to snowpark connect, migrate to scos.
  Sub-skills:
  - sma-dashboard-generator: Generate interactive SMA dashboard to track EWIs from conversion. Trig…
  - snowflake-notebook-migration: Migrates Databricks (DBX) notebooks to Snowflake Workspace notebooks. …
  - snowpark-connect: Snowpark Connect (SCOS) skills for migrating and validating PySpark an…
    - migrate-pyspark-to-snowpark-connect: Migrate PySpark and Databricks workloads to Snowflake SCOS (Snowpark C…
    - migrate-spark-scala-to-snowpark-connect: Migrate Spark Scala workloads to Snowflake SCOS (Snowpark Connect for …
    - validate-pyspark-to-snowpark-connect: Validate a completed PySpark to Snowpark Connect (SCOS) migration by r…
    - validate-spark-scala-to-snowpark-connect: Validate a completed Spark Scala to Snowpark Connect (SCOS) migration …
  - stage-conversion: Replace embedded file paths in SMA-converted Snowpark code. Use when: …
- native-app-provider (description changed): Use for **ALL** Snowflake Native App Framework tasks: creating app packages, writing manifest files, writing setup scripts, sharing data, testing, versioning, publishing, configuring telemetry and health status reporting, monitoring app health and lifecycle events, setting up event sharing, and debugging apps. Also use for **ALL** SPCS (Snowpark Container Services) work within native apps: adding containers, upgrading container services, building and pushing images, writing service specs, configuring compute pools, and managing service lifecycle. This is the **REQUIRED** entry point for any native app work. DO NOT attempt native app development manually - invoke this skill first. Triggers: native app, app package, application package, manifest.yml, setup script, CREATE APPLICATION, Snowflake marketplace, listing, native app framework, build native app, walk me through, guide me, get started, add version, register version, add patch, release channel, release directive, publish app, publish version, upgrade consumers, telemetry, health status, SYSTEM$REPORT_HEALTH_STATUS, log_level, trace_level, event definitions, event sharing, APPLICATION_STATE, lifecycle events, monitor app, debug app, observability, add streamlit, streamlit dashboard, add dashboard, streamlit UI, add UI to native app, native app streamlit, streamlit frontend, get_active_session, default_streamlit, SPCS native app, container native app, native app containers, native app SPCS, add containers, container_services, grant_callback, specification file, version_initializer, restricted caller, RCR, restricted callers rights, EXECUTE AS RESTRICTED CALLER, GRANT CALLER, caller rights, caller grants, restricted_callers_rights, access consumer data, consumer's role, caller's privileges, consumer's privileges.
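  To anchor the versioning triggers, a hedged sketch of the package, version, and install flow; the stage path and all names are placeholders, and the exact ADD VERSION grammar should be confirmed through the skill.

  ```python
  # Sketch: native app package lifecycle (names and stage path are placeholders).
  # The stage is assumed to already hold manifest.yml and the setup script.
  from snowflake.snowpark import Session

  session = Session.builder.getOrCreate()  # assumes a configured default connection
  session.sql("CREATE APPLICATION PACKAGE IF NOT EXISTS my_app_pkg").collect()
  session.sql(
      "ALTER APPLICATION PACKAGE my_app_pkg ADD VERSION v1 "
      "USING '@my_app_pkg.public.src/v1'"
  ).collect()
  session.sql(
      "CREATE APPLICATION my_app FROM APPLICATION PACKAGE my_app_pkg "
      "USING VERSION v1"
  ).collect()
  ```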
- declarative-sharing (description changed): **[REQUIRED]** Use for **ALL** declarative sharing and application packages with TYPE=DATA (i.e. data apps). Share data products across Snowflake accounts with versioning. Default choice when user wants to share data with another account. Also use when converting an existing data share to declarative sharing, or when a consumer wants to migrate from a data share to a declarative app. Triggers: declarative, data product, native app, data app, data application, share, sharing, another account, cross account, cross region, application package, manifest, marketplace, listing, publish, share a table, share data, manifest from share, share to manifest, generate manifest from share, inspect share, share to yaml, introspect share, convert share, migrate share, existing share, secure share to declarative, upgrade share, future-proof share, multiple shares, combine shares, merge shares, multiple data shares, consumer migration, migrate from share, upgrade share to app, replace share with app, share to app migration, drop-in replacement, switch from share to app.
- alert (description changed): Snowflake alert management - create, alter, suspend, resume, and troubleshoot alerts. Use when: user wants to create a new alert, modify an existing alert, set up monitoring, suspend or resume alerts, or investigate why an alert is firing/failing/not delivering. Triggers: create alert, new alert, add alert, alter alert, modify alert, change alert, suspend alert, resume alert, monitor with alert, set up alert, alert condition, troubleshoot alert, debug alert, investigate alert, alert firing, alert failed, alert not firing, why did my alert trigger, CONDITION_FAILED, ACTION_FAILED, notification not delivered.
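  A standard example of the object this entry manages; CREATE ALERT with an IF (EXISTS ...) THEN action is stock Snowflake DDL, while the warehouse, table, and email integration names are placeholders.

  ```python
  # Standard Snowflake alert: fire when errors appeared in the last hour.
  # Alerts are created suspended, hence the RESUME at the end.
  from snowflake.snowpark import Session

  session = Session.builder.getOrCreate()  # assumes a configured default connection
  session.sql("""
      CREATE OR REPLACE ALERT error_spike_alert
        WAREHOUSE = my_wh
        SCHEDULE = '60 MINUTE'
        IF (EXISTS (
          SELECT 1 FROM my_db.public.error_log
          WHERE logged_at > DATEADD('hour', -1, CURRENT_TIMESTAMP())
        ))
        THEN CALL SYSTEM$SEND_EMAIL(
          'my_email_int', 'oncall@example.com',
          'Error spike', 'Errors detected in the last hour.')
  """).collect()
  session.sql("ALTER ALERT error_spike_alert RESUME").collect()
  ```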
- snowflake-postgres (description changed): **[REQUIRED]** Use for **ALL** requests involving Snowflake Postgres, and for general help working with any PostgreSQL database through standard PG tooling (psql, ~/.pg_service.conf, ~/.pgpass, pg_doctor diagnostics). Triggers: 'postgres', 'postgresql', 'pg', 'psql', 'create postgres instance', 'show postgres instances', 'suspend postgres', 'resume postgres', 'reset postgres credentials', 'rotate postgres password', 'import postgres connection', 'postgres network policy', 'postgres health check', 'pg_doctor', 'pg_lake', 'postgres iceberg', 'pg iceberg', 'read pg_lake in snowflake', 'pg to snowflake iceberg', 'catalog integration for pg_lake', 'expose pg_lake to snowflake', 'SNOWFLAKE_POSTGRES catalog', 'catalog linked database for pg_lake', 'query postgres iceberg from snowflake', 'postgres slow queries', 'cache hit', 'bloat', 'vacuum', 'dead rows', 'postgres locks', 'blocking queries', 'postgres disk usage', 'active postgres queries', 'postgres connection count', 'neon', 'supabase', 'rds postgres', 'aurora postgres', 'azure postgres', 'crunchy bridge', 'external postgres', 'my postgres'. Do NOT use for generic Iceberg / catalog integration / storage integration / data lake requests — those are owned by the `iceberg` skill, EXCEPT for catalog integrations scoped to pg_lake (`CATALOG_SOURCE = SNOWFLAKE_POSTGRES`), which are handled here. Only handle Iceberg when it is scoped to pg_lake (Postgres-resident Iceberg tables or the pg_lake-specific catalog integration path).
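  A small sketch of the standard-tooling path this entry describes, assuming `pip install psycopg` and a hypothetical `[my_sf_pg]` service block in ~/.pg_service.conf; nothing here is Snowflake-specific at the driver level.

  ```python
  # Sketch: connect to a (Snowflake) Postgres instance via a
  # pg_service.conf entry and run a quick health-check query.
  import psycopg

  with psycopg.connect("service=my_sf_pg") as conn:
      with conn.cursor() as cur:
          cur.execute("SELECT version()")
          print(cur.fetchone()[0])
  ```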