Meet your new dbt pair programmer - the one who actually understands your environment, respects your workflow, and does the heavy lifting.
If you've tried other dbt tools with Copilot (dbt power user, datamate, etc.), you know the pain:
- They don't respect your Python environment
- They can't see your actual project structure
- They fail when adapters are missing from THEIR environment
- You end up doing the work yourself anyway
dbt-core-mcp is different. It's not just another plugin - it's a true pair programming partner:
- Zero dbt Dependencies: Our server needs NO dbt-core, NO adapters - works with YOUR environment
- Stays in Flow: Keep the conversation going with Copilot while it handles dbt commands, runs tests, and analyzes impact
- Respects Your Environment: Detects and uses YOUR exact dbt version, YOUR adapter, YOUR Python setup (uv, poetry, venv, conda)
- Actually Helps: Instead of generic suggestions, you get real work done - "run my changes and test downstream" actually does it
- Knows Your Project: Full access to your models, lineage, sources, and compiled SQL - no guessing, no manual lookups
Before dbt-core-mcp
You: "Copilot, help me understand what depends on stg_orders"
Copilot: "You should check the manifest.json or run dbt list..."
You: Switches to terminal, runs commands, copies output back...
With dbt-core-mcp
You: "What depends on stg_orders?"
Copilot: Shows full lineage, impact analysis, and affected models instantly
You: "Run my changes and test everything downstream"
Copilot: Does it. Reports results. You focus on the next step.
This is pair programming the way it should be - you focus on the logic, Copilot handles the execution. No context switching, no terminal juggling, just flow.
- 🔥 Zero dbt Dependencies: Server has NO dbt-core, NO adapters - ultimate environment respect
- Natural Language Control: Just talk - "run my changes and test downstream" actually works
- Bridge Execution: Automatically detects YOUR environment and runs dbt with YOUR versions
- Works with ANY Adapter: duckdb, snowflake, postgres, bigquery, databricks - if you have it, we work with it
- Smart Selection: Automatic change detection - run only what changed, or changed + downstream
- Full Project Awareness: Lineage analysis, impact assessment, compiled SQL - instant access to everything
- True Pair Programming: Stay in conversation with Copilot while it executes dbt commands and reports results
- Schema Change Detection: Automatically detects column changes and recommends downstream updates
- No Configuration Needed: Works with your existing dbt setup - any adapter, any database, any version
- Concurrency Safe: Detects and waits for existing dbt processes to prevent conflicts
This server provides tools to interact with dbt projects via the Model Context Protocol, enabling AI assistants to:
- Query dbt project metadata and configuration
- Get detailed model and source information with full manifest metadata
- Execute SQL queries with Jinja templating support ({{ ref() }}, {{ source() }})
- Inspect models, sources, and tests
- Access dbt documentation and lineage
Just talk to Copilot naturally - no need to memorize commands or syntax:
Explore your project
You: "What models do we have in this project?"
Copilot: Shows all models with materialization types and tags
Understand dependencies
You: "Show me what the customers model depends on"
Copilot: Displays full lineage with upstream sources and models
Run smart builds
You: "Run only the models I changed and test everything downstream"
Copilot: Executes dbt with smart selection, runs tests, reports results
If you don't have Python installed, get it at python.org/downloads - you'll need Python 3.9 or higher.
Don't have uv yet? Install it with: pip install uv or see docs.astral.sh/uv
Click the badge for your VS Code version:
That's it! The server will automatically start when you open a dbt project.
Add this to your .vscode/mcp.json file in your dbt project workspace:
{
"servers": {
"dbt-core": {
"command": "uvx",
"args": ["dbt-core-mcp"]
}
}
}
Or if you prefer pipx:
{
"servers": {
"dbt-core": {
"command": "pipx",
"args": ["run", "dbt-core-mcp"]
}
}
}
The server will automatically use your workspace directory as the dbt project location.
For the impatient who want the latest features immediately:
With uvx:
{
"servers": {
"dbt-core": {
"command": "uvx",
"args": [
"--from",
"git+https://github.com/NiclasOlofsson/dbt-core-mcp.git",
"dbt-core-mcp"
]
}
}
}
With pipx:
{
"servers": {
"dbt-core": {
"command": "pipx",
"args": [
"run",
"--no-cache",
"--spec",
"git+https://github.com/NiclasOlofsson/dbt-core-mcp.git",
"dbt-core-mcp"
]
}
}
}
This downloads and installs directly from GitHub every time - always bleeding edge!
By default, dbt commands have no timeout (they can run as long as needed). For complex models that take a long time to compile, you can set a timeout or explicitly disable it:
{
"servers": {
"dbt-core": {
"command": "uvx",
"args": [
"dbt-core-mcp",
"--dbt-command-timeout", "300" // 5 minutes, or use 0 for no timeout (default)
]
}
}
}
The server automatically detects your dbt project from the workspace root. If your dbt project is in a subdirectory or you need to specify a different location, use --project-dir with either a relative or absolute path:
{
"servers": {
"dbt-core": {
"command": "uvx",
"args": [
"dbt-core-mcp",
"--project-dir", "path/to/dbt/project" // relative or absolute path
]
}
}
}
For the MCP Server:
- Python 3.9 or higher
- NO dbt-core required, NO adapters required - just install dbt-core-mcp
For Your dbt Project:
- dbt Core 1.9.0 or higher
- Any dbt adapter (dbt-duckdb, dbt-postgres, dbt-snowflake, dbt-databricks, etc.)
The server automatically detects and uses YOUR project's dbt installation via bridge execution.
- Python models: Not currently supported. Only SQL-based dbt models are supported at this time.
- dbt Version: Requires dbt Core 1.9.0 or higher
Don't worry about memorizing these - you don't need to know tool names or parameters. Just talk naturally to Copilot and it figures out what to use. This reference is here for the curious who want to understand what's happening under the hood.
Pro tip: Focus on the conversational examples (You: / Copilot:) - they show how to actually use these tools in practice.
Get basic information about your dbt project including name, version, adapter type, and resource counts. By default, also runs dbt debug to validate your environment and test the database connection.
You: "What dbt version is this project using?"
Copilot: Shows project info with dbt version, adapter type, and connection status
You: "How many models and sources are in this project?"
Copilot: Displays counts and project overview with diagnostics
You: "Is my database connection working?"
Copilot: Shows connection test results from dbt debug
You: "Check my dbt setup"
Copilot: Runs full environment validation and reports any issues
Parameters:
run_debug: Run dbt debug to validate environment and test connection (default: True)
Returns: Project metadata plus diagnostic results including:
- Database connection status (ok/failed/unknown)
- Environment validation output
- System and dependency checks
Note: Set run_debug=False to skip diagnostics and get only basic project info (faster for repeated queries).
One tool, all resource types - these unified tools work across models, sources, seeds, snapshots, and tests. No need for separate tools for each type.
List all resources in your project, or filter by type (models, sources, seeds, snapshots, tests).
You: "Show me all resources in this project"
Copilot: Lists all models, sources, seeds, snapshots, and tests
You: "What models do we have?"
Copilot: Filters to show only models with their materialization types
You: "List all data sources"
Copilot: Shows configured sources with schemas and descriptions
You: "Show me the seeds"
Copilot: Displays CSV seed files available in the project
You: "Which models are materialized as tables?"
Copilot: Filters models by materialization type
Parameters:
resource_type: Optional filter - "model", "source", "seed", "snapshot", "test", or None for all
Returns: Consistent structure for all types with common fields (name, description, tags) plus type-specific details (materialization, source_name, etc.)
Get detailed information about any resource - works for models, sources, seeds, snapshots, and tests.
You: "Show me details about the customers model"
Copilot: Displays full model metadata, config, column information, and compiled SQL
You: "What's in the raw_customers source?"
Copilot: Shows source schema, columns, and freshness configuration
You: "Describe the country_codes seed"
Copilot: Returns seed configuration and column definitions
You: "What columns does the orders model have?"
Copilot: Shows column names, types, and descriptions from the database
You: "Show me the compiled SQL for customers"
Copilot: Returns model info with compiled SQL (all Jinja resolved)
You: "Tell me about the customer_snapshot"
Copilot: Displays snapshot configuration and SCD tracking setup
Parameters:
name: Resource name (e.g., "customers", "jaffle_shop.raw_orders")
resource_type: Optional - auto-detects if not specified
include_database_schema: Include actual column types from the database (default: true)
include_compiled_sql: Include compiled SQL with Jinja resolved (default: true, models only)
Auto-detection: Just provide the name - the tool automatically finds it whether it's a model, source, seed, snapshot, or test. For sources, use "source_name.table_name" format or just the table name.
Compiled SQL: For models, automatically includes compiled SQL with all {{ ref() }} and {{ source() }} resolved to actual table names. Will trigger dbt compile if not already compiled. Set include_compiled_sql=False to skip compilation.
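For instance, a staging model and its compiled form might look like this (hypothetical model and relation names):

```sql
-- models/staging/stg_orders.sql (as written, with Jinja)
select order_id, customer_id, order_date
from {{ source('jaffle_shop', 'orders') }}

-- compiled SQL (Jinja resolved to a concrete relation)
select order_id, customer_id, order_date
from "raw"."jaffle_shop"."orders"
```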
Understand relationships across all resource types - analyze dependencies and impact for models, sources, seeds, snapshots, and tests.
Trace dependency relationships for any resource - shows what it depends on (upstream) and what depends on it (downstream).
You: "Show me the lineage for the customers model"
Copilot: Displays full dependency tree with upstream sources and downstream models
You: "What does stg_orders depend on?"
Copilot: Shows upstream dependencies (sources and parent models)
You: "What's downstream from the raw_customers source?"
Copilot: Shows all models that use this source
You: "Where does the revenue model get its data from?"
Copilot: Displays upstream lineage with all source data
You: "Show me everything that uses the country_codes seed"
Copilot: Lists all downstream models that reference this seed
Parameters:
name: Resource name (works for models, sources, seeds, snapshots, tests)
direction: "upstream" (what it depends on), "downstream" (what depends on it), or "both" (default)
depth: Maximum levels to traverse (None for unlimited, 1 for immediate, etc.)
resource_type: Optional - auto-detects if not specified
Returns: Dependency tree with statistics (upstream_count, downstream_count, total_dependencies)
Use cases:
- Understand data flow and relationships
- Explore where resources get their data
- See what depends on specific resources
- Impact analysis before making changes
Analyze the blast radius of changing any resource - shows all downstream dependencies that would be affected.
You: "What's the impact of changing the stg_customers model?"
Copilot: Shows all downstream models, tests, and affected resources
You: "If I modify the raw_orders source, what needs to run?"
Copilot: Lists impacted models grouped by distance with recommended commands
You: "What breaks if I change the country_codes seed?"
Copilot: Shows total impact count and affected resources
You: "How many models depend on this snapshot?"
Copilot: Displays impact statistics and dependency count
Parameters:
name: Resource name (works for models, sources, seeds, snapshots, tests)
resource_type: Optional - auto-detects if not specified
Returns:
- Affected resources grouped by distance from the changed resource
- Count of affected tests and other resources
- Total impact statistics
- Context-aware recommended dbt commands (e.g., dbt run -s stg_customers+)
- Impact level message (No/Low/Medium/High)
Use cases:
- Before refactoring: understand blast radius
- Planning incremental rollouts
- Estimating rebuild time after changes
- Risk assessment for modifications
Execute SQL queries against your database using dbt's ref() and source() functions.
You: "Show me 10 rows from the customers model"
Copilot: Executes SELECT * FROM {{ ref('customers') }} LIMIT 10
You: "Count the orders in the staging table"
Copilot: Runs SELECT COUNT(*) and returns the result
You: "What's the schema of stg_payments?"
Copilot: Queries column information and displays the schema
You: "Query the raw orders source and show me recent records"
Copilot: Uses {{ source() }} function to query and filter results
Parameters:
sql: SQL query with optional {{ ref() }} and {{ source() }} functions
limit: Maximum rows to return (optional, defaults to unlimited)
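For example, a query of this shape works directly (the model and source names here are hypothetical):

```sql
-- {{ ref() }} and {{ source() }} are resolved to real table names
-- before execution, so no manual schema lookups are needed
select o.order_id, o.status, c.customer_name
from {{ ref('stg_orders') }} as o
join {{ source('jaffle_shop', 'customers') }} as c
    on o.customer_id = c.id
limit 10
```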
Run dbt models with smart selection for fast development.
You: "Run only the models I changed"
Copilot: Detects modified models and executes dbt run with selection
You: "Run my changes and everything downstream"
Copilot: Runs modified models plus all downstream dependencies
You: "Run the customers model"
Copilot: Executes dbt run --select customers
You: "Build all mart models with a full refresh"
Copilot: Runs dbt run --select marts.* --full-refresh
You: "Run modified models and check for schema changes"
Copilot: Runs models and detects added/removed columns
Smart selection modes:
modified_only: Run only models that changed
modified_downstream: Run changed models + everything downstream
Other parameters:
select: Model selector (e.g., "customers", "tag:mart")
exclude: Exclude models
full_refresh: Force full refresh for incremental models
fail_fast: Stop on first failure
check_schema_changes: Detect column additions/removals
Schema Change Detection: When enabled, detects added or removed columns and recommends running downstream models to propagate changes.
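As a sketch of what gets flagged (hypothetical model), a column addition like this would be detected, with a recommendation to rerun downstream models:

```sql
-- models/orders.sql, before
select order_id, status
from {{ ref('stg_orders') }}

-- after: order_total added; models selecting from orders may need a rerun
select order_id, status, order_total
from {{ ref('stg_orders') }}
```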
Run dbt tests with smart selection.
You: "Test only the models I changed"
Copilot: Runs tests for modified models only
You: "Run tests for my changes and downstream models"
Copilot: Tests modified models and everything affected downstream
You: "Test the customers model"
Copilot: Executes dbt test --select customers
You: "Run all tests for staging models"
Copilot: Runs dbt test --select staging.*
Parameters:
modified_only: Test only changed models
modified_downstream: Test changed models + downstream
select: Test selector (e.g., "customers", "tag:mart")
exclude: Exclude tests
fail_fast: Stop on first failure
Run models and tests together in dependency order (most efficient approach).
You: "Build my changes and everything downstream"
Copilot: Runs dbt build with modified models and dependencies
You: "Run and test only what I modified"
Copilot: Executes dbt build on changed models only
You: "Build the entire mart layer with tests"
Copilot: Runs dbt build --select marts.* with all tests
Load seed data (CSV files) from seeds/ directory into database tables.
You: "Load all seed data"
Copilot: Runs dbt seed and loads all CSV files
You: "Load only the seeds I changed"
Copilot: Detects modified seed files and loads them
You: "Reload the raw_customers seed file"
Copilot: Executes dbt seed --select raw_customers --full-refresh
You: "Show me what's in the country_codes seed"
Copilot: Displays preview of loaded seed data
Seeds are typically used for reference data like country codes, product categories, etc.
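For example, a model might enrich data through a seed lookup (hypothetical names):

```sql
-- country_codes is a CSV in seeds/, loaded as a table by dbt seed
select o.order_id, o.amount, c.country_name
from {{ ref('stg_orders') }} as o
left join {{ ref('country_codes') }} as c
    on o.country_code = c.code
```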
Smart selection modes:
modified_only: Load only seeds that changed
modified_downstream: Load changed seeds + downstream dependencies
Other parameters:
select: Seed selector (e.g., "raw_customers", "tag:lookup")
exclude: Exclude seeds
full_refresh: Truncate and reload seed tables
show: Show preview of loaded data
Important: Change detection works via file hash:
- Seeds < 1 MiB: Content changes detected ✅
- Seeds ≥ 1 MiB: Only file path changes detected ⚠️
For large seeds, use manual selection or run all seeds.
Execute dbt snapshots to capture slowly changing dimensions (SCD Type 2).
You: "Run all snapshots"
Copilot: Executes dbt snapshot for all snapshot models
You: "Execute the customer_history snapshot"
Copilot: Runs dbt snapshot --select customer_history
You: "Run daily snapshots"
Copilot: Executes snapshots tagged with 'daily'
Snapshots track historical changes by recording when records were first seen, when they changed, and their state at each point in time.
Parameters:
select: Snapshot selector (e.g., "customer_history", "tag:daily")
exclude: Exclude snapshots
Note: Snapshots are time-based and should be run on a schedule (e.g., daily/hourly), not during interactive development. They do not support smart selection.
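For reference, a minimal SQL-defined snapshot looks like this (hypothetical names; recent dbt versions also support YAML-defined snapshots):

```sql
-- snapshots/customer_history.sql
{% snapshot customer_history %}
{{
    config(
        unique_key='customer_id',
        strategy='timestamp',
        updated_at='updated_at'
    )
}}
-- each run records validity windows for changed rows (SCD Type 2)
select * from {{ source('jaffle_shop', 'customers') }}
{% endsnapshot %}
```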
Install dbt packages defined in packages.yml, enabling an interactive package management workflow.
You: "I need to use dbt_utils macros"
Copilot: Checks if installed, adds to packages.yml, runs install_deps()
You: "Install the packages defined in packages.yml"
Copilot: Executes dbt deps and shows installed packages
You: "Add dbt_utils and install it"
Copilot: Edits packages.yml, runs install_deps(), ready to use macros
This tool enables a complete workflow where Copilot can:
- Suggest using a dbt package (e.g., dbt_utils)
- Edit packages.yml to add the package
- Run install_deps() to install it
- Write code that uses the package's macros
All without breaking conversation flow.
Returns: Installation status and list of installed packages
Package Discovery:
Use list_resources(type="macro") to see which packages are already installed.
Macros follow the pattern macro.{package_name}.{macro_name}.
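For example, once dbt_utils is installed, a model can call its macros directly in SQL (a sketch; generate_surrogate_key is an actual dbt_utils macro, the column names are hypothetical):

```sql
-- appears in the manifest as macro.dbt_utils.generate_surrogate_key
select
    {{ dbt_utils.generate_surrogate_key(['order_id', 'order_date']) }} as order_key,
    *
from {{ ref('stg_orders') }}
```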
Note: This is an interactive development tool (like run_models/test_models), not infrastructure automation. It enables Copilot to complete its own recommendations mid-conversation.
Fast iteration with smart selection - just describe what you want:
You: "Run only what I changed"
Copilot: Detects modified models and runs them
You: "Run my changes and test everything downstream"
Copilot: Runs modified models + downstream dependencies, then tests
You: "Build my modified models with tests"
Copilot: Executes dbt build with smart selection
The first run establishes a baseline state automatically. Subsequent runs detect changes and run only what's needed.
Before-and-After Example:
Traditional workflow:
dbt run --select customers+
dbt test --select customers+
With dbt-core-mcp:
You: "I modified the customers model, run it and test everything affected"
Copilot: Handles everything - runs, tests, and reports results
This server executes dbt commands in your project's Python environment using a bridge execution pattern:
- Zero dbt Dependencies: MCP server requires NO dbt-core, NO adapters - just Python utilities
- Environment Detection: Automatically finds your Python environment (uv, poetry, venv, conda, etc.)
- Bridge Execution: Builds Python scripts as strings and executes them in YOUR environment
- Uses YOUR dbt: Runs with YOUR dbt-core version, YOUR adapters, YOUR configuration
- No Conflicts: Can't have version conflicts when we don't have dbt dependencies!
- Concurrency Safety: Detects and waits for existing dbt processes to prevent database lock conflicts
The server reads dbt's manifest.json for metadata and uses dbt show --inline (executed in YOUR environment) for SQL query execution with full Jinja templating support.
In practice:
Your project: dbt-core 1.10.13 + dbt-duckdb
Our server: mcp, fastmcp, pydantic, pyyaml, psutil (no dbt!)
Result: Perfect compatibility - we detect your environment and run YOUR dbt
No configuration needed - it just works with your existing dbt setup, any version, any adapter.
Want to help make this better? The best contribution you can make is actually using it - your feedback and bug reports are what really drive improvements.
Of course, code contributions are welcome too! Check out CONTRIBUTING.md for development setup and guidelines. But seriously, just using it and telling us what works (or doesn't) is incredibly valuable.
MIT License - see LICENSE file for details.
Niclas Olofsson - GitHub