website/blog/2022-04-19-dbt-cloud-postman-collection.md (6 additions, 6 deletions)
@@ -19,7 +19,7 @@ is_featured: true
The dbt Cloud API has well-documented endpoints for creating, triggering and managing dbt Cloud jobs. But there are other endpoints that aren’t well documented yet, and they’re extremely useful for end-users. These endpoints exposed by the API enable organizations not only to orchestrate jobs, but to manage their dbt Cloud accounts programmatically. This creates some really interesting capabilities for organizations to scale their dbt Cloud implementations.
-The main goal of this article is to spread awareness of these endpoints as the docs are being built & show you how to use them.
+The main goal of this article is to spread awareness of these endpoints as the docs are being built & show you how to use them.
<!--truncate-->
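To make the "orchestrate jobs" half of this concrete, here is a minimal sketch of triggering a dbt Cloud job run over the v2 API using only the Python standard library. The endpoint path follows the documented "Trigger job run" route; the account ID, job ID, and token below are placeholders you would substitute with your own values.

```python
# Hedged sketch: trigger a dbt Cloud job run via the v2 Administrative API.
# ACCOUNT_ID / JOB_ID / token are placeholders, not real values.
import json
import urllib.request

BASE = "https://cloud.getdbt.com/api/v2"

def job_run_url(account_id: int, job_id: int) -> str:
    """Build the 'trigger job run' endpoint for a given account and job."""
    return f"{BASE}/accounts/{account_id}/jobs/{job_id}/run/"

def trigger_job(account_id: int, job_id: int, token: str,
                cause: str = "Triggered via API") -> dict:
    """POST to the run endpoint; 'cause' is a required field in the payload."""
    req = urllib.request.Request(
        job_run_url(account_id, job_id),
        data=json.dumps({"cause": cause}).encode(),
        headers={
            "Authorization": f"Token {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The same `Authorization: Token <token>` header pattern applies to the less-documented account-management endpoints discussed in the rest of this post.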
@@ -45,7 +45,7 @@ Beyond the day-to-day process of managing their dbt Cloud accounts, many organiz
*Below this you’ll find a series of example requests - use these to guide you or [check out the Postman Collection](https://dbtlabs.postman.co/workspace/Team-Workspace~520c7ac4-3895-4779-8bc3-9a11b5287c1c/request/12491709-23cd2368-aa58-4c9a-8f2d-e8d56abb6b1d) to try it out yourself.*
-## Appendix
+## Appendix
### Examples of how to use the Postman Collection
@@ -55,7 +55,7 @@ Let’s run through some examples on how to make good use of this Postman Collec
One common question we hear from customers is “How can we migrate resources from one dbt Cloud project to another?” Often, they’ll create a development project, in which users have access to the UI and can manually make changes, and then migrate selected resources from the development project to a production project once things are ready.
-There are several reasons one might want to do this, including:
+There are several reasons one might want to do this, including:
- Probably the most common is separating dev/test/prod environments across dbt Cloud projects to enable teams to build manually in a development project, and then automatically migrate those environments & jobs to a production project.
- Building “starter projects” they can deploy as templates for new teams onboarding to dbt from a learning standpoint.
website/blog/2022-05-17-stakeholder-friendly-model-names.md (7 additions, 7 deletions)
@@ -29,7 +29,7 @@ In this article, we’ll take a deeper look at why model naming conventions are
>“[Data folks], what we [create in the database]… echoes in eternity.” -Max(imus, Gladiator)
-Analytics Engineers are often centrally located in the company, sandwiched between data analysts and data engineers. This means everything AEs create might be read and need to be understood by both an analytics or customer-facing team and by teams who spend most of their time in code and the database. Depending on the audience, the scope of access differs, which means the user experience and context changes. Let’s elaborate on what that experience might look like by breaking end-users into two buckets:
+Analytics Engineers are often centrally located in the company, sandwiched between data analysts and data engineers. This means everything AEs create might be read and need to be understood by both an analytics or customer-facing team and by teams who spend most of their time in code and the database. Depending on the audience, the scope of access differs, which means the user experience and context changes. Let’s elaborate on what that experience might look like by breaking end-users into two buckets:
- Analysts / BI users
- Analytics engineers / Data engineers
@@ -49,21 +49,21 @@ Here we have drag and drop functionality and a skin over top of the underlying `
**How model names can make this painful:**
The end users might not even know what tables the data refers to, as potentially everything is joined by the system and they don’t need to write their own queries. If model names are chosen poorly, there is a good chance that the BI layer on top of the database tables has been renamed to something more useful for the analysts. This adds an extra step of mental complexity in tracing the <Term id="data-lineage">lineage</Term> from data model to BI.
-#### Read only access to the dbt Cloud IDE docs
+#### Read only access to the dbt Cloud IDE docs
If Analysts want more context via documentation, they may traverse back to the dbt layer and check out the data models in either the context of the Project or Database. In the Project view, they will see the data models in the folder hierarchy present in your project’s repository. In the Database view you will see the output of the data models as present in your database, ie. `database / schema / object`.

**How model names can make this painful:**
-For the Project view, generally abstracted department or organizational structures as folder names presupposes the reader/engineer knows what is contained within the folder beforehand or what that department actually does, or promotes haphazard clicking to open folders to see what is within. Organizing the final outputs by business unit or analytics function is great for end users but doesn't accurately represent all the sources and references that had to come together to build this output, as they often live in another folder.
+For the Project view, generally abstracted department or organizational structures as folder names presupposes the reader/engineer knows what is contained within the folder beforehand or what that department actually does, or promotes haphazard clicking to open folders to see what is within. Organizing the final outputs by business unit or analytics function is great for end users but doesn't accurately represent all the sources and references that had to come together to build this output, as they often live in another folder.
For the Database view, pray your team has been declaring a logical schema bucketing, or a logical model naming convention, otherwise you will have a long, alphabetized list of database objects to scroll through, where staging, intermediate, and final output models are all intermixed. Clicking into a data model and viewing the documentation is helpful, but you would need to check out the DAG to see where the model lives in the overall flow.
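A "logical model naming convention" can be made mechanical. The sketch below assumes the layer prefixes popularized by dbt's style guide (`stg_`, `int_`, `dim_`/`fct_`) — an illustrative choice, not the only workable scheme — and shows how a prefix alone lets a reader (or a script) bucket an alphabetized object list by layer:

```python
# Illustrative sketch: classify database objects by layer using name prefixes.
# The stg_/int_/dim_/fct_ scheme is an assumption borrowed from dbt's style
# guide conventions; any consistent prefix set works the same way.
LAYER_PREFIXES = {
    "stg_": "staging",       # light cleanup of raw sources
    "int_": "intermediate",  # building blocks, not for end users
    "dim_": "mart",          # final dimensional outputs
    "fct_": "mart",          # final fact outputs
}

def layer_of(model_name: str) -> str:
    """Return the layer a model belongs to, or 'unknown' if unprefixed."""
    for prefix, layer in LAYER_PREFIXES.items():
        if model_name.startswith(prefix):
            return layer
    return "unknown"
```

With a convention like this, an end user scrolling the Database view can skip everything that isn't a mart without opening a single object.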
#### The full dropdown list in their data warehouse.
If they have access to Worksheets, SQL runner, or another way to write ad hoc sql queries, then they will have access to the data models as present in your database, ie. `database / schema / object`, but with less documentation attached, and more proclivity towards querying tables to check out their contents, which costs time and money.

66
+

67
**How model names can make this painful:**
Without proper naming conventions, you will encounter `analytics.order`, `analytics.orders`, `analytics.orders_new` and not know which one is which, so you will open up a scratch statement tab and attempt to figure out which is correct:
@@ -73,9 +73,9 @@ Without proper naming conventions, you will encounter `analytics.order`, `analyt
```sql
-- select * from analytics.orders limit 10
select * from analytics.orders_new limit 10
```
-Hopefully you get it right via sampling queries, or eventually find out there is a true source of truth defined in a totally separate area: `core.dim_orders`.
+Hopefully you get it right via sampling queries, or eventually find out there is a true source of truth defined in a totally separate area: `core.dim_orders`.
-The problem here is the only information you can use to determine what data is within an object or the purpose of the object is within the schema and model name.
+The problem here is the only information you can use to determine what data is within an object or the purpose of the object is within the schema and model name.
### The engineer’s user experience
@@ -98,7 +98,7 @@ There is not much worse than spending all week developing on a task, submitting
This is largely the same as the Analyst experience above, except they created the data models or are aware of their etymologies. They are likely more comfortable writing ad hoc queries, but also have the ability to make changes, which adds a layer of thought processing when working.
**How model names can make this painful:**
-It takes time to become a subject matter expert in the database. You will need to know which schema a subject lives in, what tables are the source of truth and/or output models, versus experiments, outdated objects, or building blocks used along the way. Working within this context, engineers know the history and company lore behind why a table was named that way or how its purpose may differ slightly from its name, but they also have the ability to make changes.
+It takes time to become a subject matter expert in the database. You will need to know which schema a subject lives in, what tables are the source of truth and/or output models, versus experiments, outdated objects, or building blocks used along the way. Working within this context, engineers know the history and company lore behind why a table was named that way or how its purpose may differ slightly from its name, but they also have the ability to make changes.
Change management is hard; how many places would you need to update, rename, re-document, and retest to fix a poor naming choice from long ago? It is a daunting position, which can create internal strife when constrained for time over whether we should continually revamp and refactor for maintainability or focus on building new models in the same pattern as before.