Improved ergonomics for execution dependencies in assets - We introduced a set of APIs that simplify working with Dagster when you don't use the I/O manager system to handle data passed between assets (see the sketch after this list). Existing I/O manager workflows are not affected.
AssetDep type allows you to specify upstream dependencies with partition mappings when using the deps parameter of @asset and AssetSpec.
MaterializeResult can be optionally returned from an asset to report metadata about the asset when the asset handles any storage requirements within the function body and does not use an I/O manager.
AssetSpec has been added as a new way to declare the assets produced by @multi_asset. When using AssetSpec, the multi_asset does not need to return any values to be stored by the I/O manager. Instead, the multi_asset should handle any storage requirements in the body of the function.
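A minimal combined sketch of these APIs, assuming a hypothetical daily-partitioned upstream asset named "events" and placeholder asset keys:

```python
from dagster import (
    AssetDep,
    AssetSpec,
    DailyPartitionsDefinition,
    MaterializeResult,
    TimeWindowPartitionMapping,
    asset,
    multi_asset,
)

daily = DailyPartitionsDefinition(start_date="2023-01-01")


@asset(
    partitions_def=daily,
    deps=[
        # Depend on the previous day's partition of the hypothetical upstream asset "events".
        AssetDep(
            "events",
            partition_mapping=TimeWindowPartitionMapping(start_offset=-1, end_offset=-1),
        )
    ],
)
def events_summary(context):
    # Write the summary to storage yourself, then report metadata back to Dagster.
    return MaterializeResult(metadata={"partition": context.partition_key})


@multi_asset(specs=[AssetSpec("orders"), AssetSpec("customers")])
def orders_and_customers():
    # Handle storage inside the function body; nothing is returned for an I/O manager to store.
    ...
```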
Asset checks (experimental) - You can now define, execute, and monitor data quality checks in Dagster [docs].
The @asset_check decorator, as well as the check_specs argument to @asset and @multi_asset, enables defining asset checks.
Materializing assets from the UI will default to executing their asset checks. You can also execute individual checks.
When viewing an asset in the asset graph or the asset details page, you can see whether its checks have passed, failed, or haven’t run successfully.
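A minimal sketch of a standalone check, assuming a placeholder asset and placeholder validation logic (the row count below stands in for a real query):

```python
from dagster import AssetCheckResult, Definitions, asset, asset_check


@asset
def my_table():
    # Placeholder asset; write the table wherever you like.
    ...


@asset_check(asset=my_table)
def my_table_has_rows():
    row_count = 123  # placeholder for a real query against the table
    return AssetCheckResult(passed=row_count > 0, metadata={"row_count": row_count})


defs = Definitions(assets=[my_table], asset_checks=[my_table_has_rows])
```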
Auto materialize customization (experimental) - AutoMaterializePolicies can now be customized [docs].
All policies are composed of a set of AutoMaterializeRules, which determine whether an asset should be materialized or skipped.
To modify the default behavior, rules can be added to or removed from a policy to change the conditions under which assets will be materialized.
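For example, a sketch that starts from the built-in eager policy and removes one of its default skip rules (the asset is a placeholder):

```python
from dagster import AutoMaterializePolicy, AutoMaterializeRule, asset

# Drop the rule that skips materialization when a parent partition is missing,
# so this asset still materializes in that situation.
policy = AutoMaterializePolicy.eager().without_rules(
    AutoMaterializeRule.skip_on_parent_missing(),
)


@asset(auto_materialize_policy=policy)
def my_asset():
    ...
```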
Dagster Pipes is a new library that implements a protocol for launching compute into external execution environments and consuming streaming logs and Dagster metadata from those environments. See https://github.com/dagster-io/dagster/discussions/16319 for more details on the motivation and vision behind Pipes.
Out-of-the-box integrations
Clients: local subprocess, Docker containers, Kubernetes, and Databricks
Dagster Pipes is composable with existing launching infrastructure via open_pipes_session, so you can augment existing invocations rather than replacing them wholesale.
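A minimal sketch using the out-of-the-box subprocess client; the script path and reported metadata are placeholders:

```python
# Orchestration side: launch an external script with the subprocess Pipes client.
from dagster import AssetExecutionContext, Definitions, PipesSubprocessClient, asset


@asset
def external_asset(
    context: AssetExecutionContext, pipes_subprocess_client: PipesSubprocessClient
):
    return pipes_subprocess_client.run(
        command=["python", "external_script.py"],  # hypothetical script
        context=context,
    ).get_materialize_result()


defs = Definitions(
    assets=[external_asset],
    resources={"pipes_subprocess_client": PipesSubprocessClient()},
)
```

```python
# External side (external_script.py): report logs and metadata back to Dagster.
from dagster_pipes import open_dagster_pipes

with open_dagster_pipes() as pipes:
    pipes.log.info("doing work in the external process")
    pipes.report_asset_materialization(metadata={"num_rows": 123})
```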
[ui] Global Asset Graph performance improvement - the first time you load the graph it will be cached to disk, and subsequent loads of the graph should be instant.
AssetExecutionContext is now a subclass of OpExecutionContext rather than a type alias. Code that passes an OpExecutionContext to a function annotated with AssetExecutionContext will cause type checking errors. To migrate, update type hints to respect the new subclassing.
AssetExecutionContext cannot be used as the type annotation for @ops run in @jobs. To migrate, update the type hint in @op to OpExecutionContext. @ops that are used in @graph_assets may still use the AssetExecutionContext type hint.
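A sketch of the migration, with placeholder op and asset names:

```python
from dagster import AssetExecutionContext, OpExecutionContext, asset, job, op


# Ops run inside plain @jobs must now annotate their context as OpExecutionContext.
@op
def my_op(context: OpExecutionContext):
    context.log.info("running inside a job")


@job
def my_job():
    my_op()


# Assets (and ops used in @graph_assets) may still use AssetExecutionContext.
@asset
def my_asset(context: AssetExecutionContext):
    context.log.info("running as an asset")
```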
[ui] We have removed the option to launch an asset backfill as a single run. To achieve this behavior, add backfill_policy=BackfillPolicy.single_run() to your assets.
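For example, a sketch with a hypothetical daily-partitioned asset:

```python
from dagster import BackfillPolicy, DailyPartitionsDefinition, asset


@asset(
    partitions_def=DailyPartitionsDefinition(start_date="2023-01-01"),
    backfill_policy=BackfillPolicy.single_run(),
)
def daily_events():
    # Backfills over a partition range of this asset will execute as a single run.
    ...
```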
has_dynamic_partition implementation has been optimized. Thanks @edvardlindelof!
[dagster-airbyte] Added an optional stream_to_asset_map argument to build_airbyte_assets to support the Airbyte prefix setting with special characters. Thanks @chollinger93!
[dagster-k8s] Moved “labels” to a lower precedence. Thanks @jrouly!
[dagster-k8s] Improved handling of failed jobs. Thanks @Milias!
[dagster-databricks] Fixed an issue where DatabricksPysparkStepLauncher fails to get logs when job_run doesn’t have cluster_id at root level. Thanks @PadenZach!
New dagster-insights sub-module - We have released an experimental dagster_cloud.dagster_insights module that contains utilities for capturing and submitting external metrics about data operations to Dagster Cloud via an API. Dagster Cloud Insights is a soon-to-be-released feature that improves visibility into usage and cost metrics, such as run duration and Snowflake credits, in the Cloud UI.
[dagster-dbt] DbtCliResource now enforces that the current installed version of dbt-core is at least version 1.4.0.
[dagster-dbt] DbtCliResource now properly respects DBT_TARGET_PATH if it is set by the user. Artifacts from dbt invocations using DbtCliResource will now be placed in unique subdirectories of DBT_TARGET_PATH.
When executing a backfill that targets a range of time partitions in a single run, the partition_time_window attribute on OpExecutionContext and AssetExecutionContext now returns the time range, instead of raising an error.
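For instance, a sketch of a single-run-backfillable asset (as shown above) reading the full targeted window; the partition start date is a placeholder:

```python
from dagster import AssetExecutionContext, BackfillPolicy, DailyPartitionsDefinition, asset


@asset(
    partitions_def=DailyPartitionsDefinition(start_date="2023-01-01"),
    backfill_policy=BackfillPolicy.single_run(),
)
def events(context: AssetExecutionContext):
    # During a ranged backfill, the window spans the entire targeted partition range.
    window = context.partition_time_window
    context.log.info(f"processing data from {window.start} to {window.end}")
```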
Fixed an issue where the asset backfill page raised a GraphQL error for backfills that targeted different partitions per-asset.
Fixed job_name property on the result object of build_hook_context.
[dagster-ext] report_asset_check method added to ExtContext.
[dagster-ext] ext clients now must use yield from to forward reported materializations and asset check results to Dagster. Results reported from ext that are not yielded will raise an error.
The Dagster UI documentation got an overhaul! We’ve updated all our screenshots and added a number of previously undocumented pages/features, including:
The Overview page, aka the Factory Floor
Job run compute logs
Global asset lineage
Overview > Resources
The Resources documentation has been updated to include additional context about using resources, as well as when to use os.getenv() versus Dagster’s EnvVar.
Information about custom loggers has been moved from the Loggers documentation to its own page, Custom loggers.
[ui] When using the search input within Overview pages, if the viewer’s code locations have not yet fully loaded into the app, a loading spinner will now appear to indicate that search results are pending.
Fixed an asset backfill bug that occasionally caused duplicate runs to be kicked off in response to manual runs of upstream assets.
Fixed an issue where launching a run from the Launchpad that included many assets would sometimes raise an exception when trying to create the tags for the run.
[ui] Fixed a bug where clicking to view a job from a run could lead to an empty page in situations where the viewer’s code locations had not yet loaded in the app.
The deps parameter for @asset and @multi_asset now supports directly passing @multi_asset definitions. If a @multi_asset is passed to deps, dependencies will be created on every asset produced by the @multi_asset.
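A minimal sketch with placeholder asset keys:

```python
from dagster import AssetSpec, asset, multi_asset


@multi_asset(specs=[AssetSpec("orders"), AssetSpec("customers")])
def ingest_tables():
    ...


# Passing the @multi_asset itself creates dependencies on both "orders" and "customers".
@asset(deps=[ingest_tables])
def combined_report():
    ...
```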
Added an optional data migration to convert storage ids to use 64-bit integers instead of 32-bit integers. This will incur some downtime, but may be required for instances that are handling a large number of events. This migration can be invoked using dagster instance migrate --bigint-migration.
[ui] Dagster now allows you to run asset checks individually.
[ui] The run list and run details page now show the asset checks targeted by each run.
[ui] In the runs list, runs launched by schedules or sensors will now have tags that link directly to those schedules or sensors.
[ui] Clicking the "N assets" tag on a run allows you to navigate to the filtered asset graph as well as view the full list of asset keys.
[ui] Schedules, sensors, and observable source assets now appear on the resource “Uses” page.
[dagster-dbt] The DbtCliResource now validates at definition time that its project_dir and profiles_dir arguments are directories that respectively contain a dbt_project.yml and profiles.yml.
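For example, a sketch with hypothetical directory paths:

```python
from dagster_dbt import DbtCliResource

# Raises at definition time if my_dbt_project/ lacks dbt_project.yml or if the
# profiles directory lacks profiles.yml (both paths are hypothetical).
dbt = DbtCliResource(project_dir="my_dbt_project", profiles_dir="my_dbt_project/config")
```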
[dagster-databricks] You can now configure a policy_id for new clusters when using the databricks_pyspark_step_launcher (thanks @zyd14!)
[ui] Added an experimental sidebar to the Asset lineage graph to aid in navigating large graphs. You can enable this feature under user settings.
Fixed an issue where the dagster-webserver command was not indicating which port it was using in the command-line output.
Fixed an issue where the quickstart_gcp example wasn't setting GCP credentials properly when setting up its IOManager.
Fixed an issue where the process output for Dagster run and step containers would repeat each log message twice in JSON format when the process finished.
[ui] Fixed an issue where the config editor failed to load when materializing certain assets.
[auto-materialize] Previously, rematerializing an old partition of an asset which depended on a prior partition of itself would result in a chain of materializations to propagate that change all the way through to the most recent partition of this asset. To prevent these “slow-motion backfills”, this behavior has been updated such that these updates are no longer propagated.
Previously, when importing a dbt project in Cloud, naming the code location “dagster” would cause build failures. This is now disallowed, and an error is surfaced.
[dagster-ext] An initial version of the dagster-ext module along with subprocess, docker, databricks, and k8s pod integrations are now available. Read more at https://github.com/dagster-io/dagster/discussions/16319. Note that the module is temporarily being published to PyPI under dagster-ext-process, but is available in python as import dagster_ext.
[asset checks] Added an ‘execute’ button to run checks without materializing the asset. Currently this is only supported for checks defined with @asset_check or AssetChecksDefinition.
[asset checks] Added a check_specs argument to @graph_multi_asset.
[asset checks] Fixed a bug with checks on @graph_asset that would raise an error about nonexistent checks.