What’s new in Databricks for November 2022


  • You can filter by job name with the List all jobs operation (GET /jobs/list) in the Jobs API.
  • The Databricks Terraform provider has been updated to version 1.6.4, which adds the warehouse_type parameter to the databricks_sql_endpoint resource to support additional Databricks SQL warehouse types, among other changes.
  • When you click the Search field in the top bar of your Databricks workspace, your recent files, notebooks, queries, alerts, and dashboards are now listed under Recents, sorted by the last opened date. Recent objects in the list are filtered to match your search criteria. 
  • Sparse checkout support in Repos lets you work with only a subset of the remote repository's directories in Databricks. This is useful if you are working with a monorepo or if your repo's size exceeds the limits Databricks supports.
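The new warehouse_type parameter on the databricks_sql_endpoint resource can be sketched in Terraform roughly as follows; the resource and warehouse names and the cluster size are hypothetical, not values from this announcement:

```hcl
resource "databricks_sql_endpoint" "reporting" {
  name           = "reporting"   # hypothetical warehouse name
  cluster_size   = "Small"
  warehouse_type = "PRO"         # new parameter added in provider 1.6.4
}
```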
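For example, the new job-name filter can be exercised with a short Python sketch against the Jobs API REST endpoint. This is a minimal sketch, not an official client: the workspace host, token environment variables, and the job name "nightly-etl" are all placeholders, and the 2.1 API version is assumed.

```python
import json
import os
import urllib.parse
import urllib.request

def build_list_jobs_url(host: str, job_name: str) -> str:
    """Build the URL for the Jobs API 'List all jobs' call
    (GET /api/2.1/jobs/list), filtered by an exact job name."""
    query = urllib.parse.urlencode({"name": job_name, "limit": 25})
    return f"{host}/api/2.1/jobs/list?{query}"

# Usage sketch; DATABRICKS_HOST and DATABRICKS_TOKEN are placeholder
# environment variables, not part of the announcement above.
if __name__ == "__main__" and "DATABRICKS_HOST" in os.environ:
    req = urllib.request.Request(
        build_list_jobs_url(os.environ["DATABRICKS_HOST"], "nightly-etl"),
        headers={"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"},
    )
    with urllib.request.urlopen(req) as resp:
        for job in json.load(resp).get("jobs", []):
            print(job["job_id"], job["settings"]["name"])
```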

Unity Catalog

  • You can now search for tables registered in Unity Catalog using the search dialog at the top of every page in the Databricks workspace UI. You can search on table names, table comments, column names, and column comments.
  • Unity Catalog lets you specify a cloud storage location for managed tables at the catalog and schema levels using Data Explorer. Support for doing the same with SQL statements is coming soon.
  • You can connect select partner solutions to data managed by Unity Catalog.
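Since the announcement says SQL support for managed storage locations is coming, the statement is expected to look roughly like the following sketch; the schema name and the storage path are hypothetical:

```sql
-- Expected shape of the upcoming SQL support (names and path are hypothetical):
CREATE SCHEMA sales_schema
MANAGED LOCATION 's3://my-bucket/sales-schema';
```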

Delta Lake

  • You can now easily create an example pipeline with the Create pipeline from sample data option in the Delta Live Tables (DLT) user interface.

Databricks SQL

  • You can now create a dashboard filter that works across multiple queries at the same time. In Edit dashboard mode, choose Add, then Filter, then New Dashboard Filter.
  • Autocomplete now supports the CREATE MATERIALIZED VIEW statement.
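A statement of the kind the autocomplete now recognizes looks like this minimal sketch; the view, table, and column names are hypothetical:

```sql
-- Hypothetical names throughout:
CREATE MATERIALIZED VIEW daily_orders AS
SELECT order_date, count(*) AS order_count
FROM orders
GROUP BY order_date;
```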
