What’s new in Databricks for January 2023

Platform 

  • We have added left and right sidebars to the Databricks notebook. You can now find the notebook’s table of contents in the left-hand sidebar, and you can find comments, MLflow experiments, and the notebook revision history in the right-hand sidebar.
  • Databricks integration with Confluent Schema Registry now supports external schema registry addresses with authentication. This feature is available in the from_avro, to_avro, from_protobuf, and to_protobuf functions.
  • Implicit lateral column aliasing is now supported. You can reuse an expression specified earlier in the same SELECT list. For example, in SELECT 1 AS a, a + 1 AS b, the a in a + 1 resolves to the previously defined 1 AS a.
  • Databricks Terraform provider updated to version 1.9.0
  • Databricks Runtime 12.1 and 12.1 ML are GA
  • Admins can use the new account console home screen to better navigate between Databricks products.
  • Try the new Databricks REST API Explorer documentation experience in beta. The API Explorer includes the latest versions of the Databricks REST API documentation. (https://docs.databricks.com/api-explorer/workspace/clusters)
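The lateral column aliasing described above can be illustrated with a short SQL snippet (the aliases here are illustrative):

```sql
-- Reuse an alias defined earlier in the same SELECT list
SELECT 1 AS a,
       a + 1 AS b,   -- `a` resolves to the lateral alias `1 AS a`
       b * 2 AS c    -- aliases can chain: `b` resolves to `a + 1`
FROM range(1);
-- a: 1, b: 2, c: 4
```

Without lateral column aliasing, each reference would instead have to repeat the full expression (e.g. `(1) + 1 AS b`) or be resolved in a subquery.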

Governance

  • Cluster policies now support limiting the max number of clusters per user

Delta Lake

  • Spark Structured Streaming now works with the deltasharing format on a source Delta Sharing table.
  • Table version using timestamp now supported for Delta Sharing tables in catalogs: You can now use the SQL syntax TIMESTAMP AS OF in SELECT statements to specify the version of a Delta Sharing table that’s mounted in a catalog.
  • Support for WHEN NOT MATCHED BY SOURCE in MERGE INTO: You can now add WHEN NOT MATCHED BY SOURCE clauses to MERGE INTO to update or delete rows in the target table that don’t have matches in the source table based on the merge condition. The new clause is available in SQL, Python, Scala, and Java. See MERGE INTO.
  • A new Delta Live Tables (DLT) onboarding widget guides users who want to create pipelines using data from S3
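As a sketch, time travel on a shared table might look like the following (the catalog, schema, and table names are hypothetical, and the provider must have enabled history sharing for the table):

```sql
-- Read the state of a Delta Sharing table as of a given timestamp
SELECT *
FROM shared_catalog.sales.orders TIMESTAMP AS OF '2023-01-15 00:00:00';
```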
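A minimal sketch of the new MERGE INTO clause, assuming hypothetical `target` and `source` tables that share an `id` key:

```sql
-- Sync `target` with `source`: update matching rows, insert new rows,
-- and delete target rows that no longer exist in the source.
MERGE INTO target t
USING source s
  ON t.id = s.id
WHEN MATCHED THEN
  UPDATE SET t.value = s.value
WHEN NOT MATCHED THEN
  INSERT (id, value) VALUES (s.id, s.value)
WHEN NOT MATCHED BY SOURCE THEN
  DELETE;
```

Before this release, the delete-unmatched-target-rows step required a separate DELETE statement outside the merge.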

Databricks SQL

  • Added H3 geospatial functions to the inline panel reference
  • Added inline references for SQL syntax like CREATE TABLE and OVER

Partner Connect

  • Connect to Privacera using Partner Connect
  • Connect to Sigma using Partner Connect
