Oct 20, 2024

Managing Files in Unity Catalog Volumes in Databricks: A Comprehensive Guide

Databricks has become a go-to platform for data engineers, analysts, and data scientists to work with big data and machine learning. One of its powerful features is the Unity Catalog, which helps manage and organize data at scale. Recently, Databricks introduced Volumes, an efficient way to manage non-tabular data in cloud object storage, like CSV, JSON, images, or audio files. In this guide, we will explore how to manage files in Unity Catalog volumes with different tools, programming languages, and the Databricks interface. The goal is to provide easy-to-understand, actionable steps for working with volumes, regardless of your familiarity with Databricks.

Why Use Volumes in Databricks?

Volumes in Databricks are designed to store and manage non-tabular data. Examples include:

  • Data files for ingestion like CSV, JSON, and Parquet.

  • Files such as images, text, and audio for machine learning and AI workloads.

  • CSV or JSON files generated by Databricks for integration with external systems.

Databricks recommends using volumes to organize and manage non-tabular data in cloud storage. Rather than interacting with cloud storage paths directly, you work with data through a secure, centralized location that is governed by Unity Catalog. You can also use volumes to store libraries, initialization scripts, and build artifacts.

Working with Files in Unity Catalog Volumes

1. Uploading Files to a Volume

Uploading files to a volume is straightforward. The Catalog Explorer in Databricks offers a user-friendly interface for performing common file operations, such as uploading, downloading, and deleting files.

Steps to upload files:

  1. In your Databricks workspace, click on the Catalog icon.

  2. Search for the volume where you want to upload files.

  3. Click the “Upload to this volume” button to open the upload dialog.

  4. Select the file you want to upload (maximum file size: 5 GB).

Uploading files in this manner ensures the data is securely managed under Unity Catalog governance.
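The UI is convenient for one-off uploads, but the same operation can be scripted from a notebook. Below is a minimal sketch that copies a file from the driver's local disk into a volume using dbutils.fs; the local path and file name are placeholders you would replace with your own.

# Sketch: upload by copying a local driver file into a volume (paths are placeholders)
dbutils.fs.cp(
    "file:/tmp/data.csv",
    "/Volumes/catalog_name/schema_name/volume_name/data.csv"
)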

2. Downloading Files from a Volume

Downloading files from a Unity Catalog volume is just as simple. Follow these steps to download files:

  1. Select one or more files from the volume.

  2. Click the “Download” button to download them to your local machine.

3. Deleting Files from a Volume

To keep your data organized, you may occasionally need to delete files. Here’s how you can delete files from a Unity Catalog volume:

  1. Select one or more files from the volume.

  2. Click the “Delete” button.

  3. Confirm the action by clicking “Delete” in the dialog box that appears.

This ensures that unnecessary files are removed, optimizing your storage and improving manageability.
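The same cleanup can be scripted from a notebook. As a rough sketch (the file name is a placeholder), dbutils.fs.rm removes a single file from a volume:

# Sketch: deleting a single file from a volume (path is a placeholder)
dbutils.fs.rm("/Volumes/catalog_name/schema_name/volume_name/old_data.csv")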

4. Creating and Deleting Directories in Volumes

Directories provide a way to logically group files. Creating a directory in a volume allows you to better organize files, especially when handling multiple datasets or different types of files.

To create a directory:

  1. Click the kebab menu (three vertical dots) next to the volume name.

  2. Select “Create directory.”

  3. Enter a directory name and click “Create.”

To delete a directory:

  1. Select one or more directories from the volume.

  2. Click “Delete” and confirm the action in the dialog box.

These file and directory management tasks help streamline workflows, whether you are dealing with small or large datasets.
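Directory operations can also be performed programmatically. The sketch below uses the Databricks SDK for Python (introduced later in this article) and assumes the databricks-sdk package is installed, default authentication is configured, and the directory name is a placeholder.

# Sketch: creating and deleting a volume directory with the Databricks SDK for Python
# (assumes databricks-sdk is installed and authentication is configured)
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
w.files.create_directory("/Volumes/catalog_name/schema_name/volume_name/new_directory")
w.files.delete_directory("/Volumes/catalog_name/schema_name/volume_name/new_directory")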

Programmatic Access to Files in Volumes

One of the great features of Databricks is its flexibility in letting you interact with volumes using different programming languages, including Python, SQL, and APIs. Below are some examples.

1. Accessing Files in Volumes with Python

Databricks supports Python, allowing you to read and write files in volumes programmatically. Here’s an example of how you can read a CSV file stored in a volume using Python:

# Example of reading a CSV file from a volume using Python
df = (spark.read
          .format("csv")
          .load("/Volumes/catalog_name/schema_name/volume_name/data.csv"))
display(df)

This code loads a CSV file from the volume into a Spark DataFrame and then displays the data in the Databricks notebook.
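Writing works the same way: you can point a DataFrame writer at a volume path. Here is a minimal sketch; the output directory name is a placeholder, and Spark will write a directory of part files at that path.

# Sketch: writing a DataFrame back to a volume as CSV (output path is a placeholder)
(df.write
   .format("csv")
   .mode("overwrite")
   .save("/Volumes/catalog_name/schema_name/volume_name/output_csv"))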

2. Accessing Files in Volumes with SQL

For those who prefer SQL, you can also interact with volumes using SQL commands. Below is an example of reading a file stored in a volume with Spark SQL:

-- Example of reading a CSV file from a volume using SQL 

SELECT * FROM csv.`/Volumes/catalog_name/schema_name/volume_name/data.csv`;

This method is especially useful if you're already comfortable working with SQL queries and tables in Databricks.
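You can also run the same path-based query from a Python notebook through spark.sql. This is only a sketch and assumes the file exists at the path used in the example above.

# Sketch: running the same volume-path query from Python via spark.sql
df = spark.sql(
    "SELECT * FROM csv.`/Volumes/catalog_name/schema_name/volume_name/data.csv`"
)
display(df)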

3. Managing Files with Databricks Utilities

Databricks also offers dbutils.fs, a utility for file system commands. You can use this utility to perform actions like creating directories, listing files, or moving files. Below is an example of using dbutils to create a new directory in a volume:

# Example of creating a directory in a volume 

dbutils.fs.mkdirs("/Volumes/catalog_name/schema_name/volume_name/new_directory")
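A few other dbutils.fs commands are handy for day-to-day housekeeping. As a sketch (paths are placeholders), you can list a volume's contents or copy a file into a subdirectory:

# Sketch: listing and copying files in a volume with dbutils.fs (paths are placeholders)
files = dbutils.fs.ls("/Volumes/catalog_name/schema_name/volume_name")
for f in files:
    print(f.path, f.size)

dbutils.fs.cp(
    "/Volumes/catalog_name/schema_name/volume_name/data.csv",
    "/Volumes/catalog_name/schema_name/volume_name/new_directory/data.csv"
)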

4. REST API for File Management

You can also use Databricks' REST API to manage files in volumes. Here’s an example of using a simple curl command to list the contents of a folder in a volume:

# Example of listing the contents of a folder in a volume using REST API
curl --request GET "https://${DATABRICKS_HOST}/api/2.0/fs/directories/Volumes/main/default/my-volume/my-folder" \
  --header "Authorization: Bearer ${DATABRICKS_TOKEN}" | jq

This command will return a JSON object containing the files and directories in the specified folder.
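If you would rather stay in Python than shell out to curl, a requests-based sketch of the same call looks like the following. It assumes DATABRICKS_HOST and DATABRICKS_TOKEN are set as environment variables, and the volume path is the same placeholder used in the curl example.

# Sketch: listing a volume directory through the Files API with Python requests
# (assumes DATABRICKS_HOST and DATABRICKS_TOKEN are set in the environment)
import os
import requests

host = os.environ["DATABRICKS_HOST"]
token = os.environ["DATABRICKS_TOKEN"]

resp = requests.get(
    f"https://{host}/api/2.0/fs/directories/Volumes/main/default/my-volume/my-folder",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
print(resp.json())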

Managing Files with External Tools

You don’t have to work entirely within the Databricks environment to manage files in volumes. Databricks supports external tools, such as SDKs and command-line tools, for managing files in volumes.

1. Using Databricks CLI

The Databricks CLI allows you to manage files in volumes from your local environment. The key command for interacting with files is databricks fs. Below is an example of how you can list files using the CLI:

# Example of listing files in a volume using the Databricks CLI 

databricks fs ls dbfs:/Volumes/catalog_name/schema_name/volume_name

The Databricks CLI makes it easier to automate file management tasks.

2. Using SDKs

Databricks provides SDKs for Python, Java, and Go, enabling you to manage files in volumes programmatically. Each SDK includes methods for listing, uploading, and deleting files in volumes.
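For example, with the Databricks SDK for Python, a minimal sketch of those operations might look like this. It assumes the databricks-sdk package is installed, default authentication is configured (for example via environment variables or a config profile), and the volume path and file name are placeholders.

# Sketch: listing, uploading, and deleting volume files with the Databricks SDK for Python
import io
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
volume_path = "/Volumes/catalog_name/schema_name/volume_name"

# List the contents of the volume
for entry in w.files.list_directory_contents(volume_path):
    print(entry.path)

# Upload a small file, then delete it again
w.files.upload(f"{volume_path}/hello.txt", io.BytesIO(b"hello"), overwrite=True)
w.files.delete(f"{volume_path}/hello.txt")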

3. Using SQL Connectors

Databricks also offers SQL connectors for Python, Go, Node.js, and other languages, allowing you to manage files in volumes through SQL commands. These connectors make it easier to integrate Databricks with other platforms.
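As one example, the Databricks SQL Connector for Python exposes PUT, GET, and REMOVE commands for working with volume files. The sketch below assumes the connector is installed, that the connection details are available as environment variables, and that the local and volume paths are placeholders.

# Sketch: managing volume files with the Databricks SQL Connector for Python
# (connection details and paths are placeholders / assumed environment variables)
import os
from databricks import sql

with sql.connect(
    server_hostname=os.environ["DATABRICKS_SERVER_HOSTNAME"],
    http_path=os.environ["DATABRICKS_HTTP_PATH"],
    access_token=os.environ["DATABRICKS_TOKEN"],
    staging_allowed_local_path="/tmp",
) as conn:
    with conn.cursor() as cursor:
        # Upload a local file into the volume
        cursor.execute(
            "PUT '/tmp/data.csv' INTO "
            "'/Volumes/catalog_name/schema_name/volume_name/data.csv' OVERWRITE"
        )
        # Remove it again
        cursor.execute(
            "REMOVE '/Volumes/catalog_name/schema_name/volume_name/data.csv'"
        )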

Conclusion

Managing files in Unity Catalog volumes provides a centralized, secure, and scalable way to handle non-tabular data in Databricks. Whether you prefer using the user interface or programmatically managing files with Python, SQL, or REST APIs, Databricks makes it flexible and easy.

By leveraging volumes, you can better organize and manage datasets used in machine learning, data science, and AI projects. This guide should provide a solid foundation for working with files in Unity Catalog volumes, helping you make the most out of Databricks for your data management needs.

Feel free to dive into each approach and see what works best for your workflows!

