Specifies the name of the file system within the external storage system.
| Valid in: | DATA and PROC steps (when accessing DBMS data using SAS/ACCESS software) |
|---|---|
| Categories: | Bulk Loading, Data Access |
| Default: | LIBNAME option value |
| Interaction: | This option is used with the BL_ACCOUNTNAME=, BL_DNSSUFFIX=, and BL_FOLDER= options to specify the external data location. |
| Data source: | Microsoft SQL Server, Snowflake, Spark |
| Notes: | Support for this data set option was added in SAS 9.4M7. Support for Snowflake and Spark was added in SAS 9.4M9. |
| See: | BL_ACCOUNTNAME= data set option, BL_DNSSUFFIX= data set option, BL_FILESYSTEM= LIBNAME option, BL_FOLDER= data set option |
This option specifies the name of the file system within the external storage system from which files are read and to which they are written.
The external data location is generated using this option and the BL_ACCOUNTNAME=, BL_DNSSUFFIX=, and BL_FOLDER= data set options. These values are combined to define the URL to access an Azure Data Lake Storage Gen2 location:
https://<account-name>.<network-storage-host-name>/<file-system-name>/<file-path-folder>
These values might result in a URL similar to https://myaccount.dfs.core.windows.net/myfilesystem/myfolder.
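As a minimal sketch of how the four values combine, the step below writes a table through a libref (here called azdb) that is assumed to be assigned with a SAS/ACCESS engine that supports these options; the libref, table names, and option values are placeholders, and the comment shows the location they resolve to.

```sas
/* Sketch only: azdb and all option values are hypothetical.            */
/* The four BL_* values below resolve to the external data location     */
/*   https://myaccount.dfs.core.windows.net/myfilesystem/myfolder       */
data azdb.sales (bulkload=yes
                 bl_accountname='myaccount'
                 bl_dnssuffix='dfs.core.windows.net'
                 bl_filesystem='myfilesystem'
                 bl_folder='myfolder');
   set work.sales;
run;
```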
Microsoft SQL Server: This option is used when bulk loading data to Azure Synapse Analytics (SQL DW) through an Azure Data Lake Storage Gen2 storage account. For more information, see Bulk Loading to Azure Synapse Analytics.
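A sketch of the Synapse case follows, assuming the SQLSVR engine and a NOPROMPT= connection string whose contents are placeholders; it also illustrates the default behavior noted above, where a BL_FILESYSTEM= value set on the LIBNAME statement is overridden by the data set option for a single load.

```sas
/* Sketch: connection details and all names are placeholders.           */
libname synapse sqlsvr
   noprompt="Driver={ODBC Driver 17 for SQL Server};Server=myserver.sql.azuresynapse.net;Database=mydb;Uid=myuser;Pwd=mypwd;"
   bl_filesystem='myfilesystem';   /* LIBNAME option supplies the default */

/* The data set option overrides the LIBNAME value for this load only.  */
proc append base=synapse.sales_fact (bulkload=yes
                                     bl_accountname='myaccount'
                                     bl_dnssuffix='dfs.core.windows.net'
                                     bl_filesystem='stagingfs'
                                     bl_folder='myfolder')
            data=work.sales_fact;
run;
```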
Snowflake: This option is used to bulk load or bulk unload compressed Parquet files that are stored in an Azure Data Lake Storage (ADLS) location.
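A minimal sketch of the Snowflake case, assuming a libref snowdb is already assigned with the SAS/ACCESS Interface to Snowflake; the table and option values are placeholders, and a bulk unload would use the corresponding unload option with the same BL_* settings.

```sas
/* Sketch: snowdb is assumed to be assigned with the SAS/ACCESS          */
/* Interface to Snowflake. The BL_* options point the staged files at    */
/*   https://myaccount.dfs.core.windows.net/myfilesystem/stage           */
data snowdb.orders (bulkload=yes
                    bl_accountname='myaccount'
                    bl_dnssuffix='dfs.core.windows.net'
                    bl_filesystem='myfilesystem'
                    bl_folder='stage');
   set work.orders;
run;
```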
Spark: This option is used when the Spark engine bulk loads or bulk unloads data in Databricks on Azure with an Azure Data Lake Storage Gen2 storage account. For more information, see Bulk Loading and Unloading to Databricks in Azure.
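A comparable sketch for Databricks on Azure, assuming a libref dbx assigned with the SAS/ACCESS Interface to Spark; all names and values are placeholders, and the data set options are specified on the output table in PROC SQL.

```sas
/* Sketch: dbx is assumed to be assigned with the SAS/ACCESS Interface   */
/* to Spark, pointing at a Databricks on Azure cluster. Rows are staged  */
/* in the ADLS Gen2 file system 'myfilesystem' during the bulk load.     */
proc sql;
   create table dbx.web_events (bulkload=yes
                                bl_accountname='myaccount'
                                bl_dnssuffix='dfs.core.windows.net'
                                bl_filesystem='myfilesystem'
                                bl_folder='events_stage') as
      select * from work.web_events;
quit;
```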