BL_DELETE_DATAFILE= Data Set Option

Specifies whether to delete only the data file or all files that the SAS/ACCESS engine creates for the DBMS bulk-load facility.

Valid in: DATA and PROC steps (when accessing DBMS data using SAS/ACCESS software)
Categories: Bulk Loading
Data Set Control
Alias: BL_DELETE_FILES= [Oracle]
Default: YES
Requirement: To specify this option, you must first specify BULKLOAD=YES.
Data source: Amazon Redshift, Aster, DB2 under UNIX and PC Hosts, Google BigQuery, Greenplum, HAWQ, Impala, Informix, Microsoft SQL Server, Netezza, Oracle, PostgreSQL, SAP IQ, Snowflake, Spark, Yellowbrick
Notes: Support for Microsoft SQL Server and Yellowbrick was added in SAS 9.4M7.
Support for Informix and Spark was added in SAS 9.4M9.
See: BL_CONTROL= data set option, BL_DATAFILE= data set option, BL_DELETE_DATAFILE= LIBNAME option, BL_DELETE_ONLY_DATAFILE= data set option, BL_USE_MANIFEST= data set option, BL_USE_PIPE= data set option, BULKLOAD= data set option

Syntax

BL_DELETE_DATAFILE=YES | NO

Syntax Description

YES

deletes all (data, control, and log) files that the SAS/ACCESS engine creates for the DBMS bulk-load facility.

NO

does not delete these files.

Details

Amazon Redshift: Setting BL_DELETE_DATAFILE=YES deletes the data files that are created during the bulk-loading process. If BL_USE_MANIFEST=YES, the manifest files are deleted as well. Files are deleted both from the local machine and from the S3 bucket.
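
For example, a Redshift bulk load that removes its staged files can be sketched as follows; the connection options, library reference, and table names are illustrative placeholders, not values from this page:

```sas
/* Sketch: bulk load to Redshift and clean up the staged files.
   Server, database, and credential values are placeholders. */
libname rs redshift server=myserver database=mydb
        user=myusr1 password=mypwd1;

data rs.sales_stage (bulkload=yes
                     bl_use_manifest=yes      /* also stage a manifest file */
                     bl_delete_datafile=yes); /* remove local and S3 files  */
  set work.sales;
run;
```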

DB2 under UNIX and PC Hosts: When BL_DELETE_DATAFILE=YES, only the temporary data file that SAS/ACCESS creates is deleted after the load completes.

Greenplum, HAWQ: When BL_DELETE_DATAFILE=YES, the external data file is deleted after the load completes.

Netezza: You can use this option only when BL_USE_PIPE=NO.
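
Because the Netezza interface streams data through a named pipe by default, there is no data file on disk to delete unless BL_USE_PIPE=NO is also set. A minimal sketch, with placeholder connection values:

```sas
/* Sketch: write a physical data file for the Netezza bulk load
   and keep it afterward for inspection. Connection values are
   placeholders. */
libname nz netezza server=myserver database=mydb
        user=myusr1 password=mypwd1;

data nz.test1 (bulkload=yes
               bl_use_pipe=no           /* write a physical data file */
               bl_delete_datafile=no);  /* retain it after the load   */
  c1=1;
run;
```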

Oracle, PostgreSQL: When BL_DELETE_DATAFILE=YES, all files (DAT, CTL, and LOG) are deleted.

Spark: Setting BL_DELETE_DATAFILE=YES deletes the temporary files both on the SAS server and in Azure storage. This option applies when the Spark engine bulk loads or bulk unloads data in Databricks. For more information, see Bulk Loading and Unloading to Databricks in Azure.

Examples

Example 1: Delete All Files

In this example, the default is YES, so all files are deleted.

libname x oracle user=myusr1 pw=mypwd1 path=mypath;
proc datasets library=x;
  delete test1;
run;

data x.test1 (bulkload=yes);
  c1=1;
run;
x dir BL_TEST1*.*;

Example 2: Retain All Files

No files are deleted in this example because BL_DELETE_FILES=NO (the Oracle alias for BL_DELETE_DATAFILE=) is specified.

libname x oracle user=myusr1 pw=mypwd1 path=mypath;
proc datasets library=x;
  delete test2;
run;

data x.test2 (bulkload=yes bl_delete_files=no);
  c1=1;
run;
x dir BL_TEST2*.*;
Last updated: February 3, 2026