INSERTBUFF= LIBNAME Statement Option

Specifies the number of rows in a single DBMS insert.

Valid in: SAS/ACCESS LIBNAME statement
Category: Data Set Control
Default: DBMS-specific
Restrictions: SAS allows the maximum number of rows that the DBMS allows, up to 32,767 rows.
Additional driver-specific restrictions might apply.
The optimal value for this option varies with factors such as network type and available memory.
Interactions: If you specify the DBCOMMIT= option with a value that is less than the value of INSERTBUFF=, then DBCOMMIT= overrides INSERTBUFF= (see the sketch after this list).
DB2 under UNIX and PC Hosts, Greenplum: To use this option, you must specify INSERT_SQL=YES.
Hadoop, Spark: This option is used only when BULKLOAD=NO.
ODBC to Microsoft SQL Server: If you are using INSERTBUFF=, do not also set BULKLOAD=YES.
Data source: Amazon Redshift, Aster, DB2 under UNIX and PC Hosts, Google BigQuery, Greenplum, Hadoop, HAWQ, Impala, Informix, JDBC, Microsoft SQL Server, MySQL, Netezza, ODBC, OLE DB, Oracle, PostgreSQL, SAP HANA, SAP IQ, Snowflake, Spark, Vertica, Yellowbrick
Notes: Support for Yellowbrick was added in SAS 9.4M7.
Support for Hadoop, Informix, and Spark was added in SAS 9.4M9.
The default for Microsoft SQL Server changed from 1 to an automatically calculated value in SAS 9.4M9.
The default for Oracle changed from 10 to 500 in SAS 9.4M9.
Tip: You might need to experiment with different values to determine the best value for your site.
See: INSERTBUFF= data set option, DBCOMMIT= LIBNAME option, DBCOMMIT= data set option, INSERT_SQL= LIBNAME option, INSERT_SQL= data set option, READBUFF= LIBNAME option, READBUFF= data set option
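
For example, in the following hypothetical LIBNAME statement (the ODBC data source name and credentials are placeholders), DBCOMMIT=50 is less than INSERTBUFF=100, so the DBCOMMIT= value overrides INSERTBUFF=:

/* Hypothetical connection values; substitute your own data source and credentials. */
libname odbclib odbc datasrc=mydsn user=myuser password=mypwd
   insertbuff=100   /* requested number of rows per insert          */
   dbcommit=50;     /* smaller commit value overrides INSERTBUFF=   */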

Syntax

INSERTBUFF=positive-integer

Syntax Description

positive-integer

specifies the number of rows to insert.
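
For example, the following hypothetical LIBNAME statement (the Oracle connection values are placeholders) requests that 500 rows be sent to the DBMS in each insert:

/* Hypothetical Oracle connection values. */
libname oralib oracle user=myuser password=mypwd path=mypath insertbuff=500;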

Details

Default Values for the INSERTBUFF= LIBNAME Option

DBMS-Specific Default Values

Amazon Redshift: 1

Aster, DB2 under UNIX and PC Hosts, Google BigQuery, Greenplum, Hadoop, HAWQ, Impala, Informix, Microsoft SQL Server, Netezza, PostgreSQL, SAP HANA, SAP IQ, Spark, Vertica: automatically calculated based on row length

MySQL, ODBC, OLE DB, Snowflake, Yellowbrick: 1

Oracle: 500

When REREAD_EXPOSURE=YES, the default value is forced to 1.

Items to Be Aware of for INSERTBUFF=

SAS application messages that indicate the success or failure of an Insert operation represent information for only a single insert, even when multiple inserts are performed. Therefore, when you assign INSERTBUFF= a value that is greater than 1, these messages might be incorrect.

When you insert rows with the VIEWTABLE window or the FSVIEW or FSEDIT procedure, use INSERTBUFF=1 to prevent the DBMS interface from trying to insert multiple rows. These features do not support inserting more than one row at a time.
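
For example, a library that is assigned for row-at-a-time editing in these windows could be set up as in this sketch (the connection values are placeholders):

/* Hypothetical connection values; INSERTBUFF=1 forces single-row inserts. */
libname editlib odbc datasrc=mydsn user=myuser password=mypwd insertbuff=1;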

DB2 under UNIX and PC Hosts: If one row in the insert buffer fails, all rows in the insert buffer fail.

Impala: If the calculated INSERTBUFF= value exceeds the default DBCOMMIT= value of 1000, the INSERTBUFF= value is reset to 1000. This helps to improve performance.
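
If you want a larger buffer, one possible approach, sketched here with placeholder Impala connection values, is to specify INSERTBUFF= and DBCOMMIT= explicitly rather than relying on the calculated value and the default DBCOMMIT=:

/* Hypothetical Impala connection values. */
libname implib impala server=myserver schema=default insertbuff=2000 dbcommit=5000;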

MySQL: Values greater than 0 activate the INSERTBUFF= option, and the engine calculates how many rows it can insert at one time, based on row size. If one row in the insert buffer fails, all rows in the insert buffer might fail, depending on your storage type.
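
For example, this hypothetical MySQL assignment (the server, database, and credentials are placeholders) specifies a value greater than the default of 1 so that the engine performs multi-row inserts, sizing the buffer from the row length as described above:

/* Hypothetical MySQL connection values. */
libname mysqllib mysql server=myserver database=mydb user=myuser password=mypwd insertbuff=1000;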

Last updated: February 3, 2026