READBUFF= LIBNAME Statement Option

Specifies the number of rows of DBMS data to read into the buffer.

Valid in: SAS/ACCESS LIBNAME statement
CONNECT statement
Category: Data Set Control
Aliases: BUFF=, BUFFSIZE= [Oracle]
ROWSET_SIZE= [Amazon Redshift, Aster, DB2 under UNIX and PC Hosts, DB2 under z/OS, Greenplum, HAWQ, Impala, Microsoft SQL Server, Netezza, ODBC, OLE DB, PostgreSQL, SAP ASE, SAP HANA, SAP IQ, Vertica]
ROWSET= [Aster, Greenplum, HAWQ, Informix, SAP IQ]
Default: DBMS-specific
Restrictions: When READBUFF=1, only one row is retrieved at a time.
[Oracle] The default value is automatically calculated based on row length. 250 is the minimum value that this option can accept.
Interactions: Buffering data reads can decrease network activities and increase performance. However, because SAS stores the rows in memory, higher values for READBUFF= use more memory. In addition, if too many rows are selected at once, rows that are returned to the SAS application might be out of date. For example, if someone else modifies the rows, you do not see the changes.
Hadoop: This option is applied only when READ_METHOD=JDBC.
Data source: Amazon Redshift, Aster, DB2 under UNIX and PC Hosts, DB2 under z/OS, Google BigQuery, Greenplum, Hadoop, HAWQ, Impala, Informix, JDBC, Microsoft SQL Server, Netezza, ODBC, OLE DB, Oracle, PostgreSQL, SAP ASE, SAP HANA, SAP IQ, Snowflake, Spark, Vertica, Yellowbrick
Notes: Starting in SAS 9.4M9, Hadoop, JDBC, and Spark have a maximum value of 100,000.
Support for Informix was added in SAS 9.4M9.
Tip: This option can improve performance by holding multiple rows in memory for input into SAS rather than fetching them one at a time.
See: INSERTBUFF= LIBNAME option, INSERTBUFF= data set option, READBUFF= data set option

Syntax

READBUFF=integer

Syntax Description

integer

specifies the positive number of rows to hold in memory for input into SAS. SAS allows up to the maximum number of rows that the DBMS allows.
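
As a minimal sketch, READBUFF= is specified directly on the LIBNAME statement. The libref, credentials, and path value below are placeholders; substitute your own connection information:

```
/* Hypothetical Oracle connection values; substitute your own. */
libname mydblib oracle user=myuser password=mypass path=mysrv1
        readbuff=1000;  /* hold up to 1,000 rows in memory per fetch */

proc print data=mydblib.employees (obs=10);
run;
```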

Details

DBMS-Specific Default Values

Amazon Redshift, PostgreSQL, Yellowbrick: 0

Aster, Google BigQuery, Greenplum, Hadoop, HAWQ, Impala, Informix, JDBC, Netezza, SAP HANA, SAP IQ, Snowflake, Spark: automatically calculated based on row length.

Google BigQuery, Informix: The maximum value is 32,767.

Hadoop, JDBC, and Spark: The maximum value is 100,000. The default value for Databricks is 100,000.
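
For Hadoop, recall that this option takes effect only when READ_METHOD=JDBC. A minimal sketch, where the libref, server, and port values are placeholders:

```
/* server and port values are hypothetical */
libname hdp hadoop server=myhive port=10000
        read_method=jdbc readbuff=5000;
```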

DB2 under UNIX and PC Hosts: If you do not specify a value for this option, the default buffer size is automatically calculated based on the row length of your data. The SQLExtendedFetch API call is used.

DB2 under z/OS: For SAS 9.2 and later, the default is 1 and the maximum value is 32,767. For more information, see READBUFF= Restriction.

Microsoft SQL Server, ODBC: The default is 0. If you do not specify a value for this option, the SQLFetch API call is used and no internal SAS buffering is performed. When you specify READBUFF=1 or greater, the SQLExtendedFetch API call is used.
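
A minimal ODBC sketch (the DSN name is a placeholder): at the default of 0, rows are fetched one at a time with SQLFetch; setting a larger value switches to block fetches with SQLExtendedFetch:

```
/* dsn=mysqlsrv is hypothetical */
libname odblib odbc dsn=mysqlsrv readbuff=250;  /* SQLExtendedFetch, 250 rows per call */
```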

OLE DB, Vertica: 1

Oracle: The default is 250. If you do not specify this option, the READBUFF= value is automatically calculated as the number of rows that fit into a 100-KB memory buffer on most platforms (a 50-KB buffer on z/OS), adjusted to be within the range 250–15,000. You can always override the default by explicitly specifying a value for this option.

SAP ASE: The default is 100. To use a value greater than 1 row for this option, you must specify CONNECTION=UNIQUE.
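
Because SAP ASE buffers more than one row only on a dedicated connection, a sketch pairs READBUFF= with CONNECTION=UNIQUE. The server and credential values below are placeholders:

```
/* server, user, and password values are hypothetical */
libname aselib sybase server=mysrv user=myuser password=mypass
        connection=unique readbuff=500;
```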

Last updated: February 3, 2026