Introduction to Memory-Optimized Tables

Applies to: SQL Server, Azure SQL Database, Azure SQL Managed Instance

Memory-optimized tables are created using CREATE TABLE (Transact-SQL).

Memory-optimized tables are fully durable by default, and, like transactions on (traditional) disk-based tables, transactions on memory-optimized tables are fully atomic, consistent, isolated, and durable (ACID). Memory-optimized tables and natively compiled stored procedures support only a subset of Transact-SQL features.
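As a minimal sketch (the table and column names are hypothetical, and the database must already contain a MEMORY_OPTIMIZED_DATA filegroup), a durable memory-optimized table is created with the MEMORY_OPTIMIZED option:

```sql
-- A durable memory-optimized table. DURABILITY = SCHEMA_AND_DATA is the
-- default and is shown here only for clarity.
CREATE TABLE dbo.ShoppingCart
(
    ShoppingCartId INT IDENTITY(1,1) PRIMARY KEY NONCLUSTERED,
    UserId         INT NOT NULL,
    CreatedDate    DATETIME2 NOT NULL,
    TotalPrice     MONEY
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

Note that a memory-optimized table requires at least one index; here the nonclustered primary key satisfies that requirement.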

Starting with SQL Server 2016, and in Azure SQL Database, there are no limitations for collations or code pages that are specific to In-Memory OLTP.

The primary storage for memory-optimized tables is the main memory. Rows in the table are read from and written to memory. A second copy of the table data is maintained on disk, but only for durability purposes. See Creating and Managing Storage for Memory-Optimized Objects for more information about durable tables. Data in memory-optimized tables is only read from disk during database recovery (for example, after a server restart).

For even greater performance gains, In-Memory OLTP supports durable tables with delayed transaction durability. Delayed durable transactions are saved to disk soon after the transaction commits and control is returned to the client. In exchange for the increased performance, committed transactions that haven't yet been persisted to disk are lost in a server crash or failover.
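As a sketch, delayed durability can be allowed at the database level and then requested per transaction (`dbo.ShoppingCart` is a hypothetical table):

```sql
-- Allow individual transactions to opt in to delayed durability.
ALTER DATABASE CURRENT SET DELAYED_DURABILITY = ALLOWED;

BEGIN TRANSACTION;
    UPDATE dbo.ShoppingCart
    SET TotalPrice = TotalPrice + 9.99
    WHERE ShoppingCartId = 1;
-- Control returns before the log records are hardened to disk,
-- so this commit can be lost in a crash or failover.
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
```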

Besides the default durable memory-optimized tables, SQL Server also supports non-durable memory-optimized tables, which aren't logged and whose data isn't persisted on disk. This means that transactions on these tables don't require any disk I/O, but the data is lost if there is a server crash or failover.
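A non-durable table is declared with DURABILITY = SCHEMA_ONLY. In this hypothetical session-state sketch, the table definition survives a restart but the rows do not:

```sql
-- SCHEMA_ONLY: no logging, no disk I/O for data;
-- rows are lost on restart or failover.
CREATE TABLE dbo.SessionState
(
    SessionId  NVARCHAR(64) NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload    VARBINARY(MAX) NULL,
    ExpiryTime DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```

This pattern suits transient data such as session state or staging tables, where losing the rows in a failover is acceptable.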

In-Memory OLTP is integrated with SQL Server to provide a seamless experience in all areas such as development, deployment, manageability, and supportability. A database can contain in-memory as well as disk-based objects.

Rows in memory-optimized tables are versioned. This means that each row in the table potentially has multiple versions. All row versions are maintained in the same table data structure. Row versioning is used to allow concurrent reads and writes on the same row. For more information about concurrent reads and writes on the same row, see Transactions with Memory-Optimized Tables.

The following figure illustrates multi-versioning: a table with three rows, where each row has several versions.

Figure: multi-versioning of rows in a memory-optimized table.

The table has three rows: r1, r2, and r3. r1 has three versions, r2 has two versions, and r3 has four versions. Different versions of the same row don't necessarily occupy consecutive memory locations. The different row versions can be dispersed throughout the table data structure.

The memory-optimized table data structure can be seen as a collection of row versions. Whereas rows in disk-based tables are organized in pages and extents, with individual rows addressed by page number and page offset, row versions in memory-optimized tables are addressed using 8-byte memory pointers.

Data in memory-optimized tables is accessed in two ways:

  • Through natively compiled stored procedures.

  • Through interpreted Transact-SQL, outside of a natively compiled stored procedure. These Transact-SQL statements can be inside interpreted stored procedures, or they can be ad hoc Transact-SQL statements.

Accessing Data in Memory-Optimized Tables

Memory-optimized tables can be accessed most efficiently from natively compiled stored procedures (Natively Compiled Stored Procedures). Memory-optimized tables can also be accessed with (traditional) interpreted Transact-SQL. Interpreted Transact-SQL refers to accessing memory-optimized tables without a natively compiled stored procedure. Some examples of interpreted Transact-SQL access include accessing a memory-optimized table from a DML trigger, ad hoc Transact-SQL batch, view, and table-valued function.
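A minimal natively compiled stored procedure might look like the following sketch (the procedure and table names are hypothetical). Native compilation requires SCHEMABINDING and an ATOMIC block:

```sql
-- A natively compiled procedure accessing a hypothetical
-- memory-optimized table dbo.ShoppingCart.
CREATE PROCEDURE dbo.usp_GetCartTotal
    @UserId INT
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    SELECT SUM(TotalPrice) AS CartTotal
    FROM dbo.ShoppingCart
    WHERE UserId = @UserId;
END;
```

The ATOMIC block starts a transaction implicitly (or creates a savepoint if one is already open), which is why no explicit BEGIN/COMMIT appears in the body.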

The following table summarizes native and interpreted Transact-SQL access for various objects.

| Feature | Access using a natively compiled stored procedure | Interpreted Transact-SQL access | CLR access |
|---|---|---|---|
| Memory-optimized table | Yes | Yes | No ¹ |
| Memory-optimized table type | Yes | Yes | No |
| Natively compiled stored procedure | Yes. Nesting of natively compiled stored procedures is supported: you can use the EXECUTE syntax inside the stored procedures, as long as the referenced procedure is also natively compiled. | Yes | No ¹ |

¹ You can't access a memory-optimized table or natively compiled stored procedure from the context connection (the connection from SQL Server when executing a CLR module). You can, however, create and open another connection from which you can access memory-optimized tables and natively compiled stored procedures.

Sensitive data in memory-optimized tables can be protected by using Always Encrypted. The following limitations apply:

  • When using Always Encrypted with secure enclaves, the use of enclave-enabled keys for columns in memory-optimized tables isn't supported. This means that in-place encryption can't be used, and the initial encryption is done on the client.
  • Always Encrypted isn't supported for any column in a memory-optimized table when the table is referenced in a natively compiled module.

Performance and Scalability

The following factors affect the performance gains that can be achieved with In-Memory OLTP:

Communication: An application using many short stored procedure calls might see a smaller performance gain compared to an application with fewer calls and more functionality implemented in each stored procedure.

Transact-SQL Execution: In-Memory OLTP achieves the best performance when using natively compiled stored procedures rather than interpreted stored procedures or query execution. Even so, accessing memory-optimized tables from interpreted stored procedures can still provide a benefit.

Range Scan vs Point Lookup: Memory-optimized nonclustered indexes support range scans and ordered scans. For point lookups, memory-optimized hash indexes have better performance than memory-optimized nonclustered indexes. Memory-optimized nonclustered indexes have better performance than disk-based indexes.
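The choice between the two memory-optimized index types is made at table creation. In this sketch (hypothetical table; BUCKET_COUNT is an assumption that should roughly match the expected number of distinct key values), a hash index serves point lookups and a nonclustered index serves range and ordered scans:

```sql
CREATE TABLE dbo.CustomerLookup
(
    -- Hash index: best for equality (point-lookup) predicates.
    CustomerId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Email NVARCHAR(256) NOT NULL,
    -- Nonclustered (range) index: supports range and ordered scans.
    INDEX ix_Email NONCLUSTERED (Email)
)
WITH (MEMORY_OPTIMIZED = ON);
```

An undersized BUCKET_COUNT causes long hash chains and slower lookups, while a heavily oversized one wastes memory, so sizing it to the expected key cardinality matters.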

  • Starting in SQL Server 2016, the query plan for a memory-optimized table can scan the table in parallel. This improves the performance of analytical queries.
    • Hash indexes also became scannable in parallel in SQL Server 2016.
    • Nonclustered indexes also became scannable in parallel in SQL Server 2016.

Index operations: Index operations aren't logged, and they exist only in memory.

Concurrency: Applications whose performance is affected by engine-level concurrency, such as latch contention or blocking, improve significantly when moved to In-Memory OLTP.

The following sections list the performance and scalability issues that are commonly found in relational databases, and describe how In-Memory OLTP can improve performance in each case.

Performance

High resource (CPU, I/O, network, or memory) usage can limit performance. In-Memory OLTP affects each resource differently.
CPU
Natively compiled stored procedures can lower CPU usage significantly because they require fewer instructions to execute a Transact-SQL statement compared to interpreted stored procedures.

In-Memory OLTP can help reduce the hardware investment in scaled-out workloads because one server can potentially deliver the throughput of multiple servers.

I/O
If you encounter an I/O bottleneck when reading or writing data or index pages, In-Memory OLTP may reduce it. Additionally, checkpointing of In-Memory OLTP objects is continuous and doesn't lead to sudden spikes in I/O operations. However, if the working set of the performance-critical tables doesn't fit in memory, In-Memory OLTP doesn't apply, because it requires data to be memory resident. If you encounter an I/O bottleneck in logging, In-Memory OLTP can reduce it because it does less logging. If one or more memory-optimized tables are configured as non-durable tables, logging for their data is eliminated entirely.

Memory
If the bottleneck is insufficient memory, In-Memory OLTP doesn't offer a performance benefit. On the contrary, it can put extra pressure on memory, because all memory-optimized objects must be memory resident.

Network
If the bottleneck is network communication between the data tier and the application tier, In-Memory OLTP doesn't offer a performance benefit; the data still needs to travel between the tiers.

Scalability

Most scaling issues in SQL Server applications are caused by concurrency issues such as contention in locks, latches, and spinlocks.
Latch Contention
A typical scenario is contention on the last page of an index when rows are inserted concurrently in key order. Because In-Memory OLTP doesn't take latches when accessing data, the scalability issues related to latch contention are removed.

Spinlock Contention
Because In-Memory OLTP doesn't take spinlocks when accessing data, the scalability issues related to spinlock contention are likewise removed.

Locking-Related Contention
If your database application encounters blocking between read and write operations, In-Memory OLTP removes the blocking, because it uses a form of optimistic concurrency control to implement all transaction isolation levels. In-Memory OLTP doesn't use tempdb to store row versions.

If the scaling issue is caused by a conflict between two write operations, such as two concurrent transactions trying to update the same row, In-Memory OLTP lets one transaction succeed and fails the other. The failed transaction must be retried, either explicitly or implicitly; in either case, the application needs to be changed to handle retries.
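One possible retry pattern, sketched in interpreted Transact-SQL against a hypothetical table. Errors 41302 (update conflict), 41301 (dependency failure), 41305 (repeatable read validation failure), and 41325 (serializable validation failure) are the In-Memory OLTP errors that typically warrant a retry:

```sql
DECLARE @retry INT = 3;
WHILE @retry > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
            UPDATE dbo.ShoppingCart
            SET TotalPrice = TotalPrice + 9.99
            WHERE ShoppingCartId = 1;
        COMMIT TRANSACTION;
        SET @retry = 0;  -- success: exit the loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        SET @retry -= 1;
        -- Re-raise anything that isn't a retryable conflict,
        -- or if the retry budget is exhausted.
        IF @retry <= 0 OR ERROR_NUMBER() NOT IN (41301, 41302, 41305, 41325)
        BEGIN
            ;THROW;
        END
    END CATCH
END;
```

In practice the retry logic often lives in the application tier rather than in Transact-SQL, so that the full transaction (including any application-side reads) is replayed.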

If your application experiences frequent conflicts between write operations, the value of optimistic concurrency control is diminished, and the application isn't a good fit for In-Memory OLTP. Most OLTP applications don't have write conflicts unless the conflict is the result of lock escalation.

Row-Level Security in Memory-Optimized Tables

Row-Level Security is supported in memory-optimized tables. Applying Row-Level Security policies to memory-optimized tables is essentially the same as for disk-based tables, except that the inline table-valued functions used as security predicates must be natively compiled (created using the WITH NATIVE_COMPILATION option). For details, see the Cross-Feature Compatibility section in the Row-Level Security topic.
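As a sketch (the predicate function, target table `dbo.SalesData`, and its `SalesRep` column are hypothetical), the security predicate is a natively compiled inline table-valued function, and the policy is attached as usual:

```sql
-- Natively compiled predicate: a row is visible only to its sales rep.
CREATE FUNCTION dbo.fn_SecurityPredicate (@SalesRep SYSNAME)
RETURNS TABLE
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
RETURN SELECT 1 AS fn_SecurityPredicate_result
       WHERE @SalesRep = USER_NAME();
GO

-- Bind the predicate to a hypothetical memory-optimized table.
CREATE SECURITY POLICY dbo.SalesFilter
ADD FILTER PREDICATE dbo.fn_SecurityPredicate(SalesRep)
ON dbo.SalesData
WITH (STATE = ON);
```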

Various built-in security functions that are essential to row-level security are available for memory-optimized tables. For details, see Built-in Functions in Natively Compiled Modules.

EXECUTE AS CALLER - All native modules now support and use EXECUTE AS CALLER by default, even if the hint isn't specified. This is because row-level security predicate functions are expected to use EXECUTE AS CALLER, so that the function, and any built-in functions used within it, are evaluated in the context of the calling user.
EXECUTE AS CALLER has a small (approximately 10%) performance hit caused by permission checks on the caller. If the module specifies EXECUTE AS OWNER or EXECUTE AS SELF explicitly, these permission checks and their associated performance cost are avoided. However, using either of these options together with the built-in security functions mentioned above incurs a higher performance hit due to the necessary context switching.

Scenarios

For a brief discussion of typical scenarios where In-Memory OLTP can improve performance, see In-Memory OLTP.

See Also

In-Memory OLTP (In-Memory Optimization)