SQL Server 2014: features aimed to scale

Brian Franklin

Are your clients running SQL Server 2008, or perhaps even some lingering instances of SQL Server 2005? Are they migrating to SQL Server 2012? Most likely, your clients are running a blend of versions. Luckily, there’s good news on the horizon: the release of SQL Server 2014 is right around the corner, bringing interesting new features and smart enhancements to the table. SQL Server 2014 Community Technology Preview (CTP) 2 was released to the public last October and is the first and only production-ready build prior to release to manufacturing (RTM).

While SQL Server 2014 doesn’t contain any significant changes for SSIS, SSRS, and SSAS, the database engine contains extensive changes under the hood. Targeting improved performance and scalability, Microsoft has designed SQL Server 2014 to take full advantage of the rapid growth in server computing power. Modern database servers now contain many processor cores and much more memory. I’ll preview the most significant changes to SQL Server in this post, focusing primarily on the biggest highlight: in-memory online transaction processing (OLTP).

OLTP databases often experience periods of high usage that can slow traditional systems. SQL Server 2014 in-memory OLTP targets the two primary causes of performance degradation: disk read/write latency and table locking. Because data is stored in memory, there are no wait times associated with disk operations, and the elimination of table locking further reduces waits since processes no longer block one another.

Figure 1. “The Evolution of SQL Server.” SQL Server Team. Retrieved from the SQL Server Blog: http://blogs.technet.com/b/dataplatforminsider/archive/2013/06/03/sql-server-2014-unlocking-real-time-insights.aspx

In-memory OLTP

The most talked about new feature in SQL Server 2014 is memory-optimized tables. Designed with existing in-memory data warehouse and business intelligence capabilities in mind, in-memory OLTP is intended to address performance and scalability concerns with disk-based tables in databases with large-scale OLTP workloads. At its core, the new database engine is optimized for in-memory tables, eliminates traditional locking and latching during transactions, and natively compiles stored procedures.

Durability

Memory-based tables can be durable and non-durable. By default, in-memory tables are durable, with a second copy being maintained on disk. Data on disk would only be accessed during database recovery operations. Memory-based table transactions are fully ACID (Atomic, Consistent, Isolated, and Durable) and use the same transaction log as disk-based operations. Alternatively, memory-based tables can be created as non-durable with the SCHEMA_ONLY option. Non-durable table operations are not logged and therefore significantly reduce disk IO. Consider using non-durable tables for ETL processes where data may only be needed temporarily.
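As a sketch of the syntax, here is how durable and non-durable memory-optimized tables might be declared in SQL Server 2014. The database, filegroup, table, and column names are illustrative; note that a memory-optimized filegroup must exist first, and that memory-optimized tables require a nonclustered hash index with a declared bucket count.

```sql
-- A memory-optimized filegroup must exist before creating the tables.
ALTER DATABASE SalesDb
    ADD FILEGROUP SalesDb_mod CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE SalesDb
    ADD FILE (NAME = 'SalesDb_mod', FILENAME = 'C:\Data\SalesDb_mod')
    TO FILEGROUP SalesDb_mod;
GO

-- Durable memory-optimized table (the default): data survives a restart.
CREATE TABLE dbo.OrderDetail
(
    OrderDetailId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    OrderId       INT NOT NULL,
    Quantity      INT NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Non-durable staging table: the schema persists, the data does not,
-- and operations are not logged -- useful for transient ETL data.
CREATE TABLE dbo.StagingOrderDetail
(
    OrderDetailId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    OrderId       INT NOT NULL,
    Quantity      INT NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```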

Multi-versioning

In-memory OLTP provides optimistic multi-version concurrency control (MVCC), which completely avoids the locking during DML operations that can lead to deadlock victims. SQL Server 2014 maintains row versions to allow concurrent read and write operations on the same row of data. The database engine optimistically assumes transactions will not conflict; when a conflict does occur, it is detected and one of the transactions is terminated.
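Because a conflicting transaction is terminated rather than blocked, application code should be prepared to retry. A minimal sketch, assuming the illustrative `dbo.OrderDetail` table is memory-optimized, using the SNAPSHOT table hint available for cross-container transactions:

```sql
-- Transactions touching memory-optimized tables can use the SNAPSHOT
-- hint (or the MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT database option).
-- A write-write conflict terminates one transaction, so callers retry.
BEGIN TRY
    BEGIN TRANSACTION;
    UPDATE dbo.OrderDetail WITH (SNAPSHOT)
        SET Quantity = Quantity + 1
        WHERE OrderDetailId = 42;
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
    -- Error 41302 indicates an update conflict with a concurrent
    -- transaction; the application should retry it.
    THROW;
END CATCH;
```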

Native compilation

Another major component of the in-memory OLTP database engine is natively compiled tables and stored procedures. Tables and stored procedures are compiled into native-code dynamic-link libraries (DLLs) that are executed by the central processing unit (CPU) without further interpretation. While these DLLs are not part of the database, SQL Server manages the files automatically. One caveat: statistics on memory-optimized tables are not updated automatically, and execution plans for stored procedures are not recompiled as the data in the table changes. Statistics need to be updated manually on a regular basis, and stored procedures dropped and re-created, to ensure proper optimization.
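To make this concrete, here is a hedged sketch of a natively compiled procedure against the illustrative `dbo.OrderDetail` table, followed by a manual statistics refresh. Native compilation imposes several requirements: SCHEMABINDING, an EXECUTE AS clause, and an ATOMIC block that declares the isolation level and language.

```sql
-- Natively compiled procedures require SCHEMABINDING, EXECUTE AS,
-- and an ATOMIC block with isolation level and language declared.
CREATE PROCEDURE dbo.usp_AddOrderDetail
    @OrderDetailId INT, @OrderId INT, @Quantity INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.OrderDetail (OrderDetailId, OrderId, Quantity)
    VALUES (@OrderDetailId, @OrderId, @Quantity);
END;
GO

-- Statistics on memory-optimized tables must be refreshed manually;
-- NORECOMPUTE is required because auto-update is not supported.
UPDATE STATISTICS dbo.OrderDetail WITH FULLSCAN, NORECOMPUTE;
```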

Consider using in-memory OLTP to scale when heavy workload presents latch contention or for read-only reporting tables serving a large user population. Memory-optimized tables should also be considered for reducing latency when business logic is contained primarily within stored procedures.

AlwaysOn

Based on database mirroring, AlwaysOn was introduced in SQL Server 2012 as a comprehensive high-availability and disaster-recovery solution. The primary features leveraged by AlwaysOn are Availability Groups and Failover Cluster Instances (FCIs). Availability Groups are sets of user databases that fail over together to another SQL Server instance within the same failover cluster. Initially, AlwaysOn supported one primary replica and up to four secondary replicas. In SQL Server 2014, the maximum number of secondary replicas increases to eight. Additionally, secondary replicas now remain readable when disconnected from the primary replica or during cluster quorum loss. Of course, if you are actively using any of the secondary replicas, they must be fully licensed.
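For reference, a minimal availability group definition might look like the following. The instance, database, and endpoint names are illustrative; the `SECONDARY_ROLE` option is what keeps a replica open for read-only workloads.

```sql
-- Illustrative two-replica availability group: a synchronous primary
-- and an asynchronous, readable secondary.
CREATE AVAILABILITY GROUP SalesAG
FOR DATABASE SalesDb
REPLICA ON
    N'SQLNODE1' WITH (
        ENDPOINT_URL = N'TCP://sqlnode1.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC),
    N'SQLNODE2' WITH (
        ENDPOINT_URL = N'TCP://sqlnode2.contoso.com:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
        FAILOVER_MODE = MANUAL,
        SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));
```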

ColumnStore indexes

ColumnStore indexes were introduced in SQL Server 2012 and are based on xVelocity. This technology, which involves storing compressed data in columnar format, originated with Analysis Services and PowerPivot, and was adapted for SQL Server 2012 databases. New for SQL Server 2014 are clustered columnstore indexes. Clustered columnstore indexes are updateable and can include all columns in the table, and—unlike traditional clustered indexes—data is not stored in a sorted order. Instead, columnstore compression stores the data to maximize compression and performance. Clustered columnstore indexes should be considered standard for large data warehouse fact tables, which primarily see bulk loads and read-only operations.
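Creating one is a single statement. The fact table and index names below are illustrative; note that a clustered columnstore index replaces the rowstore entirely and automatically covers every column, so no column list is specified.

```sql
-- A clustered columnstore index covers all columns and replaces the
-- table's rowstore; data is stored compressed, not in sorted order.
CREATE CLUSTERED COLUMNSTORE INDEX ccsi_FactSales
    ON dbo.FactSales;
GO

-- New in 2014, the index is updateable: trickle inserts land in a
-- deltastore and are compressed into column segments later.
INSERT INTO dbo.FactSales (DateKey, ProductKey, SalesAmount)
VALUES (20140301, 1001, 19.99);
```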

Better maintenance, monitoring, and disaster recovery

Each of these features is available in the Enterprise edition of SQL Server 2014 and is intended to support mission-critical OLTP, data warehouse, and business intelligence solutions. Combined with the numerous other enhancements for improved maintenance, monitoring, and disaster recovery, SQL Server 2014 offers new heights of scalability and performance. For further reading, visit the TechNet Evaluation Center for SQL Server 2014 and MSDN Books Online for SQL Server 2014.


About Brian Franklin
Brian Franklin is a Solution Architect with Slalom Consulting in Dallas, Texas with a background in finance and over 15 years of technical experience, specializing in designing and developing insightful business intelligence solutions. His combined business experience and technical expertise have contributed greatly to many successful projects.
