I have been really lucky the past two weeks, as I was able to attend two separate SQL Saturday events back to back. For me, each is a day where I can both learn and talk with others who share my passion for Microsoft SQL Server. The latest, SQL Saturday #493 in Mountain View, CA (Silicon Valley) on 9 April 2016, was no exception to this wonderful community of SQL Server folks.
One of the cool things I did not know about and learned during the keynote was Stretch Database in SQL Server 2016. We got a quick high-level overview and demo of this new feature for Microsoft SQL Server 2016, which motivated me to attend the full session on it later. We also got an amazing high-level overview of new things happening at Microsoft from Ross Mistry (Blog | Twitter), who is a great public speaker and who also published the Introducing Microsoft SQL Server 2012 book, which you can get for free. Another cool tip is that SQL Server 2016 Standard Edition will include the AlwaysOn feature, which is a huge win for Microsoft customers, since Database Mirroring has been announced for deprecation.
At this event, I decided to take the following classes, and here is a brief overview of each:
Anthony van Gemert, a PM from the Microsoft SQL Server engineering team, spoke about MICROSOFT: SQL Server Stretch Database. The focus of this feature is offloading cold data from your database to another location, specifically Microsoft Azure. The scenario it addresses: you have older data that you are legally or contractually committed to keeping for a certain number of years, but that data is not accessed very often. Say it is sales orders from the past decade, and let's also assume your business has been booming, so there is a lot of data. For simplicity, say that cold data is 1 TB in size; as you can tell, your business SLA will be at risk due to the time restores and maintenance take to complete, and the underlying problem is shrinking IT budgets versus growing end-user data retention and availability requests. The solution is to securely migrate the rarely used cold data to Azure. The benefits include shrinking the size of your production database, with access to the recent data, allowing restores and maintenance such as indexing to complete faster. Additionally, the stretched data stays online in an active state to query in case of audits or other business-required reporting, without waiting for tapes to be delivered onsite and then restored.
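For reference, here is a minimal sketch of what enabling Stretch looks like in T-SQL; the database, server, credential, and table names below are hypothetical placeholders, not anything shown in the session:

```sql
-- Allow this instance to use Stretch Database (instance-level switch).
EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;

-- Link the database to an Azure server (names are placeholders).
ALTER DATABASE SalesDB
    SET REMOTE_DATA_ARCHIVE = ON
        ( SERVER = 'contoso-stretch.database.windows.net',
          CREDENTIAL = [contoso_stretch_credential] );

-- Start migrating a cold table's rows to Azure in the background.
ALTER TABLE dbo.SalesOrderArchive
    SET ( REMOTE_DATA_ARCHIVE = ON ( MIGRATION_STATE = OUTBOUND ) );
```

Once the table is stretched, queries against it still work as before; SQL Server transparently pulls remote rows from Azure when they are needed.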
Jimmy May (Blog | Twitter) delivered two separate presentations, and I thoroughly enjoyed both of his talks.
Hardware for Nothing, Get Your Flash for Free was right before lunch, in a slot designed for sponsoring vendors to pitch their products. The main takeaway from this demo is that SQL Server's pricing model is more expensive today than in previous editions, as it is now based on processor cores instead of sockets. A lot of the time, the bottleneck in today's systems is waiting on disk I/O; with flash, you can lower your SQL Server footprint by increasing your disk I/O with SanDisk technologies. If you want to obtain a free demo of this product, please register here.
If you do the math, you will see that you can easily save ~$400,000 by eliminating the need to license 2 hosts:
- Each host: 2 sockets, 14 cores/CPU = 28 cores (same as SanDisk tested)
- $6,874/core for SQL Server 2014 EE (full price, no discounts, no Software Assurance)
- 28 × $6,874 = $192,472 per host (for SQL Server Enterprise licenses)
- 2 hosts × $192,472 ≈ $385,000, hence ~$400,000 in savings by eliminating the need to license 2 hosts
The screenshot shown here charted the effect of adding more VMs (which sees little additional value after the 7th one), then adding SanDisk FlashSoft caching software, and finally going all-flash with SanDisk Fusion ioMemory.
SQL Server 2016 AlwaysOn AGs Break-through Perf Enhancements was an afternoon session where he discussed the bottlenecks in SQL Server Availability Groups (AGs) in both 2012 and 2014. AGs have always been limited by legacy Database Mirroring (DBM) code, from which they inherited two built-in performance bottlenecks:
- Log transport: This is a process that encrypts and compresses log traffic to secondary replica(s)
- Redo thread: Think about it as a continuous restore thread, applying changes made on the primary replica to secondary replica(s)
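Both of these bottlenecks are visible on a live system. As a sketch of my own (not from the session), you can watch the send and redo queues per replica with the standard AG DMV:

```sql
-- How far each secondary is behind: unsent log (log transport) and
-- unapplied log (redo thread), per database replica.
SELECT DB_NAME(drs.database_id)  AS database_name,
       drs.synchronization_state_desc,
       drs.log_send_queue_size,  -- KB of log not yet sent (log transport)
       drs.log_send_rate,        -- KB/sec being shipped
       drs.redo_queue_size,      -- KB of log not yet applied (redo thread)
       drs.redo_rate             -- KB/sec being applied
FROM sys.dm_hadr_database_replica_states AS drs;
```

If the redo queue keeps growing while the send queue stays small, the redo thread is the bottleneck, and vice versa.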
Thomas LaRock (Blog | Twitter) gave an amazing talk about Cardinality Estimates in Microsoft SQL Server 2014, where he showed the differences between a SQL Server 2012 and a 2014 instance when the optimizer picks the best plan. The optimizer tries to determine the least-cost plan, stopping when a good-enough plan is found, and it uses statistics at every stage to estimate cost. A query goes through parse, bind, optimize, and execute (the first three are logical steps), and clearly the execution engine matters far more here than the storage engine. Cardinality Estimates (CE) are the DNA of your queries. A lot of the scripts that he used to demo these differences were from this book.
- SQL 2014/2016 are the first versions to change the way the CE works; it had been unchanged since SQL Server 7.0.
- Legacy CE assumes Uniformity, Independence, Containment and Inclusion
- The new CE is more accurate because it relaxes several of those assumptions (for example, it allows for some correlation between predicates instead of assuming full independence)
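To see the two estimators side by side, here is a minimal sketch (the table name is a made-up placeholder). On a SQL Server 2014+ database at compatibility level 120 or higher, the new CE is the default, and the documented trace flag 9481 forces the legacy model for a single statement:

```sql
-- New CE (default at compatibility level 120+): note the estimated
-- row count in the execution plan.
SELECT COUNT(*)
FROM dbo.SalesOrders
WHERE Region = 'West' AND Status = 'Open';

-- Same query under the legacy (SQL Server 7.0 era) CE.
SELECT COUNT(*)
FROM dbo.SalesOrders
WHERE Region = 'West' AND Status = 'Open'
OPTION (QUERYTRACEON 9481);
```

With correlated predicates like these, the legacy CE multiplies the two selectivities (full independence), while the new CE backs off, so the estimated row counts in the two plans will usually differ.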
Tim Ford (Blog | Twitter) presented How Good Is Your Indexing Strategy? He went over the tools that can help you determine whether your indexes are decent. A good index leads to faster reads, enforces uniqueness, and benefits I/O greatly (scans lead to loading data from disk into memory). Tim had a great slide deck, and he also talked about how indexing can get in the way of performance: writes/updates/deletes take a double hit on I/O, and indexes increase the space used by the table. He went over the indexing DMOs, as seen below.
What makes a good index? One that is actually being used, has a high read-to-write ratio, is narrow, has healthy fragmentation, and has a reasonable fill factor. The table he put together (shown in his deck) walks through how to read each of the indexing DMOs.
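As a rough example of the kind of DMO query this involves (my own sketch, not Tim's slide), the usage-stats DMO shows reads versus writes per index in the current database:

```sql
-- Reads (seeks/scans/lookups) vs. writes (updates) per index;
-- an index with many writes and few reads is a candidate for removal.
SELECT OBJECT_NAME(us.object_id)                       AS table_name,
       i.name                                          AS index_name,
       us.user_seeks + us.user_scans + us.user_lookups AS reads,
       us.user_updates                                 AS writes
FROM sys.dm_db_index_usage_stats AS us
JOIN sys.indexes AS i
  ON i.object_id = us.object_id
 AND i.index_id  = us.index_id
WHERE us.database_id = DB_ID()
ORDER BY writes DESC;
```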
Thomas Grohser (Blog | Twitter) finished the day off with Establishing an SLA. I liked the presentation a lot, as it was very upfront, direct, and no-nonsense. A good Service Level Agreement will look at some of these topics:
- How much data can we lose? (Recovery Point Objective) Yes, the aim is to lose zero, but you need to be realistic and stay within your budget.
- The SLA needs to include testing the backup process; this validates the plan periodically and determines the actual time the task takes (databases grow, and so will restore times).
- Don’t confuse luck with availability!!
- The most realistic target is 99.7%; keep in mind that 99.9% requires 3 separate data centers (see the downtime math below).
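To put those availability numbers in perspective, here is the simple downtime arithmetic (my own back-of-the-envelope, not a slide from the talk):

```sql
-- Hours of allowed downtime per year implied by each availability target.
SELECT (1.0 - 0.997) * 365 * 24 AS hours_down_at_99_7,  -- ~26.3 hours/year
       (1.0 - 0.999) * 365 * 24 AS hours_down_at_99_9;  -- ~8.8 hours/year
```

Going from 99.7% to 99.9% cuts the downtime budget by roughly a factor of three, which is why the infrastructure cost climbs so steeply.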
To begin with, he made the following suggestions:
- Rule #1: define the SLA first and then determine a solution (not the other way around)
- You need to agree on the reality of when the system counts as available for use (you can't just say "when SSMS loads" or "when I can see the database online")
- The SLA should cover operational requirements, maintenance windows, responsibilities, and dependencies
- Most importantly, if the SLA is not met: "What are the contingency plans?"