SQL Sentry v8 Technology Update
Published On: March 14, 2014
Categories: SQL Sentry, SentryOne Builds
I haven't blogged in a while, but I wanted to share some important and exciting news about SQL Sentry. SQL Sentry v8 has been in development for quite some time and contains a number of significant new features, including custom conditions.
To get right to the point, SQL Sentry v8 will have a system requirements change, specifically with regard to the operating system version used for both the Monitoring Service and the SQL Sentry Client. There are no system requirements changes for watched targets.
These are the minimum operating system requirements as they exist today in SQL Sentry 7.5:
- Windows XP SP3
- Windows Server 2003 SP2
- Windows Vista SP1 or later
- Windows Server 2008
- Windows 7
- Windows Server 2008 R2
These are the minimum operating system requirements for SQL Sentry 8.0:
- Windows Vista SP2 (x86 and x64)
- Windows 7 SP1 (x86 and x64)
- Windows Server 2008 R2 SP1 (x64)
- Windows Server 2008 SP2 (x86 and x64)
Support for Windows XP and Windows Server 2003 has been dropped. The reason is that we have decided to target .NET 4.5 instead of .NET 4.0, and .NET 4.5 does not run on those operating systems. In this post I wanted to share some of the reasons for making that move.
With SQL Sentry v8 we are introducing custom conditions. Custom conditions can be created on-demand and assigned to any number of watched targets in your enterprise. The details on how those are created and configured are outside the scope of this post, but within a custom condition you can evaluate performance counters, run SQL Server queries, perform arithmetic operations, etc. For more details, see Greg Gonzalez's posts on the major new features in v8.
With that in mind, envision the following scenario:
- You create a custom condition.
- You add a SQL Server query to the custom condition that returns a result.
- You apply the custom condition at the Global level, so it will run against every computer in your enterprise.
So, you need at least X executions of the custom condition where X is equal to the number of monitored targets. Now, let's assume you have C custom conditions with one statement each. It will therefore take X * C statement evaluations for all the conditions to be evaluated.
Now, let's assume that each SQL Server query takes T seconds to run. If we evaluate the conditions serially across all servers, it will take X * C * T seconds to evaluate all conditions on all servers. The thing is, we don't know what X, C, or T ultimately will be. We can't make any guarantees about the execution time of the queries, nor can we about how many queries will be in a condition, or how many places that condition will be evaluated.
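To make the serial cost concrete, here is a tiny sketch of that arithmetic (Python rather than .NET, and the workload numbers are hypothetical, since we can't know X, C, or T in advance):

```python
def serial_seconds(targets: int, conditions: int, query_seconds: float) -> float:
    """Worst case: every condition's query runs against every target, one at a time."""
    return targets * conditions * query_seconds

# Hypothetical workload: 200 targets, 10 conditions, 1-second queries.
print(serial_seconds(targets=200, conditions=10, query_seconds=1.0))  # 2000.0
```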
Our first instinct is to parallelize this problem. We could partition across X threads, one per target, so each thread does C * T work, but if we have 100 servers we could end up with a lot of threads. We could instead schedule the work in a queue over a limited thread pool, limiting concurrency and reducing context switching, which yields roughly (C * T) * (X / PROCESSOR_COUNT) seconds of work; however, if C * T takes a long time, we potentially create a work backlog. Let's say we have 10 conditions and each one takes a second to run, so C is 10 and T is 1. If we have 200 watched targets, then X is 200 and it will take (C * T) * (X / PROCESSOR_COUNT), or 2000 / PROCESSOR_COUNT seconds, to get through all the evaluations. That's 250 seconds on an 8-core machine. We could bump the pool up to 100 threads, but it would still take 20 seconds, and we'd pay a lot of overhead for all those extra threads.
So there's a dilemma. Either we ramp up the number of threads to a huge number so we can run everything in parallel, or we wait using a smaller number of threads. This is where .NET 4.5 helps, in a big way. .NET 4.5 contains a very clean pattern for exposing asynchronous IO-bound operations via the async framework. Going back to our previous example, let's think about what one thread would be doing:
1. Run initialization code
2. Execute a query and wait for the result
3. Return results
Now what's actually happening is that steps 1 and 3 run really, really fast, and that's where we need the CPU, but step 2 is what takes the vast majority of the time, and that's where we're actually waiting. What this ultimately means is that if we do massively parallelize our operation, we're only doing so to create a bunch of threads that are all waiting.
By using async, the thread is actually freed at step 2 and able to do other work. This means that if we have another condition that needs to evaluate, it can use that time to run steps 1 and 3.
In the example above, it means we can evaluate all X * C queries in roughly T seconds, using as few threads as possible. More threads will be used if multiple executions happen to be running steps 1 and 3 simultaneously, which is certainly possible, but much less likely when the time spent in steps 1 and 3 is small relative to step 2.
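The same idea can be illustrated with Python's asyncio, which follows the same async/await pattern as .NET 4.5 (this is an analogy, not our actual implementation; the server names and sleep standing in for the query are made up):

```python
import asyncio
import time

async def evaluate_condition(target: str) -> str:
    # Step 1: initialization (fast, CPU-bound).
    query = f"SELECT status FROM {target}"
    # Step 2: the I/O wait -- the thread is released here instead of blocking.
    await asyncio.sleep(0.05)  # stands in for the query round-trip (T)
    # Step 3: return the result (fast, CPU-bound).
    return f"{target}: ok"

async def main() -> list[str]:
    targets = [f"server{n}" for n in range(200)]
    # All 200 "queries" overlap on one thread; wall time is ~T, not 200 * T.
    return await asyncio.gather(*(evaluate_condition(t) for t in targets))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
print(len(results), "results in", round(elapsed, 2), "seconds")
```

Running this shows 200 results completing in roughly the time of one simulated query, because the event loop interleaves every step-2 wait.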
This was the primary reason we decided to move to .NET 4.5 now, but async also allows us to free up threads in other areas, such as trace collection, the blocking queue pipelines behind some of our real-time disconnected async producer/consumer scenarios, and connection testing.
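For the producer/consumer case, a bounded queue plus async works the same way: the consumer's thread is released while the queue is empty, and the producer's while it is full. Here is a minimal asyncio sketch (again an analogy to the .NET pattern, with made-up event names):

```python
import asyncio

async def producer(queue: asyncio.Queue) -> None:
    # Stands in for trace events arriving from a collector.
    for n in range(5):
        await queue.put(f"event-{n}")
    await queue.put(None)  # sentinel: no more work

async def consumer(queue: asyncio.Queue) -> list[str]:
    seen = []
    while (item := await queue.get()) is not None:
        seen.append(item)  # the thread is free whenever the queue is empty
    return seen

async def main() -> list[str]:
    queue: asyncio.Queue = asyncio.Queue(maxsize=2)  # bounded, like a blocking queue
    results, _ = await asyncio.gather(consumer(queue), producer(queue))
    return results

print(asyncio.run(main()))  # ['event-0', 'event-1', 'event-2', 'event-3', 'event-4']
```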
SQL Sentry v8 is just the beginning of this transition. We will continue to leverage the powerful features in .NET 4.5 to further our goal of delivering the best-performing, most advanced monitoring software available.
Brooke (@Macromullet) has led the development team for SentryOne since the company's inception. His strengths in multi-threaded programming, visual component development, network programming, and database development have been instrumental to the success of the SentryOne platform.