SQL Sentry v8.2: Adaptive Web UIs and Fast Query Capture

Greg Gonzalez

Published On: October 30, 2014

Categories: Cloud, PASS Summit, SentryOne Builds, Performance Analysis

What do these two major new features have in common, you may ask? Aside from shipping in the same release, not much, other than the fact that they are both extremely cool and unique! In this post, I will provide a rundown of each.

Download SQL Sentry v8.2 here: New Users | Existing Users

Adaptive Web UIs

You may have heard talk of our new web portal, cloud.sentryone.com, introduced earlier this year in v8.0. With it, you can now easily and securely synchronize your on-premises SQL Sentry data to our cloud. This opens up a world of possibilities, including:

  • Web access to:
    • slick new mobile, tablet, and big-screen interfaces
    • the Performance Advisor Overview, displaying key metrics for all monitored servers
    • various SQL Sentry reports, from performance to disk space
    • the custom condition Events Log
  • Easy, controlled sharing of your SQL Sentry data
  • A single, unified interface for consulting and remote DBA services partners to manage all of their SQL Sentry environments
  • Custom conditions that can be synchronized on demand across all of your SQL Sentry environments, or shared with the community (coming soon)

You enable cloud sync via the cloud.sentryone.com node in the SQL Sentry Client. If you already have an account on answers.sqlperformance.com, you can use it here. After the first successful sync, log in to cloud.sentryone.com to see your data, share it, etc. For more details, see the Online Help.

If you log in from a desktop, you will see a page listing all SQL Sentry environments either owned by you or shared with you. For each v8.2-or-higher database, you can choose one of two paths, Server Status or Server Details:

[Screenshot: the environment list on a desktop browser]

Server Details is a desktop-oriented UI which includes the Performance Advisor Overview, Events Log, Reports, and sharing features. Server Status is the new adaptive/responsive UI released in v8.2, and is the one I will focus on here.

When logging in via a mobile device or tablet, the Server Status UI is always shown. It was developed using the latest responsive/adaptive web technologies to work across a wide variety of platforms, from the smallest of mobile devices to the largest of big screens. It provides an at-a-glance, high-level view of the health of the entire environment, with easy access to the details. Whereas many of our interfaces are oriented directly towards the DBA, this one was designed to appeal to the broader audience of DBAs, IT admins, and even IT managers.

Here is a shot on my desktop running at 1920x1200:

[Screenshot: the Server Status view at 1920x1200]

Each circle, or "wedget" as they are affectionately known internally, represents a server... in this case, 130 servers. ;-) The outer border and background color represent the relative health of the server, going from green (good) to red (bad), with a numeric "score" in the middle. As conditions worsen, a server will change color and move towards the upper left of the screen.

Here is a shot from my Samsung Galaxy S5:

[Screenshot: the same Server Status view on a Samsung Galaxy S5]

Yes, this is the same UI! It adapts automatically based upon the device type and resolution. From there you can use the toolbar to easily control various other display options:

[Screenshot: the display options toolbar]

There are three different views available from the header dropdown, with multiple grouping options for the Events-based views:

[Screenshot: the view selector dropdown]

  • Wait Time / Session – The current wait time per user session in milliseconds for the SQL Server instance, broken down by the major resource categories (network, CPU, memory, disk, other). This measure is often a great indicator of how fast or slow a server is running from an end-user perspective.
  • Events by Severity – Server health using a 100-based score, organized by the severity levels of the associated custom conditions.
  • Events by Tag – Server health using a 100-based score, organized by the tags used by the associated custom conditions (currently limited to network, CPU, memory, disk, other).

Clicking (or pressing) on any server presents the details view, which should help to clarify the relationship between the "categories", circle wedges, and custom conditions:

[Screenshot: the server details view]

Over this date range (24 hours by default) there were 7 events (custom conditions evaluating to true for this server). The tags assigned to the conditions drive the numbers shown in the colored boxes, which represent the count of events for each tag. The severity levels of those conditions (Medium and Low here) are used to drive the overall score of 83.

Tags and severity levels are adjustable in the Conditions List in the SQL Sentry Client, and scores are calculated using the logic described here. Astute readers may note that the colors mirror the colors used in the desktop software.

There are too many other layout options to fully cover here, so I encourage you to experiment and let us know which views work best for you! For more details, or to test drive a demo account, go here.

Fast Query Capture

Performance Advisor for SQL Server has always had a couple of ways to capture high impact queries: Top SQL and QuickTrace. The two look very different because they serve very different purposes. Top SQL is a lightweight trace that runs continuously and captures heavier queries exceeding specified filter thresholds (Duration > 5 seconds by default). QuickTrace is run on demand to capture all T-SQL activity occurring during a short window, including fast running queries that fall below the Top SQL filter threshold.

Problem is, QuickTrace can't collect these fast queries all the time – continuous collection via trace and even Extended Events is just too heavy. This is where the new Query Stats Collection comes in. It uses an extremely lightweight collection mechanism which can safely be run 24/7. Even though it captures all queries exceeding the scaled filter thresholds, I've been using the term Fast Query Capture because it is more representative of the unique value delivered by the feature.

Instead of trace or XE, we periodically sample the query stats and procedure stats DMVs and capture any queries exceeding the Top SQL filters on a cumulative basis. This is an extremely lightweight way of continuously collecting fast running queries which heretofore would not have appeared in Top SQL. They show up on two new Top SQL subtabs:

[Screenshot: the Procedure Stats subtab]

[Screenshot: the Query Stats subtab]

Selections on the two tabs are synchronized, so related procedures and queries are always highlighted when switching between tabs as shown above. Currently the grids show the cumulative impact of the queries over the active date range (set on the toolbar). In the upcoming v8.3, the grids have been enhanced to allow choosing either this view or a details view with individual samples for each query.
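The idea behind this kind of DMV-based collection is cumulative-counter sampling: columns like total elapsed time in the query stats DMV only ever increase for a cached plan, so each periodic sample is diffed against the previous one to recover what happened during that interval alone. Here is a minimal sketch of that idea in Python; the function names, the query identifiers, and the in-memory dictionaries are all my own illustration, not SQL Sentry's implementation:

```python
def interval_deltas(prev_sample, curr_sample):
    """Each sample maps a query identifier to its cumulative counter value
    (e.g. total elapsed seconds, as reported by a query stats DMV).
    Returns the amount accrued during this interval alone."""
    return {
        query_id: total - prev_sample.get(query_id, 0)  # new plans start from 0
        for query_id, total in curr_sample.items()
    }

def fast_queries(deltas, effective_threshold):
    """Keep only queries whose cumulative cost over the interval exceeds
    the scaled Top SQL filter threshold."""
    return {q: d for q, d in deltas.items() if d > effective_threshold}

# Two successive samples, three minutes apart (values are cumulative seconds)
prev = {"qA": 100.0, "qB": 40.0}
curr = {"qA": 130.0, "qB": 41.0, "qC": 5.0}

print(fast_queries(interval_deltas(prev, curr), 27.0))  # {'qA': 30.0}
```

Note how qB and qC, despite running during the interval, fall below the threshold and are discarded; that per-interval filtering is what keeps continuous collection cheap.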

We didn't want to introduce a new set of filter settings for query stats collection. But the original Top SQL filters were intended to apply to a single execution of a query, and here we are dealing with multiple executions, so we needed some base time period over which to apply the existing filters – this is called the Filter Time Span. It works in tandem with the Query Stats Sample Interval setting to calculate a Filter Factor, which is used to scale the Top SQL filters for query stats collection. These new settings are highlighted below:

[Screenshot: the Query Stats collection settings]

Consider the following example (different from the settings shown above):

Top SQL collection filter: Min Duration > 3 seconds
Query Stats Sample Interval: 3 minutes
Filter Time Span: 20 seconds
Filter Factor (calculated): 9 (Query Stats Sample Interval / Filter Time Span = 3 minutes / 20 seconds)
Effective fast query filter: Total Duration > 27 seconds (Top SQL collection filter x Filter Factor = 3 seconds x 9)

This means that the total duration of all executions of a query over the 3-minute sample period must exceed 27 seconds in order to be captured and appear on the Procedure Stats and/or Query Stats tab for a given range. The same type of filtering can be applied to CPU or I/O consumed. This approach enables you to collect only those fast queries that are actually having a significant impact on server resources, and by adjusting the Filter Time Span you can reduce the filter thresholds as low as you want, without incurring significant additional overhead.
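The arithmetic above fits in a few lines. This Python sketch is purely illustrative (the function and parameter names are mine, not the product's), but the formula matches the example settings:

```python
def effective_fast_query_filter(top_sql_filter_s, sample_interval_s, filter_time_span_s):
    """Scale a per-execution Top SQL filter into a cumulative threshold
    that applies over one query-stats sample period."""
    # Filter Factor = Query Stats Sample Interval / Filter Time Span
    filter_factor = sample_interval_s / filter_time_span_s
    # Effective filter = Top SQL collection filter x Filter Factor
    return top_sql_filter_s * filter_factor

# The example settings: 3-second filter, 3-minute interval, 20-second time span
threshold = effective_fast_query_filter(3, 3 * 60, 20)
print(threshold)  # 27.0 -- total duration per sample period must exceed this
```

Shrinking the Filter Time Span raises the Filter Factor, and therefore the cumulative threshold, which is why it lets you lower the per-execution filters without collecting noise.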

As you may know, the query and procedure stats DMVs don't contain hosts, applications, or SPIDs, so you won't see those columns on the new tabs. However, using a little magic, we are able to show you the associated databases and objects, and this is sufficient for most use cases.

IMPORTANT
  • If you had previously reduced your Top SQL filter settings to very low values in order to capture fast running queries, you should be able to move them back up to normal levels after enabling query stats collection. This can dramatically reduce the associated network and storage overhead for individual queries captured via trace and shown on the Completed Queries subtab.
  • Query stats collection is enabled by default on new installs only. If you have an existing SQL Sentry install, be sure to enable Collect Query Stats under Top SQL Source settings. As always, this can be done at the global, group, or server level.

Come See Us at the Summit!

We will be in Seattle next week at the 2014 PASS Summit, demonstrating SQL Sentry v8.2 live. I may even be providing sneak peeks at some exciting features yet to be made public. ;-) I hope to see you there!

Download SQL Sentry v8.2 here: New Users | Existing Users

Greg is founder and Chief Scientist at SentryOne. He is a Microsoft developer by background and has been working with SQL Server since the mid 90s. He is perpetually engaged in the design and development of SQL Sentry, Plan Explorer and other SentryOne solutions for optimizing performance on the Microsoft Data Platform.
