Have You Been Using Runtime Statistics?
When thinking about what to write about for my first official blog post, I knew I wanted it to be something that could help our users get more out of our products. A great feature that I find is overlooked quite frequently by SQL Sentry users is Runtime Statistics. The Runtime Statistics graph lets you see the historical execution durations for a particular statement all in one place. My hope is that this post shines a light on this great feature and provides some guidance on how to use and navigate it.
There are several ways to get to Runtime Statistics for a job or Top SQL event. Through the Navigator pane, you can get to Top SQL Runtime Statistics by right-clicking a particular instance node > Jump To.. > Top SQL Runtime Stats. For jobs, simply drill down to and expand a particular job node, and double-click the Runtime Stats node associated with it.
To open the Runtime Statistics page for a specific Top SQL statement, you will first want to open the Top SQL tab for a monitored connection. Once there, find a statement that interests you or that you wish to analyze further. Right-click the statement > Jump To.. > Runtime Stats. The Runtime Statistics graph will open, automatically scoped to the statement you selected in Top SQL. You can also pull up the Runtime Statistics graph for jobs through the Calendar. In this scope, find a job on the Event Manager Calendar > right-click the event > Jump To.. > Runtime Stats. If you are monitoring a connection with both Performance Advisor and Event Manager, then you can use the same steps on the Calendar for a particular Top SQL event.
As you can see from the screenshot above, each execution is plotted on the graph with start time on the X-axis and execution duration on the Y-axis. The graph overlays the Minimum, Maximum, and Average runtime for the selected statement or job, giving quick visibility into execution performance relative to the best, worst, and typical runtimes for a particular event.
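To make the overlays concrete, here is a minimal sketch (in Python, using made-up sample data, not SQL Sentry output) of the minimum, maximum, and average runtimes the graph superimposes on the individual executions:

```python
# Hypothetical sample: (start time, duration in seconds) for recent executions
# of a single statement. One run is an outlier, as often happens in practice.
executions = [
    ("Day 1 02:00", 12.4),
    ("Day 2 02:00", 11.8),
    ("Day 3 02:00", 45.2),  # an unusually slow run
    ("Day 4 02:00", 12.1),
]

durations = [d for _, d in executions]
minimum = min(durations)
maximum = max(durations)
average = sum(durations) / len(durations)

print(f"Min: {minimum}s  Max: {maximum}s  Avg: {average:.1f}s")
```

Note how a single slow run pulls the average well above the typical runtime, which is exactly the kind of pattern the overlaid lines make easy to spot at a glance.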
Just below the graph you will find some options for manipulating it. “Interval Type” sets seconds or minutes as the unit of execution duration. “Auto-Scale To Max Value” does just what it says: it auto-scales the graph into a best-fit view based on the maximum value in the result set you are viewing. “Show Value Labels” displays the execution duration for each event on the graph, so you don't have to hover over each point for the tooltip.
If the “Auto-Scale To Max Value” setting is unchecked, you have the ability to explicitly set the Y-axis. This will be measured in minutes or seconds depending on the “Interval Type” selected. This can be useful if you need to zoom in for some additional visual granularity, or if you have an anomalous execution time that is far longer than the rest of the data set for a particular event that makes the auto-scale undesirable.
Above the graph are additional settings, particularly those for adjusting the time window you are viewing. To the far left you can set a reference point in time. Note that this timestamp is the ending point of the time frame you wish to view (the furthest point to the right on the X-axis). The “Go” button just to the right of it executes the change.
Next on this toolbar are the view intervals, the length of time you would like to view back from the reference point you have defined. Since the reference point is always the ending point, each view extends backwards in time from it. In order, these views are 1 minute, 10 minutes, 1 hour, 4 hours, and 1 day. You can also increase the number of days in the view by increasing the “Days” value. The green arrows shift the time window by an interval that depends on the current view, letting you move through time without re-entering an explicit reference point.
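The arithmetic behind the reference point and view intervals can be sketched as follows (a hypothetical illustration of the "reference point is the end of the window" rule described above, with made-up dates):

```python
from datetime import datetime, timedelta

# Hypothetical reference point: the END of the viewed time frame.
reference_end = datetime(2023, 1, 15, 9, 0)

# The view intervals extend backwards from the reference point.
view_intervals = {
    "1 minute": timedelta(minutes=1),
    "10 minutes": timedelta(minutes=10),
    "1 hour": timedelta(hours=1),
    "4 hours": timedelta(hours=4),
    "1 day": timedelta(days=1),
}

interval = view_intervals["4 hours"]
window_start = reference_end - interval
print(window_start, "->", reference_end)

# Clicking a green arrow is effectively moving the reference point
# by one interval, with no need to re-type a timestamp:
previous_window_end = reference_end - interval  # one window back
```

With a 4-hour view and a 9:00 AM reference point, the window shown runs from 5:00 AM to 9:00 AM.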
There are several options for further analysis of a particular event through the right-click menu. The “Open” option will take you to a Plan Explorer session, where you can view the plan for that particular execution (if we collected it). One use case for this is for comparing the plan used for events with extended runtimes, to those that stay closer to the average runtime. This could help point out poor indexes, parameter sniffing, or bad statistics for a particular parameter set.
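As a rough illustration of the kind of comparison this enables (hypothetical data and plan labels, not SQL Sentry output), grouping executions by the plan they used and comparing the average duration per plan makes a sniffed or otherwise poor plan stand out immediately:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical executions of one statement: (plan identifier, duration in
# seconds). Two different plans, e.g. the result of parameter sniffing.
executions = [
    ("plan_A", 2.1), ("plan_A", 2.3), ("plan_A", 1.9),
    ("plan_B", 41.0), ("plan_B", 38.5),
]

by_plan = defaultdict(list)
for plan, duration in executions:
    by_plan[plan].append(duration)

for plan, durations in sorted(by_plan.items()):
    print(plan, f"avg {mean(durations):.1f}s over {len(durations)} runs")
```

Seeing that the extended runtimes all share one plan while the near-average runs share another is a strong hint to inspect the slow plan for missing indexes, sniffed parameters, or stale statistics.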
Then you have the “Jump To” option, which you might recognize from other places in SQL Sentry. The “Jump To” function has always been one of my favorite features (Justin Randall has been known to sing its praises as well). It can be used to bounce between different data sets and features within SQL Sentry, allowing users to get deeper insight into the different moving parts of an environment and make correlations based on this information. Some highlight examples include jumping to the Calendar, Dashboard, or Top SQL. Users can jump to the Calendar to see what other events were occurring during this particular execution, jump to the Dashboard to get a picture of overall resource utilization while the selected event executed, or jump to Top SQL to see the runtime statistics for all of the statements executing around the same time.
The “Add/Edit Notes” option allows users to add notes to certain events, and even have these notes included in Top SQL: Duration Threshold Max alerts. This can be helpful for letting other users know that there is a known issue, that someone is already looking into it, or just notes on troubleshooting done thus far.
The “Graph View,” which is also part of the top toolbar, allows users to switch to an aggregate view. This view visualizes execution counts for ranges of execution durations, providing an additional high-level view of a statement's performance.
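The aggregate view is essentially a histogram of durations. A minimal sketch of the bucketing idea (hypothetical durations and bucket boundaries, not SQL Sentry's actual ranges):

```python
# Hypothetical execution durations in seconds for one statement.
durations = [11.8, 12.1, 12.4, 13.0, 14.2, 20.5, 45.2]

# Duration ranges, as (inclusive low, exclusive high) bounds in seconds.
buckets = [(0, 10), (10, 15), (15, 30), (30, 60)]

counts = {
    f"{lo}-{hi}s": sum(1 for d in durations if lo <= d < hi)
    for lo, hi in buckets
}

for label, count in counts.items():
    print(label, count)
```

Most executions land in the 10-15s bucket, with a couple of stragglers in the higher ranges, which is the shape the aggregate view lets you see without scanning individual points.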
Below the graph is a grid listing captured Top SQL statements. Note that this grid is only available for Top SQL statements collected by Performance Advisor. Simply selecting a different statement changes the scope of the graph to that statement. The grid also provides some useful information, such as the database a given statement was executed against and an execution count. The real treasure here is the “Max Duration Threshold” and “Override” feature. By checking the “Override” box for a particular statement, you can set an individual runtime threshold for alerting, without having to create filtered duplicate Top SQL: Duration Threshold Max conditions. Note that the explicit duration and percent thresholds cannot both be honored at the same time; the explicit duration always takes priority. Also note that when viewing Runtime Statistics for a job, the grid is not available. To switch to a different job's events, you will have to do so explicitly through the Calendar or the Navigator pane.
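The priority rule between the two threshold types can be sketched like this (a hypothetical illustration of the behavior described above, not SQL Sentry's actual implementation; the function name and parameters are mine):

```python
def max_duration_threshold(baseline_avg_s, percent_threshold=None,
                           explicit_override_s=None):
    """Return the alerting threshold in seconds.

    Hypothetical sketch of the priority rule: an explicit duration
    override always wins over a percent-of-average threshold.
    """
    if explicit_override_s is not None:
        return explicit_override_s
    if percent_threshold is not None:
        return baseline_avg_s * (1 + percent_threshold / 100)
    return None  # no threshold configured

# A statement averaging 10s with a 50% threshold would alert at 15s...
print(max_duration_threshold(10, percent_threshold=50))
# ...but an explicit 30s override takes priority over the percent rule:
print(max_duration_threshold(10, percent_threshold=50, explicit_override_s=30))
```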
Note that what is available for the Runtime Statistics page is dependent on whether you are looking at Top SQL statements or jobs, and which products you are monitoring a given connection with. If you are monitoring a connection with just Performance Advisor, then you will only be able to open up the Runtime Statistics for Top SQL statements, and it will only show the grid, as the graph is dependent on a connection being monitored with Event Manager. If you are monitoring a connection with only Event Manager, you can only open up Runtime Statistics for jobs. If you are monitoring a connection with both Performance Advisor and Event Manager, then you will have full functionality. This includes the graph for Top SQL statements, the ability to open Runtime Statistics for jobs, and being able to jump to Runtime Statistics for Top SQL events on the Calendar.
We have done a quick walkthrough of yet another SQL Sentry feature that helps our users gain greater insight into their environments and make correlations to improve overall performance. I hope that those of you who knew about this feature learned something new, and for those of you who were unaware Runtime Statistics existed, enjoy the new toy. As always, please feel free to contact our dedicated Support team with any questions you might have about Runtime Statistics, or any other questions about SQL Sentry products.
Patrick is a Customer Success Engineering Manager and helps to provide customers with support services, troubleshooting, and defect resolution. Patrick also does some side work with SentryOne’s Professional Services team, who provide training and consulting. Patrick’s blog focus is on helping SQL Sentry users get the most out of our products. His hope is to ensure our users are as knowledgeable and comfortable as possible with SQL Sentry products, so they have as much ammo as possible to solve real world SQL Server problems.