Monitoring SQL Server Analysis Services (SSAS) Performance

 

Getting Started Monitoring SQL Server Analysis Services (SSAS) Performance

 

SQL Server Analysis Services (SSAS) is one of the most powerful analytics solutions Microsoft offers, and it has come a long way from its Online Analytical Processing (OLAP) origins. It’s one of the few products that can handle true end-to-end Business Intelligence (BI) workloads and has become the “aggregation engine of choice” for enterprise data. It has even been described as a “user interface to data.” Well-thought-out models hosted in SSAS can effortlessly highlight insights in Power BI and SQL Server Reporting Services (SSRS). Self-service reporting has never been easier.

If you’re responsible for SSAS, and you want to better manage its performance and availability, your first and most important task involves a bit of discovery. You need to understand where SSAS lives in your environment, where it gets data from, which users utilize it, what version it is, what mode it runs in, and if/when it’s processed or refreshed.

Troubleshooting a technical issue is next to impossible if you don’t understand the environment and the problem you’re troubleshooting. Attention to detail is critical when it comes to managing SSAS performance, which is why you should get details from logs—not humans. Logs usually don’t lie, and they provide specific timeframes, IP addresses, and error messages: in many cases, everything you need to fix an issue successfully.

Out of the box, SSAS doesn’t have the necessary logging/tracing set up for effective troubleshooting of the most common problems. The first thing you need to do each time you encounter an SSAS performance issue is to configure the correct traces and logs to see what SSAS is doing. You need to know if SSAS was processing during the issue, if it was queried, and which user was querying it.

 

Identifying Your SSAS Version and Mode

First, determine which SSAS version and mode you’re running. There are many options for doing so. For example, the following screenshot shows SSAS in SQL Server Management Studio (SSMS) in our example environment.

 

SSAS Version Shown in SSMS

 

Connect and select <Browse for more>; in the example, you can see there are two instances on the local machine. Select your default instance and then select Connect. The build numbers tell you which version you're running and which updates (e.g., service packs, cumulative updates) are installed. The following are examples of build numbers for each SQL Server version:

  • SQL Server 2017—14.xx.xxxx.xxx

  • SQL Server 2016—13.xx.xxxx.xx

  • SQL Server 2014—12.xx.xxxx.xx

  • SQL Server 2012—11.xx.xxxx.xx

You could also run SELECT @@VERSION in SQL Server (although that requires the Database Engine to be running), run a DMV query against SSAS and look at the DBMSVersion property, or run an XMLA query.
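If you want to check from a query window, here’s a minimal sketch of the first two approaches. The first statement is plain T-SQL run against the Database Engine; the second runs in an MDX or DAX query window connected to the SSAS instance (SSAS DMVs only understand a limited SQL-like dialect).

  -- T-SQL against the Database Engine: returns version, edition, and build
  SELECT @@VERSION;

  -- SSAS DMV: lists server properties; look for the DBMSVersion row
  SELECT * FROM $SYSTEM.DISCOVER_PROPERTIES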

Once you know which version you have, you need to determine the mode that SSAS is running in—Multidimensional, Tabular, or maybe even SharePoint. One easy way is to look at the icons; in the following screenshot, you can see that the Tabular icon resembles a table and the Multidimensional icon is a cube.

 

Tabular and Multidimensional Icons in SSMS

 

Multidimensional mode has been around for a while and can be challenging to understand. Tabular mode was introduced in SQL Server 2012 and uses relational modeling constructs and terms. 

Much has been written about the different modes or modeling types; to learn more, see “Comparing tabular and multidimensional solutions.”

 

Which Workloads Are Running in SSAS

Once you know which SSAS version and mode you’re running, you need to understand the status of its health and performance. To do so, you need to implement monitoring, and you need a record of what your SSAS server is doing.

In SSAS, a process aptly called “processing” pulls data from data stores and encodes, compresses, and aggregates that data into a highly optimized model. Processing can affect query performance, and querying can affect processing, so you need to know when data is processed and how much data is processed. You also need to know when the data is being queried, by whom, and how often.

If users are creating ad-hoc reports in Power BI or Tableau, you’ll want to keep an eye on how those reports affect SSAS. Ad-hoc reporting tools can sometimes generate poorly designed queries that degrade SSAS performance. You can prevent this degradation in most cases by designing your models for these reports (e.g., hide key columns, use synonyms in Power BI, correct hierarchies, appropriate data types, default aggregations).

The most common issue regarding processing and querying is when processing happens at the same time a query is running. You want to take steps to limit the effects of processing on querying. To determine which workloads are running in SSAS, you’ll need to set up traces or counters or install a product that gathers this information for you (such as SentryOne BI Sentry).
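If you just need a quick, point-in-time look before any tracing is in place, the SSAS DMVs can give you a rough picture of the current workload. The following queries are only a sketch (run them from an MDX or DAX query window against the instance; exact columns vary by version), but they show who is connected and which commands are executing right now.

  -- Who is connected and what each session last ran
  SELECT SESSION_SPID, SESSION_USER_NAME, SESSION_LAST_COMMAND
  FROM $SYSTEM.DISCOVER_SESSIONS

  -- Commands (queries and processing) currently executing
  SELECT SESSION_SPID, COMMAND_START_TIME, COMMAND_TEXT
  FROM $SYSTEM.DISCOVER_COMMANDS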

 

Capturing Trace and Perfmon Counters

When an SSAS performance issue is reported, start a lightweight trace and a Performance Monitor (Perfmon) log of specific counters. You can set up and capture this information yourself, but you will need to log the results to a repository you can easily query, and you need the ability to synchronize your Perfmon counters with your trace data. There are whitepapers that explain how to do this and GitHub samples that will get you started.
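Before adding a trace of your own, it’s worth checking which traces are already attached to the instance, because every active trace adds some overhead. A quick check, again just a sketch in the SSAS DMV dialect, looks like this:

  -- Lists the traces currently defined on the SSAS instance
  SELECT * FROM $SYSTEM.DISCOVER_TRACES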

One very easy way to do this is to install BI Sentry and add SSAS as a target. Doing so captures all the relevant traces and Perfmon counters to your SentryOne repository in SQL Server. If you decide to create your own traces and logs, be careful; it’s possible to grab too much information and make the problem worse. The SentryOne platform has evolved over the years to capture all the necessary information in the most lightweight way possible. It doesn’t install an agent on your SSAS machine, and our team of MVPs has tuned the repository to be extremely fast.

 

An Example Scenario

In this scenario, BI Sentry is the system of record (i.e., repository) for monitoring performance. A DBA with little SSAS knowledge has been volunteered to manage and monitor SSAS. The SSAS database was developed by a contractor and then handed off to the data team. The first few months following the deployment were uneventful, but recently the data team began hearing about report timeouts. In this environment, a reporting dashboard was set up to allow end users and some executives to create and run reports against the SSAS database. The reports are a mix of pre-built Power BI reports and ad-hoc reports that users have created to explore the data.

After looking in SSMS, you recognize that you’re running a SQL Server 2017 implementation of SSAS in Tabular mode. Since the issues first appeared, the IT department has examined the reports and determined that the poorly performing reports all query the SSAS Tabular model. As a result, BI Sentry was installed and set up to watch the SSAS database.

The screenshot below is from the SentryOne performance dashboard. This dashboard shows two things that give you a basic understanding of what could be happening.  

  1. You can tell right off the bat that the SSAS model is being processed (the spike in rows transferred).

  2. You can also tell that there is memory pressure (SSAS memory is spiking above the red memory limit), which forces SSAS to dump caches and causes subsequent queries to run slower (you can corroborate this with the DMV sketch after the dashboard screenshot).



SentryOne Performance Analysis Dashboard
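If you want to corroborate the memory pressure outside the dashboard, SSAS exposes a memory DMV you can query at the time of the spike. This is only a rough sketch; the shrinkable and non-shrinkable columns per object give you an idea of where memory is going at that moment.

  -- Memory held by each object on the instance (shrinkable and non-shrinkable)
  SELECT * FROM $SYSTEM.DISCOVER_OBJECT_MEMORY_USAGE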

 

Now that you know what could be causing the performance issues, you need to identify the cause of the spike in memory pressure and see which queries were executing at that time. This can be easily accomplished by highlighting the spike in the Performance dashboard and selecting Jump To and then Calendar.

In the calendar view, you can see that the processing command is being kicked off at the same time several Power BI reports are running. After evaluating the queries more closely, you realize they report on a lot of high-cardinality fields that should have been hidden. Many key columns are visible and end up included in some reports. Most reports don’t need to show CustomerID or ProductID; it’s unlikely a report writer will want to slice on them. You should hide these types of columns from the reporting tools so that only columns that will be reported on are visible. Doing so is easy and can make quite a difference in performance.
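If you’re not sure which columns are the worst offenders, the Tabular storage DMVs can point you at them. The query below is a sketch (run it from an MDX or DAX window against the model’s database); it lists columns by dictionary size, which is a reasonable proxy for cardinality and memory cost.

  -- Largest column dictionaries first; big dictionaries usually mean
  -- high-cardinality columns such as CustomerID or ProductID
  SELECT DIMENSION_NAME, ATTRIBUTE_NAME, DICTIONARY_SIZE
  FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMNS
  WHERE COLUMN_TYPE = 'BASIC_DATA'
  ORDER BY DICTIONARY_SIZE DESC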

 

Hiding Columns

 

Then, you can change the processing commands to run during the middle of the night when no one is querying the database.  
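What that scheduled processing command looks like depends on how the model is refreshed, but for a Tabular model it’s typically a TMSL refresh script run from a SQL Server Agent job (using the SQL Server Analysis Services Command job step) or another scheduler. A minimal sketch of a full database refresh, assuming the database is named SalesModel, would look like this:

  {
    "refresh": {
      "type": "full",
      "objects": [
        {
          "database": "SalesModel"
        }
      ]
    }
  }

Scheduling a command like this for a quiet overnight window keeps processing out of the way of report queries.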

Once you implement these two simple changes, your processing and querying workloads no longer interfere with one another, and the generated queries have a much smaller memory footprint because they no longer include the key columns.

 

Test Data

For this example scenario, test data was created using TPC-DS. The following is a brief overview of the process:

  1. Download TPC-DS tools.

  2. Build and execute TPC-DS to generate data.

  3. Connect to SQL Server and run tpcds.sql to create the data warehouse tables.

  4. Run tpcds_source.sql to create source/staging tables.

  5. Use the .sql scripts to create and load the data (there is a how-to guide .docx included with the tools).

 
