How Does SQL Server 2019 Accelerated Database Recovery Affect TempDB?
Published On: June 23, 2020
Categories: tempdb, SQL Server
You might have heard me talk about tempdb parasites in the past, or maybe you've read my blog post on the same topic, "Be Mindful of SQL Server Tempdb Use (aka Tempdb Parasites!)." I know that at least one person did, because they recently asked a great question.
In that blog post, I reviewed how triggers use the version store in tempdb for access to the "special" trigger tables we can use from within the trigger code to access the previous and new versions of data being modified. One person on Twitter was wondering if that is still the case for triggers on databases using Accelerated Database Recovery (ADR) in SQL Server 2019. I really wasn't sure, so I decided to find out.
Persisted Version Store
Enabling ADR in SQL Server 2019 creates a persisted version store (PVS) inside the database itself. The version store in tempdb is gone after a restart, but the PVS survives with the database, and it's what allows ADR to roll back in-flight transactions almost instantly.
The PVS is the part we're interested in for this blog post. We want to see if having the PVS on the database from ADR will change how the version store is used for triggers.
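For reference, tempdb's version store has its own DMV, so we can watch both sides. This query is my own addition rather than part of the original test, but it's a quick way to see how much tempdb version-store space each database is using (sys.dm_tran_version_store_space_usage is available in SQL Server 2017 and later):

```sql
-- Space used in tempdb's version store, per database
SELECT DB_NAME(database_id) AS database_name,
       reserved_space_kb
FROM sys.dm_tran_version_store_space_usage
ORDER BY reserved_space_kb DESC;
```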
Turn Accelerated Database Recovery On
I already have a test set up from the tempdb parasites blog post to show triggers using tempdb. I'll use that same test after turning ADR on for the database. We can turn it on using a simple T-SQL command:
ALTER DATABASE tests SET ACCELERATED_DATABASE_RECOVERY = ON;
We can check to see that it took with a new column in sys.databases:
SELECT [name],
       is_accelerated_database_recovery_on
FROM sys.databases;
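The table and trigger come from the tempdb parasites post, and I won't reproduce them exactly here. If you want to follow along, a minimal stand-in might look like this (the shape of the table is an assumption on my part; the names simply match what the trigger references):

```sql
USE tests;
GO
-- Minimal stand-in for the test table from the tempdb parasites post
CREATE TABLE dbo.test
(
    id   INT IDENTITY(1,1) NOT NULL,
    data NVARCHAR(MAX) NULL
);

-- Load enough wide rows that the trigger's tempdb use is measurable
INSERT dbo.test (data)
SELECT TOP (100000) REPLICATE(N'x', 4000)
FROM sys.all_columns AS a
CROSS JOIN sys.all_columns AS b;
```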
I can use the same table and trigger I used from my tempdb parasites test, but first I want to make sure the PVS is doing its thing. I can run a transaction and use a new DMV to see that:
SELECT persistent_version_store_size_kb
FROM sys.dm_tran_persistent_version_store_stats
WHERE database_id = DB_ID('tests');
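To exercise the PVS, any open transaction that modifies rows will do. Here's a sketch of that kind of transaction (not the exact one I ran):

```sql
-- Modify rows inside a transaction, then check the PVS mid-flight
BEGIN TRANSACTION;

UPDATE dbo.test
SET data = REPLICATE(N'y', 4000);

SELECT persistent_version_store_size_kb
FROM sys.dm_tran_persistent_version_store_stats
WHERE database_id = DB_ID('tests');

-- With ADR, rollback completes almost instantly regardless of size
ROLLBACK;
```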
Does the Trigger Still Use TempDB?
This is the trigger I used in my tempdb parasites blog post. It runs on update for a table called "test." While it runs, it prints a message telling me how much space this process is taking up in tempdb:
USE tests;
GO
CREATE TRIGGER tUpdateTest ON dbo.test
FOR UPDATE
AS
DECLARE @var NVARCHAR(MAX);
DECLARE @pages INT;

-- internal_objects_alloc_page_count is in 8KB pages; database_id 2 is tempdb
SELECT @pages = internal_objects_alloc_page_count * 8 / 1024
FROM sys.dm_db_task_space_usage
WHERE session_id = @@SPID
  AND database_id = 2;

PRINT '[During MBs] ' + CAST(@pages AS VARCHAR);

-- Touch the "deleted" trigger table, which is served from the version store
SELECT @var = deleted.[data] FROM deleted;
GO
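The update that fires the trigger isn't shown in the post; something along these lines, assuming a dbo.test table with a [data] column, is all it takes:

```sql
-- Update every row so the trigger's "deleted" table is large
UPDATE dbo.test
SET data = REPLICATE(N'z', 4000);
```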
After enabling ADR on the database and making sure it's working, I get the following output from running an update:
Using this bit in the trigger:
SELECT @pages = internal_objects_alloc_page_count * 8 / 1024
FROM sys.dm_db_task_space_usage
WHERE session_id = @@SPID
  AND database_id = 2
I'm informed that we've put roughly 493MB into tempdb. That tells me that the trigger is indeed still using the tempdb version store.
I wondered if the PVS was also active during this update, so I checked that, too. I did all of this in sequence, with nothing else running against the database. The PVS was previously 200KB, and I expected it to be much larger now:
It is bigger indeed. I've added about 70MB in version data.
What If the Trigger Has a Transaction?
Next, I wondered if this might change at all if my trigger actually did something more than read in stats about tempdb, so I modified it a bit.
First, I made this table to put some data into while the trigger runs:
CREATE TABLE [dbo].[PVSTest]
(
    [id]   [int] IDENTITY(1,1) NOT NULL,
    [when] DATETIME2,
    [data] [nvarchar](50) NULL
);
Then, I added an insert to the trigger:
USE tests;
GO
ALTER TRIGGER tUpdateTest ON dbo.test
FOR UPDATE
AS
DECLARE @var NVARCHAR(MAX);
DECLARE @pages INT;

SELECT @pages = internal_objects_alloc_page_count * 8 / 1024
FROM sys.dm_db_task_space_usage
WHERE session_id = @@SPID
  AND database_id = 2;

-- New: give the trigger some real work that modifies data
INSERT dbo.PVSTest ([when], [data])
VALUES (GETDATE(), 'Ran the trigger');

PRINT '[During MBs] ' + CAST(@pages AS VARCHAR);

SELECT @var = deleted.[data] FROM deleted;
GO
Next, I ran the big update again with the following results:
The trigger tables still went to tempdb, but the PVS grew a lot:
I wanted to find out if that was related to the change I made in the trigger or something else I might have done. I used the cleanup procedure for the PVS to reset:
EXEC sys.sp_persistent_version_cleanup 'tests'
That procedure runs a background cleanup process. It didn't seem to create any sort of blocking, but I was surprised by how long it took:
I've built out background processes like this before, and they tend to take longer by design, using patterns that keep them from impacting performance. Given that, I wasn't worried at all about how long it took; I just wasn't expecting it.
After another wait for everything to cycle through, I could see the PVS was down to 8KB:
I then decided to run once with the version of the trigger that doesn't do any work, check the size, reset again, and run with the version that does the inserts so that I could compare the results to get a delta.
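In rough T-SQL, that measure-reset-measure sequence looks like this (a sketch of my process, not a script from the original test):

```sql
-- 1. Reset the PVS and wait for the background cleanup to finish
EXEC sys.sp_persistent_version_cleanup 'tests';

-- 2. Fire the trigger (swap in whichever trigger version is being measured)
UPDATE dbo.test SET data = data;

-- 3. Record the PVS size, then repeat from step 1 for the other version
SELECT persistent_version_store_size_kb
FROM sys.dm_tran_persistent_version_store_stats
WHERE database_id = DB_ID('tests');
```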
The difference in reported size turned out to be nothing more than the timing of the check. Checking while the transaction is running always shows a larger PVS; it shrinks again after commit or rollback, on a schedule that varies with the background cleanup. When I left the transaction open, the version data hung around, which is what I would expect.
I tried a few other things, but nothing happened that was different enough to be worth mentioning. In every scenario, one thing remained constant: the trigger always used tempdb. So, if you're hoping that a nice side effect of using ADR in SQL Server 2019 is less load on tempdb, that is a distinct possibility, especially if you're using snapshot isolation. But if you use triggers and were hoping those would move to the PVS, unfortunately, that doesn't appear to be what happens.
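If you want snapshot-isolation versioning to land in the PVS, enabling it is the usual pair of commands. Shown here as a sketch; note that switching READ_COMMITTED_SNAPSHOT typically needs exclusive access to the database:

```sql
-- With ADR on, row versions for these options live in the PVS, not tempdb
ALTER DATABASE tests SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE tests SET READ_COMMITTED_SNAPSHOT ON;
```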
Until next time!
Jason has worked in technology for over 20 years. He joined SentryOne in 2006, having held positions in network administration, database administration, and software engineering. During his tenure at SentryOne, Jason has served as senior software developer and founded both Client Services and Product Management. His diverse background with relevant technologies made him the perfect choice to build out both of these functions. As SentryOne experienced explosive growth, Jason returned to lead SentryOne Client Services, where he ensures that SentryOne customers receive the best possible end-to-end experience in the ever-changing world of database performance and productivity.