Script for TempDB Sessions, The Queries, and (Nearly) Everything: The View

At the beginning of last month I posted an old query that showed TempDB usage. Since then I’ve modified it, added the query plan, and turned the query into a view.

I find this view has been particularly helpful since Read Committed Snapshot Isolation (RCSI) was turned on for a few of our databases, as TempDB usage increases when RCSI is enabled.

CREATE VIEW vw_all_temp_db_usage
AS
SELECT
 u.session_id AS [spid],
 DB_NAME (s.database_id) AS [database],
 s.original_login_name AS [login],
 s.host_name AS [origin],
 -- page counts are divided by 128.0 to convert 8 KB pages into MB
 u.internal_objects_alloc_page_count/128.0 AS [allocated tempDb Session Object Space],
 u.user_objects_alloc_page_count/128.0 AS [allocated tempDb Session User Space],
 u.user_objects_dealloc_page_count/128.0 AS [deallocated tempDb Session User Space],
 tsu.internal_objects_alloc_page_count/128.0 AS [tempDb Task Object Space],
 tsu.user_objects_alloc_page_count/128.0 AS [tempDb Task User Space],
 tsu.user_objects_dealloc_page_count/128.0 AS [deallocated tempDb Task User Space],
 tsk.task_state AS [state],
 t.text,
 qp.query_plan
FROM sys.dm_db_session_space_usage u
 INNER JOIN sys.dm_exec_sessions s ON u.session_id = s.session_id
 INNER JOIN sys.dm_db_task_space_usage tsu ON s.session_id = tsu.session_id
 INNER JOIN sys.dm_exec_connections c ON c.session_id = s.session_id
 INNER JOIN sys.dm_os_tasks tsk ON tsk.session_id = s.session_id
 INNER JOIN sys.dm_exec_requests req ON u.session_id = req.session_id
 CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) t
 CROSS APPLY sys.dm_exec_query_plan(req.plan_handle) qp
GO

Once you have created the view you can query it as follows:


select * from vw_all_temp_db_usage
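In practice I rarely want every session; something like the following narrows the output down to the sessions that are actually consuming TempDB space (the column name comes from the view definition above):

```sql
select *
from vw_all_temp_db_usage
where [allocated tempDb Session Object Space] > 0
order by [allocated tempDb Session Object Space] desc;
```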

Into The [Void]

Recently I’ve been wondering how to suppress the output in my PowerShell scripts when loading assemblies. I used to find the output handy as a way of telling me that the script had got that far, but now I find it annoying, and it is no substitute for proper error handling.

There is more than one way to suppress output to the console, but for assembly loading, I prefer to use [void] because it looks neater than the alternatives:


[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices.AdomdClient")

What is happening here is that [void] casts the result to void, so it is discarded rather than written to the console. Remove the [void] and you will see the output.


[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices.AdomdClient")

I mentioned that I prefer the neatness of this to the alternatives, so what are the alternatives? This post on StackOverflow not only talks about the alternatives, but also performance tests the alternatives against one another. It makes for interesting reading!
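For reference, the alternatives compared in that StackOverflow post look like this; all three suppress the same output, and only the syntax (and, per the benchmarks there, the performance) differs:

```powershell
# Pipe the result to Out-Null
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices.AdomdClient") | Out-Null

# Assign the result to $null
$null = [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices.AdomdClient")

# Redirect the result to $null
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices.AdomdClient") > $null
```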

Setting Aggregation Designs on SSAS Partitions Part Two

Earlier this year I posted a PowerShell script that sets the aggregation designs on your cube partitions. I recently created another script that contains a “Fix” flag: run it without the flag to get the info, or set the flag to fix the aggregation designs. I also added a “process” step that runs a ProcessIndexes on any partition whose aggregations are not fully processed.

The advantages of using this over the older version include the aforementioned processing and checks, but it also means you don’t have to specify the cube, measure group, and aggregation design names. The disadvantage is that it assumes you want the latest aggregation design applied and processed. Somewhere along the way there will probably be a script that combines all of this nicely! Continue reading
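The full script is behind the link above, but as a rough illustration only, the core of the “Fix” logic in AMO looks something like this. The server and database names are placeholders, `$Fix` stands in for the flag described above, and the assumption that the last aggregation design in the collection is the latest one is mine, not necessarily how the real script decides:

```powershell
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices")

$server = New-Object Microsoft.AnalysisServices.Server
$server.Connect("localhost")                      # placeholder instance name
$db = $server.Databases.FindByName("MyOlapDb")    # placeholder database name

foreach ($cube in $db.Cubes) {
    foreach ($mg in $cube.MeasureGroups) {
        # Assume the latest aggregation design is the last one in the collection
        $latestDesign = $mg.AggregationDesigns | Select-Object -Last 1
        if ($latestDesign -eq $null) { continue }
        foreach ($partition in $mg.Partitions) {
            if ($Fix -and $partition.AggregationDesignID -ne $latestDesign.ID) {
                $partition.AggregationDesignID = $latestDesign.ID
                $partition.Update()
                # Build the aggregations we just assigned
                $partition.Process([Microsoft.AnalysisServices.ProcessType]::ProcessIndexes)
            }
        }
    }
}
```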

Script for TempDB Sessions, The Queries, and (Nearly) Everything

Hello,

I’ve been sifting through a few of my older SQL files on my hard drive and found this one I wrote a few years back. For each session connected to the SQL instance, this script shows the pages the session has allocated in TempDB and the queries that were last run.

There are quite a few variations of this script available on the web, but I like this one because it does not go into too much detail and is rather useful for quick troubleshooting.

SELECT
 u.session_id AS [spid],
 DB_NAME (s.database_id) AS [database],
 s.login_name AS [login],
 s.host_name AS [origin],
 -- page counts are divided by 128.0 to convert 8 KB pages into MB
 u.internal_objects_alloc_page_count/128.0 AS [allocated tempDb Session Object Space],
 u.user_objects_alloc_page_count/128.0 AS [allocated tempDb Session User Space],
 u.user_objects_dealloc_page_count/128.0 AS [deallocated tempDb Session User Space],
 tsu.internal_objects_alloc_page_count/128.0 AS [tempDb Task Object Space],
 tsu.user_objects_alloc_page_count/128.0 AS [tempDb Task User Space],
 tsu.user_objects_dealloc_page_count/128.0 AS [deallocated tempDb Task User Space],
 tsk.task_state AS [state],
 t.text
FROM sys.dm_db_session_space_usage u
 INNER JOIN sys.dm_exec_sessions s ON u.session_id = s.session_id
 INNER JOIN sys.dm_db_task_space_usage tsu ON s.session_id = tsu.session_id
 INNER JOIN sys.dm_exec_connections c ON c.session_id = s.session_id
 INNER JOIN sys.dm_os_tasks tsk ON tsk.session_id = s.session_id
 CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) t
ORDER BY u.session_id;

Unexpected behaviour when using columnstore index | Microsoft Connect

An item has very recently been published to the Microsoft Connect website with regards to an issue relating to Clustered Columnstore Indexes.

Unexpected behaviour when using columnstore index | Microsoft Connect.

Put simply, there is inconsistent behaviour when returning data from a table stored using a clustered columnstore index compared with a table that is not. There is also an issue when using “Include Actual Execution Plan” from Management Studio.

The item has two attachments that provide complete examples of the inconsistent behaviour. I have downloaded them and been able to reproduce the issue on SQL Server 2014 CU3. If anyone else has SQL Server 2014 installed (it needs to be Enterprise or Developer edition, as partitioning is used), it would be helpful if you could download them and try to replicate it.

Deprecated Features of SQL

I was looking over the deprecated features of SQL Server 2014, and what strikes me is how few features were actually discontinued. In fact, the only discontinued feature was the removal of support for the SQL Server 2005 (90) compatibility level, and that is sorted soon enough. According to Books Online, there are no breaking changes when upgrading from SQL Server 2012 to SQL Server 2014. All this contrasts directly with previous versions of SQL Server, where there were numerous discontinued and breaking features. In the next version of SQL Server (2015??) there are more features removed than in SQL Server 2014 (10 in fact), but still far fewer than in previous versions. I can’t help but wonder what is in store for the next version of SQL Server… Continue reading

Deleting Data The Slow Way

Rarely do we have to delete data row by row. But recently I needed to delete some XML fields in a database as they were no longer required. It was a massive amount of XML: the table itself was 50,000 rows and nearly 300 GB, and I needed to delete roughly 9,000 rows.

I decided to delete one row at a time. Whilst this goes against the set-based way SQL works, I did not want to fill up the log with one massive delete. As the database has its log backed up every 15 minutes, this approach ensures there is free space in the log for new deletes. So we delete a row every 2 minutes and check that there is space in the log before deleting; if the log is looking full, we check again in 5 minutes until there is space to run another delete. That way I can run the script and leave it running, knowing that it won’t kill the database.
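The loop described above can be sketched roughly as follows. The table name, column predicate, and 70% log threshold are placeholders of mine, and `sys.dm_db_log_space_usage` is assumed available (SQL Server 2012 or later):

```sql
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    -- Only delete if the log has room to spare
    IF (SELECT used_log_space_in_percent
        FROM sys.dm_db_log_space_usage) < 70.0
    BEGIN
        DELETE TOP (1) FROM dbo.MyXmlTable       -- placeholder table
        WHERE XmlColumn IS NOT NULL;             -- placeholder predicate
        SET @rows = @@ROWCOUNT;
        WAITFOR DELAY '00:02:00';   -- wait 2 minutes between deletes
    END
    ELSE
        WAITFOR DELAY '00:05:00';   -- log looking full: check again in 5 minutes
END
```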

The script is below with an example using AdventureWorksDW2014.
Continue reading

Pick up a Book… & Throw it Out The Window!

When I learned to play the guitar, the first thing I did was obviously buy a guitar. Then I got a guitar teacher. And then I practised, and practised, and when I had finished practising I practised some more. I made many mistakes. I got frustrated. But I never gave up. And over time I got better, made fewer mistakes, and picked up some tips and tricks to make playing easier. I developed an ear for notes. I understood scales and keys and relative minors. And now I’m good: not great, but I can play a bunch of Smashing Pumpkins songs and have a lot of fun playing the guitar without putting too much effort into practising, because I put so much effort in at the beginning.

90% of my learning was based on practice; 10% was based on reading books and putting what I read into practice. The books came some time after buying the guitar. To be clear: I did not learn the pentatonic scale and its different positions on the fretboard just by reading books; I put the theory into practice. To really hammer the point home: I did not learn how to play the guitar by picking up a book and reading it. Books were merely a guide to point me in the right direction.

So how does this relate to databases?  Continue reading