I was catching up on some blogs yesterday, and one I really liked was this post on Spaghetti DBA. The author's main complaint was that he had grown disillusioned with Connect and the way in which so many issues are dealt with (i.e., closed as "Won't Fix"). All the author wanted was better feedback from the engineers when responding to issues. I'm adding my voice to this request because in the past year I have raised a couple of issues myself, only to be disappointed with the feedback: Continue reading
Hello! And Happy New Year!
I’ve spent much of the past month off work looking after my daughter whilst my wife is at work. It seems that while this time of year is quiet for me at the office, in the NHS it’s the busiest period of the year. So it has been great to spend time with Phoebe at home, which has resembled a building site since the end of October. Indeed, as I work from home I have had to move the computer 5 different times whilst work was completed. During that time I’ve learnt more things about plumbing than I’ve ever wanted to know, and surprised myself by keeping a remarkably cool head when I noticed water leaking out of the ceiling into our living room, from the room whose radiator I had just removed successfully (I thought) and whose pipes I had capped. And here is some advice which is as unrelated to technology as you’ll ever read on this site, but invaluable nonetheless: try not to reuse caps to cap off radiator pipes, as you have to turn them so tight that they tend to break up when you try to use them again. Which is exactly what I had done. I thought they were screwed on well enough until I turned the heating on and water started flowing around the system, which was when it started to leak out of the busted cap. Fortunately for me no damage was done and I was able to drain the entire heating system, which unfortunately coincided with us living without heating during the coldest days of 2014, until the plastering was done. It’s all part of us paying our dues until the house is done. Currently we are without a shower/bath, though mercifully we are not far away from friends who are kind enough to let us use their bathroom. Continue reading
This post has been nearly a year in the making. When I hit the 100 post mark, roughly a year into writing this blog, I wanted to share some of my thoughts about blogging, what it meant to me, and how someone can start up a blog and still be actively posting a year later. But I decided not to, as I felt a year and 100 posts was not nearly enough time to post anything with any real authority. But another year and another 100 posts later, I still wanted to share my thoughts on blogging. If for nothing else, it’ll be interesting to read this post in 2/3/4 years’ time and see just how much of what I wrote I still agree with. This post is not definitive; rather, it’s like viewing a junk shop; I’m sure there’ll be something for someone to take home from this collection of thoughts. Continue reading
Recently I needed to update some cubes in a test environment after the underlying data warehouse had been refreshed. Typically I’d sync the cubes from prod, but as sync causes locking, and I needed to get this done during the day, I needed to reprocess all of the cubes in the test environment, and there were a lot of them, ranging in size from hundreds of GB to a few MB. The simplest way to update the cubes was to run a Process Default on the dimensions and then run a Process Default on the measure groups. Some of the measure groups were partitioned, so I could’ve got clever and fully processed just the unprocessed partitions, but I felt a default would do most of the hard work for me. Continue reading
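As a rough sketch of that approach, the following uses the AMO library (Microsoft.AnalysisServices) to run a Process Default over the dimensions and then the measure groups. The instance and database names here are placeholders, not the ones from my environment:

```powershell
# Load the Analysis Services Management Objects (AMO) assembly
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices") | Out-Null

$server = New-Object Microsoft.AnalysisServices.Server
$server.Connect("localhost\TEST")             # hypothetical test instance
$db = $server.Databases.FindByName("MyCube")  # hypothetical database name

# Process Default on every dimension first...
foreach ($dim in $db.Dimensions) {
    $dim.Process([Microsoft.AnalysisServices.ProcessType]::ProcessDefault)
}

# ...then Process Default on each measure group; a default will only do
# the work needed to bring unprocessed objects to a processed state
foreach ($cube in $db.Cubes) {
    foreach ($mg in $cube.MeasureGroups) {
        $mg.Process([Microsoft.AnalysisServices.ProcessType]::ProcessDefault)
    }
}

$server.Disconnect()
```

Because Process Default is a no-op for objects that are already processed, running it across everything is cheaper than it looks.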
Today I’m going on a mini rant about people abusing PowerShell, specifically to do with naming PowerShell functions. If you’re going to take the trouble to write PowerShell functions and add them to a module for others to have access to, please, don’t be a jerk: use a verb from the approved verbs list that is freely and readily available on the MSDN Developers site, or even by typing “Get-Verb” in the console.
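Checking a candidate verb takes one line, so there’s really no excuse. A quick sketch (`Sync-SsasDatabase` is just a hypothetical function name):

```powershell
# List the approved verbs along with their groups (Common, Data, Lifecycle, ...)
Get-Verb | Select-Object Verb, Group

# Check whether a particular verb is on the approved list before using it
if (Get-Verb | Where-Object { $_.Verb -eq 'Sync' }) {
    "Sync is approved, so Sync-SsasDatabase is a compliant name"
}
```

Importing a module whose functions use unapproved verbs also triggers a warning, so sticking to the list spares your users that noise too.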
Last month I posted a view that looked at TempDB usage. I use it primarily as a monitor for TempDB usage on instances that have databases that have Read Committed Snapshot isolation enabled.
Recently I made a change to the view: it now includes the snapshot isolation level of the session. If a session is using snapshots then I would expect its TempDB usage to be greater.
if exists (select * from sys.sysobjects where name = 'vw_all_temp_db_usage')
    drop view vw_all_temp_db_usage
GO
CREATE VIEW vw_all_temp_db_usage
AS
SELECT
    u.session_id AS [spid],
    DB_NAME(s.database_id) AS [database],
    s.original_login_name AS [login],
    s.host_name AS [origin],
    u.internal_objects_alloc_page_count / 128.0 AS [allocated tempDb Session Object Space],
    u.user_objects_alloc_page_count / 128.0 AS [allocated tempDb Session User Space],
    u.user_objects_dealloc_page_count / 128.0 AS [deallocated tempDb Session User Space],
    tsu.internal_objects_alloc_page_count / 128.0 AS [tempDb Task Object Space],
    tsu.user_objects_alloc_page_count / 128.0 AS [tempDb Task User Space],
    tsu.user_objects_dealloc_page_count / 128.0 AS [deallocated tempDb Task User Space],
    tsk.task_state AS [state],
    t.text,
    CASE s.transaction_isolation_level
        WHEN 0 THEN 'Unspecified'
        WHEN 1 THEN 'ReadUncommitted'
        WHEN 2 THEN 'ReadCommitted'
        WHEN 3 THEN 'Repeatable'
        WHEN 4 THEN 'Serializable'
        WHEN 5 THEN 'Snapshot'
    END AS [transaction Isolation Level],
    qp.query_plan
FROM sys.dm_db_session_space_usage u
INNER JOIN sys.dm_exec_sessions s ON u.session_id = s.session_id
INNER JOIN sys.dm_db_task_space_usage tsu ON s.session_id = tsu.session_id
INNER JOIN sys.dm_exec_connections c ON c.session_id = s.session_id
INNER JOIN sys.dm_os_tasks tsk ON tsk.session_id = s.session_id
INNER JOIN sys.dm_exec_requests req ON u.session_id = req.session_id
CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) t
CROSS APPLY sys.dm_exec_query_plan(req.plan_handle) qp
GO
--select * from vw_all_temp_db_usage
Once you have run the above SQL you can execute as follows:
select * from vw_all_temp_db_usage
Hello! And welcome to the final part of this series. This script combines the first 4 parts of this series to create an unabridged process of synchronising an SSAS database from one server to another. Even though I’ve used this script for a while, it still gets tweaks and I’d still consider it a work in progress; however, it’s as complete now as it ever has been. No doubt when I make some more changes I’ll post an update.
The script so far includes:
1. checks if the sync role exists, and creates it if it does not
2. checks the permissions on the role to ensure that the sync role has administer permissions on the database to be sync’d, and adds them if it does not
3. checks that the service account of the target server is a member of the sync role (a requirement of syncing) and adds it if it is not a member
4. checks if the source database exists on the target server. It will copy it in most cases (see 6); however, if it does not exist it will disregard point 7
5. checks that there is sufficient space on the disk to contain two copies of the database (see below for a full explanation), and contains the option to drop the target database before syncing
6. if there is no target database and there is not enough space to sync this database, it will skip syncing
7. if the target database is to be dropped before syncing, then the target roles/permissions/data source are stored and will replace the updated database values (assuming it existed on the target server prior to syncing)
8. this assumes that you are not copying over the permissions from the source cube. The idea here is that these cubes are used for a different environment (test, for example) that has its own data source/roles/members.
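At its core, the sync step in all of the above boils down to sending a Synchronize XMLA command to the target instance. A bare-bones sketch (the server and database names are placeholders, and the full script wraps this in all of the checks listed above):

```powershell
# Hypothetical names; the Synchronize command is always run against the TARGET instance
$sourceServer = "SOURCESERVER"
$targetServer = "TARGETSERVER"
$databaseName = "MyCube"

# Build the Synchronize command; SkipMembership/CopyAll/IgnoreSecurity control
# how role memberships on the target are treated
$xmla = @"
<Synchronize xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Source>
    <ConnectionString>Provider=MSOLAP;Data Source=$sourceServer</ConnectionString>
    <Object>
      <DatabaseID>$databaseName</DatabaseID>
    </Object>
  </Source>
  <SynchronizeSecurity>SkipMembership</SynchronizeSecurity>
  <ApplyCompression>true</ApplyCompression>
</Synchronize>
"@

# Invoke-ASCmd ships with the SQL Server PowerShell modules
Invoke-ASCmd -Server $targetServer -Query $xmla
```

During a sync the target keeps serving queries from the old copy while the new one lands alongside it, which is exactly why the disk check in point 5 demands room for two copies of the database.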
Hello, and welcome to part 4 of the series. In this post, I’m going to do a straight up sync with none of the disk checks from yesterday’s post. The full script at the end of the series will include the disk checks. Continue reading
Hello, and welcome to part 3 of the series on syncing databases across servers. The first two parts focused on ensuring that the role exists on the source server with the appropriate permissions to sync. We then check and add the service account of the target instance to ensure that the sync can occur.
Part two of this series expands upon yesterday’s post, which created a database role and assigned permissions, by adding a member to that role. The script adds just one member to the role, but if you need more than one member you can either create a domain group, add all the required users into that group, and use the group name in place of the user name, or do something clever with arrays.
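The role-membership step can be sketched with AMO as below. The instance, database, role, and account names are all placeholders for whatever your environment uses:

```powershell
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices") | Out-Null

$server = New-Object Microsoft.AnalysisServices.Server
$server.Connect("SOURCESERVER")                 # hypothetical instance name
$db   = $server.Databases.FindByName("MyCube")  # hypothetical database name
$role = $db.Roles.FindByName("SyncRole")        # the role created in part 1

# Add the target instance's service account if it is not already a member;
# a domain group name would work here just as well as a single account
$account  = "DOMAIN\svc-ssas-target"            # hypothetical service account
$existing = $role.Members | Where-Object { $_.Name -eq $account }
if (-not $existing) {
    $role.Members.Add((New-Object Microsoft.AnalysisServices.RoleMember($account))) | Out-Null
    $role.Update()   # persist the change back to the server
}

$server.Disconnect()
```

Checking for an existing member first keeps the script idempotent, so it can be re-run as part of the full sync process without erroring.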
Tomorrow’s post will look at some disk checks that are included to ensure there is enough space to sync cubes.
Here are the links for the other parts of the series
Automate SSAS Syncing Part 1: Create SSAS Database Role
Automate SSAS Syncing Part 2: Create SSAS Role And Add Member