Hello, and welcome to part 4 of the series. In this post, I'm going to do a straight-up sync with none of the disk checks from yesterday's post. The full script at the end of the series will include the disk checks.
Hello, and welcome to part 3 of the series on syncing databases across servers. The first two parts focused on ensuring that the role exists on the source server with the appropriate permissions to sync. We then checked for the service account of the target instance and added it to that role so that the sync can occur.
Part two of this series expands upon yesterday's post, which created a database role and assigned permissions, by adding a member to that role. The script adds just one member to the role, but if you need more than one you can either create a domain group, add all the required users to that group, and use the group name in place of the user name, or do something clever with arrays.
Tomorrow's post will look at the disk checks that ensure there is enough space to sync the cubes.
Here are the links for the other parts of the series:
Automate SSAS Syncing Part 1: Create SSAS Database Role
Automate SSAS Syncing Part 2: Create SSAS Role And Add Member
Every day this week I am going to be posting part of a script that I recently wrote, broken down into separate components that are useful in their own right. The full script, which I will be sharing at the end of the week, automates the synchronisation of SSAS cubes from one instance to another.
So to make this an automated process that was robust to any deviation, I had to make sure that the destination service account has full admin access to the source database, that the destination disk could stand the anticipated growth as well as the copy of the synced cube, and that the datasource's connection string on the destination database was reset to its original value. Typically the datasource does not need updating, but the destination databases are used in a different environment with their own datasource, and as the sync also syncs the datasource it is necessary to set the datasource back to its original value.
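The datasource reset described above can be sketched with AMO like this. This is a minimal sketch, not the full script from the series: the instance, database, and connection string values are placeholders, and it assumes the AMO assembly is installed and the database has a single datasource.

```powershell
# Load the AMO assembly ([void] suppresses the load output)
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices")

$server = New-Object Microsoft.AnalysisServices.Server
$server.Connect("TargetInstance")                 # hypothetical destination instance

$db = $server.Databases.FindByName("MyCubeDb")    # hypothetical database name
$ds = $db.DataSources[0]

# After a sync the connection string points at the source environment,
# so put it back to the destination's own value (placeholder string)
$ds.ConnectionString = "Provider=SQLNCLI11;Data Source=DestSQL;Initial Catalog=DW;Integrated Security=SSPI"
$ds.Update()

$server.Disconnect()
```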
Here's a simple PowerShell script to get the top-level info of the cubes on an instance. This is a good script for understanding how to use the AMO namespace to get info on the cubes via PowerShell. You can add many other properties to this list to expand the info that you want. I've often mused about moving a lot of the logic that I constantly put in these scripts into higher-tier functions in PowerShell, or even creating an SSAS library, but time is always at a premium…
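A minimal sketch of that idea, assuming the AMO assembly is installed and using a placeholder instance name, looks like this:

```powershell
# Load the AMO assembly ([void] suppresses the load output)
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices")

$server = New-Object Microsoft.AnalysisServices.Server
$server.Connect("localhost")          # replace with your SSAS instance

# Top-level info for every cube on the instance; add more AMO
# properties to the object below to expand the output
foreach ($db in $server.Databases) {
    foreach ($cube in $db.Cubes) {
        New-Object PSObject -Property @{
            Database      = $db.Name
            Cube          = $cube.Name
            State         = $cube.State
            LastProcessed = $cube.LastProcessed
        }
    }
}

$server.Disconnect()
```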
One feature that has always irritated me greatly is how you cannot alter the connection for an XMLA query window via the right-click context menu. For .sql, .mdx and .dmx it is an option via the context menu, but not XMLA!
At the beginning of last month I posted an old query that showed the usage of TempDB. Since then I've modified it, added the query plan, and created a view out of the query.
I find this view has helped me since Read Committed Snapshot Isolation (RCSI) was turned on for a few of our databases, as TempDB usage increases when RCSI is enabled.
CREATE VIEW vw_all_temp_db_usage
AS
SELECT  u.session_id                                  AS [spid],
        DB_NAME(s.database_id)                        AS [database],
        s.original_login_name                         AS [login],
        s.host_name                                   AS [origin],
        u.internal_objects_alloc_page_count / 128.0   AS [allocated tempDb Session Object Space],
        u.user_objects_alloc_page_count / 128.0       AS [allocated tempDb Session User Space],
        u.user_objects_dealloc_page_count / 128.0     AS [deallocated tempDb Session User Space],
        tsu.internal_objects_alloc_page_count / 128.0 AS [tempDb Task Object Space],
        tsu.user_objects_alloc_page_count / 128.0     AS [tempDb Task User Space],
        tsu.user_objects_dealloc_page_count / 128.0   AS [deallocated tempDb Task User Space],
        tsk.task_state                                AS [state],
        t.text,
        qp.query_plan
FROM    sys.dm_db_session_space_usage u
        INNER JOIN sys.dm_exec_sessions s         ON u.session_id = s.session_id
        INNER JOIN sys.dm_db_task_space_usage tsu ON s.session_id = tsu.session_id
        INNER JOIN sys.dm_exec_connections c      ON c.session_id = s.session_id
        INNER JOIN sys.dm_os_tasks tsk            ON tsk.session_id = s.session_id
        INNER JOIN sys.dm_exec_requests req       ON u.session_id = req.session_id
        CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) t
        CROSS APPLY sys.dm_exec_query_plan(req.plan_handle) qp
GO
Once you have run the SQL above to create the view, you can query it as follows:
select * from vw_all_temp_db_usage
Recently I've been wondering how I can suppress the output in my PowerShell scripts when loading assemblies. I used to find the output handy as a way of telling me that the script had got that far, but now I find it annoying, and it is no substitute for proper error handling.
There is more than one way to suppress output to the console, but for assembly loading, I prefer to use [void] because it looks neater than the alternatives:
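For example, loading the AMO assembly (the assembly name here is just an illustration; any Load call works the same way):

```powershell
# Without [void], LoadWithPartialName echoes the assembly details to the console
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices")
```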
What is happening here is that [void] casts the output away, discarding it so it does not appear on the console. Remove the [void] and you will see the output.
I mentioned that I prefer the neatness of this to the alternatives, so what are the alternatives? This post on StackOverflow not only talks about the alternatives, but also performance tests them against one another. It makes for interesting reading!
Earlier this year I posted a PowerShell script that will set the aggregation designs on your cube partitions. I recently created another script that contains a "Fix" flag: run it without the flag to just get the info, or set the flag to fix the aggregation designs. I also added a "process" part that will run a process index on any partition whose aggregations are not fully processed.
The advantages of using this over the older version include the aforementioned process and checks, but also that you don't have to specify the cube, measure group, and aggregation design name. The disadvantage is that it assumes you want the latest aggregation design applied and processed. Somewhere along the way there will probably be a script which combines all of this nicely!
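The core of that check-and-fix loop can be sketched as follows. This is a simplified sketch, not the script from the post: the instance, database, and cube names are placeholders, and "latest design" is assumed to mean the last one in each measure group's AggregationDesigns collection.

```powershell
# Load the AMO assembly ([void] suppresses the load output)
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices")

$server = New-Object Microsoft.AnalysisServices.Server
$server.Connect("localhost")                                              # placeholder instance
$cube = $server.Databases.FindByName("MyCubeDb").Cubes.FindByName("Sales") # placeholder names
$fix  = $true                                                              # the "Fix" flag

foreach ($mg in $cube.MeasureGroups) {
    if ($mg.AggregationDesigns.Count -eq 0) { continue }
    # Assume the latest aggregation design is the last in the collection
    $latest = $mg.AggregationDesigns[$mg.AggregationDesigns.Count - 1]
    foreach ($partition in $mg.Partitions) {
        # Info mode: report the design currently assigned to the partition
        "{0} / {1}: design = {2}" -f $mg.Name, $partition.Name, $partition.AggregationDesignID
        if ($fix -and $partition.AggregationDesignID -ne $latest.ID) {
            # Fix mode: assign the latest design and process the indexes
            $partition.AggregationDesignID = $latest.ID
            $partition.Update()
            $partition.Process([Microsoft.AnalysisServices.ProcessType]::ProcessIndexes)
        }
    }
}

$server.Disconnect()
```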
I've really developed a jones for using PowerShell to make painful, repetitive, time-consuming tasks much quicker. PowerShell's great for automation, but it's also great for running tasks on an ad hoc basis.