No-Hassle SQL Server Remoting with PowerShell V2

Although PowerShell gives you the whole remoting experience, you may still need to ask your network administrator to let you configure remoting on all the machines you need to access.  Of course, that changes if you hold the keys to the kingdom.  But for SQL Server, it is much simpler without PowerShell Remoting.

When you use the SQLPS module, it automatically lets you connect to any of the SQL Servers on your network.  So, YES!! As long as you have permission to manage these servers, SQLPS already gives you the ability to remotely connect to other machines.  There's no need to set up remoting with the Enable-PSRemoting or Set-WSManQuickConfig commands.

Simple Remoting #1

So, how can you change from your local SQL Server prompt to another instance?  Here's an example:

1. First load the SQLPS Module:

[sourcecode language=”powershell”]
Import-Module SQLPS -DisableNameChecking
[/sourcecode]

2. Confirm you have the SQL Server drive available:

[sourcecode language=”powershell”]
Get-PSDrive SQL*
[/sourcecode]

3. Let’s navigate to your SQL Server Instance:

[sourcecode language=”powershell”]
cd SQLServer:\SQL\Your-Machine-name\Default  #or

cd SQLServer:\SQL\Your-Machine-name\Your-InstanceName
[/sourcecode]

4. Now you can continue and navigate through your SQL Server drive just like a DOS file system.

[sourcecode language=”powershell”]
Dir Databases -force
[/sourcecode]

5. Finally, to connect remotely to another instance, just change "Your-Machine-name" and "Your-InstanceName", as in the sketch below.
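Here's a minimal sketch putting the steps together; the machine and instance names are placeholders you would replace with your own:

[sourcecode language="powershell"]
# Load SQLPS and jump straight to a remote instance (names below are placeholders):
Import-Module SQLPS -DisableNameChecking

cd SQLServer:\SQL\Your-RemoteServer\Default            # default instance, or
cd SQLServer:\SQL\Your-RemoteServer\Your-InstanceName  # named instance

# The same navigation works against the remote server:
Dir Databases -Force
[/sourcecode]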

Simple Remoting #2

One thing you need to understand: after you load the SQLPS module, you have access to all of the SMO (SQL Server Management Objects) .NET classes.  This will help you build short and very effective .NET code that extends your PowerShell scripting.  Also, you don't need to be at the SQLServer: drive to use it.

Here’s a basic example using SMO:

1. First load the SQLPS Module:

[sourcecode language=”powershell”]
Import-Module SQLPS
[/sourcecode]

2. Create a variable to hold your local SQL Server objects:

[sourcecode language=”powershell”]
$MySQL = new-object Microsoft.SqlServer.Management.Smo.Server
[/sourcecode]

3. Now that your .NET object has been created, take a look at everything you can grab from it:

[sourcecode language=”powershell”]
$MySQL | Get-member | Out-GridView
[/sourcecode]

4. I decided to grab the “Information” property so I can display all the information, and verify that this is my local SQL instance:

[sourcecode language=”powershell”]
$MySQL.Information
[/sourcecode]

5. Finally, to connect remotely to another machine, I just add the SQL Server instance name to the line creating my variable:

[sourcecode language=”powershell”]
$MySQL = new-object Microsoft.SqlServer.Management.Smo.Server "Your-Machine-name"
[/sourcecode]
Or
[sourcecode language=”powershell”]
$MySQL = new-object Microsoft.SqlServer.Management.Smo.Server "Your-Machine-name\Your-Instancename"
[/sourcecode]

Keep in mind, you can use another variable to hold your SQL Server name with the instance name and do something like:

[sourcecode language=”powershell”]
$SQLInstanceName = "Your-Machine-name\InstanceName"
$SourcedbName = "Your-DatabaseName"   # database whose tables we want to list
$MySQL = new-object Microsoft.SqlServer.Management.Smo.Server $SQLInstanceName
$MySQL.Databases[$SourcedbName].Tables | sort -desc DataSpaceUsed | select Parent, `
@{Label="DisplayName";Expression={$_.Schema+"."+$_.Name};} | `
sort DisplayName;
[/sourcecode]

Again, no need to setup PSRemoting.

Simple Remoting #3

And this is the simplest of all if you like to reuse your T-SQL scripts.  This will become your most used SQLPS cmdlet: "Invoke-Sqlcmd".

1. Load the SQLPS module:

[sourcecode language=”powershell”]
Import-Module SQLPS
[/sourcecode]

2. Just start using the SQLPS "Invoke-Sqlcmd" command (basic form):

[sourcecode language=”powershell”]
Invoke-Sqlcmd -Query "SELECT GETDATE() AS TimeOfQuery;" -ServerInstance "."  #or

$SaveResult = Invoke-Sqlcmd -Query "SELECT * FROM [AdventureWorks].[dbo].[AWBuildVersion];" -ServerInstance "Your-Machine-name\InstanceName"

$SaveResult
[/sourcecode]

Again, there’s no need to be at the SQLServer: drive in order for this to work.  As you can see, with just a few lines of code you can connect to another SQL Server instance.
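And since the whole point is reusing existing T-SQL scripts, keep in mind that Invoke-Sqlcmd can also run a saved .sql file directly.  A minimal sketch (the file path and instance name are placeholders):

[sourcecode language="powershell"]
# Run a saved T-SQL script against a remote instance and keep the results:
$Result = Invoke-Sqlcmd -InputFile "C:\Scripts\MyReport.sql" -ServerInstance "Your-Machine-name\InstanceName"
$Result | Format-Table -AutoSize
[/sourcecode]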

Go ahead and try it!

Creating your own ExitCode in PowerShell and using it in an SSIS package

As I started to build executable solutions in PowerShell for my SSIS packages, I needed to take advantage of providing an Exit Code value within the application.  Creating your own exit code can be very useful for signaling the Success or Failure of a task.  This way you can take the correct action on how your flow is going to behave.
I found a great blog article from one of my MVP colleagues, MOW, that got me started:
http://thepowershellguy.com/blogs/posh/archive/2008/05/20/hey-powershell-guy-how-can-i-run-a-powershell-script-from-cmd-exe-and-return-an-errorlevel.aspx
Also, here's an MSDN article: http://msdn.microsoft.com/en-us/library/system.environment.exit(v=VS.80).aspx.  These articles led me to find multiple ways to introduce an Exit Code in PowerShell.
In its basic form:
1. Exit N – as shown on MOW's blog (link above); only works in PowerShell.
2. Exit("N") – I just tried it, but it only works with PowerShell.
3. [Environment]::Exit("N") – following the MSDN article (link above), this works in both PowerShell and DOS.
Here, "N" equals the exit number you want to provide.  Also, keep in mind that this only works when executing your script with PowerShell.exe from either command prompt: DOS or PowerShell.
Now, as you may have noticed and are probably wondering, I found out that "Exit N" and Exit("N") behave differently when executed as a script versus compiled under PrimalForms 2011.  I will show this in my examples.
In PowerShell V2, I found some different behavior when using the above methods.  My goal in this exercise is to provide an Exit Code that can be used in both PowerShell (as a script and as an executable) and DOS (as an executable), because you may eventually want to convert your script into an executable.
Here's the basic structure of how this is done, using the example of a division process where we'll be trapping the "Divided by 0" exception using "Try-Catch".  Then, after testing our script, we can create the executable that can be included in an SSIS solution.
To validate that the Exit Code has a value, we create an executable using PrimalForms 2011, then use $LastExitCode in PowerShell and %ErrorLevel% in DOS to confirm we have the values we want.  Both Exit Code values need to match in PowerShell and DOS.  These sample scripts will run under the PowerShell console, and the compiled *.exe versions will run under DOS.
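As a quick sanity check on the PowerShell side, you can launch the sample script in a child PowerShell.exe process and inspect its exit code afterward.  This is just a minimal sketch mirroring the Bonus Script further down; the script path is a placeholder:

[sourcecode language="powershell"]
# Run the sample script in a child PowerShell.exe process, then inspect its exit code:
PowerShell -command "& '.\DivideNow3.ps1' 40 0"
$LastExitCode    # 99 expected here, since dividing by zero sends us through the Catch block
[/sourcecode]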
Sample Script 3 – DivideNow3.ps1 – using “Exit N”.

[sourcecode language=”powershell”]
#========================================================================
# Created with: SAPIEN Technologies, Inc., PrimalForms 2011 v2.0.12
# Created on: 8/31/2011 6:33 PM
# Created by: Max Trinidad
# Organization: PutItTogether
# Filename: DivideNow3.ps1
# http://thepowershellguy.com/blogs/posh/archive/2008/05/20/hey-powershell-guy-how-can-i-run-a-powershell-script-from-cmd-exe-and-return-an-errorlevel.aspx
# http://msdn.microsoft.com/en-us/library/system.environment.aspx
#========================================================================
Param([Int] $arg1, [Int] $arg2)
Try
{
    [Double] $result = $arg1 / $arg2;
}
Catch
{
    #Write-Host "Error in function";
    $err1 = 1;
}
Finally
{
    if($err1 -eq 1)
    {
        ## - Custom Exit Code:
        Exit 99;
    }
    Else
    {
        Write-Output $result;
        Exit 0;
    }
}

[/sourcecode]

Results for DivideNow3.ps1
Sample Script 2 – DivideNow2.ps1 – using Exit("N").
[sourcecode language=”powershell”]
#========================================================================
# Created with: SAPIEN Technologies, Inc., PrimalForms 2011 v2.0.12
# Created on: 8/31/2011 6:33 PM
# Created by: Max Trinidad
# Organization: PutItTogether
# Filename: DivideNow2.ps1
# http://thepowershellguy.com/blogs/posh/archive/2008/05/20/hey-powershell-guy-how-can-i-run-a-powershell-script-from-cmd-exe-and-return-an-errorlevel.aspx
# http://msdn.microsoft.com/en-us/library/system.environment.aspx
#========================================================================
Param([Int] $arg1, [Int] $arg2)
Try
{
    [Double] $result = $arg1 / $arg2;
}
Catch
{
    #Write-Host "Error in function";
    $err1 = 1;
}
Finally
{
    if($err1 -eq 1)
    {
        ## - Custom Exit Code:
        Exit("99");
    }
    Else
    {
        Write-Output $result;
        Exit("0");
    }
}

[/sourcecode]

Results for DivideNow2.ps1
Sample Script 1 – DivideNow.ps1 – using the .NET [Environment]::Exit("N").
[sourcecode language=”powershell”]
# Organization: PutItTogether
# Filename: DivideNow.ps1
# http://thepowershellguy.com/blogs/posh/archive/2008/05/20/hey-powershell-guy-how-can-i-run-a-powershell-script-from-cmd-exe-and-return-an-errorlevel.aspx
# http://msdn.microsoft.com/en-us/library/system.environment.aspx
#========================================================================
Param([Int] $arg1, [Int] $arg2)
Try
{
    [Double] $result = $arg1 / $arg2;
}
Catch
{
    #Write-Host "Error in function";
    $err1 = 1;
}
Finally
{
    if($err1 -eq 1)
    {
        ## - Custom Exit Code:
        [Environment]::Exit("99");
    }
    Else
    {
        Write-Output $result;
        [Environment]::Exit("0");
    }
}

[/sourcecode]

Results for DivideNow.ps1

These are the results we want to get.  Both DOS and PowerShell show the correctly assigned Exit Codes.  This script and executable will work and can be integrated into an SSIS package.
Bonus Script – Executing DivideNow.ps1 with PowerShell.exe from the PowerShell console

[sourcecode language=”powershell”]
#========================================================================
# Created with: SAPIEN Technologies, Inc., PrimalForms 2011 v2.0.12
# Created on: 9/2/2011 9:21 AM
# Created by: Max Trinidad
# Organization: PutItTogether
# Filename: DivideNow_test.ps1
#========================================================================

# .\DivideNow.ps1 40 5 - This will execute and close PowerShell immediately
# .\DivideNow.ps1 40 0 - This will execute and close PowerShell immediately

# But if I execute with PowerShell.exe
PowerShell -noexit -command { `
    cd 'C:\Users\Max\Documents\SAPIEN\PrimalForms 2011\Files'; `
    & ./DivideNow.ps1 40 5}

$LastExitCode

PowerShell -noexit -command { `
    cd 'C:\Users\Max\Documents\SAPIEN\PrimalForms 2011\Files'; `
    & ./DivideNow.ps1 40 0}

$LastExitCode
[/sourcecode]

Happy PowerShelling

Passing Parameter to a PowerShell executable – Part 2

Using a PowerShell executable w/Param() in SSIS

Here's where the fun begins.  It took me a few hours to figure out how to get my SSIS package to pass some values into my PowerShell executable.  So, here's a sample solution, built after I found out that my Flat File source records didn't match the record layout provided by another Internet source.  Yes, believe it or not, sometimes the documentation out there can be way out of date.  But thanks to the SSIS "..Source Error Output", you can redirect these error records to a destination object.  Then you can analyze these bad records later.

So, if you work creating SSIS packages, you should already know how to use the "Data Flow" component to define your data Source and Destination objects.  Now, in cases where you need to isolate records that are causing your package to fail, you need to add a second Destination object to deal with them later.  In my example, the second Destination object will be a new SQL table called "CompanyData_SkippedRecords".

Here's my sample text file to be processed by my SSIS package: "CompanyData.txt"

Sample Dummy CompanyData.txt Source file

Notice that the first line contains the column names, but some of the data in this file won't match the column layout.  If I run this solution with only the Source and Destination objects defined, the package will fail.

Unmatching records causing SSIS package to fail

Now, we can add another Destination component to isolate these unmatched records.  Connect the Source component by dragging the "Red Arrow" to the second Destination component handling the errors.

Dragging the RedArrow from Source to the second Destination Component

A "Configure Error Output" window will pop up.  Change the "Set this value to selected cells:" option to "Redirect row".  Then, highlight all fields in both the "Error" and "Truncation" columns, and click "Apply" to change "Fail Component" to "Redirect row".

Change both "Error" and "Truncation" to "Redirect Row"

Click "OK" once you've confirmed that all fields show "Redirect row".

Verify all fields has "Redirected row"

These changes are in place for our Flat File Source “Error Output” section.

Now, we need to complete the setup of our second destination component, labeled "Error Skipped Records Destination", where our error records will be dropped into our new SQL table.

Open our Error Destination component.  For now, we are going to use the default; the new table will be named "[Error Skipped Records Destination]".  So, all bad records will be stored in this table.

Accept Default Error table name

Then, click “OK” to exit.

At this step we are done with our import and error/mismatch record handling, and we are ready to run our solution.  At the end of this step we will have a bad-records table.

4 bad Record went into our Err table

Here’s our good data:

And this is the Error Records table:

Now, as the next step, I need a table to consolidate all my mismatched records.  I will be reading from my "[Error Skipped Records Destination]" table and selecting the "[Flat File Source Error Output Column]" field as "DataSkipped" into a new table called "CompanyData_SkippedRecords".

Here’s the new table layout:

[sourcecode language=”sql”]
USE [Developer]
GO

/****** Object: Table [dbo].[CompanyData_SkippedRecords]
Script Date: 08/18/2011 12:27:15
*****/

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

SET ANSI_PADDING ON
GO

CREATE TABLE [dbo].[CompanyData_SkippedRecords](
[DataSkipped] [text] NOT NULL,
[ErrRecordsLength] [varchar](7) NOT NULL,
[TotalFieldsPerRecords] [varchar](5) NOT NULL,
[FileGroupSource] [varchar](4) NOT NULL,
[ProcessDate] [datetime] NOT NULL,
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

SET ANSI_PADDING OFF
GO
[/sourcecode]

As you may notice, this table is very different from the one we created to store the errors.  And here's where we integrate our PowerShell script solution into our SSIS package.

PowerShell Script

Of the 5 fields in this new table, 4 will be generated by PowerShell, and two of them come from parameters:

  1. DataSkipped – comes from our Err table.
  2. ErrRecordsLength – count of the length of each skipped record.
  3. TotalFieldsPerRecords – count of the number of fields per record (splitting on the delimiter).
  4. FileGroupSource – denotes the type of records that were imported (i.e., Comp).
  5. ProcessDate – date that this Err table was processed.

The purpose of this PowerShell executable is to generate these additional fields to help analyze what's wrong with my text file.  The process reads one Err record at a time, generates the additional values, and inserts them into the new table.
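For example, the two derived counts boil down to a string length and a split count.  A minimal sketch (the sample record and the "^" delimiter are assumptions made to match the script below):

[sourcecode language="powershell"]
# One skipped record, as it would arrive from the Err table:
$DataSkipped = "ACME Corp^123 Main St^Miami^FL"

$ErrRecordsLength     = $DataSkipped.Length             # character count of the skipped record
$TotalFieldsPerRecord = ($DataSkipped.Split("^")).Count # how many fields the record actually has
[/sourcecode]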

One requirement: you need to have the SQLPS module installed in the "MyDocuments\WindowsPowerShell\Modules" folder of the user running this SSIS package, or it will not run. If you're using SQL Server 2008 R2, then download the SQLPS module from Chad Miller's blog site: http://sev17.com/2010/07/making-a-sqlps-module/ (SQL Denali CTP3 already includes it).
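A quick way to verify that requirement on the machine that will run the package (just a sketch using the standard module cmdlets):

[sourcecode language="powershell"]
# Lists SQLPS if it is discoverable in the user's module path; empty output means it is missing:
Get-Module -ListAvailable -Name SQLPS
[/sourcecode]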

Here's the script (don't be scared!), to be compiled with PrimalForms 2011:

[sourcecode language=”powershell”]
# =============================================================
# Microsoft PowerShell Source File — Created with SAPIEN Technologies PrimalScript 2011
# NAME: AnalyzeSkippedRecords.ps1
# AUTHOR: Max.Trinidad ,
# DATE : 8/18/2011
#
# COMMENT: PowerShell Script to Analyze the Data Skipped for processing and update the Skipped
# table with additional information.
# =============================================================
PARAM([String] $Arg1, [String] $Arg2, [String] $Arg3)
#Input: $Arg1="Err*"; $Arg2 = "Developer"; $Arg3 = "YourComputer"

Import-Module SQLPS -DisableNameChecking;

[DateTime] $ProcessDate = (get-date).month.ToString("00")+"/"+(get-date).Day.ToString("00")`
+"/"+(get-date).Year.ToString()

$MySQL = new-object('Microsoft.SqlServer.Management.Smo.Server') $Arg3
$SchemaTableList = $MySQL.Databases[$Arg2].tables | where {$_.name -like $Arg1} | `
select @{Label="Schema_Tablename";Expression={$_.Schema+".["+$_.Name+"]"};};

foreach($table in $SchemaTableList)
{
$tblName = $table.Schema_TableName;

$sqlqry1 = @"
USE $Arg2
GO

select
[Flat File Source Error Output Column] as DataSkipped
from $tblName
"@;

$rdSkipData = Invoke-Sqlcmd -ServerInstance $Arg3 -Query $sqlqry1;

$recCount = 0;
Do
{
$getInfo = $rdSkipData[$recCount] | select DataSkipped, `
@{Label="ErrRecordslength";Expression={($_.DataSkipped).Length};}, `
@{Label="TotalFieldsPerRecord";Expression={(($_.DataSkipped).split("^")).count};};

$DataSkipped = $rdSkipData[$recCount].DataSkipped.ToString().Replace("'","").Replace('"','').Trim();
$ErrRecordsLength = $getInfo.ErrRecordsLength;
$TotalFieldsPerRecord = $getInfo.TotalFieldsPerRecord;
$FileGroupSource = $Arg1;

$sqlqry2 = @"
USE $Arg2
GO

INSERT INTO [dbo].[CompanyData_SkippedRecords]
VALUES(
'$DataSkipped'
,'$ErrRecordsLength'
,'$TotalFieldsPerRecord'
,'$FileGroupSource'
,'$ProcessDate'
)
GO
"@;
Write-Host $sqlqry2

Invoke-Sqlcmd -ServerInstance $Arg3 -Query $sqlqry2;
$recCount++
} Until($recCount -eq $rdSkipData.Count);
}
[/sourcecode]

Notice this script was meant to search for multiple error tables based on the type of file groups (i.e., ADB1, ADB2, and ADB*).  Now, using the steps from Passing Parameter to a PowerShell executable – Part 1 on the previous blog, I compiled my script and created my executable.  Keep in mind that you will need to move this executable to the machine on which you are running this SSIS package.  Thanks to SAPIEN for this great product!

Create SSIS Variables for input Parameters for the PowerShell Executable

This part took me a few hours to figure out: how to pass parameters using SSIS variables in combination with "Expressions".  Yes!!  That was the keyword: "Expressions".  I couldn't find a blog out there that properly explains this process.  There was just one I will give some credit to, because this keyword got my attention: http://dennyglee.com/2006/05/12/ssis-global-variables-and-execute-process-task/ – "..you can modify the Execute Process Task at the point of the Property Expression attribute..".  Unfortunately, this blog didn't show step-by-step how it's done.  But, lucky you, I'm blogging about it.

Here's how it's done:

Create three variables that will hold the parameter values to be injected into the executable:

Create variable to be use as parameters to our PowerShell executable

Now, we add the "Execute Process Task" component, connect it to the "Data Flow Task", and double-click on the "Execute Process Task".  In the "Execute Process Task Editor", go to the "Process" section and under "Executable" add the executable name "AnalyzeSkippedRecords.exe".

And now, to set up your input parameters, go to the "Expressions" section:

Property Expression Editor - Select Arguments

Here's where you define the input Argument(s) for your executable(s).  Under "Property expressions", select the "Arguments" property, and then click on the "…" to build the expression:

Building your Input Arguments

Yes!! This is the money maker:  ( there’s a space between the double-quotes )

@[User::ErrorGroup] + " " + @[User::SourceDatabase] + " " + @[User::SQLInstance]
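That argument string maps straight onto the executable's Param() block ($Arg1, $Arg2, $Arg3).  Here's a quick sketch of the equivalent manual call, handy for testing outside SSIS (the values come from the sample input noted in the script's comments):

[sourcecode language="powershell"]
# Same three arguments the SSIS expression builds, passed by hand from a console:
.\AnalyzeSkippedRecords.exe "Err*" "Developer" "YourComputer"
[/sourcecode]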

Now, click "OK" twice to save the changes, and we're ready to run our full SSIS package.

Success!! This solution works.

Check my “CompanyData_SkippedRecords” table:

Checking my Skipped data

Now, I can keep working on analyzing this data with PowerShell, and/or other tools.

A couple of pointers:

In the "Execute Process Task Editor", you can troubleshoot your executable session if you leave the "WindowStyle" setting at "Normal".  This way you can see the console pop up.  Then, when everything is working correctly, change it back to "Hidden".

Also, if your PowerShell executable is failing, you can add the "Write-Host 'Pause'; Start-Sleep -Seconds 10;" one-liner to pause the session and give yourself time to see the error while "WindowStyle" is still set to "Normal".

Download this SSIS_PowerShell_sample solution at:

Great SQL Saturday South Florida Event – PowerShell included!!

Yes!!  It was a great event with over 400 people showing up, and two PowerShell sessions included.  Thanks to all the SQL Saturday organizers for all the effort they put into making these events happen.  I'm honored to be part of it.  Thanks Scott and Herve!!

Well, here are both presentations: Session 1 – Let's Talk About SQL PowerShell, Session 2 – Extending your T-SQL with PowerShell, and more samples:

Many THANKS!! to all who attended my sessions; I know you were all asking for more at the end.  This just shows that there's a lot to talk about with PowerShell and very little time to do it.  But don't worry!

I'm in the process of setting up a LiveMeeting event on the third week of every month at 11:30 am (EST).  It will be a "PowerShell Code Snippet" 30-minute demo with the purpose of showing blocks of code from myself and our PowerShell community.  Hopefully, this will speed up your learning by reusing these blocks of code.

Stay Tuned to this blog site for more information.

Almost forgot… Congrats to all the winners who found the markings on the flyers I provided.  I gave away 2 SAPIEN PowerShell V2 ebooks (X), 2 PluralSight one-month training subscriptions (PL), and 2 Diskeeper Pro Premier defrag licenses for desktop (D).

Happy PowerShelling!!

SQL Server SMO Loading Assemblies Gotchas

One thing I've seen most DBAs blogging about with SQL Server and PowerShell is the use of the V1 way to load the SMO assemblies:
[void][reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo");
And there's a reason for still using this method, which I will show in a minute. Now, PowerShell Version 2.0 introduced a new command, "Add-Type", used to simplify loading assemblies:
Add-Type -AssemblyName "Microsoft.SqlServer.Smo";
And it works great!  Except when you're working with different versions of SQL Server on the same box.  Then you will experience some issues loading the SMO assembly using "Add-Type".
Here’s the error message:
PS [3] > Add-Type -AssemblyName "Microsoft.SqlServer.Smo";
Add-Type : Could not load file or assembly 'Microsoft.SqlServer.Smo, Version=9.0.242.0, Culture=neutral, PublicKeyToken
=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified.
At line:1 char:9
+ Add-Type <<<<  -AssemblyName "Microsoft.SqlServer.Smo";
+ CategoryInfo          : NotSpecified: (:) [Add-Type], FileNotFoundException
+ FullyQualifiedErrorId : System.IO.FileNotFoundException,Microsoft.PowerShell.Commands.AddTypeCommand
Addtype_SMO_error
I'm sure many of you may have seen this error, and that's why people sometimes go back to the old faithful V1 format.
Now, you need to be careful, because you might be loading the wrong SMO assemblies to use against your database.  But in most cases I'm sure it's OK to load the latest one, because it can handle older versions.  This is why you must test all your scripts before putting them in your production environment.
Well, the old and faithful V1 "[reflection.assembly]::Load…" way of loading your SMO assemblies works because it grabs the latest version from your Windows "Assembly" folder.  Here's an example:
v1 load SMO assembly
In my environment I have SQL Server Express, SQL Server 2008 R2 Developer, and SQL Server Denali installed.  So, when I use the V1 way to load SMO, you will notice in the PSConsole that it loads the assembly from SQL Server SMO version 11.0.0.0.  Notice that the V2 command "Add-Type" was trying to find 9.0.242.0 and failed.  Even if this error might be due to a registry issue, this is not the assembly I want to load.
Now, can we still use the V2 "Add-Type" command to load the correct version of these assemblies?  Hell yeah!!
Here is how:
1. You need to confirm the full path and filename of the SMO DLL, which is located in the "C:\Windows\Assembly" folder.
2. It helps to first load the latest SMO assembly using the V1 way, then use the following .NET line to list the location of that assembly:
[system.appdomain]::CurrentDomain.GetAssemblies() | Select location
Check PowerShell session loaded Assemblies
3. Now, look for your loaded SMO assembly location, then highlight it and press 'Enter' to copy the line:
C:\Windows\assembly\GAC_MSIL\Microsoft.SqlServer.Smo\11.0.0.0__89845dcd8080cc91\Microsoft.SqlServer.Smo.dll
4. Use the V2 "Add-Type" command with the "-Path" parameter, and paste (right-click with the mouse) to add the copied SMO full path.  This value must be a string, so put quotes around it.  Don't press Enter yet!!
Sample line copied, but with the version changed to "10.0.0.0":
"C:\Windows\assembly\GAC_MSIL\Microsoft.SqlServer.Smo\10.0.0.0__89845dcd8080cc91\Microsoft.SqlServer.Smo.dll"
In order to load a different version, you need to manually change the version number.  In my case, I need to change from 11.0.0.0 to my SQL Server SMO version 10.0.0.0.  This is why you need to check which SMO versions are already installed on your system under your Windows Assembly folder.
Windows loaded assemblies
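If you want to double-check which SMO versions are sitting in the GAC before editing that version number, a quick directory listing will do.  This is just a sketch, assuming the standard GAC_MSIL location shown above:

[sourcecode language="powershell"]
# Each subfolder name is an installed SMO version, e.g. 10.0.0.0__89845dcd8080cc91:
Get-ChildItem "C:\Windows\assembly\GAC_MSIL\Microsoft.SqlServer.Smo" | Select-Object Name
[/sourcecode]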
Finally, pressing Enter on the following command will load the SMO assembly for SQL Server 2008/R2 (this is a one-liner):
Add-Type -path "C:\Windows\assembly\GAC_MSIL\Microsoft.SqlServer.Smo\10.0.0.0__89845dcd8080cc91\Microsoft.SqlServer.Smo.dll"
I know it's a long line.  Remember, you only type it once, then copy/paste it into other scripts.
Don't forget to save this line, then open a new PSConsole session and run it.
Here's a screenshot of how it's done:
New Powershell Session and loaded SMO 2k8 Assemblies
Keep in mind, if you’re wondering what assemblies are loaded during your PowerShell session, then use the following one-liner:
[system.appdomain]::CurrentDomain.GetAssemblies() | Select location
To test that your SMO loaded successfully, use the following SMO lines to connect to your local SQL instance:
$MySQL = new-object('Microsoft.SqlServer.Management.Smo.Server')
$MySQL.Information.version
Happy PowerShelling!!

SQL Server Denali Side-by-Side 2008 R2 BIDS issue workaround

As we are all desperately trying to get into the new SQL Server Denali CTP3, we are going to face a few stones in the road.  If you have a clean virtual machine to install it on, then you are OK.  But if you are going to create another instance side-by-side with another SQL Server engine, then keep this in mind.

First, about the installation experience: there are good points.  As far as my installation goes, it was smooth.  Prior to the CTP3 installation I had to remove my previous SQL Denali CTP1, and it went without a problem.

In case you're a SQL developer with multiple versions of SQL Server installed, you must be aware that you may have a problem opening your SSIS 2008 R2 solution in the R2 "Business Intelligence Development Studio" (BIDS) after installing CTP3.

Now, the last time I did a SQL Server 2008 R2 side-by-side installation, I had SQL Server 2005 BIDS installed and working properly, but installing SQL Server 2008 R2 did affect my SQL Server 2005 BIDS.  So, I experienced the same situation after installing CTP3 and trying to use SQL Server 2008 R2 BIDS.

Here’s the error you will get when trying to open a previously created SSIS solution:

R2 BIDS error after installing SQL Denali CTP3

What’s the workaround?

1. You need to have your SQL Server 2008 R2 media.

2. Go to your Control Panel, add/remove programs.

3. Right-Click on “Microsoft SQL Server 2008 R2 (64-bit)“, and select “Remove“.

4. Follow all the prompts, select the SQL instance, check to remove only the "Business Intelligence Development Studio", and complete the uninstall.

5. I would recommend rebooting at the end of this process.

6. Now, you can reverse this process by Right-Click on “Microsoft SQL Server 2008 R2 (64-bit)” again, and select “Add“.

At the end of this process, you will be able to successfully open your SSIS R2 solution in your R2 BIDS.

Let’s keep trying more stuff with SQL Denali CTP3.

Using Get-TSQLTableRecordCount dual syntax

Here's one of the functions that's part of my recent SQLDevTools module, "Get-TSQLTableRecordCount", which is a good example of a multi-purpose command.  The main purpose of this function is to display the record count of all tables in a database.  But, at the same time, it can also provide a list of all user databases.

Usage #1  – To show all databases:

In case you don't remember the database names in your SQL Server instance, this command gives you the ability to list all databases.  Here's an example:

PS> Get-TSQLTableRecordCount -SQLInstanceName MAX-PCWIN1 -ShowAllDatabases
Result:

Processing SQLServer Instance: MAX-PCWIN1, DatabaseName: ALL

SQLServerName DatabaseName
————- ————
[MAX-PCWIN1]  AdventureWorks
[MAX-PCWIN1]  AdventureWorks2008R2
[MAX-PCWIN1]  Developer
[MAX-PCWIN1]  ReportServer
[MAX-PCWIN1]  ReportServerTempDB

Usage #2 – To display the record counts for all tables in the selected database:

Now that you know the database name, you're ready to use the command to get the record counts for all your tables.  Here's an example:

PS> Get-TSQLTableRecordCount -SQLInstanceName MAX-PCWIN1 -DatabaseName Developer
Result:

Processing SQLServer Instance: MAX-PCWIN1, DatabaseName: Developer

Parent      DisplayName                    RowCount DataSpaceUsed
——      ———–                    ——– ————-
[Developer] dbo.DatabaseDiskStats                 0             0
[Developer] dbo.Employees                         9           240
[Developer] dbo.ServersToCheckDiskStats           0             0
[Developer] dbo.SurfSpotTable                     4             8
[Developer] Production.ProductDescription       762           144
[Developer] Production.ProductPhoto             101          2208
[Developer] Purchasing.PurchaseOrderDetail        0             0
[Developer] Sales.SalesPerson                     0             0
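This is not the module's implementation, but if you're curious how the same numbers can be pulled with plain SMO, here's a hedged sketch reusing the SMO pattern from the earlier posts (SQLPS/SMO already loaded; server and database names are placeholders):

[sourcecode language="powershell"]
# Roughly equivalent output using the SMO Table properties RowCount and DataSpaceUsed:
$MySQL = new-object Microsoft.SqlServer.Management.Smo.Server "MAX-PCWIN1"
$MySQL.Databases["Developer"].Tables |
    Select-Object Parent, @{Label="DisplayName";Expression={$_.Schema + "." + $_.Name}}, RowCount, DataSpaceUsed |
    Sort-Object DisplayName
[/sourcecode]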

Quick Export of SQL Data to an Excel CSV file…

Here's a quick way to get data out of one of your SQL Server tables and dump it into a CSV file.  Only 4 steps:

1. Make sure you are in the SQLPS console, or have the community SQLPS (or SQLPSv2) module loaded (i.e., Import-Module SQLPS) in either your PowerShell console or the ISE.

2. Have your query ready, then save it into a PowerShell variable:

$sql = "SELECT * FROM [AdventureWorks].[Production].[Location]"

3. The next one-liner will build the PowerShell object and export it to a *.csv file (this executes against localhost only; use -ServerInstance with the -Database parameter if you need to execute the query elsewhere):

Invoke-Sqlcmd -query $sql | Export-Csv -Path c:\temp\excelfile.csv -NoTypeInformation

4. Last line will open the file in Excel:

ii c:\temp\excelfile.csv
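Put together, the whole thing is a single pipeline.  A sketch (server, database, and output path are placeholders):

[sourcecode language="powershell"]
Import-Module SQLPS -DisableNameChecking
Invoke-Sqlcmd -ServerInstance "Your-Machine-name\InstanceName" -Database AdventureWorks `
    -Query "SELECT * FROM Production.Location" |
    Export-Csv -Path C:\temp\excelfile.csv -NoTypeInformation
ii C:\temp\excelfile.csv    # ii is the alias for Invoke-Item
[/sourcecode]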

This surely beats going to SSIS for quick results!

For more information about "Invoke-Sqlcmd", use the help in PowerShell:

Help Invoke-SQLcmd -detailed

Just released SQLDevTools Module (Alpha 0.1.0)

I'm excited to release this new version of SQLDevTools today, Monday June 6th.  Here's a list of the updates:

##- SQLDevTools Alpha 0.1.0 Released 06/06/2011:
##-
##- Functions fixed;
##   1. Get-TSQLTableRecordCount dual syntax issue not working properly.
##  2. Write-TSQLScriptIndex parameters 'DropIndexOnly' and 'CreateIndexOnly' were misspelled.
##  3. Get-TSQLTableTriggers not working; added modify_date field as last_modified.

## – Updated two functions to include renaming of the destination table: (need to add help)
##  1. Write-TSQLScriptDFConstraint – Help updated
##  2. Write-TSQLScriptInsertSelect – Help updated

## – Added new functions, including two to change the destination table name in the PS Object: (Help included)
##  1. Rename-TSQLScriptedTableName
##  2. Rename-TSQLScriptedTableIndexes
##  3. Get-TSQLUserDefinedDataTypes
##  4. Get-TSQLXmlSchemaCollections
##   5. Get-TSQLUserDefinedFunctions
##   6. Write-TSQLScriptXmlSchemaCollections
##  7. Write-TSQLScriptUserDefinedFunctions
##  8. Write-TSQLScriptUserDefinedDataTypes

Next release -> will include Views, User Stored-Procedures, and Database triggers.

Basically, this covers the basic need to script out most of the objects needed to rebuild a table, or a series of tables.  It takes into account Identity, Computed, and User Defined objects.  Also, I included the ability to alter (rename) your scripted destination table name and index objects.

For testing these functions, I've used Microsoft's "AdventureWorks", which is full of scripting surprises.  Keep in mind, I've used a mix of both SMO and embedded T-SQL to assist in generating the results.

Here’s a teaser sample using new function to script User Defined Data Types:

[sourcecode language=”powershell”]
## – Setting variables:
$SQLInstanceName = "YourSQLinstanceName"
$SourcedbName = "AdventureWorks"
$DestinationdbName = "Developer"

## – Sample – Write a UserDefinedDataTypes script, and display results in console:
Write-TSQLScriptUserDefinedDataTypes -SQLInstanceName $SQLInstanceName -DatabaseName $SourcedbName `
-DestinationdbName $DestinationdbName -DropUDDTOnly
Write-TSQLScriptUserDefinedDataTypes -SQLInstanceName $SQLInstanceName -DatabaseName $SourcedbName `
-DestinationdbName $DestinationdbName -CreateUDDTOnly

## Save PS Object to a file, and then view it:
[string] $ScriptToFile = Write-TSQLScriptUserDefinedDataTypes -SQLInstanceName $SQLInstanceName -DatabaseName $SourcedbName `
-DestinationdbName $DestinationdbName
$ScriptToFile | Out-File c:\Temp\SQLScript01.sql
ii c:\Temp\SQLScript01.sql
[/sourcecode]

Sample executing Write-TSQLScriptUserDefinedDataTypes

Stay Tuned!!

Preview of my new set of SQL Developer PowerShell functions

Here's a preview of what I've been doing for work.  I created about 12 new functions to assist me in automating the task of copying table(s) from one database to another.  This process will also rebuild the data, indexes, and constraints from the scripts extracted along the way.

All these functions use embedded T-SQL scripts to assist in scripting out the SQL table objects.  Eventually, I will end up with a module that loads all my functions when needed.

Just recently, I updated two of the main functions I posted in the Microsoft Script Repository:

1. "Function Write-SQLCreateTable" – Updated 05/25/2011: Added three new switch parameters so we can create individual scripts: DropTableOnly, CreateTableOnly, and TurnOffIdentity.

2. "Function Write-SQLInsertSelect" – Updated 05/25/2011: Added the ability to detect whether there's an Identity column so it can add the "Set Identity_Insert …" line.  Also added a Set-Location to go back to your "Home Drive" when the process completes.

These two updated functions require you to have the SQLPS module or they won't work.  That's why I include both the Import and Remove of the SQLPS module.  Also, these functions will not script out Indexes and Constraints.

In my new SQLDevTools module, the functions will not import/remove the SQLPS module.  It will include functions to script out Indexes, Check Constraints, Foreign Keys, and even Triggers.  Function names have changed slightly, and a test-functions script is included to assist in rebuilding a table in another database.  More functions will be added to this module.  I will make it available to the community through CodePlex when it's ready.

Here’s a preview snapshot of my new functions:

List of SQLDEVTOOLS functions

Special Thanks to the following bloggers for their contributions:

1. Martin Bell  (UK SQL MVP) – August 2009 posts: http://sqlblogcasts.com/blogs/martinbell/archive/tags/Scripting/default.aspx

2. Aaron Nelson (SQLVariant) – November 2010 post: http://sqlvariant.com/wordpress/index.php/2010/11/sql-source-control-powershell-and-tfs-ftw/