Tuesday, December 14, 2010

Capture SQL Server Agent WMI Event Alerts with SQL Sentry Event Manager

SQL Sentry Event Manager is able to monitor SQL Server Agent alerts and react to them out of the box. One limitation of that functionality crops up when dealing with Agent alerts that are set to fire off of WMI events. While WMI event-based alerts aren’t what I would call commonplace, they do make it easy to do some interesting things with an Agent alert that might normally take a bit more customization.
One thing that I know we can alert on using WMI events is when a database is dropped, so to keep the scope of this post small we’ll be using that for our Agent alert.
The first thing you want to do is create the alert using SQL Server Management Studio (SSMS from now on). In the SSMS Object Explorer, look for the SQL Server Agent node, then expand it until you see the folder named “Alerts”. Right-click the “Alerts” folder and select “New Alert…”. You should get a dialog similar to the one below:
image
First, give the alert a name. You can call it whatever you want, but try to keep it simple, because we’re going to use this name someplace else later, and the simpler it is, the easier it will be to use. I simply named mine for the purpose it will serve: “Database Dropped”.
Under Type, select “WMI event alert”. Namespace should default to “\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER”, but if it doesn’t, enter that value in the Namespace field. Last, under Query, enter “SELECT * FROM DROP_DATABASE”.
In very short terms, we’re telling SQL Server Agent to run an asynchronous query against the SQL Server WMI Provider for Server Events that will return an event whenever someone executes a “DROP DATABASE” command. It’s not my goal here to get too far into the subject of WMI or even the SQL Server WMI Provider for Server Events, but you can find a wealth of information on it at http://msdn.microsoft.com/en-US/library/ms180560(v=SQL.100).aspx.
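If you would rather script the alert than click through the dialog, the same thing can be done with msdb.dbo.sp_add_alert. This is just a sketch assuming a default instance; on a named instance the namespace ends with the instance name instead of MSSQLSERVER:

-- Sketch: create the WMI event alert with T-SQL instead of the SSMS dialog
EXEC msdb.dbo.sp_add_alert
    @name = N'Database Dropped',
    @wmi_namespace = N'\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER',
    @wmi_query = N'SELECT * FROM DROP_DATABASE';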
Now, just click OK, and leave SSMS running. The alert will actually fire as is, but what we’re going to do is create a custom alert trap that will feed SQL Sentry enough information to display this alert on a calendar and perform actions when conditions involving the alert are met. We’ll come back in a bit and do one more thing to it, but for now it is configured the way we want.
The way we’re going to have the Agent alert tell SQL Sentry all about the event is with an Agent job. An alert already watched by SQL Sentry uses the SQL Sentry Alert Trap Job that is maintained by SQL Sentry. We’re going to create a special alert trap job, just for our alert, that does some of the same things the default SQL Sentry alert trap does, but provides some different information. The most important piece of information is going to be the name of the Agent alert that this event data is from.
So, back in SSMS I create a new job called “Database Dropped Alert Trap”. I give the job one “Transact-SQL script” step and drop the following code into the “Command” field:
set quoted_identifier off

begin tran

declare @msg varchar(256)

set @msg = 'Database [' + '$(ESCAPE_SQUOTE(WMI(DatabaseName)))' + '] was dropped by [' + '$(ESCAPE_SQUOTE(WMI(LoginName)))' + '] from computer [' + '$(ESCAPE_SQUOTE(WMI(ComputerName)))' + ']'

exec msdb.dbo.spTrapAlert_20 '50000', '0', '$(ESCAPE_SQUOTE(WMI(DatabaseName)))', @msg

update msdb.dbo.SQLSentryAlertLog_20
set AlertName = 'Database Dropped'
where ID = @@IDENTITY

commit tran
The above code may look a little complicated, but all we’re really doing is grabbing some basic information from the WMI event. SQL Server Agent provides that data through tokens formatted like “$(WMI(PropertyName))”, and those tokens have to be escaped a certain way, thus the “ESCAPE_SQUOTE” wrappers. Then we’re using the stored procedure SQL Sentry uses to add the alert event to the alert log table in msdb. Finally, we update the most recently added row, giving it the name of our alert (in my case ‘Database Dropped’), so that the SQL Sentry polling service can identify and associate the object that generated the event data. This all happens inside of a short transaction to make sure it runs as a single operation.
Don’t create any schedule for this job. Remember, it will be run as the target for our Agent alert, so no schedule is necessary.
For the next part, we need to perform the steps in order, so I will number them.
1.) Use the SQL Sentry console application to ensure that the new SQL Server Agent Alert (for me it’s named “Database Dropped”) is watched by SQL Sentry Event Manager.
2.) Back in SSMS, right-click the Agent alert and select Properties.
3.) In the alert properties dialog, select “Response” from the left side panel. If you have done everything to this point, it should look like the image below:
image
4.) In the drop-down under “Execute job”, select the Agent job that we created specifically for this alert:
image
5.) Click OK.
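If you scripted the alert earlier, steps 2 through 5 have a T-SQL equivalent as well. A minimal sketch, assuming the alert and job names used in this post:

-- Sketch: attach the trap job as the alert's response instead of using the Response page
EXEC msdb.dbo.sp_update_alert
    @name = N'Database Dropped',
    @job_name = N'Database Dropped Alert Trap';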
That is all there is to it. Using the code from above for our job step, whenever someone drops a database from this server, the alert should fire and you will get an event in SQL Sentry telling you what database was dropped, what login executed the drop command, and what machine it was executed from. You can use the standard SQL Sentry Event Manager conditions and actions off of this alert, including Send Email and Send Page to be notified proactively.
You can experiment with other WMI events as well, such as CREATE LOGIN or DROP LOGIN. The WMI query is what needs to change, along with some of the WMI properties gathered in the alert trap job step.
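For example, an alert for dropped logins would use the query below in place of the DROP_DATABASE query. Treat the token name as an assumption to verify against the WMI provider documentation linked above, since the property set differs between event classes:

-- WQL query for the Agent alert
SELECT * FROM DROP_LOGIN
-- In the trap job step, a token such as '$(ESCAPE_SQUOTE(WMI(ObjectName)))'
-- would then carry the name of the login that was dropped.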
If anyone out there wants some assistance getting this set up, or if you just have questions, head over to forum.sqlsentry.net and post a new thread or just email us at support at sqlsentry.net.
Until next time,
-JRH

Friday, October 1, 2010

On the lighter side…

If you look to the right under my photo you will see a couple of pieces of information, one of those being my Xbox Live gamer tag. Given that, you might guess I’m a bit of a gamer. The truth is, it’s more than just a bit.

Rarely will I mix in any kind of gaming with my work-related presence, but I figure that if I can have my gamer tag on my blog, then what’s wrong with blogging about a game here and there? So here it comes.

I am floored by the complete awesomeness that is Starcraft II: Wings of Liberty. This is (IMHO) by far the best RTS ever to grace us with its presence. The level of detail and the quality of production in this game are nothing short of amazing and brilliant.

I haven’t been so immersed in a single player campaign for any game since Demon’s Souls on the PS3, and this is a strategy title! It draws you into the Starcraft universe in ways that I never imagined it would prior to release.

I’m not doing a full review here; I think the game speaks for itself. Suffice it to say that it will hold a spot on my PC for a long time to come. The *ever so long* wait was indeed worth it.

That said (here comes the work tie-in), I know lots of folks have been waiting on some of the great new features in SQL Sentry 6.0. Those that got a sneak peek back at Spring SQL Connections may have been waiting the longest, but I want to assure you that the wait is indeed worth it.

You have to ask yourself, “Do I want something now, that may be ok, but might not be everything I envisioned, or do I want to wait for features and quality that will blow my socks off?”. I’m here to tell you that yes, you want to wait.

I’m not comparing SQL Server tools to video games, but I am pointing out that true quality is always, and certainly, worth waiting for. If you don’t believe that, go out and pick up a copy of Starcraft II: Wings of Liberty. Let me know when you find your socks.

Thanks,
-JRH

Monday, September 20, 2010

SQL Saturday #46 (Raleigh, NC)

This past weekend (09/18/2010) marked the SQL Saturday Event for Raleigh, NC.

It was organized by long-time friend Brett Tomson (LinkedIn) and members of the area SSUG, and they did a great job putting the event together.

First, the night before, we were able to enjoy a nice dinner at a local place called “The Pit”. It was indeed some of the best BBQ (Carolina style) that has ever passed my lips.

The day of, I was manning the SQL Sentry table with Steve Wright (Twitter), and we were both able to do a mid-day session for attendees. I did a one hour session called “Recipe for a Happy DBA: A Guide for SQL Server Developers”. It’s basically a talk for developers on how to do things in a way that gets you cool points with your DBA team. I had a small, but very dedicated, group of developers that were interested in the topic, and I hope they had as much fun attending as I had presenting.

Something neat that the hosts of the event did was to plan small giveaways for the end of every session. These were mostly books relevant to the track the session was part of. This was a first for me to see at one of these events, and I thought it really added a nice flair. It also gave attendees more incentive to sit in on some of the less popular topics.

All in all, I give two thumbs up to the folks that planned and executed for the Raleigh, NC SQL Saturday event. I look forward to more great things to come next time.

Thanks,
-JRH

Tuesday, September 7, 2010

Monitoring Replication Latency With SQL Sentry Event Manager

To start, let me state now that this post applies to a plain vanilla transactional publication on SQL Server 2005 and higher that does not allow anonymous subscribers. That’s not to say it won’t make sense for other topologies, but I have not explored them, and some of the concepts involved don’t apply to, or even exist for, them anyway.

I recently had a really good request come along asking if SQL Sentry can send notification when replication latency breaks a specified threshold.

I like this type of request, because it’s something that isn’t addressed “out of the box”, but I knew that I could put something together with SQL Server functionality and some tools provided by SQL Sentry. I also admittedly have a little bit of a soft spot for replication going back to early 2002.

There are actually two solutions I’ve come up with for this. They both have pros and cons, so there is really nothing wrong with implementing both.

The first, and probably the simplest to set up, is to use the performance counter collection facility built into SQL Sentry Event Manager (details). Microsoft has seen fit to provide us with some performance counters specifically for tracking replication latency, both for distribution and for the log reader.

The setup procedure is outlined here. The only difference is that you will use the performance counters below, and not the counters discussed in the KB article.

One of the “not so great” aspects of using this method is that some of these counters are specific to a publication/subscription pair, and if your subscribers change, you will have to frequently add and remove counters from this job in the SQL Sentry console application. If your subscriptions are fairly static, then you have no worries here.

The counters you want to add are as follows:

  • SQLServer:Replication Dist.
    • Dist:Delivery Latency
  • SQLServer:Replication Logreader
    • Logreader:Delivery Latency

For the first, you will have a counter instance per publication/subscription combination. This is the one you’ll need to keep an eye on if you’re adding and dropping subscriptions regularly.

Summing these two counters gives you the time it takes for a transaction to go from the publisher to the subscriber. Unfortunately, the counter values can’t be summed in SQL Sentry and compared to a threshold, and there is no counter that sums them for you, so you will have to track the thresholds separately using the “Performance Counter: Threshold Max” condition for each one. This too is a minor shortcoming of using this method.
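If you just want to eyeball the combined number yourself, both counters are also exposed through sys.dm_os_performance_counters, so a quick ad hoc query on the distributor can show the two values side by side. This is only a sketch; the instance names vary per publication/subscription pair, and on a named instance the object names carry an instance prefix:

-- Sketch: view log reader and distribution delivery latency counters together
select object_name, counter_name, instance_name, cntr_value
from sys.dm_os_performance_counters
where counter_name in ('Logreader:Delivery Latency', 'Dist:Delivery Latency')
order by object_name, instance_name;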

Even with a few hurdles though, setting things up using performance counters offers some advantages. It’s not overly complicated to set up, since you can literally do everything you need to do right from the SQL Sentry console application; there is no need to add a stored procedure to your publication database, and you can add the counters from any server to any watched job. That means you don’t technically have to use a SQL Sentry license on each server that has a publication to monitor. You would only need a license for the server running the job that has the counters attached.

The other method I worked out is a bit more technical, but it provides a way to alert on the latency for all subscriptions at once for a given publication, and bases the threshold on the total latency instead of the two separate values.

Here we’re going to add a stored procedure to the publication database, then create a SQL Server Agent job to run the procedure on schedule.

I will list the code first, then explain what it is doing:

-- This proc must be created in the publication database
-- use CTRL + SHIFT + M to replace variables
use <DatabaseName, varchar(35), Mydatabase>
go
 
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[<StoredProcedureName, varchar(50), MyProc>]') and OBJECTPROPERTY(id, N'IsProcedure') = 1)
drop procedure [dbo].[<StoredProcedureName, varchar(50), MyProc>]
GO
 
/* Prototypes:
exec <StoredProcedureName, varchar(50), MyProc> 'MyPublication', 8, 1, 'myemail@mycompany.com'
exec <StoredProcedureName, varchar(50), MyProc> @pub = 'MyPublication', @emailResults = 1, @emailRecipients='me@mycompany.com;you@yourcompany.com'
exec <StoredProcedureName, varchar(50), MyProc> 'MyPublication'
 
NOTE: Email functionality works on SQL Server 2005 instances or greater watched by SQL Sentry.
*/    
    
    create proc <StoredProcedureName, varchar(50), MyProc>
        @pub sysname,
        @maxLatencySeconds int = 5,
        @emailResults bit = 0,
        @emailRecipients nvarchar(255) = '',
        @dropTable bit = 0
AS
 
set nocount on
    
    -- table def matches sp_helptracertokenhistory output 
    if object_id('tempdb.dbo.##repl_latency_stats') is NULL
    begin
        create table ##repl_latency_stats
            ( distributor_latency int NULL
            , subscriber varchar(1000) NULL
            , subscriber_db varchar(1000) NULL
            , subscriber_latency int NULL
            , overall_latency int NULL );
    end
 
    --clean out the temp table
    truncate table ##repl_latency_stats
 
    --send the token
    declare @tokenHandle bigint
    exec sp_postTracerToken @publication = @pub, @tracer_token_id = @tokenHandle output;
 
    --just sit until the max latency * 2 runs out so the token has time to get there
    declare @maxWait int = @maxLatencySeconds * 2;
    if @maxWait > 30
    begin
        set @maxWait = 30;
    end
    declare @waitfor varchar(10) = '00:00:' + CAST((@maxWait) as varchar(2));
    waitfor delay @waitfor;
 
    --get the results
    insert ##repl_latency_stats (distributor_latency, subscriber, subscriber_db, subscriber_latency, overall_latency)            
    exec sp_helpTracerTokenHistory @pub, @tokenHandle;
    
    --get rid of history
    declare @cutoff datetime = getdate();
    exec sp_deleteTracerTokenHistory @publication = @pub, @cutoff_date = @cutoff;
 
    --process results    
    if (@emailResults > 0 AND (select COUNT(*) from ##repl_latency_stats where overall_latency > @maxLatencySeconds) > 0)
      begin
      
        declare @subject varchar(100)
        declare @message varchar(100)
        declare @query varchar(500)
 
        select
            @subject = 'Replication Latency Threshold Exceeded',
            @message = 'Replication latency for the following subscriptions has exceeded ' + CAST(@maxLatencySeconds as nvarchar(2)) + ' seconds:' + char(10) + char(13),
            @query =
                'set nocount on
                select 
                    cast(subscriber as varchar(25)) as [Subscription Server],
                    cast(subscriber_db as varchar(40)) as [Subscription Database],
                    cast(distributor_latency as varchar(15)) as [Distributor Latency],
                    cast(subscriber_latency as varchar(15)) as [Subscriber Latency],
                    cast(overall_latency as varchar(15)) as [Latency Overall]
                from 
                    ##repl_latency_stats
                where overall_latency > ' + CAST(@maxLatencySeconds as nvarchar(2));
 
        exec msdb.dbo.sp_sentry_dbmail_20
            @recipients = @emailRecipients
        ,    @subject = @subject
        ,    @body = @message
        ,    @query = @query
      end
    else
      begin
            select 
                subscriber as [Subscription Server],
                subscriber_db as [Subscription Database],
                distributor_latency as [Distributor Latency: Seconds],
                subscriber_latency as [Subscriber Latency: Seconds],
                overall_latency as [Latency Overall: Seconds]
            from 
                ##repl_latency_stats;
      end
      
    IF @dropTable = 1
    begin
        drop table ##repl_latency_stats;
    end;
      
return

First off, I wrote the procedure using replacement variables, so you can name it whatever you want easily by using CTRL + SHIFT + M in SSMS. That is important to point out, since it won’t run until that is done.

The procedure takes the name of the publication to monitor, the latency threshold to use, 1 or 0 for whether to send an email alert if the threshold is broken, a list of standard format email addresses (separated by ‘;’), and 1 or 0 for whether or not to drop the temp table, which you generally won’t want to do, since it needs to be there when the email alerts are picked up.

We’re using functionality that was added in SQL Server 2005 called a tracer token. Basically we’re doing the same thing a plumber does when he drops a microchip into your plumbing and tracks it through to find a clog, only we’re just seeing how long it takes to get from the publisher to the subscriber.

The procedure creates the token, then waits long enough for it to make it through and drops the results into a global temp table.

If you’ve specified to email alerts about breaking the threshold, those results are added to the SQL Sentry email Queue for this server (details), which is why the server needs to be monitored by SQL Sentry Event Manager. The alert will be delivered by the SQL Sentry server service, including the current contents of the temp table that break the runtime threshold, during the next message monitor poll.

Keep in mind that the threshold in this case is on the total latency, which is in seconds. Also, if the tracer token doesn’t make it to the subscriber within 30 seconds, you won’t get any results. This can be increased easily by changing some values just above the “WAITFOR DELAY” call, but 30 seconds should already be well beyond what is needed to get full results.
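As mentioned earlier, the procedure is meant to be run by a SQL Server Agent job on a schedule. A minimal sketch of that job is below; the job name, the schedule interval, and the procedure and database names are just placeholders for whatever you chose when you replaced the template variables:

-- Sketch: run the latency check every 5 minutes via a SQL Server Agent job.
-- 'MyDatabase' and 'MyProc' stand in for the names used with CTRL + SHIFT + M above.
EXEC msdb.dbo.sp_add_job @job_name = N'Replication Latency Check';

EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'Replication Latency Check',
    @step_name = N'Post tracer token and check latency',
    @subsystem = N'TSQL',
    @database_name = N'MyDatabase',
    @command = N'exec dbo.MyProc @pub = ''MyPublication'', @maxLatencySeconds = 5, @emailResults = 1, @emailRecipients = ''dba@mycompany.com'';';

EXEC msdb.dbo.sp_add_jobschedule
    @job_name = N'Replication Latency Check',
    @name = N'Every 5 minutes',
    @freq_type = 4,              -- daily
    @freq_interval = 1,
    @freq_subday_type = 4,       -- units of minutes
    @freq_subday_interval = 5;

EXEC msdb.dbo.sp_add_jobserver @job_name = N'Replication Latency Check';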

As always, anyone interested in setting this up that wants more information can post on forum.sqlsentry.net or email support at sqlsentry.net.

-JRH

Friday, July 30, 2010

What Happened There? Post #1

This is the first post in a series. I’m going to do some different things to a test instance of SQL Server, show you what it looks like in SQL Sentry Performance Advisor, then explain what I did and why Performance Advisor looks that way when it happens.

This should be a great series for anyone using SQL Sentry Performance Advisor day to day. If you’ve ever looked at those performance metrics and thought “what caused that?” it could be revealed in one of these posts.

The systems used here are basically “nothing special” VMs running on Hyper-V in a Windows 2008 R2 cluster. I’m sticking with SQL Server 2008 R2, but generally most things will be relevant to 2005 and some to 2000. I will try to keep things as close as possible to something that might really happen in a production environment, but do keep in mind that it is in a controlled test environment, and you may see some things that you know yourself would really (almost) never happen.

With the intro out of the way, let’s take a look at the Performance Advisor dashboard in history mode for today’s “What happened there?”.

image

Notice in the above screenshot I have circled the “SQL Server Memory” chart. That is really for the benefit of focusing in on that area for this post. Many may notice that this server has a lot of other scary things going on, but this time we’re focusing on SQL Server memory.

What we see in the chart is that the memory used by both the buffer cache and the object cache took a steep dive at about 9:13AM, and then again at around 9:15AM.

There are different things that can cause SQL Server to dump cache. You see it a little when memory pressure exists and things are constantly being flushed out to make room. There are other symptoms that go along with that, like higher lazy writes and a low Page Life Expectancy (PLE), but in this case (even though there is normally memory pressure on this server) we see a more drastic, immediate change.

Normally when there is a cache dump this large you can count on some pretty nasty behavior from the SQL Server. If it had gone all the way to zero I might even think the SQL Server services had crashed or been restarted, but in this case there is a perfectly logical, and safe (sort of) explanation.

So what happened there? This is a test server, and generally before testing the performance implications of any potential change we dump the cache on purpose in order to get more accurate results.

So, in this case the dramatic change in memory used by SQL Server is just the result of my running the following statements:

DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
Oh, and yes, I did it twice just for good measure, which explains the second dip at 9:15AM.
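For anyone repeating this on a test box of their own, the documented guidance is to issue a CHECKPOINT first so dirty pages are written out and DROPCLEANBUFFERS can empty the entire buffer pool. A minimal sketch of the full sequence:

-- Write dirty pages to disk so only clean buffers remain
CHECKPOINT;
-- Empty the buffer cache
DBCC DROPCLEANBUFFERS;
-- Clear the plan (object) cache
DBCC FREEPROCCACHE;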
Now I have to go tell the QA team why I’m messing with their test boxes.
Until next time,
-JRH

Tuesday, July 27, 2010

When Is an Administrator NOT a Sysadmin?

When you are working on a default install of SQL Server 2008, that’s when!

In the past, when you installed SQL Server, the local Administrators group was automatically granted SA privileges. With SQL Server 2008 that is no longer the case. I think this is in general a good move and forces us to be more aware of who can do crazy things to our database servers, but it’s also not overly obvious that this change was made until someone gets “access denied” or “login failed for…”.

With a tool like SQL Sentry that needs a certain level of access, it’s important to keep in mind that during or after installing a new instance of SQL Server 2008 or greater you will need to grant the service account the necessary privileges. This is the case even if the account is already a local administrator or even a domain administrator.
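On SQL Server 2008 that usually means adding a login for the service account and granting it the sysadmin role explicitly. A minimal sketch, using a hypothetical domain account named MYDOMAIN\SQLSentrySvc in place of your real service account:

-- Sketch: grant the monitoring service account sysadmin rights explicitly
CREATE LOGIN [MYDOMAIN\SQLSentrySvc] FROM WINDOWS;
EXEC sp_addsrvrolemember @loginame = N'MYDOMAIN\SQLSentrySvc', @rolename = N'sysadmin';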

I’ve posted this to our KB as well here.

-JRH

Wednesday, July 7, 2010

Wild Blackberries!

This is a bit off topic, but I couldn’t help myself.

For anyone that grew up in rural North Carolina one of the greatest things about mid-summer to early fall has to be the wild blackberries. The Charlotte, NC suburbs are not quite in the mountains, and not quite off of them either so they are the prime growing area for blackberry bushes. You can find them just about anywhere from the local park to the side of the road or even the parking lot at your local grocery store.

Our offices were moved toward the end of last year, and at the new location there is a nice path where we can go out for a walk and get some exercise at lunch time (I certainly need it). We’ve been walking out there for a while, watching patiently as the berries all along the side went from green to red and now finally to black.

Recently, Steve (Twitter) and I decided we would grab a bucket and pick some berries during our walk. The last time I did this I may have been 11 or 12. For me, picking berries takes me back to a time when the only thing I was worried about was how many I needed to have mom bake a pie or cobbler (or both…). Oddly I don’t remember having worried about snakes, thorns and ticks as much back then.

Maybe the perceived element of danger will make the cobbler that much sweeter this time (assuming I can figure out how to make one, or beg my wife to make it).

(Not an actual picture of our blackberry bushes, but they look the same.
Ours are bigger and yummier though.
They also have fewer berries since Steve and I took them.)

Best Regards,
-JRH

Wednesday, June 16, 2010

Right Click!

This is more of a tip to those who are already using or evaluating SQL Sentry than anything else.

I get a lot of questions that are perfectly valid about where different features are located or accessed from. SQL Sentry is, after all, a large suite of products with many features and if you are running multiple products you have quite a few feature choices.

Usually it’s a specific question such as “where do I find…” or “how do I filter…” to which I provide as simple and direct an answer as possible.

The reason for this post is to make folks aware of something that isn’t overly obvious, but maybe should be. If you are familiar with the development work we do here and the amount of thought we put into usability, it is only logical, but alas, as users we are just not accustomed to software that flows from feature to feature in ways that make sense or that make it easy for us to use.

SQL Sentry’s console application is designed for DBAs by DBAs. That is not just marketing fluff. It is 100% true. The thoughts and ideas that go into building SQL Sentry software all come from our internal product staff, all of whom are experienced developers and DBAs and from our customers who often have great ideas for ways they would like to use SQL Sentry products.

All of this background leads me to my point. We are very big on ease of access, just like our customer base. Since getting around to different places in the console usually requires using a mouse, a very logical place to put access to features is in the various context menus, or right-click menus. I invite everyone to simply right-click on anything in SQL Sentry. Whether it is a connection node icon, a history event on the calendar, a column header on a data grid, a runtime graph for an Agent job, an Event Chain node or even the history mode chart for database file I/O, you will always find something useful to do next by simply right-clicking.

In summary, if you are ever in the console trying to work out how to do something, just try right clicking on whatever it is you are looking at. You may be pleasantly surprised to discover a set of useful options organized in a logical manner, right where you would expect them to be.

-JRH

Thursday, May 20, 2010

Performance Advisor Tip for DBAs Managing SharePoint Databases

I’ve had this come up a few times, and I’ve seen it in person with our testing environment, so I thought I would post a quick tip about it here.

Folks that have been working with SharePoint databases for a while probably know that the search index crawling done by SharePoint services can really kick out a crazy amount of deadlocks. In normal applications this would be a huge red flag, but for SharePoint it’s considered ok. In fact, in this SharePoint performance white paper it is actually stated that “An acceptable number of deadlocks is below one per second”.

So, for users of SQL Sentry Performance Advisor who want to be notified of deadlocks, but also have one or more SharePoint installations living in their monitored environment, how do you weed out the SharePoint deadlocks from the “OH NOZ! MAH REPORTZ WUZ TEH VICTIMZ” deadlocks?

There are actually a couple of ways to do this, but I am going to set it up the easiest way, configuring it globally, which will apply to all of the SQL Server instances monitored by my SQL Sentry installation.

The first thing we want to do is select the “Devices” node in the Navigator pane, which sets our conditions and actions lists to the global context, so any changes we make will apply to everything.

image 

Next we want to make sure that “Send Email” and “Send Page” are disabled for the “SQL Server: Deadlock” condition. This condition is met any time a deadlock is detected, so we want those actions turned off; otherwise we’ll still be notified about the SharePoint deadlocks. You can leave the “Log To Database” action enabled here if you like, so that you can look at them later in the SQL Sentry logs. The point here is just to make sure we’re not getting notified through email or otherwise about every single deadlock. If you can’t find the General Actions tab, just select it from the View menu at the top of the console application.

image

Now, we still need a way to be notified of the *bad* deadlocks, so what we want is the “SQL Server: Deadlock: Output Content Match” condition. This condition is met when a specified combination of text values exists and/or does not exist within the output of an event. For a deadlock, the entire XML definition of the deadlock is included in the output, which means any text value in that XML can be used to decide whether the condition is met. What we’re going to do is set the properties of this condition so that the name(s) of the SharePoint database(s) must not exist in our deadlocks. This is done in a few simple steps.

1.) Enable the “Send Email” action:
image

2.) Edit the Output Content Match properties (Ignore “Step Name” as it doesn’t apply to deadlocks, or at the global condition level):
image 

3.) Select a recipient for the notifications:
image

Now, we should only get emails about deadlocks from the databases that should *not* be generating them on a regular basis. It’s that simple.

Until next time,

-JRH

Sunday, May 9, 2010

ASP.NET 4: Could’ve used that…

At SQL Sentry HQ we all tend to wear a few different hats. It’s important to make sure every base is covered regardless of who is or is not available.

That said, besides doing customer support and a portion of product development, I am also generally responsible for development work on internal systems and a good portion of our web site. To be specific in the context of this post, I do a fair amount of work on our forum (forum.sqlsentry.net) and our customer portal (www.sqlsentry.net/portal).

As with any web site, pages move, URIs change over time, and browsers have to be redirected to avoid the dreaded 404.

HTTP 1.1 has supported the permanent redirect status (301) with the Location header for some time. With a permanent redirect, if the browser understands it, the next time the old URI is requested the round trip to the server is avoided, saving some resources on both sides and in between.

Prior to ASP.NET 4, if you wanted to do this you had to set the location header and status on the response, like below:

//Response status
Response.Status = "301 Moved Permanently";
//New location header
Response.AddHeader("Location","http://the.new.location");

Then you end the response, etc., and this works out fine. It’s just a pain to have to include this in some utility class, and it’s really not that intuitive either. You have to know that you want to do exactly this; it’s not something you would happen across by studying the Response object’s members in general.

Well, today I was just doing some personal skills maintenance, and I saw that in ASP.NET 4 they’ve added a new Response method called RedirectPermanent. It issues the 301 response as outlined above instead of 302 (temporarily moved) which is sent by the regular Redirect method.

It’s simple to use; it’s just one line, as below:

//Redirect permanently
Response.RedirectPermanent("http://the.new.location");

You can use a relative path in there too even though the RFC technically doesn’t support it. You can also specify to end the response immediately as a second parameter.

Anyway, it doesn’t solve world peace and it’s not even a compelling reason to go out and rewrite a bunch of code, but like I said in the title, I could’ve used that! :o)

Until next time,

-JRH

Thursday, April 22, 2010

SQL Saturday #30: Richmond Virginia

With all the excitement of my first trade show last week, and the mountain of catch-up work waiting when I got back, I forgot to take a moment to talk about SQL Saturday #30 in Richmond, VA. I had the opportunity to deliver my session on “Things Your Mom Never Told You About SSMS”, man the SQL Sentry sponsor table, and sit in on a great session about table partitioning called “Loading Data In Real Time” by Mike Femenella.

This was a well-organized event with a great turnout. The folks that attended my session were all very attentive, had some good questions, and even had some things to add during and at the end. The event was originally scheduled for January, but wintery weather forced a reschedule to April 10th. There was a great turnout despite that, and you really couldn’t tell there had been any setback at all.

In this case, Steve Wright (Twitter) and I got a late start out of Huntersville, NC on Friday, April 9th, and we needed to get back quickly to catch the plane to Las Vegas for SQLConnections Saturday evening. These two circumstances caused us to miss the chance to socialize at the speaker’s dinner and after the event, but I really enjoyed the time we did have there.

My hat is off to Jessica Moss (Blog | Twitter) and her team for a job well done. I am already looking forward to the Richmond, VA event next year.

Until Next Time,

-JRH

Friday, April 16, 2010

A Plea to DBAs: Please Share Your Toys

I spent the first part of this week manning the SQL Sentry booth at the Spring DevConnections conference in Las Vegas. It was a great experience meeting all of the existing and potential customers, learning the problems they face in their day to day professional lives and getting a chance to show off our brand a bit.

DevConnections’ larger theme was a launch event for Visual Studio 2010, .NET 4.0, and Silverlight 4.0. While there were lots of SQL Server professionals, and a dynamite SQL Server based session track, I believe that most would agree there was a heavy showing from the software development community.

While many developers write code every day that runs on SQL Server and related platforms, they are generally not the ones that get to monitor those environments. Those that do get to monitor those environments may tend to keep most of that data on their side of the fence. This is not for any sort of selfish reason though, but probably because it is a lot of data that is just not thought to be relevant to others, or not thought to be relevant enough to spend time passing it around in various formats.

During the event I talked to quite a lot of folks, and it became very obvious that developers were not only interested in seeing things like Top SQL, Blocking and Deadlocks, but they were also very interested in knowing the effects of their code on network, CPU, memory and disk resources. They were excited about seeing the Event Manager calendar that shows SQL Server events in relation to each other under the one context they all share, time, as well. Developers see all of the features listed above as very useful if not vital to providing ongoing support for the applications they plan, build, deploy and support.

SQL Sentry is a full-featured enterprise system for assisting with performance optimization for SQL Server, along with some other technologies that may surround SQL Server such as Windows Task Scheduler and Oracle. The current focus is SQL Server based: the DBMS itself, Analysis Services, SQL Agent and Reporting Services. SQL Sentry makes it easier on DBAs by monitoring lots of things a DBA would normally spend a lot of time gathering manually, and it makes sense to think that since it does the things a DBA does, it should also have the same security access that a DBA does: usually at least sysadmin rights, and quite often Windows admin.

Any DBA reading this might cringe at the thought of developers having that level of security access, and in most places it is just not allowed. That doesn’t mean that developers cannot have access to this valuable information though. The SQL Sentry console application does not communicate directly with the SQL Sentry server; it simply reads data from the SQL Sentry repository. This means that someone using the SQL Sentry console needs nothing more than read access to that one repository database. The repository database even ships with some roles pre-defined to help lock it down even further (more info).

DBAs running SQL Sentry could easily provide the console application to a development group for use in planning, developing and supporting their applications. Developers could see directly how the target server already behaves and plan accordingly during development, prior to deployment. Every developer I talked to about this immediately saw the value in it.

My moral here is a message to the DBA community. If you run SQL Sentry, or a tool like it, that provides valuable information to DBAs as well as developers, please share your toys with the development team if it is safe to do so. In the case of SQL Sentry it is easy to lock down, and there is no charge for installing additional consoles. There is really nothing to lose. I promise you the offer will be appreciated, and you may even be rewarded by gaining a development team that becomes much more knowledgeable and considerate of your data environment.

Until next time,

-JRH

Friday, March 12, 2010

.NET Framework Bug in SmtpClient.Send()…

An interesting support case came up recently involving one of the staples of SQL Sentry Event Manager functionality: SMTP support. The basic problem was that all email actions were failing, reporting “GeneralFailure”, which is kind of a catch-all for SMTP communication errors. It can often be very difficult to troubleshoot, and in a lot of cases it actually requires gathering logs from the SMTP server itself.

In order for SQL Sentry to send notifications via email certain settings are required such as the SMTP Server, the “from” email address and, if necessary, security credentials for the SMTP server. In this case, we were able to send a test email using the simple test from the SQL Sentry console application’s settings tab. This let me know that the SMTP server was working, so I needed to look deeper into what was going on in the software as opposed to what was going on at the SMTP server.

I found a nice article online (here) pointing out that I could enable a trace listener for System.Net.Sockets which would log detailed information about the SMTP conversation with the server. This trace revealed a nasty NullReferenceException that was being thrown every time the SQL Sentry server service tried to call the Send method on SmtpClient.

After some quality time with Bing.com I found this Microsoft Connect record:
http://connect.microsoft.com/VisualStudio/feedback/details/361506/smtpclient-throws-nullreferenceexception-if-password-is-blank

Apparently, if you provide a login, but no password, SmtpClient.Send() will not be your friend (prior to .NET 4.0). This turned out to be the problem. Luckily, with a little testing, we discovered that this SMTP server didn’t really need to have credentials provided at all (not from the inside anyway). Simply removing the login got everything rolling again.

What we learn here is just that our impatience for .NET 4.0 is that much more justified. Thank goodness Brooke Philpott is already working toward targeting .NET 4.0 with SQL Sentry. I’ll save the details on that for him though. ;o)

Until next time,

-Jason R. Hall

Monday, March 8, 2010

Some Moments Captured in Time at SQL Saturday #33 in Charlotte, NC

Some nice photos came out of SQL Saturday #33 thanks to Ken Teeter from SQL Sentry. I just wanted to take a moment and comment on a few of them. Everyone keep in mind that it’s all in good fun please :)

First we have Motley Crew (L-R: Andy Warren, Steve Jones, Rushabh Mehta, Peter Shire) back on tour, though I’m not sure where to buy tickets:

I think someone just sank Steve Wright’s battleship:

If Mike Walsh ever tells you he can deliver a training session in his sleep, believe it:

This image of Andrew Kelly is OUTSTANDING! (Semper Fi):

Patient-Zero located and identified as Tim Ford. Please inform the CDC:

Aaron Nelson raising his hand to ask the audience a question. Doesn’t that work the other way around Aaron?

Notice how all the other speakers are standing except for Brett Tomson? Slacker…. The guilt is written all over his face:

Karen Gonzalez, Missy Kelly and Peter Shire share a rare moment of peace early on in the day. Thanks for all your hard work!

Again, this is all in fun. I hope everyone enjoyed the event and made it home safely. Until next time,

-Jason R. Hall

Sunday, March 7, 2010

SQL Saturday #33 (Charlotte, NC)

Saturday, March 6th marked the completion of the event all of us at SQL Sentry had been greatly anticipating. I presented “Things Your Mom Never Told You About SSMS” for the second time ever (the first was at the Raleigh, NC SSUG). I followed directly behind Aaron Bertrand, who presented on a similar topic. That really turned out to be great for a couple of reasons. First, Aaron is a popular speaker, and I got a lot of the overflow crowd that wasn’t able to make it into the room for his session. Second, there really wasn’t much overlap at all between the two sessions. In fact, I would even go so far as to say that a lot of my topics built on topics Aaron brought up in his session. It was almost synergetic without ever having been planned that way. All in all it was a great experience for me, being new to speaking, and I had a great crowd that understood a little PowerPoint fumbling and also had a lot to contribute. Anyone interested can find the materials here, including some things I had to leave out due to time constraints. I will be delivering this session once more at SQL Saturday, Richmond VA on April 10.

The entire SQL Saturday #33 experience was wonderful. For me, the chance to put faces with names for people that often provide invaluable feedback for SQL Sentry products was great. Some names that are close to mind right this moment are Tim Ford, Aaron Nelson and Mike Walsh. It was also a treat to reunite with some long time friends like Ashton Hobbs and Brett Tomson with whom I have a long history. This social opportunity, combined with all of the technical knowledge gained by attending sessions delivered by these same folks made it an epic win in my book. I fully look forward to seeing these folks and many others at more events in the future.

It was quite a hectic day for all of us at SQL Sentry, and the lead up to it was no exception, but there are a few people that I really want to point out for the effort put into making this happen. Greg and Karen Gonzalez, Missy Kelly and Peter Shire all put in a massive effort to make this happen in the weeks leading up to the event. I know there are others that should be mentioned, but these folks really stood out to me, and I want to offer my thanks. I hope they all enjoyed it as much as I did.

Thursday, February 25, 2010

Auto-Kill Long Running Blocks on a SQL Server Monitored by SQL Sentry Performance Advisor

A customer recently came across a problem in which a third-party system was causing long block chains on one of his servers. The best solution he could come up with was really just to kill the blocking SPID when this happens. Knowing that SQL Sentry monitors for blocks and tracks their duration, he asked if there was a way to automate killing the blocking SPID after a specified duration.

This is not something that is built into SQL Sentry as standard functionality, but with some creative thought we were able to come up with a solution using existing SQL Sentry functionality combined with some out-of-band use of one of the remote procedures SQL Sentry employs on a monitored server.

First and foremost we need a way to ID the blocking SPIDs at the head of any block chains. We also need a way to target only the long running blocks, not all of them.

On every server monitored by SQL Sentry Performance Advisor, a few utility objects are created in the msdb database. These are used to reduce network chatter between the SQL Sentry server and monitored SQL Servers. One of these objects is called “spGetBlockInfo_20” (the _20 is just a versioning mechanism, please don’t get stuck on it). This is a stored procedure that can provide us with information about blocked and blocking processes, including useful details like the wait time, database name, time of last batch, login name and plenty of other things that I’ll skip for brevity. It is the same mechanism that provides raw block information to the “Blocking SQL” tab in SQL Sentry Performance Advisor.

My first instinct was to write a simple procedure that loads the block information into a temp table and uses a cursor to build a dynamic kill statement, finally executing the statement with sp_executesql. It would have one parameter that simply specified the number of minutes since the last batch to use for block duration. This could have been done without the cursor, but the original intention was to kill multiple blocks, and it could possibly go that way again some day, so the cursor stayed for now. The first cut is below:

First Draft

CREATE PROCEDURE dbo.spKillLongRunningBlocks 
  @minutesSinceLastBatch int
AS
CREATE TABLE #tmp
    (
      spid smallint,
      blocked smallint,
      waittype binary(2),
      waittime int,
      waitresource nvarchar(256),
      dbname sysname,
      last_batch datetime,
      ecid smallint,
      hostname nchar(128),
      [program_name] nchar(128),
      loginame nchar(128),
      cmdtext ntext,
      waittext nvarchar(1000)
    );
INSERT #tmp
  EXEC spGetBlockInfo_20;
DECLARE cspids CURSOR FAST_FORWARD FOR
    SELECT spid, last_batch
    FROM #tmp
    WHERE #tmp.blocked = 0
    AND DATEDIFF(minute, #tmp.last_batch, GETDATE()) >= @minutesSinceLastBatch;
DECLARE @spid int;
DECLARE @lastBatch datetime;
DECLARE @killCmd nvarchar(2000);
OPEN cspids;
FETCH NEXT FROM cspids INTO @spid, @lastBatch;
SET @killCmd = '';    
IF @@FETCH_STATUS = 0
BEGIN
  SET @killCmd =  @killCmd + 'KILL ' + CAST(@spid AS nvarchar(5)) + ';';
  EXEC sp_executesql @killCmd;
END;
  
CLOSE cspids;
DEALLOCATE cspids;
DROP TABLE #tmp;

This worked pretty well actually, until I ran the procedure code by Greg Gonzalez, the product architect for SQL Sentry Performance Advisor, who quickly pointed out that for RPC processes the last_batch field can often show up as ‘01/01/1900’. This would create a situation where we might kill the wrong processes, so I had to find another way to determine the duration. After a bit of thought I concluded that the process with the highest wait time, with the SPID in question as its blocker, should provide the most accurate duration. Luckily, “waittime” is already part of the output of “spGetBlockInfo_20”. It is in milliseconds, so I had to multiply the input by 60 and then by 1000 to convert it for comparison. This next version includes that change in the WHERE clause of the cursor declaration:

Edited for Duration Accuracy

CREATE PROCEDURE dbo.spKillLongRunningBlocks 
  @minutesSinceLastBatch int
AS
CREATE TABLE #tmp
    (
      spid smallint,
      blocked smallint,
      waittype binary(2),
      waittime int,
      waitresource nvarchar(256),
      dbname sysname,
      last_batch datetime,
      ecid smallint,
      hostname nchar(128),
      [program_name] nchar(128),
      loginame nchar(128),
      cmdtext ntext,
      waittext nvarchar(1000)
    );
INSERT #tmp
  EXEC spGetBlockInfo_20;
DECLARE cspids CURSOR FAST_FORWARD FOR
    SELECT spid, last_batch
    FROM #tmp
    WHERE #tmp.blocked = 0
    AND (SELECT MAX(t2.waittime) 
       FROM #tmp t2 
       WHERE t2.blocked = #tmp.spid) > CAST((@minutesSinceLastBatch * 60) * 1000 as int);
DECLARE @spid int;
DECLARE @lastBatch datetime;
DECLARE @killCmd nvarchar(2000);
OPEN cspids;
FETCH NEXT FROM cspids INTO @spid, @lastBatch;
SET @killCmd = '';    
IF @@FETCH_STATUS = 0
BEGIN
  SET @killCmd =  @killCmd + 'KILL ' + CAST(@spid AS nvarchar(5)) + ';';
  EXEC sp_executesql @killCmd;
END;
  
CLOSE cspids;
DEALLOCATE cspids;
DROP TABLE #tmp;

We need the most accurate solution possible. After all, we are forcibly ending running processes sight unseen on our DBMS server. With that in mind I ran the procedure by architect and lead developer Brooke Philpott. Transforming the raw block data from “spGetBlockInfo_20” into what you can see in the Blocking SQL tab in Performance Advisor is a very complicated process. We can’t assume that all blocks that may still be active after the threshold is met are supposed to be killed. We didn’t have to discuss this very long for me to buy in to the fact that we needed to narrow the margin of error a bit more.

SQL Sentry Event Manager has a feature called Execute Action Parameters. This feature allows you to pass all or part of the SQL Sentry notification message in as a parameter to the target of an Execute SQL or Execute Process action. For the purpose here, it was easiest to pass in the full message with an execute command similar to the following:

EXEC msdb.dbo.spKillLongRunningBlocks '<%MessageText%>'

The standard “blocking detected” message from SQL Sentry contains something very important in making this work. It lists the entire block chain from top to bottom. A simple example is pasted below:

Example: SQL Sentry Blocking Detected Message

[Connection]:  ICWS1
[Message]: SQL Server Blocking Detected
[Start Time]: 2/24/2010 2:07:46 PM
[Duration]: 57 minutes, 27 seconds
[Minimum Duration Threshold]: 5 seconds
------------------------------------------------------------------------
[Timestamp (Local)]: 2/24/2010 3:05:14 PM
[Timestamp (UTC)]: 2/24/2010 8:05:14 PM
[Generated By]: SQL Sentry 5.5 Server [ICWS1]
[Version]: 5.5.20.0
[Monitor Type]: SqlServerBlockDetector
[Condition]: SQL Server: Blocking SQL
[Response Ruleset]: Long Running Blocks
------------------------------------------------------------------------
SPID at Head of Blocking Chain:
SPID [ecid]: 81 [0]
Client Machine: ICWS1
Application: SQL Server Agent: Job 'BLOCK_TESTA'
Login Name: NT AUTHORITY\NETWORK SERVICE
Last Batch: 2/24/2010 2:07:41 PM
Wait Type: WAITFOR
Wait Time: 00:00:00
Database: SSTESTS
Command Text:
WAITFOR DELAY '00:00:01'
------------------------------------------------------------------------
Blocked SPID:
SPID [ecid]: 82 [0]
Blocked By SPID: 81
Client Machine: ICWS1
Application: SQL Server Agent: Job 'BLOCK_TESTB'
Login Name: NT AUTHORITY\NETWORK SERVICE
Last Batch: 2/24/2010 2:07:46 PM
Wait Type: LCK_M_RS_S
Wait Time: 00:57:27
Database: SSTESTS
Command Text:
SELECT DISTINCT *
FROM TestTable ORDER BY ID DESC
------------------------------------------------------------------------

Note that we can trust that the correct entries are gathered from the message since we know that the head of the block chain is always the first entry in the message. The parsing calls can start from the beginning of the message for each value to parse.

All of the information necessary to narrow the selection down to the exact SPID we want to kill is in there. All that is needed from this point is some creative TSQL string parsing and type conversion. In the final version of the procedure you can see, early on, where we’re parsing the blocking SPID, client machine, login name and database name from the message text. Since technically the SPID number could have been reused by this time, we need to identify the process by more than just the SPID number. With all of these values from the message text we can compare using a composite key to identify a specific block. This fully removes the need to test for any wait time at all, placing that burden back on the SQL Sentry server service where it really belongs. As a result, two things happen. First, the part of the WHERE clause in the cursor that constrains the duration is replaced with a final check to make sure there are still processes being blocked by this SPID. Second, we no longer have to specify the number of minutes as a parameter.

Some other important improvements in this version include restricting the SPID query to non-system SPIDs (>50). Imagine the horror when you realize that your automated process is killing important work that SQL Server is trying to do. What’s worse, imagine the emails and phone calls. Yikes! I’ve also included some defaulted parameters that can be used to target specific applications, client machines, logins and/or databases. They are null by default, but are useful for targeting only certain blocks instead of everything across the board.

Edited for Composite Key

CREATE PROCEDURE dbo.spKillLongRunningBlocks 
  @messageText nvarchar(2000),
  @targetClientMachine nvarchar(128) = null,
  @targetApplication nvarchar(128) = null,
  @targetLoginName nvarchar(128) = null,
  @targetDatabase nvarchar(128) = null
AS
--PRINT @messageText;
DECLARE @beforeSpidConstant nvarchar(25);
DECLARE @beforeClientMachineConstant nvarchar(25);
DECLARE @beforeApplicationConstant nvarchar(25);
DECLARE @beforeLoginNameConstant nvarchar(25);
DECLARE @beforeDatabaseConstant nvarchar(25);
SET @beforeSpidConstant = 'SPID [ecid]:    ';
SET @beforeClientMachineConstant = 'Client Machine:    ';
SET @beforeLoginNameConstant = 'Login Name:    ';
SET @beforeDatabaseConstant = 'Database:      ';
DECLARE @spidLoc int;
DECLARE @clientMachineLoc int;
DECLARE @loginNameLoc int;
DECLARE @databaseLoc int;
DECLARE  @targetSpid nvarchar(5);
SELECT @spidLoc = CHARINDEX(@beforeSpidConstant, @messageText);
SELECT @clientMachineLoc = CHARINDEX(@beforeClientMachineConstant, @messageText);
SELECT @loginNameLoc = CHARINDEX(@beforeLoginNameConstant, @messageText);
SELECT @databaseLoc = CHARINDEX(@beforeDatabaseConstant, @messageText);
SELECT @targetSpid = 
  SUBSTRING(@messageText, 
        @spidLoc + LEN(@beforeSpidConstant),
        (CHARINDEX(' ', @messageText, @spidLoc + LEN(@beforeSpidConstant))) - (@spidLoc + LEN(@beforeSpidConstant)));
IF @targetClientMachine IS NULL
BEGIN        
  SELECT @targetClientMachine = 
    RTRIM(SUBSTRING(@messageText, 
          @clientMachineLoc + LEN(@beforeClientMachineConstant),
          (CHARINDEX(CHAR(13) + char(10), @messageText, @clientMachineLoc + LEN(@beforeClientMachineConstant))) - (@clientMachineLoc + LEN(@beforeClientMachineConstant))));
END      
IF @targetLoginName IS NULL
BEGIN
  SELECT @targetLoginName = 
    RTRIM(SUBSTRING(@messageText, 
          @loginNameLoc + LEN(@beforeLoginNameConstant),
          (CHARINDEX(CHAR(13) + char(10), @messageText, @loginNameLoc + LEN(@beforeLoginNameConstant))) - (@loginNameLoc + LEN(@beforeLoginNameConstant))));
END        
IF @targetDatabase IS NULL
BEGIN
  SELECT @targetDatabase = 
    RTRIM(SUBSTRING(@messageText, 
          @databaseLoc + LEN(@beforeDatabaseConstant),
          (CHARINDEX(CHAR(13) + char(10), @messageText, @databaseLoc + LEN(@beforeDatabaseConstant))) - (@databaseLoc + LEN(@beforeDatabaseConstant))));                        
END
--PRINT @targetSpid;
--PRINT @targetClientMachine;
--PRINT @targetLoginName;
--PRINT @targetDatabase;
CREATE TABLE #tmp
    (
      spid smallint,
      blocked smallint,
      waittype binary(2),
      waittime int,
      waitresource nvarchar(256),
      dbname sysname,
      last_batch datetime,
      ecid smallint,
      hostname nchar(128),
      [program_name] nchar(128),
      loginame nchar(128),
      cmdtext ntext,
      waittext nvarchar(1000)
    );
INSERT #tmp
  EXEC spGetBlockInfo_20;
DECLARE cspids CURSOR FAST_FORWARD FOR
    SELECT 
      spid, 
      hostname, 
      [program_name], 
      loginame, 
      dbname
    FROM #tmp
    WHERE #tmp.blocked = 0
    AND spid > 50 --Don't kill system SPIDs!!!
    AND (SELECT COUNT(t2.spid) 
       FROM #tmp t2 
       WHERE t2.blocked = #tmp.spid) > 0;
       
DECLARE @spid smallint;
DECLARE @clientMachine nvarchar(128);
DECLARE @application nvarchar(128);
DECLARE @loginName nvarchar(128);
DECLARE @database sysname;
DECLARE @killCmd nvarchar(50);
OPEN cspids;
FETCH NEXT FROM cspids INTO @spid, @clientMachine, @application, @loginName, @database;
SET @killCmd = '';    
IF @@FETCH_STATUS = 0
BEGIN
  --PRINT @targetClientMachine + ' = ' + @clientMachine
  --PRINT @targetApplication + ' = ' + @application
  --PRINT @targetLoginName + ' = ' + @loginName
  --PRINT @targetDatabase + ' = ' + CAST(@database AS nvarchar(128))
  IF(CAST(@targetSpid as int) = @spid 
     AND @targetClientMachine = RTRIM(@clientMachine)
     --Application may not always match, so only use it if it was passed in
     AND isnull(@targetApplication,RTRIM(@application)) = RTRIM(@application)
     AND @targetLoginName = RTRIM(@loginName)
     AND @targetDatabase = CAST(@database AS nvarchar(128)))
  BEGIN
    SET @killCmd =  @killCmd + 'KILL ' + CAST(@spid AS nvarchar(5)) + ';';
    EXEC sp_executesql @killCmd;
  END
END;
  
CLOSE cspids;
DEALLOCATE cspids;
DROP TABLE #tmp;

So now that we have a mechanism in place to kill the process at the head of any long running block chain, we need a way to trigger it. At first we wanted to use the “Max Runtime Threshold” condition in SQL Sentry Performance Advisor for SQL Server Blocking, but this condition only fires when the block has finished, and it doesn’t include all the items that we want from the message text. We could use the “SQL Server: Blocking SQL” condition since it gets fired as soon as any block that meets the collection threshold is detected, but we really only want this to be triggered after a specified amount of time. The solution lies in Response Rulesets. Using a response ruleset we can use the standard Blocking SQL condition with the Execute SQL action in SQL Sentry, and only allow the action to be performed when a specified amount of time has passed. Since response rulesets can be set for each action (as opposed to each condition) we can still send an email, or save a database log of every single block detected, but only execute our kill procedure when enough time has elapsed since the block was first detected.

First, enable the condition for the server where our kill procedure is placed (you can do this globally as well, but you want to make certain the kill procedure exists on every monitored server in that case):

Enable the Execute SQL Action for the Blocking SQL Condition

image 
Note: Learn more about SQL Sentry conditions and actions here: SQL Sentry Training Videos

Next, enter the command to execute our block-killing procedure. This is the same command shown above, but altered to restrict by database. Note that my test database is called ‘SSTESTS’:

Enter the Command for the Execute SQL Action

image
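In text form, the command I entered looks something like the sketch below; the database name is specific to my test environment, and the other optional parameters (client machine, application, login) could be supplied the same way:

-- Execute SQL action command: kill head blockers only for the SSTESTS database
EXEC msdb.dbo.spKillLongRunningBlocks '<%MessageText%>', @targetDatabase = 'SSTESTS'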

Now, create the response ruleset to only process messages after the block has been active for more than 5 minutes:

Create and Set the Time Based Response Ruleset

image

Conclusion

Now our solution is in place. Any time a block is detected, and continues blocking for more than 5 minutes, our stored procedure will be run killing only the process at the head of that block chain.

This solution takes advantage of some of the more advanced features that SQL Sentry has to offer, and is just one example of the many solutions that can be built using SQL Sentry and a little creativity.

Anyone that is interested in setting this up, please just email support at sqlsentry.net and mention this blog entry. Even if you don’t already use SQL Sentry, you can try this out using an evaluation copy from here.