Friday, March 30, 2012
Managed Stored Procedures
VS2003 uses .NET 1.1.
Good article about VB.NET and SQL/CLR: http://www.devx.com/dotnet/Article/21286
Managed stored procedure does not deploy
I have created a managed stored procedure in a SQL Server project in VS. I have put in the correct server name, password, and login for the connection to the database.
When I deploy, however, it doesn't deploy the stored procedure to the database, even though it says it has successfully deployed it. Has anyone had this
problem, and how can you make sure it is deploying to the correct database?
Did you make sure it is deploying the stored procedure into the correct database? It might be deploying it in master.mdf.
manage the stored procedure permissions....
hi,
for a graphical management tool, you can have a look at the one Microsoft provides for free at http://www.microsoft.com/downloads/details.aspx?familyid=C243A5AE-4BD1-4E3D-94B8-5A0F62BF7796&displaylang=en
regards
|||Thanks Andrea....but I have already installed this tool...
in the meantime... I've found the solution using that tool... go into the DB tree: Security - Users (choose the user you want to add permissions to) - Properties - Securables - here I add the objects and grant the permissions...
Monday, March 26, 2012
making record read-only
I want to make certain records read-only using a stored procedure, something like:
Update dbo.Articles
SET record = "read-only"
WHERE
ArticleID = @ArticleID
(you know what I mean)
The reason I want to do that is that I'm publishing articles, and I want to lock the articles once they're published so nobody can change them unless they're unpublished.
Of course, I also want to know how to make a record read-write again.
Thanx|||Answer: You cannot. Row-level security is not part of SQL Server.
What you could do is a trigger that blocks and rolls back the transaction when a row has a specific value (a read-only flag).
But then - why the heck don't you handle this in the frontend application? Just don't let the user delete.|||Well, I did do that in the front end.. just wanted to do it at the backend to ensure data integrity.
Thank you|||::just wanted to do it at the backend to ensure data integrity.
Sadly, this is nonsense, as this does not touch the issue of data integrity. Relational integrity has nothing to do with being able to delete something or not.
Now, in the rare case this needs to be blocked, a trigger that blows the transaction is about the only way to go :-)
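A minimal sketch of such a trigger, assuming dbo.Articles carries a hypothetical bit column named IsReadOnly as the read-only flag:

```sql
-- Sketch only: IsReadOnly is an assumed column, not from the original post.
CREATE TRIGGER trg_Articles_BlockReadOnly
ON dbo.Articles
FOR UPDATE, DELETE
AS
BEGIN
    -- 'deleted' holds the pre-change rows; if any of them was flagged
    -- read-only, reject the whole statement.
    IF EXISTS (SELECT 1 FROM deleted WHERE IsReadOnly = 1)
    BEGIN
        RAISERROR('This article is published and cannot be modified.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END
```

Note that a real version would need to let an "unpublish" operation through, for example by exempting updates that only change the flag itself (checked with the UPDATE() function inside the trigger).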
Making query in SQL server 2005 - What is the best way?
I am building an application using SQL Server 2005. I am trying to do all my queries using stored procedures in the database and just pass the parameters to the database.
I have a complex task that I can't do in one query. So what is the best way to get the best performance: a complex stored procedure, or more application code instead?
Thanks a lot for your help
It sounds like perhaps a temporary table or table variable would be useful. Are you needing to store the results of your 1st query and then process them in a 2nd query (i.e., get a resultset and, for each row, do a calculation/etc)? If so then the temp table solution is available. You can do it all in a single stored procedure.
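As a sketch of that suggestion, with purely hypothetical table and column names: the first query's result set goes into a temp table, and the second query processes it, all inside one procedure.

```sql
-- Sketch: staging an intermediate result set in a single stored procedure.
-- dbo.Orders, dbo.Customers and their columns are illustrative only.
CREATE PROCEDURE dbo.GetBigSpenders
AS
BEGIN
    -- Step 1: capture the first query's results in a temp table.
    SELECT CustomerID, SUM(Total) AS OrderTotal
    INTO #Totals
    FROM dbo.Orders
    GROUP BY CustomerID;

    -- Step 2: process those rows in a second query.
    SELECT t.CustomerID, t.OrderTotal, c.Region
    FROM #Totals t
    JOIN dbo.Customers c ON c.CustomerID = t.CustomerID
    WHERE t.OrderTotal > 1000;

    -- #Totals is dropped automatically when the procedure ends.
END
```

For small intermediate results, a table variable (DECLARE @Totals TABLE (...)) works the same way and avoids tempdb logging overhead.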
Making outbound HTTP requests from SQLCLR
Hi,
I have written a C# console application that adds a message to a SB queue and a C# stored procedure that reads the message from the queue.
I have had so many problems that I'm beginning to doubt the usefulness of this, but that is probably just my frustration speaking.
I am stuck on putting an XML message into the queue and reading the XML in the stored procedure.
The console app has a simple object called Message. This class has 2 fields, a guid and a string. I serialize the object into XML using XmlSerializer. This results in an XML string that looks like this:
<?xml version="1.0" encoding="utf-16"?>
<Message xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http:/
/www.w3.org/2001/XMLSchema">
<TranId>a5b4a32f-4940-46ea-a928-3aae33a067fe</TranId>
<Text>TEST</Text>
</Message>
I use a SqlCommand to SEND the message to the queue, with the following code:
// Add dialog handle
SqlParameter paramDialogHandle = new SqlParameter("@dh", SqlDbType.UniqueIdentifier);
paramDialogHandle.Value = this.DialogHandle;
cmd.Parameters.Add(paramDialogHandle);
// Add message
SqlParameter paramMsg = new SqlParameter("@msg", SqlDbType.VarChar, 4000);
paramMsg.Value = msg.ToXml();
cmd.Parameters.Add(paramMsg);
// Build the SEND command
cmd.CommandText = "SEND ON CONVERSATION @dh " +
"MESSAGE TYPE [http://www.TestSB.com/msg/HelloWorld] " +
"(@msg)";
cmd.ExecuteNonQuery();
For some reason, this message never gets to the stored procedure that is set up to be activated for this queue. Actually, it never even gets written to the queue.
However, if I change the type of the parameter in the SEND above to SqlDbType.Xml, the message is written into the queue, but without the '<?xml version="1.0" encoding="utf-16"?>' declaration at the beginning, so when my stored procedure tries to deserialize the XML using XmlSerializer, it fails.
Obviously, I have overcome the backflips needed to do XmlSerialization within the SQLCLR, and I am getting the stored procedure activated, because I do get a message logged showing the XML that is received.
Since I have the queue set up to only allow valid XML messages, I'm assuming that inclusion of the <?xml version...?> declaration prevents the message from passing Service Broker's XML validation. (And by the way, I have never seen an error message placed into the queue, the event log, or thrown when non-XML is passed to a queue that has XML validation turned on. The message just does not show up in the queue. Is this a bug?) But the lack of the declaration prevents XmlSerializer from working.
How the heck do you serialize an object into XML using XmlSerializer, so that it can be deserialized with XmlSerializer?
Thanks
]Monty[
On the Service Broker specific issues, I advise you to try following the steps in this troubleshooting article I posted at http://blogs.msdn.com/remusrusanu/archive/2005/12/20/506221.aspx. What you will most likely find is that when it appears that you send the message and the message vanishes, you will actually find an error message in the sender's service queue complaining that the message is invalidly formatted XML.
You see, the XML validation of messages is always performed by the target service when the message is received, not by the sender when the message is sent. Therefore, if you send an incorrect XML message, it will not trigger an error in the SEND statement. However, since I assume your message type is defined as WELL_FORMED_XML, the message will not pass the XML validation when it is enqueued in the target queue, and an error message will be sent back. This is the error message you'll find in your sender's queue.
The XML formatting problems you are seeing are due to the parameter types you use. It makes a big difference to the XML whether you pass it in as a VARCHAR(4000), an NVARCHAR(4000), or an XML datatype. The VARCHAR type expects UTF-8 encoded XML, so your serializer needs to create UTF-8 XML. The NVARCHAR type expects UTF-16, and you need to serialize accordingly. I'm not a .NET XmlSerializer expert, but I believe you can specify the encoding to be used. I know you can create an XmlTextWriter and specify Encoding.UTF8 or Encoding.Unicode (UTF-16), then use this XmlTextWriter to obtain the XML stream from the XmlSerializer. Similarly, you can specify the encoding in the XmlTextReader when deserializing the XML.
However, if you use the XML datatype, then you shouldn't need to specify the encoding explicitly. It seems you got this working on the SEND side, but you cannot get the message body as XML on the receive side. How do you obtain this message in the C# client on the receiver side? Are you using SqlCommand.ExecuteReader to parse the resultset of a RECEIVE statement? What you need to do is make sure you get back an XML datatype from the server, not a VARBINARY(MAX). Simply use RECEIVE ... CAST(message_body AS XML) for this to happen.
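The encoding fix described above can be sketched in C#. This is only an illustration: Message is the original poster's own class, and the helper name is invented. Using a UTF8Encoding without a byte-order mark keeps stray BOM bytes out of the string while still producing an encoding="utf-8" declaration:

```csharp
using System.IO;
using System.Text;
using System.Xml;
using System.Xml.Serialization;

public static class XmlHelper
{
    // Serialize to UTF-8 so the result can travel in a VARCHAR parameter.
    public static string ToUtf8Xml(Message msg)
    {
        XmlSerializer xs = new XmlSerializer(typeof(Message));
        using (MemoryStream ms = new MemoryStream())
        {
            // The writer stamps the XML declaration with the encoding it is
            // given; UTF8Encoding(false) means "UTF-8, no byte-order mark".
            XmlTextWriter writer = new XmlTextWriter(ms, new UTF8Encoding(false));
            xs.Serialize(writer, msg);
            return Encoding.UTF8.GetString(ms.ToArray());
        }
    }
}
```

Passing the returned string as the SqlDbType.VarChar parameter in the SEND above should then match the encoding Service Broker expects for that type.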
HTH,
~ Remus
|||
Hi Remus,
Thanks for the reply. Yes, what you've posted about the data types was part of the problem. However, I have never been able to find any trace of a poorly formed XML error message, in the target's queue or in the initiator's queue. If you say so, I guess that's right, but it doesn't work for me.
I've gotten around the serialization problem by using nvarchar(max) on the send and receive sides, and the <?xml ...?> declaration is making it across fine now. However, now XmlSerializer is complaining about the first line of the XML that follows it. I've given up on this and am in the process of simplifying things back down to where they were once somewhat working predictably.
My problem now is that I can only run my test once before getting an error message; then I must shut down Visual Studio 2005 (because it is holding a connection open somewhere), drop and then recreate the database in SQL Mgmt Studio, and then bring up VS2005 again and run my console application. If I run it twice in a row, the stored procedure stops getting activated, even though in my postdeployscript.sql I am dropping and then recreating the activation of the newly installed stored proc using the following:
ALTER QUEUE [TestSB Queue] WITH ACTIVATION (DROP);
go
alter queue [TestSB Queue]
with status=on,
activation (
status = on,
procedure_name = ServiceProc,
max_queue_readers = 2,
execute as self);
That brings up another question: Why is it you must drop the activation and then create it again in order to get the activation to fire on the newly deployed stored procedure? I understand that the assembly containing the stored proc has a different version, but it took me forever to figure out that you had to do a drop first, and that the alter queue [xx] Activation(....) wasn't enough.
Anyway, I have been hitting my head against the wall with this all week. Tomorrow I am going to start over with a new solution and try to simplify things even more to get to the bottom of this. If you forced me to go into production right now with Service Broker, I'd have to complain that it isn't ready. Or I'm not ready, at best. I've had so many flaky experiences so far that my confidence has been eroded. There is too little documentation and apparently not a lot of experience with C# programs written under Visual Studio that write to queues serviced by SQLCLR stored procedures. There is very little in the way of samples, etc. Heck, the words 'postdeployscript' and 'predeployscript', necessary things for doing SB stuff within VS, can't even be found in the Visual Studio help!
In the end, all I want to do is serialize a message through SB to some code that makes a web service call. Boy is that a tall order.
]Monty[
Are you ending the sending dialog after the SEND? If that's the case, then when the error comes back it will find the dialog already ended and therefore it will simply delete the dialog, as well as the error message. If you do something like BEGIN DIALOG/SEND/END, this is similar to a 'fire and forget' scenario, where you (the sender) can never know if your message actually reached the target.
Anyway, using the Profiler can reveal a great deal of what's going on with the dialog messages, select the events in the Broker category.
Unfortunately I cannot help you too much with the CLR and Visual Studio deployment problems, since I'm unfamiliar with them myself. The dedicated .Net forum is monitored by experts with more knowledge in that area, http://forums.microsoft.com/MSDN/ShowForum.aspx?ForumID=86&SiteID=1, it would worth the trouble to describe the problem there. There are also some blogs of experts in this area, like http://blogs.msdn.com/sqlclr/
One explanation for the weird behavior you see would be that your activated procedure never returns. You can verify this in the Profiler (the activation started/stopped events) as well as by querying the sys.dm_broker_activated_tasks view.
HTH,
~ Remus
1. You do not have to ALTER QUEUE in order to use the newly installed stored proc. If there are any instances of the stored proc already running, they will continue to run the old version; however, the next time Service Broker activates a new proc, it should automatically launch the newly deployed version. This behavior is exactly the same as SQL's ALTER PROCEDURE, which can update a procedure without killing spids that are already running the older version, while any new invocation uses the new code.
2. You should be able to close the connection to the database from Visual Studio by opening the Server Explorer pane, navigating to the database, right clicking and selecting 'Close Connection'.
3. You do not have to DROP activation before reconfiguring it. You can adjust any/all of the settings of activation (viz, status, procedure_name, max_queue_readers and execute as user) using a single ALTER QUEUE statement.
4. Here's a resource on XmlSerialization in SQLCLR:
http://blogs.msdn.com/sqlclr/archive/2005/07/25/Vineet.aspx
The steps I perform are as follows:
SqlCommand cmd = conn.CreateCommand();
cmd.CommandText = @"WAITFOR (RECEIVE TOP(1) message_body, ... FROM [YourQueue]), TIMEOUT 5000";
cmd.Transaction = tran;
SqlDataReader reader = cmd.ExecuteReader();
if (reader.Read())
{
    SqlBytes body = reader.GetSqlBytes(0);  // message_body is the first column here
    XmlSerializer xs = new XmlSerializer(typeof(YourObject));
    YourObject yo = (YourObject) xs.Deserialize(body.Stream);
    //...
}
5. If you install the samples that come with the SQL Server DVDs, you should find a HelloWorld_CLR sample as well as a ServiceBrokerInterface library which is a sample programming model for writing service broker application. The library also has a shopping cart sample.
|||I just wanted to caution you about using SQLCLR to make outbound HTTP requests. When you make a web request, note that you are doing so as the user running sqlservr.exe. If this is a machine account, you may want to think about the consequences of that principal making outbound HTTP connections to proxy servers (or worse, the destination web server itself).
Secondly, if you make a blocking web request call, you will be holding the transaction locks on the conversation group (and anything else that the stored proc may have locked) until the response is received and parsed.
Thirdly, you are using up one of sqlservr's threads for doing the web request, preventing the server from doing real database work until the request returns.
Finally, how are you going to handle errors from the webserver? Either you will have to rollback the transaction (but then the message will pop up again on executing a RECEIVE) or use some sophisticated mechanism of logging pending requests and retrying them using some retry logic. If you chose to rollback, remember that rolling back 5 times consecutively will disable the queue to prevent the poisoned message from repeatedly activating the queue.
If any of these issues concern you then you might want to think about re-architecting the solution to use an external ADO.NET app that receives the messages rather than an internally activated stored proc. If you want the ADO.NET app to be dynamically launched you could try the external activator mentioned here (http://rushi.desai.name/Blog/tabid/54/EntryID/7/Default.aspx).
|||Hi guys,
Thanks for the replies. This is really going somewhere now :)
1) Getting errors when sending non-xml thru a 'well formed xml' queue:
Yes, I am ending the dialog after the send. Are you saying that I need to turn around and wait for a response error message when the message is read? If so, how do I keep the transaction from becoming a super long one? That's one of the points I'm trying to make. This is a fire-and-forget queue; I'm basically trying to do a one-way message. I think the 'well-formed XML' validation is not too useful in this case. I can't wait around for the reading of the message to generate a returning error message, which might be an hour or so from now, with the transaction open all that time. I think a better approach is to turn off the validation and deal with it in the application code.
2) Doing an ALTER QUEUE after a new stored procedure is installed.
Interesting theory. Unfortunately, I have *NEVER* been able to get the stored procedure to fire after a deploy from Visual Studio. I spent 2 days on this until I tried doing a drop and create on the activation and finally got the stored procedure called. I'm obviously doing something wrong, but I can't for the life of me figure out what it is. And it only works for me if I do the drop and then the create; the create alone is not enough.
3) Making outbound HTTP requests from SQLCLR
Well, reliable messaging is the whole point of my exercise. There is no Indigo yet, and QueuedComponents/MSMQ is not officially supported by Microsoft in a clustered environment; a clustered environment is required for fault tolerance in production, since there is no disaster recovery for MSMQ. Believe me, I have been there. QueuedComponents/MSMQ is not an option. So I'm left with SB queues and stored procedures making web service calls. I understand the transactional concerns that have been raised, and handling errors is a big unsolved problem because of the poison message 'feature' (can it be turned off?), the fact that there is no way to delay a message from being received again, and the fact that the stored procedure is supposed to read all the messages before it stops. Assuming the web server is down, if I requeue the message back to the queue again, the stored procedure will just keep reading the same message over and over. I really don't know how to handle errors when the web service call fails; I haven't been able to get that far yet. Are you saying that internal activation is not the way to do this, and external activation is the way to go? Does it help with handling errors, or does it just avoid the resource usage problem of HTTP requests inside of SQLCLR?
4) Closing connections to the database by closing the connections in server explorer.
I do this, but Visual Studio is still holding a connection, because I can't drop the database unless I close VS. Even closing the solution doesn't work. If I didn't have the activation problem, I wouldn't have to go through this step, so solving the activation problem would lessen this problem a bunch.
Thanks
]Monty[
EDIT: I was just checking out the external activation samples, but they use that ServiceBrokerInterface code. So the example only really shows how to use it, rather than how to do external activation. Are there any simplified examples available that don't use the interface code? I'm sorry, but that code is not commented very well and is very complicated. I guess if that is all there is available then I can try to work it out, but I started my current exercise using it and eventually gutted it down to the basic parts trying to understand it, and I've obviously messed it up.
Hi again,
On the subject of the external activation sample:
Imagine that the environment is a classic 3-tier architecture: a UI or Web Service layer, an application layer that the UI talks to through web services (the UI and the app layer are web farms, clustered not with the clustering service but with NLB), and finally the database server layer. How is a Windows service going to run safely in this environment? I need a clustered Windows service layer in order to provide fault tolerance for the Windows service, and it doesn't exist. Do I run a copy of the service on each web service machine in the app layer? How do I get them all to coordinate? Does the sample provided have this support?
]Monty[
Monty Hi,
I'm really glad you're making progress. I'm sure we'll nail this down in no time.
But first of all, you are making some wrong assumptions about how Service Broker works and how to write Service Broker apps. Service Broker messages will not be sent until a transaction commits. So there will definitely be no long transaction left open waiting for the error, since the error by definition won't come until you commit your transaction :). The model is like this: you send the message and commit. Then the application goes about its normal business. Later (as you pointed out, it may be hours later), either an error comes back (malformed XML), or the dialog times out (this is just another error message in your queue), or a response comes back. This message (errors are also messages) should be the trigger to continue your processing on this request (continue from where the SEND left off). You can either use activation on the sender's side, or the application can have a dedicated listener thread that sits in a WAITFOR(RECEIVE...) loop. In the latter case, the application should, of course, be prepared to deal with the case where it has just started up and finds response messages for requests it made a week ago, the last time it was running.
BTW, in case you noticed that Service Broker messages appear to be sent immediately, before the transaction is committed: this is just an optimization we do when the target service resides in the same instance as the sender. The 'sent' messages are actually locked by the sender's transaction and won't be available for RECEIVE in any other transaction until the sender commits. And if the optimization cannot be performed for whatever reason (target database offline, target queue disabled, target conversation locked, etc.), the message will take the normal path (through the sender's sys.transmission_queue).
Now about the WebService calls.
HTTP requests per se are very expensive to perform from inside SQL Server, so it is a matter of resource consumption to do them externally rather than internally.
On how to handle a message like this (that requires a potential failure in an expensive operation, like a WS call), my proposed handling is this:
begin transaction
    receive message
    if message is web request
        save state of request (http address, caller etc)
    else if message is retry timer message
        load state of request
    endif
    save a retry timer on the dialog (say 1 minute) using BEGIN DIALOG TIMER
commit
do the web request (no transaction open)
if success
    begin transaction
        reset retry timer
        send back response
        end conversation
    commit
endif
This way you don't hold a long transaction (nothing is worse for a database than those!) and you have a persistent retry timer, stored in the database. You will retry your web request even after a server restart or a failover (cluster or mirroring), because dialog timers are persisted. And you don't roll back in case of a web call failure.
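The pattern above can be sketched in T-SQL. This is only an outline under assumptions: dbo.PendingRequests is a hypothetical state table, the message type names reuse those from the thread, and the web call itself happens in the CLR proc with no transaction open:

```sql
-- Sketch of the retry-timer pattern; dbo.PendingRequests is hypothetical.
DECLARE @handle uniqueidentifier, @msgtype sysname,
        @body varbinary(max), @success bit;

BEGIN TRANSACTION;
WAITFOR (RECEIVE TOP(1)
             @handle  = conversation_handle,
             @msgtype = message_type_name,
             @body    = message_body
         FROM [TestSB Queue]), TIMEOUT 5000;

IF @msgtype = 'http://www.TestSB.com/msg/HelloWorld'
    INSERT INTO dbo.PendingRequests(ConversationHandle, Body)
    VALUES (@handle, @body);                       -- save request state
ELSE IF @msgtype = 'http://schemas.microsoft.com/SQL/ServiceBroker/DialogTimer'
    SELECT @body = Body FROM dbo.PendingRequests
    WHERE ConversationHandle = @handle;            -- reload saved state

-- Persistent retry timer: fires a DialogTimer message in 60 seconds.
BEGIN CONVERSATION TIMER (@handle) TIMEOUT = 60;
COMMIT;

-- ... perform the web request here, with no transaction open ...

IF @success = 1
BEGIN
    BEGIN TRANSACTION;
    DELETE FROM dbo.PendingRequests WHERE ConversationHandle = @handle;
    END CONVERSATION @handle;                      -- also cancels the timer
    COMMIT;
END
```

If the web call fails, nothing is rolled back: the saved state plus the timer simply cause the request to be retried a minute later.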
HTH,
~ Remus
|||
Remus Rusanu wrote:
But first of all, you are making some wrong assumptions about how Service Broker works and how to write Service Broker apps. Service Broker messages will not be sent until a transaction commits. So there will definitely be no long transaction left open waiting for the error, since the error by definition won't come until you commit your transaction :). The model is like this: you send the message and commit. Then the application goes about its normal business. Later (as you pointed out, it may be hours later), either an error comes back (malformed XML), or the dialog times out (this is just another error message in your queue), or a response comes back. This message (errors are also messages) should be the trigger to continue your processing on this request (continue from where the SEND left off). You can either use activation on the sender's side, or the application can have a dedicated listener thread that sits in a WAITFOR(RECEIVE...) loop. In the latter case, the application should, of course, be prepared to deal with the case where it has just started up and finds response messages for requests it made a week ago, the last time it was running.
Hi Remus
You hit the nail on the head, I am completely confused on how this works. I thought that was pretty obvious from my previous posts. :)
So to summarize: when I send a message, I do the SEND and commit the transaction, but I don't END CONVERSATION until I get back a response or an error. Currently, 1) I'm not sending back a response on the receive side, and 2) I'm ending the conversation as soon as the send completes. Also, since my sender is a console application, I should turn around and wait for the response/errors before sending another message, correct?
TIA
]Monty[
|||
Hi again,
Another question:
After sending the message (and not ending the conversation), do I receive on the queue I did the SEND on, or on the client queue that I had to set up to create the initiator service?
Here's the ddl for the queues, messages, and contracts:
PRINT 'create message SBTest.Message1';
CREATE MESSAGE TYPE
[SBTest.Message1]
validation = NONE;
PRINT 'create contract SBTest.Contract';
CREATE CONTRACT [SBTest.Contract]
(
[SBTest.Message1] SENT BY INITIATOR
);
go
--*********************************************
--* Create the [SBTest] service
--*********************************************
PRINT 'create queue ''[SBTest Queue]''';
CREATE QUEUE [SBTest Queue];
PRINT 'create service ''[SBTest]''';
CREATE SERVICE [SBTest]
ON QUEUE [SBTest Queue]
(
[SBTest.Contract]
);
go
--*********************************************
--* Create the [SBTest Client] service (the client.exe program)
--*********************************************
PRINT 'create queue ''[SBTest Client Queue]''';
CREATE QUEUE [SBTest Client Queue];
PRINT 'create service ''[SBTest Client]''';
CREATE SERVICE [SBTest Client]
on queue [SBTest Client Queue];
-- no contract because it only initiates messages
go
Do I do a receive on [SBTest Queue] or [SBTest Client Queue]?
TIA
]Monty[
|||Response messages will be sent by the target service to the initiator service. Hence they will be delivered to the initiator queue and you should receive them from there (i.e. [SBTest Client Queue]) in your case. If you want the response message to actually contain a real message body, you will need to add a new message type and alter the contract. If your responses do not contain any info but simply acknowledge the initiator that its request has been handled, you could simply end the conversation on the target which will send the special 'End Conversation' message back to the initiator as follows:
Initiator:
    Begin Tran
    Begin Dialog
    Send a request
    Commit
Target:
    Begin Tran
    Receive
    Process message
    End conversation
    Commit
Initiator:
    Begin Tran
    Receive
    If 'End Dialog' message
        End conversation
    Else If 'Error' message
        Log/MsgBox/Email
        End conversation
    Commit
As Remus explained, handling the responses could be performed in a background thread of your console application. If you need the user to be alerted of errored dialogs immediately, this background thread could signal the UI thread to pop up a message box. Or it could simply log the failures to a table so that the user can look at the table later to see what failed. If you don't need the responses processed immediately, you could set up a periodic task (you can do that using SSB itself, without requiring SQL Agent) that processes the responses.
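The initiator-side portion of the outline above can be sketched against the queues defined earlier in the thread. The two message type names are Service Broker's built-in system types; dbo.DialogErrors is a hypothetical logging table:

```sql
-- Sketch: one pass of the response-draining loop on the initiator queue.
DECLARE @handle uniqueidentifier, @msgtype sysname;

BEGIN TRANSACTION;
WAITFOR (RECEIVE TOP(1)
             @handle  = conversation_handle,
             @msgtype = message_type_name
         FROM [SBTest Client Queue]), TIMEOUT 5000;

IF @msgtype = 'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'
    END CONVERSATION @handle;          -- target finished successfully
ELSE IF @msgtype = 'http://schemas.microsoft.com/SQL/ServiceBroker/Error'
BEGIN
    -- log the failure for later inspection, then close our side
    INSERT INTO dbo.DialogErrors(ConversationHandle, LoggedAt)
    VALUES (@handle, GETDATE());
    END CONVERSATION @handle;
END
COMMIT;
```

A background thread would run this in a loop; the TIMEOUT lets it wake up periodically to check for shutdown.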
|||Monty Shaw writes:
Also, since my sender is a console application, I should turn around and wait for the response/errors, before sending another message, correct?
This depends on whether the user needs to block and wait for the response before submitting the next request. But as you said, the user is not really interested in the response, and hence you could simply keep accepting new requests on the console and then submit them using BEGIN DIALOG/SEND in a transaction. Processing of responses can be performed asynchronously in a background task or deferred to a batched task that runs every night.
Later,
Rushi
Ok guys,
I created a small Visual Studio solution that does things the way you've suggested and zipped up the entire solution. I cannot get the stored procedure to activate and consume messages out of the queue.
In order to recreate what I am seeing, you will have to:
1. unzip this zip
2. change the database references in the Database project and the SqlServer project.
3. run the create.sql script to create the database.
4. deploy the solution
5. run the activate.sql script to set up the activation of the stored procedure
6. run the Client.exe (run without debug).
The first time you run this it may work. If so, run it again. What I see after the first run and a solution deploy is that the stored procedure is never again activated to process messages. The stored proc is set up to just read the messages and then end the conversation, along with writing messages into the log table. Nothing, nada.
Help!
Is there a way to upload a zip file? I wrote the above thinking I could attach a file to this post, but there doesn't appear to be a way to do that. If you can tell me how to do that or where to send it, I have a zip file just waiting.
]Monty[
|||You should wait for a response on [SBTest Client Queue].
The [SBTest] service should end the conversation on success. The client will get back an error message on failure (for whatever reason: XML validation, conversation timeout, access denied on the target service) or an end conversation message in case of success. The client could wait for a response with a WAITFOR(RECEIVE ... FROM [SBTest Client Queue] WHERE conversation_handle = @conversation_handle), where @conversation_handle is the handle returned by BEGIN DIALOG.
BTW, if the initiator (sender) is a console application and there is no message sent back to the initiator, I'm not sure it makes sense for the console application to wait for a response, but I guess this is solely for the purpose of testing and figuring out how things work.
HTH,
~ Remus
Making outbound HTTP requests from SQLCLR
Hi,
I have written a C# console application that adds a message to a SB queue and a C# stored procedure that reads the message from the queue.
I have had so many problems that I'm beginning to doubt the usefulness of this, but that is probably just my frustration speaking.
I am stuck on putting an XML message into the queue and reading the XML in the stored procedure.
The console app has a simple object called Message. This class has 2 fields, a guid and a string. I serialize the object into XML using XmlSerializer. This results in an XML string that looks like this:
<?xml version="1.0" encoding="utf-16"?>
<Message xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http:/
/www.w3.org/2001/XMLSchema">
<TranId>a5b4a32f-4940-46ea-a928-3aae33a067fe</TranId>
<Text>TEST</Text>
</Message>
I use a SqlCommand to SEND the message to the queue. I use the following code to SEND the message:
// Add dialog handle
SqlParameter paramDialogHandle = new SqlParameter("@.dh", SqlDbType.UniqueIdentifier);
paramDialogHandle.Value = this.DialogHandle;
cmd.Parameters.Add(paramDialogHandle);
// Add message
SqlParameter paramMsg = new SqlParameter("@.msg", SqlDbType.VarChar,4000);
paramMsg.Value = msg.ToXml();
cmd.Parameters.Add(paramMsg);
// Build the SEND command
cmd.CommandText = "SEND ON CONVERSATION @.dh " +
"MESSAGE TYPE [http://www.TestSB.com/msg/HelloWorld] " +
"(@.msg)";
cmd.ExecuteNonQuery();
For some reason, this message never gets to the stored procedure that is set up to be activated for this queue. Actually, it never even gets written to the queue.
However, If I change the type of parameter in the SEND above to SqlDbType.Xml, the message is written into the queue, but without the '<?xml version="1.0" encoding="utf-16"?>' beginning it, so when my stored procedure tries to deserialize the xml, using XmlSerializer, it fails.
Obviously, I have overcome the backflips needed to do XmlSerialization within the SQLCLR, and I am getting the stored procedure activated, because I do get a message logged showing the XML that is received.
Since I have the queue set up to only allow valid XML messages, I'm assuming that inclusion of the <? xml version..../> line prevents the message from passing Service Broker's XML validation. (And by the way, I have never seen an error message placed into the queue, the event log, or thrown when non-xml is passed to a queue that has XML validation turned on. The message just does not show up in the queue. Is this a bug?) But the lack of the line prevents XmlSerializer from working.
How the heck do you serialize an object into XML using XmlSerializer, so that it can be deserialized with XmlSerializer?
Thanks
]Monty[
On the Service Broker specific issues I advise you to try following the steps in this troubleshooting article I posted at http://blogs.msdn.com/remusrusanu/archive/2005/12/20/506221.aspx. What you will most likely find is that when it appears that you send the message and the message vanishes, you will actually find an error message in the sender's service queue complaining about the message being invalid XML.
You see, the XML validation of messages is always performed by the target service when the message is received, not by the sender when the message is sent. Therefore if you send an incorrect XML message, it will not trigger an error in the SEND statement. However, since I assume your message type is defined as WELL_FORMED_XML, the message will not pass the XML validation when it is attempted to be enqueued in the target queue, and an error message will be sent back. This is the error message you'll find in your sender's queue.
The XML formatting problems you are seeing are due to the parameter types you use. It makes a big difference on the XML if you pass it in as a VARCHAR(4000), an NVARCHAR(4000) or an XML datatype. VARCHAR types will carry UTF-8 encoded XML, therefore your serializer needs to create UTF-8 XML. The NVARCHAR type will carry UTF-16 encoded XML, and you need to serialize it accordingly. I'm not a .Net XmlSerializer expert, but I believe you can specify the encoding to be used. I know for one that you can create an XmlTextWriter and specify Encoding.UTF8 or Encoding.Unicode (UTF-16), then use this XmlTextWriter to obtain the xml stream from the XmlSerializer. Similarly, you can specify the encoding in the XmlTextReader when deserializing the XML.
However, if you use the XML datatype, then you shouldn't need to specify the encoding explicitly. It seems you got this working on the SEND side, but you cannot get the message body as an XML on the receive side. How do you obtain this message in the C# client on the receive side? Are you using SqlCommand.ExecuteReader to parse the resultset of a RECEIVE statement? What you need to do is to make sure that you get back from the server an XML datatype, not a VARBINARY(MAX). Simply use RECEIVE ... CAST(message_body AS XML) for this to happen.
HTH,
~ Remus
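To answer the "how do you serialize so it can be deserialized" question under Remus's encoding advice, here is a minimal, self-contained round-trip sketch. The Message shape is inferred from the XML shown earlier in the thread; the class and method names are otherwise illustrative, not from the original code:

```csharp
using System;
using System.IO;
using System.Text;
using System.Xml;
using System.Xml.Serialization;

public class Message
{
    public Guid TranId;
    public string Text;
}

public static class Utf8Xml
{
    // Serialize through an explicit UTF-8 writer so the declaration reads
    // encoding="utf-8" instead of the utf-16 that a StringWriter produces.
    public static string Serialize(Message msg)
    {
        var xs = new XmlSerializer(typeof(Message));
        using (var stream = new MemoryStream())
        {
            // XmlTextWriter stamps the XML declaration with the encoding
            // it was constructed with (false = no byte-order mark).
            using (var writer = new XmlTextWriter(stream, new UTF8Encoding(false)))
            {
                xs.Serialize(writer, msg);
            }
            return Encoding.UTF8.GetString(stream.ToArray());
        }
    }

    public static Message Deserialize(string xml)
    {
        var xs = new XmlSerializer(typeof(Message));
        // Reading from a TextReader ignores the declared encoding,
        // since a .NET string is already decoded.
        using (var reader = new StringReader(xml))
        {
            return (Message)xs.Deserialize(reader);
        }
    }
}
```

A SqlDbType.VarChar parameter then matches the utf-8 declaration; for NVarChar you would serialize through a StringWriter instead, which stamps utf-16.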
|||
Hi Remus,
Thanks for the reply. Yes, what you've posted about the data types was part of the problem. However, I have never been able to find any trace of a poorly formed XML error message, in the target's queue or in the initiator's queue. If you say so, I guess that's right, but it doesn't work for me.
I've gotten around the serialization problem by using nvarchar(max) on the send and receive side, and the <? xml .../> is making it across fine now. However, now XmlSerializer is complaining about the first line of the XML that follows. I've given up on this and am in the process of simplifying things back down to where they were once somewhat working predictably.
My problem now is that I can only run my test once before getting an error message, and then I must shut down Visual Studio 2005 (because it is holding a connection open somewhere), drop and then re-create the database in SQL Mgmt Studio, and then bring up VS2005 again and run my console application. If I run it 2 times in a row, the stored procedure stops getting activated, even though in my postdeployscript.sql I am dropping and then recreating the activation of the newly installed stored proc using the following:
ALTER QUEUE [TestSB Queue] WITH ACTIVATION (DROP);
go
alter queue [TestSB Queue]
with status=on,
activation (
status = on,
procedure_name = ServiceProc,
max_queue_readers = 2,
execute as self);
That brings up another question: Why is it you must drop the activation and then create it again in order to get the activation to fire on the newly deployed stored procedure? I understand that the assembly containing the stored proc has a different version, but it took me forever to figure out that you had to do a drop first, and that the alter queue [xx] Activation(....) wasn't enough.
Anyway, I have been hitting my head against the wall with this all week. Tomorrow I am going to start over with a new solution and try to simplify things even more to get to the bottom of this. If you forced me to go into production right now with Service Broker, I'd have to complain that it isn't ready. Or I'm not ready, at best. I've had so many flaky experiences so far, I know my confidence has been eroded. There is too little documentation and apparently not a lot of experience with C# programs written under Visual Studio that write to queues that are serviced by SQLCLR stored procedures. There is very little in the way of samples, etc. Heck, the words 'postdeployscript' and 'predeployscript', necessary things for doing SB stuff within VS, can't even be found in the Visual Studio help!
In the end, all I want to do is serialize a message through SB to some code that makes a web service call. Boy is that a tall order.
]Monty[
Are you ending the sending dialog after the SEND? If that's the case, then when the error comes back it will find the dialog already ended and therefore it will simply delete the dialog, as well as the error message. If you do something like BEGIN DIALOG/SEND/END, this is similar to a 'fire and forget' scenario, where you (the sender) can never know if your message actually reached the target.
Anyway, using the Profiler can reveal a great deal of what's going on with the dialog messages, select the events in the Broker category.
Unfortunately I cannot help you too much with the CLR and Visual Studio deployment problems, since I'm unfamiliar with them myself. The dedicated .Net forum is monitored by experts with more knowledge in that area, http://forums.microsoft.com/MSDN/ShowForum.aspx?ForumID=86&SiteID=1, it would worth the trouble to describe the problem there. There are also some blogs of experts in this area, like http://blogs.msdn.com/sqlclr/
One explanation for the weird behavior you see would be that your activated procedure never returns. You can verify this in the Profiler (the activation started/stopped events) as well as by querying the sys.dm_broker_activated_tasks view.
HTH,
~ Remus
1. You do not have to ALTER QUEUE in order to use the newly installed stored proc. If there are any instances of the stored proc already running, they will continue to run the old version. However, the next time Service Broker activates a new proc it should automatically launch the newly deployed version. This behavior is exactly the same as that of SQL when ALTER PROCEDURE can update the procedure without killing spids that are already running the older version. However, any new invocation will use the new code.
2. You should be able to close the connection to the database from Visual Studio by opening the Server Explorer pane, navigating to the database, right clicking and selecting 'Close Connection'.
3. You do not have to DROP activation before reconfiguring it. You can adjust any/all of the settings of activation (viz, status, procedure_name, max_queue_readers and execute as user) using a single ALTER QUEUE statement.
4. Here's a resource on XmlSerialization in SQLCLR:
http://blogs.msdn.com/sqlclr/archive/2005/07/25/Vineet.aspx
The steps I perform are as follows:
SqlCommand cmd = conn.CreateCommand();
cmd.CommandText = @"WAITFOR (RECEIVE TOP(1) message_body, ... FROM [YourQueue]), TIMEOUT 5000";
cmd.Transaction = tran;
SqlDataReader reader = cmd.ExecuteReader();
if (reader.Read())
{
    SqlBytes body = reader.GetSqlBytes(0);  // ordinal of the message_body column
    // XmlSerializer xs;
    YourObject yo = (YourObject) xs.Deserialize(body.Stream);
    //...
}
5. If you install the samples that come with the SQL Server DVDs, you should find a HelloWorld_CLR sample as well as a ServiceBrokerInterface library which is a sample programming model for writing service broker application. The library also has a shopping cart sample.
|||I just wanted to caution you about using SQLCLR to make outbound HTTP requests. When you make a web request, note that you are doing so as the user running sqlservr.exe. If this is a machine account, you may want to think about the consequences of that principal making outbound HTTP connections to proxy servers (or worse, the destination web server itself).
Secondly, if you make a blocking web request call, you will be holding the transaction locks on the conversation group (and anything else that the stored proc may have locked) until the response is received and parsed.
Thirdly, you are using up one of sqlservr's threads for doing the web request, preventing the server from doing real database work until the request returns.
Finally, how are you going to handle errors from the webserver? Either you will have to roll back the transaction (but then the message will pop up again on executing a RECEIVE) or use some sophisticated mechanism of logging pending requests and retrying them using some retry logic. If you choose to roll back, remember that rolling back 5 times consecutively will disable the queue to prevent the poisoned message from repeatedly activating the queue.
If any of these issues concern you then you might want to think about re-architecting the solution to use an external ADO.NET app that receives the messages rather than an internally activated stored proc. If you want the ADO.NET app to be dynamically launched you could try the external activator mentioned here (http://rushi.desai.name/Blog/tabid/54/EntryID/7/Default.aspx).
|||Hi guys,
Thanks for the replies. This is really going somewhere now :)
1) Getting errors when sending non-xml through a 'well formed xml' queue:
Yes, I am ending the dialog after the send. Are you saying that I need to turn around and wait for a response error message when the message is read? If so, how do I keep the transaction from becoming a super long one? That's one of the points I'm trying to make. This is a fire and forget queue. I'm basically trying to do a one-way message. I think the 'well-formed-xml' validation is not too useful in this case. I can't wait around for the reading of the message to generate a returning error message, which might be an hour or so from now, and the transaction would have to be open all that time. I think a better approach is to turn off the validation and deal with it in the application code.
2) Doing an ALTER QUEUE after a new stored procedure is installed.
Interesting theory. Unfortunately, I have *NEVER* been able to get the stored procedure to fire after a deploy from Visual Studio. I spent 2 days on this until I tried doing a drop and create on the activation and finally got the stored procedure to be called. I'm obviously doing something wrong, but I can't for the life of me figure out what it is. And it only works for me if I do the drop and then the create. The create alone is not enough for me.
3) Making outbound HTTP requests from SQLCLR
Well, reliable messaging is the whole point of my exercise. There is no Indigo yet, and QueuedComponents/MSMQ is not supported officially by Microsoft in a clustered environment, and a clustered environment is required for fault tolerance in production since there is no disaster recovery for MSMQ. Believe me, I have been there. QueuedComponents/MSMQ is not an option. So I'm left with SB queues with stored procedures making web service calls. I understand the transactional concerns that have been raised, and handling errors is a big unsolved problem because of the poison message 'feature' (can it be turned off?), and the fact that there is no way to delay a message from being received again, and the fact that the stored procedure is supposed to read all the messages before it stops. Assuming the web server is down, if I requeue the message back to the queue again, the stored procedure will just keep reading the same message over and over. I really don't know how to handle errors when the web service call fails; I haven't been able to get that far yet. Are you saying that internal activation is not the way to do this, and external activation is the way to go? Does it help with handling errors or just avoid the resource usage problem of http requests inside of SQLCLR?
4) Closing connections to the database by closing the connections in server explorer.
I do this but Visual Studio is still holding a connection, because I can't drop the database unless I close VS. Even closing the solution doesn't work. If I didn't have the activation problem, I wouldn't have to go through this step, so solving the activation problem would lessen this problem a bunch.
Thanks
]Monty[
EDIT: I was just checking out the external activation samples, but they use that ServiceBrokerInterface code. So the example only really shows how to use it, rather than how to do external activation. Are there any simplified examples available that don't use the interface code? I'm sorry, but that code is not commented very well and is very complicated. I guess if that is all there is available then I can try to make it out, but I started my current exercise using it, and eventually gutted it down to the basic parts trying to understand it, and I've obviously messed it up.
Hi again,
On the subject of the external activation sample:
Imagine that the environment is a classic 3 tier architecture, a UI or Web Service layer, an application layer that the UI talks to through web services, (the UI and the app layer are web farms not clustered using the clustering service but clustered with nlb) and finally the database server layer. How is a Windows Service going to be safely running in this environment? I need a clustered Windows Service layer in order to provide fault tolerance for the windows service. It doesn't exist. Do I run a copy of the service on each web service machine in the app layer? How do I get them all to coordinate? Does the sample provided have this support?
]Monty[
Monty Hi,
I'm really glad you're making progress. I'm sure we'll nail this down in no time.
But first of all, you are making some wrong assumptions about how Service Broker works and on how to write Service Broker apps. Service Broker messages will not be sent until a transaction commits. So definitely there will be no long transaction left open waiting for the error, since the error by definition won't come until you commit your transaction :). The model is like this: you send the message and commit. Then the application goes about its normal business. Later (as you pointed out, it may be hours later), either an error comes back (malformed XML), or the dialog times out (this is just another error message in your queue), or a response comes back. This message (errors are also messages) should be the trigger to continue your processing on this request (continue from where the SEND left off). You can either use activation on the sender's side, or the application can have a dedicated listener thread that sits in a WAITFOR(RECEIVE...) loop. In the latter case, the application should be, of course, prepared to deal with the case when it just started up and finds response messages for requests it made a week ago, the last time it was running.
BTW, in case you noticed that Service Broker messages appear to be sent immediately, before the transaction is committed, this is just an optimization we do in the case when the target service resides in the same instance as the sender. The 'sent' messages are actually locked by the sender's transaction and won't be available for RECEIVE in any other transaction until the sender's commit. And if the optimization cannot be performed for whatever reason (target database offline, target queue disabled, target conversation locked etc), the message will take the normal path (through the sender's sys.transmission_queue).
Now about the WebService calls.
The HTTP requests per se are very expensive to make from inside SQL Server. So it is a matter of resource consumption to be external, not internal.
On how to handle a message like this (that requires a potential failure in an expensive operation, like a WS call), my proposed handling is this:
begin transaction
receive message
if message is web request
save state of request (http address, caller etc)
else if message is retry timer message
load state of request
endif
save a retry timer on the dialog (say 1 minute) using BEGIN DIALOG TIMER
commit
do the web request (no transaction open)
if success
begin transaction
reset retry timer
send back response
end conversation
commit
endif
This way you don't hold long transactions (nothing is worse for a database than those!) and you have a persistent retry timer, stored in the database. You will retry your web request even after a server restart or a failover (cluster or mirroring), because dialog timers are persisted. And you don't roll back in case of web call failure.
HTH,
~ Remus
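A rough C# shape of Remus's outline, written as an external ADO.NET reader in the spirit of Rushi's earlier advice. This is a hedged sketch: the queue name comes from this thread, but CallWebService is a placeholder, and the save-state / CONVERSATION TIMER steps from the outline are elided to comments:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

public static class RetryLoop
{
    // Kept as a constant so the shape of the dequeue is easy to see.
    public const string ReceiveSql =
        "WAITFOR (RECEIVE TOP(1) conversation_handle, " +
        "CAST(message_body AS NVARCHAR(MAX)) FROM [TestSB Queue]), TIMEOUT 5000";

    public static void ProcessOne(SqlConnection conn)
    {
        Guid handle = Guid.Empty;
        string body = null;

        // Short transaction #1: dequeue, persist the request state, arm a
        // retry timer (BEGIN CONVERSATION TIMER), then commit immediately.
        using (SqlTransaction tran = conn.BeginTransaction())
        {
            using (SqlCommand cmd = conn.CreateCommand())
            {
                cmd.Transaction = tran;
                cmd.CommandText = ReceiveSql;
                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    if (reader.Read())
                    {
                        handle = reader.GetGuid(0);
                        body = reader.GetString(1);
                    }
                }
            }
            // ... save request state / BEGIN CONVERSATION TIMER here ...
            tran.Commit();    // nothing stays locked during the web request
        }

        if (body == null) return;           // WAITFOR timed out, nothing to do

        bool ok = CallWebService(body);     // slow call, NO transaction open

        if (ok)
        {
            // Short transaction #2: cancel the timer and end the dialog.
            using (SqlTransaction tran = conn.BeginTransaction())
            using (SqlCommand cmd = conn.CreateCommand())
            {
                cmd.Transaction = tran;
                cmd.CommandText = "END CONVERSATION @handle";
                cmd.Parameters.Add("@handle", SqlDbType.UniqueIdentifier).Value = handle;
                cmd.ExecuteNonQuery();
                tran.Commit();
            }
        }
        // On failure: do nothing; the persisted timer re-delivers a
        // DialogTimer message later and the request is retried.
    }

    static bool CallWebService(string xml) { return true; } // placeholder
}
```

The point of the two short transactions is exactly Remus's: no lock is held while the web request is in flight, and failure is handled by the timer, not by a rollback that would re-activate the poison-message counter.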
|||
Remus Rusanu wrote:
But first of all, you are making some wrong assumptions about how Service Broker works and on how to write Service Broker apps. Service Broker messages will not be sent until a transaction commits. So definitely there will be no long transaction left open waiting for the error, since the error by definition won't come until you commit your transaction :). The model is like this: you send the message and commit. Then the application goes about its normal business. Later (as you pointed out, it may be hours later), either an error comes back (malformed XML), or the dialog times out (this is just another error message in your queue), or a response comes back. This message (errors are also messages) should be the trigger to continue your processing on this request (continue from where the SEND left off). You can either use activation on the sender's side, or the application can have a dedicated listener thread that sits in a WAITFOR(RECEIVE...) loop. In the latter case, the application should be, of course, prepared to deal with the case when it just started up and finds response messages for requests it made a week ago, the last time it was running.
Hi Remus
You hit the nail on the head, I am completely confused on how this works. I thought that was pretty obvious from my previous posts. :)
So to summarize, when I send a message, I do the SEND, and commit the transaction, but I don't END CONVERSATION until I get back a response or an error. 1) I'm not sending back a response on the receive side, and 2) I'm ending the conversation as soon as the send completes. Also, since my sender is a console application, I should turn around and wait for the response/errors, before sending another message, correct?
TIA
]Monty[
|||
Hi again,
Another question:
After sending the message (and not ending the conversation), do I receive on the queue I did the send on or the client queue that I had to set up to create the initiator service?
Here's the ddl for the queues, messages, and contracts:
PRINT 'create message SBTest.Message1';
CREATE MESSAGE TYPE
[SBTest.Message1]
validation = NONE;
PRINT 'create contract SBTest.Contract';
CREATE CONTRACT [SBTest.Contract]
(
[SBTest.Message1] SENT BY INITIATOR
);
go
--*********************************************
--* Create the [SBTest] service
--*********************************************
PRINT 'create queue ''[SBTest Queue]''';
CREATE QUEUE [SBTest Queue];
PRINT 'create service ''[SBTest]''';
CREATE SERVICE [SBTest]
ON QUEUE [SBTest Queue]
(
[SBTest.Contract]
);
go
--*********************************************
--* Create the [SBTest Client] service (the client.exe program)
--*********************************************
PRINT 'create queue ''[SBTest Client Queue]''';
CREATE QUEUE [SBTest Client Queue];
PRINT 'create service ''[SBTest Client]''';
CREATE SERVICE [SBTest Client]
on queue [SBTest Client Queue];
-- no contract because it only initiates messages
go
Do I do a receive on [SBTest Queue] or [SBTest Client Queue]?
TIA
]Monty[
|||Response messages will be sent by the target service to the initiator service. Hence they will be delivered to the initiator queue and you should receive them from there (i.e. [SBTest Client Queue]) in your case. If you want the response message to actually contain a real message body, you will need to add a new message type and alter the contract. If your responses do not contain any info but simply acknowledge the initiator that its request has been handled, you could simply end the conversation on the target which will send the special 'End Conversation' message back to the initiator as follows:
Initiator                        Target
Begin Tran
Begin Dialog
Send a request
Commit
                                 Begin Tran
                                 Receive
                                 Process message
                                 End conversation
                                 Commit
Begin Tran
Receive
If 'End Dialog' message
    End conversation
Else If 'Error' message
    Log/MsgBox/Email
    End conversation
Commit
As Remus explained, handling the responses could be performed in a background thread of your console application. If you need the user to be alerted of errored dialogs immediately, this background thread could signal the UI thread to pop up a message box. Or it could simply log the failures to a table so that the user can look at the table to see what failed later. If you don't need the responses to be processed immediately, you could set up a periodic task (you can do that using SSB itself without requiring SQL Agent) that processes the responses.
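Sketching the initiator side of that ladder against the [SBTest] DDL posted earlier in the thread. This is a hedged illustration only; the T-SQL is kept in named constants, and the method layout is invented:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

public static class InitiatorSketch
{
    // Begin the dialog and send in one batch; @dh comes back as an output parameter.
    public const string BeginAndSendSql =
        "BEGIN DIALOG CONVERSATION @dh " +
        "FROM SERVICE [SBTest Client] TO SERVICE 'SBTest' " +
        "ON CONTRACT [SBTest.Contract]; " +
        "SEND ON CONVERSATION @dh MESSAGE TYPE [SBTest.Message1] (@msg);";

    // Responses (EndDialog or Error) arrive on the *initiator's* queue.
    public const string WaitForReplySql =
        "WAITFOR (RECEIVE TOP(1) message_type_name " +
        "FROM [SBTest Client Queue] " +
        "WHERE conversation_handle = @dh), TIMEOUT 10000";

    public static void SendAndWait(SqlConnection conn, string msg)
    {
        Guid dh;

        // 1. Begin dialog + SEND, then commit; do NOT end the conversation here.
        using (SqlTransaction tran = conn.BeginTransaction())
        using (SqlCommand cmd = conn.CreateCommand())
        {
            cmd.Transaction = tran;
            cmd.CommandText = BeginAndSendSql;
            SqlParameter p = cmd.Parameters.Add("@dh", SqlDbType.UniqueIdentifier);
            p.Direction = ParameterDirection.Output;
            cmd.Parameters.Add("@msg", SqlDbType.NVarChar, 4000).Value = msg;
            cmd.ExecuteNonQuery();
            dh = (Guid)p.Value;
            tran.Commit();
        }

        // 2. Later (possibly a background thread), wait for EndDialog/Error.
        using (SqlTransaction tran = conn.BeginTransaction())
        using (SqlCommand cmd = conn.CreateCommand())
        {
            cmd.Transaction = tran;
            cmd.CommandText = WaitForReplySql;
            cmd.Parameters.Add("@dh", SqlDbType.UniqueIdentifier).Value = dh;
            object msgType = cmd.ExecuteScalar();  // null if the WAITFOR timed out
            if (msgType != null)
            {
                // If msgType is .../ServiceBroker/Error, log it; either way,
                // end our half of the conversation.
                cmd.CommandText = "END CONVERSATION @dh";
                cmd.ExecuteNonQuery();
            }
            tran.Commit();
        }
    }
}
```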
|||Monty Shaw writes:
Also, since my sender is a console application, I should turn around and wait for the response/errors, before sending another message, correct?
This depends on whether the user needs to block and wait for the response before submitting the next request. But as you said, the user is not really interested in the response, and hence you could simply keep accepting new requests on the console and then submit them using "Begin Dialog/Send" in a transaction. Processing of responses can be performed asynchronously in a background task or deferred to a batched task that runs every night.
Later,
Rushi
Ok guys,
I created a small Visual Studio solution that does things the way you've suggested. I've zipped up the entire solution. I cannot get the stored procedure to activate and consume messages out of the queue.
In order to recreate what I am seeing, you will have to:
1. unzip this zip
2. change the database references in the Database project and the SqlServer project.
3. run the create.sql script to create the database.
4. deploy the solution
5. run the activate.sql script to set up the activation of the stored procedure
6. run the Client.exe (run without debug).
The first time you run this it may work. If so, run it again. What I see after the first run, and a solution deploy, is that the stored procedure is never again activated to process messages. The stored proc is set up to just read the messages and then end the conversation, along with writing messages into the log table. Nothing, nada.
Help!
Is there a way to upload a zip file? I wrote the above thinking I could attach a file to this post, but there doesn't appear to be a way to do that. If you can tell me how to do that or where to send it, I have a zip file just waiting.
]Monty[
|||You should wait for a response on [SBTest Client Queue].
The [SBTest] service should end the conversation on success. The client will get back an error message on failure (for whatever reason: XML validation, conversation timeout, access denied on target service) or an end conversation message in case of success. The client could wait for a response with a WAITFOR(RECEIVE ... FROM [SBTest Client Queue] WHERE conversation_handle = @conversation_handle), where @conversation_handle is the handle returned by BEGIN DIALOG.
BTW, if the initiator (sender) is a console application and there is no message sent back to the initiator, I'm not sure it makes sense for the console application to wait for a response, but I guess this is solely for the purpose of testing and figuring out how things work.
HTH,
~ Remus
Making changes to the reportserver database
Is it ok to make changes directly to the ReportServer database - adding tables and stored procedures? Will amendments I make affect upgrades in the future?
My feeling is that you do this at your own risk. Going directly at the reportserver database is not supported. You should use the web services. If you want some of your own tables and stored procedures I would suggest creating another database, even if you are going against the reportserver database.
I would guess that you could do it as long as your naming convention is such
that MS wouldn't be likely to name something similar. But if you put them in
your own database then you are guaranteed to be safe.
--
Bruce Loehle-Conger
MVP SQL Server Reporting Services
"Si Downes" <SiDownes@.discussions.microsoft.com> wrote in message
news:093F8E5E-948B-40BE-A4BE-14AB46A080EF@.microsoft.com...
> Is it ok to make changes directly to the ReportServer database - adding
> tables and stored procedures? Will admendments I make affect upgrades in
the
> future?
Friday, March 23, 2012
Making a stored procedure run on its own periodically.
I have a requirement to update a table in SQL Server through a stored
procedure,
the stored procedure should run periodically, say once in a month, and
update the table. I should not use any external programs to call the
stored procedure; is there any other way to do it without using an
external program?
Thanks in advance,
Rajan
Schedule a job that calls the stored procedure:
http://msdn.microsoft.com/library/e...tomate_4v1v.asp
--
David Portas
----
Please reply only to the newsgroup
--|||Thank you so much.
"David Portas" <REMOVE_BEFORE_REPLYING_dportas@.acm.org> wrote in message news:<x8GdnWECNf3zSFSiRVn-gw@.giganews.com>...
> Schedule a job that calls the stored procedure:
> http://msdn.microsoft.com/library/e...tomate_4v1v.asp
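The scheduled job David's link describes can also be created in T-SQL through the msdb procedures. A hedged sketch with invented job and procedure names, kept as command text the way other snippets on this page drive T-SQL from ADO.NET:

```csharp
public static class MonthlyJob
{
    // msdb calls that create a job, a T-SQL step, a monthly schedule,
    // and target the local server. Job/proc names are invented examples.
    public const string CreateJobSql = @"
EXEC msdb.dbo.sp_add_job @job_name = N'MonthlyTableUpdate';
EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'MonthlyTableUpdate',
    @step_name = N'Run proc',
    @subsystem = N'TSQL',
    @command = N'EXEC dbo.spUpdateMyTable';
EXEC msdb.dbo.sp_add_jobschedule
    @job_name = N'MonthlyTableUpdate',
    @name = N'Monthly',
    @freq_type = 16,          -- monthly
    @freq_interval = 1,       -- on day 1 of each month
    @active_start_time = 010000;
EXEC msdb.dbo.sp_add_jobserver @job_name = N'MonthlyTableUpdate';";
    // Run this batch once via SqlCommand.ExecuteNonQuery() against msdb;
    // SQL Agent then fires the proc on its own each month.
}
```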
Making a Stored Procedure in MS-SQL 2000
Sorry, this is a very easy question. I have to create a stored
procedure in a SQL SERVER Database.
I have read lots on MSDN about the syntax of this stored procedure :
ie.
CREATE PROCEDURE spGetContctDetails
AS
-- get everything out of contact table
SELECT * FROM tblContact
but I cannot find anywhere how to actually add this procedure, so that
it is accessable in my VB.NET program.
What I am asking is : what screen do I type this into? Is it Query
Analyzer, if so where?
It's got me oh so confused. When I've got time I'll read all about SQL
Server, but for this morning I need to know how to simply add that
Stored Procedure to my database.
Thanks LOADS for any help.
Type the CREATE PROCEDURE code in Query Analyzer. You can then execute the proc from VB using the ADO Command.Execute method, for example.
--
David Portas
SQL Server MVP
--
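A hedged .NET sketch of that answer, shown in C# for consistency with the rest of this page (the VB.NET equivalent is a direct translation; the connection string is a placeholder, and spGetContctDetails is the name from the post above):

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

public static class CallProc
{
    // Build a command that invokes the proc by name rather than as an
    // ad-hoc batch; the connection may be attached later.
    public static SqlCommand Build(SqlConnection conn)
    {
        var cmd = new SqlCommand("spGetContctDetails", conn);
        cmd.CommandType = CommandType.StoredProcedure;
        return cmd;
    }

    public static void Run()
    {
        // Placeholder connection string; point it at your own database.
        using (var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
        using (var cmd = Build(conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine(reader[0]);   // first column of tblContact
                }
            }
        }
    }
}
```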
Monday, March 19, 2012
make stored procedure inactive?
HI I have a large .net web application with many stored procedures. I am
thinking that I might have a few extra ones that are no longer used by the
application and was wondering if there is an easy way to make a stored
procedure not accessible by the web application as a way to test if it is
being used without deleting it?
Thanks
Paul G
Software engineer.
"Paul" <Paul@.discussions.microsoft.com> wrote in message
news:847134DF-FF89-48DF-8C6C-50A5720E9040@.microsoft.com...
> HI I have a large .net web application with many stored procedures. I am
> thinking that I might have a few extra ones that are no longer used by the
> application and was wondering if there is an easy way to make a stored
> procedure not accessable by the web application as a way to test if it is
> being used without deleting it?
> Thanks
> --
> Paul G
> Software engineer.
I would simply rename the object to something else.
Take a look at sp_rename in the BOL.
Rick Sawtell
MCT, MCSD, MCDBA
|||ok thanks will give it a try.
Paul G
Software engineer.
"Paul" wrote:
> HI I have a large .net web application with many stored procedures. I am
> thinking that I might have a few extra ones that are no longer used by the
> application and was wondering if there is an easy way to make a stored
> procedure not accessable by the web application as a way to test if it is
> being used without deleting it?
> Thanks
> --
> Paul G
> Software engineer.
|||Paul,
Revoke / Deny the execute permission from application database user.
Take a look into Revoke / Deny in books online.
Thanks
Hari
SQL Server MVP
"Paul" <Paul@.discussions.microsoft.com> wrote in message
news:05F662B0-1AB8-4629-8BD8-1F3AA1968CCE@.microsoft.com...[vbcol=seagreen]
> ok thanks will give it a try.
> --
> Paul G
> Software engineer.
>
> "Paul" wrote:
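Hari's Revoke/Deny suggestion as command text, runnable from any admin connection. A sketch only; the user and procedure names are invented examples:

```csharp
using System.Data.SqlClient;

public static class DenyProc
{
    // DENY overrides any GRANT, including one inherited through a role,
    // so the app login is blocked even if EXECUTE was granted elsewhere.
    public const string DenySql =
        "DENY EXECUTE ON dbo.spSuspectProc TO WebAppUser";

    // A later GRANT at the same scope removes the DENY and restores access.
    public const string RestoreSql =
        "GRANT EXECUTE ON dbo.spSuspectProc TO WebAppUser";

    public static void Deny(SqlConnection conn)
    {
        using (var cmd = new SqlCommand(DenySql, conn))
        {
            cmd.ExecuteNonQuery();  // the app now gets a permission-denied error
        }
    }
}
```

If the application keeps working after the DENY, the proc was probably dead code; restore the permission (or rename per Rick's suggestion) once you've decided.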
|||Ok thanks, I think this will work as I have set up a user and password that
the application uses, so I should be able to deny execution using Enterprise
Manager.
Paul G
Software engineer.
"Hari Prasad" wrote:
> Paul,
> Revoke / Deny the execute permission from application database user.
> Take a look into Revoke / Deny in books online.
> Thanks
> Hari
> SQL Server MVP
> "Paul" <Paul@.discussions.microsoft.com> wrote in message
> news:05F662B0-1AB8-4629-8BD8-1F3AA1968CCE@.microsoft.com...
>
>
make stored procedure inactive?
thinking that I might have a few extra ones that are no longer used by the
application and was wondering if there is an easy way to make a stored
procedure not accessable by the web application as a way to test if it is
being used without deleting it?
Thanks
--
Paul G
Software engineer."Paul" <Paul@.discussions.microsoft.com> wrote in message
news:847134DF-FF89-48DF-8C6C-50A5720E9040@.microsoft.com...
> HI I have a large .net web application with many stored procedures. I am
> thinking that I might have a few extra ones that are no longer used by the
> application and was wondering if there is an easy way to make a stored
> procedure not accessable by the web application as a way to test if it is
> being used without deleting it?
> Thanks
> --
> Paul G
> Software engineer.
I would simply rename the object to something else.
Take a look at sp_rename in the BOL.
Rick Sawtell
MCT, MCSD, MCDBA|||ok thanks will give it a try.
--
Paul G
Software engineer.
"Paul" wrote:
> HI I have a large .net web application with many stored procedures. I am
> thinking that I might have a few extra ones that are no longer used by the
> application and was wondering if there is an easy way to make a stored
> procedure not accessable by the web application as a way to test if it is
> being used without deleting it?
> Thanks
> --
> Paul G
> Software engineer.|||Paul,
Revoke / Deny the execute permission from application database user.
Take a look into Revoke / Deny in books online.
Thanks
Hari
SQL Server MVP
"Paul" <Paul@.discussions.microsoft.com> wrote in message
news:05F662B0-1AB8-4629-8BD8-1F3AA1968CCE@.microsoft.com...[vbcol=seagreen]
> ok thanks will give it a try.
> --
> Paul G
> Software engineer.
>
> "Paul" wrote:
>|||Ok thanks, I think this will work as I have set up a user and password that
the applications uses so should be able to deny execution using enterprise
manager.
--
Paul G
Software engineer.
"Hari Prasad" wrote:
> Paul,
> Revoke / Deny the execute permission from application database user.
> Take a look into Revoke / Deny in books online.
> Thanks
> Hari
> SQL Server MVP
> "Paul" <Paul@.discussions.microsoft.com> wrote in message
> news:05F662B0-1AB8-4629-8BD8-1F3AA1968CCE@.microsoft.com...
>
>
make stored procedure inactive?
Hi, I have a large .NET web application with many stored procedures. I am
thinking that I might have a few extra ones that are no longer used by the
application, and was wondering if there is an easy way to make a stored
procedure inaccessible to the web application, as a way to test whether it is
still being used, without deleting it?
Thanks
--
Paul G
Software engineer.
|||
I would simply rename the object to something else.
Take a look at sp_rename in the BOL.
Rick Sawtell
MCT, MCSD, MCDBA
|||
OK thanks, will give it a try.
--
Paul G
Software engineer.
"Paul" wrote:
> HI I have a large .net web application with many stored procedures. I am
> thinking that I might have a few extra ones that are no longer used by the
> application and was wondering if there is an easy way to make a stored
> procedure not accessable by the web application as a way to test if it is
> being used without deleting it?
> Thanks
> --
> Paul G
> Software engineer.|||Paul,
Revoke or deny the EXECUTE permission for the application's database user.
Take a look at REVOKE / DENY in Books Online.
Thanks
Hari
SQL Server MVP
"Paul" <Paul@.discussions.microsoft.com> wrote in message
news:05F662B0-1AB8-4629-8BD8-1F3AA1968CCE@.microsoft.com...
|||
OK thanks, I think this will work, as I have set up a user and password that
the application uses, so I should be able to deny execution using Enterprise
Manager.
--
Paul G
Software engineer.
"Hari Prasad" wrote:
> Paul,
> Revoke / Deny the execute permission from application database user.
> Take a look into Revoke / Deny in books online.
> Thanks
> Hari
> SQL Server MVP
> "Paul" <Paul@.discussions.microsoft.com> wrote in message
> news:05F662B0-1AB8-4629-8BD8-1F3AA1968CCE@.microsoft.com...
> > ok thanks will give it a try.
> > --
> > Paul G
> > Software engineer.
> >
> >
> > "Paul" wrote:
> >
> >> HI I have a large .net web application with many stored procedures. I am
> >> thinking that I might have a few extra ones that are no longer used by
> >> the
> >> application and was wondering if there is an easy way to make a stored
> >> procedure not accessable by the web application as a way to test if it is
> >> being used without deleting it?
> >> Thanks
> >> --
> >> Paul G
> >> Software engineer.
>
>
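Both suggestions in this thread can be sketched in T-SQL. The object and user
names below are hypothetical placeholders, not from the original posts:

```sql
-- Option 1: deny execute to the application's database user.
-- The procedure stays in place; the app just can't call it.
DENY EXECUTE ON dbo.usp_GetOrders TO WebAppUser;

-- If nothing breaks after a while, the proc is probably unused.
-- Restore access at any time with:
GRANT EXECUTE ON dbo.usp_GetOrders TO WebAppUser;

-- Option 2: rename the object (Rick's suggestion) and watch for failures.
EXEC sp_rename 'dbo.usp_GetOrders', 'usp_GetOrders_retired';
```

The DENY approach is usually safer than renaming, since it can be reversed
without touching any objects that might reference the procedure by name.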
Make Stored Proc Faster
ALTER PROCEDURE sproc_ReturnAvailability
    @ExtractDate DateTime,
    @DateFrom DateTime,
    @DateTo DateTime,
    @96hrPlusFlag int,
    @AppointmentsCount int OUTPUT
AS
IF @96hrPlusFlag = 0
BEGIN
    SELECT @AppointmentsCount = COUNT(tbl_SurgerySlot.SurgerySlotKey)
    FROM tbl_SurgerySlot
    INNER JOIN tbl_SurgerySlotDescription
        ON tbl_SurgerySlot.Label = tbl_SurgerySlotDescription.Label
       AND tbl_SurgerySlot.PracticeCode = tbl_SurgerySlotDescription.PracticeCode
    WHERE tbl_SurgerySlot.ExtractDate = @ExtractDate
      AND tbl_SurgerySlot.StartTime BETWEEN @DateFrom AND @DateTo
      AND tbl_SurgerySlotDescription.NormalBookable = 1
      AND tbl_SurgerySlot.SurgerySlotKey NOT IN (
            SELECT tbl_Appointment.SurgerySlotKey
            FROM tbl_Appointment
            WHERE tbl_Appointment.ExtractDate = @ExtractDate
              AND tbl_Appointment.Deleted = 0
              AND tbl_Appointment.Cancelled = 0
      )
END
ELSE IF @96hrPlusFlag = 1
BEGIN
    SELECT @AppointmentsCount = COUNT(tbl_SurgerySlot.SurgerySlotKey)
    FROM tbl_SurgerySlot
    INNER JOIN tbl_SurgerySlotDescription
        ON tbl_SurgerySlot.Label = tbl_SurgerySlotDescription.Label
       AND tbl_SurgerySlot.PracticeCode = tbl_SurgerySlotDescription.PracticeCode
    WHERE tbl_SurgerySlot.ExtractDate = @ExtractDate
      AND tbl_SurgerySlot.StartTime > @DateTo
      AND tbl_SurgerySlotDescription.NormalBookable = 1
      AND tbl_SurgerySlot.SurgerySlotKey NOT IN (
            SELECT tbl_Appointment.SurgerySlotKey
            FROM tbl_Appointment
            WHERE tbl_Appointment.ExtractDate = @ExtractDate
              AND tbl_Appointment.Deleted = 0
              AND tbl_Appointment.Cancelled = 0
      )
END
Cheers...
|||
You need to LEFT OUTER JOIN tbl_Appointment rather than use NOT IN on it, and
in the WHERE clause keep only the rows with no matching appointment by
filtering on tbl_Appointment.SurgerySlotKey IS NULL.
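The anti-join rewrite might look like the following. This is an untested
sketch for the first branch only, using the same tables and parameters as the
procedure above, with aliases added for brevity:

```sql
-- LEFT JOIN to the appointments, pushing the appointment filters into the
-- join condition, then keep only slots with no live appointment (IS NULL).
SELECT @AppointmentsCount = COUNT(ss.SurgerySlotKey)
FROM tbl_SurgerySlot AS ss
INNER JOIN tbl_SurgerySlotDescription AS ssd
    ON ss.Label = ssd.Label
   AND ss.PracticeCode = ssd.PracticeCode
LEFT OUTER JOIN tbl_Appointment AS a
    ON a.SurgerySlotKey = ss.SurgerySlotKey
   AND a.ExtractDate = @ExtractDate
   AND a.Deleted = 0
   AND a.Cancelled = 0
WHERE ss.ExtractDate = @ExtractDate
  AND ss.StartTime BETWEEN @DateFrom AND @DateTo
  AND ssd.NormalBookable = 1
  AND a.SurgerySlotKey IS NULL   -- unmatched rows = slots with no appointment
```

Note the filters on tbl_Appointment must live in the ON clause, not the WHERE
clause; moving them to WHERE would turn the outer join back into an inner one.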
Make SQL Server distinguish between uppercase and lowercase characters in a stored procedure
Use the COLLATE clause and change
WHERE A = B
to
WHERE A COLLATE SQL_Latin1_General_Cp850_CS_AS = B COLLATE SQL_Latin1_General_Cp850_CS_AS
This assumes that SQL_Latin1_General_Cp850_CI_AS is your normal collation. The change in collation is effective only for the scope of the one clause.
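A quick way to see the effect, assuming your default collation is
case-insensitive (the literals and collation name here just mirror the
suggestion above):

```sql
-- Under a case-insensitive default collation this compares equal:
SELECT CASE WHEN 'abc' = 'ABC'
            THEN 'equal' ELSE 'different' END

-- Forcing a case-sensitive collation on both sides makes it differ:
SELECT CASE WHEN 'abc' COLLATE SQL_Latin1_General_Cp850_CS_AS
             = 'ABC' COLLATE SQL_Latin1_General_Cp850_CS_AS
            THEN 'equal' ELSE 'different' END
```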
|||
I adopted a different approach.
I converted the values to varbinary(50) using the CONVERT function. My original column was varchar(50).
For example, if the column1 value was being compared with the @column1 parameter, then the following helped me achieve my goal:
CONVERT(varbinary(50), column1) = CONVERT(varbinary(50), @column1)
My only question is: if I have a 50-character varchar, should I convert it to varbinary(50)?
The advantage of this is that I don't need to disturb the existing collation setting on the SQL Server 2000 instance.
|||
I ran
DECLARE @VAR VARCHAR(50)
SET @VAR = '12345'
PRINT DATALENGTH(@VAR)
PRINT DATALENGTH(CONVERT(VARBINARY(50), @VAR))
and got
5
5
which shows that the lengths can be the same.
>The advantage of this is that I don't need to disturb the existing collation setting on SQL Server 2000 instance.
The COLLATE clause used as I describe does not change the collation of the database; it merely changes the collation under which that one comparison is done.
make more space available for store proc editing
when editing stored proc from EM, the screen estate is vastly wasted. Some
of us don't care, some of us do care about this. For those who don't care,
they should not have much to object the idea of making more screen space
accessible. So if you folks in the future could reduce the blank space and
convert it to space displaying code, I am one of those who would applaud.
Quentin
|||
> when editing stored proc from EM, the screen estate is vastly wasted. Some
> of us don't care, some of us do care about this.
Use Query Analyzer. Press F8, you can right-click a stored procedure and
hit Edit (or script to new window as create, if you like to drop/create all
your objects).
You can even run only sections of code from this interface (by highlighting
only the relevant portions), test and debug, and - wow - you aren't stuck in
modal mode, so you can actually look at other stored procedures, other
servers, tables, etc. while you are working on a proc.
Enterprise Manager is not really meant for development, but more for
Enterprise Management, hence the name.
A
|||
Thanks Aaron.
I'm thinking making more space available for EM stored proc editing does not
hurt anyone, and some would benefit from it.
Quentin
"Aaron Bertrand - MVP" <aaron@.TRASHaspfaq.com> wrote in message
news:OQdu1L4VDHA.1784@.TK2MSFTNGP10.phx.gbl...
> > when editing stored proc from EM, the screen estate is vastly wasted.
> Some
> > of us don't care, some of us do care about this.
> Use Query Analyzer. Press F8, you can right-click a stored procedure and
> hit Edit (or script to new window as create, if you like to drop/create
all
> your objects).
> You can even run only sections of code from this interface (by
highlighting
> only the relevant portions), test and debug, and - wow - you aren't stuck
in
> modal mode, so you can actually look at other stored procedures, other
> servers, tables, etc. while you are working on a proc.
> Enterprise Manager is not really meant for development, but more for
> Enterprise Management, hence the name.
> A
>|||> I'm thinking making more space available for EM stored proc editing does
not
> hurt anyone and some benefit from it.
I understand, however I'm suggesting a workaround, because you're not going
to see any changes to the client tools in SQL Server 7.0/2000.
Make Filter = False
with a solution that might work for you:
http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&threadm=ONnNRtsBEHA.3064%40tk2msftngp13.phx.gbl&rnum=2&prev=/groups%3Fq%3D%2522in%2Bclause%2522%2Bgroup:microsoft.public.sqlserver.reportingsvcs%26hl%3Den%26lr%3D%26ie%3DUTF-8%26selm%3DONnNRtsBEHA.3064%2540tk2msftngp13.phx.gbl%26rnum%3D2
Ravi Mumulla (Microsoft)
SQL Server Reporting Services
This posting is provided "AS IS" with no warranties, and confers no rights.
"vrodkar" <vrodkar@.discussions.microsoft.com> wrote in message
news:A6CBA50D-2C8F-4A97-B925-CE5DE0378347@.microsoft.com...
> I have a stored proc to return the main report data. I have another
> dataset1 to return the distinct values for my parameter. I filter the main
> data based on the parameter selected by the user. I wanted to add an 'ALL'
> option to the parameter drop-down. I have added a UNION to dataset1 to
> include this option. I now want to change my filter expression from
> '=Fields!FRole.Value = Parameters!PRole.Value' to include the ALL option
> and basically ignore the filter. Is it possible?
|||
I'm curious on this as well, as I am also trying to
implement this on a report. Has anyone found a
workaround?
|||
Yes, I use an "(All)" option in most of my reports.
It's easier if you're using queries instead of stored procedures.
In your parameter list, have an item labelled "(All)" and give it a Value of "%".
In your main data query, have criteria or a WHERE clause using the LIKE
operator against the parameter, so in SQL:
SELECT * FROM tblData WHERE Country LIKE @Country
% is the SQL wildcard character, but it must be used with the LIKE operator.
Regards
Chris McGuigan
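Chris's pattern can be sketched end-to-end. The table and column names below
(tblData, Country) follow his example and are placeholders for your own:

```sql
-- Parameter dataset: the real values plus an "(All)" row whose value is '%'.
SELECT '(All)' AS CountryLabel, '%' AS CountryValue
UNION ALL
SELECT DISTINCT Country, Country
FROM tblData

-- Main dataset: LIKE '%' matches every country when "(All)" is selected.
SELECT *
FROM tblData
WHERE Country LIKE @Country
```

One caveat: LIKE '%' does not match NULL, so rows with a NULL Country would
drop out even under the "(All)" selection; add an OR Country IS NULL if those
rows should be included.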
"BiggieSize" wrote:
> I'm curious on this as well as I am also trying to
> implement this on a report. Has anyone founnd a
> workaround?
> >--Original Message--
> >I have a stored proc to return the main report data. I
> have another dataset1 to return the distinct values for
> my parameter. I filter the main data based on the
> parameter selected by user. I wanted to add 'ALL' option
> to the parameter drop down. I have added an UNION to the
> dataset1 to include this option. I now want to change my
> filter expresion from '=Fields!FRole.Value => Parameters!PRole.Value' to include ALL option and
> basically ignore the filter. Is it possible
> >.
> >
>
make a stored procedure
Hi,
In my table I have a datetime field.
Now I want to delete all records that are more than 1 hour old.
Can someone help me with this?
Alvin
Hi,
Here is a possible solution:
delete from tableName
where datetimeField < dateadd(hour, -1, getdate())
Greetz,
Geert
Geert Verhoeven
Consultant @. Ausy Belgium
My Personal Blog
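Since the question asks for a stored procedure, Geert's DELETE could be
wrapped like this; tableName and datetimeField are placeholders for the real
table and column:

```sql
CREATE PROCEDURE dbo.usp_PurgeOldRecords
AS
BEGIN
    -- Remove every row whose timestamp is more than 1 hour in the past.
    DELETE FROM tableName
    WHERE datetimeField < DATEADD(hour, -1, GETDATE())
END
```

It could then be scheduled as a job step (EXEC dbo.usp_PurgeOldRecords) to
keep the table trimmed automatically.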
Monday, March 12, 2012
Make a SELECT DISTINCT in Reporting Services
I have a stored procedure that can't be modified.
It returns data like this (the real one is bigger and more complex, of
course):
A ...
A ...
A ...
B ...
B ...
B ...
C ...
C ...
I want to make a table with only theses entries :
A
B
C
How can I do it in Reporting Services? I have tested
=Distinct(Fields!Qu_HA_LIB.Value, "GESCCNS")
but Distinct does not seem to be recognized by RS 2000.
Thank you in advance
Massanu
|||
Can somebody help me, please...
Make a Job continue after Primary Key Violation?
I have a stored procedure that processes new incoming data. During this
procedure I have many INSERT commands that may result in a duplicate key
violation.
This kind of error should not stop the process; the algorithm should continue.
When I run it manually (EXEC SP1 ...) it behaves OK and continues even when
a primary key violation occurs.
BUT, when I run it via a job (single step) it quits on the 1st error!!!
WHY?
How can I force the job/SP to continue running after data errors like key
violations?
Thanks,
rafi
|||
There are errors in SQL that are not trappable. Erland has some good writing
that you should take a look at:
http://www.sommarskog.se/error-handling-I.html
-oj
"Rafi" <Rafi@.discussions.microsoft.com> wrote in message
news:93C834D3-71DA-48AF-B2C6-2151EC06F136@.microsoft.com...
>i have a stored procedure that processes new incoming data. during this
> procedure i have many INSERT commands that may result with a duplicate key
> violation.
> this kind of error should not stop the process and the algorithm continue.
> when i run it manually ( EXEC SP1 ...) it behaves ok and continue even
> when
> primary key violation occur.
> BUT, when i run it via a job (single step) it quits on the 1st error!!!
> WHY ?
> how can i force the job/SP to continue running after data errors like key
> violations '
> thanks
> rafi|||Hi
Why not add a WHERE clause to the INSERT statement to check that the PK value
does not exist? If it does exist, you may want to log the fact.
It may not be a good idea to SET XACT_ABORT OFF, as you may miss something
that is important.
John
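John's suggestion can be sketched as follows; the table and column names
(TargetTable, StagingTable, KeyCol, DataCol) are hypothetical stand-ins for
the real schema:

```sql
-- Insert only the staged rows whose key is not already in the target,
-- so the INSERT can never raise a duplicate-key error.
INSERT INTO dbo.TargetTable (KeyCol, DataCol)
SELECT s.KeyCol, s.DataCol
FROM dbo.StagingTable AS s
WHERE NOT EXISTS (
    SELECT 1
    FROM dbo.TargetTable AS t
    WHERE t.KeyCol = s.KeyCol
)
```

The skipped keys could also be logged from the same staging set by inverting
the NOT EXISTS into an EXISTS.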
"Rafi" wrote:
> i have a stored procedure that processes new incoming data. during this
> procedure i have many INSERT commands that may result with a duplicate key
> violation.
> this kind of error should not stop the process and the algorithm continue.
> when i run it manually ( EXEC SP1 ...) it behaves ok and continue even whe
n
> primary key violation occur.
> BUT, when i run it via a job (single step) it quits on the 1st error!!!
> WHY ?
> how can i force the job/SP to continue running after data errors like key
> violations '
> thanks
> rafi