Showing posts with label pull. Show all posts

Monday, March 26, 2012

Making replication more reliable

We have about 90 sites doing merge replication with pull subscriptions over
a DSL VPN.
Replication often stalls following a dropped connection. Normally,
restarting SQL Server Agent at the subscriber gets things moving again.
Any good suggestions on a way to perform this restart? We can't really do
it from within our user application, as the users don't have sufficient privilege.
Is there an easy way to do it from the publisher site?
Also, are we making things worse for ourselves by running the merge agent
continuously?
Tony Toker
Data Identic Ltd.
Either schedule your agents to run every 10 minutes, or set up your job so
that on failure it wraps around and starts job step 1 again.
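The wrap-around option can be set up against the subscriber's msdb with something like this. It's only a sketch: the job name is made up, and the step numbers assume the default merge agent job layout (startup message, run agent, shutdown detection), so check your own job first.

```sql
-- Sketch: make the merge agent job restart itself on failure.
-- Job name is hypothetical; verify the step layout of your own job.
EXEC msdb.dbo.sp_update_jobstep
    @job_name = N'MYPUB-MergeAgent',  -- hypothetical merge agent job name
    @step_id = 3,                     -- usually the shutdown-detection step
    @on_fail_action = 4,              -- 4 = go to a specific step
    @on_fail_step_id = 1;             -- wrap back to step 1
```

The effect is that a dropped connection no longer leaves the job in a failed state; the agent simply starts over on the next cycle.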
Hilary Cotter
Looking for a book on SQL Server replication?
http://www.nwsu.com/0974973602.html
"Tony Toker" <xyzzy@.identic.co.uk> wrote in message
news:cgi1vo$p1d$1$830fa795@.news.demon.co.uk...
> We have about 90 sites doing merge replication with pull subscriptions over
> a DSL VPN.
> Replication often stalls following a dropped connection. Normally,
> restarting SQL Server Agent at the subscriber gets things moving again.
> Any good suggestions on a way to perform this restart? We can't really do
> it from within our user application, as the users don't have sufficient
> privilege.
> Is there an easy way to do it from the publisher site?
> Also, are we making things worse for ourselves by running the merge agent
> continuously?
> Tony Toker
> Data Identic Ltd.
>
|||Thanks for the tip.
Some of the agents don't actually fail, but run indefinitely.
I'll look at checking connectivity and then running replmerge, or the ActiveX
controls (which aren't currently deployed), from our application to sync
when necessary.
This will be a stupid question whatever the answer, but would replmerge work
with a push subscription, i.e. can you initiate synchronization of a push
subscription at the subscriber?
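For the record, running the merge agent on demand from a command line looks something like this (replmrg.exe is the merge agent executable on SQL Server 2000; all server, database, and publication names here are hypothetical):

```
replmrg.exe -Publisher PUBSERVER -PublisherDB PubDb ^
    -Publication MyPublication ^
    -Subscriber %COMPUTERNAME% -SubscriberDB SubDb ^
    -Distributor PUBSERVER ^
    -SubscriptionType 1 -SubscriberSecurityMode 1
```

`-SubscriptionType 1` indicates a pull subscription, and security mode 1 means Windows authentication; the exact parameters needed depend on how the subscription was set up.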
Thanks for all your help on here; you need to get that book out and start
earning from your advice!
Tony
"Hilary Cotter" <hilaryk@.att.net> wrote in message
news:uZzHfRqiEHA.3428@.TK2MSFTNGP11.phx.gbl...
> either schedule your agents to run every 10 minutes, or set up your job so
> that on job failure it wraps around and starts job step 1 again.
> --
> Hilary Cotter
> Looking for a book on SQL Server replication?
> http://www.nwsu.com/0974973602.html
>
> "Tony Toker" <xyzzy@.identic.co.uk> wrote in message
> news:cgi1vo$p1d$1$830fa795@.news.demon.co.uk...

Wednesday, March 7, 2012

Maintenance required after deploying merge pull replication?

Hi,
Before I ask for input, I just want to thank everyone in this
newsgroup for being so helpful... I've really gotten through some
complicated issues thanks to the information here. People like Hilary
are always coming through.
I am deploying a solution that depends on Continuous Merge Pull
replication... I can configure the systems of my customers, but once I
deploy them, I won't have direct administrative access to their copy
of MSDE which contains a pull subscription to our central database.
If needed, I could write an application to interface with MSDE through
the ActiveX controls...
What sort of control am I going to need so that I can always be sure
they are in sync? How can I make sure that if the connection is
dropped, or a new article is added, that they automatically keep
retrying until the process works?
In general, what problems do you think I might have after
deploying this sort of solution? Is it even workable without being
able to administer it remotely?
Thanks again
josh
Josh,
the agent failure/retry/success alerts should take care of some of these
concerns, provided you can configure email integration in your MSDEs.
To ensure the data is correctly transferred, occasional validation could be
used.
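Validation can be requested at the publisher for all subscriptions to a publication; a minimal sketch, assuming a hypothetical publication name:

```sql
-- Sketch: mark all subscriptions to a merge publication for
-- validation (run at the publisher, in the published database).
EXEC sp_validatemergepublication
    @publication = N'MyPublication',  -- hypothetical name
    @level = 3;  -- 3 = rowcount and binary checksum validation
-- Results show up in each merge agent's history on its next run.
```

Validation failures then raise the standard replication alerts, which is what makes the email-alert approach above workable.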
To ensure that the merge agent keeps retrying in the event of a network
failure, you can force it to work in an infinite loop (step 3 of the merge
agent's job can have both its 'on success' and 'on failure' actions go back
to step 2).
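In T-SQL, that loop looks something like this (the job name is hypothetical, and the step numbers assume the default merge agent job layout where step 2 runs the agent):

```sql
-- Sketch of the infinite-loop setup: step 3 always returns to
-- step 2, so the agent restarts whether it succeeded or failed.
EXEC msdb.dbo.sp_update_jobstep
    @job_name = N'MYPUB-MergeAgent',  -- hypothetical job name
    @step_id = 3,
    @on_success_action = 4, @on_success_step_id = 2,  -- 4 = go to step
    @on_fail_action = 4, @on_fail_step_id = 2;
```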
HTH,
Paul Ibison
|||So with an infinite loop trying to sync, will I also be covered for
any of the issues that normally cause the agent to stop?
For instance, replicating schema changes and new articles typically
requires a new snapshot, so the merge agent just stops itself (even in
continuous mode) until an up-to-date snapshot is posted. Will the
loop allow it to keep checking this?
Basically, I have no administrative access over the MSDE once it is
deployed (since it is on a dynamic IP), and I need to be sure there is
nothing that can cause sync to fail.
I suppose if the MSDE is on a dynamic IP, there is no good way to
configure it after deployment. Perhaps using replication is not the best
choice in this scenario...
Any input appreciated.
Thanks again,
josh
"Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message news:<eTsOlvAYEHA.3476@.tk2msftngp13.phx.gbl>...
> Josh,
> the agent failure/retry/success alerts should take care of some of these
> concerns, provided you can configure email integration in your MSDEs.
> To ensure the data is correctly transferred, occasional validation could be
> used.
> To ensure that the merge agent keeps retrying in the event of a network
> failure, you can force it to work in an infinite loop (step 3 of the merge
> agent's job step can have 'on success' and 'on failure' going back to step
> 2).
> HTH,
> Paul Ibison
|||Josh,
the loop will protect against network issues, but not against adding new
articles. When an article is added, you'll need to run the snapshot agent
immediately to ensure the merge agent doesn't stop. You could have a step
in the snapshot agent job (using sp_start_job) which restarts the merge
agent if necessary.
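That extra step could be appended to the snapshot agent job roughly like this (both job names are hypothetical):

```sql
-- Sketch: add a final step to the snapshot agent job that
-- (re)starts the merge agent job after a new snapshot is made.
EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'MYPUB-SnapshotAgent',   -- hypothetical job name
    @step_name = N'Restart merge agent',
    @subsystem = N'TSQL',
    @command = N'EXEC msdb.dbo.sp_start_job
                     @job_name = N''MYPUB-MergeAgent'';';
```

Note that sp_start_job raises an error if the target job is already running, so the step may need its own on-fail handling if the merge agent is usually in its loop.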
Not too sure about the dynamic IP bit. The NetBIOS name is normally used
rather than the IP address to connect using Enterprise Manager. If you are
replicating to a subscriber on a non-trusted domain or over the internet,
then I can see your problem.
The thought that you could no longer monitor the subscriber concerns me,
and in my opinion this will become a problem as time goes by. Perhaps
you could have the subscriber run a job on a schedule which posts its IP
address to the publisher, so you have the option of troubleshooting if
necessary.
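A heartbeat job at the subscriber might look something like the sketch below. The linked server name, the heartbeat table, and the use of xp_cmdshell to read the address are all assumptions; the address parsing assumes the old "IP Address" line format of ipconfig on that era of Windows.

```sql
-- Sketch: scheduled subscriber job that posts a heartbeat row,
-- including the machine's current address, to a table at the
-- publisher over a linked server. All names are hypothetical.
DECLARE @ip nvarchar(255);
CREATE TABLE #out (line nvarchar(255));
INSERT #out EXEC master.dbo.xp_cmdshell 'ipconfig';
SELECT TOP 1 @ip = LTRIM(line)
FROM #out
WHERE line LIKE '%IP Address%';
INSERT INTO PUBSERVER.AdminDb.dbo.SubscriberHeartbeat
    (subscriber_name, reported_address, reported_at)
VALUES (HOST_NAME(), @ip, GETDATE());
DROP TABLE #out;
```

The publisher then has a table of last-seen addresses and timestamps, which also doubles as a crude "which subscribers have gone quiet" report.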
Regards,
Paul Ibison