I found a cool set of maintenance scripts at the following:
http://blog.hundhausen.com/CoolDBAAutomationJobs.aspx
I am only using the Index maintenance jobs
SQLIndexDefragAll
SQLUpdateStatistics
SqlUpdateUsageAll
SQLDBCCAll
I am running SQL 2000 enterprise, currently sp3a. Some of my servers
are heavy used production boxes and I need to be able to manage the
indexes in a systematic way.
My questions are:
If I run this nightly
1: Should I update all statistics or just on the tables I have
defragged?
2: If only on the defragged tables, should I make another job to
update statistics for all user databases weekly/monthly/not at all?
3: How often should I update usage
4: How often should I run CheckDB, and with what options?
Thanks!

> http://blog.hundhausen.com/CoolDBAAutomationJobs.aspx
Interesting stuff... good for lazy DBAs. Please do not get me wrong :), a
good DBA is a lazy DBA, one who does not do a lot of stuff manually :). I like
lazy DBAs very much.
> I am only using the Index maintenance jobs
> SQLIndexDefragAll
> SQLUpdateStatistics
> SqlUpdateUsageAll
> SQLDBCCAll
>
> I am running SQL 2000 enterprise, currently sp3a. Some of my servers
Go to SP4 if you can and your applications can handle it. Also watch out
for index scans in your maintenance plans when the data types in your WHERE
clauses do not match the column types.
> are heavy used production boxes and I need to be able to manage the
> indexes in a systematic way.
> My questions are:
> If I run this nightly
> 1: Should I update all statistics or just on the tables I have
> defragged?
The question begins at: should I defrag at all, what, and when? "When" is
actually very important.
The procs make a DBCC SHOWCONTIG call to suggest what needs to be
defragmented... but as far as I can see, for SQL 2000 they do not use any of
the output of that call.
SQLDBIndexDefragExclusions allows you to exclude an entire database, not a
table, from what I can see... to be honest, I did not understand how to
implement defragmenting a particular subset of tables with these jobs; I did
not find functionality in the sp code that would allow me to do that.
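If you want the defrag to be driven by actual fragmentation numbers, you can do it by hand. A minimal sketch (the table name `Orders` and the 80% scan-density threshold are my placeholders, not something the jobs use):

```sql
-- Look at fragmentation for one table's indexes; the "Scan Density"
-- figure in the output is the usual thing to watch.
-- (Add WITH TABLERESULTS if you want the numbers as a rowset you can filter.)
DBCC SHOWCONTIG ('Orders') WITH ALL_INDEXES

-- If scan density has dropped (say below 80%), defragment that index online.
-- First argument 0 means "current database"; 1 is the clustered index id.
DBCC INDEXDEFRAG (0, 'Orders', 1)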
After all, the sp will run DBCC INDEXDEFRAG. Pretty much, if you run this at
the wrong time you end up with a bunch of data in your transaction logs but
will not accomplish anything. Online defragmentation is a low-priority
operation: if a page it is trying to access is locked or in use, it will
leave it alone. That means running this while your data are being modified is
not going to do you much good.
It will not move data between files either, so if you have a process that
modifies data in big chunks and the index is not in one file, you may want to
consider different maintenance when one file is busy and another is empty.
There can be plenty of examples. The strategy depends on what your data look
like, how they are accessed, and what your workload patterns are.
And plenty depends on your database configuration (auto update statistics,
for example). Right now there is more unknown about your system for an advice
than known; pretty much nothing except "heavily used", which can mean many
different things (the OLTP/OLAP/DSS difference, for example, etc.).
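For the configuration side, you can see what your databases are already doing for themselves. A quick check (the database name is a placeholder):

```sql
-- Is the database maintaining its own statistics automatically?
SELECT DATABASEPROPERTYEX('YourDb', 'IsAutoCreateStatistics') AS AutoCreate,
       DATABASEPROPERTYEX('YourDb', 'IsAutoUpdateStatistics') AS AutoUpdate

-- The older way, which also works on SQL 2000
EXEC sp_dboption 'YourDb', 'auto update statistics'
```

If auto update is on, a nightly blanket UPDATE STATISTICS may be doing much less for you than you think.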
> 2: If only on the Defragged tables, should I make another job to
> update statistics for all user databases Weekly/monthly/not at all?
Same thing here: it depends first of all on the workload on your databases.
It can be that a database is heavily used, but mostly for reports, and the
data are loaded in there, let's say, once every 2 weeks. What is the point of
maintaining the statistics and defragmenting there every day? Correct, it
does not make sense. My point is: the maintenance is defined by the
particular database/system and the environment it is in (business
rules/requirements included in the consideration).
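When you do decide to update statistics, the two usual options look like this (the table name is a placeholder):

```sql
-- Update statistics on just the table you defragmented.
-- FULLSCAN reads every row: the expensive but accurate option.
UPDATE STATISTICS Orders WITH FULLSCAN

-- Or refresh statistics database-wide; on SQL 2000 sp_updatestats runs
-- UPDATE STATISTICS over every user table with the default sampling rate.
EXEC sp_updatestats
```

Targeted FULLSCAN on the hot tables after a defrag, plus a less frequent database-wide pass, is a common compromise.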
> 3: How often should I update usage
Depends on your maintenance schedule and what you can afford to do.
It also depends on how exactly you are going to use this usage data: what do
you need it for?
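For reference, this is what the usage job boils down to (database name is a placeholder):

```sql
-- Correct the page and row counts in sysindexes, which is what
-- sp_spaceused reports from. COUNT_ROWS also fixes row counts,
-- which makes the command noticeably slower on big tables.
DBCC UPDATEUSAGE ('YourDb') WITH COUNT_ROWS, NO_INFOMSGS
```

If nothing you run depends on sp_spaceused-style numbers being exact, weekly or even monthly is usually plenty.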
> 4: How often should I run CheckDB, and with what options?
I'd say daily: if your maintenance window is big enough to allow it, then do
it.
It is production; the sooner you know that your data are bad, the better the
chance you have to correct it, if that is possible at all.
Whether you can afford it is another big question. It may be that you
actually cannot. Your memory pool is going to suffer (I/O will be high,
granted, and whatever was in memory before the DBCC may not be there anymore,
so it will hit your performance; the question is how fast your system can
recover and perform to your liking). So you need to decide whether you can
run such a thing. VLDBs are a different story: a lot of data flies into
memory, literally. Whatever you run, make it the last maintenance task for
the given database, because you want to find out that your data are
corrupted after your reindex/defrag etc., not before that... no vendor is
perfect :)
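As for options, a common split on SQL 2000 is a cheap daily check and a full one when the window allows (database name is a placeholder):

```sql
-- Lighter daily check: physical page and allocation consistency only,
-- skips the expensive logical checks.
DBCC CHECKDB ('YourDb') WITH PHYSICAL_ONLY, NO_INFOMSGS

-- The full logical check, for the weekend window.
DBCC CHECKDB ('YourDb') WITH NO_INFOMSGS
```

NO_INFOMSGS just keeps the job output readable; it does not change what is checked.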
--
Thanks, Liliya