Non-geeks may want to look away now.
I've never liked having lots of little logs, for the pettiest of reasons: as a DBA, I like to see my entire onstat -l output on one screen. However, there are a couple of more important reasons:
1. Getting into an LTX_HWM / LTX_EHWM / LONGTX state is a function of the size of the individual logical log (which is also a key reason not to mix different-sized logs).
2. If you are depending on dynamic logging to get you out of this state, a too-small logical log means you run the risk that the engine will not be able to allocate the additional log(s) before the current log fills, and you get hung up anyway.
You have to balance this against the recoverability you give up if a massive disk failure strikes while your logs are too big, so don't be tempted to have four logs of 1 GB each. But in general, it's better to have 10, 20, or at most 100 biggish logical logs than 1000 little ones.
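The trade-off above can be sketched numerically. This is a toy calculation only, assuming equal-sized logs and an illustrative 50% high-water mark; it is not a statement of the engine's internals, just arithmetic showing why the same total log space behaves differently when carved into many small logs versus a few big ones:

```python
# Toy sketch: same total logical-log space, different log counts.
# The 50% threshold stands in for an LTX_HWM-style high-water mark;
# the function and its numbers are illustrative, not Informix internals.

def logs_until_hwm(total_space_kb, num_logs, hwm_pct=50):
    """Return (whole logs a long transaction can span before crossing
    the high-water mark, size of each log in KB), assuming equal sizes."""
    log_size_kb = total_space_kb / num_logs
    hwm_kb = total_space_kb * hwm_pct / 100
    return int(hwm_kb // log_size_kb), log_size_kb

# 2 GB of total log space, carved up two ways:
few, size_few = logs_until_hwm(2_000_000, 20)      # 20 logs of ~100 MB
many, size_many = logs_until_hwm(2_000_000, 1000)  # 1000 logs of ~2 MB

# The high-water mark sits at the same byte offset either way, but each
# small log fills (and must be archived or dynamically extended) far
# more often on the way there.
print(few, many)  # -> 10 500
```

The point is not the numbers themselves but the frequency of log-full events: with 1000 tiny logs you cross 500 log switches before hitting the mark, each one a chance for archiving or dynamic allocation to fall behind.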
2 comments:
Balance the simplicity of fewer, larger logs with the safety issue that smaller logs fill faster and so will be archived more frequently if you are using continuous or ALARMPROGRAM-based logical log archiving. Also, to your argument that small logs might run out before additional logs can be allocated, I'd respond that a larger log takes longer to allocate, so that one is a toss-up to me. Small or large, either the auto-allocation of new logs will keep up with the filling of the remaining logs or it won't; size doesn't matter to this issue.
I did balance the simplicity versus safety issue, but this is a known bug. :o)