| Title: | *OLD* ALL-IN-1 (tm) Support Conference |
| Notice: | Closed - See Note 4331.1 to move to IOSG::ALL-IN-1 |
| Moderator: | IOSG::PYE |
| Created: | Thu Jan 30 1992 |
| Last Modified: | Tue Jan 23 1996 |
| Last Successful Update: | Fri Jun 06 1997 |
| Number of topics: | 4343 |
| Total number of notes: | 18308 |
Another customer site is having trouble with TRM. It is taking
extraordinarily long on OA$SHARE. I want to run this without shutting
down ALL-IN-1 so I can see how long it actually takes. I would like to
run TRM this way on all 5 of the mail areas, one at a time on different
nights. The jobs would start at midnight.
Since ALL-IN-1 will be up while TRM is running, what will not get
processed? For example, in TRU and EW, accounts that are logged in do
not get processed.
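For what it's worth, a nightly midnight run like that would normally be done with a DCL SUBMIT using /AFTER. This is only a sketch; the command file and queue names (RUN_TRM.COM, SYS$BATCH) are placeholders, not anything from this conference:

```dcl
$! Hypothetical sketch: submit the TRM job to start at the next
$! midnight. In VMS time format, "TOMORROW" means 00:00 of the
$! following day. Resubmit each night with a different mail area.
$ SUBMIT /QUEUE=SYS$BATCH /AFTER="TOMORROW" RUN_TRM.COM
```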
thanks ! ann !
| T.R | Title | User | Personal Name | Date | Lines |
|---|---|---|---|---|---|
| 1612.1 | | IOSG::MAURICE | See below | Thu Oct 15 1992 10:17 | 24 |
Hi,
TRM will not run with ALL-IN-1 up. I'd give that one up if I were you.
TRM does have an option to run on each of the 5 mail areas, but this is
only useful if you are expecting long repair times. In order to
calculate the usage counts for OA$SHARE, say, it has to read all the
records in the other shared areas for attachments.
So let's address the problem of TRM taking too long. Here are some
things to do:
1. Make sure you are patched up to date.
2. Make sure that parallel processing is happening - the batch queue it
is scheduled on should be a generic batch queue pointing to many
execution queues spread across the nodes in your cluster.
3. Make sure you have followed the configuration guidelines in the
management guide.
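Point 2 above, for anyone not familiar with it, is ordinary VMS queue setup: a generic queue that fans jobs out to per-node execution queues. A minimal sketch, with the queue and node names (NODEA, NODEB, TRM$BATCH) purely as examples:

```dcl
$! Hypothetical sketch of a generic batch queue feeding per-node
$! execution queues, so TRM's parallel jobs spread across the cluster.
$ INITIALIZE /QUEUE /BATCH /ON=NODEA:: NODEA_BATCH
$ INITIALIZE /QUEUE /BATCH /ON=NODEB:: NODEB_BATCH
$ INITIALIZE /QUEUE /GENERIC=(NODEA_BATCH,NODEB_BATCH) TRM$BATCH
$ START /QUEUE NODEA_BATCH
$ START /QUEUE NODEB_BATCH
$ START /QUEUE TRM$BATCH
$! Jobs submitted to TRM$BATCH are then dispatched to whichever
$! execution queue has capacity.
```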
Cheers
Stuart
| 1612.2 | thanks! | NCBOOT::HARRIS | oooppps | Thu Oct 15 1992 15:15 | 14 |
Thanks for the advice.
We'll just keep plugging away at it!
1. Make sure you are patched up to date.
NO patches at all
2. Make sure that parallel processing is happening - the batch queue it
is scheduled on should be a generic batch queue pointing to many
execution queues spread across the nodes in your cluster.
YES
3. Make sure you have followed the configuration guidelines in the
management guide.
YES