
[leafnode-list] Re: spool full due to posted binaries



On 21 Jul, Matthias Andree <matthias.andree@xxxxxx> wrote:

> Brian D <groups@xxxxxxxxxxxxxxxxxxxx> writes:
> 
> > On 21 Jul, Matthias Andree <matthias.andree@xxxxxx> wrote:
> 
> >> find /var/spool/news/GROUP/NAME -size +31000c -exec rm '{}' ';'
> >> texpire
> 
> > Thanks. I deleted most of them manually (they were crossposted to two
> > other binary groups, so I had to delete each one three times). I only
> > gained 3.7 MB until I ran texpire, which presumably deleted them from
> > the message.id folder; after that I gained about 3 GB of free space.
> 
> Exactly.
> 
> > Done that, and sorted out all the problems caused by running the server
> > with no free space in /var.
> 
> That would probably just be another "texpire -r" run and perhaps
> "fetchnews -f".
I used texpire -r and since then leafnode has worked well.
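
For anyone hitting the same thing: the space only comes back once the last
hard link to an article is gone. A quick way to see which oversized articles
are still pinned is something like the following - just a rough sketch,
assuming the usual leafnode layout where an article is stored once under
message.id and hard-linked from each group directory it was crossposted to:

  # list oversized articles that still carry extra hard links from
  # the group directories (so their space is not yet reclaimable)
  find /var/spool/news/message.id -type f -size +31000c -links +1 -ls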


> 
> Anyways, what broke? Leafnode shouldn't need AVOIDABLE manual repairs after
> running out of space. (More complex fixes will go into leafnode-2 only.)
> 
Leafnode was fine. Other things weren't. Most were sorted by going down to
run level 2 and back up again.

Would there be any problem in running  

"find /var/spool/news/* -size +31000c -exec rm '{}' ';'"?

As I don't carry binary groups, it doesn't matter if it deletes all messages
above about 31 kB. I would assume a "texpire -r" and "fetchnews -f" would get
things back to normal, or even fetchnews without the -f.
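
Something along these lines is roughly what I had in mind - only a sketch,
and it assumes a stock leafnode-1 spool where leaf.node/ and out.going/ are
the directories holding leafnode's own housekeeping files and queued
outgoing postings rather than articles:

  # remove regular files over 31000 bytes, but leave leafnode's own
  # housekeeping files and any queued outgoing postings alone
  find /var/spool/news -type f -size +31000c \
    ! -path '/var/spool/news/leaf.node/*' \
    ! -path '/var/spool/news/out.going/*' \
    -exec rm '{}' ';'
  texpire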

-- 
  Brian Duffell  VirtualRiscPCSA  | RISC OS 4.39 Adjust
  Darlington ASC                  | <www.dare-asc.co.uk> 
  Darlington Dolphin Masters ASC  | <www.darlingtonmasters.org.uk>
-- 
_______________________________________________
leafnode-list mailing list
leafnode-list@xxxxxxxxxxxxxxxxxxxxxxxxxxxx
https://www.dt.e-technik.uni-dortmund.de/mailman/listinfo/leafnode-list
http://leafnode.sourceforge.net/