
[leafnode-list] catching up



Say (for example) that your machine has been down with hardware problems
:-( and on getting it back, you'd prefer that leafnode didn't download all
the news articles that appeared in certain groups in the interim -
especially if you have more than one server: getting 4000 articles in
alt.os.linux.mandrake takes time from server1, and then querying server2 to
check that yes, there are very few extra articles takes an age...

What's the best way of handling this?

rm /var/spool/news/interesting.groups/whatever
and then let initialfetch handle it once the group becomes interesting
again, or is there another method that's easier?
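
Roughly what I have in mind, in case that's not clear - the group name and
the numbers are only examples, and the config file is wherever yours lives:

    # drop the marker so leafnode stops treating the group as "interesting"
    # and doesn't try to pull the whole backlog
    rm /var/spool/news/interesting.groups/alt.os.linux.mandrake

    # in the leafnode config, cap what fetchnews pulls when the group
    # is picked up again as if newly subscribed
    initialfetch = 200
    maxfetch = 1000

then read the group once from the newsreader so it lands back in
interesting.groups, and let the next fetchnews run pull only the capped
number of articles. The numbers are plucked out of the air - whatever keeps
the catch-up tolerable.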

version 2.0b8 in case it matters!

R
-- 
Robert Marshall

-- 
leafnode-list@xxxxxxxxxxxxxxxxxxxxxxxxxxxx -- mailing list for leafnode
To unsubscribe, send mail with "unsubscribe" in the subject to the list