
Re: [leafnode-list] Problem with texpire



Matthias Andree <ma@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> writes:

> Stefan Wiens <s.wi@xxxxxxx> writes:
>
>> Matthias Andree <ma@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> writes:
>> 
>> > Stefan Wiens <s.wi@xxxxxxx> writes:
>> >
>> >> To find broken overview lines:
>> >> find /var/spool/news -name .overview | \
>> >>         xargs awk -v FS="\t" '{if(NF>9)print FILENAME" "$0}'
>> >
>> > None here[TM].
>> 
>> Not really surprising.
>> 
>> Instead, your WIP-20010605-1 getxoverline() will get SIGSEGV in
>> tab2spc() should strdup() fail.
>
> Yup, I haven't yet reworked all places to do proper error checking. I
> have a critstrdup call for these places, but I am delaying the next
> snapshot release until I have fixed at least one of the long-standing
> major issues; otherwise it'd be pointless.
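
Good. For the record, I imagine something along these lines is all that
is needed, so that callers like tab2spc() never see a NULL pointer.
Untested sketch, and I am only guessing at the name and signature of
your critstrdup:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <syslog.h>

/* Guessed interface: duplicate a string, abort the program instead of
 * returning NULL, and tell the log which caller ran out of memory. */
char *critstrdup(const char *s, const char *caller)
{
    char *d = strdup(s);

    if (d == NULL) {
        syslog(LOG_CRIT, "%s: out of memory copying \"%.40s\"", caller, s);
        fprintf(stderr, "%s: out of memory\n", caller);
        exit(EXIT_FAILURE);
    }
    return d;
}

That way getxoverline() can stay unchanged and an out-of-memory
condition turns into a clean abort instead of a SIGSEGV.
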
>
>> Documented issue, see texpire(8). Groups not in groupinfo are
>> currently ignored.
>
> Ok. Bad Thing[tm].
>
>> I'd be "not amused" should texpire ever clean up my site-local
>> subdirectories under /var/spool/news. That would be too surprising,
>> even if documented.
>
> What other choice would there be? What site-local subdirectories belong
> below /var/spool/news? /var/spool/news belongs to the news server,
> nobody else.

OK, I agree. I sometimes use subdirectories to play around with
certain article selections, using hardlinks.

What do you think about the following policy:

  All subdirectories under /var/spool/news whose names could be
  components of a valid newsgroup name are considered for expiry.
  The one character that will never show up in such a component is
  ".", since the dots of a group name become "/" in the spool path.
  (lost+found is also excluded.)

  All files whose names are either all digits or ".overview", and
  which are reached only through directories whose names could form
  a valid newsgroup name, may be affected by expiry.

  If you have to put something under /var/spool/news that leafnode
  doesn't know about, give it a name containing ".".
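
In code, the above boils down to two small checks, roughly like this
(untested sketch, names made up for illustration):

#include <ctype.h>
#include <string.h>

/* A directory may be descended/expired only if its name could be a
 * component of a newsgroup name: no "." in it, and not "lost+found". */
static int expirable_dirname(const char *name)
{
    if (strchr(name, '.') != NULL)
        return 0;                 /* site-local or "."/"..": leave alone */
    if (strcmp(name, "lost+found") == 0)
        return 0;
    return 1;
}

/* Inside such directories, only article files (all digits) and the
 * ".overview" file may be touched by expiry. */
static int expirable_filename(const char *name)
{
    if (strcmp(name, ".overview") == 0)
        return 1;
    if (*name == '\0')
        return 0;
    while (*name != '\0')
        if (!isdigit((unsigned char)*name++))
            return 0;
    return 1;
}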

Descending the entire spool directory would normally require stat()ing
every name in it, which would be unacceptably slow; some optimization
is needed. My find(1) manpage mentions one: assume that a directory
contains two fewer subdirectories than its link count, so a directory
with link count 2 cannot contain any and need not be descended.
Would this hold on all target systems?

A strategy like:

SPOOLDIR=/var/spool/news ; export SPOOLDIR
find "$SPOOLDIR"/* -path "$SPOOLDIR/*.*" -prune \
                -o -path "$SPOOLDIR/lost+found" -prune \
                -o -type d -print -links 2 -prune 

runs fast here. (GNU find version 4.1)
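
Inside texpire, the same link-count trick could look roughly like this
(untested sketch; whether the st_nlink assumption holds everywhere is
exactly the open question above):

#include <sys/types.h>
#include <sys/stat.h>
#include <dirent.h>
#include <stdio.h>
#include <string.h>

/* Recurse through the spool, using st_nlink - 2 as the expected number
 * of subdirectories, so leaf group directories (link count 2) are not
 * scanned for further directories at all. */
static void walkgroups(const char *path)
{
    struct stat st;
    DIR *d;
    struct dirent *de;
    long subdirs;

    if (lstat(path, &st) != 0 || !S_ISDIR(st.st_mode))
        return;

    /* ... expire articles and .overview in this directory here ... */

    subdirs = (long)st.st_nlink - 2;  /* classic UNIX directory semantics */
    if (subdirs == 0)
        return;                       /* leaf directory: nothing to descend */
    if ((d = opendir(path)) == NULL)
        return;
    /* if subdirs is negative (odd filesystem), we simply scan everything */
    while ((de = readdir(d)) != NULL && subdirs != 0) {
        char sub[4096];

        if (strchr(de->d_name, '.') || strcmp(de->d_name, "lost+found") == 0)
            continue;                 /* skips ".", ".." and site-local names */
        snprintf(sub, sizeof(sub), "%s/%s", path, de->d_name);
        if (lstat(sub, &st) == 0 && S_ISDIR(st.st_mode)) {
            walkgroups(sub);
            subdirs--;                /* one expected subdirectory found */
        }
    }
    closedir(d);
}

(The sketch uses lstat(), so symlinks would simply not be followed.)
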

What to do with symlinks?

> In the long run, we'd better close the spool dir anyways. Supporting
> local-spool and efficient NNTP access at the same time is not a good
> idea.

You mean a "storage API"?

Of course, the tradspool-like directory structure is slow and
inefficient (at least on ext2fs), but being able to handle it with
standard UNIX tools like find(1) and grep(1) compensates for that.

Stefan


-- 
leafnode-list@xxxxxxxxxxxxxxxxxxxxxxxxxxxx -- mailing list for leafnode
To unsubscribe, send mail with "unsubscribe" in the subject to the list