
Re: [leafnode-list] leafnode is not pulling in any news

I had trouble parsing parts of your reply, and have some more comments
and questions.

On Thu, Jul 03, 2003 at 11:25:45AM +0200, Matthias Andree wrote:
> Ross Boylan schrieb am 2003-07-02:
> > > Then that's the one you're looking for. If you "cat" it, you should be
> > > given the PID and the host name of the process holding the lock.
> > 
> > Now I'm confused.  leafnode is running all the time.  Are you saying
> > leafnode and fetchnews can't run at the same time?
> Well, fetchnews and texpire (and some of the minor utilities as
> applyfilter) are mutually exclusive, leafnode itself (the nntpd process)
> and run at the same time as fetchnews and texpire.
Is that "fetchnews and texpire ... are mutually exclusive, but
leafnode can run at the same time as them"?

I assume two leafnode processes can't or shouldn't run at the same
time, but leafnode doesn't seem to put its process ID in
/var/lock/news/leafnode.  So is it true that /var/lock/news/leafnode
is just for the use of the auxiliary programs (fetchnews, texpire,
...), and that only one of them can run at once?
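As a sanity check, this is roughly how I'd inspect such a lock file by hand.
The path and file contents here are made up for illustration (the reply above
only says the file holds a PID and a host name, first line first):

```shell
# Hypothetical lock file for illustration; the real one would be
# /var/lock/news/leafnode and likely only readable as the news user.
lock=/tmp/leafnode.lock.example
printf '12345 news.example.org\n' > "$lock"

# The first line should hold the PID and host name of the lock owner.
pid=$(head -n 1 "$lock" | cut -d' ' -f1)
host=$(head -n 1 "$lock" | cut -d' ' -f2)
echo "lock held by PID $pid on $host"

# kill -0 probes for the process without signalling it; failure on the
# local host suggests the lock is stale.
kill -0 "$pid" 2>/dev/null || echo "no such local process; lock may be stale"
```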

> > Ah: when I look at the file, it's the PID of a fetchnews process.  And
> > yet the ppp-down script says
> > #!/bin/sh
> > 
> >  /etc/news/leafnode/debian-config
> > 
> > # Kill any fetch processes hanging around
> > if [ "$NETWORK" = "PPP" ]; then
> >    if [ -f /var/lock/news/fetchnews.lck ]; then
> >       /bin/kill -INT $(cat /var/lock/news/fetchnews.lck | head -1)
> >    fi
> > fi
> > So I think this may be a problem with the debian package, since
> > fetchnews.lck seems not to be where the PID lives (the file
> > fetchnews.lck is never present).
> Then let Mark Brown, broonie@xxxxxxxxxx, know about it, he's packaging
> leafnode for Debian. You can, of course, use the regular Debian bug
> tracking system. I changed the default location for the lock file
> recently, maybe that ppp-down file needs to be adjusted.
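If the lock file really did move, the ppp-down snippet could be pointed at the
new location along these lines. This is only a sketch: the path
/var/lock/news/leafnode is assumed from the discussion above and should be
checked against the installed package before relying on it.

```shell
# kill_from_lock: send a signal to the PID recorded on the first line of a
# lock file, if that file exists. The signal defaults to INT, matching the
# original ppp-down script; it is a parameter here only for testability.
kill_from_lock() {
    lock=$1
    sig=${2:-INT}
    if [ -f "$lock" ]; then
        # First whitespace-separated field of the first line is the PID.
        kill -"$sig" "$(head -n 1 "$lock" | awk '{print $1}')"
    fi
}

# e.g. in ppp-down:  kill_from_lock /var/lock/news/leafnode
```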

> > Also slightly odd: the parent process is PID 1, init.
> Not quite, it's likely that the fetchnews process gets "reparented to
> init" (or "adopted by init") if its actual parent dies, which happens
> when fetchnews has finished network activity, or deliberately by pppd to
> make a stricter distinction between pppd and its children.
> > My nntp retrieval still seems very slow (I think I noticed that over a
> > year ago; it's been a constant).  I'm getting about 0.5 kB/s, whereas
> > my dialup connection can manage 5 or 6kb/s (e.g., pulling down email).
> The problem is that leafnode-1 sends commands and expects responses in a
> lock-step way, which means that your connection is idle for a full
> round-trip time without filtering, or for two with filtering that cannot
> happen at the XOVER level.

What's filtering?  And what does "or for two with filtering that
cannot happen at the XOVER level" mean?  Does that mean it takes two
round trips if filtering can't be done with XOVER (whatever that is)?

Also, another thought struck me: does NNTP set its speed early on and
not adjust it later?  When fetchnews first runs, it is typically
competing with lots of other activities for bandwidth, so it may get
relatively little.  If it somehow locks into that rate, that might
account for its slowness.
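To see how the lock-step behavior alone could cap the rate, here is a
back-of-envelope calculation. The round-trip time and article size below are
assumptions I'm making up for illustration, not measurements:

```shell
# Back-of-envelope only; both numbers are guesses, not measurements.
rtt_ms=300          # plausible dial-up round-trip time
article_bytes=2048  # assumed average article size
# With one idle round trip per article (lock-step), throughput is capped
# at article_bytes / rtt no matter how fast the line is:
awk -v b="$article_bytes" -v r="$rtt_ms" \
    'BEGIN { printf "at most %.1f kB/s\n", b / 1024 / (r / 1000) }'
```

With these numbers the cap is about 6.7 kB/s; if filtering forces two round
trips per article, it halves, and smaller articles or a longer RTT pull it
down further, which would be consistent with the 0.5 kB/s I'm seeing.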

> > > fetchnews may not detect your connection is down for an extended amount
> > > of time.
> > 
> > If the ppp-down were working right, it wouldn't need to detect the
> > connection was down, would it?
> I think so.

In view of the other comments about this behavior, it seems odd that I
had a fetchnews process that seemed to last for hours after the
connection was down.

There is some indication that this problem started after I upgraded to
1.9.41 on Debian, but it only occurred several days later.  That fact,
and the remark that fetchnews should figure out when the connection
drops, suggest that somewhat unusual circumstances were needed to
trigger the problem.  I'm not sure what they were, though; it seemed
pretty repeatable when I tested it.  However, I typically waited much
less than 10 minutes after bringing the connection down to test the
offline behavior or dial back in.

> > ctime looks OK.  Is there a reason you're touching the times in this
> > way?  It seems unusual.
> That behaviour was in leafnode-1.9.19 as I took over and released
> 1.9.20, and I didn't want to change it in a "stable" series.

Thank you for working on the program.  And thanks for your answers to
my questions too.

> It has some use though: if mtime == ctime, then the newsgroup has been
> read once (likely only the pseudo article), if ctime > mtime, then the
> newsgroup has been read again. This is used to detect "newsgroup was
> accidentally touched", these newsgroups are supposed to be unsubscribed
> from earlier (timeout_short) as opposed to newsgroups that are read
> regularly (timeout_long).
There were some references to this in the documentation, but the
meaning of "accidentally touched" was not clear to me until this
explanation.  It might be nice to say a little more in the
documentation on this concept.

Having two times is certainly useful; it just seems more conventional
for ctime (creation time, right?) to be fixed at creation, and mtime
(modification time) to be the one that is changed later.
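For what it's worth, the check you describe can be reproduced from the shell.
The directory path here is made up for illustration (leafnode keeps groups
under its news spool), and I'm using GNU stat options:

```shell
# Illustrative path only; leafnode-1 actually keeps groups in its news spool.
d=/tmp/example.group
mkdir -p "$d"

mtime=$(stat -c %Y "$d")   # GNU stat; on BSD this would be: stat -f %m "$d"
ctime=$(stat -c %Z "$d")

# Per the explanation above: ctime > mtime means the group was read again
# after its last update, so the longer unsubscribe timeout would apply.
if [ "$ctime" -gt "$mtime" ]; then
    echo "read again (timeout_long)"
else
    echo "possibly only touched once (timeout_short)"
fi
```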

leafnode-list@xxxxxxxxxxxxxxxxxxxxxxxxxxxx -- mailing list for leafnode
To unsubscribe, send mail with "unsubscribe" in the subject to the list