Re: [leafnode-list] fetchnews in parallel?
> > Not really. Reading and writing are lock-step operations as of now, and
> > especially on safe filesystems (that excludes ext2fs on Linux), it can
> > be somewhat slow.
> yes, it would be, BUT if it should happen - then it could work as a
> "burst" buffer. it would be working on the assumption that you cannot
> further slow down a slow disk ;)

Yes, but you can't boost the performance with a larger buffer. You will
have to wait for the directory updates (synchronous on most systems) in
any case.
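To see why a bigger RAM buffer doesn't help, here is a minimal sketch (illustrative only, not leafnode code): when each stored article is forced to stable storage before the next one, the cost is per-file metadata latency, not bandwidth, so buffering more data in memory changes nothing.

```python
# Sketch (assumption: illustrative model, not leafnode's code): one fsync
# per stored article models the synchronous directory updates; the cost
# is per-file latency, which no RAM buffer can hide.
import os
import tempfile
import time

def store_articles(directory, count, data=b"article body\n"):
    """Write `count` small files, forcing each to disk before the next."""
    for n in range(count):
        path = os.path.join(directory, str(n))
        with open(path, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # wait for the disk -- the lock-step part
    return count

with tempfile.TemporaryDirectory() as d:
    t0 = time.monotonic()
    written = store_articles(d, 50)
    elapsed = time.monotonic() - t0
    print(f"{written} articles in {elapsed:.3f}s -- latency is per-file, not per-byte")
```

On a safe (fully synchronous-metadata) filesystem each iteration pays a full write-and-wait round trip to the platter, which is why the reads and writes proceed in lock-step.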
> > OTOH, I'm running leafnode on a rather fast Fujitsu MAH3182MP (U160 SCSI
> > drive) that supports tagged command queueing, unlike ATA drives (NOTE:
> > some BSDs support ATA tagged command queueing, but Maxtor ATA drives do
> > not).
> yeah, something like that would help some. but on the other side - how
> many rpms does your fujitsu have? 10krpm? just that helps a lot, not
> including the pros of the scsi interface over ide.

Fujitsu MAH drives spin at 7200/min. The important pro over ATA is that
SCSI has working tagged command queueing.
> second of all - not everybody makes a small lan/home server based on 4
> raided 15krpm cheetahs :)

Neither do I. It's my home "workstation", and I was annoyed by my
previous 5400/min ATA drive.
> > When you pipeline writes, how do you propagate errors back to know where
> > to pick up?
> hmmmm.... again I'm going into theory mode ;) let's make some steps:
> 1) fetchnews connects to every news.server.com
> 2) fetchnews downloads all the article headers for every group that is
> marked as interesting

It won't. It will look at overview data only.
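For context, overview data is one tab-separated line per article (the XOVER response, field order per RFC 2980), which is far cheaper than fetching every header. A sketch of parsing such a line (the helper name and sample values are hypothetical, not leafnode's):

```python
# Sketch (field order follows the RFC 2980 XOVER defaults; the function
# name and sample data are hypothetical): one tab-separated overview line
# per article replaces a full per-article HEAD round trip.
FIELDS = ("number", "subject", "from", "date", "message-id",
          "references", "bytes", "lines")

def parse_overview_line(line):
    """Split one XOVER response line into a dict of the standard fields."""
    parts = line.rstrip("\r\n").split("\t")
    return dict(zip(FIELDS, parts))

sample = ("1042\tRe: fetchnews in parallel?\tuser@example.org\t"
          "29 Sep 2002 12:00:00 +0200\t<msg1@example.org>\t"
          "<msg0@example.org>\t2048\t31")
art = parse_overview_line(sample)
print(art["message-id"], art["bytes"])  # <msg1@example.org> 2048
```

One such line per article is enough to decide what to download, so step 2 of the proposal never happens as stated.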
> 3) for every server it disconnects after finishing that task

No way will there be unnecessary disconnects, that would be a
regression. They cost 4 round trips for TCP breakdown, 3 round trips for
TCP setup.
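The difference is easy to see in miniature. This sketch (a trivial localhost echo service stands in for an NNTP server; everything here is illustrative) contrasts reconnecting per request with keeping one persistent connection:

```python
# Sketch (assumption: a localhost echo loop stands in for an NNTP server):
# every reconnect pays the TCP setup and teardown round trips again; a
# persistent connection pays them once.
import socket
import threading

def serve(listener, stop):
    """Accept one client at a time and echo whatever it sends."""
    while not stop.is_set():
        try:
            conn, _ = listener.accept()
        except OSError:
            break                      # listener closed: shut down
        with conn:
            while data := conn.recv(64):
                conn.sendall(data)

listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]
stop = threading.Event()
threading.Thread(target=serve, args=(listener, stop), daemon=True).start()

replies = []
# Variant A: reconnect per request -- setup + teardown every time.
for _ in range(5):
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(b"ARTICLE\r\n")
        replies.append(s.recv(64))

# Variant B: one persistent connection -- one setup, many requests.
with socket.create_connection(("127.0.0.1", port)) as s:
    for _ in range(5):
        s.sendall(b"ARTICLE\r\n")
        replies.append(s.recv(64))

stop.set()
listener.close()
print(len(replies), "replies")
```

On a real link with tens of milliseconds of latency, variant A's extra handshakes per server add up quickly, which is why fetchnews keeps its connections open.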
> 6) fetchnews starts downloading, buffering it in 4 pipes with
> simultaneous disk access (the number of pipes could be set via vconfig,
> just like the size of every pipe)

Increasing the concurrency on the disk will backfire, especially with a
slowly-seeking disk. The thrashing contention is considerable.
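If concurrency is wanted at all, it belongs on the network side, not the disk side. A sketch of that split (hypothetical structure, not leafnode's): several downloader threads feed a bounded queue, but a single writer drains it, so the disk sees one sequential stream instead of four competing seek patterns.

```python
# Sketch (hypothetical design, not leafnode's): concurrent downloaders,
# ONE writer. The bounded queue plays the role of the proposal's "pipe",
# but the disk never sees more than one writer at a time.
import os
import queue
import tempfile
import threading

def writer(q, directory, results):
    """Drain the queue and write articles sequentially until the sentinel."""
    while (item := q.get()) is not None:
        name, body = item
        with open(os.path.join(directory, name), "wb") as f:
            f.write(body)
        results.append(name)

def downloader(q, ids):
    """Stands in for fetching articles from one server."""
    for i in ids:
        q.put((f"article-{i}", f"body {i}\n".encode()))

with tempfile.TemporaryDirectory() as d:
    q = queue.Queue(maxsize=4)          # the "pipe size" knob from the proposal
    results = []
    w = threading.Thread(target=writer, args=(q, d, results))
    w.start()
    downloaders = [threading.Thread(target=downloader,
                                    args=(q, range(i, 20, 4)))
                   for i in range(4)]
    for t in downloaders:
        t.start()
    for t in downloaders:
        t.join()
    q.put(None)                         # sentinel: all downloads done
    w.join()
    print(len(results), "articles written by one writer")
```

The point of the single writer is exactly to avoid the seek contention described above: the downloads overlap, the writes do not.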
> 8a) we complete without error. fetchnews checks if every article from
> the list is downloaded and then deletes that list. we're safe, aren't
> we?
> 8b) we lose the power. system reboots. fetchnews starts and sees that
> the file with the list IS in the directory. then it checks which
> articles are present and downloaded. when it sees the place where the
> download ended, it discards the last downloaded article, downloading it
> again just in case, and then downloads the rest. go back to 8a. or
> 8b ;)

Ok, your approach is to keep a TODO list. Effectively, leafnode already
keeps a pointer to the last article read, which is just as good, but
consumes less space. Only passing write errors back through a pipe is a
problem.
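The pointer approach can be sketched in a few lines (file names and helpers here are hypothetical; leafnode's actual on-disk format differs): persist the "last article fetched" number with write-temp, fsync, rename, so a crash leaves either the old value or the new one, never a torn file, and a restart resumes from the pointer.

```python
# Sketch (hypothetical file layout, not leafnode's): an atomically updated
# watermark file replaces the proposal's TODO list. write-temp + fsync +
# rename guarantees the pointer is always either the old or new value.
import os
import tempfile

def save_watermark(path, article_number):
    """Durably record the last article fetched."""
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        f.write(str(article_number))
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, path)               # atomic rename on POSIX

def load_watermark(path, default=0):
    """Return the resume point, or `default` on a fresh start."""
    try:
        with open(path) as f:
            return int(f.read())
    except FileNotFoundError:
        return default

with tempfile.TemporaryDirectory() as d:
    mark = os.path.join(d, "last.read")
    for n in range(100, 106):           # fetch articles 100..105
        save_watermark(mark, n)
    # after a "reboot", a fresh process resumes from the pointer
    print("resume after article", load_watermark(mark))
```

A single integer covers the same recovery cases as the list in 8a/8b (re-fetching at most the article in flight), which is why it consumes less space.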
leafnode-list@xxxxxxxxxxxxxxxxxxxxxxxxxxxx -- mailing list for leafnode
To unsubscribe, send mail with "unsubscribe" in the subject to the list