
Re: [leafnode-list] fetchnews in parallel?

On Wed, 10 Jul 2002 13:08:27 -0700 (PDT), a certain "Michael O'Quinn"
<michael@xxxxxxxxxxx> wrote:

> > yes, with a slow ATA disk a 4-pipe write WOULD be a problem, but for
> > a 2-disk SCSI RAID it would be child's play; they probably wouldn't
> > have seen any difference. but as I've said - this option could be
> > defined in the config, or even be a
> > ./configure --with-simultaneous-write-pipes or something.
> Um, what does the number of pipes have to do with it?  I think the
> limiting factor here is most likely going to be the bandwidth of the
> incoming connection.  No amount of parallelism is going to allow you to
> exceed the bandwidth of your Internet connection, no matter what.

not exactly. I've checked that puppy out. (at least for me) it seems that
after fetching an article there is a halt. second - articles are not that
big - the average is what, 4 KB? well... I get about one article per
second, even when people are not downloading anything. and then disk
i/o is not much of a problem - because squid is not serving them anything.
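a back-of-the-envelope sketch of that halt (all numbers are assumptions
pulled from this thread: a ~4 KB average article, the ~60 ms average
round-trip from the ping stats further down, and the 400 kbit/s shaped
downlink). with one fetch pipe, each article costs a full round trip plus
its transfer time, so the line never fills:

```python
# Rough model, not the fetchnews source: serial article fetches stall on
# round trips. Assumed numbers: 4 KB article, 60 ms RTT, 400 kbit/s cap.
ARTICLE_BYTES = 4 * 1024
RTT_S = 0.060
LINK_BPS = 400_000 / 8          # shaped downlink in bytes/s

transfer_s = ARTICLE_BYTES / LINK_BPS        # time on the wire per article
per_article_s = RTT_S + transfer_s           # plus one request/response halt
serial_rate = ARTICLE_BYTES / per_article_s  # effective bytes/s with 1 pipe

print(f"one pipe: {serial_rate * 8 / 1000:.0f} kbit/s of a 400 kbit/s line")
for pipes in (2, 4):
    rate = min(pipes * serial_rate, LINK_BPS)
    print(f"{pipes} pipes: {rate * 8 / 1000:.0f} kbit/s")
```

with those numbers one pipe sits well under the cap, which is exactly why
the parallel-fetch question comes up at all.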

> For example, my DSL connection is rated at 768 kbps incoming, and in
> Real Life it maxes out at about 680 kbps.  (TCP overhead and such eats
> up a percentage).  I've established this max speed with several speed
> benchmarking services, and via the firewall's real time bandwidth meter.

I'm running a line rated at 512 kbit/s, with CBQ via wondershaper 1.1
(www.lartc.org) limiting incoming traffic to 400 kbit/s and outgoing to
250 kbit/s. any higher and I get 200 ms pings at peak times from my modem
to the modem on the ISP's router. this 400/250 kbit/s is shared among 20
people, will be shared among up to 30, and then there will be an upgrade
to 768.
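for anyone curious, that shaping boils down to editing the variables at
the top of the wondershaper 1.1 script from www.lartc.org - this is a
sketch, not my exact copy, and DEV=ppp0 is an assumed interface name
(use whatever your modem link is called):

```shell
# Top of the wondershaper 1.1 script (www.lartc.org) - a sketch.
# DEV=ppp0 is an assumed interface name for the DSL modem link.
DOWNLINK=400   # kbit/s let in, of the 512 kbit/s line
UPLINK=250     # kbit/s let out
DEV=ppp0
# ...the rest of the script builds the qdiscs and filters from these
# values; run it as root after the interface comes up.
```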

> When fetchnews is fetching over that line, it typically runs about
> 200-400 kbps.  I have another program on another Linux box that DOES do
> parallel news fetches (using NewsPlex), and it always maxes out the line
> at 680kbps.

yeah, well, not for me :( even though right now I get this:

maniack@emilka:~$ ping news.supermedia.pl
PING news.supermedia.pl ( 56 octets data
--- news.supermedia.pl ping statistics ---
20 packets transmitted, 20 packets received, 0% packet loss
round-trip min/avg/max = 24.7/60.6/120.0 ms

which is as good as it gets here :/

> In neither case does disk bandwidth even remotely become a problem.  The
> machine doing the parallel fetches is a P-133, something like 32 or 64MB
> memory, and an old ATA 6 gig drive.  Even if the drive is capable of ATA
> 33, the motherboard isn't, and the drive is 5400 RPM. 

the server is:
P75@100, 128 MB RAM, a 3c509B for the modem, an Intel EEPro100 for the
LAN, a Maxtor D540X 5400 rpm 40 GB, running squid,
qmail+ucspi+daemontools+ezmlm-idx, leafnode, and apache (low use). the
modules use some memory too - it's a router with an ever-growing list of
modules for the

> Even when my Internet bandwidth is totally maxed out, the disk on either
> machine barely flickers.


squid - currently:

	HTTP requests per minute:	36.3
	ICP messages per minute:	34.0

with a 35% hit rate and a 10% byte hit rate.

qmail - that's almost nothing, maybe one mail per 5 minutes. people here
mostly use ICQ, Gadu-Gadu, and other instant messengers.

leafnode - 2 out of 3 users are currently connected to leafnode and
actively reading; I'm the third, who is not.

during peak times the maximum throughput through that disk is a mere
200 kB/s on a 100 Mbit/s LAN, whereas normally it's ~3 MB/s. and it's
going to get worse...
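(for the record, a simple way to watch those disk numbers, assuming the
stock procps tools are installed - the bi/bo columns are blocks read and
written per second:)

```shell
# Watch block I/O once a second for 5 samples; bi/bo are blocks in/out.
# Assumes vmstat (procps) is installed; iostat from sysstat also works.
vmstat 1 5
```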

> Now when I am retrieving this cached news across my internal 10 Base TX
> network, server limitations do become an issue.  But surprisingly, it's
> still not disk bandwidth, but CPU on the server.

I have a 10/100 switch close to the server, with 8 workstations (10/100),
the server (100), and 3 hubs connected (at 10 Mbit/s, due to the 250 m
distance to each of them).

|GIT d- s+:- a--- C++ UL++++ P+ L+++ E- W N++ o? K? w-- !O !M !V|
|_PS+ PE+++ Y+ PGP !t !5 !X R+ !tv b++++ !DI D+++ G e- h! r- y++|

leafnode-list@xxxxxxxxxxxxxxxxxxxxxxxxxxxx -- mailing list for leafnode
To unsubscribe, send mail with "unsubscribe" in the subject to the list