Igor Lautar
2007-12-07 18:30:22 UTC
Hi,
While debugging slow performance, we found that unless clientCallTimeOutPeriod
is left at its default of 0 (block forever), performance is greatly degraded.
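For reference, a minimal sketch of how the timeout can be set, assuming
omniORB 4 (the -ORBclientCallTimeOutPeriod init option and the
omniORB::setClientCallTimeout() helper; the rest of the program is
illustrative only):

#include <omniORB4/CORBA.h>

int main(int argc, char** argv)
{
  // Either pass -ORBclientCallTimeOutPeriod <ms> on the command line,
  // set clientCallTimeOutPeriod in omniORB.cfg, or use the call below.
  CORBA::ORB_var orb = CORBA::ORB_init(argc, argv);

  // Milliseconds; 0 is the default and means "block forever".
  omniORB::setClientCallTimeout(5000);

  // ... resolve the server reference and run the transfer test ...

  orb->destroy();
  return 0;
}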
The setup: a 1 Gbit link; the client reads data from disk and sends it to the
server in 256 KB chunks (for the sake of the test, the server's servant
immediately discards the buffer and returns).
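To make the test concrete, a rough sketch of the discard-only servant; the
Transfer interface, Buffer sequence and put() operation are hypothetical
names, not the actual IDL used:

// Hypothetical IDL for the sketch:
//   interface Transfer {
//     typedef sequence<octet> Buffer;
//     void put(in Buffer data);
//   };

#include "Transfer.hh"   // assumed name of the omniidl-generated header

class Transfer_i : public POA_Transfer
{
public:
  // Receives a 256 KB chunk and returns immediately, so the server
  // side should never be the bottleneck.
  virtual void put(const Transfer::Buffer& data)
  {
    (void) data;   // buffer is discarded on purpose
  }
};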
If clientCallTimeOutPeriod is set to any value other than 0, link utilization
is 1-2% (2-3 MB/s). If clientCallTimeOutPeriod is set to 0, utilization jumps
to 66% (at which point reading the data from disk on the client is the limit).
Is there a known bug for this, or is it just a side effect of having
timeouts on the client side?
Timeouts are really useful in some cases (e.g. a misbehaving CORBA server).
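For completeness, timeouts can also be scoped to a single object reference
rather than ORB-wide, assuming the two-argument overload of
omniORB::setClientCallTimeout() is available in the omniORB version used;
a sketch:

#include <omniORB4/CORBA.h>

// Assumed omniORB 4 per-object overload: apply a timeout only to the
// reference that might misbehave, while other references (e.g. the
// bulk-transfer object) keep the global setting (0 = block forever).
void guard_reference(CORBA::Object_ptr obj)
{
  omniORB::setClientCallTimeout(obj, 30000);   // milliseconds
}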
Thank you,
Igor