Discussion:
[omniORB] 4.0.7: clientCallTimeOutPeriod slows down transfer
Igor Lautar
2007-12-07 18:30:22 UTC
Hi,

While debugging slow performance, we have found that setting
clientCallTimeOutPeriod to anything other than the default of 0
(block forever) greatly degrades throughput.

The setup is a 1 Gbit link; the client reads data and sends it to the
server in 256 KB chunks (for the sake of the test, the servant on the
server immediately discards the buffer and returns).
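
For reference, here is a sketch of the kind of client loop I mean (not
our actual code; the IDL interface, the names and the generated header
are only illustrative):

  // Assumed IDL, compiled with omniidl -bcxx:
  //   typedef sequence<octet> Buffer;
  //   interface Sink { void put(in Buffer data); };
  #include <fstream>
  #include "sink.hh"   // hypothetical omniidl-generated header

  void send_file(Sink_ptr sink, const char* path)
  {
    const CORBA::ULong CHUNK = 256 * 1024;   // 256 KB per call
    std::ifstream in(path, std::ios::binary);

    Buffer buf(CHUNK);                       // allocate once, reuse
    buf.length(CHUNK);

    for (;;) {
      in.read(reinterpret_cast<char*>(buf.get_buffer()), CHUNK);
      std::streamsize n = in.gcount();
      if (n <= 0) break;
      buf.length(static_cast<CORBA::ULong>(n));
      sink->put(buf);      // servant just discards the buffer and returns
      if (!in) break;      // short read: end of file
    }
  }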

If clientCallTimeOutPeriod is set to any value other than 0, link
utilization is 1-2% (2-3 MB/s). If clientCallTimeOutPeriod is set to 0,
utilization jumps to 66% (the limit of reading the data from disk on the
client).
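
Just to be explicit about what "set" means here, these are the usual
ways to configure the option in omniORB 4 (the 10000 ms value is purely
illustrative):

  // 1) In omniORB.cfg:
  //      clientCallTimeOutPeriod = 10000   (milliseconds; 0 = block forever)
  //
  // 2) On the command line passed through to ORB_init:
  //      client.exe -ORBclientCallTimeOutPeriod 10000
  //
  // 3) Programmatically, after ORB_init:
  #include <omniORB4/CORBA.h>

  int main(int argc, char** argv)
  {
    CORBA::ORB_var orb = CORBA::ORB_init(argc, argv);

    omniORB::setClientCallTimeout(10000);         // global, in milliseconds
    // omniORB::setClientCallTimeout(obj, 10000); // or per object reference

    // ... run the test ...

    orb->destroy();
    return 0;
  }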

Is there a known bug for this, or is it just a side effect of having
timeouts on the client side?
Timeouts are really useful in some cases (e.g. a misbehaving CORBA server).

Thank you,
Igor
Igor Lautar
2007-12-07 21:02:32 UTC
Hi,

A few words on the environment:
Windows 2003 Server x64 for server
Windows XP Pro win32 for client
1 Gbit network
omniORB compiled with MS VS 8.0 (cl 14.00.50727.42)

thx,
Igor