Discussion:
[omniORB] transfer large files
Antonio Beamud Montero
2008-09-19 20:31:23 UTC
Permalink
Hi all:
Is it efficient to send large files using CORBA calls with omniORB, or is it
better to delegate the transfer to other protocols (FTP, for example)?
If it can be done with omniORB, what's the best approach? Splitting into
chunks of octets?

Greetings
Matej Kenda
2008-09-19 21:42:34 UTC
Permalink
Hi Antonio,

I used a sequence of octets for sending files.

Based on the measurements I did, the transfer speed is comparable to
using FTP when the data chunks are larger than 32 KiB.
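A minimal IDL sketch of such an interface (the names are illustrative, not
taken from any actual code):

```idl
// Hypothetical file-transfer interface; chunks are plain octet sequences.
typedef sequence<octet> OctetSeq;

interface FileServer {
  void open_file(in string name);
  void put(in OctetSeq chunk);   // called repeatedly, e.g. 32 KiB per call
  void close_file();
};
```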

Regards,

Matej

On Fri, Sep 19, 2008 at 4:30 PM, Antonio Beamud Montero
Post by Antonio Beamud Montero
Is it efficient to send large files using CORBA calls with omniORB, or is it
better to delegate the transfer to other protocols (FTP, for example)?
If it can be done with omniORB, what's the best approach? Splitting into
chunks of octets?
Antonio Beamud Montero
2008-09-19 22:19:17 UTC
Permalink
Post by Matej Kenda
Hi Antonio,
I used a sequence of octets for sending files.
Then, the file is entirely loaded into memory before being sent... no?

Thanks for your reply.
Post by Matej Kenda
Based on the measurements I did, the transfer speed is comparable to
using FTP when the data chunks are larger than 32 KiB.
Regards,
Matej
On Fri, Sep 19, 2008 at 4:30 PM, Antonio Beamud Montero
Post by Antonio Beamud Montero
Is it efficient to send large files using CORBA calls with omniORB, or is it
better to delegate the transfer to other protocols (FTP, for example)?
If it can be done with omniORB, what's the best approach? Splitting into
chunks of octets?
Matej Kenda
2008-09-19 23:19:45 UTC
Permalink
On Fri, Sep 19, 2008 at 6:19 PM, Antonio Beamud Montero
Post by Antonio Beamud Montero
Post by Matej Kenda
Hi Antonio,
I used a sequence of octets for sending files.
Then, the file is entirely loaded into memory before being sent... no?
No, it doesn't have to be.

You can send large files by reading and sending chunks in a loop.

<pseudo>

file.open(fname);
file_server->open_file(fname);

octet_sequence buffer;
buffer.length(32*1024);

while (!file.eof()) {
  n = file.read(buffer);     // the last chunk may be shorter than 32 KiB
  buffer.length(n);          // shrink the final chunk accordingly
  file_server->put(buffer);
}

file.close();
file_server->close_file();

</pseudo>

HTH,

Matej
Serguei Kolos
2008-10-23 21:42:00 UTC
Permalink
Hello

While migrating from omniORB 4.0.7 to 4.1.3 I have noticed a significant
difference in the behavior of omniORB applications. I have a server
application which uses the following two options:

threadPerConnectionPolicy 0   // the server shall be able to process
                              // several hundreds of clients concurrently
threadPoolWatchConnection 0   // for more efficient processing of
                              // concurrent client requests

That was working fine with 4.0.7. Now with 4.1.3, if a client sends several
subsequent requests to the server, every second request gets its response
with a 50 millisecond delay. For example, when running both the client and
the server on the same machine, the request execution times look like this
(in milliseconds): 0.12 50.23 0.12 50.42 0.14 50.88 ...

This can be changed by decreasing connectionWatchPeriod to something very
small (by default it is set to 50000 microseconds, which seems to be the
cause of the issue). But in that case the CPU consumption of the server
grows significantly.
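For reference, these parameters live in omniORB.cfg (or can be passed as
-ORB command-line options); a sketch of the configuration being discussed,
with the lowered watch period as the workaround (the value 5000 is
illustrative, not a recommendation):

```
# omniORB.cfg -- connectionWatchPeriod is in microseconds (default 50000)
threadPerConnectionPolicy = 0
threadPoolWatchConnection = 0
connectionWatchPeriod     = 5000
```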

With respect to that I have several questions:
1. Is it a bug or a feature of 4.1.3?
2. Can this be changed back to have the same behavior as in 4.0.7?
3. If not, is it possible to achieve with omniORB 4.1.3 the same response
   time and CPU consumption as with 4.0.7 for a server handling many
   concurrent clients?

Cheers,
Sergei

PS: I'm running on Linux with a 2.6.9 kernel, using gcc 3.4. I have also
made some tests with the omniORB echo example; its behavior is exactly the
same.
Serguei Kolos
2008-10-24 22:20:47 UTC
Permalink
Hi

I'm using the cdrMemoryStream class of omniORB to do some data
packing/unpacking. After moving to omniORB 4.1.3, my application started
crashing because of the negative value returned by the
cdrMemoryStream::bufSize() function when I use a memory stream for reading
from an input buffer. Looking at the code, I noticed a difference with
respect to 4.0.7. Before, bufSize was implemented like this (file
cdrMemoryStream.cc):

CORBA::ULong
cdrMemoryStream::bufSize() const
{
  if (!pd_readonly_and_external_buffer) {
    return (CORBA::ULong)((omni::ptr_arith_t)pd_outb_mkr -
                          (omni::ptr_arith_t)ensure_align_8(pd_bufp));
  }
  else {
    return (CORBA::ULong)((omni::ptr_arith_t)pd_inb_end -
                          (omni::ptr_arith_t)pd_bufp);
  }
}

And in 4.1.3 it is:

CORBA::ULong
cdrMemoryStream::bufSize() const
{
  return (CORBA::ULong)((omni::ptr_arith_t)pd_outb_mkr -
                        (omni::ptr_arith_t)pd_bufp_8);
}

In the case of using this object for input, pd_outb_mkr is 0 and pd_bufp_8
points to the beginning of the buffer, so the result is a large negative
number. Is that a bug, or am I missing something?

Cheers,
Sergei
Duncan Grisby
2008-10-24 22:45:56 UTC
Permalink
Post by Serguei Kolos
While migrating from omniORB 4.0.7 to 4.1.3 I have noticed a significant
difference in the behavior of omniORB applications. I have a server
application which uses the following two options:

threadPerConnectionPolicy 0   // the server shall be able to process
                              // several hundreds of clients concurrently
threadPoolWatchConnection 0   // for more efficient processing of
                              // concurrent client requests

That was working fine with 4.0.7. Now with 4.1.3, if a client sends several
subsequent requests to the server, every second request gets its response
with a 50 millisecond delay. For example, when running both the client and
the server on the same machine, the request execution times look like this
(in milliseconds): 0.12 50.23 0.12 50.42 0.14 50.88 ...
It's a bug. In the case of using a thread pool and not watching
connections, the last socket in the array used in a call to poll() would
incorrectly think it was still in the array after it had been removed.
That meant that socket was not re-added to the array at the right time,
leading to the delay.

I've fixed it in CVS, and attached the simple patch that fixes it.

Thanks for the bug report.

Duncan.
--
-- Duncan Grisby --
-- ***@grisby.org --
-- http://www.grisby.org --

-------------- next part --------------
A non-text attachment was scrubbed...
Name: socketcollection.patch
Type: text/x-c++
Size: 1257 bytes
Desc: not available
Url : http://www.omniorb-support.com/pipermail/omniorb-list/attachments/20081024/c3db5057/socketcollection.bin