Discussion:
[omniORB] emulating Reactor pattern with omniORB
Michael Kilburn
2007-11-07 19:54:34 UTC
Hi

It looks like there is a way to emulate a "proper" Reactor pattern with
omniORB. Here is the trick:
- use the ORB_CTRL_MODEL threading policy
- install interceptors
- on receiving a request or response -- lock a mutex
- on sending a request or response -- unlock the mutex

This gives semantics close to the reactor pattern as implemented in Orbacus
(i.e. you can have nested calls of any depth), and your server will behave as
if it were single-threaded -- i.e. no need for synchronization headaches.
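
Roughly, I mean something like the sketch below -- untested, and it assumes
the omniORB 4.1-style interceptor API from <omniORB4/omniInterceptors.h>
(each hook taking an info_T reference and returning CORBA::Boolean); the
exact signatures should be checked against omniInterceptors.h for the
version in use:

    #include <omniORB4/CORBA.h>
    #include <omniORB4/omniInterceptors.h>
    #include <omnithread.h>

    static omni_mutex big_lock;   // the single "reactor" lock

    // Acquire the lock whenever a request or reply arrives...
    static CORBA::Boolean on_recv_request(
        omniInterceptors::serverReceiveRequest_T::info_T&)
    {
      big_lock.lock();
      return 1;                   // let other registered interceptors run
    }

    static CORBA::Boolean on_recv_reply(
        omniInterceptors::clientReceiveReply_T::info_T&)
    {
      big_lock.lock();
      return 1;
    }

    // ...and release it whenever a request or reply goes out.
    static CORBA::Boolean on_send_request(
        omniInterceptors::clientSendRequest_T::info_T&)
    {
      big_lock.unlock();
      return 1;
    }

    static CORBA::Boolean on_send_reply(
        omniInterceptors::serverSendReply_T::info_T&)
    {
      big_lock.unlock();
      return 1;
    }

    int main(int argc, char** argv)
    {
      CORBA::ORB_var orb = CORBA::ORB_init(argc, argv);

      // register the hooks -- see the interceptors chapter of the omniORB
      // manual for exactly when this has to be done
      omniInterceptors* hooks = omniORB::getInterceptors();
      hooks->serverReceiveRequest.add(on_recv_request);
      hooks->serverSendReply.add(on_send_reply);
      hooks->clientSendRequest.add(on_send_request);
      hooks->clientReceiveReply.add(on_recv_reply);

      // ... create a POA with the ORB_CTRL_MODEL thread policy,
      //     activate servants, then orb->run() ...
      return 0;
    }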

Is anything wrong with this approach?

So far I see only these drawbacks:
- the client (the app that initiates a call chain) can't work this way (you
can't unlock a mutex that is not locked)
- one-way calls will require some additional treatment (they do not receive a
response)
- I am not sure if it is OK to stay in an interceptor for a long time -- for
example, what will happen if a thread stays there for 5 minutes? Or for 1
second, with another request arriving in the meantime? In which context are
these interceptors called, and is anything locked at that point in omniORB's
guts?

Thanks
--
Sincerely yours,
Michael.
Duncan Grisby
2007-11-11 01:25:07 UTC
Post by Michael Kilburn
It looks like there is a way to emulate a "proper" Reactor pattern with
omniORB. Here is the trick:
- use the ORB_CTRL_MODEL threading policy
- install interceptors
- on receiving a request or response -- lock a mutex
- on sending a request or response -- unlock the mutex
This gives semantics close to the reactor pattern as implemented in Orbacus
(i.e. you can have nested calls of any depth), and your server will behave as
if it were single-threaded -- i.e. no need for synchronization headaches.
Is anything wrong with this approach?
It strikes me as spectacularly dangerous to lock and unlock mutexes
inside interceptors, since you have no guarantee that the ORB will call
them in the way you expect in absolutely all situations.

I am not a fan of the reactor model. I disagree that there is "no need
for synchronization headache" -- whenever you do any remote call, you
are releasing your lock, meaning that all code has to be thread safe
across all calls. To my mind, that's much more of a headache than simply
managing locks as required by critical regions in the code.
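
For example (hypothetical servant code -- all of these names are made up, it
is just to illustrate the hazard):

    void Basket_impl::checkout()
    {
      CORBA::ULong n = items_.length();   // big lock held here
      subscriber_->notify();              // lock released for the call...
      // ...and re-acquired when the reply comes back; another request may
      // have modified items_ in the meantime, so 'n' and any iterators or
      // pointers into items_ taken before the call may now be stale.
      process(items_, n);
    }

Every piece of state touched before a remote call has to be re-validated
after it.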
Post by Michael Kilburn
- the client (the app that initiates a call chain) can't work this way (you
can't unlock a mutex that is not locked)
- one-way calls will require some additional treatment (they do not receive a
response)
- I am not sure if it is OK to stay in an interceptor for a long time -- for
example, what will happen if a thread stays there for 5 minutes? Or for 1
second, with another request arriving in the meantime? In which context are
these interceptors called, and is anything locked at that point in omniORB's
guts?
omniORB doesn't hold any locks while calling the request interceptors.
However, your lock will be acquired and released out of step with the
existing partial-order on locks. I would want to do a complete analysis
of all the interactions between the locks to convince myself that there
was no possibility of deadlock between your lock and the locks in
omniORB.

There are all kinds of other situations in which the scheme might go
wrong. One example, in addition to the things you list above, is that there
are various situations in which calls can be retried due to network
errors, in which case the clientSendRequest interceptor can be called
more than once without corresponding calls to the clientReceiveReply
interceptor.

As I say, I don't think it's a very safe idea.

Cheers,

Duncan.
--
-- Duncan Grisby --
-- ***@grisby.org --
-- http://www.grisby.org --
Michael Kilburn
2007-11-11 05:07:39 UTC
Post by Duncan Grisby
Post by Michael Kilburn
It looks like there is a way to emulate a "proper" Reactor pattern with
omniORB. Here is the trick:
- use the ORB_CTRL_MODEL threading policy
- install interceptors
- on receiving a request or response -- lock a mutex
- on sending a request or response -- unlock the mutex
This gives semantics close to the reactor pattern as implemented in Orbacus
(i.e. you can have nested calls of any depth), and your server will behave as
if it were single-threaded -- i.e. no need for synchronization headaches.
Is anything wrong with this approach?
It strikes me as spectacularly dangerous to lock and unlock mutexes
inside interceptors, since you have no guarantee that the ORB will call
them in the way you expect in absolutely all situations.
I agree; that is why I am seeking advice here. But in my case we have a
very large amount of code that is not thread-safe and uses nested calls
(it was originally written for Orbacus). It has to be migrated to omniORB,
and here the real headache starts, because making that code work under
ORB_CTRL_MODEL with nested-call support is not easy.
Post by Duncan Grisby
I am not a fan of the reactor model. I disagree that there is "no need
for synchronization headache" -- whenever you do any remote call, you
are releasing your lock, meaning that all code has to be thread safe
across all calls.
Well, as far as I see, there is no need to synchronize, because this:

lock <- incoming calls (receiving request/response)
....
outgoing calls (sending request/response) -> unlock

will ensure that everything inside the server is executed in a
serialized manner, and every execution "session" will have its own
thread and stack. The lock/unlock operations ensure proper memory
visibility (i.e. changes made in one thread are guaranteed to be visible
in the others). So in essence there is no need for synchronization.

Of course, there is the problem of reentrant calls, but in our case it
is OK, because our nested-call cases usually look like this:
- client calls server
- server does all work
- server notifies all subscribers
- nested: subscribers ask for more data (usually retrieving pieces
that were changed)
- server returns

There are no problems with reentrant calls -- they are possible only
at "safe" points.
Post by Duncan Grisby
To my mind, that's much more of a headache than simply managing locks
as required by critical regions in the code.
On the other hand, I have found that "simply managing locks" is quite a
non-trivial task in a multi-threaded CORBA server when, behind the CORBA
layer, your C++ objects/servants reference each other -- in another project
I had a lot of problems implementing it correctly (especially servant
lifetime and destruction order). And I am not sure that the next person
who looks into the code won't make a mistake.
Post by Duncan Grisby
Post by Michael Kilburn
is anything locked at that point in omniORB's guts?
omniORB doesn't hold any locks while calling the request interceptors.
However, your lock will be acquired and released out of step with the
existing partial-order on locks. I would want to do a complete analysis
of all the interactions between the locks to convince myself that there
was no possibility of deadlock between your lock and the locks in
omniORB.
Could you give some details about this "existing partial-order on locks"?
Post by Duncan Grisby
There are all kinds of other situations in which the scheme might go
wrong. One example, in addition to the things you list above, is that there
are various situations in which calls can be retried due to network
errors, in which case the clientSendRequest interceptor can be called
more than once without corresponding calls to the clientReceiveReply
interceptor.
True, this is a problem... I wonder what happens if a call fails -- will
clientReceiveReply() still be called?
Is there any description of the omniORB callbacks' behaviour and the
guarantees they provide?

I wish omniORB provided us with a reactor model -- it could implement it
properly (while our attempts are basically hacks).
--
Sincerely yours,
Michael.