Pat Pascal
2006-11-28 03:58:17 UTC
We are having a problem getting a few of the omniORB configuration
parameters to work as we expected (and as indicated in the documentation
and config file comments). Or it could be that we just don't understand
how omniORB's server-side connection threads work.
In our use of omniORB 4.0.7 on RHEL 4, we want exactly one thread per
incoming client connection to our server. So I have set the
"threadPerConnectionPolicy" configuration parameter to a value of 1,
as per the following config file comment:
#####################################################################
# threadPerConnectionPolicy
#
# 1 means the ORB should dedicate one thread per connection on the
# server side. 0 means the ORB should dispatch a thread from a pool
# to a connection only when a request has arrived.
#
# Valid values = 0 or 1
#
threadPerConnectionPolicy = 1
I have also set the "maxServerThreadPerConnection" parameter to a value
of 1 (although this may not be necessary if "threadPerConnectionPolicy"
is set to 1 -- the config comments and documentation are not really
clear on whether this is actually required, or whether the parameter
only applies when the thread pool is in use).
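For reference, the corresponding line in our config file is simply:

maxServerThreadPerConnection = 1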
We also do not want idle client connections to the server to be closed
(we want the above thread to stay running until the connection is
closed by the client), so I set the "inConScanPeriod" parameter to 0
(instead of the default 180 seconds), as indicated in the following
config file comment:
########################################################################
# inConScanPeriod
#
# Idle connections shutdown. The ORB periodically scans all the
# incoming connections to detect if they are idle.
# If no operation has passed through a connection for a scan period,
# the ORB would treat this connection idle and shut it down.
#
# Valid values = (n >= 0 in seconds)
# 0 --> do not close idle connections.
#
inConScanPeriod = 0
Unfortunately, setting this parameter to 0 appears to have no effect:
our client connection thread (which was idle for more than 180 seconds)
was still closed at 180 seconds. I tried setting the parameter to a
larger number (e.g. 240 seconds), but the idle client connection still
closed at 180 seconds. If I set the parameter to a smaller value
(e.g. 60 seconds), the idle connection did close at the smaller
configured value. From my testing, it appears the parameter's value is
honoured if it is less than 180 seconds, but a value of zero or a value
greater than 180 seconds seems to be ignored.
I looked at the omniORB code where "inConScanPeriod" is used, but it
was not obvious what the problem might be, or what I should be doing
differently. I also turned on tracing, and the configuration values
omniORB logged correctly matched those in the config file (and likewise
when they were supplied on the command line).
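For example, an invocation that supplies the same settings on the
command line looks something like the following (the server binary name
and trace level are just illustrative):

./ourserver -ORBthreadPerConnectionPolicy 1 -ORBinConScanPeriod 0 -ORBtraceLevel 10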
I made one last attempt at keeping the idle client connection thread
open by setting the "scanGranularity" parameter to 0 (as indicated in
the config file comment below), but this also did not have the desired
effect (i.e. the idle connection thread still closed at 180 seconds):
#######################################################################
# scanGranularity
#
# The granularity at which the ORB scans for idle connections.
# This value determines the minimum value that inConScanPeriod or
# outConScanPeriod can be.
#
# Valid values = (n >= 0 in seconds)
# 0 --> do not scan for idle connections.
#
scanGranularity = 0
You may wonder why I believe the idle connection thread is closing at
180 seconds when these parameters are set to 0. In our server, we
obtain the thread ID inside each incoming client call and compare it to
the thread IDs seen for other incoming client connections. The thread
ID is unique if a new client connects within 180 seconds of the
previous successful client connection. However, if a new client
connects more than 180 seconds after the previous successful client
connection, the new client's call is dispatched on a thread with the
same ID as the previous client connection. In this last case, I am
assuming that omniORB has closed the previous client connection thread
(treating it as idle once the 180 second timeout expired), and that the
new client ends up with a new thread whose ID happens to have the same
value as the previous client's connection thread ID.
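To make that concrete, here is a minimal sketch of how we obtain the
dispatch thread ID; it uses omniORB's omnithread library, and the
helper name is just a placeholder for code we call from inside each
servant operation:

#include <omnithread.h>

// Returns the id of the omni_thread that is dispatching the current
// call, or -1 if the calling thread was not created by omnithread.
static int current_dispatch_thread_id()
{
    omni_thread* self = omni_thread::self();
    return self ? self->id() : -1;
}

We call this at the top of each servant operation and compare the
result against the IDs recorded for earlier client connections.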
Can anyone comment on whether we are completely misunderstanding the
use of these configuration parameters and the operation of the
connection threads, or whether there is actually a defect in omniORB
(or its documentation)?
Thanks,
Pat Pascal