Agarwal, Shelendra
2014-07-17 11:29:45 UTC
Hi All,
We are facing a problem where the memory consumed by our application grows very high, and the memory is not released even after the required work is complete. Memory management in this application is driven mainly by omniORB (version 4.1.4).
Our server-side application (in C++) acts as the servant: it queries the DB, fetches records, and fills a CORBA sequence. omniORB then takes care of delivering that data to the user-interface layer (written in VC++).
So omniORB is the intermediate layer between our core server application (the part querying the DB) and our UI, and we depend entirely on the omniORB CORBA sequence for memory allocation and de-allocation.
We can see from the glance command that, during this process, the heap memory (acquired for the CORBA sequences in question) grows roughly in proportion to the number of records we fetch from the DB, but it never goes down, even after the data has been handed to the UI by omniORB. Initially we thought this was a leak, but then realized the growth is not monotonic. Suppose we fetch, say, 10000 records from the DB and return them to the UI through omniORB; if we subsequently query a smaller set of data, memory does not grow further. It appears that omniORB uses a different kind of memory management from plain new/delete, because the memory addresses used in the first iteration are re-used in the second iteration. However, the consumed memory never comes down.
We also see at times that if a different user logs in to the UI and the DB query is served by a different application thread, then heap memory is allocated again for that thread. Overall, a large amount of memory is allocated and kept reserved as working memory instead of being released after each request has been served, so memory usage grows very high.
Leak-detection tools clearly report that there are no leaks, either in our application or in omniORB. This reinforces our impression that a different memory-management technique is in use. But our concern is that if each serving thread allocates a sizeable amount of heap memory and keeps it as its working area, we will run out of resources.
Please highlight any design glitches (or any recommendations).
Thanks & Regards,
Shelendra Agarwal
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://www.omniorb-support.com/pipermail/omniorb-list/attachments/20140717/4662e5ae/attachment.html>