
Saturday, October 3, 2009

To inject or not to inject

Last week I spent a couple of days at a client investigating some strange behavior in their application. The application was a calculator that the company's clients use to price the services the company offers. It was written in Java 6 and used Tapestry for the view and a Spring/Hibernate/AspectJ combination for the backend, which talked to an Oracle database. Tomcat was used as the runtime container for the application.

The first problem to tackle was that dependency injection did not always work: sometimes dependencies were injected, other times not. When I looked at the application I noticed that an aspect was responsible for injecting repositories into domain objects. As soon as a domain object was deserialized or instantiated, the AnnotationBeanConfigurerAspect was supposed to inject the repository. But the injection did not always take place. I further discovered that load-time weaving was used to apply the aspect's behavior. The configuration looked correct: the proper class loader was configured in Tomcat and the necessary Spring configuration was present. So what was causing this awkward behavior? Because time was limited, I decided to switch to compile-time weaving. We compiled the classes with the AspectJ compiler and redeployed the application, and this time the dependency injection worked as expected. We did not investigate further why load-time weaving misbehaved. If you have any suggestions, please mention them.
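To illustrate what the failing aspect was supposed to do, here is a plain-Java sketch of the pattern behind Spring's AnnotationBeanConfigurerAspect. The names (Order, OrderRepository, Inject) are made up for the example, and in the real setup you do not call configure yourself: AspectJ weaves that call into every constructor and deserialization path, which is exactly the step that silently failed under load-time weaving.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

public class ConfigurableSketch {

    // Marker for fields the "aspect" should wire, like Spring's @Autowired.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface Inject {}

    interface OrderRepository {
        String findName(long id);
    }

    // Stand-in for the Spring bean factory: maps a type to the instance to inject.
    static final Map<Class<?>, Object> beans = new HashMap<Class<?>, Object>();

    // Roughly what the woven aspect runs after construction (and after
    // deserialization): scan annotated fields and inject matching beans.
    static void configure(Object target) {
        for (Field f : target.getClass().getDeclaredFields()) {
            if (f.isAnnotationPresent(Inject.class)) {
                f.setAccessible(true);
                try {
                    f.set(target, beans.get(f.getType()));
                } catch (IllegalAccessException e) {
                    throw new IllegalStateException(e);
                }
            }
        }
    }

    // Domain object created with plain `new`. Here the constructor calls
    // configure(this) explicitly; with weaving, AspectJ adds that call for you.
    static class Order {
        @Inject OrderRepository repository;
        Order() {
            configure(this);
        }
    }
}
```

When weaving silently does nothing, it is exactly as if the configure call above were missing: the object is created, but its repository field stays null until first use blows up.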

The second problem was that after a certain period of inactivity, the application crashed on the next access. The stack trace revealed a problem with the underlying database connection: the message indicated that the connection was already closed. What was happening was that Oracle closed all idle connections after a specific amount of time, in our case 30 minutes. This is configured in the sqlnet.ora file with the SQLNET.EXPIRE_TIME parameter. To overcome this, we added the following configuration to the Commons DBCP connection pool:


validationQuery=SELECT 1 FROM DUAL

The result of this configuration is that every connection handed out by the pool is first tested with the specified validation query. If the test fails, the connection is discarded from the pool and another one is returned. This solved our problem. I must mention that this configuration can have a negative impact on performance, because every borrowed connection is checked for validity; that is also why the simplest possible Oracle query is used as the test. Since the application did not handle many concurrent transactions, this setting was acceptable.
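The borrow-time behavior described above can be sketched in a few lines of plain Java. This is an illustration of the test-on-borrow idea, not DBCP's actual implementation, and all names (Pool, PooledConnection, ConnectionFactory) are invented for the example; isValid() plays the role of executing the validation query.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class TestOnBorrowSketch {

    // Stand-in for a pooled JDBC connection; isValid() represents running
    // the validation query ("SELECT 1 FROM DUAL") against the database.
    interface PooledConnection {
        boolean isValid();
        void close();
    }

    interface ConnectionFactory {
        PooledConnection create();
    }

    // Simple fake used to demonstrate the pool's behavior.
    static class FakeConnection implements PooledConnection {
        private final boolean valid;
        boolean closed = false;
        FakeConnection(boolean valid) { this.valid = valid; }
        public boolean isValid() { return valid && !closed; }
        public void close() { closed = true; }
    }

    static class Pool {
        private final Deque<PooledConnection> idle = new ArrayDeque<PooledConnection>();
        private final ConnectionFactory factory;

        Pool(ConnectionFactory factory) { this.factory = factory; }

        void release(PooledConnection c) { idle.addLast(c); }

        // Test-on-borrow: validate before handing out, evict on failure.
        PooledConnection borrow() {
            while (!idle.isEmpty()) {
                PooledConnection c = idle.pollFirst();
                if (c.isValid()) {
                    return c;        // validation succeeded: safe to hand out
                }
                c.close();           // stale (e.g. closed by the server): discard
            }
            return factory.create(); // no valid idle connection left: create one
        }
    }
}
```

The cost is visible in the borrow method: every successful borrow pays for one round trip to the database before the application sees the connection, which is exactly the overhead mentioned above.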

If performance is an issue, I would recommend using an eviction strategy instead, where an evictor thread periodically checks the idle connections for validity. See the Commons DBCP configuration guide for an explanation of how to configure this.
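For Commons DBCP 1.x this would look roughly like the following pool properties; the property names are DBCP's, but the values here are only illustrative and should be tuned to your own idle timeout:

```
testWhileIdle=true
testOnBorrow=false
timeBetweenEvictionRunsMillis=300000
numTestsPerEvictionRun=3
minEvictableIdleTimeMillis=1500000
validationQuery=SELECT 1 FROM DUAL
```

This moves the validation cost out of the request path: connections are checked in the background every five minutes, and borrowing no longer pays for a round trip to the database.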

I hope some of you may find these solutions helpful in your own projects.
