[SPR-6870] Cache by-type lookups in DefaultListableBeanFactory Created: 18/Feb/10  Updated: 09/Jul/12  Resolved: 27/May/12

Status: Closed
Project: Spring Framework
Component/s: Core
Affects Version/s: 2.5.6
Fix Version/s: 3.2 M1

Type: Improvement Priority: Minor
Reporter: Eberhard Wolff Assignee: Chris Beams
Resolution: Complete Votes: 45
Labels: performance
Σ Remaining Estimate: Not Specified Remaining Estimate: Not Specified
Σ Time Spent: 3.5h Time Spent: 3.5h
Σ Original Estimate: Not Specified Original Estimate: Not Specified

Attachments: Text File SPR6870.patch     HTML File perf305MainPatch.html     HTML File perf305stock.html     HTML File perfDiffStockVsPatch1.html    
Issue Links:
Duplicate
is duplicated by SPR-9296 Major Performance Problem in Applicat... Resolved
is duplicated by SPR-7949 Improve performance of DefaultSinglet... Resolved
is duplicated by SPR-8095 Improve performance of bean lookup by... Resolved
Sub-Tasks:
Key       Summary                                       Type      Status    Assignee
SPR-9448  Backport "Cache by-type lookups in De...      Backport  Resolved  Chris Beams

 Description   

The autowiring algorithm tries to work out dependencies by actually building up the beans. If this fails, a BeanCreationException is thrown, caught, and a different way of handling the dependencies is tried. In some situations this results in slow performance, and it is probably also not the nicest programming style.



 Comments   
Comment by Kristian Rosenvold [ 02/Jun/10 ]

This problem is really bad if you use a lot of web-scoped beans: @Autowired with web-scoped beans currently accounts for about 75% of the total CPU time on our web server. Since the factory does not cache anything by type, it ends up iterating over the entire factory a lot, and web-scoped beans are created per request.
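
For context, a simplified sketch of what an uncached by-type lookup has to do (illustrative only, not the actual Spring source): every call walks every registered bean definition and tests assignability, and with request-scoped @Autowired beans that work is repeated for every injection.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Simplified sketch of an uncached by-type lookup. The Map stands in for the
// registry of bean definitions; the real factory also has to resolve bean
// classes, handle FactoryBeans, etc., which makes each scan even more expensive.
class UncachedByTypeLookupSketch {

    private final Map<String, Class<?>> beanDefinitions; // bean name -> bean class

    UncachedByTypeLookupSketch(Map<String, Class<?>> beanDefinitions) {
        this.beanDefinitions = beanDefinitions;
    }

    String[] getBeanNamesForType(Class<?> requiredType) {
        List<String> matches = new ArrayList<String>();
        for (Map.Entry<String, Class<?>> entry : beanDefinitions.entrySet()) {
            // checked for every definition, on every call
            if (requiredType.isAssignableFrom(entry.getValue())) {
                matches.add(entry.getKey());
            }
        }
        return matches.toArray(new String[matches.size()]);
    }
}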

Comment by Kristian Rosenvold [ 02/Jun/10 ]

And yes, this applies to Spring 3.0 too. The caching really needs to be updated to understand "by type" lookups.

Comment by Kristian Rosenvold [ 02/Jun/10 ]

It also severely affects the start-up time of your application context. About 50-75% of our application context startup time is spent iterating over the application context to determine autowire-by-type candidates.

Comment by j w [ 16/Jun/10 ]

+1. Autowiring by type is inefficient and, in my profiling, accounts for a large percentage of the request/response time.

Comment by Kristian Rosenvold [ 08/Jan/11 ]

There is now a workaround!

http://jawspeak.com/2010/11/28/spring-slow-autowiring-by-type-getbeannamesfortype-fix-10x-speed-boost-3600ms-to/
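
For anyone who cannot apply the linked fix directly, the general idea is to memoize by-type lookups in front of the bean factory. A rough sketch only (not the post's code; a naive version like this never invalidates the cache when bean definitions change):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.beans.factory.ListableBeanFactory;

// Illustrative wrapper that memoizes by-type lookups. A production version
// (like the one in the linked post, or the eventual framework fix) must also
// clear the cache whenever bean definitions are added or removed.
class ByTypeLookupCache {

    private final ListableBeanFactory beanFactory;
    private final Map<Class<?>, String[]> namesByType = new ConcurrentHashMap<Class<?>, String[]>();

    ByTypeLookupCache(ListableBeanFactory beanFactory) {
        this.beanFactory = beanFactory;
    }

    String[] getBeanNamesForType(Class<?> type) {
        String[] cached = namesByType.get(type);
        if (cached == null) {
            cached = beanFactory.getBeanNamesForType(type);
            namesByType.put(type, cached);
        }
        return cached;
    }
}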

Comment by j w [ 09/Jan/11 ]

Thanks Kristian,

My solution was not ideal, but I hope it helps you. I'm happy to contribute a patch if Spring wants it.

Comment by Lari Hotari [ 21/Mar/11 ]

Please fix this issue. Add caching to the DefaultListableBeanFactory.getBeanNamesForType method. This is a performance hotspot for many Spring applications (like Grails).

There should be caching for autowireBeanProperties and autowireByName too. Grails has caching for autowireByName in the DefaultListableBeanFactory implementation it uses: https://github.com/grails/grails-core/blob/master/grails-core/src/main/groovy/org/codehaus/groovy/grails/commons/spring/ReloadAwareAutowireCapableBeanFactory.java . Caching should be provided by default.
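
A rough sketch of the kind of per-class caching a factory can do for by-name autowiring (illustrative only, not the Grails source linked above): the set of writable properties that match a registered bean name depends only on the bean class, so it can be computed once per class instead of on every bean instantiation.

import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.beans.factory.BeanFactory;

// Illustrative per-class cache of by-name autowiring candidates: which writable
// properties of a class have a bean registered under the same name.
class ByNameAutowireCache {

    private final BeanFactory beanFactory;
    private final Map<Class<?>, List<String>> candidatesByClass =
            new ConcurrentHashMap<Class<?>, List<String>>();

    ByNameAutowireCache(BeanFactory beanFactory) {
        this.beanFactory = beanFactory;
    }

    List<String> autowireByNameCandidates(Class<?> beanClass) throws IntrospectionException {
        List<String> cached = candidatesByClass.get(beanClass);
        if (cached == null) {
            cached = new ArrayList<String>();
            for (PropertyDescriptor pd : Introspector.getBeanInfo(beanClass).getPropertyDescriptors()) {
                if (pd.getWriteMethod() != null && beanFactory.containsBean(pd.getName())) {
                    cached.add(pd.getName());
                }
            }
            candidatesByClass.put(beanClass, cached);
        }
        return cached;
    }
}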

Comment by Lari Hotari [ 21/Mar/11 ]

Might be related to SPR-7949.

Comment by Lari Hotari [ 21/Mar/11 ]

This issue might also be related to SPR-7988.

Comment by Kristian Rosenvold [ 18/Apr/11 ]

The enclosed patch adds caching to DefaultListableBeanFactory#getBeanNamesForType, providing a huge boost for any user of @Autowired on non-singleton beans. Even container startup time is significantly improved: we saved 20% on the overall startup time of our 1100-bean ApplicationContext.

In a running server environment (for a typical web application using web scopes and @Autowired), this patch will provide a similar reduction in overall CPU usage.

The patch was written against 3.0.5 but also applies cleanly at r4225 on trunk.

The patch changes no functional behaviour and adds no new tests, but it breaks no existing tests either. There are several existing tests that verify the caching does not break anything, something I repeatedly discovered while making the patch.

I have also enclosed profiler reports showing the loading of the application context before and after this patch, plus a diff-only report. The reports are not complete, but they show a run of one wired integration test with 860 autowired dependencies from a pool of 1021 beans (there is no external I/O in this test).

Comment by mck [ 20/Apr/11 ]

There's a (possibly related) performance patch for web applications using non-singleton beans submitted in SPR-7949.
That patch is really just the beginning of improving the bean factory code to use ReadWriteLocks instead of hard synchronization blocks...

Comment by Kristian Rosenvold [ 20/Apr/11 ]

This patch operates at a higher level in the stack than the SPR-7949 patch, and applying this patch will render the SPR-7949 patch useless. I have not tried the patch in SPR-7949, but I wrote another patch a couple of years ago using reader-writer locks, and it turned out to be very prone to deadlocks in highly concurrent environments. At the time I concluded this was a fundamental problem of using reader-writer locks in this setting. This patch has no deadlock potential.
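
As a generic illustration of why reader-writer locks are awkward in a re-entrant bean factory (not taken from either patch): java.util.concurrent's ReentrantReadWriteLock cannot be upgraded from a read lock to a write lock, so a lookup that holds the read lock and then re-enters the factory to create and register a bean can deadlock on a single thread.

import java.util.concurrent.locks.ReentrantReadWriteLock;

// Generic illustration of the read-to-write upgrade deadlock: the write lock
// acquisition below never succeeds because the same thread still holds the
// read lock, and ReentrantReadWriteLock does not support lock upgrading.
public class LockUpgradeDeadlock {

    private static final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    public static void main(String[] args) {
        lock.readLock().lock();          // e.g. taken for a by-type lookup
        try {
            lock.writeLock().lock();     // e.g. needed to register a newly created bean; blocks forever
            try {
                System.out.println("never reached");
            } finally {
                lock.writeLock().unlock();
            }
        } finally {
            lock.readLock().unlock();
        }
    }
}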

Comment by Aaron Digulla [ 23/Feb/12 ]

Any chance of seeing some work on this? The performance of looking up Spring beans by type is a bother for us as well.

We solved this outside of Spring by using this workaround:

public <T> T getBean(Class<T> type) {
    // Spring's generated name for an unnamed bean definition is "<class name>#0"
    String name = type.getName() + "#0";
    return getBean(name, type); // delegate to the by-name, type-checked lookup
}

The idea is that for most singletons there is only a single implementation, so the ID can be omitted and Spring will use the type of the bean to generate a name. This means that you either need to do some extra work for interfaces, or you must always request interface-backed beans by a specific ID and never by type.

But it's a poor solution. It would be pretty simple to cache the bean definition lookup in getBean(Class<T> type), and that would give a huge performance boost without the need for any hacks.
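
For context on the naming convention the workaround relies on: a bean definition registered without an explicit name gets a generated name of the form "<fully qualified class name>#0". A small, hedged illustration (MyService is a made-up class; the exact separator and counter format is a detail of the default bean name generator and may vary between Spring versions):

import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.support.BeanDefinitionReaderUtils;
import org.springframework.beans.factory.support.DefaultListableBeanFactory;

// Registers a bean definition without an explicit name and prints the name
// Spring generates for it, which is what the "#0" suffix above relies on.
public class GeneratedBeanNameExample {

    public static class MyService {
    }

    public static void main(String[] args) {
        DefaultListableBeanFactory factory = new DefaultListableBeanFactory();
        String generatedName = BeanDefinitionReaderUtils.registerWithGeneratedName(
                BeanDefinitionBuilder.genericBeanDefinition(MyService.class).getBeanDefinition(),
                factory);
        System.out.println(generatedName); // e.g. "GeneratedBeanNameExample$MyService#0"
    }
}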

Comment by Mindaugas Žakšauskas [ 24/Feb/12 ]

Spring, please solve this. Here's my story:

http://stackoverflow.com/questions/9429612/spring-wiring-by-type-is-slower-by-magnitude-than-wiring-by-name

Comment by Mike Youngstrom [ 24/Feb/12 ]

I've been told this is a risky fix because it is right in the core of some complex code. NOW is the perfect time to attempt it, since you are so early in the 3.2 release cycle!

Comment by Chris Beams [ 27/May/12 ]

Thanks, Kristian, for the patch. With minor modifications it has been applied, and it indeed solves the problem nicely.

At the time of this writing, the snapshot containing these changes has just finished publishing. There are a large number of watchers on this issue, so please consider taking the latest 3.2.0.BUILD-SNAPSHOT for a spin and providing any feedback. Note that we'll also be shipping this change with 3.2 M1 within a day or two, but naturally any feedback prior to that would be great.

commit 4c7a1c0a5403b35dd812dae1f2a753538928bb32 (HEAD, SPR-6870)
Author: Chris Beams <[email protected]>
Date:   Sun May 27 17:40:33 2012 +0300
 
    Cache by-type lookups in DefaultListableBeanFactory
 
    Prior to this change, by-type lookups using DLBF#getBeanNamesForType
    required traversal of all bean definitions within the bean factory
    in order to inspect their bean class for assignability to the target
    type. These operations are comparatively expensive and when there are a
    large number of beans registered within the container coupled with a
    large number of by-type lookups at runtime, the performance impact can
    be severe. The test introduced here demonstrates such a scenario clearly.
    
    This performance problem is likely to manifest in large Spring-based
    applications using non-singleton beans, particularly request-scoped
    beans that may be created and wired many thousands of times per second.
    
    This commit introduces a simple ConcurrentHashMap-based caching strategy
    for by-type lookups; container-wide assignability checks happen only
    once on the first by-type lookup and are afterwards cached by type
    with the values in the map being an array of all bean names assignable
    to that type. This means that at runtime when creating and autowiring
    non-singleton beans, the cost of by-type lookups is reduced to that of
    ConcurrentHashMap#get.
    
    Issue: SPR-6870
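
To get a feel for the scenario the commit message describes, here is a rough, hypothetical micro-benchmark (not the test that ships with Spring): register many bean definitions, then time repeated by-type lookups. Without the cache every call re-scans all definitions; with it only the first call does.

import org.springframework.beans.factory.support.DefaultListableBeanFactory;
import org.springframework.beans.factory.support.RootBeanDefinition;

// Rough, illustrative micro-benchmark: many bean definitions, many by-type
// lookups. Compare a pre-3.2 build against 3.2.0.BUILD-SNAPSHOT to see the
// effect of the new cache. Absolute numbers are machine-dependent.
public class ByTypeLookupBenchmark {

    public static void main(String[] args) {
        DefaultListableBeanFactory factory = new DefaultListableBeanFactory();
        for (int i = 0; i < 1000; i++) {
            factory.registerBeanDefinition("bean" + i, new RootBeanDefinition(Object.class));
        }

        long start = System.nanoTime();
        for (int i = 0; i < 100000; i++) {
            factory.getBeanNamesForType(Object.class);
        }
        long elapsedMillis = (System.nanoTime() - start) / 1000000;
        System.out.println("100000 by-type lookups took " + elapsedMillis + " ms");
    }
}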

Comment by Sylvain LAURENT [ 09/Jul/12 ]

For those watching this issue: it has been backported to 3.1.2; see SPR-9448.
