The Ehcache Technical FAQ answers frequently asked questions about how to use Ehcache, integrate it with other products, and resolve issues.
The source code is distributed in the root directory of the download. It is also available through SVN.
Yes. Create your CacheManager using new CacheManager(...) and keep hold of the reference. The singleton approach, accessible with the getInstance(...) method, is also available. Remember that Ehcache can support hundreds of caches within one CacheManager. You would use separate CacheManagers where you want different configurations. The Hibernate EhCacheProvider has also been updated to support this behavior.
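For illustration, a minimal sketch of the two styles (the configuration file names are hypothetical):

import net.sf.ehcache.CacheManager;

// Instance mode: create and hold your own references (hypothetical config files)
CacheManager managerA = new CacheManager("ehcache-a.xml");
CacheManager managerB = new CacheManager("ehcache-b.xml");

// Singleton mode: the same instance is returned on every call
CacheManager singleton = CacheManager.create();
CacheManager same = CacheManager.getInstance();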
The documentation has been updated with comprehensive coverage of the schema for Ehcache and all elements and attributes, including whether they are mandatory. See the Configuration page.
Automatic element versioning works only with unclustered MemoryStore caches. Distributed caches or caches that use off-heap or disk stores cannot use auto-versioning. (Distributed caches require Terracotta BigMemory Max, and off-heap storage requires either Terracotta BigMemory Max or BigMemory Go.)
To enable auto-versioning, set the corresponding system property to true (it is false by default). Manual (user-provided) versioning of cache elements is ignored when auto-versioning is in effect. Note that if this property is turned on for one of the ineligible caches, auto-versioning will silently fail.
The Ehcache Fast Restart Store (FRS) feature provides the option to store a fully consistent copy of the in-memory data on the local disk at all times. After any kind of shutdown — planned or unplanned — the next time your application starts up, all of the data that was in memory is still available and very quickly accessible. To configure your cache for disk persistence, use the "localRestartable" persistence strategy. For more information, refer to Persistence and Restartability.
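For example, a sketch of a restartable cache declaration (the cache name and size are placeholders; Fast Restart requires an edition that includes FRS, such as BigMemory Go or BigMemory Max):

<cache name="restartableCache" maxEntriesLocalHeap="10000">
    <persistence strategy="localRestartable"/>
</cache>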
The minimum configuration you need to get replicated caching going is:
<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
    properties="peerDiscovery=automatic, multicastGroupAddress=230.0.0.1, multicastGroupPort=4446"/>
<cacheManagerPeerListenerFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"/>
and then at least one cache declaration with a cacheEventListenerFactory in it. An example cache is:
<cache name="sampleDistributedCache1" maxEntriesLocalHeap="10" eternal="false" timeToIdleSeconds="100" timeToLiveSeconds="100"> <cacheEventListenerFactory class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"/> </cache>
Each peer server can have the same configuration.
Cache event listening works, but it does not get plumbed into the peering mechanism. The current API does not have a CacheManager event for a cache configuration change. You can, however, make it work by calling the notifyCacheAdded event.
The Terracotta server provides an additional store, generally referred to as the Level 2 or L2 store.
The JVM MemoryStore and OffHeapStore in the local node are referred to as the L1 store.
maxBytesLocalHeap and maxBytesLocalOffHeap together comprise the maximum size of the local L1 store.
maxBytesLocalDisk is overridden when using Terracotta to provide the L2 size. The L2 size is effectively the maximum cache size.
'localTempSwap' configures overflow to the local DiskStore. When using Terracotta, the local DiskStore is not used, and the cache should be configured for the 'distributed' persistence strategy. When a cache configured for the 'distributed' persistence strategy gets full, it overflows to the Terracotta L2 store running on the server. The L2
can be further configured with the
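As a rough illustration of the sizing attributes discussed above (a sketch only, with placeholder values):

<cache name="clusteredCache"
       maxBytesLocalHeap="256M"
       maxBytesLocalOffHeap="2G">  <!-- off-heap sizing requires BigMemory -->
    <terracotta/>
</cache>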
There are two patterns available: write-through and write-behind caching. In write-through caching, writes to the cache cause writes to an underlying resource. The cache acts as a facade to the underlying resource. With this pattern, it often makes sense to read through the cache too. Write-behind caching uses the same client API; however, the write happens asynchronously.
While file systems or web-service clients can underlie the facade of a write-through cache, the most common underlying resource is a database.
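As an illustration, a hedged sketch of a write-behind configuration (the cache name, values, and factory class are placeholders; the factory would return your own CacheWriter implementation):

<cache name="writeBehindCache" maxEntriesLocalHeap="10000">
    <cacheWriter writeMode="write-behind" maxWriteDelay="5" writeBatching="true" writeBatchSize="100">
        <cacheWriterFactory class="com.example.MyCacheWriterFactory"/>
    </cacheWriter>
</cache>

Setting writeMode="write-through" instead makes the writer synchronous with the cache write.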
You can externalize the value of the terracottaConfig url from the ehcache.xml file by using a system property for the url attribute: reference the property in your ehcache.xml config file, and define my.terracotta.server.url as a system property.
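For example (assuming Ehcache's ${} system-property substitution syntax):

<terracottaConfig url="${my.terracotta.server.url}"/>

and start the JVM with, for example, -Dmy.terracotta.server.url=localhost:9510 (host and port are placeholders).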
The following classloaders are tried in this order:
Thread.currentThread().getContextClassLoader() (so set this to override)
CacheManager - the classloader that loaded the ehcache-terracotta.jar
It has been deprecated. Use the "consistency" attribute instead.
Any configuration files using coherent=true will be mapped to consistency=strong, and coherent=false will be mapped to consistency=eventual.
<cacheManagerEventListenerFactory class="" properties=""/>
No, it is unrelated. It is for listening to changes in your local CacheManager.
Yes. Just set the persistence strategy of the cache to "none".
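For example, a minimal in-memory-only cache declaration (name and size are placeholders):

<cache name="memoryOnlyCache" maxEntriesLocalHeap="10000">
    <persistence strategy="none"/>
</cache>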
As of Ehcache 2.0, this is not possible. You can set the maxEntriesLocalHeap to 1, but setting the maxSize to 0 now gives an infinite capacity.
Remember that a value in a cache element is globally accessible from multiple threads. It is inherently not thread-safe to modify the value. It is safer to retrieve a value, delete the cache element, and then reinsert the value. The UpdatingCacheEntryFactory does work by modifying the contents of values in place in the cache. This is outside of the core of Ehcache and is targeted at high performance CacheEntryFactories for SelfPopulatingCaches.
As of Ehcache 1.2, they can be stored in caches with MemoryStores. If an attempt is made to replicate or overflow a non-serializable element to disk, the element is removed and a warning is logged.
These are three cache attributes that can be used to build an effective eviction configuration. It is advised to test and tune these values to help optimize cache performance. TTI (timeToIdleSeconds) is the maximum number of seconds that an element can exist in the cache without being accessed, while TTL (timeToLiveSeconds) is the maximum number of seconds that an element can exist in the cache whether or not it has been accessed. If the eternal flag is set, elements are allowed to exist in the cache eternally and none are evicted. The eternal setting overrides any TTI or TTL settings.
These attributes are set for the entire cache in the configuration file. If you want to set them per element, you must do it programmatically.
For more information, see Setting Expiration.
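A brief sketch of setting expiration per element programmatically (the cache name and values are illustrative):

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

Cache cache = CacheManager.create().getCache("sampleCache1");  // placeholder cache defined in ehcache.xml
Element element = new Element("key1", "value1");
element.setTimeToLive(300);  // seconds; overrides the cache-wide TTL for this element
element.setTimeToIdle(60);   // seconds; overrides the cache-wide TTI for this element
cache.put(element);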
See this recipe.
When the maximum number of elements in memory is reached, the least recently used (LRU) element is removed. "Used" in this case means inserted with a put or accessed with a get. If the cache is not configured with a persistence strategy, the LRU element is evicted. If the cache is configured for "localTempSwap", the LRU element is flushed asynchronously to the DiskStore.
Because the MemoryStore has a fixed maximum number of elements, it will have a maximum memory use equal to the number of elements multiplied by the average size. When an element is added beyond the maximum size, the LRU element gets pushed into the DiskStore. While we could have an expiry thread expire elements periodically, it is far more efficient to check only when we need to. The tradeoff is higher average memory use. The expiry thread keeps the DiskStore clean. There is less contention for the DiskStore's locks because commonly used values are in the MemoryStore. We mount our DiskStore on Linux using RAMFS, so it is using OS memory. While we have more of this than the 2GB 32-bit process size limit, it is still an expensive resource; the DiskStore thread keeps it under control. If you are concerned about CPU utilization and locking in the DiskStore, you can set diskExpiryThreadIntervalSeconds to a high number, such as 1 day, which effectively turns the expiry thread off.
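For example, to check for expiry only once a day (a sketch; other attributes omitted and values are placeholders):

<cache name="bigCache" maxEntriesLocalHeap="10000" diskExpiryThreadIntervalSeconds="86400">
    <persistence strategy="localTempSwap"/>
</cache>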
The amount of memory consumed per thread is determined by the stack size, which is set using -Xss. The amount varies by OS. The default is 512KB on Linux, but 100KB is also a recommended setting. Threads are created per cache as follows:
* DiskStore expiry thread - if a DiskStore is used
* DiskStore spool thread - if a DiskStore is used
* Replication thread - if asynchronous replication is configured
If you are not doing any of the above, no extra threads are created.
With JDK versions prior to JDK1.5, the number of RMI registries is limited to one per virtual machine, and therefore Ehcache is limited to one CacheManager per virtual machine operating in a replicated cache topology. Because this is the expected deployment configuration, however,
there should be no practical effect. The telltale error is
java.rmi.server.ExportException: internal error: ObjID already in use
On JDK1.5 and higher, it is possible to have multiple CacheManagers per VM, each participating in the same or different replicated cache topologies.
Indeed the replication tests do this with 5 CacheManagers on the same VM, all run from JUnit.
timeToIdle and timeToLive work as usual. Note, however, that the eternal attribute, when set to "true", overrides timeToLive and timeToIdle so that no expiration can take place.
Note also that expired elements are not necessarily evicted elements, and that evicted elements are not necessarily expired elements.
See the Terracotta documentation for more information on expiration and eviction in a distributed cache.
Ehcache 1.7 introduced a less fine-grained age recording in Element
which rounds up to the nearest second. Some APIs may be sensitive to this change.
In Ehcache, elements can have overridden TTI and TTLs. Terracotta distributed Ehcache supports this functionality.
Standalone Ehcache supports LRU, LFU, and FIFO eviction strategies, as well as custom evictors. For more information, refer to Cache Eviction Algorithms.
Note: There is no user selection of eviction algorithms with clustered caches. The attribute MemoryStoreEvictionPolicy is ignored (a clock eviction policy is used instead), and if allowed to remain in a clustered cache configuration, the MemoryStoreEvictionPolicy may cause an exception.
The Terracotta Server Array algorithm is optimised for fast server-side performance. It does not evict as soon as the store is full, but periodically checks the size. Based on how overfull it is (call that number n), its next eviction pass evicts n elements. It picks a random sample 30% larger than n, then works through the sample and:
Two things can cause elements to be flushed from L1 to L2.
Note that L2 means the Terracotta Server Array, available with BigMemory Max.
An element, key, and value in Ehcache is guaranteed to be equivalent (.equals() returns true) compared to another as it moves between stores. In the express install or serialization mode of Terracotta, which is the default, the guarantee is the same: an element in Ehcache is guaranteed to .equals() another as it moves between stores. However, such elements are not identical objects, so an identity comparison (==) between them is false as they move between stores. In identity mode, Terracotta makes a further guarantee that the key and the value are identical, so that == returns true. This is achieved using extensions to the Java Memory Model.
JDK 1.6 or higher.
Yes. You use 1 instance of Ehcache and 1 ehcache.xml. You configure your caches with Hibernate names for use by Hibernate. You can have other caches that you interact with directly, outside of Hibernate.
For Hibernate we have about 80 Domain Object caches, 10 StandardQueryCaches, 15 Domain Object Collection caches. We have around 5 general caches we interact with directly using BlockingCacheManager. We have 15 general caches we interact with directly using SelfPopulatingCacheManager. You can use one of those or you can use CacheManager directly. See the tests for example code on using the caches directly. Look at CacheManagerTest, CacheTest and SelfPopulatingCacheTest.
This is a Hibernate 3 bug. See http://opensource.atlassian.com/projects/hibernate/browse/HHH-3392 for tracking. It is fixed in 3.3.0.CR2, which was released in July 2008.
See the OSGi section for Enterprise Ehcache. If you are not using distributed cache, leave out the
<terracotta> element shown in the configuration example.
Version 1.6 is compatible. See Google App Engine Caching.
ActiveMQ seems to have a bug, in at least ActiveMQ 5.1, where it does not clean up temporary queues even though they have been deleted. That bug appears to be long standing but was thought to have been fixed. See http://issues.apache.org/activemq/browse/AMQ-1255.
The JMSCacheLoader uses temporary reply queues when loading. The ActiveMQ issue is readily reproduced in Ehcache integration testing. Accordingly, use of the JMSCacheLoader with ActiveMQ is not recommended. Open MQ tests fine.
Tomcat is such a common deployment option for applications using Ehcache that there is a page on known issues and recommended practices. See Tomcat Issues and Best Practices.
WARN [Replication Thread] RMIAsynchronousCacheReplicator.flushReplicationQueue(324) | Unable to send message to remote peer. Message was: Connection refused to host: 127.0.0.1; nested exception is: java.net.ConnectException: Connection refused java.rmi.ConnectException: Connection refused to host: 127.0.0.1; nested exception is: java.net.ConnectException: Connection refused
This is caused by a 2008 change to the Ubuntu/Debian Linux default network configuration.
Essentially, the Java call InetAddress.getLocalHost() always returns the loopback address, 127.0.0.1. Why? Because in these distros, a system call of $ hostname always returns an address mapped onto the loopback device. This causes Ehcache's RMI peer creation logic to always assign the loopback address, which causes the error you are seeing. All you need to do is crack open the network config and make sure that the hostname of the machine resolves to a valid network address accessible by other peers on the network.
Some app servers do not permit the creation of message listeners. This issue has been reported on WebSphere 5; WebSphere 4 did allow it. Tomcat allows it, GlassFish allows it, and Jetty allows it. Usually there is a way to turn off strict EJB compliance checks in your app server. See your vendor documentation.
JRockit has a bug where it reports the young generation size instead of the old generation size to the CacheManager, so Ehcache over-aggressively flushes to L2 when using percentage-of-max-heap-based configuration. As a workaround, set maxEntriesLocalHeap instead.
For the latest compatibility information, see Release Information.
A local CacheEventListener will work locally, but other nodes in a Terracotta cluster are not notified unless the TerracottaCacheEventReplicationFactory event listener is registered for the cache.
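For example (a sketch assuming a Terracotta-clustered cache; the name and size are placeholders):

<cache name="clusteredCache" maxEntriesLocalHeap="10000">
    <cacheEventListenerFactory class="net.sf.ehcache.event.TerracottaCacheEventReplicationFactory"/>
    <terracotta/>
</cache>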
Use the Cache.getQuiet() method. It returns an element without updating statistics.
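For example (assuming an existing cache reference and a key that may be present):

Element quiet = cache.getQuiet("key1");  // does not update statistics or the element's last-access time
Element counted = cache.get("key1");     // updates statistics and the last-access time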
Set the system property
net.sf.ehcache.disabled=true to disable Ehcache. This can easily be done using
-Dnet.sf.ehcache.disabled=true in the command line. If Ehcache is disabled, no elements will be added to a cache.
This is not possible. However, you can achieve the same result as follows:
Create a new cache:
Cache cache = new Cache("test2", 1, true, true, 0, 0, true, 120, ...); cacheManager.addCache(cache);
See the JavaDoc for the full parameters.
Get a list of keys using
cache.getKeys, then get each element and put it in the new cache.
None of this will use much memory because the new cache elements have values that reference the same data as the original cache.
Use cacheManager.removeCache("oldcachename") to remove the original cache.
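A rough sketch of the whole procedure (cache names and constructor arguments are placeholders; see the JavaDoc for the constructor you need):

CacheManager cacheManager = CacheManager.create();
Cache oldCache = cacheManager.getCache("oldcachename");

// Create the new cache and register it before use
Cache newCache = new Cache("newcachename", 1000, true, false, 120, 60);
cacheManager.addCache(newCache);

// Copy every element; values are shared by reference, so little extra memory is used
for (Object key : oldCache.getKeys()) {
    Element element = oldCache.get(key);
    if (element != null) {
        newCache.put(new Element(key, element.getObjectValue()));
    }
}

cacheManager.removeCache("oldcachename");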
Yes, it is recommended. If the JVM keeps running after you stop using Ehcache, you should call CacheManager.getInstance().shutdown() so that the threads are stopped and cache memory is released back to the JVM. However, if the CacheManager does not get shut down, it should not be a problem. There is a shutdown hook which calls the shutdown on JVM exit. This is explained in the documentation here.
Yes. When you call CacheManager.shutdown(), it sets the singleton in CacheManager to null. If you try to use a cache after this, you will get a CacheException. You need to call CacheManager.create(), which creates a brand new CacheManager, good to go. Internally, the CacheManager singleton gets set to the new one, so you can create and shut down as many times as you like. There is a test which explicitly confirms this behavior. See CacheManagerTest#testCreateShutdownCreate().
Statistics gathering is disabled by default in order to optimize performance. You can enable statistics gathering in caches in one of the following ways:
In the Terracotta Developers Console.
To function, certain features in the Developers Console require statistics to be enabled.
Statistics should be enabled when using the Ehcache Monitor.
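Statistics can also be switched on in the cache declaration (a sketch assuming Ehcache 2.x, where statistics gathering defaults to off; name and size are placeholders):

<cache name="sampleCache1" maxEntriesLocalHeap="10000" statistics="true"/>
<!-- or programmatically, in the same Ehcache 2.x API: cache.setStatisticsEnabled(true); -->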
Ehcache does not experience deadlocks. However, deadlocks in your application code can be detected with certain tools, such as JConsole.
You should see the listener port open on each server. You can use the replicated cache debug tool to see what is going on. (See Remote Network Debugging and Monitoring for Distributed Caches).
If you see nothing happening while cache operations should be going through, enable trace (Log4j) or finest (JDK) level logging for net.sf.ehcache.distribution in the logging configuration being used by the debugger.
A large volume of log messages will appear. The normal problem is that the CacheManager has not joined the replicated cache topology.
Look for the list of cache peers.
Finally, the debugger in Ehcache 1.5 has been improved to provide far more information on the caches that are
replicated and events which are occurring.
Terracotta clusters remember the configuration settings. You need to delete the cluster to change cache settings of Terracotta distributed caches. You can also use the Terracotta Dev Console to apply persistent changes to common cache settings.
SampledCache and SampledCacheManager MBeans are made available in the Terracotta Developer Console.
These are time-based gauges, based on once-per-second measurements. They are different from the JMX MBeans available through the
A newly created cache must be added to a CacheManager before it can be used; adding it is what initialises it. Use code like the following:
CacheManager manager = CacheManager.create();
Cache myCache = new Cache("testDiskOnly", 0, true, false, 5, 2);
manager.addCache(myCache);
ConcurrentHashMap does not provide an eviction mechanism. We add that ourselves. For caches larger than 5000 elements, we create an extra ArrayList equal to the size of the cache which holds keys. This can be an issue with larger keys. An optimization which cache clients can use is:
http://www.codeinstructions.com/2008/09/instance-pools-with-weakhashmap.html describes one such approach. To reduce the number of key instances in memory to just one per logical key, all puts to the underlying ConcurrentHashMap could be replaced by map.put(pool.replace(key), value), as well as keyArray.set(index, pool.replace(key)). You can take this approach when producing the keys before handing them over to Ehcache.
Even with this approach, there is still some added overhead: a reference consumed by each ArrayList element. Update: Ehcache 2.0 introduces a new MemoryStore implementation based on a custom ConcurrentHashMap. This version provides fast iteration and does away with the need for the keyArray, bringing memory use back down to pre-1.6 levels. Combined with other memory optimizations made to Element in 1.7, memory use will actually be considerably lower than pre-1.6 levels.
It was configured for temporary disk swapping that is cleared after a restart. For crash-resilient persistence, configure your cache with persistenceStrategy="localRestartable", or use distributed cache, which is backed by the Terracotta Server Array.
A clustered cache created programmatically on one application node does not automatically appear on another node in the cluster. The expected behavior is that caches (whether clustered or not) added programmatically on one client are not visible on other clients. CacheManagers are not clustered, only caches are. So if you want to add a cache programmatically, you would have to add it on all the clients. If that cache is configured to be Terracotta clustered, then it will use the same store, and changes applied to cache entries on one client will automatically reflect on the second client.
Ehcache uses SoftReferences with asynchronous RMI-based replication, so that replicating caches do not run out of memory if the network is interrupted. Elements scheduled for replication will be collected instead. If this is happening, you will see warning messages from the replicator. It is also possible that a SoftReference can be reclaimed during the sending, in which case you will see a debug level message in the receiving CachePeer. Some things you can do to fix them:
Having done the above, SoftReferences will then only be reclaimed if there is some interruption to replication and the message queue gets dangerously high.
Because the TSA itself provides both disk persistence (if required) and scale out, the local DiskStore is not available with Terracotta clustered caches.
Cache.removeAll() seems to take a long time. Why?
When removeAll() is used with distributed caches, the operation has to clear entries in the Terracotta Server Array as well as in the client, so additional time is required for it to complete.
TTL/TTI are meant to control the relevancy of data for business reasons, not as an operational constraint for managing resources. Without the occurrence of so-called "inline" eviction, which happens whenever an expired element is accessed, it is possible for expired elements
to continue existing in the Terracotta Server Array. This is to minimize the high cost of checking
individual elements for expiration. To force Terracotta servers to inspect element TTL/TTIs (which lowers performance), set
ehcache.storageStrategy.dcv2.perElementTTITTL.enabled=true in system properties.
The Terracotta client library runs with your application and is often involved in operations which your application is not necessarily aware of. These operations may get interrupted, too, which is not something the Terracotta client can anticipate. Ensure that your application does not interrupt clustered threads. This is a common error that can cause the Terracotta client to shut down or go into an error state, after which it will have to be restarted.
It isn't. This is a problem with using a database as an integration point. Integration via a message queue, with a Terracotta clustered application acting as a message queue listener and updating the database, avoids this, as would the application receiving a REST or SOAP call and writing to the database. With Oracle AQ, for example, a database trigger can enqueue the change for the application to poll, or AQ can push it up to the application.
There are a few ways to try to solve this, in order of preference:
The backport-concurrent library is used in Ehcache to provide java.util.concurrent facilities for Java 4 through Java 6. Use either the Java 4 version, which is compatible with Java 4-6, or the version for your JDK.
If you use this default implementation, the cache name is called "SimplePageCachingFilter". You need to define a cache with that name in ehcache.xml. If you override CachingFilter, you are required to set your own cache name.
WARN CacheManager ... Creating a new instance of CacheManager using the diskStorePath "C:\temp\tempcache" which is already used by an existing CacheManager.
This means that, for some reason, your application is trying to create one or more additional instances of Ehcache's CacheManager with the same configuration. Ehcache automatically resolves the disk path conflict, which works fine. To eliminate the warning:
Use the singleton creation method, CacheManager.getInstance(). In Hibernate, there is a special provider for this called net.sf.ehcache.hibernate.SingletonEhCacheProvider. See Hibernate.
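For example, in hibernate.cfg.xml or hibernate.properties (a sketch for Hibernate 3.2-style configuration; Hibernate 3.3+ uses a region factory class instead):

hibernate.cache.provider_class=net.sf.ehcache.hibernate.SingletonEhCacheProvider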
From Ehcache 2.4, the defaultCache is optional. However, when you programmatically add a cache by name using CacheManager.addCache(String cacheName), a default cache is expected to exist in the CacheManager configuration. To fix this error, add a defaultCache to the CacheManager's configuration.
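For example, a minimal defaultCache (values are placeholders):

<defaultCache maxEntriesLocalHeap="1000" eternal="false" timeToIdleSeconds="120" timeToLiveSeconds="120"/>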
The error is
net.sf.ehcache.distribution.RemoteCacheException: Error doing put to remote peer. Message was: Error unmarshaling return header; nested exception is: java.net.SocketTimeoutException: Read timed out.
This is typically solved by increasing
socketTimeoutMillis. This setting is the amount of time a sender
should wait for the call to the remote peer to complete. How long it takes depends on the network and
the size of the elements being replicated.
The configuration that controls this is the socketTimeoutMillis setting in the cacheManagerPeerListenerFactory properties. A value of 120000 seems to work well for most scenarios.
<cacheManagerPeerListenerFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
    properties="hostName=fully_qualified_hostname_or_ip, port=40001, socketTimeoutMillis=120000"/>
You have not configured a Terracotta server for Ehcache to connect to, or that server isn't reachable.
You need to include the ehcache-terracotta jar in your classpath.