If you host your application in a 64-bit environment and have a lot of RAM, you can consider disabling cache size limits by changing this setting:
<!-- CACHING - DISABLE CACHE SIZE LIMITS
If true, Sitecore does not limit cache size growth and ignores any maximum cache sizes
specified in the web.config file.
Enabling this setting can improve the application's performance in 64-bit environments
by allowing Sitecore to take full advantage of the available memory.
After setting this value to true, monitor the system at regular intervals, as this
configuration can cause Sitecore to consume too much memory and cause Out Of Memory errors.
It is only recommended to set the setting to true in 64-bit environments.
Default value: false
-->
<setting name="Caching.DisableCacheSizeLimits" value="false" />
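If you decide to try it, the setting can be enabled with a standard Sitecore configuration patch file instead of editing web.config directly. A minimal sketch (the file name and App_Config\Include location are conventions I chose, not requirements):

```xml
<!-- App_Config/Include/zzz/DisableCacheSizeLimits.config (hypothetical file name) -->
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/"
               xmlns:set="http://www.sitecore.net/xmlconfig/set/">
  <sitecore>
    <settings>
      <!-- Let caches grow past the configured maximum sizes (64-bit environments only) -->
      <setting name="Caching.DisableCacheSizeLimits" set:value="true" />
    </settings>
  </sitecore>
</configuration>
```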
Without cache size limits, memory consumption will grow higher and higher over time. How high it goes depends on the size of your solution (mainly the number of items), so your memory chart can look something like this:
While changing this setting can improve your application's performance, you should monitor your system to prevent Sitecore from consuming too much memory. New entries will be added to the caches, but old ones will not be evicted, so at some point you can run out of free RAM and get OutOfMemory errors. In the worst case, your application can be restarted.
Proactive AutoHeal on Azure:
Your application can be automatically restarted by the Azure platform if your application process' private bytes exceed 90% of the limit for over 30 seconds. More info
Following is a real-world example from one of my projects. I restarted the application and started a smart publish of the whole content tree; in the end, more than 1M elements were published. Memory consumption reached 12GB. Sometimes it goes even higher, and then Azure restarts the application:
Here is another screenshot from dotMemory. Snapshots were taken after the application restart but before the smart publish, then after 53k, 104k and 151k published elements. As you can see, the number of objects stored in memory increases by about 4.5M for every 50k published elements:
Below is a screenshot of the details of the "after 151k" snapshot. The memory is used mostly by three
SqlServerDataProvider objects, one for each of the three Sitecore databases: core, master and web. A lot of memory is also wasted on duplicated string values. For example, if you have 1M product items and half of them are in the Approved workflow state while the other half are in the Draft state, then the ID of each of those two workflow states can be duplicated 0.5M times.
Fortunately, there are out-of-the-box ways to reduce memory consumption in Sitecore.
#1 Enable Sitecore.Interning
The following setting enables interning mechanisms that should reduce memory consumption by reusing immutable objects, such as strings or IDs, instead of creating new ones:
<!-- INTERNING ENABLED
If enabled, Sitecore would re-use same immutable object instances, and enable InternManager<T> API.
This can reduce memory consumption, and simplify Garbage Collection.
The tradeoff is additional CPU cost of putting an object to intern pool.
Default value: false.
-->
<setting name="Interning.Enabled" value="false"/>
Additionally, you can enable the following setting to also intern field values:
<!-- INTERNING KNOWN FIELD VALUES
If enabled, Sitecore would ensure to use intern pool for the known field values.
EXAMPLE: Every item under workflow would have one among known limited values.
Since a field value is cached as a string, a lot of duplicated strings representing the same workflow state would be present in memory.
This setting is useful on large solutions where memory consumption is high.
Default value: false.
-->
<setting name="Interning.InternKnownFieldValues" value="false"/>
If you enable it, Sitecore will try to reuse values for fields from the following list. Of course, you can add your own fields here.
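Both interning settings can be switched on together in a single patch file. A sketch along the usual patch-file lines (the file location is an assumption):

```xml
<!-- App_Config/Include/zzz/EnableInterning.config (hypothetical file name) -->
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/"
               xmlns:set="http://www.sitecore.net/xmlconfig/set/">
  <sitecore>
    <settings>
      <!-- Reuse immutable object instances (strings, IDs) via the intern pool -->
      <setting name="Interning.Enabled" set:value="true" />
      <!-- Also intern the values of known fields (e.g. workflow state fields) -->
      <setting name="Interning.InternKnownFieldValues" set:value="true" />
    </settings>
  </sitecore>
</configuration>
```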
Getting back to my real-world example, this is how the memory consumption chart looked after I enabled those two settings. As you can see, memory consumption is lower by about 4GB this time.
And this is how it looks in dotMemory. After 150k published elements, there are 9M fewer objects stored in memory than before:
The interning mechanism is very fast, so you shouldn't see any performance issues after enabling it. The price we pay is one
ConcurrentDictionary lookup per field, and it is not even visible during dotTrace profiling. I have also heard that Sitecore is considering enabling interning by default in future versions of the product.
#2 Use MemoryHealthMonitor
If you have enabled the interning mechanism and still worry about memory consumption, you can configure
MemoryHealthMonitor. If memory consumption exceeds the defined
Threshold, the memory monitor can clear the Sitecore caches and then force garbage collection. That is better than an application restart, right?
This is what your patch can look like:
<hook type="Sitecore.Diagnostics.MemoryMonitorHook, Sitecore.Kernel">
  <param desc="Threshold">12000MB</param>
  <param desc="Check interval">00:01:00</param>
  <param desc="Minimum time between log entries">00:00:20</param>
  <ClearCaches>true</ClearCaches>
  <GarbageCollect>true</GarbageCollect>
</hook>
MemoryHealthMonitor works only if you enable Performance Counters, but you can easily customize this behavior.
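Performance counters are controlled by the Counters.Enabled setting, so a patch like the following sketch should satisfy that prerequisite (verify against your version's web.config, where this setting is defined):

```xml
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/"
               xmlns:set="http://www.sitecore.net/xmlconfig/set/">
  <sitecore>
    <settings>
      <!-- The memory monitor reads memory usage through performance counters -->
      <setting name="Counters.Enabled" set:value="true" />
    </settings>
  </sitecore>
</configuration>
```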