Wednesday, April 27, 2011

Performance Impact of Using Cache and Nocache in Sequences

When CACHE is used, the data dictionary is updated only once per cache refill, i.e. when the cached values are exhausted. If you cache 1,000 values, the data dictionary is updated once for every 1,000 sequence numbers used.
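For illustration, a sequence with a 1,000-value cache might be created like this (the sequence name here is hypothetical):

```sql
-- Hypothetical sequence: Oracle updates the data dictionary row
-- only once per 1,000 values handed out, not once per NEXTVAL.
CREATE SEQUENCE order_seq
  START WITH 1
  INCREMENT BY 1
  CACHE 1000;
```

Note that cached values still in memory are lost on instance shutdown or crash, so a large cache trades potential gaps in the number series for fewer dictionary updates.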

When NOCACHE is used, the data dictionary is updated for every single sequence number. These frequent updates serialize access to the sequence's dictionary row, so when five sessions simultaneously request NEXTVAL, four of them end up waiting on the "row cache lock" wait event, resulting in poor performance.
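The arithmetic behind this can be sketched with a toy model (this is our own illustration, not Oracle internals; the function name is invented): NOCACHE behaves like a cache size of 1, so the number of data-dictionary updates is the number of values handed out divided by the cache size, rounded up.

```python
import math

def dict_updates(total_nextvals: int, cache_size: int) -> int:
    """Toy model of how many data-dictionary updates are needed to
    hand out `total_nextvals` sequence values. NOCACHE behaves like
    cache_size=1 (one update per value); with a cache, the dictionary
    row is updated once per cache refill."""
    return math.ceil(total_nextvals / max(cache_size, 1))

# 1,000,000 NEXTVAL calls with NOCACHE: one dictionary update each.
print(dict_updates(1_000_000, 1))
# Same workload with CACHE 1000: one update per 1,000 values.
print(dict_updates(1_000_000, 1000))
```

Under this model, CACHE 1000 cuts the dictionary-update (and lock) traffic by three orders of magnitude for the same workload.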

According to Metalink (now My Oracle Support):

When caching and ordering are used in a RAC database, a session wanting the NEXTVAL of a sequence needs to acquire an exclusive instance SV lock before inserting or updating the sequence values in the shared pool. When multiple sessions want the NEXTVAL of the same sequence, some sessions will wait on the 'DFS lock handle' wait event.

As a consequence of those serialization mechanisms, the sequence throughput, i.e. the maximum speed at which a sequence can be incremented, doesn't scale with the number of RAC nodes when the sequence is not cached and/or when the sequence is ordered. Throughput is always better for cached sequences than for non-cached ones.
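The practical advice that follows from this, when the application does not need a gap-free or globally ordered number series, is to combine a large cache with NOORDER so that each RAC instance caches its own range of values (again, the sequence name is hypothetical):

```sql
-- Large per-instance cache, no cross-instance ordering:
-- each node hands out values from its own cached range,
-- avoiding the SV lock contention described above.
ALTER SEQUENCE order_seq CACHE 1000 NOORDER;
```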

In summary, due to the serialization mechanisms for non-cached ordered sequences, it takes more time to get a row cache lock or the SQ enqueue when more nodes are involved. That decrease is clearly visible when going from one to two nodes, is still perceptible to a lesser extent when going from two to three nodes, and flattens out with more nodes.

Thanks
