You can select a cache strategy for the index. This selection overrides the default strategy that the Adaptive Server optimizer chooses for reading the index's data pages into the buffers in data cache. The following selections are available:
Most Recently Used Replacement (MRU)—This selection specifies that Adaptive Server uses the most recently used replacement strategy to determine where in cache to place pages when reading in new data.
If you clear the check box, Adaptive Server reads new pages into the MRU end of the chain of buffers in cache. Subsequent reads move the pages along the chain toward the least recently used (LRU) end until new reads at the MRU end flush them out. If you select Most Recently Used Replacement, Adaptive Server instead reads new pages into the LRU end, where they are used and then flushed almost immediately as new pages enter at the MRU end.
This strategy is advantageous when a page is needed only once for a query, because it keeps such pages from flushing out other pages that might be reused while they are still in cache.
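The effect described above can be seen in a toy simulation. The sketch below is not a model of Adaptive Server internals; it is a minimal illustrative buffer chain where, under the assumed default policy, new pages enter at the MRU end and are flushed from the LRU end, while under the MRU-replacement (fetch-and-discard) policy new pages enter at the LRU end. The page names (`hot1`, `scan1`, and so on) are hypothetical.

```python
from collections import OrderedDict

class BufferChain:
    """Toy cache buffer chain (illustrative only, not ASE internals).

    Last item in the OrderedDict = MRU end; first item = LRU end,
    which is where pages are flushed from when the chain is full.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.chain = OrderedDict()

    def read(self, page, fetch_and_discard=False):
        if page in self.chain:
            self.chain.move_to_end(page)        # cache hit: move to MRU end
            return
        if len(self.chain) >= self.capacity:
            self.chain.popitem(last=False)      # flush a page from the LRU end
        self.chain[page] = True
        if fetch_and_discard:
            # MRU-replacement strategy: the new page goes to the LRU end,
            # so it is flushed first and does not displace other pages.
            self.chain.move_to_end(page, last=False)

def run_scan(fetch_and_discard):
    cache = BufferChain(capacity=4)
    for p in ["hot1", "hot2", "hot3"]:          # pages we hope to reuse
        cache.read(p)
    for p in ["scan1", "scan2", "scan3", "scan4"]:  # one-time sequential scan
        cache.read(p, fetch_and_discard=fetch_and_discard)
    return sorted(cache.chain)

print("default:", run_scan(False))  # the scan flushes out the hot pages
print("MRU:    ", run_scan(True))   # the hot pages survive the scan
```

With the default placement, the four scan pages evict all three reusable pages; with the MRU strategy, each scan page reuses the buffer at the LRU end and the reusable pages stay in cache.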
Large Buffer Prefetch—This selection applies if one or more large buffer pools is defined in the default cache or, if the index is bound to a named cache, in that named cache. A large buffer pool is one whose buffers are larger than the 2K default, as specified in the Cache property sheet. If you select Large Buffer Prefetch, the Adaptive Server optimizer can fetch data in I/Os of up to eight 2K data pages (16K) at a time instead of the default one page at a time.
This strategy is advantageous for data that is stored and accessed sequentially; for example, it can improve performance for queries that scan the leaf level of a nonclustered index.
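The benefit of large-buffer prefetch for sequential access comes down to simple arithmetic: fewer, larger I/Os cover the same pages. The following sketch (an illustration, not an Adaptive Server API) counts the I/O operations needed to read a run of sequential 2K pages at one page per I/O versus eight pages per I/O; the page count of 1000 is an arbitrary example.

```python
import math

def io_count(total_pages, pages_per_io):
    """Number of I/O operations needed to read total_pages sequential pages."""
    return math.ceil(total_pages / pages_per_io)

# Scanning 1000 sequential 2K pages, e.g. the leaf level of a
# nonclustered index:
print(io_count(1000, 1))  # single-page I/O: 1000 reads
print(io_count(1000, 8))  # 16K large-buffer prefetch: 125 reads
```

Cutting the operation count by a factor of eight is why prefetch helps most when pages are stored and read in sequence; randomly accessed pages gain little, since the extra pages fetched in each large I/O may never be used.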