Looking at the segment-level stats (shown stacked below), it does appear that it is indeed TABLE1 which is generating the high I/O.

So we can "alter table TABLE1 cache" (and place it in a keep pool) to remove this almost completely. We also need to think about long-term growth when sizing the pool. In addition, scanning the table may not be the most efficient use of resources, so there may be an application improvement to be made there.
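As a sketch of the two options above (the schema owner APP_OWNER is a placeholder for the real owner of TABLE1):

```sql
-- Option 1: the CACHE directive, which favours retaining the table's
-- blocks in the default buffer cache on full scans:
ALTER TABLE app_owner.TABLE1 CACHE;

-- Option 2: assign the segment to the KEEP buffer pool, which isolates
-- it from the default cache entirely (requires db_keep_cache_size > 0):
ALTER TABLE app_owner.TABLE1 STORAGE (BUFFER_POOL KEEP);
```

The KEEP pool route is the more deterministic of the two, since blocks there are not competing with the rest of the workload for default-cache space.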

For further information, here is a comparison of the I/O profile before and after the transition.

In both cases the vast majority of logical reads are table scans (this is an extreme example of an application I/O profile in that respect).

However, in the red (first) case almost none of those translate into physical reads, i.e. the reads are satisfied from the SGA cache.

In the green (second) case the majority translate into physical reads against the SAN (and the absolute throughput is much lower).

This is further evidence that we need to ensure the relevant segments are cached in the SGA (and, in the longer term, perhaps look at preventing the table scans in the first place).
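One way to confirm which segments are driving the physical I/O is the cumulative per-segment statistics since instance startup, for example:

```sql
-- Logical vs physical reads per segment, highest first, application
-- schemas only (the SYS/SYSTEM filter is illustrative):
SELECT owner, object_name, statistic_name, value
FROM   v$segment_statistics
WHERE  statistic_name IN ('logical reads', 'physical reads')
AND    owner NOT IN ('SYS', 'SYSTEM')
ORDER  BY value DESC;
```

A segment with high values for both statistics, as here, is one whose scans are not being absorbed by the buffer cache.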

Since on ASM there is no OS-level caching, it is important that such queries are satisfied from the SGA where possible; otherwise, via direct path reads, the blocks bypass the buffer cache and must come from physical disk. This one is fairly easy to address via the CACHE directive plus the KEEP pool, since the segment is fairly small.
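For sizing the KEEP pool, a sketch (the 512M figure is purely illustrative; the real value should cover the current segment size plus expected long-term growth):

```sql
-- Check the current segment size to validate it fits in a KEEP pool:
SELECT owner, segment_name, ROUND(bytes/1024/1024) AS size_mb
FROM   dba_segments
WHERE  segment_name = 'TABLE1';

-- Allocate the KEEP pool accordingly:
ALTER SYSTEM SET db_keep_cache_size = 512M SCOPE=BOTH;
```

Note the KEEP pool is carved out in addition to the default cache, so overall SGA sizing should be revisited when it is introduced.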