Oracle Observations

December 17, 2011

The UKOUG post-conference post.

Filed under: Uncategorized — bigdaveroberts @ 11:38 am

Beyond the obvious opportunities to put faces to names from OTN and the oracle-l freelist group, and the opportunity to drink beer and learn at the same time, there are more tangible benefits: the nuggets of information that you didn't know you didn't know, and which, had you known you needed them, would have taken you an age to discover by yourself!

So the highlights:

According to Maria Colgan, dbms_stats should gather data relating to chained rows. If the data exists in the data dictionary the optimiser will use it, but currently the only way to get this information is via the deprecated analyze command. So, I suppose, if you do analyze your chained rows using the analyze command, there is no reason why you should delete these statistics before regenerating more accurate statistics using the appropriate dbms_stats command!

Something that possibly warrants further research!

NB. It is not currently possible to manually manipulate the chained rows stats, they are either generated by the analyze command or deleted.
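To make that concrete, here is a hedged sketch of the mixed approach (schema and table names are made up; check the exact behaviour against the docs for your version):

```sql
-- Deprecated, but currently the only way to populate the
-- chained-row figure (CHAIN_CNT) in the data dictionary:
ANALYZE TABLE scott.emp COMPUTE STATISTICS;

-- Inspect the chained-row count it recorded:
SELECT table_name, chain_cnt
FROM   dba_tables
WHERE  owner = 'SCOTT'
AND    table_name = 'EMP';

-- Regather accurate statistics with dbms_stats afterwards;
-- this does not itself gather chained-row data:
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT', tabname => 'EMP');
```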

There were some very curious facts in Frits Hoogland's presentation on multi-block reads.

Firstly, the implementation of multi-block reads is evolving over the point releases. There can be significant changes in the behaviour of multi-block reads between these releases, and you should anticipate significant performance changes even when performing a minor patchset upgrade.

Secondly, most people take the 'db file scattered read' wait event as a proxy for i/o subsystem performance. However, it is apparent that, with the complexities now in place, a single wait event may cover more than one read. I.e. if you see a wait time of 20ms (which would be poor), this might actually be timing 2 consecutive reads, resulting in a respectable average of 10ms per read.
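For what it's worth, the headline average is easy enough to pull from v$system_event, with the caveat above that one wait may span several physical reads, so the true per-read time could be lower than the figure reported:

```sql
-- Average 'db file scattered read' wait in milliseconds.
-- Note: this can still hide the multiple-reads-per-wait effect,
-- so treat it as an upper bound on per-read latency.
SELECT event,
       total_waits,
       ROUND(time_waited_micro / total_waits / 1000, 2) AS avg_wait_ms
FROM   v$system_event
WHERE  event = 'db file scattered read';
```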

Ultimately you have to defer to the storage admin to accurately state the true maximum i/o subsystem read time!

From Wolfgang Breitling's session on the Oracle optimiser came the interesting fact that if you export statistics, no information is recorded for any tables (or columns) that do not have statistics defined. If you subsequently generate statistics for such a table, and these new statistics have a negative effect on performance, importing the previously saved statistics will not delete the newly generated ones! To get back to the position you started from, you need to delete all the new statistics and then import the statistics from the backup!
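A sketch of that recovery procedure (schema and stats-table names are placeholders):

```sql
-- Beforehand: save the current statistics to a user stats table.
EXEC DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCOTT', stattab => 'STATS_BACKUP');
EXEC DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'STATS_BACKUP');

-- ... new statistics are gathered, performance regresses ...

-- Importing alone is NOT enough: tables that had no statistics at
-- export time are absent from the backup, so their newly gathered
-- stats would survive the import. Delete everything first:
EXEC DBMS_STATS.DELETE_SCHEMA_STATS(ownname => 'SCOTT');
EXEC DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'STATS_BACKUP');
```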

Secondly, if you manually generate frequency histograms, always use 2 as a minimum frequency! In some cases Oracle divides the frequency by 2 and then rounds down, so in that scenario any value with a frequency of less than 2 will be lost!
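A hedged sketch of setting a frequency histogram by hand via dbms_stats (table, column and the statistics values are invented for illustration; the point is the frequencies passed in srec.bkvals):

```sql
DECLARE
  srec    DBMS_STATS.STATREC;
  numvals DBMS_STATS.NUMARRAY;
BEGIN
  -- The distinct column values, in order...
  numvals := DBMS_STATS.NUMARRAY(1, 2, 99);
  srec.epc := 3;
  -- ...and their frequencies. Use at least 2 for each value: a
  -- frequency of 1 can be halved and rounded down to 0, at which
  -- point the value effectively vanishes from the histogram.
  srec.bkvals := DBMS_STATS.NUMARRAY(2, 2, 1000);
  DBMS_STATS.PREPARE_COLUMN_VALUES(srec, numvals);
  DBMS_STATS.SET_COLUMN_STATS(
    ownname => 'SCOTT', tabname => 'EMP', colname => 'DEPT_ID',
    distcnt => 3, density => 0.001, srec => srec);
END;
/
```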

Thirdly, if you are generating your own histograms, it might sometimes be appropriate to miss out popular values. It is much more important to specify the occurrence of rare values!

From Connor McDonald I learnt that the circular animated icon that appears whenever the internet hasn’t yet done what you wanted to is actually called the ‘Infinite Progress Indicator’.

I also learnt that Connor is one of the most entertaining and subtly informative speakers and he should never be missed!

Finally, it is apparent that in terms of Oracle support, not all platforms are created equal! If you are not using Linux or Solaris, you are likely to wait longer for patches, and those patches, due to the reduced crowd-sourced testing, might be inferior in quality!

I could explain what I learnt from Jonathan Lewis’s session, however as a previous incarnation of that is available online I will simply point you at that:

Lenz Grimmer gave a wealth of information about the Oracle Linux Kernel.

Oracle has protested in the past about how long it takes for the fixes it makes to the Linux kernel to be incorporated into the RHEL kernel. This session covered further the rationale behind Oracle's decision to create its own kernel and the approach it has taken!

Ultimately Red Hat has customised the kernel (chiefly with bug fixes), but Red Hat doesn't automatically adopt upstream changes; it selects changes and then incorporates them into its own version of the kernel. It is largely for this reason that, when Oracle makes a fix and submits it to the mainline kernel developers, an extended period of time can pass before the patch is re-incorporated into the RHEL fork and made generally available.

Oracle has taken the decision to adopt the mainstream Linux kernel and apply its fixes and optimisations directly to that.

This is done on the basis that, firstly, most people running OEL will be running Oracle on top and will benefit from the changes sooner, and secondly, most applications running on Linux interface with the kernel via the GNU libc library, and it is at that level that Oracle maintains compatibility with the RHEL kernel.

I.e. if you make low-level calls to the Linux kernel, bypassing the libc library, then there is a small risk of incompatibility!

Secondly, it was stated that the 64-bit OEL port is now the primary platform rather than the original 32-bit version. This is probably a pragmatic move: while the majority of Linux users will still be on 32-bit, most new work is likely to be 64-bit in both the amateur and professional communities!

There was also an incredibly interesting performance tuning roundtable, but I think that warrants a separate blog post!

