Oracle Observations

December 18, 2011

The UKOUG performance tuning round table.

Filed under: Performance — bigdaveroberts @ 2:59 pm

At the tuning round table there were three experts: Jonathan Lewis, Maria Colgan and Greg Rahn.

Hopefully Jonathan Lewis needs no introduction; Maria Colgan has worked at Oracle in the server technology performance team, and Greg Rahn works in the Real-World Performance group, specialising in data warehouses and parallel query.

The first question was how the MERGE command speeds up processing.

The answer was that it builds two arrays and then performs bulk inserts and bulk updates.

However, it was stated that, depending on the actual data, array processing with exceptions might be a more effective solution.
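As a minimal sketch (against hypothetical staging and target tables stg and tgt), the two approaches might look something like this:

    -- Set-based approach: a single MERGE performs the bulk updates and inserts.
    MERGE INTO tgt t
    USING stg s
    ON (t.id = s.id)
    WHEN MATCHED THEN
      UPDATE SET t.val = s.val
    WHEN NOT MATCHED THEN
      INSERT (id, val) VALUES (s.id, s.val);

    -- Array-processing alternative: bulk insert, trapping the failures.
    DECLARE
      TYPE t_tab IS TABLE OF stg%ROWTYPE;
      l_rows     t_tab;
      dml_errors EXCEPTION;
      PRAGMA EXCEPTION_INIT(dml_errors, -24381);  -- ORA-24381: error(s) in array DML
    BEGIN
      SELECT * BULK COLLECT INTO l_rows FROM stg;
      FORALL i IN 1 .. l_rows.COUNT SAVE EXCEPTIONS
        INSERT INTO tgt VALUES l_rows(i);
    EXCEPTION
      WHEN dml_errors THEN
        FOR i IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
          -- e.g. fall back to an UPDATE for each row that failed to insert
          DBMS_OUTPUT.PUT_LINE('row ' || SQL%BULK_EXCEPTIONS(i).ERROR_INDEX ||
                               ' failed with ORA-' || SQL%BULK_EXCEPTIONS(i).ERROR_CODE);
        END LOOP;
    END;
    /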

In a discussion on sub-query un-nesting it was observed that giving the Oracle optimiser a nested sub-query gives Oracle the opportunity to un-nest the query; if you un-nest the query yourself, Oracle has fewer potential optimisations to consider.

However, when Oracle un-nests a query, it doesn’t always produce the same plan that would be generated if the optimiser was presented with an already un-nested query!
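To illustrate (with the classic emp/dept tables), given the nested form below the optimiser can consider a semi-join, a full un-nest, or leaving the subquery as a filter; hand it the pre-joined form and only that one shape remains available:

    -- Nested form: the optimiser may un-nest this in several different ways.
    SELECT e.ename
    FROM   emp e
    WHERE  e.deptno IN (SELECT d.deptno FROM dept d WHERE d.loc = 'LONDON');

    -- Hand-unnested form: one specific join shape, chosen by the author
    -- (equivalent only because dept.deptno is unique).
    SELECT e.ename
    FROM   emp e
           JOIN dept d ON d.deptno = e.deptno
    WHERE  d.loc = 'LONDON';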

In a discussion of AutoDoP (automatic Degree of Parallelism), it was noted that it doesn’t consider the cost of the insert. I.e. a CTAS might select the wrong degree of parallelism for the select if the insert cannot cope with the rate at which data is generated by the query!

AutoDoP can also be confused by compressed data.
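By way of illustration (a sketch against a hypothetical sales table; the exact interplay of these settings varies by version), AutoDoP is driven by the setting below, and one workaround for a badly chosen CTAS degree is to state the degrees explicitly:

    -- AutoDoP on (11gR2):
    ALTER SESSION SET parallel_degree_policy = AUTO;

    -- Explicit override: the PARALLEL clause sets the degree for the create/insert
    -- side, while the hint sets the degree for the query side.
    CREATE TABLE sales_copy PARALLEL 4 AS
    SELECT /*+ PARALLEL(s, 16) */ *
    FROM   sales s;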

It was suggested that the lack of persistence in Oracle’s implementation of cardinality feedback was a problem.

The response was: ‘watch this space’, presumably indicating that functionality in this area is still evolving and this is one of the things that might change!

One interesting thing that I had not considered was that the optimiser version used to execute a query can be forced using a hint.
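The hint in question is OPTIMIZER_FEATURES_ENABLE (the same control also exists as an instance/session parameter); for example, against a hypothetical orders table:

    SELECT /*+ OPTIMIZER_FEATURES_ENABLE('10.2.0.4') */ *
    FROM   orders
    WHERE  order_date > SYSDATE - 7;

    -- or for the whole session:
    ALTER SESSION SET optimizer_features_enable = '10.2.0.4';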

Standard Edition was then considered, primarily because it includes stored outlines, which are now deprecated but can’t be replaced by SQL Plan Management, as that is currently an Enterprise Edition-only feature!

It was suggested that while stored outlines are deprecated, there are currently no plans to de-support the feature.

Slightly off topic, the question was asked: what is your favourite new feature in Standard Edition?

The two answers offered were:

1. Extended statistics
2. Pending statistics

On the subject of flash cache, it was suggested that using flash cache is only of potential value once you have maxed out the RAM in the machine; i.e. the overhead of managing data in memory is much lower than the overhead of managing data in flash cache through the disk I/O code path.


December 17, 2011

The UKOUG post conference post.

Filed under: Uncategorized — bigdaveroberts @ 11:38 am

Beyond the obvious opportunities to put faces to names from OTN and the oracle-l freelists group, and the opportunity to drink beer and learn at the same time, there are the more tangible benefits: nuggets of information that you didn’t know you didn’t know, and that, even if you had known you needed them, would have taken you an age to discover by yourself!

So the highlights:

According to Maria Colgan (http://blogs.oracle.com/optimizer/), dbms_stats should gather data relating to chained rows. If the data exists in the data dictionary, the optimiser will use it, but currently the only way to get this information is via the deprecated analyze command. So, I suppose, if you do analyze your chained rows using the analyze command, there is no reason why you should delete those statistics before regenerating more accurate statistics using the appropriate dbms_stats command!

Something that possibly warrants further research!

NB. It is not currently possible to manipulate the chained-row stats manually; they are either generated by the analyze command or deleted.
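A sketch of the implied workflow for a hypothetical table T1: collect the chained-row count with the deprecated command, regenerate proper statistics on top, then check whether CHAIN_CNT has survived:

    ANALYZE TABLE t1 COMPUTE STATISTICS;            -- populates CHAIN_CNT (plus crude stats)

    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'T1')  -- replaces the crude stats

    SELECT chain_cnt FROM user_tables WHERE table_name = 'T1';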

There were some very curious facts in Frits Hoogland’s (http://fritshoogland.wordpress.com) presentation on multi-block reads.

Firstly, the implementation of multi-block reads is evolving over the 11.2.0.2 point releases. There can be significant changes to the behaviour of multi-block reads between these point releases, and you should anticipate significant performance changes even when performing a minor patch set upgrade.

Secondly, most people take the ‘db file scattered read’ wait event as a proxy for I/O subsystem performance. However, it is apparent that, with the complexities now in place, a single wait event may cover more than one read; i.e. a recorded wait of 20ms (which would be poor) might actually be timing two consecutive reads, resulting in a respectable average of 10ms per read.

Ultimately you have to defer to the storage admin for an accurate statement of the true I/O subsystem read times!

From Wolfgang Breitling’s (http://www.centrexcc.com/) session on the Oracle optimiser came the interesting fact that when you export statistics, no information is recorded for any tables (or columns) that do not have statistics defined. If you subsequently generate statistics for such a table and the new statistics have a negative effect on performance, importing the previously saved statistics will not delete the newly generated ones! To get back to where you started, you need to delete all the new statistics and then import the statistics from the backup!
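A sketch of that sequence for a hypothetical table T1, using a user statistics table as the backup:

    -- Beforehand: back the current statistics up.
    EXEC DBMS_STATS.CREATE_STAT_TABLE(USER, 'STATS_BACKUP')
    EXEC DBMS_STATS.EXPORT_TABLE_STATS(USER, 'T1', stattab => 'STATS_BACKUP')

    -- ... new statistics are gathered and performance regresses ...

    -- A plain import is not enough: the backup holds no rows for objects that had
    -- no statistics at export time, so delete everything first, then import.
    EXEC DBMS_STATS.DELETE_TABLE_STATS(USER, 'T1')
    EXEC DBMS_STATS.IMPORT_TABLE_STATS(USER, 'T1', stattab => 'STATS_BACKUP')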

Secondly, if you manually generate frequency histograms, always use 2 as the minimum bucket count! In some cases Oracle divides the count by 2 and then rounds down, so in that scenario any value recorded with a count of less than 2 will be lost!

Thirdly, if you are generating your own histograms, it may sometimes be appropriate to leave out popular values; it is much more important to specify the occurrence of rare values!
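A sketch of hand-building a frequency histogram with dbms_stats, for a hypothetical numeric column T1.STATUS with one popular and two rare values; note the bucket counts of 2 for the rare values, per the advice above:

    DECLARE
      srec DBMS_STATS.STATREC;
      vals DBMS_STATS.NUMARRAY := DBMS_STATS.NUMARRAY(1, 2, 99);  -- column values, in order
    BEGIN
      srec.epc    := 3;                                  -- number of buckets
      srec.bkvals := DBMS_STATS.NUMARRAY(10000, 2, 2);   -- frequencies: never use 1!
      DBMS_STATS.PREPARE_COLUMN_VALUES(srec, vals);
      DBMS_STATS.SET_COLUMN_STATS(
        ownname => USER,
        tabname => 'T1',
        colname => 'STATUS',
        distcnt => 3,
        nullcnt => 0,
        srec    => srec,
        avgclen => 3);
    END;
    /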

From Connor McDonald I learnt that the circular animated icon that appears whenever the internet hasn’t yet done what you wanted to is actually called the ‘Infinite Progress Indicator’.

I also learnt that Connor is one of the most entertaining and subtly informative speakers and he should never be missed!

Finally, it is apparent that, in terms of Oracle support, not all platforms are created equal! If you are not using Linux or Solaris you are likely to wait longer for patches, and those patches, due to the reduced crowd-sourced testing, might be inferior in quality!

I could explain what I learnt from Jonathan Lewis’s session; however, as a previous incarnation of it is available online, I will simply point you at that: http://jonathanlewis.wordpress.com/2011/06/23/video/

Lenz Grimmer gave a wealth of information about the Oracle Linux Kernel.

Oracle has protested in the past about how long it takes for fixes it makes to the Linux kernel to be incorporated into the RHEL kernel. This session covered the rationale behind Oracle’s decision to create its own kernel and the approach it has taken!

Ultimately, Red Hat has customised the kernel (chiefly with bug fixes), but Red Hat doesn’t automatically adopt upstream changes; it takes changes to the mainline kernel and then incorporates them into its own version. It is largely for this reason that, when Oracle makes a fix and submits it to the mainline kernel developers, it can be an extended period of time before that patch is re-incorporated into the RHEL fork and made generally available.

Oracle has taken the decision to adopt the mainstream Linux kernel and apply its fixes and optimisations directly to that.

This is done on the basis that, firstly, most people running OEL will be running Oracle on top of it and will benefit from the changes sooner, and secondly, most applications running on Linux interface with the kernel through the GNU libc library, and it is at that level that Oracle maintains compatibility with the RHEL kernel.

I.e. if you make low-level calls to the Linux kernel, bypassing the libc library, there is a small risk of incompatibility!

Secondly, it was stated that the 64-bit OEL port is now the primary platform rather than the original 32-bit version. This is probably a pragmatic move: while the majority of Linux users may still be on 32-bit, most new work is likely to be 64-bit in both the amateur and professional communities!

There was also an incredibly interesting performance tuning roundtable, but I think that warrants a separate blog post!

December 2, 2011

The UKOUG mobile application – Brilliant in theory and quite good in practice!

Filed under: Uncategorized — bigdaveroberts @ 2:23 am

I downloaded the app the first time I had reliable Wi-Fi availability.

Pleasingly, it appears it would have grabbed only about 7MB of my 3G data allocation.

I should state that I’m an android user and my current phone of choice is an LG Optimus 940 3D.

Prior to this I owned the classic Nokia N95!

There are obvious but slight issues with the application.

The Twitter feed (found under the News and social/Twitter tab) has an unfamiliar method of updating: there is a circular refresh arrow, but you are also sometimes prompted to drag down for updates. A little unpolished!

Interests don’t seem to be enabled in the networking section; I suppose you could always add your interests to your profile!

And, up until now, the agenda in the app appears to have been updated later than the standard online UKOUG agenda.

The above are all small issues; slightly more significant is the fact that your favourites appear to be recorded by stream and time. When the app was originally made available and its contents were falling behind the agenda available online, I took the opportunity to ‘like’ sessions that I knew had been cancelled and sessions that I knew had moved location or time.

Ultimately, the app evidently records ‘likes’ by time and venue! If your favourite session moves, your agenda will not be appropriately updated.

This is a significant fault, but being forewarned will hopefully resolve any issues for the readers of my blog!

I hope that users of the app will review their selected sessions before the conference starts, to verify that they are still the ones originally selected.

On the plus side, I have downloaded two updates to the app today (although possibly the first update was available before today!), which hopefully indicates a very proactive approach to pushing out updates!

Obviously there are some limitations to the app! But there is intelligent integration with Twitter, LinkedIn and Google Maps! The small amount of out-of-date information will, I suspect, be remedied over the period of the conference, and the bits of the app that don’t appear to work on my phone are trivial and can be dismissed!

Overall, a huge new free benefit for members attending the conference, IMHOOC!

December 1, 2011

UKOUG agenda updates.

Filed under: Uncategorized — bigdaveroberts @ 1:04 am

Since I began my planning for the UKOUG conference on the 30th of October, there have been several amendments to the published agenda.

Monday:

New session:

10:00 – 10:45

EBS Roundtable
The secret of a successful R12 project – how Oracle GRC controls can help reduce implementation/upgrade risk

Chris Burt – Fulcrum Way
Co-presenter(s): Adil Khan – Fulcrum Way

Cancelled session:

10:00 – 10:45

Strategic Sourcing

Nimesh Sharma

Moved session:

11:05 – 12:05

Coexistence of Oracle E-Business Suite and Fusion Applications – Technical Dive

Nadia Bendjedou – Oracle

Has been rescheduled for 10:00 – 10:45

New session:

11:05 – 12:05

Oracle Fusion Applications: Technical Architecture Overview

Cancelled session:

14:30 – 15:15

Explaining the MySQL Explain

Ronald Bradford, ACE Director – EffectiveMySQL

Replaced by new session:

14:30 – 15:15

When Dolphins Attack!

Tony Webb – Wellcome Trust Sanger Institute

New session:

16:35 – 17:35

Event-Driven SOA: Events meet Services

Guido Schmutz, ACE Director – Trivadis AG

Cancelled session:

16:35 – 17:35

Improving MySQL performance with better indexes

Ronald Bradford, ACE Director – EffectiveMySQL

Has been replaced by

16:35 – 17:35

Building Highly Available & Scalable, Real-Time Services with MySQL Cluster

Frazer Clement – Oracle

Tuesday:

Newly confirmed session:

11:15 – 12:15

How to get more out of your Sub-Ledgers using Sub Ledger Accounting

Rien Otterman – Oracle

Two sessions have swapped their time slots:

Pete Finnigan’s ‘Can people be identified in the database?’ has been moved to 14:40 – 15:25

David Burnham’s ‘Cost Effective Highly Available Database Recipes for Limited Budgets’ has moved to 15:35 – 16:20

A newly confirmed session:

17:50 – 18:35

A beginner’s guide to SQL Tuning

Tony Hasler – Anvil Computer Services

Wednesday:

Two sessions have swapped their time slots:

Steven Davelaar, ACE – Oracle

Building a Multi-Tasking ADF Application with Dynamic Regions and Dynamic Tabs

Has been moved to 09:00 – 10:00

Shaun Smith – Oracle

Cloud Enabled Java Persistence

has been moved to 11:20 – 12:20
