Oracle Observations

April 12, 2007

Async I/O and all that.

Filed under: AIX, Unix — bigdaveroberts @ 10:10 am

While I do like to characterise myself as a DBA, I have only ever had one all-nighter, which rather exposes the conceit of that claim.

The system involved was a large (for the time) data warehouse, and the database had crashed unexpectedly on the day before the main weekly data load. (If I recall correctly, it was due to a control file timeout.) While it was not unusual for the system to fail (mainly because the database files were sparse, for some unknown historical reason, which made space management somewhat problematic), with the weekly load due shortly, the system was being monitored more closely on the evening the main loads commenced.

After the data load started, the system crashed again. The system was recovered, looked fine, and the data load was initiated again; again the database crashed. The cycle repeated. I believe that, in all, the database crashed 11 times in close succession.

To be honest, I handed the problem over to another DBA in the morning, and they initiated a disaster recovery while continuing to investigate the problem.

The first problem found was one identified by Oracle. We had recently enabled parallel DML, and this had implicitly enabled parallel recovery. Oracle 8.1.7.2 apparently had issues with parallel recovery that made it somewhat less efficient than serial recovery, which meant that Oracle continued to perform recovery operations long after the database had been restarted.

The issue was that having had one crash increased the workload and contention in the database, causing the multiple additional crashes.

What had caused the original crash?

Well, at least 3 issues.

1. A new contract developer recruited by the client didn’t realise what a smart idea it was to remove the indexes before a data load and recreate them afterwards. Over time, as his code was promoted to live, this substantially increased the work done during the data loads.

2. There was contention between the control files and the redo logs due to poor file locations.

3. The system was running on AIX, and the Oracle manual for configuring AIX asynchronous I/O was inadequate. While the manual recommended increasing maxservers and minservers, it made no recommendation regarding the request queue length (maxreqs).
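For illustration, on AIX these settings are attributes of the aio0 pseudo-device and can be inspected and changed with lsattr and chdev. A minimal sketch follows; the values shown are illustrative rather than the ones we eventually settled on:

# Show the current async I/O settings
lsattr -El aio0

# Raise the server counts and the request queue length; with -P the
# change is recorded in the ODM and takes effect at the next reboot
chdev -l aio0 -a minservers=100 -a maxservers=350 -a maxreqs=8192 -P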

There were also hints of a conflict between IBM and Oracle: it was suggested that the underlying cause of the asynchronous I/O problem was a rewrite of IBM’s async I/O subsystem, and when errors subsequently began to appear from the Oracle database, Oracle blamed IBM, as Oracle had worked without errors before the change.

In terms of problem resolution:

The first moves were to disable parallel recovery (with the assistance of Oracle support), temporarily suspend the new data load and, at my initiative, disable async I/O in Oracle.
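For what it’s worth, on 8.1.7 both database-side changes amount to two init.ora parameters, applied at the next restart. A sketch:

# Perform crash recovery serially rather than in parallel
recovery_parallelism = 0

# Stop Oracle submitting I/O to the AIX async I/O subsystem
disk_asynch_io = false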

The result was that the crashes stopped.

I was instructed by my team leader to reinstate async I/O, as in his view, with the customer paying for gold support, we should get a proper resolution from Oracle rather than use a workaround that would potentially reduce the performance of their system.

Further investigation revealed an increasing trend in [EAGAIN] warnings being logged by Oracle, and with the benefit of a large control file (which retained a long history of archived log records), a rather elegant graph of the deterioration in the time taken to archive redo log files (due to disk contention) was produced using the following SQL:

-- Oracle DATE arithmetic yields days, so multiply by 86400 for seconds
SELECT recid,
       To_Char( next_time, 'YYYY-MON-DD HH24:MI:SS' ) "Log time",
       ( completion_time - next_time ) * ( 24 * 60 * 60 ) "Seconds to archive"
FROM   v$archived_log
ORDER  BY recid
/

We then spent a period of time finding ‘colder’ disks on which to locate the control files, and, armed with a script to log the number of [EAGAIN] warnings per hour (sketched below), we proceeded to tune the async I/O parameters sufficiently to minimise the number of warnings produced.
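The hourly logging script was along these lines. This is a reconstruction rather than the original, and the alert log path is a placeholder; it relies on the timestamp lines Oracle writes into the alert log before each message:

#!/bin/sh
# Count [EAGAIN] warnings per hour in the alert log.
# The path below is a placeholder - adjust for your instance.
ALERT=/u01/app/oracle/admin/DWH/bdump/alert_DWH.log

awk '
    # Oracle precedes each message with a date line such as
    # "Thu Apr 12 10:10:01 2007"; remember its day and hour
    /^(Sun|Mon|Tue|Wed|Thu|Fri|Sat) / {
        hour = $2 " " $3 " " substr($4, 1, 2) ":00"
    }
    /EAGAIN/ { count[hour]++ }
    END { for (h in count) print h, count[h] }
' "$ALERT"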

The end result was parameter settings far in excess of the recommendations in the installation guide, limited in fact only by the number of async I/O processes that Oracle would start with (this turned out to be about 350), together with a script that automatically counted the number of [EAGAIN] warnings and alerted us by email if the number went over a certain threshold.
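The alerting wrapper needed little more than a threshold comparison. Again a sketch: the path, address and threshold are placeholders, and tail is a crude stand-in for ‘recent’:

#!/bin/sh
# Mail the DBAs if recent [EAGAIN] warnings exceed a threshold.
ALERT=/u01/app/oracle/admin/DWH/bdump/alert_DWH.log
LIMIT=50

COUNT=`tail -n 5000 "$ALERT" | grep -c 'EAGAIN'`
if [ "$COUNT" -gt "$LIMIT" ]; then
    echo "$COUNT EAGAIN warnings in the alert log" |
        mail -s "Async I/O warnings" dba-team@example.com
fi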

To an extent, that was the end of the problem, although after a couple of months one of the developers spotted that a query would return differing results depending on which execution plan was used. After investigation, an attempt was made to rebuild the index, which failed. Having raised the call with Oracle, we were advised to upgrade to 8.1.7.3. After this upgrade had been performed, we were able to rebuild the index. Unfortunately the disparity in the results persisted, and we eventually dropped and rebuilt the table.
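For reference, the usual way to confirm that kind of index/table mismatch, and the shape of the rebuild, is below; the table and index names are hypothetical:

-- Cross-checks the table against its indexes; a mismatch
-- typically surfaces as ORA-01499 with a trace file
ANALYZE TABLE load_fact VALIDATE STRUCTURE CASCADE;

-- Rebuild of the suspect index (hypothetical name)
ALTER INDEX load_fact_ix1 REBUILD;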

While the corruption can’t be proven to be related to the large number of crashes encountered, it does seem more than coincidence that such a bizarre problem occurred shortly after a substantial number of system crashes.
