Oracle Observations

December 4, 2008

The RAC roundtable

Filed under: Uncategorized — bigdaveroberts @ 9:55 pm

Please note that, as a non-expert, I can't guarantee my ability to summarise everything accurately!

The experts were Julian Dyke, Joel Goodman and two others.

The first question concerned connection pooling, and the unbalanced load experienced while attempting to load balance.

Several issues were discussed while closing in on a conclusion.

Joel explained the new FAN (Fast Application Notification) events covering service 'goodness' that have been introduced in 11gR2. However, these new asynchronous notifications are only used if the JDBC ICC (Implicit Connection Cache) client is used or if the middle tier is aware of these notifications.
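As an illustration (a sketch only, with placeholder host names, service name and credentials, not a tested configuration), this is roughly how a Java client might enable the Implicit Connection Cache and Fast Connection Failover on an Oracle JDBC data source so that the pool subscribes to these FAN events:

    import java.sql.Connection;
    import java.util.Properties;
    import oracle.jdbc.pool.OracleDataSource;

    public class FanAwarePool {
        public static void main(String[] args) throws Exception {
            OracleDataSource ods = new OracleDataSource();

            // Placeholder connection details for a RAC service.
            ods.setURL("jdbc:oracle:thin:@//rac-scan.example.com:1521/MYSERVICE");
            ods.setUser("app_user");
            ods.setPassword("app_password");

            // Enable the Implicit Connection Cache (ICC) ...
            ods.setConnectionCachingEnabled(true);

            // ... and Fast Connection Failover, which makes the cache
            // subscribe to FAN events (node up/down and the runtime
            // load balancing advisory).
            ods.setFastConnectionFailoverEnabled(true);

            // Where to receive FAN events from: the ONS daemons on the
            // cluster nodes (hosts and ports are placeholders).
            ods.setONSConfiguration("nodes=racnode1:6200,racnode2:6200");

            // Sensible fixed min/max limits, as in the original question.
            Properties cacheProps = new Properties();
            cacheProps.setProperty("MinLimit", "10");
            cacheProps.setProperty("MaxLimit", "50");
            ods.setConnectionCacheProperties(cacheProps);

            try (Connection conn = ods.getConnection()) {
                // With FCF enabled, borrowing from the cache is steered
                // towards the least loaded instance, rather than by
                // creating new connections to it.
                System.out.println("Connected: " + conn.getMetaData().getURL());
            }
        }
    }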

As the person asking the question appeared to be setting sensible minimum and maximum pool limits, it was concluded that the issue was that load balancing was occurring at connect time. That is, differing numbers of connections were being established to the RAC servers based on the load at connect time, whereas the goal should have been a uniform initial allocation of connections, with load balancing then performed as connections are borrowed from the existing pool, on the basis of current load. Essentially, the goal wasn't to increase the number of connections to nodes that were being underutilised, but rather to use the existing connections to the underutilised nodes more frequently.

Key to achieving this runtime load balancing was the use of services to handle listener connections.

There was a brief associated discussion as to which connect time algorithm should be used.

The conclusion was that the connection load balancing goal should be set to long (CLB_GOAL=LONG), although doubt was expressed as to whether this actually related to the original question.
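For what it's worth, the connection load balancing goal is an attribute of the service itself. A minimal sketch of setting it from JDBC, assuming a hypothetical service called MYSERVICE and a suitably privileged account (on RAC this would more usually be done with srvctl):

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class SetClbGoal {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details; requires a privileged user.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//racnode1.example.com:1521/MYSERVICE",
                    "system", "oracle_password")) {

                // CLB_GOAL=LONG suits long-lived pooled connections;
                // GOAL=THROUGHPUT publishes the runtime load balancing
                // advisory that FAN-aware pools consume.
                String plsql =
                    "begin"
                    + "  dbms_service.modify_service("
                    + "    service_name => 'MYSERVICE',"
                    + "    goal         => dbms_service.goal_throughput,"
                    + "    clb_goal     => dbms_service.clb_goal_long);"
                    + " end;";
                try (CallableStatement cs = conn.prepareCall(plsql)) {
                    cs.execute();
                }
            }
        }
    }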

A curious question was then asked about RAC in a virtual IBM server environment. Doubt was expressed that RAC was actually supported in that environment, but that doubt seemed to ebb away.

The question was: should the virtual RAC environment be given another node, more CPUs in the existing nodes, or better CPUs in the existing nodes?

The simple answer was:
1) Better CPUs
2) More CPUs
3) More nodes.

However, then came the caveats:
1) The amount of batch work may influence the decision.
2) While faster CPUs are traditionally considered the lower-risk solution, there is the possibility that the increased throughput might increase the load on the interconnect, which is probably the biggest performance inhibitor. Ergo, there is no simple answer!
3) As the environment was virtual, then it was probably worthwhile just trying it and seeing what the result was!
4) It was pointed out that if the reason for going RAC was HA rather than performance, then all the above answers were wrong! The main benefit of moving to a three-node RAC cluster would be that a node failure would result in the loss of only a third of the available capacity, rather than the half that would currently be lost.

There was a brief discussion of the interdependence of Clusterware, ASM and RDBMS versions, and from what point rolling upgrades were available.

Clusterware has always supported rolling upgrades.
ASM supports rolling upgrades from 11g.

Finally the version of Clusterware should be >= the version of ASM >= the version of RDBMS.

Next the question of RAC on GPFS vs. ASM on AIX was asked.

Many questions and observations about the technology and politics of file system choice were tackled before the attendees attested that RAC worked well on both!

Before that answer was established, it was observed that ASM didn't suit SAP, because SAP requires access to a native file system.

ASMfs would be introduced with ASM 11.2 and would expose ASM as a normal file system.

An advantage of ASM was that it opened, held and cached data file handles, an optimisation that wouldn't be possible with other file systems.

Use of the white papers produced by the MAA (Maximum Availability Architecture) group was recommended.

What are the effects of internal redundancy?
Memory, IO and CPU! In that order!

The issue of block versioning in RAC was covered.

Essentially, multiple versions of a block can exist on different nodes. For writing there is no choice: the latest version of the block must either be read from disk or passed from another node in the cluster, causing interconnect traffic. For reading, however, out-of-date duplicate blocks already held on the current node can be used to satisfy logical reads!

Again, as with the security round table, only the surface seemed to be scratched!

A summary of the Oracle Security round table.

Filed under: Uncategorized — bigdaveroberts @ 8:48 pm

The experts present were:

Pete Finnigan (of petefinnigan.com) – Pete, I hope, requires no introduction; before Pete I was an Oracle security virgin!
Paul Wright (of Markit) – from the previous day's Oracle security session, he seems to be a strong proponent of Hedgehog from Sentrigo.
Slavik Markovich (of Sentrigo, originator of Hedgehog) – on the basis of his session, a strong proponent of proactive PL/SQL security hole discovery.

And possibly Kev Else (of no fools limited) – listed on the agenda; however, I failed to confirm his identity or presence.

Very roughly, Pete Finnigan expressed the position that open routing is the greatest general security risk: the ability for anyone to plug a laptop into an open Ethernet socket and then connect directly to the database!

Secondly, there is the implementation of security at the application layer, where a user's functionality is restricted within an application, but where, when connecting directly through SQL*Plus or Excel, they have little or no restriction on the SQL they can execute.
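To make the point concrete, here is a hypothetical sketch of how little it takes: with the application's own credentials (all names here are placeholders), any restriction implemented purely in the application simply doesn't apply.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class DirectAccess {
        public static void main(String[] args) throws Exception {
            // The same account the application uses, but connecting
            // directly: none of the application's checks are involved.
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost.example.com:1521/PROD",
                     "app_user", "app_password");
                 Statement stmt = conn.createStatement();
                 // A hypothetical table: any SQL the account's database
                 // privileges allow will run, whatever the application
                 // front end would have permitted.
                 ResultSet rs = stmt.executeQuery(
                     "select customer_name, card_number from payments")) {
                while (rs.next()) {
                    System.out.println(
                        rs.getString(1) + " " + rs.getString(2));
                }
            }
        }
    }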

There was then the consideration of the nature and the source of the threats confronting an organisation.

The threats were predominantly not malicious, but rather stemmed from the failings of various carbon-based life forms: the propensity of people to place critical data on CDs or USB sticks, and then not be able to verify what happened to that data or who had access to it.

However, there was also the suggestion that, as more companies have direct exposure to the internet, the proportion of the risk that is internal (in the past estimated to be 80%) is dropping, with organised gangs attempting to attack financial institutions.

Next was an observation made at sites that implement a data map – a system where access to sensitive data is recorded.

The behaviour observed was that people soon to leave an organisation often accessed much more data in the period before they left than they would in normal use of the systems.

Pete then proposed a methodology for reducing a user's privileges (a rough code sketch follows the list):

1) Check what privileges a user holds, both directly and through roles.
2) Check what types of object the user owns.
3) Identify roles that the user has been granted but doesn't require in order to create the objects that exist.
4) Audit the user on the roles that the user in theory doesn't need.
5) If, after a couple of months, the audit shows no use, revoke the privileges that the user doesn't use and doesn't need.
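A minimal sketch of the first two steps, querying the data dictionary over JDBC (the account names are placeholders, and each granted role would also need walking recursively via role_role_privs):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class PrivilegeAudit {
        public static void main(String[] args) throws Exception {
            String targetUser = "APP_USER"; // placeholder

            // Placeholder DBA credentials.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost.example.com:1521/PROD",
                    "auditor", "auditor_password")) {

                String[] queries = {
                    // 1) Privileges held directly ...
                    "select privilege from dba_sys_privs where grantee = ?",
                    // ... and roles granted to the user.
                    "select granted_role from dba_role_privs where grantee = ?",
                    // 2) The types of object the user actually owns.
                    "select distinct object_type from dba_objects where owner = ?"
                };

                for (String sql : queries) {
                    System.out.println("-- " + sql);
                    try (PreparedStatement ps = conn.prepareStatement(sql)) {
                        ps.setString(1, targetUser);
                        try (ResultSet rs = ps.executeQuery()) {
                            while (rs.next()) {
                                System.out.println(rs.getString(1));
                            }
                        }
                    }
                }
            }
        }
    }

Comparing the lists then points at the roles and privileges worth auditing (steps 3 and 4) before anything is actually revoked.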

It was then stated that this was only an approach to system privileges; further actions would then need to be taken to curtail object privileges.

Issues were raised relating to the vulnerabilities introduced by not following Oracle's recommendations for having a separate oinstall installation user and oper and oasys groups.

If set up correctly, a privileged UNIX user (other than oracle) running sqlplus /nolog followed by connect internal will acquire only PUBLIC privileges, rather than the SYS privileges that are acquired when the oracle user performs the same instructions.

There was then an encouragement to prioritise and escalate security implementation on the basis of an investigation of the importance of the data to be protected. Essentially, some of your databases may hold only administrative data, and hardening these is substantially less important than hardening those databases that may contain personal or financial information.

There was a discussion regarding whistle-blowing, and the stated fact that many firms are now obliged to have a risk officer or security officer; it is to that person that security issues should probably be raised in the first instance.

There was a little more, but I suspect that even tripling the time allocated, we would have only scratched the surface!

December 3, 2008

Linux and the Centro cross rail line

Filed under: Uncategorized — bigdaveroberts @ 11:00 pm

Somewhat off topic:

Having spent another good day at UKOUG 2008, and being a local boy, I took the 21:14 to Longbridge to take me home.

Interestingly, before departure, the LCD screens pumping news, weather and adverts into the carriages went blank, shortly to be followed by a Linux boot screen.

I have seen games machines in pubs and railway information terminals displaying blue screens of death, and even yesterday the screens at the ICC were displaying the helpful information that an error had occurred in a Microsoft C++ library, but this is probably the first evidence I have seen of Linux encroaching on this market.

The fact that the service on the Centro trains is almost certainly not directly paid for by the commuters using it may be a factor in the choice, or there is the possibility that the spread of Linux is actually much more widespread than is immediately apparent.

Ultimately, we have a service provider who could have used Windows to provide the service but has chosen Linux instead.

I’d love to see the business case behind that, but I suspect that it is just another very small confirmation that in backing Linux, Oracle has made a very shrewd move.

ukoug 2008

Filed under: Uncategorized — bigdaveroberts @ 1:22 am

It seems to be eight months since I last blogged. Unfortunately, my work seems to be becoming less and less Oracle related. However, with the onset of the 25th anniversary UKOUG conference, I do have a little to blog about!

Much of the excitement this year has related to the launch of the Exadata machine, which is perhaps best described as a data warehouse accelerator.

In technical terms, some of the processing that was previously performed by the database server has been offloaded to 'intelligent' disk arrays.

Ultimately, a little very clever software has been used to leverage high-end commodity hardware to produce a relatively cheap, very fast data warehouse machine, moving Oracle even farther ahead of SQL Server in the crucial decision support arena.

However, as this technology has little relevance to my current customer, I skipped most of the sessions on Exadata, apart from the demonstration of Oracle's Exadata simulator.

Unlike some attendees, it is not my intention to blog about all the sessions that I attend; ultimately I attend the sessions to learn for myself, and without independent investigation I lack confidence that I could relate their content fully and accurately.

What I do intend to do is relate some of the information from the round tables that I attend. It is my opinion that the frank exchange of information between experts at these sessions is highly valuable and, in the absence of formal slides (for obvious reasons), should be documented and disseminated.
