Oracle on VMware: Caging the license dragon

Virtualizing databases has huge financial and operational benefits – in particular with Oracle, where physically deployed database servers are typically heavily under-utilized, which leads to huge over-spending on license costs.

Of course, poor efficiency on database servers leads to higher processing requirements, to a higher number of CPUs purchased, and in turn to massive additional license and maintenance revenue for the software vendor.
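To make the financial claim concrete, here is a minimal back-of-the-envelope sketch in Python. All figures (server count, utilization, core factor, license price) are illustrative assumptions, not real sizings or actual Oracle pricing:

# Back-of-the-envelope sketch: licenses for under-utilized physical servers
# versus a right-sized consolidated platform. All numbers are assumptions.

PHYSICAL_SERVERS   = 10        # dedicated database servers, one workload each
CORES_PER_SERVER   = 16
AVG_UTILIZATION    = 0.15      # assumed typical utilization of isolated servers
TARGET_UTILIZATION = 0.60      # assumed comfortable target after consolidation
CORE_FACTOR        = 0.5       # assumed per-core licensing factor for x86
PRICE_PER_LICENSE  = 47_500    # assumed price per processor license

def licenses(cores: int) -> float:
    return cores * CORE_FACTOR

# Physical: every core in every server must be licensed, busy or not.
physical_cores = PHYSICAL_SERVERS * CORES_PER_SERVER
# Consolidated: size for the actual aggregate workload plus headroom.
consolidated_cores = round(physical_cores * AVG_UTILIZATION / TARGET_UTILIZATION)

for label, cores in (("physical", physical_cores), ("consolidated", consolidated_cores)):
    lic = licenses(cores)
    print(f"{label:>13}: {cores:>4} cores -> {lic:.0f} licenses (~${lic * PRICE_PER_LICENSE:,.0f})")

With these assumed numbers the licensed core count drops from 160 to 40 – which is exactly why the licensing rules around virtualized clusters (the subject of this post) matter so much.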

No surprise, then, that software vendors attempt to stop or delay efforts to fix this inefficiency in any way they can, using all the tricks in the playbook plus a number of dirty tricks you will never find in books on business ethics.

The latest roadblock Oracle has come up with is what we’ll refer to as the vMotion trap.

Disclaimer: I will not be liable for any false, inaccurate, inappropriate or incomplete information presented in this post. If you want to use the information in this post, verify the legal implications yourself or with advice from an independent, specialized third party.


The IOPS race is over

Infrastructure has always been a tough place to compete in. Unlike applications, databases or middleware, infrastructure components are fairly easy to replace with another make and model, so vendors try hard to show that their product is better than the competition’s.

In the case of storage subsystems, the important metrics have always been performance related – IOPS (I/O operations per second) in particular.

I remember a period when competitors of our high-end arrays (EMC Symmetrix, these days usually just called EMC VMAX) tried to artificially boost their benchmark numbers by limiting the data access pattern to only a few megabytes per front-end I/O port. This caused their array to handle all I/O in the small memory buffer cache of each I/O port – and none of the I/Os would actually be handled by either the central cache memory or the back-end disks. This way they could push their IOPS numbers far above ours. Of course, no real-world application would ever store only a few megabytes of data, so the numbers were pure bogus – but marketing-wise it was an interesting move, to say the least.
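To see why such a setup produces meaningless numbers, here is a toy model of how the delivered IOPS depend on whether the working set fits in a tiny front-end port cache. All figures are made up for illustration; real arrays behave far more subtly:

# Toy model: benchmark IOPS versus working-set size (all numbers assumed).

PORT_CACHE_MB  = 8          # assumed tiny buffer per front-end port
CACHE_HIT_IOPS = 200_000    # assumed rate when served from the port buffer
BACKEND_IOPS   = 20_000     # assumed rate when I/O must hit central cache/disks

def effective_iops(working_set_mb: float) -> float:
    # Fraction of accesses that land in the port buffer, assuming uniform access.
    hit_ratio = min(1.0, PORT_CACHE_MB / working_set_mb)
    # Average service time is a blend of the fast and slow paths.
    avg_time = hit_ratio / CACHE_HIT_IOPS + (1.0 - hit_ratio) / BACKEND_IOPS
    return 1.0 / avg_time

for ws_mb in (4, 8, 64, 1024, 1_048_576):    # 4 MB up to 1 TB
    print(f"working set {ws_mb:>9} MB -> ~{effective_iops(ws_mb):>9,.0f} IOPS")

Shrink the working set to a few megabytes and the "benchmark" reports the cache-only number; use a realistic dataset and the rate collapses to whatever the back end can actually sustain.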

With the introduction of the first Sun-based Exadata (the Exadata V2) in late 2009, Oracle also jumped into the IOPS race and claimed a staggering one million IOPS. Awesome! So the gold standard was now 1 million IOPS, and the other players had to play along in the “mine’s bigger than yours” vendor contest.

Baking a cake: trading CPU for IO?

Sometimes I hear people claim that by using faster storage, you can save on database licenses. True or false?

The idea is that many database servers suffer from I/O wait – which actually means that the processors are waiting for data to be transferred to or from storage, and in the meantime no useful work can be done. Given the expensive licenses needed to run commercial database software, usually licensed per CPU core, this leads to a loss of efficiency.
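A minimal sketch of that reasoning, with assumed numbers (the core count, core factor and license price are illustrative, not a real configuration or actual pricing):

# Sketch: licensed capacity wasted by I/O wait (all numbers assumed).

CORES             = 32
CORE_FACTOR       = 0.5       # assumed per-core licensing factor
PRICE_PER_LICENSE = 47_500    # assumed price per processor license

def cost_per_busy_core(io_wait_fraction: float) -> float:
    # Total license cost divided by the cores doing useful work on average.
    license_cost = CORES * CORE_FACTOR * PRICE_PER_LICENSE
    busy_cores = CORES * (1.0 - io_wait_fraction)
    return license_cost / busy_cores

for wait in (0.0, 0.3, 0.6):
    print(f"{wait:.0%} I/O wait -> ~${cost_per_busy_core(wait):,.0f} per effectively busy core")

In other words: if faster storage removes most of the I/O wait, the same licensed cores get more work done – or the same work fits on fewer licensed cores. Whether that translates into real license savings is exactly the question this post explores.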

Let’s see if we can visualise the problem with a real-world example – baking a cake.

Silly Little Oracle Benchmark – RPM edition

A while ago Kevin Closson announced a new release of the well-known SLOB kit.

SLOB is a simple but powerful toolkit that drives lots and lots of IO on a real Oracle database (so for performance testing of database platforms, it’s much better than synthetic IO tests).

A previous version was bundled with Outrun but required the entire Outrun distribution to work properly. For the new 2.3 version I created an RPM package that can be installed separately on any Enterprise Linux 6.x (64-bit) server.

The wiki page (including instructions) can be found here: SLOB RPM Package wiki

Thanks to Kevin for granting permission to redistribute this awesome toolkit!


Interview with Madora

A while ago I was interviewed by Kay Williams of Madora Consulting.


As many customers are overwhelmed by licensing, audit and compliance issues, I highly recommend that my EMEA readers reach out to Madora if they need independent assistance in that area.

In the interview we discussed a bit of my background, the challenges my customers are facing and how we help them, a bit on the future of Oracle and EMC, as well as things like cloud computing, how EMC sometimes competes with Oracle, my views on Oracle Engineered Systems, and where the two companies are fundamentally different. It has been out there for a while, but I was enjoying vacation so I hadn’t mentioned it before – here it is :-)

Expected reading time: about 10 minutes. Many thanks to Kay for the interview!

Enjoy: Madora – Interview with Bart Sjerps of EMC

This post first appeared on Dirty Cache by Bart Sjerps. Copyright © 2011 – 2015. All rights reserved. Not to be reproduced for commercial purposes without written permission.

Introducing Outrun for Oracle

Overview

If you want to get your hands dirty with the Oracle database, the first thing you have to do is build a system that actually runs it. Unless you have done that several times before, chances are this will take considerable time spent on trial and error, several reinstalls, fixing install problems and dependencies, and so on. For someone who is reasonably experienced on Linux but has no prior Oracle knowledge, it would probably take anywhere from a full working day (8 hours, best case) to many days. I have also witnessed people giving up entirely.

Even for experienced users, doing the whole process manually over and over again is very time consuming, and deploying five or more systems by hand guarantees that each one of them will be slightly different – and thus a candidate for subtle problems that happen on one but not the others. Virtualization and consolidation are all about consistency and treating many components as if they were one.

There are literally dozens of web pages (such as blog posts) that contain detailed instructions on how to set up Oracle on a certain platform. Some examples:

The Gruff DBA – Oracle 12cR1 12.1.0.1 2-node RAC on CentOS 6.4 on VMware Workstation 9 – Introduction
Pythian – How to Install Oracle 12c RAC: A Step-by-Step Guide
Martin Bach – Installing Oracle 12.1.0.2 RAC on Oracle Linux 7-part 1

Even if you follow the guidelines in such articles, you are likely to run into problems due to running a different OS or Oracle version, network issues, and so on. Not to mention that the “best practices” provided by various vendors are often not honoured because they tend to be overlooked due to information overload…

Some people have suggested using automated deployment tools such as Ansible (e.g. Frits Hoogland – Using Ansible for executing Oracle DBA tasks), but as far as I know there are no complete out-of-the-box solutions.

EMC has published several white papers and reference architectures with instructions on how to set up Oracle to run best on EMC. Still, some of these papers are not step-by-step manuals, so you have to extract configuration details manually from various (sometimes conflicting) sources and convert them into configuration file entries, commands, and so on.

So a while ago I decided to go for a different approach and build a virtual appliance that does all of these things for you, while still offering (limited) flexibility in platform, versions and configuration preferences.


Comparing database replication features

It’s still a hot topic in my customer conversations: Should we use Oracle Data Guard or something else for providing disaster recovery?
I wrote an explanation a while ago. Recently I also created a PowerPoint slide comparing various features – in an attempt to be as unbiased as possible (I think I partly succeeded ;)

I’ve put the comparison on a static page on my blog and will update it any time I get new insights or think I can otherwise improve it.

View the comparison here: Comparing DR features

This post first appeared on Dirty Cache by Bart Sjerps. Copyright © 2011 – 2015. All rights reserved. Not to be reproduced for commercial purposes without written permission.

The Oracle Parking Garage

Oracle parking garage

(Thanks to House of Brick Technologies)

Oracle Data Placement on XtremIO

Many customers these days are implementing Oracle on XtremIO so they can benefit from excellent, predictable performance as well as features like inline compression and deduplication, snapshots, ease of use, and so on.

Those benefits come at a price, and if you just consider XtremIO on a usable-gigabyte basis, it does not come cheap. Things change once you calculate the savings from those special features. Still, customers are trying to get the best bang for the buck, so one customer asked me whether it would make sense to place only Oracle datafiles on XtremIO and leave everything else on classic EMC storage. This would mean redo logs, archive logs, control files, temp tablespaces, binaries and everything else *except* the datafiles would be stored on an EMC VNX or VMAX. The purpose, of course, is to have only the things that require fast random reads (the tables) on XtremIO.

I can clearly see the line of thinking, but my response was to change the layout slightly: I highly recommend placing everything that is needed to make up a database – as one consistent set – on the same storage box.


Oracle ASM vs ZFS on VNX

In my last post on ZFS I shared results of a lab test where ZFS was configured on Solaris x86 using XtremIO storage. A strange combination, maybe, but it is what a specific customer asked for.

Another customer requested a similar test of ZFS versus ASM, but on Solaris/SPARC and on EMC VNX. Also very interesting, as on VNX we are using spinning disks (not all-flash), so the effects of fragmentation over time should be much more visible.

So with the support of the local administrators, I performed a test similar to the one before: start on ASM and get baseline random and sequential performance numbers, then move (copy) the tablespace to ZFS so you start off with as little fragmentation as possible. Then run random read/write followed by sequential read, multiple times, and see how the I/O behaves.
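For readers who want to reproduce the shape of such a run, here is a heavily simplified, filesystem-level sketch in Python. The file path and sizes are hypothetical, it is not the actual Oracle/SLOB harness, and unlike the real test it does not bypass the OS page cache with direct I/O, so treat the output as illustrative only:

# Simplified sketch of the test sequence: sequential load, then repeated
# rounds of random read/write followed by a sequential read, to see whether
# sequential throughput degrades as the file gets rewritten ("fragmented").
import os, random, time

PATH       = "/tmp/fragtest.dat"   # hypothetical test file
FILE_MB    = 256                   # deliberately small; the real test used a full tablespace
BLOCK      = 8192                  # 8 KB, matching a typical Oracle block size
ROUNDS     = 3
RANDOM_OPS = 20_000

blocks = FILE_MB * 1024 * 1024 // BLOCK
with open(PATH, "wb") as f:        # initial sequential load ("copy the tablespace")
    for _ in range(blocks):
        f.write(os.urandom(BLOCK))

for rnd in range(1, ROUNDS + 1):
    # Random read/write phase: rewriting random blocks is what fragments ZFS over time.
    with open(PATH, "r+b") as f:
        for _ in range(RANDOM_OPS):
            f.seek(random.randrange(blocks) * BLOCK)
            if random.random() < 0.5:
                f.read(BLOCK)
            else:
                f.write(os.urandom(BLOCK))
    # Sequential read phase: time a full scan of the file.
    start = time.time()
    with open(PATH, "rb") as f:
        while f.read(1024 * 1024):
            pass
    print(f"round {rnd}: sequential read ~{FILE_MB / (time.time() - start):.0f} MB/s")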