Oracle LMS response to licensing on VMware

Last week I was in London to attend the UK Oracle User Group licensing event. After a number of sessions with excellent material that led to very interesting discussions (one presenter showed – with permission – some of my own content, with the comment that it saved this customer “a shitload of money” – thanks John for the mention :), there was a session from Oracle LMS UK (License Management Services).

A few interesting points from their presentation are worth sharing, as otherwise you would not get much insight into the working methods of LMS.
Read more of this post

Oracle on VMware: Caging the license dragon

Virtualizing databases has huge financial and operational benefits – in particular with Oracle, where physically deployed database servers are typically heavily under-utilized, which leads to huge over-spending on license costs.

Of course, poor efficiency on database servers leads to higher processing requirements, to a higher number of CPUs purchased, and in turn to massive additional license and maintenance revenue for the software vendor.

No surprise that software vendors attempt to stop or delay efforts to reduce poor efficiency in any way they can, using all the tricks in the playbook plus a number of dirty tricks that you will never find in books on business ethics.

The latest roadblock Oracle has come up with is what we’ll refer to as the VMotion trap.

Disclaimer: I will not be liable for any false, inaccurate, inappropriate or incomplete information presented in this post. If you want to use the information in this post, verify the legal implications yourself or with advice from an independent, specialized 3rd party.

Read more of this post

Baking a cake: trading CPU for IO?

Sometimes I hear people claim that by using faster storage, you can save on database licenses. True or false?

The idea is that many database servers suffer from IO wait – which actually means that the processors are waiting for data to be transferred to or from storage, and in the meantime no useful work can be done. Given the expensive licenses needed to run commercial database software, usually licensed per CPU core, those idle wait cycles translate directly into lost efficiency.
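To see whether a host actually suffers from this, standard Linux tools already tell the story. A minimal check (vmstat from procps and iostat from the sysstat package, both common on Linux distributions – the sampling intervals are just examples) could look like this:

    # Watch how much CPU time is spent waiting on storage (the "wa" column):
    vmstat 5 3
    # Per-device view: high "await" and "%util" together with a high %iowait
    # in the avg-cpu header point at storage being the bottleneck:
    iostat -x 5 3

If the wait percentages are consistently high, faster storage (or better caching) frees up CPU cycles – whether that translates into fewer licensed cores is exactly the question the analogy below tries to answer.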

Let’s see if we can visualise the problem with a real-world example – baking a cake.
 
 

Read more of this post

Interview with Madora

A while ago I was interviewed by Kay Williams of Madora Consulting.

Madora Interview

As many customers are overwhelmed by licensing, audit and compliance issues, I highly recommend that my EMEA readers reach out to Madora if they need independent assistance in that area.

In the interview we discussed my background, the challenges my customers are facing and how we help them, the future of Oracle and EMC, as well as things like Cloud computing, how EMC sometimes competes with Oracle, my views on Oracle Engineered Systems, and where the two companies are fundamentally different. It has been out there for a while, but I was enjoying a vacation so I hadn’t mentioned it before – here it is :-)

Expected reading time: about 10 minutes. Many thanks to Kay for the interview!

Enjoy: Madora – Interview with Bart Sjerps of EMC

This post first appeared on Dirty Cache by Bart Sjerps. Copyright © 2011 – 2015. All rights reserved. Not to be reproduced for commercial purposes without written permission.

Tales from the past – Disaster Recovery testing

A long time ago in a datacenter far, far away….

Turmoil has engulfed the IT landscape. Within the newly formed digital universe,
corporate empires are becoming more and more
dependent on their digital data and computer systems.
To avoid downtime when getting hit by an evil strike, the corporations are
starting to build disaster recovery capabilities in their operational architectures.

While the congress of the Republic endlessly debates whether
the high cost of decent recovery methods is justified,
the Supreme CIO Chancellor has secretly dispatched a Jedi Apprentice,
one of the guardians of reliability and availability,
to validate existing recovery plans…

Another story from my days as a UNIX engineer in the late nineties. I obfuscated all company and people names to protect their reputation and avoid disclosing sensitive information, but former colleagues might recognize parts of the stories, or maybe everything. Also, some of it happened a long time ago and I cannot be sure everything I say is factually correct. The human memory is notoriously unreliable.

In those days, our company was still relying on tape backup as its only Disaster Recovery (DR) strategy. The main datacenter had a bunch of large tape silos where, on a daily basis, trays of tapes were unloaded, packed and labeled in a small but strong suitcase, and sent to an off-site location (Pickup Truck Access Method) so the invaluable data could be salvaged in case our entire datacenter went up in flames.

Read more of this post

Introducing Outrun for Oracle

Overview

If you want to get your hands dirty with the Oracle database, the first thing you have to do is build a system that actually runs it. Unless you have done that several times before, chances are this will take considerable time spent on trial and error, several reinstalls, fixing install problems and dependencies, and so on. For someone who is reasonably experienced with Linux but has no prior Oracle knowledge, it would probably take anywhere from a full working day (8 hours, best case) to many days. I have also witnessed people giving up altogether.

Even for experienced users, doing the whole process manually over and over again is very time consuming, and deploying five or more systems by hand is a guarantee that each one of them is slightly different – and thus a candidate for subtle problems that happen on one but not the others. Virtualization and consolidation are all about consistency and making many components behave as if they were only one.

There are literally dozens of web pages (such as blog posts) that contain detailed instructions on how to set up Oracle on a certain platform. Some examples:

  • The Gruff DBA – Oracle 12cR1 12.1.0.1 2-node RAC on CentOS 6.4 on VMware Workstation 9 – Introduction
  • Pythian – How to Install Oracle 12c RAC: A Step-by-Step Guide
  • Martin Bach – Installing Oracle 12.1.0.2 RAC on Oracle Linux 7-part 1

Even if you follow the guidelines in such articles, you are likely to run into problems due to running a different OS or Oracle version, network issues, and so on. Not to mention that the “best practices” provided by various vendors are often not honoured because they tend to be overlooked due to information overload…

Some people have hinted at using automated deployment tools such as Ansible (e.g. Frits Hoogland – Using Ansible for executing Oracle DBA tasks) but there are, as far as I know, no complete out-of-the-box solutions.
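To give an idea of what such automation typically wraps, here is a rough sketch of the prerequisite steps on an Oracle Linux / RHEL-style host – this is not how Outrun itself works, and the preinstall package name and directory layout are assumptions based on common Oracle Linux conventions:

    #!/bin/bash
    # Sketch of automated Oracle prerequisite setup (assumes Oracle Linux with yum).
    set -e
    # The vendor "preinstall" package creates users, groups, kernel parameters and limits:
    yum -y install oracle-rdbms-server-12cR1-preinstall
    # Create a standard OFA-style base directory for the Oracle software owner:
    mkdir -p /u01/app/oracle
    chown -R oracle:oinstall /u01/app/oracle
    chmod -R 775 /u01/app/oracle
    # From here, a silent installer run with a response file would take over
    # (runInstaller -silent -responseFile /path/to/db_install.rsp).

A complete solution obviously has to handle much more – storage, networking, patching – which is exactly the gap a pre-built appliance tries to close.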

EMC has published several white papers and reference architectures with instructions on how to set up Oracle to run best on EMC. Still, some of these papers are not step-by-step manuals, so you have to extract configuration details manually from various (sometimes conflicting) sources and convert them into configuration file entries, commands, etc.

So a while ago I decided to take a different approach and build a virtual appliance that does all of these things for you, while still offering (limited) flexibility in platform, versions and configuration preferences.

Read more of this post

Tales from the past – Overheated Datacenter

A long time ago in a datacenter far, far away….

It is a period of digital revolution.

Rebel Dot Com companies, striking from hidden basements and secret lofts,
have won their first fights against long-standing evil corporate empires.

During the battles, rebel geeks have managed to invent secret technology to
replace the corporations’ old ultimate weapons,
such as snail mail and public telephone networks currently powering the entire planet.

Contracted by the Empire’s sinister CIOs, the UNIX Engineer and author of this blog
races against the clock across the UNIX root directories,
to prepare new IT infrastructure for the upcoming battle –
while at the same time, trying to keep the old weapons of mass applications available and running
as best as he can, to safeguard the customers’ freedom in the digital galaxy.

In the late nineties, before I switched to the light side of the Force and joined EMC, I was a UNIX engineer working as a contractor for financial institutions. This is the first in a number of stories from that period and later. I obfuscated all company and people names to protect their reputation and avoid disclosing sensitive information, but former colleagues might recognize parts of the stories, or maybe everything. Also, some of it happened a long time ago and I cannot be sure everything I say is factually correct. The human memory is notoriously unreliable.

It was late on a Friday afternoon.

Everyone in my department had already left for the weekend, but I was working on a critical infrastructure project with a tight deadline – otherwise I guess I would have left already, too.

At some point I needed to re-install a UNIX server, which in those days was done by physically booting them from an install CD – so I needed to go to the datacenter room and get physical console access to get that going. I walked to the datacenter floor, which hosted several large UNIX systems, a mainframe, a number of EMC Symmetrix storage systems, network gear, lots of Intel servers mostly running Windows NT and maybe a few Novell.

There were large tape libraries for backup, lots of server racks, fire extinguishers and whatever you typically find in a large datacenter floor like that. I used my keycard to open the door to the datacenter and stepped in… The first thing I thought was, wow, it’s warm in here…

Read more of this post

The Oracle Parking Garage

Oracle parking garage

(Thanks to House of Brick Technologies)

 

Fun with Linux UDEV and ASM: Using UDEV to create ASM disk volumes

Because of the many discussions and confusion around the topic of partitioning, disk alignment and its sibling issue, ASM disk management, here is an explanation of how to use UDEV – and as an extra, I present a tool that manages some of this stuff for you.

The questions could be summarized as follows:

  • When do we have issues with disk alignment and why?
  • What methods are available to set alignment correctly and to verify?
  • Should we use ASMlib or are there alternatives? If so, which ones and how to manage those?

I’ve written two blog posts on the matter of alignment, so I am not going to repeat the details. The only thing you need to remember is that classic “MS-DOS” disk partitioning, by default, starts the first partition on the disk at the wrong offset (wrong in terms of optimal performance). The old partitioning scheme was invented when physical spinning rust was formatted with 63 sectors of 512 bytes per disk track. Because you need some header space for the boot block and partition table, the smart guys back then thought it was a good idea to start the first block of the first data partition on track 1 (instead of track 0). These days we have completely different physical disk geometries (and sometimes even different sector sizes, another interesting topic) but we still carry the legacy of those old days.

If you’re not using an Intel x86_64-based operating system, then chances are you have no alignment issues at all (the only exception I know of is Solaris when using “fdisk”, which has a similar problem). If you use the newer partitioning method (GPT), the issue is gone – but many BIOSes, boot methods and other tools cannot handle GPT. As MS-DOS partitioning is limited to 2 TiB (http://en.wikipedia.org/wiki/Master_boot_record) it will probably be a thing of the past in a few years, but for now we have to deal with it.

Wrong alignment causes some reads and writes to be broken into two pieces, causing extra IOPS. I don’t have hard numbers, but a long time ago I was told the overhead could be up to 20%. So we need to get rid of it.
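Checking whether an existing partition is affected is straightforward with standard tools (the device name below is just an example): a start sector of 2048 (1 MiB) or any multiple of it is safely aligned, a start sector of 63 is not.

    # Show partition start sectors:
    parted -s /dev/sdb unit s print
    # Or with fdisk (util-linux); look at the "Start" column:
    fdisk -l /dev/sdb
    # Recent parted and fdisk versions align new partitions at 1 MiB by default, e.g.:
    # parted -s /dev/sdb mklabel gpt mkpart primary 1MiB 100%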

ASM storage configuration

ASM does not use OS file systems or volume managers but has its own way of managing volumes and files. It “eats” block devices, and these block devices need to be readable and writable by the user/group that runs the ASM instance, as well as the user/group that runs the Oracle database processes (an open secret is that ASM is out-of-band and databases write directly to ASM data chunks). ASM does not care about the name or device numbers of a block device, nor does it care whether it is a full disk, a partition, or some other type of device, as long as it behaves as a block device under Linux (and probably other UNIX flavors). It does not need partition tables at all but writes its own disk signatures to the volumes it gets.
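As an illustration of the kind of udev rule involved (a minimal sketch – the SCSI identifier, symlink name and oracle:dba ownership are examples, not taken from the tool mentioned above), a single rule in /etc/udev/rules.d/ can give a LUN a stable name and the ownership ASM expects:

    # /etc/udev/rules.d/99-oracle-asm.rules
    # Match the disk by its SCSI identifier, expose it as /dev/oracleasm/data01, owned by the ASM owner:
    KERNEL=="sd*", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="36000c29exampleserial0000000001", SYMLINK+="oracleasm/data01", OWNER="oracle", GROUP="dba", MODE="0660"

    # Activate without rebooting:
    # udevadm control --reload-rules && udevadm trigger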

[ Warning: Lengthy technical content, Rated T, parental advisory required ]

Read more of this post

Oracle, VMware and sub-server partitioning

Last week (during EMC World) a discussion came up on Twitter around Oracle licensing and whether Oracle would support CPU affinity as a way to license subsets of a physical server these days.

Unfortunately, the answer is NO (that is, if you run any hypervisor other than Oracle’s own Oracle VM). Enough has been said about this being anti-competitive and obviously another way for Oracle to lock customers into their own stack. But, keeping my promise, here’s the blog post ;-)

A good writeup on that can be found here: Oracle’s reaction on the licensing discussion
And see Oracle’s own statement on this: Oracle Partitioning Policy

So let’s accept the situation and see if we can find smarter ways to run Oracle on a smaller license footprint – without having to use an inferior hypervisor from a vendor who isn’t likely to help you use it to reduce license costs…

The vast majority of enterprise customers run Oracle on CPU-based licensing (more precisely, licensing is based on how many cores run Oracle or have Oracle installed).
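Counting the cores that drive the license requirement is easily scripted. A rough sketch (the 0.5 core factor is the commonly published value for most x86 processors, but always verify against Oracle’s current core factor table and your contract):

    # Count physical cores on a Linux host:
    sockets=$(lscpu | awk '/^Socket\(s\):/ {print $2}')
    cores_per_socket=$(lscpu | awk '/^Core\(s\) per socket:/ {print $4}')
    total_cores=$((sockets * cores_per_socket))
    echo "Physical cores: ${total_cores}"
    # Indicative EE processor licenses at a 0.5 core factor:
    awk -v c="$total_cores" 'BEGIN { printf "Indicative licenses: %.1f\n", c * 0.5 }'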
Read more of this post