Linux Disk Alignment Reloaded

My all-time most-viewed post is the one on Linux disk alignment: How to set disk alignment in Linux. In that post I showed an easy method for setting and checking disk alignment under Linux.
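
For a quick check on a modern Linux system, a minimal Python sketch could look like the one below (not necessarily the method from that post): it reads a partition's start sector from sysfs and verifies that it falls on a 1 MiB boundary. The device and partition names are just examples.

```python
#!/usr/bin/env python3
"""Minimal sketch: check whether a partition starts on a 1 MiB boundary.

Reads the partition start sector from sysfs (available on modern Linux
kernels). Device/partition names are examples only.
"""
import sys
from pathlib import Path

SECTOR_SIZE = 512             # sysfs reports 'start' in 512-byte sectors
ALIGN_BOUNDARY = 1024 * 1024  # 1 MiB, a common safe alignment boundary

def check_alignment(disk: str, part: str) -> bool:
    start_sector = int(Path(f"/sys/block/{disk}/{part}/start").read_text())
    offset_bytes = start_sector * SECTOR_SIZE
    aligned = offset_bytes % ALIGN_BOUNDARY == 0
    print(f"{part}: starts at byte {offset_bytes} "
          f"-> {'aligned' if aligned else 'MISALIGNED'}")
    return aligned

if __name__ == "__main__":
    # Example: python3 check_align.py sda sda1
    check_alignment(sys.argv[1], sys.argv[2])
```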
Read more of this post

Oracle Exadata X3 Database In-Memory Machine: Timely Thoughtful Thoughts For The Thinking Technologist – Part I

Awesome post by Kevin! Recommended read if you are interested in Oracle Exadata.

Kevin Closson's Blog: Platforms, Databases and Storage

Oracle Exadata X3 Database In-Memory Machine – An Introduction
On October 1, 2012 Oracle issued a press release announcing the Oracle Exadata X3 Database In-Memory Machine. Well-chosen words, Oracle marketing, surgical indeed.

Words matter.
Games Are Games–Including Word Games
Oracle didn’t issue a press release about Exadata “In-Memory Database.” No, not “In-Memory Database” but “Database In-Memory” and the distinction is quite important. I gave some thought to that press release and then searched Google for what is known about Oracle and “in-memory” database technology. Here is what Google offered me:

[Screenshots: Google search results for Oracle "in-memory" database technology]

With the exception of the paid search result about voltdb, all of the links Google offered take one to information about Oracle's TimesTen In-Memory Database, which is a true "in-memory" database. But this isn't a blog post about semantics. No, not at all. Please read on.

Seemingly Silly…

View original post 3,187 more words

Managing REDO log performance


I have written before about managing database performance issues, and the topic is as hot and alive as ever, even with today's fast processors, huge memory sizes and enormous bandwidth to storage and networks.

warning: Rated TG (Technical Guidance required) for sales guys and managers ;-)

A few recent conversations with customers showed other examples of miscommunication between IT teams, resulting in problems not being solved efficiently and quickly.
In this case, the problem was around Oracle REDO log sync times, and some customers had a whole bunch of questions for me about what EMC's best practices are, how they enhance or replace Oracle's best practices, and in general how they should configure REDO logs in the first place to get the best performance. The whole challenge is complicated by the fact that more and more organizations are using EMC's FAST-VP for automated tiering and performance balancing of their applications, and some of the questions were around how FAST-VP improves (or messes up) REDO log performance.

Read more of this post

Application processing at lightning performance – The hourglass view of access times

Even in these modern times, when lots of things are changing in the ICT world, some lessons from the past still hold true.

Previously, I discussed the I/O stack in a typical database environment. Although virtualization has complicated things a bit, the fundamental principles of performance tuning stay the same.

Recently I was browsing through old presentations of colleagues and found another interesting view on response times in an application stack. Again, I polished it up a bit and modified it to reflect a few innovations and personal insights.

The idea is as follows. We as humans have trouble getting a feel for how fast modern microprocessors work. We talk in milliseconds, microseconds, nanoseconds. So – in the comparison we assume a 1 Gigahertz processor and then scale up one nanosecond to match one second – because this fits better with a human's view of the world. Then we compare various sorts of storage on the "indexed" timescale and see how they relate to each other.
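
To make the scaling concrete, here is a small Python sketch of that indexed timescale. The latencies are rough, illustrative assumptions (not figures from my colleague's presentation), but they show how dramatic the gaps become once one nanosecond is stretched to one second.

```python
# Scale 1 ns -> 1 s so access times become human-sized.
# The latencies below are rough, illustrative assumptions.

latencies_ns = {
    "CPU cycle (1 GHz)":      1,
    "L2 cache access":        10,
    "DRAM access":            100,
    "Flash / SSD read":       100_000,          # ~100 microseconds
    "Spinning disk access":   6_000_000,        # ~6 milliseconds
    "Tape mount + position":  30_000_000_000,   # ~30 seconds
}

def humanize(seconds: float) -> str:
    """Render a number of (scaled) seconds in human-friendly units."""
    if seconds < 60:
        return f"{seconds:,.0f} seconds"
    if seconds < 3600:
        return f"{seconds / 60:,.1f} minutes"
    if seconds < 86400:
        return f"{seconds / 3600:,.1f} hours"
    if seconds < 86400 * 365:
        return f"{seconds / 86400:,.1f} days"
    return f"{seconds / (86400 * 365):,.1f} years"

for name, ns in latencies_ns.items():
    # On the scaled clock, a latency of X nanoseconds takes X seconds.
    print(f"{name:24s} {humanize(ns)}")
```

On this scale a single disk access takes a couple of months, which makes it obvious why avoiding unnecessary disk I/O pays off.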

Read more of this post

Monkey Business

Maybe you have heard the story of the Monkey Experiment. It is about an experiment with a bunch of monkeys in a cage, a ladder, and a banana. At a certain point one of the monkeys sees the banana hanging up high, starts climbing the ladder, and then the researcher sprays all the monkeys with cold water. The climbing monkey tumbles down before even getting the banana, looks puzzled, and waits until he's dry again and his ego is back on its feet. He tries again, same result, all monkeys are sprayed wet. Some of the others try it a few times until they learn: don't climb for the banana or you will get wet and cold.

The second part of the experiment is where it gets more interesting. The researcher removes one of the monkeys and replaces him with a fresh, dry monkey with an unharmed ego. After a while he spots the banana, wonders to himself why the other monkeys are so stupid not to go for it, and gives it a try. But when he reaches the ladder, the other monkeys kick his ass and make it very clear he is not supposed to do so. After the new monkey is conditioned not to go for the banana, the researcher replaces the "old" monkeys, one by one, with new ones. Every new monkey goes for the banana until he learns not to do so.

Eventually the cage is full of monkeys who know that they are not allowed to climb the ladder to get the banana. None of them knows why – it’s just the way it is and always has been…
Read more of this post

Information Lifecycle Management and Oracle databases – part 3

Archiving and purging old data

In the end, the only way to seriously reduce the effective size of a database (after using all the innovations at the infrastructure level) is to move data out of the database onto something else. This goes a bit against Oracle's preferred approach, as they propose to keep as much of the application data in the database for as long as possible (I wonder why…)

We could separate all archiving methods into two categories:

  • Methods that don't change the RDBMS representation and just move tables or records to a different location in the same or a different database (a minimal sketch of this follows below);
  • Methods that convert database records into something else and remove them from the database layer completely.
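
For the first category, the idea boils down to copying rows older than a cutoff date into an archive table and purging them from the active table. The sketch below illustrates this; the table and column names (orders, orders_archive, order_date) and the bind style are made-up examples, and 'conn' is assumed to be an open Python DB-API connection.

```python
# Sketch of the first category: relocate old rows to an archive table in
# the same database, then purge them from the active table.
# Table/column names are invented for illustration; 'conn' is assumed to
# be an open DB-API connection (e.g. cx_Oracle) supporting named binds.

def archive_old_orders(conn, cutoff_date):
    cur = conn.cursor()
    # 1. Copy rows older than the cutoff into the archive table
    #    (assumed to have the same structure as the active table).
    cur.execute(
        "INSERT INTO orders_archive SELECT * FROM orders "
        "WHERE order_date < :cutoff",
        {"cutoff": cutoff_date},
    )
    # 2. Purge the archived rows so the active table's working set shrinks.
    cur.execute(
        "DELETE FROM orders WHERE order_date < :cutoff",
        {"cutoff": cutoff_date},
    )
    conn.commit()
```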

Read more of this post

Information Lifecycle Management and Oracle databases – part 2

Database compression

Another technique that Oracle has improved as of version 11g is compression. In versions up to 10g you could only compress an entire table, and once compressed, random-access performance on such a table was poor. It worked well for data warehouses, where it reduces the required I/O bandwidth (compressed data can be read from disk quicker than uncompressed data), but only in specific cases.

In 11g Oracle introduced "advanced" compression. I will not go into details, but it allows compression on a much more granular, record-by-record basis, so that OLTP applications can benefit as well. Oracle claims this reduces the total database size (no-brainer :) ) and therefore also the backup size (thereby ignoring the effects of the tape compression that most customers use, so your mileage may vary). Data can only be compressed once, so on tape the size of a normal database compared to a compressed one will probably not differ much once tape compression is enabled.
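
That last point is easy to demonstrate with a few lines of Python, using zlib as a stand-in for both the database compression and the tape drive compression. The sample data is synthetic, so the exact ratios are illustrative only, but the second pass consistently gains little or nothing.

```python
# Illustration of "data can only be compressed once": compressing
# already-compressed data yields little or no additional reduction.
# zlib stands in for both database and tape compression; the sample
# data is synthetic, so the exact ratios are illustrative only.
import os
import zlib

# Compressible sample: repetitive records plus a little random noise.
data = (b"customer_record;status=SHIPPED;region=EMEA;" * 20_000
        + os.urandom(1_000))

once = zlib.compress(data)    # "database" compression
twice = zlib.compress(once)   # "tape" compression on top of it

print(f"original         : {len(data):>9,} bytes")
print(f"compressed once  : {len(once):>9,} bytes")
print(f"compressed twice : {len(twice):>9,} bytes  (hardly any extra gain)")
```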

Read more of this post

Information Lifecycle Management and Oracle databases – part 1

This is an article I wrote a while ago (late 2009), shortly after EMC introduced Enterprise Flash Drives (EFDs). Although more tooling is available these days to automate storage tiering, the basic concepts are still very valid, and the article is a good explanation of the basic concept of database storage tiering and what we want to achieve with this strategy.

I recommend you read Flash Drives first to get some background knowledge before continuing with ILM.


Innovation with Flash Drives

Enterprise Flash Drives (EFDs – also known as Solid State Disks or SSDs) are an innovation in disk drive technology capable of solving the problem of the low random performance of mechanical disk drives.

Read more of this post

Innovation with Flash Drives – part 2

Energy Efficiency of Flash

Here is a comparison of power consumption of various current drive types:

Power per Terabyte

This picture shows the amount of energy needed to store 1 Terabyte of information. As this requires only a single 1-Terabyte SATA drive, that is the most energy-efficient option (as long as you don't need much performance). The smaller the drive capacity, the more drives you need to store 1 Terabyte, so smaller drives are less energy efficient at just storing data. Faster drives (15,000 rpm) are also the most power-hungry drives, so the faster the drive spins, the more energy is needed per terabyte.
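
The arithmetic behind such a chart is simple to reproduce. The sketch below uses rough, made-up capacity and wattage figures for drive types of that era (not the numbers behind the original chart) just to show how watts per terabyte are derived.

```python
# Back-of-the-envelope "power per terabyte" calculation.
# Capacities and wattages are rough, made-up figures for illustration,
# not the numbers behind the original chart.

drives = {
    # drive type                (capacity in TB, watts per drive)
    "SATA 7.2k rpm, 1 TB":      (1.00, 8.0),
    "FC 10k rpm, 450 GB":       (0.45, 12.0),
    "FC 15k rpm, 300 GB":       (0.30, 15.0),
    "Enterprise Flash, 200 GB": (0.20, 6.0),
}

for name, (capacity_tb, watts) in drives.items():
    drives_per_tb = 1 / capacity_tb        # drives needed for 1 TB raw
    watts_per_tb = drives_per_tb * watts
    print(f"{name:26s} ~{drives_per_tb:4.1f} drives/TB "
          f"-> ~{watts_per_tb:5.1f} W per TB")
```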

Read more of this post

Innovation with Flash Drives – part 1

This is an article I wrote a while ago (mid 2009), shortly after EMC introduced Enterprise Flash Drives (EFDs). Although a bit outdated, it is still a valid introduction to Enterprise Flash Drive technology.

It also explains the difference between consumer-grade flash (the SSD in your laptop or tablet) and enterprise-grade flash (the stuff that makes business applications screamingly fast).


Enterprise Flash Drive technology

Over the last 30 years, we have seen an enormous improvement in hard disk capacity. Back then, a gigabyte drive was quite something, and using a state-of-the-art SCSI interface it could transfer a whopping three megabytes per second!

Modern disk drives are smaller and can easily store a terabyte or more – a 1000-fold improvement! (And 1.5 and 2 terabyte drives are on the way.) With modern Fibre Channel interfaces, a disk can transfer up to 4 Gigabit (about 400 megabytes) per second, which is roughly a 133-fold improvement over 30 years!

Fortunately disk drive technology has more or less kept up with Moore’s law for microprocessors regarding capacity and channel bandwidth.

There is a problem, however. First of all, the hunger for computer storage is outpacing the increase in disk capacities, so we have to buy more and more disks to satisfy our needs (IDC findings show that the amount of information created worldwide is increasing by about 60% each year).

The second problem is that the access performance of disk drives has not improved that much. Finding information on a disk drive still requires mechanical movement: the drive arm has to be moved into position over the right disk track, and then the disk has to rotate until the data passes under the arm. This takes a few milliseconds and depends largely on the rotational speed of the disks – and that has not improved much. 30 years ago the usual rotational speed was 3600 rpm, whereas today's high-performance drives offer 15,000 rpm – a 4-fold improvement, resulting in access times going from about 24 milliseconds 30 years ago to about 6 milliseconds today (note that this is the average time to read data at a random location on an otherwise idle disk).

An important negative side effect of increasing rotational speed is that the power consumption of the drive increases almost exponentially – this is why there have been experiments with 20,000 rpm drives, but these were never widely adopted by the market. Any faster and the drive would overheat or require special cooling.
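
The access-time arithmetic can be reconstructed in a couple of lines: the average rotational latency is half a revolution, plus an average seek time. The assumed seek times below are typical estimates for drives of each era, not figures from the article.

```python
# Average access time = half a revolution (rotational latency) + average seek.
# The assumed seek times (~16 ms then, ~4 ms now) are typical-era estimates,
# not figures from the article.

def avg_access_ms(rpm: int, avg_seek_ms: float) -> float:
    ms_per_revolution = 60_000 / rpm            # one full rotation, in ms
    rotational_latency = ms_per_revolution / 2  # on average, half a turn
    return rotational_latency + avg_seek_ms

print(f"3600 rpm drive  (~16 ms seek): ~{avg_access_ms(3600, 16):.0f} ms")
print(f"15000 rpm drive (~4 ms seek) : ~{avg_access_ms(15000, 4):.0f} ms")
```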

Read more of this post