Oracle 11g Express Edition

Recently I had to upgrade my company laptop to Windows 7. This was a painful ordeal that lasted a whole weekend. The upgrade consisted of reimaging my machine. One byproduct was that my Oracle 10g database was wiped off the machine. Oops. I forgot to back that thing up.

I decided to reinstall Oracle after the upgrade. It was interesting to see that Oracle had an 11g version of their Express Edition free database. The latest version is 11.2, which is currently in Beta.

I was pleased with how fast I could install the Express Edition database. This thing actually installed at Express speeds. I was also reminded that Express Edition does not come with much. You get a command line SQL tool. You get tools to start, stop, back up, and restore the database. That is it. Looks like I need to polish up my command line DBA skills.

The Trigger Project

There has been a lot of drama on our project based on performance problems we are having in production. Some people are attributing the performance issues to the number of triggers we have in the database. That sounds like nonsense to me. I deal in hard facts. There was a total lack of evidence to back up this theory. So I have decided to run some tests in our database to measure the overhead of adding triggers to DML.

I wrote a harness script that updates 10,000 records. It commits after every update. The thing runs at a good speed. It takes around a minute to complete. The table currently has no triggers on it. Then I added one trigger that did nothing in the body of the trigger. I was shocked to find the harness took over 20% longer to run with the trigger.
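For the record, here is a minimal sketch of the kind of harness I am describing. The table, trigger, and column names are all hypothetical; the point is the shape of the test: a commit-per-update loop timed with DBMS_UTILITY.GET_TIME, plus an empty trigger for the second run.

```sql
-- Hypothetical test table; names are illustrative.
CREATE TABLE trigger_test (
  id  NUMBER PRIMARY KEY,
  val VARCHAR2(50)
);

-- Populate 10,000 rows up front.
INSERT INTO trigger_test (id, val)
SELECT level, 'row ' || level FROM dual CONNECT BY level <= 10000;
COMMIT;

-- The harness: update each row, committing after every update.
DECLARE
  t_start NUMBER;
BEGIN
  t_start := DBMS_UTILITY.GET_TIME;  -- hundredths of a second
  FOR i IN 1 .. 10000 LOOP
    UPDATE trigger_test SET val = 'updated ' || i WHERE id = i;
    COMMIT;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('Elapsed: ' ||
    (DBMS_UTILITY.GET_TIME - t_start) / 100 || ' seconds');
END;
/

-- The do-nothing trigger added for the second run.
CREATE OR REPLACE TRIGGER trg_test_empty
  BEFORE UPDATE ON trigger_test
  FOR EACH ROW
BEGIN
  NULL;  -- empty body; isolates pure trigger-firing overhead
END;
/
```

Run the anonymous block once with no triggers on the table, then again after creating trg_test_empty, and compare the elapsed times.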

I had another developer present when I ran the test. He could not believe the results either. Then he had some suggestions to vet the results. I ran the tests a couple more times. Looks like the first run was a fluke. He also suggested I run the tests locally on a database on my machine to eliminate irrelevant variables.

So next I plan to move the tests over to a local database on my machine. This way the network is not a factor. Other users of the database are also eliminated from the equation. Next I am going to time having a lot of triggers on a table. I will implement some cascading triggers. When I am done, I will have a lot of empirical evidence to back up my opinion that triggers in and of themselves have no impact on performance.
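By cascading triggers I mean something like the following sketch, where the tables and trigger names are hypothetical: a DML statement on one table fires a trigger whose own DML fires a trigger on a second table.

```sql
-- Hypothetical cascading setup: an update on parent_t fires a trigger
-- that updates child_t, whose own trigger inserts into audit_t.
CREATE TABLE parent_t (id NUMBER, val VARCHAR2(50));
CREATE TABLE child_t  (id NUMBER, val VARCHAR2(50));
CREATE TABLE audit_t  (id NUMBER, changed DATE);

CREATE OR REPLACE TRIGGER trg_parent
  AFTER UPDATE ON parent_t
  FOR EACH ROW
BEGIN
  UPDATE child_t SET val = :NEW.val WHERE id = :NEW.id;
END;
/

CREATE OR REPLACE TRIGGER trg_child
  AFTER UPDATE ON child_t
  FOR EACH ROW
BEGIN
  INSERT INTO audit_t (id, changed) VALUES (:NEW.id, SYSDATE);
END;
/
```

A single update on parent_t then costs one trigger firing plus the cascaded DML and its trigger, which is exactly the overhead I want to measure.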

Triggers Did It

A job that runs at night has been taking too long every night this week. This causes pain as the system administrators call up our DBAs or help desk in the middle of the night. Obviously we have a problem. I would call it a performance problem. What makes this unique is the response I heard to it. Developers told me a manager decided there would be no new database triggers implemented due to the performance problem.

WTF? This made little sense. Who correlated our performance problem to the presence of triggers? I attended a meeting hosted by this very manager. So I asked about this new "no triggers" edict. He said that he had inspected the PL/SQL source code. He found many database triggers. He also saw trigger firings that caused other triggers to fire as well. Therefore they were suspects in the performance problem.

I said that by this logic, we would find many stored procedures in our source code. The stored procedures get executed. The code runs slow. Therefore we should not use stored procedures. Obviously this argument does not hold up.

Rather than being an obnoxious developer who shoots down ideas, I decided to later pitch a plan to get to the actual root cause of the performance problem. We should find out what pieces of the code are taking a long time to execute. Then we should drill down into the slow pieces and find out why. If we do that and find some database triggers to blame, I agree we should optimize them.
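Oracle even ships a tool for this kind of drill-down. Assuming the DBMS_PROFILER package and its tables are installed (they require running the supplied proftab.sql script), the workflow is roughly this sketch:

```sql
-- Start a profiling run before kicking off the suspect code.
BEGIN
  DBMS_PROFILER.START_PROFILER(run_comment => 'nightly job drill-down');
END;
/

-- ... execute the slow job or procedure here ...

BEGIN
  DBMS_PROFILER.STOP_PROFILER;
END;
/

-- Which lines of which program units ate the time?
SELECT u.unit_name, d.line#, d.total_time, d.total_occur
FROM   plsql_profiler_units u
JOIN   plsql_profiler_data  d
  ON   d.runid = u.runid AND d.unit_number = u.unit_number
ORDER BY d.total_time DESC;
```

If a trigger body shows up at the top of that query, then and only then do we have evidence against triggers.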

I came to find that there is a proposal to do the very thing I recommend. Our manager clarified that we cannot add database triggers without analyzing the performance impact. I am all for that. I run timing studies all the time on slow code. We need to use logic and hard evidence to deal with tough and frustrating problems such as performance. If not, we might as well chirp "add more indexes" like any other lackey and get nowhere.

From the Trenches

I have been following a seminar on how to be a DBA. These talks give you hands-on advice. They are geared towards individuals who find themselves appointed as a DBA by accident. While I am not formally a DBA, I often do some light DBA tasks. So I thought I would study up.

When you start a DBA job, you obviously need to find out what servers and databases you are responsible for. But another important discovery is to find which databases you are not supposed to manage. You should gain info on your systems by talking to actual users. If they are not available, rely on knowledge from developers and testers of the system.

I found out about a nice option in TOAD where you can execute a query against many databases at once. My tool of choice is an old copy of PL/SQL Developer. So far I only log into one database at a time to run queries.

Make sure you find out the cycles of use by the business. This will help you scope out the best time for backups of the database. Speaking of backups, make sure you can recover using the backups. Test out a restore to a different system. It is no good to do backups if in the end they do not work.

Global Temporary Tables

Our system uses temporary tables at times. It makes the data access simpler across different processes such as apps, procedures, and reports. Temporary in this sense means that a table is created, populated, used, and then dropped. However there is another type of temporary table in Oracle.

Global temporary tables (GTT) are implemented by Oracle. They are fast in that they generate far less redo than regular tables. They operate in one of two ways. They either lose their data at the end of a transaction, or at the end of a session. Which way is determined when you create the table.
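The two flavors come down to one clause in the DDL. Here is a small sketch with made-up table names:

```sql
-- Transaction-scoped: rows vanish at COMMIT or ROLLBACK.
CREATE GLOBAL TEMPORARY TABLE gtt_txn (
  id  NUMBER,
  val VARCHAR2(50)
) ON COMMIT DELETE ROWS;

-- Session-scoped: rows survive commits but vanish when the session ends.
CREATE GLOBAL TEMPORARY TABLE gtt_sess (
  id  NUMBER,
  val VARCHAR2(50)
) ON COMMIT PRESERVE ROWS;
```

ON COMMIT DELETE ROWS is the default if you leave the clause off.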

GTTs themselves are static. The table structure is created once and remains in place always. It is the data that is transient. The data is private per session. No statistics are gathered on these tables. The data resides in the TEMP tablespace. They are cached in the buffer cache.

GTTs have been available since Oracle 8i. I would recommend you be familiar with them. You never know when you might find a table in your database that is a GTT.