Result Cache

The result cache is an area of memory where Oracle stores results so that subsequent queries can obtain them quickly. This cache covers both SQL queries and PL/SQL function calls. Note that cached results are invalidated when the source data behind them changes.

The SQL query result cache is stored in the SGA. The cached results can come from an entire query or just fragments of one. You use the hint /*+ result_cache */ to advise Oracle that you would like to make use of the cache. Check out the DBMS_RESULT_CACHE built-in package to inspect or flush the contents of the cache.
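Here is a quick sketch of the hint in action; the employees table is just a stand-in for your own data:

```sql
-- Ask Oracle to cache the result set of this aggregate query.
SELECT /*+ result_cache */ department_id, COUNT(*)
FROM   employees
GROUP  BY department_id;

-- Inspect the cache, then flush its contents.
SET SERVEROUTPUT ON
EXEC DBMS_RESULT_CACHE.MEMORY_REPORT;
EXEC DBMS_RESULT_CACHE.FLUSH;
```

The second run of the same query can be served straight from the cache, with no table access at all.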

The PL/SQL function result cache shares cached results across sessions. It too is stored in the SGA. When defining a function, use the RELIES_ON clause to identify the tables whose changes invalidate the cached results.
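A minimal sketch of a result-cached function, using a made-up departments lookup:

```sql
-- Hypothetical lookup function. RELIES_ON tells Oracle which table
-- invalidates the cached results when its data changes.
CREATE OR REPLACE FUNCTION get_dept_name (p_dept_id NUMBER)
  RETURN VARCHAR2
  RESULT_CACHE RELIES_ON (departments)
IS
  l_name VARCHAR2(100);
BEGIN
  SELECT department_name INTO l_name
  FROM   departments
  WHERE  department_id = p_dept_id;
  RETURN l_name;
END;
/
```

From 11g Release 2 onward Oracle tracks these dependencies automatically, so the RELIES_ON clause becomes optional there.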

Next time I will start by explaining how you can extract your PL/SQL source code from the database.

Keeping It Simple

My boss told me to help another team on our project. They are behind schedule, and a recent review of their code showed they were missing some important functionality. Their team had no idea. Since I knew what was going on, I decided to help.

First I helped the requirements team figure out what changes needed to be made. Then I went into design mode. The existing code is in a PL/SQL package that implements a ton of business logic. We needed to store parameters for the logic in the database instead of hard coding many of the values.

How does one get this done? This package gets called a lot; in other words, it needs to be fast. I can't have the code look up the values every time it needs them. I could get fancy and have the package load the values on initialization. But I want to keep this as simple as possible.

Therefore I created a new record type that stores all the lookup values, plus a new procedure that loads those values from the database. I then pass this record around to the different functions that need the values. This should be simple enough for any developer to follow. I just want to code the stuff; I don't want to inherit another team's code. Good enough for now. The real fun comes when I need to thoroughly test this stuff.
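Here is roughly what that pattern looks like; every name below is invented for illustration:

```sql
CREATE OR REPLACE PACKAGE biz_logic AS
  -- One record holds every lookup value the package needs.
  TYPE t_params IS RECORD (
    tax_rate     NUMBER,
    max_discount NUMBER,
    region_code  VARCHAR2(10)
  );

  -- Load the lookup values from the database once per run.
  PROCEDURE load_params (p_params OUT t_params);

  -- Business functions take the record instead of hitting the tables.
  FUNCTION net_price (p_gross NUMBER, p_params t_params) RETURN NUMBER;
END biz_logic;
/
```

The caller invokes load_params once up front and hands the record to each function, so the database is only queried a single time.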

Bulk Binding and other Tuning Tricks

As promised, here is a discussion of bulk binding and some other miscellaneous performance tuning tips. The goal of bulk binding is to reduce the number of times Oracle has to do a context switch between the PL/SQL and SQL engines.

Bulk binding operates in two directions. The first is when you want to put the data from a collection into the database. To do this you employ the FORALL statement, which transfers the whole collection of elements to the database with a single SQL context switch.
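A sketch of FORALL at work; the audit_log table is made up for the example:

```sql
DECLARE
  TYPE t_ids IS TABLE OF NUMBER;
  l_ids t_ids := t_ids(101, 102, 103);
BEGIN
  -- One context switch sends all three INSERTs to the SQL engine,
  -- instead of one switch per row in an ordinary FOR loop.
  FORALL i IN 1 .. l_ids.COUNT
    INSERT INTO audit_log (emp_id) VALUES (l_ids(i));
  COMMIT;
END;
/
```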

The converse bulk binding technique applies when you have data in the database that you want to extract into a collection. This is where you use BULK COLLECT INTO a collection. Again, the PL/SQL engine only has to make one switch to the SQL engine.
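And the other direction, again with stand-in table and column names:

```sql
DECLARE
  TYPE t_names IS TABLE OF employees.last_name%TYPE;
  l_names t_names;
BEGIN
  -- One switch to the SQL engine pulls every row into the collection.
  SELECT last_name
  BULK COLLECT INTO l_names
  FROM   employees;

  DBMS_OUTPUT.PUT_LINE('Fetched ' || l_names.COUNT || ' rows');
END;
/
```

For very large tables you would normally add a LIMIT clause to a bulk fetch so the collection does not consume too much session memory.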

Here are some other ways to make PL/SQL run fast. Take advantage of short-circuit evaluation of expressions: if one condition is sufficient to decide the whole IF statement, and that condition is true most of the time, put it as the first check; Oracle will skip the rest of the checks for you. Another feature you can enable is subprogram inlining. Instead of having your code jump to a separate subprogram, Oracle can place the full body of the subprogram right in your function or procedure. This can be controlled globally with the PLSQL_OPTIMIZE_LEVEL parameter, or one call at a time with PRAGMA INLINE.
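A sketch of both inlining controls; total_cost and line_cost are invented examples:

```sql
-- Session-wide: optimize level 3 enables automatic inlining.
ALTER SESSION SET PLSQL_OPTIMIZE_LEVEL = 3;

CREATE OR REPLACE FUNCTION total_cost (p_qty NUMBER, p_price NUMBER)
  RETURN NUMBER
IS
  FUNCTION line_cost (q NUMBER, p NUMBER) RETURN NUMBER IS
  BEGIN
    RETURN q * p;
  END;
BEGIN
  -- Ask the compiler to inline just this one call; the pragma goes
  -- immediately before the statement containing the call.
  PRAGMA INLINE (line_cost, 'YES');
  RETURN line_cost(p_qty, p_price);
END;
/
```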

Tuning Tips

Here are some good tips to make your PL/SQL fast. Do not rely on implicit data type conversions. Also choose the smallest data type that will fit your data. The PLS_INTEGER data type is good; the SIMPLE_INTEGER type, which drops the NULL and overflow checks, is even better. Smaller types are better.

Writing less code per module brings a small gain. The big boost comes from using a single SQL statement to do the work instead of a whole procedure's worth of PL/SQL. I struggle with this. Our data sets are huge, and I am sometimes afraid to do something huge with one SQL statement. The thing may never complete. Or it may fail in the middle and get rolled back. At least my PL/SQL routines can commit often and avoid redoing work after errors.

I thought I could fit all of this into one post. But I have more to say. Look for me to cover bulk binding next time, along with some other miscellaneous tips.

Interpreted and Native Compilation

There are two types of PL/SQL compilation. The first is interpreted compilation, which is the default: the code is compiled to bytecode that is interpreted at run time. It is good for development.

The other type is native compilation, which compiles your code to machine code and has better performance. Therefore it is better suited to a production setting. In 11g the compiled code is stored inside the database itself (in the SYSTEM tablespace), so there are no external files to manage. You enable this by setting PLSQL_CODE_TYPE to NATIVE at the session or system level.
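Here is what the switch looks like; my_proc is a stand-in name for one of your own units:

```sql
-- Switch the session to native compilation, then recompile a unit.
ALTER SESSION SET PLSQL_CODE_TYPE = NATIVE;
ALTER PROCEDURE my_proc COMPILE;

-- Verify which compilation type a unit currently uses.
SELECT name, type, plsql_code_type
FROM   user_plsql_object_settings
WHERE  name = 'MY_PROC';
```

Units compiled before the change keep their old code type until they are recompiled.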

Next time I will review some PL/SQL tuning techniques.


The SecureFile LOB is a new take on the LOB data type in the Oracle 11g database. It supports compression of the data. And it sports better performance. You get a SecureFile LOB column when you define the column with the SECUREFILE keyword.
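A sketch of the syntax, with an invented candidates table:

```sql
-- The resume column is stored as a SecureFile LOB with compression.
CREATE TABLE candidates (
  id     NUMBER PRIMARY KEY,
  resume CLOB
)
LOB (resume) STORE AS SECUREFILE (COMPRESS);
```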

Even with compression, LOBs may take up a lot of space. That is why Oracle has a feature called deduplication. It detects when there is more than one copy of a LOB in the database and makes those rows reference a single copy. You enable this feature by modifying the column's LOB storage with the DEDUPLICATE keyword.

Oracle now provides encryption that is done at the block level. You get to choose your encryption algorithm, such as 3DES or AES, and you can specify the key length (128, 192, or 256 bits). You enable encryption on a column by modifying it with the ENCRYPT USING keywords.
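Sketches of both ALTER statements, assuming a SecureFile CLOB column named resume on an invented candidates table; encryption additionally assumes an Oracle wallet is already configured for Transparent Data Encryption:

```sql
-- Enable deduplication on an existing SecureFile LOB column.
ALTER TABLE candidates MODIFY LOB (resume) (DEDUPLICATE);

-- Enable block-level encryption with AES and a 192-bit key.
ALTER TABLE candidates MODIFY (resume CLOB ENCRYPT USING 'AES192');
```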

There is a promotion strategy to get old large objects over to the new Oracle 11g type. One option keeps the table with the old types available while the promotion is taking place: online redefinition, accomplished with calls to the DBMS_REDEFINITION built-in package.

ORA-01403 Even With NVL

It was the night before the big release to our customer. Our internal test team was still finding some bugs in our software. Ouch that’s painful. One such error being reported was an ORA-01403 when they brought up a new screen. This was perplexing. I thought every single column in the SELECT statement was wrapped with the NVL function. And I knew there was a record in the database that we were selecting from. What the heck was going on?

Well, the best way to debug weird bugs like this is to try things out. I knew there were some NULL column values in the record the testers were selecting. So I updated some test data and filled in some values. The problem went away. Was I losing my mind? Or perhaps the NVL was broken. I did not really write this code from scratch. I copied and pasted it from another application. It should have worked.

That's when it hit me. The guilty column was defined as evilCol CHAR(1). However, the column was being selected as NVL(evilCol, 'NA'). Do you see the problem? The default for NULL values was two characters, too large to fit the column given its definition. Oracle won't replace a NULL with something that is too large to fit the source column. Thus I got a SQL error telling me we had a NULL issue.

Here is the real shame about this bug. I had previously fixed a bad NVL default for another column selected in this SQL statement. However, I did not go through every column to check whether there were other bad defaults. Shame on me. Luckily I went through all the columns and their NVL defaults this time. The hard lesson was learned. Be careful of the size (and type) of the defaults you return using the NVL function. They might not match the column type you are selecting from.

The Microsoft Connection

I used to be an exclusive Microsoft developer. That meant I normally interacted with Microsoft SQL Server. Yes I know. It is sacrilege. Just recently I subscribed to the MSDN magazine to read up on the latest Microsoft technologies. There were a lot of articles that referenced programming APIs to access databases. I thought I would share some of them here to mix things up.

The first topic is ADO.NET. We use it in some of the newer applications in our system. This is included with the .NET framework from Microsoft. It is used to access a relational database like Oracle. The technology started as an evolution of ActiveX Data Objects (ADO) from Microsoft. A data provider provides access to a data source. In the case of Oracle, the data provider is the Oracle Data Provider for .NET.

A related topic is EF, which stands for Entity Framework. That is short for the ADO.NET Entity Framework. This is a set of data access APIs. It is similar to the Java Persistence API. An entity has a primary key. An entity data model maps data to a database schema. The programmer then uses either Entity SQL or LINQ to Entities to access the entity data model. This abstraction allows the programmer to access the data independent of where the data is actually stored.

Next we have DTO, which stands for Data Transfer Objects. These were previously called VO, or Value Objects. They implement a simple design pattern. They are used with DAO, which is short for Data Access Objects, another Microsoft technology. The main idea behind DTO is that they just provide accessors and mutators. There is no functionality provided other than get and set.

Finally we have POCO, which stands for Plain Old CLR Objects. The CLR is the Common Language Runtime from Microsoft. This term is a play on POJO, or Plain Old Java Objects. POCO pokes fun at the complexity of accessing data using Java. POCOs are simple data objects compared to hard-to-use ORM objects.


I want to start by talking about the LONG data type in Oracle. You can have at most one column of type LONG per table. The data is stored inline (which means it is in the table itself, not referenced by the table). This column can store up to 2 GB of data.

A LOB is a large unstructured data type. An example of a LOB is an image. Use the DBMS_LOB built-in package to deal with this data type.
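A tiny example of DBMS_LOB in action, assuming a hypothetical documents table with a CLOB column:

```sql
DECLARE
  l_doc CLOB;
BEGIN
  SELECT doc INTO l_doc FROM documents WHERE id = 1;
  -- DBMS_LOB has routines to read, write, and measure LOB data.
  DBMS_OUTPUT.PUT_LINE('Length: ' || DBMS_LOB.GETLENGTH(l_doc));
END;
/
```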

A related data type is the BFILE. This represents an external LOB: the data lives in a file outside the database. You initialize columns of type BFILE using the BFILENAME function to point at the actual file backing the BFILE.
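A sketch of BFILE initialization; the DOC_DIR directory object and file name are made up, and the directory object must already exist and point somewhere the database can read:

```sql
CREATE TABLE manuals (
  id   NUMBER,
  body BFILE
);

-- The BFILE column stores a pointer to the external file, not its contents.
INSERT INTO manuals VALUES (1, BFILENAME('DOC_DIR', 'install.pdf'));
```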

Finally let me mention temporary LOBs. They last as long as your session. These types of LOBs are not associated with tables. They are good for transforming data. Next time I will continue to discuss LOBs, with emphasis on performance and encryption.