The Dreaded ORA-01848

I got a phone call and email from the customer this morning. The new version of our software was bombing. The error was ORA-01848: day of year must be between 1 and 365 (366 for leap year). Ouch. That looks like such an amateur programming error.

So I traced where we were doing the date stuff in our code. Turns out we were trying to compose a DATE value from day-of-year data stored in the database. Yep. The customer test data was invalid. I pointed this out to them. Some of their days were well over 365 in value.

I got another call in the afternoon. They were still getting the ORA-01848. I walked them through to check whether the bad dates were fixed. They were. Must be something else. Made sure they were going to the right database. Yep. But there was only one place in the code where we did this date stuff.

Finally it hit me. Maybe some of their dates were too small. Yep. Had some records where the day was 000 (the minimum is 001). Now I am crossing my fingers. The moral of the story might be for me to validate all the data before using it in any date calculations.
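
Here is the kind of defensive check I have in mind, as a sketch. The staging table and column names are made up; the DDD piece of the format mask is what enforces the day-of-year rule:

    DECLARE
      v_date DATE;
    BEGIN
      FOR rec IN (SELECT yr, day_of_year FROM staging_dates) LOOP
        BEGIN
          -- DDD is the day-of-year format element. Any day outside
          -- 001..365 (366 in a leap year), including 000, raises
          -- ORA-01848 right here.
          v_date := TO_DATE(rec.yr || rec.day_of_year, 'YYYYDDD');
        EXCEPTION
          WHEN OTHERS THEN
            DBMS_OUTPUT.PUT_LINE('Bad day value: ' || rec.day_of_year);
        END;
      END LOOP;
    END;
    /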

Block Safety

I got a task to go find all transactions that have failed due to a bug, and retry them at night. It sounds simple enough. The retry really means to reimplement the logic that the application would have done at the hands of a user.

My only real concern was what would happen if one of those transactions ran into a problem at night. Sounds like a job for exception processing. At first I just stuck an EXCEPTION handler in the middle of the loop. Of course that resulted in compilation errors.

Well what was I to do? Had to recall that an EXCEPTION section is part of a BEGIN-END block. So I slapped a BEGIN-END block in the middle of my loop. Within that I added my exception handler. Now the code is rock solid. That is, even if there are some errors, this script will go on.
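
Here is the shape of the final script. The cursor and the retry procedure are placeholders for our actual logic; the point is the inner BEGIN-END with its own EXCEPTION section:

    BEGIN
      FOR rec IN (SELECT id FROM failed_transactions) LOOP
        BEGIN
          -- Inner block: one bad transaction cannot kill the whole run.
          retry_transaction(rec.id);  -- hypothetical retry procedure
        EXCEPTION
          WHEN OTHERS THEN
            DBMS_OUTPUT.PUT_LINE('Retry failed for id ' || rec.id ||
                                 ': ' || SQLERRM);
        END;
      END LOOP;
    END;
    /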

Home Made Locks

I am working on some high priority problems my previous team is having. The customers are mad and want a fix yesterday. The applications keep aborting with the same error messages. I traced this down in the code to some object locking code.

This was Oracle Pro*C code that implements locks with records in a locks database table. When the users need to obtain a lock, they try to insert a record in this table. If anybody else has already inserted a similar record (has a lock), then the second lock immediately fails.

When the lock fails, the application displays an error message and aborts. This really irks our customer. So the first order of business is to make sure the application does not abort in this scenario. Next I figure we need the application to wait around a while until the lock might become free. Why not let the Oracle database manage this contention? I figure somebody has already solved the queued lock problem. Why try to implement it again in our own code, especially when our implementation will be substandard?
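
For instance, DBMS_LOCK will queue waiters for us. This is just a sketch with an invented lock name, not our shipped fix:

    DECLARE
      v_handle VARCHAR2(128);
      v_result INTEGER;
    BEGIN
      -- Map an application lock name to an Oracle lock handle.
      DBMS_LOCK.ALLOCATE_UNIQUE('MYAPP_ORDER_42', v_handle);
      -- Wait up to 30 seconds for an exclusive lock instead of
      -- failing immediately like the insert-a-row scheme does.
      v_result := DBMS_LOCK.REQUEST(v_handle, DBMS_LOCK.X_MODE,
                                    timeout => 30,
                                    release_on_commit => TRUE);
      IF v_result = 1 THEN
        DBMS_OUTPUT.PUT_LINE('Timed out waiting for the lock');
      ELSIF v_result = 0 THEN
        NULL;  -- got the lock; do the protected work here
      END IF;
    END;
    /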

Database Links

I need to create a lot of test data for unit testing some new code. Now I already had some good data in another database. So I used a tool to create some scripts that insert the data into the new database. There were some problems with this technique. My database tables have a lot of columns in them. The tool puts each whole insert statement on one line. That line turns out to be much too long for SQL*Plus to process. So I started to manually break up the lines. There had to be a better way.

Database links came to the rescue. You create the link by specifying which database to connect to. You also need to provide login credentials when creating the link. Then you can perform queries that access the tables from the other database through the link. This was awesome. The link itself becomes an object in your schema. I hear you can even connect to non-Oracle databases. I have no need for that feature though.

You do need to make sure the remote database you are linking to is in the tnsnames file on your server. Remember to do a COMMIT after you run a query through the link to release the rollback segment in the other database. The link operates like a pointer. If you really wanted to, you could use a link as a poor man's replication technology. If you want some information on your links, check out the DBA_DB_LINKS view.
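
The basics look like this, with made-up names and a TNS alias:

    CREATE DATABASE LINK testdata_link
      CONNECT TO scott IDENTIFIED BY tiger
      USING 'REMOTEDB';  -- alias must resolve via tnsnames.ora

    -- Query the remote table, or copy it wholesale.
    SELECT COUNT(*) FROM customers@testdata_link;

    INSERT INTO customers
      SELECT * FROM customers@testdata_link;

    COMMIT;  -- also releases the transaction held open remotely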

Commitment Woes

I ran a bunch of routines in a stored procedure to generate some data. Then I queried that data from the SQL*Plus tool I previously used to create it. The checks showed the new data was there. Finally I logged into my operating system account, and ran some shell scripts that ran my whole test suite.

The funny thing is that the test suite acted like the data from my SQL*Plus session was not there. That was strange. The shell scripts call code that commits its changes. I had to take a break to analyze this oddity. Then it came to me. My SQL*Plus work had not been committed in the first place. Therefore the test suite in a separate session could not see the uncommitted changes. Doh!

If you thought that one was bad, listen to this story. I made some changes to the database. Then I called a stored procedure to run some jobs to process that data. Worked fine the first time. Then I repeated the process. Changed up some data, called the exact same stored procedure. This time the changes I made did not take. WTF? I even did a commit but got wrong results. A lot of inspection got me the answer. The stored procedure was in a package that initialized some internal variables. This package initialization happens once per session. My jobs were being run by one SQL*Plus session. I needed to log out and log back in to get the package initialization to rerun and process the new data. Ouch. That's a tricky one.
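
Here is the gotcha boiled down, with invented names. The block at the bottom of the package body is the initialization section, and it runs only once per session:

    CREATE OR REPLACE PACKAGE job_pkg AS
      PROCEDURE process_data;
    END job_pkg;
    /

    CREATE OR REPLACE PACKAGE BODY job_pkg AS
      g_cutoff DATE;  -- package state, frozen for the whole session

      PROCEDURE process_data IS
      BEGIN
        -- Keeps using the stale g_cutoff until a new session runs this.
        DELETE FROM work_queue WHERE created < g_cutoff;
      END process_data;

    BEGIN
      -- Initialization section: executed once per session.
      SELECT NVL(MAX(created), SYSDATE) INTO g_cutoff FROM work_queue;
    END job_pkg;
    /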

Trouble with Groups

I inherited some back end code. Most of it is written in PL/SQL. That's the part I like. The problem is that I cannot get the darn thing to run to completion without errors. The latest issue is with a file that is generated.

The developer uses the UTL_FILE built-in package to create a file and write to it. That sounds reasonable. However the Oracle database is hosted on a UNIX server. So when it creates the file, the thing is owned by user oracle and by the oinstall group. I am not oracle. And I do not belong to the oinstall group.

Guess what? That means I cannot do anything with the file that got created by UTL_FILE. I cannot view the file contents. I cannot delete the file. Nor can I move it. This is just no fun. What am I supposed to do? My first reaction was to just become a member of the oinstall group. However that raised some flags with the DBAs and system administrators. Although I agree this sounds wrong, what is a developer to do? I have put in a request for assistance with one of the DBAs. Hopefully help is on the way.
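
The generating code is roughly this shape (the directory object and file name are stand-ins). Since the database server process writes the file, it lands on the server owned by the oracle OS user:

    DECLARE
      v_file UTL_FILE.FILE_TYPE;
    BEGIN
      v_file := UTL_FILE.FOPEN('REPORT_DIR', 'output.txt', 'w');
      UTL_FILE.PUT_LINE(v_file, 'first line of the report');
      UTL_FILE.FCLOSE(v_file);
    END;
    /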

Mysterious Foreign Key Issue

I need to do a lot of unit tests of some database code soon. So I wrote a script that would reset all the data. This script had to remove a lot of records from some tables. I figured the quickest way to do that would be to truncate the tables. I carefully chose the order of the truncation to ensure that the tables with foreign key references would be chopped first.

When I ran my script, I kept getting an ORA-02266 error message. This is defined as "unique/primary keys in table referenced by enabled foreign keys". This seemed strange. This was happening even when there were no records in the foreign and primary key tables. At first I thought maybe some synonyms were resolving to the wrong tables. But even when I prefaced table names with their schemas, I got the error.

In the end I could not figure out what was going on. I replaced the TRUNCATE commands with DELETE statements. Those worked fine. Is this some sort of database error? Or maybe you are just not allowed to TRUNCATE a table that is referenced by enabled foreign keys, even an empty one. Who knows?
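
For what it is worth, my money is now on the latter: TRUNCATE seems to refuse any table referenced by an enabled foreign key, no matter what data is present. If that is right, a workaround sketch (constraint and table names invented) would be:

    ALTER TABLE child_table DISABLE CONSTRAINT fk_child_parent;
    TRUNCATE TABLE parent_table;
    ALTER TABLE child_table ENABLE CONSTRAINT fk_child_parent;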

SQL Detective

I saw some full page ads from Conquest Corporation in the latest issue of Oracle Magazine. They were compelling advertisements. So I decided to try and download two of their new products.

Unfortunately, one of the download links did not work. The other one downloaded fine. However after installation, I could not figure out how to work the product. Was not sure what the heck the thing did either.

This exercise was almost a bust. However I was given the opportunity to download Conquest's flagship product, SQL Detective. Once again I was unsure from the hype what this tool did exactly. After downloading I found it to be like PL/SQL Developer or Toad.

Here are my first impressions. Logon took a long time. The default layout is not intuitive. You type your SQL statement below the results window. You have to use a window scroller to get more query results (I prefer a button). There were some positive features though. I like having an explain plan tab. You can also click the users node in the tree and figure out who has accounts in your database.

The pricing page on the web was not intuitive either. I determined that the bare bones edition costs $136. For that price, I did not find any SQL Detective features which would make me abandon PL/SQL Developer, which I use for my customer's work. At least Conquest's ad had compelling enough marketing to get me to give the product a try.

Competition for MySQL

MySQL was previously owned by Sun Microsystems. Then Oracle bought Sun. So now it is managed by Oracle. This is an open source product. However there are other offerings out there. Let's look at one or two of them.

The most direct competition from Microsoft would be their SQL Server Express. However that product is not as simple as it sounds. I want to call attention to another product. That is SQLite.

SQLite is actually a library. There is no configuration required. You link in a database engine. The contents of the database are stored in a single file.

SQLite is implemented in C. The API to the database is low level. This thing was made with C programmers in mind. Good luck if you want to access it through some .NET code.

You should realize that this is a scaled down database. It tries to implement most of the SQL-92 standard. But it is still low power. You don't even want to try to access the database from multiple threads. Things will not go well. I must confess that I am not a MySQL expert. So I cannot as of yet compare SQLite with it. However I do have a personal project which is outgrowing Oracle Express Edition. MySQL may be my next bet.

Tuning

I have written about Oracle database tuning in the past. How about we see how other players conquer this issue? Let us assume that we have Microsoft technologies on the front and back end. Microsoft provides tier interaction profiling (TIP) to sort out performance issues.

TIP adds code to profile your application. It records timing information. The actions timed are all those that result in a database hit. These include SQL, ODBC, and OLE DB. Those are all Microsoft database access technologies. Developers use frameworks that often abstract them from actual database code. TIP helps get the developer back in touch with the actual database interaction.

Oracle XE and JDBC Performance

I am using an Oracle Express Edition database to prototype a very large data set application. My programming language is Java. I am using JDBC to get to the database.

For starters I thought I needed to download the JDBC drivers from Oracle. This required me to sign into the Oracle Technology Network. However the login kept saying my password was wrong. It was a pain.

I got the drivers. Now there are several flavors of JDBC drivers. Oracle says you should stay away from the OCI version. Ok. There is also a JDBC driver that bridges to an ODBC driver. This is a slow configuration. I chose the thin JDBC driver. By default it auto-commits each insert statement I make.

So I did a quick timing test. I inserted 10,000 records into a small table. I stored the timestamp for each insert into the table. The whole thing took a total of 19 seconds. That is over 500 records per second inserted. That feels pretty good since I am doing a commit after each insert, and this is the free edition of the Oracle database. The only other test I might try is to see whether a PL/SQL script can load the data faster.
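
If I run that test, a minimal sketch of the PL/SQL side might look like this, with a made-up table and one commit at the end instead of one per row:

    DECLARE
      TYPE t_ids IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
      v_ids t_ids;
    BEGIN
      FOR i IN 1 .. 10000 LOOP
        v_ids(i) := i;
      END LOOP;
      -- Bulk bind: one round trip to the SQL engine for all rows.
      FORALL i IN 1 .. 10000
        INSERT INTO timing_test (id, created)
        VALUES (v_ids(i), SYSTIMESTAMP);
      COMMIT;
    END;
    /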

Query Tools

I am working on a new project that has a lot of data. So I know I will need some good tools. My database is Oracle XE, which comes with a limited command line tool. I decided to look around to see what other tools I could get.

My current tool of choice is PL/SQL Developer. I checked their web site. A single user license goes for $180. Plus there is a $50 charge for installation media and documentation. There is a 30-day trial version available. But there is no free option. I passed on it.

Next I tried SQL Developer from Oracle. It is free of charge. The thing requires the Java Development Kit. I already have that. So the download was smaller. It installed fine. However when I queried a bunch of records, only the first 50 were shown. I had to keep sliding down the control on the side of the results window to get more data. Also the default display for the date columns was MM/DD/YYYY. This product was also out.

Finally I tried Toad for Oracle Freeware. It is, as the name implies, totally free. The install states that it has the same functionality as Toad for Oracle Base Edition. I found it odd that right after I downloaded and ran the installer, it said there was a more recent version available. What? Luckily there was a button to get the whole result set in one swoop. Nice. I did not like that the query results did not show a record number to the left of each record. But I can live with that. Toad you are my tool of choice for now.

Choosing an Oracle Version

I've got a hot new side project. There is a list of 170 million URLs I want to scrape from the web. I want to use Oracle to hold my data. What version of the database do I need? My first choice was Oracle Express Edition, also known as Oracle XE. It is free. However it has a limit of 4 Gigabytes of user data. My first database table with 170 million records will blow past that limit. Even at an average of 50 bytes per URL, that one table works out to around 8.5 Gigabytes before indexes.

Next I took a quick look at the different Oracle Standard Edition offerings. They go from $180 to $350 per user, with a 5 user minimum. This cost is not astronomical. But my project is in a research mode right now. I don't want to shell out that kind of cash unless I have a money making opportunity. So I will do a prototype with Oracle XE.

Oracle XE comes with a Run SQL Command Line tool. It looks like a DOS box, and behaves like SQL*Plus. That is too bare bones for me. Let's see what kind of tools there are out there that can help me. I am used to PL/SQL Developer myself. However others swear by Toad. And Oracle has SQL Developer. Next time I will let you know my initial experience with these tools for my new project.

Oracle Sun Exadata Database Machine

Larry Ellison recently bragged about the Exadata Database Machine v2. This appliance was previously aimed at data warehousing use in version 1. Now it sets its sights on online transaction processing systems. Version 2 is supposed to be twice as fast as v1. Larry says the thing outperforms the best machines from IBM. He seems to be taking personal initiative on this project.

The Exadata Database Machine has a number of parts. Of course it hosts an Oracle 11g database. It also has an Exadata Storage Server from Sun. The storage server has 12 disks and 2 CPUs. However the real power comes from the huge amount of flash storage in the storage server. We are talking a couple hundred Gigabytes here.

The storage server component in this system is a smart one. It makes use of highly parallel operations. It can also do query processing itself. There is a speed gain from doing the processing close to where the data resides. The box also supports compression to drastically reduce data size.

Although the system comes preconfigured, you still need to do some fine tuning on the device. The cost can be a little more than $1M. However when you factor in all the licensing and support costs, and get a bigger box, the cost can be a few million. This is obviously the high end OLTP market. Oracle is going after the market previously dominated by manufacturers such as IBM and EMC.

Performance Problems

There have been times recently when our production system has been painfully slow. The DBA Team took the lead to find out what was wrong. They could not isolate the problem. So they brought up the matter with Oracle Corporation.

Oracle could find no defect with the database software. Their analysis concluded that there must be a defect in our source code. The only value they added was a list of queries which were taking a long time on average.

This was most disappointing. Any monkey with a tool can identify the worst performing queries in the system. Now this is not to say that a query cannot be poor. However these SQL statements were doing updates using only a primary key in the WHERE clause.
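
To be fair, here is the kind of monkey-with-a-tool query I mean, straight against V$SQL:

    -- Ten statements with the worst average elapsed time.
    SELECT *
      FROM (SELECT sql_id,
                   executions,
                   ROUND(elapsed_time / NULLIF(executions, 0)) avg_microsecs,
                   SUBSTR(sql_text, 1, 80) sql_text
              FROM v$sql
             ORDER BY elapsed_time / NULLIF(executions, 0) DESC NULLS LAST)
     WHERE ROWNUM <= 10;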

Sooner or later this problem will come around to our team. I told our project manager that I expected we would need a dedicated Oracle performance engineer. We used to have such individuals assigned to our project. They have since been replaced by consultants who are competent, but not performance experts.

NULLs and OUT Params

Recently I came across a big stored procedure in our code. I needed to make some changes to it. That's when I decided this procedure could not go on with its size and complexity. It was time to refactor it.

After I broke the procedure into many procedures and functions, I ran some regression tests. Initially I broke the functionality. I went back to good old debugging techniques. I wrote out a bunch of variables to a table. The values were always NULL.

Then it hit me. I had a bunch of OUT params that were not initialized. Therefore they were NULL by default. You can avoid this by always initializing OUT parameters. These were numeric variables. Incrementing NULL leaves the value NULL, which was my bug. Explicitly setting them to zero solved my problem.
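
Here is the bug distilled to a few lines, with invented names:

    CREATE OR REPLACE PROCEDURE count_rows (p_total OUT NUMBER) IS
    BEGIN
      p_total := 0;  -- without this line the OUT param starts as NULL,
                     -- and NULL + 1 below stays NULL forever
      FOR rec IN (SELECT 1 FROM some_table) LOOP
        p_total := p_total + 1;
      END LOOP;
    END count_rows;
    /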

Out of Sequence

I want to follow up on my post from yesterday with more details. A junior developer had to write a PL/SQL script that would update upwards of 10 million rows in the database. To make things run fast, she put FOR UPDATE in the cursor SQL. Then the actual update used WHERE CURRENT OF. This sounds like a solid design.

The problem was that a batching commit mechanism was added in the middle of the loop through the cursor. The result was that after the first batch was committed, the next iteration through the loop caused an ORA-01002: fetch out of sequence. The thing to know is that a FOR UPDATE cursor is only valid up until the next commit. After that you are done with the FOR UPDATE cursor. But the script was coded so that it kept on trying to loop after the commit. Is there a solution for batch commits with a cursor? Yes. Don't use FOR UPDATE. Or break up your cursor into many cursors, each of which does a single COMMIT.
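
One sketch of the no-FOR-UPDATE route, with invented table and column names, is to fetch ROWIDs in chunks and update by ROWID:

    DECLARE
      CURSOR c_rows IS
        SELECT rowid rid FROM big_table WHERE needs_fix = 'Y';
      TYPE t_rids IS TABLE OF ROWID;
      v_rids t_rids;
    BEGIN
      OPEN c_rows;
      LOOP
        FETCH c_rows BULK COLLECT INTO v_rids LIMIT 10000;
        EXIT WHEN v_rids.COUNT = 0;
        FORALL i IN 1 .. v_rids.COUNT
          UPDATE big_table SET needs_fix = 'N'
           WHERE rowid = v_rids(i);
        COMMIT;  -- safe here: the plain cursor survives the commit
      END LOOP;
      CLOSE c_rows;
    END;
    /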

Hidden Errors

We recently released a new version of our system to the customer. They needed us to fix a bug. The bug involved the application displaying too many records on some screens. The fix involved adding a new column to a table to remove the ambiguity of records. There was one final touch that was required. We needed to populate this column for all the existing records.

One of our junior developers coded up a PL/SQL script to go through all the records and determine the value for the new column. With some guidance from a senior developer, she even made the script fast by using FOR UPDATE and WHERE CURRENT OF. Good stuff. There was just one problem. The customer complained that the darn thing did not work. There was a flurry of emails trying to figure out what was wrong.

Finally I got dragged into a conference call to discuss this problem. At first a lot of people were taken aback that most of the records did not get the column populated. Eventually I could not take it any more and I chimed in that we were required to not fill in all records, but only the ones where we could reasonably determine the value. The rest were to remain NULL. Another developer got ahold of the script, checked out how it handled exceptions, and searched the production database for evidence of any problems. Sure enough an exception was raised. The script caught the exception, but handled it by logging the error to a table and exiting. The shell script that called the PL/SQL script reported success and we were none the wiser.

Now we get into the blame game as to why testers and developers did not detect the problem. A DBA chimed in with some sage advice. If we are going to trap errors, then we should use DBMS_OUTPUT to pipe a message to the screen informing a DBA that the thing bombed. Then we could deal with the problem immediately. In fact, it might have been better to let the script bomb and report the exception to the command line. Then the customer would not have been the one to discover the error.
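
In code, the advice comes down to something like this. The procedure names are placeholders for what the script actually did:

    BEGIN
      populate_new_column;  -- hypothetical: the work the fix script does
    EXCEPTION
      WHEN OTHERS THEN
        log_error(SQLCODE, SQLERRM);  -- hypothetical logging procedure
        DBMS_OUTPUT.PUT_LINE('Population failed: ' || SQLERRM);
        RAISE;  -- let the script bomb so somebody notices right away
    END;
    /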

NoSQL on the Rise

There is a relatively new movement called NoSQL. It involves high performance databases for little to no cost. Social Networks have been using NoSQL. You might be able to use them too, depending on your specific database requirements.

The NoSQL movement has different products from different vendors. Amazon created SimpleDB. It is data storage for key value pairs. And Google created BigTable, open sourcing the data model for it.

Some NoSQL databases focus on storing JSON objects. Examples of these databases are CouchDB and MongoDB. CouchDB lets you use a REST API to access the JSON data.

These NoSQL databases often have limitations. Some impose a time limit for queries to complete. Others return only partial data sets. These databases can hold a lot of data. But they often are poor for complicated queries.

Script Performance

I had this script to write to correct some data. The script was going to affect about 20k records. One part of the script needed to mine through a table with 1B records. This needed to be done for each of the affected 20k records. So I knew that a brute force approach would never complete.

So I thought perhaps I could preprocess those 1B records. There were only a few thousand of those records that were of interest. If I could just load the couple thousand relevant records into memory, I could use that data to go through my batch.

This seemed to be a good candidate for some type of collection. But what kind should I use? I don't do enough PL/SQL to be very comfortable with collections. I read around and thought maybe it could be a VARRAY. But I only had experience with PL/SQL tables. What is a developer to do?
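
For the record, a sketch of the collection route I was circling looks like this, assuming the lookup key is numeric (all names invented):

    DECLARE
      TYPE t_lookup IS TABLE OF raw_data.payload%TYPE
        INDEX BY PLS_INTEGER;
      v_lookup t_lookup;
    BEGIN
      -- Load only the few thousand interesting rows into memory.
      FOR rec IN (SELECT record_key, payload
                    FROM raw_data
                   WHERE is_relevant = 'Y') LOOP
        v_lookup(rec.record_key) := rec.payload;
      END LOOP;
      -- Later, inside the 20k-record batch loop:
      -- IF v_lookup.EXISTS(v_key) THEN ... END IF;
    END;
    /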

In the end I needed to get this script coded ASAP. So I did the simplest programmatic approach. I created a temporary table with a few thousand records. This table represented the important data mined from the 1B record raw table. I am hoping that SQL against this small temporary table will be very fast. We shall see when I ship the fix out to the production environment where the volumes are huge.

MongoDB

Previously I had written an overview of the NoSQL movement. Today I want to talk a little more about a specific implementation. I will cover MongoDB.

NoSQL has some niche applications. Sometimes your information does not fit into a database schema. MongoDB tries to provide high performance. This is in contrast to other NoSQL implementations which are geared toward high concurrency.

There are no transactions in MongoDB. You can't recover from an abrupt shutdown. MongoDB is good if you need data quickly, and you access the data frequently. JSON is used as the default data notation. Internally MongoDB uses BSON (binary JSON). MongoDB itself is written in C++.

The use of MongoDB is very similar to using a traditional relational database. I have not played with it yet. Not sure that I will. You still need to know what the competition is using though.

Chunking for Performance

So you have a lot of data you want to update. I am talking about the whole big table. How do you do it? Well in the good old days you found a way to divide and conquer. Pretty much you put some limiter in the WHERE clause of the update to do a small bunch at a time. That way the whole darn table did not get locked.

The idea is similar in Oracle 11g Rel 2. However the bunching is supported in the system. You make use of the DBMS_PARALLEL_EXECUTE package. You choose a way to chunk up your data. That is, you divide up the data by some characteristic. However the new package does the work of splitting up your DML between different statements which operate in parallel.

The procedures in the package do their own COMMIT on their parts. You also need to have the CREATE JOB privilege, as a bunch of jobs will be scheduled on your behalf to get the work done in parallel. A common way to chunk up your data is by ROWID. However you can choose some other attribute of your data. On our project we use a generated ID to do the chunking. And previously we had used partitions to break up the chunks. Maybe it is time to rethink that plan.
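
A condensed example, with placeholder schema and table names. Note the :start_id and :end_id binds that the package fills in for each chunk:

    BEGIN
      DBMS_PARALLEL_EXECUTE.CREATE_TASK('fix_status');
      DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
        task_name   => 'fix_status',
        table_owner => 'APP',
        table_name  => 'BIG_TABLE',
        by_row      => TRUE,
        chunk_size  => 10000);
      DBMS_PARALLEL_EXECUTE.RUN_TASK(
        task_name      => 'fix_status',
        sql_stmt       => 'UPDATE app.big_table SET status = ''DONE''
                            WHERE rowid BETWEEN :start_id AND :end_id',
        language_flag  => DBMS_SQL.NATIVE,
        parallel_level => 4);
      DBMS_PARALLEL_EXECUTE.DROP_TASK('fix_status');
    END;
    /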

Index Rebuild

Yesterday our loading software came to a halt in the production environment. The DBA team came to the rescue and determined some tables had indexes that were just out of date. A quick real time rebuild got things rolling again.

Now our DBA team is looking to be more proactive. We had a weekly job that rebuilds the indexes for some major tables. However we started looking for other tables to rebuild indexes for.
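
The job itself comes down to statements like these (names invented). The ONLINE keyword keeps the table usable during the rebuild:

    -- Check the indexes on a suspect table.
    SELECT index_name, status, last_analyzed
      FROM dba_indexes
     WHERE table_name = 'BIG_TABLE';

    -- Rebuild without taking the table offline.
    ALTER INDEX app.big_table_pk REBUILD ONLINE;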

The DBAs told us that they were looking for tables that were frequently updated. I found one such example. A DBA chimed in and agreed the table I identified was a trouble table from a prior performance issue.

You would think that Oracle would have some built in support to keep indexes up to date.

Virtual Private Database

Oracle has a behind the scenes functionality called the virtual private database. It is also known as fine grained access control (FGAC). The database will modify queries automatically to present a partial view of a database table. There is a policy function which returns the WHERE clause that masks the query results.

This functionality can be as simple as hiding certain rows from a result set using a more restrictive WHERE clause. It can also mask out some columns. This effectively creates a customized virtual object that is generated on the fly. This is very similar to a view. It is just that each user gets a custom view.

The FGAC can implement row level security. As such you can lock down data as you see fit. An application context is set at sign on. This is accomplished via a LOGON trigger. The rest is plug and chug by the Oracle database. Essentially you are getting logical security with the virtual private database. Good stuff. We might be using this on our project at work to implement some of the customer’s data hiding business needs.
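
Here is a bare-bones sketch of the setup, with invented names. The policy function returns the predicate that the database silently appends to queries:

    CREATE OR REPLACE FUNCTION org_policy (
      p_schema IN VARCHAR2,
      p_object IN VARCHAR2) RETURN VARCHAR2 IS
    BEGIN
      -- Limit each user to rows for the org stored in their
      -- application context at logon.
      RETURN 'org_id = SYS_CONTEXT(''app_ctx'', ''org_id'')';
    END org_policy;
    /

    BEGIN
      DBMS_RLS.ADD_POLICY(
        object_schema   => 'APP',
        object_name     => 'ORDERS',
        policy_name     => 'orders_by_org',
        function_schema => 'APP',
        policy_function => 'org_policy',
        statement_types => 'SELECT');
    END;
    /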

Global Temporary Tables

Global Temporary Tables are a feature from Oracle 8i. The tables are managed by the database server. They hold data which is private per session. The data is automatically deleted at the end of the session.

These tables can improve query speed. This is because redo is not generated for the table data itself. They also prevent private data sets from interfering with each other. You can consider these tables the ultimate work tables.

Tom Kyte advises against creating temp tables on the fly. However there is a difference between that kind of temp table and a global temporary table. A temp table is one you create on the fly to do some complex work on. A global temporary table is a permanent table definition where each session can only see its own portion of the data.
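
The syntax itself is simple. Here the rows vanish at commit; ON COMMIT PRESERVE ROWS would instead keep them until the session ends:

    CREATE GLOBAL TEMPORARY TABLE work_results (
      id     NUMBER,
      result VARCHAR2(100)
    ) ON COMMIT DELETE ROWS;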

The Rise of NoSQL

Recently I read about the Cassandra project. It is a distributed database. The goal is to store a large amount of data. It uses the Google BigTable model (more on BigTable later). The database stores key value pairs.

Cassandra was created by Facebook. It is used by Facebook, Twitter, and Digg. It is an open source project managed by Apache. The intended use is large web applications. There are other projects similar to Cassandra. They are part of the NoSQL movement. Are these databases poised to replace the relational ones like Oracle, DB2, and SQL Server? Let’s see what these NoSQL databases have to offer.

NoSQL databases in general are data stores that do not impose a fixed structure on the data. Access to the NoSQL databases tries to avoid joins. The technical term for these databases is structured storage. They have weak consistency. That means if you update a copy on your server, the update is not guaranteed to be propagated everywhere immediately. These types of databases also have very simple interfaces.

Let’s go over some other NoSQL databases. Hadoop is a project with many subprojects. One such subproject is MapReduce, which is a framework for large data set distributed processing. Then there is Google’s BigTable. It is a distributed storage system that can scale to a large size.

Next we have MemCacheDB. It is a distributed key value storage system. Despite the name, it is not for caching. It is a persistent storage mechanism that uses the memcache protocol to access a Berkeley database on the back end. Another NoSQL database is project Voldemort. It is a distributed database providing key value storage. LinkedIn uses it.

CouchDB is an Apache project. It is a document oriented database. You query it using MapReduce. It has a RESTful JSON API. Note that CouchDB is written in the Erlang functional programming language. A similar offering is MongoDB. The name is a play on humongous. It is also a document oriented database. This one was written in C++. It collects JSON documents and stores them in a binary BSON format.

I have covered a lot of NoSQL implementations. They mostly provide distributed storage. Although they scale very well for large data sets, their functionality is limited. They are not a replacement for relational databases. You still need RDBs for transaction processing. Nonetheless it is good to know a bit about the NoSQL movement, and the problems it tries to solve.

Edition Based Redefinition

Previously you could not recompile any PL/SQL that was currently being executed. If you did, sessions running it would get an ORA-04068 “Existing state of packages has been discarded”. This restriction is lifted in Oracle 11g rel 2. You can now perform an online upgrade of an application. In other words you can patch code while users are executing the old version. You can also do some schema changes while users access the old version. The goal of this new capability is to ensure that downtime to users is minimized.

Objects in general fall into two categories. They are either editionable or noneditionable. Editionable objects include functions, procedures, packages, and so on. They can be easily upgraded online. Noneditionable objects are things like database tables. Normally you cannot upgrade them online. However with a new object called an editioning view, you can even simulate changes to database tables while users are on the system running your apps.

An editioning view has specific limitations. You can only select from this view. Furthermore you can only reference one database table (and thus no joins are permitted). This view acts like a synonym in that it projects attributes from the underlying table. This view can have triggers. The trick is that this view can effectively hide columns from the user. This hiding operation is similar to dropping a column from a table. The new kind of column drop is a virtual one. The column in the underlying table remains, but access to it has been shut off.

Here is the operational practice to use such editioning views. You first rename the underlying table. Then you create an editioning view that has the same name as the old table. You drop the triggers on the original table. Recreate the triggers on the new view. Revoke any privs on the base table. This is what we call a hot rollover. There are other similar features in Oracle 11g rel 2. Perhaps I will cover them in a future post.
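
In SQL, the first couple of steps look something like this, with invented names:

    -- Step 1: move the real table out of the way.
    ALTER TABLE employees RENAME TO employees_t;

    -- Step 2: an editioning view takes over the table's old name and
    -- projects only the columns we want applications to see.
    CREATE OR REPLACE EDITIONING VIEW employees AS
      SELECT emp_id, emp_name, salary
        FROM employees_t;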

SQL Developer

There is a new release of Oracle SQL Developer available. It is version 2.1. This has two new big features. They are PL/SQL unit testing and a Data Modeler/Viewer.

The PL/SQL unit testing part is a unit test framework. It allows you to share unit tests. It also lets you run regression tests.

The Data Modeler/Viewer gives you read only access to a database schema. You can view the schema objects graphically. A benefit is the ability to browse the foreign keys between tables.

To tell the truth, there is even a version 2.1.1 patch that is available now. The big v2.1 release weighs in at 94 MB for the install. It requires the Java Development Kit v1.6 update 11.

This app is still free of charge. You can still edit PL/SQL just like the older versions of the tool. And it still integrates with all kinds of source code control.

ORA-12154 Blues

I assigned a developer a trouble ticket from the production environment. He said he was getting errors when trying to connect and reset his password. The specific error was ORA-12154, which translates to "TNS: Could not resolve service name."

This looked like a simple problem to correct. I ensured he had the TNS_ADMIN environment variable set. I also ensured he had the right "tnsnames.ora" file on the network. Both checked out, which seemed strange.

After I gave up, I called in our lead DBA. He wanted to see how many Oracle homes there were on the guy's machine. There were multiple ones due to the multiple products installed. The developer could not find an Oracle Home selector. But this did not turn out to be the problem either.

The lead DBA and myself went through different ideas to get this guy running. Then I had a strange thought. What if this guy had a weird password that was screwing things up? I thought that could not be possible since we have a trigger that prevents bad passwords. However the DBA told me his profile did not enforce the password trigger. Sure enough the developer had an "at sign" character in his password. So the part of his password after the at sign gets treated as the host string. His logon would never work.

In the end we had a DBA reset his password. This just goes to show that an Oracle error message does not always identify the exact problem being encountered. You might need to team up with some competent individuals to figure out the problem. It might also involve thinking outside the box and trying things out.
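
To illustrate with an invented account: in a SQL*Plus connect string, everything after the first at sign is parsed as the host string. Quoting the password should keep the at sign inside it:

    -- Fails with ORA-12154: "pw@PRODDB" is taken as the host string.
    CONNECT jsmith/secret@pw@PRODDB

    -- Double quotes keep the at sign as part of the password.
    CONNECT jsmith/"secret@pw"@PRODDB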

Sun Microsystems and the MySQL Community

Oracle continues to work on acquiring Sun Microsystems. The MySQL community has some opposition to this move. The complaints are led by the creator of MySQL. He sold out his share to Sun. But he thinks Oracle will work to dismantle the MySQL database in order to increase their profit share.

The European Commission is still debating whether to block the acquisition. Many current Oracle customers have written to the EU to encourage the approval of the merger. The MySQL community can fork a copy of the code because the project is open source.

Oracle promises to spend money on MySQL development. It also is committed to not forcing customers to pay for a support contract for MySQL. The MySQL creator says he watched Oracle buy out InnoDB before, and that project's development became stagnant.

SQL Communications Area

There is a structure in Pro*C called the SQL Communications Area. The structure variable is named sqlca. The most useful field in the structure is sqlcode, which is the status variable. You access this variable with the fully qualified name like this "sqlca.sqlcode". The structure is defined in file sqlca.h.

The status variable (sqlcode) has three different types of values. Zero means the operation was a success. A positive value means the SQL statement executed but threw an exception. And a negative value means the statement did not execute. This variable is updated by Oracle after each SQL statement.

Normally you will get a sqlcode of 1403 if there is a No Data Found exception. However you can set the MODE to ANSI in your Pro*C precompiler. This will result in sqlcode being set to 100 (instead of 1403) on No Data Found.

The developer is able to independently define other status variables such as SQLSTATE and SQLCODE. This SQLCODE is different than the sqlca.sqlcode. In addition to the SQL Communication Area, there is an Oracle Communications Area. It has the variable name oraca.

You can trap Pro*C errors using the WHENEVER clause. There are different conditions used with the clause. These are SQLERROR, SQLWARNING, and NOT FOUND. SQLERROR equates to the sqlcode being negative. SQLWARNING is when sqlcode is positive or sqlca.sqlwarn[0] equals ‘W’. And NOT FOUND is a sqlcode of either 1403 or 100, depending on the precompiler MODE.

It is possible to have more than one sqlca. However there is only one active at a time. My recent research into the SQL Communications Area came about when the sqlcode was non-zero for a trivial SQL statement. It turned out that the value was 1403 which means No Data Found. It took a while for me to discover that one of the bind variables had an extra space at the end, causing an update to match no rows.