Ad Hoc Reporting

I had written a script to restore some data that had been lost. A team convened to run it. The script completed very quickly, and after some spot checks we determined it was a success. The DBAs left the call shortly after that. One task remained: my boss wanted me to write some SQL to produce a comma separated file with the data.

That sounded simple enough: SPOOL the data out to a file. The main command was a big SELECT statement that concatenated all the column values separated by commas. I gave it a try in development and found the lines were wrapping, so I SET LINESIZE to 200. Then we ran the job in the production environment.

Oops. The headers were showing up on every page. I should have SET HEADING OFF. Then we imported the data into Microsoft Excel, and some rows did not line up correctly. The LINESIZE needed to be bigger. Maybe 2000? Is that too high to work? Also, some of the data had commas within the values. Oh man, this was getting tricky.
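For the record, here is roughly the combination of SQL*Plus settings I was fumbling toward. The table and column names are made up, and quoting the text columns is just one common way to cope with embedded commas:

    SET HEADING OFF
    SET PAGESIZE 0
    SET LINESIZE 2000
    SET TRIMSPOOL ON
    SET FEEDBACK OFF
    SPOOL /tmp/report.csv
    -- Quote the text columns so embedded commas do not split the fields in Excel
    SELECT order_id || ',' || '"' || customer_name || '"' || ',' || order_total
      FROM orders;
    SPOOL OFF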

In the end I resorted to a tool, PL/SQL Developer, to do the trick. It can export data with ease.

Overcoming the ORA-01440

I had to test the effect of changing a column type from NUMBER(1) to NUMBER. Luckily our client code did not choke; I guess both types translate into integers in the client programming language. After the test was done, I tried to alter the table back to its original state. Unfortunately I ran into an ORA-01440.

The ORA-01440 happened because I was trying to change the column to a type with decreased precision. Apparently you can only do that if the column you are changing is empty. Well, I figured I could get around that by setting all the values in the column to NULL. I could clean the data up later.

There was one additional problem with my plan. The column I was dealing with had a NOT NULL constraint, so I was in a bind. That's when Tech on the Net provided the idea to just DELETE the records. That way there was no data to hold me back. In practice I exported the data and reimported it after the type was changed. This is something you won't learn in the books. You have to experience this specific type of problem to be able to work around it.
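For the record, the shape of the workaround, with made-up table and column names, was roughly this:

    -- Park the data, empty the table, shrink the type, then reload.
    CREATE TABLE my_table_backup AS SELECT * FROM my_table;

    DELETE FROM my_table;

    -- No longer raises ORA-01440 because the column is now empty
    ALTER TABLE my_table MODIFY (status_flag NUMBER(1));

    INSERT INTO my_table SELECT * FROM my_table_backup;
    COMMIT;

    DROP TABLE my_table_backup;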

Oracle Data Dictionary

I just read an article from Oracle Magazine on the data dictionary views. There was a figure with 12 different user views of interest. I am trying to digest all the information and make some connections between the views. Let's start with USER_SOURCE. Okay I have employed that one before. It can tell you the source code of something you have compiled into the database, such as a stored procedure.

Now an object you compile into the database will also have an entry in USER_OBJECTS. That is also familiar territory. But there is also a USER_OBJECT_SIZE view. I bet you can figure out what that view tells you. Other views hold object information of certain kinds. Your triggers are shown in USER_TRIGGERS, and your procedures in USER_PROCEDURES. But have you ever heard of the view USER_TRIGGER_COLS? Me neither. It maps triggers to the tables and columns they use.

I have never queried USER_ERRORS directly. I just type SHOW ERRORS in SQL*Plus. To do some source code analysis, it looks like I might check out USER_DEPENDENCIES, USER_ARGUMENTS, or USER_IDENTIFIERS. Looks like I have some homework to do. If I find anything of use, I will definitely share it.
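In the meantime, here is the flavor of query I have in mind; MY_PROC is just a placeholder for whatever object I end up poking at:

    -- Source of a stored procedure compiled into my schema
    SELECT line, text
      FROM user_source
     WHERE name = 'MY_PROC'
     ORDER BY line;

    -- Which tables and columns each of my triggers touches
    SELECT trigger_name, table_name, column_name
      FROM user_trigger_cols;

    -- Compilation errors, the long way around SHOW ERRORS
    SELECT line, position, text
      FROM user_errors
     WHERE name = 'MY_PROC';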

Performance Views

I recently read a good article in Oracle Magazine about the fundamentals of performance tuning. Actually it was almost an introduction to tuning. I already knew about the V$SESSION view that tells you what your session is waiting for. However, I discovered there is also a V$SESSION_EVENT view. This view is a history of the waits your session has encountered over its lifetime. Now that is some good stuff. The view itself is small; there are not that many columns. But you can get a lot of data by querying it.
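For example, something like this (assuming you have been granted access to the V$ views) shows the cumulative waits for your own session:

    -- Cumulative wait events for the current session
    SELECT event, total_waits, time_waited
      FROM v$session_event
     WHERE sid = SYS_CONTEXT('USERENV', 'SID')
     ORDER BY time_waited DESC;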

I had heard about the V$SESSTAT view before. But I had never used it and did not even know how to. This important view will help you determine what resources are being used. Yessir, good stuff. You should really check out the article in the Nov/Dec 2012 Oracle Magazine. Perhaps it is online somewhere. The title was "Beginning Performance Tuning: Diagnose the Past". Good luck peeps.
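P.S. V$SESSTAT itself only holds statistic numbers and values, so you join it to V$STATNAME for readable names. Something like this (again assuming access to the V$ views):

    -- Resource usage for the current session, named via V$STATNAME
    SELECT n.name, s.value
      FROM v$sesstat  s
      JOIN v$statname n ON n.statistic# = s.statistic#
     WHERE s.sid = SYS_CONTEXT('USERENV', 'SID')
       AND s.value > 0
     ORDER BY s.value DESC;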

Sequence Issues

I have a PL/SQL package that generates test data for a complex piece of software I maintain. Recently I needed to run some tests, so I used my package to generate unit test data. However, I got a bunch of errors during generation. I traced them down and found that a unique constraint was being violated on insert into a particular table.

The constraint looked right. It appeared we were trying to put duplicate keys into this table. But that did not make sense; I use a database sequence to generate the keys. Wait. I checked the existing records, and then the sequence, and found the sequence value was off. How did this happen? No clue. The fix should be to move the sequence value up past the existing records. But how do you do that?

Well, I guess I could drop and recreate the sequence with a high starting value. That does not feel correct; I want to just set the next value. Turns out there is no specific API to do that. Instead I modified the sequence to have a large increment value, selected NEXTVAL once to push the next value up to where I wanted it, and finally reset the increment on the sequence back to 1.
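With a made-up sequence name and a gap of about 500 values, the dance looks like this:

    -- Widen the increment, burn one value, then put the increment back
    ALTER SEQUENCE my_seq INCREMENT BY 500;

    SELECT my_seq.NEXTVAL FROM dual;

    ALTER SEQUENCE my_seq INCREMENT BY 1;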

This technique feels like a lot of work just to change the next value. But it appears to be the accepted way to do it. Normally you would not want to be mucking with the sequence value, other than getting the next value.

Verification Required

Our development team writes a lot of SQL scripts. Sometimes these just set up some configuration data in the database. Other times they perform some sort of data correction. We delegate the packaging and release of these scripts to our DBA team. Today we got a new requirement from the DBA team: we must also provide a separate verification script to validate the changes made by each SQL script.

Now I guess a verification script might add a small amount of value. If somebody runs it, we can determine whether the original script ran to completion. However, our team now needs to spend extra time writing a verification script for every change. Here is the fault in the logic behind this idea: if somebody was going to make mistakes in a SQL script, what makes you think they could write a verification script that detects those mistakes?

Personally I do not care much. I love writing scripts; the more the merrier. I just need to be given time to do so. Therefore I am not really going to complain about this new requirement. I might even write a few verification scripts that just DBMS_OUTPUT.PUT_LINE('Script verified!') and see if anybody cares. LOL. The downside is that once somebody detects the shenanigans, we are going to be required to write verification scripts for the verification scripts.

Weird DBMS_OUTPUT Behavior

I had to do some unit testing today. Before running the test, I determined the expected outputs. Unfortunately the actual output differed, so I had to fail the test. Then I needed to debug the issue. I stuck some DBMS_OUTPUT statements in the middle of my package. However, the output did not show up in the report.

Initially I thought maybe it was not my code running. I dropped the procedure and the report errored out, so it was my code running. Then I thought it was the positioning of my DBMS_OUTPUT statements. Nope. My statements were simply not getting into the report. Very strange. I started to get spooked. I even wondered if it had something to do with the capitalization of the word DBMS_OUTPUT. Yeah, I know. Nonsense. That was not the cause.

There was only one thing to do: trace this output line by line until I knew what was up. The report is built a little oddly. A UNIX Korn shell script calls a Perl script, which executes a SQL script, which calls my procedure. The Perl script was grabbing the output of the SQL*Plus execution. That's when I saw it. Only certain output was being captured and put into the report. You had to stick a sentinel value in an output line to get it into the report. The rest was skipped over.

Order was restored. But for future reference, I might wrap the output in a procedure. The procedure can put the sentinel value in the output. Then I can comment the heck out of it and explain the hack.
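Something along these lines is what I have in mind; the 'REPORT:' prefix is just a stand-in for whatever sentinel the Perl script actually looks for:

    CREATE OR REPLACE PROCEDURE report_line (p_text IN VARCHAR2) IS
    BEGIN
      -- The Perl wrapper only keeps lines carrying the sentinel, so prepend it
      -- here instead of remembering to do it at every call site.
      DBMS_OUTPUT.PUT_LINE('REPORT: ' || p_text);
    END report_line;
    /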

Single Sign On

A couple of our power users had Single Sign On installed on their workstations. They were reporting that our apps were not working with this configuration. A developer from our team thought this was strange, as no modification to our programs was supposed to be required with SSO. I told him that our apps would most likely need to be changed, because we control Oracle login with our own custom login screens.

I did just a bit of digging today, and now I understand the situation better. Oracle has "Intelligent Application Response". This is a way to configure SSO to handle the logon automatically by filling in fields in custom dialogs such as ours. So it is true you will not need to modify your apps. You just need to configure Intelligent Application Response to recognize your login dialogs.

I use the term SSO. However, I should really refer to it as ESSO, which stands for Enterprise Single Sign-On. There is a component called ESSO-LM that controls the login through our custom screens. ESSO-LM stands for Enterprise Single Sign-On Logon Manager. Sounds legit. We need at least one developer to come up to speed with this technology.

Right now our apps look like they are not cooperating with a solution that requires no legacy app change. That makes us look like the bad guy. We need to bone up on the tech to speak intelligently, and point out what needs to be done to integrate our system with ESSO.

Quick Fix

There had been some high priority customer problems in the morning. I got called into some meetings. The customer had an idea to shut off a type of transaction until we figured out what was wrong. I said that would work. Managers and executives from the customer organization spent a lot of time deliberating over what to do.

At the end of the day, my boss asked me to join another conference call. They wanted the transactions halted. My boss thought we could just put some trigger on the table containing the transaction requests. I was tasked with figuring this out and shipping it immediately.

This seemed easy enough. I wrote an AFTER INSERT trigger that fired for each row. In the trigger I copied the inserted row to another table, then deleted the row from the table on which the trigger fired. Boom. You can't do that. You get a mutating table error (ORA-04091): a row-level trigger cannot read or modify the table it is firing on.

I looked around. The net had some ideas about storing the data from the row-level trigger in some structure, so that a statement-level trigger could gather that data and operate on it. Shoot. I did not have time for this nonsense. But I know a bit about how this data gets inserted: every affected row arrives in its own single-row statement. Therefore there is a one-to-one correspondence between the row-level and statement-level trigger firings. Bamm.
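So the move and the delete could both happen at statement level. A minimal sketch, with hypothetical table names and assuming the holding table mirrors the request table's columns:

    CREATE OR REPLACE TRIGGER trg_hold_txn_requests
      AFTER INSERT ON txn_requests
    BEGIN
      -- Statement-level, so no ORA-04091: park whatever just landed, then clear it
      INSERT INTO txn_requests_held
        SELECT * FROM txn_requests;

      DELETE FROM txn_requests;
    END;
    /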

I moved everything into the statement level trigger and was off to the races. Polished up my scripts and sent them over to test. The test team took a long time to do their tests. Our database team built a database release. It got sent out to production in the middle of the night. I was up until 3am to make sure things went smoothly. They did. This morning the customer was relieved to find the transactions halted.

Now we need to investigate the root cause of the problem. Somebody else got assigned that task. As for me, I worked only about half a day today. I was up all night. No need to put in any more hours for now.

External Tables

We frequently get requests from our customer to do mass updates. In the past, I would just take the data and generate a SQL script from it. Seems legit. This worked up to the point where we started getting too much data. Then I started splitting the jobs up into multiple scripts. Things got out of hand when I needed 15 or more scripts to do the job, and each script itself took a long time to run.

External tables came to the rescue. Now I put the data from the customer in a file on the server. Then I create an external table backed by that file. The SQL*Loader access driver does the job of reading that file and exposing it to me like a normal database table. I can't do any updates on this table, but I don't need to. I just use it as input to update the database. I loop a cursor through the records, make my updates, and the job is done.
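The setup looks roughly like this; the directory path, file name, columns, and target table are invented for illustration, and creating the directory object requires the appropriate privilege:

    CREATE OR REPLACE DIRECTORY data_dir AS '/u01/app/loads';

    CREATE TABLE customer_updates_ext (
      account_id  NUMBER,
      new_status  VARCHAR2(20)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY data_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
      )
      LOCATION ('customer_updates.csv')
    );

    -- The external table is read-only input; the real updates happen here
    BEGIN
      FOR rec IN (SELECT account_id, new_status FROM customer_updates_ext) LOOP
        UPDATE accounts
           SET status = rec.new_status
         WHERE account_id = rec.account_id;
      END LOOP;
      COMMIT;
    END;
    /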

The beauty of this solution is that it is very fast. The script technique took a long time to execute when the scripts were large. The external table technique hardly takes any time at all.

Performance Tuning Triggers

I had to test some functionality that essentially hangs with large volumes. The first test ran for over 4 hours. That's when I killed it. I was hoping it would eventually finish. No such luck. I took the sample size down by 99%, and the thing ran for 30 seconds. Then I bumped the sample size up 10 times, and it ran for 10 minutes. That's not linear.

There was mass hysteria on the team about the many triggers on the table getting updated. I did a performance test, enabling only one of the triggers at a time. Turns out only one trigger caused any performance change. I studied the code of that trigger and found that it updated a lot of records for every update to the base table. Not good.

My fix was to disable the trigger during the large update. However, I did not want to disable the trigger for all users, just for the session doing the update. There is no built-in Oracle support to disable a trigger per session, so I used Murnane's technique to get the same effect. I tested this out with some profiling SQL scripts. Now I am putting it into the application. Let's hope for speedy returns.
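I will not reproduce Murnane's technique here, but the general shape of a per-session disable, roughly what I built, is a package flag the trigger checks before doing its expensive work. The names below are placeholders:

    CREATE OR REPLACE PACKAGE trigger_ctl IS
      -- Package state is per-session, so flipping this only affects my session
      g_disabled BOOLEAN := FALSE;
    END trigger_ctl;
    /

    CREATE OR REPLACE TRIGGER trg_expensive_maintenance
      AFTER UPDATE ON big_table
      FOR EACH ROW
    BEGIN
      IF trigger_ctl.g_disabled THEN
        RETURN;  -- this session has opted out of the expensive work
      END IF;
      -- ... the original expensive per-row work goes here ...
      NULL;
    END;
    /

    -- In the bulk-update session only:
    -- BEGIN trigger_ctl.g_disabled := TRUE; END;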

Divide and Conquer

My customer is doing an acceptance test of my latest code changes. Things are not going well. Initially I found that they were getting blocked by one of the business rules in the system. Then I determined they were running the wrong scripts. Finally, the code was hitting an Oracle error.

The pressure was mounting. I got on a conference call with a bunch of managers. I needed to figure this out fast. The script that was bombing was not huge. But the SQL in it was massive. Lots of joins and unions and large predicates. Ouch. How was I going to figure this out?

Then I decided to try to find a shortcut. The call stack showed that a function in a PL/SQL package was actually raising the exception. I checked and found we got lucky: there were only two places where the script accessed the package. Bamm. We were trying to construct a date with a value that is sometimes artificially too large.

Sometimes you can get saved by a little divide and conquer.