TNS Names Alias Cache

Our DBA team keeps a TNSnames.ora file out on a network drive. They maintain it with entries for all the databases anyone would need to connect to, and all the developers use this shared copy to resolve TNS alias names. Today one developer discovered that right after he boots up, our application cannot access that file.

The developer said he must first open the network share in Windows Explorer before any of our apps can use the share to reach the TNSnames.ora file. He thinks that maybe Windows needs to cache the network share information or something. That did not sound like a solid theory, but the evidence was plain. He said the first time after a reboot, he always gets a database-not-found error. The second time he tries, our apps work fine.

Well, there may be some truth to this theory. However, he went on to generalize that the users of our system might encounter the same problem whenever cached data is not present and a database call times out. That just does not seem to have a leg to stand on.

For starters, we put the TNSnames.ora file on the local C: drive of our customers' computers, so there should be no delay in accessing that file. Sure, the database server is far away and accessed over the network. But I cannot believe that some delay would cause a timeout resulting in random application errors.

Garbled Data

Our test team was trying to recreate some problems detected by our customers. They were having a hard time. So they took a look around at the data in the different tables in this part of the subsystem. They found the main table that drives the processing. Unfortunately, most of the data seemed to be all garbled up.

They were using SQL Developer. One of the testers wondered if this could be a problem with the query tool. That was doubtful. But hey. It could not hurt to try viewing the data with another tool. So another tester pulled out TOAD and tried to bring up the data. Most of the data seemed empty.

The testers figured that SQL Developer was just displaying all the data, regardless of how strange it was. TOAD, on the other hand, was suppressing the display of weird-looking data. This did not help us understand why the data was bad. It was just an interesting observation.

The data in the main table was loaded via SQL*Loader from a file the testers were creating. Further inspection of this file showed that the field positioning was incorrect. Perhaps that was the source of the weird-looking data. Further research is required.
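For context, a fixed-position SQL*Loader control file looks something like this (the file, table, and column names here are hypothetical, not our actual schema). If the fields in the data file drift even one character off the declared positions, every loaded value comes out garbled:

```sql
-- Hypothetical SQL*Loader control file for a fixed-position data file.
-- Each field is read from an absolute character range in every record.
LOAD DATA
INFILE 'main_data.dat'
INTO TABLE main_table
(
  record_id    POSITION(1:10)  INTEGER EXTERNAL,
  status_code  POSITION(11:12) CHAR,
  description  POSITION(13:52) CHAR
)
```

If the testers' generated file shifted the description field a few characters left or right, SQL*Loader would still happily load whatever characters land in positions 13 through 52, which would explain the garbled rows.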

Ad Hoc Reporting

I had written a script to restore some data that had been lost. A team convened to run my script. The thing completed very quickly. After some spot checks, we determined my script was a success. The DBAs left the call shortly after that. There was one last task left. My boss wanted me to write some SQL to produce a comma separated file with the data.

That sounded simple enough: SPOOL the data out to a file. The main command was a big SELECT statement that concatenated all the column values separated by commas. I gave it a try in development and found the lines were wrapping. So I SET LINESIZE to 200. Then we ran the job in the production environment.

Oops. The headers were showing up on every page. I should have SET HEADING OFF. Then we imported the data into Microsoft Excel. Some rows were not lining up correctly. The LINESIZE needed to be bigger. Maybe set it to 2000? Is that too high to work? Also, some of the data had commas within the values. Oh man, this was getting tricky.
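For the record, a SQL*Plus spool-to-CSV script that avoids these pitfalls looks roughly like this. The table and column names are made up for illustration; the SET commands are the real fix:

```sql
-- Sketch of a SQL*Plus spool-to-CSV script; schema names are hypothetical.
SET HEADING OFF      -- no column headers
SET PAGESIZE 0       -- no page breaks, so headers cannot repeat
SET LINESIZE 2000    -- wide enough that long rows never wrap
SET TRIMSPOOL ON     -- strip the trailing blanks padded out to LINESIZE
SET FEEDBACK OFF     -- no "N rows selected" trailer in the file

SPOOL restored_data.csv

SELECT order_id || ',' ||
       customer_name || ',' ||
       '"' || REPLACE(notes, '"', '""') || '"'   -- quote fields that may contain commas
  FROM restored_orders;

SPOOL OFF
```

Wrapping comma-bearing values in double quotes (and doubling any embedded quotes) is what keeps Excel from splitting them across cells.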

In the end I resorted to a tool like PL/SQL Developer to do the trick. It can export data with ease.

Overcoming the ORA-01440

I had to test the effect of changing a column type from NUMBER(1) to NUMBER. Luckily our client code did not choke. I guess both types translate into integers in the programming language on the client. After the test was done, I tried to alter the table back to its original state. Unfortunately I ran into an ORA-01440.

The ORA-01440 happened because I was trying to change the column to a type with decreased precision. Apparently you can only do that if the column you are changing is empty. Well, I figured I could get around that by setting all the values in the column to NULL. I could clean the data up later.

There was one additional problem with my plan: the column I was dealing with was NOT NULL. I was in a bind. That's when Tech on the Net provided the idea to just DELETE the records. That way there was no data to hold me back. Actually, I exported the data and reimported it after the type was changed. This is something you won't learn in the books. You have to experience this specific type of problem to be able to work around it.
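The whole work-around can be sketched in a few statements (table and column names are hypothetical here):

```sql
-- Sketch of the ORA-01440 work-around; schema names are hypothetical.
-- 1. Preserve the existing rows somewhere else.
CREATE TABLE my_table_backup AS SELECT * FROM my_table;

-- 2. Empty the table. The column is NOT NULL, so it cannot simply
--    be set to NULL; removing the rows is the only way to empty it.
DELETE FROM my_table;

-- 3. With no data in the column, the precision can be decreased.
ALTER TABLE my_table MODIFY (flag_col NUMBER(1));

-- 4. Put the data back and drop the scratch copy.
INSERT INTO my_table SELECT * FROM my_table_backup;
DROP TABLE my_table_backup;
```

The CREATE TABLE ... AS SELECT plus re-INSERT pair stands in for whatever export/import tooling you prefer; the key point is that the column must hold no values when the ALTER runs.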

Oracle Data Dictionary

I just read an article from Oracle Magazine on the data dictionary views. There was a figure with 12 different user views of interest. I am trying to digest all the information and make some connections between the views. Let's start with USER_SOURCE. Okay I have employed that one before. It can tell you the source code of something you have compiled into the database, such as a stored procedure.

Now an object you compile into the database will also have an entry in USER_OBJECTS. That is also familiar territory. But there is also a USER_OBJECT_SIZE view. I bet you can figure out what that view tells you. Other views hold object information of certain kinds. Your triggers are shown in USER_TRIGGERS, and your procedures in USER_PROCEDURES. But have you ever heard of the view USER_TRIGGER_COLS? Me neither. It looks like it maps triggers to the tables and columns they use.
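A quick query against that view shows the mapping directly; this is just a sketch of what I expect it to return:

```sql
-- Which table columns does each of my triggers reference or update?
SELECT trigger_name,
       table_name,
       column_name,
       column_usage
  FROM user_trigger_cols
 ORDER BY trigger_name, column_name;
```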

I have never queried USER_ERRORS directly. I just type SHOW ERRORS in SQL*Plus. To do some source code analysis, it looks like I might check out USER_DEPENDENCIES, USER_ARGUMENTS, or USER_IDENTIFIERS. Looks like I got some homework to do. If I find anything of use, I will definitely share it.

Performance Views

I recently read a good article in Oracle Magazine about the fundamentals of performance tuning. Actually, it was almost an introduction to tuning. I already knew about the V$SESSION view that tells you what your session is waiting for. However, I discovered there is also a V$SESSION_EVENT view. This view is a history of the waits your session has encountered over its lifetime. Now that is some good stuff. The view itself is small; there are not that many columns. But you can get a lot of data by querying it.

I had heard about the V$SESSTAT view before. But I had not used it and did not even know how to use it. This important view will help you determine what resources are being used. Yessir, good stuff. You should really check out the article in the Nov/Dec 2012 Oracle Magazine. Perhaps it is online somewhere. The title was "Beginning Performance Tuning: Diagnose the Past". Good luck, peeps.
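Here is roughly how I plan to query both views for my own session. The subquery looks up my SID by matching the auditing session ID, which is one common way to find your own row in V$SESSION:

```sql
-- Wait history for my own session, worst waits first.
SELECT event, total_waits, time_waited
  FROM v$session_event
 WHERE sid = (SELECT sid FROM v$session
               WHERE audsid = SYS_CONTEXT('USERENV', 'SESSIONID'))
 ORDER BY time_waited DESC;

-- Resource usage for the same session. V$SESSTAT stores only a
-- statistic number, so join to V$STATNAME for the readable names.
SELECT n.name, s.value
  FROM v$sesstat s
  JOIN v$statname n ON n.statistic# = s.statistic#
 WHERE s.sid = (SELECT sid FROM v$session
                 WHERE audsid = SYS_CONTEXT('USERENV', 'SESSIONID'))
   AND s.value > 0
 ORDER BY s.value DESC;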

Sequence Issues

I have a PL/SQL package that generates test data for a complex piece of software I maintain. Recently I needed to run some tests, so I used my package to generate unit test data. However, I got a bunch of errors during generation. I traced them down and found that a unique constraint was being violated on insert into a particular table.

The constraint looks right. It appears we are trying to put duplicate keys into this table. But that does not make sense; I use a database sequence to generate the keys. Wait. I check the existing records, and then the sequence, and find the sequence value is off. How did this happen? No clue. The fix should be to move the sequence value up past the existing records. But how do you do that?

Well, I guess I could drop and recreate the sequence with a higher starting value, but that does not feel correct. I want to just set the next value. It turns out there is no API to do that directly. Instead, I modify the sequence to have a large increment value, then select NEXTVAL once, pushing the next value up to where I want it to be. Finally, I reset the increment back to 1 on the sequence.
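The three steps above look like this in SQL. The sequence name and the gap of 500 are hypothetical; you would size the increment to the gap between the sequence and the highest existing key:

```sql
-- Sketch: push a sequence forward without dropping it.
-- Suppose the sequence is about 500 values behind the existing keys.
ALTER SEQUENCE order_seq INCREMENT BY 500;

-- Consume one value, jumping NEXTVAL past the existing keys.
SELECT order_seq.NEXTVAL FROM dual;

-- Restore the normal increment for everyday use.
ALTER SEQUENCE order_seq INCREMENT BY 1;
```

The middle SELECT is the step people forget: the new increment only takes effect when a NEXTVAL is actually drawn.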

This technique feels like a lot of work just to change the next value, but it appears to be the accepted method. Normally you would not want to be mucking with the sequence value, other than getting the next value.