So you have a lot of data you want to update. I am talking about the whole big table. How do you do it? Well, in the good old days you found a way to divide and conquer. Pretty much you put some limiter in the WHERE clause of the UPDATE so it only touched a small batch at a time. That way the whole darn table did not get locked.
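Here is a minimal sketch of that old-school pattern. The table BIG_TABLE and its STATUS column are made-up names for illustration; the point is just the bounded batch plus the COMMIT inside the loop:

BEGIN
  LOOP
    -- Only touch a bounded batch per pass so locks stay small.
    UPDATE big_table
       SET status = 'DONE'
     WHERE status = 'PENDING'
       AND ROWNUM <= 10000;
    -- Nothing left to update means we are finished.
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;  -- release the locks held by this batch
  END LOOP;
END;
/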
The idea is similar in Oracle 11g Release 2. However, now the batching is supported by the database itself. You make use of the DBMS_PARALLEL_EXECUTE package. You choose a way to chunk up your data. That is, you divide up the data by some characteristic. The new package then does the work of splitting your DML into separate statements that run in parallel.
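Here is a sketch of what that looks like when chunking by ROWID. The task name, table name, and the UPDATE itself are placeholders; the bind variables :start_id and :end_id are the ones the package fills in with each chunk's boundaries:

BEGIN
  -- Create a named task, then carve the table into ROWID ranges.
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'bulk_update');
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
    task_name   => 'bulk_update',
    table_owner => USER,
    table_name  => 'BIG_TABLE',
    by_row      => TRUE,
    chunk_size  => 10000);

  -- Run the DML once per chunk, four chunks at a time.
  -- :start_id and :end_id are bound to each chunk's ROWID range.
  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name      => 'bulk_update',
    sql_stmt       => 'UPDATE big_table SET status = ''DONE'' WHERE rowid BETWEEN :start_id AND :end_id',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 4);

  DBMS_PARALLEL_EXECUTE.DROP_TASK(task_name => 'bulk_update');
END;
/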
The procedures in the package do their own COMMIT for each chunk. You also need to have the CREATE JOB privilege, as a bunch of scheduler jobs will be created on your behalf to get the job done in parallel. A common way to chunk up your data is by ROWID. However you can choose some other attribute of your data. On our project we use a generated ID to do the chunking, as sketched below. And previously we had used partitions to break up the chunks. Maybe it is time to rethink that plan.
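For chunking on a numeric key instead of ROWID, the package offers CREATE_CHUNKS_BY_NUMBER_COL. Again, BIG_TABLE and its ID column are hypothetical names standing in for whatever generated ID you chunk on:

BEGIN
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'bulk_update_by_id');

  -- Split the table into ranges of the numeric ID column.
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_NUMBER_COL(
    task_name    => 'bulk_update_by_id',
    table_owner  => USER,
    table_name   => 'BIG_TABLE',
    table_column => 'ID',
    chunk_size   => 10000);

  -- Same idea as before, but the binds now carry ID ranges.
  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name      => 'bulk_update_by_id',
    sql_stmt       => 'UPDATE big_table SET status = ''DONE'' WHERE id BETWEEN :start_id AND :end_id',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 4);

  DBMS_PARALLEL_EXECUTE.DROP_TASK(task_name => 'bulk_update_by_id');
END;
/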