There has been a lot of drama on our project over performance problems we are having in production. Some people attribute the issues to the number of triggers we have in the database. That sounds like nonsense to me. I deal in hard facts, and nobody has offered any evidence to back up this theory. So I decided to run some tests in our database to measure the overhead that triggers add to DML.
I wrote a harness script that updates 10,000 records, committing after every update. Against the table with no triggers on it, the harness runs at a good clip and takes around a minute to complete. Then I added a single trigger with an empty body, one that does no work at all. I was shocked to find the harness took over 20% longer to run with that trigger in place.
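To make the experiment concrete, here is a minimal sketch of that kind of harness. It is not our actual script or schema; it uses Python with an in-memory SQLite table, and names like `accounts` and `noop` are made up for illustration. The shape is the same, though: update 10,000 rows one at a time, commit after each update, then add an empty trigger and time the same loop again.

```python
import sqlite3
import time

ROWS = 10_000

def build_db(path=":memory:"):
    # Throwaway table standing in for the real one.
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
    conn.executemany(
        "INSERT INTO accounts (id, balance) VALUES (?, 0)",
        [(i,) for i in range(ROWS)],
    )
    conn.commit()
    return conn

def run_harness(conn):
    # Update each row individually, committing after every statement,
    # and report the total elapsed wall-clock time.
    start = time.perf_counter()
    for i in range(ROWS):
        conn.execute("UPDATE accounts SET balance = balance + 1 WHERE id = ?", (i,))
        conn.commit()
    return time.perf_counter() - start

if __name__ == "__main__":
    conn = build_db()
    baseline = run_harness(conn)

    # An "empty" trigger: it fires on every update but does no real work.
    conn.execute("CREATE TRIGGER noop AFTER UPDATE ON accounts BEGIN SELECT 1; END")
    with_trigger = run_harness(conn)

    print(f"no trigger:    {baseline:.2f}s")
    print(f"empty trigger: {with_trigger:.2f}s")
    print(f"overhead:      {(with_trigger / baseline - 1) * 100:.1f}%")
```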
I had another developer present when I ran the test, and he could not believe the results either. He had some suggestions for vetting them. I ran the tests a couple more times, and it looks like the first run was a fluke. He also suggested running the tests against a database on my own machine to eliminate irrelevant variables.
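One easy way to keep a fluke run from deciding the argument is to time the same workload several times and compare medians rather than single samples. A small, generic helper along those lines, again just a sketch:

```python
import statistics
import time

def median_runtime(workload, runs=5):
    # Time the same workload several times and keep the median, so a
    # single anomalous run does not skew the comparison.
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)
```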
So next I plan to move the tests over to a local database on my machine. That takes the network out of the picture and eliminates other users of the database from the equation. Then I am going to time a table with a lot of triggers on it, including some cascading triggers. When I am done, I will have real empirical evidence to back up my opinion that triggers in and of themselves have no impact on performance.
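For the cascading case, the idea is a chain of tables where the trigger on one table updates the next, so a single UPDATE fans out into a whole series of trigger executions. A rough sketch of how that could be set up in the same SQLite stand-in (the table and trigger names are invented):

```python
import sqlite3

CASCADE_DEPTH = 5

def build_cascade(conn, depth=CASCADE_DEPTH):
    # Chain of tables t0 -> t1 -> ... -> t<depth>: the trigger on each
    # table updates the next one, so one UPDATE on t0 fires the whole chain.
    for level in range(depth + 1):
        conn.execute(f"CREATE TABLE t{level} (id INTEGER PRIMARY KEY, n INTEGER)")
        conn.execute(f"INSERT INTO t{level} (id, n) VALUES (1, 0)")
    for level in range(depth):
        conn.execute(
            f"CREATE TRIGGER cascade{level} AFTER UPDATE ON t{level} "
            f"BEGIN UPDATE t{level + 1} SET n = n + 1 WHERE id = 1; END"
        )
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    build_cascade(conn)
    conn.execute("UPDATE t0 SET n = n + 1 WHERE id = 1")
    conn.commit()
    # Every table down the chain should now show n = 1.
    for level in range(CASCADE_DEPTH + 1):
        print(f"t{level}:", conn.execute(f"SELECT n FROM t{level}").fetchone()[0])
```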