Expert Oracle Tips by Burleson Consulting
August 24, 2010
Question: We were trying to insert approximately 76 million rows (about 4 gigabytes of data) with a batch job when the insert operation failed with an archive log error. What are the best practices for performing large batch insert jobs to avoid hanging?
Answer: When you run a large batch insert or update job, you risk aborting with:
- An insert that hangs when your archive redo log directory becomes full.
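When the archive destination fills, the archiver cannot write, and every session that generates redo stalls. A minimal sketch of how a DBA might inspect and enlarge the archive destination (the quota size shown is an assumption, not from the article):

```sql
-- Check where archived logs are going and whether the destination is valid
SELECT dest_name, destination, status
  FROM v$archive_dest
 WHERE status = 'VALID';

-- If archiving to the flash recovery area, see how much space is consumed
SELECT file_type, percent_space_used
  FROM v$flash_recovery_area_usage;

-- One remedy: raise the recovery-area quota (50G is a hypothetical size)
ALTER SYSTEM SET db_recovery_file_dest_size = 50G SCOPE=BOTH;
```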
Divide and Conquer: Make the job re-startable and commit every 15 minutes to release held rollback segments (undo logs).
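One common way to make the load re-startable is to commit in fixed-size batches and skip rows that were already loaded, so a failed run can resume where it left off. A minimal PL/SQL sketch, where staging_table, target_table, and the id column are all hypothetical names:

```sql
DECLARE
  v_rows PLS_INTEGER := 0;
BEGIN
  -- Skip rows already present in the target, so a re-run resumes cleanly
  FOR r IN (SELECT s.*
              FROM staging_table s
             WHERE NOT EXISTS (SELECT 1
                                 FROM target_table t
                                WHERE t.id = s.id))
  LOOP
    INSERT INTO target_table VALUES r;
    v_rows := v_rows + 1;
    IF v_rows >= 10000 THEN   -- commit every N rows to release undo
      COMMIT;
      v_rows := 0;
    END IF;
  END LOOP;
  COMMIT;
END;
/
```

Committing by row count rather than by wall-clock time gives the same effect as the 15-minute rule while keeping each transaction's undo footprint bounded.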
Dedicated undo: Assign a giant, dedicated rollback segment to the batch job, large enough to hold all of the before images for any updates.
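With dictionary-managed (pre-automatic) undo management, a job can pin a named rollback segment as the first statement of each transaction. A sketch, where big_rbs, the tablespace name, and the storage sizes are all assumptions:

```sql
-- Create a large rollback segment sized for the batch job
CREATE ROLLBACK SEGMENT big_rbs
  TABLESPACE rbs_ts
  STORAGE (INITIAL 100M NEXT 100M MINEXTENTS 2);

ALTER ROLLBACK SEGMENT big_rbs ONLINE;

-- Must be the first statement of the transaction that does the DML
SET TRANSACTION USE ROLLBACK SEGMENT big_rbs;
```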
Parallelize the insert job: There are two types of parallelism for large DML jobs:
(1) You can use parallel DML, or
(2) submit multiple simultaneous insert jobs, making sure you have enough freelists allocated to the table to prevent buffer busy waits.
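Option (1), parallel DML, is enabled per session before the statement runs. A sketch, where the table names and the degree of 8 are hypothetical:

```sql
-- Parallel DML must be enabled explicitly for the session
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ PARALLEL(target_table, 8) */ INTO target_table
SELECT /*+ PARALLEL(staging_table, 8) */ *
  FROM staging_table;

-- The session must commit before it can query the table it just modified
COMMIT;
```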
Use insert append: Adding the APPEND hint to your insert ensures a direct-path load: rows are written into fresh, empty blocks above the table's high-water mark rather than being fitted into partially-used blocks. If you are doing parallel DML, append mode is the default and you don't need to specify an APPEND hint.
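A serial direct-path insert looks like this (table names are hypothetical):

```sql
INSERT /*+ APPEND */ INTO target_table
SELECT * FROM staging_table;

-- A direct-path insert requires a commit before the session reads the table again
COMMIT;
```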
Disable/drop indexes: It's far faster to rebuild indexes after the data load, all at once. Also, indexes will rebuild cleaner, and with less I/O, if they reside in a tablespace with a large block size.
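One way to defer index maintenance without dropping the index definition is to mark it unusable before the load and rebuild it afterwards. A sketch, where the index name and the degree of parallelism are assumptions:

```sql
-- Before the load: stop maintaining the index
ALTER INDEX target_idx UNUSABLE;
ALTER SESSION SET skip_unusable_indexes = TRUE;

-- ... run the batch insert ...

-- After the load: rebuild in one pass; NOLOGGING and PARALLEL speed the rebuild
ALTER INDEX target_idx REBUILD NOLOGGING PARALLEL 8;
```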
This article is reposted from the 海天一鸥 blog on cnblogs; original link: http://www.cnblogs.com/sgsoft/archive/2011/01/06/1927053.html. Please contact the original author before reposting.