oracle10g - Deletes Slow on a Big Oracle Table


I have a table with around 180 million records and 40 indexes. A nightly program loads data into this table and, due to business conditions, can both delete from it and load into it: it brings in new records and updates existing records from the source system. We have a limited window of about 6 hours to complete the extract from the source system, perform the business transformations, load the data into the target table, and have it ready for users to consume in the morning. The issue we are facing is that the delete from this table takes a lot of time because of the 40 indexes on it (an average of about 70,000 deletes per hour). I did some digging on the internet and came up with the options below.

a) Drop or disable the indexes before the delete and rebuild them afterwards: the program loads data into the target table after the delete, and that load performs quite a few updates, so the indexes are critical. Rebuilding a single index takes about 1.5 hours because of the enormous amount of data in the table, so this approach is not feasible given the limited time we have to get the data ready for users.
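For reference only, a minimal sketch of what option (a) would look like; idx_name is a placeholder, not one of our actual indexes:

alter session set skip_unusable_indexes = true;  -- let DML skip maintenance of unusable indexes
alter index idx_name unusable;                   -- "disable" the index before the delete/load
-- ... run the delete and the load ...
alter index idx_name rebuild;                    -- this is the step that takes ~1.5 hours per index here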

b) Use a bulk delete: the program deletes based on ROWID and deletes the records one by one, as below:

delete from <table> where rowid = g_wpk_tab(ln_i);

g_wpk_tab is a collection holding the ROWIDs to be deleted; the program reads it by looping over it and issues an intermediate commit every 50,000 row deletes.
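For context, a minimal sketch of how such a batched ROWID delete could be written; my_table and the way g_wpk_tab gets populated are placeholders, not the actual program:

declare
  type t_rowid_tab is table of rowid index by pls_integer;
  g_wpk_tab    t_rowid_tab;
  c_batch_size constant pls_integer := 50000;  -- commit interval mentioned above
  ln_start     pls_integer := 1;
begin
  -- ... g_wpk_tab is assumed to be filled elsewhere with the rowids to delete ...
  while ln_start <= g_wpk_tab.count loop
    forall ln_i in ln_start .. least(ln_start + c_batch_size - 1, g_wpk_tab.count)
      delete from my_table where rowid = g_wpk_tab(ln_i);
    commit;  -- intermediate commit per batch
    ln_start := ln_start + c_batch_size;
  end loop;
end;
/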

Tom of AskTom says in a discussion here that a bulk delete and a row-by-row delete take about the same amount of time:

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::p11_question_id:5033906925164

So this won't be a feasible option either.

c) Regular delete: Tom of AskTom suggests using a regular delete, but that takes too long because of the number of indexes on the table.

d) CTAS: this approach is out of the question because the program would need to recreate the table, create all 40 indexes, and then proceed with the updates, and as mentioned above a single index takes at least 1.5 hours to create.
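For completeness, a rough sketch of what the CTAS route would involve (table names and the keep-predicate are placeholders); it is the 40 create index statements afterwards that rule it out:

create table my_table_new nologging parallel as
  select * from my_table where <predicate for rows to keep>;
-- then: drop/rename the tables, recreate all 40 indexes, grants, constraints, statistics ...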

If you can provide any other suggestions I would appreciate it.

Update: As some of you will have noticed, we have decided to go with the approach suggested by https://stackoverflow.com/users/409172/jonearles: archive instead of delete. The approach adds a flag to the table to mark records for deletion instead of deleting them, and a post-delete program runs during the day to delete those records off. This ensures the data is available to users at the right time. Since the users consume the data via OBIEE, we are planning to set a content-level filter on the table so that users need not know about the archival column; their selects will simply ignore the flagged rows.

Parallel DML: alter session enable parallel dml;, then delete /*+ parallel */ ...;, then commit;. It's that easy.
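A minimal sketch of that sequence; my_table and the predicate are placeholders:

alter session enable parallel dml;

delete /*+ parallel */ from my_table
 where <predicate identifying the rows to remove>;

commit;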

Parallel DDL: alter index your_index rebuild nologging compress parallel;. NOLOGGING reduces the amount of redo generated during the index rebuild. COMPRESS can reduce the size of a non-unique index, which reduces the rebuild time. PARALLEL can make a huge difference in rebuild time if you have more than one CPU or more than one disk. If you're not already using these options, I wouldn't be surprised if using all of them improved your index rebuilds by an order of magnitude. And then 1.5 * 40 / 10 = 6 hours.
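For example (your_index is a placeholder; you may want to reset the attributes once the rebuild is done):

alter index your_index rebuild nologging compress parallel;

-- optionally put the index back to its normal settings afterwards
alter index your_index logging noparallel;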

Re-evaluate your indexes: Do you really need 40 indexes? It's entirely possible that you do, but many indexes are created just because "indexes are magic". Make sure there's a legitimate reason behind each one. That can be difficult to do, since few people document the reason for an index. Before you ask around, you may want to gather some information: turn on index monitoring to see which indexes are being used, and if an index is used, see how it is used, perhaps through v$sql_plan. It's possible an index is only used for one specific statement where another index would have worked just as well.
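A minimal sketch of that information-gathering step, assuming an index called YOUR_INDEX:

alter index your_index monitoring usage;

-- after a representative workload, check whether the index was touched at all
select index_name, monitoring, used, start_monitoring
  from v$object_usage
 where index_name = 'YOUR_INDEX';

-- and see how it is actually used in cached execution plans
select sql_id, operation, options, object_name
  from v$sql_plan
 where object_name = 'YOUR_INDEX';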

Archive instead of delete: Instead of deleting, set a flag to mark a row as archived, invalid, deleted, etc. This avoids the immediate overhead of index maintenance: you ignore the rows temporarily and let another job delete them later. The large downside is that it affects every query on the table.
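A rough sketch of that pattern, assuming a hypothetical archived_flag column added to the table:

-- nightly run: mark instead of delete; indexes that do not include the flag column are not maintained
update my_table
   set archived_flag = 'Y'
 where <predicate identifying the rows that would have been deleted>;
commit;

-- off-peak job: physically remove the flagged rows later
delete from my_table where archived_flag = 'Y';
commit;

-- every consumer query then has to filter the flag, e.g. ... where archived_flag = 'N'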

If upgrading isn't out of the question, 12c has an interesting new feature called In-Database Archiving. It's a more transparent way of accomplishing the same thing.
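A hedged sketch of how that 12c feature works (my_table and the predicate are placeholders):

alter table my_table row archival;  -- adds the hidden ora_archive_state column

-- mark rows archived instead of deleting them ('0' = active, non-zero = archived)
update my_table
   set ora_archive_state = '1'
 where <predicate identifying the rows to archive>;

-- by default sessions only see active rows; to see archived rows as well:
alter session set row archival visibility = all;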

