My database has Jobs_table (PK_Job_Num), HistoryTable (PK_Serial_Num, IDX_Job_Num), and ReturnsTable (PK_Serial_Num). The tables are set up with cascading deletes, so if you delete a Job, the History and Returns records are deleted as well. After reading a bunch of posts on the Internet, I decided it would be wise to use chunked deletes: deleting from the Returns table, then History, and finally the Jobs table. I keep getting an out-of-memory error after about an hour of deleting. I am deleting around 36 million records. My question: is there a better way, using the cascading deletes, without growing the log file too much? Not growing the log was a major reason for using chunks, but it appears that although this approach doesn't grow the log, it is a hog on memory.
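For context, here is the kind of batched delete loop I mean. This is only a sketch assuming SQL Server: the column name JobDate and the cutoff predicate are guesses based on my sample data below, not my exact code. The idea is to delete parent rows in small batches and let the cascade remove the child rows per batch, so each transaction (and its log footprint) stays small.

```sql
-- Hypothetical batched delete against the parent table only;
-- the ON DELETE CASCADE constraints remove the matching
-- HistoryTable and ReturnsTable rows for each batch.
DECLARE @BatchSize int = 5000;
DECLARE @RowsDeleted int = 1;

WHILE @RowsDeleted > 0
BEGIN
    DELETE TOP (@BatchSize)
    FROM Jobs_table
    WHERE JobDate < '2016-01-01';   -- hypothetical cutoff predicate

    SET @RowsDeleted = @@ROWCOUNT;

    -- Under the SIMPLE recovery model, a CHECKPOINT between batches
    -- lets the reclaimed log space be reused; under FULL recovery,
    -- take transaction log backups between batches instead.
    CHECKPOINT;
END
```

With this shape there is no need to delete from Returns and History separately first; each small batch commits on its own, so the log can be truncated or backed up as the loop runs.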
Job    JobDate      TotalPieces
I056   2015-12-04       1171076
I509   2015-12-04          4753
I613   2015-12-04          6073
I872   2015-12-04          5655
I883   2015-12-04          5618
I717   2015-12-04       1732498
I718   2015-12-11       1055570
I614   2015-12-11          6790
I510   2015-12-11          4000
I075   2015-12-11        980872
H964   2015-12-11        522674
I323   2015-12-11        502853
1104   2015-12-11        110926
1111   2015-12-18         99246
I719   2015-12-18        833775
J052   2015-12-18        560519
I812   2015-12-18          5482
I852   2015-12-18          4433
J285   2015-12-18       1452321
J997   2015-12-18          5915
J999   2015-12-18          2943
J965   2015-12-23       1064961
J978   2015-12-23        575038
J331   2015-12-23          6687
J460   2015-12-23          4625
I720   2015-12-23        385627
1118   2015-12-23         97049
I222   2015-12-23        567837
1125   2016-01-01         99721
J027   2016-01-01       1581536
J461   2016-01-01          4506
J332   2016-01-01          6238
J761   2016-01-01        515395