Let's pretend that I have 1,000,000 rows of data to insert and I don't care about data loss.
How much faster would a single commit after all 1,000,000 rows be than one commit every 100,000 rows?
Same question for updates and deletes.
How can I change the commit size, i.e. not commit the transaction until XXX rows have been processed?
Commit is not something that happens automatically every N rows, but something that the client has to perform explicitly.
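For example, the client can count rows itself and commit once per batch. Here is a minimal sketch assuming PostgreSQL with the psycopg2 driver and a hypothetical table items(id, payload); the commit frequency is just a parameter of the loop, not a server setting:

```python
import psycopg2

def insert_in_batches(rows, batch_size, dsn="dbname=test"):
    """Insert rows into a hypothetical items(id, payload) table,
    committing once per batch_size rows instead of once per row."""
    conn = psycopg2.connect(dsn)
    try:
        with conn.cursor() as cur:
            for i, (id_, payload) in enumerate(rows, start=1):
                cur.execute(
                    "INSERT INTO items (id, payload) VALUES (%s, %s)",
                    (id_, payload),
                )
                if i % batch_size == 0:
                    conn.commit()  # end the current transaction; the next execute starts a new one
        conn.commit()  # commit whatever is left after the last full batch
    finally:
        conn.close()

# One commit per 100,000 rows:
insert_in_batches(((i, f"row {i}") for i in range(1, 1_000_001)), batch_size=100_000)
```

The same pattern works for UPDATE and DELETE statements: count the rows you process and call commit whenever the counter reaches the batch size.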
As to whether batches of 100,000 rows would perform better or worse than a single batch of a million rows, that is a question for a benchmark. I would expect no big difference beyond a certain batch size.
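If you want to measure it rather than guess, a rough benchmark sketch reusing the hypothetical insert_in_batches helper above could simply vary the batch size between runs (truncate the table between runs so each load starts from the same state):

```python
import time

def time_batch_size(batch_size, n_rows=1_000_000):
    """Time one full load of n_rows with the given commit batch size.
    Assumes the target table is truncated between runs."""
    rows = ((i, f"row {i}") for i in range(1, n_rows + 1))
    start = time.perf_counter()
    insert_in_batches(rows, batch_size=batch_size)
    return time.perf_counter() - start

for size in (10_000, 100_000, 1_000_000):
    print(f"batch size {size:>9,}: {time_batch_size(size):.1f} s")
```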