I'm writing my graduate thesis on methods of importing data from a file into a SQL Server table.
I have created my own program, and now I'm comparing it with some of SQL Server's standard bulk-import methods.
My program reads in lines from a source file, parses them,
and imports them one by one using ordinary INSERTs.
The file contains 1 million lines with 4 columns each.
The result: my program takes 160 seconds, while the standard methods take 5-10 seconds.
Why are BULK operations faster?
Do they use some special mechanism?
Can you please explain this or point me to some useful links?
BULK INSERT can be a minimally logged operation (depending on factors such as the indexes and constraints on the table, the recovery model of the database, and so on). Minimally logged operations only log allocations and deallocations: with BULK INSERT, only the extent allocations are logged rather than the actual data being inserted. This provides much better performance than row-by-row INSERTs, each of which is fully logged.
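To make the contrast concrete, here is a minimal sketch of the two approaches; the table name, file path, and column layout are hypothetical stand-ins for your setup:

    -- Hypothetical target table matching a 4-column source file.
    CREATE TABLE dbo.ImportTarget (
        Col1 INT,
        Col2 INT,
        Col3 VARCHAR(50),
        Col4 VARCHAR(50)
    );

    -- Row-by-row approach: every statement is fully logged and, by default,
    -- committed as its own transaction, so 1 million lines means roughly
    -- 1 million log records and 1 million commits.
    INSERT INTO dbo.ImportTarget (Col1, Col2, Col3, Col4)
    VALUES (1, 2, 'a', 'b');
    -- ...repeated once per line of the file...

    -- Bulk approach: a single statement that can be minimally logged,
    -- e.g. when loading into an empty heap with TABLOCK.
    BULK INSERT dbo.ImportTarget
    FROM 'C:\data\source.csv'
    WITH (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR = '\n',
        TABLOCK
    );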
The real advantage is the reduction in the amount of data written to the transaction log.
Under the BULK_LOGGED or SIMPLE recovery model, the advantage is significant.
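If the database normally runs under the FULL recovery model (where minimal logging is not available), a common pattern is to switch to BULK_LOGGED just for the duration of the load. A sketch, assuming a database named ImportDemo:

    -- Switch to BULK_LOGGED so bulk operations qualify for minimal logging.
    ALTER DATABASE ImportDemo SET RECOVERY BULK_LOGGED;

    -- ...run the bulk load here...

    -- Switch back and take a log backup afterwards: point-in-time restore
    -- is not possible within the bulk-logged portion of the log.
    ALTER DATABASE ImportDemo SET RECOVERY FULL;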
See also: Optimizing Bulk Import Performance.
You should also consider reading this answer: Insert into table select * from table vs bulk insert.
By the way, there are several factors that will influence BULK INSERT performance (a sketch tying some of them together follows this list):
Whether the table has constraints or triggers, or both.
The recovery model used by the database.
Whether the table into which data is copied is empty.
Whether the table has indexes.
Whether TABLOCK is being specified.
Whether the data is being copied from a single client or copied in parallel from multiple clients.
Whether the data is to be copied between two computers on which SQL Server is running.
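As a rough sketch of how a few of these factors can be lined up in practice (the index and table names are hypothetical): disabling a nonclustered index turns the target into a plain heap for the duration of the load, TABLOCK makes the load eligible for minimal logging, and the index is rebuilt once afterwards, which is usually far cheaper than maintaining it row by row. Note that BULK INSERT ignores CHECK and FOREIGN KEY constraints by default unless CHECK_CONSTRAINTS is specified.

    -- Hypothetical names; line the factors up before the load.
    ALTER INDEX IX_ImportTarget_Col1 ON dbo.ImportTarget DISABLE;  -- load into a heap

    BULK INSERT dbo.ImportTarget
    FROM 'C:\data\source.csv'
    WITH (
        TABLOCK,             -- table-level lock, required for minimal logging
        BATCHSIZE = 100000,  -- commit every 100,000 rows instead of all at once
        FIELDTERMINATOR = ',',
        ROWTERMINATOR = '\n'
    );

    ALTER INDEX IX_ImportTarget_Col1 ON dbo.ImportTarget REBUILD;  -- rebuild once, after the load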