We currently have a system that displays tabular data on screen without any paging support in the user interface. It runs on a Java 1.5 / Spring JDBC / T-SQL stored procs / SQL Server 2000 stack.
In the absence of the ability to skip rows in the result set (a limitation of SQL Server 2000 without using dynamic SQL), I am exploring the option of having the data layer select all rows and having the DAO layer scroll past the skipped pages of rows, reading off only a page's worth of rows.
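For concreteness, here's a rough sketch of the DAO-side skipping I have in mind. This is only an illustration, assuming a Spring `JdbcTemplate`; the proc name, table columns, and return type are placeholders, not our real code:

```java
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.ResultSetExtractor;

public class OrderDao {

    private JdbcTemplate jdbcTemplate;

    public void setJdbcTemplate(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    /** Returns one page of rows by skipping (pageNumber - 1) * pageSize rows on the client side. */
    public List getPage(final int pageNumber, final int pageSize) {
        return (List) jdbcTemplate.query(
            "EXEC usp_GetAllOrders",          // hypothetical proc; still returns the full result set
            new ResultSetExtractor() {
                public Object extractData(ResultSet rs) throws SQLException {
                    int skip = (pageNumber - 1) * pageSize;
                    // Discard the rows belonging to earlier pages. Depending on the driver's
                    // fetch behaviour, these rows may still be produced and streamed.
                    while (skip-- > 0 && rs.next()) {
                        // skipped row
                    }
                    List page = new ArrayList(pageSize);
                    int read = 0;
                    while (read++ < pageSize && rs.next()) {
                        page.add(rs.getString("order_ref")); // placeholder column
                    }
                    return page;
                }
            });
    }
}
```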
My question is this:
How much of a performance gain (in terms of DB CPU and I/O) will this change give compared with the current state, where all rows are returned?
I know that only a page's worth of data will go over the wire between the DB and the application, but I am interested in what happens inside the DBMS. Assuming the query plan is already cached, will the DBMS skip the processing of the first 40 pages of results if I only want page 41?
I guess I'm looking to know whether we still incur much of a cost even though the cursor skips the first x pages of the result set.
If you have a B-tree (index, clustered or non-clustered) then the only way to go to page X is to know a key on that page and seek straight to it. Every other means of 'skipping' the first X-1 pages has to go through all the pages from 1 to X and skip each record individually. A narrow index on the 'paged' field can help with the counting, because its high-density slots (hence the narrow index) reduce the number of pages that have to be scanned to find the row that starts page X.
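For example, if you remember the last key of the previous page, the seek approach looks something like this. It's only a sketch against your stack, assuming an integer clustered key; the table, column names, and page size of 20 are placeholders (TOP takes a literal here because SQL Server 2000 won't accept a variable with TOP outside dynamic SQL; inside a proc you could use SET ROWCOUNT instead):

```java
import java.util.List;

import org.springframework.jdbc.core.JdbcTemplate;

public class OrderPager {

    private JdbcTemplate jdbcTemplate;

    public void setJdbcTemplate(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    /** Fetches the page that starts just after lastSeenKey by seeking on the key. */
    public List nextPage(int lastSeenKey) {
        return jdbcTemplate.queryForList(
            "SELECT TOP 20 order_id, order_ref " +
            "FROM dbo.Orders " +
            "WHERE order_id > ? " +          // seek straight past the previous page
            "ORDER BY order_id",
            new Object[] { new Integer(lastSeenKey) });
    }
}
```

With this shape the engine does an index seek to the first row of the page and reads only a page's worth of rows, instead of producing and discarding the rows of the first X-1 pages.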