Tags: orm, subsonic, subsonic2.2, openaccess, telerik-open-access

Telerik OpenAccess vs. SubSonic in simple speed test (not a "which is better")


I have been using SubSonic 2 for ~5 years now and have loved it. However, for the past six months I've been toying with the idea of moving either to SubSonic 3 or to a similar ORM tool. Since my company uses plenty of Telerik's tools, I thought I'd try OpenAccess. After getting it configured, I figured I'd start with an extremely basic task: loading a RadGrid with data from our Users table (~30 records).

So, within the Grid's OnNeedDataSource event I have the following:

var start = System.Environment.TickCount;
var context = new EntitiesModel();
rgUsers.DataSource = (from u in context.Users select u);
var stop = System.Environment.TickCount;
var elapsed = stop - start;
litTelerik.Text = string.Format("This process took <b>{0}</b> milliseconds", elapsed);

After building that and running the page, it reports that the operation took 1607 ms. However, after refreshing the page it comes back as 0 ms. (Why?)
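
For what it's worth, Environment.TickCount typically has only ~10-16 ms resolution. A Stopwatch-based version of the same measurement (just a sketch, reusing the same EntitiesModel and rgUsers from above) would look like this:

var sw = System.Diagnostics.Stopwatch.StartNew();
var context = new EntitiesModel();
rgUsers.DataSource = (from u in context.Users select u);
sw.Stop();
litTelerik.Text = string.Format("This process took <b>{0}</b> milliseconds", sw.ElapsedMilliseconds);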

I then put in the SubSonic code:

var start = System.Environment.TickCount;
rgUsers.DataSource = new UserCollection().Load();
var stop = System.Environment.TickCount;
var elapsed = stop - start;
litTelerik.Text = string.Format("This process took <b>{0}</b> milliseconds", elapsed);

I ran the code for the first time and it reported 171 ms. After refreshing the page, it reports 60-70 ms.

So, my question is: Why does OpenAccess take considerably longer to load on the first visit but report zero milliseconds on each page refresh, whereas SubSonic is considerably faster on the first visit but takes ~65 ms on each refresh?

I apologize if this is a "basic" question or if I'm not testing performance adequately. If there's any way to improve this method, I'd greatly appreciate any advice.

Thanks, Andrew


Solution

  • OpenAccess has an internal Database object that is created the first time you create an OpenAccessContext. It basically calculates all of the defaults, creates caches, initializes other infrastructure objects, etc. As soon as it is created, it is stored in an internal static dictionary (with the connection ID as the key).

    Every other context created after that would use that internal object and wouldn't have the overhead at all. That said, 1600 ms is a bit high; you might consider changing the mapping type (XML is optimal, performance-wise).

    An optimization would be to make sure that the model is initialized in the application start handler (Application_Start in Global.asax). The following code should do the trick.

    void Application_Start(object sender, EventArgs e)
    {
        // Touching the metadata forces OpenAccess to build its internal Database
        // object at startup instead of on the first page request.
        var modelInfo = new EntitiesModel().Metadata;
    }
    

    EDIT: As a follow-up, the 0 ms you see is not actually the query execution time. The LINQ query returns an IQueryable that is executed later, when the grid actually binds. You have to call ToList() inside the timed block to get real data (and a real measurement).
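
    A rough sketch of a fairer measurement (reusing the EntitiesModel and rgUsers names from the question; the ToList() call forces the query to execute inside the timed block):

    var start = System.Environment.TickCount;
    var context = new EntitiesModel();
    // ToList() executes the query immediately, so the elapsed time includes
    // the actual database round trip instead of just building the IQueryable.
    rgUsers.DataSource = (from u in context.Users select u).ToList();
    var stop = System.Environment.TickCount;
    litTelerik.Text = string.Format("This process took <b>{0}</b> milliseconds", stop - start);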