I have a static library written in C, with no dynamic memory allocation.
Until now, the library has only been used in an application for regular i386 Linux, where CPU and memory were plentiful.
I now need to try building a version of the library for an embedded, real-time ARM9 system (provided by a third party). Before that, I have to give some rough estimates of memory footprint and CPU usage.
For memory footprint, I built a tiny application on my i386 machine, statically linked against my library, that exercises all of the library's functions. Is it roughly correct that checking this application's resident memory will give me a ballpark estimate of my library's memory footprint? Is there a better way to measure it?
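Roughly, this is the check I have in mind (a minimal sketch; the function that exercises the library is a placeholder for my real test code):

```c
/* Sketch: print the process's resident set size before and after
   exercising the library; the difference approximates its footprint.
   exercise_all_library_functions() is a placeholder, not real code. */
#include <stdio.h>
#include <string.h>

static void print_vm_rss(void)
{
    char line[256];
    FILE *f = fopen("/proc/self/status", "r");
    if (!f)
        return;
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "VmRSS:", 6) == 0) {
            fputs(line, stdout);          /* e.g. "VmRSS:    1234 kB" */
            break;
        }
    }
    fclose(f);
}

int main(void)
{
    print_vm_rss();                       /* baseline */
    /* exercise_all_library_functions(); */
    print_vm_rss();                       /* after: delta ~= library usage */
    return 0;
}
```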
For estimating CPU usage, I'm at a loss. I can of course run the test application mentioned above on my i386 system, but I don't know what metrics that would give me (if any) that could translate into something relatable to the ARM system. Is there a way to do it?
Your memory estimate sounds pretty good to me, as long as you compile it for ARM9. In fact, if you cross-compile the library without debug info, and you expect the final application to use all of the library's functions, then the file size of the library is a pretty good ballpark estimate. The only case where that breaks down is if you have a lot of zero-initialized global (or static) variables: those end up in the .bss section, which takes almost no space in the file but still costs RAM at run time. Run-time memory allocation is a different matter, of course, but you've accounted for that already.
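To make the .bss caveat concrete, here's a tiny illustration (the names are made up); your cross toolchain's `size` tool reports the text, data, and bss sections separately, which is an easy way to check:

```c
/* Illustration of why file size can under-count RAM (made-up names). */
#include <stdint.h>

static uint8_t scratch[64 * 1024];         /* zero-initialized: goes in .bss,
                                              costs 64 KiB of RAM but almost
                                              nothing in file size */
static const uint8_t table[1024] = { 1 };  /* initialized: stored in the
                                              file, so counted by file size */

const uint8_t *lib_table(void)   { return table;   }
uint8_t       *lib_scratch(void) { return scratch; }
```

Running `size` on the compiled object shows the text/data/bss split, so you can add bss back into the estimate by hand.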
Size estimates based on x86 code may land in the same ballpark, but really shouldn't be trusted. Sizes also vary from compiler to compiler, so try to match the target compiler if you can; still, any recent ARM compiler is fine for a rough estimate.
As for CPU estimates, that's impossible to put a figure on without measuring. It's a function of the architectural efficiency of the CPU, the effectiveness of compiler optimizations, clock rate, memory speed, bus speed, cache size, cache pressure caused by other running tasks, and so on. There are just too many variables.
One thing you might be able to do is use big-O notation to characterize how each algorithm's cost grows with its input size; that part carries over between architectures even though the constants don't. You can back it up with a quick scaling measurement, as in the sketch below.
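A crude timing harness like this (where `lib_process` is a placeholder for one of your real entry points) shows the shape of the cost curve on your i386 box, even though the absolute numbers won't carry over to the ARM9:

```c
/* Sketch: time one library function at doubling input sizes to see how
   its cost scales. lib_process() is a placeholder, not a real API. */
#include <stdio.h>
#include <time.h>

extern void lib_process(const unsigned char *buf, size_t len); /* placeholder */

static double seconds_for(const unsigned char *buf, size_t n)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    lib_process(buf, n);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (double)(t1.tv_sec - t0.tv_sec)
         + (double)(t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    static unsigned char buf[1 << 20];     /* 1 MiB of zeroed test input */
    for (size_t n = 1 << 14; n <= sizeof buf; n <<= 1)
        printf("n = %7zu  ->  %.6f s\n", n, seconds_for(buf, n));
    return 0;
}
```

If the time doubles when n doubles, the work is roughly linear; if it quadruples, it's roughly quadratic, and you can say so in your estimate.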
I'd probably just say "light" or "heavy". You probably have an idea which of those fits.