c, linker, continuous-integration, embedded

How can I install and use compilers for embedded C on an external server?


Short Question
Is there an accepted way to run compilers/linkers for embedded software projects on a remote server and still be able to program and debug the software on a local machine?

Note: I know every IDE will be different, so what I am after is how to define a workflow to accomplish this task, assuming that the IDE can be run using the .o/.elf files built on the remote server.

Areas of Concern
1) Networking to a virtual Windows machine.
2) How / when to transfer the source code to the server to build.

Background
Each family of microprocessor that our software team works with requires its own compiler, IDE, and programmer. Over time, this creates many difficulties to overcome.

1) Each developer requires their own, often pricey, license.
2) To pick up a project that another developer started requires extra care to make sure all of the compiler settings are the same.
3) Supporting legacy software may require an old compiler that conflicts with the currently installed one.
... the list goes on and on.

Edit: 7-10-2011 1:30 PM CST
1) The compilers I am speaking of are indeed cross compilers.
2) A short list of processor families this system ideally would support: Motorola ColdFire, PIC, and STM8.
3) Our ColdFire compiler is a variant of GCC, but we have to support multiple versions of it. All other targets use a target-specific compiler that does not offer a floating license.
4) To address littleadv, what I would like to accomplish is an external build server.
5) We currently use a combination of SVN and Git hosted in an online repository for version control. This is in fact how I thought I would transfer files to the build server.
6) We are stuck with Windows for most of the compilers.

I now believe that the direction to go is an external build server. There are a few obstacles to overcome yet. I will assume that we will have to transfer the source files to the server via version control software. Seeing how multiple product lines require access to the same compilers, having an instance for each project does not seem practical.

Would it make sense to create a repository for each compiler that would include folders for build, source, include, output, etc., then have scripts on the users' end that take care of moving files from the IDE's file structure to the required structure for the compiler? This approach would keep the project repository from being thrashed and give a sense of how many times a compiler has been used. Thanks for all of the great responses so far!
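
To make the idea concrete, here is a rough sketch of the kind of helper script I have in mind. It is not a working tool; all paths and folder names below are made up for illustration:

    # Rough sketch only -- copies files from the IDE's project layout
    # into the layout a standalone compiler checkout expects.
    # Every path and folder name here is hypothetical.
    require 'fileutils'

    IDE_PROJECT   = 'C:/Work/MyProject'         # hypothetical IDE project root
    COMPILER_TREE = 'C:/Compilers/ColdfireGCC'  # hypothetical compiler checkout

    { 'src' => 'source', 'inc' => 'include' }.each do |ide_dir, build_dir|
      dest = File.join(COMPILER_TREE, build_dir)
      FileUtils.mkdir_p(dest)
      Dir.glob(File.join(IDE_PROJECT, ide_dir, '*.{c,h}')).each do |file|
        FileUtils.cp(file, dest)  # refresh the compiler-side copy
      end
    end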


Solution

  • In my opinion, implementing an automated build server would be the cleanest solution to what you're trying to achieve, and it comes with an additional benefit: continuous integration! (I'll touch on CI a bit later.)

    There are plenty of tools out there to use. @Clifford has already mentioned CMake for scripting the builds themselves; on the server side we use TeamCity, and there are several comparable build/CI servers to choose from.

    So first of all I'll try to explain what we do and suggest how this might work for you. I don't claim this is the accepted way to do things, but it has worked for us. As I mentioned, we use TeamCity for our build server. Each software project is added into TeamCity and build configurations are set up. The build configurations tell TeamCity when to build, how to build, and where your project's SCM repository is. We use two different build configurations for each project: one we call "integration", which monitors the project's SCM repository and triggers an incremental build when a check-in is detected; the other we call "nightly", which triggers at a set time every night and performs a completely clean build.

    Incidentally, a quick note regarding SCM: for this to work most cleanly, I think the SCM for each project should be used in a stable trunk topology. If your developers all work from their own branches, you'd probably need separate build configurations for each developer, which I think would get unnecessarily messy. We've set up our build server with its own SCM user account, but with read-only access.

    So when a build is triggered for a particular build configuration, the server grabs the latest files from the repository and sends them to a "build agent", which executes the build using a build script. We've used Rake to script our builds and automated testing, but you can use whatever. The build agent can be on the same PC as the server, but in our case we have a separate PC, because our build server is centrally located with the ICT department whereas we need our build agent to be physically located with my team (for automated on-target testing). So the toolchains that you use are installed on your build agent.
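
    To give a flavour of what such a script can look like, here's a minimal Rakefile sketch. It is not our actual script, and "cc" stands in for whichever cross compiler the project needs:

        # Minimal Rakefile sketch -- not a real project's script.
        # 'cc' stands in for whichever cross compiler you need.
        SOURCES = FileList['source/*.c']
        OBJECTS = SOURCES.ext('.o')

        rule '.o' => '.c' do |t|
          sh "cc -Iinclude -c #{t.source} -o #{t.name}"
        end

        task :build => OBJECTS do
          mkdir_p 'output'
          sh "cc #{OBJECTS.join(' ')} -o output/project.elf"
        end

        namespace :incremental do
          # File rules rebuild only what changed, so an incremental
          # build is just the plain build task.
          task :build do
            Rake::Task['build'].invoke
          end
        end

        namespace :nightly do
          # Delete every artefact first so the nightly build starts clean.
          task :build do
            rm_f OBJECTS
            rm_f 'output/project.elf'
            Rake::Task['build'].invoke
          end
        end

    With that in place, rake incremental:build and rake nightly:build give the build server one obvious entry point per build configuration.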

    How could this work for you?

    Let's say you work for TidyDog and you have two projects on the go:

    1. "PoopScoop" is based on a PIC18F target compiled using the C18 compiler has its trunk located in your SCM at //PoopScoop/TRUNK/
    2. "PoopBag" is based on a ColdFire target compiled with GCC has its trunk located at //PoopBag/TRUNK/

    The compilers that you need in order to build all projects are installed on your build agent (we'll call it TidyDogBuilder). Whether that's the same PC that's running the build server or a separate box depends on your situation. Each project has its own build script (e.g. //PoopScoop/Rakefile.rb and //PoopBag/Rakefile.rb) which handles source file dependencies and invocation of the appropriate compilers. You could, for example, go to //PoopScoop/ at a command prompt, enter rake, and the build script would take care of compiling the PoopScoop project within the command prompt.
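
    For instance, //PoopScoop/Rakefile.rb might boil down to something like this; the mcc18 command line and the part number are only illustrative, so check the C18 documentation for the real options:

        # Sketch of a per-project build script; the mcc18 options and
        # device number are illustrative, not a verified C18 command line.
        PIC_SOURCES = FileList['Source/*.c']

        task :default => :build

        task :build do
          PIC_SOURCES.each do |src|
            # Compile each module with Microchip's command-line C18 compiler.
            sh "mcc18 -p=18F4520 -I=Include #{src}"
          end
        end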

    You then have your build configurations set up on the build server. A build configuration for PoopScoop, for example, would specify what SCM tool you're using and the repository location (e.g. //PoopScoop/TRUNK/), which build agent to use (e.g. TidyDogBuilder), where to find the appropriate build script and any necessary command to use (e.g. //PoopScoop/Rakefile.rb invoked with rake incremental:build), and what event triggers a build (e.g. detection of a check-in to //PoopScoop/TRUNK/). So the idea is that if someone submits a change to //PoopScoop/TRUNK/Source/Scooper.c, the build server detects the change, grabs the latest revisions of the source files from the repository, sends them to the build agent to be compiled using the build script, and finally emails every developer that has a change in the build with the build result.
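
    Pulled out of the prose, that configuration amounts to the following; the field names are illustrative, not actual TeamCity syntax (these values are entered through the TeamCity UI):

        # Illustrative summary only -- not actual TeamCity syntax:
        #   VCS root:     //PoopScoop/TRUNK/  (read-only build user)
        #   Build agent:  TidyDogBuilder
        #   Build step:   rake incremental:build  (via //PoopScoop/Rakefile.rb)
        #   Trigger:      a check-in detected on //PoopScoop/TRUNK/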

    If your projects need to be compiled for multiple targets, you would just modify the project's build script to handle this (e.g. you might have commands like rake build:PIC18 or rake build:Coldfire) and set up a separate build configuration on the build server for each target.
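
    In Rake terms that could be as simple as a pair of namespaced tasks; the compiler commands, options, and part numbers below are placeholders for your real toolchains:

        # Per-target build tasks -- compiler names, options, and part
        # numbers are placeholders, not verified command lines.
        namespace :build do
          task :PIC18 do
            FileList['Source/*.c'].each do |src|
              sh "mcc18 -p=18F4520 -I=Include #{src}"
            end
          end

          task :Coldfire do
            FileList['Source/*.c'].each do |src|
              sh "m68k-elf-gcc -mcpu=5208 -IInclude -c #{src}"
            end
          end
        end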

    Continuous Integration

    So with this system you get continuous integration up and running. Modify your build scripts to run unit tests as well as compile your project, and you can have your unit testing performed automatically after every change. The motive for this is to pick up problems as early as possible, while you're developing, rather than being surprised during verification activities.
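
    As an illustration, the hook can be one extra task chained onto the build; run_unit_tests here is a stand-in for whatever harness you actually use:

        # Sketch only: 'run_unit_tests' is a stand-in for your real
        # harness (a host-compiled test binary, an on-target runner, etc.).
        task :test => :build do
          sh 'run_unit_tests'
        end

        # Point the build server at 'rake ci' so every check-in
        # compiles the project and runs the tests.
        task :ci => :test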

    Closing Thoughts