We're often working on a project where we've been handed a large data set (say, a handful of files that are 1GB each), and are writing code to analyze it.
All of the analysis code is in Git, so everybody can check changes in and out of our central repository. But what should we do with the data sets the code works with?
I want the data to be versioned and shared alongside the code. However, I don't want the data inside the Git repository itself.
It seems that I need a setup with a main repository for code and an auxiliary repository for data. Any suggestions or tricks for gracefully implementing this, either within git or in POSIX at large? Everything I've thought of is in one way or another a kludge.
Use Git submodules to isolate your giant files from your source code. More on that here:
http://git-scm.com/book/en/v2/Git-Tools-Submodules
The examples talk about libraries, but this works for large bloated things like data samples for testing, images, movies, etc.
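As a rough sketch (the repository URLs and the "data" path here are made up), the code repository references a separate data repository as a submodule:

    # inside the code repository; the data repo URL is an assumption
    git submodule add ssh://git.example.com/team/big-data.git data
    git commit -m "Add the data set as a submodule"

    # collaborators get code and data in one go
    git clone --recurse-submodules ssh://git.example.com/team/analysis-code.git

    # later, pull in a newer snapshot of the data only when you need it
    git submodule update --remote data

Day-to-day commits in the code repository stay small; you only pay the cost of moving the big files when you explicitly add or update the data submodule.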
You should be able to fly while developing, only pausing here and there when you need to pull in a new version of the giant data. Sometimes it isn't even worthwhile to track changes to such things.
To address your issues with getting more clones of the data: if Git can use hard links on your filesystem, making extra local clones should be a breeze.
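For instance (the paths are made up), cloning from a local path on the same filesystem lets Git hard-link the object files instead of copying them:

    # a second working copy that shares object storage with the first;
    # Git hard-links files under .git/objects for same-filesystem clones
    git clone /srv/data/big-data.git ~/scratch/big-data-copy

    # sanity check: the link count (second column) on a pack file
    # should be greater than 1
    ls -l ~/scratch/big-data-copy/.git/objects/pack/

So a handful of extra working copies of the data costs very little additional disk space.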
The nature of your giant data set also matters. When you change some of it, are you rewriting giant blobs or just a few rows out of millions? That determines how effective a VCS will be as a change-tracking and notification mechanism for it.
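If the data happens to be line-oriented text such as CSV (a big assumption; binary formats won't benefit), one trick is to split it into smaller chunks so that editing a few rows only rewrites one chunk, for example:

    # GNU split shown; flag support varies across platforms.
    # The file name and chunk size are just an illustration.
    split -d -l 1000000 measurements.csv measurements.part-
    git add measurements.part-*

Git then stores and transfers new blobs only for the chunks that actually changed.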
Hope this helps.