In books (e.g. "Barcodes for Mobile Devices", ISBN 978-0-521-88839-4), papers (e.g. "Bar Codes May Have Poorer Error Rates Than Commonly Believed", DOI: 10.1373/clinchem.2010.153288) and websites, information about the accuracy or error rates of barcodes is given.
The figures quoted for e.g. Code39 vary from 1 error in 1.7 million, through 1 error in 3 million, to 1 error in 4.5 million.
Where do these numbers come from, and how can one calculate them (e.g. for Code39)?
I also couldn't find any useful information in the definition of Code39 in ISO/IEC 16388:2007.
The "error rate" these numbers describe is the read error rate, i.e. how often a barcode may be read incorrectly when scanned. In order for barcodes to be useful this needs to be a very low value and so barcode formats that have lower read error rates are potentially better (although there are other factors involved as well).
These numbers are presumably determined by empirical testing. On the website you linked to there is a further link to a study by Ohio University that describes the methodology used, which is an example of how this can be done:
An automated test apparatus was constructed and used for the test. The apparatus included a robot which loaded carrier sheets onto oscillating stages that were moved under four fixed mounted, “hand held” moving beam, visible laser diode bar code scanners. Scanner output was a series of digital pulses. Decoding of all symbols was performed in a computer using software programs based on standard reference decode algorithms. Each symbol was scanned by each scanner until 283 decodes were obtained. [...] An error occurred and was recorded whenever the decoded data did not match the encoded data for a given symbol.
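Based on that description, the error rate is estimated empirically: count how many decodes produced data that did not match the encoded data, and divide by the total number of decodes. Below is a minimal sketch of that calculation with hypothetical counts; only the 283 decodes per symbol comes from the quote, while the symbol count and error count are made up purely for illustration:

```python
def observed_error_rate(errors: int, decodes: int) -> float:
    """Point estimate: misreads per decode."""
    return errors / decodes

def rule_of_three_upper_bound(decodes: int) -> float:
    """If zero errors are observed in `decodes` scans, a ~95% upper bound
    on the true error rate is approximately 3 / decodes."""
    return 3.0 / decodes

# Hypothetical figures for illustration -- not taken from the study:
symbols = 1000             # number of symbols on the carrier sheets
decodes_per_symbol = 283   # each symbol scanned until 283 decodes (per the quote)
errors = 2                 # decodes whose data did not match the encoded data

total_decodes = symbols * decodes_per_symbol
rate = observed_error_rate(errors, total_decodes)
print(f"Observed error rate: 1 in {1/rate:,.0f} decodes ({rate:.2e})")
print(f"Zero-error 95% upper bound: ~1 in {1/rule_of_three_upper_bound(total_decodes):,.0f} decodes")
```

Because the true error rates are so low, a very large number of decodes is needed before more than a handful of errors is observed at all, which is one plausible reason the published figures vary as widely as they do.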