Before reading, ask yourself "Can my opinions be changed by data?".  If the answer is "no", then reading this paper probably isn't an effective use of your time.

Attached is a draft paper that measures the binary size cost of various error handling techniques (apologies for the half-megabyte attachment; the fancy graph JavaScript is kind of big).  Exceptions, return codes, abort, and many other strategies are covered.

If the attachment doesn't make it through for whatever reason, the HTML is also available on GitHub: https://github.com/ben-craig/error_bench/blob/master/error_size_benchmarking.html

I plan on putting this in the pre-Cologne mailing.

Let me know if I have misrepresented any error handling technique, or if my methodology is flawed in any way.