Thu, 23 Aug 2012
Correctness in Computer Programs and Mathematical Proofs
While reading On Proof and Progress in Mathematics by Fields Medal winner Bill Thurston (who, I was sorry to hear, recently passed away), I came across this gem:
The standard of correctness and completeness necessary to get a computer program to work at all is a couple of orders of magnitude higher than the mathematical community’s standard of valid proofs. Nonetheless, large computer programs, even when they have been very carefully written and very carefully tested, always seem to have bugs.
I have noticed that mathematicians are often sloppy about the scope of their symbols. Sometimes they use the same symbol for two different meanings, and you have to guess from context which one is meant.
This kind of sloppiness generally doesn't have an impact on the validity of the ideas that are communicated, as long as it's still understandable to the reader.
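To make the contrast concrete, here is a small, made-up Python sketch (the function names and values are purely illustrative, not from any real program): the one-letter name n is reused for two unrelated quantities, but that is harmless in code only because the language's scoping rules keep the two uses apart, whereas a mathematical text leaves that disambiguation entirely to the reader.

    # Illustration only: "n" is used for two different concepts,
    # but each use is confined to its own function scope.

    def mean(values):
        n = len(values)              # here "n" is the number of samples
        return sum(values) / n

    def running_totals(values):
        n = 0                        # here "n" is a running sum -- a different meaning,
        totals = []                  # kept separate by scoping rules, not by context
        for v in values:
            n += v
            totals.append(n)
        return totals

    print(mean([1, 2, 3, 4]))            # 2.5
    print(running_totals([1, 2, 3, 4]))  # [1, 3, 6, 10]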
I guess one reason is that most mathematical publications still stick to one-letter symbol names, and there aren't that many letters in the alphabets that are generally accepted for use (Latin, Greek, a few letters from Hebrew). And in the programming world we snort derisively at FORTRAN 77, which limited variable names to a length of six characters.