This page is a sub-page of our page on What is Mathematics?
//////// Quoting from E.T. Jaynes, Probability Theory – The Logic of Science, Cambridge University Press, 2009 (2003), pp. 675-676:
A few years ago the writer attended a seminar talk by a young mathematician who had just received his Ph.D. degree and, we understood, had a marvellous new limit theorem of probability theory. He started to define the sets he proposed to use, but three blackboards were not enough for them, and he never got through the list. At the end of the hour, having to give up the room, we walked out in puzzlement, not knowing even the statement of his theorem.
A ‘19th century mathematician’ like Poincaré would have been into the meat of the calculation within a few minutes and would have completed the proof and pointed out its consequences in time for discussion.
The young man is not to be blamed; he was only doing what he had been taught a ‘20th century mathematician’ must do. Although he has perhaps now learned to plan his talks a little better, he is surely still wasting much of his own time and that of others in reciting all the preliminary incantations that are demanded in 20th century mathematics before one is allowed to proceed to the actual problem. He is a victim of what we consider to be, not a higher standard of rigor, but studied mathematical discourtesy.
Nowadays, if you introduce a variable x without repeating the incantation that it is in some set or ‘space’ X, you are accused of dealing with an undefined problem. If you differentiate a function f(x) without first having stated that it is differentiable, you are accused of lack of rigor. If you note that your function f(x) has some special property natural to the application, you are accused of lack of generality. In other words, every statement you make will receive the discourteous interpretation.
Obviously, mathematical results cannot be communicated without some decent standards of precision in our statements. But a fanatical insistence on one particular form of precision and generality can be carried so far that it defeats its own purpose; 20th century mathematics often degenerates into an idle adversary game instead of a communication process.
The fanatic is not trying to understand your substantive message at all, but only trying to find fault with your style of presentation. He will strive to read nonsense into what you are saying, if he can possibly find a way of doing so. In self-defense, writers are obliged to concentrate their attention on every tiny, irrelevant, nitpicking detail of how things are said rather than on what is said. The length grows; the content shrinks.
Mathematical communication would be much more efficient and pleasant if we adopted a different attitude. For one who makes the courteous interpretation of what others write, the fact that x is introduced as a variable already implies that there is some set X of possible values. Why should it be necessary to repeat that incantation every time a variable is introduced, thus using up two symbols where one would do? (Indeed, the range of values is usually indicated more clearly at the point where it matters, by adding conditions such as (0 < x < 1) after an equation.)
For a courteous reader, the fact that a writer differentiates f(x) twice already implies that he considers it twice differentiable; why should he be required to say everything twice? If he proves proposition A in enough generality to cover his application, why should he be obliged to use additional space for irrelevancies about the most general possible conditions under which A would be true?
A source as annoying as the fanatic is his cousin, the compulsive mathematical nitpicker. We expect that an author will define his technical terms, and then use them in a way consistent with his definitions. But if any other author has ever used the term with a slightly different shade of meaning, the nitpicker will be right there accusing him of inconsistent terminology. The writer has been subjected to this many times; and colleagues report the same experience.
Nineteenth century mathematicians were not being non-rigorous by their style; they merely, as a matter of course, extended simple civilized courtesy to others, and expected to receive it in return. This courtesy leads one to try to read sense into what others write, if it can possibly be done in view of the whole context, and not to pervert one's reading of every mathematical work into a witch-hunt for deviations from the Official Style.
Therefore, sympathizing with the young man’s plight but not intending to be enslaved like him, we issue the following Emancipation Proclamation:
Every variable x that we introduce is understood to have some set X of possible values. Every function f(x) that we introduce is supposed to be sufficiently well-behaved so that what we do with it makes sense. We undertake to make every proof general enough to cover the application we make of it. It is an assigned homework problem for the reader who is interested in the question to find the most general conditions under which the result would hold.
We could convert many 19th century mathematical works to 20th century standards by making a rubber stamp containing the Proclamation, with perhaps another sentence using the terms sigma-algebra, Borel field, Radon-Nikodym derivative, and stamping it on the first page.
Modern writers could shorten their works substantially, with improved readability and no decrease in content, by including such a Proclamation in the copyright message, and writing thereafter in the 19th century style. Perhaps some publishers, seeing these words, may demand that they do this for economic reasons; it would be a service to science.
/////// End of Quote from E.T. Jaynes