Partial Differential Equations

This page is a sub-page of our page on Calculus of Several Real Variables.

///////

Related KMR-pages:

Differential Equations = Ordinary Differential Equations

///////

Books:

Introduction to Partial Differential Equations – from Fourier Series to Boundary-value Problems, by Arne Broman, Dover Publications Inc., 1989 (1970)
Partial Differential Equations – An Introduction, by David Colton, Dover Publications Inc., 1988

///////

Other relevant sources of information:

The superposition principle
Orthogonal coordinates
Curvilinear coordinates
Coordinate system

///////

List of anchors into the text below:

What is a Partial Differential Equation?
But what is a Partial Differential Equation?
Brief History of PDEs
Elliptic PDEs
Hyperbolic PDEs
Parabolic PDEs
Separation of Variables

///////

What is a Partial Differential Equation?

Partial differential equation at Scholarpedia
Partial differential equation at Wikipedia
Partial Differential Equation at Wolfram MathWorld
Partial Differential Equation at Britannica.com

/////// Quoting Wikipedia (Partial differential equation):

In mathematics, a partial differential equation (PDE) is an equation which imposes relations between the various partial derivatives of a multivariable function.

The function is often thought of as an “unknown” to be solved for, similarly to how \, x \, is thought of as an unknown number, to be solved for, in an algebraic equation like \, x^2 - 3x + 2 = 0 \,. However, it is usually impossible to write down explicit formulas for solutions of partial differential equations. There is, correspondingly, a vast amount of modern mathematical and scientific research on methods to numerically approximate solutions of certain partial differential equations using computers.

Partial differential equations also occupy a large sector of pure mathematical research, in which the usual questions are, broadly speaking, on the identification of general qualitative features of solutions of various partial differential equations.

Partial differential equations are ubiquitous in mathematically-oriented scientific fields, such as physics and engineering. For instance, they are foundational in the modern scientific understanding of sound, heat, diffusion, electrostatics, electrodynamics, fluid dynamics, elasticity, general relativity, and quantum mechanics. They also arise from many purely mathematical considerations, such as differential geometry and the calculus of variations; among other notable applications, they are the fundamental tool in the proof of the Poincaré conjecture from geometric topology.

Partly due to this variety of sources, there is a wide spectrum of different types of partial differential equations, and methods have been developed for dealing with many of the individual equations which arise. As such, it is usually acknowledged that there is no “general theory” of partial differential equations, with specialist knowledge being somewhat divided between several essentially distinct subfields.[1]

Ordinary differential equations form a subclass of partial differential equations, corresponding to functions of a single variable. Stochastic partial differential equations and nonlocal equations are, as of 2020, particularly widely studied extensions of the “PDE” notion. More classical topics, on which there is still much active research, include elliptic and parabolic partial differential equations, fluid mechanics, Boltzmann equations, and dispersive partial differential equations.

/////// End of Quote from Wikipedia

But what is a Partial Differential Equation? (Steven Strogatz on YouTube):

Brief History of Partial Differential Equations

/////// Quoting Colton, p. 49:

Mathematicians did not spontaneously decide to create the theory of partial differential equations, but rather were initially led to study certain particular equations arising in the mathematical formulation of specific physical phenomena. The first significant progress in solving partial differential equations occurred in the middle of the eighteenth century when Euler (1707 – 1783) and d’Alembert (1717 – 1783) investigated the wave equation

\, \dfrac{ {\partial}^2 u }{ \partial x^2} = \dfrac{1}{c^2} \dfrac{ {\partial}^2 u }{ \partial t^2} .

Both were led to the solution

\, u(x,t) = f(x+ct) + g(x-ct) \, ,

where f \, and g \, are “arbitrary” functions, and a debate continued to rage until the 1770s on what “arbitrary” meant. Euler also took up the problem of the vibrations of rectangular and circular drums governed by the two-dimensional wave equation

\, \dfrac{ {\partial}^2 u }{ \partial x^2} + \dfrac{ {\partial}^2 u }{ \partial y^2} = \dfrac{1}{c^2} \dfrac{ {\partial}^2 u }{ \partial t^2} ,

and obtained various special solutions by what is now known as the method of separation of variables. Finally, in a series of definitive papers on the propagation of sound, Euler obtained cylindrical and spherical wave solutions of the wave equation in two and three variables. Progress, however, was limited by the lack of knowledge of Fourier series and of the behavior of the special functions arising from the application of the method of separation of variables.

Research into the theory of gravitational attraction led to the formulation of Laplace’s equation

\, \dfrac{ {\partial}^2 u }{ \partial x^2} + \dfrac{ {\partial}^2 u }{ \partial y^2} + \dfrac{ {\partial}^2 u }{ \partial z^2} = 0 ,

for the potential function \, u(x,y,z) . The first significant work on potential theory was done by Legendre (1752 – 1833) in his 1782 study of the gravitational attraction of spheroids, in which he introduced what are now known as Legendre polynomials. This work was continued by Laplace (1749 – 1827) in 1785 (although Laplace never mentioned Legendre!). In a series of papers continuing through the 1780s, Legendre and Laplace continued their investigations of potential theory and the use of Legendre polynomials, associated Legendre polynomials, [cylindrical harmonics] and spherical harmonics, laying the foundation for the vast work in the nineteenth century on the theory of harmonic functions. However, no general method for solving Laplace’s equation was developed in the eighteenth century, nor were the full potentialities of the use of special functions appreciated.

The study of partial differential equations experienced a phenomenal growth in the nineteenth century. This growth not only illuminated new areas of physics, but created the need for mathematical developments in such diverse areas as analytic function theory, the calculus of variations, ordinary differential equations, and differential geometry. In this brief history, we can only highlight a few of the developments that are relevant to the material covered in this book.

The first major step was taken by Fourier (1768 – 1830) in 1807 when he submitted a paper on heat conduction and trigonometric series to the Academy of Sciences of Paris. His paper was rejected; however, when the Academy made the subject of heat conduction the topic of a grand prize in 1812, Fourier submitted a revised copy and this time he won the prize. He continued to work in the area and in 1822 published his classic Théorie analytique de la chaleur in which, following his paper of 1807, he derived the equation of heat conduction

\, \dfrac{ {\partial}^2 u }{ \partial x^2} = \dfrac{1}{{\alpha}^2} \dfrac{ \partial u }{ \partial t} ,

and solved specific heat conduction problems by what is now known as the method of separation of variables and Fourier series. All of Fourier’s work was purely formal, and the convergence properties of Fourier series were left unexamined until later in the century. Later, Poisson (1781 – 1840) made use of Legendre polynomials and spherical harmonics in addition to trigonometric series to study multi-dimensional problems.

At the same time as Fourier series were being developed, Fourier, Cauchy (1789 – 1857), and Poisson discovered what is now called the Fourier integral and applied it to various problems in heat conduction and water waves. Because all three presented papers orally to the Academy of Sciences and published their results only later, it is not possible to assign priority to the discovery of Fourier integrals and transforms.

Mathematicians of the nineteenth century vigorously investigated problems associated with Laplace’s equation, continuing the research initiated by Legendre and Laplace. In a paper written in 1813, Poisson showed that the gravitational potential \, u \, of a body with density \, \rho(x,y,z) \, satisfies

\, \dfrac{ {\partial}^2 u }{ \partial x^2} + \dfrac{ {\partial}^2 u }{ \partial y^2} + \dfrac{ {\partial}^2 u }{ \partial z^2} = -4 \pi \rho(x,y,z) ,

for points inside the body. Poisson’s derivation of this result was not rigorous, even by the standards of his time, and the first rigorous derivation of Poisson’s equation was given by Gauss (1777 – 1855) in 1839.

However, despite the work of Legendre, Laplace, Poisson, and Gauss, almost nothing was known about the general properties of solutions to Laplace’s equation. In 1828, Green (1793 – 1841), a self-taught English mathematician, published a privately printed booklet entitled An Essay on the Application of Mathematical Analysis to the Theory of Electricity and Magnetism. In this small masterwork Green derived what is now known as Green’s formulas and introduced the concept of the Green’s function. Unfortunately, his work was neglected for over twenty years until Sir William Thomson (later Lord Kelvin, 1824 – 1907) discovered it and, recognizing its great value, had it published in the Journal für Mathematik.

Until the middle part of the nineteenth century, mathematicians simply assumed that a solution to the Laplace or Poisson equations existed, usually arguing from physical considerations. In particular, Green’s proof of the existence of a Green’s function was based entirely on a physical argument. However, in the second half of the century extensive work was undertaken on the problem of existence of solutions to partial differential equations, not only for Laplace’s equation and Poisson’s equation, but for partial differential equations with variable coefficients. In particular, Riemann (1826 – 1866) and Hadamard (1865 – 1963) investigated initial value problems for hyperbolic equations, Picard (1856 – 1941) and others for elliptic equations, while Cauchy and Kowalewsky (1850 – 1891) studied the initial value, or Cauchy problem, for general systems of partial differential equations with analytic coefficients.

Gradually, mathematicians became aware that different types of equations required different types of boundary and initial conditions, leading to the now-standard classification of partial differential equations into elliptic, hyperbolic, and parabolic types. This classification was introduced by DuBois-Reymond (1831 – 1889).

In addition to investigations on existence theorems for general partial differential equations, research continued on the Dirichlet problem and the Neumann problem for Laplace’s equation through methods involving analytic function theory, the calculus of variations, and the method of integral equations (using successive approximation techniques). Considerable effort was also made to prove the existence of eigenvalues for

\, \dfrac{ {\partial}^2 u }{ \partial x^2} + \dfrac{ {\partial}^2 u }{ \partial y^2} + k^2 u = 0 ,

particularly by Schwarz (1843 – 1921) and Poincaré (1854 – 1912). The systematic treatment of the eigenvalue problems for partial differential equations was delayed until the development of the theory of integral equations in the twentieth century by Fredholm (1866 – 1927) and Hilbert (1862 – 1943). We shall return to this theme shortly.

Throughout the nineteenth century, mathematicians and mathematical physicists were concerned with the theory of wave motion, continuing the tradition established by Euler and d’Alembert in the eighteenth century. In particular, numerous papers were written applying the method of separation of variables in curvilinear coordinates to solve initial-boundary value problems for the wave equation and boundary value problems for the reduced wave equation or Helmholtz equation. Of paramount importance was the theory of Bessel functions, which were first systematically studied by Bessel (1784 – 1846), a mathematician and director of the astronomical observatory in Königsberg. Although these functions are of central importance in the study of wave propagation, Bessel was in fact led to his study while working on the motion of the planets.

In addition to the method of separation of variables for solving initial-boundary problems in wave propagation, integral representations of solutions to the wave equation and reduced wave equation were established by Poisson, Helmholtz (1821 – 1894), and Kirchhoff (1824 – 1887). The most spectacular triumph of these investigations into the theory of wave propagation was Maxwell’s derivation in 1864 of the laws of electromagnetism. From his equations, Maxwell (1831 – 1879) predicted that electromagnetic waves travel through space at the speed of light and that light itself was an electromagnetic phenomenon. Maxwell’s research was the highlight of nineteenth century mathematical physics and his monograph A Treatise on Electricity and Magnetism, published in 1873, is one of the classics of scientific thought.

Early in the twentieth century, a major new era in the theory of partial differential equations began with the development of the theory of integral equations to solve boundary value problems for partial differential equations. Integral equations had already been used by Neumann (1832 – 1925) in 1870 to solve the Dirichlet problem for Laplace’s equation in a convex domain by the method of successive approximations. However, due to the fact that no systematic theory of integral equations was available, Neumann was not able to remove the restrictive condition of convexity from his analysis.

The first step toward a general theory of integral equations was taken by Volterra (1860 – 1940) in 1896 and 1897 when he used the method of successive approximations to solve what is now called the Volterra integral equation of the second kind:

\, \phi(s) - \int_{a}^{s} K(s, t) \phi(t) dt = f(s) .

Volterra’s ideas were taken up by Fredholm, a professor at Stockholm, who established what is now known as the Fredholm alternative for Fredholm integral equations of the second kind:

\, \phi(s) - \lambda \int_{a}^{b} K(s, t) \phi(t) dt = f(s) .

Fredholm then proceeded to use his theory to solve the Dirichlet problem for Laplace’s equation in domains that were not necessarily convex, his first results appearing in a seminal paper published in 1900. Fredholm’s ideas were brought to fruition by Hilbert, a professor at Göttingen and the leading mathematician of the early part of the twentieth century. In a series of six papers published between 1904 and 1910, Hilbert gave a simpler formulation of Fredholm’s ideas, established the fact that an “arbitrary” function can be expanded in a series of eigenfunctions of the integral equation (now called the Hilbert-Schmidt theorem), and applied his results to problems in mathematical physics. The method of integral equations has been applied to an increasing number of problems in mathematical physics, most notably the scattering of acoustic, electromagnetic, and elastic waves by inhomogeneities in the medium.

The work by Volterra, Fredholm, and Hilbert has reverberated through the twentieth century, leading first to Hilbert space theory and functional analysis with applications to distributional solutions of initial value and boundary value problems for partial differential equations, and, in a somewhat different direction, to singular integral operators and the “general” theory of linear partial differential operators. However, these topics are beyond the scope of this brief survey; indeed, as the twentieth century reached middle age the era of partial differential equations became so broad and deep that a short survey of the directions taken and the results discovered would require a small monograph! Thus we conclude this section by indicating only three of these directions that are relevant to this book: numerical methods, nonlinear problems, and improperly posed problems.

We recall that nineteenth century research in the theory of partial differential equations was concerned primarily with well posed linear problems – by well posed we mean that a solution exists, is unique, and depends continuously on the boundary or initial data. For such problems, interest was focused on obtaining series or integral representations for the solution. However, as the demands of science increased, it became evident that such representations were often not suitable for numerical computation. For example, in using the series representation of the solution of Maxwell’s equations describing the propagation of radio waves around the earth it was discovered that over a thousand terms of the series were needed in order to assure the needed accuracy – a formidable task even for a modern computer! Hence mathematicians were led to derive new methods for the approximate solution of boundary and initial value problems of mathematical physics, leading to a fruitful interplay between the art of computer science and the methods of numerical analysis.

At the same time, it has become clear that the real world is in fact nonlinear and that although linear models are useful and valid in certain contexts, many phenomena can be understood only by a nonlinear model. Motivated by an increasing number of apparently intractable problems in fluid and gas dynamics, elasticity, and chemical reactions, mathematicians in the twentieth century have systematically studied nonlinear partial differential equations. This subject has by now reached full maturity and forms one of the major areas of the theory of partial differential equations. Finally, by mid-century, mathematicians realized (after some resistance!) that well posed problems were not the only ones of physical interest. In particular, such problems as the design of shock-free airfoils and the inverse scattering problems associated with radar, sonar, and medical imaging have led mathematicians seriously to consider improperly posed problems and to derive methods for their “solution.” Although the subject areas of study in partial differential equations in the twentieth century are significantly different from those of the previous century, the words of Fourier still provide the appropriate guidelines: “The profound study of nature is the most fertile source of mathematical discoveries.”

/////// End of Quote from Colton
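
As a quick check on the d’Alembert solution quoted at the start of Colton’s history, the following sketch (Python with the sympy library; it assumes only that \, f \, and \, g \, are twice differentiable) verifies symbolically that \, u(x,t) = f(x+ct) + g(x-ct) \, satisfies the wave equation:

# Symbolic verification of d'Alembert's solution of the wave equation
# u_xx = (1/c^2) u_tt, for arbitrary twice-differentiable f and g.
import sympy as sp

x, t, c = sp.symbols('x t c')
f = sp.Function('f')
g = sp.Function('g')

u = f(x + c*t) + g(x - c*t)

# Residual of the wave equation; it should simplify to zero.
residual = sp.diff(u, x, 2) - sp.diff(u, t, 2) / c**2
print(sp.simplify(residual))   # prints 0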

Elliptic Partial Differential Equations

Elliptic Partial Differential Equation at Wikipedia

Laplace’s Equation at Wolfram MathWorld
Laplace’s Equation at Wikipedia
Poisson’s Equation at Wikipedia
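
The prototype elliptic problem is a boundary value problem: values of the unknown on the whole boundary determine the solution inside. Below is a minimal numerical sketch (plain Python with numpy; the grid size, boundary values, and iteration count are arbitrary illustrative choices, not taken from the linked pages) of Jacobi iteration for Laplace’s equation on the unit square:

# Jacobi iteration for Laplace's equation u_xx + u_yy = 0 on a square grid.
# Each sweep replaces every interior value by the average of its four
# neighbours; the iterates converge to the discrete harmonic function.
import numpy as np

n = 50                        # number of interior points per side (assumed)
u = np.zeros((n + 2, n + 2))  # grid including the boundary
u[0, :] = 1.0                 # Dirichlet data: u = 1 on one edge, 0 elsewhere

for _ in range(5000):         # fixed sweep count; a tolerance test is more usual
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                            + u[1:-1, :-2] + u[1:-1, 2:])

print(u[n // 2, n // 2])      # approximate value at the centre of the square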

///////

Hyperbolic Partial Differential Equations

Hyperbolic Partial Differential Equation at Wikipedia

The Wave Equation at Wikipedia
The Electromagnetic Wave Equation at Wikipedia
The Wave Equation at ScienceDirect

Hearing the shape of a drum
Vibrations of a circular membrane
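
For the one-dimensional wave equation, the classical explicit scheme is the three-level leapfrog update. A minimal sketch (plain Python with numpy; the wave speed, grid, step count, and initial data are arbitrary choices for illustration; the time step must satisfy the CFL condition \, c \, \Delta t / \Delta x \le 1 \,):

# Leapfrog finite-difference scheme for u_tt = c^2 u_xx with c = 1,
# fixed ends u(0,t) = u(1,t) = 0, and zero initial velocity.
import numpy as np

n, steps = 200, 400
dx = 1.0 / n
dt = 0.5 * dx                 # CFL number c*dt/dx = 0.5
r2 = (dt / dx) ** 2           # (c*dt/dx)^2 with c = 1

x = np.linspace(0.0, 1.0, n + 1)
u_prev = np.sin(np.pi * x)    # initial displacement: one standing-wave mode
u = u_prev.copy()             # first-order start for zero initial velocity

for _ in range(steps):
    u_next = np.empty_like(u)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_next[0] = u_next[-1] = 0.0   # fixed (Dirichlet) ends
    u_prev, u = u, u_next

print(u[n // 2])              # midpoint displacement after `steps` time steps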

///////

Parabolic Partial Differential Equations

Parabolic Partial Differential Equation at Wikipedia

The Heat Equation at Wikipedia
The Heat Equation at Wolfram MathWorld
The Schrödinger Equation at Wikipedia
The Schrödinger Wave Equation at Eric Weisstein’s World of Physics
Fisher’s equation
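
The heat equation quoted from Fourier above, \, u_{xx} = (1/{\alpha}^2) \, u_t \,, can be stepped forward in time with the simplest explicit (forward-time, centred-space) scheme. A minimal sketch (plain Python with numpy; the grid, time step, and initial profile are arbitrary illustrative choices; stability requires \, {\alpha}^2 \Delta t / \Delta x^2 \le 1/2 \,):

# Explicit finite differences for the heat equation u_t = alpha^2 * u_xx
# on [0, 1] with u = 0 at both ends.
import numpy as np

alpha2 = 1.0                  # alpha^2 in Fourier's notation
n = 100
dx = 1.0 / n
dt = 0.4 * dx**2 / alpha2     # inside the stability bound alpha^2*dt/dx^2 <= 1/2

x = np.linspace(0.0, 1.0, n + 1)
u = np.sin(np.pi * x)         # initial temperature profile

for _ in range(1000):
    u[1:-1] += alpha2 * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

# The exact separated solution is sin(pi*x) * exp(-alpha^2 * pi^2 * t);
# the printed maximum error should be small.
t = 1000 * dt
print(np.max(np.abs(u - np.sin(np.pi * x) * np.exp(-alpha2 * np.pi**2 * t))))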

///////

Separation of Variables

/////// Quoting Wikipedia (Orthogonal coordinates):

While vector operations and physical laws are normally easiest to derive in Cartesian coordinates, non-Cartesian orthogonal coordinates are often used instead for the solution of various problems, especially boundary value problems, such as those arising in field theories of quantum mechanics, fluid flow, electrodynamics, plasma physics and the diffusion of chemical species or heat.

The chief advantage of non-Cartesian coordinates is that they can be chosen to match the symmetry of the problem. For example, the pressure wave due to an explosion far from the ground (or other barriers) depends on 3D space in Cartesian coordinates; however, the pressure predominantly moves away from the center, so that in spherical coordinates the problem becomes very nearly one-dimensional (since the pressure wave dominantly depends only on time and the distance from the center). Another example is (slow) fluid in a straight circular pipe: in Cartesian coordinates, one has to solve a (difficult) two-dimensional boundary value problem involving a partial differential equation, but in cylindrical coordinates the problem becomes one-dimensional with an ordinary differential equation instead of a partial differential equation.

The reason to prefer orthogonal coordinates instead of general curvilinear coordinates is simplicity: many complications arise when coordinates are not orthogonal. For example, in orthogonal coordinates many problems may be solved by separation of variables.

Separation of variables is a mathematical technique that converts a complex d-dimensional problem into d one-dimensional problems that can be solved in terms of known functions. Many equations can be reduced to Laplace’s equation or the Helmholtz equation. Laplace’s equation is separable in 13 orthogonal coordinate systems, and the Helmholtz equation is separable in 11 orthogonal coordinate systems.

Orthogonal coordinates never have off-diagonal terms in their metric tensor. In other words, the infinitesimal squared distance \, {ds}^2 \, can always be written as a scaled sum of the squared infinitesimal coordinate displacements. […] These scaling functions are used to calculate differential operators in the new coordinates, e.g., the gradient, the Laplacian, the divergence and the curl.

A simple method for generating orthogonal coordinate systems in two dimensions is by a conformal mapping of a standard two-dimensional grid of Cartesian coordinates (x, y). A complex number z = x + iy can be formed from the real coordinates x and y, where i represents the imaginary unit. Any holomorphic function w = f(z) with non-zero complex derivative will produce a conformal mapping; if the resulting complex number is written w = u + iv, then the curves of constant u and v intersect at right angles, just as the original lines of constant x and y did.

Orthogonal coordinates in three and higher dimensions can be generated from an orthogonal two-dimensional coordinate system, either by projecting it into a new dimension (cylindrical coordinates) or by rotating the two-dimensional system about one of its symmetry axes. However, there are other orthogonal coordinate systems in three dimensions that cannot be obtained by projecting or rotating a two-dimensional system, such as the ellipsoidal coordinates [which are based on confocal quadrics]. More general orthogonal coordinates may be obtained by starting with some necessary coordinate surfaces and considering their orthogonal trajectories.

/////// End of Quote from Wikipedia
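
To make the quoted description concrete, here is the standard separation-of-variables computation for Fourier’s heat equation from the history above (a textbook calculation, not part of the Wikipedia quote). Substituting the product ansatz \, u(x,t) = X(x) \, T(t) \, into the heat equation gives

\, X''(x) \, T(t) = \dfrac{1}{{\alpha}^2} X(x) \, T'(t) \, ,

and dividing by \, X(x) \, T(t) \, separates the variables:

\, \dfrac{X''(x)}{X(x)} = \dfrac{1}{{\alpha}^2} \dfrac{T'(t)}{T(t)} = - \lambda \, .

The left side depends only on \, x \, and the middle only on \, t \,, so both must equal a constant \, - \lambda \,, and the partial differential equation splits into two ordinary differential equations:

\, X'' + \lambda X = 0 \, , \qquad T' + \lambda {\alpha}^2 T = 0 \, .

With the boundary conditions \, u(0,t) = u(L,t) = 0 \, the admissible values are \, \lambda_n = (n \pi / L)^2 \,, and superposing the separated solutions gives the Fourier series

\, u(x,t) = \sum_{n=1}^{\infty} b_n \sin \dfrac{n \pi x}{L} \, e^{-(n \pi / L)^2 {\alpha}^2 t} \, ,

where the coefficients \, b_n \, are the Fourier sine coefficients of the initial temperature \, u(x,0) \,.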

The Helmholtz Equation at Wikipedia
Bessel’s differential equation
Cylindrical harmonics
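
The conformal-mapping construction quoted above is easy to test in a particular case. A minimal sketch (Python with sympy; the map \, w = z^2 \, is an arbitrary example, not taken from the quote, whose level curves are two families of rectangular hyperbolas): the level curves of \, u \, and \, v \, intersect at right angles exactly where \, \nabla u \cdot \nabla v = 0 \,.

# Check that the holomorphic map w = z^2 produces orthogonal coordinate
# curves: u = x^2 - y^2 and v = 2xy have perpendicular gradients everywhere.
import sympy as sp

x, y = sp.symbols('x y', real=True)
w = sp.expand((x + sp.I * y) ** 2)   # w = f(z) with f(z) = z^2
u, v = w.as_real_imag()              # u = x^2 - y^2, v = 2*x*y

grad_u = sp.Matrix([sp.diff(u, x), sp.diff(u, y)])
grad_v = sp.Matrix([sp.diff(v, x), sp.diff(v, y)])

print(sp.simplify(grad_u.dot(grad_v)))   # prints 0: the level curves are orthogonal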
