Partial Differential Equations


Preamble

Please note that these notes are still under construction. Moreover, they are not intended for teaching purposes but should rather be used as a reference.

Introduction

Basic knowledge of the following topics is assumed. You may want to check the appendix.

  • Linear algebra
  • (simple) ODEs
  • Fundamental calculus:
    • vector algebra,
    • fundamental theorems such as Green's Theorem and Divergence Theorem
    • Jacobians
    • infinite series
    • directional derivatives

Resources Used

  • Partial Differential Equations: An Introduction, 2nd Ed, by Walter A. Strauss.
  • Elements of Partial Differential Equations by Ian N. Sneddon.

Notation

  • 0 \in \N and \N^+ := \N \setminus \{0\}.

Partial Differential Equations

Def. PDE

A partial differential equation is an equation that involves an unknown function u(x_1,...,x_n) of n \geq 2 variables (for n = 1 the equation is called ordinary) and some of its partial derivatives with respect to one or more of x_1,...,x_n. Precisely, u is a function u: U \to \R where U is an open subset of \R^n called the domain of the PDE.

Notice how the parameters x_1,...,x_n are independent variables and the unknown function u is a dependent variable depending on x_1,...,x_n.

The order of a PDE is the highest order of derivative that appears in the equation.

A solution of a PDE is a function u that satisfies the equation.

Notation. Partial Derivative

From now on we will denote the partial derivative of u with respect to its parameter x as

u_{x} := \dfrac{\partial u}{\partial x}

Moreover, higher-order partial derivatives are simply written in sequence. For example, the partial derivative of u_{x} with respect to y will be denoted as

u_{xy} := \dfrac{\partial}{\partial y}\left(\dfrac{\partial u}{\partial x}\right)

Recall from calculus that u_{xy} = u_{yx} (when the relevant derivatives are continuous).

Unless otherwise stated we will assume all derivatives exist and are continuous.

Def. Initial and Boundary Conditions

These are not formal definitions but rather conventions regarding PDEs.

Because PDEs typically have infinitely many solutions, we may want to impose auxiliary conditions. These conditions are usually classified into two classes: initial and boundary conditions, which themselves (as we will see later) are further classified.

An initial condition specifies, as the name implies, the state of the unknown function at a fixed initial value of one of its variables (typically time). For example, for an unknown function u of n variables

u(x_1, x_2, ..., x_{n-1}, c) = \varphi(x_1, x_2, ..., x_{n-1})

is an initial condition where c \in \R is a constant.

A boundary condition specifies the boundary behavior of the solution. For example, defining the domain of the PDE is part of specifying boundary data, and so is prescribing the normal derivative \partial u/\partial n on the boundary. We will see more about boundary conditions and classify them later on.

Def. Well-Posed Problem

A well-posed PDE problem consists of a PDE in a domain with initial and/or boundary conditions (or other auxiliary conditions) that satisfies

  1. existence i.e. there exists at least one solution satisfying these conditions
  2. uniqueness i.e. there is at most one solution
  3. stability i.e. the solution depends continuously on the data (the initial and boundary conditions), so that it is not chaotic

First Order Equations

Thm. Constant Coefficients

Let us be given a first-order homogeneous partial differential equation

au_x + bu_y = 0

with non-zero constant coefficients a,b \in \R.

Geometric Method

Note that au_x + bu_y is the directional derivative of u in the direction of the vector \bold{V} = a\bold{i} + b\bold{j}, and the equation says it is always zero. Therefore, u(x,y) is constant in the direction of \bold{V}, while (b, -a) is orthogonal to \bold{V}. The lines parallel to \bold{V}, which are called the characteristic lines, have equations bx - ay = ``\text{constant}". The solution is constant on each such line, therefore u(x,y) depends only on bx-ay, so that

u(x,y) = f(bx-ay)

where f is an arbitrary (differentiable) function of one variable.
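As a quick sanity check, one can verify symbolically that any differentiable function of bx - ay solves the PDE; a minimal sketch using sympy (a, b and f are generic placeholders):

```python
import sympy as sp

x, y, a, b = sp.symbols("x y a b")
f = sp.Function("f")  # arbitrary differentiable function of one variable

u = f(b*x - a*y)
residual = a * sp.diff(u, x) + b * sp.diff(u, y)  # a u_x + b u_y
print(sp.simplify(residual))  # → 0
```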

Thm. Variable Coefficient

Now, let us be given a first-order homogeneous partial differential equation

u_x + yu_y = 0

Geometric Method

This PDE asserts that the directional derivative in the direction of the vector (1,y) is zero. The curves in the plane with (1,y) as tangent vectors have slopes y, so that

\dfrac{dy}{dx} = \dfrac{y}{1}.

This ODE has the solutions

y = C e^x

which are the characteristic curves of our PDE. Since u is constant along each characteristic curve y = Ce^x, the value of u at (x, y) equals its value at the point (0, ye^{-x}) on the same curve, hence

u(x,y) = u(0, e^{-x}y)

and

\boxed{u(x,y) = f\!\left(ye^{-x}\right)}

which is the general solution to our PDE.
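The solution can again be checked symbolically; a sympy sketch:

```python
import sympy as sp

x, y = sp.symbols("x y")
f = sp.Function("f")  # arbitrary differentiable function

u = f(y * sp.exp(-x))
residual = sp.diff(u, x) + y * sp.diff(u, y)  # u_x + y u_y
print(sp.simplify(residual))  # → 0
```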

Second Order Equations

Def. Simple 2nd Order PDE

The homogeneous linear equation of order two in two variables is given by

\boxed{a_{11} u_{xx} + 2a_{12}u_{xy} + a_{22}u_{yy} + \cdots = 0}

where \cdots denotes terms of order one or lower and a_{ij} are real coefficients.

The coefficient 2 is introduced for convenience.

Thm. Canonical Forms

By a linear transformation of the independent variables, a 2nd order linear PDE in two variables can be reduced to one of the three forms. Let \Delta = a_{12}^2 - a_{11}a_{22}, then it is of the form

  1. hyperbolic if \Delta > 0 so that
u_{xx} - u_{yy} + \cdots = 0
  2. parabolic if \Delta = 0 so that
u_{xx} + \cdots = 0
  3. elliptic if \Delta < 0 so that
u_{xx} + u_{yy} + \cdots = 0

where \cdots denotes terms of order one or lower.

1. Hyperbolic Case

Let

\xi = y - \lambda_1 x \\ \eta = y - \lambda_2x

where

\lambda_{1,2} := \dfrac{a_{12} \pm \sqrt{\Delta}}{a_{11}} \qquad (a_{11} \neq 0)

then

u(\xi, \eta) = f(\xi) + g(\eta)

where f and g are arbitrary functions.

2. Parabolic Case

For the case \Delta = 0, we have

\xi = y - \lambda x \\ \eta = x

where \lambda = \lambda_1 = \lambda_2 is defined as above, then

u(\xi, \eta) = f(\xi) \eta + g(\eta).

3. Elliptic Case

For \Delta < 0, we have

\xi = \enspace x \cos \theta + y \sin \theta \\ \eta = -x \sin \theta + y \cos \theta

where

\tan(2\theta) = \dfrac{2 a_{12}}{a_{11} - a_{22}}.

The Wave Equation

Def. Wave Equation

The (one-dimensional) wave equation (on the line) is defined as

u_{tt} = c^2 u_{xx}

where x \in (-\infty, \infty), t > 0, and c > 0 is the wave speed. The general solution is easy to find since the operator factors nicely:

u_{tt} - c^2 u_{xx} = \left(\dfrac{\partial}{\partial t} - c\dfrac{\partial}{\partial x}\right)\left(\dfrac{\partial}{\partial t} + c\dfrac{\partial}{\partial x}\right)u = 0

so that

\boxed{u(x,t) = f(x+ct) + g(x-ct)}

where f and g are arbitrary twice-differentiable functions of a single variable.

Thm. Initial Value Problem

Assume we are given an initial value problem for the wave equation so that

\def\arraystretch{1.5} \begin{array}{rcl} u_{tt} - c^2 u_{xx} &=& 0 \\ \hdashline u(x, 0) &=& g(x) \\ u_t(x, 0) &=& h(x) \end{array}

where g and h are the given initial displacement and velocity. There is one and only one solution of this problem which is

\boxed{ u(x,t) = \dfrac{g(x+ct) + g(x-ct)}{2} + \dfrac{1}{2c} \displaystyle \int_{x-ct}^{x+ct} h(s) \> ds }
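This formula can be verified symbolically for arbitrary initial data; the sympy sketch below checks both the PDE and the initial velocity condition (g and h are generic functions):

```python
import sympy as sp

x, t, s = sp.symbols("x t s")
c = sp.symbols("c", positive=True)
g, h = sp.Function("g"), sp.Function("h")

u = (g(x + c*t) + g(x - c*t)) / 2 \
    + sp.Integral(h(s), (s, x - c*t, x + c*t)) / (2*c)

residual = sp.simplify(sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2))
ut0 = sp.simplify(sp.diff(u, t).subs(t, 0))  # initial velocity u_t(x, 0)
print(residual, ut0)  # → 0 h(x)
```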

Def. Inhomogeneous Wave Equation

Let f \in C^2(\R^2, \R), then the inhomogeneous wave equation is defined as

u_{tt} - c^2 u_{xx} = f(x,t)

Thm. d'Alembert's Formula

Let

u_{tt} - c^2 u_{xx} = f(x,t)

be an inhomogeneous wave equation with the initial conditions

\def\arraystretch{1.25} \begin{array}{ccc} u(x,0) &=& g(x) \\ u_t(x,0) &=& h(x) \\ \end{array}

then the solution is given by the formula

\def\arraystretch{2.5} \begin{array}{rll} u(x,t) = & \dfrac{g(x+ct) + g(x-ct)}{2} \\ + & \dfrac{1}{2c} \displaystyle \int_{x-ct}^{x+ct} h(s) \> ds \\ + & \dfrac{1}{2c} \displaystyle \int_0^t \int_{x-c(t - \tau)}^{x+c(t - \tau)} f(s,\tau) \> ds \> d\tau \end{array}

Notice that the double integral is taken over the characteristic triangle \Delta of the point (x,t).
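As a concrete illustration, take f \equiv 1 with g = h = 0; the double integral alone then gives u = t^2/2, which indeed satisfies u_{tt} - c^2 u_{xx} = 1. A sympy sketch:

```python
import sympy as sp

x, t, s, tau = sp.symbols("x t s tau")
c = sp.symbols("c", positive=True)

# Only the source term of the formula survives when g = h = 0 and f = 1:
u = sp.integrate(1, (s, x - c*(t - tau), x + c*(t - tau)), (tau, 0, t)) / (2*c)
residual = sp.simplify(sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2))
print(sp.simplify(u), residual)  # → t**2/2 1
```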

Moreover, if this problem is posed on the half-line, i.e. with the extra boundary condition

u(0,t) = r(t)

then the solution is the same as above for x > ct > 0, but for 0 < x < ct, we have

u(x,t) = \cdots \textcolor{red}{ + r\left(t - \dfrac{x}{c}\right) + \dfrac{1}{2c} \iint_D f}

where t - x/c is the reflection point and D is the corresponding shaded region.

Thm. Causality

The effect of an initial position g(x) is a pair of waves traveling in opposite directions at speed c, each at half the original amplitude. The effect of an initial velocity h(x) is a wave spreading out at speed \le c in both directions; part of the disturbance may lag behind, but no part travels faster than speed c. This is called the principle of causality.

See and add principle of causality figure p. 39

An initial condition (position or velocity or both) at the point (x_0, 0) can affect the solution for t > 0 only in the shaded sector, which is called the domain of influence of the point (x_0, 0).

The domain of influence corresponds to the shaded area of the (upward) sector bounded by the characteristics x + ct = x_0 and x - ct = x_0 with vertex (x_0,0).

The domain of dependence or the past history of the point (x,t) corresponds to the shaded area for the (downwards) triangle (x-ct, 0), (x+ct,0) and (x,t).

Def. Energy

Consider a homogenous wave equation, then we define the energy integral as

\boxed{ E(t) := \dfrac{1}{2} \int_{-\infty}^\infty (u_t^2 + c^2u_x^2) \> dx }

which is constant (exercise).
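The conservation of E can be illustrated numerically. The sketch below (numpy) takes the d'Alembert solution with the illustrative initial data \phi(x) = e^{-x^2} and h \equiv 0, approximates E(t) by a Riemann sum on a wide grid, and compares two times:

```python
import numpy as np

c = 2.0                                    # illustrative wave speed
x = np.linspace(-60.0, 60.0, 200001)       # wide grid: the waves stay inside it
dx = x[1] - x[0]
dphi = lambda s: -2.0 * s * np.exp(-s**2)  # phi'(s) for phi(s) = exp(-s^2)

def energy(t):
    # u(x,t) = (phi(x+ct) + phi(x-ct)) / 2, hence:
    u_t = c * (dphi(x + c*t) - dphi(x - c*t)) / 2
    u_x = (dphi(x + c*t) + dphi(x - c*t)) / 2
    return 0.5 * np.sum(u_t**2 + c**2 * u_x**2) * dx

print(energy(0.0), energy(5.0))  # the two values agree
```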

Thm. Reflection of Waves

Assume we are given a Dirichlet problem on the half-line (0,\infty) for the wave equation so that

\def\arraystretch{1.5} \begin{array}{rcl} v_{tt} - c^2 v_{xx} &=& 0 \\ \hdashline v(x, 0) &=& g(x) \\ v_t(x, 0) &=& h(x) \\ \hdashline v(0, t) &=& 0 \end{array}

where x \in (0,\infty) and t>0. Then, the solution for x>ct is given by

v(x,t) = \dfrac{1}{2}\bigl(g(x+ct) + g(x-ct)\bigr) + \dfrac{1}{2c} \int_{x-ct}^{x+ct} h(y)\,dy

and for 0<x<ct we have

v(x,t) = \dfrac{1}{2}\bigl(g(x+ct) - g(ct-x)\bigr) + \dfrac{1}{2c} \int_{ct-x}^{ct+x} h(y)\,dy

See Figure 2 in p. 62

Thm. Duhamel's Principle

The Diffusion Equation

Def. Diffusion Equation

The (one-dimensional) diffusion equation (also known as the heat equation) is defined as

\begin{equation} u_t = k u_{xx} \end{equation}

where t > 0, k > 0 and 0 < x < l.

The diffusion equation is harder to solve than the wave equation, so we will postpone the general solution.

Def. Energy

Given a homogeneous diffusion equation, we define its energy integral as

\boxed{ E(t) := \dfrac{1}{2} \int_0^l u^2 dx }

which is non-increasing (exercise) i.e. E(t) \leq E(0).

(Weak) Maximum Principle

If u(x,t) satisfies the diffusion equation in a rectangle, say 0 \leq x \leq l and 0 \leq t \leq T in space-time, then the maximum value of u(x,t) is attained either initially at t = 0 or on the lateral sides (x = 0 or x = l).

Indeed, there is a stronger version called the strong maximum principle, which asserts that the maximum cannot be attained anywhere inside the rectangle but only on the bottom or the lateral sides (unless u is constant); the corners are allowed.

The minimum value has the same property so that it too can be attained only on the bottom or the lateral sides.

Uniqueness

The maximum principle can be used to give a proof of uniqueness for the Dirichlet problem for the diffusion equation. That is, there is at most one solution of

\def\arraystretch{1.5} \begin{array}{lll} u_t - ku_{xx} &=& f(x,t) \\ \hdashline u(x,0) &=& \phi(x) \\ \hdashline u(0, t) &=& g(t) \\ u(l,t) &=& h(t) \end{array}

for four given functions f, \phi, g and h, so that the solution is completely determined by its initial and boundary conditions.

This diffusion equation problem is also stable making it well-posed.

Thm. Invariance Properties

The diffusion equation (1) has some basic invariance properties, namely

  1. The translation u(x - y, t) of any solution u(x,t) is another solution.
  2. Any derivative u_x, u_t, or u_{xx} etc. of a solution is another solution.
  3. A linear combination of solutions is again a solution — linearity.
  4. An integral of a solution is again a solution.

Thm. Diffusion on the Whole Line

Let us be given the following problem

\def\arraystretch{1.25} \begin{array}{rcl} u_t &=& ku_{xx} \\ u(x, 0) &=& \phi(x) \end{array}

where x \in (-\infty, \infty) and t \in (0, \infty). Assuming \phi(\infty) = \phi(-\infty) = 0, we have

\boxed{ S(x,t) = \dfrac{1}{\sqrt{4 \pi k t}} e^{-x^2 / 4kt} }

for t > 0. This is called the source function (or heat kernel), so that for the solution we have

\boxed{ \def\arraystretch{1} \begin{array}{lll} u(x,t) &=& \displaystyle\int_{-\infty}^\infty S(x - y,t) \phi(y) dy \\ \\ &=& \dfrac{1}{\sqrt{4 \pi k t}} \displaystyle\int_{-\infty}^\infty e^{-(x-y)^2 / 4kt} \phi(y) dy \end{array} }

The source function S(x,t) is defined for all real x and positive t. Moreover, S(x,t) is positive and is even in x so that S(-x,t) = S(x,t).

Note that this general solution integral is, in general, not expressible in terms of elementary functions. In concrete computations it is therefore often written using the error function

\text{erf}(x) := \dfrac{2}{\sqrt{\pi}} \int_0^x e^{-p^2} dp
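Both defining properties of S (it solves the diffusion equation for t > 0 and has unit total mass) can be checked symbolically; a sympy sketch:

```python
import sympy as sp

x = sp.symbols("x", real=True)
t, k = sp.symbols("t k", positive=True)

S = sp.exp(-x**2 / (4*k*t)) / sp.sqrt(4*sp.pi*k*t)

residual = sp.simplify(sp.diff(S, t) - k * sp.diff(S, x, 2))  # S_t - k S_xx
mass = sp.integrate(S, (x, -sp.oo, sp.oo))
print(residual, mass)  # → 0 1
```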

Thm. Comparison of Wave and Diffusion

| Property | Wave | Diffusion |
|---|---|---|
| Speed of propagation | Finite (\leq c) | Infinite |
| Singularities for t > 0 | Transported along characteristics with speed c | Lost immediately |
| Well-posed for t > 0 | Yes | Yes (at least for bounded solutions) |
| Well-posed for t < 0 | Yes | No |
| Maximum principle | No | Yes |
| Behaviour as t \to +\infty | Energy is constant, does not decay | Decays to zero |
| Information | Transported | Lost gradually |

Thm. Diffusion on the Half-Line

Let's take the domain for the diffusion equation to be the half-line (0, \infty) and take the Dirichlet boundary conditions at the single point x = 0, so that the problem becomes

\def\arraystretch{1.5} \begin{array}{rcl} v_{t} - k v_{xx} &=& 0 \\ \hdashline v(x, 0) &=& \phi(x) \\ v(0, t) &=& 0 \end{array}

where x \in (0, \infty) and t \in (0, \infty). The general solution for this problem is of the form

\boxed{ v(x,t) = \int_0^\infty \left[ S(x-y, t) - S(x+y, t) \right] \phi(y)dy }

Now consider the Neumann problem, so that it becomes

\def\arraystretch{1.5} \begin{array}{rcl} w_{t} - k w_{xx} &=& 0 \\ \hdashline w(x, 0) &=& \phi(x) \\ w_x(0, t) &=& 0 \end{array}

where x \in (0, \infty) and t \in (0, \infty), then the general solution for this problem is of the form

\boxed{ w(x,t) = \int_0^\infty \left[ S(x-y, t) + S(x+y, t) \right] \phi(y)dy }
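Since S is even in its first argument, the integrand of the Dirichlet formula vanishes identically at x = 0, so the boundary condition holds exactly. A numerical sketch (the datum \phi(y) = y e^{-y} and the parameter values are illustrative):

```python
import numpy as np

k, t = 0.5, 0.1                # illustrative diffusion constant and time
S = lambda z: np.exp(-z**2 / (4*k*t)) / np.sqrt(4*np.pi*k*t)
phi = lambda y: y * np.exp(-y)  # illustrative initial datum on (0, inf)

y = np.linspace(0.0, 40.0, 100001)
dy = y[1] - y[0]

def v(xx):  # Dirichlet half-line solution (odd reflection of phi)
    return np.sum((S(xx - y) - S(xx + y)) * phi(y)) * dy

print(v(0.0))  # → 0.0, the boundary condition v(0, t) = 0 holds exactly
```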

Thm. Diffusion with a Source

Let's now consider the diffusion equation with a source f(x,t), i.e.

\def\arraystretch{1.5} \begin{array}{rcl} u_{t} - k u_{xx} &=& f(x,t) \\ \hdashline u(x, 0) &=& \phi(x) \end{array}

then the general solution is of the form

\boxed{ \def\arraystretch{1} \begin{array}{lll} u(x,t) &=& \displaystyle\int_{-\infty}^\infty S(x - y,t) \phi(y) dy \\ \\ &+& \displaystyle\int_0^t\int_{-\infty}^\infty S(x-y, t-s) f(y,s) \> dy \> ds \end{array} }

Separation of Variables and the Dirichlet Condition

Thm. in the Wave Equation

Assume we are given a homogeneous wave equation with Dirichlet conditions so that

\def\arraystretch{1.5} \begin{array}{rcl} u_{tt} - c^2 u_{xx} &=& 0 \\ \hdashline u(0, t) &=& 0 \\ u(l, t) &=& 0 \\ \hdashline u(x, 0) &=& g(x) \\ u_t(x,0) &=& h(x) \end{array}

where x \in (0, l). A separated solution is a solution of the form

u(x,t) = X(x) T(t)

So if we plug this equation into our PDE, we get

XT'' = c^2 X'' T \implies -\dfrac{T''}{c^2 T} = -\dfrac{X''}{X} = \lambda.

This defines a separation constant \lambda. For the Dirichlet eigenvalue problem one gets \lambda>0, so write \lambda=\beta^2 with \beta>0. Then the equations above are a pair of separate ODEs:

\begin{array}{lcl} X'' + \beta^2 X &=& 0 \\ T'' + c^2 \beta^2 T &=& 0 \end{array}

These ODEs are easy to solve and have solutions of the form

\begin{array}{lcl} X = C \cos(\beta x) + D \sin(\beta x) \\ T = A \cos(\beta c t) + B \sin (\beta c t) \end{array}

where A,B,C,D are constants. The boundary conditions require X(0) = 0 = X(l). From X(0)=0 we get C=0, hence X(x)=D\sin(\beta x), and then

0 = X(l) = D \sin (\beta l)

Since we are not interested in the trivial solution X \equiv 0 (i.e. D = 0), we need \sin(\beta l) = 0, therefore \beta = \dfrac{n\pi}{l} for a positive integer n, and there are infinitely many separated solutions of the form

\boxed{ u_n(x,t) = \left(A_n\cos\left(\dfrac{n \pi ct}{l}\right) + B_n\sin\left(\dfrac{n \pi ct}{l}\right)\right)\sin\left(\dfrac{n \pi x}{l}\right) }

where n = 1,2,\dots and A_n and B_n are arbitrary constants. Noting a sum of solutions is also a solution, we have

\boxed{ u(x,t) = \sum_{n=1}^\infty u_n(x,t) }

The coefficients n \pi c / l of the variable t are sometimes called the frequencies.

For the initial condition, notice that

g(x) = \sum_{n=1}^\infty A_n \sin\left(\dfrac{n \pi x}{l}\right)

and

h(x) = \sum_{n=1}^\infty \dfrac{n \pi c}{l} B_n \sin\left(\dfrac{n \pi x}{l}\right).

The numbers \lambda_n = (n \pi / l)^2 are called eigenvalues and the functions X_n(x) = \sin(n \pi x / l) are called eigenfunctions, where n = 1,2,3,....
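Each separated solution u_n can be checked symbolically against the PDE and the Dirichlet boundary conditions; a sympy sketch:

```python
import sympy as sp

x, t = sp.symbols("x t")
c, l, A, B = sp.symbols("c l A B", positive=True)
n = sp.symbols("n", integer=True, positive=True)

w = n * sp.pi / l
u_n = (A * sp.cos(w*c*t) + B * sp.sin(w*c*t)) * sp.sin(w*x)

residual = sp.simplify(sp.diff(u_n, t, 2) - c**2 * sp.diff(u_n, x, 2))
print(residual, u_n.subs(x, 0), u_n.subs(x, l))  # → 0 0 0
```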

Thm. in the Diffusion Equation

Analogously assume we are given a homogeneous diffusion equation with Dirichlet conditions so that

\def\arraystretch{1.5} \begin{array}{rcl} u_{t} - k u_{xx} &=& 0 \\ \hdashline u(0, t) &=& 0 \\ u(l, t) &=& 0 \\ \hdashline u(x, 0) &=& g(x) \end{array}

where x \in (0, l), then

\dfrac{T'}{kT} = \dfrac{X''}{X} = -\lambda

so that

\boxed{ u(x,t) = \sum_{n=1}^\infty A_n e^{-(n \pi / l)^2 kt} \sin\left(\dfrac{n \pi x}{l}\right) }

and

g(x) = \sum_{n=1}^\infty A_n \sin\left(\dfrac{n \pi x}{l}\right).
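Each term of this series can likewise be checked against the diffusion equation and the Dirichlet conditions; a sympy sketch:

```python
import sympy as sp

x, t = sp.symbols("x t")
k, l, A = sp.symbols("k l A", positive=True)
n = sp.symbols("n", integer=True, positive=True)

u_n = A * sp.exp(-(n*sp.pi/l)**2 * k * t) * sp.sin(n*sp.pi*x/l)
residual = sp.simplify(sp.diff(u_n, t) - k * sp.diff(u_n, x, 2))
print(residual, u_n.subs(x, 0), u_n.subs(x, l))  # → 0 0 0
```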

TODO: Learn and write the methodology. Check out ODE solution as well.

Separation of Variables and the Neumann Condition

Thm. in the Wave Equation

Consider the homogeneous wave equation with Neumann boundary conditions:

\def\arraystretch{1.5} \begin{array}{rcl} u_{tt} - c^2 u_{xx} &=& 0 \\ \hdashline u_{\textcolor{red}{x}}(0, t) &=& 0 \\ u_{\textcolor{red}{x}}(l, t) &=& 0 \\ \hdashline u(x, 0) &=& g(x) \\ u_t(x,0) &=& h(x) \end{array}

where x \in (0, l). Separation of variables now yields the solution

\boxed{ \def\arraystretch{1} \begin{array}{rcl} u(x,t) &=& \dfrac{1}{2}A_0 + \dfrac{1}{2} B_0 t \\ \\ &+& \displaystyle\sum_{n=1}^\infty \left(A_n \cos\left(\dfrac{n \pi c t}{l}\right) + B_n \sin\left(\dfrac{n \pi c t}{l}\right)\right) \cos\left(\dfrac{n \pi x}{l}\right) \end{array} }

Thm. in the Diffusion Equation

Now assume we are given a homogeneous diffusion (heat) equation with Neumann conditions so that

\def\arraystretch{1.5} \begin{array}{rcl} u_{t} - k u_{xx} &=& 0 \\ \hdashline u\textcolor{red}{_x}(0, t) &=& 0 \\ u\textcolor{red}{_x}(l, t) &=& 0 \\ \hdashline u(x, 0) &=& g(x) \end{array}

then it has a solution of the form

\boxed{ u(x,t) = \dfrac{1}{2} A_0 + \sum_{n=1}^\infty A_n e^{-(n \pi / l)^2 k t} \cos\left(\dfrac{n \pi x}{l}\right) }

where

g(x) = \dfrac{1}{2} A_0 + \sum_{n=1}^\infty A_n \cos\left(\dfrac{n \pi x}{l}\right)

Also, notice that we have the eigenvalues

0,\; \left(\frac{\pi}{l}\right)^2,\; \left(\frac{2\pi}{l}\right)^2,\;\dots

and the eigenfunctions

X_n(x) = \cos\left(\dfrac{n \pi x}{l}\right)

for n=0,1,2,\dots (with X_0\equiv 1).

Notice how, unlike the Dirichlet case, n starts from 0 instead of 1.

Separation of Variables

Thm. Core Idea

We always assume

u(x,t) = X(x)T(t)

and plug it into our PDE, which yields two ODEs: one spatial and one temporal. Everything reduces to solving the eigenvalue problem for X and then plugging the eigenvalues into the equation for T.

For the Dirichlet case, we have the eigenvalues

\lambda_n = \left(\dfrac{n \pi}{l}\right)^2

and eigenfunctions

X_n(x) = \sin\left(\dfrac{n \pi x}{l}\right)

For the Neumann case, the eigenfunctions are

X_n(x) = \cos\left(\dfrac{n \pi x}{l}\right)

with eigenvalues \lambda_n = (n\pi/l)^2 for n=0,1,2,\dots (note \lambda_0=0 corresponds to the constant eigenfunction).

Fourier Series

Def. Fourier Sine Series

The Fourier sine series for the given function \phi(x) is given by

\phi(x) = \sum_{n=1}^\infty A_n \sin\left(\dfrac{n \pi x}{l}\right)

in the interval (0, l).

These series, as we saw earlier, are used for wave and diffusion equations with Dirichlet boundary conditions.

Thm. Fourier Sine Series Coefficients

The coefficients of the Fourier sine series are given by

\boxed{ A_n = \frac{2}{l} \int_0^l \phi(x) \sin\left(\dfrac{n \pi x}{l}\right) \> dx }
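For example, for the illustrative choice \phi(x) = x this integral evaluates to A_n = 2l(-1)^{n+1}/(n\pi); a sympy sketch:

```python
import sympy as sp

x = sp.symbols("x")
l = sp.symbols("l", positive=True)
n = sp.symbols("n", integer=True, positive=True)

phi = x  # illustrative choice
A_n = (2/l) * sp.integrate(phi * sp.sin(n*sp.pi*x/l), (x, 0, l))
print(sp.simplify(A_n))  # equivalent to 2*l*(-1)**(n + 1)/(pi*n)
```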

Def. Fourier Cosine Series

The Fourier cosine series is defined as

\phi(x) = \frac{1}{2}A_0 + \sum_{n=1}^\infty A_n \cos\left(\dfrac{n \pi x}{l}\right)

in the interval (0, l).

These series, as we saw earlier, are used for wave and diffusion equations with Neumann boundary conditions on (0, l).

Thm. Fourier Cosine Series Coefficients

The coefficients of the Fourier cosine series are given by

\boxed{ A_n = \frac{2}{l} \int_0^l \phi(x) \cos\left(\dfrac{n \pi x}{l}\right) \> dx }

for n \ge 1, and

\boxed{A_0 = \frac{2}{l}\int_0^l \phi(x)\,dx.}

Def. (Full) Fourier Series

The (full) Fourier series of \phi(x) on the interval (-l,l) is defined as

\phi(x) = \frac{1}{2}A_0 + \sum_{n=1}^\infty \left(A_n \cos\left(\dfrac{n \pi x}{l}\right) + B_n \sin\left(\dfrac{n \pi x}{l}\right)\right)

Thm. Fourier Series Coefficients

The coefficients of the Fourier series are given by

A_n = \frac{1}{l} \int_{-l}^l \phi(x) \cos\left(\dfrac{n \pi x}{l}\right) \> dx

where n \in \N, and

B_n = \frac{1}{l} \int_{-l}^l \phi(x) \sin\left(\dfrac{n \pi x}{l}\right) \> dx

where n \in \N^+.

Note that these coefficients are not exactly the same as the previously defined cosine and sine coefficients.

Thm. Parseval's Equality

For the full Fourier series on (-L, L) we have

\int_{-L}^L |f(x)|^2 dx = \dfrac{L}{2} a_0^2 + L \sum_{n=1}^\infty (a_n^2 + b_n^2)

For the sine series on (0, L), we have

\int_{0}^L |f(x)|^2 dx = \dfrac{L}{2} \sum_{n=1}^\infty b_n^2

Finally, for the cosine series on (0, L) we have

\int_0^L |f(x)|^2 dx = \dfrac{L}{4} a_0^2 + \dfrac{L}{2} \sum_{n=1}^\infty a_n^2
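As a numeric spot-check, take f(x) = x on (0, L), whose sine coefficients are b_n = 2L(-1)^{n+1}/(n\pi); the partial sums of the right-hand side then converge to \int_0^L x^2 dx = L^3/3:

```python
import numpy as np

L = 1.0
n = np.arange(1, 200001)
b = 2 * L * (-1.0)**(n + 1) / (n * np.pi)  # sine coefficients of f(x) = x

lhs = L**3 / 3                   # integral of |f|^2 over (0, L)
rhs = (L / 2) * np.sum(b**2)     # Parseval partial sum
print(lhs, rhs)                  # the partial sum approaches the integral
```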

Condition for Validity

Parseval's equality holds provided that

\int_a^b |f(x)|^2 dx

is finite, i.e. f is square-integrable on the interval.

Appendix 1. Calculus