Definition 8. Denote for each boundary point the boundary space determined by the boundary condition (5.3). The boundary condition is called maximal dissipative if the associated boundary spaces are maximal nonpositive at every boundary point.
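For a Hermitian boundary form, maximal nonpositivity has the following standard meaning; the symbols V_p for the boundary space and \mathcal{A}_p for the boundary matrix at the boundary point p (contracted with the unit outward normal and symmetrized with the symmetrizer) are introduced here purely for illustration:
\[
u^*\mathcal{A}_p\,u \le 0 \quad\text{for all } u\in V_p,
\qquad
\dim V_p = \#\{\text{nonpositive eigenvalues of } \mathcal{A}_p \ \text{(with multiplicity)}\},
\]
that is, V_p is nonpositive for the quadratic form of the boundary matrix and cannot be enlarged without losing this property.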
Maximal dissipative boundary conditions were proposed in [189, 275] in the context of symmetric positive operators, which include symmetric hyperbolic operators as a special case. With such boundary conditions, the IBVP is well posed in the following sense:
Definition 9. Consider the linearized version of the IBVP (5.1, 5.2, 5.3), in which the matrix coefficients and the vector-valued source function do not depend on the unknown. It is called well posed if there are constants such that each choice of compatible initial and boundary data gives rise to a unique solution satisfying a suitable a priori estimate.
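A representative estimate of this type, written with illustrative constants K and α, initial data f, boundary data g and spatial domain Σ (the precise norms and constants are assumptions following the usual pattern for symmetric hyperbolic problems, not taken from the original), is
\[
\| u(t,\cdot) \|^2_{L^2(\Sigma)} \;\le\; K^2 e^{2\alpha t}\left( \| f \|^2_{L^2(\Sigma)} \;+\; \int_0^t \| g(s,\cdot) \|^2_{L^2(\partial\Sigma)}\, ds \right), \qquad t\ge 0 .
\]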
This definition strengthens the corresponding definition in the Laplace analysis, where trivial initial data was assumed and only a time-integral of the norm of the solution could be estimated (see Definition 6). The main result of the theory of maximal dissipative boundary conditions is:
Theorem 7. Consider the linearized version of the IBVP (5.1, 5.2, 5.3), in which the matrix coefficients and the vector-valued source function do not depend on the unknown. Suppose the system is symmetric hyperbolic and that the boundary conditions (5.3) are maximal dissipative. Suppose, furthermore, that the rank of the boundary matrix is constant along the boundary surface. Then, the problem is well posed in the sense of Definition 9. Furthermore, it is strongly well posed if the boundary matrix is invertible.
This theorem was first proven in [189, 275, 344] for the case where the boundary surface is non-characteristic, that is, where the boundary matrix is invertible at every boundary point. A difficulty with the characteristic case is the loss of derivatives of the solution in the normal direction to the boundary (see [422]). This case was studied in [293, 343, 387], culminating in the regularity theorem of [387], which is based on special function spaces that control the norms of tangential and normal derivatives at the boundary (see also [389]). For generalizations of Theorem 7 to the quasilinear case, see [218, 388].
A more practical way of characterizing maximal dissipative boundary conditions is the following. Fix a boundary point and consider the scalar product defined there by the symmetrizer. Since the boundary matrix is Hermitian with respect to this scalar product, there exists a basis of eigenvectors of the boundary matrix that is orthonormal with respect to it. Order the corresponding eigenvalues such that the strictly positive ones come first and the strictly negative ones last. Expanding an arbitrary state vector in this eigenbasis, the expansion coefficients are the characteristic fields and the eigenvalues the associated speeds. Then, the condition (5.90) at the chosen boundary point can be rewritten in terms of these characteristic fields.
In conclusion, a maximal dissipative boundary condition must have the form of Eq. (5.94), which describes a linear coupling of the outgoing characteristic fields to the incoming ones. In particular, there are exactly as many independent boundary conditions as there are incoming fields, in agreement with the Laplace analysis in Section 5.1.1. Furthermore, the boundary conditions must not involve the zero-speed fields. The simplest choice for the coupling is the trivial one, in which case data for the incoming fields is specified directly. A nonzero coupling would be chosen if the boundary is to incorporate reflecting properties, as in the case of a perfectly conducting surface in electromagnetism, for example.
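In the eigenbasis just described, this characterization can be made quantitative. The following sketch introduces, for illustration only, characteristic fields c_± belonging to the positive and negative eigenvalues, diagonal matrices Λ_± > 0 built from the absolute values of those eigenvalues, and a coupling matrix S. A boundary space of the form c_+ = S c_- (with the zero-speed fields unconstrained) is nonpositive precisely when the coupling is small in the speed-weighted sense:
\[
u^*\mathcal{A}\,u \;=\; c_+^*\Lambda_+ c_+ \;-\; c_-^*\Lambda_- c_-
\;=\; c_-^*\bigl( S^*\Lambda_+ S - \Lambda_- \bigr)\, c_- \;\le\; 0
\quad\Longleftrightarrow\quad
S^*\Lambda_+ S \le \Lambda_- ,
\]
and since the dimension of such a space equals the number of nonpositive eigenvalues, it is automatically maximal. Any attempt to involve the zero-speed fields in the coupling destroys nonpositivity, which explains the restriction stated above.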
Example 29. Consider the first-order reformulation of the Klein–Gordon equation in terms of the variables introduced in Example 13. Suppose the spatial domain is the half plane, with the boundary located at its straight edge. Then, the boundary matrix is obtained by contracting the principal symbol with the unit outward normal to this edge.
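The structure of this boundary matrix, and the maximal dissipativity of a Sommerfeld-type condition, can be verified numerically. The sketch below rests on assumptions not fixed above: the reduction variables u = (Φ, Π = ∂_tΦ, Ψ_x = ∂_xΦ, Ψ_y = ∂_yΦ), the half plane x ≤ 0 with unit outward normal along +x, the system written as ∂_t u = A^x ∂_x u + A^y ∂_y u + lower order, and the identity symmetrizer for the principal-part check (a diagonal physical symmetrizer leads to the same conclusion).

```python
# Illustrative check (conventions are assumptions, not taken from the text):
# boundary matrix A(n) = A^x for the first-order Klein-Gordon reduction
# u = (Phi, Pi, Psi_x, Psi_y) on the half plane x <= 0, outward normal n = (1, 0).
import numpy as np

Ax = np.array([[0, 0, 0, 0],      # d_t Phi   = Pi              (lower order)
               [0, 0, 1, 0],      # d_t Pi    = d_x Psi_x + ...
               [0, 1, 0, 0],      # d_t Psi_x = d_x Pi
               [0, 0, 0, 0]],     # d_t Psi_y = d_y Pi          (no d_x term)
              dtype=float)

An = Ax                                        # boundary matrix for n = (1, 0)
eigvals = np.linalg.eigvalsh(An)               # -1, 0, 0, +1
n_nonpos = int(np.sum(eigvals <= 1e-12))       # = 3

# Boundary space of the Sommerfeld condition Pi + Psi_x = 0
# (zero-speed fields Phi and Psi_y remain unconstrained).
V = np.array([[1, 0, 0, 0],                    # Phi
              [0, 0, 0, 1],                    # Psi_y
              [0, 1, -1, 0]], dtype=float).T   # Pi = -Psi_x

Q = V.T @ An @ V                               # boundary quadratic form restricted to V
assert np.all(np.linalg.eigvalsh(Q) <= 1e-12)  # nonpositive on V
assert V.shape[1] == n_nonpos                  # dim V = #nonpositive  =>  maximal
print("A(n) eigenvalues:", eigvals, "-> Sommerfeld boundary space is maximal nonpositive")
```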
Example 30. For Maxwell’s equations on a domain with smooth boundary, the boundary matrix is obtained by contracting the principal symbol with the unit outward normal. Recall that the divergence constraints on the electric and magnetic fields propagate along the time evolution vector field, provided the continuity equation holds. Since this vector field is tangent to the boundary, no additional conditions controlling the constraints need to be specified at the boundary; the constraints are automatically satisfied everywhere, provided they are satisfied on the initial surface.
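As a concrete illustration (the sign conventions and units are assumptions made here, not taken from the text), one can check numerically that the perfectly conducting condition, which sets the tangential electric field to zero, defines a maximal nonpositive boundary space for the Maxwell boundary matrix.

```python
# Illustrative check for Maxwell's equations dE/dt = curl B, dB/dt = -curl E
# (units with c = 1 assumed).  State vector u = (E, B); boundary matrix
# A(n)(E, B) = (n x B, -n x E) for the unit outward normal n.
import numpy as np

def cross_matrix(n):
    """Matrix N such that N v = n x v."""
    return np.array([[0.0, -n[2], n[1]],
                     [n[2], 0.0, -n[0]],
                     [-n[1], n[0], 0.0]])

n = np.array([1.0, 0.0, 0.0])                      # e.g. a boundary with normal along +x
N = cross_matrix(n)
An = np.block([[np.zeros((3, 3)), N],
               [-N, np.zeros((3, 3))]])            # symmetric 6x6 boundary matrix

eigvals = np.linalg.eigvalsh(An)                   # -1, -1, 0, 0, +1, +1
n_nonpos = int(np.sum(eigvals <= 1e-12))           # = 4

# Perfect conductor: n x E = 0, i.e. E parallel to n, B unconstrained.
V = np.zeros((6, 4))
V[:3, 0] = n                                       # E along n
V[3:, 1:] = np.eye(3)                              # all of B

Q = V.T @ An @ V                                   # restricted quadratic form (it vanishes:
assert np.all(np.linalg.eigvalsh(Q) <= 1e-12)      #  no Poynting flux through the wall)
assert V.shape[1] == n_nonpos                      # dim V = #nonpositive  =>  maximal
print("A(n) eigenvalues:", np.round(eigvals, 12))
```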
Example 31. Commonly, one writes Maxwell’s equations as a system of wave equations for the electromagnetic potential in the Lorentz gauge, as discussed in Example 28. By reducing the problem to a first-order symmetric hyperbolic system, one may wonder whether it is possible to apply the theory of maximal dissipative boundary conditions and obtain a well-posed IBVP, as in the previous example. As we shall see in Section 5.2.1, the answer is affirmative, but the correct application of the theory is not completely straightforward. In order to illustrate why this is the case, introduce the first derivatives of the potential as new independent fields. Then, the set of wave equations can be rewritten as a first-order system for a 20-component state vector.
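A naive reduction of this kind (the field names below are illustrative) promotes the time and space derivatives of the potential to independent variables,
\[
u = \bigl( A_\mu,\;\; \pi_\mu := \partial_t A_\mu,\;\; D_{i\mu} := \partial_i A_\mu \bigr),
\qquad 4 + 4 + 3\times 4 = 20 \ \text{components},
\]
so that each wave equation for a component of the potential takes the same first-order form as the scalar Klein–Gordon reduction of Example 13.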
At this point, one might ask why we were able to formulate a well-posed IBVP based on the second-order formulation in Example 28, while the first-order reduction discussed here fails. As we shall see, the reason for this is that there exist many first-order reductions, which are inequivalent to each other, and a slightly more sophisticated reduction works, while the simplest choice adopted here does not. See also [354, 14] for well-posed formulations of the IBVP in electromagnetism based on the potential formulation in a different gauge.
Example 32. A generalization of Maxwell’s equations is the evolution system (5.102, 5.103) for a pair of symmetric, trace-free tensor fields. Decomposing these fields into their parts parallel and orthogonal to the unit outward normal yields the characteristic structure at the boundary. However, one also needs to control an additional incoming field at the boundary, which is related to the constraints of the theory. Like in electromagnetism, the two tensor fields are subject to divergence constraints. However, unlike the Maxwell case, these constraints do not propagate trivially: as a consequence of the evolution equations (5.102, 5.103), the constraint fields obey a nontrivial evolution system of their own.
A solution to this problem was presented in [181] and [187], where a similar system appears in the context of the IBVP for Einstein’s field equations for solutions with anti-de Sitter asymptotics or with an artificial boundary, respectively. The method consists in modifying the evolution system (5.102, 5.103) by adding constraint terms in such a way that the constraint fields of the resulting boundary-adapted system propagate tangentially to the boundary surface. In order to describe this system, one extends the unit outward normal to a smooth vector field on the spatial domain; the boundary-adapted system is then obtained by adding to the evolution equations suitable combinations of the constraint fields multiplied by this vector field.
As anticipated in Example 31, the theory of symmetric hyperbolic first-order equations with maximal dissipative boundary conditions can also be used to formulate well-posed IBVPs for systems of wave equations that are coupled through the boundary conditions, as already discussed in Section 5.1.3 based on the Laplace method. Again, the key idea is to show strong well-posedness, that is, an a priori estimate that controls the first derivatives of the fields in the bulk and at the boundary.
In order to explain how this is performed, we consider the simple case of the Klein–Gordon equation on the half plane. In Example 13 we reduced the problem to a first-order symmetric hyperbolic system, and in Example 29 we determined the class of maximal dissipative boundary conditions for this first-order reduction. Consider the particular case of Sommerfeld boundary conditions, in which the incoming characteristic field is specified at the boundary. Then, Eq. (3.103) gives a conservation law for the energy density.
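In illustrative notation (Π = ∂_tΦ, D_j = ∂_jΦ and mass parameter m, which are assumptions about the variable names rather than the original notation), this conservation law reads
\[
\partial_t\!\left[\tfrac{1}{2}\bigl(\Pi^2 + D^j D_j + m^2\Phi^2\bigr)\right] \;-\; \partial_j\bigl(\Pi\, D^j\bigr) \;=\; 0 ,
\]
so that, upon integration over the half plane, the time derivative of the energy is given by a boundary integral of the flux Π D^x through the edge.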
On the other hand, the first-order reduction is not unique and, as we show now, different reductions may lead to stronger estimates. For this, we choose a suitable real constant and define new fields in which the time derivative of the scalar field is replaced by its derivative along a time-like direction tilted towards the outward normal; this yields a new symmetric hyperbolic system.
Summarizing, we have seen that the most straightforward first-order reduction of the Klein–Gordon equation does not lead to strong well-posedness. However, strong well-posedness can be obtained by choosing a more sophisticated reduction, in which the time derivative of the scalar field is replaced by its derivative along a time-like vector that points outside the domain at the boundary surface. In fact, it is possible to obtain a symmetric hyperbolic reduction leading to strong well-posedness for any future-directed time-like vector field that points outside the domain at the boundary. Based on the geometric definition of first-order symmetric hyperbolic systems in [205], it is possible to generalize this result to systems of quasilinear wave equations on curved backgrounds [264].
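A schematic version of this construction for the half-plane example (the choice of domain x ≤ 0, the constant a and the variable names are assumptions used only for illustration) replaces the time derivative by
\[
\pi \;:=\; \partial_t\Phi + a\,\partial_x\Phi , \qquad 0 < a < 1 ,
\]
so that the vector field ∂_t + a ∂_x is time-like and points out of the domain at the boundary x = 0; the reduction in terms of (Φ, π, ∂_xΦ, ∂_yΦ) is again symmetric hyperbolic and leads to the stronger boundary estimate described above.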
In order to describe the result in [264], consider a vector bundle over the spacetime manifold, a fixed, given connection on this bundle, and a Lorentz metric on the base manifold that depends pointwise and smoothly on a vector-valued function parameterizing a local section of the bundle. Assume that each time slice is space-like and that the boundary is time-like with respect to this metric. We consider a system of quasilinear wave equations for such sections.
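Schematically (the symbols used here are illustrative and are not taken from [264]), such a system has the form
\[
g^{\mu\nu}(x,\Phi)\,\nabla_{\!\mu}\nabla_{\!\nu}\,\Phi \;=\; F(x,\Phi,\nabla\Phi),
\]
where Φ is a local section of the bundle, ∇ denotes the given connection and F depends smoothly on its arguments; the metric, and hence the principal symbol, depends on the unknown itself, which is what makes the system quasilinear.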
Theorem 8. [264] The IBVP (5.113, 5.114, 5.115) is well posed. Given a finite time interval and sufficiently small and smooth initial and boundary data satisfying the compatibility conditions at the edge, there exists a unique smooth solution on that time interval satisfying the evolution equation (5.113), the initial condition (5.114) and the boundary condition (5.115). Furthermore, the solution depends continuously on the initial and boundary data.
Theorem 8 provides the general framework for treating wave systems with constraints, such as Maxwell’s equations in the Lorentz gauge and, as we will see in Section 6.1, Einstein’s field equations with artificial outer boundaries.
Here, we show how to prove the existence of weak solutions for linear, symmetric hyperbolic equations with variable coefficients and maximal dissipative boundary conditions. The method can also be applied to a more general class of linear symmetric operators with maximal dissipative boundary conditions; see [189, 275]. The proof below will shed some light on the maximality condition for the boundary spaces.
Our starting point is an IBVP of the form (5.1, 5.2, 5.3), in which the matrix coefficients do not depend on the unknown and the source term is replaced by a given function of space and time, so that the system is linear. Furthermore, we can assume that the initial and boundary data are trivial. We require the system to be symmetric hyperbolic with a symmetrizer satisfying the conditions in Definition 4(iii), and assume the boundary conditions (5.3) are maximal dissipative. We rewrite the IBVP as the abstract linear problem (5.118).
For the following, the adjoint IBVP plays an important role. This problem is defined as follows. First, the symmetrizer defines a natural scalar product on the space of square-integrable state vectors over the spatial domain.
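Written schematically, with H denoting the symmetrizer and Σ the spatial domain (notation introduced here for illustration), this scalar product is
\[
(u, v)_H \;=\; \int_\Sigma u^*\, H\, v\; d^n x ,
\]
which is equivalent to the standard L² scalar product because the symmetrizer is uniformly positive and bounded.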
Lemma 4. Let a boundary point be given. Then, the boundary space at that point is maximal nonpositive if and only if the corresponding adjoint boundary space is maximal nonnegative.
Proof. Fix a boundary point and consider the boundary matrix there, obtained by contracting the principal symbol with the unit outward normal to the boundary at that point. Since the system is symmetric hyperbolic, this matrix is Hermitian with respect to the scalar product defined by the symmetrizer. Decompose the state space into orthogonal subspaces on which this matrix is positive, negative and zero, respectively, and equip the positive and negative subspaces with the scalar products obtained by restricting the boundary quadratic form and its negative to them, respectively.
The lemma implies that solving the original problem is equivalent to solving the adjoint problem, which, with the final time held fixed, corresponds to the time-reversed problem with the adjoint boundary conditions. From the a priori energy estimates we obtain:
Lemma 5. There is a constant such that solutions of the abstract problem (5.118) and of its adjoint obey the a priori bounds (5.128) in terms of their respective source terms.
Proof. From the energy estimates in Section 3.2.3 one easily obtains these bounds.
In particular, Lemma 5 implies that (strong) solutions to the IBVP and its adjoint are unique.
Since the operator defining the abstract problem and its adjoint are closable [345], their closures satisfy the same inequalities as in Eq. (5.128). Now we are ready to define weak solutions and to prove their existence:
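A standard way of defining this notion, consistent with the Green-type identity referred to below (the operator symbols L and L† and the source term F are schematic and not taken from the original), is the following: a square-integrable u is called a weak solution if
\[
\bigl(u,\, L^\dagger v\bigr) \;=\; \bigl(F,\, v\bigr)
\qquad \text{for all smooth test functions } v
\]
that satisfy the adjoint boundary conditions and vanish at the final time, where L denotes the operator of the abstract problem (5.118) and L† its adjoint.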
In order to prove the existence of such a weak solution, we introduce a suitable linear space and equip it with a scalar product adapted to the adjoint operator; the a priori bound for the adjoint problem in Lemma 5 then allows one to obtain a weak solution from a Riesz representation argument.
If a weak solution is sufficiently smooth, it follows from the Green-type identity (5.120) that it has vanishing initial data and that it satisfies the required boundary conditions, and hence it is a solution to the original IBVP (5.118). The difficult part is to show that a weak solution is indeed sufficiently regular for this conclusion to hold. See [189, 275, 344, 343, 387] for such “weak=strong” results.