Generated Jacobian Equations, part 1/n


(12 Feb 2019)

Prescribed Jacobians

Often in mathematics we find ourselves looking for transformations between geometric shapes that send a given mass distribution into another. Concretely, given two mass distributions $f$ and $g$ in $\mathbb{R}^d$, we seek a map $T$ such that

\[\int_{T(E)}g(y)\;dy = \int_{E}f(x)\;dx,\;\forall\;E\subset \mathbb{R}^d.\]

That is, we want to find a transformation $T$ that spreads the mass $f$ into the mass $g$. There are many contexts where one needs to find such a transformation or study its properties, and we will go over a list of examples later.

If the transformation $T$ satisfies the above condition and is also differentiable, the change of variables formula tells us that the Jacobian of $T$ (the determinant of the derivative $DT(x)$) must satisfy the equation

\[\det (DT(x))g(T(x)) = f(x),\;\forall\;x.\]

In other words, the Jacobian of the map $T$ is prescribed in terms of $x$ and its image under $T$. Conversely, if a differentiable map $T$ has a Jacobian satisfying the above relation for all $x$, then it will map $f$ into $g$.
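To make this concrete, here is a small one-dimensional sanity check using sympy. The densities $f \equiv 1$ and $g(y) = 2y$ on $(0,1)$ and the map $T(x) = \sqrt{x}$ are hypothetical choices for illustration, not from the post.

```python
import sympy as sp

# Hypothetical 1D example: send the uniform density f(x) = 1 on (0, 1)
# to the density g(y) = 2y on (0, 1) via the map T(x) = sqrt(x).
x, y, t = sp.symbols('x y t', positive=True)
T = sp.sqrt(x)
f = sp.Integer(1)
g = lambda w: 2 * w

# Jacobian relation: T'(x) * g(T(x)) should equal f(x) = 1.
print(sp.simplify(sp.diff(T, x) * g(T)))  # 1

# Mass balance on E = (0, t): the image is T(E) = (0, sqrt(t)), and
# the mass of g over T(E) should equal the mass of f over E.
mass_target = sp.integrate(g(y), (y, 0, T.subs(x, t)))
mass_source = sp.integrate(f, (x, 0, t))
print(sp.simplify(mass_target - mass_source))  # 0
```

Both printed quantities vanish as expected, confirming the two (equivalent) formulations of the condition on $T$ in this example.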

This can be considered a first order partial differential equation, which is (very) nonlinear, with $T(x)$ being the unknown. However, this is too broad and difficult for our purposes. We are interested in situations where the mapping $T$ is determined by the gradient of a scalar field, a potential: that is, there is a rule associating to every scalar function $u(x)$ a map $T_u(x)$, and we are only interested in maps $T$ that arise from such a scalar $u$.

Concretely, we are interested in situations where we are given a function

\[Y:\Omega\times \mathbb{R}\times \mathbb{R}^d \to \overline{\Omega}\]

and then one looks for $T$ of the form

\[T_u(x) := Y(x,u(x),Du(x))\]

for some scalar function $u$. Note that in this case the chain rule gives us the formula

\[DT_u (x) = Y_x + Y_z \otimes Du(x) + Y_p D^2u(x)\]

Here $Y_x$, $Y_z$, and $Y_p$ denote the partial derivatives of $Y$ with respect to each group of arguments, evaluated at $(x,u(x),Du(x))$. Therefore, the first order equation for $T_u$ becomes a second order equation for $u$: factoring out $Y_p$, we have $DT_u = Y_p\left(D^2u + A\right)$ with $A(x,u,Du) := Y_p^{-1}\left(Y_x + Y_z\otimes Du\right)$, and the equation takes the form

\[\det ( D^2u(x) +A(x,u(x),Du(x))) = \frac{f(x)}{g(Y(x,u(x),Du(x)))}\frac{1}{\det(Y_p(x,u(x),Du(x)))}\]
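This reduction can be sanity-checked symbolically. The following sympy sketch uses the hypothetical choice $Y(x,z,p) = p + x$ in dimension two (so $Y_p = I$, $Y_x = I$, $Y_z = 0$, and the matrix inside the determinant is $D^2u + I$), together with an arbitrary smooth test potential:

```python
import sympy as sp

# Hypothetical choice Y(x, z, p) = p + x in d = 2, so that
# T_u(x) = Du(x) + x and DT_u should equal D^2u + I.
x1, x2 = sp.symbols('x1 x2')
u = sp.exp(x1) * sp.cos(x2) + x1**2 * x2   # arbitrary smooth potential

Du = sp.Matrix([sp.diff(u, x1), sp.diff(u, x2)])
T = Du + sp.Matrix([x1, x2])               # T_u(x) = Y(x, u, Du) = Du + x

DT = T.jacobian([x1, x2])                  # Jacobian of T_u, computed directly
D2u = sp.hessian(u, (x1, x2))

# det(DT_u) = det(Y_p) * det(D^2u + A); here Y_p = I and A = I.
print(sp.simplify(DT.det() - (D2u + sp.eye(2)).det()))  # 0
```

The direct Jacobian of $T_u$ and the determinant of $D^2u + A$ agree, as the factorization predicts.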

This type of equation is known as a prescribed Jacobian equation.

The simplest such equation is the Monge-Ampère equation, corresponding to $Y(x,u,Du) = Du$,

\[\det (D^2u) = \frac{f(x)}{g(Du(x))}.\]

Solving this equation with the right boundary conditions gives a function whose gradient $Du$ maps $f$ into $g$.
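For a quick check of the Monge-Ampère case, consider a hypothetical example with a quadratic potential and uniform target density $g \equiv 1$, so the right-hand side is just $f(x)$:

```python
import sympy as sp

# Hypothetical quadratic potential u = (a x1^2 + b x2^2)/2 with a, b > 0.
# Its gradient map Du(x) = (a x1, b x2) has constant Jacobian a*b, so it
# pushes the constant density f = a*b into the uniform density g = 1.
x1, x2, a, b = sp.symbols('x1 x2 a b', positive=True)
u = (a * x1**2 + b * x2**2) / 2

D2u = sp.hessian(u, (x1, x2))
print(D2u.det())  # a*b, matching f(x)/g(Du(x)) = a*b / 1
```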

Generated Jacobian Equations

We are only going to be interested in prescribed Jacobian equations arising from a generating function, known as Generated Jacobian Equations (GJE). This is a broad class of nonlinear equations covering optimal transportation, the design of optical surfaces, differential geometry, economics, and more.

In each case, the following three ingredients are present

1) A map $T$ between two $d$-dimensional manifolds, whose Jacobian is prescribed

\[\det(DT(x)) = f(x)/g(T(x)).\]

2) A structure that produces $T$ from a scalar potential $u$, meaning there is some function $Y$ such that for some scalar field $u$ we have

\[T(x) = Y(x,u(x),Du(x)).\]

3) The scalar potential $u(x)$ satisfies a generalized convexity condition.

As it turns out, the second and third ingredients are two aspects of the same structure, and behind that structure is a generating function. One may think of the generating function as determining the type of GJE we are considering, in the same way a Riemannian metric determines a linear elliptic equation (i.e. $\Delta_g u = 0$, where $\Delta_g$ is the Laplace-Beltrami operator for the metric).

Consider two domains $\Omega,\overline{\Omega} \subset \mathbb{R}^d$ (open, bounded subsets). A generating function $G$ is a function

\[G: \Omega\times \overline{\Omega}\times \mathbb{R} \to \mathbb{R},\]

which has the following two properties

1) $G$ is differentiable and strictly decreasing in its third argument, i.e. $G(x,y,z) > G(x,y,z')$ if $z < z'$. In particular, this defines a smooth function $H$ such that $G(x,y,H(x,y,u)) = u$ for all $x,y,u$.

2) For fixed $x\in \Omega$, the map $ (y,z) \mapsto (D_xG(x,y,z),G(x,y,z))$ is injective, with a smooth inverse defined on its image.

The simplest example of a generating function is the linear generating function,

\[G(x,y,z) = x\cdot y-z.\]
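Both defining properties are easy to verify for the linear generating function; here is a small sympy sketch in dimension one:

```python
import sympy as sp

# Linear generating function G(x, y, z) = x*y - z, in dimension d = 1.
x, y, z, u, p = sp.symbols('x y z u p')
G = x * y - z

# Property 1: G is strictly decreasing in z, so we can solve G = u for z,
# obtaining H(x, y, u) with G(x, y, H(x, y, u)) = u. Here H = x*y - u.
H = sp.solve(sp.Eq(G, u), z)[0]
print(H)                                  # x*y - u
print(sp.simplify(G.subs(z, H) - u))      # 0

# Property 2: for fixed x, the map (y, z) -> (D_x G, G) = (y, x*y - z)
# is injective; its smooth inverse is (p, u) -> (p, x*p - u).
DxG = sp.diff(G, x)                       # equals y
print(sp.simplify(DxG.subs(y, p) - p))              # 0
print(sp.simplify(G.subs({y: p, z: x*p - u}) - u))  # 0
```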

In the following posts I will explain how, associated to every generating function, there is a notion of gradient/subdifferential, a transform between functions, a notion of convex functions, and a notion of segments and convex sets. In the case of the linear generating function these objects reduce to the usual gradient/subdifferential, the Legendre transform, and the usual notions from convex geometry. This rich geometric structure is intrinsic to the equation, be it from optimal transport or from geometric optics.

In the next post I will review other, more interesting examples of generating functions and discuss the elements of generating functions in more detail; later on I will introduce the Ma-Trudinger-Wang tensor and the notion of weak solutions to GJE.