A blob method for degenerate diffusion


Transcript of A blob method for degenerate diffusion

Page 1: A blob method for degenerate diffusion

Katy Craig University of California, Santa Barbara

joint with José Antonio Carrillo (Oxford), Francesco Patacchini (IFP Energies), Karthik Elamvazhuthi (UCLA), Matt Haberland (Cal Poly), Olga Turanova (Michigan State)

Simons Center for Theory of Computing “Sampling Algorithms and Geometries on Probability Distributions” September 7th, 2021

[Figure: particle trajectories, Position (−0.5 to 0.5) versus Time (0 to 3).]

A blob method for degenerate diffusion and applications to sampling and two layer neural networks.

Page 2: A blob method for degenerate diffusion

• Motivation • Wasserstein gradient flows • Particle methods (discrete ↔ continuum) • Particle method + regularization = blob method for diffusive PDEs • Numerics


Plan

2

Page 3: A blob method for degenerate diffusion

Sampling/robot coverage algorithms

3

Consider a target distribution ρ̄ ∈ 𝒫(ℝ^d).

Sampling: How can we choose samples {x̄_i}_{i=1}^N ⊆ ℝ^d so that (with high probability) they accurately represent the desired target distribution?

Coverage: How can we program robots to move so that they distribute their locations according to ρ̄ (deterministically)?

In both cases, we seek to approximate ρ̄ by an empirical measure:

ρ̄^N := (1/N) Σ_{i=1}^N δ_{x̄_i} → ρ̄ as N → +∞.

PDEs can inspire new ways to construct the empirical measure.

Page 4: A blob method for degenerate diffusion

PDEs and sampling/coverage algs

4

Suppose ρ̄ = e^{−V}, for V : ℝ^d → ℝ λ-convex.

Diffusion:

[Villani 2008,…],

Particle method: [Fournier, Guillin 2015]

∂tρ = ∇ ⋅ (ρ∇log (ρ/ρ̄)) = Δρ − ∇ ⋅ (ρ∇log ρ̄)

KL(ρ(t), ρ̄) ≤ e^{−λt} KL(ρ(0), ρ̄),  where KL(μ, ν) = ∫ μ log(μ/ν)

dX_t = √2 dB_t + ∇log ρ̄(X_t) dt

ρ^N(t) := (1/N) Σ_{i=1}^N δ_{X_i(t)} → ρ(t) as N → +∞
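As a concrete illustration of this particle method, here is a minimal Euler–Maruyama sketch of the Langevin dynamics dX_t = √2 dB_t + ∇log ρ̄(X_t) dt (not from the talk); the standard Gaussian target, step size, and particle count are illustrative assumptions.

```python
import numpy as np

def grad_log_rho_bar(x):
    """Score ∇log ρ̄ for a standard Gaussian target ρ̄ ∝ exp(-|x|²/2) (illustrative choice)."""
    return -x

def langevin_particles(N=1000, d=2, dt=1e-2, steps=2000, seed=0):
    """Euler–Maruyama discretization of dX_t = ∇log ρ̄(X_t) dt + √2 dB_t for N particles."""
    rng = np.random.default_rng(seed)
    X = rng.normal(scale=3.0, size=(N, d))   # spread-out initial particles
    for _ in range(steps):
        X += grad_log_rho_bar(X) * dt + np.sqrt(2 * dt) * rng.normal(size=(N, d))
    return X   # the empirical measure (1/N) Σ_i δ_{X_i} approximates ρ̄

X = langevin_particles()
print(X.mean(axis=0), X.var(axis=0))   # ≈ 0 and ≈ 1 for this target
```

For the degenerate diffusion below, no analogous off-the-shelf stochastic particle method is available; that is the gap the blob method fills.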

Degenerate diffusion:

[Matthes et al. 2009; Chewi et al. 2020]

Particle method: ?

∂tρ = ∇ ⋅ (ρ∇(ρ/ρ̄)), which equals (1/2) Δρ² when ρ̄ = 1

KL(ρ(t), ρ̄) ≤ e^{−λt} KL(ρ(0), ρ̄)

Motivation for degenerate diffusion:
• Sampling: SVGD, chi-squared divergence
• PDE: porous media, swarming, …
• Coverage: deterministic particle method
• Optimization: training neural networks with a single hidden layer, RBF activation

Page 5: A blob method for degenerate diffusion

• Motivation • Wasserstein gradient flows • Particle methods (discrete ↔ continuum) • Particle method + regularization = blob method for diffusive PDEs • Numerics


Plan

5

Page 6: A blob method for degenerate diffusion

W2 gradient flows

6

Diffusion: ∂tρ = ∇ ⋅ (ρ∇log(ρ/ρ̄)),  E(ρ) = ∫ ρ log(ρ/ρ̄) = KL(ρ, ρ̄)

Degenerate Diffusion: ∂tρ = ∇ ⋅ (ρ∇(ρ/ρ̄)),  E(ρ) = (1/2) ∫ |ρ − ρ̄|²/ρ̄ = χ²(ρ, ρ̄)

Aggregation + Drift: ∂tρ = ∇ ⋅ (ρ∇(K * ρ)) + ∇ ⋅ (ρ∇V),  E(ρ) = (1/2) ∬ K(x − y) dρ(x) dρ(y) + ∫ Vρ

2-layer neural networks: [MMN ’18] [RVE ’18] [CB ’18] …

E(ρ) = (1/2) ∫ | ∫ Φ(x, z) dρ(x) − f_0(z) |² dν(z)

Choices of Φ:  Φ(x, z) = x_1 (Σ_i x_i z_i + x_d)_+  (ReLU),  or  Φ(x, z) = ψ(|x − z|)  (RBF).

Expanding the square,

E(ρ) = (1/2) ∬ K(x, y) dρ(x) dρ(y) + ∫ V(x) dρ(x) + C,   K(x, y) = ∫ Φ(x, z) Φ(y, z) dν(z),   V(x) = −∫ Φ(x, z) f_0(z) dν(z).

For the RBF choice, the interaction term becomes (1/2) ∫ (ψ * ρ)² dν, which recovers (1/2) ∫ |ρ|²/ρ̄ + C as ψ concentrates.

In every case, the evolution is the W_2 gradient flow  ∂tρ(t) = −∇_{W_2} E(ρ(t)).
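To see the expansion concretely, here is a small Monte Carlo check (a sketch, not from the talk): for the RBF choice it evaluates the least-squares risk both directly and in the kernel-plus-drift form. The Gaussian bump ψ, Gaussian ν, target f_0, and all sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def Phi(x, z, eps=0.3):
    """RBF feature Φ(x, z) = ψ(|x - z|) with a Gaussian bump ψ (illustrative choice, 1D)."""
    return np.exp(-(x[:, None] - z[None, :])**2 / (2 * eps**2))

f0 = lambda z: np.sin(z)          # hypothetical target function
x = rng.normal(size=50)           # particle locations (network parameters)
z = rng.normal(size=20000)        # Monte Carlo samples from ν (standard Gaussian)

P = Phi(x, z)                     # (N, M) matrix of Φ(x_i, z_k)
pred = P.mean(axis=0)             # ∫ Φ(x, z) dρ^N(x) = (1/N) Σ_i Φ(x_i, z)

# Direct least-squares risk: E(ρ^N) = (1/2) ∫ |∫ Φ dρ^N - f_0|² dν
E_direct = 0.5 * np.mean((pred - f0(z))**2)

# Interaction + drift form: (1/2) ∬ K dρ^N dρ^N + ∫ V dρ^N + C, with
# K(x, y) = ∫ Φ(x,z)Φ(y,z) dν, V(x) = -∫ Φ(x,z) f_0(z) dν, C = (1/2) ∫ f_0² dν
K = P @ P.T / len(z)
V = -(P * f0(z)[None, :]).mean(axis=1)
C = 0.5 * np.mean(f0(z)**2)
E_split = 0.5 * K.mean() + V.mean() + C

print(E_direct, E_split)          # agree (same Monte Carlo samples for ν)
```

Both evaluations use the same samples of ν, so they agree to floating-point precision; the point is only to make the roles of K, V, and C explicit.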

Page 7: A blob method for degenerate diffusion

• Motivation • Wasserstein gradient flows • Particle methods (discrete ↔ continuum) • Particle method + regularization = blob method for diffusive PDEs • Numerics


Plan

7

Page 8: A blob method for degenerate diffusion

W2 gradient flows

8

Aggregation + Drift:

∂tρ = ∇ ⋅ (ρ∇(K * ρ)) + ∇ ⋅ (ρ∇V),  E(ρ) = (1/2) ∫ (K * ρ)ρ + ∫ Vρ

All W2 gradient flows are solutions of continuity equations

∂tρ + ∇ ⋅ (ρ v[ρ]) = 0,  v[ρ] = −∇(∂E/∂ρ)

Diffusion:

∂tρ = ∇ ⋅ (ρ∇log(ρ/ρ̄)),  E(ρ) = ∫ ρ log(ρ/ρ̄) = KL(ρ, ρ̄)

Degenerate Diffusion:

∂tρ = ∇ ⋅ (ρ∇(ρ/ρ̄)),  E(ρ) = (1/2) ∫ |ρ|²/ρ̄

Page 9: A blob method for degenerate diffusion

Particle methods

9

Consider a continuity equation with uniformly Lipschitz continuous velocity v[ρ] : ℝ^d → ℝ^d:

∂tρ + ∇ ⋅ (ρ v[ρ]) = 0,  ρ(x, 0) = ρ_0(x).

1. Approximate the initial data:  ρ^N_0 = (1/N) Σ_{i=1}^N δ_{x_i}

2. Evolve the locations:  (d/dt) x_i(t) = v[ρ^N(t)](x_i(t))  ⟺  ∂tρ^N + ∇ ⋅ (ρ^N v[ρ^N]) = 0,  where ρ^N(t) = (1/N) Σ_{i=1}^N δ_{x_i(t)}

3. Since v[ρ] is uniformly Lipschitz,  W_2(ρ^N(t), ρ(t)) ≤ e^{‖∇v‖_∞ t} W_2(ρ^N_0, ρ_0) → 0 as N → +∞.

…what about v not uniformly Lipschitz?
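As an illustration of steps 1–3 for the aggregation + drift velocity v[ρ^N](x_i) = −(1/N) Σ_j ∇K(x_i − x_j) − ∇V(x_i), here is a minimal forward-Euler sketch; the Gaussian interaction kernel, quadratic confinement, step size, and particle count are illustrative assumptions, not choices from the talk.

```python
import numpy as np

def grad_K(z, eps=0.5):
    """∇K for a Gaussian interaction kernel K(z) = exp(-|z|²/(2 eps²)) (illustrative choice)."""
    return -z / eps**2 * np.exp(-np.sum(z**2, axis=-1, keepdims=True) / (2 * eps**2))

def grad_V(x):
    """∇V for a quadratic confining potential V(x) = |x|²/2 (illustrative choice)."""
    return x

def particle_method(X0, dt=1e-2, steps=500):
    """Forward Euler for dx_i/dt = v[ρ^N](x_i) = -(1/N) Σ_j ∇K(x_i - x_j) - ∇V(x_i)."""
    X = X0.copy()
    for _ in range(steps):
        diffs = X[:, None, :] - X[None, :, :]            # (N, N, d) pairwise differences x_i - x_j
        X = X + dt * (-grad_K(diffs).mean(axis=1) - grad_V(X))
    return X

X = particle_method(np.random.default_rng(0).normal(size=(200, 2)))
print(X.mean(axis=0), X.std(axis=0))
```

This recipe is justified because ∇K and ∇V are Lipschitz; the next slide explains why it breaks down for (degenerate) diffusion.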

Page 10: A blob method for degenerate diffusion

Wasserstein gradient flows

10

Aggregation + Drift:

Aggregation + Drift:  ∂tρ = ∇ ⋅ (ρ∇(K * ρ)) + ∇ ⋅ (ρ∇V),  E(ρ) = (1/2) ∫ (K * ρ)ρ + ∫ Vρ
→ velocity Lipschitz for D²K, D²V bounded

Diffusion:  ∂tρ = ∇ ⋅ (ρ∇log(ρ/ρ̄)),  E(ρ) = ∫ ρ log(ρ/ρ̄) = KL(ρ, ρ̄)
→ velocity not Lipschitz

Degenerate Diffusion:  ∂tρ = ∇ ⋅ (ρ∇(ρ/ρ̄)),  E(ρ) = (1/2) ∫ |ρ|²/ρ̄
→ velocity not Lipschitz

How can we make degenerate diffusion more like aggregation? Regularize.

Page 11: A blob method for degenerate diffusion

• Motivation • Wasserstein gradient flows • Particle methods (discrete ↔ continuum) • Particle method + regularization = blob method for diffusion • Numerics


Plan

11

Page 12: A blob method for degenerate diffusion

Blob method for diffusion

12

Degenerate Diffusion:

∂tρ = ∇ ⋅ (ρ∇(ρ/ρ̄)),  E(ρ) = (1/2) ∫ |ρ|²/ρ̄

Approximation of Degenerate Diffusion:

∂tρ = ∇ ⋅ (ρ ∇φ_ϵ * (φ_ϵ * ρ/ρ̄)),  E_ϵ(ρ) = (1/2) ∫ |φ_ϵ * ρ|²/ρ̄

Theorem (C., Elamvazhuthi, Haberland, Turanova, in preparation): The velocity v_ϵ[ρ] = −∇φ_ϵ * (φ_ϵ * ρ/ρ̄) is C_R ϵ^{−d−2}-Lipschitz on Ω ⊆ B_R(0).

Consequently, the particle method is well-posed:

(d/dt) x_i(t) = −∇φ_ϵ * ((φ_ϵ * ρ^N(t))/ρ̄)(x_i(t)) = −∇φ_ϵ * ( (1/N) Σ_{j=1}^N φ_ϵ( ⋅ − x_j(t)) / ρ̄ )(x_i(t)),

and, for fixed ϵ > 0, as N → +∞ this converges to the gradient flow of E_ϵ.

What happens as N → +∞ and ϵ → 0?

Recall the RBF two-layer NN energy:  E(ρ) = ∫ (ψ * ρ)² dν − 2 ∫ ψ * (f_0 ν) dρ,  with ψ * (f_0 ν) playing the role of a drift potential V.
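A minimal one-dimensional sketch of this particle system (not the authors' implementation): Gaussian mollifier φ_ϵ, outer convolution approximated by trapezoidal quadrature on a grid, forward Euler in time, and a hypothetical Gaussian target ρ̄. The grid must resolve the scale ϵ, and the explicit time step is kept small since the velocity is only ϵ^{−d−2}-Lipschitz.

```python
import numpy as np

def phi(z, eps):
    """Gaussian mollifier φ_ε in one dimension."""
    return np.exp(-z**2 / (2 * eps**2)) / np.sqrt(2 * np.pi * eps**2)

def grad_phi(z, eps):
    """Derivative of the mollifier, ∇φ_ε."""
    return -z / eps**2 * phi(z, eps)

def blob_velocity(x, eps, rho_bar, y):
    """v_ε[ρ^N](x_i) = -∇φ_ε * ((φ_ε * ρ^N)/ρ̄)(x_i), outer convolution by quadrature on grid y."""
    mollified = phi(y[None, :] - x[:, None], eps).mean(axis=0)   # (φ_ε * ρ^N)(y_k)
    integrand = mollified / rho_bar(y)                           # (φ_ε * ρ^N)(y_k) / ρ̄(y_k)
    kernel = grad_phi(x[:, None] - y[None, :], eps)              # ∇φ_ε(x_i - y_k)
    return -np.trapz(kernel * integrand[None, :], y, axis=1)

def blob_method(x0, eps, rho_bar, y, dt=2e-4, steps=500):
    """Forward-Euler time stepping of dx_i/dt = v_ε[ρ^N(t)](x_i(t))."""
    x = x0.copy()
    for _ in range(steps):
        x = x + dt * blob_velocity(x, eps, rho_bar, y)
    return x

N = 100
eps = (1.0 / N) ** 0.99                                       # scaling used in the numerics below
y = np.linspace(-3.0, 3.0, 4001)                              # quadrature grid; spacing must resolve ε
rho_bar = lambda s: np.exp(-s**2 / 2) / np.sqrt(2 * np.pi)    # illustrative Gaussian target
x = blob_method(np.linspace(-1.0, 1.0, N), eps, rho_bar, y)
```

The choice ϵ = (1/N)^{0.99} mirrors the scaling used in the numerics that follow.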

Page 13: A blob method for degenerate diffusion

Convergence of blob method

13

Previous work:
• [Oelschläger ’98]: convergence of particle method to smooth, positive solutions
• [Lions, Mas-Gallic 2000]: convergence of bounded entropy solutions as ϵ → 0 (particles not allowed)
• [Carrillo, C., Patacchini 2017]: convergence of bounded entropy solutions of ∂tρ = Δρ^m, m ≥ 2 (ρ̄ = 1); allows additional GF terms (aggregation, drift, …)
• [Javanmard, Mondelli, Montanari 2019]: convergence of particle method to smooth, strictly positive solutions; allows additional GF terms (2-layer NN)

Theorem (C., Elamvazhuthi, Haberland, Turanova, in prep.): Suppose

• ρ̄ = e^{−V}, for V : ℝ^d → ℝ convex, on a bounded, convex domain Ω;

• W_2(ρ^N_0, ρ_0) = o(e^{−1/ϵ^{d+2}}) for ρ_0 with bounded entropy.

Then ρ^N(t) → ρ(t) as ϵ → 0, for all t ∈ [0, T].

In the limit of the 2-layer NN, the limiting dynamics are a convex GF for ν log-convex and f_0 ν concave.

Page 14: A blob method for degenerate diffusion

Implications

14

Sampling: Spatially discrete, deterministic particle method for sampling according to chi-squared divergence (cf. [Chewi et al. ’20])

PDE: Provably convergent numerical method for diffusive gradient flows with low regularity (merely bounded entropy)

Coverage: Deterministic particle method well-suited to robotics

Optimization:

• The particle method is equivalent to the training dynamics for neural networks with a single hidden layer and RBF activation.

• Our result identifies the limiting dynamics in the overparametrized regime (N → +∞) as the variance of the RBF decreases to zero (ϵ → 0), ν ≠ 1.

• The limiting dynamics are a convex GF for ν log-convex and f_0 ν concave.

E(ρ) = ∫ (ψ * ρ)² dν − 2 ∫ ψ * (f_0 ν) dρ

Page 15: A blob method for degenerate diffusion

• Motivation: - Diffusive PDEs and sampling/coverage algorithms - Training dynamics for neural networks with a single hidden layer

• Wasserstein gradient flows • Particle methods (discrete ↔ continuum) • Particle method + regularization = blob method for diffusive PDEs • Numerics


Plan

15

Page 16: A blob method for degenerate diffusion

16

Numerics

Figure 2: caption!

Figure 3: caption!

For moderate confinement (middle row, k = 100), we observe enter the rate of convergence. For strong confinement (bottom row, k = 10^9), we observe enter rate of convergence. We believe that to achieve optimal rates of convergence in the presence of strong confinement, one will need to optimize the ODE solver on the GPU to the presence of the confining potential. As the main goal of the present work is analysis of the particle interactions and convergence to the continuum PDE, we leave further analysis of optimal implementation of the confining potential to future work.

N = 100, ϵ = (1/N)^0.99

Page 17: A blob method for degenerate diffusion

17

Numerics: ρ̄ log-concave

[Log-log plots of ‖ρ_{ϵ,N}(t) − ρ̄‖_1 and ‖ρ^N(t = 2.0) − ρ̄‖_1 versus N; fitted slope = −0.96.]

Figure 4: Comparison of how the strength of the confining potential affects the evolution of the density. Left: no confinement (k = 0). Middle: medium confinement (k = 100). Right: strong confinement (k = 10^9).

Figure 5: Desired target distribution ρ̄(x) = C/(1 + |x|²) for N = 25, 50, 100, 200, 400 robots. Matt will tweak line of best fit to capture line of initial decay, not bumps in 800 robot curve.

In Figure 7.2.6, we contrast the previous simulations, in the absence of confinement (k = 0) and with Hölder continuous initial data given by a Barenblatt profile, with the analogous simulation for discontinuous initial data, given by a characteristic function ρ_0 = 1_{[−1.25, 1.25]}. As the corresponding solution of the porous medium equation with initial data ρ_0 has lower regularity, we anticipate that the rate of convergence of the numerical method should be slower. Indeed, we observe that the rate of convergence decreases from second order to first order (check!) in both the case of ρ̄ uniform (left) and ρ̄ log-concave (right).

7.2.7 Convergence to Steady State

In Figure 9, we examine the rate of convergence of our method to the desired target distribution ρ̄ as the number of particles n increases. Given that we only expect convergence to ρ̄ when the dynamics are confined to a bounded domain, we only consider the case of a strong confining potential (k = 10^9). We observe insert the rate of convergence for both uniform and log-concave target distributions ρ̄.
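The convergence plots above measure ‖ρ_{ϵ,N}(t) − ρ̄‖_1, with ρ_{ϵ,N} = φ_ϵ * ρ^N the mollified particle density. Here is a minimal sketch of that diagnostic (the grid, the Gaussian target, and the stand-in particle locations are illustrative; this does not reproduce the confined setup of the figures).

```python
import numpy as np

def mollified_density(x, eps, grid):
    """ρ_{ε,N}(y) = (φ_ε * ρ^N)(y) = (1/N) Σ_i φ_ε(y - x_i), Gaussian mollifier, 1D."""
    z = grid[None, :] - x[:, None]
    return np.mean(np.exp(-z**2 / (2 * eps**2)) / np.sqrt(2 * np.pi * eps**2), axis=0)

def l1_error(x, eps, rho_bar, grid):
    """‖ρ_{ε,N} - ρ̄‖_{L¹}, approximated by trapezoidal quadrature on the grid."""
    return np.trapz(np.abs(mollified_density(x, eps, grid) - rho_bar(grid)), grid)

grid = np.linspace(-4.0, 4.0, 16001)                          # fine grid so the mollifier scale ε is resolved
rho_bar = lambda y: np.exp(-y**2 / 2) / np.sqrt(2 * np.pi)    # illustrative Gaussian target
for N in (25, 50, 100, 200, 400):
    x = np.random.default_rng(1).normal(size=N)               # stand-in for the particle locations at time t
    print(N, l1_error(x, (1.0 / N) ** 0.99, rho_bar, grid))
```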

Page 18: A blob method for degenerate diffusion

Open questions

18

• General ρ̄?

• Less information on ρ̄, e.g. only through integrals of the form f_{w,z}(x) = −∫ φ_ϵ(x − w) φ_ϵ(x − z)/ρ̄(x) dx?

• Quantitative rate of convergence depending on N and ϵ (N^{−m/d}?)

• Can a better choice of RBF lead to faster rates of convergence? Help fight against the curse of dimensionality?

• Can the random batch method [Jin, Li, Liu ’20] lower the computational cost from O(N²) while preserving long-time behavior?

Page 19: A blob method for degenerate diffusion

Thank you!