WEBVTT
1
00:00:05.050 --> 00:00:10.350
Mark Kushner: Good afternoon. Welcome to our next seminar.
2
00:00:10.430 --> 00:00:26.190
Mark Kushner: It's my pleasure to introduce Professor Andrew Christlieb, from that faraway place of East Lansing, at Michigan State University. You may recognize Michigan State as the school we lost to in basketball in February.
3
00:00:27.200 --> 00:00:38.600
Mark Kushner: Andy holds degrees from the University of Michigan, followed by a Ph.D. in mathematics from the University of Wisconsin.
4
00:00:38.880 --> 00:00:46.410
Mark Kushner: He then returned to the University of Michigan, where he was a postdoctoral fellow working with Iain Boyd and Robert Krasny
5
00:00:46.420 --> 00:00:50.020
Mark Kushner: on particle simulation methods for plasmas.
6
00:00:50.190 --> 00:01:08.160
Mark Kushner: He joined the math department at Michigan State in 2006, where he is now an MSU Foundation Professor. He has also been a faculty fellow at the Air Force Research Laboratory, and he continues to work closely with the Air Force.
7
00:01:08.500 --> 00:01:18.320
Mark Kushner: At Michigan State he was instrumental in establishing, and serving as the first chair of, the Department of Computational Mathematics, Science and Engineering.
8
00:01:18.930 --> 00:01:27.130
Mark Kushner: His research focuses on multiscale modeling and high-order numerical methods. He's been involved in developing
9
00:01:27.840 --> 00:01:44.640
Mark Kushner: high-order Eulerian and semi-Lagrangian conservative methods for the simulation of plasmas. He has also been involved in the development of high-order finite-difference constrained-transport methods for the simulation of magnetohydrodynamics.
10
00:01:44.910 --> 00:01:54.400
Mark Kushner: He is now lead PI and director of the center that we'll hear about today, the Center for Hierarchical and Robust Modeling of Non-Equilibrium Transport.
11
00:01:54.980 --> 00:02:17.770
Mark Kushner: The title of Andrew's talk is on moving to address the curse of dimensionality to enable first-principles modeling of fusion systems. Thank you very much for coming today, and as part of our thank-you we have this for you. Thank you, I appreciate it. And this seminar is being recorded.
12
00:02:25.340 --> 00:02:26.360
Mark Kushner: Thank you.
13
00:02:28.860 --> 00:02:29.950
Mark Kushner: Thank you.
14
00:02:31.370 --> 00:02:34.410
Mark Kushner: I'm going to start sharing my screen.
15
00:02:43.740 --> 00:02:46.420
Mark Kushner: Alright. So
16
00:02:46.780 --> 00:02:54.070
Mark Kushner: It is not playing there. I am curious what we should be doing to get it to play there. It is playing here.
17
00:02:54.110 --> 00:03:00.730
Mark Kushner: I can turn my screen.
18
00:03:01.110 --> 00:03:02.960
Kim N8FNC: It's playing on zoom.
19
00:03:03.180 --> 00:03:05.400
Mark Kushner: Good!
20
00:03:05.480 --> 00:03:07.610
Mark Kushner: There it's
21
00:03:08.020 --> 00:03:08.720
you.
22
00:03:16.320 --> 00:03:20.750
Mark Kushner: Is your projector on? Yes, there you go.
23
00:03:23.130 --> 00:03:24.070
Mark Kushner: Hmm.
24
00:03:25.550 --> 00:03:27.340
Mark Kushner: There you go. Hmm.
25
00:03:27.370 --> 00:03:30.050
Mark Kushner: Awesome.
26
00:03:30.290 --> 00:03:39.690
Mark Kushner: Alright. So thank you, Mark, for inviting me. I'm really happy to be here. I'm gonna talk to you all about what we've been working on in CHaRMNET.
27
00:03:40.130 --> 00:03:43.180
Mark Kushner: Let me see if I can get rid of this.
28
00:03:43.610 --> 00:03:48.010
Mark Kushner: There is a secret button somewhere here that allows you to close this.
29
00:03:51.010 --> 00:03:59.140
Mark Kushner: It was there in your original thing. Yeah, the "hide floating meeting controls" option. Thank you. Now we are good.
30
00:03:59.950 --> 00:04:02.310
Mark Kushner: all right. So
31
00:04:06.770 --> 00:04:21.690
Mark Kushner: How do I advance? Page up... there we go; page down. All right, we're good. Not my computer, so I'm learning. I'm gonna talk today about what this group of 20 individuals at 9 institutions is working on.
32
00:04:21.839 --> 00:04:30.550
Mark Kushner: The group is headed by Michigan State University and co-led by Los Alamos National Lab,
33
00:04:30.860 --> 00:04:42.610
Mark Kushner: and we are really targeted at trying to tackle multiscale problems. So we're really trying to think about building next-generation tools that will enable fusion energy design.
34
00:04:42.810 --> 00:04:54.310
Mark Kushner: And so with that in mind, each of these different nodes plays an important role in what we're trying to do, and I will talk about most of these nodes as we go through this talk.
35
00:04:57.270 --> 00:05:09.900
Mark Kushner: So if you're trying to do optimal design and control of plasmas, the solution space for plasmas, with coupled transport equations coupled to Maxwell's equations,
36
00:05:09.900 --> 00:05:23.920
Mark Kushner: well, it's rugged terrain with significant cliffs, peaks, and valleys, is the way we like to think about it. And you have to deal with a lot of nonlinear transitions, or instabilities. So that makes this really challenging as a control problem.
37
00:05:24.050 --> 00:05:30.210
Mark Kushner: And so with that in mind, we are trying to take a holistic approach to how we think about this.
38
00:05:30.460 --> 00:05:39.750
Mark Kushner: So our target, our goal, is really fusion energy. So just to recap things that a lot of people know, but I want to recap anyway:
39
00:05:39.890 --> 00:05:55.740
Mark Kushner: fusion is the process of taking, in this case, 2 hydrogen atoms and slamming them together with enough energy that they fuse, and out of the fusion process kicks an alpha particle, which is your helium atom, a neutron,
40
00:05:55.740 --> 00:06:11.760
Mark Kushner: and energy, because the mass of the neutron plus the helium atom is less than the mass of the original system, and it's that energy that we're really interested in. And of course we have a quintessential working example of fusion: that's the sun, right, or any of the stars we see out there.
41
00:06:11.760 --> 00:06:22.820
Mark Kushner: And what allows this process to happen in the case of the sun is, you have a hot burning gas that is being driven inwards by gravity. That's allowing the process of fusion to take place.
42
00:06:22.850 --> 00:06:32.090
Mark Kushner: Now, here on earth we don't have the advantage of gravity to do this. We really have to be thinking about other ways of confining the burning plasma.
43
00:06:32.410 --> 00:06:53.340
Mark Kushner: And so there are a myriad of concepts. There is MFE, which is magnetically confined fusion energy. There are many magnetically confined systems; 3 of them are here. And then ICF is inertially confined fusion energy. That's a slightly different process, but we'll talk about that, too, a little bit
44
00:06:53.540 --> 00:07:03.960
Mark Kushner: in the MFE context. Up here, this is a tokamak. This is actually a schematic of what ITER is; it's an artist's rendition of what a cutaway would be.
45
00:07:04.280 --> 00:07:12.780
Mark Kushner: Down here in the lower corner is a concept being developed by Zap Energy in Washington,
46
00:07:12.890 --> 00:07:18.470
Mark Kushner: and this is really a gas-puff Z-pinch. It's about a 50 cm chamber. It's a really interesting concept.
47
00:07:18.670 --> 00:07:30.590
Mark Kushner: There's another concept over here that's related to a field-reversed configuration, where you're actually creating 2 plasma bubbles and slamming them into each other to try and create high-density plasma for fusion energy.
48
00:07:30.730 --> 00:07:47.190
Mark Kushner: And then over here, this is a picture of the hohlraum from NIF, the National Ignition Facility. That concept is really an indirect drive, where you are heating up the hohlraum with a laser that then emits x-rays back in to compress
49
00:07:47.190 --> 00:08:00.440
Mark Kushner: that capsule inward. And this is the concept you've all heard about already, right? This actually achieved ignition. So our goal is really to try and create models to do optimal design of these systems.
50
00:08:00.550 --> 00:08:13.050
Mark Kushner: Just to get a sense of the complexity of scales, we're going to focus in on ITER for a little bit. So we're going to talk about the length and time scales associated with ITER, and what makes that a challenging system to model, before we do anything else.
51
00:08:13.380 --> 00:08:41.460
Mark Kushner: And so when you're trying to think about modeling these systems, the first question you should ask is, what's the right model? Right? You shouldn't necessarily just go out and use any model; you should try and use a model that captures the physics you care about. You can think of this as a model landscape. Along this bottom axis is the Knudsen number: that's the mean free path a particle will travel, scaled to something in the system, say the system size, or something you care about.
52
00:08:41.460 --> 00:08:53.290
Mark Kushner: And so high Knudsen numbers are very collisionless plasmas, or collisionless gases, for what you care about, and very small Knudsen numbers are very collisional systems. So along this axis is collisionality.
53
00:08:53.490 --> 00:09:07.290
Mark Kushner: Along this other axis in this model is really the Coulomb coupling, and it's really talking about how strongly coupled the gas is. And so, of course, the Boltzmann equation is a model that is the correct model for
54
00:09:07.290 --> 00:09:28.040
Mark Kushner: the entire state space. But it's very computationally expensive, so ideally you'd like to use the right model for the right job. So if you're thinking about modeling, you could say: if I'm very far away from my plasma, with a very long distance between myself and my plasma, the electrons and the ions look, roughly speaking, like they're on top of each other.
55
00:09:28.080 --> 00:09:38.410
Mark Kushner: So in this context, in the small-Knudsen-number, small-Debye-length regime, you get down to ideal MHD as a good model. Ideal MHD is a set of fluid equations
56
00:09:38.530 --> 00:09:48.840
Mark Kushner: that describes the average density, the average velocity, the average energy, and the magnetic field associated with those, and it's really a good approximation.
57
00:09:49.120 --> 00:10:08.360
Mark Kushner: I might advise you to say it's a great approximation when your plasma looks like a liquid metal. So if you stand far enough back from your plasma, so that the ions and electrons look like they're more or less next to each other all the time, ideal MHD is a great model for your plasma. And if we could get away with ideal MHD we'd certainly love to, because it's much cheaper to solve than other models.
58
00:10:09.070 --> 00:10:33.130
Mark Kushner: Now, if you zoom in really close to the plasma, so you're standing right on top of the plasma, the ions and electrons have a separation distance between them. You have charge separation, and in this context the plasma looks very kinetic, and the model you need to use is the Boltzmann equation, where f is the probability distribution function for finding a particle at a given position at a given velocity at a given time. It's a 6-dimensional model, plus time,
59
00:10:33.130 --> 00:10:39.950
Mark Kushner: and it couples, of course, to your Maxwell's equations through the Lorentz force. And of course there's this term on the right-hand side, where
60
00:10:40.280 --> 00:10:57.030
Mark Kushner: I'm hiding a lot of stuff: it's a 5-dimensional integral. There's a lot going on in that term that just says "collision." And then, of course, you're coupling into Maxwell's equations, and the way it couples in nonlinearly, of course, is that the plasma becomes the source for Maxwell's equations in the model you're solving.
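For reference, the kinetic system being described here is the standard Boltzmann (Vlasov, when collisionless) equation coupled to Maxwell's equations through charge and current sources; this is a reconstruction from the spoken description, not the slide itself:

```latex
% Boltzmann equation for the distribution function f(x, v, t):
\frac{\partial f}{\partial t} + v \cdot \nabla_x f
  + \frac{q}{m}\left(E + v \times B\right) \cdot \nabla_v f
  = \left(\frac{\delta f}{\delta t}\right)_{\mathrm{coll}}

% Nonlinear coupling: the plasma provides the sources for Maxwell's equations
\rho = q \int f \, dv, \qquad J = q \int v \, f \, dv
```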
61
00:10:57.480 --> 00:11:01.130
Mark Kushner: So those are sort of the 2 regimes we're talking about.
62
00:11:01.810 --> 00:11:28.520
Mark Kushner: Now, if we're thinking about MFE, so a magnetic fusion energy system, there are things you want to think about, namely: what are the space and time scales involved? The smallest time scale is the electron cyclotron frequency; that's the electrons orbiting around the magnetic field, and it's a strong magnetic field in the system. That's followed by the lower hybrid mode, your ion cyclotron frequency, your turbulence time scales,
63
00:11:28.620 --> 00:11:39.880
Mark Kushner: you know, your sawtooth crash time scales, island growth. These are all fundamental phenomena that happen inside of a tokamak. Really, you're talking about trying to go from 10 to the minus 10 seconds
64
00:11:39.980 --> 00:11:54.190
Mark Kushner: to days, so 10 to the 4 seconds. So you've got 14 orders of magnitude in time you're trying to cover in ITER. It's kind of hard to appreciate how big 14 orders of magnitude in time is. It's a large set of timescales you're trying to resolve.
65
00:11:55.440 --> 00:12:13.340
Mark Kushner: Spatial scales aren't really that much better in ITER. So in ITER you have your smallest length scales, your electron gyroradius, which is just the radius of electrons around the magnetic field, your Debye length, your ion gyroradius, moving up to your tearing modes, skin depth, your atomic mean free path. Another thing that always kind of gets me in these systems:
66
00:12:13.390 --> 00:12:20.770
Mark Kushner: if you're looking at Coulomb collisions, the interactions between the ions and the electrons, really that mean free path is on the order of the system itself.
67
00:12:20.820 --> 00:12:22.860
Mark Kushner: And so, if you're thinking kinetically
68
00:12:22.880 --> 00:12:34.130
Mark Kushner: well, if this is telling you that your mean free path is bigger than your system size, you ought to be thinking about a kinetic model in that context. And so, really, this is telling you: okay,
69
00:12:34.500 --> 00:12:45.750
Mark Kushner: I know my Knudsen number, my mean free path, tells me I could compute here. But that's not really the only thing going on here. My electron-ion interactions look purely kinetic.
70
00:12:46.550 --> 00:12:51.440
Mark Kushner: And so I need to think carefully about how I might try approaching modeling. This system.
71
00:12:51.650 --> 00:13:01.260
Mark Kushner: Now, there are kinetic models for things like tokamaks. These are often done with what are called gyrokinetic models,
72
00:13:01.340 --> 00:13:19.060
Mark Kushner: and those are models where you take the kinetic equations and you average over a gyroradius to come up with a gyro-averaged model. So you take a model from 6 dimensions to 5 dimensions. It's going to turn out that makes it much more computationally practical; we'll see that in a second. But that's an important thing that happens in here.
73
00:13:19.060 --> 00:13:36.500
Mark Kushner: And so there are 2 codes out there that I'm going to mention; there are actually a lot of codes for doing this, but these are 2 codes funded by the DOE, who also funds my center. This is XGC, a big particle-in-cell code that does gyro-averaged kinetics, and this is COGENT, which is a mesh-based Eulerian code coming out of one of our national labs
74
00:13:36.500 --> 00:13:45.210
Mark Kushner: that's been designed specifically for edge plasma physics inside of the tokamak. So inside of these systems, what's really complicated about doing this kinetically is,
75
00:13:45.210 --> 00:14:00.390
Mark Kushner: if you're inside of what's called the separatrix, so the separatrix is the separation between the hot burning plasma and the pedestal that falls off to the edge region, right? That's really the region that separates the two, and down here you have some divertor for, hopefully, driving energy into when the plasma crashes.
76
00:14:00.390 --> 00:14:19.680
Mark Kushner: And so then, in this region near the wall, where our plasma couples to the wall, and if you've ever studied boundary value problems, you know boundary conditions are everything. In this region the problem is your mean free path, well, not your mean free path, but your ion cyclotron radius, your gyro orbit,
77
00:14:19.690 --> 00:14:27.120
Mark Kushner: is no longer much smaller than things within the system. It becomes comparable to scales within the system in this area.
78
00:14:27.130 --> 00:14:37.880
Mark Kushner: What that translates into is that the gyro-averaged model really isn't the right model in the edge region. Okay? In fact, what you really need is something that's fully kinetic in the edge region.
79
00:14:38.020 --> 00:14:43.000
Mark Kushner: That's horribly expensive, but that's what you need if you're going to get it right. Okay.
80
00:14:43.240 --> 00:14:44.180
Mark Kushner: So
81
00:14:44.270 --> 00:14:52.450
Mark Kushner: let's just think about what it would be to try and do this fully kinetically. I'm going to go to the Vlasov world; I'm going to throw away the collision operator. I'm going to say, hey,
82
00:14:52.580 --> 00:15:08.800
Mark Kushner: what if I were going to solve this by just directly discretizing the equations with some standard, trivial finite difference method? And I'm going to see how much that costs me. What's the computational cost of just trying to do that? Okay.
83
00:15:08.840 --> 00:15:13.900
Mark Kushner: So in this context, what we're thinking about is why is this problem so hard to solve?
84
00:15:14.190 --> 00:15:23.850
Mark Kushner: Right? So, you know, we've got this beautiful new exascale computer, Frontier, at Oak Ridge National Lab. It's a whopping big computer: 1.1 exaflops,
85
00:15:23.940 --> 00:15:39.100
Mark Kushner: big and beefy, 700 petabytes of storage, 9,000 GPUs that have 370,000 cores. Right? This is a big machine, some really big iron. And 4.8 petabytes of RAM: that's the number I want you to keep in mind,
86
00:15:39.310 --> 00:15:41.360
Mark Kushner: because it turns out that's not enough.
87
00:15:41.610 --> 00:15:57.900
Mark Kushner: Okay. So if you're going to look at doing this problem, let's just suppose, and we'll figure out what this means physically in a second, let's just suppose I'm going to take 256 mesh points in each dimension. So 256 in x,
88
00:15:57.900 --> 00:16:04.280
Mark Kushner: 156, and y from 56 and z turn 56 and Vx 256, and V. Z.
89
00:16:05.030 --> 00:16:10.400
Mark Kushner: You end up with 2.5 petabytes of RAM for one copy
90
00:16:10.430 --> 00:16:20.740
Mark Kushner: of that distribution function. If I just drop the dimension down to 5 dimensions, now you see why people like gyrokinetics: all of a sudden I'm much smaller in terms of the storage I use. Right?
91
00:16:21.190 --> 00:16:35.050
Mark Kushner: So that 6th dimension is really painful. Now let's think about the fact that that's one copy of the distribution function. If I'm just doing forward Euler time stepping, I need 2 copies of the distribution function. That takes me to 4.3 petabytes of RAM,
92
00:16:35.290 --> 00:16:42.100
Mark Kushner: so I can only really do forward Euler time stepping with 256 mesh points in each direction,
93
00:16:42.340 --> 00:16:50.530
Mark Kushner: and only for one species, say the electrons. I couldn't do the ions; I can't do anything else I care about. Right?
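The storage figures quoted here follow from simple arithmetic. A back-of-the-envelope sketch, where the 8-bytes-per-value (double precision) assumption is mine:

```python
# Back-of-the-envelope memory cost for a uniform 6D Vlasov grid:
# 256 points per dimension, double precision (8 bytes per value).
N = 256
BYTES_PER_VALUE = 8  # float64, my assumption

def grid_bytes(ndim: int, n: int = N) -> int:
    """Bytes for one copy of a distribution function on an n^ndim grid."""
    return (n ** ndim) * BYTES_PER_VALUE

one_copy_6d = grid_bytes(6)       # close to the "2.5 petabytes" quoted in the talk
two_copies_6d = 2 * one_copy_6d   # forward Euler needs old + new state
one_copy_5d = grid_bytes(5)       # gyrokinetic reduction: a factor of 256 smaller

print(f"6D, one copy  : {one_copy_6d / 1e15:.2f} PB")
print(f"6D, two copies: {two_copies_6d / 1e15:.2f} PB")
print(f"5D, one copy  : {one_copy_5d / 1e12:.2f} TB")
```

Dropping one dimension divides the storage by the number of points in that dimension, which is why the 5D gyro-averaged model lands in the terabyte range instead of petabytes.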
94
00:16:50.650 --> 00:17:07.609
Mark Kushner: So I'm kind of feeling a bit out of luck here. Then, if we ask: was that even useful? Was that even worth it? All right. If you go to ITER and you pull out the design references, from almost about 10 years ago, that these numbers come from,
95
00:17:07.780 --> 00:17:19.609
Mark Kushner: they talk about the temperature in the edge region of the tokamak, and the temperature in the edge region of the tokamak is about 162 eV, electron volts. Remember, every electron volt is about 10,000 Kelvin. Right?
96
00:17:20.530 --> 00:17:25.099
Mark Kushner: So, actually, this is kiloelectron volts; that's not even right.
97
00:17:25.390 --> 00:17:26.250
Mark Kushner: So
98
00:17:27.470 --> 00:17:31.030
Mark Kushner: So, anyway, the density
99
00:17:31.370 --> 00:17:40.040
Mark Kushner: is 10 to the 19. The Debye length turns out to be 0.1 centimeters. Okay? So that's your Debye length: 0.1 centimeters.
100
00:17:40.390 --> 00:17:52.980
Mark Kushner: If I take 256 mesh points in each direction, that means each mesh cell, in terms of space, is only resolving 3.7 cm in length. Right?
101
00:17:53.010 --> 00:18:01.730
Mark Kushner: So I'm not close to resolving the Debye length, and I'm not close to resolving other kinetic effects within the system. Right?
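Using just the numbers quoted in the talk (3.7 cm cells, a 0.1 cm Debye length), the shortfall is easy to check. A sketch; the implied domain size is my inference from those quoted figures:

```python
# Resolution check from the talk's quoted numbers (assumptions noted inline).
n_points = 256   # mesh points per direction, as quoted
dx_cm = 3.7      # cm per mesh cell, as quoted
debye_cm = 0.1   # Debye length, as quoted

domain_m = n_points * dx_cm / 100         # implied domain size in meters
cells_needed = domain_m * 100 / debye_cm  # cells per direction to reach the Debye scale
shortfall = dx_cm / debye_cm              # how many Debye lengths fit in one cell

print(f"implied domain size : {domain_m:.1f} m per direction")
print(f"cells to resolve lambda_D: {cells_needed:.0f} per direction")
print(f"each cell is {shortfall:.0f}x the Debye length")
```

So each spatial cell spans roughly 37 Debye lengths, which is why charge-separation physics below the Debye scale is simply not captured at this resolution.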
102
00:18:01.810 --> 00:18:04.590
Mark Kushner: And so what you end up with is a setting where,
103
00:18:04.610 --> 00:18:09.210
Mark Kushner: Okay, I did this horrific calculation, this monstrous calculation.
104
00:18:09.370 --> 00:18:14.970
Mark Kushner: it didn't get down to the small scales. Now, maybe that's okay for the problem you care about,
105
00:18:15.240 --> 00:18:29.580
Mark Kushner: maybe it's not. And if you think about charge separation, charge separation is really down below the Debye length. Right? So if you care about charge separation in this system and trying to understand that, you're not really capturing that with what we're talking about.
106
00:18:30.770 --> 00:18:37.500
Mark Kushner: Okay, so that's sort of the initial reaction: well, you know, that kind of sucks.
107
00:18:37.620 --> 00:18:42.020
So we come back to this landscape of models, and we say, well,
108
00:18:42.150 --> 00:18:47.110
Mark Kushner: I want to compute here; I know I can compute here. And the question really is,
109
00:18:47.190 --> 00:18:50.430
Mark Kushner: how do I take the physics here
110
00:18:50.740 --> 00:19:00.720
Mark Kushner: and move it down to something I can compute here? That's really the question we want to think about. Can we somehow incorporate the physics we really care about in models that we can compute with?
111
00:19:02.320 --> 00:19:03.200
Mark Kushner: So
112
00:19:03.220 --> 00:19:24.170
Mark Kushner: So the center is really built around this idea that we're going to start first by saying: what is the goal? The goal is design of these systems. So start with the idea of uncertainty quantification. Start by asking: what do you need to actually do uncertainty quantification and optimal design for the plasma in the first place?
113
00:19:24.570 --> 00:19:36.930
Mark Kushner: That's gonna drive you to the fact that you need surrogate models. If I'm going to put this inside of an optimizer, and I want to run 10,000 or a million calculations, I cannot do it with a first-principles physics model. I have to come up with surrogates
114
00:19:36.950 --> 00:19:40.360
Mark Kushner: that respect the physics of the first principles model.
115
00:19:40.750 --> 00:19:57.530
Mark Kushner: but they need to go inside of some outer loop. So I need to think about surrogate models for multiscale modeling. But if I want to train the surrogates, I need solutions to the first-principles model. So that takes you to thinking about simulation acceleration. Well, in that area we're looking at sparse grids,
116
00:19:57.700 --> 00:20:02.190
Mark Kushner: these ideas of low-rank tensor decompositions, which, it turns out,
117
00:20:02.370 --> 00:20:17.630
Mark Kushner: actually can take a problem that scales like n to the d and make it scale like d times n. So these low-rank tensor approximations are a big deal, because they change the calculus: they make the kinetic calculation possible. Okay.
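A minimal illustration of why low-rank approximation changes the storage calculus, sketched here in 2D with a truncated SVD on a separable "distribution function." This is my toy example, not the center's actual algorithm, which works with tensor decompositions in higher dimensions:

```python
import numpy as np

# A smooth 2D "distribution function" on an n x n grid costs n^2 values stored
# directly, but a rank-r truncated SVD costs only about 2*n*r values.
n = 256
x = np.linspace(-1, 1, n)
X, V = np.meshgrid(x, x, indexing="ij")
f = np.exp(-(X**2 + V**2))  # separable: exp(-x^2) * exp(-v^2), so rank 1

U, s, Vt = np.linalg.svd(f)
r = 1                                       # keep only the leading singular triple
f_lowrank = (U[:, :r] * s[:r]) @ Vt[:r, :]  # rank-r reconstruction

full_storage = n * n          # n^d with d = 2
lowrank_storage = 2 * n * r + r  # factors plus singular values: ~ d * n * r
err = np.max(np.abs(f - f_lowrank))

print(f"full grid values: {full_storage}")
print(f"rank-{r} values : {lowrank_storage}")
print(f"max error       : {err:.2e}")
```

For this separable function the rank-1 approximation is accurate to machine precision while storing roughly 500 numbers instead of 65,536; in 6D the same d·n·r-style scaling is what makes an otherwise impossible kinetic calculation feasible.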
118
00:20:17.650 --> 00:20:33.080
Mark Kushner: And so these are something we're really interested in. Of course, what we're really interested in is: how do we do these self-consistently and stick them inside of an AMR framework? That's what we really want to do. We want to have these low-rank tensor approximations that capture the key physics in each part of the domain self-consistently.
119
00:20:33.360 --> 00:20:41.040
Mark Kushner: And then, you know, as you move around in the domain, it changes the rank of the problem you're working with to try and capture different physics. So that's really, in a nutshell,
120
00:20:41.050 --> 00:20:59.510
Mark Kushner: what we're trying to do within the center. So that's the bird's-eye view of what the center is. So is this problem hopeless? No, really. You can start with things like implicit particle-in-cell, but not just implicit particle-in-cell: you want implicit particle-in-cell that is asymptotic-preserving. Why do I need that?
121
00:20:59.560 --> 00:21:14.040
Mark Kushner: You want something that's gonna capture the gyro orbits of the particles without resolving the gyro orbits, right? And then, in regions where the gyro orbits get large, you want it to transition to actually capturing those gyro orbits. Right? So you want something like implicit particle-in-cell as an approach.
122
00:21:14.040 --> 00:21:22.790
Mark Kushner: These low-rank approximations are absolutely a path forward for what we're trying to do. We're already looking at how we do this inside of AMR for a variety of settings.
123
00:21:22.920 --> 00:21:28.440
Mark Kushner: And then there's this idea of mixed precision. It turns out you can do some really clever things with mixed models.
124
00:21:28.480 --> 00:21:37.530
Mark Kushner: You can take a low-order model, maybe even one coming from machine learning, where you've trained it on data, and you can use that as a prediction,
125
00:21:37.570 --> 00:21:47.010
Mark Kushner: maybe even an implicit prediction. And you can do these clever ideas of mixed precision, where you recover accuracy by correcting that low-order model with explicit calculations.
126
00:21:47.050 --> 00:22:03.920
Mark Kushner: So we're working on mixed-model, mixed-precision calculations. We actually have theory that talks about how you can deal with the discontinuous errors here: we can explicitly project the discontinuous errors into the null space and rigorously make these things converge. And so that's something we're excited about.
127
00:22:04.280 --> 00:22:11.450
Mark Kushner: You need this idea of structure preservation. Right? I need my models to blend together. Right? I need my
128
00:22:11.450 --> 00:22:28.300
Mark Kushner: kinetic model to self-consistently go to a diffusion limit when it's collisional, and to self-consistently couple in with these lower-order approximations in some way. Especially if I'm going to do this mixed-model calculation, I'd really like my kinetic representation to self-consistently blend in with some cheaper calculation.
129
00:22:28.600 --> 00:22:34.830
Mark Kushner: And so I need structure preservation and asymptotic preservation. Asymptotic preservation is this idea that
130
00:22:34.870 --> 00:22:36.860
Mark Kushner: you have a discretization.
131
00:22:36.900 --> 00:22:46.780
Mark Kushner: but as you take a small parameter to 0, like the Knudsen number, you automatically recover the diffusion limit. So I automatically get the fact that things should be diffusive. So I want those types of schemes in there.
132
00:22:47.000 --> 00:22:55.280
Mark Kushner: Also, we're really focused on these data-driven surrogates. In fact, I will talk a little bit today about the data-driven surrogates that we're building,
133
00:22:55.610 --> 00:23:03.090
Mark Kushner: and how you couple these back into scientific computing to try and come up with faster surrogates for what you're solving.
134
00:23:03.440 --> 00:23:10.590
Mark Kushner: And then, of course, we're doing things like model discovery. So something I didn't mention is that the group at Boulder has developed
135
00:23:10.950 --> 00:23:26.760
Mark Kushner: a new variation of something called sparse identification of nonlinear dynamics. It's called SINDy, and it's a way of doing model discovery through regression. So you could take your particle-in-cell data and say: can I find a mesoscale model from that particle-in-cell data?
136
00:23:26.810 --> 00:23:45.930
Mark Kushner: The formulation that the Boulder group has developed really allows you to do this effectively because it's a weak formulation. So you multiply by some test function, you do integration by parts to move derivatives from the data onto the test function, and you can take very noisy data and discover models that are mesoscale models for describing surrogates in these settings. So that's another thing we're working on.
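The basic SINDy idea can be sketched in a few lines. This is a toy, hand-rolled sequentially thresholded least-squares example on a known ODE; it is my illustration, not the Boulder group's code, and it uses the plain (strong-form) variant rather than the weak formulation described above. The test problem, library, and threshold are all assumptions:

```python
import numpy as np

# Toy SINDy: discover dx/dt = -2x from trajectory data by sparse regression
# of numerical derivatives onto a library of candidate terms.
t = np.linspace(0, 2, 400)
x = np.exp(-2 * t)            # trajectory of the true system dx/dt = -2x
dxdt = np.gradient(x, t)      # numerical time derivative of the data

# Candidate library: [1, x, x^2]
Theta = np.column_stack([np.ones_like(x), x, x**2])

# Sequentially thresholded least squares: fit, zero small coefficients, refit.
xi = np.linalg.lstsq(Theta, dxdt, rcond=None)[0]
for _ in range(10):
    small = np.abs(xi) < 0.1       # sparsity threshold (an assumed value)
    xi[small] = 0.0
    big = ~small
    if big.any():
        xi[big] = np.linalg.lstsq(Theta[:, big], dxdt, rcond=None)[0]

print("discovered coefficients for [1, x, x^2]:", np.round(xi, 3))
```

The regression recovers a coefficient near -2 on the x term and zeros elsewhere. The weak-form variant replaces `np.gradient` with integrals against test functions, which is what makes it robust to the noise levels in particle data.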
137
00:23:46.280 --> 00:23:55.720
Mark Kushner: And we're also interested in all sorts of different types of novel models. So I keep saying "structure-preserving"; why does structure preservation matter?
138
00:23:56.420 --> 00:24:04.660
Mark Kushner: This is a very simple calculation just to make the point. This is a beam in a box. Right? I've got a box that's
139
00:24:04.670 --> 00:24:22.220
Mark Kushner: metal on all sides. Actually, this is the one that actually works: it's a PEC boundary, perfectly electrically conducting. You're injecting a beam on one side, an electron beam, and it's propagating across the box. And of course, as that electron beam propagates across the box, it expands; the electrons push apart on each other. Okay.
140
00:24:22.470 --> 00:24:28.180
Mark Kushner: And if you don't conserve, or if you don't satisfy, the continuity equation,
141
00:24:28.280 --> 00:24:33.490
Mark Kushner: and you don't satisfy del dot E equals rho, the beam blows itself apart into little beamlets in a heartbeat.
142
00:24:34.510 --> 00:24:35.790
Mark Kushner: Then, if you
143
00:24:36.350 --> 00:24:51.230
Mark Kushner: conserve this, or if you satisfy continuity but you don't satisfy del dot E equals rho, you still break apart into beamlets, just over a longer period of time. Right? This is what I mean by structure preservation: your numerical method has to preserve the underlying physics if you want to get the right thing out.
144
00:24:51.250 --> 00:25:06.300
Mark Kushner: Okay. This is the classic setting for magnetohydrodynamics calculations that we do all the time. It's called the rotated shock tube in a multi-D setting, and if you don't preserve del dot B equals 0, you get all sorts of weird things, and eventually the problem blows up.
145
00:25:06.300 --> 00:25:12.030
Mark Kushner: right? So that's that's about it. That's a stability issue. We don't preserve this involution that we' to be is identically 0.
146
00:25:12.610 --> 00:25:25.990
Mark Kushner: Over here is a high-order/low-order calculation. This is actually a numerical method that's using a low-order method as a cheap prediction, followed by a high-order method that's been accelerated by the low-order method.
147
00:25:26.470 --> 00:25:42.520
Mark Kushner: In the case where the low-order method does not have the right asymptotic limit, where it's not charge-conserving, the whole scheme does not converge when you try and do this type of thing. What that's really saying is, if you're thinking about mixing models, so I talked about multi-model calculations:
148
00:25:42.520 --> 00:25:54.740
Mark Kushner: if that coarse model and that fine model don't have the correct AP limit together, they're going to diverge instead of converge when you try and do that mixed-model calculation. And so that's really an example of why asymptotic preservation matters.
149
00:25:55.380 --> 00:26:18.120
Mark Kushner: So here's another way to think about the center. The center is organized around 2 main pillars. The main pillars are runaway electrons in MFE, and I can tell you about that problem, and trying to optimize and understand hohlraum physics. What do I mean by hohlraum? The largest uncertainty
150
00:26:18.480 --> 00:26:24.240
Mark Kushner: in the ICF system at NIF is
151
00:26:24.800 --> 00:26:29.910
Mark Kushner: the physics associated with the hohlraum. It's not the capsule; there are certainly uncertainties there,
152
00:26:30.170 --> 00:26:33.200
Mark Kushner: but it is the hohlraum that has the largest set of uncertainties.
153
00:26:33.530 --> 00:26:49.150
Mark Kushner: So our group is trying to tackle these 2 surrogate problems as a way of building up this entire framework: trying to go beyond forward simulation, which is this idea of uncertainty quantification and optimal design, multi-scale modeling,
154
00:26:49.350 --> 00:26:57.680
Mark Kushner: simulation acceleration, and self-consistency. Those are all leading to this holistic framework of mathematics that we're trying to use to solve these systems,
155
00:26:57.680 --> 00:27:13.620
Mark Kushner: whereas the surrogates are what the whole team is working around as a collective group to try and advance the science of the algorithms we're doing, so we can then create a common framework we can pass off to others. Right? Our goal is to enable science; we're trying to enable science by building this framework.
156
00:27:13.990 --> 00:27:14.870
Mark Kushner: So
157
00:27:15.060 --> 00:27:26.100
Mark Kushner: Runaway electrons, just in case you guys don't know: it's a really interesting problem. When the plasma starts to crash in a tokamak, so it's built up and then it starts to crash, the ions slow down.
158
00:27:26.300 --> 00:27:38.070
Mark Kushner: But you've got these currents in the plasma, and what happens is induction forces the electrons to speed up. So the ions slow down the electrons speed up. Eventually the electrons become relativistic and completely decouple
159
00:27:38.190 --> 00:27:39.720
Mark Kushner: from the rest of the system.
160
00:27:39.770 --> 00:28:06.330
Mark Kushner: When that happens, that's really painful for things like ITER, because of the energy in that bunch of relativistic electrons: they've calculated that if it impacts the side of the reactor, it's going to blow massive holes in the side of the reactor. It's not just a little problem; it's a big problem. So we're looking at runaway electrons as a surrogate for something the community really cares about. So that's an important surrogate. It's clearly a kinetic problem, and we want to be able to design systems to mitigate that. That's really what we want to be thinking about.
161
00:28:06.760 --> 00:28:11.750
Mark Kushner: And the hohlraum setting. Well, I'm going to talk about the hohlraum setting more, so I will just go to that right now.
162
00:28:12.360 --> 00:28:18.010
Mark Kushner: So again, our motivation in this is to really discuss
163
00:28:18.010 --> 00:28:43.040
Mark Kushner: how to best bridge these ideas together in a way that's going to enable optimal design. That's really what the center is doing. So we've got all these different threads that we're trying to manage: we have regular meetings, regular seminars, regular collective interactions, with the idea of bringing these ideas together in a holistic way that's going to let us build a framework to do this. That's what we're really trying to do. So
164
00:28:43.650 --> 00:28:50.170
Mark Kushner: let's take an example. Let's take the hohlraum, right? So: yay, ignition, awesome, great, we're happy.
165
00:28:50.500 --> 00:29:00.310
Mark Kushner: Of course we're far from actually being where that could be, right? They put in 2 megajoules at the surface; they got out 3 megajoules, right?
166
00:29:00.550 --> 00:29:09.650
Mark Kushner: And what they really want: they could theoretically be getting out 100 times more than that, because of the fuel that's in the system. So you want to be able to try and think about this.
167
00:29:11.590 --> 00:29:32.690
Mark Kushner: This problem is challenging, because what you're doing is you are sending these lasers into the system. You've got these different beamlets that you're sending into the hohlraum. Your goal is that you're heating up the gold hohlraum around the outside in a way that it's actually going into blackbody radiation and creating X-rays that are going to bathe
168
00:29:32.690 --> 00:29:37.900
Mark Kushner: your capsule and try and compress your capsule. That's really the process going on here. However.
169
00:29:38.540 --> 00:29:48.760
Mark Kushner: let's talk about some of the challenges. When the beamlets come to the inlet, they ionize a plasma that sits there; that plasma creates mode coupling. It's actually the ion acoustic wave
170
00:29:48.760 --> 00:30:03.710
Mark Kushner: that couples each of the different beamlets together. So there's actually a nonlinear design process where they're literally trying to take energy from this beam and possibly pump it into this beam through the nonlinearities of the ion acoustic waves going on in that plasma at the inlet.
171
00:30:04.260 --> 00:30:17.660
Mark Kushner: All right. And so what they're really trying to do is think about: we pulse the beams at different rates; we have different temporal pulsing; we could turn the lasers on and off. Could we create pulse trains on the lasers, on the beamlets, in such a way as to create
172
00:30:17.680 --> 00:30:27.690
Mark Kushner: some sort of effect along the surface, so that, measured at the surface of the capsule. You are getting a spherical bath
173
00:30:27.700 --> 00:30:40.500
Mark Kushner: of radiation coming into the surface. It's a really hard, really crazy optimal design problem. It's really interesting, too. So this is a surrogate... well, this isn't a surrogate; this is the real thing.
174
00:30:40.630 --> 00:30:43.740
Mark Kushner: What we want to do is create a surrogate for this. Right? Because
175
00:30:43.790 --> 00:30:56.920
Mark Kushner: that's really hard. Look, if we really did this, we might be at the weapons lab, right? This is a really hard problem. There are a lot of people at Lawrence Livermore actually working on this problem,
176
00:30:57.110 --> 00:31:01.850
Mark Kushner: right? But we're trying to create a set of tools that are going to help them. And what are we trying to do?
177
00:31:02.330 --> 00:31:03.110
Mark Kushner: So
178
00:31:03.520 --> 00:31:16.240
Mark Kushner: we're trying to think about it from the perspective of uncertainty quantification and optimal design. So really, you've got some system that you want to put in an outer design loop, and in that outer design loop, what you want to do is you want to be able to tweak parameters
179
00:31:16.240 --> 00:31:33.030
Mark Kushner: to define how you're going to optimally control the system. You have some inputs you're playing with. For us, we're going to be playing with the sources: where they sit and how they're temporally pulsed. So we're playing with some parameters. Maybe we're playing with the opacity as a parameter, because we could change the density of the material that we're working with.
180
00:31:33.200 --> 00:31:42.410
Mark Kushner: So you've got some parameters you're playing with, you're trying to do optimal design, and you want to do this many, many times, maybe thousands of times, to actually do the design process. Right?
181
00:31:43.600 --> 00:31:57.360
Mark Kushner: So if I try and do this with a full high fidelity simulation: full stop, you're done. Even if I do the nice low-rank tensor decomposition stuff, which is super cool, it scales like N times d and I feel great about that,
182
00:31:57.540 --> 00:32:10.950
Mark Kushner: I still cannot do that fast enough to make that optimal design loop work. So what that tells you is that you need to be thinking about surrogates. This is what I mean when I talk about holistic: trying to think about the whole thing at once, and how we do these different pieces.
183
00:32:10.980 --> 00:32:18.970
Mark Kushner: Right? So you think about surrogates. I could create low fidelity models; I could just toss in, like, an MHD model and say good enough, we'll try that instead,
184
00:32:19.080 --> 00:32:22.320
Mark Kushner: or some reduced model. In the case of
185
00:32:22.650 --> 00:32:38.790
Mark Kushner: transport, maybe we're just going to use a 2-term expansion in moments and be perfectly happy with that. Although we're not, because the low fidelity model misses the mark: it doesn't actually get the right mean and standard deviation of the high fidelity stuff.
186
00:32:38.790 --> 00:32:49.460
Mark Kushner: So the real question is multi-fidelity simulation, which has been led a lot by Sandia; our colleagues at Sandia have been leading this work. It really focuses on this idea of using
187
00:32:49.630 --> 00:33:07.400
Mark Kushner: a few high fidelity simulations to make low fidelity simulations much more accurate predictions of what you're trying to do, because that's what multi fidelity simulation is. It's really this idea of trying to use a few high fidelity along with a bunch of low fidelity runs inside of popcorn design.
188
00:33:07.400 --> 00:33:22.840
Mark Kushner: And this particular group, John Jakeman and Tim Wildey, have actually done really amazing things with this in the context of electromagnetics. They've done some really cool stuff with DARPA, some really cool stuff with designing next generation night vision goggles,
189
00:33:22.840 --> 00:33:28.000
Mark Kushner: all in this all in this whole framework of trying to is multi-fidelity optimization stuff.
190
00:33:28.310 --> 00:33:33.290
Mark Kushner: Now for us, though, what the research project around uncertainty quantification is, is that
191
00:33:33.390 --> 00:33:47.820
Mark Kushner: we're really thinking about this not from a multi-fidelity UQ process, but from the process of unordered models. How do we do uncertainty quantification? How do we do the mathematics of uncertainty quantification
192
00:33:47.850 --> 00:34:07.870
Mark Kushner: when the models are not an ordered collection of models? I don't have this hierarchy, this multi-fidelity. What I have is an ensemble of models coming from something like machine learning, where I can't quite say what the error bounds are on the models, because they may have uncontrolled errors. Most machine learning algorithms have what you would call uncontrolled approximations.
193
00:34:08.150 --> 00:34:15.630
Mark Kushner: And so we need to be thinking differently about this. And so in the context of transport, we're going to be thinking about surrogates
194
00:34:15.980 --> 00:34:23.070
Mark Kushner: for the transport problem. We have a really well-written... one of my former students wrote an amazing,
195
00:34:23.120 --> 00:34:26.800
Mark Kushner: fast, sweeping transport code that's really efficient.
196
00:34:27.000 --> 00:34:37.929
Mark Kushner: And so we have a high fidelity SN method, which is a discrete ordinates method. We also have a high fidelity PN method and filtered PN method that's very, very fast,
197
00:34:37.969 --> 00:34:48.020
Mark Kushner: and those are going to form the basis of our ground truth in the context of multi-fidelity simulations. These are going to be our ground truth.
198
00:34:48.310 --> 00:34:51.270
Mark Kushner: And then we're gonna couple that
199
00:34:51.310 --> 00:35:05.860
Mark Kushner: with a range of other models. So Jingwei Hu, from the University of Washington, Seattle, has actually developed asymptotic-preserving low-rank approximations for transport that ensure you get the right diffusion limit in that low-rank approximation.
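The low-rank idea can be illustrated with a truncated SVD of a phase-space array: store and evolve a few rank-one factors instead of the full distribution. This toy only sketches the compression step; the asymptotic-preserving machinery mentioned above is not shown, and the sample distribution is invented.

```python
import numpy as np

# Toy low-rank compression of a kinetic distribution f(x, v):
# keep a truncated SVD instead of the full phase-space array.

x = np.linspace(0.0, 1.0, 200)
v = np.linspace(-1.0, 1.0, 100)
# A separable (hence exactly low-rank) sample distribution.
f = np.exp(-((x[:, None] - 0.5) ** 2) * 20) * np.exp(-v[None, :] ** 2 * 5)

U, s, Vt = np.linalg.svd(f, full_matrices=False)
r = 3                                        # retained rank
f_lr = (U[:, :r] * s[:r]) @ Vt[:r, :]        # rank-r reconstruction

rel_err = np.linalg.norm(f - f_lr) / np.linalg.norm(f)
print(rel_err < 1e-8)   # separable data compresses essentially exactly
```

For a 200 x 100 grid, rank 3 stores roughly 900 numbers instead of 20,000, which is the kind of saving that makes these models cheap enough to sit inside a UQ loop.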
200
00:35:06.090 --> 00:35:18.710
Mark Kushner: So that's really important. So we're going to couple her low-rank approximation into trying to do uncertainty quantification modeling here. And then we have 3 machine learning surrogates we're going to couple in.
201
00:35:19.240 --> 00:35:29.220
Mark Kushner: One is by my colleague Cory Hauck at Oak Ridge National Lab. What he did is he took the idea of entropy closures for transport; this is one way of closing the moment model,
202
00:35:29.300 --> 00:35:38.270
Mark Kushner: and he took the most expensive part, which comes from an optimization you have to solve over the whole domain, and he replaced it with machine learning.
203
00:35:38.370 --> 00:35:43.190
Mark Kushner: So you basically train the machine learning algorithm to do the optimization step for him,
204
00:35:43.220 --> 00:35:53.050
Mark Kushner: which makes this model, instead of being incredibly expensive, incredibly cheap, and it preserves positivity, all sorts of cool stuff. And so this is one approach we're going to look at.
205
00:35:53.440 --> 00:36:04.140
Mark Kushner: Another approach we're looking at is one developed by my collaborator, who developed reduced order models for transport, where they did really cool stuff. I know she calls it just a trick,
206
00:36:04.230 --> 00:36:08.830
Mark Kushner: but basically she figured out how to do reduced order models, and they figured out how to do
207
00:36:09.040 --> 00:36:26.490
Mark Kushner: optimal collocation for that reduced order model, so they don't lose positivity with their model. So they can do these reduced order models that don't lose positivity, and they can describe the transport field with the reduced order models. These calculations are much, much faster; you could run these through the optimizer over and over and over again. They're very cheap to do.
208
00:36:26.490 --> 00:36:32.260
And then a third class of models, which were developed by Yingda and myself with our collaborators Luke Roberts and Juntao Huang,
209
00:36:32.380 --> 00:36:37.860
Mark Kushner: is really about structure-preserving machine learning, which I'm going to talk a little bit about now. It's sort of
210
00:36:37.870 --> 00:36:51.660
Mark Kushner: an area that I feel very strongly about. We're doing structure preservation here. I didn't talk about it there, but the entropy closure is actually also preserving structure: it's things like positivity, things you care about; it makes the solution realizable.
211
00:36:52.090 --> 00:36:56.900
Mark Kushner: So these are the kinds of things that we're working on doing. So
212
00:36:57.310 --> 00:37:06.070
Mark Kushner: in the context of this problem, one of the things we're really focused on as a team is creating surrogates that are hard enough
213
00:37:06.200 --> 00:37:09.280
Mark Kushner: that they actually allow us to build
214
00:37:09.380 --> 00:37:16.980
Mark Kushner: tools that are useful to the lab folks, while not being so hard that it can't be a graduate student thesis.
215
00:37:17.270 --> 00:37:24.100
Mark Kushner: Right? There's really a trade-off there. Graduate students have 5 years; you have to make something they can do within that 5 years, right?
216
00:37:24.220 --> 00:37:42.690
Mark Kushner: And so that trade-off is: instead of doing the full problem, we're creating planar geometry surrogates that have the same properties. Right? So we want to look at how we optimize the point sources at the inlet. Maybe we're only allowed to have 4 point sources, and the question is, if I pulse those in time in some way,
217
00:37:42.790 --> 00:37:49.030
Mark Kushner: and I then combine those... they have a fixed plane, but I can move them up and down the plane to any location,
218
00:37:49.200 --> 00:37:52.920
Mark Kushner: I want to create a uniform field
219
00:37:53.080 --> 00:37:56.400
Mark Kushner: planar in the y direction, at the surface of this object.
220
00:37:56.700 --> 00:38:14.200
Mark Kushner: Well, what that really translates into is: if you think about taking a Fourier transform along the y direction here, what I'm really saying is, I want to maximize the constant coefficient and minimize all the other coefficients. So that's a multi-objective optimization problem that I'm trying to solve.
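The Fourier-space objective just described can be sketched directly: transform the field along y at the target surface, then read off the constant coefficient (to maximize) and the energy in every other mode (to minimize). This is an illustrative stand-in with a synthetic 1D field; the function and array names are hypothetical.

```python
import numpy as np

# Sketch of the uniformity objective: maximize the k=0 Fourier
# coefficient of the field along y, minimize all the others.

def uniformity_objectives(field_on_surface):
    """Return (dc_magnitude, nonuniform_energy) for a 1D field."""
    coeffs = np.fft.rfft(field_on_surface)
    dc = np.abs(coeffs[0])                 # want this large
    ripple = np.sum(np.abs(coeffs[1:])**2) # want this small
    return dc, ripple

y = np.linspace(0.0, 1.0, 256, endpoint=False)
uniform = np.ones_like(y)                        # ideal spherical bath
rippled = 1.0 + 0.3 * np.sin(2 * np.pi * 4 * y)  # non-uniform field

dc_u, ripple_u = uniformity_objectives(uniform)
dc_r, ripple_r = uniformity_objectives(rippled)
print(ripple_u, ripple_r)  # the uniform field has (near) zero ripple
```

An optimizer over source positions and pulse trains would then treat these two numbers as the competing objectives.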
221
00:38:14.410 --> 00:38:30.270
Mark Kushner: And so this is what we're going to do with the high fidelity models: we're going to do multi-fidelity simulation and optimization on this surrogate to start with, with a full transport code. So this is a full transport code for the gray equations. Also,
222
00:38:30.270 --> 00:38:39.820
Mark Kushner: You can make it. It's not hard to extend this to 60 to do the the energy groups if you want to. So this codes a a very efficient code for doing those problems for transport.
223
00:38:40.890 --> 00:38:53.430
Mark Kushner: And then we're going to turn around and ask the question: how do we develop the theory for multi-fidelity UQ using this collection of surrogates? So this collection of surrogates is going to be our playground.
224
00:38:53.430 --> 00:39:03.580
Mark Kushner: We're thinking about how we do the actual mathematics for uncertainty quantification when we talk about uncertainties associated with models coming from machine learning,
225
00:39:03.770 --> 00:39:19.600
Mark Kushner: and we're going to try and do, again, an optimal control problem for optimizing the sources at the inlet, given the surrogates, and we're going to then try and think about how we create the theory around this. Actually, we're in the fifth month of this 5-year grant.
226
00:39:19.710 --> 00:39:36.770
Mark Kushner: We are working on trying to create the theory around how we do uncertainty quantification with ensembles of models. So what do I mean by unordered models? I'll give you an example. The model I'm going to talk about next, which is this machine learning closure for the transport equations,
227
00:39:37.160 --> 00:39:55.380
Mark Kushner: with 6 moments captures what you would do with 18 moments, with equivalent accuracy and one-thirtieth the cost. But it's an uncontrolled approximation in the sense that you can't say what the error bound is on the neural network; it's something you trained, right?
228
00:39:55.530 --> 00:40:02.060
Mark Kushner: And so we want to think about how we do uncertainty quantification with these kinds of surrogates in mind. Right? That's really what we're trying to think of.
229
00:40:03.360 --> 00:40:07.900
Mark Kushner: So, transport. Let's just take 1D to start with, just to talk about it.
230
00:40:08.090 --> 00:40:17.170
Mark Kushner: This is your gray equation for transport with simple isotropic scattering. It's a simple transport problem; it's linear; it's as simple as it gets.
231
00:40:17.440 --> 00:40:28.130
Mark Kushner: Your total cross section consists of your scattering and your absorption cross sections. Your domain: it's saying what's happening to your particles as they pass through your domain,
232
00:40:28.500 --> 00:40:33.600
Mark Kushner: and we are interested in solving this model
233
00:40:33.900 --> 00:40:47.850
Mark Kushner: not kinetically; we're interested in using moment equations to solve this model. Now, this is not a new problem. This is a classic problem; it has been in the literature for a long time, going back to Chapman and Enskog.
234
00:40:48.170 --> 00:40:58.980
Mark Kushner: Grad introduced a closure in 1949 for transport that is particularly effective, and so one baseline is the Grad moment closure.
235
00:40:59.140 --> 00:41:14.820
Mark Kushner: In particular, here we're really interested in this idea of reducing dimensionality and making the calculations computationally tractable. Now, this is just 1D; we can do this with spherical harmonics in 2D and 3D.
236
00:41:17.180 --> 00:41:21.870
Mark Kushner: So when you take moments, of course, the moment closure problem is simply this:
237
00:41:22.260 --> 00:41:26.040
Mark Kushner: I multiply my function f
238
00:41:26.310 --> 00:41:38.660
Mark Kushner: by a Legendre polynomial and I integrate. The leading order one gives you the density, and then a higher order correction, your next moment up,
239
00:41:38.820 --> 00:41:42.050
Mark Kushner: and then it's equal to some absorption of that leading order term
240
00:41:42.720 --> 00:41:56.030
Mark Kushner: The fact that your m0 equation depends on m1, and you don't have an equation for m1, is the moment closure problem in a nutshell. Right? That's saying that I need my next highest piece of information to actually solve this equation. So again, I take another moment,
241
00:41:56.180 --> 00:42:07.050
Mark Kushner: and it'll connect to m2; I take another moment, it connects to m3, then m4, and so on. So the question is, how are you going to close this system? So you really want to come up with some closure here,
242
00:42:07.140 --> 00:42:12.650
Mark Kushner: and you'd like that closure to be hyperbolic, so you can solve the system effectively.
243
00:42:13.300 --> 00:42:14.170
Mark Kushner: Now.
244
00:42:15.580 --> 00:42:20.800
Mark Kushner: and this, as it's written here, is written for a reason: it's supposed to make you think neural network,
245
00:42:21.070 --> 00:42:25.250
and the neural network is some function of these other moments.
246
00:42:25.320 --> 00:42:26.140
Mark Kushner: So
247
00:42:26.180 --> 00:42:35.230
Mark Kushner: This was first done in 2019 by Weinan E. So Weinan E and his collaborators said,
248
00:42:35.400 --> 00:42:42.740
Mark Kushner: Hey, let's take kinetic models and let's close them with neural networks. All right. We're going to train those neural networks on kinetic data.
249
00:42:42.830 --> 00:42:55.600
Mark Kushner: And we're gonna actually see how good we do. Now, the problem with that is that while it looks really good over a short time. they can't preserve the mathematical properties of the system when they do this.
250
00:42:55.660 --> 00:43:04.210
Mark Kushner: and so they can do these really short the short time simulations that look great. You know You've got 3 moments and it matches the kinetic stuff Exactly. You think that's awesome. Right? You're feeling good about that.
251
00:43:04.730 --> 00:43:08.010
Mark Kushner: and then you try and simulate a little bit longer, and the whole thing blows up
252
00:43:08.390 --> 00:43:26.230
Mark Kushner: right? So that's really the the problem with this is that when you do this type of neural network structure. That neural network doesn't respect the fundamental physics of the problem. You're solving it's just some tone, only high order problem, not in your polym. You interpolation, right. It's just some piece of information giving you something somewhere.
253
00:43:26.540 --> 00:43:31.530
Mark Kushner: So they went on to do this for a bunch of other equations.
254
00:43:31.530 --> 00:43:58.980
Mark Kushner: Then a group of people got excited about doing this for plasma. And so you had People do this for Euler Poisson, and create closures that was interesting. You had people do closures for blast off Poisson, where they were able to reproduce the hemick. Perkins closure with the neural network. But again only over short time windows. They couldn't do these over long time limits, right? So so all this is basically saying, Well, we can do something. It's cool. It's some magic moment. What's cool about this i'm just gonna go back aside to say what's cool about this?
255
00:43:58.980 --> 00:44:02.030
What's cool about this is: by knowing that magic moment,
256
00:44:02.150 --> 00:44:09.640
Mark Kushner: your low fidelity model, say 3 or 5 moments, is capturing the high fidelity physics of the kinetic system.
257
00:44:10.160 --> 00:44:24.520
Mark Kushner: That's what's cool about this. So you're creating something that, when trained well, is actually going to be able to capture something for you. So if you want to put this in an optimizer: hey, that sounds pretty cool, right? If I could put that in an optimizer and do something with it, that sounds potentially useful.
258
00:44:25.280 --> 00:44:38.660
Mark Kushner: Okay. So along comes my colleague Cory Hauck, and at the same time we were producing our results, they figured out that if they solve, for transport,
259
00:44:38.690 --> 00:44:51.580
Mark Kushner: the maximum entropy closure, and replace the optimization with a neural network, they can still prove all the things they want in terms of positivity. They get all the results they want, and they make the calculation really cheap.
260
00:44:51.620 --> 00:44:57.420
Mark Kushner: So Cory's approach is actually really quite elegant. If you want, we could talk about Cory's approach more.
261
00:44:57.430 --> 00:45:11.550
Mark Kushner: Our approach is a little bit different, which I'll talk about now. Theirs is the only other one, and it came out at the same time as ours, that actually preserves this idea of hyperbolicity. So let me talk about what I mean by preserving hyperbolicity.
262
00:45:12.060 --> 00:45:18.730
Mark Kushner: So what was done by Weinan E is, he said: I'm just going to take
263
00:45:19.150 --> 00:45:21.840
Mark Kushner: a can, a bunch of kinetic data.
264
00:45:21.970 --> 00:45:37.420
Mark Kushner: and i'm going to train it, so that when I put in these earlier moments it gives me about the highest moment. It's going to train, train, train, train train, and then I'm going to differentiate that network. No, a lot work you can. You can use all the differentiation to differentiate that neural network. And I'm going to go to town with my simulations right? So i'm going to go ahead and do that.
265
00:45:37.510 --> 00:45:41.950
Mark Kushner: So that's what they did, and they and their first papers. They did a whole bunch of this stuff.
266
00:45:42.160 --> 00:45:50.650
Mark Kushner: It was effective over a short time scale. If you look at the results. It's really quite impressive what they're doing, and a free streaming limit with a 3 moment model it's doing great.
267
00:45:50.860 --> 00:45:51.520
Mark Kushner: Okay.
268
00:45:52.440 --> 00:46:04.520
Mark Kushner: But the neural network. It does do some things, it does preserve symmetry. You can prefer. You can prove it preserves galley and invariance. You can prove it. It it preserves a scalar invariance.
269
00:46:04.760 --> 00:46:07.520
Mark Kushner: but it does not preserve hyperbolicity.
270
00:46:07.860 --> 00:46:32.410
Mark Kushner: So why do we care about hyperbolicity? Well, if you don't actually preserve hyperbolicity. it's actually saying that the Eigen, structure, and the problem can change from the, from the what we expect it to be when we have causality, and when we have waves that are propagating at finite speed. There's something that's complex eigenstruct. And then the whole thing blows up when the eigenvalue is going complex, and it's just going to blow up.
271
00:46:33.040 --> 00:46:33.950
Mark Kushner: So
272
00:46:34.360 --> 00:46:48.050
Mark Kushner: right we we it didn't. It was great for a short time. But what what can we do now? So in our work for neural networks? What we did is we went back to the kinetic equations.
273
00:46:48.360 --> 00:46:58.780
Mark Kushner: and we said, what is it that we should be learning? We didn't ask you, didn't just try and learn. We asked, what is it we should be learning in the first place. Okay, now.
274
00:46:58.840 --> 00:47:04.560
Mark Kushner: that's motivated by the physics, there's actually an analytic closure to this simple kinetic problem.
275
00:47:04.900 --> 00:47:08.220
In fact, that analytic closure for the free streaming limit
276
00:47:08.360 --> 00:47:20.250
Mark Kushner: is that, whenever you close the system, the gradient of the highest moment is always related in some way to the gradients of the 4 lowest moments.
277
00:47:20.790 --> 00:47:23.100
Mark Kushner: Okay, that tells you something.
278
00:47:23.180 --> 00:47:24.080
Mark Kushner: Okay.
279
00:47:24.090 --> 00:47:30.580
Mark Kushner: What it tells you, and that's what I'm saying here, is that instead of trying to learn
280
00:47:30.870 --> 00:47:33.100
Mark Kushner: just the highest moment.
281
00:47:33.340 --> 00:47:36.120
Mark Kushner: maybe I should be changing the model.
282
00:47:36.260 --> 00:47:45.750
Mark Kushner: So I should be learning that the gradient of the highest moment is related, through trained neural network coefficients, to the gradients of the lower moments. Okay,
283
00:47:45.910 --> 00:47:50.810
Mark Kushner: that simple change, driven by what the theory told us,
284
00:47:51.040 --> 00:48:01.510
Mark Kushner: literally makes the model much simpler to train, with much less data. So that is something that's interesting in its own right.
285
00:48:02.770 --> 00:48:11.490
Mark Kushner: We actually gain accuracy out of this; we gain efficiency in terms of training. It also leads to the fact that we can enforce hyperbolicity.
286
00:48:11.860 --> 00:48:18.730
Mark Kushner: All right. So here comes another screen full of stuff that's a bit ugly.
287
00:48:19.090 --> 00:48:22.700
Mark Kushner: all right. If I want this equation to be hyperbolic.
288
00:48:23.760 --> 00:48:32.520
Mark Kushner: there's actually a mathematical theory that tells me: if I can find a symmetrizer, some special matrix A0 that I multiply A by, then
289
00:48:32.840 --> 00:48:44.410
Mark Kushner: which i'm going to be able to prove that my item, those are real, and that there's a full, or that it has all in eigenvalues are there so as long as I can come up with this matrix a 0 multiply by this guy that's going to do this.
290
00:48:44.440 --> 00:48:46.580
Mark Kushner: I'm in good shape. And here
291
00:48:46.600 --> 00:48:55.720
Mark Kushner: A. J. Is going to be related to the neural network through this relations. So this is how we're choosing to build our model. So based on what we're trying to accomplish. We chose this.
292
00:48:58.060 --> 00:48:59.100
Mark Kushner: So
293
00:48:59.150 --> 00:49:10.300
Mark Kushner: all right. Blah blah, blah Math. Okay, really, this is a theorem. We prove that allows us to show that we can guarantee hyperbolicity for this training process that there exists a symmetrizer.
294
00:49:10.380 --> 00:49:20.700
Mark Kushner: and that we can explicitly guarantee that we can playing this matrix train the neural network and ensure hyperbolicity. We can do it in a very simple way if you only look at connecting the first.
295
00:49:20.870 --> 00:49:36.360
Mark Kushner: If if you only look at connecting 4 moments to the highest moment. If you try and do it more than hot for moments, you end up in a setting where you have to do an implicit solve to make it work. We didn't really want to do that. So in the paper we only talk about training up to connecting 4 moments to the highest mode. We didn't want to do some implicit solve.
296
00:49:36.720 --> 00:49:43.430
Mark Kushner: All right. So this leads us to something that's also really cool. This model we can prove, gets the diffusion limit.
297
00:49:43.540 --> 00:49:49.550
Mark Kushner: So something about this neural network we can prove also guarantees. We will get the right diffusion limit
298
00:49:49.590 --> 00:50:04.410
Mark Kushner: all this particular setting which is imposing the correct physics. I'm not talking about how we prove this. I'm just telling you we can prove this. What that means is we can train the neural network, so that it's going to give us kinetic effects and intermediate regimes, and that we know it's going to go to the right diffusion limit on a long time scale.
299
00:50:05.250 --> 00:50:06.440
Mark Kushner: That's kind of cool.
300
00:50:08.950 --> 00:50:10.600
Mark Kushner: Okay, so
301
00:50:11.590 --> 00:50:18.650
Mark Kushner: All right, training. Here is the thing that I love about neural networks and kinetic theory.
302
00:50:19.020 --> 00:50:23.400
Mark Kushner: We are training based on random initial data.
303
00:50:23.720 --> 00:50:29.300
Mark Kushner: We are taking a collection of random sine waves and fixed random opacities.
304
00:50:29.360 --> 00:50:36.260
Mark Kushner: So we pick an opacity in some range; I think we went from 0.1 up to 100 for opacities.
305
00:50:36.370 --> 00:50:53.960
Mark Kushner: Our absorption went from 0 to 10, just random numbers. We pick a fixed number, we pick a set of random frequencies, we do a simulation over a period of time, and that's going to create our training data set, and we do a bunch of those. Okay? So we just create these random initial conditions, random opacities, random whatever.
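[Editor's aside: a minimal sketch of that data-generation recipe. The opacity range 0.1 to 100 and absorption range 0 to 10 are as stated in the talk; the grid size, wave counts, and amplitude ranges are illustrative assumptions.]

```python
import numpy as np

rng = np.random.default_rng(0)

def random_training_sample(nx=64, n_waves=3):
    """One random training configuration on a periodic box [0, 1):
    a sum of random sine waves for the initial density, plus randomly
    drawn material coefficients."""
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    # Random sine-wave initial data: random integer frequencies,
    # amplitudes, and phases, shifted so the density stays positive.
    freqs = rng.integers(1, 6, size=n_waves)
    amps = rng.uniform(0.1, 1.0, size=n_waves)
    phases = rng.uniform(0.0, 2 * np.pi, size=n_waves)
    rho0 = sum(a * np.sin(2 * np.pi * k * x + p)
               for a, k, p in zip(amps, freqs, phases))
    rho0 += np.abs(rho0).max() + 0.1
    sigma_s = rng.uniform(0.1, 100.0)   # scattering opacity in [0.1, 100]
    sigma_a = rng.uniform(0.0, 10.0)    # absorption in [0, 10]
    return x, rho0, sigma_s, sigma_a

# A kinetic solver would then be run on each sample for a period of
# time; those moment trajectories form the training set.
samples = [random_training_sample() for _ in range(5)]
```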
306
00:50:54.130 --> 00:51:00.520
Mark Kushner: And then what's super cool about this is, we're able to apply this to things we didn't train on, and it does really well.
307
00:51:01.170 --> 00:51:02.520
Mark Kushner: that's what's interesting.
308
00:51:02.600 --> 00:51:21.500
Mark Kushner: It's that there's something generalizable about this. It's not perfectly generalizable, but there's something generalizable about this. We can capture physics we didn't train on by training on this random initial data, which gives me hope for this kind of surrogate. I could think about taking something like a low-rank approximation,
309
00:51:21.680 --> 00:51:29.910
Mark Kushner: setting up a little periodic box, create a bunch of kinetic data. and then train on that kinetic data for periodic perturbations.
310
00:51:29.970 --> 00:51:40.480
Mark Kushner: and then use that to build my neural networks for multi-dimensional simulations. There's something beautiful about this that there's something generalizable under the hood here.
311
00:51:40.500 --> 00:51:41.480
Mark Kushner: Moving on.
312
00:51:41.620 --> 00:51:44.590
Mark Kushner: hard. What we see is really interesting.
313
00:51:44.940 --> 00:51:48.590
Mark Kushner: Okay. So all right.
314
00:51:48.680 --> 00:51:57.860
Mark Kushner: This is something with boundary conditions. We never trained on boundary conditions. All right. That's what's pretty cool about this. We took what's called a Gaussian pulse, or they called the
315
00:51:58.090 --> 00:52:03.540
Mark Kushner: It's a Gaussian pulse. You let it go. Typically, you would let it go in a period of time, and just watch it spread out.
316
00:52:03.770 --> 00:52:19.990
Mark Kushner: Here we put a reflecting boundary condition which we didn't train on. We just put it into the fluid model itself with the neural network, and what you see is here is the exact solution. Here is the gradient train neural network, and it's following pretty good along
317
00:52:20.140 --> 00:52:25.120
Mark Kushner: that kinetic solution without training on having a boundary condition
318
00:52:25.570 --> 00:52:29.750
Mark Kushner: right? The other models that they there are the what are called the filter Pn.
319
00:52:29.910 --> 00:52:43.380
Mark Kushner: So filtered. Pn. Is where you filter the highest moment really heavily. and the standard Pn. Which just has some terrible oscillations. We're doing this in a free stream, a near free streaming limit. the filtered PE, the unfiltered P. And as terrible oscillations.
320
00:52:43.560 --> 00:52:51.730
Mark Kushner: What's super interesting is, go to moment 5. When I told you we were doing this for hire, for up to 6 moments, or whatever go to month, 5 0 to 5. This is your sixth moment.
321
00:52:52.560 --> 00:53:05.360
Mark Kushner: Look at the kinetic solution, which is this blue line and the trained kinetics and the training connection with the boundary conditions. It's doing pretty good at getting the higher moments right, too.
322
00:53:05.360 --> 00:53:14.160
Mark Kushner: That's what's interesting about this. When I talk about being generalizable. It's doing something really interesting, having been trained on random date in a periodic box.
323
00:53:14.790 --> 00:53:16.400
Mark Kushner: Okay. So
324
00:53:18.320 --> 00:53:22.810
Mark Kushner: here is the 2 material problem. This is what we didn't train on again
325
00:53:22.840 --> 00:53:30.540
Mark Kushner: the opacity. And the white region is, I think, 10 a 100. Maybe this is. I think it's a 100 here and
326
00:53:30.620 --> 00:53:45.130
Mark Kushner: one in this region. So it's basically almost free, streaming and almost completely collisional. And what you get is that in the regions where it's very collisional. All the models do well, and then you go to the free stream limit, and the pn method breaks down.
327
00:53:45.360 --> 00:53:49.730
Mark Kushner: The hyperbolic closure does well.
328
00:53:49.850 --> 00:54:06.800
Mark Kushner: and the the there's another one we did, which is a non hyperbolic closure, which is training over a short time. When you run it longer it breaks down right. So the the point here is, if you don't have hyperbolicity. This is what happens to your neural network when you don't have hyperbolicity, it breaks down.
329
00:54:06.800 --> 00:54:13.810
Mark Kushner: whereas the green one which is hyperbolic is following your kinetic one on these long time. Lots right? So when you so so what? Here
330
00:54:13.880 --> 00:54:20.150
Mark Kushner: The the blue is exact. The orange is your p enclosure, which is you 0 your highest moment.
331
00:54:20.310 --> 00:54:39.050
Mark Kushner: The green is your provably hyperbolic with a symmetrizer solution, and the red is just training the gradients of the neural network without enforcing hyperbolicity. What ends up happening is you can get long time runs out of your hyperbox solution that's going to go to the right diffusion limit that you can't get out of
332
00:54:39.050 --> 00:54:41.380
other neural networks you just apply to the system.
333
00:54:42.190 --> 00:54:46.300
Mark Kushner: Okay. This is again showing more results for the same kind of thing.
334
00:54:46.880 --> 00:55:03.850
Mark Kushner: This is looking at, not the ninth. This is the seventh moment in the ninth moment, doing the 2 material problem, and showing again very similar things that the kinetic one with the hyperbolic closure does really well. The filtered Pn. Does well for low moments, but for the high moments the filter P. And breaks down.
335
00:55:03.850 --> 00:55:08.360
and that's again showing the power of what these neural networks can do in terms of being expressive
336
00:55:10.120 --> 00:55:28.760
Mark Kushner: Ongoing work, just so you can see: we've extended this to the BGK equations for transport. The BGK equations are just a simple modification; we're using a BGK collision operator instead of a Boltzmann collision operator
337
00:55:29.160 --> 00:55:32.100
Mark Kushner: with some collision frequency tau.
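[Editor's aside: the BGK model being described replaces the full Boltzmann collision operator with relaxation toward a local Maxwellian at rate 1/tau. Standard 1-D form, with notation supplied by the editor rather than the slides:]

```latex
\partial_t f + v\,\partial_x f
  = \frac{1}{\tau}\bigl(\mathcal{M}[f] - f\bigr),
\qquad
\mathcal{M}[f](x,v,t)
  = \frac{\rho}{\sqrt{2\pi T}}\,
    \exp\!\left(-\frac{(v-u)^2}{2T}\right),
```

where the density $\rho$, bulk velocity $u$, and temperature $T$ are computed from moments of $f$ itself, which is what makes this case nonlinear.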
338
00:55:32.780 --> 00:55:38.830
Mark Kushner: And in this context, again, the this is an 8 moment model.
339
00:55:38.900 --> 00:55:41.080
Mark Kushner: The this is
340
00:55:41.640 --> 00:56:08.900
Mark Kushner: this is just inside of the training window. I have other results. I can show you outside of the training when I should have put one in here. It's my mistake. I thought i'd grab the one that was double the time. So this is the end of the training window. I thought I had grab the one that was double the time for this, and I grabbed the wrong one. So. But it's doing really well. It's not up. It's not surprising as toing while the training, when it was surprising to be well outside of the training window, so I should have shown you that one. This one's actually much harder, much, much harder, than radiation transport.
341
00:56:09.200 --> 00:56:27.210
Mark Kushner: And the reason why is because your hyperbolic system is in non conservative form by its very nature. Why is it non conservative? What if you think back to what I said, is that the gradient of the highest moment is related to the gradient of your 4 lowest moments. That means you can't write in flux conservative form.
342
00:56:27.210 --> 00:56:43.500
Mark Kushner: That means you have to be looking at things called path conservative methods for numerical pdes. It's actually much harder to do that than it is to do it to to make this all work. It's much much harder than it was for radiation transport. The other thing is, we're doing something kind of clever Here we've designed a neural network
343
00:56:43.600 --> 00:56:51.040
Mark Kushner: that learns the eigenvalues in an ordered way, and converts them directly to the coefficients that go in front of the gradients.
344
00:56:51.090 --> 00:57:11.470
Mark Kushner: and it privately maintains hyperbolicity. And so that's also pretty cool. We know how to extend the Rte. To. We know how to extend the symmetrizer to 2 and 3D for Rt: so we're working on that right now, which is training M. That's harder, because whenever you're doing a multi dimensional problem, the data is much more expensive to collect.
345
00:57:12.000 --> 00:57:17.950
Mark Kushner: So this is a harder prom to train. We're working on doing that, but we know how to do the symmetrize and move that forwards.
346
00:57:18.510 --> 00:57:45.810
Mark Kushner: and then we're also doing this, for on the bucket list here is to do this for Bgk Poisson. So the next step in this project right here is now to put Sans equation in there along with the Bgk model, and consider that over a range of collisionalities to show that it generalizes in that context as well. Really, the the goal is to create a series of surrogate models and a multi- setting that can be used outside of their training window for long time simulations.
347
00:57:46.430 --> 00:57:55.960
Mark Kushner: This work is with one of nuclear engineers undergraduates who's now my grad student at University of Michigan. So you guys might remember Nick.
348
00:57:56.110 --> 00:58:07.260
Mark Kushner: And then Ming Chen Ding is one of my postdocs in Jin Tahoe, or Wong, was one of my former postdocs who work on this with me. So this is. This is a really a collaborative effort on this phone.
349
00:58:07.850 --> 00:58:20.370
Mark Kushner: So let's come back to what the goal is. The goal is. We want to be able to do multi fidelity, hierarchical modeling of these kinds of complex systems. We want to be able to do optimal design.
350
00:58:20.370 --> 00:58:33.510
Mark Kushner: And so all of these pieces are being built in parallel to build a holistic approach. An entire family of models capable of doing these systems for the purpose of you queue, uncertainty, quantification.
351
00:58:35.430 --> 00:58:41.310
Mark Kushner: So with that, these are references. If you're interested, I can send these 2. I'll get the talk. Mark.
352
00:58:41.540 --> 00:58:43.620
Mark Kushner: and I will take your questions.
353
00:58:48.630 --> 00:58:51.370
Mark Kushner: Thank you very much. And your other questions
354
00:58:53.340 --> 00:58:54.330
Mark Kushner: Okay.
355
00:59:01.310 --> 00:59:03.880
Mark Kushner: I didn't know that the moments
356
00:59:06.930 --> 00:59:09.720
Mark Kushner: Well, that's what I mean by uncontrolled approximations.
357
00:59:09.840 --> 00:59:32.420
Mark Kushner: I do know that the oh, so the do you mean convergent in the sense that if I refine i'll get the right answer. Or do you mean, can you, since the if you take in the moment you'll converge to the solution of the kinetic in the context of Rte. It's a spectral expansion. You can actually show that that's actually a spectrally convergent for Rt: which is, linear you can actually show that's a spectral expansion
358
00:59:32.420 --> 00:59:34.960
Mark Kushner: for BGK, where it's nonlinear.
359
00:59:38.240 --> 00:59:44.150
Mark Kushner: Yeah. it's. A it's a good question. It works well.
360
00:59:46.710 --> 00:59:47.560
Mark Kushner: Great question.
361
00:59:48.410 --> 00:59:56.700
Mark Kushner: But the the motivation is some magic. Genie gave you somewhere the right closure.
362
00:59:56.740 --> 00:59:58.820
Mark Kushner: and then your mobile mile did the right thing.
363
01:00:01.860 --> 01:00:12.140
Mark Kushner: Yeah. So just add on to that. And then I ask my question right Generally you would have to to it from multiple number of moments. And then.
364
01:00:13.120 --> 01:00:16.400
But yeah, it's a hard question.
365
01:00:16.490 --> 01:00:22.080
Mark Kushner: Okay. So so my question on the neural networks. Yeah, it is impressive that
366
01:00:22.150 --> 01:00:27.930
Mark Kushner: you know you had. I don't know some isotropic data, but you are. They can then it general license.
367
01:00:28.120 --> 01:00:36.530
Mark Kushner: But there are also many reasons why it can fail right? So
368
01:00:36.820 --> 01:00:51.230
Mark Kushner: great. Okay. So that's a guarantee. Bye. in spite of that, the many ways it can fail. So what's the strategy there to? Well, so for the radiation transport stuff? We also know we're getting the right long time limit.
369
01:00:51.400 --> 01:01:18.320
Mark Kushner: And so there is. That's where the When I talk about uncontrolled approximations and neural networks. That's all you have to think about uncertainty, modification real seriously about that right for harder problems than our TV. In that context, I think the jury is out. There are people is part of the team. We're thinking about numerical analysis for neural networks, right? And there are lots of interesting open questions there to try and answer.
370
01:01:18.400 --> 01:01:21.680
Mark Kushner: I think there is hope.
371
01:01:22.000 --> 01:01:27.830
Mark Kushner: if we can. So the the neural network stuff I showed you for Bgk
372
01:01:29.360 --> 01:01:37.000
Mark Kushner: is interesting. I am actually more interested in these days. Although we're going to finish this project.
373
01:01:37.040 --> 01:01:47.200
Mark Kushner: the idea of replacing the collision operator and imposing an H. Theorem on a neural network that's going to do the collisions for you. And so that's something i'm more interested in these days.
374
01:01:47.250 --> 01:01:52.910
Mark Kushner: I I do agree that there's but when I was heading towards
375
01:01:53.340 --> 01:02:04.460
Mark Kushner: you, develop a general approach that can work for any function approximator, right? It could be a Gp. Could be something more amenable to your problem that in your network. So.
376
01:02:04.580 --> 01:02:13.610
Mark Kushner: But of course, your Netflix have nice sounding in a proposal. Let me set up 2021, but if they would be 2, in 2024. But
377
01:02:13.970 --> 01:02:36.490
Mark Kushner: yeah, so I I like that part. But I was wondering if there was anything particular about your networks, or you know you could imagine. So the team also has done interesting things where we've taken standard principle of thought will be composition stuff where we've made those structure preserving, and those are very beneficial for doing these kinds of things. But what you can do in that context is you can guarantee if you build
378
01:02:36.490 --> 01:02:39.780
Mark Kushner: in the spirit of Jon. Has David's work
379
01:02:39.790 --> 01:02:53.850
Mark Kushner: a a reduced order model that preserves the structure for the prom you care about. You can also couple that with a neural network for giving generalizability for how you evolve, how you add those basis functions together.
380
01:02:53.850 --> 01:03:08.720
Mark Kushner: and that actually gives you something super powerful. And part of the team is actually looking at that as an approach to build something more generalizable where you have guaranteed mathematical structure in that basis. But then, how you combine those basis elements done with a neural network.
381
01:03:08.760 --> 01:03:11.450
Mark Kushner: So we're we're exploring a bunch of these things.
382
01:03:13.010 --> 01:03:13.710
Mark Kushner: And
383
01:03:13.980 --> 01:03:20.540
Mark Kushner: is it so for training your closure is it? Looks like you're lie entirely on higher fidelity, kinetic simulations
384
01:03:21.740 --> 01:03:28.450
Mark Kushner: and experiments via dating stream. Absolutely so the group at.
385
01:03:29.780 --> 01:03:42.690
Mark Kushner: for sure they can be a data stream. So the people who are leading that actually are Lawrence. Little more national lab right? That particular group actually uses under Brian spears has done things where they do high fidelity.
386
01:03:42.800 --> 01:03:56.360
Mark Kushner: Oh, well, low fidelity training of the neural network means They use low fidelity. Many low fidelity models train the neural network, and then they coupled to the experiment at the output layer and do what's called transference learning, and they get amazing results in terms of their ability to be predictive.
387
01:03:56.360 --> 01:04:14.220
Mark Kushner: And so that's really if you talk about niff in the story of Niff. That's actually part of what led to the success. And so, yeah, you can certainly look at making experimental data streams. A key part of the training process, the the right. Now, the only thing we're doing is by
388
01:04:14.350 --> 01:04:20.270
Mark Kushner: just trying to develop theory behind these things right? So that's what we're trying to do right now. So we there's lots of cool things we can do
389
01:04:23.120 --> 01:04:24.660
Mark Kushner: any other questions.
390
01:04:25.950 --> 01:04:36.610
Mark Kushner: my Andrew back up on for good. So. So in traditional article on the cell, your camera enough particle. So you have a lot of noise.
391
01:04:36.760 --> 01:04:54.240
Mark Kushner: You do a Fourier transform, you say? Well, this fatal frequencies. I know that that's just noise. There's no physics that can explain that. So you throw away those those special harmonics you transform back on. Now I have something smooth
392
01:04:54.240 --> 01:05:01.880
Mark Kushner: and you move on. So there was a decision made. What's right? What's wrong.
393
01:05:02.330 --> 01:05:07.430
Mark Kushner: Then it need to to create a speed up the system. So is there
394
01:05:07.620 --> 01:05:13.350
Mark Kushner: an analogy that you can do with the machine learning that since machine learning
395
01:05:14.770 --> 01:05:18.640
Mark Kushner: right from role it a sense of
396
01:05:19.630 --> 01:05:27.270
Mark Kushner: if your your high fidelity solution is only required to 3 mesh point to 256. You have no problem right
397
01:05:27.620 --> 01:05:32.790
Mark Kushner: So but you don't do that because somewhere along the way
398
01:05:34.180 --> 01:05:38.670
Mark Kushner: multiply, and you begin going wrong. so
399
01:05:40.030 --> 01:05:54.000
Mark Kushner: can the machine learning become detailed right from wrong, and string your calculations. So one of the things we're looking at is how to build machine learning that can extrapolate. So imagine taking
400
01:05:54.120 --> 01:06:04.030
Mark Kushner: a set of resolutions that are super course finer, finer, finer, and the machine learning is trained on that process of successive refinement.
401
01:06:04.030 --> 01:06:19.900
Mark Kushner: I will say the only way the neural network doesn't generalize, and what I just showed you is, if you go below the space, your resolution of what I just did. It won't work right. That's where the generalization fails. So along these lines can you create a multi resolution training process where you could teach it
402
01:06:20.120 --> 01:06:25.100
Mark Kushner: at. You know. What are the things to filter out? Given the resolution we're working with, and Why.
403
01:06:25.280 --> 01:06:32.490
Mark Kushner: that's an interesting question. So is this sort of like a really great problem? Yeah.
404
01:06:32.700 --> 01:06:37.840
Mark Kushner: that's exactly what I was thinking about is how to use multi-grid to train neural networks
405
01:06:41.100 --> 01:06:43.020
Mark Kushner: and any other questions
406
01:06:43.690 --> 01:06:45.670
Archis Sudhir Joglekar: I have a question from online.
407
01:06:45.840 --> 01:06:48.640
Mark Kushner: Sure. Yeah, there are.
408
01:06:49.110 --> 01:06:50.430
Archis Sudhir Joglekar: Thanks.
409
01:06:50.510 --> 01:06:58.930
Archis Sudhir Joglekar: Great. Well, this is a little bit farther in the future. Possibly. But can you speak to your approaches towards optimization?
410
01:07:00.000 --> 01:07:19.070
Mark Kushner: Sure, I think right now, we're really focused on honest on you queue for optimization. I think we're gonna be. I I think we haven't made a ton of decisions because we were talking about this as a group, so can I speak to it
411
01:07:19.230 --> 01:07:21.210
Mark Kushner: not as much as I would like to
412
01:07:21.450 --> 01:07:27.740
Mark Kushner: if i'm being honest, what we're, what we can do in terms of optimization is once we have
413
01:07:27.810 --> 01:07:31.010
Mark Kushner: multi fidelity you queue in hand.
414
01:07:31.040 --> 01:07:41.490
Mark Kushner: we can start to look at whether we want to do greedy approaches to optimization or other other types of approaches. It's not clear that
415
01:07:41.670 --> 01:07:47.190
Mark Kushner: it's not clear. That we have a good sense of that right now is what I will say.
416
01:07:48.570 --> 01:07:50.220
Archis Sudhir Joglekar: Sounds good. Thank you.
417
01:07:59.770 --> 01:08:16.529
Mark Kushner: I'm dyslexic. Could I ask you to need them. So the first question, from a completely sideways viewpoint, have any of these approach approaches suggested a system that might explain the existence of all lightning. No, not yet.
418
01:08:16.720 --> 01:08:19.420
Mark Kushner: That's an interesting question. But no, not yet.
419
01:08:23.189 --> 01:08:29.890
Mark Kushner: The so these systems are doing simulations which other systems model simulations.
420
01:08:32.729 --> 01:08:40.069
Mark Kushner: I know what about human or automated oversight between step to make certain the models are accurate to reality.
421
01:08:40.370 --> 01:08:58.890
Mark Kushner: So the goal with uncertainty. Quantification is to try and do validation, both with experiment as well as validation, with other, with other models within the system. And so, if you're thinking about how to avoid human oversight
422
01:08:59.000 --> 01:09:05.609
Mark Kushner: that comes in at the stage where we start to compare with real experiments. Can we really predict the real thing? And what did we miss?
423
01:09:08.310 --> 01:09:10.750
Mark Kushner: Thank you.