[MUSIC]
So we have Yevgeniy Kovchegov from Oregon State, who will tell us about random self-similar trees, dynamical pruning, and its applications to inviscid Burgers equations.
>> Thank you.
So first of all, I want to mention that this is joint work with Ilya Zaliapin of the University of Nevada, with whom we've been collaborating since maybe 2008 or 2009, I don't remember.
And then recently Maxim Arnold joined us. He's from the University of Texas at Dallas, and he is an expert in Burgers equations.
So a lot of the things that I'll tell you about Burgers here, I learned from him.
So I will start with the terminology that will be used. Unfortunately, I couldn't avoid it.
So, first of all, the space that we are dealing with I will denote by L plane: the space of finite, unlabeled, rooted, reduced binary trees with edge lengths and plane embedding. Okay, it's a bit long; probably the "reduced" part needs explanation, so: series reductions.
So we consider as vertices only the junctions, the leaves, and the root. But we do not have vertices of degree two, because we just make them the middle of an edge; the trees are reduced in that sense. Binary, then embedded in the plane, and then rooted. That means we have a left-right orientation and an up-down orientation: towards the root is down, away from the root is up.
Now, each tree has a root. The empty tree, the smallest tree we have, will be denoted by phi and will consist of only the root; here I draw it.
Now, we have edge lengths, so we have the distance between points on the tree. So essentially we are talking about a metric space with a fixed distance function.
Now, the two functions of trees that we will use quite intensively will be the length of the tree and the height of the tree. The length of the tree will be the sum of the lengths of all edges. And the height of the tree is the largest distance from a leaf to the root, okay?
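As a quick illustration, here is a minimal Python sketch of these two functionals on a toy binary-tree encoding (the encoding and the names are my own choices for illustration, not from the talk):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """A vertex of a rooted plane binary tree; edge_len is the length of
    the edge connecting this vertex to its parent (toward the root)."""
    edge_len: float
    left: Optional["Node"] = None    # children are ordered: plane embedding
    right: Optional["Node"] = None

def length(t: Optional[Node]) -> float:
    """LENGTH(T): the sum of the lengths of all edges."""
    return 0.0 if t is None else t.edge_len + length(t.left) + length(t.right)

def height(t: Optional[Node]) -> float:
    """HEIGHT(T): the largest distance from a leaf to the root."""
    return 0.0 if t is None else t.edge_len + max(height(t.left), height(t.right))

# A stem of length 1 splitting into leaves at distances 2 and 3 above the junction:
t = Node(1.0, Node(2.0), Node(3.0))
print(length(t))  # 6.0
print(height(t))  # 4.0
```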
So, first of all, one more notation: delta x T. Take any point x on the tree; delta x T will consist of all the points of the tree above x. So it's the subtree above x, and you make x into the new root.
Now with this, we have a partial ordering. When do we say that T1 is smaller than or equal to T2? If there exists an isometry f from T1 to T2, that is, a function from T1 to T2 so that the image of T1 is included in delta f(rho 1) of T2, where rho 1 is the root of T1.
So essentially, this condition preserves the orientation, and the distances between points are also preserved after the mapping. So essentially you're embedding T1 into T2. Maybe you would switch left to right sometimes. But this is just to say which tree is bigger than the other, okay?
>> Is the map onto?
>> Why?
>> Is it onto?
>> It's not onto.
Yeah, so essentially it says that you can carve out a tree T1 out of T2, so that the orientation is preserved: up is up.
>> Injection wise.
>> Yeah. Now, so that makes a partial order between the trees, and we'll need that partial order in order to define pruning.
So what would be the generalized, what we call generalized dynamical pruning, because it generalizes a lot of various prunings that were described and studied in the literature? Well, we take a function phi from L plane to R plus that is monotone non-decreasing with respect to that partial ordering, okay?
So in the next few slides we'll give examples of
functions like this.
So once you have a function like this, then you can define a mapping, parameterized by time t, that maps trees in L plane into trees in L plane in the following way. It will definitely leave the root alone, right? As any such mapping.
What would it do next? It would look at every point x on the tree, and at delta x T, the tree above x. And if that non-decreasing monotone function phi that we've selected, applied to delta x T, is bigger than or equal to t, then this point stays. But if it's smaller, then it is erased, including that point x.
So the monotonicity part guarantees the following: that S t of T is gonna be itself a tree. So essentially, we erase parts of the tree from the top down according to that function phi.
And also we know that if s is bigger than or equal to t, then we erase more of the tree from top to bottom, so what's left at time s is gonna be smaller or equal, okay?
So this gives us
a universal way of eroding the tree from leaves down.
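A minimal Python sketch of this erasure rule, on a toy encoding where a tree is a tuple (edge_len, left, right) and None is the empty tree. This discrete version only decides which vertices survive and does not shorten the partially eroded edges, so it illustrates the rule rather than the full operator:

```python
def height(t):
    """HEIGHT of a tree encoded as (edge_len, left, right); None is empty."""
    return 0.0 if t is None else t[0] + max(height(t[1]), height(t[2]))

def prune(t, phi, time):
    """Generalized dynamical pruning S_t: a point x survives iff
    phi(subtree above x) >= time. Monotonicity of phi guarantees the
    survivors again form a tree, eroded from the leaves down."""
    if t is None or phi(t) < time:
        return None
    return (t[0], prune(t[1], phi, time), prune(t[2], phi, time))

tree = (1.0, (2.0, None, None), (3.0, None, None))
# With phi = height this is (a discrete version of) Neveu's tree erasure:
print(prune(tree, height, 2.5))  # (1.0, None, (3.0, None, None))
```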
All right, so here are some known examples. So, let's take that function phi to be the tree height. Well, tree height is definitely an increasing function, right? If you can embed T1 into T2, then definitely the height of T2 is gonna be bigger or equal than that of T1.
And moreover, I mean, because we did it so
that the orientation up and down is preserved, right?
We're not talking about the diameter or anything.
Now, if we pick phi to be that height, the generalized pruning introduced on the previous slide actually coincides with the tree erasure procedure described by Neveu in his 1986 paper.
And by the way, relevant to what I'm going to be talking about, he establishes an invariance of critical and sub-critical binary Galton-Watson trees with i.i.d. exponential edge lengths, with respect to that tree erasure.
So tree erasure is probably the most natural way of pruning.
Essentially, you set fire on every leaf,
and then it burns with the same speed down toward the root.
What is also notable about it is, of course, the semi-group property it satisfies. If I freeze, if I put out the fire and then restart it again, then I will have the semi-group property.
It just continues. Yes.
>> Just to clarify, this is a kind of quasi-invariance, because sometimes the tree will disappear, so you condition or something-
>> Yeah, yeah.
Quasi invariance,
you do condition on survival of it here, yeah.
>> Okay. Great.
>> [INAUDIBLE] How can you make a tree smaller and
still have it in there?
>> Yeah, yeah, yeah, absolutely.
We think about a forest, and you prune every tree, and just count the survivors and their statistics. Yeah, absolutely.
Now this is the pruning I'm gonna be concentrating on.
>> Because I think this is a key point, so
is this clear what this invariance scheme is?
>> What if I- >> Go ahead, please.
>> Yes, thank you, thank you.
Actually, I should have paused and asked for questions, yeah.
>> I'm sorry, could you go back to the regime in which
it's invariant?
>> There's a special slide about invariance.
>> Okay, great. >> So
I just mentioned that as kind of like a sneak peek, okay?
>> Thank you.
>> I'm sorry for organizing the slides this way. So the one that we're gonna concentrate on as an application today is gonna be the length.
So what is the length? The length, again, is the sum of all edge lengths in the tree, right?
So it's very easy to see that the pruning operator that we create with this function is not gonna be a semi-group. And I'll draw a picture; I meant to draw it before, but somehow I forgot to bring a lot of the pictures.
So imagine a tree, and consider one of the points x which is a junction, so it's one of the points that we consider vertices here.
Now imagine that you calculate the length of the left branch; let it be L. And let the length of the right branch be R. But the length of delta x T, the tree above x, includes both left and right, so it's gonna be L + R.
So what could happen? Because we are erasing sub-trees according to their length: if the length is less than t, then it's erased. So if you look at a time t which is bigger than the max of L and R, and smaller than L + R, right, then what happens? What happens is that we erased all the points here on the left, all the points here on the right, and then we pause for that much time. And if you stop the process and then restart the erasure, then we will not remember that we have to wait more time till we're in here.
So you will not have the semi-group property, but you could of course fix it by creating a sticky point here, which kind of counts time. And that would be exactly a massive point, a sticky point in solving the Burgers equation. So you can make it a semi-group if you introduce heavy points, like timers, essentially.
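The failure of the semi-group property for phi = length can be seen numerically; a Python sketch with made-up branch lengths, on a toy encoding where a tree is a tuple (edge_len, left, right) and None is empty (whole vertices only, no partial edge erosion):

```python
def length(t):
    """Total length of a tree encoded as (edge_len, left, right)."""
    return 0.0 if t is None else t[0] + length(t[1]) + length(t[2])

def prune(t, phi, time):
    """Keep a vertex iff phi of its subtree is at least `time`."""
    if t is None or phi(t) < time:
        return None
    return (t[0], prune(t[1], phi, time), prune(t[2], phi, time))

# Junction x with left branch L = 3 and right branch R = 4 above it:
x = (1.0, (3.0, None, None), (4.0, None, None))

t5 = prune(x, length, 5)          # max(L, R) < 5 < L + R: branches gone, x waits
restart = prune(t5, length, 2)    # stop and restart: x is erased right away
straight = prune(x, length, 7)    # run straight to time 7: x still survives
print(t5, restart, straight)      # (1.0, None, None) None (1.0, None, None)
```

Restarting erases x immediately while the straight run keeps it, so S_2(S_5(T)) differs from S_7(T): no semi-group without the sticky "timer" points.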
So this is what our application is gonna do. That's a bit later.
So I want to complete the picture with the third example.
I thought giving two examples is not sufficient; three is the golden rule, I think. So, the third example we spent many years studying, so it's not a waste of time, I hope.
So it's called Horton pruning. And it's tightly related to the Horton-Strahler ordering that originated in hydrology, with many other applications in computer science and in biology. So what is Horton pruning?
Well, the picture on the next slide will explain it all.
Maybe I should start
with the picture on the next slide, why not?
I mean, you made me think about in which order to present, yeah.
So you start with a tree, and what Horton pruning does is: you erase all the leaves in one iteration, and then do what? Series reduction. So if this was T, this is R(T); you do one more, R squared of T; do one more, R cubed of T.
If you are down to the empty tree, you stop and count how many iterations you did: three.
So you say, okay the Horton-Strahler
order of this tree is three.
And that's what Horton pruning would do if you let phi(T) be the Horton-Strahler order of the tree minus 1. So for every delta x T, we'll look at its Horton order.
Now, well, we need to do minus 1, because at time 0,
we don't wanna prune.
And so, actually, all iterations happen at integer times. And arguably, although it is still a conjecture which I think we are closing in on, these would be essentially all prunings that are semi-groups at integer times.
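The order itself can be computed without iterating the pruning, by the classical recursion; a Python sketch on a toy combinatorial tree (shapes only, no edge lengths):

```python
def order(t):
    """Horton-Strahler order: 1 at a leaf; at a junction, the max of the
    children's orders if they differ, and that order plus 1 if they tie.
    This equals the number of Horton pruning rounds needed to empty the
    tree, and also the register number of the corresponding expression."""
    if t == "leaf":
        return 1
    a, b = order(t[0]), order(t[1])
    return max(a, b) if a != b else a + 1

# A perfect tree with four leaves empties after three pruning rounds:
print(order((("leaf", "leaf"), ("leaf", "leaf"))))  # 3
```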
>> [INAUDIBLE] When you go to R(T), then do once more, so you remove-
>> I erase the leaves.
>> So you remove the leaves.
>> And then do a series reduction.
Yeah, everything we do here is modulo series reduction.
And then, why is it important for hydrological applications, for all kinds of computer science applications?
Well, because it builds a hierarchy on a tree.
So whatever was erased on the first round is given order one,
second round, order two, third round, order three, and so on.
So there is a bit of an overlap in the literature, because more computer-science-minded people like to call it the register number instead of the Horton-Strahler order, and it equals the minimum number of memory registers necessary to evaluate an arithmetic expression described by a tree T.
Now, of the papers on the topic, the first theoretical paper of interest here is that of Burd, Waymire, and Winn, where they again consider critical Galton-Watson trees and show some types of invariances, called Tokunaga or Horton self-similarity.
So now, one of the peculiar things, one of our first observations when we started working on Horton pruning, was to notice the following. Take a level-set tree. Let me, just to be logical in order, tell you what a level-set tree is: you make all the local maxima into leaves, and all the junctions will be the local minima, in the corresponding order, okay? And you can always go between a function and its level-set tree and back, okay?
So we noticed our first results explaining some self-similarity, or invariance, which I haven't defined, based on the following: if you take a function, look at its level-set tree, and then prune it, what tree would you get? You would get the level-set tree of the function interpolating the local minima. So each pruning corresponds to taking local minima; if you prune twice, you take local minima of local minima, and so on.
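On the function side, one round of this correspondence is easy to see numerically; a small Python sketch (the endpoint and plateau conventions are my simplification):

```python
def local_minima(seq):
    """Strict interior local minima of a finite sequence, left to right."""
    return [seq[i] for i in range(1, len(seq) - 1)
            if seq[i - 1] > seq[i] < seq[i + 1]]

w = [0, 3, 1, 4, 2, 5, 0]    # an up-down excursion with three local maxima
print(local_minima(w))        # [1, 2]: one Horton pruning of its level-set tree
```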
And in that sense we had a lot of fun playing with that.
Now, I just want to quickly clarify what we call the exponential critical binary Galton-Watson tree: essentially, this is the Galton-Watson tree in continuous time, so every edge length is exponential. Yeah, so let me be just that brief about this slide.
Now, something interesting that was noticed a long time ago and appears in the literature is the following: take an exponential excursion. How would you generate an excursion? You would go up by an exponential flight, down, up, down, until you cross over into the negatives. Look at its level-set tree, and what you see is exactly that critical Galton-Watson tree with exponential edge lengths.
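A Python sketch of this construction (the rate and seed are illustrative; the claim that the resulting level-set tree is critical binary Galton-Watson with exponential edge lengths is the one from the talk, not something this snippet verifies):

```python
import random

def exponential_excursion(rate=1.0, seed=0):
    """Alternate up and down i.i.d. exponential flights, starting with an
    up flight from 0, until the path first crosses below 0; returns the
    heights reached after each flight."""
    rng = random.Random(seed)
    path, h, up = [0.0], 0.0, True
    while True:
        step = rng.expovariate(rate)
        h += step if up else -step
        path.append(h)
        if h < 0:
            return path
        up = not up

path = exponential_excursion(rate=2.0, seed=1)
print(len(path), path[-1] < 0)  # the excursion ends at its first negative value
```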
So now, with this being our object, we proved a small theorem about the generalized pruning, as it's called, and it will have implications.
So what does it mean? The theorem says: take any monotone non-decreasing function phi and the pruning operator that comes with it. And look at the tree that you obtain by randomly selecting a tree T according to the exponential critical binary Galton-Watson distribution, so probability one-half, one-half branching. And then prune it for time t, conditioned on the event that after time t the tree didn't disappear, that is, it did not get reduced to only the root.
Then it's still gonna be Galton-Watson, still critical, and still with exponential edge lengths, except the rate is different: it can be represented as lambda times p sub t. So this is the new parameter, with p sub t being the probability of the exact event that we condition upon: that a tree selected according to Galton-Watson has not been erased.
And with all of this, obviously we want this to be positive, so we need that probability to be positive, which of course we can ensure by choosing a function phi for which it will be.
So, essentially any of these prunings would take critical
Galton-Watson into critical Galton-Watson, and
then we check the dynamics of the edge lengths.
Now, for the three examples: if phi is the total length, this would be the new rate, where I0 and I1 are modified Bessel functions, and with this formula you can easily check again the absence of the semigroup property. For phi being the tree height, so for the erasure by Neveu, this would be the new rate. And for the Horton-Strahler order, pruning happens just at integer times, and lambda will be halved each time; so at each integer time you prune, and you get twice longer edges.
All right, this is the slide you were waiting for, the invariance; so I should start at the bottom of the slide to be logical, right?
So the main goal of all of this study is not the particular applications, like in hydrology or in computer science or the Burgers equation. Our main goal is finding the invariances under these prunings.
So we need a measure, like the Galton-Watson measure, that generates trees such that, if we prune and condition on the whole tree not being erased, then we get the same distribution, maybe scaled; like, the length gets scaled.
So, essentially, think about some kind of shift of the distribution: you erase the trees from the leaves down, and you're looking for the invariances. And so this is just what you were waiting for, okay?
Now, these are some related results, and I'm not gonna go into defining those sorts of things; I just thought of them as a kind of curiosity.
So as I said, we started by answering some hydrological problems, and one of those concerned the Horton-Strahler order. That is: if you have self-similarity under pruning, meaning the statistics of order-i branches taken into order-j branches stay the same on average, then you would have what's called the strong Horton law, which is a geometric law for the number of branches of given orders.
and that's what we found really interesting to study.
The other one was studying invariances under Horton pruning of trees created by the Kingman coalescent, and we've shown a version of what's called the Horton law there.
But out of curiosity, again, I wanted to highlight the following fact: from the Horton pruning perspective, the Kingman coalescent tree is only one pruning apart from the level-set tree of an i.i.d. time series; well, statisticians don't like [LAUGH] an i.i.d. sequence to be called a time series, so more like white noise.
Okay, so in that sense the trees are essentially the same, and so is the result. It was kind of curious; I mean, first we arrived at it purely by formulas, and then I realized that it was natural.
Now here's the application I announced in the title of the talk, so I have to go through this application. I actually also find it very interesting. So this application has to do with the pruning where the phi function was picked to be the length of the tree, right? All right, so exactly this type of thing.
So let's try it with the 1d inviscid Burgers equation. The first part of the talk, where we don't go into the analysis of the trees, would work in any dimension, I should say, but here we restrict ourselves to 1d.
So let's take zero external force; on the right-hand side, we will have 0. There is a standard way of introducing the potential of the velocity field by substituting for the velocity the negative derivative with respect to x of psi. You plug it in and, with additional assumptions from which you can easily derive the proportionality to the density function, you arrive at this Cauchy problem with some initial conditions, okay? So essentially, that's what we are solving, okay?
Now, I copied it here with the initial condition. Before the first collision of particles, the Burgers dynamics is straightforward: you have the starting point, you have the starting velocity, you go with that velocity.
And the most interesting stuff starts with collisions. So after the collision time, what you do is modify the equation by introducing a small viscous term. With this tiny epsilon, the solution of this equation is well defined.
The Hopf-Cole transformation given here (and in general dimension, with [INAUDIBLE]) gives an integral representation of the solution for any epsilon. Now we obtain the solution of our equation (not to say "we", it's a classically known problem) by letting epsilon go to 0 and looking at where this solution goes, and the Lax-Oleinik variational principle implies that this will be the formula for the solution.
So I copied that formula here.
So what does it mean with respect to psi 0, the initial potential? So imagine this being the initial potential. Now you have points here on this negative slope going left, and here on this positive slope going right.
Now, the quadratic term here says the following. Suppose we can inscribe a parabola so that it is tangent to the potential function on both sides. Let's look at the corresponding interval on the x-axis, and at the minimum of that parabola, also projected onto the x-axis. So all of these points will collide into one massive point here at time t. And how do you get t? Well, it's the coefficient with which we inscribe the parabola. So by time t they all will be here.
So what does it mean?
It means that, by inscribing parabola here we know where
the massive point originates, then the massive points when
they absorb everything they could observe from left and
right, start moving and then colliding.
So you would get a shock wave tree. Okay, so this is the shock wave tree that we are gonna be pruning.
And we show that the pruning dynamics will
essentially correspond to the Burgers dynamics if
we start placing sticky points on the tree.
So as our first exercise and
that's what I'm reporting here,
we take and you show placement of velocities.
Initial placement of particles and
velocities alternating from positive 1 negative 1.
So you alternate the particles left right moving initial
velocity left right, left right with plus or minus 1 velocity.
So same velocity just going left or right.
Later on we will assume that the points are placed according to a Poisson point process, so that would be like a one-dimensional analog of the Zel'dovich conditions from astrophysics.
So what would be the initial potential there? Well, exactly like this: minus 1, plus 1, minus 1, plus 1. These are the slopes, right?
So for this type of potential, the shock wave tree is very easy. It's really easy to inscribe a parabola into the negative 45-degree corners, right? So we do know that the massive points will originate here.
Now, we model the shock wave tree as an upside-down level-set tree; the level-set tree of the ups and downs, or however you like to see it.
So what will happen here: you will have points going left here, and points going right here, generating a massive particle till this whole edge is eaten down, and then the massive particle starts moving left. Similarly, you have one originating here, and this will be the time needed for all of the points on the left and on the right to collide into the massive particle and to start moving to the right. If you cut this corner by just making one coordinate like this, right, and then squeeze in the parabola, the minimum is gonna be here; it's gonna correspond to the intersection of those two lines. That's where they're gonna collide and move on.
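The "intersection of those two lines" is just elementary kinematics; a tiny Python sketch (the function name and example positions are mine):

```python
def first_collision(a, b):
    """Unit-speed particles started at a (moving right) and at b > a
    (moving left) meet at time (b - a) / 2, at position (a + b) / 2."""
    return (b - a) / 2, (a + b) / 2

print(first_collision(1.0, 5.0))  # (2.0, 3.0): meet at x = 3 at time 2
```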
Okay, is this clear?
I should.
>> Show me your left hand.
>> Left hand?
>> Yes.
>> Okay.
>> Yes, because I had that feeling that left and right in your description were reversed; you're facing the audience.
>> Okay.
>> [LAUGH] >> You're right, actually.
That's very good psychological observation.
But, no, but I meant the other left.
>> [LAUGH] >> Alternative left.
>> Alternative left, yes.
>> [LAUGH] >> [LAUGH]
>> Yeah, whatever the political implications.
So now let's see how the trees work and how the erosion goes with those trees.
So imagine we started with a potential like this.
I already cut the corner here.
What happens at time t? Well, all the points here on the right slide towards the left, all the points here slide in, and so we get these flat parts in the potential. Same thing here.
What does this correspond to? Each leaf has a vertical and a horizontal component. The vertical is the accumulation of the massive point; the horizontal is the movement before the collision. And what you see is that this vertical matches this horizontal, and this vertical matches that one.
And so in our level-set tree we're only interested in the vertical displacement.
That's what our tree will be.
And by time t, it will be eroded on the left and
the right by the length of 2t.
So essentially you get erosion.
So this part got eroded and this part got eroded.
Okay, and whatever got eroded accumulates into the mass of the massive point.
And so if we go on, and this is from our paper, so this is really like a movie: what happens is this edge is eaten away. This left slope, so we have a flat part here; what does it correspond to here? To the erosion of this leaf on the right; no, it's the left, actually.
>> [LAUGH] >> And
the appearance of a massive point in the middle of a leaf.
Okay, five minutes; okay, I'm almost there.
>> [LAUGH] >> And then what happens next?
Well, this slope erodes, right? So essentially we erased the left and right sides, and now you might think about it as one double massive point sitting here and waiting for the collision. That's exactly that case with the erasing by length, where you erode the left side and the right side, but then you're waiting before you restart moving, okay? So this case would correspond to the double massive point on the leaf.
And then the dynamics goes on,
once they meet, they start moving.
So, how about I say it here and then... I mean, the slides aren't placed in the most logical order, as you already know. If we start by placing particles according to a Poisson point process,
then what are we having? We are having those exact exponential excursions for which the level-set trees are gonna be critical exponential Galton-Watson trees. So if we start with that initial condition, this will exactly correspond to the case in the theorem that we stated.
And part A of this theorem says: okay, at time t the tree erodes and erodes, so that, conditional on its survival, it's gonna be still critical Galton-Watson but with a new rate.
And moreover, we can actually track it back: we can always map back to the potential. So if you have the eroded tree with these massive points, we can always get back to the potential. So from this, we have a complete description of the distribution of the potential function at time t for these Zel'dovich kind of conditions.
So here it is: starting with an exponential excursion as our potential, exponential of lambda over 2 for each flight, the tree that we get at time t of the Burgers dynamics will be the Galton-Watson tree with this parameter lambda sub t, which is this. That was part A of that theorem.
Now what else?
Next, a single or double massive point is placed at each leaf. Well, it's a single point with this probability and a double with the corresponding complementary probability, like one minus this, okay? So some will have a double, some will have a single.
What else? Each single-mass leaf has mass m equal to 2t; I mean, if it's a single mass, it's always twice the time. If it's a double mass... I need one more minute.
If it's a double point mass, then we have a joint distribution. This is the case when the left and right sides eroded but the massive points didn't collide, and that would still be expressed through these functions p sub t and L of t, which are in turn expressed through modified Bessel functions.
What else?
We have, we can also place massive points in the middle
of the leaf that corresponds to the case one,
one of the leaf eroded but the other one hasn't.
And so, the other massive point is still growing and
that would be just geometrically the corresponding probability,
geometric but starting with values zero, one, two and etc.,
with the corresponding probability P sub t from
the other slide.
So we have some of these points. Now, what are the masses of the points that we are placing on the leaves? And by the way, each could be the left one or the right one, right? It could be that the right massive point already stopped growing and started moving, but the left one is still growing, or vice versa; so with probability 1/2 each, of course. And the masses have this distribution; so each of the masses has this distribution.
[INAUDIBLE] So we can give a complete description of the distribution of the potential at time t. Now, okay, if we don't start with just one excursion, but start placing points according to a Poisson process from left to right, then of course it will consist of many excursions, so we have the description of the dynamics of the massive particles.
So essentially, if you randomly pick a massive particle, its mass will be distributed this way. If it's moving, its mass is gonna be 2t. If it's still forming, its mass is gonna be distributed this way, and there's a picture. So either it's still on an excursion or it's already moving.
Finally, this is the last slide.
>> [LAUGH] >> I'm so ignorant.
Wrap up, I see.
>> [LAUGH] >> Okay.
>> [LAUGH] >> So what are the related results? Our result has some similarity to known ones, although it's different; we tried to rederive some of the known results from ours. What was it... Sinai's 1992 work: he started with white noise, you run the Burgers dynamics, and at time t it is still white noise. And there are results of [INAUDIBLE] and [INAUDIBLE] that have the same flavor, but they are different.
Now, what are the perspectives? The perspectives: of course, we want to expand the Burgers dynamics to a larger class, to the higher-dimensional case, because we can still inscribe paraboloids, and we can still have the shock wave trees; that's what they are. It would be interesting to connect the invariances with models of statistical mechanics exhibiting criticality, so there are some really important questions. We actually constructed here our own hierarchical branching process along the way.
Okay, I'll stop here because wrap up is wrap up.
I mean I just >> [APPLAUSE]
>> Do you have a question for?
>> I have one.
>> Okay.
>> Have you thought at all about connections between this stuff, in any way, and continuum random trees?
>> Continuum random trees.
We thought about it; I mean, look, we even read some of the papers in that direction to see. You see, the problem with [INAUDIBLE] processes in general is that what we do is statistics from the leaves down. So if you look at level-set trees, you can generate an infinite level-set tree if you start with a random walk from left to right, because it is built from the leaves down. Now, this is gonna be an infinite tree that you generate from the leaves down, but you never reach the root. And whatever combinatorial stochastic process is studied, it's mostly from the root up when they build an infinite tree. For us, it's from the leaves down, and thus it's almost like a zero-one law. I mean, whatever infinite tree we build from the leaves down has nothing to do with an infinite tree built from the root up.
Yeah.
>> A question
I just want to make sure because I'm not sure I understood.
You have this example.
The velocity's either negative one or one.
>> On the left or on the right?
Okay now.
>> So the point is: was it just your example, or is this your assumption in your theory?
>> This is our assumption, I'm sorry for-
>> So velocities are plus or minus.
>> Yeah, and the points are distributed according to a Poisson point process, yeah, thank you.
I mean, again, I think I went very fast. I wasn't supposed to be giving this talk, so I'm interfering, but this was my 50-minute talk, so I promised myself to give it in 40 minutes.
>> Thank you [INAUDIBLE] especially for
such short notice.
>> [APPLAUSE] >> Thank you.