Wednesday, May 16, 2018

Youtube daily report May 16 2018

My name is Pedro González.

I'm going to present a project today that I've been working on here in the Elbphilharmonie

called »Starlings Over The Cloud«.

Starlings are these birds that move in flocks.

When one of them moves, it affects the way the others are moving.

At the same time the clouds – those clouds in the sky that the birds move through – act as a metaphor for the web, for the internet.

The idea of the piece is to take, as a point of departure, the idea of the internet as a new tool that lets people communicate in a different and much better way.

But at the same time it's also changing the way we communicate with each other in real life.

This simplification of emotions that is happening through the internet is also happening in our daily life.

The digital utopia of the internet has its good things and also its bad things.

I think it's important to ask questions and to go to places that are still not defined.

This is utopia.

For me as a contemporary music composer, I think it's important that we try to find new ways or new paths.

I would say this is why it's interesting for me because it's also what I do as a composer.

I think it's an interesting piece, I'm happy with our process and I think people will have fun.

They will be a bit shocked.

I think it's a good performance to see.

For more information >> 3rd Hamburg International Music Festival | Starlings Over The Cloud - Duration: 2:05.

-------------------------------------------

GZERO World Clip: Presidential Relay Race - Duration: 1:03.

But the bigger question is, can you trust the United States?

I mean, ultimately American leadership is a relay race from one president to the other.

You pass the baton, you want to win the race.

President Trump has shown very clearly that his interest is in ripping up things that Obama did, because, in his telling, Obama was a loser president and Trump does the best deals.

But if you're an American ally, you really don't feel like you can engage with the United States

with that kind of continual stop-start presidency.

One piece of good news: I don't think it matters at all for North Korea.

They weren't really going to denuclearize anyway.

It's not like they trusted the Americans or the Americans trusted the North Koreans.

We are going to have a meeting coming up between Trump and Kim Jong Un

and both sides really want some kind of deal.

In fact, Trump may want one even more now that he got rid of the Iranian deal.

Watch that space in a positive way, but on the Iran front?

All the news is going to be negative.

For more information >> GZERO World Clip: Presidential Relay Race - Duration: 1:03.

-------------------------------------------

The Two Biggest Questions We Get for P&S Documents - Duration: 3:36.

Welcome to the Cape House Show, where we give you the tips, tools, and sense of

humor you're going to need to get through the biggest transaction of your

life! I'm Katie Clancy, CEO of The Cape House at

William Raveis, and today we're going to part two of Purchase and Sale documents.

So last time we talked, we answered "what is a Purchase and Sale?", "why do

we have a two contract offer situation in Massachusetts?", and "how does it work?" This week

we're talking about the money. Big questions about the money on a Purchase

and Sale document! We're going to give you two big frequently asked

questions, one that I get from buyers and one that I get from sellers. Buyers often wonder, "So I've got an 80% mortgage, which means I'm putting 20% down, but I'm putting 5% down with my Purchase and Sale - how does that work?" Alright, this is how it works.

So when you put the 5% "down" with your transaction (with your real estate transaction), that goes into an escrow account, which is a neutral holding account. It is an account that just holds this money until the closing.

So if you put 5% into that escrow account at Purchase and Sale, it just

sits there until the closing. At which time, that 5% plus the additional 15% for

a total of 20%, goes right to your bank and then you pay the rest of the balance

of the house on your mortgage. I hope that makes sense!
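To put hypothetical numbers on it: on a $500,000 house, an 80% mortgage is $400,000, so the down payment is $100,000 (20%). The 5% deposit at Purchase and Sale would be $25,000; it sits in escrow, and at closing it is combined with the remaining 15% ($75,000) to make up the full 20% that goes to your bank.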

The second big frequently asked question that I get is from sellers. Sometimes, as a seller, you'll have a buyer who is putting down less than 5% (or between 5% and 10%). At any rate,

they're not putting as much down with their mortgage. This happens a lot with

FHA mortgages, VA mortgages, and other low downpayment

mortgages, which means that your buyer doesn't have 5% or 10% to put down at

Purchase and Sale. So as a seller, you're like "You know, I'm looking for a little skin

in the game here, you know? They're only putting 1% down, how does

that make me feel? Like are they really invested in this transaction?"

Here's the thing, if someone is getting a mortgage

like that - usually a first-time homebuyer - if someone's

getting a mortgage like that, that 1% (that $2,500 or $5,000) that they're asking you to hold your house for until closing, that represents huge skin in the

game for them. These are people who don't have a ton of cash. Most of the

time (like the last time I saw this happen) it was the entire savings that

this couple had for their preschool for their daughter.

Let me tell you, they're not walking away from that money.

But you really need a real estate agent who understands how that works and

can prepare you for that and can also prepare the buyer for that to know that

some sellers won't understand it and they might be

uncomfortable accepting a low Purchase and Sale deposit.

But if you're well prepared, and your seller is well prepared, that all should go smooth as silk!

There you go! That concludes our lessons on Purchase and Sale documents

in Massachusetts. We have a lot more, I mean, we could talk about this all day. If

you have any more questions about Purchase and Sale documents or anything

to do with real estate on Cape Cod or anywhere else, come find us at

TheCapeHouseTeam.com

For more information >> The Two Biggest Questions We Get for P&S Documents - Duration: 3:36.

-------------------------------------------

As arveres... somos nozes?!| Minuto da Terra - Duration: 3:13.

For more information >> As arveres... somos nozes?!| Minuto da Terra - Duration: 3:13.

-------------------------------------------

Allen School Colloquia: Hannaneh Hajishirzi (UW) - Duration: 1:01:40.

- Good afternoon, welcome to the first talk

in the Paul G. Allen School of Computer Science

and Engineering Colloquium Series for 2018.

I am delighted to introduce to you

Hannah Hajishirzi who is an assistant

research professor in the electrical engineering

department here at UW so she's no stranger

to most of the people in this room.

Hannah has been doing exciting work

at the junction of natural language processing

computer vision, machine learning,

and artificial intelligence.

She does really exciting and ground-breaking work

in building on these foundations

to enable artificial intelligence

to start to do more than what any

of these single areas can do by itself.

So I think we're in for a real treat

with her talk in which she'll be telling us

about learning to reason and answer

questions in multiple modalities.

She's worked with a wide range of data.

She brings together the amazing capabilities

of end-to-end deep learning systems

with symbolic methods that are designed

to support reasoning and interpretability.

So without further ado, welcome, Hannah

and thanks everyone for coming.

- [Hannaneh Hajishirzi] Thanks Noah.

So thanks a lot for the introduction.

In this talk I'm going to present my work

on question answering and reasoning

about multi-modal data.

This is a joint work with my amazing students

and my colleagues at the University of Washington and AI2.

Recently we have witnessed great progress

in the field of artificial intelligence,

especially in natural language processing

and question answering.

For example, we have seen IBM's Watson

beating humans in Jeopardy.

We see Google search engine being able

to answer a lot of interesting questions

about entities and events and it's mainly built

on Google Knowledge Graph.

Also question answering and interactive system

capabilities have been deployed

into today's cell phones and home automation,

like in Amazon Echo and Google Home.

These systems are great mainly because

they are doing a really good job

in pattern matching but we really need

to answer two important challenges in order

for these systems to be fully applicable.

The first challenge is to have rich

understanding of the input.

The second challenge is the ability

to do complex reasoning.

Let's look at this example:

What percentage of the Washington state budget

has been spent on education in the past 20 years?

If you ask this question from Google you probably

see a list of webpages that are relevant

to the Washington state budget

and it's the user's job to go

over these web pages, connect all of them,

finally find the answers to the question.

But what we want is a question answering system

to be able to understand that we are actually

looking to find and solve that equation.

And then it's the AI system's job to go

over different web pages to understand

exactly what is going on inside those web pages,

looking at different sources of data,

like graphs, like diagrams, tables, and so on,

and then connect all of them together,

do complex reasoning that practically requires

multiple steps to finally answer this question.

Or let's look at this problem.

What will happen to the population of rabbits

if the population of foxes increases?

So this is a type of question that probably

a ten-year-old would be able to answer

by looking at this diagram, knowing that foxes

and rabbits are connected to each other.

This is the food web and foxes are consuming

and eating rabbits.

But for current AI systems this is actually

very difficult to answer.

In order to answer those questions

the system not only needs to understand

what is going on inside this diagram

but also needs to know what it means

that rabbits are consumed by foxes.

Basically, it requires some sort

of complex reasoning to go over

a large collection of textbooks

and also maybe look at some other types

of structured data like encyclopedias,

or, for example, Wikipedia pages

to finally answer this question.

In my research I have been focused

on designing AI systems that can

address these two challenges.

One is understanding the input

and also being able to do reasoning.

I have started my research career

with designing logical formalisms

on how to represent data such that we can

do more efficient reasoning.

Then I extended those formalisms

to NLP and cognitive vision applications

by learning those formalisms from data.

In particular I have introduced new challenges

in NLP and computer vision.

Some challenges like automatically solving

algebra word problems or automatically solving

geometry word problems.

Basically these challenges are the types of tests

that 10-year-olds or 12-year-olds

would be able to handle but current AI systems

can't solve those problems.

In order for an AI system to address

these questions I have made contributions

to the NLP area to basically have better

and richer understanding of the textual input

and also to computer vision literature

to have a better understanding of visual input

and also multi-modal data.

For the purpose of this talk I'm going

to focus on a task that is mainly

question answering.

The idea is we want to have a good and rich

understanding of the input that can be

of the form of multi-modal data,

usually a question and a context

and an algorithm for being able

to do reasoning to find the answers

to the question.

Here is the outline of my talk.

In the first part I will show how

we can represent data mainly using

symbolic representation or neural representation.

Then I'm going to show my work

on designing end-to-end and deep neural models

for question answering about multi-modal data.

And then in the next part I will show how

to use symbolic representation

to solve some AI challenges

and then finally I show my future directions.

When we want to design AI systems

an important challenge to address is

how we represent data such that

we can learn the representation

from data, but at the same time

we want these representations

to facilitate reasoning for us.

These type of representations can range

from symbolic representations,

like logical formulas, to neural representations.

Let's look at this problem

We want to design a system that can automatically

solve geometry word problems.

And I want to show that if we can understand

the input to some sort of logical formula

like what you see in the screen

then if we can leverage axioms and theorems

from the geometry domain and do reasoning

we would be able to solve these problems.

This representation is great because it allows us

to do complex reasoning and then solve

these geometry problems.

But at the same time directly learning it

from data is very hard.

Basically this representation

is too rigid to learn.

What we want is to make

these logical representations a little softer.

For example, we can use different formalisms

that are available, like Markov logic networks,

probabilistic relational models,

or some of my PhD work

on representing sequential data.

But for the purpose of this work

we have focused on using probabilistic relations

and then assign some probabilistic score

to each of the relations that we are extracting

from the geometry problem.

So here I just told you about how we represent

the problems and then later in the talk

I will show you how we can use

these probabilistic relations

to finally solve the problem.

On the other end of the spectrum

lies neural representations.

One very popular way and technique

that has been used in the deep neural model

literature is to use word embedding.

The idea is that words that occur very frequently with each other appear close to each other in a high-dimensional space. This is called embedding: mapping words into some high-dimensional space.
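A minimal sketch of the idea in Python (the words and vectors here are invented for illustration; real embeddings are learned from co-occurrence statistics and use hundreds of dimensions):

import numpy as np

# Toy 4-dimensional embeddings; all numbers are invented stand-ins.
embeddings = {
    "fox":    np.array([0.8, 0.1, 0.7, 0.2]),
    "rabbit": np.array([0.7, 0.2, 0.8, 0.1]),
    "budget": np.array([0.0, 0.9, 0.1, 0.6]),
}

def cosine(u, v):
    # Words that co-occur frequently end up with high cosine similarity.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(embeddings["fox"], embeddings["rabbit"]))  # high: related words
print(cosine(embeddings["fox"], embeddings["budget"]))  # low: unrelated words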

This has been very popular these days

and people are achieving a lot of great results

using deep neural models.

But in order to achieve meaningful representations

we usually require lots of training data.

And also because they are not that intuitive

it is very hard to interpret and explain them,

it is much harder to do complex reasoning

using these types of representations.

But what if we can add some structure

to these neural representations?

For that let's look at the domain

of visual illustrations or in particular

let's focus on diagrams.

A lot of different types of visual data

can be found in a diagram form.

So for example we have diagrams in textbooks,

we have diagrams showing us how

to assemble furniture,

we have work flow diagrams, and so on.

These type of images are inherently different

from natural images and they usually try

to depict some complex phenomenon

that it's very hard to show

just with a single image or multiple sentences.

But if we use an out-of-the-box

neural embedding model and then try

to represent these diagrams into

some neural representations

that probably wouldn't work.

Why? Because we're going to lose a lot of information that is hidden in the structure of these diagrams.

Also, there are a lot of ambiguities

involving these diagrams.

Like arrows might mean different things

in different types of diagrams.

In one diagram it might mean consuming,

in some other diagram it might mean

water transformation, for example

in a water-cycle diagram.

So how do we tackle this is to build and design

a representation that works for a wide range

of diagrams and then just try to respect

the structure hidden inside the diagram.

In particular we introduce diagram parse graphs

where every node in the graph actually shows

a constituent in the diagram.

Like for the diagram at the bottom

you can see there are nodes for blobs, for text, or for arrows. And then edges show how these constituents are related with each other.

For example there are intra-object relationships,

a text describing a blob,

or inter-object relationships,

two objects are related with each other.

And then later we can take different components

from the diagram and then basically encode those

into some neural representation.
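As a minimal, hypothetical sketch in Python (the node and relation names are invented; the actual diagram parse graph data structures in this work may differ):

# A diagram parse graph: nodes are constituents (blobs, text, arrows),
# edges are typed relations between them.
nodes = {
    "blob_fox":    {"kind": "blob"},
    "blob_rabbit": {"kind": "blob"},
    "text_fox":    {"kind": "text", "value": "fox"},
    "arrow_1":     {"kind": "arrow"},
}
edges = [
    ("text_fox", "blob_fox", "describes"),     # intra-object: label describes blob
    ("blob_rabbit", "arrow_1", "arrow_tail"),  # inter-object: rabbit is consumed
    ("arrow_1", "blob_fox", "arrow_head"),     # by fox, via the arrow
]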

Later in the talk I will show how

we can leverage these representations

and do a good job

in question answering about diagrams.

So I showed you two different ends

of the spectrum for representing input.

One are symbolic representations

which are great because they allow us

to do more complex reasoning but they usually work

for a specific domain.

On the other end of the spectrum

are neural representations which are great

because they can cover a wide range of inputs

and they are easier to learn

if you have enough training data.

But on the other hand,

they are much harder for reasoning.

So most of my work has been focused on

making the logical formalisms a little bit softer

or neural representations more structured

and in future I'd really like to be able

to combine these directions.

Let's move on to the second part of my talk

which is on designing neural models

for question answering about multi-modal data.

Let's first look at this task

in the textual domain.

This has been also called reading comprehension

in natural language processing

and it's a very studied problem.

The idea is we have a question

like which NFL team represented the NFC

at Super Bowl 48 and there is a context paragraph

given to us as an input.

Then the goal is to find the answer

to the question.

Which is going to be Seattle Seahawks

and it usually can be found inside the paragraph.

A conventional approach to solve these problems

is a pipeline approach.

It involves feature engineering to map the question and the context to some features, like the words that appear in the question and context, their frequency, their similarities, and so on,

and then train a classifier that tells us

if a phrase is the correct answer

to the question, or not.

When we apply this method to a very popular

recent data set of question answering

this method achieves about 52-53% accuracy

on how it can solve these problems.

And as you see there is a large gap

between this pipeline approach

and human performance.

And we believe that the reason is

there is a disconnect between the feature engineering

or the representation and also

how we design the classifier.

Also, we think that these features do not do a good job

of representing the text or the interaction

between the question and the text.

So what we have done is to introduce

a neural approach to address this problem.

What we want is to map the question and context

into some neural representation

and then at the same time learn a function

that assigns a really high score

to the correct answer of the question.

Basically this function

has a domain and a range.

The domain would be the neural representation

from the question and context

and the output is the distribution

over the words appearing in context,

such that the correct phrase like the Seattle Seahawks

gets the highest score.

But how do we learn such a function and representation?

Let's look at the similarity information

between the question and the context paragraph.

What we want is to find what word or phrases

in the context are really important

in answering the question.

Let's look at this phrase:

National Football Conference

This is probably an important phrase.

It is relevant to NFL, NFC, and Super Bowl 48

in the question.

But then let's look at another word, "defeated", in the context. This is probably a less important word to understand, because it is only relevant to something like "represented" in the question.

This is called an attention mechanism

and it has been very popular these days

in both NLP and computer vision.

I can very loosely define an attention mechanism

by using human visual attention.

For example, if I want to focus on the stop sign in this image, we basically look at the part of the image where the stop sign is located with high resolution, and the other parts of the image with lower resolution.

So basically I'm going to look at the most important

parts of the image.

This has been used in NLP for tasks like machine translation as well.

Most of the time the attention is computed in one direction, from the context paragraph to the question.

But what we observed in this work is that it is important to look at the attention from the other direction as well.

Let me give you some insight and then

I will dig into the details.

For this direction we want to see

for every word in the question

or every phrase in the question

what are the most important parts of the context

or what is the critical information from the context.

For example, NFL teams would be related

to Seattle Seahawks and Denver Broncos

because both of them are teams in the NFL.

But then let's look at another phrase like NFC.

It is most likely relevant to National Football Conference, but the thing is

it is more relevant to Seattle Seahawks

compared to Denver Broncos.

Why, because Seattle Seahawks is part of NFC

while Denver Broncos is not.

So this actually will help us to have higher score

for the correct answer to the question.

So here are the details of how

we implement the attention.

We first compute a similarity matrix using all the words

that are appearing in my context paragraph

and all the words in the question

then for every word in the context

I compute the distribution of how it is similar

to the words in the question.

And then I take a weighted combination of all the words in the question for each context word.

Now I have a new distribution over the context words.

For the second direction what we do is again

to build on this similarity information

between the question and context words.

But this time we're going to look

at every word in the question

and compute its distribution to how much

it is similar to the words in the context.

So now we have one distribution with respect

to the question words.

We have another distribution with respect

to the orange words, and we aggregate all of those.

Now we have a new distribution that represents

how important the context words are

for us to answer the question.
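Here is a minimal NumPy sketch of the two attention directions just described; the dot-product similarity and the max-pooling step are simplifying assumptions, not the exact formulation from the paper:

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

T, J, d = 6, 4, 8            # context length, question length, hidden size
H = np.random.randn(T, d)    # context word representations
U = np.random.randn(J, d)    # question word representations
S = H @ U.T                  # similarity matrix, shape (T, J)

# Context-to-question: for each context word, a distribution over the
# question words, then a weighted combination of the question vectors.
a = softmax(S, axis=1)       # (T, J)
U_att = a @ U                # (T, d), attended question per context word

# Question-to-context: which context words matter most for the question.
b = softmax(S.max(axis=1))   # (T,), one weight per context word
h_att = b @ H                # (d,), attended context vector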

So this is great, we have been able

to bring in information from the question side

and have a better representation of the context paragraph.

But what is missing is how we can incorporate information

from the structure and the sequential nature

of the sentences.

In particular, we added a new function that tries to encode the structural and sequential information from the context and see how these interact with each other.

And finally this will be the output

of our system in scoring the phrases.

More particularly, we have introduced

a deep neural model called bi-directional

attention flow.

This is a hierarchical architecture

that has different layers.

These layers are designed such that

they add richer understanding of the input.

And basically we have the representations of the input

at different levels of granularity

according to these different layers.

And here is the detailed architecture

of our system.

Don't get scared of this diagram

but what it mainly shows is that

each of these nodes tries to represent

a word into some neural representation.

We have different layers, that each layer

is responsible to capture some information

about the context and the question.

For example we have character embedding layer

that tries to deal with unknown words

in the vocabulary.

We have attention flow layer that tries

to bring in information from the question,

And we have modeling layer that tries to capture

the structure of the sentence

into building the representation.

Now that we have a representation

we pass all of these to an output layer

that can change according to different applications.

But for the purpose of this particular task, we want to compute distributions over the index where the answer phrase starts and the index where it ends. Basically, we predict a p_start and a p_end distribution.

Then at the training stage we bring in training data and we optimize an objective function, which maximizes the log probabilities that these predicted distributions, p_start and p_end, assign to the ground-truth start index and the ground-truth end index, y_start and y_end.

And then once we do training we basically

learn the parameters and then use the model in action.
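As a minimal sketch of that objective (toy numbers, assuming the model has already produced the two distributions):

import numpy as np

p_start = np.array([0.1, 0.7, 0.1, 0.1])    # predicted start distribution
p_end   = np.array([0.05, 0.1, 0.8, 0.05])  # predicted end distribution
y_start, y_end = 1, 2                       # ground-truth span indices

# Minimizing this negative log-likelihood maximizes the log probabilities
# assigned to the ground-truth start and end indices.
loss = -(np.log(p_start[y_start]) + np.log(p_end[y_end]))
print(loss)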

At test time the input to the system

are, again, the question and the context paragraph.

We run our neural model through all the layers with all the learned parameters

and now we find out what is the most likely phrase

that is the answer to the question.

Let's see how our model works in practice.

We evaluated this model in a very popular

question answering data set that includes about 100k questions and paragraphs, all drawn from the most popular articles on Wikipedia.

And we evaluated on how well it could answer the questions.

As of January 1 last year we were state of the art

and we were the first on this leaderboard,

this question answering leaderboard.

And our system was able to achieve about 81% accuracy.

And the reason that we are higher with respect to other teams, we believe, is that we leverage this bi-directional attention.

Also this hierarchical nature, this modular nature

of our representation and our model

is helping a lot with capturing

more insight about the input.

So since then a lot of teams are competing

in this domain.

And so now, these days there are about 60-70 teams

on the leaderboard and some of it

are built based on our models,

some are completely different systems

but it is interesting that the best model now, at least as of January 1, 2018, actually builds on our BiDAF by adding the new ELMo word embeddings and it gets

about 85% accuracy.

We have also evaluated BiDAF on other data sets.

Some of this has been done by my group and some by other researchers in other places.

Basically we have achieved the state of the art

on a set of articles from CNN where

the question answering is in the form

of cloze style tests,

state of the art on some other Wikipedia

question answering data sets, Zero Shot

relation extraction data sets and a new data set

that requires multi-hop reasoning.

We also tried to incorporate such similar ideas

into another modality.

In particular we showed that if we add

a little bit more structure to these neural representations

we are able to leverage similar ideas

into a diagram question answering task.

In particular, we introduced this challenge

of answering questions about diagrams

that are taken from textbooks.

We have collected about 15 k questions and diagrams

and we have questions like this: in this food web, we want to see how many consumers consume kelp; or questions like, what happens to the water in the sea on a sunny day.

So there are different varieties of questions.

There are a lot of ambiguities

so this is obviously more challenging

than just question answering about only a language modality.

So here is the architecture of our system.

We basically applied a similar setup

to understand questions and map them

to some neural representation.

Then we build the diagram parse graph, which adds some structure to the diagram representation, take different components of the diagram and encode them into some neural representation, and then compute attention over how similar they are to each other and answer questions.

Our results are promising.

Compared to another method that uses only deep neural models without these structured representations, our method achieves almost 15% better results, a significant gain.

And our system is able to answer these types of questions

Like the diagram depicts the life cycle of what?

Or how many stages of growth does the diagram depict?

The second one is more difficult.

It requires to have a better understanding

of the diagram.

Let me show you a demo of my system

on how we answer questions about textual data.

So I hope you all see the demo.

So the input to the system are a paragraph

and a question and then we submit

and we want to see how we answer the question.

Let me show you some examples.

The first paragraph is about Nikola Tesla.

If I ask this question, in what year Nikola Tesla was born,

if I submit it will give me 1856.

And it's pretty interesting because there is no

explicit mention anywhere but you see

that the first number in the parentheses is 1856

and our system is able to learn

that usually the first number is associated

with the birth year.

Let's look at another question that requires

richer understanding of the question.

The document, the article is

about Inter-governmental Panel on Climate Change.

I can ask this question.

What organization is the IPCC part of?

If I submit the question it will give me United Nations,

which is right, and as you can see here is a hint.

"IPCC is a scientific inter-governmental body

"under the auspices of the United Nations."

So it shows that it can do kind of complex paraphrasing.

Or let's look at another question that requires

a little bit reasoning.

This article is about the Rhine River in Europe.

And the question is asking what is the longest river

in Central and Western Europe.

If I submit this question it gives me Danube, but there is no explicit hint, no explicit mention that Danube is the longest river.

You can find it here: "It is the 2nd largest river

in Central and Western Europe."

It refers to Rhine after the Danube.

So basically the system is able to do

some single-step reasoning to understand

that Danube is actually the largest.

Let me show you some mistakes that our system is making

because probably that's more interesting

to show how we can make improvements.

Let's look at this article, Oxygen.

I want to write my own question.

What does the element with atomic number eight belong to?

So we expect the system to tell me it's a member of the chalcogen group.

Understanding that an element with the atomic number eight

is actually oxygen. But okay, the system makes a mistake

because it does understand that this is oxygen

but we require another step to understand

it's a member of chalcogen group.

Let me push on this reasoning side a little further.

Let me write down my own story.

Liz has nine black kittens.

She gave some of her kittens, or let me give a number.

She gave three kittens to John.

John had five kittens.

Then I'm going to ask my question

which is how many kittens...

does Liz have.

So the system is not able to answer this question.

It finds that 'okay Liz initially had nine black kittens.'

But it's not able to do reasoning to understand

that some number of kittens are decreased

from the initial number of kittens.

So this is basically the focus

of the next part of my talk.

So just to summarize, so far,

I have talked about designing a deep, modular

neural model that can do question answering

on wide coverage input that includes text

and also diagrams.

The remaining challenges are what can we do

when the questions require more complex reasoning,

especially when the training data is limited.

And that's the focus of next part of my talk.

I am interested in introducing new challenges

that humans can solve but current AI systems

cannot address those.

In particular I have looked at the domain

of geometry and algebra word problems,

trying to design algorithms

that can automatically solve them.

Addressing those problems requires rich understanding

of the input and also the ability

to do complex reasoning while training data is limited.

An interesting test bed to all of these problems

that I'm introducing is algebra word problems.

Now I have the story that I just entered as my demo

like Liz had nine black kittens and something happened

to the number of kittens, now how many kittens

does she have, or did John get.

Designing algorithms that can automatically

solve these problems has been an AI challenge

for a long time, since as early as 1963, but the approaches that earlier AI researchers were taking were basically using some rules to map questions into equations.

But that does not generalize to new domains.

Especially because these algebra word problems

are designed on a child's world knowledge

and they can vary a lot.

We can have questions on daily life.

There can be questions on shopping

or science experiments.

There are no prior constraints

on the syntax or semantics that have been used

in these domains.

And we sometimes require knowledge

in order to solve some of these problems.

For example, in order to compute

the number of people who began living

in a country we need to basically know that we need

to add the number of people who were born

in that country and the people who immigrated to that country.

There are some words in those stories

that don't matter much, like the word kitten

can be replaced with many different things

like book, toy, balloons.

As long as we do it consistently we should be fine.

But some words like this verb give,

in this story it plays an important role.

If we replace it with receive the whole story

and the final equation would be completely different.

There are some irrelevant or missing information

in these stories.

For example, the story tells me

that Mary cuts some more roses

from her flower garden.

It never explicitly tells us that these roses

are actually being put in the vase.

But it is very easy for us humans

to understand that these roses

are being put in the vase but this

is not so obvious for machines.

There are ambiguities involved

like, for example for the first story

we need to add the number of games that are lost

and the number of games that are won

in order to find the total number of games.

But in the second story we need

to subtract the number of balloons that are lost

from the total number of balloons

to find the remaining number of balloons.

So to really understand the stories

we need to combine all these sentences

and understand these sequences

of sentences all together.

We have started this challenge

in 2014 with a few colleagues and since then

it has attracted a lot of interest

in the AI and NLP communities.

And a lot of people are looking at this problem.

So one idea for learning to solve algebra word problems

would be something like this.

What if we directly learn equations from text,

and map text to the equation.

But when we apply this approach on a data set of 5th grade math questions, it fails. It basically gets about 62% accuracy.

Our solution is what if we get closer

to how humans try to solve these problems.

And basically look at all the quantities

that are appearing in these problems

as sets of entities and look at the stories

of how those entities are changed in different states,

in different world states.

So let's look at this example.

Liz gave some of her kittens to John.

We started with some number of quantities

about kittens that was our initial set

and it had one container which was Liz.

And according to this sentence

these quantities are transferred

between two different sets or two different containers.

Now Liz has fewer kittens. John has more kittens.

But not all the numbers that are appearing in equations

strictly follow the order that those numbers

appear in those stories.

Let's look at this example.

On Monday, 375 students went on a trip to the zoo.

All seven buses were filled and four students

had to travel in cars.

How many students were in each bus?

In order to solve this problem

we probably need to first compute this part,

which is multiplying the number of buses

which is seven, to the unknown number

of students that are inside each bus

to find out what is the total number

of students in buses.

And basically using this idea we're going

to represent math word problems

using semantically tied equation trees.

And the idea is that every leaf in those trees shows the quantities and the sets that appear in the problem, and then the intermediate nodes show math operators, that is, how these sets are being combined with each other.

But all those intermediate nodes

are also typed, meaning that they are going to be

of type, for example, students, or money, or something.

Then our problem is reduced to finding the best tree that represents this word problem.
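As a minimal Python sketch (a hypothetical representation, not the paper's exact data structures), here is the bus problem above as an equation tree, evaluated bottom-up:

import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def evaluate(node):
    # Leaves are quantities from the problem; internal nodes apply a
    # math operator to the values of their sub-trees, bottom-up.
    if isinstance(node, (int, float)):
        return node
    op, left, right = node
    return OPS[op](evaluate(left), evaluate(right))

# "375 students, all seven buses were filled, four traveled in cars":
# students per bus = (375 - 4) / 7
tree = ("/", ("-", 375, 4), 7)
print(evaluate(tree))  # 53.0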

The space of equation trees

for a given problem is huge.

In particular, for a problem that includes about six quantities the search space is 1.7 million trees.

But the good news is we can compute the score

of these trees in a bottom-up approach.

For example, we can learn some local scoring functions that score all these sub-trees, multiply all of these sub-tree scores together, and then see how they should be combined with each other according to the global information that we are getting from the problem.

Basically, we reduce the problem of scoring these equation trees to learning a local function that tries to score sub-trees with respect to some parameters, and also a global function that tries to score whole trees to see how these sub-trees should be combined with each other.

Then we learn those functions. To learn the local function we train a multi-class classifier that takes a pair of entities as input and as output returns one of the four math operators: addition, subtraction, multiplication, and division.

And the features that we use to train this classifier are the intertextual relationships between the two entities that we have extracted from text, and we also incorporate the semantics we have extracted for each of those entities.
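A minimal scikit-learn sketch of such a multi-class classifier; the feature columns are invented placeholders for the intertextual and semantic features described here, not the paper's actual feature set:

from sklearn.linear_model import LogisticRegression

# One row per pair of quantities; columns are invented stand-ins,
# e.g. [verbs match, same container, text similarity].
X = [[1, 0, 0.2],
     [0, 1, 0.9],
     [1, 1, 0.5],
     [0, 0, 0.1]]
y = ["+", "-", "*", "/"]  # the four math operators

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([[1, 0, 0.3]]))  # predicted operator for a new pair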

Then in order to compute the global scoring function

we have a discriminative classifier

that tries to score a good tree versus bad trees.

And again, the features we take advantage of are global features extracted from the text.

For inference, we leverage integer linear programming

to generate candidate trees for us

that are consistent according to the types

we are getting from the problem.

For example, we have some type constraints

like this: the type of the left-hand side of the equality should be similar to the type of the right-hand side of the equality.

Our results are promising compared to an approach

that does not use this representation.

We get about 72% accuracy, about 10% gain.

And even more recently some other researchers

have built on our approach

used deep reinforcement learning

on how to do a better job on combining

these sub-trees and they achieved

about 78% accuracy on this test.

These are some problems

that our system is able to solve.

Our system can combine set differences over the numbers of packs, like four, eight, and four, and then multiply them by the number of bouncy balls in each pack.

Or we can form a long range of additions

and subtractions mainly informed

by the verbs that are appearing in the question.

We are still not able to solve problems like this one about Sara, Keith, Benny, and Alyssa, where it's hard to infer that the story is talking about four people.

So in this part I talked about algebra word problems

which is a new challenge in the NLP and AI literature.

I showed how you can reduce learning to solve

algebra word problems to learning

to map text to math operators.

And if I can solve this problem

it's actually a step toward how we can have

an understanding about multiple sentences together.

And basically, try to be able to have

a precise understanding of this type of text

and do a better job in question answering.

Let's push a little further on the reasoning side.

And also let's bring in another modality.

For that we have focused on automatically solving

geometry word problems.

This is much more challenging than the algebra domain because not only do we need to understand the text of the question (most of the challenges I described also hold here), we also need to understand the diagram part of the question and be able to align those: to understand, for example, that secant AB is actually referring to that AB line in the diagram.

So I'm not going to go into the details

how we exactly solve the vision part

or the language part, but just give you an intuition.

I would like to go from the text and the diagram

into some logical representation

that allows me to do complex reasoning.

And obviously learning these representations

directly from data would be very difficult

so what we do is make the representations

a little softer and then extract

all geometry concepts that exist in the problem,

like ABC, line DE, line AC and so on,

and then try to form how they can be related

to each other or basically find what are the possible

geometry relations that exist

between the geometry concepts.

So we have something like ABC the triangle

or line AC and DE are parallel with each other.

You might even have a wrong relation

like AC and AD are parallel with each other.

Then what we like to do is to be able

to score these relations according

to the text that we are observing from the question

and also according to the diagram

that you are observing from the question.

For scoring them according to the text

we follow an idea very similar

to what I just described for the algebra domain.

We would like to form a classifier

or different classifiers, such that

they learn what is the best relation

between two geometry concepts.

And to compute the diagram scores we would like to use standard vision techniques: get some rough estimate of what this diagram looks like, find an accurate representation, and then score these relations according to the diagram.

And then once I have all of these scores, I would like to align the text and diagram scores, do an optimization task, and find the best of those relations.

But this is also an important challenge,

how we align textual and visual data here.

Let me give you some intuition.

You want to find a set of relations

that actually have high score according to the text

and according to the diagram.

Also, we want to cover most of the important facts

that are mentioned in the text and the diagram.

Also, we want it to be coherent

meaning that the relationships

shouldn't conflict or shouldn't contradict each other.

The search space is huge, a combinatorial search space, but the good news is we could form a sub-modular optimization function that allows us to gradually select important relations, so our optimization is efficient, but at the same time we get something that is close to optimal.
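A minimal sketch of greedy selection under such an objective; the coverage objective and candidate relations below are invented for illustration (for monotone submodular objectives, greedy selection carries the classic (1 - 1/e) near-optimality guarantee):

def greedy_select(candidates, objective, budget):
    # Repeatedly add the candidate relation with the largest marginal gain.
    selected = []
    for _ in range(budget):
        gains = [(objective(selected + [c]) - objective(selected), c)
                 for c in candidates if c not in selected]
        if not gains:
            break
        gain, best = max(gains, key=lambda g: g[0])
        if gain <= 0:
            break
        selected.append(best)
    return selected

# Toy submodular objective: number of distinct geometry concepts covered.
def coverage(selected):
    return len(set().union(*(c for _, c in selected))) if selected else 0

candidates = [("triangle(ABC)", {"AB", "BC", "AC"}),
              ("parallel(AC,DE)", {"AC", "DE"}),
              ("secant(AB)", {"AB"})]
print(greedy_select(candidates, coverage, budget=2))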

Then we have mapped the question

in the form of text and diagram

to some logical representation.

Now I will bring in my knowledge

about geometry which are some theorems and axioms

that appear in the geometry domain.

I will do reasoning and then finally

answer the question.

Our results are promising.

Basically, we show that we can achieve

about 52% accuracy on automatically solving

geometry word problems and we have achieved

significant gains compared to just a rule-based system

or when we only look at the text or the diagram.

And again it's very interesting

to see that our system is able to beat

a student average in automatically solving

these SAT word problems.

That was kind of exciting

and there was a New York Times article

featuring this work.

So in this part of the talk

I mainly focus on symbolic representation

for complex reasoning and I introduce

two new challenges,

one on automatically solving algebra word problems,

the other on automatically solving geometry word problems.

I showed that these intermediate representations matter, but the main idea was how you can relate concepts in the math domain or in the geometry domain.

But in order to form these relationships

and classify those we actually require knowledge

about those basic operators,

either in geometry domain or in the math domain.

An important question to ask is how we can generalize this to more complex domains.

And that's actually the focus of my future directions.

I would like to design AI systems

that can have rich understanding

and can do complex reasoning on a wide range

of multi-modal inputs including textual or visual data.

There are a few components that I need to build

in order to make these AI systems achievable.

One is trying to collect and acquire knowledge.

Some parts of knowledge are given to us explicitly

but a lot of pieces of knowledge are hidden.

How can we acquire that knowledge so that we can do a better job in reasoning?

Another important direction I would like to pursue

is how we can leverage the benefits of symbolic representations and neural representations and cover a wide range of inputs while we can do complex reasoning.

And also, in order to make these systems really applicable, I want to design scalable algorithms.

And finally I would like to take these

into new applications for example

in tutoring applications.

Some ideas of how we can collect knowledge.

A lot of important information is hidden.

For example about entity attributes.

We want to collect information, for example,

about object sizes but we might be able

to capture those just by looking

at how they co-occur with each other.

Usually dogs are bigger than cats and so on,

by looking at multi-modal data.

Or how can we collect knowledge

about events and their structure

by looking at their temporal information

that we get from, again, a large collection

of multi-modal data.

Then if we have these type of knowledge extracted,

how can we incorporate those into our system.

For example, we can add that to our modular representations and have some algorithms for aggregation and reasoning as new representations and new knowledge resources come in.

It is also very important to have a scalable algorithm

to understand different types of inputs

because the input to the problem can be a paragraph

and it can go all the way to the World Wide Web, right?

How can we design algorithms that can understand

a wide range of inputs and be scalable.

One important direction I want to pursue is to borrow ideas from information retrieval and try to hash candidate answers that might be the answer to the question, and then use those in, for example, search engines and so on.

But at the same time we want to have

a deeper and richer understanding.

Another important direction is how can we read text faster.

We have some preliminary results already,

which is on incorporating ideas

from human speed reading and designing neural speed-reading approaches.

And so far with our preliminary results

we show that we can get almost the same accuracy but three times faster.

I would like to incorporate all of these

in, for example, tutoring and education applications.

This tutoring system is required

to have two important components

for automatically solving problems

or generating problems and then interacting

with the students.

For one part, the system needs to know the mistakes that the student is making and explain to them how to solve those.

But in the other direction the system can work

as a study buddy with the student

and try to actually acquire knowledge

from the student and then help them

to understand the problems better.

We have already done some preliminary work

on generating word problems and collecting knowledge

from students.

And then I'm going to get to this part

of my future work which I think is very important

and the idea is how can we leverage

both of these representations and make them

closer together such that we have

more complex reasoning but at the same time

we can cover a wide range of inputs.

One potential idea that I had is

to design a network that can learn

different reasoning operators, in particular

we can have something like this

that can take the world state,

the current fact that we are observing,

and a question, update the world state

and then reduce the query.

We basically want to borrow ideas

from logical reasoning literature

reduce the query to something

that is simpler to answer.

Let me show you an example.

My current state is Daniel is holding the apple.

Then the observation is that Daniel journeyed

to the garden.

The question is asking 'where is the apple?'

I want to update the world state.

Now I know Daniel is holding the apple.

Daniel is in the garden but the important part

of the world state that I'd like to focus on

is actually where is Daniel.

This is the important part that okay,

I want to simplify the query.

And given, for example, a story and a question like "where is the apple?", if we need information about how the apple passes between different people, I would be able to apply my network repeatedly and each time try to answer a simpler query.

The first one would be where is Daniel.

The second one is still where is Daniel, and so on.

And we can even stack these different layers together

and have more complex reasoning.
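A toy sketch of that idea on the apple example (a hand-written illustration of the intended behavior, not the proposed neural network):

# World state: who holds what, and where people are.
state = {"holds": {"Daniel": "apple"}, "location": {}}

# Observation: "Daniel journeyed to the garden."
state["location"]["Daniel"] = "garden"

def where_is(item):
    # Reduce "where is the apple?" to the simpler query
    # "where is Daniel?", since Daniel is holding the apple.
    holder = next(p for p, held in state["holds"].items() if held == item)
    return state["location"][holder]

print(where_is("apple"))  # garden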

To summarize, I introduced two different methods using neural and symbolic approaches. Both of them try to go beyond pattern matching and to achieve a rich understanding of the input while being able to do complex reasoning.

I showed that the neural model worked well

with different types of input.

The symbolic representations can do

great complex reasoning.

And in future I would like to integrate

both directions to leverage the benefits

of both systems.

Thank you. (applause)

- [Noah Smith] We have time for questions.

- [Dan] This is an outsider's curiosity

question, but on the SATs sort of the algorithm

versus the human, do you have a sense

of which style of question answering

more benefits from there being multiple choice,

or if they even leverage multiple choice

in the same way or a totally different way?

- So actually we didn't leverage multiple choices

in our setup.

And I think if we did we could even get better numbers.

Basically that's how we handled that.

Sorry, I forgot the rest of the question.

So I think if we did leverage the multiple choices

we could even get better numbers

because when we did the reasoning

we translated all the old logical formulas

into some numerical equations and then solved it

through some of the equations.

If we couldn't solve it, then we didn't answer, because we wanted to avoid the negative marking.

But if we had used the different multiple choices, we would have been able to make sure that some of them definitely are not working, and therefore remove some choices and then do fifty-fifty answers, but we didn't do that.

We didn't use any human trick for answering...

- For example, probably less common

in the geometry domain but if you happen

to make a natural language mistake

that a human is very unlikely to make

you may get an answer that isn't

one of the choices and you should just try again.

- Sure, so no, we didn't leverage

the multiple choice, that's a good idea.

And actually like about 30% of our errors

are natural language errors, very good observation.

And some of it is not like

we don't understand the sentence.

The hard part is how you can resolve co-references among different sentences.

Like, for example, if we're talking

about different lines and it said each other

we didn't know which of those two lines...

So these are one category of questions.

And there were 30% of questions that were really complex and required external knowledge, like one saying

a polygon is hidden under a piece of paper.

It is an obvious thing for a child

to know what is happening there

but our system didn't have any idea.

- [Magda] Can you say more about the scalability problem? Because if I have this SAT problem, there are so many practice problems, and they're all going to be so similar; they're all going to apply the same rules and the same patterns. But in the knowledge there are also things that are less common, less frequent, that can be applied. On the one hand scalability will stop being sound, but on the other hand it can capture some of the sense of popular information.

- Sure, absolutely, so Magda is asking my view

about the scalability and the trade off.

So that's absolutely right.

The first thing that I showed you was how we can do faster reading of the input while getting similar accuracy in question answering.

We got good accuracy but right now our number

is about 85%, we got something around 83%.

We thought that is fine, to make 2% mistakes

but at the same time be faster

be able to kind of read the text faster.

So I completely agree there are definitely trade offs.

The same kind of trade-off exists between complex reasoning, which tends to be more specific, and wide coverage, being very generic and general.

So I think it really depends

on the application domain.

But again, one direction that I really like

to pursue is the following.

So right now, when you search something in Google, it will give you

a really quick response because they have probably hashed

a lot of indexes, they can easily find

the relevant document.

But if they really want to have

a good understanding of the meaning

it will be hard, just by looking

at the hash from the document level,

I mean, you know like document level hashing

probably is very high level, right?

What if we can go a little bit more

inside the document and hash different words

in the context.

So for example, if I have something like "Barack Obama was the 44th president in 2009", I want to hash information about Barack Obama once with respect to "44th" and once with respect to "2009".

Now if the question asks me

who was the president in 2009 I can easily do

some dot product between the vectors

that I hashed and also what I've got

in the question.

So I can be very accurate

but at the same time make it much faster.

Especially go from linear time

by reading the text, the whole question

and the text, to kind of log linear time,

if I can be smart in hashing this stuff.
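A minimal sketch of the dot-product idea; the facts and vectors below are invented stand-ins for learned encodings:

import numpy as np

rng = np.random.default_rng(0)
facts = ["Barack Obama was the 44th president",
         "Obama took office in 2009"]
# Pre-computed ("hashed") key vectors, one per fact; at query time we
# only do dot products instead of re-reading all the text.
keys = rng.standard_normal((len(facts), 16))

# Pretend the question "who was the president in 2009?" encodes near fact 1.
query = keys[1] + 0.1 * rng.standard_normal(16)

scores = keys @ query                 # one dot product per fact
print(facts[int(np.argmax(scores))])  # best-matching fact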

- [Sonja] How much interpretability is

important in this domain?

I mean when you give an answer would you

want the users to understand the reasoning?

Are you working on that part?

- So Sonja is asking how interpretability

is important, especially for these neural representations.

Absolutely it is very important.

I think interpretability and explainability

are two topics very relevant to each other

but not exactly the same thing.

I might even be able to explain

some of my rationale about how I decided

to choose this answer but still

my models are not that interpretable.

I highly agree that if they are interpretable

it's much easier for me to explain

but I might be able to get around it.

Without interpretability I can still

explain this stuff.

But I agree this is a very important direction.

I am mainly focused on explainability

than interpretability but I agree

those are both very important.

So for example, one common practice in language is to visualize where the attentions are going, or, for example, project down to a lower dimension and see whether words that are similar to each other end up close together, and if that makes sense.

- [Noah] One quick question if she'll let me.

So you may have already answered this

when you answered Dan's question but imagine

that you could get the best NLP group

in the world to work on one problem

and really move the needle on it

would it be co-reference resolution,

would it be something else?

What would help you move the numbers

on one of these tasks?

- I would say understanding multiple sentences together.

Co-reference resolution is part of it,

but this sequential understanding

I think is important, like... - Some version of discourse.

- Some version of discourse, right.

So for example nowadays we are doing

a really good job on sentence-level understanding

but understanding the whole story together

or how things are connected with each other

that's actually gonna be the first step

towards being able to do multi-hop reasoning

and complex queries, how to connect different things together.

- [Noah Smith] Okay, I think we are out of time

let's thank the speaker again.

(applause) - Thank you.

For more infomation >> Allen School Colloquia: Hannaneh Hajishirzi (UW) - Duration: 1:01:40.

-------------------------------------------

Trump Just Had Every Single One Of - Duration: 11:52.

Trump Just Had Every Single One Of Them Arrested!

The ENTIRE Democratic Party Is FURIOUS!

Illegal immigration is a concern for many Americans.

And the Trump administration has made it their mission to stamp it out as much as possible

and crack down on criminals.

One story that recently surfaced in the conservative media is something that everyone should be

worried about.

During a sting operation to try and rein in a plethora of illegal aliens, over 475

gang members were arrested by law enforcement agents with Immigration and Customs Enforcement

(ICE).

65 were released by an American immigration judge, while merely four were later re-arrested

on criminal charges.

A recent report indicates that 99 MS-13 gang members who came to the United States illegally

were unaccompanied minors.

Sadly, 64 of them, the majority, were granted the status of Special Immigrant Juvenile.

This special designation is a quasi-amnesty program for those who crossed the U.S.-Mexican

border illegally.

Breitbart News reported, "Nearly 100 recently arrested MS-13 gang members arrived in the

United States by crossing through the U.S.-Mexico border as "unaccompanied minors" and then

getting resettled throughout the country by the federal government.

About 475 gang members have been arrested by the Immigration and Customs Enforcement

(ICE) agency's "Operation Matador" sting, with 99 of those gang members arrested having

arrived in the U.S. as "unaccompanied minors."

Of the 99 MS-13 gang members who entered the country as unaccompanied minors, 64 of them

were granted Special Immigrant Juvenile Status (SIJ), which acts as a quasi-amnesty program

for young illegal aliens who cross the southern border.

Of the 475 gang members arrested by ICE in this operation, 65 of them had been allowed

to be released into the U.S. by an immigration judge, while four were re-arrested on criminal

charges after they were released.

Unaccompanied minors who cross the southern border have continued to be resettled across

the U.S. despite a direct correlation of the quasi-amnesty program — known as the Unaccompanied

Minor Children (UAC) program — with the proliferation of the MS-13 gang in regions

of the country like Nassau County and Suffolk County in New York.

Under President Trump's administration, the UAC program has continued.

For example, in Fiscal Year 2018 thus far, nearly 200 unaccompanied minors have been

resettled in Suffolk County, along with almost 280 in Queens County, and more than 115 in

Nassau County, despite the regions' issues with the MS-13 gang.

Miami-Dade County, also struggling with a massive illegal alien population, has had to take

in nearly 400 unaccompanied minors thus far in Fiscal Year 2018, as well as Palm Beach

County, which has had more than 330 unaccompanied minors resettled in the region."

This large sting operation is not the only one that has taken place that led to the arrest

of MS-13 members.

In Maryland, six members of the street gang were indicted by a federal grand jury.

All of the perpetrators were aged 19 to 22 and part of a nine-count indictment.

Their alleged crimes ranged from m****r and racketeering to conspiracy.

The Baltimore Sun reported,

"The latest indictments come roughly two weeks after an MS-13 member from another Maryland

community was convicted in a federal racketeering conspiracy.

Raul Ernesto Landaverde Giron of Silver Spring was found guilty of m****r in aid of racketeering

and faces a mandatory sentence of life in prison.

Following that conviction, U.S. Attorney General Jeff Sessions said Maryland has "suffered

terribly" because of the "uniquely barbaric" gang's criminal activities.

In charges announced Thursday, Juan Carlos Sandoval Rodriguez, 20, is accused of luring

a victim to a park in Annapolis, where he and other alleged MS-13 members and associates

murdered him.

Prosecutors believe the March 2016 k*****g was motivated by a desire to enhance or maintain

rank within the gang or gain status as a member.

In October 2016, four defendants allegedly attempted to m****r two others in Annapolis,

largely by stabbing the victims repeatedly.

Last year, Sessions designated MS-13 as a "priority" for the Department of Justice's

Organized Crime Drug Enforcement Task Force.

That designation directs prosecutors to pursue all legal avenues to target the gang and lets

local police agencies tap into federal money to help pay for gang-related investigations."

MS-13, or the Mara Salvatrucha, is believed by federal prosecutors to have thousands of

members nationwide, primarily immigrants from Central America.

It emerged in the 1980s from a stronghold in Los Angeles.

But its true rise began after members were deported back to El Salvador in the 1990s.

President Donald Trump blames lax U.S. immigration laws for allowing deported members to return

to the U.S.

Federal authorities say the danger posed by the decades-old street gang has been increasing.

During a December stop in Baltimore, Homeland Security Secretary Kirstjen Nielsen described

MS-13 as a "threat to our homeland security.""

Immigration remains a hot-button issue.

While conservatives argue that we need to toughen up on border security, liberals have

argued we need to be more generous with children who were brought to the United States illegally

by their parents when they did not have a choice.

The rise in gang violence by gangs such as MS-13 that are run by illegal immigrants has

pushed this controversial debate to the forefront of news outlets all across the nation, further

dividing people.

Share if you agree that American citizens should not have to live their lives in fear

of street gangs

like MS-13.

For more infomation >> Trump Just Had Every Single One Of - Duration: 11:52.

-------------------------------------------

Uomini e donne, la scelta di Sara: la promessa d'amore di Lorenzo | Wind Zuiden - Duration: 3:28.

For more infomation >> Uomini e donne, la scelta di Sara: la promessa d'amore di Lorenzo | Wind Zuiden - Duration: 3:28.

-------------------------------------------

Need for Speed Shift - How Reduce Lag/Improve Performance and Get More FPS - Duration: 10:05.

Need for Speed™ SHIFT is an award-winning authentic racing game that combines the true

driver's experience with real-world physics, pixel-perfect car models, and a wide range

of authentic race tracks. Need for Speed SHIFT takes players in a different direction to

create a simulation experience that replicates the true feeling of driving high-end performance

cars. While the game is doing all of that perfectly, it isn't delivering a playable

and pleasant gaming experience for everyone. So, today we're going to deliver that last

piece of the puzzle that is missing. Ready? Let's go.

First of all, download the Low Specs Experience from my website and then install it. Start

it from your Desktop shortcut and then go to the optimization catalog tab and select

Need for Speed Shift from this drop-down menu. Now press load the optimization and extract

this package to the folder where your game has been installed. After you did that go

to that folder and start the ragnotech control panel and this window will pop-up.

Now select the method of optimization and resolution you would like to run your game

on. After you did that simply press optimize and start your game.

I'm leaving you now with the rest of this gameplay to enjoy. Please do like and subscribe

if you found this video useful. Dislike it if you feel the complete opposite and I'll

see you guys next time with a whole new video, til' next time, take care and fly safely.

For more infomation >> Need for Speed Shift - How Reduce Lag/Improve Performance and Get More FPS - Duration: 10:05.

-------------------------------------------

AI搭載の「Googleニュース」 iOSアプリがダウンロード可能に (2018年5月16日掲載) - ライブドアニュース - Duration: 10:58.

For more infomation >> AI搭載の「Googleニュース」 iOSアプリがダウンロード可能に (2018年5月16日掲載) - ライブドアニュース - Duration: 10:58.

-------------------------------------------

浜崎あゆみ、LGBTとの深い絆と "家族" に送った涙のエール|BIGLOBEニュース - Duration: 6:39.

For more infomation >> 浜崎あゆみ、LGBTとの深い絆と "家族" に送った涙のエール|BIGLOBEニュース - Duration: 6:39.

-------------------------------------------

Stefano De Martino si 'avvicina' a Belen Rodriguez, ecco perché | Wind Zuiden - Duration: 5:14.

For more infomation >> Stefano De Martino si 'avvicina' a Belen Rodriguez, ecco perché | Wind Zuiden - Duration: 5:14.

-------------------------------------------

『世界仰天ニュース』衝撃映像に軽く炎上も、視聴率好調 人気の秘密は - ライブドアニュース - Duration: 3:19.

For more infomation >> 『世界仰天ニュース』衝撃映像に軽く炎上も、視聴率好調 人気の秘密は - ライブドアニュース - Duration: 3:19.

-------------------------------------------

ドコモが資産運用に参入 「dポイント」投資とロボアドで若者開拓 - ITmedia ビジネスオンライン - Duration: 5:16.

For more infomation >> ドコモが資産運用に参入 「dポイント」投資とロボアドで若者開拓 - ITmedia ビジネスオンライン - Duration: 5:16.

-------------------------------------------

Suzuki Vitara - Duration: 1:07.

For more infomation >> Suzuki Vitara - Duration: 1:07.

-------------------------------------------

Uomini e Donne: Sara sceglie, Mariano 'costretto' ad abbandonare? | Wind Zuiden - Duration: 4:43.

For more infomation >> Uomini e Donne: Sara sceglie, Mariano 'costretto' ad abbandonare? | Wind Zuiden - Duration: 4:43.

-------------------------------------------

강제 성추행 입건 뮤맹 이서원..태연한 SNS 논란. 비밀이 밝혀졌다. - Duration: 4:15.

For more infomation >> 강제 성추행 입건 뮤맹 이서원..태연한 SNS 논란. 비밀이 밝혀졌다. - Duration: 4:15.

-------------------------------------------

ドコモが資産運用に参入 「dポイント」投資とロボアドで若者開拓 - ITmedia ビジネスオンライン - Duration: 5:16.

For more infomation >> ドコモが資産運用に参入 「dポイント」投資とロボアドで若者開拓 - ITmedia ビジネスオンライン - Duration: 5:16.

-------------------------------------------

5 Gründe, warum Sie Krafttraining machen sollten - Duration: 6:42.

For more infomation >> 5 Gründe, warum Sie Krafttraining machen sollten - Duration: 6:42.

-------------------------------------------

Opel Zafira 1.8 16V Ecotec 140pk 111 Edition - Duration: 1:11.

For more infomation >> Opel Zafira 1.8 16V Ecotec 140pk 111 Edition - Duration: 1:11.

-------------------------------------------

'7:2→7:7→8:7' LG, 충격패 위기에서 가까스로 승리 - Duration: 5:08.

For more infomation >> '7:2→7:7→8:7' LG, 충격패 위기에서 가까스로 승리 - Duration: 5:08.

-------------------------------------------

Kira and Jack Look Back

For more infomation >> Kira and Jack Look Back

-------------------------------------------

What Is The Sojourn? - SPACEDOCK ORIGINAL SERIES - Duration: 2:58.

For more infomation >> What Is The Sojourn? - SPACEDOCK ORIGINAL SERIES - Duration: 2:58.

-------------------------------------------

ChandeliALAN - Duration: 0:35.

For more infomation >> ChandeliALAN - Duration: 0:35.

-------------------------------------------

:0215: Budding Update Roses at Front / گلاب پر پیوند کاری کا احوال - Duration: 10:26.

For more infomation >> :0215: Budding Update Roses at Front / گلاب پر پیوند کاری کا احوال - Duration: 10:26.

-------------------------------------------

Is ontharen met hars goed voor je? - Duration: 3:56.

For more infomation >> Is ontharen met hars goed voor je? - Duration: 3:56.

-------------------------------------------

Nininho apresenta moção de pesar pelo falecimento do ex-deputado J. Barreto - Notícias 24/7 - Duration: 2:36.

For more infomation >> Nininho apresenta moção de pesar pelo falecimento do ex-deputado J. Barreto - Notícias 24/7 - Duration: 2:36.

-------------------------------------------

Comment j'ai renoué avec l'ambition // GRAND-MÈRE GRUNGE - Duration: 6:47.

For more infomation >> Comment j'ai renoué avec l'ambition // GRAND-MÈRE GRUNGE - Duration: 6:47.

-------------------------------------------

As arveres... somos nozes?!| Minuto da Terra - Duration: 3:13.

For more infomation >> As arveres... somos nozes?!| Minuto da Terra - Duration: 3:13.

-------------------------------------------

EDGE #1 | THE SCARIEST TRICK EVER ? - Duration: 4:04.

For more infomation >> EDGE #1 | THE SCARIEST TRICK EVER ? - Duration: 4:04.

-------------------------------------------

Maher Zain - Ramadan (Live & Acoustic - New 2018) - Duration: 5:26.

You lift me up high

You spread my wings

And fly me to the sky

I feel so alive

It's like my soul thrives in your light

But how I wish you'd be

Here with me all year around

Ramadan Ramadan Ramadanu ya habib (Ramadan Ramadan Ramadan o beloved)

Ramadan Ramadan laytaka dawman qareeb (Ramadan Ramadan how I wish you were always near)

Love is everywhere

So much peace fills up the air

Ramadan month of the Quran

I feel it inside of me, strengthening my Iman

But how I wish you'd be

Here with me all year around

Ramadan Ramadan Ramadanu ya habib (Ramadan Ramadan Ramadan o beloved)

Ramadan Ramadan laytaka dawman qareeb (Ramadan Ramadan how I wish you were always near)

I just love the way you make me feel

Every time you come around you breathe life into my soul

And I promise that

I'll try throughout the year

To keep your spirit alive

Cause In my heart it never dies

Oh Ramadan!

Ramadan Ramadan Ramadanu ya habib (Ramadan Ramadan Ramadan o beloved)

Ramadan Ramadan laytaka dawman qareeb (Ramadan Ramadan how I wish you were always near)

Ramadan Ramadan Ramadanu ya habib (Ramadan Ramadan Ramadan o beloved)

Ramadan Ramadan laytaka dawman qareeb (Ramadan Ramadan how I wish you were always near)

Laytaka dawman qareeb (How I wish you were always near)

For more infomation >> Maher Zain - Ramadan (Live & Acoustic - New 2018) - Duration: 5:26.

-------------------------------------------

Pachelbel: (not) Just the One-hit Wonder of the Canon in D? - Duration: 14:48.

As much as I like Pachelbel's Canon in D – and many of you probably as well, Johann

Pachelbel was much more than the composer of his Canon alone.

With over 500 works that are assigned to Pachelbel, it's obvious that reducing this composer

to his canon alone is doing him a great injustice.

But there is one important fact on top of that: the huge influence Pachelbel had on

another composer we may know a bit better: Johann Sebastian Bach.

Hello everybody, my name is Wim Winters and welcome to Authentic Sound.

This channel is all about exploring the music from Bach to Beethoven and Beyond, with the

single goal to inspire you on your journey as a musician or as a music lover.

Inspiration occurs when we're touched by something that is close enough to what we

know, but at the same time points us towards unknown directions.

At the end of this video, I'll play for you a Ricercar from Pachelbel's hand, a

piece that I just grabbed from the book shelf, and points strongly to a variety of works

by Bach: you'll hear fragments distantly pointing to his Musical offering, at least

in style, the Art of Fugue even, but also, strikingly in fact, to fragments that Bach

seems just to have copied into another famous work: the canzona in d minor he wrote for

organ.

So, Pachelbel was born in Nuremberg in 1653 and, as a teenager, explored Southern Germany,

where he was surrounded by a rich musical culture, shaped and influenced still by composers

such as Frescobaldi and Gabrieli.

Twenty years old, in 1673, he moved to Vienna, political and musical capital of the Habsburg

empire.

There he met and worked with famous composers such as Kerll and Muffat while studying at

the same time the music of his predecessor Froberger.

In 1677 Pachelbel changed Vienna for...

Eisenach.

He quickly befriended the Bach family, where only 8 years later one of the greatest composers

of all time would see daylight.

Ambrosius Bach, Johann Sebastian's father, was a prominent member of the Bach clan, who

dominated the region in such way that if a musician was needed somewhere, often a 'Bach'

was asked for!

Pachelbel stayed only one year in Eisenach, but the bond between him and Ambrosius must

have been really deep.

He moved to Erfurt in 1678, a place where he would stay for twelve years.

Still, Johann Pachelbel became godfather to Johanna Juditha Bach (Sebastian's sister)

and taught Johann Christoph Bach (Sebastian's elder brother).

Johann Christoph Bach studied with Pachelbel in Erfurt from 1686 to 1689.

When he married in 1694 it is documented that Pachelbel was present on his wedding, for

which occasion he composed the music that he most probably has performed himself.

Chances are good that young Sebastian was present; it was the only time Pachelbel and he ever met.

1694 is an important date for Sebastian, but for a worse reason.

His mother passed away on the 3rd of May of that year.

Soon after, on February 20, 1695, his father, age 50, would follow his wife to the grave.

Johann Christoph, just married and with a first child of barely one year, would take Sebastian

into his house.

It is certain that Johann Sebastian received an excellent musical training from his brother.

It is not difficult either to imagine that much of what Pachelbel had taught Christoph

was passed on to his younger brother.

Concrete information of what exactly young Sebastian studied and played at this time

is lost forever.

However, the music book from another Pachelbel student, Johann Valentin Eckelt, who studied

in the same time as Christoph with Pachelbel, still exists.

In that manuscript, we find pieces from Pachelbel (preludes, fugues, fantasies, capriccios,

suites and chorals) as well as pieces from Froberger, Johann Caspar Kerll, Johann Krieger,

Guillaume Gabriel Nivers, and others.

It's probably a similar book of which the Bach necrology speaks.

We all know the story of young Sebastian, desperately wanting to play from his older

brother's music book, with works by precisely Froberger, Kerll and Pachelbel – the same names

as are in the Eckelt manuscript. Access to that book was forbidden by Christoph, so

young Sebastian secretly went down at night, each night for six long months, copying the

book.

At the end he was caught by Christoph who took it away from him.

Christoph Wolff, in his famous Bach biography, suggests that Christoph might have had nothing

against his brother playing from that manuscript, but copying diminished the value of the book,

for which he himself had to pay while studying with Pachelbel.

Sebastian Bach later destroyed most, if not all of his younger works, but in a manuscript

called the "Neumeister collection", 38 pieces are found of which some belong undoubtedly

to young Sebastian.

38 chorale preludes in total, 25 of them written before 1700.

Their style is closely related to Pachelbel, Christoph and Michael Bach.

It might be a topic for a future video to dive into this.

That manuscript alone shows us the deep influence Pachelbel had on Bach, partly because of the quality

the Nuremberg master displayed in his compositions, partly as well because of the close relationship

Pachelbel had and kept with the Bach family.

That aspect alone should give him a higher musical status than he has today.

Before I leave you with the complete Ricercar as a musical illustration for this video,

we cannot end without mentioning the Hexachordum Apollinis.

Published in 1699, this collection of six arias with variations is to be considered one

of the highlights of Pachelbel's oeuvre for keyboard.

One of the surprising elements here is that Pachelbel dedicated the Hexachordum to two

musicians, Tobias Richter and, a name still really famous today, Dietrich Buxtehude, the

famous organist of the Marienkirche in Lübeck.

Pachelbel expresses in his foreword the wish that his son would one day study with the

two masters, though we don't know whether that ever happened.

But we all know the story of still young and ambitious Sebastian, leaving Arnstadt to stay

a while with the Lübeck master, a stay that turned out to be much longer than allowed.

Bach was just 20 when he would return to Arnstadt.

But it is highly possible, if not certain, that the Hexachordum Apollinis, probably even

a copy of the edition signed by the Nuremberg master, was played and discussed in long winter

evenings that young Bach and old Buxtehude spent around the keyboard.

And as with Mozart, also for a genius like Bach, impressions like these had a deep influence

in the development of his own style.

So yes, Pachelbel was so much more than only the composer of the Canon in D. Our recording

of the Hexachordum Apollinis, the first complete one on clavichord, might be a first step

towards more in the future.

Who knows?

So, I will leave you with my almost improvised recording of the Ricercar, written in my Bärenreiter

edition as an organ piece with pedal, but perfectly playable on keyboard too.

But before I do so, I'd like to thank you all for watching.

If you have questions or want to share some additional thoughts to the topic of this video,

please leave them in the comment section below.

And if you are new here to the channel, hit that subscribe button and then... we'll

see each other soon again! (music)

For more infomation >> Pachelbel: (not) Just the One-hit Wonder of the Canon in D? - Duration: 14:48.

-------------------------------------------

Hornet la frappe de retour avec Bourgeoisie, La Fouine valide - Duration: 1:46.

For more infomation >> Hornet la frappe de retour avec Bourgeoisie, La Fouine valide - Duration: 1:46.

-------------------------------------------

100名爱豆投票选出的最强舞蹈机器,来自SM和JYP的爱豆并列第一 - Duration: 3:39.

For more infomation >> 100名爱豆投票选出的最强舞蹈机器,来自SM和JYP的爱豆并列第一 - Duration: 3:39.

-------------------------------------------

スポーツで「ゾーン」に入るとどうなる? フェンシング・太田雄貴が、その感覚を語る - Duration: 10:00.

For more infomation >> スポーツで「ゾーン」に入るとどうなる? フェンシング・太田雄貴が、その感覚を語る - Duration: 10:00.

-------------------------------------------

「顔は内臓の鏡」美容家・水井真理子、マリエの肌を見て… - Duration: 3:44.

For more infomation >> 「顔は内臓の鏡」美容家・水井真理子、マリエの肌を見て… - Duration: 3:44.

-------------------------------------------

DIY: CUSTOMISER UN T-SHIRT AVEC DU WAX [ AFRICAN PRINT MAP ON T-SHIRT] - Duration: 6:54.

For more infomation >> DIY: CUSTOMISER UN T-SHIRT AVEC DU WAX [ AFRICAN PRINT MAP ON T-SHIRT] - Duration: 6:54.

-------------------------------------------

Galaxy S9 et S9+ : Samsung lance deux nouveaux coloris, Rouge et Or - Duration: 3:24.

For more infomation >> Galaxy S9 et S9+ : Samsung lance deux nouveaux coloris, Rouge et Or - Duration: 3:24.

-------------------------------------------

Fake-Po fast explodiert! Kim K.-Double stürzt von Kamel - Duration: 1:06.

For more infomation >> Fake-Po fast explodiert! Kim K.-Double stürzt von Kamel - Duration: 1:06.

-------------------------------------------

Mercedes-Benz C-Klasse Estate 180 K 157 Pk BlueEfficiency ECC/Full Map Navi/16" LMV/PDC V+A/Stoelver - Duration: 1:08.

For more infomation >> Mercedes-Benz C-Klasse Estate 180 K 157 Pk BlueEfficiency ECC/Full Map Navi/16" LMV/PDC V+A/Stoelver - Duration: 1:08.

-------------------------------------------

热血街舞团车轮战冯正pk啊k,谁会赢?让我们拭目以待 - Duration: 3:30.

For more infomation >> 热血街舞团车轮战冯正pk啊k,谁会赢?让我们拭目以待 - Duration: 3:30.

-------------------------------------------

Taschentuch-Alarm: Ab Herbst gibt's 3. "This Is Us"-Staffel - Duration: 0:50.

For more infomation >> Taschentuch-Alarm: Ab Herbst gibt's 3. "This Is Us"-Staffel - Duration: 0:50.

-------------------------------------------

AT 7 AUTO-LOADING MADNESS! - Duration: 13:00.

now it's your last chance to get your hands on our limited edition t-shirt.

before the campaign ends link in the description

I don't ah I'm gonna save you such stir it this tank is serious Louis

t95 it is but without the armor and the gun right lay it up wicked scorpion he's

a one shot hello artery oh I think I'm for it yeah

strongarm er hello

oh come on well that was a good game and there's no

lying that is serviced hello challenger say hello to my minigun

hello guys mmm these tech sucks finish em can't end s you anywhere this

penetration is not very good let's get to pushing oh god I got a t95 guys I'm

I'm out I'm a-gonna flanked it no c95 or

perchick till I fight pushing till nightfall pushing okay I'll go for the C

95-87 autoloader vs. t95 Oh No let's watch I'm gonna wreck this

t95 I'm gonna blows tracks off and snipe is Coppola come on it is Coppola shred

there you go is screen I gotta kill the t95 help help reduce okay I coming I'm

coming come on you know seven minutes and I'll be there

stinky fest why is playing so bad thanks so much fun here they come

uh-huh

victim switch ghetto late weights yeah forget though get him thank you yes

we crashed what a scrub lightweight no lightweight finishing this gun is

terrible and I've gotten permission to engage

four teams no no gun that gun the elevation here we go low plate yeah okay

site here we go yes yes he's coming my way yes perfect the moment have been

waiting for yes hello hello sir time to make fun of it quit me don't run don't

run he reverses faster than I go forward now you get it oh no two nights

hey let's kill this grub it's red oh Christ

they go screw come on get us watch get up yeah

ah this tank sucks I'm reloading okay I'm gonna go wrong I'm gonna flank him

keep it in place watch keep any place yes he decided start side of a turret

what no I got an even better idea we're gonna rush the bridge okay for the

memes oh that's 51 please don't shoot us just like having such a big gun and yes

okay let's go let's do it oh there's one target fire Oh Mike well

they know we're coming yeah I think yeah I can take these guys

to the left guess it alone let's do it unload don't stop pushing

nice watch yes Mike yes yes reload keep pushing keep pushing

reload okay pass sir I'm gonna get him steel steel don't push me no challenge

into ass never mind I'm gonna get the pants forget to tell in here so what'd

you get the challenger behind me dr. Challa Tristan keep going alright

alright reload

man the team is probably freaking out now game metal hamburgers come here

I'm Armas hit come on what show you with me Oh amen let's get him let's just go

and get him he bounced Oh jet ski oh come on get him get him

no let the government it he goes again I'll get the light tank yep sure go

ahead how do you buddy oh this Amelie stock okay how does it taste

yes Oh God it's amazing so much I want this much making every day every time

hello yes George there was a good game you only had to beat up chair facing

Tomatoes here like thanks 200 damage Oh back to tier 9 match making it's like

nine nine fun as long as it lasted she said I bounced him boys being cheeky

or finishing finishing me I need to hear like primer tires alright alright don't

push or whatever is doing that wait what push the mountain now why did

I reload GG we gotta kick me three with us nice nice nice

yes yes yes I decide we can pet him yes yes oh yes OSU hello su you're

completely red no and third oh Christ fucking fingers can I go for the reload

and try Rush Rhees su this thank you so slow so you can reload why you rush

stuff you sir are demobilized come on kill him come on come on no gasps ah

whoa no 2/9 no tonight on - oof

nice watch okay cut away oh no it has to be K okay I better side a 3150 let's go

hello sir nice hmmm nice huh

this tank sucks let's get to BK let's go get him hello artillery please stop

shooting me that would be greatly appreciated

oh you got the VK and then lo it's true come on T 43 are you like mouths a

loader right here since you for three

please get him

hey come here m12 you'll die yes that's good oh hello Panthers no

don't shoot me yeah you bounced yeah let's go get it

let's go get him

it's the depression is depressing oh come on get him let's let's get us watch

come on yes perfect old crib a mix yeah trying to get him out forget him nice

nice nice come on get em finish it riddim no no don't have to come

information hello thought the governor Asia I can eat the a mix yes okay I'm

gonna die I'm gonna dive dive dive oh this tank sucks

he's Coppola is red good luck tiger

I don't ah I'm gonna save you so let's do it yes I could penny swatch

this is the moment is that the Normandy landing come on swatch you can do it

come on come on come on lover want to lower plates you can get him

finally this hello guys that's a lot of Tanks kV spamming gold

get him all right

all right permission to engage this horrible

machine get the Hawaii now let's get the Kiwi ponies Coppola ah very nice oh

hello got the butter back everyone I mean when this thing gets to feast on

things you can actually penetrate it's like broken yeah get this you primary

targets yeah he's firing high-explosive with the one 120 millimeter gun that

explains to 44 percent win rate ready to fire it's gonna be awesome

come here yes let's get this to do it no

no you miss them is grub we hope you enjoyed this video now go to the

comments and tell us what we should play next and don't forget to leave a like

later folks

For more infomation >> AT 7 AUTO-LOADING MADNESS! - Duration: 13:00.

-------------------------------------------

Grande Fratello, Favoloso espulso per la frase sulla t-shirt: gravissima - Duration: 4:07.

For more infomation >> Grande Fratello, Favoloso espulso per la frase sulla t-shirt: gravissima - Duration: 4:07.

-------------------------------------------

Uomini e Donne, Gemma Galgani pensa al matrimonio? Ecco il commento della dama - Duration: 4:17.

For more infomation >> Uomini e Donne, Gemma Galgani pensa al matrimonio? Ecco il commento della dama - Duration: 4:17.

-------------------------------------------

NBA: Netflix annuncia nuova serie su Michael Jordan e i Chicago Bulls - Duration: 4:18.

For more infomation >> NBA: Netflix annuncia nuova serie su Michael Jordan e i Chicago Bulls - Duration: 4:18.

-------------------------------------------

Regular Show - Death Metal Crash Pit (CZ) - Duration: 1:26.

One two Three!

♪ You saw my cat walking along the sidewalk, ♪

♪ you said "hey!" ♪

♪ It's a cat, or a rat perhaps, or a bird or a dragon ♪

♪ fish, cancer, goat perhaps you saw? ♪

Oh, I saw the goat,

but at last throw your caravan into the pit.

♪ I'll destroy what I touch ... ♪

Help, that song is horror!

Please, let me go.

♪ I know the goat, I have it, ♪

♪ I say "I'll give you". ♪

(♪ ... I have a goat myself, I eat. ♪)

They will not open!

♪ I'll destroy what I touch! ♪

♪ I'll destroy what I touch! ♪

And what our ...

For more infomation >> Regular Show - Death Metal Crash Pit (CZ) - Duration: 1:26.

-------------------------------------------

Svět v objektivu | Už Vím - Duration: 2:28.

For more infomation >> Svět v objektivu | Už Vím - Duration: 2:28.

-------------------------------------------

Inspired makeup tutorial: LOONA Yves - new | Albaricoque Y Nueces - Duration: 6:36.

Hey guys! So as you see, today I bring you this Yves inspired makeup look from her solo "new"

First I prep my lids with this eye primer

Then I'll fill in my brows with a matte brown shadow from this palette; I've been using this a lot lately

And conceal the bottom part of my brows with this cream concealer

It's not going to look this white all the time!

Because now I cover my lids with this concealer that matches my skin tone to create a canvas for the eyeshadows

Ok, time to start with the colors

I take this soft orange to be the base for the other colors

With a blending brush, I blend everything

And to make it very shimmery, I will use this orange-toned golden highlighter generously

Really close to the lash line, I smoke some shimmer brown with a fluffy angled brush

To create a thin line that will act as a base, I use a pencil brush and this dark matte brown

So I do my face with this bb cream

With this loose powder, I set the bottom part of my brows so that concealer won't slide

To conceal my under eyes, I use some of that bright creamy concealer that I used under my brows

And set everything with loose powder, pressing the brush to avoid creasing

From this highlighter palette, I take the cream highlighter and create a base for the powder highlighter to stick in

Here I press that same golden highlighter

To soften everything, I use this smaller blending brush

With that dark matte brown, create the typical outer "v" shape, but very subtle and thin

Time for eyeliner!

This is more of a straight line comparing to the droopy ones

So, I make it thin, long and straight, a tiny bit upwards

To boost my lashes, I will use a piece cut in half on the outer part of my eyes

Before, I apply some coats of mascara

With the help of tweezers, I apply the lashes

I like this dolly effect, especially when you don't want to wear a full band pair of lashes

Yves has natural aegyo sal (puffy eyes)

So I will fake it by creating a soft shadow under my eye bags

Finalize the lashes by applying mascara to bottom lashes

And we're done with the eyes!

Moving on to the face

I apply powder to the rest of my face, but not too much since I don't like that dry matte look

I concentrate on my T zone, since it's more oily

I tested this blush for the first time and I liked it

I was a little nervous that it was going to be very intense, but I took off the excess

That's how I take off the excess!

I take the golden highlighter from this palette and apply a VERY small amount

We want a soft glow for this look

For the lips, I mix two colors

First, a coral nude shade as the base

And on top of it, this coral pink, tapping it with my ring finger

This is not a gradient lip, so definitely apply all over the lips in a thin, soft coat

And we're finished!

I even tried to dress in a similar way to mimic Yves' look, and I liked the overall results!

I hope you liked this video! If you did, please give me a thumbs up, it helps me a lot!

And don't forget to support Yves and LOONA! The girls are almost debuting as a full group!

Here I will leave some other videos for you guys to check out!

If you like my content, subscribe to my channel and hit the notification bell to never miss a video!

I hope to see you in my next videos! Bye!
