>> WENDY: Good day, everyone, and welcome to today's webinar titled "Treatment Targets,
Target Engagement, and Target Populations in Mental Health Services Research to Improve
Public Health: Examples from the Field."
At this time, all participants are in a listen-only mode.
If you would like to submit a topic-related or technical question, you may do so at any time via the
Q and A pod located at the bottom of your screen.
Please note, this webinar is being recorded.
I will be standing by if you should need any assistance.
It is now my pleasure to turn the conference over to Makeda Williams.
>> MAKEDA WILLIAMS: Thank you so much for the logistics, Wendy.
Good morning, good afternoon.
My name is Dr. Makeda Williams.
I'm from the National Institute of Mental Health Office for Research on Disparities
and Global Mental Health.
I'm pleased to welcome you to today's webinar titled "Treatment Targets, Target Engagement,
and Target Populations in Mental Health Services Research to Improve Public Health: Examples
from the Field."
This is the second in a series of four webinars in our 2017 webinar series focused on global
mental health issues.
Please note that this webinar is being recorded, and it will be posted on our website.
It now gives me great pleasure to introduce to you the speakers for today.
Their biosketches are located on the webinar website, but I just want to give you a brief
overview for each of the speakers.
Dr. Mary McKay is the Neidorff Family and Centene Corporation Dean of the Brown School
at Washington University in St. Louis, St. Louis, Missouri.
She has significant experience in child mental health services and implementation research
methods as well as 20 years of experience conducting HIV prevention and care-oriented
studies.
Dr. McKay is a principal investigator for one of NIMH's research partnerships for scaling
up mental health interventions in low- and middle-income countries, titled "Strengthening
Mental Health and Research Training in Africa," otherwise known as SMART Africa.
SMART Africa is a transdisciplinary collaborative partnership to engage stakeholders from academia,
government, nongovernmental organizations, and local communities in Uganda, Kenya, Ghana,
and South Africa in addressing child mental health burden, evidence-based intervention
implementation, scale-up, and service gaps.
Prior to joining the Brown School, she was the McSilver Professor of Social Work and
the Inaugural Director of the McSilver Institute for Poverty Policy and Research at New York
University's Silver School of Social Work.
Our next speaker is Dr. Mary Acri.
She is a senior research scientist at the McSilver Institute for Poverty Policy and
Research, and research assistant professor at New York University School of Medicine's
Department of Child and Adolescent Psychiatry.
She is also faculty within the Center for Implementation and Dissemination Science in
States for Children and is an adjunct faculty member at the Silver School of Social Work
at New York University.
With a background in clinical social work, Dr. Acri's research interests center on developing
and testing interventions to enhance the detection and treatment of mental health problems among
children and their caregivers.
Our next speaker is Dr. Ozge Sensoy Bahar, and she is a research assistant professor
at the Brown School, Washington University in St. Louis, St. Louis, Missouri, with research
interests in child and family well-being in global contexts characterized by poverty and
associated stressors.
Her current research program focuses on youth experiences of child work and labor, as well
as the individual, family, and contextual factors leading to child labor in two country
contexts.
Those would be Turkey and Ghana.
The goal of her work is to develop culturally and contextually relevant interventions to
reduce risk factors associated with child labor.
Dr. Sensoy Bahar serves as a lead to the SMART Africa Center.
Our next speaker is Dr. Denise Pintello, currently serving as Chief of the Child and Adolescent
Services Research Program and also as Acting Chief of the Dissemination and Implementation
Research Program at the NIMH.
Before joining NIMH she served as a special assistant to the Director and the Deputy Director
of the National Institute on Drug Abuse, otherwise known as NIDA, for 11 years, and she oversaw
the implementation of innovative scientific initiatives and special research dissemination
projects.
As a social worker for more than two decades, Dr. Pintello worked extensively in child welfare,
mental health, and substance abuse and provided clinical, case management, and supervisory
services for over 1,500 children and adults.
She has also conducted research studies within the fields of child welfare, domestic violence,
juvenile justice, mental health, and substance abuse.
Last but not least is our final speaker, Dr. Michael Freed.
He is the NIMH Services Research and Clinical Epidemiology Branch Chief.
Under his guidance, the branch helps set Institute research priorities, develop funding initiatives,
and administer a public health-oriented research portfolio to increase access, continuity,
quality, equity, efficiency, and value of mental health services to those in need.
The branch also manages the dissemination and implementation research portfolio for
the Institute.
Dr. Freed joined NIMH from the Department of Defense where, as the research director
of an interdisciplinary team of researchers, clinicians, and support staff, he worked to
transform behavioral health care across the military health system.
He served in principal and co-investigator roles on several key epidemiological and health
services research studies, as well as clinical trials.
I'm delighted to welcome all of our speakers today to talk to us about mental health services
research, and I will now turn it over to Dr. Michael Freed.
>> MICHAEL FREED: Perfect.
Thank you, Makeda, for that wonderful introduction.
When we think about the perfect intervention package, we want to make sure that evidence
supports whether and how the intervention is effective or not.
Oftentimes what we hear as program officers from researchers curious about conducting
a clinical trial is, "Hey, I'm using an untested intervention and wondering if it works."
Or, "I want to develop an intervention and ensure that it works in my setting."
The question that is the catalyst for this webinar is really what if it does not work?
Can we look under the hood and really examine why it doesn't work?
On this slide you see, when we talk about the intervention package, what you see is
a present.
A wrapped present.
There are lots of ways to think about whether an intervention works, and we think long and
hard about this.
There are different frameworks, and RE-AIM is one of those frameworks.
For example, we think about intervention Reach, and that's the percentage and representativeness
of the target population included.
We think about Effectiveness, which is does it work?
Does the intervention work on primary outcomes?
We think about Adoption: How representative are the settings and intervention staff of what's
really happening in the real world?
We think about Implementation: Is the intervention delivered per protocol?
We also think about Maintenance, which is will the intervention be sustained with fidelity
following completion of the study?
Another way to think about this is something called the PRECIS-2, which is really asking
the question, how pragmatic is the intervention?
How well does the intervention fit into the service setting, whether it works or not?
It really still has to be the right fit for the setting.
We can ask questions about the tradeoff of how pragmatic a trial might be and its generalizability,
and even maybe who judges how pragmatic a trial is.
Finally, when we think about whether an intervention works, we might think about this through an
implementation science lens.
How do we value the importance of effectiveness, like clinical data, when designing the study?
Perhaps there is robust evidence that the intervention improves ... that there's enough
evidence that the intervention improves clinical outcomes, and the research question really
is about which implementation strategy is best, which is how do we best implement the
best practice?
Moving even a step forward, perhaps there's already enough research evidence that we can
move straight to implementation.
In 2014, NIMH developed new requirements for clinical trials.
These requirements essentially codified what good intervention developers were already
doing, and that's really to understand mechanism.
The term that's used is target, and this differs from what we think of as a target population
or a target outcome.
Here when we think about a target, it's a factor that an intervention is intended to
modify based on a hypothesis that modification of that factor will result in improvement
of symptoms, behavior, or functional outcomes.
This might be easy to think about in terms of drug development or even psychotherapy
development.
Some examples from drug development might be molecular processes, synaptic- and circuit-level
networks.
For psychotherapies, we might think about cognitive or emotional processes, but this
can be applied for services interventions too.
We might think about provider behavior, decision-making processes, or organizational policies.
But again, these are examples.
I also just wanted to distinguish between moderators and mediators.
This is an important distinction.
Both are very important in any type of research study, but examining moderators alone is going
to be insufficient to meet requirements for clinical trial responsiveness.
When we think about a moderator, this is a categorical, ordinal, or continuous variable that
affects the direction and strength of the relationship between an independent or predictor
variable and the dependent or criterion variable.
Juxtapose that with a mediator, which really is the extent to which the variable accounts
for the relationship between predictor and criterion.
Put more simply, a moderator is going to answer when will certain effects hold, and a mediator,
how or why do such effects occur?
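To make that distinction concrete, here is a minimal, hypothetical sketch in Python. The variable names and simulated data are illustrative only and are not drawn from any study discussed in this webinar; it simply shows how moderation is commonly tested with an interaction term and mediation with a simple product-of-coefficients estimate.

```python
# A minimal, hypothetical sketch (illustrative names, simulated data) of how
# moderation and mediation are commonly tested with ordinary regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),   # predictor, e.g., assigned to an intervention
    "stress": rng.normal(size=n),         # candidate moderator
})
# Simulated mediator and outcome (purely illustrative data-generating process).
df["parenting"] = 0.5 * df["treatment"] + rng.normal(size=n)   # candidate mediator
df["child_symptoms"] = (-0.4 * df["parenting"]
                        - 0.2 * df["treatment"] * df["stress"]
                        + rng.normal(size=n))

# Moderation ("when will certain effects hold?"): does the treatment effect
# change with the moderator?  Tested via the treatment-by-stress interaction.
moderation = smf.ols("child_symptoms ~ treatment * stress", data=df).fit()
print("interaction coefficient:", moderation.params["treatment:stress"])

# Mediation ("how or why do such effects occur?"): does the treatment work
# through the mediator?  Simple product-of-coefficients (a * b) illustration.
a = smf.ols("parenting ~ treatment", data=df).fit().params["treatment"]
b = smf.ols("child_symptoms ~ parenting + treatment", data=df).fit().params["parenting"]
print("indirect effect (a * b):", a * b)
```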
If we move on to thinking about adaptation of the perfect intervention package, we can
recognize that not all interventions and not all treatments will work for everyone, but it's
important to demonstrate a priori that there's an empirical rationale for why we're going
to make the adaptation.
For example, evidence-based practices exist, and one may erroneously determine that they
don't work for a certain group of people, not because they actually don't work, but
because they were not implemented well.
We really need to understand how much changing the packaging matters, versus changing what's
inside the package.
So if we think about just different presents - so there's some pictures here - we can think
about changing the color of the wrapping, the style of the bow, but really what we want
for any type of adaptation study is an empirical rationale for the specific adaptation or augmentation,
but also, we want empirical rationale for the corresponding mechanism by which the adapted
intervention is expected to substantially enhance outcomes.
This gets a little bit more complicated when we think about advancements in methodology,
and we think about what's happening on the ground in actual service settings.
For example, how do we think about experimental therapeutics when we think about multicomponent
interventions?
Or pragmatic studies where adding research measures is challenging.
The more pragmatic a study is, the less research influence is involved.
Think about multilevel interventions.
Those are interventions that happen at the patient/clinic/health system level, or if
you think about school systems, the child/the classroom/the school.
And we think about non-clinical outcomes.
This is true for dissemination and implementation studies.
We want to think about mental health services clinical trial research beginning with the
application in mind, but really now moving to the grant application.
To make it really easy for us as program staff to be able to evaluate the responsiveness
of the application, we strongly encourage researchers to be very, very clear about identifying
the target or targets, outlining a conceptual framework or empirical basis for the proposed
targets, describing the plan to assess target engagement (that is, measuring the target),
and then describing the analytic strategy involved.
This makes it, again, easy for us as program staff but also for reviewers to look at and
evaluate the quality of what you're proposing.
There are multiple mechanisms, grant mechanisms that accept clinical trials, but here are
some broad frameworks.
For an R34, this is really a pilot study focused on refining and optimizing interventions for
use with broader target populations or for use in community practice settings.
We want feasibility of a study, but that's not sufficient for a responsive application.
Studies should be designed to explicitly address whether the intervention engages the mechanism
that is presumed to underlie the intervention's effects.
When we think about an R01, this is a mechanism that supports trials that are statistically
powered to provide a definitive answer regarding the study's intervention effectiveness in
comparison to usual care practices or some alternative group.
Studies should also be designed to address hypotheses regarding predictors and moderators
of effectiveness and questions regarding mechanisms that underlie clinical benefit.
We also have a collaborative R01 which is used when two or more sites are needed.
We have a variety of funding announcements that accept clinical trials, and some that
do not accept clinical trials.
Here is a website that lists all of them, but we would encourage any researcher or potential
researcher to call us or email us.
The program staff are happy to help direct you to the best-fitting funding announcement
for your research idea.
Please note that all NIMH funding announcements accepting clinical trials, and this includes
mental health services and D&I trials, do require applicants to follow the experimental
therapeutics paradigm.
There's really no way out of this if you're wanting to propose a clinical trial, and know
that the definition of clinical trial is pretty inclusive, so if you're proposing a study,
please read that definition carefully.
We're also very interested in research to identify new targets, to identify new methods
to measure targets, and to develop new analytic approaches to assess target engagement and
causality.
We know we don't have all the answers, and this is an important line of research that
we want to encourage, and our branch in particular is interested in research that aligns to NIMH
Strategic Research Priority number 4 from the NIMH strategic plan.
We're really excited today.
We have some great speakers lined up - Doctors Mary McKay, Mary Acri, and Ozge Sensoy Bahar
- to talk about two really important studies, one domestic and one global.
You can see them here.
These are great examples from the field that we're really excited to showcase.
Then finally Dr. Denny Pintello will lead a discussion at the end and field questions.
Thank you, and we certainly want your research ideas, we want your application, and please
contact us if you have any questions.
With that, turn it over to Dr. Mary McKay.
>> MARY MCKAY: Great.
Thanks so much, Mike and Makeda, for the warm words of welcome.
I'm assuming that I'm coming through okay.
I am incredibly grateful for this opportunity to share two examples of studies that we have
currently in the field.
My really talented colleagues, Mary and Ozge, and I are going to try to focus in on the design
of these studies and the methods that go along with them, as we try to look at the targets of
our interventions, the outcomes of our interventions, and the potential factors that influence
the adoption, implementation, and sustainability of interventions, all while trying to go to
scale with some of the interventions for kids and families in our mental health services studies.
I'm going to talk about two studies.
The first is set in New York City.
This is a hybrid effectiveness-implementation study of a family strengthening intervention,
delivered via multiple family groups, that targets kids with serious behavioral difficulties.
Mary and I are going to talk through that New York City study, and then Ozge and I are
going to pick up how do we learn from New York to influence what could potentially help
kids and families in Africa?
And I want to emphasize that there's a bidirectionality in our thinking, that we are also very interested
in what we learn from our African scale-up experiences and how that influences urban
centers like New York City.
We're going to talk about both the work first in the U.S. as well as then go global and
take us to Africa around some of the adaptations that are clearly needing to be made, but also
what can we learn from the U.S. in terms of family strengthening?
There is a tremendous set of colleagues that we have, both in the U.S. as well as global
colleagues, that are part of these research teams.
There are also amazing training opportunities for those with research interests and for beginning professionals,
so I just want to express my sincere gratitude to our outstanding staff, as well as our committed
participants and our colleagues that really have helped us to field, I think, probably
the most ambitious study I've ever been involved with in the U.S.
Let me launch into the New York study first and set it up in terms of the issues that
we're trying to address, and then Mary Acri will take over and talk a little bit about
methods and where we are in the field.
First, our particular research group focuses really exclusively on young people and their
families and communities impacted by poverty.
If you think about addressing children's behavioral success, addressing their serious conduct-related
difficulties, you must take into account the complex circumstances of young people and
their families' lives.
There's tremendous trauma exposure.
There are tremendous social ills that kids and families and communities are confronting that
are associated with poverty, and also, I think equally important, they are showing coping,
resilience, and thriving in the face of fairly significant adversity.
Taking into account both the complexity of young people, families' and communities' lives
as well as their strengths to successfully address child mental health difficulties is
really the perspective that we come from.
Mike talked about targets, and those of us that feel very strongly about testing intervention
packages and services for kids and their families understand well that without a real focus
on engagement and retention, in both our clinical trials and services in general, we're only
going to come away with partial answers to our research questions.
This particular slide has been in my slide deck for a long time, but what's important to note
about these statistics is that there hasn't been nearly as much movement as I would wish, and
I think many of us would wish, around engagement of kids and families that need services with
the service systems that are set up to serve them.
Many of our clinical trials, and certainly our service systems, are still plagued by the
phenomena of no-shows and early dropouts, and what those really mean to us, as researchers
interested in kids and families, are incredible missed opportunities for kids and families
to actually address their needs, particularly early on, and so I'll come back to that point
in just a little bit.
As we think about putting together an application, putting together a service package as Mike
talked about, there are many factors that we need to take into account.
The complex circumstances, the strength and resilience of kids and families as they approach
an intervention like ours, as well as obstacles and complexities that exist at the family
level, at the service system level, at the community level.
This slide just identifies a set of core issues that we often consider as we're putting together
intervention capacities - intervention packages - and these issues must be addressed.
And so service capacity - I'll just highlight that one issue from this particular slide. Part
of why we chose to test a group-delivered intervention had everything to do with the
struggle that our clinic partners have in terms of really having the capacity to see
families, long waiting lists, or capacity to serve kids and families outside of school
hours.
We chose that group-delivered intervention to address some of those issues, and I'll
talk about that in a little bit as well.
I think where we are in the field, and I appreciate either Makeda or Mike saying how many decades
I've been in the field.
It's been a long time.
We have come a long way in terms of creating evidence-based programs, services, that really
can be of substantial assistance to children and their families, can really improve children's
mental health, improve parenting and family supports for children.
And I'm happy to note that this particular intervention that we're testing is on the
National Registry of Evidence-Based [Programs and] Practices [NREPP].
However, despite the unbelievable advancement in our published clinical trials, in our main
psychotherapies, in our medication advances, in our community- and home-based services
that really have strong evidence behind them, we have continually experienced obstacles
to those interventions, those practices being implemented in real-world settings, in public
safety net clinics.
And so that's really where we're spending our time as child mental health scientists.
It's really thinking about how we close the gap between what we know to potentially be helpful
and kids and families actually getting access to those services, taking advantage of them,
and having good outcomes.
We've explained that gap as real disappointment around our challenges in disseminating these
practices.
Our group, and others, have really been trying to take apart where implementation, where
dissemination, where closing the gap goes wrong.
We think about this as a multilevel problem: the alignment between young people and their
families, their needs, and what we put together is potentially off.
There's also been a potential mismatch between the perspectives and skills of what the existing
workforce can deliver and is interested in delivering, and how our intervention packages have
been put together.
Then, because we work almost exclusively in low-resource settings, whether we're talking about
the U.S. or globally, we have to think from the very beginning about the alignment between the
resources, the funders, the payers, the connection with existing policy and procedures, and
what we're proposing.
I think that these have given us some opportunities to adjust the intervention package and to
think about modifying the evidence base to potentially be more successful in terms of
implementation.
Adjusting and correcting for those misalignments, mismatches, really learning from trials like
the two that we're going to put forth for you today, and humbly, what I appreciate about
what Mike said, learning what works, but actually learning from some of our failures is incredibly
important to this ongoing program of research if we're going to improve kids' mental health.
Just some background around the particular space that we're working in, in terms of kids and
families: We are identifying really one of the primary reasons that kids get referred to mental
health services, and these are oppositional defiant disorders, conduct disorders, the serious
behavioral challenges that kids experience that lead them to be unsuccessful at home, at school,
and in the community.
Prevalence varies widely.
In the poverty-impacted communities and service systems that we work in, prevalence can be
really quite high.
Kids are struggling, and their behavior is calling attention to those struggles that
kids have.
We also know from the evidence base that family collaboration, parenting, supportive family
processes, and organized family processes are critical to kids' behavioral success, and yet
poverty disrupts parenting, disrupts family processes, and undermines parents' mental health.
We're really trying to think in a comprehensive way.
If we want kids to be behaviorally successful, how do we support their families?
How do we address adverse circumstances of families?
Our interventions try to address several issues simultaneously: improving children's behavior,
strengthening parenting, understanding the complex situations that families find themselves
in - their stress, their strains that undermine parenting - and then starting to draw also on
evidence-based approaches like parent management training and others that really can be helpful
to parents, but often haven't been tested in extreme poverty circumstances, both here in the
U.S. as well as globally.
How do we modify those approaches, bolster those approaches, so they're likely to more
comprehensively meet kids' and families' needs?
We also place a pretty high priority on the common barriers that families experience.
They can be internal to families, like stigma and stress and real sensitivity around being
told what to do as a parent, and we think about how to address those barriers, as well as other
real access barriers, so that we actually have engagement interventions embedded in the service
package that we're testing both in New York and globally.
In sum, we have put together a study that is now in its third year, that aims to generate
new knowledge, to address some particular areas that we think are in need of new knowledge
in the child mental health services and implementation realm.
We need new knowledge around engagement of low-income families with conduct challenges.
What are some of the intervention packages that can actually, really penetrate the population
of kids and families in need?
We need to really take a look at whether we can design options for safety net clinics that can
involve families and be evidence-informed.
We need to look at not just small tests of effectiveness, but also tests of scalability,
particularly in resource-constrained settings, and we're hoping by these types of studies
to open up a menu of options for policymakers to support the uptake and the integration
of service innovations that have evidence, but that have been used in public systems
so are much more likely to be scalable across state systems, or country systems in the case
of our global work.
What is our intervention package that we are testing and trying to then understand what
works about that package or not?
That's the effectiveness part of our study.
Then what can we learn about, as Mike talked about, the targets within that intervention,
the mediators, as well as some contextual targets that I'll talk about that make implementation
and scalability, sustainability more possible?
Our intervention package is a multiple family group [MFG].
This is a group of families that come together.
The sessions that families work on are guided by a protocol, all meant to strengthen parts
of family life and parenting that have been empirically linked to kids' conduct-related
difficulties, serious conduct-related difficulties, and their behavioral success.
Remember, I talked about service capacity.
These groups are relatively large.
Up to eight families meet together, both children as well as their adult caregivers, and this
multiple family group and the protocol that guides these group sessions are laser focused
on family factors that, when weakened, have been implicated in the onset and maintenance
of childhood DBD [disruptive behavior disorder].
What does that mean?
There's a set of factors within families, parenting, that when weakened really have
been strongly linked to kids' conduct difficulties.
The hypothesis around this work is that if strengthened, kids' behavioral difficulties
should actually improve pretty dramatically.
Our materials are available through NREPP, and we've completed one randomized controlled
trial and effectiveness study in a relatively small number of public clinics that allowed
us to put this forward as an evidence-based practice [EBP].
Mary will talk about how we are no longer in a small number of clinics, but now working across
a New York City service system, attempting both to test the effectiveness and to take a look
at family- and clinic-level targets that lead to effectiveness as well as implementation and
scalability of this particular intervention.
What are our targets for the multiple family group?
What are the family influences that we work on to strengthen within families so that kids
can be behaviorally successful?
This is a table that was created with strong collaboration with parent consumers as well
as our provider partners.
This is actually what we like to think of as the evidence base of what families can do to help
their kids who are struggling behaviorally, distilled into four words that begin with R and two
words that begin with S. Why did we break down these components of our intervention
in this way?
Well, because we wanted the evidence base to be transparent to our parent consumers,
our parent participants in our study, as well as our providers.
We needed to break down that evidence so that you could actually remember what you were
working on session to session, week to week with families, and the empirical rationale
for why we were working on those things.
Rules is the umbrella term for family organization and consistent parenting.
Responsibility relates to parents really providing leadership within a family and children
contributing to the mission of the family.
Relationships relates to family warmth and within-family support.
Respectful communication relates to talking and listening across caregivers and children.
Those four Rs come out of the parent management training literature and have been strongly
linked to kids' behavioral success.
What was less clear in the parent management literature, but clearer to us as you take
apart the clinical trials, the evidence-based programs, is that without also additional
focus on stress and social support, which are highly related to engagement in services
and engagement in trials, as well as your ability to change what you're doing as a parent
in a family - without also adding those two Ss, the evidence base really was less relevant
and potentially less effective with the populations of kids and families that we were working
with.
In terms of our aims, our particular R01 that is in the field right now is meant to examine
the short-term and longitudinal impact of a multiple family group intervention on young
people who are struggling behaviorally and meet criteria for oppositional defiant disorder or
conduct disorder, although many of our kids are comorbid with many other mental
health challenges.
We see that as a replication of our first completed randomized trial of MFGs.
We also have proposed to look at a specific set of mediators at the parent and family level,
and at the impact of changing those mediators, those four Rs and two Ss, on child outcomes.
We also think that clinic and provider-level factors will influence whether we are able
to implement this intervention and sustain it within particular child mental health practices.
We're looking at factors such as readiness to adopt an innovation, motivation.
We're very interested in whether particular clinics and providers can maintain fidelity,
and if they can't, what are some of the explanations for the challenges with that?
That would have been enough, the first three points, but this is a Type 2 hybrid
effectiveness-implementation study, and so we also propose to test an implementation strategy.
We're calling them CITs, clinic implementation teams.
We think that with extra support at the clinic level, we can support providers to overcome
some of the barriers that they might experience and improve the implementation and integration
of a multiple family group within their particular clinic setting.
This is how we think about the first set of four Rs and two Ss, that our multiple family
groups specifically target those influences, and that those should relate to children's
outcomes.
Then again, this is just the picture of how we think about clinic and provider-level moderators
and their influence on whether we're able to impact the family-level variables, the family-level
targets, that we're trying to influence, and whether, if we intervene with a CIT, a clinic
implementation team, and some extra on-the-ground support, there is a difference in those
moderators and how that impacts family success and then ultimately child outcomes.
What are clinic implementation teams?
They've been developed from prior work.
They come out of quality improvement work within New York State.
Then those clinic implementation teams are working with a subset of our providers and
directors.
They're kind of internal champions, and they develop site-specific plans to address any
obstacles that come up with MFG implementation, fidelity, and there's lots of complementary
work that we drew on from Charles Glisson and others.
I'm going to go through one more slide, Mary Acri, around some of our design choices, and
then I'll transition over to you.
We really thought long and hard around what were some of our design options, and chose
a clinical effectiveness part of our study that examines the intervention impact.
We think about that as replication of our first R01.
We drew into our design methods that examine mechanisms of change and variables that moderate
outcomes, and we also drew on implementation trial methods to examine strategies that potentially
help integrate and support fidelity within our particular clinics.
We chose a Type 2, as I said, hybrid effectiveness-implementation research design, which connects both effectiveness
and implementation, and we really do appreciate the support of our program officials, our
national colleagues, as we vetted this type of design within our grant application.
We also worked really closely with our parent consumer collaborators, provider collaborators,
and directors to think through these types of designs, why they potentially would yield us new
knowledge, and whether we were interested in the answers to these questions in our collaborative
working groups.
Mary, I'm going to turn this over to you to talk through the pretty ambitious set of methods
that we both proposed and are now in the middle of actually implementing in this trial,
and then I'll pick it up as you take us through so that we can leave New York City and go
on to the globe.
Mary Acri?
>> MARY ACRI: Great, thank you.
To talk a little bit about the method: From the New York State Office of Mental Health
we received a master list of all licensed child behavioral health clinics that are located
in the five New York City boroughs.
They were randomized to one of the two active intervention arms - the multiple family group
alone or combined with a clinic implementation team condition - versus services as usual.
Again, as Mary McKay had said earlier, MFG is a 16-week group.
They meet weekly, and there are at least two generations per family, so that can include
caregivers, grandparents, siblings, the child.
Optimally, this model is administered by a parent peer and a mental health clinician,
and the way we're defining a parent peer is a caregiver who has had the lived experience
of caring for a child with a mental health problem and has successfully navigated the
mental health service system on behalf of that child and family.
The other condition, active condition, is MFG plus a clinic implementation team, and
typically the CITs are comprised of one mental health provider, a supervisor, and a parent
partner.
What they do is convene and provide site-specific support around the changing aspects of the
process, content, and structure of the MFG model while retaining the core elements of
fidelity.
And then the last condition is services as usual, and those are services that that family
and child would receive normally through accessing the service system.
Our randomization procedure was at the clinic level, and we randomized condition by borough.
We have a total sample of almost 3,000 caregiver-child dyads and providers; of that, exactly
2,688 adult caregivers of a child between 7 and 11 years of age who meets criteria for
oppositional defiant disorder or conduct disorder, and then two service providers per clinic.
That comes to something like 45 clinics per condition, 20 caregiver-child dyads per clinic,
and then the two service providers per clinic.
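As a quick back-of-the-envelope check of how those figures fit together (the 45-clinics-per-condition number is approximate, per the speaker, so these are ballpark totals rather than the study's exact enrollment):

```python
# Rough composition of the sample described above; clinic count is approximate.
conditions = 3               # MFG alone, MFG + CIT, services as usual
clinics_per_condition = 45   # "something like 45"
dyads_per_clinic = 20
providers_per_clinic = 2

clinics = conditions * clinics_per_condition      # ~135 clinics
dyads = clinics * dyads_per_clinic                # ~2,700 dyads (2,688 enrolled)
providers = clinics * providers_per_clinic        # ~270 providers
print(dyads + providers)                          # ~2,970 -- "almost 3,000"
```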
Our inclusion and exclusion criteria for caregivers were that they needed to be at least 18
years of age - we are offering this intervention in Spanish and English - and that they must
be the primary caregiver of a child between 7 and 11 years of age who meets criteria for
a DBD, disruptive behavior disorder.
We are collecting a variety of information from the child, family, caregiver, and then
the organizational processes.
The child measures - we're really targeting symptomatology and functioning, and the caregiver
is the main respondent for that information.
We're also collecting information on caregiver and family processes.
In the literature, there's extensive evidence that depression and stress on the part of the
caregiver are associated with uptake of the parenting principles that these parent management
training programs offer, and that there is a bidirectional relationship between parental
and child mental health.
We're taking a close look at the caregiver's depression and stress over time.
We are focusing specifically on parenting practices, including family communication,
roles, responsibilities - so, those targets that Mary mentioned earlier - as well as family
support.
We're looking at the perceived relevance of treatment and those processes that have emerged
in the literature as being critical for engagement and ongoing use of services as well as outcomes.
Finally, we're looking at provider measures.
We're looking at how the providers view evidence-based practices, and their perception and beliefs
about participating in family-centered care, and their exposure to different training and
supervisory experiences and how that shapes the practice that they engage in.
This study is guided by several hypotheses.
The first hypothesis is that children who participate in MFG are going to evidence significantly
reduced conduct difficulties and increased functioning over time compared to services
as usual.
Again, as Mary mentioned, that's in service of the replication study from the prior R01.
But we are also interested in looking at the MFG-plus-clinic-implementation-team condition,
and we believe that children in that condition will evidence the greatest magnitude
of change in outcomes, so that's both behavior symptomatology and functioning over time.
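The webinar does not spell out the analytic model behind these hypotheses, but one common way to test a time-by-condition hypothesis when children are clustered within clinics is a linear mixed model with random clinic intercepts. The sketch below uses simulated stand-in data and illustrative variable names only; it is an assumption about the general shape of such an analysis, not the study's actual analytic plan.

```python
# Minimal sketch (assumed model form, simulated data) of a clustered, longitudinal
# time-by-condition comparison with random clinic intercepts.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for clinic in range(30):
    condition = ["SAU", "MFG", "MFG_CIT"][clinic % 3]
    clinic_effect = rng.normal(scale=0.3)
    for child in range(10):
        for wave in range(3):   # baseline and two follow-ups
            rows.append({
                "clinic_id": clinic,
                "condition": condition,
                "wave": wave,
                "child_conduct": clinic_effect
                                 + rng.normal()
                                 - 0.3 * wave * (condition != "SAU"),
            })
df = pd.DataFrame(rows)

# The wave-by-condition interaction asks whether change over time differs by arm;
# the random clinic intercept accounts for families clustered within clinics.
# (A fuller model would also include child-level random effects for repeated waves.)
model = smf.mixedlm("child_conduct ~ wave * C(condition, Treatment('SAU'))",
                    data=df, groups=df["clinic_id"])
print(model.fit().summary())
```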
We're also looking again at those four Rs and two Ss and looking for the differential
weight or effect of those variables on outcomes.
Our hypothesis is that stress, relationships, and rules - those will evidence significantly
greater impact on child outcomes over time relative to the remaining Rs and Ss - support,
responsibilities, and respectful communication.
Hypotheses under Aim 3 are looking at those organizational factors that are proposed to
impact sustained use of evidence-based practices in clinic settings.
We're looking at leadership support, and our hypothesis there is that if the leadership
is in support of this innovation being delivered in their clinic, we're going to get significantly
greater impact on implementation and integration in comparison to other factors such as general
readiness to adopt an EBP and the clinic climate, and also that provider motivation is going
to be a powerful variable in terms of impacting MFG implementation and integration, and more
so than other provider-level variables.
Finally, under Aim 4, we really are looking at the organizational factors that we think will
have a greater impact on implementing MFG.
We're looking at specifically the MFG-plus-clinic-implementation-team condition.
We believe that those folks that are in that condition will evidence significantly higher
readiness for innovation and support, as well as preparedness and motivation to implement
the multiple family group model compared to MFG alone, and that this will be maintained
over time.
We also believe that providers assigned to this condition will evidence significantly
enhanced implementation, so fidelity to MFG, and that they will have started and completed
more groups than the MFG condition alone.
In addition to the quantitative data, we really are interested in looking at barriers and
facilitators to uptake and sustained use of evidence-based practices such as MFG, so we
are going to be conducting qualitative interviews with the clinic implementation teams as well
as additional providers in the clinic.
We have interview guides that are going to consist of semistructured questions relating
to their experience implementing and integrating MFG, including barriers and facilitators that
they encountered through this process.
Preliminarily, again, we are in the beginning of year 3 in terms of our rollout of this
project.
We started engaging the clinics in the services-as-usual condition.
We have currently 41 clinics who are enrolled in that condition, and then we rolled out
MFG and MFG-plus-the-clinic-implementation condition simultaneously.
We at this point have 18 clinics participating in the MFG condition and 5 that are enrolled
in the MFG-plus-CIT condition, and that number is likely higher because things are changing every
day.
In terms of our preliminary results, we are seeing that the average child is approximately
8 years of age, predominantly male, and of color.
And we have found that the majority of folks that we are approaching do meet criteria for
ODD or CD based upon the DBD rating scale.
Their caregivers tend to be in their late 30s and primarily the mother, biological mother,
and we're finding that although their depression scores are not as high as we anticipated,
they are experiencing considerable stress, and that their family income means they are impacted
by poverty and living in poverty-impacted communities.
As we move forward with this, we are hopeful that we're going to gain more information
about the children and families that are being served in the public mental health system,
as well as the impact of MFG and this innovative clinic implementation team upon child outcomes,
uptake, and sustained utilization.
>> MARY MCKAY: Mary, thank you so much.
It's Mary McKay again.
I just want to underline a couple things that you said, and then take us to a discussion
about how you can build on a study like the ambitious one that you just described into
other work that will help improve the lives of kids and their families in low-resource
settings.
The first is that this is an overview of our study.
Mary put forward the select measures, the mixed methods that we're trying to employ,
so that we are both using qualitative as well as quantitative methods to come away with
a fuller picture of where we're succeeding and where we're failing.
Also, as we ramp up to enter a full system, penetration of roughly ... Mary's right, things
change pretty quickly.
Even as we put the slides together a month or so ago, as we start to approach clinic penetration
rates of 60% or 70% of a system, we're learning a lot about what influences directors to even
decide to enter our trial and pick up this intervention innovation.
What are their concerns?
I think that we're trying both to concentrate on the factors that really facilitate this work
and to get deep description of the places where there are barriers and failures.
I really give credit to Mary and the entire New York team of staff that are working very hard
across the system, as well as our New York City system partners, because this type of study
in an entire child behavioral health system would not be possible without tremendous buy-in
at all levels from policymakers, funders, and payers, and so we're incredibly grateful.
Just in case, as I go into our work in Africa, you think that this is a study about transporting
what we've learned in the U.S., or particularly in New York, into Africa, I need to tell you
a much more complicated story.
Over the 20 years that Makeda and Mike referred to in my career, half of my research portfolio
has been focused on children's mental health services research, and most of that mental health
services research was primarily set in the U.S.
But the other half of the portfolio has consistently been around kids' mental health and HIV
prevention and care research, not only set in the U.S., but also set first in South Africa and
then in several other countries across the continent.
Part of the prevention work in HIV prevention and the mental health support work that we
were involved in, those interventions were also delivered by multiple family group.
Part of why we brought them back to the U.S. was how well they were received in other global
contexts around kids and their caregivers coming together.
So when we had this opportunity to put together a center that was focused both on child behavioral
health scale-up science, but also building capacity in countries where there were not
necessarily resources for kids, we went to that multiple family group protocol that had
been used in Africa, that had been used in the U.S., that had data behind it across cultural
contexts.
We knew the delivery looked different.
We knew that some of the content was quite a bit different, it was tailored to the context,
but we also knew that it was an acceptable intervention delivery format.
That's a long way of saying that I think about this work as bidirectional, that the global
work learned from the U.S., and the U.S. work definitely leans on what we've learned in
global context.
With that I'm going to introduce very briefly our center that is just a little bit over
a year old.
We call it SMART Africa, Strengthening Mental Health and Research Training Africa Center.
Again, we are focused on the common sets of challenges that children exhibit - conduct-related
problems, disruptive behavior disorders - that can vary in terms of prevalence but put kids on
a serious path toward being unsuccessful at school, at home, and in the community, and that are
really associated with quite adverse outcomes without intervention.
So as we took the opportunity to think about what a center could do to support effective and
scalable solutions, we focused on a subset of African country contexts where we had very strong
in-country investigators.
We also knew that some of the issues that we confronted in the public safety net in
New York were highly prevalent in the country context that we were working in in Africa.
Poverty and stress were issues in the New York City context, in the U.S. context, but they
clearly felt at times like insurmountable obstacles in our country contexts in Africa.
We had to think very seriously about how potentially any evidence-based approach could take into
account the complexity of kids' and families' lives, and we also knew that the evidence
base had not grown up in Africa, in our country partners in Ghana, Uganda, Kenya, and South
Africa.
We had to think very humbly about what we could use from the four Rs and two Ss, from
the multiple family group protocol from the U.S. in SMART Africa.
Here are our specific aims.
The first was to create a platform, a research consortium, that brought together key stakeholders
- very much the same way as we do in our U.S.-based study - scientists but also policymakers,
the service sector in the NGO space, and community and cultural stakeholders, and really to knit
together a group of very, very child-mental-health-interested, family-mental-health-interested
stakeholders in Uganda, Ghana, Kenya, and South Africa, to focus on child mental health burden
in those country contexts, evidence-based intervention implementation, scale-up, and addressing
the very serious service gaps that exist in those country contexts.
Our second goal in this particular center is to build child mental health implementation
research capacity, and our two capacity-building countries and investigators and collaborators
in those countries are Ghana and Kenya.
We also proposed an EBP scale-up study in Uganda - similar to what Mary Acri just laid out -
that tries to understand the multilevel influences on uptake, implementation, effectiveness,
and sustainability of an EBP, and then to really try hard to disseminate this information to
our policy and government collaborators.
We had the real privilege of having our National Institute of Mental Health colleagues, including
Makeda, most recently visit our Uganda site, and one of the visits that we made was a very
unnatural visit for many of us who consider ourselves academics.
We went to Parliament to talk about the implications of the work in SMART Africa and how we can
partner with them to inform child mental health policy and funding across the country.
The same way that our New York State and New York City partners are close colleagues of ours
in the New York City study, that's the partnership that we're trying to form in SMART Africa
right from the get-go.
How do we achieve these aims, particularly around our network?
We engaged in a recent conference where we invested heavily in asking: What's our theory of change?
What are the goals that we have around children's mental health, both symptoms and functioning?
What are the factors that facilitate and impede implementation, considering multilevel obstacles
as well as facilitators in each one of the country contexts that we're working in?
We're also conducting lots of formative work in country.
A systematic scan of capacities, needs, existing infrastructure, so that we can actually understand
the landscape the same way we do in the U.S.
What are the places that we can actually stand upon, and where are the places we're going
to run into trouble?
Step Three is in capacity-building countries: In Ghana and Kenya, they will conduct small,
randomized effectiveness implementation trials, and in Uganda a large-scale hybrid effectiveness
implementation study will be conducted around our EBP, which is an adapted version of the
multiple family group intervention that Mary and I just described.
Continuously we are trying to learn from every step of this process to inform other scale-up
studies in low-resource contexts, both in the U.S. as well as globally.
This is kind of our map of what our SMART Africa Center is trying to do.
We are trying to network and information share, we're trying to provide technical support,
capacity-building activities, but what I'm going to talk about mostly is our scale-up
research study, and then Ozge will talk about capacity building in other country contexts.
If I didn't say this before, I should have.
These studies take a complete village to really field, and so the collaborative methods that
bring people together across sectors, across perspectives, across stakeholder groups, are
very important and underpin both the work that we do in New York State and New York
City as well as the work that we do globally.
As we have already described, we took this evidence-based practice and the method of
delivery, multiple family groups, that had been used previously in Africa, and designed
a study very similar to what you just saw Mary present.
This is a longitudinal, experimental, mixed-method effectiveness implementation research design.
In group one, in condition one, schools are randomized in this particular design so that
our multiple family groups are going to be delivered by trained parent peers, exclusively
parent peers that are trained and supported, and we're drawing those facilitators from
local planning councils.
Why is that important?
Because trained professional child mental health providers do not exist in the settings
that we work with in any type of scale that could field actual services for kids and families.
In condition two, a set of school multiple family groups are going to be delivered by
trained and supervised community health workers drawn from local primary care clinics.
And finally, as a comparison condition, we are knitting together mental health wellness and
educational supports within schools so that every kid and every family in need has access to
some type of support in incredibly low-resource settings.
Here's where we work.
What's both an incredibly important opportunity for a country like Uganda, as well as a huge
challenge, is that kids make up almost 60% of the total population of the country, and yet kids
are incredibly burdened within the Ugandan context by poverty, violence, and a host of other
burdens, which really could compromise the future generation and the future of the country
without seriously meeting the mental health, health, and security needs of kids within the
country context.
We have put together an unbelievable team in Uganda that has done the formative work
that we also did in New York City and New York State about how you think about taking
an evidence-based practice but aligning it to the resources available, the system availability,
as well as the cultural context.
There is lots of work done in large and small groups, where teachers and caregivers and clinic
directors really dig deep into both the four Rs and two Ss of the intervention protocol, as
well as some other prevention programs that have existed within the local community context,
to fuse together the evidence base that exists around how you help kids with conduct
difficulties while keeping the delivery, the format, and the content aligned with community
and cultural values and preferences.
This is an example of taking the four Rs and two Ss, but deeply translating those activities
into an evidence-based approach that families would recognize.
(This is actually in English, but also the intervention materials are all translated
into Luganda and I just don't have those language capacities.)
One of the things that I've learned in working in both Africa and the U.S. is they are very
different contexts with very different preferences and resources and norms.
On the other hand, there are some universals.
Parents really care about their kids and their outcomes.
Stress is a really important factor.
Communication within families is really important, and so you'll see some of the same topics,
which were not rejected by our collaborative working group - as a matter of fact, they were
embraced - but then modified in terms of how you discuss those issues within an African context.
I'm going to stop here and turn this over to Ozge, who's going to talk a little bit about how
it's not only an adaptation process and then a scale-up study that will happen in Uganda;
there's lots of work occurring simultaneously in our other country contexts around building
capacity to do this work, as well as testing an intervention.
Ozge, can I turn that over to you?
>> OZGE SENSOY BAHAR: Thank you, Mary.
As Dr. McKay mentioned, we have two sites, Ghana and Kenya, that are getting ready to
launch their small-scale study using lessons learned from Uganda.
Our Ghana research team is composed of two researchers from the School of Public Health,
two researchers from the Department of Social Work, as well as BasicNeeds as their implementation
partner.
This team has engaged extensively in formative work to get ready for their pilot study.
They had their first stakeholders' meeting in March, bringing together researchers and
practitioners and policymakers.
They have had an onsite follow-up meeting with implementation partner BasicNeeds Ghana
in Tamale to further discuss the details of the partnership.
The research team has completed the review of the adapted manual in Uganda and made some
initial edits in the process of adapting it to the Ghana context, and they have scheduled
visits with BasicNeeds Ghana and community stakeholders in Tamale for initial feedback
on the intervention manual and delivery.
Our colleagues in Kenya, our second capacity-building site, have a research team composed
of three researchers from the Department of Psychiatry at the University of Nairobi.
They are also very busy getting ready for their pilot study.
They had their first stakeholders' meeting also in March, and engaged extensively in
the discussion of the existing mental health policy in the country and the lack of policies
specific to children and adolescents around mental health.
Since then they have continued to engage policymakers.
As a matter of fact, the director of the Mental Health Department within the Ministry of Health
in Kenya attended our conference in early August in Uganda and participated in a panel
that focused on strengthening the dialogue between policymakers and researchers.
The team is currently engaged in forming a working group and working closely with the
Ministry of Health to draft a mental health policy for children and adolescents that can
be incorporated into the country's existing mental health policy.
Capacity building is a very critical component of our SMART Africa Center, and our capacity-building
efforts also include the Global Child Health Fellowship, which aims to support the upcoming
generation of scholars interested in global child and adolescent behavioral health.
We currently have nine fellows, including myself, who come from five different countries,
four of them in Sub-Saharan Africa, and they are all at different stages of their academic
careers.
Each fellow is paired with two mentors.
Since they started, they have been extensively engaged in grant writing, manuscript preparation,
and publication, among other things.
With that, I will turn it to, I believe, Makeda?
>> MARY MCKAY: No, I'll take it one more time to make some summary comments, Ozge.
Thank you.
I want to sum up with gratitude to both Ozge and Mary for co-presenting with me, and I
just want to highlight that our NIH colleagues have been incredibly supportive but have
also really challenged us to be bigger than we thought we could be and more ambitious
than we thought even possible.
I just want to underline that this program of research has had us engage deeply with
policymakers and funders, both in the U.S. and on the African continent, and those are
needed, critically important, but often unforged relationships between academics
and policymakers.
There are incredibly sophisticated and challenging community-based and collaborative participatory
methods that underpin the formative work and that continue to run underneath both of these
ambitious efforts.
On top of that, choosing really innovative, rigorous research designs, taking a mixed-methods
set of approaches, and keeping everybody on these teams and these champions aligned in
trying to achieve the goals of the study has been some of the most rewarding work of my
life, and also some of the most challenging.
I'm grateful for the opportunity and the support from NIMH.
I'm grateful for everybody that works in both of these very large teams.
Denny, I'm happy to have you lead us in some questions, and know that I'm grateful for your
individual support as well.
>> DENISE PINTELLO: Thank you, Mary, and thank you very much to Mary Acri and Dr. Ozge Sensoy
Bahar.
Those were great presentations.
I'm very excited about the work that you are doing at this time, and I do want to express
my appreciation for the opportunity to join in and facilitate this discussion.
During the next few minutes, I would like to do a quick summary of what we've heard
so far about targets and mechanisms of change, facilitate discussion and questions, and
invite our participants to submit any questions that you have.
Then, as we close, I'll tell you about related funding opportunities that I hope some of
you will think about as you consider applying to NIMH; hopefully there's something
in there that speaks to your passion for research.
As you saw, we had a fantastic presentation by Dr. Michael Freed.
He provided an overview of the experimental therapeutics approach, and the gist is that
while it is relatively easy to identify targets and mechanisms of change on the basic
science side, it is much tougher to do in services research interventions on our end of
the continuum.
But what we have found over the last three years is that a lot of researchers have stepped
up and provided great targets: patient- and consumer-level targets, clinician- and
provider-level behavior, and organizational and system-level factors.
The goal is to ask: what are you targeting?
What are you trying to change and manipulate as you design your study?
Those are some of the pieces we've been really encouraged by in taking this approach
over the last four or five years.
As you heard earlier when Mary McKay presented her first study, you saw the slide that
identified the various levels - core family variables, the provider level, and the clinic
level - and the targets in between that they are manipulating.
For instance, at the provider level they are looking at manipulating motivation, preparedness,
and fidelity, and at the clinic or organizational level they are looking at manipulating
readiness, leadership support, and climate through those implementation clinic teams.
That was a nice way to frame it, and I really appreciate you providing that overview.
Then in the next study that Ozge and Mary presented about SMART Africa, looking just
at the capacity-building piece at the provider level, some of the targets they are
manipulating are knowledge, skills, and motivation, using a number of different activities
to test and manipulate them.
These are some really exciting approaches, and we would love to see the results.
I just want to quickly provide another example.
I know we're kind of tight on time here, but a really wonderful D&I (dissemination and
implementation) research example comes from Aaron Lyon, who has an R21 titled "Beliefs and Attitudes
for Successful Implementation in Schools," and he is testing a pre-implementation training
intervention designed to improve the delivery of a school-based mental health evidence-based
treatment.
In his figure here, you can see that he is providing a pre-implementation intervention -
training in motivational enhancement therapy - to manipulate school-based mental health
providers' attitudes, perceived behaviors, and subjective norms, to see whether that
influences their intention to implement.
Then you can see the lineage: does that enhance their ability to deliver the evidence-based
treatment and then, hopefully, enhance mental health outcomes?
Look him up in NIH RePORTER, or feel free to contact Aaron if you have any questions or
you'd like to learn more about his model.
What I'd like to do now is transition to see whether anybody listening to us today - I
think we have nearly 50 folks - has questions.
I'd like to go ahead and kick us off with a question to Dr. Mary McKay, because you've
been working in the field for a while, I believe, and 2014, when our experimental
therapeutics approach was initiated at NIMH, happened to be around the time you
composed your R01.
You were one of the first folks that really put in the targets and mechanisms of change.
I'd love to get a sense of your initial thoughts on the experimental therapeutics
approach at that time, and whether this thinking has changed for you over time.
>> MARY MCKAY: That's a great question.
I think that the particular way the call for proposals was written truly challenged
my colleagues Kimberly Hoagwood and Mary Acri and me to think very specifically about those
mediators and moderators that I presented, and also to think about how on earth we were
actually going to study them.
The call for proposals probably pushed us to propose a study that we thought was actually
bigger than ourselves.
I think - in retrospect I'm very happy we did that.
At the time we got the funding, we were both joyous and terrified, because what we
had proposed was to work across an entire system and attempt to sample from every clinic
in New York City, and Mary Acri and the team are well on their way to either actually
sampling, running groups, and collecting data in every clinic in New York City, or
deeply understanding, from directors' and providers' perspectives, the choice not to participate.
I think that the call really challenged us as investigators to open up the intervention
packages that Mike so nicely illustrated in his graphics and to make hypotheses based on
the limited preliminary data that we had.
For example, Mary put up some of our hypotheses about which of those Rs we think are more
impactful than others.
Those come out of some of our post hoc analyses of our trial data.
I'm not sure I would have taken those steps had it not been that the call really challenged
us.
Denny, is that how you kind of wanted me to respond?
>> DENISE PINTELLO: Absolutely.
I really appreciate that, because it parallels a lot of the feedback we've received from
the field; in 2014 this was a new approach.
It was a different way to frame your research questions and design, so I really do appreciate
your thinking and articulating that for us and the audience.
What I'd like to do is ask another question here, and this would be to Mary Acri and to
Ozge Sensoy Bahar.
As folks that are newer to the field than some others, as up-and-coming researchers,
can you speak to your thoughts about the benefits of testing targets and mechanisms of change
in the field of mental health, as well as the challenges?
>> MARY ACRI: Ozge, do you want to lead?
>> OZGE SENSOY BAHAR: You can take it and I'll follow you.
>> MARY ACRI: Sure.
As a relatively new investigator, I have found this project really revolutionary in terms of
my career, and just on the back of what Mary said, it has been remarkable to see how an
entire service system responds to recruitment requests as well as uptake.
We're leading something like 15 groups in the fall so far, and launching this massive
endeavor on a large scale is just so rewarding.
Also, in terms of the mechanisms of change and looking at specific targets, I feel it
advances the knowledge base as to what the critical or primary ingredients are in why a
particular intervention is effective, and whether, to use the same analogy, some ingredients
are more effective than others.
I think this really moves the field forward, because we want to unpack it - we know that
behavioral parent training programs are effective, but identifying the critical ingredients
within them, and whether specific factors are more effective or more impactful, really is
a new venture.
It's exciting to be part of that.
>> OZGE SENSOY BAHAR: Just to build on what Mary Acri said, I am personally really excited
about the opportunity to see these mechanisms of change and what is similar and what is
different in comparison to the U.S. context.
In Sub-Saharan Africa, because we will be conducting the intervention not only in three
different countries but also in school systems, as opposed to mental health services
systems, I think it will be really fascinating to see what looks similar and what looks
different.
>> DENISE PINTELLO: Thank you very much.
That's great.
We have one question and we have very limited time to respond, but let me go ahead.
We have a question from Kaley Patrick, who asks, "Some calls for proposals push
investigators to apply for and receive an R21 before submitting the R01.
How do you conceptualize an R21 as a building block for an eventual R01 within the context
of building capacity within a low- and middle-income country?"
This would be for the international side of our group today.
Mary or Ozge?
>> MARY MCKAY: Yeah, I can respond but also maybe Makeda or NIH colleagues, if you want
to think about that as well.
This program of research builds on a whole series of pilot studies: R21s, R34s, unfunded
work, and small grants from our university.
You can trace the building, the learning, and the adaptation across a number of earlier
studies.
I think the challenge to investigators currently is that what has to happen in your R21
or your pilot studies in order to apply for one of these mechanisms feels more complex.
You need to know the levers at multiple levels.
You at least need your R21.
Your pilot study needs to uncover those levers and collect some data on what you think
the major feasibility obstacles are, what the major targets are, and what the obstacles
or facilitators in your setting are.
Compared with the narrower theoretical frameworks we might have used in the past for
intervention research, we have to rethink which frameworks we use to guide the early
studies, so that we come away with multilevel data that will inform a more comprehensive
trial covering both effectiveness and implementation.
Am I making any sense, Denny?
Because that's off the top of my head.
>> DENISE PINTELLO: Absolutely.
Then we will follow up and provide NIMH suggestions to the requester of that
information.
Thank you very much, Mary and everyone.
Before we wrap up, I do want to share some information about some new funding opportunity
announcements that I hope this audience will have some interest in, because they also refer
to targets and mechanisms of change.
As Dr. Michael Freed discussed earlier, we do have clinical trial funding opportunity
announcements, and these are three that relate to services research and, of course, hybrid
designs within the D&I (dissemination and implementation) field.
You will find them on our webpage here.
Our branch, the Services Research and Clinical Epidemiology Branch, has issued these five
FOAs within the last three or four months.
We're very excited about these.
You will see that the first two, "Effectiveness Trials for Post-Acute Interventions" and
"Services to Optimize Longer Term Outcomes," also require the study you apply with to have
targets and mechanisms of change, and these are open to foreign entities.
The Mental Health Navigator Model also requires that, if you have a clinical trial,
you identify targets and mechanisms of change.
The last piece doesn't involve clinical trials, so you don't have to identify targets
and mechanisms of change for it.
You'll notice that the first four have a 2018 expiration date.
Please don't worry.
We're planning to reissue them so they will run through 2020, but please contact
me or Mike Freed if you have any questions.
Mike is our contact for the first two FOAs and I'm the contact for the second.
If you have any additional questions, please feel free to contact us.
The last piece that I'm going to say before I turn it over to our organizers is that we
are very proud to convene a large mental health services research conference.
It's our 24th conference.
Last year Dr. McKay was one of the co-chairs of the conference.
Please set aside the dates of August 1st and 2nd.
It will probably be convened here in the Washington, D.C. area, so I just wanted to
let you all know, and we hope to see you there.
Thank you.
>> MAKEDA WILLIAMS: Thank you so much to all of our speakers today for a very informative
webinar on mental health services research, and thank you to Dr. Pintello for closing by
sharing those FOAs and information about the conference in August of next year.
That was very well said.
Hopefully those who are on the call will be able to participate.
I'd also like to thank NIMH leadership, the Bizzell Group, and OneSource for their support
of our webinar series and logistics.
Our next webinar will be held on Tuesday, September 12, 2017, from 9:00 a.m. to 10:30
a.m. U.S. Eastern Time.
The title of this webinar is "Research Capacity Building, Nurturing, and Strengthening Emerging
Scientists."
Please visit the Global Mental Health webinar website for more information.
Now I'll turn it back over to Wendy, our operator, to close out today's webinar.
>> WENDY: Thank you.
This does conclude today's program.
Thank you for your participation.
You may disconnect at any time.