  • What were the clearest takeaways from the articles?
  • What were the most confusing parts of the articles?
  • What did you learn that could have practical usefulness?
  • What new concepts or theories did you encounter?
  • Argue points on which you have a different opinion
  • What are the things that will help engineering?

CHAPTER 19

Translating Research to Widespread
Practice in Engineering Education

Thomas A. Litzinger and Lisa R. Lattuca

Introduction

Governmental, academic, and professional
organizations around the world have
pointed to the need for changes in engineer-
ing education to meet global and national
challenges (see, e.g., Australian Council of
Engineering Deans, 2008; National Academy
of Engineering, 2004; Royal Academy of
Engineering, 2007). Some of these organi-
zations have specifically pointed to the need
for the changes in engineering education to
be based on educational research (Jamieson
& Lohmann, 2009, 2012; National Research
Council [NRC], 2011). In spite of these calls,
researchers are finding that neither the rate
nor the nature of change in engineering
education is keeping pace.

Reidsema, Hadgraft, Cameron, and King
(2011) ask “why has change (in engineer-
ing education in Australia) not proceeded
more rapidly nor manifested itself more
deeply within the curriculum” (p. 345) in
spite of funding from the national govern-
ment and the continuing efforts of engineering
professional societies and the Australian
Council of Engineering Deans? Reidsema et al.
report that interviews of sixteen coordina-
tors of engineering science units at four dif-
ferent universities in Australia revealed that
traditional lecture combined with tutorials
remained the dominant model of instruc-
tion. An in-depth study of the state of engi-
neering education in the United States by
Sheppard, Macatangay, Colby, and Sullivan
(2009) makes the case that “in the midst of
worldwide transformation, undergraduate
engineering programs in the United States
continue to approach problem-solving and
knowledge acquisition in an outdated man-
ner” (Schmidt, 2009, p. 1).

A study of the awareness and adoption
of innovations within U.S. engineering pro-
grams found high awareness, but low adop-
tion. Borrego, Froyd, and Hall (2010) sur-
veyed engineering department heads in the
United States on the use of seven inno-
vations in engineering education, including
student-active pedagogies and curriculum-
based service learning. Awareness of these
two research-based innovations was high,

at approximately 80% of the 197 respon-
dents. Just over 70% reported that student-
active pedagogies were being used in their
program, whereas only 28% indicated ser-
vice learning was being used in their pro-
grams. The use of student-active pedagogies,
at least, would seem to be quite common.
However, when asked what fraction of their
faculty members used student-active peda-
gogies, the department heads indicated that
only about one third were using them.

This state of affairs is not unique to engi-
neering educators or even to educators in
general. As Henderson and Dancy (2009)
have shown, slow adoption of research-
based teaching practices exists in science
education as well. In fact, workshops spon-
sored by the U.S. NRC suggest that these
problems exist for science, technology, engi-
neering, and mathematics (STEM) educa-
tion throughout K–12 and higher education
in the United States (NRC, 2011). Indeed,
writing about K–12 education, Cohen and
Ball (2007, p. 31) note: “We expect innovative
activity at every level of education, but typi-
cally sketchy implementation. . . . and even
when there is broad adoption, to expect
variable, and often weak, use in practice.”
Other fields, such as healthcare (Bero et al.,
1998; Kreuter & Bernhardt, 2009) and social
work (Dearing, 2009; Nutley, Walter, &
Davies, 2009), also report that research-
based practices are not readily taken up by
practitioners.

Fortunately, the literature on change and
diffusion of innovations, as well as on the
use of research-based practices in education
and other fields, provides insights into the
causes of low rates and low quality of adop-
tion as well as strategies for increasing the
chances of successful transfer. Drawing on
this literature, we have attempted to do the
following:

• Identify likely causes of the slow and low-quality adoption of research-based practices.

• Provide summaries of strategies that have been found to be effective at promoting high-quality adoption of research-based practices.

• Discuss opportunities and challenges for further research into the processes of adoption of research-based practices in engineering education.

• Offer an overall summary, in the Final Thoughts section, of key messages for researchers who are developing research-based practices with the goal of widespread use and for leaders of educational change processes.

Before taking up our main discussion, how-
ever, we define what we mean by research-
based practices. We also discuss the use of
research-based practices in engineering edu-
cation to set the context for the remainder
of the discussion.

Research-Based Practices

So what is a “research-based practice?”
Related terms that appear in the literature
are “evidence-based practices” and “innova-
tions.” A recent report on STEM education
published by the NRC of the U.S. National
Academies (2011) uses the term “promis-
ing practices.” We use the term research-
based practice to encompass all of these ele-
ments. We take research-based practices to
be those that have been studied in well-
designed investigations that collect convinc-
ing evidence showing that the practice can
be effective in promoting learning. Quanti-
tative research studies supporting the devel-
opment of research-based practices should
provide reliable and valid evidence that
the practice has a significant and substan-
tial effect on learning. As we shall see
later in the chapter, however, demonstrat-
ing that a new practice has a sizeable, sta-
tistically significant effect is not sufficient.
High-quality adoption of a practice is more
likely when those who adopt the new prac-
tice understand why it works. Therefore, a
research-based practice must also be based
on research that establishes why the prac-
tice is effective. Generally, this research will
be qualitative and will not involve statistical
analysis.
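
Quantitative evidence of this kind is typically summarized with both a significance test and an effect size. As a purely illustrative sketch (the numbers below are invented and are not drawn from any of the studies cited in this chapter), the following Python snippet computes Cohen's d for a hypothetical treatment/control comparison of exam scores.

```python
# Illustrative only: hypothetical exam scores, not data from any study cited here.
from statistics import mean, stdev
from math import sqrt

control = [68, 72, 75, 70, 66, 74, 71, 69, 73, 67]
treatment = [74, 78, 81, 76, 72, 80, 77, 75, 79, 73]

# Pooled standard deviation for two groups of equal size.
pooled_sd = sqrt((stdev(control) ** 2 + stdev(treatment) ** 2) / 2)
cohens_d = (mean(treatment) - mean(control)) / pooled_sd

print(f"Mean difference: {mean(treatment) - mean(control):.1f} points")
print(f"Cohen's d: {cohens_d:.2f} (d of about 0.8 or more is conventionally considered large)")
```

A statistically significant difference that comes with only a small effect size would meet the "significant" bar described above but not the "substantial" one.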


Limitations of this Review

Our approach to writing this chapter and
the literature that we were able to access
led to two limitations that are important to
state explicitly. First, we focused the chapter
on processes for bringing about large-scale
change in faculty practice driven by educa-
tion research. We do not address the factors
that affect why individual educators decide
to engage in a large-scale change effort nor
do we address the experiences of those who
undertake translation of research to practice
as a personal journey. The other major lim-
itation stems from the literature base that
we were able to access, which is dominated
by studies in the United States. We were
able to locate some excellent work done
outside of the United States, but still the
majority of the references carry a U.S. per-
spective. Furthermore, most of the mate-
rials from outside the United States come
from other Western countries. As discussed
later in the chapter, adapting a practice to
local context and culture is a critical part
of successful transfer to widespread use. So,
the dominance of a single country and cul-
tural perspective (Western) in this review is
a potentially significant limitation.

Research-based Practices in
Engineering Education

Research-based practices enter engineering
education primarily through two pathways.
Until the last decade, the dominant path-
way was through the adoption/adaptation of
research-based educational practices devel-
oped outside of engineering. Over the last
ten to fifteen years, however, educational
research within engineering has grown dra-
matically and has begun to provide addi-
tional research-based practices for engineer-
ing educators. The scope of research-based
practices in education and engineering edu-
cation is very broad, spanning from recruit-
ment of students to the performance of early
career graduates in the workplace and every-
thing in between. In this chapter, we focus

on pedagogical practices, but much of what
we discuss also applies to increasing the use
of research-based practices independent of
the specific type of practice.

We use team-based learning to illustrate
the time scale of adoption of an innova-
tion in engineering education. Team-based
learning was recently identified as the most
widely adopted research-based practice in
engineering education in the United States
by participants in a workshop on diffu-
sion of innovations in engineering education
(Center for the Advancement of Scholar-
ship in Engineering Education, 2011). To cre-
ate the timeline of the adoption of team-
based learning in engineering education, we
used the American Society for Engineering
Education (ASEE) proceedings database to
search for three terms: teams, cooperative
learning, and collaborative learning. Two
different searches were conducted: one for
papers with any of these terms in the title
and one with any of the terms appearing
in the full paper, including references. The
title search is taken as an indicator of schol-
arly use of team-based learning, whereas the
full paper search is an indicator of aware-
ness of team-based learning. Because of the
number of papers involved, no attempt was
made to judge the sophistication of the prac-
tice described in the papers.
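
As a rough sketch of the kind of tallying behind these counts, the snippet below (in Python) counts, for each year, how many papers contain any of the search terms in the title and how many contain them anywhere in the text. The list of paper records is hypothetical; no particular interface to the actual ASEE proceedings database is assumed.

```python
import re
from collections import defaultdict

# Hypothetical record format; the real ASEE proceedings database interface is not assumed here.
papers = [
    {"year": 1997, "title": "Cooperative Learning in Statics", "fulltext": "..."},
    {"year": 2005, "title": "Assessing Capstone Design", "fulltext": "... collaborative learning ..."},
]

TERMS = re.compile(r"\b(teams?|cooperative learning|collaborative learning)\b", re.IGNORECASE)

title_counts = defaultdict(int)  # proxy for scholarly use: terms appear in the title
paper_counts = defaultdict(int)  # proxy for awareness: terms appear anywhere in the paper

for p in papers:
    in_title = bool(TERMS.search(p["title"]))
    in_paper = in_title or bool(TERMS.search(p["fulltext"]))
    title_counts[p["year"]] += in_title
    paper_counts[p["year"]] += in_paper

for year in sorted(paper_counts):
    t, a = title_counts[year], paper_counts[year]
    print(f"{year}: {t} with terms in title, {a} with terms anywhere")
```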

Figure 19.1 presents the timelines for the
number of papers that include teams or
cooperative or collaborative learning in the
title and anywhere in the paper, for the
period from 1996 to 2011 (the full range of
dates in the database). The curves show sim-
ilar trends, with a roughly 20:1 ratio of
papers containing any of the terms to papers
with the terms in the title. To
give a visual indication of the rate of change
in the years prior to 1996, the time scale
begins at 1980 because 1981 was the year
when the first paper on cooperative learn-
ing was presented at an engineering confer-
ence in the U.S. (Smith, Johnson, & John-
son, 1981; Smith, 1998, 2011). The dashed line
connects the first paper with the term coop-
erative learning in the title to the data from
the ASEE database. The figure shows that it
took nearly twenty-five years for the number

of papers on team-based learning to reach
steady-state, which we take as an indicator
of the end of the change process.

Figure 19.1. Number of papers containing terms related to cooperative learning;
data from 1996 to 2011 were generated from the Proceedings of the ASEE Annual
Meeting.

This time scale is consistent with the work
of Getz, Siegfried, and Anderson (1997),
who studied the adoption of innovations
in higher education in the United States.
They conducted a survey study of the adop-
tion of thirty innovations in six categories
from curriculum to financial services at more
than two hundred colleges and universities.
The number of years between the first
percentile of adopters and the median
adopter was twenty-six. For the four curricular
innovations in their study, women’s studies,
computer science major, interdisciplinary
major, and formal study abroad, that differ-
ence was fifteen, seventeen, fifty-one, and
fifty years, respectively. Thus, their work
suggests a time scale measured in decades
for change in higher education.

The time scale suggested by the publi-
cation data on team-based learning and the
work of Getz, Siegfried, and Anderson is dis-
couragingly long. The literature on change
in educational systems and on translation
of research to practice provides important
insights into the factors that lead to such
a slow pace of change and to the reasons

why such efforts often fail. We provide an
overview of this literature in the next sec-
tion.

Challenges to Successful Transfer
from Research to Practice

In this discussion, we are not concerned
with what Cohen and Ball refer to as “agent-
less diffusion” through which a research-
based practice is discovered and adopted
without any direct action on the part of
the developer, because such a process is
highly unlikely to lead to widespread use of
the research-based practice. Rather, we are
concerned with the translation of research-
based practices to widespread use through
direct action on the part of the developers
of the practice and/or other agents. The pro-
cess by which the developers of a research-
based practice seek to persuade others to
adopt their research-based practice is often
referred to as dissemination.

A common approach to dissemination
is the “replication model” in which the
instructor targeted as an adopter is expected
to passively accept and apply the new
practice just as it was developed (Bodilly,

Glennan, Kerr, & Galegher, 2004). In this
model, the researcher identifies the need
for a new practice, develops and assesses it,
and then seeks to disseminate it to poten-
tial adopters. Trowler, Saunders, and Knight
(2003) describe the change theory underpin-
ning this approach as technical-rational; in
this approach “experts plan and then man-
age faithful implementation” (p. 7). The
underlying belief of the replication approach
is that “well designed interventions will
cause change” (p. 7). As we shall see, there
are a number of issues with the replication
model of dissemination.

According to Bodilly et al. (2004), the
replication model was commonly used in
the 1960s and 1970s in U.S. higher education.
The model involved the development of an
educational innovation along with associ-
ated training for educators that would lead
to precise adoption of the innovation. The
communication was essentially one-way,
from the developers to the educators. Stud-
ies of the replication approach found “few
new sites that had implemented the design
with fidelity” (Bodilly et al., 2004, p. 12). In
an article on the state of large-scale educa-
tion reform around the world, Fullan (2009)
confirms the assessment that the replication
model failed to achieve widespread adoption
of innovative practices in the United States.
He writes that in spite of large expenditures
of resources on major curriculum reforms,
“by the early 1970s there was mounting evi-
dence that the yield was miniscule, confined
to isolated examples” (p. 103). Clearly, the
replication model was a failure.

A major issue with the replication model
is that it does not treat the educators as
active participants who bring prior knowl-
edge, experience, and beliefs about teach-
ing and learning to the adoption process.
The parallels between the replication model,
which treats the potential adopter as a vessel
to be filled, and the transmission model of
teaching, which looks at students in a simi-
lar way, are somewhat disturbing. A related
issue is that developers fail to meet the needs
of potential adopters. Cohen and Ball note
that the particular practice that the devel-
oper seeks to disseminate often does not

address an “urgent” need of the potential
adopters. In this situation, the developer is
faced with creating a market for his or her
research-based practice.

The nature of research-based practice
that is being transferred to classroom prac-
tice can also have a significant impact on
the likelihood of successful transfer to large
numbers of educators. Regarding the pro-
cess of reform in K–12 education in the
United States, Elmore (1996) writes that:

Innovations that require large changes in
the core of educational practice seldom pene-
trate more than a small fraction of American
schools and classrooms, and seldom last for
very long when they do. By ‘core of edu-
cational practice’, I mean how the teachers
understand the nature of knowledge and the
student’s role in learning, and how these ideas
about knowledge and learning are manifested
in teaching and classwork. (p. 1)

In a similar vein, Cohen and Ball (2007) note
that “ambitious” pedagogical practices that
seek to change significantly what an educa-
tor does in the classroom face the greatest
challenges. They note that such practices are
likely to lead to a feeling of “incompetence”
on the part of potential adopters because
familiar and conventional practices are being
uprooted and challenged.

The points made by Elmore and Cohen
and Ball are related to the compatibility of
an innovation as defined by Rogers (1995)
in his book Diffusion of Innovations. He
describes diffusion of innovations as “the
process through which an innovation is com-
municated through certain channels over
time among members of a social system”
(Rogers, 1995, p. 10). The innovation itself is
one of the four main elements of the model
of diffusion of innovations; the other ele-
ments are the social system within which
potential adopters of the innovation live
and/or work, the communication channels
through which others learn about the inno-
vation, and the temporal characteristics of
the diffusion process. Rogers defines com-
patibility, one of five key attributes of an
innovation, as “the degree to which an inno-
vation is perceived as consistent with the

values, past experiences, and needs of poten-
tial adopters” (p. 224). Research-based prac-
tices aimed at making substantial changes in
the core of educational practice are likely
to be perceived as incompatible with past
experiences and possibly with the needs of
potential adopters.

Dearing (2009) discusses research transfer
to practice in the field of social work using
the framework of diffusion of innovations.
He provides a list of the “top ten dissemi-
nation mistakes”; a number of the mistakes
are also relevant to transfer to practice in
higher education. One of his top ten mis-
takes is that developers create and advocate
only a single research-based practice, rather
than offering a set of practices from which
potential adopters can choose. Another mis-
take noted by Dearing is that developers
assume that evidence of effectiveness will
persuade potential adopters to implement
the new practice. He suggests emphasizing
other attributes of the practice, such as com-
patibility. On a similar note, Henderson and
Dancy (2010) suggest emphasizing personal
connections over presentation of data.

Dearing also considers using the devel-
opers as the leaders for dissemination as a
mistake because the developers are often
not the persons most likely to be able
to engage and persuade potential adopters.
Other researchers (e.g., Baker, 2007; Elmore,
1996; Horwitz, 2007; Schoenfeld, 2006) make
a related point that the lack of orga-
nizations specifically focused on translat-
ing research to practice is a major barrier
to widespread adoption of research-based
practices. National governments have cre-
ated such bodies, for example, the National
Diffusion Network and the What Works
Clearinghouse in the United States and the
Learning and Teaching Support Network in
the United Kingdom. In the United States
at least, the success at bringing about large-
scale translation of research to practice has
been limited (Fullan, 2009).

Challenges to the successful transfer of
research-based practices can also arise as
educators adapt them to meet personal
and local needs. Coburn (2003) summarizes
past work that relates to the nature and

quality of the implementation of new prac-
tices. She notes the following characteristics
of the transfer process (p. 4):

• Even when educators adopt new practices, they do so in ways that show substantial variation in depth and substance.

• Educators’ knowledge, beliefs, and experience influence how they choose, interpret, and implement new practices, making it likely that they “gravitate” to new practices that align with their prior experiences.

• Educators tend to prefer new practices that affect “surface features” such as new materials or classroom organizations, rather than practices involving deeper pedagogical principles.

• Finally, educators tend to “graft new approaches” onto normal classroom practices rather than changing those practices.

The findings of Henderson and Dancy (2009)
on transfer of physics education research to
practice in higher education are consistent
with the trends noted by Coburn.

The sheer number of research-based
practices available in the literature presents
another challenge to widespread adoption.
This situation is consistent with Cohen and
Ball’s observation that the present approach
to creating research-based practices and
translating them to practice will result in
“innovative activity at every level of edu-
cation but typically sketchy implementa-
tion” (p. 31). Their observation is consis-
tent with Schoenfeld’s (2006) observation
that the process of research is more highly
valued than the process of implementa-
tion. Within engineering education, the sit-
uation is complicated by the lack of a com-
mon vision of what needs to be changed
and what research-based methods should be
adopted.

Past work has also shown that ignoring
the reality of the environment in which
instructors find themselves, and the chal-
lenges that environment may present to
the adoption of the new practice,
contributes to failure of transfer (e.g., see
Elmore, 1996). Environmental characteris-
tics include instructional resources, disci-
plinary expectations, policies, and man-
agement. Lack of sufficient institutional
resources and appropriate facilities can
also hinder implementation of novel teach-
ing practices. Disciplinary and institutional
teaching norms can further impede or dis-
courage experimentation with novel meth-
ods (Henderson & Dancy, 2010). Cohen and
Ball (2007) note that many developers of
research-based practices fail to consider the
effect of the need for special equipment and
spaces on the transferability of their innovative prac-
tice. Lack of incentives and recognition for
the use of innovative pedagogies is widely
noted (e.g., Cohen & Ball, 2007; Elmore,
1996; Fairweather, 2005) as a reason for the
lack of use of innovative practices. Fair-
weather (2008) notes yet another challenge
to widespread adoption of research-based
practices: faculty and institutions bear the
costs of implementing and sustaining new
practices whereas the majority of the ben-
efits accrue to the students and those who
employ them.

A recent study of some of the most
improved school systems around the world
has demonstrated that cultural differences
can have an impact on the adoption process
and what is required for success (Mourshed,
Chijioke, & Barber, 2010). One example of
how culture can affect the implementation
process relates to the use of evaluation data.
Mourshed and colleagues make the point
that evaluating the impact of the new prac-
tices is crucial to successful implementation,
but that the results of those assessments
must be used in a culturally sensitive man-
ner. They report that it is common to make
assessment data public in Anglo-American
school systems, but that public release of
such data would not be acceptable in many
Asian and Eastern European school systems.
A leader of an Asian system is quoted on this
topic: “No good for our students could ever
come from making school data public and
embarrassing our educators” (p. 70).

Other work suggests that the culture of
engineering education itself may contribute
to failure, or at least increase the challenges

to successful translation to widespread use.
A study of more than 10,000 faculty at 517
colleges and universities by Nelson Laird,
Shoup, Kuh, and Schwarz (2008) investi-
gated the importance that faculty members
in a variety of disciplines placed on deep
approaches to learning.3 In comparison to
colleagues in other fields with less codified
knowledge, for example, philosophy and
literature, faculty members in engineering
and science rated the importance of deep
approaches to learning lower by nearly 0.75
standard deviations (p < .001). Thus, the cul-
ture of teaching in engineering seems to be
a significant challenge to the use of many
research-based pedagogies that are intended
to increase student engagement. Student
resistance to changing accepted practices in
the classroom is also a potential challenge to
the use of nontraditional teaching methods
(Dancy & Henderson, 2004).

Another cultural tension common in
engineering (as well as other fields) is the
relative value placed on research and teach-
ing in decisions regarding tenure and pro-
motion (Fairweather, 2008). Fairweather’s
research, using data on approximately 17,000
faculty who responded to the National Sur-
vey on Postsecondary Faculty in 1992–3 and
1998–9, showed that the more time a fac-
ulty member spends in the classroom, the
lower his or her salary, regardless of the
type of four-year institution (Fairweather,
2005). His work also shows that the strongest
predictor of faculty salary is the number of
career publications. Comparing the differ-
ential cost/benefit of one hour teaching or
publishing “in the mean” demonstrates that
time spent teaching costs a faculty mem-
ber money whereas time spent publishing
is rewarded with higher pay. Fairweather
(2008) concludes that:

These findings strongly suggest that enhanc-
ing the value of teaching in STEM fields
requires much more than empirical evi-
dence of instructional effectiveness. It requires
active intervention by academic leaders at
the departmental, college, and institutional
level. It requires efforts to encourage a culture
within academic programs that values teach-
ing. (p. 24)


Adopting research-based practices that lead
to major shifts from traditional teaching
practices requires a substantial investment
of time to learn about and implement the
new practices appropriately. The data from
Fairweather indicate that investing effort in
adopting new pedagogical practices is not
the most productive use of time, at least
when measured by salary compensation.

Schoenfeld (2006) makes a complemen-
tary point about the effect of values on the
process of transfer to practice. He asserts
that the academy places higher value on
research, that is, the process that creates
and evaluates innovative teaching meth-
ods, compared to development, that is, the
process of transfer to practice. This dif-
ference in value would make it less likely
that researchers would undertake …

Increasing the Use of Evidence-Based Teaching in STEM Higher Education:
A Comparison of Eight Change Strategies

Maura Borrego (Virginia Tech) and Charles Henderson (Western Michigan University)

Abstract
Background Prior efforts have built a knowledge base of effective undergraduate STEM
pedagogies, yet rates of implementation remain low. Theories from higher education, man-
agement, communication, and other fields can inform change efforts but remain largely
inaccessible to STEM education leaders, who are just beginning to view change as a
scholarly endeavor informed by the research literature.

Purpose This article describes the goals, assumptions, and underlying logic of selected change
strategies with potential relevance to STEM higher education settings for a target audience
of change agents, leaders, and researchers.

Scope/Method This review is organized according to the Four Categories of Change Strat-
egies model developed by Henderson, Beach, and Finkelstein (2011). We describe eight strat-
egies of potential practical relevance to STEM education change efforts (two from each category).
For each change strategy, we present a summary with key references, discuss its applicability
to STEM higher education, provide a STEM education example, and discuss implications
for change efforts and research.

Conclusions Change agents are guided, often implicitly, by a single change strategy. These
eight strategies will expand the repertoire of change agents by helping them consider change
from a greater diversity of perspectives. Change agents can use these descriptions to design
more robust change efforts. Improvements in the knowledge and theory base underlying
change strategies will occur when change agents situate their writing about change initiatives
using shared models, such as the one presented in this article, to make their underlying
assumptions about change more explicit.

Keywords curriculum change; instructional change; theories of change

Introduction
Increasingly, high-profile organizations are calling for widespread improvement in undergradu-
ate science, technology, engineering, and math (STEM) education. These calls are frequently
framed in terms of increasing the number, diversity, and quality of STEM graduates (American
Society for Engineering Education [ASEE], 2009, 2012; Hawwash, 2007; King, 2008; National
Academy of Engineering [NAE], 2004; President’s Council of Advisors on Science and Tech-
nology [PCAST], 2012). While these broad goals are not new, growing attention is being paid

Journal of Engineering Education © 2014 ASEE. http://wileyonlinelibrary.com/journal/jee
April 2014, Vol. 103, No. 2, pp. 220–252. DOI 10.1002/jee.20040

to the instructional practices of STEM faculty, specifically to encourage more widespread use of
instructional strategies grounded in the research on how students learn (NRC, 2012).

Tremendous investment and related efforts over the past few decades have built up a sub-
stantial knowledge base about STEM learning and many effective pedagogies and interven-
tions (Borrego, Froyd, & Hall, 2010; NRC, 2012; Prince & Felder, 2006). Yet these
prestigious organizations are increasingly expressing dissatisfaction with the rate of imple-
mentation, adoption, and scale-up of research-based instructional strategies (ASEE, 2009;
2012; NRC, 2012; PCAST, 2012). It has become painfully clear that higher education
change processes are at least as complex as the pedagogies and learning processes they seek
to promote. STEM education change agents, leaders, and researchers are just beginning to
view change as a scholarly endeavor that can and should be informed by the research litera-
ture. While fields such as management, higher education, and communication have devel-
oped a wealth of literature to inform such change efforts, this knowledge remains largely
inaccessible to STEM education leaders and researchers.

This research review describes the Four Categories of Change Strategies model previously
developed by Henderson, Beach, and Finkelstein that allows for categorization of change
strategies (Henderson, Beach, & Finkelstein, 2011; Henderson, Finkelstein, & Beach,
2010) and uses this model to describe and compare eight change strategies that are relevant
to STEM higher education settings. The target audience is STEM higher education change
agents, leaders, and researchers. This article, organized according to the Four Categories of
Change Strategies model, is not meant to be an exhaustive literature review of possible
change strategies, but rather is meant to highlight and provide an overview of what we see as
some important perspectives on change. Two strategies were selected to illustrate each cate-
gory of change strategy, and we present a STEM education example for each. These selec-
tions were based on our perception of the current or potential use of each strategy as well as
our desire to have the two strategies represent significantly different ways of operating within
each category. The Discussion and Conclusion sections focus on implications for change
agents and future directions for research.

This review focuses on higher education. We acknowledge that a significant body of literature
describes extensive change efforts related to precollege education (e.g., Hargreaves, Lieberman,
Fullan, & Hopkins, 2009; Sykes, Schneider, & Plank, 2012). Our goal, however, is not to sum-
marize this work and translate it to higher education settings, because higher education features
greater instructor autonomy, far less government control, and more limited reliance on
standardized tests for accountability than precollege settings do. We follow a United States, higher education-
based definition of STEM, to encompass primarily biology, chemistry, engineering, geosciences,
mathematics, and physics. When our examples or claims apply primarily to engineering edu-
cation, we use “engineering education” instead of “STEM education.”

Theory in STEM Education Research
There are two primary reasons a review of change strategies in STEM higher education is
needed. First, change is not traditionally a domain that STEM leaders have thought of as
informed by theory or literature. The relevant literature on change in higher education is not
necessarily accessible to those who need to apply it. This literature is scattered in disciplines
and journals outside STEM, and many ideas, although promising, are understudied in the
higher education context. Additionally, work being done in instructional change in one
STEM discipline is not necessarily connected to similar work in other STEM disciplines.

Chapter 8 of the Discipline-Based Educational Research (DBER) report calls for more research
into the extent to which educational research has influenced undergraduate instructional
practices within and across STEM disciplines (NRC, 2012).

Second, engineering educators and engineering education researchers have limited experi-
ence with education and social science theories. Descriptions of theory use in the DBER report
imply that engineering lags far behind physics and chemistry education in its engagement with
learning theory (NRC, 2012). Engineering education scholars (Beddoes & Borrego, 2011;
Borrego, 2007; Koro-Ljungberg & Douglas, 2008) have called for more explicit use of theory
in educational research, yet there are few detailed discussions in the engineering education lit-
erature about what theory means and how it is best applied in engineering education research
and practice. Flyvbjerg (2001) argues that knowledge accumulation is fundamentally different
in the natural and social sciences in ways that link theory and methodology much more closely
in the social sciences. This difference means that administrators and instructors trained in
technical engineering may be unfamiliar with the ways in which theory informs decisions
about educational methods.

In the education literature, Creswell (2009) defines theory in quantitative educational
research as “an interrelated set of constructs (or variables) formed into propositions, or hypothe-
ses, that specify the relationship among variables (typically in terms of magnitude or
direction) . . . it helps to explain (or predict) phenomena that occur in the world” (p. 51). Using
theory to inform interventions and investigations helps us focus on the most important factors
to effect the desired changes, whether they are related to student learning or instructional
change. In qualitative research, theory is closely linked to choices guiding methodology
(Crotty, 1998). Theory helps link the results of an otherwise isolated study to a broader body
of research: “Theoretically grounded work,” according to Beddoes and Borrego (2011),
“connects researchers, allows generalizations across studies, and advances the field of engi-
neering education by avoiding re-inventing the wheel” (p. 283). Thus, linking change efforts
to existing theory ensures that new initiatives are informed by and build upon prior efforts.
The pressing economic and environmental challenges facing the world and the need to prepare
engineers to meet these challenges mean we simply cannot afford to rediscover key aspects of
change with each new initiative.

Change Theories
Higher education leaders have been considering questions of how to change faculty instruc-
tional practices for decades, and researchers have attempted to make sense of the literature
on instructional change for nearly as long (e.g., Emerson & Mosteller, 2000; Levinson-Rose
& Menges, 1981; Weimer & Lenze, 1997). Contributing to the complexity is the variety of
levels of focus, including individual instructors, departments, institutions, and broader edu-
cation systems. More recent summaries and reviews attempt to capture the complex higher
education change processes that bridge individual and organizational scales (Amundsen &
Wilson, 2012; Henderson et al., 2011; Kezar, 2001; Seymour, 2002; Stes, Min-Leliveld,
Gijbels, & Van Petegem, 2010). While these reviews have helped to situate different per-
spectives on change with respect to each other and to identify the blind spots of a particular
approach, the field has not yet developed a coherent understanding of what perspectives are
most effective in a given set of circumstances. These reviews tell us that there are many per-
spectives and approaches to change that focus on certain aspects of complex higher education
systems. We know that certain approaches are a better fit for certain situations, but we do

222 Borrego & Henderson

not have a systematic way of thinking about which change perspectives are most appropriate
in a given situation. Our review does not attempt to solve this problem; rather, we are
describing (by comparing and contrasting) different approaches as they apply specifically to
engineering and STEM higher education. We argue that change in higher education has not
been conceptualized well enough to have its own specific theories, and that articulating the
underlying logic of specific change strategies will help develop theory. To frame this discus-
sion, we employ the Four Categories of Change Strategies model developed by Henderson
and colleagues, an interdisciplinary team that included physics education researchers who
explicitly focused their analysis on change in STEM higher education (Henderson et al.,
2010, 2011). This model provides a way to categorize change strategies; it is necessarily less
complex than reality in order to make sense of it. It describes but does not explain or
predict the effectiveness of various change strategies (as a theory would be expected to do).
In order to be useful, the model must be applied. We explore the change-strategies model by
describing eight specific strategies, two in each of the four change categories, and highlight the
contribution of this model to relating various strategies to one another. Situating the strate-
gies will help change agents articulate the underlying logic and assumptions of their efforts;
doing so will support the eventual development of theory.

Four Categories of Change Strategies

On the basis of a literature review of 191 journal articles published between 1995 and 2008,
Henderson et al. (2010, 2011) developed the Four Categories of Change Strategies model to
categorize strategies that have been used to conceptualize or to create change in undergradu-
ate STEM instruction. The similarity of these categories to those developed through an
independent review of an overlapping literature base (Amundsen & Wilson, 2012) suggests
that the four categories are robust and meaningful. The four categories, shown in Figure 1,
are based on two categorization criteria.

The first criterion focuses on the aspect of the system that is to be changed; these aspects
range from individual instructors to environments and structures. Our use of the terms
instructor and faculty throughout this article is meant to include instructors at all levels,
including temporary and part-time instructors and tenure-track and tenured professors.
Some of the organizations that have applied these strategies have, however, only focused on
the pressures and reward systems for tenured and tenure-track faculty members.

The second criterion focuses on whether the intended outcome of the change strategy is
known in advance, that is, whether the result of the change process is prescribed or emer-
gent. For example, using a specific set of curricular materials, textbook, technology (clickers),
or assessment tool is a prescribed outcome.
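
To make the two criteria concrete, the sketch below encodes the resulting 2 x 2 structure as a small Python mapping. The assignment of categories and example strategies to cells reflects our reading of Figure 1 and Table 1 (Henderson et al., 2011); treat it as an illustrative summary rather than as part of the original model.

```python
# (what is to be changed, whether the intended outcome is prescribed or emergent)
#   -> (category, the two example strategies discussed in this article)
# Cell assignments follow our reading of Figure 1 / Table 1; illustrative only.
FOUR_CATEGORIES = {
    ("individuals", "prescribed"): ("I. Curriculum & Pedagogy",
                                    ["Diffusion", "Implementation"]),
    ("individuals", "emergent"): ("II. Reflective Teachers",
                                  ["Scholarly teaching", "Faculty learning communities"]),
    ("environments and structures", "prescribed"): ("III. Policy",
                                                     ["Quality assurance", "Organizational development"]),
    ("environments and structures", "emergent"): ("IV. Shared Vision",
                                                   ["Learning organizations", "Complexity leadership"]),
}

for (focus, outcome), (category, strategies) in FOUR_CATEGORIES.items():
    print(f"{category}: changes {focus}, {outcome} outcome; e.g., {', '.join(strategies)}")
```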

In our experience, it is common for reviewers to argue that there is significant overlap
among the four categories created by these criteria. For example, enacting a policy has impacts
on individuals and may include strategies to encourage individuals to support the strategy.
Similarly, creating a teaching and learning center to develop reflective teachers requires
organizational and administrative action. It is important to remember, however, that while
change efforts can and perhaps should involve multiple strategies, the articles analyzed by
Henderson et al. (2010, 2011) and later by Amundsen and Wilson (2012) tended to focus
their discussion on one primary strategy. These criteria were the salient ones distinguishing
the set of articles analyzed. We are not claiming that these categories and their strategies have
no interconnections and overlaps; rather, we are trying to make distinctions that assist in
relating the various common change strategies to one another. Again, this model is, like any
model, less complex than the reality it attempts to describe.

Figure 1 clarifies the role of the change agent in each of the four categories. Henderson
et al. (2011) and Henderson, Beach, Finkelstein, and Larson (2008) found that each cate-
gory was closely associated with a different community of professionals and their publishing
venues. The Curriculum and Pedagogy category was dominated by STEM instructors
including DBER scholars. Most of the Reflective Teachers category publications were writ-
ten by faculty developers, for example, teaching and learning center staff. Most Policy publi-
cations reflected the interests of higher education researchers, and the few Shared Vision
publications were authored by administrators describing their practices.

In the following sections, we discuss two change strategies in each of the four change categories.
Information for each strategy is summarized in Table 1. For each change strategy, we present a sum-
mary with key references, discuss its potential applicability to STEM higher education, and

Figure 1 Change theories mapped to the four categories of change strategies.
Figure adapted from Henderson et al. (2011). The italicized text in each box
lists the eight change strategies discussed in further detail in the text.

discuss implications for change efforts and engineering education research on change. Then we
provide an example of how the strategy has been applied in STEM higher education.

Underlying Logic of Change Strategies

A useful concept for our discussion of change strategies is from the field of evaluation: logic
models. In evaluation, a logic model is a detailed map developed to clarify and communicate
goals, intermediate outcomes, and measures for a specific project (W. K. Kellogg Foundation,
2004). Logic models make explicit which actions are intended to cause desired changes
(McLaughlin & Jordan, 1999). They are closely related to theories of action and theories of
change (Center for Civic Partnerships, 2007; Milstein & Chapel, n.d.), all of which empha-
size their focus on communicating the logic and assumptions underlying a change effort.
Logic models and theories of change can be quite extensive, including full-page or larger
maps connecting boxes of activities, outcomes, stakeholders, and indicators (Keystone
Accountability, 2009; W. K. Kellogg Foundation, 2004). In arguing for using theories of
change to inform evaluation, Carol Weiss (1995) uses the term program theory to describe a
less detailed version that is more in line with our approach but still articulates some of the
underlying assumptions of how change happens. Since we are describing general strategies
and not specific programs or projects, we cannot present full logic models. Rather, inspired
by this concept, we articulate the underlying logic for each of the change strategies described
in this article, summarize it in a sentence or two, and supplement it by the more detailed
description of the approach and its limitations and assumptions. We hope our open discus-
sion of underlying logic encourages others to be more explicit in their change logic as well.

Choosing the Right Theory
Readers might wonder how to use this information that compares a range of change strate-
gies. Our goal in this article is modest: to clarify the goals, assumptions, and underlying logic
of each strategy in order to encourage STEM higher education change agents to situate their
own efforts within this model of strategies and make their underlying assumptions about
change more explicit.

A good starting point, particularly for those without social science backgrounds, is to focus
on one strategy that fits their situation best (in terms of resources, goals, locus of change, and
implicit assumptions about change already being followed). Readers should trust their wis-
dom and consider selection of a theory as a design problem: within the constraints, some
options fit better than others, but there is no clear right or wrong answer. In publications,
part of the peer-review process is evaluating the appropriateness of the theory or perspective
taken. For example, when studying effectiveness of dissemination efforts, a dissemination
perspective is most likely to identify productive variables and processes on which to focus.

At the end of this article, we suggest that multiple change strategies will increase the like-
lihood of success. However, STEM education change agents are unaccustomed to discussing
their work in terms of the broader change literature or the categories of strategies presented
here. The past efforts reviewed in Henderson et al.’s synthesis (2010, 2011) had a strong
tendency to focus on just one of the four categories, without attempting to combine strate-
gies. This model of four categories of change strategies is based on how change efforts have
been described in the prior literature; it provides little critique or guidance as to how these
efforts should be described in the future. This article attempts to provide a foundation and
common language for productive future developments. We encourage readers to apply this
work, critique it, and build on it.

Table 1 Summary of Change Categories and Strategies (according to categories in Figure 1)

I. Curriculum & Pedagogy

Diffusion
  Summary: Innovations are created in one location, then adopted or adapted by others. Multi-stage adoption process.
  Key metaphor: Scattering.
  Key change agent role: Develop a quality innovation and spread the word.
  Key change mechanism: Adoption decisions by potential users.
  Typical metrics of success: Number of users or amount of influence of the innovation.

Implementation
  Summary: A set of purposeful activities are designed to put proven innovations into practice in a new setting.
  Key metaphor: Training.
  Key change agent role: Develop a training program that involves performance evaluation and feedback.
  Key change mechanism: Training of potential users.
  Typical metrics of success: Fidelity of use of innovation.

II. Reflective Teachers

Scholarly teaching
  Summary: Individual faculty reflect critically on their teaching in an effort to improve.
  Key metaphor: Self-reflection.
  Key change agent role: Encourage faculty to reflect on and collect data related to their teaching.
  Key change mechanism: Evidence-based reflection on practice.
  Typical metrics of success: Self-reported changes in beliefs, teaching practices, or satisfaction with student learning.

Faculty learning communities
  Summary: A group of faculty supports each other in improving teaching.
  Key metaphor: Community development.
  Key change agent role: Bring faculty together and scaffold community development.
  Key change mechanism: Peer support/accountability; exposure to new views about teaching and learning.
  Typical metrics of success: Self-reported changes in beliefs, teaching practices, or satisfaction with student learning; motivation towards teaching.

III. Policy

Quality assurance
  Summary: Measurable target outcomes are identified and progress towards them is assessed and tracked.
  Key metaphor: Accreditation.
  Key change agent role: Develop measurable outcomes, define success, collect evidence.
  Key change mechanism: Pressure to meet outcomes.
  Typical metrics of success: Degree to which outcome measures are met.

Organizational development
  Summary: Leader develops new vision and plans a strategy for aligning employee attitudes and behaviors with this vision.
  Key metaphor: Leadership.
  Key change agent role: Develop new vision. Analyze alignment of parts of the organization with the new vision and identify a strategy for creating alignment.
  Key change mechanism: Strategic work by the leader to communicate the vision and need for change and to develop structures to motivate employees to work towards it.
  Typical metrics of success: Productivity-related metrics (e.g., credit hour production, graduation rates, etc.).

IV. Shared Vision

Learning organizations
  Summary: Leader works to develop an organizational culture that supports knowledge creation.
  Key metaphor: Team learning.
  Key change agent role: Move decision-making further from the top. Invest in developing employees’ personal mastery, mental models, shared vision, and team learning.
  Key change mechanism: Team-level questioning and revision of mental models (i.e., double loop learning; Argyris & Schön, 1974) facilitated by middle managers.
  Typical metrics of success: Vague and situation dependent.

Complexity leadership
  Summary: In a complex system, results of actions are not easily predicted. Change agents can create organizational conditions that increase the likelihood of productive change.
  Key metaphor: Emergence.
  Key change agent role: Disrupt existing patterns, encourage novelty, and act as sensemakers.
  Key change mechanism: New ideas emerge through interactions of individuals. Formal leaders encourage this process by creating disequilibrium and amplifying productive innovations.
  Typical metrics of success: Vague and situation dependent.

Category I: Curriculum and Pedagogy

Change strategies in this category focus on changing individuals (typically faculty members)
in a prescribed way. Henderson et al. (2011) found that among STEM undergraduate edu-
cation researchers, strategies in this category are the most commonly used and discussed. In
fact, discussion about how to improve undergraduate STEM instruction is typically concep-
tualized solely within this category. Working from the assumption that faculty have limited
time and expertise to develop improved teaching methods, STEM change agents develop
and perfect highly structured and specific interventions meant to be easily implemented by
others. Developing highly specified interventions is the basic change model behind many
change initiatives in undergraduate STEM (Seymour, 2001) and also the change model
implicit in influential funding programs for undergraduate STEM instructional improve-
ment, such as the NSF (2010) Transforming Undergraduate Education in STEM (TUES)
and its predecessor, Course, Curriculum and Laboratory Improvement (CCLI). These ideas
are elaborated below in specific descriptions and examples of diffusion and implementation.

Diffusion
Underlying logic STEM undergraduate instruction will be changed by altering the behavior

of a large number of individual instructors. The greatest influences for changing instructor behavior
lie in optimizing characteristics of the innovation and exploiting the characteristics of individuals
and their networks.

Description The term diffusion of innovations was popularized by a book of that title
first published by Everett Rogers in 1962 and now in its fifth edition (Rogers, 2003). The
theory has been used to describe adoption of a wide range of innovations, including agricul-
tural equipment, public health interventions, and cellular telephones, and has demonstrated
relevance to diffusion of instructional strategies. Three features of Rogers’s theory are often
implicit in discussions about STEM education change initiatives.

First, much of diffusion of innovation theory focuses on the characteristics of the innovation
(such as an instructional strategy or curricular approach); that is, the discussion may focus, for
example, on how much better the innovation is than current practice (relative advantage) or
how hard it is to understand and use the innovation (complexity). The second feature is that
adoption is conceptualized as an individual choice. Potential adopters (faculty members) are
often categorized in terms of their “innovativeness” (e.g., whether they are innovators, early
adopters, early majority, late majority, or laggards), and this information may be used to target
influential leaders or individualize dissemination strategies by adopter type. Finally, once
enough people adopt an innovation (Rogers suggests between 10% and 25%), it will reach a criti-
cal mass (or what Gladwell [2000] calls a “tipping point”), after which the innovation will con-
tinue to spread on its own until it saturates the system. We note that although this theory has
been applied in settings around the world, the conceptualization of curriculum change as an indi-
vidualized, course-based act is more characteristic of the United States. For example, European
views of knowledge and curriculum may be more interconnected and holistic than those in the
United States (Borrego & Bernhard, 2011; de Graaff & Kolmos, 2007).
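
To make the critical-mass idea concrete, the toy simulation below spreads an innovation through contact between current adopters and non-adopters and reports when adoption passes a 15% threshold, a value chosen from within Rogers's 10% to 25% range. This is only an illustrative logistic-contagion sketch in Python, not part of Rogers's theory; the initial fraction, imitation rate, and threshold are arbitrary assumptions.

# Toy logistic-contagion model of innovation adoption (illustrative only).
def simulate_adoption(initial_fraction=0.02, imitation_rate=0.6, years=25):
    """Return the adopted fraction of a population for each year."""
    fraction = initial_fraction
    history = [fraction]
    for _ in range(years):
        # new adoption is driven by contact between adopters and non-adopters
        fraction = min(1.0, fraction + imitation_rate * fraction * (1.0 - fraction))
        history.append(fraction)
    return history

curve = simulate_adoption()
# first year the adopted fraction reaches a 15% "critical mass"
critical_year = next(year for year, frac in enumerate(curve) if frac >= 0.15)
print(f"critical mass (15%) reached in year {critical_year}; year-25 adoption = {curve[-1]:.2f}")

Varying the imitation rate traces out the familiar S-curve: adoption is slow before the critical mass and accelerates sharply afterward until the system saturates.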

Another important aspect of Rogers’s view of diffusion of innovations is the representa-
tion of adoption decisions as a series of stages. Adopters do not move from knowing nothing


about an innovation to adopting it in one step. While there are many descriptions of the
stages through which an adopter reaches the point of using an innovation, the five-stage
description offered by Rogers (2003) provides a useful framework:

1. Awareness – Awareness of the innovation, but lacking complete information about it

2. Interest – Growing interest and information seeking

3. Evaluation – Decision …

Research-Informed Policy Change:
A Retrospective on Engineering Admissions

Beth M. Holloway (a), Teri Reed (b), P. K. Imbrie (b), and Ken Reid (c)

(a) Purdue University, (b) Texas A&M University, (c) Ohio Northern University

Journal of Engineering Education, April 2014, Vol. 103, No. 2, pp. 274–301. DOI 10.1002/jee.20046. © 2014 ASEE. http://wileyonlinelibrary.com/journal/jee

Abstract
Background Few studies have investigated how engineering education admission policies
contribute to the underrepresentation of specific groups. Transforming these policies may sig-
nificantly affect who becomes an engineer. This article reports the outcome of using re-
search results to inform change in admission policy at a Midwestern public university.

Purpose There were three research questions: Is there statistically significant evidence of admis-
sion decision gender bias for engineering applicants? Do affective and cognitive factors predic-
tive of engineering student success differ between men and women? Can a differ-
ence in the resulting admitted class be confirmed when such factors inform admission policy?

Design/Method Admissions records were examined for differences in cognitive metrics
between men and women. Student records were analyzed before and after the policy change.
Neural network modeling of student records predicted the cognitive and affective
measures most important for success in retention and graduation.

Results Statistical analysis indicated a gender bias in the admission process results,
which was traced back to the policy. Success factor modeling suggested a different set of
criteria could mitigate this bias. After admission criteria were changed, statistical analysis
confirmed the gender bias against women was mitigated.

Conclusions The application of research and the change process described shows the im-
portant role of research in motivating and informing policy change. This work highlights the
contribution of institutional bias in admission policy to the underrepresentation of
groups in engineering education, especially if admission is limited to a minimum standar-
dized math test score.

Keywords gender equity; research-informed policy change; success factor modeling

Overview
One of the early steps a student makes along the pathway toward becoming an engineer is apply-
ing and being admitted to an accredited institution. While an abundance of literature describes
the virtues of precollege and recruiting programs and their potential effect on increasing the
number of applications to engineering schools, little to no research informs institutional policy
on important factors to consider in the engineering admissions process. This article describes a
research-to-practice effort at a U.S. Midwestern public university which began by studying the
results of admission practices and ended by changing admissions policy. The genesis of this


effort was when the authors identified that the number of engineering applications from
women increased by 46% over a five-year time period while the number of women admitted
during that same time period increased by only 23%. This disparity between application and
admissions gains led to an investigation of the university’s engineering admissions process and
ultimately policy. When a gender bias was confirmed by statistical analysis, the authors used
research-based success modeling to identify key admission factors that could produce a differ-
ent result from the university’s engineering admission policy. While most research stops here,
the authors were able to use these research findings to influence a process and policy for which
they had no direct responsibility. By promoting and reprioritizing researched admissions fac-
tors, the number of women admitted to the College of Engineering increased, and mitigation
of gender bias was confirmed.

The following is a retrospective of this four-year journey toward research-informed policy
change. While this time period seems long, a goal of the authors’ is to encourage others to consider
the extra step of using research findings to affect change and improve policies of the engineering
education system. In much the same manner as Jamieson and Lohmann (2009) demonstrated the
importance of linking research and educational practices, this article demonstrates the possibilities
for change when linking research and policy. Using research to inform engineering educational
policy could significantly improve the higher education system, given administrators who under-
stand the power of applied research and researchers who value and understand the potential of
how research-informed policies can positively affect system change.

Introduction
The National Academy of Engineering’s (NAE) Changing the Conversation (2008) created an
awareness of the public’s perception of engineering in general and of teens’ perceptions of engi-
neering more specifically. These perceptions or misperceptions suggest changes needed so that
high school students can be more effectively recruited to the engineering field. The national
conversation regarding the education of engineers was sparked and re-energized by NAE’s The
Engineer of 2020 (2004) and Educating the Engineer of 2020 (2005). But no such similar national
conversations have focused on the role of admission policy and whether it acts as a barrier or a
gateway to studying engineering. Though admission policies can and do vary
by institution, admission is a gateway through which all eventual engineers must pass. Changing
admission policy, then, may have significant implications for who becomes an engineer. It may
also be valuable to modify admission policy to align better with producing the types of engineers
who will be successful in the future. Making such changes wisely, however, requires an
understanding of the outcomes of current policies, of how changes might affect admission
results, and of how policy changes can be made.

Viewing the admission process within the larger context of the progression of a potential
engineering student through to graduation, as in Figure 1, demonstrates the relationships be-
tween applicants, admitted students, and yielded students who enroll in an undergraduate engi-
neering education program. From this systems perspective, it is clear that an applicant cannot
enroll and be retained through to graduation unless first admitted. From this standpoint, admis-
sion policy is situated squarely between recruiting and retention.

If, then, admission policies have a significant role in who does or does not become an engi-
neer, changing such policies may play a role in increasing the representation of groups such as
women and underrepresented minorities in the engineering education system.


At a large U.S. Midwestern public university, institutional data indicated that from 2006
through 2010 the number of applications received from women for engineering increased by
46%, a result of increased efforts in recruiting. However, over that same time period, the num-
ber of women who were admitted into engineering increased by only 24%, and the number of
women who enrolled in engineering increased by only 20%. A mismatch in the growth rates
of women applicants and women admitted was unexpected because the university’s College
of Engineering had for many years set goals for increasing the number and percentage of
women studying engineering. In fact, this college was the first in the United States to create a
women in engineering program, which demonstrated its longstanding commitment to increas-
ing women’s representation and success in the field. The disparity between application gains
and admission gains raised questions about equity within the admission process and the poten-
tial effects of admission policy on the underrepresentation of women in engineering, both at
this university and in general.

There are many theories of change and of the factors that either promote or impede the
change process. The research-to-practice project described in this article illustrates Weick
and Quinn’s (1999) theory of episodic change, which relies on Kurt Lewin’s (1951) classic
framework of organizational development: unfreeze, change, and refreeze. These three stages
were used to frame a retrospective review of the request from the College of Engineering to

Figure 1 Progression of an engineering student from prospect to graduate.


the University admissions office to change engineering admission policy that was shown to
be biased in favor of men. The communication channels of the change process are particu-
larly emphasized; since the change was a policy recommendation, communication was the
primary means of effecting change. Although this project was done at a single university,
many facets of the process are transferable to other institutions of higher education.

Literature Review
Ongoing research that seeks to identify reasons for the persistent underrepresentation of
women in engineering has focused primarily on two broad areas: recruitment, specifically
precollege preparation, exposure, and experience; and retention, specifically higher education
experience. A significant body of literature describes programs and practices that have been
implemented, and which have incorporated findings from both recruitment and retention
research (for example, Bogue & Cady, 2010). Unfortunately, the prevalence of this informa-
tion has not significantly increased the number of women engineering graduates in the last
20 years. The lack of change suggests that other factors may be influencing that outcome.

There is a dearth of research about the university admission process and its related policies
that could guide the understanding of the degree to which this process and related policies are
or are not subject to gender bias. Few studies have specifically investigated the variance that
admission policy contributes to the underrepresentation of women in engineering. Margolis
and Fisher (2002) found that changes in the admission process and evaluation criteria increased
the number of women studying computer science. Unfortunately, that research was limited in
scope and did not generalize its findings to engineering education. A review of the literature
revealed there is a significant lack of research that critically evaluates engineering admission
processes and policies in three areas: gender bias when admission decisions principally depend
on typical high school metrics (i.e., standardized test score, high school grade point average
[GPA], and high school class rank); gender bias regarding the types of factors (i.e., cognitive
and psycho-social or affective and attitudinal factors) used to make admission decisions; and
the role of systematic research to inform policy creation or modification. One exception is a
study by Leonard and Jiang (1999) that indicated a systemic gender bias against women when
SAT (a U.S.-based national standardized test used in college admissions) scores were used to
admit students to the University of California, Berkeley in all fields except engineering. They
did find, however, that within engineering, those women on the margin of admission accord-
ing to their SAT scores outperformed similarly scoring men with respect to their college
grades. This general lack of literature suggests engineering admission processes, policy, and cri-
teria are closely held by institutions, presumably for competitive reasons. Unfortunately, the
lack of scholarly work on engineering admissions promotes keeping these processes and poli-
cies unchanged rather than modifying them in an informed way. Indeed, Camara and Kimmel
(2005) point out that “most admissions decisions are made using tools that have been around
for 50 years or more” (p. viii).

As stated earlier, admission decisions are generally based on the premise that the selection cri-
teria used (e.g., standardized test scores, high school rank) will yield the highest degree of student
success, where student success is typically operationalized in terms of first-year retention and ulti-
mately graduation. However, evidence is emerging from the literature on the affective and attitu-
dinal factors that shape our understanding of student success (Artelt, Baumert, Julius-McElvany,
& Peschar, 2001; Atkin, Black, & Coffey, 2001; Schreiber, 2002). These affective and attitudi-
nal factors have been shown to be positively correlated with college student success, but have not


been made a substantial part of admissions decisions, although researchers have identified their
addition as a possible way to ameliorate the underrepresentation issue (Sedlacek, 2005).

Substantiating a broader set of admission criteria that can lead to improved retention and
graduation requires a more thorough understanding of the relationships between attributes and
outcomes of first-year students. To date, many studies look in isolation at the individual contri-
butions of affective and attitudinal student attributes to student success; these include Biggs,
Kember, and Leung (2001), French and Oakes (2001), Hackett, Betz, Casas, and Rocha-Singh
(1992), O’Neil and Abedi (1996), Pajares (1996), and K. Taylor and Betz (1983). Numerous
articles, such as Moller-Wong and Eide (1997), identify the relationship between critical cogni-
tive variables and student success. Unfortunately, little research has used hybrid models that
combine both cognitive measures and affective and attitudinal measures. Such models have
the potential to provide more insight into factors influencing student success.

Guided by this literature about the contributions of affective and attitudinal attributes to
student success, a 161-item instrument was developed that used psychometrically tested affec-
tive and attitudinal indicators of student success. The instrument is referred to as the Student
Attitudinal Success Instrument, or SASI (Immekus, Maller, Imbrie, Wu, & McDermott,
2005; Reid, 2009; Reid & Imbrie, 2008). Combining the results of the SASI with typical
high school cognitive metrics (e.g., standardized test scores, high school GPA, class rank, etc.)
forms a hybrid model of student success. The hybrid modeling of student success used in this
project provided a tool that could optimize the factors considered during the admission process
and set admission policy tailored to engineering students at the studied university.
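
As a rough illustration of what a hybrid cognitive-plus-affective success model can look like in code, the Python sketch below fits a small feed-forward neural network to predict first-year retention. It is not the authors' model: the data file, column names, SASI scale names, network size, and preprocessing are all assumptions made for the example.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# hypothetical columns: cognitive metrics plus SASI affective/attitudinal scales
features = ["sat_e_math", "sat_e_verbal", "core_gpa", "class_rank_pct",
            "sasi_self_efficacy", "sasi_motivation", "sasi_study_habits"]
target = "retained_first_year"          # 1 = retained after the first year, 0 = not retained

students = pd.read_csv("first_year_cohort.csv")        # assumed data file
X_train, X_test, y_train, y_test = train_test_split(
    students[features], students[target], test_size=0.25, random_state=0)

# a small feed-forward neural network; the architecture is illustrative only
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
model.fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

In practice, examining which cognitive and affective inputs matter most, separately for men and women, is what would inform which factors deserve more weight in admission policy.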

This project offered the possibility of better understanding the admissions process and the results
of the policies that guide it, especially as a potential partial solution for bolstering the rep-
resentation of women and other minorities in engineering. The following research questions
guided this study:

To what extent is there statistically significant evidence of admission decision gender
bias for engineering applicants when considering standardized test scores, high school
GPA, and class rank?

Do affective and cognitive factors used to predict engineering student success (opera-
tionalized as first-year retention and graduation) differ between men and women?

When such factors are used to inform admission processes and policy, can a difference
in the resulting admitted and enrolled class demographics be confirmed?

This article also describes the process by which the findings from the first two research ques-
tions were used to inform and change engineering admission policy at a Midwestern university.

The Research
University Admission Process
To better understand the context of this study, a brief review of the admission process at the
Midwestern university is needed. The university has stated on its Web site that many general
factors are used to make a decision on a student’s application, and that all factors are taken
into consideration in a holistic manner. These factors are:

subject matter expectations (the number of semesters of math, science, English, social
studies, and foreign language that each student is required to have taken in high school)


overall high school grade point average (GPA)

core high school GPA (English, math, science, foreign language, and social studies classes)

high school class rank

scores from U.S.-based national standardized tests for college readiness (SAT or ACT)

overall grades in academic coursework

grades related to intended major

strength of student’s overall high school curriculum

trends in achievement

ability to be successful in intended major

personal background and experiences

time of year the application is received

space availability in intended program

Some of these factors are quantitative, such as standardized test scores, high school GPA,
and class rank. Many are not, such as ability to be successful in intended major and personal
background and experiences. When this research project began in 2008, the application did
not require an essay; an essay was added some years later but before the admission policy was
changed. The university’s admissions office has stated that there are no minimum requirements
for quantitatively measured metrics; such a policy leads to a system that is nonformulaic and
flexible, but also makes the process less transparent and more subjective.

A flowchart of the admission process is presented in Figure 2. Every application is first
reviewed for completeness in the data processing office. If an application is incomplete, a file is
started and a request for additional information is sent to the applicant. If an application is com-
plete, there are no irregularities in the application data, and the metrics of the applicant are clearly
outstanding, the applicant is admitted. If there are some irregularities, or the metrics of the appli-
cant are not as outstanding, the application is forwarded to one of several designated admission
counselors for the College of Engineering. The counselor can then either admit the student or
send the application on to the college-specific committee with a recommendation on an admis-
sion decision. For each college there is a committee, which is comprised of all the admission
counselors designated for that college and a senior admission counselor. The committee then
comes to a decision about the application. The committee can admit the student to engineering,
offer the student an alternate option at the university, deny the student admission, or request
additional information from the applicant.
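
Read as an algorithm, the routing in Figure 2 can be sketched as below. The predicates (for example, "clearly outstanding" or "irregularities") are human judgments, so the field names and example values here are placeholders for illustration, not actual university criteria.

def route_application(app):
    """Illustrative routing of one application through the process described above."""
    if not app["complete"]:
        return "start file; request additional information from applicant"
    if not app["irregularities"] and app["clearly_outstanding"]:
        return "admit (data processing office)"
    # otherwise the application goes to a designated engineering admission counselor
    if app["counselor_admits"]:
        return "admit (counselor)"
    # the counselor forwards the file to the college committee with a recommendation;
    # the committee admits, offers an alternate option, denies, or requests more information
    return "committee decision: " + app["committee_decision"]

example = {"complete": True, "irregularities": True, "clearly_outstanding": False,
           "counselor_admits": False, "committee_decision": "offer alternate option"}
print(route_application(example))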

Statistical Analysis
This effort began when the authors identified that the number of engineering applications
from women increased by 46% over a five-year time period while the number of women
admitted during that same time period increased by only 23%. A statistical analysis of the
quantitative metric data of the applicants and admitted students to the College of Engineer-
ing over the five-year period was performed to investigate potential gender bias in the results
of the admission process.


Methods and Results The admissions office stores applicant information in a database.
A new database is created for each admissions cycle. Each database contains the demograph-
ics of each applicant (including gender, ethnicity, and residency), the cognitive metrics of each
applicant (including standardized test scores, class rank, number of semesters of and grades in
core courses, and overall and core high school GPAs), and the admissions decision made for
each applicant. This research project used data for the 2006–2010 cohort entry years. The data
then were filtered to include only records for the following:

applicants with complete applications (incomplete applications were filtered out)

applicants for fall semesters only

applicants who would be admitted directly from high school and would be first-time
college students

applicants to the College of Engineering

The overall demographics of these applicants, disaggregated by gender, are given in Table 1.
Not all metric data are available for each applicant. For example, since high schools increasingly
do not provide rankings of their students, some students did not have a high school class
rank. Some international students do not take standardized tests. Not every student took both
the SAT and ACT tests. In order to minimize the amount of missing data, all ACT test scores
were converted into equivalent SAT (SATe) Math and Verbal scores using the concordance
published by the College Board, the administrator of the SAT (Dorans, 1999). Finally, many
students take these tests more than once in an effort to improve their scores. This university’s
policy with regard to multiple tests is to use the highest scores from each part of the test for
consideration in the admission process, as opposed to just using the latest complete set of test
data or the highest overall set of test data. Because of this policy, only the maximum test scores
were used in the analyses.
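
A minimal pandas sketch of the data-preparation steps described above (filtering the records, converting ACT scores to SATe equivalents, and keeping each applicant's highest scores) might look as follows. The file name, column names, and the handful of concordance values are illustrative assumptions, not the actual database schema or the published College Board concordance.

import pandas as pd

apps = pd.read_csv("admissions_cycle.csv")    # assumed export of one admission cycle's database

# keep complete, fall-semester, first-time-in-college applications to the College of Engineering
apps = apps[(apps["complete"]) & (apps["term"] == "fall")
            & (apps["first_time_college"]) & (apps["college"] == "Engineering")]

# illustrative ACT math -> SATe math lookup; real work would use the full College Board concordance
act_to_sate_math = {36: 800, 34: 780, 32: 730, 30: 690, 28: 650, 26: 610}
apps["sat_e_math_from_act"] = apps["act_math"].map(act_to_sate_math)

# per university policy, use the highest math score available across tests and attempts
apps["sat_e_math"] = apps[["sat_math_best", "sat_e_math_from_act"]].max(axis=1, skipna=True)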

Figure 2 Flowchart of admissions process.


Anderson-Darling normality tests were run on each metric distribution to determine if a
normal probability distribution was adequate to describe the data; each metric was determined
not to be normal. Therefore, a nonparametric, 2-sample Mann-Whitney test at a 95% confi-
dence level was used. This test can be used to make inferences about the difference between
two population medians based on two independent random samples.
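
A brief Python sketch of that testing sequence is shown below, using synthetic stand-in data because the applicant records are not public; the scipy functions are the standard Anderson-Darling and Mann-Whitney implementations.

import numpy as np
from scipy.stats import anderson, mannwhitneyu

rng = np.random.default_rng(0)
# synthetic stand-ins for one metric (e.g., overall GPA) for women and men applicants
women_gpa = rng.beta(8, 2, size=500) * 4.0
men_gpa = rng.beta(7, 2, size=2000) * 4.0

# Anderson-Darling: a statistic above the critical values indicates departure from normality
print(anderson(women_gpa, dist="norm"))
print(anderson(men_gpa, dist="norm"))

# nonparametric two-sample Mann-Whitney test (two-sided), judged at the 0.05 level
statistic, p_value = mannwhitneyu(women_gpa, men_gpa, alternative="two-sided")
print(f"Mann-Whitney U = {statistic:.1f}, p = {p_value:.4g}")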

An analysis of the aggregate applicant pool is shown in Table 2 and includes the sample size
and median value for each metric as well as the p-value and Cohen’s d for each comparison.
The scales for each metric are also presented as a range from minimum to maximum possible
value. Analyses were completed for each individual cohort year, and the results were similar
each year, as demonstrated through cluster analysis. Cluster analysis showed resultant shapes
and patterns to be consistent from cohort to cohort, demonstrating strong repeatability and

Table 1 Demographics of Applicants to Engineering, 2006–2010 Cohorts

                                        Women (n = 7,884)    Men (n = 30,856)
                                        n        %           n        %
Race/Ethnicity
  Caucasian, non-Hispanic               5,016    72.1        19,996   75.8
  African American, non-Hispanic          462     6.6         1,135    4.3
  Hispanic American                       383     5.5         1,243    4.7
  Asian American/Pacific Islander         473     6.8         1,730    6.6
  Asian American                          301     4.3         1,049    4.0
  Native American                          37     0.5           160    0.6
  Native Hawaiian/Pacific Islander          2     0.0            12    0.0
  Other                                    71     1.0           274    1.0
  Two or more races                        51     0.7           169    0.6
  Unknown                                 140     2.0           506    1.9
  Not reported                             19     0.3           118    0.4
Residency
  Domestic                              6,955    88.2        26,392   85.5
  International                           929    11.8         4,464   14.5

Table 2 Metric Medians for Applicants to Engineering, 2006–2010 Cohorts

                                  Women               Men
Metric                 Scale      Median   n          Median   n         p        Cohen's d
Overall GPA            0.0–4.0    3.9      7,017      3.7      21,357    0.0000   0.42*
Core GPA               0.0–4.0    3.75     7,681      3.52     29,459    0.0000   0.48*
Class rank percentile  1st–99th   94       4,460      87       17,393    0.0000   0.45*
SATe verbal            200–800    620      7,775      600      30,310    0.0000   0.22
SATe math              200–800    680      7,774      680      30,310             -0.08
SATe total             400–1600   1,300    7,775      1,290    30,310    0.0000   0.08

*Moderate effect size.

Retrospective on Engineering Admission Policy Change 281

stability. Similar taxonomies are indicated by a consistent number of core profiles, similar in
magnitude and shape as determined by the cluster homogeneity coefficient > 0.6 (Tryon &
Bailey, 1970) and Cattell’s (1978) similarity coefficient (similar: rp > 0.95, dissimilar: |rp| < 0.7
[Freedman & Stumpf, 1978]). Values of Cattell’s coefficient comparing clusters expected to be
highly similar are consistently 0.94 < rp < 1.00, while those expected to be dissimilar were
|rp|< 0.54. Therefore, only the results from the total combined pools are presented here. Simi-
larly, previous research has shown gender-based results were the same when evaluating differ-
ences between male and female students each year and if taken in aggregate (Reid, 2009; Reid
& Imbrie, 2009). For all analyses, a p-value of 0.05 or less was considered to be statistically sig-
nificant. In Tables 2 and 3, a difference in medians is statistically significant when its reported
p-value is 0.05 or less. Because the size of the pool of applications was large (n > 38,000),
most differences in medians were found to be statistically significant. Therefore, to determine
whether the differences are also meaningful, Cohen's d was used to determine the effect size of the
differences. Cohen (1988) originally defined ranges for effect sizes as small, d = 0.2; medium,
d = 0.5; and large, d = 0.8, with the caveat that “there is a certain risk inherent in offering con-
ventional operational definitions for those terms for use in power analysis in as diverse a field of
inquiry as behavioral science” (p. 25). On the basis of subsequent exploration of effect sizes as
they apply to research in the social sciences, Hyde (2005; Hyde & Linn, 2006) defined the
ranges as part of the gender similarity hypothesis as near-zero, d ≤ 0.10; small, 0.11 < d ≤ 0.35;
moderate, 0.36 < d ≤ 0.65; large, 0.66 < d ≤ 1.0; and very large, d > 1.0. In the results tables,
moderate effect sizes are indicated by one asterisk.
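
The effect-size calculation and the Hyde labels quoted above can be captured in a few lines of Python. The pooled-standard-deviation form of Cohen's d used here is the conventional one; the article does not spell out its exact computation, so treat this as a sketch rather than the authors' code.

import statistics

def cohens_d(sample_a, sample_b):
    """Cohen's d for two independent samples using the pooled standard deviation."""
    n_a, n_b = len(sample_a), len(sample_b)
    mean_a, mean_b = statistics.fmean(sample_a), statistics.fmean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

def hyde_label(d):
    """Classify |d| using the Hyde (2005) ranges quoted in the text."""
    magnitude = abs(d)
    if magnitude <= 0.10:
        return "near-zero"
    if magnitude <= 0.35:
        return "small"
    if magnitude <= 0.65:
        return "moderate"
    if magnitude <= 1.0:
        return "large"
    return "very large"

d = cohens_d([3.9, 3.8, 4.0, 3.7, 3.9], [3.6, 3.7, 3.5, 3.8, 3.4])
print(f"d = {d:.2f} ({hyde_label(d)})")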

The data for the overall applicant pool (Table 2) show that the medians of the women’s
overall GPA, core GPA, class rank, SATe verbal scores, and SATe total scores are statistically
higher than those of the men. In terms of effect sizes, the differences between men’s and wom-
en’s overall GPA, core GPA, and class rank are moderate; all others are small or near-zero.
The same types of results were found for engineering applicants to a small comprehensive
regional university in New Jersey (Cleary, Riddell, & Hartmann, 2008).

Figure 3 shows boxplots of the distribution of overall GPA and SATe math scores of appli-
cants by gender. The box represents the middle 50% of the data, and the horizontal line
through the box is the median. The vertical lines, often called whiskers, extending from the top
and bottom of the box are 1.5 interquartile ranges in length; the dots denote individual data
points outside of that range. Note that the men have a much wider data spread and longer tails, especially
on the lower end. Figure 3 data clearly indicate that men with lower high school GPAs apply

Table 3 Metric Medians for Students Admitted to Engineering, 2006–2010 Entry Cohorts

                 Women               Men
Metric           Median   n          Median   n         p        Cohen's d
Overall GPA      4.0      4,937      3.8      20,131    0.0000   0.38*
Core GPA         3.8      6,763      3.62     22,748    0.0000   0.42*
Class rank       95       3,991      91       12,520    0.0000   0.35*
SATe verbal      640      6,699      630      22,511    0.0000   0.14
SATe math        690      6,699      710      22,511    0.0000   -0.28
SATe total       1,320    6,699      1,330    22,511    0.0000   -0.07

*Moderate effect size.


Figure 3 Data distribution of (A) overall GPA and (B) SATe math scores for applicants to engineering, 2006–2010 cohorts.


for admission to engineering, whereas similar women do not apply. Data distributions for high
school core GPA, high school class rank, SATe verbal scores, and SATe total scores are similar
to those shown in Figure 3 and are not presented here.
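
Readers who want to produce comparable boxplots from their own admissions data can start from the matplotlib sketch below; the numbers are synthetic stand-ins, since the applicant records themselves are not public.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
# synthetic stand-ins for overall high school GPA by gender
women = np.clip(rng.normal(3.8, 0.25, size=800), 0.0, 4.0)
men = np.clip(rng.normal(3.6, 0.40, size=3000), 0.0, 4.0)

fig, ax = plt.subplots()
# whis=1.5 draws whiskers at 1.5 interquartile ranges; points beyond them are plotted individually
ax.boxplot([women, men], whis=1.5)
ax.set_xticks([1, 2])
ax.set_xticklabels(["Women", "Men"])
ax.set_ylabel("Overall high school GPA")
ax.set_title("Overall GPA by gender (synthetic illustration)")
plt.show()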

An analysis of the pool of students admitted to engineering is shown in Table 3. This table
includes the sample size and median value for each metric as well as the p-value and Cohen’s d
for each comparison. The medians of the women’s overall GPA, core GPA, class rank, and
SATe verbal scores are statistically higher than those of the men. The medians of the men’s
SATe math and SATe total scores were statistically higher than the median of the women’s. In
terms of effect sizes, the differences between men’s and women’s overall GPA, core GPA, and
class rank are moderate; all others are small or near-zero.

The boxplots in Figure 4 show data point distributions of overall GPA and SATe math scores
for the men and women admitted to engineering. Note that the men have a much wider data
spread and longer tails, especially on the lower end for overall GPA. This characteristic of the
distribution is also present in the overall application distributions (Figure 3).

Discussion It is not surprising, perhaps, to see gender-based differences in the overall
population of applicants to engineering because the university and its admissions office have
limited influence over the population of who applies. In an ideal admission process, however,
one would not expect significant differences between the metrics of men and women in
institutionally defined populations, that is, those that the admissions office controls, such as
the admitted student population.

This university states that it uses standardized tests as part of its admission criteria. Admis-
sion counselors typically consider standardized test scores when estimating the applicant’s like-
lihood of academic success in college.

The literature indicates that standardized test scores, however, are not as good at predict-
ing student success in college as high school metrics and are gender-biased. Research pub-
lished by the College Board, the administrator of the SAT, has indicated that a student’s high
school grades and class rank …