
PhilipCash· TinoStanković

MarioŠtorga Editors

Experimental

Design

Research

Approaches, Perspectives, Applications

http://www.springer.com/978-3-319-33779-1


Preface

This book's origins lie in the editors' own experiences of developing and reviewing experimental studies of design and, in particular, in our collaborative excitement when combining new methods and disciplinary insights with more traditional experimental design research.

Researchers face ever-growing technical, methodological, and theoretical possibilities, and we have found in our own research, as well as in that of our students, that getting to grips with these topics can prove somewhat daunting. This book aims both to help researchers share in our enthusiasm for experimental design research and to provide practical support in bringing together the many different perspectives and methods available to develop scientifically robust and impactful experimental studies.

Fundamentally, this book builds on the methodological foundations laid down by many authors in the design research field, as well as on our field's long tradition of boundary-spanning empirical studies. Without these works this book would not have been possible. In this sense, each chapter reflects and builds on key thinking in the design research field in order to provide the reader with chapters that not only constitute distinct research contributions in their own right but also help bring cohesive insight into experimental design research as a whole.

Throughout the writing process our focus has continually been on bringing together insights for researchers both young and established, with the aim of taking experimental design research to the next level of scientific development. In particular, it is not our aim to lay down a prescriptive set of methodological rules, but rather to provide researchers with the concepts, paradigms, and means they need to understand, bridge, and build on the many research methodologies and methods in this domain. Thus this book forms a bridge between specific methods and wider methodology, in order both to develop better methods and to help researchers contextualise their work in the wider methodological landscape.

Over the last decades design research has grown as a field in terms of both its scientific and industrial significance. However, with this growth have come challenges of scientific rigour, of integrating diverse empirical and experimental approaches, and of building wider scientific impact outside of design research. We see this book as a contribution to this process of scientific and methodological development, and more generally see this process of growth as a necessary and inspiring development taking design research into the future alongside its more fundamental brethren, such as psychology, artificial intelligence, or biotechnology. This book reflects our vision of design research as an ever more rigorous and scientifically exciting field, and we think that this is also reflected in the substantial and insightful works provided by each of the chapter authors, without whom this book would have been impossible!

Philip Cash

Tino Stanković

Mario Štorga


Contents

Part I The Foundations of Experimental Design Research

1 An Introduction to Experimental Design Research ............... 3
Philip Cash, Tino Stanković and Mario Štorga

2 Evaluation of Empirical Design Studies and Metrics ............. 13
Mahmoud Dinar, Joshua D. Summers, Jami Shah and Yong-Seok Park

3 Quantitative Research Principles and Methods for Human-Focused Research in Engineering Design ............. 41
Mark A. Robinson

Part II Classical Approaches to Experimental Design Research

4 Creativity in Individual Design Work .......................... 67
Yukari Nagai

5 Methods for Studying Collaborative Design Thinking ............ 83
Andy Dong and Maaike Kleinsmann

6 The Integration of Quantitative Biometric Measures and Experimental Design Research ............................ 97
Quentin Lohmeyer and Mirko Meboldt

7 Integration of User-Centric Psychological and Neuroscience Perspectives in Experimental Design Research .................. 113
Claus-Christian Carbon

Part III Computational Approaches to Experimental Design Research

8 The Complexity of Design Networks: Structure and Dynamics ..... 129
Dan Braha

9 Using Network Science to Support Design Research: From Counting to Connecting ................................ 153
Pedro Parraguez and Anja Maier

10 Computational Modelling of Teamwork in Design ............... 173
Ricardo Sosa

11 Human and Computational Approaches for Design Problem-Solving ........................................... 187
Paul Egan and Jonathan Cagan

Part IV Building on Experimental Design Research

12 Theory Building in Experimental Design Research ............... 209
Imre Horváth

13 Synthesizing Knowledge in Design Research .................... 233
Kalle A. Piirainen

14 Scientific Models from Empirical Design Research ............... 253
John S. Gero and Jeff W.T. Kan


Chapter 1

An Introduction to Experimental Design Research

Philip Cash, Tino Stanković and Mario Štorga

© Springer International Publishing Switzerland 2016

P. Cash et al. (eds.), Experimental Design Research, DOI 10.1007/978-3-319-33781-4_1

Abstract Design research brings together influences from the whole gamut of social, psychological, and more technical sciences to create a tradition of empirical study stretching back over 50 years (Horvath 2004; Cross 2007). A growing part of this empirical tradition is experimental, which has gained in importance as the field has matured. As in other evolving disciplines, e.g. behavioural psychology, this maturation brings with it ever-greater scientific and methodological demands (Reiser 1939; Dorst 2008). In particular, the experimental paradigm holds distinct and significant challenges for the modern design researcher. Thus, this book brings together leading researchers from across design research in order to provide the reader with a foundation in experimental design research; an appreciation of possible experimental perspectives; and insight into how experiments can be used to build robust and significant scientific knowledge. This chapter sets the stage for these discussions by introducing experimental design research, outlining the various types of experimental approach, and explaining the role of this book in the wider methodological context.

Keywords Design science · Experimental studies · Research methods

P. Cash (*)
Department of Management Engineering, Technical University of Denmark, Diplomvej, 2800 Lyngby, Denmark
e-mail: pcas@dtu.dk

T. Stanković
Engineering Design and Computing Laboratory, Department of Mechanical and Process Engineering, Swiss Federal Institute of Technology Zurich, Zurich, Switzerland

M. Štorga
Faculty of Mechanical Engineering and Naval Architecture, University of Zagreb, Zagreb, Croatia


1.1 The Growing Role of Experimentation in Design Research

Over the last 50 years, design research has seen a number of paradigm shifts in its scientific and empirical culture. Starting in the 1960s and 1970s, researchers were concerned with answering what design science actually meant and how scientific practices should be adapted to fit this emerging field where problem-solving and scientific understanding shared priority (Simon 1978; Hubka 1984; Eder 2011). This was the first major effort to adapt and develop methods and processes from the scientific domain into 'design science', where researchers were also concerned with changing design practice. This effort stemmed from a drive to develop design knowledge and scientific methods that better reflected the fact that although design is concerned with the artefact, designing includes methods, processes, and tools not directly embedded in daily practice. In the 1980s, a new paradigm emerged, characterised by the development of 'design studies'. This was driven by a growing focus on understanding and rationalising the creative design process in terms of designer behaviour and cognition. This new paradigm was also linked to the emergence of computer-supported design research (see Part III). In the 1990s, there was a move to bring coherence to the field by uniting the design studies and design science paradigms under the wider label of design research, which more fully captured the theoretical, empirical, and pragmatic aspects of research into design. This also reflected a larger effort to unite previously disparate research groups and empirical approaches in a single field, bringing together research and industrial application. This effort has sparked the most recent development since the 2000s: a drive to bring together the varied disciplines in design research and to reinvigorate the arduous process of bringing order and increasing scientific rigour to empirical design research (Brandt and Binder 2007; Dorst 2008). This has been reflected in the renewed focus on the development of field-specific research methods (Ball and Ormerod 2000a), a prioritisation of theoretical and empirical rigour (Dorst 2008), and the emergence of specific design research methodologies (Blessing and Chakrabarti 2009). Thus, the stage is set for our discussion of experimentation in the wider context of empirical design research.

Empirical studies in design research provide the foundation for the development of both scientific knowledge about and impactful guidance for design (see Chap. 2, and Part IV). More formally, empirical studies support the theory building/testing cycle illustrated by the black circles in Fig. 1.1 (Eisenhardt 1989; Eisenhardt and Graebner 2007). Empirical insights are used to derive new perspectives and build explanations, as well as to test those explanations (Carroll and Swatman 2000; Gorard and Cook 2007). Empiricism encapsulates all the varied means of deriving evidence from direct or indirect observation or experience. Experimentation thus forms one part of the wider empirical milieu.

In the context of design research, and for the purposes of opening this book, experimentation can be defined as "a recording of observations, quantitative or qualitative, made by defined and recorded operations and in defined conditions, followed by examination of the data, by appropriate statistical and mathematical rules, for the existence of significant relations" (Nesselroade and Cattell 2013, 11:22). This typically follows (although is not limited to) a process of induction, deduction, and testing (Nesselroade and Cattell 2013) in support of the theory building/testing cycle (white circles in Fig. 1.1). Effective experimentation forms a core part of elucidating specific variables, developing and testing relationships/hypotheses, and comparing the predictive power of competing theories (Wacker 1998; Snow and Thomas 2007). It is important to recognise that this perspective limits the focus of our discussion by excluding the observation or instigation of unique and incomparable but observed and manipulated events, which might be referred to as an experiment by an action researcher. For more on the development of experimentation in psychology, see Nesselroade and Cattell (2013), and for a substantially more detailed discussion of how experimentation fits into theory building in design research, see Chap. 12, and Part IV more generally.

Over the last 20 years, the importance of experimentation has steadily grown within design research. For example, in 1990, just 2 % (1 of 43) of papers in Design Studies dealt with experiments, whilst in 2014, that number was 24 % (8 of 33) (ScienceDirect 2015).1 Experimentation in its various forms is increasingly recognised as a powerful means for carrying out design research (see Part I, Chap. 3). However, this brings increasing demands in terms of how and where experimental techniques can be applied, methodological rigour, and the generation of scientific knowledge (Cash and Culley 2014; Cash and Piirainen 2015). Design research is a comparatively young field and is thus still in the process of developing its own methodological and scientific best practices. This field-specific development is key to building a rigorous body of methods and scientific knowledge within a discipline (see Part I, Chap. 3) (Kitchenham et al. 2002; Blessing and Chakrabarti 2009). Thus, this book seeks to address the need to develop a tradition of experimentation that is tailored to the specific challenges of design research, whilst also bringing together the lessons learned from the varied fields to which design research is linked. In order to address this need, it is first necessary to clarify what it is we mean when we talk about experiments in design research.

1 Keyword: experiment in abstract, title or keywords from 1990 to 2015.

Fig. 1.1 Theory building and testing as an integrated cycle of empiricism, and its link to experimentation
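
As a small worked check on the figures quoted above (our own sketch, not part of the original chapter), the percentages follow directly from the reported paper counts:

```python
# Minimal sketch (our illustration): recomputing the shares quoted above
# from the paper counts reported in the text (ScienceDirect 2015).
counts = {1990: (1, 43), 2014: (8, 33)}  # year: (experimental papers, total papers)

for year, (experimental, total) in counts.items():
    share = 100 * experimental / total
    print(f"{year}: {experimental} of {total} papers, i.e. {share:.0f} %")
# 1990: 1 of 43 papers, i.e. 2 %
# 2014: 8 of 33 papers, i.e. 24 %
```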

1.2 Experimental Design Research

The scientific paradigm can be generally characterised as the generation of reliable knowledge about the world (see Chap. 13 for more). Broadly, this has resulted in a tendency, most notable in the natural sciences, to take the production of experimental knowledge for granted and to focus on theory (Radder 2003). However, this perspective can be deceptively one-sided, particularly in the applied context of design research. Here, the development of experimentation is intrinsically linked with the development of technology (Tiles and Oberdiek 1995; Radder 2003). Experimental methods build on (often specifically designed) technologies and technical insights (e.g. see Chap. 6), whilst simultaneously contributing to technological innovations and technical understanding (e.g. see Part III). Thus, there are a number of parallels between the realisation of experimental processes and those processes of technological development that often form the focus of design research. This is particularly important in the social and human sciences, e.g. economics, sociology, medicine, and psychology, where experimental activities form a significant part of the wider scientific endeavour. Problematically in this context, the philosophical discussion surrounding experimental research builds almost exclusively on the natural sciences. Thus, there is a significant need to develop methodological and scientific understanding of experimentation that reflects the unique challenges in the human sciences (see, e.g. Winston and Blais 1996 or Guala 2005), of which design research is a part.

In experimental design research, these discussions are nascent and form a major reason for the development of this book. Core to this endeavour is the realisation that experimental design research concerns human beings and thus faces a set of challenges not fully reflected by discussions of experimentation in the natural sciences (Radder 2003). Specifically, human subjects are often aware of, actively interpret, and react to what is happening in an experiment. Further, this awareness can influence subjects' response to an experiment, often above and beyond the actual intervention response intended by the experimenter. This challenge is reflected in biases such as the John Henry effect, and in methodological techniques such as the placebo control, which are well recognised in, e.g., medical science (Glasgow and Emmons 2007), but are only beginning to be acknowledged and discussed in design research (Dyba and Dingsoyr 2008; Cash and Culley 2014).


More broadly, issues of bias and control are only one consideration when dealing with human subjects. From a socio-cultural perspective, science dealing with human subjects must also respect a common-sense perspective on human beings. Here, social and ethical issues are paramount. Radder (2003, 274) asks, "who is entitled to define the nature of human beings: the scientists or the people themselves?" From this, it is possible to draw parallels with the discussions underpinning design practice, i.e. how designers can ethically influence users (Berdichevsky and Neuenschwander 1999; Lilley and Wilson 2013). Thus, just as designers must consider their right to interpret and influence users, design researchers must also consider the implications stemming from their interpretation and influencing of designers. This forms the bedrock on which all discussions of experimental research must build. However, it is not the purpose of this work to discuss these issues further, and we simply point to the comprehensive ethical guidelines provided by organisations such as the American Psychological Association (2010) and the National Academy of Sciences (2009).

As discussed above, experimental design research encapsulates a wide range of research designs, sharing fundamental design conventions (see Part I, Chap. 3). Table 1.1 gives an overview of the basic types of experimental study, which are further elaborated with respect to design research in Chap. 12. This does not include computer-based simulation studies, which will be dealt with in more detail in Part III. Thus, Table 1.1 describes the types of experimental approach, how each type controls extraneous variables, and what type of evidence each is capable of generating. For example, the recent study by Dong et al. (2015) utilised random assignment and a between-group design, making it a type of true experiment. In contrast, the study by Cash et al. (2012) used a similar type of between-group comparison but used non-random group assignment, making it a type of quasi-experiment. Within each type, there are numerous sub-types. For detailed explanation of these experimental design considerations, e.g. selecting an appropriate sample, see Chap. 3.

Table 1.1 An overview of basic types of experimental design

Randomised or true experiment
  Summary description: Participants are randomly assigned to treatment conditions, including a control (see also randomised controlled trial)
  Means of control: Extraneous variables controlled via random assignment and comparison with a control condition
  Capable of demonstrating: Cause and effect; high quality of evidence

Quasi-experiment (natural experiment)
  Summary description: Participants are non-randomly assigned to treatment conditions (participants can also be assigned by forces beyond the experimenter's control in the case of natural experiments)
  Means of control: Extraneous variables controlled via comparison with a control condition
  Capable of demonstrating: Correlation

Pre-experiment or pseudo-experiment
  Summary description: Follows experimental design conventions, but no control condition is used; sometimes called a pseudo-experiment
  Means of control: Extraneous variables mitigated via comparison with a no-treatment group (i.e. a group that receives no intervention at all) or using a single group pre- versus post-design
  Capable of demonstrating: Correlation; weak generalisability; low quality of evidence

Understanding the distinction between the types outlined in Table 1.1 can be critical to assessing the evidence provided by a study and how this can be used to develop rigorous scientific knowledge (see Part IV).
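
To make the distinction between the first two types in Table 1.1 concrete, the following minimal Python sketch (our own illustration, not drawn from the cited studies) contrasts random assignment, as used in a true experiment, with the pre-existing, non-random grouping typical of a quasi-experiment; the participant identifiers and cohort labels are hypothetical:

```python
# Minimal sketch (hypothetical participants and cohorts): random versus
# non-random assignment to conditions, cf. Table 1.1.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical participants

# True experiment: extraneous variables are controlled via random assignment
# to treatment and control conditions.
rng = random.Random(42)  # recorded seed keeps the assignment reproducible
shuffled = rng.sample(participants, k=len(participants))
treatment_group, control_group = shuffled[:10], shuffled[10:]

# Quasi-experiment: groups follow a pre-existing, non-random split
# (e.g. two intact lab sessions), so only correlational claims are supported.
cohort_a, cohort_b = participants[:10], participants[10:]

print("True experiment:", treatment_group, control_group)
print("Quasi-experiment:", cohort_a, cohort_b)
```

The methodological point lies solely in the assignment mechanism: everything downstream (tasks, measures, analysis) can be identical, yet, as Table 1.1 indicates, the strength of the resulting evidence differs.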

In terms of subject, experiments can be applied at the cognitive or organisational level, utilise classical (Part II) or computational approaches (Part III), and include long or short time frames. Thus, their integration with wider methodology is critical if rigorous evidence and a cohesive body of scientific knowledge are to be developed (Parts I and IV).

In experimental design research, this challenge of integration is more significant than ever due to the growing importance of computer-based experimentation. Building on the pioneering works in artificial intelligence, where computers were predominantly used for simulation, which enables the study of various models of human cognition (Weisberg 2006), recent developments in scientific practice highlight the potential for computer-based experimentation. New means for automated analysis, data interpretation and visualisation, and storage and dissemination reflect just a few of the novel approaches opened up by computer-based research (Radder 2003). As with previous methodological paradigm shifts (Sect. 1.1), this rapidly expanding research domain faces the challenge of how to define experimental standards and systematic procedures that ensure both the justifiability of the experimental method and the repeatability of the obtained data. However, the potential for design researchers is huge, particularly in the emergent science of complexity and the study of the sociological and psychological roots of designing (see Part III). Thus, this book brings together and confronts the commonalities and conflicts between classical and computational experimental design research in order to distil core methodological insights that underpin all experimental design research, bridging methodology and methods, approaches, perspectives, and applications.
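
As one illustration of the repeatability concern raised above (a sketch under our own assumptions, not a method prescribed in this book), a computer-based experiment can log its random seed and full configuration alongside its results so that any run can be repeated exactly; the team-size factor and toy outcome model below are hypothetical:

```python
# Minimal sketch (hypothetical factor and outcome model): a seeded, fully
# logged simulation run as one way to support repeatability in
# computer-based experimentation.
import json
import random
import statistics

def simulate_design_team(team_size: int, rng: random.Random) -> float:
    """Toy outcome: mean 'solution quality' over 100 simulated design episodes."""
    return statistics.mean(
        min(1.0, rng.gauss(mu=0.5 + 0.05 * team_size, sigma=0.1))
        for _ in range(100)
    )

conditions = {"small_team": 3, "large_team": 8}  # hypothetical experimental conditions
results = {
    name: simulate_design_team(size, random.Random(2016))  # fixed seed per condition
    for name, size in conditions.items()
}

# Record the configuration with the results so the run can be reproduced exactly.
print(json.dumps({"seed": 2016, "conditions": conditions, "results": results}, indent=2))
```

Logging the seed and the condition table in this way is a lightweight step towards the experimental standards and systematic procedures called for above.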

1.3 The Aim of This Book: Linking Methodology, Methods, and Application

From Sects. 1.1 and 1.2, it is evident that experiments are well described at both the methodology level, in terms of their role in theory building/testing (Fig. 1.1), and the detailed method-specific level (Table 1.1). At the methodology level, numerous texts offer guidance, for example, Blessing and Chakrabarti (2009), Saunders et al. (2009), or Robson (2002) (also see Part IV). Similarly, at the method-specific level, texts such as those by Kirk (2009) or Shadish et al. (2002) explore experimental design in detail (also see Part II). Further, there are countless articles discussing specific aspects of experimental methodology or design. Thus, why does a need exist in design research?

An aspect that neither methodology nor method-specific texts deal with is how researchers can adapt or adopt these insights into the specific context of their own field. This need for field-specific development and adaptation at the interface between methodology and method is highlighted by numerous authors both in design research (Ball and Ormerod 2000b; Blessing and Chakrabarti 2009, 8) and in its related fields, where similar efforts have received significant support (Levin and O'Donnell 1999; Kitchenham et al. 2002). The key element that drives field-specific adaptation is the integration between specific methods and the wider body of research practice and methodology, i.e. the middle ground between methodology and methods. Thus, it is this middle ground that this book seeks to fill, helping to contextualise experiments within design research and exploring how they can be used, adapted to, and developed in the design research context, as illustrated in Fig. 1.2. This book explicitly answers the need articulated in Sect. 1.1, to develop a tradition of experimentation that is both grounded in rigorous methodology and tailored to the specific challenges of design research, by supporting design researchers in the following:

• Bringing together methodology and methods for experimental design research.
• Exploring different perspectives on how experimental methods can be successfully adapted to the design research context.
• Discussing approaches to developing greater scientific rigour and best practice in experimental design research.
• Building more robust scientific tools and methods in order to shape a cohesive body of scientific knowledge.

Fig. 1.2 The middle ground between methodology and methods


1.4 The Structure of This Book

Throughout this book, chapter authors draw on a wide range of perspectives in order to provide a multifaceted foundation in the approaches to, and use of, experimental design research in building rigorous scientific knowledge. This is structured in four parts, outlined below and illustrated in Fig. 1.3:

Part I The foundations of experimental design research deals with the development of the experimental design research tradition, its role in the wider scope of design research empiricism, and the fundamentals of experimental design.

Part II Classical approaches to experimental design research deals with the study of individuals and teams, and the key features of examining these subjects in the design research context.

Part III Computational approaches to experimental design research deals with the use of computation to complement and extend classical experimental design research, as well as significant developments in this field.

Part IV Building on experimental design research deals with how to draw all these approaches and perspectives together in order to build meaningful theory, a cohesive body of scientific knowledge, and effective models of design.

Fig. 1.3 An overview of this book's content and structure


References

American Psychological Association (2010) Ethical principles of psychologists and code of conduct. Am Psychol 57:1060–1073
Ball LJ, Ormerod TC (2000a) Putting ethnography to work: the case for a cognitive ethnography of design. Int J Hum Comput Stud 53:147–168
Ball LJ, Ormerod TC (2000b) Applying ethnography in the analysis and support of expertise in engineering design. Des Stud 21:403–421
Berdichevsky D, Neuenschwander E (1999) Toward an ethics of persuasive technology. Commun ACM 42:51–58. doi:10.1145/301353.301410
Blessing LTM, Chakrabarti A (2009) DRM, a Design Research Methodology. Springer, New York
Brandt E, Binder T (2007) Experimental design research: genealogy, intervention, argument. In: IASDR international association of societies of design research, pp 1–18
Carroll JM, Swatman PA (2000) Structured-case: a methodological framework for building theory in information systems research. Eur J Inf Syst 9:235–242
Cash P, Culley S (2014) The role of experimental studies in design research. In: Rodgers P, Yee J (eds) The Routledge companion to design research. Routledge, New York, pp 175–189
Cash P, Elias EWA, Dekoninck E, Culley SJ (2012) Methodological insights from a rigorous small scale design experiment. Des Stud 33:208–235
Cash P, Piirainen KA (2015) Building a cohesive body of design knowledge: developments from a design science research perspective. In: ICED 15 international conference on engineering design, Milan, Italy (in press)
Cross N (2007) Forty years of design research. Des Stud 28:1–4
Dong A, Lovallo D, Mounarath R (2015) The effect of abductive reasoning on concept selection decisions. Des Stud 37:37–58. doi:10.1016/j.destud.2014.12.004
Dorst K (2008) Design research: a revolution-waiting-to-happen. Des Stud 29:4–11
Dyba T, Dingsoyr T (2008) Empirical studies of agile software development: a systematic review. Inf Softw Technol 50:833–859
Eder WE (2011) Engineering design science and theory of technical systems: legacy of Vladimir Hubka. J Eng Des 25:361–385
Eisenhardt KM (1989) Building theories from case study research. Acad Manag Rev 14:532–550
Eisenhardt KM, Graebner ME (2007) Theory building from cases: opportunities and challenges. Acad Manag J 50:25–32
Glasgow RE, Emmons KM (2007) How can we increase translation of research into practice? Types of evidence needed. Annu Rev Public Health 28:413–433
Gorard S, Cook TD (2007) Where does good evidence come from? Int J Res Method Educ 30:307–323
Guala F (2005) The methodology of experimental economics. Cambridge University Press, Cambridge
Horvath I (2004) A treatise on order in engineering design research. Res Eng Design 15:155–181
Hubka V (1984) Theory of technical systems: fundamentals of scientific Konstruktionslehre. Springer, Berlin
Kirk RE (2009) Experimental design. Sage Publications, London
Kitchenham BA, Pfleeger SL, Pickard LM, Jones PW, Hoaglin DC, El-Emam K, Rosenberg J (2002) Preliminary guidelines for empirical research in software engineering. IEEE Trans Softw Eng 28:721–734
Levin JR, O'Donnell AM (1999) What to do about educational research's credibility gaps? Issues Educ 5:177–229
Lilley D, Wilson GT (2013) Integrating ethics into design for sustainable behaviour. J Des Res 11:278–299
National Academy of Sciences (2009) On being a scientist: a guide to responsible conduct in research
Nesselroade JR, Cattell RB (2013) Handbook of multivariate experimental psychology, vol 11. Springer Science & Business Media
Radder H (2003) The philosophy of scientific experimentation. University of Pittsburgh Press, Pittsburgh
Reiser OL (1939) Aristotelian, Galilean and non-Aristotelian modes of thinking. Psychol Rev 46:151–162
Robson C (2002) Real world research, 2nd edn. Wiley, Chichester
Saunders MNK, Lewis P, Thornhill A (2009) Research methods for business students, 3rd edn. Pearson, Essex
ScienceDirect (2015) ScienceDirect: paper repository (online). www.sciencedirect.com
Shadish WR, Cook TD, Campbell DT (2002) Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin, Boston
Simon HA (1978) The science of the artificial. Harvard University Press
Snow CC, Thomas JB (2007) Field research methods in strategic management: contributions to theory building and testing. J Manage Stud 31:457–480
Tiles M, Oberdiek H (1995) Living in a technological culture: human tools and human values. Routledge
Wacker JG (1998) A definition of theory: research guidelines for different theory-building research methods in operations management. J Oper Manage 16:361–385
Weisberg RW (2006) Creativity: understanding innovation in problem solving, science, invention, and the arts. John Wiley & Sons
Winston AS, Blais DJ (1996) What counts as an experiment? A transdisciplinary analysis of textbooks, 1930–1970. Am J Psychol 109:599–616
