According to Seels and Richey (1994), theory consists of the concepts, constructs, principles, and propositions that contribute to a body of knowledge. The power of a theory derives from its organized structure of related propositions that describe, explain, predict, and control observed phenomena (Gall, Borg, & Gall, 1996).
It is human nature to make sense of the world. Observing the world through the senses, people tend to impose order by working out what is going on. Theory is developed to describe what is observed and to explain why and how the observed phenomenon occurs. The intention of theory development, however, sometimes goes beyond these descriptive and explanatory functions: the ultimate purposes of theory building may lie in the prediction and control of the phenomenon. This tendency of theory building is described very clearly by Babbie (1998):
" Observing and
tryng to interpret what we observe is a native human activity. It's the
foundation of our survival. In everyday life, however, we are often casual and
semiconscious in both our observations and our interpretations, so that we make
mistakes in both. Scientists make observation and interpretation conscious,
deliberate acts." (p. 3)
It is these conscious and deliberate observations and interpretations that lay the foundations of theory. Theory, thus, is a systematic explanation for the observations, comprising a set of constructs linked to one another by specified relationships (Gall, Borg, & Gall, 1996). Constructs are invented concepts representing the ideas or entities to be explained, e.g., personality, intelligence, or achievement.
There are two approaches to theory development:
- Grounded theory approach: constructs and laws are derived directly from the immediate data that one has collected.
- Hypothesis testing: one starts by formulating a hypothesis based on a theory and then submits it to a test by collecting empirical data. This involves three major steps (sketched in code below):
  - The formulation of a hypothesis: a proposition about the relationship between two or more theoretical constructs
  - The deduction of observable consequences of the hypothesis: this guides the design of the research method, determining what variables to control, how to control them, and what to observe, e.g., samples and measurements of variables
  - The testing of the hypothesis by making observations, i.e., collecting and analyzing research data
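As a concrete illustration of the hypothesis-testing approach, the minimal Python sketch below walks through the three steps with synthetic data. The hypothesis, group names, and effect size are hypothetical, invented for the example; only the general procedure follows the steps above.

```python
import numpy as np
from scipy import stats

# Step 1 - Formulate a hypothesis (hypothetical example):
# "Learners who receive worked examples score higher than learners who do not."

# Step 2 - Deduce an observable consequence and a design: compare mean test
# scores of two randomly assigned groups, holding other variables constant.
rng = np.random.default_rng(42)
treatment = rng.normal(loc=78, scale=8, size=30)  # synthetic scores, worked examples
control = rng.normal(loc=72, scale=8, size=30)    # synthetic scores, no worked examples

# Step 3 - Test the hypothesis against the observations.
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Observations are consistent with the hypothesis.")
else:
    print("No significant difference; the hypothesis is not supported.")
```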
A knowledge-building process cannot be complete without embracing both approaches. The establishment of theories requires both sensory perception and rational reflection; the creation and the verification of theories both contribute to the knowledge building of a field.
How do instructional-design theories differ from learning theories?
Reigeluth (1999) distinguished learning theories from instructional-design theories by the major function each is proposed to serve. Learning theories, which are concerned with describing and explaining how learning occurs, highlight the descriptive and explanatory functions of a theory, i.e., what it is and how it is. Instructional-design theories, by contrast, focus on prescribing how to better facilitate learning, emphasizing the prescriptive and controlling functions of a theory, i.e., what it should be and how it should be.
Based on the literature reviewed in Reigeluth's (1999) discussion of what instructional-design theories are, the two kinds of theory can be compared as follows:

| Instructional-design theories | Learning theories |
| --- | --- |
| Prescriptive: offer explicit guidance on how to better facilitate learning and development | Descriptive and explanatory: describe and explain how learning occurs |
| Concerned with what instruction should be and how it should be | Concerned with what learning is and how it is |
References:
Gall, M. D., Borg, W. R., & Gall, J. P. (1996). Educational research: An introduction (6th ed.). White Plains, NY: Longman.
Reigeluth, C. M. (1999). What is instructional-design theory and how is it changing? In C. M. Reigeluth (Ed.), Instructional-design theories and models (Vol. II: A new paradigm of instructional theory, pp. 5-29). Mahwah, NJ: Lawrence Erlbaum Associates.
Seels, B. B., & Richey, R. C. (1994). Instructional technology: The definition and domains of the field. Washington, DC: Association for Educational Communications and Technology.
Learning theories are concerned with what learning is and how learning occurs.
In the late 1950s, learning psychology underwent a scientific revolution: a paradigm shift from behaviorism, which focuses on observable behavioral changes, to cognitive psychology, which is more interested in what learners know and how it is acquired, i.e., mental structures and mental processes.
According to Jonassen (1991), this psychological evolution had little effect on IST (Instructional Systems Technology) theory and practice. Jonassen argued that a philosophical paradigm shift was needed to re-conceptualize the learner's mental state. In the 1990s, the IST field developed a growing interest in constructivism, an epistemological belief about what "knowing" is and how one "comes to know." Constructivists believe in individual interpretations of reality, i.e., the knower and the known are interactive and inseparable.
Behaviorists proposed that psychological theories should exclusively address the physical stimuli that an organism encounters and its observable behavioral responses to them. They focus on the scientific study of behavior, i.e., "to discover the lawful relationship between environmental events and behaviors" (Gredler, 1997). Such scientific inquiry implies an objectivist philosophical belief that there is a single reality and that objective knowledge can be acquired.
On the other hand, the focus on mental structures and processes in cognitive psychology does not explicitly indicate a philosophical position on whether there is an objective reality. The internal representation can be taken to echo the external reality, which asserts the objectivist position that the mind can stand separate and independent from the body; thus knowledge can be transferred from outside the mind to inside it. However, internal representation can also be regarded as a subjective construction integrating incoming information with existing knowledge structures, which entails the constructivist position that there is no single objective reality and that knowledge cannot exist independently of the knower.
Constructivism is claimed to be the synthesis of empiricism, which sees knowledge as the product of sensory perception, and rationalism, which sees it as the product of rational reflection (Driscoll, 2000). Thus, knowledge is no longer the correspondence or reflection of an objective ontological reality; it is an adaptive and active construction of subjective experience and interpretation of the world. This constructivist view of knowledge implies that learners should be provided with an environment that encourages them to interact with the objects in it and thus to actively participate and engage in thinking and reflection.
Based on broad distinctions in the assumptions about what learning is and what mechanism the learning process operates on, three major views of learning can be identified:
| What is learning? | What is the mechanism that the learning process operates on? |
| --- | --- |
| Learning is behavioral change. | The association between stimuli and responses via the manipulation of reinforcement |
| Learning is the growth of conceptual understanding; it focuses on the internal representation of some kind of external reality. | Information processing and knowledge representation within the learner, i.e., cognitive processes that take place within the heads of individuals |
| Learning is effective participation in practices of inquiry and discourse; it focuses on the construction of meanings and the use of concepts and skills. | A process of enculturation, emphasizing the socio-cultural setting and the activities of the people within the setting |
References:
Driscoll, M. P. (2000). Psychology of learning for instruction (2nd ed.). Needham Heights, MA: Allyn and Bacon.
Gredler, M. E. (1997). Learning and instruction: Theory into practice. Upper Saddle River, NJ: Prentice Hall.
Jonassen, D. H. (1991). Objectivism vs. constructivism: Do we need a new philosophical paradigm? Educational Technology Research and Development, 39(3), 5-14.
What is an instructional-design theory?
Gagné and Dick (1983) described the characteristics of instructional theories in terms of their functions and foundations:
- Functions: Instructional theories are prescriptive in nature. They relate specific instructional events to learning processes and learning outcomes, identify instructional conditions that optimize learning outcomes, and provide a rational description of the causal relationships between the procedures used to teach and their behavioral consequences in enhanced human performance.
- Foundations: Instructional theories are derived from learning research and theory.
Reigeluth (1999) used the term 'instructional-design theory', defined as a theory that "offers explicit guidance on how to better help people learn and develop. The kinds of learning and development may include cognitive, emotional, social, physical and spiritual."
Reigeluth (1999) explains instructional-design theory from several aspects:
Its characteristics
Reigeluth (1999) described four major characteristics of instructional-design theory:
- It is design-oriented: it focuses on the means to attain given goals for learning or development, and thus provides direct guidance on how to achieve those goals. Design-oriented theories are very different from descriptive theories, which describe the effects that occur when a given class of causal events occurs, or the sequence in which certain events occur.
- It identifies methods of instruction, i.e., ways to support and facilitate learning, as well as the situations in which those methods should and should not be used.
- In all instructional-design theories, the methods of instruction can be broken into more detailed component methods, which give educators more guidance about the different components and different ways to perform the methods, the different kinds of methods, and criteria that methods should meet.
- The methods are probabilistic rather than deterministic, focusing on control instead of description and explanation. In other words, instructional-design theories intend to control variables in the learning environment to achieve certain results.
Its components
There should be two major components in instructional-design theory:
- Methods of instruction: methods for facilitating human learning and development
- Instructional situations: indications of when and when not to use those methods, and descriptions of the conditions under which the instruction will take place (see the sketch after this list), including:
  - The nature of what is to be learned
  - The nature of the learner (prior knowledge, learning strategies, motivation)
  - The nature of the learning environment
  - The nature of the instructional development constraints
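To make the two-component structure concrete, here is a minimal Python sketch that represents an instructional-design theory as a record pairing methods with the situations in which they apply. All class and field names are hypothetical illustrations of the structure described above, not an API from Reigeluth.

```python
from dataclasses import dataclass, field


@dataclass
class Situation:
    """Conditions under which instruction takes place (hypothetical fields)."""
    content_nature: str   # the nature of what is to be learned
    learner_nature: str   # prior knowledge, learning strategies, motivation
    environment: str      # the nature of the learning environment
    constraints: list[str] = field(default_factory=list)  # development constraints


@dataclass
class InstructionalDesignTheory:
    """A theory pairs methods of instruction with the situations they suit."""
    name: str
    methods: list[str]    # ways to support and facilitate learning
    suitable_situations: list[Situation]

    def applies_to(self, situation: Situation) -> bool:
        # Simplistic matching rule, purely illustrative.
        return any(s.content_nature == situation.content_nature
                   for s in self.suitable_situations)


# Usage: a hypothetical theory and situation.
drill = InstructionalDesignTheory(
    name="Drill and practice (example)",
    methods=["present item", "elicit response", "give feedback"],
    suitable_situations=[Situation("procedural skill", "novice", "classroom")],
)
print(drill.applies_to(Situation("procedural skill", "novice", "lab")))  # True
```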
Its focus on value for decision making
Both values and empirics are important for making decisions about how to teach as well as what to teach. All instructional-design theories state explicitly what values guide their selection of goals and what values guide their selection of methods.
Its importance
It provides guidelines for practitioners: it transforms descriptive theory into methods of how to work, developing techniques and determining implementation details that are applicable to most conditions.
Reigeluth (1999) provides three questions to examine what an instructional-design theory can offer:
- What methods best facilitate
learning and human development under different situations?
- What learning-tool features
best allow an array of alternative methods to be made available to
learners and allow them to make decisions (with varying degrees of guidance)
about both content (what to learn) and methods while the instruction is in
progress?
- What system features best allow an instructional design team (that preferably includes all stakeholders) to design quality learning tools?
What instructional-design theories are discussed?
The instructional theories discussed in this knowledge base are classified according to their different theoretical foundations about learning.
References:
Reigeluth, C. M. (Ed.). (1999). Instructional-design theories and models (Vol. II: A new paradigm of instructional theory). Mahwah, NJ: Lawrence Erlbaum Associates.
Human beings tend to impose rationales to explain the phenomena that surround them. Some employ the mechanistic scientific view, and some take a systems view. The former is an analytical, reductionist, linear-causal paradigm, in which the observed phenomenon is broken into parts, and the parts are isolated from the whole and examined separately. Systems theory opposes such reduction of systems: it criticizes the mechanistic view for neglecting the relationship of the components to the larger systems, and it emphasizes the totality, complexity, and dynamics of the system. It also argues, however, that despite the complexity and diversity of the world, models, principles, and laws can be generalized across various systems, their components, and the relationships between them. In other words, corresponding abstractions and conceptual models can be applied to different phenomena.
Systems theory grows out of the general systems theory proposed by the biologist Ludwig von Bertalanffy, who recognized a compelling need for a unified and disciplined inquiry for understanding and dealing with increasing complexities, complexities that are beyond the competence of any single discipline. The theory pursues the scientific exploration, understanding, and control of systems.
The systems view investigates the components of phenomena, the interaction between the components, and the relation of the components to their larger environment. The underlying assumption of Bertalanffy's theory is that there are universal principles of organization across different fields. Boulding stated that GST aims to point out similarities in the theoretical constructions of different disciplines, and to develop something like a spectrum of theories -- a system of systems that may perform a gestalt in theoretical constructions.
Systems theory was furthered by Ross Ashby's work on cybernetics. "Cybernetics" comes from the Greek word for steersman; Wiener introduced the term as the science of communication and control in the animal and the machine. The idea was first used to describe the transmission of information through communication channels and the concept of feedback. It later evolved to emphasize the constructive power of the observer, who controls and constructs models of the systems with which the observer interacts.
Characteristics of systems theory
The major purpose of systems theory is to develop unifying principles through the integration of the various sciences, natural and social. With a focus on the structures and functions of the system, a system can be viewed from different perspectives:
- Open system: a system keeps evolving, and its properties keep emerging, through its interaction with its environment.
- Holistic view: systems theory focuses on the arrangement of and relations between the parts that connect them into a whole. The mutual interaction of the parts makes the whole greater than the sum of the parts.
- Goal-directedness: systems are goal-oriented and engage in feedback with the environment in order to meet their goals (a minimal feedback-loop sketch follows this list). Every part of the system is interdependent with the others, working together toward the goals.
- Self-organizing: productive dynamic systems are self-organizing, implying an ability of the system to adapt to changes in the environment. Using a metaphor of social interaction, Pask (1975, 1984) described the self-organizing process as a conversation between two or more participants whose purpose is to arrive at "an agreement over an understanding."
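The goal-directedness and feedback ideas above can be illustrated with a minimal control loop. The sketch below is a generic, hypothetical thermostat-style example (the setpoint, gain, and room model are invented), not a formalization from Bertalanffy or Ashby; it shows a system sensing its environment, comparing the observation with a goal, and acting to reduce the difference.

```python
# A minimal feedback loop: the system observes its environment, compares
# the observation with a goal (setpoint), and acts to reduce the error.
GOAL = 21.0        # desired room temperature (hypothetical setpoint, deg C)
GAIN = 0.3         # how strongly the system responds to error (hypothetical)

temperature = 15.0  # initial state of the environment
for step in range(10):
    error = GOAL - temperature   # feedback: difference between goal and state
    action = GAIN * error        # corrective action proportional to the error
    temperature += action        # the environment responds to the action
    print(f"step {step}: temperature = {temperature:.2f}")
# The temperature converges toward the goal: the loop is goal-directed,
# and its behavior emerges from the interaction with its environment.
```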
What are the assumptions of the systems view?
Reigeluth, Banathy, and Olson (1993) described the following assumptions in terms of design:
- "A systems view suggests
that essential quality of a part resides in its relationship to the
whole."
- "The system and its parts
should be designed from the perspective of the whole system and in view of
its embeddedness in its environment."
- "The systems design notion
requires both coordination and integration. We need to design all parts
operating at a specific system level of the organization interactively and
simultaneously. This requires coordination. The requirement of designing
for the interdependency across all system levels invites
integration."
The importance to Instructional Systems Design: Theory into Practice
From a systems view, the instructional system is an open system that interacts with the educational system and is an interdisciplinary subject matter that incorporates different fields, such as psychology, communication, education, and computer science. Also, the systems approach applied to instructional design brings forth an extensive analysis of the components that engage in carrying out the instructional goal, as well as the input-output-feedback transformational process that operates between the components (Banathy, 1991).
From a systems view, examination of the processes and components of the instructional system alone is not adequate to fully understand the system itself. Thus, attention shifts from the design components, such as instructional strategies, media selection, and material development, to implementation: how the system adopts the instructional innovation or change becomes the major issue. Systems theory provides a comprehensive perspective that allows designers to foresee resistance to change and to understand the complexity of educational systems.
Banathy (1996) suggests that besides paying attention to this functional structure of the system, we should also look at the system from two other perspectives. One is to examine the instructional system as a synthetic organism in the context of its community and the larger society. The other is to explore what the instructional system does through time. These suggestions, in fact, echo the ways Reigeluth, Banathy, and Olson (1993) proposed to adopt systems design:
"We should explore
educational change and renewal from the larger vistas of the evolving society,
and envision a new design. We should view the system we design from the
perspectives of the overall societal context. Approaching education from this
perspective, we shall enlarge our horizon and develop the largest possible
picture of education within the largest possible context."
Impacts on educational systems
Systemic change recognizes the interrelationships and interdependencies among the parts of the educational system, with the consequence that desired changes in one part of the system are accompanied by changes in the other parts that are necessary to support them. It also recognizes the interrelationships and interdependencies between the educational system and its community, including parents, employers, social service agencies, religious organizations, and much more, with the consequence that all those stakeholders are given active ownership of the change effort (Jenlink et al., 1996).
According to Banathy
(1987), there are four subsystems in any educational enterprise:
- The learning experience
subsystem: the cognitive information processing of the learner
- The instructional subsystem: the production, by instructional designers and teachers, of environments or opportunities for learners to learn
- The administrative subsystem: decision making about resource allocation by administrators, based on instructional needs and governance input
- The governance subsystem: the production, by the "owners," of policies that provide directions and resources for the educational enterprise to meet its needs
Based on this analysis, the instructional system is part of the educational system. Reigeluth (1996) offered further thoughts comparing ESD and ISD.
What is the relationship between ESD (Educational Systems Development) and ISD (Instructional Systems Development)?
- First, let's examine their definitions; based on these, ISD falls within ESD.
  - ESD is the "knowledge base about the complete educational enterprise" (Reigeluth, 1995).
  - ISD is the "knowledge base about the instructional subsystem" (Reigeluth, 1995).
- In what way do these two knowledge bases relate to each other?
- The function of ESD is to create a new paradigm of education; it is not concerned with making changes within the existing paradigm. It encompasses all subsystems of the educational enterprise and entails radical changes.
- ESD needs ISD: ISD, as a more fully developed knowledge base, can contribute insights to developing ESD, and the design skills and systems thinking of ISD are needed in ESD.
- ISD needs ESD: ISD needs changes in the larger organizations (such as administrative and governance systems) to support its success; the new paradigm in ESD will create a greater need for ISD expertise; and ESD will prompt ISD's search for new directions in instructional theory.
What are the common characteristics of ESD and ISD?
- Both use systems thinking to examine and explain the mutually interdependent relationships:
  - Between the new system and its suprasystem
  - Between the new system and its peer systems
  - Among the many functions and components that compose the new system
- Both use design theory to inform the process, which consists of fundamental elements such as analysis, synthesis, and evaluation, and the basic activities of design, development, and implementation
- Neither is linear: both need simultaneity and recursion during the process.
Why a New Paradigm in ESD?
- Changes in society: the major paradigm shifts in society run from agrarian to industrial to information. Such shifts bring changes in all of society's subsystems, including family, business, and education.
- The need for a new paradigm of education is based on massive changes in both the conditions and the educational needs of an information society.
- Selection vs. learning: in terms of the educational function, the industrial age used standardization to separate laborers from managers and to build conformity and compliance in bureaucratic organizations. On the contrary, education and training in the information age should be designed to foster active thinkers who can take initiative and think critically in team-based organizations.
- The systemic changes in the family require the school to become a caring environment.
References:
Banathy, B. H. (1968). Instructional systems. Palo Alto, CA: Fearon Publishers.
Banathy, B. H. (1987). Instructional systems design. In R. M. Gagné (Ed.), Instructional technology: Foundations. Hillsdale, NJ: Lawrence Erlbaum.
Banathy, B. H. (1991). Systems design of education. Englewood Cliffs, NJ: Educational Technology Publications.
Banathy, B. H. (1996). Systems inquiry and its application in education. In D. H. Jonassen (Ed.), Handbook of research for educational communications and technology. New York: Macmillan.
Bertalanffy, L. von (1968). General system theory. New York: Braziller.
Jenlink, P. M., Reigeluth, C. M., Carr, A. A., & Nelson, L. M. (1996). An expedition for change: Facilitating the systemic change process in school districts. TechTrends, 41(1), 21-30.
Reigeluth, C. M., Banathy, B. H., & Olson, J. R. (1993). Comprehensive systems design: A new educational technology. Berlin: Springer-Verlag.
Ellsworth (2000) commented that Rogers' Diffusion of Innovations (1995) is an excellent general practitioner's guide. Rogers' framework provides "a standard classification scheme for describing the perceived attributes of innovations in universal terms" (Rogers, 1995). Research on educational change has applied and explored Rogers' model in different contexts.
Rogers' model studies diffusion within a change-communication framework, examining how all the components involved in the communication process affect the rate of adoption. Rogers (1995) identified differences both in people and in innovations. The model offers guidelines to change agents about the attributes they can build into an innovation to facilitate its acceptance by the intended adopters. Rogers also identified the sequence of change agent roles:
- To develop a need for change.
- To establish an information-exchange relationship.
- To diagnose problems.
- To create an intent to change in the client.
- To translate intent into action.
- To stabilize adoption and prevent discontinuance.
- To achieve a terminal relationship.
How is diffusion defined in Rogers' model?
Diffusion is a process by which an innovation is communicated through certain channels over time among the members of a social system.
The definition indicates that:
- The adopters can be individuals, groups, or organizations at different levels of a social system
- The target is an innovation
- The process is communication
- The means is communication channels
- The context of the innovation is a social system
- It is a change over time
How can we categorize different types of adopters?
- Innovators (risk takers)
- Early adopters (hedgers)
- Early majority (waiters)
- Late majority (skeptics)
- Laggards (slowpokes)
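Rogers defines these categories by how early a member adopts relative to the rest of the social system; in his scheme they cover roughly 2.5%, 13.5%, 34%, 34%, and 16% of adopters, respectively. The sketch below is a hypothetical illustration that assigns category labels from adoption-time percentiles; the cutoffs follow Rogers' proportions, but the data and function names are invented.

```python
import bisect

# Cumulative proportions at the category boundaries in Rogers' scheme:
# innovators 2.5%, early adopters 13.5%, early majority 34%,
# late majority 34%, laggards 16%.
CUTOFFS = [0.025, 0.16, 0.50, 0.84]
LABELS = ["innovator", "early adopter", "early majority", "late majority", "laggard"]

def categorize(adoption_times):
    """Label members by how early they adopted (assumes distinct times)."""
    ranked = sorted(adoption_times, key=adoption_times.get)
    n = len(ranked)
    return {member: LABELS[bisect.bisect_right(CUTOFFS, rank / n)]
            for rank, member in enumerate(ranked)}

# Hypothetical adoption times (e.g., months until a teacher adopts a tool).
times = {"Ann": 1, "Ben": 3, "Cal": 6, "Dee": 9, "Eva": 24}
print(categorize(times))
# {'Ann': 'innovator', 'Ben': 'early majority', 'Cal': 'early majority',
#  'Dee': 'late majority', 'Eva': 'late majority'}
# (with so few members, the small early-adopter and laggard tails stay empty)
```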
What are the factors affecting the rate of adoption of an innovation?
According to Rogers (1995), there are five major factors affecting the rate of adoption:
- Perceived attributes of the innovation
An innovation is an idea, practice, or object that is perceived as new by an individual or other unit of adoption. How the adopters perceive the characteristics of the innovation affects the process of adoption:
- Relative advantage: the degree to which an innovation is perceived as better than the idea it supersedes. The underlying principle is that the greater the perceived relative advantage of an innovation, the more rapid its rate of adoption
- Compatibility: the degree to which an innovation is perceived as being consistent with the existing values, past experiences, and needs of potential adopters
- Complexity: the degree to which an innovation is perceived as difficult to understand and use
- Trialability: the degree to which an innovation may be experimented with on a limited basis. If an innovation is trialable, it results in less uncertainty for adoption
- Observability: the degree to which the results of an innovation are visible to others. The easier it is for individuals to see the results of an innovation, the more likely they are to adopt it
- Type of innovation-decision
  - Optional: individual flexibility
  - Collective: a balance between maximum efficiency and freedom
  - Authority: it yields the highest rate of adoption, but produces high resistance
- Communication channels (their joint effect on the adoption curve is sketched after this list)
  - Mass media
  - Interpersonal
- Nature of the social system
A social system is defined as a set of interrelated units that are engaged in joint problem solving to accomplish a common goal. The members or units of a social system may be individuals, informal groups, organizations, and/or subsystems. All members cooperate at least to the extent of seeking to solve a common problem in order to reach a mutual goal; sharing a common objective binds the system together. The social structure affects an innovation's diffusion in several ways:
  - Social structure and communication structure: patterned arrangements of the units in a system
  - System norms: norms are established behavior patterns for the members of a social system
  - Roles of opinion leaders and change agents: opinion leadership is the degree to which an individual is able to influence other individuals' attitudes or overt behavior informally in a desired way with relative frequency
  - Types of innovation decisions: optional, collective, authority, and contingent innovation-decisions
  - The consequences of innovation: desirable vs. undesirable, direct vs. indirect, anticipated vs. unanticipated
- Extent of change agents' promotion efforts
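To see how the two communication channels shape the rate of adoption, here is a minimal simulation, a hypothetical sketch rather than anything from Rogers: each non-adopter may adopt through mass media (a constant per-person probability) or through interpersonal contact (a probability that grows with the share of existing adopters). The parameter values are invented for illustration.

```python
# Minimal diffusion simulation: mass-media vs. interpersonal channels.
# Hypothetical parameters, chosen only to make the S-curve visible.
POPULATION = 1000
MASS_MEDIA_P = 0.01    # chance per step of adopting via mass media
INTERPERSONAL_P = 0.4  # weight of word-of-mouth from existing adopters

adopters = 0.0
for step in range(1, 31):
    share = adopters / POPULATION
    # Each remaining non-adopter may adopt via either channel this step.
    p_adopt = MASS_MEDIA_P + INTERPERSONAL_P * share
    new = (POPULATION - adopters) * p_adopt   # expected new adopters (deterministic)
    adopters = min(POPULATION, adopters + new)
    if step % 5 == 0:
        print(f"step {step:2d}: {adopters / POPULATION:5.1%} adopted")
# Interpersonal influence compounds as adopters accumulate,
# producing the familiar S-shaped cumulative adoption curve.
```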
Siegel (1999) listed additional factors in Rogers' theory:
- Pro-innovation bias: three assumptions about an innovation:
  - It should be diffused to and adopted by all members of a social system
  - It should be diffused more rapidly
  - It should be neither reinvented nor rejected
- Reinvention: people use innovations in ways not originally intended
- Individual characteristics of adopters
What is the innovation-decision process for an individual or other decision-making unit?
- Knowledge: occurs when an individual is exposed to the innovation's existence and gains some understanding of how it functions
- Persuasion: occurs when an individual forms a favorable or unfavorable attitude toward the innovation
- Decision: occurs when an individual engages in activities that lead to a choice to adopt or reject the innovation
- Implementation: occurs when an individual puts an innovation into use
- Confirmation: occurs when an individual seeks reinforcement of an innovation decision, or reverses the previous decision when exposed to conflicting messages about the innovation
The five stages can be read as a simple state machine, as sketched below.
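The sketch below is an interpretive model, not Rogers' own formalization; the transition table (e.g., allowing Confirmation to loop back to Decision when a choice is reversed) is an assumption made for illustration.

```python
# The innovation-decision process as a simple state machine (illustrative).
TRANSITIONS = {
    "knowledge": ["persuasion"],
    "persuasion": ["decision"],
    "decision": ["implementation", "rejected"],   # adopt or reject
    "implementation": ["confirmation"],
    "confirmation": ["adopted", "decision"],      # reinforce, or reverse the decision
}

def step(state: str, next_state: str) -> str:
    """Advance one stage, refusing transitions the model does not allow."""
    allowed = TRANSITIONS.get(state, [])
    if next_state not in allowed:
        raise ValueError(f"cannot move from {state!r} to {next_state!r}")
    return next_state

# A unit of adoption moving through the stages:
state = "knowledge"
for nxt in ["persuasion", "decision", "implementation", "confirmation", "adopted"]:
    state = step(state, nxt)
print(state)  # adopted
```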
What are the contributions of Rogers' model?
Ellsworth (2000) pointed out that the most critical benefit of Rogers' model is its account of innovation attributes. He said:
"Practitioners are
likely to find this perspective of the greatest use if they are engaged in the
actual development of the innovation or if they are deciding whether (or how)
to adapt the innovation to meet local requirements…Rogers' framework can be
useful in determining how it is to be presented to its intended adopters."
(p. 40)
Rogers' model identifies the critical components in the change system and their characteristics. The model is relatively systematic because the consequence of the change is confined to a predetermined "innovation," a predetermined goal. The interrelationship and dynamic exchange between the components of the change system is not expected to contribute to the continuous shaping of the vision, but rather to be controlled so as to adopt a desirable idea, object, or program.
References:
Ellsworth, J. B. (2000). Surviving change: A survey of educational change models. Syracuse, NY: ERIC Clearinghouse on Information & Technology.
Rogers, E. (1995). Diffusion of innovations (4th ed.). New York, NY: The Free Press.
Ellsworth (2000) pointed out that the issues Fullan's model helps the change agent deal with include:
- What are the implications of change for the people or organizations promoting or opposing it at particular levels?
- What can different stakeholders do to promote change that addresses their needs and priorities?
Fullan (1982, 1991) proposed that there are four broad phases in the change process: initiation, implementation, continuation, and outcome.
Initiation
The factors affecting the initiation phase include:
- Existence and quality of innovations
- Access to innovations
- Advocacy from central administration
- Teacher advocacy
- External change agents
Implementation
Fullan and Stiegelbauer (1991) identified three broad areas of factors affecting implementation: characteristics of the change, local characteristics, and external factors (government and other agencies). They identified the different stakeholders at the local, federal, and governmental levels, the way each stakeholder characterizes the change, and the issues each stakeholder should consider before committing to a change effort or rejecting it.
Continuation
Continuation is a decision about the institutionalization of an innovation, based on the reaction to the change, which may be negative or positive. Continuation depends on whether or not:
- The change gets embedded/built into the structure (through policy/budget/timetable)
- The change has generated a critical mass of administrators or teachers who are skilled in and committed to the change
- The change has established procedures for continuing assistance
Attention to the following perspectives on the change process may support the achievement of a positive or successful change outcome:
- Active initiation and participation: change does not end at recognition of or initial contact with the innovation; it starts with that contact and evolves through continuous interaction with the innovation and the environmental changes it brings forth
- Pressure, support, and negotiation
- Changes in skills, thinking, and committed actions
- The overriding problem of ownership
Fullan (1993) provides eight basic lessons for thinking about change:
1. You can't mandate what matters: change in an educational enterprise is complex in its demands on skills, thinking, and committed actions. Fullan commented that "effective change agents neither embrace nor ignore mandates. They use them as catalysts to reexamine what they are doing." (p. 24)
2. Change is a journey, not a blueprint: change entails uncertainty, with positive and negative forces of change.
3. Problems are our friends: problems are the route to deeper change and deeper satisfaction; conflict is essential to any successful change effort.
4. Vision and strategic planning come later: vision comes later because the process of merging personal and shared visions takes time. This differs from Rogers' conception of the innovation, as an idea, practice, or object, that drives the change process. Rogers' model is closer to what Fullan criticizes in Beckhard and Pritchard's (1992) vision-driven approach, which emphasizes creating and setting the vision, communicating the vision, building commitment to the vision, and organizing people and what they do so that they are aligned with the vision. People learn about the innovation through their interactions with it and with others in the context of the innovation. Deep ownership comes through the learning that arises from full engagement in solving problems.
5. Individualism and collectivism must have equal power: Stacey's concept of a "dynamic system" helps clarify Fullan's ideas on collaboration in innovation:
"The dynamic systems perspective leads to a view of culture as emergent. What a group comes to share in the way of culture and philosophy emerges from individual personal beliefs through a learning process that builds up over years." (Stacey, 1992, p. 145)
6. Neither centralization nor decentralization works: the center and local units need each other. Successful change requires a dynamic two-way relationship of pressure, support, and continuous negotiation.
7. Connection with the wider environment is critical for success: change should recognize the broader context on which it constantly acts.
8. Every person is a change agent: "It is only by individuals taking action to alter their own environments that there is any chance for deep change."
Fullan (1993) also suggested elements that successful change requires:
- The ability to work with polar opposites: imposition of change vs. self-learning; planning vs. uncertainty; problems vs. creative resolution; vision vs. fixed direction; individual vs. group; centralizing vs. decentralizing; personal change vs. system change
- Dynamic interdependency of state accountability and local autonomy
- Combination of individuals and societal agencies
- Internal connection within oneself and within one's organization, and external connections to others and to the environment
Fullan (1999) added eight further lessons in the sequel:
- Moral purpose is complex and problematic
- Theories of education and theories of change need each other
- Conflict and diversity are our friends
- Understanding the meaning of operating on the edge of chaos
- Emotional intelligence is anxiety provoking and anxiety containing
- Collaborative cultures are anxiety provoking and anxiety containing
- Attack incoherence: connectedness and knowledge creation are critical
- There is no single solution: craft your own theories and actions by being a critical consumer
References:
Ellsworth, J. B. (2000). Surviving change: A survey of educational change models. Syracuse, NY: ERIC Clearinghouse on Information & Technology.
Fullan, M. (1982). The meaning of educational change. New York: Teachers College Press.
Fullan, M. G. (1993). The complexity of the change process. In Change forces: Probing the depths of educational reform (pp. 19-41). London: Falmer Press.
Fullan, M. G. (1999). Change forces: The sequel. Philadelphia, PA: Falmer Press.
Fullan, M., & Stiegelbauer, S. (1991). The new meaning of educational change (2nd ed.). New York: Teachers College Press.
Ely (1990) used the term "conditions of change" to refer to the factors in the environment that affect implementation in the change process. When the implementation plan for launching an innovation has been carefully crafted to satisfy all the perceived attributes that facilitate the rate of adoption, what else can make the adoption easier, or impede it? This is exactly the question that Ely's conditions of change are intended to answer.
Ely (1999) listed eight
conditions that should exist or be created in the environment where in the
innovation is implemented to facilitate its adoption:
- Dissatisfaction with the status
quo: the
precondition for people to accept a change is that they perceive a needs
to change the environment. Perception of such needs usually is revealed in
people's dissatisfaction of the existing methods, products, or programs.
Understanding of the cause of the dissatisfaction and identifying who has
dissatisfaction can help the change agent to communicate the innovation to
the adopters in a more effective way. Ellisworth (2001) said that
understanding sources and the levels of dissatisfaction can help the
change agent to position the innovation to be more compatible with their
'felt needs' (in Rogers' term).
- Sufficient knowledge and skills: In order to make the
implementation succeed, "the people who will ultimately implement any
innovation must possess sufficient knowledge and skills to do the
job." (Ely, 1995). It is especially evident when the innovation
involves in use of a certain tool or a technique. Without enough training
to use the tool or technique, the innovation will die out soon.
- Availability of resources: A good recipe itself does not
guarantee the tasty results of cooking. There must be right ingredients
and right cooking utensils available for the cook to use. In the same
logic, an innovation without resources, such as money, tools and
materials, to support its implementation, will not be successful.
- Availability of time: The adoption of an innovation takes time. As Ely puts it, "the implementers must have time to learn, adapt, integrate, and reflect on what they are doing." Their 'confirmation' of acceptance of the innovation does not necessarily bring forth the change; people need time to understand the innovation and develop the abilities to adapt it.
- Rewards or incentives: People need to be encouraged in their performance or use of the innovation. Extrinsic or intrinsic rewards can add value to the innovation and thus promote its implementation.
- Participation: Participants in the implementation should be encouraged to be involved in decision-making. With opportunities to communicate their ideas and opinions, the participants can develop a sense of ownership of the innovation. Moreover, communication among all parties can help monitor the progress of the innovation.
- Commitment: Since implementation takes a great deal of effort and time, the people involved need to commit their efforts and time. There must be "firm and visible evidence that there is endorsement and continuing support for implementation" (Ely, 1995).
- Leadership: Needless to say, the leaders' expectations and commitment have a great impact on the implementation process. Leadership also includes the availability of affective support throughout the process.
References:
Ely, D. P. (1990).
Conditions that facilitate the implementation of educational technology
innovations. Journal of Research on Computing in Education, 23 (2),
298-305.
Ely, D. P. (1999). New
perspectives on the implementation of educational technology innovation.
What is instructional technology? Whenever I introduced myself as an instructional systems major to someone I had just met, they always asked what this major was about. If I used the term "instructional technology" to explain what instructional systems is, they seemed to come to an understanding of what I study. They immediately related instructional technology to web-based instruction, educational CD-ROMs, and computers. We need to understand that the use of machines is only one aspect of technology. Solomon (2000) pointed out that "alternative perspectives in our field assume a broader interpretation of technology as the systematic application of all sources of organized knowledge." From this viewpoint, technology consists of products, i.e. artifacts such as machines and tools, as well as processes, i.e. ways of doing things such as strategies and techniques.
It is from this broader interpretation that Seels and Richey (1994) defined the field of instructional technology as "the theory and practice of design, development, utilization, management and evaluation of processes and resources for learning." Their definition reflects the evolution of instructional technology from a movement to a field and a profession, and the contributions this field has made to theory and practice.
Different terms have been used to represent the field. Shrock (1995) used the term instructional development as a broader context for her description of the history of the field; Reiser (2001) used the term instructional design and technology to define the field. To Shrock (1995), instructional development is "a self-correcting, systems approach that seeks to apply scientifically derived principles to the planning, design, creation, implementation, and evaluation of effective and efficient instruction"; to Reiser (2001), "the field of instructional design and technology encompasses the analysis of learning and performance problems, and the design, development, implementation, evaluation and management of instructional and non-instructional processes and resources intended to improve learning and performance in a variety of settings, particularly educational institutions and the workplace." Both terms encompass the same broad scope of the field as the term instructional technology.
The following description of the historical development of the field of instructional technology incorporates Reiser's and Shrock's thoughts.
[A decade-by-decade timeline of the field's development (before the 1920s through the 1990s) followed here; the contents of its cells did not survive conversion.]
References:
Reiser, R. A. (2001). A history of instructional design and technology: Part II: A history of instructional design. Educational Technology, Research and Development, 49(2), 57-67.
Seels, B. B., & Richey, R. C. (1994). Instructional technology: The definition and domains of the field. Washington, D.C.: Association for Educational Communications and Technology.
Shrock, S. A. (1995). A brief history of instructional development. In G. J. Anglin (Ed.), Instructional technology: Past, present and future (2nd ed., pp. 11-18). Englewood, CO: Libraries Unlimited.
Solomon, D. (2000). Toward a post-modern agenda in instructional technology. Educational Technology, Research and Development, 48(4), 5-20.
The ISD (Instructional Systems Design) model is an organized procedure that includes the steps of analyzing, designing, developing, implementing, and evaluating instruction in order to improve the quality and effectiveness of instruction and to enhance learning.
What are the systems characteristics of ISD?
- Goal-directed: ISD guides the preparation of instruction to accomplish specific goals and objectives
- Interdependence of each step in the process: ISD emphasizes the congruency among the objectives, instruction, and evaluation
- A closed system: ISD mainly focuses on the system of instruction, which is "the intentional arrangement of experiences, leading to learners acquiring particular capabilities" (Smith & Ragan, 1999)
Gustafson (1991) explained that models serve a variety of purposes, such as theory building and testing, description, prediction, and explanation. ISD models serve three primary functions in ID practice:
- Communication devices
- Planning guides for management activities
- Prescriptive algorithms for decision making
Gustafson's (1991) taxonomy of ID models
Gustafson (1991) proposed three categories of ID models based on their different focuses:
- Classroom focus: the goal is to do a better job of instruction within the constraints of a situation in which a teacher, students, a curriculum, and a facility already exist. The emphasis is on selecting and adapting existing materials and instructional strategies. Example: the Kemp Model (1985)
- Product focus: the goal is the production of instructional products. The development of the product, and the product's objectives, may have been given. Example: the Leshin, Pollock, and Reigeluth Model (1990)
- Systems focus: the goal is to develop instructional output, which may include materials, equipment, a management plan, or an instructor's training package. This focus demands extensive analysis of the use environment, the characteristics of the task, and whether or not development should even take place. It is a problem-solving approach requiring data collection to determine the precise nature of the problem. Examples: the Dick and Carey Model (1990), the Diamond Model (1989)
Schiffman (1995) described five views of the field: the media view, the embryonic systems view, the narrow systems view, the standard systems view, and the instructional systems design view.
The standard systems view
- The system includes the major processes: conduct needs assessment, establish overall goals, conduct task analysis, specify objectives, develop assessment strategies, select media, produce materials, and conduct formative and summative evaluation.
- Formative evaluation: gathering information on adequacy and using this information as a basis for further development, during the development or improvement of a program or product
- Summative evaluation: gathering information on adequacy and using this information to make decisions about utilization, after completion and for the benefit of some external audience, decision maker, agency, or further possible users
- This view seems to be the most dominant in the field.
- Comments on the standard systems view:
- It is linear. It indicates the generic procedural framework that includes the steps of analyzing, designing, developing, implementing, and evaluating instruction.
- "The input-output structure and the emphasis on task analysis, objectives, and assessment of learning outcomes conjures up visions of machine gradable training." (Schiffman, 1995)
- The view is "unaware of the theoretical/research context from which they are derived and upon which they are dependent for enlightened and insightful implementation…" (Schiffman, 1995)
- It lacks attention to matters of diffusion and to linking practice with relevant learning theory and research.
Schiffman (1995) proposed a systemic view showing ISD to be a synthesis of theory and research related to:
- How humans perceive and give meaning to the stimuli in their environment
- The nature of information and how it is composed and transmitted
- The concept of systems and the interrelationships among factors promoting or deterring efficient and effective accomplishment of the desired outcomes
- The consulting and managerial skills necessary to meld points 1 through 3 into a coherent whole
- Educational theory and research: general educational psychology, specific theories of learning, and varieties of human capability:
- Designers need to have an understanding of the principles of human physical, emotional, social, and mental growth and development.
- Without a broad-based foundation in learning theory, the practice of ISD becomes narrowly focused on means (the steps in the systems model) rather than on the rightful end (learning).
- Instructional designers have to distinguish different human capabilities. This knowledge enables designers to utilize the research on the particular conditions under which each type of human capability is most likely to be learned, and to identify objectives of an instructional unit that reflect the needs of the system.
- Systems analysis: needs assessment to collect and analyze data about problems with instructional solutions or problems with other solutions.
- Data collection and data analysis: What data need to be collected? Why analyze them?
- Designers must know the goals, functions, resources, constraints, and chain-of-command culture of the organization.
- Data must be gathered on the specific target population to determine their general characteristics, motivation, sophistication as learners, and performance levels.
- The learning environment must be studied: on-the-job training, formal instruction, small group interaction.
- Analysis can determine whether there are any gaps between what is and what should be, and the causes of the problem: instructional, motivational, or environmental.
- If the solution lies in instruction, then proceed with task, content, and learner analysis, testing and measurement, media selection and production, and evaluation.
- Needs analysis:
- Determine to what extent the problem can be classified as instructional in nature
- Identify constraints, resources, and learner characteristics
- Determine goals and priorities
- Determine whether there are any gaps between what is and what should be, and the causes of the problem: instructional, motivational, or environmental
- Task analysis: the process of defining what is to be learned
- The analysis phase involves analysis of the learner, the task, and the context. Task analysis is essential to identify the content and the processes required to achieve the desired learning goals.
- Instructional designers must first determine that an instructional need exists, and then specify what is to be learned in order to develop how to learn it and how to evaluate the learning.
- The analysis of the context is much more strongly influenced by systems theory and by sociological theories.
- The attention given to the analysis of the learner has grown since the learner plays a constructive role according to cognitive theory. The learner's characteristics, i.e. attitudes, motivations, attributions, and interests, are considered in the design.
- How a learning task is analyzed: observable behaviors plus mental tasks; the differences between the ways novices and various levels of experts complete mental and physical tasks. Attention is given within objectives to tapping the understanding underlying a performance.
Diffusion
An understanding of the process of change, resistance to change, and categories of adopters prepares the designer to bring about changes in an organization.
Consulting/Interpersonal relations
This focuses on the relationship between designers and clients. Bell and Nadler (1979) identified five phases of a consultancy: entry, diagnosis, response, disengagement, and closure.
Project management
Knirk and Gustafson (1986) listed six stages of project management: planning, organizing, staffing, budgeting, controlling, and communicating.
References:
Diamond, R. M. (1989). Designing & improving courses and curricula in higher education: A systematic approach. San Francisco, CA: Jossey-Bass.
Dick, W., & Carey, L. (1996). The systematic design of instruction (4th ed.). New York: HarperCollins.
Gustafson, K. L. (1991). Survey of instructional development models. U.S. Department of Education. Public domain.
Kemp, J. E. (1985). The instructional design process. New York: Harper and Row.
Kemp, J. E., Morrison, G. R., & Ross, S. M. (1994). Designing effective instruction. New York: Macmillan.
Schiffman, S. S. (1995). Instructional systems design: Five views of the field. In G. J. Anglin (Ed.), Instructional technology: Past, present and future (2nd ed., pp. 131-142). Englewood, CO: Libraries Unlimited.
Smith, P., & Ragan, T. J. (1999). Instructional design (2nd ed.). New York: John Wiley & Sons.
Rossett (1995) described needs assessment as an initial inquiry for information about a situation. Jonassen, Tessmer, and Hannum (1999) explained that the purposes of needs assessment include:
- To determine if learning is a solution to an identified need, and if so, how serious the learning need is; the result is a prioritized inventory of learning goals.
- Needs analysis is the data gathering and decision making process that instructional designers go through to determine the goals of any instructional system.
- Needs analysis identifies the present capability of prospective learners or trainees, the desired outcomes, and the discrepancies between the two.
When do we do needs assessment?
Rossett (1995) pointed out the importance of needs assessment as a driving force affecting every other aspect of the instructional design system, i.e. design, development, use, and evaluation.
"Needs assessments
are done when the instructional technologist is trying to respond to a request
for assistance. Needs assessments gather information to assist professionals in
making data-driven and responsive recommendations about how to solve the
problem or introduce new technology." (p. 184)
Rossett (1995) described the major information that needs assessment tends to identify:
- Optimal performance: What is it that the learner/performer needs to know or do?
- Actual performance: What is it that the learner/performer actually knows and does?
- Feelings: How does the learner/performer feel about the topic, training on the topic, the topic as a priority, and their confidence surrounding the topic?
- Causes: Rossett incorporated the work of Bandura (1977) and Keller (1979, 1983) into a system that recognizes four kinds of causes:
- Lack of skill or knowledge: Can the learner or performer do the task?
- Flawed environment: Does the environment support the task performance? The support includes tools, forms, work space, etc.
- Improper incentives: What are the consequences of doing the job badly or of not doing it at all?
- Unmotivated employees: What is the internal state of the individuals involved, i.e. their valuing of the task and their confidence in their ability?
| | Lack of skill or knowledge | Flawed environment | Improper incentives | Unmotivated employees |
| --- | --- | --- | --- | --- |
| Information needed | Are the learners able to do the task? | Environmental support | Feelings; consequences of task performance | Feelings |
| Possible data sources | Records and outcomes, observations, interviews | Observations, interviews, focus groups | Observations, records, interviews, questionnaires | Interviews, questionnaires |
| Possible solutions | Training, job aids | Improved tools or forms, workplace redesign, job redesign | Improved policies, better supervision, improved incentives | Training, information, coaching, better supervision |
Data Collection Tools
- Observations: finding out optimal and actual performance, and the environmental factors
- Interviewing: optimal knowledge, environment, incentives, and motivation
- Records and outcomes: finding out optimal and actual performance; environmental factors from complaints; examining policies to identify incentives
- Facilitating groups: to assemble an organization-wide accord on optimals; this can also be used to seek other information, but we need to be careful about honest discussion of actual performance, feelings, and causes.
- Surveying through questionnaires: efficient for gathering information from a large number of respondents, as well as information about feelings, causes, and solutions.
The Process of Needs Assessment
- Identify purpose based on initiators: According to Rossett (1995), there are three initiators:
- Performance problems: if there is a gap between optimal and actual performance, the focus is on finding out the causes.
- New stuff: because new technologies, systems, or approaches are being used, the focus will be more on the optimals and on feelings.
- Mandates: there might be a performance problem, or there might not. A mandate can be approached as a performance problem or as new stuff.
- Identify sources: Where is, or who has, the information that I need? Can I access it?
- Select tools for getting information: What are appropriate ways to collect data? What questions should be asked in the interviews and surveys? What should be observed?
- Conduct the needs assessment in stages in order to search for the information needed.
- Use findings to make decisions: Analyze the data and identify the gaps, determine the causes of the gaps, and identify the kinds of interventions to resolve them.
Typology of Questions (Rossett, 1995)
- Problem finding: Is there a problem? What is the nature of the problem?
- Problem selecting: Prioritize the identified problems
- Knowledge/skills proving: Ask the performer to perform the task
- Finding feelings: Questions about the feelings and attitudes surrounding the problem
- Cause finding: Questions about the causes of the problems
Smith and Ragan (1999) categorized three models of needs assessment:
- Discrepancy model: this focuses on the gaps between "what is" and "what should be"
- List the goals of the instructional system
- Determine how well the identified goals are already being achieved
- Determine the gaps between what is and what should be
- Prioritize gaps according to agreed-upon criteria
- Determine which gaps are instructional needs and which are most appropriate for design and development of instruction
- Problem-finding, problem-solving model: this takes a broader view in terms of performance technology. The model focuses on resolving the causes of the problem, and non-instructional solutions are considered.
- Determine whether there really is a problem
- Determine whether the cause of the problem is related to employees' performance in training environments or to learners' achievement in educational environments
- Determine whether the solution to the achievement/performance problem is learning
- Determine whether instruction for these learning goals is currently offered: if yes, carry out the discrepancy model; if no, carry out the innovation model
- Innovation model: this examines changes or innovations in the educational system or organization and determines whether new learning goals should be added to the curriculum.
References:
Jonassen, D. H.,
Tessmer, M., & Hannum, W. H. (1999). Task analysis methods for
instructional design. Mahwah, NJ: Lawrence Erlbaum Associates.
Rossett, A. (1995). Needs assessment. In G. J. Anglin (Ed.), Instructional technology: Past, present, and future (2nd ed., pp. 183-196). Englewood, CO: Libraries Unlimited.
Smith, P. L., & Ragan, T. J. (1999). Instructional design (2nd ed.). Danvers, MA: John Wiley & Sons.
Task analysis is basically a process of identifying the human capabilities that support the performance of a task under analysis. It usually involves breaking a task performance down into smaller steps and identifying the different human capabilities that support it. Within the structural framework of the IT field, task analysis is one segment of the instructional design process; it is also the foundation for instructional design. Jonassen, Tessmer, and Hannum (1999) stated the purposes of task analysis as follows:
- It determines what must be learned to achieve the learning goal.
- The results of task analysis can be transformed into statements of learning goals, which determine what actually gets taught or trained.
- It analyzes the learning situation for the purpose of making instructional design decisions.
- It is used as a basis for organizing tasks and task components as well as for sequencing them.
The paradigm shift from behaviorism to cognitivism changed the focus of task analysis from behaviors to internal mental representations and processes. From the perspectives of behaviorism and cognitivism, learning is an outcome: either a behavioral change resulting from shaping by a series of reinforcements, or a reconstruction of knowledge representations resulting from mental processes. Under these circumstances, if learning focuses on behaviors, then task analysis should target the desired behaviors; job task analysis, procedural analysis, and functional job analysis (Jonassen, Tessmer, & Hannum, 1999) serve this purpose. If learning focuses on the transmission of "knowledge" as mental representation, then task analysis should target the "knowledge" entailed in the task; methods such as learning hierarchy analysis and information-processing analysis, which Jonassen, Tessmer, and Hannum (1999) classify as cognitive task analysis methods, meet these ends.
Chipman, Schraagen, and Shalin (2000) classified task analysis into two major categories, traditional task analysis and cognitive task analysis, in their overview of the chapters of the edited book Cognitive Task Analysis. Traditional task analysis refers to the breakdown of observable task performance into a series of overt, observable behaviors that support the performance; cognitive task analysis is "the extension of traditional task analysis techniques to yield information about the knowledge, thought processes, and goal structures that underlie observable task performance" (Chipman, Schraagen, & Shalin, 2000). The distinction mainly lies in overt physical actions versus covert cognitive processes. The development of cognitive task analysis was initiated by the evolution from the industrial age to the information age, which Reigeluth (1999) explained as changes in instruction's supersystems that affect the paradigm of education and training. The need for standardization and efficiency gave more weight to mechanistic analysis of job performance; the demand for customization and effectiveness calls for understanding the complex thinking and problem-solving processes underlying task performance. In fact, many cognitive task analysis methods arose from analyses of human-computer interaction, which tackle the mental activities distributed across and interacting between humans and machines in decision-making, critical diagnostic, and control tasks.
Jonassen, Tessmer, and Hannum (1999) described task analysis as a breakdown of performance into "detailed levels of specificity" and as a front-end analysis, which consists of a description of mastery performance and criteria, a breakdown of job tasks into steps, and consideration of the potential worth of solving performance problems. They also classified types of task analysis:
- Job/performance analysis: focusing on the behaviors engaged in by the performer
- It evolved from the industrial revolution, which focused on time-motion study techniques to reduce jobs to their simplest activities so that they could be learned more quickly and performed more reliably
- Learning analysis: focusing on the cognitive activities required for efficient learning
- The revolution in learning psychology in the 1960s focused designers' attention on the way learners were processing information as they performed tasks
- Techniques such as learning hierarchy analysis and information-processing and path analysis were developed as part of this movement
- Cognitive task analysis: focusing on performances and their associated knowledge states
- When learning psychology assumed a more cognitive psychological basis, methods for conducting cognitive task analysis emerged
- The growth of cognitive task analysis methods was fueled by military efforts in designing intelligent tutoring systems
- Content and subject matter analysis: examining the concepts and relationships of the subject matter
- Through the 1950s and 1960s, subject matter analysis evolved as the dominant curriculum planning tool in education
- Analysis of the structure of subject matter became the focus of instruction
- Activity-based methods: examining human activity and understanding in context
- Anthropological methods have been applied to analyzing the learning process, ushering in situated and everyday conceptions of human activity
- These activity analysis approaches analyze how people perform in natural, everyday settings; they attempt to document how humans act and the social and contextual values that affect that activity
Such classifications recognize differences in the functional purposes of task analysis and in the types of human knowledge and capabilities to be analyzed. For example, learning hierarchy analysis, which decomposes human cognitive skills into rules and concepts in a hierarchical structure, could be used to identify the topics that need to be taught and the sequence for teaching them. Activity theory, which explores the interaction between the individual in society and the environment in terms of tools, rules, and division of labor, could help identify what types of support need to be provided in the learning environment.
How do we describe tasks? Scholars have developed different taxonomies of learning to help classify tasks in order to identify the mental behavior, physical performance, and affective state a task requires. There are three general domains: the cognitive domain, i.e. knowledge and abilities requiring memory, thinking, and reasoning processes; the affective domain, i.e. attitudes, dispositions, and emotional states; and the psychomotor domain, i.e. motor skills and perceptual processes.
Bloom's Taxonomy of the Cognitive Domain

| Level | Description |
| --- | --- |
| Knowledge | Recall previously learned information: specifics, universals, and abstractions |
| Comprehension | Grasp the meaning of previously learned information |
| Analysis | Break down informational materials into their component parts to develop divergent conclusions by identifying motives or causes, making inferences, and/or finding evidence to support generalizations |
| Synthesis | Creatively or divergently apply prior knowledge and skills to produce a new or original whole |
| Evaluation | Judge the value of material based on personal values/opinions, resulting in an end product, with a given purpose, without real right or wrong answers |
Robert Gagné's Five Learned Capabilities

| Capability | Description |
| --- | --- |
| Intellectual skills | Mental operations that permit individuals to respond to conceptualizations of the environment |
| Cognitive strategies | Internal processes by which the learner controls his/her own ways of thinking and learning |
| Verbal information | Retrieving stored information |
| Attitudes | Internal states that affect an individual's choice of action |
| Motor skills | The capability to perform a sequence of physical movements |
Ausubel's Rote vs. Meaningful Learning
Ausubel (1968) described learning in terms of the relationship between learned materials and prior knowledge in the cognitive structure.

| Rote learning | Meaningful learning |
| --- | --- |
| "learned materials are discrete and relatively isolated entities which are only related to cognitive structure in an arbitrary, verbatim fashion, not permitting the establishment of significant relationships" | learning "takes place if the learning task can be related in a nonarbitrary, substantive fashion to what the learner already knows, and if the learner adopts a corresponding learning set to do so" |
Anderson's
two types of knowledge
Anderson (1983) proposed
two long-term memory stores: a declarative and a procedural memory. The
knowledge in the declarative memory, i.e. facts and goals, is represented in
terms of chunks. At the symbolic level, chunks are structured as a semantic
network. On the other hand, the knowledge in the procedural memory is
represented as production rules in forms of condition-action pairs, in which
the flow of control passes from one production to another when the actions of
one production create the conditions needed for another production to take
place.
| Declarative knowledge | Procedural knowledge |
| --- | --- |
| Knowledge about what it is | Knowledge about how to do things |
References:
Anderson, J. R. (1983). The
architecture of cognition. Cambridge, MA: Harvard University Press.
Ausubel, D. P. (1968). Educational psychology: A cognitive view. New York: Holt, Rinehart and Winston.
Jonassen, D. H., Tessmer, M., & Hannum, W. H. (1999). Task analysis methods for instructional design. Mahwah, NJ: Lawrence Erlbaum Associates.
Schraagen, J. M., Chipman, S. F., & Shalin, V. L. (Eds.). (2000). Cognitive task analysis. Mahwah, NJ: Lawrence Erlbaum Associates.
With the emergence of constructivism, the practice of instructional design, which was deeply rooted in behaviorist and cognitive learning theories, was challenged. Constructivism, viewing reality as an individual construction, has caused a shift in how we think about what knowledge is and what learning is. The challenge, therefore, lies in what to do in the instructional design process, and how to do instructional design, in ways that accommodate this shift. The constructivist orientations in the field have generated much dialogue about different aspects of the ID process, as well as suggestions for alternative instructional-design theories (Duffy & Jonassen, 1992; Reigeluth, 1999).
From the constructivist perspective, knowledge is neither behavioral change nor an organized structure within the learner. Instead, knowledge is viewed as a construction of understanding in a context, or as located in the actions of persons and groups. Thus, the ultimate goal of learning is to facilitate learners' active cognitive reorganization and their active engagement in the world. Moreover, the focus of instructional design in the constructivist paradigm has shifted from how to structure instructional events in order to maximize the effectiveness of information transmission to how to develop meaningful learning environments that help students construct their understanding by engaging in meaning-making. Such changes in epistemological and pedagogical beliefs have challenged practice throughout the ID process. Perkins (1992), from the standpoint of cognitive constructivism, whose focus is more on the active construction of the mind, proposed emphasizing the analysis of tasks in a meaningful context, the design of a manipulable space, and the understanding and use of knowledge as assessment measures. On the other hand, Brown, Collins, and Duguid (1989), from a socio-cultural constructivist viewpoint, described how participation in social interactions and culturally organized activities influences psychological development.
Below is a list of contrasts in the ID process between so-called traditional ID practice (behaviorism and cognitivism) and a constructivist ID process, abstracted from different literature (Bednar, Cunningham, Duffy, & Perry, 1995; Duffy & Cunningham, 1996; Duffy & Jonassen, 1992).
[A table contrasting traditional and constructivist ID across the consequence of learning, content analysis, analysis of learners, specification of objectives, synthesis, the role of the instructor, and evaluation followed here; its cell contents did not survive conversion.]
References:
Brown, J. S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18, 32-42.
Duffy, T. M., & Cunningham, D. J. (1996). Constructivism: Implications for the design and delivery of instruction. In D. H. Jonassen (Ed.), Handbook of research for educational communications and technology (pp. 170-198). New York: Simon & Schuster Macmillan.
Duffy, T. M., & Jonassen, D. H. (Eds.). (1992). Constructivism and the technology of instruction: A conversation. Hillsdale, NJ: Lawrence Erlbaum Associates.
Perkins, D. N. (1992). Technology meets constructivism: Do they make a marriage? In T. M. Duffy & D. H. Jonassen (Eds.), Constructivism and the technology of instruction: A conversation (pp. 45-55). Hillsdale, NJ: Lawrence Erlbaum Associates.
Reigeluth, C. M. (Ed.). (1999). Instructional-design theories and models: A new paradigm of instructional theory (Vol. II). Mahwah, NJ: Lawrence Erlbaum Associates.
Research is a systematic inquiry to describe, explain, predict, and control an observed phenomenon. Research involves inductive and deductive methods (Babbie, 1998). Inductive methods analyze an observed phenomenon and identify the general principles, structures, or processes underlying it; deductive methods verify hypothesized principles through observations. The purposes are different: one is to develop explanations, and the other is to test the validity of the explanations.
One thing we have to keep in mind is that the heart of research is not statistics but the thinking behind the research: what we really want to find out, how we build arguments about ideas and concepts, and what evidence we can offer to persuade people to accept our arguments.
Gall, Borg and Gall
(1996) proposed four types of knowledge that research contributed to education
as follows:
- Description: Results of research can
describe natural or social phenomenon, such as its form, structure,
activity, change over time, relationship to other phenomena. The
descriptive function of research relies on instrumentation for measurement
and observations. The descriptive research results in our understanding of
what happened. It sometimes produces statistical information about aspects
of education.
- Prediction: Prediction research is
intended to predict a phenomenon that will occur at time Y from
information at an earlier time X. In educational research, researchers
have been engaged in:
- Acquiring knowledge about
factors that predict students' success in school and in the world of work
- Identifying students who are
likely to be unsuccessful so that prevention programs can be instituted.
- Improvement: This type of research is mainly concerned with the effectiveness of interventions. The research approaches include experimental design and evaluation research.
- Explanation: This type of research subsumes the other three: if researchers can explain an educational phenomenon, it means that they can describe it, predict its consequences, and know how to intervene to change those consequences.
What
are the purposes of research?
Patton (1990) pointed out the importance of identifying the purpose in a research process. He classified four types of research based on different purposes:
- Basic research: The purpose of this research is to understand and explain, i.e. the research is interested in formulating and testing theoretical constructs and propositions that ideally generalize across time and space. This type of research takes the form of a theory that explains the phenomenon under investigation, as its contribution to knowledge. This research is more descriptive in nature, exploring what, why, and how questions.
- Applied Research: The purpose of this research
is to help people understand the nature of human problems so that human
beings can more effectively control their environment. In other words,
this type of research pursues potential solutions to human and societal
problems. This research is more prescriptive in nature, focusing on how
questions.
- Evaluation research (summative and formative): Evaluation research studies the processes and outcomes of attempted solutions. The purpose of formative evaluation is to improve a human intervention within specific conditions, such as activities, time, and groups of people; the purpose of summative evaluation is to judge the effectiveness of a program, policy, or product.
- Action Research: Action research aims at
solving specific problems within a program, organization, or community.
Patton (1990) described that design and data collection in action research
tend to be more informal, and the people in the situation are directly
involved in gathering information and studying themselves.
What
is the research process?
Gall, Borg, and Gall (1996) described the following stages of conducting a research study:
- Identify a significant research
problem: in this stage, find out the research questions that are significant
and feasible to study.
- Prepare a research proposal: a research proposal usually consists of an introduction, a literature review, the research design, the research method, the data analysis plan, a protection-of-human-subjects section, and a timeline.
- Conduct a pilot study: the
purpose is to develop and try out data-collection methods and other
procedures.
- Conduct a main study
- Prepare a report
Gall, Borg, and Gall (1996) also explained that these five stages may overlap or occur in a different order depending on the nature of the study. Qualitative studies, which involve emergent research designs, may gather and analyze some data before the proposal is developed, and a pilot study may be done before the proposal is written, or not at all.
Anglin, Ross, and
Morrison (1995) took a closer look at the stages of identifying a research
problem and preparing the research proposal. They advised a sequence of
planning steps:
- Select a topic: Research requires commitment. As a researcher, you want to make sure you are doing something that you have a great interest in doing.
- Identify the research problem: Based on your own understanding of and interest in the topic, think about what issues can be explored. Sometimes a research problem cannot be immediately identified, but through reviewing the existing literature and continuous discourse with peers and scholars, the research problem will start to take shape.
- Conduct a literature search: Reviewing literature has two major purposes: one is to build up the researcher's knowledge base of the topic under exploration for a deeper understanding, and the other is to ensure the significance of the research. The researcher needs to establish how the research will contribute to knowledge in the related field compared with the existing research literature.
- State the research question: The research problem will evolve as you pursue a knowledge base through reviewing literature and discoursing with peers and scholars. Specifying what questions your research study wants to answer provides the basis for planning the other parts of your study, e.g. the research design and the methods for data collection and analysis.
- Determine the research design: If the intention of the research study is to verify a causal relationship between certain variables, use an experimental design; if it is to find out how variables relate to one another, use a correlational design; if it is to describe and understand a particular social condition/pattern and the meaning of a social experience, conduct a qualitative study.
- Determine methods: Three major elements of the research study need to be considered: participants, materials, and instruments. For qualitative research, the issues are the sources of data: where the researcher can find the information and what methods can be used to get it. Qualitative research usually focuses on verbal information gathered from interviews, observations, documents, or cultural artifacts. The very distinctive feature of qualitative research is that the researcher is part of the instrument; recognition of the researcher's subjective interpretation of the information yields the process of triangulation, which emphasizes the use of multiple sources, methods, investigators, and theories to ensure the credibility of the research.
- Identify analysis procedures: Different research questions and research designs entail different analysis methods. Experimental designs employ statistical analysis to give statistical descriptions of the groups in terms of the independent and dependent variables, and to determine the significance of the differences in the dependent variables attributed to the independent variables. Qualitative designs, on the other hand, employ semantic analysis to identify themes, categories, processes, and patterns in an observed phenomenon, and provide rich descriptions of the phenomenon in order to develop a deeper understanding of human systems.
(Ideas abstracted from Anglin, Ross, and Morrison, 1995)
References:
Anglin, G. J., Ross, S. M., & Morrison, G. R. (1995). Inquiry in instructional design and technology: Getting started. In G. J. Anglin (Ed.), Instructional technology: Past, present, and future (2nd ed.). Englewood, CO: Libraries Unlimited.
Gall, M. D., Borg, W. R., & Gall, J. P. (1996). Educational research: An introduction (6th ed.). White Plains, NY: Longman.
Patton, M. Q. (1990). Qualitative evaluation and research methods (2nd ed.). Newbury Park, CA: Sage.
Webster's Dictionary defines paradigm as "an example or pattern: small, self-contained, simplified examples that we use to illustrate procedures, processes, and theoretical points." The most quoted definition of paradigm is Thomas Kuhn's (1962, 1970) concept in The Structure of Scientific Revolutions, i.e. a paradigm is the set of underlying assumptions and the intellectual structure upon which research and development in a field of inquiry are based. Other definitions in the research literature include:
- Patton (1990): A paradigm is a
world view, a general perspective, a way of breaking down the complexity
of the real world.
- Paradigm is an interpretative
framework, which is guided by "a set of beliefs and feelings about
the world and how it should be understood and studied." (Guba, 1990).
Denzin and Lincoln (2001) listed three categories of those beliefs:
- Ontology: what kind of being is
the human being. Ontology deals with the question of what is real.
- Epistemology: what is the
relationship between the inquirer and the known: "epistemology is the
branch of philosophy that studies the nature of knowledge and the process
by which knowledge is acquired and validated" (Gall, Borg, &
Gall, 1996)
- Methodology: how do we know the
world, or gain knowledge of it?
When challenging the assumptions underlying positivism, Lincoln and Guba (2000) also identified two more categories that distinguish different paradigms, i.e. beliefs about causality and axiology. The assumptions about causality assert a position on the nature and possibility of causal relationships; axiology deals with issues of value. Specific assumptions about research include the role of values in research, how to keep values from influencing research, and how best to use research products (Baptiste, 2000).
Dills and Romiszowski (1997) stated the functions of paradigms as follows:
- Define how the world works, how
knowledge is extracted from this world, and how one is to think, write,
and talk about this knowledge
- Define the types of questions
to be asked and the methodologies to be used in answering
- Decide what is published and
what is not published
- Structure the world of the
academic worker
- Provide its meaning and its
significance
Two major philosophical doctrines in social science inquiry are positivism and postpositivism. The following contrasts the research approaches entailed by these two philosophical paradigms.
[A table contrasting positivism and postpositivism with respect to philosophical inquiry, research design, data collection, and view of causality followed here; its cell contents did not survive conversion.]
Lincoln and Guba (2000)
made the following distinctions between positivist and naturalist inquiries.
| Positivist | Naturalist |
| --- | --- |
| Reality is single, tangible, and fragmentable. | Realities are multiple, constructed, and holistic. |
| Dualism: the knower and the known are independent. | The knower and the known are interactive and inseparable. |
| Time- and context-free generalizations are possible. | Only time- and context-bound working hypotheses are possible. |
| There are real causes, temporally precedent to or simultaneous with their effects (causal relationships). | All entities are in a state of mutual simultaneous shaping, so that it is impossible to distinguish causes from effects. |
| Inquiry is value free. | Inquiry is value bound. |
References:
Baptiste, I. (2000). Calibrating
the "instrument": Philosophical issues framing the researcher's role.
Class notes in ADTED 550.
Dills, C. R., & Romiszowski, A. J. (1997). The instructional development paradigm: An introduction. In C. R. Dills & A. J. Romiszowski (Eds.), Instructional development paradigms. Englewood Cliffs, NJ: Educational Technology Publications.
Gall, M. D., Borg, W. R., & Gall, J. P. (1996). Educational research: An introduction (6th ed.). White Plains, NY: Longman.
Lincoln, Y. S., &
Guba, E., G. (2000). Paradigmatic controversies, contradictions and emerging
confluences. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of
Qualitative Research (2nd ed., pp. 163-188). Thousand Oaks, CA: Sage
Publications, Inc.
Patton, M. Q. (1990). Qualitative
Evaluation and Research Methods ( 2nd ed.). Newbury Park, CA: Sage.
Smith, P., & Ragan, T. J. (1999). Instructional design (2nd ed.). New York: John Wiley & Sons.
Quantitative research is basically a hypothesis-testing process. Hypotheses are formulated on theoretical foundations; then, after determining human subjects, developing and selecting instrumentation, and operationalizing controlling factors, data are collected and analyzed to test the hypotheses. It is a hypothetico-deductive approach, which usually collects quantitative data from experimental or quasi-experimental designs for statistical analysis.
Gall, Borg, and Gall (1996) described quantitative research as inquiry "that is grounded in the assumption that features of the social environment constitute an objective reality that is relatively constant across time and settings. The dominant methodology is to describe and explain features of this reality by collecting numerical data on observable behaviors of samples and by subjecting these data to statistical analysis."
The major process of conducting quantitative research is as follows:
- Identify the research problem
- Review the literature
- State research questions and hypotheses
- Plan the research design: whom to study, i.e. sampling and human subject concerns; how to study, i.e. operationalization of the variables; what to study, i.e. instrumentation, data collection, and data analysis
- Implement the research study
- Present and explain the results
Statistical Analysis
There are two types of statistics: descriptive statistics and inferential statistics.
- Descriptive statistics:
- Mathematical techniques for organizing and summarizing a set of numerical data (a minimal sketch follows this list)
- Mean, mode, standard deviation
- Central tendency: a single numerical value used to describe the average of an entire set or sets of scores
- Inferential statistics:
- Parameter estimation
- Significance testing: a negative, falsification approach
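As a minimal sketch of the descriptive side (assuming Python; the scores below are hypothetical, chosen only for illustration):

```python
# Descriptive statistics with Python's standard library.
import statistics

scores = [72, 85, 85, 90, 64, 78, 85, 70]  # hypothetical test scores

mean = statistics.mean(scores)    # central tendency: arithmetic average
mode = statistics.mode(scores)    # most frequent score
sd = statistics.stdev(scores)     # sample standard deviation (spread)

print(f"mean={mean:.2f}, mode={mode}, stdev={sd:.2f}")
```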
Thinking behind inferential statistics:
- If descriptive statistics are computed within a sample, how well do they represent the descriptive statistics of the population? This is the issue of inferential statistics.
- If you study an entire population, there is no need for significance testing. But if the research wants to project the result onto a bigger population, significance testing is necessary.
- Inferential statistics assume that research starts with a simple random sample, i.e. every member has an equal and independent chance of being selected from the population. A sample will never be perfectly representative of the entire population, but with simple random sampling we have a mathematical explanation, i.e. probability, behind it.
Statistical inference refers to a set of mathematical procedures for generalizing the statistical results obtained from a sample to the population from which the sample was presumably drawn. In other words, by using statistical inference techniques, we attempt to use statistics computed from a sample to make inferences about population parameters.
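For instance, a minimal sketch of parameter estimation (assuming Python with scipy; the sample values are hypothetical) that infers a population mean from a sample via a t-based 95% confidence interval:

```python
# Estimate a population mean from a simple random sample.
import statistics
from scipy import stats

sample = [52, 48, 55, 50, 47, 53, 49, 51, 54, 46]  # hypothetical sample
n = len(sample)
mean = statistics.mean(sample)
sem = statistics.stdev(sample) / n ** 0.5  # standard error of the mean

# t-based 95% confidence interval with n-1 degrees of freedom
lo, hi = stats.t.interval(0.95, n - 1, loc=mean, scale=sem)
print(f"sample mean={mean:.1f}, 95% CI=({lo:.1f}, {hi:.1f})")
```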
What does statistical significance mean?
Significance refers to the confidence level for rejecting the null hypothesis while accepting a certain level of error. In other words, if the result is non-significant, the result is unable to reject the null hypothesis, because the chance of obtaining such a result is greater than the level of error that we can accept.
Null
Hypothesis: A prediction that no
relationship between two measured variables will be found, or that no
difference between groups on a measured variable will be found.
Type
I error: The rejection of the
null hypothesis when it is true.
With a Type I error, we reject the null hypothesis, claiming that we have a real difference when in fact the difference is not real but is due to sampling and chance variation. When we state that the treatment is effective but it is not, we are making a Type I error.
Type
II error: the failure to reject
the null hypothesis of no difference, when there is in fact a difference.
| | Do not reject null hypothesis | Reject null hypothesis |
| --- | --- | --- |
| Null hypothesis (H0) is true | Correct decision (1 - α) | Type I error (α) |
| Null hypothesis (H0) is false | Type II error (β) | Correct decision (1 - β) |
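The table can be made concrete with a small simulation. The sketch below (assuming Python with numpy and scipy; all numbers are hypothetical) draws both groups from the same population, so the null hypothesis is true and every rejection at α = .05 is a Type I error; the observed rejection rate should hover around .05:

```python
# Simulate the Type I error rate of a two-sample t test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
trials = 10_000
rejections = 0

for _ in range(trials):
    # Both samples come from the SAME population (the null is true).
    a = rng.normal(loc=50, scale=10, size=30)
    b = rng.normal(loc=50, scale=10, size=30)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        rejections += 1  # any rejection here is a Type I error

print(f"Observed Type I error rate: {rejections / trials:.3f}")  # ~0.05
```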
All statistical tests
begin with certain assumptions that, together with the laws of probability,
enable statisticians to develop the tables we use to determine statistical
significance, for example, tables for the normal distribution, t, chi-square,
and F.
Parametric Statistics vs. Nonparametric Statistics
Instrumentation is the process of operationalizing the variables and scaling (converting observations into numbers). In statistics, numbers are categorized into the following taxonomy:
- Nominal: a verbal label, such as categories
- Ordinal: a hierarchical rating or order
- Interval: consistent differences between two numbers, but zero is not absolute (e.g., temperature in degrees)
- Ratio: like interval, but zero is absolute (e.g., height, weight)
Interval and ratio are cardinal numbers, which are usually described with parametric statistics, such as the mean, standard deviation, t test, and ANOVA. Nominal and ordinal data are described with nonparametric statistics (no addition, subtraction, or square roots), such as the mode, median, range of scores, and chi-square test.
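A brief sketch (Python, hypothetical data) of matching descriptive statistics to the scale of measurement:

```python
# Descriptive statistics matched to scale of measurement; all data invented.
import statistics

test_scores = [72, 85, 90, 66, 78]        # interval/ratio -> parametric
ratings = [1, 2, 2, 3, 2, 1, 3]           # ordinal -> nonparametric
majors = ["math", "science", "math"]      # nominal -> nonparametric

print(statistics.mean(test_scores), statistics.stdev(test_scores))  # mean, SD
print(statistics.median(ratings), max(ratings) - min(ratings))      # median, range
print(statistics.mode(majors))                                      # mode
```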
What are statistical significance tests?
The level of significance does not indicate how likely it is that your research hypothesis is correct; it only helps you make a decision about rejecting the null hypothesis. It has only an indirect bearing on the confirmation of your research hypothesis.
Chi-square: a nonparametric test of statistical significance that is used when the research data are in the form of frequency counts for two or more categories.
- When to use? Chi-square handles only categorical (nominal) data: frequencies, or figures based on them, like percentages or probabilities.
- Why to use?
- Eliminate the alternative
explanation of sampling and chance error
- Test a hypothesis about
whether one variable is related to another
- Test whether the data fit a
particular model or distribution
- Combine probabilities derived
from independent samples in a single study or across studies into a
single probability (another way of doing meta-analysis)
- What's the formula? The formula for chi-square is simple. It involves finding the difference between the observed frequency and the frequency that would be expected by chance, squaring it, dividing it by the expected frequency, and summing these over all the observed cells.
Degrees of freedom: the freedom of the cell data in successive samples to vary once the column and row totals are fixed. In general, chi-square tables have (r-1)(c-1) degrees of freedom, where r is the number of rows and c the number of columns.
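The following is a minimal sketch of the chi-square computation just described, using hypothetical frequency counts in a 2x2 table; it leans on scipy for the expected frequencies and then repeats the sum((O - E)^2 / E) formula by hand.

```python
# Chi-square test of independence on a hypothetical 2x2 frequency table.
import numpy as np
from scipy import stats

observed = np.array([[30, 20],    # invented counts, e.g., pass/fail by group
                     [10, 40]])

chi2, p, dof, expected = stats.chi2_contingency(observed, correction=False)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")  # df = (r-1)(c-1) = 1

# The same statistic by hand, directly from the formula sum((O - E)^2 / E):
print(((observed - expected) ** 2 / expected).sum())
```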
t test: A test of statistical significance that is used to determine whether the null hypothesis that two sample means come from identical populations can be rejected.
Research studies, such as a comparison of two experimental treatments or a comparison of a control group with an experimental group, examine a difference between two means with the t test.
The degrees of freedom for the t test are (n1 + n2 - 2).
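A minimal sketch of an independent-samples t test (Python/scipy; the scores are hypothetical):

```python
# Independent-samples t test on invented control and experimental scores.
from scipy import stats

control      = [70, 75, 68, 72, 74, 69, 71]
experimental = [78, 82, 75, 80, 77, 79, 81]

t, p = stats.ttest_ind(control, experimental)
df = len(control) + len(experimental) - 2     # df = n1 + n2 - 2
print(f"t({df}) = {t:.2f}, p = {p:.4f}")
```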
ANOVA (Analysis of Variance): Testing differences among several means:
- A procedure for determining
whether the difference between the mean scores of two or more groups on a
dependent variable is statistically significant.
- When the groups have been
classified on several independent variables, the procedure can be used to
determine whether each factor and the interactions between the factors
have a statistically significant effect on the dependent variable.
- When to use it? It is widely used for complex experimental designs where more than two groups or multiple conditions are being compared, for example, a 2x2x3 design.
- Why to use?
- It allows us to partition the variance of the study to find the part that is attributable to each of the variables as well as the combined effect of the variables.
- After accounting for these variables and their combinations, the residual variance is a purer measure of sampling and chance error, and it provides for more precise tests of statistical significance.
- What is ANOVA?
The logic used by analysis of variance involves deriving two estimates of the population variance: first, an estimate that includes the effect of one or more of the variables; second, an estimate that is free of it. If there is no effect, on average the two estimates are equal, and the expected value of dividing the first by the second, called an F ratio, is one. However, given a possible effect, and in any event sampling and chance error, the first of these estimates will typically be the larger, so the ratio will usually be greater than one. We consult an F table to determine how much above one to allow for sampling and chance error.
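A minimal sketch of a one-way ANOVA on three hypothetical groups; f_oneway returns the F ratio described above, the between-groups variance estimate divided by the within-groups (error) estimate.

```python
# One-way ANOVA across three invented groups of scores.
from scipy import stats

group_a = [84, 90, 88, 79, 85]
group_b = [78, 75, 80, 77, 74]
group_c = [91, 95, 89, 92, 94]

f, p = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f:.2f}, p = {p:.4f}")   # F near 1 suggests no group effect
```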
About Sampling
A sample is part of the targeted population. The major issue is generalization: the smaller the sample, the less generalizable the conclusion; the bigger the sample size, the smaller the error. The underlying concern is representativeness. Sample size is a function of population size, the desired confidence level, and the statistical power.
Two major types of random probability sampling: simple random sampling, and stratified random and cluster samples:
- Simple random sampling: It permits generalization from the sample to the population it represents.
- Stratified random and cluster samples: They increase confidence in making generalizations to particular subgroups or areas.
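A short sketch contrasting the two approaches on a hypothetical school roster (the strata and sizes are invented for illustration):

```python
# Simple random vs. stratified random sampling from an invented roster.
import random

random.seed(42)
roster = [("student%03d" % i, grade) for grade in (6, 7, 8) for i in range(100)]

# Simple random sample: every student has an equal, independent chance.
simple = random.sample(roster, 30)

# Stratified random sample: draw proportionally within each grade (stratum).
stratified = []
for grade in (6, 7, 8):
    stratum = [s for s in roster if s[1] == grade]
    stratified.extend(random.sample(stratum, 10))

print(len(simple), len(stratified))
```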
Power Analysis: A method to determine the sample size. The analysis makes sure that you have sufficient power to support the conclusion. When you hold the power constant, you can then look at the relationship among the effect size, the variance, and the sample size.
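A minimal sketch of an a priori power analysis for an independent-samples t test, using statsmodels; the assumed effect size (Cohen's d = 0.5), alpha, and target power are illustrative choices, not recommendations.

```python
# A priori power analysis: solve for the per-group n needed to detect an
# assumed medium effect (d = 0.5) at alpha = .05 with power = .80.
from statsmodels.stats.power import TTestIndPower

n = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"required n per group: {n:.0f}")   # roughly 64 per group
```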
About Reliability and Validity
Reliability: it refers to the consistency of an instrument, that is, the extent to which other researchers would arrive at similar results if they studied the same case using exactly the same procedures as the first researcher. In classical test theory, it concerns the amount of measurement error in the score yielded by a test.
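One common reliability index is internal consistency. This sketch computes Cronbach's alpha from a hypothetical 6-person, 5-item score matrix; the alpha coefficient is not named in the notes above and is offered only as one concrete way to quantify consistency.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances)/total variance).
# Rows are respondents, columns are items; all scores invented.
import numpy as np

scores = np.array([[4, 5, 4, 4, 5],
                   [2, 3, 2, 3, 2],
                   [5, 5, 4, 5, 5],
                   [3, 3, 3, 2, 3],
                   [1, 2, 1, 2, 1],
                   [4, 4, 5, 4, 4]])

k = scores.shape[1]
item_vars = scores.var(axis=0, ddof=1)        # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```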
Validity: it refers to the appropriateness, meaningfulness, and usefulness of specific inferences made from test scores.
- Construct Validity: the extent to which inferences from a test's scores accurately reflect the construct that the test is claimed to measure.
- Content Validity: the extent to
which inferences from a test's scores adequately represent the content or
conceptual domain that the test is claimed to measure.
- Ecological Validity: the extent to which the results of an experiment can be generalized from conditions in the research setting to particular naturally occurring conditions.
Internal Validity: The extent to which extraneous variables have been controlled, so that any observed effects can be attributed solely to the treatment variable. In other words, it concerns whether your conclusion about cause and effect is right, and how right it is.
Common rival explanations for an apparent cause-effect relationship include testing and regression.
Threats to Internal Validity: (notes from EDPSY 475)
- Sampling and chance error
- Hawthorne Effect, i.e. novelty
effect
- Pygmalion Effect, i.e.
expectancy effect
- John Henry Effect: In
experiments, a situation in which control group participants perform
beyond their usual level because they perceive that they are in
competition with the experimental group
- Resentful Demoralization: the
control group performs worse
- Experimental Diffusion: the control group adopts the treatments from the experimental groups
- Treatment fidelity: what you plan to do differs from what you actually do; the treatment actually carried out is very different from the original plan
Campbell and Stanley: Classical threats to internal validity:
- History: something is happening simultaneously with your experiment. In other words, something changes in the situation that affects the statistical differences.
- Maturation: something happens in the situation because of natural development or growth, e.g., the subjects' abilities mature
- Testing (Pretesting), i.e.
students become test wise: Pretest can prompt somebody to improve their
ability
- Instrumentation: different
instruments used in pretest and posttest
- Experimental Mortality:
Attrition
- Regression to the mean: this concerns the tendency of scores to fluctuate: low scorers have more tendency to go up, and high scorers have more tendency to go down.
- Selection: the effect is attributed to the fact that different types of students are selected for the two groups. Randomization is the best safeguard against differential selection.
- Selection/Maturation
Interaction: one group is changing naturally, or two groups have different
maturation rates.
External Validity: The extent to which the results of a research study can be generalized to individuals and situations beyond those involved in the study. It concerns whether you can generalize or extend your findings.
- Interaction with testing (pre-testing): without the pretest, people will not pay attention to a certain treatment. In other words, the combination of the pretest and the treatment makes the results different. Thus, the result cannot be applied outside the experiment, because the population at large does not receive the pretest.
- Selection-Treatment interaction: one group is more receptive to the treatment. It is possible that the treatment works better on a certain type of people.
- Multiple Treatment Interference: this is common in a single-subject study. The same group is used again and again to try different treatments, e.g., A, B, C, D. However, if D works, we cannot prove that D alone works; A, B, and C may have helped. In other words, the other treatments may interfere with a certain treatment. Therefore, the findings cannot be generalized with confidence to a situation in which treatment D is administered alone.
The relationship between internal validity and external validity:
Usually we use controlled conditions to gain internal validity, but we lose some external validity; we use field studies to gain external validity, but we lose some internal validity.
Threats to Internal/External Validity in Observation
- Observation Reactivity:
Hawthorne Effects
- Reliability decay: the tendency
for observational data recorded during the latter phases of data
collection to be less reliable than those collected earlier. As an
observer, you become less sharp as the day goes on.
- Observer drift (rater drift): over time, the observer changes the criteria or definitions. How to avoid it? The rater should grade the same question across all cases before moving to the next; use a second observer to observe occasionally at different points in time.
- Consensual drift: observers influence one another's criteria.
References:
Gall, M. D., Borg, W. R., & Gall, J. P. (1996). Educational research: An introduction (6th ed.). White Plains, NY: Longman.
Anderson (1990) explained that research in education is a disciplined attempt to address questions or solve problems through the collection and analysis of primary data for the purpose of description, explanation, generalization, and prediction (p. 4).
Driscoll (1984)
summarized eight paradigms for research in instructional systems:
- Experimental research design: this research is designed to establish causal inferences about a phenomenon of interest. The focus of instructional systems research is to examine the effects on learning of single or interacting instructional variables, e.g., the effects of a concept mapping strategy on students' performance in comprehension.
- Quasi-experimental design: most of the time in instructional research, the random assignment of students to different treatment conditions is not possible. Internal validity will then be an issue.
- Meta-analysis: using previously reported research findings for a statistical analysis, i.e., a statistical means for synthesizing research findings to determine the effect size.
- Case Study/Ethnography: instead
of controlling the contextual influence on learning variables, case
studies and ethnographical studies describe those contextual factors in a
natural setting and investigate the interactional pattern of those
contextual factors in a learning situation.
- Systems-based evaluation: study
of the design, development, implementation, and effectiveness of a new
instructional product or program in order to improve or adjust the product
or program
- Cost-effectiveness and cost
analysis
- Technique and model development
and validation
Driscoll and Dick (1999) categorized two inquiry types in the papers published in the IT field: empirical and nonempirical. Nonempirical inquiry includes literature reviews, conceptual articles, and descriptions of projects, models, tools, or software. Empirical inquiry includes:
- Case study: A study of how or
why something occurs in a particular situation, a defined boundary
- Experiment: A study of the
effect of a manipulated variable on an observed variable
- Survey: A study describing the
distribution of responses to a questionnaire or exploring relations among
variables
- Program evaluation: A study to
determine the impact or outcomes of an educational program
- Development Research: A study
of design, development, or evaluation processes, tools or models, and the
conditions that facilitate their use
- Qualitative-Naturalistic: A
study aimed at developing an understanding of human systems, involving the
collection of non-numerical data and detailed rich descriptions of natural
events.
What is developmental research?
Instructional development connotes two major definitions:
- A process of producing instructional materials: "the process of translating the design specifications into physical form" (Seels & Richey, 1994)
- The ISD process: "the process of analyzing needs, determining what content must be mastered, establishing educational goals, designing materials to reach the objectives, and trying out and revising the program in terms of learner achievement" (Heinrich, Molenda, & Russell, 1993)
Developmental research
intends to produce knowledge with the ultimate aim of improving the
processes of instructional design, development and evaluation.
Seels and Richey (1994) defined it as "the systematic study of designing, developing and evaluating instructional programs, processes, and products that must meet the criteria of internal consistency and effectiveness" (p. 127). Three major endeavors of developmental research:
- Performing and studying the processes of design, development, and/or evaluation
- Study of the impacts of someone else's instructional design and development efforts
- Study of the instructional design, development, and evaluation process as a whole, or of a particular process component
According to Richey and
Nelson (1996), there are two types of developmental research:
|                             | Type One              | Type Two      |
| Conclusion                  | Contextually Specific | Generalizable |
| Focus                       |                       |               |
| Product                     |                       |               |
| Major Research Methods      |                       |               |
| Major Evaluation Components |                       |               |
Ideas abstracted from Richey and Nelson (1996)
To Richey and Nelson (1996), non-developmental research refers to instructional psychology studies, media or delivery system comparison and impact studies, message design and communication studies, policy analysis or formation studies, and research on the profession.
The Developmental Research Process
Richey and Nelson (1996) provided methodological direction for developmental research:
- Defining the Research Problem
The focus of the research problem is on the particular aspect of
the design, development, or evaluation process instead of a particular variable
that impacts learning, or the impact of a type of media. The research topics
include one of the following dimensions in the instructional development:
design, development or evaluation model or process, instructional product,
program, process or tool, identification of general development principles or
situation-specific recommendations.
- Review of Related Literature
The topics under review that may be related to developmental
research include: procedural models, characteristics of effective instructional
products, programs, or development systems, factors regarding the use of the
target development processes, factors regarding the implementation and
management of instruction.
- Research Procedure
Population and Samples
The potential research participants can be designers, developers, and evaluators; instructors and program facilitators; and learners. These participants can be the sources of the data that shed light on the research under investigation. Usually the project itself is the object of the developmental study.
Research Design
This addresses the planning of the process: How and when will the development take place? When will the instruction be implemented? How and when will the evaluation take place?
For example, an experimental study should consider the operationalization of the variables: What is the treatment? How long will the treatment last? How will the variables be measured? What are the validity and reliability issues of the measures?
Data Collection and Analysis Procedures
The information sources related to the research may include: documentation of the design, development, and evaluation tasks; documentation of the conditions under which development takes place; needs assessment, formative evaluation, and summative evaluation; profiling of the target populations; profiling of the design/implementation contexts; and the impacts of the instruction and the context associated with those impacts.
- Results and Conclusions
Developmental research contributes to the field's knowledge base with more understanding of new procedural models, generalizable principles, or the lessons learned in a particular project.
What are the future directions of developmental research?
According to Richey and Nelson (1996), constructivism has shifted the attention of developmental topics to the role of context in design and to the development of anchored instruction. Also, research on the development process emphasizes designer decision-making, knowledge acquisition tools, and the use of automated development tools.
- Research on the design and designer decision-making process intends to contribute understanding of the design process, producing design models that match actual design activities, and identifying the impacts of various design environments. Research topics include exploration of the design task environment, the cognitive processes of instructional designers, and the role of knowledge in the design process.
- Research on techniques for
knowledge acquisition focuses
on producing new content analysis tools and procedures, determining the
conditions under which they can be best used. Designers, subject matter
experts and the design tasks are units of analysis in these studies.
Cognitive science theories of knowledge representation (Quillian's semantic networks; Anderson's declarative and procedural knowledge) and of the process of knowledge acquisition (Atkinson & Shiffrin, 1968; Fitts's (1966) three stages of skill acquisition; Anderson's ACT-R, 1993) provide directions and advanced techniques on how to elicit the subject matter experts' knowledge required for problem solving in a specific domain and how to represent and describe experts' knowledge performance and knowledge base.
Recently the constructivist viewpoint on knowledge has helped designers to explore the different aspects of environmental factors in knowledge acquisition, e.g., the social and cultural (Vygotsky's signification and scaffolding), learning as an enculturation process into a community of practice (Brown, Collins, and Duguid's situated cognition), and tool use and collaborative learning (Lave's distributed cognition, Bandura's modeling, Vygotsky's externalization via social interaction). Those explorations give rise to research on the context where the knowledge is applied and used, and on the social interaction with the tools, rules, and members of the community, so that through such understanding designers have a more solid knowledge base on how to simulate such contexts in the learning environment. This type of research can focus on the effectiveness of different techniques and tools for task analysis or instructional design.
- Research on knowledge-based
design systems
focuses on the production and testing of tools that would change design
procedures, e.g. systems and tools that support the instructional
processes.
The systems and tools under exploration include Merrill and Li's ID Expert (1993; 1998) and Kearsley's (1985) expert system tools for instructional design.
Research problems that
can be explored regarding the developmental research in the future include:
- Individual tools for
self-contained instructional design tasks
- Integrated systems that provide
decision support and structure for the process
- Integrated systems that act as drafting boards for the design process
References:
Driscoll, M. P. (1984). Alternative paradigms for research in instructional systems. Journal of Instructional Development, 7(4), 2-5.
Driscoll, M.P., &
Dick, W. (1999). New research paradigms in instructional technology: An
inquiry. Educational Technology, Research and Development, 47 (2), 7-18.
Richey, R. C., & Nelson, W. A. (1996). Developmental research. In D. H. Jonassen (Ed.), Handbook of research for educational communications and technology (pp. 1213-1242). New York: Macmillan.
Seels, B. B., & Richey, R. C. (1994). Instructional technology: The definition and domains of the field. Washington, D.C.: Association of Educational Communications and Technology.