BioScience

A New Method for Assessing Critical Thinking in the Classroom



Abstract

To promote higher-order thinking in college students, we undertook an effort to learn how to assess critical-thinking skills in an introductory biology course. Using Bloom's taxonomy of educational objectives to define critical thinking, we developed a process by which (a) questions are prepared with both content and critical-thinking skills in mind, and (b) grading rubrics are prepared in advance that specify how to evaluate both the content and critical-thinking aspects of an answer. Using this methodology has clarified the course goals (for us and the students), improved student metacognition, and exposed student misconceptions about course content. We describe the rationale for our process, give detailed examples of the assessment method, and elaborate on the advantages of assessing students in this manner.

Several years ago, we launched a journey toward understanding what it means to teach critical thinking. At that time, we were both biology instructors working together on teaching an introductory biology course at Duke University, and we wanted to help students develop higher-order thinking skills—to do something more sophisticated than recite back to us facts they had memorized from lectures or the textbook (i.e., what many of them had been asked to do in previous biology courses).

The justification for our journey is well supported by the science education literature. Many college and university faculty believe that critical thinking should be a primary objective of a college education (Yuretich 2004), and numerous national commissions have called for critical-thinking development (e.g., AAAS 1989, NAS–NRC 2003). Yet when trying to implement critical thinking as an explicit goal in introductory biology, we found ourselves without a well-defined scheme for its assessment.

And we were not alone. Despite the interest among faculty in critical thinking as a learning goal, many faculty believe that critical thinking cannot be assessed or they have no method for doing so (Beyer 1984, Cromwell 1992, Aviles 1999). Consider a 1995 study from the Commission on Teacher Credentialing in California and the Center for Critical Thinking at Sonoma State University (Paul et al. 1997). These groups initiated a study of college and university faculty throughout California to assess current teaching practices and knowledge of critical thinking. They found that although 89 percent of the faculty surveyed claimed that critical thinking is a primary objective in their courses, only 19 percent could explain what critical thinking is, and only 9 percent of these faculty were teaching critical thinking in any apparent way (Paul et al. 1997). This observation is supported by evidence from other sources more specific to the sciences, which suggests that many introductory science, technology, engineering, and math (STEM) courses do not encourage the development of critical-thinking abilities (Fox and Hackerman 2003, Handelsman et al. 2004).

Why is it that so many faculty want their students to think critically but are hard-pressed to provide evidence that they understand critical thinking or that their students have learned how to do it?

We identified two major impediments to the assimilation of pedagogical techniques that enhance critical-thinking abilities. First, there is the problem of defining “critical thinking.” Different definitions of the term abound (Facione 1990, Aretz et al. 1997, Fisher and Scriven 1997). Not surprisingly, many college instructors and researchers report that this variability greatly impedes progress on all fronts (Beyer 1984, Resnick 1987). However, there is also widespread agreement that most of the definitions share some basic features, and that they all probably address some component of critical thinking (Potts 1994). Thus, we decided that generating a consensus definition is less important than simply choosing a definition that meets our needs and consistently applying it. We chose Bloom's taxonomy of educational objectives (Bloom 1956), which is a well-accepted explanation for different types of learning and is widely applied in the development of learning objectives for teaching and assessment (e.g., Aviles 1999).

Bloom's taxonomy delineates six categories of learning: basic knowledge, secondary comprehension, application, analysis, synthesis, and evaluation (box 1). The first two categories, basic knowledge and secondary comprehension, do not require critical-thinking skills, but the last four—application, analysis, synthesis, and evaluation—all require the higher-order thinking that characterizes critical thought. The definitions for these categories provide a smooth transition from educational theory to practice by suggesting specific assessment designs that researchers and instructors can use to evaluate student skills in any given category. Other researchers and even entire departments have investigated how to apply Bloom's taxonomy to refine questions and drive teaching strategies (e.g., Aviles 1999, Anderson and Krathwohl 2001). Nonetheless, the assessments developed as part of these efforts cannot be used to measure critical thinking independent of content.

The second major impediment to developing critical thinking in the classroom is the difficulty that faculty face in measuring critical-thinking ability per se. It is relatively straightforward to assess students' knowledge of content; however, many faculty lack the time and resources to design assessments that accurately measure critical-thinking ability (Facione 1990, Paul et al. 1997, Aviles 1999). A large body of literature already exists showing that critical thinking can be assessed (e.g., Cromwell 1992, Fisher and Scriven 1997). The critical-thinking assessments that have been most rigorously tested are subject-independent assessments. These assessments presumably have the advantage of allowing measurements of critical-thinking ability regardless of the context, thus making it possible to compare different groups of people (Aretz et al. 1997, Facione et al. 2000). Previous studies have demonstrated a positive correlation between the outcomes of these subject-independent tests and students' performance in a course or on a task (e.g., Onwuegbuzie 2001). Such studies serve to illustrate that critical thinking per se is worth assessing, or at least that it has some relationship to students' understanding of the material and to their performance on exams. Still, generalized assessments of critical-thinking ability are almost never used in a typical classroom setting (Haas and Keeley 1998).

There are several problems with such general tests, including the following:

  • Faculty doubt that the measurements indicate anything useful about discipline-specific knowledge.
  • Administering these tests takes time away from the content of the course and can be costly; thus, they are viewed as “wasted” time.
  • Most faculty lack the time to learn the underlying structure and theory behind the tests, and so it is unclear to them why such a test would be worthwhile.

Recognizing the problems with standardized, discipline-independent assessments of critical thinking, we developed an assessment methodology to enable the design of questions that clearly measure both the content we want students to know and the cognitive skills we want them to obtain. Ideally, this methodology should allow for discipline-specific (i.e., content-based) questions in which the critical-thinking component can be explicitly dissected and scored. Furthermore, we built on the work of others who have used Bloom's taxonomy to drive assessment decisions by using this taxonomy to explicitly define the skills that are required for each question. Finally, we crafted a system for developing scoring rubrics that allows for independent assessment of both the content and the skills required for each question. It is this methodology that we have begun applying to introductory biology.
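To make the scoring scheme concrete, the sketch below (not from the article) models in Python how a rubric entry might record the Bloom category a question targets and award content points and critical-thinking points independently. Every name, level threshold, and point value here is a hypothetical illustration of the approach the abstract describes, not the authors' actual rubric.

```python
from dataclasses import dataclass

# Bloom's six categories (Bloom 1956); the article treats the last four
# as requiring critical thinking.
BLOOM_LEVELS = ["knowledge", "comprehension",
                "application", "analysis", "synthesis", "evaluation"]

@dataclass
class RubricItem:
    question_id: str
    bloom_level: str       # category of thinking the question is written to target
    content_points: int    # maximum points for correct course content
    skill_points: int      # maximum points for the critical-thinking component

    def requires_critical_thinking(self) -> bool:
        # Application and above count as critical thinking in this sketch.
        return BLOOM_LEVELS.index(self.bloom_level) >= 2

def score(item: RubricItem, content_correct: bool, skill_demonstrated: bool) -> dict:
    """Score the content and critical-thinking aspects of an answer independently."""
    return {
        "question": item.question_id,
        "content": item.content_points if content_correct else 0,
        "skill": item.skill_points if skill_demonstrated else 0,
    }

# Hypothetical exam question pitched at the 'analysis' level.
q3 = RubricItem("Q3", "analysis", content_points=4, skill_points=6)
print(q3.requires_critical_thinking())                      # True
print(score(q3, content_correct=True, skill_demonstrated=False))
```

The point of separating `content_points` from `skill_points` is that a student who recalls the right facts but does not perform the targeted analysis (or vice versa) receives credit for exactly what was demonstrated, which is the independence of content and skill that the methodology aims for.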
