四月青年社区 (April Youth Community)


Views: 1815 | Replies: 12

[Public Intellectual Watch] Do Chinese Children Rank Last in Imagination?

Posted on 2010-12-11 11:24

Introduction:

[Image: 4.jpg]

I first saw the claim that "Chinese children rank last in imagination" on Weibo, in this post from New Weekly (新周刊):

@新周刊: Looking at this picture, what comes to mind? In 2009, the International Education Progress Evaluation Organization surveyed 21 countries worldwide and found that Chinese primary and secondary school students ranked last in imagination. The answer will be revealed shortly. (Posted November 24 at 10:35.)



How is imagination supposed to be measured? That made me curious, but a few minutes of searching turned up a debunking post by the scholar Xu Lei, published at 6:19:41 on 2010-8-8: http://www.sciencenet.cn/m/user_content.aspx?id=350944. With detailed data, it shows the story was nothing more than an attention-grabbing fabrication, so I simply reposted the link and thought no more of it.

But only a day later, by 15:51 on November 25, the item had already been forwarded 4,075 times and drawn 2,223 comments on the original post. I then saw it picked up by the Viewpoint page of People's Daily and others, and only then did I realize how fast and how far the rumor had spread, far beyond anything I had imagined.

One netizen responded on my Weibo: "Every time, you people say a lot of these are rumors, so why doesn't anyone collect the evidence into a chain of links and post something forceful for us to forward? Otherwise, with celebrities and media all repeating it, those of us not in the know have no idea whom to believe." After I pointed her to the debunking link below, she replied: "They're all just links; I don't know whom to trust, and this isn't the only case like it. Debunkings are also read over the same internet, so they carry no more credibility. Meanwhile countless celebrities and media outlets are spreading the story, and that evidence looks far more convincing."

Such a reaction is surely not an isolated case. In this age of information overload, how should we absorb information, how should we vet it, and what responsibility and duty should legitimate media bear? It is just like the old "Sino-Japanese summer camp: Chinese children have poor character" story: how many people knew that news by heart, and how many legitimate outlets reprinted it, before it was finally exposed as an attention-grabbing fake?
Who manufactures such rumors, and to what end? That is worth pondering.



Xu Lei's blog post

http://www.sciencenet.cn/m/user_content.aspx?id=350944

(Reposted with the author's permission)

Data from the 2006 report of the international student assessment (covering science and mathematics)
A recent blog post on ScienceNet, "Whose fault? Chinese youth rank last in the world in imagination!", has attracted a great deal of attention. It claims: "In 2009, a survey of 21 countries by the International Education Progress Evaluation Organization showed that Chinese children ranked first in the world in computation, last in imagination, and fifth from the bottom in creativity. Among primary and secondary school students, only 4.7% consider themselves curious and imaginative, and only 14.9% hope to cultivate imagination and creativity."

Frankly, I am deeply skeptical of this conclusion, which cites no data source, and I do not believe Chinese youth lack imagination. Indeed, even among older people presumably more "poisoned" by exam-oriented education, many of those I encounter on ScienceNet have imaginations that are anything but deficient; some are so fanciful that I wonder how they ever graduated from university, or secondary school, or primary school. Joking aside: as a natural-science professional, I am in the habit of backing up what I say, so I wanted to find the organization's original report and verify the claim. Information age or not, hearsay and outright fabrication are hardly rare, and that made me all the more curious to check.
Unfortunately, search as I might, I could not find any report matching the figures attributed to this "International Education Progress Evaluation Organization", despite trying various combinations of "international education/educational assessment/evaluation" with "21 countries", "China", "calculation", "creativity", and "imagination". The most plausible match I found was the Programme for International Student Assessment.

The Programme for International Student Assessment (PISA) is a worldwide evaluation of 15-year-old school children's scholastic performance, performed first in 2000 and repeated every three years. It is coordinated by the Organisation for Economic Co-operation and Development (OECD), with a view to improving educational policies and outcomes.
Wikipedia link:
http://en.wikipedia.org/wiki/Pro ... _Student_Assessment


PISA's page at the U.S. National Center for Education Statistics (NCES):
http://nces.ed.gov/surveys/pisa/



PISA's official OECD homepage: http://www.pisa.oecd.org

From that site you can download the detailed assessment frameworks and results for 2000, 2003, and 2006, as well as the 2009 assessment framework. The 2009 results have not yet been posted on the official site, but according to the NCES site (see the second paragraph on the NCES homepage), they will be released in December 2010, that is, later this year.

I should point out that the data in the report I found differ greatly from the figures in that blog post, and that the organization's 2009 statistics are not yet out. Since the survey runs every three years, the most recent data are from 2006. That report does not cover students in mainland China, though it does include Taiwan, Hong Kong, and Macau. Notably, the 2009 framework explicitly extends the assessment of innovative ability to include lifelong learning:
• Innovative literacy concept, which is concerned with the capacity of students to apply knowledge and skills in key subject areas and to analyze, reason and communicate effectively as they pose, solve and interpret problems in a variety of situations.
• Relevance to lifelong learning, which does not limit PISA to assessing students’ curricular and cross-curricular competencies, but also asks them to report on their own motivation to learn, their beliefs about themselves and their learning strategies.
Moreover, the 2009 round also surveys secondary school students in Shanghai, mainland China, so I believe the 2009 results will be well worth the wait. Even so, the 2006 PISA statistics remain worth a look, and they are at least more credible than the figures in the blog post mentioned at the start.

Everything below is taken from the 2006 PISA report, which can be downloaded at:
http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2008016
[Image 4-1.jpg: Jurisdictions participating in PISA 2006]
The significance of PISA (from p. 3 of the report)
To provide a critical external perspective on the achievement of U.S. students through comparisons with students of other nations, the United States participates at the international level in PISA, the Progress in International Reading Literacy Study (PIRLS), and the Trends in International Mathematics and Science Study (TIMSS). TIMSS and PIRLS seek to measure students’ mastery of specific knowledge, skills, and concepts and are designed to reflect curriculum frameworks in the United States and other participating jurisdictions.

PISA provides a unique and complementary perspective to these studies by not focusing explicitly on curricular outcomes, but on the application of knowledge in reading, mathematics, and science to problems with a real-life context (OECD 1999). The framework for each subject area is based on concepts, processes, and situations or contexts (OECD 2006). For example, for science literacy, the concepts included are physics, chemistry, biological sciences, and earth and space sciences. The processes are centered on the ability to acquire, interpret, and act on evidence such as describing scientific phenomena and interpreting scientific evidence. The situations or contexts are those (either personal or educational) in which students might encounter scientific concepts and processes. Assessment items are then developed on the basis of these descriptions (see appendix A for examples).

PISA uses the terminology of “literacy” in each subject area to denote its broad focus on the application of knowledge and skills. For example, PISA seeks to assess whether 15-year-olds are scientifically literate, or to what extent they can apply scientific knowledge and skills to a range of different situations they may encounter in their lives. Literacy itself refers to a continuum of skills—it is not a condition that one has or does not have (i.e., literacy or illiteracy). Rather, each person’s skills place that person at a particular point on the literacy continuum (OECD 2006).

The target age of 15 allows jurisdictions to compare outcomes of learning as students near the end of compulsory schooling. PISA’s goal is to answer the question “what knowledge and skills do students have at age 15?” taking into account schooling and other factors that may influence their performance. In this way, PISA’s achievement scores represent a “yield” of learning at age 15, rather than a direct measure of attained curriculum knowledge at a particular grade level, because 15-year-olds in the United States and elsewhere come from several grade levels (figure 3 and table C-1).

Methodology of PISA 2006 (from p. 4 of the report)
PISA 2006 was a 2-hour paper-and-pencil assessment of 15-year-olds collected from nationally representative samples in participating jurisdictions. Like other large scale assessments, PISA was not designed to provide individual student scores, but rather national and group estimates of performance. In PISA 2006, every student answered science items. Not every student answered both reading and mathematics items as these were distributed across different versions of the test booklets (for more information on PISA 2006’s design, see the technical notes in appendix B). PISA 2006 was administered between September and November 2006. The U.S. sample included both public and private schools, randomly selected and weighted to be representative of the nation. In total, 166 schools and 5,611 students participated in PISA 2006 in the United States. The overall weighted school response rate was 69 percent before the use of replacement schools. The final weighted student response rate was 91 percent (see the technical notes in appendix B for additional details on sampling, administration, response rates, and other issues).
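The "weighted" response rates quoted above are computed over sampling weights rather than raw school counts. A minimal sketch of that arithmetic, with made-up weights for illustration (the report's actual weighting scheme is far more elaborate):

```python
def weighted_response_rate(weights, responded):
    """Weighted response rate: the share of the total sampling
    weight accounted for by units that actually responded."""
    total = sum(weights)
    answered = sum(w for w, r in zip(weights, responded) if r)
    return answered / total

# Four hypothetical schools with unequal sampling weights; the
# two responding schools carry 150 of the 250 weight units.
rate = weighted_response_rate([100, 50, 60, 40],
                              [True, True, False, False])
print(rate)  # → 0.6
```

Note that the weighted rate (60 percent here) can differ from the raw count rate (2 of 4 schools, or 50 percent) whenever responding units carry above- or below-average weight.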
This report provides results for the United States in relation to the other jurisdictions participating in PISA 2006, distinguishing OECD jurisdictions and non-OECD jurisdictions. All differences described in this report have been tested for statistical significance at the 0.05 level. Additional information on the statistical procedures used in this report is provided in the technical notes in appendix B. For further results from PISA 2006, see the OECD publication PISA 2006: Science Competencies for Tomorrow’s World (Vols. 1 and 2) available at http://www.pisa.oecd.org (OECD, 2007a, 2007b).
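The report's "tested for statistical significance at the 0.05 level" can be illustrated with a simple two-sided z-test on two group means. This is my own sketch of the mechanics, and the standard errors below are hypothetical placeholders, not values from the report:

```python
from math import sqrt

def significant_difference(mean1, se1, mean2, se2):
    """Two-sided z-test at the 0.05 level for the difference
    between two independent group means, given each mean's
    standard error; 1.96 is the two-sided critical value."""
    z = (mean1 - mean2) / sqrt(se1 ** 2 + se2 ** 2)
    return abs(z) > 1.96

# U.S. science literacy mean 489 vs. OECD average 500, with
# hypothetical standard errors:
print(significant_difference(489, 4.2, 500, 0.5))  # → True
# U.S. math literacy 2003 vs. 2006 (483 vs. 474): with large
# enough standard errors, the change is not "measurable":
print(significant_difference(483, 4.0, 474, 4.0))  # → False
```

This is why a 9-point change can be "no measurable change" while an 11-point gap is significant: what matters is the gap relative to the standard errors, not the raw difference.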

PISA 2006's detailed definition of Science Literacy (p. 5)
“an individual’s scientific knowledge and use of that knowledge to identify questions, to acquire new knowledge, to explain scientific phenomena, and to draw evidence-based conclusions about science related issues, understanding of the characteristic features of science as a form of human knowledge and enquiry, awareness of how science and technology shape our material, intellectual, and cultural environments, and willingness to engage in science-related issues, and with the ideas of science, as a reflective citizen (OECD 2006, p.12).”

The three subscales:
• Identifying scientific issues includes recognizing issues that are possible to investigate scientifically; identifying keywords to search for scientific information; and recognizing the key features of a scientific investigation.
• Explaining phenomena scientifically covers applying knowledge of science in a given situation; describing or interpreting phenomena scientifically and predicting changes; and identifying appropriate descriptions, explanations, and predictions.
• Using scientific evidence includes interpreting scientific evidence and making and communicating conclusions; identifying the assumptions, evidence, and reasoning behind conclusions; and reflecting on the societal implications of science and technological developments.

[Image: 4-2.jpg]

The six proficiency levels for Science Literacy (p. 7)

(The report also tabulates the distribution of students across Science Literacy levels by jurisdiction, which I omit here.)

[Image: 4-3.jpg]

Results (p. 6)


Combined science literacy scores are reported on a scale from 0 to 1,000 with a mean set at 500 and a standard deviation of 100. Fifteen-year-old students in the United States had an average score of 489 on the combined science literacy scale, lower than the OECD average score of 500 (tables 2 and C-2). U.S. students scored lower in science literacy than their peers in 16 of the other 29 OECD jurisdictions and 6 of the 27 non-OECD jurisdictions. Twenty-two jurisdictions (5 OECD jurisdictions and 17 non-OECD jurisdictions) reported lower scores than the United States in science literacy.
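Since the reporting scale fixes the mean at 500 and the standard deviation at 100, any score can be read directly as a number of standard deviations from the OECD mean. A sketch of that arithmetic, mine rather than the report's:

```python
def to_pisa_scale(z):
    """Map a standardized score (z-score relative to the OECD
    distribution) onto the PISA reporting scale, which fixes
    the mean at 500 and the standard deviation at 100."""
    return 500 + 100 * z

# A student exactly one standard deviation below the OECD mean:
print(to_pisa_scale(-1.0))  # → 400.0
# The U.S. average of 489 is therefore about a tenth of a
# standard deviation below the OECD mean of 500.
```

The same convention explains the percentile figures later in the report: a 90th-percentile score of 628 sits roughly 1.3 standard deviations above the OECD mean.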

When comparing the performance of the highest achieving students—those at the 90th percentile— there was no measurable difference between the average score of U.S. students (628) compared to the OECD average (622) on the combined science literacy scale (table C-3). Twelve jurisdictions (9 OECD jurisdictions and 3 non-OECD jurisdictions) had students at the 90th percentile with higher scores than the United States on the combined science literacy scale.

At the other end of the distribution, among low achieving students at the 10th percentile, U.S. students scored lower (349) than the OECD average (375) on the combined science literacy scale. Thirty jurisdictions (21 OECD jurisdictions and 9 non-OECD jurisdictions) had students at the 10th percentile with higher scores than the United States on the combined science literacy scale.
U.S. students also had lower scores than the OECD average score for two of the three scientific literacy subscales (explaining phenomena scientifically (486 versus 500) and using scientific evidence (489 versus 499)). Twenty-five jurisdictions (19 OECD and 6 non-OECD jurisdictions) had a higher average score than the United States on the explaining phenomena scientifically subscale, and 20 jurisdictions (14 OECD and 6 non-OECD jurisdictions) had a higher average score than the United States on the using scientific evidence subscale. There was no measurable difference in the performance of U.S. students compared with the OECD average on the identifying scientific issues subscale (492 versus 499). However, 18 jurisdictions (13 OECD and 5 non-OECD jurisdictions) scored higher than the United States on the identifying scientific issues subscale.

PISA 2006's detailed definition of Mathematics Literacy (p. 11)
“an individual’s capacity to identify and understand the role that mathematics plays in the world, to make well-founded judgments and to use and engage with mathematics in ways that meet the needs of that individual’s life as a constructive, concerned and reflective citizen (OECD, 2006, p.12).”
Results (p. 12):
In 2006, the average U.S. score in mathematics literacy was 474 on a scale from 0 to 1,000, lower than the OECD average score of 498 (tables 3 and C-7). Thirty-one jurisdictions (23 OECD jurisdictions and 8 non-OECD jurisdictions) had a higher average score than the United States in mathematics literacy in 2006. In contrast, 20 jurisdictions (4 OECD jurisdictions and 16 non-OECD jurisdictions) scored lower than the United States in mathematics literacy in 2006.

When comparing the performance of the highest achieving students—those at the 90th percentile— U.S. students scored lower (593) than the OECD average (615) on the mathematics literacy scale (table C-8). Twenty-nine jurisdictions (23 OECD jurisdictions and 6 non-OECD jurisdictions) had students at the 90th percentile with higher scores than the United States on the mathematics literacy scale. At the other end of the distribution, among low achieving students at the 10th percentile, U.S. students scored lower (358) than the OECD average (379) on the mathematics literacy scale. Twenty-six jurisdictions (18 OECD jurisdictions and 8 non-OECD jurisdictions) had students at the 10th percentile with higher scores than the United States on the mathematics literacy scale. There was no measurable change in either the U.S. mathematics literacy score from 2003 to 2006 (483 versus 474) or the U.S. position compared to the OECD average, although scores in 11 other jurisdictions did change (table C-7).

[Images 4-4.jpg and 4-5.jpg: Figures 5a and 5b]
Gender differences (p. 13)
In the United States, no measurable difference was observed between the scores for 15-year-old males (489) and females (489) on the combined science literacy scale (figure 6, table C-9). Males had a higher average score than females in 8 jurisdictions (6 OECD jurisdictions and 2 non-OECD jurisdictions), while females had a higher average score than males in 12 jurisdictions (2 OECD jurisdictions and 10 non-OECD jurisdictions). The OECD average was higher for males (501) than females (499) on the combined science literacy scale.
In the United States, no measurable difference was found in the percentage of U.S. females (1.5 percent) and males (1.6 percent) scoring at level 6 (the highest level) on the combined science literacy scale (table C-10). Again, the percentages of U.S. females scoring at (16.2 percent) or below (6.8 percent) level 1 (the lowest levels) did not measurably differ from those for their male peers (8.3 percent below level 1 and 17.4 percent at level 1) on the combined science literacy scale.
On average across the OECD jurisdictions, females scored higher than males on the identifying scientific issues subscale (508 versus 490) and the using scientific evidence subscale (501 versus 498), while males scored higher than females on the explaining phenomena scientifically subscale (508 versus 493) (table C-11). In the United States, females had a higher average score than males on the identifying scientific issues subscale (500 versus 484), while males had a higher average score than females on the explaining phenomena scientifically subscale (492 versus 480). There was no measurable difference between U.S. 15-year-old males and females on the using scientific evidence subscale (486 versus 491).

Racial/ethnic differences within the United States (p. 13)
(Because most countries have complex ethnic compositions, the report breaks results down by race/ethnicity only for the United States.)
On the combined science literacy scale, Black (non-Hispanic) students and Hispanic students scored lower, on average, than White (non-Hispanic) students, Asian (non-Hispanic) students, and students of more than one race (non-Hispanic) (figure 7, table C-12). On average, Hispanic students scored higher than Black (non-Hispanic) students, while White (non-Hispanic) students scored higher than Asian (non-Hispanic) students. This pattern of performance on PISA 2006 by race/ethnicity is similar to that found in PISA 2000 and PISA 2003 (Lemke et al. 2001, 2004).

On the combined science literacy scales, Black (non-Hispanic) students, Hispanic students, and American Indian/Alaska Native (non-Hispanic) students scored below the OECD average, while scores for White (non-Hispanic) students were above the OECD average. On average, the mean scores of White (non-Hispanic), Asian (non-Hispanic), and students of more than one race (non-Hispanic) were in the PISA level 3 proficiency range for the combined science literacy scale; the mean scores of Hispanic, American Indian/Alaska Native (non-Hispanic), and Native Hawaiian/Other Pacific Islander (non-Hispanic) students were in the level 2 proficiency range; and the mean score for Black (non-Hispanic) students was at the top of the level 1 proficiency range.

[Image: 4-6.jpg]

Original post: http://www.sciencenet.cn/m/user_content.aspx?id=350944
Posted on 2010-12-11 11:27
That's just self-flagellating talk from the worship-the-West crowd. Don't believe it.

Posted on 2010-12-11 11:35
There's imagination; it just gets snuffed out by the education system.

Posted on 2010-12-11 11:57
There's imagination; it just gets snuffed out by the education system.
天人合一 posted on 2010-12-11 11:35

    Your existence proves that Chinese children's imagination is first in the world.

Posted on 2010-12-11 12:16
If you're going to spout off, at least spout something useful; spouting at random is pointless.

Posted on 2010-12-11 20:46
Chinese children's imagination is first in the world, but it's been straitjacketed by the education system the elites built.

Posted on 2010-12-11 21:34
Reply to #3 天人合一

    After knocking around Chinese websites of every stripe for so many years, I am deeply convinced that our countrymen's imagination is second to none. Just look at all the bizarre opinions, the wild speculation, and the news spun a full 360 degrees. Isn't that imagination enough?

Posted on 2010-12-11 23:06
Introduction:

I first saw the claim that "Chinese children rank last in imagination" on Weibo, in this post from New Weekly:

@新周刊: Looking at this ...
反击谣言 posted on 2010-12-11 11:24



    Somehow I have a feeling there are more than 21 countries on this planet.

Posted on 2010-12-15 11:06
Reply to #1 反击谣言


   

Posted on 2010-12-15 11:09
The fantasizing (YY) powers of China's "public intellectuals" (JY), though, really are first in the world.

Posted on 2010-12-17 01:11
**** Author banned or deleted; content automatically hidden ****
大清皇毛 posted on 2010-12-11 20:46

    It must be the system's fault.

Posted on 2010-12-17 01:43
Imagination is the ability to create new images in the mind on the basis of images one already has. When you mention cars, for instance, I immediately picture all kinds of cars. Imagination is thus generally exercised on a foundation of existing knowledge; it is the capacity to form an idea or mental picture in your head.

From Baidu Baike (百度百科)

My feeling is that whoever says imagination ranks last may be confusing imagination with creativity. Imagining does not necessarily mean innovating: mention a circle, and (given normal intelligence) a complete image of a circle appears in the mind. Chinese education actually places considerable weight on imagination; much of school geometry, for example, requires it.

Posted on 2010-12-17 07:39
There's imagination; it just gets snuffed out by the education system.
天人合一 posted on 2010-12-11 11:35

    On AC you can't say anything bad about the government; see, slapped down by the mainstream again.
