Foreign Literature Translation: Experiences with a Gamified Online Questionnaire for Crowdsourcing Human Recycling Capabilities


Experiences with a Gamified Online Questionnaire for Crowdsourcing Human Recycling Capabilities

Abstract

In this paper we share our experiences with an online questionnaire which had as its main goal to crowd-source how people classify various objects for recycling. To keep people engaged to complete it, and to assess gamification elements we planned to use in a persuasive system for this task later on, we had already integrated these elements into the questionnaire. Besides positive feedback from some participants, we also learned that there are drawbacks and pitfalls with such elements that can be problematic depending on the hypotheses to be addressed with the questionnaire.

Author Keywords

Crowdsourcing, gamified online questionnaire, recycling

ACM Classification Keywords

H.5.m [Information interfaces and presentation (e.g., HCI)]: Miscellaneous.

Introduction

Sorting garbage is a relevant topic as world cities in 2012 generated about 1.3 billion tons of solid waste per year [3]. In terms of recycling, in Germany for example, four (sometimes five) different trash bins for households are available that are designated to hold only a specific kind of trash. If the separation of garbage is done properly, it has a positive effect in terms of environmental protection, e.g. by greatly increasing the recovery rate of domestic waste [2]. Nevertheless, not everyone seems to do it properly [1]. Reasons for that might be that the rules on what belongs in which of these trash bins make it difficult to do properly, or people are simply not motivated enough to do it. In HCI, the topic of encouraging people to reflect on their recycling behavior (e.g. [8]) has been under investigation for a few years now. As discussed in [5] we found the work on the BinCam (e.g. [7]) a relevant and useful approach to encourage people to recycle better. This approach, which uses gamification, relied on the performance of a crowd recruited via Amazon Mechanical Turk (AMT). This crowd had the task to classify pictures taken by a camera inside the kitchen trash bin, to decide whether all objects were sorted correctly. The performance of the crowd was not good: in a random picture sample, 15 of 20 classifications were wrong [7]. The work on the BinCam did not investigate in detail why this was the case, i.e. whether this is a systematic problem or only one based on the nature of AMT.

To gain insights into this topic, we decided to analyze human capabilities in recycling and whether the Wisdom of Crowds [6] can produce better results than individuals in this domain (for details see [5]). We conducted this with the help of a gamified online questionnaire. If the performance of a crowd produced reliable classification results (unlike the BinCam reports), we would have a big opportunity: If an underlying system design encouraged people to classify pictures of waste, we could not only use this as feedback for intelligent systems, but would also have the chance to educate the people who participate in the classification process and receive feedback.
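The "Wisdom of Crowds" idea above boils down to aggregating many individual classifications into one crowd answer. Purely as an illustration (not taken from the paper; the function name, bin labels, and the strict-majority rule are our own assumptions), such an aggregation could look like the following minimal sketch:

```python
from collections import Counter

# Hypothetical sketch (not from the paper): combine individual waste
# classifications into one crowd answer by simple majority vote.
# Bin labels and the tie-handling rule are illustrative assumptions.
def crowd_classification(votes):
    """votes: list of bin labels chosen by individual participants."""
    if not votes:
        return None
    label, count = Counter(votes).most_common(1)[0]
    # Only accept the answer if a strict majority agrees on it.
    return label if count > len(votes) / 2 else None

# Example: five participants classify the same picture of a yoghurt cup.
print(crowd_classification(["packaging", "packaging", "residual", "packaging", "paper"]))
# -> packaging
```

Whether such a simple majority vote is reliable enough for waste pictures is exactly the empirical question the questionnaire was designed to probe.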

Gamified Online Questionnaire

To gain insights into this topic, we decided to use an online questionnaire, in which participation was voluntary and without any monetary compensation, to reduce the chance of random answers to earn money faster [4], which could have been an issue at AMT. Participants had the task to classify 40 objects in terms of how they would recycle them in Germany. As we envisioned a game later on that would encourage people to classify such pictures on demand without payment, we also integrated game elements into the questionnaire to assess them a priori. Another motivation for using these was that we hoped to influence the dropout rate positively and to spark more interest in the questionnaire. Figure 1 shows the questionnaire interface. Here, participants needed to decide how they would dispose of waste and were asked to state how confident they were in their decision. To assess our hypotheses (see [5] for more details) it was necessary to use multiple conditions, in which we varied the game elements and feedback types. We had a control group, which did not receive any feedback; its only game element was that participants knew they would receive a score in the end and could compare it to other participants' scores. We also had four feedback groups which were additionally accompanied by gamification elements; for an overview see Text box 1. The feedback was provided together with a happy or sad emoticon and additional information, depending on the condition, which were equally distributed (based on completed runs). We explained in the beginning that points are given for correct answers and subtracted otherwise. Moreover, we told the participants that they can gain bonus points by answering quickly (a precondition later on in our game setting). Participants were recruited via social media and we requested that they had lived at least three years in Germany, to ensure that they were familiar with local recycling rules. Besides the classification task, we also asked questions about their waste sorting behavior and how they (would) assess the game/feedback elements (if they were in a group without them). After finishing these tasks, participants had the chance to provide us with their e-mail address for a follow-up study. This took place one week later, in which we only showed objects they had classified wrongly in their first run. We wanted to assess whether we could educate people even if they did not know that they would be re-tested.
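The paper only states that points are awarded for correct answers, subtracted for wrong ones, and that answering quickly yields bonus points; it gives no concrete values. A minimal sketch of such a scoring rule follows; the point values, the time threshold, and the restriction of the bonus to correct answers are placeholder assumptions, not the authors' implementation.

```python
# Hypothetical scoring sketch for a single classification. All numbers and
# the decision to grant the speed bonus only for correct answers are
# illustrative assumptions, not the authors' actual values.
BASE_POINTS = 10          # awarded for a correct answer, subtracted for a wrong one
FAST_ANSWER_SECONDS = 5   # answers at or below this time earn a bonus
SPEED_BONUS = 5

def score_answer(correct: bool, answer_time_s: float) -> int:
    points = BASE_POINTS if correct else -BASE_POINTS
    if correct and answer_time_s <= FAST_ANSWER_SECONDS:
        points += SPEED_BONUS
    return points

print(score_answer(True, 3.0))   # 15: correct and fast
print(score_answer(True, 12.0))  # 10: correct but slow
print(score_answer(False, 3.0))  # -10: wrong answer
```

As the "Timing issues" section below notes, rewarding speed in this way can conflict with the goal of having participants read the feedback carefully.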

Figure 1: Classification question

No feedback: Participants only see their score after all classification tasks (without feedback on their decisions), together with the high score list.

Ground truth feedback (GTF): Participants always see whether they made the correct decision and what the correct answer is. Game elements: their rank in the leaderboard, how many points are needed for the next position on the high score list, and their current position on it (all shown after each classification).



Appendix: Translated Text

Experiences with a Gamified Online Questionnaire for Crowdsourcing Human Recycling Capabilities

Abstract

In this paper we share our experiences with an online questionnaire whose main goal was to crowdsource how people classify various objects for recycling. To keep people engaged until completion, and to assess the gamification elements we planned to use later in a persuasive system for this task, we had already integrated these elements into the questionnaire. Besides positive feedback from some participants, we also learned that such elements have drawbacks and pitfalls that can be problematic depending on the hypotheses the questionnaire is meant to address.

Author Keywords

Crowdsourcing, gamified online questionnaire, recycling

ACM Classification Keywords

H.5.m [Information interfaces and presentation (e.g., HCI)]: Miscellaneous.

Introduction

Sorting garbage is a relevant topic, as world cities in 2012 generated about 1.3 billion tons of solid waste per year [3]. In terms of recycling, in Germany for example, four (sometimes five) different trash bins are available to households, each designated to hold only a specific kind of trash. If garbage is separated properly, this has a positive effect on environmental protection, e.g. by greatly increasing the recovery rate of domestic waste [2]. Nevertheless, not everyone seems to do it properly [1]. Reasons might be that the rules on what belongs in which of these trash bins make it difficult to do properly, or that people are simply not motivated enough. In HCI, the topic of encouraging people to reflect on their recycling behavior (e.g. [8]) has been under investigation for several years now. As discussed in [5], we found the work on the BinCam (e.g. [7]) a relevant and useful approach to encourage people to recycle better. This approach, which uses gamification, relied on the performance of a crowd recruited via Amazon Mechanical Turk (AMT). This crowd had the task of classifying pictures taken by a camera inside the kitchen trash bin, to decide whether all objects were sorted correctly. The performance of the crowd was not good: in a random picture sample, 15 of 20 classifications were wrong [7]. The work on the BinCam did not investigate in detail why this was the case, i.e. whether this is a systematic problem or one caused only by the nature of AMT.

To gain insights into this topic, we decided to analyze human recycling capabilities and whether the Wisdom of Crowds [6] can produce better results than individuals in this domain (for details see [5]). We did this with the help of a gamified online questionnaire. If the performance of a crowd produced reliable classification results (unlike the BinCam reports), we would have a big opportunity: if an underlying system design encouraged people to classify pictures of waste, we could not only use this as feedback for intelligent systems, but would also have the chance to educate the people who participate in the classification process and receive feedback.

Gamified Online Questionnaire

To gain insights into this topic, we decided to use an online questionnaire in which participation was voluntary and without any monetary compensation, to reduce the chance of random answers given to earn money faster [4], which could have been an issue on AMT. Participants had the task of classifying 40 objects in terms of how they would recycle them in Germany. As we envisioned a game later on that would encourage people to classify such pictures on demand and without payment, we also integrated game elements into the questionnaire to assess them a priori. Another motivation for using these was that we hoped to influence the dropout rate positively and to spark more interest in the questionnaire. Figure 1 shows the questionnaire interface (Figure 1: classification question. No feedback: participants only see their score after all classification tasks, without feedback on their decisions, together with the high score list). Here, participants had to decide how they would dispose of a piece of waste and were asked to state how confident they were in their decision. To assess our hypotheses (see [5] for more details) it was necessary to use multiple conditions, in which we varied the game elements and feedback types. We had a control group that did not receive any feedback; its only game element was that participants knew they would receive a score in the end and could compare it to other participants' scores. We also had four feedback groups, which were additionally accompanied by gamification elements; for an overview see the text box below. The feedback was provided together with a happy or sad emoticon and, depending on the condition, additional information; the conditions were equally distributed (based on completed runs). We explained at the beginning that points are given for correct answers and subtracted otherwise. Moreover, we told the participants that they could gain bonus points by answering quickly (a precondition later on in our game setting). Participants were recruited via social media, and we required that they had lived in Germany for at least three years, to ensure that they were familiar with local recycling rules. Besides the classification task, we also asked questions about their own waste sorting behavior and how they (would) assess the game/feedback elements (if they were in a group without them). After finishing these tasks, participants could provide us with their e-mail address for a follow-up study. This took place one week after completing the questionnaire, and in it we only showed the objects they had classified wrongly in their first run. We wanted to assess whether we could educate people even if they did not know that they would be re-tested.

The four game/feedback elements

Ground truth feedback (GTF): Participants always see whether they made the correct decision and what the correct answer is. Available game elements: their rank in the leaderboard, how many points are needed for the next position on the high score list, and their current position on it (all shown after each classification).

GTF with explanation: Same as GTF. In addition, the ground truth decision is explained by a short statement, and a reference to official documents is provided.

GTF with same crowd decision: Same as GTF. In addition, participants see, as a percentage, how many people decided in the same way.

GTF with crowd decision: Same as GTF. In addition, participants see how the other participants decided, via percentages for each classification option. (All five conditions, including the control group, are summarized in the sketch below.)
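To keep the five conditions (the control group plus the four GTF variants above) in one place, here is a minimal configuration sketch; the dataclass, its field names, and the condition identifiers are our own illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

# Hypothetical summary of the study conditions described above.
# Field names and condition identifiers are illustrative assumptions.
@dataclass(frozen=True)
class Condition:
    name: str
    shows_ground_truth: bool             # correct answer shown after each item
    shows_explanation: bool              # short statement plus reference to official documents
    shows_same_decision_share: bool      # share of participants who decided the same way
    shows_full_crowd_distribution: bool  # percentages for every classification option

CONDITIONS = [
    Condition("no_feedback",              False, False, False, False),
    Condition("gtf",                      True,  False, False, False),
    Condition("gtf_with_explanation",     True,  True,  False, False),
    Condition("gtf_same_crowd_decision",  True,  False, True,  False),
    Condition("gtf_crowd_decision",       True,  False, False, True),
]
```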

Experiences

Besides the findings related to our research hypotheses (which we discuss in depth in [5]), we also gained several insights about using gamification and feedback elements in an online questionnaire:

Positive feedback

We did not explicitly ask in the questionnaire for an overall impression of the gamification, but because we advertised it via social media we had the chance to collect feedback from participating users, at least as anecdotal evidence from individual participants. Most of them reached us through comments under the shared links and via e-mail. These comments indicated that having competitors had a positive effect on engagement with the recycling task; 64% of those who commented had also entered the high score list under a nickname. Some users posted their position in this list as a comment under the survey link and made fun of other players who did worse. Participants also tried to match nicknames to other participants (in the comments, one user referred to this as "wild guessing"). Both aspects raise ethical questions, because anonymity was broken in these cases. In both cases, discussions about specific pictures also emerged. It is worth considering whether a "standard" questionnaire would trigger discussions that go beyond the questions themselves in this way. We intend to investigate these aspects directly in our next gamified online questionnaire. Overall, adding feedback and game elements to the study was perceived positively.

Dropouts

66 people did not complete the questionnaire (26.4% of all started questionnaires). Looking at the dropouts among participants who had completed at least one classification, the condition without feedback had the lowest share (4 of 49 in total); the rate was slightly higher when all the different answers were shown together with the ground truth. The other conditions accounted for 14 dropouts (in the condition showing how many people had decided the same way) and the remaining 12 dropouts. It is not yet clear whether the feedback or the gamification elements caused this, but it is an issue to keep in mind for later online questionnaires: showing feedback may demotivate people, especially if they disagree with the ground truth (as recycling rules can also vary within a country).

Follow-up participation

The number of participants in the follow-up one week later was lower than we had initially expected. Of the 184 participants who completed the first questionnaire, only 36 (about 19.6%) took part. Besides the reasons mentioned above, this also suggests that the game elements did not offer enough of a reward after one week to make participants consider taking part again. People in the group without game/feedback elements were more likely to do the follow-up (33% of them), whereas in the four groups with game/feedback elements only 8% to 23% participated; the worst result came from the explanation group. Overall, the feedback elements did not seem to encourage participants to stay involved in the study. Another explanation is that we allowed each participant to fill in the questionnaire only once (enforced by technical countermeasures and stated in the introduction). This is counterintuitive for a game and may have led to lower participation, because people who performed poorly in the first questionnaire had likely lost interest in a follow-up study.

Timing issues

Although we could show that, over time, the groups with feedback elements achieved better results than the group without them, we could not find any clear differences between the individual feedback elements. We assume that the information that bonus points could be gained by answering quickly led participants to answer during the time meant for reflection, and that they did not read the feedback thoroughly, since they only glanced at the correct answer (shown in all feedback conditions) before making their next selection. With this setup we therefore learned that stating more precisely which aspects do not harm the game score plays a crucial role in such a questionnaire.

Discussion

Although we could answer most of the hypotheses stated in [5], we learned that a gamified questionnaire can also have pitfalls that need to be addressed beforehand. These pitfalls also show that using game elements may not always be advisable, as game elements and feedback can introduce new sources of bias, and some questions might be answered more clearly by participants if no game or feedback elements are present. Nevertheless, the pitfalls we identified here may also simply stem from the particular set of gamification elements we chose, from mistakes in our demographics (although [5] reports on the qualitative questions), or from the fact that the overall impression of our gamified survey was still that of a questionnaire rather than a game (thereby reducing the effectiveness of the chosen feedback/game elements). For the situation investigated here, it seems interesting, and may be worth investigating, what would happen if the primary design were a game rather than a survey, and how those results would compare with the ones reported here.

Biography

Pascal Lessel studied computer science at Saarland University. In 2012 he started as a researcher at the German Research Center for Artificial Intelligence, focusing on the digitization of paper artifacts and on persuasive technologies. Since the beginning of 2014 he has considered the combination of gamification and crowdsourcing an interesting research direction.

References

[1] Environment Bureau Hong Kong. Blueprint for Sustainable Use of Resources. http://goo.gl/JfSUZW, May 2013. [last accessed 21/02/2015].

[2] Environmental Protection Department. Programme on Source Separation of Domestic Waste - Annual Update 2010. http://goo.gl/c1lmjH, May 2010. [last accessed 21/02/2015].

[3] Hoornweg, D., and Bhada-Tata, P. What a Waste: A Global Review of Solid Waste Management. Urban Development Series, 15 (March 2012), 1–116.

[4] Ipeirotis, P. G., Provost, F., and Wang, J. Quality Management on Amazon Mechanical Turk. In Proc. of HCOMP 2010, HCOMP '10, ACM (2010), 64–67.

[5] Lessel, P., Altmeyer, M., and Krüger, A. Analysis of Recycling Capabilities of Individuals and Crowds to Encourage and Educate People to Separate Their Garbage Playfully. In Proc. CHI 2015 (to appear), ACM (2015).

[6] Surowiecki, J. The Wisdom of Crowds. Anchor, 2005.

[7] Thieme, A., Comber, R., Miebach, J., Weeden, J., Krämer, N., Lawson, S., and Olivier, P. “We've Bin Watching You”: Designing for Reflection and Social Persuasion to Promote Sustainable Lifestyles. In Proc. CHI 2012, ACM (2012), 2337–2346.

[8] Zlatow, M., and Kelliher, A. Increasing Recycling Behaviors Through User-Centered Design. In Proc. DUX 2007, ACM (2007), 27:1–27:1.


