At the HCOMP 2019 (crowdsourcing) conference, October 28-30, Kurt Luther, assistant professor of computer science, and six students are attending and presenting work from his lab:
- Vikram Mohanty (CS PhD student) is presenting a full paper, “Second Opinion: Supporting Last-Mile Person Identification with Crowdsourcing and Face Recognition.”
- Aditya Bharadwaj (CS PhD student) is presenting “Mixed-initiative methods for following design guidelines in creative tasks” at the Doctoral Consortium.
- David Mitchell, Efua Akonor, and Sarwat Kazmi are three REU students presenting posters/demos from their summer internships in the lab. David is presenting “PairWise: Mitigating political bias in crowdsourced content moderation,” and Efua and Sarwat are presenting “It’s QuizTime: A study of online verification practices on Twitter.”
- Sukrit Venkatagiri (CS PhD student) supervised the above intern projects.
HCOMP 2019 full paper:
AI-based face recognition technologies often present a shortlist from which a human expert must select correct match(es) while avoiding false positives, which we term the “last-mile problem.” We propose Second Opinion, a web-based software tool that employs a novel crowdsourcing workflow to assist experts in solving the last-mile problem. We evaluated Second Opinion with a mixed-methods lab study involving 10 experts and 300 crowd workers who collaborated to identify people in historical photos. We found that crowds can eliminate 75% of false positives from the highest-confidence candidates suggested by face recognition.
At the CSCW 2019 (social computing) conference, November 9-13, Kurt Luther and three students are attending and presenting work from his lab:
- Sukrit Venkatagiri (CS PhD student) is presenting a full paper, “GroundTruth: Augmenting expert image geolocation with crowdsourcing and shared representations.”
- Jacob Thebault-Spieker (CS postdoc) is a co-author on the above paper.
- Tianyi Li (CS PhD student) is presenting a full paper, “Dropping the baton? Understanding errors and bottlenecks in a crowdsourced sensemaking pipeline,” and presenting “Solving Mysteries with the Wisdom of Crowds: A Modularized Pipeline and Context Slices” at the Doctoral Consortium.
CSCW 2019 full papers:
In this paper, we introduce the concept of shared representations for crowd-augmented expert work, focusing on the complex sensemaking task of image geolocation performed by professional journalists and human rights investigators. We built GroundTruth, an online system that uses three shared representations—a diagram, grid, and heatmap—to allow experts to work with crowds in real time to geolocate images. Our mixed-methods evaluation with 11 experts and 567 crowd workers found that GroundTruth helped experts geolocate images, and revealed challenges and success strategies for expert-crowd interaction.
In this paper, we conduct a series of studies with 325 crowd workers using a crowd sensemaking pipeline to solve a fictional terrorist plot, focusing on understanding why errors and bottlenecks happen and how they propagate. We classify types of crowd errors and show how the amount and quality of input data influence worker performance. We conclude by suggesting design recommendations for integrated crowdsourcing systems and speculating how a complementary top-down path of the pipeline could refine crowd analyses.