Abstract

Video-sharing platforms (VSPs) such as YouTube, TikTok, and Twitch attract millions of users and have become influential information sources, especially among younger generations. Video creators and live streamers produce content to engage viewers and form online communities. VSP celebrities obtain monetary benefits through monetization programs and affiliate marketing. However, there is a growing concern that user-generated videos are becoming a vehicle for spreading misinformation and controversial content. Some creators produce inappropriate content for attention and financial gain, while others face harassment and attacks. This workshop seeks to bring together a group of HCI scholars to brainstorm technical and design solutions that improve the credibility, trust, and safety of VSPs. We aim to discuss and identify research directions for technology design, policy-making, and platform services on video-sharing platforms.

How to Attend

The workshop will be hosted in hybrid mode. The event will take place in person at CHI 2023 in Hamburg, Germany, as well as on Zoom. To participate, you are encouraged to submit a 1 to 4 page (excluding references) position paper in the ACM primary article template (https://www.acm.org/publications/taps/word-template-workflow) describing your research background, your connections to VSPs, and/or your related future work. Alternatively, you can submit an abstract of the topic you would like to discuss or a video of up to 2 minutes. Please ensure your position paper is an accessible PDF. For video submissions, please add closed captions. Your submission will be published on the workshop website with your consent. Submissions should be made by March 2nd at 11:59pm Anywhere on Earth (i.e., 23:59 in the latest time zone on the planet); the deadline was extended from February 23rd. Please email safevspchi23@googlegroups.com if you have any questions.

Key Topics

User-generated videos may contain inaccurate or fabricated information that can cause social and societal harm. We invite researchers and practitioners to brainstorm problems on VSPs and propose solutions, including but not limited to the following topics:

  • Misinformation: radical and ideological content
  • Algorithmic Bias: recommendation and search
  • Harassment: cyberbullying and aggression
  • Monetization: affiliate marketing content
  • Multimodal Data: video, audio, comments, likes
  • Live Content: direct communication between streamers and viewers
  • Rabbit Holes: echo chambers and problematic beliefs
  • Viewer Participation: debunking misinformation through user effort
  • Other Platforms: cross-posting features that spread misinformation

Key Questions

  • How can we categorize problematic creation and use of videos on VSPs?
  • Who are the key players spreading misinformation or inappropriate content?
  • Who are the possible victims of misinformation videos?
  • How can we identify misinformation and problematic behaviors on VSPs?
  • What challenges does video and image AI pose to the credibility and trust of online user-generated videos?
  • What research infrastructure is needed to study video sharing?
  • How can researchers nurture strategic partnerships with platforms to obtain data?
  • What technological solutions can reduce misinformation on VSPs?
  • What community technologies can be designed to enhance trust and safety on VSPs?
  • How can we involve video viewers in countering misinformation?
  • How should platform policies be made to improve credibility, trust, and safety on VSPs?
  • How can VSPs mitigate the spread of inappropriate content to other platforms?