ACM CHI is the most prestigious venue in the field of Human-Computer Interaction. Due to travel bans and other restrictions during the COVID-19 era, CHI 2021 was held online. I attended ACM CHI 2021, where I watched many presentations, joined LBW sessions, and met other researchers, making new friends and finding potential research collaborators. In this blog post, I will share my impressions of CHI and the most exciting talks I attended.
Even though this was my third CHI, ACM CHI 2021 was my first virtual one. Perhaps in the coming years we will have a more sustainable and safer world, so we can again fly across continents, meet new people at CHI, and have little adventures during the busy CHI days. But even if that happens, I believe we should keep organizing conferences, and CHI in particular, in a “hybrid” mode to support disabled people, students, and academics facing funding shortages or visa restrictions.
While I am sincerely thankful to the whole CHI community for making this conference happen, I also experienced several burdens during the online edition, some of which I will mention here: (1) At in-person CHIs, it was difficult to jump from one session to another. The room was full of people, and squeezing past them felt awkward and uncomfortable! So, as an audience member, I knew I should spend 1.5 hours listening to all the presentations and perhaps asking some questions. In the virtual mode, however, switching between sessions takes just one click. That is too tempting, and it made me hop from one session to another. As a result, I was less focused and spent a lot of cognitive effort on “what is happening right now in a parallel session?”. (2) Besides the lags in the video presentations, which could harm the Q&A experience, I also think 5-minute video presentations are too short to be informative (e.g., they do not teach the methodology). Some of these videos can even mislead viewers who don’t read the paper (e.g., by not reflecting the study’s limitations). It would be great to give presenters more time to communicate their research in detail.
Among the many interesting presentations I attended, I selected two studies relevant to my research:
Park and Lee. Designing a Conversational Agent for Sexual Assault Survivors: Defining Burden of Self-Disclosure and Envisioning Survivor-Centered Solutions
[Link to the paper] [Link to the presentation]
This is a design work! The authors studied how to design conversational agents (CAs) to support survivors of sexual assault. Sexual violence is a serious issue for many women around the world, who can be assaulted by partners or non-partners. Unfortunately, survivors often struggle after the crime to report these cases to other people and to the authorities. They usually avoid face-to-face disclosure, as it carries many additional burdens. In many cases, survivors face secondary victimization; in other words, people or authorities blame the victims. Considering these issues, using a machine instead of a human could be an alternative channel for reporting and exchanging the necessary information.
In this paper, the authors first defined survivors’ self-disclosure burdens, compared the burdens caused by a CA with those caused by a human, and co-designed CA features using the participatory design method. During the participatory design sessions, participants talked about their earlier experiences with human agents and their requirements for CAs. Interestingly, the authors found that the 17 survivors preferred to report their cases to CAs rather than to police officers.
The authors identified different self-disclosure burdens! The survivors expressed fewer burdens with CAs than with humans in terms of time, finances, availability, and emotional burdens (such as blame). They also mentioned several burdens of reporting their case to a CA, for example, privacy and security burdens (e.g., worrying that other people might find out they use such apps), social burdens (e.g., worrying about social pressure if other people see the app’s notifications), and emotional burdens (e.g., feeling that CAs merely imitate and fake empathy).
The authors mentioned several features that can reduce such burdens, such as multi-stage authentication, locking past conversations, an “export to email” function, and camouflaging the app icon. To reduce the emotional burden and avoid the problem of fake empathy, the authors suggested using crowdsourced messages collected from other users. They also suggested that such an application should empower survivors; for example, legal procedures should only proceed if the users confirm them.
I was interested in this paper because it targets users who are vulnerable to violence, and because it studies CA design. The latter is particularly interesting for our group, as we are also exploring how CAs can help social media users reduce privacy conflicts. Stay tuned to learn more about our ongoing research topics!
Rakibul et al. Your Photo is so Funny that I don’t Mind Violating Your Privacy by Sharing it: Effects of Individual Humor Styles on Online Photo-sharing Behaviors
[Link to the paper] [Link to the presentation]
The authors studied the humor styles of online social network users and their relationship with photo-sharing behaviors. An interesting finding was that when the authors primed users with warnings to avoid meme sharing, some users shared even more than before! The authors then categorized users into different types, calling people with such behaviors “humor deniers”. The paper suggests that designers should personalize privacy-preserving interventions (e.g., warnings) to the user type, as such warnings might backfire for some users.
I was interested in this work because it concerns the problem of Multiparty Privacy Conflicts (MPCs), where meme sharers (data uploaders) can threaten the privacy of the people depicted in the memes (data subjects). In our recent work at the PET and ISP Labs, we designed a new family of solutions, called Dissuasive Mechanisms, to deter uploaders from non-consensually sharing others’ photos on online social networks. It would be interesting to see how effective our dissuasive mechanisms could be in the meme-sharing context, and how users with different characteristics, such as humor deniers, would respond to them.
In addition to the papers mentioned above, I also liked several other papers, such as (i) a study on the role of the Fear Of Missing Out (FOMO) in human privacy behavior, particularly on online social networks, (ii) a design work proposing a new password-entry technique based on muscle memory, (iii) a survey study on the effect of nudging (i.e., default and framing techniques) on smart-home users’ privacy decisions, (iv) an empirical study of protestors’ security and privacy advice during the #Black_Lives_Matter protests, and (v) a study that used empathetic communication skills to develop chatbots that protect users against online financial fraud.