Third Workshop

Hypotheses to Validate

The third workshop was the culminating workshop, testing whether the methods helped people from multiple disciplines work together to design conversations. I also wanted to see whether the methods helped participants understand how designing conversations could inform the wider design process, whether design thinking methods help in designing chatbots, and how I could better organize the workshop for a wider audience. Reflecting on the second workshop, I wanted to test whether the methods helped teams design conversations that meet their goals. In addition to these wider hypotheses, I also had more specific tests for each method. At this point, I felt fairly validated in the utility of Bot Personas. As this workshop was composed of strangers working together, I was testing whether the method additionally helped facilitate team formation. For improv, I wanted to test whether expert improv actors leading the method, rather than myself, would better support teams in iterating on the Bot Persona and possible conversation qualities. For conversation mapping, the previous workshop validated that the method helped with structuring the components of dialogue. In this workshop, I was testing whether the method helped teams structure the wider conversations.

Context for the Third Workshop

The third workshop invited back those who had signed up for the previously canceled second workshop. 25 people attended, including user experience designers, graphic designers, programmers, research scientists, teachers, and writers. This workshop was lengthened to 3 hours to allow for a longer context-setting presentation, to accommodate participants who might have no programming experience, and to give more time for the methods and discussion. However, due to traffic in Los Angeles and our underestimating the help participants needed with technological setup, the substance of the workshop was closer to 2 hours long.

Summary of Events

A day before the event, we sent instructions to all participants on how to install the appropriate software and development environment. The workshop began with improv actors acting out a conversation between two cave people to illustrate conversational principles. This was followed by a 20-minute presentation I led, with the actors illustrating conversation as I discussed trust, conversation design, and design thinking principles. Once finished, participants were randomly organized into groups of three and four and asked to ideate on their chatbots, using cards from the game Apples to Apples to spur their process (Kirby, 1999). I introduced the Bot Persona method and teams proceeded to fill out their Bot Personas. Next, I re-introduced the expert improv actors, asking them to improvise a conversation based on one team’s persona. After watching the improvisers, teams then did the improv method on their own. At this point, we took a break, but my team and I stayed to facilitate open-ended discussion. As the workshop restarted, participants were guided in initializing their chatbots. Once all of the chatbots were activated, I introduced conversation mapping. The workshop ended with an open-ended discussion about chatbots and what we had learned.

Observations and Feedback

The third workshop went incredibly well, though I was less confident that these methods were completely successful. Almost everyone felt accomplished, feeling they understood the basics of conversation design and chatbot development. One person had anticipated more of a programming focus, but was more irritated by the photography and filming I used to document the workshop. Overall, I am confident in sharing these methods widely for teams to use in their development of chatbots, though with some significant changes that still need to be validated, discussed later in the conclusion.

Bot Personas helped people understand what was needed to design a holistic personality and to identify good conversations to have. As demonstrated in the previous workshops, teams iterated and collaborated effectively on their ideas for their chatbots. Having Bot Personas at the beginning of the project was incredibly helpful in letting strangers come together and quickly begin working cohesively. Also, the addition of user goals to the worksheet made it much easier for teams to design their conversations around the relationship between the chatbots and the intended users. In this workshop, people were much more successful in designing chatbots with clearer goals. While these changes may not have been the cause, I did observe far fewer chatbots based on celebrity personalities.

The improv method may be invalidated as a successful method for helping teams iterate and prototype conversations. Many participants appeared to feel uncomfortable and were unable to “defer judgment, encourage wild ideas, and build on the ideas of others” (Simsarian, 2003). Later, when mapping conversations, the rationale of the method, to explore dialogue and personality, made more sense to some participants. However, it did not appear to significantly support most teams’ design processes.

Anticipating that this extraverted and unusual activity might prevent participants from fully engaging with the method, I had invited professional improv actors to lead the exercise. Having professionals lead was suggested to me because it may be “helpful to have people who are little bit more willing to go” all the way into their characters and to “keep the energy up” (Fowlie, 2016). However, the actors’ performance was viewed as “incredibly awkward” and “forced” (Anonymous, 2016, in person feedback). Later, I learned that the situations I put the improv actors in were the worst kinds of situations for improv comedy. Opening the workshop, I had the actors pretend to be cave people meeting for the first time and exchanging objects. The “two people meeting each other” situation is very much frowned upon in improv technique (Fowlie, 2016; Perry, 2016, in person feedback), because the characters do not yet have an emotional connection. First meetings are an important aspect of onboarding users that conversation design needs to explore (Hulick, 2016; Vandehey, 2016), but they should not hold up the entire process of making a chatbot. I can understand how the improv method could be very “difficult” for novices playing “catch up” (Fowlie, 2016).

Conversation mapping again proved to help participants understand how to structure the components of conversations. As observed in the second workshop, teams again focused on the dialogue rather than on the wider structure of the conversation. However, more teams did attempt to structure their conversations to meet their chatbot’s goals. Unlike before, teams immediately felt confident enough to bypass the post-it note exercise and enter their dialogue into code. At this point in the workshop, there was little time left for teams to experiment with longer, more complex conversations that might require the full mapping exercise. For example, one team created a wide variety of conversations for Maxwell, a Pac-Man ghost guiding tourists through Los Angeles. However, most of the conversations consisted of only two or three statements. For this team, these conversations were not very complex and could be easily entered and iterated upon in the chatbot’s configuration file.
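To illustrate why such short exchanges were easy to enter and iterate upon directly in code, the sketch below shows a keyword-to-response mapping in Python. It is purely hypothetical: the workshop’s actual framework, its configuration format, and Maxwell’s real dialogue are not reproduced here, and the sample responses are invented for illustration.

```python
# Hypothetical sketch of a two- or three-statement chatbot exchange.
# The workshop's real toolchain is not shown; this only illustrates why
# short conversations can live in a simple configuration-like mapping.

RESPONSES = {
    "hello": "Boo! I'm a ghost guide. Where in Los Angeles are you headed?",
    "food": "Chase down a taco truck on Sunset Boulevard. I never catch them.",
    "beach": "Try Santa Monica Pier. The sunsets are worth haunting.",
}

DEFAULT_REPLY = "I only know Los Angeles. Ask me about food or the beach!"


def reply(message: str) -> str:
    """Return the first response whose keyword appears in the message."""
    text = message.lower()
    for keyword, response in RESPONSES.items():
        if keyword in text:
            return response
    return DEFAULT_REPLY


if __name__ == "__main__":
    print(reply("Hello there!"))          # greeting branch
    print(reply("Where should I get food?"))  # keyword branch
    print(reply("Tell me a joke"))        # fallback branch
```

Because each exchange is just an entry in a mapping like this, teams could add, test, and revise conversations in seconds, which is why the full mapping exercise felt unnecessary for dialogue this short.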

This workshop also provided an opportunity to observe and receive feedback on the workshop design as a whole. Unlike the previous workshops, I began with a presentation, which participants overwhelmingly found helpful in understanding the motivation, context, and expectations for the workshop. Participants reported understanding how trust can be conveyed through conversation, though I did not see an opportunity for participants to personally observe this in their own designs. Most expected a more lecture-heavy event. From feedback and from observing people’s engagement throughout, participants truly enjoyed learning and designing experientially. This approach seemed new to several participants, letting them reflect upon their current design practice and how it could change. Many participants also noted that developing ideas and experiences using design thinking methods was very beneficial, particularly with an unfamiliar medium. Watching teams work together, it was also clear to me that experiential learning encouraged participants to learn from each other.

A consistent thread through all of the workshops was that there was never enough time. The participants rightfully understood the workshop as an overview with a lot to cover. Yet people felt particularly rushed in this last workshop. They wanted to explore more, spend more time building their bots, and learn more about conversation design. Admittedly, there were many more issues helping participants get set up with the technology, taking time from the activities. In addition, the workshop was held after work hours and some participants were quite tired by the end. Again, I also observed that teams were not given enough time to reflect upon their learnings.

For example, I was surprised by how much people wanted to discuss the ethics of their work and of chatbots more broadly. Participants raised the issues of identity and transparency for chatbots (McHugh, 2015; Newitz, 2015; Petersen, 2007), how realism in dialogue can affect trust (Savin-Baden et al., 2013; Pavlus, 2016), and the degree to which we should preserve humans’ roles (Decker, 2007; Sullins, 2012). These are only some of the many significant ethical problems that have been explored more broadly in regard to automation and artificial intelligence (Anderson and Anderson, 2011; Russell et al., 2015), but they have yet to be fully studied for chatbot experiences. I too wish to understand these issues better and to develop experiential methods that help teams confront them.

Bibliography

Anderson, M. and Anderson, S.L. (eds.) (2011) Machine ethics. New York: Cambridge University Press.

Decker, M. (2007) ‘Can humans be replaced by autonomous robots? Ethical reflections in the framework of an interdisciplinary technology assessment’, Workshop at ICRA, vol. 7.

Fowlie, J. (2016) ‘Interview with Jimmy Fowlie’. Interviewed by the author, 24 January.

Hulick, S. (2016) How Quartz onboards new users | User Onboarding. Available at: https://www.useronboard.com/how-quartz-onboards-new-users/ (Accessed: 21 February 2016).

Kirby, M. (1999) Apples to Apples. Mattel.

McHugh, M. (2015) Slack is overrun with bots. Friendly, wonderful bots. Available at: http://www.wired.com/2015/08/slack-overrun-bots-friendly-wonderful-bots/ (Accessed: 10 February 2016).

Newitz, A. (2015) Ashley Madison code shows more women, and more bots. Available at: http://gizmodo.com/ashley-madison-code-shows-more-women-and-more-bots-1727613924 (Accessed: 10 February 2016).

Pavlus, J. (2016) The next phase of UX: Designing chatbot personalities. Available at: http://www.fastcodesign.com/3054934/the-next-phase-of-ux-designing-chatbot-personalities (Accessed: 10 February 2016).

Petersen, S. (2007) ‘The ethics of robot servitude’, Journal of Experimental & Theoretical Artificial Intelligence, 19(1), pp. 43–54. doi: 10.1080/09528130601116139.

Russell, S., Dewey, D., Tegmark, M., Kramar, J. and Mallah, R. (2015) Research priorities for robust and beneficial artificial intelligence. Berkeley, California: Future of Life Institute.

Savin-Baden, M., Tombs, G., Burden, D. and Wood, C. (2013) ‘“It’s almost like talking to a person”’, International Journal of Mobile and Blended Learning, 5(2), pp. 78–93. doi: 10.4018/jmbl.2013040105.

Simsarian, K.T. (2003) ‘Take it to the next stage’, CHI ’03 Extended Abstracts on Human Factors in Computing Systems. doi: 10.1145/765891.766123.

Sullins, J.P. (2012) ‘Robots, love, and sex: The ethics of building a Love Machine’, IEEE Transactions on Affective Computing, 3(4), pp. 398–409. doi: 10.1109/t-affc.2012.31.

Vandehey, J. (2016) Slack Bot Onboarding — Slack Platform Blog. Available at: https://medium.com/slack-developer-blog/slack-bot-onboarding-3b4c979de374#.m27yeb3v6 (Accessed: 21 February 2016).