Second Workshop
The second workshop was set up to test the methods in as real-world a context as possible. While the first workshop provided an early indication of how to improve the methods, the second workshop was intended to test them more critically. Each of the methods was again tested against its initial hypothesis. I also had more specific hypotheses that could be better tested in an environment with more participants and more time to work on their respective conversations. Regarding the Bot Persona, I was testing whether participants reflect on what constitutes a personality. Given more time, I hoped participants would iterate on their chatbot’s personality, and in doing so I hoped to observe teams increasingly empathizing with their chatbots and feeling their persona was adaptable. Regarding the improv, I was testing whether doing the Bot Persona first helped teams engage in improv more easily. I also hoped to better observe whether this method helped teams iterate on their conversations and dialogue. Regarding conversation mapping, I was testing whether teams felt a need to physically map out their conversations before inputting the dialogue into their chatbot’s configuration file. The concepts behind the conversation mapping method had previously been validated, as participants were readily able to build their bots. Yet those participants did not feel a need to physically map their smaller conversations; perhaps this would change with larger teams and more complex conversations.
The workshop invitation was posted on Meetup.com, publicly inviting the design and technology community of Los Angeles to participate. Over 80 people from multiple disciplines registered to join the workshop. On the day of the workshop, we had to cancel the public event due to an unexpected plumbing explosion. Wanting an opportunity to further test the workshop, we invited our colleagues to stay late and participate. Eight colleagues joined for a two-hour period, pairing up to design four conversations. The conversations were duplicated so that everyone could program their own chatbot, resulting in eight chatbots.
The workshop began with a concise presentation describing the intent and structure of the workshop. To assist in ideation, participants were given the option to select a character, user, and context from pre-selected cards from the game Apples to Apples (Kirby, 1999). Teams then reflected on their choices, determining their chatbot’s goal, the user’s goal, and the chatbot’s name. Next, teams learned about and filled out their Bot Personas based on their initial impressions of their chatbots. Selecting one of the teams, I asked them to improvise a conversation in front of the group while I took notes, pointing out what I observed in their conversation. After watching, the rest of the teams proceeded to improvise on their own. After some time, I asked teams to briefly reflect on what they felt worked and did not work in their conversations and their chatbots. At this point, participants were led through the process of setting up their computers and activating the chatbots. Once complete, I taught everyone how to map their dialogue for more complex conversations in which the chatbot asks a question. With their mapped conversations, teams proceeded to enter their dialogue into their chatbot configuration files and test the conversations. Lastly, I asked everyone for feedback on the workshop.
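To make this last step concrete, the sketch below shows, in Python, the kind of structure teams were effectively filling in: a trigger phrase maps to a reply, and a reply may ask a question whose expected answers branch to follow-up replies. This is only an illustrative assumption; the actual platform and configuration format used in the workshop are not described here, and the trigger phrases and replies (borrowing the Sjorn example discussed later in this section) are hypothetical.

```python
# A hypothetical sketch of a branching chatbot configuration: a trigger phrase
# maps to a reply, and a reply may ask a question whose expected answers
# branch to follow-up replies. The real platform and file format used in the
# workshop are not specified here; all phrasing is invented for illustration.
CONVERSATION = {
    "hello": {
        "reply": "Hi, I'm Sjorn. Are you lost in the forest?",
        "branches": {  # expected user answers to the question above
            "yes": "Stay calm. Which landmark can you see?",
            "no": "Good. Enjoy the trees responsibly.",
        },
    },
}


def respond(message, pending_branches=None):
    """Return (reply, branches) for a user message."""
    text = message.strip().lower()
    # If the bot has just asked a question, check its expected answers first.
    if pending_branches and text in pending_branches:
        return pending_branches[text], None
    entry = CONVERSATION.get(text)
    if entry:
        return entry["reply"], entry.get("branches")
    return "Sorry, I didn't catch that.", None


if __name__ == "__main__":
    reply, branches = respond("hello")
    print(reply)                        # the bot asks its question
    print(respond("yes", branches)[0])  # the user's answer follows a branch
```

Even at this small scale, writing out the branches forces a team to decide what the chatbot is trying to accomplish with its question, which is the kind of decision the mapping exercise is meant to surface.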
The results of the workshop were very promising, indicating that the methods did help people design conversations. Participants were easily able to build identities for their chatbots and set up cooperative conversations. However, I observed that participants spent more energy on developing personality and dialogue than on goal-oriented conversation.
The Bot Persona proved very successful for helping teams empathize with their chatbots. At the beginning, a significant majority of the participants chose chatbots that reflected celebrities or personalities they knew or imagined. This gave teams an initial shared understanding of what they were designing. The Bot Persona challenged the teams to expand upon their ideas by filling in the various components of the persona. For example, one team’s chatbot, Sassper, began as a ghost giving advice to users who are not in a relationship. When a team member added the quality “Sassy” to the Feeling section, the entire design of the bot was redefined. The team rewrote what they had entered in every section to more closely align with their new design. Instead of helping users find a relationship, Sassper now helped users cope with their situation. The identity of the chatbots and the quality of the conversations improved dramatically as teams continued to collaborate and iterate on their personas.
The improv method was also more productive in this workshop. Reordering the improv after the Bot Persona clearly helped give structure to the conversations. Teams were able to iterate on their conversations and test the goals of their chatbot more quickly. I observed several participants experimenting with dialogue and the different facets of a personality that a chatbot might emphasize. This focus on personality, though, appeared to distract from also exploring possible behaviors. For example, in improvising a conversation with Mad_kanye, the team was more focused on the variety of angry statements Mad_kanye could say than on the stated goal of helping users calm down.
While the act of improv may not have been useful for exploring goals, the act of role-playing did help at other times. Another chatbot, named Sjorn, began as an elderly Norwegian lumberjack helping users navigate the forest. The team had developed a personality, conversational structure, and dialogue, then activated the bot to test the conversation. While talking with the bot, the team realized that the dialogue did not quite match the intended personality. After reflecting upon the written dialogue, a teammate selected a picture of a young Norwegian male model as the chatbot’s avatar. The dialogue immediately felt more closely aligned, and additional behaviors became evident to the team. This team came to better understand their design by interacting with the chatbot through conversation. Some would identify this approach as prototyping through role-playing rather than strictly improv. Role-playing can be used in all phases of a design process – in this case as a method to evaluate and refine the prototype (Simsarian, 2003). Instead of role-playing between human teammates, role-playing was done with the chatbot. This is a promising approach to explore in future studies.
Conversation mapping had mixed results in this workshop. As in the previous workshop, participants found that breaking dialogue down into its components was very revealing of how conversations operate. It was also helpful in translating dialogue into the configuration of the chatbots. In teaching the method, I instructed teams to write the exact dialogue they wanted on each post-it note. I expected teams to be able to separate the intrinsic and extrinsic roles the post-it notes play in conversation mapping (Gray et al., 2010) – these roles being the literal dialogue and the structure of the conversation. During this session, teams were much more focused on dialogue that matched the personality than on structuring conversations. For example, in conversation mapping the Krisjenner chatbot, one participant spent a majority of their time looking up quotes comparable to what the real-life Kris Jenner has said. This approach has proven successful in designing realistic chatbots (Haller and Rebedea, 2013; Chin and Collinsworth, 2016), and designing realistic dialogue for chatbots is absolutely an avenue I would want participants to explore. But designing cooperative conversations requires attention to the goal and purpose of a conversation.
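To illustrate the distinction, the short sketch below keeps the extrinsic role of the notes (the flow of the conversation) apart from the intrinsic one (the literal, persona-flavoured dialogue). The step names and the Krisjenner-style lines are my own hypothetical examples, not material produced in the workshop.

```python
# A hypothetical illustration of the separation conversation mapping aims for:
# the structure of the exchange is kept apart from the literal dialogue, which
# can be rewritten to suit a persona without touching the structure.
STRUCTURE = ["greet", "ask_problem", "offer_help"]  # flow of the conversation

DIALOGUE = {  # persona-specific wording for each step (invented examples)
    "greet": "Hi honey, it's Kris.",
    "ask_problem": "Tell me everything. What's going on?",
    "offer_help": "Here's what we're going to do about it.",
}

# Rewriting DIALOGUE changes the chatbot's voice without altering STRUCTURE.
for step in STRUCTURE:
    print(DIALOGUE[step])
```

Keeping the two apart makes it possible to rework dialogue to suit a persona without losing sight of whether the conversation still serves the user’s goal.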
The second workshop illustrated many of the benefits and weaknesses of the conversation design methods I developed. Overall, the methods were effective in helping teams collaboratively build human-centric chatbots. Particularly in making relatable experiences, people worked through the design process: empathizing, defining, ideating, prototyping, and testing (d.School, 2010). By completing the methods, everyone began to understand the qualities necessary to design effective conversations. I was elated that all of the participants were eager to continue learning and experimenting with designing chatbots and to use these methods in the future. Nevertheless, the conversations that were designed did not fully accomplish many of the outcomes I had hoped to observe.
The most significant issue I observed, and one that is core to this project, was that the methods did not significantly encourage designers to design a conversation that meets a user’s goal. This is important because effective conversation depends on whether the “changes brought about by conversation have lasting value to the participants” (Dubberly and Pangaro, 2009). In the workshop, teams were predominantly focused on personality design and on manifesting those choices through dialogue. My first thought was that one or more of the methods were insufficient to help teams clarify their chatbot’s goals and develop solutions to meet them. Yet there were several conditions of the workshop that could have led to this outcome.
I did not present much context to the participants about the intention of the workshop. Introducing the second workshop, I spoke about how we would be designing all of the components necessary for mildly intelligent chatbots. By doing so, I may have set the participants’ expectations such that they paid less heed to meeting their chatbot’s and the conversation’s goals.
The structure of the workshop may have encouraged more focus on designing personalities. Allow me to break the workshop’s schedule into four equal portions. The first two portions of the workshop, the Bot Persona and improv, focus on empathizing with the chatbot and its emotional, cognitive, and behavioral qualities. The third portion, beginning conversation mapping and programming, teaches participants where to add specific dialogue in order to make the chatbot function. The fourth portion, conversation mapping, allows for structuring the conversation and writing dialogue at the same time. Looking at the workshop in this way, it becomes evident that participants may have spent much of their time considering the details of a chatbot’s personality rather than developing a chatbot that meets users’ goals.
The participants of the workshop were simply more excited to design personalities than conversations. In designing this workshop, I gave teams significant leeway to develop their own chatbots. This was in the spirit of giving greater creative ownership to teams, motivating them through a long experiential learning workshop. To this end, the chatbots participants made were wildly creative, hilarious, unusual, and endearing. However, there was little discussion of or demand for creating practical experiences. I did ask teams to pick chatbot and user goals, but I did not test the conversations against these goals. Without holding teams to these constraints, it is understandable that they were preoccupied with creating something fun rather than something useful.
Lastly, there was not enough time. By the end of the workshop, all teams had built fairly rudimentary chatbots that replied to comments and held conversations that asked one or two questions. This is already an impressive achievement given the experience levels of the participants. Given another hour, it is possible that teams would have enriched their conversations dramatically, perhaps orienting them more closely towards achieving a goal. Holding participants to such an expectation in a constrained amount of time was foolhardy at best.
Because of these very real possibilities, I did not make many significant changes to the methods. Instead, for the next workshop, I hoped to observe any patterns that might confirm or contradict my earlier insights. For Bot Personas, I added a section to the worksheets indicating user goals. For improv, I decided to invite professional improv actors to better illustrate the method, since the participants in the third workshop would be members of the public and might be less comfortable with each other than my colleagues were. For conversation mapping, I would be clearer that teams should endeavor to design a conversation that meets a user goal, and I would give the teams more time to do so.
Chin, J. and Collinsworth, B. (2016) ‘Interview with Joanna Chin and Bryan Collinsworth’. Interview, 21 January.
d.School (2010) An Introduction to Design Thinking: Process Guide. Available at: https://dschool.stanford.edu/sandbox/groups/designresources/wiki/36873/attachments/74b3d/ModeGuideBOOTCAMP2010L.pdf?sessionID=2f58897684fb982484d0df8fbb73761194ef1158 (Accessed: 19 February 2016).
Dubberly, H. and Pangaro, P. (2009) What is conversation? How can we design for effective conversation?. Available at: http://www.dubberly.com/articles/what-is-conversation.html (Accessed: 4 February 2016).
Gray, D., Brown, S. and Macanufo, J. (2010) Gamestorming: A playbook for innovators, rulebreakers, and changemakers. United States: O’Reilly Media, Inc, USA.
Haller, E. and Rebedea, T. (2013) ‘Designing a Chat-bot that simulates an historical figure’, 2013 19th International Conference on Control Systems and Computer Science. doi: 10.1109/cscs.2013.85.
Kirby, M. (1999) Apples to Apples. Mattel.
Simsarian, K.T. (2003) ‘Take it to the next stage’, CHI ’03 Extended Abstracts on Human Factors in Computing Systems – CHI ’03. doi: 10.1145/765891.766123.