Research: Chatbots, Conversation, & Trust

Chatbots

BACKGROUND & THEORY

A chatbot is a computer program designed to have an intelligent conversation with people. Conversations are primarily conveyed through text or speech (van Lun, 2011), but can also use “gesture, gaze, speech… intonation” as long as the medium can transmit such information (Cassell, 2000). Since Alan Turing presented the Imitation Game (1950), programmers and designers have been building programs that can communicate with people. At its core, a chatbot’s “intelligence” or “mind” is simply how well it maps a user’s inputs to appropriate responses (Abou-Zahra, 2012; Kenyon, 2015). The system itself is not intelligent, but “you can write patterns to trap specific meanings” of certain words and phrases to return other words and phrases, “which is a very close approximation in many cases” (Wilcox and Wilcox, 2013).
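
To make this concrete, here is a minimal, hypothetical sketch of that pattern-trapping idea in Python: a handful of hand-written patterns are mapped to canned responses, and anything that falls through gets a fallback. The patterns and replies are invented for illustration, not taken from any of the systems cited above.

```python
import re

# Invented, illustrative patterns: each one traps a class of user inputs
# and maps it to a canned response.
PATTERNS = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hello! What would you like to talk about?"),
    (re.compile(r"\bmy name is (\w+)", re.I), "Nice to meet you, {0}."),
    (re.compile(r"\b(weather|rain|sunny)\b", re.I), "I hear the weather is a popular topic."),
]

FALLBACK = "I'm not sure I follow. Could you say that another way?"

def respond(user_input: str) -> str:
    """Return the response of the first pattern that matches the input."""
    for pattern, response in PATTERNS:
        match = pattern.search(user_input)
        if match:
            # Substitute any captured words into the reply, approximating "understanding".
            return response.format(*match.groups())
    return FALLBACK

print(respond("Hi there!"))         # Hello! What would you like to talk about?
print(respond("My name is Ada"))    # Nice to meet you, Ada.
print(respond("Tell me a secret"))  # I'm not sure I follow. Could you say that another way?
```

Even a toy like this shows why the approach scales poorly by hand: every new topic needs another pattern and another reply, which is exactly the labor problem described below.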

Designers have primarily built these minds using two techniques for creating the patterns and the bank of words and phrases: writing them by hand, or extracting them from a large body of text such as memoirs, literature, or Wikipedia articles (Kenyon, 2015; Abou-Zahra, 2012). Writing by hand is understandably labor intensive, and it is difficult to create a sense of realism this way. Much more research is directed at working from pre-written text. One notable example copied historical records to recreate a historical figure’s personality, that of Adolf Hitler (Haller and Rebedea, 2013); the Hitler chatbot, however, was never tested for how well it worked with users. The most ambitious example used machine-learning techniques to process the British National Corpus (BNC, 2002), a collection of text samples amounting to over 100 million words, extracted from 4124 modern British English texts of all kinds, both spoken and written (Shawar and Atwell, 2005). In response to users finding “some responses not just rude but incoherent”, the authors insist that their creation must “be seen to be ‘useful’” before people talk to it, in order for it “to be appreciated” (Shawar and Atwell, 2005).

Chatbot frameworks are built around having conversations, but there is little discussion in the literature of how to make conversations good. Some argue that the very attempt to replicate human-to-human conversation is “inhuman” (Colby, 1999). Most chatbot designers take a reflexive approach, simply responding to statements rather than designing interactions around users’ wider needs and goals (Abu Shawar and Atwell, 2007). Because there is little conscious effort to design the journey users go on, it is difficult to ensure users have a differentiated, premium, and memorable experience (Pine and Gilmore, 1998). To design such systems, I felt I needed to better understand what conversation is, how it works, and what makes for good conversation.

Conversation

THEORY & BACKGROUND

Conversation is not just the transfer of information; it is how people work together (Shannon and Weaver, 1963). It is difficult and often not very efficient (Coiera, 2000; Garrod and Pickering, 2004), yet people continue to have conversations. One of the main reasons is that conversation is how people “align their situation models” to come to mutual understanding (Garrod and Pickering, 2004). This alignment behavior, like many social behaviors, is argued to be “wired” into human consciousness (Dijksterhuis and Bargh, 2001). Otherwise people might simply talk past each other, if they talked at all. Dubberly and Pangaro, inspired by cybernetician Gordon Pask, describe conversation as an additive process in which the exchange of information helps people learn new concepts, share and evolve knowledge, and confirm agreement (2009; 1976).

To better explain how people broadly engage each other, allow me to simplify and operationalize how conversation works by summarizing Dubberly and Pangaro’s framework for conversation (Dubberly and Pangaro, 2009); a small code sketch of the stages follows the list.

  1. Open a Channel – Someone starts and can be heard by another

  2. Commit to Engage – Everyone decides to stick around

  3. Construct Meaning – Things start building

    1. Establish commonalities and shared norms.

    2. Talk about the topics to discuss and how it all works together.

    3. Everyone makes “meaning” of what happened.

  4. Evolve – “Either or both hold new beliefs, make decisions, or develop new relationships, with others, with circumstances or objects, or with themselves.”

  5. Converge on Agreement – Everyone discusses their understanding until alignment.

  6. Act or Transact – Make the exchange (someone does something)
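
As a thought experiment, the stages above can be treated like a simple state progression that a conversational system might track. The sketch below is my own simplification, and it assumes, beyond anything Dubberly and Pangaro specify, that a breakdown sends participants back to constructing shared meaning rather than ending the conversation.

```python
from enum import Enum, auto

class Stage(Enum):
    """The stages of Dubberly and Pangaro's conversation framework, in order."""
    OPEN_CHANNEL = auto()
    COMMIT_TO_ENGAGE = auto()
    CONSTRUCT_MEANING = auto()
    EVOLVE = auto()
    CONVERGE_ON_AGREEMENT = auto()
    ACT_OR_TRANSACT = auto()

ORDER = list(Stage)

def advance(current: Stage, succeeded: bool) -> Stage:
    """Move to the next stage on success; fall back to meaning-making on a breakdown."""
    if not succeeded:
        # Assumption for illustration: a breakdown (confused meaning, lost commitment)
        # sends participants back to reconstructing shared meaning.
        return Stage.CONSTRUCT_MEANING
    index = ORDER.index(current)
    return ORDER[min(index + 1, len(ORDER) - 1)]

# Example: a conversation that stumbles once while constructing meaning.
stage = Stage.OPEN_CHANNEL
for ok in [True, True, False, True, True, True, True]:
    stage = advance(stage, ok)
print(stage)  # Stage.ACT_OR_TRANSACT
```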

It is important to note that this framework illustrates a complete, successful conversation. There are also many ways that a conversation can break down, whether limited by the conversational infrastructure and the participants (Dubberly and Pangaro, 2009) or because someone’s meaning gets confused (Grice, 1975; Sack, 2002). This framework illustrates how to have an efficient conversation, but it does not address what makes for good conversation.

Linguists often suggest the work of H. Paul Grice as a basis for understanding conversation (Neale, 1992). Grice developed the Cooperative Principle (1975), an accepted way of speaking that is considered standard behavior (Davies, 2007). He describes this behavior as: “Make your conversational contribution such as is required, at the stage at which it occurs, by the accepted purpose or direction of the talk exchange in which you are engaged” (Grice, 1975). More simply, people engage in conversation with a shared purpose or goal, saying what needs to be said when it needs to be said. I find the Cooperative Principle important in guiding good conversation because it addresses people’s needs in discourse, while also hinting at the many ways the principle can be “flouted” to complex and hilarious effect (Dynel, 2008; The University of Nottingham and Millard, 2014). Note that “cooperative” here is a technical term for the clarity and interpretability of a contribution (Davies, 2007). While this distinction can be confusing in other contexts, in user experience design clarity and simplicity are themselves helpful, cooperative acts (Ritzenthaler, 2011), so I am not worried about misinterpretation here.

Conversation Research

ANALOGOUS OBSERVATION

To better understand the Cooperative Principle, I needed to observe conversation. Using the analogous research method (IDEO, 2014), I observed natural conversation with Daniel, a local hairstylist, and with Jimmy, an improv instructor teaching students at The Groundlings. Hairstylists are well known to be great conversationalists, talking with dozens of new and repeat customers every week. It was also a good excuse to get a haircut myself. Improv classes are interesting because students are learning how to invent plausible conversations. I noticed several patterns while observing both experiences.

Each experience began with similar themes. Both opened with interesting ways to get to know each other. At the salon, Daniel began by asking me about my hair’s history and talked about how he figured out his own hair style. At The Groundlings, the class began with students recounting their upbringing while other students acted it out. The beginning of each experience was also focused on reducing tension and judgement. Quickly recognizing my nervousness about my thinning hair, my hairstylist talked plainly about how he was going to help and how my condition was less drastic than the evidence indicated. Jimmy had the entire class laugh for two minutes straight after recognizing that some improv students had begun the class stiffly. It was surprisingly difficult, but it immediately changed everyone’s attitude.

Throughout each experience, earning understanding appeared to be important. For instance, both sought commonality. After establishing what to do with my hair, I asked Daniel how he could be friendly with so many people. He said that he always brings up four categories: work, living situation, hobbies, and relationships. Amongst those, there is always some common ground. At the improv class, students kept putting themselves in situations where strangers meet each other. Jimmy responded that students didn’t need to fight to earn a relationship, but rather to assume it had already been won. Both experiences also had people trying to be heard. When talking to people with extreme perspectives, Daniel said he will never blindly accept any opinion and insists on “lightly engaging” everyone to explore what they really mean. After witnessing a student who agreed too much, Jimmy’s critique was that everyone should always be adding information to a scene. When an actor adds information, the actor earns ownership of the world that they are helping to make.

By the end of each experience, participants were regularly adjusting expectations for others and for themselves. Each experience encouraged taking personal pride and placing it in others. I asked Daniel what he had learned about good conversation from talking to strangers for over 30 years. He said to “embrace everyone’s peculiarities” so that your expectations are always fooled. One improv student critiqued another for acting selfishly in a scene. Jimmy followed up, saying that the trick to being an incredible improv performer is to always make the other person look good. Not only does it help energize a scene, but it also helps to form better relationships between the characters and the actors. Lastly, people needed to adjust their behavior around risk. I asked Daniel if I could more aggressively shave the sides of my head. Daniel replied that first haircuts are about establishing a relationship; if I wanted something a bit crazier, we could do it next time. Towards the end of the improv class, Jimmy critiqued the entire class for playing it safe in a few roles. These weekly classes were their safe space to experiment with weird, bizarre, and uncomfortable personalities, topics, and situations.

Reflecting on these patterns, the conversations seemed oriented around creating and building trust. This makes sense, since all groups are based on contracts of trust (Friend, no date; Gambetta, 1990). Everyone needed a little trust at the beginning and something to confirm it would be okay. Trust was then validated when everyone discovered that they could be together for some period of time and be respected. Well into each session, everyone had to give trust to others. But what is the relationship between conversation and trust? Does one cause the other? To answer these questions, I needed to better understand trust theory.

Trust Research

THEORY & BACKGROUND

Trust has been researched in earnest since the 1950s in the disciplines of philosophy, sociology, psychology, management, marketing, ergonomics, human–computer interaction, industrial psychology, and online commerce (Corritore, 2001; Lee and See, 2004). These examinations have revealed the many problems trust poses (Husted, 1998) and the lack of agreement on what trust means across disciplines (Lewicki, 1995; Lee and See, 2004). Yet there are common threads in these definitions (Rousseau et al., 1998). Trust only exists in a situation of uncertainty or risk (Adams, 1995; Luhmann, 1988). It is also a future-focused expectation of someone and of what may happen (Burt and Knez, 1995; Luhmann, 1988). Lastly, some party has a condition that needs to be fulfilled (Corritore, Kracher, and Wiedenbeck, 2003; McKnight, Choudhury, and Kacmar, 2002). Combining these main strands of literature, Kassebaum describes trust as “an expectation about a future behaviour of another person and an accompanying feeling of calmness, confidence, and security depending on the degree of trust and the extent of the associated risk” (translated by Bamberger, 2010). While clear, this definition focuses on people and so does not encompass trust in systems and organizations. This is why I prefer the more concise definition of trust as “an attitude of positive expectation that one’s vulnerabilities will not be exploited” (Riegelsberger, Sasse, and McCarthy, 2005).

To illustrate the process by which people trust, I rely upon the basic trust framework from Riegelsberger, Sasse, and McCarthy (2005); a toy sketch of the loop follows the list.

  1. A Trustor and a Trustee perceive trustworthiness in each other by exchanging signals

  2. The Trustor forms an expectation of the Trustee, under some kind of risk or uncertainty

  3. The Trustee fulfills that expectation
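
The sketch below is a toy model, not anything proposed by Riegelsberger, Sasse, and McCarthy. It simply shows the loop of fulfilled and violated expectations accumulating into a trust level over successive interactions; the update sizes are invented for illustration.

```python
# Toy model of the signal -> expectation -> fulfillment loop accumulating trust.
# The starting value and update amounts are invented for illustration.

def update_trust(trust: float, expectation_met: bool) -> float:
    """Raise trust a little when an expectation is fulfilled; drop it sharply when not."""
    if expectation_met:
        return min(1.0, trust + 0.1)
    return max(0.0, trust - 0.3)  # violations cost more than fulfillments earn

trust = 0.2  # a small initial willingness to engage, signalled e.g. by small talk
for outcome in [True, True, True, False, True]:
    trust = update_trust(trust, outcome)
    print(f"expectation met: {outcome}, trust is now {trust:.1f}")
```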

This framework shows how trust is grown through a history of setting and meeting expectations. Through each successive interaction, expectations grow, as does the risk that the outcome will not be met. Each time an expectation is met, trust is built. This trust framework also mirrors and builds upon the conversational framework, with the addition of expectation and risk. In fact, with this context in mind, “in its basic sense trust is essential to every conversation” (Løgstrup, 1997; Cockburn, 2014). Through continued conversational loops in a trust-building context, participants are able to build trust. What then seems incredibly important to me is understanding how trusting relationships begin. In the trust framework above, the signals that indicate trustworthiness are unspecified. However, Bickmore and Cassell note that “trust… is often established through linguistic means” (2001). Their description suggests small talk as one such signal.

Small talk can be a ritual for people to transition from the awkward state of not-knowing to knowing, a social convention for establishing common ground, building appreciation between people, establishing expertise, and learning about each other (Bickmore and Cassell, 2001). These properties suggest how small talk is “specifically created to signify the presence of trust-warranting properties” (Riegelsberger, Sasse, and McCarthy, 2005), allowing a “process in which [people] iteratively ‘test the water’ to determine if they want to continue deepening the relationship” (Bickmore and Cassell, 2001). Here we have a clear example of how conversation creates and builds trust. To begin the trust-building process, people start with conversation to send signals of trustworthiness. Then, by successfully following the conversation framework, people are able to form expectations of one another, converge on agreement, and fulfill those expectations.

I now understand that the point of conversation with a chatbot is to build trust between the chatbot and its users. At this point, I can confidently state the purpose of this project: how might we design trustworthy chatbot experiences? (Berger, 2012). What I still do not understand is how chatbots can uniquely instill trustworthiness through conversation, whether through small talk or conversation more broadly. By understanding these traits, I can then learn how to best design chatbot conversations.

Chatbot Experts

INTERVIEWS & OBSERVATIONS

To better understand how to design for trustworthiness in chatbots, I interviewed a variety of chatbot experts. I spoke to Elaine Lee, Designer at Large; Colin Mitchel, Twitter Bot Designer; Emmet Connolly, Director of Product Design at Intercom; Matt Webb, CEO and Co-Founder of Berg; and Joanna Chin and Bryan Collinsworth, creators of dBot. I also informally talked to other experts: Ben Brown, CEO of Howdy and Co-Founder of BotKit; Matt Klee, Lead Interaction Designer at Rethink Robotics; Stuart Nolan, Robotics Magician; Brandon Stephens, Co-Founder of MidwestUX Conference; Andrew Hutchins, Drone-Human Interaction Researcher at Duke University; and Jimmy Fowlie, Improv Teacher at The Groundlings.

In my interviews and conversations, I observed two patterns in relation to trust: personality and dealing with the unexpected.

Many experts discussed how elements of a personality alter how users interact. Brown discussed how power structures in workplaces can be transferred to his chatbot, Howdy (Brown, Jan 8 2016, video call). When Howdy is set up by a boss, teams talk to it with more formal language and are more diligent about interacting with it than teams that set up Howdy on their own. Noting a similar transference, Klee and Nolan said that simply adding eyes to a robot immediately makes people empathize with it, treating it with more care and feeling safer around it (Klee, Oct 10 2016, telephone call; Nolan, Sep 25 2016, video call). Nolan also noted how impossible it is to test for human trust with robots because companies and researchers keep creating “cute” robots (Nolan, Sep 25 2016, telephone call). Chin noted that “if you looked at it very logically, there’s no reason you should want to talk” to their douchebag simulator dBot (Chin and Collinsworth, 2016). But people kept talking to it, she speculated, because it so closely reflected their own experiences with online dating. Mitchel discussed how his Twitter Bots, like BoggleBot, are all “as friendly and welcoming and kind as possible” because of his own fear that his bots might inadvertently affect a vulnerable person (Mitchel, 2016). Collinsworth explained that “giving [chatbots] quirks, giving them those things that aren’t necessarily expected or perfect” makes the chatbots easier to connect with (Chin and Collinsworth, 2016).

Experts also discussed dealing with conversational errors. Webb noted that dependability is a major problem when working with chatbots: “We model their behavior a bit in our heads” in order to set expectations of what might happen, and then we don’t know what to do when our models prove wrong (Webb, 2016). Almost in response, Chin and Collinsworth suggested that defining the purpose and context of a chatbot can help designers capture 90% of the functionality people expect it to address. By doing so, people are a “little bit surprised by the times when [dBot] doesn’t quite respond the way you expect” (Chin and Collinsworth, 2016). Hutchins and Stephens both noted that, when there is not enough unexpected behavior, humans are prone to over-trust systems, leading to extreme accidents that could otherwise have been caught (Hutchins, Oct 15 2015, video call; Stephens, Nov 1 2015, video call; Lee and See, 2004). Perhaps with this in mind, Connolly and Lee have found that in highly complex chatbots there are too many opportunities for the service to break down, and humans need to take over. Connolly’s service, Intercom, avoids most of these conflicts by having most conversations end with a human operator, because if “something is sensitive and you want advice, and empathy, and so on you want that to be with a person not a bot” (Connolly, 2016). Lee’s chatbot, Large, is designed for almost all contingencies. Yet if Large recognizes that users are going down a stressful path, such as multiple credit card charging errors, a human operator is instructed to take over the conversation (Lee, Jan 8 2016, video call).
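
A rough sketch of that hand-off pattern follows. It is not Intercom’s or Large’s actual logic; the thresholds and topics are invented to show how repeated misunderstandings or a sensitive topic might trigger escalation to a human operator.

```python
# Hypothetical hand-off logic: escalate to a human when errors pile up or the
# topic calls for human empathy. Thresholds and topics are invented examples.

SENSITIVE_TOPICS = {"billing error", "charge dispute", "complaint"}
MAX_ERRORS = 2

def should_hand_off(error_count: int, topic: str) -> bool:
    """Escalate when misunderstandings accumulate or the topic is sensitive."""
    return error_count >= MAX_ERRORS or topic in SENSITIVE_TOPICS

def handle_turn(state: dict, understood: bool, topic: str) -> str:
    """Track misunderstandings across turns and decide whether to escalate."""
    if not understood:
        state["errors"] = state.get("errors", 0) + 1
    if should_hand_off(state.get("errors", 0), topic):
        return "Let me connect you with a teammate who can sort this out."
    if not understood:
        return "Sorry, I didn't catch that. Could you rephrase?"
    return f"Sure, I can help with {topic}."

state = {}
print(handle_turn(state, understood=True, topic="order status"))
print(handle_turn(state, understood=False, topic="order status"))
print(handle_turn(state, understood=False, topic="order status"))  # second miss: hand off
```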

Conversation Design

ELEMENTS & UNDERSTANDING

Reflecting on these patterns, I see three elements that need to be designed for chatbot conversations: Dialogue, Personality, and Conversational Structure.

The most apparent element is writing the conversational dialogue. Language is the primary medium of chatbots, so understandably the words and phrases that are said need to be designed. At its most functional, dialogue plays a dual role: sharing information with users and guiding their input. This information carries the manifestation of a chatbot’s personality. It also carries the state of the chatbot’s understanding, indicating the status of the conversation and reducing confusion. Dialogue likewise guides how users are expected to behave in return, preventing errors by indicating what inputs the chatbot is listening for. Finally, dialogue guides what relationship users should adopt with the chatbot, vis-à-vis its personality.
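
As a small illustration of that dual role, the invented helper below composes a message that reports the chatbot’s current status while spelling out exactly which inputs it is listening for. The wording and options are hypothetical, not drawn from any system discussed above.

```python
# Invented example of dialogue copy doing its dual job: reporting the chatbot's
# current understanding (status) while signalling exactly what input it expects next.

def prompt(status: str, expected_inputs: list[str]) -> str:
    """Compose a message that states where the conversation is and what to say next."""
    options = " or ".join(f"'{choice}'" for choice in expected_inputs)
    return f"{status} You can reply with {options}."

print(prompt(
    "I've found two flights to Lisbon on Friday.",
    ["morning", "evening"],
))
# I've found two flights to Lisbon on Friday. You can reply with 'morning' or 'evening'.
```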

Personality is clearly a dimension intertwined with how people trust. For designing chatbots, personality seems obligatory for defining behavioral expectations and setting the right level of trust. For users, expectations define the mental model they ascribe to a chatbot, intuitively teaching them how the chatbot ought to be used. Designers share the same expectations of a personality as their users, which helps define what behaviors will feel intuitive. Setting the right level of trust helps users decide what tasks are socially appropriate for a chatbot. With the right level of trust, users can be more tolerant of errors, while the likelihood of unexpected errors is reduced.

Conversational Structure is the journey users take to achieve the goals and purpose of a conversation. A well-designed structure allows for a useful and enjoyable conversational experience. On its face, a structured conversation is useful for planning how a personality will achieve users’ goals. It is also useful for reducing the actual risk users face by anticipating and addressing plausible problems and errors. A structured conversation becomes enjoyable when it achieves those goals along a unique and delightful path. In designing this elevated experience, a structured conversation also reduces perceived risk by addressing the moments when fear and stress run highest.
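
To show what designing the journey might look like in practice, here is a hypothetical sketch of a conversation plan in which every step has a pre-planned recovery branch, so plausible errors are anticipated rather than improvised. The step names and branches are my own, for illustration only.

```python
# Hypothetical conversation plan: named steps, each with a happy path ("next")
# and a pre-planned recovery branch ("on_error"). Invented for illustration.

CONVERSATION_PLAN = {
    "greet":        {"next": "collect_goal", "on_error": "greet"},
    "collect_goal": {"next": "confirm",      "on_error": "clarify_goal"},
    "clarify_goal": {"next": "confirm",      "on_error": "hand_off_to_human"},
    "confirm":      {"next": "act",          "on_error": "collect_goal"},
    "act":          {"next": "close",        "on_error": "hand_off_to_human"},
}

def next_step(current: str, succeeded: bool) -> str:
    """Follow the planned path on success, or the planned recovery branch on failure."""
    step = CONVERSATION_PLAN.get(current)
    if step is None:
        return current  # terminal steps like "close" or "hand_off_to_human"
    return step["next"] if succeeded else step["on_error"]

# Example: the goal is misunderstood once, then recovered and completed.
step = "greet"
for ok in [True, False, True, True, True]:
    step = next_step(step, ok)
print(step)  # close
```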

Dialogue, Personality, and Conversational Structure are clearly central elements that must be designed to create trustworthy chatbot conversations. However, at present there are few methods to guide chatbot designers in addressing these elements. We might help chatbot designers create more trustworthy experiences by developing methods to design conversations. Additionally, as this project is motivated by design thinking principles, there is also a need to understand how each of these elements can be designed within team settings. How might teams collaboratively write dialogue? How might teams collaboratively design a personality? How might teams collaboratively structure a conversation?

Bibliography

Abou-Zahra, S. (2012) Accessibility principles – how people with disabilities use the web. Available at: https://www.w3.org/WAI/intro/people-use-web/principles (Accessed: 10 February 2016).

Adams, J. (1995) Risk: The policy implications of risk compensation and Plural Rationalities. Bristol, PA: UCL Press.

Bamberger, W. (2010) Interpersonal Trust – Attempt of a Definition. Available at: https://www.ldv.ei.tum.de/en/research/fidens/interpersonal-trust/ (Accessed: 13 February 2016).

Berger, W. (2012) ‘The secret phrase top innovators use’, Harvard Business Review (September).

Bickmore, T. and Cassell, J. (2001) ‘Relational agents: A Model and Implementation of Building User Trust’, Proceedings of the SIGCHI conference on Human factors in computing systems, CHI ’01, pp. 396–403. doi: 10.1145/365024.365304.

BNC (2009) British National Corpus. Available at: http://www.natcorp.ox.ac.uk/ (Accessed: 11 February 2016).

Burt, R.S. and Knez, M. (1995) ‘Kinds of Third-Party effects on trust’, Rationality and Society, 7(3), pp. 255–292. doi: 10.1177/1043463195007003003.

Cassell, J. (2000) ‘Embodied conversational interface agents’, Communications of the ACM, 43(4), pp. 70–78. doi: 10.1145/332051.332075.

Chin, J. and Collinsworth, B. (2016) ‘Interview with Joanna Chin and Bryan Collinsworth’. Interviewed by the author, 21 January.

Cockburn, D. (2014) ‘Trust in Conversation’, Nordic Wittgenstein Review, 3(1), pp. 47–68.

Coiera, E. (2000) ‘When conversation is better than computation’, Journal of the American Medical Informatics Association, 7(3), pp. 277–286. doi: 10.1136/jamia.2000.0070277.

Colby, K.M. (1999) ‘Comments on Human-Computer Conversation’, in Wilks, Y. (ed.) Machine conversations. Boston, MA: Springer US, pp. 5–8.

Connolly, E. (2016) ‘Interview with Emmet Connolly’. Interviewed by the author, 13 January.

Corritore, C.L. (2001) ‘Trust in the online environment’, in Smith, M.J., Salvendy, G., and Harris, P.D. (eds.) Usability evaluation and interface design: Cognitive engineering, intelligent agents, and virtual reality. London: Lawrence Erlbaum Associates.

Corritore, C.L., Kracher, B. and Wiedenbeck, S. (2003) ‘On-line trust: Concepts, evolving themes, a model’, International Journal of Human-Computer Studies, 58(6), pp. 737–758. doi: 10.1016/s1071-5819(03)00041-7.

Davies, B.L. (2007) ‘Grice’s cooperative principle: Meaning and rationality’, Journal of Pragmatics, 39(12), pp. 2308–2331. doi: 10.1016/j.pragma.2007.09.002.

Dijksterhuis, A. and Bargh, J.A. (2001) ‘The perception-behavior expressway: Automatic effects of social perception on social behavior’, Advances in Experimental Social Psychology, 33, pp. 1–40.

Dubberly, H. and Pangaro, P. (2009) What is conversation? How can we design for effective conversation?. Available at: http://www.dubberly.com/articles/what-is-conversation.html (Accessed: 4 February 2016).

Dynel, M. (2008) ‘There is method in the humorous speaker’s madness: Humour and Grice’s model’, Lodz Papers in Pragmatics, 4(1). doi: 10.2478/v10016-008-0011-5.

Friend, C. (no date) Social contract theory | Internet encyclopedia of philosophy. Available at: http://www.iep.utm.edu/soc-cont/ (Accessed: 13 February 2016).

Gambetta, D. (1990) ‘Can We Trust Trust?’, in Gambetta, D. (ed.) Trust: Making and breaking cooperative relations. Cambridge, Mass., USA: Blackwell Publishers, pp. 213–237.

Garrod, S. and Pickering, M.J. (2004) ‘Why is conversation so easy?’,Trends in Cognitive Sciences, 8(1), pp. 8–11. doi: 10.1016/j.tics.2003.10.016.

Grice, P. (1975) ‘Logic and Conversation’, in Cole, P. and Morgan, J. (eds.) Syntax and Semantics 3: Speech Acts. New York: Academic Press, pp. 41–58.

Haller, E. and Rebedea, T. (2013) ‘Designing a Chat-bot that simulates an historical figure’, 2013 19th International Conference on Control Systems and Computer Science. doi: 10.1109/cscs.2013.85.

IDEO (2014) Analogous Inspirational Research. Available at: http://www.designkit.org/methods/6 (Accessed: 12 February 2016).

Kenyon, S. (2015) Artificial intelligence is a design problem. Available at: http://hplusmagazine.com/2015/01/21/artificial-intelligence-design-problem/ (Accessed: 7 February 2016).

Lee, J.D. and See, K.A. (2004) ‘Trust in automation: Designing for appropriate reliance’, Human Factors: The Journal of the Human Factors and Ergonomics Society, 46(1), pp. 50–80. doi: 10.1518/hfes.46.1.50.30392.

Løgstrup, K.E. (1997) The Ethical Demand. United States: University of Notre Dame Press.

Luhmann, N. (1988) ‘Familiarity, Confidence, Trust: Problems and Alternatives’, in Gambetta, D. (ed.) Trust: Making and breaking cooperative relations. New York, NY, USA: Blackwell Publishers, ch. 6.

McKnight, D.H., Choudhury, V. and Kacmar, C. (2002) ‘The impact of initial consumer trust on intentions to transact with a web site: A trust building model’, The Journal of Strategic Information Systems, 11(3-4), pp. 297–323. doi: 10.1016/s0963-8687(02)00020-3.

Mitchel, C. (2016) ‘Interview with Colin Mitchel’. Interviewed by the author, 7 January.

Neale, S. (1992) ‘Paul Grice and the philosophy of language’, Linguistics and Philosophy, 15(5), pp. 509–559. doi: 10.1007/bf00630629.

Pask, G. (1976) Conversation theory: Applications in education and epistemology. Summary of concepts available at: http://pangaro.com/architecture-of-conversations.html.

Pine, J.B. and Gilmore, J.H. (1998) ‘Welcome to the experience economy’, Harvard Business Review (July).

Riegelsberger, J., Sasse, M.A. and McCarthy, J.D. (2005) ‘The mechanics of trust: A framework for research and design’, International Journal of Human-Computer Studies, 62(3), pp. 381–422. doi: 10.1016/j.ijhcs.2005.01.001.

Ritzenthaler, D. (2011) What does it mean to be simple?. Available at: http://52weeksofux.com/post/21026021557/what-does-it-mean-to-be-simple (Accessed: 11 February 2016).

Rousseau, D.M., Sitkin, S.B., Burt, R.S. and Camerer, C. (1998) ‘Not So Different After All: A Cross-Discipline View of Trust’, Academy of Management Review, 23(3), pp. 393–404. doi: 10.5465/amr.1998.926617.

Sack, W. (2002) ‘What does a very large-scale conversation look like? Artificial dialectics and the graphical summarization of large volumes of e-mail’, Leonardo, 35(4), pp. 417–426. doi: 10.1162/002409402760181231.

Shannon, C.E. and Weaver, W. (1963) The mathematical theory of communication. 4th edn. United States: University of Illinois Press.

Shawar, B.A. and Atwell, E.S. (2005) ‘Using corpora in machine-learning chatbot systems’, International Journal of Corpus Linguistics, 10(4), pp. 489–516. doi: 10.1075/ijcl.10.4.06sha.

The University of Nottingham and Millard, M. (2014) Grice’s maxims in ‘the big bang theory’. Available at: https://www.youtube.com/watch?v=vEM8gZCWQ2w (Accessed: 12 February 2016).

Turing, A.M. (1950) ‘Computing Machinery and Intelligence’, Mind, LIX(236), pp. 433–460. doi: 10.1093/mind/lix.236.433.

Van Lun, E. (2011) Chatbot – artificial person with interactive textual conversation skills. Available at: https://www.chatbots.org/chatbot/ (Accessed: 11 February 2016).

Webb, M. (2016) ‘Interview with Matt Webb’. Interviewed by the author, 18 January.

Wilcox, B. and Wilcox, S. (2013) ‘Making it real: Loebner-winning chatbot design’, Arbor, 189(764), p. a086. doi: 10.3989/arbor.2013.764n6009.
