TALKING CONVERSATIONAL AI WITH DR. TANIA PEITZKER

May 19, 2020

Tania Peitzker

 

Dr. Tania Peitzker is CEO and Chief Technology Innovation Officer of the Munich-based company AI Bots as a Service. In the bots business for over 10 years, Dr. Peitzker has seen her projects evolve from classic 2D chatbots to conversational 3D AI bots. Her latest launch, Birgit am Bodensee, is a cognitive interface that appears in lifelike human form using sophisticated hologram technology. Designed as a prototype for deployment across numerous smart cities worldwide, Birgit speaks over 130 languages and can be completely transformed, renamed, re-gendered and otherwise customized to meet the needs of different clients and markets.

 

Birgit was due to be launched this past April and, as a result of the coronavirus, you came up with a touch-free ‘pandemic innovation’ – tell us about that.

Yes, Birgit, our newest 3D AI bot hologram, was commissioned to be installed at a central location in the old town of Konstanz as well as on ferries running on Lake Constance. Our initial prototype used a touchscreen, the idea being that the user – whether a visitor or a resident – would initiate the conversation with the 3D holographic installation that way. Voice tech is still on the rise, but people don’t yet feel that familiar with it. Think about your own usage: everyone still feels a bit reserved or hesitant speaking to their phone. It has taken Amazon billions just to get people to adopt the habit of talking to a device. That was why we decided to personify the device as a hologram: you’re actually talking to a holographic ‘person’ instead of a device – we can also make it a holographic animal or an object.

Because of the pandemic, we realized that making people touch a screen or a mouse might spread the virus. So we’ve spent the last two months in R&D and developed a way to activate the machine via movement. It can detect when you’re there – it can see when someone is standing in front of it – and it will ask you in English or German, “Which language would you like to speak with me?”, as it can speak 130 languages.
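To make the mechanics concrete, here is a minimal sketch of that kind of movement-based activation – the sensor reading, distance threshold and loop below are illustrative assumptions, not the actual product code:

    import random
    import time

    PRESENCE_THRESHOLD_M = 1.5  # assumed activation distance, in metres

    def read_distance_sensor():
        # Stand-in for a real depth/motion sensor; returns metres to the nearest person.
        return random.uniform(0.5, 4.0)

    def say(text):
        print(f"[BOT] {text}")  # stand-in for the text-to-speech engine

    def kiosk_loop(ticks=20):
        engaged = False
        for _ in range(ticks):
            if read_distance_sensor() < PRESENCE_THRESHOLD_M:
                if not engaged:
                    engaged = True  # someone stepped up: open the conversation, no touch needed
                    say("Which language would you like to speak with me?")
            else:
                engaged = False  # the visitor walked away: reset for the next person
            time.sleep(0.1)

    kiosk_loop()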

 

She’s what you would call a “wayfinder” bot, correct?

Yes, although our whole purpose and mission has now altered as a result of the pandemic, in the sense that we really are there to help local businesses regain the lost footfall – at the independent bookshops and the cultural organisations we’re supporting through these installations. Konstanz has a high student population, so you’re trying to cater to everyone’s needs; we don’t want to be there exclusively for the tourists, who have fallen away now anyway. We also want to incorporate free community announcements. If you’re an artist or a writer with an event, how do you promote it? You don’t have a marketing budget; you can only hope for the best on Facebook and social media, and even when you build up hundreds of contacts and friends, there’s no guarantee anyone will turn up. So we want to reinforce non-commercial culture with free community ads. What in the past would have been a poster on a wall – we’re the digital equivalent of that, saying, “Look, we’re going to make it easy to make a community announcement, and make it very targeted and effective.” Because if somebody comes along and talks to Birgit, she’s going to chat and unobtrusively find out whether you’re a resident or a tourist, so she can make a suggestion:

“If you live here, maybe you’d be interested in this event tonight that’s a free choral concert?”
“No.”
“Oh, what music do you prefer?”
“Jazz.”
“Oh, there’s a free jazz concert down the road...”

This type of conversational commerce can be used for the greater good in a very targeted way.
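As a rough sketch of how that targeting could work in code – the event listings and matching rule here are invented for illustration, not taken from the actual system:

    # Invented sample listings; real ones would come from the community-ads pool.
    EVENTS = [
        {"title": "free choral concert", "genre": "choral"},
        {"title": "free jazz concert down the road", "genre": "jazz"},
    ]

    def suggest(preferred_genre=None):
        # Return the first event matching the visitor's stated taste, if any.
        for event in EVENTS:
            if preferred_genre is None or event["genre"] == preferred_genre:
                return event["title"]
        return None

    # The exchange quoted above, as a two-step fallback:
    print(f"BOT: Maybe you'd be interested in this event tonight - a {suggest()}?")
    print("USER: No.")
    print("BOT: Oh, what music do you prefer?")
    print("USER: Jazz.")
    print(f"BOT: Oh, there's a {suggest('jazz')}...")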

 

One of our professors – Wojtek Materka, a visiting professor from INSEAD – wrote his PhD dissertation on the topic of trust in the digital age. He made some interesting points about AI in our recent podcast interview, which I’d like to put to you.

Trust is a huge practical issue, not just a theoretical or academic one. Looking at our Cologne pilot in the shopping centre – which lasted only four weeks and drew attention from Harvard, Yale and Delft universities (I’ve put a log book up on ResearchGate, so anyone who’s curious can do a deep dive into that project) – what I learned is that trust in AI comes down to one simple thing: will people use it and talk to it? On a B2C level, there was this box in the mall that drew people of all ages and backgrounds: an 85-year-old Cologne local who walked by and stuck his head in the box, trying to examine how it worked; a three-year-old toddler whose mother lifted her up so she could wave and talk to it; migrants who talked to it and were delighted because our bot understood their accents. The obstacles to uptake were: “What is it?” “Can I trust it?” “Is it filming me?” “Is it capturing my data?” “Is it going to affect me badly?” Then, when you explained it was AI, for some people that made it even more untrustworthy, because it was so unknown and unfamiliar.

 

Materka asserts that algorithms are always a reflection of their makers and their value systems. How aware are you of this ethical responsibility, and how are you addressing it?

We are quite open about our belief system, because we think that as AI producers or creators – in times when authoritarianism is evident even in Western democracies – it’s important to talk about this. If I’m being interviewed by a young African-American researcher at Harvard who challenges me on why all our avatars to date are white rather than ethnically diverse – although we do have some that are – it’s not a matter of confrontation; it’s a matter of having thought it through: what are we trying to create and represent, and who might we be excluding by accident? It’s not just a commercially-driven exercise.

On the face of it, commercially, of course we have to look at avatar placement and make it culturally appropriate. It’s the same as visiting a Muslim country where, as a woman, you are expected to dress differently – these are cultural sensitivities that, arguably, need to be carried over into the avatar’s representation.

Regarding the actual inner workings of an avatar, we’ve been adapting to all those concerns automatically, because we’re already working at the frontend of the algorithm, where it interfaces with people. Our whole core purpose and core competence is putting the machine in dialogue with a random human, whoever that human happens to be. So in that regard, we’ve been very conscious of keeping the algorithms linguistically neutral – not having too many Americanisms or Australianisms or traits of whichever culture the bot is embedded in at the time – so it remains fairly neutral and doesn’t cause offense.

 

Your bot at the mall was suggesting that people might want to buy a present for their mother, which on the surface seems like a nice, simple gesture, but it’s possible that might be offensive, or even painful, to someone.

It’s a work in progress. If we’d had feedback – if we’d seen that people talking to it really didn’t like that question – we could have stopped it asking that question. But on the face of it – just to get the thing out there and build that familiarity [with bots] – we have to take those sorts of risks. It’s not a hugely controversial thing to ask, “Do you want to get a present for your mother?”, because most people are open to getting their mother a gift. All these questions can only be answered through use, more deployments and installations, and feedback – and where something causes controversy, we’ll adapt.

With this whole problem of AI bias, the real solution and answer will be – and is now – having control over it. If there were a consensus that an interaction was causing widespread offense or issues, and you have control over the algorithm, you take it out. The current problem is with algorithms like those running credit checks in finance, which racially discriminate against people of colour – African-American citizens aren’t getting loans. Algorithms with racial bias are also already being used in American court systems, predicting who is likely to be a repeat offender. The problem with those algorithms isn’t the algorithms themselves; it’s that the people who run or own them haven’t deleted or edited out the offensive parts that are producing false outcomes.
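A toy sketch of what that kind of operator control might look like – the complaint threshold and prompt names are invented assumptions:

    COMPLAINT_THRESHOLD = 25  # invented cut-off for retiring a prompt

    prompts = {
        "gift_for_mother": {
            "text": "Do you want to get a present for your mother?",
            "complaints": 0,
            "active": True,
        },
    }

    def log_complaint(prompt_id):
        # Every piece of negative feedback counts against the prompt that caused it.
        p = prompts[prompt_id]
        p["complaints"] += 1
        if p["complaints"] >= COMPLAINT_THRESHOLD:
            p["active"] = False  # the operator's kill switch: the question stops being asked

    def active_prompts():
        return [p["text"] for p in prompts.values() if p["active"]]

    for _ in range(COMPLAINT_THRESHOLD):
        log_complaint("gift_for_mother")
    print(active_prompts())  # -> []: the contested question has been switched off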

 

Reflecting on one of the use cases you present in your new book, “Uses and Risks of Business Chatbots”: part of me loves the idea of having a neutral, intelligent helper guiding me through a complicated online bureaucratic task like tax forms, while part of me remembers those moments of feeling the artificiality of interacting with chatbots. I would always like to know (a) when I’m interacting with a bot, and (b) that I have an opt-out option – can I talk to a human?

That’s a real issue, and we need a code of conduct, or even an ISO standard, once you get to a certain scale. If you’re not just creating social media bots and there is widespread commercial use, I think there has to be a code of conduct where people can opt out and where a bot identifies itself. It also depends on the task it’s being used for. Bots aren’t being widely adopted by purchasers because of all these fears, even though a lot of younger people – Gen Y, Gen Z – really want to use them to ease bureaucracy, as in those use cases you mentioned. If I can get my bank loan or do my taxes and all that routine, mundane stuff that is fairly tree-structured – “yes”, “no” – if that can be used to make my life easier and reduce bureaucracy, great. It’s a fairly neutral, straightforward process. As a woman in business, I would have loved to deal with a chatbot rather than some of the bankers in my past; there’s something to be said for neutralizing, levelling the playing field. A code of conduct for the people running the bots or the personal assistants would be a way to control the interaction, so that people aren’t forced down a route where they have no option and aren’t quite sure what level of decision-making that virtual assistant has over them.
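Those two rules – self-identification and a human opt-out – are simple to express in code. A minimal sketch, with invented trigger phrases:

    OPT_OUT_PHRASES = ("human", "agent", "real person")  # invented trigger words

    def handle_turn(user_text, first_turn=False):
        if first_turn:
            # Rule 1: the bot identifies itself before anything else.
            return "Hi, I'm an automated assistant (a bot). Say 'human' at any time to reach a person."
        if any(phrase in user_text.lower() for phrase in OPT_OUT_PHRASES):
            # Rule 2: the user can always opt out to a human.
            return "No problem - connecting you to a human colleague now."
        return f"(bot answer to: {user_text!r})"  # stand-in for the real dialogue engine

    print(handle_turn("", first_turn=True))
    print(handle_turn("Can I talk to a human?"))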

 

Are you advocating for that?

Yes, I looked in depth at what’s happening here in the UK in my book: the Department for Work and Pensions is using chatbots – this was before the pandemic – to accelerate and fast-track unemployment benefit applications. There has been a big reaction around a concept called the ‘digital poorhouse’: if the most vulnerable people – the unemployed, or disabled people trying to get their pensions – are being processed by this non-human, is it going to confine them to a digital poorhouse? There was a lot of media discussion against the chatbot at the DWP, but I think now – as I argued in my book – there is something to be said in favor of being processed by a chatbot: you get a level of neutrality, and you won’t be judged for being in that vulnerable state. In theory, if they follow the code of conduct and deploy the chatbot in a fair and reasonable way, it could process your application ten times faster than is currently being done during the pandemic, when staff are overloaded and the caseload is far too high for the human staff to manage.

 

If you don't know how your bot arrived at certain answers or conclusions, how can people trust this technology with their important decisions and how can you protect yourself?

From the liability side, it’s about setting the parameters: we don’t allow it any decision-making. For example, with Birgit am Bodensee, we’re not giving her any powers to decide whether people are entitled to buy a certain ticket for the ferry or to get certain entrance tickets for the cinema. If it has no power over your actions, nothing can really go wrong, in the sense that it’s merely there for conversational AI and giving information – information that can’t really go wrong, because it’s confined to a certain topic. One of the most useful applications is connecting it to the public transport timetables in real time – and during a pandemic, when services run irregularly, that’s really handy. Sure, you could look it up on an app, but people famously use only a handful of apps on their phones – so do we stick with printed timetables, or do we make the public transport network come alive through voice? You just go and talk to the nearest hologram: “Hey, when does the next bus or train or ferry go?”

The only thing that could go wrong is it giving you the wrong train time – there’s no risk of it saying or doing anything offensive.
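A minimal sketch of that lookup, with a static list of departures standing in for the live feed from the transport operator (the times and route are invented):

    from datetime import datetime, time as clock

    # Invented departures; in production this would be a real-time timetable feed.
    FERRY_DEPARTURES = [clock(9, 0), clock(11, 30), clock(14, 0), clock(17, 45)]

    def next_departure(now=None):
        # Answer "when does the next ferry go?" from the current clock time.
        now = now or datetime.now().time()
        for dep in FERRY_DEPARTURES:
            if dep > now:
                return f"The next ferry leaves at {dep.strftime('%H:%M')}."
        return "There are no more ferries today."

    print(next_departure(clock(12, 15)))  # -> "The next ferry leaves at 14:00."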

 

One question that’s been raised is about our human interactions – the ones we think of as banal now – those simple chances to ask someone for directions. Again, Wojtek Materka had something very interesting to say in his blog post:

“The potentiality to connect with strangers and relate with them over simple tasks. Those innocent and banal connections build up to something much more significant. All human conversations, even the most menial ones, affect the value system we constantly evolve.”

Absolutely. I’m a big believer in community mental health, in having more social interaction and less screen time. Go into any cafe and everyone’s on their phones; no one’s talking to each other. The issue that quote addresses is quality interactions between humans, right? So, first, you have to wean all these humans off their devices, which is what we’re trying to do with voice; secondly, quality human interactions need to be regenerated and given space – and that is part of our philosophy and our liberal democratic approach to everything we do. We want people to start talking to each other in a meaningful way.

For example, if you’re working at a tourist information point, you’re sitting there answering 100 times a day: “This is where the toilets are, around the corner.” “This is where the ferry leaves from, over there.” You’re almost like an automaton anyway – it’s FAQs on turbocharge. And that, we find, is dehumanizing.

So you take away that job description and say, “OK, we’re going to pay you to sit there and be nice to people, to create a good atmosphere and make people feel better about themselves and the world.” For one thing, there are probably going to be hundreds of jobs for psychology graduates and social welfare workers. I think this is where the pandemic is leading a lot of people to challenging questions right now: about how we are interacting with each other, why we’re not, and what we are talking to each other about.

 

Where do you envision your style of bots being used over the next two to three years?

We’re doing another bot in Munich in parallel [with Birgit] in support of pharmacies and hospitals – again a new development we’ve been working on because of the pandemic. Staff at hospitals and pharmacies are completely stretched, and will be even after the pandemic. We want to install a bot called Goethe Jr. there, and another character called Hildegard of Bingen (because of the historic figure’s natural remedies and advice on building up the immune system). At the point of sale they are a real help, because they reduce contact between the human staff and the human customers while the contagion is still going and staff are overloaded with long queues. The customers aren’t having a great time either: maybe they just walked into the pharmacy to buy a special shampoo, but they’re forced to stand in a queue, only to say after 20 minutes, “Actually, I don’t have a prescription, I just want to buy a shampoo.” The bot is going to talk to people and find out: “What are you here for today? Can I help?”

I’m calling it queue management: “OK, if you’re not here to get a prescription, go over to that side of the store; if you’re here to drop off a prescription, just leave it here with me.” We’re going to have a box there on the stand. The customer can go off, and they’ll be notified by text message to come back and get their medication. So that’s going to change the everyday, routine, mundane, horrible parts of picking up a prescription or buying one or two items – during the pandemic and after it. We have a few other verticals: we’ve been asked to do everything from bots for manufacturing to promoting cinemas (once they open again). If anything, we’ve got more work to do – more jobs we’ve been asked to take on and more and more use cases – because of this anti-viral element we’ve added: replacing touch with voice. We’re advertising that now in various channels as a third-party service, so all the banks with ATMs may want to ask us to replace the touchscreen – which could potentially transmit the virus, or any future virus – with voice. We’re also offering it for vending machines: think about all the coffee vending machines – in a hospital, in a big office block – just gathering germs. We’re offering it as a white-label service: anyone with existing vending machines, or ticket machines at train stations and airports – we want to botify all these machines and put voice in them.
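The triage logic she describes is essentially a two-way branch plus a callback. A rough sketch – the ticket format and notification hook are assumptions:

    import uuid

    def register_callback(ticket, phone_number):
        # Stand-in for an SMS gateway call, fired once the pharmacist
        # marks the medication as ready for collection.
        print(f"[QUEUE] will text {phone_number} when ticket {ticket} is ready")

    def route_customer(dropping_off_prescription, phone_number=None):
        # No prescription: skip the queue entirely.
        if not dropping_off_prescription:
            return "Please go over to that side of the store - no need to queue."
        # Prescription drop-off: issue a claim ticket and promise a text-back.
        ticket = str(uuid.uuid4())[:8]
        register_callback(ticket, phone_number)
        return f"Leave your prescription here with me. Ticket {ticket} - we'll text you when it's ready."

    print(route_customer(False))
    print(route_customer(True, "+49 170 0000000"))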

 

What is your fundamental argument for why we need bots and how they will enrich our lives?

We’re talking with a museum right now – it’s not just about putting in a display, making it interactive and making the experience of visiting the museum even better; it’s also – going back to the whole question of user interface, gender representation and avatar design – that we want to revive historical figures like Hildegard of Bingen, a mediaeval feminist from German history whom a lot of people have simply forgotten, which is a shame. We’re also looking at rehabilitating Nietzsche’s reputation: he was actually against anti-semitism and against racism, and then he was co-opted during the fascist years. We want to go back and revive historic figures because of the analogies, the metaphors and the lessons they can teach today, when there is a resurgence of populism and hate speech, and of wrecking people’s reputations for the sake of ideologies. So there are some pretty big-picture things we can see being achieved with quite simple commercial applications.

It’s not just a marketing gimmick, as people often think, or another chatbot that’s trying to automate things and potentially grab your data. If it’s done properly, under GDPR they’re not allowed to grab your data: for any data that’s taken, they have to ask you, “Can I take your data for this poll or this marketing?” That leaves us room to play with cultural interventions, reshaping and redesigning society. And I think in many countries we’re at that point now because of this pandemic: people want something new, and to rethink how we’re treating each other. Are we treating each other with respect? Do we have a vision, and a balanced sense of where we’ve come from as a society? Have we forgotten important cultural figures in our history who should be present with us today, because they could teach us important lessons about trust in government, good governance and running a fair and proper health system? These are big questions – and coming from the humanities, I’m obviously going to give you this answer: the humanities should never have been excluded from commerce; these concerns should always have gone hand-in-hand with it. The values we want from society and good governance go hand-in-hand with trade and commerce – and that’s even more important in the digital age.
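As an illustration of that consent-first principle, here is a toy sketch (not compliance code – the field names and accepted replies are invented):

    class ConsentGatedStore:
        # Nothing is recorded until the visitor explicitly says yes.
        def __init__(self):
            self._records = []
            self._consent = False

        def ask_consent(self, answer):
            # 'answer' is the visitor's reply to "Can I take your data for this poll?"
            self._consent = answer.strip().lower() in {"yes", "ja", "ok"}
            return self._consent

        def record(self, field, value):
            if not self._consent:
                return False  # no consent, no data - the request is simply dropped
            self._records.append((field, value))
            return True

    store = ConsentGatedStore()
    store.record("language", "de")   # ignored: consent not yet given
    store.ask_consent("yes")
    store.record("language", "de")   # now recorded
    print(store._records)            # -> [('language', 'de')]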

 

Listen to the complete original recording of this interview on our ‘Meet the Experts’ podcast.

 

