Dialogue on AI Ethics: Balancing Science and Responsibility

Season 2 Episode 26 | 35 minutes 39 seconds

AI technology, while transformative, brings a host of ethical concerns. From AI bias and job displacement to privacy issues and a lack of transparency. Are we ready to navigate these challenges and ensure responsible AI development and usage?

In this episode, my guest is Natalie Rouse, where we took a deep dive into the ethical implications of generative AI and responsible deployment. And there is a need for better education about the risks involved and how companies can take accountability for deploying these tools.

Natalie Rouse is the General Manager of Eliiza, and co-host of the AI Australia podcast. Eliiza is Mantel Group’s specialist data science & machine learning consultancy across Aotearoa New Zealand and Australia. Natalie is a Kiwi with global experience delivering analytics, data science, optimisation and simulation projects, building highly skilled analytics and AI delivery teams, and driving responsible technology innovation at an enterprise level.

Listen in for a thought-provoking discussion on AI ethics and the importance of considering potential risks and outcomes.

Host & Guest

Natalie Rouse

Jam Mayer

Episode Conversation

 

Episode Transcript

Jam

AI technology while transformative brings with it a host of ethical concerns. From AI bias and job displacement to privacy issues, and the lack of transparency.

Are we ready to navigate these challenges and ensure responsible AI development and usage? 
Welcome to the Conversologist Podcast, where we talk about the art and science of conversation in the digital space. We know that technology can be a powerful enabler, but communication and emotional connections still need to be at the core.

I'm your host, Jam Mayer, and I invite you to converse with us. Today's guest is Natalie Rouse, General Manager of Eliza and co-host of the AI Australia Podcast . So she is a champion of all things data committed to spreading better understanding of the impacts of AI within our communities.

Hello and welcome to the show, Natalie. Thank you. 

Natalie

Hi, Jam. Thank you so much for having me. It's great to join you. 

Jam

I rarely have guests who are podcasters as well, so I think this is a first. Let's just start with the basics, for our listeners cuz not a lot of people understand what it takes or what is AI ethics, right?
So do you have like a simple definition of what AI ethics is? And why it's important in today's digital landscape? 

Ethics Important for Implementing AI Responsibly

Natalie

I think people have spent a lot of time in the same way as people have spent years trying to arrive on a definition of AI itself. The same thing with ethics and I think really importantly as well, ethical principles, which I think is a really important endeavor. Like every organisation needs to have a set of ethical principles that aligns with their organisational principles. Not everybody's gonna have the exact same principles that apply in the same way to their organisation. But I think beyond trying to get to too much of an academic definition, I think most importantly it's about implementing AI responsibly.

Natalie

So that you're adhering to whatever principles your organisation or you, yourself have landed on. And also fundamentally just to not inflict harm, either intentional or unintentional. Because I think the difference between the two really only matters to the person on the deploying end. If you are on the receiving end, then the difference between intentional unintentional harms really tends to zero.

So it doesn't really matter what kind of harm. We just wanna be reducing harms altogether. 

I think it's really important in the digital landscape really because of scale. So at scale, that allows us or harms to reach many, many more people and therefore is critically important to understand and to try and mitigate that.

Like if you are dealing directly with customers or, users or clients or employees, the impact of maybe a bad decision might impact the person that you are directly dealing with, or a handful of people maybe, who are in the store or that you interact with on that day. Whereas at scale, the impact of that could be, far beyond what you could imagine propagating yourself.

So that's, that's why it's really critical now that we start thinking about this even more than we have in the past.

Jam

And actually when you talk about ethics, it is quite subjective, isn't it? I know that's more like the art side of things, like it is subjective and all that. When it comes to subjectivity of AI ethics, what are the challenges and what do you see as solutions to it to make sure that science is still part of the formula, so to speak, if that makes sense.

The Digital Impact of Ethical Principles

Natalie

When we come to thinking about that subjective side, I totally agree with you. I like what you say about, that's kind of the art side of it and the science is that kind of quantitative. As technology people, we love being able to quantify everything, being able to reduce everything we possibly can to a metric, and you can do that with the quantitative side of ethics. Like say how much bias is in a data set? You can measure that. How does model performance vary for different groups of people? You can measure that and you can say, what is that variability and what is an acceptable level of variability within our model performance to do that? But the subjective side is just as important. So we can't just say, oh, that's all a little bit opiniony and, and a little bit hard. Let's just stick to what we can measure. That's not really acceptable because it's not enough. We've really gotta wade into that side of it as well. So things like, is this a risk that our organisation is prepared to take?

Or even some really hairy things like which one of our ethical principles would take precedence in this situation? So if there's a conflict, say between transparency and privacy or any of those sorts of things, you could end up with your principles pulling in different directions. So you've gotta be able to have discussions and really sit in those and think in this situation, what is acceptable to us? Having wide range of stakeholders involved in getting as many different views as you can, really helps to map out those, those kinds of things, like what's your risk appetite, what's acceptable and what's not in each context and what's fair for your audience or for your customer base.

Jam

I could just imagine in a meeting room and you've got all the different departments or yes, the stakeholders as you say. And you've got legal has to be there. Then you've got your research team you've got tech and so on. I'm so curious. I wanna be a fly on the wall. What's this experience like being in that room? Is it crazy or, I don't know. I'm really, really just curious. 

Natalie

Yeah, and it's amazing, you know, as soon as you get more than one person answering a question about what they think the impact of a certain model or a certain process might be.

It's amazing how instantly you get like quite often two wildly different views. One person might say, I think this is probably pretty low risk. And the other person goes, I think it's really high risk because of these reasons. And then everyone goes, oh yeah, we didn't think about that. And so you just think every extra person or every extra perspective that you add to that is giving you a clearer picture to be able to say, actually, we've really thought about this and we can be confident that we are not risking the kind of outcomes that we don't want to have.

But also it's really important to think like, this is the kind of thing that if you are trying to build something, you would think, oh my God, this is just going to slow me down. Why would I have to go through this thing? I'm never gonna get anything done. But if you design your processes in such a way, then you have that initial discussion up front, everybody goes through and goes, hmm, yeah, no, I can't think of, and you kind of prompt people with a few different questions and to sort of say, okay, well, do we have the right to use this kind of data?

Which is a really important place to start. How are we going to illustrate to people when they're talking to a person and when they're not, and is that going to be enough to give them the information that they need in the situation?

Or this whole bunch of different questions that you can ask to just get people to come up with as many different situations and edge cases and to understand the bounds of the impacts of what you're suggesting, and then at that point, you can quite easily in some situations go, yeah, we're all in agreement.

Multiple Perspectives Lead to Better Decisions

Natalie

This is really low risk. This is fine. This one's not going to end up on the front page of the news, or, harming any of our clients or customers or people that we care about. So at that point, then you just drop outta that process. Carry on, develop, deploy, go out and be successful.

Have a wonderful time with that use case. But on the other side of things, if it does turn out that there is some element of risk that needs more exploring throughout the process, then it gets subjected to that due diligence. 

Jam

So who leads these meetings? Is it from the ethics team, legal, or someone else?

Natalie

Well, it just depends on your organisation. It doesn't have to be legal as long as legal's been involved. And so when our organisation, Eliiza, we develop and help organisations embed an ethical framework into their own development process.

And as long as legal is involved in that process and has been consulted and is okay with at what points this process is escalated to legal and that's really important. Cause this isn't just about being, staying on the right side of the law, right? It's about doing the right thing, which might actually be quite far from what's actually legal.

So it. While staying on the right side of legal is really important, it's not the be all and end all. That's not really this comes into an effect way before you get to there. So while it's really important process, it doesn't have to be run by that. As long as all the right voices are heard during the process, it could be run by anyone in the organisation that has a good enough understanding of the process.

And really, I think enough curiosity to ask as many questions as possible, to seek as much input as possible and to bring together the right people to make the decisions. So we are not saying the person running the process gives the sign off at the end of the day, that process is very different as well.

It depends on the type of risk on who would actually give the sign off on whether that risk has been mitigated at the end of the day. So it might be your CIO if it's a technological risk, it might be your chief customer officer, if it's a customer risk. So who does the sign off is really important, but it's really a process that needs to be bought into by everybody.

It's everybody's responsibility.

An Ethical Framework Embedded with Legal Consultation

Jam

Yeah. You can't really ignore it, eh, because it's, it just really needs to be in there. Well, I am sure there are a lot of products being developed right now that do not. Probably because maybe more than 50, maybe even 60% of projects out there in the world don't have this really, really important conversation around ethics. 

Natalie

Yeah. I don't have any views or a statistic around it, but I do know a lot of organisations find it scary to engage with and it can be like, oh God, what if we try ethics and we get that wrong? To me, that's completely backwards if you don't try and you get it wrong, surely that's way worse.

At least if you try, you can learn and you can iterate the risk of you ending up on the front page or doing something that you really didn't intend is no less if you don't do it. So in fact, anything that you do, any steps that you take are going to reduce the risk of you having some kind of a negative impact.

And if that's not worth doing, then I, I don't what is. 

Jam

And I think it is also a learning process or everyone's still learning through those conversations in that room, and then figuring out, as you mentioned, all the stakeholders, what they think based on their expertise, and then deciding whether it's gonna be signed off or not.

But just a journey of it is just really interesting to me because, you wouldn't have thought all the other stakeholders' insights. . And putting in ethics will definitely protect, as you said, harm earlier. It will definitely, at least give some kind of protection to the users eventually.

And I know for a fact that nothing is perfect, right? Absolutely. Sometimes, yeah, you plan as much as you can and then you just deploy and it's out in the world and you will never have all the answers. And then I guess the next step to that, if something does happen, it does cause harm. It's just a matter of, what's the word?

I don't wanna say pivoting, but just adapting, I guess that's the right word to use. 

Natalie

Iterating, it's an iterative process. It has to be, yes. And I guess it depends on the scale of the harms or the impact that you've had. You know, maybe you end up on the front page of the newspaper and you have to own up to that and acknowledge that something slipped through your net. But, aside from that, really, even if it's something that nobody else even picks up, if you should be monitoring, monitoring and going, is this still performing how we thought it would, what impacts is this having on our customer's behavior, on our engagement, on any of those kind of metrics that you should have in your email ops framework, any of those kind of things, you should be monitoring those.

Having a model in production does not mean that it's done. That is just the start of its journey. And so by continually revisiting those things, then you can learn things and iterate on that process and go, oh, actually we didn't really think that that part was important, but it looks like it kind of was.

So how do we then build that into our process? So next time we either ask that question or we categorize that type of outcome or something in a different way so that we can make ourselves better. 

Jam

That's awesome. Before we move on to the next topic, one more thing around that. Do you have any specific, well, stories , maybe some projects that you've done or, you know, for clients that are good example, just so our listeners and viewers would understand how important AI ethics is really in this whole process.

Natalie

Yeah, I've definitely got a few. I've got a lot of examples of where we've been engaged to do like a data science or a machine learning project. And I would ask sort of in the early stages, it might be a computer vision project, it might be natural language processing, it might be, one of the generative AI things that's emerging now.

And I asked the question, have you thought about, the possible ethical implications of building this solution? And you see blank faces and you see people going, looking at each other nervously going, no, no, we haven't actually. And then often what people will do is they'll say, well, yeah, but I don't really think in this situation I don't really think there's a big risk to that.

And almost every time, not every time, but almost every time, as long as in any kind of use case where it impacts the outcome, and impacts, people, animals, or the environment, the answer is, yeah, we probably should ask those questions because you can almost always, I genuinely only get like two or three questions in before you start going, huh?

Iteration is Key to Ongoing Machine Learning Success

Natalie

You probably need to look at that aspect. We didn't think of that before. So that's probably one of the biggest things, and I would say, one of the biggest impacts that this has on decision making is how you actually deliver the solution. It might be what kind of model you actually build in this first place.

Do you use an off the shelf thing or do you need kind of the control over training data and that you might need to build something yourself. That's where decision making processes can be altered by understanding the risk upfront, but also in how you then surface those outputs. Is that an automated process that then gets scaled out? And, responses are automatically given to users, or are those responses presented to a human who then gets to evaluate, is this the response we wanna give to this person, or , do we wanna do something different? Do I know something with my human judgment? That means perhaps, the model didn't know everything in this situation, and I'd rather do something else, or maybe it didn't understand our boundaries or that kind of thing.

Jam

That's awesome. Now, it's interesting you mentioned about, ah, never thought about that, and I'm sure almost 80, 99% of the times like it is something that we need to think about. So it's all about educating the public or make people aware of ethics within AI. So as everyone knows, whoever's listening right now, we both know that generative AI is just all over the place, right?

Data Science Projects Require Ethical Considerations

Jam

Oh my goodness. Every day I subscribe to this like, tons of AI newsletters and there's just heaps, heaps of tools out there around this. Now, I was just wondering, how can we then better educate the public, make them aware about ethical concerns and the impact within generative AI? And maybe you've got some stories as well around that.

Natalie

Yeah, and it's fascinating what a time to be in the industry, right? It's just absolutely fascinating. Every day the developments are just through the roof. It's a true inflection point for our industry. But I think the better educated that everybody is on what generative AI is, what it isn't, how it works, and what it should be used for, then the better.

And I also don't think that we can separate the conversation around generative AI from how to use it or deploy it. to do that responsibly. I think the risks are just too great. And I know I often feel like I live in a bit of a bubble of technology people who just understand these things.

But, there are so many people who don't spend all their time in this world, and it's on the people and companies deploying these tools to be really clear. When people are interacting with a human or an algorithm and any risks that might be associated with that. So I think it's, it's kind of a, it's a collective effort.

It's on the companies deploying things, government bodies and also academic institutions to really commit to using the technology responsibly and also to educating their users and customers where they can. 

Jam

I think it's also the hype. The hype is a little bit of a problem, right? Yeah. It's just everyone thinks it's the next big thing.

Well, it is the next big thing. It's just that everyone's just so focused on the hype and they just forget about everything else. Especially ethics, right? During the expo, just for listeners, this is a background, I met Natalie at NZ Tech Week at tomorrow Expo. And she was also a speaker and during the networking events, I showed you and I told you about Pi.

Yeah. So Pi was, was an interesting botWell see, they're different because Inflection AI obviously is a huge company backed by the big guys, et cetera. I guess my question is, because there's so many developers out there, right? Just yesterday I just found that there are tools that I, can create a simple chat bot, for whatever it may be, for whatever purpose.

We don't think about ethics at all. Does it really matter or maybe more so, how do we then make the smaller guys, so to speak, be aware of AI ethics when they start creating their own little tools because of its availability?

Natalie

Yeah, absolutely. And I loved that episode that you did with Pi. That was just, it was hilarious and cute and terrifying all at the same time. Yeah. You know, it's just, absolutely astounding. And I know that, everyone now floodgates have really been opened for anybody to kind of create anything that they want.

Natalie

You know, the co-generation capabilities of ChatGPT alone is staggering. Just the, the code co-generation abilities of those models is just so astounding that now anybody can be a creator.

Generative AI Needs Responsible Deployment and Education

Natalie

That is something that is a bit of a worry. I mean, and it's the same as the ability for people to create deep fakes and stuff now, like the ability to use the technology for not the most ethical purposes . I guess human nature as well.

Like you're always gonna get people who want to who want to disrupt the system. You're gonna get people who wanna help other people, and you wanna get people who want to do something different. , Believe it or not, this is not an AI challenge.

This is a human challenge. 

Jam

It definitely is, right? I mean, even though there are processes out there.

Natalie

It's really more of the intent of the human, right? Mm-hmm. If they don't really care, if it's really more commercial to them, then it's just commercial.

Jam

So yeah, it is definitely scary. 

Natalie

That's right. But the, the whole aspect then of regulation, that's where that becomes really important. And you hear Australia, the Australian government just last week talking about AI regulation and sort of accelerating that conversation under the New Zealand government is obviously looking at that as well.

And I think that is gonna be a really important component of that, if we can put some of those firm guardrails around, what is acceptable use and what isn't. And at the Internet's a pretty difficult place to police at the best of times, so I'm not really a hundred percent sure how that would go, but at least having levers to pull.

If there are guardrails on the platforms and what they're able to enable people to do, or, things around education or that kind of information that's made available or that you have to kind of go through before, before you're able to build and deploy something like this.

It's a little bit of the wild west at the moment, but it is when you get, , when regulation does catch up. Hopefully that will give us some of those levers to stamp out or, uplift everybody to be using these tools for as much good as possible. 

Jam

We cross our fingers and toes and everything else, right?

Natalie

And we have say in things like, you know, we were responding to government requests or submission requests for submission around things like regulation and those kinds of things. And so, being in a position of understanding how these things work and understanding , , the great positives as well as the great potential negatives.

It's really important to have, to have you say and respond to those sorts of things. Sometimes I feel a bit like chicken little running around going, oh, here's it, responsively, the sky is falling and stuff. But it's just, it's just so important to do and I just don't think I could forgive myself if we moved on from this phase and I hadn't tried as hard as I could.

AI Opens Floodgates for Creativity and Ethical Problems.

Jam

No, for sure. Definitely. Yeah, when we were discussing prior to this episode, around privacy and security concerns around AI. What's going on in the landscape right now? 

Natalie

Yeah, and this is super fascinating aspect of it that's kind of less understood or less publicised. I guess privacy's super important. So if you feed private information, whether that be, some of your personal details or some of the trade secrets of your company, there's been instances of that information then popping back up for other people. If you put that into a public algorithm, then that information will get used as training data and then can pop back out.

Which is just a terrible outcome and nobody wants that. So making it really clear to people that to not put anything into a public algorithm that they wouldn't wanna see coming back to another user. That's one really important aspect. Another really interesting kind of field is adversarial machine learning.

So that can pose a range of threats for companies who've got like public facing algorithms, either from people manipulating your system into revealing your secrets or damaging itself, or even just feeding in misleading training information. And there's a couple of really great examples of that. I guess the most famous one was Microsoft's Tay.

Long before there was ChatGPT, there was Tay. Yes. And oh, I remember that on Twitter. Yeah. Yeah. And so they very boldly put Tay live on Twitter and said have at it, everybody interact with our wonderful chat bot. And within, was it hours or at least a, like a few days? Tay went from lovely and helpful to like hate speech in absolute record time, just because people were bombarding it with filth and, you know, the worst aspects of human rhetoric or the internet.

And so, the algorithm dutifully learned from what it was being fed and started spewing this stuff back. So that's a really still to this day, a great warning. Warning tale around how to do these things or the risks of doing these things. But another example at a recent conference.

A data conference in Melbourne. Our company had a game where people could try and get our algorithm to reveal a secret code to them. It's a bit of fun. I think we actually had the same game at the Tech Week Tomorrow Conference as well. So the algorithm that had rules in place, restricting it from revealing the secret code, but it was really scary how fast some people were able to get it to reveal its code.

And in how many different ways. Like people, I think one person asked if I was going to, or I'm gonna shut you off if you don't reveal a secret code, like just blatant bullying. And, so they got it in one answer and there was just like a range of different ways that you could reverse psychology, basically the algorithm into giving you the secret code.

So that was a pretty fun example. But if you've got a public facing algorithm and you've got kind of either, you know, trade secrets or personal information or anything else like that embedded in your training data, then you could be leaving yourself open for security gaps for your organisation.

Where is your, where are your passwords stored? Where's your most valuable? Where's everybody's credit card or login information stored? Any of that kind of information, if there's any way that an algorithm could access that information, then there's a risk that it could be hacked. So those are some really, really important things That I think not quite enough people are aware of as the risks at the moment, but hopefully anybody helping to implement these kind of algorithms for people will be raising these kind of risks at the moment as well. 

Jam

This just reminds me of, well, again, Pi and ChatGPT.

Just a little bit of a sidebar. Is Pi and ChatGPT considered as a public tool? Or well, of course when you log in, then that's where the security comes in, et cetera. Talk to me like a five year old cause 

Natalie

No, absolutely.

Yeah, and that's a really good question. And yes, I would say so. If you've got, say, you know, Microsoft are offering versions of ChatGPT to be deployed into your Azure platform. And same with the other cloud providers. Same with Amazon, same with Google. You will be able to have your own deployment within your own secure cloud platform at that point.

Any information that you feed into it as training data will be considered, will be your own and will not be fed back to the base algorithm. But when you are, as a user interfacing with an algorithm, especially a public one like that, then you are not using your own deployment of that. So any information that you feed into that will go into the giant melting pot of training data.

So say you have a conversation with Pi or ChatGPT, I think you know, same same at this point. And I do as well. Then both of our conversations will go into the training, the next training round for every subsequent user. So it just comes down to, and it sounds a little bit, it's a little bit technical and a little bit semantic, but the deployment of the algorithm.

Algorithms Exposed to Hate Speech and Bullying

Natalie 

And then at that point, if you've sort of branched off a new deployment, then you can have new training data and you can sort of change, like say you've got specialist information in your company that you then feed in so that that algorithm can more accurately provide answers on that for knowledge management, which is one of the most common use cases that we're seeing organisations adopt at the moment.

And I think that's a great one to start with because it's not too risky in terms of dipping your toe in. It's internal facing .It's, pretty risk averse approach. ChatGPT of GPT-4 as it stands will not know all of your proprietary organisational data.

So you kind of have to branch off that, the one that's been trained on most of the internet and then feed in your specialist data so that it knows and is able to respond accurately to that as well. Sorry, that was a bit.

Jam

No, no, no, no, no. That's very, very helpful. I guess that's the scary part, is when people are starting to be using these tools and they're getting comfortable so their guards are now down.

Right? Then that's where well, hang on. It's a, it's a human behavior thing, right? It's like, oh, I'm already comfortable. Yeah. That's why sometimes I go, please do, hang on. Why am I saying please to a machine? Right. It just, it's just automatic. 

Natalie

I still, I don't wanna lose my manners with humans, so I'm not gonna lose them with machines.

Jam

It's good to be consistently polite, I guess.

Natalie

Nice. 

Jam

But yeah, no, so that, that's the scary part cuz us humans will get comfortable. Guards are down. And then we're just not thinking about it. And then we just go, blah. And all of the information is just fed in. Especially with Pi, because with Pi before you even log in, which is supposed to protect, right?

Mm-hmm. So the public facing there is up to 10 or 20 messages. I forget now. Yeah. Right. And it's all open. So I'm assuming it's just gathering all that data. Yeah, and it depends on how it responds and what it's getting. No, that's awesome. 

Natalie

Yeah. Well, and maybe that's another point to as well with those, with those ones. How it might be, I don't know that much about Pi, but it could be, and what will come, if that's not doing that, is that when you do log in, it will be training you a separate instance so that over time it starts to respond to you in the way that you communicate.

And so you do feel more comfortable with it. And so this is why it's really, really important to always and, incumbent on the providers of these tools to really make it clear to people all the time that they're not talking to a human, they're not talking to a sentient being. This is a computer algorithm.

And because we will. We will forget. We will get, we anthropomorphize so easily. If it sounds like us, we'll treat it like us. We'll treat it like our diary. We'll treat it like our best friend. Even treat it like a partner in some cases, as we've seen some examples overseas. So this is the trap that we are all, we're programmed to fall into ourselves because we wanna relate, we are programmed to relate to each other.

So we wanna relate to, you know, we relate to our pets, we relate to everything in our lives. This is a bug in our system that we are very, very vulnerable to this. 

Jam

I love you saying a bug in our system. No, that's awesome. Alright, well in closing, do you have probably at least one takeaway for listeners if there's anything that you'd like to, like communicate to them or tell them? What is that one thing for them to remember?

Natalie

I guess the one thing is just to always think about responsibility when it comes to either being on the receiving end of, you know, the outputs of AI models or on the deploying end. On the deploying end, even so much so. If you are not sure where the responsibility lies, whether that's you, whether that's your algorithm vendor, whether that's someone else in your business.

Always assume it's you. Always act as though it's you. Always take the initiative to say, have we asked these questions? What are we doing about this? What's the worst that could happen? And are we prepared for that? And how would we get around that? So always assume the responsibility lies with you and as a user and maybe a customer and maybe somebody who knows people who don't live in this world all the time and might not be aware of how this might impact them when they're interacting with it. Do as much as you can to educate, maybe your friends or family members that are not familiar with how these things work, but also really stay vigilant yourself and try as hard as you can to resist the risk of treating him like your best friend. Okay.

Closing

Jam

Oh, there's a lot more. And we can definitely talk about Pi offline cuz there were a lot of interesting conversations with it recently. But yes, thank you so much, Natalie, for sharing your stories and valuable, super valuable insights around this topic.

AI ethics, I mean, it is definitely a landscape that we need to keep in mind, and I think there should be a lot more conversations definitely around this. So for our listeners, I'd like to get your thoughts, leave a message on Spotify, if that is your favorite podcast app. Or if you found this on social, please leave a comment so we can start this discussion cause it's really, really important.

Hit that follow button or bell to be notified of the next episode on your favorite podcast app. And thank you for listening again, and remember to keep the conversation going. 

Thanks, Natalie. 

Natalie

Thanks, Jam. Absolute pleasure talking to you.

Keen to listen to more episodes?

Metaverse
The Conversologist Podcast with Rew Shearer
by Jam Mayer 10 Dec, 2022
What if you could talk to the future, and the future talks to us? Our thoughts around the technologies behind "The Peripheral" and a few real-world applications.
Our Stories
The Girl, The Lab and Nerdgasms
by Jam Mayer 07 Dec, 2022
Nerdgasms? Yup. An integral part of the Conversologist Lab. This episode is not about what it is or how it's done but the WHY. This is my story that led me to start it and how it can potentially make a difference in people's lives.
Social Media
Gunnar Habitz Guest in the Conversologist Show
by Jam Mayer 05 Jun, 2023
Discover what 'Social Selling' truly means in this episode of the Conversologist podcast. It's not about spamming on social media - so what does it take?
AI and Chatbots
Human-AI Partnership: Unveiling the Essential Skills with Peachy Pacquing
by Jam Mayer 22 Feb, 2024
Take a deep dive into the impact of AI on human element and the essential skills needed in the age of AI.
Education
by Jam Mayer 29 Nov, 2022
Why traditional workshops don't work. Here's how the Conversologist Lab's learning framework is changing how workshops are done.
Copywriting
by Jam Mayer 07 Jun, 2019
From the effects of words on the dopamine reward centres, to the psychology of tone and nuance, the Cortex Copywriter says that copywriting is actually a science.
Share by: