Leadership Espresso with Stefan Götz

AI Won’t Replace You. But It Will Change What Makes You Irreplaceable! +++ feat. Lewin Keller, CEO CoachBot

Stefan Götz

What happens when every employee has multiple AI agents with PhD-level expertise at their fingertips? How will this transform leadership as we know it?

In this thought-provoking conversation, AI coaching expert Lewin Keller reveals how artificial intelligence is reshaping the leadership landscape. Rather than replacing human capabilities, AI is creating unprecedented opportunities to automate routine tasks and focus on what makes us uniquely human.

The discussion explores how companies like SoftBank are already deploying thousands of AI agents to enhance employee capabilities at minimal cost. As these tools become more sophisticated, leaders must develop new skills in distinguishing between what should be human-directed versus AI-automated. This isn't just about efficiency—it's about reimagining leadership itself.

"The future of leadership is understanding what is human directed, where do we need to be involved, and what is AI agent automated," Lewin explains. This distinction becomes crucial as organizations design systems that appropriately balance automation with human judgment, particularly around decisions with significant ethical or emotional implications.

Perhaps most fascinating is how this transformation might help us rediscover what makes us fundamentally human. When freed from repetitive tasks and armed with powerful analytical tools, leaders gain space to focus on deeper questions about purpose, values, and meaning. The evolution of leadership in an AI-enhanced world isn't about faster execution—it's about raising consciousness and making more meaningful choices.

Whether you're enthusiastic about AI's potential or concerned about its implications, this conversation offers valuable perspective on navigating the changing leadership landscape with both technological fluency and human wisdom. Join us to discover how embracing AI literacy could transform not just how you work, but how you lead.

Listen to the Leadership Espresso Podcast:
https://open.spotify.com/show/4OT3BYzDHMafETOMgFEor3

View the Leadership Espresso Podcast:
https://www.youtube.com/@Stefangoetz_Global_Leadership/videos

Connect with Stefan Götz on LinkedIn:
https://www.linkedin.com/in/stefangoetz/

Check out Stefan's Executive and Team Coaching:
https://www.stefan-goetz.com/

Speaker 1:

Hi, Lewin, it's great to have you on our latest episode of the Leadership Espresso podcast. You are one of the experts in coaching and coach bots. You started a startup a couple of months ago, you are at the forefront of connecting AI to coaching, and today we want to talk about the impact on leadership. Welcome to the show.

Speaker 2:

Thank you so much, Stefan. Excited to be here.

Speaker 1:

So let's jump right into it. We all know that AI is discussed as completely disrupting the landscape of coaching. We already have the capability to handle simple coaching conversations, and I want to talk more about the capacities, the opportunities, how this can shape a new kind of leadership and bring more value to clients and customers. So here we go: what is your basic assumption on how this will turn into a more value-driven approach?

Speaker 2:

Yeah, great question. I like the positive sentiment on it. I believe AI is here for a reason. Many of us appreciate it because it's already impacting our abilities, our skills, our potential in a positive way. Others might not yet have had the chance to access AI in such a way. Others might be led by fears and concerns, which are very valid as well.

Speaker 2:

So, no matter where you are on your AI adoption journey, there are certain points of learning when it comes to AI literacy that we need to go through in order to make sense of AI and then be able to apply it to our lives, to our business. If you sit someone in front of ChatGPT and say, here, go and use it, they will look at you and ask: what should I use it for? What is it? Is it a search engine? You need to understand how it is built, what it is capable of, and where there are risks in terms of hallucinations and so forth, in order to use it well. But once you have understood the power of ChatGPT — oh my gosh, I'm just going to feed in this funny character style, because then it's much more fun for me to digest the content — then the possibilities are endless. And actually, I have had so many conversations with world-class companies where I realized the adoption rate is fast.

Speaker 1:

So we don't need to talk about people at the lower end of adoption. Let's talk about those who really need to make a turn in the company to produce value. If adoption were just a matter of having high literacy — knowing what I can use it for — then what are your main assumptions, maybe the top three, on how this will shape leadership?

Speaker 2:

Again, big question — and you're jumping the gun, you want to go straight into it: what will the future look like? I'm not that oracle. However, my assumptions are these. SoftBank, as an example, is already implementing AI agents at large scale — we're talking about 10,000 AI agents — and they're using those agents to, A, power their employees with more capabilities, more resources, because an agent costs only 23 cents per month in their understanding.

Speaker 2:

Now, "agent" is a very vague term. We all have a rough understanding of what an agent is, but does this agent have access to the World Wide Web so it can research the latest information, or only a past snapshot of it? Can it also handle agentic use cases where it triggers smart actions like writing an email, triggering a workflow, validating certain information? Let's keep this vague for now, but an agent is something that can follow a task that you define. Most of the time we're using large language models, LLMs, and we instruct them with prompt engineering on a certain task: do this summary, answer in this tone, with this template, whatever.

Now assume a future where energy is cheap, because Meta is building data centers the size of Manhattan. We have endless energy. We have endless compute, because we can find all the resources we need to build these data centers around the world, and with all this chip and computing power we can run our LLMs — currently the biggest bottleneck. We need to find more data to train the AI on, because we've explored and exploited all the data there is already, and it's really difficult now to get more; and this data is needed to bring the models to even more mature levels.

Let's assume all of this has happened over the years to come, and now an employee has at their disposal a set of 1,000 agents. Think of 1,000 interns that you can manage with the click of a button, and those interns are already today each acting on the level of a PhD in the respective subject matter. Already today we have ChatGPT with its Deep Research mode and so forth — the level of a PhD at our fingertips.
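The building block described here — an LLM plus a prompt-engineered task instruction — can be sketched in a few lines. This is an editorial illustration of the idea only: the `run_llm` stub, the `Agent` class, and the template wording are invented, not any vendor's actual API.

```python
# A minimal sketch of an "agent": a task instruction wrapped around an LLM call.
# run_llm is a stand-in for whatever model endpoint would actually be used.

def run_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; here it just echoes the prompt."""
    return f"[model output for: {prompt[:40]}...]"

class Agent:
    def __init__(self, role: str, instruction: str):
        self.role = role                # e.g. "summarizer"
        self.instruction = instruction  # the prompt-engineered task description

    def run(self, user_input: str) -> str:
        # Compose the standing instruction and the concrete input into one prompt.
        prompt = f"You are a {self.role}. {self.instruction}\n\nInput: {user_input}"
        return run_llm(prompt)

summarizer = Agent("summarizer",
                   "Summarize the input in three bullet points, friendly tone.")
print(summarizer.run("Q3 sales grew 12% while churn stayed flat."))
```

The point of the sketch: the "agent" itself is just a fixed instruction; swapping the instruction ("do this summary, answer in this tone, with this template") yields a different agent on the same model.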

Speaker 2:

There was an interesting interview with Sam Altman where he said that, had you asked him in the past what would change once this was available, he would have expected the world to be a very different place. It's not that much of a change now compared to three, four years ago. Why is that? Because we as humans — no matter if we are private people or in an organization — have not yet fully understood how to use the technology wisely. But with AI literacy and AI fluency — our ability to communicate and collaborate effectively on the topic of AI — there will come maturity, and with maturity we will know what to use an AI agent for and what not.

Speaker 2:

Because here we have a high error rate, here we have a high bias, here we'd better have the human lead with their understanding of emotions, ethics and so forth. But once we have figured all this out, we can deploy agents across many different areas. We're building workflows just by speaking them into a voice interface: hey, can you create a scraper that is always checking the latest weather data, and based on this weather data you're increasing the production of umbrellas in my offshore factory over there; then you're speeding up the logistics; and then I want this warehouse to hold stock of this size, depending on the weather forecast — and boom, boom, boom, this then happens fully automatically.
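The spoken-out weather workflow can be made concrete with a toy rule. Everything here — the forecast source, the scaling rule, and every number — is invented purely to show the shape of such an automation:

```python
# Hypothetical sketch of the workflow described: weather data drives
# umbrella production and warehouse stock. All figures are made up.

def rain_probability(city: str) -> float:
    """Stand-in for a scraper/API call returning a rain forecast in [0, 1]."""
    forecasts = {"Berlin": 0.8, "Madrid": 0.1}
    return forecasts.get(city, 0.5)

def plan_production(city: str, base_units: int = 10_000) -> dict:
    p = rain_probability(city)
    # Simple rule: scale production with rain probability, and keep
    # warehouse stock at twice the planned output.
    units = int(base_units * (1 + 2 * p))
    return {"city": city, "rain_prob": p,
            "produce_units": units, "warehouse_stock": units * 2}

print(plan_production("Berlin"))
```

Once the rule is written down like this, it can run unattended — which is exactly why the conversation then turns to where a human should stay in the loop.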

Speaker 1:

I can see that like a dashboard — a personal cockpit of the main agents I use. Maybe I have a sports agent, I have my financial agent, I have my workflow agent. So if I put myself into that picture as an employee, this would give me a lot more power in my leadership, because, as you said, there is a PhD at my fingertips. That is a very complex and powerful tool. Now, I guess there are a lot of factors. The first is that literacy we need to build up — people wanting to choose those agents, which is a step. I guess there's a lot of fear in it, because it could replace me. So my question is: how do we address these kinds of fears?

Speaker 2:

Yeah, very good point. And when I look at the SoftBank case that I talked about, it's not only about enabling their employees with thousands of agents; it's also about saving costs on human resources when there is an agent that can do the job potentially much faster, with a much lower error rate and a much higher ability to analyze, spot trends and so forth. So to your point about leadership: leadership will no longer just apply to the people who have formal power in the organization. It will no longer just apply to the people who have the skills because they have been trained on the soft skills that leadership needs. It will no longer just apply to the people who have intrinsic motivation and feel empowered to take responsibility as a leader. As you say, it will be this universal power that we will all have as employees, even as juniors just entering the organization. We will enter an organization through agents. We will onboard through agents. We will get to know the organization through agents. It will all be conversational interfaces — text, voice, video, whatever — and those agents are here to serve us as we instruct them. The agent's memory about us — our preferences, our needs and wants, the way we process language — will all be customized so that it's as easy as possible for us to just look at our dashboard and make decisions. So what you're describing — having already today ChatGPT or whatever console you're using, with different agents, your workflow agent, your marketing agent and so forth, each with different tasks — this is the future, where you wake up and your agent is already waiting for you.

Speaker 2:

Last night, you received 44 messages on LinkedIn. You have 12 emails. Six of them are highly relevant. I'd love to escalate one to you even before you have your breakfast — are you okay with me suggesting an answer? It was your colleague, Brian, who flagged this issue, and I would now do this and this and this. And all you're doing as a human is processing on the abstraction layer. That is where you're still needed.
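As a rough illustration of that morning briefing, here is a toy triage step. The relevance scores stand in for what an LLM would judge, and the names, numbers, and threshold are all invented for illustration:

```python
# Toy version of the morning briefing: the agent triages overnight
# messages, keeps the relevant ones, and escalates at most one.

def triage(messages):
    relevant = [m for m in messages if m["relevance"] >= 0.7]
    escalate = max(relevant, key=lambda m: m["relevance"], default=None)
    return {"total": len(messages),
            "relevant": len(relevant),
            "escalate": escalate["sender"] if escalate else None}

inbox = [
    {"sender": "Brian", "relevance": 0.95},       # flagged issue
    {"sender": "newsletter", "relevance": 0.10},  # noise
    {"sender": "HR", "relevance": 0.75},
]
print(triage(inbox))  # escalates Brian's message
```

The human only sees the summary dictionary — the "abstraction layer" the speaker mentions — rather than the raw inbox.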

Speaker 2:

Meaning the future of leadership is not so much about what we know of leadership today. The future of leadership, as I see it, is that you need to be able to understand what is human-directed — where do we need to be involved — and what is AI-agent-automated. Automated means: I create a rule and give my instruction; I verify, I validate, I refine with the contextual data that I'm collecting; and this is a system that learns on its own and gets better and better.

Speaker 2:

But there are a few areas, as we discussed earlier, like human emotional processing, or our value and belief system, that can be very complex at times — or at least we feel it's hard to put into words so a machine could follow it. Probably it's going to be possible very soon. But let's say the human mind, in its complexity, still wants to take ownership of certain tasks. And this is also, I think from a philosophical point of view, the beauty of why AI is coming to us now. We're in such a crazy time. Things are getting faster and faster, we're more addicted than ever to our digital companions, our devices and so forth. And now AI is here, and it's going so crazy and so fast that probably many of us will at some point just burn out and switch off. This is the point where I think we have to decide now: what is human-led and what is AI-led?

Speaker 1:

I love that approach, and, you know, just listening and receiving your message, it's creating some temptations, but it's also frightening me. My heartbeat goes up: do I really want that world, and where am I going to stay as a human? I believe in the human — I believe in creativity, in original presence — and I think we are the best; if there is such a thing as an AI machine, we are somehow superior, I still believe. Maybe I'm a dinosaur. So, if we believe in a human-centric way forward, companioned by AI: what is the beauty of the human-centric approach? Where do we stay? Is there a way where the human-centric, AI-assisted approach could be superior to what we have now? Could you design a picture, or what is your assumption about that potential approach?

Speaker 2:

Yeah, I like that human-assisted part, because, especially at CoachBot, we always explain to coaches, trainers, managers, experts: hey, we're not here to replace you as a human expert with AI, we're here to enhance you. We scale your impact, we scale your reach, we scale your income. We automate workflows to free up more time. These are all the benefits of having AI at our fingertips. And then we need to learn, with our literacy, fluency and maturity, how to really use it so it does good for us, for our stakeholders, clients, our environment and so forth. So we're talking now about AI enhancing the human. We're not going to talk today about robotics, because robotics is the second crazy thing that will also change the world massively, next to the super technology cycle that we're having.

Speaker 2:

That will be our second episode — another topic; let's just focus on the AI part. So it doesn't matter whether in the morning I wake up and next to me there is my robot already waiting and greeting me with the fresh towel, the nice towel that I love so much on a Tuesday morning — I would call my wife for that, actually. Using the phone or whatever gadget, the AI pin — let's assume we have AI at our fingertips.

Speaker 2:

And your question — where do we want to stay human? — is actually an important one. And where do we need to stay human because of potential impacts that are dangerous or frightening or uncontrollable? So, in that example of the weather forecast and the short-term production of umbrellas, and then how you want to ship and distribute them and so forth: what is it that I want to control as a human, because maybe it has a crazy cost impact? For example, we have rain forecast in Europe, we want to produce another hundred thousand rain umbrellas that we want to distribute through robots at the metro stations or whatever — and anything that goes above a hundred thousand orders, I want to control as a human, because I feel this has a financial impact that I want to control.
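That "above a hundred thousand orders, a human decides" rule is exactly the kind of control point that is easy to make explicit in code. A minimal sketch, where only the 100,000 figure comes from the conversation and everything else is illustrative:

```python
# Human-in-the-loop control point: orders above a human-set threshold
# are never auto-approved, regardless of what the agent recommends.

HUMAN_APPROVAL_THRESHOLD = 100_000  # units; the figure from the conversation

def route_order(units: int) -> str:
    if units > HUMAN_APPROVAL_THRESHOLD:
        return "HOLD_FOR_HUMAN"   # large financial impact -> human decides
    return "AUTO_APPROVE"         # within agreed bounds -> agent proceeds

assert route_order(50_000) == "AUTO_APPROVE"
assert route_order(150_000) == "HOLD_FOR_HUMAN"
```

The design choice is that the threshold lives outside the AI: a human sets it, and the routing is plain, auditable logic.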

Speaker 1:

One topic is where we want to keep control when it comes to taking decisions. So decision points are going to be human, okay.

Speaker 2:

But also, more important than decision points is actually design — architecture design. We usually know this only from things we construct physically, or from software architectures, but in those architectures the most important nodes are the decision points: what happens if we receive this data, and the data looks this way, then this happens. So when we build those workflows, it's about how we design the process. The decision-making can then be fully automated, it can be fully human-led, or it can be a combination where we say: up to this data point it's AI-automated, then it's human; or in this zone we want a human point of reflection — we want the human in the loop. So when we design those systems, we need to think carefully about when to bring in the human, and for what reasons. Control is one reason.

Speaker 2:

Another is ethical concerns. I'm talking here about, let's say, police robots that automatically arrest certain people. They have some biometric understanding of the person they're looking for; we know there is an error rate, and then we just let this police robot do its work. Here again: do we maybe want a human police escort to accompany it, someone who in the moment can also judge — oh wait, the suspect we're looking for is currently here with his underage kids; we don't want them to have the traumatic experience of this robot taking this person away.

Speaker 1:

Okay — what I understand: it's the enhancement, the assistant, that makes my decisions better. Understood, which is great — and maybe it's philosophical.

Speaker 2:

Exactly — and maybe it's philosophical sometimes. Sometimes it will be easy, because it's connected to our values.

Speaker 2:

Yes — and sometimes it goes far beyond that, and we're like: whoa, are we even asking ourselves the right questions? And this comes to the point you just made: how is AI empowering us as humans to go deeper? I want to quickly reflect on the fact that I've been working over the last ten years in technology startups, scale-ups, big companies like Google, and I haven't had many people talk to me about philosophers or philosophy in general. Some people are a bit of a nerd and read up on the Stoics, but in the end, philosophy has not been a big thing in my generation, for my peers.

Speaker 2:

Lately, in the last year, I've heard a lot of people come to me talking about the need for philosophers, and looking into history to make sense of what's currently happening, because what is happening is often beyond our imagination and understanding. Suddenly philosophy is again very relevant, as a science and as a subject, to make sense of things. And this is, I believe, a great point in time: me as an employee, me as a person — what can I hand off to AI, knowing it will do the job exactly as I want, or maybe even better at times? Now I've freed up time on my side — what do I want to do with it? Do I want to design new architectures?

Speaker 2:

And workflows? Do I want to spend time on creative things? You said you know humans are super creative, and I agree. So do I want to make more music? Do I want to write, and so forth? Or do I want to ask?

Speaker 1:

Deeper questions — understood. So the second leverage of AI will be that it frees us from tasks that are repetitive, or that can be researched quicker and faster by something other than me. So you're actually opening the box of: what is the value of being human?

Speaker 2:

That is, yeah, what many people today are talking about when they talk about AI. Some are afraid, and others say: wait, this is a great opportunity, because it allows us to become more human. Maybe one third of the people agree, and two thirds are like: what is this person talking about? AI is so dangerous, I'm so afraid of it, it will kill 300 million jobs in the years to come. Yeah — because it makes us more human now, right? So what is our place in the future?

Speaker 1:

Exactly. Why is it good to be human? What does it bring to the table? What is your take?

Speaker 2:

Yeah, honestly, it depends a lot on the environment that we're going to live in in the future — climate change, employment, crises, war and so forth. So this will depend a lot on whether we get our act together as humanity to help each other: not starve from hunger, not die from rockets, and so forth. Nevertheless, let's assume we're just looking at the typical professional here in the Western world, who already has a good income and maybe some savings that could allow them to be off work for a year. And now they've found a few smart AI workflows where agents are actually delivering the income they previously had with a nine-to-five job. So suddenly this person is unemployed — but it's wanted unemployment, because money is still coming in and the agents are doing the work.

Speaker 2:

Okay, what's next? What do I focus my time and energy on? And then I'm likely to end up asking myself very existential questions about the purpose of life. What is my journey here? What can I do?

Speaker 2:

And the funny thing is, when we talk about civilizations and development as big state organizations: only the developed countries come to a certain point of consciousness that allows them to ask deeper questions and care about things like the environment, because we first need the luxury — the time and the resources — to even open our minds and be conscious about it.

Speaker 2:

And this is pretty much the same thing, I believe: more and more people will now get to a point where they're being fed, where their basic needs — and sometimes also their extended needs — are met.

Speaker 2:

They now have a lot of time and opportunity at their fingertips, meaning they have all these PhD agents they can tell: hey, do some research on this topic or this challenge, or build me this workflow here. And what do we now do with all that? This goes back to the existential questions we all need to ask ourselves: what is our mission here? Why are we working for this organization? Why are we in this domain? What is our purpose in this life? What do we still want to do in this decade, in this time of our lives? And yeah, this brings a lot of opportunity, but I think also a lot of challenge — positive challenge — because we now need to undergo a personal transformation ourselves, no matter if we are leaders in our families or, as I take it in a more general sense, in our consciousness.

Speaker 1:

So if being human means asking deeper questions about the meaning of life, about mission, about our purpose — this, I reckon, is deeply human, and I think that's the difference from a machine or any other species. And it could potentially help us where we are as humanity, as a nation, as an organization. Because if you look at the automotive industry right now in Germany, we're stuck. We're stuck because we're repeating the patterns of the past and trying to do them faster.

Speaker 2:

So would AI liberate us from repetition, from research, from some types of decision-making, and open up a new space for asking deeper, meaningful questions to raise consciousness? What resonates for me is: I don't want AI to automate research or automate curiosity in a way that means we're no longer doing it ourselves, but, as you said, in the end I want it to challenge us to go deeper and deeper. So when we think about automation, let's build even better automation, with more decision-making points where we have the perfect understanding between human and AI. When we do research, let's use even more data points — more recent data, more context-rich data, whatever — in order to understand the problem, maybe from different angles. And then, yes, I believe AI can be very helpful in knowing what the options are. However, the final decisions —

Speaker 2:

You know, this is still on us. And consciousness — I think it's very difficult to make decisions when we're being rushed. We all know this from personal experience. It's very difficult to understand what my gut feeling, my instinct, is, and also to really go deep into the data points. However, now with AI we have the opportunity, if we build those systems right, to understand and digest the data much faster and then — this is my personal hope — have more time for the "okay, let's just be pregnant with that idea, with that path, with that decision." So we're buying ourselves time back for the stuff that, as you describe it, makes us human: the creative part, the intuition, the connection, the oneness that some of us feel.

Speaker 1:

I like that picture — that AI could bring us into a new space of liberation: asking more meaningful questions, starting more meaningful conversations, having more meaningful relationships. Because that's truly human: that deep sense of connectedness, of transformational power, and of being liberated.

Speaker 1:

Liberated time-wise, money-wise, whatever — so that we are pushed, potentially, to be more human. And this could be an answer, or a picture, that balances being freaked out by what the potential could be — that I'm no longer part of it — with the idea that I could become even more within that system. Now, I have found by experience that behind somebody who is a CEO or starts a startup, there is always a personal story. So, if you allow, I would like to end with a little bit about how you came to drive AI in this direction. What is your personal story behind it? I heard that some major change happened, maybe when you became a father. So maybe you want to share a little bit about that foundation, that truly human foundation, that drives your startup.

Speaker 2:

Yeah, happy to. So our mission is to create more access to coaching, especially AI coaching. AI coaching is a mix of what we know as coaching — non-directive questions, and, as a coach, giving our attention and our presence to that person to bring them from point A, through self-reflection and effective questioning, to point B. It's not ingested like doctrine — "you need to do this" — like religion or mentorship; it's more like: hey, what is it that is important to you? I found this very powerful for myself.

Speaker 2:

My background is: I was raised by a single mom most of the time, with two younger brothers, and I took a lot of responsibility early on. That was hard in my childhood, because I didn't get to play as much as others, but later on it also served me well, because it was just very natural for me to take over responsibility — and responsibility at some point no longer felt hard, no longer felt like liability; it was the ability to respond. It was just easier for me. I served nine months in the army, and when I was picked to lead an operation, I thought: nice, I enjoy this, because I can stay calm even when it's getting very hectic, and make rational decisions based both on the data I collect and the gut feeling that guides me. So when I became a father, I made very conscious choices. I transitioned from, I would say, a party animal — a frequent flyer, very unaware of the environmental impact I was having, eating meat and fish every day, just living through life for enjoyment — to a plant-based, centered person who is very much focused on mindfulness, meditation and sports. I did full distance.

Speaker 2:

So I really changed my physicality and my mindset in order to be as effective and as loving and kind as I wanted to be in this new episode of my life. And I'm currently in the episode where, A, I want to take care of my three kids and my wife — I want to be that family man — while I also serve my social environment, my community, and for me that is a global one. I don't see my community as just the neighborhood around me, the houses next door. No, I see it as all human beings, and I have this deep sense and wish to serve people, especially those who do not have access to support when it's most needed. I believe that matters especially when we're going into adulthood — something like age 13, 14 until 30; men usually feel adult a little later than women, because the brain works a little differently there.

Speaker 2:

Nevertheless, I feel in this time it's so important to have the right kind of support in your life to figure things out — productivity, career, administration, even well-being: how do I formulate goals that serve me, how do I achieve them, how do I build further on that success, and so forth. And I believe coaching is a very powerful tool. But it's highly inaccessible at the minute: with the roughly 100,000 professional coaches we have, the average coaching session costs $244. That's affordable for only about 15% of the global adult population, and even then only once. For those who want recurring sessions, it's an even smaller circle — so it's super exclusive.

Speaker 2:

We do know it works much better than traditional learning and training, where we're just going through a set of static content. Now put the power of AI on top of it, meaning we make it super available — 24/7 access through AI companions, conversational interfaces through text and voice, for example — that can bring you the power of those reflective questions, no matter if you're on your phone, on your computer, on your smart device, in your car, at your home.

Speaker 2:

That can be super powerful, whether in your private life as an individual — when it comes to your relationships, your life purpose and so forth — or at the workplace. Imagine you would sometimes just have that perfect voice with that perfect question in that perfect moment, making you reflect: is this really the best thing you can spend your time on right now? Is this really the best approach you've taken, just copying that slide set, versus thinking about the structure first? And this is what we want to build with CoachBot: not to create the final product for you, but to allow experts who are really good in their domain, their subject, their niche — automotive, for example — to bring the best out of the humans they're working with. This is what we do on an everyday basis: helping human experts build technology that can scale impact.

Speaker 1:

Now, how did you build this human-centric approach into the heart of your system? How can we see what we discussed come alive in your system?

Speaker 2:

We haven't discussed this today because we focused on the opportunity around AI, but, as we know, there are risks. We quickly touched on topics like hallucinations and bias. There is also the risk of privacy and IP loss: if only two or three big technology companies in the world hold all the data (and we just learned this from Sam Altman as well), then whatever you put into OpenAI, no matter if you archive or delete the thread, OpenAI can be forced in a court case to hand that data over. So they retain everything you've ever said to the AI, even if you delete it. It's pretty crazy, by the way. So in this world where a few technology companies store and process all the information in the world, including your most sensitive topics, we made a few choices about how we want to play that game. We decided we do not believe that all the data needs to go into an AI model, meaning we have a so-called zero-trust AI policy: we are not fine-tuning our AI models on sensitive client data, your workplace secrets, what you told us about a colleague. We're also not training the models on the coach's knowledge, the frameworks, methodologies, subject-matter expertise, and personality that the experts build into the AI. Because the moment it's in the model, it can no longer be extracted and removed. It's then in the model, a black box even for the developer. It's really difficult to identify certain data points and move them out, and oftentimes we're training a model on a model on a model.

Speaker 2:

So we have designed our coach-oriented reasoning algorithm, CORA for short, in a way that it's only built on memory, that is, the relevant client information we extract from the conversation, and on system logic: very old-school software engineering, with no big fuss where the AI gets to decide. No, the humans build the way they want to create context. So we do a similarity search, where we make sense of the words we're learning, and then a semantic search, where we make sense of the context. What does it mean that this employee is seeking more well-being? Did we have previous conversations connected to the area of well-being? Oh, maybe something is going on at home with the kids or the partner. Oh, there is a certain health situation. Oh, there is stress because there has been no promotion. And then, based on all the memory information we have and the context understanding the AI does, we can bring out one very powerful question that helps you.
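The retrieval flow Lewin describes, extracting client memories and combining a word-level similarity search with a semantic search to build context for one question, can be sketched roughly like this. All names and data are invented for illustration, and the "semantic" step is reduced to topic-tag matching; this is not CoachBot's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str                                  # client fact extracted from a past session
    topics: set = field(default_factory=set)   # semantic tags, e.g. {"well-being"}

def word_similarity(query: str, memory: Memory) -> float:
    """Word-level similarity search: Jaccard overlap of query and memory words."""
    q, m = set(query.lower().split()), set(memory.text.lower().split())
    return len(q & m) / len(q | m) if q | m else 0.0

def retrieve_context(query: str, query_topics: set, store: list, k: int = 2) -> list:
    """Rank memories by word overlap plus a semantic bonus for shared topics."""
    scored = []
    for mem in store:
        score = word_similarity(query, mem)
        if query_topics & mem.topics:          # semantic search: same topic area
            score += 1.0
        scored.append((score, mem.text))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [text for score, text in scored[:k] if score > 0]

store = [
    Memory("Client reported stress after missing a promotion", {"career", "well-being"}),
    Memory("Client's partner started a new job", {"family"}),
    Memory("Client enjoys cycling on weekends", {"hobbies"}),
]
context = retrieve_context("I want more well-being at work", {"well-being"}, store)
```

Here the promotion-stress memory is retrieved not because it shares words with the query, but because both sit in the well-being topic area, which mirrors the "context understanding" step Lewin describes on top of plain word matching.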

Speaker 1:

You are very cautious about the data and the ethical issues around that data. And how does your system assist the coach to come up with more value?

Speaker 2:

Yeah, that's a very important point, because if we don't have communication between the AI and the human, we're creating silos, and suddenly maybe the AI is smarter than the human and making decisions where the human says: what are you doing? This is nonsense, I don't like this, this is dangerous. But the AI has the data and just acts per the instructions you put in in the first place, while your thinking, your understanding, is outdated because the AI is far ahead. So we need a constant understanding between the two worlds, so that the human can control and follow the processing of the AI. And this is the challenge with these black-box, fine-tuned AI models: they're getting so good that it's really hard for us to make sure they're still working ethically.

Speaker 2:

So what we put in there is, first, that we're controlling the output at all times. Hallucinations come up about 30% of the time when using certain GPTs, and they need to be put into boundaries; we basically need to put the AI into chains to avoid them. So we hard-code a certain set of rules: don't give professional health advice, financial advice, or legal advice. Don't do directive coaching, where we give clear advice like a GPT would; rather, ask smart questions. Don't stack questions to overwhelm the user. We create, let's say, two-thirds of that list of basic requirements and rules, and one-third is added by the expert, where they say: I want more of this, less of that; I want this use case, but not that one. Together we create this long list of exclusions and rules that we instruct the AI not to violate. But even then things can slip through, through hallucination, the AI having conflicting rules, or being overwhelmed by the number of rules. For such situations we double-check with a hard-coded content moderator, as we call it, to remove those parts from the answers before they go out to the client.
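The pipeline Lewin outlines, a platform-defined rule list, extended by the expert, then enforced by a hard-coded moderator before any answer reaches the client, could look roughly like this minimal sketch. The rule names, trigger phrases, and fallback question are all invented here; real moderation would use classifiers rather than phrase matching.

```python
# Platform-defined rules (the roughly two-thirds the platform hard-codes).
PLATFORM_RULES = [
    ("medical advice",  ["diagnosis", "prescribe", "dosage"]),
    ("financial advice", ["buy this stock", "guaranteed return"]),
    ("legal advice",    ["you should sue", "sign this contract"]),
]

def build_ruleset(expert_rules: list) -> list:
    """Combine platform rules with the expert's additions (the remaining third)."""
    return PLATFORM_RULES + list(expert_rules)

def moderate(draft: str, rules: list) -> tuple:
    """Return (approved_text, violations). Any rule hit blocks the draft
    and substitutes a safe, non-directive coaching question."""
    hits = [name for name, phrases in rules
            if any(p in draft.lower() for p in phrases)]
    if hits:
        fallback = "What feels like the most important question for you right now?"
        return fallback, hits
    return draft, []

rules = build_ruleset([("directive coaching", ["you must", "my advice is"])])
approved, violations = moderate("My advice is to buy this stock.", rules)
```

The key design point mirrored here is that the moderator is deterministic code sitting after the model: even if the AI ignores or confuses its instructions, the flagged draft never reaches the client.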

Speaker 1:

I'm a little bit shocked, or irritated, because this would mean it really replaces the coach. And as the model gets better, it would replace even the senior coach, so it's no longer an enhancement.

Speaker 2:

Yeah. So the thing is, AI is already so good that I'm confident it could pass a PCC marker test. For those who know coaching a little bit: that's a professional standard that is already quite senior. It's not the master level, but it's quite senior. And it's probably only a question of time until AI could also pass the relevant marker tests of a master coach certification.

Speaker 2:

Obviously, in the emotional context it has certain disabilities, because it cannot experience emotions like us; it's just imitating understanding through its amazing contextual text comprehension. Nevertheless, yes, I think it's dangerous in the sense that, if you use it for that, it can already imitate what a human coach, even a senior coach, would do. We rather say no: we're creating something whose combined power, enhancing the human with AI, produces a better outcome than AI alone, which aims to replace the coach. Our coaches use AI to follow up with clients when they don't have time, keeping the human for when it's most needed: a deep transformational session where emotional connection and presence matter. And at other points in time, at three at night, when I feel a little worried about tomorrow's presentation or the financial issues I'm having, they have an AI coach saying: do this breathing exercise that we discussed, follow the principles that we know. Then this can be a powerful combination.

Speaker 1:

I mean, this sounds like the question is not whether I enhance my capacities as a coach, as a senior coach, with AI. The question is rather: with what kind of ethics, what kind of integrity, do I build a model, or choose a coach bot or a system, that will enhance me and produce a product of integrity? Is that the sum of it all?

Speaker 2:

I think that could be one of the conclusions we draw, not just for coaches and relevant experts, but for all of us. Because, let's face it, the future is going to be supported by agents all around. There will be agents from governments monitoring the traffic (already happening today), making sure the traffic lights run on time, and everything from the infrastructure to our private lives, our phones, and all the apps we're using will be agentic and automated. So the question for us is: which systems, which workflows do we choose to be AI-powered? Some will be fully AI-led, some will be hybrid between human and AI, as I say, and some will be human-only. And it's our ethical understanding, starting in the conversations we have with our parents at home, going through school, through the experiences we have with the apps we build (because in the future we'll just build apps with voice), that will shape our understanding of how and when to use AI.

Speaker 1:

Oh brilliant, I love it. So my key takeaways are: first, it's all about your personal adoption, becoming AI literate, overcoming your fear of being replaced, and thinking more about enhancement. The second one is: once you have chosen your agents of integrity that stay true to your values (and that option is already there), AI will liberate you from workflows that can be done better by AI and open up a space of freedom, of new choices. And the third one, which I like most: it will help us be more human, asking deeper, more meaningful questions about what our mission is here, and making better choices. Leadership then becomes more about raising that kind of consciousness of which better choices to take.

Speaker 1:

So it's becoming a leader of your own personal transformation story, where you raise your own consciousness. The discussions we have in the future are potentially no longer about "do this or that" or "reach these goals". It's more about what a more meaningful way forward is; it's more of a forum, more about liberation, accountability, and creating. So leadership must undergo its own personal transformation as well. It's a side-by-side development, and it could potentially turn fear into opportunity and help us start this journey, experiment, and find out how we can become more human, with guidance from AI. Is that a potential summary of what we discussed?

Speaker 2:

I want to make one last example for those who feel this is too abstract. Think of an employee who is asked to reach out to new clients and bring forward the value proposition of the company so that a sales conversation starts. So far, this business development rep has spent about 80% of their time on the what: What am I writing? What is the list of people I reach out to? And then they're stuck in the process: writing an email, sending it out, checking the answer, thinking, writing. The doing part, right?

Speaker 2:

And then we have about 20% of scheduled time with our managers, our retrospective meetings: the how. How can we do that better? Is there maybe a little increment we can make? Now imagine the future.

Speaker 2:

You're using 10 AI agents. Some are writing your posts, some are writing in different styles with different audiences in mind, running A/B tests, giving you the data points, and you see: oh, my audience responds rather to this style, this one brings this conversion rate, and so forth. This is done by one employee who can instruct 10, 100, or 1,000 agents at a time. Now say you're a hiring manager. One candidate says: I've done business development for the last 10 years, I'm really good at opening up opportunities, here's my track record. Another candidate, maybe a Gen Z, says: I've been working with AI for two years now, I've built armies of agents collaborating with each other, we can basically do whatever you want me to do. Who would you hire? I think most hiring managers would now go with the person who can instruct the agents.
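The workflow in that example, many drafting agents producing style variants, an A/B measurement per variant, and the rep keeping the winner, reduces to a small loop. In this sketch the agent is a stub, and the styles, pitch text, and reply counts are invented numbers purely for illustration.

```python
def draft_agent(style: str, pitch: str) -> str:
    """Stand-in for an LLM drafting agent that writes outreach in a given style."""
    return f"[{style}] {pitch}"

def ab_test(variants: list, replies: dict, sends: int = 200) -> tuple:
    """Pick the variant with the highest reply (conversion) rate,
    given reply counts out of `sends` messages per variant."""
    rates = {v: replies.get(v, 0) / sends for v in variants}
    best = max(rates, key=rates.get)
    return best, rates[best]

# One rep dispatches several "agents", one per style.
variants = [draft_agent(style, "Would you like a demo of our product?")
            for style in ("formal", "casual", "storytelling")]

# Illustrative measured replies out of 200 sends per variant.
replies = {variants[0]: 6, variants[1]: 14, variants[2]: 9}

best, rate = ab_test(variants, replies)   # casual wins at a 7% reply rate
```

Scaling the same loop to 100 or 1,000 agents only changes the length of `variants`; the rep's job shifts from writing each message to instructing the agents and reading the conversion data, which is exactly the shift Lewin describes.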

Speaker 2:

So what you said about having fears of being replaced: it's not helpful at the moment, because the world keeps spinning faster and faster, and only those who are AI mature will have a chance to really use AI. Only those who free up their time (this was your second point) will be able to spend more time on what is human, creative, ethical, philosophical. And this is the third point: those who join the hype train, who start with curiosity but also healthy skepticism, who stay loyal to their concerns and really understand how their data is handled, those will master the game and eventually uplift their human consciousness in a way that can potentially solve some of the biggest problems on earth.

Speaker 1:

Nothing more to add. It was fantastic, very inspiring. Lewin, I wish you all the success with your startup, and maybe I have to join and find out for myself, as a dinosaur, what we talked about, so that I stay AI mature. Thank you for today. We'll do another one later, maybe about robotics. For now, thank you for being on the show, and all the best for your future. Thank you, Stefan. Have a good one. Bye.