In this conversation, Matthew Gwyther and Neal Taylor explore the relationship between teams and AI, and the values teams can adopt to augment themselves with AI.
Key takeaways
- AI can optimize team dynamics without eliminating team members.
- AI agents can perform various tasks and enhance workflows.
- The fear of AI often stems from concerns about replacement.
- AI should be viewed as a tool for augmentation, not diminishment.
- Experimentation with AI tools can lead to improved team satisfaction.
- Organizations need to adapt to the integration of AI technologies.
- AI can help individuals focus on higher-level tasks.
- The impact of AI on productivity can be both positive and negative.
- Transparency in AI usage fosters trust within teams.
- Embracing AI requires a shift in mindset towards collaboration.
Key values for augmenting teams with AI
- Welcome the bots to the team
- Experiment / upskill
- Augment human capabilities
- Delegate tasks and focus on higher value
- Catch yourself when you’re getting distracted
- Don’t get fooled by AI
Transcript
Neal
So, Matt, we were talking about teams and AI, and I thought, just as a way of introduction: one thing I asked AI some months ago, when I was having a bit of trouble with a team and we were stalling a little bit, was what to do about the situation. And guess what AI said? AI said: get rid of the team.
Matty G
Wow, there you go. Optimizing myself.
Neal
Just me and AI. So the question becomes, how do you survive as a team? How do you use AI without cannibalizing the whole team?
Matty G
So if AI wants to cannibalize the team itself, how can you supervise an AI with tendencies to do this? Just riffing on it, I suppose one approach is to think about AI from a human perspective as broadening agency: allowing people to use tools for the right purpose, as coaches, as peers, as designers, whatever it might be, at these different layers. It's almost as if each individual had spawned a group of helpers to improve their agency to deliver in a certain area. So one way of framing it is to think of a human who can multiply, becoming more by extending their mind into these different tools. That's one way to do it.
Neal
So it could be, so just to summarize that, it could be almost like a fractal where you’ve got, let’s say, a team and then within each person there’s almost like a team of AI agents underneath each one that augments the original person within the team.
Matty G
Exactly, that’s a good way of putting it. I think what we’ll probably see is AI intelligence inserted at different layers of an organization. It might even be within the stack, the infrastructure, more of an intelligent internet that has AI components and parts within it. Then within the organization itself, processes automated in different ways: orchestration, AI agents, different API calls, tool use and whatnot. And then within an individual, there’ll be these different personas that we can lean on, that we can use in different ways to stretch in different directions. So I imagine it’s actually about enlarging the individual in some way, rather than cannibalizing and getting rid of. It’s like adding to that person. It’s the opposite, in fact.
Neal
So it should almost be like it’s securing the people within the team rather than getting rid of them. But that also requires new skills in how you actually use these AIs. And agents: what are we talking about in terms of agents? What exactly is an agent?
Matty G
So you can think about it in different ways. I suppose colloquially we think of an agent as if it’s a chatbot, but really what most people mean now when they talk about AI agents are agents that can conduct different tasks, or parts of tasks, autonomously and in conjunction with one another. So you might have an orchestrator agent, which might just be your connected end-to-end workflow with different sub-units within it.
Some of those sub-units are AI or API calls, and others are calculations or some other sort of tool use. So that would be an example of an agentic workflow, with all of these different elements pieced together. And beyond that is probably the idea of having some kind of AI that can review the whole workflow itself as well. So either there’s a human in the loop or there’s another AI in the loop that’s monitoring the series of tasks and calls.
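The orchestrator-plus-reviewer idea described here can be sketched in a few lines of code. This is only an illustration of the shape of such a workflow, not any real framework: every function name is hypothetical, and the "AI" steps are stand-ins where a real setup would make an LLM or image API call, or hand off to a tool like n8n.

```python
# A minimal sketch of an agentic workflow: an orchestrator wiring together
# sub-units (one stubbed "AI" call, one plain calculation / tool use),
# plus a reviewer step that monitors the whole series of task outputs.
# All names are hypothetical stand-ins, not a real agent framework.

def draft_step(topic: str) -> str:
    # Stand-in for an LLM call that drafts content for a topic.
    return f"draft about {topic}"

def word_count_step(text: str) -> int:
    # Plain tool use: a deterministic calculation, no AI involved.
    return len(text.split())

def review_step(outputs: dict) -> bool:
    # The "AI (or human) in the loop": checks the whole chain of outputs.
    return bool(outputs["draft"]) and outputs["words"] > 0

def orchestrate(topic: str) -> dict:
    # The orchestrator is just the connected end-to-end workflow,
    # calling each sub-unit in order and collecting results.
    outputs = {}
    outputs["draft"] = draft_step(topic)
    outputs["words"] = word_count_step(outputs["draft"])
    outputs["approved"] = review_step(outputs)
    return outputs

result = orchestrate("team dynamics")
print(result["approved"])  # True: the reviewer step passed the chain
```

The point of the sketch is the structure: deterministic steps and model calls are interchangeable units, and the reviewer sits outside the chain rather than inside any one step.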
Neal
So just to give that some real-life examples: is there anything you’ve used recently with AI agents to help you within a team?
Matty G
Yeah, so a bunch of this stuff I’ve been working on in my own slightly more esoteric capacities, rather than within a team. But one example: in a traditional educational publishing company, we’ve been looking at how we might adapt our video-creation and multimedia asset workflows, and we did produce a semi-agentic flow there. We didn’t automate it all; we wanted our hands on different bits so people could upskill in the different components before you might plug it into something like n8n to automate. There we were producing an agent which would take an idea for an educational video and spit out a comprehensive video brief, with all of the component parts, descriptions for assets and so on. You could then take that and, if you put it into an agentic workflow, pass each part, if it was tagged in a certain way, to an API, to produce an image for the video, for instance, and then ideally those could be called back in. So that’s one example, the less esoteric one. There’s lots of weird stuff I’ve been doing which is more interesting.
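The tagged-brief routing described above can be sketched as follows. This is a hedged illustration of the idea, not the actual publishing workflow: the brief structure, tag names, and functions are all invented for the example, and the "agent" is a stub where a real flow would call an LLM, with tagged parts routed onward to something like an image-generation API.

```python
# Sketch of a semi-agentic video-brief flow: a stubbed "agent" expands a
# video idea into a brief of tagged component parts, and a router sends
# anything tagged "image" toward an (imagined) image-generation API while
# the rest goes to a human editor. Tags and structure are illustrative.

def brief_agent(idea: str) -> list[dict]:
    # Stand-in for an LLM call that expands an idea into component parts.
    return [
        {"tag": "script", "content": f"Narration outline for: {idea}"},
        {"tag": "image", "content": f"Title card illustrating: {idea}"},
        {"tag": "image", "content": f"Closing diagram for: {idea}"},
    ]

def route(parts: list[dict]) -> dict:
    # Dispatch each tagged part to the right downstream destination.
    routed = {"image_api": [], "editor": []}
    for part in parts:
        target = "image_api" if part["tag"] == "image" else "editor"
        routed[target].append(part["content"])
    return routed

brief = brief_agent("photosynthesis explainer")
routed = route(brief)
print(len(routed["image_api"]))  # 2 image prompts head to the image API
```

Keeping the routing as a separate, inspectable step matches the "hands on different bits" approach in the conversation: people can review what goes where before any of it is automated end to end.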
Neal
So, now that we’ve had an overview of how AI can enhance and augment teams, let’s think about this. You know, it’s a funny thing: Terence McKenna once said that the fear of AI is really the male ego, and its fear of being replaced by something else. So to what extent is this fear just an egoistic fear, when the reality is that it’s something that can support you, something that can increase the amount of value that you can create as an individual within a team, or as a team? What are your thoughts about that?
Matty G
Yeah, exactly. I think it’s that deep fear of replacement, that kind of existential dread, the fear of things we’ve talked about previously: entropy, chaos, something being put into a system which opens it up, and then it starts to become an uncontrollable process that gets out of hand and could lead to decay of self, decay of identity, breakdown of work. So I think it’s a base fear, and it’s understandable, especially when we’ve been fed a diet of dystopian predictions about AI and technology for many years. Something I’m interested in here is that, for human beings, a lot of this is actually just a perspectival shift, a way of looking at the same phenomena. We’ve discussed how you can look at one thing, say the potential of advanced AI technology, as an existential threat; and if you look at it in a slightly different way, you can see it as a purveyor of infinite possibility and opportunity. Both might seem like extreme positions, but as we know, tech is not value neutral. How the people using this technology view it will determine what it becomes, because we’re building it as we use it; we’re not just using something that’s been handed to us. And if we can use it to empower a team or individuals, then that’s the way it will be used. It’s as simple as that.
Neal
Yes, I mean, a term that I used previously was that it augments, and I used that term deliberately: it augments human capacity. So if you think about human capacity in this instance, well, let’s take the example of television. Marshall McLuhan uses the example that television augments eyesight, letting us see further than we otherwise could. So what are we actually augmenting with AI?
Matty G
That’s a good question, isn’t it? And just to caveat it: I think there’s the potential for augmentation and also its opposite, diminishment, so these should always be looked at as a full spectrum, including how we might be using a tool to offload certain cognitive capabilities. There was a big furore recently, I think it was an MIT paper, where they showed that students using LLMs to produce their essays were using less brain power than those doing traditional note-taking and producing their essays that way. So on the one hand, it shows there’s an offload of maybe the deeper part of thinking in that example. But then maybe there is also the offloading of
the need to do that work at all; you’re delegating as well. So I think there’s always a balance; you can see both sides of it. In this example, where someone outsources some of the work, giving an LLM notes to produce an essay versus writing their own notes and producing the essay by hand, you can look at it in two different ways. You can say, well, those individuals are risking diminishing their minds; they’re going to wither away their critical thinking skills and their ability to think clearly. Or you can say, no, they’ve just outsourced that, they’ve delegated it, and now they’ve got free capacity to do something else.
Obviously the essay maybe wasn’t such an appealing thing for them; maybe they’ll go and do something else, maybe they won’t. But yeah, I think there are examples of how AI can both augment and diminish at the same time, depending on how we look at it. And like any human tool use, it’s going to be complex. Television, as an example, has opened up worlds to us: nature documentaries, history clips from times we would never otherwise see, things that are marvellous; and then things that are trivial or potentially not so good for us. I think it’s going to be everything. And to go back to the entry point, that’s what worries people.
So coming down hard on either side and saying this is going to be chaos, this is the end of civilization, we’re all going to lose our jobs, or this is going to be utopia: both are ideological framings of a complexity that’s going to play out all the way in between those views. That in-between is where we need to get comfortable operating.
Neal
Yeah, so even if you just look back at Marshall McLuhan’s four laws of technology. The first is: what does the technology amplify or improve? With AI it could be creativity, or it could be completing, let’s call it a menial written task, much more quickly. That could be something that needed to be done, needed to be written, and it can be done much more quickly now. But there are also the other laws. Obsolescence: what does it drive out of prominence, what does it make less relevant? Retrieval, which is an interesting one: what old form or practice does it bring back into use? And then there’s reversal: what happens when the technology is pushed to its extreme? How does it flip into its opposite?
And we’ve seen that with, for example, social media, which was supposed to connect people but ended up making them more isolated. There’s been a similar quip about AI: that you get artificial intelligence but reduced intelligence among humans. But just to bring that back to teams: what are we actually augmenting here, in terms of the teams? My understanding would be something like this: say a team can produce X amount of value; with AI, they might produce 2X or 3X value, in terms of the amount of output, or indeed the quality, that can be produced together as a team.
What do you think about that?
Matty G
Yeah, I think that’s reasonable. I was just looking at a report today that someone shared at work, around the promises and letdowns of some of the AI rhetoric, but also what’s actually been going on. Something like only 5% of AI pilots were making it through to being signed off, used and scaled within medium-to-large organisations; but they found something like 90% of individuals were using this kind of shadow IT, tools they’d adopted themselves, and using them effectively. So it’s almost like the unsanctioned tools are the ones that work. Getting sanctioned tools to adapt to an organizational context, to hold sufficient information about the organization, to have memory, and to keep improving and updating: these were the things the report cited as so difficult within an organization, whereas just giving people really powerful tools works. So with teams, I’d say autonomy over technology use tends to improve satisfaction, and satisfaction drives motivation, which then tends to drive good work, provided people are aligned, in the right place, and behind the vision and the why of the team or the organisation. So there’s one boon already: let people use and experiment with these tools, because it tends to make them feel happier and more in control of their work when they can use them and are trusted to do so. And then, in terms of the organisation, that’s another question.
Whether more advanced setups, agentic workflows, tools, technologies and models help a team: I think that’s a more difficult question at the moment. There aren’t enough findings, at least from what I can see, to reveal much insight there. It’s happening, but it’s happening slowly.
Neal
So, just to try and summarize this and wrap it all up: how do you use AI without cannibalizing the team? Could we distill that into one value, or not necessarily just one value, but a set of values that teams could take with them?
Something that comes to my mind at the moment, picking out what you said, is being able to experiment with AI, having the freedom to experiment with it to see how it can help the team. That feels like the takeaway point.
Matty G
Yeah, I think that really makes a lot of sense.
Yeah, inviting it into the team, not worrying about it cannibalizing, I think. And framing it as an augmenter: again, coming back to the importance of framing, the way you look at the technology will determine how you use it. So think of it as an augmenter explicitly when you’re setting up a team at the start: OK, we’ve got these people, we’ve got this expertise, and we’ve also got these buddies, these chatbot companions, that everyone can use. These are the ones I’m using; be open and transparent about what you’re using, and learn from how everyone else uses them. I see it as a web that will develop around each person. For instance, I’ve got a work-related coach, a Dharma-related coach, a nutritionist, medical helpers, all of this kind of thing. I think increasingly people are going to have that: miniaturized versions of themselves that they’ve optimized. And if you imagine that for everyone within a team, it’s potentially very powerful. It could get distracting, but why not explore? It’s fun, and it’s interesting to see what’s possible.
Neal
And there’s almost a clue there as well, going back to McLuhan’s last law: how does it flip? It suggests having some kind of value as a stopgap or a boundary. How do you know when you’re getting completely distracted, not actually creating any value, off with the fairies, messing about and doing nothing? There needs to be some value covering that. I don’t know if we can synthesize it right now, but I can imagine something like it emerging at some point. And I really liked what you said: welcome the bots into the team, experiment with them, augment, and then have some way of ensuring you don’t get distracted.
Matty G
Exactly, yeah. Don’t get distracted, don’t get fooled by them. Don’t let them trick you and say that your colleagues are no longer needed.