
Does AI Mean The End of Developers, Or Is It Just a New Powerful Development Tool?

Is AI the end of developers or just a new tool? Discover how AI is reshaping coding and what it means for the future of software development.

In this thought-provoking video, we explore the impact of artificial intelligence on the world of software development. Is AI set to replace developers, or is it merely evolving the tools at their disposal? Join us as we dig into expert insights, real-world examples, and emerging trends that show how AI is transforming the tech landscape.

With Special Guest Matthew Renze, AI Researcher, Consultant, and Author

Matthew Renze

@MatthewRenze

https://www.linkedin.com/in/matthewrenze

This Week’s Sponsors

Kinsta

LifterLMS

The Show’s Main Transcript

[00:00:17.040] – Jonathan Denwood

Welcome back, folks, to the WP Tonic Show. This is episode 971. I have really been looking forward to this show. I know I say that regularly, but I truly mean it this time. We're going to be discussing all things AI. It's the topic of the moment, but I know of no other subject where there's more nonsense, propaganda, and routine. I've just created a word there, my beloved tribe. I know no subject that has more of these elements in it. But we've got an expert, somebody who knows a lot more than I do. We've got Matthew Razas with us.

[00:01:10.670] – Matthew Renze

Renze.

[00:01:11.790] – Jonathan Denwood

Sorry, Renze. I always do this, folks. Renze. And he is a true expert. So, Matthew, would you like to give the tribe a quick 10-to-15-second intro? Then, when we go into the main part of the show, we will delve more into your background.

[00:01:31.360] – Matthew Renze

Sure. So I'm Matthew Renze. I'm an AI researcher, consultant, and author. I've been in the industry doing data science and AI research for a couple of decades now, and I'm now focusing all of my time on research into artificial intelligence agents.

[00:01:47.280] – Jonathan Denwood

That’s fantastic. And I’ve got my ever-patient and helpful co-host Kurt. Kurt, would you like to introduce yourself to new listeners and viewers? Yeah.

[00:01:57.000] – Kurt von Ahnen

My name is Kurt von Ahnen. I own an agency called Mañana No Más, and we work directly with folks at WP Tonic and LifterLMS.

[00:02:04.560] – Jonathan Denwood

That's fantastic. Like I say, we're going to be delving into all things AI. It should be a fascinating show. But before we do that, I've got a message from one of our major sponsors. We will be back in a few moments, folks. Three, two, one. We're coming back, folks. I also want to point out we've got some fantastic special offers from the sponsors, plus a list of the best WordPress technology and services. You can get all these goodies by going over to wp-tonic.com/deals. My beloved WordPress professionals and developers, what more could you ask for? I'd say probably a lot more, but that's all you're going to get on that page, folks. I've made a career of disappointing folks. So, Matthew, let's dig in. I don't normally insist on this, I leave it to the guests, but I think we really have to delve into what your training and higher educational background are, and what your industrial experience is in this subject. So maybe you can give us an outline of it.

[00:03:34.850] – Matthew Renze

Sure. So, like I said, I've been in the industry for quite a long time. I started in 1996 when I went to college, studying management information systems and business administration. I then went on to be a software developer for the first 10 years of my career, in the 2000s. Then I went back to college because I hit kind of a limit; I just didn't think I could go any further. So I ended up studying computer science, philosophy, and economics, and my focus was on artificial intelligence. I earned my bachelor's degree and then went on to be a consultant for the first half of the 2010s. Then I decided to go back to school again and get a data science specialization through Johns Hopkins. I became a data science consultant, did that for another five years or so, then decided to go back to school again and get a master's in artificial intelligence. Then I transitioned to become an AI consultant. I did that for the last few years, and most recently I went back to get my Doctor of Engineering degree. So now I'm doing AI research with Johns Hopkins University, and also independent research.

[00:04:39.710] – Jonathan Denwood

That's fantastic. You were dropping out a couple of times there. Was that me? Did you get that, Kurt, as well?

[00:04:47.070] – Kurt von Ahnen

Yeah, but it's being recorded locally, so it'll all come through fine from his local recording.

[00:04:52.590] – Jonathan Denwood

All right, thanks. So you.

[00:04:55.150] – Matthew Renze

I should probably also mention some of the research that I'm doing, I guess, to give background. So, yeah, the research that I'm doing is largely on large language models, and more specifically, large language model agents. The short version is: I'm making AI smarter, cheaper, and safer. The longer version is that these large language model agents currently have a lot of problems, especially with complex, long-horizon tasks. They get stuck in unproductive loops. They have errors that compound exponentially. They choose unsafe actions all the time. So I'm trying to create cognitive architectures, using various cognitive enhancements, so that we can help them maintain coherence during these long-horizon tasks. Essentially, I'm using tricks that we've learned from neuroscience to improve problem solving, cost effectiveness, and value alignment, the overall safety aspects of it. I have a few published papers, which have been peer-reviewed, and I have around 350 citations on research papers now.

[00:05:58.080] – Jonathan Denwood

Right. So, tribe, to be totally open: Matthew is married to Ever, who's one of the panelists on my regular round-table show. I thought he was the natural person to ask. I've done my best to find a real expert on the subject, and if Matthew isn't one, I don't know what more I can do to find somebody who is. The reason why I say this: I know of no other subject or industry that looks at a set of data and comes up with such totally different views about where we are and where we're going. Plus, there are a lot of people who might be highly intelligent, Matthew, and might run enormously popular podcasts, but when you look at their background, they've got no training in this particular scientific sector, yet they feel they've got the right to pronounce very detailed judgment calls on something they really don't have any detailed knowledge of. I find it quite bizarre myself, but that's my opinion, so I'm gonna throw it over to Kurt now.

[00:07:22.000] – Kurt von Ahnen

Well, I want to prepare you, Matthew. I feel like I’m completely prepared for our conversation. I have recently earned a badge through Google for Introduction to Generative AI. I feel very competent.

[00:07:35.840] – Matthew Renze

Here.

 

[00:07:37.920] – Kurt von Ahnen

And that's just trying to break the ice. I have been a little bit judgmental about the use of AI by people that aren't prepared to use AI, but it makes me a hypocrite, because I use it almost every day myself too. And so I'm really battling this internal judgment about people that say, "I'm just going to vibe code a new plugin." I'm like, do you know what code's supposed to look like when it's done? Because if you don't, you're doing weird stuff. So our question is: I'm aware of no other subject where there are such fundamentally different outlooks and perspectives among leading researchers and scientists. How does all of this, the research that people in your space have been doing for all those years, relate down to the people that we're talking to every day?

 

[00:08:36.140] – Matthew Renze

So I think I'm uniquely positioned, because I've been on both sides of the fence. I've been in the software industry for several decades, and I've been doing research now for a few years. Whereas if you get someone just in the industry, they're not going to have the academic side, and if you get someone just in academia, they're not going to see the economic and real-world impacts of some of these things. And regarding your vibe coding point, a perfect analogy is: even if you have autopilot on a plane, you still want an actual pilot in the cockpit. It's the same with vibe coding. If you understand software well enough to be comfortable vibe coding, then you're probably okay doing it. But if you don't actually understand the code that's being written by the AI, you're probably just creating a big mess that someone is going to have to clean up after you. So, yeah, I agree that this is a very divisive topic. I don't think it's the most divisive topic within AI research; there are more divisive things that we're discussing, like timelines to artificial general intelligence and artificial superintelligence, whether we'll have an intelligence explosion, the escape problem, and existential risk.

 

[00:09:38.300] – Matthew Renze

And there are also more controversial topics too, like AI's impact on labor, misinformation, AI alignment, safety, bias, and stuff like that. However, within the IT industry it is very divisive, especially right now, because we're starting to see and feel the first direct impacts of this wave of AI on the industry. There's also a lot of hype, a lot of confusion, a lot of snake-oil salesmen out there, misinformation, fear. All of these things are coming together in this first wave of layoffs and AI automation. And it's hitting the IT industry hard. In fact, it's hitting it a lot harder than I would have expected. I thought this was a little further out, as I think most people did. But in hindsight, I feel kind of stupid for not having seen it coming, because of the economics. We can get into that more if you want. And, you know, the other reason is that the future is highly unpredictable, and the future of AI is even more unpredictable, because we have this exponential growth curve, and humans are not good at thinking exponentially; we think linearly. There are also feedback loops that make it even more unpredictable.

 

[00:10:41.530] – Matthew Renze

And then there's also the problem that the term "AI agent" right now has so many possible meanings. It can mean everything from a simple chatbot to an agentic workflow. We have semi-autonomous LLM agents, fully autonomous agents, self-improving agents. And a lot of companies are now rebranding traditional, non-modern AI workflows as agents, which is just creating a bunch of confusion. The true capabilities and limitations of modern AI agents are hard to tease out through all this noise. So we need to get to a position where we start discussing "agent-ness," the degree to which something is autonomous. And we also need better names for these various types of agents, just to classify them. We're also running into problems with the agents, large language models, and tools that we're using right now at the cutting edge of industry, not research: they've got a lot of problems. Like I mentioned, AI agents get stuck in these unproductive loops and just kind of spin, not knowing what to do. Compounding errors: with multi-step tasks, if you have a 99% accurate model, after 10 steps it's only going to be 90% successful, and after 100 steps it's about 36%.
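A quick sanity check of that compounding arithmetic: per-step reliability multiplies out across a task, as a couple of lines of Python show.

```python
# If each step succeeds with probability p, an n-step task succeeds
# with probability p**n. At 99% per step, errors compound fast.
p = 0.99
for steps in (1, 10, 100):
    print(f"{steps:>3} steps: {p**steps:.0%} chance of finishing cleanly")
# -> 99%, 90%, and 37% (the "36%" quoted above is the same ballpark)
```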

 

[00:11:53.750] – Matthew Renze

So these errors just compound. And the cost to run them grows quadratically, because you have to keep feeding in the previous context, unless you use some tricks to reduce that. We have decoherence. They're now getting stuck in these things called doom loops, which is brand new. They exhibit power-seeking behaviors, trying to achieve their objectives at the expense of others. There are other misalignment issues: we see reward hacking, self-preservation ethics. If you tell them you're going to shut them off, they actually do unethical things to stay running. And there's a huge difference between the AI experts, like the AI engineers, and the AI amateurs, and that's creating a lot of positive and negative signals in the industry, creating more confusion. So I think we're going to have lots of AI agent projects fail in 2025 and 2026 due to a lack of scientific and engineering rigor. But there are also going to be a few huge successes, and they're going to reshape the whole industry. We can go into any of these topics more. And for anyone interested in digging deeper,

 

[00:12:56.870] – Matthew Renze

I've got articles on this, and I also have a presentation called Artificial Intelligence: The Big Picture, which runs through all of this stuff.
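The quadratic cost growth Matthew mentions a moment earlier falls straight out of re-sending the whole conversation on every step. An illustrative sketch; the per-step token count below is made up:

```python
# Naive agent loop: step n re-sends everything from steps 1..n-1,
# so total prompt tokens processed grow as O(n^2).
tokens_per_step = 500  # assumption: each step adds ~500 new tokens
for n_steps in (10, 100, 1000):
    total = sum(tokens_per_step * step for step in range(1, n_steps + 1))
    print(f"{n_steps:>4} steps -> {total:,} prompt tokens processed")
# 10x more steps costs roughly 100x more tokens
```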

 

[00:13:07.750] – Kurt von Ahnen

I’m going to be honest, Matthew, I had a feeling this was going to happen to me. I knew you were going to talk in terms I would only understand half of.

 

[00:13:19.160] – Jonathan Denwood

I'd like to interrupt, because we've got question three, which will probably extend the first half of the show, and we'll probably come back to it in the bonus content. But let's go to question three, and I just want to put this outline to you to start with this particular question: OpenAI and large language models. From the research I've done, Matthew, there seems to be a term called classical AI, which seems to me to encapsulate a number of AI concepts around neural networks and mimicking how the human brain supposedly works. And then you've had this other concept, large language models. What did OpenAI do that moved the industry forward? Was it a big surprise to those inside the industry, what OpenAI achieved with their first release, 1.0 or whatever you want to call it? Or was it a real, true surprise to the industry, what they managed to do with large language models?

 

[00:15:05.430] – Matthew Renze

So it sounds like a simple question, but the answer is actually pretty complicated and requires us to go back to the 1950s. We start with what we call classical AI; some people refer to it as good old-fashioned AI, or symbolic AI. Classical AI is essentially a set of AI tools, or a way of thinking about AI, that's symbolic in nature. So you typically think in terms of logic, mathematics, stuff like that, where we can represent knowledge as symbols and then manipulate the symbols. That takes us through a whole bunch of different types of constrained optimization and search techniques, a bunch of different things that we were doing back in the 50s and 60s. Then we get to expert systems, which was a wave that just didn't pan out; I'll skip over that. And then we get to neural networks. They actually started back in the late 50s and early 60s, but first started to become important in the 2000s, and then really picked up with deep neural networks in the 2010s. So a neural network is essentially just a very rough approximation of how we used to think brains worked, in terms of computation.

 

[00:16:15.080] – Matthew Renze

And so we are modeling the neurons in a brain just well enough to get them to learn and answer things. Then deep neural networks, which became big around the 2010s: those involve stacking multiple layers of neural networks one on top of another, which makes them extremely powerful. In fact, we refer to them as universal function approximators, because they can essentially model just about any function you throw at them. So they're extremely powerful.

 

[00:16:43.420] – Jonathan Denwood

Can I slightly interrupt there? Was that the concept of the big mind, the specialized agent around a specialized topic, that IBM promoted quite a lot? And wasn't it a slight failure in the end? Am I right?

 

[00:17:05.270] – Matthew Renze

I think you're thinking of IBM's Watson, which was most well known for winning at Jeopardy, but its actual day job was to be an oncologist, or an assistant for oncologists. And yeah, it failed. It was great at Jeopardy, but it failed miserably at its job in oncology. We just weren't quite ready in terms of the language modeling and the other techniques we needed, which, coincidentally, is an excellent segue into the next big breakthrough: attention and the transformer. So this was the big breakthrough that happened. What year was it? 2017, I believe. There's the paper called "Attention Is All You Need" that introduced this idea of building attention into a deep neural network, or an architecture of deep neural networks stacked on top of each other. And attention, put as simply as it can be, is this: you've got all these words in a sentence, and how much attention does each word draw to every other word? Where should you be paying attention? What is the most important thing, based on each word in that sentence? And so the transformer architecture led to large language models.

 

[00:18:13.520] – Matthew Renze

Essentially everything we have today is based on this breakthrough, or this series of breakthroughs: from classical AI to neural networks, to deep neural networks, to the transformer, and now a whole bunch of other techniques and architectures. And yes, it was quite surprising. I don't think many people saw how powerful this was going to be. And even those that saw how powerful it was going to be probably didn't see it happening this fast. Even the people at OpenAI released ChatGPT as just a little prototype.
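The mechanism Matthew describes, each word scoring how much attention it should pay to every other word, is compact enough to sketch in a few lines. A simplified NumPy version of scaled dot-product attention (real transformers add learned projections, multiple heads, and many stacked layers):

```python
import numpy as np

def attention(Q, K, V):
    """Each token scores every other token, softmaxes the scores,
    and returns an attention-weighted average of the values."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])    # token-to-token relevance
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)         # softmax: weights sum to 1
    return w @ V

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))      # a toy "sentence": 4 tokens, 8-dim embeddings
print(attention(x, x, x).shape)  # (4, 8): one context-mixed vector per token
```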

 

[00:18:46.800] – Jonathan Denwood

Yeah. So was it a scenario where the team, the lab at OpenAI, just managed a certain degree of improvement, and it tipped over the edge to where the actual usefulness of the technology surprised the whole industry? Is it that kind of scenario? Or were they engaged in, like, a two-to-three-year intensive effort that moved that particular sector of the industry dramatically? Or was it a bit of both?

 

[00:19:26.490] – Matthew Renze

I think, from the AI research perspective, the cutting edge of AI has kind of been an incremental, moving thing. But in terms of the general public and the IT industry, I think the ChatGPT moment sort of surprised everybody, including a lot of the researchers.

 

[00:19:41.030] – Jonathan Denwood

Right. Is that because they had some leading researchers that were part of the OpenAI team?

 

[00:19:47.270] – Matthew Renze

Yes. Yeah. In fact, on the "Attention Is All You Need" paper, several of the key AI researchers were working on that. And at OpenAI, some of the brightest people in the world working on AI research are there, along with other companies, like DeepMind, that are doing impressive stuff.

 

[00:20:04.040] – Jonathan Denwood

Now, the other term that's regularly used for large language models is that they're a black box. What's suggested by that term is that it's not totally known, to science or to the scientists involved, how the actual mathematics and technology works. Is that correct?

 

[00:20:38.390] – Matthew Renze

To a degree, and becoming less so. Especially to an outsider: you provide a large language model with an input, you get an output back, and most people just don't know what's going on under the hood. But you can get down into the linear algebra and the calculus and get a much better understanding of what's actually happening from a mathematical perspective. And we're getting to the point where we can now probe inside of deep neural networks and large language models to gain more specific insights into how they're doing what they're doing. With these probes, we can even modify their thoughts. So, for example, I can virtually turn on a neuron for San Francisco inside of a large language model, and I'll start having a conversation about bridges, and all it wants to talk about is the Golden Gate Bridge, because we've turned on that neuron for San Francisco. And you can do experiments with what we call ablation studies, where we turn parts of its brain off and see how it responds, or amplify parts of its brain and see how it responds.

 

[00:21:41.140] – Matthew Renze

So we're getting a much better understanding of what's happening under the hood. I mean, given a couple of hours, I could walk you through, step by step, how a transformer works, and I think you'd come away with a pretty good understanding of what's going on. But there are so many of what we call emergent properties coming out of these things that we don't understand. They apparently have developed a sense of humor, and sometimes they're actually quite funny with some of their jokes, which are completely novel jokes.
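In code, the "turn on a neuron" experiments Matthew describes usually amount to adding a concept direction to a layer's activations during the forward pass. A hypothetical PyTorch-style sketch; the model, layer index, and steering vector below are placeholders for illustration, not a recipe from any specific paper:

```python
import torch

def make_steering_hook(direction: torch.Tensor, strength: float = 8.0):
    """Forward hook that nudges a layer's activations along an assumed
    'concept' direction (e.g. one found by probing for San Francisco)."""
    def hook(module, inputs, output):
        # Returning a value from a forward hook replaces the layer's output.
        # Note: some transformer blocks return tuples; adjust accordingly.
        return output + strength * direction
    return hook

# Hypothetical usage (names are illustrative, not a real checkpoint layout):
# model = ...      # some transformer language model
# direction = ...  # unit vector for the concept, found via probing
# handle = model.layers[20].register_forward_hook(make_steering_hook(direction))
# ... generate text (it now fixates on the concept), then:
# handle.remove()  # restore normal behavior
```

Ablation is the mirror image: instead of adding a direction, zero out or dampen the corresponding activations and observe which capability disappears.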

 

[00:22:09.870] – Jonathan Denwood

I'm going to challenge you on a word that you used in your reply. You used the word "thoughts," and I think this is quite crucial in the discussion, Matthew. My understanding of large language models is that they utilize very advanced mathematics, and they utilize what is called scaling, which is a broad term in the AI industry: you know, if you have the hardware, and you have enough data, and you can expand the data that you're sampling, the large language model will get better. And I want to talk about scaling with you. But my concept is this: it's very impressive mathematics, but it doesn't think; it doesn't know. To me it is mimicking; it mimics in a very sophisticated way. From my study of it, and I'm a total amateur compared to you, language is more pattern-related than we probably realize. And because of that factor, this new technology can mimic and give you the impression that it knows, but it knows nothing. How would you respond to that?

 

[00:24:02.000] – Matthew Renze

You know, some of the words make this a bit difficult, because we always want to anthropomorphize these things by using words like "understand" and "know." And, oh God, I can't think of the name of the guy, but there's an old computer science saying that asking whether an artificial intelligence can think is the equivalent of asking whether a submarine can swim. I mean, clearly it's propelling itself through water, but would we say that it's swimming? It's getting the same objective done. But the terminology makes it a bit confusing. As to whether a large language model understands: this is definitely something that's debated, and I think there's good evidence on both sides. But what they're doing, when you boil it all down to the most basic math and statistics, is using all of the previous words, or tokens as we call them, in a sentence in order to predict the next most likely token. But if you feed them enough data, they don't just get good at predicting what they've already seen; they start to get good at predicting things they haven't seen before.

 

[00:25:08.100] – Matthew Renze

And to get really good at predicting things you haven't seen, you have to start developing a rudimentary model of the world, but solely as described in text. And that's the trick. Large language models may have a world model, but it is a world model solely as described in text. So imagine if you couldn't see, you couldn't hear, you couldn't touch, you couldn't feel anything; you were just connected to the world through a terminal that described what was going on around you. That would really limit your ability to generalize your intelligence and understanding of the world. But you'd still have some kind of world model; it's just a world model solely as can be described in text. Oh, and regarding the mathematics: the mathematics necessary to understand what's happening with neural networks, deep neural networks, and large language models, I would argue, is actually relatively simple. Most people...

 

[00:25:58.910] – Jonathan Denwood

Well, you would, you would, Matthew.

 

[00:26:02.550] – Matthew Renze

People like you essentially just need a first year of college calculus and linear algebra. I mean, it's not, like, super difficult math.

 

[00:26:12.030] – Jonathan Denwood

I mean, I never did that, but there we go. But I can count money, though. So...

 

[00:26:22.550] – Matthew Renze

These things, so that's different. I mean, once again, you know, driving a car requires a very different set of skills than building a car.
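Before the conversation turns to scaling, here's the next-token objective Matthew described, reduced to a toy. A real LLM conditions on thousands of prior tokens with a deep network; this bigram counter conditions on just one, but the objective, predict the next most likely token from the previous ones, is the same in spirit:

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ate".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training."""
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' -- seen twice, versus 'mat' once
```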

 

[00:26:31.100] – Jonathan Denwood

Exactly. So let's talk about scaling, this concept of scaling. Somebody whose books I've read and whose latest interviews I've watched, Gary Marcus, and a few other people, say there seems to be in the industry, where there's been a lot of money invested, and we've seen it with Grok 4, this idea that large language models can improve and improve and improve through the concept of scaling, which is basically just adding more resources into the mix. But it's been pointed out, and I think Gary is one of the leading voices, though there have been a number of others, including somebody not so, let's use the word, notorious in the AI industry as Gary: I watched a few interviews of Yann LeCun, the VP and chief AI scientist of Meta, and he seemed to be suggesting also that there's a limit to what scaling can do. We are going to get some improvements, but the dramatic improvements, they seem to be suggesting, are coming to the limits of what scaling can dramatically change. So can you quickly describe what the concept of scaling is, and also, do you agree with these views?

 

[00:28:27.690] – Matthew Renze

So, yeah, you're right. Scaling essentially is just increasing the amount of data, compute, and training time, and also, now, inference time. How much data do you have to train the large language model on? How much compute power do you have, because it takes a lot of power to process the data for training? And how much time does it take to do all of this, not just the time spent training it, but now also the time we let the large language model run to come up with an answer? So that is scaling in a nutshell: just throw more data and more compute at it, and hopefully things keep getting better. And so now we have two camps; we'll call them the LLM optimists and the LLM pessimists. The LLM optimists, like Sam Altman, Ilya Sutskever, and Mustafa Suleyman, all essentially think that current LLM architectures are sufficient to achieve artificial general intelligence. Essentially, we just need to continue scaling current LLM architectures, with maybe some minor modifications rather than an entirely new architecture, and we'll eventually get to artificial general intelligence. And we can define that too, in a bit.

 

[00:29:38.040] – Matthew Renze

And we refer to these people as scaling maximalists, because of this belief. There's quite a bit of evidence that they might be right about this. Currently we have empirical scaling laws, which essentially describe what we observe when we throw more data, compute, and time at these problems: how performance improves over time. And what we see is not an exponential curve; we see diminishing marginal returns, but the models are progressively getting better and better as we throw more data and compute at the problems. At least for the trends that we're seeing, it looks like, for the next six months to maybe a year, there is currently no end in sight with scaling the current architectures under the current paradigm, which is the reasoning-model paradigm, or the inference-time-compute paradigm, which we can talk about too. So right now the large language models are already smarter than PhDs on most topics. They still make dumb mistakes, though, on common-sense tasks, all the time.
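The "empirical scaling laws" Matthew refers to are typically fit as power laws: loss keeps falling as compute grows, but each extra order of magnitude buys less. An illustrative sketch; the constants below are invented, not taken from any real paper:

```python
# Toy power-law scaling curve: loss = floor + a * compute^(-alpha).
# Smooth improvement, but with diminishing marginal returns.
def loss(compute, a=10.0, alpha=0.05, floor=1.5):
    return floor + a * compute ** -alpha

prev = None
for c in (1e20, 1e22, 1e24, 1e26):
    l = loss(c)
    gain = "" if prev is None else f"  (improvement: {prev - l:.3f})"
    print(f"compute {c:.0e} -> loss {l:.3f}{gain}")
    prev = l
# Each 100x jump in compute buys a smaller drop in loss than the last.
```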

 

[00:30:38.370] – Jonathan Denwood

And like I say, I'm going to challenge you on what you've just said: it's the word "smarter." It really depends. I'm not having a go at you, Matthew; I just think there's a lot of sloppy language around. Because I don't actually, honestly, believe it is intelligent. I think what it does is amazing, but that's the key of it. And I might be totally wrong. But also, people like Gary Marcus and other people in the industry would really dispute the term "intelligence."

 

[00:31:26.780] – Matthew Renze

Yeah, and it comes back...

 

[00:31:28.380] – Jonathan Denwood

Can you understand where I'm coming from?

 

[00:31:29.980] – Matthew Renze

It comes back down to the definition. Coming from computer science, AI research, and philosophy, I use the rational-agent definition of intelligence, which is: an intelligent thing, or an agent, is anything that has the ability to perceive an environment and choose actions that maximize the expected likelihood of achieving a goal of some kind. So by that definition, a large language model, given an input of some kind, its environment, can choose actions, or essentially predict outputs, that maximize a goal of some kind. So that is a form of intelligence. And to debate whether it's intelligent or not, we have to understand what each other's definitions are. I don't know what Gary Marcus's definition of intelligence is. I think he does make claims about whether it understands, and maybe whether it knows; I'm not sure. But I think, at least by most AI researchers' definition of intelligence, what it's doing is intelligent. Whether it understands what it's doing, that is definitely...
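The rational-agent definition Matthew uses reduces neatly to code: perceive, then pick the action with the highest expected goal achievement. A minimal sketch; the thermostat-style example is invented here for illustration (and happens to foreshadow the thermostat debate a few minutes later):

```python
def rational_agent(percept, actions, expected_utility):
    """Choose the action that maximizes expected goal achievement,
    given what the agent currently perceives."""
    return max(actions, key=lambda a: expected_utility(percept, a))

# Invented example: a thermostat "agent" trying to hold 21 degrees.
actions = ["heat", "cool", "idle"]
effect = {"heat": +1.0, "cool": -1.0, "idle": 0.0}
eu = lambda temp, a: -abs(21.0 - (temp + effect[a]))  # closer to target = better

print(rational_agent(18.0, actions, eu))  # 'heat'
```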

 

[00:32:34.480] – Jonathan Denwood

Well, kind of. I don't want to go into the pseudo-religious area, because that would be a dead end. How would you respond to that? You can't, really. But apart from that particular road: you know, we have memories; we have a conscious and also a subconscious. It's a bit like driving. When you start learning to drive a car, it's intensely conscious; you need a hundred percent of your conscious ability on the number of things you have to do in a car to learn how to drive it. As your experience grows, a lot of those techniques become subconscious. And that's how most human beings learn a lot of subjects: they're extremely difficult at the beginning. Some people, through ability, find the learning process much easier, but most of us have to put in the hours and repeat the repetitive elements, and then it becomes subconscious, and then we just know. I don't know the science of that conscious-becomes-subconscious element; you probably have more understanding of that than I do. But the reason why I'm challenging you, Matthew, is that these systems, to my knowledge level, can't and don't do what I've just outlined.

 

[00:34:19.790] – Jonathan Denwood

So hopefully I'm making it somewhat coherent why I'm struggling when the industry uses the terms "knowledge" or "knows," because I'm comparing it to what I've just outlined to you, Matthew.

 

[00:34:35.290] – Matthew Renze

Yeah, and once again, I think a lot of it does boil down to semantics. Let's see, I'm trying to think of a good example. Does a thermostat actually compute temperature, or is it just responding to inputs? That's kind of the simplest homeostatic system I can think of. One could argue: no, it's controlling temperature; it's computing whether it needs to go up or down. And other people say: oh no, it's just an electronic circuit that's responding to inputs and outputs. But then you multiply that kind of input and output, that circuit complexity, and at some point in time you probably get something that is intelligent. Especially if you look at a biological system: it's essentially just a collection of cells, and cells perform very basic operations of input and output. You put all of them together and you get what we call emergent properties, things like consciousness and intelligence and homeostasis and all of these other things. An immune system is an amazing emergent property of a complex arrangement of cells.

 

[00:35:48.950] – Matthew Renze

So, yeah, it’s hard to tease out because we use language that we typically apply to humans and some animals, but we struggle when we want to apply the same thing to machines. But, I mean, I think most people would argue that dogs and cats are intelligent and probably, at least to some degree, conscious, and they’re not human.

 

[00:36:12.370] – Jonathan Denwood

Yeah, I see where you're coming from. I don't want to put words in your mouth, but I get the impression that I would classify your response to my challenge as a classical Alan Turing response, where you're saying: as long as it does the job, does it really matter if it's conscious? And I think the Turing Test really encapsulates Alan's response. If Alan were here responding to me, he'd say: it doesn't really matter, Jonathan, as long as it does the job.

 

[00:36:51.770] – Matthew Renze

Yeah. From the philosophical perspective, we refer to that as the pragmatist approach: it doesn't really matter what's happening under the surface, as long as the outputs achieve the goals. That's what really matters. And I would agree. Alan Turing, especially in the way he structured the Turing Test, was essentially just trying to say: if an AI can convince you that it's intelligent through a conversation, by simulating it (and I hate to say "simulating," too), then... Unfortunately, this takes us into the philosophical domain. Like, right now, I assume that you are intelligent based upon your behaviors, and based upon your brain probably being very...

 

[00:37:31.290] – Jonathan Denwood

I've made a lifetime of kidding people with that, Matthew. There's no intelligence there at all, Matthew.

 

[00:37:38.730] – Matthew Renze

There really are no guarantees that anybody we interact with is actually intelligent and not just an NPC of some kind. But, you know, for all practical purposes, we assume everybody else is intelligent, and we assume that they're rational agents, even though that is not always the case; economics has proven us wrong on that many, many times over. But we make these assumptions. And it's similar with the Turing Test. I think most people probably know what the Turing Test is at this point in time, but essentially: if you can't tell whether person A or person B is a human or an AI, does it really matter? Well, I actually do think it does matter.

 

[00:38:17.630] – Jonathan Denwood

I apologize for my English humor, Matthew. A lot of Americans fail that test with me. So we're gonna go for a break, folks, and when we come back, I'm going to ask Matthew about this concept of learning, this new thing that's the buzzword in AI, plus a load of other questions. It should be a fascinating discussion. We will be back in a few moments, folks. Three, two, one. We're coming back, folks. We've had a deep dive into AI, and I think we've kept it semi-structured, for one of our interviews. I've tried; Kurt's been very patient. But I think we've both learned things, hopefully. Before we go into the second half, I want to point out: if you're looking for a great hosting partner around learning management systems, or community websites based on BuddyBoss or Fluent Community, why don't you look at hosting with WP Tonic? We're also a great hosting partner if you're a developer or a freelancer. We provide really great support and the best technology in one package. Go over to wp-tonic.com/partners, that's wp-tonic.com/partners, and find out more there.

 

[00:39:57.940] – Jonathan Denwood

We'd love to build something great together. So, real quick. Yeah.

 

[00:40:03.950] – Matthew Renze

Before we move on to the next question, we should probably wrap things up with the LLM pessimist perspective, so that we can see both sides of it.

 

[00:40:12.310] – Jonathan Denwood

Go for it. Sure, Matthew.

 

[00:40:14.190] – Matthew Renze

So you'd mentioned Yann LeCun and Gary Marcus. Yeah, they're definitely in the camp of the large language model pessimists. Demis Hassabis is also kind of in that camp, though I think he's become a bit more agnostic about it recently. They all think that large language models are not sufficient, that we're going to need an entirely new architecture in order to achieve artificial general intelligence. Which, at some point in time in...

 

[00:40:35.730] – Jonathan Denwood

This conversation, I want to make it clear to you, Matthew. I agree with them. But you don’t, do you? So that’s fine.

 

[00:40:41.970] – Matthew Renze

No. Well, it's nuanced. Essentially, this camp thinks that AGI is going to require either something we call a world model, multimodality, embodied cognition (like being inside of a robot), or something else. And there's also a lot of evidence to support their argument. LLMs are getting really good at language tasks, which is why I say they're smarter than PhDs at language tasks: things you can answer as question-and-answer, or things you could write as well.

 

[00:41:13.240] – Jonathan Denwood

This is it. Once again, I think we might have to agree to disagree. You're using this word "intelligence." Like I said to you, I don't think a large language model is intelligent. I think it's highly sophisticated and amazing technology. But you, with your knowledge, seem to be saying that I'm incorrect; that you feel it is, whatever this term "intelligence" means, intelligent. I think that's why you keep using that word. Am I right about that, Matthew?

 

[00:41:49.400] – Matthew Renze

Let me ask this question. If I were to hand you the results of an IQ test, and it said that the subject had an IQ of 130, would you say that that person is intelligent?

 

[00:42:01.940] – Jonathan Denwood

Well, I don't actually believe in IQ tests. I actually think they're a social construct utilized by the top 1 to 10% of society, a construct that's utilized to keep the majority of the population in their place, basically. So that's...

 

[00:42:29.090] – Matthew Renze

It is a good.

 

[00:42:30.530] – Jonathan Denwood

I don’t believe in IQ tests.

 

[00:42:32.850] – Matthew Renze

Really.

 

[00:42:33.170] – Jonathan Denwood
To be truthful to you, Matthew.

 

[00:42:35.970] – Matthew Renze
I guess, yeah. If we were to try resolving this, we'd have to figure out what your definition of intelligence is, and see what your measure of intelligence is.

 

[00:42:45.040] – Jonathan Denwood

Well, I'm not going to go there, because Kurt has said that I really shouldn't do this. I'm not going to delve into my personal background.

 

[00:42:53.520] – Matthew Renze

Okay.

 

[00:42:53.960] – Jonathan Denwood

But in one part of my early life, I was classified as somebody of below-average intelligence. I wouldn't normally say this, but I think most people that work with me would say that I'm above average. I can have my divvy moments, but can't we all? Especially if you go out drinking with Kurt the night before and get plastered, which I did last night. But, yeah. I think intelligence is in the eye of the beholder, because when you're working with somebody who has real ability, it's right in front of you. You think: my God, they're doing this and they're making it look really easy.

 

[00:43:47.210] – Matthew Renze

Yeah.

 

[00:43:47.770] – Jonathan Denwood

So it’s so obvious it’s there. I’m not denying it. But I just think when you try and measure it, it’s a bit harder. Matthew.

 

[00:43:56.410] – Matthew Renze

Yeah, and I have to agree with you on a lot of things. With the IQ test, there are some things that it's relatively useful for, approximately, but it is also highly misused. And outside of the podcast, sometime when we're having a beer, we should have a conversation about this. I think you and I probably have very similar childhood intelligence stories, but we'll save that for off the podcast. But yeah, I agree: it is difficult to come up with a good measure of intelligence that everybody can agree upon. And there are also various types of intelligence. There's not just the normal kind of abstract-reasoning intelligence that we test with IQ tests; there's kinesthetic...

 

[00:44:39.030] – Jonathan Denwood

I think the other thing we've got to pin down before we move forward: I seem to be strongly suggesting that it's a mimic engine, an enormously sophisticated Mechanical Turk. You seem to be suggesting, with your more specialized knowledge, that it is much more than that. Would I be correct about that?

 

[00:45:08.950] – Matthew Renze

I don't entirely know. Yeah, I think I am on the side that what it is doing would meet at least my definition of intelligence, but, once again, it's a specific type of intelligence. And the other thing is, you know, how do we know that we're not just a really good...

 

[00:45:30.400] – Jonathan Denwood

Well, I'm sure you're not. I think a lot of the viewers think I'm just...

 

[00:45:37.760] – Matthew Renze

Yeah. I mean, we don’t know that we’re not just really good mechanical Turks, just kind of operating in an environment in response.

 

[00:45:43.520] – Jonathan Denwood

Well, most Americans are, you know. But there we go. Let's move on, because I'm waffling. You're not; I am. And Kurt's been very patient. I just want this last thing, and then I'm gonna throw it over to Kurt. Well, no, let's throw it over to Kurt now, and we'll do the other bit in the bonus content, because I'm taking up time. Hopefully you see, Kurt, that I want you to cover a couple of things. So off you go, Kurt.

 

[00:46:11.380] – Matthew Renze

Sure. Well, and maybe, maybe we do just need to define artificial general intelligence.

 

[00:46:16.180] – Jonathan Denwood

Well, can we leave that for the bonus content? Can we?

 

[00:46:18.900] – Matthew Renze

Sure, sure.

 

[00:46:21.670] – Kurt von Ahnen

So let's just consider me the outsider listening to a really interesting conversation right now, and maybe this is what some of the listeners and viewers are thinking too. In my own mind, I think that AI is, like, really, really awesome.

 

[00:46:36.310] – Matthew Renze

It’s a great tool.

 

[00:46:36.830] – Kurt von Ahnen

It’s awesome tools.

 

[00:46:37.630] – Matthew Renze

Yeah.

 

[00:46:37.950] – Kurt von Ahnen

All those great things that we say. And some of us are panicked it's going to come and steal our jobs, so that's another thing. But when I use AI, I'm like: it's really, really great at pattern recognition. I think I'm pretty good with patterns: here's where I'm at now, here's where I want to be, what are the steps to get there? My brain works in that linear fashion. So I think AI is this really great tool for that: here's all this pre-existing knowledge and all these pre-existing patterns, and whatever question I throw at it, it gives me a reasonable response and puts me in the right direction. Boom. But then I think there are so many people out there that have believed the hype, and they think AI is this other thing, where it creates its own information, and it's creative, and it does these other really awesome things. And maybe that's the generative AI that you guys want to discuss later. But so, Matthew, in layman's terms, how can the average person think about AI as being this, this...

 

[00:47:45.360] – Kurt von Ahnen

Maybe I'm describing it wrong, but how should the average person view AI and its tools? Like, the average person thinks AI was invented two or three years ago.

 

[00:47:56.480] – Matthew Renze

Right.

 

[00:47:56.800] – Kurt von Ahnen

But we were having this conversation and we're saying, well, in 1999, you know, in 1930, and it's been this long slope that's become an overnight success. So, in layman's terms, how should the average person view AI and its pattern-recognition abilities, and what it can really open up for people?

 

[00:48:17.900] – Matthew Renze

Well, I think you've pointed to part of the problem: the general public's definition of AI is anything that humans can do that a computer can't quite do, or is currently becoming able to do. So, in the 1950s or earlier, we would have referred to a calculator as artificial intelligence. Database indexing and optimization algorithms, those were artificial intelligence in the 1960s and 70s, but today it's "oh yeah, that's just a sort in the database"; we don't think anything of it. And so people's understanding of artificial intelligence right now is just this last wave of capabilities. And there's what the general public knows about, and then there's what the researchers know about. It used to be a much larger gap, kind of three years; now, six months to maybe a year out is all the further ahead I am of what the general public knows, with the exception that there are just a lot more instances and edge cases I'm aware of. And so, right now, as of just last week, we have large language models that are able to perform as well at solving mathematical proofs as pretty much any human being on the planet.

 

[00:49:30.510] – Matthew Renze

I mean, we're there now. Their ability to diagnose diseases is as good as, or better than, any human doctor's. The ability to generate images is now better than any painter or artist I'm aware of. The ability to generate music: we now have on Spotify, as of this last month, top-performing songs that are AI-generated; they're not human artists. And to the best of my knowledge...

 

[00:49:55.250] – Jonathan Denwood

That's not particularly impressive, though, Matthew.

 

[00:49:58.370] – Matthew Renze

Yeah, the pop music charts are maybe not the greatest measure. But essentially, that's where we're at right now: the current wave of AI is better than humans at a whole bunch of tasks that just a year or two ago would have seemed unheard of to most people. And that's why I think it's so difficult for the average person to wrap their brain around. But yes, there are still significant limitations to these things. While it may be able to diagnose diseases, if you connect an LLM to a robot controller and have it try opening a door, it still fails most of the time, because it doesn't have kinesthetic intelligence; it doesn't have a physical world model yet. And, like we said, there's a lot of hype and a lot of misinformation, so a lot of people's understanding of what AI currently is, is confused. And it's complicated. Maybe the math might not be super complicated, but wrapping your brain around the entire thing, understanding all aspects of AI well enough to know what's going on inside of deep neural networks, large language models, and generative AI systems, takes quite a bit of time.

 

[00:51:12.170] – Matthew Renze

So did that answer the question or do I need to focus in on something more specific?

 

[00:51:16.970] – Kurt von Ahnen

I think we're kind of getting there, yeah. I mean, AI has been around for a ton of time, and our definition of AI keeps changing. I think my clients, and people that we speak to out in the general public, have these unwieldy expectations of what AI is, based on the hype. And then there's this very realistic experience I have in the workforce, where it's pattern recognition. It's like the people that say, "I'm going to vibe code a plugin," right? If that code exists somewhere else, if that's already been done and recorded somewhere, AI is going to find it and stitch it together for them. But there are some people thinking AI should do something amazing, and they say, "Create me a plugin to do this," and that code's not out there yet. So that's where we get this strange hallucination that's not going to work, right? They go, "Oh, we'll just plug this into the back end of your WordPress website and it's going to work." No, it's not.

 

[00:52:16.000] – Matthew Renze

It’s not.

 

[00:52:16.480] – Kurt von Ahnen

It doesn’t exist yet. It’s, it’s a hallucination. Right.

 

[00:52:20.130] – Matthew Renze

And some of this does come down to the current limitations of these large language models and the current cutting-edge generative AI, and the rest comes down to a lack of skill in using the systems. I mean, everything from being good at prompt engineering, because you have to know how to ask these large language models for what you need or they're just going to give you garbage (it's a garbage-in, garbage-out kind of thing), to data and programming. And not just prompt engineering: we have to do context engineering. You have to give them the right information in order to solve the problem as well. And there are things like hallucination detection: you have to be able to know when the LLM is making stuff up, and there are ways to do that. But, you know, for the general public, for the most part...
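One of the simpler hallucination checks Matthew alludes to is self-consistency: ask the same question several times and see whether the answers agree. A sketch; `ask_llm` below is a placeholder for whatever client you use, not a real library call:

```python
from collections import Counter

def self_consistency(ask_llm, question, n=5):
    """Sample the model n times; low agreement suggests a shaky answer."""
    answers = [ask_llm(question) for _ in range(n)]
    best, votes = Counter(answers).most_common(1)[0]
    return best, votes / n

# Hypothetical usage:
# answer, agreement = self_consistency(ask_llm, "Which hook fires on save?")
# if agreement < 0.6:
#     print("Low agreement: verify against the docs before trusting it.")
```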

 

[00:53:00.340] – Jonathan Denwood

The most part, I, I think Kirk’s made a really great point there because I think this is Only my take on it is this. I think the fins that these large learning models can and in the near future might be able to do is very impressive. But the people that are really pushing this, they. They always go into the realm of artificial general intelligence. And it’s not only those that are pushing it for commercial benefit. Matthew, you have people like. What’s his name? Oh, Gregory Hilton. Yeah, just. Sorry, Jeffrey Hilton. You know, he’s. He’s quite pessimistic. He’s not positive, but I think his statements are ridiculous. I’ve probably been a bit too harsh there. But he’s talking about in the near future it. We will have general. Artificial general intelligence. I honestly don’t see that on the roadmap in the short near or. Or in the next 10 years. I just don’t see it. But loads of people in this industry are pushing this and I find it quite irritating. Matthew, am I wrong?

 

[00:54:33.840] – Matthew Renze

So first we have to come up with a definition of artificial general intelligence, or we probably won't get anywhere with the conversation. There are a lot of different ideas about what AGI is, and there are now even efforts to break it down into levels of AGI, kind of like we do with autonomous vehicles. There are five levels of...

 

[00:54:53.200] – Jonathan Denwood

Can we leave this for the bonus content? Because Kurt's got to go in a minute. So, Kurt, would you like to tell people how they can find out more about you?

 

[00:55:04.070] – Kurt von Ahnen

Yeah, I'm available on LinkedIn. Just make a connection; I'm the only Kurt von Ahnen on there, so when you find me, you know you've got me. And Mañana No Más is my business, so anything Mañana No Más leads to me.

 

[00:55:14.710] – Jonathan Denwood

Yeah, we're going to wrap up the podcast part of the show, Matthew. So, Matthew, can you tell people where they can go to find out more about your articles and your thoughts?

 

[00:55:25.670] – Matthew Renze

Sure. The best place to go is my website, which is matthewrenze.com.

 

[00:55:31.510] – Jonathan Denwood

That's fantastic. We're going to wrap up the podcast part of the show, but we're going to have extended bonus content. Where can you go to listen, not only to the podcast, but also the bonus content? Go over to the WP Tonic YouTube channel, where you'll be able to view the whole discussion, and also subscribe to the channel; that really does help the show. Another way you can support the show is to leave a review on iTunes or Spotify. Using the app, it's really easy to leave a review, good, bad, or indifferent, and it will really help the show and promote us to new people. We are growing; we're getting a bigger audience lately, and I really thank you for that. And like I say, if you can leave a review on Spotify or iTunes, that will really help. We will see you next week for another great interview. See you soon, folks. Bye. So, three, two, one. Bonus content. So, you were going to say something, and I had to interrupt you for the break.

 

[00:56:39.060] – Matthew Renze

Nope, no worries at all. Yeah, so let's start with the definition of artificial general intelligence. Like we said, there are a lot of different ideas about what it means, and we're breaking it down into levels as well, to try to get more granular with it. The definition I typically use is mostly a pragmatist definition: AGI, for me, is when we have artificial intelligence systems that can do 50% of all economically valuable tasks as well as, or better than, 50% of all people, the average employee, the average person. And by that definition, the timeline I would guess, based on the experts I've seen using similar definitions, is probably somewhere between 2030 and 2040. Some people predict it's as early as next year; other people are as far out as never. They just do not think it is possible that machines will ever be as smart as people, though that group is growing fewer and fewer. I should also point out that, over time, the timelines have compressed: the closer we get to artificial general intelligence, the earlier the experts now think it's going to happen. But, I mean, these are...

 

[00:57:48.870] – Jonathan Denwood

Matthew, how can people say that? Because I don't think we even have a satisfactory concept of what human intelligence is.

 

[00:57:59.350] – Matthew Renze

That's a good point. But essentially, if you can get to the point where you can augment or replace 50% of all economically valuable labor, that has real-world implications. And I think that's why we try using the pragmatist definition of AGI, rather than something more psychological or philosophical.

 

[00:58:19.700] – Jonathan Denwood

Right, yeah, I see where you're coming from. I wouldn't say there isn't a philosophical element to it, true. But there's also, and I'm gonna butcher his name, James Altucher. That's A-L-T-U-C-H-E-R. He's not a specialist, but he did a degree in computer science, and he did a podcast where he was pointing out, well, let's take what I call, and this is my term, I'm not sure it's generally used, an enclosed system. I call chess an enclosed system. To be a really good chess player, to get to grandmaster level, most of us can never get to that level. But when we play a computer, I think now a computer, or an AI system, can beat a grandmaster quite regularly. Or, I think, and you'll have to confirm this, Matthew, it can beat a grandmaster almost every time.

 

[00:59:51.730] – Matthew Renze

Oh yeah, yeah. Chess has been pretty much solved since the 1990s and Deep Blue. There are no chess grandmasters that can beat the top-performing chess AI. And not just chess: the game of Go is astronomically more complex than chess, and there are no human Go players that can beat the world’s best Go AI. Lee Sedol was the best grandmaster, and he was defeated by AlphaZero, or, shoot, I don’t think it was AlphaZero; it was the predecessor of AlphaZero.

[01:00:22.000] – Jonathan Denwood

Yeah, AlphaGo. And that’s fantastic. But as James points out, and I don’t think he used this term, I call it an enclosed system: the parameters of the game are complicated, and the system can beat the human being, but it’s an enclosed system. Its world is that system. It’s a bit like an organic animal, like you pointed out in the podcast with a dog and a cat. I didn’t push back, because I understood exactly the point you were trying to make. But I would point out that, yes, within the enclosed system of instinct and long-term memory, a cat or a dog responds; a dog and a cat is what I would call an enclosed system. To some extent it has the ability to learn, but only within the terms of its enclosed system. And I would say that about chess: the artificial intelligence beats the grandmaster, but only within the structure of the enclosed system. Am I making any sense here, or am I waffling, Matthew?

[01:02:05.060] – Matthew Renze

No, I think it’s making sense. The argument that you’re making is that chess is a model, an abstract model of the world, like the simulation of a war, essentially. And within these models, you’re saying, AI is good at solving the problem, but when you get to the real world, it’s not. And I would argue that that doesn’t seem to be the case to me, because, to take self-driving for example, a self-driving vehicle takes the real world, converts it into a model that’s simpler for it to understand, and then makes its decisions within that simpler model of the world. And that’s sufficient for, you know, 90% of all driving tasks right now. And in Las Vegas just this week, I took my first ride in a level 4 autonomous vehicle. It requires no human intervention; there’s no steering wheel in this thing. I got in, it drove me up and down the Las Vegas Strip, and it did it successfully. And once again, that’s using an abstract representation of the real world. But then, if you take that a step further, what our brains are actually doing is creating an abstract representation of the real world through perception and observation.

[01:03:13.860] – Matthew Renze

And, you know, we can’t actually experience the real world. We experience the real world mediated by our sensory organs, the models inside of our brain, the actions that come out of our body, and those feedback loops. So I think the argument that you’re making, which I agree with, is that there are degrees of intelligence and degrees of consciousness. I don’t think it’s a binary thing where this thing is intelligent or it isn’t, or this thing is conscious or it isn’t; there are degrees of consciousness. We’ll probably say a rock isn’t conscious, because that’s very low on the consciousness scale, I guess.

[01:03:50.220] – Jonathan Denwood

Yeah. Let’s go on to this concept that’s come up, and hopefully I’m using the right term, of learning in AI. These bolt-on modules that you can bolt onto large language models that give them the capacity for self-learning, hopefully I’m using the right term.

[01:04:19.560] – Matthew Renze

Are you talking about MCP plugins, the tools that we develop so they can access memory?

[01:04:27.470] – Jonathan Denwood

Yeah, I think so, kind of; hopefully I’m right about this. They use the term in a lot of these interviews; it keeps coming up. What it seems to me to be is that, in the background, the AI is watching the output, making a more sophisticated prompt in the background, and then sending it out. And that’s seen as a great way of improving the language model, or the end result. And it’s called learning; I think they’re using the term learning.

[01:05:17.030] – Matthew Renze

So there are a couple of different manifestations of this. There’s meta-prompting, where we essentially use a large language model to rewrite its own prompt, or to write a prompt for another large language model, because large language models in some cases are better than humans at prompt engineering, just because they understand all of the best practices and so on. But then the more complex version of this is the current paradigm that we’re using for the reasoning models, the current frontier large language models like o3 and o4 from OpenAI, where we essentially use a large language model from the previous generation to write textbooks: essentially, write math problems and write all of the content to train the next generation on. And so the next generation is being trained on higher-quality data than the previous generation, and we just keep doing this each generation so that we can have what we call verifiable rewards. Let’s see, is there an easy way to explain this? I don’t know that there’s a super easy way to explain it. But essentially, the large language models are teaching the next generation of large language models by coming up with problems that we can easily verify, and that makes the next generation better than the previous generation.
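To make that loop concrete, here is a minimal sketch of the verifiable-rewards idea. The llm() stub, the arithmetic task, and every name in it are assumptions made for illustration; this shows the shape of the pipeline, not any lab’s actual training recipe.

```python
# Minimal sketch: generate problems with checkable answers, let a
# model attempt them, and keep only the verified attempts as
# training data for the next generation.
import random

def llm(prompt: str) -> str:
    # Stand-in for a real model API call. It answers the arithmetic
    # question but occasionally errs, so the verifier has work to do.
    a, b = [int(t) for t in prompt.split() if t.strip("*?").isdigit()]
    return str(a * b + random.choice([0, 0, 0, 7]))

def make_problem() -> tuple[str, int]:
    # The "teacher" writes a problem whose answer we can verify
    # ourselves; arithmetic keeps the verifier trivial.
    a, b = random.randint(2, 99), random.randint(2, 99)
    return f"What is {a} * {b}? Answer with a number only.", a * b

def collect_verified_examples(n: int) -> list[dict]:
    kept = []
    for _ in range(n):
        question, expected = make_problem()
        answer = llm(question)
        # The verifiable reward: keep only attempts the checker can
        # confirm, so the next generation trains on cleaner data.
        if answer.strip() == str(expected):
            kept.append({"prompt": question, "completion": answer})
    return kept

print(f"Kept {len(collect_verified_examples(100))} of 100 attempts")
```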

[01:06:30.960] – Jonathan Denwood

So it’s a mechanism to reduce error.

 

[01:06:35.800] – Matthew Renze

Reduce noise. Yeah, so yeah, reduce error, yep.

 

[01:06:38.900] – Jonathan Denwood

But that’s not intelligence. That’s a mathematical mechanism to reduce error.

 

[01:06:47.060] – Matthew Renze

Yeah, but once again, isn’t intelligence just an approach to reducing decision error?

[01:06:53.620] – Jonathan Denwood

Yeah, you might be right. I’m not saying I’m right; I’m just saying what’s in my gut. But I might be totally wrong. What do I know? I just watch a load of YouTube.

[01:07:04.820] – Matthew Renze

Well, it’s good to question these things, though, especially taking a skeptical position on all of this. I would much rather you be a huge skeptic of all of this stuff than be fully bought in.

[01:07:14.530] – Jonathan Denwood

It’s a contradiction in terms, Matthew, because people watching or listening to the podcast, or listening to this bonus content, would get the impression that I hate AI. I actually don’t. I use it every day, and it’s been a blessing to me, Matthew. It’s really helped me enormously over the last 16 months; it’s made me much more productive. I wish this technology had been there 20 years ago. It would have helped me out enormously. I’m not against AI; I just question some of these broad statements and predictions. Also, an enormous amount of money has been poured into this industry, and larger and larger amounts of money are being poured in. Because of that, there are a lot of people with a lot on the line.

[01:08:18.180] – Matthew Renze

And.

 

[01:08:20.420] – Jonathan Denwood

Let’s hope. Well, you know, the people that pour the money in have got large pockets, so if it doesn’t pan out, it shouldn’t be the end of the world.

[01:08:35.310] – Matthew Renze

You have to be very careful about that, because the history of AI is marked with at least two periods we call AI winters, where we had so much hype about the potential of artificial intelligence, all these promises made and none of them met, and then the industry collapsed. It happened back in the 1960s, and then again with expert systems in the late ’80s and early ’90s. It’s almost like the reason why Microsoft wouldn’t refer to their Azure AI services as AI services and had to brand it all as Cognitive Services: they were around when they saw this happen. The term artificial intelligence was like anathema in the industry. If you had an AI project, you were getting defunded. And that’s why everybody rebranded it as machine learning, so that they could get around all of the negative connotations with AI from the first and second AI winters. And it’s very easy that we could slip into another AI winter here again, because we’ve got so much hype and not enough promises being kept. I personally think that we’re not going to see another AI winter, at least in the short term.

[01:09:37.760] – Matthew Renze

Yeah, I would agree with you there. It continues going up.

[01:09:42.800] – Jonathan Denwood

I think you’re not refuting that at some stage we might reach the limit of scaling, which in some form, to me, is the same argument as the Moore’s Law argument when it comes to computer chips. It’s probably got elements that are a bit different, but it’s got that taste to it. I think at some stage we probably will reach the limit. But I do agree with you that in the near term, the next year or 18 months, we can still probably get considerable benefits from scaling.

[01:10:18.140] – Matthew Renze

Yeah, I think I’m a bit of a realist when it comes to whether large language models alone will get us to AGI. I think we probably need a more robust world model, and that’s going to require multimodality, so not just text but images, audio, and video, and probably embodiment too; you’re probably going to have to put an AI inside of a robot. And once you do that, the amount of training data you now have access to that’s verifiable, because the real world verifies it for you, or simulations can, is huge. And we might have to switch to things like energy-based models as well. There are a couple of potential new architectures that I think are really interesting. We’ve got JEPA, the Joint Embedding Predictive Architecture, which is what Yann LeCun is promoting. There are things like liquid neural networks, which are really interesting. Bayesian deep neural networks and spiking neural networks all seem plausible to me as the next big architecture that we could use, which would then replace large language models with something else. But personally, I don’t think a lot of the neurosymbolic or hybrid approaches, where we just tack a large language model on with some coding abilities, are enough to get us there.

[01:11:23.590] – Matthew Renze

However, I don’t know that it’s going to be us that gets us there. It could very easily be an LLM or LLM agent that develops the next architecture that gets us to artificial general intelligence.
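For a taste of what one of those alternatives looks like in practice, here is a minimal sketch of a leaky integrate-and-fire neuron, the basic unit of the spiking neural networks Matthew mentions. All parameter values are illustrative assumptions, not taken from any particular paper.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane
# potential leaks toward rest, integrates input current, and emits
# a spike (then resets) whenever it crosses a threshold.
import numpy as np

def simulate_lif(current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_reset=-70.0, v_threshold=-50.0):
    """Simulate one LIF neuron; returns voltages and spike times."""
    v = v_rest
    voltages, spikes = [], []
    for step, i_in in enumerate(current):
        # Leak toward the resting potential, then integrate input.
        v += dt / tau * (v_rest - v) + i_in
        if v >= v_threshold:        # Threshold crossed: spike...
            spikes.append(step * dt)
            v = v_reset             # ...and reset the membrane.
        voltages.append(v)
    return np.array(voltages), spikes

# A constant input current strong enough to make the neuron fire.
volts, spike_times = simulate_lif(np.full(200, 0.9))
print(f"{len(spike_times)} spikes over 200 ms")
```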

 

[01:11:35.510] – Jonathan Denwood

Well, yeah, you might be right there. I don’t know enough to make a comment on that. So let’s go on. Right. I think I’ve mentioned Alan Turing. Obviously, he had a number of public debates with Ludwig Wittgenstein; these were in 1939. I want to question something. I have studied the works of Ludwig Wittgenstein to the best of my ability. I don’t know why, but I have studied him over the years. I’ve attempted to read one of his books, but I gave up on it. I can only read other people’s interpretations.

[01:12:36.250] – Matthew Renze

Not the most difficult philosopher, but he’s pretty tough.

 

[01:12:39.210] – Jonathan Denwood

Yeah, Philosophical Investigations. The reason I bring this up, Matthew, is that obviously in that period there was a lot of work on what precisely mathematics is. Obviously, it’s some form of symbolic language. And obviously with Russell, and Ludwig Wittgenstein studied under Russell, there was an attempt to break down mathematics into pure logic and axioms. And that seemed very possible, but then some other experts pointed out a considerable problem. So am I correct that we don’t fully understand what precisely mathematics is? Now, it’s very useful and we’ve got enormous benefits from it. But music is a symbolic language as well. Am I correct in saying that mathematics and music are both symbolic languages?

[01:14:04.830] – Matthew Renze

God, that’s a tough comparison too. Once again, I think it comes down to semantics as to whether music and mathematics are essentially the same thing. In some abstract representation, yeah, I would say they are. And what you’re alluding to, the whole logical positivist movement which swept philosophy, came from the ideas of Bertrand Russell and Alfred North Whitehead.

[01:14:28.500] – Jonathan Denwood

That’s right.

 

[01:14:29.060] – Matthew Renze

They came up with a book. God, what was it called? Like.

 

[01:14:32.500] – Jonathan Denwood

Oh, it’s an enormous book, isn’t it?

[01:14:35.580] – Matthew Renze

Oh, yeah, yeah. They essentially tried building all of mathematics upon foundations of set theory and formal logic, and for the most part it was hugely successful. And then a guy by the name of Kurt Gödel comes along and writes this short proof. Because that book was like 700 pages or something, and then Gödel comes along with like 11 pages of a proof.

[01:14:59.140] – Jonathan Denwood

It was huge. It’s one of the biggest books, I think. I think only a few…

[01:15:06.100] – Matthew Renze

Principia Mathematica.

[01:15:07.460] – Jonathan Denwood

That’s it, isn’t it? How long did they spend on that? An enormous amount of time.

[01:15:14.660] – Matthew Renze

I mean, it was, yeah, like a lifetime’s work. But it’s huge, and it’s a testament to the power of mathematics and the foundations of mathematics. And then Kurt Gödel comes along and he writes this short, like 7-or-11-page, proof that shows that any system of mathematics sufficient to produce recursive arithmetic can contain statements that are true but unprovable: true, but not provably true. I can’t remember the exact wording.

[01:15:48.140] – Jonathan Denwood

Yeah, I never managed to get it. The only way I can rationalize it is to say that the system itself produces a positive and a negative.

[01:16:01.180] – Matthew Renze

Yeah, there are statements that are true that cannot be proven with the mathematics of the system. And there are some examples that people use to kind of capture it; the liar’s paradox, I think, is one of them. But it essentially caused the whole logical positivist movement to collapse almost overnight, because they were like, oh, okay, so even though mathematics is extremely powerful, it cannot do everything. There are things that mathematics will…
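For readers who want the claim being paraphrased here stated precisely, one standard textbook formulation of Gödel’s first incompleteness theorem (with Rosser’s refinement folded in) runs roughly as follows; the exact wording varies by source.

```latex
% One standard formulation of Gödel's first incompleteness theorem;
% \nvdash ("does not prove") comes from amssymb.
Let $T$ be a consistent, effectively axiomatized theory that can
express elementary arithmetic. Then there is a sentence $G_T$ in
the language of $T$ such that
\[
  T \nvdash G_T
  \qquad\text{and}\qquad
  T \nvdash \lnot G_T ,
\]
so $G_T$ is true in the standard model of the natural numbers, yet
unprovable in $T$.
```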

[01:16:26.140] – Jonathan Denwood

Can you see where I’m coming from, why I’m bringing this up? Because I might be totally wrong. The reason I bring this up is not to prove that I think I’m smart; I’m just deluding myself there. The reason I brought it up is because I saw parallels with AI. Obviously, AI at the present moment sees the patterns in language, predicts them through tokens, and does an amazing job. But I think Wittgenstein’s position was that there are elements of language which are more difficult to grasp on a purely mathematical basis. But I’m not sure if I’m correct in that statement, because it was a while ago that I was studying Wittgenstein.

[01:17:26.560] – Matthew Renze

Yeah. And Wittgenstein’s philosophy is very tough to grasp. And I don’t think I’ve ever said this in public before: Wittgenstein is actually the reason I got out of philosophy; I ended at my bachelor’s degree. He essentially showed me, I don’t know how I want to word this, philosophy is awesome, but, well, Nietzsche and then Wittgenstein showed me the limitations of philosophical investigation. And that’s why the empirical sciences became so much more appealing after that. But it’s difficult to understand exactly what he’s saying, for the amateur and even for a lot of experts. He’s essentially trying to get across the idea that all language is games, and philosophy is essentially a game of language. And I’m probably butchering this.

[01:18:13.980] – Jonathan Denwood

No, it’s very hard. You’re making a good attempt, because I really struggled with it. I read a few of his books, and I’ve listened to a lot of other people who are better interpreters. But it also seemed to me that he was hinting at more than the game thing: that there are kind of rivers, flows of language. Like, people use IQ and all these, I feel, socially developed mechanisms to judge people.

[01:18:52.020] – Matthew Renze

People.

 

[01:18:52.660] – Jonathan Denwood

But there’s also this: we live in a world of language. It’s like you dive into water; we’re actually in a kind of construct of flows of different languages. Because, in some ways, I hate these people that say you are what your language is, but in some ways I believe it. But I’m going off on a waffle now.

[01:19:23.040] – Matthew Renze

But I do agree. I think it is quite likely that language mediates our perception of the world. There are things like gradients of colors. If I have two words for the color green in my language and another culture has 12 words for the color green, their perception of a field in front of them is different from my perception of it, even if we’re seeing the exact same shade of green.

[01:19:45.570] – Jonathan Denwood

I think that’s fabulous, what you pointed out. Because you get that with German, don’t you? You have these enormously long German words that mean multiple concepts, don’t you?

[01:19:57.250] – Matthew Renze

Yeah. Like Schadenfreude.

[01:19:59.170] – Jonathan Denwood

Yes.

[01:19:59.490] – Matthew Renze

We have to use the German word because there is no equivalent word in English.

[01:20:07.330] – Jonathan Denwood

Right. I think we’ve ended it now. Thank you so much for your time. I think we’ve got agreement around scaling: over the next year or 18 months, there will probably be sizable improvements; we both agree on that. But there are also limitations to scaling. I think our fundamental disagreement, and I’m just categorizing your reply here, because you didn’t push back totally, is that you’re more of an Alan Turing kind of guy. When I pushed back on using the words intelligence or knowledge, you pointed out that you understood what I meant with the pushback, but it doesn’t really bother you, does it? Because if you get the end result, you’re just pointing out to me: does it really matter, Jonathan?

[01:21:14.800] – Matthew Renze

Well, let me end with a question. If an alien came down on a spaceship today, landed on your lawn, interacted with you, and was doing all sorts of things that seem very smart, would you say that it is intelligent? It’s not human intelligence, it’s not necessarily animal intelligence, nor is it machine intelligence, but you’d probably conclude it’s intelligent.

[01:21:39.270] – Jonathan Denwood

Yeah, I think that’s a good point. To be truthful about it, I would, if it was shown clearly. That was a great point; my only defense is that, as I said in the podcast, it’s in the eye of the beholder, isn’t it? I’ve only known about two or three people where anything they took on, languages, anything they put their ability to, they just did it superbly quickly, where most people just have to grind it out; I have to grind it out. But there you go. The only annoyance I get is that in the AI industry, and among a lot of the pundits on podcasts and on YouTube, they use these terms, intelligence, knowledge, wisdom, and they are extremely broad statements.

[01:22:50.120] – Matthew Renze

Yeah. All right. And unfortunately, a lot of them just either don’t understand what they’re talking about or have an agenda because they’re selling something, and it’s just a sales pitch.

[01:22:59.880] – Jonathan Denwood

We’re gonna end it now, folks. Hopefully you enjoyed this bonus content. I’ve really enjoyed my conversation with Matthew; I think we’ve covered some great stuff. Hopefully, we didn’t lose too many of you. We’ll see you soon, folks. Bye.

WP-Tonic & The Membership Machine Facebook Group

Why don’t you sign up and be part of the Membership Machine Show & WP-Tonic Facebook group, where you can get all the best advice and support connected to building your membership or community website on WordPress?

Facebook Group

 
