We Interview Matthew Renze Data Science Consultant, Author and Public Speaker
Matthew Renze is a data science consultant, author, and public speaker. Over the past two decades, he’s taught over 300,000 software developers and IT professionals. He’s delivered over 100 keynotes, presentations, and workshops at conferences on every continent in the world (including Antarctica). His clients range from Fortune 500 companies to small tech startups around the globe.
Matthew is a Microsoft MVP in AI, an ASPInsider, and an author for Pluralsight, Udemy, and Skillshare. He’s also an open-source software contributor. His focus includes artificial intelligence, data science, and machine learning.
Jonathon: Welcome back folks to the WP Tonic show. It’s episode 550. We’ve got a fantastic guest. We’ve got Matthew Renze with us. He’s a data scientist and consultant who knows everything about artificial intelligence and big data. And he’s been very gracious to say that he will come on the show. It’s going to be a little bit different, but we’re really going to delve into this world. Because, I’m sure like you, I’ve been hearing about artificial intelligence and big data for the past five years. Literally every tech story has some element of big data or artificial intelligence. Before I introduce him, I’d like to introduce my co-host Steven Saunders. Steven, would you like to quickly introduce yourself?
Steven: My name is Steven Saunders, I am from zipfish.io. We specialize in making WordPress faster and optimizing the servers.
Jonathon: That’s great. And before we go into the main part of the interview, I’d like to talk about one of our sponsors: Kinsta hosting. Kinsta is a specialized WordPress hosting company. It has been supporting WP Tonic for a couple of years now. Why should you be interested? Well, it provides really fantastic hosting for your projects and for your clients’ projects. Especially if you’ve got WooCommerce or a large learning-management membership website, you need quality hosting, and they’re one of the better providers in the field. They only specialize in WordPress. All their hosting is on Google Cloud. They provide a customized interface, all the technical bells and whistles, plus great support. That should be interesting for you and your clients. So go over to Kinsta and have a look at their packages.
I suggest that you buy one for yourself or for your clients. And the main thing is: tell them that you heard about them on the WP Tonic show. So let’s go straight into the interview with Matthew. So Matthew, thank you so much for coming on the show. Like I say, you are an experienced scientist specializing in artificial intelligence and big data. I think we’ll just go straight into the questions, because we’ve got a limited amount of time. So I’m going to ask a really broad and semi-difficult question, but if you can, just give a very quick outline, because I think the other questions will go into more detail, don’t you think? So what is the difference, quickly (this is a ridiculous question in a way, but hopefully you understand why I’m asking it), between human intelligence and machine learning and AI?
Matthew: Well, I’ll try to do it as quickly as possible, but you are right, it is a pretty big question. So we have a distinction between artificial intelligence and natural intelligence. The main distinction is that natural intelligence is essentially organic-based intelligence, and artificial intelligence is any machine that we use in order to simulate intelligent thought. Then we have to define intelligence, and the rational-agent definition is essentially: any agent that perceives its environment and chooses actions that maximize the expected likelihood of achieving a goal of some kind. Within organic intelligence, we have human intelligence, we have natural animal intelligence, and we also have collective intelligence. And within artificial intelligence, we have artificial narrow intelligence, artificial general intelligence, and artificial super intelligence. And then machine learning is essentially just a subtype of artificial narrow intelligence that is based on math and statistics.
Jonathon: That’s great. Before I throw it over to Steven: the British scientist Alan Turing had this test where, if a machine could mimic a human being, you could then classify that machine or entity as having almost human-level consciousness. How does that fit into what you’ve just said?
Matthew: Well, when you think about it, when we judge other human beings as being intelligent or not, we’re essentially performing a Turing test. We’re making assumptions first off, and then we’re asking them questions. And the responses that we get from them help us to determine whether this person is actually intelligent and rational and conscious, or whether they’re just, you know, a robot on the other end.
Jonathon: There are a lot of people that think that of me actually.
Matthew: So when Alan Turing was proposing this, he wasn’t saying that this is the end-all-be-all criterion for determining whether a machine is conscious or not. He was essentially trying to come up with a heuristic for us to gauge whether a machine is producing intelligent responses or not. And I think most people would say that they would believe a machine is actually intelligent when it can explain exactly why it’s doing what it’s doing in a way that appeals to human intellect. If it can do that, most people would say, yeah, it’s probably conscious. And if it’s not able to do that, then it’s pretty clear it’s most likely not conscious. I mean conscious in the sense of metacognition: understanding its own thoughts, behaviors, and actions, and being able to explain or articulate them in a way that is intellectually satisfying to us as human beings.
Jonathon: Great. Over to you Steven.
Steven: So what does that look like? What’s an example of something that would be intellectually satisfying to a human being? What does that tangibly feel like, I guess?
Matthew: Well, to me, for example, if a machine is unable to explain why it determined that a certain type of image contains a diagnosis of cancer, we would say that it clearly doesn’t understand it. It’s just a function; it’s mapping an input to an output and giving us an answer. But if it could articulate exactly why it believes that this is cancer versus not cancer, and human doctors listen to this explanation and say, yeah, it clearly understands that this is cancer because of this and that, then it meets all of our needs from an intellectual standpoint. I think that’s what they would say is satisfying that criterion for explainability.
Steven: Interesting. So then what about neural networks? Because the classic thing that I hear about neural networks is that they’re kind of a black box. Like, nobody really knows what’s going on. You send in enough data, you get your result out, and then, how did it come up with that? Who knows?
Matthew: Yeah. And right now neural networks are largely a black box. We train them with a set of data, and then we give it a new input and it produces an output. And in some cases we have bits of explainable AI that we can use in order to try to tease out why, but it’s not explaining its behaviors of its own volition. Essentially, we’re kind of coding the explanations in. So we’re giving it an input and we’re getting an output, plus another output, which is kind of like a diagnostic explanation. If that makes sense.
Steven: As far as neural networks go, when I think of them, I often think of them as identifying things or predicting things, like those two different areas, because that’s where all AI sits. But are there other questions out there besides, like, “this is an apple,” or “if I throw this ball, it’s going to land over there”? Predicting and identifying. Are there other questions that people are trying to answer outside of those two primary, very basic ideas?
Matthew: Yeah. So you can use neural networks for a lot of different things. In general there are two categories of machine learning: there’s analysis and there’s synthesis. Analysis is where we’re taking a complex, high-dimensional set of data and reducing it down to a simplified or more abstract version. So you take an image and you produce a label: this is a cat. Or you take an image of a person’s skin and you say, this is cancer, this is not cancer. And you can take audio and say, well, this sound is a gunshot or not. Those are classification examples. Or you could also do what we call regression. Regression would be like: you show it a bunch of houses and their sales prices, and then for a new house it has never seen before, you predict how much it’s likely to sell for.
And so we have classification and regression, and other areas like anomaly detection: which data are different from the normal data? We do cluster analysis, where we’re looking at the similarities between types of data and putting them into clusters. We can also do things like recommendations, which are very common. Those are all analysis tasks. On the synthesis side, we’re essentially generating complex, higher-dimensional data from more simplified inputs. And most people don’t even know that a lot of this stuff exists, at least in research, right now. But you can type a description like, you know, “I want a blackbird that has yellow stripes, in mid-flight,” and we can actually generate an image of what that description says that looks realistic. We can do it with human faces as well. You can just describe a female with blue eyes, blonde hair, age 23 approximately, and you can synthesize a face that matches that. Or you can draw sketches, and from those sketch lines it will produce a face. And even more complex stuff: we can synthesize video now, too. You feed it what we call semantically labeled images, and it will produce real-time video that looks like an actual car driving down the street.
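The classification-versus-regression distinction Matthew draws can be pictured with a toy sketch in Python. Both functions below are invented stand-ins for trained models (the rules and numbers are hypothetical, not from the interview); the point is only the shape of the output: a discrete label for classification, a continuous number for regression.

```python
# Hypothetical stand-ins for trained models, for illustration only.

def classify_fruit(weight_g, smoothness):
    """Classification: the output is a discrete label."""
    # A hand-written rule standing in for a trained classifier.
    return "apple" if smoothness > 0.5 and weight_g < 300 else "not apple"

def predict_house_price(square_feet):
    """Regression: the output is a continuous number."""
    # A made-up linear relationship standing in for a trained regressor.
    return 50_000 + 120 * square_feet

print(classify_fruit(180, 0.9))    # a category
print(predict_house_price(1_500))  # a dollar amount
```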
Steven: Right. Crazy! Well Jonathon, back to you.
Jonathon: So I’m going to jump slightly off the predefined questions that I managed to cobble together with my limited knowledge of this subject. When you talk about neural networks: the way I saw them, with my primitive knowledge level, was that you put in probabilities, just like throwing dice, and progressively you train the artificial intelligence to go through layers of yes-or-no probabilities, and then it would come out with an answer. I’m mumbling there. And then you mentioned reductionism, which is a fundamental part of science. But I was wondering about game theory and neural networks. Am I correct in saying that game theory is a form of mathematics? Are the two kinds of things linked together? Were neural networks developed as a way of helping get answers around game theory, or are they totally divorced, Matthew?
Matthew: Well, I think there are two questions in here. We’ll start with the neural networks and then we’ll go on to game theory. So first off, with neural networks, I’ve got to admit that there are a lot of people in academia, I think, that are making overly complex attempts to explain these to the general public. And you’ve got people that I think do a good job of explaining these things to the general public, because they’re not trying to inundate you with mathematical terminology and stuff like that. So the most basic way I can explain what a neural network is, is using a metaphor from biology: essentially the metaphor of an actual human brain. Some people will get angry with me for even using the metaphor, but I think it works well in order to conceptualize what’s going on.
And that’s originally where neural networks came from. They were based on this kind of crude understanding of how we thought brains worked back in the 1950s and sixties. So you think about it as a set of neurons, just like a human brain, connected with synapses. You’ve got an input coming into the neuron, and it will fire if the input is above the threshold for activation, and then it goes on to the next layer, and the next layer, and the next layer. And we have some networks where the neurons fan inward and some where they fan outward; you can think of that in terms of the analysis and synthesis metaphor that we’re using. And so you are essentially training a neural network to learn a mapping from an input to an output using this kind of conceptual model of a brain.
So we train it with data. We give it a bunch of examples of the inputs and a bunch of examples of the outputs; we’re giving it the answer while we’re training it. And then over time it’s essentially learning this function. So that’s the biological conceptualization of it. But in the real world, we actually model all of this mathematically. Mathematically, a neural network is what we call a universal function approximator. It’s essentially estimating a function: given an input, which is numbers (three-dimensional, four-dimensional, or many dimensions), we’re using linear algebra in order to compute an output from that input. Linear algebra is the same thing you’d get if you were a freshman or sophomore in an engineering class in college; it’s not super complicated math. In fact, I think most high school students could do linear algebra quite easily. And then we use basic calculus to train the weights of the neural network during the training process. And that’s just providing an estimate of the output from a purely mathematical or statistical standpoint. Once again, it’s just a function: a mapping of an input to an output. But when we’re representing this brain, if you will, or this neural network mathematically, we’re doing it with mathematical operations and a bunch of numbers that we store in an array, which we call a vector.
So whenever you hear about feature vectors and stuff like that, that’s essentially just an array of numbers. You’ve got number, number, number, number, number. And if we have it in two dimensions, we call that a matrix. And if we have it in more than two dimensions, we call that a tensor, which is where TensorFlow gets its name. It’s just an n-dimensional array of numbers. And the training is gradient descent and the chain rule: stuff you learn as a freshman, essentially, in calculus.
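Matthew’s “it’s just linear algebra” point can be made concrete with a tiny forward pass written out by hand in plain Python. The weights below are arbitrary illustrative numbers, not a trained model; running a network is just matrix-times-vector plus an activation function, layer by layer:

```python
import math

def matvec(matrix, vector):
    """Matrix-times-vector: the core linear-algebra operation."""
    return [sum(w * x for w, x in zip(row, vector)) for row in matrix]

def sigmoid(values):
    """A common activation function, applied element-wise."""
    return [1 / (1 + math.exp(-v)) for v in values]

x = [0.5, -1.0, 2.0]             # input feature vector (an array of numbers)

w1 = [[0.1, 0.4, -0.2],          # a 2-D array of numbers is a matrix
      [0.3, -0.1, 0.2]]
w2 = [[0.5, -0.5]]

hidden = sigmoid(matvec(w1, x))       # first layer
output = sigmoid(matvec(w2, hidden))  # second layer
print(output)                         # a single number between 0 and 1
```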
Jonathon: Yes. Well, Matthew, you are using words that are instilling fear in my heart: calculus. That never stuck with me because I’ve got dyslexia. I’m very good at business math; I’ve always been good at that. I’m very good at the concepts, as most dyslexics are, but I couldn’t give them the answers they were looking for, so that was the end of me. I’m going to go back, because I’ve put the questions out of order and probably confused Steven. But Steven should be used to that by now. So let’s go on. This concept of weights, which applies to neural networks: I presumed, as I was doing my research, that it’s a kind of measurement of the actual output. Am I right, or totally wrong there?
Matthew: Well, so when you’re thinking about a neural network, there are essentially three kinds of concepts. We’ve got weights, we’ve got biases, and we have activation functions. And so the weights you can think of as the strength of the connection between each neuron in a neural network. The larger the number, the stronger the connection; the lower the number (typically in a negative direction), the weaker the connection, or I guess it’s a reverse connection. And so the weights are essentially representing the strength between the neurons.
And then each neuron has what we call an activation function, which essentially represents when the neuron’s going to fire. So if there’s a little bit of input, is it going to fire right away? Or does it take a lot of input before it fires? That’s kind of controlling when the neuron itself fires, and the weights are in between these neurons. And then we’ve got something called a bias, which essentially just shifts this activation function either to the left or the right. You can think of it as increasing or decreasing the sensitivity of the neuron itself. And then all of these weights we’re essentially just setting during the training process, using a technique that we call backpropagation, which I’d be happy to explain if you’d like.
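The three concepts just described can be sketched with a single neuron in Python. All numbers are illustrative: the weights are the connection strengths, the bias shifts the neuron’s sensitivity, and the sigmoid is the activation function deciding how strongly it fires.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum: larger weights mean stronger connections,
    # negative weights mean inhibitory (reverse) connections.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Sigmoid activation: maps the total to a firing strength in (0, 1).
    return 1 / (1 + math.exp(-total))

inputs = [1.0, 0.5]
weights = [2.0, -1.0]

print(neuron(inputs, weights, bias=0.0))   # baseline sensitivity
print(neuron(inputs, weights, bias=-3.0))  # negative bias: harder to fire
```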
Jonathon: That was going to be our next question. So I think we will leave that for after the break, and we’ll come back with it. Hopefully we haven’t totally lost the audience. I think it has been an interesting discussion; stay with us. I’m sure you are going to be enlightened. We’ll be back in a few moments, folks.
Announcer: LaunchFlows turns your WooCommerce website into a selling machine. We make it easy to create gorgeous sales funnels, friction-free checkouts, order bumps, upsells, down-sells, and much more. Gain full control over your buyer’s journey, from the top of your WooCommerce sales funnel all the way to the bottom. Best of all, you can use your favorite page builder, such as Elementor, Divi, Beaver Builder, Gutenberg, or one of the high-converting templates we’ve included inside. Get rid of the clunky WooCommerce shop pages and checkout process in favor of an optimized buyer flow that instantly increases conversions and makes you more money. LaunchFlows provides one-click order bumps that increase the total value of every sale, with a 10 to 30% conversion rate. This is perfect for anyone offering complementary products, training, or extended warranties. With unlimited upsells and down-sells, your buyer’s journey doesn’t need to end at the checkout.
Instead, we make it easy to display a series of additional offers as part of the original transaction. This is perfect for one-time offers, related products, mastermind class offers, high-ticket software sales, or subscription supplements. Not an expert? Don’t worry! We’ve got the training and the consultation you need. WP Launch will teach you how to get the most out of LaunchFlows, with personal consultation on WordPress, WooCommerce, marketing automation, and much more. If you want to earn more money with your WooCommerce online business, you owe it to yourself to try LaunchFlows today.
Jonathon: We’re coming back. We’ve had a feast on artificial intelligence: what is it, and what are some of its key parts? Hopefully we haven’t totally lost the audience. I value our small, intelligent audience. I’m struggling, and I wrote up the questions. Matthew is going at our pace, which is great. So we’re going to come back. We were talking about weights, and a related question on our list is backpropagation. What does that mean, Matthew?
Matthew: So backpropagation is a technique that we use to train a neural network. And I’m guessing most people aren’t familiar with neuroscience, but it is equivalent to what we call the credit assignment problem in neuroscience. How do you assign a value to each neuron for its participation in choosing a correct answer? So if you’ve got a bunch of neurons and you make a correct answer, you need to go back and tell each neuron: hey, you did a good job when you fired or didn’t fire in order to produce the right answer.
But we have to do this mathematically, because, you know, we don’t have organic brains inside of computers. So once again, we go right back to freshman calculus; it’s not very complicated math at all. We use something called the chain rule in order to calculate the derivative for each neuron. Once again, this is basically just saying: how much credit should I give to this neuron for its contribution to the answer that it gave? And then gradually we’re just increasing or decreasing the weight of each of these neurons, or the synapses between the neurons, using this gradient descent process. We’re essentially moving each weight slightly down or slightly up depending on whether it’s doing a good job or a bad job answering the question. Essentially, we’re providing a tiny reward or punishment for each neuron when it gives us a right or wrong answer. And these small nudges to the weights give us better answers every time we punish or reward it for a wrong or correct answer.
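The nudging process described above can be sketched with the simplest possible case: gradient descent on one weight. The setup is hypothetical: we want a one-weight "network" y = w * x to output 6.0 when x is 2.0 (so the ideal weight is 3.0), and the chain rule gives the direction and size of each nudge.

```python
# Gradient descent on a single weight, as a minimal sketch of training.
x, target = 2.0, 6.0
w = 0.0                  # start with an untrained weight
learning_rate = 0.1

for step in range(50):
    prediction = w * x
    error = prediction - target
    gradient = 2 * error * x        # chain rule: d(error^2)/dw
    w -= learning_rate * gradient   # a small nudge downhill

print(round(w, 3))  # converges to 3.0
```

Each pass through the loop is one round of "punishment": the farther the prediction is from the target, the bigger the correction applied to the weight.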
Steven: So I guess, just to build a picture in my head: let’s say I’m a Tesla, and I’m trying to identify if something is a human, because I need to know if something is a human. So that will be our use case. I’m trying to build a neural network that will do image identification: this is a human, this is a trashcan, or not a human. So if I’m going to approach this problem, I have this dataset that says these are all humans. I feed it into my neural network, and it processes all of that and it knows it’s all human. So good rewards, or whatever, go back through that chain, so that any neuron that fired when it said it was a human is a positive thing. If it didn’t fire, that’s also a positive thing. So those get more weight. But then let’s say I do the opposite, like a trashcan, just to test it. And if trashcans are coming out as humans, that means I need to keep training it, I guess. Is that a proper way of thinking through this?
Matthew: Yeah. So every time you’re running the training algorithm, it will get progressively better at detecting a human versus a trashcan. And the more times you run it, eventually it’ll taper off and you won’t get any better as you continue to feed it more information or run additional iterations. And once you’ve hit that kind of threshold, that’s essentially as good as that network’s going to perform without using some other kinds of techniques; we can do some other things outside of that training process to make it work better. But yeah, conceptually, that’s exactly what we’re doing. We feed it a picture of a human. If it guesses human, we give it a thumbs up, and that assigns credit back through the network. If it said trashcan when it was a human, then we’d give it a thumbs down, and that assigns negative credit back through the network. And then we feed it a trashcan. If it predicts trashcan, we give it a thumbs up, and if it predicts not trashcan, we give it a thumbs down. And over time, it’s going to progressively get better at doing this until we’ve essentially maxed out the accuracy or the performance of that neural network.
Steven: And so if I’m a developer, do I just take this, I don’t know, let’s say some open-source neural network that’s been developed out there, and kind of just tweak the weights or whatever? Or am I actually going in and trying to decide that these neurons are supposed to identify these kinds of pixels and should have this kind of weight? Like, how in-depth does a developer get into actually manipulating the neural networks? Or is it more about manipulating the datasets and the thumbs up and thumbs down?
Matthew: So this is a great question. And this is one of the things that I think is largely misunderstood in the developer community right now. I’m going to try and break this into three different groups. First, we’ve got pre-trained models, which are essentially off-the-shelf solutions that are ready to go. As a developer, you just use one of these pre-trained models. You know, Microsoft has their Cognitive Services; we’ve got Google’s Cloud AI services and Amazon’s AI services. These are off the shelf, ready to go. You just give it an input and it’s going to give you an output. You don’t have to do any training; you don’t need to know anything about math or statistics or neural networks. You just start using it. Then we’ll go to the other side. There we’ve got completely custom neural networks, where you’re going to need a data scientist.
You’re going to need a bunch of data, you’re going to need training algorithms, and you’ll need a whole bunch of compute power in order to train a model from scratch. We’ve got some tools being built, like AutoML, or automated machine learning, which are making this a lot easier and not requiring as much knowledge or resources in order to make it happen. But we can talk about that too. And then in the middle, we’ve got something that we call transfer learning, where we take a pre-trained model that was designed to do one specific task, say, it detects humans or it detects cats, and then we slightly tweak it through this process called transfer learning to have it do a slightly different task. Say it was designed to detect people, but we need it to detect our company’s logo instead.
So it’s already been trained; all the hard work’s been done. So all we need to do is give it a couple of examples of our company’s logo, and then it will essentially adapt what it’s already learned to just that new task. And that is actually relatively easy. I mean, there are drag-and-drop services online to do transfer learning right now. So depending upon what type of problem you’re solving, in each of these three categories you either need to know nothing about machine learning, neural networks, and that stuff, or you need to know a whole bunch about it, or you just need to know enough in between to get by. It all depends upon what you’re trying to solve, and there are tools in each of these three categories now for developers.
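The middle category can be sketched conceptually in plain Python. This is not a real deep-learning API; every function and number here is a hypothetical toy. The point is only the shape of transfer learning: the "pretrained" feature extractor stays frozen, and only a small new head is fit on a handful of examples of the new task.

```python
def pretrained_features(image):
    """Stand-in for the frozen layers of a pretrained network."""
    # Pretend this condenses an image into two learned feature values.
    return [sum(image) / len(image), max(image) - min(image)]

def train_new_head(examples):
    """Fit a trivial threshold 'head' on the new task's few examples."""
    scores = [(pretrained_features(img)[0], label) for img, label in examples]
    positives = [s for s, label in scores if label == 1]
    negatives = [s for s, label in scores if label == 0]
    # Midpoint between the two classes' mean feature scores.
    return (sum(positives) / len(positives) + sum(negatives) / len(negatives)) / 2

# A few examples of "our company's logo" (label 1) versus not (label 0).
examples = [([9, 8, 9], 1), ([8, 9, 9], 1), ([1, 2, 1], 0), ([2, 1, 2], 0)]
threshold = train_new_head(examples)

def classify(image):
    return 1 if pretrained_features(image)[0] > threshold else 0

print(classify([9, 9, 8]))  # logo-like input
print(classify([1, 1, 2]))  # non-logo input
```

Only the threshold is learned from the new examples; the feature extractor is reused as-is, which is why transfer learning needs so little data.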
Steven: Interesting. So the idea is that if somebody has already figured this out, it’s fairly easy to kind of tweak it to what you need. But if you’re more on the bleeding edge of things and are trying to develop new systems, that’s where you really have to roll up your sleeves and have a data scientist on your team, and all of these other people that know exactly what’s happening and how it works. It’s not something where you could just go hire a developer from Upwork or whatever and be like, hey, do this for me.
Matthew: That’s exactly correct. It really depends upon what you’re trying to do. And this is why I always recommend starting with a search for existing solutions to the problem. Oftentimes someone’s already solved it for you, or has solved a very similar problem that you can transfer the learning from. Or nobody’s even attempted it, and you’re pretty much on your own. And that’s when you need a data scientist and a data engineer, or a machine learning engineer.
Steven: John, I’ll kick it back to you.
Jonathon: Matthew, earlier on in the first half of the show, you said that we don’t fully understand the mathematics. So when it comes to these neural networks, is it a similar situation to quantum physics? The mathematics utilized in quantum physics works, and we get outcomes that help us build the modern world, but when we try to apply quantum physics to the larger world of gravity, which came from Einstein’s relativity, it doesn’t really play well. Is it that situation, where the mathematics doesn’t work outside its field? Or is it that we really just don’t completely understand the mathematics that is utilized to model these neural networks?
Matthew: Well, I think I understand the comparative metaphor you’re using, and I think it’s interesting. I don’t know that it’s a good mapping for our current relationship to what we know about neural networks. The thing we have to understand is that the mathematics to execute a neural network and to train a neural network, we understand really well. And as I’ve said, you know, it’s literally that basic type of math.
Jonathon: That’s what I was getting confused because you were hinting through the rest of the questions that, but at the beginning you were suggesting that we didn’t fully understand the mathematics. That’s what, yeah. So thank you. I think the quantum mechanics, I was just trying to make out, that was pretty intelligent actually.
Matthew: Oh no, actually I do like the comparison. And I could speculate on how they are related, but I know just enough about quantum physics right now to be dangerous, and I would probably say something wrong. But I am studying quantum computing in order to better understand it, so that someday I’ll be able to come on the show and talk to you about quantum computers. So, back to what we do know and what we don’t know. The execution and training of a neural network we understand pretty well. But we don’t have a complete theory of why neural networks work the way they work. So sometimes a neural network will work really well for task A, and we try to get it to do task B, which seems like it should work, and it just completely fails.
And we don’t understand why yet. We have guesses, but we don’t have a mathematical theory for why it works or doesn’t work. Experts are currently working towards a kind of unified or general theory of neural networks, and I’m guessing we’ll probably get there someday. We also don’t fully understand how best to implement them, but we’re learning new techniques every day. In fact, every week I’m reading about a new technique someone tried that did something impressive. You know, we’ve now got AutoML, or automated machine learning. Transformers are blowing up right now in terms of what we’re discovering they can do. We have modular neural networks, spiking neural networks, a whole slew of different techniques that are eventually going to let us solve problems that we can’t currently solve.
And other things like reinforcement learning, too. And we often can’t explain why neural networks are doing what they’re doing. We can explain it mathematically; I can show you the calculations it made to get to the decision. But in terms of an explanation that the average human being would understand, we’re still quite a ways away in many domains, though we’re getting better with explainable AI and, you know, transparent AI systems that help us to understand why neural networks are doing what they do. But what I generally recommend is: always use the simplest tool for the job.
Essentially, if you can get away with using just a decision tree classifier, which is a really basic form of machine learning, ridiculously simple, just a tree of decisions that it’s making, then use that if it’s solving the problem. Don’t build a deep neural network just to squeeze out another half a percent of accuracy. Because if you have to try explaining to a judge why your credit approval algorithm rejected this person, you can show him the decision tree and he’s going to say, oh yeah, I clearly understand why you approved or rejected this person. But if you show him a bunch of linear algebra and calculus, he’s going to have no idea what you’re talking about. So: the simplest tool that solves the problem.
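The interpretability point lends itself to a sketch: a decision tree is just a readable chain of if/else rules. The thresholds below are invented for illustration, not a real credit model; the value is that every branch can be read aloud to a judge.

```python
# A hand-written decision tree for the credit-approval example.
# All thresholds are hypothetical, for illustration only.

def approve_credit(income, debt_ratio, missed_payments):
    # Every branch is a rule a judge (or a customer) can read and follow.
    if missed_payments > 2:
        return "rejected: more than 2 missed payments"
    if debt_ratio > 0.4:
        return "rejected: debt-to-income ratio above 40%"
    if income < 30_000:
        return "rejected: income below $30,000"
    return "approved"

print(approve_credit(income=55_000, debt_ratio=0.25, missed_payments=0))
print(approve_credit(income=55_000, debt_ratio=0.55, missed_payments=0))
```

Contrast this with the matrix arithmetic of a neural network: both map inputs to a decision, but only the tree explains itself in plain language.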
Jonathon: There we go. I think we’re going to wrap up the podcast part of the show; 30 minutes goes really quick. I think I’ve got to get my waffling down to a certain degree. Listeners and viewers, hopefully you enjoyed this interview. It was outside our normal realm. If you did, just give us some feedback, and I’ll try to get some other people slightly outside the normal realm of WordPress plugins, business, and development. Matthew has agreed to stay on for another 15 minutes, which you can see in our bonus content, so keep with us; it’s been a fascinating discussion. So Matthew, what’s the best way for people to find out more about you and your ideas, or maybe get some other insights into this fascinating world we’ve been discussing?
Matthew: So everything that I do, I try to put on my website, which is matthewrenze.com. All of the places you can find me on the internet are linked there, along with all of my online courses. Essentially, the best way to describe my job right now as a data science consultant with a focus on AI and machine learning is that I’m trying to help software developers and IT professionals understand data science, machine learning, and AI without having to go back to college to learn what they need to know in order to start leveraging these tools today. So I’m doing my absolute best to try to explain things as simply as possible, using metaphors that I think everyone can understand, and trying to strip all the math out of it.
Because there’s a place for that in academia. And if you want to go back to college in order to get a master’s degree in AI, absolutely, you will need to know linear algebra and calculus, like multivariate calculus, and even more complex math. But for the average software developer, because of how we’ve got these tools pre-packaged and such, I don’t think they need that level of knowledge in order to start using this stuff. So check out my website, matthewrenze.com.
Jonathon: Oh, that’s great. And Steven, what’s the best way for people to learn more about you and what you are up to?
Steven: Head over to zipfish.io, run a speed test and see how we can make it faster.
Jonathon: And before we wrap up the show, folks, I just want to remind you that I’m doing a webinar. It’s the second Friday of December; I think that’s the 11th of December, at 10:30 AM Pacific standard time. It’s going to be me and Spencer Forman again. Our first webinar in the series, which was last month (well, still this month, actually), was a great success and got a lot of positive feedback. This upcoming webinar in December is going to be roughly around the same thing, but with additional information. We’re going to be showing you actively how you can integrate a WordPress website with one of the leading CRMs, which is going to be ActiveCampaign. We’ll actually go through a real example of how you link the two together to fire off a series of automated emails using ActiveCampaign.
And during our series of webinars, we’re going to be looking at a series of the leading tools, actively explaining the fundamentals and then giving you a practical example. Also, if you really want to support the show, go over to the WP Tonic YouTube channel and subscribe. You’ll be able to watch the bonus content of our interview with Matthew, the whole interview plus the bonus, on the channel. So go over now to the YouTube channel and sign up. We will see you next week, where we will have another great interview or a discussion between me and Steven. We’ll see you soon, folks.
Every Friday at 8:30am PST we have a great and hard-hitting round-table show with a group of WordPress developers, online business owners and WordPress junkies where we discuss the latest and most interesting WordPress and online articles/stories of the week. You can also watch the show LIVE every Friday at 8:30am PST on our Facebook WP-Tonic Show page. https://www.facebook.com/wptonic/