Global Marketing
Management Consultants

Managing Marketing: Data Ethics, AI and Marketing


Managing Marketing is a podcast hosted by TrinityP3 Founder and Global CEO, Darren Woolley. Each podcast is a conversation with a thought-leader, professional or practitioner of marketing and communications on the issues, insights and opportunities in the marketing management category. Ideal for marketers, advertisers, media and commercial communications professionals.

Jordan Taylor-Bartels is the Managing Director of Magic, an agency using predictive marketing technology to empower brands to make more efficient marketing decisions, and the CEO of Conundrm, a predictive AI platform. Jordan talks about the practical and ethical challenges facing marketers in a post-cookie world, and the opportunities for brands to use data to increase engagement and drive growth by being better informed and aligned to their customers’ needs, wants and desires.

You can listen to the podcast here:

Follow Managing Marketing on Soundcloud, Podbean, Google Podcasts, TuneIn, Stitcher, Spotify, Apple Podcasts and Amazon Podcasts.

Transcription:

Darren:

Welcome to Managing Marketing, a weekly podcast, where we discuss the issues and opportunities facing marketing, media, and advertising with industry thought leaders and practitioners.

Today, I’m sitting down with Jordan Taylor-Bartels, Managing Director at Magic and CEO of Conundrm. Welcome, Jordan.

Jordan:

Thanks for having me. Thanks for simply sparing the time to have a chat. It’s an interesting time in advertising, that’s for sure.

Darren:

It is interesting because technology has really been driving innovation in advertising and marketing, hasn’t it?

Jordan:

It really has and it’s turning out, as everyone’s well aware, in the last six months to be a very, very different shift, and how sticky that shift is, no one’s really going to know until probably two or three years’ time. But we are potentially witnessing a cataclysmic, once in a 10-year event for advertising where what we did over these last three, four years will be completely removed.

Darren:

You’re talking about the cookie apocalypse, aren’t you?

Jordan:

The cookie apocalypse. It’s kind of been overdone a bit and it’s been talked to death, I’m sure. And now it’s publishers scrambling, it’s third-party providers scrambling; there are people defending it, there are people for it. Then there’s the whole ethics debate behind it as well.

So, it really is an interesting shift that we’re about to see. But look, it’s definitely a topic of interest. It’s just how well can people answer it beyond speculation is kind of where it gets really tricky.

Darren:

Do you think Google are also scrambling a bit, because I think it’s been delayed two or three times, hasn’t it?

Jordan:

I think it has. We speak to the Google Cloud team quite regularly, to be honest. And they’re not entirely sure, I don’t think, on what the solution is. We’ve seen FLoC and all these things appear in some kind of proactive fashion. But I think that it really is Apple versus the world at this point.

And I think the relationship that once was a bit contentious or toxic between Google and Apple ─ I think particularly on the Google side, they’ll be looking to repair that very quickly. But look, I think generally it’s always too much pepper, then too much salt, and then you usually find what’s just right in the middle… eventually; whether it’s politics, whether it’s advertising, whether it’s relationships.

Darren:

Well, it’s interesting that you see this as the clash of the Titans in a way, because Apple has always maintained this idea of outstanding customer experience. And clearly, privacy is part of that.

Google, on the other hand, has always maintained ‘don’t be evil’, but a lot of the value of their company is actually in providing access and insights to users of their products, isn’t it?

I mean, one of the reasons that so much of Google’s product is free is because they make their money from third parties accessing that data and information through either advertising or SEO or the like, a paid service I mean.

Jordan:

Yeah, I think directly, and I think for a long time, and even to this day, people forgot that they really are the product themselves. I think there was the whole revolution really around social networking and kind of breaking down barriers of communication across worlds, across societies, religions, etc.

But people kind of got lost in the romance of it all, and really forgot that how are companies like Facebook, like Google, how are they employing tens and tens of thousands of people, producing amazing technologies that seemingly come out of nowhere or update every 24 hours, without effectively monetizing the product, which was all the people using it and exploring it?

There are the amazing things that happen with the social connection, and then subsequently the darker side of that: understanding really what people think, how they perceive things, or what their intent might be for a particular action.

So, I am quite interested in understanding why people lock that whole piece out and really just give themselves up to, I guess, effectively one singular company.

Darren:

Well, it’s interesting, isn’t it? Because we have had media generally before all this; television, radio, newspapers, magazines, are all supporting their content or editorial based on advertising. But it was so visible, it was like in your face.

Then along comes Google and Facebook and yes, the ads are there, but really the value is not in just placing the advertising, is it? It’s the information that they collect about us as we’re using these platforms and even, ironically, when we’re not using them. Because a lot of the time, you download an app onto a phone and you give permission for them to collect a whole lot of data about things that you’re using generally.

Jordan:

Yeah, I think it’s a really good point, actually. I think it really comes down to do people even know what big data is? People who should know probably don’t know really, even just at a general level, what big data is and how it’s collected and how it’s used.

But essentially, on these social media platforms, there’s a behavioral analysis piece i.e. collecting data points to understand interactions. There’s kind of the data science translation of it, which interprets all those complex numbers and millions of rows of data. And then it’s effectively executed by a piece of ad technology.

And that ad technology is Facebook and it uses the Facebook app. How do we showcase or utilise this data? How do we get publishers to intervene and place their product or service or idea into a user journey or into a moment that we think is aligned?

And it’s kind of been harped on a fair bit, to be honest, over the last 10 years. But we did really see this shift away from the big creative directors of the surname agencies globally pushing creative ideas down to users. And I think, if anything, this big data revolution, which was kind of at the forefront via social media apps, has completely shifted that.

No longer do we have to guess if a piece of creative works. We can already have some kind of inclination or at least gathered predictions as to where a piece of creative will resonate with a particular audience. And this is all down to the algorithms that Facebook have.

We were reading content yesterday on TikTok’s attention algorithms, where they basically tag up every single frame and every single pixel to differentiate colours, to differentiate movement. And if you do swipe away from a TikTok ad at a certain time, at a certain kind of pixel level, that they can then learn from that, and you’ll never see anything like that again.

I really think though that the fact that people aren’t understanding what big data is, be it the users, the agencies, or to be honest, the big brands, that this is why we’ve seen this kind of imbalance, and this is why we’ve seen a disproportionate action potentially taken. It is because people really aren’t understanding one; how or what data is being collected and how it’s being used. And secondly, how is my data being stored?

Darren:

So, Jordan, I just want to go back a step because in my introduction of you, I sort of ran through ‘Managing Director at Magic’. Please assure me that this doesn’t mean you’re in charge of a Harry Potter set or something, being the Managing Director of Magic.

Jordan:

Ha! So, Magic, and I won’t plug this too hard. Magic is a digital lab that represents the collision between data science and marketing media. We care so much about it that we’ve adopted the tagline ‘from mad men to math men’ ─ so much that we even trademarked it, just because our egos are that strong.

But in essence, what we represent is pushing the boundaries, the cutting edge, using data only to justify decision-making. I think ‘data first’ is something that everyone kind of says, but no one actually does. It’s something where we rely on data for all decision-making and simply use client interaction as context to that decision-making.

And to be honest, it’s allowed us to do some pretty incredible stuff, like where we’ve been able to understand the psychographics of particular cohorts and shape our communication to suit those psychographic profiles, or where we’ve been able to interconnect really complex user journeys. So rather than the whole thing about Facebook going down, we’ve actually reversed that and gone, well, let’s start using Facebook as a channel that has a purpose.

Even with the cookies being removed, what can we use with Facebook? How does it fit into user journeys? And it’s really meant that our clients have not only been able to scale the impact, but, anywhere from a CMO communicating to the CEO, they’ve been able to prove it. And beyond just Google dashboards and all those things, it’s intrinsic or specific data that shows immediate connections between marketing budgets ─ which are always the first thing to be cut ─ and actual results.

Darren:

So, one of the great promises of big data and data analytics was that it was going to allow us to get closer and closer to the point of cause and effect. And what we’ve seen instead is largely attribution models that can vary enormously, from virtually no value whatsoever upwards.

Blind Freddy making a guess would do better than some of the attribution models that you see. An example of that: I had a client whose attribution model would attribute to mainstream media, like TV and radio, for 90 seconds, but for post ─ traditional direct mail ─ it would be up to six weeks. So, it was quite interesting that they would have such flagrant variations, right up to models that are getting quite good. But we still aren’t actually at cause and effect, are we?

Jordan:

No, I don’t think we are, to be honest. Cookies allowed that, and cookies were kind of building us towards this thing. It was like, half my marketing spend’s working; I just don’t know which half. And you know, we were working towards that as a collective, I guess.

But I think when it comes to attribution, it’s like, cool, well, I think the question that we need to start asking is which part of the user journey do we really want to answer? Which part are we willing to understand that bygones be bygones, like we can’t be predictable about how unpredictable we are.

So, I think this is where we start getting stuck. And I think we’ve got an over-reliance like we’ve seen predictive analytics now taking a significant shift, what Conundrm is turning into, which is another data science project we’re running.

We’ve seen audit software come out that, in the end, is purely relying on last click. It’s impossible to disregard all that effort that people do at the top of the funnel. I really think that this cookie shift will potentially separate out those who rely so heavily on publisher-led analytics, and really push brands ─ big, small, startups or enterprises ─ to start wanting to own that user journey a bit more and own that data, as opposed to relying on the real estate agent to tell you when to sell your house, which is always ‘sell today’.

So, I think as we start seeing this shift, I think businesses will start feeling more empowered. And I guess, as we have seen businesses become more empowered, the responsibility around data cleaning, data storage, and data privacy will have to improve because suddenly, the accountability is on the brand and not on billion or $100 billion, or trillion-dollar companies.

Darren:

Because we really have seen some big shifts. And one of the reasons that we’re here today is because over the last, let’s say five years ─ and from our perspective, it started in Europe. The EU was very clear that they believed that consumers had a right to privacy and that they had a right to know what their personal data and information was being used for. Because there were a lot of abuses, weren’t there?

There were a lot of people that were using data in a way that wasn’t in the best interests of the end-user or the consumer.

Jordan:

Oh, absolutely. I think the most obvious case study for this was anything to do with political campaigns ─ from subtle ones on Australian shores, to the big one in 2016 in the US, to Brexit. It’s about how you are using data to influence decisions, and what is that decision?

There are certain things that it’s a transaction. There’s certain things where … and this kind of comes to the ethics of it…there are certain things where it moves from a transactional exchange. One where the repercussions of being coerced into purchasing something is $29.95 versus something that can shape global politics, can shape even local communities. And I think that’s where this larger piece of data has come to the fore.

Darren:

What’s your definition? Because we use words like ethics and data ethics. And I think it’s always interesting because The Ethics Centre here in Sydney has a lot of discussions, especially in a business forum, around the idea of ethics. And some people say it’s doing the right thing, but do you have a functional definition that you work with?

Jordan:

Typically… the way we approach it is split into two. The first one is the transaction, or the consequence, of using data. I think that one is openly not really the one that’s up for debate. It’s the storage. So, when it comes to data storage ethics, this is where we’re seeing a lot of Apple’s marketing campaigns leaning in.

And unfortunately, the transactional nature of data i.e., I give you X data so you give me relevant ads back has been caught up in this. So, for us, when we talk around data ethics, it’s one around manipulation of the end user, but it’s also scaled into the severity of the consequence of what that transaction means.

For us, data storage is probably the most important thing. How is data being stored, how is personally identifiable information being received, who can see it, and how can it be accessed? And then now, obviously, with cookies disappearing, the question now comes into, how long am I holding it for?

And these are really questions around ethics, and I think they should be the focus of marketing data ethics, as opposed to how much data am I giving up and what am I getting back for it?

Darren:

Because there is a huge business, isn’t there? In people trading data ─ collecting it for one purpose and then bundling it up and providing it to other parties, to third parties.

Jordan:

Absolutely. There’s a business model built on it. And there are business models, particularly that have built upon cookie data where they will be having some sweaty boardroom meetings now trying to figure out how or if or when they should pivot.

Am I against businesses that mine data and then sell it? You know what, probably not. The data that people give off online ─ these aren’t bank account IDs and bank account passwords. That’s not specifically what we’re talking about when we talk about data capture.

What we’re talking about is more around the nature of the data that’s being captured. How is it being used? As the evolution of computational analytics is kind of spooling up, it’s like how will this data eventually be used? You know, doing a myDNA saliva test, how would that data be used?

This is kind of where we’re pivoting conversations now. And I think people are becoming far more aware of that little checkbox you get when you complete a survey or a form, or you’re signing up to a newsletter. I think people are becoming far more aware of what they’re actually giving up.

Darren:

Except that it’s not really informed consent, is it? Because they manage to give you 57 pages in the smallest font type possible, where buried in all of that is what they’re actually going to use the data for.

You know, it’s an interesting concept; informed consent ─ it’s used a lot in medicine, it’s used in legal terms, it’s used in financial services that you shouldn’t sign a contract or accept an agreement until you’re fully informed of all the consequences and implications of your decision, but it’s virtually impossible here, isn’t it?

Jordan:

I think so, but I think this is a consequence of some entrenched behaviour that’s particularly occurred in digital marketing. I’m not biased enough to say that digital marketing and the relationships between ads and clients and users have always been transparent. I think for so long, now, we’ve seen data manipulation, we’ve seen ad spend manipulation. But we’ve seen a bunch of things like that.

For a long time, there was a distrust, honestly, between digital agencies and clients. And now, what we’re seeing as a repercussion of that, is we’re seeing a distrust of brands and their customers or their prospective customers. Similarly, with government, we’re seeing the same thing.

We’re seeing a complete digital distrust of how governments are operating. I mean, we only have to refer to the Australian COVID response and the COVID app, where the consensus response was that no, I’m not giving the government my data. It almost felt at that point that people were more willing to give their data to Apple than they were to the Australian government.

Darren:

Well, it’s an interesting concept, trust, isn’t it? Clearly, Apple is, in that case, perhaps perceived as more trustworthy than the Australian government or, in fact, any government, because I think governments everywhere have suffered from this. Your point before about the way data was used to manipulate people’s perceptions during elections and votes is an example of what political parties and interest groups are willing to do.

Jordan:

Absolutely. And leaning back into the analogy before around salt and pepper, what we’re seeing now is that maybe the question we have to ask users about where their priorities sit is: do you want to go back to the days where you’re getting Viagra pill advertising on every single piece of news content that you see, regardless of whether you need it or not, or would you prefer to start seeing, or continue to see, ads that are of relevant context?

To me, personally, I’m glad to see advertisers take that knee down to my level and at least attempt to contextualise their service or product through the use of wording, tone, and contextual imagery that’s at least appropriate to who I am.

Admittedly, I’m obviously biased towards ─ and that has to be clear that I am biased towards the industry as a whole. But at the same time, I’d much rather that than receive the junk that we used to receive in the late 90s and early 00s.

Darren:

Except that how long has the industry had access to cookie data, for instance? And yet there is still a lot of junk advertising, spam advertising. The ability to actually use it to enhance my experience of the process as a user has not actually been delivered.

For all the good intentions and all the hard work of a few players that want to fulfil this dream, this Nirvana, there are just as many that will completely take the shortcut or hit the mass distribution button, on the basis that it doesn’t matter if I send out a hundred million Viagra ads as long as I get one or two clicks.

Jordan:

Yeah, I think this is kind of falling into this battle between people who are media buying, who are buying impressions, who are buying clicks as opposed to those who are through that kind of reverse cycle of data, basically trying to buy engagement. And they’re really different things.

If you want to max out media budget, it’s always funny, interesting when people say, “Oh, we get this many impressions and our site visitors went up” and these things that are bought, it’s effectively a transaction between an ad server and a media buyer.

What’s far more interesting is understanding why people have clicked, what piece of content they really did engage with, where that piece of content falls into user journeys. And I think this is why we’re seeing this ultimate distrust.

I mean, the cookie in itself was framed as a piece of distrust. There’s a reason why they call it a cookie and not a tracker. If they said, “Oh, do you want to accept these trackers on your site which are going to follow how you use this website,” I’m telling you right now, people would have said, no.

But since they put a cookie there ─ oh, what’s a cookie? It’s a crumbly, delicious treat the Cookie Monster on Sesame Street used to have. It was a very pointed decision to call it that, as opposed to anything reflecting its actual use.

Darren:

And yet if you look deeper into it, it was because it left a trail of crumbs all over your internet footprint, right?

Jordan:

Absolutely, there you go.

Darren:

That’s why it was called the cookie because they could follow the crumbs.

Jordan:

Exactly right. And I think if they had called it a tracker or called it what it was, and had not disguised it under a metaphor, I think we would be in a very different position than where we are now. And it’s not a criticism of that decision. It’s more so an objective kind of at least analysis of why we’re at this point. I mean, we’re asking users to trust us when the very nature of cookies themselves was delivered under a veil.

Darren:

Look, you mentioned Conundrm, which is your AI platform. And it’s primarily to empower marketers to understand more about the customer journey, right?

Jordan:

Effectively. We’re in the midst of finalising it into alpha, but it’s effectively leaning into the data science experience that we have here at Magic. And we’ve built out a new predictive modelling system that relies on our own ─ blood, sweat, and tears have gone into this ─ our own variation of modelling, to effectively understand how every piece of advertising influenced the broader journey.

Again, knowing last click… particularly now that cookies are gone, the last click doesn’t matter as much. The more you understand about how users behave or how users interact with your site across a variety of different channels online or offline, is really what drives this.

Conundrm is designed to launch in only a few months. It will be designed to enable marketers not only to shift their thinking to be that of a CFO, but to have an internal data science resource ─ both software and a service ─ to help answer problems around which part of my marketing is working.

I mean, the reason why we’ve called it Conundrm is that the adage of half my marketing spend is working, I don’t know which half, as I mentioned earlier, is … that’s been a century-old conundrum for the marketing industry.

And as cookies start to disappear, Conundrm’s aim is to help empower marketers to go, “Right, just because I can’t track the performance of a cinema ad, or a billboard, or what a YouTube ad does, doesn’t mean that I can’t have significant predictability behind its performance using some very complex mathematical modelling.”

And I guess by tying this into our overall approach around personalised advertising and taking that knee, that’s where we’re hoping that these two worlds will collide and that we can start really empowering people to make decisions that have relative certainty, as opposed to just guessing.

Darren:

Now, we’ve had some high-profile media coverage on where predictive modelling and personalisation has led to embarrassing outcomes. And while it may have been apocryphal, there’s the famous story about Target, I think it was ─ reported in the New York Times ─ sending a message to a household congratulating them on an impending pregnancy that the teen hadn’t yet informed the parents of.

The thing about AI is it really can be used in a number of ways. One of them is to help sort through huge amounts of data to then present it in a way that can inform decisions. The other way is that it can actually be set up to make decisions in real-time for you in such situations where human beings couldn’t possibly be making that number of decisions that quickly.

Jordan:

Well, absolutely. We’re at a super interesting point in terms of what we’re going to start building when it comes to MarTech. But what we’re seeing is that this knowledge, particularly around mathematics and predictability, has been around for quite a long time.

It’s only now that we’re reaching, or starting to reach, the point where a computer can do it. And I think, yes, there have been some embarrassing pairings or assumptions made between two separate data points ─ i.e., someone visiting a Baby Bunting website or whatever it might be, and then subsequently receiving emails or mail posts around, “Hey, what car seat do you need?”

I think really what we need to start leaning into is that to be certain is to be a fool, and there are too many people, too many businesses or business models, promising certainty in an uncertain world. But the argument that can be made is that having 30 or 40 percent more confidence behind a decision is better than having zero.

Darren:

Yeah. Look, I think where I’ve seen the application of AI informing decisions is much more interesting than where it starts making decisions because the AI can only be as good as whoever it is that programs it, right?

Jordan:

And how quickly it can learn. Domain knowledge is one of those core things that many businesses really ignore, particularly in the advertising landscape. When you speak to any well-informed CMO about the unpredictability of how humans behave, the argument is that if you ask people what the advertising industry is, it’s a lot different to what it actually is. So, to claim to fully understand that intent, that behaviour, is foolish, I think.

Darren:

So, what about the ethics of AI? I mean, you’ve mentioned ─ and I think it’s quite a good point ─ that there are ethical considerations, certainly around the collection, storage and usage of people’s data. But what about when decisions start getting made, and to your point about being data first? These are all decisions based on the available data. Is there an ethical issue associated with that?

Jordan:

Absolutely. So, in a previous life, I worked across some very interesting innovation projects ─ Hyperloop, SpaceX, and the like. And one of the prerequisites to actually even begin to work there, or to conceptualise the work there, was to read a particular book by Nick Bostrom, which is effectively around superintelligence ─ the paths, the dangers, the strategies behind what that means. He’s a Swedish philosopher, yadda, yadda, very smart guy.

But the argument he makes is that once we do create that super AI, it will be the last thing that humans will ever create, because AIs will learn from our past mistakes very quickly; they’ll learn processes, flows and efficiencies in milliseconds as opposed to generations. And without stepping into the whole “I Am Legend” thing, for us right now in 2021, the ethics also have to be considered around jobs. Will AI take jobs?

There’s been a fair bit of content recently, which we talked about in the office, around legal departments: what will clerks do? Where they are collecting data and finding precedents for cases, an AI can effectively scour through thousands and thousands and thousands of cases and, obviously, come to that same conclusion in a millisecond.

Darren:

Yeah, it’s interesting because the proponents say, well, it gets rid of all the boring jobs and allows people to focus on the really interesting stuff, except that there’s a lot of people that are actively employed in doing those boring, tedious tasks, that is their life.

I know the World Economic Forum is doing a lot of work in this space, and they say the solution is actually having an economic structure that isn’t based on the 40-hour week ─ that full employment is people working a lot less.

Jordan:

I mean, that’s an interesting topic. That’s going to be one of those generational shifts. It’s how Denmark moved from hyper-conservatism to a social democracy. And these aren’t things that can be done from the bottom up. I believe in fighting for the people and all of that, but this is one of those things where it’s really going to come down to the collective operation of law, and then pushing that down.

When it comes to AI particularly, we’re seeing this now. I know SpaceX specifically treats it as the be-all and end-all of how they model ─ the be-all and end-all of how to decide whether to move a launch time by one minute, or one hour, or a day, or a week, or abandon it entirely.

Humans can probably make that decision, to be honest. But when a human makes a decision, we think about many things. What will people think of me if I come up with this answer? What will happen to me or my family if I get this wrong? The benefit of AI is that you can scream at a computer all you like, but in the end, you’re screaming at a computer.

Darren:

It’s like when you’re trying to disagree with the GPS when it’s telling you to turn left and you want to go right. It doesn’t matter how many times you tell it it’s wrong, it keeps telling you to go left.

Jordan:

It’s the same with CAPTCHA as well. When it says click the trees in these images, and you’re definitely selecting all the trees, and then you end up having an argument with the computer to prove that you aren’t a robot.

Darren:

But isn’t the issue there that it’s fundamentally human. I mean, I wonder if they could ever set up an AI that can decide on the ethical stance of anything. Because my personal definition of ethics is that every single decision we make will do some harm. What we need to do is find the decisions that do the least amount of harm. And that’s what an ethical decision would be.

Jordan:

I think so, you’re definitely right. It’s leaning into this commonality strain that seems to be recurring in this conversation around the consequence. It’s measuring the severity of the consequence, be it in that ad transaction. Be it BHP trying to figure out whether or not they buy or they acquire fuel at 9:00 AM or 10:00 AM, whether hospitals need to have certain beds in intensive care units or to shrink down the specialisation.

These are all things that people need to take into account. We’re even seeing charities now joining in on the AI conversation, to understand whether or not a particular creative would trigger some form of compassion. And people being compassionate towards charities is a good thing. But as soon as you shift that use case to a separate business, something far more capitalistic or more focused around capitalism, it can become a negative, particularly commercially.

And this is where it comes down to this decision of: look, do we need authorising bodies that oversee, or are integrated into, how we use data and how we store it, which is effectively what AI modelling is built upon, and the consequences of using AI to make decisions?

The argument that always came up when we tested this was: if you’re driving 60 kilometres an hour and a child crosses the road, who is the car protecting? Is the car protecting the driver, or is it protecting the child on the road? It’s almost a paradox; there’s no real right answer. But in the instance where that decision is made to protect one or the other, who’s at fault?

Darren:

So, Jordan, you’re a self-confessed champion of the industry. You believe in the role of big data, you believe that building more accurate attribution models and predictive models will make it much more effective. What do you see as the shining light on the hill? You know, where should we be heading?

Jordan:

We should effectively be heading to a point where users understand the value of the data they hold. And I think that Nirvana, and we’re already seeing this in crypto circles with a few clients doing it, is a democratised or decentralised transaction.

Now, if you read an article, you should get paid, because you’re giving up your data to read that article, transacting your use. Subsequently, the journalist who wrote the article should get paid, and the publisher should get paid. That’s where I think we’re headed: users themselves get paid to share their data, allowing or agreeing to whatever extent they understand, so they can basically sell their data back to these brands and be served relevance in return.

I do think that’s five, six years off. But I believe it’s probably the most important conversation when it comes to advertising. Not cookies, not media spend, not how much of that spend goes to digital, because those questions more or less answer themselves.

It’s more around: how do we make sure users are comfortable? How do we educate users to be more aware of what digital marketing is and what the transaction of data is? Fast forward 50 years and everyone will know everything about data. Unfortunately, at this point, I don’t think we do.

Darren:

Yeah. I think it’s one of those terms that everyone’s using without actually, truly understanding what it really means.

So, on that basis of moving towards a Nirvana in marketing and advertising, what’s the thing that marketers and their agencies should stop doing now and what should they be doing more of?

Jordan:

I think they should be asking why. Even in weekly WIP conversations at an agency or brand, or just in that relationship, as soon as someone says, “Do it this way,” the question should always be why. And if you can’t prove that claim, if you can’t back it up with evidence, and there’s no excuse now for not being able to back it up, it should be inadmissible to the broader discussion.

And I think what that does is put a lot of pressure on agencies to ensure that they are tracking everything that can be tracked, and that they are justifying decision-making on the back of data, on the back of modelling, on the back of something tangible. And doing those two things creates harmony.

It creates a commonality, a single language, between an agency or digital marketer and a brand, and it removes that smoke, that fog, that uncertainty, which I’m sure happens many times, where clients will be like, “Has the agency, or has this particular person, got my best interest at heart, or the brand’s best interest at heart?” But you can open that discussion by starting with, “Hey, look, we’ve built this model that shows an implied outcome we think will happen from this.”

And yes, obviously it’s not certain, but at least there’s some increased likelihood, which should only further encourage people to make riskier decisions. And I think by people making riskier decisions in advertising, not meaning creative, but more in terms of how they do things, we should start to see the progression happen again. We should start to see people taking that step; we should start to see the cutting edge come back.

And I think it’s something that I’m desperate for brands to start doing. I mean, I really want to see Australian leaders in this space not accept the status quo and continually push that ceiling higher and higher, or even break it. And that’s what we’ll see when we start having a proper, actual data-first philosophy, not just, “I look at Google Analytics.”

Darren:

Yeah. It’s interesting; one of my favourite quotes is that insanity is doing the same thing over and over again and expecting a different outcome.

I guess what you’re saying is that you can have access to data that informs your decisions and allows you to think outside the rut of doing the same thing over and over again. Trying something new, not because it’s a hundred per cent guaranteed, but because now, at least, you have a degree of certainty of its efficacy.

Jordan:

Absolutely. And even if you’re going to fail, despite this perceived shift in culture, if it even exists, users for the most part are pretty forgiving. I think it’s all about experimenting, pushing the boundaries, adopting data science. The best thing would be to start seeing all of these brands working with data scientists.

I mean, whilst we’re advocates for ourselves first and foremost, we’re advocates for data scientists, for programmers, for mathematicians to get involved in marketing. Data scientists are often used in big banks or big corporates to work in risk or in finance, forgetting that marketing spend is still, in those businesses, a significant portion of the budget.

So, we want to start getting the bright minds of mathematicians and data scientists into those verticals and into marketing workshops. And I think if we can start doing that, we’re banging the drum for innovation, and for progression. We want to encourage smart minds to get excited about marketing, and that’s effectively what Magic and Conundrm do.

Darren:

And so, part of the problem, part of the resistance, has been that a lot of the time, mathematics and science have been artificially positioned as the opposite of the art and intuition that marketing was traditionally based on. In business, marketing has often been seen as the wacko area, because it was all about gut instinct.

But as you say, today there’s access to huge amounts of data and incredibly insightful analysis. Obviously, that’s the work you’re doing with Magic and part of the challenge you’re solving with Conundrm.

Jordan:

You’re exactly right. You throw Albert Einstein into 2021, you put a cardigan on him, you put some cool shoes on him, and the dude looks like a creative to me. And I think if we can start accepting that, encouraging it, and using the smart minds for things outside their remit, outside their box, the questions they can ask will uncover huge amounts of potential, let alone what they can answer.

Darren:

I think with the classic image of Albert Einstein, he wouldn’t get a job in advertising because he was way too old. If you’re over 40, you’re seen as old school.

But anyway, look Jordan, this has been a terrific conversation. I think this intersection of science and art is the area that’s going to create huge opportunities and potential, not in the future, but right now. And a lot of people are talking about it, but I think a lot of companies, especially are struggling with it.

Jordan:

I think even the big ones. And I think it’s part of how we start to break out of this egg: how do we start getting scientists and mathematicians excited about marketing? And the way to do that is to interconnect what they do with business outcomes.

I think so many mathematicians and data scientists are so far removed from what their amazing work actually does that not only are they not inspired, but they fall into this gap of, “Oh, look, that’s just too complicated, and marketing teams can’t deal with that. I can barely read the dashboard the agency gives us.”

I think if we can work on bringing this together, not only will you or I learn a hell of a lot more about how this works and what these guys and gals are into, but I think the overall outcome will be, “Hey, look at this question we’ve asked. We’ve never asked this question about our business before. And we can now at least position ourselves with the backup of mathematicians and data scientists to actually answer it, or at least try to answer it.”

And I think, really, if I could put one thing on where I see a shift in advertising, marketing, and consumer and user relations, it’s that there’s a reason why mathematics has been part of society for hundreds and hundreds of years. It’s because it works. It aims to ask questions and prove or disprove them.

There’s no bias; we’re not trying to push an answer to a question we want answered. We’re opening up the floor to go, “Cool, here’s the question. How would we go about proving or disproving it?”

Darren:

Yeah, absolutely. And that’s why they call them mathematical proofs, because they either prove or disprove with the numbers. Look, I love mathematics. It’s a language that few people have mastered, and those that do are phenomenal in the way they think.

Hey, Jordan, thanks for your time. Just one last question before we go, and that is: we started talking about ethics, and there have been a lot of accusations flying around. But from your perspective, which company or person has been the least ethical in the marketing space when it comes to data?





    Darren is considered a thought leader on all aspects of marketing management. A Problem Solver, Negotiator, Founder & Global CEO of TrinityP3 - Marketing Management Consultants, founding member of the Marketing FIRST Forum and Author. He is also a Past-Chair of the Australian Marketing Institute, Ex-Medical Scientist and Ex-Creative Director. And in his spare time he sleeps. Darren's Bio Here Email: darren@trinityp3.com
