All About Trusted AI: Artificial Intelligence Advantages


What is trusted AI (artificial intelligence) technology? It’s AI software that achieves four key traits: it is fair, accountable, values-driven, and explainable. Let’s dig into each of these aspects of artificial intelligence and machine learning.

AI should be fair. Since AI systems make so many decisions on our behalf, we need to know that the decisions they make are fundamentally fair. Fairness, as we discussed in previous blog posts, can be tricky to navigate in terms of outcomes, but the bare minimum standard of fairness is that AI technology does not discriminate on protected classes (age, gender, race, religion, disability, etc.) or on inferred variables that correlate with protected classes. So being unbiased is essential to the effectiveness of AI and machine learning.

Every decision AI makes should at a minimum be blind to those considerations, except where permitted by law and ethics.

AI should be accountable. When we build systems to make decisions - whether it’s who to show our ads to or what constitutes a valuable customer - those systems must inform the users (us and our customers) how they made those decisions, so that we can hold the system accountable. If an AI system declines your loan, it should explain what factors led to that decline. It’s not enough for the system to say a loan application was declined; it should also spit out the reasons - insufficient household income, a credit score below the required threshold, and so on. Whatever variables it used to make its decision should be communicated to the user.

AI should be values-driven. This is a BIG one. Our AI systems - and their outcomes - have to match our values. If we claim we support, for example, non-discrimination based on age, and our AI models discriminate based on age, we have a system that’s out of alignment with our values. As an interesting side note, we often say that Facebook and its various companies, like Instagram, have built a system that essentially makes the world a worse place by amplifying negative emotions and promoting rampant misinformation.

Interestingly, this doesn’t conflict with their core values: Be bold. Focus on impact. Move fast. Be open. Build social value. Nowhere in their statement of values do things like “engender happiness” or “make the world a better place” appear, so it should be no surprise to us that they build AI that is aligned with their values - even if it doesn’t align with our values.

AI should be explainable. Ultimately, any AI model - which is nothing more than a piece of software - should be interpretable and explainable.

How did a system make its decisions? What data did it learn from? What algorithms did it incorporate? When we know what’s in the engine, it’s much easier to fix it when it goes wrong. When we know what the ingredients are in our cooking, it’s much easier to correct our dishes. All this sounds great as abstract theory. This is what we want in systems that make decisions on our behalf, every day. The question is, how do we practically implement some of this? You must learn how to test for bias in your data, and how to know when a system has gone off the rails.
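As a concrete starting point for testing for bias, here is a minimal sketch of one common check: compare decision rates across groups and apply the four-fifths rule of thumb. The record structure and field names (`group`, `approved`) are hypothetical stand-ins for whatever your data actually contains, and a real audit would go well beyond this single test.

```python
# Minimal disparate-impact check, assuming hypothetical decision
# records with a "group" field and a 0/1 "approved" field.
from collections import defaultdict

def approval_rates(records):
    """Compute the approval rate for each group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        approved[r["group"]] += r["approved"]
    return {g: approved[g] / totals[g] for g in totals}

def passes_four_fifths(rates):
    """Four-fifths rule of thumb: the lowest group's rate should be
    at least 80% of the highest group's rate."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi >= 0.8

records = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]
rates = approval_rates(records)
print(rates)                      # group A ~0.67, group B ~0.33
print(passes_four_fifths(rates))  # False: B's rate is half of A's
```

If a check like this fails, the next step is exactly the accountability question above: which variables drove the disparity?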

Why is trusted AI so important? These systems - from mortgage loan application processing to hiring to even what you see when you log into your favorite social network - govern a large and increasing part of our lives. Do you trust, for example, Facebook’s AI to make decisions that are fair and equitable? I certainly don’t - and yet I still use it (though much less than I used to). If you’re a marketer or a business owner, are you using AI? (The answer is almost certainly yes, whether you know it or not.) Do you trust the AI you’re using? If so, why? If not, why not?

The reality is that AI is like computing in general: it lets us do bigger stuff, faster. Like any amplifier, it will take the good and make it better - and it will take the bad and make it worse. One look at your declining reach in unpaid social media marketing proves that for you. Is that making you a better marketer? Is it making the world a better place? Is that working with systems you trust? 

I believe we are at a pivotal point now in the use of AI in marketing. With increasing regulation and restriction on the data we have available to us, more and more companies will turn to machine learning and AI to extract more value out of the data we do have access to. Without an emphasis on fairness, on building trustworthy systems, there’s a good chance we’ll make things worse rather than better - not just for company performance, but for the world as a whole. On the other hand, if we build our systems to process marketing data with fairness, accountability, values alignment, and interpretability at the core of our efforts, we’ll not only achieve better marketing results, we’ll also avoid making the world a worse place.

Every time you evaluate a vendor from now on, consider asking how they incorporate those practices of fairness, accountability, values alignment, and interpretability in their systems. If they don’t have clear, documented processes for doing so, give real thought to whether you should be working with that vendor or not. At some point, without those north stars to guide their efforts, they are likely to create AI on your behalf that generates damaging results.

But what if you can't afford AI? One of the most common questions whenever I present about the use of AI in marketing is, "What if we can't afford an AI engineer and a data scientist, or can't afford an agency or vendor?" In the past, I've struggled to answer this question in a satisfactory way for a couple of reasons. First, I've struggled to answer it because I use AI every day, so I have trouble imagining what it would be like to not have access to the tools. It'd be like trying to understand someone who didn't have access to spreadsheets - they're just part of my everyday work. 

The second reason I've struggled to answer the question is because the problems I face at work every day are large-scale problems that are well-suited to AI. Problems involving small data generally don't land on my desk; someone else has already solved them, and I'm not needed.

It's this train of thought that has led to what I think is a satisfactory answer to that question. AI is good at three things: processing data faster (and thus being able to handle a lot of it), processing data more accurately, and processing data in routine ways. Google's Chief Decision Scientist Cassie Kozyrkov calls AI and machine learning nothing more than "problem labeling machines", which is accurate. We use AI to turn data into numbers that can be calculated and processed, clustered and predicted. But that presumes we have enough data to do all that labeling and processing. AI fails when we don't have enough data.

And therein lies the distinguishing factor, the real answer to the question. You need AI when you have machine-sized problems. You can use human solutions when you have human-sized problems.

For example, suppose you want to know what works in tweets, what topics to cover. If you downloaded all your tweets, and sorted by the most engaging, you could probably get a good idea of what works by reading - manually - the top 100 tweets, and doing a bit of legwork to group them together by topic and language. You don't need AI for that. If you need to do that for an entire industry sector, you've now got a machine-sized problem, and that's where AI shines. There's no practical way to sort and process hundreds of thousands of tweets in a timely fashion. Companies like Google process more data with AI than they ever could with humans. They'd have to employ most of North America just to deal with a day's worth of data. AI is called for. If you need to understand the language used on a company website, you can have a person read the top 10 or even top 100 pages - that's a human-sized problem. If you need to understand the language used on Wikipedia? That's a machine-sized problem. 
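The human-sized version of that tweet exercise can be as simple as a sort. Here's a minimal sketch, assuming a hypothetical list of tweet records with `text` and `engagements` fields (a real analytics export would name its columns differently):

```python
def top_tweets(rows, n=100):
    """Return the n most-engaging tweets from a list of dicts,
    sorted by the hypothetical 'engagements' field."""
    return sorted(rows, key=lambda r: r["engagements"], reverse=True)[:n]

tweets = [
    {"text": "launch day!", "engagements": 420},
    {"text": "weekly roundup", "engagements": 35},
    {"text": "behind the scenes", "engagements": 180},
]
for t in top_tweets(tweets, n=2):
    print(t["engagements"], t["text"])
```

The grouping by topic and language is then the manual legwork: you read the top 100 yourself rather than training a model to cluster them.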

So, here's my answer to "What if we can't afford an AI engineer and a data scientist, or can't afford an agency or vendor?" Find a way to reduce the data down to a human-sized problem and solve it with humans until you have enough resources - money, time, people - to work with the full-size dataset. Sampling data is a time-honored method to make big data smaller, and doesn't require anything more sophisticated than a semester's worth of university statistics (assuming you did well in the class, of course). Make the data and the problem fit the resources you have to solve it as best as you can.
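Sampling itself needs nothing fancier than the standard library. A minimal sketch, with a seeded simple random sample standing in for a statistically rigorous sampling design:

```python
import random

def sample_down(rows, k, seed=42):
    """Take a simple random sample of k rows - a human-sized slice
    of a machine-sized dataset. Seeded so the sample is reproducible."""
    rng = random.Random(seed)
    return rng.sample(rows, k)

big = list(range(500_000))   # stand-in for a large dataset
small = sample_down(big, 1_000)
print(len(small))            # 1000
```

One design note: a simple random sample assumes the rows are roughly homogeneous; if your data has important subgroups, a stratified sample (sampling within each subgroup) is the safer choice, and it's covered in that same semester of statistics.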

With every software vendor and services provider proclaiming that they too are an AI-powered company, it's more difficult to demystify artificial intelligence and its applications for marketers. What is AI? Why should you care? How does it apply to your business? In the newly revised Third Edition of AI for Marketers, you'll get the answers you've been looking for. With all-new practical examples, you'll learn topics like attribution modeling, forecasting, natural language processing, and influencer identification.

AI technology is here to stay, and it's only getting more advanced each year. Artificial intelligence and machine learning tech will be applied to more industries and products over time, so you'd best get on board now!
