Investigating Intentional Biases in Consumer Product Marketing | Epstein Becker & Green


Advances in artificial intelligence (“AI”) continue to offer exciting opportunities to transform decision-making and targeted marketing in the world of consumer goods. While AI has been touted for its ability to create fairer, more inclusive systems, including in terms of lending and creditworthiness, AI models can also embed human and societal prejudice in such a way that unintended and potentially unlawful downstream effects can arise.

Most of the time, when we talk about bias in AI, we focus on unintentional bias. What about intentional bias? The following hypothetical illustrates the problem in the context of consumer goods marketing.

In targeted advertising, an algorithm learns a great deal about a person from social media and other online sources and then serves ads to that person based on the data collected. Let’s say the algorithm deliberately targets ads to African Americans. By “deliberately” we do not mean that the software developer has racial or otherwise nefarious goals toward African Americans. Rather, we mean that the developer simply intends to use whatever information is available to target ads to that particular demographic (even if that data is race-specific or race-correlated, such as zip codes). This raises a number of interesting questions.

Would that be legally permissible, outside of certain situations involving bona fide occupational qualifications (for those familiar with employment law)? What if the product is a hair care product formulated for a particular hair type, or a specific genre of music? What about rent-to-own furniture, marketed on the basis of data suggesting that African Americans are more likely than the average consumer to rent such furniture? Taking the scenario a step further, what if it is well documented that rent-to-own agreements are a major contributor to poverty among African Americans?

Data can also be skewed by the way it is collected or selected for use. What if data collected from predominantly African American zip codes suggests that African Americans are typically willing to pay higher rental rates, and the ads targeted to African American consumers therefore quote those higher rates? Could the companies serving ads based on such statistical correlations be held responsible for predatory or discriminatory lending practices? Do we still need human judgment to ensure that AI-powered decision-making is fair?
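To make the mechanism concrete, here is a minimal, hypothetical sketch in Python. The zip codes, demographic labels, and rates are entirely invented for illustration. The point is that a pricing rule trained only on zip-code-level data never sees race as an input, yet reproduces race-correlated price disparities whenever zip code is correlated with race.

```python
# Hypothetical illustration: a pricing rule "learned" from zip-code-level
# data can reproduce race-correlated outcomes even though race is never
# an input. All zip codes and dollar figures below are invented.

# Invented historical records: (zip_code, accepted_monthly_rate)
observed_rates = [
    ("30310", 95), ("30310", 105), ("30310", 100),  # hypothetical majority-Black zip
    ("30305", 70), ("30305", 80), ("30305", 75),    # hypothetical majority-white zip
]

def train_pricing_rule(rows):
    """Average the historically accepted rate per zip code.

    Race never appears in the data; only zip code does.
    """
    totals, counts = {}, {}
    for zip_code, rate in rows:
        totals[zip_code] = totals.get(zip_code, 0) + rate
        counts[zip_code] = counts.get(zip_code, 0) + 1
    return {z: totals[z] / counts[z] for z in totals}

quoted_rate = train_pricing_rule(observed_rates)

# The learned rule quotes different prices by zip code alone, but because
# zip code correlates with race, the disparity tracks race.
print(quoted_rate["30310"])  # 100.0
print(quoted_rate["30305"])  # 75.0
```

The sketch is deliberately simple: a real ad-targeting or pricing model would use many more features, but any feature correlated with a protected class can act as a proxy in exactly this way.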

These are some of the questions we’ll be exploring in our upcoming targeted advertising panel, and we invite you to join us.
