Idea: Marketing driven by AI needs disclosures

AI/ML models are already used to market to you based on your behaviour in apps, on the web, and using data about your real life. They are highly effective, and I see a future of challenges.

Companies should actively disclose the data they use and the intent behind these communications.

I propose a system that extends current privacy and data protection laws, allowing customers to either opt out or receive disclosures.

OPT OUT:
Users can opt out *specifically* from AI/ML-driven communication systems that use data to predict behavioural inclinations. These customers would then be put on traditional rules-based systems.
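To make that routing concrete, here is a minimal TypeScript sketch, assuming a single opt-out flag stored with the customer's marketing preferences. All names here are hypothetical, not part of the proposal itself.

```typescript
// Hypothetical preference record; field names are illustrative only.
interface MarketingPreferences {
  customerId: string;
  optOutOfAiTargeting: boolean; // true = traditional rules-based campaigns only
}

type CampaignChannel = "ai_ml_predictive" | "rules_based";

function selectCampaignChannel(prefs: MarketingPreferences): CampaignChannel {
  // Opted-out customers are never routed through behavioural-prediction models.
  return prefs.optOutOfAiTargeting ? "rules_based" : "ai_ml_predictive";
}

// Example: an opted-out customer falls back to rules-based messaging.
console.log(selectCampaignChannel({ customerId: "c-123", optOutOfAiTargeting: true }));
// -> "rules_based"
```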

DISCLOSURES:
If the user does not opt out, the following information should be listed in the marketing preference area of the product:
- First-party data collected (usage, etc.)
- Second/third parties from which data was collected or purchased
- Other data sources used to make the prediction
Note: These would not disclose the data itself, just the source of the data.

The company would also show the customer the actual artefact that was delivered to their inbox, phone, etc.

The company would also disclose the intent of the communication.
Examples:
- "We sent this email to provide information about a feature we believe would bring you benefit. Our intent is to drive adoption of this feature as it results in improved retention rate of customers. The feature is included in your subscription."
- "We sent this message because individuals with similar behaviours typically cancel membership. Our intent is to keep you as a customer by showcasing other features."

I am starting to think the alignment problem isn’t just about AI… but also about education and transparency toward human beings.
