What are the key privacy challenges associated with artificial intelligence (AI) development, and why should investors care?
While investment in AI is exploding, challenges to usage and expansion are already emerging due to global concerns about how data is collected and used to train AI models. From a user perspective, we are seeing companies self-limit their use of AI due to intellectual property and privacy concerns, reducing the value they get from AI initiatives. Meanwhile, we are also seeing developers repeatedly blocked from entering entire markets.
Here’s the problem: AI is loaded with risks around IP protection, privacy and security, and data usage. Any of these could be fatal for AI providers, users, and their investors. So the investor community needs to take notice, because the success or failure of their investments will depend on how companies address these challenges.
Investors are eager for AI companies to build trust. What specific actions can companies take to improve AI safety, transparency, and responsible data practices?
The real differentiator for AI providers is their ability to effectively leverage data, including proprietary and confidential data. To do this, they must work with third parties, such as customers and data providers. Some companies will make deals with AI providers to make their data available for training models, as the magazine publisher Dotdash Meredith did with OpenAI. However, companies with more sensitive data often will not, or cannot, adopt this approach.
These businesses need to embrace what I call “Secure Collaborative AI,” powered by Privacy Enhancing Technologies (PET). Just as e-commerce only took off once everyone was convinced that credit card transactions were protected online, PET will unlock the true value of AI by protecting both data and models as organizations collaborate, allowing them to get the most out of each.
Don’t take my word for it; this is already U.S. government policy. President Biden’s Executive Order on the “Safe, Secure, and Trustworthy Development and Use” of AI directs government agencies to use these techniques to protect data when deploying AI. Perhaps even better, Apple is acting on this as well, as evidenced by its announcement of Private Cloud Compute, which uses a specific type of PET to protect user data.
In practice, this type of technology allows organizations to monetize AI models while protecting their intellectual property, to improve models by accessing better data that is not publicly available, to unlock sensitive data that is currently off-limits for privacy and security reasons so customers can derive better insights, and to personalize AI models for each customer.
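To make that concrete, here is a minimal sketch of one common PET, additively homomorphic encryption, using the open-source python-paillier (`phe`) library. The library choice and the toy figures are illustrative assumptions on my part, not a description of any particular provider’s stack; the point is simply that a collaborator can compute on encrypted values without ever seeing the underlying data.

```python
# Minimal sketch of one Privacy Enhancing Technology: additively homomorphic
# encryption via the open-source python-paillier library (pip install phe).
# Illustrative only; real deployments combine several PETs and far more rigor.
from phe import paillier

# The data owner generates a keypair and keeps the private key.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Sensitive inputs (e.g., per-customer usage figures) are encrypted locally
# before being shared with a collaborator such as an AI provider.
encrypted_values = [public_key.encrypt(x) for x in [1200, 950, 1875]]

# The collaborator can aggregate the ciphertexts without decrypting them,
# because Paillier ciphertexts can be added together.
encrypted_total = encrypted_values[0]
for value in encrypted_values[1:]:
    encrypted_total = encrypted_total + value

# Only the data owner, who holds the private key, can read the result.
print(private_key.decrypt(encrypted_total))  # 4025
```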
Ultimately, an AI model is only as good as the data it is trained on, and the main obstacles to data access are privacy and security. Investors should keep this in mind when evaluating the space, and teams building AI applications need to be proactive in adopting privacy-enhancing technologies to solve their data access and analysis problems.
What news headlines are you watching?
There’s been a ton of negative news lately, so I’d like to highlight something positive: the House of Representatives passed the Privacy-Enhancing Technology Research Act. This legislation follows President Biden’s Executive Order on the “Safe, Secure, and Trustworthy Development and Use” of AI, and it authorizes the National Science Foundation to advance research that mitigates personal privacy risks in data and AI. It also supports research, training, standards development, and interagency coordination to advance PETs. In other words, this is one way to make AI safer, and it’s something to be excited about.