Artificial Intelligence (AI) requires three things to work.
- High processing speed.
- Large amounts of data storage.
- Large amounts of data collection.
The first two are localized technology innovations. These happen in the confines of labs, both public and private, throughout the world. Microchips get continually smaller, faster, and more efficient, leading to an ever-increasing array of digital products.
The last, however – data collection – is an entirely different beast.
This is a social innovation, and I would suggest one which is being implemented without your informed consent.
For example, consider the location information on your smartphone as the large data set to be collected.
You might not be too shocked to learn that if you opt in to location services on your Android phone, you are giving Google your location data. Hey, the convenience of “Find My Device” comes with a cost, and the cost is the tracking of every move you make.
However, a reasonable question is this: does Google track and store your location data even when you don’t opt in to this service?
Possibly. Just read Google’s privacy policy, which you legally accepted when you joined the Google ecosystem:
“When you use Google services, we may collect and process information about your actual location. We use various technologies to determine location, including IP address, GPS, and other sensors that may, for example, provide Google with information on nearby devices, Wi-Fi access points and cell towers.”
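To make the “Wi-Fi access points” part of that policy concrete, here is a minimal sketch of how a device’s position can be estimated from nearby access points alone, with no GPS involved. The coordinates and distances below are hypothetical; real systems estimate distance from measured signal strength and consult large databases of access-point locations.

```python
def trilaterate(p1, d1, p2, d2, p3, d3):
    """Solve for (x, y) given three known points and a distance to each."""
    # Subtracting the circle equation at p1 from those at p2 and p3
    # yields a 2x2 linear system, solved here by Cramer's rule.
    ax, ay = 2 * (p2[0] - p1[0]), 2 * (p2[1] - p1[1])
    bx, by = 2 * (p3[0] - p1[0]), 2 * (p3[1] - p1[1])
    c1 = d1**2 - d2**2 + p2[0]**2 - p1[0]**2 + p2[1]**2 - p1[1]**2
    c2 = d1**2 - d3**2 + p3[0]**2 - p1[0]**2 + p3[1]**2 - p1[1]**2
    det = ax * by - ay * bx
    return (c1 * by - ay * c2) / det, (ax * c2 - c1 * bx) / det

# Three access points at known (hypothetical) positions, with distances
# estimated from signal strength; the device works out to be near (3, 4).
x, y = trilaterate((0, 0), 5.0, (10, 0), 65**0.5, (0, 10), 45**0.5)
print(round(x, 2), round(y, 2))  # 3.0 4.0
```

The point is not the math but the implication: any device that can hear a few mapped Wi-Fi access points can be located, whether or not its GPS is switched on.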
I have little doubt that when I acquired a new device, turned on a new software service, and accepted its operating system, the user agreement I signed contained a privacy policy heavily favoring the data rights of the corporation.
The Googles, Amazons, and Apples of the world have the best data lawyers in the world, and it should be no surprise that these companies not only have the legal right to this data; it’s likely they actually own this data, to do what they will with it.
But, do they have the moral right?
To what extent do these software agreements give informed consent, when the technology behind AI, including data acquisition, storage, and processing, is probably only understood by a handful of people at these companies?
Rephrased, how much do we need to understand “under the hood” of these companies to ever give informed consent?
Here is where we run into a fundamental difficulty: there is no incentive for a publicly traded company to have a moral debate about privacy, since its sole motivation (as a corporate entity, not as a group of people within an organization) is to maximize profit. Why voluntarily debate an issue which, if it turns against you, strikes at the heart of your future business model?
Hence the need for us, at least as individuals – and preferably as citizens – not to become so enamored of the technology of AI that we fail to have a serious debate about its morality.