AI Development Pushed by Pandemics | Privacy Issues
Artificial Intelligence and COVID-19
COVID-19, the disease caused by the novel coronavirus, has become a global threat. It has no cure and no proven treatment, and no means of control other than isolation, social distancing, face masks, and hand washing. One technology that has shown its capability in tracing, tracking, and helping to prevent the spread of the virus is artificial intelligence, or AI.
AI systems were already in use as a tool to help predict and prevent the spread of viruses in the health community. It was an AI-based system that first flagged an unknown form of pneumonia spreading around Wuhan, China, an early signal that an epidemic was beginning. Health professionals with access to this information became the early warning system that led the world to understand that something was amiss: a new viral strain was emerging and spreading across the globe.
Artificial intelligence uses machine learning algorithms and automated data mining to review large-scale data sets and surface patterns for further inspection. AI systems can aggregate and pool information from a variety of sources, including the internet, news releases, social media, and government websites. These systems detect when corresponding data points begin to cluster into widespread patterns. The subject can be virtually anything that can be organized, numbered, and tallied: a viral infection, or a run on Reese's candy stockpiles. It also includes people, their data, their personal information, and their health statistics. At what point does the aggregation of all this data become a privacy violation?
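As a rough sketch of the kind of pattern-spotting described above, the snippet below pools daily report counts from several sources and flags days whose totals deviate sharply from the recent baseline. All source names and numbers here are invented for illustration; real systems use far richer data and models.

```python
from statistics import mean, stdev

# Hypothetical daily "unusual pneumonia" report counts pulled from
# several aggregated sources (news, social media, health bulletins).
sources = {
    "news_wire":     [2, 1, 2, 3, 2, 2, 9],
    "social_media":  [5, 4, 6, 5, 5, 6, 21],
    "health_bureau": [0, 1, 1, 0, 1, 1, 4],
}

# Pool the sources into one combined daily signal.
daily_totals = [sum(day) for day in zip(*sources.values())]

def flag_anomalies(series, z_threshold=2.0):
    """Flag days whose value deviates sharply from the baseline
    formed by all earlier days (a simple z-score test)."""
    flagged = []
    for i in range(4, len(series)):          # need a few days of baseline
        baseline = series[:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (series[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

print(daily_totals)                  # the pooled signal
print(flag_anomalies(daily_totals))  # indices of days that spike
```

The same structure applies whether the series counts infections or candy sales: aggregation first, then a statistical test for when the pooled numbers break their own pattern.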
Privacy Issues with Comprehensive Data Tracking
Even if you are not aware of it, your actions are under constant scrutiny. Governments and private companies alike have already deployed public surveillance systems with biometric features such as facial-recognition technology, and these feed into other AI-based monitoring systems. The days predicted in the book 1984 are already upon us.
In some countries, public transportation systems are being equipped with new thermal imaging systems that take the body temperature of every person who enters. The same system not only records your temperature but also matches it to your facial-recognition details, adding the reading to an already extensive file of statistics about you: health, education, residence, and political affiliations.
With the scare of COVID-19, Americans are turning a blind eye to violations of their civil liberties as U.S. companies install similar systems in public areas. These systems monitor citizens for social distancing, track them with facial recognition, and record those with whom they are seen. They can also note the percentage of people in a given area wearing masks. While masks do provide some cover and disrupt facial recognition somewhat, don't be fooled into thinking a mask hides your identity. Some facial-recognition technologies can identify you even with a mask on, and other AI systems can aggregate data from outside sources to come to a close, if not exact, determination of who is wearing it. Nowhere are you safe anymore.
Government officials have found data tracking and tracing apps to be a useful tool in slowing the spread of the virus, but Americans have been slow to adopt them. They worry that their privacy will be violated and abused. Americans are not used to a society where their every move is tracked and their contacts cataloged. They should be alarmed.
These applications use both GPS and Bluetooth to aggregate data about an individual and their whereabouts, along with everyone who comes within 10-12 feet, as one phone's Bluetooth registers against another's. You might only be walking past a stranger, but the compiled data keeps track and eventually reveals the pattern of whom you know: your friends and acquaintances. Is this a violation of American civil liberties? Giving up freedom for safety has been warned against since the founding of our country.
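The mechanism can be sketched in simplified form. The toy model below is not the actual protocol used by any real tracing app; it only illustrates the principle: each phone broadcasts a rotating identifier derived from a device secret, nearby phones log what they hear, and anyone who can later link identifiers back to devices can count repeat encounters. Repeated co-occurrence is exactly how friends and acquaintances stand out from passing strangers.

```python
import hashlib
from collections import Counter

def rolling_id(device_secret: str, interval: int) -> str:
    """Derive a short rotating identifier from a device secret and a
    time interval (a simplified stand-in for real rotating beacons)."""
    return hashlib.sha256(f"{device_secret}:{interval}".encode()).hexdigest()[:8]

# Hypothetical devices and the secrets their beacons are derived from.
devices = {"friend_phone": "secret-A", "stranger_1": "secret-B",
           "coworker_phone": "secret-C"}

# Which devices were within Bluetooth range of "my phone" per interval.
nearby_by_interval = {
    0: ["friend_phone", "stranger_1"],
    1: ["friend_phone"],
    2: ["coworker_phone"],
    3: ["friend_phone", "coworker_phone"],
    4: ["friend_phone"],
}

# My phone logs only the rotating IDs it hears, not device names.
encounters = []
for interval, nearby in nearby_by_interval.items():
    for device in nearby:
        encounters.append((interval, rolling_id(devices[device], interval)))

# Rotating IDs hide identity per interval, but anyone who later learns
# the secrets can replay the derivation and count repeat encounters:
counts = Counter()
for interval, seen_id in encounters:
    for name, secret in devices.items():
        if rolling_id(secret, interval) == seen_id:
            counts[name] += 1

print(counts.most_common())  # repeat contacts stand out from strangers
```

The design choice that matters for privacy is who holds the mapping from identifiers back to people: rotating IDs protect you from passive eavesdroppers, but not from whoever controls the derivation secrets and the aggregated logs.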
Governments promise that these applications are safe to use. However, it has already been found that large corporations like Apple and Google, which came together to create a tracing application, are misusing the data collected. An outside agency examining the code found that the collected data is being sold to multiple third parties for consumer abuse.
Ethical Standards – Do You Believe Them?
The use of AI in the healthcare industry has not caught up with the technology's other commercial applications. In light of COVID-19, more applications using this advanced form of artificial intelligence are being created and put to use by healthcare workers and the general public.
For AI to remain something the public will trust, invest in, and participate in, there must be a set of guiding principles or ethics that guards AI systems against abuse. The Asilomar Principles are one such set of guidelines for managing the ethics of AI. They were created in 2017 at the Asilomar Conference on Beneficial AI, held in Pacific Grove, California, and organized by the Future of Life Institute.
The Future of Life Institute is a nonprofit organization whose primary vision statement is “to catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its course considering new technologies and challenges.” The set of 23 well-developed principles ensures that AI is not used in such a way that it will damage humanity. An example would be an arms race in lethal autonomous weapons, which could result in catastrophic wars. Many AI developers have signed on to agree to abide by these tenets. By following these guidelines, humanity can work to harness the strength of AI for the betterment of all and avoid its misuse by powerful entities against citizens.
As citizens, we need to ask ourselves about corporations that continue to abuse AI against people. The recent example of Google and Apple's SafeTrace app, endorsed by state governors as a COVID-19 tracking and tracing application, should cause alarm. When government and corporations work together to violate our fundamental civil liberties, the need to separate corporate money from our government is greater than ever. It should also remind Americans that, today more than ever, they need to stay aware of new technologies as they are presented. When corporations abuse their power, Americans still have options: don't spend your money with these companies or use their services. They will either get their priorities straight or go out of business. It is in our hands.
Research Issues
- Research Goal: The research goal of AI should be to create beneficial intelligence, not undirected intelligence.
- Research Funding: Investments in AI development should include funding for research into its beneficial use and into the ethical and moral questions it raises.
- Science-Policy Link: AI researchers and legislators should have constructive exchanges on creating technological policies.
- Research Culture: Researchers and the developers of AI should share a culture of cooperation, trust, and transparency.
- Race Avoidance: Developers should not consider AI a race to the finish but cooperate and avoid cutting corners, which could impede safety measures.
Ethics and Values
- Safety: AI systems should be able to operate reliably for the duration of their lifespan. The reliability should be verified when applicable and feasible.
- Failure Transparency: If an AI system causes harm, it should be possible to determine why; the system should be transparent enough for the failure to be studied.
- Judicial Transparency: If AI becomes involved in judicial decision-making, it should be able to output the reasons for its decision in a form auditable by competent human authorities.
- Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use and misuse, with a responsibility to shape those implications through foresight in how the systems are built and secured.
- Value Alignment: Designers of highly autonomous AI systems should ensure that their goals and behaviors remain aligned with human values throughout their operational lifespan.
- Human Values: AI systems should be designed to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
- Personal Privacy: Given AI’s power to analyze and utilize data, people should be able to access, manage, and control the data they generate.
- Liberty and Privacy: When AI is applied to personal data, it should be done in such a way that it does not curtail people’s real or perceived individual liberties or civil rights.
- Shared Benefit: The technological benefits of AI should benefit and empower all people equally.
- Shared Prosperity: When AI creates economic prosperity, it should be shared to benefit all of humanity.
- Human Control: Humans should always choose whether and how to delegate decision making to AI systems, and only to accomplish human-chosen objectives.
- Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which society depends.
- AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
Longer-Term Issues
- Capability Caution: Absent consensus, we should avoid strong assumptions about the upper limits of future AI capabilities.
- Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
- Risks: AI can present catastrophic or existential risks; planning and mitigation efforts should be commensurate with the expected impact.
- Recursive Self-Improvement: Certain AI systems are designed to self-replicate and self-improve; these should be subjected to strict safety and control standards.
- Common Good: Superintelligence should only be developed for the good of all humankind and in the service of widely shared ethical ideals, not for the benefit of a single individual, corporation, or nation.
Ethics in Artificial Intelligence
Artificial intelligence, combined with machine learning, is the transformative technology of today and the future. It will be able to drive our cars, run our utilities, take over our manual labor, even learn to discriminate against people, create and fight wars, and, possibly, destroy humanity. In the end, it is more than an attack on our privacy; it is an attack on the human race. While there are many benefits to AI, how can we control the abilities of what we have created?
Questions like these have led some of the technology's top thought leaders to turn their attention from the functions of AI to its ethical considerations. These highly intelligent, self-learning technologies have powerful and potentially life-altering consequences for the entire globe.
Most people’s concern with AI, outside of privacy, is the idea that advanced technology will take over jobs. Indeed, digital transformation is already clearing out entire categories of employment, making those positions obsolete for humans. The transformation of the workforce does not mean an end to available work; it is a change. Humans will need to learn new skills to fill other positions, and AI will indeed create new and exciting job opportunities. The number of jobs created will likely exceed the number of positions overtaken. That is good news.
There is currently a wave of ethical problems with AI applications, and it will continue into the future. One example is the fight against misinformation. Highly advanced technologies can now mimic and create fake images, videos, and conversations. At a time when misinformation is being used as a weapon against the public, how is the everyday person supposed to know whether what they see or hear is real or the work of a bot? Even users of healthcare applications, including text-based counseling services, may have no idea whether they are getting assistance from a real human being or a highly advanced AI program. Does this make you feel uncomfortable?
Clearly, there is a need to focus on the ethics of technology. As humans, we need to think about how to rein in AI-enabled systems and keep them within boundaries. When AI spreads disinformation or performs data tasks outside its prescribed boundaries, governments and corporations alike need to treat these malfunctions as what they are: cybersecurity threats. If these entities fail to take such threats seriously and to protect the public from privacy abuses and information crimes, they are just as malicious as the out-of-control AI causing the threat.