Artificial intelligence (AI), sometimes known as machine intelligence, refers to the ability of computers to perform human-like feats of cognition including learning, problem-solving, perception, decision-making, and speech and language.
Early AI systems had the ability to defeat a world chess champion, map streets, and compose music. Thanks to more advanced algorithms, larger data volumes, and greater computing power and storage, AI has evolved and expanded to include more sophisticated applications, such as self-driving cars, improved fraud detection, and “personal assistants” like Siri and Alexa.
Today, researchers are using AI to improve predictions, diagnoses, and treatments for mental illnesses. The intersection of machine learning and computational psychiatry is rapidly creating more precise, personalized mental health care.
Artificial intelligence as it is used today is considered “weak AI,” because it is generally designed to perform just one or two specific tasks as well as, and often better than, humans. The more controversial future of AI research, however, includes ideas about developing “strong AI,” a super-intelligence with the potential to perform many or all cognitive tasks better than humans. AI safety research is a priority for scientists concerned about the potential dangers if such advanced technology fell into the wrong hands, although others question whether human-level strong AI will ever be achieved.
People often possess an array of devices that incorporate artificial intelligence. Centralized home-management systems adjust thermostats, wearable gadgets push their users to exercise or reconsider their food choices, and smartphones and tablets complete words and sentences as people type emails and texts. Autonomous vehicles are already in use on city streets.
Machines have already transformed the jobs of millions of people—by monitoring actions that couldn't previously be tracked, calculating data in new ways, guiding decision making, or taking over tasks. For example, drones photograph and monitor some construction sites for discrepancies. Some probation officers handle their cases according to instructions from a computer program, which decides how much of a risk each person poses. Algorithms write reports for some publications.
Research shows that people tend to distrust algorithms, preferring their own judgments, and even the judgments of others, to algorithmic ones. This general distrust has been labeled “algorithm aversion.” One explanation is that humans like to feel in control and don’t want to cede that control to technology. People also hold AI to higher standards: they can forgive human errors but not algorithmic ones. Algorithms essentially have to be perfect for people to embrace them.
Machines aren't replacing people, but they are reshaping people’s expectations about what they can and should control. Machines are creating new sorts of relationships, as people find themselves working intimately with entities that feel like both a mechanism and a human, without quite being either. A central tension arises from the human desire for autonomy and agency, which machines can quickly strip away. Designing machines that provide value without erasing human motivation will be an ongoing challenge moving forward.
The sex robots that exist today are not particularly advanced, but they will undoubtedly continue to evolve. It’s important to explore perceptions and ethical questions around sex robots, and people report envisioning both positive and negative consequences. Benefits may include sexual and emotional companionship for those who don't have a human partner, sexual release, and an opportunity to gain sexual experience. Downsides may include exacerbating the objectification of women, emboldening people who might engage in nonconsensual sex, and reducing human empathy.
Artificial intelligence has the potential to reshape psychiatry—and those efforts are already well underway. Amassing massive datasets can allow scientists to identify factors that render people more vulnerable to mental illness, improve the accuracy of diagnoses, and assess which treatments are effective and for whom.
The field of computational psychiatry leverages mathematical and computational tools to improve the understanding, diagnosis, and treatment of mental disorders.
Computational psychiatry has the potential to gain insight into any condition with a large enough dataset. Machine learning could identify which genes contribute to the development of autism or the factors that render adolescents vulnerable to binge-drinking such as brain size or parental divorce. These programs could reveal which systems are affected by dopamine in patients with Parkinson’s disease, or a person’s risk for depression based on factors such as sex and childhood trauma.
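Programs like these often boil down to statistical models that map risk factors to an outcome probability. The sketch below uses a logistic regression with invented coefficients (not estimates from any real study) to show how factors such as sex and childhood trauma might be combined into a depression risk score:

```python
import math

def depression_risk(sex_female, childhood_trauma, family_history):
    """Toy logistic-regression risk score.

    Coefficients are invented for illustration, not fitted to data.
    Inputs are 0/1 indicators for each hypothetical risk factor.
    """
    # Weighted sum of risk factors on the log-odds scale
    z = -2.0 + 0.4 * sex_female + 1.1 * childhood_trauma + 0.8 * family_history
    # Logistic function maps log-odds to a probability between 0 and 1
    return 1 / (1 + math.exp(-z))

# More risk factors present yields a higher score
print(round(depression_risk(0, 0, 0), 3))  # → 0.119
print(round(depression_risk(1, 1, 1), 3))  # → 0.574
```

A real computational-psychiatry model would estimate such coefficients from large clinical datasets and would typically include many more predictors, but the structure, risk factors in, probability out, is the same.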
Artificial intelligence has the potential to leverage large datasets to improve diagnoses and reduce misdiagnoses. For example, depressive episodes in bipolar disorder and depression can be difficult to distinguish; many patients with bipolar disorder are misdiagnosed with major depressive disorder. A machine learning algorithm that used self-reports and blood samples recently identified patients with bipolar disorder across a variety of scenarios, and could eventually serve as a helpful supplement to clinical judgment.
There are currently no medical tests to definitively diagnose autism, but a recent study demonstrated that a machine learning algorithm identified proteins that differed in boys with autism and that predicted the severity of the condition. As this technology continues to evolve, it could help diagnose autism based on biomarkers.
A recent study found that a machine learning algorithm classified cases of schizophrenia based on brain images with 87 percent accuracy. The pattern-recognition skills and predictive abilities of AI could provide a valuable tool for clinicians diagnosing schizophrenia.
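Studies like this typically extract numeric features from brain scans, train a classifier on labeled cases, and report how often its labels match the true diagnoses. The sketch below illustrates that pipeline shape with a nearest-centroid classifier on synthetic data (invented features, and scored on its own training data for brevity); it is not the study's actual method:

```python
import random

def centroid(rows):
    """Mean feature vector of a group of samples."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def classify(x, centroids):
    """Assign x the label of its nearest centroid (squared Euclidean distance)."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda label: dist(x, centroids[label]))

# Synthetic "imaging features": two groups drawn around different means
random.seed(0)
patients = [[random.gauss(1.0, 0.5) for _ in range(4)] for _ in range(50)]
controls = [[random.gauss(0.0, 0.5) for _ in range(4)] for _ in range(50)]

cents = {"schizophrenia": centroid(patients), "control": centroid(controls)}
preds = [classify(x, cents) for x in patients + controls]
truth = ["schizophrenia"] * 50 + ["control"] * 50
accuracy = sum(p == t for p, t in zip(preds, truth)) / len(truth)
print(f"accuracy: {accuracy:.2f}")
```

Real studies use far richer features, more powerful models, and held-out test sets, which is why a reported figure like 87 percent accuracy is meaningful: it describes performance on cases the model has not seen.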
It’s important to translate the fascinating discoveries of AI into applications that can really help people. This can involve pinning down predictive risk factors for psychiatric conditions: Are there specific brain regions that make people more likely to commit suicide? Or to become depressed? Are certain medications effective for some patients with schizophrenia but not others? Doctors can then assess patient risk and provide proactive, personalized mental healthcare.
Artificial intelligence can analyze massive datasets for difficult-to-spot connections between drugs, diseases, and biological processes to identify potential treatments. For example, a machine learning framework recently predicted which of roughly 20,000 FDA-approved drugs had the greatest likelihood of helping to treat Alzheimer's disease.
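However the predictions are produced, the end product of such a framework is essentially a ranked shortlist of candidates for researchers to investigate. A toy sketch of that final step (drug names and scores invented; a real framework would derive scores from drug-disease-gene interaction data):

```python
# Hypothetical predicted likelihoods of therapeutic benefit (invented values)
predicted = {
    "drug_a": 0.91,
    "drug_b": 0.34,
    "drug_c": 0.78,
    "drug_d": 0.12,
}

def shortlist(scores, k):
    """Return the k candidates with the highest predicted likelihood."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(shortlist(predicted, 2))  # → ['drug_a', 'drug_c']
```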
The evolution of artificial intelligence has led to countless ethical questions. Will machine learning perpetuate bias and inequality? Will AI infringe on human privacy and freedoms? Will humans lose their jobs to robots? Will machines become more intelligent than humans?
People are right to question the nature of machines that can evolve on their own. By actively engaging with these concerns, hopefully humans can develop ethical systems of artificial intelligence moving forward.
People interact with technology on an unprecedented scale and in many different environments—at work, in the supermarket, in the car, at home. Technology deployers have some responsibility to keep people safe as AI poses ethical challenges. Whether it’s anticipating systemic bias, recognizing when technologies coerce decision-making, intercepting malicious actors who wish to weaponize platforms, or taking a stand on overzealous surveillance, creators and consumers need to make sure that technology serves the population well.
One ethical concern about artificial intelligence is the potent yet subtle influence of technology on people’s choices and decision-making. Companies are able to use all of the information they store about people to their advantage—“nudging” people towards decisions that are predominantly in the company’s interests. Another concern may arise from the technologies that claim to be able to read and interpret human emotions. The idea of a product deceiving a child or vulnerable adult into believing it truly “understands them,” and thereby influencing them, is worrying.
Achieving ethical AI requires both a moral approach to building AI systems and a plan for making AI systems ethical themselves. For example, developers of self-driving cars should consider their social consequences, including ensuring that the cars themselves are capable of making ethical decisions. Building ethical artificial intelligence involves addressing ethical questions (e.g., how to prevent mass unemployment) and ethical concerns (e.g., clarifying moral values) and then developing a plan that aims to satisfy human needs.