The Death Algorithm — Will Google Influence Your Lifespan?
July 17, 2018
Researchers from Google Brain and Stanford University unveiled a new algorithm earlier this year, in the Nature partner journal npj Digital Medicine, to great excitement. According to their paper, Google’s new artificial intelligence (AI) algorithm can predict, with roughly 95 percent accuracy, the outcome of your next hospital visit.
Developed in partnership with researchers at Stanford, UC San Francisco, and the University of Chicago Medicine, the algorithm has been hailed as a breakthrough among predictive models for its near-perfect success rate.
Google’s model can accurately predict longevity after hospitalization, the probable length of a hospital stay, and the chance of re-hospitalization after release. The test of the algorithm’s learning system included 216,221 adults and more than 46 billion data points collected from various sources, including handwritten notes scribbled on patient charts.
Because Google has access to so many data points for each individual, it can draw on not only age, ethnicity, and vital signs, but also any other information that has been entered into electronic medical records.
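To make the mechanics concrete, here is a minimal sketch of the kind of risk model being described, written in Python against synthetic data. It is an illustration only: the features, coefficients, and data below are invented for this example, and Google’s actual system is a deep neural network trained on complete electronic health records, not a three-variable regression.

```python
# Illustrative sketch only, on synthetic data. Google's actual system is
# a deep neural network trained on complete electronic health records,
# not a logistic regression on a handful of invented features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Synthetic stand-ins for the kinds of signals a hospital record holds:
# age, a vital-signs abnormality score, and a count of prior admissions.
age = rng.uniform(18, 95, n)
vitals = rng.normal(0, 1, n)
prior_admits = rng.poisson(1.0, n)

# Synthetic "ground truth": risk of in-hospital death rises with age,
# abnormal vitals, and prior hospitalizations.
logit = -7.0 + 0.05 * age + 0.8 * vitals + 0.4 * prior_admits
died = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([age, vitals, prior_admits])
X_tr, X_te, y_tr, y_te = train_test_split(X, died, random_state=0)

model = LogisticRegression(max_iter=1_000).fit(X_tr, y_tr)
risk = model.predict_proba(X_te)[:, 1]  # per-patient probability of death
print(f"AUC on held-out patients: {roc_auc_score(y_te, risk):.2f}")
```

The statistical machinery here is decades old; what is new is the scale and intimacy of the inputs Google can feed it.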
Google’s foray into medical AI is the tech company’s bid to find a commercial application where AI can be monetized. Google’s AI chief Jeff Dean told Bloomberg News that the next step is to move the AI into brick-and-mortar health clinics to help hospitals and medical centers predict health outcomes so “they can apply life-saving methods more quickly.”
Or not.
Perhaps instead of rallying equipment and personnel to save lives, a patient’s predicted poor outcome would encourage cash-strapped, bottom-line-oriented medical facilities to reserve expensive life-extending measures for someone with a greater chance of survival.
Let’s face it: Google and the other companies racing to make AI marketable, like IBM’s Watson unit, Babylon Health, and Yitu Technology, aren’t doing this out of some kind of soulful, self-effacing altruism. They’re in it for one purpose only: money.
Money is not necessarily a bad goal — in fact, it’s the driving force behind a lot of cutting-edge medical advances. But when unbridled capitalism isn’t tempered by conscience and oversight, it can go rogue in the worst ways — like Google’s monstrous algorithm.
This kind of predictive algorithm is detrimental to patients on many levels. Studies show that mindset can, in fact, influence patient outcomes. If you’re predicted to die, the suggestion alone may nudge you toward exactly that outcome.
And this kind of thing already occurs with enough frequency to be a well-known phenomenon.
The American Journal of Public Health published a study on the phenomenon of “voodoo” death, in which members of a community die within a short period of being cursed by an authority figure. The researchers concluded that these deaths do occur, precipitated by emotional stress so severe that it causes a complete shutdown of bodily systems.
In modern medicine, this is known as the “nocebo” effect or medical hexing. A patient hears the prediction of a negative outcome from a clinician he or she regards as an authority and immediately takes a turn for the worse. The more strongly the patient believes in the accuracy of the source, the more likely they are to take the negativity to heart.
And what could be more authoritative than an AI with near-perfect predictive capabilities?
An article in Bloomberg recounted the case of a woman with late-stage cancer whom Google’s algorithm gave a 19.9 percent chance of dying during her hospital stay. Even so, she passed away a few days later.
A just-under 20 percent chance of dying seems like decent odds; the woman had a better than 80 percent chance of surviving her stay. So why didn’t she survive? The lack of follow-up on this incident makes it seem as if Google is unable, or unwilling, to address the question.
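There is a statistical wrinkle worth making explicit here. Even a perfectly calibrated 19.9 percent risk means roughly one patient in five with that score will die, so a single case can neither confirm nor refute the forecast; only aggregate follow-up can, which is exactly what is missing. A back-of-envelope sketch, with a cohort size invented purely for illustration:

```python
# Back-of-envelope only: what a well-calibrated 19.9 percent mortality
# risk implies across a cohort of similar patients. The cohort size here
# is invented for illustration.
risk = 0.199
cohort = 1_000

expected_deaths = round(risk * cohort)
print(f"Of {cohort:,} patients with this score, about {expected_deaths} "
      "would be expected to die even if the model were perfect.")
# Roughly one in five. A single death tells us nothing about whether the
# model is accurate; only outcomes in aggregate can.
```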
Let’s examine the other issue that might be at play in this story: cutting costs. If hospitals and doctors receive statistics giving a patient slim-to-none odds of survival, it’s quite possible that the patient in question might receive lesser care, minimizing costs and diverting assistance to other, heartier patients.
But hospitals aren’t necessarily the bad guys here. They’re at the mercy of big capitalism, too, and have to act accordingly, with an eye toward cost savings. Medical devices are updated incrementally, leaving medical centers in a “pay-to-play” bind over some necessary life-saving equipment. Furthermore, the medical device industry successfully lobbied to shield its prices from regulators and the public by killing a 2007 bill that would have required full disclosure. As a result, one hospital can pay double, triple, or more what another pays for the same device elsewhere, and it will never know, since manufacturers are not required to divulge pricing.
Now, let’s look at the life of Patient Zero, who has just had his or her longevity evaluated by Google’s AI. Besides the fact that Patient Zero’s body might be under significant stress from predictions of death and the hospital caring for him or her might have decided to spend their money on a patient with more time left to pay, there’s another issue looming.
Google is tapping into billions of data points by both covert and overt methods. There’s a growing trend for doctors, hospitals, and medical centers to enter your medical records into shared databases, and it’s a cinch for Google’s AI to access that information. Other salient details about you and your health are easy to find, too. A report from the Department of Health and Human Services shows that an inordinate number of health apps collect data not covered under the Health Insurance Portability and Accountability Act (HIPAA), leaving that data fair game for anyone who wants to collect and monetize it.
In addition, Google is pretty convincing when asking for access to intimate data. In 2016, its DeepMind unit persuaded Britain’s Royal Free NHS Trust to share the records of 1.6 million patients so it could work on an app to detect kidney injury. Those patients were given no choice about contributing their data to the project; the information was simply handed over.
In an effort to keep the data flowing, Google announced its Google Cloud Healthcare Application Programming Interface (API) this March, promising to “free up the flow of information leading to actionable insights from artificial intelligence and machine learning that can improve health outcomes.” It is a bald-faced attempt by Google to get its hands on more data and bolster the marketability of its outcome-predicting AI.
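What does that “flow of information” look like in practice? Here is a hedged sketch, in Python, of reading a single patient record out of a Cloud Healthcare API FHIR store. Every identifier in it (project, location, dataset, store, patient) is a placeholder invented for illustration, and the API’s paths and versions have evolved since its announcement, so treat this as the general shape rather than a working recipe.

```python
# Illustrative sketch of reading one FHIR resource from a Cloud Healthcare
# API FHIR store, using the google-auth library. The location, dataset,
# store, and patient IDs are placeholders, and exact paths and API
# versions have changed over time: check Google's current documentation.
import google.auth
from google.auth.transport.requests import AuthorizedSession

credentials, project = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
session = AuthorizedSession(credentials)

fhir_store = (
    "https://healthcare.googleapis.com/v1"
    f"/projects/{project}/locations/us-central1"
    "/datasets/example-dataset/fhirStores/example-store"
)

# Fetch a single (hypothetical) patient record as FHIR JSON.
resp = session.get(f"{fhir_store}/fhir/Patient/example-patient-id")
resp.raise_for_status()
print(resp.json())
```

The striking thing is how little ceremony is involved: once an organization grants access, a patient’s record is one authenticated HTTP request away.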
Supporting Google’s efforts are the makers of data-compiling apps now climbing aboard the information-collection bandwagon. Fitbit, one of the most popular fitness trackers in the United States, announced it would share data with the Google Cloud Healthcare API just a month after Google unveiled the program.
Additionally, Twine Health, a service that helps patients manage chronic diseases like diabetes and high blood pressure, was acquired by Fitbit and will join the fold of Google-friendly data suppliers. James Park, co-founder and CEO of Fitbit, is enthusiastic about his company’s participation, saying, “This collaboration will accelerate the pace of innovation to define the next generation of healthcare and wearables.”
In the meantime, Google’s Chrome OS has partnered with Healthcast and Citrix to “improve shareability” among healthcare organizations.
While there’s no doubt that collecting a patient’s data from numerous doctors’ visits in one convenient place can give health care providers a more holistic view of that patient’s overall health, the idea also poses an enormous security and privacy challenge.
While most corporations claim to be hyper-focused on protecting consumer privacy, plenty of breaches occur that draw no more than an “oops” from the offending party.
In 2015, health care data breaches at just three companies affected an estimated 100 million people. Anthem, Premera Blue Cross, and Excellus topped the list, with UCLA Health System, Medical Informatics Engineering, and the Virginia Department of Medical Services also losing significant amounts of data to hackers.
Notably, the largest of these breaches, at Anthem, was discovered in January 2015 but wasn’t made public until February, prompting lawmakers to call the company out over its slow response. And what price did Anthem pay for failing to protect users’ data? It took a class-action lawsuit to force action, and the company ultimately paid out a $115 million settlement. Unfortunately, that translates to just two years of credit monitoring for consumers whose data was stolen, which appears to be the standard slap on the wrist companies endure when breaches occur.
But is two years of monitoring fair compensation? Hackers understand the standard two-year credit-monitoring payout as well as anyone, and they can simply sit on stolen data until the monitoring lapses before selling or using it.
Once your data is breached, it’s breached forever, and two years of monitoring isn’t enough. Google, and other companies like it who want to enter the healthcare market with a data-heavy product, should be required to prove they have the technology in place to protect consumer data before there’s a beta-test.
The fear of hacking isn’t stopping competitors from scrambling for their own share of the health-care market before Google can lock up the collection of health data. Just this year, Apple, in partnership with three other companies, announced a beta test of its Health app’s online medical-records capability. All of your medical records, regardless of doctor, could now be kept in an app on your iPhone.
This, of course, could include information collected from the various Health-app-linked applications. Currently, there are over 40 apps that feed data to Apple Health. They range from heart, sleep, diabetes, and blood pressure monitors to workout trackers, food intake calculators, and menstrual cycle monitors.
Feeding data from all of these apps into your Apple Health app can help your doctor get a well-rounded view of your lifestyle choices and overall health if, and only if, the data is accurate. It can also open the door to data compromise, share information you’d rather not share, or contribute to misdiagnosis through overreliance on “hearsay” health evidence.
The Countdown Begins
Now that Google’s AI has opened a Pandora’s box of little evils associated with our most intimate data, we’re already at risk of being wronged on a number of levels. Without regulation and oversight, this kind of health-care technology can be used as easily to harm as to help.
With foreknowledge of poor health outcomes, hospitals, physicians, and insurers can divert costly resources and benefits to patients more likely to survive in an effort to maximize their bottom lines.
Patients can undergo life-altering, even life-ending, stress based on a machine’s calculation of their probable longevity, contributing to a self-perpetuating terminal event.
And even if neither of those things happen, an unguarded, unregulated central repository of consumer data is being put at risk by corporations eagerly scrambling for their next piece of the financial pie.
Besides the obvious issues posed by data breaches and hacking, health data could ostensibly be used to deny insurance coverage, bias potential employers, and subject individuals to any number of discriminatory practices.
Beyond these potential harms, the most frightening part of this equation is that this AI is taking aim at the one thing we have that it doesn’t: humanity.
While Google’s algorithm might be the acme of technical wizardry, able to sift through billions of infinitesimal and seemingly irrelevant data bits to arrive at what many see as an incredibly on-point calculation, the one thing that it cannot possibly calculate is the depth of the human spirit and the indomitable will to live.
It cannot substitute for a caring doctor or nurse who truly knows their patient. And it cannot figure into its formula the will of a mother to live to see her daughter’s wedding, or of a husband to hold his wife’s hand just one more time on their anniversary.
Let’s not let a machine tell us, or the people we trust to help us when we’re ailing, when to quench the spark that animates us. That decision should remain irrevocably ours.
Nikki Williams is a bestselling author based in Houston, Texas. She writes about fact and fiction and the realms between, and her nonfiction work appears in both online and print publications around the world. Follow her on Twitter @williamsbnikki or at nbwilliamsbooks.com.