{"id":2231,"title":"Using algorithms to predict death: Lessons learned","link":"https:\/\/www.reframetech.de\/en\/2018\/09\/20\/using-algorithms-to-predict-death-lessons-learned\/","date":"09\/20\/2018","date_unix":1537427020,"date_modified_unix":1649928089,"date_iso":"2018-09-20T07:03:40+00:00","content":"<p>Animals are extraordinary beings. Oscar the cat, for example. Adopted as a kitten by the staff at a nursing home in the United States, he grew up surrounded by people with end-stage dementia. At some point staff members noticed something unusual: Oscar did not interact much with people, but when he made himself comfortable next to one of the residents, that person invariably died within a few hours. After a while the nurses started informing family members that they saw Oscar curled up beside a patient. What sounds like an urban legend was in fact documented in\u00a0<strong><a href=\"http:\/\/www.nejm.org\/doi\/full\/10.1056\/NEJMp078108\" target=\"_blank\" aria-label=\"Opens in a new tab\"  target=\"_blank\" rel=\"noopener\">an article<\/a><\/strong>\u00a0in the respected <em>New England Journal of Medicine <\/em>in 2007. Oscar soon made headlines around the world. Even <strong><a href=\"http:\/\/www.steerehouse.org\/shoscar_landing\" target=\"_blank\" aria-label=\"Opens in a new tab\"  target=\"_blank\" rel=\"noopener\">a book<\/a><\/strong><strong>\u00a0<\/strong>has been written about the unusual feline.<\/p>\n<p>That makes Oscar very good at <strong><a href=\"https:\/\/www.reframetech.de\/en\/2018\/09\/17\/how-algorithms-can-save-people-from-an-early-death-2\/\" target=\"_blank\" rel=\"noopener\">something that challenges doctors and nurses every day:<\/a><\/strong> predicting when a person will die. Moreover, he\u2019s amazingly accurate. 
Animals also sense when humans have illnesses such as tumors.\u00a0<strong><a href=\"http:\/\/erj.ersjournals.com\/content\/erj\/39\/3\/669.full.pdf\" target=\"_blank\" aria-label=\"Opens in a new tab\" rel=\"noopener\">Dogs, for example, can be trained to detect lung cancer by sniffing a patient\u2019s breath<\/a><\/strong>.\u00a0And the results are better than those provided by conventional diagnostics.<\/p>\n<p>Researchers are now trying to discover how they do it. Presumably the animals can smell certain molecules in the patient\u2019s breath or on various parts of the body. Although the actual mechanism is not yet known, what is clear is that the animals\u2019 brains absorb and process complex information and then come to the relevant conclusion. That describes how algorithms work as well.<\/p>\n<p>It was only a matter of time before researchers developed algorithmic systems that are capable, like Oscar, of predicting death. Two types are now generally used:<\/p>\n<ol>\n<li><a href=\"https:\/\/www.reframetech.de\/en\/2018\/09\/17\/how-algorithms-can-save-people-from-an-early-death-2\/\" target=\"_blank\" rel=\"noopener\"><strong>Algorithms that analyze a patient\u2019s vital signs, such as heart rate and blood oxygen levels, to predict the probability of death within a few hours<\/strong><\/a> \u2013 in order to save the patient\u2019s life.<\/li>\n<li><a href=\"https:\/\/www.reframetech.de\/en\/2018\/09\/19\/optimizing-palliative-care-when-algorithms-predict-a-patients-death\/\" target=\"_blank\" rel=\"noopener\"><strong>Algorithms that use a pool of patient data, such as medical diagnoses, treatments or prescribed medications, to predict the probability of death within 3 to 12 months<\/strong><\/a> \u2013 allowing terminally ill patients to benefit from at-home palliative care.<\/li>\n<\/ol>\n<p>In both cases, it is generally acknowledged that algorithm-based systems will be able to produce, on average, better predictions 
than humans can. And even if hospital staff use special scoring systems to assess risk or the probability of death, the scores\u2019 accuracy still leaves a lot to be desired, which is why doctors and nurses generally rely on their own professional experience.<\/p>\n<p>Over the course of their careers, medical practitioners treat thousands of patients. Yet it\u2019s hard to pinpoint the exact knowledge they use as the basis for choosing one therapy over another. That is not the case with software, however, which runs on a dataset that is always accessible and constantly growing.<\/p>\n<p>Whether it\u2019s a matter of predicting avoidable problems early on and thus saving lives or accurately foreseeing the unavoidable and helping the seriously ill die in a dignified manner, algorithms can help doctors make decisions based not only on their subjective experiences, but on objective parameters as well. In addition, using algorithms can help medical staff to systematize decision-making processes, for example in hospitals.<\/p>\n<p>Yet as useful as such systems might be, many people feel uncomfortable when they discover that software is being used to foretell a death. This is particularly true in the second example given above: when death is more or less inevitable and there is little that can be done to save the patient\u2019s life. The oncologist and author Siddhartha Mukherjee describes exactly this feeling of discomfort in one of his essays. 
In other words, there are many questions that must still be answered and many aspects that require public discussion before such algorithms become widely used.<\/p>\n<p><strong>Algorithmic decision-making processes must be transparent<\/strong><\/p>\n<p>Two such \u201cdeath algorithms\u201d, designed to predict whether seriously ill patients will die within the next 3 to 12 months, are currently the subject of discussion in the United States (see our blog entry \u201c<a href=\"https:\/\/www.reframetech.de\/en\/2018\/09\/19\/optimizing-palliative-care-when-algorithms-predict-a-patients-death\/\" target=\"_blank\" rel=\"noopener\"><strong>Optimizing palliative care: When algorithms predict a patient\u2019s death<\/strong><\/a>\u201d). One is still being developed and has been published in a study by researchers at Stanford University; the other is the property of the private US-based palliative-care provider Aspire Health.<\/p>\n<p>In contrast to the Stanford researchers, Aspire Health is a for-profit enterprise that specializes in home-based end-of-life care. That means it earns money when people with serious illnesses no longer receive curative care. The greater the number of people who use the palliative services provided by Aspire, the greater the company\u2019s profit. The conflict of interest is readily apparent. To make matters more problematic, no one outside of the company knows how the algorithm used by Aspire works, since it\u2019s a trade secret. It thus remains unclear how the system decides when the company\u2019s palliative services are the better choice for a patient.<\/p>\n<p>One could easily presume that Aspire Health has optimized the algorithm to identify as many patients as possible who would benefit from palliative care. Yet increasing the sensitivity generally results in lower specificity. 
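<\/p>\n<p>This trade-off can be sketched with a toy calculation. The risk scores and outcome labels below are invented purely for illustration and have nothing to do with Aspire\u2019s actual (secret) model: lowering the decision threshold catches more of the patients who really will die soon (higher sensitivity), but also flags more patients who would have lived (lower specificity).<\/p>\n

```python
# Toy sketch of the sensitivity/specificity trade-off.
# The "risk scores" and labels are invented for illustration only;
# label 1 means the patient actually died within the prediction window.
scores = [0.2, 0.3, 0.4, 0.5, 0.55, 0.6, 0.65, 0.7, 0.8, 0.9]
labels = [0,   0,   0,   0,   0,    1,   1,    1,   1,   1]

def sens_spec(threshold):
    """Flag a patient when score >= threshold; return (sensitivity, specificity)."""
    tp = sum(s >= threshold and y == 1 for s, y in zip(scores, labels))
    fn = sum(s < threshold and y == 1 for s, y in zip(scores, labels))
    tn = sum(s < threshold and y == 0 for s, y in zip(scores, labels))
    fp = sum(s >= threshold and y == 0 for s, y in zip(scores, labels))
    return tp / (tp + fn), tn / (tn + fp)

print(sens_spec(0.7))  # strict threshold: some true cases missed, no false alarms
print(sens_spec(0.5))  # lax threshold: every true case caught, more false alarms
```

<p>In this made-up example, pushing sensitivity from 0.6 up to 1.0 drops specificity from 1.0 down to 0.6: every patient who will die is now flagged, but several who would survive are misclassified as terminal.<\/p>\n<p>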
That means the algorithm would misidentify too many of those patients who will actually live longer than one year, the upper limit for palliative care.<\/p>\n<p>An algorithm must be transparent if outsiders are to understand how it has been optimized. And when it comes to systems that predict the probability of death, optimization parameters should not be the purview of commercial businesses alone. The system\u2019s developers must instead publicly disclose which goals are being pursued with the algorithm and under what conditions it is being used. Both of these aspects must be subject to a public social, political and ethical debate. Moreover, it must be possible to verify the algorithm\u2019s performance.<\/p>\n<p><strong>Algorithms must be thoroughly validated on an ongoing basis<\/strong><\/p>\n<p>It must be possible to validate algorithms if they are to be considered transparent. And an algorithm must be publicly accessible if it is to be publicly validated. Furthermore, a number of questions must be asked when it comes to algorithms of this type, such as:<\/p>\n<ul>\n<li>How reliable are their predictions?<\/li>\n<li>How often do the results include false positives or false negatives?<\/li>\n<li>Are the algorithms truly helpful in achieving the desired goals? What are those goals (e.g. improving access to at-home palliative care or reducing costs resulting from unnecessary treatments and interventions)?<\/li>\n<li>Which framework are they embedded in, i.e. 
which patient groups were they developed for?<\/li>\n<\/ul>\n<p>To answer these questions, publicly funded studies are required.\u00a0<strong><a href=\"https:\/\/www.reframetech.de\/en\/2018\/09\/17\/how-algorithms-can-save-people-from-an-early-death-2\/\" target=\"_blank\" rel=\"noopener\">Multiple studies have already shown, for example, that algorithms used in monitoring systems to predict the chance of a patient\u2019s dying in the near term are highly accurate and can save lives<\/a><\/strong>.\u00a0Such algorithms have already been approved as medical products by health regulators, such as the Food and Drug Administration (FDA) in the US and oversight bodies in Europe.<\/p>\n<p><strong>Approval procedures must be adapted to new developments<\/strong><\/p>\n<p>No standard definition yet exists of what constitutes a \u201cclinical decision support system\u201d (CDSS) as used in medical institutions. Algorithmic input for making decisions can vary considerably in complexity, ranging from a simple warning that a patient\u2019s medical record contains duplicate entries to deep-learning processes which predict when an individual will die.<\/p>\n<p>The criteria a medical product must meet for certification are very complex, yet they generally set <a href=\"https:\/\/www.johner-institut.de\/blog\/medizinische-informatik\/decision-support-systeme-medizinprodukt\/\" target=\"_blank\" aria-label=\"Opens in a new tab\" rel=\"noopener\"><strong>no concrete benchmarks that a CDSS must adhere to<\/strong><\/a>.\u00a0The US is somewhat further ahead than other countries here, since the FDA laid out regulatory guidelines for CDSSs at the end of 2017, including information on which systems will be considered medical products requiring approval and which will not. At the same time, as the systems become more complex, so will the criteria that must be met for approval. 
That, in turn, could make implementing such systems unnecessarily complicated or hinder the development of innovations.<\/p>\n<p>A number of other regulatory issues must be clarified for the algorithmic systems that are used to predict death:<\/p>\n<ul>\n<li>Which institutions should be permitted to use such algorithms? Which should use them as a matter of course? For example, should they only be used at full-fledged medical institutions? Or should organizations that provide at-home palliative care be able to use them? Can primary care doctors use them in their practices? Or only the health insurers and medical associations responsible for deciding whether at-home palliative care is warranted?<\/li>\n<li>How can self-learning algorithms be regulated if they continue to develop on their own and it is not completely clear how they generate their output?<\/li>\n<\/ul>\n<p><strong>Doctors must be trained to communicate predicted deaths<\/strong><\/p>\n<p>Before the age of artificial intelligence, doctors often had to rely solely on their knowledge and wealth of experience. This was also true when the task at hand was informing a patient that they did not have long to live. Yet even if an algorithm existed which could say with 100-percent certainty that a patient is going to die \u2013 which will never be the case \u2013 that would not make it any easier for physicians to communicate the news to patients.<\/p>\n<p>On the contrary, algorithms still lack empathy and morality. And recommendations about a situation as personal and emotional as an impending death should never be made by a computer program. Doctors can use artificial intelligence as an aid, but they will always have to consider the individual as a whole as they reach their decision on what the best way forward is. 
There are more than a few patients who have a 95-percent chance of dying, yet firmly believe they are one of the 5 percent who will recover.<\/p>\n<p>Research shows that doctors tend to be overly optimistic when they predict whether or not their patients will live. That\u2019s only human. Yet how do doctors deal with the fact that algorithms are better at foretelling medical outcomes? Do they include this more objective information in their deliberations? And how do they then tell the patient that their recommendation is based on a prediction made by a machine?<\/p>\n<p>An open discussion is needed about the discomfort many people feel when they learn that computers are predicting when patients will die. What is also needed is thorough training in this area for medical personnel, including additional certification in statistics and programs that impart a basic understanding of how algorithmic systems work. Programs must also be introduced to help clinicians communicate better with patients, especially when the subject is palliative treatment.<\/p>\n<p><strong>An open discussion is needed on the importance of life\u2019s final phase<\/strong><\/p>\n<p>Chemotherapy and surgery can be lucrative for hospitals. Yet how much should be spent on treating the terminally ill when it is clear that the patient does not have long to live? Or when it is clear that the patient will die sooner without treatment, but will not have to spend their final weeks or months in a hospital and will experience fewer side effects? There are no easy answers to these questions.<\/p>\n<p>After all, there are patients who indeed survive despite all the statistics suggesting they will not. Moreover, it\u2019s almost impossible to ascertain if a therapy was unnecessary. Was it unnecessary if it gave the patient a few more weeks? 
A discussion is needed on this topic as well, among both policy makers and society at large.<\/p>\n<p><a href=\"http:\/\/www.independent.co.uk\/life-style\/health-and-families\/features\/the-cost-of-nhs-health-care-deciding-who-lives-and-who-dies-10096784.html\" target=\"_blank\" aria-label=\"Opens in a new tab\" rel=\"noopener\"><strong>In the UK, for example, it has been decided<\/strong><\/a>\u00a0that quality of life can only be improved or a life extended if the price of doing so is reasonable \u2013 meaning, more concretely, if it costs between \u00a320,000 and \u00a330,000 per additional year of life. Cost-benefit analyses of this sort also make people uneasy. What might seem reasonable to a healthy person who is well off could be completely unacceptable to someone who is ill and lacks any income. Yet this is a discussion that must take place all the same, since if there is no transparent, broad discourse there can be no consensus, and cost-benefit decisions will merely be handed off to hospitals, health insurers and providers of palliative care.<\/p>\n<p>Whether this concept \u2013 the quality-adjusted life year (QALY) \u2013 is really the right indicator for such calculations remains <strong><a href=\"https:\/\/www.barmer.de\/blob\/71070\/06873e0cafacb1092caa3e51c7ddad8d\/data\/qalys-in-der-kosten-nutzen-bewertung.pdf\" target=\"_blank\" aria-label=\"Opens in a new tab\" rel=\"noopener\">controversial in Germany<\/a><\/strong>.\u00a0Discussing it is more important than ever, however, given the development of new medical tools such as death-prediction algorithms.<\/p>\n<p><strong>Doctor and patient must always make the decision<\/strong><\/p>\n<p>Regardless of how algorithms for predicting death develop in the future, guidelines must be put in place ensuring that it is ultimately a human being \u2013 a doctor \u2013 who makes the recommendation or decides together with the patient or family what the 
best way forward is.<\/p>\n<p>Whether a cat or an algorithm makes a prediction, there is always a chance it will be wrong. That will always be the case. Moreover, such predictions \u201conly represent the model of a person\u2019s life, never the person themselves,\u201d says Kevin Baum, a computer ethics specialist at Saarland University, speaking to German broadcaster ARD. \u201cMachines are not capable of making decisions in individual cases.\u201d What they also can\u2019t do is something Oscar the cat is particularly good at: keeping people company and making them feel better.<\/p>\n<hr \/>\n<p>This is the third post of a three-part series on the use of algorithms in the calculation of death risks.<\/p>\n<p>Part 1 of the series is here:\u00a0<strong><a href=\"https:\/\/www.reframetech.de\/en\/2018\/09\/17\/how-algorithms-can-save-people-from-an-early-death-2\/\" target=\"_blank\" rel=\"noopener\">\u201cHow algorithms can save people from an early death\u201d<\/a><\/strong><\/p>\n<p>Part 2 of the series is here:\u00a0<strong><a href=\"https:\/\/www.reframetech.de\/en\/2018\/09\/19\/optimizing-palliative-care-when-algorithms-predict-a-patients-death\/\" target=\"_blank\" rel=\"noopener\">\u201cOptimizing palliative care: When algorithms predict a patient\u2019s death\u201d<\/a><\/strong><\/p>\n","excerpt":"<p>Animals are extraordinary beings. Oscar the cat, for example. Adopted as a kitten by the staff at a nursing home [&hellip;]<\/p>\n","thumbnail":"https:\/\/www.reframetech.de\/wp-content\/uploads\/sites\/23\/2018\/03\/daan-stevens-282446-unsplash.jpg","thumbnailsquare":"https:\/\/www.reframetech.de\/wp-content\/uploads\/sites\/23\/2018\/03\/daan-stevens-282446-unsplash-370x370.jpg","authors":[{"id":1308,"name":"Dr. Cinthia Brise\u00f1o","link":"https:\/\/www.reframetech.de\/blogger\/dr-cinthia-briseno\/"}],"categories":[{"id":2,"name":"Uncategorized","link":"https:\/\/www.reframetech.de\/en\/category\/uncategorized\/"}],"tags":[{"id":263,"name":"Algorithm","link":"https:\/\/www.reframetech.de\/en\/tag\/algorithm\/"},{"id":313,"name":"Doctors","link":"https:\/\/www.reframetech.de\/en\/tag\/doctors\/"},{"id":274,"name":"Ethics","link":"https:\/\/www.reframetech.de\/en\/tag\/ethics\/"},{"id":301,"name":"Ethics of Algorithms","link":"https:\/\/www.reframetech.de\/en\/tag\/ethics-of-algorithms\/"},{"id":264,"name":"Medical","link":"https:\/\/www.reframetech.de\/en\/tag\/medical\/"}]}