Ivo Abraham, a nurse by profession and an outcomes and effectiveness researcher by trade, is Professor of Pharmacy and Medicine at the University of Arizona in Tucson, Arizona, where he is also affiliated with the Center for Health Outcomes and PharmacoEconomic Research, the Arizona Cancer Center, and the Center for Applied Genomics and Genetic Medicine. Dr. Abraham has served as regular or visiting professor at universities in the United States, Europe, and Asia. A native of Belgium, he received his BS (psychiatric nursing) from the Catholic University of Leuven, and his MS (psychiatric-mental health nursing) and PhD (clinical research) from the University of Michigan.
His research program has been funded continuously since 1984 by governmental agencies, foundations, and corporations worldwide. He has served in the U.S. as an appointed and ad hoc reviewer for the National Institutes of Health (NIH), the National Institute of Mental Health (NIMH), the Agency for Healthcare Research and Quality (AHRQ), and the Veterans Administration. Additionally, he has served as an appointed and ad hoc reviewer in Europe for the EU FP7 and HORIZON2020 funding programs, and in Canada, Japan, The Netherlands, Ireland, and Catalunya (Spain) for national funding agencies. Since the Initiative's inception in 2008, he has worked as an expert advisor to the Innovative Medicines Initiative, a joint €5.3 billion (US$6.3 billion) undertaking of the European Union and the biopharmaceutical industry to stimulate innovation in human therapeutics.
He currently serves as the associate editor for quantitative methods for JAMA Dermatology. He has co-authored more than 350 articles, over 75 chapters, and 30+ books and monographs. His educational and scientific honors and awards include an Invitational Research Fellowship from the Japan Society for the Promotion of Science (2007-2008), which he conducted at Hyogo University and Aomori University.
The promises of Big Data are intuitively appealing: (virtually) unlimited data that will enable us to answer (virtually) any question we may have. Unfortunately, in and of themselves, Big Data are rather useless. They require Deep Analytics: inquiring people equipped with engines of analysis to explore, discover, and invent.
What should these inquiring people focus on? In The Emperor of
All Maladies, Siddhartha Mukherjee identifies three new directions
for cancer medicine: therapeutics, prevention, and explaining the
(genetic) behavior of cancer. With Big Data, we can cover these
three fronts simultaneously: molecules to models of care; patients
to populations; and empirics to evidence.
What are the engines of analysis in Deep Analytics? Conventional
biostatistics will continue to be useful, but only to generate more of the same: more description, though with greater precision; more comparisons between groups, only across more and larger groups; more Kaplan-Meier curves, but still dipping down against time; and more regressions predicting one variable from other variables, but with greater accuracy. We need to build bridges to disciplines outside healthcare and integrate their analytical methods.
To give some examples, complexity reduction analytics help us
find embedded structures, patterns, and trends in patients, diseases, treatments, and outcomes—in time and over time. Signals of interest may be crowded out by other signals; discrimination analytics assist us in distinguishing between signals and extracting the signals of interest. Aggregation methods help us find patients, symptoms,
diseases, treatments, and outcomes that are similar and dissimilar, and cluster together or differentiate themselves. We may be able to identify profiles of patients at risk of poor treatment outcomes, or most likely to benefit from a given treatment. We can shift from identifying patient risk factors to anticipating, identifying, and managing patients at risk. We can detect patterns of variables and processes that explain why some patients respond to treatments, why others do not, and why most do to some extent.
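To make the aggregation idea concrete, here is a minimal sketch, not drawn from the text, of k-means clustering over hypothetical patient profiles. The features (age, symptom-severity score), the data, and the choice of two clusters are all invented for illustration; in practice one would use an established library rather than this hand-rolled version.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """A bare-bones k-means: assign each point to its nearest center,
    then move each center to the mean of its assigned points."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initialize centers at k distinct points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # index of the center with the smallest squared distance to p
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # recompute each center as the coordinate-wise mean of its cluster
        centers = [tuple(sum(v) / len(v) for v in zip(*cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers, clusters

# Hypothetical patient profiles: (age, symptom-severity score)
patients = [(34, 2.1), (36, 1.9), (35, 2.3), (71, 7.8), (68, 8.1), (74, 7.5)]
centers, clusters = kmeans(patients, k=2)
```

On this toy cohort the two groups separate cleanly: younger patients with low severity scores cluster apart from older patients with high scores, which is the kind of embedded grouping aggregation methods are meant to surface.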
In this, we should use analytics that let the data speak for themselves, rather than have them say what we want them to say. We can test
“causal” models that help us understand the interplay of various
factors in treatment outcomes. We should let data sketch out patterns of cause and consequence, of predisposition and exception, of treatment and outcomes. We may let data draw themselves out into flow charts that help us understand what happens as patients are treated, or into decision trees that assist us in deciding which patients would benefit most from an array of treatment options. To better plan treatment, we can develop complex and targeted simulations of treatments and treatment outcomes based on patient and disease characteristics. Lastly, we should combine "old" engines with the more recent generation of artificially intelligent engines. As much of Big Data is unstructured, natural language processing engines can extract data out of text or speech. Machine-learning engines work from data presented to them to construct prediction and decision models and algorithms.
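As a miniature illustration of how a machine-learning engine constructs a decision rule from data, the sketch below learns a one-level decision tree (a "decision stump") from a hypothetical cohort. The features (biomarker level, age), the response labels, and the data are invented for illustration only; real engines grow far deeper trees over far more variables, but the principle—search the data for the split that best separates outcomes—is the same.

```python
def best_stump(rows, labels):
    """Search every feature and threshold; return the single split
    (accuracy, feature index, threshold) that best separates the labels."""
    best = None
    for f in range(len(rows[0])):
        for t in sorted({r[f] for r in rows}):
            # Candidate rule: predict 1 when feature f exceeds threshold t
            preds = [1 if r[f] > t else 0 for r in rows]
            acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
            if best is None or acc > best[0]:
                best = (acc, f, t)
    return best

# Hypothetical cohort: (biomarker level, age); label 1 = responded to treatment
rows = [(0.2, 60), (0.3, 45), (0.8, 50), (0.9, 62), (0.7, 41), (0.1, 55)]
labels = [0, 0, 1, 1, 1, 0]
acc, feature, threshold = best_stump(rows, labels)
```

Here the learned rule is "biomarker level above 0.3 predicts response," found purely from the examples presented—a toy instance of the prediction and decision models the text describes.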