The late visionary theoretical physicist Stephen Hawking was not only a leading scientist in the study of black holes, quantum mechanics and cosmology; he also professed strong views about artificial intelligence (AI). AI, he warned, could be “the best, or the worst thing, ever to happen to humanity. We do not yet know which.” Hawking’s fear was not only that AI would destroy far more jobs than it creates, but that its “machine learning” might one day cause it to “develop a will of its own, a will that is in conflict with ours and which could destroy us.”

While other top scientists disagree with those pessimistic assessments, AI poses another danger: that we will fail to harness the power of this technology effectively and never harvest its full potential. A number of leading scientists are concluding that the disparate AI research efforts underway – from China to Israel to Germany to Silicon Valley, whether housed in large corporate laboratories or small university labs – lack a grand vision capable of adequately serving humanity. Much of the research is focused on narrow problems, such as maximizing advertising revenue, that are not game changers capable of addressing the world’s greatest challenges.

Consider the impact of the best-known use of AI: Facebook’s engagement algorithms. Facebook uses surveillance of our online behavior to capture our personal data and build a profile of each individual user. The goal is to generate increasingly accurate, automated predictions of which advertisements we are most susceptible to. Sophisticated psychology and “psychographic messaging” are at work, creating what the company’s first president, Sean Parker, recently called “a social-validation feedback loop.” That architecture was manipulated by outfits like Cambridge Analytica and, allegedly, by Russian cyber-operatives to spread fake news and try to sway the 2016 US presidential election. Facebook’s algorithmic operation is impressive in its reach and impact, but it is also warped. Moreover, it is giving AI a bad name instead of demonstrating what the technology might accomplish for humanity.
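How such a system works can be sketched in a few lines. The following is a toy illustration only, not Facebook’s actual (proprietary) pipeline; the feature names and data are invented for the sake of the example:

```python
# Toy sketch of an engagement-prediction loop (illustrative only).
# Real platforms use far richer behavioral profiles and proprietary models.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Each row is a "profile" built from tracked behavior (invented features):
# [minutes_on_site, like_rate, topic_affinity]
profiles = rng.random((1000, 3))

# Synthetic historical labels: did the user click the ad they were shown?
clicked = (profiles @ np.array([0.8, 1.5, 2.0])
           + rng.normal(0, 0.3, 1000) > 2.0).astype(int)

model = LogisticRegression().fit(profiles, clicked)

# Score a new user. A platform shows whatever maximizes this probability,
# and the resulting click (or non-click) becomes the next training
# example: the feedback loop Parker describes.
new_user = np.array([[0.9, 0.7, 0.8]])
print("Predicted susceptibility:", model.predict_proba(new_user)[0, 1])
```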

Google has also been accused of manipulating users with its algorithms, whether on YouTube or in its search engine. The European Commission fined the company €2.4 billion for manipulating search results to favor its own comparison-shopping service. Amazon has been accused of using its algorithms for price gouging during Hurricane Irma and of hiding the lowest-price deals from customers in favor of its own products.

In these cases, the power of algorithms is being put to work towards very questionable ends, often with unforeseen consequences. Little transparency is built into how they operate, and the companies offer few explanations. Algorithms are gaining a negative reputation among the public. Moreover, a number of experts are growing concerned that the enormous datasets used for AI algorithms, and the machine learning employed to mine them, are becoming increasingly opaque. Viktor Mayer-Schönberger, professor at Oxford University and co-author of ‘Big Data: A Revolution That Will Transform How We Live, Work, and Think’, writes, “The algorithms and datasets behind them will become black boxes that offer us no accountability, traceability, or confidence.” In the past, most computer code could be opened and inspected, making it transparent. But with AI and its enormous datasets, that kind of inspection has become much harder.
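A toy contrast makes the black-box point concrete. In the sketch below (invented data and rule, not any real company’s system), the hand-written function can be read and audited line by line, while the trained network’s behavior is encoded in weight matrices that are just arrays of numbers:

```python
# Transparent logic: every decision can be read and audited directly.
def approve_by_rules(income, debt):
    return income > 50_000 and debt / income < 0.4

# Opaque logic: a small neural network trained to mimic the same rule.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.random((500, 2)) * [100_000, 40_000]   # synthetic (income, debt) pairs
y = [approve_by_rules(income, debt) for income, debt in X]

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)

# Inspecting the learned parameters tells a human almost nothing about
# *why* any particular applicant was approved or rejected.
print(net.coefs_[0])
```

At the scale of real systems, with millions of parameters trained on undisclosed data, the inspection problem Mayer-Schönberger describes becomes correspondingly harder.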

Facebook and Google say they are going to take measures to prevent the manipulation and misuse of their algorithmic systems. But even if they succeed, these companies are still solving very narrow, often commercially driven challenges, like optimizing advertisements or keeping users on their platforms as long as possible. Large retailers like Amazon use predictive analytics algorithms to anticipate your next purchases and offer discounts and coupons to incentivize consumption. On YouTube and other entertainment sites, algorithms make music and movie recommendations. Certainly, large segments of the public continue to find these services valuable, but a vast amount of research and product development is going into deploying AI for things that are not game changers – that is, uses that neither help humanity solve the great challenges of the 21st century nor greatly serve the public good.
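As a small illustration of the “anticipate your next purchase” style of algorithm, here is a minimal item-to-item co-occurrence recommender. The purchase data is invented, and production systems at retailers like Amazon are vastly more sophisticated, but the core “people who bought X also bought Y” logic is the same:

```python
# Minimal "customers who bought X also bought Y" recommender (toy data).
from collections import Counter
from itertools import combinations

# Synthetic purchase histories: one set of items per customer.
baskets = [
    {"coffee", "filters", "mug"},
    {"coffee", "filters"},
    {"coffee", "mug", "biscuits"},
    {"tea", "mug"},
]

# Count how often each ordered pair of items is bought together.
co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(item, k=2):
    """Suggest the k items most often co-purchased with `item`."""
    scores = {b: n for (a, b), n in co_counts.items() if a == item}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("coffee"))  # e.g. ['filters', 'mug']
```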

It is true that AI is increasingly being deployed for a number of important purposes, such as medical diagnosis, which benefits from machine learning’s very powerful pattern-matching. IBM has experimented with using artificial intelligence to ease China’s smog problems. The field is also advancing through micro-discoveries in intriguing areas such as speech recognition, image classification, autonomous vehicles, legged locomotion and question-answering systems. But those are still relatively small, narrow projects compared with the large and complex global issues that humanity faces.

According to Dr. Gary Marcus, a researcher and professor of psychology and neural science at New York University, not only is much AI research focused on unambitious goals, the technology also continues to founder in the real world. Some of the best image-recognition systems, for example, can successfully distinguish dog breeds, yet remain capable of “major blunders, like mistaking a simple pattern of yellow and black stripes for a school bus,” says Marcus. Such systems can neither comprehend what is going on in complex visual scenes (“Who is chasing whom and why?”) nor follow simple instructions (“Read this story and summarize what it means”).

Google Translate, for example, which approximates translations by statistically associating sentences across languages, cannot understand what it is translating. Instead, says Marcus, such AI systems are very limited and “tend to be passive vessels, dredging through data in search of statistical correlations.”
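Marcus’s point about “dredging through data in search of statistical correlations” can be caricatured in a few lines. The phrase-table lookup below is a deliberately naive sketch of correlation-based translation, not how Google Translate actually works; it maps strings to strings with no representation of meaning at all:

```python
# Deliberately naive phrase-table "translator": pure string association,
# with no model of what the words mean (toy data, English -> German).
phrase_table = {
    "the cat": "die Katze",
    "sat on": "saß auf",
    "the mat": "der Matte",
}

def translate(sentence):
    words = sentence.split()
    out, i = [], 0
    while i < len(words):
        pair = " ".join(words[i:i + 2])
        if pair in phrase_table:
            out.append(phrase_table[pair])
            i += 2
        else:
            out.append(words[i])  # unknown phrase: pass it through unchanged
            i += 1
    return " ".join(out)

print(translate("the cat sat on the mat"))  # die Katze saß auf der Matte
```

A system like this can produce fluent-looking output while having no way to answer even a trivial question about the sentence it just processed.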

Professor Michael Feindt, founder of Blue Yonder, a predictive analytics software company, says that although machine learning is advancing, it remains a long way from the artificial intelligence we imagine. “We are very far away from a machine that is ‘intelligent’ in the very wide sense we attribute it to normally,” he explains. “Even the best algorithms do not ‘think’, ‘feel’ or ‘live’, they have no ‘self-consciousness’ and no ‘free will’.”

AI research methods: Narrow focus and siloed thinking

Marcus, who is both an academic and the founder of a start-up company, Geometric Intelligence, sees AI as completely over-hyped. “The dirty little secret,” he says, “is that it still has a long, long way to go.” The problem lies in how AI research is organized. There are two basic approaches: one used at large labs in private industry, the other at university research labs. Neither is well positioned to succeed at truly groundbreaking research, says Marcus.

The problem with academic labs is that they are fairly small. The development of automated machine reading, which is a foundation for building any truly intelligent system, requires expertise in several sub-fields of AI. Taken altogether, this “represents a lifetime of work for any single university lab,” Marcus says. The problem is simply too big for any one lab to take on.

On the other hand, corporate labs like those at Google and Facebook have considerably more resources at their disposal, but, in their narrowly focused for-profit world, they tend to concentrate on problems related to automating advertising and monitoring content. While such research has its place, it is not likely to result in major breakthroughs.

Dr. Philipp Slusallek, scientific director at the German Research Center for Artificial Intelligence (DFKI), says that most AI research activities “are largely done in isolation, siloed by groups creating their own methods, infrastructure, and datasets. There is currently no systematic approach, nor a common platform for bringing these different pieces together.”

Adding to the research complexity, competition between nations is starting to resemble a modern-day arms race, and that rivalry is beginning to define the parameters of AI research. As in the Cold War, research is being zealously guarded by both the private and public sectors, whether as trade advantages or, in some cases, as national security and military secrets. Consequently, the research taking place in certain nations, as well as at private corporations, is highly opaque and therefore difficult to track. The current pursuit of AI risks replicating existing global inequities and exacerbating tensions among major powers.

The US remains the global leader, but other nations are catching up, especially China. Quantifying China’s spending on AI is difficult, because authorities there disclose very little information. But a data analysis for Times Higher Education shows that China now produces almost twice as many papers on artificial intelligence as the US, with Japan third and the UK fourth. However, China ranked only 34th in citation impact, suggesting that most of its papers are not of the same high quality as those coming from the US and many other countries. US universities, led by MIT and Carnegie Mellon, were the global leaders in citation impact. As for funding AI development, China is catching up there as well. In the private sector, one Chinese company alone, Alibaba, is investing $15 billion in AI research and quantum computing – an amount that dwarfs spending by most governments, including the recently announced €1.5 billion initiative from the European Commission.

While Chinese research output has skyrocketed in recent years, it remains a black box driven by national security and commercial applications. Recent rollouts of facial recognition software, pervasive rating algorithms and other technologies are forging a “social credit system” with troubling potential for sweeping consumer monitoring and social control. Ideally, AI development should benefit humankind as a whole rather than retreating into national silos.

Competition or collaboration: a CERN for AI?

Given the constraints of the current research paradigm, scientists and policy advocates are increasingly concluding that a more collaborative, international endeavor is necessary to truly harness the power of AI and ensure it is used for beneficial purposes. In June 2017, representatives from the World Health Organization and a number of other United Nations agencies were joined by AI experts, policy makers and business leaders in Geneva, Switzerland, for a three-day summit called “AI for Good.” The goal was to provide a forum for a nonpartisan, noncommercial evaluation of the possibilities AI offers to benefit all of humanity. The participants discussed how AI research could focus on developments that would help everyone – from reducing inequality and improving the environment to building services for emerging countries, where people increasingly have access to smartphones. Facebook, for example, reaches nearly 800 million users in the developing world, where the company has tailored its app for low-bandwidth connections and less expensive Android phones.

“This is where AI can learn from other disciplines,” says Slusallek. “For example, the Human Genome project has brought researchers from around the globe together to jointly and systematically study human genomics. As a result, we have made huge progress in just a few years that would have been impossible without these coordinated efforts.”

Marcus echoes that observation, going so far as to propose the creation of a CERN for AI. CERN, the European Organization for Nuclear Research, is one of the world’s largest and most respected centers for high-energy physics and particle acceleration. It is a huge international collaboration, with thousands of scientists and billions of dollars in funding provided by dozens of nations. Following that model, scientists and experts like Marcus are calling for an international consortium focused on pure AI science and research in service of the public good. This would be a highly interdisciplinary effort, bringing together scientists from many areas in close collaboration and interaction with industry, politics and the public. A key element would be advancing the many strands of research in this area by developing common approaches, methods, algorithms, datasets and applications. It would build the basic foundational architecture for AI that other, smaller efforts in university and private labs could plug into. It is a hands-on, practical proposal that many different organizations, leaders – and nations – could potentially unite around.

“The integrated platform should be as open and flexible as possible to promote research and fast experimentation,” Slusallek says. Yet it also needs to facilitate the transfer of results and datasets to and from industry to encourage commercial applications and spinoffs. “These capabilities should not be locked behind closed doors but developed with full transparency, including open data where possible,” he explains. “Progress in AI must benefit everyone, creating ‘AI for Humans.’”

This is almost the exact opposite of the way AI research is conducted today. There are a few small collaborative efforts, such as Elon Musk’s OpenAI, but that organization has only 50 staff members. The dominant paradigm remains secrecy, noncooperation and commercial competition. Private versus public, nation versus nation – the race is on to determine not only the architecture of the digital and AI future, but who will control its commercial, scientific, military and national security dimensions.

Instead of a race to AI supremacy, a CERN for AI could produce research and basic architecture that would be made publicly available, so that more nations and their populations could benefit from it. The world can engage in a cooperative, international effort to develop these powerful AI technologies for humanity, or we can have a trade war – or, even worse, an arms race. In recent years, we have seen a pullback from globalization toward more nationalist attitudes and policies. A resurgence of internationalism focused on AI could be just what the world needs.