
In September 2016, rivals Google, Facebook, Amazon, IBM, and Microsoft joined forces to create the Partnership on AI to Benefit People and Society (Partnership on AI). In January 2017, even Apple pushed past its famous secrecy to join this new cause along with six nonprofits: the Association for the Advancement of Artificial Intelligence (AAAI); the ACLU; OpenAI; UC Berkeley; the MacArthur Foundation; and the Peterson Institute for International Economics (PIIE). The Partnership on AI seeks to establish and share best practices for artificial intelligence (AI) systems.

The Partnership on AI just announced on its blog that 22 new organizations joined the consortium. They are committing themselves to several new initiatives such as working groups, “a challenge series to inspire people to use AI to address social issues,” an “AI, People, and Society” Best-Paper Award, and a civil society fellowship program.

The eight new companies are eBay, Intel, McKinsey & Company, Salesforce, SAP, Sony, Zalando, and a startup called Cogitai. The 14 new nonprofit partners are the Allen Institute for Artificial Intelligence, AI Forum of New Zealand, Center for Democracy & Technology, Centre for Internet and Society – India, Data & Society Research Institute, Digital Asia Hub, Electronic Frontier Foundation, Future of Humanity Institute, Future of Privacy Forum, Human Rights Watch, Leverhulme Centre for the Future of Intelligence, UNICEF, Upturn, and the XPRIZE Foundation.

Humanity is at an inflection point in the evolution of artificial intelligence. Due to the relatively recent convergence of ever-increasing computer power, big data, and algorithmic advances, AI today offers both benefits and threats. The Partnership on AI addresses issues such as transparency, security and privacy, values, and ethics, and provides an open and inclusive platform for discussion and engagement.

According to Elon Musk, when it comes to artificial intelligence, “we are summoning the demon.” The subtext is that a dystopian future awaits humanity if machine learning begins “breaking bad” and goes unchecked, or that we may be engineering our own extinction. Though regulation is not presently among its objectives, the Partnership on AI may eventually help inform government oversight of AI advancements at the national and international levels.

In the near term, there are other worries and opportunities. AI may present challenges such as fairness and potential biases in algorithms, which is something the ACLU in particular may be watching. UNICEF is already applying machine learning, data science, and AI to societal problems, in line with one of the Partnership on AI’s thematic pillars. UNICEF developed the Magic Box platform, which allows collaborators like IBM, Google, Amadeus, and Telefonica to pool data and develop models for real-time decision-making in emergencies.

These challenges require deep interdisciplinary and cross-sector collaboration. The Partnership on AI demonstrates laudable responsibility and leadership: otherwise fiercely competitive companies and individuals working together, and with leading nonprofit organizations, to safeguard the future. With the advent of self-driving cars, artificially intelligent pacemakers, trading systems, power grids, and so much else, there are reasons for concern. Preventing an arms race in lethal autonomous weapons is an effort Human Rights Watch and other NGOs joined in 2012, and the UN is placing “killer robots” on its agenda this year.

Ultimately, the future of AI depends on who controls it, and on whether it can be controlled at all. AI is like nuclear energy, capable of both great good and terrible harm. Right now, the Partnership on AI is seeking leadership of its own as it searches for its first executive director. Interested candidates should contact the search firm, Isaacson & Miller.—James Schaffer

Sources: NPQ; TechRepublic