How Does the Mainstream Media Cover Artificial Intelligence?

September 13, 2020



A focus on the portrayal of ethical issues about artificial intelligence in the media


Artificial intelligence, like all technologies, is neutral. Yet in recent times it has been portrayed both as a panacea for all problems and as an apocalyptic force. The discourse around big data, climate change, privacy and complex business processes is getting more complicated, shriller and more of an everyday reality, so it is quite natural that artificial intelligence is seen as a tool to make sense of it all. The ethics of how this tool is being used, however, remains a matter of debate. A recently published research paper on how the mainstream media covers the discourse around AI focuses on some interesting issues. The study matters because it reflects on public discourse about novel technologies: studying how the ethical issues of AI are portrayed in the media may lead to greater insight into the potential ramifications of this discourse, particularly for the development and regulation of AI. The paper, “AI in the headlines: the portrayal of the ethical issues of artificial intelligence in the media”, is authored by Leila Ouchchy, Allen Coin and Veljko Dubljević of the Department of Philosophy and Religious Studies, North Carolina State University.






*Media has a realistic and practical focus in its coverage of the ethics of AI


*Media coverage of AI is still shallow. A multifaceted approach to handling the social, ethical and policy issues of AI technology is needed, including:


*Increasing the accessibility of correct information to the public, in the form of fact sheets and ethical value statements on trusted webpages (e.g., those of government agencies)


*Collaboration with, and inclusion of, ethicists and AI experts in both research and public debate


*Clear and consistent government policies or regulatory frameworks for AI technology, which countries urgently need




AI and Ethical Questions


The increasing prevalence of AI has drawn it into a wide range of ethical debates. The authors of the study reflect on questions posed in public fora, including how “AI can be programmed to make moral decisions, how these decision-making processes can be made sufficiently transparent to humans, and who should be held accountable for these decisions”. The authors argue that a “crucial objective in the development of ethical AI is cultivating public trust and acceptance of AI technologies.” This is especially so when the most successful approaches to AI development and monetization (largely bottom-up and, to a certain extent, hybrid approaches, according to the authors) create a lack of transparency. Additionally, the media has a large impact on the way issues in every field are framed for the public, so public opinion and acceptance of AI is likely to be shaped by media coverage.



AI, Liberal Democracy and Public Opinion


The authors, Ouchchy, Coin and Dubljević, draw attention to the fact that the public, as both consumers in the market economy and constituents of liberal democracy, are key stakeholders in technology adoption and, to a certain extent, in public policy and regulatory oversight. “Public opinion can affect what kind of AI is developed in the future and how AI is regulated by the government. For these reasons, it is important to analyze how issues of AI and ethics are portrayed in the media,” the study claims. The authors note that in certain instances where a topic is highly stigmatized, there is a marked disconnect between media representation and public opinion, but they see no initial reason to suspect that this would be the case with AI ethics. Ouchchy, Coin and Dubljević emphasize that it is important now to gather more data and ascertain “if a mismatch in public expectations is actually occurring.” They also warn that “the lack of engagement in the public discussion by experts and informed decision-makers may lead to a polarization effect in the public, where “hype and hope” and “gloom and doom” perspectives distort the debate.”


The method


The authors searched a large archive of news articles with search terms such as “Artificial Intelligence” or “Computational Intelligence” or “Computer Reasoning” or “Computer Vision Systems” or “Computer Knowledge Acquisition” or “Computer Knowledge Representation” or “Machine Intelligence” or “Machine Learning” or “Artificial Neural Networks”, combined with (“Morals” or “Moral” or “Morality” or “Ethic” or “Metaethics”). From more than 900 articles on the topic, the authors chose 254.
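The boolean filter described above can be sketched in a few lines: an article qualifies if it contains at least one AI-related term and at least one ethics-related term. This is a hypothetical re-implementation for illustration only (the headlines used below are invented, and the study's actual database query may have differed in detail):

```python
# Sketch of the study's boolean search filter (illustrative, not the
# authors' actual query): match = (any AI term) AND (any ethics term).

AI_TERMS = [
    "artificial intelligence", "computational intelligence",
    "computer reasoning", "computer vision systems",
    "computer knowledge acquisition", "computer knowledge representation",
    "machine intelligence", "machine learning", "artificial neural networks",
]
ETHICS_TERMS = ["morals", "moral", "morality", "ethic", "metaethics"]

def matches(text: str) -> bool:
    """Return True if text contains an AI term AND an ethics term (case-insensitive)."""
    lowered = text.lower()
    has_ai = any(term in lowered for term in AI_TERMS)
    has_ethics = any(term in lowered for term in ETHICS_TERMS)
    return has_ai and has_ethics

# Invented example headlines:
print(matches("Machine learning raises new ethical questions"))  # True
print(matches("Machine learning speeds up drug discovery"))      # False
```

Note that a plain substring test like this also catches derived words such as “ethical” (via “ethic”), which is roughly how keyword-stem searches in news databases behave.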


The authors coded the news articles on the basis of the following parameters:


*Ethical issue: the ethical issues discussed in the article relating to the ethics of AI (e.g., privacy).


*Principles: established ethical theories or principles based on an ethical framework explicitly mentioned in the article (e.g., utilitarianism).


*Recommendation: recommendations given or presented in the article related to the ethics of AI.


*Tone: the tone of the article regarding AI (enthusiastic, balanced/neutral, or critical).


*Type of technology: specific AI technologies discussed (e.g., autonomous vehicles).








Only two of the 254 articles did not have an identifiable issue, while 46 articles did not offer a recommendation. Just 28 articles explicitly mentioned or drew on principles based on ethical frameworks. The authors also coded each article's tone as enthusiastic, balanced/neutral, or critical.


A majority of the articles analysed, according to the authors, had a balanced or neutral tone: 173 articles were coded as balanced/neutral, 55 as critical, and 26 as enthusiastic. The authors also noticed a slight shift from enthusiastic to critical articles from 2014 to 2015, while in 2016 the levels were more even. In 2017 and 2018, however, there were significantly more balanced/neutral and critical articles than enthusiastic ones.


One of the key issues, the authors argue, is “whether the media portrayal of emerging ethical issues in AI constitutes an overreaction that could be transferred to the public.” They claim that “overreactions risk slowing or stalling the adoption of AI.”




What have the authors proposed with respect to capturing public debate on AI in media?


The authors argue that public debate on the ethics of AI should adequately capture the hopes and fears arising from the rapid introduction of AI technology into society. From the articles analysed during the study, the authors noted an initial optimism and enthusiasm in reporting, especially in 2014, followed by mostly critical or balanced tones in more recent years. Even articles covering deeply personal issues, such as job loss to AI, had overwhelmingly balanced or neutral tones. This may be, the study reasons, because “prior social experience with the rapid adoption of technology with potentially disruptive effects on the workforce dictates a more cautious approach,” and also because “there is a healthy dose of appreciation of the charge that AI is ‘turning many workplaces into human-hostile environments’.”




Media discussion of AI in general


The authors found that the articles addressed 37 different kinds of AI technologies. The most commonly mentioned were autonomous vehicles (AVs), autonomous weapons, and military applications. Some 181 articles, or 71% of the total, covered AI as a general topic, and 96 articles were coded as “General AI” among other types. This suggests that the media discussion around AI is less focused on specific types of AI, though it often uses specific types of AI as examples in a broader discussion.




AI in the media: issues discussed


The authors came across a number of issues pertaining to AI and ethics, and organized these codes into nine strata: “undesired results,” “accountability,” “lack of ethics in AI,” “military and law enforcement,” “public involvement,” “regulation,” “human-like AI,” “best practices,” and “other.”


The articles on the ethics of AI warned about AI going rogue, the authors claim. The fear of “undesired results,” or negative outcomes of AI, was by far the largest stratum, with 33 news articles. Other issues included “protecting humans from AI,” “equity,” “economy,” “control of AI,” “politics,” and “human reliance on AI.” The highest number of occurrences of such articles was in 2018, with over double the number recorded in 2017.


The “undesired results” stratum included three of the top five most common issue codes: “prejudice,” “privacy/data protection,” and “job loss to AI/economic impact of AI.” The most common code, with 57 occurrences, was “prejudice,” which often showed up in articles expressing concern over algorithms being biased against certain minority groups. According to the authors, 56 other articles highlighted the dangers that can come from the reflection, and often magnification, of human biases in AI. The second most common issue discussed was “privacy/data protection,” with 55 occurrences, followed by 39 occurrences of concern over “job loss to AI or the economic impact of AI.”






What can be done with respect to AI coverage in mainstream media?


* Increase the breadth and depth of public debate, as well as the participation of relevant stakeholders


* Collaboration and inclusion of ethicists and AI experts in both research and public debate can make media coverage of AI more sophisticated in its content and help avert possible undesired social outcomes


*There is a need to harness potential benefits of AI while at the same time decreasing or mitigating the potential negative effects on society


*After the fatal Tesla Autopilot accident of 2016, there is a considerable push toward clear policies that recognize the entitlement of individuals to challenge any decision made by algorithms.


*A multifaceted approach to handling the social, ethical and policy issues of AI technology is needed