Specific aspects of debates and policy guidelines around AI’s design and capabilities do not appear to have been generated directly around public conversations, and yet respect for and inclusion of these conversations is essential with a view to "crafting informed policy and identifying opportunities to educate the public about AI’s character, benefits, and risks" (Zhang and Dafoe, 2019, p. 3). Of significance is that public perceptions are to some extent moulded by the mass media, which may exaggerate AI’s capacities as well as obfuscate ethical and epistemological issues. Such muddying of the waters, as it were, does little to facilitate substantive deliberation around AI and obstructs informed discussion around how it should be regulated (Cave et al., 2018, p. 4).1 Even a cursory scrutiny of newspaper headlines across Africa demonstrates that media outlets have a tendency to frame AI in terms of utopian or dystopian rhetoric.
BOTSWANA: "Tech Innovation Enlisted in GBV War" (Botswana Guardian, 30 July 2021)
EGYPT: "Egypt to Introduce Artificial Intelligence in Irrigation Water Management" (Egypt Today, 10 August 2020)
GHANA: "Government Urged to Act Swiftly to Prevent ‘Killer Robots’ Development" (Ghana News Agency, 27 August 2021)
NAMIBIA: "AI, Biometrics and No Protection from Abuse" (Namibian, 24 February 2021)
NIGERIA: "Machine Learning May Erase Jobs, Says Yudala" (Daily Times, 28 August 2017)
SOUTH AFRICA: "Artificial Intelligence Trained to Identify Lung Cancer" (The Citizen, 22 May 2019)
AI is omnipresent, embedded as it is in smartphones, chatbots, voice assistants, global positioning systems, spam email filtering, and so forth. Yet few scholars in Africa have closely examined how the media may shape societies’ perceptions of AI (Brokensha, 2020; Brokensha and Conradie, 2021; Guanah and Ijeoma, 2020; Njuguna, 2021). In their content analysis of three popular Nigerian newspapers with high circulation rates, and in line with other studies’ findings (e.g., Ouchchy, Coin and Dubljević, 2020), Guanah and Ijeoma (2020) note that across all three outlets coverage of AI was fairly superficial, given that reports did not interrogate AI’s multiple facets and impacts in any in-depth manner. They conclude that newspapers are obliged to critically appraise AI, contending that "Since automation may be the future, newspapers must start to intensify the education of the public about AI" (Guanah and Ijeoma, 2020, p. 57). An insightful study by Njuguna (2021) of online users’ comments generated around East African news outlets’ reports on sex robots points to most users perceiving "the robots as ‘destroyers’ of the God-ordained family unit and tools of dehumanizing women, and thus morally contradictory to Christian teaching" (Njuguna, 2021, p. 382). Studies on media framing of AI in South Africa suggest that, as is the case globally (e.g., Duberry and Hamidi, 2021; Fast and Horvitz, 2017), AI is depicted in dramatic or alarmist terms that are not aligned with reality (Brokensha, 2020; Brokensha and Conradie, 2021). Brokensha (2020) and Brokensha and Conradie (2021) have found that press coverage reflects a tendency to employ anthropomorphic tropes that either stress an AI system’s human-like form/social attributes or describe its cognitive capabilities.
With respect to the former type of anthropomorphism, these researchers note that a human-like appearance or human-like traits are commonly ascribed to AI-enabled social companions or digital assistants in the areas of human-AI interaction, healthcare, and business and finance. With respect to the latter, and particularly in the context of machine learning and neural networks, journalists to some extent portray AI systems as surpassing human intelligence.
When it comes to developing social robots, anthropomorphic design is not uncommon, given that it goes some way to enabling acceptance of and interaction with robots (Fink, 2012, p. 200; cf. Darling, 2015). While this type of anthropomorphism poses a number of significant and potential problems, such as that related to the perpetuation of heteronormativity (Ndonye, 2019) or the establishment of para-social relationships between users and machines (Boch, Lucaj and Corrigan, 2021, p. 8), we are of the view that cognitive anthropomorphism of AI by the media should at this stage concern us more, one of the main reasons being that African policies and outlooks on technology continue to point to techno-deterministic assumptions at the expense of social context and human agency (Ahmed, 2020; Diga, Nwaiwu and Plantinga, 2013; Gagliardone et al., 2015; Williams, 2019). Writing in the context of journalism research in Africa, Kothari and Cruikshank (2021, p. 29) contend that rather than underscoring technochauvinism, the focus needs to shift to equipping humans with the skills they need to grasp AI’s consequences and benefits.
Both in and across The Citizen, Daily Maverick, Mail & Guardian Online, and SowetanLIVE, Brokensha and Conradie (2021) established that journalists are disposed to framing AI systems as matching or transcending human intelligence. Thus, typical headlines or claims were those such as "AI better at finding skin cancer than doctors: Study" (Daily Maverick, 29 May 2018) (see Brokensha and Conradie, 2021, para. 21) and "A computer programme […] learnt to navigate a virtual maze and take shortcuts, outperforming a flesh-and-blood expert" (The Citizen, 9 May 2018) (see Brokensha and Conradie, 2021, para. 20). Of interest is that in recognising the dimension of uncertainty in emerging technologies, journalists may attempt to mitigate how they frame AI through the use of discursive strategies such as scare quotes and paraphrases/quotations of various actors’ voices that effectively frame AI’s cognitive capacities in dualistic terms (Brokensha, 2020; Brokensha and Conradie, 2021). What is unfortunate about employing competing frames, however, is that they reflect a false balance (Boykoff and Boykoff, 2004, p. 127) that may in turn make it difficult for readers to make a distinction between reality and falsehood (Brokensha and Conradie, 2021). White and Lidskog (2021) stress that to "outsiders" such as the general public, the nature of AI and its associated risks are generally unfathomable, and a rather sobering thought in this regard is that they may also be fairly incomprehensible to "insiders", such as AI developers and researchers. Dualistic framing aside, conflating artificial intelligence and human intelligence is misleading, conveying the message to the general public that AGI has come to fruition. Framing AI in utopian terms reflects what Campolo and Crawford (2020, p. 1) refer to as "enchanted determinism", which sees technology as the solution to all human ills.
If machines are perceived as surpassing human intelligence, then enchanted determinism may also exhibit the type of dystopian lens that Crawford (2021, p. 214) argues sees technology as consuming human beings. Both utopian and dystopian views are problematic, as both dismiss the fact that it is human beings who lie behind technology (Crawford, 2021, p. 214). Further, given that many African countries are exhibiting digital colonialism, and in light of the fact that the design and application of AI are simply entrenching disparities that are "colonial continuities" (Mohamed et al., 2020, p. 664), all AI-stakeholders-in-the-loop need to resist an ahistorical view of AI (cf. Crawford, 2021, p. 214).
Of course, it is not the mass media alone that may bombard the general public with messages that AI is arcane or inexplicable, thus creating the impression that it cannot be regulated in terms of design or application and that we face inevitable doom. Atkinson (2016) succinctly captures the dilemma that the public experience when thinking about AI when he claims that "fearful voices now drown out the optimistic ones" (Atkinson, 2016, p. 9). In this regard, "fearful voices" include industry experts and scholars. Significantly, and employing the social amplification of risk framework designed among other things to understand and assess risk perception (Kasperson et al., 1988), Neri and Cozman (2020) have found that public perceptions of the risks around AI are largely shaped by experts who frame this technology in terms of existential threats. With respect to voices from industry, and in the context of a discussion about AI’s summers and winters, Floridi (2020) reminds us that "Many followed Elon Musk in declaring the development of AI the greatest existential risk run by humanity. As if most of humanity did not live in misery and suffering" (Floridi, 2020, pp. 1–2). With respect to perceptions of AI by scholars, Atkinson (2016, p. 9) observes that computer scientist Roger Schank made the following comments when Stephen Hawking told the BBC in 2014 that "The development of full artificial intelligence could spell the end of the human race" (Cellan-Jones, 2014): "Wow! Really? So, a well-known scientist can say anything he wants about anything without having any actual information about what he is talking about and get worldwide recognition for his views. We live in an amazing time" (Schank, 2014, para. 3). Alarmist messages result in "mass distraction" (Floridi, 2020, p. 2) that takes us away from the fact that AI-enabled technologies are "normal" (Floridi, 2020, p. 2) and that they can assist us in solving or reducing many of the problems we currently face.
"Mass distraction" obfuscates the fact that across the continent we need voices of reason that interrogate the realities and myths of AI in African contexts, and there are several of them in areas such as automation, robotics, finance, agriculture, courts of law, climate change adaptation, trade and commerce, and driverless cars (Famubode, 2018; Magubane, 2021; Mhlanga, 2020; Mupangwa et al., 2020; Nwokoye et al., 2022; Parschau and Hauge, 2020; Rapanyane and Sethole, 2020; Rutenberg, Gwagwa and Omino, 2021; Vernon, 2019; Zhuo, Larbi and Addo, 2021). Studies such as these move beyond the mystique of AI, allowing us to contemplate what we are doing with this technology in the first place. Demystifying AI by shifting attention from it as an existential threat to what it can do for developing countries is key and is what Floridi (2020) calls an "exercise" in "philosophy, not futurology" (Floridi, 2020, p. 3).
Assessing the potential of AI in Africa, Alupo, Omeiza and Vernon (2022) observe that recognising that AI has many benefits is a complicated process that hinges to a large extent on trust as well as on AI innovation, which partly requires that physical infrastructure such as electricity and Internet connectivity be in place. Equally important is that innovation is also contingent on communities adopting this technology and usefully deploying it in a manner that is sensitive to local social norms and expectations, as well as socio-cultural factors. In the next chapter, and in the context of the AI ecosystem in Africa, we consider the importance of decolonising the digital citizen, without which innovation cannot take place. We reiterate that Africa is not homogeneous, and so we examine what the digital citizen in Africa could encompass without presumptuously making generalisations across cultures (cf. Alupo et al., 2022).