There is a risk that AI could reinforce structural inequalities and bias, perpetuate gender imbalances, threaten jobs, and undermine civil liberties and human rights. As the Cambridge Analytica scandal demonstrated, algorithms can be used to undermine elections and enable unethical and criminal activity. Several factors typically present in low- and middle-income countries make it harder to harness the potential of AI while mitigating these risks:
- Resources and the knowledge needed to implement AI and machine learning techniques are unevenly distributed (with, for example, very large gender and racial gaps), creating uneven capacity to decide which applications are developed and for whom.
- There is a large digital divide in access to the digital ecosystem between developed and developing countries, as well as between urban and rural settings within developing countries.
- High levels of unemployment and low tax bases limit the policy responses available to address job losses caused by automation.
- There is often limited institutional capacity to govern and regulate AI and to ensure the protection of rights such as privacy.