Is AI compromising our human rights?
Artificial intelligence (AI) is changing how we work, how we connect with one another and how we make decisions. It is increasingly commonplace in our daily lives, from the iPhone’s facial recognition feature becoming the norm to Uber piloting its first self-driving vehicles on the streets of Pittsburgh. These examples show how we have allowed AI into our lives to make our daily routines more efficient and our lives more convenient. However, this raises two questions: to what extent are we willing to give away our personal data for convenience, and how much do we value our human rights when it comes to technology?
What are the two types of AI?
- ‘Narrow AI’ refers to today’s AI systems, which are designed to perform specific, well-defined tasks such as booking a restaurant through a chatbot; and
- ‘Artificial general intelligence’, which is currently theoretical, refers to a form of AI that could accomplish sophisticated cognitive tasks just as the human mind does. Artificial general intelligence is expected to emerge sometime between 2030 and 2100.
Narrow AI is currently being integrated into our daily lives and is what we will be referring to throughout this article.
Has AI affected our rights to privacy?
Human Rights Commissioner Edward Santow has stated that AI, facial recognition, the globalisation of data and other technological developments will challenge the human rights to privacy, freedom of expression and equality. With over 130 countries having constitutional provisions protecting privacy, it is universally understood that privacy is a fundamental human right.
However, we have blurred these lines by willingly providing corporations with our personal data in order to have their products customised to our preferences. Many products are now enhanced with machine learning, which in turn requires the collection, storage and transfer of the user’s personal information. While many people agree to submit information about themselves in order to benefit from a service, few are aware of the implications that may arise should their information be accessed by a third party without their consent. In such situations, individuals may have their digital identity stolen or be manipulated by the companies that hold their data.
One example: the personal information of thousands of Australians was potentially compromised when HR company PageUp was the victim of a nationwide data breach, exposing confidential information of employees belonging to high-profile clients such as the Commonwealth Bank of Australia, Telstra, NAB, Australia Post, the Reserve Bank of Australia and Sony.
Which human rights need urgent attention as we prepare for the future of AI?
The issue we all face is that advances in processing personal data outpace existing legal protections. In 2015, the United Nations Human Rights Council reported that technological advances made it possible for corporations and governments to track people, read their personal messages and, in some parts of the world, even block free speech. Corporations with access to global surveillance capabilities could misuse personal data for their own advantage through mass interception of communications and indiscriminate data retention via CCTV or GPS.
In China, the government is trialling facial recognition across its network of surveillance cameras with the aim of assigning each citizen a social credit score based on their actions recorded on camera. The scheme is intended to deter bad behaviour such as littering, which would lower an individual’s social credit score and subsequently affect their chances of obtaining a loan, travelling overseas or even attaining a job. The Human Rights and Technology Issues Paper published by the Australian Human Rights Commission (Commission) addresses the implications of AI as it becomes more intertwined with human life – for example, how deep brain stimulation can be used to treat degenerative brain conditions. The Commission identifies the following forms of emerging technology that engage human rights:
- New computing technologies
- Blockchain and distributed ledger technologies
- The Internet of Things (IoT)
- AI and robotics
- Advanced materials
- Additive manufacturing and multidimensional printing
- Virtual reality and augmented reality
- Energy capture, storage and transmission
- Space technologies
Can AI-informed decision making be used to influence opinions?
AI-informed decision making refers to the application of AI and machine learning algorithms to complex datasets, using analysis of human data and predictive capabilities to improve outcomes. Today, many corporations make AI-informed decisions in areas such as healthcare and policy risk assessment. The personal information that individuals agree to provide in exchange for the use of a product has led to the concentration of large data holdings, commonly held by Australian companies and governments and stored both locally and overseas. Below are examples of how AI-informed decision making could be used to influence people’s opinions:
- AI could be used to manipulate and influence social media newsfeeds – as was allegedly the case during the electoral process in the US and UK.
- AI could be used to influence direct advertising and search engine results – as revealed by the European Union Commissioner when a fine was imposed on Google for prioritising its own services over those offered by other providers.
- AI could be used to influence social media and create fake news. An inquiry is currently investigating the implications for consumers, advertisers and media content creators, with findings due in mid-2019.
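The mechanics behind such targeted content need not be sophisticated. As a hypothetical sketch (the function, data and category names are all illustrative and not drawn from any real platform), a minimal "AI-informed" selector might simply mine a user’s engagement history and serve whatever content category dominates:

```python
# Hypothetical sketch of AI-informed content targeting.
# All names and data are illustrative, not from any real system.

def predict_ad_category(user, history):
    """Pick the content category the user has engaged with most often."""
    counts = {}
    for event in history:
        counts[event["category"]] = counts.get(event["category"], 0) + 1
    # Fall back to a default when no behavioural data is held on the user.
    if not counts:
        return "generic"
    return max(counts, key=counts.get)

history = [
    {"category": "travel"},
    {"category": "travel"},
    {"category": "finance"},
]
print(predict_ad_category({"id": 42}, history))  # -> travel
```

Even this crude counting loop illustrates the privacy trade-off discussed above: the decision is only possible because the user’s behavioural history has been collected and retained.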
What is currently being done?
- Solutions provider LexisNexis has partnered with the Commission to examine human rights in a new technological world shaped by AI, social media and big data. Findings, recommendations and proposed solutions will be published in late 2019.
- The same satellite surveillance technology that powers Google Earth is currently being used to expose human rights abuses.
- Last month, the proposed Identity-Matching Services Bill, which has serious implications for human rights, was discussed at a hearing of the parliamentary intelligence and security committee. The bill aims to create a nationwide database that links people’s physical traits with data from the states and territories and integrates them with a facial recognition system.
About the Australian Human Rights Commission (Commission)
The Commission was established by the Australian Human Rights Commission Act 1986 and is Australia’s national human rights institution. The Commission will be organising roundtable meetings and other consultation opportunities, and will subsequently publish a Discussion Paper in early 2019 that includes its preliminary proposals for change. The final report is expected by early 2020.
Do you want to know how you or your organisation may be impacted by AI?
As a leading recruitment firm in the technology space, we believe that advances in AI will have a major impact on organisations both large and small. We also constantly take advantage of technology to better connect candidates with our clients’ changing business needs, so please let us know if you wish to find out more about the best AI candidates in the market. For a confidential discussion about how technology may affect your current job or future employment prospects, feel free to give us a call on +61 02 8251 2120.