Is regulating AI a good idea? (Part 1)

In its first term of government, the Albanese Labor government started a process to decide whether to regulate the use of artificial intelligence (AI), and how it should do so.  The outcome was a recommendation that Australia should adopt a model like that used in other jurisdictions that had already implemented (or were close to implementing) regulation of AI.  Given that AI and internet-based technology are 'global' phenomena, having laws operating around the world that correlate made some sense. However, I think that the thinking underlying the model of the already-implemented laws is flawed, and we in Australia should not follow it.

In essence that model is based on an assessment of risk.  Certain risks were to be deemed sufficiently serious (in terms of consequences) that AI would not be permitted to be used where such risks arose.  For other risks, the requirement would be a series of 'guardrails': a process that prospective AI users should follow to ensure that the impact and effect of using AI would be in line with principles nominated in the legislation.  Significantly, apart from breaching any prohibitions in the laws, there did not appear to be any sanction or consequences for parties that ignored, or failed to address adequately, the applicable guardrails.  In short, the proposed laws amount to a mandated due diligence process.

Prospective AI users undertaking a due diligence process before implementing an AI tool or system is good practice.  I would have thought that the need to do so would be so obvious that no sensible corporation or government would implement an AI tool or system without it; mandating such a process, therefore, is a bit like mandating common sense.

More recently, Australia’s own Productivity Commission urged the government to avoid implementing ‘AI-specific’ laws.  I am inclined to agree, but probably not for the reasons it espouses.

Technology, and particularly AI technology, is difficult to define effectively, and as it is evolving and developing so rapidly, any laws that purport to ‘regulate’ its use are likely to be ineffective within a short space of time.  I am also troubled by the twin notions that appear to underlie current laws regulating AI, namely: (i) that certain AI technology is ‘good’ or ‘beneficial’ (and so we should encourage its use and implementation), and some AI technology is ‘bad’ or ‘harmful’ (and so we should prohibit or limit it); and (ii) that the approach to regulating AI should be about managing risk.  There is a certain arbitrariness or subjectivity in each of these notions, which is not a good basis for regulation.

I do not accept the view that any technology (excluding weapons technology) is inherently good, bad, beneficial or harmful.  Ultimately, it is how humans use such technology that generates the benefit, or the harm.  The focus, therefore, should not be on what the AI technology is, what it does, and whether it should be used for particular purposes, or at all.  It should be on the consequences of its use: ensuring that those who make use of the technology are accountable, and that persons adversely affected by a given application of AI still have rights of redress or challenge in the face of an outcome generated through the use of AI.

While I do not support AI-specific laws, I also do not support the view that there should be no regulatory response.  Instead, our regulators should be considering the following matters before deciding how they might legislate to address AI.

1. Citizens are now compelled to interact with technology

Nowadays, it is practically impossible for a citizen to interact or communicate with government (at any level) and corporations without doing so ‘online’.  Putting aside (for now) the common objection that ‘I just want to speak with a human’, an increasing portion of the overall communication and dealings between citizens and each of government and business takes place in a setting where the citizen deals not with another human being, but with some form of automated process or program.  We can expect that such processes and programs will be driven by AI, if nothing else because AI offers efficiencies and cost savings.

Think about that for a moment.  Matters such as whether we have paid our taxes, whether our driver’s licence will be renewed, whether we are entitled to a government benefit, service or concession, and/or whether we are granted a home loan or other credit – matters that affect, fundamentally, both our present and future lives – will depend on how we interact with a technology that is capable of operating without human oversight, or with limited oversight.

As citizens interacting with government, we are entitled to know that certain of our rights and obligations may be determined through the use or application of AI, if only so that we are able to exercise rights available under administrative law to challenge decisions made by that government.  Similarly, as customers or prospective customers, the fact that our supplier of goods or services is interacting with us via non-human means should be disclosed to us, if nothing else so that we may make an informed choice as to whether we wish to transact (or continue transacting) with that supplier.  Not only should we have a right to know that AI is being applied in a government’s or business’s interactions with us, but we should be able to hold that government or business accountable for the particular AI tool or program it is using.

2. Removing humans from decision-making, and accountability

Advocates for the implementation of AI point to a range of perceived benefits of doing so. Among those are efficiency and speed, due to AI’s ability to absorb and analyse large amounts of information and generate prompt outputs, and the removal of ‘human bias’ and subjectivity.

Put aside that neither of these benefits will be achieved unless the AI tool being applied is truly fit for purpose, in the sense that it is programmed properly and ‘trained’ with datasets that are both relevant and free from any inbuilt biases or blind spots.  The real concern is that AI can (and will) be used to make decisions that affect the lives, rights and way of life of citizens, without human interaction or oversight.  Of equal concern is that the organisation or government that relies on AI to make decisions for it is, potentially, able to avoid responsibility and accountability for a given decision.  It may assert that the relevant decision is a product of its system, that the system is free from bias and subjectivity, and that the decision is therefore ‘right’ and not subject to challenge.

Such an outcome should be chilling for all citizens.  Even if an AI tool or system could be said to be appropriately programmed and ‘trained’ on appropriate datasets, such that it is incapable of bias or of generating ‘false’ decisions (something no AI system or tool can boast currently), the removal of a human being from the decision-making process should ring a shrill alarm bell for every citizen.  If no human checks a recommended decision generated by AI, independently assesses the reasoning and/or facts underlying it, and then decides whether to ratify it, and if the affected citizen has limited or no rights to challenge the decision once made, we have arrived at decision-making on the ‘computer says No’ model.

3. The Data issue

Successful cyberattacks on government departments and agencies, and on major corporations, are becoming so frequent that we seem to greet them the way one greets news that the trains are late. Citizens are now well aware that the information and content they provide to governments and corporations, or upload onto social media, is stored by those governments and corporations on databases (which may or may not be in their own jurisdiction), and may be utilised to monitor their behaviour, assess their creditworthiness, or simply serve up more targeted sales promotions.

AI tools and programs rely on data to ‘learn’ about the task(s) for which they are applied.  The more data that is ‘fed into’ such tools and programs, the better and more effectively (in theory) the relevant AI operates.  For example, an AI tool or program developed and ‘trained’ using data sourced from a particular country may not be suited for use in another country until that tool or program is trained with data from citizens of that second country.

It is therefore understandable that developers of, and advocates for, AI are keen to ensure that there are few, if any, limits on access to the data and content that can be applied to develop, refine and perfect AI systems and tools.  If we, as citizens, are free to limit access to our data, then the AI will be ‘less perfect’ and less trusted.

Fair enough, but why should we, as citizens, be denied the opportunity to assess how our data is going to be used, and to decide whether we are satisfied that our data will be used appropriately?  To date, information privacy laws, whilst tilting the balance a little way in favour of the ordinary citizen, have either not given people an informed choice as to what happens when we provide our data, or set a high bar, in terms of effort, for an individual to obtain such an informed choice.  Not only are we required to sign up to technology use in order to deal with corporations and government, but also, in doing so, we are required to accept the privacy policy or statement (which explains what happens to our data) so that we may have those dealings.  In general, such policies and statements do not provide a great deal of detail about what happens to our data; at best we get a generic explanation and vague promises, the meaning of which is somewhat rubbery.

Aside from the use issue, there is the security issue.  AI vacuums up a vast array of data ‘into the cloud’, and we are left largely to ‘hope’ that it is being stored securely, and processed in such a way that it is not accessed, manipulated or altered in ways we have not authorised.

It should be acknowledged, however, that we have been handing over our data, happily (or in blissful ignorance), to corporations and government for decades, and therefore I can understand that the tech sector may feel frustrated that, all of a sudden, we are getting a little coy about letting our data be used to make AI better.  The problem for the tech sector is that its tin ear, high-handedness and arrogance about the importance of AI, and about its need to do whatever the hell it wants with our data, are the precise reasons why we are so coy, and why we do not believe the sector can be trusted to regulate itself.

4. AI does not pay its own way

Imagine, if you will, that a new gold rush ensues, and that a group of people turn up on your doorstep to inform you that there is gold underneath your house and that not only will they dig up your front and back yards and/or dig underneath your house to retrieve said gold, but that you are not entitled to any payment, either for the gold retrieved, or for the diminution of the value of your property once they have finished with it.

Something similar has already happened in the development of AI systems and tools, particularly those systems and tools that generate content.  The ‘gold’ in this case is the works, intellectual property, performances and even sounds not only of creative people and artists, but also of those who generate content as part of their work.  AI needs to ‘learn’ from existing content and materials to generate new content and materials, and the approach of the technology sector has been to grab all the existing content that is currently available and feed it into AI tools to teach those tools how to generate new content.

The concern is not that the use of AI may ‘take someone’s job away’, it is that AI can be – and has been – applied to create ‘new’ content by using content and work done by humans, without those humans receiving any payment, attribution or benefit.  Further, those original human contributors are denied the opportunity even to give their consent to the use of their works, their voices and styles as part of training and developing AI tools.

I acknowledge that the current practice is for creators to withhold, through contract, consent for third parties to use their material to train AI; that is, when they license their content to another party, that party is prevented, contractually, from using or permitting the use of the licensed content to train AI.  While, in theory, that is an effective legal approach, in practice it means that the onus falls ultimately on the human creator of content to enforce their contract, and in an age where content may be scooped up or scraped from existing sources electronically and without effective tracing, the human creator may become aware that his or her content is being used in breach of the licensing arrangement only after the event.  Once that content is used to train AI, the AI tool cannot ‘unlearn’ it; the creator loses any agency over his or her work.

5. Deep fakes and fraud

It is wrong to blame AI for the explosion of deep fakes and fraudulent material; digital technology, photoshopping tools and the internet have long facilitated fake material.  However, AI enhances that ability greatly, and again allows both businesses and criminals to create material without the assistance or consent of humans.

AI is capable of generating hard-to-detect fakes of documents.  It also allows the image of one person to be ‘superimposed’ onto that of another, in both photographs and film, and allows a person’s voice to be copied and then used to create new recordings of that voice, saying things not authorised by the ‘owner’ of that voice, and for which that ‘voice owner’ receives no benefit.

Given that there are AI tools and technology developed specifically to do this, it is disingenuous for the developers and owners of such tools and technology to disown the consequences of their technology, in the manner of gun manufacturers (ie ‘guns don’t kill people, people kill people’).

6. AI should be subject to a ‘social licence’

I accept that the notion that a sector or business should be subject to a form of ‘social licence’ is subjective and somewhat problematic.

Generally, we talk about social licences when a company, a business or an activity carries sufficient importance to the ongoing conduct and good order of society, or has an impact on society of such magnitude, that the company, business or activity – even if unregulated – should be subject to some form of rules or control: a social licence.

I would argue that each of the points I have raised thus far, and certainly those points collectively, necessitates some form of regulation or social licence to which AI and its developers and owners should be subject.  However, one factor that is not often discussed when the subject of AI comes up is energy and the use of precious resources.

As AI expands, the servers and datacentres that are, or will be, its heart and brain will consume an expanding portion of the electricity supply of every country in which they sit and, crucially – particularly in a country like Australia – its water supply.  In Australia, and in many other countries, governments lack either the will or the political mandate to address future requirements for electricity and water use, and this means, if nothing else, that a shortage is likely if a new or emerging industry arises whose demand for electricity and water eats into the stock that is available.

Something has to give.  I am assuming that our governments are not going to pass up the opportunities that come with housing the infrastructure devoted to AI.  Unless those governments grasp the nettle and both find ways to enhance the supply of power, and overcome the increasing NIMBYism that exists in our regional communities about transmission infrastructure, there is not going to be sufficient electricity to power AI.  That aside, governments cannot make additional rain, and we humans are a bit tetchy when someone says that we must limit our consumption of water.

My expectation is that governments and our scientists will find ways to narrow, but not close, the gap between the increased demands for electricity and water brought on by AI.  At some point, decisions will be made as to how a limited supply is divvied up.  I do not see how the ‘price’ for AI taking a greater share of a limited supply of energy and water can be reduced solely to monetary terms.

A better option might be to say that we, as a society, will cede a portion of our power and water to AI, but on the basis that the AI sector is subject to a social licence, under which it assumes a substantial portion of the risk associated with the increasing proliferation of the technology.

Conclusion

So, how do we regulate or address AI? Well, you will have to wait until Part 2 to read my suggestions.

Suffice it to say that I am opposed to any regulation that, in essence, prohibits AI, or prevents its application, in any or all contexts. However, history shows that virtually every major technological development was allowed by governments to proliferate without any prior or proximate consideration of the consequences, with the result that those consequences were ignored or swept under the carpet; the technology’s promoters were accorded a free hand, and society was left to accommodate the fallout.  Governments, if they stepped in at all, intervened far too late.  Sadly, with AI, we are on the verge of that tipping point again, but it may not be too late.