Safe and Responsible AI in Australia
An Orwellian Dream
There has been some expected fanfare over the recently released “Safe and Responsible AI in Australia” paper. And it seems that commentators and AI observers might find what’s happening over at the NDIA a bit too unpleasant to believe.
First up: let’s have a quick look at the AI paper. I’ll offer more commentary in due course.
The first proposed principle puts the mandatory AI guardrails on a collision course with the freshly amended NDIS Act.
Principle (1): The risk of adverse impacts to an individual’s rights recognised in Australian human rights law without justification, in addition to Australia’s international human rights law obligations.
The government is speaking with a forked tongue. The NDIS Bill was pushed through Parliament, ignoring - not even taking into account - the objections of Parliament’s own Human Rights Committee, human rights scholars and legal experts. The NDIS Bill, at its very foundations, suspends human rights, and that suspension is operationalised via automated decision-making algorithms.
And here is the second point of collision between the AI policy and the newly amended NDIS Act: the ability to challenge the use of AI or outcomes of automated decision-making (ADM).
Well, not so, according to the drafters of the NDIS Bill - who have probably given no thought to the implications of ADM and AI, notwithstanding the RoboDebt and RoboNDIS human catastrophes.
Specifically, the newly amended NDIS Act puts algorithms (the yet-to-be-defined Budget Calculation Instrument) beyond the reach of administrative review. This is a terrifying world first. The black-box algorithms will have absolute supremacy.
This will undoubtedly trigger High Court action at some point.
The result is that people are refused access, or do not receive the funding necessary for life. Unbelievably, the NDIA has not documented the risk to life arising from its use of algorithms - even though, in other jurisdictions overseas, death and the most grotesque suffering and discrimination have resulted. The cases of catastrophic harm from the NDIA’s use of robo systems and methods are documented on the robondis.org campaign website.
It is therefore curious that the paper refers to overseas high-risk use cases, including:
“Access to essential public services and products. AI systems used to determine access and type of services to be provided to individuals, including healthcare, social security benefits and emerging services.”
This is exactly the NDIS use case.
The risk of harm and threat to life is real, not theoretical: the harms are widespread and happening now, not in some far-off time. And it cannot be said that these risks and circumstances were not, and are not, known. Were they not considered?
But it seems that these guardrails will not reach the government’s own policy making and administration - this in itself is a risk to democracy and civil society. Then keep reading, because on page 56 there is mention of “other work”:
"...work led by AGD to develop a whole of government legal framework to support use of automated decision-making systems (ADM) for delivery of government services. This may include systems run by AI. This reform work implements the Australian Government’s response to recommendation 17.1 of the RoboDebt Royal Commission."
So, very quietly, the use of ADM across government services will be given legal support at some point in the future - its current legal grey status is therefore concerning. This shifting legal minefield of “subterranean systems” is explored in the brilliant article “Decoding the algorithmic operations of Australia’s National Disability Insurance Scheme” by esteemed scholars Georgia van Toorn and Terry Carney.
Overall, the government’s AI paper is a bureaucratic jumble of guardrails, pillars, and lists suited to a product development cycle - not to policy development or high-risk service delivery.
The ambition of “Government as an exemplar” is straight out of utopia.