George Gouzounis
  • Newsletter
  • Insights
    • The Need for an Innovation-First Approach
    • A Warning about Australia's Regulatory Caution
    • China's Direct Tech Subsidy for Older People
    • The Empathy Protocol
    • The Elephant In The Room
    • AI: Buy, Build, or Wait
    • How AI Will Transform Aged Care
    • From Policy to Practice
  • Creative Pursuits
CHINA INSIGHTS | Part III
15 December 2025
Subscribe to my weekly newsletter →

The Cost of Caution: Why Australia Needs an Innovation-First Approach

The first driverless vehicle that I caught in Shenzhen—the image is shaky because of my excitement. AI vehicles are part of the city's autonomous vehicle trial program that has expanded across multiple districts since 2024. (Photo taken December 2025)
In 2023-24, one in 264 Australians aged 15-24 was hospitalised due to a transport injury. We knew this risk existed. As a society, we accepted it. We didn't ban cars or freeze vehicle innovation until we eliminated every possible danger. Instead, we relied on baseline safety standards—seatbelts, airbags, roadworthy requirements—hoped for the best, and let people drive.

We made a collective decision: the benefits of transportation outweigh the risks.

Yet when it comes to AI and technology in aged care, we talk endlessly about potential harms—privacy breaches, algorithmic bias, data security—whilst people who need technology today go without it. When I present at conferences, roughly one in three questions I receive focuses on these risks rather than on implementation or outcomes. I can't blame the attendees, of course. They're operating in an environment that prioritises raising concerns, where advocating for rapid deployment can be seen as reckless. We've inverted the equation: regulation first, innovation never.

This is the third article in my series on China, and I want to look at how they're doing the opposite: they prioritised innovation, and it's working.

Subsidies to promote innovation in aged care

In several Chinese provinces, aged care organisations can receive subsidies of up to 500,000 yuan (approximately $110,000 AUD)—covering up to 50% of their investment in technology and AI solutions. This has created a market for tech providers, whose solutions range from AI software that works quietly in the background to health monitoring systems for clients.
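As a back-of-envelope illustration of how this kind of co-funding cap works, the sketch below restates the two figures above (a 50% co-funding rate and a 500,000-yuan ceiling); the function itself is purely illustrative, not an actual scheme rule:

```python
def subsidy(investment_yuan: float,
            rate: float = 0.50,
            cap_yuan: float = 500_000) -> float:
    """Co-funding subsidy: a fixed share of a provider's technology
    investment, capped at an absolute ceiling."""
    return min(investment_yuan * rate, cap_yuan)

# A 600,000-yuan investment attracts 300,000 yuan (the 50% share);
# a 2,000,000-yuan investment hits the 500,000-yuan cap instead.
print(subsidy(600_000))    # 300000.0
print(subsidy(2_000_000))  # 500000.0
```

The design choice worth noting is that the cap, not the percentage, binds for larger providers—so the scheme proportionally favours smaller organisations making their first technology investments.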

The logic is simple: aged care providers face immediate workforce shortages and quality pressures. Technology offers solutions. Therefore, remove financial barriers to adoption and let providers implement what works.

Compare this to Australia's approach. We consult on frameworks. We develop guidelines for guidelines. (I'm somewhat guilty of this myself—I created the Guidelines for Use of GenAI in Aged Care in response to the number of enquiries I was receiving at conferences and forums.) Meanwhile, aged care providers who want to trial AI solutions face regulatory uncertainty, and the perpetual fear that they'll invest in something only to have it rejected by future compliance requirements that don't yet exist.

The result is predictable: stagnation dressed up as prudence.
A technology showroom in China displays AI applications for aged care, including monitoring systems, smart beds, robotic assistance devices, and integrated care management platforms designed for residential facilities, December 2025.
DeepSeek and the 42-second complaint resolution

China has established a national call centre where citizens can report anything—fallen trees, unsatisfactory public service, private sector issues like overcharging or misleading advertising. One phone number for everything.

The citizen speaks to a real person. But in the background, DeepSeek—a Chinese AI model—analyses the call in real time and surfaces relevant information to help the operator guide the caller through next steps. Once the caller has explained the situation, the AI displays the relevant legislation and policy, outlines the standard process for that type of enquiry, and identifies the correct department to handle it.

Average time to connect to the right department: 42 seconds.
Accuracy: 98.7%.

This isn't a pilot programme or a limited trial. It's deployed nationally, handling hundreds of thousands of interactions daily. China didn't wait for perfect AI governance frameworks or spend years workshopping privacy guidelines. They identified a problem—citizens struggled to navigate bureaucracy—and deployed technology that solved it.
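The operator-assist pattern described above can be sketched roughly as follows. Everything here is hypothetical—the department names, the keyword matching, and the routing table are stand-ins for whatever DeepSeek and the call centre actually do. The point is simply the architecture: the human stays on the line while software classifies the enquiry and surfaces the destination and standard process.

```python
# Hypothetical sketch of AI-assisted call triage. Simple keyword
# matching stands in for the real model's classification step;
# department names and processes are invented for illustration.
ROUTING = {
    "tree":        ("Parks & Urban Greening", "Inspection dispatched within 48h"),
    "overcharge":  ("Market Regulation", "Merchant audit and refund order"),
    "advertising": ("Market Regulation", "Misleading-claim investigation"),
}

def triage(transcript: str) -> tuple[str, str]:
    """Return (department, standard_process) for a call transcript,
    falling back to a general desk when nothing matches."""
    text = transcript.lower()
    for keyword, route in ROUTING.items():
        if keyword in text:
            return route
    return ("General Enquiries", "Manual review by duty officer")

dept, process = triage("A fallen tree is blocking my street")
print(dept)  # Parks & Urban Greening
```

Even in this toy form, the structure shows why the model is an assistant rather than a gatekeeper: the routing suggestion appears on the operator's screen, and the operator decides what to do with it.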

Yes, DeepSeek is analysing conversations. Yes, this raises privacy considerations. But the alternative—forcing citizens to navigate fragmented complaint systems, listen to Opus No. 1 while waiting on hold, get transferred between departments, repeat their story multiple times—creates its own harms. Frustration. Disengagement. Problems left unresolved because the process is too difficult.

The privacy trap


The predictable objection: "But what about privacy? What about data security? We can't just deploy AI without safeguards."

This objection sounds responsible, but in practice it functions as a de facto veto.

Australia already has legal frameworks governing privacy, data protection, and consumer rights. The Privacy Act exists. The Australian Consumer Law exists. We don't need AI-specific legislation to prevent the most egregious harms—we need enforcement of existing protections and willingness to adapt them as technology evolves. The Australian government's recent decision not to legislate specifically for AI is a step in the right direction.

But we need to go further. We need an explicit innovation-first framework that prioritises deployment, allows iteration, and learns from real-world use.

The car parallel


Return to the transport injury statistic. We could eliminate most of the harm caused by motorised vehicles. Ban motorcycles. Reduce speed limits to 30 km/h everywhere. Require annual driver competency testing. Mandate advanced driver-assistance systems in all vehicles immediately.

We don't do this because we've made a collective judgement: the benefits of relatively unrestricted transportation—economic activity, personal mobility, social connection—outweigh the statistical risks we can quantify.

AI in aged care presents exactly the same trade-off, except the current harms from not deploying technology are more severe than the potential harms from deploying it.
Right now, aged care workers are burning out from admin burdens that AI could automate. Residents are suffering falls that early monitoring systems could have flagged. Older people and their families are making care decisions without the data-driven insights that predictive analytics could provide.

The choice isn't between "risky AI" and "safe status quo." It's between the known, quantifiable harms of our current system and the potential risks of technology that's already proven effective elsewhere.
Together with a company rep, we trial an AI-powered motion-tracking rehabilitation game at a Chinese aged care technology showroom. Our movements are monitored and analysed through gesture recognition to assess mobility and coordination, December 2025.
What innovation-first actually means

Innovation-first doesn't mean abandoning standards. It means reversing our current approach.

Instead of: Develop comprehensive regulatory framework → Wait for industry consultation → Draft compliance guidelines → Pilot technology in controlled settings → Evaluate → Possibly approve for broader use

We do: Set baseline standards using existing legal frameworks → Promote deployment → Monitor outcomes → Adjust regulations based on evidence

This requires a shift in how we think about risk. Currently, regulators and policymakers treat any potential harm from new technology as unacceptable until proven otherwise. But as discussed, we don't apply this standard consistently—we accept substantial risks from cars, from medical procedures, and from construction work, because the benefits are clear and immediate.

Aged care technology deserves the same framework. Establish minimum standards—data must be encrypted, systems must have human oversight, providers must obtain informed consent. Then let organisations deploy solutions and learn what works.

A final thought

I’m sure China's approach isn't perfect. But on the specific question of whether to deploy beneficial technology quickly or wait for comprehensive regulation, they're making the right choice.

Australia doesn't necessarily need to copy China's governance model. We just need to copy their bias towards action.

AI and assistive technology are becoming essential to how we age safely. It's time to accept that innovation, like transportation, comes with inherent risks—and deploy it anyway. The alternative is aged care workers continuing to burn out whilst we perfect the framework that will arrive too late to help them.

© 2025 GG 