AI Replacement in Professions: Reality, Implications, and the Douglas Social Credit Solution

By Peter West and Tom North

In April 2025, a Sydney radio station, CADA, made headlines when it was revealed that its weekday host, "Thy," was not a human but an AI-generated voice cloned from an ARN finance team employee using ElevenLabs' text-to-speech technology. For six months, Thy hosted Workdays with Thy, reaching 72,000 listeners without disclosing its artificial nature. This case, reported by The Sydney Morning Herald (April 24, 2025), underscores a growing trend: AI is infiltrating professions once thought to require uniquely human skills—radio hosting, journalism, medicine, law, and bureaucracy. Industry has long embraced automation with robots and advanced controls, but AI's expansion into white-collar roles raises urgent questions: What is real? What does it mean for work and society? And how will we afford goods in an automated economy without radical economic reforms like C.H. Douglas' Social Credit?

This article explores AI's encroachment across these professions, separating hype from reality, analysing its socio-economic implications, and evaluating whether Douglas' Social Credit could address the challenges of a potential "age of leisure" or technological singularity. Spoiler: we cannot afford them under the present financial system; AI is a solid argument for Social Credit!

AI in Professions: What's Happening?

Radio Hosting: The Case of "Thy"

CADA's use of an AI host highlights both the potential and pitfalls of AI in broadcasting. Thy, powered by ElevenLabs, delivers hip-hop playlists with a cloned voice indistinguishable from a human's, at a fraction of the cost. ARN's spokesperson noted the trial "enhanced the listener experience" but also "reinforced the power of real personalities." Yet, the lack of disclosure sparked backlash. Teresa Lim of the Australian Association of Voice Actors called for AI labelling laws, arguing that undisclosed AI undermines authenticity and deprives minority groups, like Asian-Australian women, of opportunities in an already competitive field.
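
To illustrate how low the technical barrier has become, here is a minimal sketch of generating a voiced radio link with a text-to-speech API in the style of ElevenLabs'. The endpoint and fields follow ElevenLabs' publicly documented v1 API at the time of writing, but the key, voice ID, and model name are placeholders, and the details should be checked against current documentation.

```python
# Minimal sketch: generating a voiced radio link from text with a
# text-to-speech API such as ElevenLabs'. The voice_id is assumed to
# reference an already-cloned voice; endpoint and fields follow the
# publicly documented v1 API and may change.
import requests

API_KEY = "your-api-key"          # hypothetical placeholder
VOICE_ID = "cloned-presenter-id"  # hypothetical placeholder

resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={
        "text": "You're listening to Workdays. Here's the hottest track right now.",
        "model_id": "eleven_multilingual_v2",
    },
    timeout=30,
)
resp.raise_for_status()

with open("radio_link.mp3", "wb") as f:
    f.write(resp.content)  # the API returns raw audio bytes
```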

Globally, AI hosts are not new. In 2023, an Oregon station used an AI host based on a real presenter, and that same year Australia's Disrupt Radio introduced "Debbie Disrupt", transparently marketed as AI. The Australian Communications and Media Authority (ACMA) notes no current regulations mandate AI disclosure, leaving stations free to experiment. This raises ethical questions about transparency and the erosion of human connection in media.

Journalism: Automation vs. Authenticity

Journalism faces a double-edged sword with AI. According to a 2024 study, newsrooms use AI for data analysis (68% of news organisations), automated writing (73%), and content personalisation (62%). Tools like Quakebot, which generates earthquake reports for The Los Angeles Times, show AI's efficiency in routine reporting. However, 42% of studies express concerns about AI-generated news lacking nuance and context, risking "shallow" journalism.
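
Quakebot-style automation is essentially structured data poured into templates. A toy Python illustration, with field names invented for the example:

```python
# A toy illustration of template-based automated reporting in the
# spirit of Quakebot: structured feed data in, draft copy out.
quake = {"magnitude": 4.2, "place": "10km NE of Ridgecrest", "depth_km": 8.1}

draft = (
    f"A magnitude {quake['magnitude']} earthquake struck "
    f"{quake['place']} at a depth of {quake['depth_km']} km, "
    "according to the US Geological Survey. "
    "This post was generated automatically and reviewed by an editor."
)
print(draft)
```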

AI cannot replicate investigative reporting, courtroom interviews, or the trust-building required for whistle-blower stories. Yet economic pressures push newsrooms toward AI to cut costs, threatening jobs. A 2024 Brookings report warns that replacing journalists with AI could degrade the quality of foundation models, as AI relies on human-created content for accuracy. The rise of "journalist–programmer" roles (52% of studies) suggests a hybrid future, but ethical concerns—data privacy, algorithmic bias, and transparency—persist.

Medicine: Augmentation, Not Replacement

In healthcare, AI enhances diagnostics but struggles to replace human empathy. Systems like Viz.ai and PathAI assist with diagnosis, but a 2023 study found radiologists often underweight AI's input when it contradicts their own assessment, suggesting human-AI collaboration is optimal. AI adoption is slow in safety-critical settings due to regulatory hurdles and the need for human judgment in patient care.
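
The underweighting finding can be pictured with a toy model in which a reader discounts the AI's opinion in proportion to how strongly it disagrees with their own. The numbers and weighting rule below are illustrative only, not the study's method:

```python
# Toy model of the human-AI collaboration finding: a reader combines
# their own estimate with the AI's, but gives the AI less weight the
# more it disagrees with them.
def combined_estimate(human_p, ai_p, base_weight=0.5):
    disagreement = abs(human_p - ai_p)
    ai_weight = base_weight * (1 - disagreement)  # underweight on conflict
    return (1 - ai_weight) * human_p + ai_weight * ai_p

# AI flags likely disease (0.80); a sceptical reader (0.30) moves
# only part of the way: prints 0.425 rather than the midpoint 0.55.
print(combined_estimate(0.30, 0.80))
```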

Posts on X reflect a common sentiment: over the next 10-20 years, AI will augment doctors rather than replace them. Doctors' roles may shift toward oversight and patient interaction, leveraging AI for efficiency while preserving the human touch essential for trust.

Law: Efficiency vs. Judgment

AI is transforming legal practice by automating tasks like document review, contract analysis, and legal research. The 2022 ABA Legal Technology Survey reported 12% of legal professionals use AI tools, up from 10% in 2021. However, AI's limitations—factual errors, lack of source transparency (e.g., ChatGPT)—mean lawyers must verify outputs. Critical tasks like negotiation, advocacy, and client counselling remain human domains.
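
The routine end of that work looks less like lawyering and more like pattern matching. A hedged sketch of clause flagging follows; real tools use trained models rather than regular expressions, and every hit still needs lawyer review:

```python
# Sketch of the routine end of AI-assisted document review: flagging
# contract clauses for human verification. Patterns are illustrative.
import re

RISK_PATTERNS = {
    "auto-renewal": r"automatically renew",
    "unlimited liability": r"unlimited liability",
    "unilateral change": r"may (amend|modify) .* at (its|their) sole discretion",
}

def flag_clauses(text: str) -> list[tuple[str, str]]:
    hits = []
    for label, pattern in RISK_PATTERNS.items():
        for m in re.finditer(pattern, text, flags=re.IGNORECASE):
            hits.append((label, m.group(0)))
    return hits

sample = "This agreement shall automatically renew for successive one-year terms."
print(flag_clauses(sample))  # [('auto-renewal', 'automatically renew')]
```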

While GPT-4 scored in the top 10% on the bar exam, this doesn't translate to practising law, where context and empathy are key. Law schools are adapting, teaching AI literacy to prepare students for a tech-driven future. AI won't eliminate lawyers but will redefine their roles, focusing on strategic thinking over rote tasks.

Bureaucracy: Streamlining or Deskilling?

Bureaucrats face AI-driven automation in tasks like data processing, case management, and customer service via chatbots. AI's ability to handle repetitive, rule-based tasks threatens middle-skilled jobs, with studies estimating 20% of UK jobs and 26% of jobs in emerging economies like China could be impacted by 2030. However, non-routine tasks requiring flexibility or creative problem-solving remain human-centric.
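
The rule-based character of these tasks is what makes them automatable. A toy triage function of the kind chatbots and case-management systems implement, with categories invented for the example:

```python
# Toy rule-based triage of the kind chatbots and case-management
# systems automate: route the routine, escalate the rest.
def route_request(request: dict) -> str:
    if request.get("type") == "address_change" and request.get("id_verified"):
        return "auto-process"
    if request.get("type") in {"status_enquiry", "form_reissue"}:
        return "chatbot"
    return "human caseworker"  # non-routine cases stay with people

print(route_request({"type": "address_change", "id_verified": True}))
print(route_request({"type": "appeal"}))
```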

The risk is "deskilling," where workers become auditors of AI outputs, losing expertise and economic value. This creates a vicious cycle: AI learns from human inputs, improving while workers' skills erode, potentially entrenching inequality.

Industry Automation: A Long-Standing Trend

Industry has led automation since the Industrial Revolution, with robots and advanced controls dominating manufacturing, logistics, and energy. Siemens' AI-based quality control in steel mills, operational since 1995, extends maintenance intervals by 30% and cuts costs by 16%. By 2030, up to 375 million workers globally may need new professions as AI automates 50% of tasks, impacting not just manual labour but also high-skilled roles like engineering.

AI's economic benefits—$12 billion in retail spending by 2023—come with displacement risks, particularly in emerging economies where low-skilled jobs are vulnerable. This underscores the need for retraining and economic restructuring.

What's Real and What's Not?

Reality

AI's Capabilities: AI excels at routine, data-driven tasks—generating radio scripts, writing basic news, analysing medical images, reviewing contracts, or processing bureaucratic forms. It's already integrated into newsrooms, hospitals, law firms, and factories, boosting efficiency and reducing costs.

Job Displacement: White-collar, high-skilled workers face significant "task exposure" to AI, with roles like journalism and bureaucracy at risk of automation. Middle-skilled jobs are most vulnerable, as seen in prior automation waves.

Economic Growth: AI drives innovation, with 20% of China's GDP projected to come from AI by 2030. Productivity gains are real but unevenly distributed.

Hype

Total Replacement: AI won't fully replace doctors, lawyers, or journalists soon. Human empathy, creativity, and contextual judgment are hard to replicate. Even in radio, ARN's trial reaffirmed the value of human personalities.

Imminent Singularity: The "singularity"—where AI surpasses human intelligence—is speculative. A 2023 AI Impacts survey found 53% of researchers see a 50% chance of an "intelligence explosion" within decades, but progress could stall due to data or computational limits.

Universal Job Loss: While displacement is real, AI creates new roles (e.g., journalist–programmers, AI ethicists). Historical automation waves show job creation often offsets losses, though transitions are painful. This time, however, the offset may not materialise.

What Does It Mean?

Economic Implications

AI's spread risks exacerbating inequality. High-skilled workers may thrive, but middle- and low-skilled workers face wage suppression or unemployment. Emerging economies, reliant on low-skilled labour, are particularly vulnerable. The IMF notes that AI's labour market effects are ambiguous, with productivity gains not guaranteed to translate into broad prosperity.

The "platformisation" of industries, where tech giants control AI infrastructure, limits smaller sectors' autonomy. Newsrooms, for instance, depend on Big Tech for AI tools, risking lock-in effects and reduced bargaining power. This concentration of power could entrench market dominance, stifling competition.

Social Implications

AI's opaque algorithms raise ethical concerns. In journalism, biases in AI-generated content could misinform the public. In medicine, over-reliance on AI risks misdiagnoses. In bureaucracy, "invisible cages" of algorithmic control could disempower workers, as seen in platform work.

The erosion of human roles threatens social cohesion. Radio hosts foster community; doctors build trust; journalists, ideally, uphold democracy. Replacing these with AI could weaken societal bonds, especially if authenticity is undermined, as in CADA's case.

The Singularity and Leisure

The singularity, where AI achieves superintelligence, remains distant but fuels debates about an "age of leisure." If AI automates most work, humans could focus on creative or personal pursuits. However, without economic reform, this risks mass unemployment and poverty, as automated production outpaces purchasing power.

Douglas' Social Credit: A Solution

C.H. Douglas' Social Credit, proposed in the 1920s, argues that automation creates a gap between the prices of production and the purchasing power distributed as incomes. As machines replace labour, wages decline, but goods still need buyers. Douglas proposed a "National Dividend", funded by the community's productive capacity rather than taxes, to ensure everyone can afford those goods.
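
Douglas' point can be put in simple arithmetic. In his "A + B" analysis, a firm's prices include payments to individuals (A: wages, salaries, dividends) and payments to other organisations (B: materials, plant, bank charges), yet only A reaches consumers as income, so distributed purchasing power falls short of total prices. A sketch with invented figures:

```python
# Illustrative arithmetic for Douglas' production/purchasing-power gap
# (the "A + B" argument) and a National Dividend sized to close it.
# All figures are invented for the example.
wages_and_dividends_A = 60e9   # payments to individuals (A)
external_costs_B = 40e9        # payments to other organisations (B)
total_prices = wages_and_dividends_A + external_costs_B  # goods priced at A + B

gap = total_prices - wages_and_dividends_A  # purchasing-power shortfall = B
population = 25_000_000
national_dividend = gap / population

print(f"Shortfall: ${gap:,.0f}")                          # $40,000,000,000
print(f"National Dividend per person: ${national_dividend:,.2f}")  # $1,600.00
```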

Relevance to AI

Social Credit addresses AI's challenges directly:

Purchasing Power: If AI displaces jobs, a National Dividend could sustain demand, preventing economic collapse. For example, if 20% of jobs are automated by 2030, millions would need income to buy AI-produced goods.

Leisure Economy: A dividend enables an "age of leisure," freeing people for creative or community roles, like human radio hosts fostering connection.

Inequality: Unlike means-tested welfare, Social Credit is universal, reducing stigma and ensuring broad benefits from AI's productivity gains.

Feasibility in 2025

Implementing Social Credit requires political will and infrastructure. Tech giants, profiting from AI, could fund dividends via taxes or profit caps, as OpenAI's nonprofit model suggests. However, global coordination is needed to prevent capital flight. Pilot programs, like universal basic income trials in Finland or Stockton, California, offer data: Stockton's $500 monthly payments boosted employment and reduced debt, suggesting scalability. Social Credit, however, goes further than universal basic income schemes: as a fundamental reform of the financial system, it overcomes their limitations and problems.

Conclusion

AI's infiltration into radio, journalism, medicine, law, bureaucracy, and industry is real but not absolute. It excels at routine tasks but struggles with empathy, creativity, and judgment. The CADA case shows AI's potential to deceive and displace, yet human connection remains irreplaceable. Economically, AI risks inequality and unemployment; socially, it threatens authenticity and trust. Douglas' Social Credit offers a sound solution, ensuring purchasing power in an automated world and enabling an age of leisure.

As AI reshapes work, society must balance efficiency with humanity. Transparency laws are a start. Retraining programs, ethical AI guidelines, and economic reforms like Social Credit could ensure AI serves all, not just the few. The question isn't whether AI will change our world—it's whether we'll shape that change for the better.

https://www.smh.com.au/culture/tv-and-radio/thy-has-been-on-the-radio-for-six-months-turns-out-she-isn-t-real-20250424-p5ltxi.html

"A Sydney radio station has been using an AI-generated host for about six months without disclosing it – and was not legally obliged to.

It was revealed last week that Australian Radio Network's (ARN) Sydney-based CADA station, which broadcasts across western Sydney and is available online and through the iHeartRadio app, had created and deployed an AI host for its Workdays with Thy slot.

The artificial host known as "Thy" is on-air at 11am each weekday to present four hours of hip-hop, but at no point during the show, nor anywhere on the ARN website, is the use of AI disclosed.

Instead, the show's webpage simply says "while you are at work, driving around, doing the commute on public transport or at uni, Thy will be playing you the hottest tracks from around the world".

ARN Media also owns KIIS FM, the home of The Kyle & Jackie O Show, and the GOLD network, home to high-rating Sydney breakfast show Jonesy & Amanda.

After initial questioning from Stephanie Coombes in The Carpet newsletter, it was revealed that the station used ElevenLabs – a generative AI audio platform that transforms text into speech – to create Thy, whose likeness and voice were cloned from a real employee in the ARN finance team.

The Australian Communications and Media Authority said there were currently no specific restrictions on the use of AI in broadcast content, and no obligation to disclose its use.

An ARN spokesperson said the company was exploring how new technology could enhance the listener experience.

"We've been trialling AI audio tools on CADA, using the voice of Thy, an ARN team member. This is a space being explored by broadcasters globally, and the trial has offered valuable insights."

However, it has also "reinforced the power of real personalities in driving compelling content", the spokesperson added.

The Australian Financial Review reported that Workdays with Thy has been broadcast on CADA since November, and was reported to have reached at least 72,000 people in last month's ratings.

Vice president of the Australian Association of Voice Actors, Teresa Lim, said CADA's failure to disclose its use of AI reinforces how necessary legislation around AI labelling has become.

"AI can be such a powerful and positive tool in broadcasting if there are correct safeguards in place," she said. "Authenticity and truth are so important for broadcast media. The public deserves to know what the source is of what's being broadcast … We need to have these discussions now before AI becomes so advanced that it's too difficult to regulate."

As an Asian woman working in Australian media, Lim said it also highlights how difficult it is for her demographic to break into broadcasting.

"When we found out she was just a cardboard cut-out, it cemented the disappointment. There are a limited number of Asian-Australian female presenters who are available for the job, so just give it to one of them. Don't take that opportunity away from a minority group who's already struggling."

CADA isn't the first radio station to use an AI-generated host. Two years ago, Australian digital radio company Disrupt Radio introduced its own AI newsreader, Debbie Disrupt. However, the fact that she wasn't a real person was clearly disclosed from the beginning. And in 2023, an Oregon station in the US used an AI host, which was based on a real presenter.

An ACMA spokesperson said policies were still being developed in Australia to ensure safe and responsible use of AI. This is largely led by the Commonwealth Department of Industry, Science and Resources.

"This includes considering mandatory guardrails around transparency in high-risk settings, and the release of the Voluntary AI Safety Standard in September 2024."

 
