KBS has declared AI a "survival strategy" as it approaches 2026.
Adopting artificial intelligence requires structural change across programming, production, and operations.
Technology promises cost savings and faster production, but it also raises questions about jobs and public trust.
This column lays out both sides of the debate and offers balanced policy options.
"Broadcast TV Remade by AI: What Choices Remain?"
Overview
Change on the ground is accelerating.
Park Jang-beom, KBS president, publicly called AI an "important survival strategy" and signaled shifts in organizational culture.
His statement points to more than a technical pilot; it signals a review of management methods and workforce structure.
The crisis facing terrestrial TV stems from long-term declines in viewership and advertising revenue.
So technology is not merely an option but a strategic response to those structural pressures.
AI offers the lure of cost savings and higher efficiency.
However, it also brings employment, quality, and ethical challenges that demand institutional safeguards beyond simple technology swaps.
This piece moves through history, pros and cons, and policy recommendations so readers can understand the issue and join the conversation.
History and Background
The trend is already underway.
In the mid-2020s, the commercial availability of large language models (LLMs) and generative AI marked a turning point for news and media.
AI has shown value in supporting scriptwriting, automatic summarization, and multi-format conversion.
At the same time, platform competition has eroded traditional revenue models.
Viewing habits centered on OTT services and YouTube have increased demand for faster production and more tailored content.
Given this environment, KBS's push for AI transformation reflects both external pressure and internal strategy.
Two goals operate in parallel: restoring competitiveness and preserving the public service mission of KBS, South Korea's national public broadcaster.
Thus AI is positioned not merely as a tool for efficiency but as a catalyst for organizational change.
At the same time, the historical arc reminds us that new technologies bring new problems.
Arguments in Favor
The case for AI is built on efficiency and competitiveness.
First, the economic argument is straightforward.
For broadcasters under pressure from rising production costs, AI can automate editing, captioning, and summarization, cutting labor and time costs.
Meanwhile, speed in breaking-news coverage and the ability to quickly convert material for multiple platforms help hold audiences.
From this view, AI can improve revenue structures and strengthen bargaining power with advertisers and platform partners.
Second, there are quality and scale benefits.
Large language models excel at data analysis and concise summaries, while automatic translation and captioning expand multilingual access.
That creates an opening to reach audiences beyond national borders.
Automated clip generation also enables rapid response to short-form platforms, where attention spans are short and turnover is rapid.
Third, AI can reshape internal roles.
With machines handling routine tasks, reporters and producers could invest more time in deep reporting and investigative work.
That, in turn, might reinforce the public service value of a national broadcaster.
AI can amplify human capacity when used as a tool.
Finally, innovation can spread beyond efficiency.
New formats, personalized recommendation systems, and automated short-form production create long-term revenue opportunities.
Supporters warn that without AI, broadcasters risk falling behind in an increasingly competitive media ecosystem.
Concerns and Objections
The concerns are concrete and practical.
First, the loss of jobs and roles is a real worry.
Examples from finance show analysts' roles shrinking as automation takes over routine reporting and summaries.
In broadcasting, reporters, writers, editors, and technical staff could see their responsibilities narrowed or replaced.
This is more than rearranging tasks; it can lead to structural workforce changes and spark labor disputes.
For a public broadcaster, job stability is also a public-value concern.
Second, journalistic quality and trust may suffer.
Generative AI can produce factual errors and lose context, and automatic summarization has sometimes omitted key risk information.
News requires accuracy and context; if automation prizes convenience, essential information can be lost.
Losing trust is hard to regain.
Third, bias and accountability are unresolved issues.
AI reflects biases in its training data, and it is unclear whether the developer, the broadcaster, or the editor bears responsibility for errors.
If responsibility is not clearly assigned and regulated, those harmed may face delays in correction or compensation.
Fourth, the identity of public broadcasting could be weakened.
If efficiency and profit pressures dominate, public values and human judgment could be sidelined.
That would ultimately undermine audience trust and the broadcaster's public role.
So opponents call for limits on the pace and scope of adoption and for strong human-led verification procedures.
Policy Recommendations and Alternatives
Rules and standards are needed.
First, increase transparency.
Audiences should be clearly informed when AI has affected reporting or editing.
This basic disclosure helps restore trust and fulfills public-service responsibilities.
Second, guarantee human review and final authority.
Key reports and sensitive topics should require final checks by human editors and reporters.
Automation should play a supporting role, while humans retain decision-making responsibility.
Third, protect workers and provide retraining.
Transition programs and reskilling should be available to staff affected by AI.
Government and broadcasters can share the cost of retraining, and labor-management talks should define agreed plans for redeployment.
Fourth, set legal and ethical standards.
Public broadcasters should institutionalize bias checks, source attribution, and correction procedures so that responsibility is clear and redress is swift.
Finally, phase in experimental pilots with monitoring.
Staged trials allow assessment of benefits and harms, with results published for public debate.
Including citizens in governance can help legitimize decisions and strengthen public trust.
Conclusion
Choice brings responsibility.
In short, AI adoption by KBS and other terrestrial broadcasters promises improved competitiveness and production efficiency.
Yet it also risks job loss, degraded journalistic quality, algorithmic bias, and gaps in accountability.
Therefore adoption should be gradual, transparent, and paired with worker protections and verification systems.
Policy recommendations can be condensed to four pillars: transparency, human final judgment, worker retraining and social safety nets, and legal and ethical norms.
These pillars can make the AI transformation a genuine innovation while preserving public broadcasting's civic purpose.
We ask readers:
How much AI-driven change should public broadcasting accept?

Technology must operate on a foundation of trust from local communities and audiences.
Broadcasters should continuously check that internal decisions reflect social consensus.
Without those procedures, efficiency can become an excuse to erode public values.
Likewise, without accompanying institutions and procedures, short-term gains can turn into long-term losses.
Therefore technical experiments must be designed to be public and accountable.
