Across global healthcare systems, artificial intelligence is steadily transforming diagnostics, risk prediction, and clinical decision-making. From radiology to oncology, algorithmic tools are increasingly embedded in routine practice. Pediatric surgery, however, remains a notable exception. Despite rapid technological progress, the adoption of AI in this highly sensitive field has been slow, shaped less by technical limitations than by ethical uncertainty and professional caution.

Recent research highlights a clear paradox: pediatric surgeons broadly acknowledge AI’s potential to improve diagnostic accuracy, enhance surgical planning, and support complex decisions, yet its real-world use remains limited and largely academic. This hesitancy reflects not resistance to innovation but deep concern about how AI can be safely integrated into care for children—patients who are uniquely vulnerable and unable to fully participate in decisions affecting their health.

Children’s limited autonomy, the central role of parental consent, and the potentially irreversible consequences of surgical errors all heighten the ethical stakes. Surgeons must navigate questions of responsibility if an AI system contributes to harm, determine how informed consent can be meaningfully obtained when advanced algorithms are involved, and safeguard sensitive patient data in environments that may lack robust digital infrastructure. In low-resource settings, these challenges are further amplified by concerns over data quality, representativeness, and regulatory preparedness.

These issues form the backdrop of a national survey conducted by a team of pediatric surgeons at the Federal Medical Centre, Umuahia, Nigeria. Published on 20 October 2025 in the World Journal of Pediatric Surgery (DOI: 10.1136/wjps-2025-001089), the study represents the first comprehensive examination of how pediatric surgeons across Nigeria perceive the ethical and practical implications of AI in their field. Drawing responses from clinicians across all six geopolitical zones, the research assessed levels of awareness, patterns of AI use, and the ethical concerns shaping clinical attitudes.

The findings reveal a profession cautiously weighing opportunity against risk. Among the 88 respondents—most of them experienced consultants actively practicing in diverse clinical environments—only about one-third reported any prior use of AI. Even then, applications were largely confined to non-clinical tasks such as literature searches, academic writing, or administrative documentation. Very few surgeons reported using AI for diagnostic support, imaging interpretation, surgical simulation, or intraoperative decision-making, underscoring a wide gap between technological potential and everyday practice.

Ethical concerns were nearly universal. Surgeons consistently identified accountability for AI-related errors as a major unresolved issue, particularly in cases where harm might result from algorithmic recommendations. The complexity of obtaining informed consent for AI-assisted care, especially when parents may not fully understand how such systems work, was another significant barrier. Data privacy and security also featured prominently, alongside worries about algorithmic bias, diminished human oversight, and unclear legal responsibility.

Opinions diverged on transparency with patients’ families. While many surgeons believed parents should be informed whenever AI tools are involved, others argued that disclosure is unnecessary if AI does not directly influence final clinical decisions. This lack of consensus reflects broader uncertainty about how AI should be framed within existing ethical norms of pediatric care.

Confidence in current legal and regulatory frameworks was notably low. Most respondents felt that existing policies were insufficient to guide responsible AI use in healthcare, particularly in pediatric surgery. Calls for clearer national guidelines, stronger regulatory leadership, and structured training programs were widespread, reflecting a desire for institutional support rather than individual experimentation.

As the research team noted, pediatric surgeons are not fundamentally opposed to AI. Instead, they are seeking reassurance that its use will be safe, equitable, and well governed. Addressing ethical challenges related to accountability, consent, and data protection is seen as a prerequisite for meaningful adoption. Without clear standards and safeguards, AI risks becoming a source of uncertainty rather than a clinical asset.

The study ultimately points to the need for pediatric-specific ethical frameworks that recognize the distinct vulnerabilities of child patients. Clearer consent procedures, well-defined accountability mechanisms, strengthened data governance, and expanded AI literacy among clinicians—and eventually families—are essential steps toward building trust. As artificial intelligence continues to move closer to the operating theatre, such measures offer a roadmap for integrating innovation while preserving the core ethical commitments of pediatric surgical care.

Source: Zhejiang University