DeepSeek Preps AI for Hospitals With Human-Labeled Medical Data

TLDR;

  • DeepSeek launches large-scale human labeling effort to refine its AI medical tools.
  • Over 300 hospitals in China are already using its technology for diagnostics.
  • The initiative aims to reduce AI hallucinations and improve clinical reliability.
  • Medical interns with coding and AI skills are being recruited in Beijing.

China-based AI startup DeepSeek is scaling up efforts to improve the accuracy and safety of its medical artificial intelligence by recruiting skilled interns to manually label sensitive clinical data. The move signals a new chapter in the company’s ambition to deepen AI integration in hospital systems, particularly in diagnostics and prescription support.

Recent job listings on Chinese platforms reveal that DeepSeek is hiring individuals with advanced medical and technical training. The roles, which pay the equivalent of about 70 US dollars per day, require candidates to be either senior medical students or master’s degree holders with experience in Python coding and prompt engineering for AI models. All work is currently restricted to Beijing.

DeepSeek’s Tools Used for Clinical Support

DeepSeek’s AI systems are already making their mark in Chinese healthcare. As of March 2025, at least 300 medical institutions have implemented the company’s technology for tasks like diagnostic support and automated prescription writing. These tools are powered by large language models (LLMs) similar to those used by global AI leaders, but tailored to China’s specific medical context.

By incorporating human-labeled data from medical professionals, DeepSeek hopes to reduce the inaccuracies often associated with LLMs. These inaccuracies, sometimes referred to as “hallucinations,” occur when AI generates responses that are convincingly structured but factually wrong—a dangerous flaw in any clinical setting. The interns will help train the AI to produce more reliable and knowledge-grounded answers in high-stakes medical environments.
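To make the idea concrete, here is a minimal, hypothetical Python sketch of what a human-labeled review record and its conversion into a fine-tuning example might look like. The field names, verdict values, and workflow are illustrative assumptions for this article, not DeepSeek's actual tooling or data schema.

```python
# Hypothetical sketch: a human-labeled clinical review record and how such labels
# could feed a supervised fine-tuning set. All names and values are illustrative
# assumptions, not DeepSeek's actual schema.
from dataclasses import dataclass, field
from typing import List


@dataclass
class LabeledClinicalExample:
    question: str               # clinical query posed to the model
    model_answer: str           # raw model output under review
    reviewer_verdict: str       # e.g. "correct", "hallucinated", "incomplete"
    corrected_answer: str = ""  # reviewer-written, knowledge-grounded answer
    references: List[str] = field(default_factory=list)  # guidelines consulted


def to_training_pair(example: LabeledClinicalExample) -> dict:
    """Turn a reviewed record into a prompt/completion pair for fine-tuning."""
    # Keep the model's own answer only when the reviewer confirmed it; otherwise
    # train toward the reviewer's corrected, source-backed answer.
    target = (
        example.model_answer
        if example.reviewer_verdict == "correct"
        else example.corrected_answer
    )
    return {"prompt": example.question, "completion": target}


if __name__ == "__main__":
    record = LabeledClinicalExample(
        question="Is drug X appropriate as a first-line antihypertensive?",
        model_answer="Yes, start drug X at 500 mg twice daily.",  # flagged as fabricated
        reviewer_verdict="hallucinated",
        corrected_answer=(
            "Not as stated; the dosing is unsupported. First-line choices "
            "should follow current hypertension guidelines."
        ),
        references=["institutional hypertension guideline"],
    )
    print(to_training_pair(record))
```

In a setup like this, the reviewers' verdicts and corrections, rather than the model's unverified output, become the training target, which is the essence of the human-in-the-loop approach described above.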

Concerns Over AI Safety in Medicine

DeepSeek’s increased focus on human oversight may be a strategic response to growing scrutiny around AI’s role in healthcare. In May, a group of researchers warned that AI-generated recommendations, particularly those prone to factual errors, pose serious risks to patient safety. The researchers emphasized that while AI offers significant promise in diagnostics, it must be backed by rigorous validation and continuous improvement.

The interns hired by DeepSeek are expected to address these concerns by enriching the AI’s understanding of medical knowledge and reducing its tendency to fabricate information. Their work will contribute directly to improving the accuracy of answers to complex medical queries.

AI-in-Medicine Landscape Is Rapidly Evolving

DeepSeek’s latest initiative unfolds against a broader backdrop of rapid innovation in AI-driven medicine. Just this week, biotech firm Insilico Medicine reported promising results for an AI-generated drug aimed at treating idiopathic pulmonary fibrosis. The drug, rentosertib, passed a mid-stage trial in China and is set to move on to larger studies.

These parallel developments point to a growing convergence between artificial intelligence and real-world medical applications. Whether through diagnosis, treatment recommendations, or drug discovery, AI is beginning to shape the way healthcare operates, not just in China, but around the globe.

However, success increasingly hinges on reducing errors and boosting trust. DeepSeek’s human-in-the-loop training approach may represent a crucial step toward making AI a safer and more dependable partner in clinical practice.
