trustPAIKON: Trustworthy AI for Cancer Analysis

In clinical practice, the highest level of precision is essential. The rapidly expanding knowledge of biological processes and relevant biomarkers demands ever-increasing expertise from medical professionals. Artificial intelligence (AI) can provide critical support by automating time-consuming routine tasks, especially in complex, image-based cancer diagnostics, thereby enabling faster and more precise results. This marks a significant advancement for personalized precision medicine.
For AI to become a truly reliable tool in medical workflows, it must deliver not only correct results but also transparent, traceable, and quantifiable levels of confidence. It is not enough for an algorithm to perform an analysis and output a result; clinicians must also understand how trustworthy that result is. Key questions must be answerable: How reliable is this output? Which algorithmic parameters influenced it, and how were they weighted? Only when such information is available in a transparent and interpretable form can AI be responsibly integrated into everyday clinical practice.
This is precisely where trustPAIKON comes in. The project focuses on developing methods for quantitative assessment and visualization of uncertainty in AI-assisted image analysis. In collaboration with the Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Katana Labs is pioneering novel techniques for uncertainty estimation in deep learning models, directly integrated into the PAIKON platform.
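The project description does not detail which estimation techniques are used, but one widely known approach to uncertainty estimation in deep learning is Monte Carlo dropout, in which several stochastic forward passes are combined to obtain both a prediction and a measure of its spread. The following is a minimal, purely illustrative sketch in PyTorch; the model, the tensor shapes, and the function name mc_dropout_predict are assumptions for illustration, not part of PAIKON.

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
    """Mean prediction and per-pixel uncertainty from n stochastic passes.

    Illustrative only: assumes a segmentation model containing dropout
    layers whose output has shape (batch, classes, H, W).
    """
    model.eval()  # keep batch norm and other layers in inference mode
    # Re-enable only the dropout layers so each forward pass is stochastic.
    for module in model.modules():
        if isinstance(module, (nn.Dropout, nn.Dropout2d)):
            module.train()

    with torch.no_grad():
        # Stacked shape: (n_samples, batch, classes, H, W)
        probs = torch.stack(
            [torch.softmax(model(x), dim=1) for _ in range(n_samples)]
        )

    mean_probs = probs.mean(dim=0)              # averaged class probabilities
    uncertainty = probs.std(dim=0).mean(dim=1)  # per-pixel spread across passes
    return mean_probs, uncertainty
```

In such a scheme, a high standard deviation across passes flags image regions where the model's output is unstable and should be reviewed rather than trusted blindly.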
Katana Labs’ PAIKON platform supports pathologists in the digital analysis of histopathological tissue samples. trustPAIKON enhances this platform with robust uncertainty indicators for image analysis, enabling precise and transparent instance segmentation of tumor structures and cells, accompanied by clear information about the confidence level of each individual AI-assisted decision.
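As an illustration of how a confidence level might be attached to each individual segmentation decision, the sketch below aggregates pixel-level probabilities into one score per segmented instance. All names (instance_confidence, class_probs, instance_masks) are hypothetical and are not taken from the PAIKON API.

```python
import numpy as np

def instance_confidence(class_probs: np.ndarray, instance_masks: np.ndarray) -> dict:
    """Average the per-pixel probability inside each instance mask.

    class_probs:    (H, W) predicted probability of the segmented class.
    instance_masks: (H, W) integer label per instance, 0 = background.
    Returns {instance_id: mean probability}, a simple per-instance score.
    """
    scores = {}
    for inst_id in np.unique(instance_masks):
        if inst_id == 0:
            continue  # background pixels belong to no instance
        mask = instance_masks == inst_id
        scores[int(inst_id)] = float(class_probs[mask].mean())
    return scores
```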
Beyond the medical domain, trustPAIKON also aims to develop generalizable methodologies and modular, reusable software components for AI models. These components can be applied across other safety-critical and highly regulated fields where the reliability and interpretability of AI decisions are essential.
With trustPAIKON, we aim to establish a foundation for explainable, robust, and responsibly deployable AI in medical diagnostics.
Project duration: October 2024 to September 2026
Project leads: Dr. Peter Steinbach (HZDR) and Dr. Walter de Back (Katana Labs)
This project is supported by the SAB with funds from the European Social Fund (ESF).