Author: Parlak, Ismail Enes
Date accessioned: 2026-02-08
Date available: 2026-02-08
Date of issue: 2025
ISSN: 0950-7051
eISSN: 1872-7409
DOI: https://doi.org/10.1016/j.knosys.2025.114402
URI: https://hdl.handle.net/20.500.12885/5742

Abstract: The increasing opacity and lack of verifiable audit trails in AI decision-making systems pose significant challenges to establishing trust and accountability, particularly in high-impact domains. This paper introduces Blockchain-Assisted Explainable Decision Traces (BAXDT), a novel architecture designed to enhance the transparency and auditability of AI systems. BAXDT creates comprehensive, immutable records for each AI decision by integrating model outputs, SHAP-based XAI summaries, a novel Explanation Density Metric, and detailed model/data context into a unified JSON trace. The 0.80 threshold for the Explanation Density Metric was empirically supported by Kneedle-based automatic threshold detection. The BAXDT architecture leverages blockchain by recording a cryptographic hash of each decision trace on-chain, while the full trace is stored off-chain. The system's effectiveness was demonstrated through a multifaceted evaluation: simulations across three diverse public datasets (medical, financial, educational) confirmed its domain-agnostic applicability; a scalability analysis of up to 20,000 traces demonstrated efficient, linear performance; and a successful deployment on the Ethereum Sepolia public testnet verified its real-world viability. A case study on text data further underscored the framework's flexibility. BAXDT provides a robust framework for documenting AI decisions (what, why, based on what, and when), thereby fostering trustworthy AI and supporting regulatory compliance.

Language: en
Rights: info:eu-repo/semantics/closedAccess
Keywords: Explainable artificial intelligence (XAI); Blockchain; Decision traceability; Artificial intelligence accountability; Auditability
Title: Blockchain-assisted explainable decision traces (BAXDT): An approach for transparency and accountability in artificial intelligence systems
Type: Article
Journal volume: 329
Web of Science ID: WOS:001567008200001
Scopus ID: 2-s2.0-105014945606
Quartile (WoS / Scopus): Q1 / Q1
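The abstract describes each decision trace as a unified JSON record whose cryptographic hash is anchored on-chain while the full trace is stored off-chain. The minimal Python sketch below illustrates that pattern only; the field names (model_output, shap_summary, explanation_density, context) and both helper functions are hypothetical and are not the paper's actual schema or implementation.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_decision_trace(model_output, shap_summary, explanation_density, context):
    """Assemble a BAXDT-style decision trace. Field names are illustrative,
    not the schema used in the paper."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),   # "when"
        "model_output": model_output,                          # "what" was decided
        "shap_summary": shap_summary,                          # "why" (SHAP-based XAI summary)
        "explanation_density": explanation_density,            # the paper's novel metric
        "context": context,                                    # "based on what" (model/data provenance)
    }

def trace_digest(trace: dict) -> str:
    """Canonicalize the JSON (sorted keys, fixed separators) so an identical
    trace always yields the same digest, then take SHA-256. Only this digest
    would be recorded on-chain; the full trace remains off-chain."""
    canonical = json.dumps(trace, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

trace = build_decision_trace(
    model_output={"label": "approve", "probability": 0.91},
    shap_summary={"income": 0.42, "debt_ratio": -0.31, "age": 0.07},
    explanation_density=0.83,  # compared against the reported 0.80 threshold
    context={"model": "xgboost-1.7", "dataset": "loan-v3", "version": "2025-09"},
)
print(trace_digest(trace))  # 64-hex-character digest suitable for on-chain anchoring
```

Canonical serialization matters here: without sorted keys and fixed separators, semantically identical traces could hash to different values, defeating on-chain verification.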
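The abstract also states that the 0.80 Explanation Density threshold was supported by Kneedle-based automatic detection. Purely as an illustration of that technique, the sketch below locates a knee point with the open-source kneed package; the curve is synthetic and the paper's actual data and procedure are not reproduced here.

```python
import numpy as np
from kneed import KneeLocator  # open-source Kneedle implementation: pip install kneed

# Purely synthetic stand-in for a sorted (descending) Explanation Density curve;
# a convex, decreasing shape with values in (0.6, 1.0].
x = np.arange(200)
y = 0.6 + 0.4 * np.exp(-x / 40.0)

# Kneedle finds the point of maximum curvature (the "knee") on the curve.
locator = KneeLocator(x, y, curve="convex", direction="decreasing")
print(f"knee at index {locator.knee}, density at knee = {locator.knee_y:.2f}")
# The density value at the knee would serve as the automatically detected
# threshold, analogous to the 0.80 cut-off reported in the abstract.
```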