Role of Intelligent Systems in the Interpretation of Deep Neural Networks in a Post-pandemic World
Keynote abstract
Prof Saman K. Halgamuge, FIEEE
Popular AI models, in particular machine learning-based models, have three significant deficiencies: they are mostly designed manually using the experience of AI experts; they lack human interpretability, i.e., users do not understand the AI architectures either semantically/linguistically or mathematically; and they are unable to change dynamically when new data are acquired. Addressing these deficiencies would answer some valid questions about traceability and accountability, and would enable the integration of existing knowledge (scientific knowledge or linguistically articulated human experience) into the AI model. This keynote addresses these deficiencies in the context of major global problems in the post-pandemic world and shows how intelligent systems can contribute to finding solutions.