TY - GEN
T1 - Evaluating the Use of Interpretable Quantized Convolutional Neural Networks for Resource-Constrained Deployment
AU - Rogers, Harry
AU - De La Iglesia, Beatriz
AU - Zebin, Tahmina
N1 - Funding Information:
This work is supported by the Engineering and Physical Sciences Research Council [EP/S023917/1].
Publisher Copyright:
Copyright © 2023 by SCITEPRESS - Science and Technology Publications, Lda. Under CC license (CC BY-NC-ND 4.0)
PY - 2023/11/13
Y1 - 2023/11/13
N2 - The deployment of Neural Networks on resource-constrained devices for object classification and detection has led to the adoption of network compression methods, such as Quantization. However, the interpretation and comparison of Quantized Neural Networks with their Non-Quantized counterparts remains inadequately explored. To bridge this gap, we propose a novel Quantization Aware eXplainable Artificial Intelligence (XAI) pipeline to effectively compare Quantized and Non-Quantized Convolutional Neural Networks (CNNs). Our pipeline leverages Class Activation Maps (CAMs) to identify differences in activation patterns between Quantized and Non-Quantized CNNs. Through the application of Root Mean Squared Error, a subset of the top 5% scoring Quantized and Non-Quantized CAMs is generated, highlighting regions of dissimilarity for further analysis. We conduct a comprehensive comparison of activations from both Quantized and Non-Quantized CNNs, using Entropy, Standard Deviation, Sparsity metrics, and activation histograms. The ImageNet dataset is utilized for network evaluation, with CAM effectiveness assessed through Deletion, Insertion, and Weakly Supervised Object Localization (WSOL). Our findings demonstrate that Quantized CNNs exhibit higher performance in WSOL and show promising potential for real-time deployment on resource-constrained devices.
AB - The deployment of Neural Networks on resource-constrained devices for object classification and detection has led to the adoption of network compression methods, such as Quantization. However, the interpretation and comparison of Quantized Neural Networks with their Non-Quantized counterparts remains inadequately explored. To bridge this gap, we propose a novel Quantization Aware eXplainable Artificial Intelligence (XAI) pipeline to effectively compare Quantized and Non-Quantized Convolutional Neural Networks (CNNs). Our pipeline leverages Class Activation Maps (CAMs) to identify differences in activation patterns between Quantized and Non-Quantized CNNs. Through the application of Root Mean Squared Error, a subset of the top 5% scoring Quantized and Non-Quantized CAMs is generated, highlighting regions of dissimilarity for further analysis. We conduct a comprehensive comparison of activations from both Quantized and Non-Quantized CNNs, using Entropy, Standard Deviation, Sparsity metrics, and activation histograms. The ImageNet dataset is utilized for network evaluation, with CAM effectiveness assessed through Deletion, Insertion, and Weakly Supervised Object Localization (WSOL). Our findings demonstrate that Quantized CNNs exhibit higher performance in WSOL and show promising potential for real-time deployment on resource-constrained devices.
KW - Class Activation Maps
KW - Deep Learning
KW - Quantization
KW - XAI
UR - http://www.scopus.com/inward/record.url?scp=85179759150&partnerID=8YFLogxK
U2 - 10.5220/0012231900003598
DO - 10.5220/0012231900003598
M3 - Conference contribution
SN - 978-989-758-671-2
VL - 1
T3 - International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management, IC3K - Proceedings
SP - 109
EP - 120
BT - Proceedings of the 15th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management
A2 - Fred, Ana
A2 - Coenen, Frans
A2 - Bernardino, Jorge
PB - SCITEPRESS - Science and Technology Publications
ER -