[2PS-242]
Scene-Adaptive Multimodal Image Sensor for Data-Efficient In-sensor Dynamic Vision System
Presenter: Yuson Kim (Korea Institute of Science and Technology)
Principal Investigators: Changsoon Choi (Sungkyunkwan University), Jungah Lim (Korea Institute of Science and Technology)
Abstract
With the rapid advancement of AI vision, the demand for efficient data acquisition has increased substantially. However, conventional frame-based image sensors continuously acquire dense visual data regardless of scene dynamics; the resulting severe temporal redundancy from repeatedly captured background information increases computational overhead and energy consumption. Event-based sensing has been introduced to mitigate this redundancy, yet its practical applicability remains limited by circuit complexity and degraded image quality. In this study, we propose a single-diode-based multimodal image sensor capable of switching between event-based sensing and conventional frame-based imaging. By integrating an Ag electrode and an iCVD-deposited hydrophilic polymer dielectric (with –OH functional groups) into a vertically formed p–n photodiode, both capacitive and photocurrent-dominated response regimes are realized depending on the bias conditions.
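The data-efficiency argument above can be illustrated with a toy simulation (not the authors' system): a frame-based readout transmits every pixel of every frame, while an event-based readout, as in a dynamic vision sensor, emits values only where the per-pixel intensity change exceeds a contrast threshold. The scene, threshold `theta`, and array sizes below are illustrative assumptions.

```python
import numpy as np

# Toy scene: 50 frames of 32x32 pixels, a static background (0.2)
# with one bright spot (1.0) that moves one column per frame.
H, W, T = 32, 32, 50
frames = np.full((T, H, W), 0.2)
for t in range(T):
    frames[t, 5, t % W] = 1.0  # moving object on row 5

# Frame-based readout: every pixel of every frame is transmitted,
# even though most of the scene never changes.
frame_values = T * H * W

# Event-based readout (conceptual): a pixel fires an event only when
# its intensity change between consecutive frames exceeds a threshold.
theta = 0.1  # illustrative contrast threshold
diffs = np.abs(np.diff(frames, axis=0))
event_values = int((diffs > theta).sum())

print(frame_values, event_values)  # dense readout vs sparse events
```

In this sketch only the two pixels affected by the moving spot fire per frame transition, so the event stream is orders of magnitude sparser than the dense frame stream, which is the temporal-redundancy saving that motivates a sensor able to switch into an event-like regime.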