Bharath V and Sreelatha P, Department of Biomedical Engineering, KPR Institute of Engineering and Technology, India.
Jeba N, Department of Computer Science and Engineering, Kumaraguru College of Technology, India.
Priyadharshini P, PCB Design Engineer, Robert Bosch Engineering and Business Solutions Pvt. Ltd., India.
Gururraja, Service Engineer, Silicon Systems, India.
Online First: 30 December 2020
Publisher Name: IJAICT India Publications, India.
Print ISBN: 978-81-950008-0-7
Online ISBN: 978-81-950008-1-4
Pages: 423-425
Abstract
The human visual system (HVS) tends to select relevant regions of a scene before higher-level processing begins. Visual attention models attempt to predict the regions of images or video that will attract a human observer's gaze. Such models are used in areas such as computer vision, MPEG standards, and quality assessment. Although numerous models have been proposed, only a few are applicable to high dynamic range (HDR) image content, and none has addressed HDR video. Furthermore, the drawback of existing models is that they cannot reproduce the behaviour of the HVS under the wide luminance range found in HDR content. This paper overcomes these problems with an approach that models bottom-up visual saliency for HDR input by merging spatial and temporal image features. An analysis of human eye-movement data confirms the effectiveness of the proposed model. Evaluation using three well-known quantitative metrics shows that the proposed model significantly improves the prediction of visual attention for HDR content.
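The abstract does not give the model's equations, but the general idea of merging a spatial cue and a temporal cue into one bottom-up saliency map can be illustrated with a minimal sketch. The choices below (center-surround contrast as the spatial feature, inter-frame difference as the temporal feature, log-luminance compression for the HDR range, and a weighted sum for fusion) are common baseline assumptions, not the authors' actual method:

```python
import numpy as np

def spatial_saliency(frame, k=5):
    """Spatial cue: center-surround contrast |pixel - local mean| (box filter)."""
    pad = k // 2
    padded = np.pad(frame, pad, mode="edge")
    # Integral image lets us take k-by-k window sums in O(1) per pixel.
    c = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    h, w = frame.shape
    local_sum = (c[k:k + h, k:k + w] - c[:h, k:k + w]
                 - c[k:k + h, :w] + c[:h, :w])
    return np.abs(frame - local_sum / (k * k))

def temporal_saliency(frame, prev_frame):
    """Temporal cue: absolute inter-frame luminance difference (motion proxy)."""
    return np.abs(frame - prev_frame)

def saliency_map(frame, prev_frame, w_s=0.5, w_t=0.5):
    """Fuse spatial and temporal cues into one normalized saliency map.

    log1p compresses the wide HDR luminance range before feature
    extraction; the weights w_s and w_t are illustrative, not tuned.
    """
    s = spatial_saliency(np.log1p(frame))
    t = temporal_saliency(np.log1p(frame), np.log1p(prev_frame))

    def norm(x):
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

    return w_s * norm(s) + w_t * norm(t)
```

A map produced this way can then be compared against fixation data with standard metrics (e.g. correlation with an eye-movement density map), which is the style of evaluation the abstract describes.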
Keywords
Visual Protuberancy State, HVS, HDR, Spatial and temporal
Cite this article
Bharath V, Sreelatha P, Jeba N, Priyadharshini P and Gururraja, “Protuberancy State Recognition in Human Visual System”, Innovations in Information and Communication Technology, pp. 423-425, December 2020.
Copyright
© 2020 Bharath V, Sreelatha P, Jeba N, Priyadharshini P and Gururraja. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.