Real Time Body Orientation Recognition for Customer Pose Orientation

Authors

  • Nethravathi P. S. Professor, Institute of Computer and Information Sciences, Srinivas University, Mangalore
  • Aithal P. S. Professor, Institute of Computer and Information Sciences, Srinivas University, Mangalore

DOI:

https://doi.org/10.47992/IJMTS.2581.6012.0197

Keywords:

Customer monitoring, Deep neural network, Pose estimation, Surveillance camera, Visibility mask, Convolutional neural network, Body orientation

Abstract

Background/Purpose: Consumer pose analysis is one of the most significant areas in marketing: retailers can assess the extent of customer interest in goods from customer pose data. Pose estimation, however, is difficult because of occlusion and left-right similarity. We describe a CNN-based solution that incorporates body orientation and a visibility mask to overcome these two challenges. Body orientation provides global information about the pose configuration of typical gaits in a retail setting. When a person faces right, for example, the left side of the body is hidden by the body itself; similarly, when the person faces the camera, the right shoulder will most likely appear on the left side of the image. A novel deep neural network design merges body orientation with local joint connections. Second, the visibility mask models each joint's occlusion state. It is closely tied to body orientation, which is the major source of self-occlusion, and detecting an occluding object (such as a shopping cart in a retail setting) provides additional cues for visibility-mask prediction. The final proposed method takes global body orientation, local joint connections, customer motion, and occluding objects into account. Finally, we run a number of comparison experiments to evaluate the effectiveness of our technique.
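The fusion of global body orientation with local joint features described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the joint count, feature size, and orientation-bin count are hypothetical, and the fusion is shown as a simple per-joint concatenation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_JOINTS = 14       # joints per person (assumed)
N_ORIENT_BINS = 8   # discretised body-orientation classes (assumed)
FEAT_DIM = 32       # per-joint local feature size (assumed)

def fuse_orientation(joint_feats, orient_logits):
    """Attach a soft global-orientation vector to every joint feature.

    joint_feats   : (N_JOINTS, FEAT_DIM) local appearance features
    orient_logits : (N_ORIENT_BINS,) raw scores for the body orientation
    returns       : (N_JOINTS, FEAT_DIM + N_ORIENT_BINS) fused features
    """
    # Softmax turns the logits into a probability over orientation bins,
    # so every joint sees the same global pose context.
    p = np.exp(orient_logits - orient_logits.max())
    p /= p.sum()
    orient = np.tile(p, (joint_feats.shape[0], 1))
    return np.concatenate([joint_feats, orient], axis=1)

feats = fuse_orientation(rng.normal(size=(N_JOINTS, FEAT_DIM)),
                         rng.normal(size=N_ORIENT_BINS))
print(feats.shape)  # (14, 40)
```

In a full network this concatenated tensor would feed later convolutional or dense layers, letting the left/right disambiguation come from the shared orientation context rather than from local appearance alone.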

Objective: This work presents customer pose estimation and a visibility mask that together form a prototype for handling inter- and self-occlusion. It also concentrates on local joint connections, global body orientation, and customer motion.

Methodology: The suggested technique is depicted in its entirety in Figure-2. To determine which portion of the human image is concealed, we use pose landmarks to identify the visible areas: landmarks in the occluded zone receive a lower confidence score during landmark extraction, so visibility masks incorporating occlusion information can be obtained. We use the visibility masks to handle the occluded person in three ways. First, they are used to detect visible parts and to construct spatial masks that filter out noise caused by occlusions.
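The first step above can be sketched as follows. This is a hedged sketch, not the paper's code: the confidence threshold and the feature shapes are assumptions, and the "spatial mask" is simplified to zeroing out features of low-confidence joints.

```python
import numpy as np

CONF_THRESHOLD = 0.3  # assumed cutoff; occluded landmarks score below this

def visibility_mask(confidences, threshold=CONF_THRESHOLD):
    """Binary mask: 1.0 for joints judged visible, 0.0 for occluded ones."""
    return (np.asarray(confidences) >= threshold).astype(float)

def mask_features(joint_feats, mask):
    """Zero the feature rows of occluded joints so occlusion noise
    does not propagate into later layers."""
    return joint_feats * mask[:, None]

# Example: five landmark confidences; the low ones mimic an arm
# hidden behind a shopping cart.
conf = [0.9, 0.8, 0.1, 0.95, 0.2]
m = visibility_mask(conf)             # [1., 1., 0., 1., 0.]
filtered = mask_features(np.ones((5, 4)), m)
```

Here the mask is both an output (which joints are visible) and a filter (suppressing unreliable features), matching the dual role the methodology assigns to it.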

Findings/Results: The proposed method outperforms the baselines: the network incorporates body-orientation information to overcome left-right similarity, and visibility-mask layers are introduced into the network to improve the estimation of occluded joints.

Conclusion: For customer pose estimation, a novel architecture based on deep learning is proposed, in which occluding-object detection provides clear cues for inter-occlusion. As a result, the method accounts for local joint connections, global body orientation, occluding objects, and human motion.

Paper Type: Research article.

Published

2022-05-18

How to Cite

Nethravathi P. S., & Aithal P. S. (2022). Real Time Body Orientation Recognition for Customer Pose Orientation. International Journal of Management, Technology and Social Sciences (IJMTS), 7(1), 390–399. https://doi.org/10.47992/IJMTS.2581.6012.0197
