A Review of Object Visual Detection for Intelligent Vehicles

Authors

  • Sirin Kumar Singh Department of Computer Science & Engineering, School of Engineering, Babu Banarasi Das University, Uttar Pradesh, India
  • Sunil Vishwakarma Department of Computer Science & Engineering, School of Engineering, Babu Banarasi Das University, Uttar Pradesh, India

DOI:

https://doi.org/10.54060/JIEEE/002.02.008

Keywords:

Convolutional Neural Network, Region-based Convolutional Neural Networks, Object Detection, Optical Flow Method

Abstract

This paper surveys different object detection (OD) techniques. Because of its close relationship with video analysis and image understanding, object detection has attracted much research attention in recent years. Traditional object detection methods are built on handcrafted features and shallow trainable models. One such method reviewed here is the Optical Flow Method (OFM), which is found to be more robust and effective for moving object detection, as demonstrated by the investigation in this paper. Applying optical flow to an image yields flow vectors at the points corresponding to moving objects; the next step, marking the required moving object of interest, falls to post-processing. Post-processing is the main contribution of this review to the moving object detection problem. The performance of traditional methods plateaus quickly, even when complex ensembles are built that combine many low-level image features with high-level context from object detectors and scene classifiers. With the rapid development of deep learning, more powerful tools that can learn semantic, high-level, deeper features have been introduced to address the problems of traditional architectures. These models differ in network architecture, training strategy, optimization objective, and so on. In this paper, we review deep-learning-based object detection frameworks. Our survey begins with a brief introduction to the history of deep learning and its representative tools, namely the Convolutional Neural Network (CNN) and region-based convolutional neural networks (R-CNN).
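The post-processing step described above can be sketched in a few lines: given a dense optical-flow field (one displacement vector per pixel, as produced by any flow estimator), threshold the flow-vector magnitudes to obtain a binary mask of points belonging to moving objects. This is a minimal plain-Python illustration under those assumptions, not the authors' implementation; the function name and threshold value are hypothetical.

```python
import math

def moving_object_mask(flow, threshold=1.0):
    """Given a dense optical-flow field `flow` (a 2-D grid of (dx, dy)
    displacement vectors, one per pixel), return a binary mask marking
    the points whose flow magnitude exceeds `threshold`, i.e. the points
    that likely belong to moving objects. The flow field itself would
    come from an upstream optical-flow estimator."""
    return [[1 if math.hypot(dx, dy) > threshold else 0
             for (dx, dy) in row]
            for row in flow]

# A 3x3 toy flow field: only the centre pixel moves noticeably.
flow = [
    [(0.1, 0.0), (0.0, 0.2), (0.1, 0.1)],
    [(0.0, 0.1), (2.0, 1.5), (0.2, 0.0)],
    [(0.1, 0.0), (0.0, 0.0), (0.0, 0.1)],
]
mask = moving_object_mask(flow, threshold=1.0)
# Only the centre point exceeds the threshold, so only it is marked.
```

In practice the binary mask would be further cleaned (e.g. by morphological filtering and connected-component grouping) before the object of interest is marked, which is the post-processing stage the abstract refers to.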

References

P. F. Felzenszwalb, R. B. Girshick, D. McAllester, et al., “Object detection with discriminatively trained part-based models,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 9, pp. 1627–1645, Sep 2010.

K. K. Sung and T. Poggio, “Example-based learning for view-based human face detection,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 1, pp. 39–51, 2002.

C. Wojek, P. Dollár, B. Schiele, et al., “Pedestrian detection: An evaluation of the state of the art,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 4, pp. 743–761, April 2012.

H. Kobatake and Y. Yoshinaga, “Detection of spicules on mammogram based on skeleton analysis.” IEEE Trans. Med. Imag., vol. 15, no. 3, pp. 235–245, 1996.

Y. Jia, E. Shelhamer, J. Donahue, et al., “Caffe: Convolutional architecture for fast feature embedding,” in Proceedings of the ACM International Conference on Multimedia - MM ’14, pp.675-678, 2014.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, 2012.

Z. Cao, T. Simon, S.-E. Wei, et al., “Realtime multi-person 2d pose estimation using part affinity fields,” in CVPR, pp.7291-7299, 2017.

Z. Yang and R. Nevatia, “A multi-scale cascade fully convolutional network face detector,” in 23rd International Conference on Pattern Recognition (ICPR), 2016.

C. Chen, A. Seff, A. L. Kornhauser, et al., “DeepDriving: Learning affordance for direct perception in autonomous driving,” in ICCV, pp. 2722–2730, 2015.

X. Chen, H. Ma, J. Wan, et al., “Multi-view 3D object detection network for autonomous driving,” in CVPR, pp. 1907–1915, 2017.

A. Dundar, J. Jin, B. Martini, et al., “Embedded streaming deep neural networks accelerator with applications,” IEEE Trans. Neural Netw. Learn. Syst., vol. 28, no. 7, pp. 1572–1583, 2017.

R. J. Cintra, S. Duffner, C. Garcia, et al., “Low-complexity approximate convolutional neural networks,” IEEE Trans. Neural Netw. Learn. Syst., vol. 29, no. 12, pp. 5981–5992, 2018.

S. H. Khan, M. Hayat, M. Bennamoun, et al., “Cost-sensitive learning of deep feature representations from imbalanced data,” IEEE Trans. Neural Netw. Learn. Syst., vol. 29, no. 8, pp. 3573–3587, 2018.

A. Stuhlsatz, J. Lippel, and T. Zielke, “Feature extraction with deep neural networks by a generalized discriminant analysis,” IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 4, pp. 596–608, 2012.

R. Girshick, J. Donahue, T. Darrell, et al., “Rich feature hierarchies for accurate object detection and semantic segmentation,” in CVPR, pp.580-587, 2014.

R. Girshick, "Fast R-CNN," 2015 IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1440-1448, doi: 10.1109/ICCV.2015.169.

J. Redmon, S. Divvala, R. Girshick, et al., “You only look once: Unified, real-time object detection,” in CVPR, pp. 779–788, 2016.

S. Ren, K. He, R. Girshick, et al., “Faster R-CNN: Towards real-time object detection with region proposal networks,” in NIPS, pp. 91–99, 2015.

D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. of Comput. Vision, vol. 60, no. 2, pp. 91–110, 2004.

A. Broggi, A. Cappalunga, S. Cattani, et al., “Lateral vehicles detection using monocular high-resolution cameras on TerraMaxTM,” in IEEE Intelligent Vehicles Symposium, 2008.

A. de la Escalera, J. M. Armingol, and M. Mata, “Traffic sign recognition and analysis for intelligent vehicles,” Image Vis. Comput., vol. 21, no. 3, pp. 247–258, 2003.

A. Coates and A. Y. Ng, “The importance of encoding versus training with sparse coding and vector quantization,” in ICML, 2011.

A. de la Escalera, L. E. Moreno, M. A. Salichs, et al., “Road traffic sign detection and classification,” IEEE Trans. Ind. Electron., vol. 44, no. 6, pp. 848–859, 1997.

S. Agarwal and D. Roth, “Learning a sparse representation for object detection,” in Computer Vision — ECCV 2002, Berlin, Heidelberg: Springer Berlin Heidelberg, vol. 2353, pp. 113–127, 2002.

T. Ahonen, A. Hadid, and M. Pietikäinen, “Face Recognition with Local Binary Patterns,” in Lecture Notes in Computer Science, Berlin, Heidelberg: Springer Berlin Heidelberg, vol.3021, pp. 469–481, 2004.

S. K. Divvala, A. A. Efros, and M. Hebert, “How important are ‘deformable parts’ in the deformable parts model?” in Computer Vision – ECCV 2012 Workshops and Demonstrations, Berlin, Heidelberg: Springer Berlin Heidelberg, vol. 7585, pp. 31–40, 2012.

N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), pp.886-893, 2005.

A. Gudigar, S. Chokkadi, and Raghavendra, “A review on automatic detection and recognition of traffic sign,” Multimed. Tools Appl., vol. 75, no. 1, pp. 333–364, 2016.

V. K. Vegamoor, S. Darbha, and K. R. Rajagopal, “A review of automatic vehicle following systems,” J Indian Inst Sci, vol. 99, no. 4, pp. 567–587, 2019.

S. Ichikawa and M. Miyatake, “Energy efficient train trajectory in the railway system with moving block signaling scheme,” IEEJ j. ind. appl., vol. 8, no. 4, pp. 586–591, 2019.

Y. Aoyagi and T. Asakura, “A study on traffic sign recognition in scene image using genetic algorithms and neural networks,” in Proceedings of the IEEE IECON. 22nd International Conference on Industrial Electronics, Control, and Instrumentation, pp.1838-1843, 1996.

G. Adorni, V. D'Andrea, G. Destri et al., "Shape searching in real world images: a CNN-based approach," Fourth IEEE International Workshop on Cellular Neural Networks and their Applications Proceedings (CNNA-96), pp. 213-218,1996.

G. Adorni, M. Mordonini and A. Poggi, "Autonomous agents’ coordination through traffic signals and rules," Proceedings of Conference on Intelligent Transportation Systems, pp. 290-295,1997.

P. Arnoul, M. Viala, J. P. Guerin, et al, "Traffic signs localisation for highways inventory from a video camera on board a moving collection van," Proceedings of Conference on Intelligent Vehicles, pp.141-146, 1996.

H. Austermeier, U. Büker, B. Mertsching, et al., “Analysis of traffic scenes by using the hierarchical structure code,” in Advances in Structural and Syntactic Pattern Recognition, WORLD SCIENTIFIC, pp. 561–570,1993.

M. A. Islam, M. Rochan, N. D. B. Bruce, et al., “Gated feedback refinement network for dense image labeling,” in CVPR, pp. 3751–3759, 2017.

Z. Cai, Q. Fan, R. S. Feris, et al., “A unified multi-scale deep convolutional neural network for fast object detection,” in Computer Vision – ECCV, Cham: Springer International Publishing, pp. 354–370, 2016.

Z. Cai and N. Vasconcelos, “Cascade R-CNN: Delving into high quality object detection,” in CVPR, pp. 6154–6162, 2018.

L.-C. Chen, Y. Zhu, G. Papandreou, et al., “Encoder-decoder with atrous separable convolution for semantic image segmentation,” in ECCV, pp. 801–818, 2018.

F. Chollet, “Xception: Deep learning with depthwise separable convolutions,” in CVPR, pp. 1251–1258, 2017.

S. Elfwing, E. Uchibe, and K. Doya, “Sigmoid-weighted linear units for neural network function approximation in reinforcement learning,” Neural Netw., vol. 107, pp. 3–11, 2018.

M. Everingham, S. M. A. Eslami, L. Van Gool, et al., “The pascal visual object classes challenge: A retrospective,” Int. J. Comput. Vis., vol. 111, no. 1, pp. 98–136, 2015.

G. Ghiasi, T.-Y. Lin, R. Pang, et al., “NAS-FPN: Learning scalable feature pyramid architecture for object detection,” in CVPR, pp. 7036–7045, 2019.

K. He, R. Girshick, and P. Dollár, “Rethinking ImageNet pre-training,” in ICCV, pp. 4918–4927, 2019.

K. He, G. Gkioxari, P. Dollár, et al., “Mask R-CNN,” in ICCV, pp. 2980–2988, 2017.

K. He, X. Zhang, S. Ren, et al., “Deep residual learning for image recognition,” in CVPR, pp. 770–778, 2016.

A. Howard, M. Sandler, G. Chu, et al., “Searching for MobileNetV3,” in ICCV, pp. 1314–1324, 2019.

J. Huang, V. Rathod, C. Sun, et al., “Speed/accuracy trade-offs for modern convolutional object detectors,” in CVPR, pp. 7310–7311, 2017.

S.-W. Kim, H.-K. Kook, J.-Y. Sun, et al., “Parallel feature pyramid network for object detection,” in ECCV, pp. 234–250, 2018.

A. Kirillov, R. Girshick, K. He, and P. Dollár, “Panoptic Feature Pyramid Networks,” 2019.

T. Kong, F. Sun, W. Huang, et.al, “Deep feature pyramid reconfiguration for object detection,” arXiv [cs.CV], Aug 2018.

B. Khan and P. Singh, “Selecting a meta-heuristic technique for smart micro-grid optimization problem: A comprehensive analysis,” IEEE Access, vol. 5, pp. 13951–13977, July 2017.

T. Molla, B. Khan, and P. Singh, “A comprehensive analysis of smart home energy management system optimization techniques,” Journal of Autonomous Intelligence, vol. 1, no. 1, pp. 15–21, 2018.

P. Singhal, P. Singh, and A. Vidyarthi, “Interpretation and localization of Thorax diseases using DCNN in Chest X-Ray,” Journal of Informatics Electrical and Electronics Engineering, vol. 1, no. 1, pp. 1–7, 2020.

M. Vinny and P. Singh, “Review on the Artificial Brain Technology: BlueBrain,” Journal of Informatics Electrical and Electronics Engineering, vol. 1, no. 1, pp. 1–11, 2020.

A. Sahani, P. Singh, and A. Kumar, “Introduction to Blockchain,” Journal of Informatics Electrical and Electronics Engineering, vol. 1, no. 1, pp. 1–9, 2020.

M. Misra and P. Singh, “Energy Optimization for Smart Housing Systems,” Journal of Informatics Electrical and Electronics Engineering, vol. 1, no. 1, pp. 1–6, 2020.

N. Srivastava, U. Kumar, and P. Singh, “Software and Performance Testing Tools,” Journal of Informatics Electrical and Electronics Engineering, vol. 2, no. 1, pp. 1–12, Jan 2021.

Published

2021-06-05

How to Cite

[1]
S. Kumar Singh and S. Vishwakarma, “A Review of Object Visual Detection for Intelligent Vehicles”, J. Infor. Electr. Electron. Eng., vol. 2, no. 2, pp. 1–10, Jun. 2021.