A Review of Automatic Driving System by Recognizing Road Signs Using Digital Image Processing
DOI: https://doi.org/10.54060/JIEEE/002.02.003

Keywords: object, strategies, recognition, identification, classifiers

Abstract
This review examines object detection and its relationship to video analysis and image understanding, a topic that has attracted considerable research attention in recent years. Traditional object detection methods are built on handcrafted features and shallow trainable architectures. This paper presents one such method, the optical flow technique, which is found to be more robust and effective for moving-object detection, as demonstrated by the analysis in this review. Applying optical flow to an image yields flow vectors at the points corresponding to moving objects. Marking the required moving object of interest is then left to post-processing, which is the main contribution of this paper to the moving-object detection problem. The performance of traditional methods stagnates easily, since improving them requires constructing complex ensembles that combine multiple low-level image features with high-level context from object detectors and scene classifiers. With the rapid development of deep learning, more powerful tools that can learn semantic, high-level, deeper features have been introduced to address the problems of traditional architectures. These models differ in network architecture, training strategy, and optimization function. This review therefore surveys deep-learning-based object detection frameworks, beginning with a brief history of deep learning and its representative tool, the Convolutional Neural Network (CNN).
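The pipeline described above — compute per-region flow vectors between two frames, then post-process by thresholding the flow magnitude to mark moving regions — can be sketched with a minimal brute-force block-matching estimator. This is an illustrative toy, not the paper's method: the frames, block size, and search range are all assumptions, and real systems use dense optical flow (e.g. Lucas–Kanade or Farnebäck) rather than exhaustive matching.

```python
import numpy as np

def block_flow(prev, curr, block=8, search=3):
    """Brute-force block matching: for each block x block patch of
    `prev`, find the displacement (dy, dx) within +/- `search` pixels
    that minimises the sum of absolute differences against `curr`."""
    h, w = prev.shape
    flow = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = prev[y:y + block, x:x + block]
            best_sad, best_d = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue
                    sad = np.abs(curr[yy:yy + block, xx:xx + block] - ref).sum()
                    # Prefer the smaller displacement on ties, so static
                    # textureless patches keep a zero flow vector.
                    if (sad, dy * dy + dx * dx) < (best_sad, best_d[0] ** 2 + best_d[1] ** 2):
                        best_sad, best_d = sad, (dy, dx)
            flow[by, bx] = best_d
    return flow

# Two synthetic 32x32 frames: a textured 8x8 patch moves 2 px down, 1 px right.
rng = np.random.default_rng(0)
patch = rng.random((8, 8)) + 1.0           # distinct, textured foreground
prev = np.zeros((32, 32)); prev[8:16, 8:16] = patch
curr = np.zeros((32, 32)); curr[10:18, 9:17] = patch

flow = block_flow(prev, curr)
print(flow[1, 1].tolist())                 # flow vector of the moving block: [2, 1]

# Post-processing sketch: threshold the flow magnitude to mark moving blocks.
# (Textureless regions are ambiguous -- the aperture problem -- so practical
# pipelines combine this with frame differencing and morphological cleanup.)
moving = np.linalg.norm(flow, axis=-1) > 0
```

The final thresholding step stands in for the post-processing stage the abstract highlights: the flow field itself only says *where* pixels moved, and a separate decision rule is needed to mark the object of interest.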