Preliminary Literature Review
Object detection and classification have long been fields of extensive research, and a large number of algorithms have been proposed. Different authors have proposed goal recognition, detection, and positioning techniques as follows. Lowe (1999) explained a method of single-object recognition using a set of local feature templates, which can be used with corner detectors and filters. He later verified planar object recognition using the SIFT feature (Lowe, 2004), which provides matching under affine geometric alignment. By extracting object outlines with background subtraction, Lowe (2001) showed it is possible to identify 3D objects more accurately than with affine models, while remaining robust to occlusion and changes in illumination.
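To make the corner detectors mentioned above concrete, the following is a minimal NumPy sketch of the Harris corner response (a standard corner detector; the toy image, window size, and parameter k are illustrative assumptions, not values from this proposal):

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response R = det(M) - k * trace(M)^2 per pixel,
    where M is the structure tensor built from image gradients."""
    # Image gradients via central differences (axis 0 = y, axis 1 = x)
    Iy, Ix = np.gradient(img.astype(float))
    # Products of gradients (entries of the structure tensor M)
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):
        # Crude 3x3 window sum with edge padding
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace ** 2

# Toy image: a bright square on a dark background; the response
# peaks at the square's four corners and is negative along edges.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
R = harris_response(img)
corner = np.unravel_index(np.argmax(R), R.shape)  # a corner of the square
```

Edges have one dominant gradient direction, so det(M) stays near zero there; only true corners, where both gradient directions are strong, give a large positive response.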
The bag of keypoints given by Csurka et al. (2004) is another well-known feature-based object recognition technique. It is an analogy to learning methods that use the bag-of-words representation for text categorization, quantizing local image features as "visual words". The method addresses three main matters. The first is representation: how object categories are represented, based on appearance alone or on both position and appearance. The second is learning: how to train a classifier from the given training data. The third is recognition: comparing the stored objects or features against a test image's words and making a category decision. Unlike text, however, an image does not directly provide the words needed for recognition, which is why feature quantization is required.
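The quantization step of this pipeline can be sketched as follows: each local descriptor is assigned to its nearest "visual word", and the image is represented by the resulting word histogram, which would then feed a classifier. The 2-D descriptors and 3-word vocabulary here are toy assumptions; in practice the vocabulary comes from k-means clustering over training descriptors such as SIFT.

```python
import numpy as np

def bow_histogram(descriptors, vocabulary):
    """Quantize local descriptors to their nearest visual word and
    return a normalized word-frequency histogram (the image's
    bag-of-keypoints representation)."""
    # Squared Euclidean distance from every descriptor to every word
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)  # nearest visual word for each descriptor
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()   # normalize by the descriptor count

# Toy example: a 3-word vocabulary in a 2-D descriptor space.
vocab = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
desc = np.array([[0.1, 0.2], [9.8, 0.1], [9.9, -0.2], [0.2, 9.7]])
h = bow_histogram(desc, vocab)  # -> [0.25, 0.5, 0.25]
```

The histogram plays the role of a document's word counts in text categorization: two images with similar feature statistics get similar histograms regardless of where in the image the features occur.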
Application of the Method to the Problem to Be Solved
Features and Control Points. The features to be tracked are natural facial features such as shape, position, size, lines, edges, and contours. The control points, which define the global and local motion, lie on the edges of these natural facial features.
Hence, the edges of these natural facial features are sufficient for tracking features across the sequence of images: the edges are the features, and the corners are the control points. Each feature is represented by a spline whose internal energy applies a smoothness constraint and pushes the spline toward image features such as lines, edges, and contours.
The internal spline energy (13) is represented as

S_int = ( α(s)|v_s(s)|² + β(s)|v_ss(s)|² ) / 2

where α(s) weights the first-order term and β(s) the second-order term constituting the internal energy, v(s) = (x(s), y(s)) is the parametric representation of the spline, v_s = dv/ds, and v_ss = d²v/ds². Discrete formulas are used when defining splines that correspond to features in the image: the expression above is discretized by approximating the derivatives with finite differences and converting it to vector notation. The shape of the spline is made to match the edge of a feature by assigning different weights to the α and β factors.
The discrete internal spline energy (13) of this feature is

S_int = Σ_i [ α|v_i − v_{i−1}|²/(2h²) + β|v_{i−1} − 2v_i + v_{i+1}|²/(2h⁴) ]

Also v_0 = v_n, since the contours are defined as closed contours.
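As a check on the discrete formula above, the following sketch evaluates the internal energy of a closed contour in NumPy. The contour points and the α, β, and h values are illustrative assumptions; the closure condition v_0 = v_n is handled by wrapping the index with np.roll.

```python
import numpy as np

def internal_energy(v, alpha, beta, h):
    """Discrete internal spline energy of a closed contour v (n x 2 array):
    sum over i of  alpha*|v_i - v_{i-1}|^2 / (2h^2)
                 + beta*|v_{i-1} - 2v_i + v_{i+1}|^2 / (2h^4)."""
    prev = np.roll(v, 1, axis=0)   # v_{i-1}, wrapping around the contour
    nxt = np.roll(v, -1, axis=0)   # v_{i+1}
    first = ((v - prev) ** 2).sum(axis=1)            # |v_i - v_{i-1}|^2
    second = ((prev - 2 * v + nxt) ** 2).sum(axis=1)  # |v_{i-1} - 2v_i + v_{i+1}|^2
    return (alpha * first / (2 * h ** 2)
            + beta * second / (2 * h ** 4)).sum()

# Toy closed contour: 32 points on a unit circle.
n = 32
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
e = internal_energy(circle, alpha=1.0, beta=1.0, h=1.0)
```

Because both terms are quadratic in the point coordinates, shrinking the contour lowers the energy; this is why the internal energy alone would collapse the snake, and the image forces from edges are needed to balance it.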
Task              Sub Task                        Member                Expected Finish Date
Project Proposal  Introduction                    Jeremy Teng Kai Wen   5/7/2018
                  Problem Statement & Objective   Teo Yee Chuang
                  Literature Review               Wong Chee Yoong
                  Methodology                     All members

Reference
Kaneko, M., Koike, A. & Hatori, Y. (1991) Picture Coding Symp. 91.
Minami, T., So, I., Mizuno, T. & Nakamura, O. (1990) Picture Coding Symp. 90.
Nakaya, Y., Aizawa, K. & Harashima, H. (1990) Picture Coding Symp. 90.
Fukuhara, T., Asai, K. & Murakami, T. (1990) Picture Coding Symp. 90.
Ekman, P. & Friesen, W. V. (1977) Facial Action Coding System (Consulting Psychologists Press, Palo Alto, CA).
Pearson, D. E. (1990) Image Commun. 2.4, 377-396.
Aizawa, K., Harashima, H. & Saito, T. (1989) Signal Processing: Image Commun. 1, 139-152.
Lowe, D. G. (1999) "Object recognition from local scale-invariant features," International Conference on Computer Vision, Corfu, Greece, pp. 1150-1157.
Lowe, D. G. (2004) "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision 60 (2), 91-110.
Csurka, G., Dance, C., Fan, L., Willamowski, J. & Bray, C. (2004) "Visual categorization with bags of keypoints," Workshop on Statistical Learning in Computer Vision, ECCV 1 (1-22), 1-2.