DSpace Repository

Computational models of visual features: from proto-objects to object categories

dc.contributor.advisor Samaras, Dimitris en_US
dc.contributor.advisor Zelinsky, Gregory J en_US
dc.contributor.author Yu, Chen-Ping en_US
dc.contributor.other Department of Computer Science en_US
dc.date.accessioned 2017-09-20T16:52:18Z
dc.date.available 2017-09-20T16:52:18Z
dc.date.issued 2016-12-01 en_US
dc.identifier.uri http://hdl.handle.net/11401/77257 en_US
dc.description 141 pg. en_US
dc.description.abstract Human visual perception is a complex system that excels at tasks such as object recognition and localization, face detection, object segmentation, and action classification. While we perform these everyday tasks with ease, little is known about the underlying processes of the human visual system. In contrast, computer vision is a field of research that strives for better performance on these same tasks, relying on carefully designed statistical machine-learning theories. As computer vision and machine learning have matured over the past decade, an abundance of methods and theories has become available for modeling human visual perception, which may shed light on the underlying processes that make biological visual perception possible. This dissertation presents novel computational models for two important problems in understanding human visual perception: visual clutter perception using proto-objects, and categorical search with category-consistent features. Visual clutter, a global percept defined as a "crowded, disorderly" appearance, affects aspects of our lives ranging from object detection to aesthetics, yet relatively little effort has been made to model this ubiquitous percept. Our approach models clutter as the number of proto-objects segmented from an image, with proto-objects defined as groupings of superpixels that are similar in low-level features. The proto-object model outperforms all existing models and even a behavioral object-segmentation ground truth, indicating that the number of proto-objects in an image affects clutter perception more than the number of objects or the complexity of features. At a more local scope within the visual field, object category recognition requires one to identify the correct category of an object in an image.
To better understand the human object recognition process, we introduce a generative model of category representation, category-consistent features (CCFs), learned from images of category exemplars. The CCF model extracts category-representative information from SIFT Bag-of-Words (BoW) models and is able to predict human behavior in a categorical search task. Finally, we introduce a ventral-stream-inspired deep convolutional neural network (VsNet) and a convolutional version of the HMAX model (Deep-HMAX), and analyze them against the AlexNet baseline under the representational similarity analysis (RSA) framework. The results show that the two biologically-inspired models achieve higher object classification accuracies, and that layer-wise representations are more similar between the two biologically-inspired models than between either model and the baseline. en_US
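The proto-object clutter model described in the abstract can be illustrated with a minimal sketch. This is a hypothetical toy version, not the dissertation's actual pipeline: it uses fixed grid cells as stand-in superpixels and merges 4-connected neighbors with similar mean colors, counting the resulting groups as the clutter estimate. The function name, cell size, and merge threshold are illustrative assumptions.

```python
import numpy as np

def clutter_estimate(img, cell=8, thresh=30.0):
    """Toy proto-object clutter score (illustrative sketch only).

    Partitions the image into fixed grid cells as stand-in superpixels,
    then merges adjacent cells whose mean colors are similar; the number
    of resulting groups is the proto-object count, i.e. the clutter estimate.
    """
    h, w = img.shape[:2]
    gh, gw = h // cell, w // cell
    # Mean color of each grid cell (the "superpixel" feature vector).
    feats = img[:gh * cell, :gw * cell].reshape(gh, cell, gw, cell, -1).mean(axis=(1, 3))

    # Union-find over grid cells.
    parent = list(range(gh * gw))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    # Merge 4-connected neighbors whose mean colors differ by less than thresh.
    for i in range(gh):
        for j in range(gw):
            for ni, nj in ((i + 1, j), (i, j + 1)):
                if ni < gh and nj < gw:
                    if np.linalg.norm(feats[i, j] - feats[ni, nj]) < thresh:
                        union(i * gw + j, ni * gw + nj)

    # Each surviving root is one proto-object.
    return len({find(k) for k in range(gh * gw)})
```

On a uniform image this returns 1; an image split into regions of distinct colors yields one group per region, so more heterogeneous images score as more cluttered.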
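The RSA comparison mentioned in the abstract can likewise be sketched: build a representational dissimilarity matrix (RDM) per model layer by taking 1 - Pearson correlation between activation patterns for every stimulus pair, then correlate the upper triangles of two RDMs. This is a dependency-free numpy sketch, not the dissertation's analysis code; it uses Pearson throughout (Spearman is also common), and the function names are assumptions.

```python
import numpy as np

def rdm(acts):
    """Representational dissimilarity matrix for one model layer:
    1 - Pearson correlation between the activation patterns of every
    pair of stimuli. `acts` has shape (n_stimuli, n_features)."""
    return 1.0 - np.corrcoef(acts)

def rsa_score(acts_a, acts_b):
    """Second-order similarity of two layers/models: correlate the
    upper triangles of their RDMs. Spearman rank correlation is also
    common here; plain Pearson keeps the sketch dependency-free."""
    iu = np.triu_indices(acts_a.shape[0], k=1)
    return np.corrcoef(rdm(acts_a)[iu], rdm(acts_b)[iu])[0, 1]
```

Because the comparison happens between RDMs rather than raw activations, layers with different dimensionalities (e.g. a VsNet layer vs. an AlexNet layer) can be compared as long as they are probed with the same stimulus set.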
dc.description.sponsorship This work is sponsored by the Stony Brook University Graduate School in compliance with the requirements for completion of degree. en_US
dc.format Monograph en_US
dc.format.medium Electronic Resource en_US
dc.language.iso en_US en_US
dc.publisher The Graduate School, Stony Brook University: Stony Brook, NY. en_US
dc.subject.lcsh Computer science -- Cognitive psychology en_US
dc.subject.other clustering, computational model, computer vision, deep learning, machine learning, proto-object en_US
dc.title Computational models of visual features: from proto-objects to object categories en_US
dc.type Dissertation en_US
dc.mimetype application/pdf en_US
dc.contributor.committeemember Nguyen, Minh Hoai en_US
dc.contributor.committeemember Konkle, Talia en_US
