About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
211

Exploring Biologically-Inspired Interactive Networks for Object Recognition

Saifullah, Mohammad January 2011 (has links)
This thesis deals with biologically-inspired interactive neural networks for the task of object recognition. Such networks offer an interesting alternative to traditional image processing techniques. Although these networks are very powerful classification tools, they are difficult to handle due to their bidirectional interactivity, which is one of the main reasons they generalize poorly to novel objects. Generalization is a crucial property for any object recognition system, as it is impractical for a system to learn all instances of an object class before classifying. In this thesis, we investigated the workings of an interactive neural network by fine-tuning different structural and algorithmic parameters. The performance of the networks was evaluated by analyzing the generalization ability of the trained network on novel objects. Furthermore, the interactivity of the network was used to simulate focus of attention during object classification. Selective attention is an important visual mechanism for object recognition and provides an efficient way of using the limited computational resources of the human visual system. Unlike most previous work in image processing, this thesis treats attention as an integral part of object processing: the attention focus is computed within the same network and in parallel with object recognition. As a first step, a study into the efficacy of Hebbian learning as a feature extraction method was conducted. In a second study, the receptive field size in the network, which controls the size of the extracted features as well as the number of layers in the network, was varied and analyzed to find its effect on generalization. In a continuation study, a comparison was made between learnt (Hebbian learning) and hard-coded feature detectors. 
In the last study, attention focus was computed using the interaction between bottom-up and top-down activation flow, with the aim of handling multiple objects in the visual scene. On the basis of the results and analysis of our simulations, we found that the generalization performance of the bidirectional hierarchical network improves when a small amount of Hebbian learning is added to otherwise error-driven learning. We also conclude that the optimal size of the receptive fields in our network depends on the object of interest in the image; moreover, each receptive field must contain some part of the object in the input image. We also found that networks using hard-coded feature extraction perform better than networks that use Hebbian learning to develop feature detectors. In the last study, we successfully demonstrated the emergence of visual attention within an interactive network that handles more than one object in the input field. Our simulations show how bidirectional interactivity directs the attention focus towards the required object by using both bottom-up and top-down effects. In general, the findings of this thesis increase understanding of how biologically-inspired interactive networks work. Specifically, the studied effects of the structural and algorithmic parameters that are critical for generalization will help in developing these and similar networks, leading to improved performance on object recognition tasks. The results from the attention simulations can be used to increase the ability of such networks to deal with multiple objects in an efficient and effective manner.
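The Hebbian feature extraction examined in the first study can be illustrated with Oja's stabilized variant of the Hebbian rule (a generic NumPy sketch, not the thesis's actual network or learning algorithm): repeated updates align a unit's weight vector with the dominant direction of variation in its inputs, which is the sense in which Hebbian learning "extracts features".

```python
import numpy as np

def oja_update(w, x, lr=0.01):
    """One Oja-rule step: the Hebbian term lr*y*x plus a decay that keeps ||w|| near 1."""
    y = float(np.dot(w, x))           # post-synaptic activation
    return w + lr * y * (x - y * w)   # Hebbian growth minus normalizing decay

rng = np.random.default_rng(0)
# Toy inputs whose variance is dominated by the first coordinate.
data = rng.normal(size=(500, 4)) @ np.diag([3.0, 1.0, 0.5, 0.1])

w = rng.normal(size=4)
w /= np.linalg.norm(w)
for x in data:
    w = oja_update(w, x)

print(np.argmax(np.abs(w)))  # → 0: the weights align with the dominant input direction
```

Plain Hebbian updates (without the decay term) grow without bound, which is one practical reason learning rules of this family are combined with other constraints or, as in the thesis, mixed with error-driven learning.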
212

Hand gesture recognition using sEMG and deep learning

Nasri, Nadia 17 June 2021 (has links)
213

Object Recognition Using Scale-Invariant Chordiogram

Tonge, Ashwini 05 1900 (has links)
This thesis describes an approach to object recognition using the chordiogram shape-based descriptor. Global shape representations are highly susceptible to clutter generated by the background or other irrelevant objects in real-world images. To overcome this problem, we extract a precise object shape using superpixel segmentation, perceptual grouping, and connected components. The chordiogram descriptor is based on the geometric relationships of chords generated from pairs of boundary points of an object. It captures holistic properties of the shape and has proven suitable for object detection and digit recognition; additionally, it is translation invariant and robust to shape deformations. In spite of these excellent properties, the chordiogram is not scale invariant. To this end, we propose scale-invariant chordiogram descriptors, aiming for similar performance before and after applying scale invariance. Our experiments show that we achieve similar performance with and without scale invariance for silhouettes and real-world object images. We also show experiments at different scales to confirm that we obtain scale invariance for the chordiogram.
214

Early-life trauma alters hippocampal function during an episodic memory task in adulthood

Janetsian-Fritz, Sarine S. 02 May 2017 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Early-life trauma is a risk factor for a number of neuropsychiatric disorders, including schizophrenia (SZ) and depression. Animal models have played a critical role in understanding how early-life trauma may evoke changes in behavior and in biomarkers of altered brain function that resemble these neuropsychiatric disorders. However, since SZ is a complex condition with a multifactorial etiology, it is difficult to model the breadth of this condition in a single animal model. Considering this, it is necessary to develop rodent models with clearly defined subsets of the pathologies observed in the human condition and their developmental trajectory. Episodic memory is among the cognitive deficits observed in SZ. Theta (6-10 Hz), low gamma (30-50 Hz), and high gamma (50-100 Hz) frequencies in the hippocampus (HC) are critical for the encoding and retrieval of memory. Also, theta-gamma comodulation, defined as correlated fluctuations in power between these frequencies, may provide a mechanism for coding episodic sequences by coordinating neuronal activity at the timescales required for memory encoding and retrieval. Given that patients with SZ have impaired recognition memory, the overall objective of these experiments was to assess local field potential (LFP) recordings in the theta and gamma range from the dorsal HC during a recognition memory task in an animal model that exhibits a subclass of symptoms resembling SZ. In Aim 1, LFPs were recorded from the HC to determine whether rats that were maternally deprived (MD) for 24 hours on postnatal day (PND) 9 had altered theta and high/low gamma power compared to sham rats during novel object recognition (NOR). Brain activity was recorded while animals underwent NOR on PND 70, 74, and 78. In Aim 2, theta-low gamma and theta-high gamma comodulation in the HC were assessed during NOR in sham and MD animals. 
Furthermore, measures of maternal care were taken to assess whether high or low licking/grooming behaviors influenced recognition memory. It was hypothesized that MD animals would have impaired recognition memory and lower theta and low/high gamma power during interaction with both objects compared to sham animals. It was further hypothesized that sham animals would show higher theta-gamma comodulation during novel object exploration than during familiar object exploration, and higher comodulation than the MD group. Measures of weight, locomotor activity, and thigmotaxis were also assessed. MD animals were impaired on the NOR task and showed no change in theta or low/high gamma power or theta-gamma comodulation when interacting with the novel or familiar object, in both successful and unsuccessful trials. In sham animals, however, higher theta and gamma power and theta-gamma comodulation were observed depending on the object being explored and on whether the trial was successful. These data indicate altered functioning of the HC following MD and a dissociation between brain activity and behavior in this group, supporting the view that early-life trauma can induce long-lasting cognitive and physiological impairments. In conclusion, these data identify a model of early-life stress with translational potential, given the points of contact between human studies and the MD model. Furthermore, they provide a set of tools that could be used to further explore how these altered neural mechanisms may influence cognition and behavior.
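Theta-gamma comodulation as defined above — correlated fluctuations in band power — can be sketched with SciPy on synthetic data (a simplified illustration; real LFP analyses involve more careful filtering, windowing, and statistics):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power(sig, lo, hi, fs):
    """Instantaneous power envelope in a band: Butterworth band-pass + Hilbert envelope."""
    b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, sig))) ** 2

def comodulation(sig, fs, band1=(6, 10), band2=(30, 50)):
    """Comodulation as the correlation of theta and low-gamma power fluctuations."""
    return np.corrcoef(band_power(sig, *band1, fs), band_power(sig, *band2, fs))[0, 1]

fs = 500
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
env = 1 + 0.8 * np.sin(2 * np.pi * 0.5 * t)   # shared slow amplitude envelope
# Coupled: theta and gamma components wax and wane together.
coupled = env * np.sin(2 * np.pi * 8 * t) + env * np.sin(2 * np.pi * 40 * t)
# Uncoupled: constant-amplitude theta plus independent broadband noise.
uncoupled = np.sin(2 * np.pi * 8 * t) + rng.standard_normal(len(t))

print(comodulation(coupled, fs) > comodulation(uncoupled, fs))  # → True
```

Band limits here follow the abstract's definitions (theta 6-10 Hz, low gamma 30-50 Hz); the sampling rate and signal construction are illustrative assumptions.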
215

Using EEG to Examine the Top Down Effects on Visual Object Processing

Borders, Joseph D. January 2019 (has links)
No description available.
216

Contributions to 3D object recognition and 3D hand pose estimation using deep learning techniques

Gomez-Donoso, Francisco 18 September 2020 (has links)
In this thesis, a study of two burgeoning fields in artificial intelligence is carried out. The first part of the present document is about 3D object recognition methods. Object recognition in general is about giving an intelligent system the ability to understand what objects appear in its input data. Any robot, from industrial robots to social robots, could benefit from such a capability to improve its performance and carry out high-level tasks. In fact, this topic has been studied extensively, and some object recognition methods in the state of the art outperform humans in terms of accuracy. Nonetheless, these methods are image-based; that is, they focus on recognizing visual features. This can be a problem in some contexts, as there exist objects that look like other, different objects: for instance, a social robot that recognizes a face in a picture, or an intelligent car that recognizes a pedestrian on a billboard. A potential solution to this issue is to involve three-dimensional data, so that the systems focus not on visual features but on topological features. Thus, in this thesis, a study of 3D object recognition methods is carried out. The approaches proposed in this document, which take advantage of deep learning methods, take point clouds as input and are able to provide the correct category. We evaluated the proposals on a range of public challenges, datasets, and real-life data with high success. The second part of the thesis is about hand pose estimation. This is also an interesting topic that focuses on providing the hand's kinematics. A range of systems, from human-computer interaction and virtual reality to social robots, could benefit from such a capability: for instance, to interface with a computer and control it through seamless hand gestures, or to interact with a social robot that is able to understand human non-verbal communication. Thus, in the present document, hand pose estimation approaches are proposed. 
It is worth noting that the proposals take color images as input and are able to provide the 2D and 3D hand pose in the image plane and in Euclidean coordinate frames. Specifically, the hand poses are encoded as a collection of points representing the joints of a hand, so that the full hand pose can easily be reconstructed from them. The methods are evaluated on custom and public datasets, and integrated into a robotic hand teleoperation application with great success.
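The abstract does not name a specific architecture for classifying point clouds, but a common family of approaches is the PointNet style: shared per-point features aggregated by a symmetric (order-independent) pooling function. The defining property — the prediction does not depend on point order — can be sketched with random, untrained weights (a hypothetical toy model, not the thesis's networks):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 32))    # shared per-point feature weights (untrained)
W2 = rng.normal(size=(32, 10))   # linear classifier over 10 toy categories

def classify(cloud):
    """PointNet-style forward pass: per-point ReLU features, then a
    permutation-invariant max pool, then a linear classifier."""
    feats = np.maximum(cloud @ W1, 0)   # (n_points, 32) shared features
    pooled = feats.max(axis=0)          # order-independent aggregation
    return int((pooled @ W2).argmax())

cloud = rng.normal(size=(128, 3))       # 128 points with xyz coordinates
shuffled = cloud[rng.permutation(128)]  # same cloud, different point order
print(classify(cloud) == classify(shuffled))  # → True
```

Because a point cloud is an unordered set, this invariance is exactly what makes such architectures a natural fit for the topological (rather than visual) features the thesis argues for.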
217

Does Visual Awareness of Object Categories Require Attention?

Miller, Timothy S 01 January 2013 (has links) (PDF)
A key question in the investigation of awareness is whether it can occur without attention, or vice versa. Most evidence to date suggests that attention is necessary for awareness of visual stimuli, but that attention can sometimes be present without corresponding awareness. However, there has been some evidence that natural scenes in general, and scenes including animals in particular, may not require visual attention for a participant to become aware of their gist. One relatively recent paradigm for providing evidence of animal awareness without attention (Li, VanRullen, Koch, & Perona, 2002) requires participants to perform an attention-demanding primary task while also determining, as a secondary task, whether a photograph displayed briefly in the periphery contains an animal. However, Cohen, Alvarez, and Nakayama (2011) questioned whether the primary task in these experiments used up all the available attentional capacity. Their experiments used a more demanding primary task to ensure that attention really was not available for the image recognition task, and the results indicated that attention was contributing to the animal detection task. However, in addition to changing the primary task, they displayed the stimuli for the two tasks superimposed on each other in the same area of the visual field. The experiment reported here is similar to the one by Cohen et al., but with the stimuli for the two tasks separated spatially. Animal recognition with separated stimuli was impaired by the additional attention-demanding task, leaving no good evidence that it is possible to recognize natural scenes without attention, and in turn removing this support for awareness without attention.
218

Semi-supervised Learning for Real-world Object Recognition using Adversarial Autoencoders

Mittal, Sudhanshu January 2017 (has links)
For many real-world applications, labeled data can be costly to obtain. Semi-supervised learning methods make use of abundantly available unlabeled data along with a few labeled samples. Most recent work on semi-supervised learning for image classification reports performance on standard machine learning datasets such as MNIST and SVHN. In this work, we propose a convolutional adversarial autoencoder architecture for real-world data. We demonstrate the application of this architecture to semi-supervised object recognition. We show that our approach can learn from limited labeled data and outperform a fully supervised CNN baseline by about 4% on real-world datasets. We also achieve competitive performance on the MNIST dataset compared to state-of-the-art semi-supervised learning techniques. To spur research in this direction, we compiled two real-world datasets, an Internet (WIS) dataset and a Real-world (RW) dataset, each consisting of more than 20K labeled samples of small household objects belonging to ten classes. We also show a possible application of this method to online learning in robotics.
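An adversarial autoencoder combines three objectives: a reconstruction loss, a discriminator loss that distinguishes samples from a chosen prior from encoded data, and a generator loss that pushes the encoder to fool the discriminator. The sketch below computes these three terms with toy linear stand-ins for the networks (the thesis uses convolutional networks; everything here — shapes, the linear maps, the Gaussian prior — is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
enc = rng.normal(scale=0.1, size=(784, 8))    # toy linear "encoder"
dec = rng.normal(scale=0.1, size=(8, 784))    # toy linear "decoder"
disc = rng.normal(scale=0.1, size=(8, 1))     # toy linear "discriminator"

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def aae_losses(x):
    """The three adversarial-autoencoder objectives on one batch:
    reconstruction, discriminator (prior vs. encoded), and generator."""
    z = x @ enc                                    # encode the batch
    recon = np.mean((x - z @ dec) ** 2)            # reconstruction loss
    prior = rng.normal(size=z.shape)               # samples from the latent prior
    d_real = sigmoid(prior @ disc)                 # discriminator on prior samples
    d_fake = sigmoid(z @ disc)                     # discriminator on encodings
    d_loss = -np.mean(np.log(d_real + 1e-9) + np.log(1 - d_fake + 1e-9))
    g_loss = -np.mean(np.log(d_fake + 1e-9))       # encoder tries to fool the discriminator
    return recon, d_loss, g_loss

x = rng.normal(size=(32, 784))                     # a toy batch of flattened images
r, d, g = aae_losses(x)
print(all(v > 0 for v in (r, d, g)))  # → True
```

In the semi-supervised setting, a supervised classification loss on the few labeled samples is added on top of these three terms, which is how the architecture exploits the unlabeled data.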
219

Ceilbot Development and Integration

Getahun, Tesfamichael Agidie January 2014 (has links)
Today's mobile robots struggle with challenges related to localization, power supply, and mobility in real-world environments full of obstacles. At the same time, the demand for service robots for domestic applications has been growing, and predictions show that it will continue to grow. To meet this demand and fulfill expectations, those challenges need to be addressed. This thesis presents the development of a ceiling-mounted robot known as the Ceilbot: a mobile service robot that runs on a track attached to the ceiling. This means the robot operates in a structured environment with a continuous power supply, simplifying some of the issues mentioned above. The development of the Ceilbot includes a simplified DC motor controller, object recognition, and an easy-to-use graphical user interface. The developed motor controller gives the user the flexibility to change the control parameters, and produces deterministic output with high repeatability compared to a regular proportional-integral (PI) controller. The designed user interface simplifies interaction with the Ceilbot by allowing the user to send commands and by displaying status parameters for monitoring. In order to have a complete robot system for demonstration purposes, a simple manipulator using two servomotors was also developed.
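For reference, the "regular" PI controller the thesis compares against can be sketched as a textbook discrete loop driving a first-order motor model (a generic illustration with assumed gains and time constant, not the thesis's simplified controller):

```python
class PIController:
    """Discrete PI control: u = Kp*e + Ki * integral(e) dt."""

    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt      # accumulate error (integral action)
        return self.kp * error + self.ki * self.integral

# First-order motor model: speed approaches the control input with time constant tau.
pi = PIController(kp=2.0, ki=5.0, dt=0.01)
speed, tau, dt = 0.0, 0.1, 0.01
for _ in range(2000):                          # simulate 20 seconds
    u = pi.step(1.0, speed)                    # track a unit speed setpoint
    speed += dt * (u - speed) / tau            # explicit Euler plant update

print(abs(speed - 1.0) < 0.01)  # → True: integral action removes steady-state error
```

The integral term is what drives the steady-state error to zero; a pure proportional controller with the same plant would settle slightly below the setpoint.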
220

Object registration in semi-cluttered and partial-occluded scenes for augmented reality

Gao, Q.H., Wan, Tao Ruan, Tang, W., Chen, L. 26 November 2018 (has links)
This paper proposes a stable and accurate object registration pipeline for markerless augmented reality applications. We present two novel algorithms for object recognition and matching that improve registration accuracy for the model-to-scene transformation via point cloud fusion. While the first algorithm effectively deals with simple scenes with few object occlusions, the second algorithm handles cluttered scenes with partial occlusions for robust real-time object recognition and matching. The computational framework includes a locally supported Gaussian weight function to enable repeatable detection of 3D descriptors. We apply bilateral filtering and outlier removal to preserve the edges of the point cloud and remove interference points in order to increase matching accuracy. Extensive experiments have been carried out to compare the proposed algorithms with the four most used methods. Results show improved performance of the algorithms in terms of computational speed, camera tracking, and object matching errors in semi-cluttered and partially occluded scenes. / Shanxi Natural Science and Technology Foundation of China (grant numbers 2016JZ026 and 2016KW-043).
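The outlier-removal step in such pipelines is commonly a statistical filter over nearest-neighbor distances. A minimal NumPy sketch (a generic illustration with assumed parameters, not the paper's exact method; libraries such as Open3D and PCL provide equivalent built-in filters):

```python
import numpy as np

def remove_outliers(cloud, k=8, std_ratio=2.0):
    """Statistical outlier removal: drop points whose mean distance to their
    k nearest neighbors is far above the cloud-wide average."""
    d = np.linalg.norm(cloud[:, None, :] - cloud[None, :, :], axis=2)
    d.sort(axis=1)                              # row-wise ascending distances
    mean_knn = d[:, 1:k + 1].mean(axis=1)       # skip the zero self-distance
    keep = mean_knn < mean_knn.mean() + std_ratio * mean_knn.std()
    return cloud[keep]

rng = np.random.default_rng(2)
surface = rng.normal(scale=0.05, size=(200, 3))   # dense cluster: the object surface
noise = rng.uniform(3, 5, size=(5, 3))            # isolated interference points
cleaned = remove_outliers(np.vstack([surface, noise]))
print(cleaned.shape[0])  # → 200: only the surface points survive
```

The brute-force distance matrix is O(n²) and only suitable for small clouds; production pipelines use k-d trees for the neighbor search.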
