71.
An improved model based segmentation approach and its application to volumetric study of subcortical structures in MRI brain data. Liu, Yuan. 05 August 2010.
No description available.
72.
Image Segmentation and Range Estimation Using a Moving-aperture Lens. Subramanian, Anbumani. 07 May 2001.
Given 2D images, grouping the image points into logical objects (segmentation) and determining their locations in the scene (range estimation) remain major challenges in computer vision. Despite decades of research, no single solution has been found. Through our research we have demonstrated that a possible solution is to use a moving-aperture lens. This lens introduces small, repeating movements of the camera center so that objects appear to translate in the image by an amount that depends on their distance from the plane of focus. Our novel method applies optical flow techniques to an image sequence captured with a video camera fitted with a moving-aperture lens. For a stationary scene, optical flow magnitude and direction are directly related to an object's three-dimensional distance and location relative to the observer. Exploiting this information, we have successfully extracted objects at different depths and estimated the locations of objects in the scene with respect to the plane of focus. Our work therefore demonstrates passive range estimation, without emitting any energy into the environment. Other potential applications include video compression, 3D video broadcast, teleconferencing and autonomous vehicle navigation. / Master of Science
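As a hedged illustration of the idea (not the thesis implementation), the sketch below applies OpenCV's Farnebäck optical flow to two consecutive frames from such a camera and thresholds the flow magnitude: for a stationary scene, pixels far from the plane of focus appear to move more, so the mask groups points by depth. The threshold value and file names are illustrative assumptions.

```python
# Minimal sketch: depth-based grouping from optical flow between two
# grayscale frames of a stationary scene taken through a moving aperture.
import cv2
import numpy as np

def segment_by_flow(frame_a, frame_b, mag_threshold=1.0):
    """Group pixels by apparent motion between two 8-bit grayscale frames."""
    flow = cv2.calcOpticalFlowFarneback(
        frame_a, frame_b, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # Pixels near the plane of focus barely move; distant ones move more.
    far_from_focus = mag > mag_threshold
    return far_from_focus, mag, ang

# Usage with two frames of the aperture sequence (hypothetical file names):
# a = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
# b = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
# mask, mag, ang = segment_by_flow(a, b)
```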
73.
Pro-environmental behaviour, locus of control and willingness to pay for environmental friendly products. Trivedi, Rohit; Patel, J.D.; Savalia, J.R. 07 January 2014.
Marketers have realized the importance of assessing consumers' willingness to pay (WTP) before introducing green products across different target audiences. The purpose of this paper is to examine the relative influence of consumers' pro-environmental behaviours (PEBs) and environmental locus of control (ELOC) on their WTP for green products.
The study sample consisted of 256 Indian consumers recruited through convenience sampling. A structured questionnaire was administered using well-established scales from previous research. Data were analysed with CFA and structural equation modelling to test the relationship of ELOC and PEB to WTP. Respondents were then clustered according to their PEB and ELOC, and the clusters' differential effect on WTP was examined with multivariate analysis of variance (MANOVA).
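A hedged sketch of this two-step analysis in Python, not the authors' code: the column names, the two-cluster choice and the set of dependent variables in the MANOVA formula are illustrative assumptions.

```python
# Cluster respondents on PEB and ELOC scores, then test whether the
# measured variables jointly differ across clusters with a MANOVA.
import pandas as pd
from sklearn.cluster import KMeans
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("survey.csv")   # hypothetical file of scale scores
X = df[["PEB", "ELOC"]]          # assumed column names

# Segment respondents by intensity of PEB and ELOC.
df["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Do PEB, ELOC and WTP jointly differ between the clusters?
maov = MANOVA.from_formula("PEB + ELOC + WTP ~ C(cluster)", data=df)
print(maov.mv_test())
```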
Findings of the study highlight that WTP for green products is significantly predicted by two variables, in the following order: PEB and ELOC. Results of the cluster analysis and MANOVA revealed that WTP differs significantly with the intensity of ELOC and PEB among Indian consumers.
It advances the body of knowledge centred on the interplay of PEB and ELOC with WTP for green products. Additional work is clearly required to consider the wide range of potentially relevant variables, such as brand image, prices, advertisements and product quality, to ensure the generalizability of the findings.
The hypotheses framed and tested, and the inferences made, can form the basis of a valuable toolkit for green marketers who plan their marketing and communications strategies to stimulate WTP by conveying a reason and motivation to act environmentally.
In this study, an understanding of WTP for green products is developed. The study fills a knowledge gap concerning the interplay of ELOC and PEB on WTP. It identifies consumer groups that display higher PEB and ELOC as the primary target audience for green product marketers.
74.
The EEG of the neonatal brain: classification of background activity. Löfhede, Johan. January 2009.
The brain requires a continuous supply of oxygen and nutrients, and even a short period of reduced oxygen supply can cause severe and lifelong consequences for the affected individual. The unborn baby is fairly robust, but there are of course limits also for these individuals. The most sensitive and most important organ is the brain. When the brain is deprived of oxygen, a process can start that ultimately may lead to the death of brain cells and irreparable brain damage. This process has two phases: one more or less immediate and one delayed. There is a window of time of up to 24 hours where action can be taken to prevent the delayed secondary damage. One recently clinically available technique is to reduce the metabolism, and thereby stop the secondary damage in the brain, by cooling the baby.

It is important to be able to quickly diagnose hypoxic injuries and to follow the development of the processes in the brain. For this, the electroencephalogram (EEG) is an important tool. The EEG is a voltage signal that originates within the brain and that can be recorded easily and non-invasively at the bedside. The signals are, however, highly complex and require special competence to interpret, a competence that typically is not available at the intensive care unit, and particularly not continuously day and night. This thesis addresses the problem of automatic classification of neonatal EEG and proposes methods that could be used in bedside monitoring equipment for neonatal intensive care units.

The thesis is a compilation of six papers. The first four deal with the segmentation of pathological signals (burst suppression) from post-asphyctic full-term newborn babies. These studies investigate the use of various classification techniques, using both supervised and unsupervised learning. In paper V the scope is widened to include classification of pathological activity versus activity found in healthy babies, as well as application of the segmentation methods to the parts of the EEG signal that are found to be of the pathological type. The use of genetic algorithms for feature selection is also investigated. In paper VI the segmentation methods are applied to signals from pre-term babies to investigate the impact of a certain medication on the brain.

The results of this thesis demonstrate ways to improve the monitoring of the brain during intensive care of newborn babies. Hopefully the methods will someday be implemented in monitoring equipment and help to prevent permanent brain damage in post-asphyctic babies.
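As a hedged illustration of the burst-suppression segmentation task (not the thesis methods, which use trained classifiers on richer features), a classical baseline labels suppression wherever a smoothed amplitude envelope falls below a threshold. The sampling rate, window length and threshold below are illustrative assumptions.

```python
# Simplified baseline: burst/suppression labelling by envelope thresholding.
import numpy as np
from scipy.ndimage import uniform_filter1d

def burst_suppression_mask(eeg, fs=256, win_s=0.5, thresh_uv=10.0):
    """Return a boolean array: True where the signal looks suppressed."""
    envelope = uniform_filter1d(np.abs(eeg), size=int(win_s * fs))
    return envelope < thresh_uv  # low-amplitude stretches = suppression

# Usage on a single-channel recording in microvolts (hypothetical file):
# eeg = np.loadtxt("channel1.txt")
# suppressed = burst_suppression_mask(eeg)
# bursts = ~suppressed
```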
75.
Power, competition and regulation: the case of the UK brewing sector. Bobe, Jonathan Mark. January 1999.
This thesis explores the role of unequal power relationships between business enterprises in the UK brewing sector and how these asymmetries shape the dynamic and direction of change in patterns of geographical industrialisation. Power has, to date, remained a largely neglected concept in economic relationships as considered in economic geography. A new model of geographical industrialisation is developed in this thesis that focuses on capital:capital relations; incorporates the dynamic nature of enterprises and the networks of relations within which they are embedded, the asymmetry of power relations within and between enterprises, and the dynamic changes in market structure during periods of recession and restructuring. It further seeks to explore the relationship between stability and instability in the derivation of emerging patterns of geographical industrialisation. The model is based on the concept of circuits of power (Clegg, 1989), which has been successfully applied to economic geography over recent years (Taylor, 1995, 1996; Taylor and Hallsworth, 1996, 1999; Taylor et al., 1995). In this model, inequalities in power between enterprises establish the bases upon which competition can take place and go on to create the context within which social relationships are established and can develop. However, as currently specified, this approach neglects the collective agency of enterprises inherent in segmented economic sectors (Taylor and Thrift, 1982a, 1982b, 1983). By incorporating appropriate insights from the study of complexity, collective agency, the element of process within the circuits of power framework, can be more fully understood. In this way those processes that create instability and flux in enterprises, but which at the same time lead to periodic stabilisations, can be identified.

The thesis is divided into four parts. Part I makes explicit the limitations of current theories of geographical industrialisation (Chapters 1 and 2) and proposes a new model (Chapter 2), incorporating the concepts of circuits of power and complexity, that addresses these limitations. Part II of the thesis (Chapters 3, 4 and 5) tests the model against historical trajectories of change in the UK brewing sector, identifying six cycles of change since 1700. For each cycle, by applying the model, the processes that have instigated and promulgated change are made explicit. Distinct enterprise segmentations, associated with each period of relative stability during these cycles, are also identified. Part III of the thesis, through a questionnaire survey (Chapter 6) and a series of semi-structured interviews (Chapter 7), uses the model to examine the state of the UK brewing sector at the present time. Chapter 6 identifies contemporary enterprise segments active within the sector and the differential action of pressures upon these segments. In doing so, the path-dependent trajectories of change of enterprise segments, and the limitations imposed upon such trajectories, are made explicit. Chapter 7 considers, through the model, the day-to-day interactions of enterprise segments and how these interactions reinforce the negotiated inequalities inherent in asymmetrical power relations. Coping strategies adopted by enterprises during a period of instability are identified, and the relationship between the market and interpersonal relationships is made explicit.

It is concluded that the model proposed in this thesis provides a more realistic interpretation of changing patterns of geographical industrialisation than previous models.
76.
Generative probabilistic models for object segmentation. Eslami, Seyed Mohammadali. January 2014.
One of the long-standing open problems in machine vision is the task of ‘object segmentation’, in which an image is partitioned into two sets of pixels: those that belong to the object of interest, and those that do not. A closely related task is ‘parts-based object segmentation’, where additionally each of the object’s pixels is labelled as belonging to one of several predetermined parts. There is broad agreement that segmentation is coupled to the task of object recognition. Knowledge of the object’s class can lead to more accurate segmentations, and in turn accurate segmentations can be used to obtain higher recognition rates. In this thesis we focus on one side of this relationship: given the object’s class and its bounding box, how accurately can we segment it?

Segmentation is challenging primarily due to the huge amount of variability one sees in images of natural scenes. A large number of factors combine in complex ways to generate the pixel intensities that make up any given image. In this work we approach the problem by developing generative probabilistic models of the objects in question. Not only does this allow us to express notions of variability and uncertainty in a principled way, it also allows us to separate the problems of model design and inference.

The thesis makes the following contributions. First, we demonstrate an explicit probabilistic model of images of objects based on a latent Gaussian model of shape, which can be learned from images in an unsupervised fashion. Through experiments on a variety of datasets we demonstrate the advantages of explicitly modelling shape variability. We then focus on the task of constructing more accurate models of shape. We present a type of layered probabilistic model that we call a Shape Boltzmann Machine (SBM) for the task of modelling foreground/background (binary) and parts-based (categorical) shapes. We demonstrate that it constitutes the state of the art and characterises a ‘strong’ model of shape, in that samples from the model look realistic and it generalises to generate samples that differ from training examples. Finally, we demonstrate how the SBM can be used in conjunction with an appearance model to form a fully generative model of images of objects, and show how parts-based object segmentations can be obtained simply by performing probabilistic inference in this joint model. We apply the model to several challenging datasets and find that its performance is comparable to the state of the art.
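The SBM builds on the restricted Boltzmann machine. As a hedged illustration (not the author's model), the sketch below draws a binary shape mask from a plain RBM by block Gibbs sampling; the full SBM additionally uses locally connected, weight-shared layers and a second hidden layer. The weight file and mask size are hypothetical, and the weights are assumed already trained.

```python
# Block Gibbs sampling from a binary RBM over flattened shape masks.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_sample_shape(W, b_vis, b_hid, n_steps=100):
    """Draw one binary visible vector v by alternating h|v and v|h updates."""
    v = rng.integers(0, 2, size=b_vis.shape).astype(float)
    for _ in range(n_steps):
        h = (rng.random(b_hid.shape) < sigmoid(v @ W + b_hid)).astype(float)
        v = (rng.random(b_vis.shape) < sigmoid(h @ W.T + b_vis)).astype(float)
    return v

# Usage with hypothetical trained parameters for 32x32 masks:
# W = np.load("rbm_weights.npy")   # shape (1024, n_hidden)
# v = gibbs_sample_shape(W, np.zeros(1024), np.zeros(W.shape[1]))
# mask = v.reshape(32, 32)
```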
77.
Traffic and Road Sign Recognition. Fleyeh, Hasan. January 2008.
This thesis presents a system to recognise and classify road and traffic signs for the purpose of developing an inventory of them, which could assist highway engineers in their tasks of updating and maintaining signs. It uses images taken by a camera from a moving vehicle. The system is based on three major stages: colour segmentation, recognition, and classification.

Four colour segmentation algorithms are developed and tested: a shadow and highlight invariant algorithm, a dynamic threshold algorithm, a modification of de la Escalera's algorithm, and a fuzzy colour segmentation algorithm. All algorithms are tested using hundreds of images, and the shadow-highlight invariant algorithm is eventually chosen as the best performer because it is immune to shadows and highlights. It is also robust, having been tested in different lighting conditions, weather conditions, and times of day. A successful segmentation rate of approximately 97% was achieved using this algorithm.

Recognition of traffic signs is carried out using a fuzzy shape recogniser. Based on four shape measures (rectangularity, triangularity, ellipticity, and octagonality), fuzzy rules were developed to determine the shape of the sign. Among these shape measures, octagonality has been introduced in this research. The final decision of the recogniser is based on the combination of both the colour and shape of the sign. The recogniser was tested in a variety of conditions, giving an overall performance of approximately 88%.

Classification is undertaken using a Support Vector Machine (SVM) classifier, in two stages: classification of the rim's shape followed by classification of the interior of the sign. The classifier was trained and tested using binary images in addition to five different feature types: geometric moments, Zernike moments, Legendre moments, orthogonal Fourier-Mellin moments, and binary Haar features. The performance of the SVM was tested using different features, kernels, SVM types, SVM parameters, and moment orders. The average classification rate achieved is about 97%. Binary images show the best testing results, followed by Legendre moments. The linear kernel gives the best testing results, followed by RBF. C-SVM shows very good performance, but ν-SVM gives better results in some cases.
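As a hedged sketch of two of the four shape measures named above (the exact definitions used in the thesis may differ), the snippet below computes rectangularity and ellipticity from a binary sign mask with OpenCV: each measure compares the region's area with that of a fitted reference shape and approaches 1 for a good match.

```python
# Shape measures from the largest contour of a binary mask.
import cv2
import numpy as np

def shape_measures(mask):
    """mask: uint8 binary image with the candidate sign as foreground."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(c)

    # Area ratio against the minimum-area bounding rectangle.
    (_, (w, h), _) = cv2.minAreaRect(c)
    rectangularity = area / (w * h) if w * h > 0 else 0.0

    # Area ratio against a fitted ellipse (contour needs >= 5 points).
    (_, (MA, ma), _) = cv2.fitEllipse(c)
    ellipse_area = np.pi * (MA / 2.0) * (ma / 2.0)
    ellipticity = area / ellipse_area if ellipse_area > 0 else 0.0

    return rectangularity, ellipticity
```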
78.
Three dimensional object analysis and tracking by digital holography microscopy. Schockaert, Cédric. 26 February 2007.
Digital Holography Microscopy (DHM) is a 3D measurement technique made practical by Charge Coupled Devices (CCD cameras), which allow high-resolution images to be recorded digitally. That opens a new door to the theory of holography discovered in 1949 by Gabor: the door that had hidden the world of digital hologram processing. A hologram is an ordinary image that also encodes the complex amplitude of the light in the intensities recorded by the camera. The complex amplitude of the light can be seen as the combination of the energy information (squared amplitude modulus) with the information of the propagation angle of the light (phase of the amplitude) for each point of the image. When the hologram is digital, this dual information, combined with a diffractive model of light propagation, makes it possible to numerically investigate planes in front of and behind the recorded plane of the imaging system. 3D information can thus be recorded by a CCD camera, and the acquisition rate of this volume information is limited only by the acquisition rate of the single camera. For each digital hologram, the numerical investigation of regions in front of and behind the recorded plane is a tool to numerically refocus objects that appear unfocused in the original plane acquired by the CCD.
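The numerical investigation of planes before and behind the hologram is typically carried out with a diffraction propagator. Below is a minimal angular-spectrum sketch of that refocusing step, not the thesis code; the wavelength, pixel pitch and propagation distance are illustrative assumptions.

```python
# Angular-spectrum propagation of a complex field by a distance d (metres).
import numpy as np

def propagate(field, d, wavelength=633e-9, pitch=10e-6):
    """Numerically refocus a complex hologram field to plane z = d."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, pitch)          # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * d) * (arg > 0)     # evanescent waves suppressed
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Refocusing: sweep d and keep the plane that maximises a focus metric.
# refocused = propagate(hologram_field, d=2e-3)
```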
This thesis aims to develop general and robust algorithms devoted to automating the analysis, in 3D space and in time, of objects present in a volume studied by an imaging system that records holograms. Manual processing of a huge number of holograms is not realistic and has to be automated by software implementing precise algorithms. In this thesis, the imaging system that records holograms is a Mach-Zehnder interferometer working in transmission, and the studied objects are either of biological nature (crystals, vesicles, cancer cells) or latex particles. We propose and test focus criteria, based on an identical focus metric, for both amplitude and phase objects. These criteria allow the determination of the best focus plane of an object during the numerical investigation, with an uncertainty smaller than the depth of field of the microscope. From this refocusing theory, we develop object detection algorithms that build a synthetic image in which objects are bright on a dark background. This detection map of objects is the first step towards a fully automatic analysis of the objects present in one hologram. The combination of the detection algorithm and the focus criteria allows precise measurement of the 3D position of the objects, and of other relevant characteristics such as the object surface in its focus plane or its convexity. These extra measurements are carried out with a segmentation algorithm adapted to the objects studied in this thesis (opaque objects, and transparent objects in a uniform refractive-index environment). The last algorithm investigated in this research is the data association in time of objects from hologram to hologram, in order to extract 3D trajectories using predictive Kalman filtering.
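A minimal constant-velocity Kalman filter sketch for the hologram-to-hologram 3D tracking step described above, not the thesis implementation; the time step and noise levels are illustrative assumptions.

```python
# One predict/update cycle of a constant-velocity Kalman filter in 3D.
import numpy as np

dt = 1.0                                      # time between holograms (a.u.)
F = np.eye(6); F[:3, 3:] = dt * np.eye(3)     # state: [x y z vx vy vz]
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # we observe position only
Q = 1e-4 * np.eye(6)                          # process noise (assumed)
R = 1e-2 * np.eye(3)                          # measurement noise (assumed)

def kalman_step(x, P, z):
    """Predict the next state, then update with a detected 3D position z."""
    x = F @ x                                  # predict state
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ (z - H @ x)                    # correct with the detection
    P = (np.eye(6) - K @ H) @ P
    return x, P

# The predicted position H @ F @ x can also drive the data association:
# assign each detection in the next hologram to its nearest prediction.
```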
These algorithms are the building blocks of two software packages: DHM Object Detection and Analysis, and Kalman Tracking. The first is designed for both opaque and transparent objects. The term object is not restricted by any further characteristic in this work, and as a consequence the developed algorithms are very general and can be applied to various objects studied in transmission by DHM. The tracking software is adapted to the dynamic applications of the thesis, which are flows of objects. Performance and results are presented in a dedicated chapter.
79.
Video processing in the compressed domain. Fernando, Warnakulasuriya Anil Chandana. January 2000.
No description available.
80.
A fuzzy method for expression classification of faces. Case, Simon James. January 2000.
No description available.