501

Computational analysis of facial expressions

Shenoy, A. January 2010 (has links)
This PhD work constitutes a series of inter-disciplinary studies that use biologically plausible computational techniques and experiments with human subjects in analyzing facial expressions. The performance of the computational models and of the human subjects is analyzed in terms of accuracy and response time. The computational models process images in three stages: pre-processing, dimensionality reduction and classification. The pre-processing of face expression images includes feature extraction, for which Gabor filters are used as they are the closest biologically plausible computational method. Several dimensionality reduction methods are used: Principal Component Analysis (PCA), Curvilinear Component Analysis (CCA) and Fisher Linear Discriminant (FLD), followed by classification with Support Vector Machines (SVM) and Linear Discriminant Analysis (LDA). The six basic prototypical facial expressions that are universally accepted are used for the analysis: angry, happy, fear, sad, surprise and disgust. The performance of the computational models in classifying each expression category is compared with that of the human subjects. The effect size and the encoding face enable the discrimination of the areas of the face specific to a particular expression; the effect size in particular emphasizes the areas of the face involved during the production of an expression. This use of effect size on faces has not been reported previously in the literature and has shown very interesting results. The detailed PCA analysis identified the significant PCA components specific to each of the six basic prototypical expressions. An important observation from this analysis was that, with Gabor filtering followed by non-linear CCA for dimensionality reduction, the dataset vector size may be reduced to a very small number; in most cases just 5 components sufficed. The hypothesis that the average response time (RT) of the human subjects in classifying the different expressions is analogous to the distance of the data points from the classification hyper-plane was verified: the harder a facial expression is for human subjects to classify, the closer it lies to the classifier's separating hyper-plane. A bi-variate correlation analysis of the distance measure and the average RT showed a significant anti-correlation. The signal detection theory (SDT) measure d-prime determined how well the model or the human subjects distinguished an expressive face from a neutral one. On comparison, human subjects are better at classifying the surprise, disgust, fear and sad expressions, whereas the RAW computational model is better able to distinguish the angry and happy expressions. To summarize, there seem to be some similarities between the computational models and the human subjects in the classification process.
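To make the three-stage pipeline concrete, here is a minimal sketch of one configuration (Gabor magnitude features, PCA to a handful of components, and a linear SVM), assuming scikit-image and scikit-learn are available. The filter-bank parameters, the number of components and the cross-validation setup are illustrative assumptions rather than the settings used in the thesis; the CCA/FLD and LDA variants it also evaluates would slot in at the corresponding stages.

```python
# Hedged sketch of the Gabor -> dimensionality reduction -> classification
# pipeline described above. All parameter values are illustrative assumptions.
import numpy as np
from skimage.filters import gabor
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def gabor_features(img, frequencies=(0.1, 0.2, 0.3), n_orientations=4):
    """Filter one grayscale face image with a small Gabor bank and
    return the concatenated magnitude responses as a feature vector."""
    feats = []
    for f in frequencies:
        for theta in np.linspace(0, np.pi, n_orientations, endpoint=False):
            real, imag = gabor(img, frequency=f, theta=theta)
            feats.append(np.hypot(real, imag).ravel())
    return np.concatenate(feats)

def classify_expressions(face_images, labels, n_components=5):
    """face_images: iterable of aligned grayscale face arrays (assumed given);
    labels: one of the six basic expressions per image."""
    X = np.array([gabor_features(im) for im in face_images])
    # PCA stands in for the dimensionality-reduction stage; the thesis also
    # compares CCA and FLD, which would replace this step.
    model = make_pipeline(PCA(n_components=n_components), SVC(kernel="linear"))
    return cross_val_score(model, X, labels, cv=5)
```

Calling `classify_expressions` on labelled face images would return per-fold accuracies comparable, in spirit, to the model accuracies discussed above.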
502

The collaborative index

Ryding, Michael Philip January 2006 (has links)
Information-seekers use a variety of information stores including electronic systems and the physical world experience of their community. Within electronic systems, information-seekers often report feelings of being lost and suffering from information overload. However, in the physical world they tend not to report the same negative feelings. This work draws on existing research including Collaborative Filtering, Recommender Systems and Social Navigation and reports on a new observational study of information-seeking behaviours. From the combined findings of the research and the observational study, a set of design considerations for the creation of a new electronic interface is proposed. Two new interfaces, the second built from the recommendations of the first, and a supporting methodology are created using the proposed design considerations. The second interface, the Collaborative Index, is shown to allow physical world behaviours to be used in the electronic world and it is argued that this has resulted in an alternative and preferred access route to information. This preferred route is a product of information-seekers' interactions 'within the machine' and maintains the integrity of the source information and navigational structures. The methodology used to support the Collaborative Index provides information managers with an understanding of the information-seekers' needs and an insight into their behaviours. It is argued that the combination of the Collaborative Index and its supporting methodology has provided the capability for information-seekers and information managers to 'enter into the machine', producing benefits for both groups.
503

Seismological data acquisition and signal processing using wavelets

Hloupis, Georgios January 2009 (has links)
This work deals with two main fields: (a) the design, construction, installation, testing, evaluation, deployment and maintenance of the Seismological Network of Crete (SNC) of the Laboratory of Geophysics and Seismology (LGS) at the Technological Educational Institute (TEI) at Chania, and (b) the use of the Wavelet Transform (WT) in several applications during the operation of that network. SNC began its operation in 2003. It was designed and built to provide denser network coverage, real time data transmission to CRC, real time telemetry, use of wired ADSL lines and dedicated private satellite links, real time data processing and estimation of source parameters, as well as rapid dissemination of results. All of the above are implemented using commercial hardware and software, modified and, where necessary, supplemented by additional software modules designed and deployed by the author. Up to July 2008 SNC had recorded 5500 identified events (around 970 more than those reported by the national bulletin over the same period), and its seismic catalogue is complete for magnitudes over 3.2, whereas the national catalogue was complete only for magnitudes over 3.7 before SNC began operating. During its operation, several applications at SNC used the WT as a signal processing tool, benefiting from its suitability for non-stationary signals such as seismic signals. These applications are: the HVSR method, where the WT is used to reveal otherwise undetectable non-stationarities and so eliminate errors in estimating a site's fundamental frequency; denoising, where several wavelet denoising schemes are compared with the band-pass filtering widely used in seismology, to demonstrate the superiority of wavelet denoising and to choose the most appropriate scheme for different signal-to-noise ratios of seismograms; EEWS, where the WT is used to produce magnitude prediction equations and epicentral estimates from the first 5 seconds of the P-wave arrival; and, as an alternative analysis tool, the detection of significant indicators in temporal patterns of seismicity, where multiresolution wavelet analysis of seismicity is used to estimate, over a period of several years, the time at which the maximum emitted earthquake energy was observed.
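As a concrete illustration of the denoising application, the following is a minimal sketch of soft-threshold wavelet denoising of a single seismic trace, assuming the PyWavelets package. The Daubechies wavelet, decomposition depth and universal threshold are common textbook choices used here for illustration, not the specific schemes compared in the thesis.

```python
# Minimal wavelet-denoising sketch (illustrative choices, not thesis settings).
import numpy as np
import pywt

def wavelet_denoise(trace, wavelet="db4", level=5):
    """Soft-threshold the detail coefficients of a 1-D seismic trace."""
    coeffs = pywt.wavedec(trace, wavelet, level=level)
    # Noise level estimated from the finest-scale detail coefficients (MAD).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(trace)))   # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(trace)]
```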
504

Bayesian learning of continuous time dynamical systems with applications in functional magnetic resonance imaging

Murray, Lawrence January 2009 (has links)
Temporal phenomena in a range of disciplines are more naturally modelled in continuous-time than coerced into a discrete-time formulation. Differential systems form the mainstay of such modelling, in fields from physics to economics, geoscience to neuroscience. While powerful, these are fundamentally limited by their determinism. For the purposes of probabilistic inference, their extension to stochastic differential equations permits a continuous injection of noise and uncertainty into the system, the model, and its observation. This thesis considers Bayesian filtering for state and parameter estimation in general non-linear, non-Gaussian systems using these stochastic differential models. It identifies a number of challenges in this setting over and above those of discrete time, most notably the absence of a closed form transition density. These are addressed via a synergy of diverse work in numerical integration, particle filtering and high performance distributed computing, engineering novel solutions for this class of model. In an area where the default solution is linear discretisation, the first major contribution is the introduction of higher-order numerical schemes, particularly stochastic Runge-Kutta, for more efficient simulation of the system dynamics. Improved runtime performance is demonstrated on a number of problems, and compatibility of these integrators with conventional particle filtering and smoothing schemes discussed. Finding compatibility for the smoothing problem most lacking, the major theoretical contribution of the work is the introduction of two novel particle methods, the kernel forward-backward and kernel two-filter smoothers. By harnessing kernel density approximations in an importance sampling framework, these attain cancellation of the intractable transition density, ensuring applicability in continuous time. The use of kernel estimators is particularly amenable to parallelisation, and provides broader support for smooth densities than a sample-based representation alone, helping alleviate the well known issue of degeneracy in particle smoothers. Implementation of the methods for large-scale problems on high performance computing architectures is provided. Achieving improved temporal and spatial complexity, highly favourable runtime comparisons against conventional techniques are presented. Finally, attention turns to real world problems in the domain of Functional Magnetic Resonance Imaging (fMRI), first constructing a biologically motivated stochastic differential model of the neural and hemodynamic activity underlying the observed signal in fMRI. This model and the methodological advances of the work culminate in application to the deconvolution and effective connectivity problems in this domain.
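For orientation, the sketch below shows the baseline this work improves upon: a scalar stochastic differential equation propagated with the default Euler-Maruyama (linear) discretisation inside a bootstrap particle filter. The drift, diffusion, observation model and step sizes are illustrative assumptions; the thesis replaces the integrator with higher-order stochastic Runge-Kutta schemes and the smoother with the kernel forward-backward and kernel two-filter methods.

```python
# Euler-Maruyama discretisation of dx = f(x)dt + g(x)dW inside a bootstrap
# particle filter. Illustrative sketch only; all models and values assumed.
import numpy as np

rng = np.random.default_rng(0)

def euler_maruyama(x, f, g, dt, n_steps):
    """Propagate particles x through n_steps Euler-Maruyama increments."""
    for _ in range(n_steps):
        x = x + f(x) * dt + g(x) * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x

def bootstrap_filter(ys, f, g, obs_sd, n_particles=1000, dt=0.01, steps_per_obs=10):
    """Return filtered state means for observations ys, assuming y = x + noise."""
    x = rng.standard_normal(n_particles)                    # initial particle cloud
    means = []
    for y in ys:
        x = euler_maruyama(x, f, g, dt, steps_per_obs)      # predict
        logw = -0.5 * ((y - x) / obs_sd) ** 2               # Gaussian likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))                         # filtered mean
        x = x[rng.choice(n_particles, n_particles, p=w)]    # resample
    return np.array(means)
```

For example, an Ornstein-Uhlenbeck process could be filtered with `bootstrap_filter(ys, f=lambda x: -x, g=lambda x: 0.5 * np.ones_like(x), obs_sd=0.2)`.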
505

Evidence-based spatial intervention for regeneration of deteriorating urban areas : a case study from Tehran, Iran

Rismanchian, Omid January 2012 (has links)
Throughout the urban development process of the last seven decades in Tehran, the capital city of Iran, many self-generated neighbourhoods have developed in which the majority of residents are low-income families. On the one hand, the main spatial attribute of these deprived neighbourhoods is spatial isolation from the surrounding, more affluent areas, accompanied by inadequate urban infrastructure and a lack of accessibility and permeability. On the other hand, the Tehran City Revitalisation Organisation – the governmental sector in charge of the deprived areas – is incapable of conducting urban regeneration without investment from the private sector, and is seeking methods to create 'socio-economic stimulant zones' to attract private-sector participation in regeneration programmes. In this regard, this research investigates the notion of 'spatial isolation', which in turn causes socio-economic isolation, as highlighted in the literature. The research suggests that in order to develop feasible regeneration programmes, which can meet the interests of both people and government and release the deprived areas from isolation both spatially and socio-economically, regeneration plans should focus on public open space developments as 'socio-economic stimulant zones'. With regard to this idea, the research highlights the street as a 'social arena' – not an artery or thoroughfare – as the type of public open space whose development could not only release the deprived areas from spatial isolation, but also direct more pedestrian movement to and through the deprived neighbourhoods, creating more opportunities for socio-economic interaction. In this respect, the theory of 'natural movement' and the theories and literature on 'integrated public open spaces' form the theoretical framework of the research. For further investigation, two case studies, one deprived area and one control area, have been chosen, and the spatial pattern of the city and of the two cases has been analysed with regard to the notion of 'spatial isolation' through Space Syntax, using Depthmap software and GIS. The correlation between the distribution pattern of commercial land uses and syntactic measures across the city of Tehran is also investigated, to identify the streets with potential for creating commercial opportunities. Afterwards, in order to study street life and the variety of activities the streets can afford, a few locally integrated streets in the deprived case have been chosen. At this stage, nineteen behaviours have been observed and classified into five major classes (necessary, social, optional, hazardous and occasional activities), and their correlation with syntactic measures is studied. Moreover, the methods of developing a route filtering system and a transformability index for identifying the most suitable streets for creating a pedestrian-friendly network are discussed, using the example of a deprived area and integrating it with the surrounding urban fabric to create the 'socio-economic stimulant zones'. The results show that, by identifying the underlying spatial pattern of the urban fabric, it is possible to release deprived areas from their spatial isolation by developing a street network without causing urban fragmentation.
This approach could also form a cost-effective basis for developing a pedestrian-friendly street network as one of the 'socio-economic stimulant zones' that the Tehran City Revitalisation Organisation is looking for: the type of street that not only supports necessary activities and transportation, but also facilitates socio-economic interaction.
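As a toy illustration of the kind of syntactic analysis described above (not the Depthmap workflow used in the thesis), the sketch below approximates integration by closeness centrality on a small street graph built with networkx and correlates it with a hypothetical count of commercial frontages per street. The graph, the counts and the use of closeness as a stand-in for syntactic integration are all assumptions made for illustration.

```python
# Toy correlation of an integration-like measure with commercial land use.
import networkx as nx
from scipy.stats import spearmanr

# Hypothetical street network: nodes are streets, edges are intersections.
G = nx.Graph([("A", "B"), ("B", "C"), ("B", "D"), ("D", "E"), ("C", "E"), ("E", "F")])

# Closeness centrality is a simple stand-in for the syntactic integration value
# that Space Syntax tools such as Depthmap derive from mean topological depth.
integration = nx.closeness_centrality(G)

# Hypothetical number of commercial frontages observed on each street.
commercial = {"A": 0, "B": 5, "C": 2, "D": 3, "E": 6, "F": 1}

streets = sorted(G.nodes)
rho, p = spearmanr([integration[s] for s in streets],
                   [commercial[s] for s in streets])
print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")
```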
506

Characterisation of the MIRI spectrometer, an instrument for the James Webb Space Telescope

Briggs, Michael January 2010 (has links)
The MIRI-MRS is a future space-based Medium Resolution Spectrometer and one of four instruments to be integrated onto the James Webb Space Telescope. The Medium Resolution Spectrometer is designed to be diffraction limited across its entire passband of 5 to 28.3 microns. It achieves this through spectral filtering of the passband into four channels, each containing an integral field unit optimised for minimal diffraction losses. The integral field unit enables the simultaneous measurement of spectral data across the entire field of view. The design of the Medium Resolution Spectrometer is outlined, with particular reference to the choice of slice widths used for each channel to minimise diffraction losses from the slicing mechanism. The slice widths are also used to derive the extent of the field of view and, combined with the along-slice plate scale at the detector, to outline the technique required for complete spatial sampling of the spectrometer. The operation of the Channel 1 image slicer component was tested cryogenically at 5 microns for diffraction losses due to the slicing of the point spread function, so that the actual diffraction losses could be measured and compared with the optical model. From the resulting analysis I concluded that the diffraction behaviour of the image slicers was well understood. Performance tests were required on the instrument because of its novel design: this was the first implementation of an integral field unit operating between 5 and 28.3 microns, and it was necessary to ensure that the operation of the image slicer did not induce unacceptable diffraction losses into the instrument. Tests were required on the assembled instrument to verify the optical design. A Verification Model of MIRI was built to enable test verification of the optical design. This testing was carried out in advance of the MIRI Flight Model assembly so that changes could be made to the Flight Model design if necessary. The testing phase was also designed to define the calibration process necessary to prepare the MIRI Flight Model for scientific operations. For the testing phase it was necessary to create an astronomical source simulator. This MIRI Telescope Simulator was constructed in Madrid, where I spent two months ensuring that the point-source movement across the field of view would be sufficient to investigate the Medium Resolution Spectrometer. My contribution was to help assemble both the Verification and Flight Models. I also participated in the Verification Model testing phase, from test design to test implementation and data analysis. My role in the analysis was to investigate the field of view of the Medium Resolution Spectrometer Verification Model and whether the field of view requirements for the spectrometer were met. During this analysis I also verified that the diffraction effects of the end-to-end instrument were well understood by the optical model. The Medium Resolution Spectrometer Verification Model field of view compromised the field of view requirement for the spectrometer, and a similar analysis for the Flight Model showed a low probability that the field of view requirement would be met. As a result of the analysis I defined a new slit mask design that would align the field of view sampled by Channel 1 and so increase the aligned field of view; consequently there is a high probability that the field of view requirement for the Flight Model will be exceeded.
The test analysis discovered a magnification effect within the spectrometer which must be properly characterised to enable accurate field of view reconstruction. I designed a test, necessary for the calibration phase of the Flight Model, to enable full spatial alignment of the Medium Resolution Spectrometer. I also measured an excess flux level in the Channel 1 observations at the detector, and a ghost was detected in the Channel 1 images. Whilst the origin of neither the excess flux nor the ghost could be completely determined, I investigated the possibility that they will not be present in the Flight Model owing to the slight design differences. If present, however, they will not increase the background level of an observation above the requirement outlined for Channel 1.
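For a rough sense of why the slice widths must grow with wavelength, the snippet below evaluates the angular scale of the diffraction pattern (1.22 λ/D) across the 5 to 28.3 micron passband for JWST's roughly 6.5 m aperture. This back-of-the-envelope calculation is an illustration only and is not taken from the thesis.

```python
# Illustrative diffraction-scale calculation for the MRS passband (assumed values).
D = 6.5                       # JWST primary mirror diameter in metres (approx.)
RAD_TO_ARCSEC = 206265.0

for wavelength_um in (5.0, 12.0, 18.0, 28.3):
    wavelength_m = wavelength_um * 1e-6
    theta = 1.22 * wavelength_m / D * RAD_TO_ARCSEC   # radius of first Airy minimum
    print(f"{wavelength_um:5.1f} um : 1.22*lambda/D = {theta:.2f} arcsec")
```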
507

Efficient multiple hypothesis track processing of boost-phase ballistic missiles using IMPULSE©-generated threat models

Rakdham, Bert 09 1900 (has links)
In this thesis, a multiple hypotheses tracking (MHT) algorithm is developed to successfully track multiple ballistic missiles within the boost phase. The success of previous work on the MHT algorithm and its application in other scientific fields enables this study to realize an efficient form of the algorithm and examine its feasibility in tracking multiple crossing ballistic missiles even though various accelerations due to staging are present. A framework is developed for the MHT, which includes a linear assignment problem approach used to search the measurement-to-contact association matrix for the set of exact N-best feasible hypotheses. To test the new MHT, an event in which multiple ballistic missiles have been launched and threaten the North American continent is considered. To aid in the interception and destruction of the threat far from their intended targets, the research focuses on the boost-phase portion of the missile flight. The near-simultaneous attacks are detected by a network of radar sensors positioned near the missile launch sites. Each sensor provides position reports or track files for the MHT routine to process. To quantify the performance of the algorithm, data from the National Air and Space Intelligence Center's IMPULSE ICBM model is used and demonstrates the feasibility of this approach. This is especially significant to the U.S. Missile Defense Agency since the IMPULSE model represents the cognizant analyst's accurate representation of the ballistic threats in a realistic environment. The results show that this new algorithm works exceptionally well in a realistic environment where complex interactions of missile staging, non-linear thrust profiles and sensor noise can significantly degrade the track algorithm performance especially in multiple target scenarios.
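The sketch below illustrates the linear-assignment step mentioned above using scipy's `linear_sum_assignment`: a cost matrix is built between predicted track positions and incoming radar reports, and the best one-to-one association is solved for. The positions and the squared-distance cost are illustrative assumptions, and the enumeration of the N-best hypotheses that an MHT needs on top of the single best assignment is omitted.

```python
# Measurement-to-track association via a linear assignment problem (sketch).
import numpy as np
from scipy.optimize import linear_sum_assignment

predicted = np.array([[10.0, 5.0], [12.0, 30.0], [40.0, 8.0]])   # track predictions
measured  = np.array([[11.0, 4.5], [39.5, 9.0], [12.5, 29.0]])   # radar reports

# Cost = squared Euclidean distance; a real tracker would use a Mahalanobis
# distance based on the filter's innovation covariance.
cost = ((predicted[:, None, :] - measured[None, :, :]) ** 2).sum(axis=2)

rows, cols = linear_sum_assignment(cost)
for t, m in zip(rows, cols):
    print(f"track {t} <- measurement {m} (cost {cost[t, m]:.2f})")
```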
508

Neuronal correlates of implicit learning in the mammalian midbrain

Cruces Solis, Hugo 26 April 2016 (has links)
No description available.
509

Investigation of clogging problems in a sorption-type ground filter at the Gräfsåsen waste facility

Söderberg, Karolina January 2016 (has links)
Ground filters for water treatment have been developed over the years to possess tailor-made properties for treating leachate. Leachate contains a variety of pollutants, which depend on the nature of the deposited waste. In sorption filters, the traditional filter's mechanical ability to remove particles and biological ability to break down organic material are combined with one or several sorbents, which act to precipitate or trap contaminants. However, it is rather common that these types of filters suffer from clogging, which reduces the filter's capacity and lifetime. At the landfill in Gräfsåsen, belonging to the municipality of Östersund, a sorption filter developed by a consultant was installed in 2013 and was estimated to have a lifespan of 10-15 years. After installation, the capacity of the filter proved lower than expected and also decreased rapidly over time. The filter media were adjusted after a couple of months in operation, but the problem with clogging and low capacity remained.
This thesis was designed to investigate the causes of the clogging problem of the sorption filter at Gräfsåsen and to suggest improvements. The study consisted mainly of two parts: one part consisted of interpreting and comparing test results and characterizing the leachate against a study of leachate by IVL (Swedish Environmental Research Institute); the second part consisted of checking the consultant's sizing calculations and comparing them with other design models for ground filters. The analytical results and calculations were supplemented with a practical test in which leachate and tap water were filtered in separate filter columns, designed with the same proportions and materials as the existing filter. The purpose of the column filtration was to assess the material's hydraulic capacity and to examine whether there were any differences in flow times between the leachate and tap water. The filtration showed that the degree of compaction of the filter material is of great importance for the permeability; no conclusions regarding the impact of water quality could be drawn. The characterization indicated that the leachate at Gräfsåsen contained high concentrations of BOD, POC, aluminum, sulfur, calcium and phosphorus. Taken together, the analyses and comparisons indicated that metal oxides and carbonates precipitated in the filter. The calculations showed that the filter had been dimensioned for a smaller average flow than was assessed appropriate. The hydraulic conductivity of the filter material was lower than the consultant's recommendations, which in practice meant that the capacity of the filter was a third of the flow it was intended to be designed for.
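To show the kind of sizing check involved, the snippet below applies Darcy's law (Q = K · A · i) to estimate the bed area needed for a given design flow. The conductivity, gradient and flow values are invented for illustration and are not the consultant's or the thesis's figures.

```python
# Simplified Darcy's-law sizing check (all numbers are assumptions).
K = 1e-5          # hydraulic conductivity of the filter material, m/s
i = 1.0           # hydraulic gradient (vertical flow through the bed)
Q_design = 2.0    # design leachate flow, litres per second

A_required = (Q_design / 1000.0) / (K * i)     # m^2, from Q = K * A * i
print(f"Required filter area: {A_required:.0f} m^2")
# If the actual conductivity is three times lower than assumed, the same bed
# area passes only a third of the design flow, the situation described above.
```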
510

Data Assimilation for Spatial Temporal Simulations Using Localized Particle Filtering

Long, Yuan 15 December 2016 (has links)
As sensor data becomes more and more available, there is an increasing interest in assimilating real time sensor data into spatial temporal simulations to achieve more accurate simulation or prediction results. Particle Filters (PFs), also known as Sequential Monte Carlo methods, hold great promise in this area as they use Bayesian inference and stochastic sampling techniques to recursively estimate the states of dynamic systems from given observations. However, PFs face major challenges in working effectively for complex spatial temporal simulations due to the high dimensional state space of the simulation models, which typically cover large areas and have a large number of spatially dependent state variables. As the state space dimension increases, the number of particles must increase exponentially in order to converge to the true system state. The purpose of this dissertation work is to develop localized particle filtering to support PF-based data assimilation for large-scale spatial temporal simulations. We develop a spatially dependent particle-filtering framework that breaks the system state and observation data into sub-regions and then carries out localized particle filtering based on these spatial regions. The developed framework exploits the spatial locality of the system state and observation data, and employs the divide-and-conquer principle to reduce state dimension and data complexity. Within this framework, we propose a two-level automated spatial partitioning method to provide optimized and balanced spatial partitions with fewer boundary sensors. We also consider different types of data to effectively support data assimilation for spatial temporal simulations. These data include both hard data, which are measurements from physical devices, and soft data, which are information from messages, reports, and social networks. The developed framework and methods are applied to large-scale wildfire spread simulations and achieve improved results. Furthermore, we compare the proposed framework to existing particle-filtering-based data assimilation frameworks and evaluate the performance of each of them.
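The following is a minimal sketch of the localized idea under strong simplifying assumptions: the state is a short 1-D grid split into sub-regions, each particle is weighted only against the observations inside a region, and resampling is carried out region by region. The dynamics, observation model, partition and all parameter values are illustrative and do not reflect the wildfire simulation or the two-level partitioning method developed in the dissertation.

```python
# Localized (block) particle-filter update: weight and resample per sub-region.
import numpy as np

rng = np.random.default_rng(1)

N_PARTICLES, N_CELLS = 200, 12
regions = [slice(0, 4), slice(4, 8), slice(8, 12)]     # assumed spatial partition

particles = rng.normal(size=(N_PARTICLES, N_CELLS))    # particle ensemble

def localized_update(particles, observation, obs_sd=0.5):
    """One assimilation step: weight and resample each sub-region separately."""
    new = np.empty_like(particles)
    for r in regions:
        err = particles[:, r] - observation[r]
        logw = -0.5 * np.sum((err / obs_sd) ** 2, axis=1)   # local likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(len(particles), len(particles), p=w)
        new[:, r] = particles[idx, r]                       # block resampling
    return new

observation = rng.normal(size=N_CELLS)      # stand-in for one round of sensor data
particles = localized_update(particles, observation)
print(particles.mean(axis=0))               # localized filtered mean per cell
```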
