311 |
Crossmodal Modulation as a Basis for Visual Enhancement of Auditory Performance
Qian, Cheng 15 February 2010 (has links)
The human sensory system processes many modalities simultaneously. It was long believed that each modality was processed individually first, with their combination deferred to higher-level cortical areas. Recent neurophysiological investigations indicate interconnections between early visual and auditory cortices, areas putatively considered unimodal, but their function remains unclear. The present work explores how this cross-modality might contribute to a visual enhancement of auditory performance, using a combined theoretical and experimental approach. The enhancement of sensory performance was studied through a signal detection framework. A model was constructed using principles from signal detection theory and neurophysiology, demonstrating enhancements of roughly 1.8 dB both analytically and through simulation. Several experiments were conducted to observe the effects of visual cues on a 2-alternative forced-choice detection task of an auditory tone in noise. Results of the main experiment showed an enhancement of 1.6 dB. Greater enhancement also tended to occur for more realistic relationships between the auditory and visual stimuli.
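The signal-detection reasoning in this abstract can be sketched numerically. The snippet below is an illustrative stand-in, not the thesis model: the auditory and visual sensitivities (d' values) are arbitrary assumed numbers, and the two cues are combined under the standard optimal-integration rule for independent observations, which predicts a combined sensitivity of sqrt(d_a'^2 + d_v'^2) and a modest dB gain.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_afc_percent_correct(d_prime, n_trials=100_000):
    """Simulate a 2AFC task: the observer picks the interval with the
    larger internal response; the signal interval has mean d_prime."""
    signal = rng.normal(d_prime, 1.0, n_trials)
    noise = rng.normal(0.0, 1.0, n_trials)
    return np.mean(signal > noise)

d_audio = 1.0                             # assumed auditory sensitivity alone
d_visual = 0.5                            # assumed visual-cue sensitivity
d_combined = np.hypot(d_audio, d_visual)  # optimal fusion of independent cues

gain_db = 20 * np.log10(d_combined / d_audio)  # sensitivity gain in dB
print(f"combined d' = {d_combined:.3f}, gain = {gain_db:.2f} dB")
print(f"2AFC accuracy, audio alone: {two_afc_percent_correct(d_audio):.3f}")
print(f"2AFC accuracy, combined:    {two_afc_percent_correct(d_combined):.3f}")
```

With these illustrative numbers the gain is about 1 dB; the thesis's 1.8 dB figure comes from its own model, not this sketch.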
|
312 |
A Conversation about Conversations: Dialogue Based Methodology And HIV/AIDS In Sub-Saharan Africa
Rolston, Imara 01 January 2011 (has links)
The world’s understanding of HIV/AIDS is grounded in biomedicine and shaped by cognitive psychology. Both bonded with historically top-down development mechanisms to create ‘prevention’ strategies that obscured the root causes of the pandemic. Within this hierarchy, biomedicine and the cognitive-psychological conception of human beings silenced the indigenous voices and experiences of communities fighting HIV/AIDS. This is most certainly true in the case of Sub-Saharan Africa. This research explores the emergence of the Community Capacity Enhancement – Community Conversations prevention approach, which places community dialogue, and the voices of communities, at the forefront of the battle to end HIV/AIDS and to deconstruct and challenge the forms of structural violence that hold prevalence rates in place. Within these spaces, oral traditions, indigenous knowledge, and resistance illustrate new and complex pictures of the virus's socio-economic impact and provide new foundations for community-generated movements to curb the virus.
|
313 |
Dynamic Scoping for Browser Based Access Control System
Nadipelly, Vinaykumar 25 May 2012 (has links)
The use of web applications has grown to the point where they serve almost every need and have become an essential part of our everyday lives. As a result, enhancing the privacy and security policies of web applications is becoming increasingly important. The importance and stateless nature of the web infrastructure have made the web a preferred target of attacks, and the current web access control system is one reason these attacks succeed. The current web consists of two major components, the browser and the server, where an effective access control system needs to be implemented. In terms of access control, the current web has adopted the same-origin policy for the browser and the same-session policy for the server. These policies were sufficient for the early web, but they are inadequate for the protection needs of today's web.
In order to protect web applications from untrusted content, we provide an enhanced browser-based access control system enabled by dynamic scoping. Our security model for the browser allows the client and trusted web application content to share a common library while protecting web contents from each other, even though they execute at different trust levels. We have implemented a working model of this enhanced browser-based access control system in Java, under the Lobo browser.
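A minimal sketch of the dynamic-scoping idea, with hypothetical names and written in Python rather than the thesis's Java implementation: library routines consult the innermost trust scope on a stack, so untrusted content cannot regain privileges by calling through a shared trusted library.

```python
import contextlib

# Dynamically scoped trust level: each piece of content runs inside a
# trust scope pushed onto a stack, and privileged operations consult the
# *innermost* (dynamic) scope rather than their own defining scope.
_trust_stack = ["trusted"]            # the page's own code starts trusted

@contextlib.contextmanager
def trust_scope(level):
    _trust_stack.append(level)
    try:
        yield
    finally:
        _trust_stack.pop()

def current_trust():
    return _trust_stack[-1]

def read_cookie():
    # A shared library routine: allowed only in a trusted dynamic scope.
    if current_trust() != "trusted":
        raise PermissionError("cookie access denied in untrusted scope")
    return "session=abc123"

print(read_cookie())                  # trusted page code: allowed
with trust_scope("untrusted"):        # e.g. a third-party script
    try:
        read_cookie()                 # same library call, now denied
    except PermissionError as e:
        print("blocked:", e)
```

The key design point is that the check depends on the caller's dynamic context, not on where `read_cookie` was defined, which is what lets trusted and untrusted content share one library safely.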
|
314 |
ROBUST SPEAKER DIARIZATION FOR MEETINGS
Anguera Miró, Xavier 21 December 2006 (has links)
This thesis presents research into speaker diarization for meeting rooms. It covers the algorithms and the implementation of an offline speaker segmentation and clustering system for meeting recordings where more than one microphone is usually available. The main research and system implementation were done during a two-year visit to the International Computer Science Institute (ICSI, Berkeley, California). Speaker diarization is a well-studied topic in the domain of broadcast news recordings.
Most of the proposed systems involve some sort of hierarchical clustering of the data into groups, where the optimal number of speakers and their identities are unknown a priori. A very commonly used method is called bottom-up clustering, where multiple initial clusters are iteratively merged until the optimal number of clusters is reached, according to some stopping criterion. Such systems are based on a single-channel input, which does not allow a direct application to the meetings domain. Although some efforts had been made to adapt such systems to multichannel data, at the start of this thesis no effective implementation had been proposed. Furthermore, many of these speaker diarization algorithms involve some sort of model training or parameter tuning using external data, which impedes their use with data different from what they were adapted to. The implementation proposed in this thesis works towards solving the aforementioned problems. Taking the existing hierarchical bottom-up mono-channel speaker diarization system from ICSI as a starting point, it first uses flexible acoustic beamforming to extract speaker location information and obtain a single enhanced signal from all available microphones. It then applies a train-free speech/non-speech detector to this signal and processes the resulting speech segments with an improved version of the mono-channel speaker diarization system. This system has been modified to use speaker location information (when available), and several algorithms have been adapted or newly created so that the system adapts its behavior to each particular recording by obtaining information directly from the acoustics, making it less dependent on development data. The resulting system is flexible with respect to the meeting room layout, regardless of the number of microphones and their placement. It is train-free, making it easy to adapt to different sorts of data and application domains.
Finally, it takes a step forward in the use of parameters that are more robust to changes in the acoustic data. Two versions of the system were submitted, with excellent results, to the NIST RT05s and RT06s Rich Transcription evaluations for meetings, where data from two different subdomains (lectures and conferences) was evaluated. In addition, experiments using the RT datasets from all meeting evaluations were run to test the different proposed algorithms, demonstrating their suitability to the task.
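The bottom-up clustering with a stopping criterion described above can be sketched as follows. This is an illustrative toy on 2-D feature vectors, not the ICSI system: it models each cluster with a single full-covariance Gaussian and uses the standard delta-BIC (with a penalty term) both to pick the pair to merge and to decide when to stop.

```python
import numpy as np

def gauss_loglik(x):
    """Log-likelihood of samples under one full-covariance Gaussian fit."""
    n, d = x.shape
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(d)   # regularized
    diff = x - mu
    _, logdet = np.linalg.slogdet(cov)
    maha = np.einsum('ij,jk,ik->', diff, np.linalg.inv(cov), diff)
    return -0.5 * (n * (d * np.log(2 * np.pi) + logdet) + maha)

def delta_bic(a, b, lam=1.0):
    """Standard delta-BIC: negative favors merging a and b into one model."""
    x = np.vstack([a, b])
    extra_params = x.shape[1] + x.shape[1] * (x.shape[1] + 1) / 2
    return (gauss_loglik(a) + gauss_loglik(b) - gauss_loglik(x)
            - 0.5 * lam * extra_params * np.log(x.shape[0]))

def bottom_up(clusters, lam=1.0):
    """Iteratively merge the best pair while some delta-BIC is negative."""
    while len(clusters) > 1:
        deltas = [(delta_bic(clusters[i], clusters[j], lam), i, j)
                  for i in range(len(clusters))
                  for j in range(i + 1, len(clusters))]
        best, i, j = min(deltas)
        if best >= 0:
            break                      # stopping criterion reached
        clusters[i] = np.vstack([clusters[i], clusters[j]])
        del clusters[j]
    return clusters

# Two true "speakers", three initial clusters: the two clusters drawn
# from the same distribution should merge; the distant one stays apart.
rng = np.random.default_rng(1)
c1 = rng.normal(0.0, 1.0, (200, 2))
c2 = rng.normal(0.0, 1.0, (200, 2))
c3 = rng.normal(8.0, 1.0, (200, 2))
print(len(bottom_up([c1, c2, c3])))   # number of clusters after merging
```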
|
315 |
Statistical Fusion of Scientific Images
Mohebi, Azadeh 30 July 2009 (has links)
A practical and important class of scientific images are the 2D/3D images obtained from porous materials such as concrete, bone, activated carbon, and glass. These materials constitute an important class of heterogeneous media possessing a complicated microstructure that is difficult to describe qualitatively. However, they are not totally random: there is a mixture of organization and randomness that makes them difficult to characterize and study. Studying the different properties of porous materials requires 2D/3D high-resolution samples, but obtaining high-resolution samples usually requires cutting, polishing, and exposure to air, all of which affect the properties of the sample. Moreover, 3D samples obtained by Magnetic Resonance Imaging (MRI) are very low resolution and noisy. Therefore, artificial samples of porous media must be generated through a porous media reconstruction process. Recent contributions to the reconstruction task are based either only on a prior model, learned from statistical features of real high-resolution training data, from which samples are generated, or on a prior model together with measurements.
The main objective of this thesis is to come up with a statistical data fusion framework by which different images of porous materials at different resolutions and modalities are combined in order to generate artificial samples of porous media with enhanced resolution. Current super-resolution, multi-resolution, and registration methods in image processing fail to provide a general framework for porous media reconstruction, since they are usually based on finding an estimate rather than a typical sample, and they assume the images come from the same scene -- which is not the case for porous media images.
The statistical fusion approach that we propose is based on a Bayesian framework in which a prior model, learned from high-resolution samples, is combined with a measurement model, defined from the low-resolution coarse-scale information, to produce a posterior model. We define a measurement model, in both the non-hierarchical and hierarchical image modeling frameworks, that describes how the low-resolution information is asserted in the posterior model. We then propose a posterior sampling approach by which 2D posterior samples of porous media are generated from the posterior model. In a more general framework, we assert constraints other than the measurement in the model and propose a constrained sampling strategy based on simulated annealing to generate artificial samples.
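The constrained sampling strategy can be illustrated with a toy example (not the thesis code): a small binary image is sampled by simulated annealing from an Ising-style smoothness prior, with a coarse-scale porosity measurement asserted as an energy penalty. The weights, grid size, and cooling schedule below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 24
target_porosity = 0.3                  # the coarse-scale "measurement"
beta_prior = 0.7                       # strength of the smoothness prior
lam = 2e4                              # weight of the measurement penalty

x = (rng.random((N, N)) < target_porosity).astype(int)

def energy(img):
    # Ising-style prior rewards equal neighbors (organized structure);
    # the measurement term asserts the observed porosity in the sample.
    smooth = -beta_prior * (np.sum(img[1:] == img[:-1]) +
                            np.sum(img[:, 1:] == img[:, :-1]))
    meas = lam * (img.mean() - target_porosity) ** 2
    return smooth + meas

T = 2.0
for sweep in range(80):
    for _ in range(N * N):
        i, j = rng.integers(0, N, size=2)
        e0 = energy(x)
        x[i, j] ^= 1                   # propose a single-site flip
        e1 = energy(x)
        if e1 > e0 and rng.random() >= np.exp(-(e1 - e0) / T):
            x[i, j] ^= 1               # reject: restore the old value
    T *= 0.97                          # cooling schedule

print(f"sample porosity: {x.mean():.3f}")   # close to the 0.3 measurement
```

Because annealing samples a *typical* configuration rather than a single optimum, runs with different seeds produce different textures that all honor the asserted constraint, which is the point of the posterior-sampling view taken in the thesis.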
|
316 |
Fingerprint Recognition
Dimitrov, Emanuil January 2009 (has links)
Nowadays biometric identification is used in a variety of applications: administration, business, and even the home. Although there are many biometric identifiers, fingerprints are the most widely used due to their wide acceptance and the low cost of the hardware equipment. Fingerprint recognition is a complex image recognition problem and includes algorithms and procedures for image enhancement and binarization, feature extraction and matching, and sometimes classification. In this work the main approaches in the research area are discussed, demonstrated, and tested in a sample application. The demonstration software was developed using the VeriFinger SDK and the Microsoft Visual Studio platform. The fingerprint sensor used for testing the application is the AuthenTec AES2501.
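As an example of the binarization step mentioned above (a generic technique, not tied to the VeriFinger SDK), Otsu's method picks the gray level that maximizes the between-class variance of the histogram, separating ridge pixels from valley pixels.

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: choose the gray level that maximizes between-class
    variance, a common binarization step in fingerprint pipelines."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                     # class-0 probability
    mu = np.cumsum(p * np.arange(256))       # class-0 cumulative mean
    mu_t = mu[-1]                            # overall mean gray level
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b))        # ignore empty-class NaNs

# Synthetic "ridge/valley" image: two gray-level populations.
rng = np.random.default_rng(0)
img = np.where(rng.random((64, 64)) < 0.5,
               rng.normal(60, 10, (64, 64)),
               rng.normal(180, 10, (64, 64))).clip(0, 255)
t = otsu_threshold(img)
binary = img > t
print("threshold:", t)
```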
|
317 |
Enhancement and Visualization of Vascular Structures in MRA Images Using Local Structure
Esmaeili, Morteza January 2010 (has links)
The method developed in this thesis uses quadrature filters to estimate a local orientation tensor and exploits the tensor information to control 3D adaptive filters. The adaptive filters are applied to enhance Magnetic Resonance Angiography (MRA) images, and tubular structures are extracted from the volume dataset using the quadrature filters. The purpose of the adaptive filtering is to enhance the volume dataset and suppress image noise, so that its output provides a clean dataset for segmenting blood vessel structures and obtaining an appropriate volume visualization. The local tensors are used to create the control tensor that steers the adaptive filters. By evaluating combinations of the tensor eigenvalues, local structures such as tubes and stenoses are extracted from the dataset. The method has been evaluated on synthetic objects, namely vessel models (for segmentation) and an onion-like object (for enhancement). Experimental results on clinical images further validate the proposed method.
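The eigenvalue analysis of local tensors can be illustrated with a gradient-based structure tensor, a simpler stand-in for the quadrature-filter tensor used in the thesis. For an ideal tubular structure, two eigenvalues are large (gradients across the tube) and one is near zero (along the tube's axis).

```python
import numpy as np

def structure_tensor_eigvals(vol):
    """Average the outer product of image gradients over a small window
    at the volume center and return eigenvalues in descending order."""
    gz, gy, gx = np.gradient(vol.astype(float))
    g = np.stack([gz, gy, gx], axis=-1)
    c = np.array(vol.shape) // 2
    window = g[tuple(slice(ci - 3, ci + 4) for ci in c)].reshape(-1, 3)
    T = window.T @ window / window.shape[0]    # 3x3 orientation tensor
    return np.linalg.eigvalsh(T)[::-1]         # descending eigenvalues

# Synthetic tube along the z axis: constant in z, disc-shaped in (y, x).
z, y, x = np.mgrid[0:32, 0:32, 0:32]
tube = (((y - 16) ** 2 + (x - 16) ** 2) < 16).astype(float)
l1, l2, l3 = structure_tensor_eigvals(tube)
print(l1, l2, l3)                              # expect l1, l2 > 0 and l3 = 0
```

The eigenvalue pattern (two large, one small) is the tubularity signature used to build the control tensor; a stenosis shows up as a local deviation from this pattern along the vessel.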
|
318 |
Image Enhancement over a Sequence of Images
Karelid, Mikael January 2008 (links)
This Master's thesis was conducted at the National Laboratory of Forensic Science (SKL) in Linköping. When images to be analyzed at SKL that show an object of interest are of poor quality, they may need to be enhanced. If several images of the object are available, the total amount of information can be used to estimate a single enhanced image. A program to do this has been developed by studying methods for image registration and high-resolution image estimation, and tests of important parts of the procedure have been conducted. The final results are satisfying, and the key to a good high-resolution image appears to be the precision of the image registration; improvements to this step may lead to even better results. Further suggestions for improvement have also been proposed.
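The estimate-one-image-from-many idea can be sketched as registration followed by fusion. The toy below is not SKL's program: it assumes pure integer translations, estimates them by phase correlation, and averages the aligned frames, which reduces noise roughly by the square root of the frame count.

```python
import numpy as np

def register_shift(ref, img):
    """Estimate the integer translation between two frames by phase
    correlation: the normalized cross-power spectrum peaks at the shift."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    r = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]   # wrap to signed shifts
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return dy, dx

def fuse(frames):
    """Align every frame to the first one and average them."""
    ref = frames[0]
    acc = np.zeros_like(ref, dtype=float)
    for f in frames:
        dy, dx = register_shift(ref, f)
        acc += np.roll(f, (dy, dx), axis=(0, 1))
    return acc / len(frames)

rng = np.random.default_rng(0)
scene = rng.random((64, 64)) * 100
frames = [np.roll(scene, (s, -s), axis=(0, 1)) + rng.normal(0, 5, scene.shape)
          for s in range(4)]
fused = fuse(frames)
err_single = np.abs(frames[0] - scene).mean()
err_fused = np.abs(fused - scene).mean()
print(err_single, err_fused)           # the fused error should be smaller
```

Real forensic imagery needs subpixel, possibly non-rigid registration, which is exactly why the thesis identifies registration precision as the limiting factor.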
|
319 |
Enhancement of X-ray Fluoroscopy Image Sequences using Temporal Recursive Filtering and Motion Compensation
Forsberg, Anni January 2006 (has links)
This thesis considers enhancement of X-ray fluoroscopy image sequences. The purpose is to investigate the possibilities of improving the image enhancement in Biplanar 500, a fluoroscopy system developed by Swemac Medical Appliances for use in orthopedic surgery. An algorithm based on recursive filtering, for temporal noise suppression, and motion compensation, for avoiding motion artifacts, is developed and tested on image sequences from the system. The motion compensation is done both globally, using the shift theorem, and locally, by subtracting consecutive frames. A new type of contrast adjustment, obtained with a nonlinear mapping function, is also presented. The result is a noise-reduced image sequence that shows no blurring effects under motion. A brief study shows that orthopedists prefer both the image sequences processed with this algorithm and the contrast-adjusted images over the images currently produced by the system.
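The temporal recursive filtering with a simple motion safeguard can be sketched as follows. This is an illustrative toy, not the Biplanar 500 algorithm: a first-order recursion averages over time, and pixels whose inter-frame difference is large are left unfiltered so that moving structures are not blurred.

```python
import numpy as np

def recursive_filter(frames, alpha=0.8, motion_thresh=30.0):
    """Temporal noise suppression: y_n = alpha*y_{n-1} + (1-alpha)*x_n,
    with the recursion bypassed wherever consecutive frames differ
    strongly (a crude local motion test) to avoid motion blur."""
    out = [frames[0].astype(float)]
    for x in frames[1:]:
        prev = out[-1]
        moving = np.abs(x - prev) > motion_thresh
        y = alpha * prev + (1 - alpha) * x
        y[moving] = x[moving]          # no temporal averaging where motion occurs
        out.append(y)
    return out

# Static synthetic scene with heavy noise: the recursion should shrink
# the noise standard deviation substantially after a few frames.
rng = np.random.default_rng(0)
clean = np.full((32, 32), 100.0)
noisy = [clean + rng.normal(0, 10, clean.shape) for _ in range(20)]
filtered = recursive_filter(noisy)
print(np.std(noisy[-1] - clean), np.std(filtered[-1] - clean))
```

For a first-order recursion the steady-state noise variance is reduced by the factor (1-alpha)^2/(1-alpha^2), about a third of the input variance at alpha = 0.8, which is why stronger alpha needs better motion handling.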
|