391
An Analysis of Disability Specific Curriculum in a Specialized School for the Blind: A Case Study. Lohmeier, Keri Lee. January 2005.
This study analyzes the changes in disability-specific curriculum, driven by academic priorities, that took place in one specialized school for the blind from 1995 to 2005. The framework used in this case study analyzed the school's past and present (1) Artifacts: visible organizational structures and materials; (2) Expressed Values: explicitly written or stated beliefs and policies; and (3) Underlying Assumptions: unspoken attitudes and beliefs. Variables for change in the areas of teacher training, team teaching, evaluation systems, IEPs, state standards, the school improvement plan, short-term and summer programming, and the residential program were all targeted to balance academics with an Expanded Core Curriculum. The results indicate a balanced curriculum for some of the variables, while other areas continue to reflect the struggle of competing mandates.
392
Diagnosing spatial variation patterns in manufacturing processes. Lee, Ho Young. 30 September 2004.
This dissertation discusses a method that will aid in diagnosing the root causes of product and process variability in complex manufacturing processes when large quantities of multivariate in-process measurement data are available. As in any data mining application, the objective is the extraction of useful information from the data. A linear structured model, similar to the standard factor analysis model, is used to generically represent the variation patterns that result from the root causes. Blind source separation methods are investigated to identify spatial variation patterns in manufacturing data. Further, the existing blind source separation methods are extended and enhanced to yield a more effective, accurate, and widely applicable approach to manufacturing variation diagnosis. An overall strategy is offered to guide the use of the presented methods in conjunction with alternative methods.
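As a hedged illustration of the kind of analysis described in this abstract, the sketch below simulates a part measured at 20 points, mixes two independent variation sources through an unknown spatial pattern matrix (the factor-analysis-like linear model), and recovers the patterns with scikit-learn's FastICA. The sources, mixing matrix, and use of FastICA are illustrative assumptions, not the dissertation's specific method.

```python
# Toy illustration of blind source separation for spatial variation diagnosis.
# Assumes a simulated part with 20 measurement points and two independent
# root-cause sources; the linear model x = A s + noise mirrors the structured
# (factor-analysis-like) model described in the abstract.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_parts, n_points = 500, 20                     # parts measured, sensors per part

# Two hypothetical variation sources (e.g., fixture tilt, a binary process shift)
s = np.vstack([rng.laplace(size=n_parts),
               np.sign(rng.standard_normal(n_parts))])

A = rng.standard_normal((n_points, 2))          # unknown spatial variation patterns
X = s.T @ A.T + 0.05 * rng.standard_normal((n_parts, n_points))

ica = FastICA(n_components=2, random_state=0)
sources_hat = ica.fit_transform(X)              # estimated source signals
patterns_hat = ica.mixing_                      # columns ~ estimated spatial patterns

print(patterns_hat.shape)                       # (n_points, 2)
```

The recovered columns of the mixing matrix play the role of the spatial variation patterns to be matched against candidate root causes.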
393
High Speed Clock and Data Recovery Techniques. Abiri, Behrooz. 01 December 2011.
This thesis presents two contributions in the area of high-speed clock and data recovery systems. These contributions focus on fast phase recovery and adaptive equalization techniques.
The first contribution of this thesis is an adaptive engine for a 2x blind sampling receiver. The proposed adaptation engine is able to find the phase-dependent DFE coefficients of the receiver on the fly.
The second contribution is a burst-mode clock and data recovery architecture which uses an analog phase interpolator. The proposed burst-mode CDR is capable of locking to the first data transition it receives. The phase interpolator uses the inherent timing information in the data transition to rotate the phase of a reference clock and align it with the incoming data edge. The feasibility of the concept is demonstrated through fabrication and measurements.
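The following sketch is only a loose, software-level stand-in for the adaptation idea in the first contribution: a sign-sign LMS loop that finds a single DFE feedback coefficient in a decision-directed fashion. The channel, step size, and error definition are assumptions for illustration; the thesis describes a hardware adaptation engine for a 2x blind-sampling receiver, which is not reproduced here.

```python
# Generic sign-sign LMS adaptation of a one-tap DFE feedback coefficient for a
# binary (NRZ) link with post-cursor ISI.  A simplified stand-in for on-the-fly
# coefficient adaptation, not the thesis circuit.
import numpy as np

rng = np.random.default_rng(1)
n_bits = 20000
bits = rng.choice([-1.0, 1.0], size=n_bits)

h = [1.0, 0.45]                                   # channel: main cursor + one post-cursor
rx = np.convolve(bits, h)[:n_bits] + 0.02 * rng.standard_normal(n_bits)

c = 0.0                                           # DFE feedback tap (should approach 0.45)
mu = 1e-3                                         # adaptation step size
prev_decision = 0.0
for k in range(n_bits):
    eq = rx[k] - c * prev_decision                # subtract estimated post-cursor ISI
    decision = 1.0 if eq >= 0 else -1.0
    err = eq - decision                           # decision-directed error
    c += mu * np.sign(err) * np.sign(prev_decision)   # sign-sign LMS update
    prev_decision = decision

print(f"adapted DFE tap: {c:.3f}")                # expect roughly 0.45
```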
394
Hybrid Time and Time-Frequency Blind Source Separation Towards Ambient System Identification of Structures. Hazra, Budhaditya. January 2010.
Blind source separation methods such as independent component analysis (ICA) and second-order blind identification (SOBI) have shown considerable potential in the area of ambient vibration system identification. The objective of these methods is to separate the modal responses, or sources, from the measured output responses, without knowledge of the excitation. Several frequency-domain and time-domain methods have been proposed and successfully implemented in the literature. Whereas frequency-domain methods pose the challenges typical of working with signals in the frequency domain, popular time-domain methods such as NExT/ERA and SSI have limitations in dealing with noise, low sensor density, modes with low energy content, or systems with closely spaced modes, such as structures equipped with passive energy dissipation devices, for example tuned mass dampers.

Motivated by these challenges, the current research focuses on developing methods to address the problems of separability of sources with low energy content, closely spaced modes, and under-determined blind identification, that is, when the number of response measurements is less than the number of sources. These methods, which exploit the time and frequency diversities of the measured outputs, are referred to as hybrid time and time-frequency source separation methods. The hybrid methods are classified into two categories. In the first, the basic principles of modified SOBI are extended using the stationary wavelet transform (SWT) in order to improve the separability of sources, thereby improving the quality of identification. In the second, empirical mode decomposition is employed to extract the intrinsic mode functions from measurements, followed by an estimation of the mode-shape matrix using iterative and/or non-iterative procedures within the framework of modified SOBI. Both experimental and large-scale structural simulation results are included to demonstrate the applicability of these hybrid approaches to structural system identification problems.
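As a rough illustration of the second-order principle underlying SOBI, the sketch below whitens two synthetic "modal" responses mixed by an unknown matrix and diagonalizes a single time-lagged covariance. This is an AMUSE-like, one-lag simplification; full SOBI jointly diagonalizes several lags, and the hybrid methods of the thesis additionally use the SWT and EMD, which are not shown. All signals and matrices are hypothetical.

```python
# Minimal second-order blind identification sketch: whiten the measurements,
# then eigendecompose one time-lagged covariance of the whitened data.
import numpy as np

rng = np.random.default_rng(2)
fs = 100.0
t = np.arange(0, 20, 1 / fs)

# Two well-separated synthetic modal responses acting as sources
s1 = np.exp(-0.05 * t) * np.sin(2 * np.pi * 1.0 * t)
s2 = np.exp(-0.03 * t) * np.sin(2 * np.pi * 2.7 * t)
S = np.vstack([s1, s2])

A = np.array([[1.0, 0.6], [0.4, 1.0]])            # unknown mode-shape matrix
X = A @ S + 0.01 * rng.standard_normal(S.shape)   # sensor measurements
X = X - X.mean(axis=1, keepdims=True)

# Whitening from the zero-lag covariance
R0 = X @ X.T / X.shape[1]
d, E = np.linalg.eigh(R0)
W = E @ np.diag(d ** -0.5) @ E.T
Z = W @ X

# Symmetrized lagged covariance of the whitened data, then eigendecomposition
tau = 5
Rtau = Z[:, :-tau] @ Z[:, tau:].T / (Z.shape[1] - tau)
Rtau = 0.5 * (Rtau + Rtau.T)
_, V = np.linalg.eigh(Rtau)

S_hat = V.T @ Z                                   # recovered sources (up to scale/order)
A_hat = np.linalg.inv(W) @ V                      # estimated mode shapes
print(A_hat / A_hat[0, :])                        # compare columns with A up to scaling/permutation
```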
395
Blind dereverberation of speech from moving and stationary speakers using sequential Monte Carlo methods. Evers, Christine. January 2010.
Speech signals radiated in confined spaces are subject to reverberation due to reflections off surrounding walls and obstacles. Reverberation leads to severe degradation of speech intelligibility and can be prohibitive for applications where speech is digitally recorded, such as audio conferencing or hearing aids. Dereverberation of speech is therefore an important field in speech enhancement. Driven by consumer demand, blind speech dereverberation has become a popular field in the research community and has led to many interesting approaches in the literature. However, most existing methods are dictated by their underlying models and hence suffer from assumptions that constrain the approaches to specific subproblems of blind speech dereverberation. For example, many approaches limit the dereverberation to voiced speech sounds, leading to poor results for unvoiced speech. Few approaches tackle single-sensor blind speech dereverberation, and only a very limited subset allows for dereverberation of speech from moving speakers. The aim of this dissertation is therefore the development of a flexible and extendible framework for blind speech dereverberation accommodating different speech sound types, single or multiple sensors, and both stationary and moving speakers. Bayesian methods benefit from, rather than being dictated by, appropriate model choices. Therefore, the problem of blind speech dereverberation is considered from a Bayesian perspective in this thesis. A generic sequential Monte Carlo approach accommodating a multitude of models for the speech production mechanism and room transfer function is consequently derived. In this approach both the anechoic source signal and the reverberant channel are estimated using their optimal estimators by means of Rao-Blackwellisation of the state space of unknown variables. The remaining model parameters are estimated using sequential importance resampling. The proposed approach is implemented for two different speech production models for stationary speakers, demonstrating substantial reduction in reverberation for both unvoiced and voiced speech sounds. Furthermore, the channel model is extended to facilitate blind dereverberation of speech from moving speakers. Due to the structure of the measurement model, both single- and multi-microphone processing are facilitated, accommodating physically constrained scenarios where only a single sensor can be used as well as allowing for the exploitation of spatial diversity in scenarios where the physical size of microphone arrays is of no concern. This dissertation concludes with a survey of possible directions for future research, including the use of switching Markov source models, joint target tracking and enhancement, and an extension to subband processing for improved computational efficiency.
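The sketch below shows only the generic sequential Monte Carlo machinery referred to in the abstract: a bootstrap sequential importance resampling filter tracking a slowly varying AR(1) coefficient from noisy observations. It is not the thesis's Rao-Blackwellised dereverberation algorithm, and all model parameters are illustrative assumptions.

```python
# Generic bootstrap particle filter (sequential importance resampling) that
# tracks a slowly varying AR(1) coefficient from noisy observations.
import numpy as np

rng = np.random.default_rng(3)
T, N = 400, 500                                   # time steps, particles

# Simulate data: x_t = a_t * x_{t-1} + v_t,  y_t = x_t + w_t
a_true = 0.6 + 0.3 * np.sin(2 * np.pi * np.arange(T) / T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = a_true[t] * x[t - 1] + 0.3 * rng.standard_normal()
y = x + 0.2 * rng.standard_normal(T)

# Particles over the unknown coefficient a_t (random-walk prior)
a = rng.uniform(-1, 1, N)
x_part = np.zeros(N)
a_hat = np.zeros(T)
for t in range(1, T):
    a += 0.02 * rng.standard_normal(N)                   # propagate parameter
    x_part = a * x_part + 0.3 * rng.standard_normal(N)   # propagate state
    w = np.exp(-0.5 * ((y[t] - x_part) / 0.2) ** 2)      # likelihood weights
    w /= w.sum()
    a_hat[t] = np.sum(w * a)                             # posterior-mean estimate
    idx = rng.choice(N, size=N, p=w)                     # resample (SIR)
    a, x_part = a[idx], x_part[idx]

print(np.mean(np.abs(a_hat[50:] - a_true[50:])))         # rough tracking error
```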
396
The Impacts of Real-time Knowledge Based Personal Lighting Control on Energy Consumption, User Satisfaction and Task Performance in Offices. Gu, Yun. 01 May 2011.
Current building design and engineering practices emphasizing energy conservation can be improved further by developing methods that focus on building occupants' needs and interests in conservation. Specifically, energy-effective building performance improvements cannot reach the desired goals if the resulting indoor environmental conditions do not meet the thermal, visual, and air quality needs of the occupants. Meeting both energy conservation and human performance requirements simultaneously requires giving the occupants information regarding indoor environmental qualities and the energy implications of possible individual decisions. This requires that building control components and systems enable occupants to understand how the building operates and how their own actions meet both their needs and the energy and environmental goals of the building project.
The goal of the research and experiments in this dissertation is to explore whether real-time information on the visual comfort requirements of a variety of tasks, and on how to simultaneously conserve energy, improves occupant behavior toward meeting both objectives. Two workplaces in the Robert L. Preger Intelligent Workplace were equipped to test the performance of 60 invited participants in conducting computer-based tasks and a paper-based task under three different lighting controls:
1) Centralized lighting control with no user choice
2) User control of
- blind positions for daylight shading
- ceiling based lighting fixture luminance output level
- task lighting: on/off
3) User control of the three components (as listed under point 2 above), with simultaneous information provided regarding energy and related CO2 emissions implications, appropriate light levels for the task requirements, and the best choices to meet both task requirements and energy conservation goals.
The main findings of the experiments are that real-time information (as listed under point 3 above) enables users to meet the visual quality requirements for both the computer tasks and the paper task, and to conserve significant amounts of electricity for lighting. Furthermore, the 60 invited participants were asked to rate the importance of the four types of information provided under point 3 above. While individual users rated the importance of the different information categories differently, the overall assessments were considered significant.
397
Teckenspråk i taktil form: Turtagning och frågor i dövblindas samtal på teckenspråk [Sign Language in Tactile Form: Turn-Taking and Questions in Deaf-Blind People's Sign Language Conversations]. Mesch, Johanna. January 1998.
The present study focuses on turn-taking and questions in conversations between deaf-blind persons using tactile sign language, i.e. communicating by holding each other's hands, and on how sign language utterances change in the tactile mode when the non-manual signals characteristic of turn-taking and interrogative sentences in (visual) sign language are not used. The material consists of six video-recorded conversations (four with deaf-blind pairs and two where one person is deaf and one is deaf-blind). Parts of the material, viz. 168 sequences with questions and answers, have been transcribed and analyzed. The analysis shows that deaf-blind signers use their hands in two different conversation positions. In the monologue position both of the signer's hands are held under the hands of the listener, whereas in the dialogue position both participants hold their hands in identical ways: the right hand under the other person's left hand and the left hand on top of the other person's right hand. It is described how the two positions affect the structure of one- and two-handed signs and how back-channeling, linguistic as well as non-linguistic (with different kinds of tapping), is used in the two positions. The analysis shows that differences in the vertical and horizontal planes are used in turn-taking regulation. Using four different conversational levels, the signer can signal, for example, turn change by lowering his/her hands from the turn level to the turn-change level at the end of his/her turn. The horizontal plane is divided into three different turn zones. The turn holder uses his/her own turn zone close to the body and finishes the turn by moving the hands to the joint zone midway between the interlocutors or into the listener's zone. The analyzed utterances function as questions: yes/no-questions (82) as well as wh-questions (55). It was hypothesized that yes/no-questions are marked with the manual signal of extended duration of the last sign of the utterance, one of the interrogative signals of visual signing, but this was only true for 46% of the yes/no-questions in the material. Since extended duration of the last sign also signals turn change in, for example, statements, it is not regarded as an interrogative signal. Additional markers of yes/no-questions include the sign INDEX-adr ('you') with its variant INDEX-adr-long, used as a summons signal, and repetitions of signs or sentences. As for the wh-questions, a majority are made with a manual wh-sign. Generally, if there are no interrogative signals, the context and content of the utterance account for its interpretation as a question. To avoid misunderstandings, questions and non-linguistic signals are used in checking turns, where the signer requests back-channeling or the listener requests repetition or clarification. To order the book, send an e-mail to exp@ling.su.se.
398
Low-rank matrix recovery: blind deconvolution and efficient sampling of correlated signals. Ahmed, Ali. 13 January 2014.
Low-dimensional signal structures naturally arise in a large set of applications in various fields such as medical imaging, machine learning, and signal and array processing. A ubiquitous low-dimensional structure in signals and images is sparsity, and a new sampling theory, namely compressive sensing, proves that sparse signals and images can be reconstructed from incomplete measurements. The signal recovery is achieved using efficient algorithms such as ℓ1-minimization. Recently, the research focus has broadened to encompass other interesting low-dimensional signal structures such as group sparsity and low-rank structure.

This thesis considers low-rank matrix recovery (LRMR) from various structured-random measurement ensembles. These results are then employed for an in-depth investigation of the classical blind deconvolution problem from a new perspective, and for the development of a framework for the efficient sampling of correlated signals (signals lying in a subspace).
In the first part, we study blind deconvolution: the separation of two unknown signals from an observation of their convolution. We recast the deconvolution of discrete signals w and x as the recovery of the rank-1 matrix wx* from a structured random measurement ensemble. The convex relaxation of the problem leads to a tractable semidefinite program. Using some of the mathematical tools developed recently for LRMR, we show that if the signals convolved with one another live in known subspaces, then this semidefinite relaxation is provably effective.
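A toy version of the lifting idea, under the assumption of generic Gaussian measurement vectors standing in for the convolution/subspace structure analysed in the thesis: recover a rank-1 matrix (here h m^T) from bilinear measurements by nuclear-norm minimization with cvxpy.

```python
# Recover a rank-1 matrix X0 = h m^T from bilinear measurements
# y_l = b_l^T X0 c_l by nuclear-norm minimization.  Gaussian measurement
# vectors are illustrative stand-ins for the structured (DFT/subspace)
# operator in the thesis; requires cvxpy.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(4)
K, N, L = 8, 8, 120                    # subspace dimensions, number of measurements

h, m = rng.standard_normal(K), rng.standard_normal(N)
X0 = np.outer(h, m)                    # unknown rank-1 (lifted) matrix

B = rng.standard_normal((L, K))        # illustrative measurement vectors
C = rng.standard_normal((L, N))
y = np.array([B[l] @ X0 @ C[l] for l in range(L)])

X = cp.Variable((K, N))
constraints = [B[l] @ X @ C[l] == y[l] for l in range(L)]
prob = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)
prob.solve(solver=cp.SCS)

print(np.linalg.norm(X.value - X0) / np.linalg.norm(X0))   # small relative error
```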
In the second part, we design various efficient sampling architectures for signals acquired using large arrays. The sampling architectures exploit the correlation in the signals to acquire them at a sub-Nyquist rate. The sampling devices are designed using analog components with clear implementation potential. For each of the sampling schemes, we show that the signal reconstruction can be framed as an LRMR problem from a structured-random measurement ensemble. The signals can be reconstructed using the familiar nuclear-norm minimization. The sampling theorems derived for each architecture show that the LRMR framework achieves Shannon-Nyquist performance for the sub-Nyquist acquisition of correlated signals.
In the final part, we study low-rank matrix factorizations using randomized linear algebra. This method allows us to use a least-squares program to reconstruct the unknown low-rank matrix from samples of its row and column spaces. Based on the principles of this method, we then design sampling architectures that not only acquire correlated signals efficiently but also require only a simple least-squares program for the signal reconstruction.
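A minimal sketch of the row/column-space sampling idea, assuming a generalized Nyström-style factorization with Gaussian test matrices and a small least-squares solve; the thesis's specific sampling architectures are not reproduced.

```python
# Randomized low-rank factorization: sample the column and row spaces of a
# low-rank matrix with random test matrices and recover it from a small
# least-squares solve (generalized Nystrom).
import numpy as np

rng = np.random.default_rng(5)
m, n, r = 200, 150, 5
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # rank-r matrix

k, l = 10, 15                           # sketch sizes (slightly larger than r)
Omega1 = rng.standard_normal((n, k))
Omega2 = rng.standard_normal((m, l))

Y = A @ Omega1                          # column-space sample
Z = Omega2.T @ A                        # row-space sample
W = Omega2.T @ Y                        # small core matrix

# A ~= Y @ pinv(W) @ Z, with the core obtained from a least-squares solve
core = np.linalg.lstsq(W, Z, rcond=None)[0]
A_hat = Y @ core

print(np.linalg.norm(A - A_hat) / np.linalg.norm(A))    # near machine precision
```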
A theoretical analysis of all of the LRMR problems above is presented in this thesis, providing the number of measurements sufficient for successful reconstruction of the unknown low-rank matrix and upper bounds on the recovery error in both the noiseless and noisy cases. For each LRMR problem, we also discuss a computationally feasible algorithm, including a least-squares-based algorithm and some of the fastest algorithms for solving nuclear-norm minimization.
399
Independent component analysis for maternal-fetal electrocardiography. Marcynuk, Kathryn L. 09 January 2015.
Separating unknown signal mixtures into their constituent parts is a difficult problem in signal processing called blind source separation. One of the benchmark problems in this area is the extraction of the fetal heartbeat from an electrocardiogram in which it is overshadowed by a strong maternal heartbeat. This thesis presents a study of a signal separation technique called independent component analysis (ICA) in order to assess its suitability for the maternal-fetal ECG separation problem. This includes an analysis of ICA on deterministic, stochastic, simulated, and recorded ECG signals. The experiments presented in this thesis demonstrate that ICA is effective on linear mixtures of known simulated or recorded ECGs. The performance of ICA was measured using visual comparison, heart rate extraction, and energy, information-theoretic, and fractal-based measures. ICA extraction of clinically recorded maternal-fetal ECG mixtures, in which the source signals were unknown, was successful at recovering the fetal heart rate.
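A hedged toy of the separation-plus-heart-rate pipeline described above: two synthetic pulse trains stand in for the maternal and fetal ECGs, are mixed onto two "electrodes", separated with scikit-learn's FastICA, and a beat rate is read off each recovered component with a simple peak detector. Waveforms, rates, and mixing values are illustrative assumptions, not recorded data.

```python
# Toy maternal/fetal separation with FastICA and heart-rate estimation
# via peak detection on the recovered components.
import numpy as np
from scipy.signal import find_peaks
from sklearn.decomposition import FastICA

fs = 250.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(6)

def pulse_train(t, bpm, width=0.03):
    """Narrow Gaussian pulses at a given beats-per-minute rate."""
    period = 60.0 / bpm
    phase = (t % period) - period / 2
    return np.exp(-0.5 * (phase / width) ** 2)

maternal = 1.0 * pulse_train(t, 75)              # stronger, slower beat
fetal = 0.25 * pulse_train(t, 140)               # weaker, faster beat

A = np.array([[1.0, 0.8], [0.9, 0.3]])           # assumed electrode mixing
X = (A @ np.vstack([maternal, fetal])).T
X += 0.01 * rng.standard_normal(X.shape)

S_hat = FastICA(n_components=2, random_state=0).fit_transform(X)

for comp in S_hat.T:
    comp = np.abs(comp - np.median(comp))        # handle arbitrary sign/offset
    peaks, _ = find_peaks(comp, height=0.5 * comp.max(), distance=0.3 * fs)
    print("estimated rate: %.0f bpm" % (60.0 * len(peaks) / t[-1]))
```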
400
Suppression of impulsive noise in wireless communication. Cui, Qiaofeng. January 2014.
This report investigates whether the FastICA algorithm can be applied at the GPS receiver end to eliminate impulsive noise. Because impulsive noise is unpredictable in its pattern and energetic enough to swamp the signal of interest, traditional signal selection methods are of little use in dealing with this problem. Blind source separation (BSS) is a promising approach, but most BSS algorithms other than FastICA show some degree of dependence on the pattern of the noise. This thesis discusses the basic mathematical modelling of the problem, along with the principles of the commonly used fast independent component analysis (FastICA) based on a fixed-point algorithm. To verify that the method can remove impulsive noise from digital BPSK-modulated signals in an industrial environment, an observation signal mixed with additive impulsive noise is generated and separated by the FastICA method. In the last part of the thesis, the FastICA algorithm is applied to the GPS receiver modeled in the SoftGNSS project and shown to be effective in industrial applications; the results are analyzed.
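A minimal sketch of the separation experiment described in the abstract, under assumed parameters: a BPSK waveform and Bernoulli-Gaussian impulsive noise are mixed onto two observation channels and unmixed with FastICA. The mixing matrix, impulse rate, and amplitudes are illustrative; the SoftGNSS receiver model is not reproduced.

```python
# BPSK waveform plus Bernoulli-Gaussian impulsive noise, observed on two
# channels and separated with FastICA.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(7)
n = 20000
bpsk = np.repeat(rng.choice([-1.0, 1.0], size=n // 10), 10)   # 10 samples/symbol

# Impulsive noise: rare but very large spikes
impulses = (rng.random(n) < 0.01) * 20.0 * rng.standard_normal(n)

A = np.array([[1.0, 0.7], [0.5, 1.0]])            # assumed channel mixing
X = (A @ np.vstack([bpsk, impulses])).T

S_hat = FastICA(n_components=2, random_state=0).fit_transform(X)

# Pick the component that correlates best with the clean BPSK waveform
corr = [abs(np.corrcoef(c, bpsk)[0, 1]) for c in S_hat.T]
print("best correlation with clean BPSK: %.3f" % max(corr))
```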