221 |
Sparse Linear Modeling of Speech from EEG / Gles Linjär Modellering av Tal från EEG — Tiger, Mattias, January 2014 (has links)
For people with hearing impairments, attending to a single speaker against a multi-talker background can be very difficult, and something current hearing aids can barely help with. Recent studies have shown that the audio stream a listener focuses on can be identified among the surrounding audio streams using EEG and linear models. This raises the possibility of using EEG to unconsciously control future hearing aids, such that the attended sounds are enhanced while the rest are damped. For such hearing aids to be practical for everyday use, they should rely on something other than a motion-sensitive, precisely placed EEG cap. This could possibly be achieved by placing the electrodes together with the hearing aid in the ear. One of the leading hearing aid manufacturers, Oticon, and its research lab Eriksholm Research Centre have recorded an EEG data set of people listening to sentences, in which electrodes were placed in and closely around the ears. We have analyzed the data set by applying a range of signal processing approaches, mainly in the context of audio estimation from EEG. Two different types of linear sparse models based on L1-regularized least squares are formulated and evaluated, providing automatic dimensionality reduction in that they significantly reduce the number of channels needed. The first model is based on linear combinations of spectrograms and the second on linear temporal filtering. We have investigated the usefulness of the in-ear electrodes and found some positive indications. All models explored consider the in-ear electrodes to be the most important, or among the more important, of the 128 electrodes in the EEG cap. This could be a positive indication of the future possibility of using only electrodes in the ears for future hearing aids.
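The channel-pruning effect of L1-regularized least squares can be sketched in a few lines. The following is a minimal illustration, not the thesis's actual models: the "channels" are random regressors, and the lasso is solved with plain ISTA (proximal gradient descent); zeros in the weight vector are what drop electrodes.

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=1000):
    """L1-regularised least squares via ISTA (proximal gradient).

    Minimises 0.5*||y - X w||^2 + lam*||w||_1; zeros in w drop channels.
    """
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)
        z = w - grad / L
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 128))        # 200 samples, 128 mock "channels"
w_true = np.zeros(128)
w_true[[3, 60, 127]] = [1.5, -2.0, 1.0]    # only three channels carry signal
y = X @ w_true + 0.05 * rng.standard_normal(200)
w_hat = lasso_ista(X, y, lam=5.0)
print(np.nonzero(np.abs(w_hat) > 0.1)[0])  # few channels survive
```

The surviving support recovers the informative channels; in the thesis's setting the analogous sparsity pattern indicates which electrodes matter.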
|
222 |
The role of the human nasal cavity in patterns of craniofacial covariation and integration — Lindal, Joshua, 18 January 2016 (has links)
Climate has a selective influence on nasal cavity morphology. Due to the constraints of cranial integration, naturally selected changes in one structure necessitate changes in others in order to maintain structural and functional cohesion. The relationships between climate and skull/nasal cavity morphology have been explored, but the integrative role of nasal variability within the skull as a whole has not. This thesis presents two hypotheses: 1) patterns of craniofacial integration observed in 2D can be reproduced using 3D geometric morphometric techniques; 2) the nasal cavity exhibits a higher level of covariation with the lateral cranial base than with other parts of the skull, since differences in nasal morphology and basicranial breadth have both been linked to climatic variables. The results support the former hypothesis, but not the latter; covariation observed between the nasal cavity and other cranial modules may suggest that these relationships are characterized by a unique integrative relationship. / February 2016
|
223 |
Aspects of model development using regression quantiles and elemental regressions — Ranganai, Edmore, 03 1900 (has links)
Dissertation (PhD)--University of Stellenbosch, 2007. / ENGLISH ABSTRACT: It is well known that ordinary least squares (OLS) procedures are sensitive to deviations from
the classical Gaussian assumptions (outliers) as well as data aberrations in the design space.
The two major data aberrations in the design space are collinearity and high leverage.
Leverage points can also induce or hide collinearity in the design space. Such leverage points
are referred to as collinearity influential points. As a consequence, over the years, many
diagnostic tools to detect these anomalies as well as alternative procedures to counter them
were developed. To counter deviations from the classical Gaussian assumptions many robust
procedures have been proposed. One such class of procedures is the Koenker and Bassett
(1978) Regression Quantiles (RQs), which are natural extensions of order statistics to the
linear model. RQs can be found as solutions to linear programming problems (LPs). The basic
optimal solutions to these LPs (which are RQs) correspond to elemental subset (ES)
regressions, which consist of subsets of minimum size to estimate the necessary parameters of
the model.
On the one hand, some ESs correspond to RQs. On the other hand, in the literature it is shown
that many OLS statistics (estimators) are related to ES regression statistics (estimators).
Therefore there is an inherent relationship amongst the three sets of procedures. The
relationship between the ES procedure and the RQ one, has been noted almost “casually” in
the literature while the latter has been fairly widely explored. Using these existing
relationships between the ES procedure and the OLS one as well as new ones, collinearity,
leverage and outlier problems in the RQ scenario were investigated. Also, a lasso procedure
was proposed as a variable selection technique in the RQ scenario, and some tentative results
were given for it. These results are promising.
Single case diagnostics were considered as well as their relationships to multiple case ones. In
particular, multiple cases of the minimum size to estimate the necessary parameters of the
model, were considered, corresponding to a RQ (ES). In this way regression diagnostics were
developed for both ESs and RQs. The main problems that affect RQs adversely are
collinearity and leverage due to the nature of the computational procedures and the fact that
RQs’ influence functions are unbounded in the design space but bounded in the response
variable. As a consequence of this, RQs have a high affinity for leverage points and a high
exclusion rate of outliers. The influential picture exhibited in the presence of both leverage points and outliers is the net result of these two antagonistic forces. Although RQs are
bounded in the response variable (and therefore fairly robust to outliers), outlier diagnostics
were also considered in order to have a more holistic picture.
The investigations comprised analytic means as well as simulation. Furthermore,
applications were made to artificial computer generated data sets as well as standard data sets
from the literature. These revealed that the ES based statistics can be used to address
problems arising in the RQ scenario to some degree of success. However, due to the
interdependence between the different aspects, viz. the one between leverage and collinearity
and the one between leverage and outliers, “solutions” are often dependent on the particular
situation. In spite of this complexity, the research did produce some fairly general guidelines
that can be fruitfully used in practice.
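The abstract's central objects — regression quantiles as LP solutions whose optimal basis picks out an elemental subset — can be sketched directly. A minimal illustration (not the dissertation's diagnostics), using the standard Koenker-Bassett LP formulation solved with SciPy's HiGHS solver:

```python
import numpy as np
from scipy.optimize import linprog

def regression_quantile(X, y, tau=0.5):
    """Koenker-Bassett regression quantile via its LP formulation:

        min  tau*sum(u) + (1-tau)*sum(v)
        s.t. X b + u - v = y,  u, v >= 0,  b free.

    A basic optimal solution interpolates an elemental subset of p points."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 60)
X = np.column_stack([np.ones(60), x])
y = 1.0 + 2.0 * x + rng.standard_normal(60)
b = regression_quantile(X, y, tau=0.5)   # tau=0.5 gives median (L1) regression
print(b)
```

Because the simplex-type solver returns a vertex of the feasible region, the fitted median line passes exactly through p = 2 observations — the elemental subset the abstract refers to.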
|
224 |
Analysis of a Combined GLONASS/Compass-I Navigation Algorithm — Peng, Song; Xiao-yu, Chen; Jian-zhong, Qi, 10 1900 (has links)
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada / Compass-I is a satellite navigation system built by China. It is a regional positioning system based on the double-star positioning principle. Compass-I normally requires active positioning; in this paper, several passive positioning methods are put forward. A combined navigation mode based on GLONASS and Compass-I passive navigation is proposed. The differences between the coordinate and time systems of the two navigation systems are analyzed. The user position is calculated by the least squares method. The combined navigation algorithm improves the visible satellite constellation structure and the positioning precision, so as to ensure the reliability and continuity of the positioning result.
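The least squares position calculation can be sketched as the usual Gauss-Newton iteration on pseudoranges. The satellite coordinates and clock offset below are made up for illustration; a real combined GLONASS/Compass-I solver would also estimate the inter-system time offset as an extra unknown, which is omitted here.

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def solve_position(sats, rho, iters=15):
    """Iterative least squares fix from pseudoranges (Gauss-Newton).

    Unknowns: receiver position (x, y, z) and clock offset as a range c*dt."""
    est = np.zeros(4)
    for _ in range(iters):
        d = np.linalg.norm(sats - est[:3], axis=1)
        pred = d + est[3]
        # Jacobian: unit vectors from satellites toward the receiver, plus 1 for c*dt
        H = np.hstack([(est[:3] - sats) / d[:, None], np.ones((len(rho), 1))])
        est += np.linalg.lstsq(H, rho - pred, rcond=None)[0]
    return est

# hypothetical mixed constellation (ECEF-like positions in metres)
sats = np.array([[20e6, 0, 10e6], [0, 21e6, 9e6], [-19e6, 2e6, 12e6],
                 [3e6, -20e6, 11e6], [15e6, 15e6, 8e6]])
truth = np.array([6.37e6, 1e5, 2e5])
bias = 3e-6 * C                             # receiver clock offset as a range
rho = np.linalg.norm(sats - truth, axis=1) + bias
est = solve_position(sats, rho)
print(est[:3], est[3] / C)                  # position and clock offset
```

With more visible satellites from the combined constellation, the geometry matrix H is better conditioned, which is the precision improvement the paper refers to.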
|
225 |
Linear transceivers for MIMO relays — Shang, Cheng Yu Andy, January 2014 (has links)
Relays can be used in wireless communication systems to provide cell coverage extension, reduce coverage holes and increase throughput. Full duplex (FD) relays, which transmit and receive in the same time slot, can achieve a higher transmission rate than half duplex (HD) relays. However, FD relays suffer from self-interference (SI), caused by the transmitted relay signal being received by the relay's own receiver, which can degrade performance. In the literature, the SI channel is commonly nulled and removed, as this simplifies the problem considerably. In practice, complete nulling is impossible due to channel estimation errors. Therefore, in this thesis we consider the leakage of the SI from the FD relay. Our goal is to reduce the SI and increase the signal-to-noise ratio (SNR) of the relay system. Hence, we propose different precoder and weight vector designs, which may increase the end-to-end (e2e) signal-to-interference-and-noise ratio (SINR) at the destination. Here, a precoder multiplies the signal before transmission, and a weight vector multiplies the received signal after reception.
Initially, we consider an academic example that uses a two-path FD multiple-input multiple-output (MIMO) system. Analysis of the SINR with precoders and weight vectors implemented shows that the SI component carries the same underlying signal as the source signal when a relay processing delay is not considered. Hence, to simulate the SI problem more realistically, we alter our relay design and focus on a one-path FD MIMO relay system with a relay processing delay. For the precoders and weight vectors, choosing the optimal scheme is numerically challenging; thus, we design them using ad-hoc and near-optimal schemes. The ad-hoc schemes for the precoders are singular value decomposition (SVD), optimising the signal-to-leakage-plus-noise ratio (SLNR) via the Rayleigh-Ritz (RR) method, and zero forcing (ZF). The ad-hoc schemes for the weight vectors are SVD, minimum mean squared error (MMSE) and ZF. The near-optimal scheme uses an iterative RR method to compute the source precoder and destination weight vector, while the relay precoder and weight vector are computed using the ad-hoc methods that give the best performance.
The average power and the instantaneous power normalisations are the two methods used to constrain the relay precoder power. The average power normalisation method uses a novel closed-form covariance matrix within an optimisation approach to constrain the relay precoder. This closed-form covariance matrix is derived mathematically using matrix vectorisation techniques. For the instantaneous power normalisation method, the constraint process does not require an optimisation approach. However, with this method the e2e SINR is difficult to calculate, so we use the symbol error rate (SER) as the measure of performance.
The results from the different precoder and weight vector designs suggest that reducing the SI using the relay weight vector instead of the relay precoder results in a higher e2e SINR. Consequently, to increase the e2e SINR, performing complicated processing at the relay receiver is more effective than at the relay transmitter.
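The benefit of handling SI at the relay receiver can be illustrated with a toy single-stream example. The sketch below is not the thesis's actual design — the channel dimensions, power levels, and single-tap SI model are all illustrative assumptions — but it shows an MMSE (max-SINR) combining weight vector suppressing a strong SI leak where a plain matched filter cannot.

```python
import numpy as np

rng = np.random.default_rng(2)
M = 4                                   # receive antennas at the relay
h = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # source channel
g = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # SI leakage channel
p_s, p_i, p_n = 1.0, 10.0, 0.1          # signal, SI, and noise powers

# interference-plus-noise covariance, and the MMSE (max-SINR) weight vector
R = p_i * np.outer(g, g.conj()) + p_n * np.eye(M)
w_mmse = np.linalg.solve(R, h)          # optimal up to an irrelevant scaling
w_mf = h                                # matched filter ignores the SI

def sinr(w):
    sig = p_s * abs(w.conj() @ h) ** 2
    intf = p_i * abs(w.conj() @ g) ** 2 + p_n * np.linalg.norm(w) ** 2
    return sig / intf

print(sinr(w_mmse), sinr(w_mf))
```

The MMSE combiner steers a spatial null toward g while keeping gain along h, which is the kind of receive-side processing the conclusion above favours.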
|
226 |
Archeologinių duomenų analizė. Sukimo ašies radimas / Analysis of archaeological data. Estimation of the axis of rotation — Misiukevičius, Ramūnas, 30 June 2014 (has links)
Rapidly developing information technologies (IT) have not passed archaeology by. Archaeologists increasingly use various computer programs not only to document, visualise, or reconstruct archaeological material, but also to reconstruct and model human activity, daily life, and living environments. This task requires a multi-stage analysis to establish the origin, type, originality, and purpose of the finds. With this information we can learn a great deal about the people who used these objects: their knowledge, tools, customs, and migration, among much else. How much we can know about antiquity depends on the finds and on our ability to analyse them. This work presents one method for analysing pottery sherds: estimation of the axis of rotation. This is the first and essential stage in the analysis of finds of this type, since its results determine the subsequent analyses: profile-line extraction, symmetry checking, segmentation, object typology, reconstruction, and ultimately the analysis of people's lives. Errors at this stage are critical for the later stages, and the resulting knowledge could mislead research into ancient cultures, their distribution, and their migration. The work discusses methods for finding the axis of rotation, their advantages and disadvantages, with examples.
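One simple way to estimate a rotation axis — not necessarily one of the methods compared in the work — is to slice the point cloud perpendicular to an approximately known axis direction, fit a circle to each slice by algebraic (Kåsa) least squares, and then fit a straight line through the circle centres. A sketch under the assumption that the axis is roughly aligned with z:

```python
import numpy as np

def fit_circle(xy):
    """Kåsa algebraic least squares circle fit; returns the centre (a, b)."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    sol, *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    return sol[:2]

def estimate_axis(points, n_slices=10):
    """Slice the cloud along z, fit a circle per slice, then fit a line
    through the circle centres: centre(z) = c0 + c1*z for x and y."""
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max(), n_slices + 1)
    zs, centres = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (z >= lo) & (z <= hi)
        if m.sum() >= 5:
            centres.append(fit_circle(points[m, :2]))
            zs.append(z[m].mean())
    zs, centres = np.array(zs), np.array(centres)
    A = np.column_stack([np.ones(len(zs)), zs])
    coef, *_ = np.linalg.lstsq(A, centres, rcond=None)
    return coef            # row 0: axis point at z=0; row 1: tilt per unit z

# synthetic vessel: radius grows with z, axis offset at (0.3, -0.2), along z
rng = np.random.default_rng(3)
z = rng.uniform(0, 1, 2000)
r = 1.0 + 0.3 * z
th = rng.uniform(0, 2 * np.pi, 2000)    # full vessel; a real sherd covers less
pts = np.column_stack([0.3 + r * np.cos(th), -0.2 + r * np.sin(th), z])
pts[:, :2] += 0.005 * rng.standard_normal((2000, 2))
coef = estimate_axis(pts)
print(coef[0])                          # axis intercept, near (0.3, -0.2)
```

For a small sherd covering only part of the circumference the circle fits become much less stable, which is exactly why errors at this stage propagate into all later analyses.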
|
227 |
Post-manoeuvre and online parameter estimation for manned and unmanned aircraft — Jameson, Pierre-Daniel, January 2013 (has links)
Parameterised analytical models that describe the trimmed in-flight behaviour of classical aircraft have been studied and are widely accepted by the flight dynamics community. Therefore, the primary role of aircraft parameter estimation is to quantify the parameter values which make up the models and define the physical relationship of the air vehicle with respect to its local environment. Nevertheless, a priori empirical predictions dependent on aircraft design parameters also exist, and these provide a useful means of generating preliminary values predicting the aircraft behaviour at the design stage. However, at present the only feasible means of actually proving and validating these parameter values remains extracting them through physical experimentation, either in a wind tunnel or from a flight test. With the advancement of UAVs, and in particular smaller UAVs (less than 1 m span), the ability to fly the full-scale vehicle and generate flight test data presents an exciting opportunity. Furthermore, UAV testing lends itself well to rapid prototyping with the use of COTS equipment. Real-time system identification was first used to monitor highly unstable aircraft behaviour in non-linear flight regimes while expanding the operational flight envelope. Recent development has focused on creating self-healing control systems, such as adaptive re-configurable control laws, to provide robustness against airframe damage, control surface failures or in-flight icing. In the case of UAVs, real-time identification would facilitate rapid prototyping, especially in low-cost projects with their constrained development time. In a small UAV scenario, flight trials could potentially be focused on dynamic model validation, with the prior verification step done in the simulation environment.
Furthermore, the ability to check the estimated derivatives while the aircraft is flying would enable detection of poor data readings due to deficient excitation manoeuvres or atmospheric turbulence. Appropriate action could then be taken while all the equipment and personnel are in place. This thesis describes the development of algorithms to perform online system identification for UAVs with minimal analyst intervention. Issues pertinent to UAV applications were the type of excitation manoeuvres needed and the instrumentation required to record air data. Throughout the research, algorithm development was undertaken using an in-house Simulink© model of the Aerosonde UAV, which provided a rapid and flexible means of generating simulated data for analysis. In addition, the algorithms were further tested with real flight test data acquired from the Cranfield University Jetstream-31 aircraft G-NFLA during its routine operation as a flying classroom. Two estimation methods were principally considered, the maximum likelihood and least squares estimators, with the latter found to be best suited to the proposed requirements. In time-domain analysis, reconstruction of the velocity state derivatives Ẇ and V̇, needed for the SPPO and DR modes respectively, provided more statistically reliable parameter estimates without the need for an α- or β-vane. By formulating the least squares method in the frequency domain, data issues regarding the removal of bias and trim offsets could be addressed more easily while still obtaining timely and reliable parameter estimates. Finally, the importance of using an appropriate input to excite the UAV dynamics, allowing the vehicle to show its characteristics, must be stressed.
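The frequency-domain least squares idea — bias and trim offsets live at zero frequency, so excluding DC removes them automatically — can be sketched on a scalar surrogate model. Everything below (the first-order dynamics, signal levels, and analysis band) is illustrative, not the thesis's aircraft models.

```python
import numpy as np

# surrogate first-order model xdot = a*x + b*u, with an unknown trim
# offset contaminating the measured state
a_true, b_true = -2.0, 1.5
dt, N = 0.001, 20000
t = np.arange(N) * dt
u = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 3.0 * t)
x = np.zeros(N)
for k in range(N - 1):                       # Euler integration
    x[k + 1] = x[k] + dt * (a_true * x[k] + b_true * u[k])
x_meas = x + 0.7                             # constant trim/bias offset

# equation-error least squares in the frequency domain:
#   jw*X(w) = a*X(w) + b*U(w); excluding w = 0 removes the bias term
skip = 5000                                  # drop the start-up transient
X = np.fft.rfft(x_meas[skip:])
U = np.fft.rfft(u[skip:])
f = np.fft.rfftfreq(N - skip, dt)
band = (f > 0.2) & (f < 5.0)                 # analysis band, DC excluded
jw = 1j * 2 * np.pi * f[band]
A = np.column_stack([X[band], U[band]])
rhs = jw * X[band]
Ar = np.vstack([A.real, A.imag])             # stack to a real-valued LS problem
rr = np.concatenate([rhs.real, rhs.imag])
a_hat, b_hat = np.linalg.lstsq(Ar, rr, rcond=None)[0]
print(a_hat, b_hat)
```

The 0.7 offset never enters the estimate because it only contributes to the excluded DC bin — the same mechanism that lets the thesis sidestep bias and trim removal in the time domain.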
|
228 |
Ground Object Recognition using Laser Radar Data : Geometric Fitting, Performance Analysis, and Applications — Grönwall, Christina, January 2006 (has links)
This thesis concerns detection and recognition of ground objects using data from laser radar systems. Typical ground objects are vehicles and land mines. For these objects, the orientation and articulation are unknown. The objects are placed in natural or urban areas where the background is unstructured and complex. The performance of laser radar systems is analyzed to obtain models of the uncertainties in laser radar data. A ground object recognition method is presented. It handles general, noisy 3D point cloud data. The approach is based on the fact that man-made objects on a large scale can be considered to be of rectangular shape, or can be decomposed into a set of rectangles. Several approaches to rectangle fitting are presented and evaluated in Monte Carlo simulations. Errors-in-variables are present, and thus geometric fitting is used. The objects can have parts that are subject to articulation. A modular least squares method with outlier rejection, which can handle articulated objects, is proposed. This method falls within the iterative closest point framework. Recognition when several similar models are available is discussed. The recognition method is applied in a query-based multi-sensor system. The system covers the process from sensor data to the user interface, i.e., from low-level image processing to high-level situation analysis. In object detection and recognition based on laser radar data, the accuracy of the range values is important. A general direct-detection laser radar system applicable to hard-target measurements is modeled. Three time-of-flight estimation algorithms are analyzed: peak detection, constant fraction detection, and matched filtering. The statistical distribution of uncertainties in time-of-flight range estimation is determined. The detection performance for various shape conditions and signal-to-noise ratios is analyzed. These results are used to model the properties of the range estimation error.
The detectors' performance is compared with the Cramér-Rao lower bound. The performance of a tool for synthetic generation of scanning laser radar data is evaluated. In the measurement system model it is possible to add several design parameters, which makes it possible to test an estimation scheme under different types of system design. A parametric method, based on measurement error regression, that estimates an object's size and orientation is described. Validations of both the measurement system model and the measurement error model, with respect to the Cramér-Rao lower bound, are presented.
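Two of the time-of-flight estimators named above can be sketched on synthetic data. The pulse shape, noise level, and delay below are illustrative assumptions; the point is that correlating with the known pulse shape (matched filtering) localises a weak return far more robustly than raw peak detection.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4000                                         # 1 ns samples
k = np.arange(200)
pulse = np.exp(-0.5 * ((k - 100) / 20.0) ** 2)   # reference pulse, peak at 100
true_delay = 1500                                # round-trip delay in samples
rx = np.zeros(n)
rx[true_delay:true_delay + 200] += 0.4 * pulse   # weak hard-target return
rx += 0.05 * rng.standard_normal(n)              # receiver noise

# peak detection: index of the raw maximum, corrected for the pulse peak offset
tof_peak = int(np.argmax(rx)) - 100

# matched filter: slide the known pulse over the return and find the peak
mf = np.correlate(rx, pulse, mode="valid")
tof_mf = int(np.argmax(mf))

print(tof_peak, tof_mf)                          # both near true_delay = 1500
```

At lower signal-to-noise ratios the raw peak detector starts locking onto noise spikes while the matched filter degrades gracefully — the behaviour the thesis quantifies against the Cramér-Rao lower bound.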
|
229 |
Attitude and Trajectory Estimation for Small Suborbital Payloads — Yuan, Yunxia, January 2017 (has links)
Sounding rockets and small suborbital payloads provide a means for in-situ research of the atmosphere and ionosphere. The trajectory and attitude of the payload are critical for the evaluation of the scientific measurements and experiments. The trajectory gives the location of the measurement, while the attitude determines the orientation of the sensors. This thesis covers methods of trajectory and attitude reconstruction implemented in several experiments with small suborbital payloads carried out by the Department of Space and Plasma Physics in 2012--2016. The problem of trajectory reconstruction based on raw GPS data was studied for small suborbital payloads and formulated as a global least squares optimization problem. The method was applied to flight data from two suborbital payloads of the RAIN REXUS experiment; positions and velocities were obtained with high accuracy. Based on the trajectory reconstruction technique, atmospheric densities, temperatures, and horizontal wind speeds below 80 km were obtained using the rigid free-falling spheres of the LEEWAVES experiment. Comparison with independent data indicates that the results are reliable for densities below 70 km, temperatures below 50 km, and wind speeds below 45 km. Attitude reconstruction of suborbital payloads from yaw-pitch-roll Euler angles was studied. The Euler angles were established by two methods: a global optimization method and an Unscented Kalman Filter (UKF). The comparison shows that the global optimization method provides a more accurate fit to the observations than the UKF. Improving the results of the falling-sphere experiments requires understanding the attitude motion of the sphere. An analytical treatment was developed for a freely falling, axisymmetric sphere under aerodynamic torques; the motion can generally be described as a superposition of precession and nutation. These motion phenomena were modeled numerically and compared to flight data.
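The falling-sphere density retrieval rests on a simple drag balance: equating the measured aerodynamic deceleration to the drag law gives rho = 2*m*|a_drag| / (Cd*A*v^2). A sketch with made-up sphere parameters (the LEEWAVES values are not given in the abstract, so everything numeric here is an assumption):

```python
import numpy as np

def density_from_sphere(m, d, cd, v, a_drag):
    """Atmospheric density from a falling sphere's measured drag deceleration:
    rho = 2*m*|a_drag| / (Cd * A * v**2), with A the sphere cross-section."""
    A = np.pi * (d / 2) ** 2
    return 2 * m * np.abs(a_drag) / (cd * A * v ** 2)

# hypothetical sphere and a synthetic data point
m, d, cd = 0.15, 0.2, 0.9            # kg, m, drag coefficient (assumed)
rho_true = 1.8e-4                    # kg/m^3, roughly 60 km altitude
v = 300.0                            # m/s descent speed
a_drag = 0.5 * rho_true * v ** 2 * cd * np.pi * (d / 2) ** 2 / m
rho_est = density_from_sphere(m, d, cd, v, a_drag)
print(rho_est)                       # recovers rho_true
```

In practice a_drag and v come from differentiating the reconstructed trajectory, which is why the accuracy of the least squares trajectory fit directly limits the altitude range over which the densities are reliable.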
|
230 |
Estimation of Kinetic Parameters From List-Mode Data Using an Indirect Approach — Ortiz, Joseph Christian, January 2016 (has links)
This dissertation explores the possibility of using an imaging approach to model classical pharmacokinetic (PK) problems. The kinetic parameters, which describe the uptake rates of a drug within a biological system, are the parameters of interest. Knowledge of the drug uptake in a system is useful for expediting the drug development process, as well as for providing a dosage regimen for patients. Traditionally, the uptake rate of a drug in a system is obtained by sampling the concentration of the drug in a central compartment, usually the blood, and fitting the data to a curve. In a system consisting of multiple compartments, the number of kinetic parameters is proportional to the number of compartments, and in classical PK experiments the number of identifiable parameters is less than the total number of parameters. Using an imaging approach to model classical PK problems, the support region of each compartment within the system is exactly known, and all the kinetic parameters are uniquely identifiable. To solve for the kinetic parameters, an indirect two-step approach was used: first the compartmental activity was obtained from the data, and then the kinetic parameters were estimated. The novel aspect of the research is using list-mode data to obtain the activity curves from a system, as opposed to a traditional binned approach. Using techniques from information-theoretic learning, particularly kernel density estimation, a non-parametric probability density function for the voltage outputs of each photomultiplier tube, for each event, was generated on the fly and used in a least squares optimization routine to estimate the compartmental activity. The estimability of the activity curves for varying noise levels and time sample densities was explored.
Once an estimate for the activity was obtained, the kinetic parameters were estimated using multiple cost functions and compared to each other using the mean squared error as the figure of merit.
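The kernel density estimation step can be sketched in isolation: a Gaussian KDE over mock photomultiplier-tube voltages. The voltage distribution and bandwidth below are illustrative assumptions, not the instrument's calibration.

```python
import numpy as np

def kde(samples, grid, bw):
    """Gaussian kernel density estimate evaluated on a grid of points."""
    z = (grid[:, None] - samples[None, :]) / bw
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(samples) * bw * np.sqrt(2 * np.pi))

rng = np.random.default_rng(5)
volts = rng.normal(2.0, 0.3, 5000)       # mock PMT voltage outputs
grid = np.linspace(0.5, 3.5, 301)
pdf = kde(volts, grid, bw=0.08)
print(grid[np.argmax(pdf)])              # density peaks near the 2.0 V mean
```

Because no histogram binning is imposed, the estimate can be refreshed event by event — the "on the fly" property that makes it usable with list-mode data.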
|