About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
501

Localisation robuste multi-capteurs et multi-modèles / A robust multi-sensor, multiple-model localisation system

Ndjeng Ndjeng, Alexandre 14 September 2009 (has links)
For several years, much research has sought to provide an accurate, high-integrity solution to the problem of road-vehicle localisation. Most of this work is grounded in probabilistic estimation theory, combining multi-sensor fusion with single-model Kalman filtering through variants adapted to nonlinear systems; the single, complex model is assumed to capture the vehicle's entire dynamics. This thesis proposes a multiple-model approach instead. The study derives from a modular analysis of vehicle dynamics: the evolution space is treated as discrete, and several simple models, each dedicated to a particular manoeuvre, are generated, which improves robustness to modelling defects. The approach is a variant of the IMM algorithm that accounts for the asynchronism of the embedded sensors in the state-estimation process. To this end, a new constrained modelling scheme is developed, allowing the model likelihoods to be updated even in the absence of exteroceptive sensor measurements. The performance of such a system nevertheless requires good-quality sensor data, so several operations are presented for correcting sensor bias and measurement noise and for accounting for the road bank angle. The methodology is validated through a comparison with the probabilistic fusion algorithms EKF, UKF, DD1, DD2, and particle filtering, based on standard accuracy and confidence measures and on statistical consistency and credibility criteria, using synthetic scenarios and then real data.
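For readers unfamiliar with the multiple-model machinery the abstract refers to, here is a minimal sketch of one Interacting Multiple Model (IMM) cycle with two scalar Kalman filters that differ only in process noise, one tuned for cruising and one for manoeuvres. It shows only the generic mixing, per-model filtering, and model-probability update steps; the thesis's asynchronous, constrained variant and its vehicle models are not reproduced, and all numbers are illustrative.

```python
import numpy as np

# Minimal scalar IMM cycle: two Kalman filters that share F, H, R but
# differ in process noise ("cruise" vs "manoeuvre"). Illustrative only.
F, H, R = 1.0, 1.0, 0.5
Q = np.array([0.01, 1.0])                 # process noise per model
P_trans = np.array([[0.95, 0.05],         # model transition probabilities
                    [0.05, 0.95]])

def imm_step(z, x, P, mu):
    # 1) Mixing: blend per-model estimates into each filter's prior.
    c = P_trans.T @ mu                    # c[j] = sum_i p_ij * mu_i
    w = P_trans * mu[:, None] / c         # w[i, j]: weight of model i for filter j
    x0 = w.T @ x
    P0 = np.array([w[:, j] @ (P + (x - x0[j]) ** 2) for j in range(2)])
    # 2) Per-model Kalman predict/update, keeping measurement likelihoods.
    L = np.zeros(2)
    for j in range(2):
        xp, Pp = F * x0[j], F * P0[j] * F + Q[j]
        S = H * Pp * H + R                # innovation variance
        K = Pp * H / S                    # Kalman gain
        nu = z - H * xp                   # innovation
        x[j], P[j] = xp + K * nu, (1.0 - K * H) * Pp
        L[j] = np.exp(-0.5 * nu ** 2 / S) / np.sqrt(2 * np.pi * S)
    # 3) Model-probability update and combined output.
    mu = L * c
    mu /= mu.sum()
    return x, P, mu, mu @ x

rng = np.random.default_rng(0)
truth = np.concatenate([np.zeros(30), np.linspace(0.0, 5.0, 30)])
x, P, mu = np.zeros(2), np.ones(2), np.array([0.5, 0.5])
for z in truth + rng.normal(0.0, np.sqrt(R), truth.size):
    x, P, mu, est = imm_step(z, x, P, mu)
print("final model probabilities:", np.round(mu, 3))
```

On the ramp segment of the synthetic trajectory, the high-process-noise "manoeuvre" model accumulates probability, which is the behaviour the multiple-model argument above relies on.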
502

Estimation, Testing, and Monitoring of Generalized Autoregressive Conditionally Heteroskedastic Time Series

Zhang, Aonan 01 May 2005 (has links)
We study Generalized Autoregressive Conditionally Heteroskedastic (GARCH) time series in this dissertation, focusing on squared GARCH sequences. Our main results are as follows:

1. We compare three methods of constructing confidence intervals for sample autocorrelations of squared returns modeled by members of the GARCH family: the residual bootstrap, the block bootstrap, and subsampling. The residual bootstrap based on the standard GARCH(1,1) model performs best. Confidence intervals for cross-correlations of a bivariate GARCH model are also studied.

2. We study a test to discriminate between long memory and volatility changes in financial returns data. The finite-sample performance of the test is examined and compared across several variance estimators. The Bartlett kernel estimator, with truncation lag determined by a calibrated bandwidth-selection procedure, performs best, and the testing procedure is robust to various GARCH-type models.

3. We propose several methods for on-line detection of a change in unconditional variance in a conditionally heteroskedastic time series. We follow a paradigm in which the first m observations are assumed to follow a stationary process and the monitoring scheme has asymptotically controlled probability of falsely rejecting the null hypothesis of no change. Our theory applies to broad classes of GARCH-type time series and relies on a strong invariance principle that holds for the squares of observations generated by such models. A practical implementation of the procedures is proposed, and the performance of the methods is investigated in a simulation study.
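As a rough illustration of one of the three interval-construction methods compared in result 1, the sketch below builds a block-bootstrap confidence interval for the lag-1 autocorrelation of squared returns from a simulated GARCH(1,1) series. The GARCH parameters, block length, and replication count are illustrative choices, not the dissertation's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_garch11(n, omega=0.05, alpha=0.1, beta=0.85):
    # Simulate a GARCH(1,1) return series (parameters illustrative).
    r, h = np.zeros(n), omega / (1 - alpha - beta)
    for t in range(n):
        r[t] = np.sqrt(h) * rng.standard_normal()
        h = omega + alpha * r[t] ** 2 + beta * h
    return r

def acf1(x):
    # Lag-1 sample autocorrelation.
    x = x - x.mean()
    return (x[:-1] @ x[1:]) / (x @ x)

r2 = simulate_garch11(2000) ** 2          # squared returns
n, blen, B = r2.size, 50, 999             # block length is a tuning choice
stats = []
for _ in range(B):
    # Resample non-overlapping-length blocks with random start points.
    starts = rng.integers(0, n - blen, size=n // blen)
    boot = np.concatenate([r2[s:s + blen] for s in starts])
    stats.append(acf1(boot))
lo, hi = np.percentile(stats, [2.5, 97.5])
print(f"lag-1 ACF of squared returns: {acf1(r2):.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```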
503

Algebraic Methods for Log-Linear Models

Pribadi, Aaron 31 May 2012 (has links)
Techniques from representation theory (Diaconis, 1988) and algebraic geometry (Drton et al., 2008) have been applied to the statistical analysis of discrete data with log-linear models. With these ideas in mind, we discuss the selection of sparse log-linear models, especially for binary data and data on other structured sample spaces. When a sample space and its symmetry group satisfy certain conditions, we construct a natural spanning set for the space of functions on the sample space which respects the isotypic decomposition; these vectors may be used in algorithms for model selection. The construction is explicitly carried out for the case of binary data.
504

Estimation of Cost and Benefit of Instream Flow

Amirfathi, Parvaneh 01 May 1984 (has links)
Water flowing in streams has value for various types of recreationists and is essential for fish and wildlife. Since water demands for offstream uses in the arid West have been steadily increasing, increasing instream flows to enhance the recreational experience may conflict with established withdrawals for uses such as agriculture, industry, and households. The intent of this study is to contribute to an economic assessment of the tradeoff between maintaining instream flow for river recreation and offstream uses; that is, to develop and apply a method to measure the costs and benefits of water used for recreation on a river. Since market prices are not observable for instream flows, estimating the economic value of instream flow presents well-known difficulties. Household production function theory was used to build the theoretical model for measuring the economic value of instream flow. Policy implications are discussed, with emphasis on applying the information to water-management decisions.
505

Estimation of Floods When Runoff Originates from Nonhomogeneous Sources

Olson, David Ray 01 May 1979 (has links)
Extreme value theory is used as the basis for deriving a distribution function for flood frequency analysis when runoff originates from nonhomogeneous sources. A modified least-squares technique is used to estimate the parameters of the distribution function for eleven rivers. Goodness-of-fit statistics are computed, and the distribution function is found to fit the data very well. The derived distribution is recommended as a base method for flood frequency analysis of rivers exhibiting nonhomogeneous runoff sources, provided further investigation also proves positive.
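As a hedged sketch of the kind of derivation the abstract describes: if annual peak flows can arise from two independent mechanisms (say, snowmelt and rainstorms), the annual-maximum CDF is the product of the component CDFs, since the annual maximum is below x only if both sources are. The Gumbel components, synthetic data, and least-squares fit to plotting positions below are assumptions for illustration; the thesis's exact distribution function and modified least-squares technique are not reproduced.

```python
import numpy as np
from scipy.optimize import least_squares

def mixed_cdf(x, loc1, scale1, loc2, scale2):
    # CDF of the max of two independent Gumbel-distributed flood sources:
    # P(max <= x) = F1(x) * F2(x).
    gumbel = lambda x, u, a: np.exp(-np.exp(-(x - u) / a))
    return gumbel(x, loc1, scale1) * gumbel(x, loc2, scale2)

rng = np.random.default_rng(2)
# Synthetic annual peaks: each year's peak is the larger of two mechanisms.
peaks = np.sort(np.maximum(rng.gumbel(100, 20, 60), rng.gumbel(150, 40, 60)))
pp = np.arange(1, peaks.size + 1) / (peaks.size + 1)   # Weibull plotting positions

# Least-squares fit of the four parameters to the empirical CDF.
res = least_squares(
    lambda p: mixed_cdf(peaks, *p) - pp,
    x0=[100.0, 20.0, 150.0, 40.0],                      # illustrative start values
    bounds=([0.0, 1.0, 0.0, 1.0], np.inf),
)
print("fitted (loc1, scale1, loc2, scale2):", np.round(res.x, 1))
```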
506

Force-Directed Instruction Scheduling for Low Power

Dongale, Prashant 24 October 2003 (has links)
The increasing need for low-power computing devices has led to efforts to optimize power in all components of a system. Significant power optimization can be achieved at the software level through instruction reordering during compilation. In this thesis, we design and implement a novel instruction-scheduling technique, called FD-ISLP, aimed at reducing software power consumption. The approach adapts the force-directed scheduling technique used in high-level synthesis of VLSI circuits to derive a latency-constrained algorithm that reorders the instructions in a basic block of assembly code to reduce the power consumed by its execution. The scheduling algorithm takes the data dependency graph (DDG) for a given basic block and a power dissipation table (PDT), generated by characterizing the instruction set architecture. We model power, commonly referred to as software power in the literature, as a force to be minimized by relating the inter-instruction power cost to the spring constant, k, and the change in instruction probability to the displacement, x, in the force equation f = k * x. The salient feature of the algorithm is that it accounts for the global effect of any tentative scheduling decision, which keeps the solution from being trapped in a local minimum. Power estimates are obtained using the SimplePower tool set. Experimental results indicate that the technique yields an average of 12.68% savings in power consumption over the original code for the selected benchmark programs.
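To make the force idea concrete, the sketch below computes the classic force-directed-scheduling force with the distribution graph weighted by per-instruction power costs, in the spirit of the f = k * x model described above. The three-instruction block, mobility windows, and PDT costs are hypothetical, and the thesis's full DDG handling and iterative scheduling loop are not reproduced.

```python
import numpy as np

n_cycles = 3
asap = {"i1": 0, "i2": 0, "i3": 1}            # earliest feasible cycle per instruction
alap = {"i1": 1, "i2": 2, "i3": 2}            # latest feasible cycle per instruction
cost = {"i1": 1.0, "i2": 1.8, "i3": 0.6}      # hypothetical PDT power costs

def prob(ins):
    # Uniform occupation probability over the instruction's mobility window.
    p = np.zeros(n_cycles)
    p[asap[ins]:alap[ins] + 1] = 1.0 / (alap[ins] - asap[ins] + 1)
    return p

# Power-weighted distribution graph: expected power demand per cycle.
dg = sum(cost[i] * prob(i) for i in asap)

def force(ins, cycle):
    # Force of tentatively fixing `ins` at `cycle` (f = k * x): power-
    # weighted change in probability. Lower (more negative) is better,
    # as the move relieves power-crowded cycles.
    after = np.zeros(n_cycles)
    after[cycle] = 1.0
    return float(dg @ (after - prob(ins)))

for c in range(asap["i2"], alap["i2"] + 1):
    print(f"force of fixing i2 at cycle {c}: {force('i2', c):+.3f}")
```

Because the force sums the effect over every cycle of the block, each tentative decision is judged globally rather than greedily, which is the property the abstract highlights.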
507

Recursive residuals and estimation for mixed models

Bani-Mustafa, Ahmed, University of Western Sydney, College of Law and Business, School of Quantitative Methods and Mathematical Sciences January 2004 (has links)
In the last three decades, recursive residuals and recursive estimation have received extensive attention as powerful tools for diagnosing structural change and functional misspecification in regression models. Recursive residuals, and their relationship with recursive estimation of regression parameters, have been developed for fixed-effects models; they have been used to test the constancy of regression models over time, and their use has been suggested for almost all areas of regression-model validation. These recursive techniques have not been developed for some of the more recent generalisations of linear models, such as Linear Mixed Models (LMM) and their important extension, Generalised Linear Mixed Models (GLMM), which provide a suitable framework for analysing a variety of special problems in a unified way. The aim of this thesis is to extend recursive residuals and estimation to mixed models, particularly LMM and GLMM. Recurrence formulae are developed and recursive residuals are defined. / Doctor of Philosophy (PhD)
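For context, the classical fixed-effects recursion that the thesis generalises (Brown, Durbin and Evans, 1975) standardises each one-step-ahead prediction error from a model fitted to the preceding observations; a minimal sketch on synthetic data follows. The mixed-model recurrence formulae developed in the thesis are not reproduced.

```python
import numpy as np

# Recursive residuals for a fixed-effects linear model:
# w_r = (y_r - x_r' b_{r-1}) / sqrt(1 + x_r' (X'X)^{-1} x_r),
# where b_{r-1} is the OLS fit on the first r-1 observations.
rng = np.random.default_rng(3)
n, k = 40, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=n)

w = []
for r in range(k + 1, n + 1):
    Xr, yr = X[:r - 1], y[:r - 1]
    XtX_inv = np.linalg.inv(Xr.T @ Xr)
    b = XtX_inv @ Xr.T @ yr                     # OLS on the first r-1 obs
    xr = X[r - 1]
    w.append((y[r - 1] - xr @ b) / np.sqrt(1 + xr @ XtX_inv @ xr))

w = np.array(w)
# Under a correctly specified, stable model the w are iid N(0, sigma^2);
# their CUSUM is the classic structural-change diagnostic.
print("mean, sd of recursive residuals:", w.mean().round(3), w.std(ddof=1).round(3))
```

(In practice the inverse is updated by a recurrence rather than refitted each step; the refit above keeps the sketch short.)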
508

Acuity of force appreciation in the osteoarthritic knee joint

Brereton, Helen P Unknown Date (has links)
Osteoarthritis and ageing have been shown to induce changes in the number and health of peripheral mechanoreceptors. Whilst position and movement awareness in the osteoarthritic knee have been studied extensively, little work to date has examined muscle force awareness in this group. Poor force acuity may contribute to muscle and joint pain and dysfunction, and may additionally hinder rehabilitation efforts in an osteoarthritic population. Overestimation of the muscle forces required for a given task, resulting in greater joint compression forces, may aggravate and inflame osteoarthritic symptoms; underestimation of required muscle forces may amplify existing joint instability, increasing the risk of injury. Additionally, both under- and overloading of muscles during rehabilitation can delay the return to full function after injury.

Regarding the neurological process of force coding, current debate centres on the relative importance of the centrally generated, motor-command-mediated 'sense of effort' versus the peripherally signalled 'sense of tension' as the dominant coding process, with central mechanisms favoured in the majority of studies published to date. The purpose of this study was to investigate muscle force awareness in the knee extensors, knee flexors, and hands of subjects with and without knee joint osteoarthritis. Twenty-one subjects with knee joint osteoarthritis and 23 age- and gender-matched subjects with no known knee pathology were evaluated. All subjects performed ipsilateral isometric force-estimation and force-matching tasks at levels scaled to individual maximum voluntary capacity (MVC). Errors in estimation and matching acuity were normalised to reference targets (comparison force / reference force), giving a relative score (RS) that allows comparison across submaximal force levels; an RS below 1.0 indicates that subjects produced insufficient force, and vice versa.

Maximal voluntary capacity tests revealed significantly lower (p<0.05) peak knee-extension torque in the osteoarthritis group (111.2 Nm versus 145.3 Nm) but similar peak knee-flexion torque (46.1 Nm versus 45.4 Nm for osteoarthritis and control subjects, respectively). All subjects overestimated at low reference levels and underestimated at high reference levels. In the lower limb, force appreciation differed significantly between muscle groups regardless of knee condition, with the knee extensors demonstrating greater overall accuracy than the knee flexors. There was a significant difference (p<0.05) in force-estimation ability, and a trend towards significance (p=0.066) for force-matching acuity, across groups at the 10% MVC test level. A significant (p<0.05) group difference in grip force-estimation ability between the lowest and highest target levels was also demonstrated.

It can be concluded that there are small differences in force acuity in osteoarthritis subjects at lower submaximal force targets when compared with healthy age-matched peers. The notion of information redundancy, whereby no new proprioceptive inputs, regardless of origin, can improve force acuity in a given situation, has been demonstrated in previous studies reporting relatively stable force-matching acuity at forces between 30% and 60% of maximal capacity. The poor comparative force perception demonstrated by the osteoarthritis group at the lower submaximal test levels supports the notion that centrally generated copies of motor commands do not provide sufficient data to adequately encode force magnitude at low levels of force generation, evoking a greater reliance on data received from peripheral mechanoreceptors. This has significant implications for this group, given that the majority of daily tasks require only low levels of force generation. Since perceptive acuity in a variety of sensory modalities has been shown to improve with training, there may be a role for force-perception training in older adults with osteoarthritis.
509

Iterative Receiver for MIMO-OFDM System with ICI Cancellation and Channel Estimation

Li, Rui January 2008 (has links)
Master of Engineering by Research / As a multi-carrier modulation scheme, Orthogonal Frequency Division Multiplexing (OFDM) can achieve high data rates in frequency-selective fading channels by splitting a broadband signal into a number of narrowband signals over a number of subcarriers, where each subcarrier is more robust to multipath. A wireless system with multiple antennas at both the transmitter and receiver, known as a multiple-input multiple-output (MIMO) system, achieves high capacity by transmitting independent information over different antennas simultaneously. The combination of OFDM with multiple antennas is considered one of the most promising techniques for future wireless communication systems.

The challenge in detecting a space-time signal is to design a low-complexity detector that can efficiently remove the interference resulting from channel variations and approach the interference-free bound. Iterative parallel interference cancellation (PIC) with joint detection and decoding is a promising approach; however, the decision statistics of a linear PIC are biased towards the decision boundary after the first cancellation stage. In this thesis, we employ an iterative receiver with a decoder metric that considerably reduces this bias in the second iteration, which is critical for the performance of the iterative algorithm.

Channel state information is required for signal detection at the receiver of a MIMO-OFDM system, and its accuracy directly affects overall system performance. To estimate the channel in high-delay-spread environments, pilot symbols are inserted among the subcarriers before transmission, and the channel over all subcarriers is then obtained by interpolation. In this thesis, a linear interpolator and a trigonometric interpolator are compared, and we propose a new interpolator, the multi-tap method, which gives much better system performance.

In MIMO-OFDM systems, time-varying fading channels can destroy the orthogonality of the subcarriers. This causes serious intercarrier interference (ICI), leading to significant performance degradation that becomes more severe as the normalized Doppler frequency increases. We propose a low-complexity iterative receiver with joint frequency-domain ICI cancellation and pilot-assisted channel estimation to minimize the effect of time-varying fading channels. At the first stage of the receiver, the interference between adjacent subcarriers is subtracted from the received OFDM symbols; parallel interference cancellation with decision-statistics combining (DSC) is then performed to suppress the interference from other antennas. By restricting the interference to a limited number of neighbouring subcarriers, the computational complexity of the proposed receiver can be significantly reduced.

Constructing the time-variant channel matrix in the frequency domain requires channel estimation, but an accurate estimate requiring complete knowledge of the channel's time variations within each block cannot be obtained. For time-varying frequency-selective fading channels, the placement of pilot tones also has a significant impact on the quality of the channel estimates. Under the assumption that channel variations can be approximated by a linear model, we derive channel state information (CSI) in the frequency domain and estimate time-domain channel parameters. An iterative low-complexity channel-estimation method is proposed to improve system performance: pilot symbols inserted in the transmitted OFDM symbols mitigate the effect of ICI, and the channel estimates update the results of both the frequency-domain equalizer and the PIC-DSC detector in each iteration. The complexity of this algorithm is reduced because the relevant matrices can be precalculated and stored at the receiver when the pilot placement within the OFDM symbols is fixed before transmission. Finally, simulation results show that the proposed iterative MIMO-OFDM receiver can effectively mitigate the effect of ICI and approach ICI-free performance over time-varying frequency-selective fading channels.
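As a minimal illustration of the pilot-assisted estimation-plus-interpolation idea, the sketch below forms least-squares channel estimates at the pilot subcarriers of a single OFDM symbol and linearly interpolates them across all subcarriers — the baseline the proposed multi-tap interpolator is meant to improve on. A single antenna, a static channel over the symbol, and all parameter values are illustrative assumptions; the thesis's multi-tap method and iterative receiver are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
n_sc, pilot_step = 64, 8
pilot_idx = np.arange(0, n_sc, pilot_step)

# A 4-tap random multipath channel and its frequency response.
h_taps = (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(8)
H = np.fft.fft(h_taps, n_sc)

# Received pilots: known BPSK pilot symbols through the channel plus noise.
pilots = np.ones(pilot_idx.size)
noise = 0.05 * (rng.normal(size=pilot_idx.size) + 1j * rng.normal(size=pilot_idx.size))
Y_p = H[pilot_idx] * pilots + noise

H_ls = Y_p / pilots                       # least-squares estimate at pilot tones
# Linear interpolation across all subcarriers, real and imaginary parts
# separately; note np.interp extrapolates flat beyond the last pilot.
H_hat = (np.interp(np.arange(n_sc), pilot_idx, H_ls.real)
         + 1j * np.interp(np.arange(n_sc), pilot_idx, H_ls.imag))

mse = np.mean(np.abs(H_hat - H) ** 2)
print(f"channel-estimation MSE with linear interpolation: {mse:.4f}")
```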
510

Forensic Dentistry and its Application in Age Estimation from the Teeth using a Modified Demirjian System

Blenkin, Matthew Robert Barclay January 2005 (has links)
The estimation of age at death is often an important step in the identification of human remains. If age can be accurately estimated, it significantly narrows the field of possible identities that must be compared against the remains to establish a positive identification. Some of the more accurate methods of age estimation in juveniles and younger adults are based on assessing the degree of dental development as it relates to chronological age. The purpose of the current study was to test the applicability of one such system, the Demirjian system, to a Sydney sample population, and to develop and test age-prediction models using a large sample of Sydney children (1624 girls, 1637 boys). The Demirjian standards consistently overestimated chronological age in children under 14 years, by as much as a mean of 0.97 years, and underestimated it in children over 14 years, by as much as a mean of 2.18 years in 16-year-old females. Of the alternative predictive models derived from the Sydney sample, those providing the most accurate age estimates apply to the age range 2-14 years, with a coefficient of determination of R² = 0.94 and a 95% confidence interval of ±1.8 years. The Sydney-based standards provided significantly different and more accurate age estimates for that sample than the published Demirjian standards.
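As a hedged sketch of how population-specific prediction models of this kind are typically derived, the code below regresses chronological age on an overall dental-maturity score. The functional form, score scale, and data are all illustrative assumptions, not the study's fitted models or its Sydney sample.

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic stand-in: an overall maturity score (Demirjian-style, 0-100)
# and a chronological age that grows nonlinearly with it.
maturity = rng.uniform(10, 100, 500)
age = 2 + 12 * (maturity / 100) ** 0.8 + rng.normal(0, 0.8, 500)

coeffs = np.polyfit(maturity, age, deg=3)   # cubic calibration curve
pred = np.polyval(coeffs, maturity)
ss_res = np.sum((age - pred) ** 2)
ss_tot = np.sum((age - age.mean()) ** 2)
print(f"R^2 on synthetic data: {1 - ss_res / ss_tot:.3f}")
print(f"predicted age at maturity score 70: {np.polyval(coeffs, 70):.1f} years")
```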
