851

Confirmatory factor analysis with ordinal data : effects of model misspecification and indicator nonnormality on two weighted least squares estimators

Vaughan, Phillip Wingate 22 October 2009 (has links)
Full weighted least squares (full WLS) and robust weighted least squares (robust WLS) are currently the two primary estimation methods designed for structural equation modeling with ordinal observed variables. These methods assume that continuous latent variables were coarsely categorized by the measurement process to yield the observed ordinal variables, and that the model proposed by the researcher pertains to these latent variables rather than to their ordinal manifestations. Previous research has strongly suggested that robust WLS is superior to full WLS when models are correctly specified. Given the realities of applied research, it was critical to examine these methods with misspecified models. This Monte Carlo simulation study examined the performance of full and robust WLS for two-factor, eight-indicator confirmatory factor analytic models that were either correctly specified, overspecified, or misspecified in one of two ways. Seven conditions of five-category indicator distribution shape at four sample sizes were simulated. These design factors were completely crossed for a total of 224 cells. Previous findings of the relative superiority of robust WLS with correctly specified models were replicated, and robust WLS was also found to perform better than full WLS given overspecification or misspecification. Robust WLS parameter estimates were usually more accurate for correct and overspecified models, especially at the smaller sample sizes. In the face of misspecification, full WLS better approximated the correct loading values, whereas robust estimates better approximated the correct factor correlation. Robust WLS chi-square values discriminated between correct and misspecified models much better than full WLS values at the two smaller sample sizes. For all four model specifications, robust parameter estimates usually showed lower variability and robust standard errors usually showed lower bias. These findings suggest that robust WLS should likely remain the estimator of choice for applied researchers. Additionally, highly leptokurtic distributions should be avoided when possible. It should also be noted that robust WLS performance was arguably adequate at the sample size of 100 when the indicators were not highly leptokurtic.
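For illustration, a minimal NumPy sketch of the data-generating idea described above: continuous latent responses from a two-factor, eight-indicator model are coarsely categorized into five-category ordinal indicators. The loading, factor correlation, and thresholds are assumed values for the example, not the study's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500                          # sample size (illustrative)
loading, factor_corr = 0.7, 0.3  # assumed loading and factor correlation

# Two correlated factors, four indicators each (as in the study's models)
factors = rng.multivariate_normal([0, 0], [[1, factor_corr], [factor_corr, 1]], size=n)
continuous = np.repeat(factors, 4, axis=1) * loading
continuous += rng.normal(scale=np.sqrt(1 - loading**2), size=continuous.shape)

# Coarse categorization: thresholds turn latent responses into 5-category items
thresholds = [-1.5, -0.5, 0.5, 1.5]            # symmetric cuts (one of many shapes)
ordinal = np.digitize(continuous, thresholds)  # values in {0, ..., 4}
```

Skewed or leptokurtic indicator distributions, one of the study's design factors, correspond to asymmetric or extreme threshold placements.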
852

Essays on Time Series Analysis : With Applications to Financial Econometrics

Preve, Daniel January 2008 (has links)
This doctoral thesis is comprised of four papers that all relate to the subject of time series analysis.

The first paper of the thesis considers point estimation in a nonnegative, hence non-Gaussian, AR(1) model. The parameter estimation is carried out using a type of extreme value estimator (EVE). A novel estimation method based on these EVEs is presented. The theoretical analysis is complemented with Monte Carlo simulation results, and the paper is concluded by an empirical example.

The second paper extends the model of the first paper of the thesis and considers semiparametric, robust point estimation in a nonlinear nonnegative autoregression. The nonnegative AR(1) model of the first paper is extended in three important ways: first, we allow the errors to be serially correlated; second, we allow for heteroskedasticity of unknown form; third, we allow for a multi-variable mapping of previous observations. Once more, the EVEs used for parameter estimation are shown to be strongly consistent under very general conditions. The theoretical analysis is complemented with extensive Monte Carlo simulation studies that illustrate the asymptotic theory and indicate reasonable small-sample properties of the proposed estimators.

In the third paper we construct a simple nonnegative time series model for realized volatility, use the results of the second paper to estimate the proposed model on S&P 500 monthly realized volatilities, and then use the estimated model to make one-month-ahead forecasts. The out-of-sample performance of the proposed model is evaluated against a number of standard models. Various tests and accuracy measures are utilized to evaluate the forecast performances. It is found that forecasts from the nonnegative model perform exceptionally well under the mean absolute error and the mean absolute percentage error forecast accuracy measures.

In the fourth and last paper of the thesis we construct a multivariate extension of the popular Diebold-Mariano test. Under the null hypothesis of equal predictive accuracy of three or more forecasting models, the proposed test statistic has an asymptotic chi-squared distribution. To explore whether the behavior of the test in moderate-sized samples can be improved, we also provide a finite-sample correction. A small-scale Monte Carlo study indicates that the proposed test has reasonable size properties in large samples and that it benefits noticeably from the finite-sample correction, even in quite large samples. The paper is concluded by an empirical example that illustrates the practical use of the two tests.
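For orientation, a minimal sketch of one classical extreme value estimator for the nonnegative AR(1) model, the minimum-ratio estimator (often attributed to Davis and McCormick); the thesis's EVEs build on this idea. The exponential error distribution and parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
rho, n = 0.6, 1000

# Nonnegative AR(1): x_t = rho * x_{t-1} + e_t with nonnegative (exponential) errors
x = np.empty(n)
x[0] = rng.exponential()
for t in range(1, n):
    x[t] = rho * x[t - 1] + rng.exponential()

# Extreme value estimator: since e_t >= 0, every ratio x_t / x_{t-1} >= rho,
# with near-equality when an error close to zero occurs, so the minimum
# ratio over the sample estimates rho from above.
rho_hat = np.min(x[1:] / x[:-1])
print(rho_hat)  # approaches 0.6 as n grows
```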
853

Water Supply System Management Design and Optimization under Uncertainty

Chung, Gunhui January 2007 (has links)
Increasing population, diminishing supplies, and variable climatic conditions can cause difficulties in meeting water demands. When a long-range water supply plan is developed to cope with future changes in water demand, accuracy and reliability are the two most important factors. To obtain accurate models, water supply system representations have become more complicated and comprehensive, and future uncertainty is considered to improve system reliability as well as economic feasibility.

In this study, a general large-scale water supply system comprised of modular components was developed in a dynamic simulation environment. Several possible scenarios were simulated in a realistic hypothetical system. In addition to water balance and quality analyses, the construction and operation costs of system components were estimated for each scenario. One set of results demonstrates that construction of small-cluster decentralized wastewater treatment systems can be more economical than a centralized plant when communities are spatially scattered or located in steep areas.

The Shuffled Frog Leaping Algorithm (SFLA) is then used to minimize the total cost of the general water supply system, as sketched in the example below. Decisions comprise sizing decisions - pipe diameter, pump design capacity and head, canal capacity, and water/wastewater treatment capacities - and flow allocations over the water supply network. An explicit representation of the energy consumption cost of operation is incorporated into the optimization of overall system cost. Although the study water supply systems include highly nonlinear terms in the objective function and constraints, the stochastic search algorithm was applied successfully to find optimal solutions that satisfied all the constraints for the study networks.

Finally, a robust optimization approach was introduced into the design process of a water supply system as a framework for handling uncertainties in correlated future data. The approach allows control of the degree of conservatism, a crucial factor for system reliability and economic feasibility. System stability is guaranteed under the most uncertain conditions, and it was found that the water supply model under uncertainty can be a useful tool to assist decision makers in developing future water supply schemes.
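As a sketch of the optimizer, a simplified Shuffled Frog Leaping Algorithm on a toy objective (the sphere function). The global-best fallback step of the full algorithm is omitted, and all sizes are illustrative rather than those used for the water supply networks.

```python
import numpy as np

rng = np.random.default_rng(2)

def sphere(x):                       # toy objective to minimize
    return np.sum(x**2)

dim, n_frogs, n_memeplexes = 5, 30, 5
frogs = rng.uniform(-5, 5, (n_frogs, dim))

def evolve_memeplex(mem):
    """One local step: move the worst frog toward the memeplex best."""
    fit = np.array([sphere(f) for f in mem])
    worst, best = np.argmax(fit), np.argmin(fit)
    candidate = mem[worst] + rng.uniform(0, 1, dim) * (mem[best] - mem[worst])
    if sphere(candidate) < fit[worst]:
        mem[worst] = candidate
    else:                            # no improvement: replace with a random frog
        mem[worst] = rng.uniform(-5, 5, dim)
    return mem

for generation in range(200):
    # Shuffle: sort all frogs by fitness, deal into memeplexes round-robin
    order = np.argsort([sphere(f) for f in frogs])
    frogs = frogs[order]
    for m in range(n_memeplexes):
        idx = np.arange(m, n_frogs, n_memeplexes)
        frogs[idx] = evolve_memeplex(frogs[idx])

best = min(frogs, key=sphere)
print(sphere(best))                  # objective decreases toward 0 over generations
```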
854

High-dimensional statistical methods for inter-subject studies in neuroimaging

Fritsch, Virgile 18 December 2013 (has links) (PDF)
Inter-individual variability is a major obstacle to the analysis of medical images, particularly in neuroimaging. One must distinguish natural or statistical variability, a source of potential effects of interest for diagnosis, from artifactual variability, made up of nuisance effects tied to experimental or technical problems arising during data acquisition or processing. The latter can prove far greater than the former: in neuroimaging, acquisition problems can mask the functional variability that is otherwise associated with a disease, a psychological disorder, or the expression of a specific genetic code. The quality of the statistical procedures used for group studies is then diminished, because those procedures rest on the assumption of a homogeneous population, an assumption that is hard to verify manually on high-dimensional neuroimaging data. Automatic methods have been introduced to try to eliminate overly deviant subjects and thereby make the studied groups more homogeneous. This practice has not fully proven itself, however, since no study has clearly validated it and the tolerance level to choose remains arbitrary. Another approach is therefore to use analysis and processing procedures that are intrinsically insensitive to the homogeneity assumption. Such procedures are also better suited to real data in that they tolerate, to some extent, other, subtler assumption violations such as data normality. Another, partly related problem is the lack of stability and sensitivity of voxel-level analysis methods, a source of non-reproducible results.

We begin this thesis with the development of an outlier detection method suited to neuroimaging data, which provides statistical control over the inclusion of subjects: we propose a regularized version of a robust covariance estimator to make it usable in high dimension. We compare several types of regularization and conclude that random projections offer the best compromise. We also present nonparametric procedures whose good performance we demonstrate, although they offer no statistical control. The second contribution of this thesis is a new approach, named RPBI (Randomized Parcellation Based Inference), addressing the lack of reproducibility of standard methods. We stabilize parcel-level analysis by aggregating several independent analyses, for which the partition of the brain into parcels varies from one analysis to the next. The method achieves a higher level of sensitivity than state-of-the-art methods, which we demonstrate through experiments on synthetic and real data. Our third contribution is an application of robust regression to neuroimaging studies. Building on existing work, we focus on large-scale studies conducted on more than a hundred subjects. Considering both simulated and real data, we show that the use of robust regression improves the sensitivity of the analyses. We demonstrate that it is important to guarantee resistance to assumption violations, even in cases where a careful inspection of the dataset has been conducted beforehand. Finally, we combine robust regression with our RPBI analysis method to obtain even more sensitive statistical tests.
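A minimal sketch (assuming scikit-learn's MinCovDet; the aggregation rule and projection count are illustrative, not the thesis's exact regularized estimator) of the random-projection idea: score subjects by robust Mahalanobis distances aggregated over random low-dimensional projections, where a plain robust covariance fit would fail in the original high dimension.

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(3)

def outlier_scores(X, n_projections=20, proj_dim=5):
    """Aggregate robust Mahalanobis distances over random projections."""
    n, p = X.shape
    scores = np.zeros(n)
    for _ in range(n_projections):
        R = rng.normal(size=(p, proj_dim)) / np.sqrt(proj_dim)
        Z = X @ R                          # project to a low dimension
        mcd = MinCovDet(random_state=0).fit(Z)
        scores += mcd.mahalanobis(Z)       # robust squared distances
    return scores / n_projections

# 100 subjects, 200 features, with 5 deviant subjects shifted away from the bulk
X = rng.normal(size=(100, 200))
X[:5] += 3.0
print(np.argsort(outlier_scores(X))[-5:])  # the deviant subjects score highest
```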
855

Fast model predictive control

Buerger, Johannes Albert January 2013 (has links)
This thesis develops efficient optimization methods for Model Predictive Control (MPC) to enable its application to constrained systems with fast and uncertain dynamics. The key contribution is an active set method which exploits the parametric nature of the sequential optimization problem and is obtained from a dynamic programming formulation of the MPC problem. This method is first applied to the nominal linear MPC problem and is successively extended to linear systems with additive uncertainty and input constraints or state/input constraints. The thesis discusses both offline (projection-based) and online (active set) methods for the solution of controllability problems for linear systems with additive uncertainty. The active set method uses first-order necessary conditions for optimality to construct parametric programming regions for a particular given active set, locally along a line of search in the space of feasible initial conditions. Along this line of search the homotopy of optimal solutions is exploited: a known solution at some given plant state is continuously deformed into the solution at the actual measured plant state by performing the required active set changes whenever a boundary of a parametric programming region is crossed during the line search. The sequence of solutions for the finite horizon optimal control problem is therefore obtained locally for the given plant state. This method overcomes the main limitation of parametric programming methods previously applied in the MPC context, which usually require the offline precomputation of all possible regions. In contrast, the proposed approach is an online method with very low computational demands which efficiently exploits the parametric nature of the solution and returns exact local DP solutions. The final chapter of this thesis discusses an application of robust tube-based MPC to the nonlinear MPC problem based on successive linearization.
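For orientation, a generic linear MPC problem of the kind whose parametric solution the active set method tracks, written as a QP (assuming cvxpy; the double-integrator dynamics, weights, and horizon are illustrative).

```python
import cvxpy as cp
import numpy as np

# Double-integrator dynamics (illustrative), horizon N
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
N, Q, R = 20, np.eye(2), 0.1 * np.eye(1)

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
x0 = np.array([1.0, 0.0])            # measured plant state (the QP's parameter)

cost, constraints = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(u[:, k]) <= 1.0]    # input constraints

cp.Problem(cp.Minimize(cost), constraints).solve()
print(u[:, 0].value)   # the first input is applied; the QP is re-solved at the next state
```

The thesis's method avoids re-solving this QP from scratch at each sample: it deforms the known solution at a previous state into the solution at the measured state by performing active set changes along a line of search in the state space.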
856

Robust Image Hash Spoofing

Amir Asgari, Azadeh January 2016 (has links)
With the intensive growth of digital media, new challenges have arisen for the authentication and protection of digital intellectual property. A hash function extracts certain features of a multimedia object, e.g. an image, and maps it to a fixed string of bits. Unlike a normal cryptographic hash, a perceptual hash function is tolerant to the changes introduced by image processing techniques. A perceptual hash function, also referred to as a robust hash, is like any other algorithm prone to errors: false negatives and false positives, of which the false positive error is often neglected in comparison with the false negative error. A false positive occurs when an unknown object is identified as known. In this work, a new method for raising false alarms in a robust hash function is devised for evaluation purposes, i.e. the algorithm modifies the hash key of a target image to resemble a different image's hash key without any significant loss of quality in the modified image. The algorithm is implemented in MATLAB using a block mean value based hash function and successfully reduces the Hamming distance between the target image and the modified image, with good results and without significant loss of quality in the attacked image.
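A minimal sketch of the block mean value hash and the Hamming distance used to compare hash keys (in Python rather than the thesis's MATLAB; the block count and median thresholding follow the common block mean value scheme and may differ in detail from the implementation used here).

```python
import numpy as np

def block_mean_hash(gray, blocks=16):
    """Perceptual hash: one bit per block, set if the block mean exceeds the median."""
    h, w = gray.shape
    bh, bw = h // blocks, w // blocks
    means = np.array([gray[i*bh:(i+1)*bh, j*bw:(j+1)*bw].mean()
                      for i in range(blocks) for j in range(blocks)])
    return (means > np.median(means)).astype(np.uint8)

def hamming(h1, h2):
    return int(np.sum(h1 != h2))

rng = np.random.default_rng(4)
img = rng.integers(0, 256, (256, 256)).astype(float)
noisy = img + rng.normal(scale=5, size=img.shape)   # mild image processing
print(hamming(block_mean_hash(img), block_mean_hash(noisy)))  # small distance
```

A spoofing attack of the kind evaluated in the thesis perturbs block means of a target image just enough to flip chosen bits toward a different image's key.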
857

Robust gain scheduling techniques for adaptive control

Antoinette, Patrice, Luc 15 June 2012 (has links)
Many linear methods exist for designing a robust controller for an uncertain linear system. Frequently, however, an increase in robustness is obtained at the expense of a loss in performance. This thesis therefore considers the situation where the range of possible parameter values is "very large" in relation to the "small" allowed variation in the desired level of performance. In this situation, gain-scheduled controllers may be an attractive solution. Implementing this solution, however, requires that the controller have at its disposal the parameters on which the scheduling is done, and it may happen that making these parameters available by measurement is not desired (for example, because of practical implementation aspects) or not possible. In such situations, the designer is led to estimate these parameters and thus to use the paradigm of adaptive control. This thesis proposes a design methodology for an adaptive controller that solves a robust control problem for an uncertain linear plant. A theoretical study aimed at proposing such a methodology is followed by an application to the case of an unstable airplane, which highlights the benefits the proposed strategy can bring to the control of an uncertain system.
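For illustration, a minimal gain-scheduling sketch (assuming SciPy; the parameter-dependent plant and LQR weights are invented for the example): gains are computed offline over a grid of operating points and interpolated online at the current, possibly estimated, parameter value.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

def lqr_gain(theta):
    """State feedback gain for a plant whose dynamics depend on parameter theta."""
    A = np.array([[0.0, 1.0], [-theta, -0.5]])   # illustrative parameter dependence
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)           # K = R^{-1} B^T P

grid = np.linspace(0.5, 4.0, 8)                  # scheduling grid of operating points
gains = np.array([lqr_gain(th) for th in grid])  # shape (8, 1, 2)

def scheduled_gain(theta_hat):
    """Interpolate each gain entry at the (measured or estimated) parameter."""
    flat = np.array([np.interp(theta_hat, grid, gains[:, 0, i]) for i in range(2)])
    return flat.reshape(1, 2)

print(scheduled_gain(2.3))
```

In the adaptive setting studied in the thesis, theta_hat would come from an online parameter estimator rather than from a direct measurement.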
858

Robust and intention-based control of an active orthosis for assistance of knee movements

Mefoued, Saber 12 December 2012 (has links)
The increasing number of elderly people in the world presents new societal challenges, particularly in terms of healthcare and assistance services. With recent advances in technology, robotics appears to be a promising solution for developing systems that improve the living conditions of this aging population. This thesis proposes and validates an approach for the robust, intention-based control of an active orthosis designed to assist flexion/extension movements of the knee for people suffering from knee joint deficiencies. The proposed second-order sliding mode control takes into account the nonlinearities and parametric uncertainties resulting from the dynamics of the combined lower limb-orthosis system. It ensures, on the one hand, good tracking of the desired trajectory imposed by the therapist or by the subjects themselves and, on the other hand, satisfactory robustness with respect to external disturbances that may occur during flexion and extension of the knee joint. The thesis also proposes a neural model based on a Multi-Layer Perceptron to estimate the subject's intention from measurements of the EMG signals characterizing the voluntary activity of the quadriceps muscle group; this approach avoids complex modeling of muscular activation and contraction dynamics. All the proposed approaches have been validated experimentally with the voluntary participation of several healthy subjects.
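A minimal sketch of a second-order sliding mode controller of the super-twisting type on a generic second-order plant (gains, reference, and disturbance are illustrative; the thesis's controller is designed for the actual orthosis-limb dynamics).

```python
import numpy as np

dt, T = 0.001, 5.0
k1, k2, lam = 8.0, 20.0, 5.0        # illustrative super-twisting and sliding gains

theta, dtheta, v = 0.0, 0.0, 0.0    # plant state and integral control term
errors = []
for step in range(int(T / dt)):
    t = step * dt
    ref, dref = 0.5 * np.sin(t), 0.5 * np.cos(t)   # desired joint trajectory

    # Sliding variable combines the tracking error and its derivative
    e, de = theta - ref, dtheta - dref
    s = de + lam * e

    # Super-twisting: continuous control, discontinuity pushed to the derivative
    v += -k2 * np.sign(s) * dt
    u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v

    # Generic second-order plant with a bounded external disturbance
    ddtheta = u + 0.5 * np.sin(3 * t)
    dtheta += ddtheta * dt
    theta += dtheta * dt
    errors.append(e)

print(max(abs(x) for x in errors[-1000:]))   # tracking error settles near zero
```

Unlike first-order sliding mode, the control signal u here is continuous, which reduces chattering, a practical requirement for an orthosis acting on a human limb.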
859

Contributions to guillochage and photograph authentication

Rivoire, Audrey 29 October 2012 (has links)
This work aims to develop a new type of guilloché pattern to be inserted into a photograph (guillochage), inspired by in-line digital holography and able to encode a robust hash value of the image (using the method of Mihçak and Venkatesan). Such a combination allows the authentication of the image containing the guilloché pattern in the digital domain and possibly in the print domain. This approach constrains the image hashing to be robust to the guillochage. The hash value is encoded as a cloud of shapes whose virtual diffraction produces the pattern (named "Fresnel guilloches") inserted into the original image. The insertion is a trade-off: the high-density mark should be barely visible, or even invisible, so as not to disturb the perception of the image content, yet detectable, so that the encoded signature can later be read and compared to the hash of the photograph being verified. Print and scan makes the task harder. Both the Fresnel guillochage and the associated authentication are tested on a (reduced) image database.
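A minimal NumPy sketch (wavelength, distance, grid, and shape positions are illustrative) of the core operation: virtually diffracting a binary shape cloud with the Fresnel transfer function to obtain a dense mark. The disk positions stand in for hash-encoding positions and are hypothetical.

```python
import numpy as np

n, pitch = 256, 10e-6               # grid size and pixel pitch (illustrative)
wavelength, z = 633e-9, 0.05        # wavelength and propagation distance

# Shape cloud: opaque disks at positions that would encode the hash bits
y, x = np.mgrid[:n, :n]
plane = np.ones((n, n), dtype=complex)
for cx, cy in [(64, 64), (128, 200), (200, 96)]:    # hypothetical encoded positions
    plane[(x - cx)**2 + (y - cy)**2 < 8**2] = 0.0

# Fresnel transfer function applied in the Fourier domain
fx = np.fft.fftfreq(n, d=pitch)
FX, FY = np.meshgrid(fx, fx)
H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
diffracted = np.fft.ifft2(np.fft.fft2(plane) * H)

mark = np.abs(diffracted)**2        # intensity pattern to blend into the photograph
```

Reading the signature back amounts to numerically propagating the marked image in the opposite direction and locating the refocused shapes, as in in-line hologram reconstruction.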
860

CSG modeling for polygonal objects

Václavík, Jiří January 2012 (has links)
This work deals with an efficient and robust technique for performing Boolean operations on polygonal models. Full robustness is achieved within an internal representation based on planes and BSP (binary space partitioning) trees, in which the operations can be carried out exactly in mere fixed-precision arithmetic. The necessary conversions from the usual representation to the internal one and back, including their consequences, are analyzed in detail. The performance of the method is optimized by a localization scheme in the form of an adaptive octree. The resulting implementation, RazeCSG, is experimentally compared with implementations used in practice, Carve and Maya, which are not fully robust. For large models, RazeCSG is at worst two times slower than Carve, and is at least 130 times faster than Maya.
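A minimal sketch of why the plane-based representation permits exact Boolean classification: with integer plane coefficients (Python integers standing in here for the fixed-precision arithmetic), the side-of-plane predicate involves no rounding. The full BSP merge is beyond a short example.

```python
from typing import Tuple

Plane = Tuple[int, int, int, int]   # a*x + b*y + c*z + d = 0, integer coefficients

def classify(point: Tuple[int, int, int], plane: Plane) -> int:
    """Exact side test: +1 in front, -1 behind, 0 on the plane (no rounding)."""
    a, b, c, d = plane
    x, y, z = point
    s = a * x + b * y + c * z + d   # pure integer arithmetic, hence exact
    return (s > 0) - (s < 0)

# Every predicate needed by BSP-based Booleans reduces to sign computations
# like this one, so classification never suffers floating-point drift.
print(classify((1, 2, 3), (1, 0, 0, -1)))   # 0: the point lies on the plane x = 1
```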
