  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
161

Construct bias in the Differential Ability Scales, Second Edition (DAS-II): a comparison among African American, Asian, Hispanic, and White ethnic groups

Trundt, Katherine Marie 11 September 2013
Intelligence testing has had a relatively long and controversial history, beginning with what is generally considered the first formal measure of intelligence, the Binet-Simon Scales (1916). Questions regarding possible cultural bias in these measures arose virtually simultaneously (e.g., Burt, 1921; Stern, 1914). Over the course of the twentieth and early twenty-first centuries, an abundance of intelligence measures has been developed, many of them undergoing several revisions, but the issue of test bias remains an important one, both in the professional literature and in the popular press (Reynolds & Lowe, 2009). One measure in current use, the Differential Ability Scales, Second Edition (DAS-II; Elliott, 2007), is a test with growing popularity for the assessment of children and youth, not only for its ease of use but also for its appeal to young children and its nonverbal composite, among other features. Consequently, it is essential that there be empirical evidence supporting the use of the DAS-II as an appropriate measure of cognitive abilities for children of varying backgrounds. The test publishers conducted extensive research with a representative sample during test development to ensure that the measure met adequate reliability and validity criteria; however, the issue of test bias, particularly across cultural or racial/ethnic groups, was not explicitly addressed. This issue was raised and examined with the original DAS by Keith, Quirk, Schartzer, and Elliott (1999), but given the significant changes made from the first edition to the second, there is no guarantee that evidence from the earlier edition applies to the later one. The current study investigated whether the DAS-II demonstrates systematic construct bias toward children and youth of any of four ethnic groups: Black, Hispanic, Asian, and White.
Multi-group confirmatory factor analysis using data from the DAS-II standardization sample was used to assess whether criteria for increasingly strict levels of invariance were met across groups. Outcomes of this research contribute to the existing body of literature on test bias and provide evidence regarding cross-group construct validity in the DAS-II. Ultimately, the results of this study can be used to evaluate the appropriateness of the DAS-II for clinical use with particular ethnic groups, and they further emphasize the importance of exploring these issues with all standardized tests.
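In multi-group confirmatory factor analysis, increasingly strict invariance levels (configural, metric, scalar, strict) are compared as nested models, typically with a chi-square difference test. The following is only a minimal sketch of that comparison step, with hypothetical fit statistics; it does not reproduce the DAS-II analysis or its numbers.

```python
from scipy.stats import chi2

def chisq_difference_test(chisq_restricted, df_restricted, chisq_free, df_free):
    """Likelihood-ratio (chi-square difference) comparison of two nested CFA
    models: a non-significant difference supports the stricter invariance level."""
    d_chisq = chisq_restricted - chisq_free
    d_df = df_restricted - df_free
    p_value = chi2.sf(d_chisq, d_df)
    return d_chisq, d_df, p_value

# Hypothetical example: metric model (equal loadings) vs. configural model
d_chisq, d_df, p = chisq_difference_test(112.4, 54, 104.1, 48)
```

If p exceeds the chosen alpha, the constrained (more invariant) model is retained and the next, stricter level is tested.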
162

Computational applications of invariance principles

Meka, Raghu Vardhan Reddy 14 August 2015
This thesis focuses on applications of classical tools from probability theory and convex analysis, such as limit theorems, to problems in theoretical computer science, specifically to pseudorandomness and learning theory. At first look, limit theorems, pseudorandomness, and learning theory appear to be disparate subjects. However, as has now become apparent, there is a strong connection among these questions through a third, more abstract one: what do random objects look like? This connection is best illustrated by the study of the spectrum of Boolean functions, which has directly or indirectly played an important role in a plethora of results in complexity theory. The current thesis aims to take this program further by drawing on a variety of fundamental tools, both classical and new, from probability theory and analytic geometry. Our research contributions broadly fall into three categories. Probability theory: The central limit theorem is one of the most important and most richly studied results in probability. Motivated by questions in pseudorandomness and learning theory, we obtain two new limit theorems, or invariance principles. The proofs of these new results, of interest in their own right, have a computer-science flavor and belong to the small family of techniques from theoretical computer science with applications in pure mathematics. Pseudorandomness: Derandomizing natural complexity classes is a fundamental problem in complexity theory, with several applications outside it. Our work addresses such derandomization questions for natural and basic geometric concept classes such as halfspaces, polynomial threshold functions (PTFs), and polytopes. We develop a reasonably generic framework for obtaining pseudorandom generators (PRGs) from invariance principles and apply it to old and new invariance principles to obtain the best known PRGs for these complexity classes.
Learning theory: Learning theory aims to understand which functions can be learned efficiently from examples. As developed in the seminal work of Linial, Mansour, and Nisan (1994) and strengthened by several follow-up works, there are strong connections between the learnability of a class of functions and how sensitive to noise those functions are, as quantified by average sensitivity and noise sensitivity. Besides their applications in learning, bounds on average and noise sensitivity have applications in hardness of approximation, voting theory, quantum computing, and more. Here we address the question of bounding the sensitivity of polynomial threshold functions and intersections of halfspaces, and obtain the best known results for these concept classes.
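Average sensitivity counts, on average over a uniformly random input, how many single-bit flips change a Boolean function's value. A small brute-force illustration of the definition (not the bounds proved in the thesis):

```python
from itertools import product

def average_sensitivity(f, n):
    """Average number of coordinates i such that flipping x_i changes f(x),
    over a uniformly random x in {0,1}^n, computed by exhaustive enumeration."""
    total = 0
    for x in product((0, 1), repeat=n):
        for i in range(n):
            y = list(x)
            y[i] ^= 1  # flip coordinate i
            if f(x) != f(tuple(y)):
                total += 1
    return total / 2 ** n

majority3 = lambda x: int(sum(x) >= 2)  # a halfspace on 3 variables
parity3 = lambda x: sum(x) % 2          # maximally sensitive function
```

Parity attains the maximum, n, while halfspaces such as majority have average sensitivity on the order of the square root of n; such sensitivity bounds are exactly what the learning results above exploit.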
163

A positive-invariance approach and artificial intelligence techniques for the regulation of signalized intersections

Motawej, Fadi 30 May 2012
Traffic regulation at a signalized intersection generally addresses two distinct objectives: maintaining free flow or resolving congestion. In the first case, one avoids reaching a very dense traffic state by adjusting the switching durations of the lights according to the demand arriving at the intersection: this is an a priori action. In the second case, one faces saturated traffic (a state of congestion) and must act a posteriori. In this work, we focus mainly on upstream action (a priori control) that avoids congestion by forcing the queues not to exceed the traffic level corresponding to the operational optimum of the lanes. More precisely, after modeling the system, we propose a state-feedback control based on the concept of positive invariance of sets that achieves this objective. Two approaches are used: the first relies on linear matrix inequalities (LMIs); the second uses the concept of (A-B)-invariance derived from the generalization of Farkas' theorem. We then enrich both approaches with neural network techniques for estimating the inflows to the intersection, in order to guarantee the real-time feasibility of the proposed control. Finally, the results of this work are applied to a real intersection on Boulevard Anatole France to demonstrate their practical interest.
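The core idea of positive invariance — choosing the control so that the closed-loop state can never leave a prescribed set — can be illustrated on a toy scalar queue. This is only a hedged sketch with a saturated feedback and hypothetical numbers, not the LMI or (A-B)-invariance designs developed in the thesis.

```python
def simulate_queue(x0, arrivals, x_max, service_max):
    """Scalar queue x[k+1] = max(0, x[k] + d[k] - u[k]) with a saturated
    state feedback keeping the state inside the invariant set [0, x_max],
    feasible whenever each arrival d[k] <= service_max."""
    x, history = x0, [x0]
    for d in arrivals:
        # serve just enough to stay below x_max, within the service capacity
        u = min(service_max, max(0.0, x + d - x_max))
        x = max(0.0, x + d - u)
        history.append(x)
    return history

states = simulate_queue(x0=5.0, arrivals=[3, 6, 2, 7, 4, 6],
                        x_max=10.0, service_max=8.0)
```

Once the queue enters [0, x_max], every subsequent state remains there: that is the defining property of a positively invariant set under the feedback law.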
164

Seismic noise: the good, the bad and the ugly

Herrmann, Felix J., Wilkinson, Dave January 2007
In this paper, we present a nonlinear curvelet-based sparsity-promoting formulation for three problems related to seismic noise, namely the 'good', corresponding to noise generated by random sampling; the 'bad', corresponding to coherent noise for which (inaccurate) predictions exist; and the 'ugly', for which no predictions exist. We show that the compressive capabilities of curvelets on seismic data and images can be used to tackle these three categories of noise-related problems.
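The sparsity-promoting recipe — transform, shrink small coefficients, transform back — can be sketched with a Fourier transform standing in for the curvelet transform (curvelets themselves require a dedicated library). A hedged, one-dimensional illustration on a signal that is sparse in the chosen transform domain:

```python
import numpy as np

def soft_threshold_denoise(signal, thresh):
    """Denoise by soft-thresholding transform coefficients. The FFT here is
    a stand-in for a sparsifying transform such as curvelets."""
    coeffs = np.fft.rfft(signal)
    mags = np.abs(coeffs)
    # soft-threshold: shrink magnitudes by `thresh`, zeroing small coefficients
    shrink = np.maximum(1.0 - thresh / np.maximum(mags, 1e-12), 0.0)
    return np.fft.irfft(coeffs * shrink, n=signal.size)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 256, endpoint=False)
clean = np.sin(2 * np.pi * 4 * t)            # sparse in the Fourier domain
noisy = clean + 0.3 * rng.standard_normal(t.size)
denoised = soft_threshold_denoise(noisy, thresh=20.0)
```

Because the clean signal concentrates in a few large coefficients while random noise spreads over all of them, thresholding suppresses the noise while largely preserving the signal.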
165

Robust seismic amplitude recovery using curvelets

Moghaddam, Peyman P., Herrmann, Felix J., Stolk, Christiaan C. January 2007
In this paper, we recover the amplitude of a seismic image by approximating the normal (demigration-migration) operator. In this approximation, we make use of the property that curvelets remain invariant under the action of the normal operator. We propose a seismic amplitude recovery method that employs an eigenvalue-like decomposition of the normal operator, with curvelets acting as eigenvectors. Subsequently, we propose an approximate nonlinear singularity-preserving solution to the least-squares seismic imaging problem with sparseness in the curvelet domain and spatial continuity constraints. Our method is tested with a reverse-time 'wave-equation' migration code simulating the acoustic wave equation on the SEG-AA salt model.
166

Coupling distances between Lévy measures and applications to noise sensitivity of SDE

Gairing, Jan, Högele, Michael, Kosenkova, Tetiana, Kulik, Alexei January 2013
We introduce the notion of coupling distances on the space of Lévy measures in order to quantify rates of convergence towards a limiting Lévy jump diffusion in terms of its characteristic triplet, in particular in terms of the tail of the Lévy measure. The main result yields an estimate of the Wasserstein-Kantorovich-Rubinstein distance on path space between two Lévy diffusions in terms of the coupling distances. We aim to apply this to obtain precise rates of convergence for Markov chain approximations and a statistical goodness-of-fit test for low-dimensional conceptual climate models with paleoclimatic data.
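In one dimension, the optimal coupling behind the Wasserstein-Kantorovich-Rubinstein distance simply matches order statistics, which gives a compact empirical estimator. A sketch on the real line (equal sample sizes only), rather than on path space as in the paper:

```python
import numpy as np

def wasserstein1_empirical(xs, ys):
    """W1 distance between two empirical measures with equally many atoms:
    in 1D the optimal coupling pairs the sorted samples."""
    xs = np.sort(np.asarray(xs, dtype=float))
    ys = np.sort(np.asarray(ys, dtype=float))
    if xs.size != ys.size:
        raise ValueError("this sketch assumes equal sample sizes")
    return float(np.mean(np.abs(xs - ys)))
```

Shifting a sample by a constant c moves it exactly c in this metric, while reordering a sample costs nothing — the two defining sanity checks for a coupling-based distance.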
167

Nonlinear State Estimation and Modeling of a Helicopter UAV

Barczyk, Martin Unknown Date
No description available.
168

Measurement invariance of health-related quality of life: a simulation study and numeric example

Sarkar, Joykrishna 23 September 2010
Measurement invariance (MI) is a prerequisite for valid comparisons of health-related quality of life (HRQOL) measures across distinct populations. This research investigated the performance of estimation methods for testing MI hypotheses in complex survey data using a simulation study, and demonstrated the application of these methods for an HRQOL measure. Four forms of MI were tested using confirmatory factor analysis. The simulation study showed that the maximum likelihood method performed best for small sample sizes and low intraclass correlation (ICC), whereas pseudo-maximum likelihood with weights and clustering effects performed better for large sample sizes with high ICC when testing configural invariance. Both methods performed similarly in testing the other forms of MI. In the numeric example, MI of one HRQOL measure in the Canadian Community Health Survey was investigated and established for Aboriginal and non-Aboriginal populations with chronic conditions, indicating that the two populations had similar conceptualizations of quality of life.
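The intraclass correlation that drove the simulation conditions measures how much of the total variance lies between clusters of a complex survey. A minimal one-way ANOVA estimator for balanced clusters — a generic sketch, not the study's estimation procedure:

```python
import numpy as np

def icc_oneway(groups):
    """ICC(1) from a balanced one-way layout: the share of total variance
    attributable to differences between clusters."""
    k = len(groups)          # number of clusters
    n = groups[0].size       # observations per cluster (balanced design assumed)
    grand = np.mean(np.concatenate(groups))
    ms_between = n * sum((g.mean() - grand) ** 2 for g in groups) / (k - 1)
    ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (k * (n - 1))
    return (ms_between - ms_within) / (ms_between + (n - 1) * ms_within)
```

When all variation is between clusters the estimate is 1; when clusters are indistinguishable it falls to zero or below, which is the regime where ignoring the survey's clustering is least harmful.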
169

Nonlinear control and visual servoing of autonomous robots

Dib, Alaa 21 October 2011
This thesis addresses the problem of moving and localizing an autonomous mobile robot in its local environment. It is structured in two parts. The first part of the manuscript deals with the two basic motion tasks, namely stabilization and trajectory tracking. Two control strategies were studied: integral sliding mode and the so-called "Immersion and Invariance" method. The second part deals with visual servoing; both 2D and 3D visual servoing techniques were applied. Image moments were chosen as visual features because they are less sensitive to image noise and other measurement errors. A new approach to image-based visual servoing is proposed here. It is based on generating trajectories directly in the image plane (computing the values of the image features corresponding to a given Cartesian trajectory). This approach ensures that the well-known robustness and stability of 2D visual servoing are preserved, because the initial and desired camera locations remain close along the trajectory. The resulting trajectories also guarantee that the target stays in the camera's field of view and that the corresponding robot motion is physically feasible. Experimental tests were carried out and satisfactory results were obtained with implementations of the control and visual servoing strategies. Although developed and tested in the specific setting of a unicycle-type robot, this work is generic enough to be applied to other types of vehicles.
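Image moments of the kind used as visual features above can be computed directly from pixel intensities. A minimal sketch of raw moments and the centroid feature (the full feature set and control law of the thesis are not reproduced):

```python
import numpy as np

def raw_moment(img, p, q):
    """Raw image moment m_pq = sum over pixels of x^p * y^q * I(x, y)."""
    ys, xs = np.indices(img.shape)  # row index plays y, column index plays x
    return float((xs ** p * ys ** q * img).sum())

def centroid(img):
    """Centre of gravity (x_g, y_g) = (m10 / m00, m01 / m00)."""
    m00 = raw_moment(img, 0, 0)
    return raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00

blob = np.zeros((5, 5))
blob[3, 2] = 1.0  # single bright pixel at column x=2, row y=3
```

In a servo loop, the controller drives such moment features toward their values at the desired camera pose; their integral nature is what makes them robust to per-pixel noise.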
170

Statistical analysis of multifractal random walk processes

Duvernet, Laurent 01 December 2010
We study certain properties of a class of real-valued continuous-time random processes, multifractal random walks. A remarkable feature of these processes is their self-similarity: the law of the process at small scale is identical to the law at large scale, up to a random multiplicative factor independent of the process. The first part of the thesis addresses the convergence of the empirical moment of the increments of the process in a fairly general asymptotic setting, in which the increment step may tend to zero while the observation horizon tends to infinity. The second part proposes a family of nonparametric tests that distinguish multifractal random walks from Itô semimartingales. After establishing the consistency of these tests, we study their behavior on simulated data. In the third part, we construct an asymmetric multifractal random walk process such that the past increment is negatively correlated with the square of the future increment. This type of leverage effect is observed in particular in the prices of stocks and financial indices. We compare the empirical properties of the resulting process with real data. The fourth part concerns the estimation of the parameters of the process. We first show that, under certain conditions, two of the three parameters cannot be estimated. We then study the theoretical and empirical performance of various estimators of the third parameter, the intermittency coefficient, in a Gaussian case.
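The empirical moments of increments studied in the first part are structure functions, M_q(l) = mean |X(t+l) - X(t)|^q, whose log-log slope in the lag l estimates the scaling exponent. A hedged sketch of that estimation, checked here on the trivial path X(t) = t (exponent exactly q) rather than on a multifractal random walk:

```python
import numpy as np

def scaling_exponent(path, q, lags):
    """Fit zeta(q) in  mean |X(t+l) - X(t)|^q ~ l^zeta(q)
    by least-squares regression in log-log coordinates."""
    log_l = [np.log(l) for l in lags]
    log_m = [np.log(np.mean(np.abs(path[l:] - path[:-l]) ** q)) for l in lags]
    slope, _ = np.polyfit(log_l, log_m, 1)
    return slope
```

For a multifractal process, zeta(q) is nonlinear in q; the deviation from a straight line is one empirical signature separating such processes from ordinary self-similar ones.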
