391

Confirmatory Factor Analysis of the Rogers African American Masculinity Scale

Rogers, Baron Kenley 15 August 2021 (has links)
No description available.
392

The Community of Inquiry Survey Instrument: Measurement Invariance in the Community College Population

Chambers, Roger Antonio 05 1900 (has links)
This study aimed to test measurement invariance between community college students and university students in their responses to the Community of Inquiry (CoI) Survey instrument. Most prior studies have validated the CoI survey with undergraduate or graduate university samples. This study sought to validate the survey as a reliable instrument for the community college population. It employed SEM and meta-analytic procedures to test measurement invariance between a Monte Carlo-generated general university sample and the community college survey sample. The findings are discussed, along with their implications for CoI studies in the community college.
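As a rough, hypothetical illustration of one step in such an invariance analysis (not taken from the thesis): nested invariance models (e.g., configural vs. metric) are commonly compared with a chi-square difference test. The fit statistics below are placeholders.

```python
# Hypothetical sketch: likelihood-ratio (chi-square difference) test between
# nested multi-group CFA models, a standard step in measurement invariance testing.
from scipy.stats import chi2

def chi_square_difference(chisq_constrained, df_constrained,
                          chisq_configural, df_configural):
    """A non-significant p-value suggests the added equality constraints
    (e.g., equal loadings across groups) do not worsen fit, supporting invariance."""
    delta_chisq = chisq_constrained - chisq_configural
    delta_df = df_constrained - df_configural
    p_value = chi2.sf(delta_chisq, delta_df)
    return delta_chisq, delta_df, p_value

# Placeholder fit statistics for configural vs. metric invariance models
print(chi_square_difference(512.4, 470, 488.1, 452))
```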
393

Deep learning compact and invariant image representations for instance retrieval

Morère, Olivier André Luc 08 July 2016 (has links)
We previously conducted a comparative study of FV and CNN descriptors for image instance retrieval. That study showed in particular that CNN-derived descriptors lack invariance to transformations such as rotations or scale changes. We first show how pooling reductions applied on the image database side strongly reduce the impact of these problems. Some variants preserve the dimensionality of the descriptor associated with an image, while others increase it at the cost of query time. We then propose Nested Invariance Pooling (NIP), an original method for producing, from CNN-derived descriptors, global descriptors that are invariant to multiple transformations. NIP is inspired by i-theory, a mathematical theory recently proposed for computing group-invariant transformations within feedforward neural networks. We show that NIP yields compact (but non-binary) global descriptors that are robust to rotations and scale changes, and that NIP outperforms other methods of equivalent dimensionality on most image datasets. Finally, we show that combining NIP with the previously proposed RBMH hashing method produces binary codes that are both compact and invariant to several types of transformations. Evaluated on medium- and large-scale image datasets, NIP+RBMH outperforms the state of the art, particularly for very small binary descriptors (32 to 256 bits). / Image instance retrieval is the problem of finding an object instance present in a query image from a database of images. Also referred to as particular object retrieval, this problem typically entails determining with high precision whether the retrieved image contains the same object as the query image. Scale, rotation and orientation changes between query and database objects and background clutter pose significant challenges for this problem. State-of-the-art image instance retrieval pipelines consist of two major steps: first, a subset of images similar to the query are retrieved from the database, and second, Geometric Consistency Checks (GCC) are applied to select the relevant images from the subset with high precision. The first step is based on comparison of global image descriptors: high-dimensional vectors with up to tens of thousands of dimensions representing the image data. The second step is computationally highly complex and can only be applied to hundreds or thousands of images in practical applications. More discriminative global descriptors result in relevant images being more highly ranked, resulting in fewer images that need to be compared pairwise with GCC. As a result, better global descriptors are key to improving retrieval performance and have been the object of much recent interest. Furthermore, fast searches in large databases of millions or even billions of images require the global descriptors to be compressed into compact representations. This thesis will focus on how to achieve extremely compact global descriptor representations for large-scale image instance retrieval.
After introducing background concepts about supervised neural networks, the Restricted Boltzmann Machine (RBM) and deep learning in Chapter 2, Chapter 3 will present the design principles and recent work on Convolutional Neural Networks (CNN), which recently became the method of choice for large-scale image classification tasks. Next, an original multistage approach for the fusion of the output of multiple CNN is proposed. Submitted as part of the ILSVRC 2014 challenge, results show that this approach can significantly improve classification results. The promising performance of CNN is largely due to their capability to learn appropriate high-level visual representations from the data. Inspired by a stream of recent works showing that the representations learnt on one particular classification task can transfer well to other classification tasks, subsequent chapters will focus on the transferability of representations learnt by CNN to image instance retrieval…
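A rough, hypothetical sketch of the general idea of pooling CNN descriptors over a transformation group (not the thesis implementation: NIP uses nested moment pooling, while plain averaging over rotations is used here as a stand-in; the resnet18 backbone and the four-rotation group are assumptions).

```python
# Hypothetical sketch: make a CNN descriptor approximately rotation-invariant by
# pooling features extracted from rotated copies of the image.
import torch
import torchvision.transforms.functional as TF
from torchvision.models import resnet18, ResNet18_Weights

backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # use the 512-d pooled feature as a global descriptor
backbone.eval()

@torch.no_grad()
def invariant_descriptor(image: torch.Tensor, angles=(0, 90, 180, 270)) -> torch.Tensor:
    """image: (3, H, W) tensor, already normalized for the backbone."""
    feats = []
    for angle in angles:
        rotated = TF.rotate(image, angle)              # one element of the rotation group
        feats.append(backbone(rotated.unsqueeze(0)))   # (1, 512) descriptor
    pooled = torch.stack(feats, dim=0).mean(dim=0)     # average-pool over the group
    return torch.nn.functional.normalize(pooled, dim=-1)  # L2-normalize for retrieval

desc = invariant_descriptor(torch.rand(3, 224, 224))
print(desc.shape)  # torch.Size([1, 512])
```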
394

Determining the number of classes in latent class regression models / A Monte Carlo simulation study on class enumeration

Luo, Sherry January 2021 (has links)
A Monte Carlo simulation study on class enumeration with latent class regression models. / Latent class regression (LCR) is a statistical method used to identify qualitatively different groups, or latent classes, within a heterogeneous population; it is commonly used in the behavioural, health, and social sciences. Despite its wide application, which fit index correctly determines the number of latent classes remains hotly debated, and there are conflicting views on whether covariates should be included in the class enumeration process. We conduct a simulation study to determine the impact of covariates on class enumeration accuracy and to examine the performance of several commonly used fit indices under different population models and modelling conditions. Our results indicate that, of the eight fit indices considered, the aBIC and BLRT are the best performing fit indices for class enumeration. Furthermore, we find that covariates should not be included in the enumeration procedure. Our results illustrate that an unconditional LCA model enumerates as accurately as a conditional LCA model with its true covariate specification. Even in the presence of large covariate effects in the population, the unconditional model enumerates with high accuracy. As noted by Nylund and Gibson (2016), a misspecified covariate specification can easily lead to an overestimation of latent classes. We therefore recommend performing class enumeration without covariates and determining a set of candidate latent class models with the aBIC. The BLRT can then be applied to the candidate set to confirm whether its results match those of the aBIC. Separating the BLRT from the initial enumeration step still allows its use while reducing the heavy computational burden associated with this fit index. Subsequent analysis can then proceed once the number of latent classes is determined. / Thesis / Master of Science (MSc)
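As a rough, hypothetical illustration of class enumeration by information criteria (not the thesis code): the sample-size-adjusted BIC (aBIC) compared across candidate class counts, using a Gaussian mixture with simulated data as a stand-in for a latent class model.

```python
# Hypothetical sketch: pick the number of classes by minimizing the aBIC
# (sample-size-adjusted BIC) over candidate mixture models.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (300, 2))])  # two true classes

def adjusted_bic(model, X):
    n = X.shape[0]
    p = model._n_parameters()           # number of free parameters
    log_lik = model.score(X) * n        # total log-likelihood
    return -2 * log_lik + p * np.log((n + 2) / 24.0)

for k in range(1, 5):
    gm = GaussianMixture(n_components=k, random_state=0).fit(X)
    print(k, round(gm.bic(X), 1), round(adjusted_bic(gm, X), 1))
# The k that minimizes the aBIC is the candidate number of classes.
```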
395

The Existence of a Discontinuous Homomorphism Requires a Strong Axiom of Choice

Andersen, Michael Steven 01 December 2014 (has links) (PDF)
Conner and Spencer used ultrafilters to construct homomorphisms between fundamental groups that could not be induced by continuous functions between the underlying spaces. We use methods from Shelah and Pawlikowski to prove that Conner and Spencer could not have constructed these homomorphisms with a weak version of the Axiom of Choice. This led us to define and examine a class of pathological objects that cannot be constructed without a strong version of the Axiom of Choice, which we call the class of inscrutable objects. Objects that do not need a strong version of the Axiom of Choice are scrutable. We show that the scrutable homomorphisms from the fundamental group of a Peano continuum are exactly the homomorphisms induced by a continuous function. We suspect that any proposed theorem whose proof does not use a strong Axiom of Choice cannot have an inscrutable counterexample.
396

Multi-view Geometric Constraints For Human Action Recognition And Tracking

Gritai, Alexei 01 January 2007 (has links)
Human actions are the essence of a human life and a natural product of the human mind. Analysis of human activities by a machine has attracted the attention of many researchers. This analysis is very important in a variety of domains, including surveillance, video retrieval, human-computer interaction, and athlete performance investigation. This dissertation makes three major contributions to the automatic analysis of human actions. First, we conjecture that the relationship between the body joints of two actors in the same posture can be described by a 3D rigid transformation. This transformation simultaneously captures different poses and various sizes and proportions. As a consequence of this conjecture, we show that there exists a fundamental matrix between the imaged positions of the body joints of two actors if they are in the same posture. Second, we propose a novel projection model for cameras moving at a constant velocity in 3D space, termed Galilean cameras, derive the corresponding Galilean fundamental matrix, and apply it to human action recognition. Third, we propose a novel use of the ratio of areas, which is invariant under an affine transformation, together with the epipolar geometry between two cameras, for 2D model-based tracking of human body joints. In the first part of the thesis, we propose an approach to match human actions using semantic correspondences between human bodies. These correspondences are used to provide geometric constraints between multiple anatomical landmarks (e.g., hands, shoulders, and feet) to match actions observed from different viewpoints and performed at different rates by actors of differing anthropometric proportions. The fact that the human body has approximately consistent anthropometric proportions allows an innovative use of the machinery of epipolar geometry to provide constraints for analyzing actions performed by people of different anthropometric sizes, while ensuring that changes in viewpoint do not affect matching. A novel measure, in terms of the rank of a matrix constructed only from image measurements of the locations of anatomical landmarks, is proposed to ensure that similar actions are accurately recognized. Finally, we describe how dynamic time warping can be used in conjunction with the proposed measure to match actions in the presence of nonlinear time warps. We demonstrate the versatility of our algorithm in a number of challenging sequences and applications, including action synchronization, odd-one-out detection, following the leader, and periodicity analysis. Next, we extend the conventional model of image projection to video captured by a camera moving at constant velocity; we term such a moving camera a Galilean camera. To that end, we derive the spacetime projection and develop the corresponding epipolar geometry between two Galilean cameras. Both perspective imaging and linear pushbroom imaging are specializations of the proposed model, and we show how six different "fundamental" matrices, including the classic fundamental matrix, the Linear Pushbroom (LP) fundamental matrix, and a fundamental matrix relating Epipolar Plane Images (EPIs), are related and can be directly recovered from a Galilean fundamental matrix. We provide linear algorithms for estimating the parameters of the mapping between videos in the case of planar scenes. For applying the fundamental matrix between Galilean cameras to human action recognition, we propose a measure with two important properties. The first property makes it possible to recognize similar actions if their execution rates are linearly related.
The second property allows recognizing actions in video captured by Galilean cameras. Thus, the proposed algorithm guarantees that actions can be correctly matched despite changes in view, execution rate, and anthropometric proportions of the actor, even if the camera moves with constant velocity. Finally, we propose a novel 2D model-based approach for tracking human body parts during articulated motion. The human body is modeled as a 2D stick figure of thirteen body joints, and an action is considered as a sequence of these stick figures. Given the locations of these joints in every frame of a model video and in the first frame of a test video, the joint locations are automatically estimated throughout the test video using two geometric constraints. First, the invariance of the ratio of areas under an affine transformation is used for an initial estimate of the joint locations in the test video. Second, the epipolar geometry between the two cameras is used to refine these estimates. Using these estimated joint locations, the tracking algorithm determines the exact location of each landmark in the test video using the foreground silhouettes. The novelty of the proposed approach lies in the geometric formulation of human action models, the combination of two geometric constraints for body-joint prediction, and the handling of deviations in anthropometry of individuals, viewpoints, execution rate, and style of performing the action. The proposed approach does not require extensive training and can easily adapt to a wide variety of articulated actions.
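A rough, hypothetical sketch of one way the rank-based same-posture measure could be instantiated (the dissertation's exact construction may differ): if two sets of imaged joint locations admit a common fundamental matrix, the standard epipolar-constraint matrix built from the correspondences is rank-deficient, so its smallest singular value can serve as a matching score. The joint coordinates below are synthetic.

```python
# Hypothetical sketch: score how consistent two sets of imaged body-joint locations
# are with a single fundamental matrix (i.e., with the same 3D posture).
import numpy as np

def posture_measure(joints_a: np.ndarray, joints_b: np.ndarray) -> float:
    """joints_a, joints_b: (N, 2) arrays of corresponding joint image coordinates.

    Each correspondence (x, y) <-> (x', y') gives one row of the epipolar-constraint
    matrix A with A f = 0 for the stacked fundamental matrix f. If a fundamental
    matrix exists, rank(A) <= 8, so the smallest singular value of A is near zero.
    """
    x, y = joints_a[:, 0], joints_a[:, 1]
    xp, yp = joints_b[:, 0], joints_b[:, 1]
    A = np.column_stack([xp * x, xp * y, xp, yp * x, yp * y, yp, x, y, np.ones_like(x)])
    return float(np.linalg.svd(A, compute_uv=False)[-1])

# Synthetic example: 13 joints seen in two views give a (13, 9) constraint matrix.
rng = np.random.default_rng(1)
print(posture_measure(rng.random((13, 2)), rng.random((13, 2))))
```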
397

On the Equivalence of Time-Varying CBF-Based Control and Prescribed Performance Control: Conversion and Qualitative Comparison

Namerikawa, Ryo January 2023 (has links)
These days, a wide range of autonomous systems, such as automobiles, delivery drones, and embedded household systems, are becoming increasingly common in our society, and this trend is projected to continue. To manage these dynamic systems effectively, ensuring their safe operation is crucial for the well-being of our lives. Control of safety-critical systems has gained significant attention in recent years, particularly in the field of nonlinear control. While the mathematical tools for characterizing safety are well established, numerous challenges remain in developing methodologies for synthesizing nonlinear control systems. This report investigates the similarity between two control schemes, prescribed performance control and control barrier function (CBF) based control, with the aim of shedding light on the development of control methodology for safety-critical systems. While both methods have been successfully constructed and developed in recent years, no existing report clarifies their similarities. To gain a deeper understanding of the latest safety-critical control and investigate these similarities, this report aims to provide useful insights and contribute to the further development of the methodology. The key insight arises from the fact that prescribed performance control can be considered a method based on barrier functions; consequently, it can be regarded as a control-barrier-based controller. In order to demonstrate the similarities and make a comparison between the two, a unified problem setting is presented. Once the problem is properly converted, a comparison can be carried out using numerical simulations. The results presented in this report demonstrate that the prescribed performance controller can be implemented using separate reciprocal CBF methods. Furthermore, the performance achieved is comparable to that of the CLF-CBF QP, which uses optimization techniques to ensure stability and safety requirements. These findings raise new questions regarding the relationship between these two approaches. Ultimately, the report develops a deeper understanding of how model-free methods achieve superior performance compared to model-based methods that rely heavily on optimization.
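A rough, hypothetical sketch of the control-barrier-function machinery referenced above (not the report's own code): a min-norm CBF-QP safety filter for a single integrator with one circular obstacle, solved in closed form since there is a single affine constraint.

```python
# Hypothetical sketch of a CBF-QP safety filter for a single integrator x_dot = u.
# The barrier h(x) = ||x - x_obs||^2 - r^2 keeps the state outside a disk of radius r
# around x_obs; the QP  min ||u - u_nom||^2  s.t.  dh/dt + alpha*h >= 0  has a
# closed-form solution when there is only one affine constraint.
import numpy as np

def cbf_safety_filter(x, u_nom, x_obs, r, alpha=1.0):
    h = np.dot(x - x_obs, x - x_obs) - r**2      # barrier value
    grad_h = 2.0 * (x - x_obs)                   # dh/dx
    constraint = grad_h @ u_nom + alpha * h      # Lie derivative + class-K term
    if constraint >= 0.0:
        return u_nom                             # nominal input is already safe
    # Otherwise project u_nom onto the half-space {u : grad_h @ u + alpha*h >= 0}
    return u_nom - (constraint / (grad_h @ grad_h)) * grad_h

x = np.array([1.5, 0.0])
u_nominal = np.array([-1.0, 0.0])                # heading toward the obstacle at the origin
print(cbf_safety_filter(x, u_nominal, x_obs=np.zeros(2), r=1.0))
```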
398

Design of a next-generation modern Michelson-Morley experiment

Nagel, Moritz 18 March 2022 (has links)
The next big leap in understanding the working principles of nature can be expected from a quantum physical description of gravity. None of the quantum gravity (QG) candidate theories can currently be verified, since observations at the Planck scale have so far been impossible. Still, experiments can help to give insights. For example, the Standard-Model Extension (SME) describes potentially observable low-energy remnant Planck-scale effects. With this in mind, a design for a next-generation modern Michelson-Morley experiment has been developed that allows extracting upper bounds on potentially observable remnant Planck-scale effects in the equations of motion of photons and fermions simultaneously. The corresponding theoretical model within the framework of the SME has been revisited and discrepancies have been corrected. The experimental setup consists of co-rotating ultra-stable cryogenic optical resonators (COREs) and ultra-stable sapphire-loaded cryogenic microwave whispering-gallery resonators. The developed COREs have a predicted thermal-noise-limited fractional frequency instability on the order of 10^-17 at liquid-helium temperatures. The cryogenic microwave resonators allow in principle a performance on the order of 10^-16. For noise reduction, a suitable low-noise cryogenic system and turntable system have been designed. In parallel, a one-year modern Michelson-Morley measurement campaign with a sensitivity on the order of 10^-18 was carried out using the cryogenic microwave resonators.
The experiment has allowed setting new stringent, fully disentangled upper bounds on remnant Planck-scale effects on the order of 10^-17. Based on the frequency performance of the COREs and cryogenic microwave resonators in the next-generation experimental setup, a sensitivity to remnant Planck-scale effects on the order of 10^-20 can be estimated. Thus, the designed setup has the potential to explore the hypothetical Planck-suppressed regime using electromagnetic resonators for the first time.
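As a hedged aside (not from the thesis): fractional frequency instabilities such as 10^-17 are conventionally reported as Allan deviations; a minimal overlapping Allan deviation estimator on simulated data is sketched below.

```python
# Hypothetical sketch: overlapping Allan deviation of fractional-frequency samples,
# the standard statistic behind instability figures like "10^-17".
import numpy as np

def overlapping_allan_deviation(y: np.ndarray, m: int) -> float:
    """y: fractional frequency samples at spacing tau0; m: averaging factor (tau = m*tau0)."""
    ybar = np.convolve(y, np.ones(m) / m, mode="valid")  # all overlapping tau-averages
    diffs = ybar[m:] - ybar[:-m]                          # differences of adjacent tau-averages
    return np.sqrt(0.5 * np.mean(diffs**2))

rng = np.random.default_rng(0)
y = 1e-15 * rng.standard_normal(100_000)                  # simulated white frequency noise
for m in (1, 10, 100, 1000):
    print(m, overlapping_allan_deviation(y, m))           # falls roughly as 1/sqrt(m)
```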
399

Estimating Football Position from Context

Queiroz Gongora, Lucas January 2021 (has links)
Tracking algorithms provide a model for recognizing objects' past motion; combined with an artificial intelligence algorithm, they also allow, to some degree, forecasting the future position of an object. This thesis uses deep learning algorithms to predict the ball's position in three-dimensional (3D) Cartesian space given the motion of the players and referees in 2D space. The algorithms implemented are the encoder-decoder attention-based Transformer and Inception Time, an ensemble of convolutional neural networks. They are compared with each other under different parametrizations to understand their ability to capture temporal and spatial aspects of the tracking data for ball prediction. Inception Time proved more inconsistent across different areas of the pitch, especially near the end lines and corners, motivating the choice of the Transformer network as the preferred algorithm for predicting the ball position, since it achieved less volatile errors across the pitch.
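A minimal, hypothetical sketch (not the thesis implementation, which uses an encoder-decoder architecture) of regressing the 3D ball position from a window of 2D tracking frames with a Transformer encoder; the agent count, feature sizes, and hyperparameters are assumptions.

```python
# Hypothetical sketch: predict the 3D ball position from a window of 2D tracking
# frames (player and referee coordinates) with a Transformer encoder.
import torch
import torch.nn as nn

class BallPositionTransformer(nn.Module):
    def __init__(self, n_agents=25, d_model=128, n_heads=4, n_layers=3):
        super().__init__()
        self.embed = nn.Linear(n_agents * 2, d_model)   # flatten (x, y) of all agents per frame
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 3)               # (x, y, z) of the ball

    def forward(self, frames):
        # frames: (batch, time, n_agents * 2) tracking coordinates
        h = self.encoder(self.embed(frames))
        return self.head(h[:, -1])                      # ball position at the last frame

model = BallPositionTransformer()
window = torch.randn(8, 50, 25 * 2)                     # 8 sequences of 50 frames
print(model(window).shape)                              # torch.Size([8, 3])
```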
400

Workplace Social Courage in the United States and India: A Measurement Invariance Study

Sturgis, Grayson D. January 2022 (has links)
No description available.
