251

Generalizace vrstevnic v rovinatých územích / Simplification of contour lines in flat areas

Čelonk, Marek January 2021 (has links)
The diploma thesis focuses on the cartographic generalization of contour lines derived from dense point clouds in flat territories, where the original contour lines tend to oscillate. The main aim is to propose, develop, and test a new algorithm for contour simplification that preserves a given vertical error and reflects cartographic rules. Three methods designed for large-scale maps (1 : 10 000 and larger) are presented: the weighted average, a modified Douglas-Peucker, and a potential-based approach. The most promising method is based on the repeated simplification of contour line segments by calculating the generalization potential of their vertices. The algorithm, implemented in Python 2.7 using the Arcpy library, was tested on DMR5G data, and the simplified contour lines were compared with the result created by a professional cartographer. The achieved results are presented on the attached topographic maps. Keywords: contours, cartographic generalization, digital cartography, vertical buffer, smoothing, GIS
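The thesis's potential-based method and its modified Douglas-Peucker are not reproduced in this record, but the classical Douglas-Peucker baseline they build on is well known. Below is a minimal Python sketch of that baseline, with a single distance tolerance standing in for the thesis's vertical-error criterion; all names are illustrative.

```python
# Minimal sketch of classical Douglas-Peucker line simplification.
# This illustrates only the baseline the thesis modifies; the
# potential-based method and vertical-buffer test are not shown here.
import math

def perpendicular_distance(p, a, b):
    """Distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    norm = math.hypot(dx, dy)
    if norm == 0.0:
        return math.hypot(px - ax, py - ay)
    # Parallelogram area divided by base length gives the height.
    return abs(dx * (ay - py) - (ax - px) * dy) / norm

def douglas_peucker(points, tolerance):
    """Recursively keep vertices farther than `tolerance` from the chord."""
    if len(points) < 3:
        return list(points)
    a, b = points[0], points[-1]
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], a, b)
        if d > dmax:
            dmax, idx = d, i
    if dmax <= tolerance:
        return [a, b]  # the whole span collapses onto one chord
    left = douglas_peucker(points[: idx + 1], tolerance)
    right = douglas_peucker(points[idx:], tolerance)
    return left[:-1] + right  # avoid duplicating the split vertex
```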
252

Omezení větných forem gramatik s rozptýleným kontextem / A Restriction of Sentential Forms of Scattered Context Grammars

Šimáček, Jiří January 2008 (has links)
This work introduces and discusses generalized scattered context grammars, which are based upon sequences of productions whose left-hand sides are formed by nonterminal strings, not just single nonterminals. It places two restrictions on the derivations in these grammars. More specifically, let k be a constant. The first restriction requires that, during every derivation step, all rewritten symbols occur within the first k symbols of the first continuous block of nonterminals in the sentential form. The other restriction admits only derivations over sentential forms containing no more than k occurrences of nonterminals. As its main result, the thesis demonstrates that both restrictions decrease the generative power of these grammars to the power of context-free grammars.
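To make the first restriction concrete, here is a minimal sketch (our illustration, not the thesis's formalism) that models nonterminals as uppercase characters and checks whether a proposed set of rewrite positions falls within the first k symbols of the first continuous nonterminal block.

```python
# Illustrative check for the first restriction: every rewritten position
# must lie within the first k symbols of the first maximal run of
# nonterminals (modeled as uppercase characters). All names are ours.

def first_nonterminal_block(form):
    """Return (start, end) of the first maximal run of nonterminals."""
    start = None
    for i, s in enumerate(form):
        if s.isupper():
            if start is None:
                start = i
        elif start is not None:
            return start, i
    return (start, len(form)) if start is not None else (None, None)

def rewrite_positions_allowed(form, positions, k):
    """True iff all rewritten positions lie in the first k symbols
    of the first nonterminal block of the sentential form."""
    start, end = first_nonterminal_block(form)
    if start is None:
        return False  # no nonterminals left to rewrite
    window_end = min(start + k, end)
    return all(start <= p < window_end for p in positions)

# In 'aABBc' with k = 2, only positions 1 and 2 may be rewritten.
assert rewrite_positions_allowed("aABBc", [1, 2], 2)
assert not rewrite_positions_allowed("aABBc", [3], 2)
```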
253

STRUCTURED PREDICTION: STATISTICAL AND COMPUTATIONAL GUARANTEES IN LEARNING AND INFERENCE

Kevin Segundo Bello Medina (11196552) 28 July 2021 (has links)
Structured prediction consists of receiving a structured input and producing a combinatorial structure such as trees, clusters, networks, sequences, and permutations, among others. From the computational viewpoint, structured prediction is in general considered intractable because the size of the output space is exponential in the input size. For instance, in image segmentation tasks, the number of admissible segmentations is exponential in the number of pixels. A second factor is the input dimensionality combined with the amount of available data: in structured prediction the input commonly lives in a high-dimensional space, which requires jointly reasoning about thousands or millions of variables while contending with a limited amount of data. Thus, learning and inference methods with strong computational and statistical guarantees are desired. The focus of our research is to propose principled methods for structured prediction that both run in polynomial time, i.e., are computationally efficient, and require a polynomial number of data samples, i.e., are statistically efficient.

The main contributions of this thesis are as follows:

(i) We develop an efficient and principled learning method of latent variable models for structured prediction under Gaussian perturbations. We derive a Rademacher-based generalization bound and argue that the use of non-convex formulations in learning latent-variable models leads to tighter bounds of the Gibbs decoder distortion.

(ii) We study the fundamental limits of structured prediction, i.e., we characterize the sample complexity necessary for learning factor graph models in the context of structured prediction. In particular, we show that the finiteness of our novel MaxPair-dimension is necessary for learning. Lastly, we show a connection between the MaxPair-dimension and the VC-dimension, which allows existing results on the VC-dimension to be used to calculate the MaxPair-dimension.

(iii) We analyze a generative model based on connected graphs and find the structural conditions of the graph that allow exact recovery of the node labels. In particular, we show that exact recovery is achievable in polynomial time for a large class of graphs. Our analysis is based on convex relaxations, where we thoroughly analyze a semidefinite program and a degree-4 sum-of-squares program. Finally, we extend this model to include linear constraints (e.g., fairness) and formally explain the effect of the added constraints on the probability of exact recovery.
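As a rough illustration of the convex-relaxation approach mentioned in contribution (iii), the following sketch applies the textbook semidefinite relaxation for two-class label recovery on a graph. It is a generic example under that assumption, not the specific program analyzed in the thesis, and uses the cvxpy modeling library.

```python
# Textbook SDP relaxation for recovering labels x in {-1,+1}^n from a
# (weighted) affinity matrix W: relax max x^T W x to a PSD matrix X
# with unit diagonal, then round via the top eigenvector. Illustrative
# only; not the thesis's exact program.
import cvxpy as cp
import numpy as np

def sdp_label_recovery(W):
    n = W.shape[0]
    X = cp.Variable((n, n), PSD=True)          # relaxation of x x^T
    prob = cp.Problem(cp.Maximize(cp.trace(W @ X)),
                      [cp.diag(X) == np.ones(n)])
    prob.solve()
    # Round the relaxed solution with its leading eigenvector.
    vals, vecs = np.linalg.eigh(X.value)
    return np.sign(vecs[:, -1])
```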
254

Social Skill Generalization with "Book in a Bag": Integrating Social Skills into the Literacy Curriculum at a School-Wide Level

Alger, Buddy Dennis 15 June 2012 (has links) (PDF)
Social skill instruction is needed in both targeted and universal contexts. This research utilized a universal social skill intervention, Book in a Bag (BIB), to increase the use of a specific social skill by all students within an elementary school, including students identified as at risk for behavior problems. BIB was designed to integrate social skills into the curriculum by way of children's literature, specifically a read-aloud book taught with a direct instruction strategy. The results indicate that BIB had a positive effect on classroom behavior both for students identified as at risk for behavior problems and for those not so identified. Outcomes suggest that students used the skill across a variety of instructional, independent-work, and group-work settings. Teachers rated the intervention as acceptable and noted positive changes in their classrooms. Implications of this research for practice include using BIB as a universal intervention to target specific social skill deficits and using social skill instruction to increase positive student behavior.
255

MetaStackVis: Visually-Assisted Performance Evaluation of Metamodels in Stacking Ensemble Learning

Ploshchik, Ilya January 2023 (has links)
Stacking, also known as stacked generalization, is an ensemble learning method in which multiple base models are trained on the same dataset and their predictions are used as input for one or more metamodels in an extra layer. This technique can lead to improved performance compared to single-layer ensembles, but often requires a time-consuming trial-and-error process. Therefore, the previously developed Visual Analytics system, StackGenVis, was designed to help users select the set of the most effective and diverse models and measure their predictive performance. However, StackGenVis was developed with only one metamodel: Logistic Regression. The focus of this Bachelor's thesis is to examine how alternative metamodels affect the performance of stacked ensembles, through the use of a visualization tool called MetaStackVis. Our interactive tool facilitates visual examination of individual metamodels and pairs of metamodels based on their predictive probabilities (or confidence), various supported validation metrics, and their accuracy in predicting specific problematic data instances. The efficiency and effectiveness of MetaStackVis are demonstrated with an example based on a real healthcare dataset. The tool has also been evaluated through semi-structured interview sessions with Machine Learning and Visual Analytics experts. In addition to this thesis, we have written a short research paper explaining the design and implementation of MetaStackVis. However, this thesis provides further insights into the topic explored in the paper by offering additional findings and in-depth analysis, and can thus be considered a supplementary source of information for readers interested in diving deeper into the subject.
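For readers unfamiliar with swapping metamodels, a minimal sketch using scikit-learn's StackingClassifier (not StackGenVis or MetaStackVis themselves; the base models and dataset are placeholders) shows how the final estimator can be exchanged and compared.

```python
# Compare two metamodels on the same stacked ensemble; the dataset and
# base learners are placeholders, not those used in the thesis.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
base = [("rf", RandomForestClassifier(random_state=0)),
        ("knn", KNeighborsClassifier())]

# The original Logistic Regression metamodel versus an SVC alternative.
for name, meta in [("logreg", LogisticRegression(max_iter=5000)),
                   ("svc", SVC(probability=True))]:
    stack = StackingClassifier(estimators=base, final_estimator=meta)
    score = cross_val_score(stack, X, y, cv=5).mean()
    print(f"metamodel={name}: mean CV accuracy={score:.3f}")
```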
256

Dynamics of learning and generalization in neural networks

Pezeshki, Mohammad 08 1900 (has links)
Neural networks perform remarkably well in a wide variety of machine learning tasks and have had a profound impact on the very definition of artificial intelligence (AI). However, despite their significant role in the current state of AI, it is important to realize that we are still far from achieving human-level intelligence. A critical step in further improving neural networks is to advance our theoretical understanding, which in fact lags behind our practical developments. A key challenge in building theoretical foundations for deep learning is the complex optimization dynamics of neural networks, resulting from the high-dimensional interactions between a large number of network parameters. Such non-trivial dynamics lead to puzzling empirical behaviors that, in some cases, stand in stark contrast with existing theoretical predictions. The lack of overfitting in over-parameterized networks, their reliance on spurious correlations, and double-descent generalization curves are among the perplexing generalization behaviors of neural networks. In this dissertation, our goal is to study some of these perplexing phenomena as different pieces of the same puzzle, a puzzle in which every phenomenon serves as a guiding signal towards developing a better understanding of neural networks. We present three articles towards this goal.

The first article, on multi-scale feature learning dynamics, investigates the reasons underlying the double-descent generalization curve observed in modern neural networks. A central finding is that epoch-wise double descent can be attributed to distinct features being learned at different scales: as fast-learning features overfit, slower-learning features start to fit, resulting in a second descent in test error.

The second article, on gradient starvation, identifies a fundamental phenomenon that can result in a learning proclivity in neural networks. Gradient starvation arises when a neural network learns to minimize the loss by capturing only a subset of the features relevant for classification, despite the presence of other informative features that fail to be discovered. We discuss how gradient starvation can have both beneficial and adverse consequences on generalization performance.

The third article, on simple data balancing methods, conducts an empirical study of generalization to underrepresented groups when the training data suffers from substantial imbalances. This work looks into models that generalize well on average but fail to generalize to minority groups of examples. Our key finding is that simple data balancing methods already achieve state-of-the-art accuracy on minority groups, which calls for closer examination of benchmarks and methods for research in out-of-distribution generalization.

These three articles take steps towards bringing insights into the inner mechanics of neural networks, identifying the obstacles to building reliable models, and providing practical suggestions for training neural networks.
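As a toy illustration of the gradient starvation phenomenon described in the second article (our own sketch, not the article's code), the following numpy example trains a logistic model on one easy, large-margin feature and one informative but noisy feature, and prints how the gradient for the noisy feature collapses once the easy feature drives the margins up.

```python
# Toy gradient starvation: once the "easy" feature separates the data,
# the logistic-loss gradient for the second, still-informative feature
# shrinks and that feature is barely learned. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
y = rng.choice([-1.0, 1.0], size=n)
easy = y * 3.0 + rng.normal(0.0, 0.1, n)   # large margin, low noise
hard = y * 0.5 + rng.normal(0.0, 1.0, n)   # informative but noisy
X = np.stack([easy, hard], axis=1)

w = np.zeros(2)
lr = 0.1
for step in range(201):
    margins = y * (X @ w)
    # Gradient of the mean logistic loss log(1 + exp(-margin)),
    # with margins clipped to avoid overflow in exp.
    weights = y / (1.0 + np.exp(np.clip(margins, -50, 50)))
    g = -(X * weights[:, None]).mean(axis=0)
    w -= lr * g
    if step % 50 == 0:
        print(f"step {step:3d}: |grad_easy|={abs(g[0]):.4f}  "
              f"|grad_hard|={abs(g[1]):.4f}  w={w.round(3)}")
```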
257

Using Self-Directed Video Prompting for Skill Acquisition With Post-Secondary Students With Intellectual Developmental Disabilities

Jimenez, Eliseo D. 30 December 2014 (has links)
No description available.
258

Application of Heuristic Optimization Techniques in Land Evaluation

Kovalskyy, Valeriy January 2004 (has links)
No description available.
259

Multicategory psi-learning and support vector machine

Liu, Yufeng 18 June 2004 (has links)
No description available.
260

The Effects of a Computer Assisted Reading Program on the Oral Reading Fluency and Comprehension of At-Risk, Urban First Grade Students

Gibson, Lenwood, Jr. 11 September 2009 (has links)
No description available.
