
Korpuslinguistiese ondersoek na pragmatiese merkers in Omgangsafrikaans

Fourie, Annamarie 01 1900 (has links)
Text in Afrikaans with summaries in Afrikaans, English and Tswana / Includes bibliographical references (leaves 183-193) / Pragmatic markers in interactional Afrikaans serve as important contextualising cues. They guide interlocutors as to the relevance of utterances and equip the speaker to signal, in a succinct way, an attitude towards the proposition of the utterance. They also contribute to the conversation structure. The systematic investigation of pragmatic markers follows an eclectic approach: relevance theory, grammaticalisation theory, discourse analysis, sociopragmatics and corpus linguistics are engaged to study and explain the phenomenon. The pragmatic markers "rêrig/regtig", "oukei", "soos", "hoor" and "weet" are studied on the basis of the Pretoriakorpus van Omgangsafrikaans (PO) owing to their high frequency in the corpus. A comparison of the usage frequencies of these pragmatic markers among various groups of speakers indicates that young, adult and elderly men and women use them differently. The respective functions offer clues by which the grammaticalisation of pragmatic markers may be traced. Young female speakers appear to take the lead over young male speakers in the use and development of pragmatic markers, and the study further found that adult female speakers in particular contribute actively to the development of these markers. / Afrikaans and Theory of Literature / M.A. (Afrikaans)
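The core quantitative step of the study above, comparing how often each pragmatic marker occurs in the speech of different speaker groups, can be sketched as follows. The utterances and group labels here are invented placeholders; the actual study drew on the Pretoriakorpus van Omgangsafrikaans (PO), which is not reproduced here.

```python
from collections import Counter

# Hypothetical utterances tagged with a speaker group (stand-ins for PO data).
corpus = [
    ("young_female", "ek het soos rêrig gedink jy weet"),
    ("young_male", "oukei dit was oukei"),
    ("adult_female", "hoor hier dit is rêrig waar"),
]

# The five high-frequency markers examined in the study.
MARKERS = {"rêrig", "regtig", "oukei", "soos", "hoor", "weet"}

def marker_frequencies(corpus):
    """Count pragmatic-marker tokens per speaker group."""
    counts = {}
    for group, utterance in corpus:
        tokens = utterance.lower().split()
        counts.setdefault(group, Counter())
        counts[group].update(t for t in tokens if t in MARKERS)
    return counts

freqs = marker_frequencies(corpus)
```

Comparing the resulting per-group counts (normalised by group size in a real study) is what supports claims such as young female speakers leading in marker use.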

Augmenting High-Dimensional Data with Deep Generative Models / Högdimensionell dataaugmentering med djupa generativa modeller

Nilsson, Mårten January 2018 (has links)
Data augmentation is a technique that can be performed in various ways to improve the training of discriminative models. Recent developments in deep generative models offer new ways of augmenting existing data sets. In this thesis, a framework for augmenting annotated data sets with deep generative models is proposed, together with a method for quantitatively evaluating the quality of the generated data sets. Using this framework, two data sets for pupil localization were generated with different generative models, including both well-established models and a novel model proposed for this purpose. The novel model was shown, both qualitatively and quantitatively, to generate the best data sets. A set of smaller experiments on standard data sets also revealed cases where this generative model could improve the performance of an existing discriminative model. The results indicate that generative models can be used to augment or replace existing data sets when training discriminative models.
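The augmentation framework described above, at its simplest, mixes real annotated samples with samples drawn from a trained generative model before training a discriminative model. The sketch below assumes a hypothetical `generator` callable as a stand-in for the thesis's actual models.

```python
import random

def generator():
    # Stand-in for sampling an (image, annotation) pair from a trained
    # deep generative model; a real generator would return image data.
    return ([random.random() for _ in range(4)], "pupil")

def augment(real_samples, n_generated, generator):
    """Return a training set of real samples plus n_generated synthetic ones."""
    synthetic = [generator() for _ in range(n_generated)]
    return real_samples + synthetic

real = [([0.1, 0.2, 0.3, 0.4], "pupil")]
train_set = augment(real, n_generated=3, generator=generator)
```

Evaluating a discriminative model trained on `train_set` against one trained on `real` alone is the kind of comparison the thesis's quantitative evaluation method formalises.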

ANALYSIS OF LATENT SPACE REPRESENTATIONS FOR OBJECT DETECTION

Ashley S Dale (8771429) 03 September 2024 (has links)
Deep Neural Networks (DNNs) successfully perform object detection tasks, and the Convolutional Neural Network (CNN) backbone is a commonly used feature extractor before secondary tasks such as detection, classification, or segmentation. In a DNN model, the relationship between the features learned by the model from the training data and the features leveraged by the model during test and deployment has motivated the area of feature interpretability studies. The work presented here applies equally to white-box and black-box models and to any DNN architecture. The metrics developed do not require any information beyond the feature vector generated by the feature extraction backbone. These methods are therefore the first capable of estimating black-box model robustness in terms of latent space complexity and the first capable of examining feature representations in the latent space of black-box models.

This work contributes the following four novel methodologies and results. First, a method for quantifying the invariance and/or equivariance of a model using the training data shows that the representation of a feature in the model impacts model performance. Second, a method for quantifying an observed domain gap in a dataset using the latent feature vectors of an object detection model is paired with pixel-level augmentation techniques to close the gap between real and synthetic data. This improves the model's F1 score on a test set of outliers from 0.5 to 0.9. Third, a method for visualizing and quantifying similarities of the latent manifolds of two black-box models is used to correlate similar feature representations with increased success in the transferability of gradient-based attacks. Finally, a method for examining the global complexity of decision boundaries in black-box models is presented, where more complex decision boundaries are shown to correlate with increased model robustness to gradient-based and random attacks.
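One simple instantiation of the second contribution above, quantifying a domain gap from latent feature vectors alone, is the distance between the centroids of the real and synthetic feature distributions. This is a hedged sketch of the idea, not the thesis's actual metric, and the toy 2-D vectors stand in for backbone feature vectors.

```python
import math

def mean_vector(features):
    """Centroid of a list of equal-length feature vectors."""
    dim = len(features[0])
    return [sum(f[i] for f in features) / len(features) for i in range(dim)]

def domain_gap(real_features, synthetic_features):
    """Euclidean distance between the centroids of two latent-feature sets."""
    mr = mean_vector(real_features)
    ms = mean_vector(synthetic_features)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(mr, ms)))

real = [[0.0, 0.0], [2.0, 0.0]]    # centroid (1, 0)
synth = [[4.0, 3.0], [6.0, 3.0]]   # centroid (5, 3)
gap = domain_gap(real, synth)
```

A shrinking gap after pixel-level augmentation of the synthetic data would indicate, as in the abstract, that the two domains have been brought closer in latent space.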

Late Mesozoic to Cenozoic erosion and sediment dispersal in the Dinaride orogen: a sedimentary provenance approach / Spätmesozoische bis Känozoische Erosion und Sedimentschüttung im Dinarischen Orogen: Ansätze aus der Provenanzanalyse

Mikes, Tamás 16 December 2008 (has links)
No description available.

Towards meaningful and data-efficient learning : exploring GAN losses, improving few-shot benchmarks, and multimodal video captioning

Huang, Gabriel 09 1900 (has links)
In recent years, the field of deep learning has seen tremendous progress in applications ranging from image generation, object detection, and language modeling to visual question answering. Classic approaches such as supervised learning require large amounts of task-specific, labeled data, which may be too expensive, time-consuming, or impractical to collect. Data-efficient methods, such as few-shot and self-supervised learning, attempt to deal with the limited availability of task-specific data by leveraging large amounts of general data. Progress in deep learning, and in few-shot learning in particular, is largely driven by the relevant benchmarks, evaluation metrics, and datasets, which are used to test and compare different methods on a given task and to determine the state of the art. However, because they are idealized versions of the task to solve, benchmarks are rarely equivalent to the original task and can have several limitations that hinder their role of identifying the most promising research directions. Moreover, defining meaningful evaluation metrics can be challenging, especially in the case of high-dimensional and structured outputs such as images, audio, speech, or text. This thesis discusses the limitations and perspectives of existing benchmarks, training losses, and evaluation metrics, with a focus on generative modeling (Generative Adversarial Networks, GANs, in particular) and data-efficient modeling, which includes few-shot and self-supervised learning.

The first contribution is a discussion of the generative modeling task, followed by an exploration of theoretical and empirical properties of the GAN loss. The second contribution is a discussion of a limitation of few-shot classification benchmarks, namely that some may not require generalization to new class semantics to be solved, and the proposal of a baseline method for solving them without test-time labels. The third contribution is a survey of few-shot and self-supervised object detection, which points out the limitations of and promising future research for the field. Finally, the fourth contribution is a data-efficient method for video captioning, which leverages unsupervised text and video datasets and explores several multimodal pretraining strategies.
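For concreteness, the "GAN loss" whose theoretical and empirical properties the first contribution explores can be illustrated with the standard minimax discriminator loss and the non-saturating generator loss from the original GAN formulation, evaluated here on toy discriminator outputs. This is a generic illustration, not the thesis's own formulation.

```python
import math

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy: push D(real) -> 1 and D(fake) -> 0."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss_nonsaturating(d_fake):
    """Non-saturating generator loss: push D(fake) -> 1."""
    return -math.log(d_fake)

# A well-fooled discriminator (D(fake) near 1) yields a low generator loss,
# and a maximally confused discriminator (D = 0.5 everywhere) yields 2*ln(2).
loss_good = generator_loss_nonsaturating(0.9)
loss_bad = generator_loss_nonsaturating(0.1)
```

Properties of these losses, such as their gradient behavior when the discriminator confidently rejects fakes, are exactly the kind of theoretical question the contribution examines.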
