  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

O discurso formativo do Biólogo sobre a morte: matizes e metáforas do saber que o sujeito não deseja saber

Santos, Valdecí dos 18 September 2008 (has links)
Made available in DSpace on 2014-12-17T14:35:53Z (GMT). No. of bitstreams: 1 ValdeciS.pdf: 571587 bytes, checksum: 5b1f79ea57b1f1783ecefd3cb316e648 (MD5) Previous issue date: 2008-09-18 / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This study - The biologist's formative discourse on death: nuances and metaphors of the knowledge the subject does not wish to know - examines a marginal cognitive construction in the biologist's scientific education: death. It takes as evident that death is a theme encompassing both the biologist's scientific education and the division of the subject, concerning the splitting of the life-death double and the principles of inclusion and exclusion of the subject. It starts from the sensitizing question: what is the epistemic weave that supports the biologist's discourse on death? The object of study is the biologist's discourse on death. The thesis defended is that death is an epistemological obstacle, announcing that something will always escape the objective perspective of knowledge, especially scientific knowledge: understood as the cognitive construction about the rupture of the biological phenomenon of life, it is implicated in a weave of imaginary and symbolic constructions about the finiteness of life. It thus constitutes a metaphorical knowledge - fostered by a noisy silence - that does not allow itself to be known in full, mobilizing the subject to search for transitory truths that reduce the ontological anguish of being mortal, centered in the subjective dimension involved in the act of knowing.
It is in this movement of searching that the mental object of life after death gains a symbolic-real value that calls for a multi-referential gaze at the object of study of Biology - life - and its implication, the finiteness of life, especially by displacing the omnipotence of scientific objectivity expressed through signs and symbols that claim the completeness of scientific knowledge. This signals the dynamics of incompleteness implicit in the subjectivity that underpins knowledge about the life-death double and the temporality of the existence of Homo sapiens sapiens, with the subject's desire not to want to know about death as its guiding axis, implicit in the objective-subjective mechanisms grounded in the unsaid of death that constitutes the epistemology of the existence of the objective-subjective subject, whose core is the negation of death. The theoretical-methodological web is anchored in multi-referentiality, which allows movement across theoretical currents such as psychoanalysis, Bachelardian philosophy, the epistemology of complexity, thanatology, social psychology and ethnoscenology, and in the comprehensive interview. The object of study is unveiled through the analysis of the oral discourse of eleven biologists who teach in secondary education, along three guiding axes: death in the life history, death in the biologist's academic education, and conceptions about concepts. / Este estudo - O discurso formativo do biólogo sobre a morte. Matizes e metáforas do saber que o sujeito não deseja saber - evidencia uma construção cognitiva marginal na formação científica do biólogo - a morte. Considera como evidente que a morte é um tema que abrange, simultaneamente, a formação científica do biólogo e a cisão do sujeito, e diz respeito à cisão do duplo vida-morte e aos princípios de inclusão e de exclusão do sujeito. Parte da questão sensibilizadora: Qual a tessitura epistêmica que fundamenta o discurso do biólogo sobre a morte?
Constitui objeto de estudo o discurso do biólogo sobre a morte. Defende a tese que: A morte é um obstáculo epistemológico anunciador de que algo, sempre, escapará na perspectiva objetiva do conhecimento, especialmente do conhecimento científico, visto que, compreendida como a construção cognitiva sobre a ruptura do fenômeno biológico vida, está implicada na tessitura de construções imaginárias e simbólicas sobre a finitude da vida; constitui-se um saber metafórico - fomentado pelo silêncio ruidoso -, que não se permite conhecer por inteiro, mobilizando, assim, o sujeito à busca/procura de verdades transitórias que reduzam a angústia ontológica de ser-mortal nucleada na dimensão subjetiva implicada no ato de conhecer. É nesse movimento de busca/procura que o objeto mental vida pós-morte ganha um valor simbólico-real que requer um olhar multirreferencial para o objeto de estudo da Biologia - a vida - e a sua implicação: a finitude da vida, especialmente, por deslocar a onipotência da objetividade científica expressa por signos e símbolos que procuram dizer da completude do conhecimento científico -, sinalizando, assim, a existência da dinâmica da incompletude implícita na subjetividade que fundamenta a construção de saberes relativos ao duplo vida-morte e à temporalidade da existência do Homo sapiens sapiens, tendo como eixo norteador o desejo do sujeito, de não desejar saber sobre a morte, implícito nos mecanismos objetivos-subjetivos fundamentados pelo não-dito da morte que constitui a epistemologia da existência do sujeito objetivo-subjetivo, cujo núcleo é a negação da morte. A teia epistêmica teórico-metodológica ancora-se na Multirreferencialidade, que favorece um trânsito por correntes teóricas, como a Psicanálise, a filosofia bachelardiana, a epistemologia da complexidade, a Tanatologia, a Psicologia Social e a Etnocenologia, e na Entrevista Compreensiva.
O desvelamento do objeto de estudo parte da análise dos discursos orais de onze biólogas que atuam no Ensino Médio da Educação Básica, a partir de três eixos norteadores: A morte na história de vida, A morte na formação acadêmica do biólogo e Concepções sobre conceitos
92

Alignement paramétrique d’images : proposition d’un formalisme unifié et prise en compte du bruit pour le suivi d’objets

Authesserre, Jean-baptiste 02 December 2010 (has links)
L’alignement d’images paramétrique a de nombreuses applications pour la réalité augmentée, la compression vidéo ou encore le suivi d’objets. Dans cette thèse, nous nous intéressons notamment aux techniques de recalage d’images (template matching) reposant sur l’optimisation locale d’une fonctionnelle d’erreur. Ces approches ont conduit ces dernières années à de nombreux algorithmes efficaces pour le suivi d’objets. Cependant, les performances de ces algorithmes ont été peu étudiées lorsque les images sont dégradées par un bruit important comme c’est le cas, par exemple, pour des captures réalisées dans des conditions de faible luminosité. Dans cette thèse, nous proposons un nouveau formalisme, appelé formalisme bidirectionnel, qui unifie plusieurs approches de l’état de l’art. Ce formalisme est utilisé dans un premier temps pour porter un éclairage nouveau sur un grand nombre d’approches de la littérature et en particulier sur l’algorithme ESM (Efficient Second-order Minimization). Nous proposons ensuite une étude théorique approfondie de l’influence du bruit sur le processus d’alignement. Cette étude conduit à la définition de deux nouvelles familles d’algorithmes, les approches ACL (Asymmetric Composition on Lie Groups) et BCL (Bidirectional Composition on Lie Groups) qui permettent d’améliorer les performances en présence de niveaux de bruit asymétriques (rapport signal sur bruit différent dans les images). L’ensemble des approches introduites sont validées sur des données synthétiques et sur des données réelles capturées dans des conditions de faible luminosité. / Parametric image alignment is a fundamental task in many vision applications such as object tracking, image mosaicking, video compression and augmented reality. To recover the motion parameters, direct image alignment works by optimizing a pixel-based difference measure between a moving image and a fixed image called the template.
In the last decade, many efficient algorithms have been proposed for parametric object tracking. However, those approaches have not been evaluated for aligning images with a low SNR (signal-to-noise ratio), such as images captured in low-light conditions. In this thesis, we propose a new formulation of image alignment, called the Bidirectional Framework, that unifies existing state-of-the-art algorithms. First, this framework allows us to produce new insights into existing approaches, and in particular into the ESM (Efficient Second-order Minimization) algorithm. Subsequently, we provide a theoretical analysis of the influence of image noise on the alignment process. This yields the definition of two new approaches: the ACL (Asymmetric Composition on Lie Groups) algorithm and the BCL (Bidirectional Composition on Lie Groups) algorithm, which outperform existing approaches in the presence of images with different SNRs. Finally, experiments on synthetic and real images captured under low-light conditions allow us to evaluate the new and existing approaches under various noise conditions.
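The direct-alignment setting summarized above can be made concrete with a deliberately minimal sketch: a translation-only template matcher that exhaustively searches integer shifts for the minimum sum-of-squared-differences (SSD) error. This is only an illustration of the error functional being optimized, not the ESM, ACL or BCL algorithms from the thesis, which perform local Gauss-Newton-like optimization over richer parametric warps.

```python
import numpy as np

def align_translation(template, image, max_shift=5):
    """Return the integer shift (dy, dx) minimizing the SSD between the
    template and the corresponding crop of the image."""
    h, w = template.shape
    best_err, best_shift = np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            crop = image[max_shift + dy:max_shift + dy + h,
                         max_shift + dx:max_shift + dx + w]
            err = np.sum((crop - template) ** 2)
            if err < best_err:
                best_err, best_shift = err, (dy, dx)
    return best_shift

# Synthetic check: carve a template out of a scene at a known offset.
rng = np.random.default_rng(0)
scene = rng.random((40, 40))
template = scene[10:20, 12:22]          # 10x10 patch at scene position (10, 12)
image = scene[3:23, 9:29]               # 20x20 window; patch sits at (7, 3) in it
print(align_translation(template, image))  # -> (2, -2)
```

In a noisy-image setting like the one studied in the thesis, the SSD surface itself becomes a random quantity, which is exactly why the asymmetry of the noise levels between the two images matters.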
93

Effet de l'intrication brouillée sur la téléportation quantique

Coiteux-Roy, Xavier 12 1900 (has links)
La téléportation quantique promet d'être centrale à de nombreuses applications du futur tels la cryptographique quantique et l'ordinateur quantique. Comme toute mise en œuvre physique s'accompagne inévitablement d'imperfections expérimentales, on étudie la téléportation dans un contexte où la ressource quantique, c'est-à-dire l'intrication, que l'on consomme est brouillée. Pour ce faire, on introduit en premier lieu le formalisme de l'informatique quantique. En seconde partie, on approche les protocoles de téléportation quantique standard, de téléportation avec relais quantiques et de téléportation multi-ports. Notre analyse de la téléportation standard et de la téléportation multi-ports poursuit trois objectifs principaux. Le premier est de comparer l'emploi d'un canal brouillé pour la téléportation d'un état quantique avec l'utilisation de ce même canal pour l'envoi direct de l'état. On trouve ainsi les conditions pour lesquelles les deux protocoles de transmission sont équivalents. Le second but est d'observer le caractère non-local de l'intrication brouillée en regardant quand et comment Alice peut réduire le bruit chez elle à un bruit exclusivement chez Bob. En troisième, on quantifie par une borne inférieure la qualité d'un canal de téléportation en réduisant l'effet de toute intrication brouillée à celui d'un bruit de Pauli à un seul paramètre. On accomplit cette tâche en effaçant au moment approprié l'information classique superflue et en appliquant la wernerisation. Finalement, on analyse la composition de bruits de Pauli et l'effet du taux d'effacement sur la téléportation avec relais quantiques pour mieux comprendre comment se combinent les effets de l'intrication brouillée dans un réseau de téléportation quantique. La suite logique est d'établir des protocoles plus robustes de téléportation quantique qui prennent en compte l'effet de l'intrication brouillée. 
/ Quantum teleportation will be a centerpiece of practical quantum cryptography and quantum computing in the near future. As no physical implementation is perfect, we study quantum teleportation in the context of impaired quantum resources, which we call noisy entanglement. In the first part, we introduce how quantum mechanics is formalized by quantum information theory. In the second part, we study standard quantum teleportation, in both the absence and presence of quantum repeaters, as well as port-based teleportation. Our analysis of standard quantum teleportation and port-based teleportation follows three main directions. The first goal is to compare the use of a noisy channel for teleportation with the use of the same channel for direct transmission. We thus find the conditions under which the two cases are equivalent. Our second objective is to observe the non-local properties of noisy entanglement by finding when and how Alice can shift the noise on her side so that it acts exclusively on Bob's side. Thirdly, we quantify, in the worst-case scenario, the quality of a teleportation channel by reducing the effect of any noisy entanglement to that of a one-parameter Pauli channel, which can be interpreted as a depolarizing channel in most instances. We achieve this task by erasing unneeded classical information at the appropriate time and by twirling either the entanglement or the teleported state. Finally, we analyze the composition of Pauli noises and the impact of the erasure-channel parameter on the protocol of teleportation with quantum repeaters. We thus aim to understand how the effects of noisy entanglement accumulate in a teleportation network. The next logical step is to create robust teleportation schemes that take into account the effects of noisy entanglement.
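The reduction described above - collapsing noisy entanglement to a single-parameter Pauli channel - can be illustrated numerically in its simplest case. The sketch below is an illustration of standard textbook relations, not the thesis's own protocol: it applies depolarizing noise to Bob's half of a Bell pair and uses the known formula f = (2F + 1)/3 relating the resource's singlet fidelity F to the average fidelity f of standard teleportation over it.

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Bell state |Phi+> = (|00> + |11>)/sqrt(2) as a density matrix.
phi = np.zeros(4, dtype=complex)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho_bell = np.outer(phi, phi.conj())

def depolarize_second_qubit(rho, p):
    """One-parameter Pauli (depolarizing) noise on Bob's half:
    rho -> (1-p) rho + (p/3) sum_s (I x s) rho (I x s), s in {X, Y, Z}."""
    out = (1 - p) * rho
    for s in (X, Y, Z):
        K = np.kron(I, s)
        out += (p / 3) * K @ rho @ K.conj().T
    return out

p = 0.3
rho = depolarize_second_qubit(rho_bell, p)
F = np.real(phi.conj() @ rho @ phi)     # singlet fidelity of the noisy resource
f_tel = (2 * F + 1) / 3                 # average teleportation fidelity
print(F, f_tel)                         # F = 1 - p = 0.7, f_tel = 0.8
```

Since the three Pauli-rotated Bell states are orthogonal to |Phi+>, the singlet fidelity comes out as F = 1 - p, and the classical threshold f = 2/3 is crossed only when F drops below 1/2.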
94

Finding A Subset Of Non-defective Items From A Large Population : Fundamental Limits And Efficient Algorithms

Sharma, Abhay 05 1900 (has links) (PDF)
Consider a large population containing a small number of defective items. A commonly encountered goal is to identify the defective items, for example, to isolate them. In the classical non-adaptive group testing (NAGT) approach, one groups the items into subsets, or pools, and runs a test for the presence of a defective item on each pool. Using the outcomes of the tests, a fundamental goal of group testing is to reliably identify the complete set of defective items with as few tests as possible. In contrast, this thesis studies a non-defective subset identification problem, where the primary goal is to identify a "subset" of "non-defective" items given the test outcomes. The main contributions of this thesis are: We derive upper and lower bounds on the number of non-adaptive group tests required to identify a given number of non-defective items with arbitrarily small probability of incorrect identification as the population size goes to infinity. We show that an impressive reduction in the number of tests is achievable compared to the approach of first identifying all the defective items and then picking the required number of non-defective items from the complement set. For example, in the asymptotic regime with the population size N → ∞, to identify L non-defective items out of a population containing K defective items, when the tests are reliable, our results show that O((K log K) L/N) measurements are sufficient when L ≪ N − K and K is fixed. In contrast, the necessary number of tests using the conventional approach grows with N as O(K log K log(N/K)) measurements. Our results are derived using a general sparse signal model, by virtue of which they are also applicable to other important sparse-signal-based applications such as compressive sensing. We present a bouquet of computationally efficient and analytically tractable non-defective subset recovery algorithms.
By analyzing the probability of error of the algorithms, we obtain bounds on the number of tests required for non-defective subset recovery with arbitrarily small probability of error. By comparing with the information-theoretic lower bounds, we show that the upper bounds on the number of tests are order-wise tight up to a log(K) factor, where K is the number of defective items. Our analysis accounts for the impact of both additive noise (false positives) and dilution noise (false negatives). We also provide extensive simulation results that compare the relative performance of the different algorithms and provide further insights into their practical utility. The proposed algorithms significantly outperform the straightforward approaches of testing items one-by-one, and of first identifying the defective set and then choosing the non-defective items from the complement set, in terms of the number of measurements required to ensure a given success rate. We investigate the use of adaptive group testing in the application of finding a spectrum hole of a specified bandwidth in a given wideband of interest. We propose a group testing based spectrum hole search algorithm that exploits sparsity in the primary spectral occupancy by testing a group of adjacent sub-bands in a single test. This is enabled by a simple and easily implementable sub-Nyquist sampling scheme for signal acquisition by the cognitive radios. Energy-based hypothesis tests are used to provide an occupancy decision over the group of sub-bands, and this forms the basis of the proposed algorithm to find contiguous spectrum holes of a specified bandwidth. We extend this framework to a multistage sensing algorithm that can be employed in a variety of spectrum sensing scenarios, including non-contiguous spectrum hole search.
Our analysis allows one to identify the sparsity and SNR regimes where group testing can lead to significantly lower detection delays compared to a conventional bin-by-bin energy detection scheme. We illustrate the performance of the proposed algorithms via Monte Carlo simulations.
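The basic mechanics of pooled testing and non-defective certification can be sketched in a few lines. This is only the naive noiseless baseline, assuming a random Bernoulli pooling design: any item that appears in at least one negative pool is certifiably non-defective. The thesis's contribution is precisely to go beyond this, bounding how few tests suffice for a target of L non-defectives and handling additive and dilution noise.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, T = 200, 5, 60                     # population, defectives, pooled tests

defective = rng.choice(N, size=K, replace=False)

# Random pooling design: item j participates in test t with probability ~1/K.
A = rng.random((T, N)) < 1.0 / K
outcomes = A[:, defective].any(axis=1)   # noiseless boolean (OR) test results

# Any item appearing in at least one negative test is certified non-defective.
certified = np.flatnonzero(A[~outcomes].any(axis=0))
print(len(certified), "items certified non-defective out of", N)
```

With inclusion probability 1/K, each pool is negative with probability roughly (1 - 1/K)^K ≈ e^-1, so a modest number of tests already certifies a large fraction of the non-defective items.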
95

Hybridization of dynamic optimization methodologies / L'hybridation de méthodes d'optimisation dynamique

Decock, Jérémie 28 November 2014 (has links)
Dans ce manuscrit de thèse, mes travaux portent sur la combinaison de méthodes pour la prise de décision séquentielle (plusieurs étapes de décision corrélées) dans des environnements complexes et incertains. Les méthodes mises au point sont essentiellement appliquées à des problèmes de gestion et de production d'électricité tels que l'optimisation de la gestion des stocks d'énergie dans un parc de production pour anticiper au mieux la fluctuation de la consommation des clients. Le manuscrit comporte 7 chapitres regroupés en 4 parties : Partie I, « Introduction générale », Partie II, « État de l'art », Partie III, « Contributions » et Partie IV, « Conclusion générale ». Le premier chapitre (Partie I) introduit le contexte et les motivations de mes travaux, à savoir la résolution de problèmes d'« Unit commitment », c'est-à-dire l'optimisation des stratégies de gestion de stocks d'énergie dans les parcs de production d'énergie. Les particularités et les difficultés sous-jacentes à ces problèmes sont décrites, ainsi que le cadre de travail et les notations utilisées dans la suite du manuscrit. Le second chapitre (Partie II) dresse un état de l'art des méthodes les plus classiques utilisées pour la résolution de problèmes de prise de décision séquentielle dans des environnements incertains. Ce chapitre introduit des concepts nécessaires à la bonne compréhension des chapitres suivants (notamment le chapitre 4). Les méthodes de programmation dynamique classiques et les méthodes de recherche de politique directe y sont présentées. Le 3e chapitre (Partie II) prolonge le précédent en dressant un état de l'art des principales méthodes d'optimisation spécifiquement adaptées à la gestion des parcs de production d'énergie et à leurs subtilités. Ce chapitre présente entre autres les méthodes MPC (Model Predictive Control), SDP (Stochastic Dynamic Programming) et SDDP (Stochastic Dual Dynamic Programming) avec, pour chacune, leurs particularités, leurs avantages et leurs limites.
Ce chapitre complète le précédent en introduisant d'autres concepts nécessaires à la bonne compréhension de la suite du manuscrit. Le 4e chapitre (Partie III) contient la principale contribution de ma thèse : un nouvel algorithme appelé « Direct Value Search » (DVS), créé pour résoudre des problèmes de prise de décision séquentielle de grande échelle en milieu incertain, avec une application directe aux problèmes d'« Unit commitment ». Ce chapitre décrit en quoi ce nouvel algorithme dépasse les méthodes classiques présentées dans le 3e chapitre. Cet algorithme innove notamment par sa capacité à traiter des grands espaces d'actions contraints dans un cadre non-linéaire, avec un grand nombre de variables d'état et sans hypothèse particulière quant aux aléas du système optimisé (c'est-à-dire applicable sur des problèmes où les aléas ne sont pas nécessairement markoviens). Le 5e chapitre (Partie III) est consacré à un concept clé de DVS : l'optimisation bruitée. Ce chapitre expose une nouvelle borne théorique sur la vitesse de convergence des algorithmes d'optimisation appliqués à des problèmes bruités vérifiant certaines hypothèses données. Des méthodes de réduction de variance sont également étudiées et appliquées à DVS pour accélérer sensiblement sa vitesse de convergence. Le 6e chapitre (Partie III) décrit un résultat mathématique sur la vitesse de convergence linéaire d'un algorithme évolutionnaire appliqué à une famille de fonctions non quasi-convexes. Dans ce chapitre, il est prouvé que, sous certaines hypothèses peu restrictives sur la famille de fonctions considérée, l'algorithme présenté atteint une vitesse de convergence linéaire. Le 7e chapitre (Partie IV) conclut ce manuscrit en résumant mes contributions et en dressant quelques pistes de recherche intéressantes à explorer. / This thesis is dedicated to sequential decision making (also known as multistage optimization) in uncertain complex environments.
Studied algorithms are essentially applied to electricity production ("Unit Commitment" problems) and energy stock management (hydropower), in the face of stochastic demand and water inflows. The manuscript is divided into 7 chapters and 4 parts: Part I, "General Introduction", Part II, "Background Review", Part III, "Contributions" and Part IV, "General Conclusion". The first chapter (Part I) introduces the context and motivation of our work, namely energy stock management. "Unit Commitment" (UC) problems are a classical example of a "Sequential Decision Making" (SDM) problem applied to energy stock management. They are the central application of our work, and in this chapter we explain the main challenges arising with them (e.g. stochasticity, constraints, curse of dimensionality, ...). Classical frameworks for SDM problems are also introduced, and common mistakes arising with them are discussed. We also emphasize the consequences of these - too often neglected - mistakes and the importance of not underestimating their effects. Throughout this chapter, fundamental definitions commonly used with SDM problems are described. An overview of our main contributions concludes this first chapter. The second chapter (Part II) is a background review of the most classical algorithms used to solve SDM problems. Since the applications we try to solve are stochastic, we focus on resolution methods for stochastic problems. We begin our study with classical Dynamic Programming methods to solve "Markov Decision Processes" (a special kind of SDM problem with Markovian random processes). We then introduce "Direct Policy Search", a widely used method in the Reinforcement Learning community. A distinction is made between "Value Based" and "Policy Based" exploration methods. The third chapter (Part II) extends the previous one by covering the most classical algorithms used to handle UC's subtleties.
It contains a state of the art of algorithms commonly used for energy stock management, mainly "Model Predictive Control", "Stochastic Dynamic Programming" and "Stochastic Dual Dynamic Programming". We briefly overview the distinctive features and limitations of these methods. The fourth chapter (Part III) presents our main contribution: a new algorithm named "Direct Value Search" (DVS), designed to solve large-scale unit commitment problems. We describe how it outperforms the classical methods presented in the third chapter. We show that DVS is an "anytime" algorithm (users immediately get approximate results) which can handle large state spaces and large action spaces with non-convexity constraints, and without assumptions on the random process. Moreover, we explain how DVS can reduce modelling errors and can tackle the challenges described in the first chapter, working on the "real" detailed problem without "casting" it into a simplified model. Noisy optimisation is a key component of the DVS algorithm; the fifth chapter (Part III) is dedicated to it. In this chapter, some theoretical convergence rates are studied and new convergence bounds are proved - under some assumptions and for given families of objective functions. Some variance reduction techniques aimed at improving the convergence rate of graybox noisy optimization problems are also studied in the last part of this chapter. The sixth chapter (Part III) is devoted to non-quasi-convex optimization. We prove that a variant of evolution strategies can reach a log-linear convergence rate with non-quasi-convex objective functions. Finally, the seventh chapter (Part IV) concludes and suggests some directions for future work.
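The noisy-optimisation idea at the heart of the fifth chapter can be sketched with a toy example. The code below is not the thesis's algorithm or bounds: it is a plain (1+1) evolution strategy on a sphere function with additive Gaussian evaluation noise, where the variance-reduction device is simply re-evaluating each candidate several times and averaging (dividing the noise variance by the number of resamplings).

```python
import random

def noisy_sphere(x, rng, sigma=0.5):
    """Sphere objective corrupted by additive Gaussian evaluation noise."""
    return sum(v * v for v in x) + rng.gauss(0.0, sigma)

def one_plus_one_es(dim=5, iters=2000, resample=10, step=0.3, seed=0):
    """(1+1)-ES where each point is evaluated `resample` times and the
    evaluations are averaged before the selection step."""
    rng = random.Random(seed)

    def avg_fit(p):
        return sum(noisy_sphere(p, rng) for _ in range(resample)) / resample

    x = [1.0] * dim                       # true objective value 5.0 here
    fx = avg_fit(x)
    for _ in range(iters):
        y = [v + rng.gauss(0.0, step) for v in x]
        fy = avg_fit(y)
        if fy <= fx:                      # selection on averaged noisy values
            x, fx = y, fy
    return x

x = one_plus_one_es()
print(sum(v * v for v in x))              # true objective at the returned point
```

Without resampling, the selection step is frequently fooled by the noise once the true objective is of the order of the noise standard deviation, which is exactly the regime the chapter's convergence bounds address.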
96

Fountain codes and their typical application in wireless standards like EDGE

Grobler, Trienko Lups 26 January 2009 (has links)
One of the most important technologies used in modern communication systems is channel coding. Channel coding dates back to a paper published by Shannon in 1948 [1] entitled “A Mathematical Theory of Communication”. The basic idea behind channel coding is to send redundant information (parity) together with a message to make the transmission more error resistant. There are different types of codes that can be used to generate the required parity, including block, convolutional and concatenated codes. A special subclass of these codes is sparse graph codes. The structure of sparse graph codes can be depicted via a graphical representation: the factor graph, which has sparse connections between its elements. Codes belonging to this subclass include Low-Density Parity-Check (LDPC), Repeat-Accumulate (RA), Turbo and fountain codes. These codes can be decoded using the belief propagation algorithm, an iterative algorithm in which probabilistic information is passed between the nodes of the graph. This dissertation focuses on noisy decoding of fountain codes using belief propagation decoding. Fountain codes were originally developed for erasure channels, but since any factor graph can be decoded using belief propagation, noisy decoding of fountain codes can easily be accomplished. Three fountain codes, namely Tornado, Luby Transform (LT) and Raptor codes, were investigated in this dissertation. The following results were obtained: (1) the Tornado graph structure is unsuitable for noisy decoding, since the code structure protects the first layer of parity instead of the original message bits (a Tornado graph consists of more than one layer); (2) the successful decoding of systematic LT codes was verified; (3) a systematic Raptor code was introduced and successfully decoded. The simulation results show that the Raptor graph structure can improve on its constituent codes (a Raptor code consists of more than one code). Lastly, an LT code was used to replace the convolutional incremental redundancy scheme used by the 2G mobile standard Enhanced Data Rates for GSM Evolution (EDGE). The results show that a fountain incremental redundancy scheme outperforms a convolutional approach if the frame lengths are long enough. For the EDGE platform, the results also showed that the fountain incremental redundancy scheme outperforms the convolutional approach after the second transmission is received. Although EDGE is an older technology, it remains a good platform for testing different incremental redundancy schemes, since it was one of the first platforms to use incremental redundancy. / Dissertation (MEng)--University of Pretoria, 2008. / Electrical, Electronic and Computer Engineering / MEng / unrestricted
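The LT-code mechanics referenced above (random-degree XOR combinations decoded by message passing) can be sketched for the erasure setting, where belief propagation reduces to a simple peeling decoder. This is a toy illustration with an ad-hoc degree distribution, not the robust soliton distribution or the noisy belief-propagation decoding studied in the dissertation.

```python
import random

def lt_encode(message, n_packets, rng):
    """Each packet XORs a random subset of message bits (its 'neighbours')."""
    packets = []
    for _ in range(n_packets):
        d = rng.choice([1, 2, 2, 3, 4])          # toy degree distribution
        idx = rng.sample(range(len(message)), d)
        val = 0
        for i in idx:
            val ^= message[i]
        packets.append((set(idx), val))
    return packets

def lt_decode(packets, k):
    """Peeling decoder: repeatedly release degree-1 packets and substitute
    recovered bits into the remaining packets (erasure-channel BP)."""
    pk = [[set(idx), val] for idx, val in packets]
    known = {}
    progress = True
    while progress and len(known) < k:
        progress = False
        for p in pk:
            resolved = p[0] & known.keys()       # substitute recovered bits
            for i in resolved:
                p[1] ^= known[i]
            p[0] -= resolved
            if len(p[0]) == 1:                   # degree-1 packet: release it
                i = p[0].pop()
                if i not in known:
                    known[i] = p[1]
                    progress = True
    return [known.get(i) for i in range(k)]      # None = not recovered

rng = random.Random(42)
message = [rng.randint(0, 1) for _ in range(8)]
decoded = lt_decode(lt_encode(message, 50, rng), 8)
print(decoded)
```

On a noisy (rather than erasure) channel, the hard XOR substitutions above are replaced by soft log-likelihood messages exchanged on the same factor graph, which is the setting the dissertation investigates.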
97

Dirty Geometry : Searching for a queer architecture in Stockholm city / Dirty Geometry : Sökandet efter en queer arkitektur i Stockholm city

Söderman, Viktoria January 2018 (has links)
For whom do we draw buildings? Why does contemporary architecture look the way it does? Why are certain aesthetics considered more valid than others? With this project, I propose Dirty Geometry: norm-bending design that could challenge conventions within the field of architecture. It is an investigation of concepts such as ugliness, beauty, architecture and the human body, interiority, femininity and ”bad taste”. The purpose is to, with the aid of parametric design processes, make Stockholm less boring and more dirty. Dirty Geometry is both the creative process sprung from one’s personal desires, and the resulting design. It aims to celebrate the weird, playful and colorful in an empowering way. This thesis project draws a lot of inspiration from camp aesthetics and drag culture, because of the way humour is used in a subversive way to question gender identities, power structures and norms.
98

Improving The Robustness of Artificial Neural Networks via Bayesian Approaches

Jun Zhuang (16456041) 30 August 2023 (has links)
Artificial neural networks (ANNs) have achieved extraordinary performance in various domains in recent years. However, some studies reveal that ANNs may be vulnerable in three aspects: label scarcity, perturbations, and open-set emerging classes. Noisy-labeling and self-supervised learning approaches address the label scarcity issues, but most of this work cannot handle perturbations. Adversarial training methods, topological denoising methods, and mechanism designing methods aim to mitigate the negative effects caused by perturbations. However, adversarial training methods can barely train a robust model under circumstances of extensive label scarcity; topological denoising methods are not efficient on dynamic data structures; and mechanism designing methods often depend on heuristic explorations. Detection-based methods are devoted to identifying novel or anomalous instances for further downstream tasks. Nonetheless, such instances may belong to open-set new emerging classes. To address the aforementioned challenges, we tackle the robustness issues of ANNs from two aspects. First, we propose a series of Bayesian label transition models to improve the robustness of Graph Neural Networks (GNNs) in the presence of label scarcity and perturbations in the graph domain. Second, we propose a new non-exhaustive learning model, named NE-GM-GAN, to handle both open-set problems and class-imbalance issues in network intrusion datasets. Extensive experiments with several datasets demonstrate that our proposed models can effectively improve the robustness of ANNs.
99

Computational auditory scene analysis and robust automatic speech recognition

Narayanan, Arun 14 November 2014 (has links)
No description available.
100

Pitch tracking and speech enhancement in noisy and reverberant environments

Wu, Mingyang 07 November 2003 (has links)
No description available.
