41

L'industrialisation du logement en France (1885-1970) : De la construction légère et démontable à la construction lourde et architecturale / The industrialization of housing in France (1885-1970) : from the lightweight and removable construction to the heavy construction and architecture

Fares, Kinda 16 March 2012 (has links)
The thesis examines the industrialization of housing in France (1885-1970), from lightweight, demountable construction to heavy, architectural construction. The subject lies at the interface of four major topics: the existence of industrialization before the Second World War, the technical policy of the Ministry of Reconstruction and Urbanism (MRU), the projects carried out after the Second World War that applied the methods of industrialization imposed by the State, and the principles of the Athens Charter. The study period extends from 1885, the date of the first European witness of the industrialization of building, to 1970, the year this type of construction was called into question. The industrialization of building has very old roots: it first grew among the military, to meet the needs of colonial conquest and of the campaigns and wars that inflamed Europe. The beach cabin, the holiday hut, the canvas tent and the market awning are all constructive figures that proliferated at the end of the nineteenth century. Above all, the colonial expeditions, conducted at full tilt, demanded speed, security and capacity: the prefabricated barrack was the industrial solution. Industrialization then continued, no longer light but heavy. For the State it became the principal route, because it lowered the cost price of construction, reduced on-site interventions and improved the comfort of dwellings. From 1945, the new French State invested in the most devastated areas and encouraged innovations based on the use of new materials and techniques by instituting the technical approval of "new materials and non-traditional construction processes".
In the first part of this research, we try to show that there was indeed an industrialization of building before the Second World War. Industrialization "brutally" took over lightweight construction in the 1890s. The demountable, transportable military barrack became the object of competitions, confrontations and warlike interests in Western Europe. Dozens of models were prefabricated and erected behind the battlefields or in anticipation of territorial conquests. In a second stage, we follow the history of heavy construction in the post-war period, specifically housing construction, through two remarkable projects of the years immediately after the Second World War. 1- The experimental housing estate of Noisy-le-Sec: through this project the State sought to test new processes and materials that used less raw material and energy, to simplify implementation, to make these innovations known so as to turn technique into technology, and to contribute to the improvement of housing (interior comfort, equipment). To do so, it imported processes and imposed changes of pace and scale. 2- The Grandes Terres project: the Grandes Terres site must be considered the first masterpiece of heavy prefabricated housing. The project also asserted a new way of thinking about the city and its relation to housing; it is one of the successful applications of the Athens Charter, the bible of Lods's urbanism, and a reference for the urban developments of the 1960s and 1970s.
Finally, this research is organized chronologically: 1885-1940, "lightweight and demountable construction"; 1940-1970, "heavy, non-demountable prefabrication"; 1945-1953, "the experimental estate of Noisy-le-Sec"; and 1952-1956, "the most successful completed model of the large housing operations, the Grandes Terres project".
42

New Methods for Learning from Heterogeneous and Strategic Agents

Divya, Padmanabhan January 2017 (has links) (PDF)
In this doctoral thesis, we address several representative problems that arise in the context of learning from multiple heterogeneous agents. These problems are relevant to many modern applications such as crowdsourcing and internet advertising. In scenarios such as crowdsourcing, there is a planner who is interested in learning a task, and a set of noisy agents provide the training data for this learning task. Any learning algorithm making use of the data provided by these noisy agents must account for their noise levels. The noise levels of the agents are unknown to the planner, leading to a non-trivial difficulty. Further, the agents are heterogeneous, as they differ in their noise levels. A key challenge in such settings is to learn the noise levels of the agents while simultaneously learning the underlying model. Another challenge arises when the agents are strategic. For example, when the agents are required to perform a task, they could be strategic about the effort they put in. As another example, when required to report the costs incurred in performing the task, the agents could be strategic and may not report the costs truthfully. In general, the performance of the learning algorithms could be severely affected if the information elicited from the agents is incorrect. We address these challenges in the following representative learning problems.
Multi-label Classification from Heterogeneous Noisy Agents
Multi-label classification is a well-known supervised machine learning problem in which each instance is associated with multiple classes. Since several labels can be assigned to a single instance, one of the key challenges in this problem is to learn the correlations between the classes. We first assume labels from a perfect source and propose a novel topic model called Multi-Label Presence-Absence Latent Dirichlet Allocation (ML-PA-LDA). Nowadays, a natural source for procuring the training dataset is to mine user-generated content or to collect labels directly from users on a crowdsourcing platform. In the more practical scenario of crowdsourcing, an additional challenge arises because the labels of the training instances are provided by noisy, heterogeneous crowd-workers of unknown quality. With this as the motivation, we further adapt our topic model to the scenario where the labels are provided by multiple noisy sources and refer to this model as ML-PA-LDA-MNS (ML-PA-LDA with Multiple Noisy Sources). With experiments on standard datasets, we show that the proposed models achieve superior performance over existing methods.
Active Linear Regression with Heterogeneous, Noisy and Strategic Agents
In this work, we study the problem of training a linear regression model by procuring labels from multiple noisy agents or crowd annotators, under a budget constraint. We propose a Bayesian model for linear regression from multiple noisy sources and use variational inference for parameter estimation. When labels are sought from agents, it is important to minimize the number of labels procured, as every call to an agent incurs a cost. Towards this, we adopt an active learning approach. In this specific context, we prove the equivalence of well-studied active learning criteria such as entropy minimization and expected error reduction. For annotator selection in active learning, we observe a useful connection with the multi-armed bandit framework.
Due to the nature of the distribution of the rewards on the arms, we resort to the Robust Upper Confidence Bound (UCB) scheme with a truncated empirical mean estimator to solve the annotator selection problem. This yields provable guarantees on the regret. We apply our model to the scenario where annotators are strategic and design suitable incentives to induce them to put in their best efforts.
Ranking with Heterogeneous Strategic Agents
We look at the problem where a planner must rank multiple strategic agents, a problem with many applications including sponsored search auctions (SSA). Stochastic multi-armed bandit (MAB) mechanisms have been used in the literature to solve this problem. Existing stochastic MAB mechanisms with a deterministic payment rule necessarily suffer a regret of Ω(T^{2/3}), where T is the number of time steps. This happens because these mechanisms address the worst-case scenario, in which the means of the agents' stochastic rewards are separated by a very small amount that depends on T. We instead take a detour and allow the planner to indicate the resolution, Δ, with which the agents must be distinguished. This immediately leads us to introduce the notion of Δ-regret. We propose a dominant strategy incentive compatible (DSIC) and individually rational (IR) deterministic MAB mechanism, based on ideas from the Upper Confidence Bound (UCB) family of MAB algorithms. The proposed mechanism, Δ-UCB, achieves a Δ-regret of O(log T). We first establish the results for single-slot SSA and then non-trivially extend them to the case of multi-slot SSA.
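As a rough sketch of the bandit-style annotator selection described above, the Python snippet below runs a standard UCB1 index over simulated annotators. It is only an illustration: the thesis uses a Robust UCB variant with a truncated empirical mean to handle heavy-tailed rewards, and the annotator accuracies and the 0/1 reward model here are invented assumptions.

```python
import numpy as np

# UCB1-style annotator selection (simplified stand-in for Robust UCB).
rng = np.random.default_rng(0)
accuracies = np.array([0.9, 0.7, 0.6, 0.55])   # true qualities, unknown to the planner
K, T = len(accuracies), 2000

counts = np.zeros(K)          # number of times each annotator was queried
means = np.zeros(K)           # empirical mean reward (label correctness)

for t in range(1, T + 1):
    if t <= K:                                   # query each annotator once
        arm = t - 1
    else:                                        # pick the largest UCB1 index
        ucb = means + np.sqrt(2.0 * np.log(t) / counts)
        arm = int(np.argmax(ucb))
    reward = float(rng.random() < accuracies[arm])   # 1 if the label was correct
    counts[arm] += 1
    means[arm] += (reward - means[arm]) / counts[arm]

print("queries per annotator:", counts.astype(int))
print("estimated accuracies :", np.round(means, 3))
```

Over time the index concentrates the queries on the most accurate annotators while still exploring the others, which is the behaviour the regret guarantees quantify.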
43

Nonstationary Techniques For Signal Enhancement With Applications To Speech, ECG, And Nonuniformly-Sampled Signals

Sreenivasa Murthy, A January 2012 (has links) (PDF)
For time-varying signals such as speech and audio, short-time analysis becomes necessary to compute specific signal attributes and to keep track of their evolution. The standard technique is the short-time Fourier transform (STFT), in which one decomposes a signal in terms of windowed Fourier bases. An advancement over the STFT is wavelet analysis, in which a function is represented in terms of shifted and dilated versions of a localized function called the wavelet. A specific modeling approach, particularly in the context of speech, is based on short-time linear prediction or short-time Wiener filtering of noisy speech. In most nonstationary signal processing formalisms, the key idea is to analyze the properties of the signal locally, either by first truncating the signal and then performing a basis expansion (as in the case of the STFT), or by choosing compactly-supported basis functions (as in the case of wavelets). We retain the same motivation as these approaches, but use polynomials to model the signal on a short-time basis ("short-time polynomial representation"). To emphasize the local nature of the modeling, we refer to it as "local polynomial modeling (LPM)." We pursue two main threads of research in this thesis: (i) short-time approaches for speech enhancement; and (ii) LPM for enhancing smooth signals, with applications to ECG, noisy nonuniformly-sampled signals, and voiced/unvoiced segmentation in noisy speech.
Improved iterative Wiener filtering for speech enhancement
A constrained iterative Wiener filter solution for speech enhancement was proposed by Hansen and Clements. Sreenivas and Kirnapure improved the performance of the technique by imposing codebook-based constraints in the process of parameter estimation. The key advantage is that the optimal parameter search space is confined to the codebook. These nonstationary signal enhancement solutions assume stationary noise. However, in practical applications noise is not stationary, and hence updating the noise statistics becomes necessary. We present a new approach to perform reliable noise estimation based on spectral subtraction. We first estimate the signal spectrum and perform signal subtraction to estimate the noise power spectral density. We further smooth the estimated noise spectrum to ensure reliability. The key contributions are: (i) adaptation of the technique to non-stationary noises; (ii) a new initialization procedure for faster convergence and higher accuracy; (iii) experimental determination of the optimal LP-parameter space; and (iv) objective criteria and speech recognition tests for performance comparison.
Optimal local polynomial modeling and applications
We next address the problem of fitting a piecewise-polynomial model to a smooth signal corrupted by additive noise. Since the signal is smooth, it can be represented using low-order polynomial functions provided that they are locally adapted to the signal. We choose the mean-square error as the criterion of optimality. Since the model is local, it preserves the temporal structure of the signal and can also handle nonstationary noise. We show that there is a trade-off between the adaptability of the model to local signal variations and robustness to noise (the bias-variance trade-off), which we solve using a stochastic optimization technique known as the intersection of confidence intervals (ICI) technique. The key trade-off parameter is the duration of the window over which the optimum LPM is computed.
Within the LPM framework, we address three problems: (i) signal reconstruction from noisy uniform samples; (ii) signal reconstruction from noisy nonuniform samples; and (iii) classification of speech signals into voiced and unvoiced segments. The generic signal model is x(t_n) = s(t_n) + d(t_n), 0 ≤ n ≤ N − 1. In problems (i) and (iii) above, t_n = nT (uniform sampling); in (ii) the samples are taken at nonuniform instants. The signal s(t) is assumed to be smooth; i.e., it should admit a local polynomial representation. The problem in (i) and (ii) is to estimate s(t) from x(t_n); i.e., we are interested in optimal signal reconstruction on a continuous domain starting from uniform or nonuniform samples. We show that, in both cases, the bias and variance take a general form in which the bias involves a function f of the signal and grows with the window length L, while the variance involves a function g of the noise variance and decreases with L; the mean-square error (MSE) is the sum of the squared bias and the variance. Here L is the length of the window over which the polynomial fitting is performed, f typically comprises the higher-order derivatives of s(t) (the order depending on the order of the polynomial), and g is a function of the noise variance. The bias and variance thus have complementary characteristics with respect to L. Directly optimizing the MSE would give a value of L that involves the functions f and g. The function g may be estimated, but f is not known since s(t) is unknown. Hence, it is not practical to compute the minimum MSE (MMSE) solution. Therefore, we obtain an approximate result by solving the bias-variance trade-off in a probabilistic sense using the ICI technique. We also propose a new approach to optimally select the ICI technique parameters, based on a new cost function that is the sum of the probability of false alarm and the area covered by the confidence interval. In addition, we address issues related to optimal model-order selection, the search space for window lengths, the accuracy of noise estimation, etc.
The next issue addressed is that of voiced/unvoiced segmentation of speech signals. Speech segments show different spectral and temporal characteristics depending on whether the segment is voiced or unvoiced. Most speech processing techniques process the two segments differently. The challenge lies in making detection techniques robust in the presence of noise. We propose a new technique for voiced/unvoiced classification that takes into account the fact that voiced segments have a certain degree of regularity, whereas unvoiced segments do not possess any smoothness. In order to capture the regularity in voiced regions, we employ the LPM. The key idea is that regions where the LPM is inaccurate are more likely to be unvoiced than voiced. Within this framework, we formulate a hypothesis testing problem based on the accuracy of the LPM fit and devise a test statistic for performing V/UV classification. Since the technique is based on LPM, it is capable of adapting to nonstationary noises. We present Monte Carlo results to demonstrate the accuracy of the proposed technique.
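A minimal sketch of the windowed least-squares fit behind LPM is given below, with a fixed window length L. The thesis selects L pointwise with the ICI rule and also treats nonuniform sampling; both are omitted here, and the signal, noise level and window lengths are illustrative assumptions.

```python
import numpy as np

def lpm_denoise(x, L=31, order=2):
    """Fit an order-`order` polynomial over a window of ~L samples around each
    point (ordinary least squares) and keep the fitted value at the centre."""
    N, half = len(x), L // 2
    n = np.arange(N)
    s_hat = np.empty(N)
    for i in range(N):
        lo, hi = max(0, i - half), min(N, i + half + 1)
        coeffs = np.polyfit(n[lo:hi] - i, x[lo:hi], order)   # local LS fit
        s_hat[i] = np.polyval(coeffs, 0)                     # value at the window centre
    return s_hat

# Toy example: smooth signal plus white noise; L controls the bias-variance trade-off.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
s = np.sin(2 * np.pi * 3 * t)
x = s + 0.3 * rng.standard_normal(t.size)
for L in (11, 31, 101):        # short windows track the signal, long windows average noise
    err = np.mean((lpm_denoise(x, L) - s) ** 2)
    print(f"L = {L:3d}  empirical MSE = {err:.4f}")
```

Sweeping L makes the complementary behaviour of bias and variance visible: the empirical MSE typically dips at an intermediate window length, which is exactly the quantity the ICI rule tries to locate without knowing the clean signal.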
44

Coding Schemes for Relay Networks

Nasiri Khormuji, Majid January 2011 (has links)
Cooperative communication, by pooling available resources—for example, power and bandwidth—across the network, is a distributed solution for providing robust wireless transmission. Motivated by contemporary applications in multi-hop transmission and ad hoc networks, the classical three-node relay channel (RC), consisting of a source–destination pair and a relay node, has received renewed attention. One of the crucial aspects of communication over relay networks (RNs) is the design of proper relaying protocols; that is, how the relay should take part in the transmission to meet a certain quality of service. In this dissertation, we address the design of reliable transmission strategies and the quantification of the associated transmission rates over RNs. We consider three canonical examples of RNs: the classical RC, the multiple-access RC (MARC) and the two-way RC. We also investigate the three-node RC and the MARC with state. The capacity of the aforementioned RNs is in general an open problem, except for some special cases. In the thesis, we derive various capacity bounds, through which we also identify the capacity of some new classes of RNs. In particular, we introduce the class of state-decoupled RNs and prove that noisy network coding is capacity-achieving under certain conditions. We also study the effect of the memory length on the capacity of RNs. The relaying protocols investigated in the thesis can be categorized into two groups: protocols with a finite relay memory and those requiring infinite relay memory. In particular, we consider the design of instantaneous relaying (also referred to as memoryless relaying), in which the output of the relay depends solely on the presently received signal at the relay. For optimizing the relay function, we present several algorithms based on grid search and variational methods. Among other things, we surprisingly identify some classes of semi-deterministic RNs for which a properly constructed instantaneous relaying strategy achieves the capacity. We also show that the capacity of RNs can be increased by allowing the output of the relay to depend on past received signals as well as the current received signal at the relay. As an example, we propose a hybrid digital–analog scheme that outperforms the cut-set upper bound for strictly causal relaying.
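The idea of optimizing an instantaneous (memoryless) relay function by grid search can be illustrated with a toy experiment. The sketch below is assumption-laden and is not the construction analysed in the thesis: it posits a real Gaussian three-node channel with made-up gains and power budgets, restricts the relay to a clip-and-forward family, and scores candidates with a Gaussian-approximation rate proxy rather than the exact achievable rates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
P, Pr = 1.0, 1.0                      # assumed source and relay power budgets
h_sr, h_sd, h_rd = 2.0, 0.5, 1.0      # assumed channel gains

x = rng.normal(0.0, np.sqrt(P), n)            # Gaussian source symbols
y_r = h_sr * x + rng.normal(0.0, 1.0, n)      # relay observation

def clip_forward(y, c):
    """Instantaneous relay: clip to [-c, c], then rescale to meet the power Pr."""
    z = np.clip(y, -c, c)
    return z * np.sqrt(Pr / np.mean(z ** 2))

def rate_proxy(s, y):
    """0.5*log2(1/(1-rho^2)); exact only for jointly Gaussian (s, y), a proxy here."""
    rho = np.corrcoef(s, y)[0, 1]
    return 0.5 * np.log2(1.0 / (1.0 - rho ** 2))

noise_d = rng.normal(0.0, 1.0, n)             # destination noise, fixed across the grid
candidates = np.linspace(0.2, 4.0, 20)        # grid over the clipping level
scores = [rate_proxy(x, h_sd * x + h_rd * clip_forward(y_r, c) + noise_d)
          for c in candidates]
best = int(np.argmax(scores))
print(f"best clipping level ~ {candidates[best]:.2f}, "
      f"rate proxy ~ {scores[best]:.3f} bit/channel use")
```

The point of the sketch is only the mechanics: parameterize the relay map, evaluate an end-to-end figure of merit for each grid point, and keep the best candidate.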
45

An active-set trust-region method for bound-constrained nonlinear optimization without derivatives applied to noisy aerodynamic design problems / Une méthode de région de confiance avec ensemble actif pour l'optimisation non linéaire sans dérivées avec contraintes de bornes appliquée à des problèmes aérodynamiques bruités

Tröltzsch, Anke 07 June 2011 (has links)
Derivative-free optimization (DFO) has enjoyed renewed interest over the past years, mostly motivated by the ever-growing need to solve optimization problems defined by functions whose values are computed by simulation (e.g. engineering design, medical image restoration or groundwater supply). In the last few years, a number of derivative-free optimization methods have been developed, and model-based trust-region methods in particular have been shown to perform well. In this thesis, we present a new interpolation-based trust-region algorithm which is shown to be efficient and globally convergent (in the sense that its convergence to a stationary point is guaranteed from an arbitrary starting point). The new algorithm relies on the technique of self-correcting geometry proposed by Scheinberg and Toint (2010). In their theory, they advanced the understanding of the role of geometry in model-based DFO methods; in our work, we considerably improve the efficiency of their method while maintaining its good convergence properties. We further examine the influence of different types of interpolation models on the performance of the new algorithm.
Furthermore, we extend this method to handle bound constraints by applying an active-set strategy. Considering an active-set method in bound-constrained model-based optimization creates the opportunity to save a substantial number of function evaluations: it allows smaller interpolation sets to be maintained while the optimization proceeds in lower-dimensional subspaces. The resulting algorithm is shown to be numerically highly competitive. We present results on a test set of smooth problems from the CUTEr collection and compare our method to well-known state-of-the-art packages from different classes of DFO methods. To carry out numerical experiments incorporating noise, we create a test set of noisy problems by adding perturbations to the set of smooth problems. The choice of noisy problems was guided by a desire to mimic simulation-based optimization problems. Finally, we present results on a real-life application, a wing-shape design problem provided by Airbus.
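To make the model-based trust-region mechanics concrete, here is a deliberately simplified Python sketch. It is not the thesis algorithm: it fits a linear regression model to random sample points and takes a projected steepest-descent step, whereas the thesis maintains interpolation sets with self-correcting geometry, quadratic models and an active-set treatment of the bounds. The function name and all constants are illustrative.

```python
import numpy as np

def dfo_tr(f, x0, lb, ub, delta=0.5, max_iter=200, seed=0):
    """Toy derivative-free trust-region loop; bounds handled by projection."""
    rng = np.random.default_rng(seed)
    x = np.clip(np.asarray(x0, dtype=float), lb, ub)
    fx, n = f(x), len(x)
    for _ in range(max_iter):
        # Sample points in the trust region (and the box), fit a linear model.
        Y = np.clip(x + delta * rng.uniform(-1, 1, (2 * n, n)), lb, ub)
        F = np.array([f(y) for y in Y])
        g, *_ = np.linalg.lstsq(Y - x, F - fx, rcond=None)     # model gradient
        if np.linalg.norm(g) < 1e-10 or delta < 1e-10:
            break
        x_new = np.clip(x - delta * g / np.linalg.norm(g), lb, ub)  # projected step
        f_new = f(x_new)
        pred = -g @ (x_new - x)                                # predicted decrease
        if pred <= 1e-16:
            delta *= 0.5
            continue
        rho = (fx - f_new) / pred                              # agreement ratio
        if rho > 0.1:                  # accept; enlarge the region on good steps
            x, fx = x_new, f_new
            if rho > 0.7:
                delta *= 2.0
        else:                          # reject; shrink the trust region
            delta *= 0.5
    return x, fx

# Example: noisy quadratic on the box [0, 2]^2 (box-constrained minimizer near (1.5, 0)).
f = lambda z: (z[0] - 1.5) ** 2 + (z[1] + 0.5) ** 2 + 1e-4 * np.random.randn()
print(dfo_tr(f, [0.2, 1.8], np.zeros(2), 2.0 * np.ones(2)))
```

The accept/reject ratio test and the radius update are the generic trust-region ingredients; what distinguishes the thesis method is how the sample set geometry is kept well poised and how the active bounds shrink the working subspace.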
46

Deep spiking neural networks

Liu, Qian January 2018 (has links)
Neuromorphic Engineering (NE) has led to the development of biologically-inspired computer architectures whose long-term goal is to approach the performance of the human brain in terms of energy efficiency and cognitive capabilities. Although there are a number of neuromorphic platforms available for large-scale Spiking Neural Network (SNN) simulations, the problem of programming these brain-like machines to be competent in cognitive applications still remains unsolved. On the other hand, Deep Learning has emerged in Artificial Neural Network (ANN) research to dominate state-of-the-art solutions for cognitive tasks. Thus the main research problem emerges of understanding how to operate and train biologically-plausible SNNs to close the gap in cognitive capabilities between SNNs and ANNs. SNNs can be trained by first training an equivalent ANN and then transferring the tuned weights to the SNN. This method is called ‘off-line’ training, since it does not take place on an SNN directly, but rather on an ANN instead. However, previous work on such off-line training methods has struggled in terms of poor modelling accuracy of the spiking neurons and high computational complexity. In this thesis we propose a simple and novel activation function, Noisy Softplus (NSP), to closely model the response firing activity of biologically-plausible spiking neurons, and introduce a generalised off-line training method using the Parametric Activation Function (PAF) to map the abstract numerical values of the ANN to concrete physical units, such as current and firing rate in the SNN. Based on this generalised training method and its fine tuning, we achieve the state-of-the-art accuracy on the MNIST classification task using spiking neurons, 99.07%, on a deep spiking convolutional neural network (ConvNet). We then take a step forward to ‘on-line’ training methods, where Deep Learning modules are trained purely on SNNs in an event-driven manner. Existing work has failed to provide SNNs with recognition accuracy equivalent to ANNs due to the lack of mathematical analysis. Thus we propose a formalised Spike-based Rate Multiplication (SRM) method which transforms the product of firing rates to the number of coincident spikes of a pair of rate-coded spike trains. Moreover, these coincident spikes can be captured by the Spike-Time-Dependent Plasticity (STDP) rule to update the weights between the neurons in an on-line, event-based, and biologically-plausible manner. Furthermore, we put forward solutions to reduce correlations between spike trains; thereby addressing the result of performance drop in on-line SNN training. The promising results of spiking Autoencoders (AEs) and Restricted Boltzmann Machines (SRBMs) exhibit equivalent, sometimes even superior, classification and reconstruction capabilities compared to their non-spiking counterparts. To provide meaningful comparisons between these proposed SNN models and other existing methods within this rapidly advancing field of NE, we propose a large dataset of spike-based visual stimuli and a corresponding evaluation methodology to estimate the overall performance of SNN models and their hardware implementations.
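The Noisy Softplus idea can be sketched as follows. The exact parameterisation and constants used in the thesis may differ, so the form below — a softplus whose smoothness scales with the noise level σ and a fitted constant k — should be read as an assumption rather than the definitive definition.

```python
import numpy as np

def noisy_softplus(x, sigma, k=0.2):
    """Assumed form y = k*sigma*log(1 + exp(x/(k*sigma))): a softplus whose knee
    softens as the input noise level sigma grows, approximating the mean firing
    response of a noisy integrate-and-fire neuron."""
    s = np.maximum(k * sigma, 1e-12)
    return s * np.log1p(np.exp(x / s))

# As sigma -> 0 the curve approaches a ReLU; larger sigma gives a softer knee.
x = np.linspace(-1.0, 1.0, 5)
for sigma in (0.1, 0.5, 1.0):
    print(sigma, np.round(noisy_softplus(x, sigma), 3))
```

In an off-line training pipeline of the kind described above, an activation like this stands in for the spiking neuron's response curve during ANN training, so that the tuned weights transfer to the SNN with physical units (current, firing rate) attached via the parametric mapping.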
47

Normalização textual de conteúdo gerado por usuário / User-generated content text normalization

Bertaglia, Thales Felipe Costa 18 August 2017 (has links)
User-Generated Content (UGC) is the name given to content created spontaneously by ordinary individuals, without connection to the media. This type of content carries valuable information and can be exploited by several areas of knowledge. Much of the UGC is available in the form of text: product reviews, comments in forums about movies, and discussions on social networks are examples. However, the language used in UGC texts differs in many ways from the standard norm of the language, making it difficult for NLP techniques to handle them. UGC language is strongly tied to the language used in everyday life and therefore contains a large amount of noise. Spelling mistakes, abbreviations, slang, and the absence or misuse of punctuation and capitalization are some of the noise types that make these texts hard to process. Several works report a considerable loss of performance when testing state-of-the-art NLP tools on UGC texts. Textual normalization is the process of turning noisy words into words considered correct, and it can be used to improve the quality of UGC texts.
This work reports the development of methods and systems that aim to (a) identify noisy words in UGC, (b) find candidate words for their substitution, and (c) rank the candidates to perform the normalization. For the identification of noisy words, we propose lexicon-based methods and machine-learning methods using deep neural networks. The automatic identification achieved results comparable to the use of lexicons, showing that this step can be performed with low dependence on resources. For candidate generation and ranking, we investigate techniques based on lexical similarity and word embeddings. We conclude that the use of word embeddings is highly suitable for normalization, having achieved the best results. All proposed methods were evaluated on a UGC corpus annotated during the project, containing texts from different sources: discussion forums, product reviews and tweets. A system, Enelvo, combining all the methods was implemented and compared to an existing normalization system, UGCNormal. The results obtained by the Enelvo system were considerably better, with correction rates between 67% and 97% for different types of noise, with less dependence on resources and greater flexibility in normalization.
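As a rough illustration of the candidate generation and ranking step, the sketch below scores in-lexicon candidates for a noisy token by combining embedding (cosine) similarity with a simple edit-distance similarity. The toy 3-d vectors, the example words and the weighting are invented for the example; the actual system trains word embeddings on a UGC corpus and uses richer features.

```python
import numpy as np

emb = {                       # hypothetical toy embeddings
    "voce": np.array([0.90, 0.10, 0.00]),
    "você": np.array([0.92, 0.12, 0.02]),
    "vice": np.array([0.10, 0.80, 0.30]),
    "ver":  np.array([0.30, 0.20, 0.70]),
}
lexicon = ["você", "vice", "ver"]

def edit_distance(a, b):
    """Standard Levenshtein distance via dynamic programming."""
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0], d[0, :] = np.arange(len(a) + 1), np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1,
                          d[i - 1, j - 1] + (a[i - 1] != b[j - 1]))
    return int(d[len(a), len(b)])

def rank_candidates(noisy, alpha=0.7):
    """Blend embedding similarity and lexical similarity (alpha is an assumed weight)."""
    v = emb[noisy]
    scores = {}
    for c in lexicon:
        cos = float(v @ emb[c] / (np.linalg.norm(v) * np.linalg.norm(emb[c])))
        lex = 1.0 / (1.0 + edit_distance(noisy, c))
        scores[c] = alpha * cos + (1 - alpha) * lex
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(rank_candidates("voce"))   # "você" should rank first
```

The same pattern scales to a full lexicon: the embedding term captures distributional closeness learned from noisy text, while the lexical term keeps orthographically distant words from being proposed.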
48

The impact of frequency modulation (FM) system use and caregiver training on young children with hearing impairment in a noisy listening environment

Nguyen, Huong Thi Thien 01 July 2011 (has links)
The two objectives of this single-subject study were to assess how FM system use impacts parent–child interaction in a noisy listening environment, and how parent/caregiver training affects the interaction between parent/caregiver and child. Two 5-year-old children with hearing loss and their parents/caregivers participated. Experiment 1 used an alternating design to measure three communication behaviors (child vocalization, parent/caregiver initiation, and parent/caregiver response) across four listening conditions (HA+Quiet, HA+Noise, FM+Quiet, and FM+Noise). Experiment 2 used within- and between-condition comparisons to re-measure the communicative behaviors across the listening conditions after the parent/caregiver training. The findings of this study point to three major conclusions. First, FM system use (i.e., FM-only mode) enabled child FM01 to maintain the same level of interaction in a noisy environment as in a quiet one. Second, parent/caregiver training enhanced the impact of FM system use for one child (FM01), although parent/caregiver initiation increased for both. Third, it is important to verify the function of both the FM system and HA microphones to ensure access to the FM advantage.
49

財務預測宣告對信用交易影響之研究 / Voluntary forecast versus credit transactions

唐婉珊 Unknown Date (has links)
This study examines the relationship between announcements of voluntary financial forecasts and credit transactions, i.e., margin purchases and short sales. In general, an announcement of good news would attract investors to employ margin for a long position, and vice versa. Since only noisy traders can employ credit transactions in Taiwan, this study hypothesizes that investors follow the announcements in forming rational expectations. The results therefore help clarify how noisy traders use voluntary financial forecast information in their investment decisions. The study uses samples from 1995 to 1997 to test the established hypotheses. The empirical results can be summarized as follows.
● If the voluntary forecast was announced prior to the release of quarterly, semiannual, or annual reports, both good and bad news cause a simultaneous increase in margin and short transactions during this period; however, the increase in margin transactions is significantly larger than that in short transactions.
● If the voluntary forecast was announced subsequent to the release of quarterly, semiannual, or annual reports, both good and bad news likewise cause a simultaneous increase in margin and short transactions, with the increase in margin transactions again significantly larger than that in short transactions.
● Since noisy traders are essentially information followers, their judgement depends significantly on the functional efficiency of informational intermediaries. These empirical results imply that the function of informational intermediaries requires further improvement.
50

Feedback control of complex oscillatory systems

Tukhlina, Natalia January 2008 (has links)
In the present dissertation, an approach is developed that ensures efficient control of systems as diverse as noisy or chaotic oscillators and neural ensembles. The approach is implemented by a simple linear feedback loop. The dissertation consists of two main parts. One part of the work is dedicated to the application of the suggested technique to a population of neurons, with the goal of suppressing their synchronous collective dynamics. The other part is aimed at investigating linear feedback control of the coherence of a noisy or chaotic self-sustained oscillator.
First, we address the problem of suppressing synchronization in a large population of interacting neurons. The importance of this task rests on the hypothesis that the emergence of pathological brain activity in Parkinson's disease and other neurological disorders is caused by the synchrony of many thousands of neurons. The established therapy for patients with such disorders is permanent high-frequency electrical stimulation via depth microelectrodes, called Deep Brain Stimulation (DBS). In spite of the efficiency of such stimulation, it has several side effects, and the mechanisms underlying DBS remain unclear. In the present work an efficient and simple control technique is suggested. It is designed to ensure suppression of synchrony in a neural ensemble by a minimized stimulation that vanishes as soon as the tremor is suppressed. This vanishing-stimulation technique would be a useful tool for experimental neuroscience; on the other hand, control of the collective dynamics of a large population of units represents an interesting physical problem in its own right. The main idea of the suggested approach is related to a classical problem of oscillation theory, namely the interaction between a self-sustained (active) oscillator and a passive load (resonator). It is known that under certain conditions the passive oscillator can suppress the oscillations of the active one. In this thesis a much more complicated case is considered: an active medium which itself consists of thousands of oscillators. By coupling this medium to a specially designed passive oscillator, one can control the collective motion of the ensemble, specifically enhance or suppress it. With a possible application in neuroscience in mind, we concentrate on the problem of suppression.
Second, the efficiency of the suggested suppression scheme is illustrated in a more complex case, in which the population of neurons generating the undesired rhythm consists of two non-overlapping subpopulations: the first is affected by the stimulation, while the collective activity is registered from the second. Generally speaking, the second population can itself be either active or passive; both cases are considered here. The possible applications of the suggested technique are discussed.
Third, the influence of an external linear feedback on the coherence of a noisy or chaotic self-sustained oscillator is considered. Coherence is one of the main properties of self-oscillating systems and plays a key role in the construction of clocks, electronic generators, lasers, etc. The coherence of a noisy limit-cycle oscillator is evaluated, in the context of phase dynamics, by the phase diffusion constant, which is in turn proportional to the width of the spectral peak of the oscillations. Many chaotic oscillators can be described within the framework of phase dynamics, and therefore their coherence can also be quantified by the phase diffusion constant.
The analytical theory for a general linear feedback, treating noisy systems in the linear and Gaussian approximations, is developed and validated by numerical results.
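A toy simulation can illustrate the suppression effect described above. The sketch assumes a noisy Kuramoto-type ensemble and feeds the measured mean field back with a phase chosen to counteract the mutual coupling; the thesis instead routes the feedback through a specially designed passive linear oscillator, so this is only a qualitative stand-in with invented parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, dt, steps = 1000, 1.5, 0.01, 4000      # oscillators, coupling, time step, steps
omega = rng.normal(0.0, 0.3, N)              # natural frequencies
theta0 = rng.uniform(0.0, 2 * np.pi, N)

def order_parameter(eps):
    """Integrate the noisy ensemble with mean-field feedback gain eps; return mean |R|."""
    th = theta0.copy()
    r = []
    for _ in range(steps):
        z = np.mean(np.exp(1j * th))                         # complex mean field R*exp(i*psi)
        drive = K * np.abs(z) * np.sin(np.angle(z) - th)     # mutual (Kuramoto) coupling
        fb = -eps * np.abs(z) * np.sin(np.angle(z) - th)     # feedback counteracting the coupling
        th = th + dt * (omega + drive + fb) + 0.1 * np.sqrt(dt) * rng.normal(size=N)
        r.append(np.abs(z))
    return float(np.mean(r[steps // 2:]))                    # time-averaged order parameter

print("collective amplitude, no feedback :", round(order_parameter(0.0), 3))
print("collective amplitude, eps = 3.0   :", round(order_parameter(3.0), 3))
```

With the gain above the coupling strength, the measured collective amplitude drops to the level expected for an incoherent population, and the injected signal shrinks together with the mean field, mimicking the vanishing-stimulation behaviour discussed in the abstract.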
