  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Analyse de l’environnement sonore pour le maintien à domicile et la reconnaissance d’activités de la vie courante des personnes âgées / Sound analysis of the environment for healthcare and recognition of daily life activities for the elderly

Robin, Maxime 17 April 2018 (has links)
The average age of the French and European population is increasing, and this observation brings new technical and societal challenges: older people are the most fragile and vulnerable, particularly with respect to domestic accidents and especially falls. This is why many projects to assist the elderly, whether technical, academic or commercial, have emerged in recent years. This thesis was carried out under a Cifre agreement, jointly between the company KRG Corporate and the BMBI laboratory (Biomechanics and Bioengineering) of the UTC (Université de Technologie de Compiègne). Its purpose is to propose a sensor for recognizing sounds and activities of daily living, with the aim of expanding and improving the tele-assistance system already marketed by the company. Several speech recognition and speaker recognition methods have already been applied to sound recognition, including GMM (Gaussian Mixture Model), SVM-GSL (Support Vector Machine with GMM-supervector linear kernel) and HMM (Hidden Markov Model) techniques. In the same spirit, we proposed to use i-vectors for sound recognition. I-vectors are used in particular in speaker recognition, a field they have recently revolutionized. We then broadened our scope and used deep learning, which currently gives very good classification results across all domains. We first used deep neural networks to reinforce the i-vectors, and then as the sole classification system. The methods mentioned above were also tested under noisy and then real conditions. These experiments yielded very satisfactory recognition rates, with neural networks reinforcing i-vectors and neural networks alone being the most accurate systems, a very significant improvement over the various systems derived from speech and speaker recognition.
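To make the GMM baseline mentioned in this abstract concrete, here is a minimal, hypothetical sketch of GMM-based sound classification over MFCC features. It is not the system described in the thesis; it assumes librosa and scikit-learn are available, and the two synthetic "classes" merely stand in for real domestic sound events.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_frames(y, sr=16000):
    # 20 MFCCs per frame; rows = frames, columns = coefficients
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20).T

# Toy "recordings": two synthetic sound classes (placeholders for door slams,
# running water, alarms, etc. in a real tele-assistance system).
sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
train = {
    "tone_low":  [np.sin(2 * np.pi * 220 * t) + 0.05 * np.random.randn(sr) for _ in range(5)],
    "tone_high": [np.sin(2 * np.pi * 1200 * t) + 0.05 * np.random.randn(sr) for _ in range(5)],
}

# One diagonal-covariance GMM per class, fitted on pooled MFCC frames.
models = {
    label: GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
           .fit(np.vstack([mfcc_frames(y, sr) for y in clips]))
    for label, clips in train.items()
}

def classify(y, sr=16000):
    frames = mfcc_frames(y, sr)
    # score() returns the mean per-frame log-likelihood under each class model
    return max(models, key=lambda label: models[label].score(frames))

test = np.sin(2 * np.pi * 230 * t) + 0.05 * np.random.randn(sr)
print(classify(test, sr))  # expected: "tone_low"
```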
62

A noisy-channel based model to recognize words in eye typing systems / Um modelo baseado em canal de ruído para reconhecer palavras digitadas com os olhos

Hanada, Raíza Tamae Sarkis 04 April 2018 (has links)
An important issue with eye-based typing is the correct identification both of when the user selects a key and of which key is selected. Traditional solutions are based on a predefined gaze fixation time, known as dwell-time methods. In an attempt to improve accuracy, long dwell times are adopted, which in turn lead to fatigue and longer response times. These problems motivate the proposal of methods free of dwell time, or with very short ones, which rely on more robust recognition techniques to reduce the uncertainty about the user's actions. These techniques are especially important when users have disabilities that affect their eye movements or use inexpensive eye trackers. One approach to the recognition problem is to treat it as a spelling correction task. A usual strategy for spelling correction is to model the problem as the transmission of a word through a noisy channel, such that it is necessary to determine which known word of a lexicon corresponds to the received string. A feasible application of this method requires reducing the set of candidate words by choosing only those that can be transformed into the input by applying up to k character edit operations. This idea works well for traditional typing because the number of errors per word is very small. However, this is not the case for eye-based typing systems, which are much noisier. In such a scenario, spelling correction strategies do not scale well, as they grow exponentially with k and the lexicon size. Moreover, the error distribution in eye typing is different, with many more insertion errors due to specific sources of noise such as the eye tracker device, particular user behaviors, and intrinsic characteristics of eye movements. Also, the lack of a large corpus of errors makes it hard to adopt probabilistic approaches based on information extracted from real-world data. To address all these problems, we propose an effective recognition approach that combines estimates extracted from general error corpora with domain-specific knowledge about eye-based input. The technique is able to calculate edit distances effectively by using a Mor-Fraenkel index, searchable using a minimal perfect hashing. The method allows the early processing of the most promising candidates, such that fast pruned searches present negligible loss in word ranking quality. We also propose a linear heuristic for estimating edit-based distances which takes advantage of information already provided by the index. Finally, we extend our recognition model to include the variability of eye movements as a source of errors, provide a comprehensive study of the importance of the noise model when combined with a language model, and determine how it affects user behaviour while typing. As a result, we obtain a method that is very effective at recognizing words and fast enough to be used in real eye typing systems. In a transcription experiment with 8 users, they achieved 17.46 words per minute using the proposed model, a gain of 11.3% over a state-of-the-art eye-typing system. The method was particularly useful in noisier situations, such as the first use sessions. Despite significant gains in typing speed and word recognition ability, we were not able to find statistically significant differences in the participants' perception of their experience with the two methods. This indicates that an improved suggestion ranking may not be clearly perceptible to users even when it enhances their performance.
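The noisy-channel formulation described above can be illustrated with a small, self-contained sketch: candidate words within k edit operations of the noisy input are scored by a unigram language model combined with a crude per-edit channel penalty. This is only a toy illustration; it uses a plain dynamic-programming edit distance rather than the Mor-Fraenkel index and minimal perfect hashing of the thesis, and the lexicon probabilities and penalty value are invented.

```python
import math

def edit_distance(a, b):
    # classic Levenshtein dynamic programme (insertions, deletions, substitutions)
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution / match
    return dp[-1]

def rank(noisy, lexicon, k=3, log_p_edit=math.log(0.02)):
    """Rank lexicon words for a noisy input: log P(word) + d * log P(edit),
    keeping only candidates within k edit operations."""
    scored = []
    for word, p_word in lexicon.items():
        d = edit_distance(noisy, word)
        if d <= k:
            scored.append((math.log(p_word) + d * log_p_edit, word))
    return [w for _, w in sorted(scored, reverse=True)]

# toy unigram language model (relative frequencies)
lexicon = {"hello": 0.004, "help": 0.002, "held": 0.001, "yellow": 0.0005}
print(rank("hhelllo", lexicon))  # noisy eye-typed string with spurious insertions
```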
63

Bayesian Models for the Analyzes of Noisy Responses From Small Areas: An Application to Poverty Estimation

Manandhar, Binod 26 April 2017 (has links)
We implement techniques of small area estimation (SAE) to study consumption, a welfare indicator used to assess poverty, in the 2003-2004 Nepal Living Standards Survey (NLSS-II) and the 2001 census. NLSS-II has detailed information on consumption, but it can give estimates only at stratum level or higher. While population variables are available for all households in the census, the census does not include information on consumption; the survey, however, does include the 'population' variables. We combine these two sets of data to provide estimates of poverty indicators (incidence, gap and severity) for small areas (wards, village development committees and districts). Consumption is the aggregate of all food and all non-food items consumed. In the welfare survey the respondents are asked to recall all information about consumption throughout the reference year. Therefore, such data are likely to be noisy, possibly due to response or recall errors. The consumption variable is continuous and positively skewed, so a statistician might use a logarithmic transformation, which can reduce skewness and help meet the normality assumption required for model building. However, this could be problematic, since back-transformation may produce inaccurate estimates and there are difficulties in interpretation. Without using the logarithmic transformation, we develop hierarchical Bayesian models to link the survey to the census. In our models for consumption, we incorporate the 'population' variables as covariates. First, we assume that consumption is noiseless, and it is modeled using three scenarios: the exponential distribution, the gamma distribution and the generalized gamma distribution. Second, we assume that consumption is noisy, and we fit the generalized beta distribution of the second kind (GB2) to consumption. We consider three more scenarios of GB2: a mixture of exponential and gamma distributions, a mixture of two gamma distributions, and a mixture of two generalized gamma distributions. We note that there are difficulties in fitting the models for noisy responses because these models have non-identifiable parameters. For each scenario, after fitting two hierarchical Bayesian models (with and without area effects), we show how to select the most plausible model and we perform a Bayesian data analysis on Nepal's poverty data. We show how to predict the poverty indicators for all wards, village development committees and districts of Nepal (a big data problem) by combining the survey data with the census. This is a computationally intensive problem because Nepal has about four million households, with about four thousand households in the survey, and there is no record linkage between households in the survey and the census. Finally, we perform empirical studies to assess the quality of our survey-census procedure.
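The poverty indicators named above (incidence, gap and severity) conventionally correspond to the Foster-Greer-Thorbecke (FGT) measures. The abstract does not spell out the formula, so the sketch below simply assumes the standard FGT definition applied to predicted household consumption and a hypothetical poverty line; the numbers are invented for illustration.

```python
import numpy as np

def fgt(consumption, z, alpha):
    """Foster-Greer-Thorbecke poverty measure P_alpha for poverty line z."""
    y = np.asarray(consumption, dtype=float)
    poor = y < z
    shortfall = np.where(poor, (z - y) / z, 0.0)  # zero for non-poor households
    if alpha == 0:
        return float(np.mean(poor))               # headcount ratio (incidence)
    return float(np.mean(shortfall ** alpha))     # alpha=1 gap, alpha=2 severity

# y = predicted household consumption for one ward, z = hypothetical poverty line
y = np.array([12_000, 8_500, 25_000, 6_000, 15_500])
z = 10_000
print(fgt(y, z, 0), fgt(y, z, 1), fgt(y, z, 2))
```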
65

Studying Perturbations on the Input of Two-Layer Neural Networks with ReLU Activation

Alsubaihi, Salman 07 1900 (has links)
Neural networks have been shown to be very susceptible to small and imperceptible perturbations of their input. In this thesis, we study perturbations on two-layer piecewise linear networks. Such studies are essential for training neural networks that are robust to noisy input. One type of perturbation we consider is ℓ1-norm bounded perturbations. Training Deep Neural Networks (DNNs) that are robust to norm-bounded perturbations, or adversarial attacks, remains an elusive problem. While verification-based methods are generally too expensive for robustly training large networks, it was demonstrated in [1] that bounded input intervals can be inexpensively propagated layer by layer through large networks. This interval bound propagation (IBP) approach led to high robustness and was the first to be employed on large networks. However, due to the very loose nature of the IBP bounds, particularly for large networks, the required training procedure is complex and involved. In this work, we closely examine the bounds of a block of layers composed of an affine layer followed by a ReLU nonlinearity followed by another affine layer. In doing so, we propose probabilistic bounds, true bounds with overwhelming probability, that are provably tighter than IBP bounds in expectation. We then extend this result to deeper networks through blockwise propagation and show that we can achieve bounds that are orders of magnitude tighter than IBP. With such tight bounds, we demonstrate that a simple standard training procedure can achieve the best robustness-accuracy tradeoff across several architectures on both MNIST and CIFAR10. We also consider Gaussian perturbations, where we build on previous work that derives the first and second output moments of a two-layer piecewise linear network [2]. In this work, we derive an exact expression for the second moment by dropping the zero-mean assumption in [2].
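The interval bound propagation step referred to above can be sketched in a few lines: an affine layer maps an input box to an output box using the center/radius form, and the ReLU simply clamps both bounds. The weights and sizes below are arbitrary placeholders, not the networks studied in the thesis.

```python
import numpy as np

def affine_bounds(W, b, l, u):
    # propagate the box [l, u] through y = W x + b using center and radius
    c, r = (l + u) / 2.0, (u - l) / 2.0
    yc = W @ c + b
    yr = np.abs(W) @ r
    return yc - yr, yc + yr

def relu_bounds(l, u):
    # ReLU is monotone, so it can be applied to each bound directly
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

# block: affine -> ReLU -> affine, input box [x - eps, x + eps]
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(32, 10)), np.zeros(32)
W2, b2 = rng.normal(size=(5, 32)), np.zeros(5)
x, eps = rng.normal(size=10), 0.1

l, u = affine_bounds(W1, b1, x - eps, x + eps)
l, u = relu_bounds(l, u)
l, u = affine_bounds(W2, b2, l, u)
print(l, u)  # guaranteed output bounds for any perturbation inside the input box
```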
66

Artificial Neural Network Approach For Characterization Of Acoustic Emission Sources From Complex Noisy Data

Bhat, Chandrashekhar 06 1900 (has links)
Safety and reliability are prime concerns in aircraft performance due to the costs involved and the risk to lives. Despite the best efforts in design methodology, quality evaluation in production and structural integrity assessment in service, attaining one hundred percent safety through the development and use of a suitable in-flight health monitoring system is still a far-fetched goal. The evolution of such a system requires, first, identification of an appropriate technique and, next, its adaptation to meet the challenges posed by newer materials (advanced composites), complex structures and the flight environment. In fact, a quick survey of the available Non-Destructive Evaluation (NDE) techniques suggests Acoustic Emission (AE) as the only available method. High merit in itself could be a weakness: noise is the worst enemy of AE. So, while difficulties are posed by the insufficient understanding of the basic behavior of composites and of the growth and interaction of defects and damage under a specified load condition, high in-flight noise further complicates the issue, making the developmental task formidable and challenging. Development of an in-flight monitoring system based on AE to function as an early warning system needs to address three aspects: first, discrimination of AE signals from noise data; second, extraction of the required information from AE signals for identification of sources (source characterization) and quantification of their growth; and third, automation of the entire process. A quick assessment of the aspects involved suggests that Artificial Neural Networks (ANN) are ideally suited for solving such a complex problem. A review of the available open literature indicates a number of investigations using noise elimination and source characterization methods such as frequency filtering and statistical pattern recognition, but only sporadic attempts using ANN. This is probably due to the complex nature of the problem, which involves investigating a large number of influencing parameters, the amount of effort and time to be invested, the facilities required and the multi-disciplinary nature of the problem. Hence, as stated in the foregoing, the need for such a study cannot be over-emphasized. Thus, this thesis addresses the analysis and automation of complex sets of AE data, such as AE signals mixed with in-flight noise, forming the first step towards in-flight monitoring using AE. An ANN can in fact replace the traditional algorithmic approaches used in the past. ANNs in general are model-free estimators and derive their computational efficiency from large connectivity, massive parallelism, non-linear analog response and learning capabilities. They are better suited than conventional methods (statistical pattern recognition methods) due to characteristics such as classification, pattern matching, learning, generalization, fault tolerance, distributed memory and the ability to process unstructured data sets which may at times carry incomplete information, and were hence chosen as the tool. Further, in the current context, the investigations were undertaken in the absence of sufficient a priori information, and hence clustering of signals generated by AE sources through self-organizing maps is more appropriate.

Thus, in the investigations carried out under the scope of this thesis, a hybrid network named "NAEDA" (Neural network for Acoustic Emission Data Analysis), combining a Kohonen self-organizing feature map (KSOM) and a multi-layer perceptron (MLP) trained with the back-propagation learning rule, was first developed, with innovative data processing techniques built into the network. For accurate pattern recognition, however, the multi-layer back-propagation NN needed to be trained with source and noise clusters as input data. Thus, in addition to optimizing the network architecture and training parameters, preprocessing of the input data to the network and multi-class clustering and classification proved to be the cornerstones in obtaining excellent identification accuracy. Next, the in-flight noise environment of an aircraft was generated offline through carefully designed simulation experiments carried out in the laboratory (e.g. EMI, friction, fretting and other mechanical and hydraulic phenomena), based on the in-flight noise surveys carried out by earlier investigators. From these experiments data were acquired and classified into their respective classes through the MLP. Further, these noises were mixed together, clustered through the KSOM and then classified into their respective clusters through the MLP, resulting in an accuracy of 95%-100%. Subsequently, to evaluate the utility of NAEDA for source classification and characterization, carbon fiber reinforced plastic (CFRP) specimens were subjected to spectrum loading simulating typical in-flight loads, and AE signals were acquired continuously up to a maximum of three design lives and in some cases up to failure. AE signals with similar characteristics were grouped into individual clusters through the self-organizing map and labeled as belonging to appropriate failure modes, thereby generating the class configuration. The MLP was then trained with this class information, which resulted in automatic identification and classification of failure modes with an accuracy of 95%-100%. In addition, extraneous noise generated during the experiments was acquired and classified so as to evaluate the presence or absence of such data in the AE data acquired from the CFRP specimens. In the next stage, noise and signals were mixed together at random and reclassified into their respective classes through supervised training of the multi-layer back-propagation NN. Initially only noise was discriminated from the AE signals of the CFRP failure modes; subsequently both noise discrimination and failure mode identification and classification were carried out, resulting in an accuracy of 95%-100% in most cases. Further, the extraneous signals mentioned above were classified, which indicated the presence of such signals in the AE signals obtained from the CFRP specimens. Thus, having established the basis for noise identification and AE source classification and characterization, two specific examples were considered to evaluate the utility and efficiency of NAEDA. In the first, with the postulate that different basic failure modes in composites have unique AE signatures, the difference in damage generation and progression can be clearly characterized under different loading conditions. To examine this, static compression tests were conducted on a different set of CFRP specimens till failure with continuous AE monitoring, and the resulting AE signals were classified through the already trained NAEDA.

The results show that the total number of signals obtained was much smaller than in the fatigue tests and that the specimens failed with hardly any damage growth. Further, NAEDA was able to discriminate the noise and failure modes in the CFRP specimens with the same degree of accuracy with which it had classified such signals obtained from fatigue tests. In the second example, with the same postulate of unique AE signatures for different failure modes, the differences in the character of damage growth and progression should become clearly evident when one considers specimens with different lay-up sequences. To examine this, the data were reclassified on the basis of differences in lay-up sequence for specimens subjected to fatigue. The results obtained clearly confirmed the postulate. As can be seen from the summary of the work presented in the foregoing paragraphs, the investigations undertaken within the scope of this thesis involved elaborate experimentation, development of tools, acquisition of extensive data and analysis. Nevertheless, the results obtained were commensurate with the effort and have been fruitful. Of the useful results obtained, the first, specifically, is the discrimination of simulated noise sources, achieved with significant success but for some overlapping, which is not of major concern as far as noise is concerned; the noises are therefore grouped into the required number of clusters so as to achieve better classification through the supervised NN. This proved to be an innovative measure in supervised classification through the back-propagation NN. The second is the damage characterization in CFRP specimens, which involved imaginative data processing techniques that proved their worth in terms of optimization of various training parameters and resulted in accurate identification through clustering. Labeling of clusters is made possible by marking each signal from clustering through to final classification by the supervised neural network, and is achieved through phenomenological correlation combined with ultrasonic imaging. Most rewarding of all is the identification of failure modes (AE signals) mixed in noise into their respective classes. This is a direct consequence of innovative data processing, multi-class clustering and the flexibility of grouping various noise signals into a suitable number of clusters. Thus, the results obtained and presented in this thesis on the NN approach to AE signal analysis clearly establish that the methods and procedures developed can automate the detection and identification of failure modes in CFRP composites in a hostile environment, which could lead to the development of an in-flight monitoring system.
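As a rough illustration of the two-stage pipeline described above (unsupervised clustering of AE hits followed by supervised classification), the sketch below uses scikit-learn, with KMeans standing in for the Kohonen self-organizing map and random placeholder features in place of real AE waveform parameters; it is not the NAEDA network itself.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Hypothetical AE feature matrix: rows = hits, columns = features such as
# amplitude, rise time, counts, duration, energy (placeholder random data).
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 5))
X_std = StandardScaler().fit_transform(X)

# Unsupervised stage: group hits with similar signatures (KMeans standing in
# for the Kohonen self-organizing map used in NAEDA).
clusters = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(X_std)

# Supervised stage: a multi-layer perceptron learns to reproduce the cluster
# labels (in the thesis, clusters would first be labeled as failure modes or
# noise via phenomenological correlation and ultrasonic imaging).
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_std, clusters)
print("training accuracy:", clf.score(X_std, clusters))
```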
67

Alignement de phrases parallèles dans des corpus bruités

Lamraoui, Fethi 07 1900 (has links)
Statistical machine translation requires parallel corpora in large quantities, typically obtained through automatic alignment, at the sentence level, of a text and its translation. The alignment of parallel corpora received a lot of attention in the eighties and is largely considered a solved problem in the community. We show in this thesis that this is not the case and propose a new aligner that we compare to state-of-the-art algorithms. Our aligner is simple, fast and can handle very large amounts of data; it often produces better results than the most elaborate aligners. We analyze the robustness of our aligner as a function of the genre of the texts to be aligned and of the noise they contain. For this, our experiments are divided into two main parts. In the first part, we work on the BAF corpus, where we measure the alignment quality as a function of noise levels of up to 60%. In the second part, we work on the Europarl corpus, revisit the alignment procedure with which it was prepared, and show that better statistical machine translation performance can be obtained using our aligner.
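For readers unfamiliar with sentence alignment, the sketch below shows a much-simplified length-based aligner in the spirit of Gale and Church, with a small set of alignment patterns and a crude length-ratio cost; it is only an illustration of the task, not the aligner proposed in this work.

```python
# Minimal length-based sentence aligner: dynamic programming over the
# alignment patterns 1-1, 1-0, 0-1, 2-1 and 1-2, using character lengths.

def align(src, tgt, skip_penalty=15.0):
    def cost(s_sents, t_sents):
        if not s_sents or not t_sents:
            return skip_penalty                      # unmatched sentence
        ls = sum(len(s) for s in s_sents)
        lt = sum(len(t) for t in t_sents)
        return abs(ls - lt) / (ls + lt) * 10.0       # crude length-ratio penalty

    n, m = len(src), len(tgt)
    INF = float("inf")
    best = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    moves = [(1, 1), (1, 0), (0, 1), (2, 1), (1, 2)]
    for i in range(n + 1):
        for j in range(m + 1):
            if best[i][j] == INF:
                continue
            for di, dj in moves:
                ni, nj = i + di, j + dj
                if ni > n or nj > m:
                    continue
                c = best[i][j] + cost(src[i:ni], tgt[j:nj])
                if c < best[ni][nj]:
                    best[ni][nj] = c
                    back[ni][nj] = (i, j)
    # recover the alignment path from the back-pointers
    path, i, j = [], n, m
    while (i, j) != (0, 0):
        pi, pj = back[i][j]
        path.append((src[pi:i], tgt[pj:j]))
        i, j = pi, pj
    return list(reversed(path))

print(align(["Hello world.", "How are you?"],
            ["Bonjour le monde.", "Comment allez-vous ?"]))
```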
68

L'industrialisation du logement en France (1885-1970) : De la construction légère et démontable à la construction lourde et architecturale

Fares, Kinda 16 March 2012 (has links) (PDF)
The thesis deals with the industrialization of housing in France (1885-1970), from light, demountable construction to heavy, architectural construction. The subject lies at the intersection of four broad topics: the existence of industrialization before the Second World War; the technical policy of the Ministry of Reconstruction and Urbanism (MRU); the projects carried out after the Second World War that applied the industrialization methods imposed by the State; and the principles of the Athens Charter. The period of study extends from 1885, the first European witness of the industrialization of building, to 1970, the year this type of construction was called into question. The industrialization of building has very old roots: it first grew among the military, for the needs of colonial conquest, of campaigns and of the wars that set Europe ablaze. The beach hut or holiday shack, the canvas tent and the market awning are all constructive forms that proliferated at the end of the nineteenth century. Above all, the colonial expeditions, conducted at a forced pace, demanded speed, security and capacity: the barrack was the industrial solution. Industrialization then continued, no longer light but heavy. For the State it was the main path, because it lowered the cost of construction, reduced the number of site operations and improved the comfort of dwellings. From 1945, the new French State invested in the most devastated regions and encouraged innovations based on new materials and techniques by instituting the technical approval of "new materials and non-traditional construction processes". In the first part of this research, we tried to show that there was indeed an industrialization of building before the Second World War. Industrialization took over light construction abruptly in the 1890s. The demountable and transportable barrack, military and mobile, became the object of competition, confrontation and warlike interests in western Europe. Dozens of models were prefabricated and erected behind the battlefields or in anticipation of territorial conquests. In a second stage we chose to continue the history of heavy construction in the post-war period, specifically housing construction, and therefore studied two remarkable projects of the period just after the Second World War. 1- The experimental housing estate (cité expérimentale) of Noisy-le-Sec: through this project the State tried to test new processes and materials that would use less raw material and energy, simplify construction, publicize these novelties in order to turn technique into technology, and contribute to the improvement of housing (interior comfort, equipment). To do so, it imported processes and imposed changes of pace and scale. 2- The Grands Terres project: the Grands Terres building site must be considered the first masterpiece of heavy prefabricated housing. This project also asserted a new way of thinking about the city and its relationship to housing; it is one of the successful applications of the Athens Charter, the bible of Lods's urbanism, and a reference for the urban developments of the 1960s and 1970s.
Finally, to structure this academic research I took a "chronological" approach: 1885-1940, "light and demountable construction"; 1940-1970, "heavy, non-demountable prefabrication"; 1945-1953, "the experimental housing estate of Noisy-le-Sec"; 1952-1956, "the most successful completed model of the large-scale operations, the Grands Terres project".
69

Optimising evolutionary strategies for problems with varying noise strength

Di Pietro, Anthony January 2007 (has links)
For many real-world applications of evolutionary computation, the fitness function is obscured by random noise. This interferes with the evaluation and selection processes and adversely affects the performance of the algorithm. Noise can be effectively eliminated by averaging a large number of fitness samples for each candidate, but the number of samples used per candidate (the resampling rate) required to achieve this is usually prohibitively large and time-consuming. Hence there is a practical need for algorithms that handle noise without eliminating it. Moreover, the amount of noise (noise strength and distribution) may vary throughout the search space, further complicating matters. We study noisy problems for which the noise strength varies throughout the search space. Such problems have generally been ignored by previous work, which has instead generally focussed on the specific case where the noise strength is the same at all points in the search domain. However, this need not be the case, and indeed this assumption is false for many applications. For example, in games of chance such as Poker, some strategies may be more conservative than others and therefore less affected by the inherent noise of the game. This thesis makes three significant contributions in the field of noisy fitness functions: We present the concept of dynamic resampling. Dynamic resampling is a technique that varies the resampling rate based on the noise strength and fitness for each candidate individually. This technique is designed to exploit the variation in noise strength and fitness to yield a more efficient algorithm. We present several dynamic resampling algorithms and give results that show that dynamic resampling can perform significantly better than the standard resampling technique that is usually used by the optimisation community, and that dynamic resampling algorithms that vary their resampling rates based on both noise strength and fitness can perform better than algorithms that vary their resampling rate based on only one of the above. We study a specific class of noisy fitness functions for which we counterintuitively find that it is better to use a higher resampling rate in regions of lower noise strength, and vice versa. We investigate how the evolutionary search operates on such problems, explain why this is the case, and present a hypothesis (with supporting evidence) for classifying such problems. We present an adaptive engine that automatically tunes the noise compensation parameters of the search during the run, thereby eliminating the need for the user to choose these parameters ahead of time. This means that our techniques can be readily applied to real-world problems without requiring the user to have specialised domain knowledge of the problem that they wish to solve. These three major contributions present a significant addition to the body of knowledge for noisy fitness functions. Indeed, this thesis is the first work specifically to examine the implications of noise strength that varies throughout the search domain for a variety of noise landscapes, and thus starts to fill a large void in the literature on noisy fitness functions.
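The dynamic resampling idea can be illustrated with a minimal sketch in which the number of fitness samples per candidate grows until the standard error of the estimate falls below a tolerance. The toy objective, noise model and thresholds below are invented for illustration and do not correspond to the specific algorithms developed in the thesis.

```python
import math
import random

def noisy_fitness(x):
    # toy objective: sphere function plus noise whose strength varies over the space
    true_value = sum(v * v for v in x)
    noise_strength = 0.1 + abs(x[0])        # noise strength depends on the candidate
    return true_value + random.gauss(0.0, noise_strength)

def dynamic_resample(x, se_target=0.05, min_samples=3, max_samples=100):
    """Average repeated evaluations until the standard error of the mean
    drops below se_target (or a sample cap is hit)."""
    samples = [noisy_fitness(x) for _ in range(min_samples)]
    while len(samples) < max_samples:
        n = len(samples)
        mean = sum(samples) / n
        var = sum((s - mean) ** 2 for s in samples) / (n - 1)
        if math.sqrt(var / n) <= se_target:
            break
        samples.append(noisy_fitness(x))
    return sum(samples) / len(samples), len(samples)

est, used = dynamic_resample([0.9, -0.2])
print(f"fitness estimate {est:.3f} using {used} samples")
```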
70

Análise de abordagens automáticas de anotação semântica para textos ruidosos e seus impactos na similaridade entre vídeos

Dias, Laura Lima 31 August 2017 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / With the accumulation of digital information stored over time, some effort must be applied to make content easier to search and index. Resources such as videos and audio, in turn, are harder for search engines to handle. Video annotation is an important means of video summarization, search and classification. The share of videos with annotations provided by the author themselves is most often very small and not very significant, and annotating videos manually is very laborious when dealing with legacy collections. For this reason, automating this process has long been desired in the field of Information Retrieval. In video lecture repositories, where most of the information is concentrated in the teacher's speech, this process can be performed through automatic annotation of transcripts generated by Automatic Speech Recognition systems. However, this technique produces noisy texts, making the task of automatic semantic annotation difficult. Among the many Natural Language Processing techniques used for annotation, it is not trivial to choose the most appropriate one for a given scenario, especially when annotating noisy texts. This research analyzes a set of different techniques used for automatic annotation and verifies their impact on a single scenario: similarity between videos.
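One simple way to measure the impact of annotations on video similarity, as studied here, is to compare videos through the cosine similarity of TF-IDF vectors built from their annotations. The sketch below uses scikit-learn and invented annotation strings purely as placeholders; it is not the evaluation protocol of the thesis.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical semantic annotations (e.g. entities/concepts extracted from
# ASR transcripts of three video lectures); placeholder strings only.
annotations = [
    "neural network gradient descent backpropagation loss function",
    "gradient descent optimization convex function learning rate",
    "photosynthesis chlorophyll plant cell light reaction",
]

tfidf = TfidfVectorizer().fit_transform(annotations)
sim = cosine_similarity(tfidf)   # pairwise similarity between the videos
print(sim.round(2))
```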
