31

Emergent behavior based implements for distributed network management

Wittner, Otto January 2003 (has links)
Network and system management has always been of concern for telecommunication and computer system operators. The need for standardization was recognised already 20 years ago, hence several standards for network management exist today. However, the ever-increasing number of units connected to networks and the ever-increasing number of services being provided result in significantly increased complexity of average network environments, which challenges current management systems. In addition to the general increase in complexity, the trend among network owners and operators of merging several single-service networks into larger, heterogeneous and complex full-service networks challenges current management systems even further. Full-service networks will require management systems more powerful than what can be realized by basing systems purely on today's management standards. This thesis presents a distributed stochastic optimization algorithm which enables implementations of highly robust and efficient management tools. These tools may be integrated into management systems and potentially make the systems more powerful and better prepared for managing full-service networks.
Emergent behavior is common in nature and easily observable in colonies of social insects and animals. Even an old oak tree can be viewed as an emergent system with its collection of interacting cells. Characteristic for any emergent system is how the overall behavior emerges from many relatively simple, restricted behaviors interacting, e.g. a thousand ants building a trail, a flock of birds flying south or millions of cells making a tree grow. No centralized control exists, i.e. no single unit is in charge of making global decisions. Despite distributed control, high work redundancy and stochastic behavior components, emergent systems tend to be very efficient problem solvers. In fact, emergent systems tend to be efficient, adaptive and robust, three properties that are indeed desirable for a network management system. The algorithm presented in this thesis relates to a class of emergent behavior based systems known as swarm intelligence systems, i.e. the algorithm is potentially efficient, adaptive and robust. In contrast to other related swarm intelligence algorithms, the algorithm presented here has a thorough formal foundation, which enables a better understanding of its potentials and limitations, and hence better adaptation of the algorithm to new problem areas without loss of efficiency, adaptability or robustness. The formal foundation is based on work by Reuven Rubinstein on cross-entropy driven optimization. The transition from Rubinstein's centralized and synchronous algorithm to a distributed and asynchronous algorithm is described, and the distributed algorithm's ability to solve complex (NP-complete) problems efficiently is demonstrated.
Four examples of how the distributed algorithm may be applied in a network management context are presented. A system for finding near-optimal patterns of primary/backup paths, together with a system for finding cyclic protection paths in mesh networks, demonstrates the algorithm's ability to act as a tool that helps a management system ensure quality of service. The algorithm's potential as a management policy implementation mechanism is also demonstrated, and its adaptability is shown to enable resolution of policy conflicts in a soft manner, causing as little loss as possible. Finally, the algorithm's ability to find near-optimal paths (i.e. sequences) of resources in large-scale networks is demonstrated.
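To make the cross-entropy connection concrete, the sketch below shows a centralized, single-process version of Rubinstein-style cross-entropy path optimization: random walks are sampled from routing probabilities, the elite (shortest) paths are selected, and the probabilities are re-estimated from them. The small graph, all parameter values and variable names are invented for illustration; this is not the author's distributed algorithm, which spreads these updates across network nodes.

```python
# Minimal centralized cross-entropy (CE) sketch for near-optimal path finding.
# Graph, weights and hyperparameters are invented; not the thesis implementation.
import numpy as np

rng = np.random.default_rng(0)

# Directed graph as a weight matrix (np.inf = no link); nodes 0..4, route 0 -> 4.
W = np.array([[np.inf, 2.0,    5.0,    np.inf, np.inf],
              [np.inf, np.inf, 2.0,    4.0,    np.inf],
              [np.inf, np.inf, np.inf, 1.0,    6.0],
              [np.inf, np.inf, np.inf, np.inf, 2.0],
              [np.inf] * 5])
n, src, dst = W.shape[0], 0, 4
links = np.isfinite(W)

P = links / np.maximum(links.sum(axis=1, keepdims=True), 1)  # uniform routing probs

def sample_path(P, max_hops=10):
    """Random walk guided by P; returns (path, cost) or (None, inf) on failure."""
    path, cost, node = [src], 0.0, src
    for _ in range(max_hops):
        probs = P[node]
        if probs.sum() == 0:
            return None, np.inf
        nxt = rng.choice(n, p=probs / probs.sum())
        cost += W[node, nxt]
        path.append(nxt)
        node = nxt
        if node == dst:
            return path, cost
    return None, np.inf

N, rho, alpha = 200, 0.1, 0.7          # samples per iteration, elite fraction, smoothing
for it in range(20):
    samples = [sample_path(P) for _ in range(N)]
    costs = np.array([c for _, c in samples])
    gamma = np.quantile(costs[np.isfinite(costs)], rho)   # elite cost threshold
    elite = [p for p, c in samples if c <= gamma]
    counts = np.zeros_like(P)
    for p in elite:
        for a, b in zip(p[:-1], p[1:]):
            counts[a, b] += 1
    newP = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    P = alpha * newP + (1 - alpha) * P                    # smoothed CE update
print("best path found:", min((c, p) for p, c in samples if p)[1])
```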
32

Optimisation de la fiabilité des structures contrôlées / Reliability optimization of controlled structures

Mrabet, Elyes 08 April 2016 (has links)
Le présent travail traite l’optimisation des paramètres des amortisseurs à masses accordées (AMA) accrochés sur des structures, linéaires. Les AMAs sont des dispositifs de contrôle passif utilisés pour atténuer les vibrations induites par des chargements dynamiques (en particulier stochastiques) appliqués sur des structures. L’efficacité de tels dispositifs est étroitement liée aux caractéristiques dynamiques qu’on doit imposer à ces systèmes. Dans ce cadre, plusieurs stratégies d’optimisation peuvent être utilisées dans des contextes déterministes et non déterministes, où les paramètres de la structure à contrôler sont incertains. Parmi les différentes approches qu’on peut trouver dans la littérature, l’optimisation structurale stochastique (OSS) et l’optimisation basée sur la fiabilité (OBF) étaient particulièrement traitées dans le présent travail.Dans la première partie de ce travail, en plus de la nature stochastique des chargements extérieurs appliqués à la structure linéaire à contrôler, la présence de paramètres structuraux de type incertains mais bornés (IMB) est prise en considération et les bornes optimales des paramètres AMA ont été calculées. Le calcul de ces bornes a été fait en utilisant une technique basée sur un développement de Taylor suivi d’une extension aux intervalles. La technique, permettant l’obtention d’une approximation des bornes optimales, a été appliquée dans les cas d’un système à un degré de liberté (1DDL) et un autre à plusieurs degrés de libertés (nDDL). Les résultats obtenus ont montrés que la technique utilisée était bien adaptée pour la stratégie OSS et elle l’est moins pour l’approche OBF.Comme suite logique aux résultats de la première partie, la seconde partie de la présente dissertation est consacrée à la présentation de deux méthodes permettant l’obtention des bornes exactes et des bornes approximées des paramètres optimaux de l’AMA et ce, en présence de paramètres structuraux de type IMB. La première méthode est celle de la boucle d’optimisation continue imbriquée, la seconde est celle des extensions aux intervalles basées sur la monotonie. Les méthodes présentées, qui ont été appliquées avec l’approche OBF, sont valables pour n’importe quel problème d’optimisation faisant intervenir des paramètres de type IMB. Mis à part le calcul de bornes optimisées du dispositif AMA, la question de la robustesse, vis-à-vis des incertitudes structurales, a été également traitée et il a été prouvé que la solution optimale correspondante au contexte déterministe était la plus robuste.L’introduction d’une nouvelle stratégie OBF des paramètres AMA a fait l’objet de la troisième partie de cette dissertation. En effet, un problème OBF est toujours relié à un mode de défaillance caractérisé par le franchissement d’une certaine réponse, de la structure à contrôler, d’un certain seuil limite pendant une certaine durée de temps. Le nouveau mode de défaillance, correspondant à la nouvelle stratégie OBF, consiste à considérer qu’une défaillance ait lieu lorsque la puissance dissipée au niveau de la structure à contrôler, pendant une période de temps, excède une certaine valeur. Faisant intervenir l’approche par franchissement ainsi que la formule de Rice, la nouvelle stratégie a été appliquée dans le cas d’un système 1DDL et l’expression exacte de la probabilité de défaillance est calculée. 
En se basant sur une approximation mettant en œuvre la technique du minimum d'entropie croisée, la nouvelle stratégie a été également appliquée dans le cas d'un système à nDDL et les résultats obtenus ont montré la supériorité de cette stratégie par rapport à deux autres tirées de la bibliographie. / The present work deals with the parameter optimization of tuned mass dampers (TMD) used in the control of vibrating linear structures under stochastic loadings. The performance of the TMD device is deeply affected by its parameters, which should therefore be chosen carefully. In this context, several optimization strategies can be found in the literature, and among them the stochastic structural optimization (SSO) and the reliability based optimization (RBO) are particularly addressed in this dissertation. The first part of this work is dedicated to the calculation of the optimal bounds of the TMD parameters in the presence of uncertain but bounded (UBB) structural parameters. The bounds of the optimal TMD parameters are obtained using an approximation technique based on a Taylor expansion followed by interval extension. The numerical investigations, applied to one degree of freedom (1DOF) and multi-degree of freedom (multi-DOF) systems, showed that the studied technique is suitable for the SSO strategy and less appropriate for the RBO strategy. As an immediate consequence of the results obtained in the first part of this work, the second part presents and validates a method, called the continuous-optimization nested loop method (CONLM), providing the exact range of the optimal TMD parameters. The numerical studies demonstrated that the CONLM is time consuming, and to overcome this disadvantage a second method is also presented, called the monotonicity based extension method (MBEM) with box splitting. Both methods have been applied in the context of the RBO strategy with 1DOF and multi-DOF systems. The issue of effectiveness and robustness of the presented optimum bounds of the TMD parameters is also addressed, and it has been demonstrated that the optimum solution corresponding to the deterministic context (deterministic structural parameters) provides good effectiveness and robustness. Another aspect of the RBO approach is dealt with in the third part of the present work. Indeed, a new RBO strategy for TMD parameters based on an energetic criterion is presented and validated. The new RBO approach is linked to a new failure mode characterized by the exceedance of the power dissipated in the controlled structure over a certain threshold during some time interval. Based on the outcrossing approach and Rice's formula, the new strategy is first applied to a 1DOF system and the exact expression of the failure probability is calculated. After that, a multi-DOF system is considered and the minimum cross-entropy method is used to provide an approximation of the failure probability, after which the optimization is carried out. The numerical investigations showed the superiority of the presented strategy when compared with others from the literature.
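As a rough illustration of the ingredients named in this abstract (a tuned mass damper on a linear structure, stationary response statistics, and a first-passage failure probability via Rice's formula and the outcrossing approach), the sketch below sets up a two-degree-of-freedom structure-plus-TMD model under white-noise loading and grid-searches the TMD tuning. All numerical values, variable names and the simple grid search are invented for the example; this is not the author's formulation, data or code.

```python
# Simplified TMD reliability illustration: Lyapunov covariance + Rice's formula.
# All values are invented; a sketch, not the thesis methodology.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def failure_probability(m1, k1, c1, mu, f_ratio, zeta_t, S0, barrier, T):
    """P(|x1| exceeds `barrier` within horizon T), Poisson upcrossing assumption."""
    m2 = mu * m1                                  # TMD mass
    w1 = np.sqrt(k1 / m1)
    k2 = m2 * (f_ratio * w1) ** 2                 # TMD stiffness
    c2 = 2.0 * zeta_t * m2 * f_ratio * w1         # TMD damping
    M = np.diag([m1, m2])
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    C = np.array([[c1 + c2, -c2], [-c2, c2]])
    # State-space: z = [x1, x2, v1, v2], white-noise force (two-sided PSD S0) on mass 1.
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
    B = np.vstack([np.zeros((2, 1)), np.linalg.solve(M, np.array([[1.0], [0.0]]))])
    Q = 2 * np.pi * S0 * (B @ B.T)                # white-noise intensity
    P = solve_continuous_lyapunov(A, -Q)          # stationary covariance of z
    sx, sv = np.sqrt(P[0, 0]), np.sqrt(P[2, 2])   # std of x1 and of its velocity
    nu = (sv / (2 * np.pi * sx)) * np.exp(-barrier**2 / (2 * sx**2))  # Rice rate
    return 1.0 - np.exp(-2 * nu * T)              # double barrier, Poisson crossings

# Crude deterministic tuning: grid-search the TMD frequency ratio and damping.
best = min(((failure_probability(1.0, 100.0, 0.4, 0.05, f, z,
                                 S0=0.1, barrier=0.35, T=600.0), f, z)
            for f in np.linspace(0.85, 1.05, 21)
            for z in np.linspace(0.02, 0.20, 19)))
print("min failure probability %.3e at f=%.3f, zeta_t=%.3f" % best)
```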
33

An Exploration of the Word2vec Algorithm: Creating a Vector Representation of a Language Vocabulary that Encodes Meaning and Usage Patterns in the Vector Space Structure

Le, Thu Anh 05 1900 (has links)
This thesis is an exploration and exposition of a highly efficient shallow neural network algorithm called word2vec, which was developed by T. Mikolov et al. in order to create vector representations of a language vocabulary such that information about the meaning and usage of the vocabulary words is encoded in the vector space structure. Chapter 1 introduces natural language processing, vector representations of language vocabularies, and the word2vec algorithm. Chapter 2 reviews the basic mathematical theory of deterministic convex optimization. Chapter 3 provides background on some concepts from computer science that are used in the word2vec algorithm: Huffman trees, neural networks, and binary cross-entropy. Chapter 4 provides a detailed discussion of the word2vec algorithm itself and includes a discussion of continuous bag of words, skip-gram, hierarchical softmax, and negative sampling. Finally, Chapter 5 explores some applications of vector representations: word categorization, analogy completion, and language translation assistance.
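For readers unfamiliar with the training objective discussed in Chapter 4, the toy sketch below implements skip-gram with negative sampling from the published description of word2vec, not from this thesis. The tiny corpus, the embedding dimension and all hyperparameters are made up, and the unigram negative-sampling distribution is simplified to uniform sampling.

```python
# Toy skip-gram with negative sampling (SGNS); invented corpus and settings.
import numpy as np

corpus = "the quick brown fox jumps over the lazy dog".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, D, window, n_neg, lr = len(vocab), 16, 2, 5, 0.05

rng = np.random.default_rng(1)
W_in = rng.normal(0, 0.1, (V, D))    # "input" vectors (the embeddings kept at the end)
W_out = rng.normal(0, 0.1, (V, D))   # "output" (context) vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for epoch in range(200):
    for pos, word in enumerate(corpus):
        c = idx[word]
        for off in range(-window, window + 1):
            if off == 0 or not (0 <= pos + off < len(corpus)):
                continue
            o = idx[corpus[pos + off]]            # observed context word
            negs = rng.integers(0, V, n_neg)      # uniform negatives (simplified)
            # positive pair: push sigma(u_o . v_c) toward 1
            g = sigmoid(W_out[o] @ W_in[c]) - 1.0
            grad_c = g * W_out[o]
            W_out[o] -= lr * g * W_in[c]
            # negative pairs: push sigma(u_k . v_c) toward 0
            for k in negs:
                gk = sigmoid(W_out[k] @ W_in[c])
                grad_c += gk * W_out[k]
                W_out[k] -= lr * gk * W_in[c]
            W_in[c] -= lr * grad_c

def most_similar(w):
    v = W_in[idx[w]]
    sims = W_in @ v / (np.linalg.norm(W_in, axis=1) * np.linalg.norm(v) + 1e-12)
    return [vocab[i] for i in np.argsort(-sims)[1:4]]

print("nearest to 'quick':", most_similar("quick"))
```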
34

[pt] AVALIAÇÃO DA CONFIABILIDADE DE SISTEMAS DE GERAÇÃO COM FONTES RENOVÁVEIS VIA TÉCNICAS DE SIMULAÇÃO MONTE CARLO E ENTROPIA CRUZADA / [en] RELIABILITY ASSESSMENT OF GENERATING SYSTEMS WITH RENEWABLE SOURCES VIA MONTE CARLO SIMULATION AND CROSS ENTROPY TECHNIQUES

RICARDO MARINHO SILVA FILHO 04 October 2021 (has links)
[pt] A avaliação de confiabilidade da capacidade de geração é extremamente útil em diversos estudos de planejamento da expansão, na avaliação dos riscos relacionados ao dimensionamento da reserva operativa e também na programação da manutenção de unidades geradoras. O principal objetivo é avaliar se uma determinada configuração de unidades de geração atende de forma aceitável à carga do sistema, assumindo que os equipamentos de transmissão sejam totalmente confiáveis e sem limitações de capacidade. Na última década, a inserção de fontes renováveis nos sistemas elétricos de potência tem crescido de forma acentuada, na grande maioria dos países desenvolvidos como também em desenvolvimento. As flutuações de suas capacidades de geração se tornaram parte da complexidade do problema de planejamento e operação de redes elétricas, uma vez que dependem das condições ambientais em que foram instaladas. Além disso, representações detalhadas da carga têm se tornado uma preocupação a mais de muitos planejadores, tendo em vista as análises de risco ao atendimento da demanda nessas redes. Novos modelos e ferramentas computacionais devem ser desenvolvidos para tratar dessas variáveis principalmente com dependência espaço-temporal. Esta dissertação apresenta diversos estudos para avaliar a confiabilidade da capacidade de sistemas de geração via simulação Monte Carlo quasi-sequencial (SMC-QS), considerando fontes de geração e carga com forte dependência espaço-temporal. Esta ferramenta é escolhida devido à sua fácil implementação computacional e capacidade de simular eventos cronológicos. A técnica de redução de variância denominada amostragem por importância baseada no método Cross Entropy (CE) foi utilizada em conjunto com a SMC-QS. As simulações terão como base o sistema teste IEEE-RTS 96, o qual é adequadamente modificado para incluir fontes renováveis eólicas e hídricas. Portanto, o principal objetivo desta dissertação é definir a melhor maneira de lidar com as séries temporais representativas da geração renovável e carga, nos diferentes estágios do método SMC-QS via CE, de modo a maximizar sua eficiência computacional. Vários testes de simulação são realizados com o sistema IEEE-RTS 96 modificado e os resultados obtidos são amplamente discutidos. / [en] The reliability evaluation of the generating capacity is extremely useful in several expansion planning studies, in the assessment of risks related to the requirements of the operating reserve, and also in the maintenance scheduling of generating units. The main objective is to assess whether a given generating configuration meets the system load in an acceptable manner, assuming that the transmission equipment is completely reliable and without capacity limitations. In the last decade, the insertion of renewable sources in electrical power systems has grown markedly in the vast majority of developed and developing countries. Fluctuations in their generation capacities have become part of the complexity of planning and operating electrical networks, since they depend on the environmental conditions in which the sources are installed. In addition, detailed representations of the load have become a concern among many planners, given the risk analyses involved in meeting demand in these networks. New computational models and tools must be developed to deal with these variables, mainly those with space-time dependence.
This dissertation presents several studies to evaluate the reliability of generating system capacity via quasi-sequential Monte Carlo simulation (QS-MCS), considering generation sources and load with strong space-time dependence. This tool is chosen due to its easy computational implementation and its ability to simulate chronological events. The variance reduction technique known as importance sampling, based on the cross-entropy (CE) method, is used in conjunction with the QS-MCS. The simulations are based on the IEEE-RTS 96 test system, which is suitably modified to include renewable wind and hydro sources. Therefore, the main objective of this dissertation is to define the best way to handle the time series representing renewable generation and load in the different stages of the QS-MCS method via CE, in order to maximize its computational efficiency. Several simulation tests are performed with the modified IEEE-RTS 96 system and the results obtained are discussed in detail.
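The sketch below illustrates, in a deliberately static setting, the variance-reduction idea this abstract refers to: Monte Carlo estimation of a loss-of-load probability with the unit outage probabilities tilted by a cross-entropy update and corrected by likelihood ratios. The unit capacities, outage rates and load level are invented, and the model ignores chronology, so it is a stand-in for the quasi-sequential simulation of the modified IEEE-RTS 96 used in the dissertation, not a reproduction of it.

```python
# Static LOLP estimation with cross-entropy-driven importance sampling.
# Unit data and load are invented; not the dissertation's chronological model.
import numpy as np

rng = np.random.default_rng(42)
cap = np.array([200, 200, 150, 150, 100, 100, 100, 80, 80, 60], float)  # MW
q   = np.full(cap.size, 0.04)      # forced outage rates (true probabilities)
load = 800.0                        # MW, kept low so loss of load is a rare event
N = 20000

def sample_states(p, n):
    """n outage vectors (1 = unit out) drawn with per-unit outage probs p."""
    return (rng.random((n, p.size)) < p).astype(float)

def shortfall(x):
    return load - (cap * (1.0 - x)).sum(axis=1)   # > 0 means loss of load

def likelihood_ratio(x, p, p0=q):
    lr = (p0 / p) ** x * ((1 - p0) / (1 - p)) ** (1 - x)
    return lr.prod(axis=1)

# --- CE pilot stage: bias outage probabilities toward loss-of-load states ---
p = q.copy()
for _ in range(5):
    x = sample_states(p, 5000)
    s = shortfall(x)
    gamma = min(0.0, np.quantile(s, 0.95))        # raise the level toward 0
    elite = s >= gamma
    w = likelihood_ratio(x[elite], p)
    p = (w[:, None] * x[elite]).sum(axis=0) / w.sum()
    p = np.clip(p, q, 0.5)                        # keep a valid, biased density

# --- Final importance-sampling estimate of LOLP ---
x = sample_states(p, N)
hit = (shortfall(x) > 0).astype(float)
lolp = (hit * likelihood_ratio(x, p)).mean()
print("IS/CE LOLP : %.3e" % lolp)
print("crude LOLP : %.3e" % (shortfall(sample_states(q, N)) > 0).mean())
```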
35

[en] OPERATING RESERVE ASSESSMENT IN MULTI-AREA SYSTEMS WITH RENEWABLE SOURCES VIA CROSS ENTROPY METHOD / [pt] PLANEJAMENTO DA RESERVA OPERATIVA EM SISTEMAS MULTIÁREA COM FONTES RENOVÁVEIS VIA MÉTODO DA ENTROPIA CRUZADA

JOSÉ FILHO DA COSTA CASTRO 11 January 2019 (has links)
[pt] A reserva girante é a parcela da reserva operativa provida por geradores sincronizados, e interligados à rede de transmissão, aptos a suprir a demanda na ocorrência de falhas de unidades de geração, erros na previsão da demanda, variações de capacidade de fontes renováveis ou qualquer outro fator inesperado. Dada sua característica estocástica, essa parcela da reserva operativa é mais adequadamente avaliada por meio de métodos capazes de representar as incertezas inerentes ao seu dimensionamento e planejamento. Por meio do risco de corte de carga é possível comparar e classificar distintas configurações do sistema elétrico, garantindo a não violação dos requisitos de confiabilidade. Sistemas com elevada penetração de fontes renováveis apresentam comportamento mais complexo devido ao aumento das incertezas envolvidas, à forte dependência de fatores energético-climáticos e às variações de capacidade destas fontes. Para avaliar as correlações temporais e representar a cronologia de ocorrência dos eventos no curto-prazo, um estimador baseado na Simulação Monte Carlo Quase Sequencial é apresentado. Nos estudos de planejamento da operação de curto-prazo o horizonte em análise é de minutos a algumas horas. Nestes casos, a ocorrência de falhas em equipamentos pode apresentar baixa probabilidade e contingências que causam corte de carga podem ser raras. Considerando a raridade destes eventos, as avaliações de risco são baseadas em técnicas de amostragem por importância. Os parâmetros de simulação são obtidos por um processo numérico adaptativo de otimização estocástica, utilizando os conceitos de Entropia Cruzada. Este trabalho apresenta uma metodologia de avaliação dos montantes de reserva girante em sistemas com participação de fontes renováveis, em uma abordagem multiárea. O risco de perda de carga é estimado considerando falhas nos sistemas de geração e transmissão, observando as restrições de transporte e os limites de intercâmbio de potência entre as diversas áreas elétricas. / [en] The spinning reserve is the portion of the operating reserve provided by generators that are synchronized and connected to the transmission network, able to supply the demand in the event of generating unit failures, load forecasting errors, capacity intermittency of renewable sources or any other unexpected factor. Given its stochastic characteristic, this portion of the operating reserve is more adequately evaluated through methods capable of modeling the uncertainties inherent in its sizing and planning. Based on the loss-of-load risk, it is possible to compare and rank different configurations of the electrical system, ensuring that reliability requirements are not violated. Systems with high penetration of renewable sources present more complex behavior due to the increased number of uncertainties involved, the strong dependence on energy-climatic factors and the capacity variations of these sources. In order to evaluate the temporal correlations and to represent the chronology of events in the short term, an estimator based on quasi-sequential Monte Carlo simulation is presented. In short-term operation planning studies, the horizon under analysis ranges from minutes to a few hours. In these cases, equipment failures may have low probability and contingencies that cause load shedding may be rare. Considering the rarity of these events, the risk assessments are based on importance sampling techniques.
The simulation parameters are obtained by an adaptive numerical process of stochastic optimization, using the concept of Cross Entropy. This thesis presents a methodology for evaluating the amounts of spinning reserve in systems with high penetration of renewable sources, in a multi-area approach. The risk of loss of load is estimated considering failures in the generation and transmission systems, observing the network restrictions and the power exchange limits between the different electric areas.
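As a toy companion to the multi-area formulation above, the sketch below checks a single sampled system state: available generation must reach the area loads through tie-lines of limited capacity, which can be posed as a maximum-flow problem, with any unserved load counted as shedding. The three-area data and tie-line limits are invented, and a full assessment would embed such a check inside the Monte Carlo / cross-entropy machinery described in the thesis.

```python
# Multi-area adequacy check for one sampled state via maximum flow.
# Invented 3-area system; a sketch of the transport-model idea only.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_flow

# Areas 0..2; available generation and load (MW) for one sampled state.
gen  = np.array([950, 400, 200])
load = np.array([600, 500, 450])
tie  = {(0, 1): 200, (1, 0): 200, (1, 2): 150, (2, 1): 150, (0, 2): 100, (2, 0): 100}

n = len(gen)
src, snk = n, n + 1                      # super-source feeds generation, super-sink draws load
size = n + 2
capm = np.zeros((size, size), dtype=np.int32)
for a in range(n):
    capm[src, a] = gen[a]                # generation injected into each area
    capm[a, snk] = load[a]               # load drawn from each area
for (a, b), c in tie.items():
    capm[a, b] = c                       # tie-line transfer limits

res = maximum_flow(csr_matrix(capm), src, snk)
shed = load.sum() - res.flow_value       # unmet load for this state (MW)
print("total load %d MW, supplied %d MW, load shedding %d MW"
      % (load.sum(), res.flow_value, shed))
```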
36

Utilizando algoritmo de cross-entropy para a modelagem de imagens de núcleos ativos de galáxias obtidas com o VLBA / Using the cross-entropy algorithm to model images of active galactic nuclei obtained with the VLBA

Perianhes, Roberto Vitoriano 09 August 2017 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The images obtained by interferometers such as the VLBA (Very Long Baseline Array) and VLBI (Very Long Baseline Interferometry) remain the direct evidence of relativistic jets and outbursts associated with supermassive black holes in active galactic nuclei (AGN). The study of these images is critical for exploiting the information in these observations, since they are one of the main ingredients for synthesis codes of extragalactic objects. Both synthetic and observed images are used in this thesis. The VLBA images show 2-dimensional observations generated by complex 3-dimensional astrophysical processes. In this sense, one of the main difficulties of the models is the definition of the parameters of the functions and equations that reproduce, macroscopically and dynamically, the physical formation events of these objects, so that images can be studied reliably and on a large scale. One of the goals of this thesis is to elaborate a generic description of the observations, assuming that these objects originate from similar astrophysical processes, given certain parameters of the formation events. The definition of parameters that reproduce the observations is key to generalizing the formation of sources and extragalactic jets. Most observational articles focus on a few objects or even a single one. The purpose of this project is to implement an innovative, more robust and efficient method for modeling and reproducing various objects, such as the sources of the MOJAVE Project, which monitors several quasars simultaneously and offers a diverse library for creating models (Quasars and Blazars: OVV and BL Lacertae). In this thesis, a dynamic way to study these objects was implemented. This thesis presents the adaptation of the Cross-Entropy algorithm for calibrating the parameters of astrophysical events so that they reproduce the actual events seen in the VLBA observations. The development of the adaptation structure of the code includes the possibility of extension to any image, assuming that the images are given as intensities (Jy/beam) distributed over Right Ascension (RA) and Declination (DEC) maps. The code is validated by searching for self-convergence to synthetic models with the same structure, i.e. realistic simulations of component ejection, on milliarcsecond scales, similar to the observations of the MOJAVE project at 15.3 GHz.
With the use of the parameters semi-major axis, position angle, eccentricity and intensity, applied individually to each observed component, it was possible to calculate the structure of the sources and the velocities of the jets, as well as the conversion into flux density to obtain light curves. From the light curves, the brightness temperature, the Doppler factor, the Lorentz factor and the observation angle of the extragalactic objects can be estimated with precision. The objects OJ 287, 4C +15.05, 3C 279 and 4C +29.45 are studied in this thesis because they have different and complex morphologies, allowing a more complete study. / As imagens obtidas por interferômetros, tais como VLBA (Very Long Baseline Array) e VLBI (Very Long Baseline Interferometry), são evidências diretas de jatos relativísticos associados a buracos negros supermassivos em núcleos ativos de galáxias (AGN). O estudo dessas imagens é fundamental para o aproveitamento das informações dessas observações, já que é um dos principais ingredientes para os códigos de síntese de objetos extragalácticos. Utiliza-se nesta tese, tanto imagens sintéticas quanto observadas. As imagens de VLBA mostram observações em 2 dimensões de processos astrofísicos complexos ocorrendo em 3 dimensões. Nesse sentido, uma das principais dificuldades dos modelos é a definição dos parâmetros das funções e equações que reproduzam de forma macroscópica e dinâmica os eventos físicos de formação desses objetos, para que as imagens sejam estudadas de forma confiável e em grande escala. Um dos objetivos desta tese é elaborar uma forma genérica de observações, supondo que a formação desses objetos é originada por processos astrofísicos similares, com a informação de determinados parâmetros da formação dos eventos. A definição de parâmetros que reproduzam as observações são elementos chave para a generalização da formação de componentes em jatos extragalácticos. Grande parte dos artigos de observação são voltados para poucos ou únicos objetos. Foi realizada nesta tese a implementação de um método inovador, robusto e eficiente para a modelagem e reprodução de vários objetos, como por exemplo nas fontes do Projeto MOJAVE, que monitora diversos quasares simultaneamente, oferecendo uma biblioteca diversificada para a criação de modelos (Quasares e Blazares: OVV e BL Lacertae). Com essas fontes implementou-se uma forma dinâmica para o estudo desses objetos. Apresenta-se, nesta tese, a adaptação do algoritmo de Cross-Entropy para a calibração dos parâmetros dos eventos astrofísicos que sintetizem os eventos reais das observações em VLBA. O desenvolvimento da estrutura de adaptação do código incluiu a possibilidade de extensão para qualquer imagem, supondo que as mesmas estão dispostas em intensidades (Jy/beam) distribuídas em mapas de Ascensão Reta (AR) e Declinação (DEC). A validação do código foi feita buscando a auto convergência para modelos sintéticos com as mesmas estruturas, ou seja, de simulações realísticas de ejeção de componentes, em milissegundos de arco, similares às observações do projeto MOJAVE, em 15,3 GHz. Com a utilização dos parâmetros semieixo maior, ângulo de posição, excentricidade e intensidade aplicados individualmente a cada componente observada, é possível calcular a estrutura das fontes, as velocidades dos jatos, bem como a conversão em densidade de fluxo para obtenção de curvas de luz.
Através da curva de luz estimou-se com precisão a temperatura de brilhância, o fator Doppler, o fator de Lorentz e o ângulo de observação dos objetos extragalácticos. Os objetos OJ 287, 4C +15.05, 3C 279 e 4C +29.45 são estudados nesta tese pois têm morfologias diferentes e complexas para um estudo mais completo.
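The sketch below gives a schematic, single-component version of the calibration idea: an elliptical Gaussian "component" is fitted to a 2D intensity map by the cross-entropy method, sampling parameter sets from a Gaussian, keeping the elite fits and re-estimating the sampling distribution. The synthetic 64x64 map, the noise level and all settings are invented; the thesis applies this kind of calibration to multi-component models of real and synthetic VLBA maps, which is not reproduced here.

```python
# Cross-entropy fit of one elliptical Gaussian component to a 2D intensity map.
# Synthetic data and all settings are invented; a sketch, not the thesis code.
import numpy as np

rng = np.random.default_rng(3)
N = 64
y, x = np.mgrid[0:N, 0:N]                     # stand-ins for DEC/RA pixel grids

def component(params):
    """Elliptical Gaussian: x0, y0, semi-major axis a, axial ratio r, PA, peak I."""
    x0, y0, a, r, pa, amp = params
    dx, dy = x - x0, y - y0
    u =  dx * np.cos(pa) + dy * np.sin(pa)    # rotate into the component frame
    v = -dx * np.sin(pa) + dy * np.cos(pa)
    return amp * np.exp(-0.5 * ((u / a) ** 2 + (v / (a * r)) ** 2))

true = np.array([30.0, 36.0, 6.0, 0.5, 0.6, 1.0])
image = component(true) + rng.normal(0, 0.02, (N, N))   # "observed" noisy map

def score(params):
    return np.mean((component(params) - image) ** 2)    # pixel-wise misfit

# CE loop: sample parameters, keep the best (elite) fraction, refit mean/std.
mu  = np.array([N / 2, N / 2, 5.0, 0.7, 0.0, 0.5])
sig = np.array([15.0, 15.0, 3.0, 0.3, 1.0, 0.5])
n_samp, n_elite = 300, 30
for it in range(40):
    samples = mu + sig * rng.standard_normal((n_samp, 6))
    samples[:, 2] = np.abs(samples[:, 2]) + 1e-3              # semi-major axis > 0
    samples[:, 3] = np.clip(np.abs(samples[:, 3]), 0.05, 1.0)  # axial ratio in (0, 1]
    losses = np.array([score(s) for s in samples])
    elite = samples[np.argsort(losses)[:n_elite]]
    mu, sig = elite.mean(axis=0), elite.std(axis=0) + 1e-6

print("recovered parameters:", np.round(mu, 2))
print("true parameters:     ", true)
```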
37

Approche probabiliste de la tolérance aux dommages / Application au domaine aéronautique

Mattrand, Cécile 30 November 2011 (has links)
En raison de la gravité des accidents liés au phénomène de fatigue-propagation de fissure, les préoccupations de l’industrie aéronautique à assurer l’intégrité des structures soumises à ce mode de sollicitation revêtent un caractère tout à fait essentiel. Les travaux de thèse présentés dans ce mémoire visent à appréhender le problème de sûreté des structures aéronautiques dimensionnées en tolérance aux dommages sous l’angle probabiliste. La formulation et l’application d’une approche fiabiliste menant à des processus de conception et de maintenance fiables des structures aéronautiques en contexte industriel nécessitent cependant de lever un nombre important de verrous scientifiques. Les efforts ont été concentrés au niveau de trois domaines dans ce travail. Une méthodologie a tout d’abord été développée afin de capturer et de retranscrire fidèlement l’aléa du chargement de fatigue à partir de séquences de chargement observées sur des structures en service et monitorées, ce qui constitue une réelle avancée scientifique. Un deuxième axe de recherche a porté sur la sélection d’un modèle mécanique apte à prédire l’évolution de fissure sous chargement d’amplitude variable à coût de calcul modéré. Les travaux se sont ainsi appuyés sur le modèle PREFFAS pour lequel des évolutions ont également été proposées afin de lever l’hypothèse restrictive de périodicité de chargement. Enfin, les analyses probabilistes, produits du couplage entre le modèle mécanique et les modélisations stochastiques préalablement établies, ont entre autres permis de conclure que le chargement est un paramètre qui influe notablement sur la dispersion du phénomène de propagation de fissure. Le dernier objectif de ces travaux a ainsi porté sur la formulation et la résolution du problème de fiabilité en tolérance aux dommages à partir des modèles stochastiques retenus pour le chargement, constituant un réel enjeu scientifique. Une méthode de résolution spécifique du problème de fiabilité a été mise en place afin de répondre aux objectifs fixés et appliquée à des structures jugées représentatives de problèmes réels. / Ensuring the integrity of structural components subjected to fatigue loads remains an increasing concern in the aerospace industry, due to the detrimental accidents that might result from fatigue and fracture processes. The research work presented here aims at addressing the question of aircraft safety in the framework of probabilistic fracture mechanics. It should be noted that a large number of scientific challenges need to be solved before comprehensive probabilistic analyses can be performed and the mechanical reliability of components or structures assessed in an industrial context. The contributions made during the PhD are reported here, with efforts devoted to each step of the global probabilistic methodology. The modeling of random fatigue load sequences based on real measured loads, which represents a key and original step in stochastic damage tolerance, is addressed first. The second task consists in choosing a model able to predict crack growth under variable amplitude loads, i.e. one which accounts for load interactions and retardation/acceleration effects, at a moderate computational cost. The PREFFAS crack closure model is selected for this purpose, and modifications are introduced in order to circumvent its restrictive assumption of stationary load sequences. Finally, probabilistic analyses resulting from the coupling between the PREFFAS model and the stochastic modeling are carried out.
In particular, the following conclusion can be drawn: scatter in fatigue loads considerably affects the dispersion of the crack growth phenomenon and must therefore be taken into account in reliability analyses. The last part of this work focuses on formulating and solving the reliability problem in damage tolerance according to the selected stochastic loading models, which is a scientific challenge in itself. A dedicated solution method is established to meet the required objectives and applied to structures considered representative of real problems.
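To make the probabilistic damage-tolerance idea concrete, here is a deliberately simplified Monte Carlo sketch: crack growth is propagated with a basic Paris law under random load amplitudes and random initial crack sizes, and the probability that the crack exceeds a critical size within a given number of cycles is estimated. The Paris law stands in for the PREFFAS retardation model used in the thesis, and every numerical value (material constants, distributions, critical size) is invented for illustration.

```python
# Simplified probabilistic crack-growth sketch: Paris law + Monte Carlo.
# Stand-in for the thesis methodology (PREFFAS + measured load sequences);
# all values below are invented.
import numpy as np

rng = np.random.default_rng(7)

C, m = 3.0e-10, 3.0          # Paris constants, da/dN = C * (dK)^m  (a in m, dK in MPa*sqrt(m))
Y = 1.12                     # geometry factor, assumed constant
a_crit = 0.025               # critical crack size (m)
blocks, cycles_per_block = 200, 100

def grow_one(a0, stress_ranges):
    """Propagate a crack of initial size a0 through a sequence of load blocks."""
    a = a0
    for ds in stress_ranges:                       # one representative range per block
        dK = Y * ds * np.sqrt(np.pi * a)           # stress intensity factor range
        a += cycles_per_block * C * dK**m          # block-wise Paris integration
        if a >= a_crit:
            return True                            # failure within the horizon
    return False

n = 5000
a0 = rng.lognormal(mean=np.log(0.002), sigma=0.25, size=n)        # initial flaw size (m)
ds = rng.gumbel(loc=80.0, scale=12.0, size=(n, blocks)).clip(1.0)  # random stress ranges (MPa)

fails = sum(grow_one(a0[i], ds[i]) for i in range(n))
print("estimated failure probability over the horizon: %.3e" % (fails / n))
```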
38

Mining of Textual Data from the Web for Speech Recognition

Kubalík, Jakub January 2010 (has links)
The primary goal of this project was to study language modeling for speech recognition and techniques for obtaining textual data from the Web. The text introduces the basic techniques of speech recognition and describes in more detail language models based on statistical methods. In particular, the work deals with criteria for evaluating the quality of language models and of speech recognition systems. The text further describes models and techniques of data mining, especially information retrieval. Problems associated with obtaining data from the web are then presented, and the Google search engine is presented in this context. Part of the project was the design and implementation of a system for obtaining text from the web, which is described in appropriate detail. The main goal of the work, however, was to verify whether data obtained from the Web can bring any benefit to speech recognition. The described techniques therefore seek the optimal way to use data obtained from the Web to improve both the example language models and models deployed in real recognition systems.
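A small sketch of the use case described above: combining a baseline bigram language model with one estimated from web-harvested text via linear interpolation, and comparing perplexity on held-out data. The tiny corpora and the interpolation weights are invented; real LVCSR language models rely on far larger corpora and dedicated toolkits (e.g. SRILM or KenLM), which are not shown here.

```python
# Toy bigram LM interpolation: baseline corpus + web-harvested text.
# Corpora and weights are invented for illustration only.
import math
from collections import Counter

def bigram_model(sentences, alpha=0.1):
    """Add-alpha smoothed bigram probabilities P(w2 | w1)."""
    uni, bi, vocab = Counter(), Counter(), set()
    for s in sentences:
        toks = ["<s>"] + s.split() + ["</s>"]
        vocab.update(toks)
        uni.update(toks[:-1])
        bi.update(zip(toks[:-1], toks[1:]))
    V = len(vocab)
    return lambda w1, w2: (bi[(w1, w2)] + alpha) / (uni[w1] + alpha * V)

def perplexity(models, weights, sentences):
    logp, n = 0.0, 0
    for s in sentences:
        toks = ["<s>"] + s.split() + ["</s>"]
        for w1, w2 in zip(toks[:-1], toks[1:]):
            p = sum(lam * m(w1, w2) for lam, m in zip(weights, models))
            logp += math.log(p)
            n += 1
    return math.exp(-logp / n)

in_domain = ["turn the radio on", "turn the lights off", "play the radio"]
web_text  = ["turn on the lights please", "please play some music",
             "turn the music off", "the radio is on"]
held_out  = ["turn the music on", "play the lights"]

base = bigram_model(in_domain)
web  = bigram_model(web_text)
print("baseline PPL     : %.1f" % perplexity([base], [1.0], held_out))
print("interpolated PPL : %.1f" % perplexity([base, web], [0.6, 0.4], held_out))
```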
