11

Acúmulo de mutações em linhagens assexuadas: uma abordagem via experimentos computacionais / Accumulation of mutations in asexual lineages: a study using computer experiments

Colato, Alexandre 18 November 2004
Studies of evolution have been carried out since the publication of Charles Darwin's work on the origin of species by natural selection in 1859. During the twentieth century, major advances were achieved through mathematical and computational modeling: with the exception of a few species whose evolution can be followed in vivo, the time required for data acquisition is enormous, so the computational approach has become an essential tool. This thesis presents the basic concepts needed to understand the evolutionary process of asexual populations, such as mutation, selection, and adaptive landscapes, together with numerical results on their evolution through the process known as Muller's ratchet, which is based on the stochastic loss of the fittest class of individuals in the population through mutations acquired along their lineages. Several dynamics were studied. For populations subjected to serial bottleneck passages, we observed that the ratchet does not halt even for high epistasis values, whereas for populations of variable size (exponential growth and decline) the ratchet halts during the growth period until the population reaches the limit allowed by the environment, after which it behaves as in the traditional infinite-sites model. Finally, we present results for populations that interact in a predator-prey dynamic, where the behavior of the ratchet can be understood in terms of the population dynamics described above. A second problem addressed in this thesis is the use of topological measures of genealogical trees to detect the presence of selection in a population's evolution. Although the branch lengths of the trees change relative to the neutral case, we find that the statistical tests employed are not sufficient to infer the effect of selection in real populations.
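For readers unfamiliar with the process, a minimal Wright-Fisher-style sketch of Muller's ratchet follows; the population size, mutation rate, and selection coefficient are illustrative assumptions, and the bottleneck, variable-size, and predator-prey variants studied in the thesis are omitted.

```python
import numpy as np

rng = np.random.default_rng(42)

def muller_ratchet(pop_size=1000, mutation_rate=0.1, s=0.01,
                   generations=5000):
    """Minimal Wright-Fisher simulation of Muller's ratchet.

    Each individual carries a count of deleterious mutations;
    fitness is multiplicative, (1 - s)^k. Returns the minimum
    mutation load over time; each increase is one ratchet click.
    """
    loads = np.zeros(pop_size, dtype=int)   # mutations per individual
    min_load = []
    for _ in range(generations):
        # selection: sample parents proportionally to fitness
        fitness = (1.0 - s) ** loads
        parents = rng.choice(pop_size, size=pop_size,
                             p=fitness / fitness.sum())
        loads = loads[parents]
        # mutation: Poisson-distributed new deleterious mutations
        loads += rng.poisson(mutation_rate, size=pop_size)
        min_load.append(loads.min())
    return min_load

clicks = muller_ratchet()
print("ratchet clicks:", clicks[-1] - clicks[0])
```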
12

Analyse de sensibilité en fiabilité des structures / Reliability sensitivity analysis

Lemaitre, Paul 18 March 2014
This thesis deals with sensitivity analysis in the context of structural reliability studies. The general framework is a deterministic numerical model used to reproduce a complex physical phenomenon. The aim of a reliability study is to estimate the failure probability of the system from the numerical model and the uncertainties inherent in its input variables. In such studies, it is useful to rank the influence of the inputs and determine which ones drive the output most; this step is called sensitivity analysis. The topic has been the subject of much scientific work, but in application domains other than reliability. The goal of this thesis is to test the relevance of existing sensitivity analysis methods and, where appropriate, to propose more efficient original ones. More precisely, a literature review on sensitivity analysis and on the estimation of small failure probabilities is first presented; it highlights the need to develop techniques adapted to the reliability context. Two methods for ranking sources of uncertainty are then explored. The first builds binary classifiers (random forests). The second measures, at each step of a subset simulation method, the distance between the original distribution of each input and its distribution conditional on the subset reached. A more general and original methodology, based on quantifying the impact of perturbations of the input distributions on the failure probability, is then explored. The proposed methods are finally applied to the CWNR industrial case, which motivates this thesis.
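A minimal sketch of the kind of computation involved: crude Monte Carlo estimation of a small failure probability, followed by importance reweighting to gauge how perturbing one input's distribution shifts that probability. The limit-state function and distributions below are illustrative assumptions, not the CWNR model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200_000

# Illustrative limit-state function: failure when g(x) < 0.
def g(x1, x2):
    return 4.5 - x1 - x2

x1 = rng.normal(0.0, 1.0, n)
x2 = rng.normal(0.0, 1.0, n)
fail = g(x1, x2) < 0.0
print(f"P(failure) ~ {fail.mean():.2e}")

# Perturb the mean of x1 (0 -> 0.5) and re-estimate the failure
# probability by reweighting, without re-running the model:
# E_f1[1_fail] = E_f0[1_fail * f1(x1)/f0(x1)].
f0 = stats.norm(0.0, 1.0)   # original density of x1
f1 = stats.norm(0.5, 1.0)   # perturbed density of x1
w = f1.pdf(x1) / f0.pdf(x1)
print(f"P(failure | perturbed x1) ~ {np.mean(fail * w):.2e}")
```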
13

Robust design using sequential computer experiments

Gupta, Abhishek 30 September 2004
Modern engineering design tends to use computer simulations such as finite element analysis (FEA) to replace physical experiments when evaluating a quality response, e.g., the stress level in a phone packaging process. Computer models have certain advantages over physical experiments: they are cost-effective, they make it easy to try out different design alternatives, and they can have a greater impact on product design. However, due to the complexity of FEA codes, it can be computationally expensive to evaluate the quality response function over a large number of combinations of design and environmental factors. Traditional experimental design and response surface methodology, which were developed for physical experiments in the presence of random errors, are not very effective for deterministic FEA simulation outputs. In this thesis, we utilize a spatial statistical method (a kriging model) for analyzing deterministic computer simulation-based experiments. We then devise a sequential strategy that allows us to explore the whole response surface efficiently. The overall number of computer experiments is markedly reduced compared with traditional response surface methodology. The proposed methodology is illustrated using an electronic packaging example.
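A minimal sketch of the kriging-plus-sequential-design idea, using scikit-learn's Gaussian process regressor and a maximum-predictive-variance rule to choose each new run; the simulator, kernel, and selection criterion are illustrative assumptions rather than the thesis's exact method.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

# Stand-in for an expensive deterministic FEA response.
def simulator(x):
    return np.sin(3 * x) + 0.5 * x**2

# Small initial design; then add runs sequentially where the
# kriging predictive uncertainty is largest.
X = rng.uniform(0, 2, size=(4, 1))
y = simulator(X).ravel()
candidates = np.linspace(0, 2, 201).reshape(-1, 1)

for _ in range(6):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                                  normalize_y=True)
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(std)].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, simulator(x_next).ravel())

print(f"design size: {len(X)} runs")
```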
15

Computer and physical experiments: design, modeling, and multivariate interpolation

Kang, Lulu 28 June 2010
Many problems in science and engineering are solved through experimental investigation. Because experiments can be costly and time-consuming, it is important to design them efficiently so that maximum information about the problem is obtained, and to devise efficient statistical methods to analyze the data so that none of that information is lost. This thesis contributes to several aspects of the design and analysis of experiments. It consists of two parts: the first focuses on physical experiments, the second on computer experiments. The part on physical experiments contains three works. The first develops Bayesian experimental designs for robustness studies, which can be applied in industry for quality improvement. Existing methods rely on modifying the effect hierarchy principle to give more importance to control-by-noise interactions, which can violate the true effect order of a system, because that order should not depend on the objective of an experiment. The proposed Bayesian approach uses a prior distribution to capture the effect hierarchy property and then an optimal design criterion to satisfy the robustness objectives. The second work extends this Bayesian approach to blocked experimental designs. The third proposes a new modeling and design strategy for mixture-of-mixtures experiments and applies it to the optimization of Pringles potato crisps; the proposed model substantially reduces the number of parameters in the existing multiple-Scheffé model and thus helps engineers design much smaller experiments. The part on computer experiments introduces two new methods for analyzing the data. The first is an interpolation method called regression-based inverse distance weighting (RIDW), which is shown to overcome some of the computational and numerical problems associated with kriging, particularly for large data sets and/or high-dimensional problems. In the second work, we introduce a general nonparametric regression method, called kernel sum regression, and make an interesting discovery: a particular form of this regression method becomes an interpolation method, which can be used to analyze computer experiments with deterministic outputs.
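For orientation, here is plain Shepard inverse distance weighting, the baseline on which RIDW builds; the thesis's regression-based variant is not reproduced here, and the test function is an illustrative assumption.

```python
import numpy as np

def idw_predict(X_train, y_train, X_new, power=2.0):
    """Plain Shepard inverse-distance-weighting interpolation.

    Weights each training response by 1 / distance^power; an exact
    match returns the stored response (interpolation property).
    """
    preds = np.empty(len(X_new))
    for i, x in enumerate(X_new):
        d = np.linalg.norm(X_train - x, axis=1)
        if np.any(d == 0.0):                 # exact hit on a design point
            preds[i] = y_train[d == 0.0][0]
            continue
        w = 1.0 / d**power
        preds[i] = np.dot(w, y_train) / w.sum()
    return preds

# Toy usage on a 2-D deterministic response.
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(30, 2))
y = np.sin(X[:, 0] * 6) + X[:, 1] ** 2
print(idw_predict(X, y, np.array([[0.5, 0.5]])))
```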
16

Bridging the Gap Between Space-Filling and Optimal Designs

January 2013
This dissertation explores methodologies for combining two popular design paradigms in the field of computer experiments. Space-filling designs are commonly used to ensure good coverage of the design space, but they may not have good properties for model fitting. Optimal designs traditionally perform very well in terms of model fitting, particularly when a polynomial model is intended, but can produce problematic replication when factors turn out to be insignificant. Bringing these two design types together retains the positive properties of each while mitigating their potential weaknesses. Hybrid space-filling designs, generated as Latin hypercubes augmented with I-optimal points, are compared to designs of each contributing component. A second design type, the bridge design, integrates the disparate design types further: a Latin hypercube undergoes coordinate exchange to reach constrained D-optimality, ensuring zero replication of factor levels in any one-dimensional projection. Lastly, bridge designs were augmented with I-optimal points with two goals in mind: augmentation with candidate points generated under the same underlying analysis model reduces the prediction variance without greatly compromising the space-filling property of the design, while augmentation with candidate points generated under a different underlying analysis model can greatly reduce the impact of model misspecification during the design phase. Each of these composite designs is compared to pure space-filling and optimal designs. They typically outperform pure space-filling designs in terms of prediction variance and alphabetic efficiency while remaining comparable to pure optimal designs at small sample sizes, which makes them excellent candidates for initial experimentation.
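A minimal sketch of the hybrid idea: start from a Latin hypercube (via scipy.stats.qmc) and greedily augment it from a candidate grid using an optimality criterion. A D-criterion stand-in is used here for simplicity; the dissertation's augmentation is I-optimal, and the model matrix is an illustrative assumption.

```python
import numpy as np
from scipy.stats import qmc

# Space-filling component: a Latin hypercube in d dimensions, with
# each factor stratified into n equal bins and one point per bin.
sampler = qmc.LatinHypercube(d=2, seed=3)
design = sampler.random(n=10)

# Assumed analysis model: intercept, main effects, interaction.
def model_matrix(pts):
    x1, x2 = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones(len(pts)), x1, x2, x1 * x2])

candidates = np.array([[a, b] for a in np.linspace(0, 1, 11)
                              for b in np.linspace(0, 1, 11)])

# Greedy augmentation: add the candidate that maximizes |X'X|.
for _ in range(4):
    best, best_score = None, -np.inf
    for c in candidates:
        M = model_matrix(np.vstack([design, c[None]]))
        score = np.linalg.det(M.T @ M)
        if score > best_score:
            best, best_score = c, score
    design = np.vstack([design, best])

print(design.shape)   # 14 runs: 10 space-filling + 4 augmented
```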
18

Evaluation of GUI testing techniques for system crashing: from real to model-based controlled experiments

BERTOLINI, Cristiano 31 January 2010
Funded by the Conselho Nacional de Desenvolvimento Científico e Tecnológico. / Mobile applications are becoming increasingly complex, and so is testing them. Graphical user interface (GUI) testing is a current trend and is usually performed by simulating user interactions. Several techniques have been proposed, for which efficiency (execution cost) and effectiveness (ability to find bugs) are the aspects most valued by industry. However, more systematic evaluations are needed to identify which techniques improve efficiency and effectiveness. This thesis presents an experimental evaluation of two GUI testing techniques, called DH and BxT, used to test mobile applications with a history of real errors. These techniques run for a long period (e.g., a 40-hour timeout), trying to identify critical situations that drive the system into an unexpected state from which it cannot continue normal execution; this is called a crash state. The DH technique already existed and is used by the software industry; we propose another, called BxT. In a preliminary evaluation, we compared the effectiveness and efficiency of DH and BxT through a descriptive analysis and showed that the systematic exploration performed by BxT is a more promising approach for detecting failures in mobile applications. Based on these preliminary results, we planned and ran a controlled experiment to obtain statistical evidence on their efficiency and effectiveness. Because both techniques are bounded by a 40-hour timeout, the controlled experiment yields partial results, so we carried out a deeper investigation using survival analysis, which estimates the probability of crashing an application with either DH or BxT. Since controlled experiments are costly, we propose a strategy based on computer experiments using the PRISM language and its model checker to compare GUI testing techniques in general, and DH and BxT in particular. However, the results for DH and BxT have a limitation: the precision of the model is not statistically proven. We therefore propose a strategy that uses the earlier survival-analysis results to calibrate our models. Finally, we use this strategy, with the calibrated models, to evaluate a new GUI testing technique called Hybrid-BxT (or simply H-BxT), a combination of DH and BxT.
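The survival analysis mentioned above estimates the probability that an application has crashed by a given test time, with runs that hit the 40-hour timeout treated as censored. A minimal Kaplan-Meier sketch follows; the crash times are hypothetical, not data from the thesis.

```python
import numpy as np

def kaplan_meier(durations, observed):
    """Kaplan-Meier survival estimate.

    durations: time (hours) until crash or until the 40 h timeout.
    observed:  1 if a crash was seen, 0 if censored at the timeout.
    Returns event times and S(t), the probability of not crashing
    by time t; 1 - S(t) is the crash probability.
    """
    durations = np.asarray(durations, float)
    observed = np.asarray(observed, int)
    times = np.sort(np.unique(durations[observed == 1]))
    surv, s = [], 1.0
    for t in times:
        at_risk = np.sum(durations >= t)
        crashes = np.sum((durations == t) & (observed == 1))
        s *= 1.0 - crashes / at_risk
        surv.append(s)
    return times, np.array(surv)

# Hypothetical crash times (hours) for one technique; 40 = censored.
dur = [2.5, 7.1, 40, 12.3, 40, 5.8, 33.0, 40]
obs = [1,   1,   0,  1,    0,  1,   1,    0]
t, s = kaplan_meier(dur, obs)
print("P(crash by t):", dict(zip(t, np.round(1 - s, 3))))
```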
19

Physical-Statistical Modeling and Optimization of Cardiovascular Systems

Du, Dongping 01 January 2002
Heart disease remains the leading cause of death in the U.S. and in the world. Improving cardiac care requires earlier diagnosis of heart disease and optimal intervention strategies, which in turn call for a better understanding of the pathology of heart disease. Computer simulation and modeling have been widely applied to overcome many practical and ethical limitations of in-vivo, ex-vivo, and whole-animal experiments. Computer experiments give physiologists and cardiologists an indispensable tool to characterize, model, and analyze cardiac function in both the healthy and the diseased heart. Most importantly, simulation modeling enables analysis of the causal relationships of cardiac dysfunction from ion channels up to the whole heart, which physical experiments alone cannot achieve. Growing evidence shows that aberrant glycosylation has a dramatic influence on cardiac and neuronal function. Variable but modest reductions in glycosylation among congenital disorders of glycosylation (CDG) subtypes have multi-system effects leading to a high infant mortality rate. In addition, CDG in young patients tends to cause atrial fibrillation (AF), the most common sustained cardiac arrhythmia, whose mortality rate has been increasing over the past two decades. Given the growing healthcare burden of AF, studying AF mechanisms and developing optimal ablation strategies are urgently needed. Very little is known about how glycosylation modulates cardiac electrical signaling, and it is a significant challenge to experimentally connect changes at one organizational level (e.g., electrical conduction in cardiac tissue) to measured changes at another (e.g., ion channels). In this study, we integrate data from in-vitro experiments with in-silico models to simulate the effects of reduced glycosylation on the gating kinetics of cardiac ion channels (hERG, Na+, and K+ channels) and to predict glycosylation modulation dynamics in individual cardiac cells and tissues. The complex gating kinetics of Na+ channels are modeled with a 9-state Markov model that has voltage-dependent transition rates of exponential form. Calibrating this model is challenging because it is nonlinear, non-convex, ill-posed, and has a large parameter space. We developed a new metamodel-based simulation optimization approach for calibrating the model against in-vitro experimental data; the proposed algorithm is shown to learn the Markov model of the Na+ channel efficiently and can easily be adapted to many other optimization problems in computer modeling. In addition, the understanding of AF initiation and maintenance has remained sketchy at best. One salient problem is the inability to interpret intracardiac recordings, which prevents reconstruction of the rhythmic mechanisms of AF because multiple wavelets circulate, clash, and continuously change direction in the atria. We are designing computer experiments to simulate single and multiple activations on atrial tissue and the corresponding intracardiac signals. This research will create a novel computer-aided decision support tool to optimize AF ablation procedures.
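As a rough illustration of the modeling ingredient described (the thesis uses a 9-state Na+ channel model, not reproduced here), below is a toy 3-state Markov gating model with voltage-dependent transition rates of exponential form. All rate constants are illustrative assumptions.

```python
import numpy as np

# Voltage-dependent transition rates of exponential form,
# r(V) = a * exp(b * V), as in Markov models of channel gating.
def rate(a, b, V):
    return a * np.exp(b * V)

def simulate_channel(V=-20.0, dt=1e-3, steps=5000):
    """Toy 3-state Markov channel: Closed <-> Open <-> Inactivated.

    Deterministic forward-Euler integration of dp/dt = p Q, where Q
    is the generator matrix (rows sum to zero), with made-up rates.
    """
    k_co, k_oc = rate(2.0, 0.04, V), rate(0.5, -0.03, V)
    k_oi, k_io = rate(1.0, 0.02, V), rate(0.05, -0.01, V)
    Q = np.array([[-k_co,        k_co,          0.0 ],
                  [ k_oc, -(k_oc + k_oi),       k_oi],
                  [ 0.0,         k_io,         -k_io]])
    p = np.array([1.0, 0.0, 0.0])     # start fully closed
    for _ in range(steps):
        p = p + dt * (p @ Q)
    return p                           # approximate steady-state occupancy

print("P(closed), P(open), P(inactivated):", simulate_channel())
```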
20

Multidisciplinary Analysis and Design Optimization of an Efficient Supersonic Air Vehicle

Allison, Darcy L. 18 November 2013
This material is based on research sponsored by the Air Force Research Laboratory under agreement number FA8650-09-2-3938. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory or the U.S. Government. / This work develops multidisciplinary design optimization (MDO) methods to find the optimal design of a particular aircraft, an Efficient Supersonic Air Vehicle (ESAV): a long-range military bomber designed for high-speed (supersonic) flight and survivability. The design metric used to differentiate designs is minimization of take-off gross weight. The work demonstrates the usefulness of MDO tools, rather than compartmentalized design practices, in the early stages of the design process; such tools must be able to analyze, simultaneously and collectively, all physics pertinent to the aircraft of interest. Low-fidelity and higher-fidelity ESAV MDO frameworks were constructed, and the analysis codes in the higher-fidelity framework were validated by comparison with the legacy B-58 supersonic bomber. The low-fidelity framework used a computationally expensive process built on a large design-of-computer-experiments study to explore its design space, which identified an optimal ESAV with an arrow wing planform. Specific challenges not addressed by the low-fidelity framework were handled in the higher-fidelity framework; in particular, models were required to characterize the low-observable characteristics of the ESAV. For example, the embedded engines necessitated a higher-fidelity propulsion model and an engine exhaust-washed structures discipline, and low-observability requirements necessitated adding a radar cross section discipline. A relatively less costly computational process using successive NSGA-II optimization runs was employed for the higher-fidelity MDO, resulting in an optimal ESAV with a trapezoidal wing planform: the NSGA-II optimizer considered arrow wing planforms in early generations but later discarded them in favor of the trapezoidal planform. Sensitivities around this optimal design were computed using the well-known ANOVA method to characterize the surrounding design space. The two frameworks could not be combined in a mixed-fidelity optimization process because the low-fidelity framework was not faithful enough to the higher-fidelity analysis results: the low-fidelity optimum was infeasible according to the higher-fidelity framework and vice versa, so the low-fidelity framework could not guide the higher-fidelity framework to the eventual trapezoidal optimum.
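NSGA-II, used for the higher-fidelity optimization above, ranks candidate designs by non-dominated sorting of their objective vectors. The sketch below shows that core ranking step on hypothetical two-objective data; it is not the thesis's actual problem setup, which minimizes take-off gross weight under discipline constraints.

```python
import numpy as np

def pareto_front(F):
    """Return indices of non-dominated points (all objectives minimized).

    Non-dominated sorting of candidates by their objective vectors is
    the core ranking step inside NSGA-II.
    """
    n = len(F)
    nondom = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            # j dominates i: no worse in every objective, better in one
            if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                nondom[i] = False
                break
    return np.where(nondom)[0]

# Hypothetical objectives for candidate aircraft designs:
# [take-off gross weight, radar cross section], both minimized.
rng = np.random.default_rng(4)
F = rng.uniform(0, 1, size=(50, 2))
print("Pareto-optimal designs:", pareto_front(F))
```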
