1181 | Estimativas dos momentos estatísticos para o problema de flexão estocástica de viga em uma fundação Pasternak / Estimates of the statistical moments for the stochastic beam bending problem on a Pasternak foundation
Santos, Marcelo Borges dos, 20 March 2015
This dissertation addresses the stochastic bending problem of an Euler-Bernoulli beam resting on a Pasternak-type foundation, solved with a computational method based on Monte Carlo simulation. Uncertainty is present in the elastic coefficients of the beam and of the foundation. First, the mathematical formulation of the problem is established from a physical model of the beam displacement that accounts for the influence of the foundation on the response; to this end, the most common foundation models, the Winkler and Pasternak models, are reviewed. Existence and uniqueness of the solution of the abstract variational problem derived from the strong formulation are then proved. To obtain the solution, the following topics are developed: representation of uncertainty, the Galerkin method, the Neumann series, and lower and upper bounds on the response. Finally, the performance of the lower and upper bounds is assessed against direct Monte Carlo simulation in several cases in which the uncertainty lies in the various coefficients of the bending equation in its variational form. The methodology proved efficient, both in the convergence of the response and in computational cost.
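To make the approach concrete, here is a minimal, hypothetical sketch (not the dissertation's code) of direct Monte Carlo estimation of the first two statistical moments of the deflection of a simply supported Euler-Bernoulli beam on a Pasternak foundation, EI w'''' - Gp w'' + k w = q, with uniform uncertainty on EI, k and Gp. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data, for illustration only: a 1 m simply supported beam
# under uniform load q, with ~10% uniform uncertainty on the beam
# stiffness EI, the Winkler modulus k and the Pasternak shear modulus Gp.
L, n, q = 1.0, 100, 1.0e3          # length [m], intervals, load [N/m]
h = L / n

def deflection(EI, k, Gp):
    """Finite-difference solution of EI w'''' - Gp w'' + k w = q,
    with w = w'' = 0 at both ends (simply supported)."""
    m = n - 1                       # interior unknowns w_1 .. w_{n-1}
    A = np.zeros((m, m))
    D4 = np.array([1.0, -4.0, 6.0, -4.0, 1.0]) * EI / h**4
    D2 = np.array([1.0, -2.0, 1.0]) * (-Gp) / h**2
    for i in range(m):
        for j, c in zip(range(i - 2, i + 3), D4):
            if 0 <= j < m:
                A[i, j] += c
        for j, c in zip(range(i - 1, i + 2), D2):
            if 0 <= j < m:
                A[i, j] += c
        A[i, i] += k
    # w'' = 0 at the ends gives ghost values w_{-1} = -w_1, w_{n+1} = -w_{n-1}
    A[0, 0] -= EI / h**4
    A[m - 1, m - 1] -= EI / h**4
    return np.linalg.solve(A, np.full(m, q))

# Direct Monte Carlo estimate of the first two moments of the deflection
samples = []
for _ in range(1000):
    EI = 1.0e4 * rng.uniform(0.9, 1.1)   # beam bending stiffness (assumed)
    k  = 1.0e5 * rng.uniform(0.9, 1.1)   # Winkler (spring) modulus (assumed)
    Gp = 1.0e3 * rng.uniform(0.9, 1.1)   # Pasternak (shear) modulus (assumed)
    samples.append(deflection(EI, k, Gp))

w = np.array(samples)
mid = w.shape[1] // 2
print("mean midspan deflection:", w[:, mid].mean())
print("std  midspan deflection:", w[:, mid].std())
```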
1182 | Modelagem computacional de tomografia com feixe de prótons / Computational modeling of proton tomography
Olga Yevseyeva, 16 February 2009
Fundação Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro / This thesis presents a preliminary study, carried out through computational modeling, intended to define the initial experimental program for the first Brazilian proton computed tomography (pCT) setup. Proton-beam therapy is a highly precise form of cancer treatment. Treatment planning is currently based on X-ray computed tomography (CT); alternatively, proton computed tomography can be used. Some important questions, such as the scale effect and the so-called calibration curve (the source of primary data for proton therapy planning), were studied in this work. The passage of protons with initial energies of 19.68 MeV, 23 MeV, 25 MeV, 49.10 MeV, and 230 MeV through layers of various materials (water, aluminum, polyethylene, gold) was simulated with the popular Monte Carlo codes SRIM and GEANT4. The simulation results were compared with a theoretical prediction (based on an approximate solution of the Boltzmann transport equation) and with simulations performed with another popular Monte Carlo code, MCNPX. A comparative analysis of the simulation results against experimental data published in the scientific literature for thick absorbers, within the proton energy range used in pCT measurements, was carried out. It was observed that, although all codes show similar results, some nonsystematic displacements occur. Important observations were made about the precision of the codes, and the need for systematic measurements of proton stopping power in thick absorbers was identified.
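For orientation, proton stopping power and range in water can be sketched with the textbook Bethe formula; this is a rough stand-in for the SRIM/GEANT4/MCNPX machinery used in the thesis, assuming the commonly quoted mean excitation energy of 75 eV for water.

```python
import numpy as np

# Bethe stopping power for protons and continuous-slowing-down range in
# water. Textbook formula without shell/density corrections; a rough
# sketch, not the Monte Carlo codes compared in the thesis.
K   = 0.307075      # MeV cm^2 / mol  (4 pi N_A r_e^2 m_e c^2)
me  = 0.510999      # electron rest energy [MeV]
Mp  = 938.272       # proton rest energy [MeV]
ZA  = 0.555         # Z/A for water
I   = 75.0e-6       # mean excitation energy of water [MeV] (assumed)
rho = 1.0           # density of water [g/cm^3]

def dedx(T):
    """Electronic stopping power -dE/dx [MeV/cm] of water for a proton
    of kinetic energy T [MeV]."""
    gamma = 1.0 + T / Mp
    beta2 = 1.0 - 1.0 / gamma**2
    arg = 2.0 * me * beta2 * gamma**2 / I
    return K * ZA * rho / beta2 * (np.log(arg) - beta2)

def csda_range(T0, dT=0.01):
    """CSDA range [cm]: integrate dE / (dE/dx) up to T0, starting just
    above 0.5 MeV where the uncorrected Bethe formula is reasonable."""
    T = np.arange(0.5, T0, dT)
    return np.sum(dT / dedx(T))

for T0 in (19.68, 23.0, 25.0, 49.10, 230.0):   # the energies studied
    print(f"{T0:7.2f} MeV -> range in water ~ {csda_range(T0):7.2f} cm")
```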
1183 | A hybrid LES / Lagrangian FDF method on adaptive, block-structured mesh / Método híbrido LES / FDF Lagrangiana em malha adaptativa, bloco-estruturada
Ferreira, Vitor Maciel Vilela, 09 April 2015
Fundação de Amparo à Pesquisa do Estado de Minas Gerais / This master's thesis is part of a wider research project that aims at developing a computational fluid dynamics (CFD) framework able to simulate the physics of multiple-species mixing flows, with chemical reaction and combustion, using a hybrid Large Eddy Simulation (LES) / Lagrangian Filtered Density Function (FDF) method on an adaptive, block-structured mesh. Since mixing flows exhibit phenomena that can be correlated with combustion in turbulent flows, we present an overview of mixing phenomenology and simulate enclosed, initially segregated two-species mixing flows, in laminar and turbulent regimes, using the in-house AMR3D code and the newly developed Lagrangian composition FDF code. The first step towards this objective was to build a computational model of notional-particle transport in a distributed-processing environment. We achieved it by constructing a parallel Lagrangian map, which can hold different types of Lagrangian elements, including notional particles, particulates, sensors, and the computational nodes intrinsic to the Immersed Boundary and Front Tracking methods. The map connects Lagrangian information with the Eulerian framework of the AMR3D code, in which the transport equations are solved. The Lagrangian composition FDF method performs algebraic calculations over an ensemble of notional particles and provides composition fields statistically equivalent to those obtained by finite-difference solution of the partial differential equations (PDEs); the Monte Carlo technique was applied to solve a derived system of stochastic differential equations (SDEs). The results agreed with the benchmarks, which are finite-difference simulations of a filtered composition transport equation. / Master's degree in Mechanical Engineering
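The core idea of the Monte Carlo / SDE solution can be illustrated with a toy 1D case (not the AMR3D/FDF code): notional particles carrying a composition value perform an Euler-Maruyama random walk, and their ensemble average reproduces the solution of the corresponding diffusion equation for an initially segregated scalar. Parameters are illustrative.

```python
import numpy as np
from math import erf

# Notional particles for 1D diffusion: dX = sqrt(2*D) dW (Euler-Maruyama).
# The binned ensemble mean of the carried composition should be
# statistically equivalent to the exact (or finite-difference) field.
rng = np.random.default_rng(1)
D, T, dt = 1.0e-3, 1.0, 1.0e-3
nsteps = int(T / dt)

# Initially segregated scalar: phi = 1 for x < 0, phi = 0 for x > 0
npart = 50_000
x = rng.uniform(-0.5, 0.5, npart)
phi = (x < 0.0).astype(float)          # composition carried by each particle

for _ in range(nsteps):
    x += np.sqrt(2.0 * D * dt) * rng.standard_normal(npart)

# Ensemble-averaged composition in bins vs the exact step-diffusion solution
bins = np.linspace(-0.4, 0.4, 41)
idx = np.digitize(x, bins)
mc = np.array([phi[idx == i].mean() for i in range(1, len(bins))])

centers = 0.5 * (bins[:-1] + bins[1:])
exact = 0.5 * (1.0 - np.array([erf(c / np.sqrt(4 * D * T)) for c in centers]))
print("max |MC - exact| =", np.abs(mc - exact).max())
```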
1184 | Estudo compartimental e dosimétrico do anti-CD20 marcado com 188Re / Compartmental and dosimetric studies of anti-CD20 labelled with 188Re
KURAMOTO, GRACIELA B., 25 August 2016
Radioimmunotherapy (RIT) uses monoclonal antibodies conjugated to α- or β⁻-emitting radionuclides for therapy. The treatment is based on irradiating and destroying the tumor while sparing normal organs from excess radiation. β⁻-emitting radionuclides such as 90Y, 131I, 177Lu, and 188Re are useful for the development of therapeutic radiopharmaceuticals and, when bound to a monoclonal antibody such as anti-CD20, are important mainly for the treatment of non-Hodgkin lymphoma (NHL). 188Re (Eβ⁻ = 2.12 MeV; Eγ = 155 keV; t1/2 = 16.9 h) is an attractive radionuclide for RIT. The Radiopharmacy Center of IPEN has a project aimed at producing the radiopharmaceutical 188Re-anti-CD20, and this study was proposed to evaluate the efficacy of this labeling technique for treatment in compartmental and dosimetric terms. The objective of this work was to compare the labeling of the anti-CD20 monoclonal antibody with 188Re against labeling with 90Y, 131I, 177Lu, and 99mTc (chosen for its similar chemical characteristics), and with 211At, 213Bi, 223Ra, and 225Ac. From labeling techniques reported in the literature, a compartmental model was proposed for evaluating the pharmacokinetics and for dosimetric studies, both of high interest for therapy. The review of published data made it possible to compare different labeling procedures, labeling yields, reaction times, impurities, and biodistribution studies. The results show favorable kinetics for 188Re, owing to its physical and chemical characteristics relative to the other radionuclides evaluated. The proposed compartmental study describes the metabolism of 188Re-anti-CD20 through a mammillary compartmental model; its pharmacokinetic analysis, carried out in comparison with the products labeled with the β⁻ emitters 131I-anti-CD20 and 177Lu-anti-CD20, the γ emitter 99mTc-anti-CD20, and the α emitter 211At-anti-CD20, gave an elimination constant of approximately 0.05 h⁻¹ in the animal's blood. The dosimetric evaluation of 188Re-anti-CD20 was performed with two methodologies: the Monte Carlo method, and a β⁻ point source through the Loevinger formula implemented in an Excel spreadsheet. The Loevinger formula was used to validate the Monte Carlo dosimetry of 188Re-anti-CD20 and of the other products. The doses and dose rates obtained by the two methods were compared with the dosimetry of 90Y-anti-CD20, 131I-anti-CD20, and 177Lu-anti-CD20 obtained by the same methodology. The dose study used mathematical models of a 25 g nude mouse, simulating different tumor sizes and different distributions of the product within the animal. According to the results, because of its β⁻ emission energy, 188Re-anti-CD20 deposits more energy in bulky tumors than the other products evaluated. In a simulation with 100% of the product taken up by the tumor, 89% of the total dose was absorbed by the tumor, preserving critical organs such as the heart (2%), lungs (5%), spine (4%), liver (0.014%), and kidneys (0.0007%). In a simulation with biodistribution of the product through the animal's body, 38% of the total dose is absorbed by the tumor and >3% by the spine.
In this situation, closer to reality, extrapolation of the data to a 70 kg human showed that the absorbed dose in the tumor corresponds to about 33% of the total, the spine 7%, and the heart 35%. The compartmental and dosimetric analysis presented in this work, carried out with an animal model for 188Re-anti-CD20, shows that the product developed and reported in the literature is a promising candidate for RIT. / Doctoral thesis (Nuclear Technology) / Instituto de Pesquisas Energéticas e Nucleares - IPEN-CNEN/SP
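A hypothetical sketch of the kind of mammillary compartmental calculation described: a two-compartment blood/tumor model using the quoted blood elimination constant of 0.05 h⁻¹ and the physical decay of 188Re (t1/2 = 16.9 h); the exchange rate constants are assumptions, not thesis values.

```python
import numpy as np

# Hypothetical two-compartment (mammillary) model: blood <-> tumor, with
# elimination from blood only. Activities are decay-corrected with the
# 188Re physical half-life. Rate constants k12, k21 are illustrative.
lam_phys = np.log(2.0) / 16.9      # 188Re physical decay constant [1/h]
k_el = 0.05                        # elimination from blood [1/h] (thesis value)
k12, k21 = 0.10, 0.02              # blood->tumor, tumor->blood [1/h] (assumed)

dt, t_end = 0.01, 72.0             # explicit Euler integration over 72 h
t = np.arange(0.0, t_end, dt)
blood, tumor = np.empty_like(t), np.empty_like(t)
b, u = 1.0, 0.0                    # initial activity fractions

for i in range(len(t)):
    blood[i], tumor[i] = b, u
    db = (-(k_el + k12 + lam_phys) * b + k21 * u) * dt
    du = (k12 * b - (k21 + lam_phys) * u) * dt
    b, u = b + db, u + du

# Cumulated activity (time integral), the quantity absorbed dose scales with
print("cumulated activity, tumor :", tumor.sum() * dt)
print("cumulated activity, blood :", blood.sum() * dt)
print("tumor activity peaks at t =", t[tumor.argmax()], "h")
```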
1185 | Desenvolvimento de um protocolo de calibração utilizando espectrometria e simulação matemática, em feixes padrões de raios X / Development of a calibration protocol using spectrometry and mathematical simulation in standard X-ray beams
SANTOS, LUCAS R. dos, 21 November 2017
Calibration, by definition, is the process that establishes a relation between the measured values of a standard, with their respective uncertainties, and the indications, with their associated uncertainties, of the measuring instrument to be calibrated. A calibration protocol describes the methodology to be applied in a calibration process. The method chosen to establish this protocol was X-ray beam spectrometry combined with Monte Carlo simulation, on the grounds that both are considered absolute methods for determining radiation-beam parameters. In this work, the Monte Carlo method was used to obtain the detector response function used to correct the spectra measured in the primary X-ray beam; from these, the kerma rates of the beams were calculated and compared with the values obtained with the secondary-standard ionization chambers of the Instrument Calibration Laboratory of IPEN (LCI/IPEN). The calibration coefficients obtained for the standard system differed from those supplied by the primary laboratory by between 1.3% and 15.3%. The results indicated the feasibility of establishing this calibration protocol using spectrometry as the reference standard, with relative uncertainties of 0.62% for k = 1. The uncertainties associated with the proposed method were satisfactory for a secondary-standard laboratory and comparable to those of a primary laboratory. / Doctoral thesis (Nuclear Technology) / Instituto de Pesquisas Energéticas e Nucleares - IPEN-CNEN/SP
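A toy sketch of the workflow described: unfolding a measured spectrum with a detector response function, then integrating the fluence spectrum into kerma. The response matrix, spectrum shape and mass energy-transfer coefficients below are placeholders, not LCI/IPEN data.

```python
import numpy as np

# Toy spectrometry workflow: unfold a measured pulse-height spectrum with
# a detector response matrix, then integrate the fluence spectrum into
# kerma. All numbers are illustrative placeholders.
E = np.linspace(10.0, 100.0, 10)            # energy bins [keV]

# Assumed response matrix R (column j = detector response to energy E_j):
# a full-energy peak plus a flat partial-deposition tail below the peak.
R = 0.8 * np.eye(len(E))
for j in range(len(E)):
    R[:j, j] = 0.2 / max(j, 1)

true_fluence = np.exp(-((E - 60.0) / 20.0) ** 2)   # made-up beam spectrum
measured = R @ true_fluence                        # what the detector reports

# Unfolding = solving the linear system (least squares is robust here)
fluence, *_ = np.linalg.lstsq(R, measured, rcond=None)

# Kerma per unit fluence: K = sum over bins of phi(E) * E * (mu_tr/rho)(E)
mu_tr_rho = 1.0e-3 * (60.0 / E) ** 2.5      # placeholder coefficient shape
kerma = np.sum(fluence * E * mu_tr_rho)
print("unfolding residual:", np.abs(fluence - true_fluence).max())
print("kerma (arbitrary units):", kerma)
```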
1186 | The Wien Effect in Electric and Magnetic Coulomb systems - from Electrolytes to Spin Ice / L'effet de Wien dans les systèmes de Coulomb électriques et magnétiques : des électrolytes à la glace de spin
Kaiser, Vojtech, 29 October 2014
A Coulomb gas or fluid comprises charged particles that interact via the long-range Coulomb interaction. Examples of Coulombic systems include simple and complex electrolytes, together with magnetic monopoles in spin ice; the long-range nature of the interaction leads to a rich array of phenomena. This thesis is devoted to the study of the non-equilibrium behaviour of lattice-based Coulomb gases and of the quasi-particle excitations in the materials known as spin ice, which constitute a Coulomb gas of magnetic charges. At the centre of this study lies the second Wien effect, which describes the linear increase in conductivity when an electric field is applied to a weak electrolyte: the conductivity increases because additional mobile charges are generated by field-enhanced dissociation of Coulombically bound pairs. The seminal theory of Onsager gave a detailed analysis of the Wien effect. We use numerical simulations not only to confirm its validity in a lattice Coulomb gas for the first time, but chiefly to study its extensions due to the role of the ionic atmosphere and of a field-dependent mobility. The simulations also allow us to observe the microscopic charge correlations underlying the Wien effect. Finally, we look more closely at the emergent gas of monopoles in spin ice, the "magnetolyte": the magnetic behaviour of spin ice reflects the properties of the Coulomb gas contained within. We verify the presence of the Wien effect in model spin ice and, in the process, predict the non-linear response to a periodic driving field or to a field quench, combining Wien effect theory with the response of the spin lattice from which the monopoles emerge. Throughout, we use a straightforward extension of the lattice Coulomb gas simulations to refine our predictions. It is a highly unusual result to find an analytic theory for the non-equilibrium behaviour of a highly frustrated system beyond linear response.
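The quantity at the heart of the second Wien effect can be evaluated directly: Onsager's field-enhanced dissociation factor, commonly written K(E)/K(0) = I1(2*sqrt(2b))/sqrt(2b) with b = e^3 E / (8 pi eps (kB T)^2), which grows linearly (about 1 + b) at small fields. A small sketch, assuming water at 300 K as an illustrative solvent:

```python
import numpy as np
from scipy.special import iv  # modified Bessel function I_nu

# Onsager's second Wien effect: field-dependent dissociation constant
# K(E)/K(0) = I_1(2*sqrt(2b)) / sqrt(2b),  b = e^3 E / (8 pi eps (kB T)^2).
e    = 1.602176634e-19          # elementary charge [C]
kB   = 1.380649e-23             # Boltzmann constant [J/K]
eps0 = 8.8541878128e-12         # vacuum permittivity [F/m]
eps  = 78.4 * eps0              # relative permittivity of water (assumed)
T    = 300.0                    # temperature [K]

def reduced_field(E):
    """Onsager's dimensionless field parameter b for field E [V/m]."""
    return e**3 * E / (8.0 * np.pi * eps * (kB * T) ** 2)

def wien_factor(E):
    """Onsager's K(E)/K(0); tends to 1 + b for small b."""
    b = reduced_field(E)
    if b <= 0.0:
        return 1.0
    return float(iv(1, 2.0 * np.sqrt(2.0 * b)) / np.sqrt(2.0 * b))

for E in (1e5, 1e6, 1e7, 1e8):          # field strengths [V/m]
    b = reduced_field(E)
    print(f"E = {E:8.0e} V/m  b = {b:7.4f}  K(E)/K(0) = {wien_factor(E):8.4f}"
          f"  (linear approx {1.0 + b:8.4f})")
```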
1187 | Stratégie de maintenance centrée sur la fiabilité dans les réseaux électriques de haute tension / Reliability-centered maintenance strategy in high-voltage power networks
Fouathia, Ouahab, 22 September 2005
Today, power networks are operated in a deregulated market. Network operators are required to meet a number of reliability and continuity-of-service criteria, while minimizing the total cost devoted to maintaining the reliability of the installations. The goal is to find a strategy that satisfies several requirements, such as cost, performance, legislation, and the regulator's demands. The decision-making process, however, is subjective, since each participant contributes on the basis of personal experience. Although this process yields the "best" strategy, that strategy is not necessarily the "optimal" one. This techno-economic trade-off has made power network operators aware of the need for decision-support tools based on new quantitative approaches and on modeling closer to physical reality.

This thesis is part of a research project launched by ELIA, named COMPRIMa (Cost-Optimization Models for the Planning of the Renewal, Inspection, and Maintenance of Belgian power system facilities). The project aims to develop a methodology for modeling part of the power transmission network (with stochastic Petri nets) and simulating its dynamic behavior over a given horizon (Monte Carlo simulation). Evaluating reliability indices makes it possible to compare different scenarios intended to improve the performance of the installation. The proposed approach is based on the RCM (Reliability-Centered Maintenance) strategy.

The methodology developed in this thesis allows a more realistic modeling of the network that takes into account, among others, the following aspects:
- the quantitative correlation between the maintenance process and the aging process of the components, through a virtual-age model (see the sketch below);
- the dependencies arising from the multi-component nature of the system, including the specific failure modes of the protection systems;
- the economic aspects of the maintenance strategy (inspection, servicing, repair, replacement), of outages (planned and forced), and of high-risk events (circuit-breaker failure, loss of a customer, loss of a busbar, loss of a substation, etc.).
/ Doctorate in applied sciences
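The virtual-age idea in the first bullet can be illustrated with a Kijima type-I model under imperfect repair, simulated by Monte Carlo. A hedged sketch: the Weibull parameters and rejuvenation factor are illustrative, not COMPRIMa values.

```python
import numpy as np

# Kijima type-I virtual age model under imperfect repair. Failures follow
# a Weibull hazard; after the n-th repair the virtual age is
# V_n = V_{n-1} + q * X_n  (q = 1: as-bad-as-old, q = 0: as-good-as-new).
rng = np.random.default_rng(7)
eta, beta = 8.0, 2.5        # Weibull scale [years] and shape (assumed)
q = 0.6                     # rejuvenation factor of each repair (assumed)
horizon = 40.0              # simulated horizon [years]

def next_failure(v):
    """Time to next failure given virtual age v, sampled from the
    conditional Weibull: S(t|v) = exp((v/eta)**beta - ((v+t)/eta)**beta)."""
    u = rng.random()
    return eta * ((v / eta) ** beta - np.log(u)) ** (1.0 / beta) - v

def simulate_one():
    t, v, failures = 0.0, 0.0, 0
    while True:
        x = next_failure(v)
        if t + x > horizon:
            return failures
        t += x
        v += q * x              # imperfect repair removes only part of the age
        failures += 1

counts = np.array([simulate_one() for _ in range(20_000)])
print("expected failures over horizon:", counts.mean())
print("95% of runs have at most      :", np.percentile(counts, 95), "failures")
```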
1188 | Ant colony optimization and its application to adaptive routing in telecommunication networks
Di Caro, Gianni, 10 November 2004
In ant societies and, more generally, in insect societies, the activities of the individuals, as well as of the society as a whole, are not regulated by any explicit form of centralized control. On the other hand, adaptive and robust behaviors transcending the behavioral repertoire of the single individual can easily be observed at the society level. These complex global behaviors are the result of self-organizing dynamics driven by local interactions and communications among a number of relatively simple individuals.

The simultaneous presence of these and other fascinating and unique characteristics has made ant societies an attractive and inspiring model for building new algorithms and new multi-agent systems. In the last decade, ant societies have been taken as a reference for an ever-growing body of scientific work, mostly in the fields of robotics, operations research, and telecommunications.

Among the different works inspired by ant colonies, the Ant Colony Optimization metaheuristic (ACO) is probably the most successful and popular one. The ACO metaheuristic is a multi-agent framework for combinatorial optimization whose main components are: a set of ant-like agents, the use of memory and of stochastic decisions, and strategies of collective and distributed learning.

It finds its roots in the experimental observation of a specific foraging behavior of some ant colonies that, under appropriate conditions, are able to select the shortest path among the few possible paths connecting their nest to a food site. The pheromone, a volatile chemical substance laid on the ground by the ants while walking and affecting in turn their moving decisions according to its local intensity, is the mediator of this behavior.

All the elements playing an essential role in the ant colony foraging behavior were understood, thoroughly reverse-engineered, and put to work to solve problems of combinatorial optimization by Marco Dorigo and his co-workers at the beginning of the 1990s. From that moment on, a flourishing of new combinatorial optimization algorithms designed after those first algorithms, and of related scientific events, followed.

In 1999 the ACO metaheuristic was defined by Dorigo, Di Caro and Gambardella with the purpose of providing a common framework for describing and analyzing all these algorithms inspired by the same ant colony behavior and by the same common process of reverse engineering of this behavior. The ACO metaheuristic was therefore defined a posteriori, as the result of a synthesis effort carried out on the study of the characteristics of all these ant-inspired algorithms and on the abstraction of their common traits.

The synthesis of ACO was also motivated by the usually good performance shown by the algorithms (e.g., for several important combinatorial problems such as quadratic assignment, vehicle routing, and job-shop scheduling, ACO implementations have outperformed state-of-the-art algorithms).

The definition and study of the ACO metaheuristic is one of the two fundamental goals of the thesis. The other, strictly related to the former, consists in the design, implementation, and testing of ACO instances for problems of adaptive routing in telecommunication networks.

This thesis is an in-depth journey through the ACO metaheuristic, during which we have (re)defined ACO and tried to gain a clear understanding of its potentialities, limits, and relationships with other frameworks and with its biological background. The thesis takes into account all the developments that have followed the original 1999 definition, and provides a formal and comprehensive systematization of the subject, as well as an up-to-date and quite comprehensive review of current applications. We have also identified in dynamic problems in telecommunication networks the most appropriate domain of application for the ACO ideas. Accordingly, in the most applicative part of the thesis we have focused on problems of adaptive routing in networks, and have developed and tested four new algorithms.

Adopting an original point of view with respect to the way ACO was first defined (but maintaining full conceptual and terminological consistency), ACO is here defined and mainly discussed in terms of sequential decision processes and Monte Carlo sampling and learning. More precisely, ACO is characterized as a policy search strategy aimed at learning the distributed parameters (called pheromone variables in accordance with the biological metaphor) of the stochastic decision policy used by so-called ant agents to generate solutions. Each ant represents in practice an independent sequential decision process aimed at constructing a possibly feasible solution for the optimization problem at hand by using only information local to the decision step. Ants are repeatedly and concurrently generated in order to sample the solution set according to the current policy. The outcomes of the generated solutions are used to partially evaluate the current policy, spot the most promising search areas, and update the policy parameters so as to focus the search in those promising areas while keeping a satisfactory level of overall exploration.

This way of looking at ACO has made it easier to disclose the strict relationships between ACO and other well-known frameworks, such as dynamic programming, Markov and non-Markov decision processes, and reinforcement learning. In turn, this has favored reasoning on the general properties of ACO in terms of the amount of complete state information used by ACO's ants to take optimized decisions, and to encode in pheromone variables a memory of both the decisions that belonged to the sampled solutions and their quality.

The biological context that inspired ACO is fully acknowledged in the thesis. We report, with extensive discussions, on the shortest-path behaviors of ant colonies and on the identification and analysis of the few nonlinear dynamics that are at the very core of self-organized behaviors in both ants and other societal organizations. We discuss these dynamics in the general framework of stigmergic modeling, based on asynchronous environment-mediated communication protocols and on (pheromone) variables priming coordinated responses of a number of "cheap" and concurrent agents.

The second half of the thesis is devoted to the study of the application of ACO to problems of online routing in telecommunication networks. This class of problems has been identified in the thesis as the most appropriate for the application of the multi-agent, distributed, and adaptive nature of the ACO architecture.

Four novel ACO algorithms for problems of adaptive routing in telecommunication networks are thoroughly described. The four algorithms cover a wide spectrum of possible types of network: two of them deliver best-effort traffic in wired IP networks, one is intended for quality-of-service (QoS) traffic in ATM networks, and the fourth is for best-effort traffic in mobile ad hoc networks.

The two algorithms for wired IP networks have been extensively tested by simulation studies and compared to state-of-the-art algorithms for a wide set of reference scenarios. The algorithm for mobile ad hoc networks is still under development, but quite extensive results and comparisons with a popular state-of-the-art algorithm are reported. No results are reported for the QoS algorithm, which has not been fully tested. The observed experimental performance is excellent, especially in the case of wired IP networks: our algorithms always perform comparably to or much better than the state-of-the-art competitors.

In the thesis we try to understand the rationale behind the brilliant performance obtained and the good level of popularity reached by our algorithms. More generally, we discuss the reasons for the general efficacy of the ACO approach to network routing problems compared with the characteristics of more classical approaches. Moving further, we also informally define Ant Colony Routing (ACR), a multi-agent framework that explicitly integrates learning components into the ACO design in order to define a general and, in a sense, futuristic architecture for autonomic network control.

Most of the material of the thesis comes from a re-elaboration of material co-authored and published in a number of books, journal papers, conference proceedings, and technical reports. The detailed list of references is provided in the Introduction. / Doctorate in applied sciences
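A minimal, self-contained sketch of the ACO loop described above (not AntNet or any algorithm from the thesis): ants build paths with a pheromone-biased stochastic policy, and pheromone variables are evaporated and then reinforced in proportion to solution quality. The toy graph and parameters are assumptions.

```python
import random

# Toy ACO for the shortest path between nodes S and T of a small weighted
# graph. Pheromone tau biases the ants' stochastic decision policy;
# evaporation plus quality-proportional reinforcement implements the
# collective, distributed learning.
graph = {                      # adjacency: node -> {neighbor: link cost}
    "S": {"A": 4.0, "B": 1.0},
    "A": {"T": 1.0, "B": 1.0},
    "B": {"A": 1.0, "T": 4.0},
    "T": {},
}
tau = {(u, v): 1.0 for u in graph for v in graph[u]}   # pheromone variables
rho, n_ants, n_iters = 0.1, 20, 200                    # assumed parameters

def build_path():
    """One ant = one sequential decision process from S to T."""
    node, path = "S", []
    while node != "T":
        choices = [v for v in graph[node] if (node, v) not in path]
        if not choices:
            return None, float("inf")                  # dead end: discard ant
        weights = [tau[(node, v)] / graph[node][v] for v in choices]
        nxt = random.choices(choices, weights=weights)[0]
        path.append((node, nxt))
        node = nxt
    return path, sum(graph[u][v] for u, v in path)

for _ in range(n_iters):
    solutions = [build_path() for _ in range(n_ants)]
    for edge in tau:
        tau[edge] *= 1.0 - rho                         # evaporation
    for path, cost in solutions:
        if path is not None:
            for edge in path:
                tau[edge] += 1.0 / cost                # reinforce good paths

# The shortest route S -> B -> A -> T (cost 3) ends up most reinforced
print("pheromone table:", {e: round(t, 2) for e, t in tau.items()})
```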
1189 | Analyse probabiliste du risque de stockage de déchets radioactifs par la méthode des arbres d'événements continus / Probabilistic risk analysis of radioactive waste storage by the continuous event tree method
Smidts, Olivier, 23 October 1997
Risk studies of radioactive waste storage include, like any risk study, a treatment of uncertainty. The risk-calculation tool, called a PRA (Probabilistic Risk Assessment) tool, consists of a code computing groundwater flow and the transport of radionuclide chains. This type of tool is essential for the performance assessment of the geological barrier. The lack of knowledge about the (spatial and temporal) variability of the hydrogeological properties of this barrier is the primary source of uncertainty, and stochastic methods have been developed in hydrogeology to deal with it.

In this thesis, the uncertainty analysis related to the composition of the geological medium is split between flow and transport as follows: (a) a mean flow solution is first determined with a code based on the finite-difference method; this solution is then subjected to a sensitivity analysis, which leads to the solution of an inverse problem in order to improve the initial estimate of the mean flow parameters; (b) the effect of random variation of the flow velocity is considered during radionuclide transport, which is solved with a non-analog Monte Carlo method.

The sensitivity analysis of the flow problem is carried out with a variational method. The proposed method has the advantage of being able to quantify structural uncertainty, that is, the uncertainty related to the geometry of the geological medium.

A non-analog Monte Carlo methodology is used for the transport of radionuclide chains in a stochastic medium. Its contributions to risk calculation rest on three points:
1) The use of a simple transport solution (in the form of an adjoint solution) in the mechanisms of the Monte Carlo simulation. This transport solution summarizes, between two successive positions of the random walker, the physico-chemical processes (advection, diffusion-dispersion, adsorption, desorption) occurring at the microscopic scale. It makes efficient transport simulations possible by accelerating the transition mechanisms of the random walkers in the geological domain and in time.
2) The application of the continuous event tree method to the transport of radionuclide chains. This method handles the radioactive transitions between the elements of a chain with the same formalism as that used for the transport of a single radionuclide. It therefore makes it possible to pass from the transport of one radionuclide to the transport of a chain of radionuclides at no extra computing cost in time and with a limited extra cost in memory.
3) The application of so-called "double randomization" techniques to the problem of radionuclide transport in a stochastic geological medium (see the sketch below). These techniques efficiently combine a Monte Carlo simulation over parameters with a Monte Carlo simulation of transport, and thus include the uncertainty associated with the composition of the geological medium explicitly in the risk calculation.

This work opens promising perspectives for further developments of the non-analog Monte Carlo methodology for risk calculation. / Doctorate in applied sciences
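Double randomization (point 3) can be sketched as two nested Monte Carlo loops: the outer loop draws a realization of the uncertain medium (here, a random groundwater velocity), and the inner loop runs random-walk transport in that realization. A 1D advection-dispersion toy with made-up parameters, not the thesis code:

```python
import numpy as np

# Double randomization: outer loop = random medium realizations, inner
# loop = random-walk transport in that medium. We estimate the probability
# that a radionuclide crosses a barrier of length Lb within time T.
rng = np.random.default_rng(3)
Lb, T, D, dt = 10.0, 50.0, 0.05, 0.1      # barrier [m], horizon [y], dispersion
lam = np.log(2.0) / 30.0                  # decay constant, t1/2 = 30 y (assumed)

def inner_transport(v, n_walkers=500):
    """P(arrival before decay and before T) in one medium realization."""
    x = np.zeros(n_walkers)
    alive = np.ones(n_walkers, dtype=bool)
    arrived = np.zeros(n_walkers, dtype=bool)
    for _ in range(int(T / dt)):
        # advection + dispersion step (Euler-Maruyama random walk)
        x[alive] += v * dt + np.sqrt(2 * D * dt) * rng.standard_normal(alive.sum())
        # radioactive decay removes each walker with probability lam*dt
        alive &= rng.random(x.size) > lam * dt
        arrived |= alive & (x >= Lb)
        alive &= ~arrived                 # arrived walkers stop moving
    return arrived.mean()

outer = []
for _ in range(200):                      # outer Monte Carlo over the medium
    v = rng.lognormal(mean=np.log(0.2), sigma=0.5)   # random velocity [m/y]
    outer.append(inner_transport(v))

outer = np.array(outer)
print("risk estimate P(crossing):", outer.mean())
print("variance across medium realizations:", outer.var())
```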
1190 | 簡單順序假設波松母數較強檢定力檢定研究 - 兩兩母均數差 / More Powerful Tests for Simple Order Hypotheses in Poisson Distributions - The Differences of the Parameters
Sun, Yu-Kai (孫煜凱), date unknown
The Poisson distribution is commonly used to count, per unit of time or per interval, the occurrences of a random event of interest (or the frequency of a known event): for example, the number of customers arriving at a fast-food restaurant per unit time, or the number of occurrences of a natural disaster per period. We write X ~ Poisson(lambda), where the parameter lambda is the mean number of occurrences per unit.

Consider random samples drawn from two Poisson distributions: let X = {(X1, X2)}, where X(i,1), X(i,2), ..., X(i,ni) ~ Poisson(lambda(i)), i = 1, 2, and let Y1 and Y2 be the corresponding sample sums. To examine whether the two means are equal, or differ by less than a constant d, consider the hypotheses

H0: lambda2 - lambda1 <= d versus H1: lambda2 - lambda1 > d,

with 0 < alpha < 1/2 and alpha, lambda, d fixed. Available tests for this problem include the conditional test (C-test) of Przyborowski and Wilenski (1940) and the exact test (E-test) of Krishnamoorthy and Thomson (2002); the E-test is an unconditional test. Krishnamoorthy and Thomson showed that the p-values of both tests stay below the significance level alpha and that the power of the E-test is no less than that of the C-test, so the E-test is the more suitable of the two for the hypotheses above.

Berger and Boos (1994) proposed a new way of computing a p-value, replacing the classical method, which they called a valid p-value. Using it, Berger (1996) constructed a more powerful test, so far applied only to testing the equality of the probability parameters of two binomial distributions; Berger noted that the more powerful test has better power than the unconditional test and requires less computing time, improving the efficiency of the test.

In this thesis we construct, for fixed alpha and d, a level-alpha more powerful test for the difference of two Poisson means, adapting Berger's method via the valid p-value of Berger and Boos (1994). We find that the p-values of this more powerful test and of the E-test both stay below alpha, while the power of our test is never less than that of the E-test, so that our test attains a genuinely larger rejection region; the choice of alpha and of the null hypothesis, however, requires care. In particular, for one-sided tests with alpha < 0.01, the more powerful test cannot find a better rejection region than the E-test, and the two tests then have equal power.
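A hedged sketch of the E-test of Krishnamoorthy and Thomson (2002) for H0: lambda2 - lambda1 <= d, restricted to d = 0, where the restricted MLE under H0 is simply the pooled mean (for general d the restricted MLE solves a quadratic, omitted here); the grid cutoff is a practical approximation.

```python
import numpy as np
from scipy.stats import poisson

def e_test_d0(k1, n1, k2, n2):
    """Approximate E-test p-value for H0: lambda2 <= lambda1 (d = 0).
    k_i = observed total count from n_i unit-time observations."""
    lam1_hat, lam2_hat = k1 / n1, k2 / n2
    se = np.sqrt(lam1_hat / n1 + lam2_hat / n2)
    t_obs = (lam2_hat - lam1_hat) / se if se > 0 else 0.0

    lam_pool = (k1 + k2) / (n1 + n2)        # restricted MLE under d = 0
    m1, m2 = n1 * lam_pool, n2 * lam_pool   # null means of the two sums

    # Sum P(T(X1,X2) >= t_obs) over a grid capturing essentially all the mass
    hi1 = int(m1 + 10 * np.sqrt(m1)) + 10
    hi2 = int(m2 + 10 * np.sqrt(m2)) + 10
    x1 = np.arange(hi1 + 1)
    x2 = np.arange(hi2 + 1)
    p1 = poisson.pmf(x1, m1)
    p2 = poisson.pmf(x2, m2)

    l1 = x1[:, None] / n1                   # lambda1-hat over the grid
    l2 = x2[None, :] / n2
    se_g = np.sqrt(l1 / n1 + l2 / n2)
    with np.errstate(invalid="ignore", divide="ignore"):
        t = np.where(se_g > 0, (l2 - l1) / se_g, 0.0)
    return float(np.sum(np.outer(p1, p2)[t >= t_obs]))

# Usage: 5 events over 2 units vs 15 events over 2 units
print("E-test p-value:", e_test_d0(k1=5, n1=2, k2=15, n2=2))
```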