41

Discrete-ordinates cost optimization of weight-dependent variance reduction techniques for Monte Carlo neutral particle transport

Solomon, Clell J. Jr. January 1900
Doctor of Philosophy / Department of Mechanical and Nuclear Engineering / J. Kenneth Shultis / A method for deterministically calculating the population variances of Monte Carlo particle transport calculations involving weight-dependent variance reduction has been developed. This method solves a set of equations developed by Booth and Cashwell [1979], but extends them to consider the weight-window variance reduction technique. Furthermore, equations that calculate the duration of a single history in an MCNP5 (RSICC version 1.51) calculation have been developed as well. The calculation cost, defined as the inverse figure of merit, of a Monte Carlo calculation can be deterministically minimized from calculations of the expected variance and expected calculation time per history. The method has been applied to one- and two-dimensional multi-group and mixed-material problems for optimization of weight-window lower bounds. With the adjoint (importance) function as a basis for optimization, an optimization mesh is superimposed on the geometry. Regions of weight-window lower bounds contained within the same optimization mesh element are optimized together with a scaling parameter. Using this additional optimization mesh restricts the size of the optimization problem, thereby eliminating the need to optimize each individual weight-window lower bound. Application of the optimization method to a one-dimensional problem, designed to replicate the variance-reduction iron-window effect, obtains a gain in efficiency by a factor of 2 over standard deterministically generated weight windows. The gain in two-dimensional problems varies. For a 2-D block problem and a 2-D two-legged duct problem, the efficiency gain is a factor of about 1.2. The top-hat problem sees an efficiency gain of 1.3, while a 2-D three-legged duct problem sees an efficiency gain of only 1.05.
This work represents the first attempt at deterministic optimization of Monte Carlo calculations with weight-dependent variance reduction. However, the current work is limited in the size of problems that can be run by the amount of computer memory available in computational systems. This limitation results primarily from the added discretization of the Monte Carlo particle weight required to perform the weight-dependent analyses. Alternate discretization methods for the Monte Carlo weight should be a topic of future investigation. Furthermore, the accuracy with which the MCNP5 calculation times can be calculated deterministically merits further study.
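The cost criterion used above, the inverse figure of merit, can be sketched in a few lines. This is an illustrative reconstruction, not code from the thesis: the function name and arguments are my own, and the standard Monte Carlo figure of merit FOM = 1/(R²T) is assumed, where R is the relative error of the tally and T the total run time.

```python
import math

def mc_cost(variance, mean, time_per_history, n_histories):
    """Cost = inverse figure of merit = R**2 * T, where R is the relative
    error of the tally mean and T the total calculation time."""
    # Relative error of the sample mean after n_histories histories.
    rel_err = math.sqrt(variance / n_histories) / mean
    total_time = time_per_history * n_histories
    return rel_err**2 * total_time
```

Note that the cost is independent of the number of histories, which is why the expected per-history variance and per-history time alone suffice to minimize it deterministically.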
42

Inclusion problems for one-counter systems

Totzke, Patrick January 2014
We study the decidability and complexity of verification problems for infinite-state systems. A fundamental question in formal verification is whether the behaviour of one process is reproducible by another. This inclusion problem can be studied for various models of computation and behavioural preorders. It is generally intractable or even undecidable already for very limited computational models. The aim of this work is to clarify the status of the decidability and complexity of some well-known inclusion problems for suitably restricted computational models. In particular, we address the problems of checking strong and weak simulation and trace inclusion for processes definable by one-counter automata (OCA), which consist of a finite control and a single counter ranging over the non-negative integers. We take special interest in the subclass of one-counter nets (OCNs), which cannot fully test the counter for zero and are subsumed by both pushdown automata and Petri nets / vector addition systems. Our new results include the PSPACE-completeness of strong and weak simulation, and the undecidability of trace inclusion for OCNs. Moreover, we consider semantic preorders between OCA/OCN and finite systems and close some gaps regarding their complexity. Finally, we study deterministic processes, for which simulation and trace inclusion coincide.
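As an illustration of the model class studied here, a one-counter net can be simulated in a few lines. This sketch is not from the thesis; the names and the transition encoding are my own assumptions. It captures only the defining restriction: a decrement is blocked when the counter is zero, but the net has no way to *test* for zero.

```python
# A one-counter net (OCN) configuration is (control state, counter >= 0).
# Transitions cannot test the counter for zero; a decrement is simply
# disabled when the counter is 0.

def successors(config, transitions):
    """Enabled moves from an OCN configuration.
    transitions: dict mapping state -> list of (label, delta, next_state)."""
    state, counter = config
    out = []
    for label, delta, nxt in transitions.get(state, []):
        if counter + delta >= 0:          # decrement blocked at zero
            out.append((label, (nxt, counter + delta)))
    return out

# A tiny OCN: p loops on 'a' incrementing, moves to q on 'b',
# and q loops on 'c' decrementing.
ocn = {
    "p": [("a", +1, "p"), ("b", 0, "q")],
    "q": [("c", -1, "q")],
}
```

Inclusion questions like those in the thesis ask whether every labelled trace of one such net (or automaton) is also a trace of another.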
43

SNM neutron detection using a time-gated synthetic aperture hybrid approach

Molinar, Matthew P. 13 January 2014
This work focuses on using forward and adjoint transport in a hybrid application of 3-D deterministic (PENTRAN) and Monte Carlo (MCNP5) codes to model a series of neutron detector blocks. These blocks, or “channels,” contain a unique set of moderators with 4 atm He-3 proportional detectors tuned to detect and profile a gross energy spectrum of a passing neutron (SNM) source. Ganging the units together as a large-area system enables one to apply time gating to the source-detector response to maximize the signal-to-noise response from a passing source with minimal background; multiple units may be positioned as a collective synthetic-aperture detector array for performing real-time neutron spectroscopy to detect special nuclear materials in moving vehicles.
44

HARDWARE- VS. SOFTWARE-DRIVEN REAL-TIME DATA ACQUISITION

Powell, Richard; Kuhn, Jeff October 2000
International Telemetering Conference Proceedings / October 23-26, 2000 / Town & Country Hotel and Conference Center, San Diego, California / There are two basic approaches to developing data acquisition systems. The first is to buy or develop acquisition hardware and to then write software to input, identify, and distribute the data for processing, display, storage, and output to a network. The second is to design a system that handles some or all of these tasks in hardware instead of software. This paper describes the differences between software-driven and hardware-driven system architectures as applied to real-time data acquisition systems. In explaining the characteristics of a hardware-driven system, a high-performance real-time bus system architecture developed by L-3 will be used as an example. This architecture removes the bottlenecks and unpredictability that can plague software-driven systems when applied to complex real-time data acquisition applications. It does this by handling the input, identification, routing, and distribution of acquired data without software intervention.
45

Identification and Simulation Methods for Nonlinear Mechanical Systems Subjected to Stochastic Excitation

Josefsson, Andreas January 2011
With an ongoing desire to improve product performance, in combination with the continuously growing complexity of engineering structures, there is a need for well-tested and reliable engineering tools that can aid decision making and facilitate efficient and effective product development. The technical assessment of the dynamic characteristics of mechanical systems often relies on linear analysis techniques, which are well developed and generally accepted. However, sometimes the errors due to linearization are too large to be acceptable, making it necessary to take nonlinear effects into account. Many existing analysis techniques for nonlinear mechanical systems build on the assumption that the input excitation of the system is periodic and deterministic. This often results in highly inefficient analysis procedures when nonlinear mechanical systems are studied in a non-deterministic environment where the excitation of the system is stochastic. The aim of this thesis is to develop and validate new efficient analysis methods for the theoretical and experimental study of nonlinear mechanical systems under stochastic excitation, with emphasis on two specific problem areas: forced-response simulation and system identification from measurement data. A fundamental concept in the presented methodology is to model the nonlinearities as external forces acting on an underlying linear system, thereby making it possible to use much of the linear theory for simulation and identification. The developed simulation methods utilize a digital filter to achieve a stable and condensed representation of the linear subparts of the system, which is then solved recursively at each time step together with the counteracting nonlinear forces.
The result is computationally efficient simulation routines, which are particularly suitable for performance predictions when the input excitation consists of long segments of discrete data representing a realization of the stochastic excitation of the system. Similarly, the presented identification methods take advantage of linear multiple-input multiple-output (MIMO) theories for random data by using the measured responses to create artificial inputs which can separate the linear system from the nonlinear parameters. The developed methods have been tested with extensive numerical simulations and on experimental test rigs, with promising results. Furthermore, an industrial case study of a wave energy converter with nonlinear characteristics has been carried out, and an analysis procedure capable of evaluating the performance of the system in non-deterministic ocean waves is presented.
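The core idea above, treating the nonlinearities as external forces fed back to a linear system each time step, can be illustrated with a minimal sketch. The thesis uses a digital-filter representation of the linear subsystem; the simpler semi-implicit Euler recursion below, the Duffing-type cubic stiffness, and all parameter values are my own assumptions, but the feedback structure (nonlinear force moved to the right-hand side at every step) mirrors the described approach.

```python
def simulate_duffing(force, dt, m=1.0, c=0.2, k=1.0, k3=0.5):
    """Response of m*x'' + c*x' + k*x + k3*x**3 = f(t), starting from rest.
    The cubic term is treated as an extra external force on the linear
    system and the recursion is advanced one step at a time."""
    x, v, out = 0.0, 0.0, []
    for f in force:
        # Nonlinear restoring force moved to the right-hand side.
        a = (f - c * v - k * x - k3 * x**3) / m
        v += dt * a          # semi-implicit Euler update
        x += dt * v
        out.append(x)
    return out
```

Because the nonlinearity enters only as a forcing term, the same loop works unchanged for long stochastic excitation records, which is the efficiency argument made in the abstract.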
46

GIS based assessment of seismic risk for the Christchurch CBD and Mount Pleasant, New Zealand

Singh, Bina Aruna January 2006
This research employs a deterministic seismic risk assessment methodology to assess the potential damage and loss at meshblock level in the Christchurch CBD and Mount Pleasant, primarily due to building damage caused by earthquake ground shaking. Expected losses in terms of dollar value and casualties are calculated for two earthquake scenarios. Findings are based on: (1) data describing the earthquake ground shaking and microzonation effects; (2) an inventory of buildings by value, floor area, replacement value, occupancy and age; (3) damage ratios defining the performance of buildings as a function of earthquake intensity; (4) daytime and night-time population distribution data; and (5) casualty functions defining casualty risk as a function of building damage. A GIS serves as a platform for collecting, storing and analyzing the original and derived data. It also allows for easy display of input and output data, providing critical functionality for communicating outcomes. The results of this study suggest that economic losses due to building damage in the Christchurch CBD and Mount Pleasant would be on the order of $5.6 million and $35.3 million in a magnitude 8.0 Alpine fault earthquake and a magnitude 7.0 Ashley fault earthquake, respectively. Damage to non-residential buildings constitutes the vast majority of the economic loss. Casualty numbers are expected to be between 0 and 10.
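The loss computation described above, building values weighted by intensity-dependent damage ratios, reduces to a simple aggregation. The sketch below uses hypothetical damage ratios and a flat inventory encoding of my own; the thesis's meshblock-level GIS data and calibrated damage functions are not reproduced.

```python
def expected_loss(buildings, damage_ratio):
    """Aggregate earthquake loss over a building inventory.
    buildings: iterable of (replacement_value, intensity) pairs;
    damage_ratio: maps shaking intensity (e.g. MMI) to a damage fraction."""
    return sum(value * damage_ratio(mmi) for value, mmi in buildings)

# Illustrative damage-ratio curve (hypothetical values, not the
# thesis's calibrated damage functions).
ratios = {6: 0.01, 7: 0.05, 8: 0.15, 9: 0.35}

# Two example buildings: $2.0M exposed to MMI 7, $0.5M exposed to MMI 8.
loss = expected_loss([(2.0e6, 7), (5.0e5, 8)], ratios.get)
```

Casualty estimates follow the same pattern, with casualty functions of building damage replacing the dollar-valued damage ratios.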
47

Reliable and Secure Geocasting in VANETs

Prado Bernia, Antonio 19 September 2012 (has links)
Current geocasting algorithms for VANETs are designed to enable either private or reliable communications, but not both. Existing algorithms preserve privacy by minimizing the information used for routing, sacrificing message delivery success. On the other hand, reliable protocols often store node information that can be used to compromise a vehicle's privacy. We have designed two private and reliable geocasting protocols for VANETs that ensure confidentiality. One is a probabilistic algorithm that uses direction-based dissemination, while the other is a deterministic algorithm that uses transmission-coverage dissemination. To preserve privacy, we create unlinkable and pseudonymous channels of communication with geocasting. For encryption and authentication, we use a public-key technique. Our probabilistic forwarding model depends on the message rate and cumulative payload, as well as on the angle of spreading of the direction-based scheme. To reduce message duplication, we apply dynamic traffic restriction and probabilistic forwarding techniques. The deterministic forwarding algorithm delays forwarding a message based on the transmission area left uncovered after neighbouring nodes have broadcast the message. We prove that both algorithms ensure node privacy with appropriate message encryption, and we ran simulations to demonstrate that both meet the message delivery requirements. From the gathered data, we observe that the two algorithms behave differently depending on the scenario: node density affects the deterministic algorithm, while the angle of spreading has a significant impact on the probabilistic protocol.
48

Análise de texturas estáticas e dinâmicas e suas aplicações em biologia e nanotecnologia / Static and dynamic texture analysis and their applications in biology and nanotechnology

Gonçalves, Wesley Nunes 02 August 2013
Texture analysis has attracted increasing interest in computer vision due to its importance in describing images. Basically, research on textures can be divided into two categories: static and dynamic textures. Static textures are characterized by intensity variations which form a pattern repeated spatially in the image. On the other hand, dynamic textures are texture patterns present in a sequence of images. Although many studies have been carried out, this area is still open to study, especially in dynamic textures, since it is a recent and little-explored subject.
This study aims to develop research covering both types of textures in theoretical and practical fields. In static textures, two methods were proposed: (i) one based on deterministic partially self-avoiding walks and fractal dimension; (ii) one based on activity in directed networks. In dynamic textures, deterministic partially self-avoiding walks were extended to sequences of images, yielding interesting results in recognition and segmentation. The proposed methods were applied to problems in biology and nanotechnology, with interesting results for the development of both areas.
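A deterministic partially self-avoiding walk (the "tourist walk" underlying the first static-texture method) can be sketched directly on a grayscale image. The neighbourhood, memory rule, and names below are my assumptions of a common formulation, not necessarily the thesis's exact algorithm: at each step the walker moves to the most intensity-similar neighbour not visited within the last `mu` steps.

```python
def tourist_walk(image, start, mu, steps=50):
    """Deterministic partially self-avoiding walk on a grayscale image
    (list of rows of intensities): at each step move to the 4-neighbour
    with the most similar intensity not visited in the last `mu` steps."""
    h, w = len(image), len(image[0])
    path = [start]
    for _ in range(steps):
        r, c = path[-1]
        recent = set(path[-mu:]) if mu > 0 else set()
        best = None
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in recent:
                d = abs(image[nr][nc] - image[r][c])
                if best is None or d < best[0]:
                    best = (d, (nr, nc))
        if best is None:          # walker trapped: walk ends
            break
        path.append(best[1])
    return path
```

Texture descriptors are then typically built from statistics of such walks (transient lengths, attractor cycles) over many starting pixels and memory sizes.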
49

Estudo de parâmetros epidemiológicos através de modelamento matemático: aspectos estacionários, espaciais e temporais. / The study of epidemiological parameters through mathematical modelling: stationary, spatial and temporal features.

Amaku, Marcos 27 June 2001
We have studied, based on mathematical modelling, stationary, spatial and temporal features related to the propagation and control of infectious diseases transmitted directly through person-to-person contact. We have developed deterministic mathematical models founded on the mass-action principle of Epidemiology, taking into account the symmetry of contacts between susceptible and infectious individuals.
Such symmetry enabled us to estimate the potentially infective per capita contact rate and, therefore, the force of infection and the possible effects of different vaccination programmes. The steady-state modelling has been based on rubella serological data of a non-immunized population (Azevedo Neto 1992), and we have analysed three different vaccination schemes against rubella in the following age intervals: from 1 to 2 years of age, from 7 to 8 years of age, and from 14 to 15 years of age. The serological data variability has been considered in the estimation of the statistical uncertainty of the average age at infection by means of the Monte Carlo method, and we have applied this methodology to varicella and hepatitis A data. The spatial feature has been studied by including a distance variable in the formulation of an SIR model and analysing the influence of the interaction range among individuals. We have also studied the force of infection as a function of age and time and have analysed, in a qualitative way, different situations in the time evolution of an infectious disease.
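The mass-action models described above can be illustrated with a minimal deterministic SIR recursion, in which the force of infection is proportional to the infectious fraction. This is a generic textbook sketch, not the age-structured formulation of the thesis; parameter values and names are assumptions.

```python
def sir(beta, gamma, s0, i0, dt=0.1, steps=1000):
    """Deterministic mass-action SIR model (fractions of a closed
    population), advanced by forward Euler. The force of infection is
    lambda(t) = beta * I(t), acting on the susceptible fraction."""
    s, i = s0, i0
    history = []
    for _ in range(steps):
        foi = beta * i                 # force of infection
        ds = -foi * s                  # new infections leave S
        di = foi * s - gamma * i       # ...enter I, recover at rate gamma
        s += dt * ds
        i += dt * di
        history.append((s, i))
    return history
```

With beta = 0.5 and gamma = 0.2 (basic reproduction number 2.5), an epidemic starting from 1% prevalence peaks well above the initial level and leaves a much-reduced susceptible fraction.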
50

Simulação da dispersão de poluentes na camada limite planetária: um modelo determinístico-estocástico / Simulation of pollutant dispersion in the planetary boundary layer: a deterministic-stochastic model

Gisch, Debora Lidia January 2018
Environmental issues have been at the center of discussions in recent decades. Atmospheric pollution, driven by post-industrial-revolution expansion, has created the need to describe this phenomenon using mathematical models. With this knowledge, solutions can be proposed that mitigate pollution and the collateral damage it causes to the environment.
Pollutant dispersion modeled by analytical solutions of the advection-diffusion equation offers insight into each component of the equation, a characteristic absent from other approaches, such as numerical ones. However, such modelling was unable to describe turbulence-related properties, namely coherent structures, which arise from nonlinear components suppressed in the construction of the model's governing equations. This work studied a way to recover characteristics associated with turbulence through a component fundamental to coherent structures: the phase. The phase is included in the model, which then describes manifestations of turbulence in the dispersion process through small-scale concentration fluctuations in the solution of the sesquilinear model, which is deterministic-stochastic. In the course of this work, a parameter-variation study is carried out to understand the effects of the phase on the model. The model was also applied to the Copenhagen experiment and to two real scenarios, with the intention of understanding its behaviour with respect to micrometeorological variables as well as improving it to simulate the dispersion of pollutants from sources realistically.
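The analytical advection-diffusion solutions that underlie such models can be illustrated in one dimension: an instantaneous release becomes a Gaussian puff advected by the mean wind and spread by the eddy diffusivity. This is a standard textbook solution, not the sesquilinear deterministic-stochastic model of the thesis; the parameter values are assumptions.

```python
import math

def plume_1d(x, t, u=2.0, K=0.5, Q=1.0):
    """Analytical solution of  dc/dt + u*dc/dx = K*d2c/dx2  for an
    instantaneous release of mass Q at x=0, t=0: a Gaussian centred at
    x = u*t with variance 2*K*t."""
    sigma2 = 2.0 * K * t
    return (Q / math.sqrt(2.0 * math.pi * sigma2)
            * math.exp(-(x - u * t) ** 2 / (2.0 * sigma2)))
```

The deterministic-stochastic extension discussed above perturbs such smooth solutions with a phase term, so that small-scale concentration fluctuations appear on top of the mean Gaussian profile.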