  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
521

On a continuous energy Monte Carlo simulator for neutron interactions in reactor core material considering up-scattering effects in the thermal energy region

Barcellos, Luiz Felipe Fracasso Chaves January 2016 (has links)
In this work neutron transport is simulated in reactor core materials. The neutron spectrum is decomposed as a sum of three probability distributions. Two of the distributions preserve their shape with time, but not necessarily their integral. One of the two is due to the prompt fission spectrum, i.e. high neutron energies; the other is a Maxwell-Boltzmann distribution for low (thermal) neutron energies. The third distribution has an a priori unknown and possibly time-varying shape, and is determined from a Monte Carlo simulation with neutron tracking and interactions with continuous energy dependence. This is achieved by parametrizing the material cross sections with continuous functions, including the resolved and unresolved resonance regions. The objective of this work is to implement up-scattering effects through a statistical treatment of the neutron population in the thermal distribution. The simulation program computes only down-scattering, since the calculation of microscopic up-scattering significantly increases computational processing time. To circumvent this problem, one may recognize that up-scattering is dominant towards the lower-energy end of the spectrum, where thermal equilibrium conditions for neutrons immersed in their environment are assumed to hold. The optimization may thus be achieved by maintaining the Maxwell-Boltzmann spectrum, i.e. up-scattering is simulated by a statistical treatment of the neutron population. The simulation uses continuous energy dependence, and as a first case a recurrent regime is assumed. The three calculated distributions are then used in the Monte Carlo code to compute the subsequent Monte Carlo steps.
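The statistical up-scattering treatment described above can be sketched in a few lines. This is a minimal illustration, not the thesis code: it assumes a thermal region characterized by a single temperature kT and simply re-samples thermal neutrons from the equilibrium Maxwell-Boltzmann spectrum, whose energy distribution is Gamma(3/2, kT).

```python
import random

def sample_thermal_energy(kT, rng=random):
    """Draw a neutron energy (eV) from a Maxwell-Boltzmann spectrum.

    The MB energy pdf, p(E) ~ sqrt(E) * exp(-E/kT), is a Gamma(3/2, kT)
    distribution, so it can be sampled directly with gammavariate.
    """
    return rng.gammavariate(1.5, kT)

def thermalize(population, kT, rng=random):
    """Statistical up-scattering treatment: rather than tracking individual
    up-scattering collisions, every neutron that has slowed into the thermal
    region is re-sampled from the equilibrium MB spectrum."""
    return [sample_thermal_energy(kT, rng) for _ in population]

if __name__ == "__main__":
    random.seed(0)
    kT = 0.0253  # eV, thermal energy at about 293 K
    energies = thermalize(range(100_000), kT)
    mean = sum(energies) / len(energies)
    print(f"mean thermal energy: {mean:.4f} eV (theory: {1.5 * kT:.4f} eV)")
```

The sample mean converges to the theoretical Maxwell-Boltzmann mean energy of 3kT/2, which is the consistency check one would apply before using such a re-sampling step.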
522

Analytical study of neutrino oscillation probabilities in matter in three generations

Licciardi, Caio Augusto Pelegrina Del Bianco 09 August 2007 (has links)
In this project we studied the formulas for neutrino oscillation probabilities in matter in two and three generations. We studied neutrino oscillation phenomenology and experiments extensively. We reviewed all the analytic expressions for the exact and approximate solutions to the conversion probabilities known in the literature, and we also developed new ones. We showed that the same potentials which have exact solutions in two generations also have exact solutions in three generations. Using the formalism proposed in this dissertation for solving the evolution equation of neutrinos in matter, we ruled out the existence of other potentials, beyond those discussed here, with exact solutions in terms of functions that are special cases or limits of the generalized hypergeometric function.
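The exactly solvable baseline behind such studies is the standard two-flavor vacuum formula (matter effects replace the mixing angle and splitting by effective values; the dissertation treats those cases analytically). The sketch below uses illustrative atmospheric-sector numbers, which are assumptions and not results from the dissertation:

```python
import math

def p_oscillation(theta, dm2, L, E):
    """Two-flavor vacuum oscillation probability P(nu_a -> nu_b).

    theta : mixing angle (rad); dm2 : mass-squared splitting (eV^2);
    L : baseline (km); E : neutrino energy (GeV).
    The factor 1.267 collects hbar, c and the unit conversions.
    """
    return math.sin(2 * theta) ** 2 * math.sin(1.267 * dm2 * L / E) ** 2

if __name__ == "__main__":
    # Illustrative parameter values (assumed, not from the thesis):
    theta, dm2 = math.radians(45), 2.5e-3
    for L in (0, 295, 500, 810):
        print(f"L = {L:4d} km -> P = {p_oscillation(theta, dm2, L, 1.0):.3f}")
```

At L = 0 the probability vanishes and it is bounded above by sin^2(2*theta), which is what the matter-effect generalizations must reduce to in the vacuum limit.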
525

The Teaching and Learning of Probability, with Special Reference to South Australian Schools from 1959-1994

Truran, John Maxwell January 2001 (has links)
The teaching of probability in schools provides a good opportunity for examining how a new topic is integrated into a school curriculum. Furthermore, because probabilistic thinking is quite different from the deterministic thinking traditionally found in mathematics classrooms, such an examination is particularly able to highlight significant forces operating within educational practice. After six chapters which describe relevant aspects of the philosophical, cultural, and intellectual environment within which probability has been taught, a 'Broad-Spectrum Ecological Model' is developed to examine the forces which operate on a school system. The Model sees school systems and their various participants as operating according to general ecological principles, and interprets actions as responses to situations in ways which minimise energy expenditure and maximise chances of survival. The Model posits three principal forces (Physical, Social, and Intellectual) as providing an adequate structure. The value of the Model as an interpretative framework is then assessed by examining three separate aspects of the teaching of probability. The first is a general survey of the history of the teaching of the topic from 1959 to 1994, paying particular attention to South Australia, but making some comparisons with other countries and other states of Australia. The second examines in detail attempts which have been made throughout the world to assess the understanding of probabilistic ideas. The third addresses the influence on classroom practice of research into the teaching and learning of probabilistic ideas. In all three situations the Model is shown to be a helpful way of interpreting the data, but to need some refinements. These involve the uniting of the Social and Physical forces, the division of the Intellectual force into Mathematics and Mathematics Education forces, and the addition of Pedagogical and Charismatic forces.
A diagrammatic form of the Model is constructed which provides a way of indicating the relative strengths of these forces. The initial form is used throughout the thesis for interpreting the events described. The revised form is then defined and assessed, particularly against alternative explanations of the events described, and also used for drawing some comparisons with medical education. The Model appears to be effective in highlighting uneven forces and in predicting outcomes which are likely to arise from such asymmetries, and this potential predictive power is assessed for one small case study. All Models have limitations, but this one seems to explain far more than the other models used for mathematics curriculum development in Australia which have tended to see our practice as an imitation of that in other countries. / Thesis (Ph.D.)--Graduate School of Education and Department of Pure Mathematics, 2001.
526

Statistical causal analysis for fault localization

Baah, George Kofi 08 August 2012 (has links)
The ubiquitous nature of software demands that software be released without faults. However, software developers inadvertently introduce faults into software during development. To remove the faults in software, one of the tasks developers perform is debugging. However, debugging is a difficult, tedious, and time-consuming process. Several semi-automated techniques have been developed to reduce the burden on the developer during debugging. These include experimental, statistical, and program-structure-based techniques. Most of the debugging techniques address the part of the debugging process that relates to finding the location of the fault, which is referred to as fault localization. The current fault-localization techniques have several limitations, including (1) problems with program semantics, (2) the requirement for automated oracles, which in practice are difficult if not impossible to develop, and (3) the lack of a theoretical basis for addressing the fault-localization problem. The thesis of this dissertation is that statistical causal analysis combined with program analysis is a feasible and effective approach to finding the causes of software failures. The overall goal of this research is to significantly extend the state of the art in fault localization. To that end, a novel probabilistic model that combines program-analysis information with statistical information in a principled manner is developed. The model, known as the probabilistic program dependence graph (PPDG), is applied to the fault-localization problem. The insights gained from applying the PPDG to fault localization fuel the development of a novel theoretical framework for fault localization based on established causal inference methodology. The development of the framework enables current statistical fault-localization metrics to be analyzed from a causal perspective.
The analysis shows that the metrics are related to each other, thereby allowing their unification. Also, analyzing the metrics from a causal perspective reveals that current statistical techniques do not find the causes of program failures; instead, they find the program elements most associated with failures. However, the fault-localization problem is a causal problem, and statistical association does not imply causation. Several empirical studies are conducted on several software subjects, and the results (1) confirm our analytical results and (2) demonstrate the efficacy of our causal technique for fault localization. The results demonstrate that the research in this dissertation significantly improves on the state of the art in fault localization.
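The distinction drawn above, association versus causation, is easiest to see in a concrete association-based metric. The sketch below uses the Ochiai metric, one of the standard statistical fault-localization formulas of the kind such a causal analysis unifies; the toy coverage matrix is invented purely for illustration:

```python
import math

def ochiai(n_cf, n_cs, n_f):
    """Ochiai suspiciousness: strength of association between covering a
    statement and failing.  n_cf: failing tests covering the statement,
    n_cs: passing tests covering it, n_f: total failing tests."""
    denom = math.sqrt(n_f * (n_cf + n_cs))
    return n_cf / denom if denom else 0.0

if __name__ == "__main__":
    # Toy coverage matrix: each row is (statement coverage bits, test passed?)
    coverage = [
        ([1, 1, 0, 1], True),
        ([1, 0, 1, 1], False),
        ([0, 1, 1, 1], False),
        ([1, 1, 1, 0], True),
    ]
    n_f = sum(1 for _, passed in coverage if not passed)
    for stmt in range(4):
        n_cf = sum(row[stmt] for row, passed in coverage if not passed)
        n_cs = sum(row[stmt] for row, passed in coverage if passed)
        print(f"statement {stmt}: suspiciousness = {ochiai(n_cf, n_cs, n_f):.2f}")
```

A statement covered by both failing tests ranks highest here even if it is not the actual fault, which is precisely the association/causation gap the dissertation addresses.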
527

An Ab Initio Fuzzy Dynamical System Theory: Controllability and Observability

Terdpravat, Attapong 21 November 2004 (has links)
A fuzzy set is a generalization of a classical set. A classical set is distinguished from another by a sharp boundary at some threshold value, and classical sets are therefore also known as crisp sets. In fuzzy theory, sharp boundaries and crisp sets are replaced by partial truth and fuzzy sets. The idea of partial truth facilitates the description of information, especially information communicated through natural language, where the transitions between descriptive terms are not abrupt discontinuities. Instead, the transition is a smooth change over a range, corresponding to the degree of fulfillment each intermediate element has according to the operating definition of the concept. The shape of a fuzzy set is defined by its membership function. This, by far, has been the common extent of concern regarding the membership function. Different applications may use membership functions to describe different variables such as speed, position, temperature, dirtiness, traffic conditions, etc. But the underlying application of fuzzy sets remains the same: to describe information whose membership function, created in an initial setting, preserves the same size and shape throughout its entire application. In other words, fuzzy sets are utilized as if they were static entities. Nothing has been said about how an initially defined membership function can develop over time with respect to a system. The current research proposes a new framework that concerns the evolution of membership functions. We introduce the concept of membership function propagation as a dynamic description of uncertainty. Given a dynamical system with a set of uncertain initial states which can be represented by membership functions, membership function propagation describes how these membership functions evolve over time with respect to the system. The evolution produces a set of propagated membership functions that have different sizes and shapes from their predecessors.
They represent the uncertainty associated with the states of the system at a given time. This new description also confers new definitions on two important concepts in control theory, namely controllability and observability. These two concepts are re-introduced in a fuzzy sense, based on the concept of membership function propagation. By assuming convexity of the fuzzy set, criteria for controllability and observability can be derived. These criteria are illustrated by MATLAB and SIMULINK simulations of an inverted pendulum and a two-degree-of-freedom mechanical manipulator.
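A minimal illustration of membership-function propagation, not the framework of the thesis itself: a triangular fuzzy initial state is pushed through a hypothetical one-dimensional linear state map via its alpha-cuts (the extension principle for a monotone map), producing propagated cuts with a different size from their predecessor.

```python
def alpha_cut(tri, alpha):
    """Alpha-cut of a triangular fuzzy number (a, m, b): the interval of
    states whose membership is at least alpha."""
    a, m, b = tri
    return (a + alpha * (m - a), b - alpha * (b - m))

def propagate(tri, f, alphas=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Membership-function propagation through a monotone state map f via
    the extension principle: each alpha-cut interval is pushed through f.
    The propagated cuts describe a fuzzy state of a new size and shape."""
    return {a: tuple(sorted((f(lo), f(hi))))
            for a, (lo, hi) in ((a, alpha_cut(tri, a)) for a in alphas)}

if __name__ == "__main__":
    # Hypothetical 1-D linear dynamics x' = 0.5*x + 1 acting on an uncertain
    # initial state "about 2", modelled as the triangular number (1, 2, 3).
    step = lambda x: 0.5 * x + 1.0
    for a, cut in propagate((1.0, 2.0, 3.0), step).items():
        print(f"alpha = {a:.2f}: {cut}")
```

Here the contraction halves the support of the fuzzy state at every step; iterating the map shows how an initially defined membership function develops over time with respect to the system.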
528

Integer Programming Approaches for Some Non-convex and Stochastic Optimization Problems

Luedtke, James 30 July 2007 (has links)
In this dissertation we study several non-convex and stochastic optimization problems. The common theme is the use of mixed-integer programming (MIP) techniques including valid inequalities and reformulation to solve these problems. We first study a strategic capacity planning model which captures the trade-off between the incentive to delay capacity installation to wait for improved technology and the need for some capacity to be installed to meet current demands. This problem is naturally formulated as a MIP with a bilinear objective. We develop several linear MIP formulations, along with classes of strong valid inequalities. We also present a specialized branch-and-cut algorithm to solve a compact concave formulation. Computational results indicate that these formulations can be used to solve large-scale instances. We next study methods for optimization with joint probabilistic constraints. These problems are challenging because evaluating solution feasibility requires multidimensional integration and the feasible region is not convex. We propose and analyze a Monte Carlo sampling scheme to simplify the probabilistic structure of such problems. Computational tests of the approach indicate that it can yield good feasible solutions and reasonable bounds on their quality. Next, we study a MIP formulation of the non-convex sample approximation problem. We obtain two strengthened formulations. As a byproduct of this analysis, we obtain new results for the previously studied mixing set, subject to an additional knapsack inequality. Computational results indicate that large-scale instances can be solved using the strengthened formulations. Finally, we study optimization problems with stochastic dominance constraints. A stochastic dominance constraint states that a random outcome which depends on the decision variables should stochastically dominate a given random variable. 
We present new formulations for both first and second order stochastic dominance which are significantly more compact than existing formulations. Computational tests illustrate the benefits of the new formulations.
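The Monte Carlo sampling idea for joint probabilistic constraints can be sketched generically: a candidate solution is feasible for the sample approximation if it jointly satisfies at least a 1 - eps fraction of the sampled scenarios. The scenario data below are invented for illustration; this is not one of the dissertation's formulations.

```python
import random

def sample_feasible(x, samples, eps):
    """Sample-approximation check of the joint probabilistic constraint
    P(xi_i <= x_i for all i) >= 1 - eps: count the Monte Carlo scenarios
    that x covers jointly and compare against the 1 - eps threshold."""
    covered = sum(all(xi <= xj for xi, xj in zip(s, x)) for s in samples)
    return covered >= (1 - eps) * len(samples)

if __name__ == "__main__":
    random.seed(1)
    # Hypothetical demand scenarios: 1000 draws of a 2-dimensional xi.
    scenarios = [(random.gauss(10, 2), random.gauss(5, 1)) for _ in range(1000)]
    for x in ((10.0, 5.0), (15.0, 7.5)):
        print(x, "feasible:", sample_feasible(x, scenarios, eps=0.05))
```

Evaluating feasibility this way trades the multidimensional integration for counting, at the cost of the non-convex, combinatorial structure that the MIP formulations in the dissertation are designed to handle.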
529

Single Shot Hit Probability Computation For Air Defense Based On Error Analysis

Yuksel, Inci 01 June 2007 (has links) (PDF)
In this thesis, an error analysis based method is proposed to calculate single shot hit probability (PSSH) values of a fire control system. The proposed method considers that a weapon and a threat are located in three dimensional space. They may or may not have relative motion in three dimensions with respect to each other. The method accounts for the changes in environmental conditions. It is applicable in modeling and simulation as well as in top down design of a fire control system to reduce the design cost. The proposed method is applied to a specific fire control system and it is observed that PSSH values highly depend on the distance between the weapon and the threat, hence they are time varying. Monte Carlo simulation is used to model various defense scenarios in order to evaluate a heuristic developed by Gülez (2007) for weapon-threat assignment and scheduling of weapons' shots. The heuristic uses the proposed method for PSSH and time of flight computation. It is observed that the difference between the results of simulation and heuristic depends on the scenario used.
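The range dependence of PSSH noted above can be reproduced with a generic Monte Carlo sketch. Assume, purely for illustration (this is not the thesis's error model or its parameters), a Gaussian angular dispersion error in each axis, so the linear miss distance scales with range:

```python
import math
import random

def p_ssh(distance, target_radius, angular_sigma_mrad, n=100_000, rng=random):
    """Monte Carlo estimate of single shot hit probability against a
    circular target of radius target_radius (m) at the given distance (m),
    with a Gaussian angular dispersion error (mrad) in each axis.  The
    linear error grows with range, so PSSH falls off with distance."""
    sigma = distance * angular_sigma_mrad * 1e-3  # linear error in metres
    hits = sum(math.hypot(rng.gauss(0, sigma), rng.gauss(0, sigma))
               <= target_radius for _ in range(n))
    return hits / n

if __name__ == "__main__":
    random.seed(0)
    # Hypothetical weapon/target parameters, not taken from the thesis:
    for d in (500, 1000, 2000):
        print(f"range {d:5d} m: PSSH = {p_ssh(d, 2.0, 1.0):.3f}")
```

For this circular-normal error model the estimate can be checked against the closed form P = 1 - exp(-r^2 / (2*sigma^2)) of the Rayleigh distribution, which is a useful sanity check before embedding PSSH values in a larger assignment-and-scheduling simulation.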
530

Completion Of A Levy Market Model And Portfolio Optimization

Turkvatan, Aysun 01 September 2008 (has links) (PDF)
In this study, general geometric Lévy market models are considered. Since these models are, in general, incomplete, that is, all contingent claims cannot be replicated by a self-financing portfolio consisting of investments in a risk-free bond and in the stock, it is suggested that the market should be enlarged by artificial assets based on the power-jump processes of the underlying Lévy process. Then it is shown that the enlarged market is complete, and the explicit hedging portfolios for claims whose payoff function depends on the prices of the stock and the artificial assets at maturity are derived. Furthermore, the portfolio optimization problem is considered in the enlarged market. The problem consists of choosing an optimal portfolio in such a way that the largest expected utility of the terminal wealth is obtained. It is shown that for particular choices of the equivalent martingale measure in the market, the optimal portfolio only consists of bonds and stocks. This corresponds to completing the market with additional assets in such a way that they are superfluous, in the sense that the terminal expected utility is not improved by including these assets in the portfolio.
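The artificial assets mentioned above are built from the power-jump processes of the underlying Lévy process, whose value at time t is the sum of the i-th powers of the jumps up to t. A minimal sketch with invented jump data (for a compound Poisson path the sum runs over finitely many jumps):

```python
import random

def power_jump(jumps, i):
    """Terminal value of the i-th power-jump process: the sum of
    (jump size)**i over the jumps of the path.  The i-th artificial
    asset used to enlarge the market is built from this process."""
    return sum(j ** i for j in jumps)

if __name__ == "__main__":
    random.seed(2)
    # Hypothetical compound-Poisson jump sizes over one path:
    jumps = [random.gauss(0, 0.1) for _ in range(30)]
    for i in (2, 3, 4):
        print(f"X^({i})_T = {power_jump(jumps, i): .6f}")
```

Note that for i = 2 this is the (jump part of the) quadratic variation; the higher-order processes carry the extra jump information that bond and stock alone cannot replicate.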
