About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
381

Desenvolvimento do manequim matematico do homem brasileiro para calculos de dosimetria interna / Development of a mathematical phantom of the Brazilian man for internal dosimetry calculations

GUIMARAES, MARIA I.C.C. 09 October 2014 (has links)
Doctoral thesis (Tese de Doutoramento) / Instituto de Pesquisas Energeticas e Nucleares - IPEN/CNEN-SP
382

Avaliacao dosimetrica de detectores semicondutores para aplicacao na dosimetria e microdosimetria de neutrons em reatores nucleares e instalacoes de radiocirurgia / Dosimetric evaluation of semiconductor detectors for application in neutron dosimetry and microdosimetry in nuclear reactors and radiosurgery facilities

NAHUEL CARDENAS, JOSE P. 09 October 2014 (has links)
This work aims at the dosimetric evaluation of semiconductor components (surface barrier detectors and PIN photodiodes) for use in dose equivalent measurements in low-flux neutron fields (fast and thermal), using a high-flux AmBe source, the neutron radiography facility of the IEA-R1 reactor (thermal/epithermal fluxes), and the fast-neutron flux of the IPEN/MB-01 reactor core (UCRI critical unit). Moderating and converting components (paraffin, boron and polyethylene) were used for the detection of thermal, epithermal and fast neutrons. The fluxes resulting from moderation and conversion were used to irradiate the semiconductor components (SSB surface barrier detectors and photodiodes). A mixed converter consisting of a borated polyethylene foil (Kodak) was also used. Monte Carlo simulation was used to evaluate analytically the optimum paraffin thickness; the result was similar to the experimental one and served to evaluate the neutron flux emerging from the paraffin moderator. Likewise, simulation was used to evaluate the fast-neutron flux reaching the polyethylene converter covering the sensitive face of the semiconductors. The gamma radiation level was evaluated by covering the whole detector with a 1 mm thick cadmium foil. The IPEN/MB-01 reactor was used to evaluate the detector response to high-flux fast neutrons. Overall, the results showed agreement and similarity with work developed by other research groups. An approach for calculating dose equivalent from the spectra obtained in the experiments was also established.
/ Doctoral thesis (Tese de Doutoramento) / Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP
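The abstract describes using Monte Carlo simulation to find the optimum paraffin (moderator) thickness. The following is only a schematic sketch of that idea, not the thesis's code: a 1D toy transport model with an invented mean free path and thermalization threshold, scanning slab thickness for the fraction of neutrons that thermalize inside the moderator.

```python
import random

def thermalized_fraction(thickness_cm, n_neutrons=20000, mfp_cm=1.0,
                         collisions_to_thermalize=8, seed=1):
    """Toy 1D Monte Carlo: fraction of neutrons undergoing enough
    scattering collisions to thermalize before leaving a slab.
    All physics here is schematic; mfp and collision count are invented."""
    rng = random.Random(seed)
    thermalized = 0
    for _ in range(n_neutrons):
        x, collisions = 0.0, 0
        direction = 1.0  # start travelling into the slab
        while 0.0 <= x <= thickness_cm:
            x += direction * rng.expovariate(1.0 / mfp_cm)  # free flight
            if not (0.0 <= x <= thickness_cm):
                break  # escaped through the front or back face
            collisions += 1
            if collisions >= collisions_to_thermalize:
                thermalized += 1
                break
            direction = rng.choice((-1.0, 1.0))  # isotropic (1D) scatter
    return thermalized / n_neutrons

# Scan thickness: the thermalized yield rises with slab thickness and
# eventually saturates, suggesting an optimum moderator thickness region.
curve = {t: thermalized_fraction(t) for t in (1, 3, 5, 8, 12)}
```

In a real study the figure of merit would also weigh leakage of the thermalized flux toward the detector, which this toy ignores.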
383

Modelo para o estabelecimento de valores orientadores para elementos radioativos no solo / A model for establishing guideline values for radioactive elements in soil

PERES, ANA C. 09 October 2014 (has links)
In Brazil, decisions on the cleanup of areas contaminated with radioactive isotopes are currently made case by case, since there is no general guidance or recommendation supporting the actions to be taken in the initial phases of identifying the problem. For conventional chemicals, CETESB (the governmental agency responsible for preventing and controlling environmental pollution in the State of São Paulo) has established quality reference, prevention and intervention values as the first step toward implementing remediation actions based on human health risk assessment. The objective of this study was to develop a methodology for establishing guideline values for radioactive soil contamination, as consistent and compatible as possible with the methodology adopted by CETESB for sites contaminated with conventional chemicals. The following steps were followed in this study: conceptual development of the scenario and the model; coding of the equations in a spreadsheet; selection of appropriate values and statistical distributions for the input data; and derivation of intervention levels for selected radionuclides using the Monte Carlo method. The mathematical model developed was based mainly on equations used by the U.S. Environmental Protection Agency (EPA) and by the National Council on Radiation Protection and Measurements (NCRP). Intervention and prevention values are presented for three exposure scenarios (agricultural, residential and industrial), with adults and 10-year-old children as receptors; the radionuclides considered were 3H, 14C, 32P, 35S, 45Ca, 51Cr, 90Sr, 125I, 131I, 134Cs, 137Cs, 210Pb, 226Ra, 228Ra, 232Th, 238U, 239Pu and 241Am.
Quality reference values were determined for the radionuclides 40K, 137Cs, 210Pb, 226Ra, 228Ra, 228Th, Th-nat and U-nat. The results obtained in this study agree with those reported by the NCRP, allowing for differences in the models adopted and in the input values used. / Doctoral thesis (Tese de Doutoramento) / Instituto de Pesquisas Energéticas e Nucleares - IPEN-CNEN/SP
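The derivation step (Monte Carlo sampling of input distributions to obtain intervention levels) can be sketched in a few lines. All parameter values and distributions below are invented placeholders, not those of the thesis or of the EPA/NCRP; the point is only the mechanics: sample the exposure inputs, solve the dose equation for the soil concentration that meets the dose limit, and take a protective percentile.

```python
import random

def intervention_level(dose_limit_mSv=1.0, n_trials=50000, seed=42):
    """Schematic Monte Carlo derivation of a soil guideline value for a
    single radionuclide via one (invented) ingestion pathway:
        dose = C_soil * soil_intake * exposure_days * dose_coeff
    Each trial solves for C_soil at the dose limit; a conservative
    percentile of the resulting distribution is the guideline value."""
    rng = random.Random(seed)
    levels = []
    for _ in range(n_trials):
        soil_intake = rng.lognormvariate(-9.2, 0.5)   # kg/day of ingested soil
        exposure = rng.uniform(200, 350)              # days/year on site
        dose_coeff = rng.lognormvariate(-13.0, 0.3)   # mSv per Bq ingested
        # soil concentration (Bq/kg) that just meets the dose limit
        levels.append(dose_limit_mSv / (soil_intake * exposure * dose_coeff))
    levels.sort()
    return levels[int(0.05 * n_trials)]  # 5th percentile: protective choice

guideline = intervention_level()
```

A full model would sum several pathways (ingestion, inhalation, external exposure) per scenario and receptor, with one such calculation per radionuclide.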
384

Calculo de Monte Carlo da dose equivalente recebida por um feto humano de fontes gama localizadas no trato-gastrointestinal / Monte Carlo calculation of the dose equivalent received by a human fetus from gamma sources located in the gastrointestinal tract

SEGRETO, VERA S.A. 09 October 2014 (has links)
Master's dissertation (Dissertação de Mestrado) / Instituto de Pesquisas Energeticas e Nucleares - IPEN/CNEN-SP
385

Determinacao da eficiencia do contador de corpo inteiro (CCI) pelo metodo de Monte Carlo, utilizando um micro computador / Determination of the efficiency of a whole-body counter (CCI) by the Monte Carlo method, using a microcomputer

FERNANDES NETO, JOSE M. 09 October 2014 (has links)
Master's dissertation (Dissertação de Mestrado) / Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP
386

The contour tracking of a rugby ball : an application of particle filtering

Janse van Rensburg, Tersia 06 February 2012 (has links)
M.Ing. / Object tracking in image sequences, in its general form, is very challenging. Due to the prohibitive complexity thereof, research has led to the idea of tracking a template exposed to low-dimensional deformation such as translation, rotation and scaling. The inherent non-Gaussianity of the data acquired from general tracking problems renders the trusted Kalman filtering methodology futile. For this reason the idea of particle filtering was developed recently. Particle filters are sequential Monte Carlo methods based on multiple point mass (or "particle") representations of probability densities, which can be applied to any dynamical model and which generalize the traditional Kalman filtering methods. To date, particle filtering has proved to be a successful filtering method in different fields of science such as econometrics, signal processing, fluid mechanics, agriculture and aviation. In this dissertation, we discuss the problem of tracking a rugby ball in an image sequence as the ball is being passed to and fro. First, the problem of non-linear Bayesian tracking is focused upon, followed by a particular instance of particle filtering known as the condensation algorithm. Next, the problem of fitting an elliptical contour to the travelling rugby ball is dealt with in detail, after which the problem of tracking this evolving ellipse (representing the rugby ball's edge) over time along the image sequence by means of the condensation algorithm follows. Experimental results are presented and discussed, and concluding remarks follow at the end.
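The condensation algorithm named above can be sketched minimally. This is a generic particle filter step on a scalar state with invented noise parameters and a toy 1D tracking loop, not the dissertation's ellipse tracker (there the state would be the ellipse parameters: centre, axes, orientation); it shows the resample/predict/measure cycle.

```python
import math
import random

def condensation_step(particles, weights, measurement, rng,
                      process_sigma=0.5, meas_sigma=1.0):
    """One cycle of the condensation algorithm (a particle filter):
    resample by weight, diffuse each particle through the stochastic
    dynamical model, then reweight by the measurement likelihood."""
    # 1. resample proportional to weight
    resampled = rng.choices(particles, weights=weights, k=len(particles))
    # 2. predict: apply stochastic dynamics (here: random-walk drift)
    predicted = [x + rng.gauss(0.0, process_sigma) for x in resampled]
    # 3. measure: weight each particle by its observation likelihood
    new_w = [math.exp(-0.5 * ((measurement - x) / meas_sigma) ** 2)
             for x in predicted]
    total = sum(new_w) or 1.0
    return predicted, [w / total for w in new_w]

rng = random.Random(0)
n = 500
particles = [rng.uniform(-10.0, 10.0) for _ in range(n)]  # diffuse prior
weights = [1.0 / n] * n
for z in [0.0, 0.4, 0.9, 1.3, 1.8]:  # noisy observations of a moving target
    particles, weights = condensation_step(particles, weights, z, rng)
estimate = sum(p * w for p, w in zip(particles, weights))  # posterior mean
```

The non-Gaussian, multi-modal posteriors that defeat the Kalman filter are carried here simply as the weighted particle cloud.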
387

Long memory and the aggregation of AR(1) processes

Mudelsee, Manfred 23 March 2017 (has links)
Granger (1980) found that the aggregation of m short-memory AR(1) processes yields a long-memory process. He assumed m -> ∞, Gaussian shape and beta-distributed AR(1) parameters over (0, 1). To test hypotheses that long memory in climate time series comes from aggregation, the finding of Granger (1980) cannot be directly applied. First, the number of "microclimatic" processes to be aggregated is finite. Second, climatic processes often produce right-skewed data. Third, the AR(1) parameters of the microclimatic processes could be restricted to a narrower interval than (0, 1). We therefore perform Monte Carlo simulations to study aggregation in climate time series under realistic conditions. The long-memory parameter, H, is estimated by fitting an ARFIMA model to various types of aggregations. Our results are as follows. First, for m above a few hundred, H approaches saturation. Second, the distributional shape has little influence, as noted by Granger (1980). Third, the upper limit of the interval for the AR(1) parameter has a strong influence on the saturation value of H, as noted by Granger (1980).
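Granger's aggregation effect is easy to reproduce in a few lines. A hedged sketch (sample sizes and Beta parameters invented; the ARFIMA fit used in the abstract to estimate H is omitted): sum m AR(1) processes with beta-distributed parameters and inspect the slowly decaying autocorrelation of the aggregate.

```python
import random

def aggregate_ar1(m=400, n=2000, beta_a=0.5, beta_b=0.5, seed=7):
    """Aggregate m AR(1) processes with beta-distributed parameters, as in
    Granger (1980): the sum exhibits long memory (slowly decaying
    autocorrelation) even though each component is short-memory."""
    rng = random.Random(seed)
    phis = [rng.betavariate(beta_a, beta_b) for _ in range(m)]
    states = [0.0] * m
    series = []
    for _ in range(n):
        states = [phi * x + rng.gauss(0.0, 1.0)
                  for phi, x in zip(phis, states)]
        series.append(sum(states))
    return series

def autocorr(x, lag):
    """Sample autocorrelation at the given lag."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[i] - mean) * (x[i + lag] - mean)
              for i in range(len(x) - lag))
    return cov / var

y = aggregate_ar1()
# Long memory shows up as autocorrelation decaying more slowly than the
# geometric decay of any single AR(1) component.
acf_1, acf_50 = autocorr(y, 1), autocorr(y, 50)
```

Restricting the Beta draw to a narrower interval than (0, 1), as the abstract's third condition suggests, visibly changes how slowly the aggregate's autocorrelation decays.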
388

Monte Carlo device modelling of electron transport in nanoscale transistors

Aynul, Islam January 2012 (has links)
No description available.
389

Internal balance calibration and uncertainty estimation using Monte Carlo simulation

Bidgood, Peter Mark 18 March 2014 (has links)
D.Ing. (Mechanical Engineering) / The most common data sought during a wind tunnel test program are the forces and moments acting on an airframe (or any other test article). The most common source of this data is the internal strain gauge balance. Balances are six-degree-of-freedom force transducers that are required to be of small size and of high strength and stiffness. They are required to deliver the highest possible levels of accuracy and reliability. There is a focus in both the USA and in Europe on improving the performance of balances through collaborative research. This effort is aimed at materials, design, sensors, electronics, calibration systems and calibration analysis methods. Recent developments in the use of statistical methods, including modern design of experiments, have resulted in improved balance calibration models. Research focus on the calibration of six-component balances has moved to the determination of the uncertainty of measurements obtained in the wind tunnel. The application of conventional statistically based approaches to the determination of the uncertainty of a balance measurement is proving problematic, and to some extent an impasse has been reached. The impasse is caused by the rapid expansion of the problem size when standard uncertainty determination approaches are used in a six-degree-of-freedom system that includes multiple least squares regression and iterative matrix solutions. This thesis describes how the uncertainty of loads reported by a six-component balance can be obtained by applying a direct simulation of the end-to-end data flow of a balance, from calibration through to installation, using a Monte Carlo simulation. It is postulated that knowledge of the error propagated into the test environment through the balance will influence the choice of calibration model, and that an improved model, compared to that determined by statistical methods without this knowledge, will be obtained.
Statistical approaches to the determination of a balance calibration model are driven by obtaining the best curve-fit statistics possible. This is done by adding as many coefficients to the modelling polynomial as can be statistically defended. This thesis shows that the propagated error will significantly influence the choice of polynomial coefficients. To do this, a Performance Weighted Efficiency (PWE) parameter is defined. The PWE is a combination of the curve-fit statistic (the back-calculated error for the chosen polynomial), a value representing the overall prediction interval for the model (CI_rand), and a value representing the overall total propagated uncertainty of loads reported by the installed balance...
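The end-to-end simulation idea can be caricatured with a one-component linear "balance" (all loads and noise levels below are invented, and a real six-component balance involves multi-variable regression and iterative matrix solutions): perturb the calibration data with the instrument's error model, refit, reduce a noisy installed-test reading with the fitted calibration, and repeat; the spread of the reported load is the propagated uncertainty.

```python
import random

def mc_balance_uncertainty(n_runs=2000, true_sens=2.0, noise_mv=0.02, seed=3):
    """End-to-end Monte Carlo for a caricature one-component balance:
    each run re-simulates the calibration (applied loads vs noisy bridge
    output), refits the sensitivity by least squares, and uses that fit
    to reduce one noisy installed-test reading."""
    rng = random.Random(seed)
    cal_loads = [0.0, 10.0, 20.0, 30.0, 40.0, 50.0]  # applied loads (N)
    test_load = 25.0                                  # true installed load
    reported = []
    for _ in range(n_runs):
        # noisy calibration outputs (mV)
        outputs = [true_sens * L + rng.gauss(0.0, noise_mv)
                   for L in cal_loads]
        # least-squares slope through the origin: sum(xy) / sum(xx)
        sens = (sum(L * o for L, o in zip(cal_loads, outputs))
                / sum(L * L for L in cal_loads))
        # one noisy installed reading, reduced through the fitted calibration
        reading = true_sens * test_load + rng.gauss(0.0, noise_mv)
        reported.append(reading / sens)
    mean = sum(reported) / n_runs
    sd = (sum((r - mean) ** 2 for r in reported) / (n_runs - 1)) ** 0.5
    return mean, sd

mean_load, load_sd = mc_balance_uncertainty()
```

The thesis's point survives even this caricature: the spread of `reported` depends on the chosen calibration model, so the propagated uncertainty can inform which polynomial coefficients are worth keeping.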
390

Toestandberaming by sub-waarneembare nie-lineêre prosesse / State estimation in unobservable nonlinear processes

Wiid, Andries Johannes 11 September 2014 (has links)
M.Ing. (Electrical And Electronic Engineering) / State estimation comprises the estimation of the position and velocity (state) of a target based on the processing of noise-corrupted measurements of its motion. This study considers a class of measurement processes where the states are unobservable and cannot be estimated without placing additional constraints on the system. The bearings-only target motion problem is taken as representative of this type of problem. The results of this study indicate that practical state estimation for systems with unobservable measurement processes is possible with the application of estimation theories and available estimation techniques. Due to the inherent nonlinear geometrical characteristics, the problem is classified as an unobservable nonlinear estimation problem. A review of state estimation and estimation techniques is presented. The fundamental bearings-only target motion concepts are discussed. A representative selection of bearings-only estimators from the published literature is evaluated. The evaluation consists of a theoretical analysis and a Monte Carlo simulation of the estimators. Two realistic scenarios are considered. A classification framework is presented which may be useful to practising engineers in selecting suitable estimators. Batch estimators are shown to be more stable and more likely to be used in bearings-only applications than recursive estimators. The importance of isolating the unobservable states from the observable states by using a modified polar coordinate system is stressed. It is also shown that effective data processing can be achieved by using all available measurements and a maximum likelihood estimator.
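A minimal sketch of the Monte Carlo evaluation approach for a bearings-only problem (a stationary target and a batch least-squares fix; the scenario geometry and noise level are invented, and a real evaluation would cover moving targets and the modified polar coordinates discussed above): the own-ship must manoeuvre, otherwise the geometry stays unobservable and the fix degenerates.

```python
import math
import random

def bearings_ls_fix(own_positions, bearings):
    """Batch least-squares position fix for a stationary target from
    bearings-only measurements: each bearing b from observer (ox, oy)
    defines a line sin(b)*x - cos(b)*y = sin(b)*ox - cos(b)*oy; the
    stacked lines are solved via the 2x2 normal equations."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (ox, oy), b in zip(own_positions, bearings):
        s, c = math.sin(b), math.cos(b)
        r = s * ox - c * oy          # line offset
        a11 += s * s; a12 += -s * c; a22 += c * c
        b1 += s * r;  b2 += -c * r
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

def monte_carlo_rms(n_trials=500, noise_rad=0.01, seed=9):
    """Monte Carlo evaluation of the estimator: repeat the scenario with
    fresh measurement noise and report the RMS position error."""
    rng = random.Random(seed)
    target = (5.0, 8.0)
    own = [(0.0, 0.0), (1.0, 0.0), (2.0, 1.0), (3.0, 3.0)]  # manoeuvring track
    err2 = 0.0
    for _ in range(n_trials):
        bearings = [math.atan2(target[1] - oy, target[0] - ox)
                    + rng.gauss(0.0, noise_rad) for ox, oy in own]
        x, y = bearings_ls_fix(own, bearings)
        err2 += (x - target[0]) ** 2 + (y - target[1]) ** 2
    return (err2 / n_trials) ** 0.5

rms_error = monte_carlo_rms()
```

Running the same loop with a straight-line (non-manoeuvring) own-ship track makes the normal-equation matrix nearly singular, which is the unobservability the dissertation isolates with modified polar coordinates.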
