101

Desenvolvimento de método para caracterização de embalados de rejeitos radioativos / Development of a method for the radioisotopic characterization of waste packages

Daiane Cristini Barbosa de Souza 16 September 2013 (has links)
The characterization of the radioactive wastes generated in the operation of the IEA-R1 nuclear research reactor is currently ongoing. The IEA-R1 is an open-pool reactor, moderated and cooled by light water, that uses two beds of ion-exchange resins and activated charcoal to remove impurities from the cooling water. These filter media are replaced when they can no longer keep water quality within the required limits, and are then treated as radioactive waste. They contain the actinides and the fission and activation products that escape from the reactor core into the pool water. They give off high dose rates, owing to their content of gamma emitters of short and intermediate half-lives, and are a long-term radiation-safety concern because of their long-lived alpha emitters, transuranic elements, and pure beta emitters. Characterizing these wastes therefore requires radiochemical analysis methods involving sampling and sample processing, which result in high doses to the workers. The objective of this work was to correlate the activity concentrations obtained in previous radiochemical analyses of waste samples with dose rates measured at various distances from the package surfaces, using a model of the dose rates for each package, so as to reduce personnel exposure by avoiding further sampling and sample-analysis operations. The mass, volume, and geometry of the solid and liquid phases of each drum were also determined, since the water content varies widely among drums and these quantities are essential for estimating the total activity in each drum. Measured and calculated dose rates were compared to confirm the activity estimates.
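As background for the radiometric modeling described above, the activity of a gamma emitter and the dose rate it produces are linked, in the simplest case, by the point-source relation D = Γ·A/r². The sketch below is a minimal illustration of that relation only; it is not code from the thesis, which models full drum geometries with solid and liquid phases, and the nuclide, Γ value, and function names are assumptions for the example.

```python
# Minimal point-source sketch of the dose-rate/activity relation D = Gamma * A / r^2.
# NOT the thesis's method: real waste drums are extended, self-shielded sources, and
# the nuclide data below (Cs-137, Gamma ~ 0.0927 uSv.m^2/(MBq.h)) is illustrative only.

def dose_rate(activity_Bq: float, gamma_const: float, distance_m: float) -> float:
    """Dose rate in uSv/h at distance_m from a point source of the given activity."""
    return gamma_const * (activity_Bq / 1e6) / distance_m**2

def activity_from_dose_rate(dose_uSv_h: float, gamma_const: float, distance_m: float) -> float:
    """Invert the point-source model: estimate activity (Bq) from a measured dose rate."""
    return dose_uSv_h * distance_m**2 / gamma_const * 1e6

# Example: 50 uSv/h measured 1 m from a (hypothetical) Cs-137 point source
print(f"{activity_from_dose_rate(50.0, 0.0927, 1.0):.3e} Bq")
```

In the work itself the comparison also runs the other way: activities from radiochemical analysis feed a dose-rate calculation for the actual package geometry, and the calculated rates are checked against measurements.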
102

Budget d’erreur en optique adaptative : Simulation numérique haute performance et modélisation dans la perspective des ELT / Adaptive optics error breakdown: high performance numerical simulation and modeling for future ELT

Moura Ferreira, Florian 11 October 2018 (has links)
In a few years, a new class of giant telescopes will appear, with diameters larger than 20 m, up to 39 m for the European representative, the Extremely Large Telescope (ELT). However, images obtained from ground-based observations are severely degraded by the Earth's atmosphere, reducing the resolution of these giant telescopes to that of an amateur telescope a few tens of centimeters in diameter. Adaptive optics (AO) therefore becomes essential: it corrects in real time the disturbances induced by atmospheric turbulence, recovering the theoretical resolution of the telescope. Nevertheless, AO systems are not perfect: a residual wavefront error remains and degrades the quality of the images obtained. Image quality is measured by the point spread function (PSF) of the instrument, and the PSF of an AO system depends in turn on the residual wavefront error. Identifying and understanding the sources of this error is therefore essential.
For these giant telescopes, the dimensioning of the required AO systems becomes a technological and technical challenge. One aspect to consider is the numerical complexity of these systems, which impacts the simulation tools needed for AO design. High-performance computing techniques relying on massive parallelization become necessary. General-purpose computing on graphics processing units (GPGPU) uses the GPU for this purpose: a GPU offers several thousand usable compute cores, against a few tens for a classical CPU. In this context, this PhD thesis is composed of three parts. The first presents the development of COMPASS, a GPU-based, high-performance, end-to-end simulation tool dedicated to AO, suitable for the ELT scale; taking full advantage of GPU computing power, COMPASS can simulate an ELT-scale AO system in a few minutes. The second part reports the development of ROKET, a complete error-breakdown estimator for an AO system, integrated into COMPASS, which allows the various error sources and their possible correlations to be studied statistically. Finally, analytical models of the different error sources are derived, leading to a new algorithm for estimating the PSF. Possible on-sky applications of this algorithm are also discussed.
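The PSF/residual-error dependence mentioned above has a compact textbook form: the instantaneous PSF is the squared modulus of the Fourier transform of the pupil function multiplied by exp(iφ), where φ is the residual phase. The sketch below illustrates only this standard relation; it is not COMPASS code, and the grid size, padding, and toy (non-Kolmogorov) phase screen are assumptions.

```python
# Textbook relation only (not COMPASS): PSF = |FFT(P * exp(i*phi))|^2,
# where P is the pupil transmission and phi the residual wavefront phase.
import numpy as np

n, pad = 256, 4                            # pupil grid and zero-padding factor (assumed)
y, x = np.mgrid[-n//2:n//2, -n//2:n//2]
pupil = (x**2 + y**2 <= (n//2)**2).astype(float)   # circular aperture

rng = np.random.default_rng(0)
phi = 0.3 * rng.standard_normal((n, n))    # toy residual phase in radians (not Kolmogorov)

field = np.zeros((pad * n, pad * n), dtype=complex)
field[:n, :n] = pupil * np.exp(1j * phi)   # complex amplitude in the pupil plane
psf = np.abs(np.fft.fft2(field))**2
psf /= psf.sum()                           # normalize total energy

# Diffraction-limited reference and Strehl ratio (peak-intensity ratio)
ref = np.zeros_like(field); ref[:n, :n] = pupil
psf0 = np.abs(np.fft.fft2(ref))**2; psf0 /= psf0.sum()
print("Strehl ~", psf.max() / psf0.max())  # ~exp(-sigma^2) ~ 0.91 for sigma = 0.3 rad
```

An error-breakdown tool like ROKET decomposes the variance of φ into its contributors (fitting, temporal, aliasing, noise, and so on), which is what makes an analytical PSF estimate from the breakdown possible.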
103

Přednádražní prostor a dopravní terminál města Havířova / City of Havířov Traffic Terminal and Area in Front of the Railway Station

Řezníček, Josef January 2013 (has links)
The main task of this thesis was to address the specifics of the location so that the newly proposed structure has the potential for sustainable development of all its features. In the future, the space of the transport terminal and station should not be an end in itself; rather, it should become a point of interaction that raises the value and attractiveness of the surrounding land. Ultimately, this process should lead to a consolidation of the city and enhance its attractiveness, both in the external and the internal view of the city itself.
104

Some Aspects of Costing and Contouring Programs for Point-To-Point Numerically Controlled Machine Tools

Husemeyer, Norman C. 09 1900 (has links)
This thesis is an investigation of some of the aspects of costing and machining that are applicable to numerically controlled (N/C) machine tools, with particular reference to the facilities at McMaster University, and is divided into two sections.

Section A is a brief discussion of the suitability of N/C for simulation methods and a review of the principles of metal cutting and the problems involved in estimating costs. A method is devised to simulate the machining of "typical" parts generated by a random strategy. The results of the simulation were used to find a relationship between the geometric parameters of each part and the time required for all the machining operations to make that part; this relationship was called the "complexity factor" of the part. Suggestions for possible future extensions to the work were made.

Section B is a feasibility study of extending the use of a Moog point-to-point N/C machine to contouring, using the computer facilities available at McMaster University. It was proposed to produce a numerical-control tape to machine a general oval, based on a method of approximate linear interpolation, using an on-line time-sharing computer terminal and a PDP minicomputer. The contouring method was tested by machining a circular groove (an oval with equal major and minor axes) and measuring the accuracy. The possibility of extending the work to other contours and to three-dimensional solids is discussed. / Thesis / Master of Engineering (MEngr)
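The "approximate linear interpolation" idea in Section B can be made concrete: a smooth contour is replaced by straight point-to-point chords short enough that the sagitta (the maximum chord-to-curve deviation) stays below a tolerance. The sketch below is a generic reconstruction of that idea for an oval, not the thesis's actual tape-generation program; the function name, tolerance, and G01 output format are assumptions.

```python
# Generic sketch of contouring by approximate linear interpolation (not the thesis
# program): approximate the oval x = a*cos(t), y = b*sin(t) by straight chords
# whose sagitta stays within tol.
import math

def oval_chords(a: float, b: float, tol: float):
    """Yield (x, y) points whose connecting chords deviate at most ~tol from the oval."""
    r = max(a, b)                                # worst-case bound for a uniform parameter step
    dt = 2.0 * math.acos(1.0 - tol / r)          # circular-arc sagitta: s = r*(1 - cos(dt/2))
    steps = max(4, math.ceil(2.0 * math.pi / dt))
    for i in range(steps + 1):
        t = 2.0 * math.pi * i / steps
        yield a * math.cos(t), b * math.sin(t)

# Example: a 40 mm x 25 mm oval held to ~0.02 mm chord error
for x, y in oval_chords(20.0, 12.5, 0.02):
    print(f"G01 X{x:8.3f} Y{y:8.3f}")            # one straight point-to-point move per chord
```

A circular groove, as used in the thesis's accuracy test, is just the special case a = b.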
105

THE APPLICATION OF A POINT-TO-POINT NUMERICAL CONTROL MACHINE FOR CONTINUOUS CONTOUR CUTTING

Wong, Moses 27 October 2016 (has links)
This thesis is a feasibility study of the application of a point-to-point numerical control (N/C) machine for contour-cutting work. It comprises three basic parts, divided into ten chapters.

The first part deals with the primary objectives of this project and describes the N/C machine's hardware and software.

The second part introduces the use of computer-aided programming and presents an original translation program written for this particular work.

The last part gives several examples, describes the outcomes of the study, and concludes with some objective discussion. / Thesis / Master of Engineering (MEngr)
106

Simulink™ modules that emulate digital controllers realized with fixed-point or floating-point arithmetic

Robe, Edward D. January 1994 (has links)
No description available.
107

Infrastructure design and cost allocation in hub and spoke and point-to-point networks

Kim, Changjoo 29 September 2004 (has links)
No description available.
108

Point-of-care echocardiography in simulation-based education and assessment

Amini, Richard, Stolz, Lori A, Javedani, Parisa P, Gaskin, Kevin, Baker, Nicola, Ng, Vivienne, Adhikari, Srikar 31 May 2016 (has links)
UA Open Access Publishing Fund / Background: Emergency medicine milestones released by the Accreditation Council for Graduate Medical Education require residents to demonstrate competency in bedside ultrasound (US). The acquisition of these skills necessitates a combination of exposure to clinical pathology, hands-on US training, and feedback. Objectives: We describe a novel simulation-based educational and assessment tool designed to evaluate emergency medicine residents’ competency in point-of-care echocardiography for evaluation of a hypotensive patient with chest pain using bedside US. Methods: This was a cross-sectional study conducted at an academic medical center. A simulation-based module was developed to teach and assess the use of point-of-care echocardiography in the evaluation of the hypotensive patient. The focus of this module was sonographic imaging of cardiac pathology, and this focus was incorporated in all components of the session: asynchronous learning, didactic lecture, case-based learning, and hands-on stations. Results: A total of 52 residents with varying US experience participated in this study. Questions focused on knowledge assessment demonstrated improvement across the postgraduate year (PGY) of training. Objective standardized clinical examination evaluation demonstrated improvement between PGY I and PGY III; however, it was noted that there was a small dip in hands-on scanning skills during the PGY II. Clinical diagnosis and management skills also demonstrated incremental improvement across the PGY of training. Conclusion: The 1-day, simulation-based US workshop was an effective educational and assessment tool at our institution.
110

Near Point of Convergence: A Comparison of Four Different Target Types

Berglund Pilgrim, Caroline January 2010 (has links)
Purpose: The purpose of this study was to determine whether there were any differences between four different target types when measuring the near point of convergence (NPC) in adults. Methods and Material: The near point of convergence was measured in 35 subjects with a visual acuity of at least 1.0 (6/6) in each eye and without any strabismus. The targets used were the tip of a pen, an accommodative target, the RAF line target, and a penlight viewed through red-green filters. Both break and recovery points were assessed for each technique, and each target was used twice in consecutive order. The line target from the RAF ruler was copied onto a small plastic ruler so that the same ruler could be used for all measurements. All subjects wore their best correction in a trial frame after a complete refraction. Measurements were taken to the nearest 0.25 cm. Results: No difference was found between NPC break values for the different target types in the control group. The NPC break/recovery values were 5.0/7.4 cm in the control group and 10.8/18.2 cm in the anomalous group. The accommodative target gave more remote values (11.5 cm) than expected in comparison with the other targets in the anomalous group. Conclusion: In patients with normal NPC, measurements can be taken with the line target or the accommodative target. Patients with receded NPC values should be evaluated with a penlight and red-green glasses, or at least twice with the tip of a pen.
