  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
371

Beam asymmetry measurement from pion photoproduction on the neutron

Sokhan, Daria January 2010 (has links)
The resonance spectrum of the nucleon gives direct information on the dynamics and interactions of its constituents. This offers an important challenge to theoretical models of nucleon structure, including the emerging Lattice QCD predictions, conformal field theories and more phenomenological, QCD-based approaches. Although the various models predict different features for the excitation spectra, the experimental information is currently of too poor a quality to differentiate between them. Pion photoproduction from the nucleon is a powerful probe of the spectrum, as most resonances are expected to couple to the pion decay channel. However, cross-sections alone are not sensitive enough to allow identification of the underlying excitation spectrum, as the resonances have energy widths larger than their separations. A major world effort is underway to additionally measure polarisation observables in the production process. For a model-independent analysis, a “complete” set of single- and double-polarisation observables needs to be measured in experiments involving polarised beams, targets and a means of determining recoil nucleon polarisation. In particular, the beam asymmetry is a critical observable for constraining the partial wave analyses (PWA) used to extract the nucleon excitation spectrum from the data. Almost all of the available world data on the beam asymmetry have been taken on the proton; the neutron dataset is sparse, containing only three experiments at fixed angles and in a limited photon energy range. The lack of extensive data on the neutron is a major deficiency, as different resonances have very different electromagnetic couplings to the proton and neutron. As a result, the data from the two targets will have very different relative contributions from, and sensitivities to, the spectrum of nucleon resonances. Moreover, neutron data are essential for the separation of the isoscalar and isovector components of the reaction amplitudes.
This thesis presents a very high statistics measurement of the photon beam asymmetry on the neutron with close-to-complete angular coverage and a wide range of invariant mass (1610–2320 MeV) extending over the third resonance region, where the excitation spectrum is particularly ill defined. The experiment was conducted at the Thomas Jefferson National Accelerator Facility (JLab) using a tagged, linearly polarised photon beam, a liquid deuterium target and the CEBAF Large Acceptance Spectrometer (CLAS). The quality and quantity of the data have allowed an invariant-mass resolution of 10 MeV and an angular resolution of 0.1 in the cosine of the centre-of-mass pion production angle, θ. Good agreement is evident in the regions where there is kinematic overlap with the sparse previous data. Comparison of the new data is made with the two main partial wave analyses, SAID and MAID. Significant discrepancy is observed at backward θ with SAID (across most of the energy range) and MAID (up to ∼1750 MeV), and also below ∼35° in θ with both analyses. This extensive new dataset will help significantly to constrain partial wave analyses and will be a crucial part of the current world effort to use meson photoproduction to tackle long-standing uncertainties in the fundamental excitation spectrum of the nucleon. As a first step towards this, a refit of the SAID partial wave analysis incorporating the new data was carried out and shows very significant changes in the properties of the magnetic P11, P13, D13, D35, F15, G17 and G19 partial waves.
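As a schematic illustration of how a beam asymmetry relates to measured yields, the simplest two-orientation extraction can be sketched as below. This is a toy calculation with hypothetical counts and a hypothetical function name, not the event-by-event analysis performed in the thesis:

```python
def beam_asymmetry(n_perp, n_para, pol):
    """Estimate the beam asymmetry Sigma from pion yields collected in
    the reaction plane with the photon polarisation plane set
    perpendicular (n_perp) and parallel (n_para) to it, for a beam of
    linear polarisation degree pol.  The azimuthal distribution
    N(phi) ~ 1 + pol * Sigma * cos(2*phi) implies
        (n_perp - n_para) / (n_perp + n_para) = pol * Sigma.
    """
    return (n_perp - n_para) / (pol * (n_perp + n_para))

# Hypothetical counts: 1200 vs 800 events with an 80% polarised beam.
print(beam_asymmetry(1200, 800, 0.8))  # 0.25
```

In practice the asymmetry is fitted over the full azimuthal distribution rather than taken from two points, but the yield ratio above captures the observable being measured.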
372

Elektronenstrahlschmelzen – ein pulverbettbasiertes additives Fertigungsverfahren / Electron beam melting – a powder-bed-based additive manufacturing process

Klöden, Burghardt, Kirchner, Alexander, Weißgärber, Thomas, Kieback, Bernd, Schöne, Christine, Stelzer, Ralph, Süß, Michael 10 December 2016 (has links) (PDF)
From the introduction: "Selective electron beam melting (EBM®) is a powder-bed-based additive manufacturing process with which metallic components can be built up layer by layer. The schematic layout of such a machine is shown in Figure 4. The beam is generated in region 1 (the cathode is made either of tungsten or, in the newest systems, of single-crystal LaB6). Beam deflection by an electromagnetic lens system takes place in region 2. Region 3 is the build chamber proper, which houses, among other things, the powder hoppers, the raking system and the components of the build volume (cage with heat shield, build platform with start plate). ..."
373

Lateral torsional buckling of anisotropic laminated composite beams subjected to various loading and boundary conditions

Ahmadi, Habiburrahman January 1900 (has links)
Doctor of Philosophy / Department of Civil Engineering / Hayder A. Rasheed / Thin-walled structures are major components in many engineering applications. When a thin-walled slender beam is subjected to lateral loads causing moments, it may buckle by combined lateral bending and twisting of the cross-section, a failure mode called lateral-torsional buckling. A generalized analytical approach for lateral-torsional buckling of anisotropic laminated, thin-walled, rectangular cross-section composite beams under various loading conditions (namely, pure bending and concentrated load) and boundary conditions (namely, simply supported and cantilever) was developed using the classical laminated plate theory (CLPT), with all its attendant assumptions, as a basis for the constitutive equations. Buckling of this type of member has not been addressed in the literature. Closed-form buckling expressions were derived in terms of the lateral, torsional and coupling stiffness coefficients of the overall composite. These coefficients were obtained through dimensional reduction by static condensation of the 6×6 constitutive matrix, mapped into an effective 2×2 coupled weak-axis bending-twisting relationship. The stability of the beam under different geometric and material parameters, such as length/height ratio, ply thickness and ply orientation, was investigated. The analytical formulas were verified against finite element buckling solutions using ABAQUS for different lamination orientations, showing excellent accuracy.
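The static condensation step described above (reducing the full constitutive matrix to an effective coupled bending-twisting relationship) can be sketched in miniature. This is a generic Schur-complement reduction on a toy matrix with hypothetical numbers, not the actual 6×6 laminate stiffness from the thesis:

```python
def condense_one(K, d):
    """Statically condense out degree of freedom d from a symmetric
    matrix K (list of lists), returning the Schur complement:
        K_eff[i][j] = K[i][j] - K[i][d] * K[d][j] / K[d][d]
    Applying this repeatedly reduces, e.g., a 6x6 constitutive matrix
    to an effective 2x2 coupled relationship as described above."""
    keep = [i for i in range(len(K)) if i != d]
    return [[K[i][j] - K[i][d] * K[d][j] / K[d][d] for j in keep]
            for i in keep]

# Toy 3x3 example: eliminate DOF 2, leaving a condensed 2x2 relationship.
K = [[4.0, 1.0, 2.0],
     [1.0, 3.0, 1.0],
     [2.0, 1.0, 2.0]]
print(condense_one(K, 2))  # [[2.0, 0.0], [0.0, 2.5]]
```

Note how the condensed entries differ from the original 2×2 leading block: the eliminated degree of freedom leaves its imprint as modified (here, decoupled) stiffness coefficients.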
374

Método beam search aplicado a problemas de programação da produção / Beam search method for scheduling problems

Jesus Filho, José Eurípedes Ferreira de 05 December 2018 (has links)
Nesta tese, dois diferentes problemas de programação da produção são abordados, o Flexible Job Shop Scheduling Problem com flexibilidade de sequenciamento e o Flowshop Scheduling Problem com tempos de espera e permutação de sequência. Para ambos, inicialmente um algoritmo list scheduling (LS) que explora características do problema é desenvolvido e então estendido para um método do tipo Beam Search (BS) que utiliza o LS em seus principais elementos: (1) expansão dos níveis, (2) avaliação local dos candidatos e (3) avaliação global dos candidatos. Todos os métodos propostos são determinísticos e seus pseudocódigos são cuidadosamente descritos para garantir a replicabilidade dos resultados reportados. O desempenho dos métodos propostos é avaliado utilizando instâncias e outros métodos heurísticos da literatura. Os resultados computacionais obtidos mostram a eficiência das heurísticas propostas, que superaram os métodos da literatura utilizando pouco tempo computacional. / In this thesis, two different scheduling problems are addressed: the Flexible Job Shop Scheduling Problem with sequencing flexibility and the Flowshop Scheduling Problem with waiting times and sequence permutation. For both problems, a list scheduling (LS) algorithm that exploits features of the problem is first developed and then extended to a Beam Search (BS) method that uses the LS in its main elements: (1) level expansion, (2) local evaluation of candidates and (3) global evaluation of candidates. All the proposed methods are deterministic and their pseudocodes are carefully described to ensure the replicability of the reported results. The performance of the proposed methods was evaluated using instances and other heuristic methods from the literature. The computational results show the efficiency of the proposed heuristics, which outperformed the literature methods while using little computational time.
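The level-by-level structure described above (expand every beam node, evaluate the candidates, keep only the best few) can be sketched on a toy two-machine flowshop. This is an illustrative simplification under assumed data, not the algorithm or the problem variants studied in the thesis:

```python
def flowshop_makespan(seq, p1, p2):
    """Completion time of the last job on machine 2 when the jobs in
    seq are processed in order through a two-machine flowshop."""
    c1 = c2 = 0
    for j in seq:
        c1 += p1[j]               # machine 1 runs jobs back to back
        c2 = max(c1, c2) + p2[j]  # machine 2 waits for machine 1
    return c2

def beam_search(p1, p2, width):
    """Level-by-level beam search: each level extends every partial
    sequence in the beam by one unscheduled job (level expansion),
    scores the candidates by the partial makespan (evaluation), and
    keeps only the `width` best."""
    n = len(p1)
    beam = [()]
    for _ in range(n):
        candidates = [s + (j,) for s in beam for j in range(n) if j not in s]
        candidates.sort(key=lambda s: flowshop_makespan(s, p1, p2))
        beam = candidates[:width]
    best = beam[0]
    return best, flowshop_makespan(best, p1, p2)

# Tiny instance: three jobs, processing times on machines 1 and 2.
p1, p2 = [3, 1, 2], [2, 4, 1]
print(beam_search(p1, p2, width=3))  # ((1, 2, 0), 8)
```

With width equal to the number of jobs the search still prunes (it keeps only `width` of the up-to-`width*n` candidates per level), so it remains a heuristic: a narrower beam is faster but may miss the best sequence.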
375

Dosimetria em tomografia computadorizada de feixe cônico odontológica / Dental Cone Beam Computed Tomography Dosimetry

Mauro, Rodrigo Antonio Pereira 13 June 2017 (has links)
Os objetivos deste trabalho foram caracterizar os níveis de referência de radiodiagnóstico para a tomografia computadorizada de feixe cônico odontológica e as características de desempenho dos equipamentos, como quilovoltagem de pico, rendimento, camada semirredutora, etc., com o intuito de conhecer os níveis dosimétricos a que os pacientes estão expostos, permitindo assim identificar protocolos de aquisição de imagem mais adequados, levando-se em consideração os princípios de radioproteção, e também testar a capacidade de tais equipamentos em alcançar uma imagem de qualidade. A Cone Beam Computed Tomography tem se tornado ferramenta extremamente útil em procedimentos radiológicos na área odontológica, pois a riqueza de informações que a imagem 3D traz para o planejamento cirúrgico, ou para qualquer procedimento, minimiza as possibilidades de erros e possibilita diagnósticos mais confiáveis e claros, tendo influência direta no resultado final esperado pelo paciente. Por se tratar de uma técnica de imagem que utiliza radiação ionizante, deve-se ter uma atenção criteriosa voltada para os níveis de radiação, além de implementar uma rotina de controle de qualidade. O parâmetro dosimétrico mais utilizado em tomografia computadorizada é o Computed Tomography Dose Index, porém, quando aplicado à tomografia odontológica, a geometria cônica do feixe e a extensão do campo de visão tornam essa grandeza inviável e enganosa; assim, faz-se necessária a padronização de uma grandeza dosimétrica mais otimizada, para evitar a subestimação dos níveis de dose em feixes de ampla abrangência. O PKA tem sido utilizado como uma possível grandeza dosimétrica em tomografia odontológica, uma vez que em sua metodologia de medida todo o feixe é englobado pelo medidor, não depende da distância fonte-detector, além de ser sensível aos parâmetros de exposição.
Diante disso, propõe-se que o PKA seja utilizado para o estabelecimento dos níveis de dose de referência em diagnóstico odontológico. Os valores de PKA obtidos para este estudo estão em uma faixa entre 34,6 mGy.cm^2 e 2901,6 mGy.cm^2, com valor médio de 980,7 mGy.cm^2. Os valores encontrados para os níveis de referência de radiodiagnóstico calculados a partir do 3º quartil estão divididos em três classes referentes ao tamanho do campo de visão, onde para campos pequenos, médios e grandes os valores são 1241 mGy.cm^2, 1521 mGy.cm^2 e 1408 mGy.cm^2, respectivamente, e 1446 mGy.cm^2 é o valor global independente do campo de visão. Os testes de controle de qualidade foram todos positivos, com uma atenção para o i-CAT FLX, que excedeu levemente o limite aceitável para a exatidão do kVp. Uma comparação entre CTDI100 e CTDI300 reportou que o CTDI300 é em média 49% maior em relação ao CTDI100. Os níveis de referência de radiodiagnóstico são representativos dos níveis de dose otimizados, e servem como base para adequação e otimização dos parâmetros de exposição do equipamento. Os testes de controle de qualidade alertam para possíveis irregularidades no funcionamento do tomógrafo, e devem complementar obrigatoriamente a rotina dos procedimentos clínicos. / The objectives of this work were to characterize diagnostic reference levels for dental cone beam computed tomography and the performance characteristics of the equipment, such as peak kilovoltage, output, half-value layer, etc., in order to determine the dose levels to which patients are exposed, thereby allowing the identification of more suitable image acquisition protocols in light of radiation protection principles, and also to test the capacity of such equipment to achieve a quality image. 
Cone beam computed tomography has become an extremely useful tool in dental radiological procedures, since the wealth of information that the 3D image brings to surgical planning, or to any other procedure, minimizes the possibility of errors and allows clearer and more reliable diagnoses, with a direct influence on the final result expected by the patient. Because it is an imaging technique that uses ionizing radiation, careful attention must be paid to radiation levels, and a quality control routine must be implemented. The dosimetric parameter most commonly used in computed tomography is the Computed Tomography Dose Index; however, when applied to dental tomography, the conical geometry of the beam and the extent of the field of view make this quantity unsuitable and misleading, so the standardization of a better-optimized dosimetric quantity is needed to avoid underestimating dose levels in wide beams. The PKA has been used as a possible dosimetric quantity in dental tomography, since in its measurement methodology the whole beam is encompassed by the meter, it does not depend on the source-detector distance, and it is sensitive to the exposure parameters. It is therefore proposed that the PKA be used to establish reference dose levels in dental diagnosis. The PKA values obtained for this study ranged from 34.6 mGy.cm^2 to 2901.6 mGy.cm^2, with a mean value of 980.7 mGy.cm^2. The diagnostic reference levels calculated from the 3rd quartile are divided into three classes according to the size of the field of view: for small, medium and large fields the values are 1241 mGy.cm^2, 1521 mGy.cm^2 and 1408 mGy.cm^2, respectively, and 1446 mGy.cm^2 is the overall value independent of the field of view. The quality control tests were all positive, with attention drawn to the i-CAT FLX, which slightly exceeded the acceptable limit for kVp accuracy. A comparison between CTDI100 and CTDI300 showed that the CTDI300 is on average 49% higher than the CTDI100. 
Diagnostic reference levels are representative of optimized dose levels and serve as a basis for adjusting and optimizing the exposure parameters of the equipment. The quality control tests warn of possible irregularities in the operation of the tomograph and must complement the routine of clinical procedures.
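The 3rd-quartile rule used above to set diagnostic reference levels can be sketched as follows. The sample values are hypothetical, purely for illustration, and not the survey data measured in this work:

```python
import statistics

def reference_level(pka_values):
    """Diagnostic reference level taken, as described above, as the
    3rd quartile (75th percentile) of a surveyed P_KA distribution,
    in mGy.cm^2."""
    return statistics.quantiles(pka_values, n=4)[2]

# Hypothetical survey of eight P_KA readings (mGy.cm^2).
sample = [120.0, 340.0, 560.0, 780.0, 990.0, 1210.0, 1430.0, 1650.0]
print(reference_level(sample))  # 1375.0
```

`statistics.quantiles` interpolates with the exclusive method by default; for a large survey the choice of quantile method makes little practical difference to the resulting reference level.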
376

Método beam search aplicado ao problema de escalonamento de tarefas flexível / Beam search method applied to the flexible job shop scheduling problem

Jesus Filho, José Eurípedes Ferreira de 06 June 2013 (has links)
O Job Shop Scheduling Problem é um problema NP-Difícil que chama a atenção de muitos pesquisadores devido ao seu desafio matemático e à sua aplicabilidade em contextos reais. Geralmente, principalmente em cenários próximos aos de fábricas e indústrias, obter um escalonamento ótimo por meio de métodos computacionais exatos implica um alto dispêndio de tempo. Em contrapartida, devido às exigências de um mercado cada vez mais competitivo, as decisões de onde, como, quando e com o que produzir devem ser tomadas rapidamente. O presente trabalho propõe o desenvolvimento de um método heurístico Beam Search para solucionar o Job Shop Scheduling Problem e o Flexible Job Shop Scheduling Problem. Para isso, inicialmente um algoritmo do tipo list scheduling é definido e então o método Beam Search é construído baseado neste algoritmo. Os métodos propostos foram avaliados em diferentes níveis de complexidade utilizando instâncias da literatura que retratam diferentes cenários de planejamento. Em linhas gerais, as soluções encontradas se mostraram bastante competitivas quando comparadas a outras soluções da literatura. / The Job Shop Scheduling Problem is an NP-hard problem which draws the attention of researchers due to both its mathematical challenge and its applicability in real contexts. Usually, mainly in industry and factory environments, obtaining an optimal schedule by exact computational methods implies a long computing time. On the other hand, due to an increasingly competitive marketplace, the decisions on where, how, when and with what to produce must be taken quickly. The present work proposes the development of a heuristic Beam Search method to solve both the Job Shop Scheduling Problem and the Flexible Job Shop Scheduling Problem. To that end, a list scheduling algorithm is first defined and then the Beam Search method is built on top of it. 
The proposed methods were evaluated over different complexity levels using instances from the literature that represent different planning environments. In general terms, the solutions found proved very competitive when compared against other solutions in the literature.
377

High-Order Harmonic Generation with Structured Beams

Kong, Fanqi 12 September 2019 (has links)
The generation of high-order harmonics opened an era of attosecond science wherein coherent light bursts are used to probe dynamic processes in matter with a time resolution short enough to resolve the motion of electrons. It enabled the development of extreme ultraviolet (XUV) and X-ray table-top sources with both temporal and spatial coherence, which provides the ability to shape the temporal and spatial structure of the XUV pulses. Scientists developed techniques to control and measure the temporal structure of high harmonic emissions. These techniques exploited control of the driving laser pulse in the time domain and facilitated the development of more advanced high-harmonic-based XUV sources that have greatly impacted ultrafast measurements. In this thesis, I apply techniques to control and measure the spatial structure of high harmonic emissions, and discuss the underlying physics and potential applications of the interaction between spatially structured laser beams and materials. This study exploits the spatial degree of freedom in strong field interactions, which has not received as much attention as the temporal degree of freedom. I use liquid crystal devices to shape the wave front of a fundamental laser beam into a vortex structure, then imprint this structured wave front onto XUV beams through high harmonic generation. This method provides an alternative to special XUV optics, as it can manipulate the wave front of XUV radiation by all-optical means. This result also reveals the conservation of orbital angular momentum in this extreme nonlinear wave mixing process. In addition to shaping the wave front, shaping the polarization of the driving beam also allows generation of circularly polarized XUV radiation using a high harmonic source. This thesis also highlights the interplay between shaping the wave front and the polarization in the high harmonic generation process.
The topology of the structured beam can be maintained through this extreme nonlinear interaction due to the spin selection rules and spin-orbit conservation. Moreover, this thesis demonstrates an approach to integrate a vector beam into a broadband ultrafast light source and overcome the bandwidth limitation of mode converters. We use this approach to generate a few-cycle structured beam. In the future, this beam will be used to generate a strong ultrafast magnetic impulse in gas and solid targets by driving currents in a loop, which is a valuable tool for the future of magnetic metrology. The novel properties of structured laser beams discussed in this thesis expanded the capabilities of high harmonic based XUV sources and have opened a new field to explore this additional degree of freedom in strong field interactions.
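The orbital angular momentum conservation invoked above can be stated very compactly at the photon level: the charge of a harmonic is the sum of the charges of the driver photons that build it. A minimal sketch (the function is an illustrative construction, not code from the thesis):

```python
def harmonic_oam(n1, l1, n2=0, l2=0):
    """Topological charge (OAM in units of hbar per photon) of a
    harmonic built from n1 photons of a driver with charge l1 and n2
    photons of a second driver with charge l2, assuming orbital angular
    momentum is conserved photon by photon in the wave-mixing picture.
    The harmonic order is n1 + n2 and its charge is n1*l1 + n2*l2."""
    return n1 * l1 + n2 * l2

# Single vortex driver with l = 1: the 15th harmonic carries l = 15.
print(harmonic_oam(15, 1))  # 15
```

The two-driver form covers wave-mixing channels, e.g. two photons of an l = 1 beam mixed with three photons of an l = -1 beam yield a 5th-order harmonic of charge -1.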
378

Wafer-scale processing of arrays of nanopore devices

Ahmadi, Amir 10 January 2013 (has links)
Nanopore-based single-molecule analysis of biomolecules such as DNA and proteins is a subject of strong scientific and technological interest. In recent years, solid state nanopores have been demonstrated to possess a number of advantages over biological (e.g., ion channel protein) pores due to the relative ease of tuning the pore dimensions, pore geometry, and surface chemistry. However, solid state fabrication methods have been limited in their scalability, automation, and reproducibility. In this work, a wafer-scale fabrication method is first demonstrated for reproducibly fabricating large arrays of solid-state nanopores. The method couples the high-resolution processes of electron beam lithography (EBL) and atomic layer deposition (ALD). Arrays of nanopores (825 per wafer) are successfully fabricated across a series of 4-inch wafers, with tunable pore sizes from 50 nm to sub-20 nm. The nanopores are fabricated in silicon nitride films with thicknesses varying from 10 nm to 50 nm. ALD of aluminum oxide is used to tune the nanopore size in the above range. By careful optimization of all the processing steps, a device survival rate of 96% is achieved on a wafer with 50 nm silicon nitride films on 60–80 micron windows. Furthermore, a significant device survival rate of 88% was obtained for 20 nm silicon nitride films on windows of order 100 microns. In order to develop a deeper understanding of nanopore fabrication-structure relationships, a modeling study was conducted to examine the physics of EBL, in particular to investigate the effects of beam blur, dose, shot pattern, and secondary electrons on internal pore structure. Under the operating conditions used in pore production, the pores were expected to taper to a substantially smaller size than their apparent size in SEM. This finding was supported by preliminary conductance readings from nanopores.
379

Process modeling of InAs/AlSb materials for high electron mobility transistors grown by molecular beam epitaxy

Triplett, Gregory Edward, Jr. 01 1900 (has links)
No description available.
380

Nonlinear Analysis of Beams Using Least-Squares Finite Element Models Based on the Euler-Bernoulli and Timoshenko Beam Theories

Raut, Ameeta A. 2009 December 1900 (has links)
The conventional finite element models (FEM) of problems in structural mechanics are based on the principles of virtual work and the total potential energy. In these models, the secondary variables, such as the bending moment and shear force, are post-computed and do not yield good accuracy. In addition, in the case of the Timoshenko beam theory, the element with lower-order equal interpolation of the variables suffers from shear locking. In both the Euler-Bernoulli and Timoshenko beam theories, elements based on the weak form Galerkin formulation also suffer from membrane locking when applied to geometrically nonlinear problems. To alleviate these types of locking, reduced integration techniques are often employed. However, this technique has other disadvantages, such as hour-glass modes or spurious rigid body modes. Hence, it is desirable to develop alternative finite element models that overcome the locking problems. Least-squares finite element models are considered to be better alternatives to the weak form Galerkin finite element models and are therefore investigated in this study. The basic idea behind the least-squares finite element model is to compute the residuals due to the approximation of the variables of each equation being modeled, construct an integral statement of the sum of the squares of the residuals (called the least-squares functional), and minimize the integral with respect to the unknown parameters (i.e., nodal values) of the approximations. The least-squares formulation helps to retain the generalized displacements and forces (or stress resultants) as independent variables, and also allows the use of equal-order interpolation functions for all variables. In this thesis, the solution accuracy of finite element models of the Euler-Bernoulli and Timoshenko beam theories based on two different least-squares models is compared with that of the conventional weak form Galerkin finite element models. 
The developed models were applied to beam problems with different boundary conditions. The solutions obtained by the least-squares finite element models were found to be very accurate for generalized displacements and forces when compared with the exact solutions, and they are more accurate in predicting the forces than the conventional finite element models.
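Schematically, the least-squares idea described above can be written out for the Euler-Bernoulli case. This is a textbook-style sketch under one common sign convention, not necessarily the exact mixed form used in the thesis:

```latex
% First-order system for bending of an Euler-Bernoulli beam, with the
% deflection w, rotation \theta, bending moment M and shear force V all
% retained as independent variables (q is the distributed load):
R_1 = \frac{dw}{dx} - \theta, \quad
R_2 = \frac{d\theta}{dx} - \frac{M}{EI}, \quad
R_3 = \frac{dM}{dx} - V, \quad
R_4 = \frac{dV}{dx} + q
% Least-squares functional, minimized over the nodal parameters of the
% (equal-order) approximations of w, \theta, M and V:
J(w,\theta,M,V) = \tfrac{1}{2}\int_0^L \left( R_1^2 + R_2^2 + R_3^2 + R_4^2 \right) dx,
\qquad \delta J = 0 .
```

Because all four fields enter the functional directly, the moment and shear are primary unknowns rather than post-computed derivatives, which is the accuracy advantage the abstract refers to.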
