  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

Gain and Bandwidth Enhancement of Ferrite-Loaded CBS Antenna Using Material Shaping and Positioning

January 2013 (has links)
abstract: Loading a cavity-backed slot (CBS) antenna with ferrite material and applying a biasing static magnetic field can be used to control its resonant frequency. Such a mechanism results in a frequency-reconfigurable antenna. However, placing a lossy ferrite material inside the cavity can reduce the gain or negatively impact the impedance bandwidth. This thesis develops guidelines, based on a non-uniform applied magnetic field and the non-uniform magnetic field internal to the ferrite specimen, for the design of ferrite-loaded CBS antennas that enhance their gain and tunable bandwidth by shaping the ferrite specimen and judiciously locating it within the cavity. To achieve these objectives, it is necessary to examine the influence of the shape and relative location of the ferrite material, and also the proximity of the ferrite specimen to the probe, on the DC magnetic field and RF electric field distributions inside the cavity. The geometry of the probe and its impact on the figures-of-merit of the antenna is of interest as well. Two common cavity-backed slot antennas (rectangular and circular cross-section) were designed, and corresponding simulations and measurements were performed and compared. The cavities were mounted on 30 cm × 30 cm perfect electric conductor (PEC) ground planes and partially loaded with ferrite material. The ferrites were biased with an external magnetic field produced by either an electromagnet or permanent magnets. Simulations were performed using FEM-based commercial software, Ansys Maxwell 3D and HFSS. Maxwell 3D is utilized to model the non-uniform DC applied magnetic field and the non-uniform magnetic field internal to the ferrite specimen; HFSS, however, is used to simulate and obtain the RF characteristics of the antenna. To validate the simulations, they were compared with measurements performed in ASU's EM Anechoic Chamber. 
After extensive examination using simulations and measurements, optimal design guidelines with respect to gain, return loss, and tunable impedance bandwidth were obtained and recommended for ferrite-loaded CBS antennas. / Dissertation/Thesis / M.S. Electrical Engineering 2013
132

Desaceleração de césio pela técnica de sintonia Zeeman / Deceleration of cesium by the Zeeman tuning technique

Monica Santos Dahmouche 18 February 1993 (has links)
In this work we decelerated, for the first time, a Cs beam by the Zeeman tuning technique, using a diode laser counter-propagating to the atomic beam. The technique relies on a magnetic field with a parabolic spatial profile to compensate the Doppler shift and keep the atoms resonant with the laser throughout the deceleration process. We reduced the velocity of the atoms to about 940 cm/s. To measure this velocity we used a simple technique, different from the usual one, which employs a probe beam. With our magnet it was not possible to decelerate atoms with velocities above 12,000 cm/s. The magnetic-field range in which we had to work corresponds to the weak-field regime for the ground state of Cs, which increases the probability of unwanted transitions. We observed a useful detuning interval outside of which no deceleration occurs; this interval is also related to the maximum velocity for which deceleration takes place. We arrived at this interval through simulations performed to find the parameters required for deceleration, and the experimental results agree with the simulation predictions. In parallel with the deceleration of Cs, we prepared the diode lasers and reduced their linewidth, although the narrowed laser was not used for the deceleration. In order to work with high-resolution spectroscopy, we reduced the linewidth of the semiconductor laser by coupling the laser cavity to an external Fabry-Pérot cavity, narrowing the linewidth to 500 kHz. This result will allow us to investigate the lines of Cs confined in a magneto-optical trap, an experiment already under way in our laboratory.
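The field profile the abstract describes can be sketched numerically. Assuming constant deceleration, zero laser detuning, and an effective Zeeman shift of one Bohr magneton per tesla — all illustrative assumptions, not values from the thesis — the resonance condition k·v(z) = μ_B·B(z)/ħ with v(z) = v0·√(1 − z/L) gives a bias field that falls off as the square root of position:

```python
import math

# Illustrative constants (assumed, not from the thesis): Cs D2 line at 852 nm,
# effective Zeeman shift of one Bohr magneton per tesla.
HBAR = 1.054571817e-34        # J*s
MU_B = 9.2740100783e-24       # J/T
K = 2 * math.pi / 852e-9      # 1/m

def zeeman_field_profile(v0, length, z):
    """Bias field B(z) keeping a uniformly decelerating atom resonant with a
    counter-propagating laser at zero detuning: mu_B*B(z)/hbar = k*v(z),
    with v(z) = v0*sqrt(1 - z/length)."""
    v = v0 * math.sqrt(max(0.0, 1.0 - z / length))
    return HBAR * K * v / MU_B

# Field at the entrance of a 1 m slower for atoms entering at 100 m/s:
b_entry = zeeman_field_profile(100.0, 1.0, 0.0)   # on the order of 10 mT here
```

In a real slower the ground-state structure and the chosen transition set the effective magnetic moment, so the prefactor differs; only the square-root shape is generic.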
133

Simulação do processo de desaceleração de átomos pela técnica de ajustamento Zeeman / Simulation of the process of decelerating atoms by the Zeeman-tuning technique

Reginaldo de Jesus Napolitano 16 February 1990 (has links)
O principal objetivo deste trabalho é, adotando uma abordagem centrada na simulação numérica, entender a desaceleração a laser de um feixe atômico por meio da conhecida técnica de ajuste Zeeman. Nossos cálculos numéricos são capazes de reproduzir as características fundamentais dos resultados experimentais já obtidos. Também apresentamos um modelo analítico simples incorporando as idéias básicas contidas nas hipóteses utilizadas nas simulações e mostrando que estas idéias são consistentes com as conclusões numéricas e experimentais. Isto demonstra que os aspectos essenciais do processo desacelerador são bem compreendidos. / The main purpose of this work is, adopting an approach centred on numerical simulation, to understand the laser deceleration of an atomic beam by means of the known Zeeman-tuning technique. Our numerical calculations are able to reproduce the fundamental features of the experimental results already obtained. We also present a simple analytical model incorporating the basic ideas contained in the hypotheses used in the simulations, and show that these ideas are consistent with the numerical and experimental conclusions. This demonstrates that the essential aspects of the deceleration process are well understood.
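The kind of single-atom calculation the abstract refers to can be sketched as a time march against the saturated scattering force. The parameter values and the ideal-field assumption below are illustrative, not taken from the thesis; an atom launched on the design trajectory tracks the field and is strongly slowed, while one entering well above the design velocity stays off resonance:

```python
import math

# Illustrative Cs D2-line numbers (assumed for the sketch, not from the thesis).
GAMMA = 2 * math.pi * 5.2e6      # 1/s, natural linewidth
K = 2 * math.pi / 852e-9         # 1/m, laser wavenumber
M = 2.21e-25                     # kg, Cs atomic mass
HBAR = 1.054571817e-34           # J*s

def decelerate(v0, eff_detuning, length=1.0, s=2.0, dt=1e-5):
    """Time-march one atom against the saturated scattering force;
    eff_detuning(z, v) is the detuning seen by the atom (rad/s) after the
    Doppler and Zeeman shifts are accounted for."""
    v, z = v0, 0.0
    a0 = HBAR * K * GAMMA / (2 * M)      # peak scattering deceleration
    while z < length and v > 0.5:
        delta = eff_detuning(z, v)
        a = a0 * s / (1 + s + (2 * delta / GAMMA) ** 2)
        v -= a * dt
        z += v * dt
    return v, z

def ideal_profile(v0_design, length=1.0):
    """Effective detuning for a field tailored to the constant-deceleration
    design trajectory v_d(z) = v0_design * sqrt(1 - z/length)."""
    def eff_detuning(z, v):
        v_d = v0_design * math.sqrt(max(0.0, 1.0 - z / length))
        return K * (v - v_d)
    return eff_detuning

slow_v, _ = decelerate(100.0, ideal_profile(100.0))   # captured: tracks the field
fast_v, _ = decelerate(200.0, ideal_profile(100.0))   # too fast: barely slowed
```

The second call mimics the capture-velocity limit reported in the experimental abstract above: atoms entering too fast never come into resonance anywhere along the magnet.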
134

Hybrid-Adaptive Switched Control for Robotic Manipulator Interacting with Arbitrary Surface Shapes Under Multi-Sensory Guidance

Nakhaeinia, Danial January 2018 (has links)
Industrial robots rapidly gained popularity as they can perform tasks quickly, repeatedly and accurately in static environments. However, in modern manufacturing, robots should also be able to safely interact with arbitrary objects and dynamically adapt their behavior to various situations. The large masses and rigid constructions of industrial robots prevent them from easily being re-tasked. In this context, this work proposes an immediate solution to make rigid manipulators compliant and able to efficiently handle object interactions, with only an add-on module (a custom-designed instrumented compliant wrist) and an original control framework that can easily be ported to different manipulators. The proposed system utilizes both offline and online trajectory planning to achieve fully automated object interaction and surface following, with or without contact, where no prior knowledge of the objects is available. To minimize the complexity of the task, the problem is formulated into four interaction motion modes: free, proximity, contact, and a blend of those. The free motion mode guides the robot towards the object of interest using information provided by an RGB-D sensor. The RGB-D sensor is used to collect raw 3D information on the environment and construct an approximate 3D model of an object of interest in the scene. In order to completely explore the object, a novel coverage path planning technique is proposed to generate a primary (offline) trajectory. However, RGB-D sensors provide only limited accuracy on depth measurements and create blind spots when the sensor gets close to surfaces. Therefore, the offline trajectory is then further refined by applying the proximity motion mode and contact motion mode, or a blend of them (blend motion mode), which allow the robot to dynamically interact with arbitrary objects and adapt to the surfaces it approaches or touches using live proximity and contact feedback from the compliant wrist. 
To achieve seamless and efficient integration of the sensory information and smoothly switch between different interaction modes, an original hybrid switching scheme is proposed that applies a supervisory (decision-making) module and a mixture of hard and blend switches to support data fusion from multiple sensing sources by combining pairs of the main motion modes. Experimental results using a CRS-F3 manipulator demonstrate the feasibility and performance of the proposed method.
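The four-mode formulation can be caricatured with a small supervisory rule. The thresholds and the decision logic here are hypothetical stand-ins for the thesis's actual switching scheme, which fuses multiple sensing sources:

```python
from enum import Enum

class Mode(Enum):
    FREE = "free"            # no object nearby: follow the offline trajectory
    PROXIMITY = "proximity"  # within sensing range: adapt using range data
    BLEND = "blend"          # almost touching: mix proximity and contact control
    CONTACT = "contact"      # measurable force: regulate the contact

# Hypothetical thresholds (illustrative, not from the thesis).
PROX_RANGE = 0.05       # m, proximity sensing range
BLEND_RANGE = 0.01      # m, near-contact band
CONTACT_FORCE = 0.5     # N, minimum force treated as contact

def select_mode(distance, force):
    """Supervisory decision rule: pick the interaction mode from the latest
    proximity (distance) and contact (force) readings of the wrist."""
    if force >= CONTACT_FORCE:
        return Mode.CONTACT
    if distance <= BLEND_RANGE:
        return Mode.BLEND
    if distance <= PROX_RANGE:
        return Mode.PROXIMITY
    return Mode.FREE
```

A real supervisor would add hysteresis so that noise near a threshold does not cause rapid mode chattering; the sketch only shows the partition of the sensing space.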
135

Automatic Algorithm Configuration: Analysis, Improvements and Applications

Perez Caceres, Leslie 23 November 2017 (has links)
Technology has a major role in today's world. The development of, and massive access to, information technology has enabled the use of computers to provide assistance on a wide range of tasks, from the most trivial daily ones to the most complex challenges we face as humankind. In particular, optimisation algorithms assist us in taking decisions, improving processes and designing solutions, and they are successfully applied in several contexts such as industry, health and entertainment. The design and development of effective and efficient computational algorithms is thus a need in modern society. Developing effective and efficient optimisation algorithms is an arduous task that includes designing and testing several algorithmic components and schemes, and requires considerable expertise. During the design of an algorithm, the developer defines parameters that can be used to further adjust the algorithm's behaviour depending on the particular application. Setting appropriate values for the parameters of an algorithm can greatly improve its performance. This way, most high-performing algorithms define parameter settings that are "finely tuned", typically by experts, for a particular problem or execution condition. The process of finding high-performing parameter settings, called algorithm configuration, is commonly a challenging, tedious, time-consuming and computationally expensive task that hinders the application and design of algorithms. Nevertheless, the algorithm configuration process can be modelled as an optimisation problem itself, and optimisation techniques can be applied to provide high-performing configurations. The use of automated algorithm configuration procedures, called configurators, allows obtaining high-performing algorithms without requiring expert knowledge, and it enables the design of more flexible algorithms by easing the definition of design choices as parameters to be set. 
Ultimately, automated algorithm configuration could be used to fully automate the algorithm development process, providing algorithms tailored to the problem to be solved. The aim of the work presented in this thesis is to study the automated configuration of algorithms. To do so, we formally define the algorithm configuration problem and analyse its characteristics. We study the most prominent algorithm configuration procedures and identify relevant configuration techniques and their applicability. We contribute to the field by proposing and analysing several configuration procedures, the most prominent of these being the irace configurator. This work presents and studies several modifications of the configuration process implemented by irace, which considerably improve the performance of irace and broaden its applicability. In a general context, we provide insights into the characteristics of the algorithm configuration process and its techniques by performing several analyses, configuring different types of algorithms under varied situations. Finally, we provide practical examples of the usage of automated configuration techniques, showing their benefits and further uses for the application and design of efficient and effective algorithms. / Doctorat en Sciences de l'ingénieur et technologie / info:eu-repo/semantics/nonPublished
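The idea of treating configuration as an optimisation problem can be illustrated with a deliberately naive configurator. This sketch is nothing like irace's racing procedure — it simply samples configurations at random and evaluates each on all training instances — but it shows the interface a configurator works against; `toy_solver` and its parameters are invented for the example:

```python
import random

def configure(target_algorithm, param_space, instances, budget, seed=42):
    """Toy configurator: sample random configurations and keep the one with
    the best mean cost over the training instances. Real configurators such
    as irace instead race configurations, discarding poor ones early and
    biasing the sampling towards promising regions of the space."""
    rng = random.Random(seed)
    best_conf, best_cost = None, float("inf")
    for _ in range(budget):
        conf = {name: rng.choice(values) for name, values in param_space.items()}
        cost = sum(target_algorithm(conf, inst) for inst in instances) / len(instances)
        if cost < best_cost:
            best_conf, best_cost = conf, cost
    return best_conf, best_cost

# Invented target algorithm: its cost depends on two parameters and the instance.
def toy_solver(conf, instance):
    return abs(conf["alpha"] - 0.3) + abs(conf["beta"] - instance)

space = {"alpha": [0.1, 0.2, 0.3, 0.4], "beta": [1, 2, 3]}
best, mean_cost = configure(toy_solver, space, instances=[1, 2, 2, 3], budget=50)
```

Even this crude scheme recovers a good setting on the toy problem; the thesis's point is that doing the same efficiently, with expensive stochastic target algorithms and large mixed parameter spaces, is what dedicated configurators are for.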
136

A Bayesian Group Sparse Multi-Task Regression Model for Imaging Genomics

Greenlaw, Keelin 26 August 2015 (has links)
Recent advances in technology for brain imaging and high-throughput genotyping have motivated studies examining the influence of genetic variation on brain structure. In this setting, high-dimensional regression for multi-SNP association analysis is challenging, as the brain imaging phenotypes are multivariate and there is a desire to incorporate a biological group structure among SNPs based on the genes to which they belong. Wang et al. (Bioinformatics, 2012) have recently developed an approach for simultaneous estimation and SNP selection based on penalized regression with regularization based on a novel group l_{2,1}-norm penalty, which encourages sparsity at the gene level. A problem with the proposed approach is that it only provides a point estimate. We solve this problem by developing a corresponding Bayesian formulation based on a three-level hierarchical model that allows for full posterior inference using Gibbs sampling. For the selection of tuning parameters, we consider techniques based on: (i) a fully Bayes approach with hyperpriors, (ii) empirical Bayes with implementation based on a Monte Carlo EM algorithm, and (iii) cross-validation (CV). When the number of SNPs is greater than the number of observations, we find that both the fully Bayes and empirical Bayes approaches overestimate the tuning parameters, leading to overshrinkage of the regression coefficients. To understand this problem we derive an approximation to the marginal likelihood and investigate its shape under different settings. Our investigation sheds some light on the problem and suggests the use of cross-validation or its approximation with WAIC (Watanabe, 2010) when the number of SNPs is relatively large. Properties of our Gibbs-WAIC approach are investigated using a simulation study, and we apply the methodology to a large dataset collected as part of the Alzheimer's Disease Neuroimaging Initiative. / Graduate
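The group penalty at the heart of this model can be written down directly. A minimal sketch, assuming the SNP coefficients are stacked as rows of a matrix W (SNPs × phenotypes) and grouped by gene; the data are invented for illustration:

```python
import numpy as np

def group_l21_penalty(W, groups):
    """Group l_{2,1} penalty: the Frobenius norm of each gene's block of SNP
    coefficients (rows of W, one row per SNP, one column per phenotype),
    summed over genes. Shrinking a whole block to zero drops that gene from
    the model, which is what encourages sparsity at the gene level."""
    return sum(np.linalg.norm(W[idx, :]) for idx in groups)

# Invented example: gene A owns SNPs 0-1, gene B owns SNP 2; two phenotypes.
W = np.array([[3.0, 0.0],
              [0.0, 4.0],
              [0.0, 0.0]])
groups = [[0, 1], [2]]
penalty = group_l21_penalty(W, groups)   # sqrt(3**2 + 4**2) + 0 = 5.0
```

In the Bayesian formulation of the thesis this penalty corresponds to a prior on the coefficient blocks, and the tuning parameter scaling it is what the fully Bayes, empirical Bayes and CV/WAIC strategies try to choose.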
137

Ladění a testování databázových systémů pro potřeby digitálního archivu SAFE III / Tuning and testing of database systems for needs of digital archive SAFE III

Pobuda, Tomáš January 2008 (has links)
This thesis deals with tuning the Oracle database used by the SAFE digital archive, specifically with setting database parameters. It is divided into three parts. The first part characterizes the factors that influence database performance. The second part describes the options for tuning and configuring an Oracle database. The third part first introduces the SAFE digital archive, then selects a suitable testing tool for workload generation, describes the test scenarios, and finally runs the tests and compares the results under different database settings. The goal of the thesis is to describe and experimentally tune the Oracle database used by the SAFE digital archive. A further goal is to test file insertion into the digital archive under different settings (storing in the database vs. on the file system). These goals are achieved by generating workloads with the testing tool and comparing response times under the different settings. The main contribution of this thesis is, above all, the practical tuning of the Oracle database used by the SAFE digital archive. The document can also serve as a handbook for implementers of the tested SAFE digital archive deployment.
138

Nonlinear Reduced Order Modeling of Structures Exhibiting a Strong Nonlinearity

January 2020 (has links)
abstract: The focus of this dissertation is first on understanding the difficulties involved in constructing reduced order models of structures that exhibit a strong nonlinearity/strongly nonlinear events such as snap-through, buckling (local or global), mode switching, or symmetry breaking. Next, based on this understanding, it is desired to modify/extend the current Nonlinear Reduced Order Modeling (NLROM) methodology, basis selection and/or identification methodology, to obtain reliable reduced order models of these structures. Focusing on these goals, the work carried out addressed more specifically the following issues: i) optimization of the basis to capture at best the response in the smallest number of modes, ii) improved identification of the reduced order model stiffness coefficients, iii) detection of strongly nonlinear events using NLROM. For the first issue, an approach was proposed to rotate a limited number of linear modes to become more dominant in the response of the structure. This step was achieved through a proper orthogonal decomposition of the projection on these linear modes of a series of representative nonlinear displacements. This rotation does not expand the modal space but renders that part of the basis more efficient, the identification of stiffness coefficients more reliable, and the selection of dual modes more compact. In fact, a separate approach was also proposed for an independent optimization of the duals. Regarding the second issue, two tuning approaches for the stiffness coefficients were proposed to improve the identification of a limited set of critical coefficients based on independent response data of the structure. Both approaches led to a significant improvement of the static prediction for the clamped-clamped curved beam model. Extensive validations of the NLROMs based on the above novel approaches were carried out by comparisons with full finite element response data. 
The third issue, the detection of nonlinear events, was finally addressed by building connections between the eigenvalues of the finite element software (Nastran here) and NLROM tangent stiffness matrices and the occurrence of the 'events'; this is further extended to assessing the accuracy with which the NLROM captures the full finite element behavior after the event has occurred. / Dissertation/Thesis / Doctoral Dissertation Mechanical Engineering 2020
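The basis-rotation step described for the first issue can be sketched in a few lines of linear algebra: project response snapshots onto the chosen linear modes, then rotate the basis by the left singular vectors of the coefficient matrix (a POD of the projections). The toy structure below is invented for illustration:

```python
import numpy as np

def rotate_modes(Phi, snapshots):
    """POD-based rotation of a linear-mode basis: project nonlinear response
    snapshots onto the modes Phi (dofs x m, orthonormal columns assumed),
    then rotate the basis by the left singular vectors of the coefficient
    matrix. The span is unchanged; the first rotated mode carries the most
    response content."""
    Q = Phi.T @ snapshots                       # modal coordinates, m x n_snap
    U, s, _ = np.linalg.svd(Q, full_matrices=False)
    return Phi @ U, s                           # rotated basis, singular values

# Invented 4-dof example with two orthonormal "linear modes"; every snapshot
# excites both modes equally, so a single rotated mode captures everything.
Phi = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [0.0, 0.0],
                [0.0, 0.0]])
snaps = np.array([[1.0, 2.0, 3.0],
                  [1.0, 2.0, 3.0],
                  [0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])
Phi_rot, sv = rotate_modes(Phi, snaps)
```

The decay of the singular values indicates how many rotated modes are actually needed, which is the sense in which the rotation makes that part of the basis "more efficient".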
139

Optimalizace provozních režimů zážehového motoru / SI Engine Performance Tuning

Beran, Martin January 2008 (has links)
The main scope of this thesis is performance tuning of a four-stroke petrol engine via its ECU. The thesis analyses the processes involved in engine management, and describes and explains the individual signals processed and generated by the ECU. It designs measurement chains and optimal measurement procedures, on the basis of which an optimal methodology for tuning the individual operating modes of the engine has been assembled.
140

ACCTuner: OpenACC Auto-Tuner For Accelerated Scientific Applications

Alzayer, Fatemah 17 May 2015 (has links)
We optimize parameters in OpenACC clauses for a stencil evaluation kernel executed on Graphical Processing Units (GPUs) using a variety of machine learning and optimization search algorithms, individually and in hybrid combinations, and compare execution time performance to the best possible obtained from brute force search. Several auto-tuning techniques – historic learning, random walk, simulated annealing, Nelder-Mead, and genetic algorithms – are evaluated over a large two-dimensional parameter space not satisfactorily addressed to date by OpenACC compilers, consisting of gang size and vector length. A hybrid of historic learning and Nelder-Mead delivers the best balance of high performance and low tuning effort. GPUs are employed over an increasing range of applications due to the performance available from their large number of cores, as well as their energy efficiency. However, writing code that takes advantage of their massive fine-grained parallelism requires deep knowledge of the hardware, and is generally a complex task involving program transformation and the selection of many parameters. To improve programmer productivity, the directive-based programming model OpenACC was announced as an industry standard in 2011. Various compilers have been developed to support this model, the most notable being those by Cray, CAPS, and PGI. While the architecture and number of cores have evolved rapidly, the compilers have failed to keep up at configuring the parallel program to run most efficiently on the hardware. Following successful approaches to obtain high performance in kernels for cache-based processors using auto-tuning, we approach this compiler-hardware gap in GPUs by employing auto-tuning for the key parameters “gang” and “vector” in OpenACC clauses. 
We demonstrate results for a stencil evaluation kernel typical of seismic imaging over a variety of realistically sized three-dimensional grid configurations, with different truncation error orders in the spatial dimensions. Apart from random walk and historic learning based on nearest neighbor in grid size, most of our heuristics, including the one that proves best, appear to be applied in this context for the first time. This work is a stepping-stone towards an OpenACC auto-tuning framework for more general high-performance numerical kernels optimized for GPU computations.
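One of the evaluated heuristics, simulated annealing over the discrete (gang, vector) grid, can be sketched as follows. The cost function here is a synthetic stand-in for a timed kernel launch, and the parameter grids are illustrative, not the thesis's search space:

```python
import math
import random

def anneal(cost, gangs, vectors, iters=400, t0=1.0, seed=1):
    """Simulated annealing over the discrete (gang, vector) grid: always
    accept an improving candidate, accept a worse one with probability
    exp(-delta/t), and cool the temperature t linearly to near zero."""
    rng = random.Random(seed)
    cur = (rng.choice(gangs), rng.choice(vectors))
    cur_cost = cost(*cur)
    best, best_cost = cur, cur_cost
    for i in range(iters):
        t = t0 * (1 - i / iters) + 1e-9
        cand = (rng.choice(gangs), rng.choice(vectors))
        c = cost(*cand)
        if c < cur_cost or rng.random() < math.exp((cur_cost - c) / t):
            cur, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand, c
    return best, best_cost

# Synthetic stand-in for a measured runtime (a real tuner would compile and
# launch the OpenACC kernel with these clause values and time it).
def fake_runtime(gang, vector):
    return (gang - 128) ** 2 / 1e4 + (vector - 64) ** 2 / 1e3 + 1.0

best, runtime = anneal(fake_runtime, [32, 64, 128, 256, 512], [32, 64, 128, 256])
```

In an actual tuner each `cost` call is expensive (a full kernel execution), which is why the thesis measures tuning effort as well as the quality of the configuration found.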
