191

Efficient Minimum Cycle Mean Algorithms And Their Applications

Supriyo Maji (9158723) 23 July 2020 (has links)
The minimum cycle mean (MCM) is an important concept in directed graphs. From clock period optimization and timing analysis to layout optimization, minimum cycle mean algorithms have found widespread use in VLSI system design optimization. With transistor sizes scaling to 10nm and below, the complexity and size of these systems have grown rapidly over the last decade. Scalability of the algorithms, both in runtime and in memory usage, is therefore important.

Among the few classical MCM algorithms, the algorithm by Young, Tarjan, and Orlin (YTO) has been particularly popular. When implemented with a binary heap, the YTO algorithm has the best runtime performance, although it has higher asymptotic time complexity than Karp's algorithm. However, because an efficient implementation of YTO relies on data redundancy, its memory usage is higher and can be a prohibitive factor in large problems. On the other hand, a typical implementation of Karp's algorithm can also be memory hungry. An early termination technique from Hartmann and Orlin (HO) can be applied directly to Karp's algorithm to improve its runtime performance and memory usage. Although not as efficient as YTO in runtime, the HO algorithm uses much less memory than YTO. We propose several improvements to the HO algorithm. The proposed algorithm has runtime performance comparable to YTO on circuit graphs and dense random graphs while using less memory than the HO algorithm.

Minimum balancing of a directed graph is an application of the minimum cycle mean algorithm. Minimum balance algorithms have been used to optimally distribute slack for mitigating process-variation-induced timing violations in clock networks. In a conventional minimum balance algorithm, the principal subroutine is finding the MCM of a graph. In particular, the minimum balance algorithm iteratively finds the minimum cycle mean and the corresponding minimum-mean cycle, and uses the mean and cycle to update the graph by changing edge weights and reducing the graph size. The iterations terminate when the updated graph is a single node. Studies have shown that the bottleneck of the iterative process is the graph update operation, as previous approaches updated the entire graph. We propose an improvement to the minimum balance algorithm that performs fewer changes to the edge weights in each iteration, resulting in better efficiency.

We also apply the minimum cycle mean algorithm to latency-insensitive system design. Timing violations can occur in high-performance communication links in systems-on-chip (SoCs) in the late stages of the physical design process. To address these issues, latency-insensitive systems (LISs) employ pipelining in the communication channels through the insertion of relay stations. Although the functionality of a LIS is robust with respect to communication latencies, such insertion can degrade system throughput. Earlier studies have shown that properly sizing buffer queues after relay station insertion can eliminate this performance loss. However, solving the maximum-performance buffer queue sizing problem requires mixed integer linear programming (MILP), whose runtime is not scalable. We formulate the problem as a parameterized graph optimization problem in which every communication channel has a parameterized edge with the buffer count as the edge weight. We then use a minimum cycle mean algorithm to determine from which edges buffers can be removed safely without creating negative cycles. This is done iteratively, in a similar style to the minimum balance algorithm. Experimental results suggest that the proposed approach is scalable. Moreover, the quality of the solution is observed to be as good as that of the MILP-based approach.
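As background, here is a minimal sketch of Karp's classical algorithm, the textbook baseline the abstract compares against (not the authors' improved variant), assuming a strongly connected graph with vertices 0..n-1. The full (n+1) x n table d is exactly what makes a typical implementation memory hungry:

```python
def karp_min_cycle_mean(n, edges):
    """Karp's algorithm: minimum cycle mean of a strongly connected
    directed graph with n vertices and edges given as (u, v, weight)."""
    INF = float("inf")
    # d[k][v] = minimum weight of a walk with exactly k edges from vertex 0 to v
    d = [[INF] * n for _ in range(n + 1)]
    d[0][0] = 0.0
    for k in range(1, n + 1):
        for u, v, w in edges:
            if d[k - 1][u] + w < d[k][v]:
                d[k][v] = d[k - 1][u] + w
    # MCM = min over v of max over k < n of (d[n][v] - d[k][v]) / (n - k)
    best = INF
    for v in range(n):
        if d[n][v] == INF:
            continue
        worst = max((d[n][v] - d[k][v]) / (n - k)
                    for k in range(n) if d[k][v] < INF)
        best = min(best, worst)
    return best

# Toy example: a single 3-cycle with mean (2 + 3 + 1) / 3 = 2.0
print(karp_min_cycle_mean(3, [(0, 1, 2), (1, 2, 3), (2, 0, 1)]))
```

Karp's theorem guarantees that the minimum cycle mean equals the min-max expression computed in the second half, in O(nm) time but O(n^2) memory, which is the cost the HO early-termination technique attacks.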
192

Výzkum volebních preferencí v ČR: návrh metodologické optimalizace / Election Polls in the Czech Republic: Methodological Optimization

Prokop, Daniel January 2012 (has links)
Bibliographic record: PROKOP, Daniel (2012). Election Polls in the Czech Republic: Methodological Optimization. Charles University in Prague, Faculty of Social Sciences, Institute of Sociological Studies. Thesis academic consultant: Mgr. Jindřich Krejčí, Ph.D. Abstract: The thesis focuses on election polls and the prediction of election results in the Czech Republic. Using data from the research company MEDIAN s.r.o., collected by face-to-face (CAPI) and telephone (CATI) interviewing in the 2010 election year, it examines methodological optimizations that could reduce systematic bias and discrepancies between pre-election polls and the election results. In particular, it discusses these methodological solutions: mixed-mode data collection (a combination of CATI and CAPI); data weighting focused on specific factors correlated with voting behavior; inclusion of the preferences of undecided voters; prediction of respondents' participation in elections; and time-series smoothing of election-poll results. Based on these analyses, the thesis articulates general findings that could be fruitful in the discussion of Czech election polls and their methodology in general. Basic and advanced statistical methods (CART, exponential smoothing, etc.) are used to achieve the stated research goals. Keywords: election...
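As an illustration of the last technique listed, here is a minimal sketch of simple exponential smoothing applied to a poll time series; the series and the smoothing constant below are hypothetical:

```python
def exponential_smoothing(series, alpha=0.3):
    """Simple exponential smoothing: each smoothed value is a weighted
    average of the current observation and the previous smoothed value."""
    smoothed = [series[0]]
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

# Hypothetical monthly support (%) for one party across an election year
polls = [24.1, 25.3, 23.8, 26.0, 25.2, 24.7]
print(exponential_smoothing(polls))
```

A smaller alpha damps sampling noise between polling waves more aggressively, at the cost of reacting more slowly to genuine shifts in preferences.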
193

A Logistic Regression Analysis of Utah Colleges Exit Poll Response Rates Using SAS Software

Stevenson, Clint W. 27 October 2006 (has links) (PDF)
In this study I examine voter response at an interview level using a dataset of 7562 voter contacts (including responses and nonresponses) in the 2004 Utah Colleges Exit Poll. In 2004, 4908 of the 7562 voters approached responded to the exit poll, for an overall response rate of 65 percent. Logistic regression is used to estimate factors that contribute to the success or failure of each interview attempt. The logistic regression model uses interviewer characteristics, voter characteristics (of both respondents and nonrespondents), and exogenous factors as independent variables. Voter characteristics such as race, gender, and age are strongly associated with response. An interviewer's prior retail sales experience is associated with whether a voter decides to respond to the questionnaire. The only exogenous factor associated with voter response is whether the interview occurred in the morning or afternoon.
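The thesis fits its model in SAS; a minimal Python sketch of the same kind of interview-level logistic regression, with simulated data and hypothetical predictor names echoing the factors above, might look like this:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical interview-level data: 1 = voter responded, 0 = refused
rng = np.random.default_rng(0)
n = 500
voter_age = rng.integers(18, 90, n)
interviewer_sales_exp = rng.integers(0, 2, n)  # prior retail sales experience
morning = rng.integers(0, 2, n)                # morning vs. afternoon contact

X = sm.add_constant(np.column_stack([voter_age, interviewer_sales_exp, morning]))
# Simulate responses so the example is self-contained
logit_p = -0.5 + 0.01 * voter_age + 0.4 * interviewer_sales_exp + 0.2 * morning
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

model = sm.Logit(y, X).fit(disp=0)
print(model.summary())  # coefficients estimate each factor's effect on response
```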
194

Development and Evaluation of a Flexible Framework for the Design of Autonomous Classifier Systems

Ganapathy, Priya 22 December 2009 (has links)
No description available.
195

Indirect Consequences of Exposure to Radiation in Doses Relevant to Nuclear Incidents and Accidents

Fernando, Chandula 11 1900 (has links)
At low doses relevant to nuclear incidents and accidental releases of radioactivity, the detriment of radiation extends beyond direct effects. This thesis investigates genomic instability, a subclass of non-targeted effects in which damage and lethality are transmitted vertically and expressed in the progeny of cells many generations after the initial radiation exposure. Through a series of experiments using clonogenic assays of human and fish cell cultures, the studies described in this thesis characterize lethal mutations, hyper-radiosensitivity, and increased radioresistance, processes involving repair mechanisms that dictate survival in cells exposed to low doses. A further study investigates how the relative biological effect of alpha-particle radiation at low doses differs from what is expected at high doses. The results demonstrate increased radioresistance in a human cell line while also revealing increased lethality in a fish cell line, confirming the need to consider dose dependence as well as variation in the behavior of different cell lines and species. It is hoped the conclusions of this thesis will inspire the creation of protocols with greater attention to the indirect consequences of exposure to radiation at doses relevant to nuclear incidents and accidents. / Thesis / Master of Science (MSc)
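For context, clonogenic survival is conventionally summarized with the linear-quadratic model; the hyper-radiosensitivity and induced radioresistance studied here appear as low-dose deviations from it. A minimal sketch with hypothetical parameter values:

```python
import numpy as np

def lq_survival(dose_gy, alpha=0.2, beta=0.02):
    """Linear-quadratic model: surviving fraction after a dose in gray.
    alpha (1/Gy) and beta (1/Gy^2) here are hypothetical values."""
    return np.exp(-(alpha * dose_gy + beta * dose_gy ** 2))

for d in [0.1, 0.5, 1.0, 2.0]:
    print(f"{d:4.1f} Gy -> surviving fraction {lq_survival(d):.3f}")
```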
196

Sections efficaces neutroniques via la méthode de substitution / Neutron-induced cross-sections via the surrogate method

Boutoux, Guillaume 25 November 2011 (has links)
Neutron-induced cross sections of short-lived nuclei are needed for fundamental and applied physics in fields such as nuclear energy and astrophysics. However, the high radioactivity of the samples very often makes the direct measurement of these cross sections extremely difficult. The surrogate reaction method is an indirect way of determining neutron-induced cross sections through transfer or inelastic scattering reactions. This method presents the advantage that in some cases the target material is stable or less radioactive than the material required for a neutron-induced measurement. The method is based on the hypothesis that the excited nucleus is a compound nucleus whose decay depends essentially on its excitation energy and on the spin and parity of the populated compound state. Nevertheless, the spin and parity distributions populated in neutron-induced and transfer reactions may be different. This work reviews the surrogate method and its validity. Neutron-induced fission cross sections obtained with the surrogate method are in general in good agreement with direct measurements. However, it is not yet clear to what extent the surrogate method can be applied to infer radiative capture cross sections. We performed an experiment to determine the gamma-decay probabilities of 176Lu and 173Yb by using the surrogate reactions 174Yb(3He,p)176Lu* and 174Yb(3He,alpha)173Yb*, respectively, and compared them with the well-known corresponding probabilities obtained in the 175Lu(n,gamma) and 172Yb(n,gamma) reactions. This experiment helps explain why, in the case of gamma decay, the surrogate method gives significant deviations from the corresponding neutron-induced reaction. This work in the rare-earth region also assesses to what extent the surrogate method can be applied to extract capture probabilities in the actinide region. Previous experiments on fission have also been reinterpreted. This work thus provides new insights into the surrogate method.
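The quantitative idea behind the method can be summarized in one relation (a sketch in the Weisskopf-Ewing limit, with notation assumed here rather than taken from the thesis): the neutron-induced capture cross section is factored into a calculable compound-nucleus formation cross section and the gamma-decay probability measured in the surrogate reaction.

```latex
% sigma_CN : calculated compound-nucleus formation cross section
% P_gamma  : gamma-decay probability measured in the surrogate reaction
% E*       : excitation energy matched to the neutron energy E_n
\sigma_{(n,\gamma)}(E_n) \approx \sigma_{\mathrm{CN}}(E_n)\, P_{\gamma}(E^{*}),
\qquad E^{*} = S_n + \frac{A}{A+1}\, E_n
```

Here S_n is the neutron separation energy of the compound nucleus and A the target mass number. The deviations studied in the thesis arise precisely where this spin-parity-independent factorization breaks down.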
197

Regression modeling with missing outcomes : competing risks and longitudinal data / Contributions aux modèles de régression avec réponses manquantes : risques concurrents et données longitudinales

Moreno Betancur, Margarita 05 December 2013 (has links)
Missing data are a common occurrence in medical studies. In regression modeling, missing outcomes limit our capability to draw inferences about the covariate effects of medical interest, which are those describing the distribution of the entire set of planned outcomes. In addition to losing precision, the validity of any method used to draw inferences from the observed data requires that some assumption about the mechanism leading to missing outcomes holds. Rubin (1976, Biometrika, 63:581-592) called the missingness mechanism MAR (for "missing at random") if the probability of an outcome being missing does not depend on missing outcomes when conditioning on the observed data, and MNAR (for "missing not at random") otherwise. This distinction has important implications regarding the modeling requirements for drawing valid inferences from the available data, but generally it is not possible to assess from these data whether the missingness mechanism is MAR or MNAR. Hence, sensitivity analyses should be routinely performed to assess the robustness of inferences to assumptions about the missingness mechanism. In the field of incomplete multivariate data, in which the outcomes are gathered in a vector for which some components may be missing, MAR methods are widely available and increasingly used, and several MNAR modeling strategies have also been proposed. On the other hand, although some sensitivity analysis methodology has been developed, this is still an active area of research.

The first aim of this dissertation was to develop a sensitivity analysis approach for continuous longitudinal data with drop-outs, that is, continuous outcomes that are ordered in time and completely observed for each individual up to a certain time point, at which the individual drops out so that all subsequent outcomes are missing. The proposed approach consists of assessing the inferences obtained across a family of MNAR pattern-mixture models indexed by a so-called sensitivity parameter that quantifies the departure from MAR. The approach was prompted by a randomized clinical trial investigating the benefits of a treatment for sleep-maintenance insomnia, from which 22% of the individuals had dropped out before the end of the study.

The second aim was to build on the existing theory for incomplete multivariate data to develop methods for competing risks data with missing causes of failure. The competing risks model is an extension of the standard survival analysis model in which failures from different causes are distinguished. Strategies for modeling competing risks functionals, such as the cause-specific hazards (CSH) and the cumulative incidence function (CIF), generally assume that the cause of failure is known for all patients, but this is not always the case. Some methods for regression with missing causes under the MAR assumption have already been proposed, especially for semi-parametric modeling of the CSH, but other useful models have received little attention, and MNAR modeling and sensitivity analysis approaches have never been considered in this setting. We propose a general framework for semi-parametric regression modeling of the CIF under MAR using inverse probability weighting and multiple imputation ideas. Also under MAR, we propose a direct likelihood approach for parametric regression modeling of the CSH and the CIF. Furthermore, we consider MNAR pattern-mixture models in the context of sensitivity analyses. In the competing risks literature, a starting point for methodological developments for handling missing causes was a stage II breast cancer randomized clinical trial in which 23% of the deceased women had a missing cause of death. We use these data to illustrate the practical value of the proposed approaches.
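As a small illustration of the inverse-probability-weighting ingredient of such a CIF framework (a sketch with hypothetical variables, not the authors' estimator): under MAR, a model for the probability that the cause is recorded is fitted to the observed data, and complete cases are weighted by the inverse of that estimated probability before entering the regression.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: for each failure, r = 1 if the cause was recorded.
rng = np.random.default_rng(1)
n = 400
age = rng.normal(60.0, 10.0, n)
p_record = 1.0 / (1.0 + np.exp(-0.05 * (age - 60.0)))  # simulated mechanism
r = rng.binomial(1, p_record)

# MAR step: model P(cause recorded | covariates) from the observed data ...
X = sm.add_constant(age)
p_hat = sm.Logit(r, X).fit(disp=0).predict(X)

# ... then weight complete cases by 1 / p_hat; incomplete cases get weight 0.
weights = np.where(r == 1, 1.0 / p_hat, 0.0)
print(weights[:5])  # these weights would enter the weighted CIF regression
```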
198

Plasmonic properties and applications of metallic nanostructures

Zhen, Yurong 16 September 2013 (has links)
Plasmonic properties and related novel applications are studied for various types of metallic nanostructures in one, two, or three dimensions. For a 1D nanostructure, the motion of free electrons in a metal film of nanoscale thickness is confined in the normal dimension and free in the other two. The surface plasmon polariton (SPP), a well-known elementary excitation, describes this free-electron motion at metal-dielectric surfaces. When the film is further perforated with a periodic array of holes, the periodicity introduces degeneracy, incurs energy-level splitting, and facilitates coupling between free-space photons and SPPs. We applied this concept to achieve a plasmonic perfect absorber. The experimentally observed splitting of the reflection dip is qualitatively explained by a perturbation theory based on this concept. If confined in 2D, the nanostructures become nanowires, which attract a broad range of research interests. We performed various studies on the resonance and propagation of metal nanowires with different materials, cross-sectional shapes, and form factors, in passive or active media, in support of corresponding experimental work. Finite-Difference Time-Domain (FDTD) simulations agree well with experiments and make fundamental mode analysis possible. Confined in 3D, the electron motion in a single metal nanoparticle (NP) leads to a localized surface plasmon resonance (LSPR) that enables another novel and important application: plasmonic heating. When the LSPR of a gold particle embedded in liquid is excited, the plasmon decays into heat in the particle and eventually heats the surrounding liquid. With sufficient optical excitation intensity, the heat transfer from NP to liquid undergoes an explosive process and forms a vapor envelope: a nanobubble. We characterized the size, pressure, and temperature of the nanobubble with a simple model relying on Mie calculations and a continuous-medium assumption. A novel effective-medium method is also developed to replace the role of the Mie calculations. The characterized temperature is in excellent agreement with that obtained by Raman scattering. If fabricated in an ordered cluster, NPs exhibit double-resonance features, and a double Fano-resonant structure is demonstrated to give the greatest enhancement of four-wave mixing efficiency.
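For background, the LSPR of a small sphere is often estimated in the quasi-static limit, where the polarizability diverges as Re(eps) approaches -2*eps_m; here is a minimal sketch, with rough gold-like Drude parameters assumed rather than fitted:

```python
import numpy as np

def drude_epsilon(omega, eps_inf=9.0, omega_p=1.37e16, gamma=1.0e14):
    """Drude permittivity; parameter values are rough gold-like assumptions."""
    return eps_inf - omega_p**2 / (omega**2 + 1j * gamma * omega)

def quasistatic_polarizability(omega, radius, eps_m=1.77):
    """Quasi-static polarizability of a small sphere in a medium (water-like
    eps_m); resonant when Re(eps) approaches -2 * eps_m."""
    eps = drude_epsilon(omega)
    return 4 * np.pi * radius**3 * (eps - eps_m) / (eps + 2 * eps_m)

omegas = np.linspace(2.0e15, 5.0e15, 500)  # rad/s, visible range
alpha = quasistatic_polarizability(omegas, radius=20e-9)
peak = omegas[np.argmax(np.abs(alpha))]
print(f"LSPR peak near {peak:.3e} rad/s")
```

Full Mie calculations, as used in the thesis, extend this picture to particle sizes where the quasi-static approximation no longer holds.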
199

La préservation du système bancaire par la régulation : l'exemple du système bancaire comorien / The preservation of the banking system by regulation : the example of the Comorian banking system

Msahazi, Abdillah 29 November 2014 (has links)
This thesis in management science aims to elucidate the difficulties faced by the stakeholders of the Comorian banking system and to provide solutions that ensure its soundness, stability, and sustainability. The thesis is divided into two parts. The first focuses on the national and international context of the Comorian banking system. The second highlights how the Comorian banks should adapt to financial transparency and prudential supervision requirements. The first title of the first part sheds light on the current organization of the Comorian banking system, based on the French model (Chapter 1), and on the contribution of the recent development of Islamic finance (Chapter 2) to closing the gap left by conventional banking. The reorganization of the Central Bank of the Comoros and the establishment of a local Islamic bank can contribute to a radical change in the Comorian banking system. The second title allows the regulator and lender of last resort (the Central Bank of the Comoros) to take as a model the international prudential standards proposed by the Basel Committee (Basel II and III) to regulate the Comorian banking system and so guarantee its soundness, stability, and sustainability (Chapter 1). Building on these Basel Committee recommendations, we provide solutions by developing the Msahazi Credit Scoring Matrix Corporation, intended for analyzing Comorian banks' data against endogenous risk (Chapter 2). We have also developed other matrices that Comorian banks can use for the internal rating of counterparty risks (companies and individuals) in order to guard against exogenous risk.

The second part of this thesis suggests two further solutions. The first is the requirement of financial transparency for Comorian banks (Pillar 3 of Basel II and III) in order to fight the financial malpractice orchestrated by certain agents (Title I). The first chapter introduces the objective of financial reporting in general and the way the Basel Committee (Basel II and III) asks banks to disclose their financial information (methods of risk assessment and capital). The second chapter presents to the Comorian banks and supervisory authorities the credit rating techniques practiced at the international level for gauging the creditworthiness of counterparties. The second solution provides the Central Bank of the Comoros with techniques for strengthening prudential supervision (Pillar 2 of Basel II and III) (Title II). The first chapter requires the bank's management and board of directors to define techniques for the control, identification, assessment, and management of risks, together with capital targets; the supervisory authority (the Central Bank of the Comoros) must in turn scrutinize all of these control tools. In the second and final chapter of the research, we propose new prudential supervision methods to the Central Bank of the Comoros to ensure the soundness, stability, and sustainability of the banking system. We hope that all of these suggestions will help preserve the soundness, stability, and sustainability of the Comorian banking system so that it can finance the development of the Comorian economy and lift the country out of poverty.
200

Settling-Time Improvements in Positioning Machines Subject to Nonlinear Friction Using Adaptive Impulse Control

Hakala, Tim 31 January 2006 (has links) (PDF)
A new method of adaptive impulse control is developed to precisely and quickly control the position of machine components subject to friction. Friction dominates the forces affecting fine positioning dynamics. Friction can depend on payload, velocity, step size, path, initial position, temperature, and other variables. Control problems such as steady-state error and limit cycles often arise when applying conventional control techniques to the position control problem. Studies in the last few decades have shown that impulsive control can produce repeatable displacements as small as ten nanometers without limit cycles or steady-state error in machines subject to dry sliding friction. These displacements are achieved through the application of short duration, high intensity pulses. The relationship between pulse duration and displacement is seldom a simple function. The most dependable practical methods for control are self-tuning; they learn from online experience by adapting an internal control parameter until precise position control is achieved. To date, the best known adaptive pulse control methods adapt a single control parameter. While effective, the single parameter methods suffer from sub-optimal settling times and poor parameter convergence. To improve performance while maintaining the capacity for ultimate precision, a new control method referred to as Adaptive Impulse Control (AIC) has been developed. To better fit the nonlinear relationship between pulses and displacements, AIC adaptively tunes a set of parameters. Each parameter affects a different range of displacements. Online updates depend on the residual control error following each pulse, an estimate of pulse sensitivity, and a learning gain. After an update is calculated, it is distributed among the parameters that were used to calculate the most recent pulse. As the stored relationship converges to the actual relationship of the machine, pulses become more accurate and fewer pulses are needed to reach each desired destination. When fewer pulses are needed, settling time improves and efficiency increases. AIC is experimentally compared to conventional PID control and other adaptive pulse control methods on a rotary system with a position measurement resolution of 16000 encoder counts per revolution of the load wheel. The friction in the test system is nonlinear and irregular with a position dependent break-away torque that varies by a factor of more than 1.8 to 1. AIC is shown to improve settling times by as much as a factor of two when compared to other adaptive pulse control methods while maintaining precise control tolerances.
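A toy sketch of the multi-parameter adaptation idea described above (an illustration under stated assumptions, not the author's implementation): one pulse-duration parameter per displacement range, with the parameter used for the most recent pulse updated from the residual error, a sensitivity estimate, and a learning gain.

```python
import numpy as np

class AdaptiveImpulseTable:
    """Toy multi-parameter adaptive impulse control: one pulse-duration
    parameter per displacement range, updated after each pulse."""

    def __init__(self, bin_edges, init_duration=1.0, gain=0.5, sensitivity=1.0):
        self.bin_edges = np.asarray(bin_edges, float)  # displacement ranges
        self.durations = np.full(len(bin_edges) - 1, init_duration)
        self.gain = gain                # learning gain
        self.sensitivity = sensitivity  # assumed displacement per unit duration

    def select(self, step_size):
        i = int(np.clip(np.searchsorted(self.bin_edges, step_size) - 1,
                        0, len(self.durations) - 1))
        return i, self.durations[i]

    def update(self, i, intended, achieved):
        # Credit the error to the parameter that produced the last pulse.
        self.durations[i] += self.gain * (intended - achieved) / self.sensitivity
        self.durations[i] = max(self.durations[i], 0.0)

def plant(duration):
    # Hypothetical friction-dominated response: nonlinear in pulse duration.
    return 0.8 * duration ** 1.2

table = AdaptiveImpulseTable(bin_edges=[0, 1, 2, 4, 8, 16])
position, target, pulses = 0.0, 12.0, 0
while abs(target - position) > 0.05 and pulses < 50:
    step_size = abs(target - position)
    i, duration = table.select(step_size)
    moved = plant(duration)
    position += np.sign(target - position) * moved
    table.update(i, step_size, moved)
    pulses += 1
print(f"position {position:.3f} after {pulses} pulses (target {target})")
```

With several parameters, each displacement range converges to its own pulse duration, which is what shortens settling times relative to single-parameter adaptation.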
