11

Modelling And Controller Design Of The Gun And Turret System For An Aircraft

Mert, Ahmet 01 February 2009 (has links)
Gun and gun turret systems are the primary units of an aircraft's weapon systems. They are required to hit targets accurately during operations, which calls for complete, high-precision control of the weapon system. This is achieved by accurate modelling of the system and the design of a suitable controller. This study presents the modelling of, and controller design for, the gun and turret system of an aircraft. For the controller design, the mathematical model of the system is first constructed. The controller is then designed to position the turret as the target comes into sight. The reference input to the controller is obtained either from a FLIR (Forward Looking Infrared) unit or from a HCU (Hand Control Unit). The basic specification for the controller is to hold the error signal within the 5.5° positioning envelope. This specification is satisfied by designing Linear Quadratic Gaussian and Internal Model Control type controllers. The performance of the overall system has been examined both in simulation studies and on the real physical system. The results show that the designed system comfortably exceeds the requirements.
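As a loose illustration of one of the controller types named in this abstract (Linear Quadratic Gaussian), the following is a minimal sketch of an LQG design for a hypothetical single-axis turret modelled as a double integrator; the inertia, weighting matrices and noise covariances are assumed placeholder values, not parameters from the thesis.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical single-axis turret: state x = [angle, angular rate],
# input u = motor torque. The inertia J is an assumed placeholder value.
J = 0.5
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0 / J]])
C = np.array([[1.0, 0.0]])           # only the turret angle is measured

# LQR part: penalise pointing error heavily, control effort lightly.
Q = np.diag([100.0, 1.0])
R = np.array([[0.01]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)      # state-feedback gain, u = -K x_hat

# Kalman filter part: assumed process/measurement noise intensities.
W = np.diag([1e-4, 1e-2])            # process noise covariance
V = np.array([[1e-5]])               # measurement noise covariance
S = solve_continuous_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(V)       # observer (Kalman) gain

print("LQR gain K:", K)
print("Kalman gain L:", L.ravel())
```

Combining the state-feedback gain with the observer gives the LQG compensator; in a real design the weights and noise covariances would be chosen against the positioning-envelope specification rather than picked arbitrarily as here.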
12

[pt] ANÁLISE DE CENÁRIOS: INTEGRANDO A GESTÃO DO RISCO OPERACIONAL COM A MENSURAÇÃO DO CAPITAL - A EXPERIÊNCIA DO BNDES / [en] SCENARIO ANALYSIS: INTEGRATING THE OPERATIONAL RISK MANAGEMENT WITH THE CAPITAL MEASUREMENT - THE BNDES EXPERIENCE

MACELLY OLIVEIRA MORAIS 09 December 2016 (has links)
Operational risk, defined as the possibility of losses resulting from failure, deficiency or inadequacy of internal processes, people and systems, or from external events, is present in any activity of an institution, financial or not. These features make the management and measurement of this risk challenging and completely different from other types of risk. Although Basel II proposed, in 2004, guidelines for internal operational risk models, which aim to determine the amount of capital that must be set aside to cover this risk, internal operational risk models have not developed as far as credit and market risk models. This has recently led the Basel Committee to signal its intention to eliminate internal models for measuring operational risk, replacing all current approaches, including internal models, with a single standardized approach based on the internal losses of financial institutions.
The absence of comprehensive internal databases covering all the operational risks to which a financial institution is exposed has created the need to use other elements, such as external loss data and scenarios. However, these elements are criticized for their subjectivity. This thesis aims to demonstrate the use of scenario analysis within the Loss Distribution Approach (LDA) for calculating regulatory capital for operational risk, drawing on the experience of the Brazilian Development Bank (BNDES) in integrating operational risk management with capital measurement. The proposed methodology enabled, among other things: (i) the measurement of regulatory capital considering feasible scenarios; (ii) the identification of tail and body scenarios of the aggregate loss distribution that are not reflected in the internal loss database; (iii) the comprehensive identification and measurement of BNDES's operational risks; (iv) the production of information that can guide risk management in identifying risks whose treatment should be prioritized; (v) the development of a risk culture, given the involvement of experts from several units; and (vi) the use of a methodology that is understandable to all business experts, who are the ones who know the risks of their activities.
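For readers unfamiliar with the Loss Distribution Approach mentioned above, here is a minimal, generic Monte Carlo sketch of how an aggregate annual loss distribution and a 99.9% capital figure can be obtained from assumed frequency and severity parameters; the Poisson/lognormal choices and all numbers are illustrative assumptions, not BNDES figures or the methodology of the thesis.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Illustrative assumptions (not BNDES parameters): annual loss frequency
# is Poisson, individual loss severity is lognormal.
lam = 12.0             # expected number of losses per year
mu, sigma = 10.0, 2.0  # lognormal parameters of a single loss

n_years = 100_000
annual_losses = np.empty(n_years)
for i in range(n_years):
    n = rng.poisson(lam)                          # simulated loss count
    annual_losses[i] = rng.lognormal(mu, sigma, size=n).sum()

# Regulatory capital proxy: 99.9th percentile of the aggregate distribution.
capital = np.quantile(annual_losses, 0.999)
print(f"expected annual loss: {annual_losses.mean():,.0f}")
print(f"99.9% quantile (capital proxy): {capital:,.0f}")
```

In an LDA with scenario analysis, the frequency and severity parameters would be informed by expert scenarios as well as internal loss data, which is the integration the abstract describes.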
13

Modular Architecture for an Adaptive, Personalisable Knee-Ankle-Foot-Orthosis Controlled by Artificial Neural Networks

Braun, Jan-Matthias 19 November 2015 (has links)
No description available.
14

Automatic control strategies of mean arterial pressure and cardiac output : MIMO controllers, PID, internal model control, adaptive model reference, and neural nets are developed to regulate mean arterial pressure and cardiac output using the drugs Sodium Nitroprusside and Dopamine

Enbiya, Saleh Abdalla January 2013 (has links)
High blood pressure, also called hypertension, is one of the most common diseases worldwide and is a major risk factor for stroke, myocardial infarction, vascular disease, and chronic kidney disease. If blood pressure is controlled and oscillations in the hemodynamic variables are reduced, patients experience fewer complications after surgery. In clinical practice this is usually achieved with manual drug delivery. Given that different patients have different sensitivity and reaction time to drugs, determining the right drug infusion rates manually may be difficult. This is a problem for which automatic drug delivery can provide a solution, especially if it is designed to adapt to variations in the patient's condition. This research presents an investigation into the development of controllers for abnormal blood pressure (hypertension) in postoperative patients. Control of the drug infusion rates is used to simultaneously regulate hemodynamic variables, namely the Mean Arterial Pressure (MAP) and the Cardiac Output (CO), at the desired level. The implementation of an optimal control system is essential to improve the quality of patient care and to reduce the workload of healthcare staff and costs. Many researchers have previously studied the modelling and/or control of abnormal blood pressure in postoperative patients; however, there are still many concerns about achieving a smooth transition of blood pressure without side effects. Blood pressure abnormalities fall into two categories: high blood pressure (hypertension), which often occurs after cardiac surgery, and low blood pressure (hypotension), which occurs during cardiac surgery. To achieve an optimal control solution for these abnormal blood pressures, many methods have been proposed; one common method is infusing a blood-pressure-related drug to maintain the pressure at the desired level. Several kinds of vasodilating drugs, such as Sodium Nitroprusside (SNP), Dopamine (DPM) and Nitro-glycerine (NTG), can be used to treat postoperative patients and are also used in hypertensive emergencies to keep the blood pressure at a safe level. A comparative performance study of two types of algorithms is presented in chapter four: the Internal Model Control (IMC) and Proportional-Integral-Derivative (PID) controllers. The resulting controllers are implemented, tested and verified for three patient sensitivity responses. SNP is used in all three patient cases in order to reduce the pressure smoothly and maintain it at the desired level. A Genetic Algorithm (GA) optimization technique is implemented to optimize the controllers' parameters. A set of experiments is presented to demonstrate the merits and capabilities of the control algorithms. The simulation results in chapter four demonstrate that the performance criteria are satisfied with both the IMC and PID controllers; on the other hand, the settling time with the PID controller is shorter than with the IMC controller for all three patient responses. Using multiple interacting drugs to control both the MAP and CO of patients with different drug sensitivities is a challenging task. A Multivariable Model Reference Adaptive Control (MMRAC) algorithm is therefore developed using a two-input, two-output patient model.
Because of differences in patients' sensitivity to the drugs, and in order to cover a wide range of patients, Model Reference Adaptive Control (MRAC) is implemented to obtain the optimal infusion rates of DPM and SNP; this is developed in chapters five and six. Computer simulations were carried out to investigate the performance of this controller. The results show that the proposed adaptive scheme is robust with respect to disturbances and variations in model parameters. However, the simulations also demonstrated that this algorithm cannot cover the wide range of patients' drug sensitivities; because of that shortcoming, a PID controller whose parameters are tuned by a Neural Network was designed and implemented. The parameters of the PID controller were optimized offline using the Matlab genetic algorithm. The proposed Neuro-PID controller has been tested and validated to demonstrate its merits and capabilities, compared with existing approaches, in covering a wide range of patients.
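As a generic aside on the IMC idea referenced in this abstract (not the thesis's patient models, tuning or results), the sketch below applies one common form of the IMC-based PID tuning rules to an assumed first-order-plus-dead-time response of mean arterial pressure to SNP infusion; all numerical values are illustrative placeholders.

```python
# One common form of the IMC-based PID tuning rules for a
# first-order-plus-dead-time (FOPDT) process G(s) = K * exp(-theta*s) / (tau*s + 1),
# where lam is the IMC filter time constant trading speed against robustness.
def imc_pid(K, tau, theta, lam):
    """Return (Kc, Ti, Td) for an assumed FOPDT process under IMC tuning."""
    Kc = (tau + theta / 2.0) / (K * (lam + theta / 2.0))
    Ti = tau + theta / 2.0
    Td = tau * theta / (2.0 * tau + theta)
    return Kc, Ti, Td

# Assumed MAP response to SNP: gain in mmHg per (ml/h), time constant and
# transport delay in seconds -- placeholder numbers, not values from the thesis.
Kc, Ti, Td = imc_pid(K=-0.7, tau=40.0, theta=30.0, lam=60.0)
print(f"Kc = {Kc:.3f}, Ti = {Ti:.1f} s, Td = {Td:.1f} s")
```

The negative process gain (SNP lowers pressure) carries through to a negative controller gain; increasing `lam` slows the response but makes the loop more tolerant of patient-to-patient variation, which is the trade-off a GA-based tuning would explore.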
15

General Insurance Reserve Risk Modeling Based on Unaggregated Data / Modelování rizika rezerv v neživotním pojištění založené na neagregovaných datech

Zimmermann, Pavel January 2004 (has links)
Recently the field of actuarial mathematics has experienced considerable development, driven by a significant increase in demand for insurance and financial risk quantification since the implementation of the international reporting standards (IFRS) and the solvency reporting framework (Solvency II) began. It appears that the key question for solvency measurement is the determination of the probability distribution of the future cash flows of an insurance company. Solvency is then reported through an appropriate risk measure based, for example, on a percentile of this distribution. While currently popular models are based solely on aggregated data (such as the total loss development over a certain time period), the main objective of this work is to scrutinize the possibilities of modelling the reserve risk (roughly speaking, the distribution of the ultimate incurred value of claims that have already happened in the past) based directly on individual claims. Such models have not yet become popular and, to the author's knowledge, an overview of them has not been published previously. The assumptions and specifications of the already published models were compared with practical experience and some inadequacies were pointed out. Furthermore, a new reserve risk model was constructed, which is believed to have assumptions and properties that are more suitable in practice than those of the existing models. Theoretical aspects of the new model were studied and the distribution of the ultimate incurred value (the modelled variable) was derived. Emphasis was also placed on practical aspects of the developed model and its applicability in industrial use. Therefore, some restrictive assumptions, which might be considered realistic in a variety of practical cases and which lead to a significant simplification of the model, were identified throughout the work. Furthermore, algorithms to reduce the number of necessary calculations were developed. In the last chapters of the work, effort was devoted to methods of estimating the considered parameters while respecting practical limitations (such as missing observations at the time of modelling). For this purpose, survival analysis was applied, among other methods.
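To make the contrast with aggregate methods concrete, here is a toy simulation of the ultimate incurred value built directly from individual claims; the reporting-delay and severity distributions and all parameters are invented for illustration and are not the model developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy portfolio: claims that occurred during the past year; some are still
# unreported at the valuation date. All distributions are illustrative.
n_incurred = rng.poisson(500)                                  # claims that occurred
report_delay = rng.exponential(scale=0.6, size=n_incurred)     # reporting delay [years]
severity = rng.gamma(shape=2.0, scale=5_000.0, size=n_incurred)  # individual claim cost

valuation_time = 1.0                       # valuation one year after the start
reported = report_delay < valuation_time

paid_so_far = severity[reported].sum()     # simplification: reported = fully paid
ultimate = severity.sum()                  # ultimate incurred value of past claims
reserve = ultimate - paid_so_far           # IBNR-style reserve for this scenario

print(f"reported claims: {reported.sum()} / {n_incurred}")
print(f"reserve in this scenario: {reserve:,.0f}")

# Repeating this simulation many times yields the reserve-risk distribution,
# from which a percentile-based solvency measure can be read off.
```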
16

Estratégia de controle robusto para filtro ativo paralelo sem detecção de harmônicos de correntes / Robust control strategy for a shunt active power filter without harmonic current detection

Sousa, Raphaell Maciel de 11 February 2011 (has links)
Conventional control strategies used in shunt active power filters (SAPF) employ real-time instantaneous harmonic detection schemes, usually implemented with digital filters. This increases the number of current sensors in the filter structure, which results in high costs. Furthermore, these detection schemes introduce time delays that can deteriorate the harmonic compensation performance. Differently from the conventional control schemes, this work proposes a non-standard control strategy which indirectly regulates the phase currents of the power mains. The reference currents of the system are generated by the dc-link voltage controller and are based on the active power balance of the SAPF system. The reference currents are aligned with the phase angle of the mains voltage vector, which is obtained by using a dq phase-locked loop (PLL) system. The current control strategy is implemented by an adaptive pole placement control strategy integrated with a variable structure control scheme (VS-APPC). In the VS-APPC, the internal model principle (IMP) of the reference currents is used to achieve zero steady-state tracking error of the power system currents. This forces the phase currents of the mains to be sinusoidal with low harmonic content. Moreover, the current controllers are implemented in the stationary reference frame to avoid transformations to the mains voltage vector reference coordinates. The proposed current control strategy enhances the performance of the SAPF, with fast transient response and robustness to parametric uncertainties. Experimental results are presented to demonstrate the effectiveness of the proposed SAPF control system.
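As a generic aside on the dq PLL mentioned in this abstract (a sketch of a standard synchronous-reference-frame PLL, not the author's implementation; the grid values, sampling period and PI gains are assumed), the loop below estimates the phase angle of a balanced three-phase voltage.

```python
import numpy as np

# Minimal synchronous-reference-frame (dq) PLL sketch. Grid amplitude,
# frequency, sampling period and PI gains are assumed placeholder values.
Ts = 1e-4                      # sampling period [s]
w_nom = 2 * np.pi * 50.0       # nominal grid frequency [rad/s]
kp, ki = 100.0, 2000.0         # PI gains of the loop filter

theta_hat, integ = 0.0, 0.0
t = np.arange(0.0, 0.2, Ts)
theta_true = w_nom * t + 0.5   # true grid phase with an initial offset

for k in range(t.size):
    # Three-phase measurement (balanced, unit amplitude for simplicity).
    va = np.cos(theta_true[k])
    vb = np.cos(theta_true[k] - 2 * np.pi / 3)
    vc = np.cos(theta_true[k] + 2 * np.pi / 3)
    # Clarke transform to the alpha-beta frame.
    valpha = (2 / 3) * (va - 0.5 * vb - 0.5 * vc)
    vbeta = (2 / 3) * (np.sqrt(3) / 2) * (vb - vc)
    # Park transform with the estimated angle; vq acts as the phase error.
    vq = -valpha * np.sin(theta_hat) + vbeta * np.cos(theta_hat)
    # PI loop filter plus integration of the estimated frequency.
    integ += ki * vq * Ts
    w_hat = w_nom + kp * vq + integ
    theta_hat = (theta_hat + w_hat * Ts) % (2 * np.pi)

print("final phase error [rad]:",
      np.angle(np.exp(1j * (theta_true[-1] - theta_hat))))
```

The q-axis voltage goes to zero when the estimated angle locks onto the grid phase, which is what allows the reference currents to be aligned with the mains voltage vector as described above.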
17

Prédire le passé et le futur : rôle des représentations motrices dans l'inférence du mouvement / Forecasting the past and the future : role of the motor representations in the motion inference

Carlini, Alessandro 12 October 2012 (has links)
The effectiveness of the visual system rests on a complex processing network that relies on cortical, sub-cortical and peripheral structures. The purpose of this research is to improve our understanding of the processes underlying the visual perception of motion, and to produce a computational model able to reproduce the characteristics of human visual tracking of a moving object. This work includes an extensive bibliographic review and a series of experiments, and the thesis consists of two parts. The first part concerns the determination of performance in the "backward" inference of a partially visible movement. It consists in defining the involvement of exogenous information (retinal signals) and endogenous information (internal models of the observed action) in the kinematic reconstruction of the partially occluded trajectory of a moving target. Our results support the hypothesis that the central nervous system adopts a mechanism based on internal models when reconstructing the past kinematics of biological motion. The second part complements the first and aims to identify the structure and functional characteristics of the tracking system, as well as to understand the origin of the systematic errors humans make when localising a target. We developed a computational model in Matlab, based on a motion extrapolation mechanism, which is capable of reproducing the experimental data in the localisation task.
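As a loose illustration of the motion-extrapolation idea (not the Matlab model described in this thesis; a simple constant-velocity predictor is assumed here, and the trajectory, occlusion window and noise level are invented), the sketch below extrapolates the position of a target that disappears behind an occluder.

```python
import numpy as np

# Constant-velocity extrapolation of an occluded target (illustrative only).
dt = 0.02
t = np.arange(0.0, 1.0, dt)
true_pos = 0.3 * t + 0.05 * np.sin(2 * np.pi * t)   # target position [m]
visible = t < 0.6                                    # target occluded after 0.6 s

rng = np.random.default_rng(2)
observed = true_pos + rng.normal(0.0, 0.002, size=t.size)  # noisy retinal signal

est_pos = np.empty_like(t)
last_pos, last_vel = observed[0], 0.0
for k in range(t.size):
    if visible[k]:
        # While visible: position from the noisy signal, velocity from a
        # simple finite difference.
        if k > 0:
            last_vel = (observed[k] - observed[k - 1]) / dt
        last_pos = observed[k]
    else:
        # During occlusion: extrapolate with the last estimated velocity,
        # playing the role of an internal forward model.
        last_pos = last_pos + last_vel * dt
    est_pos[k] = last_pos

err = est_pos[-1] - true_pos[-1]
print(f"localisation error at the end of occlusion: {err * 100:.1f} cm")
```

The residual error at the end of the occluded interval mirrors the kind of systematic localisation bias the second part of the thesis sets out to explain.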
18

Versatilité et infaisabilité : vers la fin des théories computationnelles du comportement moteur / Versatility and intractability : towards the end of computational theories of motor behavior

Flament Fultot, Martin 08 November 2019 (has links)
Motor behavior is a phenomenon in which the components making up a biological system are organized so as to ensure the coordination of a purposeful movement. According to computational theories, behavior is defined as a motor problem whose solution can be found by hierarchically divided systems. The components process and communicate information representing the relevant variables of the motor problem (positions, trajectories, velocities, forces, etc.), which are in turn assumed to be organized as a hierarchy of increasing abstraction and complexity. The challenge is to tackle the four core problems of behavior: a) the high number of degrees of freedom and their interactions; b) the redundancy of the degrees of freedom; c) the anticipation of the effects of movement; d) the uncertainty in the information. Classical computational theories advance explanatory schemas made of structured sets of internal models (forward and inverse). More recently, the Bayesian approach has advanced a more homogeneous hierarchical schema which is also supposed to account for uncertainty in the information. This research shows that computational theories, including the Bayesian approach, are crippled by an unsolvable dilemma. The first horn is that if the models scale up while staying computationally tractable, i.e. the computations can be carried out in a reasonable amount of time, then they fail to reproduce the versatility that characterizes the behavior of living beings. The second horn is that if the models aspire to reproduce biological versatility, then they are intractable.
19

Automatic Control Strategies of Mean Arterial Pressure and Cardiac Output. MIMO controllers, PID, internal model control, adaptive model reference, and neural nets are developed to regulate mean arterial pressure and cardiac output using the drugs sodium Nitroprusside and dopamine

Enbiya, Saleh A. January 2013 (has links)
High blood pressure, also called hypertension, is one of the most common diseases worldwide and is a major risk factor for stroke, myocardial infarction, vascular disease, and chronic kidney disease. If blood pressure is controlled and oscillations in the hemodynamic variables are reduced, patients experience fewer complications after surgery. In clinical practice this is usually achieved with manual drug delivery. Given that different patients have different sensitivity and reaction time to drugs, determining the right drug infusion rates manually may be difficult. This is a problem for which automatic drug delivery can provide a solution, especially if it is designed to adapt to variations in the patient's condition. This research presents an investigation into the development of controllers for abnormal blood pressure (hypertension) in postoperative patients. Control of the drug infusion rates is used to simultaneously regulate hemodynamic variables, namely the Mean Arterial Pressure (MAP) and the Cardiac Output (CO), at the desired level. The implementation of an optimal control system is essential to improve the quality of patient care and to reduce the workload of healthcare staff and costs. Many researchers have previously studied the modelling and/or control of abnormal blood pressure in postoperative patients; however, there are still many concerns about achieving a smooth transition of blood pressure without side effects. Blood pressure abnormalities fall into two categories: high blood pressure (hypertension), which often occurs after cardiac surgery, and low blood pressure (hypotension), which occurs during cardiac surgery. To achieve an optimal control solution for these abnormal blood pressures, many methods have been proposed; one common method is infusing a blood-pressure-related drug to maintain the pressure at the desired level. Several kinds of vasodilating drugs, such as Sodium Nitroprusside (SNP), Dopamine (DPM) and Nitro-glycerine (NTG), can be used to treat postoperative patients and are also used in hypertensive emergencies to keep the blood pressure at a safe level. A comparative performance study of two types of algorithms is presented in chapter four: the Internal Model Control (IMC) and Proportional-Integral-Derivative (PID) controllers. The resulting controllers are implemented, tested and verified for three patient sensitivity responses. SNP is used in all three patient cases in order to reduce the pressure smoothly and maintain it at the desired level. A Genetic Algorithm (GA) optimization technique is implemented to optimize the controllers' parameters. A set of experiments is presented to demonstrate the merits and capabilities of the control algorithms. The simulation results in chapter four demonstrate that the performance criteria are satisfied with both the IMC and PID controllers; on the other hand, the settling time with the PID controller is shorter than with the IMC controller for all three patient responses. Using multiple interacting drugs to control both the MAP and CO of patients with different drug sensitivities is a challenging task. A Multivariable Model Reference Adaptive Control (MMRAC) algorithm is therefore developed using a two-input, two-output patient model.
Because of differences in patients' sensitivity to the drugs, and in order to cover a wide range of patients, Model Reference Adaptive Control (MRAC) is implemented to obtain the optimal infusion rates of DPM and SNP; this is developed in chapters five and six. Computer simulations were carried out to investigate the performance of this controller. The results show that the proposed adaptive scheme is robust with respect to disturbances and variations in model parameters. However, the simulations also demonstrated that this algorithm cannot cover the wide range of patients' drug sensitivities; because of that shortcoming, a PID controller whose parameters are tuned by a Neural Network was designed and implemented. The parameters of the PID controller were optimized offline using the Matlab genetic algorithm. The proposed Neuro-PID controller has been tested and validated to demonstrate its merits and capabilities, compared with existing approaches, in covering a wide range of patients. / Libyan Ministry of Higher Education scholarship
20

Rôle de la vision pour le contrôle de la dynamique du mouvement lors d'un geste de pointage manuel chez l'adulte ainsi que chez l'enfant / Role of vision in controlling movement dynamics during a manual pointing gesture in adults and children

Mackrous, Isabelle January 2009 (has links)
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
