331

Ekonomické scénáře v pojišťovnictví / Economic scenarios in insurance

Krýcha, Daniel January 2012
In this thesis we focus on interest-rate modelling and related practical aspects. We explain the significance of generated interest-rate scenarios for the economic results of both life and non-life insurance companies. We analyse the currently known approaches to this problem and describe the selected models in detail. Given the practical focus of this thesis, we address the methods used to calibrate the models. Furthermore, we employ these methods in an extensive numerical study that aims to reveal the weaknesses and strengths of particular calibration methods when implementing a specific model, and to evaluate their potential application in actuarial practice. The central model of this work is the CIR (Cox-Ingersoll-Ross) model.
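
For readers unfamiliar with the CIR model named above, the following minimal Python sketch (not the author's code; the parameter values, the data and the OLS-on-Euler calibration scheme are illustrative assumptions) simulates a CIR short-rate path and recovers its parameters by least squares:

```python
import numpy as np

def simulate_cir(a, b, sigma, r0, dt, n_steps, rng):
    """Euler scheme for dr = a*(b - r)*dt + sigma*sqrt(r)*dW."""
    r = np.empty(n_steps + 1)
    r[0] = r0
    for t in range(n_steps):
        rp = max(r[t], 0.0)                      # guard against tiny negative excursions
        dw = rng.normal(0.0, np.sqrt(dt))
        r[t + 1] = r[t] + a * (b - rp) * dt + sigma * np.sqrt(rp) * dw
    return r

def calibrate_cir_ols(r, dt):
    """OLS calibration on the Euler-discretised dynamics:
    (r_{t+1}-r_t)/sqrt(r_t) = a*b*dt/sqrt(r_t) - a*sqrt(r_t)*dt + sigma*sqrt(dt)*eps."""
    rt, rnext = r[:-1], r[1:]
    y = (rnext - rt) / np.sqrt(rt)
    X = np.column_stack([dt / np.sqrt(rt), -np.sqrt(rt) * dt])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    ab, a_hat = coef                             # coef = [a*b, a]
    resid = y - X @ coef
    sigma_hat = resid.std(ddof=2) / np.sqrt(dt)
    return a_hat, ab / a_hat, sigma_hat

rng = np.random.default_rng(0)
path = simulate_cir(a=0.5, b=0.03, sigma=0.05, r0=0.02, dt=1/252, n_steps=5000, rng=rng)
print(calibrate_cir_ols(path, dt=1/252))         # roughly recovers (0.5, 0.03, 0.05)
```
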
332

Análise quantitativa de alcanolaminas e CO2 no processo de absorção química via espectroscopia no infravermelho / Quantitative analysis of alkanolamines and CO2 in chemical absorption process by infrared spectroscopy

Denise Trigilio Tavares 15 December 2015
Due to the need to quantify the carbamates formed in the chemical absorption of CO2 by monoethanolamine (MEA) and diethanolamine (DEA), analytical calibration curves were built by infrared (IR) spectroscopy to determine the MEA and DEA carbamate content and to quantify MEA, DEA and methyldiethanolamine (MDEA) in pure solutions and in mixtures. The analytical procedure comprised the preparation of the standard samples that make up the calibration curves and their quantification using reference instrumental techniques: potentiometric titration, gravimetry and GC-FID. The standard samples of pure amines were quantified by potentiometric titration, which allowed accurate detection of the equivalence point. The compositions of the analytical standards for mixtures were established according to a triangular mixture diagram, and the content of each component was determined by gravimetry and GC-FID. Both the hydrolysis and the thermal degradation of the carbamates restricted the use of potentiometric titration, HPLC-MS/MS and GC-FID as reference techniques for their quantification. These restrictions, together with the lack of commercial availability of these carbamates, led to the use of 1H NMR spectroscopy for their quantitative determination. The calibration curves showed an excellent fit of the predicted values to the reference values, with a maximum prediction error of 0.594 %. Two chemical absorption processes of CO2 were carried out at semi-pilot scale, yielding 1.02 % and 0.98 % of CO2 absorbed by the MEA and DEA solutions, respectively. The same processes were simulated in the Aspen Plus software, giving 1.18 % of CO2 absorbed by the MEA solution and 1.00 % by the DEA solution.
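
As a generic illustration of the calibration-curve idea used above (not the thesis procedure; all numbers and the univariate linear form are assumptions), a calibration line can be fitted to reference data and inverted to predict concentrations:

```python
import numpy as np

# Hypothetical reference concentrations (wt %) and IR band areas for an MEA
# series -- the numbers are made up for illustration, not taken from the thesis.
conc = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
band_area = np.array([0.21, 0.43, 0.62, 0.85, 1.04, 1.27])

# Ordinary least-squares calibration line: band_area = slope * conc + intercept.
slope, intercept = np.polyfit(conc, band_area, deg=1)

def predict_concentration(area):
    """Invert the calibration line to predict concentration from a measured band area."""
    return (area - intercept) / slope

# In-sample check of the prediction error against the reference concentrations.
predicted = predict_concentration(band_area)
print(f"max prediction error: {np.max(np.abs(predicted - conc)):.3f} wt %")
```
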
333

Uma contribuição ao desenvolvimento de sistemas baseados em visão estéreo para o auxílio a navegação de robôs móveis e veículos inteligentes / A contribution to the development of stereo-vision-based systems to aid the navigation of mobile robots and intelligent vehicles

Leandro Carlos Fernandes 04 December 2014
This thesis presents a contribution to the development of computer systems, based mainly on computer vision, used to aid the navigation of mobile robots and intelligent vehicles. We first propose a computer system architecture for intelligent vehicles intended to support both the driver, assisting the way the vehicle is driven, and autonomous control, providing greater safety and autonomy for vehicle traffic in urban areas, on highways and even in rural areas. This architecture has been improved and validated on the CaRINA I and CaRINA II platforms (Carro Robótico Inteligente para Navegação Autônoma), which were also developed and researched in the course of this thesis and allowed practical experimentation with the proposed concepts. In the context of intelligent and autonomous vehicles, sensors that provide 3D perception of the environment play a very important role, enabling obstacle avoidance and autonomous navigation, and the adoption of lower-cost sensors has been pursued in order to make commercial applications viable. Stereo cameras are devices that meet both requirements (cost and 3D perception), and they are the focus of the new automatic calibration method proposed in this thesis. The proposed method estimates the extrinsic parameters of a stereo camera system through an evolutionary process that considers only the consistency and quality of some elements of the scene in the resulting depth map. This is an original form of calibration that allows a user without deep knowledge of stereo vision to adjust the camera system to new configurations and needs. The system was tested with real images and obtained very promising results compared with traditional stereo-camera calibration methods, which rely on an interactive process of parameter estimation based on the presentation of a checkerboard pattern. The method is also a promising approach for fusing data from cameras and other sensors, since it allows the transformation matrices (the system's extrinsic parameters) to be adjusted so as to obtain a single reference frame in which the data from the different sensors are represented and grouped.
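
The evolutionary calibration idea can be sketched generically as follows. This is not the thesis implementation: the fitness function below merely rewards closeness to an assumed ground-truth extrinsic vector, standing in for the depth-map consistency score described above, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Candidate extrinsics: small rotations (rad) and translation offsets (m) of the
# right camera relative to nominal values -- purely illustrative parameters.
TRUE_EXTRINSICS = np.array([0.010, -0.004, 0.002, 0.120, 0.001, -0.002])  # assumed

def fitness(candidate):
    """Stand-in for the depth-map score: in the thesis, the candidate extrinsics
    would rectify the image pair and the score would reflect the consistency and
    quality of the resulting depth map. Here we reward closeness to an assumed
    ground truth so the loop is runnable."""
    return -np.sum((candidate - TRUE_EXTRINSICS) ** 2)

def evolve(pop_size=40, n_parents=8, n_gens=200, sigma0=0.05):
    """Simple (mu, lambda) evolution strategy over the 6 extrinsic parameters."""
    pop = rng.normal(0.0, sigma0, size=(pop_size, 6))
    sigma = sigma0
    for _ in range(n_gens):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-n_parents:]]   # keep the best candidates
        mean = parents.mean(axis=0)
        sigma *= 0.97                                    # slowly narrow the search
        pop = mean + rng.normal(0.0, sigma, size=(pop_size, 6))
    return mean

print(evolve())   # converges towards TRUE_EXTRINSICS
```
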
334

Desenvolvimento de sistema computacional para planejamento e controle da manutenção do reator IEA-R1 / Development of a computational system for planning and control of the IEA-R1 reactor maintenance

Mauro Onofre Martins 29 May 2015
Maintenance is an essential activity in nuclear reactors. The safety system components of an industrial plant should have a low probability of failure, especially when there is a high risk of accidents with the potential for environmental damage. In nuclear facilities, safety systems are a technical specification and a requirement for the operating licence. In order to manage the flow of information from IEA-R1 reactor maintenance, a computational system was developed that, besides planning and controlling all maintenance, keeps documents and records up to date in order to safeguard quality and ensure the safe operation of the IEA-R1 reactor. The system has access levels and produces detailed reports of all planned and executed maintenance, as well as an individual history of each piece of equipment over its service life in the facility. This work presents all stages of the system's development, its description, compatibilities, application, advantages and experimentally obtained results.
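
As a rough illustration of the kind of data model such a maintenance system might use (all class and field names here are hypothetical, not taken from the thesis), consider:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import List, Optional

class AccessLevel(Enum):
    """Hypothetical access levels; the thesis only states that access levels exist."""
    OPERATOR = 1
    MAINTENANCE = 2
    SUPERVISOR = 3

@dataclass
class MaintenanceOrder:
    equipment_id: str
    description: str
    scheduled: date
    executed: Optional[date] = None          # filled in once the work is done

@dataclass
class Equipment:
    equipment_id: str
    system: str                              # e.g. a reactor safety system
    history: List[MaintenanceOrder] = field(default_factory=list)

    def pending_orders(self) -> List[MaintenanceOrder]:
        """Orders planned but not yet executed, as would feed the periodic reports."""
        return [o for o in self.history if o.executed is None]

pump = Equipment("B-102", "primary cooling")
pump.history.append(MaintenanceOrder("B-102", "monthly inspection", date(2015, 5, 4)))
print([o.description for o in pump.pending_orders()])
```
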
335

Calibration, réalisme et détermination d'étudiants et d'étudiantes universitaires en fonction de l'expérience, de la distance temporelle, du genre et du niveau de réussite / Calibration, realism and determination of university students as a function of experience, temporal distance, gender and level of achievement

Cloutier, Catherine January 2004
No description available.
336

Adjustment of RapidScat Backscatter Measurements for Improved Radar Images

McDonald, Garrett Scott 01 June 2018
RapidScat is a spaceborne wind scatterometer mounted on the International Space Station (ISS). The RapidScat mission lasted from September 2014 to November 2016. RapidScat enables the measurement of diurnal patterns in sigma-0. This capability is possible because of the non-sun-synchronous orbit of the ISS, in which the local time of day (LTOD) of sigma-0 measurements gradually shifts over time at any given location. The ISS is a relatively unstable platform for wind scatterometers: because of its varying attitude, RapidScat experiences constant variation of its pointing vector. Variations of the pointing vector cause variations in the incidence angle of the measurement on the ground, which has a direct effect on sigma-0. In order to mitigate sigma-0 variations caused by incidence angle and LTOD, the dependence of sigma-0 on these parameters is modeled to enable a normalization procedure for sigma-0. These models of sigma-0 dependence are determined in part by comparing RapidScat data with other active Ku-band instruments. The normalization procedure adjusts the mean value of sigma-0 to be constant across the full range of significant parameter values, matching the mean of sigma-0 at a particular nominal parameter value. The normalization procedure is tested both in simulation and with real sigma-0 measurements. The simulated normalization procedure is effective at modeling and removing the sigma-0 dependence on incidence angle and LTOD over a homogeneous region, and it reduces the variance in simulated images. The normalization procedure also reduces variance in real backscatter images of the Amazon and of an arbitrary region in East Africa.
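
A hedged sketch of one possible normalization of this kind (the linear-plus-diurnal-harmonic form, the nominal incidence angle of 54 degrees and the nominal LTOD are assumptions, not the models fitted in the thesis):

```python
import numpy as np

def fit_sigma0_dependence(sigma0_db, inc_angle_deg, ltod_hours):
    """Least-squares fit of sigma-0 (dB) against incidence angle and a first
    diurnal harmonic of local time of day. This functional form is an
    illustrative assumption."""
    w = 2.0 * np.pi / 24.0
    X = np.column_stack([
        np.ones_like(sigma0_db),
        inc_angle_deg,
        np.cos(w * ltod_hours),
        np.sin(w * ltod_hours),
    ])
    coef, *_ = np.linalg.lstsq(X, sigma0_db, rcond=None)
    return coef

def normalize_sigma0(sigma0_db, inc_angle_deg, ltod_hours, coef,
                     nominal_angle=54.0, nominal_ltod=6.0):
    """Shift each measurement so that its modeled mean matches the modeled mean
    at a nominal incidence angle and LTOD (the nominal values are assumptions)."""
    w = 2.0 * np.pi / 24.0
    modeled = (coef[0] + coef[1] * inc_angle_deg
               + coef[2] * np.cos(w * ltod_hours) + coef[3] * np.sin(w * ltod_hours))
    nominal = (coef[0] + coef[1] * nominal_angle
               + coef[2] * np.cos(w * nominal_ltod) + coef[3] * np.sin(w * nominal_ltod))
    return sigma0_db - modeled + nominal

# Synthetic check over a homogeneous region: normalization should reduce variance.
rng = np.random.default_rng(0)
angle = rng.uniform(49.0, 56.0, 500)
ltod = rng.uniform(0.0, 24.0, 500)
sigma0 = (-7.0 + 0.05 * (angle - 54.0)
          + 0.3 * np.cos(2.0 * np.pi * ltod / 24.0) + rng.normal(0.0, 0.1, 500))
coef = fit_sigma0_dependence(sigma0, angle, ltod)
print(sigma0.std(), normalize_sigma0(sigma0, angle, ltod, coef).std())
```
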
337

Deep Probabilistic Models for Camera Geo-Calibration

Zhai, Menghua 01 January 2018
The ultimate goal of image understanding is to transfer visual images into numerical or symbolic descriptions of the scene that are helpful for decision making. Knowing when, where, and in which direction a picture was taken, the task of geo-calibration makes it possible to use imagery to understand the world and how it changes in time. Current models for geo-calibration are mostly deterministic, which in many cases fails to model the inherent uncertainties when the image content is ambiguous. Furthermore, without a proper modeling of the uncertainty, subsequent processing can yield overly confident predictions. To address these limitations, we propose a probabilistic model for camera geo-calibration using deep neural networks. While our primary contribution is geo-calibration, we also show that learning to geo-calibrate a camera allows us to implicitly learn to understand the content of the scene.
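
One way to make a geo-calibration output probabilistic, sketched here in PyTorch with assumed sizes and a von Mises heading distribution (the dissertation's actual architecture is not reproduced):

```python
import math
import torch
import torch.nn as nn

class HeadingHead(nn.Module):
    """Probabilistic output head: predict a von Mises distribution over camera
    heading (mean direction + concentration) instead of a single angle, so that
    ambiguous images can express their uncertainty. The feature dimension and
    layer sizes are illustrative assumptions."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.mean_head = nn.Linear(feat_dim, 2)       # predicts (cos, sin) of the mean heading
        self.log_kappa_head = nn.Linear(feat_dim, 1)  # predicts log-concentration

    def forward(self, features):
        cs = self.mean_head(features)
        loc = torch.atan2(cs[:, 1], cs[:, 0])                          # mean heading in (-pi, pi]
        kappa = torch.exp(self.log_kappa_head(features)).squeeze(-1) + 1e-3
        return torch.distributions.VonMises(loc, kappa)

def nll(dist, true_heading):
    """Negative log-likelihood: small when the predicted distribution places
    high density on the ground-truth heading."""
    return -dist.log_prob(true_heading).mean()

features = torch.randn(8, 512)                  # stand-in for CNN image features
target = torch.rand(8) * 2 * math.pi - math.pi  # ground-truth headings (radians)
loss = nll(HeadingHead()(features), target)
loss.backward()
print(float(loss))
```
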
338

A comparison of calibration methods and proficiency estimators for creating IRT vertical scales

Kim, Jungnam 01 January 2007
The main purpose of this study was to construct different vertical scales based on various combinations of calibration methods and proficiency estimators to investigate the impact different choices may have on these properties of the vertical scales that result: grade-to-grade growth, grade-to-grade variability, and the separation of grade distributions. Calibration methods investigated were concurrent calibration, separate calibration, and fixed a, b, and c item parameters for common items with simple prior updates (FSPU). Proficiency estimators investigated were Maximum Likelihood Estimator (MLE) with pattern scores, Expected A Posteriori (EAP) with pattern scores, pseudo-MLE with summed scores, pseudo-EAP with summed scores, and Quadrature Distribution (QD). The study used datasets from the Iowa Tests of Basic Skills (ITBS) in the Vocabulary, Reading Comprehension (RC), Math Problem Solving and Data Interpretation (MPD), and Science tests for grades 3 through 8. For each of the research questions, the following conclusions were drawn from the study. With respect to the comparisons of three calibration methods, for the RC and Science tests, concurrent calibration, compared to FSPU and separate calibration, showed less growth and more slowly decreasing growth in the lower grades, less decrease in variability over grades, and less separation in the lower grades in terms of horizontal distances. For the Vocabulary and MPD tests, differences in both grade-to-grade growth and in the separation of grade distributions were trivial. With respect to the comparisons of five proficiency estimators, for all content areas, the trend of pseudo-MLE ≥ MLE > QD > EAP ≥ pseudo-EAP was found in within-grade SDs, and the trend of pseudo-EAP ≥ EAP > QD > MLE ≥ pseudo-MLE was found in the effect sizes. However, the degree of decrease in variability over grades was similar across proficiency estimators. With respect to the comparisons of the four content areas, for the Vocabulary and MPD tests compared to the RC and Science tests, growth was less, but somewhat steady, and the decrease in variability over grades was less. For separation of grade distributions, it was found that the large growth suggested by larger mean differences for the RC and Science tests was reduced through the use of effect sizes to standardize the differences.
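
For context, the MLE and EAP proficiency estimators compared above can be sketched for a 3PL model as follows (item parameters and responses are made-up values; the operational ITBS scaling is more involved):

```python
import numpy as np

def p3pl(theta, a, b, c, D=1.7):
    """3PL item response function."""
    return c + (1.0 - c) / (1.0 + np.exp(-D * a * (theta - b)))

def log_likelihood(theta, responses, a, b, c):
    p = p3pl(theta, a, b, c)
    return np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

def estimate_theta(responses, a, b, c, grid=np.linspace(-4, 4, 161)):
    """Pattern-score MLE (grid maximiser) and EAP (posterior mean with a standard
    normal prior over the same quadrature points). Item parameters are assumed
    to be known, as after a prior calibration step."""
    ll = np.array([log_likelihood(t, responses, a, b, c) for t in grid])
    mle = grid[np.argmax(ll)]
    prior = np.exp(-0.5 * grid ** 2)
    post = np.exp(ll - ll.max()) * prior
    eap = np.sum(grid * post) / np.sum(post)
    return mle, eap

# Illustrative 5-item test (parameters and the response pattern are made up).
a = np.array([1.0, 0.8, 1.2, 0.9, 1.1])
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
c = np.array([0.2, 0.2, 0.2, 0.2, 0.2])
print(estimate_theta(np.array([1, 1, 1, 0, 0]), a, b, c))
```

The normal prior in the EAP estimator shrinks estimates toward the mean, which is consistent with the smaller within-grade SDs reported above for EAP relative to MLE.
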
339

Apprentissage simultané d'une tâche nouvelle et de l'interprétation de signaux sociaux d'un humain en robotique / Learning from unlabeled interaction frames

Grizou, Jonathan 24 October 2014
This thesis investigates how a machine can be taught a new task from unlabeled human instructions, that is, without knowing beforehand how to associate the human's communicative signals with their meanings. The theoretical and empirical work presented in this thesis provides the means to create calibration-free interactive systems, which allow humans to interact with machines, from scratch, using their own preferred teaching signals. It therefore removes the need for an expert to tune the system for each specific user, which constitutes an important step towards flexible personalized teaching interfaces, a key for the future of personal robotics. Our approach assumes the robot has access to a limited set of task hypotheses, which includes the task the user wants to solve. Our method consists of generating interpretation hypotheses of the teaching signals with respect to each hypothetic task. By building a set of hypothetic interpretations, i.e. a set of signal-label pairs for each task, the task the user wants to solve is the one that best explains the history of interaction. We consider different scenarios, including a pick-and-place robotics experiment with speech as the modality of interaction, and a navigation task in a brain-computer interaction scenario. In these scenarios, a teacher instructs a robot to perform a new task using initially unclassified signals, whose associated meaning can be a feedback (correct/incorrect) or a guidance (go left, right, up, ...). Our results show that a) it is possible to learn the meaning of unlabeled and noisy teaching signals, as well as a new task, at the same time, and b) it is possible to reuse the acquired knowledge about the teaching signals to learn new tasks faster. We further introduce a planning strategy that exploits the uncertainty in the task and in the signals' meanings to allow more efficient learning sessions. We present a study in which several real human subjects successfully control a virtual device using their brain and without relying on a calibration phase. Our system identifies, from scratch, the target intended by the user as well as the decoder of brain signals. Based on this work, but from another perspective, we introduce a new experimental setup to study how humans behave in asymmetric collaborative tasks. In this setup, two humans have to collaborate to solve a task, but the channels of communication they can use are constrained and force them to invent and agree on a shared interaction protocol in order to solve the task. These constraints allow analyzing how a communication protocol is progressively established through the interplay and history of individual actions.
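
The core idea of scoring task hypotheses by how well they explain the unlabeled signal history can be sketched in a toy form (Gaussian feedback clusters, four candidate tasks and all numeric values are assumptions for illustration, not the thesis algorithm):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setting: 4 candidate tasks; the meaning of the teacher's feedback
# signals is unknown to the learner.
TRUE_TASK = 2
MEAN_CORRECT, MEAN_WRONG = np.array([2.0, 2.0]), np.array([-2.0, -2.0])

def teacher_signal(action):
    """Unlabeled feedback: drawn from the 'correct' signal cluster if the action
    matches the (hidden) true task, otherwise from the 'incorrect' cluster."""
    mean = MEAN_CORRECT if action == TRUE_TASK else MEAN_WRONG
    return rng.normal(mean, 1.0)

actions = rng.integers(0, 4, size=60)
signals = np.array([teacher_signal(a) for a in actions])

def score_task_hypothesis(k):
    """Interpret the signal history as if task k were the target: label each
    signal with the feedback it should carry under that hypothesis, fit one
    Gaussian mean per label, and return the resulting log-likelihood."""
    hypothesised_labels = (actions == k)
    score = 0.0
    for label in (True, False):
        pts = signals[hypothesised_labels == label]
        if len(pts) > 0:
            score += -0.5 * np.sum((pts - pts.mean(axis=0)) ** 2)  # unit-variance log-lik, up to a constant
    return score

best = max(range(4), key=score_task_hypothesis)
print("estimated task:", best, "true task:", TRUE_TASK)
```

The hypothesis matching the true task yields two internally consistent signal clusters and hence the highest score, which is the sense in which it "best explains the history of interaction".
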
340

Vision-based calibration, position control and force sensing for soft robots / Calibration basée sur la vision, contrôle de position et détection de force pour robots doux

Zhang, Zhongkai 10 January 2019
The modeling of soft robots, which have, in theory, infinite degrees of freedom, is extremely difficult, especially when the robots have complex configurations. This modeling difficulty leads to new challenges for the calibration and control design of the robots, but also to new opportunities, with possible new force-sensing strategies. This dissertation aims to provide new and general solutions using modeling and vision. The thesis first presents a discrete-time kinematic model for soft robots based on the real-time Finite Element (FE) method. Then, a vision-based simultaneous calibration of the sensor-robot system and of the actuators is investigated. Two closed-loop position controllers are designed. In addition, to deal with the problem of image feature loss, a switched control strategy is proposed that combines the open-loop controller and the closed-loop controller. Using the soft robot itself as a force sensor is possible thanks to the deformable nature of soft structures. Two methods (marker-based and marker-free) of external force sensing for soft robots are proposed, based on the fusion of vision-based measurements and the FE model. With both methods, not only the intensities but also the locations of the external forces can be estimated. As a specific application, a cable-driven continuum catheter robot passing through contacts is modeled based on the FE method. The robot is then controlled by a decoupled control strategy that allows insertion and bending to be controlled independently. Both the control inputs and the contact forces along the entire catheter can be computed by solving a quadratic programming (QP) problem with a linear complementarity constraint (QPCC).
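
A minimal sketch of closed-loop positioning with a model-derived Jacobian, in the spirit of the FE-based controllers described above (the toy forward model, gains and dimensions are assumptions, not the thesis implementation):

```python
import numpy as np

def forward_model(u):
    """Stand-in for the real-time FE model: maps actuator inputs (e.g. cable
    displacements) to the effector position. This toy nonlinear map is only
    illustrative."""
    return np.array([np.sin(u[0]) + 0.3 * u[1],
                     0.5 * u[1] + 0.1 * u[0] ** 2])

def numerical_jacobian(f, u, eps=1e-5):
    """Finite-difference Jacobian of the effector position w.r.t. actuation,
    analogous to extracting the actuator/effector coupling from the FE model."""
    y0 = f(u)
    J = np.zeros((len(y0), len(u)))
    for i in range(len(u)):
        du = np.zeros_like(u)
        du[i] = eps
        J[:, i] = (f(u + du) - y0) / eps
    return J

def closed_loop_step(u, target, gain=0.5):
    """One step of closed-loop positioning: here the 'measurement' is taken from
    the model, but in practice it would come from the vision system."""
    y = forward_model(u)                   # measured effector position
    J = numerical_jacobian(forward_model, u)
    du = np.linalg.pinv(J) @ (target - y)  # resolved-rate correction
    return u + gain * du

u = np.zeros(2)
target = np.array([0.6, 0.4])
for _ in range(30):
    u = closed_loop_step(u, target)
print(forward_model(u), "vs target", target)
```
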
