11 |
Automated calibration of a tractor transmission control unit. Körtgen, Christopher; Morandi, Gabriele; Jacobs, Georg; Straßburger, Felix. January 2016 (has links)
This paper presents an approach for an automated calibration process for electronic control units (ECUs) of power split transmissions in agricultural tractors. Today, the calibration process is carried out manually on a prototype tractor by experts. To reduce development costs, the calibration process is shifted from prototype testing to software modelling. Simultaneous optimization methods are used within the software model to calculate new parameters, and objective evaluation methods assess the resulting tractor behaviour. Combining both methods within the software model allows the calibration process to be automated. The success of this approach depends on the quality of the software model; therefore, the initial prototype behaviour is identified and the tractor software model is fitted at the beginning. At the end of the automated calibration, the calculated parameters are validated and fine-tuned on the real tractor. These steps are condensed into a five-step automated calibration process that applies simultaneous optimization and objective evaluation methods at several points. After a detailed discussion of this process, one ECU function (one transmission component) is calibrated through it as an example.
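The paper itself contains no code; as a purely hypothetical sketch of the idea (optimizing ECU parameters against a software model instead of tuning them on the prototype), the Python snippet below minimizes an objective evaluation of a simulated shift response over two invented parameters. The function simulate_shift, the gains k_p and k_i, and the scoring weights are illustrative stand-ins, and SciPy's Nelder-Mead optimizer merely stands in for the simultaneous optimization method used by the authors.

import numpy as np
from scipy.optimize import minimize

def simulate_shift(params):
    # Hypothetical stand-in for the tractor software model: a simulated
    # output-speed trace of one transmission component for a candidate ECU parameter set.
    k_p, k_i = params
    t = np.linspace(0.0, 2.0, 200)
    response = 1.0 - np.exp(-(k_p + 0.1) * t) * np.cos(5.0 * k_i * t)
    return response

def objective(params):
    # Objective evaluation of the simulated behaviour: penalize overshoot and
    # deviation from the target (stand-ins for comfort/performance criteria).
    y = simulate_shift(params)
    overshoot = max(0.0, float(y.max()) - 1.0)
    tracking_error = float(np.mean((y - 1.0) ** 2))
    return tracking_error + 10.0 * overshoot

# Simultaneous optimization of both parameters on the software model only;
# validation and fine-tuning on the real tractor would follow, as described above.
result = minimize(objective, x0=[1.0, 0.5], method="Nelder-Mead")
print("calibrated parameters:", result.x, "score:", result.fun)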
|
12 |
Détection de ruptures multiples – application aux signaux physiologiques. / Multiple change point detection – application to physiological signals. Truong, Charles. 29 November 2018 (has links)
Ce travail s’intéresse au problème de détection de ruptures multiples dans des signaux physiologiques (univariés ou multivariés). Ce type de signaux comprend par exemple les électrocardiogrammes (ECG), électroencéphalogrammes (EEG), les mesures inertielles (accélérations, vitesses de rotation, etc.). L’objectif de cette thèse est de fournir des algorithmes de détection de ruptures capables (i) de gérer de longs signaux, (ii) d’être appliqués dans de nombreux scénarios réels, et (iii) d’intégrer la connaissance d’experts médicaux. Par ailleurs, les méthodes totalement automatiques, qui peuvent être utilisées dans un cadre clinique, font l’objet d’une attention particulière. Dans cette optique, des procédures robustes de détection et des stratégies supervisées de calibration sont décrites, et une librairie Python open-source et documentée est mise en ligne. La première contribution de cette thèse est un algorithme sous-optimal de détection de ruptures, capable de s’adapter à des contraintes sur le temps de calcul, tout en conservant la robustesse des procédures optimales. Cet algorithme est séquentiel et alterne entre les deux étapes suivantes : une rupture est détectée, puis retranchée du signal grâce à une projection. Dans le cadre de sauts de moyenne, la consistance asymptotique des instants estimés de ruptures est démontrée. Nous prouvons également que cette stratégie gloutonne peut facilement être étendue à d’autres types de ruptures, à l’aide d’espaces de Hilbert à noyau reproduisant. Grâce à cette approche, des hypothèses fortes sur le modèle génératif des données ne sont pas nécessaires pour gérer des signaux physiologiques. Les expériences numériques effectuées sur des séries temporelles réelles montrent que ces méthodes gloutonnes sont plus précises que les méthodes sous-optimales standards et plus rapides que les algorithmes optimaux. La seconde contribution de cette thèse comprend deux algorithmes supervisés de calibration automatique. Ils utilisent tous les deux des exemples annotés, ce qui dans notre contexte correspond à des signaux segmentés. La première approche apprend le paramètre de lissage pour la détection pénalisée d’un nombre inconnu de ruptures. La seconde procédure apprend une transformation non-paramétrique de l’espace de représentation, qui améliore les performances de détection. Ces deux approches supervisées produisent des algorithmes finement calibrés, capables de reproduire la stratégie de segmentation d’un expert. Des résultats numériques montrent que les algorithmes supervisés surpassent les algorithmes non-supervisés, particulièrement dans le cas des signaux physiologiques, où la notion de rupture dépend fortement du phénomène physiologique d’intérêt. Toutes les contributions algorithmiques de cette thèse sont dans "ruptures", une librairie Python open-source, disponible en ligne. Entièrement documentée, "ruptures" dispose également d’une interface cohérente pour toutes les méthodes. / This work addresses the problem of detecting multiple change points in (univariate or multivariate) physiological signals. Well-known examples of such signals include electrocardiograms (ECG), electroencephalograms (EEG), and inertial measurements (accelerations, angular velocities, etc.). The objective of this thesis is to provide change point detection algorithms that (i) can handle long signals, (ii) can be applied to a wide range of real-world scenarios, and (iii) can incorporate the knowledge of medical experts.
In particular, a greater emphasis is placed on fully automatic procedures that can be used in daily clinical practice. To that end, robust detection methods as well as supervised calibration strategies are described, and a documented open-source Python package is released. The first contribution of this thesis is a sub-optimal change point detection algorithm that can accommodate time complexity constraints while retaining most of the robustness of optimal procedures. This algorithm is sequential and alternates between the two following steps: a change point is estimated, then its contribution to the signal is projected out. In the context of mean shifts, asymptotic consistency of the estimated change points is obtained. We prove that this greedy strategy can easily be extended to other types of changes by using reproducing kernel Hilbert spaces. Thanks to this novel approach, physiological signals can be handled without making assumptions about the generative model of the data. Experiments on real-world signals show that these approaches are more accurate than standard sub-optimal algorithms and faster than optimal algorithms. The second contribution of this thesis consists of two supervised algorithms for automatic calibration. Both rely on labeled examples, which in our context consist of segmented signals. The first approach learns the smoothing parameter for the penalized detection of an unknown number of changes. The second procedure learns a non-parametric transformation of the representation space that improves detection performance. Both supervised procedures yield finely tuned detection algorithms that are able to replicate the segmentation strategy of an expert. Results show that these supervised algorithms outperform unsupervised ones, especially in the case of physiological signals, where the notion of change heavily depends on the physiological phenomenon of interest. All algorithmic contributions of this thesis can be found in "ruptures", an open-source Python library, available online. Thoroughly documented, "ruptures" also comes with a consistent interface for all methods.
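Since the abstract points to the open-source Python package "ruptures", here is a minimal usage sketch: penalized, kernel-based change point detection on a toy signal, followed by a crude supervised choice of the penalty against an annotated segmentation. The calls follow the public ruptures API as far as known, but exact defaults may differ between versions, and the toy signal, candidate penalties and "expert" annotation are placeholders.

import ruptures as rpt
from ruptures.metrics import hausdorff

# Toy piecewise-constant signal with 4 true change points (stand-in for a physiological signal).
signal, annotated_bkps = rpt.pw_constant(n_samples=1000, n_features=3, n_bkps=4, noise_std=2.0)

# Penalized detection with a kernel (RBF) cost, so no parametric model of the data is assumed.
algo = rpt.Pelt(model="rbf").fit(signal)

# Crude supervised calibration of the smoothing (penalty) parameter: keep the penalty
# whose segmentation is closest (Hausdorff metric) to the expert-annotated one.
candidate_pens = [1, 5, 10, 50, 100]
best_pen = min(candidate_pens, key=lambda pen: hausdorff(annotated_bkps, algo.predict(pen=pen)))
print("selected penalty:", best_pen)
print("estimated change points:", algo.predict(pen=best_pen))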
|
13 |
[en] SCENE TRACKING WITH AUTOMATIC CAMERA CALIBRATION / [pt] ACOMPANHAMENTO DE CENAS COM CALIBRAÇÃO AUTOMÁTICA DE CÂMERAS. FLAVIO SZENBERG. 01 June 2005 (has links)
[pt] É cada vez mais comum, na transmissão de eventos esportivos pelas emissoras de televisão, a inserção, em tempo real, de elementos sintéticos na imagem, como anúncios, marcações no campo, etc. Geralmente, essa inserção é feita através do emprego de câmeras especiais, previamente calibradas e dotadas de dispositivos que registram seu movimento e a mudança de seus parâmetros. De posse destas informações, é simples inserir novos elementos na cena com a projeção apropriada. Nesta tese, é apresentado um algoritmo para recuperar, em tempo real e sem utilizar qualquer informação adicional, a posição e os parâmetros da câmera em uma seqüência de imagens contendo a visualização de modelos conhecidos. Para tal, é explorada a existência, nessas imagens, de segmentos de retas que compõem a visualização do modelo cujas posições são conhecidas no mundo tridimensional. Quando se trata de uma partida de futebol, por exemplo, o modelo em questão é composto pelo conjunto das linhas do campo, segundo as regras que definem sua geometria e dimensões. Inicialmente, são desenvolvidos métodos para a extração de segmentos de retas longos da primeira imagem. Em seguida é localizada uma imagem do modelo no conjunto desses segmentos com base em uma árvore de interpretação. De posse desse reconhecimento, é feito um reajuste nos segmentos que compõem a visualização do modelo, sendo obtidos pontos de interesse que são repassados a um procedimento capaz de encontrar a câmera responsável pela visualização do modelo. Para a segunda imagem da seqüência em diante, apenas uma parte do algoritmo é utilizada, levando em consideração a coerência entre quadros, a fim de aumentar o desempenho e tornar possível o processamento em tempo real. Entre diversas aplicações que podem ser empregadas para comprovar o desempenho e a validade do algoritmo proposto, está uma que captura imagens através de uma câmera para demonstrar o funcionamento do algoritmo on line. A utilização de captura de imagens permite testar o algoritmo em inúmeros casos, incluindo modelos e ambientes diferentes.
/ [en] In the television broadcasting of sports events, it has become very common to insert synthetic elements into the images in real time, such as ads, marks on the field, etc. Usually, this insertion is made using special cameras, previously calibrated and equipped with devices that record their movements and parameter changes. With such information, inserting new objects into the scene with the appropriate projection is a simple task. In the present work, we introduce an algorithm to retrieve, in real time and using no additional information, the position and parameters of the camera in a sequence of images containing the visualization of previously known models. To do so, the method exploits the existence in these images of straight-line segments that compose the visualization of the model and whose positions are known in the three-dimensional world. In the case of a soccer match, for example, the respective model is composed of the set of field lines determined by the rules that define their geometry and dimensions. First, methods are developed to extract long straight-line segments from the first image. Then an image of the model is located in the set formed by these segments, based on an interpretation tree. With such information, the segments that compose the visualization of the model are readjusted, yielding interest points that are then passed to a procedure able to locate the camera responsible for the model's visualization. From the second image on, only a part of the algorithm is used, taking into account the coherence between frames in order to improve performance and allow real-time processing. Among the several applications that can be employed to evaluate the performance and quality of the proposed method, there is one that captures images with a camera to show the on-line functioning of the algorithm. By using image capture, we can test the algorithm in a great variety of instances, including different models and environments.
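As a rough, hypothetical illustration of the last step described above (recovering the camera once model-to-image correspondences are available), the snippet below feeds four invented field-point correspondences to OpenCV's solvePnP. Unlike the thesis, it assumes the intrinsic parameters are already known and only recovers the pose; the commented Hough-transform call merely hints at how long line segments could be extracted from the first frame. All numeric values are placeholders.

import numpy as np
import cv2

# Hypothetical 3D positions (metres, pitch plane z = 0) of four field-line intersections
# and their matched image locations (pixels), e.g. after the interpretation-tree matching
# and segment readjustment steps described above.
object_points = np.array([[0.0, 0.0, 0.0], [0.0, 68.0, 0.0],
                          [16.5, 13.85, 0.0], [16.5, 54.15, 0.0]])
image_points = np.array([[102.0, 540.0], [618.0, 123.0],
                         [357.0, 470.0], [705.0, 281.0]])

# Assumed (not estimated) intrinsics; the thesis recovers the camera parameters itself.
f, cx, cy = 1200.0, 960.0, 540.0
K = np.array([[f, 0.0, cx], [0.0, f, cy], [0.0, 0.0, 1.0]])

# Long segments could be extracted from the first frame with, for instance:
#   edges = cv2.Canny(gray_frame, 50, 150)
#   segments = cv2.HoughLinesP(edges, 1, np.pi / 180, 80, minLineLength=100, maxLineGap=10)

# Camera pose (rotation and translation) from the model-to-image correspondences.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
print("pose found:", ok)
print("rotation vector:", rvec.ravel(), "translation:", tvec.ravel())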
|
14 |
Aprimoramento computacional do modelo Lavras Simulation of Hydrology (LASH): aplicação em duas bacias do Rio Grande do Sul / Computational enhancement of the Lavras Simulation of Hydrology (LASH): application in two watersheds situated in the Rio Grande do Sul State. Caldeira, Tamara Leitzke. 26 February 2016 (has links)
A modelagem hidrológica em bacias hidrográficas consiste numa das principais e mais modernas ferramentas para gestão de recursos hídricos e dimensionamentos hidrológicos; no entanto, muitos modelos demandam um grande número de informações temporais e espaciais, o que muitas vezes impede que sejam aplicados, principalmente em países em desenvolvimento, como é o caso do Brasil, onde é mais observado o monitoramento de bacias de grande porte. Frente a este fato, em 2008 uma equipe de pesquisadores da Universidade Federal de Lavras, em parceria com a Universidade de Purdue (EUA), deu início ao desenvolvimento de um modelo hidrológico conceitual voltado a bacias com limitações na base de dados; em 2008 surgia a primeira versão deste modelo e, em 2009, a segunda, quando passaria a ser chamado de Lavras Simulation of Hydrology (LASH). O modelo LASH passou por aprimoramentos computacionais entre o desenvolvimento da primeira e da segunda versão; contudo, não se apresentava “amigável” para suprir a demanda por parte de profissionais fora do ambiente acadêmico. Foi então que surgiu a ideia de desenvolver sua terceira versão, contando agora também com a parceria da Universidade Federal de Pelotas. O objetivo deste trabalho foi desenvolver e apresentar a terceira versão do modelo LASH, contemplando módulos auxiliares, inúmeros aprimoramentos computacionais e a adaptação da rotina hidrológica e de calibração automática para modelagem com discretização espacial por sub-bacias hidrográficas, bem como avaliar a aplicabilidade desta versão a duas bacias hidrográficas localizadas no sul do Rio Grande do Sul, sob duas estratégias distintas de calibração. Os resultados obtidos apontam para um enorme avanço computacional: i) o módulo para processamento da base de dados temporal (SYHDA) tomou grandes proporções durante seu desenvolvimento, ao ponto de ter sido registrado junto ao Instituto Nacional de Propriedade Industrial (INPI) e vir sendo empregado de forma isolada ao modelo, como uma importante ferramenta de análises hidrológicas; ii) o módulo de processamento da base de dados espaciais se mostrou bastante eficiente e, neste momento, encontra-se no aguardo de deferimento de registro junto ao INPI; iii) os módulos de banco de dados, integração e calibração automática se mostraram indispensáveis frente às funcionalidades que lhes foram atribuídas; e iv) o tempo de processamento foi bastante inferior quando comparado à segunda versão. Do ponto de vista hidrológico, a análise do desempenho da terceira versão do LASH frente à calibração e validação para as bacias analisadas indica que o modelo foi capaz de capturar o comportamento geral das vazões observadas; no entanto, a representatividade espacial dos processos hidrológicos é menor quando comparada à segunda versão. No que tange à calibração, as estratégias empregadas apresentaram resultados distintos, assim como as funções objetivo, tendo sido o modelo mais eficiente quando todos os parâmetros foram calibrados de forma concentrada. Esta constatação dá indícios de que a estrutura do módulo de calibração automática precisa ser mais bem avaliada e de que se deve analisar a possibilidade de empregar métodos de calibração multiobjetivo, os quais são mais aconselháveis, segundo a literatura, quando se objetiva a utilização do modelo em ambientes não acadêmicos.
/ Hydrological watershed modeling is one of the main and most modern tools for water resources management and hydrological design; however, many models require a large amount of temporal and spatial information. This commonly hinders their application, especially in Brazil and other developing countries, where hydrological monitoring has been concentrated on large watersheds. In 2008, a research team from the Federal University of Lavras, in collaboration with Purdue University (USA), began the development of a conceptual hydrological model intended for data-scarce watersheds. The researchers finished the first version of the model in 2008 and its second version in 2009, when it became known as Lavras Simulation of Hydrology (LASH). The second version of LASH included many computational refinements relative to the first; nevertheless, it did not offer a friendly, integrated environment able to fulfil the needs of non-academic professionals. The research team therefore decided to develop its third version, now also in collaboration with the Federal University of Pelotas. The objectives of this study were to: i) develop and present the third version of the LASH model, addressing auxiliary modules, numerous computational enhancements and the adaptation of the hydrological and automatic calibration routines for modeling with spatial discretization into subwatersheds; and ii) evaluate the applicability of this version to two watersheds situated in the southern Rio Grande do Sul State, considering two calibration schemes. The results indicated a considerable computational upgrade: i) the module designed for processing temporal databases (SYHDA) has been frequently used in many applications as an independent piece of software since its development and was registered with the Instituto Nacional de Propriedade Industrial (INPI); ii) the module for processing spatial databases proved efficient, and its registration with INPI is pending; iii) the database, integration and automatic calibration modules were indispensable considering their designed functionalities; and iv) the processing time was markedly shorter than that of the second version. From the hydrological point of view, the performance analysis of the third version of LASH, with respect to calibration and validation in the studied watersheds, indicated that the model was able to capture the overall behavior of the observed hydrographs; however, the spatial representativeness of the hydrological processes is inferior to that of the second version. Regarding calibration, the schemes and objective functions used presented somewhat contrasting results; the most efficient scheme was the one in which all calibration parameters were calibrated in a lumped manner. This finding suggests that the framework of the automatic calibration module needs to be better evaluated and that multi-objective calibration algorithms, which have drawn attention in the scientific community, may need to be implemented when the goal is to apply the model for non-academic purposes.
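The abstract mentions objective functions for automatic calibration without naming them. Purely as a generic illustration (not necessarily one of the functions used in LASH), the snippet below computes the Nash-Sutcliffe efficiency, a common objective for comparing observed and simulated streamflow; the toy series are invented.

import numpy as np

def nash_sutcliffe(observed, simulated):
    # Nash-Sutcliffe efficiency: 1 is a perfect fit, values <= 0 mean the simulation
    # is no better than simply using the mean of the observations.
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Toy daily streamflow series (m3/s); a real application would compare gauged flows
# with the discharge simulated by the hydrological model at the watershed outlet.
obs = np.array([12.0, 15.5, 30.2, 22.1, 18.4, 16.0, 14.2])
sim = np.array([11.5, 16.0, 27.8, 23.5, 19.0, 15.2, 14.8])
print(f"NSE = {nash_sutcliffe(obs, sim):.3f}")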
|
15 |
Měření rychlosti automobilů z dohledové kamery / Speed Measurement of Vehicles from Surveillance Camera. Jaklovský, Samuel. January 2018 (has links)
This master's thesis focuses on the fully automatic calibration of a traffic surveillance camera used for speed measurement of passing vehicles. The thesis describes the theoretical background and algorithms related to this problem; based on them, a comprehensive system for automatic calibration and speed measurement was designed and successfully implemented. The implemented system is optimized to process only a small portion of the video input for the automatic calibration of the camera: calibration parameters are obtained after processing only two and a half minutes of input video. The accuracy of the implemented system was evaluated on the BrnoCompSpeed dataset. The speed measurement error using the automatic calibration is 8.15 km/h. The error is mainly caused by inaccurate scale estimation; when the scale is replaced by a manually obtained one, the error drops to 2.45 km/h. The speed measuring system itself has an error of only 1.62 km/h (evaluated using manual calibration parameters).
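To illustrate why the scale dominates the reported error, here is a hypothetical back-of-the-envelope speed computation from a tracked vehicle: the estimate is directly proportional to the metres-per-pixel scale, so any relative error in the scale propagates one-to-one into the measured speed. The track, scale and frame rate below are invented, and a real system would first project image positions onto the road plane rather than assume a constant scale.

import numpy as np

def vehicle_speed_kmh(positions_px, scale_m_per_px, fps):
    # Average speed from tracked positions (pixels) in consecutive frames, a ground-plane
    # scale (metres per pixel) and the video frame rate. The constant scale is a
    # simplification that only holds over a small image region.
    positions_px = np.asarray(positions_px, dtype=float)
    step_px = np.linalg.norm(np.diff(positions_px, axis=0), axis=1)  # per-frame displacement
    metres_per_second = step_px.mean() * scale_m_per_px * fps
    return 3.6 * metres_per_second

# Hypothetical track of one vehicle over five frames at 50 fps with a 0.02 m/px scale.
track = [(100.0, 400.0), (112.0, 396.0), (124.0, 392.0), (136.0, 388.0), (148.0, 384.0)]
print(f"estimated speed: {vehicle_speed_kmh(track, scale_m_per_px=0.02, fps=50.0):.1f} km/h")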
|