11

Distributed Implementations of Component-based Systems with Prioritized Multiparty Interactions : Application to the BIP Framework. / Implantations distribuées de modèles à base de composants communicants par interactions multiparties avec priorités : application au langage BIP

Quilbeuf, Jean 16 September 2013 (has links)
New systems often require a distributed software implementation, both for efficiency and because of the physical location of some sensors and actuators. Ensuring the correctness of a distributed implementation is hard, because it requires considering all possible interleavings of actions executed by distinct processes. This thesis proposes a method for generating a correct and efficient distributed implementation from a high-level model of an application. The input model is described as a set of components communicating through prioritized multiparty interactions. Executing such an interaction, which corresponds to one step of the semantics, atomically changes the state of all components involved in it. A distributed implementation is defined as a set of processes communicating through asynchronous message passing. The main challenge is to produce a correct and efficient distributed implementation of prioritized multiparty interactions relying only on message passing. The method is based on a rigorous design flow that progressively refines the high-level model of the application into a low-level model, from which code for a given platform is generated. All intermediate models appearing in the flow are expressed using the same semantics as the input model. At each step of the design flow, complex interactions are replaced with constructs using simpler interactions. In particular, the last model obtained before code generation contains only interactions modeling asynchronous message passing. The correctness of the implementation is obtained by construction. Using multiparty interactions as primitives in the application model drastically reduces the set of reachable states, compared with an equivalent model expressed with lower-level communication primitives. Essential properties of the system are checked at this level of abstraction. Each transformation of the design flow is simple enough to be fully formalized and proved, by showing observational equivalence or trace equivalence between the models before and after transformation. The resulting implementation is therefore correct with respect to the original model, which avoids an expensive a posteriori verification. Regarding efficiency, the performance of the implementation can be optimized by choosing adequate parameters for the transformations, or by augmenting the knowledge of components. The latter solution requires analyzing the original model to compute the knowledge, which is reused at subsequent steps of the design flow. The transformations and optimizations constituting the design flow have been implemented in the BIP framework. This implementation has been used to evaluate the different possibilities, as well as the influence of the design-flow parameters on the performance of the generated implementation, on several examples. The generated code uses POSIX sockets, MPI, or pthreads primitives for message passing between processes.
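The abstract above describes implementing prioritized multiparty interactions on top of message passing. As a purely illustrative sketch (not the BIP protocol or the transformations developed in the thesis), the Python fragment below shows a naive centralized coordinator that collects offers from components, picks the highest-priority enabled interaction, and notifies the participants; all class names and example interactions are invented.

```python
# Hypothetical sketch of one round of prioritized multiparty interaction
# resolution through a central coordinator; NOT the thesis' protocol.

class Component:
    def __init__(self, name, enabled_ports):
        self.name = name
        self.enabled_ports = set(enabled_ports)   # ports the component offers

    def offer(self):
        # "send" the set of currently enabled ports to the coordinator
        return self.name, self.enabled_ports

    def execute(self, port):
        print(f"{self.name} fires port {port}")


class Coordinator:
    def __init__(self, interactions, priority):
        # interactions: name -> {component_name: port}; priority: name -> rank
        self.interactions = interactions
        self.priority = priority

    def round(self, components):
        offers = dict(c.offer() for c in components)             # collect offers
        enabled = [name for name, parts in self.interactions.items()
                   if all(port in offers.get(comp, set())
                          for comp, port in parts.items())]      # all participants ready
        if not enabled:
            return None
        chosen = max(enabled, key=lambda n: self.priority.get(n, 0))  # priorities inhibit
        for comp, port in self.interactions[chosen].items():     # notify participants
            next(c for c in components if c.name == comp).execute(port)
        return chosen


if __name__ == "__main__":
    comps = [Component("sensor", {"sync"}), Component("logger", {"sync", "idle"})]
    coord = Coordinator({"sync_all": {"sensor": "sync", "logger": "sync"},
                         "log_idle": {"logger": "idle"}},
                        priority={"sync_all": 2, "log_idle": 1})
    print("executed:", coord.round(comps))
```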
12

Parallelism and modular proof in differential dynamic logic / Parallélisme et preuve modulaire en logique dynamique différentielle

Lunel, Simon 28 January 2019 (has links)
Cyber-physical systems mix continuous physical behaviors, e.g. the velocity of a vehicle, and discrete behaviors, e.g. the cruise controller of that vehicle. They are now pervasive in our society. Many such systems are safety-critical: a design error leading to unexpected behavior can harm humans. It is therefore necessary to develop methods that guarantee the correct functioning of such systems. Formal methods are mathematical techniques used to guarantee that a system behaves as expected, e.g. that the cruise controller does not allow the vehicle to exceed the speed limit. Recent work has made significant progress in the verification of cyber-physical systems, but the approach remains monolithic: the system under consideration is modeled as a single block and then submitted to proof. The problem we address is how to efficiently model cyber-physical systems whose complexity lies in the repetition of elementary blocks, and, once such a model is obtained, how to guarantee its correct functioning. Our approach is to model the system compositionally. Rather than modeling it as one block, we model it piece by piece; these pieces are called components. Each component corresponds to a subsystem of the final system and is easier to model thanks to its reasonable size. The complete system is obtained by assembling the components: a water-treatment plant is obtained, for instance, by composing several water tanks. The main advantage of this method is that it matches the engineering workflow in industry: consider separate elements and compose them afterwards. This approach alone, however, does not solve the problem of proving that the system functions correctly; the proof must be made compositional as well. To achieve this, we associate with each component properties on its inputs and outputs, and prove that they are satisfied. This step can be carried out by a domain expert, but also by a computer program if the component is of reasonable size. We then ensure that the properties are preserved through composition. The proof effort is thus shifted to elementary components, while the guarantee that the desired properties hold is preserved at each composition step. A proof of correct functioning of industrial systems can then be obtained with a reduced proof effort. Our main contribution is the development of such a compositional approach in differential dynamic logic, both for modeling cyber-physical systems and for proving that they satisfy the desired properties. At each stage of the design we can verify that the properties are preserved, with the help of a computer where possible; the resulting system is correct by construction. Building on this result, we have developed several tools to support modular reasoning on cyber-physical systems. We can reason about temporal properties, e.g. whether the execution period of a controller is short enough to effectively regulate the continuous behavior, and we have also shown how to reason about systems in which a nominal mode coexists with an emergency mode.
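To make the assume-guarantee idea behind this compositional approach concrete, here is a small illustrative sketch in plain Python (not differential dynamic logic, and not the tools developed in the thesis): each toy component carries an assumption on its input and a guarantee on its output, and a composed run is exercised while both contracts are checked. All names and numeric bounds are invented for the example.

```python
# Toy assume/guarantee contracts on water-tank components; illustration only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Contract:
    assumption: Callable[[float], bool]   # predicate on the component's input
    guarantee: Callable[[float], bool]    # predicate on the component's output

@dataclass
class Tank:
    level: float
    contract: Contract

    def step(self, inflow, outflow=1.0, dt=0.1):
        assert self.contract.assumption(inflow), "environment violated assumption"
        self.level += (inflow - outflow) * dt
        assert self.contract.guarantee(self.level), "component violated guarantee"
        return self.level

# Composition: tank1's bounded outflow must satisfy tank2's input assumption.
c1 = Contract(lambda i: 0.0 <= i <= 2.0, lambda l: 0.0 <= l <= 10.0)
c2 = Contract(lambda i: 0.0 <= i <= 3.0, lambda l: 0.0 <= l <= 10.0)
tank1, tank2 = Tank(5.0, c1), Tank(4.0, c2)

for _ in range(50):
    tank1.step(inflow=1.5)   # tank1 stays within its guarantee
    tank2.step(inflow=1.0)   # tank1's fixed outflow feeds tank2, within c2's assumption
print("composed run satisfied all contracts")
```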
13

Tools for the Design of Reliable and Efficient Functions Evaluation Libraries / Outils pour la conception de bibliothèques de calcul de fonctions efficaces et fiables

Torres, Serge 22 September 2016 (has links)
The design of function evaluation libraries is a complex task that requires great care and dedication, especially when one aims at high standards of reliability and performance. In practice it cannot be performed as a routine operation without tools that not only help the designer find his way through a large and complex solution space, but also guarantee that his solutions are correct and (almost) optimal. In the present state of the art, one has to think in terms of a “toolbox” from which basic mechanisms can be picked and combined to best fit the designer's goals, rather than expect a device that automatically solves every problem. The work presented here is dedicated to the design and implementation of such tools in two areas:
∙ the consolidation of Ziv's rounding test, used so far in a more or less empirical way in the implementation of function approximations;
∙ the development of an implementation of the SLZ algorithm in order to solve the Table Maker's Dilemma for functions with quad-precision floating-point (IEEE-754 binary128 format) arguments and results.
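For readers unfamiliar with Ziv's rounding test, the sketch below illustrates the underlying strategy: evaluate the function at some working precision, bound the error, and retry at higher precision until both ends of the error interval round to the same floating-point value. It assumes the mpmath library and uses a crude, uncertified error bound; it is not the consolidated test studied in the thesis.

```python
# Minimal sketch of Ziv's strategy for correct rounding to binary64.
import mpmath

def correctly_rounded(f, x, start_prec=60, max_prec=2000):
    prec = start_prec
    while prec <= max_prec:
        with mpmath.workprec(prec):
            y = f(mpmath.mpf(x))
            err = abs(y) * mpmath.mpf(2) ** (2 - prec)   # assumed (rough) error bound
            lo, hi = float(y - err), float(y + err)
        if lo == hi:           # both interval ends round to the same double
            return lo          # => rounding is unambiguous, Ziv's test passes
        prec *= 2              # otherwise retry with more precision
    raise RuntimeError("possible hard-to-round case (Table Maker's Dilemma)")

print(correctly_rounded(mpmath.exp, 1.25))
```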
14

Contributions à l'arithmétique flottante : codages et arrondi correct de fonctions algébriques / Contributions to floating-point arithmetic : Coding and correct rounding of algebraic functions

Panhaleux, Adrien 27 June 2012 (has links)
Efficient and reliable computer arithmetic is a key requirement for performing fast and reliable numerical computations; both the choice of the number system and the choice of the arithmetic algorithms matter. We present a new representation of numbers, the "RN-codings", such that truncating an RN-coded number to a given precision is equivalent to rounding it to the nearest. We give arithmetic algorithms for manipulating RN-codings and introduce the concept of "floating-point RN-codings". When implementing a function f in floating-point arithmetic, if we wish to always return the floating-point number nearest f(x), we must be able to determine whether f(x) is above or below the closest "midpoint", a midpoint being the middle of two consecutive floating-point numbers. This determination is first done with some given precision and, if that does not suffice, the computation is restarted with ever higher precision; this process may not terminate if f(x) can be a midpoint. Given an algebraic function f, we either show that there is no floating-point number x such that f(x) is a midpoint, or we characterize or enumerate such numbers. Since IBM's PowerPC, binary division has frequently been implemented using variants of the Newton-Raphson iteration due to Peter Markstein. This iteration is very fast, but much care is needed if we aim at always returning the floating-point number nearest the exact quotient. We investigate how to efficiently merge Markstein iterations with the faster yet less accurate Goldschmidt iterations, and whether those iterations can be used for decimal floating-point arithmetic. We provide safe and tight error bounds for these algorithms.
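As a reminder of the recurrences this abstract refers to, the following sketch shows textbook Newton-Raphson and Goldschmidt iterations for division in plain double precision. It only illustrates the convergence behaviour; the seed, iteration counts and scaling are ad hoc, and the correct-rounding issues the thesis addresses are not handled here.

```python
# Textbook division iterations, shown for illustration only.

def newton_raphson_div(a, b, iters=6):
    r = 1.0 / float(f"{b:.2g}")          # crude 2-significant-digit seed for 1/b
    for _ in range(iters):
        r = r * (2.0 - b * r)            # Newton-Raphson step, quadratic convergence
    return a * r

def goldschmidt_div(a, b, iters=6):
    n, d = a, b                          # assumes b pre-scaled into (0, 2)
    for _ in range(iters):
        f = 2.0 - d                      # scale numerator and denominator together
        n, d = n * f, d * f              # d -> 1, so n -> a/b
    return n

print(newton_raphson_div(355.0, 113.0), 355.0 / 113.0)
print(goldschmidt_div(3.0, 1.2), 3.0 / 1.2)
```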
15

Pingu och PSC: språkljudsproduktion hos barn med språkljudsstörning vid fyra olika taluppgifter : Analys av träffsäkerhet och avvikelsetyper samt utvärdering av en ny eliciteringsmetod och ett nytt träffsäkerhetsmått / Pingu and PSC: speech sound production in children with speech sound disorder in four different speech tasks : Analysis of accuracy and error types, and evaluation of a new elicitation method and a new accuracy metric

Ode, Carina, Öster Cattu Alves, Mirjam January 2018 (has links)
The purpose of the current study was to compare speech samples elicited with four different methods with regard to speech sound production errors. Nine Swedish-speaking children with SSD (Speech Sound Disorder) participated. A new method of speech elicitation was introduced: a narrative task using a silent short film as a prompt. Severity of involvement of the speech sound production was measured using PCC-R (Percentage of Consonants Correct-Revised) as well as a new severity metric, PSC (Percentage of Syllables Correct). Speech error patterns were also analyzed. All four methods of speech elicitation are suggested to be useful clinical tools for phonological assessment, and the elicitation methods yielded similar results. However, the results indicated that a higher degree of control and phonological complexity in a task generally yield lower accuracy scores and more types of speech error patterns. The definition of SSD used in this study includes several clinical diagnoses used by speech and language pathologists; the participants' results were therefore also analyzed with respect to clinical diagnosis, and no difference was found. This first evaluation of PSC shows that it is a promising new severity metric, and that its strength lies first and foremost in the possibility to include unintelligible speech. The evaluation of the new elicitation task shows that narration of a silent short film is promising as well: the results indicate a gain in degree of control combined with the preserved high ecological validity associated with elicitation methods that yield conversational speech.
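The severity metrics mentioned here are simple percentages of correctly produced units. The toy sketch below illustrates the general form of such a computation for a PCC-R-style consonant score and a PSC-style syllable score; the transcriptions, scoring conventions and helper names are invented for the example and do not reflect the study's actual protocol.

```python
# Illustrative percentage-correct scoring over consonants (PCC-R-like)
# and syllables (PSC-like); invented data.

def percent_correct(target_units, produced_units):
    """Percentage of target units realised correctly (position by position)."""
    correct = sum(t == p for t, p in zip(target_units, produced_units))
    return 100.0 * correct / len(target_units)

# Consonant-level score for one target word vs. the child's production
target_consonants   = ["k", "t", "r", "n"]     # e.g. target /katten/
produced_consonants = ["t", "t", "",  "n"]     # fronting + cluster reduction
print("PCC-R-like:", percent_correct(target_consonants, produced_consonants))

# Syllable-level score: whole syllables are scored instead of single consonants,
# which also lets unintelligible syllables simply count as incorrect.
target_syllables   = ["kat", "ten"]
produced_syllables = ["tat", "ten"]
print("PSC-like:", percent_correct(target_syllables, produced_syllables))
```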
16

Statistical partition problem for exponential populations and statistical surveillance of cancers in Louisiana

Gu, Jin 18 December 2014 (has links)
In this dissertation, we consider the problem of partitioning a set of k populations with respect to a control population. Some multistage methodologies are proposed for this problem and their properties are derived. Using Monte Carlo simulation techniques, the small- and moderate-sample-size performance of the proposed procedures is studied. We also consider the statistical surveillance of various cancers in Louisiana.
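As a rough illustration of what a Monte Carlo study of a partition procedure can look like, the sketch below partitions simulated exponential populations into "at least as good as control" and "worse than control" using a naive sample-mean rule, and estimates the probability of a correct partition. The rule, the margin and the sample sizes are invented; they are not the multistage procedures proposed in the dissertation.

```python
# Monte Carlo estimate of the correct-partition probability for a naive rule.
import random

def partition_vs_control(control_mean, pop_means, n=50, delta=0.2, reps=2000):
    hits = 0
    for _ in range(reps):
        ctrl = sum(random.expovariate(1 / control_mean) for _ in range(n)) / n
        good, bad = [], []
        for i, m in enumerate(pop_means):
            xbar = sum(random.expovariate(1 / m) for _ in range(n)) / n
            (good if xbar >= ctrl - delta else bad).append(i)   # naive decision rule
        true_good = [i for i, m in enumerate(pop_means) if m >= control_mean]
        hits += (good == true_good)
    return hits / reps        # estimated probability of a correct partition

print(partition_vs_control(1.0, [0.5, 1.0, 2.0, 3.0]))
```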
17

PREENCHIMENTO DE FALHAS DE SÉRIES DE DADOS CLIMÁTICOS UTILIZANDO REDES P2P / Filling gaps in climate data series using P2P networks

Schmitke, Luiz Rafael 30 June 2012 (has links)
Agriculture is one of the activities most affected by the weather, which influences both the techniques and the crops employed. Much of agricultural productivity depends on climatic conditions that are created by natural factors and cannot be controlled. Although the weather cannot be controlled, it can be predicted, or even simulated, in order to minimize its impact on agriculture. Such predictions and simulations require data collected at weather stations, which may be conventional or automatic, and the data must be free of gaps and abnormal values. Most of these defects are caused by signal interference, disconnections, cable oxidation, and the spatio-temporal variation of the climate, which end up producing gaps in the climate databases. The main objective of this research is therefore to create a model capable of filling the gaps in climate databases; it does not aim to correct abnormal observations, nor to replace statistical methods used for the same purpose. To this end, a model was created that fills the gaps in weather data by exchanging information between stations over a P2P architecture, and an application was built to test its gap-filling performance. The tests used databases from the cities of Ponta Grossa, Fernandes Pinheiro, and Telêmaco Borba, provided by the Instituto Tecnológico SIMEPAR, and from the cities of Castro, Carambeí, Piraí do Sul, and Tibagi, provided by the Fundação ABC; all data are daily observations collected at automatic stations. The results show that the P2P correction model performed satisfactorily compared with the simulator used in the tests, with worse results only in February, which corresponds to the summer period; for autumn, winter, and spring the P2P model outperformed the simulated correction. It was also found that the number of stations participating in the network at correction time influences the results: the more stations, the better the results obtained.
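To illustrate the general idea of filling a station's gaps from peer stations in a network (though not the P2P model evaluated in this dissertation), here is a minimal sketch using an invented weighting of peer values; the station names, weights and data are made up.

```python
# Fill missing values in one station's series from peer stations' values.

def fill_gaps(series, peers, weights):
    """series: list with None for gaps; peers: list of peer series; weights: per peer."""
    filled = []
    for t, value in enumerate(series):
        if value is not None:
            filled.append(value)
            continue
        num = den = 0.0
        for peer, w in zip(peers, weights):
            if peer[t] is not None:                 # only peers that reported at time t
                num += w * peer[t]
                den += w
        filled.append(num / den if den else None)   # gap remains if no peer has data
    return filled

ponta_grossa = [21.3, None, 19.8, None]
peers = [[22.0, 20.5, 19.0, 18.2],                  # e.g. a nearby station
         [20.9, 19.7, 18.8, None]]
print(fill_gaps(ponta_grossa, peers, weights=[0.7, 0.3]))
```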
18

Self-correcting Bayesian target tracking

Biresaw, Tewodros Atanaw January 2015 (has links)
Visual tracking, a building block for many applications, faces challenges such as occlusions, illumination changes, background clutter and variable motion dynamics that may degrade tracking performance and are likely to cause failures. In this thesis, we propose a Track-Evaluate-Correct framework (self-correction) for existing trackers in order to achieve robust tracking. For a tracker in the framework, we embed an evaluation block to check the status of tracking quality and a correction block to avoid upcoming failures or to recover from failures. We present a generic representation and formulation of self-correcting tracking for Bayesian trackers using a Dynamic Bayesian Network (DBN). Self-correcting tracking works similarly to a self-aware system, where parameters are tuned in the model, or different models are fused or selected in a piece-wise way, in order to deal with tracking challenges and failures. In the DBN representation, the parameter tuning, fusion and model selection are driven by evaluation and correction variables that correspond to the evaluation and correction blocks, respectively. The inferences of the variables in the DBN model are used to explain the operation of self-correcting tracking. The specific contributions under this generic self-correcting framework are correlation-based self-correcting tracking for an extended object with model points, and tracker-level fusion, as described below. To improve the probabilistic tracking of an extended object with a set of model points, we use the Track-Evaluate-Correct framework to achieve self-correcting tracking. The framework combines the tracker with an on-line performance measure and a correction technique. We correlate model point trajectories to improve on-line the accuracy of a failed or uncertain tracker: a model point tracker gets assistance from neighbouring trackers whenever degradation in its performance is detected by the on-line performance measure, and the correction of the model point state is based on correlation information from the states of the other trackers. Partial Least Squares regression is used to model the correlation of point tracker states from short windowed trajectories adaptively. Experimental results on data obtained from optical motion capture systems show the improvement in tracking performance of the proposed framework compared to the baseline tracker and other state-of-the-art trackers. The proposed framework allows appropriate re-initialisation of local trackers to recover from failures caused by clutter and missed detections in the motion capture data. Finally, we propose a tracker-level fusion framework to obtain self-correcting tracking. The fusion framework combines trackers addressing different tracking challenges to improve the overall performance. As a novelty of the proposed framework, we include an online performance measure to identify the track quality level of each tracker and guide the fusion. The trackers in the framework assist each other based on appropriate mixing of the prior states, and the track quality level is also used to update the target appearance model. We demonstrate the framework with two Bayesian trackers on video sequences with various challenges and show its robustness compared to the independent use of the constituent trackers and to other state-of-the-art trackers. The appearance model update and prior mixing, both guided by the online performance measure, allow the proposed framework to deal with tracking challenges.
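The correlation-based correction described above can be pictured with the following rough sketch: a failed point tracker's positions are predicted from neighbouring trackers' trajectories using Partial Least Squares regression. It assumes scikit-learn and NumPy; the synthetic trajectories, the training window and the failure point are invented and do not reproduce the thesis' experiments.

```python
# Predict a "failed" point tracker's state from neighbouring trackers with PLS.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
T = 60
t = np.arange(T)
neighbours = np.stack([t + rng.normal(0, 0.1, T),        # neighbouring model points
                       0.5 * t + rng.normal(0, 0.1, T)], axis=1)
target = 0.8 * t + rng.normal(0, 0.1, T)                  # the point whose tracker fails

window = slice(0, 50)                                     # "healthy" training window
pls = PLSRegression(n_components=1)
pls.fit(neighbours[window], target[window].reshape(-1, 1))

# After frame 50 the target tracker is declared failed by the performance
# measure, so its state is corrected from the neighbours instead.
corrected = pls.predict(neighbours[50:]).ravel()
print(np.round(corrected, 2))
print(np.round(target[50:], 2))                           # ground truth for comparison
```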
19

The Effects of Deception and Manipulation of Motivation to Deceive on Event Related Potentials

Ashworth, Ethan C 01 December 2016 (has links)
The Correct Response Negativity (CRN) is an event-related potential component that is affected by the act of deception; however, findings on the effect of deception on the CRN have been inconsistent. Suchotzki et al. (2015) suggested that the design of the paradigm used to elicit the deceptive response is what controls the size of the CRN; specifically, motivation to deceive changes the size of the CRN for deception relative to telling the truth. This study followed up on the suggestions made by Suchotzki et al. (2015) to investigate whether external motivation to lie does indeed invert the ratio of the CRN in lie compared to truth responses, by manipulating the motivation to lie. The study used a modification of the image-based guilty knowledge test (GKT) paradigm used in Langleben et al. (2002). The first hypothesis was that a larger CRN during deception relative to truth-telling would be observed when participants are not motivated to lie, while a larger CRN during truth-telling relative to deception would be observed when participants are motivated to lie; this hypothesis was not supported. The second hypothesis was that the P300 component would be larger when participants were motivated to lie than when they were merely instructed to lie. Results indicated that the P300 was significantly larger in the lie conditions than in the truth conditions; however, there was no difference in amplitude as a function of whether participants were in the instructed or motivated lie condition.
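For context on how component amplitudes such as the CRN or P300 are compared, the toy sketch below averages simulated epochs per condition and takes the mean voltage in a fixed time window. The sampling rate, window and random "EEG" are invented; a real analysis would use an EEG toolbox and proper statistics.

```python
# Toy ERP component amplitude comparison between two conditions.
import numpy as np

fs = 250                                  # samples per second (assumed)
time = np.arange(-0.2, 0.8, 1 / fs)       # epoch from -200 ms to 800 ms
rng = np.random.default_rng(1)

def mean_window_amplitude(epochs, lo, hi):
    """Mean amplitude of the condition-average ERP in [lo, hi] seconds."""
    erp = epochs.mean(axis=0)             # average over trials
    mask = (time >= lo) & (time <= hi)
    return erp[mask].mean()

# Simulated trials: noise plus a P300-like bump around 350 ms, larger for "lie"
lie_epochs   = rng.normal(0, 5, (80, time.size)) + 6 * np.exp(-((time - 0.35) / 0.05) ** 2)
truth_epochs = rng.normal(0, 5, (80, time.size)) + 3 * np.exp(-((time - 0.35) / 0.05) ** 2)

print("lie   P300:", round(mean_window_amplitude(lie_epochs, 0.30, 0.45), 2))
print("truth P300:", round(mean_window_amplitude(truth_epochs, 0.30, 0.45), 2))
```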
20

When to use aluminium in home environments, according to democratic design / När aluminium bör användas i hemmiljöer, enligt demokratisk design

Hill, Richard January 2019 (has links)
This thesis is about how aluminium relates to democratic design. It includes comparisons between different materials used at a company that develops products according to democratic design. The analysis is done in order to identify whether the company has used, and is using, aluminium in the correct way in a home environment. The report also includes some example analyses of product types, to show which would and which would not benefit from being made of aluminium.
