451 |
Metodologia de desenvolvimento baseado em modelos e sua aplicação em máquinas agrícolas / Model-based development methodology and its application to agricultural machines
Costa, Felipe Tozetto, 30 June 2017
The present work presents the application of the model-based design (MBD) methodology for the
development of electronic control and automation devices for agricultural machines. Based on
the concepts of MBD, the development of an embedded electronic control unit for specific
functions of a planter was proposed. Once the requirements were defined, a set of the main
sensors and actuators was used to develop the model. Tools such as Matlab/Simulink® were
used for stateflow modeling and automatic code generation, which was tested and validated in
software in the Model-in-the-Loop (MIL) step. Next, automatic code generation was performed
in the Software-in-the-Loop (SIL) step, validating the generated firmware in software; and in
the Processor-in-the-Loop (PIL) step, the generated code was validated experimentally by
flashing it onto the microcontroller and using support materials to simulate the sensors and
actuators. Once the parts of the project were validated, and as a way of evaluating the system
in a situation close to real planting, an application was developed to evaluate controlled seed
dispensing, including detection of a possible obstruction of one of the holes in the dispensing
disc. This application was intended as a tool to evaluate the average planting distance between
seeds and the average planting speed, and to generate statistics on the accuracy of seed
dispensing. Finally, the validation and experimental results are presented; they allowed us to
conclude that the model-based methodology can also be applied to agricultural systems,
enabling rapid development of embedded electronic devices while supporting testing and
validation throughout development.
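As a rough illustration of the seed-dispensing evaluation described above, the sketch below derives mean seed spacing from seed-detection timestamps and planting speed, and flags spacings wide enough to suggest a blocked hole in the dispensing disc. All names, units, and the gap threshold are hypothetical, not taken from the thesis:

```python
def seed_spacings(timestamps_s, speed_m_s):
    """Convert consecutive seed-detection timestamps (seconds) into
    spacings along the row (metres), assuming constant speed (m/s)."""
    return [(t2 - t1) * speed_m_s
            for t1, t2 in zip(timestamps_s, timestamps_s[1:])]

def evaluate_planting(timestamps_s, speed_m_s, gap_factor=1.5):
    """Return the mean spacing plus indices of gaps wide enough to
    suggest a blocked disc hole: a single skipped seed roughly doubles
    one spacing, so a factor between 1.5 and 2 is a plausible flag."""
    spacings = seed_spacings(timestamps_s, speed_m_s)
    mean = sum(spacings) / len(spacings)
    suspects = [i for i, s in enumerate(spacings) if s > gap_factor * mean]
    return mean, suspects
```

With detections at 0.0, 0.5, 1.0, 2.0, 2.5 and 3.0 s at 0.4 m/s, the doubled interval between 1.0 s and 2.0 s is flagged as a possible obstruction.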
|
452 |
On the designs of early phase oncology studies
Ananthakrishnan, Revathi Nayantara, 01 December 2017
This thesis focuses on the design, statistical operating characteristics and interpretation of early phase oncology clinical trials. Anti-cancer drugs are generally highly toxic and it is imperative to deliver a dose to the patient that is low enough to be safe but high enough to produce a clinically meaningful response. Thus, a study of dose limiting toxicities (DLTs) and a determination of the maximum tolerated dose (MTD) of a drug that can be used in later phase trials is the focus of most Phase I oncology trials. We first comprehensively compare the statistical operating characteristics of various early phase oncology designs, finding that all the designs examined select the MTD more accurately when there is a clear separation between the true DLT rate at the MTD and the rates at the dose levels immediately above and below. Among the rule-based designs studied, we found that the 3+3 design under-doses a large percentage of patients and is not accurate in selecting the MTD for all the cases considered. The 5+5 a design picks the MTD as accurately as the model-based designs for the true DLT rates generated using the chosen log-logistic and linear dose-toxicity curves, but requires enrolling a larger number of patients. The model-based designs examined, namely the mTPI, TEQR, BOIN, CRM and EWOC designs, perform well on the whole, assign the maximum percentage of patients to the MTD, and pick the MTD fairly accurately. However, the limited sample size of these Phase I oncology trials makes it difficult to accurately predict the MTD. Hence, we next study the effect of sample size and cohort size on the accuracy of dose selection in early phase oncology designs, finding that an adequate sample size is crucial. We then propose some integrated Phase 1/2 oncology designs, namely the 20+20 accelerated titration design and extensions of the mTPI and TEQR designs, that consider both toxicity and efficacy in dose selection, utilizing a larger sample size.
We demonstrate that these designs provide an improvement over the existing early phase designs.
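The 3+3 design criticized above follows a well-known textbook rule: escalate after 0/3 DLTs, expand the cohort after 1/3, and declare the next-lower dose the MTD once 2 or more of 6 (or 2 or more of the first 3) patients experience a DLT. A minimal sketch of that rule (the classical formulation, not the thesis's own simulation code) is:

```python
def three_plus_three(cohort_outcomes):
    """Walk the classical 3+3 escalation rule over per-dose DLT counts.

    cohort_outcomes[d] = (dlts_in_first_3, dlts_in_next_3) at dose d.
    Returns the index of the declared MTD, or -1 if even the lowest
    dose is too toxic."""
    dose = 0
    while dose < len(cohort_outcomes):
        first3, next3 = cohort_outcomes[dose]
        if first3 == 0:
            dose += 1                  # 0/3 DLTs: escalate
        elif first3 == 1:
            if first3 + next3 <= 1:    # 1/6 DLTs after expansion: escalate
                dose += 1
            else:                      # >=2/6 DLTs: MTD is the dose below
                return dose - 1
        else:                          # >=2/3 DLTs: MTD is the dose below
            return dose - 1
    return len(cohort_outcomes) - 1    # ran out of dose levels
```

For example, outcomes of (0, 0), (1, 0), (1, 2) across three dose levels declare dose index 1 as the MTD.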
|
453 |
Ingénierie dirigée par les modèles pour la gestion de la variabilité dans le test d'applications mobiles / Model-Driven Engineering for Variability Management in Mobile Application Testing
Ridene, Youssef, 23 September 2011
Mobile applications have increased substantially in volume with the emergence of smartphones. Ensuring high quality and a successful user experience is crucial to the success of such applications. Only an efficient test procedure allows developers to meet these requirements. In the context of embedded mobile applications, testing is costly and repetitive.
This is mainly due to the large number of different mobile devices. In this thesis, we describe MATeL, a Domain-Specific Modeling Language (DSML) for designing test scenarios for mobile applications. Its abstract syntax, i.e. a metamodel and OCL constraints, enables the test designer to manipulate the business concepts of mobile application testing, such as the tester, the mobile device, or the expected and obtained results. It also enables him/her to enrich these scenarios with variability points, in the spirit of Software Product-Line engineering, that specify variations in the test according to the characteristics of one mobile or a set of mobiles. The concrete syntax of MATeL, inspired by UML sequence diagrams, and its Eclipse-based environment allow the user to develop scenarios with relative ease. MATeL is built upon an industrial platform (a test bed) in order to be able to run scenarios on several different phones. The approach is illustrated in this thesis through use cases and experiments that allowed us to verify and validate our contribution.
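MATeL itself is a modeling language with an Eclipse environment, so the sketch below is only an illustration of the variability-point idea in plain Python: a scenario mixes fixed steps with points that are resolved against a device profile, so one abstract scenario yields a different concrete test per phone. Every step name and device field here is invented:

```python
def resolve(scenario, device):
    """Resolve a scenario's variability points against one device
    profile; steps a device cannot perform resolve to None and are
    dropped from the concrete scenario."""
    concrete = []
    for step in scenario:
        if callable(step):              # a variability point
            concrete.append(step(device))
        else:                           # a fixed step
            concrete.append(step)
    return [s for s in concrete if s is not None]

scenario = [
    "launch app",
    lambda d: "rotate screen" if d.get("rotates") else None,
    lambda d: f"tap at {d['tap_api']}",  # the tap API varies per mobile
    "assert main view shown",
]
```

Resolving the same scenario against two device profiles then produces two different concrete step lists, which is the point of variability management in the test models.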
|
454 |
ORQA : un canevas logiciel pour la gestion de l'énergie dans les véhicules électriques / ORQA: a software framework for energy management in electric vehicles
Christophe-Tchakaloff, Borjan, 13 January 2015
Electric vehicles are nowadays a viable alternative to vehicles built around an internal combustion engine. They offer cleaner and more comfortable driving thanks to the electric motor.
The energy management of an electric vehicle is usually focused on engine operation, the biggest energy consumer, thus ignoring user comfort. The contribution presented in this thesis allows user preferences to be taken into account in the energy management of electric vehicles. It takes shape as a framework named ORQA. First, the vehicle's devices are characterised by their energy requirements and the quality levels they offer. An energy manager based on the device characteristics is then integrated into the software system of the vehicle. The manager offers the user a solution to configure a trip request, based on usage preferences and optional constraints on duration and consumption. The solution defines a set of constraints on the motor and on the comfort-related devices during trip execution. The energy manager runs on the vehicle's embedded systems, whose platforms are highly constrained; its implementation is therefore optimised to satisfy the execution platform's constraints. The interest and validity of the chosen approach are attested by comparing the original configuration with an optimised configuration on two example vehicles.
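One way to picture the trip-configuration step is as choosing one quality level per comfort device so that total quality is maximised while total energy stays within the trip's budget. The sketch below is an invented toy formulation of that trade-off, not ORQA's actual algorithm; device names, quality scales, and energy figures are all made up:

```python
from itertools import product

def configure_trip(devices, budget_wh):
    """Pick one (quality, energy) level per device maximising summed
    quality under the energy budget.  Exhaustive search over all level
    combinations, fine for a handful of comfort devices."""
    best = None
    for choice in product(*(d["levels"] for d in devices)):
        cost = sum(energy for _, energy in choice)
        quality = sum(q for q, _ in choice)
        if cost <= budget_wh and (best is None or quality > best[0]):
            best = (quality, choice)
    return best

devices = [
    {"name": "heating", "levels": [(0, 0), (1, 400), (2, 900)]},  # (quality, Wh)
    {"name": "audio",   "levels": [(0, 0), (1, 50)]},
]
```

With a 500 Wh comfort budget, the search settles on medium heating plus audio; a tighter budget degrades the comfort devices first, which mirrors the idea of trading quality of service against energy.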
|
455 |
Avaliação de custo e eficácia de métodos e critérios de teste baseado em Máquinas de Estados Finitos / Evaluation of cost and effectiveness of FSM-based testing methods and criteria
Dusse, Flávio, 16 December 2009
Test case generation methods aim to generate a test suite that offers an acceptable trade-off between cost and benefit. Test coverage criteria define testing requirements that an adequate test suite must fulfill. Methods and criteria help select test cases from specifications, which can be described by means of models such as Finite State Machines (FSMs).
There are several generation methods and coverage criteria, which differ in the properties they require of the FSM, the cost of the generated tests, and their effectiveness in revealing faults. In spite of intense research into the definition of these methods and criteria, few supporting tools are available, and there are few application reports on cost and effectiveness to guide the definition of effective test strategies. Thus, it is necessary to obtain real data on the advantages and disadvantages of the methods and criteria to support decision-making in the software development process as regards validation and testing activities. This work presents the results of experiments that evaluate the cost and effectiveness of applying the most relevant methods and criteria, to support the definition of test strategies in several contexts, such as the development of communication protocols and reactive systems. A prototype developed through reengineering of the Plavis/FSM tool was used to support the experiments.
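Among FSM-based coverage criteria, transition coverage is one of the simplest: reach each transition's source state by a shortest input path, then fire the transition. The sketch below illustrates only that basic criterion (it is not the Plavis/FSM implementation, and it is far weaker than fault-detecting methods such as W or Wp, which add state-identification sequences):

```python
from collections import deque

def transition_cover(fsm, initial):
    """One input sequence per transition: a shortest access path from
    the initial state to the transition's source, followed by the
    transition's own input.  fsm: {state: {input: next_state}}."""
    paths = {initial: []}          # shortest input path to each state
    queue = deque([initial])
    while queue:                   # BFS over the transition graph
        s = queue.popleft()
        for inp, nxt in fsm[s].items():
            if nxt not in paths:
                paths[nxt] = paths[s] + [inp]
                queue.append(nxt)
    return [paths[s] + [inp] for s in fsm for inp in fsm[s]]

fsm = {"idle": {"start": "run"},
       "run":  {"stop": "idle", "pause": "hold"},
       "hold": {"resume": "run"}}
```

For this toy machine the criterion yields four sequences, one per transition; comparing such suites against stronger criteria on cost (total sequence length) and fault-revealing power is the kind of measurement the experiments above are about.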
|
456 |
Advances in dual-energy computed tomography imaging of radiological properties
Han, Dong, 01 January 2018
Dual-energy computed tomography (DECT) has shown great potential in reducing the uncertainties of proton ranges and of low-energy photon cross-section estimation used in radiation therapy planning. The work presented herein investigated three contributions for advancing DECT applications. 1) A linear and separable two-parameter DECT model, the basis vector model (BVM), was used to estimate proton stopping power. Compared to other nonlinear two-parameter models in the literature, the BVM model achieves comparable accuracy for typical human tissues and outperforms the nonlinear models in estimating linear attenuation coefficients. This is the first study to clearly illustrate the advantages of a linear model, not only in accurately mapping radiological quantities for radiation therapy, but also in providing a unique model for accurate linear forward projection modelling, which is needed by statistical iterative reconstruction (SIR) and other advanced DECT reconstruction algorithms. 2) Accurate DECT requires knowledge of x-ray beam properties. Using the Birch-Marshall model and the beam hardening correction coefficients encoded in a CT scanner's sinogram header files, an efficient and accurate way to estimate the x-ray spectrum is proposed. The merits of the proposed technique lie in requiring no physical transmission measurement after a one-time calibration against an independently measured spectrum. This technique can also be used to monitor the aging of x-ray CT tubes. 3) An iterative filtered back projection with anatomical constraint (iFBP-AC) algorithm was also implemented on a digital phantom to evaluate its ability to mitigate beam hardening effects and support accurate material decomposition for in vivo imaging of photon cross sections and proton stopping power. Compared to iFBP without constraints, both algorithms demonstrate high efficiency of convergence.
For an idealized digital phantom, similar accuracy was observed under a noiseless situation. With clinically achievable noise levels added to the sinograms, iFBP-AC greatly outperforms iFBP in predicting photon linear attenuation at low energy, i.e., 28 keV. The estimated mean errors of iFBP and iFBP-AC for cortical bone are 1% and 0.7%, respectively; the standard deviations are 0.6% and 5%, respectively. The achieved accuracy of iFBP-AC is robust across contrast levels. Similar mean errors are maintained for muscle tissue: the standard deviation achieved by iFBP-AC is 1.2%, whereas the standard deviation yielded by iFBP is about 20.2%. The iFBP-AC algorithm thus shows potential for quantitative DECT measurement. The contributions in this thesis aim to improve the clinical performance of DECT.
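The linearity of the BVM can be illustrated with a toy two-energy decomposition: if the attenuation is a fixed linear combination of two basis functions, mu(E) = c1*f1(E) + c2*f2(E), then measurements at two energies determine the weights (c1, c2) by a 2x2 solve, after which mu can be predicted at any energy. The basis values below are made up; only the algebra reflects the model:

```python
def bvm_components(mu_low, mu_high, basis_low, basis_high):
    """Solve  mu(E) = c1*f1(E) + c2*f2(E)  at two energies for the
    weights (c1, c2) by Cramer's rule.
    basis_low = (f1(E_low), f2(E_low)); basis_high likewise."""
    a, b = basis_low
    c, d = basis_high
    det = a * d - b * c          # assumed nonzero: independent bases
    c1 = (mu_low * d - b * mu_high) / det
    c2 = (a * mu_high - mu_low * c) / det
    return c1, c2

def bvm_mu(c1, c2, basis_at_E):
    """Predict linear attenuation at any energy from the weights."""
    f1, f2 = basis_at_E
    return c1 * f1 + c2 * f2
```

Because the same weights feed every energy linearly, the forward projection stays linear in (c1, c2), which is the property the abstract credits for making SIR-style reconstruction tractable.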
|
457 |
Model-based testing of dynamic component systems
Haschemi, Siamak, 22 July 2015
This dissertation addresses the question of whether the established technique of model-based testing (MBT) can be applied to a special type of software component system called dynamic component systems (DCSs). DCSs have the special characteristic that the composition of component instances can change while the system is running.
In these systems, each component instance exhibits its own lifecycle, which makes it possible to update existing components, or add new ones, while the system is running. Such changes mean that the functionality provided by component instances may become restricted or unavailable at any time. This characteristic of DCSs makes the development of components difficult, because required and used functionality is not available at all times. The goal of this dissertation is to develop a systematic testing approach that allows a component's tolerance to dynamic availability to be tested at development time. We analyze to what extent existing MBT approaches can be reused or adapted. The approaches of this dissertation have been implemented in a software prototype, which was used in a case study showing that systematic test generation for DCSs can be achieved by applying and adapting model-based testing technologies.
|
458 |
Génération du Code Embarqué à partir de Composants de Haut-niveau Hétérogènes / Embedded Code Generation from Heterogeneous High-Level Components
Sofronis, Christos, 15 November 2006
The work described in this thesis is part of a research effort at the VERIMAG laboratory to create a model-based tool chain for the design and implementation of embedded systems. We use a three-layer approach that separates the application level from the implementation/architecture level. The application is described in a high-level language that is independent of implementation details, and is then mapped onto the execution architecture using techniques that ensure the required properties are preserved.

In this thesis, the application is described in Simulink/Stateflow, a modeling language widely used in industry, for example in the automotive domain. At the architecture level, we consider implementations on a single-processor, multi-tasking platform. Multi-tasking means that the application is split into a number of tasks scheduled by a real-time operating system (RTOS) under a preemptive scheduling policy such as static priority (SP) or earliest deadline first (EDF).

Between these two layers, we add an intermediate representation based on the synchronous programming language Lustre, developed at VERIMAG over the past 25 years. This intermediate representation makes it possible to benefit from the many tools also developed at VERIMAG, such as simulators, test generators, verification tools, and code generators.

In the first part of this thesis, we study how to translate Simulink/Stateflow models automatically into Lustre models. On the Simulink side, the problem is relatively simple but nevertheless requires sophisticated algorithms to infer timing and signal-type information correctly before generating the corresponding variables in the Lustre program. The translation of Stateflow is harder for several reasons: first, Stateflow exhibits a number of "unsafe" behaviors, such as non-termination of a synchronous cycle, or semantics that depend on the graphical layout of the components in a model. Moreover, Stateflow is an imperative language, whereas Lustre is a dataflow language. For the first problem, we propose a set of statically checkable conditions defining a "safe" subset of Stateflow. For the second, we propose a set of techniques for encoding automata and sequential code as dataflow equations.

In the second part of the thesis, we study the problem of implementing synchronous programs on the single-processor, multi-tasking architecture described above. Here, the key issue is how to implement inter-task communication so that the synchronous semantics of the system is preserved. Standard implementations, using one-place buffers protected by semaphores to ensure atomicity, or other "lock-free" protocols proposed in the literature, do not preserve the synchronous semantics. We propose a new buffering scheme that preserves the synchronous semantics while also being lock-free, and we show that this scheme is optimal in terms of buffer usage.
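The core of encoding an automaton as dataflow equations, a state variable defined by an equation evaluated once per synchronous cycle, in Lustre terms something like state = init -> next(pre(state), input), can be sketched by unrolling the cycles over a finite input stream. This is a toy two-state machine illustrating only that core idea, not the thesis's full translation, which also handles hierarchy and the unsafe Stateflow behaviors:

```python
def automaton_as_dataflow(inputs, init_state="OFF"):
    """Unroll the dataflow equation  state_n = nxt(state_{n-1}, input_n)
    over a finite input stream, returning the state at every cycle."""
    def nxt(state, inp):
        # transition function of a toy ON/OFF automaton
        if state == "OFF" and inp == "on":
            return "ON"
        if state == "ON" and inp == "off":
            return "OFF"
        return state                  # self-loop on any other input
    states = []
    state = init_state
    for inp in inputs:                # one iteration = one synchronous cycle
        state = nxt(state, inp)
        states.append(state)
    return states
```

The imperative control structure disappears: the whole automaton is one equation relating the current state to the previous state and the current input, which is exactly the form a dataflow language like Lustre can consume.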
|
459 |
Le traitement des variables régionalisées en écologie : apports de la géomatique et de la géostatistique / Processing Regionalized Variables in Ecology: Contributions of Geomatics and Geostatistics
Aubry, Philippe, 06 January 2000
Faced with the contradiction of treating ecological regionalized variables without taking their spatial properties into account, we develop geomatic methods, using computing techniques, and geostatistical methods, applying the theory of random functions. After introducing elements of geomatics and geostatistics, and clarifying the specific nature of spatial autocorrelation, we introduce design-based and model-based inference. Using random functions, we study sampling efficiency, optimize predictors, and compute prediction intervals. We propose a procedure for optimizing the distance classes used when computing the variogram. We also examine the use of the integral of the variogram, and justify modeling the variogram by weighted least squares fitting. We discuss the precision of the variogram in the design-based and model-based frameworks, and in the jackknife sense. The optimization of sampling from a finite population, for estimating the spatial mean or the variogram, is examined using several combinatorial optimization heuristics and simulations of random functions. The problem of testing the correlation or association between two regionalized variables is studied, again using simulations of random functions. We review several methods and recommend the tests that explicitly account for the spatial autocorrelation of the regionalized variables. Within the framework of defining spatial association between regionalized variables, we propose a hybrid method using quadtrees and an edit distance between recursive trees. Finally, we study measures of spatial complexity, criticize fractal analysis, and propose alternative methods, notably a measure of the topological complexity of a contour map.
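The variogram estimation discussed above has a standard empirical form, the Matheron estimator: for each distance class h, gamma(h) = (1 / (2 N(h))) * sum over the N(h) pairs whose separation falls in the class of (z_i - z_j)^2. A 1-D sketch with explicit distance classes (illustrative only; it does not include the thesis's class-optimization procedure) is:

```python
def empirical_variogram(points, values, bin_edges):
    """Matheron estimator of the semivariogram per distance class.
    points: 1-D coordinates; values: the regionalized variable;
    bin_edges: class boundaries [e0, e1, ..., ek] giving k classes."""
    n_bins = len(bin_edges) - 1
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):          # each unordered pair once
            h = abs(points[i] - points[j])
            for k in range(n_bins):
                if bin_edges[k] <= h < bin_edges[k + 1]:
                    sums[k] += (values[i] - values[j]) ** 2
                    counts[k] += 1
                    break
    # gamma(h) = sum of squared increments / (2 N(h)); None if class empty
    return [s / (2 * c) if c else None for s, c in zip(sums, counts)]
```

How the bin_edges are chosen strongly affects the estimate, which is why a procedure for optimizing the distance classes, as proposed in the thesis, matters.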
|
460 |
Model Based Coding: Initialization, Parameter Extraction and Evaluation
Yao, Zhengrong, January 2005
This thesis covers topics relevant to model-based coding, a promising very low bit rate video coding technique. The idea behind this technique is to parameterize a talking head and to extract and transmit the parameters describing facial movements. At the receiver, the parameters are used to reconstruct the talking head. Since only high-level animation parameters are transmitted, very high compression can be achieved with this coding scheme. This thesis covers the following three key problems.

Although it is a fundamental problem, the initialization problem has been neglected to some extent in the literature. In this thesis, we pay particular attention to this problem and propose a pseudo-automatic initialization scheme: an analysis-by-synthesis scheme based on simulated annealing, which has proved to be efficient.

Owing to today's technical advances and the newly emerged MPEG-4 standard, new schemes for texture mapping and motion estimation are suggested, using sample-based direct texture mapping; the feasibility of active motion estimation is explored, which proves able to give more than 10 times the tracking resolution. Building on mature face detection techniques, dynamic programming is introduced into the face detection module to support face tracking.

Another important problem addressed in this thesis is how to evaluate face tracking techniques. We study the evaluation problem by examining the commonly used method, which employs a physical magnetic sensor to provide "ground truth", and point out that this method is quite misleading.
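The analysis-by-synthesis initialization based on simulated annealing can be sketched generically: propose a perturbed parameter set, synthesize, compare against the input, and accept worse candidates with a temperature-controlled probability. Here `cost` stands in for the thesis's image-comparison error; the quadratic in the usage example and all parameter values are purely illustrative:

```python
import math
import random

def anneal(cost, start, neighbour, t0=1.0, cooling=0.95, steps=500, seed=0):
    """Generic simulated-annealing minimisation.  In an
    analysis-by-synthesis setting, `cost` would render the face model
    with the candidate parameters and measure the mismatch with the
    input frame; `neighbour` perturbs the parameters."""
    rng = random.Random(seed)
    x, fx = start, cost(start)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbour(x, rng)
        fy = cost(y)
        # always accept improvements; accept worse candidates with
        # probability exp(-(fy - fx) / t) to escape local minima
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling               # geometric cooling schedule
    return best, fbest
```

On a toy 1-D quadratic with uniform perturbations the loop drifts toward the minimum; with a rendered face model the same loop searches pose and shape parameters instead of a scalar.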
|