21

Modeling Of Software As A Service Architectures And Investigation On Their Design Alternatives

Ozturk, Karahan 01 July 2010 (has links) (PDF)
In general, a common reference architecture can be derived for Software as a Service (SaaS). However, when designing a particular application, one may derive many different design alternatives from the same reference SaaS architecture specification. To meet the functional and non-functional requirements of different enterprise applications, it is important to model the possible designs so that a feasible alternative can be identified. In this thesis, we propose a systematic approach, with corresponding tool support, for guiding the design of SaaS application architectures. The approach defines a SaaS reference architecture, a family feature model, and a set of reference design rules. Based on the business requirements, an application feature model is defined using the family feature model. Selected features are related to design decisions, and a SaaS application architecture design is derived. By defining multiple application architectures based on different application feature models, we can compare the alternatives and select the most feasible one.
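The derivation and comparison of design alternatives from a family feature model can be sketched in a few lines; the feature names and constraints below are hypothetical illustrations, not taken from the thesis:

```python
from itertools import combinations

# Hypothetical family feature model: optional features plus cross-tree constraints.
OPTIONAL = ["dedicated_db", "shared_schema", "cache", "load_balancer"]
EXCLUDES = [("dedicated_db", "shared_schema")]  # mutually exclusive data-isolation modes
REQUIRES = [("cache", "load_balancer")]         # caching assumed to require a balancer here

def is_valid(selection):
    """Check a feature selection against the cross-tree constraints."""
    s = set(selection)
    if any(a in s and b in s for a, b in EXCLUDES):
        return False
    if any(a in s and b not in s for a, b in REQUIRES):
        return False
    return True

def alternatives():
    """Enumerate all valid application feature models (design alternatives)."""
    result = []
    for r in range(len(OPTIONAL) + 1):
        for combo in combinations(OPTIONAL, r):
            if is_valid(combo):
                result.append(set(combo))
    return result

alts = alternatives()
```

Each set returned by `alternatives()` corresponds to one application feature model; in practice each alternative would then be mapped to design decisions and evaluated against the application's non-functional requirements.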
22

Ingénierie dirigée par les modèles pour le provisioning d'images de machines virtuelles pour l'informatique en nuage / Model-driven engineering for provisioning virtual machine images for cloud computing

Le Nhan, Tam 10 December 2013 (has links) (PDF)
Context and problem statement: Nowadays, cloud computing is ubiquitous in research as well as in industry. It is considered a new generation of computing in which dynamically scalable, virtualized computing resources are provided as services over the Internet. Users can access cloud systems through different interfaces on their various devices; they only pay for what they use, according to the Service-Level Agreement (SLA) established between them and the cloud service providers. One of the main characteristics of cloud computing is virtualization, through which all resources become transparent to users: users no longer need to control and maintain the computing infrastructure. Virtualization in cloud computing combines virtual machine images (VMIs) with the physical machines on which these images are deployed. Typically, deploying a VMI involves booting the image and installing and configuring the packages defined by the VMI. In traditional approaches, VMIs are created by the technical experts of the cloud service providers: pre-packaged VMIs that come with pre-installed and pre-configured components. To answer a customer request, the provider selects an appropriate VMI to clone and deploy on a cloud node. If no such VMI exists, a new VMI is created for this request; it may be generated from the closest existing VMI or be built entirely from scratch. The life cycle of VMI provisioning in the traditional approach is described in Figure 1. A standard VMI normally contains many packages, some of which will never be used.
This is because the VMI is created at design time with the intention of being cloned later. This approach has drawbacks, such as the large amount of resources needed to store VMIs or to deploy them. Moreover, it requires starting many components, including unused ones. In particular, from a service-management point of view, it is difficult to manage the complexity of the interdependencies between the different components in order to maintain and evolve the deployed VMIs. To solve these problems, cloud service providers could automate the provisioning process and let users choose VMIs flexibly, while preserving the providers' gains in time, resources, and cost. With this in mind, providers should consider several concerns: (1) Which packages and dependencies will be deployed? (2) How can a configuration be optimized in terms of cost, time, and resource consumption? (3) How can the most similar VMI be found, and how can it be adapted to obtain a new VMI? (4) How can the errors that often arise from manual operations be avoided? (5) How can the evolution of a deployed VMI be managed and adapted to the needs of automatic reconfiguration and scaling? Because of these requirements, building a cloud platform management system (PaaS, Platform as a Service) is difficult, particularly in the VMI provisioning process. This difficulty therefore calls for an appropriate approach to managing VMIs in cloud computing systems, one that provides solutions for reconfiguration and automatic scaling.
Challenges and key problems: From this problem statement, we identified seven challenges for the development of a provisioning process in cloud computing.
* C1: Modeling the variability of VMI configuration options in order to manage the interdependencies between software packages. Different software components may require specific packages or operating-system libraries for a correct configuration. These dependencies must be arranged, selected, and resolved manually for each copy of the standard VMI. Moreover, VMIs are created to satisfy user requirements that may share common sub-requirements; modeling the commonality and variability of VMIs with respect to these requirements is therefore necessary.
* C2: Reducing the data transferred over the network during the provisioning process. In order to be ready to answer customer requests, many packages are installed on the standard virtual machine, including packages that will not be used. These packages should be limited in order to minimize the size of the VMIs.
* C3: Optimizing resource consumption at run-time. In the traditional approach, creating and updating VMIs requires time-consuming manual operations. Furthermore, all packages in a VMI, including unused ones, are started and therefore consume resources. This resource consumption should be optimized.
* C4: Providing an interactive tool that helps users choose VMIs. Cloud service providers normally want to give their customers flexibility in choosing VMIs. However, users do not have deep technical knowledge, so tools that facilitate these choices are needed.
* C5: Automating the deployment of VMIs. Several operations of the provisioning process are very complex; automating them can reduce deployment time and errors.
* C6: Supporting the reconfiguration of VMIs during their execution. One of the important characteristics of cloud computing is to provide services on demand. Since demands evolve while VMIs are running, cloud systems should also adapt to these evolving demands.
* C7: Managing the deployment topology of VMIs. VMI deployment must take into account not only multiple VMIs with the same configuration, but also multiple VMIs with different configurations. In addition, VMIs may be deployed on different cloud platforms when a provider uses the infrastructure of another provider.
To address these challenges, we consider three key problems for the deployment of the VMI provisioning process:
1. The need for an abstraction level for managing VMI configurations. An appropriate approach should provide a high level of abstraction for modeling and managing VMI configurations, with their packages and the dependencies between these packages. This abstraction allows the expert engineers of cloud service providers to specify the product family of VMI configurations. It also facilitates the analysis and modeling of the commonality and variability of VMI configurations, as well as the creation of valid and consistent VMIs.
2. The need for an abstraction level for the VMI deployment process. An appropriate approach to VMI provisioning should provide an abstraction of the deployment process.
3. The need for an automatic deployment and reconfiguration process. An appropriate approach should provide an abstraction of the automatic deployment and reconfiguration process. This abstraction facilitates the specification, analysis, and modeling of the modularity of the process.
Moreover, the approach should support automation in order to reduce manual tasks, which are costly in terms of performance and potentially error-prone.
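Challenges C1 and C2 above — managing package interdependencies and deploying only what is needed — can be illustrated with a minimal dependency-resolution sketch; the package names and dependency graph are invented for illustration:

```python
# Hypothetical package dependency graph for a VMI configuration.
DEPS = {
    "web_app": ["app_server"],
    "app_server": ["jre"],
    "jre": [],
    "monitoring": [],  # present in the standard VMI but not requested here
}

def resolve(requested):
    """Return an install order covering only the requested packages
    and their transitive dependencies (depth-first post-order)."""
    order, seen = [], set()
    def visit(pkg):
        if pkg in seen:
            return
        seen.add(pkg)
        for dep in DEPS[pkg]:
            visit(dep)
        order.append(pkg)
    for pkg in requested:
        visit(pkg)
    return order

plan = resolve(["web_app"])
```

Unrequested packages such as `monitoring` never enter the plan, which is the intuition behind minimizing VMI size and the data transferred during provisioning.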
23

3D face analysis : landmarking, expression recognition and beyond

Zhao, Xi 13 September 2010 (has links) (PDF)
This Ph.D. thesis is dedicated to automatic facial analysis in 3D, including facial landmarking and facial expression recognition. Indeed, facial expression plays an important role both in verbal and non-verbal communication and in expressing emotions. Thus, automatic facial expression recognition has various purposes and applications, and is in particular at the heart of "intelligent" human-centered human/computer (robot) interfaces. Meanwhile, automatic landmarking provides a priori knowledge on the location of face landmarks, which is required by many face analysis methods, such as face segmentation and the feature extraction used, for instance, for expression recognition. The purpose of this thesis is thus to elaborate 3D landmarking and facial expression recognition approaches, finally proposing an automatic facial activity (facial expression and action unit) recognition solution. In this work, we have proposed a Bayesian Belief Network (BBN) for recognizing facial activities, such as facial expressions and facial action units. A Statistical Facial feAture Model (SFAM) has also been designed to first automatically locate face landmarks, so that a fully automatic facial expression recognition system can be formed by combining the SFAM and the BBN. The key contributions are the following. First, we have proposed to build a morphable partial face model, named SFAM, based on Principal Component Analysis. This model allows learning both the global variations in face landmark configuration and the local ones in terms of texture and local geometry around each landmark. Various partial face instances can be generated from SFAM by varying the model parameters. Secondly, we have developed a landmarking algorithm based on the minimization of an objective function describing the correlation between model instances and query faces. Thirdly, we have designed a Bayesian Belief Network whose structure describes the causal relationships among subjects, expressions, and facial features.
Facial expressions or action units are modelled as the states of the expression node and are recognized by identifying the state with the maximum belief. We have also proposed a novel method for BBN parameter inference using a statistical feature model that can be considered as an extension of SFAM. Finally, in order to enrich the information used for 3D face analysis, and particularly 3D facial expression recognition, we have also elaborated a 3D face feature, named SGAND, to characterize the geometric properties of a point on a 3D face mesh using its surrounding points. The effectiveness of all these methods has been evaluated on the FRGC, BU3DFE, and Bosphorus datasets for facial landmarking, as well as on the BU3DFE and Bosphorus datasets for facial activity (expression and action unit) recognition.
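The recognition step — identifying the expression state with maximum belief — can be illustrated with a simplified sketch. The features, probabilities, and the naive conditional-independence assumption below are illustrative stand-ins for the thesis's actual BBN structure and inference method:

```python
import math

# Hypothetical priors P(expression) and likelihoods P(feature | expression).
PRIOR = {"happy": 0.5, "sad": 0.5}
LIKELIHOOD = {
    # feature -> expression -> P(feature observed | expression)
    "mouth_corner_up": {"happy": 0.9, "sad": 0.2},
    "brow_lowered":    {"happy": 0.1, "sad": 0.7},
}

def beliefs(evidence):
    """Posterior belief for each expression state given binary feature evidence,
    assuming conditional independence (a naive stand-in for full BBN inference)."""
    scores = {}
    for expr, prior in PRIOR.items():
        logp = math.log(prior)
        for feat, observed in evidence.items():
            p = LIKELIHOOD[feat][expr]
            logp += math.log(p if observed else 1.0 - p)
        scores[expr] = logp
    z = sum(math.exp(v) for v in scores.values())
    return {e: math.exp(v) / z for e, v in scores.items()}

b = beliefs({"mouth_corner_up": True, "brow_lowered": False})
recognized = max(b, key=b.get)  # "maximum of beliefs" decision rule
```

The returned dictionary normalizes to a distribution over expression states, and the recognized state is simply the argmax of the beliefs.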
24

Usando contextos e requisitos não-funcionais para configurar modelos de objetivos, modelos de features e cenários para linhas de produtos de software / Using contexts and non-functional requirements to configure goal models, feature models, and scenarios for software product lines

VARELA, Jean Poul 23 February 2015 (has links)
GS2SPL (Goals and Scenarios to Software Product Lines) is a process aimed at systematically obtaining a feature model and the specification of use case scenarios from the goal models of a Software Product Line (SPL). Moreover, this process allows configuring specific applications of an SPL based on the fulfillment of non-functional requirements (NFRs). However, this configuration is performed without considering the state of the context in which the system will be deployed. This is a limitation, because a configuration may not meet the needs of the stakeholders. On the other hand, E-SPL (Early Software Product Line) is a process that allows configuring a product while maximizing the fulfillment of NFRs and taking the context state into account. To overcome the limitation of the GS2SPL process, in this work we propose an extension of GS2SPL that incorporates the configuration activity of E-SPL. The new process is called GSC2SPL (Goals, Scenarios and Contexts to Software Product Lines); it allows obtaining a feature model and use case scenarios from contextual goal models. The process also allows configuring these requirements artifacts based on information about the context, aiming to maximize the fulfillment of non-functional requirements. The process is supported by the GCL-Tool (Goal and Context for Product Line - Tool) and was applied to the specification of two SPLs: Media@ and Smart Home.
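The core idea of context-aware configuration driven by NFR fulfillment can be sketched as follows; the configurations, contexts, and weights are hypothetical, and the real GSC2SPL configuration operates on contextual goal models rather than flat dictionaries:

```python
# Hypothetical candidate configurations with NFR contributions and the
# contexts in which each configuration is applicable.
CANDIDATES = {
    "config_wifi":   {"nfr": {"performance": 0.9, "battery": 0.3}, "contexts": {"indoor"}},
    "config_mobile": {"nfr": {"performance": 0.6, "battery": 0.8}, "contexts": {"indoor", "outdoor"}},
}

def configure(context, weights):
    """Pick the context-applicable configuration that maximizes
    the weighted fulfillment of non-functional requirements."""
    applicable = {name: c for name, c in CANDIDATES.items()
                  if context in c["contexts"]}
    def score(c):
        return sum(weights.get(n, 0.0) * v for n, v in c["nfr"].items())
    return max(applicable, key=lambda name: score(applicable[name]))

choice = configure("outdoor", {"performance": 1.0, "battery": 1.0})
```

Filtering by the context state first, and only then optimizing NFR fulfillment, is what distinguishes this style of configuration from the context-unaware one.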
25

Uma abordagem para linha de produtos de software científico baseada em ontologia e workflow / An ontology- and workflow-based approach for scientific software product lines

Costa, Gabriella Castro Barbosa 27 February 2013 (has links)
A way to improve the reusability and maintainability of a family of software products is through the Software Product Line (SPL) approach. In some situations, such as scientific applications for a given area, it is advantageous to develop a collection of related software products using an SPL approach. Scientific Software Product Lines (SSPLs) differ from Software Product Lines in that an SSPL uses an abstract scientific workflow model. This workflow is defined according to the scientific domain and, using this abstract workflow model, the products are instantiated. Analyzing the difficulties of specifying scientific experiments, and considering the need to compose scientific applications for their implementation, more appropriate semantic support for the domain analysis phase is necessary. Therefore, this work proposes an approach based on the combination of feature models and ontologies, named PL-Science, to support the specification and conduction of scientific experiments. The PL-Science approach considers the SSPL context and aims to assist scientists in defining a scientific experiment, specifying a workflow that encompasses the scientific applications of a given experiment. Using SPL concepts, scientists can reuse the models that specify the scientific product line and make decisions according to their needs. This work also emphasizes the use of ontologies to facilitate the process of applying SPLs to scientific domains. By using an ontology as a domain model, we can provide additional information as well as add more semantics to the context of Scientific Software Product Lines.
26

Feature-based Configuration Management of Applications in the Cloud / Feature-basierte Konfigurationsverwaltung von Cloud-Anwendungen

Luo, Xi 27 June 2013 (has links) (PDF)
Complex business applications are increasingly offered as services over the Internet, as so-called Software-as-a-Service (SaaS) applications. The SAP NetWeaver Cloud offers an OSGi-based open platform, which enables multi-tenant SaaS applications to run in the cloud. A multi-tenant SaaS application is designed so that a single application instance is used by several customers and their users. As different customers have different requirements for the functionality and quality of the application, the application instance must be configurable, and it must be possible to add new configurations to a multi-tenant SaaS application at run-time. In this thesis, we propose concepts for a configuration management used for managing and creating tenant configurations of cloud applications. The concepts are implemented in a tool based on Eclipse and extended feature models. In addition, we evaluate our concepts and the applicability of the developed solution in the SAP NetWeaver Cloud, using a cloud application as a concrete case example.
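A minimal sketch of per-tenant, run-time configuration against a feature model is shown below; the feature names and validation rules are assumptions for illustration and do not reflect the actual SAP NetWeaver Cloud API:

```python
# Hypothetical feature model for a multi-tenant SaaS application:
# every tenant configuration must include the mandatory features
# and may additionally select optional ones.
MANDATORY = {"core", "auth"}
OPTIONAL = {"reporting", "export", "branding"}

tenant_configs = {}  # tenant id -> selected features, extendable at run-time

def add_tenant_config(tenant, selected):
    """Validate a tenant configuration against the feature model and
    register it without restarting the shared application instance."""
    selected = set(selected)
    if not MANDATORY <= selected:
        raise ValueError("missing mandatory features")
    if not selected <= MANDATORY | OPTIONAL:
        raise ValueError("unknown features selected")
    tenant_configs[tenant] = selected

def is_enabled(tenant, feature):
    """Feature toggle lookup for the shared instance."""
    return feature in tenant_configs.get(tenant, set())

add_tenant_config("acme", {"core", "auth", "reporting"})
```

Because all tenants share one application instance, the instance consults `is_enabled` per request instead of being rebuilt per customer, which is the essence of multi-tenant configurability.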
27

Desenvolvimento e reúso de frameworks com base nas características do domínio / Development and reuse of frameworks based on domain features

Viana, Matheus Carvalho 08 May 2014 (has links)
Frameworks are software artifacts that implement the basic functionality of a domain. Their reuse can improve the efficiency of the development process and the quality of application code. However, frameworks are difficult to develop and reuse, since they require a complex structure to implement domain variability and must be adaptable enough to be reused by different applications. In view of these difficulties, this research presents two approaches: 1) the From Features to Frameworks (F3) approach, in which the developer models the features of a domain and a pattern language helps in implementing a framework based on this model; and 2) an approach that uses a Domain-Specific Language (DSL), built from the identification and analysis of the domain features of a framework, to facilitate the reuse of that framework. A tool, called the From Features to Frameworks Tool (F3T), was also developed to support the use of these two approaches, providing editors for modeling domains and applications and automating the implementation of the code of frameworks, DSLs, and applications. Experiments conducted during this project showed that, besides facilitating the development and reuse of frameworks, these two approaches make these processes more efficient and allow frameworks and applications to be built with less difficulty.
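The idea of generating application code from a feature selection, of the kind the F3T tool automates, can be sketched with a simple template approach; the templates, feature names, and generated classes are hypothetical:

```python
# Hypothetical code templates, one per domain feature.
TEMPLATES = {
    "persistence": "class {app}Repository:\n    def save(self, obj): ...\n",
    "logging":     "class {app}Logger:\n    def log(self, msg): ...\n",
}

def generate(app_name, features):
    """Emit one code unit per selected feature from its template."""
    units = []
    for feature in features:
        template = TEMPLATES.get(feature)
        if template is None:
            raise ValueError(f"no template for feature: {feature}")
        units.append(template.format(app=app_name))
    return "\n".join(units)

code = generate("Shop", ["persistence", "logging"])
```

Selecting a different feature set yields a different application skeleton from the same domain model, which is what makes the feature model the single point of variability.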
28

3D face analysis : landmarking, expression recognition and beyond / Reconnaissance de l'expression du visage

Zhao, Xi 13 September 2010 (has links)
This Ph.D. thesis is dedicated to automatic facial analysis in 3D, including facial landmarking and facial expression recognition. Indeed, facial expression plays an important role both in verbal and non-verbal communication and in expressing emotions. Thus, automatic facial expression recognition has various purposes and applications, and is in particular at the heart of "intelligent" human-centered human/computer (robot) interfaces. Meanwhile, automatic landmarking provides a priori knowledge on the location of face landmarks, which is required by many face analysis methods, such as face segmentation and the feature extraction used, for instance, for expression recognition. The purpose of this thesis is thus to elaborate 3D landmarking and facial expression recognition approaches, finally proposing an automatic facial activity (facial expression and action unit) recognition solution. In this work, we have proposed a Bayesian Belief Network (BBN) for recognizing facial activities, such as facial expressions and facial action units. A Statistical Facial feAture Model (SFAM) has also been designed to first automatically locate face landmarks, so that a fully automatic facial expression recognition system can be formed by combining the SFAM and the BBN. The key contributions are the following. First, we have proposed to build a morphable partial face model, named SFAM, based on Principal Component Analysis. This model allows learning both the global variations in face landmark configuration and the local ones in terms of texture and local geometry around each landmark. Various partial face instances can be generated from SFAM by varying the model parameters. Secondly, we have developed a landmarking algorithm based on the minimization of an objective function describing the correlation between model instances and query faces. Thirdly, we have designed a Bayesian Belief Network whose structure describes the causal relationships among subjects, expressions, and facial features. Facial expressions or action units are modelled as the states of the expression node and are recognized by identifying the state with the maximum belief. We have also proposed a novel method for BBN parameter inference using a statistical feature model that can be considered as an extension of SFAM. Finally, in order to enrich the information used for 3D face analysis, and particularly 3D facial expression recognition, we have also elaborated a 3D face feature, named SGAND, to characterize the geometric properties of a point on a 3D face mesh using its surrounding points. The effectiveness of all these methods has been evaluated on the FRGC, BU3DFE, and Bosphorus datasets for facial landmarking, as well as on the BU3DFE and Bosphorus datasets for facial activity (expression and action unit) recognition.
29

PRECISE - Um processo de verificação formal para modelos de características de aplicações móveis e sensíveis ao contexto / PRECISE - A Formal Verification Process for Feature Models for Mobile and Context-Aware Applications

Fabiana Gomes Marinho 27 August 2012 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico / As LPSs, além do seu uso em aplicações tradicionais, têm sido utilizadas no desenvolvimento de aplicações que executam em dispositivos móveis e são capazes de se adaptarem sempre que mudarem os elementos do contexto em que estão inseridas. Essas aplicações, ao sofrerem alterações devido a mudanças no seu ambiente de execução, podem sofrer adaptações inconsistentes e, consequentemente, comprometer o comportamento esperado. Por esse motivo, é essencial a criação de um processo de verificação que consiga checar a corretude e a consistência dessas LPSs, bem como checar a corretude tanto dos produtos derivados como dos produtos adaptados dessas LPSs. Sendo assim, nesta tese de doutorado é proposto o PRECISE - um Processo de Verificação Formal para Modelos de Características de Aplicações Móveis e Sensíveis ao Contexto. O PRECISE auxilia na identificação de defeitos na modelagem da variabilidade de uma LPS para aplicações móveis e sensíveis ao contexto e, assim, minimiza problemas que ocorreriam durante a execução dos produtos gerados a partir dessa LPS. É importante ressaltar que o PRECISE é definido com base em uma especificação formal e em um conjunto de propriedades de boa formação elaborados usando Lógica de Primeira Ordem. Essa especificação é um pré-requisito para a realização de uma modelagem da variabilidade sem ambiguidades. Para avaliar o PRECISE, uma validação é realizada a partir da especificação formal e das propriedades de boa formação definidas no processo. Essa validação tem como objetivo mostrar que o PRECISE consegue identificar defeitos, anomalias e inconsistências existentes em um modelo de variabilidades de uma LPS para aplicações móveis e sensíveis ao contexto. Nessa validação, cinco técnicas diferentes são utilizadas: Perfil UML, OCL, Lógica Proposicional, Prolog e Simulação.
Além de minimizar os defeitos e inconsistências dos modelos de variabilidades das LPSs, o PRECISE ainda se beneficia da generalidade e flexibilidade intrínsecas à notação formal usada na sua especificação. / SPLs have been used to develop different types of applications, including ones that run on mobile devices and are able to adapt when the context elements in which they are located change. These applications can change due to variations in their execution environment, and inconsistent adaptations can occur, compromising the expected behavior. There is therefore a need for a verification process to check the correctness and consistency of these SPLs, as well as the correctness of both the derived products and the adapted products of these SPLs. Thus, this work proposes PRECISE - A Formal Verification Process for Feature Models of Mobile and Context-Aware Applications. PRECISE helps to identify defects in the variability modeling of an SPL for mobile and context-aware applications, minimizing problems that could take place during the execution of products generated from this SPL. It is worth noting that PRECISE is defined on the basis of a formal specification and a set of well-formedness properties developed using First-Order Logic, which are prerequisites for unambiguous variability modeling. To evaluate PRECISE, a validation is performed using the formal specification and well-formedness properties defined in the process. This validation intends to show that PRECISE is able to identify defects, anomalies and inconsistencies in a variability model of an SPL for mobile and context-aware applications. In this validation, five different techniques are used: UML Profile, OCL, Propositional Logic, Prolog and Simulation. While minimizing the defects and inconsistencies in the variability models of an SPL, PRECISE also benefits from the generality and flexibility intrinsic to the formal notation used in its specification.
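The kind of propositional-logic check the abstract mentions (one of its five techniques) can be illustrated with a brute-force enumeration over a toy feature model. The feature names and constraints below are invented for illustration; real verifiers, such as the Prolog back end the thesis uses, delegate satisfiability to a solver rather than enumerating assignments.

```python
from itertools import product

# Hypothetical toy feature model for a context-aware mobile application.
FEATURES = ["Root", "GPS", "WiFi", "Offline"]

def constraints(a):
    """Well-formedness rules as a propositional formula over an assignment:
    the root is mandatory, GPS and WiFi form an or-group under Root,
    and Offline excludes WiFi (a cross-tree constraint)."""
    return (a["Root"]
            and (a["GPS"] or a["WiFi"])              # or-group under Root
            and not (a["Offline"] and a["WiFi"]))    # "excludes" constraint

def valid_products():
    """Enumerate all boolean assignments and keep the satisfying ones."""
    prods = []
    for bits in product([False, True], repeat=len(FEATURES)):
        a = dict(zip(FEATURES, bits))
        if constraints(a):
            prods.append({f for f, v in a.items() if v})
    return prods
```

On this encoding, defect detection reduces to simple queries: the model is void (inconsistent) iff `valid_products()` is empty, and a feature is "dead" iff it appears in no valid product.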
30

MULTI-TARGET TRACKING ALGORITHMS FOR CLUTTERED ENVIRONMENTS

Do hyeung Kim (8052491) 03 December 2019 (has links)
Multi-target tracking (MTT) is the problem of simultaneously estimating the number of targets and their states or trajectories. Numerous techniques have been developed over more than 50 years, with a multitude of applications in many fields of study; however, two approaches to MTT are the most widely used: i) traditional data association-based algorithms; and ii) finite set statistics (FISST)-based, data association-free Bayesian multi-target filtering algorithms. Most traditional data association-based filters mainly use a statistical or simple model of the feature without explicitly considering the correlation between the target behavior and the feature characteristics. An inaccurate feature model can lead to divergence of the estimation error or the loss of a target in heavily cluttered and/or low signal-to-noise ratio environments. Furthermore, the FISST-based data association-free Bayesian multi-target filters can frequently lose target estimates in harsh environments, mainly due to insufficient consideration of uncertainties not only in the measurement origin but also in the target's maneuvers. To address these problems, three main approaches are proposed in this research work: i) new feature models (e.g., target dimensions) dependent on the target behavior (i.e., the distance between the sensor and the target, and the aspect angle between the longitudinal axis of the target and the sensor's line of sight); ii) a new Gaussian mixture probability hypothesis density (GM-PHD) filter which explicitly considers the uncertainty in the measurement origin; and iii) a new GM-PHD filter and tracker with jump Markov system models. The effectiveness of the analytical findings is demonstrated and validated with illustrative target tracking examples and real data collected from a surveillance radar.
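For readers unfamiliar with the GM-PHD machinery the abstract builds on, one predict/update step of a standard GM-PHD filter can be sketched in a deliberately minimal 1-D linear-Gaussian setting. This is a textbook-style sketch, not the thesis's filter: the detection probability, clutter intensity, and motion/measurement models are made-up values, and birth components and pruning are omitted.

```python
import numpy as np

P_D, KAPPA = 0.9, 0.1   # detection probability, uniform clutter intensity
F, Q = 1.0, 0.5         # random-walk motion model
H, R = 1.0, 1.0         # direct position measurement

def predict(components):
    """Propagate each Gaussian component (weight, mean, variance)
    through the linear motion model."""
    return [(w, F * m, F * P * F + Q) for (w, m, P) in components]

def update(components, measurements):
    """PHD update: keep missed-detection copies of every component, then
    add one Kalman-corrected copy per measurement, with weights normalized
    against the clutter intensity."""
    updated = [(w * (1 - P_D), m, P) for (w, m, P) in components]
    for z in measurements:
        corrected = []
        for (w, m, P) in components:
            S = H * P * H + R                   # innovation covariance
            K = P * H / S                       # Kalman gain
            lik = np.exp(-0.5 * (z - H * m) ** 2 / S) / np.sqrt(2 * np.pi * S)
            corrected.append((w * P_D * lik,
                              m + K * (z - H * m),
                              (1 - K * H) * P))
        norm = KAPPA + sum(w for (w, _, _) in corrected)
        updated += [(w / norm, m, P) for (w, m, P) in corrected]
    return updated
```

The sum of the component weights after an update approximates the expected number of targets, which is how a PHD filter estimates target count without explicit data association.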
