About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
471

Résolution de l'hétérogénéité des intergiciels d'un environnement ubiquitaire / Resolving Middleware Heterogeneity in a Ubiquitous Environment

Bromberg, David 01 December 2006 (has links) (PDF)
A growing number of computing devices can now interconnect through wireless network technologies, with or without a fixed infrastructure (WLAN, Bluetooth, GSM, GPRS, UMTS). One of the major challenges of pervasive computing is to let these devices communicate with one another dynamically, spontaneously, and transparently, regardless of their hardware and software heterogeneity. Middleware was introduced for this purpose; however, given its diversity, a new, higher-level source of heterogeneity appears, in particular at the level of interaction protocols. Currently, two methods can resolve these incompatibilities: protocol substitution and protocol translation. The first requires designing new middleware able to adapt to its execution environment in order to resolve the heterogeneity of existing middleware dynamically. Its advantage is that it provides dynamic interoperability; its drawback is that it is not transparent: it creates a new source of heterogeneity among these new middleware platforms and requires developing applications specific to them. The second method is transparent: it requires neither new middleware nor new applications. However, unlike the first, it remains static and planned. In the context of pervasive computing, the two methods are complementary. Our contribution consists in combining the two approaches. Using process calculi, we first propose a formal specification of our solution, which resolves middleware heterogeneity regardless of the specifics of their characteristics, protocols, and technologies. We then present two systems, based on this specification, designed to resolve (i) incompatibilities between service discovery protocols and (ii) incompatibilities between communication protocols. Their distinctive feature is that they provide dynamic and transparent interoperability without requiring any modification of existing applications or middleware. Our experiments show that the overhead of this solution for resolving protocol incompatibilities is reasonable.
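To make the contrast between protocol substitution and protocol translation concrete, the sketch below shows one hypothetical shape a translation bridge can take: messages from one service-discovery protocol are parsed into a protocol-neutral event and re-emitted in another protocol, so neither side has to change. The types and method names are assumptions for illustration; this is not the formal, process-calculus-based specification developed in the thesis.

```java
// Hypothetical sketch: translating between two service-discovery protocols
// through a protocol-neutral intermediate event, so existing middleware on
// either side needs no modification. All names are illustrative only.

// Protocol-neutral representation of a discovery event.
record ServiceEvent(String serviceType, String location) {}

// One parser/serializer pair per concrete protocol.
interface ProtocolAdapter {
    ServiceEvent parse(byte[] rawMessage);      // native message -> neutral event
    byte[] serialize(ServiceEvent event);       // neutral event  -> native message
}

// The bridge receives messages in one protocol and re-emits them in another.
final class DiscoveryBridge {
    private final ProtocolAdapter from;
    private final ProtocolAdapter to;

    DiscoveryBridge(ProtocolAdapter from, ProtocolAdapter to) {
        this.from = from;
        this.to = to;
    }

    byte[] translate(byte[] rawMessage) {
        ServiceEvent neutral = from.parse(rawMessage);
        return to.serialize(neutral);
    }
}
```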
472

Efficient Architectures for Retrieving Mixed Data with REST Architecture Style and HTML5 Support

Maddipudi, Koushik 01 May 2013 (has links)
Software as a service is an emerging but important aspect of the web, and web services play a vital role in providing it. Web services are commonly provided in one of two architectural styles: "REpresentational State Transfer" (REST) or the "Simple Object Access Protocol" (SOAP). Originally most web content was text and small images, but more recent services involve complex data structures including text, images, audio, and video. Optimizing the delivery of these structures is a complex task, involving both theoretical and practical aspects. In this thesis work, I consider two architectures developed in the REST architectural style and test them on mixes of data types (plain text, image, audio) retrieved from a file system or database. The payload, which carries the actual content of a data transmission, can be encoded in either Extensible Markup Language (XML) or JavaScript Object Notation (JSON); both notations are widely used. The two architectures are referred to as Scenario 1 and Scenario 2. Scenario 1 proposes two different cases for storing, retrieving and presenting the data via a REST web service, and investigates the best way to provide different data types (image, audio) via such a service; payload sizes for JSON and XML are compared. Scenario 2 proposes an enhanced and optimized architecture derived from the strengths of the two cases in Scenario 1; the proposed architecture is best suited for retrieving and serving non-homogeneous data as a service in a homogeneous environment. This thesis is composed of theoretical and practical parts. The theoretical part covers the design and principles of the REST architecture. The practical part is a web service provider and consumer model developed in Java using the Spring MVC framework and Apache CXF, which provides an implementation of JAX-RS, the Java API for RESTful services. A glossary of acronyms used in this thesis appears in the appendix on page 101.
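For orientation only, the following is a minimal JAX-RS sketch of the kind of resource such scenarios revolve around, serving the same mixed-data record as JSON or XML depending on the client's Accept header. It is not the author's implementation; the MediaItem class and its fields are assumed names.

```java
// Minimal JAX-RS sketch (javax.ws.rs.*): one resource serving a mixed payload
// (text metadata plus a link to binary content) as JSON or XML, selected by
// the client's Accept header. Class and field names are illustrative.
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement
class MediaItem {
    public String title;
    public String mimeType;
    public String contentUrl;   // binary data (image/audio) served separately
}

@Path("/media")
public class MediaResource {

    @GET
    @Path("/{id}")
    @Produces({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
    public MediaItem getItem(@PathParam("id") String id) {
        MediaItem item = new MediaItem();
        item.title = "Sample " + id;
        item.mimeType = "audio/mpeg";
        item.contentUrl = "/media/" + id + "/raw";
        return item;
    }
}
```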
473

Automating Component-Based System Assembly

Subramanian, Gayatri 23 May 2006 (has links)
Owing to advancements in component re-use technology, component-based software development (CBSD) has come a long way in developing complex commercial software systems while reducing software development time and cost. However, assembling distributed, resource-constrained, and safety-critical systems using current assembly techniques is a challenge. Within complex systems there are often numerous ways to assemble the components, and unless the software architecture clearly defines how the components should be composed, determining a correct assembly that satisfies the system assembly constraints is difficult. Component technologies like CORBA and .NET do a very good job of integrating components, but they do not automate component assembly; it is the system developer's responsibility to ensure that the components are assembled correctly. In this thesis, we first define a component-based system assembly (CBSA) technique called the "Constrained Component Assembly Technique" (CCAT), which is useful when the system has complex assembly constraints and the system architecture specifies component composition as assembly constraints. The technique poses the question: does there exist a way of assembling the components that satisfies all the connection, performance, reliability, and safety constraints of the system, while optimizing the objective constraint? To implement CCAT, we present a powerful framework called "CoBaSA". The CoBaSA framework includes an expressive language for declaratively describing component functional and extra-functional properties, component interfaces, and system-level and component-level connection, performance, reliability, safety, and optimization constraints. To perform CBSA, we first write a program (in the CoBaSA language) describing the CBSA specifications and constraints; an interpreter then translates the CBSA program into a satisfiability and optimization problem. Solving the generated problem is equivalent to answering the question posed by CCAT: if a satisfiable solution is found, we deduce that the system can be assembled without violating any constraints. Since CCAT and CoBaSA provide a mechanism for assembling systems that have complex assembly constraints, they can be utilized in industries such as avionics. We demonstrate the merits of CoBaSA by assembling an actual avionic system that could be used on board a Boeing aircraft. The empirical evaluation shows that our approach is promising and can scale to handle complex industrial problems.
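The reduction at the heart of CCAT, posing assembly as a constraint-satisfaction and optimization question, can be conveyed with a toy brute-force analogy such as the sketch below. The real CoBaSA interpreter compiles a declarative program into a solver problem; the interfaces here are assumptions made only to illustrate the idea.

```java
// Toy analogy of constrained component assembly: enumerate candidate
// assemblies, keep those satisfying all constraints, and pick the one with
// the best objective value. CoBaSA instead compiles to a SAT/optimization
// solver; names here are illustrative.
import java.util.Comparator;
import java.util.List;
import java.util.Optional;
import java.util.function.Predicate;
import java.util.function.ToDoubleFunction;

final class AssemblySearch<A> {
    Optional<A> solve(List<A> candidates,
                      List<Predicate<A>> constraints,     // connection, safety, ...
                      ToDoubleFunction<A> objective) {    // e.g. total latency to minimize
        return candidates.stream()
                .filter(a -> constraints.stream().allMatch(c -> c.test(a)))
                .min(Comparator.comparingDouble(objective));
    }
}
```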
474

Distributed Game Environment : A Software Product Line for Education and Research

Quan, Nguyen January 2013 (has links)
A software product line is a set of software-intensive systems that share a common, managed set of features satisfying the specific needs of a particular market segment or demand. Software product lines capitalize on commonality and manage variation to reduce the time, effort, cost and complexity of creating and maintaining products in a product line. By reusing core assets, a software product line can therefore address problems such as cost, time-to-market, quality, the complexity of developing and maintaining variants, and the need to respond quickly to market demands. The development of a software product line differs from conventional software development, and in education and research there is a lack of a purposefully designed and developed software product line (SPL) that can be used for such purposes. In this thesis we have developed a software product line for a distributed environment of turn-based, two-player board games that can be used for educational and research purposes. The software product line supports dynamic runtime updates, including games, chat, and security features, via the OSGi framework. Furthermore, it supports remote gameplay over a local area network and dynamic runtime activity recovery. We delivered a product configuration tool that derives and configures products from the core assets based on feature selection. We have also modeled the software product line's features and documented its requirements, architecture and user guides. Furthermore, we performed functional and integration tests of the software product line to ensure that the requirements are met according to the requirements specification prescribed by the stakeholders.
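A simplified illustration of the feature-selection step described above follows: given the features chosen for a product, a configurator resolves the set of modules (for example OSGi bundles) that make up that product. The feature and bundle names are hypothetical, not the thesis's actual core assets.

```java
// Hypothetical sketch of feature-based product derivation: given a feature
// selection, decide which modules (e.g. OSGi bundles) make up the product.
// Feature and bundle names are illustrative only.
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

final class ProductConfigurator {
    // Mapping from optional features to the bundles that realize them.
    private static final Map<String, List<String>> FEATURE_TO_BUNDLES = Map.of(
            "chess",    List.of("game.core", "game.chess"),
            "checkers", List.of("game.core", "game.checkers"),
            "chat",     List.of("comm.chat"),
            "security", List.of("auth.login", "auth.crypto"));

    List<String> derive(Set<String> selectedFeatures) {
        return selectedFeatures.stream()
                .flatMap(f -> FEATURE_TO_BUNDLES.getOrDefault(f, List.of()).stream())
                .distinct()
                .collect(Collectors.toList());
    }
}
```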
475

Change-effects analysis for effective testing and validation of evolving software

Santelices, Raul A. 17 May 2012 (has links)
The constant modification of software during its life cycle poses many challenges for developers and testers because changes might not behave as expected or may introduce erroneous side effects. For those reasons, it is critically important to analyze, test, and validate software every time it changes. The most common method for validating modified software is regression testing, which identifies differences in the behavior of software caused by changes and determines the correctness of those differences. Most research to date has focused on the efficiency of regression testing by selecting and prioritizing existing test cases affected by changes. However, little attention has been given to determining whether the test suite adequately tests the effects of changes (i.e., behavior differences in the modified software) and which of those effects are missed during testing. In practice, it is necessary to augment the test suite to exercise the untested effects. The thesis of this research is that the effects of changes on software behavior can be computed with enough precision to help testers analyze the consequences of changes and augment test suites effectively. To demonstrate this thesis, this dissertation uses novel insights to develop a fundamental understanding of how changes affect the behavior of software. Based on these foundations, the dissertation defines and studies new techniques that detect these effects in cost-effective ways. These techniques support test-suite augmentation by (1) identifying the effects of individual changes that should be tested, (2) identifying the combined effects of multiple changes that occur during testing, and (3) optimizing the computation of these effects.
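The intuition behind computing change effects can be sketched as a forward traversal over a dependence graph, as below: starting from the changed statements, every statement reachable through forward dependences may behave differently. The dissertation's techniques are considerably more precise; this sketch only captures the underlying propagation idea, with assumed string identifiers for program points.

```java
// Simplified illustration of change-effects propagation: starting from the
// changed program points, follow forward dependences to approximate the set
// of statements whose behavior may differ.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

final class ChangeEffects {
    Set<String> affected(Map<String, List<String>> forwardDeps, Set<String> changed) {
        Set<String> result = new HashSet<>(changed);
        Deque<String> work = new ArrayDeque<>(changed);
        while (!work.isEmpty()) {
            String node = work.pop();
            for (String succ : forwardDeps.getOrDefault(node, List.of())) {
                if (result.add(succ)) {       // newly reached: keep propagating
                    work.push(succ);
                }
            }
        }
        return result;
    }
}
```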
476

Autenticação biométrica de usuários em sistemas de E-learning baseada em reconhecimento de faces a partir de vídeo / Biometric authentication of users in e-learning systems based on face recognition from video

Penteado, Bruno Elias. January 2009 (has links)
Advisor: Aparecido Nilceu Elias / Committee: Agma Juci Machado Traina / Committee: Wilson Massashiro Yonezawa / Abstract: In recent years there has been exponential growth in the offering of Internet-based distance courses, owing to their advantages and features (lower costs for distributing and updating content, management of large groups of students, asynchronous and geographically independent learning, etc.) as well as to their regulation and governmental support. However, the lack of effective mechanisms to ensure student authentication in this kind of environment has been pointed out as a serious deficiency, both at system logon and during user participation in course activities. Currently, password-based authentication still prevails, but studies have been conducted on possible applications of biometrics for Web authentication. With the popularization and resulting decreasing cost of biometric-enabled hardware (such as webcams, microphones and embedded fingerprint readers), biometrics is being reconsidered as a secure and viable form of remote authentication of individuals in Web applications. Based on this, this work proposes a distributed architecture for an e-learning environment, exploring the properties of a Web system for biometric authentication both at system logon and continuously, during course attendance. To analyze this architecture, the performance of techniques for face recognition from video captured online by a webcam in an Internet environment is evaluated, simulating the natural interaction of an individual with an e-learning system. For this purpose, a dedicated video database was created, comprising 43 individuals browsing and interacting with Web pages. The results show that the methods analyzed, which are consolidated in the literature, can be successfully applied in this kind of application, with recognition rates of up to 97% under ideal conditions, low execution times, and a small amount of information transmitted between client and server, with template sizes of about 30 KB. / Mestre (Master's)
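A rough sketch of the continuous-authentication loop described above is given below. It periodically matches a webcam frame against the enrolled face template and ends the session after repeated mismatches; FrameSource and FaceMatcher are assumed interfaces standing in for the actual capture and recognition components, and the threshold and interval are illustrative values.

```java
// Hypothetical sketch of continuous authentication: periodically grab a frame
// from the webcam, compare it against the enrolled template, and end the
// session after repeated mismatches. FrameSource and FaceMatcher are assumed
// interfaces, not an actual library API.
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

interface FrameSource { byte[] nextFrame(); }
interface FaceMatcher { double similarity(byte[] frame, byte[] enrolledTemplate); }

final class ContinuousAuthenticator {
    private static final double THRESHOLD = 0.8;   // assumed decision threshold
    private static final int MAX_FAILURES = 3;
    private int failures = 0;

    void monitor(FrameSource camera, FaceMatcher matcher, byte[] template, Runnable logout) {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(() -> {
            double score = matcher.similarity(camera.nextFrame(), template);
            failures = (score >= THRESHOLD) ? 0 : failures + 1;
            if (failures >= MAX_FAILURES) {        // user no longer recognized
                logout.run();
                timer.shutdown();
            }
        }, 0, 10, TimeUnit.SECONDS);               // check every 10 seconds (assumed)
    }
}
```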
477

Uma estratégia dirigida a modelos e baseada em linguagem de descrição arquitetural para linhas de produtos de software / A model-driven strategy based on an architecture description language for software product lines

Medeiros, Ana Luisa Ferreira de 30 July 2012 (has links)
Model-driven strategies have been used to facilitate product customization in the software product line (SPL) context and to generate the source code of the derived products through variability management. Most of these strategies use a UML (Unified Modeling Language)-based model specification. Despite its wide adoption, UML-based specification has some limitations: it is essentially graphical, it falls short of precisely describing the semantics of the system architecture, and it produces large models, which hampers the visualization and comprehension of the system elements. In contrast, architecture description languages (ADLs) provide graphical and textual support for the structural representation of architectural elements, their constraints and their interactions. This thesis introduces ArchSPL-MDD, a model-driven strategy in which models are specified and configured using the LightPL-ACME ADL. The strategy is associated with a generic process whose systematic activities enable customized source code to be generated automatically from the product model. The ArchSPL-MDD strategy integrates aspect-oriented software development (AOSD), model-driven development (MDD) and SPL, enabling the explicit modeling and modularization of variabilities and crosscutting concerns. The process is instantiated by the ArchSPL-MDD tool, which supports the specification of domain models (the focus of the development) in LightPL-ACME. ArchSPL-MDD uses the Ginga digital TV middleware as a case study. In order to evaluate the efficiency, applicability, expressiveness, and complexity of the ArchSPL-MDD strategy, a controlled experiment was carried out to evaluate and compare the ArchSPL-MDD tool with the GingaForAll tool, which instantiates the process that is part of the UML-based GingaForAll strategy. Both tools were used to configure products of the Ginga SPL and to generate the product source code.
478

A Model Driven Method to Design and Analyze Secure System-of-Systems Architectures : Application to Predict Cascading Attacks in Smart Buildings. / Une Méthode Dirigée par les Modèles pour la Conception et l'Analyse des Architectures Sécurisées des Systèmes-de-Systèmes : Application à la Prédiction des Attaques en Cascade dans les Bâtiments Intelligents.

El Hachem, Jamal 07 December 2017 (has links)
Systems-of-Systems (SoS) is becoming one of the major paradigms for engineering next-generation solutions such as smart cities, smart buildings, health care, emergency response and defense. Therefore, there is a growing interest in SoS, their architecture and especially their security. However, SoS differentiating characteristics, such as emergent behavior and the managerial and operational independence of their constituents, may introduce specific issues that make their security modeling, simulation and analysis a critical challenge. In this thesis we investigate how software engineering approaches can be extended to model and analyze secure SoS solutions for discovering high-impact attacks (cascading attacks) at the architecture stage. In order to achieve our objective, we propose a Model-Driven Engineering method, Systems-of-Systems Security (SoSSec), that comprises: (1) a modeling description language (SoSSecML) for secure SoS modeling and an extension of Multi-Agent Systems (MAS) for secure SoS architecture analysis; (2) the corresponding tools: a graphical editor, a code generator, an extension of the Java Agent Development (JADE) MAS simulation framework, and a custom logging tool; (3) a utilization process to guide the use of the SoSSec method. To illustrate our approach we conducted a case study on a real-life smart building SoS, the Adelaide University Health and Medical School (AHMS).
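Since SoSSec analyses run on an extension of the JADE framework, a minimal JADE-style agent is sketched below to show how a compromised constituent might propagate an attack to a dependent one, the kind of cascading behavior the simulation is meant to surface. The agent name and message content are assumptions; this is not the code generated by the SoSSec tools.

```java
// Minimal JADE sketch of a constituent-system agent that, when notified that
// a peer is compromised, forwards the attack to a constituent it depends on.
// Agent names and message content are illustrative only.
import jade.core.AID;
import jade.core.Agent;
import jade.core.behaviours.CyclicBehaviour;
import jade.lang.acl.ACLMessage;

public class ConstituentAgent extends Agent {
    @Override
    protected void setup() {
        addBehaviour(new CyclicBehaviour(this) {
            @Override
            public void action() {
                ACLMessage msg = myAgent.receive();
                if (msg == null) { block(); return; }   // wait for the next message
                if ("COMPROMISED".equals(msg.getContent())) {
                    // Vulnerable dependency: propagate the attack downstream.
                    ACLMessage alert = new ACLMessage(ACLMessage.INFORM);
                    alert.setContent("COMPROMISED");
                    alert.addReceiver(new AID("hvac-controller", AID.ISLOCALNAME));
                    myAgent.send(alert);
                }
            }
        });
    }
}
```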
479

Modèles, outils et plate-forme d’exécution pour les applications à service dynamiques / Models, tools and execution platform for dynamic service-oriented applications

Moreno-Garcia, Diana 22 February 2013 (has links)
The growth of the Internet and the evolution of communicating devices have allowed the integration of the computer world and the real world, paving the way for new types of applications such as pervasive and ubiquitous ones. These applications must run in heterogeneous, distributed and open environments that evolve constantly. In such environments, the availability of services and devices, and the preferences and location of users, may change at any time during the execution of applications. The variability of the execution context makes the execution of an application dependent on the available services and devices. Building applications capable of evolving dynamically with their execution context is a challenging task: the architecture of such an application cannot be fully known nor statically specified at design, development or deployment time. It is then necessary to redefine the concept of a dynamic application so as to cover the design, development, execution and management phases, and thus to enable the dynamic construction and evolution of applications. In this dissertation, we propose a model-driven approach for the design, development and execution of dynamic applications. We define a component-service model that considers dynamic properties within a component model. This model allows defining an application by its intention (its goal) through a set of composition properties, constraints and preferences. An application is thus specified in an abstract way, which allows controlling its gradual composition during development and execution. Our approach aims to blur the boundary between development time and runtime: the same model and the same composition mechanisms are used from design to runtime. At runtime, the composition process also considers the services available in the execution platform, in order to compose applications opportunistically, and the variability of the execution context, in order to adapt compositions dynamically. We implemented our approach in a prototype named COMPASS, which relies on the CADSE platform for building software design and development environments, and on the APAM platform for building an execution environment for dynamic service-based applications.
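To illustrate what opportunistic composition looks like at the platform level, the sketch below uses the plain OSGi ServiceTracker API to bind a required service when a provider appears and release it when the provider disappears; APAM layers a richer, intention-based model on top of this kind of mechanism. The GreetingService interface is an assumed example, not part of the thesis.

```java
// Sketch of opportunistic binding on plain OSGi: the client declares the
// service it needs and binds/unbinds providers as they come and go at runtime.
// GreetingService is an assumed interface used only for illustration.
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.util.tracker.ServiceTracker;
import org.osgi.util.tracker.ServiceTrackerCustomizer;

interface GreetingService { String greet(String name); }

final class OpportunisticClient
        implements ServiceTrackerCustomizer<GreetingService, GreetingService> {
    private final BundleContext context;
    private volatile GreetingService bound;    // current provider, may change anytime

    OpportunisticClient(BundleContext context) {
        this.context = context;
        new ServiceTracker<>(context, GreetingService.class, this).open();
    }

    String greet(String name) {
        GreetingService current = bound;
        return (current != null) ? current.greet(name) : "offline";  // degrade gracefully
    }

    @Override
    public GreetingService addingService(ServiceReference<GreetingService> ref) {
        bound = context.getService(ref);       // a provider appeared: bind it
        return bound;
    }

    @Override
    public void modifiedService(ServiceReference<GreetingService> ref, GreetingService svc) { }

    @Override
    public void removedService(ServiceReference<GreetingService> ref, GreetingService svc) {
        bound = null;                          // provider left: composition must adapt
        context.ungetService(ref);
    }
}
```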
480

API de Segurança e Armazenamento de uma Arquitetura Multibiométrica para Controle de Acesso com Autenticação Contínua. / Security and Persistence APIs of a Multi-biometric Access Control Architecture for Continuous Authentication.

Oliveira, Adriana Esmeraldo de 16 September 2011 (has links)
Funding: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / A biometric system that employs a single biometric characteristic is constrained. This limitation can be reduced by fusing the information presented by multiple sources; a system that consolidates the evidence presented by multiple biometric sources is known as a multibiometric system. In this context, this work proposes the security and persistence APIs of a multibiometric architecture capable of using one or more biometric modalities. In access control applications, a user might be forced to authenticate in order to grant unauthorized access to a criminal. As an alternative to this problem, the API uses a continuous authentication process, which verifies whether the user identified at the start of the application session is still entitled to remain in the system, without human intervention or interruptions of the process. Much of the literature on biometric system design has focused on system error rates and scaling equations. However, it is also important to have a solid foundation for future progress when the processes and system architecture of a new biometric application are being designed. The designed architecture made it possible to create a well-defined API for multibiometric systems, which may help developers standardize, among other things, their data structures, in order to enable and facilitate template fusion and interoperability. The developed security and persistence APIs therefore support a multibiometric access control architecture for continuous authentication that is extensible, i.e., able to easily incorporate new biometric characteristics and processes, while still allowing the use of a biometric template security mechanism. The APIs were designed and implemented, and were demonstrated through a prototype application used to carry out the test experiments.
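As a small, hedged illustration of consolidating evidence from multiple biometric sources, the sketch below performs weighted score-level fusion over normalized match scores; the weights and decision threshold are assumed values, and the thesis's APIs cover much more (persistence, template security, continuous authentication).

```java
// Hedged sketch of score-level fusion for a multibiometric decision: each
// modality produces a normalized match score in [0,1]; a weighted average is
// compared against a threshold. Weights and threshold are assumed values.
import java.util.List;

record ModalityScore(String modality, double score, double weight) {}

final class ScoreFusion {
    private static final double DECISION_THRESHOLD = 0.7;   // assumed

    static boolean accept(List<ModalityScore> scores) {
        double weighted = scores.stream()
                .mapToDouble(s -> s.score() * s.weight())
                .sum();
        double totalWeight = scores.stream().mapToDouble(ModalityScore::weight).sum();
        return totalWeight > 0 && (weighted / totalWeight) >= DECISION_THRESHOLD;
    }
}
```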
