  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
491

Konzept und Umsetzung einer modularen, portierbaren Middleware für den automatisierten Test eingebetteter Systeme / Concept and implementation of a modular, portable middleware for the automated testing of embedded systems

Trenkel, Kristian 30 November 2015 (has links)
This PhD thesis discusses the feasibility of a modular, portable middleware for the automated execution and documentation of software tests, with continuous traceability from the requirements specification through to the documented test results. The properties and shortcomings of existing test automation solutions are analyzed and presented, and novel approaches are developed to address them. The main contributions of this work are the modular structure of the middleware, which allows easy porting to new test systems, in combination with a newly designed storage format for the test results. Test cases can be edited both graphically and textually. Beyond typical applications such as Hardware-in-the-Loop (HIL) tests, further fields from module tests to end-of-line tests are also covered. The storage format holds all relevant information about the tests in one file, is flexibly extensible, and supports the generation of test reports in different target formats. A further central point is the automated exchange of information and test results with different requirements management systems, as well as seamless integration into existing version management systems. Based on the theoretical work, a modular, portable middleware was implemented in the form of the modular test automation framework modTF. The experience gathered during implementation and the results of practical trials demonstrate the advantages of the framework.
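The abstract's two central ideas, an extensible single-file storage format for test results and report generation in multiple target formats from the same data, can be illustrated with a minimal sketch. All names, fields, and formats below are illustrative assumptions, not the thesis's actual modTF design:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class TestResult:
    test_id: str
    requirement_ids: list                         # traceability back to requirements
    verdict: str                                  # "pass" / "fail"
    details: dict = field(default_factory=dict)   # extensible: any extra data

def to_storage(results):
    """Serialize all results into one flexible, extensible JSON document."""
    return json.dumps([asdict(r) for r in results], indent=2)

def render_report(results, fmt="text"):
    """Generate a report in one of several target formats from the same data."""
    if fmt == "text":
        return "\n".join(f"{r.test_id} [{','.join(r.requirement_ids)}]: {r.verdict}"
                         for r in results)
    if fmt == "html":
        rows = "".join(f"<tr><td>{r.test_id}</td><td>{r.verdict}</td></tr>"
                       for r in results)
        return f"<table>{rows}</table>"
    raise ValueError(f"unknown format: {fmt}")

results = [TestResult("TC-001", ["REQ-12"], "pass"),
           TestResult("TC-002", ["REQ-13"], "fail", {"log": "timeout"})]
print(render_report(results, "text"))
```

Because the storage layer and the report renderers share one data model, a new target format only requires a new renderer, not a change to the stored results.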
492

A technology reference model for client/server software development

Nienaber, R. C. (Rita Charlotte) 06 1900 (has links)
In today's highly competitive global economy, information resources representing enterprise-wide information are essential to the survival of an organization. The development of and increase in the use of personal computers and data communication networks are supporting or, in many cases, replacing the traditional computer mainstay of corporations. The client/server model incorporates mainframe programming with desktop applications on personal computers. The aim of the research is to compile a technology model for the development of client/server software. A comprehensive overview of the individual components of the client/server system is given. The different methodologies, tools and techniques that can be used are reviewed, as well as client/server-specific design issues. The research is intended to create a road map in the form of a Technology Reference Model for Client/Server Software Development. / Computing / M. Sc. (Information Systems)
493

Development of distributed control system for SSL soccer robots

Holtzhausen, David Schalk 03 1900 (has links)
Thesis (MScEng)--Stellenbosch University, 2013. / This thesis describes the development of a distributed control system for SSL soccer robots. The project continues work done to develop a robotics research platform at Stellenbosch University. The wireless communication system is implemented using Player middleware, which enables high-level programming of the robot drivers and communication clients and results in an easily modifiable system. The system is developed to be used as either a centralised or a decentralised control system. The software of the robots' motor controller unit is updated to ensure optimal movement, since slippage of the wheels restricts the robots' movement capabilities. Trajectory tracking software is developed to ensure that each robot follows the desired trajectory while operating within its physical limits. The distributed control architecture reduces the robots' dependency on the wireless network and the off-field computer. The robots are given some autonomy by integrating navigation and control on the robot itself. Kalman filters are designed to estimate each robot's translational and rotational velocities by fusing vision data from an overhead vision system with inertial measurements from an on-board IMU. This ensures reliable and accurate position, orientation and velocity information on the robot. Test results show an improvement in controller performance as a result of the proposed system.
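The fusion scheme described above, predicting with on-board inertial measurements and correcting with overhead vision fixes, can be sketched as a one-dimensional Kalman filter. This is a generic textbook filter under assumed noise parameters, not the thesis's actual multi-axis implementation:

```python
# Minimal 1-D Kalman filter: predict with IMU acceleration, correct with
# overhead-vision position fixes. All numeric parameters are assumptions.

def predict(x, v, P, a, dt, q):
    """Propagate state [position x, velocity v] with measured acceleration a."""
    x = x + v * dt + 0.5 * a * dt * dt
    v = v + a * dt
    # P' = F P F^T + Q with F = [[1, dt], [0, 1]], Q = diag(q, q)
    p00, p01, p10, p11 = P[0][0], P[0][1], P[1][0], P[1][1]
    P = [[p00 + dt * (p10 + p01) + dt * dt * p11 + q, p01 + dt * p11],
         [p10 + dt * p11,                             p11 + q]]
    return x, v, P

def correct(x, v, P, z, r):
    """Fuse a vision position measurement z with variance r (H = [1, 0])."""
    s = P[0][0] + r                     # innovation covariance
    k0, k1 = P[0][0] / s, P[1][0] / s   # Kalman gain
    y = z - x                           # innovation
    x, v = x + k0 * y, v + k1 * y
    P = [[(1 - k0) * P[0][0],      (1 - k0) * P[0][1]],
         [P[1][0] - k1 * P[0][0],  P[1][1] - k1 * P[0][1]]]
    return x, v, P

a_true, dt = 0.2, 0.02                          # IMU sampled at 50 Hz
x, v, P = 0.0, 0.0, [[1.0, 0.0], [0.0, 1.0]]
for step in range(1, 51):
    t = step * dt
    x, v, P = predict(x, v, P, a_true, dt, 1e-4)
    if step % 5 == 0:                           # vision fix at 10 Hz
        z = 0.5 * a_true * t * t                # noiseless vision for the demo
        x, v, P = correct(x, v, P, z, 0.01)
print(round(x, 3), round(v, 3))
```

The design point the abstract makes is that the high-rate IMU carries the estimate between the slower, more accurate vision fixes, which keeps velocity estimates usable even when the overhead camera frames are delayed or dropped.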
494

資訊科技在共同行銷應用之研究─以銀行與保險業務為例 / A study on the application of information technology in cross-selling: the case of banking and insurance services

黃惠卿, Huang, Hui Ching Unknown Date (has links)
In recent years, fourteen financial holding companies have been founded in Taiwan's financial sector in response to the liberalization and globalization of world finance. Drawing on the three major finance realms of banking, insurance, and securities, and pooling their resources and capital, these groups have built diversified business territories, developed innovative bundled financial products, and used cross-selling to offer customers one-stop shopping while reducing operating costs. Motivated by the computerized integration behind cross-selling, this research pursues two goals: 1. to investigate how financial holding companies can employ group-wide information resources to reduce redundant investments by subsidiary companies and achieve resource sharing; 2. to investigate an integrated information-application architecture for cross-selling within a financial group. Using the Zachman Framework as the analytic tool, a case study of a domestic financial group was conducted. The core of the case is policy loans between the group's banking and insurance companies, with securities-company resources added in order to examine information integration across the group; an Enterprise Application Integration (EAI) platform is analyzed for building real-time lending, repayment, and wealth-management services. The results show four main benefits of information technology integration for cross-selling: 1. integrating group customer data to establish customer relationship management; 2. developing bundled financial products to satisfy customer needs; 3. integrating channels and automating the sales process; 4. establishing a single group portal to reduce information costs effectively.
495

Adaptive Middleware for Self-Configurable Embedded Real-Time Systems : Experiences from the DySCAS Project and Remaining Challenges

Persson, Magnus January 2009 (has links)
<p>Development of software for embedded real-time systems poses several challenges. Hard and soft timing constraints, and usually considerable resource limitations, place important constraints on development. The traditional way of coping with these issues is to produce a fully static design, i.e. one that is completely fixed at design time. Current trends in embedded systems, including the emerging openness of these types of systems, present new challenges for their designers, e.g. integration of new software at runtime, software upgrades, or run-time adaptation of application behavior to achieve better performance combined with more efficient resource usage. One way to reach these goals is to build self-configurable systems, i.e. systems that can resolve such issues without human intervention; such mechanisms may be used to promote increased system openness. This thesis covers some of the challenges involved in that development. An overview of the current situation is given, with an extensive review of concepts applicable to the problem, including adaptivity mechanisms (including QoS and load balancing), middleware, and relevant design approaches (component-based, model-based and architectural design). A middleware is a software layer used in distributed systems to abstract away distribution, and possibly other aspects, from the application developers. A major goal of the DySCAS project was the development of middleware for self-configurable systems in the automotive sector; such development is complicated by the special requirements that apply to these platforms. Work on the implementation of an adaptive middleware, DyLite, providing self-configurability to small-scale microcontrollers, is described and covered in detail. DyLite is a partial implementation of the concepts developed in DySCAS. Another area given significant focus is formal modeling of QoS and resource management. Currently, applications in these types of systems are not given a fully formal definition, at least not one that also covers real-time aspects. Formal modeling would extend the possibilities for verification not only of system functionality, but also of resource usage, timing and other extra-functional requirements; this thesis includes a proposal of a formalism to be used for these purposes. Several challenges remain in providing methodology and tools usable in production development. Several key issues in this area are described, e.g. version/configuration management, access control, and integration between different tools, together with proposals for future work in the other areas covered by the thesis.</p> / DySCAS
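The run-time adaptation the abstract describes, a system reconfiguring itself as resource availability changes, can be sketched as a simple quality-level selection loop. The levels, thresholds, and action names are illustrative assumptions, not part of DySCAS or DyLite:

```python
# Sketch of a self-configuration loop: monitor resource load and switch
# application quality levels at runtime, with no human intervention.

QUALITY_LEVELS = [
    {"name": "high",    "max_load": 0.60},  # full functionality
    {"name": "reduced", "max_load": 0.85},  # degraded but acceptable QoS
    {"name": "minimal", "max_load": 1.00},  # bare functionality
]

def select_level(cpu_load):
    """Pick the richest quality level whose load budget fits the current load."""
    for level in QUALITY_LEVELS:
        if cpu_load <= level["max_load"]:
            return level["name"]
    return "minimal"

def adaptation_loop(load_samples):
    """Record a reconfiguration only when the selected level actually changes."""
    current, transitions = None, []
    for load in load_samples:
        target = select_level(load)
        if target != current:
            transitions.append((load, target))
            current = target
    return transitions

print(adaptation_loop([0.3, 0.5, 0.7, 0.9, 0.4]))
```

Reconfiguring only on level changes, rather than on every sample, is one way such systems avoid thrashing between configurations under fluctuating load.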
496

Modelagem de um sistema de informação para rastreabilidade na indústria vinícola baseado em uma arquitetura orientada a serviços. / Modeling of an information system for wine traceability based on a service oriented architecture.

Gogliano Sobrinho, Osvaldo 25 April 2008 (has links)
The purpose of this project is the modeling of an information system for maintaining traceability data in the wine industry, according to the principles of a service-oriented architecture. The importance of the issue stems from the fact that, since 2005, maintaining traceability records has been mandatory for all food and feed producers intending to export their products to any European Union country. Beyond this legal requirement, final consumers, Brazilians included, have increasingly demanded information about the food products they consume. The modeled software pursues a collective solution, intended for use by producer consortiums or associations, so that costs can be diluted and benefits shared. Starting from an extensive bibliographic review, Brazilian wine producers in Bento Gonçalves, RS, Brazil, were visited, and information technology topics related to the theme were researched. The software was modeled with the Unified Modeling Language (UML), based on a characterization of the wine production process used by the author, and a functional prototype was built. Its use showed that the adopted model can fulfill the needs of the wine industry, whether producers are considered individually or collectively. Future development of this work could turn the prototype into a commercial product. Finally, the same modeling structure could be applied to other domains.
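At its core, the traceability service described above answers one query: given a product lot, return its recorded production history. A minimal in-memory sketch of that query follows; the lot identifiers, step names, and fields are hypothetical, not the thesis's actual data model:

```python
# Toy traceability log: each event records a production step for a wine lot.
events = [
    {"lot": "L42", "step": "harvest",  "detail": "vineyard A"},
    {"lot": "L42", "step": "crushing", "detail": "press 2"},
    {"lot": "L42", "step": "bottling", "detail": "line 1"},
]

def trace(lot, event_log):
    """Return the ordered production history recorded for a lot -
    the query a traceability service would answer for a consumer or auditor."""
    return [(e["step"], e["detail"]) for e in event_log if e["lot"] == lot]

print(trace("L42", events))
```

In a service-oriented deployment, `trace` would be exposed as a shared service so that every member of the producer consortium queries one consistent event log instead of maintaining its own.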
497

Modèles et outils pour des bases lexicales "métier" multilingues et contributives de grande taille, utilisables tant en traduction automatique et automatisée que pour des services dictionnairiques variés / Methods and tools for large multilingual and contributive lexical databases, usable as well in machine (aided) translation as for various dictonary services

Zhang, Ying 28 June 2016 (has links)
Our research is in computational lexicography, and concerns not only computer support for lexical resources useful for MT (machine translation) and MAHT (machine-aided human translation), but also the linguistic architecture of the lexical databases supporting these resources in an operational context (CIFRE thesis with L&M). We begin with a study of the evolution of ideas in this area, from the computerization of classical dictionaries to platforms for building true "lexical databases" such as JIBIKI-1 [Mangeot, M. et al., 2003; Sérasset, G., 2004] and JIBIKI-2 [Zhang, Y. et al., 2014]. The starting point was the PIVAX-1 system [Nguyen, H.-T. et al., 2007; Nguyen, H. T. & Boitet, C., 2009], designed for lexical databases for heterogeneous MT systems with a lexical pivot, able to support multiple volumes in each natural or artificial (such as UNL) "lexical space". Considering the industrial context, we focused our research on certain computational and lexicographic issues. To scale up, and to exploit new features enabled by JIBIKI-2, such as "rich links", we transformed PIVAX-1 into PIVAX-2 and reactivated the GBDLEX-UW++ project begun during the ANR TRAOUIERO project, re-importing all the (multilingual) data supported by PIVAX-1 and making it available on an open server. Starting from a need of L&M concerning acronyms, we extended the "macrostructure" of PIVAX by incorporating volumes of "prolexemes", as in PROLEXBASE [Tran, M. & Maurel, D., 2006]. We also show how to extend it to meet new needs, such as those of the INNOVALANGUES project. Finally, we created a "lemmatization middleware", LEXTOH, which calls several morphological analyzers or lemmatizers and then merges and filters their results. Combined with a new dictionary-creation tool, CREATDICO, LEXTOH can build on the fly a "mini-dictionary" corresponding to a sentence or paragraph of a text being "post-edited" online under IMAG/SECTRA, realizing the proactive lexical-help functionality foreseen in [Huynh, C.-P., 2010]. It could also be used to create "factored" parallel corpora for building MOSES-based MT systems.
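The merge-and-filter step at the heart of a lemmatization middleware like LEXTOH can be sketched as follows. The toy lemmatizers and the majority-vote filter are illustrative assumptions; the real system wraps external morphological analyzers and applies its own filtering rules:

```python
# Three toy "analyzers" returning candidate lemmas for a surface word.
def lemmatizer_a(word):
    return {"running": ["run"], "mice": ["mouse"]}.get(word, [word])

def lemmatizer_b(word):
    return {"running": ["run", "running"], "mice": ["mouse"]}.get(word, [word])

def lemmatizer_c(word):
    return {"running": ["run"], "mice": ["mice"]}.get(word, [word])

def merge_and_filter(word, analyzers, min_votes=2):
    """Pool the candidates from all analyzers, then keep only those
    proposed by at least min_votes of them."""
    votes = {}
    for analyze in analyzers:
        for lemma in set(analyze(word)):
            votes[lemma] = votes.get(lemma, 0) + 1
    return sorted(l for l, n in votes.items() if n >= min_votes)

analyzers = [lemmatizer_a, lemmatizer_b, lemmatizer_c]
print(merge_and_filter("running", analyzers))  # → ['run']
print(merge_and_filter("mice", analyzers))     # → ['mouse']
```

The middleware's value is that disagreements between analyzers (here, `lemmatizer_c` wrongly keeping "mice") are resolved in one place instead of in every client application.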
498

Eliot: uma arquitetura para internet das coisas: explorando a elasticidade da computação em nuvem com alto desempenho / Eliot: an architecture for the Internet of Things: exploiting the elasticity of cloud computing with high performance

Gomes, Márcio Miguel 26 February 2015 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The digital universe has been growing at significant rates in recent years. One of the main drivers of this increase in data volume is the Internet of Things, which, in a simplistic definition, consists of uniquely identifying objects electronically, tracking them, and storing their information for later use. To handle such a data load, solutions are needed at the software, hardware and architecture levels. Studies conducted in this work show that the currently adopted architecture has limitations, especially regarding scalability. As scalability is a key requirement for meeting the growing demand for data collection, processing and storage, this work presents an architecture entitled Eliot, with proposals that address scalability and offer elasticity to the system. To this end, the use of distributed databases, parallel processing and cloud computing is proposed, along with a restructuring of the current architecture. The results obtained after deploying and evaluating Eliot in a cloud computing environment demonstrate the feasibility, efficiency and reliability of the proposed architecture. Performance improved through reduced response times and an increased volume of requests processed and carried over the network, in addition to fewer connection and data communication failures.
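The parallel-processing ingredient of such an architecture can be sketched as fanning sensor readings out to a worker pool and aggregating the results. The reading format, validation rule, and worker count are illustrative assumptions, not details of Eliot:

```python
from concurrent.futures import ThreadPoolExecutor

def process_reading(reading):
    """Validate and normalize one sensor reading (placeholder logic)."""
    tag, raw = reading
    return {"tag": tag, "value": raw / 100.0, "valid": 0 <= raw <= 10_000}

readings = [("EPC-001", 2300), ("EPC-002", 450), ("EPC-003", 99_999)]

# Elasticity in miniature: under load, a cloud deployment would grow the pool.
with ThreadPoolExecutor(max_workers=4) as pool:
    processed = list(pool.map(process_reading, readings))

valid = [p for p in processed if p["valid"]]
print(len(valid))  # → 2 (the third reading fails validation)
```

Because each reading is processed independently, the same code scales from one worker to many, which is the property an elastic cloud back end exploits.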
499

Plateforme d’adaptation autonomique contextuelle à base de connaissances / Autonomic knowledge - based context-driven adaptation platform

Da, Kelling 16 October 2014 (has links)
Le développement d’applications ubiquitaires est particulièrement complexe. Au-delà de l’aspect dynamique de telles applications, l’évolution de l’informatique vers la multiplication des terminaux mobiles ne facilite pas les choses. Une solution pour simplifier le développement et l’exploitation de telles applications est d’utiliser des plateformes logicielles dédiées au déploiement et à l’adaptation des applications et gérant l’hétérogénéité des périphériques. Elles permettent aux concepteurs de se focaliser sur les aspects métiers et facilitent la réutilisation. La gestion du contexte est un élément clé lorsque l’on souhaite réaliser des applications pervasives sensibles au contexte. Les informations contextuelles issues d’un grand nombre de sources distribuées différentes sont, généralement, des informations brutes qui, sans interprétation, peuvent être dénuées de sens. En se basant sur des ontologies, il est possible de construire des modèles sémantiques qui seront alimentés par ces informations brutes et ainsi non seulement d’augmenter leur niveau de représentation sémantique mais surtout de pouvoir les utiliser pour prendre des décisions automatiques d’adaptation d’applications basées sur le contexte au runtime. La démocratisation des périphériques conduit à ce qu’un usager dispose actuellement de plusieurs périphériques incluant postes fixes, téléphones, tablettes, box, etc. pour son usage personnel. Il est souhaitable que cet ensemble de ressources lui soit accessible en tout point et à tout moment. De même des ressources publiques (stockage, services, etc.) peuvent lui être offertes. En revanche, la protection de la vie privée et les risques d’intrusion ne peuvent être négligés. Notre proposition est de définir, pour chaque utilisateur, un domaine d’adaptation qui contient l’ensemble des ressources auxquelles il peut accéder sans limite. 
Ces ressources sont celles qu’il a accepté de rendre disponibles sur ses machines pour lui-même et celles que les autres utilisateurs ont accepté de partager. Ainsi la notion de contexte est liée à celle d’utilisateur et inclut la totalité des ressources auxquelles il a accès. C’est la totalité de ces ressources qui sera exploitée pour faire en sorte de lui offrir les services adaptés à ses choix, ses dispositifs, sa localisation, etc. Nous proposons un middleware de gestion de contexte Kali2Much afin de fournir des services dédiés à la gestion du contexte distribué sur le domaine. Ce middleware est accompagné du module Kali-Reason permettant la construction de chaînes de raisonnement en BPMN afin d’offrir des fonctionnalités de raisonnent sur les informations de contexte dans le but d’identifier des situations nécessitant éventuellement une reconfiguration soit de l’application soit de la plateforme elle-même. C’est ainsi qu’est introduit l’aspect autonomique lié à la prise de décision. Les situations ainsi détectées permettent d’identifier le moment où déclencher les adaptations ainsi que les services d’adaptation qu’il sera nécessaire de déclencher. La conséquence étant d’assurer la continuité de service et d’ainsi s’adapter en permanence au contexte du moment. Le travail de reconfiguration d’applications est confié au service Kali-Adapt dont le rôle est de mettre en oeuvre les adaptations par déploiement/redéploiement de services de l’application et/ou de la plateforme. Un prototype fonctionnel basé sur la plateforme Kalimucho vient valider ces propositions / The ubiquitous applications development is not a trivial task. Beyond the dynamic aspect of suchapplications, the evolution of computer science toward the proliferation of mobile devices does not make things easier. A solution to simplify the development and operation of such applications is to use software platforms dedicated to deployment and adaptation of applications and managing heterogeneous devices. 
Such platforms allow designers to focus on business issues and facilitate reuse. Context management is a key element in building context-aware pervasive applications. Contextual information comes from many distributed sources. It is generally raw, uninterpreted information and may be meaningless on its own. Using ontologies, it is possible to build semantic models fed by this raw information. This not only raises the level of semantic representation; it can also be used to make automatic decisions for adapting context-based applications at runtime. The democratization of devices means that a user may own several of them for personal use: a computer, mobile phones, tablets, a set-top box, etc. It is desirable that this set of resources be available to him everywhere and at any time. Similarly, public resources (storage, services, etc.) should also be accessible to him. However, privacy protection and intrusion risks cannot be ignored. Our proposal is to define, for each user, an adaptation domain that contains all his resources. Users can access their own resources without limits and can agree to share resources with other users. The notion of context is thus tied to the user and includes all the resources he can access. All these resources are exploited to offer him services adapted to his preferences, his devices, his location, etc. We propose a context management middleware, Kali2Much, providing services dedicated to managing the context distributed over the domain. This middleware is accompanied by the Kali-Reason module for building reasoning chains in BPMN. These reasoning chains reason over context information in order to identify situations that might require a reconfiguration of the application or of the platform itself. Thus the autonomic aspect of decision making is introduced. 
The situations detected identify when adaptation needs to be triggered. The consequence is continuity of service: the system constantly adapts to the current context. Application reconfiguration is entrusted to the Kali-Adapt service, whose role is to carry out adaptations by deploying or redeploying application and/or platform services. A working prototype based on the Kalimucho platform validates these proposals.
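The abstract describes a pipeline of context gathering, situation detection via reasoning chains, and adaptation by service redeployment. The following is only a minimal illustrative sketch of that pipeline: the abstract exposes no API, so every class and method name here (ContextStore, ReasoningChain, Adapter, etc.) is hypothetical and does not reflect the actual Kali2Much, Kali-Reason, or Kali-Adapt interfaces.

```python
# Hypothetical sketch of the situation-detection / adaptation loop described
# in the abstract. All names are illustrative, not the real Kalimucho API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ContextStore:
    """Raw context facts collected over the user's adaptation domain."""
    facts: dict = field(default_factory=dict)

    def update(self, key, value):
        self.facts[key] = value

@dataclass
class Situation:
    """A named condition over context facts that may require adaptation."""
    name: str
    predicate: Callable[[dict], bool]

class ReasoningChain:
    """Evaluates situations in order, loosely like a BPMN reasoning chain."""
    def __init__(self, situations):
        self.situations = situations

    def detect(self, store):
        return [s.name for s in self.situations if s.predicate(store.facts)]

class Adapter:
    """Stands in for the reconfiguration service: redeploys on detection."""
    def __init__(self):
        self.log = []

    def reconfigure(self, situation_name):
        self.log.append(f"redeploy services for: {situation_name}")

# Wire the pieces together: a low-battery fact triggers a reconfiguration.
store = ContextStore()
store.update("battery_low", True)
chain = ReasoningChain([
    Situation("low-power", lambda f: f.get("battery_low", False)),
])
adapter = Adapter()
for name in chain.detect(store):
    adapter.reconfigure(name)
print(adapter.log)  # ['redeploy services for: low-power']
```

The point of the sketch is the separation of concerns the thesis argues for: context storage, reasoning, and adaptation are independent modules connected only by the situations they exchange.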
500

Modeling of an information system for wine traceability based on a service-oriented architecture.

Osvaldo Gogliano Sobrinho 25 April 2008 (has links)
The goal of this work is to model an information system for recording traceability data in the wine industry, following the principles of a service-oriented computing architecture. The relevance of the research stems from the fact that, since 2005, keeping such records has been mandatory for all producers intending to export their products to European Community countries. Beyond this legal requirement, final consumers, including Brazilian ones, have shown a growing demand for information about the food products they consume. The software was modeled as a solution that serves the industry collectively, through producer consortiums or associations, so as to dilute costs and share the resulting benefits. Starting from a bibliographic survey, contacts were made with the Brazilian wine production sector in Bento Gonçalves, RS, and information technology topics related to the theme were researched. The software was modeled with the Unified Modeling Language (UML), based on a characterization model of the wine production process used by the author. A functional prototype was created, and its use showed that the adopted model is viable for meeting the needs of the wine industry, individually or collectively. Further work could turn the prototype into a product for commercial use. Finally, it was observed that the same modeling structure could be applied in other domains. / The purpose of this project is the modeling of an information system aimed at maintaining traceability data in the wine industry, according to the principles of a service-oriented architecture. 
The importance of this issue is due to the fact that, since 2005, maintaining traceability data has been mandatory for all food and feed producers intending to export their products to any European Union country. In addition, final consumers, Brazilians included, have increasingly demanded information about the food products they consume. The project attempted a collective solution, intended to be used by producer consortiums or associations, so that its costs and benefits could be shared. Starting with an extensive bibliographic review, Brazilian wine producers at Bento Gonçalves, RS, Brazil were visited, and information technology issues related to the theme were researched. The software was modeled with the Unified Modeling Language (UML), using a representation of the wine production process devised by the author. A functional prototype was built, and its use showed that the adopted model can fulfill the demands of wine producers, both individually and collectively. Future development of this work could turn the prototype into a full-featured product. Finally, another interesting possibility is the use of this model in other domains.
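The core of such a traceability system is a service that lets producers append production-step records for a lot and lets consumers or auditors query a lot's full history. The abstract gives no interface details, so the sketch below is purely hypothetical: the names TraceEvent and TraceabilityService, and the chosen fields, are assumptions, not the thesis's actual model.

```python
# Illustrative sketch of a lot-based traceability service; every name here
# is hypothetical, since the abstract does not publish the real model.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class TraceEvent:
    """One recorded step in the production chain of a wine lot."""
    lot_id: str
    step: str          # e.g. "harvest", "fermentation", "bottling"
    recorded_on: date
    details: str

class TraceabilityService:
    """Minimal service facade: producers append events, consumers query them."""
    def __init__(self):
        self._events: list[TraceEvent] = []

    def record(self, event: TraceEvent) -> None:
        self._events.append(event)

    def history(self, lot_id: str) -> list[TraceEvent]:
        """Full audit trail for one lot, in recording order."""
        return [e for e in self._events if e.lot_id == lot_id]

# A producer records two steps for one lot; a consumer retrieves the trail.
svc = TraceabilityService()
svc.record(TraceEvent("LOT-42", "harvest", date(2007, 3, 1), "estate grapes"))
svc.record(TraceEvent("LOT-42", "bottling", date(2008, 1, 15), "750 ml bottles"))
print([e.step for e in svc.history("LOT-42")])  # ['harvest', 'bottling']
```

In a service-oriented deployment as described, this facade would be exposed as a shared service so that a consortium of producers records into one system while bearing its cost collectively.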
