  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
451

A generic information-model for distributing VRE using DDS / Modèle générique de représentation des connaissances pour la distribution des environnements virtuels utilisant DDS

Haidar, Hassan 03 September 2015 (has links)
Virtual reality environments (VREs), which provide a safer setting for learning and training, are increasingly adopted to simulate complex systems. In parallel, distribution services have become essential following advances in telecommunications and the growing demand for user mobility; middleware technologies provide such services to existing and newly developed applications. However, distributing a VRE with existing APIs still requires substantial application-specific development and customization. Data Distribution Service (DDS) is a standardized middleware for real-time applications based on a peer-to-peer architecture. It must be aware of the types of data being distributed, which is achieved by defining an information-model in an Interface Definition Language (IDL) file. Consequently, distributing a VRE with DDS adds a step: a specific IDL file must be modeled to meet each application's requirements. Since the domains addressed by VREs involve complex data types (procedural, behavioral, etc.), engineering a specific IDL file for each application is a complex task requiring both a computer scientist and a domain expert every time an application has to be distributed. The first contribution of this thesis is a generic information-model that can be reused when distributing different VREs. The novelty of our approach lies in coupling conceptual models (in our case the MASCARET meta-model) with DDS's need to be aware of the data it distributes: we define generic structures within the IDL file. This eliminates one step of the workflow and thus simplifies the use of DDS. On the other hand, DDS remains a low-level, peer-to-peer distribution middleware with no control layer in between.
As with other classical approaches, many messages must be sent over the network to synchronize the distributed environment, and how to detect changes in the virtual environment in order to send updates must be specified in code. Our second contribution is therefore a generic control layer that dynamically detects when changes occur. This layer is based on explicit knowledge about the behaviors being executed.
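The generic-structure idea described above can be sketched in a few lines: instead of one application-specific record type per VRE, a single generic record carries any entity's state as name/value pairs. This is a minimal illustrative sketch in Python, not the thesis's actual IDL; `EntityState`, `UpdateSample`, and all field names are our own invention.

```python
from dataclasses import dataclass, field
from typing import Dict

# Hypothetical generic "topic" structure: one reusable record type can
# distribute the state of entities from different VREs, so no new IDL
# type needs to be engineered per application.
@dataclass
class EntityState:
    entity_id: str       # unique id of the virtual-world entity
    entity_class: str    # concept name from the conceptual model (e.g. MASCARET)
    properties: Dict[str, str] = field(default_factory=dict)

@dataclass
class UpdateSample:
    topic: str           # logical channel the sample is published on
    state: EntityState

# Two very different entities travel over the same generic type.
door = EntityState("door-17", "Door", {"open": "true"})
avatar = EntityState("user-3", "Avatar", {"x": "1.5", "y": "0.0"})
samples = [UpdateSample("vre/updates", door), UpdateSample("vre/updates", avatar)]
```

The trade-off of such genericity is that type checking moves from the middleware to the conceptual model, which is exactly where the coupling with MASCARET becomes useful.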
452

Statistical Methods for Computational Markets : Proportional Share Market Prediction and Admission Control

Sandholm, Thomas January 2008 (has links)
We design, implement, and evaluate statistical methods for managing uncertainty when consuming and provisioning resources in a federated computational market. To enable efficient allocation of resources in this environment, providers need to know consumers' risk preferences and the expected future demand. The guarantee levels to offer thus depend on techniques to forecast future usage and to accurately capture and model uncertainties. Our contribution in this thesis is threefold: first, we evaluate a set of techniques for forecasting demand in computational markets; second, we design a scalable method that captures a succinct summary of usage statistics and allows consumers to express risk preferences; and finally, we propose a method for providers to set resource prices and determine the guarantee levels to offer. The methods employed are based on fundamental concepts in probability theory and are thus easy to implement, analyze, and evaluate. The key component of our solution is a predictor that dynamically constructs approximations of the price probability density and quantile functions for arbitrary resources in a computational market. Because highly fluctuating and skewed demand is common in these markets, it is difficult to accurately and automatically construct representations of arbitrary demand distributions. We found that a technique based on the Chebyshev inequality and empirical prediction bounds, which estimates worst-case bounds on deviations from the mean given a variance, provided the most reliable forecasts for a set of representative high-performance and shared cluster workload traces. We further show how these forecasts can help consumers determine how much to spend given a risk preference, and how providers can offer admission-control services with different guarantee levels given a recent history of resource prices.
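The appeal of a Chebyshev-style bound is that it needs only a mean and a variance, with no distributional assumption, which suits the skewed, spiky demand the abstract describes. A minimal sketch of the idea (illustrative only; the function name and the toy price history are ours, not the thesis's): by the Chebyshev inequality, a sample deviates from the mean by at least sigma/sqrt(delta) with probability at most delta, so mean + sigma/sqrt(delta) is a worst-case upper bound holding with probability at least 1 - delta.

```python
import math
import statistics

def chebyshev_upper_bound(prices, delta):
    """Distribution-free upper bound on the next price.

    With probability at least 1 - delta, a draw from the same distribution
    lies below mean + std / sqrt(delta), by the Chebyshev inequality
    P(|X - mu| >= k*sigma) <= 1/k**2 with k = 1/sqrt(delta).
    """
    mu = statistics.mean(prices)
    sigma = statistics.pstdev(prices)
    return mu + sigma / math.sqrt(delta)

# Toy history with a demand spike, the kind of skew the thesis targets.
history = [1.0, 1.2, 0.9, 1.1, 5.0, 1.0, 1.3]
bound = chebyshev_upper_bound(history, delta=0.05)  # 95% worst-case bound
```

A consumer with a stricter risk preference (smaller delta) gets a larger bound and therefore budgets more; that monotonicity is what makes the bound usable as a risk dial.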
453

Towards a new approach for enterprise integration : the semantic modeling approach

Radhakrishnan, Ranga Prasad 01 February 2005
Manufacturing today has become a matter of the effective and efficient application of information technology and knowledge engineering. Manufacturing firms' success depends to a great extent on information technology, which emphasizes the integration of the information systems used by a manufacturing enterprise. This integration is also called enterprise application integration (here the term "application" means information systems or software systems). The methodology for enterprise application integration, in particular its automation, has been studied for at least a decade; however, no satisfactory solution has been found. Enterprise application integration is becoming even more difficult due to the explosive growth of information systems driven by ever-increasing competition in the software market. This thesis aims to provide a novel solution to enterprise application integration. The semantic data model concept that evolved in database technology is revisited and applied to enterprise application integration. This has led to two novel ideas developed in this thesis. First, an ontology of an enterprise with five levels (following the data abstraction of generalization/specialization) is proposed and represented using the Unified Modeling Language (UML). Second, both the ontology for enterprise functions and the ontology for enterprise applications are modeled to allow automatic processing of information back and forth between these two domains. The approach built on these ideas is called the enterprise semantic model approach. The thesis presents a detailed description of this approach, including the fundamental rationale behind the enterprise semantic model, the ontology of enterprises with levels, and a systematic way to construct a particular enterprise semantic model for a company.
A case study is provided to illustrate how the approach works and to show the high potential of solving the existing problems within enterprise application integration.
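The five-level generalization/specialization idea can be illustrated with a small sketch. This is purely our own toy rendering: the level names and concept names below are hypothetical, not taken from the thesis; only the mechanism (concepts linked upward along a specialization chain) mirrors the abstract's description.

```python
# Hypothetical enterprise ontology with abstraction levels; walking the
# parent links recovers the generalization chain of any concept.
LEVELS = ["universal", "industry", "sector", "company", "application"]

class Concept:
    def __init__(self, name, level, parent=None):
        assert level in LEVELS, f"unknown abstraction level: {level}"
        self.name, self.level, self.parent = name, level, parent

    def generalize(self):
        """Return the chain of names from this concept up to the most
        generic concept (the specialization hierarchy, bottom to top)."""
        chain, node = [self.name], self
        while node.parent is not None:
            node = node.parent
            chain.append(node.name)
        return chain

# Illustrative chain: an application-level machine specializes an
# industry-level concept, which specializes a universal resource.
resource = Concept("Resource", "universal")
machine = Concept("Machine", "industry", parent=resource)
lathe = Concept("Lathe-X100", "application", parent=machine)
```

Mapping between the function ontology and the application ontology then amounts to finding a shared ancestor in such chains, which is what enables the automatic back-and-forth processing the abstract describes.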
455

End-to-End Security of Information Flow in Web-based Applications

Singaravelu, Lenin 25 June 2007 (has links)
Web-based applications and services are increasingly used in security-sensitive tasks. Current security protocols rely on two crucial assumptions to protect the confidentiality and integrity of information: first, that the end-point software used to handle security-sensitive information is free of vulnerabilities; second, that communication between a client and a service provider is point-to-point. Neither assumption holds for large, complex, and vulnerable end-point software such as web browsers or web-services middleware, or for web-service compositions in which multiple value-adding service providers are interposed between a client and the original service provider. To address the problem of large and complex end-point software, we present the AppCore approach, which uses manual analysis of information flow, as opposed to purely automated approaches, to split existing software into two parts: a simplified trusted part that handles security-sensitive information, and a legacy, untrusted part that handles non-sensitive information without access to sensitive information. This approach not only avoids many common and well-known vulnerabilities in the legacy software that compromised sensitive information; it also greatly reduces the size and complexity of the trusted code, making exhaustive testing or formal analysis more feasible. We demonstrate the feasibility of the AppCore approach by constructing AppCores for two real-world applications: a client-side AppCore for https-based applications and an AppCore for web-service platforms. Our evaluation shows that security improvements and complexity reductions (over a factor of five) can be attained with minimal modifications to existing software (a few tens of lines of code, plus the proxy settings of a browser) and an acceptable performance overhead (a few percent).
To protect the communication of sensitive information between clients and service providers in web-service compositions, we present an end-to-end security framework called WS-FESec that provides end-to-end security properties even in the presence of misbehaving intermediate services. We show that WS-FESec is flexible enough to support the lattice model of secure information flow, and that it guarantees precise security properties for each component service at a modest cost of a few milliseconds per signature or encrypted field.
456

Java in eingebetteten Systemen (Java in Embedded Systems)

Gatzka, Stephan 13 July 2009 (has links) (PDF)
Modern object-oriented languages have so far played hardly any role in the development of software for embedded systems. The reasons are manifold; most often, inadequate efficiency and higher memory demand are cited. Although Java has many properties that argue for its use in embedded systems, it still faces the prejudice of consuming too many resources in systems with limited processing power and memory. This work contributes to dismantling these prejudices. In particular, it presents techniques that keep the memory footprint of a JVM as small as possible and let it use the available processing power efficiently. Many of the presented methods and algorithms were implemented in the Kertasarie VM, a virtual machine designed specifically for embedded systems. Because embedded systems are now widely networked over the Internet, the problem of a modern, abstract, and efficient form of communication also arises in many cases. The second emphasis of this work is therefore a comparison of object-oriented middleware architectures, especially Java RMI. In this area, too, a custom RMI variant adapted to embedded systems is presented.
457

Ontological mapping between different higher educational systems : The mapping of academic educational system on an international level

Esmaeily, Kaveh January 2006 (has links)
This Master's thesis sets out to research and understand the structure of different educational systems. Its main goal is to develop a middleware for translating courses between different educational systems.
The procedure is to determine the meaning of objects and courses from each educational system's point of view, mainly through processes such as identifying the context, semantics, and state of the objects involved, possibly across different activities. The middleware could be applied, with small changes, to any structured system of education.
The thesis introduces a framework for using ontologies in the translation and integration of course aspects in different processes. It suggests using ontologies when adopting and structuring different educational systems at an international level. Through an understanding of ontologies, it constructs a middleware for the translation process between courses in different educational systems. As an example, courses in Sweden, Germany, and Tajikistan have been used for the mapping and for constructing learning goals and qualifications.
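One concrete facet of course translation is converting credit values between systems via a common reference unit. The sketch below is a hypothetical illustration of that mechanism only: the unit names and conversion factors are invented for the example and are not taken from the thesis.

```python
# Hypothetical credit translation through a common reference unit (ECTS).
# Factors are illustrative: 1 Swedish hp == 1 ECTS, 1 US semester credit == 2 ECTS.
TO_ECTS = {"ECTS": 1.0, "SE_HP": 1.0, "US_SEMESTER": 2.0}

def translate_credits(value, src, dst):
    """Map credits src -> reference unit (ECTS) -> dst.

    Pivoting through one reference unit needs only N factors for N systems,
    instead of N*N pairwise conversion rules.
    """
    ects = value * TO_ECTS[src]
    return ects / TO_ECTS[dst]

swedish_hp = translate_credits(3, "US_SEMESTER", "SE_HP")
```

A full course translation would of course also have to map learning goals and qualifications, which is where the ontological part of the framework comes in; credits are just the numerically simplest dimension.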
458

Modell eines virtuellen Finanzdienstleisters: Der Forschungsprototyp cofis.net 1 (Model of a Virtual Financial Service Provider: The Research Prototype cofis.net 1)

Fettke, Peter, Loos, Peter, Thießen, Friedrich, Zwicker, Jörg 26 April 2001 (has links) (PDF)
The financial-services industry currently faces a variety of technological and economic developments. Virtual financial service providers are one approach to meeting these developments. The literature does not describe the concept of a virtual financial service provider uniformly; five views can be distinguished: virtual-reality technologies, financial-management software, marketing mix, electronic marketplace, and virtual organization. At the Faculty of Economics and Business Administration of Chemnitz University of Technology, the chairs of Information Systems II and of Finance and Banking designed and implemented a model of a virtual financial service provider. The resulting prototype, cofis.net, currently implements the product "virtual bank transfer". This paper presents the prototype's business, data-processing, and implementation concepts. The prototype is based on a five-tier layered architecture, which was successfully realized. Numerous follow-on questions remain, for example regarding the design of a virtual financial service provider's product-management cycle, or the general representability of financial products in information systems.
459

cofis.net - Ein Informationssystem für Virtuelle Finanzdienstleister (cofis.net: An Information System for Virtual Financial Service Providers)

Fettke, Peter, Loos, Peter, Thießen, Friedrich 24 September 2001 (has links) (PDF)
The financial-services industry currently faces a variety of technological and economic developments. Virtual financial service providers are one approach to meeting these developments. The literature does not describe the concept of a virtual financial service provider uniformly; five views can be distinguished: virtual-reality technologies, financial-management software, marketing mix, electronic marketplace, and virtual organization. At the Faculty of Economics and Business Administration of Chemnitz University of Technology, the chairs of Information Systems II and of Finance and Banking designed and implemented a model of a virtual financial service provider.
460

Interoperabilité à large échelle dans le contexte de l'Internet du futur (Large-Scale Interoperability in the Context of the Future Internet)

Rodrigues, Preston 27 May 2013 (has links) (PDF)
The growth of the Internet as a platform for large-scale provisioning of multimedia content has been one of the great success stories of the 21st century. However, multimedia applications, with their specific traffic characteristics and the requirements of new services, pose an interesting challenge in terms of discovery, mobility, and management. Moreover, the recent momentum of the Internet of Things has made it essential to revitalize research on integrating heterogeneous information sources across diverse networks. To this end, the contributions of this thesis seek a balance between heterogeneity and interoperability in order to discover and integrate heterogeneous information sources in the context of the Future Internet.
Discovering information sources across different networks requires a deep understanding of how the information is structured and which specific methods are used to communicate. This process has been regulated by discovery protocols. However, these protocols rely on different techniques and are designed with the underlying network infrastructure in mind, which limits their ability to cross the boundary of a given network. To address this problem, the first contribution of this thesis seeks a balanced solution that lets discovery protocols interact with one another while providing the means needed to cross network boundaries. To this end, we propose ZigZag, a middleware that reuses and extends current discovery protocols, designed for local networks, in order to discover services available at large scale. Our approach is based on protocol conversion, enabling service discovery independently of the underlying discovery protocol.
However, in large-scale consumer-oriented networks, the volume of discovery messages could render the network unusable. To guard against this, ZigZag uses the concept of aggregation during the discovery process. Through aggregation, ZigZag can integrate multiple responses from different sources supporting different discovery protocols. Furthermore, customizing the aggregation process to one's needs requires a deep understanding of ZigZag's fundamentals. To this end, we propose a second contribution: a flexible language to help define aggregation policies in a clean and efficient way.
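The conversion-plus-aggregation pipeline can be sketched compactly: responses from different local discovery protocols are normalized into one record shape and then merged with deduplication. This is an illustrative sketch only; the converter functions, field names, and sample messages below are hypothetical and do not reflect ZigZag's actual implementation or message formats.

```python
# Hedged sketch of protocol conversion + aggregation for service discovery.
# Each converter maps a protocol-specific answer to one common record shape.
def from_ssdp(msg):
    # SSDP-style answer: search target + location URL (fields illustrative).
    return {"name": msg["ST"], "addr": msg["LOCATION"], "proto": "SSDP"}

def from_mdns(msg):
    # mDNS/DNS-SD-style answer: service name + target host (fields illustrative).
    return {"name": msg["service"], "addr": msg["target"], "proto": "mDNS"}

def aggregate(records):
    """Merge converted responses, keeping the first answer per service name.

    Collapsing duplicates is what keeps the discovery traffic from growing
    with the number of networks answering for the same service.
    """
    seen = {}
    for rec in records:
        seen.setdefault(rec["name"], rec)
    return list(seen.values())

responses = [
    from_ssdp({"ST": "printer", "LOCATION": "10.0.0.5"}),
    from_mdns({"service": "printer", "target": "10.0.0.5"}),  # duplicate answer
    from_mdns({"service": "camera", "target": "10.0.0.9"}),
]
found = aggregate(responses)
```

A policy language such as the thesis's second contribution would then parameterize `aggregate`, for example preferring one protocol's answers or merging attributes instead of keeping the first record.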
