About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Component Decomposition of Distributed Real-Time Systems

Brohede, Marcus January 2000 (has links)
<p>Development of distributed real-time applications, in contrast to best-effort applications, has traditionally been a slow process. Standards have been lacking, and no commercial off-the-shelf (COTS) distributed object computing (DOC) middleware supporting real-time requirements has been available to speed up development without sacrificing quality.</p><p>Standards and DOC middlewares that address the key requirements of real-time systems, predictability and efficiency, are now emerging; consequently, new possibilities such as component decomposition of real-time systems arise.</p><p>A number of component-decomposed architectures of the distributed active real-time database system DeeDS are described and discussed, along with a discussion of the most suitable DOC middleware. DeeDS is suitable for this project since it supports hard real-time requirements and is distributed. The DOC middlewares addressed in this project are OMG's Real-Time CORBA, Sun's Enterprise JavaBeans, and Microsoft's COM/DCOM. The discussion of the most suitable DOC middleware focuses on real-time requirements, platform support, and whether implementations of these middlewares are available.</p>
52

CORBA and Web Service Performance Comparison for Reliable and Confidential Message Transmission in Heterogeneous Distributed Systems

Miess, Jürgen January 2004 (has links)
<p>The business pressures that companies and organisations encounter are steadily growing, and they continuously have to improve their efficiency to keep up. One very important aspect of doing so is the reinforced adoption of computer-based information systems. This paper focuses on a computer-based system that is able to automate everyday business communication between distributed team members.</p><p>Reliable and confidential message delivery, event notification, the integration of different end devices (mobile phones, PCs, etc.), and message transport across different networks (wireless, wired) were identified as the main system requirements. Based on these requirements, the performance of two middleware technologies, namely CORBA and Web services, has been compared. The result of this comparison is that both technologies are suited to implementing such a system, but both also have strengths and weaknesses in achieving the stated requirements.</p><p>CORBA, for example, thanks to several supporting services that are already included, allows the programmer to concentrate on the application development itself and use these services to ensure reliable and confidential message transmission. Additionally, CORBA is very efficient in using the bandwidth of the underlying communication network, but makes higher demands on the memory available on clients. This is critical if clients are mobile devices with limited resources.</p><p>Web service technology is much more modest than CORBA with respect to client-side memory, but message transmission requires much more bandwidth. Furthermore, there are no built-in security and reliability services available for Web services, as there are for CORBA. Hence it is up to the application programmer to implement these features manually; however, they do not necessarily have to develop everything from scratch, but can resort to already existing specifications while still retaining the freedom to develop specially tailored features.</p><p>In short, it can be stated that CORBA is more consequential and consistent, while Web service technology is more adjustable and flexible.</p>
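The bandwidth asymmetry described above, compact binary marshalling versus a verbose XML envelope, can be illustrated with a toy comparison. The field names and framing below are invented for illustration; they are not CORBA's actual CDR encoding or any real SOAP schema:

```python
import struct
import xml.etree.ElementTree as ET

def binary_message(sender_id, timestamp, text):
    """CORBA-like fixed binary framing: two 32-bit ints plus a length-prefixed body."""
    body = text.encode("utf-8")
    return struct.pack("!IIH", sender_id, timestamp, len(body)) + body

def xml_message(sender_id, timestamp, text):
    """SOAP-like XML envelope carrying the same three fields as tagged elements."""
    env = ET.Element("Envelope")
    ET.SubElement(env, "sender").text = str(sender_id)
    ET.SubElement(env, "timestamp").text = str(timestamp)
    ET.SubElement(env, "text").text = text
    return ET.tostring(env)

if __name__ == "__main__":
    b = binary_message(7, 1092, "meeting moved to 14:00")
    x = xml_message(7, 1092, "meeting moved to 14:00")
    print(len(b), len(x))  # the XML envelope is noticeably larger for the same payload
```

Running the sketch shows the tag overhead that makes XML-based Web service messages cost more bandwidth than an equivalent binary encoding.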
53

An Application of Sync Time Division Multiplexing in Telemetry System

Lu, Chun, Yan, Yihong, Song, Jian 10 1900 (has links)
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV / High-speed real-time data transport is critically important for telemetry systems, especially for large-scale distributed systems. This paper introduces an STDM (Sync Time Division Multiplexing) network structure for data transport between devices in telemetry systems. The data in these systems is transported through virtual channels between devices. In addition, a suitable frame format, based on the PCM format, is designed to meet the needs of synchronization and real-time transport in large-scale distributed telemetry systems.
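The frame design described above can be sketched generically: a sync word for frame alignment followed by fixed time-division slots, one per virtual channel. The sync pattern, slot count, and slot size below are arbitrary placeholders, not the format defined in the paper:

```python
SYNC_WORD = b"\xfe\x6b\x28\x40"  # hypothetical 32-bit sync pattern
SLOTS_PER_FRAME = 4              # one fixed slot per virtual channel
SLOT_SIZE = 8                    # bytes per slot

def build_frame(channel_data):
    """Assemble one STDM-style frame: sync word, then one fixed-size slot
    per virtual channel, zero-padded so the frame length is constant."""
    frame = bytearray(SYNC_WORD)
    for ch in range(SLOTS_PER_FRAME):
        slot = channel_data.get(ch, b"").ljust(SLOT_SIZE, b"\x00")[:SLOT_SIZE]
        frame += slot
    return bytes(frame)

def split_frame(frame):
    """Recover each virtual channel's slot after checking frame alignment."""
    assert frame[:len(SYNC_WORD)] == SYNC_WORD, "frame sync lost"
    payload = frame[len(SYNC_WORD):]
    return {ch: payload[ch * SLOT_SIZE:(ch + 1) * SLOT_SIZE]
            for ch in range(SLOTS_PER_FRAME)}
```

Because every frame has the same length and slot positions, a receiver that has locked onto the sync word can demultiplex all virtual channels deterministically, which is what makes the scheme predictable enough for real-time telemetry.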
54

Web-based Stereoscopic Collaboration for Medical Visualization

Kaspar, Mathias 23 August 2013 (has links)
Medical volume visualization is a valuable tool for examining volume data in medical practice and teaching. An interactive, stereoscopic, and collaborative real-time presentation is necessary to understand the data fully and in detail. Because of high hardware requirements, such visualization of high-resolution data is possible almost only on special visualization systems. Remote visualization is used to make such visualization available peripherally, but it almost always requires complex software deployments, which hinders universal ad-hoc usability. This leads to the following hypothesis: a high-performance remote visualization system specialized for stereoscopy and ease of use can be employed for interactive, stereoscopic, and collaborative medical volume visualization. The recent literature on remote visualization describes applications that require only a plain web browser. However, these place no particular emphasis on performant usability for every participant, nor do they provide the functionality needed to drive multiple stereoscopic presentation systems. Given the familiarity, ease of use, and wide availability of web browsers, the following specific question arose: can we develop a system that supports all these aspects but requires only a plain web browser as client, with no additional software? A proof of concept was carried out to verify the hypothesis, comprising the development of a prototype, its practical application, and the measurement and comparison of its performance. The resulting prototype (CoWebViz) is one of the first web-browser-based systems that enables fluid, interactive remote visualization in real time without additional software. Tests and comparisons show that the approach performs better than other similar systems tested. The simultaneous use of different stereoscopic presentation systems with such a simple remote visualization system is currently unique. Its use for normally very resource-intensive stereoscopic and collaborative anatomy education, together with intercontinental participants, demonstrates the feasibility and the simplifying character of the approach. The feasibility of the approach was also shown by its successful use in other application scenarios, such as grid computing and surgery.
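One common way to drive a stereoscopic display from a single stream, as a browser-based system must, is to pack the two eye views side by side into one frame. The sketch below is a generic illustration of that packing, not CoWebViz's actual pipeline:

```python
def pack_side_by_side(left, right):
    """Pack two equal-sized frames (lists of pixel rows) into one
    side-by-side frame, a common single-stream format for stereo displays."""
    assert len(left) == len(right), "eye views must have the same height"
    return [lrow + rrow for lrow, rrow in zip(left, right)]

def unpack_side_by_side(frame):
    """Split a side-by-side frame back into the left and right eye views."""
    half = len(frame[0]) // 2
    left = [row[:half] for row in frame]
    right = [row[half:] for row in frame]
    return left, right
```

The display side (or the presentation system's own firmware) performs the unpacking, so the server only ever transmits one ordinary video stream.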
55

Design Optimization of Soft Real-Time Applications on FlexRay Platforms

Malekzadeh, Mahnaz January 2013 (has links)
FlexRay is a deterministic communication bus in the automotive domain that supports a fault-tolerant, high-speed bus system. It operates on a time-division-multiple-access scheme and allows the transmission of event-driven and time-driven messages between nodes in a system. A FlexRay bus cycle consists of two periodic segments: a static segment and a dynamic segment. Such a bus system can be used in a wide range of real-time automotive applications with soft and hard timing constraints. Recent research has focused on the FlexRay static segment. The dynamic segment, in contrast, is based on an event-triggered scheme, which is harder to predict temporally. Nevertheless, the event-triggered paradigm provides more flexibility for further incremental design, and the dynamic segment is also suitable for applications with varying data sizes. These advantages motivate further research on the dynamic segment. In a real-time system, the results of computations have to be ready by a specific instant of time called the deadline. In a soft real-time application, a result can still be used, with a degraded quality of service, even after the deadline has passed, while in a hard real-time system, missing a deadline leads to a catastrophe. This thesis aims at optimizing some of the parameters of the FlexRay bus for soft real-time applications. The cost function used to assess a solution to the optimization problem is the deadline miss ratio, and a solution consists of two parts: (1) the assignment of frame identifiers to the messages produced at each node, and (2) the size of each individual minislot, one of the FlexRay bus parameters. The optimization is based on genetic algorithms. To evaluate the proposed approach, several experiments have been conducted using the FlexRay bus simulator implemented in this thesis. The achieved results show that a suitable choice of the parameters generated by our optimization engine improves the timing behavior of the simulated communicating nodes.
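A minimal sketch of the genetic-algorithm loop described above, with a toy, invented fitness function standing in for the deadline-miss-ratio measurements that the real approach obtains from the FlexRay bus simulator:

```python
import random

def miss_ratio(minislot):
    """Toy stand-in for the simulator: assume deadline misses grow as the
    minislot size moves away from a hypothetical sweet spot near 8."""
    return abs(minislot - 8) / 10.0

def evolve(pop_size=10, generations=30, seed=1):
    """Tiny genetic algorithm over a single parameter (minislot size)."""
    rng = random.Random(seed)
    pop = [rng.randint(1, 20) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=miss_ratio)
        survivors = pop[:pop_size // 2]  # selection: keep the fittest half
        # mutation: each survivor spawns a slightly perturbed child
        children = [max(1, s + rng.choice([-2, -1, 1, 2])) for s in survivors]
        pop = survivors + children
    return min(pop, key=miss_ratio)
```

The thesis's optimization engine searches a much larger space (frame identifiers per node plus the minislot size) and evaluates candidates on the simulator, but the selection-and-mutation loop has this shape.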
56

A CONTROLLER AREA NETWORK LAYER FOR RECONFIGURABLE EMBEDDED SYSTEMS

Jeganathan, Nithyananda Siva 01 January 2007 (has links)
Dependable and fault-tolerant computing has been actively pursued as a research area since the 1980s in various fields involving the development of safety-critical applications. The ability of a system to provide reliable functional service as per its design is a key paradigm in dependable computing. To provide reliable service in fault-tolerant systems, dynamic reconfiguration has to be supported to enable recovery from errors (induced by faults) or graceful degradation in case of service failures. Reconfigurable distributed applications provide a platform for developing fault-tolerant systems, and these reconfigurable architectures require an embedded network that is inherently fault-tolerant and capable of handling the movement of tasks between nodes/processors within the system during dynamic reconfiguration. The embedded network should provide mechanisms for deterministic message transfer under faulty conditions and support fault detection/isolation mechanisms within the network framework. This thesis describes the design, implementation, and validation of an embedded networking layer using Controller Area Network (CAN) to support reconfigurable embedded systems.
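CAN's deterministic message transfer rests on bitwise identifier arbitration, which any networking layer built on CAN inherits. A compact simulation of that arbitration rule (standard CAN behavior, not code from the thesis):

```python
def arbitrate(identifiers):
    """CAN bitwise arbitration over 11-bit identifiers, MSB first.
    A node transmitting recessive (1) while the bus carries dominant (0)
    loses and drops out, so the numerically lowest identifier
    (the highest-priority message) always wins the bus."""
    contenders = set(identifiers)
    for bit in range(10, -1, -1):
        bus = min((i >> bit) & 1 for i in contenders)  # dominant 0 wins the wire
        contenders = {i for i in contenders if (i >> bit) & 1 == bus}
    assert len(contenders) == 1, "identifiers must be unique on a CAN bus"
    return contenders.pop()
```

Because arbitration is non-destructive (the winning frame is transmitted without retry), worst-case latency for the highest-priority messages is bounded, which is what makes CAN attractive for deterministic transfer during reconfiguration.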
57

A Reference Architecture for Providing Latent Semantic Analysis Applications in Distributed Systems. Diploma Thesis

Dietl, Reinhard 12 1900 (has links) (PDF)
With the increasing availability of storage and computing power, Latent Semantic Analysis (LSA) has gained more and more significance in practice over the last decade. This diploma thesis aims to develop a reference architecture which can be utilised to provide LSA based applications in a distributed system. It outlines the underlying problems of generation, processing and storage of large data objects resulting from LSA operations, the problems arising from bringing LSA into a distributed context, suggests an architecture for the software components necessary to perform the tasks, and evaluates the applicability to real world scenarios, including the implementation of a classroom scenario as a proof-of-concept. (author's abstract) / Series: Theses / Institute for Statistics and Mathematics
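LSA starts from a term-document count matrix, which is then factorized by a truncated SVD; the large data objects discussed above arise from that factorization. A minimal sketch of the matrix construction and the cosine comparison applied to its document vectors (in a real LSA pipeline the comparison would run on SVD-reduced vectors; raw counts are used here only to keep the sketch self-contained):

```python
import math
from collections import Counter

def term_doc_matrix(docs):
    """Build the raw term-document count matrix that LSA factorizes with SVD.
    Rows are vocabulary terms, columns are documents."""
    vocab = sorted({w for d in docs for w in d.split()})
    return vocab, [[Counter(d.split())[w] for d in docs] for w in vocab]

def cosine(u, v):
    """Cosine similarity between two document vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))
```

Even this toy version hints at the storage problem the thesis addresses: the matrix grows with vocabulary times corpus size, and the dense factor matrices produced by SVD are what a distributed architecture must generate, process, and store.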
58

[en] A TOOL FOR REBUILDING THE SEQUENCE OF INTERACTIONS BETWEEN COMPONENTS OF A DISTRIBUTED SYSTEM / [pt] UMA FERRAMENTA PARA RECONSTRUÇÃO DA SEQUÊNCIA DE INTERAÇÕES ENTRE COMPONENTES DE UM SISTEMA DISTRIBUÍDO

PAULO ROBERTO FRANCA DE SOUZA 11 October 2011 (has links)
[en] Distributed systems often present a runtime behavior different from what the programmer expects. Static analysis alone is not enough to understand runtime behavior and to diagnose errors. This difficulty is caused by the non-deterministic nature of distributed systems, which stems from inherent characteristics such as concurrency, communication latency, and partial failure. A better view of the interactions between the system's software components is therefore necessary in order to understand its runtime behavior. In this work we present a tool that rebuilds the interactions among distributed components, presents a view of distributed threads and remote call sequences, and allows the analysis of causality relationships. Our tool also stores the interactions over time and correlates them to the system architecture and to performance data. The proposed tool helps the developer to better understand scenarios involving unexpected system behavior and to restrict the scope of error analysis, making the search for a solution easier.
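A standard device for recovering causality relationships from distributed interaction logs is the vector clock; the abstract does not state which mechanism the tool uses, so the sketch below is purely illustrative:

```python
def merge(vc_a, vc_b):
    """Component-wise max: a process adopts this merged clock (plus its own
    increment) when it receives a message stamped with vc_b."""
    return [max(a, b) for a, b in zip(vc_a, vc_b)]

def happened_before(vc_a, vc_b):
    """Event A causally precedes event B iff A's clock is <= B's in every
    component and strictly smaller in at least one."""
    return all(a <= b for a, b in zip(vc_a, vc_b)) and vc_a != vc_b

def concurrent(vc_a, vc_b):
    """Events whose clocks are incomparable are causally unrelated."""
    return not happened_before(vc_a, vc_b) and not happened_before(vc_b, vc_a)
```

With clocks like these attached to logged remote calls, a tool can reconstruct which interactions could have influenced which, exactly the causality view the abstract describes.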
59

Métodos de Exploração de Espaço de Projeto em Tempo de Execução em Sistemas Embarcados de Tempo Real Soft baseados em Redes-Em-Chip. / Methods of Run-time Design Space Exploration in NoC-based Soft Real Time Embedded Systems

Briao, Eduardo Wenzel January 2008 (has links)
The complexity of electronic systems design has been increasing due to technological evolution, which now allows the inclusion of a complete system on a single chip (SoC – System-on-Chip). In order to cope with the corresponding design complexity and reduce design costs and time-to-market, systems are built by assembling pre-designed and pre-verified functional modules, called IP (Intellectual Property) cores. IP cores can be reused from previous designs or acquired from third-party vendors. However, an adequate communication architecture is required to interconnect these IP cores. Current communication architectures (busses) are unsuitable for the communication requirements of future SoCs (sharing of bandwidth, lack of scalability). Networks-on-Chip (NoCs) arise as one of the solutions to fulfill these requirements. While developing NoC-based embedded systems, NoC customization is mandatory to fulfill design constraints. This design space exploration (DSE), according to most approaches in the literature, is performed at compile time (off-line DSE), assuming the profiles of the tasks that will be executed in the embedded system are known a priori. However, embedded systems are nowadays becoming more and more similar to generic processing devices (such as palmtops), where the tasks to be executed are not completely known a priori. Due to the dynamic modification of the workload of the embedded system, the fulfillment of requirements can be accomplished by using adaptive mechanisms that implement the DSE dynamically (run-time or on-line DSE). In the scope of this work, DSE is on-line: while the system is running, adaptive mechanisms are executed to fulfill its requirements. Consequently, on-line DSE can achieve better results than off-line DSE alone, especially for embedded systems with tight constraints. It is thus possible to maximize the lifetime of the battery that feeds an embedded system, or even to decrease the deadline miss ratio in a soft real-time system, for example by relocating tasks dynamically in order to generate less communication among the processors, provided that the system runs long enough to amortize the migration overhead. In this work, a combination of allocation heuristics from the domain of Distributed Computing Systems is applied, for instance bin-packing and linear clustering algorithms. Results show that applying task reallocation using the Worst-Fit and Linear Clustering combination reduces the energy consumption and deadline miss ratio by 17% and 37%, respectively, using the copy task migration model.
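The two heuristics named in the abstract can be sketched as follows; the clustering criterion, load model, and parameter names below are simplified placeholders, not the thesis's actual algorithms:

```python
def linear_clustering(tasks, comm, max_cluster):
    """Greedily chain communicating neighbor tasks into clusters so that
    intra-cluster traffic never crosses the NoC. `comm[(a, b)]` is the
    communication volume between consecutive tasks a and b."""
    clusters, current = [], [tasks[0]]
    for prev, nxt in zip(tasks, tasks[1:]):
        if comm.get((prev, nxt), 0) > 0 and len(current) < max_cluster:
            current.append(nxt)
        else:
            clusters.append(current)
            current = [nxt]
    clusters.append(current)
    return clusters

def worst_fit(clusters, load, n_procs, capacity):
    """Worst-fit bin packing: place each cluster (heaviest first) on the
    currently least-loaded processor, balancing utilization."""
    procs = [[] for _ in range(n_procs)]
    used = [0.0] * n_procs
    for c in sorted(clusters, key=lambda c: -sum(load[t] for t in c)):
        i = min(range(n_procs), key=lambda k: used[k])
        need = sum(load[t] for t in c)
        if used[i] + need > capacity:
            raise ValueError("infeasible allocation")
        procs[i].append(c)
        used[i] += need
    return procs
```

Clustering first reduces inter-processor communication (and hence NoC energy), while worst-fit placement keeps processor loads balanced, the two effects the reported energy and deadline-miss improvements rely on.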
60

Surveillance de systèmes à composants multi-threads et distribués / monitoring multi-threaded and distributed (component-based) systems

Nazarpour, Hosein 26 June 2017 (has links)
Component-based design is the process leading from given requirements and a set of predefined components to a system meeting the requirements. Components are abstract building blocks encapsulating behavior. They can be composed in order to build composite components. Their composition should be rigorously defined so that it is possible to infer the behavior of composite components from the behavior of their constituents, as well as global properties from the properties of individual components. It is, however, generally not possible to ensure or verify the desired property using static verification techniques such as model-checking or static analysis, either because of the state-space explosion problem or because the property can only be decided with information available at runtime (e.g., from the user or the environment). Runtime Verification (RV) is an umbrella term denoting the languages, techniques, and tools for the dynamic verification of system executions against formally-specified behavioral properties. In this context, a run of the system under scrutiny is analyzed using a decision procedure: a monitor. Generally, the monitor may be generated from a user-provided specification (e.g., a temporal-logic formula, an automaton); it performs a step-by-step analysis of an execution captured as a sequence of system states, and produces a sequence of verdicts (truth-values taken from a truth-domain) indicating specification satisfaction or violation. This thesis addresses the problem of runtime monitoring of multi-threaded and distributed component-based systems with multi-party interactions (CBSs). Although neither the exact model nor the behavior of the system is known (black-box system), the semantics of such CBSs can be modeled with labeled transition systems (LTSs). Inspired by conformance testing theory, we refer to this as the monitoring hypothesis. Our monitoring hypothesis makes our approach oblivious of (i) the behavior of the CBSs, and (ii) how this behavior is obtained. We consider a general abstract semantic model of CBSs consisting of a set of intrinsically independent components whose interactions are managed by several schedulers. Using such an abstract model, one can obtain systems with different degrees of parallelism, such as sequential, multi-threaded, and distributed systems. When monitoring concurrent (multi-threaded and distributed) CBSs, the problem that arises is that a global state of the system is not available at runtime, since the schedulers execute interactions while knowing only a partial state of the system. Moreover, in distributed systems the total ordering of the execution of the interactions is not observable. A naive solution to these problems would be to plug in a monitor forcing the system to synchronize in order to obtain the sequence of global states, as well as the total ordering of the executions, at runtime. Such a solution would defeat the whole purpose of having concurrent executions and distributed systems. We define two approaches for the monitoring of multi-threaded and distributed CBSs. In both approaches, we instrument the system to retrieve the local events of the schedulers. Local events are sent to an online monitor which reconstructs on-the-fly the set of global traces that are (i) compatible with the local traces of the schedulers, and (ii) suitable for monitoring purposes, in a concurrency-preserving fashion.
