61

Métodos de Exploração de Espaço de Projeto em Tempo de Execução em Sistemas Embarcados de Tempo Real Soft baseados em Redes-Em-Chip. / Methods of Run-time Design Space Exploration in NoC-based Soft Real Time Embedded Systems

Briao, Eduardo Wenzel January 2008 (has links)
The complexity of electronic systems design has been increasing due to technological evolution, which now allows the inclusion of a complete system on a single chip (SoC – System-on-Chip). In order to cope with the corresponding design complexity and reduce design costs and time-to-market, systems are built by assembling pre-designed and pre-verified functional modules, called IP (Intellectual Property) cores. IP cores can be reused from previous designs or acquired from third-party vendors. However, an adequate communication architecture is required to interconnect these IP cores.
Current communication architectures (busses) are unsuitable for the communication requirements of future SoCs (bandwidth sharing, lack of scalability). Networks-on-Chip (NoCs) arise as one of the solutions to fulfill these requirements. When developing NoC-based embedded systems, the NoC must be customized to fulfill the design constraints. This design space exploration (DSE), according to most approaches in the literature, is performed at design time (off-line DSE), assuming the profiles of the tasks that will be executed by the embedded system are known a priori. However, embedded systems are becoming more and more similar to generic processing devices (such as palmtops), where the tasks to be executed are not completely known a priori. Because the workload of the embedded system changes dynamically, requirements can instead be met by adaptive mechanisms that implement the DSE dynamically (run-time or on-line DSE). In the scope of this work, DSE is on-line: while the system is running, adaptive mechanisms are executed to fulfill its requirements. Consequently, on-line DSE can achieve better results than off-line DSE alone, especially for embedded systems with tight constraints. It is thus possible to maximize the lifetime of the battery that feeds an embedded system, or to decrease the deadline miss ratio in a soft real-time system, for example by relocating tasks at run time so as to generate less communication among the processors, provided that the system runs long enough to amortize the migration overhead. In this work, a combination of allocation heuristics from the domain of Distributed Computing Systems is applied, for instance bin-packing and linear clustering algorithms. Results show that task reallocation using the Worst-Fit and Linear Clustering combination reduces energy consumption and the deadline miss ratio by 17% and 37%, respectively, using the copy task migration model.
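As a rough illustration of the heuristic combination named in the abstract, the sketch below clusters heavily communicating tasks (linear clustering) and then places each cluster on the least-loaded processor (Worst-Fit). The task data, function names and cluster representation are illustrative assumptions; this is not code from the thesis.

```python
# Minimal sketch of a Worst-Fit + Linear Clustering allocation step, assuming
# tasks are given as (id, load) and communications as (task_a, task_b, volume).
# All names and data structures here are illustrative, not from the thesis.

def linear_clustering(tasks, comms):
    """Greedily merge the most heavily communicating task pairs into clusters."""
    cluster_of = {t: {t} for t, _ in tasks}
    for a, b, _vol in sorted(comms, key=lambda c: -c[2]):
        ca, cb = cluster_of[a], cluster_of[b]
        if ca is not cb:
            merged = ca | cb
            for t in merged:
                cluster_of[t] = merged
    # Deduplicate clusters (compare by object identity, since sets are unhashable).
    unique = []
    for c in cluster_of.values():
        if not any(c is u for u in unique):
            unique.append(c)
    return unique

def worst_fit(clusters, loads, n_procs):
    """Place each cluster on the currently least-loaded processor."""
    proc_load = [0.0] * n_procs
    placement = {}
    for cluster in sorted(clusters, key=lambda c: -sum(loads[t] for t in c)):
        target = min(range(n_procs), key=lambda p: proc_load[p])
        proc_load[target] += sum(loads[t] for t in cluster)
        for t in cluster:
            placement[t] = target
    return placement, proc_load

if __name__ == "__main__":
    tasks = [("t1", 0.3), ("t2", 0.2), ("t3", 0.4), ("t4", 0.1)]
    comms = [("t1", "t2", 50), ("t3", "t4", 5)]      # message volumes
    clusters = linear_clustering(tasks, comms)
    placement, proc_load = worst_fit(clusters, dict(tasks), n_procs=2)
    print(placement, proc_load)
```

Keeping communicating tasks on the same processor is what reduces the NoC traffic; worst-fit then balances the per-processor load so that deadlines are less likely to be missed.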
62

Vers des protocoles de tolérance aux fautes Byzantines efficaces et robustes / Efficient and Robust Byzantine Fault Tolerant Replication Protocols

Aublin, Pierre-Louis 21 January 2014 (has links)
Information systems are becoming more and more complex, and it is difficult to guarantee that they are bug-free. State machine replication is a technique for tolerating faults, regardless of their nature, whether software or hardware. This thesis studies state machine replication protocols that tolerate arbitrary, also called Byzantine, faults. These protocols face two challenges: (i) they must be efficient, i.e., their performance must be as high as possible in order to mask the cost of replication, and (ii) they must be robust, i.e., an attack should not cause an important performance degradation. In this thesis, we observe that no protocol addresses both of these challenges: current protocols are either designed to be efficient but fail to be robust, or designed to be robust but exhibit poor performance. A first contribution of this thesis is the design of a new protocol which achieves the best of both worlds.
This protocol, R-Aliph, combines an efficient but not robust protocol with a protocol designed to be robust. The result is a protocol that is both robust and efficient. We evaluate this protocol experimentally and show that its performance under attack equals the performance of the underlying robust protocol. Moreover, its performance in the fault-free case is close to the performance of the best known efficient protocol: the maximal throughput difference is less than 6%. In the second part of this thesis we analyze the state-of-the-art robust protocols and demonstrate that they are not effectively robust. Indeed, one can run an attack on each of these protocols such that the throughput loss is at least 78%. We identify the problem of these protocols and design a new, effectively robust, protocol called RBFT. The main idea of this protocol is to execute several instances of a robust protocol in parallel and closely monitor their performance in order to detect malicious behaviour. We evaluate RBFT in the fault-free case and under attack. We observe that its performance in the fault-free case is equivalent to the performance of the other so-called robust BFT protocols. Moreover, we show that the maximal throughput degradation, under the worst possible attack, is less than 3%.
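The following sketch illustrates the monitoring idea behind RBFT described above: run redundant protocol instances, compare their observed throughput, and suspect the master instance when it lags. The class names, threshold value and statistics are hypothetical; this is not the RBFT implementation.

```python
# Illustrative sketch (not the thesis code) of the RBFT monitoring idea:
# run several instances of the same protocol, compare their observed
# throughput, and flag the master instance if it lags the best backup
# by more than an acceptance threshold.

from dataclasses import dataclass

@dataclass
class InstanceStats:
    name: str
    committed_requests: int
    elapsed_s: float

    @property
    def throughput(self) -> float:
        return self.committed_requests / self.elapsed_s

def master_is_suspect(master: InstanceStats, backups: list[InstanceStats],
                      threshold: float = 0.9) -> bool:
    """Suspect the master when its throughput drops below a fraction of the
    best backup instance's throughput (the threshold value is illustrative)."""
    best_backup = max(b.throughput for b in backups)
    return master.throughput < threshold * best_backup

if __name__ == "__main__":
    master = InstanceStats("instance-0 (master)", committed_requests=4_000, elapsed_s=10)
    backups = [InstanceStats("instance-1", 9_500, 10),
               InstanceStats("instance-2", 9_700, 10)]
    if master_is_suspect(master, backups):
        print("master instance under-performing: trigger a protocol instance change")
```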
63

Paralelização da ferramenta de alinhamento de sequências MUSCLE para um ambiente distribuído / Parallelization of the MUSCLE Sequence Alignment Tool for a Distributed Environment

Marucci, Evandro Augusto [UNESP] 11 February 2009 (has links) (PDF)
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) / Due to the increasing amount of genomic data for comparison, parallel computing is becoming increasingly necessary to perform one of the most important operations in bioinformatics, multiple sequence alignment. Nowadays, many software tools are used to solve sequence alignments, and the use of parallel computing is becoming more and more widespread. However, although different parallel algorithms have been developed to support genomic research, many of them do not consider fundamental aspects of parallel computing. MUSCLE [1] is a tool that performs multiple sequence alignments with good computational performance and significantly precise biological results [2]. Although the methods it uses have different parallel versions proposed in the literature, only one parallel version of the MUSCLE tool has been proposed [3]. That version, however, was developed for shared-memory systems. The development of a parallel MUSCLE tool for distributed systems is important given the wide use of such systems in genomic research laboratories. This parallelization is the aim of this work, and it was carried out using existing parallel approaches and by creating new ones. Consequently, different parallel strategies have been proposed.
These strategies can be incorporated into other alignment tools that use, in a given stage, the same sequential approach. In each parallel method, we considered mainly efficiency, scalability and the ability to handle real biological problems. The tests show that, for each parallel step, at least one of the defined strategies meets all these criteria. In addition to the new MUSCLE parallelization, which enables it to execute on distributed systems, the results show that the defined strategies perform better than the existing ones.
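As one example of stage-level parallelism for a progressive aligner, the sketch below distributes pairwise k-mer distance computations (the first stage of tools such as MUSCLE) across worker processes. The distance measure and the partitioning are generic assumptions, not the strategies defined in the thesis.

```python
# Illustrative sketch of stage-level parallelism: distribute the pairwise
# k-mer distance computations of a progressive aligner across worker
# processes. Generic sketch only, not the thesis's parallelization strategy.

from itertools import combinations
from multiprocessing import Pool

def kmer_set(seq: str, k: int = 3) -> set[str]:
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def kmer_distance(pair: tuple[str, str]) -> float:
    """1 - Jaccard similarity of the two sequences' k-mer sets."""
    a, b = pair
    sa, sb = kmer_set(a), kmer_set(b)
    return 1.0 - len(sa & sb) / len(sa | sb)

def pairwise_distances(sequences: list[str], workers: int = 4) -> list[float]:
    pairs = list(combinations(sequences, 2))
    with Pool(workers) as pool:
        # Each pair is independent, so the distance matrix stage scales
        # naturally with the number of workers.
        return pool.map(kmer_distance, pairs)

if __name__ == "__main__":
    seqs = ["ACGTACGTGG", "ACGTTCGTGG", "TTGTACGAGG"]
    print(pairwise_distances(seqs, workers=2))
```

In a distributed setting the `Pool` would be replaced by processes on different machines (e.g., MPI ranks), but the decomposition of the work is the same.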
64

Learning with Staleness

Dai, Wei 01 March 2018 (has links)
A fundamental assumption behind most machine learning (ML) algorithms and analyses is sequential execution: any update to the ML model can be immediately applied, and the new model is always available for the next algorithmic step. This basic assumption, however, can be costly to realize when the computation is carried out across multiple machines linked by commodity networks that are usually 10^4 times slower than memory due to fundamental hardware limitations. As a result, concurrent ML computation in distributed settings often needs to handle delayed updates and perform learning in the presence of staleness. This thesis characterizes learning with staleness from three directions: (1) We extend the theoretical analyses of a number of classical ML algorithms, including stochastic gradient descent, proximal gradient descent on non-convex problems, and Frank-Wolfe algorithms, to explicitly incorporate staleness into their convergence characterizations. (2) We conduct simulation and large-scale distributed experiments to study the empirical effects of staleness on ML algorithms under non-deterministic executions. Our results reveal that staleness is a key parameter governing the convergence speed of all considered ML algorithms, with varied ramifications. (3) We design staleness-minimizing parameter server systems by optimizing synchronization methods to effectively reduce runtime staleness. The proposed optimization of a bounded consistency model utilizes additional network bandwidth to communicate updates eagerly, relieving users of the burden of tuning the staleness level. By minimizing staleness at the framework level, our system stabilizes diverging optimization paths and substantially accelerates convergence across ML algorithms without any modification to the ML programs.
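A toy, single-process simulation of bounded staleness can make the setting concrete: each gradient is computed on a parameter snapshot that may be up to s steps old. The model, learning rate and staleness bounds below are illustrative; this is not the thesis's parameter server system.

```python
# Minimal single-process simulation of SGD under bounded staleness: each
# update is computed on a parameter snapshot that may be up to `staleness`
# steps old, mimicking delayed updates in a distributed setting.
# Purely illustrative, not the parameter server system described here.

import random

def grad(w: float, x: float, y: float) -> float:
    """Gradient of the squared error 0.5*(w*x - y)^2 with respect to w."""
    return (w * x - y) * x

def stale_sgd(data, staleness: int = 3, lr: float = 0.02, steps: int = 3000) -> float:
    w = 0.0
    history = [w]                               # past parameter snapshots
    for t in range(steps):
        delay = random.randint(0, min(staleness, t))
        w_stale = history[-(delay + 1)]         # read a possibly stale snapshot
        x, y = random.choice(data)
        w -= lr * grad(w_stale, x, y)           # apply the update to the fresh model
        history.append(w)
    return w

if __name__ == "__main__":
    random.seed(0)
    data = [(x, 2.0 * x) for x in (0.5, 1.0, 1.5, 2.0)]   # true weight = 2.0
    for s in (0, 4, 16):
        print(f"staleness bound {s}: learned w = {stale_sgd(data, staleness=s):.3f}")
```

Running the loop with increasing staleness bounds shows the qualitative effect the thesis quantifies: the larger the staleness, the noisier and slower the convergence toward the true weight.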
65

Sistema multiagente para recomposição automática de subestação e redes de distribuição de energia elétrica / Multiagent System for Automatic Restoration of Substations and Distribution Networks

Joao Victor Cavalcante Barros 29 November 2013 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico / This work presents a multi-agent system for the automatic restoration of power distribution networks. The proposed system consists of three types of agents: device agents, feeder agents and substation agents. Device agents are associated with system equipment and are responsible for acquiring information about the network; feeder agents are responsible for managing the feeders; and substation agents are responsible for managing the network supply capacity. The behaviours and interactions of the agents during the restoration process are presented, aiming to restore the de-energized, non-faulted sections affected by a fault on the feeder or on the substation transformer. Eight case studies are presented, testing different situations likely to be found in distribution systems, such as restoration under protection miscoordination, partial restoration due to operational constraints, and restoration after a fault in the substation transformer. To validate the test cases, a simulator was developed that allows faults to be simulated at different locations of the power system under study. Through the simulator's visual environment, which provides the single-line diagram of the system with the equipment states, together with the JADE message-capture tool, it is possible to simulate several scenarios and observe the interaction of the agents in search of system restoration. The proposed system is able to locate the fault, isolate the faulted section and restore the system, considering operational constraints and protection miscoordination.
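The toy sketch below mirrors the three agent roles described in the abstract (device, feeder, substation) cooperating in one restoration step: locate the fault, isolate the faulted section, and restore downstream load only if the backup supply has spare capacity. The classes and messages are hypothetical stand-ins, not the JADE agents of the thesis.

```python
# Toy sketch of the three agent roles described above (device, feeder,
# substation) cooperating in a restoration step. Classes and messages are
# hypothetical stand-ins for the JADE agents of the thesis.

class DeviceAgent:
    def __init__(self, switch_id, fault_seen):
        self.switch_id, self.fault_seen = switch_id, fault_seen

class SubstationAgent:
    def __init__(self, spare_capacity_mva):
        self.spare_capacity_mva = spare_capacity_mva

    def can_pick_up(self, load_mva):
        return load_mva <= self.spare_capacity_mva

class FeederAgent:
    def __init__(self, devices, substation):
        self.devices, self.substation = devices, substation

    def restore(self, downstream_load_mva):
        # 1. Locate the fault: the faulted section lies just past the last
        #    device that reported fault current.
        faulted = [d.switch_id for d in self.devices if d.fault_seen]
        if not faulted:
            return "no fault detected"
        # 2. Isolate it by opening the adjacent switches (simulated).
        isolated = faulted[-1]
        # 3. Re-energize healthy downstream sections only if the backup
        #    substation has enough spare capacity.
        if self.substation.can_pick_up(downstream_load_mva):
            return f"isolated {isolated}; downstream sections re-energized"
        return f"isolated {isolated}; partial restoration (capacity limit)"

if __name__ == "__main__":
    devices = [DeviceAgent("SW1", True), DeviceAgent("SW2", True), DeviceAgent("SW3", False)]
    print(FeederAgent(devices, SubstationAgent(spare_capacity_mva=4.0)).restore(3.2))
```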
67

[en] OPTIMAL LOCATION OF SENSORS AND CONTROLLERS IN DISTRIBUTED SYSTEMS / [pt] LOCALIZAÇÃO ÓTIMA DE SENSORES E CONTROLADORES EM SISTEMAS DISTRIBUÍDOS

HELIOS MALEBRANCHE OLBRISCH FRERES FILHO 24 January 2008 (has links)
[en] In this thesis the optimal sensor location (OSL) and optimal controller location (OCL) problems are considered for dynamical distributed parameter systems (DPS) governed by partial differential equations. The problems of system identification, state estimation and optimal control for infinite-dimensional systems are discussed, followed by an illustrative example of optimal sensor and controller location applied to an optimal control problem. The original contribution of this work is a survey on OSL and OCL for DPS. A general review of the several methods discussed in the current literature is presented. These methods are then classified according to their respective particularities and grouped in accordance with the techniques used in the OSL and OCL problems. A critical evaluation of the field is developed in the light of the classification proposed here.
68

Theoretical and Quantitative Comparison of SensibleThings and GSN

Wang, Kaidi January 2016 (has links)
This project aims to compare two existing Internet of Things (IoT) platforms, SensibleThings (ST) and Global Sensors Networks (GSN). The project can serve as further work on the investigation of these platforms; comparing them and learning from each other is intended to contribute to the development of future platforms. The detailed comparison mainly concerns platform features, communication and data present-frequency performance under stress, and platform node scalability on a single limited device. The study is conducted by developing applications on each platform and measuring performance under the same conditions in a household network environment. All these aspects have been measured and conclusions drawn. Qualitatively, GSN performs better with respect to swift node development and deployment, data management, node subscription and the connection retry mechanism, whereas ST is superior with respect to network package encryption, platform reliability, session initialization latency, and degree of development freedom. In the quantitative comparison, nodes on GSN have better resistance to data push pressure, while ST nodes work with lower session latency. In terms of data present-frequency, an ST node can reach a higher updating frequency than a GSN node. Regarding node scalability on a single limited device, ST nodes have, on average, lower latency than GSN nodes when the number of nodes on the device is less than 15; but, due to GSN's sharing mechanism, its nodes scale better on a single limited device when the platform nodes perform similar jobs.
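A minimal harness of the kind used for such quantitative comparisons measures per-request latency against a platform node and reports mean, 95th-percentile and maximum values. The `query_node` call below is a hypothetical stand-in for a SensibleThings or GSN client request, not a real API of either platform.

```python
# Illustrative benchmark harness: measure per-request session latency against
# a platform node. `query_node` is a placeholder, not a real SensibleThings
# or GSN client call.

import statistics
import time

def query_node(payload: bytes) -> bytes:
    """Placeholder for a platform request (replace with a real client call)."""
    time.sleep(0.002)            # simulate a ~2 ms round trip
    return payload

def measure_latency(requests: int = 200) -> dict:
    samples = []
    for i in range(requests):
        start = time.perf_counter()
        query_node(f"reading-{i}".encode())
        samples.append((time.perf_counter() - start) * 1000.0)   # milliseconds
    return {
        "mean_ms": statistics.mean(samples),
        "p95_ms": statistics.quantiles(samples, n=20)[18],       # 95th percentile
        "max_ms": max(samples),
    }

if __name__ == "__main__":
    print(measure_latency())
```

Repeating the measurement while increasing the request rate or the number of nodes on the device gives the stress and scalability curves the comparison relies on.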
69

Simulation of a Distributed Implementation of an Adaptive Cruise Controller

Riis, Pontus January 2007 (has links)
Much of the functionality of today's vehicles runs as software on embedded computer systems, including, for example, automatic climate control and engine control. As the processors are necessarily located in different physical locations inside the vehicle, wires must be drawn between processors that need to communicate. Therefore, it is typical to have one or several buses connecting the processors in an embedded computer network, thus creating a distributed system. As some parts of the system in the car have real-time properties, it is necessary to validate that the real-time properties are upheld in the distributed system. This thesis presents the design and implementation of an adaptive cruise controller (ACC), which is a cruise controller that also keeps a minimum distance to the closest vehicle in front. Further, the performance of the ACC has been evaluated using an existing system-level simulator for distributed real-time systems together with metrics for Quality-of-Control (QoC). The ACC has been simulated under different scenarios. The scenarios include outside conditions, for example the slope of the road, the behaviour of the vehicle in front, and the desired velocity, as well as internal conditions such as adding different amounts of extra load on the processors and the bus. The results show that the functionality of the ACC starts deteriorating when the extra load on the nodes reaches high levels. When the extra load reaches very high levels, the ACC stops functioning completely. The results also show that the extra load on the bus has very little effect on the performance of the ACC.
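A minimal sketch of a distance-keeping control law of the kind simulated here: track the set speed, but switch to a spacing controller when the gap to the lead vehicle falls below the desired headway. All gains and parameters are illustrative assumptions, not values from the thesis.

```python
# Minimal sketch of an adaptive cruise control law: track the set speed,
# but switch to a spacing controller when the gap to the lead vehicle falls
# below a desired time headway. Gains and parameters are illustrative only.

def acc_acceleration(v_own, v_set, gap, v_lead,
                     headway_s=1.8, min_gap=5.0,
                     k_speed=0.4, k_gap=0.25, k_rel=0.6):
    """Return the commanded acceleration in m/s^2."""
    desired_gap = min_gap + headway_s * v_own
    if gap >= desired_gap:
        # Plain cruise control: close the speed error.
        return k_speed * (v_set - v_own)
    # Spacing mode: combine the gap error and the relative speed.
    return k_gap * (gap - desired_gap) + k_rel * (v_lead - v_own)

if __name__ == "__main__":
    # Lead vehicle 30 m ahead driving 22 m/s; own car at 25 m/s, set speed 30 m/s.
    v_own, v_lead, gap, dt = 25.0, 22.0, 30.0, 0.1
    for _ in range(50):
        a = max(-3.0, min(1.5, acc_acceleration(v_own, 30.0, gap, v_lead)))
        v_own += a * dt
        gap += (v_lead - v_own) * dt
    print(f"speed: {v_own:.1f} m/s, gap: {gap:.1f} m")
```

In the distributed simulation, sensing, control-law evaluation and actuation run on different nodes connected by the bus, which is why extra processor load can delay the control updates and degrade the QoC metrics.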
70

Distributed Electronic Health Record System based on Middleware

Xin, Zhang January 2013 (has links)
With the fast development of information technology, traditional healthcare is evolving towards a more digital and electronic stage. An Electronic Health Record (EHR) contains a resident's basic information and healthcare-related information conforming to a standard. It can not only provide useful information to medical workers, but also exchange resources with other information systems. With the growing complexity of electronic health record data sources, however, it becomes a big challenge to set up a structure that allows different types of data to be shared and exchanged by multi-platform applications. It is even more important to find a method that supports a large number of users from different application platforms sharing and exchanging data at the same time. In this thesis, we propose a distributed electronic health record system based on middleware to address the problem. Both permanent and real-time data pass through the middleware provided by the system and are transformed into a standard format for storage. The multi-threaded and distributed server-group design makes the system more flexible and scalable and able to serve users concurrently. The system defines a standard data format for data transfer and storage; all raw data collected from different kinds of sensor systems are formatted with the provided application programming interface (API) or software development kit (SDK) before being uploaded to the system. Encryption methods are also implemented to ensure data security and privacy protection.
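The sketch below illustrates the normalization step such a middleware performs: heterogeneous sensor records are mapped onto one standard record format before storage. The field names and source types are illustrative assumptions, not the thesis's actual schema or API.

```python
# Small sketch of the middleware's normalization step: heterogeneous sensor
# records are mapped to one standard format before storage. Field names and
# source types are illustrative assumptions, not the thesis's data schema.

import json
from datetime import datetime, timezone

STANDARD_FIELDS = ("patient_id", "metric", "value", "unit", "timestamp")

def normalize(raw: dict, source: str) -> dict:
    """Map a source-specific record onto the standard EHR record format."""
    if source == "pulse_sensor":
        record = {"patient_id": raw["pid"], "metric": "heart_rate",
                  "value": float(raw["bpm"]), "unit": "1/min"}
    elif source == "thermometer":
        record = {"patient_id": raw["patient"], "metric": "body_temperature",
                  "value": float(raw["temp_c"]), "unit": "degC"}
    else:
        raise ValueError(f"unknown source: {source}")
    record["timestamp"] = raw.get("ts") or datetime.now(timezone.utc).isoformat()
    assert set(record) == set(STANDARD_FIELDS)
    return record

if __name__ == "__main__":
    samples = [({"pid": "p-17", "bpm": 72, "ts": "2013-05-02T10:00:00Z"}, "pulse_sensor"),
               ({"patient": "p-17", "temp_c": 36.8}, "thermometer")]
    for raw, source in samples:
        print(json.dumps(normalize(raw, source)))
```

Centralizing this conversion in the middleware is what lets clients on different platforms share and exchange records without knowing each sensor system's native format.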
