391
Enabling Digital Twins: A comparative study on messaging protocols and serialization formats for Digital Twins in IoV / Att möjliggöra digitala tvillingar. Persson Proos, Daniel, January 2019.
In this thesis, the trade-offs between latency and transmitted data volume in vehicle-to-cloud communication are studied for different choices of application-layer messaging protocols and binary serialization formats. The aim is to gain enough performance to enable delay-sensitive Intelligent Transport System (ITS) features and to reduce data usage in mobile networks. The studied protocols are the Constrained Application Protocol (CoAP), the Advanced Message Queuing Protocol (AMQP) and Message Queuing Telemetry Transport (MQTT); the serialization formats studied are Protobuf and Flatbuffers. The results show that CoAP, the only protocol based on the User Datagram Protocol (UDP), has the lowest latency and overhead but cannot guarantee reliable transfer. The best performer that can guarantee reliable transfer is MQTT. Among the serialization formats, Protobuf produces serialized messages three times smaller than Flatbuffers and also serializes them faster. Flatbuffers wins on memory use and deserialization time, which could make up for its weaker performance in the other aspects of data processing in the cloud. Finally, the implications of these results for ITS communication are discussed and suggestions are made for future research topics.
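As a rough illustration of the kind of vehicle-to-cloud pipeline such a study benchmarks, the sketch below serializes one telemetry sample with Protobuf and publishes it over MQTT with QoS 1, the reliable-transfer configuration identified above as the best guaranteed-delivery option. It is not the thesis implementation: the vehicle.proto schema, field names, broker address and topic are assumptions, and the client is the Eclipse Paho library (1.x-style API).

```python
# Illustrative sketch only (not the thesis code): publish a Protobuf-serialized
# vehicle status message over MQTT with QoS 1 (at-least-once delivery).
# Assumes a hypothetical vehicle.proto compiled with
#   protoc --python_out=. vehicle.proto
# into vehicle_pb2, defining a VehicleStatus message with the fields used below.
import time

import paho.mqtt.client as mqtt   # Eclipse Paho MQTT client (1.x-style API)
import vehicle_pb2                # generated by protoc (hypothetical schema)

BROKER_HOST = "cloud.example.com"  # placeholder broker address
TOPIC = "vehicles/v123/status"     # placeholder topic layout

def build_status() -> bytes:
    """Serialize one telemetry sample to a compact binary Protobuf payload."""
    status = vehicle_pb2.VehicleStatus()
    status.vehicle_id = "v123"
    status.timestamp_ms = int(time.time() * 1000)
    status.speed_kmh = 72.5
    status.latitude = 58.4109
    status.longitude = 15.6216
    return status.SerializeToString()

def main() -> None:
    client = mqtt.Client()
    client.connect(BROKER_HOST, 1883)
    client.loop_start()                      # handle network traffic in background
    # QoS 1 trades some latency and overhead for guaranteed delivery, which is
    # the kind of trade-off measured against CoAP's UDP-based best effort.
    info = client.publish(TOPIC, build_status(), qos=1)
    info.wait_for_publish()
    client.loop_stop()
    client.disconnect()

if __name__ == "__main__":
    main()
```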
392
Accelerator-enabled Communication Middleware for Large-scale Heterogeneous HPC Systems with Modern Interconnects. Chu, Ching-Hsiang, January 2020.
No description available.
393
Managing Service Levels in Grid Computing Systems: Quota Policy and Computational Market Approaches. Sandholm, Thomas, January 2007.
We study techniques to enforce and provision differentiated service levels in Computational Grid systems. The Grid offers simplified provisioning of peak capacity for applications with computational requirements beyond local machines and clusters, by sharing resources across organizational boundaries. Current systems have focused on access control, i.e., managing who is allowed to run applications on remote sites. Very little work has been done on providing differentiated service levels for those applications that are admitted. This leads to a number of problems when scheduling jobs in a fair and efficient way. For example, users with a large number of long-running jobs could starve out others, both intentionally and unintentionally. We investigate the requirements of High Performance Computing (HPC) applications that run in academic Grid systems, and propose two models of service-level management. Our first model is based on global real-time quota enforcement, where projects are granted resource quota, such as CPU hours, across the Grid by a centralized allocation authority. We implement the SweGrid Accounting System to enforce quota allocated by the Swedish National Allocations Committee in the SweGrid production Grid, which connects six Swedish HPC centers. A flexible authorization policy framework allows provisioning and enforcement of two different service levels across the SweGrid clusters: high-priority and low-priority jobs. As a solution offering more fine-grained control over service levels, we propose and implement a Grid Market system, using a market-based resource allocator called Tycoon. The conclusion of our research is that although the Grid accounting solution offers better service-level enforcement than state-of-the-art production Grid systems, it turned out to be complex to set the resource price and other policies manually while ensuring fairness and efficiency of the system. Our Grid Market, on the other hand, sets the price according to the dynamic demand, and it is also incentive compatible, in that the overall system state remains healthy even in the presence of strategic users.
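The sketch below is a minimal illustration of the proportional-share idea behind market-based allocators such as Tycoon: each project's share of a host is proportional to its bid, so the effective price tracks demand instead of being set manually. The function names, bid values and host capacity are illustrative assumptions, not SweGrid policy or Tycoon code.

```python
# Minimal sketch of proportional-share allocation in the spirit of market-based
# allocators such as Tycoon (illustrative only). Each project bids currency per
# unit time on a host; its share of the host's capacity is bid_i / sum(bids),
# so the implied price per resource unit rises and falls with aggregate demand.

def proportional_shares(bids: dict[str, float], capacity: float) -> dict[str, float]:
    """Return each bidder's resource share (e.g. CPU cores) on one host."""
    total = sum(bids.values())
    if total == 0:
        return {user: 0.0 for user in bids}
    return {user: capacity * bid / total for user, bid in bids.items()}

def effective_price(bids: dict[str, float], capacity: float) -> float:
    """Currency per resource unit implied by the current demand."""
    return sum(bids.values()) / capacity

if __name__ == "__main__":
    bids = {"projA": 6.0, "projB": 3.0, "projC": 1.0}    # credits per hour (made up)
    shares = proportional_shares(bids, capacity=16.0)     # a 16-core host
    print(shares)                       # {'projA': 9.6, 'projB': 4.8, 'projC': 1.6}
    print(effective_price(bids, 16.0))  # 0.625 credits per core-hour
```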
394
SECURE MIDDLEWARE FOR FEDERATED NETWORK PERFORMANCE MONITORING. Kulkarni, Shweta Samir, 06 August 2013.
No description available.
395
API Design and Middleware Optimization for Big Data and Machine Learning Applications. Guo, Jia, January 2021.
No description available.
396
Resource Warehouses: A distributed information management infrastructure. El-Khoury, Simon, January 2002.
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
397
Performance and availability trade-offs in fault-tolerant middleware. Szentiványi, Diana, January 2002.
Distributing the functionality of an application is common practice. Systems built this way also have to provide high levels of dependability, and one way of assuring availability of services is to tolerate faults in the system, thereby avoiding failures. Building distributed applications is not an easy task; providing fault tolerance is even harder. Using middleware as a mediator between hardware and operating systems on one hand and high-level applications on the other is a solution to these difficult problems: it can help application writers by automatically generating code that supports, e.g., fault tolerance mechanisms, and by offering interoperability and language independence. For over twenty years the research community has been producing results in the area of fault tolerance. However, experimental studies of different platforms are performed mostly with made-up, simple applications. Also, especially in the case of CORBA, there is no fault-tolerant middleware that fully conforms to the standard and whose trade-offs are well studied. This thesis presents a fault-tolerant CORBA middleware built and evaluated using a realistic application running on top of it. It also contains results obtained from experiments with an alternative infrastructure implementing a robust fault-tolerant algorithm using basic CORBA. In the first infrastructure, one problem is the existence of single points of failure; on the other hand, overheads and recovery times fall within acceptable ranges. When using the robust algorithm, the problem of single points of failure disappears; the drawbacks are instead memory usage, and overheads as well as recovery times that can become quite long. Report code: LiU-TEK-LIC-2002:55.
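As a rough illustration of what such middleware automates, the sketch below shows client-side failover over a primary replica and its backups, written in Python rather than CORBA/IDL for brevity. All class and method names are hypothetical; a real FT-CORBA style infrastructure would additionally replicate state and log requests, which is where the measured overheads and recovery times come from.

```python
# Illustrative sketch (not the thesis middleware): the failover pattern that a
# fault-tolerant middleware hides behind an ordinary object reference.
# Invocations go to the primary replica; if it is unreachable, the call is
# retried transparently on a backup.

class ReplicaUnavailable(Exception):
    """Raised when a replica cannot be reached or has crashed."""

class ReplicatedStub:
    """Wraps a replica group and retries failed invocations transparently."""

    def __init__(self, replicas):
        self._replicas = list(replicas)   # ordered: primary first, then backups

    def invoke(self, method: str, *args):
        last_error = None
        for replica in self._replicas:
            try:
                return getattr(replica, method)(*args)   # forward the call
            except ReplicaUnavailable as err:
                last_error = err                          # fail over to next backup
        raise RuntimeError("all replicas failed") from last_error

# Usage sketch (hypothetical objects):
#   stub = ReplicatedStub([primary, backup1, backup2])
#   balance = stub.invoke("get_balance", account_id)
```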
398
Managing Next Generation Networks (NGNs) based on the Service-Oriented Architecture (SOA). Design, development and testing of a message-based network management platform for the integration of heterogeneous management systems. Kotsopoulos, Konstantinos, January 2010.
Next Generation Networks (NGNs) aim to provide a unified network infrastructure that offers multimedia data and telecommunication services through IP convergence. NGNs utilize multiple broadband, QoS-enabled transport technologies, creating a converged packet-switched network infrastructure in which service-related functions are separated from the transport functions. This requires significant changes in how networks are managed in order to handle the complexity and heterogeneity of NGNs.
This thesis proposes a Service Oriented Architecture (SOA) based management framework that integrates heterogeneous management systems in a loosely coupled manner. The key benefit of the proposed management architecture is the reduction of complexity through service and data integration. A network management middleware layer is proposed that merges low-level management functionality with higher-level management operations to resolve the problem of heterogeneity.
A prototype was implemented using Web Services, and a testbed was developed using trouble ticket systems as the management application to demonstrate the functionality of the proposed framework. Test results show the correct functioning of the system. The thesis also concludes that the proposed framework fulfils the principles behind the SOA philosophy.
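As an illustration of the loose coupling described above, the sketch below shows a minimal adapter that turns a management alarm from one element manager into a trouble ticket by calling a ticketing service over HTTP. The endpoint URL, payload fields, and the use of plain REST/JSON instead of the SOAP-based Web Services of the actual prototype are assumptions made for brevity.

```python
# Minimal sketch of a loosely coupled management adapter (illustrative only).
# A fault alarm from one element-management system is translated into a
# trouble ticket by calling a ticketing service over HTTP.
import requests

TICKET_SERVICE_URL = "http://nms.example.org/ticketing/tickets"  # placeholder

def alarm_to_ticket(alarm: dict) -> dict:
    """Map a vendor-specific alarm record onto a generic ticket payload."""
    return {
        "summary": f"{alarm['severity'].upper()}: {alarm['probable_cause']}",
        "resource": alarm["managed_element"],
        "raised_at": alarm["event_time"],
        "source_system": alarm.get("source", "unknown-ems"),
    }

def open_ticket(alarm: dict) -> str:
    """Create a ticket for the alarm and return the ticket identifier."""
    response = requests.post(TICKET_SERVICE_URL,
                             json=alarm_to_ticket(alarm), timeout=10)
    response.raise_for_status()
    return response.json()["ticket_id"]

if __name__ == "__main__":
    example_alarm = {
        "severity": "major",
        "probable_cause": "loss of signal",
        "managed_element": "edge-router-17",
        "event_time": "2010-06-01T12:00:00Z",
    }
    print(open_ticket(example_alarm))
```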
399
Methods and tools for network reconnaissance of IoT devices. Gvozdenović, Stefan, 18 January 2024.
The Internet of Things (IoT) impacts nearly every aspect of our daily lives, including housing, transportation, healthcare, and manufacturing. IoT devices communicate through a variety of protocols, such as Bluetooth Low Energy (BLE), Zigbee, Z-Wave, and LoRa. These protocols serve essential purposes in commercial, industrial, and personal domains, encompassing wearables and intelligent buildings.
The organic and decentralized development of IoT protocols under the auspices of different organizations has resulted in a fragmented and heterogeneous IoT ecosystem. In many cases, IoT devices do not have an IP address. Furthermore, some protocols, such as LoRa and Z-Wave, are proprietary in nature and incompatible with standard protocols.
This heterogeneity and fragmentation of the IoT introduce challenges in assessing the security posture of IoT devices. To address this problem, this thesis proposes a novel methodology that transcends specific protocols and supports network and security monitoring of IoT devices at scale. This methodology leverages the capabilities of software-defined radio (SDR) technology to implement IoT protocols in software.
We first investigate the problem of IoT network reconnaissance, that is, the discovery and characterization of all the IoT devices in one’s organization. We focus on four popular protocols, namely Zigbee, BLE, Z-Wave, and LoRa. We introduce and analyze new algorithms to improve the performance and speed up the discovery of IoT devices. These algorithms leverage the ability of SDRs to transmit and receive signals across multiple channels in parallel.
We implement these algorithms in the form of an SDR tool, called IoT-Scan, the first universal IoT scanner middleware. We thoroughly evaluate the delay and energy performance of IoT-Scan. Notably, using multi-channel scanning, we demonstrate a reduction of 70% in the discovery times of Bluetooth and Zigbee devices in the 2.4 GHz band and of LoRa and Z-Wave devices in the 900 MHz band, versus single-channel scanning.
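A back-of-the-envelope model, not taken from the thesis, suggests why parallel scanning helps: if catching a periodic beacon requires dwelling on a channel for roughly one advertising interval, an SDR that watches k channels at once covers an n-channel band in ceil(n/k) dwell periods instead of n. The advertising interval and the number of parallel channels used below are illustrative assumptions.

```python
# Back-of-the-envelope model (not from the thesis) of why multi-channel SDR
# scanning shortens device discovery. Assumption: to be sure of catching a
# device that beacons every adv_interval_s seconds on a given channel, the
# scanner must dwell on that channel for at least one advertising interval.
import math

def worst_case_scan_time(n_channels: int, adv_interval_s: float,
                         parallel_channels: int = 1) -> float:
    """Worst-case time to visit every channel long enough to hear one beacon."""
    dwell = adv_interval_s                                  # seconds per channel
    passes = math.ceil(n_channels / parallel_channels)      # channel groups scanned
    return passes * dwell

if __name__ == "__main__":
    # Illustrative numbers: the 16 Zigbee channels of the 2.4 GHz band, devices
    # beaconing roughly once per second, and an SDR wide enough to watch 4
    # channels at once.
    sequential = worst_case_scan_time(16, 1.0, parallel_channels=1)   # 16.0 s
    parallel = worst_case_scan_time(16, 1.0, parallel_channels=4)     #  4.0 s
    print(f"single-channel: {sequential:.1f} s, multi-channel: {parallel:.1f} s")
```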
Second, we investigate a new type of denial-of-service attack on IoT cards, called the Truncate-after-Preamble (TaP) attack. We employ SDRs to assess the vulnerability of off-the-shelf Zigbee and Wi-Fi cards to TaP attacks. We show that all the Zigbee devices are vulnerable to TaP attacks, while the Wi-Fi devices are vulnerable to varying degrees. Remarkably, TaP attacks demand energy consumption five orders of magnitude lower than what is required by a continuous jamming mechanism. We propose several countermeasures to mitigate the attacks.
Third, we devise an innovative approach for identifying and creating unique profiles of IoT devices. This approach leverages SDRs to create malformed packets at the physical layer (e.g., truncated or overlapping packets). Experiments demonstrate the ability of this approach to perform fine-grained timing experiments (at the microsecond level), craft multi-packet transmissions/collisions, and derive device-specific reception curves.
In summary, the results of this thesis validate the feasibility of our proposed SDR-based methodology in addressing fundamental security challenges caused by the heterogeneity of the IoT. This methodology is future-proof and can accommodate new protocols and protocol upgrades.
400
Semantics-Enabled Framework for Knowledge Discovery from Earth Observation Data. Durbha, Surya Srinivas, 09 December 2006.
Earth observation data has increased significantly over the last decades, with satellites collecting and transmitting to Earth receiving stations in excess of three terabytes of data a day. This data acquisition rate is a major challenge to the existing data exploitation and dissemination approaches. The lack of content- and semantics-based interactive information searching and retrieval capabilities from the image archives is an impediment to the use of the data. The proposed framework (Intelligent Interactive Image Knowledge Retrieval, I3KR) is built around a concept-based model using domain-dependent ontologies. An unsupervised segmentation algorithm is employed to extract homogeneous regions and calculate primitive descriptors for each region. An unsupervised classification by means of a Kernel Principal Components Analysis (KPCA) method is then performed, which extracts components of features that are nonlinearly related to the input variables, followed by a Support Vector Machine (SVM) classification to generate models for the object classes. The assignment of the concepts in the ontology to the objects is achieved by a Description Logics (DL) based inference mechanism. This research also proposes new methodologies for domain-specific rapid image information mining (RIIM) modules for disaster response activities. In addition, several organizations and individuals are involved in the analysis of Earth observation data. Often the results of this analysis are presented as derivative products in various classification systems (e.g., land use/land cover, soils, hydrology, wetlands, etc.). The generated thematic data sets are highly heterogeneous in syntax, structure and semantics. The second framework developed as a part of this research (Semantics-Enabled Thematic data Integration, SETI) focuses on identifying and resolving semantic conflicts such as confounding conflicts, scaling and units conflicts, and naming conflicts between data in different classification schemes. The shared ontology approach presented in this work facilitates the reclassification of information items from one information source into the application ontology of another source. Reasoning in the system is performed through a DL reasoner that allows classification of data from one context to another by equality and subsumption. This enables the proposed system to provide enhanced knowledge discovery, query processing, and searching in a way that is not possible with keyword-based searches.
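The region-classification step described above (Kernel PCA feature extraction followed by SVM class models) can be sketched with scikit-learn as below. The kernel choices, parameters, and the random stand-in for region descriptors are assumptions; the segmentation stage and the ontology/DL inference are not reproduced here.

```python
# Illustrative sketch of the region-classification step: nonlinear feature
# extraction with Kernel PCA followed by an SVM classifier. The data are
# random placeholders for the per-region primitive descriptors, and the
# parameters are not taken from the thesis.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))      # 200 regions x 12 primitive descriptors (made up)
y = rng.integers(0, 4, size=200)    # 4 placeholder object classes

model = make_pipeline(
    StandardScaler(),
    KernelPCA(n_components=6, kernel="rbf"),   # nonlinear component extraction
    SVC(kernel="rbf", C=10.0),                 # per-class object models
)
model.fit(X[:150], y[:150])
print("held-out accuracy:", model.score(X[150:], y[150:]))
```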