31

Distributed Reconfigurable Simulation for Communication Systems

Kim, Song Hun 27 November 2002 (has links)
The simulation of physical-layer communication systems often requires long execution times. This is due to the nature of Monte Carlo simulation: to obtain a valid result by producing enough errors, the number of bits or symbols being simulated must significantly exceed the inverse of the bit error rate of interest. This often results in hours or even days of execution on a personal computer or workstation. Reconfigurable devices can perform certain functions faster than general-purpose processors. In addition, they are more flexible than Application Specific Integrated Circuit (ASIC) devices. This fast yet flexible property of reconfigurable devices can be used for the simulation of communication systems. However, although reconfigurable devices are more flexible than ASIC devices, they are often not compatible with each other. Programs are usually written in hardware description languages such as the Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL). A program written for one device often cannot be used for another because these devices all have different architectures, and programs are architecture-specific. Distributed computing, which is not a new concept, refers to interconnecting a number of computing elements, often heterogeneous, to perform a given task. By applying distributed computing, reconfigurable devices and digital signal processors can be connected to form a distributed reconfigurable simulator. In this work, it is shown that using reconfigurable devices can greatly increase the speed of simulation. A simple physical-layer communication system model was created on a WildForce board, a reconfigurable device, and its performance compared to a traditional software simulation of the same system. Using the reconfigurable device, performance was increased by approximately one hundred times. This demonstrates the feasibility of using reconfigurable devices for the simulation of physical-layer communication systems. A middleware architecture for distributed reconfigurable simulation is also proposed and implemented. Using the middleware, reconfigurable devices and various computing elements can be integrated. The proposed middleware has several components: the master works as the server for the system; an object is any device that has computing capability; a resource is an algorithm or function implemented for a certain object; and an object and its resources are connected to the system through an agent. This middleware system is tested with three different objects and six resources, and its performance is analyzed. The results show that it is possible to interconnect various objects to perform a distributed simulation using reconfigurable devices. Possible future research to enhance the architecture is also discussed. / Ph. D.
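The execution-time argument is easy to reproduce in software. The following minimal sketch (our illustration, not the dissertation's WildForce implementation; all parameter values are assumptions) estimates the bit error rate of BPSK over an AWGN channel by Monte Carlo, running until a fixed number of errors has been observed, so the number of simulated bits grows roughly as the inverse of the BER:

```python
import numpy as np

def simulate_ber(ebno_db, target_errors=100, block=1_000_000):
    """Estimate BPSK bit error rate over AWGN by Monte Carlo.

    Runs until `target_errors` bit errors are observed, so the
    number of simulated bits grows roughly as target_errors / BER.
    """
    ebno = 10 ** (ebno_db / 10)
    sigma = np.sqrt(1 / (2 * ebno))          # noise std dev for unit-energy symbols
    errors, bits = 0, 0
    while errors < target_errors:
        tx = np.random.randint(0, 2, block)  # random source bits
        symbols = 1 - 2 * tx                 # BPSK mapping: 0 -> +1, 1 -> -1
        rx = symbols + sigma * np.random.randn(block)
        errors += np.count_nonzero((rx < 0) != (tx == 1))
        bits += block
    return errors / bits, bits

ber, n = simulate_ber(8.0)
print(f"BER ~ {ber:.2e} after {n:,} bits")
```

At a BER of 10^-6, observing 100 errors requires on the order of 10^8 simulated bits, which is exactly the workload the reconfigurable hardware accelerates.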
32

Implementing a RESTful Software Architecture to Coordinate Heterogeneous Networked Embedded Devices

Davis, Jason Tyler 27 October 2021 (has links)
Modern embedded systems, from autonomous vehicle-to-vehicle communication to smart cities and military Joint All-Domain Operations, feature increasingly heterogeneous distributed components. As a result, existing communication methods, tightly coupled to specific networking layers and individual applications, can no longer balance the flexibility of modern data distribution with the traditional constraints of embedded systems. To address this problem, this thesis presents a domain-specific language designed around the Representational State Transfer (REST) architecture, most famously used on the web. Our language, called the Communication Language for Embedded Systems (CLES), supports both traditional point-to-point data communication and the management and allocation of decentralized distributed processing tasks. To meet the traditional constraints of embedded execution, CLES' novel runtime allocates processing tasks across a heterogeneous network of embedded devices, overcoming limitations of other modern distribution methods: centralized task management and limited operating system integration. CLES was evaluated with performance micro-benchmarks, an implementation of distributed stochastic gradient descent, and its application to the design of versatile stateless services for vehicle-to-vehicle communication and military Joint All-Domain Command and Control (JDAC). From this evaluation, it was determined that CLES meets the data distribution needs of realistic cyber-physical embedded systems. / Master of Science / As computers become smaller, cheaper, more powerful, and more energy efficient, they are increasingly used in cyber-physical systems such as planes, trains, and automobiles, as well as in large-scale networks such as power plants and smart cities. The field of embedded computing faces new challenges in the communication and coordination of large numbers of different devices. Among the software challenges in embedded device communication are: flexibility, both in the ability to run on different devices and to use different communication links such as cellular, Wi-Fi, or Bluetooth; the performance constraints of low-power embedded devices; latency and reliability sufficient to ensure safe operation; and the schedule and cost of development. To address these challenges, this thesis presents a new programming language designed around the Representational State Transfer (REST) architecture, most famously used in HTTP to drive the web. Our language, called the Communication Language for Embedded Systems (CLES), supports traditional point-to-point data communication designed to prioritize latency and reliability, as well as a standalone runtime that can run on an embedded device to accept requests for processing tasks. CLES and its supporting Software Development Kit (SDK) are designed to allow quick and cost-effective development of flexible, low-latency device-to-device communication and large-scale distributed processing on embedded devices.
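The abstract does not show CLES syntax, but the REST style it builds on is easy to illustrate. The sketch below is a plain-Python stand-in (the resource path and payload are hypothetical, and a real embedded runtime would not use http.server): each resource is addressed by a URI and manipulated with stateless GET/PUT requests, which is the interaction model a CLES-like language maps onto device-to-device communication:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

# In-memory resource store; a toy illustration of REST resources,
# not CLES' embedded runtime.
resources = {"/sensors/gps": {"lat": 0.0, "lon": 0.0}}

class RestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = resources.get(self.path)
        if body is None:
            self.send_error(404)
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

    def do_PUT(self):
        # Stateless update: the request carries everything needed.
        length = int(self.headers.get("Content-Length", 0))
        resources[self.path] = json.loads(self.rfile.read(length))
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), RestHandler).serve_forever()  # blocks
```

Statelessness is the design choice that matters here: because every request is self-contained, any replica of a service can answer it, which is what makes the vehicle-to-vehicle services in the evaluation "versatile".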
33

Resource Allocation for Wireless Distributed Computing Networks

Chen, Xuetao 11 May 2012 (has links)
Wireless distributed computing networks (WDCNs) will become the next frontier of the wireless industry as the performance of wireless platforms increases every year and wireless industries look for "killer" applications for the increased channel capacity. However, WDCNs pose several unique problems compared with the well-investigated methods for wireless sensor networks and wired distributed computing. For example, it is difficult for WDCNs to be power/energy efficient given the uncertainty and heterogeneity of the wireless environment. In addition, the service model has to take into account the interference-limited nature of wireless channels to reduce service delay. Our research proposes a two-phase model for WDCNs, comprising a service provision phase and a service access phase, according to different traffic patterns and performance requirements.
For the service provision phase, we investigate the impact of communication channel conditions on the average execution time of computing tasks within WDCNs. We then discuss how to increase robustness and power efficiency for WDCNs subject to channel variance and spatial heterogeneity. A resource allocation solution for computation-oriented WDCNs is then introduced in detail, which mitigates the effects of channel variations with a stochastic programming solution. Stochastic geometry and queueing theory are combined to analyze the average service response time and to design optimal access strategies during the service access phase. This access model provides a framework to analyze service access performance and to evaluate whether channel heterogeneity should be considered. Based on this analysis, optimal strategies to access the service nodes can be determined in order to reduce the service response time. In addition, network initialization and synchronization are investigated in order to build a multi-channel WDCN in dynamic spectrum access (DSA) environments. Further, an efficient primary user detection method is proposed to reduce the channel vacation latency for WDCNs in DSA environments. Finally, this dissertation presents the complete design and implementation of a WDCN on the COgnitive Radio Network (CORNET). Based on SDR technologies, software dedicated to WDCNs is designed and implemented across the PHY layer, MAC layer, and application layer. System experiments are carried out to demonstrate the performance issues and solutions presented in this dissertation. / Ph. D.
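To make the channel dependence concrete, the following sketch (our illustration under assumed models: Rayleigh fading, Shannon-capacity transfer rates, and invented node parameters, not the dissertation's stochastic program) estimates the expected response time of dispatching a task to a service node by Monte Carlo, then uses it to choose among heterogeneous nodes:

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_response_time(mean_snr, bits=1e6, bandwidth=1e6,
                           compute_s=0.05, samples=10_000):
    """Monte Carlo estimate of task response time over a fading channel.

    Assumes Rayleigh fading (exponentially distributed SNR) and
    Shannon-rate transfers; a stand-in for the stochastic models
    the dissertation combines with queueing analysis.
    """
    snr = rng.exponential(mean_snr, samples)   # per-task channel draw
    rate = bandwidth * np.log2(1 + snr)        # achievable bits/s
    return np.mean(bits / rate + compute_s)    # transfer + compute time

# Pick the service node with the lowest expected response time.
nodes = {"A": 4.0, "B": 10.0}                  # hypothetical mean SNR per node
best = min(nodes, key=lambda n: expected_response_time(nodes[n]))
print("dispatch task to node", best)
```

Averaging over channel draws rather than using a single nominal SNR is the point: with high channel variance, the node with the better mean SNR is not always the one with the better expected response time once the convexity of 1/rate is accounted for.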
34

A Distributed Approach to EpiFast using Apache Spark

Kannan, Vijayasarathy 04 August 2015 (has links)
EpiFast is a parallel algorithm for large-scale epidemic simulations, based on an interpretation of stochastic disease propagation in a contact network. The original EpiFast implementation is based on a master-slave computation model with a focus on distributed memory using the Message Passing Interface (MPI). However, it suffers from a few shortcomings with respect to the scale of the networks being studied. This thesis addresses these shortcomings and provides two different implementations: Spark-EpiFast, based on the Apache Spark big data processing engine, and Charm-EpiFast, based on the Charm++ parallel programming framework. The study focuses on exploiting features of both systems that we believe could benefit performance and scalability. We present models of EpiFast specific to each system and relate algorithm specifics to several optimization techniques. We also provide a detailed analysis of these optimizations through a range of experiments covering the scale of the networks and the environment settings we used. Our analysis shows that the Spark-based version is more efficient than the Charm++ and MPI-based counterparts. To the best of our knowledge, ours is among the first efforts to use Apache Spark for epidemic simulations. We believe that our proposed model could act as a reference for similar large-scale epidemiological simulations exploring non-MPI or MapReduce-like approaches. / Master of Science
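The core of such a simulation maps naturally onto Spark's data-parallel primitives. The sketch below (a toy single-step illustration with made-up edges and a made-up transmission probability, not the Spark-EpiFast model) broadcasts the current infectious set and filters the contact-network edge list to find the next round of infections:

```python
from pyspark.sql import SparkSession
import random

spark = SparkSession.builder.appName("epi-sketch").getOrCreate()
sc = spark.sparkContext

# Contact network as (person, neighbor) pairs; toy data stands in
# for the large social contact networks EpiFast targets.
edges = sc.parallelize([(1, 2), (2, 3), (3, 4), (1, 4)])
infected = {1}                      # currently infectious nodes
p_transmit = 0.3                    # per-contact transmission probability

b_infected = sc.broadcast(infected)            # ship the set to workers once
new_cases = (edges
             .filter(lambda e: e[0] in b_infected.value)   # edges from infectious
             .filter(lambda e: random.random() < p_transmit)
             .map(lambda e: e[1])                          # exposed neighbor
             .distinct()
             .collect())
print("newly infected:", new_cases)
```

Iterating this step per simulated day, with the infectious set updated and re-broadcast each round, gives the MapReduce-like structure the abstract contrasts with the MPI master-slave model.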
35

A CONCEPTUAL FRAMEWORK FOR DISTRIBUTED SOFTWARE QUALITY NETWORK

ANUSHKA HARSHAD PATIL (7036883) 12 October 2021 (has links)
The advancement in technology has revolutionized the role of software in recent years. Software usage is found in practically all areas of industry and has become a prime factor in the overall working of companies. Simultaneously with the increase in the utilization of software, software quality assurance parameters have become more crucial and complex. Currently the quality measurement approaches, standards, and models applied in the software industry are extremely divergent. Often the right approach turns out to be a combination of different concepts and techniques from different software assurance approaches [1]. Thus, a platform that provides a single workspace for incorporating multiple software quality assurance approaches will ease the overall software quality process. In this thesis we propose a theoretical framework for distributed software quality assurance that can continuously monitor a source code repository and create a snapshot of the system for a given commit (both past and present). Each snapshot can be used to create a multi-granular blockchain of the system and its metrics (i.e., metadata), which we believe will let tool developers and vendors participate continuously in assuring the quality and security of systems, remain accessible when required, and be rewarded for their services.
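The chain-of-snapshots idea reduces to linking each commit's metric snapshot to the hash of its predecessor. Below is a minimal sketch (the metric names and commit ids are hypothetical; the thesis's multi-granular chain would carry far richer snapshot metadata):

```python
import hashlib, json, time

def make_block(prev_hash, commit_id, metrics):
    """Append-only block linking one commit's quality metrics to the chain.

    `metrics` is hypothetical metadata (e.g. coverage, lint score);
    tampering with any earlier block changes every later hash.
    """
    block = {
        "prev": prev_hash,
        "commit": commit_id,
        "metrics": metrics,
        "ts": time.time(),
    }
    digest = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return digest, block

h0 = "0" * 64                                   # genesis hash
h1, b1 = make_block(h0, "a1b2c3", {"coverage": 0.81, "lint": 9.2})
h2, b2 = make_block(h1, "d4e5f6", {"coverage": 0.84, "lint": 9.4})
print(h2[:16], "links back to", b2["prev"][:16])
```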
36

Multiple Learning for Generalized Linear Models in Big Data

Xiang Liu (11819735) 19 December 2021 (has links)
Big data is an enabling technology in digital transformation. It perfectly complements ordinary linear models and generalized linear models, as training well-performing ordinary and generalized linear models requires huge amounts of data. With the help of big data, ordinary and generalized linear models can be well trained and thus offer better services to human beings. However, there are still many challenges in training ordinary linear models and generalized linear models on big data. One of the most prominent is the computational challenge: the memory inflation and training inefficiency issues that occur when processing data and training models. Hundreds of algorithms have been proposed to alleviate or overcome the memory inflation issue; however, the solutions they obtain are only locally optimal. Additionally, most of the proposed algorithms require loading the dataset into RAM many times when updating the model parameters. If multiple model hyper-parameters need to be computed and compared, e.g., in ridge regression, parallel computing techniques are applied in practice. Thus, multiple learning with sufficient statistics arrays is proposed to tackle the memory inflation and training inefficiency issues.
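For ordinary least squares and ridge regression the sufficient-statistics idea is concrete: one streaming pass accumulates X'X and X'y, after which any number of hyper-parameter settings can be solved without touching the data again. A minimal sketch under that assumption (chunk sizes and lambda values are invented; the thesis targets generalized linear models more broadly):

```python
import numpy as np

def accumulate(chunks, d):
    """One pass over the data, keeping only the sufficient statistics."""
    xtx = np.zeros((d, d))
    xty = np.zeros(d)
    for X, y in chunks:                 # each chunk could be streamed from disk
        xtx += X.T @ X
        xty += X.T @ y
    return xtx, xty

def ridge_from_stats(xtx, xty, lam):
    """Solve (X'X + lam*I) beta = X'y; no second pass over the data."""
    d = xtx.shape[0]
    return np.linalg.solve(xtx + lam * np.eye(d), xty)

# Toy stream: three chunks of a d=5 regression problem.
rng = np.random.default_rng(1)
chunks = [(rng.normal(size=(1000, 5)), rng.normal(size=1000))
          for _ in range(3)]
xtx, xty = accumulate(chunks, d=5)
for lam in (0.1, 1.0, 10.0):            # many hyper-parameters, one data pass
    print(lam, ridge_from_stats(xtx, xty, lam)[:2])
```

The d-by-d statistics array stays small regardless of the number of rows, which is what addresses memory inflation, and comparing hyper-parameters no longer requires reloading the dataset.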
37

HopsWorks : A project-based access control model for Hadoop

Moré, Andre, Gebremeskel, Ermias January 2015 (has links)
The growth in global data gathering capacity is producing a vast amount of data, which is growing at an ever-faster rate. Properly analyzed, this data can represent a great opportunity for businesses, but processing it is a resource-intensive task. Sharing can increase efficiency through reusability, but legal and ethical questions arise when data is shared. The purpose of this thesis is to gain an in-depth understanding of the different access control methods that can be used to facilitate sharing, and to choose one to implement on a platform that lets users analyze, share, and collaborate on datasets. The resulting platform uses project-based access control at the API level and fine-grained role-based access control on the file system to give the data owner full control over the shared data.
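The two-layer scheme is easy to sketch: project membership gates the API, and the role held within a project determines file-system rights. A minimal illustration (project names, roles, and rights are hypothetical, not HopsWorks' actual schema):

```python
# Hypothetical project/role tables illustrating the two access layers:
# project membership gates the API, roles gate file-system rights.
projects = {"genomics": {"alice": "data_owner", "bob": "data_scientist"}}
role_rights = {"data_owner": {"read", "write", "share"},
               "data_scientist": {"read"}}

def can_access(user, project, right):
    """Allow `right` only if the user holds a project role granting it."""
    role = projects.get(project, {}).get(user)
    return role is not None and right in role_rights.get(role, set())

assert can_access("alice", "genomics", "share")      # owner keeps control
assert not can_access("bob", "genomics", "write")    # collaborator: read-only
```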
38

Using Non-Intrusive Instrumentation to Analyze any Distributed Middleware in Real-Time

Nyalia James-Korsuk Lui (10686993) 10 May 2021 (has links)
Dynamic Binary Instrumentation (DBI) is one way to monitor a distributed system in real time without modifying source code. Previous work has shown it is possible to instrument distributed systems built on standards-based distributed middleware. Existing work, however, applies only to a single middleware, such as CORBA.

This thesis therefore presents a tool named the Standards-based Distributed Middleware Monitor (SDMM), which generalizes across two modern standards-based distributed middleware technologies: the Data Distribution Service (DDS) and gRPC. SDMM uses DBI to extract values and other data relevant to monitoring a distributed system in real time. Dynamic instrumentation allows SDMM to capture information without a priori knowledge of the distributed system under instrumentation. We applied SDMM to systems created with two DDS vendors, RTI Connext DDS and OpenDDS, as well as to gRPC, a complete remote procedure call framework. Our results show that the data collection process contributes less than 2% run-time overhead in all test cases.
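SDMM instruments compiled binaries, but the interception idea can be shown in miniature: wrap a middleware call at runtime so every invocation is recorded without editing the application's source. The Python monkey-patching sketch below is only an analogy for DBI (the Publisher class and topic name are hypothetical stand-ins for a DDS writer or gRPC stub):

```python
import functools, time

class Publisher:
    """Stand-in for a middleware publish call (e.g. a DDS data writer)."""
    def publish(self, topic, payload):
        pass  # a real middleware would put the payload on the wire

def monitor(cls, method_name, log):
    """Wrap a method at runtime to record calls without editing source."""
    original = getattr(cls, method_name)

    @functools.wraps(original)
    def wrapped(self, *args, **kwargs):
        log.append((time.time(), method_name, args))   # capture call data
        return original(self, *args, **kwargs)         # then run the real call

    setattr(cls, method_name, wrapped)

log = []
monitor(Publisher, "publish", log)
Publisher().publish("sensor/temp", b"\x01\x02")
print(log)
```

The low overhead reported above comes from the same shape of design: the interception path does little more than copy call metadata before handing control back to the original function.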
39

THE MODULAR RANGE INTERFACE (MODRI) DATA ACQUISITION CAPABILITIES AND STRATEGIES

Marler, Thomas M. October 2004 (has links)
International Telemetering Conference Proceedings / October 18-21, 2004 / Town & Country Resort, San Diego, California / The Modular Range Interface (ModRI) is a reliable networked data acquisition system used to acquire and disseminate dissimilar data. ModRI's purpose is to connect time-space-position information (TSPI) systems to a central computer network. The modular hardware design consists of a single-board computer (SBC), commercial off-the-shelf (COTS) network interfaces, and other COTS interfaces in a VME form factor. The modular software design uses C++ and object-oriented (OO) patterns running under a real-time operating system (RTOS). Current capabilities of ModRI include acquisition of Ethernet traffic, PCM data, RS-422/232 serial data, and IRIG-B time. Future strategies might include stand-alone data acquisition, acquisition of digital video, and migration to other architectures and operating systems.
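The modular design described above is, at its core, a common acquisition interface with pluggable implementations per data source. A language-neutral sketch of the pattern in Python (the paper's implementation is C++ under an RTOS; the class names and frame contents here are invented):

```python
from abc import ABC, abstractmethod

class AcquisitionInterface(ABC):
    """Common contract for ModRI-style pluggable interface modules."""
    @abstractmethod
    def read_frame(self) -> bytes: ...

class SerialInterface(AcquisitionInterface):
    def read_frame(self) -> bytes:
        return b"serial-frame"     # stand-in for RS-422/232 input

class EthernetInterface(AcquisitionInterface):
    def read_frame(self) -> bytes:
        return b"udp-frame"        # stand-in for a network capture

def acquire(interfaces):
    """Poll every installed module and forward frames onward."""
    return [i.read_frame() for i in interfaces]

print(acquire([SerialInterface(), EthernetInterface()]))
```

New data sources (digital video, for instance) then require only a new module behind the same interface, which is what makes the listed future strategies incremental rather than redesigns.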
40

Real-time In-situ Seismic Tomography in Sensor Network

Shi, Lei 09 August 2016 (has links)
Seismic tomography is a technique for illuminating the physical dynamics of the Earth using seismic waves generated by earthquakes or explosions. In both industry and academia, seismic exploration does not yet have the capability to image seismic tomography in real time and at high resolution, for two reasons. First, at present raw seismic data are typically recorded locally on sensor nodes and then manually collected at central observatories for post-processing, a process that may take months to complete. Second, high-resolution tomography requires a large and dense sensor network, and real-time data retrieval from a large network of wireless seismic nodes to a central server is virtually impossible due to the sheer volume of data and resource limitations. This limits our ability to understand earthquake-zone or volcano dynamics. To obtain seismic tomography in real time and at high resolution, a new sensor network system design for raw seismic data processing and distributed tomography computation is needed. Based on these requirements, three research aspects are addressed in this work. First, a distributed multi-resolution evolving tomography computation algorithm is proposed to compute tomography within the network, avoiding costly data collection and centralized computation. Second, InsightTomo, an end-to-end sensor network emulation platform, is designed to emulate the entire process from data recording to tomography image delivery. Third, a sensor network testbed is presented to verify the related methods and designs in the real world. The design of the platform consists of hardware, sensing, and data processing components.
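The in-network computation can be pictured as a distributed iterative solver for the travel-time system A m = t, where each node keeps only its own rays (rows of A) and contributes a local correction that a coordinator averages. A toy sketch under those assumptions (dimensions, normalization, and iteration count are invented; the dissertation's multi-resolution evolving algorithm is considerably more sophisticated):

```python
import numpy as np

rng = np.random.default_rng(2)

def local_update(A, t, m):
    """A node's correction from its own rays only (rows of A m = t)."""
    return A.T @ (t - A @ m) / (A ** 2).sum()   # normalized residual step

# Toy problem: 3 nodes, each observing 4 ray paths through a 6-cell model.
m_true = rng.normal(size=6)
nodes = []
for _ in range(3):
    A = rng.normal(size=(4, 6))        # ray sensitivity rows stay on the node
    nodes.append((A, A @ m_true))      # t = that node's travel-time data

m = np.zeros(6)
for _ in range(1000):                  # coordinator averages local corrections
    m = m + np.mean([local_update(A, t, m) for A, t in nodes], axis=0)
print("model error:", np.linalg.norm(m - m_true))
```

Only the small model vector and per-node corrections cross the network each round; the raw waveform data never leaves the nodes, which is the property that sidesteps the data-retrieval bottleneck described above.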
