About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
141

Using Application Benefit for Proactive Resource Allocation in Asynchronous Real-Time Distributed Systems

Hegazy, Tamir A. 12 October 2001 (has links)
This thesis presents two proactive resource allocation algorithms, RBA* and OBA, for asynchronous real-time distributed systems. The algorithms consider an application model in which timeliness requirements are expressed using Jensen's benefit functions, and use adaptation functions to describe the anticipated workload over future time intervals. Furthermore, an adaptation model is considered in which processes are replicated to share workload increases, together with a real-time Ethernet system model in which message collisions are resolved. Given these models, the objective is to maximize aggregate application benefit and minimize the aggregate missed-deadline ratio. Since determining the optimal allocation is computationally intractable, the algorithms heuristically compute an allocation that is as "close" as possible to the optimal one. While RBA* analyzes process response times to determine the allocation, OBA analyzes processor overloads to compute the decision much faster. RBA* incurs a quadratic amortized complexity in terms of subtask arrivals for its most computationally intensive component when DASA is used as the underlying process-scheduling algorithm, whereas OBA incurs a logarithmic amortized complexity for the corresponding component. Benchmark-driven experiments were conducted to study how different process-scheduling and message-scheduling algorithms affect the performance of the two algorithms and to compare them. The experimental results reveal that RBA* produces higher aggregate benefit and a lower missed-deadline ratio when DASA is used for process scheduling and message scheduling. Furthermore, RBA* produces higher aggregate benefit and a lower missed-deadline ratio than OBA, confirming the intuition that accurate response-time analysis can lead to better results. / Master of Science
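As a rough illustration of the benefit-function model the abstract describes, the sketch below assumes one common time/utility shape (full benefit up to the deadline, decaying linearly to zero afterwards). The function shapes and names here are illustrative; the thesis's actual benefit functions and the RBA*/OBA heuristics are not reproduced.

```python
def benefit(completion_time, deadline, max_benefit):
    """Jensen-style time/utility function: full benefit when the task
    completes by its deadline, then linear decay over a grace window
    equal to the deadline (one possible shape, chosen for illustration)."""
    if completion_time <= deadline:
        return max_benefit
    overshoot = completion_time - deadline
    return max(0.0, max_benefit * (1 - overshoot / deadline))

def aggregate_benefit(completions):
    """Objective the allocator maximizes: the sum of benefit over
    (completion_time, deadline, max_benefit) tuples for all tasks."""
    return sum(benefit(c, d, b) for c, d, b in completions)
```

An allocator would compare candidate allocations by the aggregate benefit their predicted completion times yield, preferring the one closest to optimal.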
142

An Efficient Parallel Three-Level Preconditioner for Linear Partial Differential Equations

Yao, Aixiang I Song 26 February 1998 (has links)
The primary motivation of this research is to develop and investigate parallel preconditioners for linear elliptic partial differential equations. Three preconditioners are studied: a block-Jacobi preconditioner (BJ), a two-level tangential preconditioner (D0), and a three-level preconditioner (D1). Performance and scalability on a distributed-memory parallel computer are considered, and communication cost and redundancy are explored as well. After experiments and analysis, we find that the three-level preconditioner D1 is the most efficient and scalable parallel preconditioner of the three. The D1 preconditioner reduces both the number of iterations and the computational time substantially. A new hybrid preconditioner is suggested which may combine the best features of D0 and D1. / Master of Science
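For readers unfamiliar with the simplest of the three, a block-Jacobi preconditioner applies the inverse of the matrix's diagonal blocks to a residual vector. The plain-Python sketch below shows only that serial apply step, with a hypothetical helper for the small dense solves; the distributed-memory and multilevel aspects studied in the thesis are omitted.

```python
def solve_dense(B, rhs):
    """Tiny Gaussian elimination with partial pivoting for one block."""
    m = len(B)
    M = [row[:] + [rhs[i]] for i, row in enumerate(B)]  # augmented matrix
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, m):
            f = M[r][col] / M[col][col]
            for c in range(col, m + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        x[r] = (M[r][m] - sum(M[r][c] * x[c] for c in range(r + 1, m))) / M[r][r]
    return x

def block_jacobi_apply(A, r, block):
    """Apply M^{-1} r, where M keeps only A's diagonal blocks of the
    given size; each block is solved independently (and, in a parallel
    setting, could be solved on a different processor)."""
    n = len(A)
    z = [0.0] * n
    for start in range(0, n, block):
        end = min(start + block, n)
        B = [row[start:end] for row in A[start:end]]
        z[start:end] = solve_dense(B, r[start:end])
    return z
```

Because each block solve touches only local rows, the apply step needs no communication, which is what makes BJ a natural parallel baseline.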
143

Uncertainties in Mobile Learning applications : Software Architecture Challenges

Gil de la Iglesia, Didac January 2012 (has links)
The presence of computer technologies in our daily life is growing by leaps and bounds. One of the recent trends is the use of mobile technologies and cloud services for supporting everyday tasks and the sharing of information between users. The field of education is not absent from these developments, and many organizations are adopting Information and Communication Technologies (ICT) in various ways to support teaching and learning. The field of Mobile Learning (M-Learning) offers new opportunities for carrying out collaborative educational activities in a variety of settings and situations. The use of mobile technologies for enhancing collaboration provides new opportunities, but at the same time new challenges emerge. One of those challenges is discussed in this thesis: it concerns uncertainties related to the dynamic aspects that characterize outdoor M-Learning activities. The existence of these uncertainties forces software developers to make assumptions during development. However, these uncertainties are the cause of risks that may affect the required outcomes of M-Learning activities. Mitigation mechanisms can be developed and included to reduce the impact of these risks during the different phases of development; uncertainties that are present at runtime, however, require adaptation mechanisms to mitigate the resulting risks. This thesis analyzes the current state of the art in self-adaptation in Technology-Enhanced Learning (TEL) and M-Learning. The results of an extensive literature survey in the field and the outcomes of the Geometry Mobile (GEM) research project are reported. A list of uncertainties in collaborative M-Learning activities, and the associated risks that threaten the critical QoS outcomes for collaboration, is identified and discussed. Mitigation mechanisms to cope with these problems are elaborated and presented in detail.
The results of these efforts provide valuable insights and a basis for the design of a multi-agent self-adaptive architecture for multiple concerns, which is illustrated with a prototype implementation. The proposed conceptual architecture is an initial cornerstone towards the creation of a decentralized, distributed self-adaptive system for multiple concerns to guarantee collaboration in M-Learning.
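The runtime adaptation the abstract refers to is commonly structured as a MAPE-style feedback loop (monitor, analyze, plan, execute). The sketch below shows that generic pattern only; the metric names, threshold, and mitigation actions are invented for illustration and do not come from the thesis's architecture.

```python
class MapeLoop:
    """Minimal MAPE-style self-adaptation loop: watch QoS metrics and,
    when one drops below a threshold, execute a registered mitigation."""

    def __init__(self, threshold, mitigations):
        self.threshold = threshold
        self.mitigations = mitigations  # metric name -> mitigation action
        self.log = []                   # record of executed mitigations

    def step(self, qos_sample):
        # Monitor + Analyze: find metrics violating the QoS requirement
        violated = [name for name, value in qos_sample.items()
                    if value < self.threshold]
        # Plan + Execute: run the registered mitigation for each violation
        for name in violated:
            action = self.mitigations.get(name)
            if action:
                self.log.append(action())
        return violated
```

In a collaborative M-Learning setting, a loop like this would run on each device, with mitigations such as switching positioning sources or buffering shared state during connectivity drops.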
144

Cauldron: A Scalable Domain Specific Database for Product Data

Ottosson, Love January 2017 (has links)
This project investigated how NoSQL databases, together with a logical layer, can be used instead of a relational database with separate backend logic to search for products with customer-specific constraints in an e-commerce scenario. The motivation for moving away from a relational database was the scalability issues and increased read latencies experienced as the data grew. The work resulted in a framework called Cauldron that uses pipelines, sequences of execution steps, to expose its data, which is stored in an in-memory key-value store and a document database. Cauldron uses write replication between distributed instances to increase read throughput at the cost of write latency. A product database with customer-specific constraints was implemented using Cauldron to compare it against an existing solution based on a relational database. The new product database can serve search queries 10 times faster in the general case and up to 25 times faster in extreme cases compared to the existing solution.
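To make the pipeline idea concrete, the sketch below models a query as a context passed through an ordered list of steps over an in-memory key-value store. All names here (`Pipeline`, the step functions, the store layout) are illustrative assumptions, not Cauldron's actual API.

```python
class Pipeline:
    """A pipeline in the abstract's sense: an ordered sequence of
    execution steps, each transforming a query context."""

    def __init__(self, *steps):
        self.steps = steps

    def run(self, ctx):
        for step in self.steps:
            ctx = step(ctx)
        return ctx

# Hypothetical in-memory key-value store of product documents
store = {"sku-1": {"price": 10, "region": "EU"},
         "sku-2": {"price": 25, "region": "US"}}

def load_products(ctx):
    """Step 1: materialize candidate products from the store."""
    ctx["products"] = [dict(id=k, **v) for k, v in store.items()]
    return ctx

def apply_customer_constraints(ctx):
    """Step 2: keep only products allowed for this customer's region."""
    region = ctx["customer"]["region"]
    ctx["products"] = [p for p in ctx["products"] if p["region"] == region]
    return ctx

search = Pipeline(load_products, apply_customer_constraints)
result = search.run({"customer": {"region": "EU"}})
```

Pushing the constraint logic into pipeline steps next to the data is what removes the separate backend round-trips a relational setup would need.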
145

Adaptive Knowledge Exchange with Distributed Partial Models@Run.time

Werner, Christopher 11 January 2016 (has links) (PDF)
The growing number of robotics applications in which several robots pursue a common goal requires special consideration of the interaction between these robots with regard to the resulting data exchange, which must be carried out efficiently and must guarantee the safety of the overall system. This master's thesis presents a simulation environment that evaluates robot constellations against test scenarios and exchange strategies and delivers measurement results. The thesis first examines three data exchange approaches, then reviews publications in which data exchange is performed, and investigates simulators with respect to their usability for the simulation environment. The subsequent chapters explain the concept and implementation of the test environment, in which robots are described by a set of hardware components and goals. The experimental setup covers the different environments, test scenarios, and robot configurations, and forms the basis for the evaluation of the test results.
146

Fog Computing with Go: A Comparative Study

Butterfield, Ellis H 01 January 2016 (has links)
The Internet of Things is a recent computing paradigm, defined by networks of highly connected things – sensors, actuators and smart objects – communicating across networks of homes, buildings, vehicles, and even people. The Internet of Things brings with it a host of new problems, from managing security on constrained devices to processing never before seen amounts of data. While cloud computing might be able to keep up with current data processing and computational demands, it is unclear whether it can be extended to the requirements brought forth by the Internet of Things. Fog computing provides an architectural solution to address some of these problems by providing a layer of intermediary nodes within what is called an edge network, separating the local object networks and the Cloud. These edge nodes provide interoperability, real-time interaction, routing, and, if necessary, computational delegation to the Cloud. This paper attempts to evaluate Go, a distributed systems language developed by Google, in the context of the requirements set forth by Fog computing. Methodologies similar to those of previous literature are simulated and benchmarked in order to assess the viability of Go in the edge nodes of a Fog computing architecture.
147

Planificación dinámica sobre entornos grid

Bertogna, Mario Leandro 04 September 2013 (has links)
The goal of this thesis is the analysis of the efficient management of virtual environments. To this end, the scheduling middleware was optimized dynamically over Grid computing environments, the target being the optimal assignment and utilization of resources for the coordinated execution of tasks. In particular, the interaction between Grid services and the problem of task distribution in meta-organizations with non-trivial quality-of-service requirements were investigated, establishing a relationship between task distribution and the local needs of virtual organizations. The idea originated in the study of virtual and remote laboratories for the creation of virtual spaces. Many public and research organizations have a large number of resources, but these are not always accessible, owing to geographic distance, or cannot be interconnected towards a common goal. The concept of a virtual space introduces an abstraction layer over these resources, achieving location independence and interactivity between heterogeneous devices, and thereby making efficient use of the available means. In the course of this work, an environment for the generation of virtual spaces was implemented. The infrastructure was defined, two types of laboratories were implemented, and an optimization was proposed to achieve maximum utilization in an environment for parallel applications. These concepts have since evolved, and some of the published ideas have been implemented in functional prototypes for commercial infrastructures, although scheduling over computing centers with thousands of machines is still under investigation.
148

Dynamic Load Balancing Schemes for Large-scale HLA-based Simulations

De Grande, Robson E. 26 July 2012 (has links)
Dynamic balancing of computation and communication load is vital for the execution stability and performance of distributed, parallel simulations deployed on the shared, unreliable resources of large-scale environments. High Level Architecture (HLA) based simulations can experience a decrease in performance due to imbalances that are produced initially and/or during run-time. These imbalances are generated by the dynamic load changes of distributed simulations or by unknown, non-managed background processes resulting from the non-dedication of shared resources. Due to the dynamic execution characteristics of the elements that compose distributed simulation applications, the computational load and interaction dependencies of each simulation entity change during run-time. These dynamic changes lead to an irregular load and communication distribution, which increases resource overhead and execution delays. A static partitioning of load is limited to deterministic applications and is incapable of predicting the dynamic changes caused by distributed applications or by external background processes. Given the importance of dynamically balancing load for distributed simulations, many balancing approaches have been proposed, but they offer sub-optimal solutions that are limited to certain simulation aspects, specific to particular applications, or unaware of the characteristics of HLA-based simulations. Therefore, schemes for balancing the communication and computational load during the execution of distributed simulations are devised, adopting a hierarchical architecture. First, to enable the development of such balancing schemes, a migration technique is employed to perform reliable, low-latency simulation load transfers.
Then, a centralized balancing scheme is designed; this scheme employs local and cluster monitoring mechanisms to observe distributed load changes and identify imbalances, and it uses load reallocation policies to determine a distribution of load and minimize imbalances. To overcome the drawbacks of this scheme, such as bottlenecks, overheads, global synchronization, and a single point of failure, a distributed redistribution algorithm is designed. Extensions of the distributed balancing scheme are also developed to improve the detection of, and the reaction to, load imbalances. These extensions introduce communication delay detection, migration latency awareness, self-adaptation, and load oscillation prediction into the load redistribution algorithm. The developed balancing systems successfully improved the use of shared resources and increased the performance of distributed simulations.
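A centralized reallocation policy of the kind the abstract describes can be sketched as a greedy loop: migrate load from the busiest node to the idlest one until the spread falls within a tolerance. This is a toy illustration only; the thesis's policies also account for communication delays, migration latency, and load oscillation, none of which appear here.

```python
def rebalance(loads, threshold=1.0):
    """Greedy reallocation sketch: repeatedly migrate one unit of load
    from the most loaded node to the least loaded one until the gap
    between them is within the threshold. Returns the final load map
    and the list of migrations performed."""
    loads = dict(loads)
    migrations = []
    while True:
        hi = max(loads, key=loads.get)
        lo = min(loads, key=loads.get)
        if loads[hi] - loads[lo] <= threshold:
            return loads, migrations
        loads[hi] -= 1
        loads[lo] += 1
        migrations.append((hi, lo))
```

In the hierarchical setting, a scheme like this would run per cluster, with a higher tier balancing across clusters; the distributed variant replaces the single decision point with peer-to-peer exchanges.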
149

An efficient execution model for reactive stream programs

Nguyen, Vu Thien Nga January 2015 (has links)
Stream programming is a paradigm where a program is structured as a set of computational nodes connected by streams. Focusing on data moving between computational nodes via streams, this programming model fits well for applications that process long sequences of data. We call such applications reactive stream programs (RSPs) to distinguish them from stream programs with rather small and finite input data. In stream programming, concurrency is expressed implicitly via communication streams. This helps to reduce the complexity of parallel programming. For this reason, stream programming has gained popularity as a programming model for parallel platforms. However, it is also challenging to analyse and improve performance without an understanding of the program's internal behaviour. This thesis targets an efficient execution model for deploying RSPs on parallel platforms. This execution model includes a monitoring framework to understand the internal behaviour of RSPs; scheduling strategies for RSPs on uniform shared-memory platforms; and mapping techniques for deploying RSPs on heterogeneous distributed platforms. The foundation of the execution model is a study of the performance of RSPs in terms of throughput and latency. This study includes quantitative formulae for throughput and latency, and the identification of factors that influence these performance metrics. Based on the study of RSP performance, this thesis exploits characteristics of RSPs to derive effective scheduling strategies on uniform shared-memory platforms. Aiming to optimise both throughput and latency, these scheduling strategies are implemented in two heuristic-based schedulers. Both are designed to be centralised, to provide load balancing for RSPs with dynamic behaviour as well as dynamic structures. The first one uses the notion of positive and negative data demands on each stream to determine the scheduling priorities.
This scheduler is independent of the runtime system. The second one requires the runtime system to provide the position information of each computational node in the RSP, and uses that to decide the scheduling priorities. Our experiments show that both schedulers provide similar performance while being significantly better than a reference implementation without dynamic load balancing. Also based on the study of RSP performance, we present in this thesis two new heuristic partitioning algorithms which are used to map RSPs onto heterogeneous distributed platforms. These are Kernighan-Lin Adaptation (KLA) and Congestion Avoidance (CA), where the main objective is to optimise throughput. This is a multi-parameter optimisation problem to which existing graph partitioning algorithms are not applicable. Compared to the generic meta-heuristic Simulated Annealing algorithm, both proposed algorithms achieve equally good or better results. KLA is faster for small benchmarks while slower for large ones. In contrast, CA is always orders of magnitude faster, even for very large benchmarks.
150

Towards Energy-Efficient Mobile Sensing: Architectures and Frameworks for Heterogeneous Sensing and Computing

Fan, Songchun January 2016 (has links)
Modern sensing apps require continuous and intense computation on data streams. Unfortunately, mobile devices are failing to keep pace despite advances in hardware capability. In contrast to powerful system-on-chips that rapidly evolve, battery capacities grow only slowly. This hinders the potential of long-running, compute-intensive sensing services such as image/audio processing, motion tracking and health monitoring, especially on small, wearable devices.

In this thesis, we present three pieces of work that aim at improving the energy efficiency of mobile sensing. (1) In the first work, we study heterogeneous mobile processors that dynamically switch between high-performance and low-power cores according to tasks' performance requirements. We benchmark interactive mobile workloads and quantify the energy improvement of different microarchitectures. (2) Realizing that today's users often carry more than one mobile device, in the second work, we extend the resource boundary of individual devices by prototyping a distributed framework that coordinates multiple devices. When devices share common sensing goals, the framework schedules sensing and computing tasks according to the devices' heterogeneity, improving the performance and latency of compute-intensive sensing apps. (3) In the third work, we study the power breakdown of motion sensing apps on wearable devices and show that traditional offloading schemes cannot mitigate sensing's high energy costs. We design a framework that allows the phone to take over sensing and computation by predicting the wearable's sensory data when motions of the two devices are highly correlated. This allows the wearable to offload without communicating raw sensing data, resulting in little performance loss but significant energy savings. / Dissertation
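The delegation decision in the third piece of work hinges on detecting when phone and wearable motions are highly correlated. A minimal sketch of that gating test, using Pearson correlation over two short accelerometer traces, is shown below; the function name, window, and threshold are illustrative assumptions, not the thesis's actual framework.

```python
def should_delegate(phone_acc, watch_acc, corr_threshold=0.9):
    """Return True when the Pearson correlation between the phone's and
    wearable's accelerometer traces is high enough that the phone can
    predict the wearable's data and take over sensing for it."""
    n = len(phone_acc)
    mp = sum(phone_acc) / n
    mw = sum(watch_acc) / n
    cov = sum((p - mp) * (w - mw) for p, w in zip(phone_acc, watch_acc))
    sp = sum((p - mp) ** 2 for p in phone_acc) ** 0.5
    sw = sum((w - mw) ** 2 for w in watch_acc) ** 0.5
    if sp == 0 or sw == 0:
        return False  # a flat trace carries no correlation signal
    return cov / (sp * sw) >= corr_threshold
```

When the check passes, the wearable can stop streaming raw samples entirely, which is where the energy saving over traditional offloading comes from.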
