241

Network-centric methods for heterogeneous multiagent systems

Abbas, Waseem 13 January 2014 (has links)
We present tools for a network-topology-based characterization of heterogeneity in multiagent systems, thereby providing a framework for the analysis and design of heterogeneous multiagent networks from a network structure viewpoint. In heterogeneous networks, agents with a diverse set of resources coordinate with each other. Coordination among different agents and the structure of the underlying network topology have significant impacts on the overall behavior and functionality of the system. Using constructs from graph theory, a qualitative as well as a quantitative analysis is performed to examine the interrelationship between the network topology and the distribution of agents with various capabilities in heterogeneous networks. Our goal is to allow agents to maximally exploit heterogeneous resources available within the network through local interactions, thus exploring the promise heterogeneous networks hold for accomplishing complicated tasks by leveraging the assorted capabilities of agents. For reliable operation of such systems, the issue of security against intrusions and malicious agents is also addressed. We provide a scheme to secure a network against a sequence of intruder attacks through a set of heterogeneous guards. Moreover, the robustness of networked systems against noise corruption and structural changes in the underlying network topology is also examined.
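As a toy illustration of the kind of topology-driven reasoning the abstract describes (not taken from the thesis; the network, resource labels, and coverage criterion are hypothetical), the sketch below checks which resource types each agent can reach through local interactions, i.e. within its closed neighbourhood:

```python
def neighborhood_resource_coverage(adjacency, resources, all_types):
    """For each agent, report which resource types are reachable within
    its closed neighborhood (itself plus its direct neighbors)."""
    coverage = {}
    for agent, neighbors in adjacency.items():
        local = set(resources[agent])
        for n in neighbors:
            local |= set(resources[n])
        coverage[agent] = {"available": local, "missing": set(all_types) - local}
    return coverage

if __name__ == "__main__":
    # Toy 4-agent network; resource labels are hypothetical.
    adjacency = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
    resources = {1: ["sensor"], 2: ["camera"], 3: ["relay"], 4: ["sensor", "camera"]}
    for agent, info in neighborhood_resource_coverage(
            adjacency, resources, {"sensor", "camera", "relay"}).items():
        print(agent, sorted(info["available"]), "missing:", sorted(info["missing"]))
```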
242

Collaborative Web-Based Mapping of Real-Time Sensor Data

Gadea, Cristian 10 February 2011 (has links)
The distribution of real-time GIS (Geographic Information System) data among users is now more important than ever as it becomes increasingly affordable and important for scientific and government agencies to monitor environmental phenomena in real-time. A growing number of sensor networks are being deployed all over the world, but there is a lack of solutions for their effective monitoring. Increasingly, GIS users need access to real-time sensor data from a variety of sources, and the data must be represented in a visually-pleasing way and be easily accessible. In addition, users need to be able to collaborate with each other to share and discuss specific sensor data. The real-time acquisition, analysis, and sharing of sensor data from a large variety of heterogeneous sensor sources is currently difficult due to the lack of a standard architecture to properly represent the dynamic properties of the data and make it readily accessible for collaboration between users. This thesis will present a JEE-based publisher/subscriber architecture that allows real-time sensor data to be displayed collaboratively on the web, requiring users to have nothing more than a web browser and Internet connectivity to gain access to that data. The proposed architecture is evaluated by showing how an AJAX-based and a Flash-based web application are able to represent the real-time sensor data within novel collaborative environments. By using the latest web-based technology and relevant open standards, this thesis shows how map data and GIS data can be made more accessible, more collaborative and generally more useful.
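The architecture described in the abstract is JEE-based; purely as an illustration of the publisher/subscriber pattern it relies on (class names, topic names, and reading format are hypothetical, not from the thesis), a minimal in-process sketch might look like this:

```python
from collections import defaultdict
from typing import Callable, Dict, List

class SensorBroker:
    """Minimal in-process publish/subscribe broker: sensors publish readings
    to named topics, and collaborating clients register callbacks."""
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, reading: dict) -> None:
        for callback in self._subscribers[topic]:
            callback(reading)

if __name__ == "__main__":
    broker = SensorBroker()
    # Hypothetical topic name and reading format.
    broker.subscribe("river/temperature", lambda r: print("map update:", r))
    broker.publish("river/temperature", {"lat": 45.4, "lon": -75.7, "value_c": 8.2})
```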
243

GENI in the cloud

Yuen, Marco 20 July 2010 (has links)
Computer networking researchers often have access to a few different network testbeds (Section 1.2) for their experiments. However, those testbeds are limited in resources; contention for resources is prominent, especially when a conference deadline is looming. Moreover, services running on those testbeds are subject to seasonal and daily traffic spikes from users all around the world. Hence, demand for resources at the testbeds is high. Some researchers can use other testbeds in conjunction with the ones they are using. Even though each of the testbeds may have different infrastructures and characteristics, in the end, what the researchers receive in return is a set of computing resources, either virtual machines or physical machines. Essentially, those testbeds provide a similar service, but researchers have to manage the credentials for accessing the testbeds manually, and they have to manually request resources from different testbeds in order to set up experiments that span different testbeds. This thesis presents GENICloud, a project that enables the federation of testbeds with clouds. Computing and storage resources can be provisioned dynamically to researchers and services running on existing testbeds from a Eucalyptus cloud. As part of the GENICloud project, the user proxy (Section 3.4) provides a less arduous method for testbed administrators to federate with other testbeds; the same service also manages researchers' credentials, so they do not have to acquire resources from each testbed individually. The user proxy provides a single interface for researchers to interact with different testbeds and clouds and manage their experiments. Furthermore, GENICloud demonstrates that there are, in fact, quite a few architectural similarities between different testbeds and even clouds.
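As a rough, hypothetical sketch of the "single interface over many testbeds" idea the user proxy embodies (class names and behaviour are illustrative only, not GENICloud's actual API):

```python
class TestbedAdapter:
    """Common interface a user proxy might expose over heterogeneous testbeds."""
    def __init__(self, name, credential):
        self.name = name
        self.credential = credential  # stored once, reused by the proxy

    def allocate(self, num_nodes):
        # A real adapter would call the testbed's own API (PlanetLab, Eucalyptus, ...).
        return [f"{self.name}-node-{i}" for i in range(num_nodes)]

class UserProxy:
    """Single entry point that hides per-testbed credential handling."""
    def __init__(self, adapters):
        self.adapters = adapters

    def allocate_everywhere(self, num_nodes):
        return {adapter.name: adapter.allocate(num_nodes) for adapter in self.adapters}

if __name__ == "__main__":
    proxy = UserProxy([TestbedAdapter("planetlab", "cred-A"),
                       TestbedAdapter("eucalyptus", "cred-B")])
    print(proxy.allocate_everywhere(2))
```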
244

Infraestructura para computación de alta disponibilidad y administración de recursos mediante Condor / Infrastructure for high-availability computing and resource management using Condor

Martínez, Paula 29 December 2014 (has links)
Jobs that require intensive computing capacity need a specialized workload manager that provides queueing mechanisms, scheduling policies, a priority scheme, monitoring, and resource management. When users submit their jobs, the manager must decide when and where to run them, taking their requirements into account, monitor their progress, and notify the user when execution has finished. By integrating processing capacity, storage, and access to remote resources, it becomes possible to run applications that cannot be processed on a single computer and thus satisfy complex computing demands. This work discusses the use of Condor as a manager of the resources available in a High Availability Computing environment, since it is a system that offers HTC functionality in which users do not have to worry, for example, about where to send their jobs for execution, or about having to submit a large number of them manually when needed.
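For readers unfamiliar with Condor (now HTCondor), a minimal submit description of the kind such a workload manager consumes might look as follows; the executable and file names are hypothetical, and available options vary by site configuration:

```
universe     = vanilla
executable   = simulate
arguments    = --input data_$(Process).dat
output       = run_$(Process).out
error        = run_$(Process).err
log          = cluster.log
request_cpus = 1
queue 50
```

Submitting this file once enqueues 50 jobs; the scheduler decides where and when each one runs and records progress in the log, which is the kind of hands-off bulk submission the abstract refers to.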
245

Parallel Pattern Search in Large, Partial-Order Data Sets on Multi-core Systems

Ekpenyong, Olufisayo January 2011 (has links)
Monitoring and debugging distributed systems is inherently a difficult problem. Events collected during the execution of distributed systems can enable developers to diagnose and fix faults. Process-time diagrams are normally used to view the relationships between the events and understand the interaction between processes over time. A major difficulty with analyzing these sets of events is that they are usually very large. Therefore, being able to search through the event-data sets can enable users to get to points of interest quickly and find out if patterns in the dataset represent the expected behaviour of the system. A lot of research work has been done to improve the search algorithm for finding event-patterns in large partial-order datasets. In this thesis, we improve on this work by parallelizing the search algorithm. This is useful as many computers these days have more than one core or processor. Therefore, it makes sense to exploit this available computing power as part of an effort to improve the speed of the algorithm. The search problem itself can be modeled as a Constraint Satisfaction Problem (CSP). We develop a simple and efficient way of generating tasks (to be executed by the cores) that guarantees that no two cores will ever repeat the same work-effort during the search. Our approach is generic and can be applied to any CSP consisting of a large domain space. We also implement an efficient dynamic work-stealing strategy that ensures the cores are kept busy throughout the execution of the parallel algorithm. We evaluate the efficiency and scalability of our algorithm through experiments and show that we can achieve efficiencies of up to 80% on a 24-core machine.
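As a hedged sketch of the task-generation idea (not the thesis's algorithm, and using a shared task pool rather than true work-stealing), one can fix the first few variables of a toy CSP so that each prefix defines a disjoint unit of work and no two cores ever repeat the same effort:

```python
from itertools import product
from multiprocessing import Pool

# Toy CSP: all assignments of 6 variables over {0,1,2} in which adjacent
# variables differ (a stand-in for an event-pattern constraint).
NUM_VARS, DOMAIN, PREFIX_LEN = 6, (0, 1, 2), 2

def consistent(assignment):
    return all(a != b for a, b in zip(assignment, assignment[1:]))

def search(prefix):
    """Complete one prefix by exhaustive backtracking; prefixes are disjoint,
    so no two workers ever explore the same part of the domain space."""
    if not consistent(prefix):
        return []
    solutions = []
    def extend(partial):
        if len(partial) == NUM_VARS:
            solutions.append(tuple(partial))
            return
        for value in DOMAIN:
            if value != partial[-1]:          # incremental consistency check
                extend(partial + [value])
    extend(list(prefix))
    return solutions

if __name__ == "__main__":
    tasks = [list(p) for p in product(DOMAIN, repeat=PREFIX_LEN)]
    with Pool(4) as pool:                      # workers pull tasks from a shared pool
        results = pool.map(search, tasks)
    print(sum(len(r) for r in results), "solutions found")
```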
246

Collaborative Data Access and Sharing in Mobile Distributed Systems

Islam, Mohammad Towhidul January 2011 (has links)
The multifaceted use of mobile computing devices with increasing functionality, including smartphones, PDAs, and tablet computers, together with advances in wireless technologies, has fueled the use of collaborative (peer-to-peer) computing techniques in mobile environments. Mobile collaborative computing, known as mobile peer-to-peer (MP2P), can provide an economical way of accessing data among users of diverse applications in daily life (exchanging traffic conditions on a busy highway, sharing price-sensitive financial information, getting the most recent news), in national security (exchanging information and collaborating to uproot a terror network, communicating on a hostile battlefield), and in natural catastrophes (seamless rescue operations in collapsed and disaster-torn areas). Nonetheless, data/content dissemination among the mobile devices is the fundamental building block for all applications in this paradigm. The objective of this research is to propose a data dissemination scheme for mobile distributed systems using an MP2P technique that maximizes the number of required objects distributed among users and minimizes the object acquisition time. Specifically, we introduce a new paradigm of information dissemination in MP2P networks. To accommodate mobility and bandwidth constraints, objects are segmented into smaller pieces for efficient information exchange. Since it is difficult for a node to know the content of every other node in the network, we propose a novel Spatial-Popularity based Information Diffusion (SPID) scheme that determines the urgency of contents based on the spatial demand of mobile users and disseminates content accordingly. The segmentation policy and the dissemination scheme can reduce content acquisition time for each node. Further, to facilitate efficient scheduling of information transmission from every node in the wireless mobile network, we modify and apply the distributed maximal independent set (MIS) algorithm. We also consider neighbor overlap for closely located mobile stations to reduce duplicate transmissions to common neighbors. Different parameters in the system, such as node density, scheduling among neighboring nodes, mobility pattern, and node speed, have a tremendous impact on data diffusion in an MP2P environment. We have developed analytical models for our proposed scheme for object diffusion time/delay in a wireless mobile network to capture the interrelationship among these parameters. Specifically, we present an analytical model of object propagation in mobile networks as a function of node density, radio range, and node speed. In the analysis, we calculate the probabilities of transmitting a single object from one node to multiple nodes using the epidemic model of the spread of disease. We also incorporate the impact of node mobility, radio range, and node density into the analysis. Using these transition probabilities, we construct an analytical model based on a Markov process to estimate the expected delay for diffusing an object to the entire network, both for single-object and multiple-object scenarios. We then calculate the transmission probabilities of multiple objects among the nodes in wireless mobile networks considering network dynamics. Through extensive simulations, we demonstrate that the proposed scheme is efficient for data diffusion in mobile networks.
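The thesis applies a distributed MIS algorithm; as a simplified, centralized illustration only (the contact graph and tie-breaking rule are hypothetical), a greedy MIS can be used to pick sets of nodes that may transmit in the same round without interfering with a selected neighbour:

```python
def greedy_mis(adjacency):
    """Greedy maximal independent set: nodes selected here can transmit
    simultaneously without interfering with another selected node."""
    selected, blocked = set(), set()
    # Favour low-degree nodes so more transmitters fit into one round.
    for node in sorted(adjacency, key=lambda n: len(adjacency[n])):
        if node not in blocked:
            selected.add(node)
            blocked.add(node)
            blocked.update(adjacency[node])
    return selected

def schedule_rounds(adjacency):
    """Repeatedly pick an MIS among nodes that still have data to send,
    yielding interference-free transmission rounds."""
    remaining = set(adjacency)
    rounds = []
    while remaining:
        sub = {n: [m for m in adjacency[n] if m in remaining] for n in remaining}
        chosen = greedy_mis(sub)
        rounds.append(chosen)
        remaining -= chosen
    return rounds

if __name__ == "__main__":
    # Toy contact graph between mobile nodes (hypothetical topology).
    g = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3, 5], 5: [4]}
    print(schedule_rounds(g))
```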
247

Information enrichment for quality recommender systems

Weng, Li-Tung January 2008 (has links)
The explosive growth of the World Wide Web and the emergence of ecommerce are the two major factors that have led to the development of recommender systems (Resnick and Varian, 1997). The main task of recommender systems is to learn from users and recommend items (e.g. information, products or books) that match the users' personal preferences. Recommender systems have been an active research area for more than a decade. Many different techniques and systems with distinct strengths have been developed to generate better quality recommendations. One of the main factors that affect recommendation quality is the amount of information resources available to the recommenders. The main feature of recommender systems is their ability to make personalised recommendations for different individuals. However, many ecommerce sites find it difficult to obtain sufficient knowledge about their users, and hence the recommendations they provide are often poor and not personalised. This information insufficiency problem is commonly referred to as the cold-start problem. Most existing research on recommender systems focuses on developing techniques to better utilise the available information resources to achieve better recommendation quality. However, while the amount of available data and information remains insufficient, these techniques can only provide limited improvements to the overall recommendation quality. In this thesis, a novel and intuitive approach towards improving recommendation quality and alleviating the cold-start problem is attempted: enriching the information resources. It can easily be observed that when there is a sufficient information and knowledge base to support recommendation making, even the simplest recommender systems can outperform sophisticated ones with limited information resources. Two strategies are suggested in this thesis to achieve the proposed information enrichment for recommenders: • The first strategy suggests that information resources can be enriched by considering other information or data facets. Specifically, a taxonomy-based recommender, the Hybrid Taxonomy Recommender (HTR), is presented in this thesis. HTR exploits the relationship between users' taxonomic preferences and item preferences from the combination of widely available product taxonomy information and existing user rating data, and it then utilises this taxonomic-preference-to-item-preference relation to generate high quality recommendations. • The second strategy suggests that information resources can be enriched simply by obtaining information resources from other parties. In this thesis, a distributed recommender framework, the Ecommerce-oriented Distributed Recommender System (EDRS), is proposed. The proposed EDRS allows multiple recommenders from different parties (i.e. organisations or ecommerce sites) to share recommendations and information resources with each other in order to improve their recommendation quality. Based on the results obtained from the experiments conducted in this thesis, the proposed systems and techniques achieve substantial improvements both in making quality recommendations and in alleviating the cold-start problem.
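As an illustrative sketch only (not the HTR algorithm itself; the ratings, categories, and averaging rule are hypothetical), taxonomy information can be folded into recommendation making roughly like this, which is one simple way a category-level profile can help rank items for a cold-start user with very few ratings:

```python
from collections import defaultdict

def category_profile(ratings, item_categories):
    """Average a user's ratings over the categories of the items they rated."""
    totals, counts = defaultdict(float), defaultdict(int)
    for item, rating in ratings.items():
        for cat in item_categories[item]:
            totals[cat] += rating
            counts[cat] += 1
    return {cat: totals[cat] / counts[cat] for cat in totals}

def score_unrated(ratings, item_categories):
    """Score unrated items by the user's average preference for their categories."""
    profile = category_profile(ratings, item_categories)
    scores = {}
    for item, cats in item_categories.items():
        if item in ratings:
            continue
        known = [profile[c] for c in cats if c in profile]
        scores[item] = sum(known) / len(known) if known else 0.0
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    # Hypothetical product taxonomy and ratings on a 1-5 scale.
    item_categories = {"b1": ["sci-fi"], "b2": ["sci-fi", "thriller"],
                       "b3": ["romance"], "b4": ["thriller"]}
    print(score_unrated({"b1": 5, "b3": 2}, item_categories))
```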
248

Αξιοποίηση της τεχνολογίας συνιστωσών στην ανάπτυξη κατανεμημένων συστημάτων / Exploiting component technology in the development of distributed systems

Κυριάκου, Γιώργος 18 May 2010 (has links)
The goal of this thesis is to exploit component technology in the development of distributed automation and control systems. Over the last decade there has been a major effort to develop new software technologies that support application development for distributed heterogeneous systems. Component models are today the state of the art in this direction. In the field of industrial measurement and control applications in particular, this need becomes pressing so that the software produced for them is effective and extensible. In this thesis we begin our study with the DOC middleware model and proceed to its extension, component middleware. We examine the limitations of the DOC middleware model and the advantages of component middleware. We then present the CORBA component model in detail. For this model we present OpenCCM, which is currently the only available implementation based on the Java language. Next, we present in detail the IEC-61499 standard for the development of industrial applications, which is based on the concept of the Function Block. We then describe the solution proposed in this thesis for implementing the IEC-61499 standard, which is based on the CORBA component model. Our solution uses Java as the programming language, with CORFU as the development platform on the Function Block side and Cadena on the CORBA component side. We go on to present the FBtoCCMtool tool, which automates a large part of the transformation process from the FB model to the CORBA component model. Finally, we describe the application of the proposed solution to a prototype industrial-process system, the FESTO Modular Processing System by FESTO.
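As a purely conceptual sketch of the IEC 61499 Function Block idea the thesis builds on (real function blocks are specified through event/data interfaces and execution control charts in the standard's own notation, not as Python classes; names and values here are hypothetical):

```python
class FunctionBlock:
    """Conceptual IEC 61499-style block: an input event triggers an algorithm
    that reads data inputs, updates data outputs and emits an output event."""
    def __init__(self, name):
        self.name = name
        self.data_in, self.data_out = {}, {}
        self._handlers, self._listeners = {}, []

    def on_event(self, event, algorithm):
        self._handlers[event] = algorithm

    def connect(self, listener):
        self._listeners.append(listener)   # event/data connection to another block

    def fire(self, event):
        if event in self._handlers:
            out_event = self._handlers[event](self)
            for listener in self._listeners:
                listener.data_in.update(self.data_out)
                listener.fire(out_event)

if __name__ == "__main__":
    sensor = FunctionBlock("TemperatureSensor")
    alarm = FunctionBlock("Alarm")

    def read(fb):
        fb.data_out["temp"] = 87          # hypothetical reading
        return "READY"

    def check(fb):
        print("ALARM!" if fb.data_in["temp"] > 80 else "ok")
        return "DONE"

    sensor.on_event("REQ", read)
    alarm.on_event("READY", check)
    sensor.connect(alarm)
    sensor.fire("REQ")
```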
249

Εγκατάσταση και λειτουργία ολοκληρωμένων εφαρμογών διάχυτου υπολογισμού με χρήση ασυρμάτων ετερογενών συσκευών, αισθητήρων και ελεγκτών / Real world deployment and evaluation of pervasive computing services using heterogeneous wireless sensor networks

Ακριβόπουλος, Ορέστης 01 February 2013 (has links)
Within the scope of this MSc dissertation, we discuss the design, implementation, and real-world deployment of pervasive applications on top of a heterogeneous wireless sensor network environment. Wireless communication between heterogeneous devices is an inherently difficult research problem due to fundamental differences in the system architecture, properties, and capabilities of these devices. Initially, our research focused on identifying the problems that prevent intercommunication among the devices of a heterogeneous wireless sensor network, and we propose concrete solutions. As a solution, we propose a new abstract, layered system architecture that provides the key qualities needed for a successful pervasive system: expandability, scalability, and performance. The architecture achieves interoperability among the devices by introducing abstraction in the communication protocol. To demonstrate the applicability of our system, we include several representative use-case scenarios that illustrate the use of our infrastructure.
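As a hedged sketch of the abstraction idea (the device names and readings are hypothetical examples, not the thesis's actual platforms), a uniform adapter interface over heterogeneous sensors could look like this:

```python
from abc import ABC, abstractmethod

class SensorDevice(ABC):
    """Abstraction layer: every device, whatever its radio or architecture,
    is exposed to the pervasive application through the same interface."""
    @abstractmethod
    def read(self) -> dict: ...

class SunSpotAdapter(SensorDevice):
    def read(self) -> dict:
        # A real adapter would translate the device's native protocol.
        return {"device": "sunspot-1", "temperature_c": 21.5}

class ArduinoAdapter(SensorDevice):
    def read(self) -> dict:
        return {"device": "arduino-3", "temperature_c": 22.1}

def collect(devices):
    """The application layer never sees device-specific details."""
    return [d.read() for d in devices]

if __name__ == "__main__":
    print(collect([SunSpotAdapter(), ArduinoAdapter()]))
```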
250

Ερωτήματα διαστημάτων σε περιβάλλοντα νεφών υπολογιστών / Interval queries in cloud computing environments

Σφακιανάκης, Γεώργιος 04 February 2014 (has links)
The cloud is becoming increasingly important for data management applications, as it can seamlessly handle huge amounts of data. New problems arise daily that can only be solved by efficient, scalable applications able to process this enormous volume of information. Cloud key-value stores play a central role in this field, along with systems such as MapReduce that process massive data sets in a distributed fashion. Interval queries appear frequently in real applications, yet cloud key-value stores currently lack an efficient solution for them. This thesis deals with interval queries in cloud environments, with temporal queries as the leading application: such queries typically ask which events happened, or were ongoing, during a given time range. Traditional systems for handling this kind of query cannot cope with the volume of data some applications produce today, so cloud key-value stores are proposed to make this data manageable; existing cloud systems, however, offer no special support for such queries. The thesis first studies the problem and previously proposed solutions, such as segment trees, which can answer the queries described above efficiently. It then examines whether these data structures can be deployed on top of cloud key-value stores and investigates alternative solutions that better exploit the capabilities of these systems. This study led to new data structures and algorithms, or modifications of existing ones, that help solve the problem efficiently. Finally, the performance of the proposed solutions and algorithms is compared with that of existing ones; the comparison showed improvements in execution time of up to an order of magnitude in some cases.
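Since segment trees are the classical structure the thesis starts from, a minimal stabbing-query sketch over a discrete, bounded time domain is shown below (timestamps and labels are hypothetical, and the thesis targets cloud key-value stores rather than this in-memory form):

```python
class StabbingSegmentTree:
    """Segment tree over a discrete, bounded time domain [0, size).
    Each interval is stored in O(log size) canonical nodes; a stabbing query
    for time t collects every interval covering t in O(log size + answers)."""
    def __init__(self, size):
        self.size = size
        self.store = {}                     # node id -> list of interval labels

    def insert(self, label, start, end, node=1, lo=0, hi=None):
        hi = self.size - 1 if hi is None else hi
        if end < lo or hi < start:
            return
        if start <= lo and hi <= end:       # node fully covered: store here
            self.store.setdefault(node, []).append(label)
            return
        mid = (lo + hi) // 2
        self.insert(label, start, end, 2 * node, lo, mid)
        self.insert(label, start, end, 2 * node + 1, mid + 1, hi)

    def stab(self, t):
        result, node, lo, hi = [], 1, 0, self.size - 1
        while True:
            result.extend(self.store.get(node, []))
            if lo == hi:
                return result
            mid = (lo + hi) // 2
            if t <= mid:
                node, hi = 2 * node, mid
            else:
                node, lo = 2 * node + 1, mid + 1

if __name__ == "__main__":
    # Hypothetical events with integer start/end timestamps.
    tree = StabbingSegmentTree(16)
    tree.insert("deploy", 2, 9)
    tree.insert("outage", 5, 6)
    tree.insert("backup", 8, 15)
    print(sorted(tree.stab(6)))   # -> ['deploy', 'outage']
    print(sorted(tree.stab(12)))  # -> ['backup']
```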
