About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Offline Task Scheduling in a Three-layer Edge-Cloud Architecture

Mahjoubi, Ayeh. January 2023
Internet of Things (IoT) devices are increasingly used everywhere, from factories and hospitals to homes and cars. Because IoT devices typically have limited processing resources, they must rely on cloud servers to accomplish their tasks, and offloading tasks to the cloud raises several obstacles: large volumes of data must be transferred between IoT devices and the cloud, resulting in slow processing, high latency, and limited bandwidth. Edge computing was therefore developed to place compute nodes closer to end users. Since the resources available at edge nodes are limited, tasks must be optimally scheduled among IoT devices, edge nodes, and cloud nodes to meet the needs of IoT devices. In this thesis, we model the offloading problem in an edge-cloud infrastructure as a Mixed-Integer Linear Programming (MILP) problem and seek efficient optimization techniques to tackle it, aiming to minimize the total delay of the system after completing all tasks of all services requested by all users. We first apply exact approaches, such as the simplex method, to solve the MILP problem. Because exact techniques require substantial processing resources and considerable time, we propose several heuristic and meta-heuristic methods and use the exact results as a benchmark to evaluate them. Heuristics are quick and generate workable solutions in certain circumstances, but they cannot guarantee optimal results. Meta-heuristics are slower than heuristics and may require more computation, but they are more generic and capable of handling a variety of problems. We therefore propose two meta-heuristic approaches, one based on a genetic algorithm and the other on simulated annealing.
Compared to heuristic algorithms, the genetic-algorithm-based method yields a more accurate solution but requires more time and resources, while the simulated-annealing-based method is a better fit for the problem, producing more accurate solutions in less time than the genetic-algorithm-based method.
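As a rough illustration of the simulated-annealing idea this abstract describes, the sketch below anneals a task-to-node assignment in a toy three-layer setup. All delay values, the edge capacity, and the cooling schedule are invented for illustration and are not taken from the thesis.

```python
import math
import random

# Illustrative per-task delays (ms) on each layer -- not the thesis's data.
DELAY = [
    {"device": 40, "edge": 15, "cloud": 25},
    {"device": 60, "edge": 20, "cloud": 18},
    {"device": 30, "edge": 12, "cloud": 22},
    {"device": 50, "edge": 25, "cloud": 16},
]
EDGE_CAPACITY = 2  # hypothetical limit on tasks the edge node can host
LAYERS = ["device", "edge", "cloud"]

def total_delay(assignment):
    # Infeasible assignments (overloaded edge node) get infinite cost.
    if sum(a == "edge" for a in assignment) > EDGE_CAPACITY:
        return math.inf
    return sum(DELAY[i][a] for i, a in enumerate(assignment))

def anneal(seed=0, temp=100.0, cooling=0.95, steps=2000):
    rng = random.Random(seed)
    current = ["device"] * len(DELAY)
    best = current[:]
    for _ in range(steps):
        cand = current[:]
        cand[rng.randrange(len(cand))] = rng.choice(LAYERS)
        delta = total_delay(cand) - total_delay(current)
        # Accept improvements always; accept worsenings with a probability
        # that shrinks as the temperature cools.
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            current = cand
        if total_delay(current) < total_delay(best):
            best = current[:]
        temp *= cooling
    return best, total_delay(best)
```

With these toy numbers an exact MILP solver would certify the optimum; the annealer merely tends to land at or near it quickly, which is the trade-off the abstract highlights.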
42

Automatic methods for distribution of data-parallel programs on multi-device heterogeneous platforms

Moreń, Konrad. 07 February 2024
This thesis deals with the problem of finding effective methods for programming and distributing data-parallel applications on heterogeneous multiprocessor systems. These systems are ubiquitous today, ranging from low-power embedded devices to high-performance distributed systems, and demand for them is growing steadily because of the increasing number of data-intensive applications and the general growth of digital applications. Systems with multiple devices offer higher performance but unfortunately add complexity to software development. Programming heterogeneous multiprocessor systems presents several unique challenges compared to single-device systems. The first challenge is programmability. Despite constant innovation, programming languages and frameworks remain limited: they are either platform-specific, like CUDA, which supports only NVIDIA GPUs, or operate at a low level of abstraction, such as OpenCL. Developers writing OpenCL programs must manually distribute data to the different devices and synchronize the distributed computations, which hurts productivity. To reduce programming complexity and development time, this thesis introduces two approaches that automatically distribute and synchronize data-parallel workloads. Another challenge is multi-device hardware utilization. In contrast to single-device platforms, optimizing an application for a multi-device system is even more complicated: designers must apply not only the optimization strategies specific to each single-device architecture but also carefully balance the workload across all the platform's processors. For the balancing problem, this thesis proposes a method based on a platform model created with machine learning techniques.
Using machine learning, this thesis automatically builds a reliable platform model that is portable and adaptable to different platform setups with minimal manual involvement from programmers.
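As a toy sketch of how a learned platform model can drive workload balancing, the code below fits a per-device throughput from timing samples (least squares through the origin) and splits a workload proportionally. The devices and measurements are invented, and the thesis's actual machine-learning models are considerably more elaborate.

```python
# Hypothetical timing samples: (workload size, measured seconds) per device.
SAMPLES = {
    "cpu": [(1000, 2.0), (2000, 4.1), (4000, 8.0)],
    "gpu": [(1000, 0.5), (2000, 1.0), (4000, 2.1)],
}

def throughput(points):
    # Least-squares slope of n = rate * t through the origin: items/second.
    num = sum(n * t for n, t in points)
    den = sum(t * t for _, t in points)
    return num / den

def split(total, samples):
    # Give each device a share proportional to its estimated throughput.
    rates = {dev: throughput(pts) for dev, pts in samples.items()}
    scale = sum(rates.values())
    return {dev: round(total * r / scale) for dev, r in rates.items()}
```

A model like this is "portable" in the abstract's sense because re-profiling a new platform only means collecting fresh samples, not rewriting the partitioning logic.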
43

Resource- and Time-Constrained Control Synthesis for Multi-Agent Systems

Yu, Pian. January 2018
Multi-agent systems are employed for a group of agents to achieve coordinated tasks, in which distributed sensing, computing, communication, and control are usually integrated with shared resources. Efficient usage of these resources is therefore an important issue. In addition, in applications such as robotics, a group of agents may receive a sequence of task requests, and a deadline constraint on the completion of each task is a common requirement. Thus, integrating multi-agent task scheduling with control synthesis is of great practical interest. In this thesis, we study control of multi-agent systems under a networked control system framework. The first purpose is to design resource-efficient communication and control strategies that solve the consensus problem for multi-agent systems. The second purpose is to jointly schedule task sequences and design controllers for multi-agent systems subject to a sequence of deadline-constrained tasks. In the first part, a distributed asynchronous event-triggered communication and control strategy is proposed to tackle multi-agent consensus. It is shown that the proposed strategy reduces both the sensor-to-controller and controller-to-actuator communication rates while excluding Zeno behavior. To further relax the requirement of continuous sensing and computing, a periodic event-triggered communication and control strategy is proposed in the second part. In addition, an observer-based encoder-decoder with a finite-level quantizer is designed to deal with the constraint of limited data rate. An explicit formula for the maximum allowable sampling period is derived first; then it is proven that exponential consensus can be achieved in the presence of the data-rate constraint. Finally, in the third part, the problem of deadline-constrained multi-agent task scheduling and control synthesis is addressed.
A dynamic scheduling strategy is proposed and a distributed hybrid control law is designed for each agent that guarantees the completion and deadline satisfaction of each task. The effectiveness of the theoretical results in the thesis is verified by several simulation examples.
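A minimal discrete-time sketch of the event-triggered idea: each agent re-broadcasts its state only when it drifts beyond a threshold from its last broadcast, so neighbors compute with stale-but-close values and communication events are saved. The graph, gain, and threshold below are invented, and the thesis's continuous-time analysis (including the exclusion of Zeno behavior) is far more careful than this toy.

```python
EPS = 0.05    # trigger threshold (illustrative)
ALPHA = 0.2   # consensus gain (illustrative)
NEIGHBORS = {0: [1], 1: [0, 2], 2: [1]}  # a 3-agent path graph

def run(x, steps=200):
    last = x[:]   # most recently broadcast state of each agent
    events = 0
    for _ in range(steps):
        for i in range(len(x)):
            if abs(x[i] - last[i]) > EPS:   # event: broadcast fresh state
                last[i] = x[i]
                events += 1
        # Each agent steers toward its neighbors' last *broadcast* states.
        x = [x[i] + ALPHA * sum(last[j] - last[i] for j in NEIGHBORS[i])
             for i in range(len(x))]
    return x, events
```

The point of the threshold is visible in the event count: far fewer than one broadcast per agent per step, at the price of consensus only to within a band around the agreement value.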
44

Multi-Robot Task Allocation and Scheduling with Spatio-Temporal and Energy Constraints

Dutia, Dharini. 24 April 2019
Autonomy in multi-robot systems is bounded by coordination among its agents. Coordination implies simultaneous task decomposition, task allocation, team formation, task scheduling, and routing, collectively termed task planning. In many real-world applications of multi-robot systems, such as commercial cleaning, delivery systems, warehousing, and inventory management, spatial and temporal constraints, variable execution times, and energy limitations must be integrated into the planning module. Spatial constraints comprise the locations of the tasks, their reachability, and the structure of the environment; temporal constraints express task completion deadlines. There has been significant research on multi-robot task allocation involving spatio-temporal constraints, but limited attention has been paid to combining them with team formation and non-instantaneous task execution times. We achieve team formation by including quota constraints, which ensure that the number of robots required to perform each task is scheduled. We introduce and integrate task activation (time) windows with the team effort of multiple robots performing tasks for a given duration. Additionally, while visiting tasks in space, the energy budget limits the robots' operation time. We model energy depletion as a function of time to ensure long-term operation through periodic visits to recharging stations. Research on task planning approaches that combine all these conditions is still lacking. In this thesis, we propose two variants of the Team Orienteering Problem with task activation windows and a limited energy budget to formulate simultaneous task allocation and scheduling as an optimization problem. A complete mixed-integer linear programming (MILP) formulation for both variants is presented, implemented using the Gurobi Optimizer, and analyzed for scalability.
This work compares different objectives of the formulation, such as maximizing the number of tasks visited, minimizing the total distance travelled, and maximizing the reward, to suit various applications. Finally, analysis of the optimal solutions reveals trends in task selection based on travel cost, task completion rewards, the robots' energy levels, and the time remaining before a task becomes inactive.
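The coupling of quota, activation-window, and energy constraints can be illustrated with a small feasibility check: a task starts only once its quota of robots has arrived inside the window, and each robot must have battery for the travel plus the task duration. All field names and numbers below are invented placeholders, unlike the thesis's full MILP formulation.

```python
def feasible(task, robots):
    """Can `task` be staffed by its quota of robots within its window?"""
    arrivals = []
    for r in robots:
        travel = abs(task["x"] - r["x"]) + abs(task["y"] - r["y"])  # grid travel time
        arrive = max(r["free_at"] + travel, task["open"])
        in_window = arrive + task["duration"] <= task["close"]
        has_energy = travel + task["duration"] <= r["energy"]
        if in_window and has_energy:
            arrivals.append(arrive)
    if len(arrivals) < task["quota"]:
        return False
    arrivals.sort()
    # The quota-th earliest arrival fixes the joint start of the team effort.
    return arrivals[task["quota"] - 1] + task["duration"] <= task["close"]

TASK = {"x": 5, "y": 5, "open": 10, "close": 30, "duration": 8, "quota": 2}
ROBOTS = [
    {"x": 0, "y": 0, "free_at": 0, "energy": 40},   # arrives at t=10
    {"x": 4, "y": 5, "free_at": 12, "energy": 20},  # arrives at t=13
    {"x": 9, "y": 9, "free_at": 0, "energy": 10},   # too little energy left
]
```

The MILP embeds exactly these couplings as linear constraints over binary visit variables; the checker only verifies one candidate staffing.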
45

Light software services for dynamic partial reconfiguration in FPGAs

Xu, Yan. 13 March 2014
This thesis addresses architectures containing dynamically and partially reconfigurable FPGAs. In such architectures, the complexity and low portability of application development are mainly due to the tight coupling between reconfiguration management and the computation itself. By proposing 1) a new abstraction layer, called the Hardware Component Manager (HCM), and 2) a Scalable Communication Mechanism (SCM), we clearly separate the allocation of a hardware function from the control of the reconfiguration procedure. This reduces the impact of dynamic reconfiguration management on the application code, which greatly simplifies the use of FPGA platforms.
Applications using the HCM and the SCM can also be transparently ported to multi-user and/or multi-FPGA systems. The implementation of this HCM layer and the SCM mechanism on realistic simulation platforms demonstrates their ability to ease the management of FPGA flexibility while preserving performance and ensuring hardware function protection. The HCM and SCM implementations and their simulation environment are open-source in the hope of reuse by the community.
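To give a feel for the separation the HCM provides, here is a deliberately simplified Python stand-in: the application asks for a hardware function by name and never drives the reconfiguration procedure itself. Every name and method here is invented for illustration and is not the thesis's actual API.

```python
class HardwareComponentManager:
    """Toy stand-in for an HCM-style abstraction layer."""

    def __init__(self, regions):
        # Maps each partially reconfigurable region to its loaded function.
        self.regions = {r: None for r in regions}

    def allocate(self, function):
        # Reuse a region already configured with this function, if any.
        for region, loaded in self.regions.items():
            if loaded == function:
                return region
        # Otherwise pick a free region (or evict the first one) and
        # "reconfigure" it -- in real hardware this would load a partial
        # bitstream; the caller never sees that procedure.
        region = next((r for r, f in self.regions.items() if f is None),
                      next(iter(self.regions)))
        self.regions[region] = function
        return region
```

Because callers only name the function they need, the same application code could, as the abstract claims, be moved to a platform with more regions or more FPGAs without change.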
46

Resource allocation and scheduling strategies using utility and the knapsack problem on computational grids

Vanderster, Daniel Colin. 25 March 2008
Computational grids are distributed systems composed of heterogeneous computing resources distributed geographically and administratively. These highly scalable systems are designed to meet the large computational demands of many users in science and business. This dissertation addresses problems related to the allocation of the computing resources that compose a grid. First, the design of a pan-Canadian grid is presented. The design exploits the maturing stability of grid deployment toolkits and introduces novel services for efficiently allocating the grid's resources. The challenges faced by this grid deployment motivate further exploration in optimizing grid resource allocations. The primary contribution of this dissertation is one such technique. By applying a utility model to the grid allocation options, it is possible to quantify the relative merits of the various possible scheduling decisions. Indeed, a number of utility heuristics are derived to provide quality-of-service policies on the grid; these implement scheduling policies that favour efficiency and also allow users to intervene with urgent tasks. Using this model, the allocation problem is then formulated as a knapsack problem, which allows rapid solution times and results in nearly optimal allocations. The combined utility/knapsack approach is first presented for the allocation of a single resource type: processors. Evaluating the approach with novel utility heuristics on both random and real workloads shows that it yields efficient schedules whose characteristics match the intended policies. Additionally, two design and analysis techniques are applied to optimize the design of the utility/knapsack scheduler; these techniques play a significant role in the practical adoption of the approach.
Next, the utility/knapsack approach is extended to the allocation of multiple resource types. This extension generalizes the grid allocation solution to a wider variety of resources, including processors, disk storage, and network bandwidth. The general technique, when combined with new heuristics for the varied resource types, is shown to outperform reference strategies. This dissertation concludes with a novel application of the utility/knapsack approach to fault-tolerant task scheduling. Computational grids typically feature many techniques for providing fault tolerance to grid tasks, including retrying failed tasks and replicating running tasks. By applying the utility/knapsack approach, the relative merits of these techniques can be quantified, and the overall number of failures can be decreased subject to resource cost considerations.
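The utility/knapsack formulation can be sketched with the classic 0/1 dynamic program: each schedulable option has a processor cost and a utility, and the scheduler keeps the subset of maximum total utility that fits the capacity. The costs and utilities below are invented placeholders for the dissertation's utility heuristics.

```python
def knapsack(items, capacity):
    # items: list of (processor_cost, utility); classic 0/1 knapsack DP.
    best = [0] * (capacity + 1)
    chosen = [[] for _ in range(capacity + 1)]
    for idx, (cost, utility) in enumerate(items):
        for c in range(capacity, cost - 1, -1):  # reverse: each item used once
            if best[c - cost] + utility > best[c]:
                best[c] = best[c - cost] + utility
                chosen[c] = chosen[c - cost] + [idx]
    return best[capacity], chosen[capacity]

TASKS = [(4, 10), (3, 7), (2, 5), (5, 13)]  # (processors requested, utility)
```

With 8 processors available, the DP picks tasks 1 and 3 for a total utility of 20; the dissertation's contribution lies in how the utility values themselves encode quality-of-service policy, not in the solver.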
47

Scheduling and routing in grid networks and data networks

Κόκκινος, Παναγιώτης. 05 January 2011
Grid networks consist of several high-capacity computational, storage, and other resources, which are geographically distributed and may belong to different administrative domains. These resources are usually connected through high-capacity optical networks.
The evolution of grid networks follows the current trend toward distributed computation and storage. This trend gives scientists, researchers, and ordinary users around the world new possibilities to use shared resources for executing their tasks and running their applications, operations that are not always possible on local, limited-capacity resources. In this thesis we study issues related to the scheduling of tasks and the routing of their datasets, both separately and jointly, along with their interactions. Initially, we present a Quality of Service (QoS) framework for grids that guarantees users an upper bound on the execution delay of their submitted tasks. Such delay guarantees imply that a user can choose, with absolute certainty, a resource that will execute a task before its deadline expires. Our framework is not based on advance reservation of resources; instead, users follow a self-constrained task generation pattern, agreed separately with each resource during a registration phase. We validate the proposed framework experimentally, verifying that it satisfies the delay guarantees promised to users. In addition, with the proposed extensions, the framework provides delay guarantees even without exact a-priori knowledge of the task workloads. Next, we examine a task scheduling and data migration problem for grid networks, which we refer to as the Data Consolidation (DC) problem. Data Consolidation arises when a task concurrently requests multiple pieces of data, possibly scattered throughout the grid network, that must be present at a selected site before the task's execution starts. In such a case, the scheduler must select the data replicas to be used, the site where these data will be gathered for the task to be executed, and the routing paths to be followed.
We propose and experimentally evaluate several Data Consolidation schemes. Some consider only the computational or only the communication requirements of the tasks, while others consider both. We also propose DC schemes based on Minimum Spanning Trees (MST) that route the datasets concurrently so as to reduce the congestion these transfers may cause. Our simulation experiments validate the proposed schemes and show that if the Data Consolidation operation is performed efficiently, significant benefits can be achieved in terms of resource utilization and task delay. We also consider the use of resource information aggregation in grid networks. We propose a number of aggregation schemes and operators for reducing the information exchanged in a grid network and used by the resource manager to make efficient scheduling decisions. These schemes can be integrated with the schemes used in hierarchical data networks for data routing, providing interoperability between different grid networks while keeping the sensitive or detailed information of resource providers private. We perform a large number of experiments to evaluate the proposed aggregation schemes and operators. As a metric of the quality of the aggregated information we introduce the Stretch Factor (SF), defined as the ratio of the task delay when the task is scheduled using complete resource information to the task delay when an aggregation scheme is used. We also measure the number of resource information updates triggered by each aggregation scheme and the amount of resource information transferred. In addition, we examine the difficulties encountered, and the solutions provided, in deploying scheduling policies initially developed in a simulation environment in the gLite grid middleware.
We identify two important implementation issues: the inaccuracy of the information provided to the scheduler by the information system, and the inflexibility in sharing a resource among different jobs. Our study indicates that simple changes in gLite's scheduling procedures can solve these and similar issues, yielding significant performance gains. We also investigate the use of virtualization technology in the gLite middleware, implementing and evaluating the proposed mechanisms in a small gLite testbed. Finally, we propose a multicost impairment-aware routing and wavelength assignment (IA-RWA) algorithm for optical networks, the main networking technology used today for interconnecting the grid's computational and storage resources. Physical impairments tend to degrade optical signal quality. The main characteristic of the proposed algorithm is that it calculates the quality of transmission (QoT) of a candidate lightpath by measuring several impairment-generating source parameters rather than by using complex formulas to directly account for the effects of physical impairments. This makes the approach more generic and more easily applicable to different conditions (modulation formats, bit rates). Our results indicate that the proposed IA-RWA algorithm can efficiently serve online traffic in an optical network and guarantee the transmission quality of the found lightpaths, with low running times. Overall, this thesis presents several novel mechanisms and algorithms for grid networks, while also revealing the variety of issues, and their interdependencies, that relate to the efficient operation of grid networks; handling them requires the cooperation of researchers, scientists, and engineers from various fields.
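The MST-based Data Consolidation schemes can be hinted at with Kruskal's algorithm: spanning the candidate sites with minimum-weight links gives the concurrent dataset transfers a low-cost tree, which is what limits congestion. The link weights below are invented for illustration.

```python
def kruskal(n, edges):
    # edges: (weight, u, v); returns (tree edges, total weight).
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    tree, total = [], 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:               # adding this link creates no cycle
            parent[ru] = rv
            tree.append((u, v))
            total += w
    return tree, total

LINKS = [(3, 0, 1), (1, 1, 2), (4, 0, 2), (2, 2, 3), (5, 1, 3)]  # (weight, site, site)
```

In the thesis's setting the weights would reflect link load or delay, so the tree routes the replicas toward the consolidation site over the least congested paths.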
48

Task Distribution in Workflow Systems Based on Induced Resource Selection

Silva, Rogério Sousa e. 12 September 2007
The assignment of tasks to the resources of a workflow system is called task distribution. Task distribution is an important activity in workflow systems, because each task must be performed by the appropriate resource in due time. There are several approaches to task distribution in workflow systems. This work innovates by applying a Link Analysis technique to the task distribution problem. Link Analysis is normally used to rank the results of a web query, with the ranking based on the relevance of the pages. This work presents the application of Link Analysis in the context of workflow task distribution: we propose a new task distribution algorithm (wf-hits) based on a Link Analysis algorithm and compare wf-hits against related approaches, considering both quantitative and qualitative aspects. The experiments show that using wf-hits improves workflow system performance by about 25% in quantitative terms while maintaining the same quality level as similar related works. / Master's degree in Computer Science
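The thesis does not give the wf-hits pseudocode, but the underlying HITS idea it builds on can be sketched as an iterative hub/authority computation over a bipartite resource-task graph. The following is a minimal illustrative sketch, not the actual wf-hits algorithm; the resource and task names are invented.

```python
# Hedged HITS-style sketch: rank workflow resources (hubs) and tasks
# (authorities) over a bipartite "resource performed task" edge list.
# This illustrates plain HITS, not the wf-hits variant from the thesis.

def hits_rank(edges, iterations=50):
    """edges: list of (resource, task) pairs.
    Returns (hub scores per resource, authority scores per task)."""
    resources = {r for r, _ in edges}
    tasks = {t for _, t in edges}
    hub = {r: 1.0 for r in resources}
    auth = {t: 1.0 for t in tasks}
    for _ in range(iterations):
        # authority: sum of hub scores of resources linked to the task
        for t in tasks:
            auth[t] = sum(hub[r] for r, t2 in edges if t2 == t)
        norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        for t in auth:
            auth[t] /= norm
        # hub: sum of authority scores of tasks the resource performed
        for r in resources:
            hub[r] = sum(auth[t] for r2, t in edges if r2 == r)
        norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        for r in hub:
            hub[r] /= norm
    return hub, auth

edges = [("alice", "review"), ("alice", "approve"),
         ("bob", "review"), ("carol", "approve")]
hub, auth = hits_rank(edges)
best = max(hub, key=hub.get)  # highest-scoring candidate resource
```

A distribution policy could then route the next pending task to the eligible resource with the highest hub score, which is the intuition the abstract attributes to Link Analysis.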
49

Evaluation of methodologies for the analysis of steam piping systems subjected to steam hammer loadings

HIPOLITO, FABIO C. 21 December 2016 (has links)
Thermo-hydraulic transient loadings of the steam hammer type are common events in steam piping systems, with great potential for catastrophic failures in power generation plants. Once the event starts, pressure waves are generated, usually with large amplitudes, causing high pressures in the system, noise, deformation, and fatigue, with the possibility of material and economic damage and, in extreme cases, fatalities. Industry procedures for analyzing this type of system consist of equivalent static analyses or response-spectrum analyses, with loadings characterized by analytical methods based on simplifying assumptions about the fluid and the flow. This work proposes analyzing the piping system through numerical integration with modal superposition, with the loading characterized by a numerical method based on the method of characteristics. Comparisons between the results obtained with the proposed methodology and the industry procedures show that, because of their high degree of conservatism, the industry procedures lead to oversized structures and piping and therefore additional design costs; the design is optimized by applying the methodology proposed in this work. / Master's dissertation in Nuclear Technology / IPEN/D / Instituto de Pesquisas Energéticas e Nucleares - IPEN-CNEN/SP
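The thesis does not reproduce its solver, but the method of characteristics it relies on can be illustrated with a minimal sketch: a single frictionless pipe, a reservoir upstream, and an instantaneously closed valve downstream. All parameters (wave speed `a`, initial velocity `V0`, head `H0`) are illustrative assumptions, not values from the work.

```python
# Hedged MOC sketch for a frictionless pipe: instantaneous valve closure
# at the downstream end produces the classic Joukowsky surge a*V0/g.
# Parameters and boundary conditions are illustrative assumptions.

def moc_step(H, V, a, g=9.81):
    """Advance head H [m] and velocity V [m/s] one MOC time step."""
    n = len(H)
    Hn, Vn = H[:], V[:]
    for i in range(1, n - 1):  # interior nodes: intersect C+ and C- lines
        Hn[i] = 0.5 * (H[i-1] + H[i+1]) + (a / (2*g)) * (V[i-1] - V[i+1])
        Vn[i] = 0.5 * (V[i-1] + V[i+1]) + (g / (2*a)) * (H[i-1] - H[i+1])
    # upstream reservoir: head fixed, velocity from the C- characteristic
    Hn[0] = H[0]
    Vn[0] = V[1] + (g / a) * (H[0] - H[1])
    # downstream valve closed instantly: zero velocity, head from C+
    Vn[-1] = 0.0
    Hn[-1] = H[-2] + (a / g) * V[-2]
    return Hn, Vn

a, V0, H0 = 1000.0, 2.0, 100.0     # wave speed, initial velocity, head
H, V = [H0] * 11, [V0] * 11        # steady initial state on 11 nodes
H, V = moc_step(H, V, a)
surge = H[-1] - H0                 # equals a*V0/g in this frictionless case
```

Stepping this scheme in time yields the pressure-wave history at the valve, which is the kind of loading the proposed methodology feeds into the modal-superposition structural analysis.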
50

Simulation and optimization for the integrated problem of specialist human resource allocation and task scheduling in a creative industry

Santos, André Luis Marques Ferreira dos 30 August 2017 (has links)
The internet has contributed to the emergence of new kinds of businesses that require a specific scientific approach to improve their operational results. This context gave rise to a new set of organizations focused on intellectual production, the so-called creative industries. One way to improve these intellectual-capital companies is to optimize the time of their human resources, here called Human Resources Specialists (HRS). Optimizing the HRS consists of identifying which resources must be allocated to each project and the order in which the tasks must be performed. In the literature this is known as the integrated problem of resource allocation and task scheduling, commonly studied in manufacturing as the job shop problem, but with very few references in the context of creative industries. The goal of this work is therefore to apply heuristic techniques to solve the HRS allocation and task scheduling problems within creative industries in an integrated way, making the model developed here relevant to research involving the use of intellectual capital. The model represents, through computer simulation, the operation of a competitive intelligence department and, using heuristics, determines the best scenarios for optimizing the system, that is, the scenarios that make it feasible to complete all projects in the shortest possible time.
The model was parameterized with real data collected over two years of thorough observation of the company that is the study object of this work. The results show that computational optimization methods can support decision-making, minimizing project completion times and identifying idle points of the system in dynamic environments, so the proposed model achieved its objective.
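The thesis does not publish its heuristic, but the integrated allocation-and-sequencing problem it describes can be illustrated with a common baseline: greedy list scheduling, where tasks are ordered longest-first and each is assigned to the eligible specialist who frees up earliest. Task names, durations, and skill sets below are invented for illustration.

```python
# Hedged sketch: greedy list scheduling of tasks onto specialist resources.
# Each task carries the set of specialists able to perform it; among the
# eligible ones we pick whoever becomes free earliest. This is a generic
# baseline heuristic, not the simulation-optimization model of the thesis.

def schedule(tasks, resources):
    """tasks: list of (name, duration, eligible_resources);
    resources: list of resource names.
    Returns (makespan, {task: (resource, start, end)})."""
    free_at = {r: 0.0 for r in resources}
    assignment = {}
    # longest-processing-time-first is a standard list-scheduling order
    for name, dur, eligible in sorted(tasks, key=lambda t: -t[1]):
        r = min(eligible, key=lambda x: free_at[x])  # earliest-free eligible
        start = free_at[r]
        free_at[r] = start + dur
        assignment[name] = (r, start, start + dur)
    return max(free_at.values()), assignment

tasks = [("report", 4, ["ana"]), ("scrape", 3, ["ana", "bo"]),
         ("model", 5, ["bo"]), ("review", 2, ["ana", "bo"])]
makespan, plan = schedule(tasks, ["ana", "bo"])  # makespan 7 here
```

In a simulation-optimization setting like the one described, such a heuristic would generate candidate schedules for each simulated scenario, and the scenarios would then be compared by their resulting makespan and resource idle time.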
