31

IMPROVEMENT OF PICKING OPERATIONS AND DEVELOPMENT OF WORK BALANCING MODEL

Meivert, Oscar, Klevensparr, Johan January 2014 (has links)
Purpose – The purpose of the thesis is to improve picking operations that kit materials to assembly lines and, based on that, to develop a model for work balancing in relation to varying demand. To achieve this purpose, the following questions of issue are answered: 1. What difficulties can exist in picking operations that kit materials to assembly lines? 2. How can difficulties in picking operations that kit materials to assembly lines be resolved? 3. How can a model be developed that facilitates improvements concerning the work balancing of picking operations in relation to varying demand?

Method – Major difficulties are identified through a summary of literature studies, interviews, observations and documentation performed at a case company. The collected data form the basis of the developed work-balancing model, which aims to allow users to work balance their picking operations.

Findings – A comparison between the literature studies, interviews and observations at the case company revealed five major difficulties: lack of parts, order handling, warehouse maintenance, standardization of picking operations and work balancing. Once the first three difficulties have been improved, processes should be standardized in order to approach process perfection. As the last step in the developed four-step improvement process, work balancing can then lead to improved resource utilization. Work balancing the picking operation at the case company through the developed model resulted in a reduced balancing loss of 39%.

Research limitations – The conducted case study is structured as a holistic single case study. Since companies differ, multiple case studies would be needed in order to generalize the results. The authors would also have liked to extend the literature studies to identify all difficulties and alternative solutions, but this was not possible within the scale of the work and its time limitations.

Further research – A similar investigation should also take into account factors such as lack of parts, picking errors, deficiencies and other interruptions when gathering time-study data, in order to obtain an accurate reflection of the company through more precise measurements.

Key words – Lean production, work balancing, kitting, improvements, standardization, picking operation, varying demand
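The abstract reports a reduced balancing loss of 39% but does not reproduce the model itself. Balancing loss, however, is a standard line-balancing measure, and a minimal Python sketch of the arithmetic (the station times below are invented purely for illustration and are not from the thesis) looks like this:

```python
def balancing_loss(station_times):
    """Classic line-balancing loss: the idle share of total paid capacity.

    station_times: work content (e.g. picking minutes per cycle) per picker/station.
    """
    n = len(station_times)
    cycle = max(station_times)          # the slowest station sets the pace
    total_work = sum(station_times)
    return (n * cycle - total_work) / (n * cycle)

# Example: four pickers kitting for an assembly line, minutes of work per cycle.
before = [42, 35, 28, 22]   # unbalanced assignment
after  = [32, 32, 32, 31]   # re-balanced, same total work content
print(f"loss before: {balancing_loss(before):.1%}")   # ~24.4%
print(f"loss after:  {balancing_loss(after):.1%}")    # ~0.8%
```

The developed model additionally has to cope with varying demand, i.e. re-running such a calculation as the picking workload changes; that part is not shown here.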
32

A computerized methodology for balancing and sequencing mixed model stochastic assembly lines /

Pantouvanos, John P., January 1992 (has links)
Thesis (M.S.)--Virginia Polytechnic Institute and State University, 1992. / Vita. Abstract. Includes bibliographical references (leaves 63-66). Also available via the Internet.
33

Improved load-balancing for a chord-based peer-to-peer storage system in a cluster environment

Chen, Fu January 2015 (has links)
The thesis investigates the deployment of a Peer-to-Peer storage system in a cluster environment, in which machines have good, persistent network connections, in order to provide the functionality of a data centre. For various reasons, the implementation is based on the Peer-to-Peer system known as Chord. Chord naturally provides storage load-balancing, especially if its virtual node scheme is used, but this needs to be improved if Chord is used to implement a storage system. A novel, threshold-based storage load-balancing scheme is proposed. Each machine in the system contributes a fixed amount of disk storage space to the Peer-to-Peer storage system. The system commences operation in the normal Chord manner except that two distinct sets of tables are initialised, one to maintain the usual Chord Ring, and one to maintain proximity information about the machines in the system. As files are inserted, the collective storage space gradually fills up. When any machine reaches the threshold for usage of its contributed space, the system behaviour is modified. Attempts are made, repeatedly if necessary, to migrate virtual nodes from heavily loaded machines to less-heavily loaded machines elsewhere in the system. The proximity information is used so as to minimise the costs of this migration. The nature of the proximity information is complex, and a Space-Filling Curve is utilised to reduce the complexity. For reasons of effectiveness, demonstrated by an evaluation against other kinds of Space-Filling Curve, the Hilbert curve is specifically chosen. The performance of the resulting implementation is evaluated in a practical experimental environment which consists of five teaching laboratories in the author’s school. Under the specific conditions of the experiments, the new system achieves significantly better distribution of storage utilisation across the participating machines and also defers the onset of unreliable behaviour in the system. In one experiment, the amount of the total storage space available that is actually utilised by the system increased from ∼ 43% to ∼ 62% using the proposed mechanism. The parameters used in the experiments have been chosen somewhat arbitrarily, so it is possible that even better results might be feasible.
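The abstract does not give the migration policy in detail, so the following is only a sketch of the general idea under stated assumptions: a fixed usage threshold triggers migration of a virtual node from an overloaded machine to a lightly loaded one, and the `proximity` callback is a placeholder standing in for the Hilbert space-filling-curve ordering the thesis actually uses.

```python
from dataclasses import dataclass, field

THRESHOLD = 0.85   # assumed trigger: fraction of contributed space in use

@dataclass
class Machine:
    name: str
    capacity_gb: float
    used_gb: float = 0.0
    vnodes: dict = field(default_factory=dict)   # virtual node id -> GB stored

    @property
    def utilisation(self):
        return self.used_gb / self.capacity_gb

def rebalance(machines, proximity):
    """Migrate virtual nodes off machines that crossed the usage threshold.

    proximity(a, b) -> smaller means closer; stands in for the Hilbert-curve
    ordering of machine proximity maintained alongside the Chord ring.
    """
    if len(machines) < 2:
        return
    for m in machines:
        while m.utilisation > THRESHOLD and m.vnodes:
            # lightest machines first, ties broken by proximity to the source
            target = min((t for t in machines if t is not m),
                         key=lambda t: (t.utilisation, proximity(m, t)))
            if target.utilisation >= THRESHOLD:
                break                     # nowhere sensible left to move data
            vid, size = max(m.vnodes.items(), key=lambda kv: kv[1])
            del m.vnodes[vid]
            m.used_gb -= size
            target.vnodes[vid] = size
            target.used_gb += size
```

In the thesis the proximity tables are maintained alongside the normal Chord ring and migration is retried as often as needed; the sketch compresses that into a single pass.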
34

Dynamic scheduling in multicore processors

Rosas Ham, Demian January 2012 (has links)
The advent of multi-core processors, particularly with projections that numbers of cores will continue to increase, has focused attention on parallel programming. It is widely recognized that current programming techniques, including those that are used for scientific parallel programming, will not allow the easy formulation of general purpose applications. An area which is receiving interest is the use of programming styles which do not have side-effects. Previous work on parallel functional programming demonstrated the potential of this to permit the easy exploitation of parallelism. This thesis investigates a dynamic load balancing system for shared memory Chip Multiprocessors. This system is based on a parallel computing model called SLAM (Spreading Load with Active Messages), which makes use of functional language evaluation techniques. A novel hardware/software mechanism for exploiting fine grain parallelism is presented. This mechanism comprises a runtime system which performs dynamic scheduling and synchronization automatically when executing parallel applications. Additionally the interface for using this mechanism is provided in the form of an API. The proposed system is evaluated using cycle-level models and multithreaded applications running in a full system simulation environment.
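SLAM's hardware/software mechanism is not described in enough detail in the abstract to reproduce. The toy Python sketch below only illustrates the shape of a spawn/sync API in which a runtime, rather than the application, decides where and when fine-grained tasks run; `spawn`, `sync` and the thread pool are stand-ins, not the thesis' actual interface.

```python
from concurrent.futures import ThreadPoolExecutor

_pool = ThreadPoolExecutor()      # stands in for the dynamic-scheduling runtime

def spawn(fn, *args):
    """Create a fine-grained task; the runtime decides where and when it runs."""
    return _pool.submit(fn, *args)

def sync(futures):
    """Block until the spawned tasks finish and collect their results."""
    return [f.result() for f in futures]

# Side-effect-free tasks in the functional spirit: square a batch of numbers.
squares = sync([spawn(lambda x: x * x, i) for i in range(8)])
print(squares)    # [0, 1, 4, 9, 16, 25, 36, 49]
```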
35

Dynamická metrika v OSPF sítích / Dynamic Metric in OSPF Networks

Mácha, Tomáš January 2016 (has links)
The massive growth of the Internet has led to increased demands on reliable network infrastructure. The efficiency of communication in a network depends on the ability of routers to determine the best path for sending and forwarding packets to the end device. Since OSPF is currently one of the most widely used routing protocols, any contribution that helps it keep pace with the rapidly changing Internet environment is very welcome. A significant limitation of the OSPF protocol is, among other things, that its metric calculation algorithm has no awareness of the current load on a link. This property represents a weak point that negatively affects network performance. For this reason, a new method was proposed, based on dynamic adaptation to changing network conditions and an alternative OSPF metric strategy. The proposed method addresses the OSPF metric's unawareness of network traffic and the problem of inappropriately loaded links that reduce network performance. The thesis also provides a practical realization, in which the properties of the new method are tested and verified by running the algorithm on real devices.
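The abstract does not state the proposed metric formula, so the sketch below is only an illustration of the idea: the standard OSPF cost depends on nominal bandwidth alone, while a load-aware variant inflates the cost of busy links so that the shortest-path computation steers traffic away from them. The `alpha` scaling is an assumption for illustration, not the thesis' method.

```python
REFERENCE_BW_BPS = 100_000_000      # classic OSPF reference bandwidth (100 Mb/s)

def static_ospf_cost(link_bw_bps):
    """Standard OSPF interface cost: depends only on nominal bandwidth."""
    return max(1, REFERENCE_BW_BPS // link_bw_bps)

def dynamic_ospf_cost(link_bw_bps, utilisation, alpha=4):
    """Illustrative load-aware variant: cost grows with measured utilisation
    (a value in [0, 1]); alpha controls how aggressively congestion is penalised."""
    penalty = 1.0 + alpha * utilisation
    return max(1, round(static_ospf_cost(link_bw_bps) * penalty))

# A 100 Mb/s link: cost 1 when idle, noticeably worse at 80% load.
print(static_ospf_cost(100_000_000))             # 1
print(dynamic_ospf_cost(100_000_000, 0.80))      # 4
```

Any load-based metric also has to damp its updates to avoid route oscillation; that aspect is not shown here.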
36

Dynamic Load Balancing of Virtual Machines Hosted on Xen

Wilcox, Terry Clyde 10 December 2008 (has links) (PDF)
Currently, systems of virtual machines are load balanced statically, which can create load imbalances for systems where the load changes dynamically over time. For the throughput and response time of a system to be maximized, it is necessary for load to be evenly distributed among each part of the system. We implement a prototype policy engine for the Xen virtual machine monitor which can dynamically load balance virtual machines. We compare the throughput and response time of our system using the CPU2000 and WEB2005 benchmarks from SPEC. Under the loads we tested, dynamic load balancing had 5%-8% higher throughput than static load balancing.
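The abstract does not spell out the policy engine's rules. Below is a minimal sketch of one plausible dynamic policy (the 20% imbalance margin and the load representation are assumptions): it picks a VM to migrate from the most loaded to the least loaded host. In a real deployment the chosen VM would then be live-migrated with the Xen toolstack of that era (e.g. `xm migrate --live`).

```python
IMBALANCE_MARGIN = 0.20   # assumed threshold: only act on a clear imbalance

def pick_migration(hosts):
    """hosts: {hostname: {vm_name: cpu_load_fraction}}.
    Returns (vm, source, destination) or None if the cluster is balanced enough.
    """
    load = {h: sum(vms.values()) for h, vms in hosts.items()}
    src = max(load, key=load.get)
    dst = min(load, key=load.get)
    gap = load[src] - load[dst]
    if gap < IMBALANCE_MARGIN:
        return None
    # move the busiest VM whose load does not overshoot the midpoint
    candidates = [v for v, l in hosts[src].items() if l <= gap / 2]
    if not candidates:
        return None
    vm = max(candidates, key=hosts[src].get)
    return vm, src, dst

print(pick_migration({
    "host-a": {"vm1": 0.6, "vm2": 0.3},
    "host-b": {"vm3": 0.2},
}))   # ('vm2', 'host-a', 'host-b')
```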
37

Balancing the Ticket: How Selecting A Vice President Has Changed in the Modern Era

Boxleitner, Jon Arthur 07 January 2009 (has links)
Over the past century, the role of the vice presidency has increased drastically, to the point that some view the president and the vice president as a co-presidency. When this started and who perpetuated the change is up for debate, but the fact that the vice presidency and the vice-presidential selection process have increased in visibility and importance is not. This project analyzes the changes that occurred in the selection of vice-presidential running mates over the last four decades by comparing the news coverage of the vice-presidential selection process in the years 1968 and 2000. What characteristics (such as ideology, compatibility, moral character, experience, etc.) do the media value most when reporting on the vice-presidential selection? The study observes the presidential election-year months of March through December in order to acquire data from the time the veepstakes speculation starts (after a presidential candidate secures enough delegates to win the nomination) until after the general election, when the electoral impact of the vice-presidential choice can be interpreted. / Master of Arts
38

Optimal Synthesis and Balancing of Linkages

Sutherland, George 10 1900 (has links)
The problems of dimensional synthesis and of balancing of linkages are formulated as multifactor optimization problems. Using the new techniques developed in the thesis to solve these problems, a general computer program has been written to serve as a design aid for such problems. A guide to usage and complete documentation for this computer program are included in the thesis. / Thesis / Master of Engineering (MEngr)
39

Energy Aware Size Interval Task Based Assignment

Moore, Maxwell January 2022 (has links)
A thesis about reducing response time costs while also respecting the electrical costs of a homogeneous multi-server system. / In this thesis we consider the impact of energy costs as they relate to Size Interval Task Assignment Equally-loaded (SITA-E) systems. We find that for systems processing both small and large jobs (high-variance systems), we can in some cases obtain savings in energy costs as well as in the mean response time of the system. We achieve this by starting from SITA-E, wherein servers are always on, and moving to Electrically Aware SITA-E (EA-SITA-E) by checking whether it is beneficial to make any of our servers rotate between being on and being off as needed. When it is most beneficial to do so, we turn off some of the servers in question; after this is completed, we reallocate some of the jobs from the servers that will be cycling to servers that remain on indefinitely, to make better use of their idle time. This also lowers the mean response time below what we originally saw with SITA-E, by lowering the variance in the sizes of jobs seen by the servers with the longest jobs. These long-job servers are by far the most affected by the variance in job sizes, so it is very desirable to lower this variance. The algorithm presented here can provide benefits in terms of both energy costs and mean response time under some specific conditions. We also discuss the effect of errors in our assumed knowledge of task sizes. This research contributes methodology that may be used to expand on EA-SITA-E system design and analysis in the future. / Thesis / Master of Science (MSc) / The intention of this research is to improve on existing size-interval task-based assignment policies. We try to improve by turning servers off at key times to save energy costs, while not sacrificing too much in terms of the mean response time of the servers, and in some cases even improving the mean response time through an intelligent re-balancing of the server loads.
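EA-SITA-E's on/off cycling decision cannot be reconstructed from the abstract alone, but the SITA-E dispatch rule it builds on is standard: size cutoffs split jobs into intervals, one per server, chosen so that each interval carries an equal share of the total work. A minimal sketch follows; the empirical-sample approach and the toy size distribution are assumptions for illustration.

```python
import bisect

def sita_e_cutoffs(job_sizes, n_servers):
    """Choose size cutoffs so each server's interval carries an equal share of
    the total work (the 'equally loaded' rule), from an empirical sample."""
    jobs = sorted(job_sizes)
    total = sum(jobs)
    cutoffs, acc, k = [], 0.0, 1
    for size in jobs:
        acc += size
        if k < n_servers and acc >= k * total / n_servers:
            cutoffs.append(size)
            k += 1
    return cutoffs                      # n_servers - 1 boundaries

def dispatch(job_size, cutoffs):
    """Route a job to the server whose size interval contains it."""
    return bisect.bisect_left(cutoffs, job_size)   # server index 0 .. n-1

sample = list(range(1, 101))            # toy job-size sample: sizes 1..100
cuts = sita_e_cutoffs(sample, 3)        # -> [58, 82]
print(dispatch(30, cuts), dispatch(70, cuts), dispatch(95, cuts))   # 0 1 2
```

The energy-aware step would then decide which of these servers can afford to cycle off and shift part of their interval to servers that stay on; that logic is not shown here.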
40

Integrating Algorithmic and Systemic Load Balancing Strategies in Parallel Scientific Applications

Ghafoor, Sheikh Khaled 13 December 2003 (has links)
Load imbalance is a major source of performance degradation in parallel scientific applications. Load balancing increases the efficient use of existing resources and improves the performance of parallel applications running in distributed environments. At a coarse level of granularity, advances in runtime systems for parallel programs have been proposed in order to control available resources as efficiently as possible by utilizing idle resources and using task migration. At a finer level of granularity, advances in algorithmic strategies for dynamically balancing computational loads by data redistribution have been proposed in order to respond to variations in processor performance during the execution of a given parallel application. Algorithmic and systemic load balancing strategies have complementary sets of advantages. An integration of these two techniques is possible and should result in a system that delivers advantages over either technique used in isolation. This thesis presents the design and implementation of a system that combines an algorithmic, fine-grained, data-parallel load balancing strategy called Fractiling with a systemic, coarse-grained, task-parallel load balancing system called Hector. It also reports experimental results from running N-body simulations under this integrated system. The experimental results indicate that a distributed runtime environment which combines both algorithmic and systemic load balancing strategies can provide performance advantages with little overhead, underscoring the importance of this approach in large complex scientific applications.
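Fractiling itself (a combination of factoring and tiling) is not reproduced here; the sketch below only illustrates the finer-grained, algorithmic half of the idea under simple assumptions: between steps, each processor's data share is recomputed in proportion to its measured throughput, so slower processors receive less work. The systemic half, Hector's task migration, is outside the scope of the sketch.

```python
def repartition(total_items, measured_rates):
    """Assign each processor a share of the data proportional to its measured
    throughput from the previous step (items processed per second).
    Returns a list of item counts, one per processor, summing to total_items.
    """
    total_rate = sum(measured_rates)
    shares = [int(total_items * r / total_rate) for r in measured_rates]
    shares[0] += total_items - sum(shares)    # give the rounding remainder to one rank
    return shares

# One processor slowed to half speed between timesteps: its slice shrinks.
print(repartition(1_000_000, [100.0, 100.0, 100.0, 100.0]))  # even split
print(repartition(1_000_000, [100.0, 100.0, 100.0,  50.0]))  # last rank gets less
```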
