  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Two-sided Assembly Line Balancing Models And Heuristics

Arikan, Ugur 01 September 2009 (has links) (PDF)
This study focuses on two-sided assembly line balancing problems of type-I and type-II. This problem is encountered in production environments where a two-sided assembly line is used to produce physically large products. In type-I problems there is a specified production target for a fixed time interval, and the objective is to reach this production capacity with the minimum assembly line length. The type-II problem, on the other hand, focuses on reaching the maximum production level using a fixed assembly line and workforce. Two mathematical models are developed for each problem type to solve the problems optimally. Since the solution quality of the mathematical models degrades on large instances due to time and memory limitations, two heuristic approaches are presented for solving large type-I instances. The validity of all formulations is verified on small problems from the literature, and the performance of the methods introduced is tested on large problems from the literature.
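A type-I heuristic of the kind the abstract describes can be illustrated with a minimal greedy sketch (the data, the longest-task-first rule, and the station model are assumptions for illustration, not the thesis's actual models or heuristics). Tasks carry a duration, a side restriction ("L", "R", or "E" for either side), and a set of predecessors; stations come in mated left/right pairs, each side holding at most one cycle time of work, and the heuristic opens a new pair only when nothing fits.

```python
# Toy greedy sketch of type-I two-sided assembly line balancing
# (hypothetical data and rule; assumes every task fits in one cycle).

def balance(tasks, cycle_time):
    """tasks: {name: (duration, side, {predecessors})} -> list of mated stations."""
    done = set()
    stations = []          # each station is {"L": [...], "R": [...]}
    load = []              # per-station used time {"L": t, "R": t}
    while len(done) < len(tasks):
        # tasks whose predecessors have all been assigned
        ready = [t for t in tasks if t not in done and tasks[t][2] <= done]
        placed = False
        for t in sorted(ready, key=lambda t: -tasks[t][0]):  # longest first
            dur, side, _ = tasks[t]
            sides = ["L", "R"] if side == "E" else [side]
            for st, ld in zip(stations, load):
                for s in sides:
                    if ld[s] + dur <= cycle_time:
                        st[s].append(t)
                        ld[s] += dur
                        done.add(t)
                        placed = True
                        break
                if placed:
                    break
            if placed:
                break
        if not placed:     # nothing fits: open a new mated pair of stations
            stations.append({"L": [], "R": []})
            load.append({"L": 0, "R": 0})
    return stations

example = {
    "a": (4, "E", set()), "b": (3, "L", {"a"}),
    "c": (3, "R", {"a"}), "d": (5, "E", {"b", "c"}),
}
print(len(balance(example, 6)))  # number of mated station pairs used
```

Minimizing line length then corresponds to minimizing the number of mated station pairs the greedy opens; the thesis's heuristics pursue the same objective with more sophisticated rules.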
82

Parallel Computing for Applications in Aeronautical CFD

Ytterström, Anders January 2001 (has links)
No description available.
83

Industriella distributionssystem för kyla : Kartläggning,injustering och varvtalsreglering / Industrial distribution system for cooling : Investigation, balancing and speed control

Nilsson, Gustav January 2015 (has links)
Most industries have some type of cooling system serving industrial processes and comfort cooling. The function and efficiency of these systems vary, and when cooling problems occur at a plant there can be several causes: insufficient cooling capacity, incorrectly balanced coolant flows, or an undersized distribution system and circulation pump. This thesis was carried out in cooperation with Stora Enso Skoghall, and the purpose of the study is to map and analyze the capacity of the plant's cooling system, both in terms of produced cooling power and of the possibilities for distribution across the facility. With better knowledge of how the system works, optimization measures can be proposed to reduce the risk of cooling problems and costly production limitations. Since the distribution system is old and has been extended in stages, there is also reason to believe that modern technology could make it more energy efficient. Making the system more efficient saves resources, which is an important step towards sustainable development. To get an overview of the system's function and problems, the cooling demand to be served was mapped and compared with the production capacity. Flow measurements were performed on the system to chart the actual flows and compare them with the design values, in order to judge how well balanced the system is. The investigation showed that there is currently enough cooling production capacity, but that there are relatively large differences between design and actual flows. It also emerged that the circulation pumps cannot currently deliver a total flow large enough to satisfy the design demand.
To examine how balancing would affect the pressure drops in the system, and to determine how the pumps should be sized when operating a balanced system, a pressure-drop model of the system was built from measurement data collected at the plant. To solve the problems, the system should be balanced; balancing ensures that the correct flow serves each demand. The pumps should be run so that the total design flow can be met, which can be achieved by parallel operation and increased pump speed. The investigation has also shown a savings potential that motivates speed-controlling the pumps against a proportional pressure curve, so that unnecessarily large coolant flows are not pumped around at unnecessarily high operating cost. The study has shown that a number of measures could reduce the electricity consumption of the pumps by up to 75% during certain periods.
84

Performance Benchmarking of Fast Multipole Methods

Al-Harthi, Noha A. 06 1900 (has links)
The current trends in computer architecture are shifting towards smaller byte/flop ratios, while available parallelism is increasing at all levels of granularity: vector length, core count, and MPI process count. Intel's Xeon Phi coprocessor, NVIDIA's Kepler GPU, and IBM's BlueGene/Q all have a byte/flop ratio close to 0.2, which makes it very difficult for most algorithms to extract a high percentage of the theoretical peak flop/s from these architectures. Popular algorithms in scientific computing, such as the FFT, are continuously evolving to keep up with this trend in hardware. In the meantime, it is also necessary to invest in novel algorithms that are better suited to the computer architectures of the future. The fast multipole method (FMM) was originally developed as a fast algorithm for approximating the N-body interactions that appear in astrophysics, molecular dynamics, and vortex-based fluid dynamics simulations. The FMM possesses a unique combination of being an efficient O(N) algorithm while having an operational intensity higher than that of a matrix-matrix multiplication. In fact, the FMM can reduce the byte/flop requirement to around 0.01, which means that it will remain compute bound until 2020 even if the current trend in microprocessors continues. Despite these advantages, there have not been any benchmarks of FMM codes on modern architectures such as Xeon Phi, Kepler, and BlueGene/Q. This study aims to provide a comprehensive benchmark of a state-of-the-art FMM code, "exaFMM", on the latest architectures, in the hope of providing a useful reference for deciding when the FMM will become useful as the computational engine in a given application code. It may also serve as a warning about problem-size domains where the FMM will exhibit insignificant performance improvements. Such issues depend strongly on the asymptotic constants rather than on the asymptotics themselves, and are therefore strongly implementation and hardware dependent. The primary objective of this study is to provide these constants on various computer architectures.
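The compute-bound argument can be phrased as a simple roofline-style check. In the sketch below, only the 0.2 byte/flop machine balance and the ~0.01 byte/flop FMM requirement are taken from the abstract; the peak and bandwidth numbers in the second function are placeholders for illustration.

```python
# Roofline-style sketch of the abstract's compute-bound argument.

def is_compute_bound(machine_byte_per_flop, kernel_byte_per_flop):
    """A kernel is compute bound when it needs fewer bytes per flop
    than the machine can supply."""
    return kernel_byte_per_flop <= machine_byte_per_flop

def attainable_flops(peak_flops, bandwidth_bytes_per_s, kernel_byte_per_flop):
    """Roofline model: min(peak, bandwidth / (bytes needed per flop))."""
    return min(peak_flops, bandwidth_bytes_per_s / kernel_byte_per_flop)

# A machine with 0.2 byte/flop running an FMM that needs ~0.01 byte/flop:
print(is_compute_bound(0.2, 0.01))  # the FMM stays compute bound
```

The same check explains the abstract's warning: once the asymptotic constants of a particular FMM implementation push its effective operational intensity down, the memory-bandwidth roof, not peak flop/s, limits performance.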
85

The Making of a Conceptual Design for a Balancing Tool

Eriksson, Jonas January 2014 (has links)
Balancing is usually done in the later phases of creating a game, to make sure everything comes together into an enjoyable experience. Most of the time balancing is done through a series of playthroughs by the designers or by outsourced play testers; the imbalances found are corrected, followed by more playthroughs. This method takes a lot of time and might therefore not find everything. In this study I use information gathered from interviews with experienced designers and from designer texts, along with features from methods frequently used to aid designers, to make a conceptual design of a tool aimed at simplifying the process of balancing and reducing the number of work hours that have to be spent on this phase.
86

Profit Oriented Disassembly Line Balancing

Altekin, Fatma Tevhide 01 January 2005 (has links) (PDF)
In this study, we deal with the profit-oriented partial disassembly line balancing problem, which seeks a feasible assignment of selected disassembly tasks to stations such that the precedence relations among the tasks are satisfied and the profit is maximized. We consider two versions of this problem. In the profit maximization per cycle problem (PC), we maximize the profit for a single disassembly cycle given the task times and costs, the part revenues and demands, and the station costs. We propose a heuristic solution approach for PC based on the linear programming relaxation of our mixed integer programming formulation. In the profit maximization over the planning horizon problem (PH), the planning horizon is divided into time zones, each of which may have a different disassembly rate and a different line balance. We also incorporate other issues such as the finite supply of discarded products, the availability of subassembly and released-part inventories, and smoothing of the number of stations across the zones. PH is decomposed into a number of successive per-cycle problems, which are solved by a similar heuristic approach. Computational analysis is conducted for both problems and results are reported.
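The "selected tasks" aspect, i.e. that a partial disassembly need not perform every task, can be illustrated with a toy greedy rule (hypothetical data; the thesis solves this with an LP relaxation of a MIP, not with this rule). Each task has a net profit (part revenue minus task cost) and predecessors, and the rule repeatedly takes the most profitable feasible task while any feasible task still adds profit.

```python
# Toy greedy for profit-oriented partial disassembly (illustration only).

def select_tasks(tasks):
    """tasks: {name: (net_profit, {predecessors})} -> (chosen, profit)."""
    chosen, profit = [], 0.0
    while True:
        ready = [t for t in tasks if t not in chosen
                 and tasks[t][1] <= set(chosen)]
        # best candidate by immediate net profit
        best = max(ready, key=lambda t: tasks[t][0], default=None)
        if best is None or tasks[best][0] <= 0:
            # stop when nothing feasible adds profit -- a myopic rule,
            # since a loss-making task might unlock later gains
            return chosen, profit
        chosen.append(best)
        profit += tasks[best][0]

example = {"housing": (1.0, set()),        # must come off first
           "board": (5.0, {"housing"}),
           "screen": (4.0, {"housing"})}
print(select_tasks(example))
```

The myopic stopping rule is exactly where such a greedy falls short of the thesis's MIP-based approach: an exact model can accept an unprofitable task (say, removing a housing at a loss) when the parts it unlocks more than pay for it, while the greedy would stop.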
87

Multipath Probabilistic Early Response TCP

Singh, Ankit 2012 August 1900 (has links)
Many computers and devices, such as smart phones, laptops and tablets, are now equipped with multiple network interfaces, enabling them to use multiple paths to access content over the network. If these resources could be used concurrently, the end-user experience could be greatly improved. Recent studies of MPTCP suggest that improved reliability, load balancing and mobility are feasible. The thesis presents a new multipath delay-based algorithm, MPPERT (Multipath Probabilistic Early Response TCP), which provides high throughput and efficient load balancing. In an all-PERT environment, MPPERT suffers no packet loss and maintains much smaller queue sizes than existing MPTCP, making it suitable for real-time data transfer. MPPERT is suitable for incremental deployment in a heterogeneous environment. It also presents a parametrized approach to tune the amount of traffic shifted off the congested path. A multipath approach benefits from having multiple connections between end hosts. However, it is desirable to keep the connection set minimal, as an increasing number of paths may not always provide a significant increase in performance; moreover, a higher number of paths unnecessarily increases the computational requirement. Ideally, we should suppress paths with low throughputs and avoid paths with shared bottlenecks. In the case of MPTCP, there is no efficient way to detect a common bottleneck between subflows. MPTCP applies a constraint of the best single-path TCP throughput to ensure a fair share at a common bottleneck link. The best-path throughput constraint, along with the traffic shift from more congested to less congested paths, provides a better opportunity for competing flows to achieve higher throughput. The disadvantage, however, is that even if there are no shared links, the same constraint decreases the overall achievable throughput of a multipath flow. PERT, being a delay-based TCP protocol, has continuous information about the state of the queue.
This information is valuable in enabling MPPERT to detect subflows sharing a common bottleneck and to obtain a smaller set of disjoint subflows. It can even be used to switch from coupled subflows (a set of subflows with interdependent increase/decrease of their congestion windows) to uncoupled subflows (independent increase/decrease of congestion windows), yielding higher throughput when the best single-path TCP constraint is relaxed. The ns-2 simulations support MPPERT as a highly competitive multipath approach, suitable for real-time data transfer, which is capable of offering higher throughput and improved reliability.
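The "probabilistic early response" idea can be sketched as follows. The sender estimates queuing delay as the smoothed RTT minus the minimum observed RTT and backs off with a probability that ramps up between two delay thresholds, reacting before any packet is lost. The thresholds and the linear ramp below are assumptions for the sketch, not the published MPPERT parameters.

```python
# Illustrative delay-based early response in the spirit of PERT
# (threshold values and the linear ramp are assumed, not from the thesis).

import random

def response_probability(srtt, min_rtt, t_low=0.005, t_high=0.050):
    """0 below t_low seconds of queuing delay, 1 above t_high,
    linear in between."""
    q = max(0.0, srtt - min_rtt)   # estimated queuing delay
    if q <= t_low:
        return 0.0
    if q >= t_high:
        return 1.0
    return (q - t_low) / (t_high - t_low)

def should_back_off(srtt, min_rtt, rng=random.random):
    """Probabilistically reduce the congestion window on this subflow."""
    return rng() < response_probability(srtt, min_rtt)
```

Because every subflow continuously computes such a delay signal, two subflows whose delay signals rise and fall together are likely crossing the same bottleneck queue, which is the observation behind MPPERT's shared-bottleneck detection.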
88

A case study of handling load spikes in authentication systems

Sverrisson, Kristjon January 2008 (has links)
The user growth in Internet services over the past years has caused a need to re-think methods for user authentication, authorization and accounting for network providers. To deal with this growing demand for Internet services, the underlying user authentication systems have to be able to, among other things, handle load spikes. This can be achieved by using load balancing, and there are both adaptive and non-adaptive methods of load balancing. This case study compares adaptive and non-adaptive load balancing for user authentication in terms of average throughput. To do this we set up a lab where we test two different load-balancing methods: a non-adaptive one and an adaptive one. The non-adaptive load-balancing method is simple, only using a pool of servers to which the load is directed in a round-robin way, whereas the adaptive method tries to direct the load using a calculation based on the previous requests.
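The two methods compared can be sketched in a few lines (hypothetical server model for illustration; the study tested real authentication servers, not this simulation). The non-adaptive balancer rotates through a fixed pool; the adaptive one picks the server with the fewest outstanding requests, which is one simple way to "calculate from previous requests".

```python
# Minimal sketches of non-adaptive vs. adaptive load balancing.

from itertools import cycle

def round_robin(servers):
    """Non-adaptive: hand out servers in a fixed rotation."""
    rotation = cycle(servers)
    return lambda: next(rotation)

def least_loaded(loads):
    """Adaptive: pick the server with the fewest outstanding requests,
    tracked from the requests already dispatched."""
    def pick():
        server = min(loads, key=loads.get)
        loads[server] += 1          # account for the new request
        return server
    return pick

pick_rr = round_robin(["a", "b", "c"])
pick_ad = least_loaded({"a": 2, "b": 0, "c": 1})
print(pick_rr(), pick_rr(), pick_rr(), pick_rr())  # a b c a
print(pick_ad())                                    # b
```

The difference matters under load spikes: round-robin keeps sending requests to a server that is already slow, while the adaptive rule steers new authentications away from it.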
89

Το πρόβλημα της εξισορρόπησης γραμμών συναρμολόγησης στο τραπεζικό περιβάλλον / The assembly line balancing problem in a banking environment

Κουτρούλη, Ευτυχία 10 June 2013 (has links)
In this thesis we examine the theoretical and empirical background of assembly line balancing problems, their modeling, and their applications. We analyze in detail the basic solution procedures for these problems as proposed in the international literature. In addition, we examine the applications of assembly lines in the production of goods and services. In particular, we study assembly line balancing in the service sector, with an empirical application to bank loans. The goal of the thesis is to show how a classical industrial production issue such as the assembly line balancing problem (encountered mainly in industries that mass-produce standardized products, such as the automotive industry) can also be applied in the service sector. A detailed recording of the stages involved in producing banking products such as loans is followed by a related case study, which is divided into two main cases. Using a series of methods, we solve the problem we formulate, seeking the optimal allocation of production factors to achieve the desired result. More specifically, the goal of the empirical approach is to determine the optimal number of employees needed for the loan-processing tasks, taking into account the precedence constraints on the assignment of the various tasks.
90

Energy-aware load balancing approaches to improve energy efficiency on HPC systems / Abordagens de balanceamento de carga ciente de energia para melhorar a eficiência energética em sistemas HPC

Padoin, Edson Luiz January 2016 (has links)
Current HPC systems have made more complex simulations feasible, yielding benefits to several research areas. To meet the increasing processing demands of these simulations, new equipment is being designed, aiming at the exaflops scale. A major challenge in building these systems is the power they will require, which current projections put at the gigawatt level. To address this problem, this thesis presents an approach to increase the energy efficiency of HPC resources, aiming to reduce the effects of load imbalance and to save energy. We developed an energy-aware strategy, called ENERGYLB, which considers the platform characteristics and the load irregularity and dynamicity of the applications to improve energy efficiency. Our strategy takes into account the current computational load and the clock frequency of the cores to decide whether to call a load balancing strategy that reduces load imbalance by migrating tasks, or to use Dynamic Voltage and Frequency Scaling (DVFS) to adjust the clock frequencies of the cores according to their weighted loads. As different processor architectures can feature two levels of DVFS granularity, per-chip DVFS or per-core DVFS, we created two different algorithms for our strategy. The first one, FG-ENERGYLB, allows fine control of the clock frequency of the cores on systems that have a few tens of cores and feature per-core DVFS. The second one, CG-ENERGYLB, is suitable for HPC platforms composed of several multicore processors that do not allow such fine-grained control, i.e., that only perform per-chip DVFS. Both approaches exploit residual imbalances in iterative applications and combine dynamic load balancing with DVFS techniques: they reduce the clock frequency of underloaded cores, which experience some residual imbalance even after tasks are remapped. We evaluated the applicability of our approaches using the CHARM++ parallel programming system on benchmarks and real-world applications. Experimental results show improvements in energy consumption and power demand over state-of-the-art algorithms. The energy savings with ENERGYLB used alone were up to 25% with our FG-ENERGYLB algorithm and up to 27% with our CG-ENERGYLB algorithm. Nevertheless, residual imbalances were still present after tasks were remapped; when our approaches were employed together with other load balancers, energy savings of up to 56% were achieved with FG-ENERGYLB and up to 36% with CG-ENERGYLB. These savings were obtained by exploiting residual imbalances in iterative applications. By combining dynamic load balancing with DVFS, our approach is able to reduce the average power demand of parallel systems, reduce task migration among the available resources, and keep load balancing overheads low.
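The migrate-or-scale decision at the heart of such a strategy can be sketched schematically (the imbalance metric and threshold below are assumptions for illustration; the actual FG-ENERGYLB and CG-ENERGYLB algorithms are defined in the thesis). When the load skew across cores is large enough to justify a rebalancing step, tasks are migrated; when only residual imbalance remains, the underloaded cores are slowed down via DVFS instead.

```python
# Schematic decision rule in the spirit of ENERGYLB (illustrative only).

def imbalance(loads):
    """max/avg - 1: zero for a perfectly balanced set of core loads."""
    avg = sum(loads) / len(loads)
    return max(loads) / avg - 1.0

def decide(loads, threshold=0.10):
    """'migrate' when the imbalance is worth a rebalancing step,
    otherwise 'dvfs' to lower the clock of underloaded cores."""
    return "migrate" if imbalance(loads) > threshold else "dvfs"

print(decide([10, 10, 30, 10]))  # heavy skew -> migrate tasks
print(decide([10, 10, 11, 10]))  # residual only -> scale frequencies
```

The point of the two-branch rule is that migration and frequency scaling attack different regimes: migration removes large imbalance but has an overhead, while DVFS converts small, unavoidable residual imbalance directly into power savings.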
