1

Feasibility Of A Supplementary Water Storage For Birkapili Hydroelectric Power Plant

Bozkurt, Melih 01 September 2011
Climate change concerns, high oil prices, and increasing government support are among the driving forces behind increasing renewable energy legislation, incentives, and commercialization. Hydroelectricity, electricity generated by hydropower, is the most widely used form of renewable energy. In this study, a storage facility is proposed to store additional water and increase the profitability of the existing Birkapili Hydroelectric Power Plant. The storage facility is composed of a gravity dam and an uncontrolled spillway. With the proposed storage facility, the water can be utilized fully and electricity generation can be shifted to peak demand periods. Consequently, the feasibility of the existing power plant is improved. A number of spillway alternatives are considered and the corresponding concrete gravity dam is designed. Stability analyses and operation studies are conducted using spreadsheets to arrive at an economical solution.
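The operation studies described above come down to energy and revenue bookkeeping around the standard hydropower relation P = ρ·g·Q·H·η. A minimal sketch (all figures hypothetical, not taken from the study):

```python
# Illustrative hydropower energy estimate: P = rho * g * Q * H * eta.
# All numbers are hypothetical, not taken from the Birkapili study.
RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def power_mw(flow_m3s: float, head_m: float, efficiency: float = 0.9) -> float:
    """Electrical power output in MW for a given flow and net head."""
    return RHO * G * flow_m3s * head_m * efficiency / 1e6

# A storage facility lets the plant pass the same daily volume through the
# turbines during peak hours only, when electricity prices are higher.
daily_volume = 4.0 * 24 * 3600                   # m^3/day at a steady 4 m^3/s
peak_hours = 6.0
peak_flow = daily_volume / (peak_hours * 3600)   # concentrated flow, m^3/s

print(power_mw(4.0, 150.0))        # steady around-the-clock operation
print(power_mw(peak_flow, 150.0))  # peaking operation, same daily volume
```

The energy produced per day is identical in both modes; the storage shifts when it is generated, which is where the improved profitability comes from.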
2

Optimering av platsutnyttjande för temporär lagring i en crossdocking-terminal : En fallstudie hos Lincargo AB / Optimization of space utilization for temporary storage in a cross-docking terminal : A case study at Lincargo AB

Lundgren, Hannes, Svensson, Rebecka January 2020
The purpose of this thesis was to explore how the temporary storage of goods within a cross-docking terminal can be improved by optimized slotting. An abductive method was used, with both qualitative and quantitative empirical data, and the analysis consisted mainly of the development of a software-based optimization model. The result consisted of three variants of the optimization model, each solved with five different weightings for the internal distance between the back-storage areas. Each run produced the best placement of goods the model could find within two hours. The results mainly provided insights into the importance of information systems and information flows, along with comparisons of storage strategies applied in practice against theoretically optimized strategies in terms of efficiency, feasibility, and suitability. The conclusions of the study are, among other things, that the fewer the restrictions and the lower the requirements on the placement of goods, the less waste the model produced. This also underlines the value of easily accessible information for handling a mixed back-storage. In addition, it was found that locking certain parameters can improve the result when the problem is too complicated to be solved to optimality. The short-term recommendations consisted of a proposed layout that is realizable today and will free up 27.2 m² of back-storage area. The long-term recommendations addressed the need for an improved information system to enable better future layouts through the current class-based storage, as well as opportunities for class division based on load carriers. Furthermore, changes to the overall terminal layout were proposed to provide better conditions for applying pre-existing optimization models.
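The slotting problem at the heart of the study can be illustrated with a deliberately tiny brute-force sketch: assign each good to a storage slot so that total pick-frequency-weighted travel distance is minimized. The thesis used a far larger software-based model; all data here is hypothetical:

```python
# Minimal slotting sketch: assign each good to one storage slot so that
# total (pick frequency x travel distance) is minimized. Brute force on a
# tiny instance; real terminals need a proper optimization model.
from itertools import permutations

pick_freq = {"A": 30, "B": 12, "C": 5}          # picks per day (hypothetical)
slot_dist = {"s1": 4.0, "s2": 9.0, "s3": 15.0}  # metres from the dock

def cost(assignment):
    return sum(pick_freq[g] * slot_dist[s] for g, s in assignment.items())

goods, slots = list(pick_freq), list(slot_dist)
best = min(
    (dict(zip(goods, p)) for p in permutations(slots)),
    key=cost,
)
print(best)        # the highest-frequency good gets the closest slot
print(cost(best))
```

Adding restrictions (e.g. fixed zones per goods class) can only raise this minimum cost, which mirrors the study's conclusion that fewer placement restrictions left less waste.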
3

Competition effects of simultaneous application of flexibility options within an energy community

Scheller, Fabian, Johanning, Simon, Reichardt, Sören, Reichelt, David G., Bruckner, Thomas 16 October 2023
As part of the increased diffusion of decentralized renewable energy technologies, an additional need for flexibility arises. Studies indicate that operating battery storage systems for multiple uses as a community electricity storage system (CES) promises superior benefits. This seems decisive, since cheaper flexibility options such as demand response (DR) are more readily applicable and might further reduce the market size for storage facilities. This paper aims to analyze the competition effects of a CES under simultaneous application of DR. The optimization results of the synthetic case studies provide insights into the profitability level, the service provision, and the flexibility potential. Even under the requested legal framework, a CES is only partially profitable; its economic situation improves with optimal storage utilization. This benefit, however, is reduced through competition effects with DR.
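The competition effect the paper analyzes can be caricatured in a few lines: if DR is dispatched first, the residual flexibility demand left for the CES shrinks, and with it the CES revenue. A minimal sketch with hypothetical numbers, not the paper's optimization model:

```python
# Toy sketch of the competition effect: a flexibility demand (kWh per hour)
# can be met either by demand response (DR, cheap but capped per hour) or
# by a community electricity storage (CES). DR is dispatched first,
# shrinking the energy (and revenue) left for the CES.
# All numbers are hypothetical, not from the paper's case studies.

flex_demand = [0, 3, 5, 8, 6, 2]   # kWh of flexibility needed per hour
price = 0.25                        # revenue per kWh of flexibility served

def ces_revenue(dr_cap_kwh: float) -> float:
    """CES revenue when DR serves up to dr_cap_kwh in each hour first."""
    residual = [max(0.0, d - dr_cap_kwh) for d in flex_demand]
    return price * sum(residual)

print(ces_revenue(0.0))  # no DR: the CES serves everything
print(ces_revenue(3.0))  # DR present: the CES market shrinks
```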
4

Automatic Optimization of Geometric Multigrid Methods using a DSL Approach

Vasista, Vinay V January 2017
Geometric Multigrid (GMG) methods are widely used in numerical analysis to accelerate the convergence of partial differential equation solvers using a hierarchy of grid discretizations. These solvers find plenty of applications in various engineering and scientific domains, where solving PDEs is of fundamental importance. Using multigrid methods, the pace at which the solvers arrive at the solution can be improved at an algorithmic level. With advances in modern computer architecture, solving problems of higher complexity and size is feasible; this is also the case with multigrid methods. However, since hardware support alone cannot achieve high performance in execution time, there is a need for good software that helps programmers do so. Multiple grid sizes and the recursive expression of multigrid cycles make the task of manual program optimization tedious and error-prone. A high-level language that aids domain experts in quickly expressing complex algorithms in a compact way, using dedicated constructs for multigrid methods and with good optimization support, is thus valuable. Typical computation patterns in a GMG algorithm include stencils, point-wise accesses, and restriction and interpolation of a grid. These computations can be optimized for performance on modern architectures using standard parallelization and locality enhancement techniques. Several past works have addressed the problem of automatic optimization of computations in various scientific domains using a domain-specific language (DSL) approach. A DSL is a language with features to express domain-specific computations and compiler support to enable optimizations specific to these computations. Halide and PolyMage are two recent works in this direction that aim to optimize image processing pipelines. Many computations, like upsampling and downsampling an image, are similar to interpolation and restriction in geometric multigrid methods.
In this thesis, we demonstrate how high performance can be achieved on GMG algorithms written in the PolyMage domain-specific language with new optimizations we added to the compiler. We also discuss the implementation of non-trivial optimizations in the PolyMage compiler necessary to achieve high parallel performance for multigrid methods on modern architectures. We realize these goals by:
• introducing multigrid domain-specific constructs to minimize the verbosity of the algorithm specification;
• remapping storage to reduce the memory footprint of the program and improve cache locality;
• mitigating execution time spent in data handling operations like memory allocation and freeing by reusing a pool of memory across multiple multigrid cycles; and
• incorporating other well-known performance techniques, like exploiting multi-dimensional parallelism and minimizing the lifetime of storage buffers.
We evaluate our optimizations on a modern multicore system using five different benchmarks varying in multigrid cycle structure, complexity, and size, for two- and three-dimensional data grids. Experimental results show that our optimizations:
• improve on the performance of the existing PolyMage optimizer by 1.31x;
• are better than straightforward parallel and vectorized implementations by 3.2x;
• are better than hand-optimized versions in conjunction with optimizations by Pluto, a state-of-the-art polyhedral source-to-source optimizer, by 1.23x; and
• achieve up to 1.5x speedup over the NAS MG benchmark from the NAS Parallel Benchmarks.
(Speedup numbers are geometric means over all benchmarks.)
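The stencil, restriction, and interpolation patterns mentioned above compose into the classic multigrid V-cycle. A didactic 1-D sketch in plain NumPy (not PolyMage code) for -u'' = f with zero boundary values:

```python
# Didactic 1-D geometric multigrid V-cycle for -u'' = f, u(0) = u(1) = 0,
# illustrating the stencil / restriction / interpolation pattern.
import numpy as np

def smooth(u, f, h, iters=3):
    """Weighted-Jacobi smoothing of the 3-point Laplacian stencil."""
    for _ in range(iters):
        u[1:-1] += 0.67 * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    """Full-weighting restriction onto the next coarser grid."""
    interior = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])
    return np.concatenate(([0.0], interior, [0.0]))

def prolong(e):
    """Linear interpolation back to the finer grid."""
    fine = np.zeros(2 * (len(e) - 1) + 1)
    fine[::2] = e
    fine[1::2] = 0.5 * (e[:-1] + e[1:])
    return fine

def v_cycle(u, f, h):
    if len(u) <= 3:                    # coarsest grid: solve directly
        u[1] = 0.5 * h * h * f[1]
        return u
    u = smooth(u, f, h)                # pre-smoothing
    r2 = restrict(residual(u, f, h))   # coarse-grid residual
    e2 = v_cycle(np.zeros_like(r2), r2, 2 * h)
    u += prolong(e2)                   # coarse-grid correction
    return smooth(u, f, h)             # post-smoothing

n = 64
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi ** 2 * np.sin(np.pi * x)     # exact solution: sin(pi * x)
u = np.zeros(n + 1)
for _ in range(10):
    u = v_cycle(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))  # down to discretization error
```

Each V-cycle touches every grid in the hierarchy once, which is exactly the mix of multi-size grids and recursion that makes manual optimization of such codes tedious.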
5

Automatic Storage Optimization of Arrays in Affine Loop Nests

Bhaskaracharya, Somashekaracharya G January 2016
Efficient memory usage is crucial for data-intensive applications, as a smaller memory footprint ensures better cache performance and allows one to run a larger problem size given a fixed amount of main memory. The solutions found by existing techniques for automatic storage optimization for arrays in affine loop-nests, which minimize the storage requirements for the arrays, are often far from good or optimal and could even miss nearly all storage optimization potential. In this work, we present a new automatic storage optimization framework and techniques that can be used to achieve intra-array as well as inter-array storage reuse within affine loop-nests with a pre-determined schedule. Over the last two decades, several heuristics have been developed for achieving complex transformations of affine loop-nests using the polyhedral model. However, there are no comparably strong heuristics for tackling the problem of automatic memory footprint optimization. We tackle the problem of storage optimization for arrays by formulating it as one of finding the right storage partitioning hyperplanes: each storage partition corresponds to a single storage location. Statement-wise storage partitioning hyperplanes are determined that partition a unified global array space so that values with overlapping live ranges are not mapped to the same partition. Our integrated heuristic for exploiting intra-array as well as inter-array reuse opportunities is driven by a fourfold objective function that not only minimizes the dimensionality and storage requirements of arrays required for each high-level statement, but also maximizes inter-statement storage reuse. We built an automatic polyhedral storage optimizer called SMO using our storage partitioning approach. 
Storage reduction factors and other results we report from SMO demonstrate the effectiveness of our approach on several benchmarks drawn from the domains of image processing, stencil computations, high-performance computing, and the class of tiled codes in general. The reductions in storage requirement over previous approaches range from a constant factor to asymptotic in the loop blocking factor or array extents, the latter being a dramatic improvement for practical purposes. As an incidental and related topic, we also studied the problem of polyhedral compilation of graphical dataflow programs. While polyhedral techniques for program transformation are now used in several proprietary and open source compilers, most of the research on polyhedral compilation has focused on imperative languages such as C, where the computation is specified in terms of statements with zero or more nested loops and other control structures around them. Graphical dataflow languages, where there is no notion of statements or a schedule specifying their relative execution order, have so far not been studied using a powerful transformation or optimization approach. The execution semantics and referential transparency of dataflow languages impose a different set of challenges. In this work, we attempt to bridge this gap by presenting techniques that can be used to extract a polyhedral representation from dataflow programs and to synthesize them back from their equivalent polyhedral representation. We then describe PolyGLoT, a framework for automatic transformation of dataflow programs that we built using our techniques and other popular research tools such as Clan and Pluto. For the purpose of experimental evaluation, we used our tools to compile LabVIEW, one of the most widely used dataflow programming languages. 
Results show that dataflow programs transformed using our framework are able to outperform those compiled otherwise by up to a factor of seventeen, with a mean speed-up of 2.30x while running on an 8-core Intel system.
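The core idea that values with non-overlapping live ranges can share a storage location can be illustrated by hand on a 1-D time-stepping stencil: only two time rows of the logical t x n array are ever live at once, so a modulo-2 mapping of the time index shrinks storage from O(t * n) to O(2 * n). A sketch of the principle, not of SMO itself:

```python
# Intra-array storage reuse by hand: a modulo-2 mapping of the time index
# replaces a (steps + 1) x n array with two rows, because each row's live
# range ends as soon as the next row has been computed.
def stencil_full(a, steps):
    """Reference version: keeps every time step (steps + 1 rows)."""
    grid = [list(a)]
    for _ in range(steps):
        prev = grid[-1]
        row = [prev[0]] + [
            (prev[i - 1] + prev[i] + prev[i + 1]) / 3.0
            for i in range(1, len(a) - 1)
        ] + [prev[-1]]
        grid.append(row)
    return grid[-1]

def stencil_contracted(a, steps):
    """Same computation with two buffers: time index t is mapped to t % 2."""
    buf = [list(a), list(a)]
    for t in range(steps):
        src, dst = buf[t % 2], buf[(t + 1) % 2]
        for i in range(1, len(a) - 1):
            dst[i] = (src[i - 1] + src[i] + src[i + 1]) / 3.0
        dst[0], dst[-1] = src[0], src[-1]
    return buf[steps % 2]

data = [0.0, 0.0, 9.0, 0.0, 0.0]
print(stencil_full(data, 4) == stencil_contracted(data, 4))  # True
```

SMO automates this kind of mapping (and far more general ones) by computing storage partitioning hyperplanes from the live-range analysis instead of by inspection.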
6

Search-Optimized Disk Layouts For Suffix-Tree Genomic Indexes

Bhavsar, Rajul D 08 1900
Over the last decade, biological sequence repositories have been growing at an exponential rate. Sophisticated indexing techniques are required to facilitate efficient searching through these humongous genetic repositories. A particularly attractive index structure for such sequence processing is the classical suffix-tree, a vertically compressed trie structure built over the set of all suffixes of a sequence. Its attractiveness stems from its linearity properties -- suffix-tree construction times are linear in the size of the indexed sequences, while search times are linear in the size of the query strings. In practice, however, the promise of suffix-trees is not realized for extremely long sequences, such as the human genome, that run into the billions of characters. This is because suffix-trees, which are typically an order of magnitude larger than the indexed sequence, necessarily have to be disk-resident for such elongated sequences, and their traditional construction and traversal algorithms result in random disk accesses. We investigate, in this thesis, post-construction techniques for disk-based suffix-tree storage optimization, with the objective of maximizing disk-reference locality during query processing. We begin by focusing on the layout reorganization in which the node-to-block assignments and sequence of blocks are reworked. Our proposed algorithm is based on combining the breadth-first layout approach advocated in the recent literature with probabilistic techniques for minimizing the physical distance between successive block accesses, based on an analysis of node traversal patterns. In our next step, we consider techniques for reducing the space overheads incurred by suffix-trees. In particular, we propose an embedding strategy whereby leaf nodes can be completely represented within their parent internal nodes, without requiring any space extension of the parent node's structure. 
To quantitatively evaluate the benefits of our reorganized and restructured layouts, we have conducted extensive experiments on complete human genome sequences, with complex and computationally expensive user queries that involve finding the maximal common substring matches of the query strings. We show, for the first time, that the layout reorganization approach can be scaled to entire genomes, including the human genome. In the layout reorganization, with a careful choice of node-to-block assignment condition and an optimized sequence of blocks, search-time improvements ranging from 25% to 75% can be achieved with respect to the construction layouts on such genomes. While the layout reorganization does take considerable time, it is a one-time process, whereas searches will be invoked repeatedly on the index. The internalization of leaf nodes results in a 25% reduction in suffix-tree space occupancy. More importantly, when applied to the construction layout, it provides search-time improvements ranging from 25% to 85%, and in conjunction with the reorganized layout, searches are sped up by 50% to 90%. Overall, our study and experimental results indicate that through careful choice of node implementations and layouts, the disk access locality of suffix-trees can be improved to the extent that up to order-of-magnitude improvements in search times may result relative to the classical implementations.
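For short strings, the maximal-substring-match queries described above can be mimicked with a naive in-memory suffix array standing in for the thesis's disk-based suffix tree. A toy sketch with hypothetical sequences; real genomes need exactly the engineered disk layouts the thesis studies:

```python
# Toy stand-in for maximal-substring-match queries: a naive suffix array
# instead of a disk-resident suffix tree. Sequences are hypothetical.
from bisect import bisect_left

def build_suffix_array(text):
    """Naive O(n^2 log n) construction; fine for short illustrative strings."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def longest_match(text, sa, query):
    """Length of the longest prefix of query occurring anywhere in text."""
    suffixes = [text[i:] for i in sa]
    pos = bisect_left(suffixes, query)
    best = 0
    # The suffix sharing the longest prefix with query is adjacent to the
    # insertion point in sorted order, so checking the neighbours suffices.
    for s in suffixes[max(0, pos - 1):pos + 1]:
        k = 0
        while k < min(len(s), len(query)) and s[k] == query[k]:
            k += 1
        best = max(best, k)
    return best

genome = "ACGTACGGTA"
sa = build_suffix_array(genome)
print(longest_match(genome, sa, "ACGG"))  # "ACGG" occurs: length 4
print(longest_match(genome, sa, "GGTT"))  # only "GGT" occurs: length 3
```

A suffix tree answers the same query by walking down from the root, character by character, which is why its on-disk node layout dominates search locality.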
7

Studies In Automatic Management Of Storage Systems

Pipada, Pankaj 06 1900
Autonomic management is important in storage systems, and the space of autonomics in storage systems is vast. Such autonomic management systems can employ a variety of techniques depending upon the specific problem. In this thesis, we first take an algorithmic approach towards reliability enhancement, and then we use learning along with a reactive framework to facilitate storage optimization for applications. We study how the reliability of non-repairable systems can be improved through automatic reconfiguration of their XOR-coded structure. To this end, we propose to increase the fault tolerance of non-repairable systems by reorganizing the system, after a failure is detected, to a new XOR code with better fault tolerance. As errors can manifest during reorganization due to whole reads of multiple submodules, our framework takes them into account and models such errors based on access intensity (i.e., BER, bit error rate). We present and evaluate the reliability of an example storage system with and without reorganization. Motivated by the critical need for automating various aspects of data management in virtualized data centers, we study the specific problem of automatically implementing Virtual Machine (VM) migration in a dynamic environment according to pre-set policies. This problem requires automated identification of the various workloads and their execution environments running inside virtual machines, in a non-intrusive manner. To this end we propose AuM (for Autonomous Manager), which has the capability to learn workloads by aggregating a variety of information obtained from network traces of storage protocols. We use state-of-the-art machine learning tools, namely Multiple Kernel Learning, to aggregate information, and show that AuM is indeed very accurate in identifying workloads and their execution environments, and is also successful in following user-set policies very closely for VM migration tasks. 
Storage infrastructure in large-scale cloud data center environments must support applications with diverse, time-varying data access patterns while observing quality of service. To meet service level requirements across such heterogeneous application phases, storage management needs to be phase-aware and adaptive, i.e., identify specific storage access patterns of applications as they occur and customize their handling accordingly. We build LoadIQ, an online application phase detector for networked (file and block) storage systems. In a live deployment, LoadIQ analyzes traces and emits phase labels learnt online. Such labels can be used to generate alerts or to trigger phase-specific system tuning.
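Online phase labelling in the spirit of LoadIQ (though much simpler than its actual learning approach) can be sketched by classifying fixed windows of block offsets from the fraction of unit strides:

```python
# Much-simplified sketch of online phase labelling: classify fixed windows
# of block offsets as "sequential" or "random" from the fraction of unit
# strides. Not LoadIQ's actual algorithm; the trace below is hypothetical.
def label_window(offsets, threshold=0.7):
    strides = [b - a for a, b in zip(offsets, offsets[1:])]
    sequential = sum(1 for s in strides if s == 1) / len(strides)
    return "sequential" if sequential >= threshold else "random"

def label_trace(offsets, window=8):
    return [
        label_window(offsets[i:i + window])
        for i in range(0, len(offsets) - window + 1, window)
    ]

trace = list(range(100, 108)) + [5, 91, 42, 7, 63, 18, 77, 30]
print(label_trace(trace))  # ['sequential', 'random']
```

In a real deployment such labels would be derived from richer protocol-level features, but the reactive structure is the same: observe a window, emit a label, let the tuning policy react.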
8

Materialplanering : för optimerade logistiska processer med avseende på tid och kostnad. / Material planning : for optimized logistical processes in terms of time and cost.

Linhem, Nathalie, Arvidsson, Caroline January 2017
Planning, control, and monitoring are key processes in material planning and in the flow of material from supplier to end customer. The activities within the planning process can also have a crucial effect on the availability of the company's products. This thesis examines how planning can be optimized for different products, and aims to develop procedures for the handling of product planning in order to contribute to better efficiency from a holistic perspective. There is a need for a clear framework to optimize the conditions for the planning of complex customer orders. The planning process needs supporting activities for an optimal outcome and is based on logistical and financial tools and methods. If these are applied effectively, lead times can be reduced and backward scheduling simplified, since the right products are stocked and orders are placed at the right time, with an optimized order quantity that minimizes the costs that may arise. Additionally, a holistic approach is required in which the whole supply chain is involved. The relationships within the network are an important aspect of optimized planning and of ensuring high delivery precision. The framework that has been produced includes a basic idea of logical thinking, effective communication channels, and the right thing in the right place at the right time. It should be emphasized, however, that each organization is unique and should adapt the approach to its own business, while still applying the mindset that the framework entails.
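The "optimized order quantity" mentioned above is classically computed with the economic order quantity formula, Q* = sqrt(2DS/H), which balances ordering cost against holding cost. A minimal sketch with hypothetical figures, not data from the thesis:

```python
# Economic order quantity (EOQ): Q* = sqrt(2 * D * S / H) minimizes the
# sum of annual ordering and holding costs. All figures are hypothetical.
from math import sqrt

def eoq(annual_demand: float, order_cost: float, holding_cost: float) -> float:
    """Order quantity minimizing ordering plus holding cost."""
    return sqrt(2.0 * annual_demand * order_cost / holding_cost)

def total_cost(q, annual_demand, order_cost, holding_cost):
    """Annual ordering cost plus average holding cost for lot size q."""
    return annual_demand / q * order_cost + q / 2.0 * holding_cost

D, S, H = 1200.0, 50.0, 6.0        # units/year, cost/order, cost/unit/year
q_star = eoq(D, S, H)
print(q_star)                       # about 141.4 units per order
print(total_cost(q_star, D, S, H))  # cheaper than, e.g., lots of 100:
print(total_cost(100.0, D, S, H))
```

The model assumes steady demand and fixed costs; in practice it is a starting point that the kind of framework described above would adapt to each organization.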
