101

Förbättrad lager- och produktionsstyrning : vid tillverkning mot kundorder på Rimaster AB / Improved management of inventory and manufacturing : in a make-to-order environment at Rimaster AB

Hägglund, Andreas, Johansson, Anders January 2006 (has links)
This Master's thesis was carried out at Rimaster AB in Rimforsa, a sub-contractor in electronics and mechatronics. Rimaster AB has about 140 employees and an annual turnover of approximately 116 MSEK. Most of the production consists of manual work and is carried out to customer order. The company perceives a fluctuating production flow and, consequently, fluctuating capacity utilization. Furthermore, considerable capital is tied up in the inventory of raw materials as well as in the inventory of finished goods. The purpose of this thesis is therefore to achieve level capacity utilization and reduce the capital tied up through improved management of inventory and manufacturing. An inventory holding rate is calculated for each inventory to determine the cost of the capital tied up: 15.8 percent of the annual average inventory value for raw materials and 16.9 percent for finished goods, yielding a total annual cost of 2.4 MSEK. The holding rate should also be used in inventory-management decisions, for example when calculating order quantities. An analysis of the raw-materials inventory identifies a number of stock-item types, which form the basis for a classification of the stock items. A significant part of the tied-up capital relates to items with a low demand frequency combined with suppliers' minimum order quantities; closer co-operation with suppliers, aimed at reducing these quantities, is therefore recommended. For the remaining item classes a differentiated inventory control system is proposed as a tool for gaining control over the inventory. In the finished-goods inventory, the tied-up capital is mainly a consequence of customers specifying, by contract, how their lead-time requirements shall be fulfilled; this decision should instead be made by Rimaster AB, based on the position of the order penetration point. One reason for the fluctuating capacity utilization is that production is divided into several small customer-focused groups, and these groups do not correspond to the groups used for production planning. A new production flow is therefore proposed, with four product-oriented final assembly groups that also serve as planning groups. This aggregation of resources enables a more level production flow and hence more even capacity utilization. In addition, a new production activity control system based on the Workload Control concept is proposed, enabling short and reliable lead times and a limited amount of work in progress. A further cause of the uneven production flow is the use of large batches, even though set-up times are normally short, because batch size is usually equated with the size of a customer order. A reduction of batch sizes is consequently proposed, which besides contributing to a more even flow also shortens production lead times.
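As a quick sanity check on the figures above, here is a minimal sketch of the holding-cost arithmetic in Python. The two holding rates and the roughly 2.4 MSEK total come from the abstract; the average inventory values are assumed purely for illustration.

    # Holding rates (from the thesis) applied to assumed average inventory values.
    raw_materials_value = 9.5e6      # SEK, assumed average raw-materials inventory
    finished_goods_value = 5.3e6     # SEK, assumed average finished-goods inventory

    annual_cost = raw_materials_value * 0.158 + finished_goods_value * 0.169
    print(f"Annual cost of tied-up capital: {annual_cost / 1e6:.1f} MSEK")  # ~2.4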
102

Automated Storage Layout for Database Systems

Ozmen, Oguzhan 08 1900 (has links)
Modern storage systems are complex. Simple direct-attached storage devices are giving way to storage systems that are flexible, network-attached, consolidated and virtualized. Today, storage systems have their own administrators, who use specialized tools and expertise to configure and manage storage resources. As a result, database administrators are no longer in direct control of the design and configuration of their database systems' underlying storage resources. This introduces problems because database physical design and storage configuration are closely related tasks, and the separation makes it more difficult to achieve a good end-to-end design. For instance, the performance of a database system depends strongly on the storage layout of database objects, such as tables and indexes, and the separation makes it hard to design a storage layout that is tuned to the I/O workload generated by the database system. In this thesis we address this problem and attempt to close the information gap between database and storage tiers by addressing the problem of predicting the storage (I/O) workload that will be generated by a database management system. Specifically, we show how to translate a database workload description, together with a database physical design, into a characterization of the I/O workload that will result. Such a characterization can directly be used by a storage configuration tool and thus enables effective end-to-end design and configuration spanning both the database and storage tiers. We then introduce our storage layout optimization tool, which leverages such workload characterizations to generate an optimized layout for a given set of database objects. We formulate the layout problem as a non-linear programming (NLP) problem and use the I/O characterization as input to an NLP solver. We have incorporated our I/O estimation technique into the PostgreSQL database management system and our layout optimization technique into a database layout advisor. We present an empirical assessment of the cost of both tools as well as the efficacy and accuracy of their results.
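As a rough illustration of the non-linear-programming formulation the abstract mentions, the sketch below places each database object's I/O load across devices with SciPy. The quadratic objective, the per-object loads, and the device capacities are all assumptions made for the example, not the model from the thesis.

    import numpy as np
    from scipy.optimize import minimize, LinearConstraint

    n_objects, n_devices = 3, 2
    load = np.array([120.0, 80.0, 50.0])   # assumed per-object I/O rates (IOPS)
    capacity = np.array([150.0, 150.0])    # assumed per-device capacities (IOPS)

    def objective(x):
        # x[i, j] is the fraction of object i's load placed on device j;
        # penalizing squared device utilization spreads the load (non-linear).
        x = x.reshape(n_objects, n_devices)
        util = (x * load[:, None]).sum(axis=0) / capacity
        return float(np.sum(util ** 2))

    # Each object's fractions across all devices must sum to exactly 1.
    fully_placed = LinearConstraint(
        np.kron(np.eye(n_objects), np.ones(n_devices)), 1.0, 1.0)

    x0 = np.full(n_objects * n_devices, 1.0 / n_devices)
    res = minimize(objective, x0, method="trust-constr",
                   bounds=[(0.0, 1.0)] * x0.size, constraints=fully_placed)
    print(res.x.reshape(n_objects, n_devices).round(3))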
103

Work-related social support, job demands and burnout : studies of Swedish workers, predominantly employed in health care

Sundin, Lisa, January 2009 (has links)
Doctoral dissertation (comprehensive summary), Stockholm : Karolinska institutet, 2009. / With 4 appended papers.
104

Frequency allocation, transmit power control, and load balancing with site specific knowledge for optimizing wireless network performance

Chen, Jeremy Kang-pen 18 August 2011 (has links)
Abstract not available. / text
105

Load balancing in distributed object computing systems

張立新, Cheung, Lap-sun. January 2001 (has links)
Published or final version / Electrical and Electronic Engineering / Master of Philosophy
107

Workload Management for Data-Intensive Services

Lim, Harold Vinson Chao January 2013 (has links)
Data-intensive web services are typically composed of three tiers: i) a display tier that interacts with users and serves rich content to them, ii) a storage tier that stores the user-generated or machine-generated data used to create this content, and iii) an analytics tier that runs data analysis tasks in order to create and optimize new content. Each tier has different workloads and requirements that result in a diverse set of systems being used in modern data-intensive web services.

Servers are provisioned dynamically in the display tier to ensure that interactive client requests are served as per the latency and throughput requirements. The challenge is not only deciding automatically how many servers to provision but also when to provision them, while ensuring stable system performance and high resource utilization. To address these challenges, we have developed a new control policy for provisioning resources dynamically in coarse-grained units (e.g., adding or removing servers or virtual machines in cloud platforms). Our new policy, called proportional thresholding, converts a user-specified performance target value into a target range in order to account for the relative effect of provisioning a server on the overall workload performance.

The storage tier is similar to the display tier in some respects, but poses the additional challenge of needing redistribution of stored data when new storage nodes are added or removed. Thus, there will be some delay before the effects of changing a resource allocation appear. Moreover, redistributing data can cause some interference to the current workload because it uses resources that could otherwise be used for processing requests. We have developed a system, called Elastore, that addresses the new challenges found in the storage tier. Elastore not only coordinates resource allocation and data redistribution to preserve stability during dynamic resource provisioning, but it also finds the best tradeoff between workload interference and data redistribution time.

The workload in the analytics tier consists of data-parallel workflows that can either be run in a batch fashion or continuously as new data becomes available. Each workflow is composed of smaller units that have producer-consumer relationships based on data. These workflows are often generated from declarative specifications in languages like SQL, so there is a need for a cost-based optimizer that can generate an efficient execution plan for a given workflow. There are a number of challenges when building a cost-based optimizer for data-parallel workflows, including characterizing the large execution plan space, developing cost models to estimate execution costs, and efficiently searching for the best execution plan. We have built two cost-based optimizers: Stubby for batch data-parallel workflows running on MapReduce systems, and Cyclops for continuous data-parallel workflows where the choice of execution system is made part of the execution plan space.

We have conducted a comprehensive evaluation that shows the effectiveness of each tier's automated workload management solution. / Dissertation
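A toy sketch of a proportional-thresholding-style policy may make the idea concrete: a single latency target becomes a hold band whose width reflects the relative effect of one server at the current cluster size, so the band is wide for small clusters and narrows as the cluster grows. The specific band formula below is an assumption for illustration, not the dissertation's exact definition.

    def decide(servers: int, latency_ms: float, target_ms: float) -> int:
        """Return +1 to add a server, -1 to remove one, 0 to hold steady."""
        # Removing one of n servers raises per-server load by about n/(n-1),
        # so tolerate latencies up to that factor above the target before
        # scaling up, and scale down only when comfortably below the target.
        upper = target_ms * servers / max(servers - 1, 1)
        lower = target_ms * max(servers - 1, 1) / servers
        if latency_ms > upper:
            return +1
        if latency_ms < lower and servers > 1:
            return -1
        return 0

    # With 4 servers and a 100 ms target, the hold band is roughly [75, 133] ms:
    print(decide(servers=4, latency_ms=140.0, target_ms=100.0))  # -> 1 (scale up)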
108

SD Storage Array: Development and Characterization of a Many-device Storage Architecture

Katsuno, Ian 29 November 2013 (has links)
Transactional workloads have storage request streams consisting of many small, independent, random requests. Flash memory is well suited to these types of access patterns, but is not always cost-effective. This thesis presents a novel storage architecture called the SD Storage Array (SDSA), which adopts a many-device approach. It utilizes many flash storage devices in the form of an array of Secure Digital (SD) cards. This approach leverages the commodity status of SD cards to pursue a cost-effective means of providing the high throughput that transactional workloads require. Characterization of a prototype revealed that when the request stream was 512B randomly addressed reads, the SDSA provided 1.5 times the I/O operations per second (IOPS) of a top-of-the-line solid state drive, provided there were at least eight requests in-flight. A scale-out simulation showed the IOPS should scale with the size of the array, provided there are no upstream bottlenecks.
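A back-of-the-envelope model of that scale-out claim: array throughput grows linearly with the number of cards until a shared upstream link saturates. The per-card IOPS figure and the upstream cap below are assumed values, not measurements from the thesis.

    def array_iops(cards: int, iops_per_card: float = 1_500.0,
                   upstream_cap: float = 100_000.0) -> float:
        # Linear scaling with the number of SD cards, capped by the upstream link.
        return min(cards * iops_per_card, upstream_cap)

    for n in (8, 32, 64, 128):
        print(n, array_iops(n))  # linear until the assumed 100k-IOPS cap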
110

Decision models for on-line adaptive resource management

Paul, Daniel 12 1900 (has links)
No description available.
