11

On the effectiveness of additional resources for on-line firm deadline scheduling

顔尊還, Ngan, Tsuen-wan. January 2001 (has links)
published_or_final_version / abstract / toc / Computer Science and Information Systems / Master / Master of Philosophy
12

Automated Capacity Planning and Support for Enterprise Applications

Thakkar, Dharmesh 02 February 2009 (has links)
Capacity planning is crucial for successful development of enterprise applications. Capacity planning activities are most consequential during the verification and maintenance phases of the Software Development Life Cycle. During the verification phase, analysts need to execute a large number of performance tests to build accurate performance models. Performance models help customers with capacity planning for their deployments. To build valid performance models, the performance tests must be redone for every release or build of an application. This is a time-consuming and error-prone manual process that needs tools and techniques to speed it up. In the maintenance phase, when customers run into performance and capacity related issues after deployment, they commonly engage the vendor of the application for troubleshooting and fine tuning of the troubled deployments. At the end of an engagement, analysts create an engagement report, which contains valuable information about the observed symptoms, attempted workarounds, identified problems, and the final solutions. Engagement reports are stored in a customer engagement repository. While the information stored in engagement reports is valuable in helping analysts with future engagements, no systematic techniques exist to retrieve relevant reports from such a repository. In this thesis we present a framework for the systematic and automated building of capacity calculators during the software verification phase. Then, we present a technique to retrieve relevant reports from a customer engagement repository. Our technique helps analysts fix performance and capacity related issues in the maintenance phase by providing easy access to information from relevant reports. We demonstrate our contributions with case studies on an open-source benchmarking application and an enterprise application. / Thesis (Master, Computing) -- Queen's University, 2009-01-29 14:14:37.235
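As an illustration of the report-retrieval idea described in the abstract above (a minimal sketch, not the thesis's actual technique), stored engagement reports can be ranked against a new problem description by TF-IDF cosine similarity; the report texts and query below are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical engagement reports: symptoms, workarounds and solutions per engagement.
reports = [
    "High CPU on database tier after upgrade; resolved by rebuilding indexes.",
    "Response time degradation under peak login load; increased thread pool size.",
    "Disk I/O saturation during nightly batch; moved logs to a separate volume.",
]

# New problem description from an ongoing engagement.
query = "Slow response times when many users log in at the same time"

# Rank stored reports by textual similarity to the new problem description.
vectorizer = TfidfVectorizer(stop_words="english")
report_vectors = vectorizer.fit_transform(reports)
query_vector = vectorizer.transform([query])
scores = cosine_similarity(query_vector, report_vectors).ravel()

for score, report in sorted(zip(scores, reports), reverse=True):
    print(f"{score:.2f}  {report}")
```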
13

A Model for Capacity Planning in Cassandra : Case Study on Ericsson’s Voucher System

Abbireddy, Sharath January 2015 (has links)
Cassandra is a NoSQL (Not only Structured Query Language) database that serves large amounts of data with high availability. Cassandra data storage dimensioning, also known as Cassandra capacity planning, refers to predicting the amount of disk storage required when a particular product is deployed using Cassandra. This is an important phase in any product development life cycle involving the Cassandra data storage system. Capacity planning is based on many factors, which are classified as Cassandra specific and product specific. This study identifies the different Cassandra-specific and product-specific factors affecting disk space in the Cassandra data storage system. Based on these factors, a model is built to predict the disk storage for Ericsson’s voucher system. A case study was conducted on Ericsson’s voucher system and its Cassandra cluster. Interviews were conducted with different Cassandra users within Ericsson R&D to learn their opinions on capacity planning approaches and the factors affecting disk space for Cassandra. Responses from the interviews were transcribed and analyzed using grounded theory. A total of 9 Cassandra-specific factors and 3 product-specific factors were identified and documented. Using these 12 factors, a model was built and used to predict the disk space required for the voucher system’s Cassandra. The factors affecting disk space for deploying Cassandra are now exhaustively identified, which makes the capacity planning process more efficient. Using these factors, the voucher system’s disk space for deployment was predicted successfully.
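As a rough sketch of what such a factor-based disk model can look like (the factor names and default values below are assumptions for illustration, not the thesis's 12 factors), a disk-space estimate might combine raw data volume with replication, compression and compaction overheads:

```python
def estimate_cassandra_disk_gb(
    rows: int,
    avg_row_bytes: float,
    replication_factor: int = 3,
    compression_ratio: float = 0.5,    # compressed size / raw size
    compaction_overhead: float = 1.5,  # temporary space consumed during compaction
    index_and_overhead: float = 1.1,   # partition index, bloom filters, tombstones
) -> float:
    """Rough per-cluster disk estimate; factor names and values are illustrative only."""
    raw = rows * avg_row_bytes
    on_disk = raw * compression_ratio * index_and_overhead
    return on_disk * replication_factor * compaction_overhead / 1e9

# Example: 500 million voucher records of roughly 300 bytes each.
print(f"{estimate_cassandra_disk_gb(500_000_000, 300):.1f} GB")
```

In the thesis itself the relevant factors are derived from interviews and grounded theory rather than fixed constants like these.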
14

Effective Capacity Planning of the Virtual Environment using Enterprise Architecture

Mahimane, Arati 23 August 2013 (has links)
No description available.
15

Elastic Resource Management in Cloud Computing Platforms

Sharma, Upendra 01 May 2013 (has links)
Large scale enterprise applications are known to experience dynamic workloads; provisioning correct capacity for these applications remains an important and challenging problem. Predicting highly variable workload fluctuations, or the peak workload, is difficult; erroneous predictions often lead to under-utilized systems or, in some situations, cause temporary outages of an otherwise well provisioned web site. Consequently, rather than provisioning server capacity to handle infrequent peak workloads, an alternate approach of dynamically provisioning capacity on-the-fly in response to workload fluctuations has become popular. Cloud platforms are particularly suited for such applications due to their ability to provision capacity when needed and charge for usage on a pay-per-use basis. Cloud environments enable elastic provisioning by providing a variety of hardware configurations as well as mechanisms to add or remove server capacity. The first part of this thesis presents Kingfisher, a cost-aware system that provides a generalized provisioning framework for supporting elasticity in the cloud by (i) leveraging multiple mechanisms to reduce the time to transition to new configurations, and (ii) optimizing the selection of a virtual server configuration that minimizes cost. The majority of these enterprise applications, deployed as web applications, are distributed or replicated with a multi-tier architecture. SLAs for such applications are often expressed as a high percentile of a performance metric, e.g., the 99th percentile of end-to-end response time is less than 1 second. In the second part of this thesis I present a model-driven technique, targeted at cloud platforms, which provisions a multi-tier application for such an SLA. Enterprises critically depend on these applications and often own large IT infrastructure to support their regular operation. However, provisioning for a peak load or for a high percentile of response time can be prohibitively expensive. Thus there is a need for a hybrid cloud model, where the enterprise uses its own private resources for the majority of its computing, but then "bursts" into the cloud when local resources are insufficient. I discuss a new system, namely Seagull, which performs dynamic provisioning over a hybrid cloud model by enabling cloud bursting. Finally, I describe a methodology to model the configuration patterns (i.e., deployment topologies) of the different control plane services of a cloud management system itself. I present a generic methodology, based on empirical profiling, which provides the initial deployment configuration of a control plane service and also a mechanism which iteratively adjusts the configuration to avoid violation of the control plane's Service Level Objective (SLO).
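A minimal sketch of percentile-driven provisioning of the kind described above, assuming latency scales roughly with per-server load; this is not Kingfisher's actual model, and the sample data below are synthetic.

```python
import math
import numpy as np

def provisioning_decision(latency_samples_ms, sla_ms=1000.0, percentile=99.0,
                          current_servers=4):
    """
    Toy percentile-driven rule: if the observed high-percentile latency violates
    the SLA, grow the tier proportionally, under the crude assumption that
    latency is roughly proportional to per-server load.
    """
    observed = float(np.percentile(latency_samples_ms, percentile))
    if observed <= sla_ms:
        return current_servers, observed
    scale = observed / sla_ms  # linear-scaling assumption, illustration only
    return math.ceil(current_servers * scale), observed

# Hypothetical measurements: mostly fast responses with a heavy tail.
rng = np.random.default_rng(0)
samples = rng.gamma(shape=2.0, scale=300.0, size=10_000)  # milliseconds
servers, p99 = provisioning_decision(samples)
print(f"p99 = {p99:.0f} ms -> provision {servers} servers")
```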
16

Minimering av slöserier och kapacitetsbegränsningar för att öka produktionskapaciteten : En fallstudie på företaget Svensson & Linnér

Karlsson, Therese, Eriksson, Rebecca January 2016 (has links)
Bakgrund: Formgivningsprocessen på Svensson & Linnér är en produktionsprocess som syftar till att förändra produktens form genom pressning samt böjning. I en produktionsprocess existerar det oftast ett flertal slöserier och kapacitetsbegränsningar som inte kan identifieras förrän en processkartläggning är gjord. Kartläggning av processer möjliggör därför att företag blir medvetna om de slöserier och kapacitetsbegränsningar som existerar. Detta gör att företag kan förbättra sina processer genom att eliminera ledtider, väntetider och onödiga rörelser. Syfte: Syftet med detta examensarbete är att identifiera slöserier och kapacitetsbegränsningar i formgivningsprocessen på Svensson & Linnér samt förklara dess bakomliggande orsaker. Syftet är vidare att föreslå förbättringsåtgärder som borde göras för att öka kapaciteten i processen. Metod: Studien som genomförts är en fallstudie på företaget Svensson & Linnér där data samlats in genom deltagande och strukturerade observationer samt genom ostrukturerade och semi-strukturerade intervjuer. En processkartläggning och beräkningar av kapacitetsutnyttjandet har gjorts för att kunna identifiera slöserier och kapacitetsbegränsningar. Orsak-verkandiagram har sedan upprättats för att identifiera bakomliggande orsaker och ge förslag på förbättringsåtgärder som bör göras för att öka kapaciteten i processen. Slutsatser: Studien har kommit fram till att det i formgivningsprocessen existerar slöserier i form av onödiga lager, rörelser, transporter, väntan, inkorrekta processer, defekta produkter och outnyttjad kreativitet hos medarbetarna. Efter beräkningar av kapacitetsutnyttjandet i processen identifierades det att funktionen skär- och sliproboten är kapacitetsbegränsningen i processen. Utifrån de bakomliggande orsakerna har fem förbättringsförslag identifierats som kan leda till kapacitetsökning i formgivningsprocessen. Svensson & Linnér bör endast ha ett lager för stansat faner där FIFO-principen och ställagring av pallat gods bör användas. I processen bör fel och brister åtgärdas långsiktigt istället för provisoriskt och arbetssättet bör standardiseras. I robotcellerna bör soptunnor placeras ut så att avfall kan slängas direkt och inte vid skiftbytet senare. För att öka kapaciteten i kapacitetsbegränsningen, skär- och sliproboten, föreslås det att fallföretaget ska anpassa kapaciteten efter efterfrågan på produkten genom att utnyttja övertid i processen. / Background: The shaping process at Svensson & Linnér is a production process that aims to change the shape of the product by pressing and bending. In a production process there usually exist a number of wastes and capacity constraints that cannot be identified until a process mapping is made. A process mapping therefore enables companies to become aware of the waste and capacity constraints that exist, which allows them to improve their processes by eliminating lead times, waiting times and unnecessary movements. Purpose: The aim of this thesis is to identify waste and capacity constraints in the shaping process at Svensson & Linnér and to explain their underlying causes. The aim is also to propose improvements that should be made in order to increase the capacity of the process. Method: The study was conducted as a case study at the company Svensson & Linnér, where data was collected through participant and structured observations and through unstructured and semi-structured interviews. A process mapping and calculations of capacity utilization were carried out to identify waste and capacity constraints. Cause-and-effect diagrams were then drawn up to identify the underlying causes and to suggest improvements that could lead to increased capacity. Conclusions: The study concludes that the shaping process contains waste in the form of unnecessary inventory, motion, transportation, waiting, incorrect procedures, defective products and untapped creativity of the operators. After calculating capacity utilization in the process, the cutting and grinding robot was identified as the capacity constraint of the process. Based on the underlying causes, five improvement suggestions were identified that could lead to increased capacity in the shaping process. Svensson & Linnér should keep only one stock for punched veneer, where the FIFO principle and rack storage of palletized goods should be used. Errors and flaws in the process should be fixed long-term instead of provisionally, and working methods should be standardized. Dustbins should be placed in the robot cells so that waste can be disposed of immediately rather than at the later shift change. To increase the capacity of the capacity constraint, the cutting and grinding robot, it is suggested that the company adjust its capacity to the demand for the product by using overtime in the process.
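As a small illustration of the utilization calculation used to locate the constraint (the step names and hours below are hypothetical, not the thesis's measurements), each step's demanded hours can be compared with its available hours and the step with the highest utilization flagged as the bottleneck:

```python
# Hypothetical process steps with demanded hours and available machine hours per week.
steps = {
    "punching":               {"demand_h": 62, "available_h": 80},
    "pressing":               {"demand_h": 70, "available_h": 80},
    "cutting/grinding robot": {"demand_h": 78, "available_h": 80},
    "assembly":               {"demand_h": 55, "available_h": 80},
}

# Capacity utilization per step and the step that constrains the process.
utilization = {name: s["demand_h"] / s["available_h"] for name, s in steps.items()}
constraint = max(utilization, key=utilization.get)

for name, u in sorted(utilization.items(), key=lambda kv: -kv[1]):
    print(f"{name:<24} {u:5.0%}")
print(f"capacity constraint: {constraint}")
```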
17

A Study of Production Planning in a Hospital Environment

Pettersson, Tobias January 2011 (has links)
No description available.
18

Método para planejamento de capacidade de redes ATM baseado em simulação / Capacity planning method for ATM networks based on simulation

Goncalves, Adriano Ramos January 2001 (has links)
O processo de dimensionar redes de comunicação tem sido um desafio para pesquisadores e projetistas. A partir da especificação, passando pela operação, controle e gerenciamento de redes, as estimativas de comportamento do desempenho são úteis para o dimensionamento adequado dos equipamentos. O detalhamento e precisão na capacidade de avaliar o impacto de carga futura melhoram as chances de prever dificuldades em atingir metas de serviços. Com redes de banda larga, como ATM, não tem sido diferente. Pela sua concepção de oferecer garantia de qualidade para serviços com diferentes requisitos, ATM se empenha em evitar a sobrecarga da rede. De início, essa premissa é preservada naturalmente através da restrição da quantidade e tipo de conexões ingressas na rede. Portanto, a adequação dos recursos que compõem a estrutura de uma rede ATM determina o grau de disponibilidade em atender certa quantidade de serviços. A pergunta que desejamos responder é: como estimar com precisão a quantidade de serviço suportada por determinada rede ATM? O limite da rede é alcançado quando os recursos disponíveis são menores que os recursos necessários à carga de serviço. Com o emprego cada vez maior de ATM por empresas de telecomunicações, conhecer o limite da rede é estar ciente da potencialidade de negócios sem comprometimento da qualidade. É poder prever expansões evitando bloqueio de novos serviços. O processo de dimensionamento de capacidade de uma rede ATM revela a quantidade de recursos necessários para suportar determinada carga de serviço. Quando os recursos necessários forem maiores que os recursos disponíveis, o limite da rede foi alcançado. Nesse caso, são duas as possibilidades para o equilíbrio: aumentar os recursos da rede ou diminuir a carga de serviço desejado. Esta dissertação propõe um método para dimensionamento dos recursos de uma rede ATM. A principal técnica empregada no método é a simulação do comportamento de tráfego sobre comutadores ATM. Para determinada carga de tráfego são executadas diferentes simulações variando os recursos disponíveis dentro de parâmetros prováveis. As seguintes medidas de desempenho são obtidas nas simulações como resultados estatísticos médios: razão de perda de células (CLR), atraso de transferência de células (CTD) e variação do atraso de células (CDV). Conhecendo o desempenho desejado (QoS) pela carga de serviço, o método pode determinar a quantidade necessária de recursos que satisfazem os requisitos de QoS. A ferramenta escolhida para implementar o modelo foi o simulador orientado a eventos ATM/HFC do National Institute of Standards and Technology (NIST). O simulador é composto por diferentes modelos de elementos, cada qual com seus atributos, que podem ser combinados para caracterizar determinadas configurações de rede que se deseja avaliar. Os elementos podem ser desde representações de tipos de comutadores ATM até diferentes técnicas de controle de tráfego a serem utilizadas na simulação. A carga de serviço na simulação é provida por elementos modeladores que caracterizam diferentes tipos de aplicações geradoras de tráfego, permitindo arranjos de serviços CBR, VBR, ABR e UBR através de seus respectivos parâmetros descritores. A validação do método é efetuada através da comparação dos resultados obtidos com outro trabalho similar desenvolvido utilizando simulação. / The process of planning communication networks has been a challenge for researchers and designers. 
From specification through to the operation, control and management of networks, performance estimates are useful for adequately dimensioning equipment. Detail and accuracy in evaluating the impact of future load improve the chances of anticipating difficulties in meeting service goals. Broadband networks such as ATM are no different. By its very design of offering quality guarantees to services with different requirements, ATM strives to prevent network overload. Initially, this premise is preserved naturally by restricting the number and type of connections admitted to the network. The adequacy of the resources that make up an ATM network's structure therefore determines the degree to which it can serve a given amount of traffic. The question we wish to answer is: how can we accurately estimate the amount of service supported by a given ATM network? The limit of the network is reached when the available resources fall below the resources required by the service load. With the growing use of ATM by telecommunications companies, knowing the limit of the network means being aware of the business potential without compromising quality, and being able to plan expansions to prevent the blocking of new services. The capacity planning process of an ATM network reveals the amount of resources needed to support a given service load. When the required resources exceed the available resources, the network limit has been reached. In that case, there are two ways to restore balance: increase the network resources or reduce the desired service load. This dissertation proposes a method for dimensioning the resources of an ATM network. The main technique employed in the method is simulation of traffic behavior over ATM switches. For a given traffic load, different simulations are executed, varying the available resources within plausible parameter ranges. The following performance measures are obtained from the simulations as average statistical results: cell loss ratio (CLR), cell transfer delay (CTD) and cell delay variation (CDV). Knowing the performance (QoS) required by the service load, the method can determine the amount of resources needed to satisfy the QoS requirements. The tool chosen to implement the model was the event-driven ATM/HFC simulator from the National Institute of Standards and Technology (NIST). The simulator is composed of different element models, each with its own attributes, which can be combined to characterize the network configurations to be evaluated. The elements range from representations of types of ATM switches to the different traffic control techniques to be used in the simulation. The service load in the simulation is provided by modelling elements that characterize different types of traffic-generating applications, allowing mixes of CBR, VBR, ABR and UBR services through their respective descriptor parameters. The method is validated by comparing the results obtained with another similar simulation-based study.
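As a small sketch of how the three QoS averages named above could be computed from per-cell simulation output (the trace below is hypothetical, CDV is taken here simply as the standard deviation of delay, and the thesis obtains these figures from the NIST ATM/HFC simulator rather than code like this):

```python
from statistics import mean, pstdev

def qos_metrics(cells):
    """
    cells: list of (delivered: bool, delay_us: float or None), one per transmitted cell.
    Returns cell loss ratio (CLR), mean cell transfer delay (CTD) and cell delay
    variation (CDV, here the standard deviation of delay) as average statistics.
    """
    delays = [d for ok, d in cells if ok]
    clr = 1.0 - len(delays) / len(cells)
    ctd = mean(delays)
    cdv = pstdev(delays)
    return clr, ctd, cdv

# Tiny hypothetical trace: six cells, one of which is lost.
trace = [(True, 110.0), (True, 125.0), (False, None),
         (True, 108.0), (True, 131.0), (True, 117.0)]
clr, ctd, cdv = qos_metrics(trace)
print(f"CLR={clr:.3f}  CTD={ctd:.1f} us  CDV={cdv:.1f} us")
```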
19

Project Managers' Capacity-Planning Practices for Infrastructure Projects in Qatar

Ojo, Emmanuel Opeyemi 01 January 2019 (has links)
Infrastructure project delays and cost overruns are caused by ineffective use of organizational skills, processes, and resources by project managers in the construction industry. Cost overruns and schedule delays in Qatari infrastructure projects have had damaging effects on the national economy by way of claims and litigation, contractual disputes, delays in dependent projects, and project abandonment. The purpose of this qualitative case study was to explore the perceptions of project managers regarding how they utilize capacity-planning practices to mitigate project schedule delay and cost overrun in government-funded infrastructure projects in Qatar. This study was framed by three conceptual models developed by Gill to outline the capacity management needs within a construction company: (a) the time horizon model, (b) the individual-organization-industry levels model, and (c) the capacity development across components model. Data were collected from semi-structured interviews with 8 participants, observational field notes, and archival data regarding Qatari infrastructure project managers' experiences in capacity-planning practices. Thematic analysis of textual data and cross-case synthesis analysis yielded 5 conceptual categories that encompassed 15 themes. The conceptual categories were (a) resources to meet performance capacity, (b) knowledgeable and skillful staff, (c) short- and long-term planning strategy, (d) cost overrun issues, and (e) time management. Findings may be used to promote timely completion of infrastructure projects, which may benefit citizens, construction companies, and the economy of Qatar.
20

Multi-stage Stochastic Programming Models in Production Planning

Huang, Kai 13 July 2005 (has links)
In this thesis, we study a series of closely related multi-stage stochastic programming models in production planning, from both a modeling and an algorithmic point of view. We first consider a very simple multi-stage stochastic lot-sizing problem, involving a single item with no fixed charge or capacity constraint. Although it is a multi-stage stochastic integer program, this problem can be shown to have a totally unimodular constraint matrix. We develop primal and dual algorithms by exploiting the problem structure. Both algorithms are strongly polynomial, and therefore much more efficient than the Simplex method. Next, motivated by applications in semiconductor tool planning, we develop a general capacity planning problem under uncertainty. Using a scenario tree to model the evolution of the uncertainties, we present a multi-stage stochastic integer programming formulation for the problem. In contrast to earlier two-stage approaches, the multi-stage model allows for revision of the capacity expansion plan as more information regarding the uncertainties is revealed. We provide analytical bounds for the value of multi-stage stochastic programming over the two-stage approach. By exploiting the special simple stochastic lot-sizing substructure inherent in the problem, we design an efficient approximation scheme and show that the proposed scheme is asymptotically optimal. We conduct a computational study on a semiconductor tool planning problem. Numerical results indicate that even an approximate solution to the multi-stage model is far superior to any optimal solution to the two-stage model. These results show that the value of multi-stage stochastic programming for this class of problems is extremely high. Next, we extend the simple stochastic lot-sizing model to an infinite horizon problem to study the planning horizon of this problem. We show that an optimal solution of the infinite horizon problem can be approximated by optimal solutions of a series of finite horizon problems, which implies the existence of a planning horizon. We also provide a useful upper bound for the planning horizon.
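As a toy illustration of how decisions and inventory live on a scenario tree in the lot-sizing setting above (the tree, demands and costs are made up, and this is not the thesis's algorithm), the expected cost of a node-indexed production plan can be evaluated by following each node's path from the root:

```python
# Minimal scenario-tree cost evaluation for an uncapacitated, no-fixed-charge
# single-item lot-sizing instance. Each node carries its parent, the probability
# of reaching it, the demand observed there, and the production decision taken there.
tree = {
    "root": {"parent": None,   "prob": 1.0, "demand": 5, "produce": 9},
    "high": {"parent": "root", "prob": 0.6, "demand": 7, "produce": 3},
    "low":  {"parent": "root", "prob": 0.4, "demand": 3, "produce": 0},
}
PROD_COST, HOLD_COST = 2.0, 1.0

def inventory(node):
    """Inventory left after serving demand at `node`, following its path from the root."""
    if node is None:
        return 0
    n = tree[node]
    inv = inventory(n["parent"]) + n["produce"] - n["demand"]
    assert inv >= 0, f"demand not met at {node}"
    return inv

expected_cost = sum(
    n["prob"] * (PROD_COST * n["produce"] + HOLD_COST * inventory(name))
    for name, n in tree.items()
)
print(f"expected cost = {expected_cost:.2f}")
```

The multi-stage model's advantage discussed above comes from letting the "produce" decisions differ across sibling nodes, whereas a two-stage plan would have to fix them before the uncertainty is revealed.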
