  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

The importance of contextual factors on the accuracy of estimates in project management. An emergence of a framework for more realistic estimation process

Lazarski, Adam January 2014 (has links)
Successful projects are characterized by the quality of their planning. Good planning that better takes into account contextual factors allows more accurate estimates to be achieved. As an outcome of this research, a new framework composed of best practices has been discovered. This comprises an open platform that project experts and practitioners can work with efficiently, and that researchers can develop further as required. The research investigation commenced in the autumn of 2008 with a pilot study and then proceeded through an inductive research process, involving a series of eleven interviews. These consisted of interviews with four well-recognized experts in the field, four interviews with different practitioners and three group interviews. In addition, a long-running observation of forty-five days was conceptualized, together with other data sources, before culminating in the proposal of a new framework for improving the accuracy of estimates. Furthermore, an emerging framework – and a description of its know-how in terms of application – have been systematically reviewed through the course of four hundred twenty-five days of meetings, dedicated for the most part to improving the use of a wide range of specific project management tools and techniques and to an improvement in understanding of planning and the estimation process associated with it. This approach constituted an ongoing verification of the research’s findings against project management practice and also served as an invaluable resource for the researcher’s professional and practice-oriented development. The results obtained offered fresh insights into the importance of knowledge management in the estimation process, including the “value of not knowing”, the oft-overlooked phenomenon of underestimation and its potential to co-exist with overestimation, and the use of negative buffer management in the critical chain concept to secure project deadlines. 
The project also highlighted areas of improvement for future research practice that wishes to make use of an inductive approach in order to achieve a socially agreed framework, rather than a theory alone. In addition, improvements were suggested to the various qualitative tools employed in the customized data analysis process.
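The "negative buffer management" mentioned above belongs to the critical chain method, in which a project buffer protects the deadline and its consumption is tracked against progress along the chain. As a rough illustration (the zone thresholds here are illustrative assumptions, not taken from the thesis), a minimal "fever chart" classifier might look like:

```python
def buffer_status(chain_complete: float, buffer_consumed: float) -> str:
    """Classify project health on a critical-chain 'fever chart'.

    chain_complete  -- fraction of the critical chain finished (0..1)
    buffer_consumed -- fraction of the project buffer used up (0..1)
    Zone thresholds are illustrative, not from the thesis.
    """
    # Buffer consumption running well ahead of chain progress signals risk.
    penetration = buffer_consumed - chain_complete
    if penetration > 0.2:
        return "red"     # act now to recover buffer and protect the deadline
    if penetration > 0.0:
        return "yellow"  # watch closely and prepare recovery actions
    return "green"       # buffer intact relative to progress
```

For instance, a project halfway along its chain that has consumed 80% of its buffer would be flagged "red", prompting the deadline-protecting actions the abstract alludes to.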
12

Making use of a new open multipurpose framework for a more realistic estimation process in project management

Hussain, Zahid I., Lazarski, A.B. January 2016 (has links)
The current turbulent times call for adaptability, especially in non-repetitive endeavours, which are a vital characteristic of project management. The research, organized along five objectives, commenced in the autumn of 2008 with a pilot study. It then proceeded through an inductive research process involving a series of interviews with well-recognized international experts in the field. In addition, a conceptualized long-running observation of forty-five days was used before the proposal of a new framework for improving the accuracy of estimates in project management. Furthermore, the framework's "know-how to apply" description has been systematically reviewed through the course of four hundred twenty-five days of meetings. The socially agreed understanding achieved in this way provided assurance that the accuracy of estimates can be improved with a flexible, adaptable framework that exploits the dependency between the project context and the use of tools and techniques conditioned by it.
14

Optimal Resource Allocation Algorithms For Efficient Operation Of Wireless Networks

Ozel, Omur 01 July 2009 (has links) (PDF)
In this thesis, we analyze the allocation of two separate resources in wireless networks: transmit power and buffer space. Controlled allocation of power can provide good performance for both users and the network. Although centralized mechanisms are possible, distributed power control algorithms are preferable for efficient operation of the network. Viewing distributed power allocation as the collection of rational decisions of individual users, we make game-theoretic problem formulations, devise distributed algorithms and analyze them. First, an equilibrium analysis of a vector power control game based on network energy efficiency in a multiple-access-point wireless network is presented. Then, a distributed mechanism is proposed that can smooth admission-control-type power control so that every user can stay in the system. By introducing a new externality into the utility function, a game-theoretic formulation is made that results in the desired distributed actions. Next, the proposed externality is investigated in a control-theoretic framework. Convergence of gradient-based iterative power updates is investigated and the stability of the corresponding continuous-time dynamical system is established. In the final part of the thesis, the allocation of buffer space in a wireless downlink is addressed using a queueing-theoretic framework. An efficient algorithm that finds the optimal buffer partitioning is proposed, and applications of the algorithm to different scenarios are illustrated. Implications of the results for cross-layer design and multiuser diversity are discussed.
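The distributed power control described above can be sketched as each user iteratively ascending the gradient of its own utility. The utility below (log-rate minus a linear pricing externality) and all parameters are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

def utility(p, i, gains, noise=1.0, price=0.1):
    """User i's utility: log(1 + SINR_i) minus a pricing externality.

    The pricing term is the 'externality' that discourages users from
    raising power without bound. All constants are illustrative.
    """
    interference = noise + sum(gains[j] * p[j] for j in range(len(p)) if j != i)
    sinr = gains[i] * p[i] / interference
    return np.log(1.0 + sinr) - price * p[i]

def gradient_power_updates(gains, steps=500, lr=0.05, eps=1e-6):
    """Each user repeatedly takes a numerical-gradient ascent step on its
    own utility, keeping transmit power non-negative."""
    p = np.ones(len(gains))
    for _ in range(steps):
        for i in range(len(gains)):
            p_hi = p.copy()
            p_hi[i] += eps
            grad = (utility(p_hi, i, gains) - utility(p, i, gains)) / eps
            p[i] = max(0.0, p[i] + lr * grad)
    return p
```

Because the pricing term dominates once SINR saturates, the iterates settle at finite powers, mimicking the stabilizing role the externality plays in the thesis's formulation.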
15

Simulação e análise do método Gerenciamento Dinâmico do Pulmão para ajuste de estoque de produtos destinados à entrega imediata / Simulation and analysis of the Dynamic Buffer Management method for inventory adjustment of products destined for immediate delivery

Ikeziri, Lucas Martins 28 June 2018 (has links)
Inventory management in supply chains faces several problems, such as low turnover, high inventory investments, lost sales due to stockouts, and high obsolescence of stored products. This research therefore aims to evaluate the effectiveness of the Dynamic Buffer Management (DBM) technique as a way of preserving the availability of products destined for immediate delivery in environments that adopt the pull-based distribution and replenishment solution of the Theory of Constraints (TOC). To that end, simulations were performed in the ProModel software using stochastic replenishment times and scenarios with different combinations of demand, which can behave in two ways: i) an average without trend, with two levels of coefficient of variation, and ii) an average with a seasonal trend, with two levels of intensity. Two DBM algorithms were modeled according to their principles as found in the literature and combined with parameters for dynamic adjustment of the target inventory level. The models were evaluated through the following indicators: fill rate, average inventory in the system, and turnover for immediate fulfilment of demand. The results indicate that, for average demand without trend, disabling DBM and maintaining a sensible target level in the system inhibits undue adjustments and ensures good model performance. However, for average demand with a seasonal trend, DBM is highly recommended to ensure system performance and adjust inventory levels as the demand pattern changes.
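A common formulation of DBM divides the target inventory level into three equal zones and resizes the target after sustained dwell in the red or green zone. The sketch below is a hedged illustration of that principle; the window and step sizes are assumptions, not the calibrated parameters of the two algorithms the dissertation models:

```python
def dbm_adjust(target, on_hand_history, window=5):
    """One Dynamic Buffer Management check (simplified sketch).

    The target level is split into thirds: red (bottom), yellow, green.
    Persistent red-zone dwell raises the target by a third; persistent
    green-zone dwell lowers it by a third. Window and step sizes are
    illustrative choices.
    """
    recent = on_hand_history[-window:]
    red_line = target / 3.0
    green_line = 2.0 * target / 3.0
    if all(level < red_line for level in recent):    # shortage risk
        return round(target * 4.0 / 3.0)
    if all(level > green_line for level in recent):  # excess inventory
        return round(target * 2.0 / 3.0)
    return target
```

For example, with a target of 90, on-hand levels persistently below 30 would raise the target to 120, while levels persistently above 60 would lower it to 60; mixed readings leave the target unchanged.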
16

Gerenciamento de buffers em redes DTN baseado em entropia de mobilidade / Buffer management in DTN networks based on mobility entropy

Lima, Polianny Almeida 27 March 2013 (has links)
Standard networking technologies are based on typical assumptions such as end-to-end connectivity between source and destination for the entire period of communication, low error rates, retransmission mechanisms to repair errors, and relatively small communication delays. If one or more of these assumptions does not hold in practice, intermittent disruptions occur in communication, creating challenges that must be overcome to ensure successful transmission. In this context, delay- and disruption-tolerant networks (DTN - Delay Tolerant Networks) emerged as a technology for communication when there is no network infrastructure and disconnections may occur frequently due to the mobility of nodes. To work around the problem of disconnections, DTN networks use the concept of persistent storage, which itself becomes a problem because nodes have limited buffer capacity. The question is how to select the messages that remain in the buffers the longest, considering the unpredictable mobility of the nodes, the short duration of contacts between pairs of nodes, the finite storage at each node, and so on. This work proposes a buffer management scheme that transmits messages adaptively based on the variation of the network's mobility entropy.
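The entropy-based idea can be sketched as follows: compute the Shannon entropy of a node's recent contact distribution, and let the buffer policy adapt to it. The specific drop policies and the threshold below are illustrative assumptions, not the dissertation's actual algorithm:

```python
import math
from collections import Counter

def mobility_entropy(contacts):
    """Shannon entropy (in bits) of a node's contact distribution.

    High entropy: contacts spread across many distinct peers.
    Low entropy: contacts concentrated on a few peers.
    """
    total = len(contacts)
    counts = Counter(contacts)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def drop_order(buffer_msgs, entropy, threshold=1.5):
    """Order messages for dropping on buffer overflow, adapting to entropy.

    Illustrative policy: with high entropy (frequent, diverse contacts),
    drop the oldest messages first; with low entropy, drop the largest
    messages first to free the most space per drop.
    """
    if entropy >= threshold:
        return sorted(buffer_msgs, key=lambda m: m["created"])
    return sorted(buffer_msgs, key=lambda m: -m["size"])
```

A node meeting peers `["a", "a", "b", "b"]` has entropy 1.0 bit; one that only ever meets `"a"` has entropy 0, so the two would rank their buffers differently under this sketch.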
17

Reducing and absorbing variations in a manufacturing context : A capacity management perspective

Hedvall, Lisa January 2019 (has links)
Variations can have substantially negative effects on performance, and it is therefore important to investigate how variations can be reduced or absorbed in an appropriate way for competitiveness. Manufacturing companies are exposed to variations daily, and operations managers need to take capacity management decisions with this in mind to secure delivery capability. The current body of knowledge mainly focuses on techniques for root-cause analysis to reduce variations, especially in manufacturing processes, neglecting the fact that both internally and externally generated variations can be influenced to varying degrees. Buffers are traditionally associated with waste and unfavourable performance, whereas the right buffers can be of strategic importance. The purpose of this research is therefore to increase the understanding of causes of variations, actions to reduce variations, and how buffers can be used to mitigate the negative effects of variations related to capacity management in a manufacturing context. Three research studies, including four appended papers, have been conducted to fulfil this purpose. These studies encompassed different combinations of traditional literature reviews, conceptual research with logical reasoning, and case study research. The findings demonstrate that mitigating the negative effects of variations is a complex challenge and not just a matter of choosing to reduce or absorb the variations. In general, it is concluded that variation management and buffer management involve several aspects to consider, with implications for capacity management. In addition, the results indicate that decisions in manufacturing companies tend to be based on intuition and previous experience due to a lack of decision support. Furthermore, the participating companies perceive that several of their internal routines contribute to their prevailing variations. 
This research contributes to an awareness of the causes of variations that can be influenced, possible actions to reduce variations, and the purposes of different buffers for absorbing variations. This is intended to facilitate a systematic way of working with reducing and absorbing variations, providing support for variation management and buffer management from a capacity management perspective.
18

Automatic Data Allocation, Buffer Management And Data Movement For Multi-GPU Machines

Ramashekar, Thejas 10 1900 (has links) (PDF)
Multi-GPU machines are being increasingly used in high performance computing. These machines are used both as standalone workstations to run computations on medium to large data sizes (tens of gigabytes) and as nodes in a CPU-multi-GPU cluster handling very large data sizes (hundreds of gigabytes to a few terabytes). Each GPU in such a machine has its own memory and does not share the address space either with the host CPU or with other GPUs. Hence, applications utilizing multiple GPUs have to manually allocate and manage data on each GPU. A significant body of scientific applications that utilize multi-GPU machines contain computations inside affine loop nests, i.e., loop nests that have affine bounds and affine array access functions. These include stencils, linear-algebra kernels, dynamic programming codes and data-mining applications. Data allocation, buffer management, and coherency handling are critical steps that need to be performed to run affine applications on multi-GPU machines. Existing works that propose to automate these steps have limitations and inefficiencies in terms of allocation sizes, exploiting reuse, transfer costs and scalability. An automatic multi-GPU memory manager that can overcome these limitations and enable applications to achieve scalable performance is highly desired. One technique that has been used in certain memory management contexts in the literature is that of bounding boxes. The bounding box of an array, for a given tile, is the smallest hyper-rectangle that encapsulates all the array elements accessed by that tile. In this thesis, we exploit the potential of bounding boxes for memory management far beyond their current usage in the literature, and propose a scalable and fully automatic data allocation and buffer management scheme for affine loop nests on multi-GPU machines, called the Bounding Box based Memory Manager (BBMM). BBMM is a compiler-assisted runtime memory manager. 
At compile time, it uses static analysis techniques to identify the set of bounding boxes accessed by a computation tile. At run time, it uses bounding box set operations such as union, intersection, difference, and finding subset and superset relations to compute a set of disjoint bounding boxes from the set of bounding boxes identified at compile time. It also exploits the architectural capability of GPUs to perform fast transfers of rectangular (strided) regions of memory, and hence performs all data transfers in terms of bounding boxes. BBMM uses these techniques to automatically allocate and manage the data required by applications (suitably tiled and parallelized for GPUs). This allows it to (1) allocate only as much data as (or close to what) is required by the computations running on each GPU, (2) efficiently track buffer allocations and hence maximize data reuse across tiles and minimize data transfer overhead, and (3) as a result, enable applications to maximize the utilization of the combined memory of multi-GPU machines. BBMM can work with any choice of parallelizing transformations, computation placement, and scheduling schemes, whether static or dynamic. Experiments run on a system with four GPUs with various scientific programs showed that BBMM is able to reduce data allocations on each GPU by up to 75% compared to current allocation schemes, yield at least 88% of the performance of hand-optimized OpenCL codes, and allow excellent weak scaling.
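The bounding-box set operations at the heart of this approach can be sketched with hyper-rectangles represented as per-dimension inclusive (lo, hi) intervals. This is a simplified illustration of the union/intersection primitives, not the thesis's implementation:

```python
def bbox_union(a, b):
    """Smallest hyper-rectangle covering both boxes.

    A box is a list of inclusive (lo, hi) intervals, one per dimension.
    """
    return [(min(al, bl), max(ah, bh)) for (al, ah), (bl, bh) in zip(a, b)]

def bbox_intersection(a, b):
    """Overlap of two boxes, or None if they are disjoint in any dimension."""
    out = []
    for (al, ah), (bl, bh) in zip(a, b):
        lo, hi = max(al, bl), min(ah, bh)
        if lo > hi:  # no overlap along this dimension -> boxes are disjoint
            return None
        out.append((lo, hi))
    return out

def bbox_elements(box):
    """Number of array elements the box covers (what one GPU would allocate)."""
    n = 1
    for lo, hi in box:
        n *= hi - lo + 1
    return n
```

Because every region is a hyper-rectangle, each transfer maps directly onto the strided rectangular copies GPUs handle efficiently, which is why the scheme expresses all allocation and movement in terms of boxes.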
