171

Analysis of Worker Assignment Policies on Production Line Performance Utilizing a Multi-skilled Workforce

McDonald, Thomas N. 18 March 2004 (has links)
Lean production prescribes training workers on all tasks within the cell to adapt to changes in customer demand. Multi-skilling of workers can be achieved by cross-training. Cross-training can be improved and reinforced by implementing job rotation. Lean production also prescribes using job rotation to improve worker flexibility, worker satisfaction, and to increase worker knowledge in how their work affects the rest of the cell. Currently, there is minimal research on how to assign multi-skilled workers to tasks within a lean production cell while considering multi-skilling and job rotation. In this research, a new mathematical model was developed that assigns workers to tasks, while ensuring job rotation, and determines the levels of skill, and thus training, necessary to meet customer demand, quality requirements, and training objectives. The model is solved using sequential goal programming to incorporate three objectives: overproduction, cost of poor quality, and cost of training. The results of the model include an assignment of workers to tasks, a determination of the training necessary for the workers, and a job rotation schedule. To evaluate the results on a cost basis, the costs associated with overproduction, defects, and training were used to calculate the net present cost for one year. The solutions from the model were further analyzed using a simulation model of the cell to determine the impact of job rotation and multi-skilling levels on production line performance. The measures of performance include average flowtime, work-in-process (WIP) level, and monthly shipments (number produced). Using the model, the impact of alternative levels of multi-skilling and job rotation on the performance of cellular manufacturing systems is investigated. 
Understanding the effect of multi-skilling and job rotation can aid both production managers and human resources managers in determining which workers need training and how often workers should be rotated to improve the performance of the cell. The lean production literature prescribes training workers on all tasks within a cell and developing a rotation schedule to reinforce the cross-training. Four levels of multi-skilling and three levels of job rotation frequency are evaluated for both a hypothetical cell and a case application in a relatively mature actual production cell. The results of this investigation provide insight into how multi-skilling and job rotation frequency influence production line performance and provide guidance on training policies. The results show an interaction effect between multi-skilling and job rotation for flowtime and work-in-process in both the hypothetical cell and the case application, and for monthly shipments in the case application. The effect of job rotation on performance measures is therefore not the same at all levels of multi-skilling, indicating that inferences about the effect of changing multi-skilling, for example, should not be made without considering the job rotation level. The results also indicate that the net present cost is heavily influenced by the cost of poor quality. For the case application, the results indicated that the maturity level of the cell influences the benefits derived from increased multi-skilling and affects several key characteristics of the cell. As a cell becomes more mature, quality levels are expected to increase, as are the skill levels on tasks normally performed. Because workers in the case application already have a high skill level on some tasks, the return on training is not as significant. 
Additionally, the mature cell has relatively high quality levels from the beginning, so any improvements in quality come in small increments rather than large breakthroughs. The primary contribution of this research is the development of a sequential goal programming worker assignment model that addresses overproduction, poor quality, cross-training, and job rotation in order to meet the prescription in the lean production literature of producing only to customer demand while utilizing multi-skilled workers. A further contribution is the analysis of how multi-skilling level and job rotation frequency impact the performance of the cell. Lastly, the research contributes the application of optimization and simulation methods for comprehensively analyzing the impact of worker assignment on performance measures. / Ph. D.
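The lexicographic (preemptive) priority ordering at the heart of sequential goal programming — optimize overproduction first, then cost of poor quality, then cost of training — can be sketched on a toy worker-task instance. All numbers, and the brute-force enumeration over permutations, are illustrative assumptions; the thesis solves a full mathematical model, not an enumeration:

```python
from itertools import permutations

# Toy penalty tables (invented, not from the thesis): entry [w][t] is
# the penalty of assigning worker w to task t under each goal.
overprod = [[2, 1, 3], [1, 2, 2], [3, 1, 1]]  # overproduction
quality  = [[1, 3, 2], [2, 1, 3], [1, 2, 1]]  # cost of poor quality
training = [[0, 2, 1], [1, 0, 2], [2, 1, 0]]  # cost of training

def goal_vector(assign):
    # assign[w] = task given to worker w; goals listed in priority order.
    return (sum(overprod[w][t] for w, t in enumerate(assign)),
            sum(quality[w][t]  for w, t in enumerate(assign)),
            sum(training[w][t] for w, t in enumerate(assign)))

# Python tuples compare element by element, so minimizing the tuple is
# exactly a preemptive (lexicographic) goal ordering: a later goal only
# breaks ties among solutions optimal for all earlier goals.
best = min(permutations(range(3)), key=goal_vector)
```

A real sequential goal programming run solves one optimization per priority level, fixing each attained goal value as a constraint before moving to the next; the tuple comparison above mimics that ordering on a problem small enough to enumerate.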
172

Enhancements to Transportation Analysis and Simulation Systems

Jeihani Koohbanani, Mansoureh 22 December 2004 (has links)
Urban travel demand forecasting and traffic assignment models are important tools in developing transportation plans for a metropolitan area. These tools provide forecasts of urban travel patterns under various transportation supply conditions, and the predicted travel patterns in turn provide useful information for planning the transportation system. Traffic assignment is the assignment of origin-destination flows to transportation routes, based on factors that affect route choice. The urban travel demand models developed in the mid-1950s provided accurate and precise answers to the planning and policy issues being addressed at that time, which mainly revolved around expansion of the highway system to meet rapidly growing travel demand. However, urban transportation planning and analysis have undergone changes over the years, while the structure of the travel demand models has remained largely unchanged except for the introduction of disaggregate choice models beginning in the mid-1970s. Legislative and analytical requirements that exceed the capabilities of these models and methodologies have driven new technical approaches such as TRANSIMS. The Transportation Analysis and Simulation System, or TRANSIMS, is an integrated system of travel forecasting models designed to give transportation planners accurate and complete information on traffic impacts, congestion, and pollution. It was developed by the Los Alamos National Laboratory to address new transportation and air quality forecasting procedures required by the Clean Air Act, the Intermodal Surface Transportation Efficiency Act, and other regulations. TRANSIMS includes six modules: Population Synthesizer, Activity Generator, Route Planner, Microsimulator, Emissions Estimator, and Feedback. The package has been under development since 1994 and needs significant improvements within some of its modules. 
This dissertation enhances the interaction between the Route Planner and Microsimulator modules to improve the dynamic traffic assignment process in TRANSIMS, and it improves the Emissions Estimator module. Traditional trip assignment is static in nature. Static assignment models assume that traffic is in a steady state, that link volumes are time-invariant, that the time to traverse a link depends only on the number of vehicles on that link, and that vehicle queues are stacked vertically and do not spill back onto upstream links in the network. Thus, a matrix of steady-state origin-destination (O-D) trip rates is assigned simultaneously to shortest paths from each origin to a destination. To address the problems of static traffic assignment, dynamic traffic assignment models have been proposed. In dynamic traffic assignment models, the demand is allowed to be time-varying, so that the number of vehicles passing through a link and the corresponding link travel times become time-dependent. In contrast with the static case, the dynamic traffic assignment problem is still relatively unexplored and a precise formulation is not clearly established. Most models in the literature do not present a solution algorithm, and among the presented methods, most are not suitable for large-scale networks. Among the suggested solution methodologies that claim to be applicable to large-scale networks, very few have actually been tested on such networks. Furthermore, most of these models have stability and convergence problems. A solution methodology for computing dynamic user equilibria in large-scale transportation networks is presented in this dissertation. The method, which stems from the convex simplex method, routes one traveler at a time on the network and updates the link volumes and link travel times after each routing. It is therefore dynamic in two respects: it is time-dependent, and it routes travelers based on the most recently updated link travel times. 
To guarantee finite termination, an additional stopping criterion is adopted. The proposed model is implemented within TRANSIMS and applied to a large-scale network. The current user equilibrium computation in TRANSIMS is simply an iterative process between the Route Planner and Microsimulator modules. In the first run, the Route Planner uses free-flow speeds on each link to estimate travel times when finding shortest paths. This is inaccurate because other vehicles are present on the link, so the speed is not simply the free-flow speed; consequently, some paths might not be shortest paths once congestion is accounted for. The Microsimulator then produces new travel times based on accurate vehicle speeds, these travel times are fed back to the Route Planner, and new routes are determined as shortest paths for selected travelers. This procedure does not necessarily lead to a user equilibrium solution. The problems in this procedure are addressed by our proposed algorithm as follows. TRANSIMS routes one person at a time but does not update link travel times, so each traveler is routed regardless of the other travelers on the network. The current stopping criterion is based only on visualization, and the procedure might oscillate. Also, the current traffic assignment spends a large amount of time iterating between the Route Planner and the Microsimulator: in the Portland study, for example, 21 iterations between the two modules took 33:29 hours using three 500-MHz CPUs (parallel processing). These difficulties are addressed by distributing travelers on the network in a better manner from the beginning, in the Route Planner, to avoid the frequent iterations between the Route Planner and the Microsimulator that would otherwise be required to redistribute them. 
By updating the link travel times using a link performance function, a near-equilibrium is obtained in only one iteration. Travelers are distributed in the network with regard to other travelers in the first iteration; therefore, there is no need to redistribute them using the time-consuming iterative process. To avoid problems caused by the use of a link performance function, an iterative procedure between the current Route Planner and the Microsimulator is performed, and a user equilibrium is found after a few iterations. Using an appropriate descent-based stopping criterion, finite termination of the procedure is guaranteed. An illustration using real data pertaining to the transportation network of Portland, Oregon, is presented along with comparative analyses. The TRANSIMS framework contains a vehicle emissions module that estimates tailpipe emissions for light- and heavy-duty vehicles and evaporative emissions for light-duty vehicles. It uses as inputs the emissions arrays obtained from the Comprehensive Modal Emissions Model (CMEM). This dissertation describes and validates the TRANSIMS framework for modeling vehicle emissions. Specifically, it identifies an error in the model calculations and enhances the emission modeling formulation. Furthermore, the dissertation compares the TRANSIMS emission estimates to on-road emission measurements and to other state-of-the-art emission models, including the VT-Micro and CMEM models. / Ph. D.
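The one-traveler-at-a-time routing with link travel times updated after each routing can be sketched on a toy network. The graph, the BPR-style link performance function, and the demand of twelve travelers are illustrative assumptions, not data or code from the dissertation:

```python
import heapq

# Toy network (made-up numbers): node -> list of (neighbor, edge_id).
# Two routes from A to D; each edge has free-flow time t0 and capacity.
graph = {"A": [("B", "AB"), ("C", "AC")],
         "B": [("D", "BD")],
         "C": [("D", "CD")],
         "D": []}
t0  = {"AB": 5.0, "AC": 6.0, "BD": 5.0, "CD": 4.0}
cap = {"AB": 4.0, "AC": 6.0, "BD": 4.0, "CD": 6.0}
vol = {e: 0.0 for e in t0}

def travel_time(e):
    # BPR-style link performance function: time grows with volume.
    return t0[e] * (1.0 + 0.15 * (vol[e] / cap[e]) ** 4)

def shortest_path(src, dst):
    # Dijkstra over the *current* (volume-dependent) link travel times.
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, e in graph[u]:
            nd = d + travel_time(e)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, e
                heapq.heappush(pq, (nd, v))
    # Walk predecessors back from dst to recover the edge sequence.
    path, node = [], dst
    while node != src:
        e = prev[node]
        path.append(e)
        node = e[0]  # edge ids are "XY": first character is the tail node
    return list(reversed(path))

# Route travelers one at a time, updating volumes (and hence times)
# after each routing -- the core of the one-at-a-time loading idea.
for _ in range(12):
    for e in shortest_path("A", "D"):
        vol[e] += 1.0
```

Because each traveler sees the volumes left by all previous travelers, the flow spreads across both routes instead of piling onto the free-flow shortest path, which is exactly the behavior the dissertation exploits to reach near-equilibrium in a single pass.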
173

Dynamic Code Sharing Algorithms for IP Quality of Service in Wideband CDMA 3G Wireless Networks

Fossa, Carl Edward Jr. 26 April 2002 (has links)
This research investigated the efficient utilization of wireless bandwidth in Code Division Multiple Access (CDMA) systems that support multiple data rates with Orthogonal Variable Spreading Factor (OVSF) codes. The specific problem addressed was that currently proposed public-domain algorithms for assigning OVSF codes make inefficient use of wireless bandwidth for bursty data traffic sources with different Quality of Service (QoS) requirements. The purpose of this research was to develop an algorithm for the assignment of OVSF spreading codes in a Third-Generation (3G) Wideband CDMA (WCDMA) system. The goal of this algorithm was to efficiently utilize limited wireless resources for bursty data traffic sources with different QoS requirements. The key contribution of this research was the implementation and testing of two code sharing techniques that are not implemented in existing OVSF code assignment algorithms, termed statistical multiplexing and dynamic code sharing. The statistical multiplexing technique used a shared channel to support multiple bursty traffic sources. The dynamic code sharing technique supported multiple data users by temporarily granting access to dedicated channels. The two techniques differ in both complexity and performance guarantees. / Ph. D.
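The constraint underlying all OVSF code assignment is that a code remains orthogonal only while no ancestor and no descendant in the code tree is occupied. A minimal first-fit sketch of that tree bookkeeping (illustrative only — it reproduces neither the statistical multiplexing nor the dynamic code sharing technique):

```python
# OVSF code tree sketch. Codes are heap-indexed tree nodes: root is 1,
# children of node k are 2k and 2k+1; the spreading factor doubles per
# level, so the codes at spreading factor sf are the nodes sf .. 2sf-1.
MAX_SF = 8          # tree with spreading factors 1, 2, 4, 8
in_use = set()

def ancestors(k):
    while k > 1:
        k //= 2
        yield k

def descendants(k):
    kids = [2 * k, 2 * k + 1]
    while kids and kids[0] < 2 * MAX_SF:
        yield from kids
        kids = [c for k2 in kids for c in (2 * k2, 2 * k2 + 1)]

def blocked(k):
    # A code is unusable if it, any ancestor, or any descendant is taken:
    # those codes are not mutually orthogonal.
    return (k in in_use
            or any(a in in_use for a in ancestors(k))
            or any(d in in_use for d in descendants(k)))

def assign(sf):
    # First-fit over the codes at spreading factor sf.
    for k in range(sf, 2 * sf):
        if not blocked(k):
            in_use.add(k)
            return k
    return None  # blocking: no orthogonal code of this rate is free

a = assign(4)   # takes node 4
b = assign(2)   # node 2 is blocked by its descendant 4, so node 3
c = assign(8)   # nodes 8, 9 blocked under node 4; node 10 is free
```

Smarter assignment algorithms differ mainly in how they choose among the free codes (and when they reshuffle existing ones) to minimize this kind of blocking.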
174

Knowledge Transfer through Narratives in an Organization

Limon, Susana Dinkins 12 April 2007 (has links)
This dissertation looks at the role narratives play in addressing organizational challenges by facilitating a collective assignment of meaning to those challenges that allows for problem solving, or at least a way to cope with the challenges. Specifically, the research examines how informal knowledge is embedded in organizations in the form of narratives, and how narratives are used to transfer knowledge across the organization. The dissertation develops the concept of narrative and the qualities of the narratives used in this dissertation (focused on events, on people, and on values), and it develops an understanding of knowledge transfer as the collective assignment of meaning to challenges that are constantly emerging. In this case study, three means, or tools, emerge as facilitating the assignment of meaning: superstars, indexing, and knowledge objects. This research enriches the public administration and nonprofit literature by using narrative inquiry to examine the transfer of knowledge in a nonprofit social service organization that serves a vital public purpose under contracts with various levels of government. / Ph. D.
175

DR_BEV: Developer Recommendation Based on Executed Vocabulary

Bendelac, Alon 28 May 2020 (has links)
Bug-fixing, or fixing known errors in computer software, makes up a large portion of software development expenses. Once a bug is discovered, it must be assigned to an appropriate developer who has the necessary expertise to fix it. This bug-assignment task has traditionally been done manually; however, the manual task is time-consuming, error-prone, and tedious, so automatic bug assignment techniques have been developed to facilitate it. Most existing techniques are report-based; that is, they work on bugs that are textually described in bug reports. However, only a subset of the bugs observed as faulty program executions are also described textually. Certain bugs, such as security vulnerability bugs, are represented only by a faulty program execution and are not described textually. In other words, these bugs are represented by code coverage, which indicates which lines of source code were executed in the faulty program execution. Promptly fixing these software security vulnerability bugs is necessary in order to manage security threats. Accordingly, execution-based bug assignment techniques, which model a bug with a faulty program execution, are an important tool in fixing software security bugs. In this thesis, we compare WhoseFault, an existing execution-based bug assignment technique, to report-based techniques. Additionally, we propose DR_BEV (Developer Recommendation Based on Executed Vocabulary), a novel execution-based technique that models developer expertise based on the vocabulary of each developer's source code contributions, and we demonstrate that this technique outperforms the current state-of-the-art execution-based technique. Our observations indicate that report-based techniques perform better than execution-based techniques, but not by a wide margin. 
Therefore, while a report-based technique should be used if a report exists for a bug, our results should provide confidence in the scenarios in which only execution-based techniques are applicable. / Master of Science
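The vocabulary-matching idea behind DR_BEV can be sketched with a toy similarity computation. The developer names, the identifiers, and the choice of term-frequency cosine similarity are all illustrative assumptions, not details taken from the thesis:

```python
from collections import Counter
from math import sqrt

# Hypothetical contribution data: developer -> identifiers appearing in
# the source lines they authored (names invented for illustration).
dev_vocab = {
    "alice": Counter(["parse_header", "buffer", "offset", "buffer"]),
    "bob":   Counter(["render", "widget", "layout", "widget"]),
}

def cosine(a, b):
    # Cosine similarity between two term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(executed_identifiers):
    # Rank developers by similarity between the vocabulary of the
    # executed (covered) code and each developer's own vocabulary.
    bug = Counter(executed_identifiers)
    return max(dev_vocab, key=lambda d: cosine(bug, dev_vocab[d]))

# A faulty execution that covered parsing code points at "alice".
who = recommend(["buffer", "offset", "parse_header"])
```

The key property, mirrored from the abstract, is that the bug is represented only by what the faulty execution touched — no bug report text is needed.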
176

Empirical Analysis of Algorithms for the k-Server and Online Bipartite Matching Problems

Mahajan, Rutvij Sanjay 14 August 2018 (has links)
The k-server problem is of significant importance to the theoretical computer science and operations research communities. In this problem, we are given k servers, their initial locations, and a sequence of n requests that arrive one at a time. All locations are points in some metric space, and the cost of serving a request is the distance between the location of the request and the current location of the server selected to process it. Each request must be processed immediately by moving a server to the request location. The objective is to minimize the total distance traveled by the servers to process all requests. In this thesis, we present an empirical analysis of a new online algorithm for the k-server problem. The algorithm maintains two solutions: an online solution and an approximately optimal offline solution. When a request arrives, we update the offline solution and use this update to inform the online assignment. The algorithm is motivated by the Robust-Matching Algorithm [RMAlgorithm, Raghvendra, APPROX 2016] for the closely related online bipartite matching problem. We give a comprehensive experimental analysis of the algorithm and also provide a graphical user interface that can be used to visualize execution instances of the algorithm. We also consider these problems in a stochastic setting and implement a lookahead strategy on top of the new online algorithm. / MS / Motivated by real-time logistics, we study the online versions of the well-known bipartite matching and k-server problems. In these problems, servers (delivery vehicles) are located in different parts of a city. When a request for delivery is made, we must immediately assign a delivery vehicle to it without any knowledge of the future, which makes cost-effective assignment incredibly challenging. 
In this thesis, we implement and empirically evaluate a new algorithm for the k-server and online matching problems.
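For scale, the simplest online baseline for the k-server problem — greedily moving the nearest server — can be sketched as follows. Greedy is known to perform poorly on adversarial request sequences and is not the thesis's algorithm; it only makes the problem statement concrete (points on a line, made-up data):

```python
# Greedy baseline for the k-server problem on the real line.
def greedy_k_server(servers, requests):
    servers = list(servers)
    total = 0.0
    for r in requests:
        # Serve each request with the currently nearest server.
        i = min(range(len(servers)), key=lambda j: abs(servers[j] - r))
        total += abs(servers[i] - r)
        servers[i] = r  # the chosen server moves to the request
    return total, servers

# Two servers at 0 and 10; three requests arriving online.
cost, final = greedy_k_server([0.0, 10.0], [2.0, 3.0, 9.0])
```

The thesis's algorithm instead consults an approximately optimal offline solution after each request; comparing its total cost against a greedy baseline like this one is a natural first experiment.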
177

Modelling and Appraisal in Congested Transport Networks

West, Jens January 2016 (has links)
Appraisal methodologies for congestion mitigation projects are less well developed than methodologies for projects that reduce free-flow travel times. For instance, static assignment models are incapable of representing the build-up and dissipation of traffic queues, or of capturing the experienced crowding caused by uneven on-board passenger loads. Despite the availability of dynamic traffic assignment, only a few model systems have been developed for cost-benefit analysis of real applications. The six included papers present approaches and tools for analysing traffic and transit projects where congestion relief is the main target. In the transit case studies, we use an agent-based simulation model to analyse congestion and crowding effects and to conduct cost-benefit analyses. In the case study of a metro extension in Stockholm, we demonstrate that congestion and crowding effects constitute more than a third of the total benefits and that a conventional static model vastly underestimates these effects. In another case study, we analyse various operational measures and find that the three main measures (boarding through all doors, headway-based holding, and bus lanes) had an overall positive impact on service performance and that synergistic effects exist. For the congestion charging system in Gothenburg, we demonstrate that a hierarchical route choice model with a continuous value-of-time distribution gives realistic predictions of route choice effects even though the assignment is static. We use the model to show that the net social benefit of the charging system in Gothenburg is positive, but that low-income groups pay a larger share of their income than high-income groups. To analyse congestion charges in Stockholm, however, integration of dynamic traffic assignment with the demand model is necessary, and we demonstrate that this is fully possible. 
Models able to correctly predict these effects highlight the surprisingly large travel time savings from pricing policies and small operational measures. Such measures are cheap compared to investments in new infrastructure, and their implementation can therefore lead to large societal gains.
178

Les propriétés-sûretés en droit de l’OHADA : comparaison avec le droit français / Property-security in the OHADA law : a comparison with French law

Diallo, Thierno Abdoulaye 17 October 2017 (has links)
Property-security (ownership used as security) was introduced into OHADA law by the December 15, 2010 reform of the Uniform Act organizing security interests. This thesis aims to show the points of convergence and divergence between property-security in OHADA law and in French law. It also challenges the accuracy of recognizing in the holder of a property-security a right in rem over the encumbered asset, given that property-security cannot legally be assimilated to ordinary ownership. It argues, on the contrary, that property-security is reducible to traditional real security interests, and accordingly invites the OHADA and French legislators to align the regime of property-security with that of traditional real security interests.
179

On the nonnegative least squares

Santiago, Claudio Prata 19 August 2009 (has links)
In this document, we study the nonnegative least squares primal-dual method for solving linear programming problems. In particular, we investigate connections between this primal-dual method and the classical Hungarian method for the assignment problem. First, we devise a fast procedure for computing the unrestricted least squares solution of a bipartite matching problem by exploiting the special structure of the incidence matrix of a bipartite graph. Moreover, we explain how to extract a solution for the cardinality matching problem from the nonnegative least squares solution. We also give an efficient procedure for solving the cardinality matching problem on general graphs using the nonnegative least squares approach. Next, we examine some theoretical results concerning the minimization of p-norms and separable differentiable convex functions, subject to linear constraints described by node-arc incidence matrices of graphs. Our main result is the reduction of the assignment problem to a single nonnegative least squares problem, which means the primal-dual approach can be made to converge in one step for the assignment problem. This method does not reduce the primal-dual approach to one step for general linear programming problems, but it appears to give a good starting dual feasible point for the general problem.
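The assignment problem that the main result reduces to a single nonnegative least squares problem can be stated concretely. A brute-force optimum on a toy cost matrix (illustrative data only; this shows the problem being solved, not the NNLS method itself):

```python
from itertools import permutations

# Toy assignment instance: cost[i][j] is the cost of assigning
# agent i to job j (numbers invented for illustration).
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]

def brute_force_assignment(cost):
    # Enumerate all one-to-one assignments (feasible only for tiny n;
    # the Hungarian method or the NNLS reduction scales far better).
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return best, sum(cost[i][best[i]] for i in range(n))

perm, opt = brute_force_assignment(cost)
```

Any correct method — Hungarian, the NNLS reduction, or an LP solver — must recover this same optimal value on the instance above.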
180

Avaliação de indicadores de desempenho na análise de importância de segmentos de uma rede viária / Evaluation of performance indicators for analysing the importance of road network segments

Dalosto, Francisco Marchet January 2018 (has links)
Identifying the critical links of a road network is basic knowledge every transport planner should have about the network. Incidents and capacity-reducing events on network elements are inevitable, and accidents or obstructions on critical links cause impacts that degrade network performance. This study, developed with the traffic assignment model of the VISUM software (2015 version), proposes a method to determine the importance of each link in a road network, to identify the network's critical links, and to evaluate link obstructions under both static and dynamic assignment, using several road network performance indicators. The method was applied to the North Coast region of Rio Grande do Sul, Brazil, using traffic data provided by CONCEPA TRIUNFO, DAER, and DNIT. The importance of each link was determined by evaluating the network-wide impact caused by obstructing that link. The proposed method identified the critical link of the studied network and qualitatively assessed the extent of its obstruction in the static and dynamic analyses. Among the indicators, the difference in total time spent in the network grew the most as demand increased, and its ranking of links was insensitive to variations in the intensity and direction of demand. The results show that the critical link of the network belongs to highway BR-101, between the municipalities of Osório and Terra de Areia. The proposed ranking method is independent of demand direction and intensity and relies on more than one metric to evaluate the critical link. These findings can support transport planning by identifying critical stretches of the road network that require more attention from managers, by informing the allocation of resources for road management and repairs, and by indicating operational measures to deploy in the event of disruptions on critical links.
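The core idea — a link's importance equals the network-wide impact of obstructing it — can be sketched on a toy network. The graph, travel times, and the penalty for disconnected origin-destination pairs are invented for illustration; the study itself uses VISUM assignment with several performance indicators:

```python
import heapq

# Toy road network with symmetric, made-up travel times.
edges = {("A", "B"): 4.0, ("B", "C"): 3.0, ("A", "C"): 9.0, ("C", "D"): 2.0}

def neighbors(u, closed):
    for (a, b), w in edges.items():
        if (a, b) == closed:
            continue  # the obstructed link is skipped entirely
        if a == u:
            yield b, w
        elif b == u:
            yield a, w

def dijkstra(src, closed=None):
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in neighbors(u, closed):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

nodes = {n for e in edges for n in e}

def total_time(closed=None):
    # Sum of shortest travel times over all origin-destination pairs;
    # unreachable pairs get a large penalty standing in for lost trips.
    tot = 0.0
    for s in nodes:
        d = dijkstra(s, closed)
        tot += sum(d.get(t, 1e6) for t in nodes if t != s)
    return tot

base = total_time()
impact = {e: total_time(closed=e) - base for e in edges}
critical = max(impact, key=impact.get)
```

Note that a link can carry traffic yet have zero importance when a parallel path of equal cost exists, while a bridge-like link whose closure disconnects part of the network dominates the ranking, which is what the disconnection penalty captures here.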
