111

DETECTION RESOURCE ALLOCATION IN ADVERSARIAL PROBLEMS

Bayrak, Halil 25 September 2007
We consider the problem of optimally allocating static and dynamic detection resources in order to detect or prevent evaders from reaching their destinations. The evaders may be terrorists or smugglers attempting to enter a facility or illegally cross a border. Examples of static detection resources include sensors that detect people and weapons, cameras, and checkpoints. Examples of dynamic detection resources include border guards and unmanned aerial vehicles. It is crucial to use these resources efficiently to increase the detection probabilities of evaders. This study describes two different models built to allocate the available resources. In the first model, we seek an optimal allocation scheme in which only static detection resources are considered. Information asymmetry between the evader and the system designer is exploited, and several risk criteria are analyzed. In the second model, both static and dynamic detection resources are considered. We determine an allocation scheme for the static detection resources and an inspection policy for the dynamic detection resources. The models are built, solved, and analyzed using integer programming, stochastic programming, and game theory techniques. Structural properties of the models are explored, and heuristic algorithms are developed to solve larger problem instances.
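To make the static-allocation idea concrete, the sketch below poses a toy version as an integer program: place a limited budget of sensors at hypothetical checkpoint locations so that even the evader's most evasive path is detected with high probability. The locations, paths, detection probabilities, and budget are invented for illustration, detections are assumed independent, and the log-space max-min linearization is a standard modeling device rather than the dissertation's actual formulation.

```python
# Minimal sketch: max-min static sensor allocation as an integer program.
import math
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum, value

# Hypothetical data: sensor detection probability at each candidate location,
# the locations each evader path crosses, and the number of sensors available.
p = {"gate": 0.7, "fence": 0.5, "road": 0.6, "river": 0.4}
paths = {"north": ["gate", "road"], "south": ["fence", "river"], "east": ["road", "river"]}
budget = 2

log_evade = {j: math.log(1.0 - p[j]) for j in p}       # negative numbers

model = LpProblem("static_sensor_allocation", LpMinimize)
x = LpVariable.dicts("place", p.keys(), cat=LpBinary)  # 1 = sensor placed here
z = LpVariable("worst_log_evasion")                    # max over paths (log scale)

model += z                                             # minimize the worst case
for name, locs in paths.items():
    # z must dominate the log evasion probability of every candidate path
    model += z >= lpSum(log_evade[j] * x[j] for j in locs), f"path_{name}"
model += lpSum(x[j] for j in p) <= budget, "budget"

model.solve()
placed = [j for j in p if x[j].value() > 0.5]
print("sensors at:", placed)
print("worst-case detection probability:", round(1.0 - math.exp(value(z)), 3))
```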
112

An Investigation of the Idea Generation and Protection Process in Academia

Golish, Bradley Lawrence 25 September 2007
The Bayh-Dole Act of 1980 enabled U.S. universities to patent inventions developed through federally funded research programs. This provided an opportunity for academia to develop technologies from the research conducted by faculty. Over 25 years have passed and 39,671 patents have been granted to academic inventors. Unfortunately, this accounts for less than two percent of the total patents awarded in the U.S. during this time. To address this concern, the research presented here investigates the academic technology development process to determine factors that are critical to shaping ideas towards creating patentable technologies. While past research has been corporate-focused and conducted from the managerial perspective, this research examined the process from the perspectives of the inventor and the technology transfer office through two investigations that utilized a common framework. Study One, focused in the area of Radio Frequency Identification, explored the process from idea generation to protection for 11 successful patent inventors. The inventors created concept maps describing their development process. Five investigations were conducted on the maps: three quantitative and two qualitative. The participating corporate inventors focused more on financial issues and, with regard to challenges, found strategic issues to be more problematic and societal aspects to be more time-consuming and problematic than did the academic inventors. Part II of Study One involved an inventor questionnaire based on the information gathered in Part I. Unfortunately, the response rate was too low to yield conclusive data. Study Two identified the critical duties being performed by technology transfer offices (TTOs). One qualitative and two quantitative analyses were conducted on the data collected from a TTO licensing manager survey. Analyses from this study provided insight into elements that influence TTO success factors. From these two studies, a model for academic technology development was created. If new and existing TTOs can support academic inventors with respect to the elements identified in this model, the possibility exists to further improve the quality and quantity of patents arising from academia.
113

Integrated Decision Making in Global Supply Chains and Networks

Arisoy, Ozlem 25 September 2007
One of the more visible and often controversial effects of globalization is the rising trend in global sourcing, commonly referred to as outsourcing, offshoring or offshore outsourcing. Today, many organizations experience the necessity of growing globally in order to remain profitable and competitive. This research focuses on the process that organizations undergo in making strategic decisions of whether or not to go offshore, and then on the location and volume of these offshore operations. This research considers the strategic decision of offshoring and subdivides it into two components: analysis of monetary benefits and evaluation of intangible variables. In this research, these two components are integrated by developing an analytical decision approach that can incorporate quantitative and qualitative factors in a structure based on multiple solution methodologies. The decision approach developed consists of two phases which concurrently assess the offshoring decision by utilizing mixed integer programming and multi-attribute decision modeling, specifically the Analytic Network Process, followed by multi-objective optimization and tradeoff analysis. The decision approach is further enhanced by employing engineering economic tools such as life cycle costing and activity-based costing. As a result, the approach determines optimal offshoring strategies and provides a framework to investigate the optimality of the decisions with changing parameters and priorities. The applicability, compliance, and effectiveness of the developed integrated decision-making approach are demonstrated on two real-life cases in two different industries. Through empirical studies, different dimensions of offshoring decisions are examined, classified, and characterized within the framework of the developed decision approach. The solutions are evaluated by their value, level of support, and relevance to the decision makers. The use of the developed systematic approach showed that counterintuitive decisions may sometimes be the best strategy. This study contributes to the literature with a comprehensive decision approach for determining the most advantageous offshoring location and distribution strategies by integrating multiple solution methodologies. This approach can be adopted in the corporate world as a tool to improve global vision.
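As a sketch of the kind of tradeoff analysis described above, the snippet below sweeps a weight between a monetary criterion and an aggregated intangible score for a few invented candidate locations. The locations, costs, and scores are placeholders, and the simple weighted sum stands in for the dissertation's richer MIP/ANP machinery; the point is only that the preferred location can change as the weighting of intangibles changes.

```python
# Toy cost-versus-intangibles tradeoff sweep (invented numbers).
candidates = {
    # location: (annual landed cost in $M, qualitative priority score in [0, 1])
    "domestic": (12.0, 0.80),
    "nearshore": (9.5, 0.65),
    "offshore": (7.0, 0.45),
}

costs = {k: c for k, (c, _) in candidates.items()}
scores = {k: s for k, (_, s) in candidates.items()}
c_min, c_max = min(costs.values()), max(costs.values())

def utility(loc, w):
    """Weighted sum of normalized cost savings (weight w) and intangible score (1 - w)."""
    savings = (c_max - costs[loc]) / (c_max - c_min)   # 1.0 = cheapest option
    return w * savings + (1.0 - w) * scores[loc]

for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    best = max(candidates, key=lambda loc: utility(loc, w))
    print(f"cost weight {w:.2f}: prefer {best}")
```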
114

DEVELOPMENT OF A SIMULTANEOUS DESIGN FOR SUPPLY CHAIN PROCESS FOR THE OPTIMIZATION OF THE PRODUCT DESIGN AND SUPPLY CHAIN CONFIGURATION PROBLEM

Gokhan, Nuri Mehmet 30 January 2008
This research investigates the development of a process for Design for Supply Chain (DFSC), a process that aims to reduce product life cycle costs, improve product quality, improve efficiency, and improve profitability for all partners in the supply chain (SC). It focuses on understanding the impacts and benefits of incorporating the SC configuration problem into the product design phase. Because the product design establishes requirements on manufacturability, cost, and similar parameters, the SC is closely linked to, and affected by, product design decisions. This research uniquely combines the impacts of the product design and price decisions on product demand with the impacts of the SC decisions on cost, lead time, and demand satisfaction. The developed mathematical models are aimed at economically managing the SC for product design and support not only product design, but also redesign associated with process improvements and design changes in general. This research proposes a proactive approach to product design, allowing impacts on the SC to be predicted in advance and resolved more quickly and economically. It presents two product and SC design approaches. The sequential approach examines the design of a product followed by the SC design, whereas the simultaneous approach considers both the product and SC designs concurrently. By utilizing Mixed Integer Programming and a Genetic Algorithm, this research studies various research questions which examine modeling preferences and essential performance metrics, the impacts of using a sequential versus simultaneous design approach on these performance metrics, the robustness of the resulting SC design, and the relative importance of the product and SC design on profits. To answer these questions, different models are developed, tested with illustrative data, and the results are analyzed. The test results and validation by industry experts indicate that the developed DFSC models add significant value to the product design procedure, resulting in a useful decision-support tool. The results indicate that the simultaneous DFSC approach captures the complex interactions between the product and supply chain decisions, improving the overall profit of a product across its life cycle.
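The toy enumeration below, with invented revenues, demands, and supply chain costs, illustrates the sequential-versus-simultaneous distinction: choosing the design first (here, by revenue potential alone) and the supply chain second can leave profit on the table relative to evaluating both decisions jointly. It is an illustration of the concept only, not the dissertation's MIP or genetic algorithm models.

```python
# Toy sequential vs. simultaneous product/supply-chain decision (invented data).
designs = {          # design: (unit revenue, expected demand)
    "premium": (60.0, 1000),
    "standard": (40.0, 1300),
}
suppliers = {        # (design, supplier): unit supply-chain cost
    ("premium", "local"): 45.0, ("premium", "overseas"): 50.0,
    ("standard", "local"): 25.0, ("standard", "overseas"): 22.0,
}

def profit(design, supplier):
    price, demand = designs[design]
    return (price - suppliers[(design, supplier)]) * demand

# Sequential: pick the design with the highest revenue potential first,
# then the cheapest supply chain for that design.
d_seq = max(designs, key=lambda d: designs[d][0] * designs[d][1])
s_seq = min((s for (d, s) in suppliers if d == d_seq),
            key=lambda s: suppliers[(d_seq, s)])

# Simultaneous: enumerate every design/supplier pair jointly.
d_sim, s_sim = max(suppliers, key=lambda pair: profit(*pair))

print("sequential  :", d_seq, s_seq, profit(d_seq, s_seq))
print("simultaneous:", d_sim, s_sim, profit(d_sim, s_sim))
```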
115

DEVELOPING METHODS TO SOLVE THE WORKFORCE ASSIGNMENT PROBLEM CONSIDERING WORKER HETEROGENEITY AND LEARNING AND FORGETTING

Vidic, Natasa S. 10 June 2008
In this research we study how the assignment of a fully cross-trained workforce organized on a serial production line affects throughput. We focus on two serial production environments: dynamic worksharing on a production line, similar to bucket brigade systems, and a fixed-assignment serial production line in which workers perform a specific task during a given time period. For the dynamic assignment environment, we concentrated on the impact of different assignment approaches and policies on overall system performance. First, we studied two-worker, two-station lines in which incomplete dominance is possible, as well as the effects of duplicating tooling on these lines. One focus of this research was to optimally solve the dynamic worksharing assignment problem and determine exact percentages of work performed by each worker under the assumptions presented. We developed a mixed integer programming formulation for n workers and m stations that models one-cycle balanced line behavior where workers exchange parts at exactly one position. This formulation is extended to incorporate multiple production lines. We also developed a two-cycle formulation that models a condition in which workers exchange parts at exactly two positions in a periodic manner. We also determined throughput levels when workers' productivity changes over time due to their learning and forgetting characteristics. A fixed worker assignment system considers a serial production setting in which work is passed from station to station with intermediate buffers between stations. We considered two models. The first model assumes that workers perform tasks based on their steady-state productivity rates. The second model assumes that workers' productivity rates vary based on their learning and forgetting characteristics. Heuristic methods were developed and implemented to solve these two models and to determine optimal throughput levels and optimal worker assignments. We were also able to demonstrate the importance of introducing learning and forgetting into these types of worker assignment problems. A final focus of this research was the comparison of the dynamic worksharing and fixed worker assignment environments.
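For readers unfamiliar with learning-curve effects, the short sketch below uses the standard log-linear (Wright) learning curve to count how many units two hypothetical workers finish in one shift; the worker parameters are invented, and the dissertation's learning-and-forgetting model may take a different functional form. Comparing such per-worker outputs is the kind of calculation a worker-assignment heuristic would repeat for candidate assignments.

```python
# Units completed in a shift under the log-linear learning curve t_n = t_1 * n**b.
import math

def units_completed(first_unit_time, learning_rate, shift_minutes):
    """Count units finished in a shift; b = log2(rate), so an 80% rate cuts
    the unit time by 20% with every doubling of cumulative output."""
    b = math.log(learning_rate, 2)          # negative exponent for rate < 1
    elapsed, n = 0.0, 0
    while True:
        t_next = first_unit_time * (n + 1) ** b
        if elapsed + t_next > shift_minutes:
            return n
        elapsed += t_next
        n += 1

# Two hypothetical workers on the same 480-minute shift: a slower starter who
# learns quickly versus a faster starter who learns slowly.
print("fast learner :", units_completed(12.0, 0.80, 480))
print("slow learner :", units_completed(10.0, 0.95, 480))
```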
116

Analysis of Patient Fall Data

Benson, Carl Joseph 08 September 2008
Patient falls are common adverse events that occur in all healthcare environments. Patient falls are a common cause of morbidity (disability caused by accident) and the leading cause of nonfatal injuries producing trauma-related hospitalizations in the United States. Patient falls result in longer hospital stays, attendant increases in medical costs, and reduced quality of life for the patients who experience these events. The purpose of this thesis was to examine the patient fall data collected by a community-based acute teaching hospital. These data were then analyzed by a variety of analytical methods to determine whether there are correlations related to the location and timing of the falls, as well as to the characteristics of the patients who fell. Conclusions were then made as to possible improvements in methods to monitor patients to reduce the patient fall rate. The major results of this analysis were: (1) statistical methods were found to be useful in providing an improved understanding of the characteristics of the patient fall data, allowing hospital staff to rely on quantitative metrics when deciding how to reduce patient fall rates; (2) the time intervals between consecutive fall events were found to be exponentially distributed; (3) the hospital-wide monthly fall rate goals, as well as the individual hospital unit patient fall rate goals, were shown to be regularly exceeded by the measured data; and (4) the fall screening scores used to assess the risk of patient falls, while an overall predictor of which patients did and did not fall, were not a good predictor of whether individual patients would fall. As a result of this study, a number of specific recommendations will be proposed to the hospital as a means to potentially improve the methods for addressing patient falls. A hospital-wide cultural change was initiated in June 2007 to attempt to reduce the rate of patient falls. The effect of implementing this program will be followed by observing whether the overall hospital and unit monthly fall rates are reduced.
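The snippet below sketches the kind of distributional check behind finding (2): fit an exponential distribution to the gaps between consecutive falls and run a goodness-of-fit test. The inter-fall times are simulated here because the hospital's data are not reproduced in the abstract.

```python
# Sketch: test whether inter-fall times look exponential (simulated stand-in data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
inter_fall_days = rng.exponential(scale=2.5, size=200)   # stand-in for observed gaps

# Fit an exponential with the origin fixed at zero, then test the fit.
loc, scale = stats.expon.fit(inter_fall_days, floc=0)
ks_stat, p_value = stats.kstest(inter_fall_days, "expon", args=(loc, scale))

print(f"estimated mean gap: {scale:.2f} days  (rate {1/scale:.2f} falls/day)")
print(f"KS statistic {ks_stat:.3f}, p-value {p_value:.3f}")
```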
117

Estimating the price of privacy in liver transplantation

Sandikci, Burhaneddin 08 September 2008
In the United States, patients with end-stage liver disease must join a waiting list to be eligible for cadaveric liver transplantation. However, the details of the composition of this waiting list are only partially available to the patients. Patients currently have the prerogative to reject any offered livers without any penalty. We study the problem of optimally deciding which offers to accept and which to reject. This decision is significantly affected by the patient's health status and progression as well as the composition of the waiting list, as it determines the chances a patient receives offers. We evaluate the value of obtaining the waiting list information through explicitly incorporating this information into the decision making process faced by these patients. We define the concept of the patient's price of privacy, namely the number of expected life days lost due to a lack of perfect waiting list information. We develop Markov decision process models that examine this question. Our first model assumes perfect waiting list information and, when compared to an existing model from the literature, yields upper bounds on the true price of privacy. Our second model relaxes the perfect information assumption and, hence, provides an accurate representation of the partially observable waiting list as in current practice. Comparing the optimal policies associated with these two models provides more accurate estimates for the price of privacy. We derive structural properties of both models, including conditions that guarantee monotone value functions and control-limit policies, and solve both models using clinical data. We also provide an extensive empirical study to test whether patients are actually making their accept/reject decisions so as to maximize their life expectancy, as this is assumed in our previous models. For this purpose, we consider patients transplanted with living-donor livers only, as considering other patients implies a model with enormous data requirements, and compare their actual decisions to the decisions suggested by a nonstationary MDP model that extends an existing model from the literature.
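A heavily simplified, hypothetical sketch of the accept/reject tradeoff is shown below as a Markov decision process solved by value iteration: each day a patient in a given health state may receive an offer and compares the expected post-transplant life days against one more day of waiting plus the expected future value. The health states, transition probabilities, offer rates, and rewards are invented, and the waiting-list information central to the dissertation's models is omitted; the sketch only illustrates why the optimal policy tends to take a control-limit form in patient health.

```python
# Stripped-down accept/reject MDP (hypothetical numbers; waiting list omitted).
import numpy as np

H = 4                                           # health states: 0 = best, 3 = sickest
r = np.array([2500.0, 2400.0, 2300.0, 2200.0])  # expected life days after transplant
q = np.array([0.02, 0.04, 0.08, 0.12])          # daily probability of a liver offer
# Row h: chance of each health state tomorrow; the last column is death.
P = np.array([
    [0.999, 0.001, 0.000, 0.000, 0.000],
    [0.001, 0.997, 0.001, 0.000, 0.001],
    [0.000, 0.002, 0.995, 0.002, 0.001],
    [0.000, 0.000, 0.003, 0.994, 0.003],
])

V = np.zeros(H)                                 # value = expected remaining life days
while True:
    cont = 1.0 + P[:, :H] @ V                   # wait: live today, then continue
    accept = r > cont                           # accept exactly when transplant wins
    V_new = q * np.maximum(r, cont) + (1 - q) * cont
    if np.max(np.abs(V_new - V)) < 1e-6:
        break
    V = V_new

print("accept an offer in health states:", np.where(accept)[0])
```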
118

Designing the Liver Allocation Hierarchy: Incorporating Equity and Uncertainty

Demirci, Mehmet Can 08 September 2008
Liver transplantation is the only available therapy for any acute or chronic condition resulting in irreversible liver dysfunction. The liver allocation system in the U.S. is administered by the United Network for Organ Sharing (UNOS), a scientific and educational nonprofit organization. The main components of the organ procurement and transplant network are Organ Procurement Organizations (OPOs), which are collections of transplant centers responsible for maintaining local waiting lists, harvesting donated organs and carrying out transplants. Currently in the U.S., OPOs are grouped into 11 regions to facilitate organ allocation, and a three-tier mechanism is utilized that aims to reduce organ preservation time and transport distance to maintain organ quality, while giving sicker patients higher priority. Livers are scarce and perishable resources that rapidly lose viability, which makes their transport distance a crucial factor in transplant outcomes. When a liver becomes available, it is matched with patients on the waiting list according to a complex mechanism that gives priority to patients within the harvesting OPO and region. Transplants at the regional level accounted for more than 50% of all transplants since 2000. This dissertation focuses on the design of regions for liver allocation hierarchy, and includes optimization models that incorporate geographic equity as well as uncertainty throughout the analysis. We employ multi-objective optimization algorithms that involve solving parametric integer programs to balance two possibly conflicting objectives in the system: maximizing efficiency, as measured by the number of viability adjusted transplants, and maximizing geographic equity, as measured by the minimum rate of organ flow into individual OPOs from outside of their own local area. Our results show that efficiency improvements of up to 6% or equity gains of about 70% can be achieved when compared to the current performance of the system by redesigning the regional configuration for the national liver allocation hierarchy. We also introduce a stochastic programming framework to capture the uncertainty of the system by considering scenarios that correspond to different snapshots of the national waiting list and maximize the expected benefit from liver transplants under this stochastic view of the system. We explore many algorithmic and computational strategies including sampling methods, column generation strategies, branching and integer-solution generation procedures, to aid the solution process of the resulting large-scale integer programs. We also explore an OPO-based extension to our two-stage stochastic programming framework that lends itself to more extensive computational testing. The regional configurations obtained using these models are estimated to increase expected life-time gained per transplant operation by up to 7% when compared to the current system. This dissertation also focuses on the general question of designing efficient algorithms that combine column and cut generation to solve large-scale two-stage stochastic linear programs. We introduce a flexible method to combine column generation and the L-shaped method for two-stage stochastic linear programming. We explore the performance of various algorithm designs that employ stabilization subroutines for strengthening both column and cut generation to effectively avoid degeneracy. 
We study two-stage stochastic versions of the cutting stock and multi-commodity network flow problems to analyze the performances of algorithms in this context.
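The toy script below illustrates the efficiency/equity tension on an invented five-OPO example: it enumerates every two-region split and keeps the Pareto-efficient ones under two deliberately simple proxy objectives (transplants performed with regional sharing, and the worst region's supply-to-demand ratio). These proxies are not the dissertation's measures, and the real problem involves eleven regions, geographic and clinical considerations, and uncertainty; the sketch only shows why a parametric, multi-objective treatment is needed.

```python
# Toy bi-objective region design: enumerate 2-region splits of 5 hypothetical OPOs.
from itertools import combinations

# OPO: (annual donor livers, annual listed candidates) -- invented figures
opos = {"A": (120, 90), "B": (60, 150), "C": (80, 80), "D": (40, 110), "E": (100, 70)}

def evaluate(region):
    """Proxy objectives for a 2-region split, given one region's member set."""
    other = tuple(o for o in opos if o not in region)
    eff, eq = 0.0, float("inf")
    for members in (region, other):
        supply = sum(opos[o][0] for o in members)
        demand = sum(opos[o][1] for o in members)
        eff += min(supply, demand)          # transplants if organs are shared regionally
        eq = min(eq, supply / demand)       # access rate of the worst-off region
    return eff, eq

results = {}
names = list(opos)
for k in range(1, len(names) // 2 + 1):     # each unordered split counted once
    for region in combinations(names, k):
        results[region] = evaluate(region)

pareto = {reg: (e, a) for reg, (e, a) in results.items()
          if not any(e2 >= e and a2 >= a and (e2, a2) != (e, a)
                     for e2, a2 in results.values())}
for region, (eff, eq) in sorted(pareto.items(), key=lambda kv: -kv[1][0]):
    print(f"region {set(region)} vs rest: transplants={eff:.0f}, worst access={eq:.2f}")
```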
119

OPTIMIZATION OF MAPPING ONTO A FLEXIBLE LOW-POWER ELECTRONIC FABRIC ARCHITECTURE

Baz, Mustafa 08 September 2008
A combinatorial problem that arises from a novel electronic fabric architecture designed for low-power devices such as cellular phones and palm computers is presented. We consider the problem of efficiently mapping a given data flow graph onto a particular implementation of the fabric architecture. We formulate mixed integer linear programs (MILP) and design a sliding partial MILP heuristic for this problem. We highlight the modeling and algorithmic aspects that are necessary to make the MILP formulation competitive. The sliding partial MILP heuristic is developed to generate mappings faster and to find mappings for benchmark instances that cannot be solved by the MILP formulation. We also present a method to tune software parameters using ideas from software testing and machine learning. The method is based on the key observation that for many classes of instances, the software shows improved performance if a few critical parameters have good values, although which parameters are critical depends on the class of instances. Our method attempts to find good parameter values using a relatively small number of optimization trials.
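The parameter-tuning idea in the last part of the abstract can be sketched generically: run a small budget of randomly configured trials, then screen each parameter independently for the value that performed best on average. The parameter names, values, and scoring function below are placeholders, and the dissertation's actual method, which draws on software testing and machine learning, is more elaborate.

```python
# Generic budget-limited parameter tuning sketch (placeholder knobs and scoring).
import random
from statistics import mean

PARAMS = {                                  # hypothetical solver/heuristic knobs
    "window_size": [2, 4, 8],
    "restart_limit": [10, 50, 100],
    "branching_rule": ["most_fractional", "pseudo_cost"],
}

def run_trial(config):
    """Stand-in for one optimization run; returns solution quality (higher is better)."""
    score = -abs(config["window_size"] - 4) - config["restart_limit"] / 100.0
    return score + (1.0 if config["branching_rule"] == "pseudo_cost" else 0.0)

random.seed(1)
trials = []
for _ in range(20):                         # small trial budget
    config = {p: random.choice(vals) for p, vals in PARAMS.items()}
    trials.append((config, run_trial(config)))

# One-factor screening: for each parameter, keep the value with the best mean score.
tuned = {}
for p, vals in PARAMS.items():
    best_val, best_avg = None, float("-inf")
    for v in vals:
        scores = [s for c, s in trials if c[p] == v]
        if scores and mean(scores) > best_avg:
            best_val, best_avg = v, mean(scores)
    tuned[p] = best_val

print("tuned configuration:", tuned)
```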
120

AN INTEGRATED MULTIPLE STATISTICAL TECHNIQUE FOR PREDICTING POST-SECONDARY EDUCATIONAL DEGREE OUTCOMES BASED PRIMARILY ON VARIABLES AVAILABLE IN THE 8TH GRADE

Nicholls, Gillian M. 28 January 2009
There is a class of complex problems that may be too complicated to solve by any single analytical technique. Such problems involve so many measurements of interconnected factors that analysis with a single technique may improve one aspect of the problem while achieving little or no overall improvement. This research examines the utility of modeling a complex problem with multiple statistical techniques integrated to analyze different types of data. The goal was to determine whether this integrated approach was feasible and provided significantly better results than a single statistical technique. An application in engineering education was chosen because of the availability and comprehensiveness of the NELS:88 longitudinal dataset. This dataset provided a large number of variables and 12,144 records of actual students progressing from 8th grade to their final educational outcomes 12 years later. The probability of earning a Science, Technology, Engineering, or Mathematics (STEM) degree is modeled using variables available in the 8th grade as well as standardized test scores. The variables include demographic, academic performance, and experiential measures. Extensive manipulation of the NELS:88 dataset was conducted to identify the student outcomes, prepare the set of covariates for modeling, and determine when each student's final outcome status occurred. The integrated models combined logistic regression, survival analysis, and Receiver Operating Characteristic (ROC) curve analysis to predict obtaining a STEM degree vs. other outcomes. The results of the integrated models were compared to actual outcomes and to the results of separate logistic regression models. Both sets of models provided good predictive accuracy. The feasibility of integrated models for complex problems was confirmed. The integrated approach provided less variability in incorrect STEM predictions, but the improvement was not statistically significant. The main contribution of this research is designing the integrated model approach and confirming its feasibility. Additional contributions include designing a process to create large multivariate logistic regression models; developing methods for extensive manipulation of a large dataset to adapt it for new analytical purposes; extending the application of logistic regression, survival analysis, and ROC curve analysis within educational research; and creating a formal definition for STEM that can be statistically verified.
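The logistic-regression and ROC pieces of such a pipeline can be sketched as below with synthetic stand-ins for 8th-grade variables (the NELS:88 data are not reproduced here, and the survival-analysis stage is omitted); the feature names, coefficients, and threshold rule are illustrative only.

```python
# Sketch of the logistic regression + ROC stage on synthetic stand-in data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
X = np.column_stack([
    rng.normal(50, 10, n),        # stand-in: 8th-grade math score
    rng.normal(50, 10, n),        # stand-in: 8th-grade science score
    rng.integers(0, 2, n),        # stand-in: took algebra in 8th grade
])
logit = -6.0 + 0.05 * X[:, 0] + 0.04 * X[:, 1] + 0.8 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # synthetic STEM-degree outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

probs = model.predict_proba(X_te)[:, 1]
fpr, tpr, thresholds = roc_curve(y_te, probs)
print("AUC:", round(roc_auc_score(y_te, probs), 3))
# Pick the threshold closest to the top-left corner of the ROC curve.
best = np.argmin(np.hypot(fpr, 1 - tpr))
print("suggested classification threshold:", round(thresholds[best], 3))
```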
