11

Nanowire Growth Process Modeling and Reliability Models for Nanodevices

Fathi Aghdam, Faranak January 2016 (has links)
Nowadays, nanotechnology is becoming an inescapable part of everyday life. The major barrier to its rapid growth is our inability to produce nanoscale materials in a reliable and cost-effective way. In fact, the current yield of nano-devices is very low (around 10%), which makes fabrication of nano-devices very expensive and uncertain. To overcome this challenge, the first and most important step is to investigate how to control variation in nano-structure synthesis. The main directions of reliability research in nanotechnology can be classified from either a material perspective or a device perspective. The first direction focuses on restructuring materials and/or optimizing process conditions at the nano-level (nano-materials). The other direction is linked to nano-devices and includes the creation of nano-electronic and electro-mechanical systems with nano-level architectures, taking into account the reliability of future products. In this dissertation, we have investigated two topics, one on nano-materials and one on nano-devices. In the first research work, we have studied the optimization of one of the most important nanowire growth processes using statistical methods. Research on nanowire growth with patterned arrays of catalyst has shown that the wire-to-wire spacing is an important factor affecting the quality of the resulting nanowires. To improve the process yield and the length uniformity of fabricated nanowires, it is important to reduce the resource competition between nanowires during the growth process. We have proposed a physical-statistical nanowire-interaction model that accounts for the shadowing effect and the shared substrate diffusion area to determine the optimal pitch that ensures minimum competition between nanowires. A sigmoid function is used in the model, and the least squares method is used to estimate the model parameters. The estimated model is then used to determine the optimal spatial arrangement of catalyst arrays. This work is an early attempt to use a physical-statistical modeling approach to study selective nanowire growth for the improvement of process yield. In the second research work, the reliability of nano-dielectrics is investigated. As electronic devices get smaller, reliability issues pose new challenges because the underlying physics of failure (i.e., failure mechanisms and modes) is unknown. This necessitates new reliability analysis approaches for nano-scale devices. One of the most important nano-devices is the transistor, which is subject to various failure mechanisms. Dielectric breakdown is known to be the most critical one and has become a major barrier to reliable circuit design at the nano-scale. The need for aggressive downscaling of transistors has made dielectric films extremely thin, and this has led in recent years to the adoption of high-permittivity (high-k) dielectrics as an alternative to the widely used SiO₂. Since most time-dependent dielectric breakdown test data on bilayer stacks show significant deviations from a Weibull trend, we have proposed two new approaches to modeling the time to breakdown of bi-layer high-k dielectrics. In the first approach, we have used a marked space-time self-exciting point process to model the defect generation rate.
A simulation algorithm is used to generate defects within the dielectric space, and an optimization algorithm is employed to minimize the Kullback-Leibler divergence between the empirical distribution obtained from the real data and the distribution based on the simulated data, in order to find the best parameter values and to predict the total time to failure. The novelty of the presented approach lies in using a conditional intensity for trap generation in the dielectric that is a function of the time, location, and size of previous defects. In the second approach, a k-out-of-n system framework is proposed to estimate the total failure time after the occurrence of more than one soft breakdown.
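
The parameter-fitting step described above can be illustrated with a small, hedged sketch: the code below fits a toy breakdown-time model by minimizing the Kullback-Leibler divergence between the empirical histogram of "observed" times and a histogram of simulated times. The Weibull stand-in simulator, the constants, and the function names are illustrative assumptions; the dissertation's actual simulator is a marked space-time self-exciting point process.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)

    # Hypothetical "observed" breakdown times (a stand-in for real TDDB test data).
    observed = 120.0 * rng.weibull(1.4, size=500)

    def simulate_breakdown_times(params, n=500):
        # Toy stand-in for the defect-generation simulator: a two-parameter
        # Weibull draw instead of the marked space-time self-exciting process.
        shape, scale = params
        return scale * rng.weibull(shape, size=n)

    def kl_divergence(p, q, eps=1e-9):
        p, q = p + eps, q + eps
        return float(np.sum(p * np.log(p / q)))

    bins = np.linspace(0.0, observed.max() * 1.5, 30)

    def objective(params):
        if min(params) <= 0:
            return np.inf
        sim = simulate_breakdown_times(params)
        p_emp, _ = np.histogram(observed, bins=bins)
        q_sim, _ = np.histogram(sim, bins=bins)
        if q_sim.sum() == 0:
            return np.inf
        return kl_divergence(p_emp / p_emp.sum(), q_sim / q_sim.sum())

    # Nelder-Mead copes better with this noisy, simulation-based objective.
    result = minimize(objective, x0=[1.0, 100.0], method="Nelder-Mead")
    print("fitted (shape, scale):", result.x)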
12

Answers to EMS Queries About Dynamic Deployment: Fractile Performance, Cost, and Management

Aljalahema, Rashid Shaheen January 2015 (has links)
Dynamic deployment is an Emergency Medical Services (EMS) ambulance management strategy in which coverage of 911 call demand is maximized continuously through time. Unlike static deployment, where dispatched ambulances leave a coverage gap until they return to their home base after service, dynamic deployment redeploys idle ambulances to different locations whenever doing so increases demand coverage. The purpose of this dissertation was to study dynamic deployment as a viable, beneficial, and cost-effective methodology for managing EMS ambulances and crews. The literature, while rich in studies on static deployment, was lacking when it came to ambulance management strategies like dynamic deployment. Through a discrete-event simulation model, hypothetical EMS systems were simulated under dynamic and static deployment with different demand patterns, demand loads, and system sizes. Dynamic deployment was found to be as good as, and often better than, static deployment on emergency response metrics. When EMS systems want to meet a certain response goal, dynamic deployment may enable them to achieve that performance with fewer vehicles than static deployment. While savings in the number of vehicles translate to substantial savings in crew wages and vehicle purchasing costs, dynamic deployment may increase operating costs per vehicle because of the extra mileage involved in redeployments. Many EMS systems with average vehicle utilizations of 40% to 50% may find, however, that dynamic deployment is both cost-effective and beneficial in improving response performance. Different redeployment strategies were studied to address the added travel costs of dynamic deployment, and a min-sum assignment model was found to decrease redeployment travel the most without impacting response performance. Finally, a procedure and a mathematical model were developed to route vehicles intelligently so that demand coverage is maximized throughout the redeployment process.
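
The min-sum assignment step mentioned above can be sketched in a few lines; the travel-time matrix and the use of SciPy's assignment solver are illustrative assumptions, not the dissertation's actual implementation.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Hypothetical travel-time matrix (minutes): rows are idle ambulances,
    # columns are target posts chosen by the coverage model.
    travel = np.array([
        [4.0, 9.0, 12.0],
        [7.0, 3.0, 10.0],
        [11.0, 8.0, 5.0],
    ])

    # Min-sum assignment: move each idle vehicle to one target post while
    # minimizing the total redeployment travel time.
    rows, cols = linear_sum_assignment(travel)
    for amb, post in zip(rows, cols):
        print(f"ambulance {amb} -> post {post} ({travel[amb, post]:.1f} min)")
    print("total redeployment travel:", travel[rows, cols].sum(), "min")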
13

Real Time Performance Observation and Measurement in a Connected Vehicle Environment

Khoshmagham, Shayan January 2016 (has links)
Performance monitoring systems have experienced remarkable development in the past few decades. In today's world, an important issue for almost every industry is to find a way to appropriately evaluate the performance of the provided service. Having a reliable performance monitoring system is necessary, and researchers have developed assessment models and tools to deal with this concern. There are many approaches to the development of performance measurement and observation systems. The internet-of-things (IoT) creates a broad range of opportunities to monitor systems by using the information from connected people and devices. The IoT is providing many new sources of data that need to be managed, and one of the key issues that arises in any data management system is confidentiality and privacy. Significant progress has been made in the development and deployment of performance monitoring systems in the signalized traffic environment. The current monitoring and data collection system relies mostly on infrastructure-based sensors, e.g., loop detectors, video surveillance, cell phone data, vehicle signatures, or radar. High installation and maintenance costs and a high rate of failure are the two major drawbacks of the existing system. Emerging technologies, i.e., connected vehicles (CV), will provide a new, high-fidelity approach to better performance monitoring and traffic control. This dissertation investigates a real-time performance observation system in a multi-modal connected vehicle environment. A trajectory awareness component receives and processes the connected vehicle data carried in the Basic Safety Message (BSM). A geo-fence component ensures that the infrastructure system (for example, a roadside unit (RSU)) receives BSMs only from connected vehicles on the roadway and within communication range. The processed data can be used as input to a real-time performance observer component. Three major classes of performance metrics, including mobility, signal, and CV-system measures, are investigated. Multi-modal dashboards that utilize radar diagrams are introduced to visualize large data sets in an easy-to-understand way. A mechanism to maintain the anonymity of vehicle information and ensure privacy was also developed. The proposed algorithm uses partial vehicle trajectories to estimate travel time average and variability on a link basis. It is shown that the model is not very sensitive to the market penetration rate of connected vehicles, a desirable feature because the market penetration rate of connected vehicles will not be very high in the near future. The system architecture for connected-vehicle-based performance observation applications was developed to be applicable to both a simulation environment and a real-world traffic system. Both hardware-in-the-loop (HIL) and software-in-the-loop (SIL) simulation environments are developed and calibrated to mimic the real world. Comprehensive testing and assessment of the proposed models and algorithms are conducted in simulation as well as field test networks. A web application is also developed as part of a central system component to generate reports and visualizations of the data collection experiments.
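
A hedged sketch of link travel-time estimation from partial trajectories is given below; the BsmRecord fields, the 400 m link length, and the extrapolation rule are assumptions made for illustration, not the dissertation's actual algorithm.

    from dataclasses import dataclass
    from statistics import mean, pstdev

    @dataclass
    class BsmRecord:
        vehicle_id: str
        timestamp: float   # seconds
        position: float    # meters from the upstream end of the link
        speed: float       # m/s

    LINK_LENGTH = 400.0    # meters; hypothetical geo-fenced link

    def in_geofence(rec: BsmRecord) -> bool:
        # Keep only BSMs generated on the link (and thus within RSU range).
        return 0.0 <= rec.position <= LINK_LENGTH

    def link_travel_time(bsms):
        # Estimate link travel time from partial trajectories by scaling each
        # connected vehicle's observed space-mean speed to the full link.
        by_vehicle = {}
        for rec in filter(in_geofence, bsms):
            by_vehicle.setdefault(rec.vehicle_id, []).append(rec)
        estimates = []
        for recs in by_vehicle.values():
            recs.sort(key=lambda r: r.timestamp)
            dist = recs[-1].position - recs[0].position
            dt = recs[-1].timestamp - recs[0].timestamp
            if dist > 0 and dt > 0:
                estimates.append(LINK_LENGTH * dt / dist)
        if not estimates:
            return None, None
        return mean(estimates), (pstdev(estimates) if len(estimates) > 1 else 0.0)

    # Example: two partial trajectories on the link.
    bsms = [
        BsmRecord("veh1", 0.0, 50.0, 14.0), BsmRecord("veh1", 10.0, 190.0, 14.0),
        BsmRecord("veh2", 3.0, 20.0, 11.0), BsmRecord("veh2", 18.0, 200.0, 12.0),
    ]
    print(link_travel_time(bsms))  # (average travel time [s], variability)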
14

Robust and Survivable Network Design Considering Uncertain Node and Link Failures

Sadeghi, Elham January 2016 (has links)
Network design is a planning process of placing system components to provide service or meet certain needs in an economical way. It has strong links to real application areas, such as transportation networks, communication networks, supply chains, power grids, and water distribution systems. In practice, these infrastructures are very vulnerable to failures of system components. Therefore, the design of such infrastructure networks should be robust and survivable to failures caused by many factors, for example, natural disasters, intentional attacks, system limits, etc. In this dissertation, we first summarize the background and motivations of our research topic on network design problems. Different from the literature on network design, we consider both uncertain node and link failures during the network design process. The first part of our research is to design a survivable network with mixed connectivity requirements, or the (k,l)-connectivity. The designed network remains connected after failures of any k vertices and (l-1) edges or failures of any (k-1) vertices and l edges. After formally proving its relationships to edge- and vertex-disjoint paths, we present two integer programming (IP) formulations, valid inequalities to strengthen the IP formulations, and a cutting plane algorithm. Numerical experiments are performed on randomly generated graphs to compare these approaches. Special cases of this problem include: when k=0, l=1, the problem becomes the well-known minimum spanning tree problem; when k=0, l ≥ 1, the problem is to find a minimum-cost l-edge-connected spanning subgraph; and when k ≥ 2, l=0, the problem is to find a minimum-cost k-vertex-connected spanning subgraph. As a generalization of the k-minimum spanning tree and λ-edge-connected spanning subgraph problems for network design, we consider the minimum-cost λ-edge-connected k-subgraph problem, or the (k, λ)-subgraph problem, which is to find a minimum-cost λ-edge-connected subgraph of a graph with at least k vertices. This problem can be considered as designing a k-minimum spanning tree with higher connectivity requirements. We also propose several IP formulations for exactly solving the (k, λ)-subgraph problem, based on graph properties such as cutset requirements for a division of the graph and paths between any two vertices. In addition, we study the properties of (k,2)-subgraphs, such as connectivity, bridgelessness, and strong orientation properties. Based on these properties, we propose several stronger and more compact IP formulations for solving the (k,2)-subgraph problem, which is a direct generalization of the k-minimum spanning tree problem. Serving as a virtual backbone for wireless ad hoc networks, the connected dominating set problem has been widely studied. We design a robust and survivable connected dominating set to serve as a virtual backbone of a larger graph for ad hoc networks. More specifically, we study the (k,l)-connected d-dominating set problem. Given a graph G=(V,E), a subset D ⊆ V is a (k,l)-connected d-dominating set if the subgraph induced by D has mixed connectivity at least (k,l) and every vertex outside of D has at least d neighbors in D. This type of virtual backbone is survivable and also robust for sending messages under a certain number of both node and link failures. We study the properties of such dominating sets as well as IP formulations, and we design a cutting plane algorithm to solve the problem.
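
To make the cutting-plane idea concrete, the sketch below shows one possible separation step for the l-edge-connectivity case (k=0): given candidate edge values, a minimum-cut computation searches for a violated cutset inequality to add to the IP. The toy graph and the use of networkx are illustrative assumptions and not the formulations developed in the dissertation.

    import networkx as nx

    def find_violated_cut(graph, x_values, l_req=2):
        # Separation step of a cutting-plane loop for l-edge-connectivity:
        # given edge values x_e, look for a cut whose total x-value is below
        # l_req and return its crossing edges (a violated cutset inequality),
        # or None if no such cut exists.
        cap = nx.Graph()
        cap.add_nodes_from(graph.nodes)
        for (u, v), x in x_values.items():
            cap.add_edge(u, v, capacity=x)
        nodes = list(cap.nodes)
        root = nodes[0]
        for t in nodes[1:]:
            cut_value, (side_s, _) = nx.minimum_cut(cap, root, t)
            if cut_value < l_req - 1e-6:
                return [(u, v) for u, v in graph.edges
                        if (u in side_s) != (v in side_s)]
        return None

    # Toy usage: a 4-cycle with every edge selected is 2-edge-connected.
    G = nx.cycle_graph(4)
    x = {e: 1.0 for e in G.edges}
    print(find_violated_cut(G, x, l_req=2))  # None: no violated cut
    x[(0, 1)] = 0.0                          # drop one edge -> a cut of value 1
    print(find_violated_cut(G, x, l_req=2))  # the violated cutset is returned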
15

Study on Preventive Replacement and Reordering of Spare Parts Experiencing On-Shelf Deterioration

Luo, Hongwei January 2016 (has links)
High availability of a system can be achieved by performing timely replacement of degraded or failed components. To this end, spare parts are expected to be available and reordered when needed. It is not uncommon that spare parts deteriorate on the shelf because of their physical characteristics and/or imperfect storage and transportation conditions. Such phenomena affect the reliability of spare parts and the availability of the system. In this dissertation, we first focus on a system with a single critical operating component and one unit of deteriorating spare part. For such a system, making a joint decision on component replacement and reordering time is of vital importance to ensure system availability and cost efficiency. In particular, we study both failure-switching and preventive-switching strategies, where cumulative damage is considered for the spare part switching from its in-stock to operating condition. To determine the corresponding optimal component replacement and reordering policies, the long-run average costs are minimized under a fixed lead time. The work is expected to benefit industry sectors such as mining, oil and gas, and defense, where the operation of systems heavily relies on capital-intensive components. To advance the research a step further, we have relaxed the single-operating-component assumption and considered a more complex system with multiple components. In addition, we have eliminated the limitations on the order quantity and inventory capacity. To capture on-shelf part deterioration, a two-phase deteriorating process is adopted, in which the first phase runs from the spare's arrival to the identification of its degradation, and the second phase is the period thereafter but before the unit fails. Based on the parts' degradation states, we introduce two different replacement strategies for spare consumption, i.e., the Degraded-First (DF) strategy and the New-First (NF) strategy. Because of the random nature of component failures and on-shelf deterioration, stochastic cost models for both DF and NF strategies are derived. With the objective of cost reduction through coordinating the inventory and maintenance policies, an enumeration algorithm with stochastic dynamic programming is employed to find the joint optimal solution over a finite time horizon. Numerical experiments are conducted to study the impacts of these two strategies on operating costs, and an analysis of the key parameters that affect the optimal solutions is also carried out in the numerical study. The joint policies of our interest cover both replacement and reordering of spare parts, which are more realistic and complex than policies handling maintenance and spare parts inventory control separately. In particular: when the maintenance planning and inventory control strategy are jointly optimized, we consider spare parts inventory experiencing on-shelf deterioration, which has not been well studied in the related literature. When dealing with a system carrying only one spare part, the impact of on-shelf deterioration of the spare part on its remaining operational lifetime is explicitly modeled and described by the Cumulative Exposure (CE) model. For the extended model for a multi-component system, we make an early attempt to adopt a two-phase process to take into account on-shelf degradation of parts. The issues in the degradation-level-based ordering of spare parts in the multi-component system are also discussed.
Several integrated cost models are developed in both systems and are used to determine the optimal replacement and reordering decisions with the objective of minimizing the expected long-run cost rate over an infinite/finite horizon.
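
As a rough, hedged illustration of the joint replacement-and-reordering trade-off for the single-component, single-spare case, the Monte Carlo sketch below evaluates the long-run cost rate of a failure-switching policy as a function of the reorder time; all distributions, cost constants, and the linear shelf-damage rule are assumptions for illustration and not the dissertation's cost models.

    import numpy as np

    rng = np.random.default_rng(1)

    LEAD_TIME = 5.0       # fixed reorder lead time
    ORDER_COST = 100.0
    HOLD_RATE = 1.0       # holding cost per unit of shelf time
    DOWNTIME_RATE = 50.0  # penalty per unit of system downtime
    SHELF_DAMAGE = 0.05   # assumed fraction of spare life lost per unit shelf time

    def cost_rate(reorder_time, n=20000):
        # Renewal-reward estimate of the long-run cost rate under a
        # failure-switching policy: order at `reorder_time` (measured from the
        # start of the cycle) and switch to the spare when the component fails.
        life = rng.exponential(10.0, n)        # operating component lifetimes
        spare_life = rng.exponential(10.0, n)  # spare lifetimes when fresh
        arrival = reorder_time + LEAD_TIME
        shelf = np.maximum(life - arrival, 0.0)     # spare's time on the shelf
        downtime = np.maximum(arrival - life, 0.0)  # waiting for the spare
        # Cumulative shelf damage shortens the spare's remaining operating life.
        spare_remaining = spare_life * np.maximum(1.0 - SHELF_DAMAGE * shelf, 0.0)
        cycle_len = life + downtime + spare_remaining
        cost = ORDER_COST + HOLD_RATE * shelf + DOWNTIME_RATE * downtime
        return cost.mean() / cycle_len.mean()

    best = min(np.arange(0.0, 20.0, 0.5), key=cost_rate)
    print("approximately optimal reorder time:", best)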
16

Game-Theoretic Contract Models for Equipment Leasing and Maintenance Service Outsourcing

Hamidi, Maryam January 2016 (has links)
There is a major trend of manufacturers selling services to customers instead of selling products alone. These services can be provided through leasing, warranty, or maintenance outsourcing. In this dissertation, we have studied leasing and maintenance outsourcing services from the perspectives of reliability-based maintenance, game-theoretic decision making, and inventory and supply chain management. We have studied how different interactions and relationships between the manufacturer and customer in service contracting affect the decisions they make and the profits they gain. The methods used to tackle the related decision-making processes are stochastic modeling, non-convex optimization, game-theoretic frameworks, and simulation. For equipment leasing, two non-cooperative game-theoretic models and a cooperative model have been developed to describe the relationships between the manufacturer (lessor) and the customer (lessee). Through the lease contracts, the lessor decides on the maintenance policy of the leased equipment, and the lessee decides on the lease period and usage rate. In the non-cooperative simultaneous-move scenario, the lessee and the lessor act simultaneously and independently to make their decisions. In the leader-follower non-cooperative contract, the lessor is the leader who specifies the maintenance policy first, and the lessee, as the follower, decides on the lease period and usage rate accordingly. We have then determined the total maximum profit and shown that the Nash and Stackelberg equilibria differ from the total-maximum solution. As a result, the players can increase their total profit by cooperation. We have implemented the cooperative solution as an equilibrium through a nonlinear transfer-payment contract. Our results illustrate that cooperation can be regarded as a value-added strategy in establishing such lease contracts. Moreover, our numerical results show that although cooperation always increases the total profit of the players, the magnitude of the increase is case specific. When the lease price is low or the revenue is high, the profits in the non-cooperative contracts are close to the cooperative alternative, while cooperation may increase the total profit significantly in other cases. For maintenance outsourcing, we have studied different bargaining scenarios in determining the contract terms. First, we have considered the Nash bargaining solution to compute the bargaining profit of the players. Next, we have considered the case where players pose threats against each other in order to increase their own bargaining positions, and we have determined the optimal threat strategy for each player. Our results show that although such threatening decreases the efficiency of the contract, it can dramatically increase the profit of the player with the higher bargaining position. We have finally provided a solution to the problem of how the service agent and customer can cooperate and negotiate on the price, and we have determined the discounted price that results from the negotiation. Indeed, the discounted price induces the customer to choose the total-maximum maintenance policy. Our numerical examples illustrate the feasibility of using such a price-discount contract in maintenance service outsourcing. Moreover, one can see that both the customer and the agent can benefit from this price-discount contract.
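
The difference between the simultaneous-move (Nash) and leader-follower (Stackelberg) outcomes can be illustrated with a small numerical sketch; the payoff functions and parameter values below are invented for illustration and are not the lease-contract models of the dissertation.

    import numpy as np

    # Hypothetical payoffs: m = lessor's maintenance effort, u = lessee's usage rate.
    PRICE, REVENUE, DOWNTIME, MAINT_COST = 4.0, 10.0, 3.0, 1.5

    def lessee_profit(m, u):
        # Usage revenue minus lease payment minus downtime loss (mitigated by m).
        return REVENUE * u - PRICE * u - DOWNTIME * u**2 / (1.0 + m)

    def lessor_profit(m, u):
        return PRICE * u - MAINT_COST * m**2

    GRID = np.linspace(0.0, 5.0, 501)

    def lessee_best_response(m):
        return GRID[np.argmax(lessee_profit(m, GRID))]

    def lessor_best_response(u):
        return GRID[np.argmax(lessor_profit(GRID, u))]

    # Stackelberg: the lessor (leader) anticipates the lessee's best response.
    m_st = max(GRID, key=lambda m: lessor_profit(m, lessee_best_response(m)))
    u_st = lessee_best_response(m_st)

    # Nash: iterate simultaneous best responses to a fixed point.
    m_n, u_n = 1.0, 1.0
    for _ in range(100):
        m_n, u_n = lessor_best_response(u_n), lessee_best_response(m_n)

    print("Stackelberg (m, u):", (round(m_st, 2), round(u_st, 2)))
    print("Nash        (m, u):", (round(m_n, 2), round(u_n, 2)))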
17

Using Network Science to Estimate the Cost of Architectural Growth

Dabkowski, Matthew Francis January 2016 (has links)
Between 1997 and 2009, 47 major defense acquisition programs experienced cost overruns of at least 15% or 30% over their current or original baseline estimates, respectively (GAO, 2011, p. 1). Known formally as a Nunn-McCurdy breach (GAO, 2011, p. 1), the reasons for this excessive growth are myriad, although nearly 70% of the cases identified engineering and design issues as a contributing factor (GAO, 2011, p. 5). Accordingly, Congress legislatively acknowledged the need for change in 2009 with the passage of the Weapon Systems Acquisition Reform Act (WSARA, 2009), which mandated additional rigor and accountability in early life cycle (or Pre-Milestone A) cost estimation. Consistent with this effort, the Department of Defense has recently required more system specification earlier in the life cycle, notably the submission of detailed architectural models, and this has created opportunities for new approaches. In this dissertation, I describe my effort to transform one such model (or view), namely the SV-3, into computational knowledge that can be leveraged in Pre-Milestone A cost estimation and risk analysis. The principal contribution of my work is Algorithm 3, a novel, network science-based method for estimating the cost of unforeseen architectural growth in defense programs. Specifically, using number theory, network science, simulation, and statistical analysis, I simultaneously find the best-fitting probability mass functions and strengths of preferential attachment for an incoming subsystem's interfaces, and I apply blockmodeling to find the SV-3's globally optimal macrostructure. Leveraging these inputs, I use Monte Carlo simulation and the Constructive Systems Engineering Cost Model to estimate the systems engineering effort required to connect a new subsystem to the existing architecture. This effort is chronicled by the five articles given in Appendices A through C, and it is summarized in Chapter 2. In addition to Algorithm 3, there are several important, tangential outcomes of this work, including: an explicit connection between Model Based Systems Engineering and parametric cost modeling, a general procedure for organizations to improve the measurement reliability of their early life cycle cost estimates, and several exact and heuristic methods for the blockmodeling of one-, two-, and mixed-mode networks. More generally, this research highlights the benefits of applying network science to systems engineering, and it reinforces the value of viewing architectural models as computational objects.
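
A hedged sketch of the Monte Carlo step is given below: a new subsystem's interfaces attach to existing subsystems with degree-based preferential attachment, and a parametric effort curve translates the resulting interface count into systems engineering effort. The adjacency matrix, attachment strength, and effort coefficients are placeholders, not values from Algorithm 3 or the Constructive Systems Engineering Cost Model calibration.

    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical SV-3-like interface (adjacency) matrix of the existing architecture.
    A = np.array([
        [0, 1, 1, 0, 0],
        [1, 0, 1, 1, 0],
        [1, 1, 0, 0, 1],
        [0, 1, 0, 0, 1],
        [0, 0, 1, 1, 0],
    ])

    def simulate_growth_effort(n_new_interfaces=3, strength=1.0, n_reps=10000):
        # Monte Carlo estimate of the effort to connect one new subsystem whose
        # interfaces attach preferentially to already well-connected subsystems.
        degree = A.sum(axis=1).astype(float)
        weights = degree ** strength
        probs = weights / weights.sum()
        efforts = np.empty(n_reps)
        for r in range(n_reps):
            targets = rng.choice(len(degree), size=n_new_interfaces,
                                 replace=False, p=probs)
            # Assumed diseconomy-of-scale effort curve, effort ~ a * size^b,
            # with the interface count as the size driver (a, b are placeholders).
            size = n_new_interfaces + degree[targets].sum()
            efforts[r] = 0.5 * size ** 1.06
        return efforts.mean(), np.percentile(efforts, [10, 90])

    mean_effort, (p10, p90) = simulate_growth_effort()
    print(f"expected effort {mean_effort:.1f}, 80% band [{p10:.1f}, {p90:.1f}]")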
18

Algorithmic Developments in Monte Carlo Sampling-Based Methods for Stochastic Programming

Pierre-Louis, Péguy January 2012 (has links)
Monte Carlo sampling-based methods are frequently used in stochastic programming when an exact solution is not possible. In this dissertation, we develop two sets of Monte Carlo sampling-based algorithms to solve classes of two-stage stochastic programs. These algorithms follow a sequential framework in which a candidate solution is generated and evaluated at each step. If the solution is of the desired quality, then the algorithm stops and outputs the candidate solution along with an approximate (1 - α) confidence interval on its optimality gap. The first set of algorithms, which we refer to as fixed-width sequential sampling methods, generate a candidate solution by solving a sampling approximation of the original problem. Using an independent sample, a confidence interval is built on the optimality gap of the candidate solution. The procedures stop when the confidence interval width plus an inflation factor falls below a pre-specified tolerance ε. We present two variants: the fully sequential procedures use deterministic, non-decreasing sample size schedules, whereas in the other variant, the sample size at the next iteration is determined using current statistical estimates. We establish the desired asymptotic properties and present computational results. In the second set of sequential algorithms, we combine deterministically valid and sampling-based bounds. These algorithms, labeled sampling-based sequential approximation methods, take advantage of certain characteristics of the models, such as convexity, to generate candidate solutions and deterministic lower bounds through Jensen's inequality. A point estimate of the optimality gap is calculated by generating an upper bound through sampling. The procedure stops when the point estimate of the optimality gap falls below a fraction of its sample standard deviation. We show asymptotically that this algorithm finds a solution with the desired quality tolerance. We present variance reduction techniques and show their effectiveness through an empirical study.
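
A minimal, hedged sketch of the fixed-width sequential idea is shown below for a toy newsvendor problem: a candidate is produced by a sampling approximation, an independent sample yields a one-sided confidence bound on its optimality gap, and the sample size grows until the bound plus an inflation term falls below the tolerance. The newsvendor instance, the 1/n inflation term, and the doubling schedule are illustrative assumptions rather than the procedures analyzed in the dissertation.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    COST, PRICE = 1.0, 2.5            # toy newsvendor parameters (assumed)

    def loss(x, d):
        return COST * x - PRICE * np.minimum(x, d)

    def saa_solution(demand):
        # The sample-average newsvendor is solved by a demand quantile.
        return np.quantile(demand, 1.0 - COST / PRICE)

    def gap_confidence_bound(x_hat, m, alpha=0.05):
        # One-sided (1 - alpha) bound on the optimality gap of x_hat, using an
        # independent evaluation sample of size m.
        d = rng.exponential(100.0, m)
        x_star = saa_solution(d)                  # optimal for this sample
        diffs = loss(x_hat, d) - loss(x_star, d)  # pointwise gap terms
        g = diffs.mean()
        half_width = stats.t.ppf(1 - alpha, m - 1) * diffs.std(ddof=1) / np.sqrt(m)
        return g, half_width

    # Sequential loop: grow the sample until the gap bound meets the tolerance.
    epsilon, n = 5.0, 100
    while True:
        x_hat = saa_solution(rng.exponential(100.0, n))  # candidate from SAA
        g, hw = gap_confidence_bound(x_hat, m=2 * n)
        if g + hw + 1.0 / n <= epsilon:                  # 1/n as the inflation factor
            break
        n *= 2
    print(f"candidate order quantity {x_hat:.1f}, gap bound {g + hw:.2f}, n = {n}")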
19

Multistage Stochastic Programming and Its Applications in Energy Systems Modeling and Optimization

Golari, Mehdi January 2015 (has links)
Electric energy is crucial to almost every aspect of people's lives. Modern electric power systems face several challenges related to efficiency, economics, sustainability, and reliability. Increases in electrical energy demand, distributed generation, integration of uncertain renewable energy resources, and demand-side management are among the main underlying reasons for this growing complexity. Additionally, the elements of power systems are often vulnerable to failures for many reasons, such as system limits, weak conditions, unexpected events, hidden failures, human errors, terrorist attacks, and natural disasters. One common factor complicating the operation of electrical power systems is the underlying uncertainty in demands, supplies, and failures of system components. Stochastic programming provides a mathematical framework for decision making under uncertainty; it enables a decision maker to incorporate some knowledge of the intrinsic uncertainty into the decision-making process. In this dissertation, we focus on the application of two-stage and multistage stochastic programming approaches to electric energy systems modeling and optimization. In particular, we develop models and algorithms addressing the sustainability and reliability issues in power systems. First, we consider how to improve the reliability of power systems under severe failures or contingencies prone to cascading blackouts through so-called islanding operations. We present a two-stage stochastic mixed-integer model to find optimal islanding operations as a powerful preventive action against cascading failures in case of extreme contingencies. Further, we study the properties of this problem and propose efficient solution methods for large-scale power systems. We present numerical results showing the effectiveness of the model and investigate the performance of the solution methods. Next, we address the sustainability issue by considering the integration of renewable energy resources into the production planning of energy-intensive manufacturing industries. Recently, a growing number of manufacturing companies have been considering renewable energies to meet their energy requirements, both to move towards green manufacturing and to decrease their energy costs. However, the intermittent nature of renewable energies imposes several difficulties in long-term planning of how to efficiently exploit renewables. In this study, we propose a scheme for manufacturing companies to use onsite and grid renewable energies, provided by their own investments and by energy utilities, as well as conventional grid energy to satisfy their energy requirements. We propose a multistage stochastic programming model and study an efficient solution method for this problem. We examine the proposed framework on a test case simulated based on a real-world semiconductor company. Moreover, we evaluate the long-term profitability of such a scheme via the so-called value of multistage stochastic programming.
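
For readers unfamiliar with the two-stage setting, the sketch below solves the extensive form of a toy capacity problem (choose onsite capacity now, buy grid energy as recourse in each demand scenario) and compares it with the mean-value plan; all numbers are invented for illustration and are unrelated to the dissertation's islanding or manufacturing models.

    import numpy as np
    from scipy.optimize import linprog

    scenarios = np.array([80.0, 120.0, 160.0])  # demand realizations (assumed)
    probs = np.array([0.3, 0.5, 0.2])
    c_onsite, c_grid = 1.0, 2.5                 # per-unit costs (assumed)

    # Extensive form over variables [x, y_1, y_2, y_3]:
    # min c_onsite*x + sum_s p_s*c_grid*y_s  s.t.  y_s >= d_s - x, all vars >= 0.
    n_s = len(scenarios)
    cost = np.concatenate(([c_onsite], c_grid * probs))
    A_ub = np.hstack((-np.ones((n_s, 1)), -np.eye(n_s)))   # -x - y_s <= -d_s
    b_ub = -scenarios
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n_s))
    print("stochastic capacity:", round(res.x[0], 1), "expected cost:", round(res.fun, 1))

    # Mean-value plan for comparison: build for the average demand, then pay
    # recourse scenario by scenario; the cost difference is the value of the
    # stochastic solution.
    x_mv = probs @ scenarios
    cost_mv = c_onsite * x_mv + c_grid * probs @ np.maximum(scenarios - x_mv, 0.0)
    print("value of the stochastic solution:", round(cost_mv - res.fun, 1))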
20

Development and Optimization of Low Energy Orbits for Advancing Exploration of the Solar System

Kidd, John Nocon January 2015 (has links)
The architecture of a system that enables the cost-effective exploration of the solar system is proposed. Such a system makes use of the natural dynamics represented in the Circular Restricted Three-Body Problem (CRTBP). Additionally, a case study of the first missions that apply the lessons from the CRTBP is examined. The guiding principle of the proposed system is to apply lessons learned from the Apollo program for deep space exploration, from the International Space Station for long-term habitation in space, and from modular space vehicle design. From this preliminary system design, a number of missions are outlined. These missions form the basis of an evolvable roadmap to fully develop the infrastructure required for long-term sustained manned exploration of the solar system. This roadmap provides a clear and concise pathway from current exploration capabilities to the current long-term goal of sustained manned exploration of Mars. The primary method employed in designing the staging orbits is the "Single Lunar Swingby", and each of the component segment trajectory design processes is explored in detail. Additionally, the method of combining these segments in a larger end-to-end optimizer environment within the General Mission Analysis Tool (GMAT), called the Multiple Shooting Method, is introduced. In particular, a specific Baseline Parking Orbit (BPO) is chosen and analyzed. This BPO serves as the parking home orbit of any assets not currently in use; a BPO with amplitudes of (14000, 28000, 6000) kilometers is selected. The BPO has full coverage of both the Earth and the Moon, and orbit station-keeping may be conducted at a cost of less than 1 m/s over a 14-year period. This provides a cost-effective platform from which more advanced exploration activities, both robotic and manned, can be based. One of the key advanced exploration activities considered is manned exploration of Mars, one of the current long-term goals of NASA. Trajectories from the BPO to Mars and back to Earth are explored and show an approximately 50% decrease in the ΔV required of the spacecraft.
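
The natural dynamics mentioned above are governed by the CRTBP equations of motion in the rotating frame; the sketch below integrates them numerically. The Earth-Moon mass parameter is approximate, and the initial state is an arbitrary illustrative value, not one of the dissertation's designed orbits.

    import numpy as np
    from scipy.integrate import solve_ivp

    MU = 0.01215  # approximate Earth-Moon mass parameter

    def crtbp_rhs(t, s, mu=MU):
        # CRTBP equations of motion in the rotating, nondimensional frame;
        # state s = [x, y, z, vx, vy, vz].
        x, y, z, vx, vy, vz = s
        r1 = np.sqrt((x + mu) ** 2 + y ** 2 + z ** 2)      # distance to Earth
        r2 = np.sqrt((x - 1 + mu) ** 2 + y ** 2 + z ** 2)  # distance to Moon
        ax = x + 2 * vy - (1 - mu) * (x + mu) / r1**3 - mu * (x - 1 + mu) / r2**3
        ay = y - 2 * vx - (1 - mu) * y / r1**3 - mu * y / r2**3
        az = -(1 - mu) * z / r1**3 - mu * z / r2**3
        return [vx, vy, vz, ax, ay, az]

    # Propagate an illustrative state near the Moon for a few nondimensional time units.
    s0 = [1.12, 0.0, 0.02, 0.0, 0.18, 0.0]
    sol = solve_ivp(crtbp_rhs, (0.0, 12.0), s0, rtol=1e-10, atol=1e-12)
    print("final rotating-frame position:", sol.y[:3, -1])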
