About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Analyzing Complex Systems Using an Integrated Multi-scale Systems Approach

Ravitz, Alan D. 30 August 2016 (has links)
Many industries, such as healthcare, transportation, education, and other fields that involve large corporations and institutions, are complex systems composed of many diverse interacting components. Frequently, to improve performance within these industries, to move into new markets, or to expand capability or capacity, decision-makers face opportunities or mandates to implement innovations (new technology, processes, and services). Successful implementation of these innovations requires seamless integration with the policy, economic, social, and technological dynamics of the complex system. These dynamics are frequently difficult for decision-makers to observe and understand; consequently, decision-makers take on risk from a lack of insight into how best to implement the innovation and how their system-of-interest will ultimately perform. This research defines a framework for an integrated, multi-scale modeling and simulation systems approach that provides decision-makers with prospective insight into the likely performance once an innovation or change is implemented in a complex system. The need for such a framework when modeling complex systems is described, and suitable simulation paradigms and the challenges related to implementing these simulations are discussed. A healthcare case study is used to demonstrate the framework's application and utility in understanding how an innovation, once fielded, will actually affect the larger complex system to which it belongs.
2

Engaging Stakeholders in Resilience Assessment and Management for Coastal Communities

Bostick, Thomas P. 30 August 2016 (has links)
Coastal hazards, including storm surge, sea-level rise, and cyclone winds, continue to have devastating effects on infrastructure systems and communities despite the costly investments already being made in risk management to mitigate predicted consequences. Risk management has generally not been sufficiently focused on coastal resilience with community stakeholders involved in the process of making their coastlines more resilient to damaging storms; without earlier involvement in coastal resilience planning for their communities, stakeholders are left frustrated after disasters occur. The US National Academies has defined resilience as "the ability to prepare and plan for, absorb, recover from and more successfully adapt to adverse events" (National Research Council (NRC), 2012). This dissertation introduces a methodology for stakeholder-involved resilience evaluation across the physical, information, cognitive and social domains (DiMase, Collier, Heffner, & Linkov, 2015; Linkov et al., 2013). The methodology addresses the stages of resilience (prepare, absorb, recover and adapt) and integrates performance assessment of risk-management project initiatives with scenario analysis to characterize disruptions of risk-management priorities (Linkov, Fox-Lent, Keisler, Della Sala, & Sieweke, 2014b). The goal is not to find the "right" solution set of priorities by quantitative means, but to develop a methodology for dialogue among the stakeholders: one that allows them to be involved in making their coastal communities more resilient by determining important resilience stages and domains, critical functions of the system, project initiatives for consideration, and potential future scenarios of concern. Stakeholder qualitative comments are transformed into quantitative inputs that in turn produce qualitative outputs, and the results allow the stakeholders to easily "see" the priorities across resilience stages and domains. The methodology is illustrated through a case study at Mobile Bay, Alabama, USA, and again through a second case study of the Southeast Region of Florida, which produces more focused results for the stakeholders. The research findings, if broadly implemented, will benefit federal and local policymakers and emergency responders, business and community leaders, and individual homeowners and residents in the United States and the international community.
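A minimal sketch of the kind of stakeholder-weighted scoring such a methodology might use, assuming a simple weighted-sum model; the initiative names, resilience-stage weights, and ratings below are hypothetical and are not taken from the dissertation or its case studies:

```python
# Hedged sketch: turning stakeholder ratings of project initiatives into
# priority scores across resilience stages, under baseline and storm scenarios.
# All initiative names, stages, ratings, and weights are hypothetical.
import numpy as np

stages = ["prepare", "absorb", "recover", "adapt"]
initiatives = ["beach nourishment", "wetland restoration", "elevated roadways"]

# Qualitative stakeholder ratings (low/medium/high) mapped to numbers.
rating = {"low": 1, "medium": 2, "high": 3}
ratings = np.array([  # rows: initiatives, columns: stages
    [rating["high"], rating["medium"], rating["low"], rating["medium"]],
    [rating["medium"], rating["high"], rating["medium"], rating["high"]],
    [rating["low"], rating["medium"], rating["high"], rating["medium"]],
], dtype=float)

# Stage weights elicited from stakeholders for two scenarios of concern.
scenario_weights = {
    "baseline": np.array([0.25, 0.25, 0.25, 0.25]),
    "storm surge": np.array([0.15, 0.40, 0.30, 0.15]),
}

for scenario, w in scenario_weights.items():
    scores = ratings @ w                      # weighted-sum score per initiative
    order = np.argsort(scores)[::-1]          # rank initiatives, best first
    ranked = [(initiatives[i], round(float(scores[i]), 2)) for i in order]
    print(scenario, ranked)
```

Comparing the rankings across scenarios is one way stakeholders can "see" how priorities shift when a disruption is considered.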
3

Static and dynamic optimization problems in cooperative multi-agent systems

Sun, Xinmiao 02 November 2017 (has links)
This dissertation focuses on challenging static and dynamic problems encountered in cooperative multi-agent systems. First, a unified optimization framework is proposed for a wide range of tasks including consensus, optimal coverage, and resource allocation problems, each of which has previously been studied separately. The framework allows gradient-based algorithms to be applied to all of these problems, and the algorithms are shown to be distributed for a subclass of problems whose objective functions can be decoupled. Second, the issue of global optimality is studied for optimal coverage problems where agents are deployed to maximize the joint detection probability. Objective functions in these problems are non-convex, and no global optimum can be guaranteed by the gradient-based algorithms developed to date. To obtain a solution close to the global optimum, the selection of initial conditions is crucial. The initial state is determined by an additional optimization problem whose objective function is monotone submodular, a class of functions for which the greedy solution is guaranteed to achieve at least a (1 − 1/e) fraction of the optimal value; this bound is further tightened by exploiting the curvature information of the objective function. The greedy solution is subsequently used as the initial point of a gradient-based algorithm for the original optimal coverage problem. In addition, a novel method is proposed to escape a local optimum in a systematic way instead of randomly perturbing controllable variables away from it. Finally, optimal dynamic formation control problems are addressed for mobile leader-follower networks. Optimal formations are determined by maximizing a given objective function while continuously preserving communication connectivity in a time-varying environment. It is shown that in a convex mission space, the connectivity constraints can be satisfied by any feasible solution to a Mixed Integer Nonlinear Programming (MINLP) problem. For the class of optimal formation problems where the objective is to maximize coverage, the optimal formation is proven to be a tree that can be efficiently constructed without solving a MINLP problem. In a mission space constrained by obstacles, a minimum-effort reconfiguration approach is designed to obtain a formation that still optimizes the objective function while avoiding the obstacles and ensuring connectivity.
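A minimal sketch of the greedy initialization described above, for a toy coverage problem in which the joint detection probability (a monotone submodular function) is maximized over candidate agent locations; the detection model and all numbers are assumptions of this sketch, not the dissertation's formulation:

```python
# Hedged sketch of greedy initialization for an optimal coverage problem:
# place k agents at candidate locations to maximize the joint detection
# probability averaged over sample points. Because the objective is monotone
# submodular, the greedy solution achieves at least a (1 - 1/e) fraction of
# the optimum. Detection model and data are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(0, 10, size=(200, 2))      # sample points to cover
candidates = rng.uniform(0, 10, size=(30, 2))   # candidate agent locations

def detect_prob(agent, pts, decay=0.5):
    """Probability that an agent at `agent` detects each point (distance decay)."""
    d = np.linalg.norm(pts - agent, axis=1)
    return np.exp(-decay * d)

def coverage(selected):
    """Joint detection probability, averaged over all sample points."""
    if not selected:
        return 0.0
    miss = np.ones(len(points))
    for idx in selected:
        miss *= 1.0 - detect_prob(candidates[idx], points)
    return float(np.mean(1.0 - miss))

def greedy(k):
    selected = []
    for _ in range(k):
        best = max((i for i in range(len(candidates)) if i not in selected),
                   key=lambda i: coverage(selected + [i]))
        selected.append(best)
    return selected

initial = greedy(k=5)   # would seed a gradient-based refinement of agent positions
print(initial, coverage(initial))
```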
4

Leveraging Model-Based Techniques for Component Level Architecture Analysis in Product-Based Systems

McKean, David Keith 10 April 2019 (has links)
System design at the component level seeks to construct a design trade space of alternate solutions comprising mapping(s) of system function(s) to physical hardware or software product components. The design space is analyzed to determine a near-optimal next-level allocated architecture solution that satisfies system function and quality requirements. Software product components are targeted to increasingly complex computer systems that provide heterogeneous combinations of processing resources. These processing technologies facilitate performance (speed) optimization via algorithm parallelization. However, speed optimization can conflict with electrical energy and thermal constraints. A multi-disciplinary architecture analysis method is presented that considers all attribute constraints required to synthesize a robust, optimal, extensible next-level solution. This paper presents an extensible, executable model-based architecture attribute framework that efficiently constructs a component-level design trade space. A proof-of-concept performance attribute model is introduced that targets single-CPU systems. The model produces static performance estimates that support optimization analysis and dynamic performance estimates that support simulation analysis. This model-based approach replaces current spreadsheet-based analysis-of-alternatives approaches. The capability to easily model computer resource alternatives and produce attribute estimates improves design-space exploration productivity. Performance estimation improvements save time and money through reduced prototype requirements. Credible architecture attribute estimates facilitate more informed design tradeoff discussions with specialty engineers. This paper presents an initial validation of the model-based architecture attribute analysis method and model framework using a single computation thread application on two laptop computers with different CPU configurations. Execution time estimates are calibrated for several data input sizes using the first laptop. Actual execution times on the second laptop are shown to be within 10 percent of the execution time estimates for all data input sizes.
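A hedged sketch of the calibrate-then-validate idea in the last two sentences, assuming a simple linear execution-time model and a single CPU scaling factor between the two laptops (both assumptions of this sketch, not the dissertation's attribute model); all timing values are hypothetical:

```python
# Hedged sketch: fit an execution-time model on one machine, rescale by a
# single CPU factor, and check whether predictions for a second machine fall
# within 10 percent of measured times. All numbers below are hypothetical.
import numpy as np

sizes = np.array([1e4, 1e5, 1e6, 1e7])            # data input sizes
t_laptop1 = np.array([0.02, 0.19, 1.95, 20.1])    # measured seconds, laptop 1

# Calibrate a linear model t = a * n + b on laptop 1.
a, b = np.polyfit(sizes, t_laptop1, deg=1)

# Assume a single scaling factor between CPUs (e.g., from a benchmark ratio).
cpu_factor = 1.3
t_laptop2_pred = cpu_factor * (a * sizes + b)

t_laptop2_meas = np.array([0.028, 0.26, 2.6, 27.0])  # hypothetical measurements
rel_err = np.abs(t_laptop2_pred - t_laptop2_meas) / t_laptop2_meas
print(np.round(rel_err * 100, 1), "% error; within 10%:", bool(np.all(rel_err < 0.10)))
```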
5

A Critical Examination of the Relationships Between Risk Management, Knowledge Management and Decision Making

Lengyel, David M. 15 September 2018 (has links)
The goal of this research is to critically examine the nexus of risk management, decision-making, and knowledge management in an integrated framework, or triad. This research will examine the framework through the lens of managers of both human and scientific spaceflight missions at the National Aeronautics and Space Administration (NASA). It is intended to expose how coupling risk, knowledge and decision-making improves the chances of mission success while potentially averting mishaps. Historical case studies of NASA programs will be used to validate this assertion. Common risk management and knowledge management processes will be examined as enablers of risk-informed decision-making, particularly for residual risk acceptance decisions. Decision-making under normal programmatic conditions as well as during anomalous or mishap-related conditions will also be assessed.

Residual risk acceptance decision-making might be considered a special case of a requisite decision analysis model. In the context of NASA programs and projects, these decisions are made throughout the lifecycle. They take on special significance, however, in human spaceflight missions, for example in deciding whether to proceed with human-tended test and evaluation, whether to launch, or how to respond to an off-nominal condition on-orbit. Finally, this dissertation offers a checklist for use by managers to improve residual risk acceptance decision competency within an organization.
6

The Evaluation of Symptomatic and Latent System Failure Causal Factors in Complex Socio-Technical Systems

Erjavac, Anthony J. 22 August 2018 (has links)
Human error remains the highest symptomatic cause of system failure across multiple domains. Technology failure has seen a significant rate decrease through analyses and design changes, but human error causal factors have not experienced the same level of improvement. Designers and developers need improved methods to model the events leading up to, and resulting in, human error events.

This study examines the reported causal data of an integrated human complex system, multi-engine aircraft data, and explores methods of evaluating those data to identify the factors contributing to system failure, including the human actor, technology, and latent factors. The analysis methods demonstrated in this paper provide a means of assessing symptomatic failure data to produce quantitative information about the failure modes and related latent causal factors. National Transportation Safety Board (NTSB) aviation accident data are coded using the Human Factors Analysis and Classification System (HFACS) to provide a framework for evaluating the mediating and moderating effects affecting the symptomatic and latent causal factors. We comparatively analyze these accident data for general aviation and air transport pilots to evaluate potential causal factor differences. The methodology explored leverages previous work on causal relationships to model the relationships between latent and symptomatic causal factors.

Two methods are employed to evaluate the data. Hierarchical multiple regression, derived from the Baron and Kenny method, is used for statistical evaluation of moderator and mediator effects on human behavior. Multiple-variable logistic regression is used to model the relationships between latent causal factors, symptomatic causal factors and accident severity. The usefulness of the framework and the possibilities for future research are discussed.

The findings first confirm the inverse relationship between pilot experience and accident rate to establish a baseline from which the quantifiable mediating and moderating effects of direct causal factors and latencies are demonstrated. Further analysis elucidates the association between the symptomatic human error initiating events and the latent failure causal factors. These analyses demonstrate the systemic relationships of the spatially and temporally disparate symptomatic and latent causal factors.

The relationships between symptomatic and latent causal factors are modeled using causal loop and probability path diagrams. These methods identify the causal factors with the most impact to help guide users, managers and policy-makers as mitigation strategies are developed. The method is transportable to other systems based on the universality of the techniques described. This research helps to enhance the ability to develop an objective systemic perspective that enables holistic solution sets.

The benefit to the systems engineering domain is the demonstration of methods that quantify and objectively link the symptomatic human error events initiating failures to the latent causal factors where mitigation methods can be applied. Using these methods, systems can be analyzed to better understand the disparate causal effects affecting system failure in complex socio-technical systems and the actions that can be taken to mitigate the latent causal factors behind symptomatic events.
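A hedged sketch of the two statistical methods named above, run on synthetic data since the NTSB/HFACS variables and codings are not reproduced here; the variable names and effect sizes are hypothetical:

```python
# Hedged sketch on synthetic data: (1) a Baron-and-Kenny-style hierarchical
# regression checking whether a latent factor mediates the effect of pilot
# experience on a symptomatic error score, and (2) a multiple-variable logistic
# regression relating causal factors to accident severity. The full Baron and
# Kenny procedure also regresses the mediator on the predictor, omitted here
# for brevity.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
experience = rng.normal(0, 1, n)                   # standardized pilot experience
latent = -0.5 * experience + rng.normal(0, 1, n)   # hypothetical latent factor
error_score = 0.3 * experience + 0.6 * latent + rng.normal(0, 1, n)
severity = (rng.uniform(size=n) <
            1 / (1 + np.exp(-(0.8 * error_score + 0.4 * latent - 1)))).astype(int)

# Step 1: outcome ~ predictor, then outcome ~ predictor + mediator.
m_total = sm.OLS(error_score, sm.add_constant(experience)).fit()
m_mediated = sm.OLS(error_score,
                    sm.add_constant(np.column_stack([experience, latent]))).fit()
print("experience effect, total vs mediated:",
      round(m_total.params[1], 3), round(m_mediated.params[1], 3))

# Step 2: logistic regression of accident severity on both factor types.
logit = sm.Logit(severity,
                 sm.add_constant(np.column_stack([error_score, latent]))).fit(disp=0)
print(logit.params)
```

A shrinking experience coefficient once the latent factor is included is the mediation signature the hierarchical regression looks for.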
7

Formal methods for partial differential equations

Penedo Alvarez, Francisco 19 May 2020 (has links)
Partial differential equations (PDEs) model nearly all of the physical systems and processes of interest to scientists and engineers. The analysis of PDEs has had a tremendous impact on society by enabling our understanding of thermal, electrical, fluidic and mechanical processes. However, the study of PDEs is often approached with methods that do not allow for rigorous guarantees that a system satisfies complex design objectives. In contrast, formal methods have recently been developed to allow the formal statement of specifications, along with analysis techniques that can guarantee their satisfaction by design. In this dissertation, new design methodologies are introduced that enable the systematic creation of structures whose mechanical properties, shape and functionality can be time-varying. A formal methods formulation and solution to the tunable fields problem is first introduced, where a prescribed time evolution of the displacement field for different spatial regions of the structure is to be achieved using boundary control inputs. A spatial and temporal logic is defined that allows the specification of interesting properties in a user-friendly fashion and provides a satisfaction score for the designed inputs. This score is used to formulate an optimization procedure based on Mixed Integer Programming (MIP) to find the best design. In the second part, a sampling-based assumption mining algorithm is introduced, which is the first step towards a divide-and-conquer strategy for solving the tunable fields problem using assume-guarantee contracts. The algorithm produces a temporal logic formula that represents initial conditions and external inputs of a system that satisfy a goal given as a temporal logic formula over its state. An online supervised learning algorithm, based on decision tree learning, is presented and used to produce a temporal logic formula from assumption samples. The third part focuses on the tunable constitutive properties problem, where the goal is to create structures satisfying a stress-strain response by designing their geometry. The goal is represented as a logic formula that captures the allowed deviation from a reference and provides a satisfaction score. An optimization procedure is used to obtain the best geometric design.
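A minimal sketch of what a quantitative satisfaction score for a spatio-temporal requirement can look like, for an illustrative requirement of the form "between t1 and t2, the mean displacement over region A always exceeds a threshold"; the logic, field model, and numbers are assumptions of this sketch, not the dissertation's formalism:

```python
# Hedged sketch of a quantitative satisfaction (robustness) score for a simple
# spatio-temporal requirement over a hypothetical displacement field u(t, x).
import numpy as np

T, N = 100, 50                                   # time steps, spatial nodes
x = np.linspace(0.0, 1.0, N)
t = np.linspace(0.0, 1.0, T)
# Hypothetical displacement field produced by some boundary control input.
u = np.sin(np.pi * x)[None, :] * np.sin(2 * np.pi * t)[:, None]

region_A = (x > 0.4) & (x < 0.6)                 # spatial predicate
window = (t > 0.2) & (t < 0.3)                   # temporal window [t1, t2]
c = 0.5                                          # displacement threshold

# Predicate robustness at each time: mean displacement over A minus threshold.
rho_t = u[:, region_A].mean(axis=1) - c
# "Always" over the window takes the minimum robustness; a positive value means
# the requirement is satisfied, and its magnitude is the satisfaction score
# an optimizer (e.g., a MIP encoding) would try to maximize.
score = rho_t[window].min()
print("satisfaction score:", round(float(score), 3))
```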
8

Suitability of Model Based Systems Engineering for Agile Automotive Product Development

Gena, Kriti January 2020 (has links)
No description available.
9

Provision of energy and regulation reserve services by buildings

Aslam, Waleed 17 January 2024 (has links)
Power consumption and generation in the electrical grid must be balanced at all times. This balance is maintained by the grid operator through the procurement of energy and regulation reserve services in the wholesale electricity market. Traditionally, these services could only be procured from generation resources. However, helped by advances in computational and communication infrastructure, demand resources are increasingly being leveraged in this regard. In particular, the Heating, Ventilation and Air-Conditioning (HVAC) systems of buildings are gaining traction due to the consumption flexibility provided by their energy-shifting and fast-response capabilities. The provision of energy and regulation reserve services in the wholesale market, from the perspective of a typical building's HVAC system, can be construed in terms of two synergistic problems: an hourly deterministic optimization problem, referred to as the Scheduling Problem, and a real-time (seconds-timescale) stochastic control problem, referred to as the Deployment Problem. So far, the existing literature has synthesized the solutions of these two problems in a simplistic, sequential or iterative manner without employing an integrated approach that explicitly captures their cost and constraint interactions. Moreover, the deployment problem has only been solved with classical controllers that are not optimal, whereas the non-convexities in the scheduling problem have been addressed with methods that are sensitive to initialization. The current approaches therefore do not fully optimize the decisions of the two problems either individually or collectively, and hence do not fully exploit the HVAC resource. The goal of the proposed research is to achieve optimal decision-making across both the scheduling and deployment problems. Our approach derives an optimal control policy for the deployment problem and expresses the corresponding expected sum of deployment costs over the hour (the "expected intra-hour costs") as a closed-form analytic function of the scheduling decisions. The inclusion of these expected intra-hour costs in the scheduling problem allows the optimization of the hourly scheduling decisions, pursuant to the real-time use of the optimal deployment control policy. Thus, our approach captures the interaction of the two problems and optimizes decisions across timescales, yielding a truly integrated solution. We investigate the estimation of the expected intra-hour costs (based on a myopic policy optimizing instantaneous tracking error and utility loss), and solve the integrated problem with tight relaxations, demonstrating the value and applicability of the approach. Further, we derive the optimal control policy for the deployment problem, formulating and solving it as a discounted-cost, infinite-horizon Dynamic Program (DP) and as a Reinforcement Learning (RL) problem. We also characterize the optimal policy as a closed-form analytic mapping from state space to action space, and obtain the corresponding expected cost-to-go (the expected intra-hour costs) as a closed-form analytic function of the scheduling decisions. We further illustrate that the optimal policy captures the deployment costs better than existing approaches. As such, our work represents a structured, interpretable and automated way to consider energy and regulation reserve market participation end-to-end, and it can be extended to other demand-side resources.
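A hedged sketch of solving a deployment-style problem as a discounted-cost, infinite-horizon dynamic program via value iteration on a small discretized state space; the state definition, cost weights, and dynamics are hypothetical stand-ins for the building/HVAC model in the dissertation:

```python
# Hedged sketch: value iteration for a toy deployment problem, where the state
# is a discretized regulation-tracking error and the action is an HVAC power
# adjustment. Costs and dynamics are illustrative only.
import numpy as np

errors = np.linspace(-1.0, 1.0, 41)              # discretized tracking error
actions = np.linspace(-0.2, 0.2, 9)              # power adjustments
gamma = 0.95                                     # discount factor

def stage_cost(e, a):
    return e**2 + 0.1 * a**2                     # tracking error + utility-loss proxy

def next_error(e, a):
    return np.clip(0.9 * e - a, errors[0], errors[-1])   # hypothetical dynamics

V = np.zeros_like(errors)
for _ in range(500):                             # value iteration to convergence
    Q = np.empty((len(errors), len(actions)))
    for i, e in enumerate(errors):
        for j, a in enumerate(actions):
            k = np.abs(errors - next_error(e, a)).argmin()   # nearest-grid transition
            Q[i, j] = stage_cost(e, a) + gamma * V[k]
    V_new = Q.min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-6:
        break
    V = V_new

policy = actions[Q.argmin(axis=1)]               # greedy policy w.r.t. converged V
print(dict(zip(np.round(errors[::10], 2), np.round(policy[::10], 2))))
```

The converged cost-to-go V plays the role of the expected intra-hour cost that the hourly scheduling problem would consume.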
10

Predictive modeling of fuel efficiency of trucks

Bindingnolle Narasimha, Srivatsa 12 April 2016 (has links)
This research studied the behavior of several controllable variables that affect the fuel efficiency of trucks. Re-routing is the process of modifying the parameters of the routes for a set of trips to optimize fuel consumption and to increase customer satisfaction through efficient deliveries; it is an important process undertaken by a food distribution company to adapt trips to immediate necessities. A predictive model was developed to calculate the change in Miles per Gallon (MPG) whenever a re-route is performed on a region of a particular distribution area. The data used was from the Dallas center, one of the distribution centers owned by the company. A consistent model that could provide relatively accurate predictions across five distribution centers had to be developed, and the model built using data from the Corporate center was found to be the most consistent. The data used to build the model spanned May 2013 through December 2013. About 88% of the model's predictions on these data fell within the 0-10% error group, an improvement on the 43% obtained for the linear regression and K-means clustering models. The model was also validated on data from January 2014 through the first two weeks of March 2014, for which about 81% of the predictions fell within the 0-10% error group. The average overall error was around 10%, the lowest among the approaches explored in this research. Weight, stop count and stop time were identified as the most significant factors influencing the fuel efficiency of the trucks. Further, a neural network architecture was built to improve the MPG predictions. The model can be used to predict the average change in MPG for a set of trips whenever a re-route is performed. Since the aim of re-routing is to reduce miles and trips, extra load will be added to the remaining trips; although MPG would decrease because of this extra load, the decrease would be offset by the savings from the drop in miles and trips. The net fuel savings can then be translated into the amount of money saved.
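A minimal sketch of the error-band evaluation cited above (the share of predictions whose absolute percentage error falls within 0-10%), computed on synthetic MPG values rather than the company's data:

```python
# Hedged sketch of the 0-10% error-band metric on synthetic MPG data.
import numpy as np

rng = np.random.default_rng(2)
actual_mpg = rng.uniform(5.0, 9.0, size=1000)
predicted_mpg = actual_mpg * (1 + rng.normal(0.0, 0.06, size=1000))  # hypothetical model output

ape = np.abs(predicted_mpg - actual_mpg) / actual_mpg        # absolute percentage error
within_10 = np.mean(ape <= 0.10)                             # fraction in the 0-10% band
print(f"{within_10:.0%} of predictions within 10% error; mean error {ape.mean():.1%}")
```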
