About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
21

An Investigation Into the Effectiveness of Energy Savings Performance Contracting (ESPC) Structure: How to Fill the Opportunity-to-Implementation Gap

Pinthuprapa, Chatchai 22 July 2017 (has links)
This dissertation introduces and empirically verifies a structure for Energy Savings Performance Contracting (ESPC) from the perspectives of clients, Energy Service Companies (ESCOs), and governments. The new ESPC structure model presented herein is built on Motivation-Opportunity-Ability (MOA) theory, through which key influences are identified. Performance measurement, an important component of performance contracting, has been incorporated into the structure to monitor its effect on ESPC implementation success.

The proposed structure and hypotheses were verified against current ESPC practice in the United States through an online survey. The proposed structure was analyzed to understand the relationships between stakeholders, key factors, barriers, and practices among the constructs using the structural equation modeling (SEM) approach. Because the data were non-normal and the sample was small relative to the number of variables, the Partial Least Squares SEM (PLS-SEM) method was chosen. The survey's statistical results were used to verify the ESPC-MOA structure, develop an implementation guide, and identify the ESPC critical factors that help establish implementation success.
22

Validating the Use of Performance Risk Indices for System-Level Risk and Maturity Assessments

Holloman, Sherrica S. 07 April 2016 (has links)
With pressure on the U.S. Defense Acquisition System (DAS) to reduce cost overruns and schedule delays, system engineers' performance is only as good as their tools. Recent literature details a need for (1) objective, analytical risk quantification methodologies rather than traditional subjective qualitative methods such as expert judgment, and (2) mathematically rigorous system-level maturity assessments. The Technology Performance Risk Index (TPRI) of Mahafza, Componation, and Tippett (2005) ties the assessment of technical performance to the quantification of the risk of unmet performance; however, it is structured for component-level data as input. This study's aim is to establish a modified TPRI that takes system-level data as model input, and then to validate the modified index with actual system-level data from the Department of Defense's (DoD) Major Defense Acquisition Programs (MDAPs). This work's contribution is the establishment and validation of the System-level Performance Risk Index (SPRI). With the introduction of the SPRI, system-level metrics are better aligned, allowing for better assessment, tradeoff, and balancing of time, performance, and cost constraints. This will ultimately allow system engineers and program managers to make better-informed system-level technical decisions throughout the development phase.
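
The abstract does not reproduce the TPRI or SPRI formulations, so the following is only a minimal illustrative sketch of the general idea behind a performance risk index: estimate the probability that each technical performance measure misses its target and roll the per-metric risks up into a weighted system-level figure. The function names, metrics, weights, and the normality assumption are placeholders introduced here, not the published indices.

```python
import math

def performance_risk(current: float, target: float, sigma: float) -> float:
    """Probability that achieved performance falls short of the target,
    assuming projected performance is normally distributed around the
    current estimate with standard deviation sigma (higher is better)."""
    if sigma <= 0:
        return 1.0 if current < target else 0.0
    z = (target - current) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))  # P(performance < target)

def system_risk_index(metrics: list[dict]) -> float:
    """Criticality-weighted average of per-metric risks (illustrative aggregation)."""
    total_weight = sum(m["weight"] for m in metrics)
    return sum(m["weight"] * performance_risk(m["current"], m["target"], m["sigma"])
               for m in metrics) / total_weight

# Two hypothetical technical performance measures for a notional system.
tpms = [
    {"name": "range_km",   "current": 950, "target": 1000, "sigma": 60, "weight": 0.6},
    {"name": "payload_kg", "current": 210, "target": 200,  "sigma": 15, "weight": 0.4},
]
print(round(system_risk_index(tpms), 3))  # ~0.58 on a 0-1 scale (higher = riskier)
```
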
23

Application of systems engineering principles for analysis of utility baseline development process

Johnson, Benjamin D. 15 February 2017 (has links)
There is a need in the energy services industry for companies to accurately estimate and verify energy savings. This starts with the accurate development of a utility baseline. Systems Engineering principles can be used to determine the optimal method for developing a utility baseline. Several Systems Engineering tools were used in this thesis, including a stakeholder analysis, needs and requirements files, requirements traceability, functional flow block diagrams, a work breakdown structure, trade studies, and verification and validation of requirements. These tools helped identify three main components of the process for further analysis. A trade study was then used to determine the best way to address these components of the process, and it resulted in innovative methods that had not previously been considered. The recommendations in this work will benefit both the energy services company and its customers.
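
The abstract does not include the trade study itself, so the sketch below only illustrates the generic weighted-sum scoring that a trade study of baseline development methods might use. The criteria, weights, alternatives, and scores are hypothetical placeholders, not the thesis's data.

```python
# Generic weighted-sum trade study sketch; all values below are invented for illustration.
criteria = {"accuracy": 0.5, "implementation_cost": 0.3, "analyst_effort": 0.2}

# Each alternative is scored 1-5 against each criterion (5 = best).
alternatives = {
    "regression_baseline":   {"accuracy": 4, "implementation_cost": 3, "analyst_effort": 3},
    "degree_day_model":      {"accuracy": 3, "implementation_cost": 5, "analyst_effort": 4},
    "calibrated_simulation": {"accuracy": 5, "implementation_cost": 2, "analyst_effort": 2},
}

def weighted_score(scores: dict, weights: dict) -> float:
    """Weighted-sum utility of one alternative."""
    return sum(weights[c] * scores[c] for c in weights)

ranked = sorted(alternatives.items(),
                key=lambda kv: weighted_score(kv[1], criteria),
                reverse=True)
for name, scores in ranked:
    print(f"{name}: {weighted_score(scores, criteria):.2f}")
```
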
24

Use of Model-Based Design Methods for Enhancing Resiliency Analysis of Unmanned Aerial Vehicles

Knox, Lenora A. 20 April 2017 (has links)
The most common traditional non-functional requirement analysis is reliability. With systems becoming more complex, networked, and adaptive to environmental uncertainties, system resiliency has recently become the non-functional requirement analysis of choice. Analysis of system resiliency has challenges, which include defining resilience for domain areas, identifying resilience metrics, determining resilience modeling strategies, and understanding how best to integrate the concepts of risk and reliability into resiliency. Formal methods that integrate all of these concepts do not currently exist in specific domain areas. Leveraging RAMSoS, a model-based reliability analysis methodology for Systems of Systems (SoS), we propose an extension that accounts for resiliency analysis through evaluation of mission performance, risk, and cost using multi-criteria decision-making (MCDM) modeling and design trade study variability modeling evaluation techniques. This proposed methodology, coined RAMSoS-RESIL, is applied to a case study in the multi-agent unmanned aerial vehicle (UAV) domain to investigate the potential benefits of a mission architecture in which the functionality to complete a mission is disseminated across multiple UAVs (distributed) as opposed to being contained in a single UAV (monolithic). The case-study-based research demonstrates proof of concept for the proposed model-based technique and provides sufficient preliminary evidence to conclude which architectural design (distributed vs. monolithic) is most resilient, based on insight into mission resilience performance, risk, and cost in addition to the traditional analysis of reliability.
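
RAMSoS-RESIL's evaluation model is not given in the abstract; the sketch below illustrates just one ingredient of a distributed-versus-monolithic comparison, namely mission completion probability under independent UAV failures, using made-up survival probabilities and redundancy levels.

```python
from math import comb

def monolithic_success(p_uav: float) -> float:
    """Mission succeeds only if the single UAV carrying all functionality survives."""
    return p_uav

def distributed_success(p_uav: float, n: int, k: int) -> float:
    """Mission succeeds if at least k of n function-sharing UAVs survive
    (k-out-of-n redundancy), assuming independent, identical survival probabilities."""
    return sum(comb(n, s) * p_uav**s * (1 - p_uav)**(n - s) for s in range(k, n + 1))

# Hypothetical numbers: each UAV survives the mission with probability 0.9;
# the distributed design needs any 3 of its 4 UAVs to complete the mission.
print(monolithic_success(0.9))          # 0.90
print(distributed_success(0.9, 4, 3))   # ~0.948
```

A full comparison would also weigh risk and cost alongside this kind of performance figure, as the abstract indicates.
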
25

COSYSMO 3.0: An Extended, Unified Cost Estimating Model for Systems Engineering

Alstad, James Patrick 29 January 2019 (has links)
The discipline of systems engineering continues to increase in importance. There are more projects, projects are larger, and projects are more critical; all of this means that more and better systems engineering is required. It follows that the cost of systems engineering continues to increase in importance. In turn, it follows that accurate estimation of systems engineering costs continues to increase in importance, as systems engineering results suffer if a project either underestimates or overestimates its cost.

There are models for estimating systems engineering cost, notably COSYSMO 1.0, but all of these models leave out some aspect of modern practice and therefore tend to estimate the cost of modern systems engineering inaccurately, or not at all. These modern practices include reuse of engineering artifacts, requirements volatility, and engineering in a system-of-systems context. While all of these practices have been addressed to some extent in research papers, there is no all-encompassing model that integrates all of them.

My research has resulted in an integrated model that includes the features of COSYSMO 1.0 and covers those modern practices. It is open and therefore widely available. I have completed a comprehensive model based, in part, on actual project data.
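
COSYSMO's published general form is a calibrated constant times a weighted size measure raised to a diseconomy-of-scale exponent, times a product of cost-driver effort multipliers. The sketch below follows that shape only as an illustration; the constants, weights, and driver ratings are placeholders, not calibrated COSYSMO 1.0 or COSYSMO 3.0 values.

```python
from math import prod

A = 40.0    # placeholder calibration constant (effort units per weighted size unit)
E = 1.06    # placeholder diseconomy-of-scale exponent

# Size drivers as (count, weight); the weights stand in for the easy/nominal/difficult
# weighting of requirements, interfaces, algorithms, and operational scenarios.
size_drivers = {
    "system_requirements":   (120, 1.0),
    "system_interfaces":     (18,  2.5),
    "critical_algorithms":   (6,   4.0),
    "operational_scenarios": (4,  10.0),
}

# Cost-driver effort multipliers, nominal = 1.0 (placeholder ratings).
effort_multipliers = {
    "requirements_understanding": 0.90,
    "architecture_understanding": 1.10,
    "process_capability":         0.95,
}

size = sum(count * weight for count, weight in size_drivers.values())
effort = A * size**E * prod(effort_multipliers.values())
print(f"size = {size:.0f} weighted units, estimated effort ≈ {effort:,.0f}")
```
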
26

Exploring Critical Infrastructure Single Point of Failure Analysis (SPFA) for Data Center Risk and Change Management

Moore, Michael Ronald 29 November 2018 (has links)
Critical infrastructure (CI) risk management frameworks require identification of single-point-of-failure risks, but existing reliability assessment methods are not practical for identifying single points of failure in CI systems. The purpose of this study was the development and assessment of a system reliability assessment tool specific to the task of identifying single points of failure in CI systems. Using a series of nested action research cycles, the author developed, applied, and improved a single point of failure analysis (SPFA) tool consisting of a six-step method and a novel single point of failure analysis algorithm, which was used to analyze several CI systems at a participating data center organization for single points of failure. The author explored which components of existing reliability analysis methods can be used for SPFA, how SPFA aligns with CI change and risk management, and the benefits and drawbacks of using the six-step method to perform SPFA. System decomposition, a network tree, stated assumptions, and visual aids were used in the six-step method to perform SPFA. Using the method, the author was able to provide the participating organization with knowledge of single-point-of-failure risks in 2N and N+X redundant systems for use in risk and change management. The author and two other individuals independently performed SPFA on a system and consistently identified two components as single points of failure. The method was beneficial in that analysts were able to analyze different types of systems of varying familiarity and to consider common-cause failures and system interdependencies as single points of failure. Drawbacks of the method are its reliance on the ability of the analyst and on the assumptions stated by the analyst. The author recommends that CI organizations use the method to identify single points of failure in risk management and calls on future researchers to investigate the method quantitatively.
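
The dissertation's six-step method and SPFA algorithm are not spelled out in the abstract. As a purely illustrative aside, one common graph-theoretic way to flag candidate single points of failure in a system connectivity model is to find articulation points (cut vertices); the sketch below does this with networkx on a hypothetical data center power path.

```python
import networkx as nx

# Hypothetical topology: two utility feeds and 2N UPS/PDU paths that all
# converge on a single automatic transfer switch (ATS).
g = nx.Graph()
g.add_edges_from([
    ("utility_feed_A", "ats"), ("utility_feed_B", "ats"),
    ("ats", "ups_1"), ("ats", "ups_2"),
    ("ups_1", "pdu_1"), ("ups_2", "pdu_2"),
    ("pdu_1", "server_rack"), ("pdu_2", "server_rack"),
])

# A node whose removal disconnects the graph is a candidate single point of failure.
candidates = sorted(nx.articulation_points(g))
print("Candidate single points of failure:", candidates)  # ['ats']
```

An articulation-point check captures only topological single points of failure; common-cause failures and interdependencies, which the author's method also considers, require additional modeling.
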
27

An Agent-Based Model for Improved System of Systems Decision Making in Air Transportation

Esmaeilzadeh, Ehsan 05 December 2018 (has links)
Team collaboration and decision making have a significant role in the overall performance of complex Systems of Systems (SoS). Inefficient decisions can result in significant extra cost and a reduction in overall team performance within organizations. Therefore, early and effective evaluation of individual and collaborative team decisions is essential. The degree of utility of alternative strategy designs may vary with conditions, but these conditions are not always considered when decisions are being made. Improved Systems Engineering (SE) processes and tools are needed during the systems' development lifecycle to make decisions and to rapidly evaluate and assess multiple design alternatives so as to select the architectural design strategies that yield the highest mission performance. This research applies the SE principles of design for change and flexibility in an agent-based model to simulate design alternatives for systems analysis and decision making. The model evaluates and selects the decision(s) for an SoS comprised of collaborative teams, resulting in higher mission performance. The proposed agent-based model was applied to test team decision-making improvements within the SoS structure of the air transportation domain. The experimental results indicated significant improvements in decision coordination, decision workload, and overall system performance in the air transportation domain. This research contributes to agent-based modeling of SoS teams and can assist with evaluating team decisions and improving decision making as an essential element in the field of Systems Engineering.
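
The abstract does not describe the agent-based model's internals, so the following is only a toy sketch of the kind of experiment an agent-based comparison might run: teams of agents choosing among strategies from noisy utility estimates, with and without pooling their estimates before deciding. The strategies, utilities, noise levels, and team size are all invented for illustration.

```python
import random

random.seed(1)
TRUE_UTILITY = {"strategy_A": 0.55, "strategy_B": 0.70, "strategy_C": 0.60}

def noisy_estimate(strategy: str, noise: float = 0.15) -> float:
    """Each agent perceives a strategy's utility with uniform noise."""
    return TRUE_UTILITY[strategy] + random.uniform(-noise, noise)

def team_outcome(n_agents: int, coordinated: bool) -> float:
    """Average true utility realized by the team in one decision round."""
    estimates = [{s: noisy_estimate(s) for s in TRUE_UTILITY} for _ in range(n_agents)]
    if coordinated:
        # Agents pool (average) their estimates and commit to one shared strategy.
        pooled = {s: sum(e[s] for e in estimates) / n_agents for s in TRUE_UTILITY}
        return TRUE_UTILITY[max(pooled, key=pooled.get)]
    # Uncoordinated: each agent simply follows its own noisy best guess.
    return sum(TRUE_UTILITY[max(e, key=e.get)] for e in estimates) / n_agents

runs = 500
for coordinated in (False, True):
    mean_utility = sum(team_outcome(5, coordinated) for _ in range(runs)) / runs
    print(f"coordinated={coordinated}: mean achieved utility = {mean_utility:.3f}")
```
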
28

Safety Engineering of Computational Cognitive Architectures within Safety-Critical Systems

Dreany, Harry Hayes 14 March 2018 (has links)
This paper presents the integration of an intelligent decision support model (IDSM) with a cognitive architecture that controls an autonomous, non-deterministic, safety-critical system. The IDSM integrates multi-criteria decision-making tools via intelligent technologies such as expert systems, fuzzy logic, machine learning, and genetic algorithms.

Cognitive technology is currently simulated within safety-critical systems to highlight variables of interest, interface with intelligent technologies, and provide an environment that improves the system's cognitive performance. In this study, the IDSM is applied to an actual safety-critical system, an unmanned surface vehicle (USV) with embedded artificial intelligence (AI) software. The USV's safety performance is being researched in both a simulated and a real-world maritime environment. The objective is to build a dynamically changing model to evaluate a cognitive architecture's ability to ensure safe performance of an intelligent safety-critical system. The IDSM does this by finding a set of key safety performance parameters that can be critiqued via safety measurements, mechanisms, and methodologies. The uniqueness of this research lies in bounding the decision making associated with the cognitive architecture's key safety parameters (KSPs). Other real-time applications (RTAs) that would benefit from advancing the cognitive science associated with safety are unmanned platforms, transportation technologies, and service robotics. The results will provide cognitive science researchers with a reference for the safety engineering of artificially intelligent safety-critical systems.
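
The abstract lists fuzzy logic among the IDSM's intelligent technologies but gives no rules, so the sketch below is a self-contained toy rather than the dissertation's model: a single fuzzy rule over two hypothetical USV inputs (obstacle distance and speed) producing a degree of "slow down" advisory.

```python
def ramp_down(x: float, lo: float, hi: float) -> float:
    """Membership in a 'low/close' set: 1 at or below lo, 0 at or above hi, linear between."""
    if x <= lo:
        return 1.0
    if x >= hi:
        return 0.0
    return (hi - x) / (hi - lo)

def ramp_up(x: float, lo: float, hi: float) -> float:
    """Membership in a 'high/fast' set: complement of the ramp_down membership."""
    return 1.0 - ramp_down(x, lo, hi)

def slow_down_degree(obstacle_m: float, speed_kn: float) -> float:
    """Rule: IF obstacle is close AND speed is high THEN slow down (min used as fuzzy AND)."""
    close = ramp_down(obstacle_m, 20.0, 100.0)  # fully 'close' at or under 20 m
    fast = ramp_up(speed_kn, 3.0, 12.0)         # fully 'fast' at or above 12 kn
    return min(close, fast)

print(slow_down_degree(obstacle_m=35.0, speed_kn=10.0))  # ~0.78 -> strong slow-down advisory
```
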
29

An Event Management Framework to Aid Solution Providers in Cybersecurity

Leon, Ryan James 15 March 2018 (has links)
Cybersecurity event management is critical to the successful accomplishment of an organization's mission. To put it in perspective, in 2016 Symantec tracked over 700 global adversaries and recorded events from 98 million sensors (Aimoto et al., 2017). Studies show that in 2015, more than 55% of the cyberattacks on government operation centers were due to negligence and the lack of skilled personnel to perform network security duties, including the failure to properly identify events (Ponemon, 2015a). Practitioners are charged with acting as first responders to any event that affects the network. Inconsistencies and errors that occur at this level can determine the outcome of an event.

In a time when 91% of Americans believe they have lost control over how information is collected and secured, there is nothing more dangerous than thinking new technology is not vulnerable to attack (Rainie, 2016). Assailants target those with weak security postures who are unprepared, distracted, or lacking the fundamental elements needed to identify significant events and secure the environment.

To address these concerns, organizations such as the National Institute of Standards and Technology (NIST) and the International Organization for Standardization (ISO) developed, under executive order, cybersecurity frameworks that have been widely accepted as industry standards. These standards focus on business drivers to guide cybersecurity activities and risks within critical infrastructure. They outline a set of cybersecurity activities, references, and outcomes that can be used to align an organization's cyber activities with business requirements at a high level.

This praxis explores the solution provider's role in, and method of, securing environments through event management practices. Solution providers are a critical piece of proper event management. They are often contracted to provide solutions that adhere to a NIST-type framework with little to no guidance. There are supportive documents and guides for event management, but nothing as substantive as the Cybersecurity Framework or ISO 27001 has been adopted. Using existing processes and protocols, an event management framework is proposed that can be used to properly manage events and aid solution providers in their cybersecurity mission.

Knowledge of event management was captured through subject matter expertise and supported through literature review and investigation. Statistical methods were used to identify deficiencies in cyber operations that would be worth addressing in an event management framework.
30

Level of Repair Analysis for the Enhancement of Maintenance Resources in Vessel Life Cycle Sustainment

Marino, Lucas Charles 27 April 2018 (has links)
The United States Coast Guard does not adequately perform a Level of Repair Analysis and Business Case Analysis to align vessel maintenance requirements with available resources during the development of an Integrated Logistics Support Plan (ILSP), which leads to increased vessel sustainment costs and reduced workforce proficiency. The ILSP's maintenance philosophy dictates the designed supportability and maintainability of a vessel throughout its life cycle, yet existing ILSPs fail to prescribe the required balance of Coast Guard (internal employee) and non-Coast Guard (contracted) technicians for the execution of hull, mechanical, and electrical depot maintenance. This research develops a standard model to enhance the balance of maintenance resources using a Business Case Analysis (BCA) extension of a Level of Repair Analysis (LORA), informed by Activity-Based Costing/Management (ABC/M) theory. These analyses are integral to a complete Life Cycle Cost Analysis (LCCA) and will contain a simplified LORA framework that provides an analysis of alternatives, a decision analysis, and a sensitivity analysis for the development of a maintenance philosophy. This repeatable analysis can be used by engineering managers to develop and sustain cost-effective maintenance plans through all phases of a vessel's life cycle (acquisition, sustainment, and disposal).
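
The abstract does not provide the model itself, so the sketch below only illustrates the flavor of cost comparison a LORA/BCA informed by activity-based costing might rest on: building up the cost of each depot maintenance activity under organic (Coast Guard) versus contracted execution. All rates, hours, and overheads are placeholders.

```python
from dataclasses import dataclass

@dataclass
class MaintenanceActivity:
    name: str
    labor_hours: float
    organic_rate: float       # $/hour, internal technician (fully burdened)
    contract_rate: float      # $/hour, contracted technician
    organic_overhead: float   # per-activity cost: training, tooling, travel
    contract_overhead: float  # per-activity cost: contract administration, mobilization

    def organic_cost(self) -> float:
        return self.labor_hours * self.organic_rate + self.organic_overhead

    def contract_cost(self) -> float:
        return self.labor_hours * self.contract_rate + self.contract_overhead

activities = [
    MaintenanceActivity("main diesel engine overhaul", 400, 85, 140, 12_000, 6_000),
    MaintenanceActivity("switchboard inspection",        40, 85, 140,  3_500,   800),
]

for a in activities:
    cheaper = "organic" if a.organic_cost() <= a.contract_cost() else "contract"
    print(f"{a.name}: organic ${a.organic_cost():,.0f} vs contract "
          f"${a.contract_cost():,.0f} -> prefer {cheaper}")
```

A full business case analysis would also weigh workforce proficiency and the sensitivity of these inputs, consistent with the decision and sensitivity analyses the abstract describes.
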
