591. Exploring Critical Infrastructure Single Point of Failure Analysis (SPFA) for Data Center Risk and Change Management
Moore, Michael Ronald. 29 November 2018.
Critical infrastructure (CI) risk management frameworks require the identification of single-point-of-failure risks, but existing reliability assessment methods are not practical for identifying single points of failure in CI systems. The purpose of this study was the development and assessment of a system reliability assessment tool specific to the task of identifying single points of failure in CI systems. Using a series of nested action research cycles, the author developed, applied, and improved a single point of failure analysis (SPFA) tool consisting of a six-step method and a novel single point of failure analysis algorithm, which was used to analyze several CI systems at a participating data center organization for single points of failure. The author explored which components of existing reliability analysis methods can be used for SPFA, how SPFA aligns with CI change and risk management, and the benefits and drawbacks of using the six-step method to perform SPFA. System decomposition, a network tree, stated assumptions, and visual aids were used in the six-step method to perform SPFA. Using the method, the author was able to provide the participating organization with knowledge of single-point-of-failure risks in 2N and N+X redundant systems for use in risk and change management. The author and two other individuals independently performed SPFA on a system and consistently identified the same two components as single points of failure. The method was beneficial in that analysts were able to analyze different types of systems of varying familiarity and to consider common-cause failures and system interdependencies as single points of failure. Drawbacks of the method are its reliance on the ability of the analyst and on the assumptions stated by the analyst. The author recommends that CI organizations use the method to identify single points of failure for risk management and calls on future researchers to investigate the method quantitatively.
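The thesis's six-step method and SPFA algorithm are not reproduced in the abstract. As a rough illustration of the underlying idea only, a CI system can be modeled as a connectivity graph in which a component is a single point of failure if its loss alone disconnects the critical load from every supply source; the sketch below (Python, with an invented topology and component names) flags such components by brute-force removal and reachability checking.

```python
# Illustrative sketch only: the thesis's six-step SPFA method is not reproduced here.
# A CI system is assumed to be modeled as an undirected connectivity graph; a component
# is flagged as a single point of failure (SPF) if removing it disconnects the critical
# load from every supply source.
from collections import deque

def reachable(graph, start, removed):
    """Return the set of nodes reachable from start, ignoring the removed node."""
    if start == removed:
        return set()
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in graph.get(node, []):
            if nbr != removed and nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

def single_points_of_failure(graph, sources, load):
    """Components whose individual failure isolates the load from all sources."""
    spfs = []
    for component in graph:
        if component == load or component in sources:
            continue
        if not any(load in reachable(graph, src, removed=component) for src in sources):
            spfs.append(component)
    return spfs

# Hypothetical "2N-looking" power path whose two utility feeds share one transfer switch.
graph = {
    "utility_A": ["ats"], "utility_B": ["ats"],
    "ats": ["utility_A", "utility_B", "pdu"],
    "pdu": ["ats", "rack"], "rack": ["pdu"],
}
print(single_points_of_failure(graph, sources={"utility_A", "utility_B"}, load="rack"))
# -> ['ats', 'pdu']
```

On this example topology both the shared transfer switch and the single PDU are flagged even though the utility feeds are redundant, which echoes the abstract's point that 2N and N+X systems can still hide single points of failure.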
592. An Agent-Based Model for Improved System of Systems Decision Making in Air Transportation
Esmaeilzadeh, Ehsan. 05 December 2018.
Team collaboration and decision making play a significant role in the overall performance of a complex System of Systems (SoS). Inefficient decisions can result in significant extra cost and reduced overall team performance within organizations, so early and effective evaluation of individual and collaborative team decisions is essential. The degree of utility of alternative strategy designs may vary with conditions, but these conditions are not always considered when decisions are made. Improved Systems Engineering (SE) processes and tools are needed during the system development lifecycle to make decisions and to rapidly evaluate and assess multiple design alternatives, so that the architectural design strategies yielding the highest mission performance can be selected. This research applies the SE principles of design for change and flexibility in an agent-based model that simulates design alternatives for systems analysis and decision making. The model evaluates and selects the decision(s) for an SoS comprised of collaborative teams, resulting in higher mission performance. The proposed agent-based model was applied to test team decision-making improvements within the SoS structure of the air transportation domain. The experimental results indicated significant improvements in decision coordination, decision workload, and overall system performance in the air transportation domain. This research contributes to the agent-based modeling of SoS teams and can assist with evaluating team decisions and improving decision making, an essential element of Systems Engineering.
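The abstract does not describe the model's internals, so the following is only a minimal agent-based sketch of the general idea: agents evaluate design alternatives either independently or by pooling their evaluations, and the simulation compares the resulting mission performance. The alternatives, utilities, team size, and noise levels are all invented for illustration.

```python
# Minimal agent-based sketch, not the thesis model: agents choose among design
# alternatives either independently or collaboratively, and the simulation compares
# the mission performance that results. All numbers are illustrative assumptions.
import random

random.seed(1)
ALTERNATIVES = {"baseline": 0.60, "flexible": 0.75, "modular": 0.70}  # "true" utility

class Agent:
    def __init__(self, noise):
        self.noise = noise  # how imperfectly this agent evaluates an alternative

    def evaluate(self, name):
        return ALTERNATIVES[name] + random.gauss(0, self.noise)

def team_decision(agents, collaborate):
    if collaborate:
        # Pool every agent's evaluation of every alternative, pick the best average.
        scores = {a: sum(ag.evaluate(a) for ag in agents) / len(agents)
                  for a in ALTERNATIVES}
        return max(scores, key=scores.get)
    # Independent: each agent picks alone; the team goes with the majority choice.
    picks = [max(ALTERNATIVES, key=ag.evaluate) for ag in agents]
    return max(set(picks), key=picks.count)

def mission_performance(collaborate, runs=2000):
    agents = [Agent(noise=0.15) for _ in range(5)]
    return sum(ALTERNATIVES[team_decision(agents, collaborate)] for _ in range(runs)) / runs

print("independent  :", round(mission_performance(False), 3))
print("collaborative:", round(mission_performance(True), 3))
```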
593. Safety Engineering of Computational Cognitive Architectures within Safety-Critical Systems
Dreany, Harry Hayes. 14 March 2018.
This paper presents the integration of an intelligent decision support model (IDSM) with a cognitive architecture that controls an autonomous, non-deterministic safety-critical system. The IDSM integrates multi-criteria decision-making tools via intelligent technologies such as expert systems, fuzzy logic, machine learning, and genetic algorithms.

Cognitive technology is currently simulated within safety-critical systems to highlight variables of interest, interface with intelligent technologies, and provide an environment that improves the system's cognitive performance. In this study, the IDSM is applied to an actual safety-critical system, an unmanned surface vehicle (USV) with embedded artificial intelligence (AI) software. The USV's safety performance is being researched in both a simulated and a real-world maritime environment. The objective is to build a dynamically changing model to evaluate a cognitive architecture's ability to ensure safe performance of an intelligent safety-critical system. The IDSM does this by finding a set of key safety performance parameters that can be critiqued via safety measurements, mechanisms, and methodologies. The uniqueness of this research lies in bounding the decision making associated with the cognitive architecture's key safety parameters (KSPs). Other real-time applications (RTAs) that would benefit from advancing the cognitive science associated with safety are unmanned platforms, transportation technologies, and service robotics. The results will provide cognitive science researchers with a reference for the safety engineering of artificially intelligent safety-critical systems.
594. An Event Management Framework to Aid Solution Providers in Cybersecurity
Leon, Ryan James. 15 March 2018.
Cybersecurity event management is critical to the successful accomplishment of an organization's mission. To put this in perspective, in 2016 Symantec tracked over 700 global adversaries and recorded events from 98 million sensors (Aimoto et al., 2017). Studies show that in 2015, more than 55% of the cyberattacks on government operation centers were due to negligence and a lack of skilled personnel to perform network security duties, including the failure to properly identify events (Ponemon, 2015a). Practitioners are charged with acting as first responders to any event that affects the network, and inconsistencies and errors that occur at this level can determine the outcome of an event.

At a time when 91% of Americans believe they have lost control over how their information is collected and secured, there is nothing more dangerous than thinking new technology is not vulnerable to attack (Rainie, 2016). Assailants target those with weak security postures who are unprepared, distracted, or lack the fundamental elements needed to identify significant events and secure the environment.

To address these concerns, organizations such as the National Institute of Standards and Technology (NIST) and the International Organization for Standardization (ISO) developed cybersecurity frameworks under executive order, and these have been widely accepted as industry standards. These standards focus on business drivers to guide cybersecurity activities and risk management within critical infrastructure, outlining a set of cybersecurity activities, references, and outcomes that can be used to align an organization's cyber activities with business requirements at a high level.

This praxis explores the solution provider's role in, and method of, securing environments through their event management practices. Solution providers are a critical piece of proper event management. They are often contracted to provide solutions that adhere to a NIST-type framework with little to no guidance. There are supporting documents and guides for event management, but nothing as substantive as the Cybersecurity Framework or ISO 27001 has been adopted. Using existing processes and protocols, an event management framework is proposed that can be used to properly manage events and aid solution providers in their cybersecurity mission.

Knowledge of event management was captured through subject matter expertise and supported through literature review and investigation. Statistical methods were used to identify deficiencies in cyber operations worth addressing in an event management framework.
595. Level of Repair Analysis for the Enhancement of Maintenance Resources in Vessel Life Cycle Sustainment
Marino, Lucas Charles. 27 April 2018.
The United States Coast Guard does not adequately perform a Level of Repair Analysis and Business Case Analysis to align vessel maintenance requirements with available resources during the development of an Integrated Logistics Life Cycle Plan (ILSP), which leads to increased vessel sustainment costs and reduced workforce proficiency. The ILSP's maintenance philosophy dictates the designed supportability and maintainability of a vessel throughout its life cycle, yet existing ILSPs fail to prescribe the required balance of Coast Guard (internal employee) and non-Coast Guard (contracted) technicians for the execution of hull, mechanical, and electrical depot maintenance. This research develops a standard model to enhance the balance of maintenance resources using a Business Case Analysis (BCA) extension of a Level of Repair Analysis (LORA) informed by Activity-Based Costing Management (ABC/M) theory. These analyses are integral to a complete Life Cycle Cost Analysis (LCCA), and the model contains a simplified LORA framework that provides an analysis of alternatives, a decision analysis, and a sensitivity analysis for the development of a maintenance philosophy. This repeatable analysis can be used by engineering managers to develop and sustain cost-effective maintenance plans through all phases of a vessel's life cycle (acquisition, sustainment, and disposal).
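As a hedged illustration of the kind of trade a LORA/BCA supports (not the research's model), the sketch below compares an activity-based cost of performing depot tasks with organic (internal) technicians against a contracted rate. Every task name, labor estimate, rate, and overhead figure is invented for the example.

```python
# Hypothetical sketch of the comparison a LORA/BCA supports: for each depot maintenance
# task, compare an activity-based in-house cost against a contracted cost and pick the
# cheaper maintenance source. All figures are invented placeholders.
TASKS = [
    # (task, labor_hours, organic_rate, contract_rate, contract_overhead)
    ("main diesel engine overhaul", 400, 85.0, 140.0, 5000.0),
    ("switchboard refurbishment",   120, 90.0, 130.0, 1500.0),
    ("hull preservation",           300, 70.0,  95.0, 2000.0),
]

def cheaper_source(labor_hours, organic_rate, contract_rate, contract_overhead):
    organic = labor_hours * organic_rate                    # activity-based in-house cost
    contract = labor_hours * contract_rate + contract_overhead
    return ("organic" if organic <= contract else "contract"), organic, contract

for task, hrs, o_rate, c_rate, c_oh in TASKS:
    source, organic, contract = cheaper_source(hrs, o_rate, c_rate, c_oh)
    print(f"{task:32s} organic=${organic:>9,.0f}  contract=${contract:>9,.0f}  -> {source}")
```

A sensitivity analysis of the kind the abstract mentions would then vary the rates and overheads to see where the recommended source flips.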
596. An Evaluation of the Relationship Between Critical Technology Developments and Technology Maturity
Peters, Wanda Carter. 26 October 2017.
The research presented in this dissertation investigates the relationship between critical technologies and technology maturity assessments at a key decision point in the product development life cycle. The study uses statistical methods to assess technology maturity at a key decision point, and a regression model is established and used to predict the probability of a system achieving technology maturity. The study showed, with 95% confidence, that there is statistical evidence that the use of heritage technology developments, as originally designed, significantly increases the probability of achieving technology maturity at a key decision point. This finding is significant because of the potential for engineers to overestimate technology maturity when using heritage designs. One challenge facing systems engineers is quantifying the impact technology developments have on technology maturity assessments, especially when transitioning from formulation to implementation. Correctly assessing the maturity of a technology is crucial to an organization's ability to manage performance, cost, and schedule. The findings from this research have the potential to reduce unacceptable or unsatisfactory technical performance and programmatic overruns by minimizing inaccurate maturity determinations.
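The dissertation's data and fitted model are not reproduced here; the sketch below only illustrates the general form of such an analysis, fitting a logistic regression on synthetic data to estimate the probability of achieving technology maturity given a single binary heritage-use predictor. All data and coefficients are fabricated for illustration.

```python
# Illustrative sketch only: a logistic regression predicting the probability that a
# technology reaches maturity at a key decision point from one binary predictor
# (heritage design used as originally designed). The data are synthetic.
import math, random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic observations: (heritage_used, matured), with heritage raising the odds.
data = [(h, 1 if random.random() < (0.85 if h else 0.55) else 0)
        for h in [random.randint(0, 1) for _ in range(400)]]

b0, b1, lr = 0.0, 0.0, 0.5
for _ in range(3000):                      # plain gradient ascent on the log-likelihood
    g0 = sum(y - sigmoid(b0 + b1 * x) for x, y in data) / len(data)
    g1 = sum((y - sigmoid(b0 + b1 * x)) * x for x, y in data) / len(data)
    b0 += lr * g0
    b1 += lr * g1

print(f"P(mature | new design)      = {sigmoid(b0):.2f}")
print(f"P(mature | heritage design) = {sigmoid(b0 + b1):.2f}")
print(f"odds ratio for heritage use = {math.exp(b1):.2f}")
```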
597. Flexibility in early stage design of US Navy ships: an analysis of options
Page, Jonathan Edward. January 2011.
Thesis (Nav. E.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering; and (S.M. in Engineering and Management)--Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2011. Includes bibliographical references (p. 75-77).

This thesis explores design options for naval vessels and provides a framework for analyzing their benefit to the Navy. Future demands on Navy warships, such as new or changing missions and capabilities, are unknown at the time of a ship's design. Therefore, ships often require costly engineering changes throughout their service life. These changes are expensive both fiscally, because the Navy pays for the engineering and installation work, and operationally, because a warship either cannot carry out a desired mission or is carrying out a mission for which it was not initially designed. One method of addressing uncertainty in capital assets is to embed flexibilities in their architecture. The thesis offers early stage design suggestions on flexibilities for naval platforms to incorporate pre-planned repeats of the platform with new or different missions. A conceptual platform created for this work, the SCAMP, includes each of these suggestions in its architecture. The thesis then uses an analysis framework similar to real options to evaluate the value of including these expansion options in early stage design versus traditional design methods and their products. The analysis uses a version of the MIT Cost Model for early stage ship design to determine acquisition and life cycle costs. The model was modified to support this analysis by allowing a simulation of possible mission changes, with their severity distributed stochastically over a realistic time horizon; the model then calculates the effects on life cycle cost. The most important result is the value of the framework for evaluating these managerial options, which can be extended to the subsystem level or to the system-of-systems level. In this application, the model predicts that, on average, a flexible platform should not only cost less to build but also reduce modernization costs by 9% per ship over its life cycle. Therefore, counterintuitively, building a less-capable ship with the flexibility to expand capabilities or switch missions actually provides greater expected utility during its service life.
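The MIT Cost Model itself is not reproduced in the abstract; the toy Monte Carlo below only illustrates the style of analysis described, in which mission changes arrive stochastically over a service life and a flexible design trades build cost against modernization cost. The costs, change rate, and severity distribution are invented assumptions, with the flexible design given a slightly lower build cost to mirror the abstract's stated result.

```python
# Toy Monte Carlo in the spirit of the thesis's framing (not the MIT Cost Model):
# mission changes arrive randomly over a 30-year service life, and a flexible design
# pays less per modernization. All costs are invented placeholders in arbitrary units.
import random

random.seed(42)

def life_cycle_cost(build_cost, mod_cost_mean, service_years=30,
                    p_change_per_year=0.10, trials=20000):
    totals = []
    for _ in range(trials):
        cost = build_cost
        for _ in range(service_years):
            if random.random() < p_change_per_year:          # a mission change occurs
                severity = random.lognormvariate(0.0, 0.5)   # stochastic severity
                cost += mod_cost_mean * severity
        totals.append(cost)
    return sum(totals) / trials

traditional = life_cycle_cost(build_cost=1000.0, mod_cost_mean=120.0)
flexible    = life_cycle_cost(build_cost=950.0,  mod_cost_mean=60.0)

print(f"traditional design, expected life-cycle cost: {traditional:7.1f}")
print(f"flexible design,    expected life-cycle cost: {flexible:7.1f}")
```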
598. An ecosystem strategy for Intel Corporation to drive adoption of its embedded solutions
Paliwal, Prashant. January 2010.
Thesis (S.M. in Engineering and Management)--Massachusetts Institute of Technology, Engineering Systems Division, 2010. Includes bibliographical references (p. 50).

With time, successful companies grow to create a network of partners and stakeholders that work very closely with them. The survival and growth of these companies depend on this ecosystem around them, and the ecosystem thrives on stakeholders benefiting from one another while contributing to the growth of the ecosystem itself. Every now and then, the growth of such large companies, with powerful ecosystems of their own, is disrupted by relatively small players, and the incumbents have to respond. Intel, the world's largest semiconductor company, has seen tremendous growth in its business since its inception. While Intel focused on continuously innovating and delivering great products for the personal computer industry, it chose not to compete in low-margin embedded computing markets. Advanced RISC Machines (ARM Holdings Ltd.), a small semiconductor company in the early nineties, developed an architecture for low-power embedded computing markets that, over time, became the dominant architecture for mobile computing. As demand in the personal computer industry and consumer interest shifted toward portable and mobile computers, Intel delivered products for these markets. In recent years Intel, the incumbent, has been threatened by ARM, the disruptor, because mobile embedded platforms based on the ARM architecture have encroached on Intel's territory. At the same time, Intel has its sights on the high-growth embedded markets dominated by ARM. Today, both of these players, with their mature ecosystems, face each other as they try to enter each other's territories. This thesis analyzes this classic battle between Intel and ARM for ecosystem leadership in embedded markets. Software and platform leadership are analyzed in detail, and an ecosystem strategy for Intel to drive adoption of its embedded solutions is devised in the later chapters.
599. Development of an Integrity Evaluation System for Wells in Carbon Sequestration Fields
Li, Ben. 03 February 2016.
Carbon sequestration is a promising solution for mitigating the accumulation of greenhouse gases, and depleted oil and gas reservoirs are desirable vessels for it. It is crucial to maintain the sealing ability of carbon sequestration fields with high concentrations of CO2.

A systematic well integrity evaluation system has been developed and validated for carbon sequestration fields. The system consists of 1) a newly developed analytical model for assessing cement sheath integrity under various operating conditions, 2) quantifications of well parameters contributing to the probability of well leakage, and 3) a genetic-neural network algorithm for data analysis and well-leakage probability assessment.

A wellbore system consists of the well casing, the cement sheath, and the formation rock. A new analytical stress model was developed that solves for the stresses in the casing-cement sheath-formation system loaded by isotropic and anisotropic horizontal in-situ stresses. Further analyses with the analytical model reveal that the Young's modulus of the cement sheath is a major factor in its sealing ability, while Poisson's ratio and cohesion play less important roles. The cement sheath in a shale formation exhibits higher sealing ability than that in a sandstone formation, and the sealing ability of weak cement is higher than that of strong cement.

Descriptive quantifications of well parameters were made in this study to analyze their effect on the probability of well leakage. These parameters include cement placement relative to aquifers and fluid reservoir zones, cement type, cement sheath integrity under operating conditions, well aging, and well plugging conditions; it is the combination of these parameters that controls the probability of well leakage. A significant proportion of wells were identified as risky wells in the two fields studied. It is concluded that the trained neural network model can be used to predict well leakage risk over the CO2 sequestration lifespan, supporting prevention activities and mitigation for wells at risk of CO2 leakage.
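The thesis's genetic-neural network is not available from the abstract. As a heavily simplified, hypothetical sketch of that class of method, the code below uses a small genetic algorithm to evolve the weights of a single-neuron (logistic) model that maps a few assumed well parameters to a leakage probability. The features, synthetic data, and all numeric values are illustrative assumptions, not the study's quantifications.

```python
# Heavily simplified sketch, not the thesis's genetic-neural network: a genetic
# algorithm evolves the weights of a one-neuron logistic model on synthetic well data.
import math, random

random.seed(3)

def synthetic_well():
    cement_quality = random.random()       # 0 = poor cement job, 1 = good (assumed feature)
    age_norm       = random.random()       # 0 = new well, 1 = old well
    plugged        = random.randint(0, 1)  # 1 if properly plugged and abandoned
    risk = 0.7 * (1 - cement_quality) + 0.5 * age_norm - 0.4 * plugged
    leaked = 1 if random.random() < 1 / (1 + math.exp(-4 * (risk - 0.3))) else 0
    return (cement_quality, age_norm, plugged), leaked

DATA = [synthetic_well() for _ in range(500)]

def predict(weights, features):
    bias, *w = weights
    z = bias + sum(wi * f for wi, f in zip(w, features))
    return 1 / (1 + math.exp(-z))

def fitness(weights):                      # negative log-loss: higher is better
    eps, loss = 1e-9, 0.0
    for x, y in DATA:
        p = min(max(predict(weights, x), eps), 1 - eps)
        loss -= y * math.log(p) + (1 - y) * math.log(1 - p)
    return -loss

def evolve(pop_size=40, generations=60):
    pop = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 4]                     # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            children.append([(wa + wb) / 2 + random.gauss(0, 0.2)  # crossover + mutation
                             for wa, wb in zip(a, b)])
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
risky_well = (0.2, 0.9, 0)   # poor cement, old, unplugged (hypothetical values)
safe_well  = (0.9, 0.1, 1)   # good cement, young, properly plugged
print(f"P(leak | risky well) = {predict(best, risky_well):.2f}")
print(f"P(leak | safe well)  = {predict(best, safe_well):.2f}")
```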
600. NOVA: Nottingham Off-road Vehicle Architecture
Strachan, Jamie Robert. January 2009.
This thesis describes a program of research aimed at the creation of an unmanned ground vehicle. In this research the Nottingham Off-road Vehicle Architecture (NOVA) was developed along with the ARP (Autonomous Route Proving) vehicle. NOVA is a control architecture for a vehicle whose role is autonomous route proving in natural terrain, and the ARP vehicle was constructed to demonstrate this architecture. NOVA includes all the competences required for the ARP vehicle to be deployed in unknown outdoor environments; the architecture embodies systems for vehicle localisation, autonomous navigation, and obstacle avoidance. The localisation system fuses data from absolute and relative localisation equipment: GPS provides the absolute position of the ARP vehicle, while relative position information is derived from wheel encoders and a pose sensor. NOVA uses a probabilistic technique known as a particle filter to combine the two position estimates. NOVA also maintains a local obstacle map based on range data generated by the perception sensors on the ARP vehicle. Analysis is performed on this map to find any untraversable terrain, and a local path planner then selects the best path for the vehicle to follow using the map. Decisions made by the path planner are recorded to allow the vehicle to backtrack and try another path if NOVA later finds the chosen route is blocked. NOVA has been extensively tested onboard the ARP vehicle, and results from a series of experiments are presented to validate the various parts of the architecture.
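The NOVA implementation is not reproduced in the abstract; the sketch below is a generic one-dimensional particle filter of the kind described, fusing an odometry-style relative motion estimate with a noisy GPS-like absolute fix. The noise levels, step size, and particle count are assumptions chosen only for illustration.

```python
# Minimal 1-D particle filter sketch (not the NOVA implementation): odometry-style
# relative motion drives the prediction step, and a noisy GPS-like absolute fix
# drives the weighting and resampling step. Noise levels are illustrative assumptions.
import math, random

random.seed(7)
N = 500                                                    # number of particles
particles = [random.uniform(0.0, 10.0) for _ in range(N)]  # unknown start position
true_pos = 2.0

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

for step in range(20):
    true_pos += 1.0                                        # vehicle moves 1 m per step
    odometry = 1.0 + random.gauss(0, 0.1)                  # wheel-encoder motion estimate
    gps = true_pos + random.gauss(0, 0.8)                  # noisy absolute fix

    # Predict: propagate every particle by the odometry reading plus motion noise.
    particles = [p + odometry + random.gauss(0, 0.1) for p in particles]

    # Update: weight particles by how well they explain the GPS measurement.
    weights = [gaussian(gps, p, 0.8) for p in particles]
    total = sum(weights) or 1e-12
    weights = [w / total for w in weights]

    # Resample: draw a new particle set in proportion to the weights.
    particles = random.choices(particles, weights=weights, k=N)

    estimate = sum(particles) / N
    if step % 5 == 0:
        print(f"step {step:2d}: true={true_pos:5.2f}  estimate={estimate:5.2f}")
```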