1

An Agile Framework to Develop Safety Critical Software for Aircraft

Bacon, Duane Lee, 12 February 2019
Industries have discovered significant improvements in quality, productivity, and cost by applying Agile principles during the software development life cycle. The aerospace industry, however, has been slow to adopt Agile for safety-critical software, primarily because DO-178C has been interpreted as prescribing Waterfall development (VanderLeest & Buter, 2009). This work introduces the advantages of Agile and posits that Agile can meet DO-178C considerations. A literature review conducted herein makes the case that Agile is a significantly better approach than Waterfall for software development. The review also outlines some of the challenges Agile faces in large software development programs and indicates how these challenges can be addressed. This work provides an Agile framework and demonstrates how it meets the objectives of DO-178C for safety-critical software development. The framework provides alternate approaches to some DO-178C development activities, such as Stages of Involvement. The analysis demonstrates that DO-178C does not require a Waterfall approach and that safety-critical software can, and should, be developed using more modern approaches such as Agile.
2

An Event Management Framework to Aid Solution Providers in Cybersecurity

Leon, Ryan James, 15 March 2018
Cybersecurity event management is critical to the successful accomplishment of an organization's mission. To put it in perspective, in 2016 Symantec tracked over 700 global adversaries and recorded events from 98 million sensors (Aimoto et al., 2017). Studies show that in 2015, more than 55% of the cyberattacks on government operation centers were due to negligence and a lack of skilled personnel to perform network security duties, including the failure to properly identify events (Ponemon, 2015a). Practitioners are charged with acting as first responders to any event that affects the network. Inconsistencies and errors that occur at this level can determine the outcome of an event.

In a time when 91% of Americans believe they have lost control over how information is collected and secured, there is nothing more dangerous than assuming new technology is not vulnerable to attack (Rainie, 2016). Assailants target those with weak security postures who are unprepared, distracted, or lack the fundamental elements needed to identify significant events and secure the environment.

To address these concerns, organizations such as the National Institute of Standards and Technology (NIST) and the International Organization for Standardization (ISO) developed, under executive order, cybersecurity frameworks that have been widely accepted as industry standards. These standards focus on business drivers to guide cybersecurity activities and risk management within critical infrastructure, outlining a set of cybersecurity activities, references, and outcomes that an organization can use to align its cyber activities with business requirements at a high level.

This praxis explores the solution provider's role in and method of securing environments through event management practices. Solution providers are a critical piece of proper event management: they are often contracted to provide solutions that adhere to a NIST-type framework with little to no guidance. There are supportive documents and guides for event management, but nothing as substantive as the Cybersecurity Framework or ISO 27001 has been adopted. Using existing processes and protocols, an event management framework is proposed that can be utilized to properly manage events and aid solution providers in their cybersecurity mission.

Knowledge of event management was captured through subject matter expertise and supported through literature review and investigation. Statistical methods were used to identify deficiencies in cyber operations worth addressing in an event management framework.
3

The Role of Canalization in the Spreading of Perturbations in Boolean Networks

Manicka, Santosh Venkatiah Sudharshan, 26 May 2017
Canalization is a property of Boolean automata that characterizes the extent to which subsets of inputs determine (canalize) the output. Here, we investigate the role of canalization as a characteristic of perturbation spreading in random Boolean networks (BNs) with homogeneous connectivity via numerical simulations. We consider two measures of canalization introduced by Marques-Pita and Rocha, namely 'effective connectivity' and 'input symmetry', in a three-pronged approach. First, we show that the mean effective connectivity, a measure of the true mean in-degree of a BN, is a better predictor of the dynamical regime (order or chaos) of the BN than the mean in-degree. Next, we combine effective connectivity and input symmetry in a single measure of 'unified canalization' by using a common yardstick of Boolean hypercube 'dimension', a form of fractal dimension. We show that the unified measure is a better predictor of dynamical regime than effective connectivity alone for BNs with large in-degrees. When considered separately, the relative contributions of the two components of the unified measure change systematically with the mean in-degree, with input symmetry becoming increasingly dominant at larger in-degrees. As an application, we show that these measures of canalization characterize the dynamical regimes of a suite of systems biology models better than the in-degree does. Finally, we introduce 'integrated effective connectivity' as an extension of effective connectivity that characterizes the canalization present in BNs with arbitrary timescales, obtained by iteratively composing a BN with itself. We show that the integrated measure is a better predictor of long-term dynamical regime than effective connectivity alone for a small class of BNs known as the elementary cellular automata.

This dissertation advances the theoretical understanding of BNs, allowing us to more accurately predict their short-term and long-term dynamical character based on canalization. As BNs are generic models of complex systems, combining interaction graphs with multivariate dynamics, these results contribute to the complex networks and systems field. Moreover, as BNs are important models of choice in systems biology, our methods contribute to the burgeoning toolkit of the field.
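The 'effective connectivity' idea above can be illustrated with a much simpler proxy: counting how many inputs of a Boolean function actually influence its output. The sketch below is illustrative only; the exhaustive flip test and function names are our own simplification, not the Marques-Pita and Rocha formulation, which weights partially redundant inputs rather than merely counting essential ones.

```python
from itertools import product

def essential_inputs(truth_table, k):
    """Count inputs that actually influence the output of a k-input
    Boolean function, given as a dict mapping input tuples to 0/1.

    An input is 'essential' if flipping it changes the output for at
    least one input state; inputs that never matter are fully canalized
    away and do not contribute to the function's true connectivity."""
    essential = 0
    for i in range(k):
        for state in product((0, 1), repeat=k):
            flipped = list(state)
            flipped[i] ^= 1  # toggle input i
            if truth_table[state] != truth_table[tuple(flipped)]:
                essential += 1
                break  # input i matters; move to the next input
    return essential

# XOR of the first two inputs; the third input is ignored,
# so only 2 of the 3 nominal inputs are essential.
xor3 = {s: s[0] ^ s[1] for s in product((0, 1), repeat=3)}
print(essential_inputs(xor3, 3))  # → 2
```

In a BN, averaging such a per-node quantity over all nodes gives a 'true' mean in-degree that can differ sharply from the nominal wiring diagram, which is the intuition behind using effective connectivity as a regime predictor.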
4

ALE Analytics: A Software Pipeline and Web Platform for the Analysis of Microbial Genomic Data from Adaptive Laboratory Evolution Experiments

Phaneuf, Patrick, 28 December 2016
Adaptive Laboratory Evolution (ALE) methodologies are used to study microbial adaptive mutations that optimize host metabolism. The Systems Biology Research Group (SBRG) at the University of California, San Diego, has implemented high-throughput ALE experiment automation that enables the group to expand its experimental evolutions to scales previously infeasible with manual workflows. The data generated by this automation now requires a post-processing, content management, and analysis framework that can operate at the same scale. We developed a software system that addresses the SBRG's specific ALE big-data-to-knowledge challenges. The system comprises a post-processing protocol for quality control, a software framework and database for data consolidation, and a web platform named ALE Analytics for report generation and automated key mutation analysis. The automated key mutation analysis is evaluated against published ALE experiment key mutation results from the SBRG and maintains an average recall of 89.6% and an average precision of 71.2%. The consolidation of all ALE experiments into a unified resource has enabled the development of web applications that compare key mutations across multiple experiments. These features find the genomic regions rph, hns/tdk, rpoB, rpoC, and pykF mutated in more than one ALE experiment published by the SBRG. We reason that leveraging this software system relieves the bottleneck in ALE experiment analysis and generates new data mining opportunities for research into the system-level mechanisms that govern adaptive evolution.
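The recall and precision figures above come down to set comparison between automatically flagged mutations and a published key-mutation list. A minimal sketch of that evaluation, assuming gene-level calls (the helper and the example gene sets below are hypothetical, not the SBRG pipeline's actual data):

```python
def precision_recall(predicted, published):
    """Compare automatically flagged key mutations against a published set.

    precision = fraction of flagged mutations that are real key mutations;
    recall    = fraction of published key mutations that were flagged."""
    predicted, published = set(predicted), set(published)
    true_positives = len(predicted & published)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(published) if published else 0.0
    return precision, recall

# Hypothetical gene-level calls vs. a published key-mutation list.
pred = {"rph", "hns/tdk", "rpoB", "pykF", "araC"}
pub = {"rph", "hns/tdk", "rpoB", "rpoC", "pykF"}
p, r = precision_recall(pred, pub)
print(round(p, 2), round(r, 2))  # → 0.8 0.8
```

Averaging these two numbers across every published experiment in the consolidated database is what yields summary figures like the 89.6% recall and 71.2% precision reported above.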
5

Development of Unintended Radiated Emissions (URE) Threat Identification System

Friedel, Joseph E., 26 April 2018
There is an ongoing requirement for faster, more accurate, and easier-to-implement threat identification systems for concealed electronics, to thwart terrorism and espionage attempts. Common electronic devices are used in the design of improvised explosive devices (IEDs) that target military and civilian populations alike, while concealed recording devices illegally capture proprietary and confidential data, compromising both governmental and industrial information resources. This research proposes a unique nonintrusive, repeatable, reliable, and scalable D&I system for identifying threat devices by their unintended radiated emissions (URE). Only a passive URE system, as opposed to active or hybrid systems, is appropriate for bomb detection or human interrogation, since it emits no potentially hazardous energy. Additionally, the proposed system is distinctive in its simplicity, allowing rapid implementation and easy expandability. Finally, validation testing is provided to demonstrate the system's reliability and repeatability.

URE refers to the electromagnetic emissions that active electronic equipment, such as radios and cellphones, radiates. URE is analogous to a human fingerprint, since on a microscopic level every URE signature is unique. However, same-type electronic devices emit similar radiation, and electronics of the same model have almost identical radio frequency signatures. URE signatures can change with device settings, such as the channel on a radio or Airplane versus Clock mode on a cell phone. This uniqueness of URE data per device setting enables URE to be used to determine the mode of an operational electronic device. The characteristics of URE make it suitable for explosive ordnance detection (EOD) and for applications such as quality control in manufacturing, electronics troubleshooting, device identification for inventory, and detection of prohibited hidden electronics.

The proposed D&I process also addresses the big data problems involved in capturing URE data and building a database of URE characteristics for identification. Issue interpretation is applied to the URE data to distinguish between threat and non-threat electronic devices, using multiple criteria decision analysis (MCDA) and decision-making techniques to determine the type, model, and mode of hidden devices. The outlined URE data handling methods and the specified decision analysis techniques for URE data processing are further unique contributions of this research.

Optimization, verified by testing, is used to improve the speed and accuracy of the identification decision algorithm. The developed system is validated with URE data from 166 devices representative of IED and espionage threats, but it is extendable to all URE D&I applications, such as quality assurance, inventory, and smart applications. Given the immaturity of the URE D&I field and the lack of documentation on the topic, the properties and potential of this more effective D&I system, compared with current methods, will be of interest to the explosive ordnance disposal, security service, electronic system manufacturing, automated inventory, and mobile application development communities, and potentially others as well.
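One plausible way to frame the identification step described above is as nearest-neighbor matching of an unknown emission spectrum against a library of per-device, per-mode signatures. The sketch below uses cosine similarity and is purely illustrative; the dissertation's actual MCDA-based decision algorithm is more involved, and all names, signature vectors, and threshold values here are our own assumptions.

```python
import math

def normalized_correlation(sig_a, sig_b):
    """Cosine similarity between two emission magnitude spectra
    sampled on the same frequency grid (1.0 = identical shape)."""
    dot = sum(a * b for a, b in zip(sig_a, sig_b))
    norm_a = math.sqrt(sum(a * a for a in sig_a))
    norm_b = math.sqrt(sum(b * b for b in sig_b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def identify(unknown, library, threshold=0.9):
    """Return the best-matching (device, mode) label from the signature
    library, or None if no candidate clears the match threshold."""
    best = max(library, key=lambda label: normalized_correlation(unknown, library[label]))
    score = normalized_correlation(unknown, library[best])
    return best if score >= threshold else None

# Toy signature library: one spectrum per (device, mode) pair.
library = {
    ("radio-A", "ch1"): [0.1, 0.9, 0.2, 0.05],
    ("phone-B", "clock"): [0.8, 0.1, 0.1, 0.6],
}
print(identify([0.12, 0.85, 0.25, 0.04], library))  # → ('radio-A', 'ch1')
```

Because signatures vary with device mode, the same matching machinery can report not just *which* device is hidden but *what it is doing*, which is the property the abstract highlights.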
6

Optimal control and analysis of bulk service queueing systems

Han, Youngnam, 01 January 1992
Queueing theory has been successfully and extensively applied to the scheduling, control, and analysis of complex stochastic systems. In this dissertation, problems of optimal scheduling, control, and analysis of bulk service queueing systems are studied. A dynamic programming formulation is provided for the optimal service strategy of a two-server bulk queue. An extension of the general bulk service rule is shown to be optimal in the sense of minimizing either the finite discounted or the average waiting cost. It is shown that the optimal dispatching rule is of a multi-stage threshold type, where servers are dispatched only when the number of waiting customers exceeds certain threshold values that depend on both the number of waiting customers and the number of servers available at decision epochs. It is conjectured that the result extends to the case of more than two servers. An exact analysis of the equilibrium state probabilities is carried out under the optimal policy obtained for a queue with two bulk servers. The optimal threshold policy is evaluated by comparing a single-stage versus a two-stage threshold two-server system. By calculating the mean number of customers waiting in the queue for both systems, it is shown that a two-stage threshold policy outperforms the general bulk service rule under any operating condition. Examples for different parameter sets are provided. A network of two bulk service queues served by a common transport carrier with finite capacity is analyzed, where the general bulk service rule is applied only at one queue. Decomposition is employed to provide an exact analysis of the steady-state probability distribution, the mean waiting time distribution, and the mean number of customers waiting at both queues in equilibrium. Networks of more than two bulk service queues can be analyzed by direct extension of the methodology. An optimization procedure for the optimal threshold value that minimizes total mean waiting cost is also discussed.
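The multi-stage threshold dispatching rule described above can be sketched as a simple decision function: the i-th available server is dispatched only when the queue length exceeds the i-th threshold. The threshold values and function name below are arbitrary placeholders for illustration, not results derived in the dissertation.

```python
def dispatch_decision(queue_len, idle_servers, thresholds=(3, 7)):
    """Multi-stage threshold rule (illustrative): with idle_servers
    servers available, dispatch the i-th one only if the number of
    waiting customers exceeds thresholds[i]. Returns how many servers
    to dispatch at this decision epoch."""
    dispatched = 0
    for i in range(idle_servers):
        if queue_len > thresholds[i]:
            dispatched += 1
    return dispatched

# With thresholds (3, 7): a queue of 5 dispatches only the first
# server; a queue of 10 is long enough to dispatch both.
print(dispatch_decision(5, 2))   # → 1
print(dispatch_decision(10, 2))  # → 2
```

The point of the two-stage structure is that the second server is held back for short queues, trading a slightly longer wait now against the ability to serve a larger batch later, which is why it can beat the single-threshold general bulk service rule on mean waiting cost.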
