211

An Automated System for Rapid and Secure Device Sanitization

LaBarge, Ralph S. 24 June 2014 (has links)
Public and private organizations face the challenge of protecting their networks from cyber-attacks while reducing the amount of time and money spent on Information Technology. Organizations can reduce their expenditures by reusing server, switch and router hardware, but they must use reliable and efficient methods of sanitizing these devices before they can be redeployed. The sanitization process removes proprietary, sensitive or classified data, as well as persistent malware, from a device prior to reuse. The Johns Hopkins University Applied Physics Laboratory has developed an automated, rapid, and secure method for sanitizing servers, switches and routers. This sanitization method was implemented and tested on several different types of network devices during the Cyber Measurement & Analysis Center project, which was funded under Phases I and II of the DARPA National Cyber Range program. The automated sanitization system performed well, reducing the time required to sanitize servers, routers and switches by an order of magnitude and significantly improving the effectiveness of the sanitization process through the addition of persistent malware removal.
212

The Effect of Mission Assurance on ELV Launch Success Rate: An Analysis of Two Management Systems for Launch Vehicles

Leung, Raymond 03 June 2014 (has links)
There are significant challenges involved in regulating the growing commercial human spaceflight industry. The safety of the crew and passengers should be protected; however, care should be taken not to overburden the industry with too many, too stringent, or perhaps inapplicable regulations.

An improvement in launch success would improve the safety of the crew and passengers. This study explores the effectiveness of Mission Assurance policies to guide regulations and standards. Because there is a severe lack of data on commercial human space flights, a direct test of effectiveness using historical commercial human space flight data is not possible. Instead, historical data on current expendable commercial launchers are used. The National Aeronautics and Space Administration (NASA) has strong Mission Assurance policies for its launches of civil payloads, whereas the regulations applied to commercial launches by the Office of Commercial Space Transportation at the Federal Aviation Administration (FAA/AST) are more safety oriented.

A comparison of launches between NASA and the FAA/AST is used to gauge the effectiveness of Mission Assurance policies on launch success. Variables between the two agencies are reduced so that Mission Assurance policies are isolated as the main difference between launches. Scenarios pertinent to commercial human space flight are used so that the results remain applicable.
213

The energy delivery paradigm

Bukowski, Stephen A. 12 March 2015 (has links)
A sustainable world is one in which human needs are met equitably without harm to the environment and without sacrificing the ability of future generations to meet their needs. Electrical energy is one such need, but neither its production nor its utilization is equitable or harmless. The growth of electricity availability, and the way electricity is used in industrialized nations, has established a dichotomy between usage and sustainability. This dichotomy is best illuminated by the current "just-in-time" approach, in which excess generation capacity is installed so that consumer load can be met instantaneously at all times. Today in the United States, electricity generation capacity is approximately 3.73 kW per person, versus 3.15 kW per person in 2002 [1][2]. At this level of installed capacity, the entire world would need approximately 25.5 TW of generation, roughly 12,250 Hoover Dams, and would have to add about 766 MW of capacity every day [3]. This unsustainable trend is further exacerbated by the fact that consumers have little vested incentive to keep electricity generation sustainable, because producers shoulder the burden of instantaneously meeting demand.

What is needed are paradigms that make these resources economically sustainable. The opportunity provided by the smart grid is lost if we simply automate existing paradigms; it is new paradigms that the smart grid should enable. This dissertation examines a new paradigm that shifts the problem toward 'energy delivery' rather than 'power delivery' for economic sustainability. The shift from a just-in-time power model to an energy delivery model represents a fundamental change in approach relative to the research happening today.

The energy delivery paradigm introduces the concept of a producer providing electrical energy to a system at a negotiated cost and within power limits, leaving the issue of balancing instantaneous power to the consumer, who retains overall control over demand and power requirements. This paradigm has the potential to alter the current technical, market, and regulatory structure of electrical energy production and to move the economic landscape toward a more sustainable, reliable, and efficient electrical energy system. This dissertation examines concepts along the path of energy delivery, which crosses many fields including power systems, data communications, controls, electricity markets, and public utility regulation, and ultimately proposes a mathematical formulation and solution. The dissertation then examines potential physical interpretations of the formulation and solution and their impacts on different fields within the energy paradigm.
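As a back-of-the-envelope check of the capacity figures quoted above, the sketch below multiplies the stated per-person capacity by an assumed world population and growth rate; the population figures and the Hoover Dam nameplate capacity are assumptions for illustration, not values taken from the dissertation.

```python
# Rough arithmetic check of the capacity figures in the abstract above.
# Assumed values (not from the dissertation): world population ~6.9e9,
# population growth ~75 million/year, Hoover Dam capacity ~2.08 GW.
per_person_kw = 3.73          # US installed capacity per person (kW)
world_population = 6.9e9      # assumed world population
pop_growth_per_year = 75e6    # assumed annual population growth
hoover_dam_gw = 2.08          # assumed Hoover Dam nameplate capacity (GW)

world_capacity_tw = per_person_kw * world_population / 1e9        # kW -> TW
hoover_dam_equivalents = world_capacity_tw * 1e3 / hoover_dam_gw  # TW -> GW, then count
added_mw_per_day = per_person_kw * pop_growth_per_year / 365 / 1e3  # kW/day -> MW/day

print(f"World capacity at US per-capita level: {world_capacity_tw:.1f} TW")
print(f"Equivalent Hoover Dams: {hoover_dam_equivalents:,.0f}")
print(f"Capacity added per day: {added_mw_per_day:.0f} MW")
```

With these assumed inputs the result is roughly 25.7 TW, about 12,400 Hoover Dams, and about 766 MW of new capacity per day, consistent with the figures cited in the abstract.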
214

Evaluating System Readiness Level Reversal Characteristics Using Incidence Matrices

London, Mark Alan 11 February 2015 (has links)
Contemporary system maturity assessment approaches have failed to provide robust quantitative system evaluations, resulting in increased program costs and developmental risks. Standard assessment metrics, such as Technology Readiness Levels (TRL), do not sufficiently evaluate increasingly complex systems. The System Readiness Level (SRL) is a newly developed system development metric that is a mathematical function of the TRL and Integration Readiness Level (IRL) values for the components and connections of a particular system. SRL acceptance has been hindered by concerns that SRL mathematical operations may lead to inaccurate system readiness assessments, called readiness reversals. A new SRL calculation method using incidence matrices, the Incidence Matrix System Readiness Level (IMSRL), was proposed to alleviate these mathematical concerns. The presence of SRL readiness reversal was modeled for four SRL calculation methods across several system configurations. Logistic regression analysis demonstrated that the IMSRL exhibits less readiness reversal than the other approaches suggested in the literature. The IMSRL was also analytically evaluated for conformance to five standard SRL mathematical characteristics and a sixth newly proposed SRL property. The improved SRL mathematical characteristics discussed in this research will directly support quantitative analysis of system technological readiness measurements.
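For orientation, the sketch below shows one widely cited matrix formulation of the SRL, in which a normalized IRL matrix is multiplied by a normalized TRL vector and averaged. The component TRL/IRL values are hypothetical, and this is a sketch of the conventional approach discussed in the literature, not the incidence-matrix IMSRL method the dissertation proposes.

```python
import numpy as np

# Illustrative matrix-based SRL calculation (after the commonly cited approach);
# this sketches the general TRL/IRL combination idea, not the IMSRL from the thesis.
trl = np.array([7, 5, 8])          # TRL (1-9) for three hypothetical components
irl = np.array([[9, 4, 0],         # IRL (1-9) for each pair of components;
                [4, 9, 6],         # diagonal = component with itself,
                [0, 6, 9]])        # 0 = no planned integration

trl_n = trl / 9.0                  # normalize to [0, 1]
irl_n = irl / 9.0

srl_raw = irl_n @ trl_n                     # combine integration and technology maturity
links = np.count_nonzero(irl, axis=1)       # number of integrations per component
srl_components = srl_raw / links            # per-component readiness in [0, 1]
srl_composite = srl_components.mean()       # overall system readiness in [0, 1]

print("Component SRLs:", np.round(srl_components, 3))
print("Composite SRL:", round(srl_composite, 3))
```

Readiness reversal concerns arise when operations like these yield a higher composite value for a system that is, component for component, less mature; the dissertation's incidence-matrix formulation is aimed at reducing that effect.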
215

A biologically-motivated system for tracking moving people using a stationary camera

Cheung, Owen Ou Loung. January 1997 (has links)
This thesis describes the design and implementation of a real-time system for tracking moving people in an indoor environment. The system utilizes a stationary camera which can be mounted in any location within the indoor environment, such as on a wall or the ceiling. Embodied in the system is a control mechanism which is based on a biological model of gaze control. This mechanism controls the movements of software windows which are used for tracking, such that the target is always centred within a window, in much the same way as human eyes move such that a target is always centred on the foveal region of the retina.

Results from tracking experiments show that the implemented system is capable of tracking both single and multiple targets in real-time, with the camera placed at different heights above the floor. Furthermore, the system performs just as well under different lighting conditions.

The tracking system requires no special hardware for its operation. All that is required is an off-the-shelf processor, camera, and frame-grabber. Therefore, the system provides a cost-effective solution for many applications which involve tracking, such as security surveillance, robot navigation, and (human) traffic analysis.
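As a rough illustration of the window-centering idea described above (and only that: the thesis uses a biologically motivated gaze-control model, whereas the simple frame differencing below is an assumption made for the example), the sketch re-centres a tracking window on the centroid of moving pixels detected inside it.

```python
import numpy as np

def recenter_window(prev_frame, frame, window, motion_thresh=25):
    """Shift a tracking window so it stays centred on a moving target.

    Minimal sketch: motion is detected by simple frame differencing (an
    assumption, not the thesis's gaze-control model), and the window is
    moved to the centroid of the moving pixels found inside it.
    `window` is (row, col, height, width) in pixel coordinates.
    """
    r, c, h, w = window
    cur = frame[r:r+h, c:c+w].astype(int)
    prev = prev_frame[r:r+h, c:c+w].astype(int)
    ys, xs = np.nonzero(np.abs(cur - prev) > motion_thresh)
    if len(ys) == 0:
        return window                       # no motion detected; keep the window
    cy, cx = ys.mean(), xs.mean()           # centroid of moving pixels in the window
    new_r = int(r + cy - h / 2)             # re-centre the window on the centroid,
    new_c = int(c + cx - w / 2)             # analogous to foveating a visual target
    new_r = max(0, min(new_r, frame.shape[0] - h))
    new_c = max(0, min(new_c, frame.shape[1] - w))
    return (new_r, new_c, h, w)
```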
216

System goodput (Gs): A modeling and simulation approach to refute current thinking regarding system level quality of service

Sahlin, John P. 11 February 2014 (has links)
This dissertation represents a modeling and simulation approach toward determining whether distributed computing architectures (e.g., Cloud Computing) require state-of-the-art servers to ensure top performance, and whether alternate approaches can result in optimized Quality of Service by reducing operating costs while maintaining high overall system performance. The author first investigated the origins of Cloud Computing to ensure that the model of distributed computing architectures still applied to the Cloud Computing business model. After establishing that Cloud Computing was in fact a new iteration of an existing architecture, the author conducted a series of modeling and simulation experiments using the OPNET Modeler system dynamics tool to evaluate whether variations in the server infrastructure altered the overall system performance of a distributed computing environment. This modeling exercise focused on comparing state-of-the-art commodity Information Technology (IT) servers to those meeting the Advanced Telecommunications Computing Architecture (AdvancedTCA or ATCA) open standard, which are generally at least one generation behind commodity servers in terms of performance benchmarks. After modeling an enterprise IT environment and simulating network traffic using the OPNET Modeler tool, the author concluded that there is no system-level performance degradation in using AdvancedTCA servers for the consolidation effort, based on ANOVA/Tukey and Kruskal-Wallis multivariate analysis of the simulation results. In order to conduct this comparison, the author developed a system-level performance benchmark, System Goodput (Gs), to represent the end-to-end performance of services, a more appropriate measure of the performance of distributed systems such as Cloud Computing. The analysis of the data showed that individual component benchmarks are not an accurate predictor of system-level performance. After establishing that using slower servers (e.g., ATCA) does not affect overall system performance in a Cloud Computing environment, the author developed a model for optimizing system-level Quality of Service (QoS) for Cloud Computing infrastructures by relying on the more rugged ATCA servers to extend the service life of a Cloud Computing environment, resulting in a much lower Total Ownership Cost (TOC) for the Cloud Computing infrastructure provider.
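To illustrate the kind of statistical comparison mentioned above, the sketch below applies a one-way ANOVA and a Kruskal-Wallis test to synthetic response-time samples for two hypothetical server classes; the distributions, sample sizes and group names are invented for the example and are not the dissertation's simulation data.

```python
import numpy as np
from scipy import stats

# Synthetic end-to-end response times (ms) for two hypothetical server classes;
# these numbers are illustrative only, not results from the dissertation.
rng = np.random.default_rng(0)
commodity = rng.normal(loc=120.0, scale=15.0, size=200)
atca = rng.normal(loc=122.0, scale=15.0, size=200)

# Parametric comparison (one-way ANOVA) ...
f_stat, p_anova = stats.f_oneway(commodity, atca)
# ... and the non-parametric Kruskal-Wallis test, which drops the normality assumption.
h_stat, p_kw = stats.kruskal(commodity, atca)

print(f"ANOVA:          F = {f_stat:.2f}, p = {p_anova:.3f}")
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.3f}")
# A large p-value would be consistent with no system-level performance difference.
```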
217

A Heuristic Approach to Utilizing Penalty/Incentive Schemes in Risk Management of a Stochastic Activity Network

Ahmed, Mohamed Ali E. 26 February 2014 (has links)
Neglecting uncertainties in the estimation of activities, costs, and durations can significantly contribute to overruns in a project's budget and schedule. On the other hand, properly enforced penalties and incentives can motivate contractors to finish on time and within the allotted budget. However, the current literature on this topic does not sufficiently address project penalties and incentives within the context of uncertainty and dependence. Thus, this dissertation considers how allocating penalties and incentives can impact a stochastic project network in which activity durations are random variables and some of the activities are subcontracted. The impact of penalty/incentive schemes on project and activity uncertainties is also examined. Overall, one of the main goals of this work is to provide project stakeholders with a tool that can help determine the appropriate penalty and incentive rates for outsourced activities when creating the contract.

The study revealed that a total allocation of a project-level penalty/incentive to the relevant activities was considered a fair allocation. A Monte Carlo Simulation (MCS) model was used to generate random variables, incorporate activity distributions and dependence uncertainties, and examine the effect the penalty/incentive scheme has on the aggregated project cost. In order to validate the simulation model, its outcomes were verified against deterministic outcomes. Furthermore, of the several allocation methods explored, the most adequate was the normalized allocation of the project penalty/incentive to activities based on the probability of a zero-slack activity lying on the critical path. The MCS model was then expanded and applied to a larger network. The results of this study demonstrated that the penalty/incentive scheme can increase project uncertainty at earlier stages of the project, but that with the proper allocation method it is contained at later stages to the baseline levels of a project without any penalty/incentive. The study also revealed that the common practice of assuming project activities to be independent underestimates the penalty/incentive rates of the most critical, not the least critical, activities.
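The sketch below illustrates, under simplifying assumptions, the flavour of this kind of Monte Carlo criticality analysis: activity durations are sampled, the critical path of a toy two-path network is found on each run, and a project-level penalty/incentive rate is allocated to activities in proportion to their probability of lying on the critical path. The network topology, duration distributions and penalty rate are hypothetical, not the dissertation's model.

```python
import numpy as np

# Toy stochastic activity network: Start -> A -> C -> End and Start -> B -> C -> End.
# Durations are triangular(min, mode, max); all values are illustrative assumptions.
rng = np.random.default_rng(1)
activities = {
    "A": (4, 6, 10),
    "B": (5, 7, 9),
    "C": (2, 3, 5),
}
n_runs = 10_000
penalty_rate = 1000.0   # hypothetical project-level penalty per day of delay

on_critical = {name: 0 for name in activities}
for _ in range(n_runs):
    d = {k: rng.triangular(*v) for k, v in activities.items()}
    # Two parallel paths share activity C; the longer of A and B is critical.
    on_critical["A" if d["A"] >= d["B"] else "B"] += 1
    on_critical["C"] += 1        # C lies on every path, so it is always critical

# Normalized allocation of the project penalty/incentive by criticality probability.
crit_prob = {k: v / n_runs for k, v in on_critical.items()}
total = sum(crit_prob.values())
allocation = {k: penalty_rate * p / total for k, p in crit_prob.items()}

for k in activities:
    print(f"{k}: P(critical) = {crit_prob[k]:.2f}, allocated rate = {allocation[k]:.0f}")
```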
218

A study of the impact of flexible AC transmission system devices on the economic-secure operation of power systems

Griffin, Julie January 1995 (has links)
This thesis examines how Flexible AC Transmission Systems (FACTS) devices can improve the secure-economic operation of a power system. More specifically, the benefits of FACTS devices in a network are evaluated in terms of four areas of power system study: system security, economic dispatch operation, maximum network loadability and electric industry deregulation. Simulations of a simple network are made to evaluate how a FACTS device can be used to increase the security region of a network. Based on this analysis, simulations are performed using the 24-bus IEEE reliability test network to assess the possible savings in generation costs, the increase in maximum network loadability and the improvements in flexibility of exchanges resulting from the use of a FACTS device in this network. The results demonstrate that FACTS devices can be used effectively to increase the security region of a network thereby allowing for a better optimum operating point in any optimization problem performed over such a region.
219

Electric energy system planning and the second principle of thermodynamics

Oliveira F., Delly January 1995 (has links)
This thesis deals with the long-term planning of electric energy systems. Such systems are defined by complex interconnections of end-uses, energy conversion devices and natural resources. The planning process is usually guided by a number of design criteria, namely economic, social and environmental impacts as well as system reliability and efficiency. The planning challenge is to find an acceptable compromise among these often conflicting objectives. System efficiency is a critical design criterion, normally measuring the ratio of the system output and input energies. In electric energy systems, efficiency is normally defined according to the First Principle of Thermodynamics, which states that energy cannot be destroyed. In this thesis, the definition of efficiency in electric energy system planning is broadened to include interpretations according to both the First and Second Principles of Thermodynamics. The Second Principle essentially states that the "quality" of energy decreases or, at best, remains constant in any conversion process, where the quality of energy (denoted here by exergy) is a measure of the ability of a form of energy to be converted into any other form. Work, hydroelectric potential and electricity are examples of high-quality energy sources, while low-temperature heat end-use applications are at the low end of the quality scale. Since certain types of energy conversion processes may show high levels of exergy destruction even though they are energetically efficient, it is important to design energy systems such that the energy quality of an end-use is matched as much as possible to that of the energy supply, thus avoiding situations where a high-quality supply is used for a low-quality purpose.

The electric energy industry has virtually ignored exergetic considerations in system planning due, to a large extent, to a lack of familiarity with the Second Principle and its implications. Nevertheless, exergy is an attribute which must be planned and conserved with at least the same priority as energy. It is demonstrated here that the planning of energy systems is drastically affected when both energy and exergy are considered. To use natural resources rationally, exergetic analysis must become an integral part of system planning. This thesis analyses the application of the Second Principle of Thermodynamics to the planning of electric energy systems through theory, examples and case studies, including economic considerations.

In order to achieve electric energy systems that are more exergetically efficient, a new type of electric energy tariff, called type-of-use, is proposed. Analogous to the time-of-use rate, which assigns different monetary values depending on the time of day, the type-of-use tariff assigns a monetary value to the end-use. Simulations are performed on different electric energy systems to demonstrate that type-of-use tariffs will indeed lead to more exergetically efficient systems.

The benefits of exergetic analysis are supported by a number of studies presented in this thesis. These studies analyse the following from the points of view of energetic and exergetic efficiency and cost: (i) a space heating system; (ii) the impact of a major introduction of electric vehicles in Canada; and (iii) the long-range planning of a regional electric power system consisting of two interconnected provinces.
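For readers unfamiliar with the exergy concept invoked above, a minimal illustration (standard Second-Law textbook material, not taken from the thesis): the exergy of a heat flow Q delivered at absolute temperature T in surroundings at temperature T0 is

\[
  Ex = Q\left(1 - \frac{T_0}{T}\right)
\]

Space heating delivered at T = 313 K (40 °C) against an ambient T0 = 273 K therefore carries an exergy factor of only about 1 - 273/313 ≈ 0.13, whereas electricity is essentially pure exergy; supplying such a load directly from electricity destroys most of the supply's quality, which is precisely the supply/end-use mismatch the thesis seeks to avoid.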
220

Detecting colluders in PageRank: Finding slow mixing states in a Markov chain

Mason, Kahn. Unknown Date (has links)
Thesis (Ph.D.)--Stanford University, 2005. (UnM)AAI3187317. Source: Dissertation Abstracts International, Volume 66-08, Section A, page 3044. Adviser: Benjamin Van Roy.
