About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
211

Reasons for non-compliance with mandatory information assurance policies by a trained population

Shelton, D. Cragin 13 May 2015 (has links)
Information assurance (IA) is about protecting key attributes of information and of the data systems that handle it. Treating IA as a system, it is appropriate to consider the three major elements of any system: people, processes, and tools. While IA tools exist in the form of hardware and software, tools alone cannot assure key information attributes; IA procedures, and the people who must follow those procedures, are also part of the system. It is not in dispute that people fail to follow IA procedures. A review of the literature showed not only that there is no general consensus on why people do not follow IA procedures, but also that no identified study simply asked people their reasons. Published studies addressed reasons for non-compliance, but always within the framework of one of several assumed theories of human performance. The study described here took a first small step by asking a sample from an under-studied population, users of U.S. federal government information systems, why they have failed to comply with two IA procedures related to password management, and how often. The results may lay the groundwork for extending the same methodology across a range of IA procedures, eventually suggesting new approaches to motivating people, modifying procedures, or developing tools to better meet IA goals. In the course of the study, an unexpected result occurred: the study plan had included comparing the data for workers with and without IA duties, but almost all of the survey respondents declared having IA duties. A comment from a pilot-study participant suggested that IA awareness programs emphasizing universal responsibility for information security may explain the unexpected responses. The study's conclusions offer suggestions for refining this question in future studies.
212

Post-Disaster Interim Housing: Forecasting Requirements and Determining Key Planning Factors

Jachimowicz, Adam 16 September 2014 (has links)
Common tenets in the field of emergency management hold that every disaster is different and carries a great deal of uncertainty. For these and many other reasons, providing post-disaster assistance to victims presents many challenges. The Federal Emergency Management Agency (FEMA) has identified post-disaster interim housing as one of its greatest challenges, a point highlighted in media coverage of the failures seen during the recovery from Hurricane Katrina. Partly in response, FEMA developed the National Disaster Housing Strategy, which establishes the framework and strategic goals for providing housing to disaster victims. The strategy calls for emergency management professionals both to anticipate needs and to balance a host of factors in order to provide quick, economical, and community-based housing solutions that meet individual, family, and community needs while enabling recovery. The first problem is that emergency management officials must make decisions early on, without actual event data, in order to provide timely interim housing options to victims. The second problem is that there is little guidance, and no quantitative measure, for prioritizing the many factors these officials must weigh when providing interim housing. This research addressed both problems. To anticipate needs, a series of forecast models was developed by applying regression analysis to historical data provided by FEMA. The models predict the cost of a housing mission, the number of individuals applying to FEMA for assistance, the number of people eligible for housing assistance, and the number of trailers FEMA will provide as interim housing. The predictor variables were population, wind speed, homeownership rate, number of households, income, and poverty level. Of the four models developed, the first three demonstrated statistical significance, while the last one did not. The models are limited to wind-related hazards. These models and the associated forecasts can assist federal, state, and local government officials with scoping and planning a housing mission, and they provide insight into how the six predictor variables influence the forecast. The second part of this research used a structured feedback process (Delphi) and expert opinion to develop a ranked list of the most important factors that emergency management officials should consider when conducting operational planning for a post-disaster housing mission. This portion of the research took guidance from the National Disaster Housing Strategy and attempted to quantify it based on the consensus opinion of a group of experts. The top three factors identified by the Delphi were 1) house disaster survivors as soon as possible, 2) the availability of existing housing, and 3) the status of infrastructure.
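
The abstract does not give the form of the forecast models; below is a minimal sketch of one possibility, assuming an ordinary least-squares regression on the six listed predictors. All numbers, and the choice of people eligible for housing assistance as the response, are illustrative assumptions, not data from the dissertation.

```python
import numpy as np

# Hypothetical historical records, one row per wind-related disaster:
# population, peak wind speed (mph), homeownership rate, households,
# median income, poverty rate. Values are invented for illustration.
X = np.array([
    [450_000,    95, 0.64, 170_000, 48_000, 0.16],
    [1_200_000, 120, 0.58, 430_000, 52_000, 0.19],
    [300_000,    85, 0.71, 110_000, 45_000, 0.13],
    [2_100_000, 140, 0.55, 780_000, 50_000, 0.21],
    [800_000,   105, 0.62, 290_000, 47_000, 0.18],
    [650_000,   100, 0.66, 240_000, 51_000, 0.15],
    [1_500_000, 130, 0.57, 560_000, 46_000, 0.22],
    [950_000,   110, 0.60, 350_000, 49_000, 0.17],
], dtype=float)
y = np.array([3_200, 14_500, 1_100, 31_000, 7_800, 5_400, 22_000, 10_300], dtype=float)

# Ordinary least squares: add an intercept column and solve for coefficients.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Forecast eligible applicants for a hypothetical incoming storm.
new_event = np.array([1, 650_000, 110, 0.60, 240_000, 49_000, 0.17])
print("forecast people eligible for assistance:", round(float(new_event @ coef)))
```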
213

An Automated System for Rapid and Secure Device Sanitization

LaBarge, Ralph S. 24 June 2014 (has links)
Public and private organizations face the challenges of protecting their networks from cyber-attacks while reducing the amount of time and money spent on Information Technology. Organizations can reduce their expenditures by reusing server, switch, and router hardware, but they must use reliable and efficient methods of sanitizing these devices before they can be redeployed. The sanitization process removes proprietary, sensitive, or classified data, as well as persistent malware, from a device prior to reuse. The Johns Hopkins University Applied Physics Laboratory has developed an automated, rapid, and secure method for sanitizing servers, switches, and routers. This sanitization method was implemented and tested on several different types of network devices during the Cyber Measurement & Analysis Center project, which was funded under Phases I and II of the DARPA National Cyber Range program. The performance of the automated sanitization system was excellent, with an order-of-magnitude reduction in the time required to sanitize servers, routers, and switches, and a significant improvement in the effectiveness of the sanitization process through the addition of persistent malware removal.
214

The Effect of Mission Assurance on ELV Launch Success Rate: An Analysis of Two Management Systems for Launch Vehicles

Leung, Raymond 03 June 2014 (has links)
There are significant challenges involved in regulating the growing commercial human spaceflight industry. The safety of the crew and passengers should be protected; however, care should be taken not to overburden the industry with too many, too stringent, or perhaps inapplicable regulations.

An improvement in launch success would improve the safety of the crew and passengers. This study explores the effectiveness of Mission Assurance policies as a guide for regulations and standards. Because there is a severe lack of data on commercial human space flights, a direct test of effectiveness using historical commercial human space flight data is not possible; instead, historical data on current expendable commercial launchers were used. The National Aeronautics and Space Administration (NASA) applies strong Mission Assurance policies to its launches of civil payloads, whereas the regulations applied to commercial launches by the Federal Aviation Administration's Office of Commercial Space Transportation (FAA/AST) are more safety oriented.

A comparison of launches conducted under NASA and under FAA/AST oversight is used to gauge the effectiveness of Mission Assurance policies on launch success. Variables between the two agencies are reduced so that Mission Assurance policies are isolated as the main difference between launches. Scenarios pertinent to commercial human space flight are used so that the results are applicable.
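
The abstract does not name the statistical comparison used. One hedged way to contrast success rates between two launch populations is a two-by-two contingency test, sketched below with scipy's Fisher exact test; the launch counts are hypothetical placeholders, not the study's data.

```python
from scipy.stats import fisher_exact

# Hypothetical launch tallies: [successes, failures] for launches under
# NASA mission-assurance oversight vs. FAA/AST-licensed launches.
nasa    = [92, 4]
faa_ast = [55, 6]

table = [nasa, faa_ast]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")

# Point estimates of the success rate under each management system.
for name, (s, f) in zip(["NASA", "FAA/AST"], table):
    print(f"{name}: {s / (s + f):.1%} success over {s + f} launches")
```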
215

The energy delivery paradigm

Bukowski, Stephen A. 12 March 2015 (has links)
A sustainable world is one in which human needs are met equitably without harm to the environment and without sacrificing the ability of future generations to meet their needs. Electrical energy is one such need, but neither its production nor its utilization is equitable or harmless. The growth of electricity availability, and the way electricity is used in industrialized nations, has established a dichotomy between usage and sustainability. This dichotomy is best illuminated by the current "just-in-time" approach, in which excessive generation capacity is installed so that consumer load can be met instantaneously at all times. Today in the United States, electricity generation capacity is approximately 3.73 kW per person, versus 3.15 kW per person in 2002 [1][2]. At this magnitude of installed capacity, the entire world would need approximately 25.5 TW of generation, or approximately 12,250 Hoover Dams, today, and would have to add 766 MW of capacity every day [3]. This unsustainable trajectory is further exacerbated by the fact that consumers have little vested incentive to keep electricity generation sustainable, because producers shoulder the burden of instantaneously meeting demand.

What is needed are paradigms that make these resources economically sustainable. The opportunity provided by the smart grid is lost if we merely automate existing paradigms; it is new paradigms that the smart grid should enable. This dissertation examines a new paradigm that shifts the problem toward "energy delivery" rather than "power delivery" for economic sustainability. The shift from a just-in-time power model to an energy delivery model represents a fundamental change of approach relative to the research happening today.

The energy delivery paradigm introduces the concept of a producer providing electrical energy to a system at a negotiated cost and within power limits, leaving the issue of balancing instantaneous power to the consumer, who retains overall control over demand and power requirements. This paradigm has the potential to alter the current technical, market, and regulatory structure of electrical energy production and to move the economic landscape toward a more sustainable, reliable, and efficient electrical energy system. The dissertation examines concepts along the path of energy delivery, which crosses many fields including power systems, data communications, controls, electricity markets, and public utility regulation, ultimately proposing a mathematical formulation and solution. It then examines potential physical interpretations of the formulation and solution and their impacts on different fields within the energy paradigm.
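
As a quick check of the scaling argument, the quoted figures follow from the per-capita capacity alone. The world population, annual population growth, and Hoover Dam nameplate capacity used below are assumptions on my part, not values stated in the dissertation.

```python
# Rough reproduction of the abstract's scaling figures (assumed inputs).
per_capita_kw = 3.73          # US installed generation capacity per person, kW
world_population = 6.84e9     # assumed world population for the period studied
hoover_dam_gw = 2.08          # assumed Hoover Dam nameplate capacity, GW
annual_pop_growth = 75e6      # assumed people added to world population per year

world_capacity_tw = per_capita_kw * world_population / 1e9   # kW -> TW
hoover_equivalents = world_capacity_tw * 1000 / hoover_dam_gw
daily_addition_mw = per_capita_kw * annual_pop_growth / 365 / 1000  # kW -> MW

print(f"world capacity at US per-capita level: {world_capacity_tw:.1f} TW")
print(f"equivalent Hoover Dams: {hoover_equivalents:,.0f}")
print(f"capacity added per day to keep pace: {daily_addition_mw:.0f} MW")
```

Under these assumptions the script returns roughly 25.5 TW, about 12,260 Hoover Dams, and about 766 MW per day, consistent with the figures quoted above.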
216

Evaluating System Readiness Level Reversal Characteristics Using Incidence Matrices

London, Mark Alan 11 February 2015 (has links)
Contemporary system maturity assessment approaches have failed to provide robust quantitative system evaluations, resulting in increased program costs and developmental risks. Standard assessment metrics, such as Technology Readiness Levels (TRL), do not sufficiently evaluate increasingly complex systems. The System Readiness Level (SRL) is a newly developed system development metric that is a mathematical function of the TRL and Integration Readiness Level (IRL) values of a system's components and connections. SRL acceptance has been hindered by concerns that its mathematical operations may lead to inaccurate system readiness assessments, called readiness reversals. A new SRL calculation method using incidence matrices, the Incidence Matrix System Readiness Level (IMSRL), was proposed to alleviate these mathematical concerns. The presence of SRL readiness reversal was modeled for four SRL calculation methods across several system configurations. Logistic regression analysis demonstrated that the IMSRL exhibits less readiness reversal than other approaches suggested in the literature. The IMSRL was also analytically evaluated for conformance to five standard SRL mathematical characteristics and a sixth newly proposed SRL property. The improved SRL mathematical characteristics discussed in this research will directly support quantitative analysis of system technological readiness measurements.
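
For context, below is a minimal sketch of the baseline SRL calculation that methods like the IMSRL are positioned against, assuming the commonly cited formulation in which normalized IRL and TRL values are multiplied and each component's score is divided by its number of integrations. The component values are hypothetical, and the dissertation's incidence-matrix method is not reproduced here.

```python
import numpy as np

# Technology Readiness Levels (1-9) for a hypothetical 3-component system.
trl = np.array([7, 5, 8], dtype=float)

# Integration Readiness Levels (1-9) between component pairs; the diagonal
# is set to 9 (a component is fully integrated with itself), 0 means no link.
irl = np.array([
    [9, 4, 0],
    [4, 9, 6],
    [0, 6, 9],
], dtype=float)

# Baseline formulation: normalize both to [0, 1], multiply, then divide each
# component's score by its number of (non-zero) integrations.
trl_n = trl / 9.0
irl_n = irl / 9.0
raw = irl_n @ trl_n
links = np.count_nonzero(irl, axis=1)        # integrations per component
component_srl = raw / links
composite_srl = component_srl.mean()

print("component SRLs:", np.round(component_srl, 3))
print("composite SRL :", round(composite_srl, 3))
```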
217

A biologically-motivated system for tracking moving people using a stationary camera

Cheung, Owen Ou Loung. January 1997 (has links)
This thesis describes the design and implementation of a real-time system for tracking moving people in an indoor environment. The system utilizes a stationary camera which can be mounted in any location within the indoor environment such as on a wall or the ceiling. Embodied in the system is a control mechanism which is based on a biological model of gaze control. This mechanism controls the movements of software windows which are used for tracking, such that the target is always centred within a window, in much the same way as human eyes move such that a target is always centred on the foveal region of the retina.

Results from tracking experiments show that the implemented system is capable of tracking both single and multiple targets in real-time, with the camera placed at different heights above the floor. Furthermore, the system performs just as well under different lighting conditions.

The tracking system requires no special hardware for its operation. All that is required is an off-the-shelf processor, camera, and frame-grabber. Therefore, the system provides a cost-effective solution for many applications which involve tracking such as security surveillance, robot navigation, and (human) traffic analysis.
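
The thesis implementation is not reproduced here; the sketch below only illustrates the window re-centring idea under stated assumptions: a simple background-difference mask stands in for the motion detector, and the window size and threshold are arbitrary.

```python
import numpy as np

def update_window(frame, background, center, half=40, thresh=30):
    """Re-centre a square tracking window on the foreground centroid.

    frame, background: 2-D grayscale images (numpy arrays).
    center: (row, col) of the current window centre.
    half: half the window size in pixels; thresh: foreground threshold.
    All parameter values are illustrative assumptions.
    """
    r, c = center
    r0, r1 = max(r - half, 0), min(r + half, frame.shape[0])
    c0, c1 = max(c - half, 0), min(c + half, frame.shape[1])

    # Foreground mask inside the window: pixels that differ from the background.
    diff = np.abs(frame[r0:r1, c0:c1].astype(int) - background[r0:r1, c0:c1].astype(int))
    ys, xs = np.nonzero(diff > thresh)
    if len(ys) == 0:
        return center          # nothing moving: keep the window where it is

    # Shift the window so the target's centroid sits at its centre, loosely
    # analogous to an eye movement re-centring the target on the fovea.
    return (r0 + int(ys.mean()), c0 + int(xs.mean()))
```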
218

System goodput (GS): A modeling and simulation approach to refute current thinking regarding system level quality of service

Sahlin, John P. 11 February 2014 (has links)
This dissertation takes a modeling and simulation approach to determining whether distributed computing architectures (e.g., Cloud Computing) require state-of-the-art servers to ensure top performance, and whether alternate approaches can optimize Quality of Service by reducing operating costs while maintaining high overall system performance. The author first investigated the origins of Cloud Computing to confirm that the model of distributed computing architectures still applies to the Cloud Computing business model. After establishing that Cloud Computing is in fact a new iteration of an existing architecture, the author conducted a series of modeling and simulation experiments using the OPNET Modeler system dynamics tool to evaluate whether variations in the server infrastructure altered the overall system performance of a distributed computing environment. This modeling exercise compared state-of-the-art commodity Information Technology (IT) servers to servers meeting the Advanced Telecommunications Computing Architecture (AdvancedTCA or ATCA) open standard, which are generally at least one generation behind commodity servers in terms of performance benchmarks. After modeling an enterprise IT environment and simulating network traffic with OPNET Modeler, the author concluded, based on ANOVA/Tukey and Kruskal-Wallis analysis of the simulation results, that there is no system-level performance degradation in using AdvancedTCA servers for the consolidation effort. To conduct this comparison, the author developed a system-level performance benchmark, System Goodput (GS), to represent the end-to-end performance of services, a more appropriate measure of the performance of distributed systems such as Cloud Computing. The analysis of the data showed that individual component benchmarks are not an accurate predictor of system-level performance. After establishing that using slower servers (e.g., ATCA) does not affect overall system performance in a Cloud Computing environment, the author developed a model for optimizing system-level Quality of Service (QoS) for Cloud Computing infrastructures by relying on the more rugged ATCA servers to extend the service life of a Cloud Computing environment, resulting in a much lower Total Ownership Cost (TOC) for the Cloud Computing infrastructure provider.
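
The abstract does not define GS precisely. The sketch below assumes one plausible reading, useful payload delivered per unit of end-to-end service time, and applies the Kruskal-Wallis comparison mentioned above to synthetic per-request samples; the configuration names and all numbers are invented for illustration.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)

def system_goodput(payload_mb, elapsed_s):
    # One plausible reading of GS: total useful application payload delivered
    # divided by total end-to-end service time (MB/s). The dissertation's
    # exact definition may differ.
    return payload_mb.sum() / elapsed_s.sum()

# Synthetic per-request payloads (MB) and end-to-end response times (s) for
# three simulated infrastructures: commodity servers and two ATCA builds.
configs = {
    "commodity": (rng.uniform(1, 5, 500), rng.normal(0.040, 0.006, 500)),
    "atca_gen1": (rng.uniform(1, 5, 500), rng.normal(0.042, 0.007, 500)),
    "atca_gen2": (rng.uniform(1, 5, 500), rng.normal(0.041, 0.006, 500)),
}

for name, (mb, secs) in configs.items():
    print(f"{name}: GS = {system_goodput(mb, secs):.1f} MB/s")

# Non-parametric check of whether per-request response times differ by config.
stat, p = kruskal(*(secs for _, secs in configs.values()))
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.3f}")
```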
219

A Heuristic Approach to Utilizing Penalty/Incentive Schemes in Risk Management of a Stochastic Activity Network

Ahmed, Mohamed Ali E. 26 February 2014 (has links)
Neglecting uncertainties in the estimates of activity costs and durations can significantly contribute to overruns in a project's budget and schedule. On the other hand, properly enforced penalties and incentives can motivate contractors to finish on time and within the allotted budget. However, the current literature does not sufficiently address project penalties and incentives within the context of uncertainty and dependence. This dissertation therefore considers how allocating penalties and incentives affects a stochastic project network in which activity durations are random variables and some of the activities are subcontracted. The impact of penalty/incentive schemes on project and activity uncertainties is also examined. Overall, one of the main benefits pursued by this work is to provide project stakeholders with a tool that can help determine appropriate penalty and incentive rates for outsourced activities when creating the contract.

The study revealed that a total allocation of a project-level penalty/incentive to the relevant activities was considered a fair allocation. A Monte Carlo simulation (MCS) model was used to generate random variables, incorporate activity distributions and dependence uncertainties, and examine the effect of the penalty/incentive scheme on the aggregated project cost. To validate the simulation model, its outcomes were verified against deterministic outcomes. Of the several allocation methods explored, the most adequate was the normalized allocation of the project penalty/incentive to activities based on the probability of a zero-slack activity lying on the critical path. The MCS model was then expanded and applied to a larger network. The results demonstrated that the penalty/incentive scheme can increase project uncertainty at earlier stages of the project, but with the proper allocation method it is contained at later stages to the baseline levels of a project without any penalty/incentive. The study also revealed that the common practice of assuming project activities to be independent underestimates the penalty/incentive rates of the most critical, not the least critical, activities.
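
A minimal sketch of the allocation idea follows: Monte Carlo sampling of a toy two-path activity network, estimation of each activity's probability of lying on the critical path, and a normalized allocation of a project-level penalty/incentive rate in proportion to that probability. The network, the duration distributions, and the rate are assumptions, not the dissertation's case study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical stochastic activity network with two parallel paths:
#   path 1: A -> C, path 2: B -> D; the project finishes when both paths do.
A = rng.triangular(4, 6, 10, n)
C = rng.triangular(3, 5, 9, n)
B = rng.triangular(5, 7, 9, n)
D = rng.triangular(2, 4, 8, n)

path1 = A + C
path2 = B + D
duration = np.maximum(path1, path2)

# Criticality index: estimated probability that an activity lies on the
# critical path across the Monte Carlo samples.
crit = {
    "A": np.mean(path1 >= path2), "C": np.mean(path1 >= path2),
    "B": np.mean(path2 >  path1), "D": np.mean(path2 >  path1),
}

# Allocate a project-level penalty/incentive rate to activities in proportion
# to normalized criticality (illustrative rate of $10k per day of deviation).
project_rate = 10_000
total = sum(crit.values())
allocation = {a: project_rate * c / total for a, c in crit.items()}

print("expected project duration:", round(float(duration.mean()), 2))
print({a: round(v) for a, v in allocation.items()})
```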
220

A study of the impact of flexible AC transmission system devices on the economic-secure operation of power systems

Griffin, Julie January 1995 (has links)
This thesis examines how Flexible AC Transmission Systems (FACTS) devices can improve the secure-economic operation of a power system. More specifically, the benefits of FACTS devices in a network are evaluated in terms of four areas of power system study: system security, economic dispatch operation, maximum network loadability and electric industry deregulation. Simulations of a simple network are made to evaluate how a FACTS device can be used to increase the security region of a network. Based on this analysis, simulations are performed using the 24-bus IEEE reliability test network to assess the possible savings in generation costs, the increase in maximum network loadability and the improvements in flexibility of exchanges resulting from the use of a FACTS device in this network. The results demonstrate that FACTS devices can be used effectively to increase the security region of a network thereby allowing for a better optimum operating point in any optimization problem performed over such a region.
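
As a stylized illustration of the economic-dispatch benefit described above, the sketch below solves a two-generator dispatch with a single transfer limit and shows how raising that limit (the kind of relief a FACTS device can provide by redirecting flow onto underused circuits) lowers total cost. The costs, limits, and load level are assumptions, not values from the thesis or the IEEE test network.

```python
from scipy.optimize import linprog

def dispatch_cost(flow_limit_mw, load_mw=300):
    """Minimal two-bus economic dispatch: a cheap remote generator (G1) serves
    the load over a single tie line; an expensive local generator (G2) covers
    the rest. All numbers are illustrative assumptions."""
    c = [20.0, 50.0]                              # $/MWh for G1, G2
    A_eq = [[1.0, 1.0]]                           # power balance at the load
    b_eq = [load_mw]
    bounds = [(0, min(400, flow_limit_mw)),       # G1 limited by the tie line
              (0, 300)]                           # G2 local capacity
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.fun

# Raising the effective transfer limit lets more cheap generation reach the load.
for limit in (150, 200, 250):
    print(f"flow limit {limit} MW -> hourly cost ${dispatch_cost(limit):,.0f}")
```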
