About: The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Improving IT portfolio management decision confidence using multi-criteria decision making and hypervariate display techniques

Landmesser, John Andrew 01 March 2014
Information technology (IT) investment decision makers are required to process large volumes of complex data. An existing body of knowledge relevant to IT portfolio management (PfM), decision analysis, visual comprehension of large volumes of information, and IT investment decision making suggests that Multi-Criteria Decision Making (MCDM) and hypervariate display techniques can reduce cognitive load and improve decision confidence in IT PfM decisions. This dissertation investigates improving decision confidence by reducing the cognitive burden on the decision maker through greater comprehension of relevant decision information. Decision makers from across the federal government were presented with actual federal IT portfolio project lifecycle costs and durations using hypervariate displays, so that they could comprehend IT portfolio information more quickly and make more confident decisions. Other information economics attributes were randomized for IT portfolio projects to generate Balanced Scorecard (BSC) values supporting MCDM decision aids focused on aligning IT investments with specific business objectives and constraints. Quantitative and qualitative measures of participant comprehension, confidence, and efficiency were collected to assess the effectiveness of the hypervariate display treatment and then the MCDM decision aid treatment. Morae Recorder Autopilot guided participants through scenario tasks and collected study data without researcher intervention for analysis in Morae Manager. Results showed improved comprehension and decision confidence using hypervariate displays of federal IT portfolio information over the standard displays. Both quantitative and qualitative data showed significant differences in the accomplishment of assigned IT portfolio management tasks and increased confidence in decisions.

MCDM techniques, incorporating the IT BSC, Monte Carlo simulation, and optimization algorithms to produce cost-, value-, and risk-optimized portfolios, improved decision-making efficiency. Participants did not perceive improved quality or reduced uncertainty from the optimized IT portfolio information; on average, however, they were satisfied and confident with the portfolio optimizations. Improved, efficient methods of delivering and visualizing IT portfolio information can reduce decision-maker cognitive load, improve comprehension efficiency, and improve decision-making confidence. The results contribute to knowledge of the cognitive processes behind comprehension and decision making, and demonstrate important linkages between Human-Computer Interaction (HCI) and Decision Support Systems (DSS) in supporting IT PfM decision making.
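The abstract above combines Monte Carlo simulation with optimization to produce cost-, value-, and risk-aware portfolios. As a rough illustration of that general idea (not the study's actual models), the sketch below simulates each project's value distribution and then greedily selects projects by value-per-cost under a budget cap; all project names, costs, and distribution parameters are hypothetical.

```python
import random

# Hypothetical project data: names, costs, and value distributions are
# illustrative assumptions, not figures from the study.
PROJECTS = {
    "ERP upgrade":      {"cost": 40, "value_mu": 70, "value_sigma": 15},
    "Cloud migration":  {"cost": 25, "value_mu": 55, "value_sigma": 20},
    "Security tooling": {"cost": 15, "value_mu": 45, "value_sigma": 10},
    "Data warehouse":   {"cost": 30, "value_mu": 60, "value_sigma": 25},
}

def simulate_value(mu, sigma, trials=5000, seed=42):
    """Monte Carlo estimate of a project's expected value and risk."""
    rng = random.Random(seed)
    samples = [rng.gauss(mu, sigma) for _ in range(trials)]
    mean = sum(samples) / trials
    risk = (sum((s - mean) ** 2 for s in samples) / trials) ** 0.5
    return mean, risk

def select_portfolio(projects, budget):
    """Greedy value-per-cost selection under a budget constraint."""
    scored = []
    for name, p in projects.items():
        mean, _risk = simulate_value(p["value_mu"], p["value_sigma"])
        scored.append((mean / p["cost"], name, p["cost"]))
    scored.sort(reverse=True)  # highest simulated value density first
    chosen, spent = [], 0
    for _ratio, name, cost in scored:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen, spent

portfolio, spent = select_portfolio(PROJECTS, budget=70)
```

A real MCDM decision aid would score projects against several weighted criteria (the BSC perspectives) rather than a single value-per-cost ratio, but the simulate-then-optimize structure is the same.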
12

Securing Location Services Infrastructures: Practical Criteria for Application Developers and Solutions Architects

Karamanian, Andre 30 August 2013
This qualitative, exploratory, normative study examined the security and privacy of location-based services in mobile applications, exploring the risks involved and the controls available to implement privacy and security. The study questions were addressed using components of the FIPS Risk Management Framework. The study found that the risk to location information was considered high, and it offers suggested controls for security and privacy.
13

Designing Privacy Notices: Supporting User Understanding and Control

Kelley, Patrick Gage 12 November 2013
Users are increasingly expected to manage complex privacy settings in their normal online interactions. From shopping to social networks, users make decisions about sharing their personal information with corporations and contacts, frequently with little assistance. Current solutions require consumers to read long documents or go out of their way to manage complex settings buried deep in management interfaces, all of which lead to little or no actual control.

The goal of this work is to help people cope with the shifting privacy landscape. While our work looks at many aspects of how users make decisions regarding their privacy, this dissertation focuses on two specific areas: the current state of web privacy policies and mobile phone application permissions. We explored consumers' current understandings of privacy in these domains, and then used that knowledge to iteratively design and test more comprehensible information displays.

These prototyped information displays should not be seen as final, commercially ready solutions, but as examples of privacy notices that can help users think about, cope with, and make decisions regarding their data privacy. We conclude with a series of design suggestions motivated by our findings.

Keywords: privacy, notice, usability, user interfaces, security, mobile, policy, P3P, HCI, information design.
14

Executive security awareness primer

Toussaint, Gregory W. 22 April 2015
The purpose of this paper was to create a primer for a security awareness program that educates senior-level executives on the key aspects of cyber security. It addresses a two-part gap: the scarcity of executive-focused security awareness programs, and the number of executives who do not fully abide by their company's security policies. That gap, coupled with research showing that executives are highly targeted by attackers, was the impetus behind this project. It was determined that the content of an executive security awareness program should be similar to that of a security awareness program for all other employees, with the differences lying in the delivery and time frame of each segment. Accordingly, literature was reviewed on the various topics of security awareness. Research revealed the importance of capturing executives' attention in order to keep their interest in the program. It is recommended that individuals charged with creating an executive security awareness program begin with one-on-one meetings with the executives in their company. These meetings help assess the executives' time constraints as well as their current knowledge of the various security awareness topics, allowing the program to be tailored specifically to them. This primer may be used by any company or organization in the beginning stages of creating its own security awareness program for executives. Keywords: Cybersecurity, Professor Albert Orbinati, Executive Security Awareness, Internet Safety.
15

Windows hibernation and memory forensics

Ayers, Amy L. 30 April 2015
The purpose of this capstone project was to research the hibernation file, its role in memory forensics, and the current technology, techniques, and concepts for its analysis. The study includes an in-depth look at the Windows hibernation feature, the file format, the potential evidence saved to the file, and its impact on digital forensic investigations. The research was performed to demonstrate the importance of the hibernation file and to generate awareness of this forensic artifact. The research questions presented were designed to identify the properties of Windows hibernation and its significance in digital forensics, as well as the concepts analysts should understand when selecting forensic software and performing hibernation analysis. Through the literature review process, the hibernation file was identified as an essential part of digital forensics, providing analysts with snapshots of system memory from various points in the past. This data includes web, email, and chat sessions in addition to running processes, login credentials, encryption keys, program data, and much more. Beyond forensics, the hibernation file is useful in the fields of data recovery and incident response. A review of current hibernation file publications revealed incomplete and conflicting works, culminating in the acknowledgment that more research is needed to close these gaps. Greater awareness of hibernation forensics, through its inclusion in future published works and in computer forensic courses, is recommended; such inclusion will help equip practitioners to use the hibernation file accurately and obtain the highest-quality forensic evidence. Keywords: Cybersecurity, hiberfil.sys, hybrid sleep, malware, slack space, Albert Orbinati.
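As a concrete illustration of one hibernation-analysis detail, the sketch below triages a hiberfil.sys by the four-byte header signature commonly described in the forensics literature ("hibr"/"HIBR" for a valid memory image, "wake"/"WAKE" after a resume, zeroed on a clean resume). This is a simplified check based on published descriptions of the format, not the capstone's own method, and not a full parser.

```python
def hibernation_state(header: bytes) -> str:
    """Triage a hiberfil.sys by its leading 4-byte signature.

    'hibr'/'HIBR': file holds a compressed memory image (hibernated)
    'wake'/'WAKE': system resumed; header invalidated, but compressed
                   pages may survive in the remainder of the file
    zeroed:        signature wiped on a clean resume
    """
    sig = header[:4]
    if sig in (b"hibr", b"HIBR"):
        return "hibernated"
    if sig in (b"wake", b"WAKE"):
        return "resumed"
    if sig == b"\x00\x00\x00\x00":
        return "zeroed"
    return "unknown"

# Example usage against an exported image (path is hypothetical):
# with open("evidence/hiberfil.sys", "rb") as f:
#     print(hibernation_state(f.read(4)))
```

Even a "resumed" or "zeroed" file can be forensically valuable, since compressed memory pages may persist past the header, which is one reason the abstract stresses awareness of this artifact.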
16

Adopting Workgroup Collaboration Tools in 3D Virtual Worlds

Schott, Thomas R. 11 September 2014
Collaboration is vital in today's information age, and tools are increasingly used to bring together teams that are geographically dispersed. Second Life, a 3D virtual world, can incorporate most of the visual, auditory, and spatial elements of the real world, and can create a feeling of presence, the sense of actually "being there", for users. Common 2D groupware collaboration tools, such as the web conferencing and conference calls used for virtual team collaboration in professional contexts, are key enablers for virtual teams. However, businesses and organizations have not adopted virtual worlds for virtual teams and workgroup collaboration. Shen & Eder (2009) conducted a study applying their modified Technology Acceptance Model (TAM) to the adoption of Second Life for business purposes, using as participants college students who were new to Second Life. The purpose of this research is to examine how the seven factors identified in Shen and Eder's (2009) extended TAM relate to the behavioral intention to use workgroup collaboration tools in Second Life, using a non-student sample of experienced Second Life users that was more demographically representative of the Second Life population. Although this research supported many of Shen and Eder's findings, it found a negative relationship between the construct of perceived enjoyment and behavioral intent. This finding is important because, contrary to the positive relationship observed in gaming and entertainment environments, perceived enjoyment is not an antecedent of behavioral intention toward 3D virtual worlds when they are used for productivity activities. The results of this study may provide insight for tool developers and integrators on where to focus efforts that lead to improved adoption of these workgroup collaboration tools.
17

Supporting source code comprehension during software evolution and maintenance

Alhindawi, Nouh 13 June 2014
This dissertation addresses the problems of program comprehension in support of the evolution of large-scale software systems. The research concerns how software engineers locate features and concepts, and categorize changes, within very large bodies of source code and their versioned histories. More specifically, advanced Information Retrieval (IR) and Natural Language Processing (NLP) techniques are utilized and enhanced to support various software engineering tasks. This research is not aimed at directly improving IR or NLP approaches; rather, it is aimed at understanding how additional information can be leveraged to improve the final results. The work advances the field by investigating approaches to augment and re-document source code with different types of abstract behavior information. The hypothesis is that enriching the source code corpus with meaningful descriptive information, and integrating this orthogonal information (semantic and structural) extracted from source code, will improve the results of IR methods for indexing and querying information. Moreover, adding this new information to a corpus is a form of supervision: a priori knowledge is often used to direct and supervise machine-learning and IR approaches.

The main contributions of this dissertation involve improving on the results of previous work in feature location and source code querying. The dissertation demonstrates that the addition of statically derived information from source code (e.g., method stereotypes) can improve the results of IR methods applied to the problem of feature location. Further contributions include showing the effects of excluding certain textual information (comments and function calls) when indexing source code for feature/concept location. Moreover, the dissertation demonstrates an IR-based method of natural-language topic extraction that assists developers in gaining an overview of past maintenance activities based on software repository commits.

The ultimate goal of this work is to reduce the cost, effort, and time of software maintenance by improving the results of previous work in feature location and source code querying, and by supporting a new platform for enhancing program comprehension and facilitating software engineering research.
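A minimal sketch of the IR machinery this line of work builds on, TF-IDF indexing plus cosine-similarity ranking over a corpus enriched with identifier, comment, and stereotype terms, might look as follows. The methods, comments, and stereotype tags are invented for illustration and are not from the dissertation's corpus.

```python
import math
import re
from collections import Counter

# Toy corpus: each method is indexed by its identifier, comment words,
# and a method-stereotype tag (the kind of statically derived term the
# dissertation adds to the corpus). All entries are hypothetical.
METHODS = {
    "save_file":   "write the buffer out to disk stereotype command",
    "load_file":   "read a file from disk into the buffer stereotype accessor",
    "render_menu": "draw menu items on the screen stereotype command",
}

def tokens(text):
    return re.findall(r"[a-z]+", text.lower())

def build_index(docs):
    """Build smoothed TF-IDF vectors over identifier + comment terms."""
    n = len(docs)
    df = Counter()
    for name, text in docs.items():
        df.update(set(tokens(name + " " + text)))
    return {
        name: {t: c * (math.log((1 + n) / (1 + df[t])) + 1)
               for t, c in Counter(tokens(name + " " + text)).items()}
        for name, text in docs.items()
    }

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def locate(query, index):
    """Rank methods by similarity to a natural-language feature query."""
    q = {t: 1.0 for t in tokens(query)}
    return sorted(index, key=lambda name: cosine(q, index[name]), reverse=True)

ranking = locate("save the buffer to disk", build_index(METHODS))
```

The dissertation's point is that enriching each document with extra terms (such as the stereotype tags above) changes what the query matches, which is why corpus construction matters as much as the retrieval model itself.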
18

In-network processing for mission-critical wireless networked sensing and control: A real-time, efficiency, and resiliency perspective

Xiang, Qiao 13 June 2014
As wireless cyber-physical systems (WCPS) are increasingly deployed in mission-critical applications, it becomes imperative to consider application QoS requirements in in-network processing (INP). In this dissertation, we explore the potential of two INP methods, packet packing and network coding, for improving network performance while satisfying application QoS requirements. We find that these two techniques not only increase the energy efficiency, reliability, and throughput of WCPS while satisfying application QoS requirements in a relatively static environment, but also provide low-cost proactive protection against transient node failures in a more dynamic wireless environment.

We first study the problem of jointly optimizing packet packing and the timeliness of data delivery. We identify the conditions under which the problem is strongly NP-hard, and we find that the problem's complexity depends heavily on the aggregation constraints rather than on network and traffic properties. For the cases where the problem is NP-hard, we show that there is no polynomial-time approximation scheme (PTAS); for the cases where it can be solved in polynomial time, we design polynomial-time offline algorithms for finding optimal packet packing schemes. We design a distributed, online protocol, tPack, that schedules packet transmissions to maximize the local utility of packet packing at each node. We evaluate the properties of tPack in the NetEye testbed and find that jointly optimizing data delivery timeliness and packet packing, while considering real-world aggregation constraints, significantly improves network performance.

We then address the problem of minimizing the transmission cost of network-coding-based routing in sensor networks. To the best of our knowledge, we propose the first mathematical framework for computing the expected transmission cost of NC-based routing in terms of the expected number of transmissions. Based on this framework, we design a polynomial-time greedy algorithm for forwarder set selection and prove its optimality with respect to transmission cost minimization. We then design EENCR, an energy-efficient NC-based routing protocol that implements our forwarder set selection algorithm to minimize the overall transmission cost. Through a comparative study of EENCR and other state-of-the-art routing protocols, we show that EENCR significantly outperforms CTP, MORE, and CodeOR in delivery reliability, delivery cost, and network goodput.

Furthermore, we study the 1+1 proactive protection problem using network coding. We show that, even under a simplified setting, finding two node-disjoint routing braids with minimal total cost is NP-hard. We then design a heuristic algorithm that constructs two node-disjoint braids whose transmission cost is upper-bounded by that of the two shortest node-disjoint paths, and we design ProNCP, a proactive NC-based protection protocol using a design philosophy similar to EENCR's. We evaluate the performance of ProNCP under various transient network failure scenarios. Experimental results show that ProNCP is resilient to these failures and delivers stable performance in terms of reliability, delivery cost, and goodput.

Our findings explore the challenges, benefits, and solutions involved in designing real-time, efficient, resilient, and QoS-guaranteed wireless cyber-physical systems, and they shed light on future research on related topics.
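The size-constrained aggregation at the heart of packet packing can be sketched as a simple greedy batching pass. This illustrates only the general idea; tPack itself is a distributed, utility-driven protocol that also optimizes delivery timeliness, and the packet sizes and deadlines below are invented.

```python
def pack_packets(packets, mtu):
    """Greedy packet packing: walk packets in deadline (EDF) order and
    aggregate them into frames, flushing a frame whenever adding the
    next packet would overflow the MTU. A real scheme must also weigh
    per-packet delivery deadlines against the energy savings of larger
    frames, which is the joint optimization the dissertation studies.

    packets: list of (payload_size, deadline) tuples.
    Returns a list of frames, each a list of packet indices.
    """
    order = sorted(range(len(packets)), key=lambda i: packets[i][1])
    frames, frame, used = [], [], 0
    for i in order:
        size = packets[i][0]
        if frame and used + size > mtu:
            frames.append(frame)   # flush: next packet would overflow
            frame, used = [], 0
        frame.append(i)
        used += size
    if frame:
        frames.append(frame)
    return frames

# Five small packets (size, deadline) against a 100-byte MTU.
frames = pack_packets([(40, 1), (30, 2), (50, 3), (20, 4), (60, 5)], mtu=100)
```

Fewer, fuller frames amortize per-transmission overhead (headers, radio wake-ups), which is where the energy-efficiency gains reported in the abstract come from.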
19

An analysis of open source security software products downloads

Barta, Brian J. 16 April 2014
Despite the continued demand for open source security software, a gap persists in the identification of factors related to the success of open source security software. No studies accurately assess the extent of this gap, particularly with respect to the strength of the relationships between open source software development attributes and the number of security software downloads. The research conducted in this study investigates the strength of the relationships between particular open source software project development factors and a particular measure of open source security software success. The research focuses on open source software development with an emphasis on anti-virus, firewall, and intrusion detection software. Also reviewed in this study are key cyber-security events that have shaped the cyber landscape, along with descriptions of some security technologies that emerged as a result of those events. The level of correlation between the dependent variable, number of software downloads, and the independent variables, project team size and number of software project events, is analyzed in this research.
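The correlation analysis described above can be reproduced in miniature with a plain Pearson coefficient. The team-size and download figures below are invented for illustration and are not data from the study.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-project figures: team size vs. download counts.
team_size = [2, 3, 5, 8, 13]
downloads = [120, 180, 260, 410, 650]
r = pearson(team_size, downloads)
```

A coefficient near +1 would indicate a strong positive linear relationship between team size and downloads; a study like this one would also test the coefficient's statistical significance before drawing conclusions.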
20

Use of double-loop learning to combat advanced persistent threat: Multiple case studies

Lamb, Christopher J. 12 February 2014
The Advanced Persistent Threat (APT) presents an ever-present and growing threat to organizations across the globe, and traditional Information Technology (IT) incident response falls short in effectively addressing it. This researcher investigated the use of single-loop and double-loop learning in organizations with incident response processes designed to combat the APT. Two cases were examined in organizations employing an internal incident response team; a third case examined an organization providing incident response as a service for addressing APT compromises. The study developed four themes: the inefficacy of single-loop learning in addressing the APT, the need for better visibility within corporate infrastructure, the need for continuous improvement and bi-directional knowledge flow, and the need for effective knowledge management. Based on these themes, a conceptual model was developed that modifies the traditional incident response process. Three implications were derived from the research. First, perimeter defense falls short when addressing the APT. Second, the preparation phase of incident response requires modification, along with the addition of a new baseline loop phase running contiguously with the entire process. Finally, opportunistic learning needs to be encouraged in addressing the APT.
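The single- versus double-loop distinction can be made concrete with a small sketch: a first-time root cause gets an action-level (single-loop) fix, while a recurring one triggers a review of the governing assumptions (double-loop), echoing the continuous baseline loop the conceptual model adds. The incident fields and the recurrence threshold here are illustrative, not part of the study's model.

```python
from collections import Counter

def plan_response(incidents):
    """Map each root cause to a learning loop.

    Single loop: adjust actions (remediate the host, block the
    indicator) without questioning governing assumptions.
    Double loop: a recurring root cause prompts revisiting the
    assumptions themselves (e.g., that perimeter defense suffices).
    """
    counts = Counter(i["root_cause"] for i in incidents)
    return {
        cause: ("double-loop: revise baseline assumptions"
                if n > 1 else "single-loop: remediate and monitor")
        for cause, n in counts.items()
    }

plan = plan_response([
    {"root_cause": "phishing"},
    {"root_cause": "phishing"},
    {"root_cause": "vpn misconfiguration"},
])
```

The study's finding that single-loop learning alone is ineffective against the APT corresponds to a process that only ever takes the first branch, never questioning the assumptions that let the same compromise recur.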
