About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD).

Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
51

Attributes effecting software testing estimation; is organizational trust an issue?

Hammoud, Wissam 05 September 2014 (has links)
This quantitative correlational research explored the potential association between levels of organizational trust and software testing estimation. It did so by examining the relationships between organizational trust, testers' expertise, the organizational technology used, and the number of hours, number of testers, and time-coding estimated by the software testers. The research, conducted in the software testing department of a health insurance organization, employed the Organizational Trust Inventory-Short Form (OTI-SF) developed by Philip Bromiley and Larry Cummings and revealed a strong relationship between organizational trust and software testing estimation. The research reviews historical theories of organizational trust and includes a detailed discussion of software testing practices and software testing estimation. By examining the significant impact of organizational trust on project estimating and time-coding, this research can help software testing leaders improve their project planning and management processes by improving the levels of trust within their organizations.
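As an illustrative sketch only (the scores, hours, and variable names below are hypothetical and not taken from the study), a correlational analysis of the kind described above might relate OTI-SF trust scores to the testing hours estimated by each tester:

```python
# Hypothetical illustration of a correlational analysis like the one described
# above; the data are invented, not taken from the study.
from scipy.stats import pearsonr

# OTI-SF organizational trust scores and the testing hours estimated by each tester.
trust_scores    = [3.1, 4.2, 2.8, 4.8, 3.9, 4.5, 2.5, 3.6]
estimated_hours = [40, 62, 35, 70, 55, 66, 30, 50]

r, p_value = pearsonr(trust_scores, estimated_hours)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")  # strength and significance of the association
```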
52

An Automated System for Rapid and Secure Device Sanitization

LaBarge, Ralph S. 24 June 2014 (has links)
Public and private organizations face the challenge of protecting their networks from cyber-attacks while reducing the amount of time and money spent on Information Technology. Organizations can reduce their expenditures by reusing server, switch, and router hardware, but they must use reliable and efficient methods of sanitizing these devices before they can be redeployed. The sanitization process removes proprietary, sensitive, or classified data, as well as persistent malware, from a device prior to reuse. The Johns Hopkins University Applied Physics Laboratory has developed an automated, rapid, and secure method for sanitizing servers, switches, and routers. This sanitization method was implemented and tested on several different types of network devices during the Cyber Measurement & Analysis Center project, which was funded under Phases I and II of the DARPA National Cyber Range program. The performance of the automated sanitization system was excellent: it achieved an order-of-magnitude reduction in the time required to sanitize servers, routers, and switches, and a significant improvement in the effectiveness of the sanitization process through the addition of persistent malware removal.
53

"Measuring Operational Effectiveness of Information Technology Infrastructure Library (ITIL) and the Impact of Critical Facilities Inclusion in the Process."

Woodell, Eric A. 31 May 2014 (has links)
Information Technology (IT) professionals use the Information Technology Infrastructure Library (ITIL) process to better manage their business operations, measure performance, improve reliability, and lower costs. This study examined the operational results of data centers that use ITIL against those that do not, and whether the results change when traditional facilities engineers are included in the process. Overall, IT departments using ITIL processes showed no statistically significant improvements compared to those that do not. Inclusion of Critical Facilities (CF) personnel in the framework offered a statistically significant improvement in the overall reliability of their data centers. IT departments that do not include CF personnel in the ITIL framework have a slightly lower level of reliability than those that do not use the ITIL processes at all.
54

An investigation of data privacy and utility using machine learning as a gauge

Mivule, Kato 18 June 2014 (has links)
The purpose of this investigation is to study and pursue a user-defined approach to preserving data privacy while maintaining an acceptable level of data utility, using machine learning classification techniques as a gauge in the generation of synthetic data sets. This dissertation deals with data privacy, data utility, machine learning classification, and the generation of synthetic data sets; data privacy and utility preservation using machine learning classification as a gauge is its central focus. Many organizations that transact in large amounts of data have to comply with state, federal, and international laws to guarantee that the privacy of individuals and other sensitive data is not compromised. Yet at some point during the data privacy process, data loses its utility - a measure of how useful a privatized dataset is to the user of that dataset. Data privacy researchers have documented that attaining an optimal balance between data privacy and utility is an NP-hard challenge, and thus an intractable problem.

We therefore propose the classification error gauge (x-CEG) approach, a data utility quantification concept that employs machine learning classification techniques to gauge data utility based on the classification error. In the initial phase of this approach, a data privacy algorithm such as differential privacy, Gaussian noise addition, generalization, or k-anonymity is applied to a dataset for confidentiality, generating a privatized synthetic data set. The privatized synthetic data set is then passed through a machine learning classifier, after which the classification error is measured. If the classification error is less than or equal to a set threshold, then better utility might be achieved; otherwise, the data privacy parameters are adjusted, the refined synthetic data set is sent back to the machine learning classifier, and the process repeats until the error threshold is reached. Additionally, this study presents the Comparative x-CEG concept, in which a privatized synthetic data set is passed through a series of classifiers, each of which returns a classification error, and the classifier with the lowest classification error is chosen after parameter adjustments, an indication of better data utility.

Preliminary results from this investigation show that fine-tuning parameters in data privacy procedures, for example in the case of differential privacy, or increasing the number of weak learners in an ensemble classifier, might lead to lower classification error and thus better utility. Furthermore, this study explores the application of this approach by employing signal processing techniques in the generation of privatized synthetic data sets to improve data utility. This dissertation presents theoretical and empirical work examining various data privacy and utility methodologies using machine learning classification as a gauge. It also presents a resourceful approach to the generation of privatized synthetic data sets and an innovative conceptual framework for the data privacy engineering process.
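The following is a minimal sketch of the x-CEG loop described in this abstract, assuming Gaussian noise addition as the privacy step, a decision tree as the gauge classifier, and a simple multiplicative adjustment of the noise parameter; the function and parameter names are illustrative, not taken from the dissertation:

```python
# A minimal, hypothetical sketch of the x-CEG loop: privatize, classify,
# measure classification error, and adjust the privacy parameter until the
# error falls below a chosen threshold. Names and defaults are illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier


def x_ceg(X, y, error_threshold=0.20, noise_scale=1.0, max_rounds=10, seed=0):
    """Return a privatized copy of X and the classification error it achieves."""
    rng = np.random.default_rng(seed)
    for _ in range(max_rounds):
        # Privacy step: perturb the numeric features with Gaussian noise.
        X_priv = X + rng.normal(0.0, noise_scale, X.shape)

        # Utility gauge: train a classifier on the privatized data and
        # measure its classification error on a held-out split.
        X_tr, X_te, y_tr, y_te = train_test_split(X_priv, y, test_size=0.3, random_state=seed)
        clf = DecisionTreeClassifier(random_state=seed).fit(X_tr, y_tr)
        error = 1.0 - clf.score(X_te, y_te)

        if error <= error_threshold:   # acceptable utility reached
            return X_priv, error
        noise_scale *= 0.8             # otherwise relax the privacy parameter and repeat
    return X_priv, error
```

In the Comparative x-CEG variant described above, the same loop would be run over a list of candidate classifiers and the one yielding the lowest classification error retained.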
55

Validating the OCTAVE Allegro Information Systems Risk Assessment Methodology| A Case Study

Keating, Corland G. 22 March 2014 (has links)
An information system (IS) risk assessment is an important part of any successful security management strategy. Risk assessments help organizations to identify mission-critical IS assets and prioritize risk mitigation efforts. Many risk assessment methodologies, however, are complex and can only be completed successfully by highly qualified and experienced security experts. Small-sized organizations, including small-sized colleges and universities, are challenged to conduct a risk assessment due to their financial constraints and lack of IS security expertise. Therefore, most small-sized colleges and universities do not perform IS risk assessments, which leaves the institution's data vulnerable to security incursions. The negative consequences of a security breach at these institutions can include a decline in the institution's reputation, loss of financial revenue, and exposure to lawsuits.

The goal of this research is to address the challenge of conducting IS risk assessments in small-sized colleges and universities by validating the use of the Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) Allegro risk assessment methodology at a small-sized university. OCTAVE Allegro is a streamlined risk assessment method created by Carnegie Mellon University's Software Engineering Institute. OCTAVE Allegro has the ability to provide robust risk assessment results, with a relatively small investment in time and resources, even for organizations that do not have extensive risk management expertise.

The successful use of OCTAVE Allegro was validated using a case study that documented the process and outcome of conducting a risk assessment at George Fox University (GFU), a small-sized, private university located in Newberg, Oregon. GFU has the typical constraints of other small-sized universities: it has a relatively small information technology staff with limited expertise in conducting IS risk assessments and lacks a dedicated IS risk manager. Nevertheless, OCTAVE Allegro was relatively easy for GFU staff to understand, provided GFU with the ability to document the security requirements of their IS assets, helped to identify and evaluate IS security concerns, and provided an objective way to prioritize IS security projects. Thus, this research validates that OCTAVE Allegro is an appropriate and effective IS risk assessment method for small-sized colleges and universities.
56

Designing online conversations to engage local practice: a framework for the mutual development of tacit and explicit knowledge

Wise, Alyssa Friend. January 2007 (has links)
Thesis (Ph.D.)--Indiana University, School of Education, 2007. Source: Dissertation Abstracts International, Volume: 68-07, Section: A, page: 2816. Adviser: Thomas M. Duffy. Title from dissertation home page (viewed Apr. 14, 2008).
57

Information technology as an agent of post-modernism

Nel, David Ferguson. January 2006 (has links)
Thesis (MCom (Informatics))--University of Pretoria, 2006. Includes bibliographical references (leaves 118-127).
58

An exploration of the factors associated with the attitudes of high school EFL teachers in Syria toward information and communication technology

Albirini, Abdulkafi. January 2004 (has links)
Thesis (Ph.D.)--Ohio State University, 2004. Document formatted into pages; contains 179 p. Includes bibliographical references. Abstract available online via OhioLINK's ETD Center; full text release delayed at author's request until 17 Aug. 2005.
59

The strategic importance of information technology in Hong Kong insurance industry

Chan, Pui-leung. January 1998 (has links)
Thesis (M.B.A.)--University of Hong Kong, 1998. Includes bibliographical references.
60

An investigation of a framework to evaluate computer supported collaborative work

Beauvais, Erik Alexander Maoui. January 1999 (has links)
Thesis (M.Sc. (Computer Science and Information Systems))--Rhodes University, 2000.
