Software Design Metrics for Predicting Maintainability of Service-Oriented Software

Perepletchikov, Mikhail, mikhail.perepletchikov@rmit.edu.au, January 2009
As the pace of business change increases, service-oriented (SO) solutions should facilitate easier maintainability as underlying business logic and rules change. To date, little effort has been dedicated to considering how the structural properties of coupling and cohesion may impact the maintainability of SO software products. Moreover, due to the unique design characteristics of Service-Oriented Computing (SOC), existing Procedural and Object-Oriented (OO) software metrics are not sufficient for the accurate measurement of service-oriented design structures. This thesis contributes to the field of SOC, and to Software Engineering in general, by proposing and evaluating a suite of design-level coupling and cohesion metrics for predicting the maintainability of service-oriented software products early in the Software Development Life Cycle (SDLC). The proposed metrics can provide the following benefits: i) facilitate design decisions that could lead to the specification of quality SO designs that can be maintained more easily; ii) identify design problems that can potentially have a negative effect on the maintainability of existing service-oriented design structures; and iii) support more effective control of maintainability in the earlier stages of the SDLC.

More specifically, the following research was conducted as part of this thesis:

- A formal mathematical model covering the structural and behavioural properties of service-oriented system design was specified.
- Software metrics were defined in a precise, unambiguous, and formal manner using the above model.
- The metrics were theoretically validated and empirically evaluated in order to determine the success of this thesis, as follows:
  a. Theoretical validation was based on the property-based software engineering measurement framework. All the proposed metrics were deemed theoretically valid.
  b. Empirical evaluation employed a controlled experimental study involving ten participants who performed a range of maintenance tasks on two SO systems developed (and measured using the proposed metrics) specifically for this study. The majority of the experimental outcomes compared favourably with our expectations and hypotheses. More specifically, the results indicated that most of the proposed metrics can be used to predict the maintainability of service-oriented software products early in the SDLC, thereby providing evidence for the validity and potential usefulness of the derived metrics. Nevertheless, a broader range of industrial-scale experiments and analyses is required to fully demonstrate the practical applicability of the metrics; this has been left to future work.
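For a concrete flavour of what design-level coupling measurement can look like, the sketch below counts fan-in and fan-out over a service dependency graph extracted from a design model. This is a minimal illustration of the general idea only: the service names are invented, and simple dependency counts are not the metric suite actually defined in the thesis.

```python
# Hypothetical sketch: simple design-level coupling counts for a
# service-oriented design. Illustrates the general idea of measuring
# coupling from a design artefact; not the thesis's actual metrics.
from collections import defaultdict

# Each edge (a, b) means "service a invokes an operation of service b".
edges = [("Orders", "Billing"), ("Orders", "Inventory"), ("Billing", "Audit")]

fan_out = defaultdict(int)  # how many services each service depends on
fan_in = defaultdict(int)   # how many services depend on each service

for caller, callee in edges:
    fan_out[caller] += 1
    fan_in[callee] += 1

for service in sorted({s for e in edges for s in e}):
    # High fan-in or fan-out at design time flags services whose changes
    # are likely to ripple, i.e. candidates for maintainability problems.
    print(f"{service}: fan-in={fan_in[service]}, fan-out={fan_out[service]}")
```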

Design and implementation of a framework for security metrics creation / Konstruktion och användning av ett ramverk för säkerhetsmetriker

Lundholm, Kristoffer, January 2009
Measuring information security is the key to unlocking the knowledge of how secure information systems really are. In order to perform these measurements, security metrics can be used. Since all systems and organizations are different, there is no single set of metrics that is generally applicable. In order to help organizations create metrics, this thesis presents a metrics creation framework that provides a structured way of creating the necessary metrics for any information system. The framework takes a high-level information security goal as input and transforms it into metrics through decomposition of goals; the resulting sub-goals are then inserted into a template. The thesis also presents a set of metrics based on a minimum level of information security produced by the Swedish Emergency Management Agency. This set of metrics can be used to show compliance with the minimum level, or as a base when a more extensive metrics program is created.
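To illustrate the decomposition step, the sketch below breaks one high-level security goal into sub-goals, each paired with a question, measurement data, and a target, roughly in the style of goal-question-metric approaches. The goal, field names, and figures are invented for illustration and are not the framework's actual template.

```python
# Illustrative sketch of goal decomposition into metrics. All names,
# fields, and figures are assumptions, not the thesis's actual template.
goal = "All sensitive data shall be protected in transit"

# Each sub-goal becomes one metric: a question, sample data, and a target.
metrics = {
    "encryption coverage": {
        "question": "What fraction of external connections use TLS?",
        "data": (18, 20),   # (connections using TLS, total connections)
        "target": 1.0,
    },
    "certificate validity": {
        "question": "What fraction of TLS certificates are unexpired?",
        "data": (9, 10),    # (valid certificates, total certificates)
        "target": 1.0,
    },
}

for name, metric in metrics.items():
    part, whole = metric["data"]
    value = part / whole
    status = "meets target" if value >= metric["target"] else "gap"
    print(f"{goal} / {name}: {value:.2f} ({status})")
```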

A Method for Assessment of System Security

Andersson, Rikard, January 2005
With the increasing use of extensive IT systems for sensitive or safety-critical applications, the matter of IT security is becoming more important. In order to make sensible decisions about security, there is a need for measures and metrics for computer security. There currently exist no established methods to assess the security of information systems.

This thesis presents a method for assessing the security of computer systems. The basis of the method is that security-relevant characteristics of components are modelled by a set of security features, and connections between components are modelled by special functions that capture the relations between the security features of the components. These modelled components and relations are used to assess the security of each component in the context of the system, and the resulting system-dependent security values are used to assess the overall security of the system as a whole.

A software tool that implements the method has been developed and used to demonstrate it. The examples studied show that the method delivers reasonable results, but the exact interpretation of the results is not clear, due to the lack of security metrics.
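The sketch below illustrates the modelling idea at toy scale: components carry security feature scores, and a simple relation function derives each component's system-dependent value from its directly connected neighbours. The two-feature model and the min-based combination rule are assumptions made for illustration, not the actual functions defined by the method.

```python
# Toy sketch: components modelled by security feature scores in [0, 1];
# connections adjust each component's value in its system context.
# Feature names and the min-based rule are illustrative assumptions.
components = {
    "webserver": {"authentication": 0.8, "patch_level": 0.6},
    "database":  {"authentication": 0.9, "patch_level": 0.9},
}
connections = [("webserver", "database")]  # webserver talks to database

def own_score(name):
    # A component's stand-alone value: its weakest security feature.
    return min(components[name].values())

def contextual_score(name):
    # System-dependent value: limited by the weakest directly connected
    # component (a crude stand-in for the method's relation functions).
    neighbours = [b for a, b in connections if a == name] + \
                 [a for a, b in connections if b == name]
    return min([own_score(name)] + [own_score(n) for n in neighbours])

scores = {c: contextual_score(c) for c in components}
print(scores)                # each component assessed in system context
print(min(scores.values()))  # overall security of the system as a whole
```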

Scenario-Based Evaluation of a Method for System Security Assessment

Bengtsson, Jonna, January 2005
This thesis evaluates a method for system security assessment (MASS), developed at the Swedish Defence Research Agency in Linköping. The evaluation has been carried out with the use of scenarios consisting of three example networks and several modifications of those. The results from the scenarios are compared with the author's expectations, followed by a general discussion of whether the results are realistic.

The evaluation is not meant to be exhaustive, so even if MASS had passed the evaluation with flying colors, this could not have been regarded as proof that the method works as intended. However, this was not the case; even though MASS responded well to the majority of the modifications, some issues indicating possible adjustments or improvements were found and are commented on in this report.

The conclusion from the evaluation is therefore that there are issues to be solved and that the evaluated version of MASS is not ready to be used to evaluate real networks. The method shows enough promise not to be discarded, though. With the aid of the issues found in this thesis, it should be developed further, along with the supporting tools, and be re-evaluated.

Mathematical foundation needed for development of IT security metrics

Bengtsson, Mattias, January 2007
IT security metrics are used to achieve an IT security assessment of certain parts of the IT security environment. There is neither a consensus on the definition of an IT security metric nor a natural scale type for IT security, which makes the interpretation of IT security difficult. To accomplish a comprehensive IT security assessment, one must aggregate the IT security values into compounded values.

When developing IT security metrics, it is important that only permissible mathematical operations are applied, so that the information is maintained all the way through the metric. A sound mathematical foundation is needed for this.

The main results produced by the efforts in this thesis are:

• Identification of the activities needed for IT security assessment when using IT security metrics.
• A method for selecting a set of security metrics with respect to goals and criteria, which is also used to aggregate the security values generated from a set of security metrics into compounded, higher-level security values (see the sketch below).
• A mathematical foundation needed for the development of security metrics.
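As a small illustration of the scale-type point, the sketch below maps scale types to permissible aggregation operations and aggregates a set of ordinal security ratings with the median rather than the mean. The mapping is standard measurement theory, not the thesis's specific method.

```python
# Sketch: which aggregation preserves information depends on the scale
# type of the metric. This mapping is textbook measurement theory, not
# the thesis's specific method.
import statistics

PERMISSIBLE = {
    "ordinal": statistics.median,  # order is meaningful, distances are not
    "interval": statistics.mean,   # differences are meaningful
    "ratio": statistics.mean,      # differences and ratios are meaningful
}

def aggregate(values, scale_type):
    if scale_type not in PERMISSIBLE:
        raise ValueError(f"no permissible aggregation for {scale_type!r}")
    return PERMISSIBLE[scale_type](values)

# Ordinal ratings (1=low .. 5=high) from five sub-assessments: take the
# median, since averaging ordinal codes is not a permissible operation.
print(aggregate([2, 3, 5, 4, 4], "ordinal"))  # -> 4
```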

Code Profiling: Static Code Analysis

Borchert, Thomas, January 2008
Capturing the quality of software, and detecting the sections within it that deserve further scrutiny, is of high interest to industry as well as to education. Project managers request quality reports in order to evaluate the current status and to initiate appropriate improvement actions, and teachers need to detect students who require extra attention and help in certain programming aspects. By means of software measurement, software characteristics can be quantified and the produced measures analyzed to gain an understanding of the underlying software quality.

In this study, the technique of code profiling (the activity of creating a summary of distinctive characteristics of software code) was inspected, formalized, and conducted by means of a sample group of 19 industry and 37 student programs. When software projects are analyzed by means of software measurements, a considerable amount of data is produced; the task is to organize the data and draw meaningful information from the measures produced, quickly and without high expense.

The results of this study indicated that code profiling can be a useful technique for quick program comparisons and continuous quality observations, with several application scenarios in both industry and education.
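A minimal sketch of what a code profile might compute is shown below. The three measures chosen (non-blank lines of code, comment ratio, maximum line length) are illustrative stand-ins, not the measure set used in the study.

```python
# Minimal code-profiling sketch: summarize a Python source file with a
# few simple measures. The measures are illustrative assumptions, not
# the study's actual set of software measurements.
def profile(path):
    with open(path, encoding="utf-8") as f:
        lines = f.readlines()
    loc = sum(1 for line in lines if line.strip())  # non-blank lines
    comments = sum(1 for line in lines if line.strip().startswith("#"))
    max_len = max((len(line.rstrip()) for line in lines), default=0)
    return {
        "loc": loc,
        "comment_ratio": comments / loc if loc else 0.0,
        "max_line_length": max_len,
    }

# Profiles of many programs can then be compared side by side, e.g. to
# spot student submissions that deviate strongly from the rest.
print(profile(__file__))
```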

Development of New Methods for Inferring and Evaluating Phylogenetic Trees

Hill, Tobias, January 2007
Inferring phylogeny is a difficult computational problem, and heuristics are necessary to minimize the time spent evaluating non-optimal trees. In paper I, we developed an approach for heuristic searching using a genetic algorithm. Genetic algorithms mimic natural selection's ability to solve complex problems. The algorithm can reduce the time required for weighted maximum parsimony phylogenetic inference using protein sequences, especially for data sets involving large numbers of taxa.

Evaluating and comparing the ability of phylogenetic methods to infer the correct topology is complex. In paper II, we developed software that determines the minimum subtree prune and regraft (SPR) distance between binary trees to ease the process. The minimum SPR distance can be used to measure the incongruence between trees inferred using different methods. Given a known topology, the methods could be evaluated on their ability to infer the correct phylogeny from specific data.

The minimum SPR software also identifies the intermediate trees that separate two binary trees. In paper III we developed software that, given a set of incongruent trees, determines the median SPR consensus tree, i.e. the tree that explains the input trees with a minimum of SPR operations. We investigated the median SPR consensus tree and its possible interpretation as a species tree given a set of gene trees. We used a set of α-proteobacteria gene trees to test the ability of the algorithm to infer a species tree and compared it to previous studies. The results show that the algorithm can successfully reconstruct a species tree.

Expressed sequence tag (EST) data is important in determining intron-exon boundaries, single nucleotide polymorphisms, and the coding sequences of genes. In paper IV we aligned ESTs to the genome to evaluate the quality of EST data. The results show that many ESTs are contaminated by vector sequences and low-quality regions. The reliability of EST data is largely determined by the clustering of the ESTs and the association of the clusters with the correct portion of the genome. We investigated the performance of EST clustering using the genome as a template, compared to previously existing methods using pair-wise alignments. The results show that using the genome as guidance improves the resulting EST clusters with respect to the extent to which ESTs originating from the same transcriptional unit are separated into disjunct clusters.
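To make the elementary tree operation concrete, the sketch below performs a single SPR move on a small rooted binary tree encoded as nested tuples. Real SPR-distance software, such as that described in paper II, works on unrooted trees and searches over all possible moves; this toy version only illustrates the prune and regraft steps themselves.

```python
# Toy sketch of one SPR (subtree prune and regraft) move on a rooted
# binary tree encoded as nested tuples. Illustration only; not the
# thesis's software, which handles unrooted trees and computes distances.

def prune(tree, target):
    """Detach `target`; its parent collapses so the sibling takes its place."""
    if not isinstance(tree, tuple):
        return tree, False            # a lone leaf that is not the target
    left, right = tree
    if left == target:
        return right, True
    if right == target:
        return left, True
    new_left, found = prune(left, target)
    if found:
        return (new_left, right), True
    new_right, found = prune(right, target)
    return (left, new_right), found

def regraft(tree, subtree, at_leaf):
    """Reattach `subtree` on the edge directly above leaf `at_leaf`."""
    if tree == at_leaf:
        return (tree, subtree)
    if not isinstance(tree, tuple):
        return tree
    left, right = tree
    return (regraft(left, subtree, at_leaf), regraft(right, subtree, at_leaf))

start = (("A", "B"), ("C", "D"))
rest, _ = prune(start, ("A", "B"))
print(regraft(rest, ("A", "B"), "C"))  # one move away: (('C', ('A', 'B')), 'D')
```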

Guesswork and Entropy as Security Measures for Selective Encryption

Lundin, Reine, January 2012
More and more effort is being spent on security improvements in today's computer environments, with the aim of achieving an appropriate level of security. However, for small computing devices it might be necessary to reduce the computational cost imposed by security in order to attain reasonable performance and/or energy consumption. To accomplish this, selective encryption can be used, which provides confidentiality by encrypting only chosen parts of the information. Previous work on selective encryption has chiefly focused on how to reduce the computational cost while still making the information perceptually secure, but not on how computationally secure the selectively encrypted information is.

Despite the efforts made, and due to the harsh nature of computer security, good quantitative assessment methods for computer security are still lacking. New ways of measuring security are therefore needed in order to better understand, assess, and improve the security of computer environments. Two proposed probabilistic quantitative security measures are entropy and guesswork. Entropy gives the average number of guesses in an optimal binary search attack, and guesswork gives the average number of guesses in an optimal linear search attack. In information theory, a considerable amount of research has been carried out on entropy and on entropy-based metrics; the same does not hold for guesswork.

In this thesis, we evaluate the performance improvement obtained when using the proposed generic selective encryption scheme. We also examine the confidentiality strength of selectively encrypted information by using and adopting entropy and guesswork. Moreover, since guesswork has been less theoretically investigated than entropy, we extend guesswork in several ways and investigate some of its behaviors.
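Taking the abstract's characterizations at face value, the sketch below computes both measures for a small discrete distribution over, say, candidate keys: entropy as -sum(p * log2 p), and guesswork as sum(i * p_i) with the probabilities sorted in descending order, i.e. the optimal linear-search guessing order.

```python
# Sketch of the two security measures for a discrete distribution.
# entropy():   average guesses in an optimal binary search attack
#              (per the abstract's characterization); in bits.
# guesswork(): expected number of guesses in an optimal linear search
#              attack, guessing the most probable candidates first.
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def guesswork(probs):
    ordered = sorted(probs, reverse=True)  # best guessing order
    return sum(i * p for i, p in enumerate(ordered, start=1))

probs = [0.5, 0.25, 0.125, 0.125]  # illustrative distribution
print(entropy(probs))    # 1.75 bits
print(guesswork(probs))  # 1.875 expected guesses
```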

Discovering Constructs and Dimensions for Information Privacy Metrics

Dayarathna, Rasika, January 2013
Privacy is a fundamental human right. During the last decades, in the information age, information privacy has become one of the most essential aspects of privacy. Information privacy is concerned with protecting personal information pertaining to individuals. Organizations, which frequently process personal information, and individuals, who are the subjects of the information, have different needs, rights, and obligations. Organizations need to utilize personal information as a basis for developing tailored services and products for their customers in order to gain an advantage over their competitors. Individuals need assurance from organizations that their personal information is not changed, disclosed, deleted, or misused in any other way. Without this guarantee, individuals will be more unwilling to share their personal information.

Information privacy metrics are a set of parameters used for the quantitative assessment and benchmarking of an organization's measures to protect personal information. These metrics can be used by organizations to demonstrate, and by individuals to evaluate, the type and level of protection given to personal information. Currently, there are no systematically developed, established, or widely used information privacy metrics. Hence, the purpose of this study is to establish a solid foundation for building information privacy metrics by discovering some of the most critical constructs and dimensions of these metrics.

The research was conducted within the general research strategy of design science and by applying research methods such as data collection and analysis informed by grounded theory, as well as surveys using interviews and questionnaires in Sweden and in Sri Lanka. The result is a conceptual model for information privacy metrics, including its basic foundation: the constructs and dimensions of the metrics.

At the time of the doctoral defense, the following paper was unpublished and had the following status: Paper 6: accepted.
