91

A Verification Framework for Access Control in Dynamic Web Applications

Alalfi, Manar 30 April 2010 (has links)
Current technologies such as anti-virus software programs and network firewalls provide reasonably secure protection at the host and network levels, but not at the application level. When network and host-level entry points are comparatively secure, the public interfaces of web applications become the focus of malicious software attacks. In this thesis, we focus on one of the most serious web application vulnerabilities, broken access control. Attackers often try to access unauthorized objects and resources other than URL pages in an indirect way, for instance through indirect access to back-end resources such as databases. The consequences of these attacks can be very destructive, especially when the web application allows administrators to remotely manage users and content over the web. In such cases, attackers are not only able to view unauthorized content, but also to take over site administration. To protect against these types of attacks, we have designed and implemented a security analysis framework for dynamic web applications. A reverse engineering process is performed on an existing dynamic web application to extract a role-based access-control security model. A formal analysis is applied to the recovered model to check access-control security properties. This framework can be used to verify that a dynamic web application conforms to access control policies specified by a security engineer. Our framework provides a set of novel techniques for the analysis and modeling of web applications for the purpose of security verification and validation. It is largely language independent and based on adaptable model recovery, which can support a wide range of security analysis tasks. / Thesis (Ph.D., Computing) -- Queen's University, 2010-04-30 14:30:53.018
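As a rough illustration of the kind of access-control property such a recovered role-based model can be checked against, the Python sketch below encodes a tiny model and one policy. The roles, resources and policy are invented for illustration and are not taken from the thesis or its framework.

```python
# Minimal sketch of checking an access-control property against a
# recovered role-based model. All roles, resources, and the policy
# below are hypothetical examples, not the thesis's actual model.

# Recovered model: role -> set of (action, resource) permissions.
recovered_model = {
    "anonymous": {("read", "public_page")},
    "editor": {("read", "public_page"), ("write", "article_table")},
    "admin": {("read", "public_page"), ("write", "article_table"),
              ("write", "user_table")},
}

def violations(policy, model):
    """Return (role, action, resource) triples that break the policy."""
    found = []
    for role, perms in model.items():
        for action, resource in perms:
            if not policy(role, action, resource):
                found.append((role, action, resource))
    return found

# Example policy: only 'admin' may write to the back-end user table.
def only_admin_writes_users(role, action, resource):
    if action == "write" and resource == "user_table":
        return role == "admin"
    return True

print(violations(only_admin_writes_users, recovered_model))  # expect []
```

An empty result means the recovered model satisfies the stated policy; any triple returned would pinpoint a broken access-control path to report.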
92

Management Aspects of Software Clone Detection and Analysis

2014 June 1900 (has links)
Copying a code fragment and reusing it by pasting it with or without minor modifications is a common practice in software development, adopted for improved productivity. As a result, software systems often have similar segments of code, called software clones or code clones. For many reasons, unintentional clones may also appear in the source code without the developer's awareness. Studies report that significant fractions (5% to 50%) of the code in typical software systems are cloned. Although code cloning may increase initial productivity, it may cause fault propagation, inflate the code base and increase maintenance overhead. Thus, it is believed that code clones should be identified and carefully managed. This Ph.D. thesis contributes to clone management with techniques realized in tools, and with large-scale, in-depth analyses of clones that inform clone management in devising effective techniques and strategies. To support proactive clone management, we have developed a clone detector as a plug-in to the Eclipse IDE. For clone detection, we used a hybrid approach that combines the strengths of both parser-based and text-based techniques. To capture clones that are similar but not exact duplicates, we adopted a novel approach that applies a suffix-tree-based k-difference hybrid algorithm borrowed from computational biology. Instead of targeting all clones in the entire code base, our tool aids clone-aware development by allowing a focused search for clones of any code fragment of interest to the developer. A good understanding of the code cloning phenomenon is a prerequisite for devising efficient clone management strategies. The second phase of the thesis comprises large-scale empirical studies on the characteristics (e.g., proportion, types of similarity, change patterns) of code clones in evolving software systems. Applying statistical techniques, we also made fairly accurate forecasts of the proportion of code clones in future versions of software projects. The outcomes of these studies expose useful insights into the characteristics of evolving clones and their management implications. Once code clones are identified, their management often necessitates careful refactoring, which is dealt with in the third phase of the thesis. Given a large number of clones, it is difficult to decide optimally what to refactor and what not to, especially when there are dependencies among clones and the objective is to minimize refactoring effort and risk while maximizing benefit. In this regard, we developed a novel clone refactoring scheduler that applies a constraint programming approach. We also introduced a novel effort model for estimating the effort needed to refactor clones in source code. We evaluated our clone detector, scheduler and effort model through comparative empirical studies and user studies. Finally, based on our experience and an in-depth analysis of the present state of the art, we outline avenues for further research and development towards the versatile clone management system that we envision.
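The detector described above applies a suffix-tree-based k-difference hybrid algorithm; the Python sketch below illustrates only the underlying k-difference idea (two token sequences count as near-miss clones if they differ by at most k edits) using a plain dynamic-programming edit distance, not the suffix-tree algorithm itself. The code fragments compared are hypothetical.

```python
# Sketch of the k-difference idea behind near-miss clone matching:
# two token sequences are considered clones if they differ by at most
# k edits. This is an ordinary dynamic-programming check, not the
# suffix-tree-based hybrid algorithm the thesis actually uses.

def within_k_differences(a, b, k):
    """True if the edit distance between token lists a and b is <= k."""
    if abs(len(a) - len(b)) > k:
        return False
    prev = list(range(len(b) + 1))
    for i, ta in enumerate(a, start=1):
        cur = [i]
        for j, tb in enumerate(b, start=1):
            cost = 0 if ta == tb else 1
            cur.append(min(prev[j] + 1,         # delete token from a
                           cur[j - 1] + 1,      # insert token from b
                           prev[j - 1] + cost)) # match or substitute
        prev = cur
    return prev[-1] <= k

frag1 = "for i in range ( n ) : total += prices [ i ]".split()
frag2 = "for j in range ( n ) : sum += prices [ j ]".split()
print(within_k_differences(frag1, frag2, k=3))  # True: a near-miss clone
```

The suffix-tree variant serves the same purpose but avoids comparing every pair of fragments directly, which is what makes focused, on-demand clone search practical inside an IDE.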
93

Reverse Engineering Behavioural Models by Filtering out Utilities from Execution Traces

Braun, Edna 10 September 2013 (has links)
An important issue in software evolution is the time and effort needed to understand existing applications. Reverse engineering software to recover behavioural models is difficult, and is complicated by the lack of a standardized way of extracting and visualizing knowledge. In this thesis, we study a technique for automatically extracting static and dynamic data from software, filtering and analysing the data, and visualizing the behavioural model of a selected feature of a software application. We also investigate the usefulness of the generated diagrams as documentation for the software. We present a literature review of studies that have used static and dynamic data analysis for software comprehension. A set of criteria is created, and each approach, including this thesis' technique, is compared using those criteria. We propose an approach to simplify lengthy traces by filtering out software components that are too low level to contribute to a high-level picture of the selected feature. We use static information to identify and remove small and simple (or uncomplicated) software components from the trace. We define a utility method as any element of a program designed for the convenience of the designer and implementer and intended to be accessed from multiple places within a certain scope of the program. Utilityhood is defined as the extent to which a particular method can be considered a utility, and is calculated using different combinations of selected dynamic and static variables. Methods with high utilityhood values are detected and removed iteratively. By eliminating utilities, we are left with a much smaller trace, which is then visualized using the Use Case Map (UCM) notation, a scenario language used to specify and explain the behaviour of complex systems. By comparing the UCMs produced by the different utilityhood formulations, we can identify the algorithm that generates a UCM closest to the mental model of the designers. Although our analysis did not identify an algorithm that was best in all cases, there is a trend: three of the four best algorithms (out of the eight algorithms investigated) used method complexity and method lines of code in their parameters. We also validated the algorithms' results by comparing them with a list of methods given to us by the creators of the software and computing precision and recall. Seven out of the eight participants agreed or strongly agreed that using UCM diagrams to visualize reduced traces is a valid approach, and none disagreed.
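The exact utilityhood formula in the thesis combines selected static and dynamic variables; the Python sketch below uses hypothetical metrics and weights (fan-in, lines of code, cyclomatic complexity) purely to illustrate the shape of the filtering step that removes high-utilityhood methods from a trace.

```python
# Illustrative sketch only: the thesis derives utilityhood from selected
# static and dynamic variables; the weights, attributes and threshold
# below are hypothetical, chosen to show the shape of the filtering step.

trace = ["Order.checkout", "StringUtil.trim", "Order.addItem",
         "Logger.debug", "Cart.total", "StringUtil.trim"]

# Per-method static/dynamic measures (hypothetical values).
metrics = {
    "Order.checkout":  {"fan_in": 2,  "loc": 40, "complexity": 7},
    "Order.addItem":   {"fan_in": 3,  "loc": 15, "complexity": 3},
    "Cart.total":      {"fan_in": 4,  "loc": 12, "complexity": 2},
    "StringUtil.trim": {"fan_in": 60, "loc": 3,  "complexity": 1},
    "Logger.debug":    {"fan_in": 85, "loc": 2,  "complexity": 1},
}

def utilityhood(m):
    """Higher when a method is called from many places, is short and is
    simple -- the typical profile of a utility method."""
    return (m["fan_in"] / 100.0) + (1.0 / m["loc"]) + (1.0 / m["complexity"])

THRESHOLD = 1.0
reduced_trace = [call for call in trace
                 if utilityhood(metrics[call]) < THRESHOLD]
print(reduced_trace)  # trim/debug score high and are filtered out
```

The reduced trace keeps only the feature-relevant calls (checkout, addItem, total), which is the kind of condensed scenario that is then drawn as a UCM.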
94

Policy-Driven Framework for Static Identification and Verification of Component Dependencies

Livogiannis, Anastasios 02 June 2011 (has links)
Software maintenance is considered to be among the most difficult, lengthy and costly parts of a software application's life-cycle. Regardless of the nature of the software application and the software engineering efforts to reduce component coupling to a minimum, dependencies between software components in applications will always exist and will trigger software maintenance operations, as they tend to threaten the "health" of the software system during the evolution of particular components. The situation is more serious with modern technologies and development paradigms, such as Service-Oriented Architecture (SOA) systems and Cloud Computing, which introduce larger software systems consisting of a substantial number of components that exhibit numerous types of dependencies on each other. This work proposes a reference architecture and a corresponding software framework that can be used to model the dependencies between components in software systems and to support the verification of a set of policies that are derived from system dependencies and relate to the software maintenance operations being applied. Dependency modelling is performed using configuration information from the system, as well as information harvested from component interface descriptions. The proposed approach has been applied to a medium-scale SOA system, namely the SCA Travel Sample from the Apache Software Foundation, and has been evaluated for performance on a configuration specification for a simulated SOA system consisting of up to a thousand web services offered by a few hundred components.
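As a hedged illustration of the idea, the sketch below models dependencies as a simple graph and checks one hypothetical maintenance policy (a component may only be removed if nothing depends on it). The component names and the policy are invented and do not come from the thesis framework or the SCA Travel Sample.

```python
# Minimal sketch of a dependency model and one maintenance policy check.
# Component names and the policy are hypothetical; this is not the
# framework described in the thesis.

# Component -> components it depends on, as might be harvested from
# configuration files and service interface descriptions.
dependencies = {
    "BookingUI": {"BookingService"},
    "BookingService": {"PaymentService", "FlightCatalog"},
    "PaymentService": set(),
    "FlightCatalog": set(),
}

def dependents_of(component, deps):
    """Components that would break if 'component' were removed or changed."""
    return {c for c, uses in deps.items() if component in uses}

def check_removal_policy(component, deps):
    """Policy: a component may only be removed if nothing depends on it."""
    blockers = dependents_of(component, deps)
    return len(blockers) == 0, blockers

ok, blockers = check_removal_policy("PaymentService", dependencies)
print(ok, blockers)  # False {'BookingService'} -- the operation is rejected
```

A policy-driven framework generalizes this step: each planned maintenance operation is checked against such dependency-derived rules before it is allowed to proceed.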
95

Metadata Foundations for the Life Cycle Management of Software Systems

Mr David Hyland-Wood Unknown Date (has links)
No description available.
96

Reverse engineering of UML sequence diagrams using dynamic information

Miao, Yucong, January 1900 (has links)
Thesis (M. Sc.)--Carleton University, 2003. / Includes bibliographical references (p. 77-79). Also available in electronic format on the Internet.
97

"Two-way" obliviousness in general aspect-oriented modeling

Roberts, Nathan V. Song, Eunjee. January 2008 (has links)
Thesis (M.S.)--Baylor University, 2008. / Includes bibliographical references (p. 110-112)
98

Automated generation of SW design constructs from MESA source code

Egerton, David. January 1993 (has links)
Thesis (M.S.)--Rochester Institute of Technology, 1993. / Typescript. Includes bibliographical references (vol. 1, leaves 155-160).
99

Change decision support: extraction and analysis of late architecture changes using change characterization and software metrics

Williams, Byron Joseph, January 2009 (has links)
Thesis (Ph.D.)--Mississippi State University. Department of Computer Science and Engineering. / Title from title screen. Includes bibliographical references.
100

Classificação e resolução de defeitos em manutenção de software utilizando ODC e histórico de soluções

Manhães, Marcelo Mota 29 August 2014 (has links)
Nowadays, with the growing demand for support services, for example in the areas of high availability and performance for the customer, and with the pressure for lower costs on software maintenance and hosting companies, resolving software incidents for the customer in less time and preventing defects have become fundamental topics. Moreover, software maintenance is one of the phases that consumes the most time and effort, and consequently cost, in the software development life cycle. Balancing efficiency and cost is a challenge for any company that provides software maintenance support. The approach used in this research establishes a process for classifying defects and delineating a set of best solutions for the classified defects, drawn from the defect history and the customer's knowledge base. This classification also separates problems by complexity so that they can be handled by the most appropriate support team. The base method is ODC (Orthogonal Defect Classification), and extensions oriented towards software support are proposed and applied. Through this research, it is possible to verify whether classification and the association of solutions lead to a reduction in the resolution time of support incidents. In four samples from two different customers (X and Y), it was observed that classifying defects, routing them to the correct support teams and grouping solutions reduced the resolution time of 70% of the support incidents for customer Y and 92.5% of the support incidents for customer X. The time reduction was achieved by reducing the number of transfers between support teams and reducing the number of incidents. The process presented is incremental, since it builds on the growth of historical information and on the effectiveness of the proposed solutions. This approach can help reduce the resources needed to support computing systems at service providers.
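As a rough sketch of the classify-then-reuse idea described above, the Python below maps an incident description to an ODC-style defect type with simple keyword rules and looks up previously effective solutions for that type. The keyword rules, solution history and incident text are invented for illustration and do not reflect the specific extensions proposed in the thesis.

```python
# Rough sketch of classify-then-reuse: map an incident to an ODC-style
# defect type with simple keyword rules (hypothetical), then look up
# previously effective solutions for that type in a history base.

KEYWORD_RULES = {
    "timeout": "Timing/Serialization",
    "deadlock": "Timing/Serialization",
    "null": "Assignment",
    "permission": "Checking",
    "config": "Build/Package/Merge",
}

# Toy solution history: defect type -> solutions ranked by past success.
SOLUTION_HISTORY = {
    "Timing/Serialization": ["Increase connection pool timeout",
                             "Review lock ordering in the batch job"],
    "Checking": ["Re-apply filesystem ACLs from the baseline"],
}

def classify(incident_text):
    """Return the first ODC-style type whose keyword appears in the text."""
    text = incident_text.lower()
    for keyword, defect_type in KEYWORD_RULES.items():
        if keyword in text:
            return defect_type
    return "Unclassified"

def suggest_solutions(incident_text):
    """Solutions that worked before for incidents of the same type."""
    return SOLUTION_HISTORY.get(classify(incident_text), [])

print(suggest_solutions("Nightly job fails with database timeout"))
```

Routing each classified incident directly to the team that owns that defect type, together with the ranked solution list, is what cuts down the transfer count that drives resolution time.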
