About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Empirical Studies of Mobile Apps and Their Dependence on Mobile Platforms

Syer, Mark 24 January 2013 (has links)
Our increasing reliance on mobile devices has given rise to a new class of software applications (i.e., mobile apps). Tens of thousands of developers have developed hundreds of thousands of mobile apps that are available across multiple platforms. These apps are used by millions of people around the world every day. However, most software engineering research has been performed on large desktop or server applications. We believe that research efforts must begin to examine mobile apps. The mobile app market is growing rapidly, yet mobile apps differ from traditionally-studied desktop/server applications. In this thesis, we examine such apps by performing three quantitative studies. First, we study differences in the size of the code bases and development teams of desktop/server applications and mobile apps. We then study differences in the code, dependency and churn properties of mobile apps from two different mobile platforms. Finally, we study the impact of size, coupling, cohesion and code reuse on the quality of mobile apps. Some of the most notable findings are that mobile apps are much smaller than traditionally-studied desktop/server applications and that most mobile apps tend to be developed by only one or two developers. Mobile app developers tend to rely heavily on functionality provided by the underlying mobile platform through platform-specific APIs. We find that Android app developers tend to rely on the Android platform more than BlackBerry app developers rely on the BlackBerry platform. We also find that defects in Android apps tend to be concentrated in a small number of files and that files that depend on the Android platform tend to have more defects. Our results indicate that major differences exist between mobile apps and traditionally-studied desktop/server applications. However, mobile apps also differ between mobile platforms. Further, our results suggest that mobile app developers should avoid excessive platform dependencies and focus their testing efforts on source code files that rely heavily on the underlying mobile platform. Given the widespread use of mobile apps and the lack of research surrounding these apps, we believe that our results will have significant impact on software engineering research. / Thesis (Master, Computing) -- Queen's University, 2013-01-24 10:15:56.086
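As a hedged illustration of the kind of platform-dependence measurement the abstract describes (a sketch of ours, not the thesis's actual tooling), a Java source file's reliance on the Android platform could be approximated by the share of its imports drawn from platform packages:

```python
# A minimal sketch (illustrative, not the thesis's tooling): approximate a
# Java file's dependence on the Android platform by the fraction of its
# imports that reference android.* packages.
import re
from pathlib import Path

IMPORT = re.compile(r'^\s*import\s+([\w.]+)', re.MULTILINE)

def platform_dependency_ratio(java_file: Path) -> float:
    """Fraction of this file's imports that come from the android.* namespace."""
    imports = IMPORT.findall(java_file.read_text(errors='ignore'))
    platform = [i for i in imports if i.startswith('android.')]
    return len(platform) / len(imports) if imports else 0.0

# Per the thesis's findings, files with a high ratio would deserve
# extra testing attention.
```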
2

An Empirical Study of CSS Code Smells in Web Frameworks

Bleisch, Tobias Paul 01 March 2018 (has links)
Cascading Style Sheets (CSS) has become essential to front-end web development for the specification of style. But despite its simple syntax and the theoretical advantages attained through the separation of style from content and behavior, CSS authoring today is regarded as a complex task. As a result, developers are increasingly turning to CSS preprocessor languages and web frameworks to aid in development. However, previous studies show that even highly popular websites which are known to be developed with web frameworks contain CSS code smells such as duplicated rules and hard-coded values. Such code smells have the potential to cause adverse effects on websites and complicate maintenance. It is therefore important to investigate whether web frameworks may be encouraging the introduction of CSS code smells into websites. In this thesis, we investigate the prevalence of CSS code smells in websites built with different web frameworks and attempt to recognize a pattern of CSS behavior in these frameworks. We collect a dataset of several hundred websites produced by each of 19 different frameworks, extract the code smells and other metrics present in the CSS code of each website, train a classifier to predict which framework each website was built with, and perform various clustering tasks to gain insight into the correlations between code smells. Our results show that CSS code smells are highly prevalent in websites built with web frameworks; we achieve an accuracy of 39% in correctly classifying the frameworks based on CSS code smells and metrics; and we find interesting correlations between code smells.
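As a rough illustration of what mining one such smell might involve (a sketch of ours; the thesis does not publish its extraction code), hard-coded color values in a stylesheet can be flagged with a simple pattern match:

```python
# A minimal sketch (illustrative only): flag hard-coded hex color values,
# one of the CSS code smells studied, in a stylesheet's declarations.
import re

HEX_COLOR = re.compile(r'#(?:[0-9a-fA-F]{3}|[0-9a-fA-F]{6})\b')

def hard_coded_colors(css_text: str) -> list[str]:
    """Return every hex color literal appearing in the stylesheet."""
    return HEX_COLOR.findall(css_text)

css = "h1 { color: #ff0000; } .btn { background: #00FF00; }"
print(hard_coded_colors(css))  # ['#ff0000', '#00FF00']
```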
3

On the Impact and Defeat of Regular Expression Denial of Service

Davis, James Collins 28 May 2020 (has links)
Regular expressions (regexes) are a widely-used yet little-studied software component. Engineers use regexes to match domain-specific languages of strings. Unfortunately, many regex engine implementations perform these matches with worst-case polynomial or exponential time complexity in the length of the string. Because they are commonly used in user-facing contexts, super-linear regexes are a potential denial of service vector known as Regular expression Denial of Service (ReDoS). Part I gives the necessary background to understand this problem. In Part II of this dissertation, I present the first large-scale empirical studies of super-linear regex use. Guided by case studies of ReDoS issues in practice (Chapter 3), I report that the risk of ReDoS affects up to 10% of the regexes used in practice (Chapter 4), and that these findings generalize to software written in eight popular programming languages (Chapter 5). ReDoS appears to be a widespread vulnerability, motivating the consideration of defenses. In Part III I present the first systematic comparison of ReDoS defenses. Based on the necessary conditions for ReDoS, a ReDoS defense can be erected at the application level, the regex engine level, or the framework/runtime level. In my experiments I report that application-level defenses are difficult and error-prone to implement (Chapter 6), that finding a compatible higher-performing regex engine is unlikely (Chapter 7), that optimizing an existing regex engine using memoization incurs (perhaps acceptable) space overheads (Chapter 8), and that incorporating resource caps into the framework or runtime is feasible but faces barriers to adoption (Chapter 9). In Part IV of this dissertation, we reflect on our findings. By leveraging empirical software engineering techniques, we have exposed the scope of potential ReDoS vulnerabilities, and given strong motivation for a solution. To assist practitioners, we have conducted a systematic evaluation of the solution space. We hope that our findings assist in the elimination of ReDoS, and more generally that we have provided a case study in the value of data-driven software engineering. / Doctor of Philosophy / Software commonly performs pattern-matching tasks on strings. For example, when validating input in a Web form, software commonly tests whether an input fits the pattern of a credit card number or an email address. Software engineers often implement such string-based pattern matching using a tool called regular expressions (regexes). Regexes permit software engineers to succinctly describe the sequences of characters that make up common "languages" like the set of valid Visa credit card numbers (16 digits, starting with a 4) or the set of valid emails (some characters, an '@', and more characters including at least one '.'). Using regexes on untrusted user input in this manner may be a dangerous decision because some regexes take a long time to evaluate. These slow regexes can be exploited by attackers in order to carry out a denial of service attack known as Regular expression Denial of Service (ReDoS). To date, ReDoS has led to outages affecting hundreds of websites and tens of thousands of users. While the risk of ReDoS is well known in theory, in this dissertation I present the first large-scale empirical studies measuring the extent to which slow regular expressions are used in practice.
I found that about 10% of real regular expressions extracted from hundreds of thousands of software projects can exhibit longer-than-expected worst-case behavior in popular programming languages including JavaScript, Python, and Ruby. Motivated by these findings, I then consider a range of ReDoS solution approaches: application refactoring, regex engine replacement, regex engine optimization, and resource caps. I report that application refactoring is error-prone, and that regex engine replacement seems unlikely due to incompatibilities between regex engines. Some resource caps are more successful than others, but all resource cap approaches struggle with adoption. My novel regex engine optimizations seem the most promising approach for protecting existing regex engines, offering significant time reductions with acceptable space overheads.
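The super-linear behavior the dissertation studies is easy to reproduce. Below is a minimal sketch (ours, not taken from the dissertation) using a classic exponential-time regex against Python's backtracking re engine:

```python
# A minimal sketch (not from the dissertation): the nested quantifier in
# (a+)+$ forces a backtracking regex engine to try exponentially many ways
# to split a run of 'a's once the trailing '!' makes the match fail.
import re
import time

pattern = re.compile(r'^(a+)+$')

for n in (16, 20, 24):
    s = 'a' * n + '!'              # '!' guarantees a mismatch
    start = time.perf_counter()
    pattern.match(s)
    print(f'n={n}: {time.perf_counter() - start:.3f}s')  # roughly doubles per extra 'a'

# Engines with linear-time matching guarantees (e.g., RE2) avoid such blowups,
# which is the kind of engine-level trade-off the dissertation compares.
```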
4

Data cleaning techniques for software engineering data sets

Liebchen, Gernot Armin January 2010 (has links)
Data quality is an important issue which has been addressed and recognised in research communities such as data warehousing, data mining and information systems. It is widely agreed that poor data quality will impact the quality of results of analyses, and that it will therefore impact decisions made on the basis of those results. Empirical software engineering has neglected the issue of data quality to some extent, which poses the question of how researchers in empirical software engineering can trust their results without addressing the quality of the analysed data. One widely accepted definition describes data quality as 'fitness for purpose', and the issue of poor data quality can be addressed either by introducing preventative measures or by applying means to cope with data quality issues. The research presented in this thesis addresses the latter, with a special focus on noise handling. Three noise handling techniques, which utilise decision trees, are proposed for application to software engineering data sets. Each technique represents a noise handling approach: robust filtering, where training and test sets are the same; predictive filtering, where training and test sets are different; and filtering and polish, where noisy instances are corrected. The techniques were first evaluated in two investigations by applying them to a large real-world software engineering data set. The first investigation tested the techniques' ability to improve predictive accuracy at differing noise levels. All three techniques improved predictive accuracy in comparison to the do-nothing approach, with filtering and polish the most successful. The second investigation tested the techniques' ability to identify instances with implausible values; these instances were flagged for the purpose of evaluation before the three techniques were applied. Robust filtering and predictive filtering decreased the number of instances with implausible values, but substantially decreased the size of the data set too. The filtering and polish technique actually increased the number of implausible values, though it did not reduce the size of the data set. Since the data set contained historical software project data, it was not possible to know the real extent of the noise detected. This led to the production of simulated software engineering data sets, modelled on the real data set used in the previous evaluations to ensure domain-specific characteristics. These simulated versions of the data set were injected with noise such that the real extent of the noise was known, and the three noise handling techniques were then applied to allow evaluation. This procedure combined the domain-specific characteristics of the real world with control over the simulated data, which is seen as a special strength of this evaluation approach. The results of the simulation-based evaluation showed that none of the techniques performed well. Robust filtering and filtering and polish performed very poorly and, based on these results, would not be recommended for the task of noise reduction. Predictive filtering was the best-performing technique in this evaluation, but even it did not perform particularly well.
An exhaustive systematic literature review was carried out to investigate the extent to which the empirical software engineering community has considered data quality. The findings showed that the issue has been largely neglected, and the work in this thesis highlights this important gap in empirical software engineering. The thesis also provides clarification of, and a distinction between, the terms noise and outliers: the two concepts overlap but are fundamentally different, and since they are often treated the same by noise handling techniques, a clarification was necessary. To investigate the capabilities of noise handling techniques, a single investigation was deemed insufficient, because the distinction between noise and outliers is not trivial and because the investigated noise cleaning techniques are derived from traditional noise handling techniques in which noise and outliers are combined. Therefore, three investigations were undertaken to assess the effectiveness of the three presented noise handling techniques, each of which should be seen as part of a multi-pronged approach. This thesis also highlights possible shortcomings of current automated noise handling techniques. The poor performance of the three techniques leads to the conclusion that noise handling should be integrated into a data cleaning process in which the input of domain knowledge and the replicability of the data cleaning process are ensured.
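Of the three techniques, predictive filtering is the simplest to sketch. The following illustration (ours; the thesis's implementation differs in its details) flags instances whose labels a cross-validated decision tree cannot reproduce:

```python
# A minimal sketch (illustrative, not the thesis code) of "predictive
# filtering": train a decision tree on one part of the data and flag
# instances in a held-out part whose label the tree cannot reproduce.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier

def predictive_filter(X, y, n_splits=5):
    """Return a boolean mask of instances flagged as potentially noisy.

    X: (n_samples, n_features) numpy array; y: array of class labels.
    """
    noisy = np.zeros(len(y), dtype=bool)
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in kf.split(X):
        tree = DecisionTreeClassifier(random_state=0).fit(X[train_idx], y[train_idx])
        # An instance whose label disagrees with the tree's prediction is flagged.
        noisy[test_idx] = tree.predict(X[test_idx]) != y[test_idx]
    return noisy
```

In robust filtering, by contrast, the tree would be trained and applied on the same instances; in filtering and polish, flagged labels would be replaced by the tree's predictions rather than removed.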
5

Coordinating requirements engineering and software testing

Unterkalmsteiner, Michael January 2015 (has links)
The development of large, software-intensive systems is a complex undertaking that is generally tackled by a divide-and-conquer strategy. Organizations thereby face the challenge of coordinating the resources which enable the individual aspects of software development, commonly solved by adopting a particular process model. The alignment between requirements engineering (RE) and software testing (ST) activities is of particular interest, as these two aspects are intrinsically connected: requirements are an expression of user/customer needs, while testing increases the likelihood that those needs are actually satisfied. The work in this thesis is driven by empirical problem identification, analysis and solution development towards two main objectives. The first is to develop an understanding of RE and ST alignment challenges and characteristics. Building this foundation is a necessary step that facilitates the second objective: the development of solutions, relevant and scalable to industry practice, that improve REST alignment. The research methods employed to work towards these objectives are primarily empirical. Case study research is used to elicit data from practitioners, while technical action research and field experiments are conducted to validate the developed solutions in practice. This thesis contains four main contributions: (1) an in-depth study on REST alignment challenges and practices encountered in industry; (2) a conceptual framework in the form of a taxonomy providing constructs that further our understanding of REST alignment; (3) an assessment framework, REST-bench, which operationalizes the taxonomy, was designed to be lightweight, and can be applied as a post mortem when closing development projects; and (4) an extensive investigation into the potential of information retrieval techniques to improve test coverage, a common REST alignment challenge, resulting in a solution prototype: risk-based testing supported by topic models (RiTTM). REST-bench has been validated in five cases and shown to be efficient and effective in identifying improvement opportunities in the coordination of RE and ST. Most of the concepts operationalized from the REST taxonomy were found to be useful, validating the conceptual framework. RiTTM, on the other hand, was validated in a single case experiment, where it showed great potential, in particular by identifying test cases that had originally been overlooked by expert test engineers, effectively improving test coverage.
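As a hedged sketch of the kind of information retrieval technique investigated (ours; RiTTM itself is more elaborate and uses topic models rather than plain TF-IDF), requirements that no existing test case resembles can be surfaced as candidates for missing coverage:

```python
# A minimal sketch (illustrative; all strings here are made up): rank each
# requirement by its best lexical match among test case descriptions, so
# that low-scoring requirements stand out as possible coverage gaps.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

requirements = ["The user shall be able to reset a forgotten password.",
                "The system shall log every failed login attempt."]
test_cases = ["Verify that resetting a forgotten password sends an email.",
              "Verify that a failed login attempt is written to the log."]

vectorizer = TfidfVectorizer().fit(requirements + test_cases)
similarity = cosine_similarity(vectorizer.transform(requirements),
                               vectorizer.transform(test_cases))

for req, sims in zip(requirements, similarity):
    best = sims.argmax()
    # A low best-match score would flag the requirement as possibly untested.
    print(f'{req!r} -> {test_cases[best]!r} (score {sims[best]:.2f})')
```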
6

An Analysis of the Differences between Unit and Integration Tests

Trautsch, Fabian 08 April 2019 (has links)
No description available.
7

Requirements engineering in software startups: a qualitative investigation

Jorge Augusto Melegati Gonçalves 06 March 2017 (has links)
Software startups face a very demanding market: they must deliver highly innovative solutions in the shortest possible time. Resources are limited and time to reach the market is short, so it is extremely important to gather the right requirements and to ensure that they are precise. Nevertheless, software requirements are usually not clear, and startups struggle to identify what they should build. This context affects how requirements engineering activities are performed in these organizations. This work seeks to characterize the state of practice of requirements engineering in software startups. Using an iterative approach, seventeen interviews were conducted over three stages with founders and/or managers of different Brazilian software startups operating in different market sectors and at different maturity levels. Data was analyzed using grounded theory techniques such as open and axial coding through continuous comparison. As a result, a conceptual model of the state of practice of requirements engineering in software startups was developed, consisting of its context influences (founders, software development manager, developers, business model, market and ecosystem) and a description of its activities (product team; elicitation; analysis, validation and prioritization; product validation; and documentation). Software development and startup development techniques are also presented, and their use in the startup context is analyzed. Finally, using a bad-smell analogy borrowed from the software development literature, some bad practices and behaviors identified in software startups are presented and solutions to avoid them are proposed.
8

On the Maintenance Costs of Formal Software Requirements Specification Written in the Software Cost Reduction and in the Real-time Unified Modeling Language Notations

Kwan, Irwin January 2005 (has links)
A formal specification language used during the requirements phase can reduce errors and rework, but formal specifications are regarded as expensive to maintain, discouraging their adoption. This work presents a single-subject experiment that explores the costs of modifying specifications written in two different languages: a tabular notation, Software Cost Reduction (SCR), and a state-of-the-practice notation, Real-time Unified Modeling Language (UML). The study records the person-hours required to write each specification, the number of defects made during each specification effort, and the amount of time spent repairing these defects. Two different problems are specified, a Bidirectional Formatter (BDF) and a Bicycle Computer (BC), to balance a learning effect from specifying the same problem twice with different specification languages. During the experiment, an updated feature for each problem is sent to the subject and each specification is modified to reflect the changes.

The results show that the cost to modify a specification is highly dependent on both the problem and the language used. There is no evidence that a tabular notation is easier to modify than a state-of-the-practice notation.

A side effect of the experiment indicates a strong learning effect, independent of the language: in the BDF problem, specifying the problem the second time required more time but resulted in a better-quality specification than the first; in the BC problem, the second time required less time and resulted in a specification of the same quality as the first.

This work also demonstrates that single-subject experiments can add important information to the growing body of empirical data about the use of formal requirements specifications in software development.
9

Empirical Studies of Performance Bugs and Performance Analysis Approaches for Software Systems

Zaman, Shahed 30 April 2012 (has links)
Developing high-quality software is of eminent importance for keeping existing customers satisfied and remaining competitive. One of the most important software quality characteristics is performance, which defines how fast and how efficiently a software system can perform its operations. While several studies have shown that field problems are often due to performance issues rather than feature bugs, prior research typically treats all bugs as similar when studying various aspects of software quality (e.g., predicting the time to fix a bug) or focuses on other types of bugs (e.g., security bugs). There is little work that studies performance bugs. In this thesis, we perform an empirical study to quantitatively and qualitatively examine performance bugs in the Mozilla Firefox and Google Chrome web browser projects, in order to find out whether performance bugs are really different from other bugs in practice and to understand the rationale behind those differences. In our quantitative study, we find that performance bugs in the Firefox project take longer to fix, are fixed by more experienced developers, and require changes to more lines of code. We also study performance bugs relative to security bugs, since security bugs have been extensively studied separately in the past. We find that, in the Firefox project, security bugs are re-opened and tossed more often, are fixed and triaged faster, are fixed by more experienced developers, and are assigned more developers. The Google Chrome project also shows quantitative differences between performance and non-performance bugs, and differs in turn from the Firefox project. Based on our quantitative results, we then examine the data from a qualitative point of view. One of our most interesting observations is that end-users are often frustrated with performance problems and often threaten to switch to competing software products. To better understand why some users are very frustrated (even threatening to switch products) even though most systems are well tested, we performed an additional study. In this final study, we contrast a global perspective with a user-centric perspective for analyzing performance data. We find that a user-centric perspective may reveal a small number of users experiencing considerably poor performance, while the global perspective may show good or unchanged performance across releases. The results of our studies show that performance bugs are different and should be studied separately in large-scale software systems to improve the quality assurance processes related to software performance. / Thesis (Master, Computing) -- Queen's University, 2012-04-30 01:28:22.623
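As a hedged sketch of the kind of quantitative comparison the abstract describes (ours, with made-up data; the thesis mined real Mozilla and Chrome bug repositories), fix times of performance and non-performance bugs can be compared with a non-parametric test:

```python
# A minimal sketch (illustrative; the data below is fabricated for the
# example): compare time-to-fix between performance and other bugs using
# Mann-Whitney U, which assumes no particular fix-time distribution.
import pandas as pd
from scipy.stats import mannwhitneyu

bugs = pd.DataFrame({
    'days_to_fix':    [120, 95, 200, 30, 14, 45, 22, 60],
    'is_performance': [True, True, True, True, False, False, False, False],
})

perf = bugs.loc[bugs['is_performance'], 'days_to_fix']
other = bugs.loc[~bugs['is_performance'], 'days_to_fix']

stat, p = mannwhitneyu(perf, other, alternative='two-sided')
print(f'median fix time: perf={perf.median():.0f}d, '
      f'other={other.median():.0f}d (p={p:.3f})')
```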
