About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Empirical Studies of Mobile Apps and Their Dependence on Mobile Platforms

Syer, Mark 24 January 2013 (has links)
Our increasing reliance on mobile devices has given rise to a new class of software applications (i.e., mobile apps). Tens of thousands of developers have developed hundreds of thousands of mobile apps that are available across multiple platforms. These apps are used by millions of people around the world every day. However, most software engineering research has been performed on large desktop or server applications. We believe that research efforts must begin to examine mobile apps. Mobile apps are rapidly growing, yet they differ from traditionally-studied desktop/server applications. In this thesis, we examine such apps by performing three quantitative studies. First, we study differences in the size of the code bases and development teams of desktop/server applications and mobile apps. We then study differences in the code, dependency and churn properties of mobile apps from two different mobile platforms. Finally, we study the impact of size, coupling, cohesion and code reuse on the quality of mobile apps. Some of the most notable findings are that mobile apps are much smaller than traditionally-studied desktop/server applications and that most mobile apps tend to be developed by only one or two developers. Mobile app developers tend to rely heavily on functionality provided by the underlying mobile platform through platform-specific APIs. We find that Android app developers tend to rely on the Android platform more than BlackBerry app developers rely on the BlackBerry platform. We also find that defects in Android apps tend to be concentrated in a small number of files and that files that depend on the Android platform tend to have more defects. Our results indicate that major differences exist between mobile apps and traditionally-studied desktop/server applications. However, the mobile apps of two different mobile platforms also differ. Further, our results suggest that mobile app developers should avoid excessive platform dependencies and focus their testing efforts on source code files that rely heavily on the underlying mobile platform. Given the widespread use of mobile apps and the lack of research surrounding these apps, we believe that our results will have significant impact on software engineering research. / Thesis (Master, Computing) -- Queen's University, 2013-01-24 10:15:56.086
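A rough way to see what such a platform-dependence measure can look like in practice: the sketch below (not the thesis's actual instrumentation; the android.* prefix heuristic and the project path are illustrative assumptions) scores each Java source file by the share of its imports that come from the platform namespace, the kind of file the findings above suggest deserves focused testing.

```python
import re
from pathlib import Path

# Rough proxy for "platform dependence": the share of imports in each Java
# source file that come from the platform namespace (android.* here).
PLATFORM_PREFIX = "android."

def platform_dependence(java_file: Path) -> float:
    """Fraction of a file's imports that target the platform API."""
    imports = re.findall(r"^import\s+([\w.]+);", java_file.read_text(), re.M)
    if not imports:
        return 0.0
    platform = sum(1 for imp in imports if imp.startswith(PLATFORM_PREFIX))
    return platform / len(imports)

# Hypothetical project layout; adjust the path to a real Android source tree.
for f in sorted(Path("app/src").rglob("*.java")):
    print(f"{platform_dependence(f):.0%}  {f}")
```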
2

Business Process Variability: A Systematic Literature Review

Santos, George Augusto Valença March 2012 (has links)
Business processes have facilitated and enhanced management activities, and are considered an instrument capable of connecting strategic guidance with the people who execute the work to achieve organizational goals. In this scenario, continuous evaluation procedures, compliance with government regulations and industry standards, evolution of the business domain, stakeholders' needs, new technologies, and economic pressures related to globalization are examples of aspects that can drive changes in business processes. The effect of this changing environment is variation in business processes, a phenomenon called business process variability. The objective of this research is therefore to aggregate relevant studies that address this phenomenon. Studies were selected through a Systematic Literature Review, combining automatic searches in a set of digital libraries with manual searches in leading conferences and journals in the fields of Business Process Management and Computer Science. In total, 13,619 studies were retrieved, of which 80 were classified as relevant. This set of primary studies served as the source of evidence for answering three research questions and their respective subquestions. From the analysis performed, the study concludes that, despite efforts in the literature to manage business process variability, the concept is not clearly delimited, involves additional aspects, and hence lacks a structured taxonomy. The contributions of the current work are: to provide information on the main notions in the business process variability field and the possible types and drivers of process variability; to identify the main challenges organizations face when dealing with this phenomenon; and to examine a set of proposals for process variability management, investigating the existence of tool support and the empirical evaluations carried out.
3

An Empirical Study of CSS Code Smells in Web Frameworks

Bleisch, Tobias Paul 01 March 2018 (has links)
Cascading Style Sheets (CSS) has become essential to front-end web development for the specification of style. But despite its simple syntax and the theoretical advantages of separating style from content and behavior, CSS authoring today is regarded as a complex task. As a result, developers are increasingly turning to CSS preprocessor languages and web frameworks to aid development. However, previous studies show that even highly popular websites known to be developed with web frameworks contain CSS code smells such as duplicated rules and hard-coded values. Such code smells can have adverse effects on websites and complicate maintenance. It is therefore important to investigate whether web frameworks may be encouraging the introduction of CSS code smells into websites. In this thesis, we investigate the prevalence of CSS code smells in websites built with different web frameworks and attempt to recognize patterns of CSS behavior in these frameworks. We collect a dataset of several hundred websites produced by each of 19 different frameworks, extract the code smells and other metrics present in the CSS code of each website, train a classifier to predict which framework a website was built with, and perform several clustering tasks to gain insight into the correlations between code smells. Our results show that CSS code smells are highly prevalent in websites built with web frameworks; that the frameworks can be classified from CSS code smells and metrics with 39% accuracy; and that there are interesting correlations between code smells.
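To make the notion of a CSS code smell concrete, here is a minimal, hedged sketch of one detector in the family the thesis studies: flagging hard-coded values duplicated across rules. The sample stylesheet and the hex-color heuristic are illustrative; the thesis's actual smell catalog and tooling are broader.

```python
import re
from collections import defaultdict

# Toy stylesheet containing one smell: the same literal color in three rules,
# a candidate for extraction into a single preprocessor variable.
CSS = """
.header { color: #1a2b3c; padding: 4px; }
.footer { color: #1a2b3c; margin: 0; }
.button { background: #1a2b3c; }
"""

def find_duplicated_colors(css: str) -> dict[str, int]:
    """Return hex colors that occur in more than one declaration."""
    counts = defaultdict(int)
    for color in re.findall(r"#[0-9a-fA-F]{3,6}\b", css):
        counts[color.lower()] += 1
    return {color: n for color, n in counts.items() if n > 1}

print(find_duplicated_colors(CSS))  # {'#1a2b3c': 3}
```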
4

On the Impact and Defeat of Regular Expression Denial of Service

Davis, James Collins 28 May 2020 (has links)
Regular expressions (regexes) are a widely used yet little-studied software component. Engineers use regexes to match domain-specific languages of strings. Unfortunately, many regex engine implementations perform these matches with worst-case polynomial or exponential time complexity in the length of the string. Because they are commonly used in user-facing contexts, super-linear regexes are a potential denial of service vector known as Regular expression Denial of Service (ReDoS). Part I gives the necessary background to understand this problem. In Part II of this dissertation, I present the first large-scale empirical studies of super-linear regex use. Guided by case studies of ReDoS issues in practice (Chapter 3), I report that the risk of ReDoS affects up to 10% of the regexes used in practice (Chapter 4), and that these findings generalize to software written in eight popular programming languages (Chapter 5). ReDoS appears to be a widespread vulnerability, motivating the consideration of defenses. In Part III, I present the first systematic comparison of ReDoS defenses. Based on the necessary conditions for ReDoS, a ReDoS defense can be erected at the application level, the regex engine level, or the framework/runtime level. In my experiments I report that application-level defenses are difficult and error-prone to implement (Chapter 6), that finding a compatible higher-performing regex engine is unlikely (Chapter 7), that optimizing an existing regex engine using memoization incurs (perhaps acceptable) space overheads (Chapter 8), and that incorporating resource caps into the framework or runtime is feasible but faces barriers to adoption (Chapter 9). In Part IV of this dissertation, we reflect on our findings. By leveraging empirical software engineering techniques, we have exposed the scope of potential ReDoS vulnerabilities and given strong motivation for a solution. To assist practitioners, we have conducted a systematic evaluation of the solution space. We hope that our findings assist in the elimination of ReDoS, and more generally that we have provided a case study in the value of data-driven software engineering. / Doctor of Philosophy / Software commonly performs pattern-matching tasks on strings. For example, when validating input in a Web form, software commonly tests whether an input fits the pattern of a credit card number or an email address. Software engineers often implement such string-based pattern matching using a tool called regular expressions (regexes). Regexes permit software engineers to succinctly describe the sequences of characters that make up common "languages" like the set of valid Visa credit card numbers (16 digits, starting with a 4) or the set of valid emails (some characters, an '@', and more characters including at least one '.'). Using regexes on untrusted user input in this manner can be dangerous because some regexes take a long time to evaluate. These slow regexes can be exploited by attackers in order to carry out a denial of service attack known as Regular expression Denial of Service (ReDoS). To date, ReDoS has led to outages affecting hundreds of websites and tens of thousands of users. While the risk of ReDoS is well known in theory, in this dissertation I present the first large-scale empirical studies measuring the extent to which slow regular expressions are used in practice.
I found that about 10% of real regular expressions extracted from hundreds of thousands of software projects can exhibit longer-than-expected worst-case behavior in popular programming languages including JavaScript, Python, and Ruby. Motivated by these findings, I then consider a range of ReDoS solution approaches: application refactoring, regex engine replacement, regex engine optimization, and resource caps. I report that application refactoring is error-prone, and that regex engine replacement seems unlikely due to incompatibilities between regex engines. Some resource caps are more successful than others, but all resource cap approaches struggle with adoption. My novel regex engine optimizations seem the most promising approach for protecting existing regex engines, offering significant time reductions with acceptable space overheads.
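The super-linear behavior at the heart of ReDoS is easy to reproduce. The following sketch (illustrative, not taken from the dissertation) times a classic evil pattern against unmatched inputs in Python's backtracking engine; each step of two characters roughly quadruples the runtime.

```python
import re
import time

# Nested quantifiers force the backtracking engine to try exponentially many
# ways of splitting the input between the two '+' operators before failing.
EVIL_PATTERN = re.compile(r"^(a+)+$")

for n in range(16, 26, 2):
    attack = "a" * n + "!"  # the trailing '!' guarantees a mismatch
    start = time.perf_counter()
    EVIL_PATTERN.match(attack)
    print(f"n = {n:2d}: {time.perf_counter() - start:.3f}s")
```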
5

An Empirical Study of Trust & Safety Engineering in Open-Source Social Media Platforms

Geoffrey William Cramer (15337534) 22 April 2023 (has links)
Social Media Platforms (SMPs) are used by almost 60% of the global population. Along with the ubiquity of SMPs, there are increasing Trust & Safety (T&S) risks that expose users to spam, harassment, abuse, and other harmful content online. T&S Engineering is an emerging area of software engineering striving to mitigate these risks. This study provides the first step in understanding this form of software engineering.
This study examines how T&S Engineering is practiced by SMP engineers. I studied two open-source (OSS) SMPs, Mastodon and Diaspora, which comprise 89% of the 9.6 million OSS SMP accounts. I focused on the T&S design process by analyzing T&S discussions within 60 GitHub issues. I applied a T&S discussion model to taxonomize the T&S risks, T&S engineering patterns, and resolution rationales. I found that T&S issues persist throughout a platform's lifetime, that they are difficult to resolve, and that engineers favor reactive treatments. To integrate the findings, I mapped T&S engineering patterns onto a general model of SMPs. My findings give T&S engineers a systematic understanding of their T&S risk treatment options. I conclude with future directions to study and improve T&S Engineering, spanning software design, decision-making, and validation.
6

Data cleaning techniques for software engineering data sets

Liebchen, Gernot Armin January 2010 (has links)
Data quality is an important issue that has been addressed and recognised in research communities such as data warehousing, data mining and information systems. It is widely agreed that poor data quality impacts the quality of the results of analyses, and therefore the decisions made on the basis of those results. Empirical software engineering has neglected the issue of data quality to some extent. This raises the question of how researchers in empirical software engineering can trust their results without addressing the quality of the analysed data. One widely accepted definition describes data quality as `fitness for purpose', and poor data quality can be addressed either by introducing preventative measures or by applying means to cope with data quality issues. The research presented in this thesis addresses the latter, with a special focus on noise handling. Three noise handling techniques, all utilising decision trees, are proposed for application to software engineering data sets. Each technique represents a distinct noise handling approach: robust filtering, where training and test sets are the same; predictive filtering, where training and test sets are different; and filtering-and-polish, where noisy instances are corrected. The techniques were first evaluated in two investigations that applied them to a large real-world software engineering data set. The first investigation tested the techniques' ability to improve predictive accuracy at differing noise levels. All three techniques improved predictive accuracy in comparison to the do-nothing approach, with filtering-and-polish the most successful. The second investigation, using the same data set, tested the techniques' ability to identify instances with implausible values; these instances were flagged for evaluation purposes before the three techniques were applied. Robust filtering and predictive filtering decreased the number of instances with implausible values, but also substantially decreased the size of the data set. The filtering-and-polish technique actually increased the number of implausible values, but did not reduce the size of the data set. Since the data set contained historical software project data, the real extent of the noise could not be known. This led to the production of simulated software engineering data sets, modelled on the real data set used in the previous evaluations to preserve domain-specific characteristics. These simulated versions of the data set were then injected with noise, so that the real extent of the noise was known, and the three noise handling techniques were applied and evaluated. This procedure combined the domain-specific characteristics of real-world data with control over the simulated data, which is seen as a special strength of this evaluation approach. The results of the simulation study showed that none of the techniques performed well. Robust filtering and filtering-and-polish performed very poorly and, on this evidence, would not be recommended for noise reduction. Predictive filtering was the best performing technique in this evaluation, but it did not perform significantly well either.
An exhaustive systematic literature review was carried out to investigate the extent to which the empirical software engineering community has considered data quality. The findings showed that the issue has been largely neglected, and the work in this thesis highlights this as an important gap in empirical software engineering. The thesis also clarifies and distinguishes the terms noise and outliers: the two overlap but are fundamentally different, and since they are often treated the same by noise handling techniques, a clarification of the two terms was necessary. To investigate the capabilities of noise handling techniques, a single investigation was deemed insufficient, because the distinction between noise and outliers is not trivial, and the investigated noise cleaning techniques derive from traditional noise handling techniques in which noise and outliers are combined. Therefore, three investigations were undertaken to assess the effectiveness of the three presented noise handling techniques, each forming part of a multi-pronged approach. The thesis also highlights possible shortcomings of current automated noise handling techniques. The poor performance of the three techniques led to the conclusion that noise handling should be integrated into a data cleaning process in which the input of domain knowledge and the replicability of the data cleaning process are ensured.
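As a concrete illustration of the first of the three approaches, the sketch below implements the robust-filtering idea (train and test on the same set, then flag misclassified instances as noise candidates) on synthetic data. It is a hedged approximation, not the thesis's implementation; note that the tree's depth must be limited, since an unpruned tree would fit the data perfectly and flag nothing.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a software engineering data set: 200 instances,
# 5 features, with 10% of the class labels flipped to simulate noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
y[rng.choice(200, size=20, replace=False)] ^= 1

# Robust filtering: fit on the full set, then flag the instances the
# model itself misclassifies. Limiting depth prevents pure memorization.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
noisy = tree.predict(X) != y
X_clean, y_clean = X[~noisy], y[~noisy]
print(f"flagged {noisy.sum()} of {len(y)} instances as potential noise")
```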
7

Coordinating requirements engineering and software testing

Unterkalmsteiner, Michael January 2015 (has links)
The development of large, software-intensive systems is a complex undertaking that is generally tackled by a divide-and-conquer strategy. Organizations thereby face the challenge of coordinating the resources that enable the individual aspects of software development, commonly solved by adopting a particular process model. The alignment between requirements engineering (RE) and software testing (ST) activities is of particular interest, as the two aspects are intrinsically connected: requirements are an expression of user/customer needs, while testing increases the likelihood that those needs are actually satisfied. The work in this thesis is driven by empirical problem identification, analysis and solution development towards two main objectives. The first is to develop an understanding of the challenges and characteristics of RE and ST alignment. Building this foundation is a necessary step that facilitates the second objective: the development of solutions, relevant and scalable to industry practice, that improve REST alignment. The research methods employed towards these objectives are primarily empirical. Case study research is used to elicit data from practitioners, while technical action research and field experiments are conducted to validate the developed solutions in practice. This thesis contains four main contributions: (1) an in-depth study of the REST alignment challenges and practices encountered in industry; (2) a conceptual framework, in the form of a taxonomy, providing constructs that further our understanding of REST alignment; (3) REST-bench, an assessment framework that operationalizes the taxonomy, is lightweight, and can be applied as a postmortem when closing development projects; and (4) an extensive investigation into the potential of information retrieval techniques to improve test coverage, a common REST alignment challenge, resulting in a solution prototype, risk-based testing supported by topic models (RiTTM). REST-bench has been validated in five cases and shown to be efficient and effective in identifying improvement opportunities in the coordination of RE and ST. Most of the concepts operationalized from the REST taxonomy were found to be useful, validating the conceptual framework. RiTTM was validated in a single-case experiment in which it showed great potential, in particular by identifying test cases that had been overlooked by expert test engineers, effectively improving test coverage.
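To give a flavor of how topic models can connect requirements to tests, here is a hedged sketch of the general idea behind RiTTM as summarized above: embed requirements and test descriptions in a shared topic space and surface requirements whose nearest test is dissimilar. The corpus, model size, and scoring are illustrative; the thesis's actual pipeline is more elaborate.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

requirements = [
    "the system shall encrypt user credentials at rest",
    "search results shall be returned within two seconds",
]
tests = [
    "verify the stored password hash is not plaintext",
    "load test the search endpoint and assert latency below 2s",
]

# Fit one topic model over both corpora so documents share a topic space.
vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(requirements + tests)
topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)

# A requirement whose best-matching test is dissimilar is a coverage risk.
sims = cosine_similarity(topics[: len(requirements)], topics[len(requirements):])
for i, req in enumerate(requirements):
    print(f"max test similarity {sims[i].max():.2f}  <-  {req}")
```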
8

Reengineering of the Projection Explorer tool to support the selection of primary studies in systematic reviews

Martins, Rafael Messias 11 April 2011 (has links)
The increasing adoption of the experimental paradigm in Software Engineering research aims at obtaining experimental evidence about proposed technologies, to ensure their proper evaluation and to build a solid body of knowledge for the discipline. One approach to experimental research is the systematic review, a rigorous, planned, and auditable method for collecting and critically analyzing the experimental data available on a particular research topic. Despite producing reliable results, conducting a systematic review can be cumbersome and often lengthy, especially when a large volume of studies must be considered, selected, and evaluated. One solution found in the literature is the use of Visual Text Mining (VTM) tools such as the Projection Explorer (PEx) to support the selection and analysis of primary studies in the systematic review process. In this work, a software reengineering of PEx was performed with two main goals: to support, using VTM, the selection and analysis of primary studies in the systematic review process, and to implement new non-functional requirements aimed at improving the maintainability and scalability of the tool. The results were a modular platform for instantiating visualization tools and, built on it, a systematic review tool supported by VTM. A case study carried out with the tool showed that applying VTM techniques in this context is feasible and promising, improving both the performance and the effectiveness of selection.
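The core VTM idea, placing textually similar studies near each other so a reviewer can triage them in bulk, can be sketched in a few lines. This is a hedged approximation (PCA stands in for the more sophisticated projections PEx offers, and the abstracts are invented):

```python
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import TfidfVectorizer

abstracts = [
    "systematic review of defect prediction models",
    "defect prediction using process metrics",
    "user interface design guidelines for mobile apps",
]

# Project TF-IDF vectors to 2D; similar abstracts land close together,
# which is what lets a reviewer include or exclude whole clusters at once.
vectors = TfidfVectorizer(stop_words="english").fit_transform(abstracts).toarray()
points = PCA(n_components=2).fit_transform(vectors)
for text, (x, y) in zip(abstracts, points):
    print(f"({x:+.2f}, {y:+.2f})  {text}")
```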
9

Requirements engineering in software startups: a qualitative investigation

Gonçalves, Jorge Augusto Melegati 06 March 2017 (has links)
Software startups face a very demanding market: they must deliver highly innovative solutions in the shortest possible time. Resources are limited and time to reach the market is short. It is therefore extremely important to gather the right requirements and to make them precise. Nevertheless, software requirements are usually unclear, and startups struggle to identify what they should build. This context affects how requirements engineering activities are performed in these organizations. This work seeks to characterize the state of practice of requirements engineering in software startups. Using an iterative approach, seventeen interviews were conducted in three stages with founders and/or managers of Brazilian software startups operating in different market sectors and at different maturity levels. Data were analyzed using grounded theory techniques such as open and axial coding through constant comparison. As a result, a conceptual model of the state of practice of requirements engineering in software startups was developed, consisting of its contextual influences (founders, software development manager, developers, business model, market, and ecosystem) and a description of its activities (product team; elicitation; analysis, validation, and prioritization; product validation; and documentation). Software development and startup development techniques are also presented, and their use in the startup context is analyzed. Finally, using a bad-smell analogy borrowed from the software development literature, some bad practices and behaviors identified in software startups are presented, and solutions to avoid them are proposed.
10

An Analysis of the Differences between Unit and Integration Tests

Trautsch, Fabian 08 April 2019 (has links)
No description available.
