1 |
Empirical Study of Information Design: Four Experiments
Alton, Noel Teresa, 12 July 2010
Current design theory sets out many rules and guidelines for designers, but good design is still difficult to replicate. Often the design principles found in the manuals are misapplied, resulting in designs that (1) do not fulfill their purpose and (2) disrupt the clarity of information. This thesis will review and provide experimental data supporting a model of visual form/visual purpose connections based on the semiotic of C. S. Peirce. This model was first used by Amare and Manning (2005, 2006, 2007, 2008, 2009) to evaluate and explain both effective and ineffective visual information design. This thesis will extend their approach, reporting the results of four experiments that test aesthetic appeal and information retention across various visual designs. The four experiments presented in this thesis show that viewers' ability to recall information does not coincide with the designs they find the most visually stimulating or visually pleasing. High indicative contrasts allow for higher retention rates, but those contrasts do not necessarily conform to viewers' aesthetic preferences. Low indicative contrast options have a lower retention rate but are preferred aesthetically by viewers. Peircean analysis accounts for this disconnect between usability and preference and can help designers find the balance needed between these competing purposes in visual information design.
|
2 |
Social Capital, Cognitions, and Firm Innovation: Theoretical Model and Empirical Studies
Xu, Yang, 13 July 2006
Innovation is the central value of economic behavior. In this dissertation research, I explore the social and cognitive origins of firm innovation through three interrelated studies, merging several research streams (managerial cognitions, social networks, and innovation) and collecting data through multiple methods (archives and surveys).
First, I proposed a social-cognitive view to study the sources of firm innovation. In the context of firm innovation, top management teams' cognitions or an entrepreneur's cognitions shape the way they use the social structure available to them, while the social structures influence the embedded actors' cognitions and ultimately strategic actions. Managers and entrepreneurs form collaborative partnerships aimed at innovation and competitiveness. During this dynamic social learning process, cognitive differences influence the formation of social capital and its realized benefits. The impact of social capital on innovation cannot be evaluated without first understanding individual cognitive characteristics.
Next, I tested this theoretical model in two contexts. In the first empirical study, I derived firm-level hypotheses that link the top management team's cognitions, the firm's social capital, and technological innovations. These hypotheses are tested on a sample of U.S. semiconductor firms in the years 1991-1998. In the second empirical study, I derived similar hypotheses that link an entrepreneur's cognitions, social capital, and a startup's technological innovations. A survey was conducted in both Pennsylvania and Virginia, targeting entrepreneurial firms in technology industries. The hypotheses were empirically tested on a final sample of 70 U.S. small and medium-sized manufacturers. The two empirical studies supported some of the derived hypotheses, and the findings have significant theoretical, empirical, and practical implications. In a diverse social network, actors' knowledge structure tends to be more complex and more centralized. In addition, these studies indicate that both social capital and cognitive structure play important roles in technological innovation.
By distinguishing between cognitive structures, as well as social capital characteristics, and by investigating their effects on firm innovations, this dissertation extends the literature on organization theory, innovation research, entrepreneurship, and research methodologies. This dissertation research deepens our understanding of firm innovation, and opens a whole line of further research. / Ph. D.
|
3 |
Insight-Based Studies for Pathway and Microarray Visualization Tools
Saraiya, Purviben Bharatkumar, 11 December 2006
Pathway diagrams, similar to graph diagrams that use a node-link representation, are used by biologists to represent complex interactions at the molecular level in living cells. The recent shift towards data-intensive bioinformatics and systems-level science has created a strong need for advanced pathway visualization tools that support exploratory data analysis. User studies suggest that an important requirement for biologists is the ability to associate microarray data with pathway diagrams.
A design space for visualization tools that allow analysis of microarray data in pathway context was identified to enable a systematic evaluation of the visualization alternatives. The design space has two dimensions. Dimension 1 is the method used to overlay data attributes onto pathway nodes, with three possible approaches: (1) overlay one data attribute at a time by manipulating a visual property of the node (e.g., color), with sliders or a similar mechanism to animate the pathway across timepoints; (2) overlay all data attributes simultaneously by embedding small charts (e.g., line charts or heatmaps) into pathway nodes; (3) use miniature pathway-as-glyph views, one for each attribute in the data. Dimension 2 is whether additional views besides the pathway diagram are used; pathway visualizations are often linked to other visualization methods (e.g., parallel coordinates) through brushing and linking.
The visualization alternatives from the pathway + microarray data design space were evaluated in two independent user studies, both using time-series datasets. The first study used visualization alternatives from both dimension 1 and dimension 2. Its results suggest that the method used to overlay multidimensional data on pathway nodes has a non-trivial influence on the accuracy of participants' responses, whereas the number of visualizations affects participants' performance time for pre-selected tasks. The second study used visualization alternatives from dimension 1 only, focusing on the method used to overlay data attributes on pathway nodes. It suggests that participants using pathway visualizations that display data one attribute at a time on nodes perform more consistently across all types of tasks than participants using the other alternatives. Participants using pathway visualizations that display data in a node-as-glyph view perform better on tasks that require analyzing a single node or identifying outlier nodes, whereas pathway-as-glyph views support better performance on tasks that require analyzing overall changes in the pathway or identifying interesting timepoints in the data.
An insight-based method was designed to evaluate visualization tools in biologists' real-world data analysis scenarios. The method uses quantifiable characteristics of an "insight" that can be measured uniformly across participants; these characteristics were identified from observations of participants analyzing microarray data in a pilot study. The insight-based method provides an alternative to traditional task-based methods, which is especially helpful for evaluating visualization tools on large and complicated datasets where designing tasks can be difficult. Though the insight-based method was developed to empirically evaluate visualization tools in short-term studies, it can also be used in real-world longitudinal studies that analyze how the intended end-users use visualization tools. / Ph. D.
|
4 |
MINING UNSTRUCTURED SOFTWARE REPOSITORIES USING IR MODELS
Thomas, Stephen, 12 December 2012
Mining Software Repositories, the process of analyzing data related to software development practices, is an emerging field that aims to aid development teams in their day-to-day tasks. However, data in many software repositories is currently unused because the data is unstructured, and therefore difficult to mine and analyze. Information Retrieval (IR) techniques, which were developed specifically to handle unstructured data, have recently been used by researchers to mine and analyze the unstructured data in software repositories, with some success.
The main contribution of this thesis is the idea that the research and practice of using IR models to mine unstructured software repositories can be improved by going beyond the current state of affairs. First, we propose new applications of IR models to existing software engineering tasks. Specifically, we present a technique to prioritize test cases based on their IR similarity, giving highest priority to those test cases that are most dissimilar. In another new application of IR models, we empirically recover how developers use their mailing list while developing software.
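To make the prioritization idea concrete, here is a minimal sketch assuming TF-IDF weighting, cosine similarity, and a greedy farthest-first ordering; the thesis does not prescribe these exact choices, and the test bodies below are hypothetical.

```python
# A minimal sketch of dissimilarity-based test case prioritization:
# repeatedly pick the test least similar to everything already selected.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def prioritize(test_sources):
    """Order test cases so that the most mutually dissimilar ones run first."""
    tfidf = TfidfVectorizer(token_pattern=r"[A-Za-z_]\w*").fit_transform(test_sources)
    sim = cosine_similarity(tfidf)
    order, remaining = [0], set(range(1, len(test_sources)))
    while remaining:
        # next test = the one whose maximum similarity to the chosen set is lowest
        nxt = min(remaining, key=lambda i: max(sim[i][j] for j in order))
        order.append(nxt)
        remaining.remove(nxt)
    return order

tests = ["assert login(user) == OK", "assert logout(user) == OK",
         "assert parse('x+1') == Tree"]
print(prioritize(tests))  # e.g., [0, 2, 1]
```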
Next, we show how the use of advanced IR techniques can improve results. Using a framework for combining disparate IR models, we find that bug localization performance can be improved by 14-56% on average, compared to the best individual IR model. In addition, by using topic evolution models on the history of source code, we can uncover the evolution of source code concepts with an accuracy of 87-89%.
Finally, we show the risks of current research, which uses IR models as black boxes without fully understanding their assumptions and parameters. We show that data duplication in source code has undesirable effects on IR models, and that eliminating the duplication improves their accuracy. Additionally, we find that in the bug localization task, an unwise choice of parameter values results in an accuracy of only 1%, whereas optimal parameters can achieve an accuracy of 55%.
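As an illustration of why such choices matter, the following bare-bones sketch of IR-based bug localization surfaces preprocessing options as explicit arguments rather than hidden defaults; the file contents and parameter values are hypothetical, not taken from the thesis.

```python
# A bare-bones sketch of IR-based bug localization: rank source files by
# textual similarity to a bug report, with tunable preprocessing knobs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def localize(bug_report, files, sublinear_tf=True, min_df=1):
    """Rank source files by cosine similarity to a bug report."""
    names, texts = zip(*files.items())
    vec = TfidfVectorizer(sublinear_tf=sublinear_tf, min_df=min_df)
    doc_matrix = vec.fit_transform(texts)
    query = vec.transform([bug_report])
    scores = cosine_similarity(query, doc_matrix).ravel()
    return sorted(zip(names, scores), key=lambda pair: -pair[1])

files = {"auth.py": "login password session token expire",
         "parser.py": "parse tokenize syntax tree node"}
print(localize("crash when session token expires", files))
```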
Through empirical case studies on real-world systems, we show that all of our proposed techniques and methodologies significantly improve the state of the art. / Thesis (Ph.D, Computing) -- Queen's University, 2012-12-12
|
5 |
A rational approach to estimate reasonable design values of selected joints by using lower tolerance limits
Mesut Uysal, 10 June 2019
Lower tolerance limit (LTL) methods were used to estimate design values of furniture joints. To achieve higher reliability in joints, LTLs were computed at higher confidence/proportion levels. The logic behind this approach is that if the stress on a joint exceeds the given LTL, joint failure is most likely to be observed. Therefore, joint sizes were determined to keep internal stresses on the joint below the LTL value corresponding to the external load.
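As a hedged illustration of the method, the sketch below computes a one-sided LTL for normally distributed strength data using the standard noncentral-t tolerance factor; the sample values are hypothetical, and the thesis may use different estimation details.

```python
# A minimal sketch of a one-sided lower tolerance limit (LTL) for normally
# distributed joint-strength data: LTL = mean - k * s, where k comes from
# the noncentral t distribution for the chosen confidence/proportion level.
import numpy as np
from scipy import stats

def lower_tolerance_limit(sample, confidence=0.95, proportion=0.95):
    """LTL such that, with the given confidence, at least `proportion`
    of the population exceeds it."""
    n = len(sample)
    z_p = stats.norm.ppf(proportion)
    k = stats.nct.ppf(confidence, df=n - 1, nc=z_p * np.sqrt(n)) / np.sqrt(n)
    return np.mean(sample) - k * np.std(sample, ddof=1)

# hypothetical ultimate loads (N) for a batch of joints
strengths = np.array([812.0, 794.0, 861.0, 778.0, 840.0, 806.0, 823.0, 795.0])
print(lower_tolerance_limit(strengths))  # a conservative design value
```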
|
6 |
Replicação de estudos empíricos em engenharia de software / Replication of empirical studies in software engineering
Dória, Emerson Silas, 11 June 2001
The increasing use of computer-based systems in practically all areas of human activity creates a growing demand for quality and productivity, from the point of view of the software process as well as of the software products generated. In this perspective, activities aggregated under the name of Software Quality Assurance have been introduced throughout the software development process. Among these activities, testing and review stand out, both aiming to minimize the introduction of errors into software products during the development process. Testing is one of the elements that supply evidence of software reliability, complementing other activities such as reviews and formal, rigorous specification and verification techniques. Review, in turn, is an efficient 'filter' for the software engineering process, since it favors the identification and elimination of errors before the next step of the development process. Currently, research is being carried out to determine which technique, review or testing, is more appropriate and effective in certain circumstances for discovering certain classes of errors and, more broadly, how the techniques can be applied in complement to each other to improve software quality. Even though testing is indispensable in the development process, investigating the complementary aspect of these techniques is of great interest, since in many situations reviews have been observed to be as effective as, or more effective than, testing. In this perspective, this work carries out a comparative study, through the replication of experiments, between testing techniques and review techniques with respect to error detection in software products (source code and requirements specification documents). The study uses testing criteria from the functional technique (equivalence partitioning and boundary value analysis), the structural technique (all-nodes, all-edges, all-uses, all-potential-uses), and the error-based technique (mutation testing), as well as reading techniques (stepwise abstraction and perspective-based reading) and inspection techniques (ad hoc and checklist). Besides comparing the effectiveness and efficiency of the techniques in detecting errors in software products, this work also uses specific knowledge related to testing criteria to reevaluate the techniques used in the experiments of Basili & Selby, Kamsties & Lott, and Basili.
|
7 |
STUDYING THE IMPACT OF DEVELOPER COMMUNICATION ON THE QUALITY AND EVOLUTION OF A SOFTWARE SYSTEM
Bettenburg, Nicolas, 22 May 2014
Software development is a largely collaborative effort, of which the actual encoding of program logic in source code is a relatively small part. Software developers have to collaborate effectively and communicate with their peers to avoid coordination problems. To date, little is known about how developer communication during software development activities impacts the quality and evolution of a software system.
In this thesis, we present and evaluate tools and techniques to recover communication data from traces of software development activities. With this data, we study the impact of developer communication on the quality and evolution of the software through an in-depth investigation of the role of developer communication during software development activities. Through multiple case studies on a broad spectrum of open-source software projects, we find that communication between developers stands in a direct relationship to the quality of the software. Our findings demonstrate that our models based on developer communication explain software defects as well as state-of-the-art models based on technical information such as code and process metrics, and that social information metrics are orthogonal to these traditional metrics, leading to a more complete and integrated view of software defects. In addition, we find that communication between developers plays an important role in maintaining a healthy contribution management process, one of the key factors in the successful evolution of the software. Source code contributors who are part of the community surrounding open-source projects are available for limited times, and long communication times can lead to the loss of valuable contributions.
Our thesis illustrates that software development is an intricate and complex process that is strongly influenced by the social interactions between the stakeholders involved in the development activities. A traditional view based solely on technical aspects of software development, such as source code size and complexity, while valuable, limits our understanding of software development activities. The research presented in this thesis is a first step towards gaining a more holistic view of software development activities. / Thesis (Ph.D, Computing) -- Queen's University, 2014-05-22
|
8 |
Contributions à l'usage des détecteurs de clones pour des tâches de maintenance logicielle / Contributions to the use of code clone detectors in software maintenance tasks
Charpentier, Alan, 17 October 2016
The existence of several copies of the same code fragment (called code clones in the literature) in a software system can complicate its maintenance and evolution. Code duplication can lead to consistency problems, especially when propagating bug fixes. Code clone detection is therefore a major concern for maintaining and improving software quality, an essential property for a software system's success. The general objective of this thesis is to contribute to the use of code clone detection in software maintenance tasks. We focus our contributions on two research topics. First, the methodology to compare and assess code clone detectors, i.e., clone benchmarks. We performed an empirical assessment of a clone benchmark and found that results derived from it are not reliable. We also identified recommendations for constructing more reliable clone benchmarks. Second, the adaptation of code clone detectors to software maintenance tasks. We developed an approach specialized to one language and one task (refactoring) that allows developers to identify and remove code duplication in their software. We conducted case studies with domain experts to evaluate our approach.
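For context on the class of tools being evaluated, here is a minimal sketch of a token-based clone detector (normalize identifiers and literals, hash fixed-size windows of lines, report windows that share a hash); it is an illustration under those assumptions, not a reimplementation of any detector studied in the thesis.

```python
# A minimal sketch of a token-based clone detector. Identifiers and literals
# are normalized away, fixed-size windows of lines are hashed, and windows
# sharing a hash are reported as clone candidates.
import hashlib
import re
from collections import defaultdict

KEYWORDS = {"if", "else", "for", "while", "return", "def", "LIT"}

def normalize(line):
    line = re.sub(r'"[^"]*"|\d+', "LIT", line)  # collapse string/number literals
    # collapse identifiers, keeping keywords and the LIT placeholder
    return re.sub(r"\b[A-Za-z_]\w*\b",
                  lambda m: m.group() if m.group() in KEYWORDS else "ID",
                  line).strip()

def clone_candidates(lines, window=5):
    """Return groups of starting line indices whose normalized windows match."""
    index = defaultdict(list)
    for i in range(len(lines) - window + 1):
        chunk = "\n".join(normalize(l) for l in lines[i:i + window])
        index[hashlib.md5(chunk.encode()).hexdigest()].append(i)
    return [locs for locs in index.values() if len(locs) > 1]
```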
|
9 |
Understanding And Guiding Software Product Lines Evolution Based On Requirements Engineering Activities
Oliveira, Raphael Pereira de, 10 September 2015
Software Product Line (SPL) has emerged as an important strategy to cope with the increasing demand for large-scale product customization. SPL has provided companies with an efficient and effective means of delivering products with higher quality at a lower cost, when compared to traditional software engineering strategies. However, such benefits do not come for free.
SPL must deal with the evolution of its assets to support changes in the environment and in user needs. These changes in SPL are firstly represented by requirements. Thus, SPL should manage the commonality and variability of products by means of a "Requirements Engineering (RE) - change management" process. Hence, besides dealing with the reuse and evolution of requirements in an SPL, RE for SPL also needs an approach to represent explicitly the commonality and variability information (e.g., through feature models and use cases).
To understand evolution in SPL, this thesis presents two empirical studies of industrial SPL projects and a systematic mapping study on SPL evolution. The two empirical studies evaluated Lehman's laws of software evolution in two industrial SPL projects, demonstrating that most of the laws are supported in SPL environments. The systematic mapping study on SPL evolution identified approaches in the area and revealed research gaps, such as the fact that most of the proposed approaches handle the evolution of SPL requirements in an ad hoc way and were evaluated only through feasibility studies.
These results led to the systematization, through guidelines, of SPL processes, starting with SPL requirements. Thus, an approach to specify SPL requirements, called Feature-Driven Requirements Engineering (FeDRE), was proposed. FeDRE specifies SPL requirements in a systematic way, driven by a feature model. To deal with the evolution of FeDRE requirements, a new approach called Feature-Driven Requirements Engineering Evolution (FeDRE2) was presented. FeDRE2 guides SPL evolution, in a systematic way, based on RE activities.
Both proposed approaches, FeDRE and FeDRE2, were evaluated; the results, although preliminary, show that the approaches were perceived as easy to use and useful, supporting the improvement and systematization of SPL processes.
|