  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
321

Alinhamento de sequências na avaliação de resultados de teste de robustez / Sequence alignment algorithms applied in the evaluation of robustness testing results

Lemos, Gizelle Sandrini, 1975- 12 November 2012 (has links)
Orientador: Eliane Martins / Tese (doutorado) - Universidade Estadual de Campinas, Instituto de Computação / Made available in DSpace on 2018-08-22T06:26:11Z (GMT). No. of bitstreams: 1 Lemos_GizelleSandrini_D.pdf: 2897622 bytes, checksum: 93eb35a9b69a8e36e90d0399422e6520 (MD5) Previous issue date: 2013 / Resumo: A robustez, que é a capacidade do sistema em funcionar de maneira adequada em situações inesperadas, é uma propriedade cada vez mais importante, em especial para sistemas críticos. Uma técnica bastante empregada para testar a robustez consiste em injetar falhas no sistema e observar seu comportamento. Um problema comum nos testes é a determinação de um oráculo, i.e., um mecanismo que decida se o comportamento do sistema é ou não aceitável. Oráculos como a comparação com padrão-ouro - execução sem injeção de falhas - consideram todo comportamento diferente do padrão como sendo erro no sistema em teste (SUT). Por exemplo, a ativação de rotinas de tratamento de exceção em presença de falhas pode ser considerada como erro. Também utilizada como oráculo, a busca por propriedades de segurança (safety) pode mostrar a presença de não robustez no SUT. Caso haja no SUT eventos semanticamente similares aos da propriedade, estes não são notados (a menos que sejam explicitamente definidos na propriedade). O objetivo deste trabalho é o desenvolvimento de oráculos específicos para teste de robustez visando à diminuição dos problemas existentes nas soluções atualmente empregadas. Desenvolvemos duas abordagens a serem utilizadas como oráculos. O principal diferencial de nossas soluções em relação aos oráculos atuais é a adoção de algoritmos de alinhamento de sequências comumente aplicados em bioinformática. Estes algoritmos trabalham com matching inexato, permitindo algumas variações entre as sequências comparadas.
A primeira abordagem criada é baseada na comparação tradicional com o padrão-ouro, porém aplica o alinhamento global de sequências na comparação de traços de execução coletados durante a injeção de falhas e padrões coletados sem a injeção de falhas. Isto permite que traços com pequenas porções diferentes do padrão sejam também classificados como robustos, possibilitando inclusive a utilização da abordagem como oráculo em sistemas não deterministas, o que não é possível atualmente. Já a segunda abordagem busca propriedades de segurança (safety) em traços coletados durante a injeção de falhas por meio do uso do algoritmo de alinhamento local de sequências. Além das vantagens do fato de serem algoritmos de matching inexato, estes algoritmos utilizam um sistema de pontuação que se baseia em informações obtidas na especificação do sistema em teste para guiar o alinhamento das sequências. Mostramos o resultado da aplicação das abordagens em estudos de caso / Abstract: Robustness, which is the ability of a system to work properly in unexpected situations, is an important characteristic, especially for critical systems. A commonly used technique for robustness testing is to inject faults during the system execution and to observe its behavior. A frequent problem during the tests is to determine an oracle, i.e., a mechanism that decides whether the system behavior is acceptable or not. Oracles such as the golden-run comparison - system execution without injection of faults - consider all behaviors different from the golden run as errors in the system under test (SUT). For example, the activation of exception handlers in the presence of faults could be considered as errors. The safety-property-searching approach is also used as an oracle and can show the presence of non-robustness in the SUT. If there are events in the SUT execution that are semantically similar to the property, they are not taken into account (unless they have been explicitly defined in the property).
The objective of this work is to develop specific oracles to evaluate results of robustness testing in order to minimize the deficiencies of the current oracles. The main difference between our solutions and the existing approaches is the type of algorithm used to compare the sequences: we adopted sequence alignment algorithms commonly applied in Bioinformatics. These algorithms perform a kind of inexact matching, allowing some variations between the compared sequences. The first approach is based on the traditional golden-run comparison, but applies global sequence alignment to compare traces collected during fault injection with traces collected without fault injection. The use of these algorithms allows traces with small differences from the golden run to also be classified as robust, enabling the approach's use as an oracle for non-deterministic systems, which is not currently possible. The second approach compares patterns derived from safety properties with traces collected during robustness testing. Differently from the first approach, however, the second one uses a local sequence alignment algorithm to search for subsequences. Besides the advantages of inexact matching, these algorithms use a scoring system based on information obtained from the SUT specification to guide the alignment of the sequences. We show the results of applying the approaches in case studies / Doutorado / Ciência da Computação / Doutora em Ciência da Computação
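The global-alignment oracle summarized in this record can be sketched in a few lines. The following is an illustrative Needleman-Wunsch scorer over event traces, not the thesis implementation: the match/mismatch/gap values and the robustness threshold are arbitrary placeholders standing in for the specification-derived scoring the thesis describes.

```python
# Illustrative sketch: global alignment (Needleman-Wunsch) of a golden-run
# trace against a trace observed under fault injection. The scoring values
# here are hypothetical placeholders, not the thesis's specification-based
# scoring system.

def global_alignment_score(golden, observed, match=2, mismatch=-1, gap=-1):
    """Score the best global alignment between two event sequences."""
    n, m = len(golden), len(observed)
    # dp[i][j] = best score aligning golden[:i] with observed[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = dp[i - 1][j - 1] + (
                match if golden[i - 1] == observed[j - 1] else mismatch)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[n][m]

def is_robust(golden, observed, threshold=0.8, match=2):
    """Classify a trace as robust if its alignment score is close enough to
    the perfect-match score of the golden run (threshold is a free knob)."""
    perfect = match * len(golden)
    return global_alignment_score(golden, observed) >= threshold * perfect

# A trace with one extra event (e.g. an exception-handler log entry) still
# aligns well and is classified as robust, unlike strict golden-run matching.
```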
322

Evolução e recuperação de arquiteturas de software baseadas em componentes / Component based software architecture recovery and evolution

Esteve, Andre Petris, 1989- 24 August 2018 (has links)
Orientador: Cecília Mary Fischer Rubira / Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Computação / Made available in DSpace on 2018-08-24T12:39:49Z (GMT). No. of bitstreams: 1 Esteve_AndrePetris_M.pdf: 5633446 bytes, checksum: 5874ca0a9d4e25e35c24df8d99a5a724 (MD5) Previous issue date: 2013 / Resumo: Os sistemas de software precisam evoluir para atender às necessidades das pessoas. Isso é natural, já que as pessoas e as sociedades mudam, novas necessidades surgem, conceitos alteram-se e o software deve adequar-se às mudanças. À medida que alterações são feitas em sistemas de software, eles começam a envelhecer. O envelhecimento de software é o termo dado à redução da vida útil de um software devido à natureza da complexidade de seus processos de criação, manutenção e entendimento. Conforme um software é modificado, ele tende a se tornar mais complexo. Assim, ao precisar atender a novos requisitos, seu código torna-se mais complexo e menos expressivo, isto é, menos legível e mais complicado de ser compreendido; ou, simplesmente por ser tão imenso e intricado, com centenas de componentes e funcionalidades, entendê-lo torna-se uma tarefa desafiadora. A arquitetura de software é um conceito fundamental no desenvolvimento e evolução de software. E, juntamente com o sistema de software, a arquitetura também sofre os efeitos do envelhecimento de software, conhecidos como erosão e desvio arquiteturais. É de extrema importância o gerenciamento da evolução da arquitetura de software, uma vez que ela representa a visão geral do sistema e é fundamentalmente responsável por requisitos de qualidade, como segurança e performance. Eventualmente, manter e evoluir um sistema de software torna-se mais custoso que reescrevê-lo por completo. Atingido esse estado, o software original é abandonado e sua vida útil chega ao fim.
Ele é substituído por uma nova implementação, mais moderna, que atende aos novos requisitos, mas que, eventualmente, também envelhecerá e também será substituída por uma nova implementação. Este trabalho propõe uma solução, composta por método e apoio ferramental, de gerenciamento da evolução de um sistema de software, especialmente de sua arquitetura, que é peça fundamental na longevidade do sistema. O método proposto nesta solução objetiva identificar precocemente problemas de envelhecimento de software relacionados à arquitetura do sistema, permitindo ao arquiteto de software atuar sobre eles, de forma a mitigar ou eliminar seus impactos sobre a arquitetura do sistema, consequentemente prolongando a vida útil do mesmo / Abstract: Software systems must evolve to fit people's needs. That is a natural process, in which people and societies change, new necessities arise and existing concepts are replaced by modern ones, thus forcing the software to fit into the new picture, adapting itself to the changes. As software systems are modified, they age. Software ageing is the term given to the diminishing of a software's life span due to the inherent complexity of its creation, maintenance and understanding processes. As it evolves, the system tends to grow more intricate; its source code tends to become less expressive as it needs to be adapted to new requirements or run with better performance, or it simply becomes so immense and complex that understanding it is, by itself, a challenging task. Eventually, maintaining and supporting a software system will turn out to be even more expensive than rewriting it from scratch. From that point onwards, the original piece of software is abandoned, its life span having reached an end. The aged software is replaced by a new and modern implementation, one that fulfils the newly imposed requirements.
However, this brand new piece of software will share the same fate as its predecessors: it will age and will eventually be replaced. This work proposes a solution, composed of a method and computer-aided tools, for managing software architecture evolution, which is a fundamental piece in the system's longevity. The solution's method aims to identify software architecture ageing problems ahead of time, so their impact can be adequately mitigated or even completely avoided, thus extending the software's life span. To allow the practical use of the method, as part of the proposed solution, a tool was implemented to automate most of the method's activities. Through automation, the tool is capable of reducing the human error associated with the process's execution, thus yielding high efficiency. By analyzing case studies, it is possible to verify that, when applied, the solution is capable of guiding the software architect to uncover software ageing problems in the system under investigation. Through the computational aid offered by the solution, the architect is able to act upon the newly discovered issues, with undemanding time and effort, thus resolving or mitigating the problems that arise with software ageing / Mestrado / Ciência da Computação / Mestre em Ciência da Computação
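One simple way to make architectural erosion visible, in the spirit of the early-detection goal this record describes, is to compare the intended (as-designed) dependencies against the ones actually extracted from the code. The sketch below is hypothetical: the module names, the rules, and the reflexion-style set comparison are illustrative assumptions, not the dissertation's method or tooling.

```python
# Hypothetical sketch of erosion/drift detection: compare the as-designed
# dependency rules with the dependencies found in the code base.

def erosion_report(intended, actual):
    """Return (divergences, absences): dependencies present in the code but
    not allowed by the architecture, and allowed ones that disappeared."""
    divergences = sorted(actual - intended)   # erosion: illegal edges crept in
    absences = sorted(intended - actual)      # drift: planned edges unused
    return divergences, absences

# Hypothetical component-based architecture: UI -> Service -> Repository
intended = {("ui", "service"), ("service", "repository")}
# Dependencies extracted from an imaginary current code base
actual = {("ui", "service"), ("service", "repository"), ("ui", "repository")}

divergences, absences = erosion_report(intended, actual)
# divergences flags ("ui", "repository"): the UI bypasses the service layer
```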
323

AnaSoft : um analisador de software baseado em metricas para medir complexidade

Mora Rodriguez, Carlos 20 March 1990 (has links)
Orientadores: Orion de Oliveira Silva e Arthur João Catto / Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Matematica, Estatistica e Ciencia da Computação / Made available in DSpace on 2018-07-13T22:57:37Z (GMT). No. of bitstreams: 1 MoraRodriguez_Carlos_M.pdf: 3155186 bytes, checksum: e180d7237165ccf21fd3a4a2e590e931 (MD5) Previous issue date: 1990 / Resumo: Esta dissertação apresenta um estudo sobre as diferentes métricas de software para medir complexidade e propõe um analisador de código baseado em três destas métricas. Os aspectos considerados mais importantes em relação à complexidade de um programa são: a quantidade de dados manipulados, o fluxo de informação entre os módulos ou procedimentos e, finalmente, o fluxo de controle. As métricas escolhidas medem estes três fatores e fazem um diagnóstico da complexidade dos procedimentos do programa. Portanto, o objetivo do analisador proposto consiste em facilitar a manutenção de um software através de uma análise da complexidade dos procedimentos que o compõem. Finalmente, a ferramenta é testada em vários programas e são apresentadas as conclusões finais, que incluem extensões para pesquisas futuras / Abstract: This dissertation presents a study of the different software metrics available to measure complexity. In addition, it proposes a code analyzer, AnaSoft, based on three of the most important ones. The software aspects considered most important in relation to software complexity are: the quantity of information processed, the flow of information among the components and the flow of control. The selected metrics measure these three factors and, at the same time, perform a diagnostic of the procedures' complexity. With this in mind, the main objective of the proposed tool is to aid the software maintainer in performing a more efficient job. The analyzer presented here was tested successfully on several programs and conclusions were drawn.
Finally, further extensions for future research are suggested / Mestrado / Mestre em Ciência da Computação
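This record names control flow as one of the three complexity factors the analyzer measures. As an illustration of that kind of metric only (the dissertation's exact metrics are not named here), the sketch below approximates McCabe's cyclomatic complexity by counting decision points in a Python snippet:

```python
# Illustrative control-flow metric in the spirit of McCabe's cyclomatic
# complexity: complexity = number of decision points + 1. This is an
# example of the metric family, not AnaSoft's own implementation.
import ast

def cyclomatic_complexity(source):
    """Approximate cyclomatic complexity of a Python snippet."""
    decisions = (ast.If, ast.For, ast.While, ast.BoolOp, ast.ExceptHandler)
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, decisions) for node in ast.walk(tree))

snippet = """
def classify(x):
    if x < 0:
        return "negative"
    for _ in range(3):
        if x % 2 == 0 and x > 10:
            return "big even"
    return "other"
"""
# 1 (base) + if + for + if + BoolOp(and) = 5
```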
324

Controle de versões e configurações em ambientes de desenvolvimento de software

Victorelli, Eliane Zambon 19 December 1990 (has links)
Orientador: Geovane Cayres Magalhães / Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Matematica, Estatistica e Ciencia da Computação / Made available in DSpace on 2018-07-13T23:09:22Z (GMT). No. of bitstreams: 1 Victorelli_ElianeZambon_M.pdf: 4887945 bytes, checksum: 4a93e5e420a3cb236ff47debabdbb20e (MD5) Previous issue date: 1990 / Resumo: O gerenciamento de versões e configurações é um assunto que vem sendo muito pesquisado nos últimos anos. Alguns ambientes de desenvolvimento de software oferecem suporte para esta tarefa, porém os mecanismos propostos abordam determinados aspectos do problema, deixando outros em aberto. O objetivo deste trabalho é propor um modelo para o gerenciamento de versões e configurações que seja adequado a vários ambientes. O enfoque foi dado para a distribuição e compartilhamento dos dados. A adaptação do modelo às características de cada ambiente é feita através da definição de alguns parâmetros. Um gerador de gerenciadores de versões e configurações foi implementado utilizando-se, entre outras ferramentas, um protótipo de banco de dados orientado a objetos. A implementação teve como objetivo provar a viabilidade dos mecanismos propostos, bem como estudar as dificuldades envolvidas no mapeamento de suas características para o modelo de um banco de dados / Abstract: Not informed / Mestrado / Mestre em Ciência da Computação
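As a toy illustration of the two notions this model manages (versions of an object, and configurations that bind objects to chosen versions), the following sketch is hypothetical and far simpler than the dissertation's parameterized, distribution-aware model:

```python
# Hypothetical sketch: a version history per object, and a configuration
# as a consistent selection of one version per object.

class VersionedObject:
    def __init__(self, name, initial):
        self.name = name
        self.versions = [initial]          # versions 0, 1, 2, ...

    def derive(self, content):
        """Create a new version derived from the latest one."""
        self.versions.append(content)
        return len(self.versions) - 1      # new version number

def make_configuration(selection):
    """Bind each object to one chosen version number (a configuration)."""
    return {obj.name: (num, obj.versions[num]) for obj, num in selection.items()}

parser = VersionedObject("parser.c", "v0 source")
lexer = VersionedObject("lexer.c", "v0 source")
v1 = parser.derive("v1 source")            # parser evolves; lexer does not
release = make_configuration({parser: v1, lexer: 0})
# release binds parser.c to version 1 and lexer.c to version 0
```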
325

Software para a geração de codigos RLL empregando o algoritmo dos blocos deslizantes

Costa, Rossini Trindade 15 June 1994 (has links)
Orientador: Celso de Almeida / Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica / Made available in DSpace on 2018-07-19T12:29:22Z (GMT). No. of bitstreams: 1 Costa_RossiniTrindade_M.pdf: 6164891 bytes, checksum: 5c96c4c7e91173b0a24315a4d03611c5 (MD5) Previous issue date: 1994 / Resumo: Tendo sido desenvolvido num ramo da matemática conhecido como Dinâmica Simbólica, o algoritmo dos Blocos Deslizantes constitui-se num procedimento sistemático e eficiente na busca por códigos com as características desejáveis. Será apresentada neste trabalho uma discussão completa e com exemplos do algoritmo, sem entrar nos detalhes matemáticos que lhe são inerentes. Serão enfatizadas as técnicas componentes deste (divisão e fusão de estados, além do algoritmo do autovetor aproximado) e sua implementação via um software desenvolvido pelo autor. Como forma de validação da ferramenta, o programa foi adaptado para gerar códigos RLL, que são códigos cujo objetivo principal é aumentar a densidade de armazenamento de dados em meios magnéticos e óticos. Diversos códigos obtidos serão apresentados. A análise destes, considerando parâmetros como complexidade do codificador e decodificador, mostrou que os resultados encontram-se próximos dos limitantes permitidos, comprovando assim a funcionalidade do software. Um detalhe importante é que, apesar do programa ter sido implementado para códigos RLL, ele pode ser facilmente adaptado para incorporar outros tipos de códigos restritos a sistemas de estados finitos, bastando para isso pequenas alterações em algumas rotinas do programa / Abstract: The Sliding Block algorithm was developed in a branch of mathematics known as Symbolic Dynamics and consists of a systematic and efficient procedure to find codes with desirable characteristics. This work will present a complete discussion of this algorithm, without emphasis on the rigorous mathematical details that are inherent in it.
The techniques that constitute this algorithm will also be emphasized (state merging and splitting, in addition to the approximate eigenvector algorithm), as well as its implementation through software developed by the author. The software was adapted to generate RLL codes, which are codes whose main objective is to increase data storage density in magnetic and optical media. Several of the codes obtained will be presented. Their analysis, considering parameters such as encoder and decoder complexity and the number of states, shows that the results are close to the available bounds, confirming the functionality of the software. It is important to remark that, though the software was implemented to find RLL codes, it can be easily adapted to obtain other kinds of codes constrained to finite-state systems, with small changes in some routines of the software / Mestrado / Mestre em Engenharia Elétrica
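The run-length-limited property that the generated codes must satisfy can be checked directly: in a (d, k)-constrained binary sequence, each run of zeros between consecutive 1s has length at least d and at most k. The validator below only illustrates the constraint; it is not the Sliding Block construction itself.

```python
# Illustrative validator for the RLL (d, k) constraint: between consecutive
# 1s there must be at least d and at most k zeros.

def satisfies_rll(bits, d, k):
    """True if every zero-run between 1s in `bits` has length in [d, k]."""
    run = 0          # zeros seen since the last 1
    seen_one = False
    for b in bits:
        if b == 0:
            run += 1
            if run > k:                    # too long a run of zeros
                return False
        else:
            if seen_one and run < d:       # 1s too close together
                return False
            run = 0
            seen_one = True
    return True

# RLL(1, 3): at least 1 and at most 3 zeros between consecutive 1s
```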
326

The new /given index: A measure to explore, evaluate, and monitor eDiscourse in educational conferencing applications

Welts, Dana Raymond 01 January 2002 (has links)
This dissertation addresses the limited measures available to conduct comparative linguistic analysis across spoken, written and eDiscourse environments and proposes a new measure—the new/given index. The new/given construct of Halliday and Clark is reviewed, as well as the relevant literature on eDiscourse and other persistent electronic communication. A data set of writing samples, face-to-face meeting transcripts, and electronic conferences is assembled and used to test and validate the new/given index. The data are reviewed and scored by raters for new and given material, and the rater scores are compared with the score generated by the new/given index software parser. The data suggest that the new/given index reliably reports the presence of new and given information in processed text and provides a measure of the efficiency with which this text is resolved or grounded in discourse. The data are further processed by the software parser and aggregate new/given indices for the data types are generated. This analysis reveals that statistically significant differences exist between the new/given index of written text, transcriptions of face-to-face discussion, and eDiscourse conferencing transcripts. Finally, a qualitative analysis based on interviews with the creators of the data set explores their experience in the eDiscourse conferencing environment and the relation between individual behavior in a group problem-solving situation and an individual's new/given index in an eDiscourse environment. The study concludes with suggestions for the application of the new/given index in eDiscourse and other persistent electronic communication environments.
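As a toy illustration of the new/given idea (the dissertation's actual parser and index formula are not described in this record, so the stopword list and the proportion below are stand-in assumptions): a content word's first mention counts as new, later mentions as given.

```python
# Hypothetical stand-in for a new/given scorer: first mentions of content
# words are "new", repeated mentions are "given".
import re

STOPWORDS = {"the", "a", "an", "is", "are", "and", "or",
             "of", "to", "in", "it", "as"}

def new_given_proportion(text):
    """Return (new, given, new / (new + given)) over content words."""
    seen, new, given = set(), 0, 0
    for word in re.findall(r"[a-z']+", text.lower()):
        if word in STOPWORDS:
            continue
        if word in seen:
            given += 1
        else:
            seen.add(word)
            new += 1
    return new, given, new / (new + given)

new, given, prop = new_given_proportion(
    "The parser scores the text. The parser marks repeated words as given."
)
# "parser" recurs, so it is the only "given" token in this toy example
```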
327

Educational technology: Learning in a computer -mediated environment

Moyano Camihort, Karin 01 January 2005 (has links)
This study investigates the impact of online versus pen and paper homework on college students' learning and performance, and explores their experiences in each modality. After familiarizing students with two different homework modalities, students' decision to work in the online versus the traditional environment was utilized as the student preference indicator. Students' gender and computer comfort levels were also recorded. Although differences were found on the computer comfort levels of male and female students, there were no significant differences on learning outcomes. The findings suggest that students can learn equally well in either modality, regardless of their preference, gender or computer comfort level. In the attempt to better understand their experiences, students were asked to describe and compare their learning in both modalities. According to the students, instant feedback was the most valuable feature. They enjoyed working with computers; it helped them stay interested and motivated. They mentioned, however, that they learn better writing down on paper rather than typing on a computer keyboard.
328

Misconception to concept: Employing cognitive flexibility theory -based hypermedia to promote conceptual change in ill -structured domains

Frantiska, Joseph John 01 January 2001 (has links)
The field of New Media authoring is still evolving and is largely an inexact science. The power of New Media lies in its capability to present information in various forms to the learner, not only for the acquisition of needed information but also to allow for new ways of interpreting and understanding the information. More knowledge is needed to understand how best to combine different forms of media to enhance learning, especially in domains of knowledge that are ill-defined and ill-structured. This investigation explores and examines how best to combine visual and textual information in the context of science education to promote conceptual change. Cognitive Flexibility Theory (CFT) serves as the basis for this study. Tornado formation was chosen as the subject matter. The main principles of CFT are that learning activities must provide multiple representations of content, and that instructional materials should avoid oversimplifying the content domain and should support context-dependent knowledge. Also, instruction should be case-based and emphasize knowledge construction, not transmission of information, and knowledge sources should be highly interconnected rather than compartmentalized. The main hypothesis is that employing the principles of CFT in a hypermedia learning environment that directs browsing of the dynamics of tornado formation will improve learning and transfer of complex knowledge of the subject matter and initiate conceptual change. The hypothesis was tested by having the subjects first complete a pre-test in which they displayed their current understanding of how a tornado forms. They are then directed to enter the hypermedia site via an entry point based on their apparent misconception of the subject matter as seen in the pre-test. The hypermedia treatment guides the student through both textual and graphical information about the formation of tornadoes in accordance with the principles of CFT.
The subjects are allowed to change their conceptual understanding at points along the way. They are allowed to see case studies, definitions and an animation. Soon after the treatment is finished, they complete a post-test which is identical to the pre-test. The change in the test responses represents a conceptual change. The results showed a profound shift from the subject's original misconception toward a more correct understanding of the phenomenon. Specifically, when the counter-examples were in their initial positions, so that they would counteract the subject's misconception, the rate of positive conceptual change was high. Also, when the examples were reversed in an effort to see if they could bring about a continued high rate of change, they were indeed able to produce this rate of change. Presumably, this was due to a heightened contrast between the misconception and the correct concept, as the subject was led deeper into their misconception before seeing the correct concept. In both cases, the number of subjects displaying a positive conceptual change was in excess of 60%.
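The misconception-driven entry mechanism described in this record can be sketched as a small directed graph: the pre-test misconception selects the entry node, and the treatment follows links toward the target concept. All node names, links and misconception labels below are hypothetical, not taken from the dissertation's site.

```python
# Hypothetical hypermedia graph: entry point chosen by misconception,
# then guided browsing toward the target concept node.

HYPERMEDIA = {
    "heat-causes-tornado": ["warm-cold-air-collision"],
    "warm-cold-air-collision": ["wind-shear-rotation"],
    "wind-shear-rotation": ["updraft-tilts-rotation"],
    "updraft-tilts-rotation": [],          # target concept node
}

ENTRY_BY_MISCONCEPTION = {
    "tornadoes form from heat alone": "heat-causes-tornado",
    "tornadoes are just strong wind": "wind-shear-rotation",
}

def guided_path(misconception):
    """Follow the first link from the misconception's entry node onward."""
    node = ENTRY_BY_MISCONCEPTION[misconception]
    path = [node]
    while HYPERMEDIA[node]:
        node = HYPERMEDIA[node][0]
        path.append(node)
    return path
```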
329

Biblioteca Nao Master

Figuerola Mora, Gary Johal, Leon Rojas, Gustavo Manuel 24 March 2017 (has links)
El Robot Humanoide NAO, de Aldebaran Robotics, es un robot de aspecto humano desarrollado con fines académicos y de investigación, siendo utilizado en diferentes instituciones académicas en todo el mundo, incluyendo la Universidad de Tokio, el IIT Kanpur de la India y la Universidad del Rey Fahd de Petróleo y Minerales de Arabia Saudita. El presente proyecto tuvo como meta dar a conocer, y validar, las capacidades técnicas que puedan ser explotadas a nivel académico dentro de universidades interesadas en la investigación sobre robótica, dado que el impacto de esta rama de conocimiento aún es leve dentro del entorno académico peruano. Asimismo, con este proyecto se dará inicio a una nueva área de investigación dentro de las carreras de Ingeniería relacionadas con Ciencias de la Computación, donde se integre el robot humanoide NAO con diferentes tecnologías modernas de alto impacto en la sociedad, como puede ser Emotiv EPOC. Al final del desarrollo del proyecto se obtuvo una base sólida de conocimiento acerca de NAO y sus características más resaltantes. Para lograrlo se desarrolló una biblioteca en Python que permite controlar a NAO sin la necesidad de utilizar Choregraphe; la validación de esta biblioteca se hizo con el desarrollo de una aplicación que integró NAO con Emotiv EPOC, donde se utilizaron características como teleoperación, ejecución de rutinas de movimiento, control de la cámara y uso de Text-to-Speech y Speech-to-Text. Durante el presente proyecto se contemplaron tres fases: análisis de la tecnología a utilizar, en este caso, del desarrollo para el robot humanoide NAO; el desarrollo de la biblioteca; y la integración con Emotiv EPOC. / NAO, from Aldebaran Robotics, is a humanoid robot developed for academic and research purposes, used in many academic institutions worldwide, including the University of Tokyo, the Indian Institute of Technology Kanpur and the King Fahd University of Petroleum and Minerals.
The goal of this project was to present and validate the technical capabilities that can be exploited at an academic level within universities interested in research on robotics, since the impact of this branch of knowledge is still mild in the Peruvian academic environment. Likewise, this project was expected to open a new research area within engineering careers related to computer science, where the NAO humanoid robot could be integrated with other modern technologies with high impact on society, such as Emotiv EPOC. At the end of this project we obtained a solid base of knowledge about NAO and its most important capabilities. To achieve this, a Python library was developed that allows controlling the NAO robot without using the Choregraphe IDE. To validate this library, an application was developed that integrated NAO with Emotiv EPOC; this application used features such as teleoperation, movement routines, camera control and usage of Text-to-Speech and Speech-to-Text. Three phases were contemplated during this project: analysis of the technology to be used (development for the NAO humanoid robot), development of the library, and integration with Emotiv EPOC. / Tesis
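The integration idea this record describes (classified Emotiv EPOC mental commands triggering NAO routines) can be sketched as a dispatch table. Everything below is hypothetical: the command names, routine names and bindings are illustrative assumptions, and the real library's wrapping of the NAOqi SDK is not reproduced here.

```python
# Hypothetical sketch: map recognized Emotiv EPOC events to named NAO
# routines through a dispatch table (the real library would call the
# NAOqi SDK where this sketch just records action names).

def make_dispatcher(bindings):
    """Return a function that turns recognized mental commands into the
    list of robot actions they trigger (unknown commands are ignored)."""
    def dispatch(events):
        actions = []
        for event in events:
            actions.extend(bindings.get(event, []))
        return actions
    return dispatch

# Hypothetical bindings between EPOC mental commands and robot routines
dispatch = make_dispatcher({
    "push":  ["walk_forward"],
    "pull":  ["walk_backward"],
    "smile": ["say:Hello!", "wave"],
})
```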
330

Social conditions leading to Scrum process breakdowns during Global Agile Software Development: a theory of practice perspective

Tanner, Maureen Cynthia January 2013 (has links)
Global Software Development (GSD) and Agile are two software development trends that are gaining in popularity. In addition, more and more organisations are now seeking to engage in agile software development within the GSD context to reap the benefits of both ventures and achieve project success. Hence, agile methodologies adapted to fit the GSD context are commonly termed Global Agile Software Development (GASD) methodologies. However, because of geographical, temporal, and cultural challenges, collaboration is not easily realized in the GASD context. In addition, this work context is characterized by multiple overlapping fields of practice, which further hinder collaboration and give rise to social challenges. Given the existence of these social challenges, there is a need to further investigate the human-centred aspect of collaboration during GASD. Following an extensive literature review on the application of Scrum and other agile methodologies in GASD between 2006 and 2011, it was noted that there is a lack of understanding of the social conditions giving rise to the social challenges experienced during GASD. Past studies have instead sought to describe these social challenges and to provide mitigating strategies in the form of best practices, without detailing and theorising about the social conditions under which these social challenges emerge. One of the objectives of the study was thus to investigate the use of Scrum during GASD. In particular, the Scrum process breakdowns experienced during and after Scrum's sprint planning and retrospective meetings were identified. The social conditions under which these breakdowns emerged were investigated in the light of Bourdieu's Theory of Practice. Scrum process breakdowns were defined as any deviation from an ideal Scrum process (as per the Scrum methodology's guidelines) which leads to the emergence of social challenges, conflict or disagreements in the GASD team.
The study was empirical and qualitative in nature and followed the positivist research paradigm. Two case studies, in line with Bonoma's (1985) "drift" and "design" stages of case study design, were undertaken to investigate the phenomena of interest and answer the research questions. The first case focused on a distributed agile team executing a software project across South Africa (Cape Town) and Brazil (Sao Paulo), while the second case focused on a team executing an agile software project across India (Pune) and South Africa (Durban). The site selection was carefully thought out, and the results from the first case informed the second in order to add more richness to the data being gathered. In both case studies, data was collected through semi-structured interviews, documentation, field notes and direct observation. The underlying theoretical framework employed for the study was the Theory of Practice (Bourdieu, 1990). The study has identified various forms of Scrum process breakdowns occurring during and after sprint planning and retrospective meetings:
» Different perceptions about task urgency at the software development sites
» Disagreements on the suitability of software engineering practices
» Low level of communication openness during meetings involving the whole GASD team, compared to internal meetings at the sites
» Impromptu changes to user stories' content and priorities
» Product Owner's low level of authority
» Disagreements on estimation mechanisms
» Number of user stories to be completed during the sprint is imposed on the team
» Decisions on Scrum process updates not made by the development team
» Selective invitation to retrospective meetings
In addition, various social conditions were identified as possibly leading to the emergence of these Scrum process breakdowns in the GASD context:
» GASD project stakeholders' low level of capital in the joint field
» Different beliefs and values because of multiple fields
Two theoretical propositions were derived to describe the social conditions and the corresponding Scrum process breakdowns which are likely to emerge under these conditions.
