461 |
Static Code Analysis: A Systematic Literature Review and an Industrial Survey
Ilyas, Bilal; Elkhalifa, Islam. January 2016
Context: Static code analysis is a software verification technique that refers to the process of examining code without executing it in order to capture defects early and avoid costly fixes later. The lack of realistic empirical evaluations in software engineering has been identified as a major issue limiting the ability of research to impact industry, which in turn prevents feedback from industry that could improve, guide and orient research. Studies have emphasized rigor and relevance as important criteria for assessing the quality and realism of research: rigor defines how adequately a study has been carried out and reported, while relevance defines the potential impact of the study on industry. Despite the importance of static code analysis techniques and their existence for more than three decades, empirical evaluations in this field are few in number and do not take rigor and relevance into consideration.

Objectives: The aim of this study is to contribute toward bridging the gap between static code analysis research and industry by improving the ability of research to impact industry and vice versa. The study has two main objectives. First, to develop guidelines for researchers by exploring the existing research in static code analysis to identify its current status, shortcomings, rigor and industrial relevance, and the reported benefits and limitations of different static code analysis techniques, and finally to give recommendations that help make future research more industrially oriented. Second, to develop guidelines for practitioners by investigating the adoption of different static code analysis techniques in industry and identifying the benefits and limitations of these techniques as perceived by industrial professionals, then cross-analyzing the findings of the SLR and the survey to draw final conclusions and give recommendations that help professionals decide which techniques to adopt.

Methods: A sequential exploratory strategy, characterized by the collection and analysis of qualitative data (a systematic literature review) followed by the collection and analysis of quantitative data (a survey), was used to conduct this research. To achieve the first objective, a thorough systematic literature review was conducted following the Kitchenham guidelines. To achieve the second objective, a questionnaire-based online survey was conducted, targeting professionals from the software industry, to collect their responses regarding the usage of different static code analysis techniques as well as their benefits and limitations. The quantitative data obtained were subjected to statistical analysis in order to interpret the data and draw results from it.

Results: In static code analysis research, inspection and static analysis tools have received significantly more attention than the other techniques. The benefits and limitations of static code analysis techniques were extracted, and seven recurrent variables were used to report them. The existing research in the field significantly lacks rigor and relevance, and the reason behind this has been identified. Recommendations are developed outlining how to improve static code analysis research and make it more industrially oriented. From the industrial point of view, static analysis tools are widely used, followed by informal reviews, while inspections and walkthroughs are rarely used. The benefits and limitations of different static code analysis techniques, as perceived by industrial professionals, have been identified along with the influential factors.

Conclusions: The SLR concluded that techniques with a formal, well-defined process and process elements have received more attention in research; however, this does not necessarily mean that one technique is better than the others. Experiments have been used widely as a research method in static code analysis research, but the outcome variables in the majority of the experiments are inconsistent. The use of experiments in an academic context contributed nothing to improve relevance, while the inadequate reporting of validity threats and their mitigation strategies contributed significantly to the poor rigor of the research. The benefits and limitations of different static code analysis techniques identified by the SLR could not complement the survey findings, because the rigor and relevance of most of the studies reporting them were weak. The survey concluded that the adoption of static code analysis techniques in industry is influenced more by the software life-cycle models in practice in organizations, while software product type and company size do not have much influence. The amount of attention a static code analysis technique has received in research does not necessarily influence its adoption in industry, which indicates a wide gap between research and industry. However, company size, product type, and software life-cycle model do influence professionals' perceptions of the benefits and limitations of different static code analysis techniques.
|
462 |
Utforskning i spel och immersionens djup : En empirisk studie om upplevelsen av immersion i ett utforskningbaserat spel (Exploration in games and the depth of immersion: An empirical study of the experience of immersion in an exploration-based game)
Axelsson, Kim; Batalje, Kasper. January 2015
This study is an empirical study that was conducted via an online survey where the participants answered questions regarding exploration and immersion in the game Starbound. The participants answered the questions in their own words, describing how these aspects affected their ability to immerse themselves in the game. These answers were then analyzed using thematic analysis in order for us to establish themes and categories that we used as a foundation for our research. We found that the already established categories for immersion could be complemented by our findings; the answers from the participants clearly indicated that exploration is of great importance for the players' ability to achieve immersion in the game. In addition to explorative immersion, we also found that immersion via multiplayer was a recurring important aspect for the participants.
|
463 |
Time Series Online Empirical Bayesian Kernel Density Segmentation: Applications in Real Time Activity Recognition Using Smartphone Accelerometer
Na, Shuang. 28 June 2017
Time series analysis has been explored by researchers in many areas, such as statistical research, engineering applications, medical analysis, and finance. To represent the data more efficiently, the mining process is supported by time series segmentation. A time series segmentation algorithm looks for the change points between two different patterns and develops a suitable model for the data observed in each segment. Given limited computing and storage capability, it is necessary to consider an adaptive and incremental online segmentation method. In this study, we propose Online Empirical Bayesian Kernel Segmentation (OBKS), which combines Online Multivariate Kernel Density Estimation (OMKDE) and an Online Empirical Bayesian Segmentation (OBS) algorithm. This method uses the online multivariate kernel density as the predictive distribution within online empirical Bayesian segmentation, instead of the posterior predictive distribution. The benefit of Online Multivariate Kernel Density Estimation is that it does not require the assumption of a pre-defined prior function, which makes OMKDE more adaptive and adjustable than the posterior predictive distribution.
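For illustration only, the following minimal sketch shows one way an online multivariate kernel density estimate can serve as a predictive density that is updated one observation at a time. It assumes a Gaussian product kernel with a fixed bandwidth and is not the thesis's OMKDE implementation.

```python
# Minimal illustrative sketch: an incrementally updated multivariate KDE used
# as a predictive density. The Gaussian product kernel with fixed bandwidth h
# is an assumption of this sketch, not the OBKS/OMKDE algorithm itself.
import numpy as np

class OnlineKDE:
    def __init__(self, dim, bandwidth=0.5):
        self.h = bandwidth
        self.points = np.empty((0, dim))

    def update(self, x):
        """Absorb one new d-dimensional observation."""
        self.points = np.vstack([self.points, np.asarray(x, dtype=float)])

    def predictive(self, x):
        """Kernel density estimate at x given all observations seen so far."""
        if len(self.points) == 0:
            return 1.0  # vague value before any data has been observed
        d = self.points.shape[1]
        diff = (self.points - np.asarray(x, dtype=float)) / self.h
        kernels = np.exp(-0.5 * np.sum(diff ** 2, axis=1))
        kernels /= (2 * np.pi) ** (d / 2) * self.h ** d
        return kernels.mean()

kde = OnlineKDE(dim=3)
for obs in np.random.randn(100, 3):   # e.g. streaming accelerometer samples
    p = kde.predictive(obs)           # predictive density before updating
    kde.update(obs)
```

In a Bayesian online segmentation scheme, a persistently low predictive density for incoming observations relative to the current segment is the kind of evidence used to declare a change point.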
Human Activity Recognition (HAR) using smartphones with embedded sensors is a modern time series application applied in many areas, such as therapeutic applications and in-car sensors. The important procedures related to the HAR problem include classification, clustering, feature extraction, dimension reduction, and segmentation. Segmentation, as the first step of HAR analysis, attempts to represent the time interval more effectively and efficiently. The traditional segmentation method for HAR is to partition the time series into short, fixed-length segments. However, these segments might not be long enough to capture sufficient information for the entire activity time interval. In this research, we first segment the observations of an entire activity as a single interval using the Online Empirical Bayesian Kernel Segmentation algorithm. A smartphone with a built-in accelerometer generates the observations of these activities.
Based on the segmentation result, we introduce a two-layer random forest classification method. The first layer is used to identify the main group; the second layer is designed to analyze the subgroups within each main group. We evaluate the performance of our method on six activities performed by 30 volunteers: sitting, standing, lying, walking, walking_upstairs, and walking_downstairs. Detecting walking_upstairs and walking_downstairs automatically requires more information and more detailed, and therefore more complicated, features, since these two activities are very similar. For real-time activity recognition on smartphones using the embedded accelerometer, the first layer classifies the activities as static or dynamic, and the second layer classifies each main group into its sub-classes, depending on the first-layer result. For the data collected, we obtain an overall accuracy of 91.4% on the six activities and an overall accuracy of 100% when distinguishing only the dynamic activities (walking, walking_upstairs, walking_downstairs) from the static activities (sitting, standing, lying).
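As a rough, hypothetical sketch of the two-layer idea described above (not the thesis code, and with feature extraction from the accelerometer segments assumed to have been done already):

```python
# Illustrative two-layer random forest: layer 1 separates static vs. dynamic
# activities; layer 2 refines each group into its sub-activities.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

STATIC = {"sitting", "standing", "lying"}

def fit_two_layer(X, y):
    """X: per-segment feature matrix, y: array of activity labels (strings)."""
    y = np.asarray(y)
    coarse = np.where(np.isin(y, list(STATIC)), "static", "dynamic")
    layer1 = RandomForestClassifier(n_estimators=100).fit(X, coarse)
    layer2 = {g: RandomForestClassifier(n_estimators=100).fit(X[coarse == g], y[coarse == g])
              for g in ("static", "dynamic")}
    return layer1, layer2

def predict_two_layer(layer1, layer2, X):
    group = layer1.predict(X)            # first layer: static vs. dynamic
    labels = np.empty(len(X), dtype=object)
    for g in ("static", "dynamic"):      # second layer: sub-activity per group
        mask = group == g
        if mask.any():
            labels[mask] = layer2[g].predict(X[mask])
    return labels
```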
|
464 |
Escalation prediction using feature engineering: addressing support ticket escalations within IBM’s ecosystem
Montgomery, Lloyd Robert Frank. 28 August 2017
Large software organizations handle many customer support issues every day in the form of bug reports, feature requests, and general misunderstandings as submitted by customers. Strategies to gather, analyze, and negotiate requirements are complemented by efforts to manage customer input after products have been deployed. For the latter, support tickets are key in allowing customers to submit their issues, bug reports, and feature requests. Whenever insufficient attention is given to support issues, there is a chance customers will escalate their issues, and escalation to management is time-consuming and expensive, especially for large organizations managing hundreds of customers and thousands of support tickets. This thesis provides a step towards simplifying the job for support analysts and managers, particularly in predicting the risk of escalating support tickets. In a field study at our large industrial partner, IBM, a design science methodology was employed to characterize the support process and data available to IBM analysts in managing escalations. Through iterative cycles of design and evaluation, support analysts’ expert knowledge about their customers was translated into features of a support ticket model to be implemented in a Machine Learning model to predict support ticket escalations. The Machine Learning model was trained and evaluated on over 2.5 million support tickets and 10,000 escalations, obtaining a recall of 79.9% and an 80.8% reduction in the workload for support analysts looking to identify support tickets at risk of escalation. Further on-site evaluations were conducted through a tool developed to implement the Machine Learning techniques in industry, deployed during weekly support-ticket-management meetings. The features developed in the Support Ticket Model are designed to serve as a starting place for organizations interested in implementing the model to predict support ticket escalations, and for future researchers to build on to advance research in Escalation Prediction.
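As a purely illustrative sketch of this kind of setup (not IBM's or the thesis's implementation; the file name, feature names, and the 0/1 "escalated" label below are hypothetical placeholders), a classifier trained on engineered ticket features can rank open tickets by escalation risk and report recall on past escalations:

```python
# Hypothetical example: rank support tickets by predicted escalation risk.
# "support_tickets.csv" and the feature columns are assumed placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

tickets = pd.read_csv("support_tickets.csv")
features = ["days_open", "customer_entries", "past_escalations", "severity"]
X, y = tickets[features], tickets["escalated"]

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model = RandomForestClassifier(n_estimators=200, class_weight="balanced")
model.fit(X_train, y_train)

print("recall on escalations:", recall_score(y_test, model.predict(X_test)))

# Rank held-out tickets by risk so analysts review the riskiest ones first.
ranked = X_test.assign(risk=model.predict_proba(X_test)[:, 1]).sort_values("risk", ascending=False)
print(ranked.head(10))
```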
|
465 |
Investigating styles in variability modeling: Hierarchical vs. constrained styles
Reinhartz-Berger, Iris; Figl, Kathrin; Haugen, Øystein. 07 1900
Context: A common way to represent product lines is with variability modeling. Yet, there are different ways to extract and organize relevant characteristics of variability. Comprehensibility of these models and the ease of creating models are important for the efficiency of any variability management approach.
Objective: The goal of this paper is to investigate the comprehensibility of two common styles to organize variability into models - hierarchical and constrained - where the dependencies between choices are specified either through the hierarchy of the model or as cross-cutting constraints, respectively.
Method: We conducted a controlled experiment with a sample of 90 participants who were students with prior training in modeling. Each participant was provided with two variability models specified in Common Variability Language (CVL) and was asked to answer questions requiring interpretation of provided models. The models included 9 to 20 nodes and 8 to 19 edges and used the main variability elements. After answering the questions, the participants were asked to create a model based on a textual description.
Results: The results indicate that the hierarchical modeling style was easier to comprehend from a subjective point of view, but there was also a significant interaction effect with the degree of dependency in the models that influenced objective comprehension. With respect to model creation, we found that the use of a constrained modeling style resulted in higher correctness of variability models.
Conclusions: Prior exposure to a modeling style and the degree of dependency among elements in the model determined which modeling style a participant chose when creating a model from natural language descriptions. Participants tended to choose a hierarchical style for modeling situations with high dependency and a constrained style for situations with low dependency. Furthermore, the degree of dependency also influences the comprehension of the variability model.
|
466 |
A Manifestation of Model-Code Duality: Facilitating the Representation of State Machines in the Umple Model-Oriented Programming Language
Badreldin, Omar. January 2012
This thesis presents research to build and evaluate the embedding of a textual form of state machines into high-level programming languages. The work entailed adding state machine syntax and code generation to the Umple model-oriented programming technology. The added concepts include states, transitions, actions, and composite states as found in the Unified Modeling Language (UML). This approach allows software developers to take advantage of these modeling abstractions in their textual environments, without sacrificing the added value of visual modeling.
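For intuition only, the following sketch shows the general idea of a state machine expressed textually alongside ordinary program code (states, transitions, entry actions). It is plain Python, not Umple syntax, and the garage-door example is a generic illustration rather than one taken from the thesis.

```python
# Illustrative textual state machine embedded in code; not Umple syntax.
TRANSITIONS = {
    ("Closed",  "buttonPressed"): "Opening",
    ("Opening", "reachedTop"):    "Open",
    ("Open",    "buttonPressed"): "Closing",
    ("Closing", "reachedBottom"): "Closed",
}
ENTRY_ACTIONS = {
    "Opening": lambda: print("motor up"),
    "Closing": lambda: print("motor down"),
}

class GarageDoor:
    def __init__(self):
        self.state = "Closed"

    def handle(self, event):
        next_state = TRANSITIONS.get((self.state, event))
        if next_state is not None:        # events with no transition are ignored
            self.state = next_state
            action = ENTRY_ACTIONS.get(next_state)
            if action:
                action()

door = GarageDoor()
door.handle("buttonPressed")   # -> Opening, prints "motor up"
door.handle("reachedTop")      # -> Open
```

In Umple itself, such a machine is declared directly inside a class and the equivalent handling code is generated automatically, which is the model-code duality the thesis investigates.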
Our efforts in developing state machines in Umple followed a test-driven approach to ensure high quality and usability of the technology. We have also developed a syntax-directed editor for Umple, similar to those available to other high-level programming languages. We conducted a grounded theory study of Umple users and used the findings iteratively to guide our experimental development. Finally, we conducted a controlled experiment to evaluate the effectiveness of our approach.
By enhancing the code to be almost as expressive as the model, we further support model-code duality: the notion that model and code are two faces of the same coin. Systems can be, and should be, equally well specified textually and diagrammatically. Such duality will benefit both modelers and coders alike. Our work suggests that code enhanced with state machine modeling abstractions is semantically equivalent to visual state machine models.
The flow of the thesis is as follows: the research hypothesis and questions are presented in “Chapter 1: Introduction”. The background is explored in “Chapter 2: Background”. “Chapter 3: Syntax and semantics of simple state machines” and “Chapter 4: Syntax and semantics of composite state machines” investigate simple and composite state machines in Umple, respectively. “Chapter 5: Implementation of composite state machines” presents the approach we adopt for the implementation of composite state machines, which avoids an explosion in the amount of generated code. From this point on, the thesis presents empirical work. A grounded theory study is presented in “Chapter 6: A Grounded theory study of Umple”, followed by a controlled experiment in “Chapter 7: Experimentation”. These two chapters constitute our validation and evaluation of the Umple research. Related and future work is presented in “Chapter 8: Related work”.
|
467 |
Asymptotics for the Sequential Empirical Process and Testing for Distributional Change for Stationary Linear Models
El Ktaibi, Farid. January 2015
Detecting a change in the structure of a time series is a classical statistical problem. Here we consider a short memory causal linear process $X_i=\sum_{j=0}^\infty a_j\xi_{i-j}$, $i=1,\cdots,n$, where the innovations $\xi_i$ are independent and identically distributed and the coefficients $a_j$ are summable. The goal is to detect the existence of an unobserved time at which there is a change in the marginal distribution of the $X_i$'s. Our model allows us to simultaneously detect changes in the coefficients and changes in location and/or scale of the innovations. Under very simple moment and summability conditions, we investigate the asymptotic behaviour of the sequential empirical process based on the $X_i$'s both with and without a change-point, and show that two proposed test statistics are consistent. In order to find appropriate critical values for the test statistics, we then prove the validity of the moving block bootstrap for the sequential empirical process under both the hypothesis and the alternative, again under simple conditions. Finally, the performance of the proposed test statistics is demonstrated through Monte Carlo simulations.
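For reference, a standard formulation of the objects involved (the thesis's precise statistics and weightings may differ) is a CUSUM/Kolmogorov-Smirnov-type functional of the sequential empirical process:

```latex
% Sequential empirical distribution functions and a CUSUM/KS-type change-point
% statistic; a standard formulation, assumed here for illustration only.
\[
  \widehat{F}_{1:k}(x) = \frac{1}{k}\sum_{i=1}^{k}\mathbf{1}\{X_i \le x\},
  \qquad
  \widehat{F}_{k+1:n}(x) = \frac{1}{n-k}\sum_{i=k+1}^{n}\mathbf{1}\{X_i \le x\},
\]
\[
  T_n = \max_{1 \le k < n}\;\sup_{x \in \mathbb{R}}\;
        \frac{k\,(n-k)}{n^{3/2}}\,
        \bigl|\, \widehat{F}_{1:k}(x) - \widehat{F}_{k+1:n}(x) \,\bigr|.
\]
```

Large values of $T_n$ indicate a change in the marginal distribution of the $X_i$'s, and the moving block bootstrap mentioned above supplies approximate critical values when the observations are dependent.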
|
468 |
Understanding Schenkerian Analysis from the Perspective of Music Perception and Cognition
Carrabré, Ariel. January 2015
This thesis investigates the perceptual and cognitive reality of Schenkerian theory through a survey of relevant empirical research. It reviews existing Schenkerian-specific empirical research, examines general tonal research applicable to Schenkerian analysis, and proposes the possibility of an optimal empirical research method by which to explore the theory. It evaluates data dealing with musical instruction’s effect on perception. From this review, reasonable evidence for the perceptual reality of Schenkerian-style structural levels is found to exist. This thesis asserts that the perception of Schenkerian analytical structures is largely an unconscious process.
|
469 |
Statistical Inference for Heavy Tailed Time Series and Vectors
Tong, Zhigang. January 2017
In this thesis we deal with statistical inference related to extreme value phenomena. Specifically, if X is a random vector with values in d-dimensional space, our goal is to estimate moments of ψ(X) for a suitably chosen function ψ when the magnitude of X is big. We employ the powerful tool of regular variation for random variables, random vectors and time series to formally define the limiting quantities of interest and construct the estimators. We focus on three statistical estimation problems: (i) multivariate tail estimation for regularly varying random vectors, (ii) extremogram estimation for regularly varying time series, (iii) estimation of the expected shortfall given an extreme component under a conditional extreme value model. We establish asymptotic normality of estimators for each of the estimation problems. The theoretical findings are supported by simulation studies and the estimation procedures are applied to some financial data.
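For context, the extremogram of a regularly varying time series and its usual sample version (in the style commonly attributed to Davis and Mikosch; the thesis's estimators may differ in detail) can be written as:

```latex
% Extremogram for sets A, B bounded away from zero, and its sample estimator.
% Here a_m denotes a high threshold growing with the sample size (an assumption
% of this illustration, e.g. an upper empirical quantile of |X_1|,...,|X_n|).
\[
  \rho_{A,B}(h) = \lim_{x \to \infty}
    P\!\left( x^{-1} X_h \in B \,\middle|\, x^{-1} X_0 \in A \right),
\]
\[
  \widehat{\rho}_{A,B}(h) =
    \frac{\sum_{i=1}^{n-h} \mathbf{1}\{ X_{i+h}/a_m \in B,\; X_i/a_m \in A \}}
         {\sum_{i=1}^{n} \mathbf{1}\{ X_i/a_m \in A \}}.
\]
```

Asymptotic normality of such ratio estimators is the kind of result referred to above, with simulation studies used to assess their finite-sample behaviour.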
|
470 |
Occupational adaptation in diverse contexts with focus on persons in vulnerable life situations
Johansson, Ann. January 2017
Introduction. The present thesis focuses on occupational adaptation in the empirical context of vulnerable populations relative to ageing (Study II, III), disability (Study I, II) and poverty (Study IV), and in a theoretical context (Study V). Aim. The overall aim was to explore and describe occupational adaptation in diverse contexts with a focus on persons in vulnerable life situations. Methods. The thesis was conducted with a mixed design embracing quantitative and qualitative methods and a literature review. The data collection methods comprised questionnaires (Study I, II, III), individual interviews (Study II, IV), group interviews (Study III) and database searches (Study V). Altogether 115 persons participated in the studies and 50 articles were included in the literature review. Qualitative content analysis was used to analyse the interviews (Study I, II, III, IV) and the literature review (Study V). Parametric and non-parametric statistics were applied when analysing the quantitative data (Study II, III). Results. Women in St Petersburg, Russia, who had had a minor stroke reported more dependence in everyday occupations than their stroke symptoms indicated, and they overemphasized their disability and dysfunction. When the environmental press did not meet their competence, it caused negative adaptive behaviour (Study I). In home rehabilitation for older persons with disabilities, interventions based on the occupational adaptation model were compared with interventions based on well-tried professional experience. The results indicated that the use of the occupational adaptation model increased experienced health, and the participants acquired adaptive strategies to manage everyday occupations (Study II). An occupation-based health-promoting programme for older community-dwelling persons was compared with a control group. The intervention group showed statistically significant improvement in general health variables such as vitality and mental health, but there were no statistically significant differences between the groups. A qualitative evaluation in the intervention group showed that participation in meaningful, challenging occupations in different environments stimulated the occupational adaptation process (Study III). Occupational adaptation among vulnerable EU citizens begging in Sweden was explored through interviews. The results showed that the participants experienced several occupational challenges when begging abroad. They also showed a variety of adaptive responses, but whether these are experienced as positive or negative is a matter of perspective and can only be determined by the participants themselves (Study IV). Finally, the results from a literature review (Study V) showed that research on occupational adaptation was mainly based on Schkade and Schultz’s and Kielhofner’s theoretical approaches. Occupational adaptation was also used without further explanation or theoretical argument (Study V). Conclusion. The surrounding context was shown to play an important role in the participants' occupational adaptation. There were no general occupational challenges or adaptive responses to the various vulnerable life situations, but some common features in the participant groups' adaptive responses were found. For example, if the environment put too great a demand on the person and social support was lacking, there was a risk of negative adaptation. Moreover, persons with low functional capacity were vulnerable to environmental demands and dependent on a supportive environment for their adaptive response. However, persons living in supportive environments developed adaptive responses by themselves. Further, personal factors needed to be strengthened to meet the demands of the environment. Upholding occupational roles was a driving force in finding ways to adapt and perform occupations. Considering the theoretical context, the occupational adaptation theoretical approaches need to be further developed in relation to negative adaptation and to support use within community-based and health-promotive areas.
|