1 |
Variability-Modelling Practices in Industrial Software Product Lines: A Qualitative Study
Nair, Divya Karunakaran, 06 May 2013
Many organizations have transitioned from single-system development to product-line development with the goal of increasing productivity and facilitating mass customization. Variability modelling is a key activity in software product-line development that deals with the explicit representation of variability using dedicated models. Variability models specify points of variability and their variants in a product line. Although many variability-modelling notations and tools have been designed by researchers and practitioners, very little is known about their usage, actual benefits, or challenges. Existing studies mostly describe product-line practices in general, with little focus on variability modelling. We address this gap through a qualitative study of variability-modelling practices in medium- and large-scale companies using two empirical methods: surveys and interviews. We investigated companies' variability-modelling practices and experiences with the aim of gathering information on 1) the methods and strategies used to create and manage variability models, 2) the tools and notations used for variability modelling, 3) the perceived values and challenges of variability modelling, and 4) the core characteristics of their variability models. Our results show that variability models are often created by re-engineering existing products into a product line. All of the interviewees and the majority of survey participants indicated that they represent variability using separate variability models rather than annotative approaches. We found that developers use variability models for many purposes, such as the visualization of variabilities, the configuration of products, and the scoping of products. Although we observed that a high degree of heterogeneity exists in the variability-modelling notations and tools used by organizations, feature-based notations and tools are the most common. We saw large differences in the sizes of variability models and their contents, which indicates that variability models can serve different use cases depending on the organization. Most of our study participants reported complexity challenges related mainly to the visualization and evolution of variability models and to dependency management. In addition, reports from the interviews suggest that product-line adoption and variability modelling have forced developers to think in terms of a product-line scenario rather than a product-based scenario.
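To make the separate-model versus annotative distinction concrete, here is a minimal, hypothetical sketch (the feature names and model structure are invented, not taken from the study): in an annotative approach the variation points are embedded in the artifact itself, whereas a separate variability model keeps them in a dedicated structure mapped onto the artifacts.

```python
# Annotative style: variability annotations are tangled with the artifact,
# analogous to #ifdef blocks scattered through C code.
def render_dashboard(features: set[str]) -> list[str]:
    widgets = ["clock"]
    if "weather" in features:   # variation point embedded in the logic
        widgets.append("weather")
    if "news" in features:
        widgets.append("news_feed")
    return widgets

# Separate variability model: variation points and variants live in a
# dedicated model; products are configured against it, not in the code.
VARIABILITY_MODEL = {
    "dashboard.widgets": {                  # variation point
        "variants": {"weather", "news"},    # its variants
        "min": 0, "max": 2,                 # optional, multi-selection
    },
}

def configure(selection: dict[str, set[str]]) -> set[str]:
    """Validate a product configuration against the variability model."""
    chosen: set[str] = set()
    for point, variants in selection.items():
        spec = VARIABILITY_MODEL[point]
        assert variants <= spec["variants"], f"unknown variant at {point}"
        assert spec["min"] <= len(variants) <= spec["max"], "cardinality violated"
        chosen |= variants
    return chosen

print(render_dashboard(configure({"dashboard.widgets": {"weather"}})))
# ['clock', 'weather']
```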
|
2 |
Feature Model Synthesis
She, Steven, 29 August 2013
Variability provides the ability to adapt and customize a software system's artifacts for a particular context or circumstance. Variability enables code reuse, but its mechanisms are often tangled within a software artifact or scattered over multiple artifacts. This makes the system harder to maintain for developers and harder to understand for the users who configure the software.
Feature models provide a centralized source for describing the variability in a software system. A feature model consists of a hierarchy of features—the common and variable system characteristics—with constraints between features. Constructing a feature model, however, is an arduous and time-consuming manual process.
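As a toy illustration of these ingredients (a hierarchy, a mandatory feature, an alternative group, and a cross-tree constraint), the invented model below is encoded directly as a propositional validity check; graphical notations such as FODA diagrams express the same information visually.

```python
from itertools import product

# Invented toy model: Car (root) has a mandatory Engine with an
# alternative (xor) group {Petrol, Electric}; Heating is optional
# but has the cross-tree constraint "Heating requires Electric".
FEATURES = ["Car", "Engine", "Petrol", "Electric", "Heating"]

def valid(c: dict) -> bool:
    return (c["Car"]                              # root always selected
        and c["Engine"]                           # mandatory child of root
        and (c["Petrol"] != c["Electric"])        # xor group under Engine
        and (not c["Heating"] or c["Electric"]))  # cross-tree: requires

configs = [dict(zip(FEATURES, bits))
           for bits in product([False, True], repeat=len(FEATURES))]
print(sum(valid(c) for c in configs))   # 3 valid products in this toy model
```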
We developed two techniques for feature model synthesis. The first, Feature-Graph-Extraction, is an automated algorithm for extracting a feature graph from a propositional formula in either conjunctive normal form (CNF) or disjunctive normal form (DNF). A feature graph describes all feature diagrams that are complete with respect to the input. We evaluated our algorithms against related synthesis algorithms and found that our CNF variant was significantly faster than the previous comparable technique, and that the DNF algorithm performed similarly to a comparable but newer technique, except on several models where our algorithm was faster.
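Feature-Graph-Extraction itself is the thesis's contribution; as a rough, exponential-time illustration of what a feature graph records, the sketch below recovers the implication edges of a tiny formula by enumerating its satisfying assignments. It only conveys the idea and is not the thesis's algorithm, which operates on the CNF or DNF form directly.

```python
from itertools import product

VARS = ["a", "b", "c"]

def phi(a, b, c):
    # Example input in CNF: (not b or a) and (not c or a) and (not b or not c)
    return (not b or a) and (not c or a) and (not b or not c)

# Enumerate all satisfying assignments (exponential; illustration only).
models = [m for m in product([False, True], repeat=len(VARS)) if phi(*m)]

# An edge x -> y means "x implies y in every valid configuration", so y is
# a candidate ancestor of x in any feature diagram synthesized from phi.
edges = [(x, y)
         for x in range(len(VARS)) for y in range(len(VARS))
         if x != y and all(m[y] for m in models if m[x])]

for x, y in edges:
    print(f"{VARS[x]} -> {VARS[y]}")   # b -> a, c -> a
```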
The second, Feature-Tree-Synthesis, is a semi-automated technique for building a feature model given a feature graph. This technique uses both logical constraints and text to address the most challenging part of feature model synthesis—constructing the feature hierarchy—by ranking the potential parents of a feature with a textual similarity heuristic. We found that the procedure effectively reduced a modeler's choices from thousands to five or fewer when synthesizing the Linux and eCos variability models.
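A guess at the general shape of the textual-similarity step (the heuristic and the feature names here are invented; the thesis combines such ranking with the logically possible parents taken from the feature graph):

```python
from difflib import SequenceMatcher

def rank_parents(feature: str, candidates: list[str], top_k: int = 5) -> list[str]:
    """Rank the logically possible parents of `feature` by name similarity."""
    score = lambda c: SequenceMatcher(None, feature.lower(), c.lower()).ratio()
    return sorted(candidates, key=score, reverse=True)[:top_k]

# The candidate list would come from the feature graph's implication edges;
# these feature names are invented.
print(rank_parents("usb_storage",
                   ["usb_support", "scsi_layer", "networking", "usb_serial"]))
# Top-ranked candidates share the "usb" prefix, so the modeler confirms one
# of a handful of suggestions instead of scanning thousands of features.
```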
Our third contribution is an analysis of Kconfig—a language similar to feature modeling that is used to specify the variability model of the Linux kernel. While large feature models are reportedly used in industry, such models have not been available to the research community for benchmarking feature model analysis and synthesis techniques. We compare Kconfig to feature modeling, reverse-engineer its formal semantics, and translate 12 open-source Kconfig models—including the Linux model with over 6000 features—to propositional logic.
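For intuition, here is a stylized Kconfig fragment and a simplified two-valued propositional reading of it; pinning down the full semantics (tristate values, select versus depends on, defaults) is precisely what the thesis does, so treat this as an assumption-laden sketch.

```python
# Stylized Kconfig fragment (invented, not from the Linux tree):
#
#   config USB_STORAGE
#       bool "USB Mass Storage support"
#       depends on USB
#       select SCSI
#
# Simplified two-valued propositional reading:
#   USB_STORAGE -> USB    ("depends on" guards the option)
#   USB_STORAGE -> SCSI   ("select" forces the target on)

def consistent(usb_storage: bool, usb: bool, scsi: bool) -> bool:
    """Check a configuration against the translated constraints."""
    return (not usb_storage or usb) and (not usb_storage or scsi)

print(consistent(usb_storage=True, usb=True, scsi=True))    # True
print(consistent(usb_storage=True, usb=False, scsi=True))   # False: dependency violated
```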
|
3 |
Quantitative laser diagnostics for combustion
Williams, Benjamin Ashley Oliver, January 2009
Quantitative Planar Laser Induced Fluorescence (QPLIF) is developed as a diagnostic technique and then applied to a prototype Jaguar optical internal combustion engine. QPLIF derives quantitative, two-dimensional, spatially resolved measurements of fuel concentration. This work reports the first demonstration of a fully fractionated surrogate fuel that exhibits all the characteristics of a typical gasoline. This 'pseudo' fuel, developed in association with Shell UK, is blended to accept a fluorescent tracer which may track one of the light, middle or heavy fractions of the fuel, each of different volatility. The traditional weaknesses of PLIF for quantitative measurements are addressed by use of a fired in-situ calibration method, which maps the quantum efficiency of the tracer and concurrently corrects for window fouling and exhaust gas residuals (EGR). Fuel distributions are presented with an estimated super-pixel accuracy of 10% at different operating conditions, and then compared to the computational fluid dynamics (CFD) predictions of an in-house Jaguar model. Fuel/Air Ratios by Laser Induced thermal Gratings (FARLIG) is developed theoretically, and results of validation experiments conducted in a laboratory setting are reported. FARLIG conceptually enables the measurement of fuel concentration, oxygen concentration and temperature within a spatially localised probe volume. Uniquely, the technique exploits the dominant influence of molecular oxygen on non-radiative quenching processes in an aromatic tracer molecule. The changing character of a model quenching mechanism potentially allows the oxygen concentration in the measurement volume to be derived. Absolute signal strength is used to determine fuel concentration, while the oscillation period of the signal provides a precise measurement of temperature (~0.3% uncertainty), with accuracy limited by knowledge of the gas composition.
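As a sketch of why the signal's oscillation period encodes temperature: in laser-induced grating techniques, the acoustic contribution to the signal oscillates at a frequency set by the speed of sound divided by the grating fringe spacing, and for an ideal gas the speed of sound fixes the temperature once the composition is known. Under those assumptions (ideal gas; known heat-capacity ratio γ and molar mass M; the exact prefactor depends on the grating configuration):

```latex
f_{\mathrm{osc}} \approx \frac{c_s}{\Lambda},
\qquad c_s = \sqrt{\frac{\gamma R T}{M}}
\quad\Longrightarrow\quad
T \approx \frac{M\,(\Lambda f_{\mathrm{osc}})^{2}}{\gamma R}
```

Here Λ is the fringe spacing fixed by the pump-beam crossing geometry, which is why a precise period measurement translates into a precise temperature, and why the accuracy is "limited by knowledge of the gas composition": γ and M enter the conversion directly.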
|
4 |
Achieving Autonomic Computing through the Use of Variability Models at Run-time
Cetina Englada, Carlos, 15 April 2010
Increasingly, software needs to dynamically adapt its behavior at run-time in response to changing conditions in the supporting computing infrastructure and in the surrounding physical environment. Adaptability is emerging as a necessary underlying capability, particularly for highly dynamic systems such as context-aware or ubiquitous systems.
By automating tasks such as installation, adaptation, or healing, Autonomic Computing envisions computing environments that evolve without the need for human intervention. Even though there is a fair amount of work on architectures and their theoretical design, Autonomic Computing has been criticised as a "hype topic" because very little of it has been implemented fully. Furthermore, given that the autonomic system must change states at runtime, and that some of those states may emerge and are much less deterministic, there is a great challenge to provide new guidelines, techniques and tools to help autonomic system development.
This thesis shows that building on the central ideas of Model Driven Development (Models as first-order citizens) and Software Product Lines (Variability Management) can play a significant role as we move towards implementing the key self-management properties associated with autonomic computing. The presented approach encompasses systems that are capable of modifying their own behavior with respect to changes in their operating environment, by using variability models as if they were the policies that drive the system's autonomic reconfiguration at runtime. Under a set of reconfiguration commands, the components that make up the architecture dynamically cooperate to move the architecture to a new configuration.
This work also provides the implementation of a Model-Based Reconfiguration Engine (MoRE) to blend the above ideas. Given a context event, MoRE queries the variability models to determine how the system should evolve, and then it provides the mechanisms for modifying the system. / Cetina Englada, C. (2010). Achieving Autonomic Computing through the Use of Variability Models at Run-time [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/7484
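The sketch below is hypothetical (the names and model structure are invented, not MoRE's actual API); it only illustrates the loop the abstract describes, in which a context event triggers a query of the variability model and the answer is turned into reconfiguration commands.

```python
# Hypothetical model-based reconfiguration loop (invented names, not MoRE's API).
RESOLUTIONS = {
    # context condition -> features the variability model activates for it
    "user_away":    {"security_alarm", "lights_off"},
    "user_present": {"ambient_lighting", "media_follow_me"},
}

ACTIVE: set[str] = set()   # features currently realized by the architecture

def reconfigure(context_event: str) -> list[str]:
    """Query the model for the target configuration and derive the
    reconfiguration commands that move the architecture to it."""
    target = RESOLUTIONS[context_event]
    commands = ([f"stop {f}" for f in sorted(ACTIVE - target)] +
                [f"start {f}" for f in sorted(target - ACTIVE)])
    ACTIVE.clear()
    ACTIVE.update(target)   # commit the new configuration
    return commands

print(reconfigure("user_away"))
# ['start lights_off', 'start security_alarm']
print(reconfigure("user_present"))
# ['stop lights_off', 'stop security_alarm',
#  'start ambient_lighting', 'start media_follow_me']
```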
|