131 |
Behavioural Model Fusion. Nejati, Shiva, 19 January 2009.
In large-scale model-based development, developers periodically need to combine collections of interrelated models. These models may capture different features of a system, describe alternative perspectives on a single feature, or express ways in which different features alter one another's structure or behaviour. We refer to the process of combining a set of interrelated models as "model fusion".
A number of factors make model fusion complicated. Models may overlap, in that they refer to the same concepts, but these concepts may be presented differently in each model, and the models may contradict one another. Models may describe independent system components, but the components may interact, potentially causing undesirable side effects. Finally, models may cross-cut, modifying one another in ways that violate their syntactic or semantic properties.
In this thesis, we study three instances of the fusion problem for "behavioural models", motivated by real-world applications. The first problem is combining "partial" models of a single feature with the goal of creating a more complete description of that feature. The second problem is maintenance of "variant" specifications of individual features. The goal here is to combine the variants while preserving their points of difference (i.e., variabilities). The third problem is analysis of interactions between models describing "different" features. Specifically, given a set of features, the goal is to construct a composition such that undesirable interactions are absent. We provide an automated tool-supported solution to each of these problems and evaluate our solutions.
The main novelties of the techniques presented in this thesis are (1) preservation of semantics during the fusion process, and (2) applicability to large and evolving collections of models. These are made possible by explicit modelling of partiality, variability and regularity in behavioural models, and providing semantic-preserving notions for relating these models.
|
132 |
Block-based Adaptive Mesh Refinement Finite-volume Scheme for Hybrid Multi-block Meshes. Zheng, Zheng Xiong, 27 November 2012.
A block-based adaptive mesh refinement (AMR) finite-volume scheme is developed for the solution of hyperbolic conservation laws on two-dimensional hybrid multi-block meshes. A Godunov-type upwind finite-volume spatial-discretization scheme, with piecewise limited linear reconstruction and Riemann-solver-based flux functions, is applied to the quadrilateral cells of a hybrid multi-block mesh, and these computational cells are embedded in either body-fitted structured or general unstructured grid partitions of the hybrid grid. A hierarchical quadtree data structure is used to allow local refinement of the individual subdomains based on heuristic physics-based refinement criteria. An efficient and scalable parallel implementation of the proposed algorithm is achieved via domain decomposition. The performance of the proposed scheme is demonstrated through application to the solution of the compressible Euler equations for a number of flow configurations and regimes in two space dimensions. The efficiency of the AMR procedure and the accuracy, robustness, and scalability of the hybrid mesh scheme are assessed.
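The hierarchical quadtree refinement described above can be sketched in a few lines. This is a hedged illustration only: the class names, the block layout, and the refinement criterion are assumptions for the sketch, not the thesis's actual data structures.

```python
# Minimal sketch of block-based quadtree refinement: each node is a
# square block of cells; a heuristic criterion decides whether to split
# it into four children. Names and criterion are illustrative only.

class Block:
    def __init__(self, x, y, size, level=0):
        self.x, self.y, self.size, self.level = x, y, size, level
        self.children = []  # empty list means this block is a leaf

    def refine(self):
        """Split this block into four equal quadrants (2D quadtree)."""
        half = self.size / 2
        self.children = [
            Block(self.x,        self.y,        half, self.level + 1),
            Block(self.x + half, self.y,        half, self.level + 1),
            Block(self.x,        self.y + half, half, self.level + 1),
            Block(self.x + half, self.y + half, half, self.level + 1),
        ]

def adapt(block, needs_refinement, max_level=4):
    """Recursively refine leaf blocks flagged by the (physics-based) criterion."""
    if not block.children:
        if block.level < max_level and needs_refinement(block):
            block.refine()
            for child in block.children:
                adapt(child, needs_refinement, max_level)
    else:
        for child in block.children:
            adapt(child, needs_refinement, max_level)

def leaves(block):
    """Collect all leaf blocks of the tree."""
    if not block.children:
        return [block]
    return [leaf for child in block.children for leaf in leaves(child)]
```

Coarsening (merging four siblings back into their parent) and the parallel domain decomposition over leaf blocks are omitted here for brevity.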
|
135 |
Two-Dimensional Anisotropic Cartesian Mesh Adaptation for the Compressible Euler Equations. Keats, William A., January 2004.
Simulating transient compressible flows involving shock waves presents challenges to the CFD practitioner in terms of the mesh quality required to resolve discontinuities and prevent smearing. This document discusses a novel two-dimensional Cartesian anisotropic mesh adaptation technique implemented for transient compressible flow. This technique, originally developed for laminar incompressible flow, is efficient because it refines and coarsens cells using criteria that consider the solution in each of the cardinal directions separately. In this document the method will be applied to compressible flow. The procedure shows promise in its ability to deliver good quality solutions while achieving computational savings. Transient shock wave diffraction over a backward step and shock reflection over a forward step are considered as test cases because they demonstrate that the quality of the solution can be maintained as the mesh is refined and coarsened in time. The data structure is explained in relation to the computational mesh, and the object-oriented design and implementation of the code is presented. Refinement and coarsening algorithms are outlined. Computational savings over uniform and isotropic mesh approaches are shown to be significant.
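The key idea above, refining and coarsening using criteria that consider each cardinal direction separately, can be sketched as directional flagging. The first-difference indicator below is a stand-in assumption; the thesis's actual anisotropic criteria are not reproduced here.

```python
import numpy as np

def directional_flags(u, tol):
    """Flag cells for refinement in each cardinal direction separately,
    using first differences of the solution u on a uniform grid as a
    stand-in refinement indicator (illustrative only)."""
    dx = np.abs(np.diff(u, axis=1))  # solution variation in x
    dy = np.abs(np.diff(u, axis=0))  # solution variation in y
    refine_x = np.zeros_like(u, dtype=bool)
    refine_y = np.zeros_like(u, dtype=bool)
    # A large jump between neighbours flags both cells, but only in the
    # direction of the jump -- the essence of anisotropic adaptation.
    refine_x[:, :-1] |= dx > tol
    refine_x[:, 1:]  |= dx > tol
    refine_y[:-1, :] |= dy > tol
    refine_y[1:, :]  |= dy > tol
    return refine_x, refine_y
```

For a shock aligned with the y-axis, only `refine_x` fires, so cells are halved in x while keeping their y extent, which is where the computational savings over isotropic refinement come from.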
|
136 |
Efficient Verification of Bit-Level Pipelined Machines Using Refinement. Srinivasan, Sudarshan Kumar, 24 August 2007.
Functional verification is a critical problem facing the semiconductor industry: hardware designs are extremely complex and highly optimized, and even a single bug in deployed systems can cost more than $10 billion. We focus on the verification of pipelining, a key optimization that appears extensively in hardware systems such as microprocessors, multicore systems, and cache coherence protocols. Existing techniques for verifying pipelined machines either consume excessive amounts of time, effort, and resources, or are not applicable at the bit level, the level of abstraction at which commercial systems are designed and functionally verified.
We present a highly automated, efficient, compositional, and scalable refinement-based approach for the verification of bit-level pipelined machines. Our contributions include:
(1) A complete compositional reasoning framework based on refinement. Our notion of refinement guarantees that pipelined machines satisfy the same safety and liveness properties as their instruction set architectures. In addition, our compositional framework can be used to decompose correctness proofs into smaller, more manageable pieces, leading to drastic reductions in verification times and a high degree of scalability.
(2) The development of ACL2-SMT, a verification system that integrates the popular ACL2 theorem prover (winner of the 2005 ACM Software System Award) with decision procedures. ACL2-SMT allows us to seamlessly take advantage of the two main approaches to hardware verification: theorem proving and decision procedures.
(3) A proof methodology based on our compositional reasoning framework and ACL2-SMT that allows us to reduce the bit-level verification problem to a sequence of highly automated proof steps.
(4) A collection of general-purpose refinement maps, functions that relate pipelined machine states to instruction set architecture states. These refinement maps provide more flexibility and lead to increased verification efficiency.
The effectiveness of our approach is demonstrated by verifying various pipelined machine models, including a bit-level, Intel XScale inspired processor that implements 593 instructions and includes features such as branch prediction, precise exceptions, and predicated instruction execution.
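A refinement map, as described in contribution (4), projects a pipelined machine state onto an instruction set architecture state. The toy sketch below illustrates the flushing idea on a two-stage pipeline; every name here is an illustrative assumption, and real bit-level machines are far more involved than this.

```python
# Toy illustration of a refinement map: project a 2-stage pipelined
# machine state onto an ISA-level state by "flushing" the in-flight
# instruction and rolling the program counter back, so both machines
# agree on committed state. Illustrative only.

from dataclasses import dataclass
from typing import Optional

@dataclass
class PipelineState:
    pc: int
    regs: dict
    latch: Optional[int] = None  # instruction fetched but not yet executed

@dataclass
class ISAState:
    pc: int
    regs: dict

def refinement_map(p: PipelineState) -> ISAState:
    """Discard the partially executed instruction; point the ISA pc at it."""
    pc = p.pc - 1 if p.latch is not None else p.pc
    return ISAState(pc=pc, regs=dict(p.regs))
```

Proving refinement then amounts to showing that stepping the pipeline and applying this map commutes (up to stuttering) with stepping the ISA machine.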
|
137 |
The Production and Deformation Behaviour of Ultrafine-Grained AZ31 Mg Alloy. Lee, Wen-Tu, 31 August 2011.
Ultrafine-grained (UFG) AZ31 Mg alloy was obtained by equal-channel angular extrusion (ECAE) and subsequent annealing at elevated temperatures. The basal texture component of the ECAEed material is located on the Z plane of the ECAEed billets. Tensile tests were performed at temperatures between room temperature and 125 °C, at strain rates ranging from 3×10^-5 to 6×10^-2 s^-1. The experimental results showed that a high tensile yield stress of 394 MPa was obtained at room temperature under a strain rate of 3×10^-3 s^-1. The strength of the UFG AZ31 specimens was greatly improved by grain refinement. It was found that the strain rate sensitivity of the UFG AZ31 alloy increased significantly from 0.024 to 0.321 with increasing temperature. The constant k of the Hall-Petch equation, σ = σ0 + kd^(-1/2), decreased with increasing temperature and decreasing strain rate. Negative k values were obtained at 75 °C and 100 °C under a strain rate of 3×10^-5 s^-1.
When compressed along the X, Y, and X45Z billet orientations, strain localization within shear bands was found in the UFG AZ31 specimens. The shear bands form inclined at about 45° to the compression axis, and the smaller the grain size, the thinner the shear band. Different Hall-Petch constants k were found in specimens deformed along different orientations, which is caused by different deformation mechanisms: the formation of tension twins is the primary deformation mechanism for the compressed X and Y samples, while basal slip is responsible for the deformation of the X45Z sample. Tension twins were found in specimens with a grain size of 0.46 μm.
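The Hall-Petch relation quoted above, σ = σ0 + kd^(-1/2), can be illustrated numerically. The constants below (σ0 = 20 MPa, k = 250 MPa·μm^1/2) are round illustrative values of the right order for a Mg alloy, not the thesis's fitted data.

```python
def hall_petch(sigma_0, k, d):
    """Hall-Petch yield stress in MPa.

    sigma_0 : friction stress, MPa
    k       : Hall-Petch constant, MPa * um**0.5
    d       : grain size, um
    """
    return sigma_0 + k * d ** -0.5

# Illustrative constants (NOT the thesis's measured values):
SIGMA_0, K = 20.0, 250.0  # MPa, MPa*um^0.5

ufg = hall_petch(SIGMA_0, K, 0.46)     # ultrafine grain, roughly 389 MPa
coarse = hall_petch(SIGMA_0, K, 10.0)  # coarse grain, roughly 99 MPa
```

With these assumed numbers, refining the grain size from 10 μm to 0.46 μm roughly quadruples the predicted yield stress, consistent in spirit with the 394 MPa reported above; a negative k, as observed at elevated temperature and low strain rate, would instead make finer grains weaker.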
|
138 |
Selection and Fusion of Multiple Stereo Algorithms for Accurate Disparity Segmentation. Bilgin, Arda, 01 November 2008.
Fusion of multiple stereo algorithms is performed in order to obtain accurate disparity segmentation. A reliable disparity map of real-time stereo images is estimated, and disparity segmentation is performed for object detection purposes. First, stereo algorithms with high performance in real-time applications are chosen from the literature and three of them are implemented. Then, the results of these algorithms are fused to gain better performance in disparity estimation. In the fusion process, if a pixel has the same disparity value in all algorithms, that disparity value is assigned to the pixel; other pixels are labelled as unknown disparity. Unknown disparity values are then estimated by a refinement procedure that uses neighbourhood disparity information. Finally, the resultant disparity map is segmented using mean shift segmentation.
The proposed method is tested on three different stereo data sets and several real stereo pairs. The experimental results indicate an improvement in stereo analysis performance from the use of the fusion process and refinement procedure. Furthermore, disparity segmentation is realized successfully by using mean shift segmentation to detect objects at different depth levels.
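The consensus fusion rule described above (keep a pixel's disparity only when all algorithms agree, mark everything else unknown, then fill unknowns from neighbours) can be sketched in NumPy. The `UNKNOWN` marker and the neighbourhood-mode fill below are simplifying assumptions, not the thesis's exact refinement procedure.

```python
import numpy as np

UNKNOWN = -1  # marker for pixels with no consensus (illustrative choice)

def fuse(disparity_maps):
    """Pixel-wise consensus: keep a disparity only where all input maps
    agree; label everything else UNKNOWN."""
    stack = np.stack(disparity_maps)
    agree = np.all(stack == stack[0], axis=0)
    return np.where(agree, stack[0], UNKNOWN)

def refine(fused):
    """Fill UNKNOWN pixels with the most frequent known disparity among
    their 8-neighbours (a simple stand-in for the thesis's procedure)."""
    out = fused.copy()
    h, w = fused.shape
    for i in range(h):
        for j in range(w):
            if fused[i, j] == UNKNOWN:
                neigh = fused[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2].ravel()
                known = neigh[neigh != UNKNOWN]
                if known.size:
                    vals, counts = np.unique(known, return_counts=True)
                    out[i, j] = vals[np.argmax(counts)]
    return out
```

The fused-then-refined map would then be handed to a segmentation step (mean shift in the thesis) to group pixels into depth layers.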
|
139 |
Bimodal Automatic Speech Segmentation and Boundary Refinement Techniques. Akdemir, Eren, 01 March 2010.
Automatic segmentation of speech is compulsory for building large speech databases to be used in speech processing applications. This study proposes a bimodal automatic speech segmentation system that uses either articulator motion information (AMI) or visual information obtained by a camera in collaboration with auditory information. The presence of visual modality is shown to be very beneficial in speech recognition applications, improving the performance and noise robustness of those systems. In this dissertation a significant increase in the performance of the automatic speech segmentation system is achieved by using a bimodal approach.
Automatic speech segmentation systems have a tradeoff between precision and the resulting number of gross errors. Boundary refinement techniques are used in order to increase the precision of these systems without decreasing system performance. Two novel boundary refinement techniques are proposed in this thesis: a hidden Markov model (HMM) based fine-tuning system and an inverse-filtering-based fine-tuning system. The segment boundaries obtained by the bimodal speech segmentation system are improved further by using these techniques.
To fulfill these goals, a complete two-stage automatic speech segmentation system is produced and tested on two different databases. A phonetically rich Turkish audiovisual speech database, containing acoustic data and camera recordings of 1600 Turkish sentences uttered by a male speaker, is built from scratch for use in the experiments. The visual features of the recordings are extracted, and manual phonetic alignment of the database is performed to serve as ground truth for the performance tests of the automatic speech segmentation systems.
|
140 |
Development of an Incompressible, Laminar Flow Solver Based on the Least-Squares Spectral Element Method with P-type Adaptive Refinement Capabilities. Ozcelikkale, Altug, 01 June 2010.
The aim of this thesis is to develop a flow solver that can obtain an accurate numerical solution quickly and efficiently with minimum user intervention. In this study, a two-dimensional viscous, laminar, incompressible flow solver based on the Least-Squares Spectral Element Method (LSSEM) is developed. The LSSEM flow solver can work on hp-type non-conforming grids and can perform p-type adaptive refinement. Several benchmark problems are solved in order to validate the solver, and successful results are obtained. In particular, it is demonstrated that p-type adaptive refinement on hp-type non-conforming grids can be used to improve the quality of the solution. Moreover, it is found that the mass conservation performance of the LSSEM can be enhanced by using p-type adaptive refinement strategies while keeping computational costs reasonable.
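The p-type adaptive refinement strategy mentioned above can be sketched abstractly: elements whose error indicator exceeds a tolerance have their polynomial order raised, rather than being subdivided as in h-refinement. The function below is a hedged illustration; the actual LSSEM error indicators and order limits are not reproduced here.

```python
# Sketch of p-type adaptive refinement: per-element polynomial orders
# are incremented where an error indicator exceeds a tolerance.
# The tolerance, cap, and indicator are illustrative assumptions.

def p_refine(orders, error_indicators, tol, max_order=8):
    """Return updated per-element polynomial orders."""
    return [
        min(p + 1, max_order) if err > tol else p
        for p, err in zip(orders, error_indicators)
    ]
```

Iterating solve / estimate / `p_refine` until all indicators fall below the tolerance is the usual adaptive loop; keeping the order increments local is what keeps the cost growth modest compared to raising the order everywhere.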
|