301 |
Validation of UML conceptual schemas with OCL constraints and operations. Queralt Calafat, Anna (02 March 2009)
To ensure the quality of an information system, it is essential that the conceptual schema that represents the knowledge about its domain and the functions it has to perform is semantically correct.

The correctness of a conceptual schema can be seen from two different perspectives. On the one hand, from the point of view of its definition, determining the correctness of a conceptual schema consists in answering the question "Is the conceptual schema right?". This can be achieved by determining whether the schema fulfills certain properties, such as satisfiability, non-redundancy or operation executability. On the other hand, from the perspective of the requirements that the information system should satisfy, the conceptual schema must not only be right, it must also be the right one. To ensure this, the designer must be provided with some kind of help and guidance during the validation process, so that he is able to understand the exact meaning of the schema and see whether it corresponds to the requirements to be formalized.

In this thesis we provide an approach that improves on the results of previous proposals addressing the validation of a UML conceptual schema with its constraints and operations formalized in OCL. Our approach allows the conceptual schema to be validated both from the point of view of its definition and of its correspondence to the requirements. The validation is performed by means of a set of tests that are applied to the schema, including automatically generated tests and ad hoc tests defined by the designer. All the validation tests are formalized in such a way that they can be treated uniformly, regardless of the specific property they test.

Our approach can be applied either to a complete conceptual schema or only to its structural part. When only the structural part is validated, we provide a set of conditions that determine whether any validation test performed on the schema will terminate. For those cases in which these termination conditions are satisfied, we also provide a reasoning procedure that takes advantage of this fact and works more efficiently than in the general case. This approach allows the validation of very expressive schemas and ensures completeness and decidability at the same time.

To show the feasibility of our approach, we have implemented the complete validation process for the structural part of a conceptual schema. Additionally, for the validation of a conceptual schema with a behavioral part, the reasoning procedure has been implemented as an extension of an existing method.
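Two of the definition-level properties mentioned above, satisfiability and non-redundancy, can be pictured with a toy example. The sketch below is not the thesis's reasoning procedure, which works symbolically on UML/OCL schemas; it checks a hypothetical two-attribute mini-schema by brute-force enumeration over a bounded domain, and all names and value ranges are invented for illustration.

```python
# Toy illustration of two schema-definition checks: constraint satisfiability and
# non-redundancy. Constraints are Python predicates over a bounded search space of
# candidate instances; real OCL reasoning is symbolic, and guaranteeing termination
# is exactly what the thesis's conditions on the structural part address.
from itertools import product

# Hypothetical mini-schema: an Employee with an age and a salary attribute.
DOMAIN = {"age": range(0, 100), "salary": range(0, 100_000, 5_000)}

constraints = {
    "adult":      lambda e: e["age"] >= 18,
    "retirement": lambda e: e["age"] <= 65,
    "paid":       lambda e: e["salary"] > 0,
    "adult_dup":  lambda e: e["age"] >= 16,   # implied by "adult": redundant
}

def instances():
    keys = list(DOMAIN)
    for values in product(*(DOMAIN[k] for k in keys)):
        yield dict(zip(keys, values))

def satisfiable(cs):
    """True if some instance in the bounded domain satisfies every constraint."""
    return any(all(c(e) for c in cs.values()) for e in instances())

def redundant(name, cs):
    """True if dropping `name` never admits an instance that `name` would reject."""
    rest = [c for n, c in cs.items() if n != name]
    return all(cs[name](e) for e in instances() if all(c(e) for c in rest))

print("satisfiable:", satisfiable(constraints))                      # True
print("adult_dup redundant:", redundant("adult_dup", constraints))   # True
```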
|
302 |
Bluenose II: Towards Faster Design and Verification of Pipelined Circuits. Chan, Ca Bol (08 1900)
The huge demand for electronic devices has driven semiconductor companies to create better products in terms of area, speed, power, etc., and to deliver them to market faster. Delay to market can result in lost opportunities, and the length of the design cycle directly affects the time to market. However, inadequate time for design and verification can introduce bugs that cause further delays to market, and correcting an error after manufacturing is very expensive: a bug in an ASIC found after fabrication requires respinning the mask at a cost of several million dollars. Even as the pressure to reduce the length of the design cycle grows, the size and complexity of digital hardware circuits have increased, which puts even greater pressure on design and verification productivity. Pipelining is one optimization technique that has contributed to the increased complexity of hardware design. Pipelining increases throughput by overlapping the execution of instructions. It is a challenge to design and verify pipelines because the specification describes how instructions are executed in sequence, while multiple instructions may be executing in the pipeline at one time. The overlapping of instructions adds further complexity to the hardware in the form of hazards, which arise from resource conflicts, data dependencies, or speculation of parcels due to branch instructions.
To address these issues, we present PipeNet, a metamodel for describing hardware designs at a higher level of abstraction, and Bluenose II, a graphical tool for manipulating PipeNet models. PipeNet is based on the pipeline model of a formal pipeline verification framework. The pipeline model contains arbiters, flow-control state machines, datapath, and data-routing. The designer describes the pipeline design using PipeNet. From the PipeNet model, Bluenose II generates synthesizable VHDL code and a HOL verification script. Bluenose II's ability to generate HOL scripts turns the HOL theorem prover into Bluenose II's external verification environment; a direct connection to HOL is implemented in the form of a console that displays results from HOL directly in Bluenose II. The data structures that represent PipeNet are evaluated for their extensibility to accommodate future changes. Finally, a case study based on an implementation of a two-wide superscalar 32-bit RISC integer pipeline is conducted to examine the quality of the generated code and the entire design process in Bluenose II. The generation of VHDL code is improved over that provided in Bluenose I, Bluenose II's predecessor.
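A toy sketch of the hazard problem described above: without forwarding, a read-after-write dependency forces later instructions to stall until the producing instruction writes back. This is a generic Python illustration, not PipeNet or Bluenose II; the instruction format and the simplified timing model are invented for illustration.

```python
# Minimal in-order pipeline timing model that stalls on read-after-write (RAW)
# dependencies. Each instruction is (destination register, list of source registers).
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def simulate(program):
    """Return the write-back cycle of each instruction (no forwarding)."""
    writeback = {}            # register -> cycle its value is written back
    issue = 0                 # earliest cycle the next instruction may occupy ID
    cycles = []
    for dest, srcs in program:
        # Stall in ID until every source register has been written back.
        ready = max([writeback.get(r, 0) for r in srcs], default=0)
        start = max(issue, ready)
        wb = start + STAGES.index("WB") - STAGES.index("ID")   # reaches WB 3 cycles after ID
        writeback[dest] = wb
        cycles.append(wb)
        issue = start + 1     # in-order issue: next instruction enters ID one cycle later
    return cycles

# r1 = ...; r2 = f(r1); r3 = f(r2): each RAW dependency adds stall cycles.
print(simulate([("r1", []), ("r2", ["r1"]), ("r3", ["r2"])]))   # [3, 6, 9]
print(simulate([("r1", []), ("r2", []),     ("r3", [])]))       # [3, 4, 5] without hazards
```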
|
303 |
Modeling in Modelica and SysML of System Engineering at Scania Applied to Fuel Level Display. Liang, Feng (January 2012)
The main objective of this thesis is to introduce a four-perspective structure that provides one solution for traceability and dependability in the system design phase. The traceability between different perspectives helps engineers form a clear picture of the whole system before moving to the real implementation. The Fuel Level Display system from Scania trucks is used as a case study to offer insight into the approach. The four-perspective structure is first constructed in order to analyze traceability between the different viewpoints. After the Fuel Level Display system is implemented in Modelica, a verification scenario is specified to perform a complete verification of the system design against its requirements.
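A rough illustration of the traceability idea described above: each requirement should trace to elements in every downstream perspective, and a gap signals an unallocated requirement. The perspective names, requirement IDs and element names below are invented for illustration and are not taken from the Scania Fuel Level Display case study.

```python
# Toy traceability check across hypothetical design perspectives.
PERSPECTIVES = ["requirement", "functional", "logical", "technical"]

# trace[req_id][perspective] -> model elements that realise the requirement there
trace = {
    "REQ-1 fuel level shown in cab": {
        "functional": ["EstimateFuelVolume"],
        "logical":    ["FuelSensor", "LevelEstimator", "DisplayGauge"],
        "technical":  ["fuel_level_display.mo"],   # hypothetical Modelica model file
    },
    "REQ-2 low-fuel warning": {
        "functional": ["RaiseLowFuelWarning"],
        "logical":    [],                           # gap: not yet allocated
        "technical":  [],
    },
}

def untraced(trace):
    """Requirements missing a realisation in at least one downstream perspective."""
    return {req: [p for p in PERSPECTIVES[1:] if not links.get(p)]
            for req, links in trace.items()
            if any(not links.get(p) for p in PERSPECTIVES[1:])}

print(untraced(trace))   # {'REQ-2 low-fuel warning': ['logical', 'technical']}
```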
|
305 |
Automatic Datapath Abstraction Of Pipelined Circuits. Vlad, Ciubotariu (18 February 2011)
Pipelined circuits operate as an assembly line that starts processing new instructions while older ones continue execution. Control properties specify the correct behaviour of the pipeline with respect to how it handles the concurrency between instructions. Control properties stand out as one of the most challenging aspects of pipelined circuit verification. Their verification depends on the datapath and memories, which in practice account for the largest part of the state space of the circuit. To alleviate the state explosion problem, abstraction of memories and datapath becomes mandatory. This thesis provides a methodology for an efficient abstraction of the datapath under all possible control-visible behaviours. For verification of control properties, the abstracted datapath is then substituted in place of the original one and the control circuitry is left unchanged. With respect to control properties, the abstraction is shown to be conservative by both language containment and simulation.

For verification of control properties, the pipeline datapath is represented by a network of registers, unrestricted combinational datapath blocks and muxes. The values flowing through the datapath are called parcels. The control is the state machine that steers the parcels through the network. As parcels travel through the pipeline, they undergo transformations through the datapath blocks. The control-visible results of these transformations fan out into control variables which in turn influence the next stage the parcels are transferred to by the control. The semantics of the datapath is formalized as a labelled transition system called a parcel automaton. Parcel automata capture the set of all control-visible paths through the pipeline and are derived without the need for reachability analysis of the original pipeline. Datapath abstraction is defined using familiar concepts such as language containment or simulation, and we have proved results showing that datapath abstraction leads to pipeline abstraction.

Our approach has been incorporated into a practical algorithm that directly yields the abstract parcel automaton, bypassing the construction of the concrete parcel automaton. The algorithm uses a SAT solver to incrementally generate all possible control-visible behaviours of the pipeline datapath. Our largest case study is a 32-bit two-wide superscalar OpenRISC microprocessor written in VHDL, where the abstraction reduced the size of the implementation from 35k gates to 2k gates in less than 10 minutes while using less than 52 MB of memory.
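The incremental enumeration mentioned above can be pictured as a blocking-clause loop: find a datapath valuation, project it onto the control-visible outputs, block that projection, and repeat until no new behaviour exists. The sketch below replaces the SAT solver with brute-force search over a hypothetical 4-bit comparator datapath; the variables and the datapath itself are invented for illustration and are not taken from the thesis.

```python
# Toy stand-in for SAT-based enumeration of control-visible behaviours.
from itertools import product

DATA_VARS = ["a0", "a1", "b0", "b1"]          # hidden datapath bits (hypothetical)
CTRL_VARS = ["eq", "gt"]                       # control-visible outputs

def datapath(bits):
    """Combinational block: compares two 2-bit parcels a and b."""
    a = bits["a0"] + 2 * bits["a1"]
    b = bits["b0"] + 2 * bits["b1"]
    return {"eq": int(a == b), "gt": int(a > b)}

def enumerate_control_visible():
    blocked = set()                            # blocking "clauses" over CTRL_VARS
    for values in product([0, 1], repeat=len(DATA_VARS)):   # stand-in for SAT calls
        bits = dict(zip(DATA_VARS, values))
        ctrl = tuple(sorted(datapath(bits).items()))
        if ctrl not in blocked:                # a new control-visible behaviour was found
            blocked.add(ctrl)
            yield dict(ctrl)

# Three behaviours are reachable: a == b, a > b, and neither (a < b).
for behaviour in enumerate_control_visible():
    print(behaviour)
```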
|
306 |
Tags: Augmenting Microkernel Messages with Lightweight Metadata. Saif Ur Rehman, Ahmad (January 2012)
In this work, we propose Tags, an efficient mechanism that augments microkernel interprocess messages with lightweight metadata to enable the development of new, system-wide functionality without requiring the modification of application source code. The technology is therefore well suited to systems with a large legacy code base and to third-party applications such as phone and tablet applications.

As examples, we detail use cases in two areas: mandatory security and runtime verification of process interactions. In the area of mandatory security, we use tagging to assess the feasibility of implementing a mandatory integrity propagation model in the microkernel. The process-interaction verification use case shows the utility of tagging to track and verify interaction history among system components.

To demonstrate that tagging is technically feasible and practical, we implemented it in a commercial microkernel and executed multiple sets of standard benchmarks on two different computing architectures. The results clearly demonstrate that tagging has only negligible overhead and strong potential for many applications.
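As a rough illustration of the mandatory integrity propagation use case mentioned above, the sketch below attaches an integrity tag to each message and degrades the receiver on receipt of lower-integrity data (a Biba-style low-water-mark rule), while also recording interaction history. This is a plain-Python illustration; the process names, levels and rule are invented and do not reproduce the tag semantics or the microkernel IPC path used in the thesis.

```python
# Toy tag propagation over message sends: integrity low-water-mark plus history log.
from dataclasses import dataclass, field

LEVELS = {"low": 0, "high": 1}

@dataclass
class Process:
    name: str
    integrity: str = "high"
    history: list = field(default_factory=list)   # interaction history for verification

@dataclass
class Message:
    payload: bytes
    tag: str                                       # metadata carried with the IPC message

def send(sender: Process, receiver: Process, payload: bytes) -> None:
    msg = Message(payload, tag=sender.integrity)   # kernel attaches the sender's tag
    # Low-water-mark rule: receiving lower-integrity data degrades the receiver.
    if LEVELS[msg.tag] < LEVELS[receiver.integrity]:
        receiver.integrity = msg.tag
    receiver.history.append((sender.name, msg.tag))

browser = Process("browser", integrity="low")
updater = Process("updater", integrity="high")
send(browser, updater, b"new firmware image")
print(updater.integrity, updater.history)          # low [('browser', 'low')]
```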
|
307 |
Modelling the impact of total stress changes on groundwater flow. Dissanayake, Nalinda (29 April 2008)
The research study involved using the modified FEMWATER code to investigate the impact of total stress changes on groundwater flow in the vicinity of a salt tailings pile. Total stress and pore-pressure data observed at the Lanigan and Rocanville potash-mine sites were used to assist the development of a generic FEMWATER model. The original 3-D mesh considered for the model study covers a region of 7.6 km x 7.6 km x 60 m. The simulated pile itself covers a surface area of 1.6 km x 1.6 km within the region. Symmetry of the idealized system allowed half of the system to be modelled, reducing the size of the mesh. The model was layered so that different materials could represent different hydrostratigraphic scenarios. The GMS release of the FEMWATER code (version 2.1) was modified to simulate the pore-pressure response to total stress changes caused by tailings-pile loading at the ground surface. The modified code was verified before being applied to the present study.

Long-term pore pressure generation and dissipation due to pile construction was investigated for eleven hydrostratigraphic scenarios consisting of plastic clays, stiff till and dense sand layers commonly found in Saskatchewan potash mining regions. The model was run for two distinct pile loading patterns. Model results indicated that the loading pattern has a significant influence on pore pressure generation beneath the pile. The model was initially run for a 30-year pile construction period and later simulated for 15-, 25- and 35-year construction periods to investigate the impact of loading rate. These results showed that, as expected, the peak pore water pressure head is proportional to the pile construction rate. A sensitivity analysis, carried out by changing the hydraulic conductivity of the stiff till, revealed that the lower the hydraulic conductivity, the greater the pore pressure generation beneath the pile.

Overall, the research study helped to understand and predict the influence of pile construction and hydrostratigraphy on pore-pressure changes beneath salt tailings piles. Low K/Ss or cv materials (compressible tills) demonstrate a slow dissipation rate and high excess pressures. Compared to dense sand, which has a very high K/Ss, till has a very low K/Ss, which results in high excess pore pressure generation. Sand layers act as drains, rapidly dissipating pore pressures. Thicker low-K/Ss units result in slower dissipation and higher pressures: as the thickness of the low-K/Ss layer increases, the peak pressures increase because the drainage path lengthens. Thin plastic clay layers give rise to the highest pressures.

The model study showed that hydrostratigraphic scenarios similar to those found at Saskatchewan potash mine sites can generate the high pore pressures observed in the vicinity of salt tailings piles as a result of pile loading. Peak pressures are very sensitive to pile construction rates, loading patterns and the hydrostratigraphy of the region. Peak pressures can reach levels that would be of concern for pile stability in the presence of adverse geological conditions.
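A minimal worked illustration of the K/Ss dependence described above: for a doubly drained layer, Terzaghi's 1-D consolidation solution gives the excess pore pressure remaining after a given time, with a dissipation coefficient proportional to K/Ss. The material properties and layer thickness below are illustrative values only, not the thesis's FEMWATER scenarios, and the 1-D closed form is a stand-in for the 3-D finite-element model.

```python
import math

def remaining_pressure_ratio(K, Ss, thickness_m, years, terms=200):
    """Fraction of the initial excess pore pressure still present at mid-depth of a
    doubly drained layer, from Terzaghi's 1-D consolidation series solution."""
    cv = K / Ss                                   # m^2/s; dissipation speed scales with K/Ss
    H = thickness_m / 2.0                         # drainage path (drained top and bottom)
    Tv = cv * years * 365.25 * 86400.0 / H ** 2   # dimensionless time factor
    total = 0.0
    for m in range(terms):
        M = math.pi * (2 * m + 1) / 2.0
        total += (2.0 / M) * (-1) ** m * math.exp(-M * M * Tv)
    return total

# Illustrative properties (not from the thesis): a plastic clay retains most of the
# pile-induced pressure after 30 years, while a dense sand drains almost instantly.
print("plastic clay:", remaining_pressure_ratio(K=5e-11, Ss=5e-3, thickness_m=20, years=30))
print("dense sand:  ", remaining_pressure_ratio(K=1e-5,  Ss=1e-5, thickness_m=20, years=30))
```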
|
308 |
Verification-Aware Processor Design. Lungu, Anita (January 2009)
As technological advances enable computers to permeate many of our society's critical application domains (such as medicine, finance, and transportation), the requirement for computers to always behave correctly becomes critical as well. Currently, ensuring that processor designs are correct represents a major challenge for the computing industry, consuming the majority (up to 70%) of the resources allocated for the creation of a new processor. Looking towards the future, we see that with each new processor generation even more transistors fit on the same chip area and more complex designs become possible, which makes it unlikely that the difficulty of the design verification problem will decrease by itself.

We believe that the difficulty of the design verification problem is compounded by the current processor design flow. In most design cycles, a design's verifiability is not explicitly considered at an early stage - when decisions are most influential - because the initial focus is exclusively on improving the design on more traditional metrics like performance, power, and area. It is thus possible for the resulting design to be very difficult to verify in the end, specifically because its verifiability was not ranked high on the priority list in the beginning.

In this thesis we propose to view verifiability as a critical design constraint to be considered, together with other established metrics like performance and power, from the initial stages of design. Our high-level goal is for this approach to make designs more verifiable, which would both decrease the resources invested in the verification step and lead to more robust designs.

More specifically, we make five main contributions in this thesis. The first is our proposal for a change in design perspective towards considering verifiability as a first-class constraint. Second, we use formal verification (through a combination of theorem proving, model checking, and probabilistic model checking) to quantitatively evaluate the impact on verifiability of various design choices, such as the organization of caches, TLBs, the pipeline, the operand bypass network, and dynamic power management mechanisms. Our third contribution is to evaluate design trade-offs between verifiability and other established metrics, like performance and power, in the context of multi-core dynamic power management schemes. Fourth, we re-design several components to increase their verifiability. Finally, we propose design guidelines for increasing verifiability. In the context of single-core processors our guidelines refer to the organization of caches and translation lookaside buffers (TLBs), the depth of the core's pipeline, and the type of ALUs used, while for multi-core processors we refer to dynamic power management schemes (DPMs) for power capping.

Our results confirm that making design choices with verifiability as a first-class design constraint has the capacity to decrease the verification effort. Furthermore, making explicit trade-offs between verifiability, performance, and power helps identify better design points for given verification, performance, and power goals.
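One way to picture the quantitative side of this evaluation is to compare the size of the state space a model checker would have to explore under two design alternatives. The sketch below is a toy stand-in, not the thesis's methodology or metrics: it counts the reachable states of a hypothetical power-capping controller as the number of cores grows, using the state count as a crude proxy for verification effort.

```python
# Toy reachability analysis: cores toggle between idle (0) and active (1), and the
# power-capping controller blocks any transition that would exceed the active cap.
from collections import deque

def reachable_states(n_cores, cap):
    start = (0,) * n_cores                      # all cores idle
    seen, frontier = {start}, deque([start])
    while frontier:
        state = frontier.popleft()
        for core in range(n_cores):             # one core toggles per transition
            nxt = list(state)
            nxt[core] ^= 1
            nxt = tuple(nxt)
            if sum(nxt) <= cap and nxt not in seen:   # controller enforces the cap
                seen.add(nxt)
                frontier.append(nxt)
    return len(seen)

for n in (4, 8, 12):
    print(n, "cores, cap", n // 2, "->", reachable_states(n, n // 2), "states")
```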
|
309 |
Design and verification of an ARM10-like Processor and its System Integration. Lin, Chun-Shou (07 February 2012)
With advances in process technology, more IP blocks can be designed into a chip of the same area, and embedded systems have become correspondingly more powerful in their applications. A more capable processor core is therefore needed to support the whole embedded system in a complex system environment. The main purpose of this work is to improve the computation speed, memory management and debugging support of SYS32TME III, an ARM10-like processor designed by our lab. We integrate the cache/MMU and EICE (embedded in-circuit emulator) into the embedded processor core. With the cache/MMU, we not only reduce the time spent accessing external memory but also provide virtual addressing for the operating system. To preserve the correctness of the system and shorten the system integration time, we use five functional tests (cache off; cache on and MMU off with cache hit/miss; and cache on and MMU on with cache hit, cache miss with TLB hit, and cache miss with TLB miss) to verify the cache/MMU, and six coprocessor instructions (LDC, MCR, MCRR, MRC, MRRC, STC) to verify the EICE. After that, we run regression tests on the integrated microprocessor, cache/MMU and EICE system. Finally, we tuned the performance of the integrated cache/MMU and EICE so that the ARM10-like processor can run at 200 MHz in a 0.18 μm process.
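The five functional cases listed above can be pictured as the distinct paths a memory access can take through the cache and MMU. The sketch below is a plain-Python classifier, not the SYS32TME III RTL or its testbench; the cache and TLB are reduced to sets of resident tags, the lookup order is simplified, and all addresses are invented for illustration.

```python
# Toy classifier for the access paths exercised by the five cache/MMU functional tests.
def classify(vaddr, cache_on, mmu_on, cache_tags, tlb_tags, page_bits=12, line_bits=5):
    if not cache_on:
        return "cache off"
    page, line = vaddr >> page_bits, vaddr >> line_bits
    if not mmu_on:
        return "cache on, MMU off, " + ("cache hit" if line in cache_tags else "cache miss")
    if line in cache_tags:                       # simplified lookup order, for illustration
        return "cache on, MMU on, cache hit"
    if page in tlb_tags:
        return "cache on, MMU on, cache miss, TLB hit"
    return "cache on, MMU on, cache miss, TLB miss"

cache_tags = {0x1000 >> 5}                       # one resident cache line (hypothetical)
tlb_tags = {0x1000 >> 12, 0x2000 >> 12}          # two resident translations (hypothetical)
for addr in (0x1000, 0x2000, 0x800000):
    print(hex(addr), "->", classify(addr, cache_on=True, mmu_on=True,
                                    cache_tags=cache_tags, tlb_tags=tlb_tags))
```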
|
310 |
Software Engineering Process Improvement. Sezer, Bulent (01 April 2007)
This thesis presents a software engineering process improvement study. The literature on software process improvement is reviewed. Then the current design verification process at one of the Software Engineering Departments of the X Company, Ankara, Türkiye (SED) is analyzed. Static software development process metrics have been calculated for the SED based on a recently proposed approach, and some improvement suggestions have been made based on the metric values calculated according to the proposals of that study. In addition, the author's improvement suggestions have been discussed with the senior staff at the department, and a final version of the improvements has been gathered. A discussion comparing these two approaches is then made. Finally, a new software design verification process model has been proposed. Some of the suggestions have already been applied and preliminary results have been obtained.
|