  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
301

Bluenose II: Towards Faster Design and Verification of Pipelined Circuits

Chan, Ca Bol 08 1900 (has links)
The huge demand for electronic devices has driven semiconductor companies to create better products in terms of area, speed, power, etc., and to deliver them to market faster. Delay to market can result in lost opportunities, and the length of the design cycle directly affects time to market. However, inadequate time for design and verification can introduce bugs that cause further delays, and correcting an error after manufacturing is very expensive: a bug in an ASIC found after fabrication requires respinning the mask at a cost of several million dollars. Even as the pressure to shorten design cycles grows, the size and complexity of digital hardware circuits have increased, which puts even greater pressure on design and verification productivity. Pipelining is one optimization technique that has contributed to the increased complexity of hardware design. Pipelining increases throughput by overlapping the execution of instructions. Pipelines are challenging to design and verify because the specification describes how instructions execute in sequence, while multiple instructions can be in flight in the pipeline at one time. The overlapping of instructions adds further complexity to the hardware in the form of hazards, which arise from resource conflicts, data dependencies, or speculation of parcels due to branch instructions. To address these issues, we present PipeNet, a metamodel for describing hardware designs at a higher level of abstraction, and Bluenose II, a graphical tool for manipulating a PipeNet model. PipeNet is based on a pipeline model in a formal pipeline verification framework; the pipeline model contains arbiters, flow-control state machines, datapath and data-routing. The designer describes the pipeline design using PipeNet. From the PipeNet model, Bluenose II generates synthesizable VHDL code and a HOL verification script. Bluenose II's ability to generate HOL scripts turns the HOL theorem prover into Bluenose II's external verification environment; a direct connection to HOL is implemented in the form of a console that displays results from HOL directly in Bluenose II. The data structures that represent PipeNet are evaluated for their extensibility to accommodate future changes. Finally, a case study based on an implementation of a two-wide superscalar 32-bit RISC integer pipeline is conducted to examine the quality of the generated code and the entire design process in Bluenose II. The generation of VHDL code is improved over that provided in Bluenose I, Bluenose II's predecessor.
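As a rough, hypothetical illustration of the kind of flow described above (not taken from the thesis), the sketch below models a pipeline as a list of stages with input and output ports and emits skeleton VHDL entities from it; all names, fields, and the toy emitter are assumptions, since the actual PipeNet structure and Bluenose II's generator are far richer.

```python
# Hypothetical sketch of a PipeNet-style description: stages with ports, plus a
# trivial VHDL entity emitter. Illustrative only; not the real PipeNet metamodel.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Stage:
    name: str
    inputs: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)

@dataclass
class Pipeline:
    name: str
    stages: List[Stage] = field(default_factory=list)

    def emit_vhdl_entities(self) -> str:
        """Emit a skeleton VHDL entity per stage (ports only, no logic)."""
        chunks = []
        for s in self.stages:
            ports = [f"    {p} : in  std_logic_vector(31 downto 0);" for p in s.inputs]
            ports += [f"    {p} : out std_logic_vector(31 downto 0);" for p in s.outputs]
            port_list = "\n".join(ports).rstrip(";")
            chunks.append(f"entity {s.name} is\n  port (\n{port_list}\n  );\nend {s.name};")
        return "\n\n".join(chunks)

pipe = Pipeline("risc_integer", [
    Stage("fetch", inputs=["pc"], outputs=["instr"]),
    Stage("decode", inputs=["instr"], outputs=["op_a", "op_b"]),
    Stage("execute", inputs=["op_a", "op_b"], outputs=["result"]),
])
print(pipe.emit_vhdl_entities())
```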
302

Modeling in Modelica and SysML of System Engineering at Scania Applied to Fuel Level Display

Liang, Feng January 2012 (has links)
The main objective of this thesis is to introduce a four-perspective structure that provides a solution for traceability and dependability in the system design phase. Traceability between the perspectives helps engineers form a clear picture of the whole system before moving to the real implementation. The Fuel Level Display system from Scania trucks is used as a case study to offer insight into the approach. The four-perspective structure is constructed first in order to analyze traceability between the different viewpoints. After the Fuel Level Display system is implemented in Modelica, a verification scenario is specified to carry out a complete verification of the system design against its requirements.
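As a loose illustration of the traceability idea (not Scania's actual model), the sketch below links one invented requirement to design elements and a verification scenario and reports which artefacts cover it; every identifier here is hypothetical.

```python
# Illustrative sketch only: a minimal traceability table across perspectives.
# All requirement, component, and scenario names are invented for this example.
requirements = {
    "REQ-1": "Displayed fuel level shall deviate less than 5% from actual volume",
}
design_elements = {          # design element -> requirements it realizes
    "FuelSensor": ["REQ-1"],
    "LevelEstimator": ["REQ-1"],
}
verification_scenarios = {   # scenario -> requirements it verifies
    "SlopeDrivingScenario": ["REQ-1"],
}

def trace(req_id: str):
    """Return which design elements and verification scenarios cover a requirement."""
    covering_design = [d for d, reqs in design_elements.items() if req_id in reqs]
    covering_tests = [v for v, reqs in verification_scenarios.items() if req_id in reqs]
    return covering_design, covering_tests

print(trace("REQ-1"))
```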
304

Automatic Datapath Abstraction Of Pipelined Circuits

Vlad, Ciubotariu 18 February 2011 (has links)
Pipelined circuits operate as an assembly line that starts processing new instructions while older ones continue execution. Control properties specify the correct behaviour of the pipeline with respect to how it handles the concurrency between instructions. Control properties stand out as one of the most challenging aspects of pipelined circuit verification. Their verification depends on the datapath and memories, which in practice account for the largest part of the state space of the circuit. To alleviate the state explosion problem, abstraction of memories and datapath becomes mandatory. This thesis provides a methodology for an efficient abstraction of the datapath under all possible control-visible behaviours. For verification of control properties, the abstracted datapath is then substituted in place of the original one and the control circuitry is left unchanged. With respect to control properties, the abstraction is shown conservative by both language containment and simulation. For verification of control properties, the pipeline datapath is represented by a network of registers, unrestricted combinational datapath blocks and muxes. The values flowing through the datapath are called parcels. The control is the state machine that steers the parcels through the network. As parcels travel through the pipeline, they undergo transformations through the datapath blocks. The control-visible results of these transformations fan out into control variables which in turn influence the next stage the parcels are transferred to by the control. The semantics of the datapath is formalized as a labelled transition system called a parcel automaton. Parcel automata capture the set of all control-visible paths through the pipeline and are derived without the need of reachability analysis of the original pipeline. Datapath abstraction is defined using familiar concepts such as language containment or simulation. We have proved results that show that datapath abstraction leads to pipeline abstraction. Our approach has been incorporated into a practical algorithm that yields directly the abstract parcel automaton, bypassing the construction of the concrete parcel automaton. The algorithm uses a SAT solver to generate incrementally all possible control-visible behaviours of the pipeline datapath. Our largest case study is a 32-bit two-wide superscalar OpenRISC microprocessor written in VHDL, where it reduced the size of the implementation from 35k gates to 2k gates in less than 10 minutes while using less than 52MB of memory.
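The "incrementally generate all possible control-visible behaviours" step can be pictured as all-solutions SAT enumeration with blocking clauses. The toy sketch below shows that loop, with a brute-force solver standing in for a real SAT solver; it illustrates the general technique, not the thesis algorithm.

```python
# Toy all-solutions enumeration with blocking clauses (illustrative only).
# A clause is a list of ints over variables 1..n; negative literal = negation.
from itertools import product

def solve(clauses, n_vars):
    """Return one satisfying assignment of CNF `clauses`, or None if unsatisfiable."""
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return assign
    return None

def all_control_visible_behaviours(clauses, n_vars):
    behaviours = []
    while True:
        model = solve(clauses, n_vars)
        if model is None:
            return behaviours
        behaviours.append(model)
        # Blocking clause: forbid exactly this assignment so the next call finds a new one.
        clauses = clauses + [[-v if val else v for v, val in model.items()]]

# Example: two control-visible bits that may not both be low.
print(all_control_visible_behaviours([[1, 2]], 2))
```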
305

Tags: Augmenting Microkernel Messages with Lightweight Metadata

Saif Ur Rehman, Ahmad January 2012 (has links)
In this work, we propose Tags, an efficient mechanism that augments microkernel interprocess messages with lightweight metadata to enable the development of new, system-wide functionality without requiring modification of application source code. The technology is therefore well suited to systems with a large legacy code base and to third-party applications such as phone and tablet applications. As examples, we detail use cases in the areas of mandatory security and runtime verification of process interactions. In the area of mandatory security, we use tagging to assess the feasibility of implementing a mandatory integrity propagation model in the microkernel. The process-interaction verification use case shows the utility of tagging for tracking and verifying interaction history among system components. To demonstrate that tagging is technically feasible and practical, we implemented it in a commercial microkernel and executed multiple sets of standard benchmarks on two different computing architectures. The results clearly demonstrate that tagging has only negligible overhead and strong potential for many applications.
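As a schematic illustration (not the actual kernel code), the sketch below attaches an integrity tag to a message and lowers the receiver's integrity to the minimum of the two levels, in the spirit of a low-water-mark integrity propagation model; the process names and the two-level lattice are invented for the example.

```python
# Illustrative sketch only: an IPC message carrying a lightweight integrity tag,
# with low-water-mark style propagation on receive. Not the kernel implementation.
from dataclasses import dataclass

LOW, HIGH = 0, 1

@dataclass
class Message:
    payload: bytes
    integrity_tag: int        # metadata carried alongside the payload

@dataclass
class Process:
    name: str
    integrity: int = HIGH

    def send(self, payload: bytes) -> Message:
        # The kernel would attach the tag transparently; applications stay unmodified.
        return Message(payload, self.integrity)

    def receive(self, msg: Message) -> None:
        # Receiving lower-integrity data lowers the receiver's integrity level.
        self.integrity = min(self.integrity, msg.integrity_tag)

untrusted = Process("downloader", integrity=LOW)
viewer = Process("viewer", integrity=HIGH)
viewer.receive(untrusted.send(b"data"))
print(viewer.integrity)   # 0 -> the viewer is now tainted by low-integrity input
```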
306

Modelling the impact of total stress changes on groundwater flow

Dissanayake, Nalinda 29 April 2008 (has links)
The research study involved using the modified FEMWATER code to investigate the impact of total stress changes on groundwater flow in the vicinity of a salt tailings pile. Total stress and pore-pressure data observed at the Lanigan and Rocanville potash-mine sites were used to assist the development of a generic FEMWATER model. The original 3-D mesh considered for the model study covers a region of 7.6 km x 7.6 km x 60 m; the simulated pile itself covers a surface area of 1.6 km x 1.6 km within this region. Symmetry of the idealized system allowed half of the system to be modelled to reduce the size of the mesh. The model was layered to facilitate different materials representing different hydrostratigraphic scenarios. The GMS release of the FEMWATER code (version 2.1) was modified so that the pore-pressure response to total stress changes caused by tailings-pile loading at the ground surface could be modelled. The modified code was verified before being applied to the present study.

Long-term pore-pressure generation and dissipation due to pile construction was investigated for eleven hydrostratigraphic scenarios consisting of plastic clays, stiff till, and dense sand layers commonly found in Saskatchewan potash mining regions. The model was run for two distinct pile-loading patterns. Model results indicated that the loading pattern has a significant influence on pore-pressure generation beneath the pile. The model was initially run for a 30-year pile construction period and later simulated for 15-, 25- and 35-year construction periods to investigate the impact of loading rate. These results showed that, as expected, the peak pore-water pressure head is proportional to the pile construction rate. A sensitivity analysis, carried out by changing the hydraulic conductivity of the stiff till, revealed that the lower the hydraulic conductivity, the greater the pore-pressure generation beneath the pile.

Overall, the research study helped to understand and predict the influence of pile construction and hydrostratigraphy on pore-pressure changes beneath salt tailings piles. Low K/Ss (or cv) materials such as compressible tills show a slow dissipation rate and high excess pressures. Compared to dense sand, which has very high K/Ss, till has very low K/Ss, which results in high excess pore-pressure generation. Sand layers act as drains, rapidly dissipating pore pressures. Thicker low-K/Ss units result in slower dissipation and higher pressures: as the thickness of the low-K/Ss layer increases, the peak pressures increase because the drainage path lengthens. Thin plastic clay layers give rise to the highest pressures.

The model study showed that hydrostratigraphic scenarios similar to those found at Saskatchewan potash mine sites can generate the high pore pressures observed in the vicinity of salt tailings piles as a result of pile loading. Peak pressures are very sensitive to pile construction rates, loading patterns, and the hydrostratigraphy of the region. Peak pressures can reach levels that would be of concern for pile stability in the presence of adverse geological conditions.
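A much-simplified picture of the coupled generation and dissipation behaviour described above is the 1-D equation du/dt = cv d2u/dz2 + d(sigma)/dt, where the loading term adds excess pore pressure and vertical drainage removes it. The sketch below solves it with an explicit finite-difference scheme; it is not the modified FEMWATER model, and all parameter values are illustrative.

```python
# Minimal 1-D sketch (not FEMWATER): excess pore pressure u(z,t) generated by a
# ramp increase in total stress and dissipated by drainage. Values are illustrative.
import numpy as np

cv = 1e-7                    # consolidation coefficient, m^2/s (illustrative)
H, nz = 20.0, 41             # layer thickness (m), grid points
dz = H / (nz - 1)
years = 30 * 365.25 * 86400.0
load_rate = 300e3 / years    # 300 kPa of total stress added over a 30-year build, Pa/s

dt = 0.4 * dz**2 / cv        # satisfies the explicit stability limit dt <= dz^2/(2 cv)
u = np.zeros(nz)             # excess pore pressure, Pa

t = 0.0
while t < years:
    lap = np.zeros(nz)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dz**2
    lap[-1] = 2 * (u[-2] - u[-1]) / dz**2      # no-flow (impermeable) base
    u += dt * (cv * lap + load_rate)           # undrained generation + drainage
    u[0] = 0.0                                 # drained boundary at the top (e.g., sand)
    t += dt

print(f"peak excess pore pressure ~ {u.max()/1e3:.0f} kPa")
```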
307

Verification-Aware Processor Design

Lungu, Anita January 2009 (has links)
As technological advances enable computers to permeate many of our society's critical application domains (such as medicine, finance, transportation), the requirement for computers to always behave correctly becomes critical as well. Currently, ensuring that processor designs are correct represents a major challenge for the computing industry, consuming the majority (up to 70%) of the resources allocated for the creation of a new processor. Looking towards the future, we see that with each new processor generation even more transistors fit on the same chip area and more complex designs become possible, which makes it unlikely that the difficulty of the design verification problem will decrease by itself.

We believe that the difficulty of the design verification problem is compounded by the current processor design flow. In most design cycles, a design's verifiability is not explicitly considered at an early stage - when decisions are most influential - because the initial focus is exclusively on improving the design on more traditional metrics like performance, power, and area. It is thus possible for the resulting design to be very difficult to verify in the end, specifically because its verifiability was not ranked high on the priority list in the beginning.

In this thesis we propose to view verifiability as a critical design constraint to be considered, together with other established metrics like performance and power, from the initial stages of design. Our high-level goal is for this approach to make designs more verifiable, which would both decrease the resources invested in the verification step and lead to more robust designs.

More specifically, we make five main contributions in this thesis. The first is our proposal for a change in design perspective towards considering verifiability as a first-class constraint. Second, we use formal verification (through a combination of theorem proving, model checking, and probabilistic model checking) to quantitatively evaluate the impact on verifiability of various design choices like the organization of caches, TLBs, pipeline, operand bypass network, and dynamic power management mechanisms. Our third contribution is to evaluate design trade-offs between verifiability and other established metrics, like performance and power, in the context of multi-core dynamic power management schemes. Fourth, we re-design several components to increase their verifiability. Finally, we propose design guidelines for increasing verifiability. In the context of single-core processors our guidelines refer to the organization of caches and translation lookaside buffers (TLBs), the depth of the core's pipeline, and the type of ALUs used, while for multi-core processors we refer to dynamic power management schemes (DPMs) for power capping.

Our results confirm that making design choices with verifiability as a first-class design constraint has the capacity to decrease the verification effort. Furthermore, making explicit trade-offs between verifiability, performance, and power helps identify better design points for given verification, performance, and power goals. / Dissertation
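As a toy illustration of why verifiability can be quantified by exhaustive exploration, the sketch below enumerates the reachable states of a hypothetical two-core power-capping controller and checks the cap invariant on every state; the controller, power levels, and budget are invented for the example and are unrelated to the schemes studied in the thesis.

```python
# Illustrative only: explicit-state reachability for a toy power-capping controller.
# A smaller reachable state space is one crude proxy for easier exhaustive verification.
from itertools import product
from collections import deque

POWER = {"low": 1, "high": 3}   # per-core power modes and their costs (illustrative)
CAP = 4                          # chip-level power budget (illustrative)

def successors(state):
    """Each core may request low or high; the controller denies requests over the cap."""
    for req in product(POWER, repeat=len(state)):
        if sum(POWER[m] for m in req) <= CAP:
            yield req

def reachable(initial):
    seen, frontier = {initial}, deque([initial])
    while frontier:
        s = frontier.popleft()
        for nxt in successors(s):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

states = reachable(("low", "low"))
print(len(states), "reachable states")
# Invariant check: no reachable state exceeds the power cap.
assert all(sum(POWER[m] for m in s) <= CAP for s in states)
```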
308

Design and verification of an ARM10-like Processor and its System Integration

Lin, Chun-Shou 07 February 2012 (has links)
As technology advances, more IP blocks can be designed onto the same chip area, and embedded systems have become more powerful in their applications. A more efficient processor core is needed to support the whole embedded system in a complex environment. The main purpose of this work is to improve the computation speed, memory management, and debugging support of SYS32TME III, an ARM10-like processor designed by our lab. We integrate a cache/MMU and an EICE (embedded in-circuit emulator) into the processor core. With the cache/MMU, the processor not only accesses external memory faster but also supports virtual addressing for an operating system. To ensure correctness and shorten system integration time, we verify the cache/MMU with five functional tests (cache off; cache on and MMU off with cache hit/miss; cache on and MMU on with cache hit, cache miss with TLB hit, and cache miss with TLB miss) and verify the EICE with six coprocessor instructions (LDC, MCR, MCRR, MRC, MRRC, STC). We then run regression tests on the integrated microprocessor, cache/MMU, and EICE system. Finally, we tuned the performance of the integrated cache/MMU and EICE so that the ARM10-like processor can run at 200 MHz in a 0.18 μm process.
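The five functional scenarios described above exercise the ordering of TLB and cache lookups. The sketch below shows that order in a schematic way - translate the virtual address via the TLB (walking a page table on a miss), then look up the physical line in the cache; page size, line size, and table contents are hypothetical and unrelated to the actual SYS32TME III design.

```python
# Illustrative sketch only: TLB hit/miss during translation, then cache hit/miss on
# the resulting physical address. All sizes and table contents are hypothetical.
PAGE_BITS, LINE_BITS = 12, 5

page_table = {0x00010: 0x3A2F0}   # virtual page -> physical frame (hypothetical)
tlb = {}                          # small cache of recent translations
cache = {}                        # physical line address -> data

def access(vaddr, memory):
    vpage, offset = vaddr >> PAGE_BITS, vaddr & ((1 << PAGE_BITS) - 1)
    if vpage in tlb:                          # TLB hit
        frame = tlb[vpage]
    else:                                     # TLB miss: walk the page table
        frame = page_table[vpage]
        tlb[vpage] = frame
    paddr = (frame << PAGE_BITS) | offset
    line = paddr >> LINE_BITS
    if line not in cache:                     # cache miss: fill the line from memory
        cache[line] = memory.get(line, 0)
    return cache[line]                        # cache hit returns directly

memory = {((0x3A2F0 << PAGE_BITS) | 0x10) >> LINE_BITS: 0xDEADBEEF}
print(hex(access(0x00010_010, memory)))   # first access: TLB miss + cache miss
print(hex(access(0x00010_010, memory)))   # second access: TLB hit + cache hit
```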
309

Software Engineering Process Improvement

Sezer, Bulent 01 April 2007 (has links) (PDF)
This thesis presents a software engineering process improvement study. The literature on software process improvement is reviewed. Then the current design verification process at one of the Software Engineering Departments of the X Company, Ankara, Türkiye (SED) is analyzed. Static software development process metrics have been calculated for the SED based on a recently proposed approach, and improvement suggestions have been made based on the metric values calculated according to the proposals of that study. In addition, the author's improvement suggestions have been discussed with the senior staff of the department, and a final version of the improvements has been compiled. A discussion comparing these two approaches is then presented. Finally, a new software design verification process model is proposed. Some of the suggestions have already been applied and preliminary results have been obtained.
310

NONE

Yang, Dennis 27 July 2001 (has links)
An inspection (verification) or certification firm employs professional specialists, technology, and equipment, from an independent, impartial, and objective position, to conduct inspection, testing, and assessment of the quantity and quality of commodities, the performance of machinery and components, and the implementation of established quality systems, and then provides suppliers, buyers, and stakeholders with certificates or reports that fulfil the contractual obligations or commitments made in trade transactions. Inspection and certification play a pioneering role in technological development and are a necessary driver for business and industrial enterprises to enhance product quality, obtain international accreditation, and increase international competitiveness. In today's free-trade, free-market environment, with growing consumer awareness and intense competition within the industry, inspection and certification firms should study how to adopt an appropriate competitive strategy, find their market niche, enhance service quality, and understand customers' real needs and satisfaction, so as to broaden their service scope and customer base and build a firm that lasts. This study explains the characteristics, current status, and outlook of the inspection and certification industry, and analyzes the development history, performance, and achievements of SGS Group in Taiwan, the world leader in verification, testing, and certification. Employing Porter's industry analysis and a SWOT analysis (strengths, weaknesses, opportunities, threats), we discuss the effect of the "five forces" on SGS Taiwan's business strategy, the correlation between service quality and customer satisfaction, and its current implementation of the Balanced Scorecard and the ISO 9001 quality management system. In summary, we derive the key success factors for the inspection and certification industry and offer suggestions for strengthening the business model and management strategy of other inspection and certification firms, as a reference and benchmark enabling them to provide the best integrated services for quality-minded business and industrial enterprises in Taiwan. Keywords: Inspection, Verification, Certification, Business Strategy, Service Quality, Customer Satisfaction.
