21 |
Assertion Based Verification on Senior DSP. Lepenica, Nermin, January 2011.
Digital designs are often very large and complex, which makes locating and fixing a bug hard and time-consuming; often more than half of the development time is spent on verification. Assertion-based verification is a method that uses assertions to improve verification time. Simulating with assertions provides additional information that can be used to locate and correct a bug. In this master's thesis, assertions are discussed and implemented in the Senior DSP processor.
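The mechanism the abstract describes can be illustrated in software: an assertion monitor watches signal values each simulation cycle and reports the first cycle at which a property fails, which is what narrows the search for a bug. A minimal Python sketch follows; the signal names (`req`, `ack`) and the latency property are hypothetical examples, not taken from the Senior DSP design:

```python
# Minimal sketch of an assertion monitor: checks, cycle by cycle,
# that whenever `req` is asserted, `ack` follows within 2 cycles.
# Signal names and the property itself are hypothetical examples.

def check_req_ack(trace, max_latency=2):
    """trace: list of dicts with boolean 'req' and 'ack' per cycle.
    Returns the cycle of the first violating request, or None if
    the property holds over the whole trace."""
    pending = []  # cycles at which a req is still waiting for ack
    for cycle, signals in enumerate(trace):
        if signals["ack"]:
            pending.clear()          # outstanding requests are served
        if signals["req"]:
            pending.append(cycle)
        # any request older than max_latency with no ack is a violation
        for start in pending:
            if cycle - start >= max_latency and not signals["ack"]:
                return start
    return None

# A passing trace: req at cycle 0, ack at cycle 1.
ok = [{"req": True, "ack": False}, {"req": False, "ack": True},
      {"req": False, "ack": False}]
# A failing trace: req at cycle 0, no ack ever.
bad = [{"req": True, "ack": False}] * 4

print(check_req_ack(ok))   # None: property holds
print(check_req_ack(bad))  # 0: request at cycle 0 never acknowledged
```

In a real flow the same property would be written as a SystemVerilog assertion evaluated during simulation; the point of the sketch is only how a violation pins down the offending cycle.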
|
22 |
The Design Verification Methodology for an Advanced Microprocessor. Zhong, Jing-Kun, 22 August 2008.
According to the literature, testing and verification of a hardware circuit project occupy about 60-70% of project time. As product cycle times decrease, verification methodology becomes an important factor in the effective and successful completion of a design project. Enhanced processor functionality also makes verification more difficult.
In this thesis, the processor SYS32IME III, which is based on the architecture of the ARM 1022E, is verified using the V5TE instruction set. The thesis focuses on the processor verification flow and supporting verification methods. The verification language used to help generate the testbench is described, and corner cases are generated, producing test cases that can be reused in different verification environments. Errors in the CPU architecture, verification environments, interface wrapper, and instruction set simulator were found in the different verification environments and fixed. Finally, a self-implemented RTL monitor circuit was inserted into the CPU architecture to supply information about the testbench's functional-verification coverage.
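The coverage information such a monitor supplies amounts to bookkeeping over which instruction classes the testbench actually exercised. A hedged Python illustration; the bin names are invented for the example, not taken from SYS32IME III:

```python
# Hedged sketch of functional-coverage bookkeeping of the kind an RTL
# monitor might report: count which instruction classes a testbench
# exercised. The bin names below are illustrative only.

from collections import Counter

COVERAGE_BINS = {"alu", "load_store", "branch", "multiply"}

def coverage_report(executed_classes):
    """executed_classes: iterable of instruction-class names observed
    during simulation. Returns (percent covered, set of missed bins)."""
    hits = Counter(c for c in executed_classes if c in COVERAGE_BINS)
    missed = COVERAGE_BINS - hits.keys()
    percent = 100.0 * len(hits) / len(COVERAGE_BINS)
    return percent, missed

pct, missed = coverage_report(["alu", "alu", "branch", "load_store"])
print(pct)     # 75.0: three of four bins hit
print(missed)  # {'multiply'}
```

A report like this tells the verification engineer which corner cases still need directed tests.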
|
23 |
Model checking: beyond the finite. Kahlon, Vineet, 28 August 2008.
Abstract not available.
|
24 |
Deductive mechanical verification of concurrent systems. Sumners, Robert W., 28 August 2008.
Abstract not available.
|
25 |
Mechanical verification of reactive systems. Manolios, Panagiotis, 25 May 2011.
Abstract not available.
|
26 |
Wind forecast verification: a study in the accuracy of wind forecasts made by the Weather Channel and AccuWeather. Scheele, Kyle Fred, 8 November 2011.
The Weather Channel (TWC) and AccuWeather (AWX) are leading providers of weather information to the general public. The purpose of this Master’s Report is to examine the wind speed forecasts made by these two providers and determine their reliability and accuracy. The data used within this report was collected over a 12-month period at 51 locations across the state of Texas. The locations were grouped according to wind power class, which ranged from Class 1 to Class 4. The length of the forecast period was 9 days for TWC and 14 days for AWX.
It was found that the values forecast by TWC were generally not well calibrated, but were never far from being perfectly calibrated and always demonstrated positive skill. The sharpness of TWC's forecasts decreased consistently with lead time, allowing them to maintain a skill score greater than that of the climatological average throughout the forecast period. TWC tended to over-forecast wind speed in short-term forecasts, especially within the lower wind power class regions. AWX forecasts were found to have positive skill for the first 6 days of the forecast period before becoming near zero or negative. AWX's forecasts maintained fairly high sharpness throughout the forecast period, which contributed to increasingly uncalibrated forecast values and negative skill in longer-term forecasts. The findings within this report should provide a better understanding of the wind forecasts made by TWC and AWX, and of the strengths and weaknesses of both companies.
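A skill score of the kind discussed above compares forecast error against a reference such as the climatological mean: SS = 1 - MSE_forecast / MSE_reference, with SS > 0 meaning the forecast beats the reference. A small Python sketch with made-up wind speeds (none of these numbers come from the study):

```python
# A skill score compares forecast error against a reference such as
# the climatological mean: SS = 1 - MSE_forecast / MSE_reference.
# SS > 0 means the forecast beats climatology. The wind speeds below
# are invented for illustration, not data from the study.

def mse(pred, obs):
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

def skill_score(forecast, observed, climatology_mean):
    reference = [climatology_mean] * len(observed)
    return 1.0 - mse(forecast, observed) / mse(reference, observed)

observed = [10.0, 12.0, 8.0, 11.0]    # hypothetical measured speeds
forecast = [9.5, 12.5, 8.5, 10.0]     # close to what happened
clim = sum(observed) / len(observed)  # climatological mean: 10.25

ss = skill_score(forecast, observed, clim)
print(round(ss, 3))  # 0.8: well above zero, forecast beats climatology
```

As lead time grows and forecasts drift toward (or past) the climatological mean, this score falls toward zero or goes negative, which is the behavior the abstract reports for the later AWX forecast days.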
|
27 |
Temporal verification of programs. Koskinen, Eric John, January 2013.
No description available.
|
28 |
Automated discovery of performance regressions in enterprise applications. Foo, King Chun (Derek), 31 January 2011.
Performance regression refers to the phenomenon in which application performance degrades compared to prior releases. Performance regressions are unwanted side effects caused by changes to the application or its execution environment. Previous research shows that most problems experienced by customers in the field are related to application performance. To reduce the likelihood of performance regressions slipping into production, software vendors must verify the performance of an application before its release. The current practice of performance verification is carried out only at the implementation level, through performance tests. In a performance test, service requests with intensity similar to the production environment are pushed to the application under test, and various performance counters (e.g., CPU utilization) are recorded. Analysis of the results of performance verification is both time-consuming and error-prone due to the large volume of collected data, the absence of formal objectives, and the subjectivity of performance analysts. Furthermore, since performance verification is done just before release, evaluation of high-impact design changes is delayed until the end of the development lifecycle. In this thesis, we seek to improve the effectiveness of performance verification. First, we propose an approach to construct layered simulation models to support performance verification at the design level. Performance analysts can leverage our layered simulation models to evaluate the impact of a proposed design change before any development effort is committed. Second, we present an automated approach to detect performance regressions from the results of performance tests conducted on the implementation of an application. Our approach compares the results of new tests against counter correlations extracted from performance testing repositories.
Finally, we refine our automated analysis approach with ensemble-learning algorithms to evaluate performance tests conducted in heterogeneous software and hardware environments. Thesis (Master, Electrical & Computer Engineering), Queen's University, 2011.
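The counter-correlation idea in the second contribution can be sketched simply: if two counters were tightly correlated in past good runs, a new run in which that correlation breaks is flagged as a candidate regression. A hedged Python illustration; the counters, data, and threshold are invented for the example:

```python
# Hedged sketch of regression detection via counter correlations: if
# request load and CPU utilization tracked each other in past good
# runs, a new run where they no longer do is flagged for review.
# Counter names, data, and the tolerance are illustrative only.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def flags_regression(baseline_r, new_xs, new_ys, tolerance=0.3):
    """Flag when the new run's counter correlation drifts from baseline."""
    return abs(pearson(new_xs, new_ys) - baseline_r) > tolerance

load = [10, 20, 30, 40, 50]
cpu_ok = [11, 19, 31, 42, 49]      # tracks load, as in good runs
cpu_bad = [45, 46, 44, 47, 45]     # saturated: no longer tracks load

baseline_r = pearson(load, cpu_ok)
print(flags_regression(baseline_r, load, cpu_ok))   # False
print(flags_regression(baseline_r, load, cpu_bad))  # True
```

Automating this comparison over many counter pairs is what removes the analyst subjectivity the abstract describes.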
|
29 |
Model checking data-independent systems with arrays. Newcomb, Tom C., January 2003.
We say a program is data-independent with respect to a data type X if the operations it can perform on values of type X are restricted to just equality testing, although the system may also input, store and move around (via assignment) values of type X within its variables. This property can be exploited to give procedures for the automatic verification, called model checking, of such programs independently of the instance for the type X. This thesis considers data-independent programs with arrays, which are useful for modelling memory systems such as cache protocols. The main question of interest is the following parameterised model-checking problem: whether a program satisfies its specification for all non-empty finite instances of its types. In order to obtain these results, we present a UNITY-like programming language with arrays that is suited to the study of decidability of various model-checking problems, whilst being useful for prototyping memory systems such as caches. Its semantics are given in terms of transition systems, and we use the modal μ-calculus, a branching-time temporal logic with recursion, as our specification language. We describe a model-checking procedure for programs that use arrays indexed by one data-independent type X and storing values from another Y. This allows us to prove properties about parameterised systems: for example, that memory systems can be verified independently of memory size and data values. This decidability result is shown to extend to data-independent programs with many types and multidimensional arrays which are acyclic, meaning it is not possible to form loops of types in the 'indexed by' relation. Conversely, it is shown that even reachability model-checking problems are undecidable for classes of programs that allow cyclic-array programs.
We give practical motivation for these decidability results by demonstrating how one could verify a fault-tolerant interface on a set of unreliable memories, and the cache protocol in the Pentium Pro processor. Significantly, the verifications are performed independently of many of these systems' parameters. These case studies suggest two extensions to the language: an array reset instruction, which sets every element of an array to a particular value, and an array assignment or copy instruction. Both are shown to restrict decidability of model checking problems; however we can obtain some interesting decidability results for arrays with reset by restricting the number of arrays to just one, or by allowing the arrays only to store fixed finite types, such as the booleans.
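The data-independence property itself is easy to illustrate: a routine that only moves values of type X around and tests them for equality behaves identically under any renaming (bijection) of X's values, which is what lets one model-checking run cover every instance of the type. A small Python sketch (in plain Python, not the thesis's UNITY-like language):

```python
# Sketch of data-independence: a routine that only stores values and
# tests them for equality gives the same answers under any renaming
# (bijection) of the data type's values. That invariance is what lets
# a verification result cover every instance of the type.

def lookup(cache, key):
    """Data-independent in the key type: keys are only stored,
    moved, and compared for equality."""
    for k, v in cache:
        if k == key:          # equality is the only operation on keys
            return v
    return None

def lookup_renamed(cache, key, f):
    """Apply a renaming f to every key before running lookup."""
    return lookup([(f(k), v) for k, v in cache], f(key))

cache = [("a", 1), ("b", 2)]
f = {"a": "x", "b": "y"}.get   # a bijection on the keys in use

print(lookup(cache, "b"))             # 2
print(lookup_renamed(cache, "b", f))  # 2: same result after renaming
```

A routine that, say, compared keys with `<` or did arithmetic on them would break this invariance and fall outside the data-independent class.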
|
30 |
Robust correlation and support vector machines for face identification. Jonsson, K. T., January 2000.
No description available.
|