
Confirmation theory & confirmation logic

Lin, Chao-tien January 1987 (has links)
The title of my dissertation is "Confirmation Theory & Confirmation Logic", and it consists of five Parts. The motivation of the dissertation was to construct an adequate confirmation theory that could solve "the paradoxes of confirmation" discovered by Carl G. Hempel. In Part One I do three main things: (i) introduce the fundamentals of Hempel's theory of qualitative confirmation as the common background for the subsequent discussion, (ii) review the major views of the paradoxes of confirmation, and (iii) present a new view, more radical than other known views, and argue that a solution to the paradoxes of confirmation may require a change of logic. In Part Two I construct a number of promising three-valued logics and employ these "quasi confirmation logics" as the underlying logics of some new confirmation theories which, I had hoped, would solve the paradoxes of confirmation. I consider three-valued logics, rather than any other many-valued logics, as the underlying logic for a promising confirmation theory, because I believe there is some intimate relationship, perhaps even a one-to-one correspondence, between the (controversial) three truth-values of "truth", "falsity" and "neither truth nor falsity" and, respectively, the (non-controversial) three confirmation statuses of "confirmation", "disconfirmation" and "neutrality". Unfortunately, these theories were found to be semantically inadequate, which became clear after a complete semantics for them had been developed. Thus one negative result of Part Two is that the syntactical approach to confirmation theory is wrong from the very beginning. From this negative result, however, we learn a positive lesson: a semantical approach is more fundamental and decisive than a syntactical one, at least for constructing an adequate theory of confirmation. It is rewarding to note that the three-valued semantics worked out in Part Two is simple, complete and the first of its kind.
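The quasi confirmation logics of Part Two pair three truth-values with three confirmation statuses. As a concrete illustration of what a three-valued logic looks like, here is a minimal sketch of the strong Kleene connectives in Python; the choice of Kleene's particular truth tables, and the raven-style reading in the final comment, are assumptions for illustration, not the system the dissertation actually constructs.

```python
# Strong Kleene three-valued connectives; T/F/N can be read alongside the
# confirmation statuses of confirmation, disconfirmation and neutrality.
T, F, N = "T", "F", "N"  # truth, falsity, neither truth nor falsity

def neg(a):
    return {T: F, F: T, N: N}[a]

def conj(a, b):
    if a == F or b == F:   # one false conjunct falsifies the whole
        return F
    if a == T and b == T:
        return T
    return N               # otherwise undetermined

def disj(a, b):
    return neg(conj(neg(a), neg(b)))   # De Morgan dual of conjunction

def implies(a, b):
    return disj(neg(a), b)             # material conditional

# A conditional whose antecedent and consequent are both undetermined
# (e.g. about an unexamined object) stays neutral rather than true:
print(implies(N, N))  # N
```

One attraction of such tables for confirmation theory is visible immediately: an instance that classical logic would count as vacuously confirming can come out neutral instead.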
In fact, the new three-valued semantics is in the spirit of Frege, although this line of thought has been much neglected (even by Frege himself). In Part Three, because of the lesson learned in Part Two, I shift the search for a confirmation logic and an adequate theory of confirmation from a syntactical to a semantical approach. After a systematic search through several promising three-valued logics I arrive, at last, at a plausible confirmation logic and at a confirmation theory that could solve all known paradoxes of confirmation; this promising three-valued confirmation theory is called "the internal confirmation theory". In Part Four I review and appraise the adequacy conditions laid down by Hempel as necessary conditions for any adequate confirmation theory. Under the criticisms of Carnap and Goodman and, especially, with the help of Hanen's thorough studies, I come to a conclusion almost identical to Hanen's: we should not impose a priori on a theory of qualitative confirmation any of the adequacy conditions laid down by Hempel, except perhaps the Entailment Condition, although the internal confirmation theory also adopts the Equivalence Condition for intrinsic reasons. In Part Five, the last, I appraise the three most important confirmation theories discussed and/or constructed in this dissertation: Hempel's theory of confirmation, Goodman's and Scheffler's theory of selective confirmation, and the internal confirmation theory. After some more vigorous criticisms are made, and some new paradoxes of confirmation are unexpectedly derived in both the theory of selective confirmation and the internal confirmation theory, I arrive, perhaps reluctantly, at this more reasonable conclusion: in the present situation, when there is no obvious way to overcome the new difficulties, the best we can do is to dissolve (i.e. to live with) all the paradoxes of confirmation, new and old, for Hempel may after all be right that the paradoxes of confirmation are not genuine and that to think otherwise is to harbor a psychological illusion. / Arts, Faculty of / Philosophy, Department of / Graduate

Remote attendance monitoring using speaker verification technology

Moura, Paulo André Alves January 2010 (has links)
Internship carried out at PT Inovação, supervised by Eng. Sérgio Ramalho / Integrated master's thesis. Electrical and Computer Engineering (Telecommunications major). Faculdade de Engenharia, Universidade do Porto, 2010

The Design Verification Methodology for an Advanced Microprocessor

Zhong, Jing-Kun 22 August 2008 (has links)
According to the literature, testing and verification occupy about 60% to 70% of a hardware design project's schedule. As product cycle times shrink, verification methodology becomes an important factor in completing a design project effectively and successfully, and enhanced processor functionality makes verification correspondingly more difficult. In this thesis the processor SYS32IME III, which is built on the architecture of the ARM 1022E, is verified using the V5TE instruction set. The thesis focuses on the processor verification flow and on supporting verification methods. The verification language used to help generate the testbench is described, and corner cases are generated, producing test cases that can be reused in different verification environments. Errors in the CPU architecture, the verification environments, the interface wrapper and the instruction set simulator were found in the different verification environments and fixed. To conclude the study, a self-implemented RTL monitor circuit inserted into the CPU architecture supplies information about the testbench's functional coverage.
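The corner-case and reusable-test-case generation described above can be sketched as seeded constrained-random instruction generation. The mnemonics, operand fields and corner values below are illustrative assumptions, not the actual SYS32IME III or V5TE encodings; the point is only that a seeded generator yields reproducible test cases that can be replayed unchanged across RTL, gate-level and instruction-set-simulator environments.

```python
# Hedged sketch of constrained-random test generation with corner biasing.
import random

# Classic 32-bit corner operands: zero, all-ones, sign boundaries.
CORNER_OPERANDS = [0x00000000, 0xFFFFFFFF, 0x80000000, 0x7FFFFFFF]

def gen_instruction(rng, corner_bias=0.5):
    """Generate one test instruction, biased toward corner operands."""
    op = rng.choice(["ADD", "SUB", "AND", "ORR", "MUL"])
    def operand():
        if rng.random() < corner_bias:
            return rng.choice(CORNER_OPERANDS)
        return rng.getrandbits(32)
    return {"op": op, "rn": operand(), "rm": operand()}

def gen_testcase(seed, length=8):
    """A seeded test case is fully reproducible, so the same stimulus
    can be reused in different verification environments."""
    rng = random.Random(seed)
    return [gen_instruction(rng) for _ in range(length)]

case = gen_testcase(seed=42)
assert case == gen_testcase(seed=42)  # deterministic replay
```

Biasing toward corner operands concentrates stimulus where arithmetic and flag-setting bugs tend to hide, while the random tail still covers ordinary values.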

Early Verification of the Power Delivery Network in Integrated Circuits

Abdul Ghani, Nahi 05 January 2012 (has links)
The verification of power grids in modern integrated circuits must start early in the design process, when adjustments can be most easily incorporated. We adopt an existing early verification framework. The framework is vectorless, i.e., it does not require input test patterns and does not rely on simulating the power grid under such patterns. Instead, circuit uncertainty is captured via a set of current constraints that encode what may be known or specified about circuit behavior. Grid verification then becomes a question of finding the worst-case grid behavior, which in turn entails the solution of linear programs (LPs) whose size and number are proportional to the size of the grid. This thesis builds on that systematic framework for handling circuit uncertainty, with the aim of improving efficiency and expanding the capabilities it supports. One contribution introduces an efficient method, based on a sparse approximate-inverse technique, that greatly reduces the size of the required linear programs while ensuring a user-specified over-estimation margin on the exact solution; the method is demonstrated under both R and RC grid models. Another contribution first extends grid verification under RC grid models to also check worst-case branch currents, which requires as many LPs as there are branches, and then shows how to adapt the approximate-inverse technique to speed up the branch-current verification process. A third contribution proposes a novel approach that reduces the number of LPs in the voltage-drop and branch-current verification problems by examining dominance relations among node voltage drops and among branch currents, allowing a group of LPs to be replaced by one conservative and tight LP. A fourth contribution proposes an efficient verification technique under RLC models that provides tight conservative bounds on the maximum and minimum worst-case voltage drops at every node of the grid.
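The vectorless formulation reduces to one LP per node: maximize that node's voltage drop over all source currents satisfying the constraints. Below is a small numerical sketch under an R-only grid model. The 3-node grid, the conductance values, and the simple box-plus-budget constraint set are illustrative assumptions (the thesis handles more general constraints); for this special structure, with a nonnegative objective, the LP happens to solve greedily as a fractional knapsack.

```python
import numpy as np

# Conductance matrix (siemens) of a tiny 3-node resistive grid. It is an
# M-matrix (diagonally dominant, nonpositive off-diagonals), so its inverse
# is entrywise nonnegative and v = inv(G) @ i maps nonnegative source
# currents to nonnegative voltage drops.
G = np.array([[ 3.0, -1.0, -1.0],
              [-1.0,  2.0, -0.5],
              [-1.0, -0.5,  2.5]])
A = np.linalg.inv(G)

I_local  = np.array([0.2, 0.3, 0.1])  # per-source current upper bounds (A)
I_global = 0.4                        # global current budget (A)

def worst_case_drop(c, hi, budget):
    """Maximize c @ i subject to 0 <= i <= hi and sum(i) <= budget.
    With c >= 0 this LP is a fractional knapsack: spend the budget
    on the largest coefficients first."""
    i = np.zeros_like(hi)
    for k in np.argsort(-c):
        i[k] = min(hi[k], budget)
        budget -= i[k]
        if budget <= 0:
            break
    return float(c @ i)

# One LP per node, as in the framework described above.
worst = [worst_case_drop(A[k], I_local, I_global) for k in range(3)]
```

The size reduction and dominance contributions in the thesis attack exactly the two costs visible here: the dimension of each LP and the number of LPs solved.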

Automated discovery of performance regressions in enterprise applications

Foo, King Chun (Derek) 31 January 2011 (has links)
Performance regression refers to the phenomenon in which application performance degrades relative to prior releases. Performance regressions are unwanted side-effects of changes to an application or its execution environment. Previous research shows that most problems customers experience in the field are related to application performance. To reduce the likelihood of performance regressions slipping into production, software vendors must verify the performance of an application before its release. The current practice of performance verification is carried out only at the implementation level, through performance tests: service requests with intensity similar to the production environment are pushed to the application under test, and various performance counters (e.g., CPU utilization) are recorded. Analyzing the results of performance verification is both time-consuming and error-prone because of the large volume of collected data, the absence of formal objectives and the subjectivity of performance analysts. Furthermore, since performance verification is done just before release, evaluation of high-impact design changes is delayed until the end of the development lifecycle. In this thesis, we seek to improve the effectiveness of performance verification. First, we propose an approach that constructs layered simulation models to support performance verification at the design level; performance analysts can use these models to evaluate the impact of a proposed design change before any development effort is committed. Second, we present an automated approach that detects performance regressions from the results of performance tests conducted on the implementation of an application, comparing the results of new tests against counter correlations extracted from performance-testing repositories. Finally, we refine our automated analysis approach with ensemble-learning algorithms to evaluate performance tests conducted in heterogeneous software and hardware environments. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2011-01-31 15:53:02.732
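The correlation-based check described above can be sketched in a few lines: counter pairs that correlated strongly in past good runs are re-checked on a new run, and a broken correlation flags a possible regression. The counter names, the data, and the thresholds below are illustrative assumptions, not the thesis's actual rules.

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two counter series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Past good runs: CPU utilization tracked the request rate almost linearly.
baseline_load = [100, 200, 300, 400, 500]
baseline_cpu  = [11, 21, 30, 41, 50]

# New run: CPU no longer tracks load (e.g. an unrelated saturation point).
new_load = [100, 200, 300, 400, 500]
new_cpu  = [55, 56, 55, 56, 55]

r_old = pearson(baseline_load, baseline_cpu)
r_new = pearson(new_load, new_cpu)

# Flag a regression when a previously strong correlation breaks down.
regression = r_old > 0.9 and abs(r_old - r_new) > 0.2
```

Because the check compares correlations rather than raw counter values, it tolerates runs at different load levels, which is what makes mining a repository of past tests feasible.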

Wind forecast verification: a study in the accuracy of wind forecasts made by the Weather Channel and AccuWeather

Scheele, Kyle Fred 08 November 2011 (has links)
The Weather Channel (TWC) and AccuWeather (AWX) are leading providers of weather information to the general public. The purpose of this Master's Report is to examine the wind speed forecasts made by these two providers and to determine their reliability and accuracy. The data used in this report were collected over a 12-month period at 51 locations across the state of Texas. The locations were grouped by wind power class, ranging from Class 1 to Class 4. The forecast period was 9 days for TWC and 14 days for AWX. The values forecast by TWC were generally not well calibrated, but they were never far from perfect calibration and always demonstrated positive skill. The sharpness of TWC's forecasts decreased consistently with lead time, allowing TWC to maintain a skill score better than the climatological average throughout the forecast period. TWC tended to over-forecast wind speed in short-term forecasts, especially in the lower wind power class regions. AWX forecasts showed positive skill for the first 6 days of the forecast period before becoming near zero or negative. AWX's forecasts maintained fairly high sharpness throughout the forecast period, which contributed to increasingly uncalibrated forecast values and negative skill at longer lead times. The findings in this report should provide a better understanding of the wind forecasts made by TWC and AWX and identify the strengths and weaknesses of both companies. / text
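The skill comparisons above are made relative to climatology. One common formulation is the MSE-based skill score, SS = 1 - MSE_forecast / MSE_climatology, where positive values mean the forecast beats the climatological mean; a minimal sketch follows, using illustrative wind speeds rather than the report's Texas data.

```python
def mse(pred, obs):
    """Mean squared error of a forecast series against observations."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

observed = [5.0, 7.0, 6.0, 9.0, 4.0]   # observed wind speeds (m/s)
forecast = [5.5, 6.5, 6.0, 8.0, 4.5]   # provider forecast (m/s)

# Climatological baseline: always forecast the long-run mean.
climo_mean  = sum(observed) / len(observed)
climatology = [climo_mean] * len(observed)

# SS = 1 - MSE_forecast / MSE_climatology
skill = 1.0 - mse(forecast, observed) / mse(climatology, observed)
```

A skill of 0 means no better than climatology and 1 means a perfect forecast, which is why a forecast can stay useful at long lead times only so long as its skill remains above zero.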

An improved method for register file verification

Quan, Tong 2009 August 1900 (has links)
Register file logic verification has historically involved comparing two human-generated logic sources, such as a VHDL code file and a circuit schematic, for logic equivalence. This method is valid in most cases, but it does not account for instances in which both logic sources are equivalent yet incorrect. This report proposes a method that eliminates the problem by testing the logic coherency of the various sources against a golden logic source. The golden logic source is generated by a register file simulation program developed to simulate accurate regfile I/O port data. Implementing this simulation program for logic verification eliminates the accuracy problem stated above; in addition, logic simulation time for the new method is reduced by 36% compared with the former method. / text
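The failure mode motivating the golden-model approach can be sketched directly: two human-generated sources that share the same bug pass an equivalence check against each other, yet fail against an independently generated golden register-file model. The 4-register file, the operation format, and the injected bug below are illustrative assumptions, not the report's actual design.

```python
class GoldenRegFile:
    """Independently generated reference model: last write wins,
    reads return the stored data."""
    def __init__(self, nregs=4):
        self.regs = [0] * nregs
    def write(self, addr, data):
        self.regs[addr] = data
    def read(self, addr):
        return self.regs[addr]

class BuggyRegFile(GoldenRegFile):
    """Stand-in for two 'equivalent but incorrect' human sources:
    both share the same bug, so writes to register 1 are lost."""
    def write(self, addr, data):
        if addr != 1:
            self.regs[addr] = data

def run(model, ops):
    """Apply (kind, addr, data) operations; collect read results."""
    out = []
    for kind, addr, data in ops:
        if kind == "W":
            model.write(addr, data)
        else:
            out.append(model.read(addr))
    return out

ops = [("W", 0, 0xAB), ("W", 1, 0xCD), ("R", 0, None), ("R", 1, None)]

golden = run(GoldenRegFile(), ops)
src_a, src_b = run(BuggyRegFile(), ops), run(BuggyRegFile(), ops)

assert src_a == src_b   # pairwise equivalence check passes...
assert src_a != golden  # ...but the golden model exposes the shared bug
```

Checking every source against one independently derived reference, rather than sources against each other, is what removes the "equivalent but wrong" blind spot.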

Model checking: beyond the finite

Kahlon, Vineet 28 August 2008 (has links)
Not available / text

Deductive mechanical verification of concurrent systems

Sumners, Robert W. 28 August 2008 (has links)
Not available / text

Mechanical verification of reactive systems

Manolios, Panagiotis 25 May 2011 (has links)
Not available / text
