141

Minimization of Exclusive Sum of Products Expressions for Multiple-Valued Input Incompletely Specified Functions

Song, Ning 10 August 1992 (has links)
In recent years, there has been increased interest in the design of logic circuits which use EXOR gates, and particularly in the minimization of arbitrary Exclusive Sums of Products (ESOPs). Functions realized by such circuits can have fewer gates, fewer connections, and take up less area in VLSI and, especially, FPGA realizations. They are also easily testable. So far, ESOPs are not as popular as their Sum of Products (SOP) counterparts. One of the main reasons is that the minimization of ESOP circuits has traditionally been an extremely difficult problem. Since exact solutions can be found in practice only for functions with no more than 5 variables, the interest is in approximate solutions. Two approaches to generating suboptimal solutions can be found in the literature. One approach is to minimize sub-families of ESOPs. The other is to minimize ESOPs using heuristic algorithms. The method introduced in this thesis belongs to the second approach, which normally generates better results than the first. Within the second approach, two general methods are used. One method is to minimize the coefficients of Reed-Muller forms. The other is to perform a set of cube operations iteratively on a given ESOP; so far, this method has achieved better results than the others. In this method (we call it the cube operation approach), the quality of the results depends on the quality of the cube operations. Different cube operations have been invented in the past few years. All of these cube operations can be applied only when certain conditions are satisfied, due to the limitations of the operations. These limitations reduce the opportunity to obtain a high-quality solution and reduce the efficiency of the algorithm as well. The effort to remove these limitations led to the invention of our new cube operation, exorlink, which is introduced in this thesis. Exorlink can be applied to any two cubes in the array without conditions, and all the existing cube operations in this approach are included in it, so it is the most general operation in this approach. Another key issue in the cube operation approach is the efficiency of the algorithm. Existing algorithms perform all possible cube operations and give little guidance in selecting among them. Our new algorithm selectively performs only some of the possible operations. Experimental results show that this algorithm is more efficient than existing ones. New algorithms to minimize multiple-output functions and especially incompletely specified ESOPs are also presented. The algorithms are included in the program EXORCISM-MV-2, which is a new version of EXORCISM-MV. EXORCISM-MV-2 was tested on many benchmark functions and compared to results from the literature. The program in most cases gives the same or better solutions on binary and 4-valued completely specified functions. More importantly, it is able to efficiently minimize arbitrary-valued and incompletely specified functions, while the programs in the literature handle either only completely specified functions or only binary variables. Additionally, as in Espresso, the number of variables in our program is unlimited and the only constraint is the number of input cubes that are read, so very large functions can be minimized.
Based on our new cube operation and new algorithms, this thesis presents a solution to a problem that has not yet been practically solved in the literature: efficient minimization of arbitrary ESOP expressions for multiple-output, multiple-valued input, incompletely specified functions.
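The exorlink operation itself is compact enough to sketch. Below is a minimal, hedged illustration in Python using positional-cube notation (each multiple-valued variable is a frozenset of allowed values; for a binary variable, {0, 1} is a don't-care literal); the representation and names are ours, not EXORCISM-MV-2's source.

```python
def exorlink(a, b):
    """Exorlink of two cubes a and b (tuples of frozensets, one per variable).
    For each position i where the literals differ, emit a cube built from
    b's literals before i, the symmetric difference at i, and a's literals
    after i. A distance-k pair yields k cubes."""
    diff = [i for i in range(len(a)) if a[i] != b[i]]
    cubes = []
    for i in diff:
        # ^ on frozensets is set symmetric difference
        cubes.append(tuple(b[:i]) + (a[i] ^ b[i],) + tuple(a[i + 1:]))
    return cubes

# Two binary cubes over 3 variables, at distance 2 -> 2 result cubes.
a = (frozenset({0}), frozenset({1}), frozenset({0, 1}))
b = (frozenset({1}), frozenset({1}), frozenset({0}))
for c in exorlink(a, b):
    print(c)
```

For two cubes at distance k, the k cubes produced EXOR to the same function as the two originals (a telescoping identity over GF(2)), which is what lets an iterative minimizer reshape an ESOP without changing the function it realizes.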
142

An expert system for softwood lumber grading

Zeng, Yimin 05 May 1993 (has links)
The focus of this research is to develop a prototype expert system for softwood lumber grading. The grading rules used in the knowledge base of the system are based on Western Lumber Grading Rules 88, published by the Western Wood Products Association. The system includes 27 grades in the Dimension, Select/Finish, and Boards categories. The system is designed to be interactive and menu-driven. The user input to the system consists of lumber size, grade category, and the type, location, and size of defects on each face. The system then infers the grade corresponding to each face, and an overall grade for the lumber. The system provides limited explanation capabilities. Evaluation of the system was performed using 85 samples of pre-graded Siberian larch 2x4x12s in the Structural Light Framing category. The initial evaluation was performed using the two wide faces of the boards. Results indicated a 60 percent match between the grade assigned by the human expert and the system. The largest cause of deviation was the exclusion of defects on the two narrow faces. When the knowledge base was expanded to include the two narrow faces, the match rate improved to 76.5 percent. Evaluations for other grading categories need to be conducted in the future to assess the adequacy of the knowledge base. The prototype development concentrates on selected defect characteristics for each grade. These characteristics are clearly defined and described in the rule book, and are usually the most frequently encountered defects on softwood lumber. The knowledge base needs to be refined and expanded if additional factors such as knot positions relative to each other, warp, manufacturing imperfections, and clustering of defects are to be considered. / Graduation date: 1993
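To make the two-stage inference concrete, here is a toy sketch of the face-then-board grading flow described above; the knot-size limits and face width are invented placeholders, not actual Western Lumber Grading Rules 88 thresholds.

```python
GRADES = ["Select Structural", "No. 1", "No. 2", "No. 3"]  # best to worst
# Hypothetical knot-size limits as a fraction of face width (not the rule book's).
LIMITS = {"Select Structural": 0.25, "No. 1": 0.375, "No. 2": 0.5, "No. 3": 0.75}

def grade_face(defects, face_width):
    """defects: list of (kind, size_inches). Returns the best grade whose
    limit every defect on this face satisfies."""
    worst = 0
    for kind, size in defects:
        if kind == "knot":
            for rank, g in enumerate(GRADES):
                if size <= LIMITS[g] * face_width:
                    worst = max(worst, rank)
                    break
            else:
                worst = len(GRADES) - 1  # exceeds every limit
    return GRADES[worst]

def grade_board(faces, face_width=3.5):
    """Grade each face, then take the overall grade as the worst face grade."""
    face_grades = [grade_face(d, face_width) for d in faces]
    overall = GRADES[max(GRADES.index(g) for g in face_grades)]
    return face_grades, overall

# Two wide faces: one clear, one with a 1.5-inch knot.
print(grade_board([[], [("knot", 1.5)]]))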
143

A model-based methodology for the evaluation of computerized group decision making

McNown Perry, Cindy A. 26 September 2001 (has links)
Increased global competition is forcing organizations to rely more heavily on group decision making. Computerized group decision support aids (CGDSAs) are being developed to improve the efficiency of these groups and to improve decision quality. Even though the use of CGDSAs has increased, very little research has been done on their evaluation. The purpose of this research was to develop a model-based, generalized methodology for CGDSA evaluation from the user's perspective. Two models were developed as a foundation for the CGDSA evaluation methodology: a model of group decision making and a model of computer-aided group decision making. The group decision making model was based upon a basic input-output model, with the problem as the input and the selected alternative as the output. Analogous to how problems are viewed in classical design of experiments, independent variables affect the outcome of the decision making process, with the problem solution as the dependent variable. As in design of experiments, independent variables are either noise variables or control variables. In the model presented, the independent variables are further divided into four categories (internal, external, process, and problem) as a way to help develop an exhaustive list of the independent variables affecting the decision making process. The generalized methodology for CGDSA evaluation mapped directly to the computer-aided group decision making model. Solution quality is measured directly, or by measuring independent variables that have previously been correlated with solution quality using standard design of experiments techniques. The generalized methodology was applied to the assessment of ConsensusBuilder, an example of a CGDSA. As prescribed by the methodology, usability was also assessed and practical-use considerations were followed when designing the evaluation. The value of the ConsensusBuilder evaluation for this research was that it demonstrated that a thorough evaluation of a CGDSA could be performed using the methodology developed here. In addition to the ConsensusBuilder evaluation, six different CGDSA evaluations cited in the literature were assessed in terms of the CGDSA evaluation methodology. / Graduation date: 2002
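As a rough illustration of the model's variable taxonomy, a sketch like the following captures the noise/control distinction and the four categories; the field names and examples are ours, not the thesis's notation.

```python
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    INTERNAL = "internal"   # e.g., group member expertise
    EXTERNAL = "external"   # e.g., organizational setting
    PROCESS = "process"     # e.g., decision procedure, facilitation
    PROBLEM = "problem"     # e.g., task complexity

@dataclass
class IndependentVariable:
    name: str
    category: Category
    is_control: bool                  # True for control variables, False for noise
    quality_correlated: bool = False  # known proxy for solution quality

# An evaluation design enumerates such variables, then measures solution
# quality directly or through the quality-correlated proxies.
variables = [
    IndependentVariable("facilitator experience", Category.PROCESS, is_control=True),
    IndependentVariable("ambient distractions", Category.EXTERNAL, is_control=False),
    IndependentVariable("task complexity", Category.PROBLEM, is_control=True),
]
```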
144

SketchPad for Windows : an intelligent and interactive sketching software

Gulur, Sudheendra S. 07 October 1994 (has links)
The sketching software developed in this thesis is intended to serve as an intelligent design tool for the conceptual design stage of the mechanical design process. This sketching software, SketchPad for Windows, closely mimics the traditional paper-and-pencil sketching environment by allowing the user to sketch freely on the computer screen using a mouse. The recognition algorithm built into the application replaces each sketch stroke with the exact CAD entity. Currently, the recognition of two-dimensional design primitives such as lines, circles, and arcs has been addressed. Since manufacturing requires that design concepts be detailed, sketches need to be refined into detailed drawings. This process of carrying design data from the conceptual design stage into the detailed design stage is achieved with a converter that transfers the sketch data into DesignView (a variational CAD package). Currently, only geometric information is transferred from the sketching software into DesignView. The transparent graphical user interface built into this sketching system challenges the hierarchical and regimented user interfaces of current CAD software. / Graduation date: 1995
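One plausible way such a recognizer can decide between primitives is to fit each candidate entity to the stroke points and keep the best fit. The sketch below, our illustration rather than the thesis's actual algorithm, classifies a stroke as a line or a circle by comparing least-squares residuals.

```python
import numpy as np

def fit_line(pts):
    # Total least squares: the normal to the best-fit line is the smallest
    # singular vector of the centered points; return the squared residual.
    c = pts.mean(axis=0)
    d = pts - c
    _, _, vt = np.linalg.svd(d)
    normal = vt[-1]
    return float(np.sum((d @ normal) ** 2))

def fit_circle(pts):
    # Kasa fit: solve x^2 + y^2 = 2ax + 2by + c linearly, then measure
    # radial residuals about the recovered center (a, b) and radius r.
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    (a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    r = np.sqrt(c + a**2 + b**2)
    return float(np.sum((np.hypot(x - a, y - b) - r) ** 2))

def classify(stroke_points):
    pts = np.asarray(stroke_points, dtype=float)
    return "line" if fit_line(pts) <= fit_circle(pts) else "circle"

# A noisy-ish diagonal stroke should classify as a line.
print(classify([(0, 0), (1, 1.02), (2, 1.98), (3, 3.01)]))
```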
145

A library for doing polyhedral operations

Wilde, Doran K. 06 December 1993 (has links)
Polyhedra are geometric representations of linear systems of equations and inequalities. Since polyhedra are used to represent the iteration domains of nested loop programs, procedures for operating on polyhedra can be used for loop transformations and other program restructuring transformations needed in parallelizing compilers. Thus a need for a library of polyhedral operations has recently been recognized in the parallelizing compiler community. Polyhedra are also used to define the domains of variables in systems of affine recurrence equations (SAREs). ALPHA is a language based on the SARE formalism in which all variables are declared over polyhedral domains consisting of finite unions of polyhedra. This thesis describes a library of polyhedral functions which was developed to support the ALPHA language environment, and which is general enough to satisfy the needs of researchers working on parallelizing compilers. The thesis describes the data structures used to represent domains, gives the motivations for the major design decisions made in creating the library, and presents the algorithms used for the polyhedral operations. A new algorithm for recursively generating the face lattice of a polyhedron is also presented. This library has been written and tested, and has been in use since the first quarter of 1993. It is used by research facilities in Europe and Canada that do research in parallelizing compilers and systolic array synthesis. The library is freely distributed by ftp. / Graduation date: 1994
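The constraint (half-space) view of polyhedra that such a library builds on can be sketched briefly: a polyhedron is {x : Ax <= b}, and intersection is just constraint concatenation. This illustration is ours; the actual library also maintains a dual ray/vertex representation and implements many more operations.

```python
import numpy as np

class Polyhedron:
    def __init__(self, A, b):
        self.A = np.asarray(A, dtype=float)  # one row per inequality A_i . x <= b_i
        self.b = np.asarray(b, dtype=float)

    def intersect(self, other):
        # {x : Ax <= b} ∩ {x : Cx <= d} = {x : [A;C] x <= [b;d]}
        return Polyhedron(np.vstack([self.A, other.A]),
                          np.concatenate([self.b, other.b]))

    def contains(self, x):
        return bool(np.all(self.A @ np.asarray(x, dtype=float) <= self.b + 1e-9))

# The unit square intersected with the half-plane x + y <= 1 is a triangle.
square = Polyhedron([[1, 0], [-1, 0], [0, 1], [0, -1]], [1, 0, 1, 0])
halfplane = Polyhedron([[1, 1]], [1])
tri = square.intersect(halfplane)
print(tri.contains([0.25, 0.25]), tri.contains([0.9, 0.9]))  # True False
```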
146

Effective test case selection for context-aware applications based on mutation testing and adequacy testing from a context diversity perspective

Wang, Huai, 王怀 January 2013 (has links)
Mutation testing and adequacy testing are two major technologies for assuring the quality of software. In this thesis, we present the first work that alleviates the high cost of mutation testing and the ineffectiveness of adequacy testing for context-aware applications. We also present large-scale, multi-subject case studies to evaluate how our work alleviates these problems. Mutation testing incurs a high execution cost if randomly selected test inputs kill only a small percentage of the remaining live mutants. To address this problem, we formulate the notion of context diversity to measure the context changes inherent in test inputs, and propose three context-aware strategies for the selection of test inputs. The empirical results show that the use of test inputs with higher context diversity can significantly benefit mutation testing, resulting in fewer test runs, fewer test case trials, and smaller resultant test suites that achieve a high mutation score level. The case study also shows that at the test case level, the context diversity of test inputs correlates positively and strongly with multiple types of adequacy metrics, which provides a foundation for why context diversity contributes to the effectiveness of test cases in revealing faults in context-aware applications. In adequacy testing, many strategies randomly select test cases to construct adequate test suites with respect to program-based adequacy criteria. They usually exclude redundant test cases that are unable to improve the coverage of the test requirements of an adequacy criterion achieved by the test suites under construction. These strategies have not explored the diversity in test inputs to improve the effectiveness of test suites. To address this problem, we propose three context-aware refined strategies that check whether redundant test cases can replace previously selected test cases to achieve the same coverage level but with different context diversity levels. The empirical study shows that context diversity can be significantly injected into adequate test suites, and that favoring test cases with higher context diversity can significantly improve the fault detection rates of adequate test suites for testing context-aware applications. In conclusion, this thesis makes the following significant contributions to research in testing context-aware applications: (1) It formulates context diversity, a novel metric to measure the context changes inherent in test inputs. (2) It proposes three context-aware strategies to select test cases with different levels of context diversity. Compared with the baseline strategy, the strategy CAS-H, which uses test cases with higher context diversity, can significantly reduce the cost of mutation testing of context-aware applications in terms of fewer test runs, smaller adequate test suites, and fewer test inputs used to construct test suites. (3) It defines three context-aware refined strategies to construct adequate test suites with different context diversity levels. Compared with the baseline strategy, the strategy CARS-H, which favors test cases with higher context diversity, can significantly improve the effectiveness of adequacy testing in terms of higher fault detection rates. / published_or_final_version / Computer Science / Doctoral / Doctor of Philosophy
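One plausible formalization of context diversity is sketched below: the total number of attribute changes (a Hamming-distance sum) between consecutive context readings captured during one test execution. This is our reading of the idea, and the thesis's exact definition may differ; a strategy like CAS-H would then rank candidate test cases by this score, highest first.

```python
def context_diversity(readings):
    """readings: list of equal-length tuples of context attribute values,
    e.g. (location, orientation), captured in order during one execution.
    Returns the total count of attribute changes between consecutive readings."""
    total = 0
    for prev, cur in zip(readings, readings[1:]):
        total += sum(1 for p, c in zip(prev, cur) if p != c)
    return total

# A test input whose context barely changes scores low...
print(context_diversity([("room1", "N"), ("room1", "N"), ("room1", "E")]))  # 1
# ...while one that drives many context changes scores high.
print(context_diversity([("room1", "N"), ("room2", "E"), ("room3", "S")]))  # 4
```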
147

Productivity with performance: property/behavior-based automated composition of parallel programs from self-describing components

Mahmood, Nasim, 1976- 28 August 2008 (has links)
Development of efficient and correct parallel programs is a complex task. These parallel codes have strong requirements for performance and correctness and must operate robustly and efficiently across a wide spectrum of application parameters and execution environments. Scientific and engineering programs increasingly use adaptive algorithms whose behavior can change dramatically at runtime. Performance properties are often not known until programs are tested, and performance may degrade during execution. Many errors in parallel programs arise from incorrect programming of interactions and synchronizations. Testing has proven to be inadequate; formal proofs of correctness are needed. This research is based on the systematic application of software engineering methods to the effective development of efficiently executing families of high performance parallel programs. We have developed a framework (P-COM²) for the development of parallel program families which addresses many of the problems cited above. The conceptual innovations underlying P-COM² are a software architecture specification language based on self-describing components, a timing and sequencing algorithm that enables execution of programs with both concrete and abstract components, and a formal semantics for the architecture specification language. The description of each component incorporates compiler-usable specifications for the properties and behaviors of the component, the functionality the component implements, preconditions and postconditions on its inputs and outputs, and state-machine-based sequencing control for invocations of the component. The P-COM² compiler and runtime system implement these concepts to enable: (a) evolutionary development, where a program instance is evolved from a performance model to a complete application with performance known at each step of evolution; (b) automated composition of program instances targeting specific application instances and/or execution environments from self-describing components, including generation of all parallel structuring; (c) runtime adaptation of programs on a component-by-component basis; (d) runtime validation of pre- and post-conditions and sequencing of interactions; and (e) formal proofs of correctness for interactions among components, based on model checking of the interaction and synchronization properties of the program. The concepts and their integration are defined, the implementation is described, and the capabilities of the system are illustrated through several examples.
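A self-describing component of this kind might expose an interface like the sketch below, with machine-readable properties, pre/post-conditions, and sequencing state; the field names are illustrative assumptions, not P-COM²'s actual specification language.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class ComponentSpec:
    name: str
    provides: str                                             # functionality implemented
    properties: Dict[str, str] = field(default_factory=dict)  # performance/behavior facts
    precondition: Callable[[dict], bool] = lambda inp: True
    postcondition: Callable[[dict, dict], bool] = lambda inp, out: True
    transitions: Dict[str, Dict[str, str]] = field(default_factory=dict)  # state -> event -> state

def invoke(spec: ComponentSpec, impl: Callable[[dict], dict], inp: dict) -> dict:
    # Runtime validation of pre- and post-conditions, in the spirit of capability (d).
    if not spec.precondition(inp):
        raise ValueError(f"{spec.name}: precondition violated")
    out = impl(inp)
    if not spec.postcondition(inp, out):
        raise ValueError(f"{spec.name}: postcondition violated")
    return out

# A sorter component whose declared contract is checked on every call.
sorter = ComponentSpec(
    name="sorter", provides="sort",
    precondition=lambda inp: isinstance(inp["data"], list),
    postcondition=lambda inp, out: out["data"] == sorted(inp["data"]),
)
print(invoke(sorter, lambda inp: {"data": sorted(inp["data"])}, {"data": [3, 1, 2]}))
```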
148

RNA secondary structure prediction and an expert systems methodology for RNA comparative analysis in the genomic era

Doshi, Kishore John, 1974- 28 August 2008 (has links)
The ability of certain RNAs to fold into complicated secondary and tertiary structures provides them with the ability to perform a variety of functions in the cell. Since the secondary and tertiary structures formed by certain RNAs are central to understanding how they function, one of the most active areas of research has been how to accurately and reliably predict RNA secondary structure from sequence, better known as the RNA Folding Problem. This dissertation examines two fundamental areas of research in RNA structure prediction: free energy minimization and comparative analysis. The most popular RNA secondary structure prediction program, Mfold 3.1, predicts RNA secondary structure via free energy minimization using experimentally determined energy parameters. I present an evaluation of the accuracy of Mfold 3.1 using the largest available set of phylogenetically diverse, comparatively predicted RNA secondary structures. This evaluation shows that despite significant revisions to the energy parameters, the prediction accuracy of Mfold 3.1 is not significantly improved over previous versions. In contrast, RNA comparative analysis has repeatedly demonstrated the ability to accurately and reliably predict RNA secondary structure. The downside is that RNA comparative analysis frequently requires an expert systems methodology that is predominantly manual in nature. As a result, RNA comparative analysis is not capable of scaling adequately to be useful in the genomic era. Therefore, I developed the Comparative Analysis Toolkit (CAT), which is intended to be the fundamental component of a vertically integrated software infrastructure to facilitate high-throughput RNA comparative analysis using an expert systems methodology.
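Mfold's nearest-neighbor energy model is too involved to reproduce here, but the dynamic-programming flavor of folding-as-optimization can be illustrated with the much simpler Nussinov base-pair-maximization recurrence; this is a textbook stand-in, not Mfold's algorithm.

```python
# Canonical Watson-Crick pairs plus the G-U wobble pair.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov(seq, min_loop=3):
    """Maximum number of nested base pairs, with hairpin loops of at least
    min_loop unpaired bases. dp[i][j] is the optimum for seq[i..j]."""
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):       # increasing subsequence length
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]               # case: base i left unpaired
            for k in range(i + min_loop + 1, j + 1):  # case: i pairs with k
                if (seq[i], seq[k]) in PAIRS:
                    right = dp[k + 1][j] if k + 1 <= j else 0
                    best = max(best, 1 + dp[i + 1][k - 1] + right)
            dp[i][j] = best
    return dp[0][n - 1] if n else 0

print(nussinov("GGGAAAUCC"))  # a small hairpin: 3 base pairs
```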
149

Computerized optimization of highway geometric alignment design

鍾子維, Chung, Chi-wai. January 1982 (has links)
published_or_final_version / Civil Engineering / Master / Master of Philosophy
150

BEHAVIOR OF UNDERGROUND LINED CIRCULAR SHAFTS

Almadhoun, Ibrahim Hasan January 1981 (has links)
The results of a study to model a circular mine shaft constructed in a time-dependent medium are presented. The construction sequence is considered, as well as the time-dependent properties of the media around the shaft. The loads acting on the shaft liner are due to excavation of the shaft material and to the loads relieved from the media onto the liner. The results show the importance of considering the time-dependent behavior of the media. The analysis was carried out using the Finite Element Method. Axisymmetric triangular and quadrilateral elements were used to model the medium, and axisymmetric shell elements were used to model the liner. The construction sequence was modeled by analyzing the system under small load increments, where each load increment represents a construction step. The time behavior was modeled using the initial strain method, which assigns a different strain value to each element in the medium. The strains are converted to stresses and then to forces, and an incremental process is carried out to cover the desired time range. The results for a 400-foot shaft are shown, and changes in liner stresses were monitored over time. Different rock materials were modeled by using different constants in the creep law. Some materials showed significant changes in the results, and others did not. The liner horizontal displacement and the horizontal and vertical stresses increased when material constants for rock salt and anhydrite were used. Stresses in the elements adjacent to the liner decreased as time passed, and some even went into a tensile stress state. A comparison between two solutions, one representing a multi-step construction sequence and the other an instantaneous construction of the lined shaft, showed that liner stresses are much higher when the construction sequence is not modeled. This is because, when the excavation is modeled, the forces representing the construction sequence are applied to the medium; in the other case, the forces are applied directly to the liner.
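The incremental initial-strain process lends itself to a one-dimensional sketch: at each time step, a creep law converts the current stress into a creep-strain increment, which is applied as an initial strain and relaxes the stress. The Norton power law and every constant below are illustrative assumptions, not values from the thesis.

```python
E = 2.0e6            # Young's modulus, psi (assumed)
A, n = 1.0e-13, 3.0  # Norton creep constants (assumed): d(eps)/dt = A * sigma**n
sigma = 1000.0       # stress in one element after the excavation step, psi (assumed)
dt = 1.0             # time increment, hours

for step in range(10):
    d_eps = A * sigma**n * dt  # creep-strain increment from the creep law
    sigma -= E * d_eps         # initial strain converted back to a stress change
    print(f"t={step + 1:2d} h  sigma = {sigma:8.2f} psi")
```

Run over the desired time range, the loop shows the stress in the element relaxing step by step, mirroring the decrease the study reports in elements adjacent to the liner.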
