671

Data Dependence in Programs Involving Indexed Variables

Nikolik, Borislav 06 August 1993 (has links)
Symbolic execution is a powerful technique used in activities such as program testing and formal verification of programs. However, symbolic execution does not handle indexed variables adequately. Integrating indexed variables such as arrays into symbolic execution would increase the generality of the technique. We present an original substitution technique that produces array-term-free constraints as a counterargument to the commonly accepted belief that symbolic execution cannot handle arrays. The substitution technique deals with constraints involving array terms with a single aggregate name, array terms with multiple aggregate names, and nested array terms. Our approach to solving constraints involving array terms is based on the analysis of the relationship between the array subscripts. Dataflow dependence analysis of programs involving indexed variables suffers from problems of undecidability. We propose a separation technique in which the array subscript constraints are separated from the loop path constraints. The separation technique suggests that the problem of establishing data dependencies is not as hard as the general loop problem. In this respect, we present a new general heuristic program analysis technique used to preserve the properties of the relations between program variables.
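To make the substitution idea concrete, here is a minimal, hypothetical sketch (not Nikolik's actual algorithm): every array read is replaced by a fresh scalar, and each pair of subscripts into the same array is case-split on whether they alias, yielding array-term-free constraints. All names in the code are illustrative.

```python
# Hypothetical sketch of array-term elimination by subscript case-splitting.
from itertools import combinations

def eliminate_array_terms(reads, constraint):
    """reads: (array_name, subscript) pairs occurring in `constraint` (a
    string). Returns array-term-free case formulas, one per aliasing
    decision between subscripts of the same array."""
    fresh = {r: f"v{k}" for k, r in enumerate(reads)}  # one scalar per read
    body = constraint
    for (name, sub), v in fresh.items():
        body = body.replace(f"{name}[{sub}]", v)       # drop the array terms
    cases = []
    for r1, r2 in combinations(reads, 2):
        if r1[0] != r2[0]:              # reads of different arrays never alias
            continue
        v1, v2 = fresh[r1], fresh[r2]
        cases.append(f"({r1[1]} == {r2[1]} and {v1} == {v2} and {body})")
        cases.append(f"({r1[1]} != {r2[1]} and {body})")
    return cases

# a[i] < a[j] splits into two array-term-free constraints over v0 and v1:
for case in eliminate_array_terms([("a", "i"), ("a", "j")], "a[i] < a[j]"):
    print(case)
```

Each resulting case is a pure integer constraint, which is what makes the approach compatible with ordinary constraint solving.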
672

Studies of reinforced concrete regions near discontinuities

Cook, William Digby January 1987 (has links)
No description available.
673

Fuzzy multi-mode resource-constrained project scheduling

Pan, Hongqi, 1961- January 2003 (has links)
Abstract not available
674

Structured graphs: a visual formalism for scalable graph based tools and its application to software structured analysis

January 1996 (has links)
Very large graphs are difficult for a person to browse and edit on a computer screen. This thesis introduces a visual formalism, structured graphs, which supports the scalable browsing and editing of very large graphs. This approach is relevant to an application when the application incorporates a large graph composed of named nodes and links, with abstraction hierarchies defined on those nodes and links. A typical browsing operation is the selection of an arbitrary group of nodes and the display of the network of nodes and links for those nodes. Typical editing operations include adding a new link between two nodes, adding a new node in the hierarchy, and moving sub-graphs to a new position in the node hierarchy. These operations are scalable when the number of user steps involved remains constant regardless of how large the graph is. This thesis shows that with structured graphs, these operations typically take one user step. We demonstrate the utility of the structured graph formalism in an application setting. Computer aided software engineering tools, and in particular structured analysis tools, are the chosen application area for this thesis, as they are graph based, and existing tools, though adequate for medium sized systems, lack scalability. In this thesis, examples of an improved design for a structured analysis tool, based on structured graphs, are given. These improvements include scalable browsing and editing operations to support an individual software analyst, and component composition operations to support the construction of large models by a group of software analysts. Finally, we include proofs of key properties and descriptions of two text based implementations.
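A rough sketch of how such a formalism can make editing constant-step, assuming (this is not the thesis's implementation) a structured graph represented as a node hierarchy plus a flat link set: moving an entire sub-graph is then a single operation on its root, regardless of the graph's size.

```python
# Illustrative sketch: node hierarchy + flat link set, with single-step edits.

class Node:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []
        if parent:
            parent.children.append(self)

class StructuredGraph:
    def __init__(self):
        self.links = set()                     # (source, target) name pairs

    def add_link(self, a, b):                  # one user step
        self.links.add((a.name, b.name))

    def move_subtree(self, node, new_parent):  # one user step: the whole
        node.parent.children.remove(node)      # sub-graph moves with its root
        node.parent = new_parent
        new_parent.children.append(node)

root = Node("system")
ui, db = Node("ui", root), Node("db", root)
widget = Node("widget", ui)
g = StructuredGraph()
g.add_link(widget, db)
g.move_subtree(widget, db)   # still one step, however large the subtree is
```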
675

Rapid development of problem-solvers with HeurEAKA! - a heuristic evolutionary algorithm and incremental knowledge acquisition approach

Bekmann, Joachim Peter, Computer Science & Engineering, Faculty of Engineering, UNSW January 2006 (has links)
A new approach for the development of problem-solvers for combinatorial problems is proposed in this thesis. The approach combines incremental knowledge acquisition and probabilistic search algorithms, such as evolutionary algorithms, to allow a human to rapidly develop problem-solvers in new domains in a framework called HeurEAKA. The approach addresses a known problem, that is, adapting evolutionary algorithms to the search domain by the introduction of domain knowledge. The development of specialised problem-solvers has historically been labour intensive. Implementing a problem-solver from scratch is very time consuming. Another approach is to adapt a general purpose search strategy to the problem domain. This is motivated by the observation that in order to scale an algorithm to solve complex problems, domain knowledge is needed. At present there is no systematic approach allowing one to efficiently engineer a special-purpose search strategy for a given search problem. This means that, for example, adapting evolutionary algorithms (which are general purpose algorithms) is often very difficult and has led some people to refer to their use as a "black art". In the HeurEAKA approach, domain knowledge is introduced by incrementally building a knowledge base that controls parts of the evolutionary algorithm, for example the fitness function and the mutation operators in a genetic algorithm. An evolutionary search algorithm is monitored by a human who makes recommendations on search strategy based on individual solution candidates. It is assumed that the human has a reasonable intuition of the search problem. The human adds rules to a knowledge base describing how candidate solutions can be improved, or why they are desirable or undesirable in the search for a good solution. The incremental knowledge acquisition approach is inspired by the idea of (Nested) Ripple Down Rules. In this approach a human provides exception rules to rules already existing in the knowledge base, using concrete examples of inappropriate performance of the existing knowledge base. The Nested Ripple Down Rules (NRDR) approach allows humans to compose rules using concepts that are natural and intuitive to them. In HeurEAKA, NRDR are significantly adapted to form part of a probabilistic search algorithm. The probabilistic search algorithms used in the presented system are a genetic algorithm and a hierarchical Bayesian optimization algorithm. The success of the HeurEAKA approach is demonstrated in experiments undertaken on industrially relevant domains. Problem-solvers were developed for detailed channel and switchbox routing in VLSI design and traffic light optimisation for urban road networks. The problem-solvers were developed in a short amount of time, in domains where a large amount of effort has gone into developing existing algorithms. Experiments show that chosen benchmark problems are solved as well as or better than by existing approaches. Particularly in the traffic light optimisation domain, excellent results are achieved.
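The interaction the abstract describes — a human-built rule base steering a genetic algorithm's operators — can be sketched loosely as follows. This is an illustration of the general idea, not HeurEAKA itself; the rules, fitness function, and encoding are placeholder assumptions.

```python
# Loose sketch: a knowledge base of human-supplied rules vetoes or repairs
# candidate mutations inside an otherwise standard genetic algorithm.
import random

def fitness(candidate):                 # assumed toy fitness: minimise |x|
    return -sum(abs(x) for x in candidate)

knowledge_base = [
    # Each rule: (condition on candidate, repair action). A human adds an
    # exception rule whenever the existing rules misjudge a concrete case.
    (lambda c: max(c) > 10, lambda c: [min(x, 10) for x in c]),
]

def mutate(candidate):
    child = list(candidate)
    i = random.randrange(len(child))
    child[i] += random.choice([-3, -1, 1, 3])
    for condition, repair in knowledge_base:   # the KB steers the operator
        if condition(child):
            child = repair(child)
    return child

population = [[random.randint(-5, 5) for _ in range(6)] for _ in range(20)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    population = population[:10] + [mutate(random.choice(population[:10]))
                                    for _ in range(10)]
print(max(population, key=fitness))
```

The point of the design is that the search loop stays generic; only the small, incrementally grown rule base carries the domain knowledge.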
676

Searching and ranking structured documents

Trotman, Andrew, n/a January 2007 (has links)
It is common to see documents with explicit structure marked up in languages such as XML. Queries, on the other hand, typically have no structure. There is a clear mismatch: although documents contain structure, it is typically not used in information retrieval. An efficient index structure for document-centric searching is proposed and its efficiency is discussed. It is shown to be at worst linear with respect to the number of occurrences of a given search term. The algorithm is then extended to accommodate element-centric information retrieval. Ranking algorithms for structured documents are examined. Genetic Algorithms are used to learn different weights for each structure present in a document. Applying these weights as part of a ranking function is shown to yield significant precision improvements for some functions. Genetic Programming is then used to learn an entire ranking function. This function is shown to be portable between document collections. A query language for structured information retrieval is proposed. Use of this language in the 2004 INEX workshop resulted in a large decrease in query errors. Structured information retrieval is now a viable alternative to its unstructured counterpart. A successful query language, efficient indexing structures, and improved ranking functions are all presented.
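The structure-weighted ranking the abstract mentions can be illustrated with a toy scorer: term occurrences are counted per XML element type and combined with per-structure weights. Here the weights are hand-set assumptions; in the thesis they are learned with Genetic Algorithms.

```python
# Toy structure-weighted scorer (not Trotman's actual ranking function).
import xml.etree.ElementTree as ET

weights = {"title": 3.0, "abstract": 1.5, "body": 1.0}  # assumed structures

def score(doc_xml, term):
    root = ET.fromstring(doc_xml)
    total = 0.0
    for elem in root.iter():
        text = (elem.text or "").lower()
        # Occurrences in heavily weighted elements contribute more.
        total += weights.get(elem.tag, 0.5) * text.count(term.lower())
    return total

doc = """<doc>
  <title>Searching structured documents</title>
  <abstract>Indexing and ranking structured documents.</abstract>
  <body>Documents with structure can be ranked by element.</body>
</doc>"""
print(score(doc, "structured"))  # title hits count 3x as much as body hits
```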
677

Adversarial planning by strategy switching in a real-time strategy game

King, Brian D. (Brian David) 12 June 2012 (has links)
We consider the problem of strategic adversarial planning in a Real-Time Strategy (RTS) game. Strategic adversarial planning is the generation of a network of high-level tasks to satisfy goals while anticipating an adversary's actions. In this thesis we describe an abstract state and action space used for planning in an RTS game, an algorithm for generating strategic plans, and a modular architecture for controllers that generate and execute plans. We describe in detail planners that evaluate plans by simulation and select a plan by game-theoretic criteria. We describe the details of a low-level module of the hierarchy, the combat module. We examine a theoretical performance guarantee for policy switching in Markov Games, and show that policy switching agents can underperform fixed strategy agents. Finally, we present results for strategy switching planners playing against single strategy planners and the game engine's scripted player. The results show that our strategy switching planners outperform single strategy planners in simulation and outperform the game engine's scripted AI. / Graduation date: 2013
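A minimal sketch of strategy switching by simulation, under assumed payoffs (this is not the thesis's planner): at each decision point, every candidate strategy is evaluated against every opponent strategy, and the maximin choice is selected.

```python
# Illustrative strategy switching via simulation and a maximin criterion.

def simulate(ours, theirs, state):
    # Stand-in for the game simulation: placeholder payoffs for playing
    # strategy `ours` against `theirs` from `state` (values are assumed).
    payoffs = {("rush", "rush"): 0, ("rush", "boom"): 5,
               ("boom", "rush"): -4, ("boom", "boom"): 2}
    return payoffs[(ours, theirs)]

def switch_strategy(strategies, state):
    # Game-theoretic selection: pick the strategy with the best worst case.
    return max(strategies,
               key=lambda s: min(simulate(s, t, state) for t in strategies))

print(switch_strategy(["rush", "boom"], state=None))  # -> "rush" here
```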
678

Web-based distributed applications for cytosensor

Liew, Ji Seok 17 March 2003 (has links)
To protect the environment and save human lives, the detection of various hazardous toxins of biological or chemical origin has been a major challenge for researchers at Oregon State University. Living fish cells can indicate the presence of a wide range of toxins by reactions such as changes in color and shape. A research team in the Electrical and Computer Engineering Department is developing a hybrid detection device (Cytosensor) that combines biological reaction and digital technology. The functions of Cytosensor can be divided into three parts: real-time image acquisition, data processing, and statistical data analysis. User-friendly Web-Based Distributed Applications (WBDA) for Cytosensor offer various utilities. WBDA allow users to control and observe the local Cytosensor, search and retrieve data acquired by the sensor network, and process the acquired images remotely using only a web browser. Additionally, these applications minimize the user's exposure to dangerous chemicals or biological products. This thesis describes the design of a remote controller, system observer, remote processor, and search engine using JAVA applets, XML, Perl, MATLAB, and Peer-to-Peer models. Furthermore, the implementations of an image segmentation technique in MATLAB and the Machine Vision Algorithm in JAVA for independent web-based processing are investigated. / Graduation date: 2003
679

A study of hardware/software multithreading

Carlson, Ryan L. 04 June 1998 (has links)
As the design of computers advances, two important trends have surfaced: the exploitation of parallelism and the design against memory latency. Into these two trends has come the Multithreaded Virtual Processor (MVP). Based on a standard superscalar core, the MVP is able to exploit both Instruction Level Parallelism (ILP) and, utilizing the concepts of multithreading, Thread Level Parallelism (TLP) in program code. By combining both hardware and software multithreading techniques into a new hybrid model, the MVP is able to use fast hardware context switching techniques along with both hardware and software scheduling. The new hybrid creates a processor capable of exploiting long memory latency operations to increase parallelism, while introducing minimal software overhead and few hardware design changes. This thesis explores the MVP model and simulator and provides results that illustrate MVP's effectiveness and support its inclusion in future processor designs. Additionally, the thesis shows that MVP's effectiveness is governed by four main considerations: (1) the data set size relative to the cache size, (2) the number of hardware contexts/threads supported, (3) the amount of locality within the data sets, and (4) the amount of exploitable parallelism within the algorithms. / Graduation date: 1999
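A toy model of the latency-hiding idea, with an assumed 20-cycle miss penalty (this is not the MVP simulator): when a thread issues a long-latency memory operation, a fast context switch lets another thread fill the stall cycles.

```python
# Toy illustration of latency hiding via context switching on memory misses.

MISS_LATENCY = 20                       # assumed cycles per memory miss

def single_thread_cycles(stream):
    """Run one instruction stream with no switching: every miss stalls."""
    return sum(MISS_LATENCY if op == "M" else 1 for op in stream)

def multithreaded_cycles(threads):
    """Switch to another ready thread whenever the current one misses."""
    pcs = [0] * len(threads)            # next instruction per thread
    ready_at = [0] * len(threads)       # cycle when a stalled thread wakes
    cycle = 0
    while any(pc < len(t) for pc, t in zip(pcs, threads)):
        runnable = [i for i in range(len(threads))
                    if pcs[i] < len(threads[i]) and ready_at[i] <= cycle]
        if not runnable:                # every thread is waiting on memory
            cycle += 1
            continue
        i = runnable[0]                 # fast hardware context switch
        if threads[i][pcs[i]] == "M":
            ready_at[i] = cycle + MISS_LATENCY
        pcs[i] += 1
        cycle += 1
    return cycle

streams = [list("CMCCMC") for _ in range(4)]   # C = compute, M = miss
print(sum(single_thread_cycles(s) for s in streams))  # serial: 176 cycles
print(multithreaded_cycles(streams))                  # misses overlap
```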
680

Predicting activity type from accelerometer data

Zheng, Yonglei 17 August 2012 (has links)
The study of physical activity is important in improving people's health, as it can help people understand the relationship between physical activity and health. Accelerometers, due to their small size, low cost, convenience, and ability to provide objective information about the frequency, intensity, and duration of physical activity, have become the method of choice in measuring physical activity. Machine learning algorithms based on featurized representations of accelerometer data have become the most widely used approaches in physical activity prediction. To improve classification accuracy, this thesis first explored the impact of the choice of data (raw vs. processed) as well as the choice of features on the performance of various classifiers. The empirical results showed that the machine learning algorithms with strong regularization capabilities always performed better when provided with the most comprehensive feature set extracted from the raw accelerometer signal. Based on the hypothesis that for some time series the most discriminative information can be found in subwindows of various sizes, the Subwindow Ensemble Model (SWEM) was proposed. The SWEM was designed for accelerometer-based physical activity data, and classifies a time series based on features extracted from subwindows. It was evaluated on six time series datasets. Three of them were accelerometer-based physical activity data, which the SWEM was designed for, and the rest were different types of time series data chosen from other domains. The empirical results indicated a strong advantage of the SWEM over baseline models on the accelerometer-based physical activity data. Further analysis confirmed the hypothesis that the most discriminative features can be extracted from subwindows of different sizes, and that they are effectively used by the SWEM. / Graduation date: 2013
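A loose sketch of the subwindow idea, not the thesis's SWEM: simple features are extracted from subwindows of several sizes, one base classifier is trained per window size, and their votes are combined. The window sizes, features, and toy data are assumptions.

```python
# Subwindow-feature ensemble sketch on synthetic accelerometer-like signals.
import numpy as np
from sklearn.linear_model import LogisticRegression

def subwindow_features(signal, size):
    wins = [signal[i:i + size] for i in range(0, len(signal) - size + 1, size)]
    feats = [(w.mean(), w.std(), w.max() - w.min()) for w in wins]
    return np.array(feats).mean(axis=0)        # pool subwindow features

rng = np.random.default_rng(0)
X_raw = [rng.normal(scale=s, size=120) for s in rng.uniform(0.5, 3.0, 200)]
y = np.array([sig.std() > 1.5 for sig in X_raw], dtype=int)  # toy labels

models = {}
for size in (10, 30, 60):                      # one model per window size
    X = np.array([subwindow_features(sig, size) for sig in X_raw])
    models[size] = LogisticRegression().fit(X, y)

def predict(signal):                           # majority vote over sizes
    votes = [m.predict(subwindow_features(signal, s).reshape(1, -1))[0]
             for s, m in models.items()]
    return int(sum(votes) > len(votes) / 2)

print(predict(rng.normal(scale=2.5, size=120)))  # likely 1 (high variance)
```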
