231

Natural feature extraction as a front end for simultaneous localization and mapping.

Kiang, Kai-Ming, Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW January 2006 (has links)
This thesis is concerned with algorithms for finding natural features that are then used for simultaneous localisation and mapping, commonly known as SLAM in navigation theory. The task involves capturing raw sensory inputs, extracting features from these inputs, and using the features for mapping and localising during navigation. The ability to extract natural features allows automatons such as robots to be sent into environments that no human being has previously explored, working in a way similar to how human beings understand and remember where they have been. In extracting natural features from images, the way features are represented and matched is a critical issue, since the computation involved is wasted if the wrong method is chosen. While there are many techniques capable of matching pre-defined objects correctly, few of them can be used for real-time navigation in an unexplored environment while intelligently deciding what counts as a relevant feature in the images. Normally, feature analysis that extracts relevant features from an image is a two-step process: first interest points are selected, and then these points are represented based on the properties of their local regions. This thesis presents a novel technique for extracting a compact set of natural features that is robust enough for navigation purposes. The technique involves a three-step approach. The first step selects interest points at extrema of a difference of Gaussians (DoG). The second step applies Textural Feature Analysis (TFA) to the local regions around the interest points. The third step selects the distinctive features using Distinctness Analysis (DA), based mainly on the probability of occurrence of the extracted features. The additional DA step yields a significant improvement in processing speed over previous methods. Moreover, TFA/DA has been applied in a SLAM configuration observing an underwater environment, where the texture is rich in natural features. The results demonstrate an improvement in loop-closure ability compared to traditional SLAM methods, suggesting that real-time navigation in unexplored environments using natural features is now a more plausible option.
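To make the first step concrete, the sketch below illustrates interest-point selection at extrema of a difference-of-Gaussians stack, in the spirit of the approach described above. The sigma values, threshold, and neighbourhood size are illustrative assumptions rather than parameters from the thesis, and the TFA and DA steps are not shown.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def dog_interest_points(image, sigmas=(1.0, 1.6, 2.6, 4.1), threshold=0.02):
    """Return (row, col, sigma) tuples at local extrema of the DoG stack."""
    image = np.asarray(image, dtype=np.float64)
    # Successively blurred copies of the image, one per scale (assumed sigmas).
    blurred = [gaussian_filter(image, s) for s in sigmas]
    # Difference-of-Gaussians layers between adjacent scales.
    dog = np.stack([blurred[i + 1] - blurred[i] for i in range(len(blurred) - 1)])

    points = []
    for k in range(1, dog.shape[0] - 1):
        layer = dog[k]
        # Maximum and minimum over each pixel's 3x3x3 neighbourhood (space and scale).
        neighbourhood = dog[k - 1:k + 2]
        local_max = maximum_filter(neighbourhood, size=3)[1]
        local_min = minimum_filter(neighbourhood, size=3)[1]
        is_extremum = ((layer == local_max) | (layer == local_min)) & (np.abs(layer) > threshold)
        for r, c in zip(*np.nonzero(is_extremum)):
            points.append((r, c, sigmas[k]))
    return points

# Example on a synthetic image:
# points = dog_interest_points(np.random.rand(64, 64))
```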
232

Structured development of an asynchronous forth processor using trace theory.

Newlands, D.A., mikewood@deakin.edu.au January 1989 (has links)
This thesis examines the use of a structured design methodology in the design of asynchronous circuits, so that high-level constructs can be specified purely in terms of signal exchanges and without the intrusion of lower-level concepts. Trace theory is used to specify a multi-processor Forth machine at a high level; part of the design is then further elaborated using trace theory operations to ensure that the behaviours of the lower-level constructs combine to give the specified high-level behaviour without locking or other hazards. A novel form of threaded language that takes advantage of the machine architecture is developed. At suitable points the design is tested by simulation. The stack element that is designed is reduced to an electric circuit, which is itself tested by simulation to verify the design.
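As a rough illustration of the trace-theoretic style of specification mentioned above, the toy sketch below describes each component purely by the allowed orderings of its signal exchanges and combines components by weaving (a trace belongs to the weave when every component accepts its own projection). The request/acknowledge alphabet and the example traces are invented for illustration and are not taken from the thesis.

```python
def project(trace, alphabet):
    """Restrict a trace to the symbols belonging to one component's alphabet."""
    return tuple(symbol for symbol in trace if symbol in alphabet)

def in_weave(trace, components):
    """A trace is in the weave iff every component accepts its own projection."""
    return all(project(trace, alphabet) in traces for alphabet, traces in components)

# Two components of a handshake, each given as (alphabet, prefix-closed trace set).
sender = ({"req", "ack"},
          {(), ("req",), ("req", "ack")})
receiver = ({"req", "ack", "done"},
            {(), ("req",), ("req", "done"), ("req", "done", "ack")})

print(in_weave(("req", "done", "ack"), [sender, receiver]))  # True: a legal interleaving
print(in_weave(("ack", "req"), [sender, receiver]))          # False: ack before req
```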
233

Fuzzy concepts and formal methods.

Matthews, Chris, mikewood@deakin.edu.au January 2001 (has links)
It has been recognised that formal methods are useful as a modelling tool in requirements engineering. Specification languages such as Z permit the precise and unambiguous modelling of system properties and behaviour. However, some system problems, particularly those drawn from the information systems problem domain, may be difficult to model in crisp or precise terms. It may also be desirable that formal modelling should commence as early as possible, even when our understanding of parts of the problem domain is only approximate. This thesis suggests fuzzy set theory as a possible representation scheme for this imprecision or approximation. A fuzzy logic toolkit that defines the operators, measures and modifiers necessary for the manipulation of fuzzy sets and relations is developed. The toolkit contains a detailed set of laws that demonstrate the properties of the definitions when applied to partial set membership. It also provides a set of laws that establishes an isomorphism between the toolkit notation and that of conventional Z when applied to Boolean sets and relations. The thesis also illustrates how the fuzzy logic toolkit can be applied in the problem domains of interest. Several examples are presented and discussed, including the representation of imprecise concepts as fuzzy sets and relations, system requirements as a series of linguistically quantified propositions, the modelling of conflict and agreement in terms of fuzzy sets, and the partial specification of a fuzzy expert system. The thesis concludes with a consideration of potential areas for future research arising from the work presented here.
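The sketch below gives a flavour of the operators and modifiers such a toolkit provides, written here in Python rather than Z; the operator names and example sets are illustrative assumptions, not the thesis notation.

```python
# Membership grades lie in [0, 1]; classical (Boolean) sets use only 0 and 1.
def f_and(a, b):      # intersection of membership grades (min)
    return min(a, b)

def f_or(a, b):       # union of membership grades (max)
    return max(a, b)

def f_not(a):         # complement
    return 1.0 - a

def very(a):          # linguistic modifier: concentration
    return a ** 2

def somewhat(a):      # linguistic modifier: dilation
    return a ** 0.5

# A fuzzy set represented as a mapping from elements to membership grades.
tall = {"ann": 0.9, "bob": 0.4, "cal": 0.1}
fast = {"ann": 0.3, "bob": 0.8, "cal": 0.6}

tall_and_fast = {x: f_and(tall[x], fast[x]) for x in tall}
print(tall_and_fast)                 # {'ann': 0.3, 'bob': 0.4, 'cal': 0.1}
print(round(very(tall["ann"]), 2))   # 0.81: "very tall" sharpens the grade

# On grades restricted to 0 and 1 these operators coincide with classical set
# operations, mirroring the isomorphism with conventional Z noted above.
```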
234

Distributed parallel computation using standard ML

Chattopadhyay, Vaishali, January 2007 (has links) (PDF)
Thesis (M.S. in computer science)--Washington State University, December 2007. / Includes bibliographical references (p. 97-102).
235

Lego TC logo as a learning environment in problem-solving in advanced supplementary level design & technology with pupils aged 16-19

Lo, Ting-kau. January 1992 (has links)
Thesis (M.Ed.)--University of Hong Kong, 1992. / Includes bibliographical references (leaf 154-160). Also available in print.
236

Electra: integrating constraints, condition-based dispatching, and features exclusion into the multiparadigm language Leda

Zamel, Nabil M. 06 December 1994 (has links)
Multiparadigm languages are languages that are designed to support more than one style of programming. Leda is a strongly-typed multiparadigm programming language that supports imperative, functional, object-oriented, and logic programming. The constraint programming paradigm is a declarative style of programming in which the programmer states relationships among entities and expects the system to maintain the validity of these relationships throughout program execution. The system accomplishes this either by invoking user-defined fixes that impose rigid rules governing the evolution of the entities, or by finding suitable values to assign to the constrained entities without violating any active constraint. Constraints, due to their declarative semantics, are suitable for the direct mapping of the characteristics of a number of mechanisms, including consistency checks, constraint-directed search, and constraint-enforced reevaluation, among others. This makes constraint languages the most appropriate choice for implementing a large number of applications such as scheduling, planning, resource allocation, simulation, and graphical user interfaces. The semantics of constraints cannot be easily emulated by other constructs in the paradigms offered by the language Leda. However, the constraint paradigm does not provide any general control constructs, and this lack impedes its ability to express a large number of problems naturally. This dissertation presents the language Electra, which integrates the constraint paradigm into the language Leda by creating a unified construct that can express the conventional semantics of constraints, with some extensions. Owing to the flexibility of this construct, the programmer may either state how a constraint is to be satisfied or delegate that task to the constraint-satisfier. The concept of providing the programmer with the ability to express system-maintained relations, which is the basic characteristic of constraints, motivated enhancing other paradigms with similar abilities. The functional paradigm is extended with condition-based dispatching, a mechanism similar to argument pattern-matching. The object-oriented paradigm is extended by allowing feature exclusion, a form of inheritance exception. This dissertation claims that the integration provided by the language Electra will enable Leda programmers to reap the benefits of the constraint paradigm while overcoming its limitations. / Graduation date: 1995
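The following minimal sketch (in Python, not Leda or Electra syntax) illustrates the idea of a system-maintained relation: once a constraint is stated, the system re-satisfies it whenever a constrained value changes. The Cell and Constraint names, and the one-way satisfaction strategy, are invented for illustration and do not reproduce the dissertation's construct.

```python
class Cell:
    """A value that notifies its constraints whenever it changes."""
    def __init__(self, value):
        self._value = value
        self._constraints = []   # constraints to re-satisfy on change

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new_value):
        self._value = new_value
        for constraint in self._constraints:
            constraint.satisfy()

class Constraint:
    """Keeps `target` equal to fn(*sources) whenever a source changes."""
    def __init__(self, target, fn, *sources):
        self.target, self.fn, self.sources = target, fn, sources
        for source in sources:
            source._constraints.append(self)
        self.satisfy()

    def satisfy(self):
        # Write directly to avoid re-triggering the setter's notification loop.
        self.target._value = self.fn(*(s.value for s in self.sources))

# fahrenheit is declared once as a relation over celsius and maintained thereafter.
celsius = Cell(20.0)
fahrenheit = Cell(0.0)
Constraint(fahrenheit, lambda c: c * 9 / 5 + 32, celsius)
celsius.value = 100.0
print(fahrenheit.value)   # 212.0
```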
237

Java bytecode compilation for high-performance, platform-independent logical inference

Arte, Ashish. Sturgill, David Brian. January 2005 (has links)
Thesis (M.S.)--Baylor University, 2005. / Includes bibliographical references (p. 146-148).
238

QUICKTALK: A Smalltalk-80 dialect for defining primitive methods

Ballard, Mark B. 04 1900 (has links) (PDF)
M.S. / Computer Science & Engineering / QUICKTALK is a dialect of Smalltalk-80 that can be compiled directly into native machine code, instead of virtual machine bytecodes. The dialect includes "hints" on the class of method arguments, instance variables, and class variables. The dialect is designed to describe primitive Smalltalk methods. Improved performance over bytecodes is achieved by eliminating the interpreter loop on bytecode execution, by reducing the number of message send/returns via binding some target methods at compilation, and by eliminating redundant class checking. Changes to the Smalltalk-80 system and compiler to support the dialect are identified and performance measurements are given.
239

Semantic Inspection of Software Artifacts: From Theory to Practice

Heyer, Tim January 2001 (has links)
Providing means for the development of correct software still remains a central challenge of computer science. In this thesis we present a novel approach to tool-based inspection focusing on the functional correctness of software artifacts. The approach is based on conventional inspection in the style of Fagan, but extended with elements of formal verification in the style of Hoare. In Hoare’s approach a program is annotated with assertions. Assertions express conditions on program variables and are used to specify the intended behavior of the program. Hoare introduced a logic for formally proving the correctness of a program with respect to the assertions. Our main contribution concerns the predicates used to express assertions. In contrast to Hoare, we allow the axiomatization of those predicates to be incomplete, even to the point where a formal proof of the correctness of the program is no longer possible. In our approach predicates may be defined in a completely informal manner (e.g. using natural language). Our hypothesis is that relaxing the requirements on formal rigor makes it easier for the average developer to express and reason about software artifacts, while still allowing the automatic generation of relevant, focused questions that help in finding defects. The questions are addressed in the inspection, thus filling the somewhat loosely defined steps of conventional inspection with very concrete content. As a side effect, our approach facilitates a novel systematic, asynchronous inspection process based on collecting and assessing the answers to the questions. We have adapted the method to the inspection of code as well as the inspection of early designs. More precisely, we developed prototype tools for the inspection of programs written in a subset of Java and of early designs expressed in a subset of UML. We claim that the method can be adapted to other notations and (intermediate) steps of the software process. Technically, our approach works and has been applied successfully to small but non-trivial code (up to 1000 lines) and designs (up to five objects and ten messages). An in-depth industrial evaluation requires an investment of substantial resources over many years and has not been conducted. Despite this lack of extensive assessment, our experience shows that our approach indeed makes it easier to express and reason about assertions at a high level of abstraction.
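The sketch below illustrates the annotation style discussed above: a routine carries Hoare-style pre- and postconditions, and a predicate may be left informal when no formal axiomatization is available, in which case it surfaces as an inspection question rather than a machine-checked assertion. The decorator, question format, and example function are illustrative assumptions, not the thesis tooling.

```python
def annotated(pre, post):
    """Attach assertions to a function; informal predicates become inspection questions."""
    def wrap(fn):
        def checked(*args, **kwargs):
            for predicate in pre:
                if callable(predicate):
                    assert predicate(*args, **kwargs), f"precondition failed: {predicate.__doc__}"
                else:
                    print(f"Inspection question: does the caller ensure that {predicate}?")
            result = fn(*args, **kwargs)
            for predicate in post:
                if callable(predicate):
                    assert predicate(result), f"postcondition failed: {predicate.__doc__}"
                else:
                    print(f"Inspection question: does the result satisfy '{predicate}'?")
            return result
        return checked
    return wrap

def non_empty(items):
    """the input list is not empty"""
    return len(items) > 0

@annotated(pre=[non_empty, "every item is a valid account record"],
           post=["the returned total is expressed in the ledger currency"])
def total_balance(items):
    return sum(item["balance"] for item in items)

print(total_balance([{"balance": 10.0}, {"balance": 2.5}]))   # prints the questions, then 12.5
```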
240

Understanding Java applications using frequency of method calls

Ghadiri, Amirali. January 2007 (has links)
Thesis (M.Sc.)--York University, 2007. Graduate Programme in Computer Science. / Typescript. Includes bibliographical references (leaves 122-124). Also available on the Internet. Mode of access: via web browser by entering the following URL: http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:MR31995
