
A metametadata framework to support semantic searching of pedagogic data

Ismail, Amirah January 2009
This thesis focuses on a novel method for the semantic searching and retrieval of information about learning materials. Metametadata encapsulate metadata instances by using the properties and attributes provided by ontologies, rather than describing learning objects directly. A novel metametadata taxonomy has been developed which provides the basis for a semantic search engine to extract, match and map queries to retrieve relevant results. The use of ontological views is a foundation for viewing the pedagogical content of metadata extracted from learning objects, using the pedagogical attributes from the metametadata taxonomy. Combining the ontological approach with metametadata (based on the metametadata taxonomy), we present a novel semantic searching mechanism. These three strands (the taxonomy, the ontological views, and the search algorithm) are incorporated into a novel architecture, OMESCOD, which has been implemented. Results from OMESCOD are used to evaluate the effectiveness of the metametadata approach, and the recall and precision of its search algorithm are compared with those of search algorithms based on metadata and on ontologies alone. The results support the research hypothesis that metametadata can effectively represent the semantic relationships between learning object metadata.
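To make the retrieval idea concrete, here is a minimal sketch, not the OMESCOD implementation: the taxonomy fragment, record attributes and matching rule are invented for illustration. A query term is expanded through broader/narrower links in a pedagogical taxonomy before matching against metametadata records:

```python
TAXONOMY = {  # hypothetical fragment: attribute -> broader attribute
    "worked-example": "instructional-strategy",
    "drill-and-practice": "instructional-strategy",
    "formative-assessment": "assessment",
}

RECORDS = [  # metametadata instances describing learning-object metadata
    {"id": "lo-1", "attrs": {"worked-example", "undergraduate"}},
    {"id": "lo-2", "attrs": {"formative-assessment"}},
]

def expand(term):
    """Add narrower terms whose broader term is `term`, so a general query
    matches records tagged with more specific pedagogical attributes."""
    return {term} | {t for t, broader in TAXONOMY.items() if broader == term}

def search(query):
    expanded = set().union(*(expand(t) for t in query))
    return [r["id"] for r in RECORDS if r["attrs"] & expanded]

print(search({"instructional-strategy"}))  # -> ['lo-1']
```

A query for the broad attribute retrieves the record tagged only with the narrower "worked-example", which is the kind of semantic match a flat metadata search would miss.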

Analysing the familiar : reasoning about space and time in the everyday world

Randell, David Anthony January 1991
The development of suitable explicit representations of knowledge that can be manipulated by general purpose inference mechanisms has always been central to Artificial Intelligence (AI). However, there has been a distinct lack of rigorous formalisms in the literature that can be used to model domain knowledge associated with the everyday physical world. If AI is to succeed in building automata that can function reasonably well in unstructured physical domains, the development and utility of such formalisms must be secured. This thesis describes a first order axiomatic theory that can be used to encode much topological and metrical information that arises in our everyday dealings with the physical world. The formalism is notable for the minimal assumptions required to set up a very general framework covering the representation of much intuitive spatial and temporal knowledge. The basic ontology assumes regions that can be either spatial or temporal, over which a set of relations and functions are defined. The resulting partitioning of these abstract spaces allows complex relationships between objects, and descriptions of processes, to be formally represented. This also provides a useful foundation for controlling the proliferation of inference commonly associated with mechanised logics. Empirical information extracted from the domain is added and mapped to these basic structures, showing how further control of inference can be secured. The representational power of the formalism and the computational tractability of the general methodology are substantiated using two non-trivial domain problems: modelling the phagocytosis and exocytosis of unicellular organisms, and modelling the processes arising during the cycle of operations of a force pump.
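The flavour of the axiomatic theory can be conveyed by a representative fragment, in the now-standard presentation of the region calculus (RCC) that grew out of this work: every relation is defined from a single primitive C(x, y), read "x connects with y".

```latex
% Relations defined from the primitive C(x,y), "x connects with y"
\begin{align*}
\mathit{DC}(x,y)  &\equiv_{\mathrm{def}} \neg C(x,y) && \text{disconnected}\\
\mathit{P}(x,y)   &\equiv_{\mathrm{def}} \forall z\,[\,C(z,x) \rightarrow C(z,y)\,] && \text{part of}\\
\mathit{PP}(x,y)  &\equiv_{\mathrm{def}} P(x,y) \wedge \neg P(y,x) && \text{proper part}\\
\mathit{O}(x,y)   &\equiv_{\mathrm{def}} \exists z\,[\,P(z,x) \wedge P(z,y)\,] && \text{overlaps}\\
\mathit{EC}(x,y)  &\equiv_{\mathrm{def}} C(x,y) \wedge \neg O(x,y) && \text{externally connected}\\
\mathit{TPP}(x,y) &\equiv_{\mathrm{def}} PP(x,y) \wedge \exists z\,[\,EC(z,x) \wedge EC(z,y)\,] && \text{tangential proper part}
\end{align*}
```

Because each relation is derived from one primitive, the minimal-assumption claim above is literal: the whole family of spatial (or temporal) relationships comes from axiomatising C alone.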

Accounting for software in the United States

McGee, Robert W. January 1986
This thesis represents the first major research to be completed in either the United Kingdom or the United States on the subject of accounting for software. Part I concentrates on the financial aspects of software accounting and is based on in-person interviews with a number of individuals from software vendor and user companies who are knowledgeable about software accounting. The interviews were followed by two mail questionnaires, one each to software vendor company executives and software user company executives. The NAARS database was also used to determine how software accounting policies are disclosed for these two types of company. It was concluded that more than one policy exists in practice. While approximately 90% of the companies surveyed expense internally constructed software, about two-thirds capitalize the cost of purchased software. Reasons given for individual company policy seem to be based on expediency rather than good accounting theory. The interviews and questionnaire responses in Part I seemed to indicate that software vendor companies that capitalize software find it easier to raise debt and equity capital than do companies which expense software costs. Part II presents the results of two questionnaires that were mailed to bank lending officers and one questionnaire that was mailed to financial analysts for the purpose of obtaining more information on this point. It was concluded that companies that capitalize software costs find it significantly easier to obtain bank loans than do companies that expense software costs. The effect on stock price was less clear-cut, although the questionnaire responses did indicate that a company's software accounting policy does influence the value a financial analyst places on its stock. Part III discusses the United States federal and state tax aspects of software. Thirteen appendices giving supplementary data are also included.

An approach to computer-based knowledge representation for the business environment using empirical modelling

Rasmequan, Suwanna January 2001
The motivation for the thesis arises from the difficulties experienced by business people who are non-programmers with the inflexibilities of conventional packages and tools for model-making. After a review of current business software, an argument is made for the need for a new computing paradigm that would offer more support for the way that people actually experience their business activities. The Empirical Modelling (EM) approach is introduced as a broad theoretical and practical paradigm for computing that can be viewed as a far-reaching generalisation of the spreadsheet concept. The concepts and principles of EM emphasise the experiential processes underlying familiar abstractions and by which we come to identify reliable components in everyday life and, in particular, business activities. The emphasis on experience and on interaction leads to the new claim that EM environments offer a framework for combining propositional, experiential and tacit knowledge in a way that is more accessible and supportive of cognitive processes than conventional computer-based modelling. It is proposed that such environments offer an alternative kind of knowledge representation. Turning to the implementation and development of systems, the difficulties inherent in conventional methods are discussed and then the practical aspects of EM, and its potential for system building, are outlined. Finally, a more detailed study is made of Decision Support Systems and the ways in which the EM focus on experience, and knowledge through interaction, can contribute to the representation of qualitative aspects of business activities and their use in a more human-centred, but computer-supported, process of decision making. Illustrations of the practical application of EM methods to the requirements of a decision support environment are given by means of extracts from a number of existing EM models.
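The "far-reaching generalisation of the spreadsheet concept" can be hinted at with a toy dependency model: observables defined by formulas over other observables, where redefining one observable changes everything that depends on it. This is a minimal illustrative sketch, not the EM tools (e.g. EDEN) themselves:

```python
class Model:
    """Observables defined by constants or formulas over other observables."""
    def __init__(self):
        self.defs = {}                  # observable name -> constant or formula

    def define(self, name, formula):
        self.defs[name] = formula       # redefinition rewires the dependency

    def value(self, name):
        d = self.defs[name]
        return d(self) if callable(d) else d

m = Model()
m.define("unit_price", 4.0)
m.define("quantity", 25)
m.define("cost", lambda m: m.value("unit_price") * m.value("quantity"))
print(m.value("cost"))       # 100.0
m.define("quantity", 30)     # a redefinition propagates, spreadsheet-style
print(m.value("cost"))       # 120.0
```

The modeller interacts by redefinition and observation, which is the open-ended, experimental mode of working the abstract describes.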

Empirical modelling for participative business process reengineering

Chen, Yih-Chang January 2001
The purpose of this thesis is to introduce a new broad approach to computing, Empirical Modelling (EM), and to propose a way of applying this approach to system development so as to avoid the limitations of conventional approaches and integrate system development with business process reengineering (BPR). Based on the concepts of agency, observables and dependency, EM is an experience-based approach to modelling with computers in which the modeller interacts with an artefact through continuous observations and experiments. It is a natural way of working for business process modelling because the modeller is involved in, and takes account of, the real world context. It is also adaptable to a rapidly changing environment, as the computer-based models serve as creative artefacts with which the modeller can interact in a situated and open-ended manner. This thesis motivates and illustrates the EM approach through new concepts of participative BPR and participative process modelling. That is, different groups of people, with different perceptions, competencies and requirements, can be involved throughout the process of system development and BPR, rather than just at an early stage. This concept aims to address the well-known high failure rate of BPR. The SPORE framework (situated process of requirements engineering), previously proposed to guide the process of cultivating requirements in a situated manner, is extended to participative BPR (i.e. to support many users in a distributed environment). Two levels of modelling are proposed for the integration of contextual understanding and system development. A comparison between EM and object-orientation is also provided to give insight into how EM differs from current methodologies and to point out the potential of EM in system development and BPR. ISMs (interactive situation models), built using the principles and tools of EM, are used to form artefacts during the modelling process. A warehouse and logistics management system is taken as an illustrative case study for applying this framework.

Applications of formal methods in engineering

Tran, Sang Cong January 1991
The main idea presented in this thesis is to propose and justify a general framework for the development of safety-related systems based on a selection of criticality and the required level of integrity. We show that formal methods can be practically and consistently introduced into the system design lifecycle without incurring excessive development cost. An insight into the process of generating and validating a formal specification from an engineering point of view is illustrated, in conjunction with formal definitions of specification models, safety criteria and risk assessments. Engineering specifications are classified into two main classes of system: memoryless and memory-bearing systems. Heuristic approaches for specification generation and validation of these systems are presented and discussed, with a brief summary of currently available formal systems and their supporting tools. It is further shown that, to efficiently address different aspects of real-world problems, the concept of embedding one logic within another mechanised logic, in order to provide mechanical support for proofs and reasoning, is practical. A temporal logic framework, embedded in Higher Order Logic (HOL), is used to verify and validate the design of a real-time system. Formal definitions and properties of temporal operators are given in HOL, and real-time concepts such as timing markers, interrupts and timeouts are presented. A second major case study is presented on the specification of a solid model for mechanical parts. This work discusses the modelling theory using set-theoretic topology and Boolean operations. The theory is used to specify the mechanical properties of large distribution transformers. Associated mechanical properties such as volumetric operations are also discussed.
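The embedding idea can be sketched in miniature: temporal operators become ordinary definitions over traces (functions from time to state) inside a host language. The bounded-horizon Python sketch below is purely illustrative; the thesis carries out the embedding in HOL, not Python, and the trace and predicates here are invented:

```python
def always(p, horizon):
    """Henceforth: p holds at every instant from t up to the horizon."""
    return lambda tr, t: all(p(tr, u) for u in range(t, horizon))

def eventually(p, horizon):
    """Eventually: p holds at some instant from t up to the horizon."""
    return lambda tr, t: any(p(tr, u) for u in range(t, horizon))

def until(p, q, horizon):
    """p holds until q does, within the bounded horizon."""
    return lambda tr, t: any(
        q(tr, u) and all(p(tr, v) for v in range(t, u))
        for u in range(t, horizon)
    )

# toy trace: an interrupt is raised at time 3 and handled at time 5
trace = {3: "interrupt", 5: "handled"}.get
raised = lambda tr, t: tr(t) == "interrupt"
handled = lambda tr, t: tr(t) == "handled"

print(eventually(raised, 10)(trace, 0))   # True: the interrupt occurs
print(until(lambda tr, t: not handled(tr, t), handled, 10)(trace, 0))  # True
```

A real HOL embedding would quantify over unbounded time rather than a finite horizon, but the shape, operators as definitions inside the host logic, is the same.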

An expert system for material handling equipment selection

Al-Meshaiei, Eisa Abdullah Eisa S. January 1999
Manufacturing systems are subject to increasingly frequent changes in demand in terms of the number and type of products they produce. It is impractical to continually reconfigure the facilities, but it is possible to modify the material handling arrangements so that the selected equipment is the most appropriate for the current requirements. The number of decisions that need to be made, coupled with the rate at which they must be taken, adds significant difficulty to the problem of equipment selection. Furthermore, there are relatively few experts who have the necessary range of knowledge coupled with the ability to use this knowledge to select the most appropriate material handling solution in any situation. Access to such experts is therefore greatly restricted, and decisions are more commonly made by less experienced people, who depend on equipment vendors for information, often resulting in poor equipment selection. This research first examines the significance of appropriate material handling equipment choice in dynamic environments. The objective is to construct a computer-based expert system utilising knowledge from the best available sources, in addition to a systematic procedure for the selection of material handling equipment. A new system has been produced, based on the Flex language, which elicits from the inexperienced user details of the handling requirements in order to build an equipment specification. It then selects from among 11 handling solution groups and provides the user with information supporting the selection. Original features of the system are the way in which the knowledge is grouped, the ability of the procedure to deal with quantifiable and non-quantifiable equipment and selection factors, the selection of a decision-analysis method, and the validation of the final choice to establish confidence in the results. The system has been tested using real industrial data and has been found in 81% of cases to produce results which are acceptable to the experts who provided the information.
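The elicit-then-select flow can be illustrated with a toy rule base. The thesis implements its knowledge base in the Flex expert-system language; the Python rules, attributes and thresholds below are invented for illustration and are not taken from the system:

```python
# an elicited equipment specification (hypothetical attributes)
REQUIREMENTS = {"load_kg": 800, "path": "fixed", "throughput": "high"}

RULES = [
    # (equipment group, applicability predicate over the specification)
    ("conveyor",       lambda r: r["path"] == "fixed" and r["throughput"] == "high"),
    ("AGV",            lambda r: r["path"] == "variable" and r["load_kg"] <= 1500),
    ("forklift truck", lambda r: r["path"] == "variable"),
]

candidates = [group for group, applies in RULES if applies(REQUIREMENTS)]
print(candidates)   # ['conveyor'] for this specification
```

The real system adds what a flat rule list cannot: grouped knowledge, handling of non-quantifiable factors, a choice of decision-analysis method, and validation of the final recommendation.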

An intensional implementation technique for functional languages

Yaghi, Ali A. G. January 1984
The potential of functional programming languages has not yet been widely accepted. The reason lies in the difficulties associated with their implementation. In this dissertation we propose a new implementation technique for functional languages by compiling them into the 'intensional logic' of R. Montague and R. Carnap. Our technique is not limited to a particular hardware or to a particular evaluation strategy; nevertheless, it lends itself directly to a demand-driven tagged dataflow architecture. Even though our technique can handle conventional languages as well, our main interest lies exclusively with functional languages in general and with Lucid-like dataflow languages in particular. We give a brief general account of intensional logic and then introduce the concept of intensional algebras as structures (models) for intensional logic. We formally show the computability requirements for such algebras. The target language of our compilation is the family of languages DE (definitional equations over intensional expressions). A program in DE is a linear (not structured) set of unambiguous equations defining nullary variable symbols, one of which should be the symbol 'result'. We introduce the compilation of Iswim (a first-order variant of Landin's ISWIM) as an example of compiling functions into intensional expressions, and a compilation algorithm is given. Iswim(A), for any algebra of data types A, is compiled into DE(Flo(A)), where Flo(A) is a uniquely defined intensional algebra over the tree of function calls. The approach is extended to compiling Luswim and Lucid. We describe the demand-driven tagged dataflow (eduction) approach to evaluating the intensional family of target languages DE. Furthermore, for each intensional algebra, we introduce a collection of rewrite rules, with a justification of correctness; these rules are the basis for evaluating programs in the target language DE by reduction. Finally, we discuss possible refinements and extensions to our approach.
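The eduction idea can be sketched in miniature: each nullary variable denotes an intension, a map from a context tag to a value, and values are demanded lazily and cached in a "warehouse". The sketch below, with Lucid-style `fby`/`next` operators over time tags, is an invented illustration of the evaluation strategy, not the thesis's DE machinery:

```python
from functools import lru_cache

def fby(first, rest):
    """Lucid's 'followed by': `first` at tag 0, then `rest` shifted by one."""
    return lambda t: first(t) if t == 0 else rest(t - 1)

def nxt(x):
    """Lucid's 'next': look one tag ahead."""
    return lambda t: x(t + 1)

# program: nat = 0 fby (nat + 1);  result = nat + next nat
@lru_cache(maxsize=None)          # the "warehouse": (variable, tag) -> value
def demand(var, t):
    if var == "nat":
        return fby(lambda _: 0, lambda u: demand("nat", u) + 1)(t)
    if var == "result":
        return demand("nat", t) + nxt(lambda u: demand("nat", u))(t)
    raise KeyError(var)

print([demand("result", t) for t in range(5)])   # [1, 3, 5, 7, 9]
```

Nothing is computed until a (variable, tag) pair is demanded, and each pair is computed at most once, which is exactly what makes the scheme independent of any particular evaluation order or hardware.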

The development of artificial neural networks for the analysis of market research and electronic nose data

Larkin, Andrew B. January 1995
This thesis details research carried out into the application of unsupervised neural network and statistical clustering techniques to market research interview survey analysis. The objective of the research was to develop mathematical mechanisms to locate and quantify internal clusters within the data sets with definite commonality. As the data sets being used were binary, this commonality was expressed in terms of identical question answers. Unsupervised neural network paradigms are investigated, along with statistical clustering techniques. The theory of clustering in a binary space is also examined. Attempts to improve the clarity of output of Self-Organising Maps (SOM) consisted of several stages of investigation, culminating in the conception of the Interrogative Memory Structure (IMS). IMS proved easy to use, fast in operation, and consistently produced results with the highest degree of commonality when tested against SOM, Adaptive Resonance Theory (ART1) and FASTCLUS. ART1 performed well when clusters were measured using general metrics. During the course of the research a supervised technique, the Vector Memory Array (VMA), was developed. VMA was tested against Back Propagation (BP) (using data sets provided by the Warwick electronic nose project) and consistently produced higher classification accuracies. The main advantage of VMA is its speed of operation: in testing it produced results in minutes compared to hours for the BP method, giving speed increases in the region of 100:1.
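The clustering task itself is easy to state: group binary answer vectors so that each cluster shares identical answers as far as possible. The greedy Hamming-radius grouping below is an invented illustration of the problem setting, not the IMS or VMA algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(8, 6))    # 8 respondents, 6 yes/no answers

def hamming(a, b):
    """Number of questions on which two respondents answered differently."""
    return int(np.sum(a != b))

def cluster(data, radius=1):
    """Greedy grouping: join the first cluster whose exemplar is within
    `radius` differing answers; otherwise start a new cluster."""
    exemplars, members = [], []
    for i, row in enumerate(data):
        for k, exemplar in enumerate(exemplars):
            if hamming(row, exemplar) <= radius:
                members[k].append(i)
                break
        else:
            exemplars.append(row)
            members.append([i])
    return members

print(cluster(responses))   # lists of respondent indices, one list per cluster
```

With radius 0 the clusters contain only identical answer patterns, which is the strict "commonality" criterion the abstract describes.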

Representing knowledge patterns in a conceptual database design aid : a dual-base knowledge model

Chang, Tsiar-Yuan January 1998
The current status of Knowledge-Based Database Design Systems (KBDDSs) is reviewed. It is shown that they do not resolve the problems of identifying the relevant objects (relations) and of interpreting the identified objects from the semantically rich reality. Consequently, a theoretical architecture is developed to alleviate these problems by reusing finished conceptual data schemata. Taking account of the essence of the reality and the problem-solving behaviour of experts, a new knowledge model called the Dual-Base Knowledge Model (DBKM), which involves two synergistic knowledge structures, the concept and case bases, is constructed from theories of conceptual knowledge in the psychological realm and the notions of relation and function from set theory. The aim is to provide rational and valid grounds for the support and interplay of these two bases, in order to reuse relevant old cases and facilitate the acquisition of new cases. Thus the process model, which involves two process mechanisms, the case retrieval and knowledge accumulation mechanisms, is analysed according to the theory of the proposed DBKM. In this way, the feasibility of reusing relevant schemata, or parts of them, can be established in the DBKM architecture. The functionality of the DBKM architecture is tested by a simulated example to show how relevant cases are recalled from the knowledge pool and how new knowledge is stored in the knowledge repository. The distinctions between the DBKM architecture and the frameworks of current KBDDSs and Case-Based Reasoning (CBR) systems (from the knowledge-based system view), and between the DBKM and the knowledge models in current KBDDSs and rule-based data modelling approaches (from the knowledge-modelling view), are investigated to contrast the current levels of progress in conceptual data modelling. This research establishes the feasibility of the DBKM architecture, although it demonstrates the need to accommodate the dynamic and functional aspects of the Universe of Discourse (UoD). The main contributions of the DBKM are (1) to provide a valid basis for complementing the environments supported by current KBDDSs and a rational basis for creating a symbiosis of humans and computers; and (2) to moderate the beliefs underlying the fact-based school and provide a hermeneutic environment, so that the confusion of current conceptualising work can be alleviated and the difficulty of the conceptualising task eased to some degree.
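The case-retrieval mechanism can be hinted at with a toy example: finished schemata stored as cases indexed by the concepts they realise, and retrieved by concept overlap with a new design problem. The case base, concept sets and scoring below are invented for illustration and are not the DBKM's actual structures:

```python
CASE_BASE = {
    # schema name -> concepts the finished schema realises (all invented)
    "library-loans":  {"member", "item", "loan", "reservation"},
    "order-handling": {"customer", "order", "order-line", "product"},
}

def retrieve(problem_concepts, k=1):
    """Rank past schemata by concept overlap with the new design problem."""
    scored = sorted(CASE_BASE.items(),
                    key=lambda kv: len(kv[1] & problem_concepts),
                    reverse=True)
    return [name for name, _ in scored[:k]]

print(retrieve({"customer", "order", "invoice"}))   # ['order-handling']
```

In the DBKM the concept base plays the indexing role sketched here by bare string sets, and the knowledge accumulation mechanism would store the adapted schema back as a new case.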
