11

The exploration and adaptation of soft systems methodology using learning theories to enable more effective development of information systems applications

Small, Adrian January 2007 (has links)
According to Lyytinen and Robey (1999), information systems development (ISD) involves risk. This risk is regularly taken by managers and employees within an organisation, but the outcome of such development projects may be a failed information system (IS). The problem is further compounded by a lack of learning from such failures, and by unsuccessful or negligible efforts to avoid such mistakes in the future (Lyytinen and Robey, 1999). The contribution to knowledge of this thesis is the development of a framework to incorporate a learning approach within information system application (ISA) projects. The thesis puts forward the need for an embedded learning approach and examines its importance for organisations. It is argued that more attention needs to be placed on generating learning, because many individuals within organisations focus mainly on their operations and less on other processes. Three areas of theory are argued to bear on these issues: how IS can currently be designed and implemented; what role the learning organization literature can play in helping to promote and embed a learning approach into an ISD methodology; and what theories of learning can be applied to these two bodies of literature. From these issues, the main question of this thesis is how a learning approach can be incorporated into soft methodologies for the design and implementation of information systems applications. After examining a number of soft methodologies and arguing for an expansion of Soft Systems Methodology (SSM), labelled Soft Systems Methodology eXpanded for Learning (SSMXL), a manufacturing organisation is used to test the framework in practice. The first cycle of action research investigated how SSMXL worked in practice.
The second cycle of action research, while not using a formal framework, investigated how these participants implemented and managed the technology. Reflecting on the technology management literature, a technology management process framework (TMPF) is identified and adapted to further embed the learning individuals have obtained from the SSMXL framework. A discussion of how the two frameworks can be joined together and used in practice is undertaken; the combined framework is labelled Soft Systems Methodology eXpanded for Learning and incorporating Technology Management (SSMXLTM). A second case, involving a National Health Service (NHS) organisation, is used to test the developed SSMXLTM framework. This second case identifies learning points that support, or can pose problems for, the SSMXLTM framework, allowing refinements to be made. The work finishes by, firstly, providing a detailed discussion of the research process adopted, together with an evaluation of the SSMXLTM framework. Secondly, the conclusions address how well a learning approach can be incorporated into a soft methodology for the design and implementation of information system applications (ISA). Lastly, it is stated how the SSMXLTM framework can impact theory and practice.
12

Updating semi-structured data

Amornsinlaphachai, Pensri January 2007 (has links)
The Web has had tremendous success supporting the rapid and inexpensive exchange of information. A considerable body of exchanged data takes the form of semi-structured data such as the eXtensible Markup Language (XML). XML, an effective standard for representing and exchanging semi-structured data on the Web, is used ubiquitously in almost all areas of information technology. Most researchers in the XML area have concentrated on storing, querying and publishing XML, while few have paid attention to updating XML; thus the XML update area is not fully developed. We propose a solution for updating XML as a representation of semi-structured data. XML is updated through an object-relational database (ORDB) to exploit the maturity of the relational engine and the newer object features of object-relational (OR) technology. The engine is used to enforce constraints during the updating of the XML, whereas the object features are used to handle the XML hierarchical structure. Updating XML via an ORDB makes it easier to join XML documents in an update, and joins of XML documents in turn make it possible to keep non-redundant data in multiple XML documents. This thesis contributes a solution for the update of XML documents via an ORDB to advance our understanding of the XML update area. Rules for mapping XML structure and constraints to an ORDB schema are presented, and a mechanism to handle XML cardinality constraints is provided. An XML update language, an extension to XQuery, has been designed, and this language is translated into standard SQL executed on an ORDB. To handle the recursive nature of XML, a recursive function updating XML data is translated into SQL commands equipped with a programming capability. A method is developed to reflect the changes from the ORDB back to the XML documents. A prototype of the solution has been implemented to help validate our approach, and an experimental study to evaluate the performance of XML update processing based on the prototype has been conducted.
The experimental results show that updating multiple XML documents storing non-redundant data yields better performance than updating a single XML document storing redundant data; an ORDB can take advantage of this by caching data to a greater extent than a native XML database. The solution of updating XML documents via an ORDB addresses several problems in existing update methods. Firstly, the preservation of XML constraints is handled by the ORDB engine. Secondly, non-redundant data is stored in linked XML documents, so the problems of data inconsistency and low performance caused by data redundancy are solved. Thirdly, joins of XML documents are converted to joins of tables in SQL. Fourthly, fields or tables involved in regular path expressions can be resolved quickly using the mapping data. Finally, a recursive function is translated into SQL commands equipped with a programming capability.
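The core idea above — rewriting a path-based XML update into SQL against a relational shadow schema — can be sketched as follows. This is an illustrative toy, not the thesis's actual translator; the path convention, table/column mapping and key naming (`book`, `title`, `book_id`) are assumptions for illustration.

```python
# Sketch: translate a simple XML-style update on a path into SQL, assuming
# each element type maps to a table and leaf text content maps to a column.

def translate_update(doc_path, new_value, key):
    """Translate an update of e.g. /catalogue/book/title into SQL,
    assuming the parent element names the table and the leaf the column."""
    steps = doc_path.strip("/").split("/")   # e.g. ["catalogue", "book", "title"]
    table, column = steps[-2], steps[-1]     # parent element -> table, leaf -> column
    return (f"UPDATE {table} SET {column} = '{new_value}' "
            f"WHERE {table}_id = {key}")

sql = translate_update("/catalogue/book/title", "New Title", 42)
# -> "UPDATE book SET title = 'New Title' WHERE book_id = 42"
```

A real translator would of course consult the mapping rules and handle predicates, joins and recursion; this fragment only shows why the relational engine can then enforce constraints on the update.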
13

Automating database curation with workflow technology

Sanghi, Gaurav Ashokkumar. Kazic, Toni Marie. January 2005 (has links)
The entire thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file; a non-technical public abstract appears in the public.pdf file. Title from PDF of title page (University of Missouri--Columbia, viewed February 12, 2010). Thesis advisor: Dr. Toni Kazic. Includes bibliographical references.
14

Design and implementation of a multimedia DBMS retrieval management /

Pongsuwan, Wuttipong. January 1990 (has links) (PDF)
Thesis (M.S. in Computer Science)--Naval Postgraduate School, September 1990. / Thesis Advisor(s): Lum, Vincent Y. Second Reader: Wu, C. Thomas. "September 1990." Description based on title screen as viewed on December 17, 2009. DTIC Descriptor(s): Data Bases, Data Management, Schools, Models, Computers, Alphanumeric Data, Navy, Semantics, Theses, Information Retrieval, User Needs, Media, Management, Interfaces. DTIC Identifier(s): Management Information Systems, Data Bases, Systems Engineering, Theses, Installation. Author(s) subject terms: Multimedia Database Management System, Multimedia, DBMS, MDBMS, Image Database. Includes bibliographical references (p. 128-130). Also available in print.
15

Visualizing wiki author contributions in higher education

Arias Torres, Cristina G. January 2009 (has links)
Thesis (M.A.)--University of Alberta, 2009. / Title from PDF file main screen (viewed on Feb. 19, 2010). A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfillment of the requirements for the degree of Master of Arts, [Department of] Humanities Computing, University of Alberta. Includes bibliographical references.
16

Efficient query processing over uncertain data /

Lian, Xiang. January 2009 (has links)
Includes bibliographical references (p. 185-196).
17

Adaptive fuzzy logic control for solar buildings

El-Deen, M. M. G. Naser January 2002 (has links)
Significant progress has been made on maximising passive solar heating gains through the careful selection of glazing, orientation and internal mass within building spaces. Control of space heating in buildings of this type has become a complex problem. Additionally, and in common with most building control applications, there is a need to develop control solutions that permit simple and transparent set-up and commissioning procedures. This work concerns the development and testing of an adaptive control method for space heating in buildings with significant solar input. A simulation model of a building space is developed to assess the performance of different control strategies. A lumped parameter model based on an optimisation technique is proposed and validated; it is shown that this model improves on existing low-order modelling methods. A detailed model of a hot water heating system and related control devices is developed and evaluated for the specific purpose of control simulation. A PI-based fuzzy logic controller is developed in which the error and the change of error between the internal air temperature and the user set-point temperature are used as the controller inputs. A conventional PD controller is also considered for comparison. The parameters of the controllers are set to values that give the best performance under likely disturbances and changes in set point. In a further development of the fuzzy logic controller, the Predicted Mean Vote (PMV) is used to control the indoor temperature of a space by holding it at the point where the PMV index becomes zero and the predicted percentage of persons dissatisfied (PPD) reaches its minimum of 5%. The controller then adjusts the air temperature set point to satisfy the required comfort level given the prevailing values of the other comfort variables contributing to the comfort sensation.
The resulting controller is free of the set-up and tuning problems that hinder conventional HVAC controllers. The need for an adaptive capability in the fuzzy logic controller, to account for the lagging influence of solar heat gain, is established, and a new adaptive controller is therefore proposed. This "quasi-adaptive" fuzzy logic controller is developed in two steps. First, a feedforward neural network is used to predict the internal air temperature, with a singular value decomposition (SVD) algorithm removing highly correlated data from the inputs of the neural network to reduce the network structure. The fuzzy controller is then modified to take two inputs: the first is the error between the set-point temperature and the internal air temperature, and the second is the predicted future internal air temperature. Compared with a conventional method of control, the proposed controller is shown to give good tracking of the set-point temperature, reduced energy consumption and improved thermal comfort for the occupants by reducing solar overheating. The proposed controller is tested in real time using a test cell equipped with an oil-filled electric radiator and temperature and solar sensors. Experimental results confirm the earlier simulation findings: the proposed controller achieves superior tracking and reduces afternoon solar overheating when compared with a conventional method of control.
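A PI-type fuzzy controller of the kind described takes the error and change of error, fuzzifies them, and defuzzifies the rule outputs into an output increment. The following is a minimal sketch of that structure only; the membership breakpoints, the three-rule base and the singleton outputs are invented for illustration and are not the thesis's tuned controller.

```python
# Minimal PI-type fuzzy controller sketch: inputs are error (e) and
# change-of-error (de); output is an increment to the heating command.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_pi_increment(e, de):
    """Centre-of-gravity defuzzification over three illustrative rules
    with singleton outputs -1 (reduce), 0 (hold), +1 (increase)."""
    neg = lambda x: tri(x, -2.0, -1.0, 0.0)
    zero = lambda x: tri(x, -1.0, 0.0, 1.0)
    pos = lambda x: tri(x, 0.0, 1.0, 2.0)
    rules = [
        (min(neg(e), neg(de)), -1.0),   # room too hot and heating -> reduce
        (min(zero(e), zero(de)), 0.0),  # on target -> hold
        (min(pos(e), pos(de)), 1.0),    # room too cold and cooling -> increase
    ]
    num = sum(w * u for w, u in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

The "quasi-adaptive" scheme then replaces the change-of-error input with a neural-network prediction of the future air temperature, so the same rule machinery reacts ahead of the solar gain rather than after it.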
18

Pedestrian detection and tracking

Suppitaksakul, Chatchai January 2006 (has links)
This report presents work on the detection and tracking of people in digital images. The detection technique is based on image processing and classification: an object detection process locates candidate object positions, and a classification method using a Self-Organising Map (SOM) neural network identifies the pedestrian head positions in an image. The proposed tracking technique, supported by a novel prediction method, is based on the association of Cellular Automata (CA) with a Backpropagation Neural Network (BPNN). The tracking employs the CA to capture the pedestrian's movement behaviour, which is in turn learned by the BPNN in order to estimate the location of the pedestrian's movement without the need for empirical data. The report outlines this method and describes how it detects and identifies pedestrian head locations within an image. Details of how the proposed prediction technique is applied to support the tracking process are then provided. Assessments of each component of the system, and of the system as a whole, have been carried out. The results obtained show that the novel prediction technique described is able to provide an accurate forecast of the movement of a pedestrian through a video image sequence.
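The thesis pairs Cellular Automata with a backpropagation network for prediction; as a much simpler stand-in for that idea, the sketch below tallies observed grid-cell transitions of head positions and predicts the next cell as the most frequent successor. All data and names here are invented for illustration.

```python
# Toy movement predictor: learn grid-cell transition frequencies from
# observed head-position tracks, then predict the most likely next cell.

from collections import Counter, defaultdict

def learn_transitions(tracks):
    """tracks: lists of (row, col) head positions over consecutive frames."""
    trans = defaultdict(Counter)
    for track in tracks:
        for cur, nxt in zip(track, track[1:]):
            trans[cur][nxt] += 1
    return trans

def predict_next(trans, cell):
    """Most frequently observed successor of `cell`, or None if unseen."""
    return trans[cell].most_common(1)[0][0] if trans[cell] else None

tracks = [[(0, 0), (0, 1), (0, 2)], [(1, 0), (0, 1), (0, 2)]]
model = learn_transitions(tracks)
# predict_next(model, (0, 1)) -> (0, 2)
```

The CA/BPNN combination in the thesis generalises this: the CA encodes local movement rules and the network learns them, rather than relying on raw frequency counts.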
19

Generic model for application driven XML data processing

Elbekai, Ali Sayeh January 2006 (has links)
XML technology has emerged during recent years as a popular choice for representing and exchanging semi-structured data on the Web, and it integrates seamlessly with web-based applications. If data is stored and represented as XML documents, then it should be possible to query the contents of these documents in order to extract, synthesize and analyze them. This thesis presents an experimental study of a Web architecture for data processing based on semantic mapping of XML Schema. It covers methods and tools for the specification, algorithmic transformation and online processing of semi-structured data over the Web in XML format, with persistent storage in relational databases. The main focus of the research is preserving the structure of the original data for data reconciliation during database updates, and combining different technologies for XML data processing, such as storage (SQL), transformation (XSL processors), presentation (HTML), querying (XQuery) and transport (Web services), within a common framework that is both theoretically and technologically well grounded. The experimental implementation of the discussed architecture requires a Web server (Apache), a Java container (Tomcat) and an object-relational DBMS (Oracle 9) equipped with a Java engine and the corresponding libraries for parsing and transformation of XML data (Xerces and Xalan). The central idea behind the research is to use a single theoretical model of the data to be processed by the system (an XML algebra), controlled by one standard metalanguage specification (XML Schema), to solve a class of problems (a generic architecture). The proposed work combines theoretical novelty and technological advancement in the field of Internet computing. The approach is generic since both the model (the XML algebra) and the problem solver (the architecture of the integrated system) are XML Schema-driven.
Starting with the XML Schema of the data, a domain-specific XML algebra is first developed and then used to implement the main offline components of the data-processing system.
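The schema-driven idea — generating the storage layer from the XML Schema rather than hand-coding it — can be illustrated in miniature. The schema fragment, the element-to-table convention and the type mapping below are assumptions for illustration, not the thesis's actual mapping rules.

```python
# Sketch: derive a relational table definition from a trivial XML Schema
# fragment, treating the top-level element as the table and each typed
# child element in its sequence as a column.

import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"
TYPE_MAP = {"xs:string": "VARCHAR(255)", "xs:integer": "INTEGER"}  # assumed mapping

def schema_to_ddl(xsd_text):
    root = ET.fromstring(xsd_text)
    table = root.find(f"{XS}element").get("name")
    cols = [f"{el.get('name')} {TYPE_MAP[el.get('type')]}"
            for el in root.iter(f"{XS}element") if el.get("type")]
    return f"CREATE TABLE {table} ({', '.join(cols)})"

xsd = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="book">
    <xs:complexType><xs:sequence>
      <xs:element name="title" type="xs:string"/>
      <xs:element name="pages" type="xs:integer"/>
    </xs:sequence></xs:complexType>
  </xs:element>
</xs:schema>"""
# schema_to_ddl(xsd) -> "CREATE TABLE book (title VARCHAR(255), pages INTEGER)"
```

A full generator would also emit the constraints, nesting and key relationships that the thesis's semantic mapping preserves; this fragment only shows the direction of the derivation.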
20

Discovery and information use patterns of Nobel laureates in physiology or medicine

Balcom, Karen Suzanne, Harmon, Glynn, January 2005 (has links) (PDF)
Thesis (Ph. D.)--University of Texas at Austin, 2005. / Supervisor: E. Glynn Harmon. Vita. Includes bibliographical references.
