  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Developing a model of mobile Web uptake in the developing world

Purwandari, Betty January 2013 (has links)
This research was motivated by the limited penetration of the Internet within emerging economies and the ‘mobile miracle’, the steep increase in mobile phone penetration. In the context of the developing world, harnessing the ‘mobile miracle’ to improve Internet access can leverage the potential of the Web. However, no comprehensive model exists that can identify and measure indicators of Mobile Web uptake, and the absence of such a model creates problems in understanding the impact of the Mobile Web. This generated the key question under study in this thesis: “What is a suitable model for Mobile Web uptake and its impact in the developing world?” To address this question, the Model of Mobile Web Uptake in the Developing World (MMWUDW) was created, informed by a literature review, a pilot study in Kenya and expert reviews. The MMWUDW was evaluated using Structural Equation Modelling (SEM) on primary data consisting of questionnaire and interview data from Indonesia, and the SEM analysis was triangulated with the questionnaire results and interview findings. Examining the primary data to evaluate the MMWUDW was essential to understanding why people used mobile phones to make or follow links on the Web. The MMWUDW has three main factors: Mobile Web maturity, uptake and impact. The SEM results suggested that mobile networks, the percentage of income spent on mobile credit, literacy and digital literacy did not affect Mobile Web uptake. In contrast, web-enabled phones, Web applications or content, and mobile operator services strongly indicated Mobile Web maturity, which was a prerequisite for Mobile Web uptake.
The uptake then created Mobile Web impact, which included both positive and negative aspects: ease of access to information, a convenient way to communicate, entertainment and empowerment, and the maintenance of social cohesion and economic benefits, as well as wasted time and money and exposure to cyber-bullying. Moreover, the research identified areas for improvement in the Mobile Web, and regression equations to measure the factors and indicators of the MMWUDW. Possible future work comprises advancement of the MMWUDW and new Web Science research on the Mobile Web in developing countries.
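The measurement-model idea behind such an SEM evaluation, latent maturity indicated by several observed variables, can be sketched as follows. This is an illustrative simulation with invented data and variable names, not the thesis's actual model or results:

```python
import numpy as np

# Hypothetical sketch of a measurement model: a latent factor
# ("Mobile Web maturity") reflected by three observed indicators.
rng = np.random.default_rng(0)
n = 1000
maturity = rng.normal(size=n)                  # latent factor (unobserved in practice)
web_phones = 0.8 * maturity + rng.normal(scale=0.6, size=n)
web_apps   = 0.7 * maturity + rng.normal(scale=0.7, size=n)
operator   = 0.6 * maturity + rng.normal(scale=0.8, size=n)

indicators = np.column_stack([web_phones, web_apps, operator])

# Crude factor proxy: the standardised mean of the indicators.
proxy = indicators.mean(axis=1)
proxy = (proxy - proxy.mean()) / proxy.std()

# Approximate "loadings" as correlations between each indicator and the proxy;
# strong positive loadings correspond to indicators that mark the factor well.
loadings = [np.corrcoef(indicators[:, i], proxy)[0, 1] for i in range(3)]
print([round(l, 2) for l in loadings])
```

A full SEM additionally estimates the structural paths between maturity, uptake and impact; the proxy-correlation shortcut above only illustrates the indicator side.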
52

Direct write printed flexible electronic devices on fabrics

Li, Yi January 2014 (has links)
This thesis describes direct-write printing methods for achieving flexible electronic devices on fabrics, investigating low-temperature processes and functional conductor, insulator and semiconductor inks. The objective is to print flexible electronic devices onto fabrics solely by inkjet printing or pneumatic dispenser printing. Antennas and capacitors, as intermediate inkjet-printed electronic devices, are addressed before transistor fabrication. There are many publications that report inkjet-printed flexible electronic devices; however, none of the reported methods use fabrics as the target substrate or are processed at a sufficiently low temperature (≤150 °C) for fabrics to survive. The target substrate in this research, standard 65/35 polyester cotton fabric, has a maximum thermal curing condition of 180 °C for 15 minutes and 150 °C for 45 minutes. The total effective curing is therefore best kept below 150 °C and within 30 minutes to minimise any potential degradation of the fabric substrate. This thesis reports on an inkjet-printed flexible half-wavelength fabric dipole antenna, an inkjet-printed fabric patch antenna, an all-inkjet-printed SU-8 capacitor, an all-inkjet-printed fabric capacitor and an inkjet-printed transistor on a silicon dioxide coated silicon wafer. The measured fabric dipole antenna peak operating frequency is 1.897 GHz with 74.1% efficiency and 3.6 dBi gain. The measured fabric patch antenna peak operating frequency is around 2.48 GHz with efficiency up to 57% and 5.09 dBi gain. The measured capacitance of the printed capacitor is 48.5 pF (2.47 pF/mm²) at 100 Hz using the inkjet-printed SU-8. The capacitance of an all-inkjet-printed flexible fabric capacitor is 163 pF (23.1 pF/mm²) at 100 Hz with the UV-curable PVP dielectric ink developed as part of this work.
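As a rough plausibility check on the reported capacitance density, the parallel-plate formula C/A = ε₀εᵣ/d can be inverted to estimate the implied dielectric thickness. The SU-8 relative permittivity used below is an assumed textbook value (~3.2), not a figure from the thesis:

```python
# Back-of-envelope check of the reported 2.47 pF/mm^2 SU-8 capacitor
# using C/A = eps0 * eps_r / d (parallel-plate approximation).
eps0 = 8.854e-12            # vacuum permittivity, F/m
eps_r_su8 = 3.2             # assumed relative permittivity of SU-8 (not from the thesis)

density = 2.47e-12 / 1e-6   # 2.47 pF/mm^2 expressed in F/m^2
d = eps0 * eps_r_su8 / density          # implied dielectric thickness, m
print(f"implied SU-8 thickness: {d*1e6:.1f} um")

area_mm2 = 48.5 / 2.47      # electrode area implied by the 48.5 pF total
print(f"implied electrode area: {area_mm2:.1f} mm^2")
```

The implied thickness comes out on the order of ten micrometres, which is a plausible single-pass spin or inkjet SU-8 layer, so the reported density is self-consistent under the assumed permittivity.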
53

Engineering a Semantic Web trust infrastructure

Cobden, Marcus January 2014 (has links)
The ability to judge the trustworthiness of information is an important and challenging problem in the field of Semantic Web research. In this thesis, we take an end-to-end look at the challenges posed by trust on the Semantic Web, and present contributions in three areas: a Semantic Web identity vocabulary, a system for bootstrapping trust environments, and a framework for trust-aware information management. Typically, Semantic Web agents, which consume and produce information, are not described with sufficient information to permit those interacting with them to make good judgements of trustworthiness. A descriptive vocabulary for agent identity is required to enable effective inter-agent discourse and the growth of trust and reputation within the Semantic Web; we therefore present such a foundational identity ontology for describing web-based agents. It is anticipated that the Semantic Web will suffer from a trust-network bootstrapping problem. In this thesis, we propose a novel approach which harnesses open data to bootstrap trust in new trust environments. This approach brings together public records published by a range of trusted institutions in order to encourage trust in identities within new environments. Information integrity and provenance are both critical prerequisites for well-founded judgements of information trustworthiness. We propose a modification to the RDF Named Graph data model in order to address serious representational limitations of the named-graph proposal which affect the ability to cleanly represent claims and provenance records. Next, we propose a novel graph-based approach for recording the provenance of derived information. This approach offers computational and memory savings while maintaining the ability to answer graph-level provenance questions. In addition, it allows new optimisations, such as strategies to avoid needless repeat computation and a delta-based storage strategy which avoids data duplication.
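The named-graph idea underlying the provenance work can be sketched minimally: store each triple as a quad keyed by graph name, and attach provenance at the graph level so that graph-level provenance questions are a single lookup. The URIs and field names below are illustrative assumptions, not the thesis's vocabulary:

```python
# Minimal quad-store sketch of RDF named graphs with graph-level provenance.
from collections import defaultdict

quads = defaultdict(list)      # graph name -> list of (s, p, o) triples
provenance = {}                # graph name -> provenance record

def assert_triple(graph, s, p, o):
    """Record a triple inside a named graph (a quad)."""
    quads[graph].append((s, p, o))

# A claim asserted inside a named graph, with provenance for the whole graph.
assert_triple("g:claims1", "ex:alice", "ex:worksFor", "ex:acme")
provenance["g:claims1"] = {"source": "ex:companies-house",
                           "retrieved": "2014-01-01"}

# Answering "where did this graph come from?" is then one dictionary lookup.
print(provenance["g:claims1"]["source"])
```

Keeping provenance per graph rather than per triple is what yields the memory savings the abstract mentions: one record covers every triple in the graph.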
54

A framework for the real-time analysis of musical events

Ibbotson, John Bryan January 2009 (has links)
In this thesis I propose a framework for the real-time creation of a harmonic structural model of music. Unlike most uses of computing in musicology, which are based on batch processing, the framework uses publish/subscribe messaging techniques found in business systems to create an interconnected set of collaborating applications within a network that process streamed events of the kind generated during a musical performance. These applications demonstrate the transformation of data, in the form of MIDI commands, into information and knowledge in the form of the music’s harmonic structure, represented as a model using Semantic Web techniques. With such a framework, collaborative performances over the network become possible, with a shared representation of the music being performed accessible to all performers, both human and, potentially, software agents. The framework demonstrates novel real-time implementations of pitch-spelling, chord-extraction and key-extraction algorithms interacting with Semantic Web and database technologies in a collaborative manner. It draws on relevant research in information science, musical cognition, the Semantic Web and business messaging technologies to implement a framework and set of software components for the real-time analysis of musical events, the output of which is a description of the music’s harmonic structure. Finally, it proposes a pattern-based approach to querying the generated model, which suggests a visual query and navigation paradigm.
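A real-time chord-labelling step of the kind described can be sketched by reducing streamed MIDI note-on events to pitch classes and matching them against triad templates. The templates and event format here are assumptions for illustration, not the thesis's actual algorithm:

```python
# Sketch of chord extraction from MIDI note numbers: collapse sounding
# notes to pitch classes, then test each pitch class as a candidate root
# against simple triad templates (intervals in semitones above the root).
TRIADS = {
    (0, 4, 7): "major",
    (0, 3, 7): "minor",
}
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def label_chord(midi_notes):
    pcs = sorted({n % 12 for n in midi_notes})     # distinct pitch classes
    for root in pcs:
        shape = tuple(sorted((p - root) % 12 for p in pcs))
        if shape in TRIADS:
            return f"{NAMES[root]} {TRIADS[shape]}"
    return "unknown"

print(label_chord([60, 64, 67]))   # C, E, G  -> "C major"
print(label_chord([57, 60, 64]))   # A, C, E  -> "A minor"
```

In a publish/subscribe setting, a chord-extraction application would subscribe to note-on/note-off events and publish labels such as these for downstream key-extraction and model-building subscribers.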
55

Intersomatic awareness in game design

Thomas, Siobhán January 2015 (has links)
The aim of this qualitative research study was to develop an understanding of the lived experiences of game designers from the particular vantage point of intersomatic awareness. Intersomatic awareness is an interbodily awareness based on the premise that the body of another is always understood through the body of the self. While the term intersomatics is related to intersubjectivity, intercoordination, and intercorporeality, it has a specific focus on somatic relationships between lived bodies. This research examined game designers’ body-oriented design practices, finding that within design work the body is a ground of experiential knowledge which is largely untapped. To access this knowledge a hermeneutic methodology was employed. The thesis presents a functional model of intersomatic awareness comprising four dimensions: sensory ordering, sensory intensification, somatic imprinting, and somatic marking.
56

The development of a structured methodology for the construction and integrity control of spreadsheet models

Rajalingham, Kamalasen January 2002 (has links)
Numerous studies and reported cases have established the seriousness of the frequency and impact of user-generated spreadsheet errors. This thesis presents a structured methodology for spreadsheet model development which enables improved integrity control of the models. The proposed methodology has the potential to ensure consistency in the development process and to produce more comprehensible, reliable and maintainable models, which can reduce the occurrence of user-generated errors. An insight into the nature and properties of spreadsheet errors is essential for the development of a methodology for controlling the integrity of spreadsheet models. An important by-product of the research is the development of a comprehensive classification, or taxonomy, of the different types of user-generated spreadsheet errors based on a rational taxonomic scheme. Research on the phenomenon of spreadsheet errors has revealed the need to adopt a software-engineering-based methodology as a framework for spreadsheet development in practical situations. The proposed methodology represents a new approach to the provision of a structured, software-engineering-based discipline for the development of spreadsheet models. It is established in this thesis that software engineering principles can in fact be applied to the process of spreadsheet model building to help improve the quality of the models. The methodology uses Jackson structures to produce the logical design of the spreadsheet model. This is followed by a technique to derive the physical model, which is then implemented as a spreadsheet. The methodology’s potential for improving the quality of spreadsheet models is demonstrated. In order to evaluate the effectiveness of the proposed framework, the various features of the structured methodology are tested on a range of spreadsheet models through a series of experiments.
The results of these tests provide adequate evidence of the methodology’s potential to reduce the occurrence of user-generated errors and to enhance the comprehensibility of the models.
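One mechanical error class a spreadsheet taxonomy typically covers, a copied formula that deviates from the pattern of its neighbours, can be detected automatically. The following sketch uses a hypothetical relative-reference normalisation (not the thesis's notation) to illustrate the kind of integrity check such a methodology enables:

```python
# Detect a formula in a filled column that breaks its neighbours' pattern:
# rewrite row references as offsets from the formula's own row, so that
# structurally identical formulas normalise to the same string.
import re

def normalise(formula, row):
    return re.sub(r"([A-Z]+)(\d+)",
                  lambda m: f"{m.group(1)}[{int(m.group(2)) - row}]",
                  formula)

# Row 4's formula was mistyped: it references B3 instead of B4.
column = {2: "=A2*B2", 3: "=A3*B3", 4: "=A4*B3", 5: "=A5*B5"}

patterns = {r: normalise(f, r) for r, f in column.items()}
vals = list(patterns.values())
expected = max(set(vals), key=vals.count)        # the majority pattern
suspect = [r for r, p in patterns.items() if p != expected]
print(suspect)   # -> [4]
```

A majority-pattern check like this flags mechanical copy errors but not logic errors, which is one reason a structured development discipline is still needed upstream.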
57

Tutoring systems based on user-interface dialogue specification

Martin, Frank A. January 1990 (has links)
This thesis shows how the appropriate specification of a user interface to an application software package can be used as the basis for constructing a tutorial for teaching the use of that interface. An economy can hence be made by sharing the specification between the application development and tutorial development stages. The major part of the user-interface specification which is utilised, the task classification structure, must be transformed from an operational to a pedagogic ordering. Heuristics are proposed to achieve this, although human expertise is required to apply them. The reported approach is best suited to domains with hierarchically-ordered command sets. A portable rule-based shell has been developed in Common Lisp which supports the delivery of tutorials for a range of software application package interfaces. The use of both the shell and tutorials for two such interfaces is reported. A computer-based authoring environment provides support for tutorial development. The shell allows the learner of a software interface to interact directly with the application software being learnt while remaining under tutorial control. The learner can always interrupt in order to request a tutorial on any topic, although advice may be offered against this in the light of the tutor’s current knowledge of the learner; this advice can always be overridden. The keystroke sequences of the tutorial designer and the learner interacting with the package are parsed against an application model based on the task classification structure. Diagnosis is effected by a differential modelling technique applied to the structures generated by the parsing processes. The approach reported here is suitable for an unsupported learner of a software interface and is named LIY (‘Learn It Yourself’). It provides a promising method for augmenting a software engineering tool-kit with a new technique for producing tutorials for application software.
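The differential-modelling idea can be illustrated in miniature: parse the designer's (expected) action sequence and the learner's, and begin diagnosis at the first point where the two diverge. The command names below are invented for illustration:

```python
# Compare a learner's action sequence against the tutorial designer's
# expected sequence and report the first index at which they diverge
# (None if the sequences match), as a toy stand-in for differential modelling.
import itertools

expected = ["open-file", "select-range", "apply-format", "save"]
learner  = ["open-file", "select-range", "save"]

divergence = next(
    (i for i, (e, l) in enumerate(itertools.zip_longest(expected, learner))
     if e != l),
    None,
)
print(divergence)   # first differing step; here the learner skipped a step
```

A real diagnosis would compare parse trees built from the task classification structure rather than flat sequences, so it could distinguish a skipped subtask from a wrongly ordered one.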
58

Strategies and tools for the exploitation of massively parallel computer systems

Evans, Emyr Wyn January 2000 (has links)
The aim of this thesis is to develop software and strategies for the exploitation of parallel computer hardware, in particular distributed memory systems, and to embed these strategies within a parallelisation tool to allow their automatic generation. The parallelisation of four structured mesh codes using the Computer Aided Parallelisation Tools provided a good initial parallelisation of the codes. However, investigation revealed that simple optimisation of the communications within these codes provided a further improvement in performance. The dominant factor within the communications was the data transfer time, with communication start-up latencies also significant. This was significant throughout the codes, but especially in sections of pipelined code where large amounts of communication were present. This thesis describes the development and testing of methods to increase the performance of these communications by overlapping them with unrelated calculation. This overlapping was applied to the exchange-of-data communications as well as the pipelined communications. Its successful application by hand provided the motivation for these methods to be incorporated and automatically generated within the Computer Aided Parallelisation Tools, where they were integrated as an additional stage of the parallelisation. This required a generic algorithm that made use of many of the symbolic algebra tests and symbolic variable manipulation routines within the tools. The automatic generation of overlapped communications was applied to the four codes previously parallelised as well as a further three codes, one of which was a real-world Computational Fluid Dynamics code. Methods to apply automatic generation of overlapped communications to unstructured mesh codes were also discussed.
These methods are similar to those applied to the structured mesh codes, and their automation is expected to follow a similar approach.
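The overlapping technique can be sketched in miniature: post the communication, perform unrelated calculation while the data is in flight, and wait only at the point where the data is actually needed. The sketch below uses a Python thread to stand in for a non-blocking message-passing call (e.g. MPI_Isend/MPI_Irecv); it is illustrative, not the tools' generated code:

```python
# Overlap a (simulated) halo-exchange communication with unrelated
# interior computation, joining only when the received data is required.
import threading
import time

received = {}

def exchange_halo():
    """Stands in for a non-blocking receive of boundary data."""
    time.sleep(0.05)                       # simulated transfer latency
    received["halo"] = [1.0, 2.0, 3.0]

t = threading.Thread(target=exchange_halo)
t.start()                                  # post the "non-blocking" receive

# Unrelated interior calculation proceeds while the transfer is in flight.
interior = sum(x * x for x in range(1000))

t.join()                                   # wait only when the data is needed
boundary = sum(received["halo"])
print(interior, boundary)
```

The saving comes from hiding the transfer latency behind the interior work; the generic algorithm mentioned above has to prove, symbolically, that the moved calculation really is independent of the in-flight data.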
59

Improving the regulatory acceptance and numerical performance of CFD based fire-modelling software

Grandison, Angus Joseph January 2003 (has links)
The research of this thesis was concerned with practical aspects of Computational Fluid Dynamics (CFD) based fire-modelling software, specifically its application and performance. Initially a novel CFD-based fire suppression model, FIREDASS, was developed. The FIREDASS (FIRE Detection And Suppression Simulation) programme was concerned with the development of water-misting systems as a possible replacement for the halon-based fire suppression systems currently used in aircraft cargo holds and ship engine rooms. A set of procedures was developed to test the applicability of CFD fire-modelling software, and this methodology was demonstrated on three CFD products that can be used for fire-modelling purposes. The proposed procedure involved two phases. Phase 1 allowed comparison between different computer codes without the bias of the user or of specialist features that may exist in one code and not another, by rigidly defining the case set-up. Phase 2 allowed the software developer to perform the test using the best modelling features available in the code to best represent the scenario being modelled. In this way it was hoped to demonstrate that, in addition to achieving a common minimum standard of performance, the software products were also capable of achieving improved agreement with the experimental or theoretical results. A significant conclusion drawn from this work is that an engineer using the basic capabilities of any of the products tested would be likely to draw the same conclusions from the results, irrespective of which product was used. From a regulator's view this is an important result, as it suggests that the quality of the predictions produced is likely to be independent of the tool used - at least in situations where the basic capabilities of the software are used. The majority of this work focussed on the use of specialised proprietary hardware, generally based around the UNIX operating system.
However, the majority of engineering firms that would benefit from the reduced timeframes offered by parallel processing rarely have access to such specialised systems. In recent years, with the increasing power of individual office PCs and the improved performance of Local Area Networks (LANs), parallel processing has reached the point where it can be usefully employed in a typical office environment in which many such PCs may be connected to a LAN. Harnessing this power for fire modelling has great promise. Modern low-cost supercomputers are now typically constructed from commodity PC motherboards connected via a dedicated high-speed network; however, virtually no work has been published on using office PCs connected via a LAN in a parallel manner on real applications. The SMARTFIRE fire field model was modified to utilise multiple PCs on a typical office LAN. It was found that good speedup could be achieved on homogeneous PCs: for example, a problem composed of around 100,000 cells ran on a network of 12 PCs with a speedup of 9.3 over a single PC. A dynamic load-balancing scheme was devised to allow the effective use of the software on heterogeneous PC networks. This scheme ensured that the impact of the parallel processing on other computer users was minimised, and likewise that the impact of other computer users on the parallel processing performed by the FSE was minimised.
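The reported figures can be sanity-checked: a speedup of 9.3 on 12 PCs corresponds to roughly 78% parallel efficiency, and inverting Amdahl's law gives the parallelisable fraction this implies. This is an illustrative estimate only, not a measurement from the thesis:

```python
# Parallel efficiency and the Amdahl's-law parallel fraction implied by
# a speedup of 9.3 on 12 processors.
n, speedup = 12, 9.3

efficiency = speedup / n
# Amdahl: speedup = 1 / ((1 - p) + p/n)  =>  p = (1 - 1/speedup) / (1 - 1/n)
p = (1 - 1 / speedup) / (1 - 1 / n)

print(f"efficiency {efficiency:.1%}, implied parallel fraction {p:.1%}")
```

An implied serial fraction of only a few percent is consistent with a cell-partitioned CFD solver where most time is spent in per-cell field updates.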
60

Columbus : a solution using metadata for integrating document management, project hosting and document control in the construction industry

Herrero, Juan Jose January 2003 (has links)
This thesis presents a solution for integrating document-handling technologies within the construction industry, using metadata in a novel way and providing a working solution in the form of an application called Columbus. The research analyses in detail the problem of project collaboration. It concentrates on the usage of document management, project hosting and document control systems as important enabling technologies. The creation, exchange and recording of information are addressed as key factors in a unified document-handling solution. Metadata is exploited as a technology providing for effective open information exchange within and between project participants, and the technical issues relating to the use of metadata are addressed at length. The Columbus application is presented as a working solution to this problem. Columbus is currently used by over 20,000 organisations in 165 countries and has become a standard for information exchange. The main benefit of Columbus has been in getting other project participants to send metadata with their electronic documents and in dealing with project archival. This has worked very well on numerous projects, saving countless man-hours of data input, document cataloguing and searching. The application is presented in detail from both commercial and technical perspectives and is shown to be an open solution which can be extended by third parties. The commercial success of Columbus is discussed by means of a number of reviews and case studies that cover its usage within the industry. In 2000, it was granted an Institution of Civil Engineers’ Special Award in recognition of its contribution to the Latham and Egan initiatives for facilitating information exchange within the construction industry.
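The core idea, metadata travelling with each document so that receiving systems can catalogue it without manual data entry, can be sketched as a simple packet. The field names below follow Dublin Core conventions and the values are invented, not taken from an actual Columbus record:

```python
# Sketch of a document exchanged together with its descriptive metadata,
# so the receiving archive can index it automatically.
import json

record = {
    "title": "Structural calculations, bridge deck rev C",
    "creator": "example-engineer",         # hypothetical originator
    "date": "2003-05-14",
    "identifier": "DOC-00123",             # hypothetical document number
    "relation": "project:riverside-crossing",
}
packet = json.dumps({"document": "bridge_deck_revC.pdf", "metadata": record})

# The receiver catalogues the document straight from the packet,
# with no manual re-keying of the metadata.
incoming = json.loads(packet)
print(incoming["metadata"]["identifier"])
```

Whatever the wire format, the saving described in the abstract comes from exactly this: the sender's metadata survives the exchange, so cataloguing and later searching need no repeated data input.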
