171

Exploring the Utility of Ratio-based Co-expression Networks using a GPU Implementation of Semantic Similarity

Greer, Michael J. 21 November 2017 (has links)
The reduced cost of sequencing has made it feasible to acquire multi-tissue-site expression data from the same patient. In the field of cancer research, this has led to an accumulation of cancer-type-specific tumor expression data sets with matched adjacent normal tissue. Co-expression network analysis is a common technique used to analyze expression data; however, it is unknown whether integrating multi-tissue-site data into network construction or constructing pan-cancer networks will improve gene function prediction. One method of evaluating network performance relies on semantic similarity scores; however, computing these scores is computationally intensive. Here, I develop a GPU implementation of a commonly used semantic similarity measure and evaluate its performance compared to CPU-based approaches. Next, I explore whether constructing co-expression networks using the ratio of tumor to matched adjacent normal mRNA, or a pan-cancer consensus network, produces superior performance compared to networks constructed with tumor expression alone. The findings presented here indicate that the GPU-based approach offers a significant performance improvement over CPU-based approaches. However, the ratio- and pan-cancer networks produce only a modest improvement over tumor-based networks.
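To make the computational burden concrete, the sketch below is a minimal CPU reference for Resnik-style semantic similarity over ontology annotations, one commonly used measure of the kind the abstract refers to; the abstract does not state which measure or data layout the thesis uses, so the function names and the best-match-average gene comparison are assumptions. A GPU implementation would parallelise the all-pairs gene comparison that this version performs naively.

```python
# Minimal CPU reference for Resnik-style semantic similarity (assumed measure);
# a GPU version would parallelise the all-pairs gene comparison done naively here.
import numpy as np

def information_content(term_freq):
    """IC(t) = -log p(t), where p(t) is the annotation frequency of term t in the corpus."""
    return {t: -np.log(p) for t, p in term_freq.items()}

def resnik(term_a, term_b, ancestors, ic):
    """IC of the most informative common ancestor of two ontology terms."""
    common = ancestors[term_a] & ancestors[term_b]   # shared ancestor terms (sets)
    return max((ic[t] for t in common), default=0.0)

def gene_similarity(terms_a, terms_b, ancestors, ic):
    """Best-match average from gene A's terms to gene B's terms (one common scheme)."""
    scores = [max(resnik(a, b, ancestors, ic) for b in terms_b) for a in terms_a]
    return float(np.mean(scores)) if scores else 0.0
```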
172

Towards a Semantic Wiki Experience – Desktop Integration and Interactivity in WikSAR

Aumüller, David, Auer, Sören 11 October 2018 (has links)
Common Wiki systems such as MediaWiki lack semantic annotations. WikSAR (Semantic Authoring and Retrieval within a Wiki), a prototype of a semantic Wiki, offers effortless semantic authoring. Instant gratification of users is achieved by context-aware means of navigation, interactive graph visualisation of the emerging ontology, as well as semantic retrieval possibilities. Embedding queries into Wiki pages creates views (as dependent collections) on the information space. Desktop integration includes accessing dates (e.g. reminders) entered in the Wiki via local calendar applications, maintaining bookmarks, and collecting web quotes within the Wiki. Approaches to referencing documents on the local file system are sketched out, as well as an enhancement of the Wiki interface to suggest appropriate semantic annotations to the user.
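As an illustration of how semantic statements can emerge from ordinary page text, the sketch below parses typed links of a hypothetical [[property::value]] form into subject-predicate-object triples; the syntax and function names are assumptions for illustration and may not match WikSAR's actual markup.

```python
# Illustrative only: extracting subject-predicate-object statements from wiki text
# that uses a hypothetical [[property::value]] typed-link syntax (WikSAR's real
# markup may differ).  Page text thus yields the triples of an emerging ontology.
import re

TYPED_LINK = re.compile(r"\[\[([^:\]|]+)::([^\]|]+)\]\]")

def extract_triples(page_title, wiki_text):
    """Return (subject, predicate, object) triples found on a wiki page."""
    return [(page_title, prop.strip(), value.strip())
            for prop, value in TYPED_LINK.findall(wiki_text)]

triples = extract_triples("Leipzig", "Leipzig is a city in [[country::Germany]] "
                                     "with [[population::half a million]] people.")
# [('Leipzig', 'country', 'Germany'), ('Leipzig', 'population', 'half a million')]
```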
173

A design process from a collaborative point of view: Exploring the importance of collaboration between designers and clients

Anttila, Lars, Ögren, Martin January 2009 (has links)
This essay studies the design process from a collaborative point of view. A literature study has been carried out with a focus on design processes and collaborative design, where important concepts and notions are presented. A practical design process was carried out in which the project was to create a station identification for Swedish Channel 9. The process was then retrospectively analyzed and broken down into a time-line describing the whole process from, among other perspectives, a collaborative point of view. Collaboration between designers and clients is found to be very important for the final design. Constraints are considered important to force creativity, and the ability to convey abstract thoughts over long-distance channels such as e-mail is found to be important in order to overcome the spatial barrier that exists in many design processes today.
174

Indiana's Community Networking Movement: Websites Then and Now

Clodfelter, Kathryn, Buente, Wayne, Rosenbaum, Howard January 2006 (has links)
This is a submission to the "Interrogating the social realities of information and communications systems pre-conference workshop, ASIST AM 2006".
175

The application of data clustering algorithms in packet pair/stream dispersion probing of wired and wired-cum-wireless networks

Hosseinpour, Mehri January 2012 (has links)
This thesis reports a study of network probing algorithms applied to wired and wireless Ethernet networks. It begins with a literature survey of Ethernet and related technology, and of existing research on bandwidth probing. The Optimized Network Engineering Tool (OPNET) was used to implement a network probing testbed, through the development of packet pair/stream modules. Its performance was validated using a baseline scenario (two workstations communicating directly on a wired or wireless channel), and it was shown how two different probe packet sizes allowed link parameters (bandwidth and the inter-packet gap) to be obtained from the packet pair measurements and compared with their known values. More tests were carried out using larger networks of nodes carrying cross-traffic, giving rise to multimodal dispersion distributions which could be automatically classified using data-clustering algorithms. Further studies used the ProbeSim simulation software, which allowed the network simulation and data classification processes to be brought together in a common framework. The probe packet dispersion data were classified dynamically during operation, and a closed-loop algorithm was used to adjust parameters for optimum measurement. The results were accurate for simple wired scenarios, but the technique was shown to be unsuitable for heterogeneous wired-cum-wireless topologies with mixed cross-traffic.
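The packet-pair principle behind the probing modules can be summarised in a few lines: under the simplifying model dispersion = size / capacity + gap, probing with two packet sizes gives two equations from which both the bottleneck bandwidth and the inter-packet gap can be recovered, and clustering separates the cross-traffic-distorted dispersion samples from the clean mode. The sketch below is illustrative only; the variable names and the k-means stand-in are assumptions, not the OPNET or ProbeSim modules developed in the thesis.

```python
# Sketch of the packet-pair idea under the simplifying model
#   dispersion = size / capacity + gap.
# Two probe sizes give two equations, so both link parameters can be recovered.
# Names and the clustering stand-in are illustrative, not the thesis's modules.
import numpy as np
from scipy.cluster.vq import kmeans2  # simple stand-in for the data-clustering stage

def estimate_link(size1, disp1, size2, disp2):
    """Solve disp = size / C + gap for capacity C (bits/s) and gap (s)."""
    capacity = (size2 - size1) / (disp2 - disp1)
    gap = disp1 - size1 / capacity
    return capacity, gap

def dominant_dispersion(samples, k=3):
    """Cross-traffic makes dispersion multimodal; keep the most populated cluster."""
    centroids, labels = kmeans2(np.asarray(samples, dtype=float).reshape(-1, 1), k,
                                minit="points", seed=1)
    counts = np.bincount(labels, minlength=k)
    return float(centroids[np.argmax(counts), 0])
```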
176

Machine learning in compilers

Leather, Hugh January 2011 (has links)
Tuning a compiler so that it produces optimised code is a difficult task because modern processors are complicated; they have a large number of components operating in parallel and each is sensitive to the behaviour of the others. Building analytical models on which optimisation heuristics can be based has become harder as processor complexity increased, and this trend is bound to continue as the world moves towards further heterogeneous parallelism. Compiler writers need to spend months to get a heuristic right for any particular architecture, and these days compilers often support a wide range of disparate devices. Whenever a new processor comes out, even if derived from a previous one, the compiler's heuristics will need to be retuned for it. This is, typically, too much effort and so, in fact, most compilers are out of date. Machine learning has been shown to help; by running example programs, compiled in different ways, and observing how those ways affect program run-time, automatic machine learning tools can predict good settings with which to compile new, as yet unseen programs. The field is nascent, but has demonstrated significant results already and promises a day when compilers will be tuned for new hardware without the need for months of compiler experts' time. Many hurdles still remain, however, and while experts no longer have to worry about the details of heuristic parameters, they must spend their time on the details of the machine learning process instead to get the full benefits of the approach.

This thesis aims to remove some of the aspects of machine-learning-based compilers for which human experts are still required, paving the way for a completely automatic, retuning compiler. First, we tackle the most conspicuous area of human involvement: feature generation. In all previous machine learning works for compilers, the features, which describe the important aspects of each example to the machine learning tools, must be constructed by an expert. Should that expert choose features poorly, they will miss crucial information without which the machine learning algorithm can never excel. We show that not only can we automatically derive good features, but that these features outperform those of human experts. We demonstrate our approach on loop unrolling, and find we do better than previous work, obtaining XXX% of the available performance, more than the XXX% of the previous state of the art.

Next, we demonstrate a new method to efficiently capture the raw data needed for machine learning tasks. The iterative compilation on which machine learning in compilers depends is typically time consuming, often requiring months of compute time. The underlying processes are also noisy, so that most prior works fall into two categories: those which attempt to gather clean data by executing a large number of times, and those which ignore the statistical validity of their data to keep experiment times feasible. Our approach, on the other hand, guarantees clean data while adapting to the experiment at hand, needing an order of magnitude less work than prior techniques.
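A toy rendering of the learn-a-heuristic loop described above, for loop unrolling: time each candidate unroll factor on the training loops (iterative compilation), then fit a model mapping loop features to the empirically best factor for use on unseen code. The features, model choice and function names below are placeholder assumptions, not the automatically generated features the thesis develops.

```python
# Toy version of the learn-a-heuristic loop: measure several unroll factors per
# training loop, then fit a model mapping hand-picked loop features to the best
# factor.  Features and model are placeholders, not the thesis's derived features.
from sklearn.tree import DecisionTreeClassifier

def best_unroll_factor(loop, factors, compile_and_time):
    """Iterative compilation: time each candidate factor and keep the fastest."""
    return min(factors, key=lambda f: compile_and_time(loop, unroll=f))

def train_heuristic(training_loops, factors, compile_and_time, featurize):
    X = [featurize(loop) for loop in training_loops]          # e.g. trip count, body size
    y = [best_unroll_factor(loop, factors, compile_and_time)  # label = empirically best
         for loop in training_loops]
    return DecisionTreeClassifier(max_depth=4).fit(X, y)

# model.predict([featurize(new_loop)]) then suggests an unroll factor for unseen code.
```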
177

Programming support for CSCW : using X windows

Winnett, Maria E. January 1995 (has links)
This thesis presents a model for programming support for synchronous, distributed CSCW (Computer Supported Co-operative Work). Synchronous, distributed CSCW aims to allow groups of people separated by distance to work together in real time as if they were at the same location. The model proposed in the thesis allows an application program to be constructed using user interface components known as "shared widgets". A shared widget displays underlying application data on multiple screens and processes input from multiple users distributed over a network. The distribution of data to and from the users and the underlying network communication is hidden from the application program within the shared widget. The model describes a shared widget as comprising a single "Artefact" and a number of "Views". The Artefact contains the underlying data and the actions that can be performed on it. A View is the presentation of the Artefact on a user's screen. Shared widgets contain a View for each user in the group. Each user can provide input to the Artefact via their own View, and any change made to the Artefact is reflected synchronously in all the Views. The Artefact can also impose a floor control policy to restrict input to a particular user or group of users, by checking each input event against a known floor control value. The model differs from previous approaches to programming support for CSCW in that the distributed nature of the users is hidden from the application programmer within the shared widgets. As a result, the application programmer does not have to be concerned with the processing of input events or the distribution of output to multiple users. The hiding of these implementation details within the shared widgets allows the CSCW application to be constructed in a similar way to a single-user application. An implementation of the shared widget model, using X Windows, is also described in the thesis. Experimental results and observations are given and used to suggest future directions for further research.
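A skeletal rendering of the Artefact/View split and the floor-control check described above, written in Python for brevity rather than the thesis's X Windows implementation; the class and method names are illustrative.

```python
# Skeletal rendering of the Artefact/View model (illustrative names, not the
# thesis's X Windows code).  Each input event is checked against a floor-control
# value before the Artefact changes, and every View is then refreshed so all
# users see the update synchronously.
class Artefact:
    def __init__(self, data, floor_holder=None):
        self.data = data
        self.floor_holder = floor_holder    # None means any user may provide input
        self.views = []

    def attach(self, view):
        self.views.append(view)
        view.render(self.data)

    def handle_input(self, user, new_data):
        if self.floor_holder is not None and user != self.floor_holder:
            return False                    # floor control: input rejected
        self.data = new_data
        for view in self.views:             # reflect the change in every View
            view.render(self.data)
        return True

class View:
    def __init__(self, user):
        self.user = user

    def render(self, data):
        print(f"[{self.user}'s screen] {data}")
```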
178

Motion segmentation of semantic objects in video sequences

Thirde, David J. January 2007 (has links)
The extraction of meaningful objects from video sequences is becoming increasingly important in many multimedia applications such as video compression or video post-production. The goal of this thesis is to review, evaluate and build upon the wealth of recent work on the problem of video object segmentation in the context of probabilistic techniques for generic video object segmentation. Methods are suggested that solve this problem using formal probabilistic learning techniques; this allows principled justification of the methods applied to the problem of segmenting video objects. By applying a simple, but effective, evaluation methodology, the impact of all aspects of the video object segmentation process is quantitatively analysed. This research focuses on the application of feature spaces and probabilistic models for video object segmentation. Subsequently, an efficient region-based approach to object segmentation is described, along with an evaluation of mechanisms for updating such a representation. Finally, a hierarchical Bayesian framework is proposed to allow efficient implementation and comparison of combined region-level and object-level representational schemes.
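As one example of the kind of probabilistic building block such systems rest on, the sketch below makes a per-pixel MAP decision between foreground and background, each modelled as a Gaussian over colour; it is a generic illustration under assumed parameters, not the specific feature spaces or models evaluated in the thesis.

```python
# Generic illustration of a per-pixel Bayesian foreground/background decision,
# each class modelled as a Gaussian over colour.  Parameters are assumed; this is
# not the thesis's specific model.
import numpy as np

def log_gaussian(x, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def segment(frame, fg_model, bg_model, prior_fg=0.3):
    """Return a boolean foreground mask via a pixel-wise MAP decision."""
    log_fg = log_gaussian(frame, *fg_model).sum(axis=-1) + np.log(prior_fg)
    log_bg = log_gaussian(frame, *bg_model).sum(axis=-1) + np.log(1 - prior_fg)
    return log_fg > log_bg

frame = np.random.rand(120, 160, 3)                  # stand-in video frame
mask = segment(frame, fg_model=(0.7, 0.02), bg_model=(0.3, 0.05))
```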
179

Object modelling of temporal changes in Geographical Information Systems

Adamu, Abdul T. January 2003 (has links)
Changes in current temporally enabled GIS systems that have been successfully implemented are based on the snapshot approach, which consists of sequences of discrete images. This approach does not allow either the pattern of changes to be shown or the complexities within the changes to be examined. Also, existing GIS database models cannot represent effectively the history of geographical phenomena. The aim of this research is to develop an object-oriented GIS model (OOGIS) that will represent detailed changes of geographical objects and track the evolution of objects. The detailed changes include the spatial, thematic and temporal aspects, as well as the events and processes that are involved in the changes. These have been addressed, but not implemented, by a number of previous GIS projects. Object tracking and evolution includes not only attribute changes to homogeneous objects, but also major changes that lead to transforming or destroying existing objects and creating new ones. This will allow the pattern of changes of geographical phenomena to be examined by tracking the evolution of geographical objects. The OOGIS model was designed using an object-oriented visual modelling tool and was implemented using an object-oriented programming environment (OOPE) and an object-oriented database system (OODBS). The visual modelling tool for designing the OOGIS model was the Unified Modelling Language (UML), the OOPE for implementing the OOGIS model was Microsoft Visual C++, and the OODBS was Objectivity/DB. The prototype of the investigation has been successfully implemented using a case study of the Royal Borough of Kingston-upon-Thames in the United Kingdom. This research addresses in particular the deficiencies in two existing GIS models that are related to this work. The first model, the triad model, represents the spatial, thematic and temporal dimensions but fails to represent the events and processes connected to the changes. The second model, the event-oriented model, though it represents the events (or processes) related to the changes, stores the changes as attributes of the object. This model is limited to temporally stable (static) changes and cannot be applied to the evolution of geographical phenomena, or to changes that involve several objects sharing common properties and temporal relationships. Moreover, the model does not take into account the evolution (e.g. splitting, transformation) of a specific object, which can involve more than changes to its attributes. Neither model is able to handle, for instance, situations in which an object such as a park disappears to make way for new objects (e.g. roads and new buildings), or in which an agricultural piece of land becomes an industrial lot or a village becomes a city. In this work the construction of a new approach which overcomes these deficiencies is presented. The approach also takes into account associations and relationships between objects, such as inheritance, which are reflected in the object-oriented database. For example, a road can be regarded as a base class from which other classes can be derived, such as motorways, streets and dual roads, which might reflect the evolution of objects in non-homogeneous ways. The object versioning technique in this work allows the versions of a geographical object to be related, thereby creating temporal relationships between them. It requires less data storage, since only the changes are recorded. The association between the versions allows continuous forward and backward movement within the versions, and promotes optimum query mechanisms.
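A minimal sketch of the delta-based object versioning described above: each version stores only the changed attributes plus a link to its predecessor, so an object's full state at any version can be reconstructed by replaying the chain, and the chain itself provides the temporal relationships. Names are illustrative; the thesis implemented this design in Microsoft Visual C++ over Objectivity/DB rather than Python.

```python
# Sketch of delta-based object versioning (illustrative names; the thesis used
# Visual C++ over Objectivity/DB).  Only changed attributes are stored per version,
# and predecessor links give the temporal chain through an object's evolution.
class ObjectVersion:
    def __init__(self, timestamp, event, changes, predecessor=None):
        self.timestamp = timestamp
        self.event = event              # e.g. "split", "transformation", "attribute update"
        self.changes = changes          # only the attributes that changed
        self.predecessor = predecessor  # link backwards through the object's history

    def state(self):
        """Reconstruct the full state at this version by replaying deltas forwards."""
        base = self.predecessor.state() if self.predecessor else {}
        return {**base, **self.changes}

v1 = ObjectVersion("1990", "created", {"use": "park", "area_ha": 4.2})
v2 = ObjectVersion("2001", "transformation", {"use": "housing"}, predecessor=v1)
assert v2.state() == {"use": "housing", "area_ha": 4.2}
```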
180

An energy expert advisor and decision support system for aluminium melting and casting

Yoberd, Belmond January 1994 (has links)
The aim of this project was to develop and implement an expert advisor system to provide information for selecting and scheduling several items of small foundry plant using electric resistance bale-out furnaces, to optimise metal use and reduce energy costs. This involved formulating the procedures and developing a "foundry user friendly" expert system for giving advice to unskilled operatives in what was a complex multi-variable process. This system (FOES) included the investigation and development of an advisory system for the casting of a large number of different objects cast under different operating conditions and electricity tariffs. Knowledge elicitation techniques were developed and used during the complicated knowledge elicitation process. Since this research programme was intended to look at the complete process of melting, holding and pouring of the aluminium alloy, complex electricity tariffs were incorporated into the expert system in order to accurately calculate the energy cost of each process. A sub-section of the FOES system (DAD) could help the unskilled foundry operative identify and eliminate the seven most common aluminium alloy casting defects by using a novel technique of incorporating actual defect photographs which were digitally scanned into the system.
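A small sketch of the tariff-aware energy costing mentioned above, charging each phase of the melt/hold/pour cycle at the rate in force while it runs; the tariff bands, rates and function names are invented for illustration and are not the tariffs or calculations used in FOES.

```python
# Sketch of tariff-aware energy costing: each phase of the melt/hold/pour cycle is
# charged at the rate in force while it runs.  Tariff bands and rates below are
# invented for illustration, not the tariffs incorporated into FOES.
def rate_at(hour, tariff):
    """Return the price (pence per kWh) applying at a given hour of day."""
    for start, end, pence_per_kwh in tariff:
        if start <= hour < end:
            return pence_per_kwh
    raise ValueError("hour not covered by tariff")

def phase_cost(start_hour, duration_h, power_kw, tariff):
    """Energy cost of one phase, stepped through in increments of at most one hour."""
    cost, t, remaining = 0.0, start_hour, duration_h
    while remaining > 0:
        step = min(1.0, remaining)
        cost += power_kw * step * rate_at(int(t) % 24, tariff) / 100.0  # pounds
        t += step
        remaining -= step
    return cost

TARIFF = [(0, 7, 5.0), (7, 24, 12.0)]      # off-peak vs day rate (illustrative)
print(phase_cost(start_hour=6, duration_h=3, power_kw=40, tariff=TARIFF))
```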
