Video/Image Processing Algorithms for Video Compression and Image Stabilization Applications

Tsoligkas, Nick A. January 2009 (has links)
As the use of video becomes increasingly popular and widespread in the areas of broadcast services, the internet, entertainment and security-related applications, providing fast, automated and effective techniques to represent video based on its content, such as objects and meanings, is an important topic of research. In many applications, removing the hand-shaking effect and making video images stable and clear, or decomposing (and then transmitting) the video content as a collection of meaningful objects, is a necessity. Therefore automatic techniques for video stabilization and for extracting objects from video data, as well as for transmitting their shapes, motion and texture at very low bit rates over error-prone networks, are desired. In this thesis the design of a new low bit rate codec is presented, together with a method for video stabilization. The main technical contributions of this work are as follows. Firstly, an adaptive change detection algorithm identifies the objects from the background. In the first stage, the luminance difference between frames is modelled so as to separate contributions caused by noise and illumination variations from those caused by meaningful moving objects. In the second stage a segmentation tool based on image blocks, histograms and clustering algorithms segments the difference image into areas corresponding to objects. In the third stage morphological edge detection, contour analysis and object labelling are the main tasks of the proposed segmentation algorithm. Secondly, a new low bit rate codec is designed and analyzed based on the proposed segmentation tool. The estimated motion vectors inside the change detection mask, the corner points of the shapes, and the residual information inside the motion-failure regions are transmitted to the decoder using different coding techniques, thus achieving efficient compression.
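The first-stage difference modelling described above can be illustrated with a minimal sketch; the function name, noise estimate and threshold factor here are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def change_mask(prev, curr, noise_std=2.0, k=3.0):
    """First-stage change detection: luminance differences below a
    noise-derived threshold are treated as noise or illumination
    variation; the remaining pixels are candidate moving-object pixels."""
    diff = np.abs(curr.astype(float) - prev.astype(float))
    return diff > k * noise_std

prev = np.zeros((8, 8))
curr = prev.copy()
curr[2:5, 2:5] = 50.0   # a bright "moving object" region
mask = change_mask(prev, curr)
```

In the thesis this mask is then refined by block-based segmentation and morphological processing; the sketch covers only the thresholding step.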
Thirdly, a novel approach of estimating and removing unwanted video motion, which does not require accelerators or gyros, is presented. The algorithm estimates the camera motion from the incoming video stream and compensates for unwanted translation and rotation. A synchronization unit supervises and generates the stabilized video sequence. The reliability of all the proposed algorithms is demonstrated by extensive experimentation on various video shots.
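The stabilization idea of estimating camera motion directly from the incoming frames (no gyros) can be sketched for pure translation using phase correlation; this is a generic technique chosen for illustration, not necessarily the estimator used in the thesis:

```python
import numpy as np

def estimate_shift(ref, frame):
    """Estimate the global translation between two frames via phase
    correlation; returns the (dy, dx) to apply to `frame` to align it
    with `ref`."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2: dy -= h   # wrap into signed range
    if dx > w // 2: dx -= w
    return dy, dx

def stabilize(ref, frame):
    """Compensate the unwanted translation by shifting the frame back."""
    return np.roll(frame, estimate_shift(ref, frame), axis=(0, 1))

ref = np.zeros((16, 16)); ref[4:8, 4:8] = 1.0
frame = np.roll(ref, (1, 2), axis=(0, 1))   # simulated hand shake
stable = stabilize(ref, frame)
```

Rotation compensation, as addressed in the thesis, would require an additional rotational model on top of this translational sketch.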

Initiating system innovation : a technological frames analysis of the origins of groupware projects

Lin, Angela January 2000 (has links)
This research explores the origins of information systems innovation through two case studies of groupware projects. The thesis argues that the study of the origins of projects has an important role in explaining the subsequent events during the more formal implementation activity. This is particularly so in the case of groupware, where a substantial literature has emerged describing and analysing the unpredicted outcomes of such projects. The research is based on a model of systems adoption as a continuous process, in which the choices and decisions taken at an early stage with regard to technology have significant effects on adoption across time. The analysis of the early stages of a project can be significant in explaining subsequent levels and degrees of system use. It is argued that in order to provide a more complete description of the adoption process one needs to go back to the origins of a project and to examine the choices and decisions made during that period. This period of initiation of groupware projects has received little attention in CSCW research and scarcely more in the broader IS field. The purpose of this thesis is both to address this absence of scrutiny and to argue for its significance. The thesis presents a detailed review of CSCW and related literature, and explores how and to what extent the initiation of projects has been considered and addressed within this field. The thesis then develops a research framework to explore initiation, based on a synthesis of the contextualist approach with a cognitive model based on Orlikowski's notion of technological frames. The thesis then applies the framework in the analysis of two interpretive case studies of the initiation of groupware projects. These case studies were conducted in the British Oxygen Company (BOC) and the Bank for International Settlements (BIS).
These studies produce an account of initiation activity that offers a particular emphasis on how time plays multiple roles in the process, linking content, context and process. These roles include, in addition to conventional 'clock time', time as an indicator, time as an era, and time as measurement and control. The findings also illustrate the duality of individuals' technological frames; that is, individuals' frames are both the basis and the consequence of the choices and decisions made by those same individuals. The analysis explores how and to what extent changes in the organisational or cultural setting (context and process) can have an impact on frames of reference, and how they are shared and communicated.

Initialisation Problems in Feature Composition

Nhlabatsi, Armstrong January 2009 (has links)
Composing features that have inconsistent requirements may lead to feature interactions that violate requirements satisfied by each feature in isolation. These interactions manifest themselves as conflicts on shared resources. Arbitration is a common approach to resolving such conflicts that uses prioritisation to decide which feature has access to resources when there is a conflict. However, arbitration alone does not guarantee satisfaction of the requirement of the feature that eventually gains access to a resource. This is because arbitration does not take into account that the resource may be in a state that is inconsistent with that expected by the feature. We call this the initialisation problem. In this thesis we propose an approach to addressing the initialisation problem which combines arbitration with contingencies. Contingency means having several specifications per feature satisfying the same requirement, depending on the current resource state. We illustrate and validate our approach by applying it to resolving conflicts between features in smart home and automotive domains. The validation shows that contingencies complement arbitration by enabling satisfaction of the requirement of the feature that eventually gains access to a shared resource, regardless of the current state of the resource. The main contribution of this thesis is an approach to analysing initialisation concerns in feature composition. At the core of our approach is an explicit consideration of all possible states of a resource as potential initial states. Given each initial state we then derive corresponding specifications that would enable a feature to satisfy its requirement in those states. We show that our approach to initialisation problems is relevant to addressing the feature interaction problem by characterising some types of conflicts as initialisation concerns.
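The interplay of arbitration and contingencies can be sketched as follows; the smart-home feature names, priorities and specification strings are invented for illustration:

```python
def arbitrate(features):
    """Arbitration: the highest-priority feature (lowest number) wins
    access to the shared resource."""
    return min(features, key=lambda f: f["priority"])

def select_spec(feature, resource_state):
    """Contingency: pick the specification matching the resource's
    current state, so the requirement holds regardless of that state."""
    return feature["contingencies"][resource_state]

heating = {"priority": 1, "contingencies": {
    "window_open": "close window, then heat",
    "window_closed": "heat directly"}}
ventilation = {"priority": 2, "contingencies": {
    "window_open": "keep window open",
    "window_closed": "open window"}}

winner = arbitrate([heating, ventilation])
action = select_spec(winner, "window_open")
```

Arbitration alone would let `heating` run even with the window open; the contingency supplies the state-appropriate specification, which is the initialisation concern the thesis addresses.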

Active group communication

Graves, Rory January 2001 (has links)
This thesis explores the application of active networking (A.N.) to group communication. A.N. adds programmable computation platforms to the nodes that form the switching fabric of the network. By leveraging these new facilities we can make the development of complex protocols easier and provide new 'value-added' services to the network infrastructure. Active networking is a relatively new research area: there are popular toolkits, but no overall agreed standards. We examine this field in detail, exploring its benefits and pitfalls. We explore the problems of simulating A.N. and possible solutions, and describe NetSim, a network simulator we have developed to meet the goals of A.N. simulation. Writing A.N. protocols is far more difficult than writing conventional protocols: we must consider not only the end-points of communication, but also the switching hardware within the network. We show that the inherent complexity can be addressed by abstraction and the use of frameworks. We demonstrate AFrame, an active service that supplies both communication and local information to active agents at each node. This framework abstracts and hides some of the complexities of communication, and we use it to develop new information agents and higher-level protocols. We have constructed the Active Multicast Framework (AMF) to show how programming techniques (abstraction and object orientation) can be applied to the complex area of group communication. We use A.N. to simplify and improve current multicast and group communication protocols, and show implementations of best-effort, reliable and ordered multicast. We leverage AMF to show that novel protocols can be developed: using processing power within the network allows us to develop new breeds of protocols. To show this we developed ATOM, an efficient, fair, totally ordered multicast implementation. We achieve our goal of demonstrating the benefits of applying A.N. technology to group communication and the strengths of good frameworks.
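Totally ordered multicast, as implemented by ATOM, can be sketched with a classic sequencer scheme; this is a textbook illustration of total ordering, not ATOM's actual (in-network) design:

```python
class Sequencer:
    """Stamps each multicast message with a global sequence number."""
    def __init__(self):
        self.next_seq = 0
    def stamp(self, msg):
        seq, self.next_seq = self.next_seq, self.next_seq + 1
        return (seq, msg)

class Receiver:
    """Delivers messages strictly in stamp order, buffering any gaps,
    so all receivers see the same total order."""
    def __init__(self):
        self.expected = 0
        self.buffer = {}
        self.delivered = []
    def receive(self, stamped):
        seq, msg = stamped
        self.buffer[seq] = msg
        while self.expected in self.buffer:
            self.delivered.append(self.buffer.pop(self.expected))
            self.expected += 1

seq = Sequencer()
a, b, c = seq.stamp("a"), seq.stamp("b"), seq.stamp("c")
r = Receiver()
for stamped in (c, a, b):   # messages arrive out of order
    r.receive(stamped)
```

An active-network version can place the ordering logic on switching nodes inside the network rather than at a single end-host sequencer, which is the kind of improvement the thesis pursues.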

Analytic knowledge discovery techniques for ad-hoc information retrieval and automatic text summarization

Goyal, Pawan January 2011 (has links)
Information retrieval is broadly concerned with the problem of automated searching for information within some document repository to support various information requests by users. The traditional retrieval frameworks work on the simplistic assumptions of “word independence” and “bag-of-words”, giving rise to problems such as “term mismatch” and “context independent document indexing”. Automatic text summarization systems, which use the same paradigm as that of information retrieval, also suffer from these problems. The concept of “semantic relevance” has also not been formulated in the existing literature. This thesis presents a detailed investigation of the knowledge discovery models and proposes new approaches to address these issues. The traditional retrieval frameworks do not succeed in defining the document content fully because they do not process the concepts in the documents; only the words are processed. To address this issue, a document retrieval model has been proposed using concept hierarchies, learnt automatically from a corpus. A novel approach to give a meaningful representation to the concept nodes in a learnt hierarchy has been proposed using a fuzzy logic based soft least upper bound method. A novel approach of adapting the vector space model with dependency parse relations for information retrieval has also been developed. A user query for information retrieval (IR) applications may not contain the most appropriate terms (words) as actually intended by the user. This is usually referred to as the term mismatch problem and is a crucial research issue in IR. To address this issue, a theoretical framework for Query Representation (QR) has been developed through a comprehensive theoretical analysis of a parametric query vector. A lexical association function has been derived analytically using the relevance criteria. The proposed QR model expands the user query using this association function.
A novel term association metric has been derived using the Bernoulli model of randomness. The derived metric has been used to develop a Bernoulli Query Expansion (BQE) model. The Bernoulli model of randomness has also been extended to the pseudo relevance feedback problem by proposing a Bernoulli Pseudo Relevance (BPR) model. In the traditional retrieval frameworks, the context in which a term occurs is mostly overlooked in assigning its indexing weight. This results in context independent document indexing. To address this issue, a novel Neighborhood Based Document Smoothing (NBDS) model has been proposed, which uses the lexical association between terms to provide a context sensitive indexing weight to the document terms, i.e. the term weights are redistributed based on the lexical association with the context words. To address the “context independent document indexing” problem for the sentence extraction based text summarization task, a lexical association measure derived using the Bernoulli model of randomness has been used. A new approach using the lexical association between terms has been proposed to give a context sensitive weight to the document terms, and these weights have been used for the sentence extraction task. Developed analytically, the proposed QR, BQE, BPR and NBDS models provide a proper mathematical framework for query expansion and document smoothing techniques, which have largely been heuristic in the existing literature. Being developed in the generalized retrieval framework, as also proposed in this thesis, these models are applicable to all of the retrieval frameworks. These models have been empirically evaluated over the benchmark TREC datasets and have been shown to provide significantly better performance than the baseline retrieval frameworks, without adding significant computational or storage burden.
The Bernoulli model applied to the sentence extraction task has also been shown to enhance the performance of the baseline text summarization systems over the benchmark DUC datasets. The theoretical foundations along with the empirical results verify that the proposed knowledge discovery models in this thesis advance the state of the art in the field of information retrieval and automatic text summarization.
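The general shape of association-based query expansion, of which BQE is an analytically derived instance, can be sketched with a simple co-occurrence count standing in for the thesis's Bernoulli-derived metric; the corpus and scoring here are toy assumptions:

```python
from collections import Counter

docs = [["ocean", "tide", "wave"],
        ["ocean", "wave", "surf"],
        ["stock", "market", "wave"]]

def association(term, docs):
    """Score each other term by how often it co-occurs with `term`
    (a crude stand-in for an analytically derived association metric)."""
    counts = Counter()
    for d in docs:
        if term in d:
            counts.update(w for w in set(d) if w != term)
    return counts

def expand(query, docs, k=2):
    """Append the k most strongly associated new terms to the query,
    mitigating term mismatch between query and document vocabulary."""
    scores = Counter()
    for t in query:
        scores.update(association(t, docs))
    extra = [w for w, _ in scores.most_common() if w not in query][:k]
    return query + extra

expanded = expand(["ocean"], docs)
```

The point of the thesis's contribution is that the association function is derived from a probabilistic model rather than chosen heuristically as done here.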

The Effectiveness of Software Diversity

van der Meulen, Meine Jochum Peter January 2008 (has links)
No description available.

The use of alternative data models in data warehousing environments

Gonzalez Castro, Victor January 2009 (has links)
Data Warehouses are increasing their data volume at an accelerated rate; high disk space consumption, slow query response time and complex database administration are common problems in these environments. The lack of a proper data model and an adequate architecture specifically targeted towards these environments are the root causes of these problems. Inefficient management of stored data includes duplicate values at column level and poor management of data sparsity, which derives from a low data density and affects the final size of Data Warehouses. It has been demonstrated that the Relational Model and Relational technology are not the best techniques for managing duplicates and data sparsity. The novelty of this research is to compare some data models considering their data density and their data sparsity management to optimise Data Warehouse environments. The Binary-Relational, the Associative/Triple Store and the Transrelational models have been investigated and, based on the research results, a novel Alternative Data Warehouse Reference architectural configuration has been defined. For the Transrelational model, no database implementation existed. Therefore it was necessary to develop an instantiation of its storage mechanism, and as far as could be determined this is the first public domain instantiation available of the storage mechanism for the Transrelational model.
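The column-level duplicate problem the thesis targets is commonly attacked by storing each distinct value once, as in dictionary encoding; this sketch illustrates the general idea, not the specific storage mechanism of any of the investigated models:

```python
def dictionary_encode(column):
    """Store each distinct column value once and represent the column
    as small integer ids, eliminating duplicate values at column level."""
    values, ids, index = [], [], {}
    for v in column:
        if v not in index:
            index[v] = len(values)
            values.append(v)
        ids.append(index[v])
    return values, ids

city = ["London", "Paris", "London", "London", "Paris"]
values, ids = dictionary_encode(city)
```

With low-cardinality warehouse columns the id list plus the small dictionary occupies far less space than repeating each value, which is the density argument the thesis develops for non-relational models.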

View-invariant Human Action Recognition via Probabilistic Graphical Models

Ji, Xiaofei January 2010 (has links)
No description available.

Action semantics of unified modeling language

Yang, Mikai January 2009 (has links)
The Unified Modeling Language, or UML, as a visual and general purpose modeling language, has been around for more than a decade, gaining increasingly wide application and becoming the de-facto industrial standard for modeling software systems. However, the dynamic semantics of UML behaviours are only described in natural language. Specification in natural language inevitably involves vagueness, hinders formal reasoning and discourages mechanical language implementation. Such semi-formality of UML causes wide concern among researchers, including us. The formal semantics of UML demands more readability and extensibility due to its fast evolution and a wider range of users. Therefore we adopt Action Semantics (AS), mainly created by Peter Mosses, to formalize the dynamic semantics of UML, because AS can satisfy these needs advantageously compared to other frameworks. Instead of defining UML directly, we design an action language, called ALx, and use it as the intermediary between a typical executable UML and its action semantics. ALx is highly heterogeneous, combining the features of Object Oriented Programming Languages, Object Query Languages, Model Description Languages and more complex behaviours like state machines. Adopting AS to formalize such a heterogeneous language is in turn of significance in exploring the adequacy and applicability of AS. In order to give assurance of the validity of the action semantics of ALx, a prototype ALx-to-Java translator is implemented, underpinned by our formal semantic description of the action language and using the Model Driven Approach (MDA). We argue that MDA is a feasible way of implementing this source-to-source language translator because the cornerstone of MDA, UML, is adequate to specify the static aspect of programming languages, and MDA provides executable transformation languages to model mapping rules between languages.
We also construct a translator using a commonly-used conventional approach, in which a tool is employed to generate the lexical scanner and the parser, and then other components, including the type checker, symbol table constructor, intermediate representation producer and code generator, are coded manually. We then compare the conventional approach with MDA. The result shows that MDA has advantages over the conventional method in the aspect of code quality but is inferior to the latter in terms of system performance.

Usability investigation of anthropomorphic user interface feedback

Murano, Pietro January 2009 (has links)
This research has investigated the usability of anthropomorphic feedback. This investigation has been very important and useful for the research community and user interface developers because knowing definitively whether an anthropomorphic type of feedback is usable or not is an unresolved issue. Therefore this research aimed to find out if anthropomorphic feedback is indeed more effective and more satisfying to use than conventional user interface feedback. It was also the aim of this research to devise a model for appropriate use of anthropomorphic feedback. The research used a hypothetico-inductive approach and, in conjunction with this, experimental techniques; empirical data was collected and analysed. The body of research conducted has made six novel and significant contributions to knowledge. The first contribution concerns the fact that this research began by looking at contextual and domain issues concerning feedback types and their appropriateness. However, following several experiments, the results suggested that context and domain were not the main factors behind the results obtained, nor behind the results obtained by other researchers. The second contribution concerns the novel way the experiments and tasks were designed and executed. Having concluded that the domain and context were not the crucial elements to consider, other issues were therefore investigated. These concerned the possibility that other factors at the user interface (and not the actual anthropomorphic appearance) were affecting the results. The aspects specifically investigated were Cognitive Load Theory, Baddeley's Working Memory Theory and the Theory of Affordances. The investigation suggests that Cognitive Load Theory and Baddeley's Working Memory Theory do not explain the results obtained. These two analyses constitute two further contributions to knowledge, as such analyses have not been conducted before on these issues.
However, the Theory of Affordances does explain the results of the suite of experiments conducted, and also the results of a sample of research conducted by other authors. This analysis adds a further contribution to knowledge, suggesting that the facilitation of various strands of affordances is key to the usability of an interface, rather than the feedback being anthropomorphic. The last contribution to knowledge of this research is the proposal of a tentative model concerning user interface feedback and the Theory of Affordances.