221

Content dissemination in participatory delay tolerant networks

Jahanbakhsh Mashhadi, A. January 2011
As experience with Web 2.0 has demonstrated, users have evolved from being mere consumers of digital content to producers of it. Powerful handheld devices have further pushed this trend, enabling users to consume rich media (for example, through high-resolution displays) as well as create it on the go by means of peripherals such as built-in cameras. As a result, there is an enormous amount of user-generated content, most of which is relevant only within local communities: consider, for example, students advertising events taking place around campus. For such scenarios, where producers and consumers of content belong to the same local community, networks spontaneously formed on top of co-located user devices can offer a valid platform for sharing and disseminating content. Recently, there has been much research in the field of content dissemination in mobile networks, most of which exploits user mobility prediction in order to deliver messages from producer to consumer via spontaneously formed Delay Tolerant Networks (DTNs). Common to most protocols is the assumption that users are willing to participate in the content distribution network; however, because of the energy restrictions of handheld devices, users’ participation cannot be taken for granted. In this thesis, we design content dissemination protocols that leverage information about user mobility, as well as interest, in order to deliver content while avoiding overwhelming uninterested users. We explicitly reason about the battery consumption of mobile devices to model participation, and achieve fairness in terms of workload distribution. We introduce a dynamic priority scheduling framework, which enables the network to allocate the scarce energy resources available to support the delivery of the most desired messages. We evaluate this work extensively by means of simulation on a variety of real mobility traces and social networks, and draw a comparative evaluation against the major related works in the field.
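The dynamic priority scheduling idea can be sketched as a node deciding which messages to forward during a contact, subject to an energy budget. This is a minimal illustration with hypothetical message fields (`id`, `utility`) and a flat per-forward energy cost; the thesis's actual framework is considerably richer.

```python
import heapq

def schedule_forwards(messages, energy_budget, cost_per_forward=1.0):
    """Pick the most-desired messages to forward within an energy budget.

    Each message carries a 'utility' score (e.g. how many interested users
    it could still reach); the node forwards in descending utility until
    its energy allowance for this contact is spent.
    """
    # heapq is a min-heap, so negate the key to pop highest utility first.
    heap = [(-m["utility"], i, m) for i, m in enumerate(messages)]
    heapq.heapify(heap)
    forwarded = []
    while heap and energy_budget >= cost_per_forward:
        _, _, msg = heapq.heappop(heap)
        forwarded.append(msg["id"])
        energy_budget -= cost_per_forward
    return forwarded
```

A budget of two forwards, for instance, selects only the two highest-utility messages and leaves the rest for a later contact.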
222

Addressing the cold start problem in tag-based recommender systems

Zanardi, V. January 2011
Folksonomies have become a powerful tool to describe, discover, search, and navigate online resources (e.g., pictures, videos, blogs) on the Social Web. Unlike taxonomies and ontologies, which impose a hierarchical categorisation on content, folksonomies directly allow end users to freely create and choose the categories (in this case, tags) that best describe a piece of information. However, the freedom afforded to users comes at a cost: as tags are defined informally, the retrieval of information becomes more challenging. Different solutions have been proposed to help users discover content in this highly dynamic setting. However, they have proved to be effective only for users who have already heavily used the system (active users) and who are interested in popular items (i.e., items tagged by many other users). In this thesis we explore principles to help both active users and, more importantly, new or inactive users (cold starters) to find content they are interested in, even when this content falls into the long tail of medium-to-low popularity items (cold start items). We investigate the tagging behaviour of users on content and show how the similarities between users and tags can be used to produce better recommendations. We then analyse how users create new content on social tagging websites and show how the preferences of only a small portion of active users (leaders), responsible for the vast majority of the tagged content, can be used to improve the recommender system's scalability. We also investigate the growth of the number of users, items and tags in the system over time. We then show how this information can be used to decide whether the benefits of an update of the data structures modelling the system outweigh the corresponding cost. In this work we formalise the ideas introduced above and describe their implementation.
To demonstrate the improvements of our proposal in recommendation efficacy and efficiency, we report the results of an extensive evaluation conducted on three different social tagging websites: CiteULike, BibSonomy and MovieLens. Our results demonstrate that our approach achieves higher accuracy than state-of-the-art systems for cold start users and for users searching for cold start items. Moreover, while the accuracy of our technique is comparable to other techniques for active users, the computational cost that it requires is much smaller. In other words, our approach is more scalable and thus more suitable for large and quickly growing settings.
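The user-tag similarity idea can be illustrated with a small sketch: represent each user as a bag of the tags they have applied, then compare profiles with cosine similarity. The helper names are hypothetical, and the thesis's actual similarity measures may differ.

```python
from collections import Counter
from math import sqrt

def tag_profile(tagged_items):
    """Represent a user as a bag of the tags they have applied.

    `tagged_items` is a list of (item, tag) pairs.
    """
    return Counter(tag for _, tag in tagged_items)

def cosine(p, q):
    """Cosine similarity between two sparse tag profiles."""
    dot = sum(p[t] * q[t] for t in p.keys() & q.keys())
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0
```

Two users who share half of their (equally weighted) tags score 0.5; users with disjoint vocabularies score 0.0, which is exactly why cold starters with few tags need additional signals.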
223

Secure message transmission in the general adversary model

Yang, Q. January 2011
The problem of secure message transmission (SMT), due to its importance in both practice and theory, has been studied extensively. Given a communication network in which a sender S and a receiver R are indirectly connected by unreliable and distrusted channels, the aim of SMT is to enable messages to be transmitted from S to R with a reasonably high level of privacy and reliability. SMT must be achieved in the presence of a Byzantine adversary who has unlimited computational power and can corrupt the transmission. In the general adversary model, the adversary is characterized by an adversary structure. We study two different measures of security: perfect (PSMT) and almost perfect (APSMT). Moreover, reliable (but not private) message transmission (RMT) is considered as a specific part of SMT. In this thesis, we study RMT, APSMT and PSMT in two different network settings: point-to-point and multicast. To prepare for the study of SMT in these two network settings, we present some ideas and observations on secret sharing schemes (SSSs), generalized linear codes and critical paths. First, we prove that the error-correcting capability of an almost perfect SSS is the same as that of a perfect SSS. Next, we regard general access structures as linear codes, and introduce some new properties that allow us to construct a pseudo-basis for efficient PSMT protocol design. In addition, we define adversary structures over "critical paths", and observe their properties. With these new developments in place, the contributions on SMT in the aforementioned two network settings can be presented as follows. The results on SMT in point-to-point networks are obtained in three aspects. First, we show a Guessing Attack on some existing PSMT protocols. This attack is critically important to the design of PSMT protocols in asymmetric networks. Second, we determine necessary and sufficient conditions for different levels of RMT and APSMT.
In particular, by applying the result on almost perfect SSSs, we show that relaxing the requirement of privacy does not weaken the minimal network connectivity. Our final contribution in the point-to-point model is to give the first efficient, constant-round PSMT protocols in the general adversary model. These protocols are designed using linear codes and critical paths, and they significantly improve some previous results in terms of communication complexity and round complexity. Regarding SMT in multicast networks, we solve a problem that has been open for over a decade: we show the necessary and sufficient conditions for all levels of SMT in different adversary models. First, we give an Extended Characterization of the network graphs based on our observation of the eavesdropping and separating activities of the adversary. Next, we determine the necessary and sufficient conditions for SMT in the general adversary model with the new Extended Characterization. Finally, we apply the results to the threshold adversary model to completely solve the problem of SMT in general multicast network graphs.
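As background for the secret sharing building block, the following is a textbook (t, n) Shamir scheme over a prime field, the canonical perfect SSS: any t shares reconstruct the secret, while fewer reveal nothing. This is a generic illustration, not the thesis's construction, and the toy prime is an assumption for demonstration.

```python
import random

# A toy prime field for the arithmetic; real deployments pick the field
# to fit the message space.
P = 2**61 - 1

def share(secret, t, n):
    """Split `secret` into n Shamir shares; any t of them reconstruct it."""
    # Random polynomial of degree t-1 with the secret as constant term.
    coeffs = [secret % P] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # Multiply by the modular inverse of the denominator (Fermat).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

With t = 3 and n = 5, any three of the five shares recover the secret exactly, mirroring how SMT protocols spread a message over multiple distrusted channels.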
224

Approximate inference for state-space models

Higgs, M. C. January 2011
This thesis is concerned with state estimation in partially observed diffusion processes with discrete-time observations. This problem can be solved exactly in a Bayesian framework, up to a set of generally intractable stochastic partial differential equations. Numerous approximate inference methods exist to tackle the problem in a practical way. This thesis introduces a novel deterministic approach that can capture non-normal properties of the exact Bayesian solution. The variational approach to approximate inference has a natural formulation for partially observed diffusion processes. In the variational framework, the exact Bayesian solution is the optimal variational solution and, as a consequence, all variational approximations have a universal ordering in terms of optimality. The new approach generalises the current variational Gaussian process approximation algorithm, and therefore provides a method for obtaining super-optimal algorithms in relation to the current state-of-the-art variational methods. Every diffusion process is composed of a drift component and a diffusion component. To obtain a variational formulation, the diffusion component must be fixed. Subsequently, the exact Bayesian solution and all variational approximations are characterised by their drift component. To use a particular class of drift, the variational formulation requires a closed form for the family of marginal densities generated by diffusion processes with drift components from the aforementioned class. This requirement in general cannot be met. In this thesis, it is shown how this coupling can be weakened, allowing for more flexible relations between the variational drift and the variational approximations of the marginal densities of the true posterior process. Based on this insight, a selection of novel variational drift components are proposed.
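The drift/diffusion decomposition the abstract relies on can be written in standard SDE notation (the symbols below are the conventional ones, assumed here rather than taken from the thesis). The latent process follows

```latex
\[
  \mathrm{d}X_t \;=\; f(X_t, t)\,\mathrm{d}t \;+\; \Sigma^{1/2}\,\mathrm{d}W_t ,
\]
```

where \(f\) is the drift, \(\Sigma\) the diffusion, and \(W_t\) a Wiener process. Fixing the diffusion component, each variational approximation is then characterised solely by its own drift \(g\):

```latex
\[
  \mathrm{d}X_t \;=\; g(X_t, t)\,\mathrm{d}t \;+\; \Sigma^{1/2}\,\mathrm{d}W_t .
\]
```

The approximating family is searched over choices of \(g\), which is why a closed form for the marginal densities induced by each drift class matters.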
225

Superpixel lattices

Moore, A. P. January 2011
Superpixels are small image segments that are used in popular approaches to object detection and recognition problems. The superpixel approach is motivated by the observation that pixels within small image segments can usually be attributed the same label. This allows a superpixel representation to produce discriminative features based on data dependent regions of support. The reduced set of image primitives produced by superpixels can also be exploited to improve the efficiency of subsequent processing steps. However, it is common for the superpixel representation to have a different graph structure from the original pixel representation of the image. The first part of the thesis argues that a number of desirable properties of the pixel representation should be maintained by superpixels and that this is not possible with existing methods. We propose a new representation, the superpixel lattice, and demonstrate its advantages. The second part of the thesis investigates incorporating a priori information into superpixel segmentations. We learn a probabilistic model that describes the spatial density of object boundaries in the image. We demonstrate our approach using road scene data and show that our algorithm successfully exploits the spatial distribution of object boundaries to improve the superpixel segmentation. The third part of the thesis presents a globally optimal solution to our superpixel lattice problem in either the horizontal or vertical direction. The solution makes use of a Markov Random Field formulation where the label field is guaranteed to be a set of ordered layers. We introduce an iterative algorithm that uses this framework to learn colour distributions across an image in an unsupervised manner. We conclude that our approach achieves comparable or better performance than competing methods and that it confers several additional advantages.
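One way to see why an ordered-layer formulation admits a globally optimal answer is the classic dynamic-programming search for a minimal-cost left-to-right path through a boundary-cost map. The sketch below is only an analogy to the MRF construction described above, not the thesis's algorithm, and the cost grid is a stand-in for learned boundary probabilities.

```python
def min_cost_horizontal_path(cost):
    """Find a minimal-cost left-to-right path through a cost grid.

    `cost` is a rows x cols grid where low values mark likely object
    boundaries; the returned row index per column is one horizontal
    'layer' boundary, found globally optimally by dynamic programming.
    """
    rows, cols = len(cost), len(cost[0])
    dp = [row[:] for row in cost]  # dp[r][c]: best cost of a path ending at (r, c)
    for c in range(1, cols):
        for r in range(rows):
            dp[r][c] += min(dp[pr][c - 1]
                            for pr in (r - 1, r, r + 1) if 0 <= pr < rows)
    # Backtrack from the cheapest endpoint in the last column.
    path = [min(range(rows), key=lambda r: dp[r][cols - 1])]
    for c in range(cols - 1, 0, -1):
        r = path[-1]
        path.append(min((pr for pr in (r - 1, r, r + 1) if 0 <= pr < rows),
                        key=lambda pr: dp[pr][c - 1]))
    path.reverse()
    return path
```

Because each column's best cost depends only on the previous column, the optimum over all exponentially many paths is found in O(rows x cols) time.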
226

Exploring sparse, unstructured video collections of places

Tompkin, J. H. January 2013
The abundance of mobile devices and digital cameras with video capture makes it easy to obtain large collections of video clips that contain the same location, environment, or event. However, such an unstructured collection is difficult to comprehend and explore. We propose a system that analyses collections of unstructured but related video data to create a Videoscape: a data structure that enables interactive exploration of video collections by visually navigating — spatially and/or temporally — between different clips. We automatically identify transition opportunities, or portals. From these portals, we construct the Videoscape, a graph whose edges are video clips and whose nodes are portals between clips. Now structured, the videos can be interactively explored by walking the graph or by geographic map. Given this system, we gauge preference for different video transition styles in a user study, and generate heuristics that automatically choose an appropriate transition style. We evaluate our system using three further user studies, which allows us to conclude that Videoscapes provides significant benefits over related methods. Our system leads to previously unseen ways of interactive spatio-temporal exploration of casually captured videos, and we demonstrate this on several video collections.
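The Videoscape graph described above can be sketched as a simple adjacency structure in which portals are nodes and clips are edges. All names here are illustrative, not from the thesis.

```python
from collections import defaultdict

class Videoscape:
    """Minimal sketch of the Videoscape graph: nodes are portals (matched
    moments across clips) and edges are the video clips connecting them."""

    def __init__(self):
        self.adj = defaultdict(list)  # portal -> [(neighbour portal, clip id)]

    def add_clip(self, clip_id, portal_a, portal_b):
        # A clip runs between two portals; exploration may traverse either way.
        self.adj[portal_a].append((portal_b, clip_id))
        self.adj[portal_b].append((portal_a, clip_id))

    def clips_from(self, portal):
        """Clips a viewer can transition into from a given portal."""
        return [clip for _, clip in self.adj[portal]]
```

Interactive exploration is then just a walk on this graph: at each portal the viewer picks one of the outgoing clips, with the system rendering an appropriate transition between them.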
227

Goal-driven collaborative filtering

Jambor, T. January 2013
Recommender systems aim to identify interesting items (e.g. movies, books, websites) for a given user, based on their previously expressed preferences. As recommender systems grow in popularity, a notable divergence emerges between research practices and the reality of deployed systems: when recommendation algorithms are designed, they are evaluated in a relatively static context, mainly concerned with a predefined error measure. This approach disregards the fact that a recommender system exists in an environment where there are a number of factors that the system needs to satisfy; some of these factors are dynamic and can only be tackled over time. Thus, this thesis studies recommender systems from a goal-oriented point of view, where we define the recommendation goals and their associated measures, and build the system accordingly. We first start with the argument that a single fixed measure, used to evaluate the system’s performance, might not be able to capture the multidimensional quality of a recommender system, since different contexts require different performance measures. We propose a unified error minimisation framework that flexibly covers various (directional) risk preferences. We then extend this by simultaneously optimising multiple goals, i.e., not only considering the predicted preference scores (e.g. ratings) but also dealing with additional operational or resource-related requirements such as the availability, profitability or usefulness of a recommended item. We demonstrate multiple objectives through another example in which a number of requirements, namely diversity, novelty and serendipity, are optimised simultaneously. At the end of the thesis, we deal with time-dependent goals. To achieve complex goals such as keeping the recommender model up-to-date over time, we consider a number of external requirements.
Generally, these requirements arise from the physical nature of the system, such as available computational resources or available storage space. Modelling such a system over time requires describing the system dynamics as a combination of the underlying recommender model and its users’ behaviour. We propose to solve this problem by applying the principles of Modern Control Theory to construct and maintain a stable and robust recommender system for dynamically evolving environments. The conducted experiments on real datasets demonstrate that all the proposed approaches are able to cope with multiple objectives in various settings. These approaches offer solutions to a variety of scenarios that recommender systems might face.
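The multiple-objective idea can be illustrated by a simple weighted scalarisation of per-item objectives. This is a hypothetical sketch of the goal-mixing principle, not the optimisation the thesis actually performs, and the objective names are assumptions.

```python
def rank_items(candidates, weights):
    """Blend several per-item objectives into one score and rank by it.

    `candidates` maps item -> dict of objective scores (e.g. predicted
    rating, novelty, diversity contribution), each assumed normalised to
    [0, 1]. `weights` expresses the current goal mix.
    """
    def blended(item):
        scores = candidates[item]
        return sum(weights[k] * scores.get(k, 0.0) for k in weights)
    return sorted(candidates, key=blended, reverse=True)
```

Shifting weight from accuracy towards novelty reorders the list without retraining anything, which is the appeal of treating goals as first-class, tunable inputs.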
228

Bayesian learning in financial markets: economic news and high-frequency European sovereign bond markets

Mirghaemi, M. January 2012
No description available.
229

Trust in web geographical information systems for public participation

Skarlatidou, A. January 2012
Maps have a long history in the communication of spatial information, yet Web Geographical Information Systems (Web GIS) have expanded map use to a wide variety of contexts and to people who do not have knowledge of spatial and GIS issues (non-experts). This non-expert interaction with Web GIS generates Human-Computer Interaction (HCI) implications. While HCI elements such as usability have attracted the attention of GIS research, additional HCI aspects, such as trust, have been overlooked. The significance of trust in non-expert interaction with Web GIS becomes more apparent as these tools are used to engage the public at different levels of public participation. The public participation literature suggests that when Information and Communication Technology (ICT) mediums such as Web GIS are used to engage the public, it is essential that they improve public knowledge and trust. This thesis researches how this can be achieved, using the case of the site selection of a nuclear waste repository in the UK. Firstly, the thesis presents an HCI-based investigation of existing Web GIS applications to understand the functional and perceptual attributes that influence non-experts' trust perceptions, and introduces a set of trust guidelines. These guidelines inform the development and design of the PE-Nuclear tool, a Web GIS to inform lay people about the site selection process of a nuclear waste repository in the UK. Secondly, the Mental Models approach is used to support the development of the PE-Nuclear tool's information content based on lay people's mental models, needs and expectations. Finally, the tool is evaluated to investigate separately whether the trust guidelines and the information content improve public trust and knowledge.
The research findings and methodological framework provide a holistic approach for the development of Web GIS applications, which have the potential to enhance public knowledge and help non-experts develop rational trust perceptions, protecting them from unethical and inappropriate use of the technology. It should further be noted that this research identified critical gaps and methodological implications that should inform future GIS research, especially of an HCI character. Last but not least, due to the multidisciplinary nature of this research, the scientific knowledge gained contributes to other fields such as Risk Communication and Public Participation, and provides important lessons to inform the current Nuclear Waste Management Programme in the UK.
230

Developing a novel method for homology detection of transmembrane proteins

Hurwitz, N. January 2013
Analysis of the complete genomic sequences of several organisms indicates that 20-25% of all genes code for transmembrane proteins (Jones, 1998; Wallin and von Heijne, 1998), yet only a very small number of transmembrane 3D structures are known. Hence, it is of great importance to develop theoretical methods capable of predicting transmembrane protein structure and function based on protein sequence alone. To address this, we sought to devise a systematic and high-throughput method for identifying homologous transmembrane proteins. Since protein structure is more evolutionarily conserved than amino acid sequence, we predicted that adding structural information to simple sequence alignment would improve homology detection of transmembrane proteins. In the present work, we describe the development of a search method that combines sequence alignment with structural information. In our method, the initial sequence alignment searches are performed using PSI-BLAST. Then profiles derived from the multiple sequence alignments are input into a neural network, developed in this work, to predict which transmembrane residues are buried (in the core of the helix bundle) or exposed (to the lipid environment). A maximum accuracy of 86% was achieved. Moreover, for almost half of the query set, the predicted residue orientation was more than 70% accurate. In the last step of the work presented here, the predicted helix locations, residue orientations and loop length scores are added to the PSI-BLAST E-value to create a ‘combined’ classifier. A linear equation was built for calculating the ‘combined’ classifier score. Our method was evaluated using two databases of proteins: Pfam and GPCRDB. The Pfam database was chosen because transmembrane proteins in this database have been classified into various families. GPCRDB was employed because this database, though narrow, is well studied and maintained.
Before building the ‘combined’ classifier, PSI-BLAST sequence alignment was benchmarked using the Pfam database. We found that our ‘combined’ classifier, compared to a classifier based solely on PSI-BLAST, produced more true positives with fewer false positives when tested on GPCRDB, and could differentiate between GPCRDB families. However, our ‘combined’ classifier did not improve homology detection when searching transmembrane proteins from the Pfam database. A comparison of our ‘combined’ classifier with two other published methods suggested that profile-profile based searches could be more powerful than profile-sequence based searches, even after the addition of structural information as described here. In light of our study, we propose that combining structural information with profile-profile sequence alignment into a ‘combined’ classifier could result in a search method superior to any existing one for detecting homologous transmembrane proteins.
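The ‘combined’ classifier is described as a linear equation over the PSI-BLAST E-value and structural scores. The sketch below shows one hypothetical form of such a linear blend; the weights and the log transform of the E-value are assumptions for illustration, not the fitted values from the thesis.

```python
import math

def combined_score(evalue, helix_score, orientation_score, loop_score,
                   w=(1.0, 0.5, 0.5, 0.25)):
    """Linear blend of sequence and structural evidence for one hit.

    Folds structural agreement scores (assumed in [0, 1]) into the
    PSI-BLAST E-value; higher combined scores mean stronger candidates.
    """
    # Smaller E-values mean stronger hits, so use -log10(E) as the base term
    # (clamped to avoid log of zero).
    base = -math.log10(max(evalue, 1e-300))
    terms = (base, helix_score, orientation_score, loop_score)
    return sum(wi * ti for wi, ti in zip(w, terms))
```

A hit with E-value 1e-10 and good structural agreement thus outranks a hit with a marginally better E-value but poor predicted helix placement, which is the intended effect of adding structure to the search.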
