  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Assisted Reuse of Pattern-Based Composition Knowledge for Mashup Development

Roy Chowdhury, Soudip January 2013 (has links)
The first generation of the World Wide Web (WWW) enabled users to have instantaneous access to a large diversity of knowledge. The second generation of the WWW (Web 2.0) brought a fundamental change in the way people interact with and through the World Wide Web, making it a platform not only for communication and information sharing but also for software development (e.g., web service composition). Web mashup, or mashup development, is a Web 2.0 development approach in which users are expected to create applications by combining multiple data sources, application logic, and UI components from the web to cater for their situational application needs. In reality, however, creating even a simple mashup application is a complex task that can only be managed by skilled developers. Examples of ready mashup models are one of the main sources of help for users who do not know how to design a mashup, provided that suitable examples can be found (examples that have an analogy with the modeling situation faced by the user). Tutorials, expert colleagues or friends, and, of course, Google are also typical means of finding help. However, searching for help does not always succeed, and retrieved information is seldom immediately usable as is, since the retrieved pieces of information are not contextual, i.e., not immediately applicable to the given modeling problem. Motivated by the development challenges faced by naive users of existing mashup tools, in this thesis we propose to aid such users by enabling assisted reuse of pattern-based composition knowledge. We show how it is possible to effectively assist these users in their development task with contextual, interactive recommendations of composition knowledge in the form of mashup model patterns. We study a set of recommendation algorithms with different levels of performance and describe a flexible pattern weaving approach for the one-click reuse of patterns.
We prove the generality of our algorithms and approach by implementing two prototype tools for two different mashup platforms. Finally, we validate the usefulness of our assisted development approach by performing thorough empirical tests and two user studies with our prototype tools.
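The core idea of contextual pattern recommendation can be illustrated with a minimal sketch (not the thesis' actual algorithms): rank a library of mashup model patterns by how well each one overlaps with the user's partial composition while still adding new components. The component names and the pattern library below are invented for illustration.

```python
# Illustrative sketch of context-aware pattern recommendation: rank
# patterns by overlap with the user's partial mashup model, preferring
# patterns that also extend it with new components.

def recommend(partial, library, top_k=3):
    """Return up to top_k pattern names, best match first."""
    scored = []
    for name, components in library.items():
        overlap = len(partial & components)   # shared components: context fit
        novel = len(components - partial)     # new components: usefulness
        if overlap and novel:                 # must match AND extend
            scored.append((overlap, -novel, name))
    scored.sort(reverse=True)
    return [name for _, _, name in scored[:top_k]]

library = {
    "geo-mashup": {"map", "geocoder", "rss-feed"},
    "news-dashboard": {"rss-feed", "filter", "list-view"},
    "photo-wall": {"flickr-source", "grid-view"},
}
partial = {"rss-feed", "map"}
print(recommend(partial, library))  # → ['geo-mashup', 'news-dashboard']
```

A real recommender would of course match graph structure (connectors, data mappings) rather than bare component sets, but the ranking-by-context idea is the same.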

Photo Indexing and Retrieval based on Content and Context

Broilo, Mattia January 2011 (has links)
The widespread use of digital cameras, as well as the increasing popularity of online photo sharing, has led to the proliferation of networked photo collections. Handling such a huge amount of media without imposing complex and time-consuming archiving procedures is highly desirable and poses a number of interesting research challenges to the media community. In particular, the definition of suitable content-based indexing and retrieval methodologies is attracting the effort of a large number of researchers worldwide, who have proposed various tools for automatic content organization, retrieval, search, annotation, and summarization. In this thesis, we present and discuss three different approaches for content- and context-based retrieval. The main focus is on personal photo albums, which can be considered one of the most challenging application domains in this field, due to the largely unstructured and variable nature of the datasets. The methodologies we describe can be summarized in the following three points: i. Stochastic approaches to exploiting user interaction in query-by-example photo retrieval. Understanding the subjective meaning of a visual query, by converting it into numerical parameters that can be extracted and compared by a computer, is the paramount challenge in the field of intelligent image retrieval, also referred to as the "semantic gap" problem. An innovative approach is proposed that combines a relevance feedback process with a stochastic optimization engine, as a way to grasp the user's semantics through optimized iterative learning, providing on one side a better exploration of the search space, and on the other side avoiding stagnation in local minima during retrieval. ii. Unsupervised event collection, segmentation, and summarization.
The need for automatic tools able to extract salient moments and provide automatic summaries of large photo galleries is becoming more and more important, due to the exponential growth in the use of digital media for recording personal, familiar, or social life events. The multi-modal event segmentation algorithm faces the summarization problem in a holistic way, making it possible to exploit all the available information in a fully unsupervised way. The proposed technique aims at providing such a tool, with the specific goal of reducing the need for complex parameter settings and making the system widely applicable to as many situations as possible. iii. Content-based synchronization of multiple galleries related to the same event. The wide spread of photo cameras makes it quite common that an event is acquired through different devices, conveying different subjects and perspectives of the same happening. Automatic tools are increasingly used to support users in organizing such archives, and it is largely accepted that time information is crucial to this purpose. Unfortunately, time-stamps may be affected by erroneous or imprecise setting of the camera clock. The synchronization algorithm presented is the first that uses the content of pictures to estimate the mutual delays among different cameras, thus achieving an a-posteriori synchronization of various photo collections referring to the same event.
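The a-posteriori synchronization idea can be sketched in a few lines. Assuming that content matching has already paired photos from two cameras that depict the same instant (the hard part, done by the thesis' algorithm; hard-coded here), the mutual clock delay can be estimated as a robust statistic of the timestamp differences. The timestamps below are invented for illustration.

```python
# Hedged sketch of offset estimation from content-matched photo pairs:
# the median of the timestamp differences is robust to a few bad matches.
from statistics import median

def estimate_offset(matched_pairs):
    """matched_pairs: list of (t_camera_a, t_camera_b) timestamps, in
    seconds, for photo pairs judged to show the same moment."""
    deltas = [tb - ta for ta, tb in matched_pairs]
    return median(deltas)

# Camera B's clock runs ~120 s ahead of camera A's; one match is wrong.
pairs = [(100, 221), (340, 459), (512, 633), (700, 1500)]
print(estimate_offset(pairs))  # → 121.0
```

With the estimated offset, camera B's timestamps can be shifted into camera A's timeline before merging the two galleries.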

Frame-Based Ontology Population from Text: Models, Systems, and Applications

Corcoglioniti, Francesco January 2016 (has links)
Ontology Population from text is an interdisciplinary task that integrates Natural Language Processing (NLP), Knowledge Representation, and Semantic Web techniques to extract assertional knowledge from texts according to specific ontologies. As most information on the Web is available as unstructured text, Ontology Population plays an important role in bridging the gap between structured and unstructured data, thus helping to realize the vision of a (Semantic) Web whose contents are equally consumable by humans and machines. In this thesis we move beyond Ontology Population of instances and binary relations, and focus on (what we call) Frame-based Ontology Population, whose target is the extraction of semantic frames from text. Semantic frames are defined by RDFS/OWL ontologies, such as FrameBase and the Event Situation Ontology derived from FrameNet, and consist of events, situations, and other structured entities reified as ontological instances (e.g., a sell event) and connected to related instances via properties specifying their semantic roles in the frame (e.g., seller, buyer). This representation (called neo-Davidsonian) supports expressing n-ary and arbitrarily qualified relations, and permits leveraging complex NLP tasks such as Semantic Role Labeling (SRL), which annotates frame-like structures in text consisting of predicates and their semantic arguments as defined by domain-general predicate models. We contribute to the task of Frame-based Ontology Population from multiple directions. We start by developing an extension of the Lemon lexicon model for ontologies (PreMOn) to represent predicate models --- PropBank, NomBank, VerbNet, and FrameNet --- and their mappings to FrameBase.
Based on this, our core contribution is a Frame-based Ontology Population approach (PIKES) in which processing is decoupled into two phases: first, an English text is processed by an SRL-based NLP pipeline to extract mentions, i.e., snippets of text denoting entities or facts; then, mentions are processed by mapping rules to extract ontological instances aligned to DBpedia and YAGO, and semantic frames aligned to FrameBase. We represent all the contents involved in this process in RDF with named graphs, according to an ontological model (KEM) built on top of the semiotic notions of meaning and reference, aligned to the DOLCE and NLP Interchange Format (NIF) ontologies. The model allows navigating from any piece of extracted knowledge to its mentions and back, and allows representing all the generated intermediate information (e.g., NLP annotations) and associated metadata (e.g., confidence, provenance). Based on this model, we propose a scalable system (KnowledgeStore) for storing and querying all the text, mentions, and RDF data involved in the population process, together with relevant RDF background knowledge, so that they can be jointly accessed by applications. Finally, to support the necessary RDF processing tasks, such as rule evaluation, RDFS and owl:sameAs inference, and data filtering and integration, we propose a tool (RDFpro) implementing a simple, non-distributed processing model that combines streaming and sorting techniques in complex pipelines, capable of processing billions of RDF triples on a commodity machine. We describe the application of these solutions to processing differently scoped and sized datasets within and outside the NewsReader EU Project, and to improving search performance in Information Retrieval, through an approach (KE4IR) that enriches the term vectors of documents and queries with semantic terms obtained from extracted knowledge.
All the proposed solutions were implemented and released open-source with demonstrators, and ontological models were published online according to Linked Data best practices. The results obtained were validated via empirical performance evaluations and case studies.
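The neo-Davidsonian representation mentioned in the abstract can be made concrete with a tiny sketch: the event is reified as an instance, and its participants are attached via role properties. Plain (subject, predicate, object) tuples stand in for RDF here; the URIs and role names are illustrative inventions, not FrameBase's actual vocabulary.

```python
# Minimal sketch of a reified "sell" frame in triple form: one event
# instance, typed by a frame class, with role properties to participants.

triples = {
    (":sell_1", "rdf:type", "frame:CommerceSell"),
    (":sell_1", "role:seller", ":Alice"),
    (":sell_1", "role:buyer", ":Bob"),
    (":sell_1", "role:goods", ":car_1"),
}

def roles_of(event, kb):
    """All (role, filler) pairs of a reified event, sorted for display."""
    return sorted((p, o) for s, p, o in kb
                  if s == event and p.startswith("role:"))

print(roles_of(":sell_1", triples))
```

Because the event is an instance rather than a binary property, arbitrarily many roles (time, place, price, ...) can be added as further triples without changing the schema, which is exactly what makes the representation suitable for n-ary relations.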

Event-centric management of personal photos

Paniagua Laconich, Eduardo Javier January 2015 (has links)
Over the last decade we have observed tremendous growth in the size of personal photo collections. For this reason, and due to the lack of proper automatic classification and annotation in standard album-centric photo software, users find it increasingly difficult to organise and make use of their photos. Although automatic annotation of media content can achieve more sophisticated multimedia classification and retrieval when used in combination with rich knowledge representations, it still requires the availability of well-annotated training sets to produce the type of higher-level descriptions that would be of interest to casual users. This approach is therefore hardly applicable in the broad domain of personal photography. Recent developments in the media industry show an interest in organising and structuring media collections using an event-centric metaphor. This event-centric approach is inspired by strong research in psychology on how our autobiographical memory works to organise, recollect, and share our life experiences. While this metaphor is backed by some early user studies, these were conducted before the large-scale adoption of social media sharing services, and there has been little recent research on how users actually use events digitally to organise and share their media. In this work we first present an updated study of what users are doing with their photos on current online platforms, to support the suitability of an event-centric approach. Next, we introduce a simple framework for event-centric personal photo management focused on temporal and spatial aspects, and through it we describe our techniques for automatic photo organisation and sharing. Finally, we propose a platform for personal photo management that makes use of these automatic techniques and present an evaluation of a prototypical implementation.
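The temporal side of event-centric organisation is commonly approached by splitting a chronologically sorted photo stream wherever the gap between consecutive shots is large. The sketch below is a generic illustration of that idea, not the thesis' framework (which also uses spatial cues); the one-hour threshold and the timestamps are invented.

```python
# Illustrative time-gap event segmentation: start a new event whenever
# the gap between consecutive photos exceeds a threshold.

def segment_events(timestamps, gap_threshold=3600):
    """Group sorted timestamps (seconds) into lists, one per event."""
    events, current = [], [timestamps[0]]
    for prev, t in zip(timestamps, timestamps[1:]):
        if t - prev > gap_threshold:
            events.append(current)
            current = []
        current.append(t)
    events.append(current)
    return events

shots = [0, 300, 900, 9000, 9100, 50000]
print([len(e) for e in segment_events(shots)])  # → [3, 2, 1]
```

Combining such temporal clusters with GPS proximity is a natural next step for separating co-occurring events at different places.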

Optimization Modulo Theories with Linear Rational Costs

Tomasi, Silvia January 2014 (has links)
In the contexts of automated reasoning (AR) and formal verification (FV), important decision problems are effectively encoded into Satisfiability Modulo Theories (SMT). In the last decade efficient SMT solvers have been developed for several theories of practical interest (e.g., linear arithmetic, arrays, bit-vectors). Surprisingly, little work has been done to extend SMT to deal with optimization problems; in particular, concerning the development of SMT solvers able to produce solutions which minimize cost functions over arithmetical variables (we are aware of only one very recent work [Li et al., POPL-14]). In the work described in this thesis we start filling this gap. We present and discuss two general procedures for leveraging SMT to handle the minimization of linear rational cost functions, combining SMT with standard minimization techniques. We have implemented the procedures within the MathSAT SMT solver. Due to the absence of competitors in AR and FV, we have experimentally evaluated our implementation against state-of-the-art tools for the domain of linear generalized disjunctive programming (LGDP), which is closest in spirit to our domain, and against a very recent SMT-based optimizer [Li et al., POPL-14]. Our benchmark set consists of problems previously proposed for our competitors. The results show that our tool is very competitive, and often outperforms these tools (especially the LGDP ones) on their own problems, clearly demonstrating the potential of the approach. Stochastic Local Search (SLS) procedures are sometimes very competitive in pure SAT, on both satisfiable instances and optimization problems. As a side contribution, in this thesis we investigate the possibility of exploiting SLS inside SMT tools, which are commonly based on the lazy approach (combining a Conflict-Driven Clause-Learning (CDCL) SAT solver with theory-specific decision procedures, called T-solvers).
We first introduce a general procedure for integrating an SLS solver of the WalkSAT family with a T-solver. Then we present a group of techniques aimed at improving the synergy between these two components. Finally, we implement all these techniques in a novel SLS-based SMT solver for the theory of linear arithmetic over the rationals, and perform an empirical evaluation on satisfiable instances. Although the results are encouraging, we conclude that the efficiency of the proposed SLS-based SMT techniques is still far from comparable to that of standard SMT solvers.
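One standard scheme for minimization on top of SMT, which this line of work builds on, is a linear search: repeatedly ask the solver for a model whose cost is strictly smaller than the best found so far, until the strengthened formula becomes unsatisfiable. The sketch below illustrates only that outer loop; the "solver" is a toy stand-in (exhaustive search over rationals with small denominators), not MathSAT, and the constraints are invented.

```python
# Schematic linear-search OMT loop over a toy "SMT oracle".
from fractions import Fraction
from itertools import product

def toy_smt(bound):
    """Return (x, y) with x + y >= 3, x <= 2, y <= 2 and cost
    2x + y < bound, or None. Searches a small rational grid."""
    grid = sorted({Fraction(n, d) for d in (1, 2, 4) for n in range(9)})
    for x, y in product(grid, repeat=2):
        if x + y >= 3 and x <= 2 and y <= 2 and 2 * x + y < bound:
            return (x, y)
    return None

def minimize(initial_bound=Fraction(100)):
    """Tighten the cost bound until the oracle reports UNSAT."""
    best, bound = None, initial_bound
    while (model := toy_smt(bound)) is not None:
        best = model
        bound = 2 * model[0] + model[1]   # demand a strictly better cost
    return best

x, y = minimize()
print(x, y, 2 * x + y)  # → 1 2 4
```

Real OMT procedures replace the final steps of this search with binary search or theory-specific bound reasoning to converge faster; the termination argument, however, is the same.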

Determining what information is transmitted across neural populations

Bím, Jan January 2017 (has links)
Quantifying the amount of information communicated between neural population is crucial to understand brain dynamics. To address this question, many tools for the analysis of time series of neural activity, such as Granger causality, Transfer Entropy, Directed Information have been proposed. However, none of these popular model-free measures can reveal what information has been exchanged. Yet, understanding what information is exchanged is key to be able to infer, from brain recordings, the nature and the mechanisms of brain computation. To provide the mathematical tools needed to address this issue, we developed a new measure, exploiting benefits of novel Partial Information Decomposition framework, that determines how much information about each specific stimulus or task feature has been transferred between two neuronal populations. We tested this methodology on simulated neural data and showed that it captures the specific information being transmitted very well, and it is also highly robust to several of the confounds that have proven to be problematic for previous methods. Moreover, the measure was significantly better in detection of the temporal evolution of the information transfer and the directionality of it than the previous measures. We also applied the measure to an EEG dataset acquired during a face detection task that revealed interesting patterns of interhemispheric phase-specific information transfer. We finally analyzed high gamma activity in an MEG dataset of a visuomotor associations. Our measure allowed for tracing of the stimulus information flow and it confirmed the notion that dorsal fronto-parietal network is crucial for the visuomotor computations transforming visual information into motor plans. Altogether our work suggests that our new measure has potential to uncover previously hidden specific information transfer dynamics in neural communication.
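Measures such as Transfer Entropy and Partial Information Decomposition are built from information-theoretic primitives. As a minimal, hedged illustration of those primitives (not the thesis' PID-based measure), here is the plug-in estimate of mutual information I(X;Y) in bits for two discrete sequences.

```python
# Plug-in mutual information estimate for discrete sequences, in bits.
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """I(X;Y) = sum_{x,y} p(x,y) * log2( p(x,y) / (p(x) * p(y)) )."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

x = [0, 1, 0, 1, 0, 1, 0, 1]
print(mutual_information(x, x))        # identical binary sequences: 1.0 bit
print(mutual_information(x, [0] * 8))  # constant sequence: 0.0 bits
```

PID goes further by decomposing such quantities into unique, redundant, and synergistic components, which is what lets the thesis' measure attribute transferred information to specific stimulus features.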

Advanced Methods for the Analysis of Radar Sounder and VHR SAR Signals

Ferro, Adamo January 2011 (has links)
In the last decade the interest in radar systems for the exploration of planetary bodies and for Earth Observation (EO) from orbit has increased considerably. In this context, the main goal of this thesis is to present novel methods for the automatic analysis of planetary radar sounder (RS) signals and very high resolution (VHR) synthetic aperture radar (SAR) images acquired on the Earth. Both planetary RSs and VHR SAR systems are instruments based on relatively recent technology which make it possible to acquire from orbit new types of data that were previously available only in limited areas from airborne acquisitions. The use of orbiting platforms allows the acquisition of a huge amount of data over large areas. This calls for the development of effective and automatic methods for the extraction of information, tuned to the characteristics of these new systems. The work is organized in two parts. The first part is focused on the automatic analysis of data acquired by planetary RSs. RS signals are currently analyzed mostly by means of manual investigation, and the topic of automatic analysis of such data has been only marginally addressed in the literature. In this thesis we provide three main novel contributions to the state of the art on this topic. First, we present a theoretical and empirical statistical study of the properties of RS signals. This study drives the development of two novel automatic methods for the generation of subsurface feature maps and for the detection of basal returns. The second contribution is a method for the extraction of subsurface layering in icy environments, which is capable of detecting linear features with sub-pixel accuracy. Moreover, measures for the analysis of the properties of the detected layers are proposed. Finally, the third contribution is a technique for the detection of surface clutter returns in radargrams.
The proposed method is based on the automatic matching between real data and clutter data generated by a simulator developed in this thesis. The second part of this dissertation is devoted to the analysis of VHR SAR images, with special focus on urban areas. New VHR SAR sensors allow the analysis of such areas at the building level from space. This is a relatively recent topic, which is especially relevant for crisis management and damage assessment. In this context, we describe in detail an empirical and theoretical study of the relation between the double-bounce effect of buildings and their orientation angle. Then, a novel approach to the automatic detection and reconstruction of building radar footprints from VHR SAR images is presented. Unlike most of the methods presented in the literature, the developed method can extract and reconstruct building radar footprints from single VHR SAR images. The technique is based on the detection and combination of primitive features in the image, and introduces the concept of the semantic meaning of the primitives. Qualitative and quantitative experimental results obtained on real planetary RS and spaceborne VHR SAR data confirm the effectiveness of the proposed methods.
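A standard building block for the sub-pixel accuracy mentioned above is parabolic peak interpolation: once a detector locates a response maximum at a discrete sample, fitting a parabola through that sample and its two neighbours refines the position beyond pixel resolution. This is a generic signal-processing technique, sketched here for illustration, not the thesis' specific layer-detection algorithm.

```python
# Parabolic (three-point) sub-pixel peak refinement.

def subpixel_peak(a, b, c):
    """Given samples a, b, c at positions -1, 0, +1, with b the discrete
    maximum, return the sub-pixel offset of the fitted parabola's vertex."""
    return 0.5 * (a - c) / (a - 2 * b + c)

# Samples of y = -(x - 0.25)**2 at x = -1, 0, 1: the true peak is at +0.25.
samples = [-(x - 0.25) ** 2 for x in (-1, 0, 1)]
print(subpixel_peak(*samples))  # → 0.25
```

For a truly quadratic response the recovered offset is exact, as in the example; for real radargram profiles it is an approximation whose error shrinks as the peak becomes better sampled.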

Socially-Aware Interfaces for Supporting Co-located Interactions

Schiavo, Gianluca January 2015 (has links)
In everyday life we engage in social interactions and use our body to communicate social actions and intentions in a seamless way. Our body, our behaviour, and the behaviour of the people around us tell a lot about the social situations in which we are involved and about the social context in which we act. For example, the way people direct their attention while engaged in a conversation, or the spatial distribution of a group during an activity, can reveal information about that particular social situation. Inspired by these observations, several researchers have investigated ways to automatically interpret social signals, using sensors and algorithms for detecting and modelling behavioural cues related to social interactions. With the increasing availability of sensors and technology in our environment, we believe that this research direction is highly relevant nowadays and presents an opportunity for further investigation of social signals in the HCI domain. In this dissertation, I discuss the design and evaluation of socially-aware systems: technologies that are sensitive to the social context in which they are deployed. The focus of this work is to explore interfaces that support engagement in co-located multi-user interactions, taking into account users' nonverbal behaviour, including gaze, facial expressions, and body movements. In particular, the research goals are twofold: to understand which nonverbal cues and social signals reflect engagement in co-located group activities, and to design systems that can utilise this information. The thesis presents an in-depth analysis of the state of the art in designing socially-aware systems. Moreover, it presents empirical studies that describe the design, implementation, and assessment (both in real-world and laboratory settings) of such systems. This research has involved the development and evaluation of prototypes in their real context of use.
Specifically, the thesis focuses on socially-aware systems in the form of two multi-user technologies meant to be used in different social contexts: a responsive display deployed in a public setting, and an ambient display for supporting conversations in small group discussions. Finally, based on the findings and insights gained through these studies, the thesis provides generalised characteristics of socially-aware systems and presents MOSAIC, a theoretical framework for framing the complexity of the design space of such technology. The final aim of this work is to support HCI researchers and practitioners in exploring the opportunities and limitations of technologies that react and respond to the social context.

Study and Development of Novel Techniques for PHY-Layer Optimization of Smart Terminals in the Context of Next-Generation Mobile Communications

D'Orazio, Leandro January 2008 (has links)
Future mobile broadband communications working over wireless channels are required to provide high-performance services in terms of speed, capacity, and quality. A key issue to be considered is the design of multi-standard and multi-modal ad-hoc network architectures, capable of self-configuring in an adaptive and optimal way with respect to channel conditions and traffic load. In the context of 4G wireless communications, the implementation of efficient baseband receivers characterized by affordable computational load is a crucial point in the development of transmission systems exploiting diversity in different domains. This thesis proposes some novel multi-user detection techniques based on different criteria (i.e., MMSE, ML, and MBER), particularly suited for multi-carrier CDMA systems, in both the single- and multi-antenna cases. Moreover, it considers the use of evolutionary strategies (such as GA and PSO) for channel estimation in MIMO multi-carrier scenarios. Simulation results show that the proposed PHY-layer optimization techniques always outperform state-of-the-art schemes at an affordable computational cost. Particular attention has been devoted to the software implementation of the formulated algorithms, in order to obtain a modular software architecture that can be used in an adaptive and optimized reconfigurable scenario.
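For context on the MMSE criterion mentioned above, the classic linear MMSE detector for a generic received vector can be stated in one formula (this is the standard textbook form, not the thesis' exact multi-carrier CDMA scheme):

```latex
% For a linear model y = Hx + n, with channel matrix H, unit-power
% transmitted symbols x, and additive noise of variance \sigma^2, the
% linear MMSE estimate of x is
\[
  \hat{\mathbf{x}} \;=\;
  \left(\mathbf{H}^{H}\mathbf{H} + \sigma^{2}\mathbf{I}\right)^{-1}
  \mathbf{H}^{H}\,\mathbf{y},
\]
% where (\cdot)^{H} denotes the conjugate transpose.
```

The regularizing term \(\sigma^{2}\mathbf{I}\) trades interference suppression against noise amplification; as \(\sigma^{2} \to 0\) the detector reduces to the zero-forcing solution.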

Event Detection and Classification for the Digital Humanities

Sprugnoli, Rachele January 2018 (has links)
In recent years, event processing has become an active area of research in the Natural Language Processing community but resources and automatic systems developed so far have mainly addressed contemporary texts. However, the recognition and elaboration of events is a crucial step when dealing with historical texts: research in this domain can lead to the development of methodologies and tools that can assist historians in enhancing their work and can have an impact both in the fields of Natural Language Processing and Digital Humanities. Our work aims at shedding light on the complex concept of events adopting an interdisciplinary perspective. More specifically, theoretical and practical investigations are carried out on the specific topic of event detection and classification in historical texts by developing and releasing new annotation guidelines, new resources and new models for automatic annotation.
