  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

Technologies for Supporting Social Participation with a Focus on Intergenerational Interactions

Jara Laconich, Juan José January 2016 (has links)
Loneliness increases mortality risk by 50% and is one of the main causes of depression. Several factors, such as living far from family, limited mobility due to physical problems, or being unable to use communication technologies, increase the likelihood of feeling lonely, especially in later life. We propose Lifeshare, a system for intergenerational communication that facilitates connecting people, enabling them to participate in each other's lives either actively (synchronous interactions) or passively (asynchronous interactions). Current proposals for intergenerational communication do not address the lack of time to share and the lack of topics to talk about that the young usually face when interacting with their older relatives. Our proposal addresses these problems by implementing a sharing method that requires no effort on the side of the young and by automatically enhancing the shared information. Furthermore, our experience with the evaluation of our proposal was distilled into design recommendations that extend the current literature on design guidelines for applications for older adults.
152

Cross-Domain and Cross-Language Porting of Shallow Parsing

Stepanov, Evgeny January 2014 (has links)
English was the main focus of attention of the Natural Language Processing (NLP) community for years. As a result, there are significantly more annotated linguistic resources for English than for any other language, and data-driven tools for automatic text or speech processing are developed mainly for English. Developing similar corpora and tools for other languages is an important goal, but it requires a significant amount of effort. Recently, Statistical Machine Translation (SMT) techniques and parallel corpora have been used to transfer annotations from resource-rich to resource-poor languages for a variety of NLP tasks, including Part-of-Speech tagging, Noun Phrase chunking, dependency parsing, and textual entailment. This cross-language NLP paradigm relies on the solution of the following sub-problems:
- Data-driven NLP techniques are very sensitive to differences between training and testing conditions. Different domains, such as financial newswire and biomedical publications, have different distributions of task-specific properties; thus, domain adaptation of the source-language tools -- either developing models with good cross-domain performance or tuning them to the target domain -- is critical.
- Another difference between training and testing conditions arises in cross-genre applications such as written text (monologues) versus spontaneous dialog data. Properties of written text such as punctuation and the notion of a sentence are not present in spoken conversation transcriptions; thus, style-adaptation techniques covering a wider range of genres are critical as well.
- The basis of cross-language porting is parallel corpora, which are scarce; thus, generating or retrieving parallel corpora between the languages of interest is important. Moreover, the available parallel corpora are most often not in the domains of interest; consequently, cross-language porting should be augmented with SMT domain adaptation techniques.
- Language distance plays an important role within the paradigm: for close language pairs (e.g. the Romance languages Italian and Spanish) the range of linguistic phenomena to consider is significantly smaller than for distant pairs (e.g. Italian and Turkish). The developed cross-language techniques should be applicable to both conditions.
In this thesis we address these sub-problems on the complex NLP tasks of Discourse Parsing and Spoken Language Understanding, both cast as token-level shallow parsing. Penn Discourse Treebank (PDTB) style discourse parsing is applied cross-domain, and we contribute feature-level domain adaptation techniques for the task. Additionally, we explore PDTB-style discourse parsing on dialog data in Italian and report on the challenges. The problems of parallel corpus creation, language-style adaptation, SMT domain adaptation, and language distance are addressed on the task of cross-language porting of Spoken Language Understanding. This thesis contributes language-style and domain adaptation techniques for machine translation of spoken conversations using off-the-shelf systems such as Google Translate and SMT systems trained on both out-of-domain and in-domain parallel data. We demonstrate that the techniques are beneficial for both close and distant language pairs. We propose methodologies for the creation of parallel spoken conversation corpora via professional translation services that consider speech phenomena such as disfluencies. Additionally, we explore semantic annotation transfer using automatic SMT methods and crowdsourcing. For the latter, we propose a computational methodology to obtain a corpus of acceptable quality without target-language references and despite low worker agreement.
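A common building block of the cross-language porting paradigm described above is annotation projection: transferring token-level labels (e.g. IOB chunk tags) from a source sentence to its translation through word alignments. The sketch below illustrates the idea in minimal form; the function name, labels and alignment format are illustrative assumptions, not the thesis's actual implementation.

```python
def project_labels(src_labels, alignment, tgt_len):
    """Project token-level IOB labels from a source sentence onto its
    translation via word alignments, given as (src_idx, tgt_idx) pairs.
    Unaligned target tokens receive the 'O' (outside) label."""
    tgt_labels = ["O"] * tgt_len
    for src_idx, tgt_idx in alignment:
        tgt_labels[tgt_idx] = src_labels[src_idx]
    # Repair sequences where a chunk-internal I- tag lost its begin tag.
    for i, lab in enumerate(tgt_labels):
        if lab.startswith("I-"):
            prev = tgt_labels[i - 1] if i > 0 else "O"
            if prev == "O" or prev[2:] != lab[2:]:
                tgt_labels[i] = "B-" + lab[2:]
    return tgt_labels

# Hypothetical example: a 4-token utterance aligned one-to-one with
# its translation; DEP/DEST are made-up slot labels.
src = ["O", "B-DEP", "O", "B-DEST"]
align = [(0, 0), (1, 1), (2, 2), (3, 3)]
print(project_labels(src, align, 4))  # → ['O', 'B-DEP', 'O', 'B-DEST']
```

In practice the alignments come from the SMT system, and the repair step matters because alignments are noisy: dropped or reordered tokens easily produce ill-formed IOB sequences.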
153

Predictive Modeling of Human Behavior: Supervised Learning from Telecom Metadata

Bogomolov, Andrey January 2017 (has links)
Big data, and telecom metadata in particular, opens new opportunities for understanding human behavior by applying machine learning and big data processing methods combined with interdisciplinary knowledge of human behavior. In this thesis, new methods are developed for predictive modeling of human behavior based on anonymized telecom metadata, both at the individual level and at the large-scale group level; they were studied during research projects held in 2012-2016 in collaboration with Telecom Italia, Telefonica Research, MIT Media Lab and the University of Trento. It is shown that human dynamics patterns can be reliably recognized from behavioral metrics derived from mobile phone and cellular network activity (call logs, SMS logs, Bluetooth interactions, internet consumption). At the individual level, the results are validated on the use cases of detecting daily stress and estimating subjective happiness. An original approach is introduced for feature extraction, feature selection, and recognition model training and validation, and experimental results based on ensembles of stochastic classification and regression tree models are discussed. At the large group level, following "big data for social good" challenges, the problem of crime hotspot prediction is formulated and solved. In the proposed approach we use demographic information along with human mobility characteristics derived from anonymized and aggregated mobile network data. The models, built on and evaluated against real crime data from London, obtain an accuracy of almost 70% when classifying whether a specific area of the city will be a crime hotspot in the following month. Electric energy consumption patterns are correlated with human behavior patterns in a highly nonlinear way. The second large-scale group behavior prediction result is formulated as predicting next week's energy consumption from human dynamics analysis derived from the anonymized and aggregated telecom data, processed from GSM network call detail records (CDRs). The proposed solution could serve energy producers and distributors as an essential complement to smart meter data for making better decisions: reducing total primary energy consumption by limiting production when demand is not predicted, reducing distribution costs through efficient buy-side planning in time, and providing insights for peak load planning in geographic space. All the experimental results build on the introduced methodology, which is efficient to implement for most multimedia and real-time applications due to its highly reduced low-dimensional feature space and compact machine learning pipelines. The indicators with strong predictive power are also discussed, opening new horizons for computational social science studies.
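The hotspot prediction task described above is, at its core, a binary classifier over aggregated demographic and mobility features per grid cell. The thesis uses ensembles of classification and regression trees; the sketch below substitutes a plain logistic score purely to make the input/output shape of the task concrete. All feature names, weights and the bias are invented placeholders, not the learned model.

```python
import math

def hotspot_score(features, weights, bias):
    """Logistic score for whether an area becomes a crime hotspot next
    month, computed from aggregated demographic and mobility features.
    The weights are illustrative placeholders, not learned parameters."""
    z = bias + sum(w * features[k] for k, w in weights.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical normalized features for one London grid cell.
cell = {"pop_density": 0.8, "night_activity": 0.6, "visitor_ratio": 0.4}
weights = {"pop_density": 1.2, "night_activity": 2.0, "visitor_ratio": 0.5}
p = hotspot_score(cell, weights, bias=-1.5)
print(p > 0.5)  # classify as hotspot when the probability exceeds 0.5
```

In the actual study the features come from anonymized, aggregated CDR-derived mobility plus census demographics, and the decision boundary is learned rather than hand-set.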
154

Autonomous resource management for cloud-assisted peer-to-peer based services

Kavalionak, Hanna January 2013 (has links)
Peer-to-Peer (P2P) and Cloud Computing are two of the latest trends in the Internet arena. Both can be labelled as large-scale distributed systems, yet their approaches are completely different: the former is based on completely decentralized protocols exploiting edge resources, the latter focuses on huge data centres. Several Internet startups have quickly reached stardom by exploiting cloud resources, whereas P2P applications still lack a well-defined business model. Recently, companies like Spotify and Wuala have started to explore how the two worlds could be merged by exploiting (free) user resources whenever possible, aiming at reducing the cost of renting cloud resources. However, although very promising, this model presents challenging issues, in particular regarding the autonomous regulation of the usage of P2P and cloud resources. Next-generation services need to guarantee a minimum level of service when peer resources are not sufficient, and to exploit as many P2P resources as possible when they are abundant. In this thesis, we answer the above research questions in the form of new algorithms and systems. We designed a family of mechanisms to self-regulate the amount of cloud resources used when peer resources are not enough, and we applied and adapted these mechanisms to support different Internet applications, including storage, video streaming and online gaming. To support a replication service, we designed an algorithm that self-regulates the cloud resources used for storing replicas by orchestrating their provisioning. We presented CLive, a P2P video streaming framework that meets real-time constraints on video delay by autonomously regulating the number of cloud helpers upon need. We proposed an architecture to support large-scale online games, where the load coming from the interaction of players is strategically migrated between P2P and cloud resources in an autonomous way. Finally, we proposed a solution to the NAT traversal problem that employs cloud resources to allow a node behind a NAT to be reached from outside. Using extensive simulations, we showed that hybrid infrastructures can reduce the economic effort of service providers while offering a level of service comparable with centralized architectures. The results of this thesis show that the combination of Cloud Computing and P2P is one of the milestones for next-generation distributed P2P-based architectures.
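The core self-regulation idea above is a feedback rule: cover demand with peer resources first and rent just enough cloud capacity for the shortfall. The sketch below is a deliberately simplified stand-in for the mechanisms described (the real systems, such as CLive, react to measured delay constraints rather than abstract capacity units); the function name and units are assumptions.

```python
def regulate_cloud_helpers(demand, peer_capacity, helper_capacity):
    """Return how many cloud helpers to rent so total capacity covers
    demand, using peer resources first. All quantities are abstract
    capacity units; a minimal sketch of the self-regulation loop."""
    shortfall = demand - peer_capacity
    if shortfall <= 0:
        return 0  # peers suffice: release all cloud helpers
    return -(-shortfall // helper_capacity)  # ceiling division

# Streaming scenario: demand spikes beyond what peers can serve,
# so the controller rents three helpers of capacity 100 each.
print(regulate_cloud_helpers(demand=1000, peer_capacity=700,
                             helper_capacity=100))  # → 3
```

Run periodically with fresh measurements, this rule scales cloud usage down to zero when peer resources are abundant, which is exactly the cost-saving behavior the hybrid model targets.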
155

Secure Business Process Engineering: a socio-technical approach

Salnitri, Mattia January 2016 (has links)
Dealing with security is a central activity for today's organizations. Security breaches impact the activities executed in organizations, preventing them from executing their business processes and therefore causing millions of dollars in losses. Security-by-design principles underline the importance of considering security as early as the design phase of an organization, to avoid expensive fixes during later phases of its lifecycle. However, the design of secure business processes cannot take into account only security aspects of the sequences of activities. Security reports from recent years demonstrate that breaches are increasingly caused by attacks that exploit social vulnerabilities. Therefore, such aspects should be analyzed in order to design business processes robust to both technical and social attacks. Still, the mere design of business processes does not guarantee their correct execution: such business processes have to be correctly implemented and performed. We propose SEcure Business process Engineering (SEBE), a method that considers social and organizational aspects for designing and implementing secure business processes. SEBE provides an iterative and incremental process and a set of verification and transformation rules, supported by a software tool, that integrate the different modeling languages used to specify social security aspects, business processes and implementation code. In particular, SEBE provides a new modeling language which permits specifying business processes with security concepts and complex security constraints. We evaluated the effectiveness of SEBE for engineering secure business processes with two empirical evaluations and with applications of the method to three real scenarios.
156

Museum Visits for Older Adults with Mobility Constraints: Sharing and Participation through Technology

Kostoska, Galena January 2015 (has links)
The aim of this thesis is to study how older adults with mobility constraints can enjoy the museum experiences of their family members (by providing methods and tools for family members to “save” and share memories of museum visits with older adults at home) and to investigate how older adults can remotely participate in museum visits through technology. We employed face-to-face interviews and questionnaires in two different museum settings to understand whether and what visitors share with non-visitors, and which technology they use for this purpose. The results showed that only a small number of visitors share their museum visits, using materials such as pictures they took or books bought in the shop; although visitors have the intention to share information, they rarely do so. To support sharing with non-visitors, we provided several ways of “saving” museum content: visitors could bookmark objects during a museum visit and received by email a link to the bookmarked content in the form of a digital booklet. We tested whether people would use these features, and whether they would access and share the “saved” content after the visit. The results suggested that our approach can significantly increase sharing: at least half of the participants shared the digital booklet with someone. We adapted the booklet for older adults and performed a usability study on it, in order to understand whether older adults with and without cognitive decline can use it. We measured and compared performance on four tasks: opening the booklet, browsing the content, zooming into the content, and closing the content after zooming in. Results show that the booklet enables older adults to consume content to some extent and allows additional in-depth exploration.
We then studied factors influencing the feasibility of remote participation for older adults, measuring the impact of different designs and interaction techniques on participants' ability to understand, follow and engage in remote museum visits. Interactive navigation was found to be the most suitable interaction paradigm for active older adults, whereas frail adults could participate only through interaction-free tours. While almost all participants were able to understand the tours in our experimental setting, the ability to follow a visit was strongly influenced by the interaction type. We investigated the levels of experienced presence, social closeness, engagement and enjoyment when older adults join the museum visit of onsite visitors in a drama-based approach: the remote participant and the onsite participants were connected by an audio link, and the information about the objects was presented in the form of a story connecting all the objects in the exhibition. The constructs of closeness, engagement and enjoyment correlated significantly: we found that both the audio channel and the interactive story were important elements for creating an affective virtual experience; the audio channel increased the sense of togetherness, while the interactive story made the visit more enjoyable and fun. A virtual tour was designed and developed to engage older adults in an immersive visit through part of the Louvre, led by a distant real-life guide. An initial diary study and a creative workshop were conducted to learn how to better support the needs and values of older adults, and which approaches would work best for the scenario of remote participation. Visitors' experienced levels of social and spatial presence, immersion and engagement were quite high, independently of the level of interactivity of the guide or the presence of others. We discuss further recommendations for video-mediated remote participation for older adults.
157

Competitive Robotic Car: Sensing, Planning and Architecture Design

Rizano, Tizar January 2013 (has links)
Research towards fully autonomous cars has been pushed forward by industry, as autonomy offers numerous advantages such as improvements to traffic flow, vehicle and pedestrian safety, and car efficiency. One of the main challenges in this area is how to deal with the uncertainties perceived by the sensors about the current state of the car and the environment. An autonomous car needs to employ an efficient planning algorithm that generates the vehicle trajectory in real time based on environmental sensing. A complete motion planning algorithm returns a valid solution in finite time if one exists, and reports that no path exists when none does; the algorithm is optimal when the path it returns is optimal with respect to some criteria. In this thesis we work on a special case of the motion planning problem: finding an optimal trajectory for a robotic car in order to win a car race. We propose an efficient real-time vision-based technique for localization and path reconstruction. For the purpose of winning a car race, we identify a characterization of the alphabet of optimal maneuvers for the car, an optimal local planning strategy, and an optimal graph-based global planning strategy with obstacle avoidance. We have also implemented the hardware and software of this approach as a testbed for the planning strategy.
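The completeness and optimality properties defined above are concretely exhibited by classic graph search: it returns a shortest path when one exists and reports failure otherwise. The sketch below runs Dijkstra's algorithm on a 4-connected occupancy grid as a minimal stand-in for the graph-based global planner with obstacle avoidance; the thesis's planner searches over maneuver sequences, not grid cells.

```python
import heapq

def shortest_path(grid, start, goal):
    """Shortest path length via Dijkstra on a 4-connected occupancy
    grid (1 = obstacle, 0 = free). Returns None when no path exists,
    mirroring the completeness property of a motion planner."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        d, (r, c) = heapq.heappop(queue)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(queue, (nd, (nr, nc)))
    return None  # search exhausted: no path exists

# A toy track where a wall forces a detour around the obstacle row.
track = [[0, 0, 0],
         [1, 1, 0],
         [0, 0, 0]]
print(shortest_path(track, (0, 0), (2, 0)))  # → 6
```

With unit edge costs Dijkstra's result is optimal by path length; a race planner would instead weight edges by traversal time under the car's dynamics.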
158

Non-Redundant Overlapping Clustering: Algorithms and Applications

Truong, Duy Tin January 2013 (has links)
Given a dataset, traditional clustering algorithms often provide only a single partitioning, or a single view, of the dataset. On complex tasks, many different clusterings of a dataset exist; thus alternative clusterings, which are of high quality and different from given trivial clusterings, are sought to provide complementary views. The task is therefore a clear multi-objective optimization problem. However, most approaches in the literature optimize these objectives sequentially (one after another) or indirectly (by some heuristic combination), which can result in solutions that are not Pareto-optimal. The problem is even more difficult for high-dimensional datasets, as clusters can be located in various subspaces of the original feature space. Besides, many practical applications require that subspace clusters may overlap, but that the overlap stays below a predefined threshold. Nonetheless, most state-of-the-art subspace clustering algorithms can only generate a set of disjoint or significantly overlapping subspace clusters. To deal with the above issues, for full-space alternative clustering we develop an algorithm which fully acknowledges the multiple objectives, optimizes them directly and simultaneously, and produces solutions approximating the Pareto front. For non-redundant subspace clustering, we propose a general framework for generating K overlapping subspace clusters where the maximum overlap between them is guaranteed to stay below a predefined threshold. In both cases, our algorithms can be applied to several domains, as different analysis models can be used without modifying the main parts of the algorithms.
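The bounded-overlap guarantee above can be checked with a simple pairwise measure over the produced clusters. The sketch below uses Jaccard similarity as one plausible overlap measure (the abstract does not specify which measure the framework bounds); it verifies, rather than enforces, the threshold.

```python
def max_pairwise_overlap(clusters):
    """Maximum Jaccard overlap between any pair of clusters, each given
    as a set of object ids. A checker for a bounded-overlap guarantee,
    assuming Jaccard similarity as the overlap measure."""
    best = 0.0
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            a, b = clusters[i], clusters[j]
            best = max(best, len(a & b) / len(a | b))
    return best

# Three toy subspace clusters: the first two share objects 3 and 4.
c1, c2, c3 = {1, 2, 3, 4}, {3, 4, 5, 6}, {7, 8}
print(max_pairwise_overlap([c1, c2, c3]) <= 0.5)  # → True
```

A generation framework would use such a check inside its search loop, rejecting or repairing any candidate cluster that pushes the maximum overlap past the threshold.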
159

Evolutionary Test Case Generation via Many Objective Optimization and Stochastic Grammars

Kifetew, Fitsum Meshesha January 2015 (has links)
In search-based test case generation, most research focuses on the single-objective formulation of the test case generation problem. However, there is a wide variety of multi- and many-objective optimization strategies that could offer advantages not yet investigated when addressing test case generation. Furthermore, existing techniques and available tools mainly handle test generation for programs with primitive inputs, such as numeric or string input, and often do not scale up effectively to large sizes and complex inputs. In this thesis, at the unit level, branch coverage is reformulated as a many-objective optimization problem, as opposed to the state-of-the-art single-objective formulation, and a novel algorithm is proposed for the generation of branch-adequate test cases. At the system level, the thesis proposes a test generation approach that combines stochastic grammars with genetic programming for the generation of branch-adequate test cases. Furthermore, the combination of stochastic grammars and genetic programming is also investigated in the context of field failure reproduction for programs with highly structured inputs.
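A stochastic grammar, as used above for system-level test generation, is a context-free grammar whose production rules carry probabilities; sampling derivations from it yields syntactically valid structured inputs. The sketch below shows the sampling step only, with a toy grammar; the grammar encoding and names are assumptions, and the thesis additionally evolves such derivations with genetic programming.

```python
import random

def generate(grammar, symbol, rng):
    """Expand a symbol using a stochastic context-free grammar given as
    {nonterminal: [(probability, [rhs symbols]), ...]}. Any symbol not
    present as a grammar key is treated as a terminal string. Note: no
    depth bound, so heavily recursive grammars need a guard in practice."""
    if symbol not in grammar:
        return symbol
    r, acc = rng.random(), 0.0
    for prob, rhs in grammar[symbol]:
        acc += prob
        if r <= acc:
            return "".join(generate(grammar, s, rng) for s in rhs)
    return ""  # unreachable when rule probabilities sum to 1

# Toy grammar producing nested brackets around a payload, a stand-in
# for the highly structured program inputs mentioned above.
grammar = {"S": [(0.5, ["(", "S", ")"]), (0.5, ["x"])]}
print(generate(grammar, "S", random.Random(7)))
```

Every input sampled this way is well-formed by construction, which is exactly what makes grammar-based generation reach branches that random strings never exercise.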
160

Large-scale Structural Reranking for Hierarchical Text Categorization

JU, QI January 2013 (has links)
Current hierarchical text categorization (HTC) methods mainly fall into three directions: (1) the flat one-vs.-all approach, which flattens the hierarchy into independent nodes and trains a binary one-vs.-all classifier for each node; (2) the top-down method, which uses the hierarchical structure to decompose the entire problem into a set of smaller sub-problems and deals with such sub-problems in top-down fashion along the hierarchy; and (3) the big-bang approach, which learns a single (but generally complex) global model for the class hierarchy as a whole in a single run of the learning algorithm. These methods were shown to provide relatively high performance in previous evaluations. However, they still suffer from two main drawbacks: (1) relatively low accuracy when they disregard category dependencies, or (2) low computational efficiency when considering such dependencies. In order to build an accurate and efficient model we adopted the following strategy. First, we design advanced global reranking models (GR) that exploit structural dependencies in hierarchical multi-label text classification (TC). They are based on two algorithms: (1) generating the k-best classification hypotheses from the decision probabilities of the flat one-vs.-all and top-down methods; and (2) encoding dependencies in the reranker by (i) modeling hypotheses as trees derived from the hierarchy itself and (ii) applying tree kernels (TK) to them. Such a TK-based reranker selects the best hierarchical test hypothesis, which is naturally represented as a labeled tree. Additionally, to better investigate the role of category relationships, we consider two interesting cases: (i) traditional schemes in which parent nodes include all the documents of their child categories; and (ii) more general schemes, in which children can include documents not belonging to their parents.
Second, we propose an efficient local incremental reranking model (LIR), which combines the top-down method with a local reranking model for each sub-problem. These local rerankers improve accuracy by absorbing the local category dependencies of sub-problems, which alleviates the errors of the top-down method in the higher levels of the hierarchy. LIR recursively deals with the sub-problems by applying the corresponding local rerankers in top-down fashion, resulting in high efficiency. In addition, we further optimize LIR by (i) improving the top-down method by creating local dictionaries for each sub-problem; (ii) using LIBLINEAR instead of LIBSVM; and (iii) adopting a compact representation of hypotheses for learning the local reranking model. This makes LIR applicable to large-scale hierarchical text categorization. Experimentation on different hierarchical datasets has shown promising improvements from exploiting structural dependencies in large-scale hierarchical text categorization.
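The generate-then-rerank pipeline above has two steps: enumerate k-best multi-label hypotheses from per-node decision probabilities, then rescore them with a model that sees category dependencies. The sketch below illustrates the pipeline shape only: it brute-forces the k-best list (feasible for a toy hierarchy) and replaces the tree-kernel reranker with a simple parent-consistency preference; all names and probabilities are invented.

```python
from itertools import product

def k_best_hypotheses(node_probs, k):
    """Top-k label assignments from independent per-node probabilities
    {node: P(positive)}; an assignment's score is the product over
    nodes. Brute-force enumeration, viable only for tiny hierarchies."""
    nodes = sorted(node_probs)
    hyps = []
    for bits in product([0, 1], repeat=len(nodes)):
        p = 1.0
        for n, b in zip(nodes, bits):
            p *= node_probs[n] if b else 1 - node_probs[n]
        hyps.append((p, {n for n, b in zip(nodes, bits) if b}))
    hyps.sort(key=lambda h: -h[0])
    return hyps[:k]

def rerank(hyps, parent):
    """Prefer hypotheses where every positive node's parent is also
    positive: a toy stand-in for the tree-kernel reranker's encoding
    of hierarchical dependencies."""
    def consistent(labels):
        return all(parent[n] is None or parent[n] in labels for n in labels)
    return max(hyps, key=lambda h: (consistent(h[1]), h[0]))

# Toy two-level hierarchy with made-up classifier probabilities.
probs = {"root": 0.9, "news": 0.6, "sports": 0.45}
parent = {"root": None, "news": "root", "sports": "root"}
best = rerank(k_best_hypotheses(probs, 4), parent)
print(sorted(best[1]))  # → ['news', 'root']
```

In the actual GR models the k-best list comes from the base classifiers' decision probabilities and the reranker scores full labeled trees with tree kernels rather than a hand-written consistency rule.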
