121

Design Automation Flow using Library Adaptation for Variation Aware Logic Synthesis

Atluri, Lava Kumar 03 June 2014 (has links)
No description available.
122

Delay-Aware Multi-Path Routing in a Multi-Hop Network: Algorithms and Applications

Liu, Qingyu 21 June 2019 (has links)
Delay is known to be a critical performance metric for various real-world routing applications, including multimedia communication and freight delivery. Provisioning delay-minimal (or at least delay-bounded) routing services for all traffic of an application is therefore highly important. As a basic networking paradigm, multi-path routing can achieve lower delay than single-path routing, since traffic congestion can be avoided. However, to the best of our knowledge, (i) many existing delay-aware multi-path routing studies consider only the aggregate traffic delay; since even the solution achieving the optimal aggregate delay may have unbounded delay for certain individual traffic units, those studies may be insufficient in practice; and (ii) most existing studies that optimize or bound the delays of all traffic are best-effort, with no theoretical performance guarantee on the achieved solutions. In this dissertation, we study four delay-aware multi-path routing problems that take the delay performance of all traffic into account. Three of them are in communication and one is in transportation. To the best of our knowledge, our study differs from all related work in that we are the first to study these four fundamental problems. Although we prove that all of the studied problems are NP-hard, we design approximation algorithms with theoretical performance guarantees for each of them. Specifically, we claim the following contributions.

Minimize maximum delay and average delay. First, we consider a single-unicast setting in which a sender in a multi-hop network must use multiple paths to stream a flow at a fixed rate to a receiver. Two important delay metrics are the average sender-to-receiver delay and the maximum sender-to-receiver delay. Existing results show that the two delay metrics of a flow cannot both be within bounded-ratio gaps to the optimal. In comparison, we design three different flow solutions, each of which can minimize the two delay metrics simultaneously within a $(1/\epsilon)$-ratio gap to the optimal, at the cost of delivering only a $(1-\epsilon)$-fraction of the flow, for any user-defined $\epsilon \in (0,1)$. The gap $(1/\epsilon)$ is proven to be at least near-tight, and we further show that our solutions extend to the multiple-unicast setting.

Minimize Age-of-Information (AoI). Second, we consider a single-unicast setting in which a sender in a multi-hop network must use multiple paths to periodically send a batch of data to a receiver. We study a newly proposed delay-sensitive networking performance metric, AoI, defined as the elapsed time since the generation of the last received data. We consider the problem of minimizing AoI subject to throughput requirements, which we prove is NP-hard. Our AoI problem differs from existing ones in that we are the first to consider batch generation of data and multi-path communication. We develop both an optimal algorithm with pseudo-polynomial time complexity and an approximation framework with polynomial time complexity. Our framework can build upon any polynomial-time $\alpha$-approximation algorithm for the maximum delay minimization problem to construct an $(\alpha+c)$-approximate solution for minimizing AoI, where $c$ is a constant dependent on the throughput requirements.

Maximize network utility. Third, we consider a multiple-unicast setting in which a multi-hop network serves many network users.
Each user requires a sender to use multiple paths to stream a flow to a receiver, incurring a utility that is a function of the experienced maximum delay or the achieved throughput. Our objective is to maximize the aggregate utility of all users under throughput requirements and maximum delay constraints. We observe that it is NP-complete either to construct an optimal solution under relaxed maximum delay constraints or relaxed throughput requirements, or to find a feasible solution with all constraints satisfied; hence it is non-trivial even to obtain approximate solutions satisfying relaxed constraints in polynomial time. We develop a polynomial-time approximation algorithm that obtains solutions with constant approximation ratios under realistic conditions, at the cost of violating the constraints by up to constant ratios.

Minimize fuel consumption for a heavy truck to timely fulfill multiple transportation tasks. Finally, we consider a common truck-operation scenario in which a truck drives through a national highway network to fulfill multiple transportation tasks in order. We study an NP-hard timely eco-routing problem of minimizing total fuel consumption under task pickup and delivery time-window constraints. Optimizing task execution times is a new and challenging design space for saving fuel in our multi-task setting, and it differentiates our study from existing ones in the single-task setting. We design a fast and efficient heuristic, characterize conditions under which its solution must be optimal, and further prove its optimality gap when those conditions are not met. We simulate a heavy-duty truck driving across the US national highway system and empirically observe that the fuel consumption achieved by our heuristic can be $22\%$ less than that of the fastest-/shortest-path baselines. Furthermore, the fuel saving of our heuristic compared to the baselines is robust to the number of tasks. / Doctor of Philosophy / We consider a network modeled as a directed graph, where it takes time for data to traverse each link. This models many critical applications in both the communication and transportation fields; for example, both the European education network and the US national highway network can be modeled as directed graphs. We consider a scenario where a source node must send multiple (a set of) data packets to a destination node through the network as fast as possible, possibly using multiple source-to-destination paths. In this dissertation we study four problems, all of which seek routing solutions for sending the set of data packets with the objective of minimizing the experienced travel time or subject to travel-time constraints. Although all four problems are NP-hard, we design approximation algorithms to solve them and obtain solutions with theoretically bounded gaps to the optimal. The first three problems are in the communication area, and the last is in the transportation field. We claim the following specific contributions. Minimize maximum delay and average delay. First, we consider the setting of simultaneously minimizing the average travel time and the worst (largest) travel time of sending the set of data packets from source to destination. Existing results say that the two travel-time metrics cannot both be minimized to within bounded-ratio gaps to the optimal.
As a comparison, we design three different routing solutions, each of which can minimize the two travel-time metrics simultaneously within a constant bounded-ratio gap to the optimal, at the cost of delivering only a portion of the data. Minimize Age-of-Information (AoI). Second, we consider the problem of minimizing a newly proposed travel-time-sensitive performance metric, AoI, which is the elapsed time since the generation of the last received data. Our AoI study differs from existing ones in that we are the first to consider a set of data and multi-path routing. We develop both an optimal algorithm with pseudo-polynomial time complexity and an approximation framework with polynomial time complexity. Maximize network utility. Third, we consider a more general setting with multiple source-destination pairs. Each source incurs a utility that is a function of the experienced travel time or the achieved throughput when sending data to its destination. Our objective is to maximize the aggregate utility under throughput requirements and travel-time constraints. We develop a polynomial-time approximation algorithm, at the cost of violating constraints by up to constant ratios. Designing such an algorithm is non-trivial, as we prove that it is NP-complete either to construct an optimal solution under relaxed delay constraints or relaxed throughput requirements, or to find a feasible solution with all constraints satisfied. Minimize fuel consumption for a heavy truck to timely fulfill multiple transportation tasks. Finally, we consider a truck with multiple transportation tasks to fulfill in order, where each task requires the truck to pick up cargo at a source and deliver it to a destination, both within time windows. The need to coordinate task execution times is a new and challenging design space for saving fuel in our multi-task setting, and it differentiates our study from existing ones in the single-task setting. We design an efficient heuristic, characterize conditions under which its solution must be optimal, and further prove its performance gap to the optimal when those conditions are not met.
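As a toy illustration of the two travel-time metrics discussed above, the following sketch (not the dissertation's algorithm; all path delays, capacities, and rates are invented) computes the average and maximum delay of a flow split over a few candidate paths, and shows how dropping a small fraction of the flow can improve both metrics.
```python
# Toy illustration: average vs. maximum delay of a multi-path flow split.
# All numbers (path delays, capacities, rates) are made up for illustration.

def delay_metrics(paths, flow):
    """paths: list of (delay, capacity); flow: rate routed on each path."""
    total = sum(flow)
    used = [(d, f) for (d, c), f in zip(paths, flow) if f > 0]
    avg_delay = sum(d * f for d, f in used) / total   # traffic-weighted average
    max_delay = max(d for d, _ in used)               # worst path actually used
    return avg_delay, max_delay

paths = [(5.0, 4.0), (8.0, 6.0), (20.0, 10.0)]   # (delay, capacity) per path
full_split    = [4.0, 6.0, 2.0]   # routes all 12 units, must touch the slow path
partial_split = [4.0, 6.0, 0.0]   # drops 2 units (a (1-eps) fraction) to avoid it

for name, split in [("full", full_split), ("partial", partial_split)]:
    avg, mx = delay_metrics(paths, split)
    print(f"{name:7s}: avg delay = {avg:.2f}, max delay = {mx:.2f}")
```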
123

Resource Optimized Scheduling For Enhanced Power Efficiency And Throughput On Chip Multi Processor Platforms

Kundan, Shivam 01 May 2024 (has links) (PDF)
The parallel nature of process execution on Chip Multi-Processors (CMPs) has boosted application performance far beyond the capabilities of earlier single-core designs. Generally, CMPs offer improved performance by integrating multiple simpler cores onto a single die, with the cores sharing certain computing resources such as last-level caches, data buses, and main memory. This ensures architectural simplicity while also boosting performance for multi-threaded applications. However, a major trade-off of this approach is that concurrently executing applications suffer performance degradation if their collective resource requirements exceed the total resources available to the system. If dynamic resource allocation is not carefully considered, the potential performance gain from having multiple cores may be outweighed by the losses due to contention for shared resources. Additionally, CMPs with built-in dynamic voltage-frequency scaling (DVFS) mechanisms may try to compensate for the performance bottleneck by scaling to higher clock frequencies. When the degradation is caused by shared-resource contention, this does not necessarily improve performance, but it does incur a significant power penalty due to the quadratic relation between electrical power and voltage (P_dynamic ∝ V^2 * f).

This dissertation presents novel methodologies for balancing the competing requirements of high performance, fairness of execution, and enforcement of priority, while also ensuring the overall power efficiency of CMPs. Specifically, we (1) analyze the problem of resource interference during concurrent process execution and propose two fine-grained scheduling methodologies for improving overall performance and fairness, (2) develop an approach for enforcing priority (i.e., minimum performance) for specific processes while avoiding resource starvation for others, and (3) present a machine-learning approach for maximizing the power efficiency (performance-per-Watt) of CMPs by estimating a workload's performance and power-consumption limits at different clock frequencies.

As modern computing workloads become increasingly dynamic, and computers themselves become increasingly ubiquitous, finding the ideal balance between performance and power consumption of CMPs is of particular relevance today, especially given the unprecedented proliferation of embedded devices for the Internet of Things, edge computing, smart wearables, and even exotic experiments such as space probes comprised entirely of a CMP, sensors, and an antenna ("space chips"). Additionally, reducing power consumption while maintaining constant performance can help address the growing problem of dark silicon.
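A back-of-the-envelope sketch of the quadratic power relation cited above; the voltage/frequency operating points, effective capacitance, and the memory-bound throughput model are illustrative assumptions, not measurements or methods from the dissertation.
```python
# Back-of-the-envelope sketch of dynamic power vs. frequency under DVFS.
# P_dynamic ≈ C_eff * V^2 * f ; the V/f operating points, effective capacitance,
# and memory-bound throughput model below are illustrative assumptions only.

C_EFF = 1.0e-9  # effective switched capacitance (F), assumed

# Assumed DVFS operating points: (frequency in Hz, supply voltage in V)
operating_points = [(1.0e9, 0.80), (1.5e9, 0.90), (2.0e9, 1.00), (2.5e9, 1.15)]

def throughput(freq_hz, mem_bound_fraction=0.4):
    """Toy model: the memory-bound fraction of the workload does not speed up."""
    base = 1.0e9  # reference frequency
    return (1 - mem_bound_fraction) * (freq_hz / base) + mem_bound_fraction

for f, v in operating_points:
    power = C_EFF * v**2 * f          # dynamic power (W)
    perf = throughput(f)              # normalized throughput
    print(f"f={f/1e9:.1f} GHz  V={v:.2f} V  P={power:.3f} W  perf/W={perf/power:.2f}")
```
For a partly memory-bound workload, the higher operating points buy little throughput while power grows faster than linearly, which is the imbalance the dissertation's frequency-selection approach targets.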
124

Verification of Data-aware Business Processes in the Presence of Ontologies

Santoso, Ario 14 November 2016 (has links) (PDF)
The interplay between data, processes, and structural knowledge in modeling complex enterprise systems is a challenging task that has led to the study of combining formalisms from knowledge representation, database theory, and process management. Moreover, to ensure system correctness, formal verification also comes into play as a promising approach that offers well-established techniques. In line with this, significant results have been obtained within the research on data-aware business processes, which studies the marriage between static and dynamic aspects of a system within a unified framework. However, several limitations are still present. Various formalisms for data-aware processes that have been studied typically use a simple mechanism for specifying the system dynamics. The majority of works also assume a rather simple treatment of inconsistency (i.e., reject inconsistent system states). Much of the research in this area that considers structural domain knowledge also assumes that such knowledge remains fixed throughout the system evolution (context-independent), which might be too restrictive. Moreover, the information model of data-aware processes sometimes relies on relatively simple structures. This situation might cause an abstraction gap between the high-level conceptual view that business stakeholders have and the low-level representation of information. When it comes to verification, taking into account all of the aspects above makes the problem more challenging. In this thesis, we investigate the verification of data-aware processes in the presence of ontologies while at the same time addressing all of the limitations above. Specifically, we provide the following contributions: (1) We propose a formal framework called Golog-KABs (GKABs), by leveraging state-of-the-art formalisms for data-aware processes equipped with ontologies. GKABs enable us to specify semantically-rich data-aware business processes, where the system dynamics are specified using a high-level action language inspired by the Golog programming language. (2) We propose a parametric execution semantics for GKABs that elegantly accommodates a plethora of inconsistency-aware semantics based on the well-known notion of repair, and this leads us to consider several variants of inconsistency-aware GKABs. (3) We enhance GKABs towards context-sensitive GKABs that take contextual information into account during the system evolution. (4) We marry these two settings and introduce inconsistency-aware context-sensitive GKABs. (5) We introduce the so-called Alternating-GKABs, which allow for a more fine-grained analysis over the evolution of inconsistency-aware context-sensitive systems. (6) In addition to GKABs, we introduce a novel framework called Semantically-Enhanced Data-Aware Processes (SEDAPs) that, by utilizing ontologies, enables us to have a high-level conceptual view over the evolution of the underlying system. We provide not only theoretical results but also an implementation of the SEDAP concept. We also provide numerous reductions for the verification of sophisticated first-order temporal properties over all of the settings above, and show that verification can be addressed using existing techniques developed for Data-Centric Dynamic Systems (a well-established data-aware processes framework), under suitable boundedness assumptions on the number of objects freshly introduced in the system while it evolves.
Notably, all proposed GKAB extensions have no negative impact on computational complexity.
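The repair-based treatment of inconsistency mentioned in contribution (2) can be illustrated with a minimal sketch: given a handful of facts and a simple disjointness constraint, the repairs are the maximal consistent subsets of the data. The facts and constraint below are invented, and this toy computation is only a stand-in for the general idea, not the GKAB semantics.
```python
# Toy illustration of repair-based inconsistency handling: repairs are the
# maximal subsets of the data that satisfy simple disjointness constraints.
# The facts and constraints are invented examples, not from the thesis.

from itertools import combinations

facts = {("Gold", "alice"), ("Silver", "alice"), ("Gold", "bob")}
disjoint = [("Gold", "Silver")]  # no customer may be both Gold and Silver

def consistent(subset):
    for a, b in disjoint:
        members_a = {x for cls, x in subset if cls == a}
        members_b = {x for cls, x in subset if cls == b}
        if members_a & members_b:
            return False
    return True

def repairs(facts):
    """Maximal consistent subsets of the facts (brute force, fine for toy sizes)."""
    result = []
    for size in range(len(facts), -1, -1):
        for subset in combinations(sorted(facts), size):
            s = set(subset)
            if consistent(s) and not any(s < r for r in result):
                result.append(s)
    return result

for r in repairs(facts):
    print(sorted(r))   # two repairs: drop either alice's Gold or her Silver fact
```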
125

Analys av metoder för att beräkna livsmedels vattenfotavtryck

Lundmark, Lina January 2019 (has links)
Water is a vital resource for all life on earth. With an increasing population, the use of freshwater is also expected to increase, which requires sustainable management of existing water resources. The agricultural sector is currently the largest consumer of water, so it is important to make consumers aware of the water used in food production and thereby increase knowledge of how water is used today. The so-called water footprint is a tool for assessing the amount of water used to produce a good or a service. In recent years, several calculation methods for water footprints have been introduced, and these take different aspects into account. The purpose of this study was to evaluate three such methods and use them to calculate the water footprint for a number of foods, compare the results, and finally give a recommendation on which method or methods are best suited for consumer guidance. The assessed methods were TOTAL by the Water Footprint Network (WFN), the WSI method, and the AWARE method. The results showed that some nuts had a particularly high water footprint regardless of the method used. Almonds, for example, obtained with the respective methods a water footprint corresponding to 15 m3 water/kg, 3.3 m3 WSI-H2O equivalents/kg, and 165 m3 AWARE-H2O equivalents/kg. The fact that the results have different units and orders of magnitude is because the methods are structured differently. In general, legumes, cereals, and fruit and vegetables had low water footprints, but the results varied somewhat depending on the method used. This is partly because only WSI and AWARE take into account the local water situation where the water is used. When comparing the methods, both TOTAL and AWARE were considered suitable for consumer guidance, since the former is well proven and easy to understand, while the latter is an updated indicator that takes local water scarcity into account.
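A minimal sketch of how a scarcity-weighted footprint such as the WSI or AWARE results above is assembled: blue-water consumption per kilogram is multiplied by a regional characterization factor. All volumes and factors below are invented for illustration and are not the study's data.
```python
# Minimal sketch of a scarcity-weighted water footprint: consumption volumes
# are multiplied by a regional characterization factor (CF), as in WSI/AWARE.
# All volumes and CFs below are invented for illustration only.

# (region, blue water consumed in m3 per kg of product)
almond_supply_chain = [("California", 8.0), ("Spain", 4.0)]

# Assumed AWARE-style characterization factors (m3 world-eq per m3 consumed)
cf_aware = {"California": 30.0, "Spain": 20.0}

def scarcity_weighted_footprint(stages, cf):
    return sum(volume * cf[region] for region, volume in stages)

total_volume = sum(v for _, v in almond_supply_chain)
weighted = scarcity_weighted_footprint(almond_supply_chain, cf_aware)
print(f"volumetric footprint: {total_volume:.1f} m3/kg")
print(f"AWARE-weighted footprint: {weighted:.1f} m3 world-eq/kg")
```
The weighting is why the same crop can score very differently under a purely volumetric method (TOTAL) and a scarcity-aware one (WSI or AWARE).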
126

Entity-based coherence in statistical machine translation : a modelling and evaluation perspective

Wetzel, Dominikus Emanuel January 2018 (has links)
Natural language documents exhibit coherence and cohesion by means of interrelated structures both within and across sentences. Sentences do not stand in isolation from each other, and only a coherent structure makes them understandable and natural-sounding to humans. In Statistical Machine Translation (SMT), little research exists on translating a document from a source language into a coherent document in the target language. The dominant paradigm is still one that considers sentences independently from each other. There is a need both for a deeper understanding of how to handle specific discourse phenomena and for automatic evaluation of how well these phenomena are handled in SMT. In this thesis we explore an approach that treats sentences as dependent on each other, focussing on the problem of pronoun translation as an instance of a discourse-related non-local phenomenon. We direct our attention to pronoun translation in the form of cross-lingual pronoun prediction (CLPP) and develop a model to tackle this problem. We obtain state-of-the-art results exhibiting the benefit of having access to the antecedent of a pronoun for predicting the right translation of that pronoun. Experiments also showed that features from the target side are more informative than features from the source side, confirming linguistic knowledge that referential pronouns need to agree in gender and number with their target-side antecedent. We show our approach to be applicable across the two language pairs English-French and English-German. The experimental setting for CLPP is artificially restricted, both to enable automatic evaluation and to provide a controlled environment. This limitation does not yet allow us to test the full potential of CLPP systems within a more realistic setting that is closer to a full SMT scenario. We provide an annotation scheme, a tool and a corpus that enable evaluation of pronoun prediction in a more realistic setting. The annotated corpus consists of parallel documents translated by a state-of-the-art neural machine translation (NMT) system, where the appropriate target-side pronouns have been chosen by annotators. With this corpus, we exhibit a weakness of our current CLPP systems in that they are outperformed by a state-of-the-art NMT system in this more realistic context. This corpus provides a basis for future CLPP shared tasks and allows the research community to further understand and test their methods. The lack of appropriate evaluation metrics that explicitly capture non-local phenomena is one of the main reasons why handling non-local phenomena has not yet been widely adopted in SMT. To overcome this obstacle and evaluate the coherence of translated documents, we define a bilingual model of entity-based coherence, inspired by work on monolingual coherence modelling, and frame it as a learning-to-rank problem. We first evaluate this model on a corpus where we artificially introduce coherence errors based on typical errors CLPP systems make. This allows us to assess the quality of the model in a controlled environment with automatically provided gold coherence rankings.
Results show that this model can distinguish with high accuracy between a human-authored translation and one with coherence errors, that it can also distinguish between document pairs from two corpora with different degrees of coherence errors, and that the learnt model can be successfully applied when the distribution of errors in the test set differs from that in the training data, showing its generalization potential. To test our bilingual model of coherence as a discourse-aware SMT evaluation metric, we apply it to more realistic data. We use it to evaluate a state-of-the-art NMT system against post-editing systems with pronouns corrected by our CLPP systems. To verify our metric, we reuse our annotated parallel corpus and consider the pronoun annotations as a proxy for human document-level coherence judgements. Experiments show far lower accuracy in ranking translations according to their entity-based coherence than on the artificial corpus, suggesting that the metric has difficulties generalizing to a more realistic setting. Analysis reveals that the system translations in our test corpus do not differ in their pronoun translations in almost half of the document pairs. To circumvent this data sparsity issue, and to remove the need for parameter learning, we define a score-based SMT evaluation metric that directly uses features from our bilingual coherence model.
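A simplified, monolingual sketch of the entity-grid idea that entity-based coherence models of this kind build on: a sentence-by-entity grid of presence/absence is turned into transition-count features that could feed a ranking model. The toy "documents" and presence-only roles are assumptions; this is not the bilingual model of the thesis.
```python
# Simplified monolingual entity-grid sketch: build a sentence x entity grid of
# presence/absence and count transition bigrams as coherence features.
# Toy documents and presence-only roles are illustrative simplifications.

from collections import Counter
from itertools import product

def entity_grid(doc):
    """doc: list of sentences, each a set of entity mentions."""
    entities = sorted(set().union(*doc))
    return {e: ["X" if e in sent else "-" for sent in doc] for e in entities}

def transition_features(doc):
    grid = entity_grid(doc)
    counts = Counter()
    for column in grid.values():
        for a, b in zip(column, column[1:]):
            counts[(a, b)] += 1
    total = sum(counts.values()) or 1
    return {t: counts[t] / total for t in product("X-", repeat=2)}

coherent   = [{"model", "data"}, {"model", "error"}, {"model"}]   # entities persist
incoherent = [{"model"}, {"weather"}, {"data"}]                   # entities never recur

print(transition_features(coherent))     # high (X, X) mass
print(transition_features(incoherent))   # no (X, X) transitions
```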
127

Compact variation-aware standard cells for statistical static timing analysis

Aftabjahani, Seyed-Abdollah 09 June 2011 (has links)
This dissertation reports on a new methodology to characterize and simulate a standard cell library to be used for statistical static timing analysis. A compact variation-aware timing model for a standard cell in a cell library has been developed. The model incorporates variations in the input waveform and loading, process parameters, and the environment into the cell timing model. Principal component analysis (PCA) has been used to form a compact model of a set of waveforms impacted by these sources of variation. Cell characterization involves determining equations describing how waveforms are transformed by a cell as a function of the input waveforms, process parameters, and the environment. Different versions of factorial designs and Latin hypercube sampling have been explored to model cells, and their complexity and accuracy have been compared. The models have been evaluated by calculating the delay of paths. The results demonstrate improved accuracy in comparison with table-based static timing analysis at comparable computational cost. Our methodology has been expanded to adapt to interconnect-dominant circuits by including a resistive-capacitive load model. The results show the feasibility of using the new load model in our methodology. We have explored comprehensive accuracy-improvement methods to tune the methodology for the best possible results. The following is a summary of the main contributions of this work to statistical static timing analysis: (a) accurate waveform modeling for standard cells using statistical waveform models based on principal components; (b) compact performance modeling of standard cells using experimental design statistical techniques; (c) variation-aware performance modeling of standard cells considering the effect of variation parameters on performance, where variation parameters include loading, waveform shape, process parameters (gate length and threshold voltage of NMOS and PMOS transistors), and environmental parameters (supply voltage and temperature); (d) extending our methodology to support resistive-capacitive loads so that it is applicable to interconnect-dominant circuits; (e) classifying the sources of error for our variational waveform model and cell models and introducing related accuracy-improvement methods; and (f) introducing our fast block-based variation-aware statistical dynamic timing analysis framework and showing that (i) using compiler-compiler techniques, we can generate our timing models, test benches, and data analysis for each circuit, which are compiled to machine code to reduce the overhead of dynamic timing simulation, and (ii) using the simulation engine, we can perform statistical timing analysis to measure the performance distribution of a circuit using a high-level model for gate delay changes, which can be linked to their parameter variation.
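A minimal sketch of the principal-component idea behind the compact waveform model: a family of synthetic output transitions is projected onto a few principal components, so each waveform is represented by a handful of coefficients. The waveform generator and the choice of three components are assumptions for illustration, not the characterization flow itself.
```python
# Minimal sketch of compact waveform modelling with PCA: synthetic output
# transitions are represented by a few principal-component coefficients.
# The waveform generator and the choice of 3 components are illustrative.

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)

# Synthetic rising waveforms with varying slew and delay (stand-ins for
# waveforms produced under process/voltage/temperature variation).
def waveform(delay, slew):
    return 1.0 / (1.0 + np.exp(-(t - delay) / slew))

samples = np.array([waveform(rng.uniform(0.3, 0.6), rng.uniform(0.02, 0.08))
                    for _ in range(500)])

mean = samples.mean(axis=0)
centered = samples - mean
# Principal components via SVD of the centered sample matrix.
_, s, vt = np.linalg.svd(centered, full_matrices=False)

k = 3                                  # keep a handful of components
coeffs = centered @ vt[:k].T           # compact representation: 3 numbers per waveform
reconstructed = coeffs @ vt[:k] + mean

rms_error = np.sqrt(np.mean((reconstructed - samples) ** 2))
print(f"kept {k} of {len(s)} components, RMS reconstruction error = {rms_error:.4f}")
```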
128

A Distributed Architecture for Computing Context in Mobile Devices

Dargie, Waltenegus 27 May 2006 (has links) (PDF)
Context-aware computing aims at making mobile devices sensitive to the social and physical settings in which they are used. A necessary requirement to achieve this goal is to enable those devices to establish a shared understanding of the desired settings. Establishing a shared understanding entails the need to manipulate sensed data in order to capture a real-world situation wholly, conceptually, and meaningfully. Quite often, however, the data acquired from sensors can be inexact, incomplete, and/or uncertain. Inexact sensing arises mostly due to the inherent limitation of sensors to capture a real-world phenomenon precisely. Incompleteness is caused by the absence of a mechanism to capture certain real-world aspects; and uncertainty stems from the lack of knowledge about the reliability of the sensing sources, such as their sensing range, accuracy, and resolution. The thesis identifies a set of criteria for a context-aware system to capture dynamic real-world situations. On the basis of these criteria, a distributed architecture is designed, implemented and tested. The architecture consists of Primitive Context Servers, which abstract the acquisition of primitive contexts from physical sensors; Aggregators, to minimise error caused by inconsistent sensing, and to gather correlated primitive contexts pertaining to a particular entity or situation; a Knowledge Base and an Empirical Ambient Knowledge Component, to model dynamic properties of entities with facts and beliefs; and a Composer, to reason about dynamic real-world situations on the basis of sensed data. Two additional components, namely, the Event Handler and the Rule Organiser, are responsible for dynamically generating context rules by associating decision events (signifying a user's activity) with the context in which those decision events are produced. Context rules are essential elements with which the behaviour of mobile devices can be controlled and useful services can be provided. Four estimation and recognition schemes, namely, Fuzzy Logic, Hidden Markov Models, Dempster-Shafer Theory of Evidence, and Bayesian Networks, are investigated, and their suitability for the implementation of the components of the architecture of the thesis is studied. Subsequently, fuzzy sets are chosen to model dynamic properties of entities. Dempster-Shafer combination is chosen for aggregating primitive contexts; and Bayesian Networks are chosen to reason about a higher-level context, which is an abstraction of a real-world situation. A Bayesian Composer is implemented to demonstrate the capability of the architecture in dealing with uncertainty, in revising the belief of the Empirical Ambient Knowledge Component, in dealing with the dynamics of primitive contexts and in dynamically defining contextual states. The Composer was able to reason about the whereabouts of a person in the absence of any localisation sensor. Temperature, relative humidity, and light intensity properties of a place, as well as time information, were employed to model and reason about a place. Consequently, depending on the variety and reliability of the sensors employed, the Composer could discriminate between rooms, corridors, a building, or an outdoor place with different degrees of uncertainty.
The Context-Aware E-Pad (CAEP) application is designed and implemented to demonstrate how applications can employ a higher-level context without the need to directly deal with its composition, and how a context rule can be generated by associating the activities (decision events) of a mobile user with the context in which the decision events are produced.
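Dempster-Shafer combination, used above to aggregate primitive contexts, can be illustrated with a small sketch: two mass functions over a frame of candidate locations are fused with Dempster's rule. The sensor mass assignments are invented for illustration and are not taken from the thesis.
```python
# Minimal sketch of Dempster's rule of combination over a small frame of
# discernment (candidate locations). The sensor mass assignments are invented.

def combine(m1, m2):
    """Dempster's rule: masses are dicts mapping frozensets of hypotheses to mass."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

ROOM, CORRIDOR, OUTDOOR = "room", "corridor", "outdoor"

# Evidence from a temperature sensor and a light sensor (assumed masses).
m_temp  = {frozenset({ROOM}): 0.6, frozenset({ROOM, CORRIDOR}): 0.3,
           frozenset({ROOM, CORRIDOR, OUTDOOR}): 0.1}
m_light = {frozenset({ROOM, CORRIDOR}): 0.5, frozenset({OUTDOOR}): 0.3,
           frozenset({ROOM, CORRIDOR, OUTDOOR}): 0.2}

for hypotheses, mass in sorted(combine(m_temp, m_light).items(), key=lambda kv: -kv[1]):
    print(f"{set(hypotheses)}: {mass:.3f}")
```
After normalizing away the conflicting mass, the fused belief concentrates on the "room" hypothesis, which is the kind of aggregation the Aggregators perform before the Composer reasons over it.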
129

Collaborative review and analysis of science literature

Bayat, Samaneh Unknown Date
No description available.
130

CD-CARS: cross-domain context-aware recommender systems

SILVA, Douglas Véras e 21 July 2016 (has links)
Traditionally, single-domain recommender systems (SDRS) have achieved good results in recommending relevant items to users in order to address the information overload problem. However, cross-domain recommender systems (CDRS) have emerged, aiming to enhance SDRS with goals such as improved accuracy, greater diversity, and handling the new-user and new-item problems, among others. Instead of treating each domain independently, CDRS use knowledge acquired in a source domain (e.g. books) to improve the recommendation in a target domain (e.g. movies). As in SDRS research, collaborative filtering (CF) is considered the most popular and widely adopted approach in CDRS, because its implementation for any domain is relatively simple. In addition, its recommendation quality is usually higher than that of content-based filtering (CBF) algorithms. In fact, most cross-domain collaborative filtering recommender systems (CD-CFRS) can give better recommendations than single-domain collaborative filtering recommender systems (SD-CFRS), leading to higher user satisfaction and addressing the cold-start, sparsity, and diversity problems. However, CD-CFRS may not necessarily be more accurate than SD-CFRS. On the other hand, context-aware recommender systems (CARS) address another relevant research topic in the recommender systems area, also aiming to improve the quality of recommendations. Different contextual information (e.g., location, time, mood, etc.) can be leveraged to provide recommendations that are more suitable and accurate for a user depending on his/her context. We therefore believe that integrating techniques developed in isolation (cross-domain and context-aware) can be useful in a variety of situations, in which recommendations can be improved with information from different sources and refined by considering specific contextual information. In this thesis, we define a novel formulation of the recommendation problem, considering both the availability of information from different domains (source and target) and the use of contextual information. Based on this formulation, we propose the integration of cross-domain and context-aware approaches in a novel recommender system (CD-CARS). To evaluate the proposed CD-CARS, we performed experimental evaluations on two real datasets with three different contextual dimensions and three distinct domains. The results of these evaluations showed that context-aware techniques are a good approach for improving cross-domain recommendation quality in comparison to traditional CD-CFRS.
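One simple way to combine the two ideas, shown here only as an illustrative sketch and not the CD-CARS architecture itself, is contextual pre-filtering: ratings are first restricted to the target context, and a plain collaborative-filtering estimate is computed from what remains. The ratings, domains, and contexts below are toy values.
```python
# Minimal contextual pre-filtering sketch: keep only ratings given in the target
# context, then predict with a simple item-average collaborative-filtering score.
# The ratings (spanning a 'books' source domain and a 'movies' target domain)
# and contexts are toy values, not the thesis' datasets.

# (user, item, domain, context, rating)
ratings = [
    ("u1", "book_a",  "books",  "weekend", 5),
    ("u1", "movie_x", "movies", "weekend", 4),
    ("u2", "movie_x", "movies", "weekend", 5),
    ("u2", "movie_x", "movies", "weekday", 2),
    ("u3", "movie_y", "movies", "weekend", 4),
]

def predict(item, context, domain="movies"):
    """Average rating of `item` in `domain`, restricted to the target context."""
    relevant = [r for _, i, d, c, r in ratings if i == item and d == domain and c == context]
    return sum(relevant) / len(relevant) if relevant else None

print("movie_x on a weekend:", predict("movie_x", "weekend"))  # 4.5
print("movie_x on a weekday:", predict("movie_x", "weekday"))  # 2.0
```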
