  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
261

Using Analysts’ Characteristics in Gauging Recommendation Optimism and the Implication for Recommendation Profitability

Cao, Jian 16 September 2007 (has links)
No description available.
262

Personalized and Adaptive Semantic Information Filtering for Social Media

Kapanipathi, Pavan 01 June 2016 (has links)
No description available.
263

A Question-Oriented Visualization Recommendation System for Data Exploration

RAUL DE ARAUJO LIMA 15 September 2020 (has links)
The increasingly rapid growth of data production and the consequent need to explore data to answer a wide range of questions have promoted the development of tools that facilitate the manipulation and construction of data visualizations. These tools should allow users to explore data effectively, communicate information accurately, and enable greater knowledge gain. However, building useful data visualizations is not a trivial task: it may involve a large number of decisions that often require experience from the designer. To facilitate the process of exploring datasets through the construction of visualizations, we developed VisMaker, a software tool that uses a set of rules to determine the visualizations considered most appropriate for a given selection of variables. In addition to allowing the user to define visualizations by mapping variables onto visualization channels, VisMaker presents visualization recommendations organized through questions constructed from the variables selected by the user, aiming to make the recommended visualizations easier to understand and to assist the exploratory process. To evaluate VisMaker, we carried out two studies comparing it with Voyager 2, a tool of similar purpose from the literature: the first study focused on question answering, the second on data exploration itself. We analyzed several aspects of the use of the tools and collected feedback from the participants, through which we identified advantages and disadvantages of our recommendation approach and raised possible improvements for this type of tool.
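The rule-based mapping described in the abstract above can be sketched as follows. The specific rules, chart names, and question phrasings here are illustrative assumptions on our part, not VisMaker's actual rule set.

```python
def recommend_chart(var_types):
    """Map the types of the selected variables to a chart and a question.

    var_types: list of 'quantitative', 'categorical', or 'temporal'.
    Returns a (chart, question) pair; falls back to a plain table.
    """
    rules = {
        ('quantitative',): ('histogram', 'How is the variable distributed?'),
        ('categorical',): ('bar chart', 'How many records fall in each category?'),
        ('quantitative', 'quantitative'): ('scatter plot', 'How do the two variables relate?'),
        ('categorical', 'quantitative'): ('grouped bar chart', 'How does the measure vary across categories?'),
        ('quantitative', 'temporal'): ('line chart', 'How does the measure change over time?'),
    }
    # Sort so the lookup is order-independent.
    key = tuple(sorted(var_types))
    return rules.get(key, ('table', 'What are the raw values of the selected variables?'))
```

Organizing recommendations under questions like these, rather than under chart names, is the design choice the thesis evaluates.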
264

Citation Knowledge Mining for On-the-fly Recommendations

Zhang, Yang 23 March 2022 (has links)
Kyoto University / New-system doctoral course / Doctor of Informatics / Degree No. 甲24036 / 情博 No. 792 / Call no. 新制||情||134 (University Library) / Department of Social Informatics, Graduate School of Informatics, Kyoto University / (Chief examiner) Associate Professor Qiang Ma; Professor Keishi Tajima; Professor Shinsuke Mori / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
265

Examination of Feeding Decisions and Behavior of Low-Income Mothers of Infants 4 - 9 Months Old

Edgar, Kristin L. 09 August 2022 (has links)
No description available.
266

Integrating Community with Collections in Educational Digital Libraries

Akbar, Monika 23 January 2014 (has links)
Some classes of Internet users have specific information needs and specialized information-seeking behaviors. For example, educators designing a course might create a syllabus, recommend books, prepare lecture slides, and use tools as lecture aids. All of these resources are available online but are scattered across a large number of websites. Collecting, linking, and presenting the disparate items related to a given course topic within a digital library will help educators find quality educational material. Content quality matters to users, yet the results of popular search engines typically fail to reflect community input regarding the quality of the content. To disseminate information about the quality of available resources, users need a common place to meet and share their experiences. Online communities can support such knowledge-sharing practices (e.g., reviews, ratings). We focus on identifying the information needs of educators and helping users find potentially useful resources within an educational digital library. This research builds upon the existing 5S digital library (DL) framework. We extend core DL services (e.g., index, search, browse) to include information from latent user groups. We propose a formal definition for the next generation of educational digital libraries and extend one aspect of this definition to study methods that incorporate collective knowledge within the DL framework. We introduce the concept of a deduced social network (DSN): a network that uses navigation history to deduce connections that are prevalent in an educational digital library. Knowledge gained from the DSN can be used to tailor DL services so as to guide users through the vast information space of educational digital libraries. As our testing ground, we use the AlgoViz and Ensemble portals, both of which have large collections of educational resources and seek to support online communities.
We developed two applications that use information derived from DSNs: search-result ranking and recommendation. The revised ranking system incorporates social trends, whereas the recommendation system assigns each user to a specific group for content recommendation. Both applications show enhanced performance when DSN-derived information is incorporated. / Ph. D.
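The core idea of a deduced social network, linking users by overlapping navigation histories rather than by declared friendships, can be sketched as below. The construction in the thesis is more elaborate; the function name, the co-visit counting, and the threshold are our illustrative assumptions.

```python
from itertools import combinations

def deduce_network(history, min_shared=2):
    """Deduce a weighted user-user network from navigation history.

    history: {user_id: set of visited resource ids}.
    Returns {(user_a, user_b): number of shared resources} for every pair
    of users who visited at least `min_shared` resources in common.
    """
    edges = {}
    for u, v in combinations(sorted(history), 2):
        shared = len(history[u] & history[v])
        if shared >= min_shared:
            edges[(u, v)] = shared
    return edges
```

Edges deduced this way can then feed downstream services, such as boosting search results that a user's deduced neighbors visited often.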
267

The New Generation of Recommendation Agents (RAs 2.0): An Affordance Perspective

Wang, Jeremy Fei 03 January 2023 (has links)
Rapid technological advances in artificial intelligence (AI), data analytics, big data, the semantic web, the Internet of Things (IoT), and cloud and mobile computing have given rise to a new generation of AI-driven recommendation agents (RAs). These agents continue to evolve and offer potential for use in a variety of application domains. However, extant information systems (IS) research has predominantly focused on user perceptions and evaluations of traditional non-intelligent product-brokering recommendation agents (PRAs), supported by empirical studies on custom-built experimental RAs that rely heavily on explicit user preference elicitation. To address the lack of research on the new generation of intelligent RAs (RAs 2.0), this dissertation studies consumer responses to AI-driven RAs from an affordance perspective. Notably, this research is the first in the IS discourse to link RA design artifacts, RA affordances, RA outcomes, and user continuance. It examines how actualized RA affordances influence user engagement with and evaluation of these highly personalized systems, which increasingly focus on user experiences and long-term relationships. This three-essay dissertation, consisting of one theory-building paper and two empirical studies, conceptually defines "RAs 2.0," proposes a comprehensive theoretical framework with testable propositions, and conducts two empirical studies guided by smaller carved-out models to test the validity of the comprehensive framework. The research is expected to enrich the IS literature on RAs and identify potential areas for future research. Moreover, it offers key implications for industry professionals regarding the effective development of the new generation of intelligent RAs.
/ Doctor of Philosophy / Rapid technological advances in artificial intelligence (AI), data analytics, big data, the semantic web, the Internet of Things (IoT), and cloud and mobile computing have given rise to a new generation of AI-driven recommendation agents (RAs). These agents continue to evolve and offer potential for use in a variety of application domains. This three-essay dissertation, consisting of one theory-building paper and two empirical studies, conceptually defines "RAs 2.0," proposes a comprehensive theoretical framework with testable propositions, and conducts two empirical studies guided by smaller carved-out models to test the validity of the comprehensive framework. The research is expected to enrich the IS literature on RAs and identify potential areas for future research. Moreover, it offers key implications for industry professionals regarding the effective system development of the new generation of intelligent RAs.
268

A Study of Machine Learning Approaches for Integrated Biomedical Data Analysis

Chang, Yi Tan 29 June 2018 (has links)
This thesis consists of two projects in which various machine learning approaches and statistical analyses for integrated biomedical data analysis were explored, developed, and tested. Integrating different biomedical data sources allows us to gain a better understanding of the human body from a bigger picture. With a more complete view of the data, we not only obtain a more complete view of the molecular basis of phenotype, but may also identify abnormalities in diseases that are not found when using only one type of biomedical data. The objective of the first project is to find biological pathways related to Duchenne Muscular Dystrophy (DMD) and Lamin A/C (LMNA) using the integration of multi-omics data. We proposed a novel method that integrates proteins, mRNAs, and miRNAs to find disease-related pathways. The goal of the second project is to develop a personalized recommendation system that recommends cancer treatments to patients. Compared to the traditional approach of using only users' ratings to impute missing values, we proposed a method that incorporates users' profiles to enhance the accuracy of the prediction. / Master of Science
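The second project's idea, blending rating agreement with profile similarity when imputing a missing rating, can be sketched as a simple weighted neighborhood average. The blending scheme, the similarity measures, and all names below are illustrative assumptions, not the thesis's exact method.

```python
def predict_rating(user, item, ratings, profiles, alpha=0.5):
    """Impute a missing rating by blending rating and profile similarity.

    ratings:  {user: {item: score}} with some items missing per user.
    profiles: {user: set of profile attributes}.
    alpha:    weight on rating agreement vs. profile (Jaccard) similarity.
    Returns the weighted average of neighbors' scores, or None if no
    neighbor rated the item.
    """
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        # Fraction of co-rated items on which the two users agree exactly.
        common = set(ratings[user]) & set(r)
        rating_sim = (sum(1 for i in common if ratings[user][i] == r[i]) / len(common)) if common else 0.0
        # Jaccard similarity of the two users' profile attributes.
        union = profiles[user] | profiles[other]
        profile_sim = len(profiles[user] & profiles[other]) / len(union) if union else 0.0
        w = alpha * rating_sim + (1 - alpha) * profile_sim
        num += w * r[item]
        den += w
    return num / den if den else None
```

The profile term is what lets a new user with few ratings still receive sensible predictions, which is the motivation the abstract gives for going beyond ratings alone.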
269

Optimizing TEE Protection by Automatically Augmenting Requirements Specifications

Dhar, Siddharth 03 June 2020 (has links)
An increasing number of software systems must safeguard their confidential data and code, referred to as critical program information (CPI). Such safeguarding is commonly accomplished by isolating CPI in a trusted execution environment (TEE), with the isolated CPI becoming a trusted computing base (TCB). TEE protection incurs heavy performance costs, as TEE-based functionality is expensive to both invoke and execute. Despite these costs, projects that use TEEs tend to have unnecessarily large TCBs: based on our analysis, developers often put code and data into the TEE for convenience rather than for protection, thus not only compromising performance but also reducing the effectiveness of TEE protection. For TEEs to provide maximum benefit in protecting CPI, their usage must be systematically incorporated into the entire software engineering process, starting from Requirements Engineering. To address this problem, we present a novel approach that incorporates TEEs in the Requirements Engineering phase by using natural language processing (NLP) to classify the software requirements that are security critical and should be isolated in a TEE. Our approach takes as input a requirements specification and outputs a list of annotated software requirements. The annotations recommend to the developer which corresponding features comprise CPI that should be protected in a TEE. Our evaluation results indicate that our approach identifies CPI with a high degree of accuracy, making it feasible to incorporate safeguarding CPI into Requirements Engineering. / Master of Science / An increasing number of software systems must safeguard their confidential data, like passwords, payment information, and personal details. This confidential information is commonly protected using a Trusted Execution Environment (TEE), an isolated environment, provided by either the existing processor or separate hardware, that interacts with the operating system to secure sensitive data and code.
Unfortunately, TEE protection incurs heavy performance costs, with TEEs being slower than modern processors and frequent communication between the system and the TEE incurring heavy performance overhead. We discovered that developers often put code and data into TEE for convenience rather than protection purposes, thus not only hurting performance but also reducing the effectiveness of TEE protection. By thoroughly examining a project's features in the Requirements Engineering phase, which defines the project's functionalities, developers would be able to understand which features handle confidential data. To that end, we present a novel approach that incorporates TEEs in the Requirements Engineering phase by means of Natural Language Processing (NLP) tools to categorize the project requirements that may warrant TEE protection. Our approach takes as input a project's requirements and outputs a list of categorized requirements defining which requirements are likely to make use of confidential information. Our evaluation results indicate that our approach performs this categorization with a high degree of accuracy to incorporate safeguarding the confidentiality related features in the Requirements Engineering phase.
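The classification step described above, taking requirements text in and emitting annotations that flag security-critical requirements, can be illustrated with a toy heuristic. The actual approach trains NLP models; this keyword matcher and its vocabulary are purely illustrative stand-ins.

```python
# Illustrative vocabulary; the real classifier learns such cues from data.
SENSITIVE_TERMS = {'password', 'credential', 'payment', 'encrypt', 'key',
                   'token', 'personal', 'confidential'}

def annotate_requirements(requirements):
    """Annotate each requirement with whether it likely warrants TEE isolation.

    requirements: list of requirement sentences.
    Returns a list of (requirement, needs_tee) pairs, where needs_tee is
    True when the wording suggests confidential data or code is involved.
    """
    out = []
    for req in requirements:
        # Normalize: lowercase and strip trailing punctuation per token.
        words = {w.strip('.,;:').lower() for w in req.split()}
        out.append((req, bool(words & SENSITIVE_TERMS)))
    return out
```

Even this crude sketch shows the shape of the output the thesis targets: an annotated specification a developer can review before deciding what goes into the TCB.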
270

Recommending TEE-based Functions Using a Deep Learning Model

Lim, Steven 14 September 2021 (has links)
Trusted execution environments (TEEs) are an emerging technology that provides a protected hardware environment for processing and storing sensitive information. By using TEEs, developers can bolster the security of software systems. However, incorporating a TEE into existing software systems can be a costly and labor-intensive endeavor. Software maintenance (changing software after its initial release) is known to contribute the majority of the cost in the software development lifecycle. The first step in making use of a TEE requires that developers accurately identify which pieces of code would benefit from being protected in a TEE. For large code bases, this identification process can be quite tedious and time-consuming. To help reduce the software maintenance costs associated with introducing a TEE into existing software, this thesis introduces ML-TEE, a recommendation tool that uses a deep learning model to classify whether an input function handles sensitive information or sensitive code. By applying ML-TEE, developers can reduce the burden of manual code inspection and analysis. ML-TEE's model was trained and tested on an imbalanced dataset of functions from GitHub repositories that use Intel SGX. The final model used in the recommendation system has an accuracy of 98.86% and an F1 score of 80.00%. In addition, we conducted a pilot study in which participants were asked to identify functions that needed to be placed inside a TEE in a third-party project. The study found that, on average, participants who had access to the recommendation system's output had a 4% higher accuracy and completed the task 21% faster. / Master of Science / Improving the security of software systems has become critically important. A trusted execution environment (TEE) is an emerging technology that can help secure software that uses or stores confidential information.
To make use of this technology, developers need to identify which pieces of code handle confidential information and should thus be placed in a TEE. However, this process is costly and laborious because it requires the developers to understand the code well enough to make the appropriate changes in order to incorporate a TEE. This process can become challenging for large software that contains millions of lines of code. To help reduce the cost incurred in the process of identifying which pieces of code should be placed within a TEE, this thesis presents ML-TEE, a recommendation system that uses a deep learning model to help reduce the number of lines of code a developer needs to inspect. Our results show that the recommendation system achieves high accuracy as well as a good balance between precision and recall. In addition, we conducted a pilot study and found that participants from the intervention group who used the output from the recommendation system managed to achieve a higher average accuracy and perform the assigned task faster than the participants in the control group.
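The abstract reports both accuracy (98.86%) and F1 (80.00%) because the dataset is imbalanced, and on imbalanced data accuracy alone is misleading. A quick sketch of the two metrics from a confusion matrix makes the gap concrete; the counts below are made up for illustration.

```python
def accuracy_f1(tp, fp, fn, tn):
    """Compute (accuracy, F1) from confusion-matrix counts."""
    acc = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return acc, f1

# Hypothetical classifier that finds 8 of 10 sensitive functions among
# 990 benign ones: accuracy looks near-perfect (99.6%) while F1 (0.80)
# reflects the misses on the rare positive class.
acc, f1 = accuracy_f1(tp=8, fp=2, fn=2, tn=988)
```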
