  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

Kompendium der Online-Forschung (DGOF)

Deutsche Gesellschaft für Online-Forschung e. V. (DGOF) 24 November 2021 (has links)
The DGOF publishes digital compendia here on current topics in online research, with contributions from experts in the field.
132

Machine Learning Modeling of Polymer Coating Formulations: Benchmark of Feature Representation Schemes

Evbarunegbe, Nelson I 14 November 2023 (has links) (PDF)
Polymer coatings offer a wide range of benefits across various industries, playing a crucial role in product protection and the extension of shelf life. However, formulating them can be a non-trivial task given the multitude of variables and factors involved in the production process, rendering it a complex, high-dimensional problem. To tackle this problem, machine learning (ML) has emerged as a promising tool, showing considerable potential for enhancing various polymer- and chemistry-based applications, particularly those dealing with high-dimensional complexities. Our research aims to develop a physics-guided ML approach to facilitate the formulation of polymer coatings. As the first step, this project focuses on finding the machine-readable feature representation techniques most suitable for encoding formulation ingredients. Two polymer-informatics datasets were used: a large set of 700,000 common homopolymers, including epoxies and polyurethanes as coating base materials, and a relatively small set of 1,000 epoxy-diluent formulations. Four featurization schemes for representing polymer coating molecules were benchmarked: the molecular access system (MACCS) fingerprint, the extended connectivity fingerprint (ECFP), molecular graph-based chemical graph network embeddings, and molecular graph-based graph convolutional network (MG-GCN) embeddings. These representation schemes were used with ensemble models to predict molecular properties, including topological surface area and viscosity. The results show that the combination of MG-GCN embeddings and ensemble models such as the extreme gradient boosting machine and random forest achieved the best overall performance, with coefficient of determination (r²) values of 0.74 for topological surface area and 0.84 for viscosity, which compare favorably with existing techniques. These results lay the foundation for using ML together with physical modeling to expedite the development of polymer coating formulations.
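Not from the thesis itself, but as a rough illustration of the benchmarking setup described above, the sketch below featurizes molecules with an ECFP-style Morgan fingerprint (via RDKit) and fits a random forest to predict topological polar surface area, taken here as a stand-in for the abstract's "topological surface area". The SMILES strings, target choice, and model settings are illustrative assumptions, not the thesis's actual data or pipeline.

```python
# Sketch: ECFP-style (Morgan) featurization plus a random-forest regressor,
# assuming RDKit and scikit-learn are available. SMILES, target property (TPSA),
# and hyperparameters are illustrative placeholders.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem, Descriptors
from sklearn.ensemble import RandomForestRegressor

def ecfp(mol, n_bits=2048, radius=2):
    """Morgan fingerprint as a dense numpy vector."""
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    arr = np.zeros((n_bits,))
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

smiles = ["CCO", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O", "C1CO1", "O=C(N)c1ccccc1", "CCOC(=O)C"]
mols = [Chem.MolFromSmiles(s) for s in smiles]

X = np.array([ecfp(m) for m in mols])
y = np.array([Descriptors.TPSA(m) for m in mols])   # topological polar surface area target

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print("in-sample r2:", round(model.score(X, y), 3))  # toy check only; real work needs held-out data
```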
133

Online Statistical Inference for Low-Rank Reinforcement Learning

Qiyu Han (18284758) 01 April 2024 (has links)
We propose a fully online procedure to conduct statistical inference with adaptively collected data. The low-rank structure of the model parameter and the adaptive nature of the data-collection process make this task challenging: standard low-rank estimators are biased and cannot be obtained in a sequential manner, while existing inference approaches in sequential decision-making algorithms fail to account for the low-rankness and are also biased. To tackle these challenges, we first develop an online low-rank estimation procedure employing stochastic gradient descent with noisy observations. Subsequently, to facilitate statistical inference using the online low-rank estimator, we introduce a novel online debiasing technique designed to address both sources of bias simultaneously. This method yields an unbiased estimator suitable for parameter inference. Finally, we develop an inferential framework capable of establishing an online estimator for performing inference on the optimal policy value. In theory, we establish the asymptotic normality of the proposed online debiased estimators and prove the validity of the constructed confidence intervals for both inference tasks. Our inference results are built upon a newly developed low-rank stochastic gradient descent estimator and its non-asymptotic convergence result, which is also of independent interest.
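The thesis does not provide code here, but a minimal numpy sketch of the kind of online low-rank estimation it builds on might look as follows: the parameter matrix is kept in factored form and updated by a stochastic gradient step after each noisy observation. The matrix sizes, step size, and entry-wise sampling scheme are invented for illustration, and the debiasing step is omitted.

```python
# Sketch: online low-rank estimation by stochastic gradient descent on a factored
# parameter matrix Theta_hat = U @ V.T, using entry-wise noisy observations.
# Sizes, step size, and sampling are illustrative assumptions; no debiasing shown.
import numpy as np

rng = np.random.default_rng(0)
d1, d2, r = 12, 10, 2
Theta_true = rng.normal(size=(d1, r)) @ rng.normal(size=(r, d2))   # rank-r ground truth

U = rng.normal(scale=0.1, size=(d1, r))   # running factor estimates
V = rng.normal(scale=0.1, size=(d2, r))
eta = 0.05                                # illustrative step size

for t in range(100_000):
    i, j = rng.integers(d1), rng.integers(d2)      # observe one noisy entry per round
    y = Theta_true[i, j] + rng.normal(scale=0.1)
    resid = U[i] @ V[j] - y
    grad_u, grad_v = resid * V[j], resid * U[i]    # gradients of 0.5 * resid**2
    U[i] -= eta * grad_u
    V[j] -= eta * grad_v

rel_err = np.linalg.norm(U @ V.T - Theta_true) / np.linalg.norm(Theta_true)
print("relative estimation error:", round(rel_err, 3))
```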
134

A proposal for an integrated framework capable of aggregating IoT data with diverse data types. / Uma proposta de um framework capaz de agregar dados de IoT com diversos tipos de dados.

Faria, Maria Luisa Lopes de 30 March 2017 (has links)
The volume of information on the Internet is growing exponentially. The ability to find intelligible information among vast amounts of data is transforming the human vision of the universe and everything within it. The underlying question then becomes: which methods or techniques can be applied to transform raw data into something intelligible, active, and personal? This question is explored in this document by investigating techniques that improve the intelligence of systems, making them perceptive of, and responsive to, the information recently shared by each individual. Consequently, the main objective of this thesis is to enhance the experience of the user (individual) by providing a broad perspective about an event, which could result in improved ideas and better decisions. Therefore, three different data sources (individual data, sensor data, web data) have been investigated. This thesis includes research into techniques that process, interpret, and reduce these data. By aggregating these techniques into a platform, it is possible to deliver personalised information to applications and services. The contribution of this thesis is twofold. First, it presents a novel process that shifts the focus from IoT technology to the user (or smart citizen). Second, this research shows that huge volumes of data can be reduced if the underlying sensor signal has adequate spectral properties to be filtered, and that good results can be obtained when employing a filtered sensor signal in applications. By investigating these areas, it is possible to contribute to this new interconnected society by offering socially aware applications and services. / No Portuguese abstract provided.
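The claim that sensor data volume can be reduced when the signal's spectral content allows filtering can be illustrated with a generic low-pass-filter-and-downsample step. The signal, sampling rate, and cutoff below are assumptions chosen for illustration, not the thesis's actual pipeline.

```python
# Sketch: reduce a sensor stream by low-pass filtering and downsampling (numpy + scipy).
# Signal, sampling rate, and cutoff frequency are illustrative assumptions.
import numpy as np
from scipy import signal

fs = 100.0                      # original sampling rate, Hz
t = np.arange(0, 60, 1.0 / fs)  # one minute of data
x = np.sin(2 * np.pi * 0.2 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)

# 4th-order Butterworth low-pass at 1 Hz, applied forward-backward to avoid phase shift
b, a = signal.butter(4, 1.0, btype="low", fs=fs)
x_filtered = signal.filtfilt(b, a, x)

# Keep every 25th sample: 100 Hz -> 4 Hz, a 25x reduction in volume
x_reduced = x_filtered[::25]
print(f"{x.size} samples -> {x_reduced.size} samples")
```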
135

Modelo de avaliação de conjuntos de dados cientí­ficos por meio da dimensão de veracidade dos dados. / Scientific datasets evaluation model based on the data veracity dimension.

André Filipe de Moraes Batista 06 November 2018 (has links)
Science is a social organization: independent collaboration groups work to generate knowledge as a public good. The credibility of scientific work is rooted in the evidence that supports it, which includes the applied methodology, the acquired data, and the processes for executing the experiments, analyzing the data, and interpreting the results. The data deluge in which current science is embedded is revolutionizing the way research is conducted, resulting in a new paradigm of data-driven science. Under such a paradigm, new activities are inserted into the scientific method to organize the process of generating, curating, and publishing data, benefiting the scientific community with the reuse of scientific datasets and the reproducibility of experiments. In this context, new approaches to problem solving are being presented, obtaining results that were previously considered very difficult, as well as enabling the generation of new knowledge. Several portals now provide datasets resulting from scientific research. However, such portals do little to address the context in which the datasets were created, making the data harder to understand and opening space for misuse or misinterpretation. In the Big Data area, the dimension that addresses this aspect is called Veracity. Few studies in the literature approach this theme, with the focus instead on other dimensions such as the volume, variety, and velocity of data. This research aimed to define an evaluation model for scientific datasets through the construction of an application profile, which standardizes the description of scientific datasets. This standardization of the description is based on the data Veracity dimension, defined over the course of the research, and allows the development of metrics that form a Veracity Index for scientific datasets. The index seeks to reflect the level of detail of a dataset, based on the use of the description elements, which facilitates the reuse of the data and the reproducibility of scientific experiments. The index has two dimensions: an intrinsic dimension of the data, which can be used as an admission criterion for datasets in data publication portals; and a social dimension, which measures the suitability of a dataset for use in a given research or application area through evaluation by the scientific community. For the proposed evaluation model, a case study was developed describing a dataset from an international scientific project, the GoAmazon project, in order to validate the proposed model among peers, demonstrating the potential of the solution to support data reuse and showing that the index can be incorporated into scientific data portals.
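As a hypothetical sketch of how a metadata-completeness score of the kind described above might be computed, the snippet below combines an intrinsic score (weighted completeness of application-profile elements) with a social score (community ratings). The field names, weights, and blending rule are invented for illustration and are not the thesis's actual application profile or index definition.

```python
# Hypothetical sketch of a dataset "veracity index": an intrinsic score from
# weighted metadata completeness plus a social score from community ratings.
# Field names, weights, and the blending rule are invented for illustration.
from statistics import mean

PROFILE_WEIGHTS = {            # application-profile elements and illustrative weights
    "methodology": 3.0,
    "instruments": 2.0,
    "processing_steps": 2.0,
    "license": 1.0,
    "contact": 1.0,
}

def intrinsic_score(metadata: dict) -> float:
    """Share of weighted profile elements that are actually filled in."""
    total = sum(PROFILE_WEIGHTS.values())
    filled = sum(w for field, w in PROFILE_WEIGHTS.items() if metadata.get(field))
    return filled / total

def veracity_index(metadata: dict, community_ratings: list[float], alpha: float = 0.7) -> float:
    """Blend intrinsic completeness with a normalized community rating on a 0-5 scale."""
    social = mean(community_ratings) / 5.0 if community_ratings else 0.0
    return alpha * intrinsic_score(metadata) + (1 - alpha) * social

example = {"methodology": "field campaign protocol", "license": "CC-BY", "contact": "pi@example.org"}
print(round(veracity_index(example, [4.0, 4.5, 3.5]), 3))
```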
138

Applications In Sentiment Analysis And Machine Learning For Identifying Public Health Variables Across Social Media

Clark, Eric Michael 01 January 2019 (has links)
Twitter, a popular social media outlet, has evolved into a vast source of linguistic data, rich with opinion, sentiment, and discussion. We mined data from several public Twitter endpoints to identify content relevant to healthcare providers and public health regulatory professionals. We began by compiling content related to electronic nicotine delivery systems (or e-cigarettes), as these had become popular alternatives to tobacco products. There was an apparent need to remove high-frequency tweeting entities, called bots, that would spam messages and advertisements and fabricate testimonials. Algorithms were constructed using natural language processing and machine learning to sift human responses from automated accounts with high degrees of accuracy. We found that the average number of hyperlinks per tweet, the average character dissimilarity between each individual's posts, and the rate of introduction of unique words were valuable attributes for identifying automated accounts. We performed 10-fold cross-validation and measured the performance of each set of tweet features at various bin sizes, the best of which achieved 97% accuracy. These methods were used to isolate automated content related to the advertising of electronic cigarettes. A rich taxonomy of automated entities was categorized, including robots, cyborgs, and spammers, each with distinct measurable linguistic features. Electronic-cigarette-related posts were classified as automated or organic, and their content was investigated with a hedonometric sentiment analysis. The overwhelming majority (≈ 80%) were automated, many of which were commercial in nature. Others used false testimonials that were sent directly to individuals as a personalized form of targeted marketing. Many tweets advertised nicotine vaporizer fluid (or e-liquid) in various “kid-friendly” flavors, including 'Fudge Brownie', 'Hot Chocolate', and 'Circus Cotton Candy', along with every imaginable fruit flavor, all of which were long ago banned for traditional tobacco products. Others offered free trials, as well as incentives to retweet and spread the post within the recipient's own network. Free prize giveaways were also hosted, with raffle entries issued in exchange for sharing the tweet. Given the large youth presence on the platform, this was evidence that the marketing of electronic cigarettes needed considerable regulation. Twitter has since officially banned all electronic-cigarette advertising on its platform. Social media can provide the healthcare industry with valuable feedback from patients who reveal and express their medical decision-making process, as well as self-reported quality-of-life indicators both during and after treatment. We have studied several active cancer patient populations, discussing their experiences with the disease as well as survivorship. We experimented with a convolutional neural network (CNN) as well as logistic regression to classify tweets as patient-related. This led to a sample of 845 breast cancer survivor accounts to study over 16 months. We found positive sentiments regarding patient treatment, raising support, and spreading awareness. A large portion of negative sentiments concerned political legislation that could result in loss of their healthcare coverage. We refer to these online public testimonies as “Invisible Patient Reported Outcomes” (iPROs), because they carry relevant indicators yet are difficult to capture by conventional means of self-reporting.
Our methods can readily be applied across disciplines to obtain insights into the public opinions of a particular group. Capturing iPROs and public sentiments from online communication can help inform healthcare professionals and regulators, leading to more connected and personalized treatment regimens. Social listening can provide valuable insights into public health surveillance strategies.
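As a rough, self-contained illustration of the kind of feature-based bot classification the abstract describes, the sketch below computes a few account-level features (links per tweet, a crude lexical-novelty rate, mean tweet length) and evaluates a classifier with 10-fold cross-validation on synthetic labels. The features, data, and model are placeholders, not the study's actual pipeline or feature set.

```python
# Sketch: account-level features + 10-fold cross-validated classifier (scikit-learn).
# Accounts, labels, and feature choices are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def account_features(tweets: list[str]) -> list[float]:
    words = [w for t in tweets for w in t.lower().split()]
    links_per_tweet = sum(t.count("http") for t in tweets) / len(tweets)
    unique_word_rate = len(set(words)) / max(len(words), 1)   # crude lexical-novelty proxy
    mean_length = float(np.mean([len(t) for t in tweets]))
    return [links_per_tweet, unique_word_rate, mean_length]

rng = np.random.default_rng(1)
# Synthetic corpus: "bots" repeat a promotional template, "humans" vary their wording.
bot_accounts = [[f"buy e-liquid now http://promo/{i}" for i in range(20)] for _ in range(50)]
human_accounts = [[" ".join(rng.choice(list("abcdefgh"), size=8).tolist()) for _ in range(20)]
                  for _ in range(50)]

X = np.array([account_features(a) for a in bot_accounts + human_accounts])
y = np.array([1] * 50 + [0] * 50)   # 1 = automated, 0 = organic

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("10-fold accuracy:", cross_val_score(clf, X, y, cv=10).mean())
```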
139

Ciência de dados, poluição do ar e saúde / Data science, air pollution and health

Amorim, William Nilson de 17 May 2019 (has links)
Statistics is a fundamental tool for applying the scientific method and is present in every field of research. The usual statistical methodologies are well established among researchers in the most diverse areas, and in many studies the data analysis is carried out by the authors themselves. In recent years, the area known as Data Science has been demanding from statisticians and non-statisticians skills that go far beyond modeling, starting with obtaining and structuring the databases and ending with the communication of results. Within it, an approach called machine learning has brought together many techniques and strategies for predictive modeling which, with some care, can also be applied to inference. These new views of Statistics have so far been only partially absorbed by the scientific community, mainly because of the absence of statisticians in a large share of studies. Although basic research in Probability and Statistics is important for developing new methodologies, building bridges between these disciplines and their application areas is essential for the advancement of science. The goal of this thesis is to bring data science, discussing both new and standard methodologies, closer to research on air pollution, which, according to the World Health Organization, is the largest environmental risk to human health. To that end, we present several analysis strategies and apply them to real air pollution data. The problems used as examples were the study by Salvo et al. (2017), whose objective was to associate the share of cars running on gasoline with ozone concentration in the city of São Paulo, and an extension of that work in which we analyze the effect of gasoline/ethanol use on the mortality of children and the elderly. We conclude that assumptions such as linearity and additivity, made by some standard models, can be too restrictive for intrinsically complex problems, with different models leading to different conclusions and it not always being easy to identify which one is most appropriate.
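The point that linearity and additivity assumptions can be too restrictive can be illustrated with a generic comparison of a linear model against a more flexible learner on data containing an interaction effect. The simulated relationship below is an invented stand-in, not the São Paulo ozone data or the models used in the thesis.

```python
# Sketch: a linear-additive model vs. a flexible ensemble on data with an interaction.
# The simulated data-generating process is invented for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 2000
gasoline_share = rng.uniform(0.2, 0.9, n)      # hypothetical covariates
temperature = rng.normal(25, 5, n)
# Non-additive response: the effect of gasoline share depends on temperature
ozone = 10 + 30 * gasoline_share * (temperature > 27) + 0.5 * temperature + rng.normal(0, 2, n)

X = np.column_stack([gasoline_share, temperature])
X_tr, X_te, y_tr, y_te = train_test_split(X, ozone, random_state=0)

for model in (LinearRegression(), GradientBoostingRegressor(random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, "test r2:", round(r2_score(y_te, model.predict(X_te)), 3))
```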
140

Policy and Place: A Spatial Data Science Framework for Research and Decision-Making

January 2017 (has links)
A major challenge in health-related policy and program evaluation research is attributing underlying causal relationships where complicated processes may exist in natural or quasi-experimental settings. Spatial interaction and heterogeneity between units at individual or group levels can violate both components of the Stable Unit Treatment Value Assumption (SUTVA) that are core to the counterfactual framework, making treatment effects difficult to assess. New approaches are needed in health studies to develop spatially dynamic causal modeling methods, both to derive insights from data in ways that are sensitive to spatial differences and dependencies and to rely on the more robust, dynamic technical infrastructure needed for decision-making. To address this gap with a focus on causal applications theoretically, methodologically, and technologically, I (1) develop a theoretical spatial framework (within single-level panel econometric methodology) that extends existing theories and methods of causal inference, which tend to ignore spatial dynamics; (2) demonstrate how this spatial framework can be applied in empirical research; and (3) implement a new spatial infrastructure framework that integrates and manages the required data for health systems evaluation. The new spatially explicit counterfactual framework considers how spatial effects impact treatment choice, treatment variation, and treatment effects. To illustrate this new methodological framework, I first replicate a classic quasi-experimental study that evaluates the effect of drinking-age policy on mortality in the United States from 1970 to 1984, and then extend it with a spatial perspective. In another example, I evaluate food access dynamics in Chicago from 2007 to 2014 by implementing advanced spatial analytics that better account for the complex patterns of food access, and a quasi-experimental research design to distill the impact of the Great Recession on the foodscape. Inference interpretation is sensitive to both the framing of the research design and the underlying processes that drive geographically distributed relationships. Finally, I advance a new Spatial Data Science Infrastructure to integrate and manage data in dynamic, open environments for public health systems research and decision-making. I demonstrate an infrastructure prototype in a final case study, developed in collaboration with health department officials and community organizations. / Dissertation/Thesis / Doctoral Dissertation Geography 2017
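As a generic illustration of the kind of spatially explicit treatment-effect estimation this abstract motivates, the sketch below fits a two-way fixed-effects panel regression that adds a spatial lag of treatment (the average treatment status of neighboring units) to probe interference between units. The panel, neighborhood structure, and effect sizes are simulated assumptions, not the dissertation's data or models.

```python
# Sketch: two-way fixed-effects regression with a spatial lag of treatment (statsmodels).
# Units sit on a line; each unit's neighbors are the adjacent units. All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_units, n_periods = 30, 10
treated_units = set(rng.choice(n_units, size=10, replace=False))
rows = []
for t in range(n_periods):
    for i in range(n_units):
        treat = int(i in treated_units and t >= 5)            # treatment switches on at t = 5
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < n_units]
        lag = float(np.mean([int(j in treated_units and t >= 5) for j in neighbors]))
        y = 1.0 * treat + 0.5 * lag + 0.1 * t + 0.2 * i + rng.normal(scale=0.5)
        rows.append({"unit": i, "time": t, "treat": treat, "spatial_lag": lag, "y": y})

df = pd.DataFrame(rows)
# Two-way fixed effects via unit and time dummies, plus direct and spillover terms
fit = smf.ols("y ~ treat + spatial_lag + C(unit) + C(time)", data=df).fit()
print(fit.params[["treat", "spatial_lag"]])
```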
