291

ResearchIQ: An End-To-End Semantic Knowledge Platform For Resource Discovery in Biomedical Research

Raje, Satyajeet 20 December 2012 (has links)
No description available.
292

Statistical Improvements for Ecological Learning about Spatial Processes

Dupont, Gaetan L 20 October 2021 (has links) (PDF)
Ecological inquiry is rooted fundamentally in understanding population abundance, both to develop theory and to improve conservation outcomes. Despite this importance, estimating abundance is difficult due to the imperfect detection of individuals in a sampled population. Further, accounting for space can provide more biologically realistic inference, shifting the focus from abundance to density and encouraging the exploration of spatial processes. To address these challenges, Spatial Capture-Recapture ("SCR") has emerged as the most prominent method for estimating density reliably. The SCR model is conceptually straightforward: it combines a spatial model of detection with a point-process model of the spatial distribution of individuals, using data collected on individuals within a spatially referenced sampling design. These data are often coarse in spatial and temporal resolution, though, motivating research into improving the quality of the data available for analysis. Here I explore two related approaches to improving inference from SCR: sampling design and data integration. Chapter 1 describes the context of this thesis in more detail. Chapter 2 presents a framework to improve sampling design for SCR through the development of an algorithmic optimization approach. Compared to pre-existing recommendations, these optimized designs perform just as well but offer far more flexibility to account for available resources and challenging sampling scenarios. Chapter 3 presents one of the first methods of integrating an explicit movement model into the SCR model using telemetry data, which provide information at a much finer spatial scale. The integrated model shows significant improvements over the standard model for a specific inferential objective, in this case the estimation of landscape connectivity. In Chapter 4, I close with two broader conclusions about developing statistical methods for ecological inference. First, simulation-based evaluation is integral to this process, but the circularity of its use is too easily understated. Second, and often underappreciated: statistical solutions should be as intuitive as possible to facilitate their adoption by a diverse pool of potential users. These novel approaches to sampling design and data integration represent essential steps in advancing SCR and offer intuitive opportunities to advance ecological learning about spatial processes.
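To make the SCR idea concrete, below is a minimal sketch (not code from the thesis) of the half-normal detection function at the core of most SCR models: the probability of detecting an individual at a trap decays with the distance between the trap and the individual's activity center. The grid layout, parameter values, and function names are illustrative assumptions.

```python
import numpy as np

def half_normal_detection(activity_center, traps, p0=0.3, sigma=1.5):
    """Capture probability at each trap under a half-normal detection
    model: p = p0 * exp(-d^2 / (2 * sigma^2)), where d is the distance
    between an individual's activity center and the trap.

    p0 (baseline detection) and sigma (spatial scale) are illustrative
    values; in a real SCR analysis they are estimated by maximum
    likelihood, jointly with a point-process model for where activity
    centers lie.
    """
    d2 = np.sum((traps - activity_center) ** 2, axis=1)
    return p0 * np.exp(-d2 / (2 * sigma**2))

# Hypothetical 3x3 trapping grid and one activity center.
traps = np.array([[x, y] for x in range(3) for y in range(3)], dtype=float)
center = np.array([1.2, 0.8])
print(half_normal_detection(center, traps).round(3))
```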
293

Extension of an Integration System of Learning Object Repositories Aiming at Personalizing Queries with a Focus on Accessibility

RAPHAEL GHELMAN 16 October 2006 (has links)
Nowadays e-learning is becoming more important, as it makes possible the dissemination of knowledge and information through the internet in a faster and cheaper way. Consequently, in order to filter what is most relevant and/or of interest to the user, personalization architectures and techniques have been proposed. Among the many existing possibilities of personalization, the one that deals with accessibility is becoming essential, because it guarantees that a wide variety of users may access information according to their preferences and needs. Accessibility is not just about ensuring that disabled people can access information, although this is important and may be a legal requirement. It is also about ensuring that a wide variety of users and devices can all gain access to information, thereby maximizing the potential audience. This dissertation presents an extension of LORIS, an integration system of learning object repositories, describing the changes to its architecture that enable it to deal with accessibility and to recognize different versions of the same learning object, thus allowing a user to execute a query considering his or her preferences and needs. A prototype of the services described in the architecture was developed using web services and faceted navigation, as well as web, e-learning, and accessibility standards. The use of web services and standards aims at providing flexibility and interoperability, while the faceted navigation, as implemented, allows the user to apply multiple filters to the query results without the need to resubmit the query.
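As a rough illustration of the faceted navigation described above (a sketch under assumed data, not the LORIS implementation): once a query's results are fetched, facet selections can be applied as in-memory filters, so the query is never resubmitted. The records and field names below are hypothetical.

```python
from typing import Any

def apply_facets(results: list[dict[str, Any]], facets: dict[str, Any]) -> list[dict[str, Any]]:
    """Narrow an already-fetched result set by facet selections,
    so the original query never has to be resubmitted."""
    return [r for r in results
            if all(r.get(field) == value for field, value in facets.items())]

# Hypothetical learning-object records returned by a single query.
results = [
    {"title": "Intro to Algebra",  "format": "video", "language": "en", "captions": True},
    {"title": "Algebra Basics",    "format": "text",  "language": "pt", "captions": False},
    {"title": "Linear Equations",  "format": "video", "language": "en", "captions": False},
]

# An accessibility profile demanding captioned video applies two facets at once.
print(apply_facets(results, {"format": "video", "captions": True}))
```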
294

Machine Learning Demand Forecast for Demand Sensing and Shaping : Combine the existing work done with demand sensing and shaping to achieve a higher customer service level, customer experience and balancing inventory

Bernabeu Fernandez De Liencres, Damian January 2024 (has links)
This master's thesis investigates the utilization of data-driven approaches for demand forecasting and inventory control in the context of Ericsson's supply chain management. The study focuses on the integration of machine learning, demand shaping, and real-time data to enhance accuracy and efficiency in these critical areas. The research explores the impact of machine learning techniques on demand forecasting, highlighting the significance of precise predictions in guiding production, inventory management, and distribution strategies. To address this, the study proposes the integration of real-time data streams and Internet of Things (IoT) devices, enabling the capture of up-to-date information. This integration facilitates prompt responses to evolving demand patterns, thereby optimizing supply chain operations. The research provides valuable insights for Ericsson to enhance its demand forecasting capabilities and optimize inventory management in a data-driven environment.
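A minimal sketch of the kind of machine-learning demand forecast discussed above, assuming a simple lag-feature formulation and scikit-learn; the series, horizon, and model choice are illustrative, not Ericsson's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def make_lag_features(demand: np.ndarray, n_lags: int = 4):
    """Turn a univariate demand series into a supervised-learning
    problem: each row holds the previous n_lags observations,
    and the target is the next value."""
    X = np.column_stack([demand[i:len(demand) - n_lags + i] for i in range(n_lags)])
    y = demand[n_lags:]
    return X, y

# Hypothetical weekly demand with trend and noise.
rng = np.random.default_rng(0)
demand = 100 + 0.5 * np.arange(120) + rng.normal(0, 5, 120)

X, y = make_lag_features(demand)
model = GradientBoostingRegressor().fit(X[:-10], y[:-10])  # hold out the last 10 weeks
print("forecast:", model.predict(X[-10:]).round(1))
```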
295

Data integration between clinical research and patient care: A framework for context-depending data sharing and in silico predictions

Hoffmann, Katja, Pelz, Anne, Karg, Elena, Gottschalk, Andrea, Zerjatke, Thomas, Schuster, Silvio, Böhme, Heiko, Glauche, Ingmar, Roeder, Ingo 16 January 2025 (has links)
The transfer of new insights from basic or clinical research into clinical routine is usually a lengthy and time-consuming process. Conversely, there are still many barriers to directly provide and use routine data in the context of basic and clinical research. In particular, no coherent software solution is available that allows a convenient and immediate bidirectional transfer of data between concrete treatment contexts and research settings. Here, we present a generic framework that integrates health data (e.g., clinical, molecular) and computational analytics (e.g., model predictions, statistical evaluations, visualizations) into a clinical software solution which simultaneously supports both patient-specific healthcare decisions and research efforts, while also adhering to the requirements for data protection and data quality. Specifically, our work is based on a recently established generic data management concept, for which we designed and implemented a web-based software framework that integrates data analysis, visualization as well as computer simulation and model prediction with audit trail functionality and a regulation-compliant pseudonymization service. Within the front-end application, we established two tailored views: a clinical (i.e., treatment context) perspective focusing on patient-specific data visualization, analysis and outcome prediction and a research perspective focusing on the exploration of pseudonymized data. We illustrate the application of our generic framework by two use-cases from the field of haematology/oncology. Our implementation demonstrates the feasibility of an integrated generation and backward propagation of data analysis results and model predictions at an individual patient level into clinical decision-making processes while enabling seamless integration into a clinical information system or an electronic health record.
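As a hedged illustration of one ingredient mentioned above, a pseudonymization step might derive stable, non-reversible pseudonyms with a keyed hash. This sketch is not the authors' service; the key handling and record fields are assumptions.

```python
import hmac
import hashlib

def pseudonymize(patient_id: str, secret_key: bytes) -> str:
    """Derive a stable pseudonym from a patient identifier using a
    keyed hash (HMAC-SHA256). The same ID always maps to the same
    pseudonym, so research records stay linkable across visits, while
    the mapping cannot be reversed without the key."""
    return hmac.new(secret_key, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

# Hypothetical key; a real service would keep it in a protected key store.
key = b"trusted-third-party-secret"
record = {"patient_id": "MRN-004711", "diagnosis": "CML", "bcr_abl_ratio": 0.12}

# The research view carries the pseudonym instead of the identifier.
research_view = {**record, "patient_id": pseudonymize(record["patient_id"], key)}
print(research_view)
```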
296

Development of a prototype of a data management and exchange system in international logistics (master's thesis)

Mukhachev, I. A. January 2024 (has links)
Master's final qualification work: 69 pages, 5 figures, 30 sources, 3 appendices.
297

Efficient use of a protein structure annotation database: application to packing analysis

Rother, Kristian 14 August 2007 (has links)
In this work, a multitude of data on the structure and function of proteins is compiled and subsequently applied to the analysis of atomic packing. Structural analyses often require specific protein datasets, based on certain properties of the proteins, such as sequence features, protein folds, or resolution. Compiling such sets using current web resources is tedious because the necessary data are spread over many different databases. To facilitate this task, Columba, an integrated database containing annotation of protein structures, was created. Columba integrates sixteen databases, including PDB, KEGG, Swiss-Prot, CATH, SCOP, the Gene Ontology, and ENZYME. The data in Columba revealed that two thirds of the structures in the PDB database are annotated by many other databases. The remaining third is poorly annotated, partially because the corresponding structures have only recently been published, and partially because they are non-protein structures. The Columba database can be searched through a data-source-specific web interface at www.columba-db.de. Users can thus quickly select PDB entries of proteins that match the desired criteria. Rules for creating such datasets efficiently have been derived, and were applied to create datasets for analyzing the packing of proteins.
Packing analysis measures how much space there is between atoms. This indicates regions where high local mobility of the structure is required, as well as errors in the structure. In a reference dataset, a high number of atom-sized cavities was found in a region near the protein surface. In a transmembrane protein dataset, these cavities are frequently located in channels and transporters that undergo conformational changes. A dataset of ligands and coenzymes bound to proteins was packed at least as tightly as the reference data. These results resolve several contradictions in the literature.
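A rough sketch of the packing-analysis idea (not Columba's code): count, for each atom, the neighbours within a probe radius; unusually sparse neighbourhoods flag candidate cavities. The coordinates and thresholds below are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_packing(coords: np.ndarray, probe_radius: float = 2.8) -> np.ndarray:
    """For each atom, count neighbouring atoms within probe_radius
    (roughly one water diameter). Unusually low counts flag loosely
    packed regions -- candidate cavities near the surface."""
    tree = cKDTree(coords)
    # Subtract 1 so an atom does not count itself.
    return np.array([len(tree.query_ball_point(c, probe_radius)) - 1 for c in coords])

# Hypothetical coordinates; in practice these come from PDB atom records.
rng = np.random.default_rng(1)
coords = rng.uniform(0, 20, size=(500, 3))
counts = local_packing(coords)
print("loosely packed atoms:", np.sum(counts < 2))
```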
298

Organization and exploitation of biological molecular networks for studying the etiology of genetic diseases and for characterizing drug side effects

Bresso, Emmanuel 25 September 2013 (has links)
The understanding of human diseases and drug mechanisms today requires taking molecular interaction networks into account. Recent studies of biological systems are producing increasing amounts of data. However, the complexity and heterogeneity of these datasets make it difficult to exploit them for understanding atypical phenotypes or drug side effects. This thesis presents two knowledge-based integrative approaches that combine data management, graph visualization, and data mining techniques in order to improve our understanding of phenotypes associated with genetic diseases or drug side effects. Data management relies on a generic data warehouse, NetworkDB, that integrates data on proteins and their properties. Customization of the NetworkDB model and regular updates are semi-automatic. Graph visualization techniques have been coupled with NetworkDB. This approach has facilitated access to biological network data in order to study genetic disease etiology, including X-linked intellectual disability (XLID). Meaningful sub-networks of genes have thus been identified and characterized. Drug side-effect profiles have been extracted from NetworkDB and subsequently characterized by a relational learning procedure coupled with NetworkDB. The resulting rules indicate which properties of drugs and their targets (including their networks) preferentially associate with a particular side-effect profile.
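To illustrate the notion of a shared side-effect profile (a toy sketch, not the NetworkDB mining procedure): drugs can be grouped by every combination of side effects they have in common, keeping combinations shared by several drugs. The drug-effect relation below is invented.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical drug -> side-effect relation, standing in for data
# extracted from a warehouse such as NetworkDB.
side_effects = {
    "drugA": {"nausea", "headache", "rash"},
    "drugB": {"nausea", "headache"},
    "drugC": {"rash", "dizziness"},
    "drugD": {"nausea", "headache", "dizziness"},
}

def shared_profiles(relation, min_drugs=2, min_effects=2):
    """Group drugs by every combination of side effects they exhibit,
    keeping only combinations shared by at least min_drugs drugs."""
    profiles = defaultdict(set)
    for drug, effects in relation.items():
        for size in range(min_effects, len(effects) + 1):
            for combo in combinations(sorted(effects), size):
                profiles[combo].add(drug)
    return {p: d for p, d in profiles.items() if len(d) >= min_drugs}

# E.g. ('headache', 'nausea') is shared by drugA, drugB, and drugD.
print(shared_profiles(side_effects))
```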
299

Data Governance: A conceptual framework to prevent your Data Lake from becoming a Data Swamp

Paschalidi, Charikleia January 2015 (has links)
Information Security is nowadays a very popular subject of discussion among both academics and organizations. Proper Data Governance is the first step towards an effective Information Security policy. As a consequence, more and more organizations are changing their approach to data, treating it as an asset in order to extract as much value as possible from it. Living in an IT-driven world leads many researchers to approach Data Governance by borrowing IT Governance frameworks. The aim of this thesis is to contribute to this research by conducting Action Research at a large financial institution in the Netherlands that is currently releasing a Data Lake in which all data will be gathered and stored in a secure way. During this research, a framework for implementing proper Data Governance in the Data Lake is introduced. The results were promising and indicate that, under specific circumstances, this framework could be very beneficial not only for this specific institution, but for any organization that would like to avoid confusion and apply Data Governance to its tasks.
300

A Study on Machine Learning Techniques for the Schema Matching Networks Problem

Rodrigues, Diego de Azevedo 22 October 2018 (has links)
Schema Matching is the problem of finding semantic correspondences between elements from different schemas. This is a challenging problem, since the same concept is often represented by disparate elements in the schemas. Traditional instances of this problem involve a pair of schemas to be matched. Recently, however, there has been increasing interest in matching several related schemas at once, a problem known as Schema Matching Networks, where the goal is to identify elements from several schemas that correspond to a single concept. We propose a family of machine-learning-based methods for schema matching networks, which proved to be a competitive alternative to traditional matching in several domains. To overcome the need for a large amount of training data, we also propose a bootstrapping procedure that generates training data automatically. In addition, we leverage constraints that arise in network scenarios to improve the quality of this data. We also propose a strategy for receiving user feedback to assert some of the generated matchings and, relying on this feedback, improve the quality of the final result. Our experiments show that our methods can outperform baselines, reaching an F1-score of up to 0.83.
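As an illustrative sketch of schema matching cast as supervised learning (assumed features and training pairs, not the thesis's models): candidate element pairs are described by similarity features, and a classifier predicts match versus non-match; the bootstrapped pairs below stand in for automatically generated training data.

```python
from difflib import SequenceMatcher
from sklearn.linear_model import LogisticRegression

def similarity_features(a: str, b: str) -> list[float]:
    """Two simple features for a candidate correspondence between
    schema elements: name similarity and a shared-token indicator."""
    ratio = SequenceMatcher(None, a.lower(), b.lower()).ratio()
    shared = bool(set(a.lower().split("_")) & set(b.lower().split("_")))
    return [ratio, float(shared)]

# Hypothetical bootstrapped training pairs (1 = match, 0 = non-match).
train = [("customer_name", "client_name", 1), ("phone", "phone_number", 1),
         ("customer_name", "order_date", 0), ("zip_code", "unit_price", 0)]

X = [similarity_features(a, b) for a, b, _ in train]
y = [label for _, _, label in train]
clf = LogisticRegression().fit(X, y)

# Score an unseen candidate correspondence.
print(clf.predict([similarity_features("client_phone", "phone_number")]))
```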
