  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Eine generische Dienstarchitektur für das Gesundheitswesen / A Generic Service Architecture for Healthcare

Pucklitzsch, Thomas 28 July 2010 (has links) (PDF)
In healthcare, institutions with very different structures must cooperate in the treatment of patients. This thesis presents solutions that support such cooperation through distributed applications based on web services and peer-to-peer technology. Existing structures and relationships between these institutions are exploited to achieve good results in searching for data and in controlling workflows.
2

Implementierung des Genom-Alignments auf modernen hochparallelen Plattformen / Implementing Genome Alignment Algorithms on Highly Parallel Platforms

Knodel, Oliver 26 March 2014 (has links) (PDF)
Driven by the growing importance of DNA sequencing, sequencing devices have been developed further until their throughput reached millions of short nucleotide sequences within a few days. Finding the positions of these sequences in databases of known sequences is one of the most important tasks in modern molecular biology, yet the current algorithms and programs that can process the resulting data volumes in acceptable time determine only a fraction of these positions. This thesis investigates how modern genome alignment programs can be ported to highly parallel platforms such as FPGAs and GPUs. The algorithms currently adapted to this problem are reviewed and analyzed regarding their parallelizability on both platforms; after an evaluation of the alternatives, one algorithm is chosen and its implementation on both platforms is designed and realized, with search speed, the number of positions found, and usability as the main goals. The reduced Smith & Waterman algorithm implemented on the GPU is efficiently adapted to the problem and reaches higher speeds for short sequences than previous GPU realizations. The FPGA implementation requires a considerably lower runtime still, likewise finds every position in the database, and reaches speeds similar to modern high-performance programs that work heuristically. On both FPGA and GPU, the number of positions found is thereby more than twice that of all comparable programs.
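The thesis builds on the Smith & Waterman local alignment recurrence. A minimal CPU sketch of the classic scoring scheme follows; the function name and the scoring parameters are illustrative assumptions, not the reduced variant or the parameters used in the thesis:

```python
# Minimal Smith-Waterman local alignment scoring (illustrative sketch).
def smith_waterman_score(query, reference, match=2, mismatch=-1, gap=-1):
    """Return the best local alignment score of `query` against `reference`."""
    rows, cols = len(query) + 1, len(reference) + 1
    H = [[0] * cols for _ in range(rows)]  # dynamic-programming matrix
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if query[i - 1] == reference[j - 1] else mismatch)
            # Local alignment: scores never drop below zero.
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

FPGA and GPU ports typically parallelize this matrix along its anti-diagonals, since cell (i, j) depends only on cells (i-1, j-1), (i-1, j) and (i, j-1).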
3

Prävalenz von Kopfschmerzen und die damit verbundene Arztkonsultationsquote / Prevalence of headaches and the associated consultation rate - An evaluation in the German speaking area

Honekamp, Wilfried, Giese, Thomas 18 August 2010 (has links) (PDF)
Introduction: Within a broader project, we investigate whether a newly designed web-based information system informs medical laypersons better than conventional search engines and health portals do. Evaluating such a system for providing information to headache patients is only worthwhile if many people in the German-speaking area actually suffer from headaches and are more likely to consult the Internet about their complaints than a physician. We therefore examined the prevalence of headaches and the associated physician consultation rate in three studies. Method: We contacted 2000 insurants of the BARMER health insurance fund, about 9000 students of the University of Applied Sciences Bremen, and about 1000 students of the University for Health Sciences, Medical Informatics and Technology in Tyrol, Austria (UMIT), asking whether they suffer from headaches and, if so, whether they already have a medical diagnosis for them. A total of 521 persons participated in the investigation. Results: Of these, 292 participants (56%) suffered from headaches, and 52 of them (18%) had a medical diagnosis. Overall, this indicates a slightly lower headache prevalence than in previous studies, while the physician consultation rate reported in the literature is confirmed. Discussion: The evaluation of the three studies showed that the prevalence of headaches remains high and the associated physician consultation rate is still low.
4

Dynamic Thermal Imaging for Intraoperative Monitoring of Neuronal Activity and Cortical Perfusion

Hoffmann, Nico 23 November 2017 (has links) (PDF)
Neurosurgery is a demanding medical discipline that requires a complex interplay of several neuroimaging techniques, allowing structural as well as functional information to be recovered and visualized for the surgeon. In the case of tumor resections, this approach allows a more fine-grained differentiation of healthy and pathological tissue, which positively influences the postoperative outcome as well as the patient's quality of life. In this work, we discuss several approaches to establishing thermal imaging as a novel neuroimaging technique, primarily to visualize neuronal activity and, in the case of ischaemic stroke, the cortical perfusion state. Both applications require novel methods for data preprocessing, visualization, pattern recognition and regression analysis of intraoperative thermal imaging. Online multimodal integration of preoperative and intraoperative data is accomplished by a 2D-3D image registration and image fusion framework with an average accuracy of 2.46 mm. In navigated surgeries, the proposed framework provides all necessary tools to project intraoperative 2D imaging data onto preoperative 3D volumetric datasets such as 3D MR or CT images. Additionally, a fast machine learning framework for the recognition of cortical NaCl rinsings is discussed throughout this thesis; it enables a standardized quantification of tissue perfusion by means of an approximated heating model. Classifying the parameters of these models yields a map of connected areas, which we have shown to correlate with the demarcation caused by an ischaemic stroke as segmented in postoperative CT datasets. Finally, a semiparametric regression model has been developed for intraoperative monitoring of neuronal activity in the somatosensory cortex via somatosensory evoked potentials. These results were correlated with the neural activity measured by optical imaging. We found that thermal imaging yields comparable results, yet does not share the limitations of optical imaging. In this thesis we emphasize that thermal imaging represents a novel and valid tool for both intraoperative functional and structural neuroimaging.
5

CASSANDRA: drug gene association prediction via text mining and ontologies

Kissa, Maria 28 January 2015 (has links) (PDF)
The amount of biomedical literature has been increasing rapidly during the last decade. Text mining techniques can harness this large-scale data, shed light on complex drug mechanisms, and extract relation information that can support computational polypharmacology. In this work, we introduce CASSANDRA, a fully corpus-based and unsupervised algorithm which uses MEDLINE-indexed titles and abstracts to infer drug gene associations and assist drug repositioning. CASSANDRA measures the Pointwise Mutual Information (PMI) between biomedical terms derived from the Gene Ontology (GO) and Medical Subject Headings (MeSH). Based on the PMI scores, drug and gene profiles are generated, and candidate drug gene associations are inferred by computing the relatedness of their profiles. Results show that an Area Under the Curve (AUC) of up to 0.88 can be achieved. The algorithm can successfully identify direct drug gene associations with high precision and prioritize them over indirect ones. Validation shows that the statistically derived profiles from literature perform as well as (and at times better than) the manually curated profiles. In addition, we examine CASSANDRA's potential for drug repositioning: for all FDA-approved drugs repositioned over the last 5 years, we generate profiles from publications before 2009 and show that the new indications rank high in these profiles. In summary, co-occurrence-based profiles derived from the biomedical literature can accurately predict drug gene associations and provide insights into potential repositioning cases.
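The core PMI computation over term co-occurrence can be sketched as follows. The term sets below are invented toy examples; the real CASSANDRA profiles are built over GO and MeSH terms extracted from MEDLINE, not over raw strings:

```python
import math
from collections import Counter
from itertools import combinations

def pmi_scores(documents):
    """Pointwise mutual information for term pairs, from per-document term sets."""
    n = len(documents)
    term_counts = Counter()
    pair_counts = Counter()
    for terms in documents:
        term_counts.update(terms)
        # Sort so each unordered pair always gets the same key.
        pair_counts.update(combinations(sorted(terms), 2))
    pmi = {}
    for (a, b), c_ab in pair_counts.items():
        p_ab = c_ab / n                  # joint probability of co-occurrence
        p_a = term_counts[a] / n         # marginal probabilities
        p_b = term_counts[b] / n
        pmi[(a, b)] = math.log2(p_ab / (p_a * p_b))
    return pmi
```

Profiles built from such scores can then be compared with any vector relatedness measure (e.g. cosine similarity) to rank candidate drug gene associations.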
6

Detection of KRAS Synthetic Lethal Partners through Integration of Existing RNAi Screens

Christodoulou, Eleni 18 December 2014 (has links) (PDF)
KRAS is a gene that plays a very important role in the initiation and development of several types of cancer; in particular, 90% of human pancreatic cancers are due to KRAS mutations. KRAS is difficult to target directly, and a promising therapeutic path is its indirect inactivation by targeting one of its Synthetic Lethal Partners (SLPs). A gene G is a Synthetic Lethal Partner of KRAS if the simultaneous perturbation of KRAS and G leads to cell death. Several high-throughput RNAi screens have been performed to identify KRAS SLPs, but each study has reported only a few top-ranked SLPs, and to our knowledge these screens have never been examined in combination. This thesis employs integrative analysis of the published screens, utilizing additional, independent data, with the aim of detecting more robust therapeutic targets. To this end, RankSLP, a novel statistical analysis approach, was implemented, which for the first time i) consistently integrates existing KRAS-specific RNAi screens, ii) consistently integrates and normalizes the results of various ranking methods, iii) evaluates its findings with the use of external data, and iv) explores the effects of random data inclusion. This analysis was able to predict novel SLPs of KRAS and confirm some of the existing ones.
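The abstract does not spell out how RankSLP combines the screens; a toy sketch of the general idea behind rank integration (averaging normalized per-screen ranks) is shown below. The gene names are merely examples of published KRAS SLP candidates, and the aggregation rule is an assumption, not the thesis's method:

```python
from collections import defaultdict

def aggregate_ranks(screens):
    """Combine per-screen gene rankings into one consensus ranking.

    `screens` maps screen name -> genes ordered from strongest to weakest hit.
    Each gene's score is its average normalized rank (0 = top) over the screens
    that measured it; genes are returned best-first.
    """
    norm_ranks = defaultdict(list)
    for ranking in screens.values():
        n = len(ranking)
        for pos, gene in enumerate(ranking):
            norm_ranks[gene].append(pos / (n - 1) if n > 1 else 0.0)
    return sorted(norm_ranks, key=lambda g: sum(norm_ranks[g]) / len(norm_ranks[g]))
```

Normalizing by list length lets screens of different sizes contribute comparably, which is one way to make heterogeneous RNAi screens integrable at all.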
7

Word-sense disambiguation in biomedical ontologies

Alexopoulou, Dimitra 12 January 2011 (has links) (PDF)
With the ever-increasing amount of biomedical literature, text mining has emerged as an important technology to support bio-curation and search. Word sense disambiguation (WSD), the correct identification of terms in text in the light of ambiguity, is an important problem in text mining. Since the late 1940s many approaches based on supervised (decision trees, naive Bayes, neural networks, support vector machines) and unsupervised machine learning (context clustering, word clustering, co-occurrence graphs) have been developed. Knowledge-based methods that make use of the WordNet computational lexicon have also been developed. But only a few make use of ontologies, i.e. hierarchical controlled vocabularies, to solve the problem, and none exploit inference over ontologies and the use of metadata from publications. This thesis addresses the WSD problem in biomedical ontologies by suggesting different approaches for word sense disambiguation that use ontologies and metadata. The "Closest Sense" method assumes that the ontology defines multiple senses of the term; it computes the shortest path of co-occurring terms in the document to one of these senses. The "Term Cooc" method defines a log-odds ratio for co-occurring terms, including inferred co-occurrences. The "MetaData" approach trains a classifier on metadata; it does not require any ontology, but requires training data, which the other methods do not. These approaches are compared to each other when applied to a manually curated training corpus of 2600 documents for seven ambiguous terms from the Gene Ontology and MeSH. Over all conditions, the approaches achieve an 80% success rate on average. The MetaData approach performs best with 96% when trained on high-quality data; its performance deteriorates as the quality of the training data decreases. The Term Cooc approach performs better on the Gene Ontology (92% success) than on MeSH (73% success), as MeSH is not a strict is-a/part-of hierarchy but rather a loose is-related-to hierarchy. The Closest Sense approach achieves an 80% success rate on average. Furthermore, the thesis showcases applications ranging from ontology design to semantic search where WSD is important.
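The abstract only names the Term Cooc log-odds idea; one plausible sketch of such a disambiguator is given below. The smoothing scheme, the scoring form, and all sense names and counts are assumptions for illustration, not the thesis's actual definitions:

```python
import math

def log_odds(cooc_with_sense, sense_total, cooc_overall, corpus_total):
    """Log-odds that a context term favors a sense over the background corpus."""
    p = (cooc_with_sense + 1) / (sense_total + 2)   # add-one smoothing
    q = (cooc_overall + 1) / (corpus_total + 2)
    return math.log((p / (1 - p)) / (q / (1 - q)))

def best_sense(context_terms, sense_stats, corpus_stats, corpus_total):
    """Pick the sense whose summed co-occurrence log-odds over the context is highest.

    `sense_stats` maps sense -> (total co-occurrence count, per-term counts);
    `corpus_stats` maps term -> overall corpus count.
    """
    def score(sense):
        total, counts = sense_stats[sense]
        return sum(log_odds(counts.get(t, 0), total, corpus_stats.get(t, 0), corpus_total)
                   for t in context_terms)
    return max(sense_stats, key=score)
```

Inferred co-occurrences, as used by Term Cooc, would additionally propagate counts up the ontology's is-a hierarchy before scoring.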
8

Automated Patent Categorization and Guided Patent Search using IPC as Inspired by MeSH and PubMed

Eisinger, Daniel 08 September 2014 (has links) (PDF)
The patent domain is a very important source of scientific information that is currently not used to its full potential. Searching for relevant patents is a complex task because the number of existing patents is very high and grows quickly, patent text is extremely complicated, and standard vocabulary is not used consistently or does not even exist. As a consequence, pure keyword searches often fail to return satisfying results in the patent domain. Major companies employ patent professionals who are able to search patents effectively, but even they have to invest a lot of time and effort into their search. Academic scientists, on the other hand, do not have access to such resources and therefore often do not search patents at all, but they risk missing up-to-date information that will not be published in scientific publications until much later, if it is published at all. Document search on PubMed, the pre-eminent database for biomedical literature, relies on the annotation of its documents with relevant terms from the Medical Subject Headings ontology (MeSH) for improving recall through query expansion. Similarly, professional patent searches expand beyond keywords by including class codes from various patent classification systems. However, classification-based searches can only be performed effectively if the user has very detailed knowledge of the system, which is usually not the case for academic scientists. Consequently, we investigated methods to automatically identify relevant classes that can then be suggested to the user to expand their query. Since every patent is assigned at least one class code, it should be possible for these assignments to be used in a similar way as the MeSH annotations in PubMed. In order to develop a system for this task, it is necessary to have a good understanding of the properties of both classification systems.
In order to gain such knowledge, we perform an in-depth comparative analysis of MeSH and the main patent classification system, the International Patent Classification (IPC). We investigate the hierarchical structures as well as the properties of the terms and classes respectively, and we compare the assignment of IPC codes to patents with the annotation of PubMed documents with MeSH terms. Our analysis shows that the hierarchies are structurally similar, but terms and annotations differ significantly. The most important differences concern the considerably higher complexity of the IPC class definitions compared to MeSH terms and the far lower number of class assignments to the average patent compared to the number of MeSH terms assigned to PubMed documents. As a result of these differences, problems arise for both inexperienced patent searchers and professionals. On the one hand, the complex term system makes it very difficult for members of the former group to find any IPC classes that are relevant for their search task. On the other hand, the low number of IPC classes per patent points to incomplete class assignments by the patent office, therefore limiting the recall of the classification-based searches that are frequently performed by the latter group. We approach these problems from two directions: first, by automatically assigning additional patent classes to make up for the missing assignments, and second, by automatically retrieving relevant keywords and classes that are proposed to the user so they can expand their initial search. For the automated assignment of additional patent classes, we adapt to the patent domain an approach that was successfully used for the assignment of MeSH terms to PubMed abstracts. Each document is assigned a set of IPC classes by a large set of binary Maximum-Entropy classifiers.
Our evaluation shows good performance by individual classifiers (precision/recall between 0.84 and 0.90), making the retrieval of additional relevant documents for specific IPC classes feasible. The assignment of additional classes to specific documents is more problematic, since the precision of our classifiers is not high enough to avoid false positives. However, we propose filtering methods that can help solve this problem. For the guided patent search, we demonstrate various methods to expand a user's initial query. Our methods use both keywords and class codes that the user enters to retrieve additional relevant keywords and classes that are then suggested to the user. These additional query components are extracted from different sources such as patent text, IPC definitions, external vocabularies and co-occurrence data. The suggested expansions can help inexperienced users refine their queries with relevant IPC classes, and professionals can compose their complete query faster and more easily. We also present GoPatents, a patent retrieval prototype that incorporates some of our proposals and makes faceted browsing of a patent corpus possible.
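A binary maximum-entropy classifier over bag-of-words features is equivalent to logistic regression; one such classifier per IPC class yields the one-vs-rest setup described above. The sketch below trains a single binary classifier with stochastic gradient ascent; the vocabulary, documents and hyperparameters are invented toy assumptions:

```python
import math

def train_binary_maxent(docs, labels, vocab, epochs=200, lr=0.5):
    """Train one binary maximum-entropy (logistic regression) classifier.

    docs: list of term sets; labels: 1 if the document carries the target IPC class.
    Returns a weight dict over `vocab` plus a bias term.
    """
    w = {t: 0.0 for t in vocab}
    bias = 0.0
    for _ in range(epochs):
        for terms, y in zip(docs, labels):
            z = bias + sum(w[t] for t in terms if t in w)
            p = 1.0 / (1.0 + math.exp(-z))   # predicted class probability
            g = y - p                        # gradient of the log-likelihood
            bias += lr * g
            for t in terms:
                if t in w:
                    w[t] += lr * g
    return w, bias

def predict(w, bias, terms):
    """True if the document is predicted to belong to the IPC class."""
    z = bias + sum(w[t] for t in terms if t in w)
    return 1.0 / (1.0 + math.exp(-z)) > 0.5
```

Running one such classifier per IPC class and keeping only confident positives mirrors the filtering idea mentioned above for avoiding false positives.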
9

From Correlation to Causality: Does Network Information improve Cancer Outcome Prediction?

Roy, Janine 10 July 2014 (has links) (PDF)
Motivation: Disease progression in cancer can vary substantially between patients. Yet, patients often receive the same treatment. Recently, there has been much work on predicting disease progression and patient outcome variables from gene expression in order to personalize treatment options. A widely used approach relies on high-throughput experiments that aim to identify predictive signature genes for the clinical outcome of a disease. Microarray data analysis helps to reveal underlying biological mechanisms of tumor progression, metastasis, and drug resistance in cancer studies. Despite first diagnostic kits on the market, there are open problems such as the choice of random gene signatures or noisy expression data. The experimental or computational noise in data and the limited tissue samples collected from patients may furthermore reduce the predictive power and biological interpretability of such signature genes. Moreover, signature genes predicted by different studies generally show poor similarity, even for the same type of cancer. Integration of network information with gene expression data could provide more efficient signatures for outcome prediction in cancer studies. One approach to deal with these problems employs gene-gene relationships and ranks genes using the random surfer model of Google's PageRank algorithm. Unfortunately, the majority of published network-based approaches tested their methods on only a small number of datasets, questioning the general applicability of network-based methods for outcome prediction. Methods: In this thesis, I provide a comprehensive and systematic evaluation of a network-based outcome prediction approach, NetRank, a PageRank derivative, applied to several types of gene expression cancer data and four different types of networks. The algorithm identifies a signature gene set for a specific cancer type by incorporating gene network information with the given expression data.
To assess the performance of NetRank, I created a benchmark dataset collection comprising 25 cancer outcome prediction datasets from the literature and one in-house dataset. Results: NetRank performs significantly better than classical methods such as fold change or t-test, improving prediction performance by 7% on average. Moreover, a relatively unbiased but fully automated process for biomarker discovery approaches the accuracy level of the authors' signatures. Despite an order of magnitude difference in network size, a regulatory network, a protein-protein interaction network and two predicted networks perform equally well. Signatures as published by the authors and signatures generated with classical methods do not overlap, not even for the same cancer type, whereas the network-based signatures strongly overlap. I analyze and discuss these overlapping genes in terms of the hallmarks of cancer, single out six transcription factors and seven proteins in particular, and discuss their specific roles in cancer progression. Furthermore, several tests are conducted towards the identification of a Universal Cancer Signature. No Universal Cancer Signature could be identified so far, but a cancer-specific combination of general master regulators with specific cancer genes was discovered that achieves the best results for all cancer types. As NetRank offers great value for cancer outcome prediction, first steps towards a secure usage of NetRank in a public cloud are described. Conclusion: Experimental evaluation of network-based methods on a gene expression benchmark dataset suggests that these methods are especially suited for outcome prediction, as they overcome the problems of random gene signatures and noisy expression data. By combining network information with gene expression data, network-based methods identify highly similar signatures across all cancer types, in contrast to classical methods, which fail to identify common gene sets even for the same cancer type. In general, integrating additional information into gene expression analysis allows the identification of more reliable, accurate and reproducible biomarkers and provides a deeper understanding of processes occurring in cancer development and progression.
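The PageRank-derived scoring idea behind NetRank can be sketched as follows: a gene's score mixes its own expression-based relevance with the scores flowing in from its network neighbors. The damping factor, iteration count and toy network below are assumptions for illustration, not the thesis's parameters:

```python
def netrank(neighbors, expression, damping=0.5, iterations=50):
    """NetRank-style scoring sketch (personalized PageRank on a gene network).

    `neighbors` maps gene -> list of connected genes (assumed symmetric, so every
    listed neighbor has degree >= 1); `expression` maps gene -> an expression-based
    relevance score in [0, 1], e.g. absolute correlation with patient outcome.
    """
    genes = list(neighbors)
    r = dict(expression)
    for _ in range(iterations):
        # Each gene keeps (1 - d) of its own relevance and receives d of its
        # neighbors' scores, each neighbor splitting its score over its degree.
        r = {g: (1 - damping) * expression[g]
                + damping * sum(r[n] / len(neighbors[n]) for n in neighbors[g])
             for g in genes}
    return r
```

The effect is that a hub gene with modest expression-based relevance but well-scoring neighbors (e.g. a master regulator) can outrank an isolated gene with higher raw relevance, which is exactly what distinguishes network-based signatures from fold-change or t-test rankings.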
10

Application of the FITT framework to evaluate a prototype health information system

Honekamp, Wilfried, Ostermann, Herwig 24 June 2011 (has links) (PDF)
We developed a prototype information system with an integrated expert system for headache patients. The FITT (fit between individual, task and technology) framework was used to evaluate this prototype health information system and to determine which deltas to work on in future developments. The system was evaluated positively in all FITT dimensions, and the framework proved to be a proper tool for this evaluation task.
