  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
881

Spectral Similarity Measures for In Vivo Human Tissue Discrimination Based on Hyperspectral Imaging

Pathak, Priya, Chalopin, Claire, Zick, Laura, Köhler, Hannes, Pfahl, Annekatrin, Rayes, Nada, Gockel, Ines, Neumuth, Thomas, Melzer, Andreas, Jansen-Winkeln, Boris, Maktabi, Marianne 14 February 2025 (has links)
Problem: Similarity measures are widely used as an approved method for spectral discrimination or identification, with applications in many areas of scientific research. Although a range of works have been presented, only a few showed even slightly promising results for human tissue, and these were mostly focused on classifying pathological versus non-pathological tissue. Methods: In this work, several spectral similarity measures were evaluated on hyperspectral (HS) images of in vivo human tissue for tissue discrimination purposes. Moreover, we introduced two new hybrid spectral measures, called SID-JM-TAN(SAM) and SID-JM-TAN(SCA). We analyzed spectral signatures obtained from 13 different human tissue types and two materials (gauze, instruments), collected from HS images of 100 patients during surgeries. Results: The quantitative results showed the reliable performance of the different similarity measures and of the proposed hybrid measures for tissue discrimination. The latter produced discrimination values up to 6.7 times higher than the classical spectral similarity measures. Moreover, an application of the similarity measures to supporting the annotation of HS images was presented. We showed that automatic checking of annotated thyroid and colon tissue was successful in 73% and 60% of the total spectra, respectively; the hybrid measures showed the highest performance. Furthermore, the automatic labeling of wrongly annotated tissues was similar for all measures, with an accuracy of up to 90%. Conclusion: In future work, the proposed spectral similarity measures will be integrated into tools to support physicians in the annotation and tissue labeling of HS images.
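The classical building blocks behind such hybrid measures can be sketched in a few lines. This is a minimal illustration, not the thesis' implementation: the SID × tan(SAM) combination shown is the well-known mixed measure that SID-JM-TAN(SAM) extends with a Jeffries-Matusita term, and the toy spectra are hypothetical.

```python
import numpy as np

def sam(x, y):
    """Spectral Angle Mapper: angle between two spectra, in radians."""
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def sid(x, y, eps=1e-12):
    """Spectral Information Divergence: symmetric KL divergence of
    spectra normalized to probability-like distributions."""
    p = x / x.sum() + eps
    q = y / y.sum() + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def sid_tan_sam(x, y):
    """Classical mixed measure SID x tan(SAM); larger values mean the
    spectra are easier to discriminate."""
    return sid(x, y) * np.tan(sam(x, y))

# Toy spectra: proportional shapes give zero discrimination,
# a reversed shape gives a strictly positive value.
a = np.array([1.0, 2.0, 3.0])
b = np.array([3.0, 2.0, 1.0])
```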
882

Vyhledávání graffiti tagů podle podobnosti / Graffiti Tag Retrieval

Grünseisen, Vojtěch January 2013 (has links)
This work explores the possibility of using current computer vision algorithms and methods for automatic similarity matching of so-called graffiti tags: graffiti used as a fast and simple signature of their authors. The design and implementation of a CBIR system created for this task is described. To measure image similarity, local features are used, most notably self-similarity features.
883

Discovering Implant Terms in Medical Records

Jerdhaf, Oskar January 2021 (has links)
Implant terms are terms like "pacemaker" which indicate the presence of artifacts in the body of a human. These implant terms are key to determining whether a patient can safely undergo Magnetic Resonance Imaging (MRI). However, identifying these terms in medical records is time-consuming, laborious and expensive, yet necessary for taking the correct precautions before an MRI scan. Automating this process is of great interest to radiologists, as it ideally saves time, prevents mistakes and as a result saves lives. Electronic medical records (EMRs) contain the documented medical history of a patient, including any implants or objects that an individual has inside their body. Information about such objects and implants is of great interest when determining if and how a patient can be scanned using MRI. Unfortunately, this information is not easily extracted through automatic means: the sparse presence of implant terms and the unusual structure of medical records compared to most written text make simple automation very difficult. By leveraging recent advancements in Artificial Intelligence (AI), this thesis explores the ability to identify and extract such terms automatically in Swedish EMRs. For the task of identifying implant terms, a generally trained Swedish Bidirectional Encoder Representations from Transformers (BERT) model is used, which is then fine-tuned on Swedish medical records. Using this model, a variety of approaches are explored, namely BERT-KDTree, BERT-BallTree, cosine brute force and unsupervised NER. The results show that BERT-KDTree and BERT-BallTree are the most rewarding methods. Results from both methods have been evaluated by domain experts and appear promising for such an early stage, given the difficulty of the task.
The evaluation of BERT-BallTree shows that multiple methods of extraction may be preferable, as they provide different but still useful terms. Cosine brute force is deemed an unrealistic approach due to its computational and memory requirements. The NER approach was deemed too impractical and laborious to justify for this study, yet it could prove useful, or even more suitable, under a different set of conditions and goals. While there is much to be explored and improved, these experiments are a clear indication that automatic identification of implant terms is possible, as a large number of implant terms were successfully discovered using automated means.
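The BERT-KDTree idea — index term embeddings in a space-partitioning tree, then query with embeddings of known implant seed terms — can be sketched as follows. This is a hedged toy sketch: the 4-dimensional vectors stand in for real Swedish-BERT embeddings, and the term list is hypothetical.

```python
import numpy as np
from scipy.spatial import KDTree

# Toy vectors standing in for BERT embeddings of candidate terms.
terms = ["pacemaker", "stent", "insulin", "fever"]
emb = np.array([[0.9, 0.1, 0.0, 0.0],
                [0.8, 0.2, 0.1, 0.0],
                [0.1, 0.9, 0.0, 0.1],
                [0.0, 0.0, 0.9, 0.4]])

# Index all candidates, then query with a known implant seed term.
tree = KDTree(emb)
dist, idx = tree.query(emb[0], k=2)
nearest = [terms[i] for i in idx]
# nearest[0] is the seed itself; nearest[1] is its closest candidate,
# a potential new implant term for expert review.
```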
884

COPS: Cluster optimized proximity scaling

Rusch, Thomas, Mair, Patrick, Hornik, Kurt January 2015 (has links) (PDF)
Proximity scaling (i.e., multidimensional scaling and related methods) is a versatile statistical method whose general idea is to reduce the multivariate complexity in a data set by employing suitable proximities between the data points and finding low-dimensional configurations where the fitted distances optimally approximate these proximities. The ultimate goal, however, is often not only to find the optimal configuration but to infer statements about the similarity of objects in the high-dimensional space based on the similarity in the configuration. Since these two goals are somewhat at odds, it can happen that the resulting optimal configuration makes inferring similarities rather difficult. In that case the solution lacks "clusteredness" in the configuration (which we call "c-clusteredness"). We present a version of proximity scaling, coined cluster optimized proximity scaling (COPS), which solves the conundrum by introducing a more clustered appearance into the configuration while adhering to the general idea of multidimensional scaling. In COPS, an arbitrary MDS loss function is parametrized by monotonic transformations and combined with an index that quantifies the c-clusteredness of the solution. This index, the OPTICS cordillera, has intuitively appealing properties with respect to measuring c-clusteredness. This combination of MDS loss and index is called "cluster optimized loss" (coploss) and is minimized to push any configuration towards a more clustered appearance. The effect of the method will be illustrated with various examples: assessing similarities of countries based on the history of banking crises in the last 200 years, scaling Californian counties with respect to the projected effects of climate change and their social vulnerability, and preprocessing a data set of handwritten digits for subsequent classification by nonlinear dimension reduction. (authors' abstract) / Series: Discussion Paper Series / Center for Empirical Research Methods
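The structure of coploss — an MDS badness-of-fit term combined with a c-clusteredness index — can be sketched as follows. A minimal sketch only: raw stress stands in for the parametrized MDS loss, and the `cindex` callable merely stands in for the OPTICS cordillera defined in the paper.

```python
import numpy as np
from scipy.spatial.distance import pdist

def stress(conf, delta):
    """Raw MDS stress: squared error between fitted configuration
    distances and target dissimilarities (condensed vectors)."""
    return float(np.sum((pdist(conf) - delta) ** 2))

def coploss(conf, delta, cindex, lam=1.0):
    """Cluster-optimized loss, sketched: badness of fit minus a
    weighted c-clusteredness index of the configuration."""
    return stress(conf, delta) - lam * cindex(conf)

# A perfectly fitting 2-point configuration: stress is zero, so the
# loss is driven entirely by the clusteredness term.
conf = np.array([[0.0, 0.0], [1.0, 0.0]])
delta = np.array([1.0])  # target dissimilarity between the two points
loss = coploss(conf, delta, cindex=lambda c: 0.0)
```

Minimizing this loss over configurations (and transformation parameters) trades a little fit for a more clustered appearance, which is the core COPS idea.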
885

Analyse computationnelle des protéines kinases surexprimées dans le cancer du sein «Triple-négatif» / Computational analysis of overexpressed protein kinases in «triple-negative» breast cancer.

Um Nlend, Ingrid January 2014 (has links)
Résumé : Malgré l’apport de nouvelles armes thérapeutiques, le cancer du sein reste la première cause de décès par cancer chez la femme de moins de 65 ans. Le cancer du sein dit «triple-négatif», un sous-type représentant environ 10 % des cancers du sein, est caractérisé par l’absence de récepteurs hormonaux aux oestrogènes et à la progestérone et aussi par l’absence d’expression du récepteur de croissance HER-2. Ce type de cancer considéré comme étant le plus agressif des cancers du sein, possède un profil clinique défavorable avec un haut risque de rechute métastatique. Les seuls outils thérapeutiques disponibles actuellement contre ce type de cancer sont la chimiothérapie et la radiothérapie, qui s’avèrent être très toxiques pour le patient et ne ciblent pas de manière spécifique la tumeur. Il a été ainsi démontré qu’il existe au sein du kinome (i.e. l’ensemble des protéines kinases du génome humain), 26 protéines kinases surexprimées dans le cancer du sein dit «triple-négatif» et dont le rôle s’avère être critique dans la croissance de ces cellules cancéreuses. Nous avons utilisé différentes méthodes computationnelles développées au sein de notre laboratoire afin de caractériser le site de liaison de l’ensemble de ces 26 protéines kinases. Plus précisément, nous avons calculé les similitudes entre les protéines kinases à plusieurs niveaux: 1. séquence globale, 2. séquence des sites de liaison, 3. structure des sites de liaison et 4. profils de liaison. Nous avons utilisé des outils de visualisation de données afin de mettre en évidence ces similarités. Le profil de liaison de 38 molécules inhibitrices a été déterminé pour un ensemble de 290 protéines kinases humaines, incluant 15 des protéines kinases appartenant à notre sous-ensemble de protéines d'intérêt. Ces profils de liaison sont utilisés pour définir les similarités fonctionnelles entre les protéines kinases d'intérêt, en utilisant le coefficient tau de corrélation des rangs de Kendall (τ).
Nous avons effectué des simulations d’arrimage à l’aide du logiciel FlexAID, pour chacune des protéines et l’ensemble des 38 molécules inhibitrices afin d’élargir l’analyse précédente aux autres protéines qui n’ont pas été testé par Karaman et al. Grâce aux différentes études structurales et computationnelles effectuées ci-dessus, nous avons été à même de hiérarchiser les protéines kinases en fonction des similarités moléculaires vis-à-vis de leurs profils de liaison, en vue du développement futur d’outils thérapeutiques poly-pharmacologiques. // Abstract : Despite the development of novel therapeutic agents, breast cancer represents a major cause of death among women. Among breast cancer patients, triple negative (TN) breast cancer (TNBC) represents approximately 15% of cases. TNBC is characterized by the absence of the estrogen receptor, the progesterone receptor as well as the HER2 protein kinase. Recently, it has been shown that a subset of 26 protein kinases (TNVT set) is overexpressed in TNBC. Their inhibition in siRNA knockdown experiments leads to varying levels of growth inhibition in TN and sometimes non-TN cancer cell lines. These studies validate TNVT set kinases as potential therapeutic targets. The aim of this project is to characterize the binding site of TNVT set kinases using different computational methods developed in our research group and to determine which protein kinases of this subset could be more likely to bind similar ligands as part of a poly-pharmacological approach. We calculated global sequence similarities, binding-site sequence similarities and 3D atomic binding-site similarities for the TNVT set of kinases. This analysis shows that binding-site sequence similarities somehow reflect global sequence similarities. Binding-site 3D atomic similarities reflect binding-site sequence similarities but are more widespread. This may have potential functional consequences in terms of small-molecule molecular recognition. 
Such similarities can potentially lead to cross-reactivity effects but they can also be exploited in the development of multi-functional poly-pharmacological drugs. Recently, the dissociation constants (K_d) of 38 small-molecule inhibitors for 290 protein kinases (including 17 kinases in the TNVT set) were calculated. These experimental binding profiles were used to define a measure of functional profile similarity using Kendall rank correlations (τ). We will present results using our docking program FlexAID for the 38 small-molecules tested by Karaman et al. against the 26 kinases in the TNVT set. Similar to experimental binding profiles, the docking scores can be used to define docking binding-profile similarities using τ rank correlations. Docking binding-profile similarities are then used to cluster the 26 kinases in the TNVT set. Clusters represent subsets of kinases within the TNVT set with functionally similar binding-sites. Finally, we compare functional docking profile similarities to the sequence and 3D atomic similarities discussed above. This analysis will allow us to detect subsets of kinases in the TNVT set for which it may be possible to develop multi-functional inhibitors.
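The Kendall-τ profile similarity described above can be sketched directly with `scipy.stats.kendalltau`. The kinase names and score profiles below are hypothetical stand-ins: the thesis uses 38 inhibitors over the TNVT kinases, not these toy values.

```python
from scipy.stats import kendalltau

# Toy docking-score profiles: one row per kinase, one column per
# inhibitor (hypothetical values over 5 inhibitors).
profiles = {
    "KIN_A": [9.1, 7.2, 5.5, 3.0, 1.2],
    "KIN_B": [8.7, 7.9, 5.1, 2.8, 1.5],   # same ranking as KIN_A
    "KIN_C": [1.0, 2.9, 5.2, 7.6, 9.3],   # reversed ranking
}

def profile_similarity(a, b):
    """Kendall rank correlation between two binding profiles:
    +1 for identical rankings, -1 for fully reversed rankings."""
    tau, _ = kendalltau(a, b)
    return tau

sim_ab = profile_similarity(profiles["KIN_A"], profiles["KIN_B"])
sim_ac = profile_similarity(profiles["KIN_A"], profiles["KIN_C"])
```

Kinases whose pairwise τ values are high would fall into the same functional cluster, flagging candidates for a shared multi-functional inhibitor.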
886

Biological and clinical data integration and its applications in healthcare

Hagen, Matthew 07 January 2016 (has links)
Answers to the most complex biological questions are rarely determined solely from the experimental evidence; they require subsequent analysis of many data sources that are often heterogeneous. Most biological data repositories focus on providing only one particular type of data, such as sequences, molecular interactions, protein structures, or gene expression, so researchers must often visit several different databases to answer one scientific question. It is essential to develop efficient and seamless strategies to integrate disparate biological data sources in order to facilitate the discovery of novel associations and validate existing hypotheses. This thesis presents the design and development of different integration strategies for biological and clinical systems. The BioSPIDA system is a data warehousing solution that integrates many NCBI databases and other biological sources on protein sequences, protein domains, and biological pathways. It utilizes a universal parser, facilitating integration without developing separate source code for each data site. This enables users to execute fine-grained queries that can filter genes by their protein interactions, gene expression, functional annotation, and protein domain representation. Relational databases can quickly return filtered answers to research questions, but they are not the most suitable solution in all cases. Clinical patients and genes are typically annotated with concepts from hierarchical ontologies, and the performance of relational databases weakens considerably when traversing and representing graph structures. This thesis illustrates when relational databases are most suitable and compares performance benchmarks of semantic web technologies and graph databases for comparing ontological concepts. Several approaches to analyzing the integrated data are discussed to demonstrate the advantages over dependence on remote data centers.
Intensive care patients are prioritized by their length of stay, and their severity class is estimated from their diagnosis, to help minimize wait time and preferentially treat patients by their condition. In a separate study, semantic clustering of patients is conducted by integrating a clinical database and a medical ontology to help identify multi-morbidity patterns. In the biological area, gene pathways, protein interaction networks, and functional annotation are integrated to help predict and prioritize candidate disease genes. This thesis presents the results generated from each project by utilizing a local repository of genes, functional annotations, protein interactions, clinical patients, and medical ontologies.
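Semantic clustering over a hierarchical ontology can be sketched with an ancestor-overlap similarity between concepts. The is-a hierarchy below is a hypothetical toy, not an actual medical ontology, and Jaccard overlap of ancestor sets is just one simple choice of semantic similarity.

```python
# Toy is-a ontology (hypothetical concept names).
parents = {
    "type1_diabetes": ["diabetes"],
    "type2_diabetes": ["diabetes"],
    "diabetes": ["metabolic_disorder"],
    "metabolic_disorder": ["disease"],
    "hypertension": ["vascular_disorder"],
    "vascular_disorder": ["disease"],
    "disease": [],
}

def ancestors(c):
    """The concept plus all of its transitive is-a ancestors."""
    out = {c}
    for p in parents[c]:
        out |= ancestors(p)
    return out

def concept_similarity(a, b):
    """Jaccard overlap of ancestor sets: siblings under a shared
    parent score higher than concepts in distant branches."""
    sa, sb = ancestors(a), ancestors(b)
    return len(sa & sb) / len(sa | sb)
```

Pairwise similarities like this between patients' diagnosis concepts give the distance matrix a clustering algorithm needs to surface multi-morbidity patterns.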
887

The impact of prior experience on acquisition behaviour and performance : an integrated examination of corporate acquisitions in the USA and UK

Dionne, Steven Scott January 2008 (has links)
The objective of the thesis is to advance the concept of learning by explicating the mechanisms contributing to knowledge accumulation and its transfer to new situations. On the basis of 44 case studies, the framework is refined to accurately capture the unique features and outcomes of experiential knowledge in acquisitions. Feedback from the performance of prior acquisitions was found to enrich representations of action-outcome linkages and to modify procedures in search and valuation. Inferential transfer, though, depended on similar kinds of features emerging in subsequent decisions. Outcomes therefore reflected the integration of feedback processes and similarity judgments. From the case studies, a set of hypotheses was developed and their plausibility tested using another data set on the acquisitions of 687 managers. The research finds that the performance of prior decisions and the similarity to prior experiences materially impact behaviour. Poor performance in prior, similar acquisitions led to a reduction in subsequent risk behaviour, illustrated by the extent of risk management and by the lessening of commitment to specific transactions. The impact of performance feedback was also evident in the similarity of choice to prior experiences. The results illustrate that although feedback shapes perceptions of likelihood and expected value, similarity judgments moderate the impact of prior performance on behaviour. Given the impact on acquisition behaviour, the research also illustrates that prior experiences do not necessarily improve performance. Adaptation from prior failures was not unambiguously linked to positive returns, suggesting limitations of feedback mechanisms. Rather, the extent and similarity of acquisition experience led to a reduction in the variability of performance. By providing a framework for selecting planning procedures, greater experience tended to reduce surprises post-acquisition.
888

Extraction et reconnaissance de primitives dans les façades de Paris à l'aide d'appariement de graphes / Extraction and recognition of object in the facades of Paris using graph matching

Haugeard, Jean-emmanuel 17 December 2010 (has links)
Cette dernière décennie, la modélisation des villes 3D est devenue l'un des enjeux de la recherche multimédia et un axe important en reconnaissance d'objets. Dans cette thèse nous nous sommes intéressés à localiser différentes primitives, plus particulièrement les fenêtres, dans les façades de Paris. Dans un premier temps, nous présentons une analyse des façades et des différentes propriétés des fenêtres. Nous en déduisons et proposons ensuite un algorithme capable d'extraire automatiquement des hypothèses de fenêtres. Dans une deuxième partie, nous abordons l'extraction et la reconnaissance des primitives à l'aide d'appariement de graphes de contours. En effet une image de contours est lisible par l'oeil humain qui effectue un groupement perceptuel et distingue les entités présentes dans la scène. C'est ce mécanisme que nous avons cherché à reproduire. L'image est représentée sous la forme d'un graphe d'adjacence de segments de contours, valué par des informations d'orientation et de proximité des segments de contours. Pour la mise en correspondance inexacte des graphes, nous proposons plusieurs variantes d'une nouvelle similarité basée sur des ensembles de chemins tracés sur les graphes, capables d'effectuer les groupements des contours et robustes aux changements d'échelle. La similarité entre chemins prend en compte la similarité des ensembles de segments de contours et la similarité des régions définies par ces chemins. La sélection des images d'une base contenant un objet particulier s'effectue à l'aide d'un classifieur SVM ou kppv. La localisation des objets dans l'image utilise un système de vote à partir des chemins sélectionnés par l'algorithme d'appariement. / Over the last decade, 3D city modeling has become one of the challenges of multimedia search and an important focus in object recognition. In this thesis we are interested in locating various primitives, especially windows, in the facades of Paris.
First, we present an analysis of facade and window properties. We then propose an algorithm able to automatically extract window candidates. In the second part, we discuss the extraction and recognition of primitives using graph matching of contours. Indeed, an image of contours is readable by the human eye, which performs perceptual grouping and distinguishes the entities present in the scene. It is this mechanism that we have tried to replicate. The image is represented as an adjacency graph of contour segments, weighted by orientation and proximity information of the contour segments. For the inexact matching of graphs, we propose several variants of a new similarity based on sets of paths traced on the graphs, able to group contours and robust to scale changes. The similarity between paths takes into account the similarity of the sets of contour segments and the similarity of the regions defined by these paths. The selection of images from a database containing a particular object is done using an SVM or KNN classifier. Objects are localized in the image using a voting scheme over the paths selected by the matching algorithm.
889

Regions, technological interdependence and growth in Europe

Fischer, Manfred M. January 2009 (has links) (PDF)
This paper presents a theoretical neoclassical growth model with two kinds of capital, and technological interdependence among regions. Technological interdependence is assumed to operate through spatial externalities caused by disembodied knowledge diffusion between technologically similar regions. The transition from theory to econometrics yields a reduced-form empirical model that in the spatial econometrics literature is known as the spatial Durbin model. Technological dependence between regions is formulated by a connectivity matrix that measures closeness of regions in a technological space spanned by 120 distinct technological fields. We use a system of 158 regions across 14 European countries over the period from 1995 to 2004 to empirically test the model. The paper illustrates the importance of an impact-based model interpretation, in terms of the LeSage and Pace (2009) approach, to correctly quantify the magnitude of spillover effects and to avoid incorrect inferences about the presence or absence of significant capital externalities among technologically similar regions. (author's abstract)
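The impact-based interpretation can be sketched for a single regressor: in a spatial Durbin model y = ρWy + Xβ + WXθ + ε, the LeSage-Pace impact matrix is S = (I − ρW)⁻¹(βI + θW), with the direct effect as its average diagonal element and the indirect (spillover) effect as the average row sum minus the direct effect. The tiny connectivity matrix and coefficient values below are hypothetical.

```python
import numpy as np

def sdm_impacts(W, rho, beta, theta):
    """Direct and indirect (spillover) impacts for a single-regressor
    spatial Durbin model y = rho*W*y + x*beta + W*x*theta + eps."""
    n = W.shape[0]
    S = np.linalg.inv(np.eye(n) - rho * W) @ (beta * np.eye(n) + theta * W)
    direct = np.trace(S) / n        # average own-region effect
    total = S.sum() / n             # average total effect
    return direct, total - direct   # indirect = total - direct

# Hypothetical row-standardized connectivity among 3 regions.
W = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
direct, indirect = sdm_impacts(W, rho=0.3, beta=1.0, theta=0.2)
```

With ρ = θ = 0 the model collapses to ordinary regression: the direct effect equals β and the spillover vanishes, which is a useful sanity check.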
890

AURA : a hybrid approach to identify framework evolution

Wu, Wei 02 1900 (has links)
Les cadriciels et les bibliothèques sont indispensables aux systèmes logiciels d'aujourd'hui. Quand ils évoluent, il est souvent fastidieux et coûteux pour les développeurs de faire la mise à jour de leur code. Par conséquent, des approches ont été proposées pour aider les développeurs à migrer leur code. Généralement, ces approches ne peuvent identifier automatiquement les règles de modification une-remplacée-par-plusieurs méthodes et plusieurs-remplacées-par-une méthode. De plus, elles font souvent un compromis entre rappel et précision dans leurs résultats en utilisant un ou plusieurs seuils expérimentaux. Nous présentons AURA (AUtomatic change Rule Assistant), une nouvelle approche hybride qui combine call dependency analysis et text similarity analysis pour surmonter ces limitations. Nous avons implanté AURA en Java et comparé ses résultats sur cinq cadriciels avec trois approches précédentes par Dagenais et Robillard, M. Kim et al., et Schäfer et al. Les résultats de cette comparaison montrent que, en moyenne, le rappel de AURA est 53,07% plus que celui des autres approches avec une précision similaire (0,10% en moins). / Software frameworks and libraries are indispensable to today's software systems. As they evolve, it is often time-consuming and costly for developers to keep their code up to date. Consequently, approaches have been proposed to help developers migrate their code. Usually, these approaches cannot automatically identify change rules for one-replaced-by-many and many-replaced-by-one methods, and they trade off recall for higher precision using one or more experimentally evaluated thresholds. We introduce AURA (AUtomatic change Rule Assistant), a novel hybrid approach that combines call dependency and text similarity analyses to overcome these limitations. We implemented AURA in Java and compared its results on five frameworks with those of three previous approaches by Dagenais and Robillard, M. Kim et al., and Schäfer et al. The comparison shows that, on average, the recall of AURA is 53.07% higher while its precision is similar (0.10% lower).
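One half of AURA's hybrid signal — text similarity between old and new method signatures — can be sketched with a simple lexical ratio. This is a hedged stand-in: the API names below are hypothetical, and the actual approach combines this signal with call dependency analysis rather than relying on name similarity alone.

```python
import difflib

def name_similarity(old, new):
    """Lexical similarity between two method signatures in [0, 1]."""
    return difflib.SequenceMatcher(None, old, new).ratio()

# Hypothetical framework evolution: two methods removed, two added.
old_api = ["Widget.getSize()", "Widget.paint(Graphics)"]
new_api = ["Widget.getBounds()", "Widget.paintComponent(Graphics)"]

# Candidate change rules: map each removed method to the most
# lexically similar replacement.
rules = {o: max(new_api, key=lambda n: name_similarity(o, n))
         for o in old_api}
```

In the real setting, candidates ranked this way would be confirmed or rejected by checking whether callers of the old method now call the proposed replacement.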
