51

Applications of Knowledge Discovery in Quality Registries - Predicting Recurrence of Breast Cancer and Analyzing Non-compliance with a Clinical Guideline

Razavi, Amir Reza January 2007 (has links)
In medicine, data are produced from different sources and continuously stored in data repositories. Examples of these growing databases are quality registries. In Sweden, there are many cancer registries where data on cancer patients are gathered and recorded, used mainly for reporting survival analyses to high-level health authorities. In this thesis, a breast cancer quality registry operating in the south-east of Sweden is used as the data source for newer analytical techniques, i.e. data mining as part of the knowledge discovery in databases (KDD) methodology. Analyses are performed to sift through these data in order to find interesting information and hidden knowledge. KDD consists of multiple steps, starting with gathering data from different sources and preparing them in data pre-processing stages prior to data mining. The data were cleaned of outliers and noise, and missing values were handled. A proper subset of the data was then chosen by canonical correlation analysis (CCA) in a dimensionality reduction step. This technique was chosen because there were multiple outcomes and the variables had complex relationships to one another. After the data were prepared, they were analyzed with a data mining method. Decision tree induction, a simple and efficient method, was used to mine the data. To show the benefits of proper data pre-processing, results from data mining with pre-processing of the data were compared with results from data mining without it. The comparison showed that data pre-processing yields a more compact model with better performance in predicting the recurrence of cancer. An important part of knowledge discovery in medicine is to increase the involvement of medical experts in the process. This starts with enquiring about current problems in their field, which leads to finding areas where computer support can be helpful.
The experts can suggest potentially important variables and should then approve and validate new patterns or knowledge as predictive or descriptive models. If the performance of a model can be shown to be comparable to that of domain experts, it is more likely that the model will be used to support physicians in their daily decision-making. In this thesis, we validated the model by comparing predictions made by data mining with those made by domain experts, without finding any significant difference between them. Breast cancer patients who are treated with mastectomy are recommended to receive radiotherapy. This treatment is called postmastectomy radiotherapy (PMRT), and there is a guideline for prescribing it. A history of this treatment is stored in breast cancer registries. We analyzed these datasets using rules from a clinical guideline and identified cases that had not been treated according to the PMRT guideline. Data mining revealed some patterns of non-compliance with the PMRT guideline, and further analysis revealed some reasons for the non-compliance. These patterns were then compared with reasons acquired from manual inspection of patient records. The comparisons showed that patterns resulting from data mining were limited to the variables stored in the registry; a prerequisite for better results is the availability of comprehensive datasets. Medicine can take advantage of the KDD methodology in different ways. The main advantage is being able to reuse information and explore hidden knowledge that can be obtained using advanced analysis techniques. The results depend on good collaboration between medical informaticians and domain experts and on the availability of high-quality data.
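The pipeline described above, pre-processing with dimensionality reduction followed by decision tree induction, can be sketched as follows. This is an illustrative example on synthetic data, not the registry data; the thesis used CCA for the reduction step, for which simple univariate feature selection stands in here.

```python
# Hypothetical sketch: comparing decision trees trained with and without a
# dimensionality-reduction pre-processing step. Data are synthetic; the
# feature-selection method is a stand-in for the CCA step in the thesis.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectKBest, f_classif

# Synthetic stand-in for registry data: 500 patients, 30 variables,
# of which only 8 are informative for recurrence.
X, y = make_classification(n_samples=500, n_features=30, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Without pre-processing: fit the tree on all raw variables.
raw_tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

# With pre-processing: keep only the most relevant variables first.
selector = SelectKBest(f_classif, k=8).fit(X_tr, y_tr)
sel_tree = DecisionTreeClassifier(random_state=0).fit(
    selector.transform(X_tr), y_tr)

# A more compact tree with comparable or better held-out performance is
# the pattern the thesis reports for proper pre-processing.
print("raw tree nodes:", raw_tree.tree_.node_count)
print("pre-processed tree nodes:", sel_tree.tree_.node_count)
```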
52

VLSI Implementation of Digital Signal Processing Algorithms for MIMO Detection and Channel Pre-processing

Patel, Dimpesh 16 September 2011 (has links)
The efficient high-throughput VLSI implementation of soft-output MIMO detectors for high-order constellations and large antenna configurations has been a major challenge in the literature. This thesis introduces a novel soft-output K-Best scheme that improves BER performance and significantly reduces computational complexity through three major improvements. It also presents an area- and power-efficient VLSI implementation of a 4x4 64-QAM soft K-Best MIMO detector that attains the highest detection throughput of 2 Gbps and the second-lowest energy/bit reported in the literature, fulfilling the aggressive requirements of emerging 4G standards such as IEEE 802.16m and LTE-Advanced. A low-complexity and highly parallel algorithm for QR decomposition, an essential channel pre-processing task, is also developed that uses 2D, Householder 3D, and 4D Givens rotations. Test results for the QRD chip, fabricated in 0.13 µm CMOS, show that it attains the lowest reported latency of 144 ns and the highest QR processing efficiency.
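The channel pre-processing step referred to above factors the channel matrix H as H = QR. A minimal textbook sketch of QR decomposition by real 2D Givens rotations, the building block that the thesis extends with Householder 3D and 4D rotations, is shown below; it illustrates the numerical technique only, not the VLSI architecture.

```python
# Textbook QR decomposition by 2-D Givens rotations: zero the subdiagonal
# entries of A one at a time with plane rotations, accumulating Q.
import numpy as np

def givens_qr(A):
    """Return Q, R with A = Q @ R and R upper triangular."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for j in range(n):
        for i in range(m - 1, j, -1):
            a, b = R[i - 1, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            G = np.eye(m)
            G[[i - 1, i - 1, i, i], [i - 1, i, i - 1, i]] = [c, s, -s, c]
            R = G @ R          # rotate rows i-1 and i to zero R[i, j]
            Q = Q @ G.T        # accumulate the orthogonal factor
    return Q, R

H = np.random.default_rng(1).normal(size=(4, 4))  # stand-in channel matrix
Q, R = givens_qr(H)
print(np.allclose(Q @ R, H), np.allclose(np.tril(R, -1), 0))  # True True
```

Each rotation touches only two rows, which is what makes Givens-based QRD amenable to the parallel, pipelined hardware the thesis targets.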
54

Using dynamic time warping for multi-sensor fusion

Ko, Ming Hsiao January 2009 (has links)
Fusion is a fundamental human process that occurs at every level, from the sense organs, where visual and auditory information is received through the eyes and ears, to the highest levels of decision making, where the brain fuses that information to reach decisions. Multi-sensor data fusion is concerned with gaining information from multiple sensors by fusing across raw data, features, or decisions. Traditional frameworks for multi-sensor data fusion only address fusion at specific points in time. However, many real-world situations change over time. When a multi-sensor system is used for situation awareness, it is useful not only to know the state or event of the situation at a point in time but, more importantly, to understand the causalities of those states or events changing over time.

Hence, we propose a multi-agent framework for temporal fusion, which emphasises the time dimension of the fusion process, that is, the fusion of multi-sensor data or events derived over a period of time. The proposed multi-agent framework has three major layers: hardware, agents, and users. Three fusion architectures, centralized, hierarchical, and distributed, are available for organising the group of agents. The temporal fusion process of the proposed framework is elaborated using the information graph. Finally, the core of the proposed framework, the Dynamic Time Warping (DTW) temporal fusion agent, is described in detail.

Fusing multisensory data over a period of time is a challenging task, since the data to be fused consist of complex sequences that are multi-dimensional, multimodal, interacting, and time-varying in nature. Performing temporal fusion efficiently in real time is a further challenge owing to the large amount of data to be fused.

To address these issues, we propose a DTW temporal fusion agent that includes four major modules: data pre-processing, DTW recogniser, class templates, and decision making. The DTW recogniser is extended in various ways to deal with the variability of multimodal sequences acquired from multiple heterogeneous sensors, the problem of unknown start and end points, sequences of the same class that consequently differ in length locally and/or globally, and the challenges of online temporal fusion.

We evaluate the performance of the proposed DTW temporal fusion agent on two real-world datasets: 1) accelerometer data acquired from performing two hand gestures, and 2) a benchmark dataset acquired from carrying a mobile device and performing pre-defined user scenarios. Performance results of the DTW-based system are compared with those of a Hidden Markov Model (HMM) based system. The experimental results from both datasets demonstrate that the proposed DTW temporal fusion agent outperforms HMM-based systems and can perform online temporal fusion efficiently and accurately in real time.
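The alignment step at the heart of the DTW recogniser can be sketched as the classic dynamic-programming distance below. The sequences are illustrative 1-D toy data, not the thesis's accelerometer recordings.

```python
# Minimal dynamic time warping: a DP table where each cell holds the
# cheapest cost of aligning the prefixes x[:i] and y[:j].
import numpy as np

def dtw_distance(x, y):
    """Classic O(len(x)*len(y)) DTW with absolute-difference local cost."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # extend the cheapest of: insertion, deletion, or match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two 'gestures' of different lengths but the same shape align for free,
# which is exactly why DTW suits sequences that vary in length.
a = [0, 1, 2, 3, 2, 1, 0]
b = [0, 0, 1, 2, 3, 3, 2, 1, 0]
print(dtw_distance(a, b))        # → 0.0 (shapes align exactly)
print(dtw_distance(a, [3] * 7))  # → 12.0 (dissimilar sequence costs more)
```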
55

Identification and classification of new psychoactive substances using Raman spectroscopy and chemometrics

Guirguis, Amira January 2017 (has links)
The sheer number, continuous emergence, heterogeneity, and wide chemical and structural diversity of New Psychoactive Substance (NPS) products are factors exploited by illicit drug designers to obscure the detection of these compounds. Despite advances in the analytical techniques currently used by forensic and toxicological scientists to enable the identification of NPS, the lack of a priori knowledge of sample content makes it very challenging to detect an 'unknown' substance. The work presented in this thesis serves as a proof of concept, combining similarity studies, Raman spectroscopy and chemometrics, underpinned by robust pre-processing methods, for the identification of existing or newly emerging NPS. It demonstrates that Raman spectroscopy, in conjunction with a 'representative' NPS Raman database and chemometric techniques, has the potential to rapidly and non-destructively classify NPS according to their chemical scaffolds. The work also demonstrates the potential to indicate purity in formulations typical of those purchased by end users, i.e. 'street-like' mixtures. Five models were developed, and three of these provided insight into the identification and classification of NPS depending on their purity: the 'NPS and non-NPS/benchtop' model, the 'NPS reference standards/handheld' model and the 'NPS and non-NPS/handheld' model. In the 'NPS and non-NPS/benchtop' model (laser λex = 785 nm), NPS internet samples were projected onto a PCA model derived from a Raman database comprising 'representative' NPS and cutting agent/adulterant reference standards. This proved the most successful in suggesting the likely chemical scaffolds for NPS present in samples bought from the internet. In the 'NPS reference standards/handheld' model (laser λex = 1064 nm), NPS reference standards were projected onto a PCA model derived from a Raman database comprising 'representative' NPS.
This was the most successful of the three models with respect to the accurate identification of pure NPS, suggesting chemical scaffolds in 89% of samples compared with 76% obtained with the benchtop instrument, which generally had higher fluorescent backgrounds. In the 'NPS and non-NPS/handheld' model (laser λex = 1064 nm), NPS internet samples were projected onto a PCA model derived from a Raman database comprising 'representative' NPS and cutting agent/adulterant reference standards. This was the most successful in differentiating between NPS internet samples depending on their purity. In all models, the main challenges for the identification of NPS were spectra displaying high fluorescent backgrounds and low purity profiles. The 'first pass' matching identification of NPS internet samples on a handheld platform was improved to ~50% using a 1064 nm laser source because of the reduced fluorescence at this wavelength. We outline limitations of the handheld platform that may have added to the problems with appropriate identification of NPS in complex mixtures. However, the developed models enabled the appropriate selection of Raman signals crucial for the identification of NPS via data reduction, and the extraction of important patterns from noisy and/or corrupt data. The models constitute a significant contribution to this field with respect to suggesting the likely chemical scaffold of an 'unknown' molecule. This insight may accelerate the screening of newly emerging NPS in complex matrices by assigning them to a structurally similar known molecule (supercluster/cluster) or to a substance from the same EMCDDA/EDND class of known compounds. Critical challenges in instrumentation, chemometrics, and the complexity of samples have been identified and described.
As a result, future work should focus on: optimising the pre-processing of Raman data collected with a handheld platform and a 1064 nm laser λex; and optimising the 'representative' database by including other properties and descriptors of existing NPS.
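The projection idea used throughout the models above, fitting PCA on a reference spectral library and projecting an unknown sample into that space to find its nearest class, can be sketched as follows. The spectra here are synthetic toy curves, not Raman measurements, and the class names are placeholders.

```python
# Hypothetical sketch: fit a PCA model on a 'reference' spectral library,
# project a query spectrum into its score space, and report the nearest
# reference class. All spectra are synthetic stand-ins.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
wavenumbers = np.linspace(200, 1800, 400)

def fake_spectrum(peak):
    """Toy spectrum: one Gaussian band plus a little noise."""
    return (np.exp(-((wavenumbers - peak) / 30) ** 2)
            + 0.05 * rng.normal(size=wavenumbers.size))

# Reference library: two 'scaffold' classes with distinct band positions.
library = np.array([fake_spectrum(p) for p in [600] * 5 + [1200] * 5])
labels = ["class_A"] * 5 + ["class_B"] * 5

pca = PCA(n_components=2).fit(library)
scores = pca.transform(library)

# Project an 'unknown' sample and find the closest library spectrum.
query = pca.transform(fake_spectrum(1195)[None, :])
nearest = labels[int(np.argmin(np.linalg.norm(scores - query, axis=1)))]
print(nearest)
```

The data-reduction benefit the abstract mentions is visible here: classification happens in the 2-component score space rather than over 400 raw spectral channels.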
56

Redução de dimensionalidade usando agrupamento e discretização ponderada para a recuperação de imagens por conteúdo / Dimensionality Reduction Using Clustering and Weighted Discretization for Content-Based Image Retrieval

Pirolla, Francisco Rocha 19 November 2012 (has links)
Universidade Federal de São Carlos

This work proposes two new techniques for feature-vector pre-processing to improve CBIR and image classification systems: a feature transformation method based on the k-means clustering approach (Feature Transformation based on K-means, FTK) and a method of Weighted Feature Discretization (WFD). The FTK method employs the clustering principle of k-means to compact the feature vector space. The WFD method performs a weighted feature discretization, privileging the feature ranges that are most important for distinguishing images. The proposed methods were employed to pre-process the feature vectors in CBIR and classification approaches, comparing the results with pre-processing performed by PCA (a well-known feature transformation method) and with the original feature vectors. FTK reduced the feature vector size while improving query precision and classification accuracy; WFD improved query precision and classification accuracy; and the combination of WFD and FTK also improved both. These results are especially important when compared with those of PCA, which also reduces the feature vector size but yields a smaller increase in query precision and classification accuracy. Moreover, the proposed approaches have linear computational cost, whereas PCA has cubic computational cost. The results indicate that the proposed approaches are well suited to image feature-vector pre-processing, improving the overall quality of CBIR and classification systems.
In this work, we aim to reduce the semantic gap and the curse-of-dimensionality problems by presenting two feature-vector pre-processing techniques intended to improve content-based image retrieval and image classification systems: a dimensionality reduction method for the original feature vector based on the k-means algorithm, called FTK (Feature Transformation based on K-means), and a weighted feature discretization method that privileges the feature ranges most important for distinguishing images, called WFD (Weighted Feature Discretization). The proposed methods were used to pre-process the feature vectors in CBIR and classification approaches, comparing against the pre-processing performed by PCA and the results of the original feature vectors. The FTK algorithm reduced the feature vector size while improving query and classification precision. The WFD algorithm improved query and classification precision, and the combination of the two proposed algorithms did as well. These results are especially important when compared with those of the PCA method, which also reduces the feature vector size but gives a smaller increase in query precision and classification precision. Furthermore, the proposed techniques have linear computational cost, whereas PCA has cubic computational cost. The results indicate that the proposed methods are suitable approaches for pre-processing image feature vectors in CBIR and classification systems.
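The FTK idea, using k-means to compact the feature space, can be illustrated as below. The exact FTK formulation is not given in the abstract; this sketch is one plausible reading, clustering the original features (columns) and averaging each cluster into a single compacted feature, on synthetic data.

```python
# Illustrative reading of the FTK idea (assumption, not the thesis's exact
# algorithm): cluster the 32 original features into k groups with k-means,
# then replace each group by its mean to get a k-dimensional vector.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 32))              # 100 images, 32-dim vectors

k = 8                                       # target compacted dimension
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X.T)

# Each of the 32 features belongs to one of k clusters; the compacted
# vector holds the mean of the features in each cluster.
X_compact = np.stack(
    [X[:, km.labels_ == c].mean(axis=1) for c in range(k)], axis=1)
print(X.shape, "->", X_compact.shape)       # (100, 32) -> (100, 8)
```

The cost of one k-means pass is linear in the number of features, consistent with the linear computational cost claimed for FTK, versus the cubic cost of a full PCA eigendecomposition.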
57

Using regression analyses for the determination of protein structure from FTIR spectra

Wilcox, Kieaibi January 2014 (has links)
One of the challenges in the structural biology community is processing the wealth of protein data being produced today; computational tools have therefore been adopted to speed up and aid the understanding of protein structures, and hence protein functions. In this thesis, protein structure was investigated using Multivariate Analysis (MVA) and Fourier Transform Infrared (FTIR) spectroscopy, a form of vibrational spectroscopy. FTIR can identify the chemical bonds in a protein in solution and is rapid and easy to use; the spectra produced are then analysed qualitatively and quantitatively with MVA methods, which extract non-redundant but important information from the FTIR spectra. High-resolution techniques such as X-ray crystallography and NMR are not always applicable, and FTIR spectroscopy, a widely applicable analytical technique, has great potential to assist structure analysis for a wide range of proteins. FTIR spectral shape and band positions in the Amide I (the most intense absorption region), Amide II, and Amide III regions can be analysed computationally, using multivariate regression, to extract structural information. In this thesis, Partial Least Squares (PLS), a form of MVA, was used to correlate a matrix of FTIR spectra with their known secondary-structure motifs in order to determine their structures (in terms of 'helix', 'sheet', '310-helix', 'turns' and 'other' contents) for a selection of 84 non-redundant proteins. Analysis of the spectral range between 1480 and 1900 cm-1 (Amide I and Amide II regions) results in high prediction accuracies, as high as R2 = 0.96 for α-helix, 0.95 for β-sheet, 0.92 for 310-helix, 0.94 for turns and 0.90 for other; the Root Mean Square Error of Calibration (RMSEC) values are between 0.01 and 0.05, and the Root Mean Square Error of Prediction (RMSEP) values are between 0.02 and 0.12.
The Amide II region also gave results comparable to those of Amide I, especially for predictions of helix content. We also used Principal Component Analysis (PCA) to classify FTIR protein spectra into their natural groupings: proteins of mainly α-helical structure, proteins of mainly β-sheet structure, or proteins with mixed variations of α-helix and β-sheet. We were also able to differentiate between parallel and anti-parallel β-sheet. The developed methods were applied to characterize the secondary-structure conformational changes of an unfolding protein as a function of pH and to determine the Limit of Quantitation (LoQ). Our structural analyses compare highly favourably with those in the literature using machine learning techniques. Our work shows that FTIR spectra, in combination with multivariate regression analyses such as PCA and PLS, can accurately identify and quantify protein secondary structure. The models developed in this research are especially important in the pharmaceutical industry, where the therapeutic effect of drugs strongly depends on the stability of the physical or chemical structure of their protein targets; understanding protein structure is therefore very important in the biopharmaceutical world for drug production and formulation. There is also a new class of drugs that are themselves proteins, used to treat infectious and autoimmune diseases. The use of spectroscopy and multivariate regression analysis in the medical industry to identify disease biomarkers has brought new challenges to the bioinformatics field. These methods may also be applicable in food science and academia in general, for the investigation and elucidation of protein structure.
58

Segmentace obrazu jako výškové mapy / Image Segmentation Using Height Maps

Moučka, Milan January 2011 (has links)
This thesis deals with image segmentation of volumetric medical data. It describes a well-known watershed technique that has received much attention in the field of medical image processing. An application for a direct segmentation of 3D data is proposed and further implemented by using ITK and VTK toolkits. Several kinds of pre-processing steps used before the watershed method are presented and evaluated. The obtained results are further compared against manually annotated datasets by means of the F-Measure and discussed.
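The thesis implements the 3D watershed with ITK and VTK; as a compact stand-in, the sketch below runs the same principle on a synthetic 2D image with scikit-image: denoise, take the gradient magnitude as the height map, and flood from markers. It illustrates the technique only, not the thesis's application.

```python
# Marker-controlled watershed on a synthetic 2-D 'slice': pre-process with
# a Gaussian filter, build an elevation map from the Sobel gradient, then
# flood from background and object seeds.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

img = np.zeros((80, 80))
img[20:40, 20:40] = 1.0                     # two bright 'objects'
img[50:70, 45:70] = 1.0
img = ndi.gaussian_filter(img, 2)           # pre-processing step: denoise

elevation = sobel(img)                      # height map = gradient magnitude
markers = np.zeros_like(img, dtype=int)
markers[img < 0.1] = 1                      # background seed
markers[img > 0.6] = 2                      # object seed

labels = watershed(elevation, markers)
print(np.unique(labels))                    # [1 2]
```

Seeding from markers rather than every local minimum is the standard cure for the over-segmentation that plain watershed produces, which is why pre-processing matters so much for this method.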
59

Biometrické rozpoznání živosti prstu / Biometric fingerprint liveness detection

Jurek, Jakub January 2016 (has links)
This project deals with general biometrics issues, focusing on fingerprint biometrics, with a description of the dermal papillae and the principles of fingerprint sensors. It then addresses fingerprint liveness detection, including a description of detection methods, the features chosen for the author's detector, the fingerprint database used, and the author's image pre-processing algorithm. A neural-network classifier for liveness detection with the chosen features is then described, followed by a statistical evaluation of the chosen features and the detection results, and a description of the created graphical user interface.
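The final classification stage, a neural network deciding live versus spoof from pre-extracted features, can be sketched as below. The feature vectors and labels are synthetic placeholders, not the thesis's fingerprint database, and the two-Gaussian data model is an assumption for illustration.

```python
# Hedged sketch of a liveness classifier: a small MLP trained on toy
# feature vectors (stand-ins for texture/ridge statistics), where live and
# spoof samples are drawn around different means.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
live = rng.normal(loc=0.0, scale=1.0, size=(150, 6))
spoof = rng.normal(loc=1.5, scale=1.0, size=(150, 6))
X = np.vstack([live, spoof])
y = np.array([1] * 150 + [0] * 150)         # 1 = live, 0 = spoof

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 2))
```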
60

CREO SIMULATE: ROADMAP

Coronado, Jose 06 June 2017 (has links)
This presentation is intended to inform about the enhancements of Creo Simulate and the Roadmap for the future.
