21 |
Filtering Methods for Mass Spectrometry-based Peptide Identification Processes (October 2013)
Tandem mass spectrometry (MS/MS) is a powerful tool for identifying peptide sequences. In a typical experiment, incorrect peptide identifications may result from noise contained in the MS/MS spectra and from the low quality of the spectra. Filtering methods are widely used to remove the noise and improve the quality of the spectra before the subsequent spectrum identification process. However, existing filtering methods often use a fixed set of features with empirically assigned weights. These weights may not reflect the reality that the contribution (reflected by the weight) of each feature can vary from dataset to dataset. Therefore, filtering methods that can adapt to different datasets have the potential to improve peptide identification results.
This thesis proposes two adaptive filtering methods, denoising and quality assessment, both of which improve the efficiency and effectiveness of peptide identification. First, the denoising approach employs an adaptive method for picking signal peaks that is better suited to the datasets of interest. By applying the approach to two tandem mass spectra datasets, about 66% of peaks (likely noise peaks) can be removed. The number of peptides subsequently identified on those datasets increased by 14% and 23%, respectively, compared to previous work (Ding et al., 2009a). Second, the quality assessment method estimates the probability of a spectrum being of high quality from quality assessments of its individual features. The probabilities are estimated by solving a constrained optimization problem. Experimental results on two datasets illustrate that searching only the high-quality tandem spectra identified by this method saves about 56% and 62% of database searching time, respectively, while losing only 9% of the high-quality spectra.
Finally, the thesis suggests future research directions including feature selection and clustering of peptides.
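As a hedged illustration of the adaptive peak-picking idea described above, the sketch below keeps only peaks whose intensity stands out against an adaptively estimated local background in a toy tandem mass spectrum. The window width, threshold factor, data layout and function name are assumptions made for illustration; they are not the thesis's actual algorithm or parameters.

```python
from statistics import median

def denoise_spectrum(peaks, window=50.0, factor=2.0):
    """Keep peaks whose intensity stands out against the local background.

    peaks  : list of (m/z, intensity) tuples
    window : half-width of the m/z neighbourhood used as local context (assumed)
    factor : how far above the local median a peak must be to be kept (assumed)
    """
    kept = []
    for mz, inten in peaks:
        neighbours = [i for m, i in peaks if abs(m - mz) <= window]
        background = median(neighbours) if neighbours else 0.0
        if inten >= factor * background:
            kept.append((mz, inten))
    return kept

# Toy spectrum: a few strong signal peaks buried among low-intensity noise peaks.
spectrum = [(100.1, 5), (120.3, 90), (121.0, 4), (250.7, 6),
            (251.2, 110), (300.5, 3), (398.0, 4), (402.8, 75), (403.3, 5)]
print(denoise_spectrum(spectrum))  # only the three strong peaks survive
```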
|
22 |
Quality data extraction methodology based on the labeling of coffee leaves with nutritional deficiencies. Jungbluth, Adolfo; Yeng, Jon Li
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publisher. / Nutritional deficiency detection for coffee leaves is a task that is often undertaken manually by experts in the field known as agronomists. The process they follow to carry out this task is based on observing the different characteristics of the coffee leaves while relying on their own experience. Visual fatigue and human error in this empiric approach cause leaves to be incorrectly labeled, thus affecting the quality of the data obtained. In this context, different crowdsourcing approaches can be applied to enhance the quality of the extracted data. These approaches separately propose the use of voting systems, association rule filters and evolutive learning. In this paper, we extend the use of association rule filters and the evolutive approach by combining them in a methodology that enhances the quality of the data while guiding the users during the main stages of the data extraction tasks. Moreover, our methodology proposes a reward component to engage users and keep them motivated during the crowdsourcing tasks. Applying the proposed methodology in a case study on Peruvian coffee leaves yielded a dataset with 93.33% accuracy, based on 30 instances collected by 8 experts and evaluated by 2 agronomic engineers with a background in coffee leaves. This accuracy was higher than that obtained by independently implementing the evolutive feedback strategy (86.67%) or an empiric approach (70%) under the same conditions. / Peer reviewed
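As a hedged sketch of the voting component that such a crowdsourcing pipeline can build on, the code below aggregates the labels given by several contributors for each leaf image and keeps only instances whose majority label reaches a minimum agreement level. The agreement threshold, data layout and deficiency names are illustrative assumptions, not the paper's implementation.

```python
from collections import Counter

def aggregate_labels(annotations, min_agreement=0.75):
    """Majority-vote crowdsourced labels, keeping only high-agreement instances.

    annotations   : dict mapping instance id -> list of labels from contributors
    min_agreement : fraction of votes the winning label must reach (assumed value)
    """
    accepted = {}
    for instance_id, labels in annotations.items():
        winner, votes = Counter(labels).most_common(1)[0]
        if votes / len(labels) >= min_agreement:
            accepted[instance_id] = winner
    return accepted

votes = {
    "leaf_01": ["nitrogen", "nitrogen", "nitrogen", "potassium"],
    "leaf_02": ["boron", "potassium", "nitrogen", "boron"],
    "leaf_03": ["potassium", "potassium", "potassium", "potassium"],
}
print(aggregate_labels(votes))  # leaf_01 and leaf_03 pass; leaf_02 is too ambiguous
```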
|
23 |
Analysis and Performance Optimization of a GPGPU Implementation of Image Quality Assessment (IQA) Algorithm VSNR (January 2017)
abstract: Image processing has changed the way we store, view and share images. One important component of sharing images over networks is image compression. Lossy image compression techniques compromise the quality of images to reduce their size. To ensure that the distortion introduced by compression is not highly detectable by humans, the perceived quality of an image needs to be maintained above a certain threshold. Determining this threshold is best done using human subjects, but that is impractical in real-world scenarios. As a solution to this issue, image quality assessment (IQA) algorithms are used to automatically compute a fidelity score for an image.
However, the performance of IQA algorithms is often poor because of the complex statistical computations involved. General Purpose Graphics Processing Unit (GPGPU) programming is one of the solutions proposed to optimize the performance of these algorithms.
This thesis presents a Compute Unified Device Architecture (CUDA) based optimized implementation of the full-reference IQA algorithm Visual Signal-to-Noise Ratio (VSNR), which uses an M-level 2D Discrete Wavelet Transform (DWT) with 9/7 biorthogonal filters among other statistical computations. The presented implementation is tested on four different image quality databases containing images with multiple distortions and sizes ranging from 512 x 512 to 1600 x 1280. The CUDA implementation of VSNR shows a speedup of over 32x for 1600 x 1280 images, and the speedup is observed to scale with image size. The results show that the implementation is fast enough to apply VSNR to high-definition video at a frame rate of 60 fps. This work presents the optimizations achieved through the use of the GPU's constant memory and the reuse of allocated memory on the GPU. It also shows the performance improvement obtained with profiler-driven GPGPU development in CUDA. The presented implementation can be deployed in production combined with existing applications. / Dissertation/Thesis / Masters Thesis Computer Science 2017
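To make the wavelet stage concrete, the sketch below performs an M-level 2D DWT with the CDF 9/7 biorthogonal filters (available in PyWavelets as 'bior4.4') and computes a simple per-level signal-to-noise ratio between the subbands of a reference and a distorted image. It is only an illustration of the decomposition that VSNR builds on, not the VSNR algorithm itself or its CUDA implementation.

```python
import numpy as np
import pywt

def subband_snr(reference, distorted, levels=3, wavelet="bior4.4"):
    """Per-level SNR (dB) between the wavelet detail subbands of two images.

    Illustrative only: VSNR applies contrast-threshold and visual-masking models
    on top of a decomposition like this one.
    """
    ref_coeffs = pywt.wavedec2(reference, wavelet, level=levels)
    dis_coeffs = pywt.wavedec2(distorted, wavelet, level=levels)
    snrs = []
    # Skip the approximation band; compare the detail bands level by level.
    for ref_detail, dis_detail in zip(ref_coeffs[1:], dis_coeffs[1:]):
        signal = sum(np.sum(band ** 2) for band in ref_detail)
        noise = sum(np.sum((rb - db) ** 2) for rb, db in zip(ref_detail, dis_detail))
        snrs.append(10 * np.log10(signal / noise) if noise > 0 else float("inf"))
    return snrs

rng = np.random.default_rng(0)
reference = rng.random((512, 512))
distorted = reference + 0.05 * rng.standard_normal((512, 512))  # mild distortion
print(subband_snr(reference, distorted))
```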
|
24 |
Adaptive intra refresh for robust wireless multi-view video. Lawan, Sagir (January 2016)
Mobile wireless communication technology is a fast-developing field, and new mobile communication techniques and means become available every day. In this thesis, multi-view video (MVV) is also referred to as 3D video. The delivery of 3D video signals over wireless links is shaping both the telecommunication industry and academia. However, wireless channels are prone to high levels of bit and burst errors that severely degrade the quality of service (QoS). Noise along the wireless transmission path can introduce distortion or cause a compressed bitstream to lose vital information. Errors caused by noise progressively spread to subsequent frames and across multiple views due to prediction. Such errors may compel the receiver to pause momentarily and wait for the next INTRA picture before decoding can continue. This pausing of the video stream affects the user's Quality of Experience (QoE). Thus, an error resilience strategy is needed to protect the compressed bitstream against transmission errors. This thesis focuses on the error-resilience technique Adaptive Intra Refresh (AIR). The AIR method is developed to make compressed 3D video more robust to channel errors. The process involves the periodic injection of intra-coded macroblocks in a cyclic pattern using the H.264/AVC standard. The algorithm takes into account the individual features of each macroblock and the feedback information sent by the decoder about the channel condition in order to generate an MVV-AIR map. MVV-AIR map generation regulates the order of packet arrival and identifies the motion activity in each macroblock. Based on the level of motion activity, the MVV-AIR map classifies macroblocks as high or low motion. A proxy MVV-AIR transcoder is used to validate the efficiency of the generated MVV-AIR map. The MVV-AIR transcoding algorithm uses a spatial and view downscaling scheme to convert MVV to a single view. Various experimental results indicate that the proposed error-resilient MVV-AIR transcoder technique effectively improves the quality of reconstructed 3D video in wireless networks. A comparison of the MVV-AIR transcoder algorithm with some traditional error resilience techniques demonstrates that the MVV-AIR algorithm performs better in an error-prone channel. Simulation results revealed significant improvements in both objective and subjective quality. The scheme adds no computational complexity while the QoS and QoE requirements are still fully met.
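The sketch below illustrates the general idea behind adaptive intra refresh: macroblocks are classified by motion activity, high-motion blocks are refreshed with intra coding more often than low-motion ones, and a cyclic offset staggers the refresh across the frame. The thresholds, refresh periods and data layout are assumptions made for illustration, not the thesis's MVV-AIR map algorithm.

```python
def select_intra_macroblocks(motion_activity, frame_index,
                             motion_threshold=8.0,
                             high_motion_period=4, low_motion_period=16):
    """Return the indices of macroblocks to intra-code in the current frame.

    motion_activity  : per-macroblock motion measure (e.g. mean |MV|), assumed given
    frame_index      : current frame number, drives the cyclic refresh pattern
    motion_threshold : split between 'high' and 'low' motion blocks (assumed)
    *_period         : how often each class is refreshed, in frames (assumed)
    """
    intra_blocks = []
    for mb_index, activity in enumerate(motion_activity):
        period = high_motion_period if activity > motion_threshold else low_motion_period
        # Stagger the refresh so only a slice of blocks is intra-coded in each frame.
        if (frame_index + mb_index) % period == 0:
            intra_blocks.append(mb_index)
    return intra_blocks

activity = [1.2, 15.0, 0.4, 22.3, 3.1, 9.7, 0.9, 18.5]
for frame in range(4):
    print(frame, select_intra_macroblocks(activity, frame))
```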
|
25 |
Computerised GRBAS assessment of voice quality. Jalalinajafabadi, Farideh (January 2016)
Vocal cord vibration is the source of voiced phonemes in speech. Voice quality depends on the nature of this vibration. Vocal cords can be damaged by infection, neck or chest injury, tumours and more serious diseases such as laryngeal cancer. This kind of physical damage can cause loss of voice quality. To support the diagnosis of such conditions and also to monitor the effect of any treatment, voice quality assessment is required. Traditionally, this is done 'subjectively' by Speech and Language Therapists (SLTs) who, in Europe, use a well-known assessment approach called 'GRBAS'. GRBAS is an acronym for a five-dimensional scale of measurements of voice properties. The scale was originally devised and recommended by the Japanese Society of Logopedics and Phoniatrics and several European research publications. The properties are 'Grade', 'Roughness', 'Breathiness', 'Asthenia' and 'Strain'. An SLT listens to and assesses a person's voice while the person performs specific vocal maneuvers. The SLT is then required to record a discrete score for the voice quality in the range 0 to 3 for each GRBAS component. In requiring the services of trained SLTs, this subjective assessment makes the traditional GRBAS procedure expensive and time-consuming to administer. This thesis considers the possibility of using computer programs to perform objective assessments of voice quality conforming to the GRBAS scale. To do this, Digital Signal Processing (DSP) algorithms are required for measuring voice features that may indicate voice abnormality. The computer must be trained to convert DSP measurements to GRBAS scores, and a 'machine learning' approach has been adopted to achieve this. This research was made possible by the development, by the Manchester Royal Infirmary (MRI) Hospital Trust, of a 'speech database' with the participation of clinicians, SLTs, patients and controls. The participation of five SLT scorers allowed norms to be established for GRBAS scoring, which provided 'reference' data for the machine learning approach.
To support the scoring procedure carried out at MRI, a software package, referred to as the GRBAS Presentation and Scoring Package (GPSP), was developed for presenting voice recordings to each of the SLTs and recording their GRBAS scores. A means of assessing intra-scorer consistency was devised and built into this system. Also, the assessment of inter-scorer consistency was advanced by the invention of a new form of the 'Fleiss Kappa' which is applicable to ordinal as well as categorical scoring. The means of taking these assessments of scorer consistency into account when producing 'reference' GRBAS scores are presented in this thesis. Such reference scores are required for training the machine learning algorithms. The DSP algorithms required for feature measurements are generally well known and available as published or commercial software packages. However, an appraisal of these algorithms and the development of some DSP 'thesis software' was found to be necessary. Two 'machine learning' regression models have been developed for mapping the measured voice features to GRBAS scores. These are K Nearest Neighbor Regression (KNNR) and Multiple Linear Regression (MLR). Our research is based on sets of features, sets of data and prediction models that are different from the approaches in the current literature. The performance of the computerised system is evaluated against reference scores using a Normalised Root Mean Squared Error (NRMSE) measure. The performances of MLR and KNNR for objective prediction of GRBAS scores are compared and analysed 'with feature selection' and 'without feature selection'. It was found that MLR with feature selection was better than MLR without feature selection and KNNR with and without feature selection, for all five GRBAS components. It was also found that MLR with feature selection gives scores for 'Asthenia' and 'Strain' which are closer to the reference scores than the scores given by all five individual SLT scorers. The best objective score for 'Roughness' was closer to the reference than the scores given by two SLTs, roughly equal to the score of one SLT and worse than the other two SLT scores. The best objective scores for 'Breathiness' and 'Grade' were further from the reference scores than the scores produced by all five SLT scorers. However, the worst 'MLR with feature selection' result has a normalised RMS error which is only about 3% worse than the worst SLT scoring. The results obtained indicate that objective GRBAS measurements have the potential for further development towards a commercial product that may at least be useful in augmenting the subjective assessments of SLT scorers.
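A hedged sketch of the regression-and-evaluation step described above: multiple linear regression with a simple greedy forward feature selection, scored with a normalised RMSE against reference ratings on the 0-3 GRBAS scale. The synthetic features, selection strategy and parameter values are illustrative assumptions, not the thesis's feature set or exact pipeline.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def nrmse(y_true, y_pred):
    """Root-mean-squared error normalised by the 0-3 GRBAS score range."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2)) / 3.0

def forward_select(X, y, max_features=5):
    """Greedy forward selection by cross-validated R^2 (an illustrative strategy)."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < max_features:
        scores = [(np.mean(cross_val_score(LinearRegression(),
                                           X[:, selected + [j]], y, cv=5)), j)
                  for j in remaining]
        _, best_j = max(scores)
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# Synthetic stand-ins for DSP voice features and reference 'Grade' scores.
rng = np.random.default_rng(1)
X = rng.standard_normal((120, 12))
y = np.clip(1.5 + 2.0 * X[:, 0] - 1.0 * X[:, 3] + 0.3 * rng.standard_normal(120), 0, 3)

features = forward_select(X, y)
model = LinearRegression().fit(X[:, features], y)
print("selected features:", features)
print("NRMSE:", round(nrmse(y, model.predict(X[:, features])), 3))
```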
|
26 |
Quality assessment for register-based statistics - Results for the Austrian census 2011. Asamer, Eva-Maria; Astleithner, Franz; Cetkovic, Predrag; Humer, Stefan; Lenk, Manuela; Moser, Mathias; Rechta, Henrik (January 2016)
In 2011, Statistics Austria carried out its first register-based census. Advantages of using administrative data for statistical purposes are, among others, a reduced burden for respondents and lower costs for the National Statistical Institutes (NSI). However, new challenges arise, such as the need for a new approach to assessing the quality of this kind of data. Therefore, Statistics Austria developed a comprehensive standardized framework to evaluate data quality for register-based statistics. In this paper, we present the basic concept of this quality framework and provide detailed results from the quality evaluation of the Austrian census of 2011. More specifically, we derive a quality measure for each census attribute from four complementary hyperdimensions. The first three of these hyperdimensions address the documentation of data, the usability of records and an external data validation. The fourth hyperdimension focuses on the quality of data imputations. The proposed framework combines these different quality-related information sources for each attribute to form an overall quality indicator. This procedure makes it possible to track changes in quality during data processing and to compare the quality of different census generations.
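As a loose illustration of how quality information from several hyperdimensions might be rolled up into one indicator per attribute, the sketch below takes a weighted average of per-hyperdimension scores in [0, 1]. The weights, scores and attribute names are invented for illustration; the Statistics Austria framework defines its own measures and aggregation rules.

```python
def overall_quality(hyperdimension_scores, weights):
    """Weighted combination of per-hyperdimension quality scores (all in [0, 1])."""
    total_weight = sum(weights[h] for h in hyperdimension_scores)
    weighted_sum = sum(score * weights[h] for h, score in hyperdimension_scores.items())
    return weighted_sum / total_weight

# Illustrative weights for the four hyperdimensions named in the abstract.
weights = {"documentation": 0.2, "usability": 0.3,
           "external_validation": 0.3, "imputation": 0.2}

attributes = {
    "sex":        {"documentation": 0.98, "usability": 0.99,
                   "external_validation": 0.97, "imputation": 1.00},
    "occupation": {"documentation": 0.90, "usability": 0.82,
                   "external_validation": 0.75, "imputation": 0.88},
}

for name, scores in attributes.items():
    print(name, round(overall_quality(scores, weights), 3))
```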
|
27 |
Multi-modality quality assessment for unconstrained biometric samples. Liu, Xinwei (22 June 2018)
The aim of this research is to investigate multi-modality biometric image quality assessment methods for unconstrained samples. Studies in biometrics have noted the significance of sample quality for a recognition system or a comparison algorithm, because the performance of the biometric system depends mainly on the quality of the sample images. The need to assess the quality of multi-modality biometric samples increases with the requirement for high-accuracy multi-modality biometric systems. Following an introduction and background in biometrics and biometric sample quality, we introduce the concept of biometric sample quality assessment for multiple modalities. Recently established ISO/IEC quality standards for fingerprint, iris, and face are presented. In addition, sample quality assessment approaches designed specifically for contact-based and contactless fingerprint, near-infrared-based iris and visible wavelength iris, as well as face, are surveyed. Following the survey, approaches for the performance evaluation of biometric sample quality assessment methods are also investigated. Based on the knowledge gathered from the biometric sample quality assessment challenges, we propose a common framework for the assessment of multi-modality biometric image quality. We review the previous classification of image-based quality attributes for a single biometric modality and investigate which image-based attributes are common across modalities. We then select and re-define the most important image-based quality attributes for the common framework. In order to link these quality attributes to real biometric samples, we develop a new multi-modality biometric image quality database which has both high-quality sample images and degraded images for the contactless fingerprint, visible wavelength iris, and face modalities. The degradation types are based on the selected common image-based quality attributes. Another important aspect of the proposed common framework is the image quality metrics and their applications in biometrics. We first introduce and classify the existing image quality metrics and then conduct a brief survey of no-reference image quality metrics, which can be applied to biometric sample quality assessment. In addition, we investigate how no-reference image quality metrics have been used for the quality assessment of the fingerprint, iris, and face biometric modalities. Experiments for the performance evaluation of no-reference image quality metrics for visible wavelength face and iris modalities are conducted. The experimental results indicate that there are several no-reference image quality metrics that can assess the quality of both iris and face biometric samples. Lastly, we optimize the best-performing metric by re-training it; the re-trained image quality metric can provide better recognition performance than the original. Through the work carried out in this thesis we have shown the applicability of no-reference image quality metrics for the assessment of unconstrained multi-modality biometric samples.
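A minimal sketch of one generic no-reference quality cue of the kind surveyed in the thesis: sharpness estimated as the variance of the Laplacian, applied the same way to face, iris or fingerprint images. It is a single illustrative cue with an assumed threshold and hypothetical file names, not the thesis's metric set or its re-trained quality metric.

```python
import cv2

def sharpness_score(image_path):
    """No-reference sharpness cue: variance of the Laplacian of the grey image."""
    grey = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if grey is None:
        raise FileNotFoundError(image_path)
    return cv2.Laplacian(grey, cv2.CV_64F).var()

def is_acceptable(image_path, threshold=100.0):
    """Flag a sample as acceptable if it is sharp enough (the threshold is assumed)."""
    return sharpness_score(image_path) >= threshold

# Hypothetical sample paths for three modalities; replace with real files.
for path in ["face_sample.png", "iris_sample.png", "fingerprint_sample.png"]:
    print(path, is_acceptable(path))
```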
|
28 |
Assessing home economics coursework in senior secondary schools in Botswana. Leepile, Gosetsemang (07 June 2011)
The aim of this research was to explore how examiners achieve and maintain high-quality assessment during the marking and moderation of BGCSE (Botswana General Certificate of Secondary Education) Home Economics coursework in Botswana. In 2000, localization of the Cambridge Overseas School Certificate (COSC) to the Botswana General Certificate of Secondary Education (BGCSE) took place as per the recommendations of the Revised National Policy on Education (RNPE) document. This new certificate system, marked locally, allows for varied modes of assessment, with more emphasis being placed on continuous assessment. This also means that the assessment is school-based, with teachers centrally involved. As is procedure with this kind of assessment, it is subjected to moderation. However, implementation of this new assessment approach exposed, among other issues, challenges in establishing the dependability of teachers' assessments, a possible increase in teacher workload, and teachers' lack of expertise and confidence in undertaking the assessment scheme. This study, among other things, considers the forms of moderation used by the BGCSE to establish consistency in school-based assessment (SBA) and, in so doing, identifies that a dual form of moderation is used. The main research questions guiding this investigation were: (1) How are teachers and moderators trained so that they may be competent examiners? (2) How is quality assured during the marking of coursework? (3) How does the examining body, the Botswana Examinations Council (BEC), ensure that the examiners adhere to the quality control mechanisms? This was a qualitative study and the sources of data were semi-structured interviews, document analysis and the research journal. The eight respondents who participated in this study were Home Economics teachers, moderators from senior secondary schools and subject experts from the examining body, all non-randomly sampled from across the country. Purposive sampling was used based on the respondents' characteristics relevant to the research problem. Data were analyzed using thematic content analysis to describe the phenomenon under inquiry and obtain detailed data. Major findings revealed inconsistencies between teachers' and moderators' marks, and that even though there are procedures that underpin a high-quality assessment regime, there is little monitoring by the Botswana Examinations Council (BEC) to ensure adherence by the examiners. Other key concerns included examiners' dissatisfaction with training and inadequate official support and guidance to equip them as competent examiners in general. / Dissertation (MEd)--University of Pretoria, 2009. / Science, Mathematics and Technology Education / unrestricted
|
29 |
A Study in Computerized Translation Testing (CTT) for the Arabic Language. Kuhn, Amanda J. (11 June 2012)
Translation quality assessment remains pertinent in both translation theory and the industry. Specifically, assessing a target document's quality or a person's translation competence requires considerable time and money from governments, organizations and individuals. In response to this issue, this project builds on the ongoing research of Hague et al. (2012), who seek to determine the capabilities of a computerized translation test for the French-to-English and Spanish-to-English language pairs. Specifically, Hague et al. (2012) question whether a good score on a detect-and-correct style computerized translation test that is calculated by a computer also indicates a good score on a traditional full translation test that is calculated by hand. This project furthers that research by seeking to answer the same question for an Arabic-to-English language pair. The method involves testing individuals using two different styles of translation test and then comparing the results. The first is a detect-and-correct style test in which a subject is given a list of project specifications in the form of a translation brief, a source text passage and a corresponding target text passage with errors introduced throughout. The subject is expected to detect and fix the errors while leaving the rest of the text alone; this test is scored by an automated algorithm. The second is a traditional translation test in which a subject is given the same translation brief and a source text and is expected to produce an acceptable target text, which is subsequently scored by hand. Thereafter, various forms of analysis are used to determine the relationship between the scores of the two types of test. The results of this research do not strongly suggest that a high score on the detect-and-correct portion of the test indicates a high score on a hand-graded full translation test for the subject population used. However, this research still provides insight, especially concerning whether the detect-and-correct portion of the test actually measures translation competence and concerning second language acquisition (SLA) programs and their intentions. In addition, it provides insight into logistical issues in testing, such as the impact text difficulty and length may have on a detect-and-correct style test, as well as the negative impact that the American Translators Association (ATA) grading practices of weighting and capping errors can have on an experiment such as the one described in this research.
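A hedged sketch of the comparison at the heart of such a study: correlating automatically computed detect-and-correct scores with hand-graded full-translation scores for the same subjects. The paired scores below are invented illustrative numbers, and Pearson and Spearman correlations are a common analysis choice rather than necessarily the exact statistics used in the thesis.

```python
from scipy import stats

# Invented paired scores for ten hypothetical subjects (0-100 scale).
detect_and_correct = [78, 65, 90, 55, 83, 72, 60, 95, 70, 88]
hand_graded_full = [74, 60, 85, 62, 80, 70, 58, 91, 75, 84]

pearson_r, pearson_p = stats.pearsonr(detect_and_correct, hand_graded_full)
spearman_rho, spearman_p = stats.spearmanr(detect_and_correct, hand_graded_full)

print(f"Pearson r = {pearson_r:.2f} (p = {pearson_p:.3f})")
print(f"Spearman rho = {spearman_rho:.2f} (p = {spearman_p:.3f})")
```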
|
30 |
New methods for positional quality assessment and change analysis of shoreline features. Ali, Tarig Abdelgayoum (January 2003)
No description available.
|