61.
Changing Access: Building a Culture of Accessibility Within Normalized Technical Communication Practices. Huntsman, Sherena. 01 August 2019
As a field intricately connected to human experience and interaction, technical and professional communication (TPC) is historically, ethically, and practically tooled to address issues of equality, diversity, and access. While these important issues have not always been the focal point of TPC, the recent turn toward social justice has scholars asking critical questions about how users access information, how specific design practices may privilege some and disenfranchise others, and how we can be more inclusive across our communication practices. In this dissertation, I argue that it is within reach of TPC to address the specific problem of access—the gap between what we believe to be accessible and what is actually accessible—and to begin to change specific norms (beliefs, standards, guidelines, etc.) that guide our practices. We change norms, or the typical way we do things, by exposing them, disrupting them, and developing new, more inclusive practices. I argue that we can create new norms that are liberated from unjust assumptions of embodied ability and include accessibility as a normalized part of the design process.
62.
A Rapid Lipid-based Approach for Normalization of Quantum Dot-detected Biomarker Expression on Extracellular Vesicles in Complex Biological Samples. January 2019
Extracellular vesicles (EVs), particularly exosomes, are of considerable interest as tumor biomarkers since tumor-derived EVs contain a broad array of information about tumor pathophysiology, including its metabolic and metastatic status. However, current EV-based assays cannot distinguish whether a change in an EV biomarker signal reflects altered secretion of EVs carrying a constant level of that biomarker (as can occur in disease states such as cancer and inflammation), stable secretion of EVs with altered biomarker expression, or a combination of these two factors. This issue was addressed by developing a nanoparticle- and dye-based fluorescent immunoassay that can distinguish among these possibilities by normalizing EV biomarker level(s) to EV abundance, revealing the average expression level of the EV biomarker under observation. In this approach, EVs are captured from complex samples (e.g., serum), stained with a lipophilic dye, and hybridized with antibody-conjugated quantum dot probes for specific EV surface biomarkers. The EV dye signal is used to quantify EV abundance and normalize EV surface biomarker expression levels. EVs from malignant (PANC-1) and nonmalignant (HPNE) pancreatic cell lines exhibited similar staining, and probe-to-dye ratios did not change with EV abundance, allowing direct analysis of normalized EV biomarker expression without a separate EV quantification step. This EV biomarker normalization approach markedly improved the ability of serum levels of two pancreatic cancer biomarkers, EV EpCAM and EV EphA2, to discriminate pancreatic cancer patients from nonmalignant control subjects. The streamlined workflow and robust results of this assay are suitable for rapid translation to clinical applications, and its flexible design permits it to be adapted to quantitate other EV biomarkers simply by swapping the antibody-conjugated quantum dot probes for probes that recognize a different disease-specific EV biomarker. / Dissertation/Thesis / Doctoral Dissertation Biomedical Engineering 2019
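At its core the normalization step is a ratio: the quantum dot probe signal for a surface biomarker divided by the lipophilic dye signal that tracks overall EV abundance. The numpy sketch below illustrates only that arithmetic; the values and variable names are illustrative assumptions, not data or code from the dissertation.

```python
import numpy as np

# Hypothetical per-sample fluorescence readings from a ratio-based immunoassay:
# quantum dot probe signal for one EV surface biomarker, and lipophilic dye
# signal used as a proxy for total EV abundance. Values are illustrative only.
probe_signal = np.array([1200.0, 950.0, 2300.0, 400.0])   # e.g. EpCAM QD intensity
dye_signal   = np.array([5000.0, 4100.0, 9800.0, 1800.0]) # lipophilic dye intensity

# Dividing probe intensity by dye intensity yields an average per-EV expression
# estimate that is insensitive to how many EVs were captured from each sample.
normalized_expression = probe_signal / dye_signal

print(normalized_expression)  # probe-to-dye ratios, comparable across samples
```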
63.
Implications of Punctuation Mark Normalization on Text Retrieval. Kim, Eungi. 08 1900
This research investigated issues related to normalizing punctuation marks from a text retrieval perspective. A punctuation-centric approach was undertaken by exploring changes in meaning, whitespace, word retrievability, and other issues related to normalizing punctuation marks. To investigate punctuation normalization issues, various frequency counts of punctuation marks and punctuation patterns were conducted on text drawn from the Gutenberg Project archive and the Usenet Newsgroup archive. A number of useful punctuation mark types that could aid in analyzing punctuation marks were identified. This study identified two types of punctuation normalization procedures: (1) lexical-independent (LI) punctuation normalization and (2) lexical-oriented (LO) punctuation normalization. Using these two types of procedures, this study discovered various effects of punctuation normalization for different search query types. By analyzing the punctuation normalization problem in this manner, a wide range of issues was uncovered, such as the need to define different types of searching, to disambiguate the roles of punctuation marks, to normalize whitespace, and to index punctuated terms. This study concluded that to achieve the most positive effect in a text retrieval environment, normalizing punctuation marks should be based on an extensive, systematic analysis of punctuation marks, punctuation patterns, and their related factors. The results of this study indicate that there were many challenges due to the complexity of language. Further, this study recommends avoiding a simplistic approach to punctuation normalization.
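To make the LI/LO distinction concrete, the following Python sketch contrasts the two families of procedures. The specific contexts preserved by the LO variant (intra-word hyphens and decimal points) are illustrative assumptions, not the procedures defined in the dissertation.

```python
import re
import string

def normalize_li(text: str) -> str:
    """Lexical-independent (LI) normalization: every punctuation mark is
    replaced with whitespace, regardless of its lexical context."""
    table = str.maketrans({p: " " for p in string.punctuation})
    return re.sub(r"\s+", " ", text.translate(table)).strip()

def normalize_lo(text: str) -> str:
    """Lexical-oriented (LO) normalization (illustrative): punctuation that is
    part of a token's meaning (intra-word hyphens, decimal points) is kept,
    while other punctuation is replaced with whitespace."""
    text = re.sub(r"(?<=\w)-(?=\w)", "\x00", text)   # protect intra-word hyphens
    text = re.sub(r"(?<=\d)\.(?=\d)", "\x01", text)  # protect decimal points
    table = str.maketrans({p: " " for p in string.punctuation})
    text = text.translate(table)
    text = text.replace("\x00", "-").replace("\x01", ".")
    return re.sub(r"\s+", " ", text).strip()

sample = "State-of-the-art retrieval costs $3.50 per 1,000 queries -- or does it?"
print(normalize_li(sample))  # -> "State of the art retrieval costs 3 50 per 1 000 queries or does it"
print(normalize_lo(sample))  # -> "State-of-the-art retrieval costs 3.50 per 1 000 queries or does it"
```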
64.
Multicolor Underwater Imaging Techniques. Waggoner, Douglas Scott. January 2007
No description available.
65.
Normalizer: Augmenting Code Clone Detectors Using Source Code Normalization. Ly, Kevin. 01 March 2017
Code clones are duplicate fragments of code that perform the same task. As software code bases increase in size, the number of code clones also tends to increase. These code clones, possibly created through copy-and-paste or unintentional duplication of effort, increase maintenance cost over the lifespan of the software. Code clone detection tools exist to identify clones where a human search would prove infeasible; however, the quality of the clones found may vary. I demonstrate that the performance of such tools can be improved by normalizing the source code before use. I developed Normalizer, a tool that transforms C source code into normalized source code written as consistently as possible. By preserving the code's function while enforcing a strict format, Normalizer removes the variability of the programmer's style, so code clones may become easier for tools to detect regardless of how the code was written.

Reordering statements, removing useless code, and renaming identifiers are used to achieve normalized code. Normalizer was used to show that, across a small variety of code clone detection tools, more clones are found in Introduction to Computer Networks assignments when the source code is normalized than when the original source code is used.
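As an illustration of just the identifier-renaming transformation named above, the Python sketch below (not part of the actual Normalizer tool, which targets C source) maps every non-reserved identifier to a positional name so that clones differing only in naming become textually identical. The reserved-word list and the identifier regular expression are simplifying assumptions.

```python
import re

# A small set of C keywords and library names that should not be renamed.
RESERVED = {
    "int", "float", "double", "char", "void", "return", "if", "else",
    "for", "while", "do", "struct", "sizeof", "printf", "include",
}

def rename_identifiers(code: str) -> str:
    """Rewrite every non-reserved identifier to a canonical name (id0, id1, ...)
    in order of first appearance, so two clones that differ only in naming
    map to the same normalized text."""
    mapping = {}

    def canonical(match):
        name = match.group(0)
        if name in RESERVED:
            return name
        if name not in mapping:
            mapping[name] = f"id{len(mapping)}"
        return mapping[name]

    return re.sub(r"[A-Za-z_]\w*", canonical, code)

fragment_a = "int total = 0; for (int i = 0; i < n; i++) total += prices[i];"
fragment_b = "int acc = 0; for (int k = 0; k < len; k++) acc += cost[k];"

# After renaming, both fragments are textually identical, so a detector that
# missed them before normalization can now report them as an exact clone.
assert rename_identifiers(fragment_a) == rename_identifiers(fragment_b)
```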
66.
Normalization of Complex Mode Shapes by Truncation of the Alpha-Polynomial. Niranjan, Adityanarayan C. January 2015
No description available.
67.
A Study of Machine Learning Approaches for Biomedical Signal Processing. Shen, Minjie. 10 June 2021
The introduction of high-throughput molecular profiling technologies provides the capability of studying diverse biological systems at the molecular level. However, due to various limitations of measurement instruments, data preprocessing is often required in biomedical research. Improper preprocessing will have a negative impact on downstream analytics tasks. This thesis studies two important preprocessing topics: missing value imputation and between-sample normalization.
Missing data is a major issue in quantitative proteomics data analysis. While many methods have been developed for imputing missing values in high-throughput proteomics data, comparative assessment of the accuracy of existing methods remains inconclusive, mainly because the true missing mechanisms are complex and the existing evaluation methodologies are imperfect. Moreover, few studies have provided an outlook on current and future development.
We first report an assessment of eight representative methods collectively targeting three typical missing mechanisms. The selected methods are compared on both realistic simulation and real proteomics datasets, and the performance is evaluated using three quantitative measures. We then discuss fused regularization matrix factorization, a popular low-rank matrix factorization framework with similarity and/or biological regularization, which is extendable to integrating multi-omics data such as gene expressions or clinical variables. We further explore the potential application of convex analysis of mixtures, a biologically inspired latent variable modeling strategy, to missing value imputation. The preliminary results on proteomics data are provided together with an outlook into future development directions.
While a few winners emerged from our comparative assessment, data-driven evaluation of imputation methods is imperfect because performance is evaluated indirectly on artificially masked values rather than on authentic missing values. Imputation accuracy may also vary with signal intensity. Fused regularization matrix factorization provides a possibility of incorporating external information. Convex analysis of mixtures presents a biologically plausible new approach.
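The kind of indirect, mask-based evaluation just described can be summarized in a few lines of Python. The sketch below is a generic illustration under assumed conditions (a synthetic complete matrix, completely-at-random masking, a stock KNN imputer standing in for the eight methods compared, and NRMSE as the score); it does not reproduce the assessment workflow or the three quantitative measures used in the thesis.

```python
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(0)

# Toy "complete" protein-intensity matrix (proteins x samples), log-scale.
complete = rng.normal(loc=20.0, scale=2.0, size=(200, 12))

# Artificially mask 10% of the entries completely at random (MCAR); real
# proteomics data also show intensity-dependent missingness, which is exactly
# why this style of evaluation is imperfect.
mask = rng.random(complete.shape) < 0.10
observed = complete.copy()
observed[mask] = np.nan

# Impute with one representative general-purpose method.
imputed = KNNImputer(n_neighbors=5).fit_transform(observed)

# Score only the masked entries, e.g. with a normalized RMSE.
rmse = np.sqrt(np.mean((imputed[mask] - complete[mask]) ** 2))
nrmse = rmse / np.std(complete[mask])
print(f"NRMSE on masked entries: {nrmse:.3f}")
```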
Data normalization is essential to ensure accurate inference and comparability of gene expressions across samples or conditions. Ideally, gene expressions should be rescaled based on consistently expressed reference genes. However, for normalizing biologically diverse samples, the most commonly used reference genes have exhibited striking expression variability, and distribution-based approaches can be problematic when differentially expressed genes are significantly asymmetric.
We introduce a Cosine score based iterative normalization (Cosbin) strategy to normalize biologically diverse samples. The between-sample normalization is based on iteratively identified consistently expressed genes, where differentially expressed genes are sequentially eliminated according to scale-invariant Cosine scores.
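The following Python sketch renders that iterative idea schematically; it is not the published Cosbin implementation (which is distributed as R scripts), and the cosine score against the all-ones direction, the fraction of genes dropped per pass, and the number of passes are assumptions made for illustration.

```python
import numpy as np

def cosbin_like_normalize(counts, n_iter=5, drop_frac=0.2):
    """Schematic iterative between-sample normalization: rescale samples using
    genes that look consistently expressed, judged by the cosine between each
    gene's cross-sample profile and the all-ones (perfectly consistent) direction."""
    expr = np.asarray(counts, dtype=float)      # genes x samples
    genes = np.arange(expr.shape[0])            # candidate consistently expressed genes
    ones = np.ones(expr.shape[1]) / np.sqrt(expr.shape[1])

    for _ in range(n_iter):
        # Per-sample scale factors estimated from the current candidate set.
        scale = expr[genes].sum(axis=0)
        scale = scale / scale.mean()
        normalized = expr / scale

        # Scale-invariant cosine score of each candidate gene's profile.
        profiles = normalized[genes]
        norms = np.maximum(np.linalg.norm(profiles, axis=1), 1e-12)
        cos = (profiles @ ones) / norms

        # Sequentially eliminate the least consistent (likely DE) genes.
        keep = np.argsort(cos)[int(drop_frac * len(genes)):]
        genes = genes[np.sort(keep)]

    scale = expr[genes].sum(axis=0)
    scale = scale / scale.mean()
    return expr / scale, genes                  # normalized matrix, surviving reference genes
```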
We evaluate the performance of Cosbin and four other representative normalization methods (Total count, TMM/edgeR, DESeq2, DEGES/TCC) on both idealistic and realistic simulation data sets. Cosbin consistently outperforms the other methods across various performance criteria. Implemented in open-source R scripts and applicable to grouped or individual samples, the Cosbin tool will allow biologists to detect subtle yet important molecular signals across known or novel phenotypic groups. / Master of Science / Data preprocessing is often required due to various limitations of measurement instruments in biomedical research. This thesis studies two important preprocessing topics: missing value imputation and between-sample normalization.
Missing data is a major issue in quantitative proteomics data analysis. Imputation is the process of substituting estimates for missing values. We propose a more realistic assessment workflow that preserves the original data distribution, and then assess eight representative general-purpose imputation strategies. We explore two biologically inspired imputation approaches: fused regularization matrix factorization (FRMF) and convex analysis of mixtures (CAM) imputation. FRMF integrates external information such as clinical variables and multi-omics data into imputation, while CAM imputation incorporates biological assumptions. We show that the integration of biological information improves imputation performance.
Data normalization is required to ensure correct comparison. For gene expression data, between-sample normalization is needed. We propose a Cosine score based iterative normalization (Cosbin) strategy to normalize biologically diverse samples. We show that Cosbin significantly outperforms other methods in both ideal and realistic simulations. Implemented in open-source R scripts and applicable to grouped or individual samples, the Cosbin tool will allow biologists to detect subtle yet important molecular signals across known or novel cell types.
68.
Students' Conceptions of Normalization. Watson, Kevin L. 13 October 2020
Improving the learning and success of students in undergraduate science, technology, engineering, and mathematics (STEM) courses has become an increased focus of education researchers within the past decade. As part of these efforts, discipline-based education research (DBER) has emerged within STEM education as a way to address discipline-specific challenges for teaching and learning, by combining expert knowledge of the various STEM disciplines with knowledge about teaching and learning (Dolan et al., 2018; National Research Council, 2012). Particularly important to furthering DBER and improving STEM education are interdisciplinary studies that examine how the teaching and learning of specific concepts develop among and across various STEM disciplines... / Ph. D. / Dissertation proposal
69.
Towards RDF normalization / Vers une normalisation RDF. Ticona Herrera, Regina Paola. 06 July 2016
Over the past three decades, millions of people have been producing and sharing information on the Web. This information can be structured, semi-structured, and/or non-structured, such as blogs, comments, Web pages, and multimedia data, which require a formal description to support their publication and exchange on the Web. To help address this problem, the World Wide Web Consortium (W3C) introduced in 1999 the RDF standard as a data model designed to standardize the definition and use of metadata, in order to better describe and handle data semantics, thus improving interoperability and scalability and promoting the deployment of new Web applications. Currently, billions of RDF descriptions are available on the Web through Linked Open Data cloud projects (e.g., DBpedia and LinkedGeoData). Several data providers have also adopted the principles and practices of Linked Data to share, connect, enrich, and publish their information using the RDF standard, including governments (e.g., France, Canada, and the United Kingdom), universities (e.g., the Open University), and companies (e.g., the BBC and CNN). As a result, both individuals and organizations are increasingly producing huge collections of RDF descriptions and exchanging them through different serialization formats (e.g., RDF/XML, Turtle, N-Triples). However, many available RDF descriptions (i.e., graphs and serializations) are noisy in terms of structure, syntax, and semantics, and thus may present problems when exploiting them (e.g., more storage, processing time, and loading time). In this study, we propose to clean RDF descriptions of redundancies and unused information, which we consider to be an essential and required stepping stone toward performing advanced RDF processing as well as developing RDF databases and related applications (e.g., similarity computation, mapping, alignment, integration, versioning, clustering, and classification). For that purpose, we have defined a framework entitled R2NR, which normalizes different RDF descriptions pertaining to the same information into one normalized representation that can then be tuned both at the graph level and at the serialization level, depending on the target application and user requirements. We illustrate this approach by introducing use cases (real and synthetic, both simple ones for comprehension and larger ones to show scalability) that need to be normalized. The contributions of the thesis can be summarized as follows: (i) producing a normalized (output) RDF representation that preserves all the information in the source (input) RDF descriptions; (ii) eliminating redundancies and disparities in the normalized RDF descriptions, both at the logical (graph) and physical (serialization) levels; (iii) computing an RDF serialization output adapted to the target application requirements (faster loading, better storage, etc.); (iv) providing a mathematical formalization of the normalization process with dedicated normalization functions, operators, and rules with provable properties; and (v) providing a prototype tool called RDF2NormRDF (desktop and online versions) in order to test and evaluate the approach's efficiency. In order to validate our framework, the prototype RDF2NormRDF has been tested through extensive experimentation. Experimental results show significant improvements over existing approaches, namely regarding loading time and file size, while preserving all the information from the original description.
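As a small illustration of the serialization-level redundancy problem described above (and not of the R2NR framework or the RDF2NormRDF prototype themselves), the following Python sketch uses rdflib, assuming version 6 or later, to show that two syntactically different and partly redundant descriptions parse to the same logical graph, and that a naive canonical form, here sorted N-Triples chosen purely for illustration, makes them byte-identical. Real canonicalization must also handle blank node labeling, which this sketch sidesteps.

```python
from rdflib import Graph
from rdflib.compare import isomorphic

# Two descriptions of the same information: one verbose, with a duplicated
# triple, and one compact. Turtle is used here purely for readability.
verbose = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<http://example.org/alice> foaf:name "Alice" .
<http://example.org/alice> foaf:name "Alice" .
<http://example.org/alice> foaf:knows <http://example.org/bob> .
"""
compact = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<http://example.org/alice> foaf:name "Alice" ;
                           foaf:knows <http://example.org/bob> .
"""

g1, g2 = Graph(), Graph()
g1.parse(data=verbose, format="turtle")   # duplicate triples collapse: a graph is a set
g2.parse(data=compact, format="turtle")
assert isomorphic(g1, g2)                 # same logical graph despite different syntax

def canonical_ntriples(g: Graph) -> str:
    """One naive normalized serialization: sorted N-Triples lines.
    (Blank nodes would need canonical relabeling, omitted here.)"""
    lines = g.serialize(format="nt").splitlines()   # str in rdflib >= 6
    return "\n".join(sorted(line for line in lines if line.strip()))

assert canonical_ntriples(g1) == canonical_ntriples(g2)
```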
70.
Análise de técnicas de normalização aplicadas ao reconhecimento facial / Analysis of normalization techniques applied to face recognition. Andrezza, Igor Lucena Peixoto. 27 February 2015
Biometrics offers a reliable authentication mechanism that identifies users through their physical and behavioral characteristics. The problem of face recognition is not trivial because many factors affect face detection and recognition, such as lighting, face pose, hair, and beard. This work analyzes the effects of geometric and lighting normalization techniques on face recognition methods, aiming to adapt those methods to uncontrolled environments. The results show that including background information in the normalization process increases face recognition error rates, a problem that occurs in many works in the literature. Lighting normalization and geometric normalization, when performed with precise eye-center points, effectively help the face recognition task.
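To make the two normalization steps concrete, here is a minimal OpenCV sketch of eye-center-based geometric normalization followed by histogram equalization as a simple lighting normalization. It is only an illustration under assumed conventions (output size, target eye positions, equalization as the lighting step) rather than the specific techniques evaluated in the dissertation, and it assumes the eye centers are supplied by a separate detector.

```python
import cv2
import numpy as np

def normalize_face(gray, left_eye, right_eye, out_size=128):
    """Align an 8-bit grayscale face image using the two eye centers
    (rotation + scale + crop), then apply histogram equalization."""
    (lx, ly), (rx, ry) = left_eye, right_eye

    # Rotate so the eye line becomes horizontal and scale so the
    # inter-ocular distance maps to half of the output width.
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))
    eye_dist = np.hypot(rx - lx, ry - ly)
    scale = (0.5 * out_size) / eye_dist
    eyes_mid = ((lx + rx) / 2.0, (ly + ry) / 2.0)
    M = cv2.getRotationMatrix2D(eyes_mid, angle, scale)

    # Shift the eye midpoint to a fixed position in the output frame; the
    # fixed-size crop also discards most of the background, which the
    # dissertation found otherwise inflates recognition error rates.
    M[0, 2] += out_size / 2.0 - eyes_mid[0]
    M[1, 2] += 0.35 * out_size - eyes_mid[1]
    aligned = cv2.warpAffine(gray, M, (out_size, out_size))

    # Simple lighting normalization.
    return cv2.equalizeHist(aligned)

# Hypothetical usage: eye centers would come from an eye/landmark detector.
# face = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)
# normalized = normalize_face(face, left_eye=(142.0, 180.0), right_eye=(210.0, 176.0))
```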