491

Learning Statistical and Geometric Models from Microarray Gene Expression Data

Zhu, Yitan 01 October 2009 (has links)
In this dissertation, we propose and develop innovative data modeling and analysis methods for extracting meaningful and specific information about disease mechanisms from microarray gene expression data. To provide a high-level overview of gene expression data for easy and insightful understanding of the data structure, we propose a novel statistical data clustering and visualization algorithm that is comprehensively effective for multiple clustering tasks and that overcomes some major limitations of existing clustering methods. The proposed clustering and visualization algorithm performs progressive, divisive hierarchical clustering and visualization, supported by hierarchical statistical modeling, supervised/unsupervised informative gene/feature selection, supervised/unsupervised data visualization, and user/prior knowledge guidance through human-data interactions, to discover cluster structure within complex, high-dimensional gene expression data. For the purpose of selecting suitable clustering algorithm(s) for gene expression data analysis, we design an objective and reliable clustering evaluation scheme to assess the performance of clustering algorithms by comparing their sample clustering outcomes to phenotype categories. Using the proposed evaluation scheme, we compare the performance of our newly developed clustering algorithm with those of several benchmark clustering methods, and demonstrate the superior and stable performance of the proposed clustering algorithm. To identify the underlying active biological processes that jointly form the observed biological event, we propose a latent linear mixture model that quantitatively describes how the observed gene expressions are generated by a process of mixing the latent active biological processes. We prove a series of theorems to show the identifiability of the noise-free model. Based on relevant geometric concepts, convex analysis and optimization, gene clustering, and model stability analysis, we develop a robust blind source separation method that fits the model to the gene expression data and subsequently identifies the underlying biological processes and their activity levels under different biological conditions. Based on the experimental results obtained on cancer, muscle regeneration, and muscular dystrophy gene expression data, we believe that the research work presented in this dissertation not only contributes to the engineering research areas of machine learning and pattern recognition, but also provides novel and effective solutions to many biomedical research problems, improving the understanding of disease mechanisms. / Ph. D.
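
As an illustration of the latent linear mixture model described above, the sketch below factorizes a synthetic expression matrix into nonnegative sources (latent biological processes) and mixing weights (their activity levels). It uses an off-the-shelf NMF solver purely as a stand-in for the convex-analysis-based blind source separation method developed in the dissertation; the matrix sizes and variable names are illustrative assumptions, not the author's implementation.

```python
# Minimal sketch of the latent linear mixture model X ~= A @ S, where rows of S
# are latent biological processes and A holds each gene's loading on them.
# NMF is used here only as a generic stand-in for the dissertation's
# convex-analysis-based blind source separation method.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Synthetic example: 200 genes observed under 20 biological conditions,
# assumed to be generated by 3 latent processes (all values nonnegative).
n_genes, n_samples, n_processes = 200, 20, 3
S_true = rng.gamma(shape=2.0, scale=1.0, size=(n_processes, n_samples))
A_true = rng.gamma(shape=2.0, scale=1.0, size=(n_genes, n_processes))
X = A_true @ S_true + 0.05 * rng.random((n_genes, n_samples))  # noisy mixture

# Fit X ~= A @ S with nonnegativity constraints.
model = NMF(n_components=n_processes, init="nndsvda", max_iter=1000, random_state=0)
A_hat = model.fit_transform(X)      # per-gene loadings on each latent process
S_hat = model.components_           # activity of each process across conditions

print("reconstruction error:", model.reconstruction_err_)
```
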
492

Time-Varying Frequency Selective IQ Imbalance Estimation and Compensation

Inti, Durga Laxmi Narayana Swamy 14 June 2017 (has links)
Direct-Down Conversion (DDC) principle-based transceiver architectures are of interest to meet the diverse needs of present and future wireless systems. DDC transceivers have a simple structure with fewer analog components and offer low-cost, flexible and multi-standard solutions. However, DDC transceivers have certain circuit impairments affecting their performance in wide-band, high data rate and multi-user systems. IQ imbalance (IQI) is one of the problems of DDC transceivers that limits their image rejection capabilities. Compensation techniques for frequency-independent IQI arising due to gain and phase mismatches of the mixers in the I/Q paths of the transceiver have been widely discussed in the literature. However, for wideband multi-channel transceivers, it is becoming increasingly important to address frequency-dependent IQI arising due to mismatches in the analog I/Q lowpass filters. A hardware-efficient and standard-independent digital estimation and compensation technique for frequency-dependent IQI is introduced which is also capable of tracking time-varying IQI changes. The technique is blind and adaptive in nature, based on second-order statistical properties of complex random signals such as properness/circularity. A detailed performance analysis of the introduced technique is carried out through computer simulations for various real-time operating scenarios. A novel technique for finding the optimal number of taps required for the adaptive IQI compensation filter is proposed and the performance of this technique is validated. In addition, a metric for the measure of properness is developed and used for error power and step size analysis. / Master of Science / A wireless transceiver consists of two major building blocks, namely the RF front-end and the digital baseband. The front-end performs functions such as frequency conversion, filtering, and amplification. Imperfections introduced by deep-submicron fabrication lead to non-idealities in the front-end components, which limit their accuracy and affect the performance of the overall transceiver. Complex (I/Q) mixing of baseband signals is preferred over real mixing because of its inherent bandwidth efficiency. The I/Q paths enabling this complex mixing in the front-end may not be exactly identical, thereby disturbing the perfect orthogonality of the in-phase and quadrature components and leading to IQ imbalance. The resultant IQ imbalance leads to an image of the signal formed at its mirror frequencies. Imbalances arising from mixers lead to an image of constant strength, whereas I/Q low-pass filter mismatches lead to an image of varying strength across the Nyquist range. In addition, temperature effects cause slow variation in IQ imbalance with time. In this thesis a hardware-efficient and standard-independent technique is introduced to compensate for performance-degrading IQ imbalance. The technique is blind and adaptive in nature and uses second-order statistical signal properties like circularity or properness for IQ imbalance estimation. The contribution of this work, which gives a key insight into the optimal number of taps required for the adaptive compensation filter, improves on the state-of-the-art technique. The performance of the technique is evaluated under various scenarios of interest and a detailed analysis of the results is presented.
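
As a rough illustration of the circularity-based principle mentioned above, the sketch below adapts a short conjugate-branch FIR compensator so that the complementary autocorrelation of its output is pushed toward zero, restoring properness and suppressing the image. It is a generic, textbook-style blind update under assumed imbalance coefficients, tap count, and step size; it is not the estimator, tap-length selection method, or properness metric developed in the thesis.

```python
# Minimal sketch of blind, circularity-based IQ-imbalance compensation:
# the compensator y[n] = x[n] + sum_k w[k] * conj(x[n-k]) is adapted so that the
# complementary autocorrelation E[y[n] y[n-k]] is driven toward zero (properness).
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
s = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)  # proper source

# Simulated receiver IQ imbalance: x = g1*s + g2*conj(s) (g2 creates the image).
g1, g2 = 1.0, 0.1 * np.exp(1j * 0.3)
x = g1 * s + g2 * np.conj(s)

taps = 4          # assumed tap count; choosing this is part of the thesis's contribution
mu = 1e-3         # assumed step size
w = np.zeros(taps, dtype=complex)
xbuf = np.zeros(taps, dtype=complex)   # recent conjugate-branch inputs
ybuf = np.zeros(taps, dtype=complex)   # recent compensator outputs
y = np.zeros(n, dtype=complex)
for i in range(n):
    xbuf = np.roll(xbuf, 1)
    xbuf[0] = np.conj(x[i])
    y[i] = x[i] + w @ xbuf
    ybuf = np.roll(ybuf, 1)
    ybuf[0] = y[i]
    w -= mu * y[i] * ybuf              # push instantaneous y[n]*y[n-k] toward zero

def irr_db(z, ref):
    """Image rejection ratio of z relative to the known clean reference."""
    a = np.vdot(ref, z) / np.vdot(ref, ref)            # desired-signal gain
    b = np.vdot(np.conj(ref), z) / np.vdot(ref, ref)   # image gain
    return 10 * np.log10(abs(a) ** 2 / abs(b) ** 2)

half = n // 2
print(f"image rejection before: {irr_db(x[half:], s[half:]):.1f} dB, "
      f"after: {irr_db(y[half:], s[half:]):.1f} dB")
```
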
493

Computational Dissection of Composite Molecular Signatures and Transcriptional Modules

Gong, Ting 22 January 2010 (has links)
This dissertation aims to develop a latent variable modeling framework with which to analyze gene expression profiling data for computational dissection of molecular signatures and transcriptional modules. The first part of the dissertation is focused on extracting pure gene expression signals from tissue or cell mixtures. The main goal of gene expression profiling is to identify the pure signatures of different cell types (such as cancer cells, stromal cells and inflammatory cells) and estimate the concentration of each cell type. In order to accomplish this, a new blind source separation method is developed, namely, nonnegative partially independent component analysis (nPICA), for tissue heterogeneity correction (THC). The THC problem is formulated as a constrained optimization problem and solved with a learning algorithm based on geometrical and statistical principles. The second part of the dissertation seeks to identify gene modules from gene expression data to uncover important biological processes in different types of cells. A new gene clustering approach, nonnegative independent component analysis (nICA), is developed for gene module identification. The nICA approach is complemented by an information-theoretic procedure for input sample selection and a novel stability analysis approach for proper dimension estimation. Experimental results showed that the gene modules identified by the nICA approach appear to be significantly enriched in functional annotations in terms of gene ontology (GO) categories. The third part of the dissertation moves from the gene-module level down to the DNA-sequence level to identify gene regulatory programs by integrating gene expression data and protein-DNA binding data. A sparse hidden component model is first developed for this problem, taking into account a well-known biological principle, namely that a gene is most likely regulated by only a few regulators. This is followed by the development of a novel computational approach, motif-guided sparse decomposition (mSD), in order to integrate the binding information and gene expression data. These computational approaches are primarily developed for analyzing high-throughput gene expression profiling data. Nevertheless, the proposed methods can be extended to analyze other types of high-throughput data for biomedical research. / Ph. D.
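
To make the "few regulators per gene" principle behind the sparse hidden component model concrete, the sketch below recovers a small regulator set for each gene by sparse regression against assumed regulator activity profiles. Plain Lasso is used as a generic sparse solver; the actual motif-guided sparse decomposition (mSD) additionally constrains the support with protein-DNA binding (motif) evidence, which is not reproduced here.

```python
# Minimal sketch of sparse per-gene regression: each gene's expression profile is
# approximated as a sparse combination of regulator activity profiles, reflecting
# the assumption that only a few regulators drive any given gene.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n_samples, n_regulators, n_genes = 30, 15, 100

# Assumed regulator activity profiles across samples (estimated in practice).
tf_activity = rng.standard_normal((n_samples, n_regulators))

# Synthetic gene expression: each gene driven by only 2 of the 15 regulators.
true_loadings = np.zeros((n_regulators, n_genes))
for g in range(n_genes):
    idx = rng.choice(n_regulators, size=2, replace=False)
    true_loadings[idx, g] = rng.standard_normal(2)
expression = tf_activity @ true_loadings + 0.1 * rng.standard_normal((n_samples, n_genes))

# Sparse regression per gene recovers a small regulator set for each gene.
lasso = Lasso(alpha=0.05)
estimated = np.column_stack(
    [lasso.fit(tf_activity, expression[:, g]).coef_ for g in range(n_genes)]
)
avg_support = (np.abs(estimated) > 1e-3).sum(axis=0).mean()
print(f"average number of regulators selected per gene: {avg_support:.1f}")
```
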
494

A framework for blind signal correction using optimized polyspectra-based cost functions

Braeger, Steven W. 01 January 2009 (has links)
"Blind" inversion of the effects of a given operator on a signal is an extremely difficult task that has no easy solutions. However,. Dr. Hany Farid has published several works that each individua:lly appear to achieve exactly this seemingly impossible result. In this work, we contribute a comprehensive overview of the published applications of blind process inversion, as well as provide the generalized form of the algorithms and requirements that are found in each of these applications, thereby formulating and explaining a general framework for blind process inversion using Farid's Algorithm. Additionally, we explain the knowledge required to derive the ROSA-based cost function on which Farid's Algorithm depends. As our primary contribution, we analyze the algorithmic complexity of this cost function based on the way it is currently, naively calculated, and derive a new algorithm to compute this cost function that has greatly reduced algorithmic complexity. Finally, we suggest an additional application of Farid's Algorithm to the problem of blindly estimating true camera response functions from a single image.
495

Transmitter Authentication in Dynamic Spectrum Sharing

Kumar, Vireshwar 02 February 2017 (has links)
Recent advances in spectrum access technologies, such as software-defined radios, have made dynamic spectrum sharing (DSS) a viable option for addressing the spectrum shortage problem. However, these advances have also contributed to the increased possibility of "rogue" transmitter radios which may cause significant interference to other radios in DSS. One approach for countering such threats is to employ a transmitter authentication scheme at the physical (PHY) layer. In PHY-layer authentication, an authentication signal is generated by the transmitter, and embedded into the message signal. This enables a regulatory enforcement entity to extract the authentication signal from the received signal, uniquely identify a transmitter, and collect verifiable evidence of a rogue transmission that can be used later during an adjudication process. There are two primary technical challenges in devising a transmitter authentication scheme for DSS: (1) how to generate and verify the authentication signal such that the required security and privacy criteria are met; and (2) how to embed and extract the authentication signal without negatively impacting the performance of the transmitters and the receivers in DSS. With regard to the first challenge, the authentication schemes in the prior art that provide privacy-preserving authentication have limited practical value for use in large networks due to the high computational complexity of their revocation check procedures. In this dissertation, novel approaches are proposed that significantly improve the scalability of transmitter authentication with respect to revocation. With regard to the second challenge, in the existing PHY-layer authentication techniques the authentication signal is embedded into the message signal in such a way that the authentication signal appears as noise to the message signal and vice versa. Hence, existing schemes are constrained by a fundamental tradeoff between the message signal's signal-to-interference-and-noise ratio (SINR) and the authentication signal's SINR. In this dissertation, novel approaches are proposed that are not constrained by the aforementioned tradeoff between the message and authentication signals. / Ph. D.
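
The baseline tradeoff described above can be made concrete with a toy power-allocation calculation: if the authentication signal is simply superimposed on the message with power fraction rho, each signal sees the other as interference, so one SINR can only be raised at the expense of the other. The sketch below illustrates that conventional embedding, which the dissertation's schemes are designed to avoid; the power and noise figures are arbitrary assumptions.

```python
# Toy model of superposition embedding: total power P is split between message
# and authentication, and each treats the other as interference at the receiver.
import numpy as np

P = 1.0          # total transmit power (assumed)
N0 = 0.05        # receiver noise power (assumed)

for rho in (0.01, 0.05, 0.1, 0.2):
    msg_sinr = (1 - rho) * P / (rho * P + N0)    # authentication acts as interference
    auth_sinr = rho * P / ((1 - rho) * P + N0)   # message acts as interference
    print(f"rho={rho:.2f}  message SINR={10*np.log10(msg_sinr):5.1f} dB  "
          f"authentication SINR={10*np.log10(auth_sinr):6.1f} dB")
```
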
496

Autologous cell therapy for aged human skin: A randomized, placebo-controlled, phase-I study

Grether-Beck, S., Marini, A., Jaenicke, T., Goessens-Rück, P., McElwee, Kevin J., Hoffman, R., Krutmann, J. 10 December 2019 (has links)
Yes / Introduction: Skin ageing involves senescent fibroblast accumulation, disturbance in extracellular matrix (ECM) homeostasis, and decreased collagen synthesis. Objective: To assess a cell therapy product for aged skin (RCS-01; verum) consisting of ~25 × 10⁶ cultured, autologous cells derived from anagen hair follicle non-bulbar dermal sheath (NBDS). Methods: For each subject in the verum group, 4 areas of buttock skin were injected intradermally 1 or 3 times at monthly intervals with RCS-01, cryomedium, or needle penetration without injection; in the placebo group RCS-01 was replaced by cryomedium. The primary endpoint was assessment of local adverse event profiles. As secondary endpoints, expression of genes related to ECM homeostasis was assessed in biopsies from randomly selected volunteers in the RCS-01 group taken 4 weeks after the last injection. Results: Injections were well tolerated with no severe adverse events reported 1 year after the first injection. When compared with placebo-treated skin, a single treatment with RCS-01 resulted in a significant upregulation of TGFβ1, CTGF, COL1A1, COL1A2, COL3A1, and lumican mRNA expression. Limitations: The cohort size was insufficient for dose-ranging evaluation and subgroup analyses of efficacy. Conclusions: RCS-01 therapy is well tolerated and associated with a gene expression response consistent with an improvement of ECM homeostasis. / Replicel Life Sciences Inc, Vancouver, Canada.
497

Den osynliga ingrediensen : Hur namnval styr konsumentens uppfattning / The invisible ingredient : How name choices shape consumer perception

Flodberg, Rasmus, Olsson, Albin January 2024 (has links)
Naming a dish or a product may seem straightforward, but the influence of the name can have a significant impact on how it is ultimately perceived, with the use of sensory descriptions potentially enhancing the value of a dish. The purpose of the study is to investigate how name descriptions affect consumer perceptions. The study was conducted at the School of Hospitality, Culinary Arts & Meal Science in Grythyttan with the participation of 37 individuals, who took part in a sensory test. The sensory evaluation was conducted using a liking test and CATA. The results of the study show that expectations and name influence play a significant role in consumer perceptions. The name of a dish, such as "crisis," "homemade," or "gourmet," creates both positive and negative expectations for the guest, which in turn enhances or diminishes the perception of a product and can be crucial in the choice of dish.
498

Consider the View (La Vue)

Jordan, Tamia Chantel 08 1900 (has links)
Visual impairment and blindness are not often discussed in the media, and the community is often left out or forgotten in the course of history. Through documentary filmmaking, Consider the View (La Vue) provides an artistic exploration of blindness, using the camera as optical power alongside other forms of art. Viewers experience a new perspective on what it means to be visually impaired.
499

Virtual world accessibility: a multitool approach

Kruger, Rynhardt Pieter 12 1900 (has links)
Thesis (MSc)--Stellenbosch University, 2014. / ENGLISH ABSTRACT: Computer-based virtual worlds are increasingly used for activities which previously required physical environments. However, in its mainstream form, such a virtual world provides output on a graphical display and is thus inaccessible to a blind user. To achieve accessibility for blind users, an alternative to graphical output must be used. Audio and text are two output methods that can be considered. However, when using audio, care must be taken not to overload the audio channel. Channel overloading is possible with audio since it is not a selective output medium like the visual channel, that is, the user cannot choose what he/she wants to hear. Text should also be treated as audio, since a blind user consumes textual information as synthesized speech. In this research we discuss one possible solution to the problem of channel overloading, by the use of multiple exploration and navigation tools. These tools should allow the user to shape the information provided as audio output. Specifically, we discuss the development of a virtual world client called Perspective, enabling non-visual access to virtual worlds by the use of multiple navigation and exploration tools. Perspective also serves as a framework for tool implementation and evaluation. Finally we give recommendations for improvements to current virtual world building practices and protocols, so as to work toward an accessibility standard. / AFRIKAANSE OPSOMMING: Rekenaargebaseerde virtuele wêrelde word toenemend gebruik vir aktiwiteite wat voorheen fisiese omgewings benodig het. Tog verskaf so 'n virtuele wêreld, in sy standaard vorm, afvoer as 'n grafiese beeld en is dus ontoeganklik vir 'n blinde gebruiker. Om toeganklikheid vir blinde gebruikers te bewerkstellig, moet 'n alternatief vir die grafiese beeld gebruik word. Klank en teks is twee alternatiewe wat beskou kan word. Tog moet klank versigtig gebruik word, aangesien die klankkanaal oorlaai kan word. Die klankkanaal kan oorlaai word aangesien dit nie 'n selektiewe kanaal soos die visuele kanaal is nie, met ander woorde, die gebruiker kan nie kies wat hy/sy wil hoor nie. Teks moet ook as klank beskou word, aangesien 'n blinde gebruiker teks in die vorm van gesintetiseerde spraak inneem. Met hierdie navorsing bespreek ons een oplossing vir die probleem van kanaaloorlading, deur die gebruik van verskeie navigasie- en verkenningsgereedskapstukke. Hierdie gereedskapstukke behoort die gebruiker in staat te stel om die inligting wat as klank oorgedra word, te bepaal. Ons bespreek spesifiek die ontwikkeling van 'n virtuele wêreld-kliënt genaamd Perspective, wat nie-visuele toegang tot virtuele wêrelde bewerkstellig deur die gebruik van meervoudige navigasie- en verkenningsgereedskapstukke. Perspective dien ook as 'n raamwerk vir die ontwikkeling en evaluering van gereedskapstukke. Laastens verskaf ons voorstelle vir verbeteringe van die boutegnieke en protokolle van huidige virtuele wêrelde, as eerste stap na 'n toeganklikheidsstandaard vir virtuele wêrelde.
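
To illustrate the multitool idea in the abstract above, here is a purely hypothetical sketch (not the actual Perspective code) of what a pluggable exploration tool might look like: each tool converts a filtered slice of the virtual world into a few short messages for speech output, so the user controls how much information reaches the audio channel by choosing which tool to invoke.

```python
# Hypothetical tool interface illustrating the multitool approach; all names are
# invented for this sketch and do not reflect the Perspective client's API.
from dataclasses import dataclass
from math import hypot

@dataclass
class WorldObject:
    name: str
    x: float
    y: float

class Tool:
    """Base class for exploration tools; subclasses decide what gets spoken."""
    def describe(self, objects, listener_x, listener_y):
        raise NotImplementedError

class NearestObjectsTool(Tool):
    """Report only the few closest objects instead of the whole scene."""
    def __init__(self, limit=3):
        self.limit = limit

    def describe(self, objects, listener_x, listener_y):
        ranked = sorted(objects, key=lambda o: hypot(o.x - listener_x, o.y - listener_y))
        return [f"{o.name}, {hypot(o.x - listener_x, o.y - listener_y):.0f} metres away"
                for o in ranked[: self.limit]]

# Example: the user invokes one tool at a time, keeping audio output manageable.
scene = [WorldObject("door", 2, 1), WorldObject("fountain", 10, 4), WorldObject("stairs", 3, 8)]
for line in NearestObjectsTool(limit=2).describe(scene, 0.0, 0.0):
    print(line)   # would be sent to a speech synthesizer in a real client
```
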
500

Restauration d'images Satellitaires par des techniques de filtrage statistique non linéaire / Satellite image restoration by nonlinear statistical filtering techniques

Marhaba, Bassel 21 November 2018 (has links)
Le traitement des images satellitaires est considéré comme l'un des domaines les plus intéressants du traitement d'images numériques. Les images satellitaires peuvent être dégradées pour plusieurs raisons, notamment les mouvements des satellites, les conditions météorologiques, la dispersion et d'autres facteurs. Plusieurs méthodes d'amélioration et de restauration des images satellitaires ont été étudiées et développées dans la littérature. Les travaux présentés dans cette thèse se concentrent sur la restauration des images satellitaires par des techniques de filtrage statistique non linéaire. Dans un premier temps, nous avons proposé une nouvelle méthode pour restaurer les images satellitaires en combinant les techniques de restauration aveugle et non aveugle. La raison de cette combinaison est d'exploiter les avantages de chaque technique utilisée. Dans un deuxième temps, de nouveaux algorithmes statistiques de restauration d'images basés sur les filtres non linéaires et l'estimation non paramétrique de densité multivariée ont été proposés. L'estimation non paramétrique de la densité a posteriori est utilisée dans l'étape de ré-échantillonnage du filtre Bayésien bootstrap pour résoudre le problème de la perte de diversité dans le système de particules. Enfin, nous avons introduit une nouvelle méthode de combinaison hybride pour la restauration des images, basée sur la transformée en ondelettes discrète (TOD) et les algorithmes proposés à l'étape deux, et nous avons prouvé que les performances de la méthode combinée sont meilleures que les performances de l'approche TOD pour la réduction du bruit dans les images satellitaires dégradées. / Satellite image processing is considered one of the most interesting areas in the field of digital image processing. Satellite images are subject to degradation for several reasons, including satellite movement, weather, scattering, and other factors. Several methods for satellite image enhancement and restoration have been studied and developed in the literature. The work presented in this thesis is focused on satellite image restoration by nonlinear statistical filtering techniques. In the first step, we proposed a novel method to restore satellite images using a combination of blind and non-blind restoration techniques. The reason for this combination is to exploit the advantages of each technique used. In the second step, novel statistical image restoration algorithms based on nonlinear filters and nonparametric multivariate density estimation have been proposed. The nonparametric multivariate estimation of the posterior density is used in the resampling step of the Bayesian bootstrap filter to resolve the problem of loss of diversity among the particles. Finally, we have introduced a new hybrid combination method for image restoration based on the discrete wavelet transform (DWT) and the algorithms proposed in step two, and we have proved that the performance of the combined method is better than the performance of the DWT approach in the reduction of noise in degraded satellite images.
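
The loss-of-diversity fix mentioned above can be illustrated with a toy regularized bootstrap filter: after weighting, new particles are drawn from a kernel-smoothed (nonparametric) estimate of the posterior rather than by exact duplication. The scalar state-space model, noise levels, and Silverman bandwidth below are illustrative assumptions and stand in for the multivariate, image-domain filter actually developed in the thesis.

```python
# Minimal sketch of a bootstrap particle filter whose resampling step draws from
# a Gaussian-kernel (nonparametric) smoothing of the weighted particle set,
# which counteracts the loss of diversity caused by exact duplication.
import numpy as np

rng = np.random.default_rng(4)
T, N = 100, 500                      # time steps, particles
q, r = 0.1, 0.5                      # process and measurement noise std (assumed)

# Toy state-space model: x_t = 0.9 x_{t-1} + q*w_t,  y_t = x_t + r*v_t
x_true = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + q * rng.standard_normal()
    y[t] = x_true[t] + r * rng.standard_normal()

particles = rng.standard_normal(N)
estimates = np.zeros(T)
for t in range(T):
    particles = 0.9 * particles + q * rng.standard_normal(N)        # propagate
    w = np.exp(-0.5 * ((y[t] - particles) / r) ** 2)                 # likelihood weights
    w /= w.sum()
    estimates[t] = np.sum(w * particles)
    # Kernel-smoothed resampling: draw ancestors by weight, then jitter with a
    # Gaussian kernel whose bandwidth follows Silverman's rule of thumb.
    idx = rng.choice(N, size=N, p=w)
    sigma = np.sqrt(np.sum(w * (particles - estimates[t]) ** 2))     # weighted std
    h = 1.06 * sigma * N ** (-1 / 5)
    particles = particles[idx] + h * rng.standard_normal(N)

rmse = np.sqrt(np.mean((estimates - x_true) ** 2))
print(f"RMSE of the regularized bootstrap filter estimate: {rmse:.3f}")
```
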
