591

Latent variable based computational methods for applications in life sciences : Analysis and integration of omics data sets

Bylesjö, Max January 2008 (has links)
With the increasing availability of high-throughput systems for parallel monitoring of multiple variables, e.g. levels of large numbers of transcripts in functional genomics experiments, massive amounts of data are being collected even from single experiments. Extracting useful information from such systems is a non-trivial task that requires powerful computational methods to identify common trends and to help detect the underlying biological patterns. This thesis deals with the general computational problems of classifying and integrating high-dimensional empirical data using a latent variable based modeling approach. The underlying principle of this approach is that a complex system can be described by a few independent components that capture the systematic properties of the system. Such a strategy is well suited for handling noisy, multivariate data sets with strong multicollinearity structures, such as those typically encountered in many biological and chemical applications. The main foci of the studies this thesis is based upon are applications and extensions of the orthogonal projections to latent structures (OPLS) method in life science contexts. OPLS is a latent variable based regression method that separately describes systematic sources of variation that are related and unrelated to the modeling aim (for instance, classifying two different categories of samples). This separation of sources of variation can be used to pre-process data, but also has distinct advantages for model interpretation, as exemplified throughout the work. For classification cases, a probabilistic framework for OPLS has been developed that allows the incorporation of both variance and covariance into classification decisions. This can be seen as a unification of two historical classification paradigms based on either variance or covariance. In addition, a non-linear reformulation of the OPLS algorithm is outlined, which is useful for particularly complex regression or classification tasks. The general trend in functional genomics studies in the post-genomics era is to perform increasingly comprehensive characterizations of organisms in order to study the associations between their molecular and cellular components in greater detail. Frequently, abundances of all transcripts, proteins and metabolites are measured simultaneously in an organism at a given state or over time. In this work, a generalization of OPLS is described for the analysis of multiple data sets. It is shown that this method can be used to integrate data in functional genomics experiments by separating the systematic variation that is common to all data sets considered from sources of variation that are specific to each data set. / Functional genomics is a research field whose ultimate goal is to characterize all genes in the genome of an organism. This includes studies of how DNA is transcribed into mRNA, how it is subsequently translated into proteins, and how these proteins interact with and influence the organism's biochemical processes. The traditional approach has been to study the function, regulation and translation of one gene at a time. New technology in the field has, however, made it possible to study how thousands of transcripts, proteins and small molecules behave jointly in an organism at a given time or over time. In concrete terms, this also means that large amounts of data are generated even from small, isolated experiments.
Finding global trends and extracting useful information from such data sets is a non-trivial computational problem that requires advanced and interpretable mathematical models. This thesis describes the development and application of computational methods for classifying and integrating large amounts of empirical (measured) data. Common to all methods is that they are based on latent variables: variables that are not measured directly but are computed from other, observed variables. This concept is well suited to studies of complex systems that can be described by a few independent factors characterizing the main properties of the system, which is typical of many chemical and biological systems. The methods described in the thesis are general but have mainly been developed for and applied to data from biological experiments. The thesis demonstrates how these methods can be used to find complex relationships between measured data and other factors of interest, without losing the properties of the method that are critical for interpreting the results. The methods are applied to find common and unique properties of transcript regulation and how these are affected by and affect small molecules in the poplar tree. In addition, a larger experiment in poplar is described in which the relationship between levels of transcripts, proteins and small molecules is examined using the developed methods.
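For readers who want a concrete feel for the OPLS decomposition described above, the following minimal Python sketch (assuming only numpy) fits a single-response OPLS with one predictive and one Y-orthogonal component. It illustrates the general technique only, not the implementations developed in the thesis; the toy data, function name and single-component setting are assumptions made here.

import numpy as np

def opls_1(X, y):
    """Minimal single-response OPLS: one predictive + one Y-orthogonal component.
    X is an n x p column-centered matrix, y a centered length-n response."""
    w = X.T @ y                          # covariance direction between X and y
    w /= np.linalg.norm(w)
    t = X @ w                            # predictive score
    p = X.T @ t / (t @ t)                # X loading on the predictive score
    w_o = p - (w @ p) * w                # part of the loading orthogonal to w
    w_o /= np.linalg.norm(w_o)
    t_o = X @ w_o                        # Y-orthogonal score
    p_o = X.T @ t_o / (t_o @ t_o)
    X_filt = X - np.outer(t_o, p_o)      # X with the Y-orthogonal component removed
    t = X_filt @ w                       # re-fit the predictive component on filtered data
    q = (t @ y) / (t @ t)                # regression of y on the predictive score
    return dict(w=w, t=t, q=q, w_ortho=w_o, t_ortho=t_o, X_filtered=X_filt)

# Toy usage: 20 samples, 50 correlated variables
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))
y = X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=20)
X = X - X.mean(axis=0)
y = y - y.mean()
model = opls_1(X, y)
print(model["q"], model["t_ortho"].shape)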
592

Direction-of-Arrival Estimation in Spherically Isotropic Noise

Dorosh, Anastasiia January 2013 (has links)
Multisensor array signal processing of noisy measurements has received much attention in recent years. The classical problem in array signal processing is determining the location of an energy-radiating source relative to the location of the array, in other words, direction-of-arrival (DOA) estimation. One considers the signal estimation problem when, together with the signal(s) of interest, noise and interfering signals are present. In this report a direction-of-arrival estimation system based on an antenna array is described for detecting the azimuth arrival angles of signals picked up by the array. For this, the Multiple Signal Classification (MUSIC) algorithm is first considered. Studies show that, in spite of its good reputation and popularity among researchers, its performance has certain limits. In this subspace-based method for DOA estimation of signal wavefronts, the term corresponding to additive noise is initially assumed to be spatially white. In this work, we address the problem of DOA estimation of multiple target signals in a particular noise situation - correlated spherically isotropic noise - which, in many practical cases, models a more realistic context than the white noise assumption. The purpose of this work is to analyze the behaviour of the MUSIC algorithm, to compare its performance with some other algorithms (such as the Capon and the Classical algorithms) and, above all, to explore the precision of the detected angles as a function of different parameters, e.g. the number of samples, the noise variance and the number of incoming signals. Some modifications of the algorithms are also made in order to increase their performance. MATLAB is used to conduct the studies. The simulation results on the considered antenna array system indicate that in complex conditions the algorithms in question (and first of all the MUSIC algorithm) are unable to automatically detect and localize the DOAs of the signals with high accuracy. Other algorithms and ways of simplifying the problem (for example, denoising procedures) exist and may provide more precision, but require more computation time.
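To make the subspace idea behind MUSIC concrete, here is a minimal Python sketch (assuming only numpy) of the MUSIC pseudospectrum for a uniform linear array under the usual spatially white noise assumption; handling the correlated spherically isotropic noise studied in the thesis would additionally require pre-whitening with the noise covariance. The array size, element spacing, angular grid and source parameters below are illustrative assumptions; the thesis itself uses MATLAB, but the steps are the same.

import numpy as np

def music_spectrum(snapshots, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """MUSIC pseudospectrum for an M-element uniform linear array.
    snapshots: M x N complex data matrix; d: element spacing in wavelengths."""
    M = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
    _, eigvec = np.linalg.eigh(R)                             # eigenvalues ascending
    En = eigvec[:, :M - n_sources]                            # noise-subspace eigenvectors
    m = np.arange(M)
    spec = []
    for theta in np.deg2rad(angles):
        a = np.exp(2j * np.pi * d * m * np.sin(theta))        # steering vector
        spec.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return angles, np.array(spec)

# Toy scenario: two sources at -20 and 30 degrees, 8 elements, 200 snapshots
rng = np.random.default_rng(1)
M, N, spacing = 8, 200, 0.5
true_doas = np.deg2rad([-20.0, 30.0])
A = np.exp(2j * np.pi * spacing * np.outer(np.arange(M), np.sin(true_doas)))
S = (rng.normal(size=(2, N)) + 1j * rng.normal(size=(2, N))) / np.sqrt(2)
noise = 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))
angles, P = music_spectrum(A @ S + noise, n_sources=2)
# crude peak picking: the two strongest local maxima of the pseudospectrum
peaks = [i for i in range(1, len(P) - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
top = sorted(peaks, key=lambda i: P[i])[-2:]
print(sorted(float(angles[i]) for i in top))   # expected to be close to -20 and 30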
593

Adaptive pre-distortion for nonlinear high power amplifiers in OFDM systems

Durney Wasaff, Hugo Ivan 22 July 2004 (has links)
The rapid growth of communications over wired and wireless broadband transmission platforms, together with the increasingly widespread use of non-constant-amplitude modulations that, owing to their high spectral efficiency and low implementation cost, have been adopted within the development framework of several transmission standards, are the aspects that have provided the fundamental support and motivation for the present research work in the field of compensation of non-linear distortion in communication systems. The study of the effects of non-linear distortion and its compensation has for many years been an object of attention for researchers in a variety of areas. Today, in particular, this study remains fundamental, since it is directly involved in the development of state-of-the-art technologies in the area of communications. New digital transmission systems, especially those based on OFDM (Orthogonal Frequency Division Multiplexing), can offer high levels of spectral efficiency by using multilevel linear modulations over a large set of sub-carriers which, being (ideally) orthogonal in frequency, can be packed into a very reduced bandwidth, thus allowing high information rates per second and per unit bandwidth to be transmitted. As a consequence, however, problems such as adjacent-channel interference or the presence of non-linear distortion in the transmission chain critically affect the performance of these systems and impose severe limits on their viability. In fact, in the field of mobile and satellite communications there are currently several applications in which these modulation and multiplexing schemes are already operational. In these cases, transmit power efficiency is of primary importance in order to, among other reasons, achieve maximum autonomy of the equipment. In this context, the non-linear behaviour of the high power amplifiers used in radio-frequency transmission constitutes the main obstacle (from the point of view of non-linear distortion) to the proper operation of OFDM-based digital communication systems. Fortunately, this harmful effect can be compensated by means of several classical linearization techniques, whose ad-hoc variants have been proposed and widely investigated; an abundant literature exists today, part of which is referenced throughout this work. Among these techniques, digital pre-distortion offers optimal conditions for the design of adaptive linearizers, since it can be implemented at very low cost on the discrete base-band signal information. The general objective pursued is to provide the linearity conditions needed to exploit the capabilities of high-spectral-efficiency modulations while at the same time making maximum use of the available power. In this research work we first carry out a condensed review of some important linearization techniques, followed by a more detailed review of two relevant models used to characterize the non-linear behaviour of high power amplifiers (the Volterra series model and the Saleh model for memoryless non-linear amplifiers).
Alongside this, some interesting statistical properties associated with the non-linear distortion phenomenon are examined, which led us during the research to consider possible new applications in pre-distortion strategies. A description, at the system and signal-model level, of a generic OFDM transmission scheme is also included, with detailed analytical characterizations of the non-linear effect, in order to properly formalize an exact discrete model that provides deeper insight into the phenomenon under study. Finally, the design and evaluation of a pre-distortion scheme based on an iterative algorithm is presented; its main contribution is the two-dimensional optimization of a reduced number of interpolation coefficients that adaptively identify the inverse complex-gain characteristic of an amplifier, as a function of both the particular non-linear morphology of that curve and the probability distribution of the base-band input signals. / The rapid growth of wired and wireless broad-band communications and the pervasive use of spectrally efficient non-constant amplitude modulations, adopted in the framework of several standardized transmission formats, motivates and supports the present research work in the field of non-linear distortion in communication systems. The compensation of nonlinearities has received a lot of attention in past and recent years, with direct implications for the industrial development of last-generation communication technologies. New digital transmission systems, particularly those based on Orthogonal Frequency Division Multiplexing (OFDM), feature high spectral efficiency as they exploit multilevel linear modulations to transmit at high information rates in combination with a dense allocation of a large number of (ideally) orthogonal sub-carriers in a relatively reduced bandwidth. As a result, problems such as adjacent channel interference and non-linear distortion become critical for system performance and, therefore, must be reduced to a minimum. Moreover, numerous applications of such transmission schemes are already operative in the field of satellite and mobile communications, where power efficiency is of primary concern due to, among other reasons, the operational autonomy of the equipment and the effective transmitted power. In this context, the non-linear behaviour of high power amplifiers (HPAs) constitutes a major impairment for OFDM-based digital communications systems. The compensation of these harmful effects can be achieved using a variety of techniques that have been proposed and widely dealt with in the literature. Among these techniques, digital pre-distortion, which can be carried out at a very low cost over the discrete base-band information, provides optimal features for the efficient implementation of adaptive linearization. Hence, in order to provide good conditions for the reliable use of high spectral efficiency modulations while taking maximum advantage of the transmit power budget, it is necessary to incorporate a suitable linearization technique. In the present work, we begin by reviewing some background on linearization techniques. This leads us to continue analyzing two relevant theoretical models typically used in characterizing memory and memoryless nonlinear HPAs (the Volterra series model and the Saleh model for memoryless nonlinear HPAs).
In addition, a generic OFDM system and signal structure is described in detail by including the non-linear effect in the analytical model of the transmission chain. This is done in order to formalize an exact discrete OFDM model that helps us achieve a deeper understanding of the phenomenon under consideration. Then, some useful statistical properties and parameters associated with the nonlinear distortion are examined, as well as the application of a CDF-based estimation of nonlinearities, which is proposed as a new pre-distortion strategy. Finally, a new discrete adaptive pre-distortion scheme is formulated and then tested via simulation. The analysis and design of the main proposed algorithm consider the adaptive identification of the inverse complex gain characteristic of a nonlinear HPA. For this purpose, an iterative 2-D optimization of a reduced number of interpolation functions is formulated under a special two-fold criterion which accounts for the particular morphology of the HPA's nonlinear gain characteristic, as well as the probability distribution of the input base-band information.
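As a purely illustrative aid (and not the iterative 2-D interpolation scheme developed in the thesis), the Python sketch below, assuming only numpy, implements the memoryless Saleh AM/AM and AM/PM model and a simple look-up-table pre-distorter that inverts the measured complex gain. The Saleh parameter values are the commonly quoted ones and, like the signal back-off and table size, are assumptions made here.

import numpy as np

def saleh_hpa(x, a_a=2.1587, b_a=1.1517, a_p=4.0033, b_p=9.1040):
    """Memoryless Saleh model: AM/AM and AM/PM distortion of a complex baseband signal."""
    r = np.abs(x)
    gain = a_a / (1 + b_a * r**2)                  # AM/AM characteristic divided by r
    phase = a_p * r**2 / (1 + b_p * r**2)          # AM/PM phase rotation in radians
    return x * gain * np.exp(1j * phase)

def build_lut_predistorter(n_bins=256, r_max=1.0):
    """Tabulate the inverse of the HPA complex gain on a grid of desired output amplitudes."""
    r_out = np.linspace(1e-3, r_max, n_bins)
    r_in = np.zeros(n_bins)
    for i, r_d in enumerate(r_out):
        lo, hi = 0.0, 2.0                          # bisection on the monotone part of AM/AM
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if np.abs(saleh_hpa(mid)) < r_d:
                lo = mid
            else:
                hi = mid
        r_in[i] = 0.5 * (lo + hi)
    phase_in = np.angle(saleh_hpa(r_in))           # AM/PM introduced at the required drive level
    return r_out, r_in, phase_in

def predistort(x, lut):
    """Map each sample so that the HPA output approximates the original sample."""
    r_out, r_in, phase_in = lut
    r = np.clip(np.abs(x), 1e-6, r_out[-1])
    amp = np.interp(r, r_out, r_in)
    ph = np.interp(r, r_out, phase_in)
    return amp * np.exp(1j * (np.angle(x) - ph))   # pre-rotate against the AM/PM term

# Toy usage on a random OFDM-like baseband signal, backed off below saturation
rng = np.random.default_rng(2)
x = 0.3 * (rng.normal(size=1024) + 1j * rng.normal(size=1024)) / np.sqrt(2)
lut = build_lut_predistorter()
y = saleh_hpa(predistort(x, lut))
print("max residual distortion:", float(np.max(np.abs(y - x))))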
594

Panic Detection in Human Crowds using Sparse Coding

Kumar, Abhishek 21 August 2012 (has links)
Recently, the surveillance of human activities has drawn a lot of attention from the research community, and camera-based surveillance is increasingly being assisted by computers. Cameras are used extensively for surveilling human activities; however, placing cameras and transmitting visual data is not the end of a surveillance system. Surveillance needs to detect abnormal or unwanted activities. Such abnormal activities are very infrequent compared to regular activities. At present, surveillance is done manually, where the job of operators is to watch a set of surveillance video screens to discover an abnormal event. This is expensive and prone to error. This limitation of surveillance systems can be effectively removed if an automated anomaly detection system is designed. With powerful computers, computer vision is being seen as a panacea for surveillance. A computer-vision-aided anomaly detection system will enable the selection of those video frames which contain an anomaly, and only those selected frames will be used for manual verification. A panic is a type of anomaly in a human crowd, which appears when a group of people start to move faster than their usual speed. Such situations can arise due to a fearsome activity near a crowd, such as a fight, robbery or riot. A variety of computer-vision-based algorithms have been developed to detect panic in human crowds; however, most of the proposed algorithms are computationally expensive and hence too slow to be real-time. Dictionary learning is a robust tool to model a behaviour in terms of a linear combination of dictionary elements. A few panic detection algorithms have shown high accuracy using the dictionary learning method; however, the dictionary learning approach is computationally expensive. Orthogonal matching pursuit (OMP) is an inexpensive way to model a behaviour using dictionary elements, and in this research OMP is used to design a panic detection algorithm. The proposed algorithm has been tested on two datasets and the results are found to be comparable to state-of-the-art algorithms.
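To make the sparse-coding step concrete, here is a small Python sketch (assuming only numpy) of orthogonal matching pursuit together with a reconstruction-error test of the kind the abstract alludes to. The dictionary, feature vectors and the normal/anomalous split are toy assumptions, not the thesis's actual motion features or datasets.

import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: approximate x as a k-sparse combination
    of the columns of dictionary D (columns assumed to have unit norm)."""
    residual = x.copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit of x on the atoms selected so far
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coeffs[support] = sol
    return coeffs, float(np.linalg.norm(residual))

# Toy usage: feature vectors lying in the span of a few dictionary atoms
# ("normal" behaviour) leave a small residual; a generic vector does not.
rng = np.random.default_rng(3)
D = rng.normal(size=(50, 100))
D /= np.linalg.norm(D, axis=0)
normal = D[:, [5, 17, 42]] @ np.array([1.2, -0.9, 1.0])
anomalous = rng.normal(size=50)
for name, v in [("normal", normal), ("anomalous", anomalous)]:
    _, err = omp(D, v, k=3)
    print(name, "relative residual:", round(err / float(np.linalg.norm(v)), 3))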
595

Signal Acquisition and Tracking for Fixed Wireless Access Multiple Input Multiple Output Orthogonal Frequency Division Multiplexing

Mody, Apurva Narendra 23 November 2004 (has links)
The general objective of this proposed research is to design and develop signal acquisition and tracking algorithms for multiple input multiple output orthogonal frequency division multiplexing (MIMO-OFDM) systems for fixed wireless access applications. The algorithms are specifically targeted for systems that work in time division multiple access and frequency division multiple access frame modes. In our research, we first develop a comprehensive system model for a MIMO-OFDM system under the influence of the radio frequency (RF) oscillator frequency offset, sampling frequency (SF) offset, RF oscillator phase noise, frequency selective channel impairments and finally the additive white Gaussian noise. We then develop the acquisition and tracking algorithms to estimate and track all these parameters. The acquisition and tracking algorithms are assisted by a preamble consisting of one or more training sequences and pilot symbol matrices. Along with the signal acquisition and tracking algorithms, we also consider the design of the MIMO-OFDM preamble and pilot signals that enable the suggested algorithms to work efficiently. Signal acquisition as defined in our research consists of time and RF synchronization, SF offset estimation and correction, phase noise estimation and correction and finally channel estimation. Signal tracking consists of RF, SF, phase noise and channel tracking. Time synchronization, RF oscillator frequency offset, SF oscillator frequency offset, phase noise and channel estimation and tracking are all research topics by themselves. A large number of studies have addressed these issues, but usually individually and for single-input single-output (SISO) OFDM systems. In the proposed research we present a complete suite of signal acquisition and tracking algorithms for MIMO-OFDM systems along with Cramér-Rao bounds for the SISO-OFDM case. In addition, we also derive the maximum likelihood (ML) estimates of the parameters for the SISO-OFDM case. Our proposed research is distinct from the existing literature in that it presents a complete receiver implementation for MIMO-OFDM systems and accounts for the cumulative effects of all possible acquisition and tracking errors on the bit error rate (BER) performance. The suggested algorithms and the pilot/training schemes may be applied to any MIMO-OFDM system and are independent of the space-time coding techniques that are employed.
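As a generic illustration of preamble-aided acquisition (and not the thesis's MIMO-OFDM algorithms), the following Python sketch, assuming only numpy, estimates timing and fractional RF carrier frequency offset from a preamble made of two identical halves, in the spirit of classic correlation-based OFDM synchronization. The preamble length, CFO value, noise level and the absence of a channel are toy assumptions.

import numpy as np

def sync_metric(rx, L):
    """Correlate two candidate half-preambles of length L at every start position d
    (Schmidl-and-Cox-style timing metric and phase term)."""
    M = np.zeros(len(rx) - 2 * L)
    P = np.zeros(len(rx) - 2 * L, dtype=complex)
    for d in range(len(M)):
        a = rx[d:d + L]
        b = rx[d + L:d + 2 * L]
        P[d] = np.vdot(a, b)                       # sum of conj(a) * b
        R = np.sum(np.abs(b) ** 2)
        M[d] = np.abs(P[d]) ** 2 / (R ** 2 + 1e-12)
    return M, P

# Toy link: a preamble of two identical halves, then random data, with CFO and noise
rng = np.random.default_rng(4)
L = 64
eps_true = 0.13                                     # CFO in subcarrier spacings (|eps| < 1)
half = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2)
data = (rng.normal(size=400) + 1j * rng.normal(size=400)) / np.sqrt(2)
tx = np.concatenate([np.zeros(100, dtype=complex), half, half, data])
n = np.arange(len(tx))
rx = tx * np.exp(2j * np.pi * eps_true * n / (2 * L))            # CFO relative to FFT size 2L
rx = rx + 0.05 * (rng.normal(size=len(rx)) + 1j * rng.normal(size=len(rx)))
M, P = sync_metric(rx, L)
d_hat = int(np.argmax(M))                                         # timing estimate (true start: 100)
eps_hat = np.angle(P[d_hat]) / np.pi                              # fractional CFO estimate
print("timing:", d_hat, " CFO estimate:", round(float(eps_hat), 3))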
596

Mechanisms and modeling of white layer formation in orthogonal machining of steels

Han, Sangil 29 March 2006 (has links)
The research objectives of this thesis are as follows: (1) Investigate the effects of carbon content, alloying, and heat treatment of steels on white layer formation, (2) Prove/disprove that the temperature for phase transformation in machining is the same as the nominal phase transformation temperature of the steel, (3) Quantify the contributions of thermal and mechanical effects to white layer generation in machining, (4) Develop a semi-empirical procedure for prediction of white layer formation that accounts for both thermal and mechanical effects. These research objectives are realized through experimental and modeling efforts on steels. Depth and hardness measurements of the white layers formed in steels show the influence of heat treatment and carbon content on white layer formation. Measurements of workpiece surface temperature and X-ray diffraction characterization of the machined surfaces show that phase transformation occurs below the nominal As (austenite start) temperature, suggesting that mechanical effects play an important role in white layer formation. The maximum workpiece surface temperature, the effective stress, and the plastic strain on the workpiece surface are measured and/or calculated and shown to affect the white layer depth and the amount of retained austenite. A semi-empirical procedure is developed by correlating the maximum workpiece temperature and the unit thrust force increase with white layer formation.
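The semi-empirical idea of correlating white layer depth with measured thermal and mechanical quantities can be illustrated with an ordinary least-squares fit; the Python sketch below assumes only numpy and uses entirely made-up data points, so the fitted coefficients have no physical meaning and do not reproduce the thesis's procedure.

import numpy as np

# Hypothetical observations: max surface temperature (deg C), unit thrust force
# increase (N/mm), and measured white layer depth (micrometres). Made-up values.
T_max = np.array([650.0, 700.0, 760.0, 820.0, 880.0, 930.0])
dF_thrust = np.array([12.0, 15.0, 21.0, 26.0, 33.0, 38.0])
depth = np.array([1.8, 2.4, 3.5, 4.6, 6.0, 7.1])

# Linear semi-empirical model: depth = c0 + c1*T_max + c2*dF_thrust
A = np.column_stack([np.ones_like(T_max), T_max, dF_thrust])
coeffs, *_ = np.linalg.lstsq(A, depth, rcond=None)
predicted = A @ coeffs
print("coefficients:", np.round(coeffs, 4))
print("max fit error (um):", round(float(np.max(np.abs(predicted - depth))), 3))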
597

Reduced-Order Modeling of Multiscale Turbulent Convection: Application to Data Center Thermal Management

Rambo, Jeffrey D. 27 March 2006 (has links)
Data centers are computing infrastructure facilities used by industries with large data processing needs, and the rapid increase in power density of high performance computing equipment has caused many thermal issues in these facilities. Systems-level thermal management requires modeling and analysis of complex fluid flow and heat transfer processes across several decades of length scales. Conventional computational fluid dynamics and heat transfer techniques for such systems are severely limited as a design tool because their large model sizes render parameter sensitivity studies and optimization impractically slow. The traditional proper orthogonal decomposition (POD) methodology has been reformulated to construct physics-based models of turbulent flows and forced convection. Orthogonal complement POD subspaces were developed to parametrize inhomogeneous boundary conditions and greatly extend the use of the existing POD methodology beyond prototypical flows with fixed parameters. A flux matching procedure was devised to overcome the limitations of Galerkin projection methods for the Reynolds-averaged Navier-Stokes equations and greatly improve the computational efficiency of the approximate solutions. An implicit coupling procedure was developed to link the temperature and velocity fields and further extend the low-dimensional modeling methodology to conjugate forced convection heat transfer. The overall reduced-order modeling framework was able to reduce numerical models containing 10^5 degrees of freedom (DOF) down to less than 20 DOF, while still retaining greater than 90% accuracy over the domain. Rigorous a posteriori error bounds were formulated by using the POD subspace to partition the error contributions, and dual residual methods were used to show that the flux matching procedure is a computationally superior approach for low-dimensional modeling of steady turbulent convection. To efficiently model large-scale systems, individual reduced-order models were coupled using flow network modeling as the component interconnection procedure. The development of handshaking procedures between low-dimensional component models lays the foundation to quickly analyze and optimize the modular systems encountered in electronics thermal management. This modularized approach can also serve as a skeletal structure to allow the efficient integration of highly specialized models across disciplines and significantly advance simulation-based design.
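As background for readers unfamiliar with POD (and not the flux-matching or component-coupling machinery developed in the thesis), this Python sketch, assuming only numpy, extracts POD modes from a snapshot matrix via the SVD and shows the kind of drastic dimension reduction the abstract describes; the snapshot data here are synthetic and the 90% energy threshold is an illustrative choice.

import numpy as np

def pod_modes(snapshots, energy=0.90):
    """POD via the SVD of a snapshot matrix (n_dof x n_snapshots).
    Returns the modes capturing at least `energy` of the fluctuation energy."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r], mean, r

# Toy usage: a field with 10^5 degrees of freedom driven by 3 hidden patterns
rng = np.random.default_rng(5)
n_dof, n_snap = 100_000, 40
patterns = rng.normal(size=(n_dof, 3))
weights = rng.normal(size=(3, n_snap))
snapshots = patterns @ weights + 0.01 * rng.normal(size=(n_dof, n_snap))
modes, mean, r = pod_modes(snapshots, energy=0.90)
# Project one snapshot onto the low-dimensional basis and reconstruct it
a = modes.T @ (snapshots[:, [0]] - mean)            # reduced coordinates (r values)
recon = mean + modes @ a
rel_err = np.linalg.norm(recon - snapshots[:, [0]]) / np.linalg.norm(snapshots[:, [0]])
print("retained modes:", r, " relative reconstruction error:", round(float(rel_err), 4))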
598

Engineering Residual Stress into the Workpiece through the Design of Machining Process Parameters

Hanna, Carl Robert 13 August 2007 (has links)
The surface integrity of a machined component that meets the demands of a specific application is defined by several characteristics. The residual stress profile into the component is often considered the critical characteristic, as it has a direct effect on the fatigue life of a machined component. A significant amount of effort has been dedicated by researchers to predicting the post-process stress in a workpiece using analytical, experimental, and numerical modeling methods. Nonetheless, no methodology is available that can express the cutting process parameters and tool geometry parameters as functions of the machined residual stress profile, to allow process planning for achieving a desired residual stress profile. This research seeks to fill that void by developing a novel approach to enable the extraction of cutting process and tool geometry parameters from a desired or required residual stress profile. More specifically, the model consists of determining the depth of cut, the tool edge radius and the cutting forces needed to obtain a prescribed residual stress profile for an orthogonal machining operation. The model is based on the inverse solution of a physics-based modeling approach to the orthogonal machining operation and the inverse solution of the residual stress prediction from Hertzian stresses. Experimental and modeling data are used to validate the developed model. The work constitutes a novel approach to engineering residual stress in a machined component.
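To illustrate the inverse-solution idea in the abstract at the simplest level, the Python sketch below inverts a hypothetical, monotonic forward relation that maps tool edge radius to peak compressive residual stress; the forward relation and all numbers are placeholders introduced here, not the physics-based or Hertzian models of the thesis.

def forward_model(edge_radius_um, depth_of_cut_mm=0.1):
    """Hypothetical forward model: peak compressive residual stress (MPa, negative)
    as a smooth, monotone function of tool edge radius. Placeholder only."""
    return -(200.0 + 8.0 * edge_radius_um + 300.0 * depth_of_cut_mm)

def invert_for_edge_radius(target_stress_mpa, lo=5.0, hi=100.0, tol=1e-6):
    """Bisection on the monotone forward model to find the edge radius that
    yields a prescribed peak residual stress."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if forward_model(mid) > target_stress_mpa:   # not compressive enough: increase radius
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Toy usage: which edge radius gives a peak residual stress of -500 MPa?
r = invert_for_edge_radius(-500.0)
print("edge radius (um):", round(r, 2), " check:", round(forward_model(r), 1), "MPa")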
599

New Selection Criteria for Tone Reservation Technique Based on Cross-Entropy Algorithm in OFDM Systems

Chiu, Min-han 24 August 2011 (has links)
This thesis considers the use of the tone reservation (TR) technique in orthogonal frequency division multiplexing (OFDM) systems. Nonlinear distortion is usually introduced by the high power amplifiers (HPAs) used in wireless communication systems. In order to reduce the resulting inter-modulation distortion (IMD) in OFDM systems, in addition to the original peak-to-average power ratio (PAPR) reduction criterion, we propose a signal-to-distortion-plus-noise power ratio (SDNR) criterion and a distortion power plus inverse of signal power (DIS) criterion. Based on these criteria, the cross-entropy (CE) algorithm is introduced to determine the desired values of the peak reduction carriers (PRCs) so as to improve the bit error rate (BER) of nonlinearly distorted signals. Computational complexity is always a major concern for PAPR-reduction techniques. Therefore, real-valued PRCs and the modified transform decomposition (MTD) method are introduced here to dramatically decrease the complexity of the inverse fast Fourier transform (IFFT) operation with only a slight performance loss. The simulation results show that the proposed criteria provide a better BER performance and a lower computational complexity.
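For intuition about how a cross-entropy search can drive tone reservation, here is a Python sketch (assuming only numpy) that minimizes the PAPR of one OFDM symbol by choosing binary signs for the reserved tones. This is a deliberately simplified variant: the thesis optimizes continuous PRC values under additional criteria, and the tone positions, amplitudes and CE parameters below are illustrative assumptions.

import numpy as np

def papr_db(X):
    """PAPR of the time-domain OFDM symbol obtained from frequency-domain vector X."""
    x = np.fft.ifft(X)
    return 10 * np.log10(np.max(np.abs(x)**2) / np.mean(np.abs(x)**2))

def ce_tone_reservation(X_data, reserved, amp=1.0, pop=200, elite=20, iters=30, smooth=0.7):
    """Cross-entropy search over +/-amp values on the reserved tones to minimize PAPR."""
    rng = np.random.default_rng(7)
    p = np.full(len(reserved), 0.5)                             # P(tone takes +amp)
    best_X, best_val = X_data.copy(), papr_db(X_data)
    for _ in range(iters):
        samples = rng.random((pop, len(reserved))) < p          # Bernoulli draws
        scores = np.empty(pop)
        for i, s in enumerate(samples):
            X = X_data.copy()
            X[reserved] = amp * (2.0 * s - 1.0)                 # map {0,1} -> {-amp,+amp}
            scores[i] = papr_db(X)
            if scores[i] < best_val:
                best_val, best_X = scores[i], X
        elite_idx = np.argsort(scores)[:elite]                  # lowest-PAPR samples
        p = smooth * samples[elite_idx].mean(axis=0) + (1 - smooth) * p
    return best_X, best_val

# Toy usage: 64 subcarriers, every 8th tone reserved for peak reduction, QPSK data elsewhere
rng = np.random.default_rng(8)
N = 64
reserved = np.arange(0, N, 8)
data_tones = np.setdiff1d(np.arange(N), reserved)
X = np.zeros(N, dtype=complex)
X[data_tones] = (rng.choice([-1, 1], len(data_tones)) + 1j * rng.choice([-1, 1], len(data_tones))) / np.sqrt(2)
X_tr, papr_after = ce_tone_reservation(X, reserved)
print("PAPR before:", round(papr_db(X), 2), "dB  after:", round(papr_after, 2), "dB")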
600

An Improved ICI Self-Cancellation Scheme for Distributed MISO-OFDM Systems

Li, Pei-Hsun 24 August 2011 (has links)
One of the challenges of distributed cooperative orthogonal frequency division multiplexing systems is that multiple carrier frequency offsets (CFOs) are simultaneously present at the receiver. To the best of our knowledge, even when the CFOs are known at the receiver, a way to perfectly eliminate their effect is still an open problem. This thesis proposes a scheme to mitigate the effect of multiple CFOs by using the concept of intercarrier interference self-cancellation from traditional OFDM systems, a scheme in which the data are simultaneously modulated on symmetric subcarriers between two transmit antennas. Before FFT processing, two values related to the CFOs are used to adjust the time-domain signal, resulting in a better signal-to-interference ratio on the even and odd subcarriers, respectively. After that, the data are combined by applying maximum ratio combining and then decoded. Simulation results are given to demonstrate the effectiveness of the proposed scheme as compared to a previous scheme.
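The ICI self-cancellation idea borrowed from traditional single-antenna OFDM can be demonstrated in a few lines. The Python sketch below (assuming only numpy) implements the classic adjacent-pair mapping (d, -d) with difference combining under a single CFO; this is a deliberate simplification of the distributed two-antenna, symmetric-subcarrier scheme proposed in the thesis, and the subcarrier count and CFO value are toy assumptions.

import numpy as np

def ofdm_with_cfo(X, eps):
    """IFFT, apply a carrier frequency offset of eps subcarrier spacings, FFT back."""
    N = len(X)
    n = np.arange(N)
    return np.fft.fft(np.fft.ifft(X) * np.exp(2j * np.pi * eps * n / N))

def residual_after_gain(d, z):
    """Mean squared error after removing the common complex gain (least squares)."""
    g = np.vdot(d, z) / np.vdot(d, d)
    return float(np.mean(np.abs(z / g - d) ** 2))

rng = np.random.default_rng(9)
N, eps = 64, 0.15                                    # 64 subcarriers, 15% CFO
d = rng.choice([-1, 1], N // 2) + 1j * rng.choice([-1, 1], N // 2)   # QPSK data

# Plain mapping: data on every other subcarrier (same rate as the paired scheme)
X_plain = np.zeros(N, dtype=complex)
X_plain[0::2] = d
Y_plain = ofdm_with_cfo(X_plain, eps)
err_plain = residual_after_gain(d, Y_plain[0::2])

# ICI self-cancellation: each symbol on an adjacent pair as (d, -d); the receiver
# combines the pair as (Y[k] - Y[k+1]) / 2, which largely cancels the ICI leakage.
X_sc = np.zeros(N, dtype=complex)
X_sc[0::2] = d
X_sc[1::2] = -d
Y_sc = ofdm_with_cfo(X_sc, eps)
err_sc = residual_after_gain(d, (Y_sc[0::2] - Y_sc[1::2]) / 2.0)

print("residual ICI, plain mapping:     ", round(err_plain, 4))
print("residual ICI, self-cancellation: ", round(err_sc, 4))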
