381

Applications of MALDI-TOF/MS combined with molecular imaging for breast cancer diagnosis

Chiang, Yi-Yan 26 July 2011
Breast cancer has become the most common cancer among women and the fourth leading cause of female cancer death. In this study, matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF/MS) was combined with multivariate statistics to investigate breast cancer tissues and cell lines. Core needle biopsy and fine needle aspiration (FNA) are techniques widely applied in the diagnosis of breast cancer, and we established an efficient protocol for analyzing breast tissue and FNA samples with MALDI-TOF/MS. With the help of statistical analysis software, we identified lipid-derived ion signals that distinguish breast tumor tissue from non-tumor tissue. This strategy differentiates normal and tumor tissue and has potential for application in clinical diagnosis. The analysis of breast cancer tissue is challenging because of the complexity of the tissue sample. Direct tissue analysis by matrix-assisted laser desorption/ionization imaging mass spectrometry (MALDI-IMS) allows us to investigate molecular species and their distribution while maintaining the integrity of the tissue and avoiding the signal losses introduced by extraction steps. By combining MALDI-IMS with statistical software, tissues can be analyzed and classified based on their molecular content, which helps distinguish tumor regions from non-tumor regions of breast cancer tissue. Our results show differences in the distribution and content of lipids between tumor and non-tumor tissue, which can supplement current pathological analysis of tumor margins. MALDI-TOF/MS combined with multivariate statistics was also used to rapidly differentiate breast cancer cell lines with different estrogen receptor (ER) and human epidermal growth factor receptor 2 (HER2) status. A protocol for efficiently detecting peptides and proteins in breast cancer cells with MALDI-TOF/MS was established, and two multivariate methods, principal component analysis (PCA) and hierarchical clustering analysis, were used to process the MALDI mass spectra obtained from six breast cancer cell lines and one normal breast cell line. Based on differences in their peptide and protein profiles, breast cancer cell lines with the same ER and HER2 status grouped in nearby regions of the PCA score plot. Hierarchical cluster analysis likewise revealed high conformity between the cell lines' protein profiles and their respective hormone receptor types.
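A rough Python sketch of the multivariate workflow this abstract describes, with PCA followed by hierarchical clustering of MALDI spectra; the peak-intensity matrix and cell-line labels below are hypothetical placeholders, not the thesis data.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(0)
# Rows: spectra (3 replicates of 7 hypothetical cell lines); columns: binned m/z peak intensities.
spectra = rng.random((21, 500))
labels = [f"line{i // 3}" for i in range(21)]

# Project spectra onto the first two principal components (the "score plot").
scores = PCA(n_components=2).fit_transform(spectra)

# Agglomerative clustering on the PCA scores; Ward linkage is one common choice.
tree = linkage(scores, method="ward")
dendrogram(tree, labels=labels, no_plot=True)  # set no_plot=False to draw the dendrogram
for name, (pc1, pc2) in zip(labels, scores):
    print(f"{name}: PC1={pc1:+.2f}, PC2={pc2:+.2f}")
```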
382

Seasonal Variation of Ambient Volatile Organic Compounds and Sulfur-containing Odors Correlated to the Emission Sources of Petrochemical Complexes

Liu, Chih-chung 21 August 2012
Northern Kaohsiung neighbors a dense concentration of petroleum and petrochemical industrial complexes, including the China Petroleum Company (CPC) refinery plant and the Renwu and Dazher petrochemical industrial parks. Although many scholars have conducted regional studies in recent years, those studies are still limited by a lack of supporting evidence (such as identification of odorous compounds and a VOC fingerprint database) and thus cannot clearly identify the causes of poor ambient air quality. By sampling and analyzing VOCs, we can identify the major sources of VOCs in northern Kaohsiung and their contributions, and provide air quality management and control countermeasures for the local environmental protection administration. In this study, we simultaneously sampled and analyzed the speciation of VOCs and sulfur-containing odorous matters (SOMs) in the CPC refinery plant and the Renwu and Dazher petrochemical complexes, together with stack sampling. Sampling of VOCs and SOMs was conducted on January 7th, 14th, and 19th, 2011 (dry season) and May 6th, 13th, and 23rd, 2011 (wet season). We established an emission source database, investigated the characteristics of VOC fingerprints, and estimated the emission factor of each stack. This helps us understand the temporal and spatial distribution of VOCs and ascertain the major VOC sources and their contributions. The major VOCs emitted from the stacks of the CPC refinery plant were toluene and acetone, showing that petroleum refinery processes have similar VOC characteristics and fingerprints. The fingerprints of stack emissions at the Renwu and Dazher industrial complexes varied with their processes. Hydrogen sulfide was the major sulfur-containing odorous matter in all petrochemical plants. Compared with the other complexes, the Renwu industrial complex emitted a wider variety of SOM species as well as relatively high concentrations of sulfur-containing odorous matters. Analysis of ambient VOCs around the petrochemical industrial complexes showed that the major species were the alkanes isobutane, butane, isopentane, pentane, and propane; the alkene propene; the aromatics toluene, ethylbenzene, xylene, and styrene; and the carbonyls 2-butanone (MEK) and acetone. In addition, ethene + acetylene + ethane (C2), 1,2-dichloroethane, chloromethane, dichloromethane, and MTBE were occasionally found. SOM analysis showed that the major odorous matters included hydrogen sulfide, methanethiol, dimethyl sulfide, and carbon disulfide; the highest hydrogen sulfide concentration reached 5.5 ppbv. In this study, the VOC species were divided into alkanes, alkenes, aromatics, carbonyls, and others. The temporal and spatial distributions of the various types of VOCs correlated strongly with near-surface wind direction, with alkanes, aromatics, and carbonyls showing the most obvious dispersion downwind. In general, the ambient air surrounding the petrochemical industrial complexes was influenced by various pollutants under high wind speeds, showing that stack emissions and fugitive sources contribute substantially to ambient air quality. Total SOMs (TSOMs) and hydrogen sulfide were emitted mainly from local sources, resulting in high concentrations of both around the petrochemical industrial complexes.
Principal component analysis (PCA) showed that the areas surrounding the petrochemical industrial complexes, in both dry and wet seasons, were influenced mainly by process emissions and solvent evaporation, with traffic emission sources ranking second. Chemical mass balance receptor modeling showed that stack emissions from the CPC refinery plant contributed about 48 %, while fugitive emission sources and mobile sources contributed about 30 % and 11 %, respectively. Stack emissions from the Renwu industrial complex contributed about 75 %, with fugitive and mobile sources contributing about 17 % and 5 %, respectively; stack emissions from the Dazher industrial complex contributed about 68 %, with fugitive and mobile sources contributing about 21 % and 2 %, respectively.
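As a hedged illustration of the receptor-modeling step described above, the following sketch poses chemical mass balance as a non-negative least-squares problem; the source fingerprints, species list, and ambient concentrations are invented for demonstration.

```python
import numpy as np
from scipy.optimize import nnls

species = ["toluene", "acetone", "isopentane", "propene", "H2S"]
# Columns: normalized source fingerprints (stack, fugitive, mobile); rows: species.
profiles = np.array([
    [0.40, 0.20, 0.30],
    [0.30, 0.10, 0.05],
    [0.10, 0.40, 0.35],
    [0.10, 0.10, 0.25],
    [0.10, 0.20, 0.05],
])
ambient = np.array([0.35, 0.22, 0.22, 0.13, 0.11])  # measured receptor concentrations

contrib, residual = nnls(profiles, ambient)  # non-negative least-squares source contributions
share = 100 * contrib / contrib.sum()
for name, pct in zip(["stack", "fugitive", "mobile"], share):
    print(f"{name}: {pct:.0f} %")
```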
383

Recognition Of Human Face Expressions

Ener, Emrah 01 September 2006
In this study a fully automatic, scale-invariant feature extractor that requires neither manual initialization nor special equipment is proposed. Face location and size are extracted using skin segmentation and ellipse fitting. The extracted face region is scaled to a predefined size, and upper and lower facial templates are then used for feature extraction. Template localization and template parameter calculation are carried out using Principal Component Analysis. Changes in facial feature coordinates between the analyzed image and a neutral-expression image are used for expression classification, and the performances of different classifiers are evaluated. The performance of the proposed feature extractor is also tested on sample video sequences: facial features are extracted in the first frame, a KLT tracker is used for tracking them, and lost features are detected using face geometry rules and relocated using the feature extractor. As an alternative to the feature-based technique, an available holistic method that analyzes the face without partitioning is implemented. Face images are filtered using Gabor filters tuned to different scales and orientations, and the filtered images are combined to form Gabor jets. The dimensionality of the Gabor jets is reduced using Principal Component Analysis, and the performances of different classifiers on the low-dimensional Gabor jets are compared. Feature-based and holistic classifier performances are compared using the JAFFE and AF facial expression databases.
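A minimal sketch of the holistic branch described above, assuming a hand-rolled Gabor kernel rather than whichever filter implementation the thesis used: filter responses are stacked into Gabor jets and reduced with PCA. The face images here are random placeholders for normalized face crops.

```python
import numpy as np
from scipy.signal import convolve2d
from sklearn.decomposition import PCA

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    """Real part of a Gabor filter: Gaussian envelope times an oriented cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

rng = np.random.default_rng(1)
faces = rng.random((10, 32, 32))  # 10 hypothetical normalized face images

jets = []
for face in faces:
    responses = [
        convolve2d(face, gabor_kernel(f, t), mode="same").ravel()
        for f in (0.1, 0.2)                                 # 2 scales
        for t in np.linspace(0, np.pi, 4, endpoint=False)   # 4 orientations
    ]
    jets.append(np.concatenate(responses))    # one high-dimensional Gabor jet per face

low_dim = PCA(n_components=5).fit_transform(np.array(jets))
print(low_dim.shape)  # (10, 5): one 5-D descriptor per face, ready for a classifier
```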
384

Characterization Of Taxonomically Related Some Turkish Oak (Quercus L.) Species In An Isolated Stand: A Morphometric Analysis Approach

Aktas, Caner 01 June 2010
The genus Quercus L. is represented by more than 400 species in the world, 18 of which occur naturally in Turkey. Despite its taxonomical, phytogeographical and dendrological importance, the genus Quercus is still one of the most taxonomically problematic woody genera in the Turkish flora. In this study, a multivariate morphometric approach was used to analyze oak specimens collected from an isolated forest (Beynam Forest, Ankara) where Quercus pubescens Willd., Q. infectoria Olivier subsp. boissieri (Reuter) O. Schwarz and Q. macranthera Fisch. & C. A. Mey. ex Hohen. subsp. syspirensis (C. Koch) Menitsky, taxa belonging to section Quercus sensu stricto (s.s.), are found. Additional oak specimens were included in the analysis for comparison. The morphometric study was based on 52 leaf characters, including distances, angles and areas as well as counted, descriptive and calculated variables; the morphometric variables were computed automatically from landmark and outline data. The random forest classification method was used to select discriminating variables and to predict unidentified specimens from a pre-identified training group. The results of the random forest variable selection procedure and of principal component analysis (PCA) showed that the morphometric variables could distinguish specimens of Q. pubescens and Q. macranthera subsp. syspirensis mostly on the basis of overall leaf size and the number of intercalary veins, while specimens of Q. infectoria subsp. boissieri were separated from the others by lobe and lamina base shape. Finally, micromorphological observations of the abaxial lamina surface were performed by scanning electron microscopy (SEM) on selected specimens and were found useful for differentiating, in particular, specimens of Q. macranthera subsp. syspirensis and its putative hybrids from the other taxa.
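A brief sketch of the morphometric classification workflow, under the assumption of synthetic data: a random forest ranks leaf characters by importance, and PCA ordinates the specimens on the selected characters. The 52-character matrix and three-taxon labels below are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X = rng.random((60, 52))            # 60 specimens x 52 leaf characters (synthetic)
y = rng.integers(0, 3, size=60)     # 3 hypothetical oak taxa as training labels

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(forest.feature_importances_)[::-1][:5]   # most discriminating characters
print("selected character indices:", top)

# PCA on the selected characters for an ordination (score) plot of the specimens.
scores = PCA(n_components=2).fit_transform(X[:, top])
print(scores[:3])
```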
385

Analysis And Classification Of Spelling Paradigm EEG Data And An Attempt For Optimization Of Channels Used

Yildirim, Asil 01 December 2010
Brain Computer Interfaces (BCIs) are systems developed to control devices using only brain signals. In BCI systems, different mental activities performed by the user are associated with different actions on the device being controlled. The Spelling Paradigm is a BCI application that constructs words by detecting letters from P300 signals recorded via electrodes attached to various points on the scalp. Reducing letter detection error rates and increasing letter detection speed are crucial for the Spelling Paradigm, since they allow disabled people to express their needs more easily with this application. In this thesis, two different methods, Support Vector Machines (SVM) and AdaBoost, are used for classification, with Classification and Regression Trees as the weak classifier of AdaBoost. Time-frequency domain characteristics of P300 evoked potentials are analyzed in addition to time domain characteristics, using the Wigner-Ville Distribution to transform time domain signals into the time-frequency domain; classification results are observed to be better in the time domain. Furthermore, the optimum subset of channels that models P300 signals with minimum error rate is sought, and a method that uses both SVM and AdaBoost is proposed for channel selection; 12 channels are selected in the time domain with this method. The effect of dimension reduction is also analyzed using Principal Component Analysis (PCA) and AdaBoost.
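A small sketch of the classifier setup the abstract names, AdaBoost with a CART weak learner alongside an SVM, on stand-in EEG epochs; real input would be P300 windows from the selected channels, and the parameter name `estimator` assumes scikit-learn >= 1.2 (`base_estimator` in older releases).

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 12 * 64))  # 200 epochs: 12 channels x 64 time samples, flattened
y = rng.integers(0, 2, size=200)         # target vs. non-target flash (random placeholder)

# Shallow CART as the weak learner, as in the thesis; depth 2 is an illustrative choice.
ada = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=2), n_estimators=50)
svm = SVC(kernel="linear")
for name, clf in [("AdaBoost", ada), ("SVM", svm)]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```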
386

Investigation of probabilistic principal component analysis compared to proper orthogonal decomposition methods for basis extraction and missing data estimation

Lee, Kyunghoon 21 May 2010
The identification of flow characteristics and the reduction of high-dimensional simulation data have capitalized on an orthogonal basis achieved by proper orthogonal decomposition (POD), also known as principal component analysis (PCA) or the Karhunen-Loeve transform (KLT). In the realm of aerospace engineering, an orthogonal basis is versatile for diverse applications, especially associated with reduced-order modeling (ROM) as follows: a low-dimensional turbulence model, an unsteady aerodynamic model for aeroelasticity and flow control, and a steady aerodynamic model for airfoil shape design. Provided that a given data set lacks parts of its data, POD is required to adopt a least-squares formulation, leading to gappy POD, using a gappy norm that is a variant of an L2 norm dealing with only known data. Although gappy POD is originally devised to restore marred images, its application has spread to aerospace engineering for the following reason: various engineering problems can be reformulated in forms of missing data estimation to exploit gappy POD. Similar to POD, gappy POD has a broad range of applications such as optimal flow sensor placement, experimental and numerical flow data assimilation, and impaired particle image velocimetry (PIV) data restoration. Apart from POD and gappy POD, both of which are deterministic formulations, probabilistic principal component analysis (PPCA), a probabilistic generalization of PCA, has been used in the pattern recognition field for speech recognition and in the oceanography area for empirical orthogonal functions in the presence of missing data. In formulation, PPCA presumes a linear latent variable model relating an observed variable with a latent variable that is inferred only from an observed variable through a linear mapping called factor-loading. To evaluate the maximum likelihood estimates (MLEs) of PPCA parameters such as a factor-loading, PPCA can invoke an expectation-maximization (EM) algorithm, yielding an EM algorithm for PPCA (EM-PCA). By virtue of the EM algorithm, the EM-PCA is capable of not only extracting a basis but also restoring missing data through iterations whether the given data are intact or not. Therefore, the EM-PCA can potentially substitute for both POD and gappy POD inasmuch as its accuracy and efficiency are comparable to those of POD and gappy POD. In order to examine the benefits of the EM-PCA for aerospace engineering applications, this thesis attempts to qualitatively and quantitatively scrutinize the EM-PCA alongside both POD and gappy POD using high-dimensional simulation data. In pursuing qualitative investigations, the theoretical relationship between POD and PPCA is transparent such that the factor-loading MLE of PPCA, evaluated by the EM-PCA, pertains to an orthogonal basis obtained by POD. By contrast, the analytical connection between gappy POD and the EM-PCA is nebulous because they distinctively approximate missing data due to their antithetical formulation perspectives: gappy POD solves a least-squares problem whereas the EM-PCA relies on the expectation of the observation probability model. To juxtapose both gappy POD and the EM-PCA, this research proposes a unifying least-squares perspective that embraces the two disparate algorithms within a generalized least-squares framework. As a result, the unifying perspective reveals that both methods address similar least-squares problems; however, their formulations contain dissimilar bases and norms. 
Furthermore, this research delves into the ramifications of the different bases and norms that ultimately characterize the traits of both methods. To this end, two hybrid algorithms of gappy POD and the EM-PCA are devised and compared to the original algorithms for a qualitative illustration of the effects of the different bases and norms. Ultimately, a norm reflecting a curve-fitting method is found to affect estimation error reduction more significantly than a basis for two example test data sets: one missing data only at a single snapshot and the other missing data across all snapshots. From a numerical performance standpoint, the EM-PCA is computationally less efficient than POD for intact data since it suffers from the slow convergence inherited from the EM algorithm. For incomplete data, this thesis finds quantitatively that the number of data-missing snapshots predetermines whether the EM-PCA or gappy POD outperforms the other, because of the computational cost of the coefficient evaluation that results from the norm selection. For instance, gappy POD demands computational effort in proportion to the number of data-missing snapshots as a consequence of the gappy norm, whereas the computational cost of the EM-PCA is invariant to the number of data-missing snapshots thanks to the L2 norm. In general, the higher the number of data-missing snapshots, the wider the gap between the computational costs of gappy POD and the EM-PCA. Based on the numerical experiments reported in this thesis, the following criterion is recommended for selecting between gappy POD and the EM-PCA on grounds of computational efficiency: gappy POD for an incomplete data set containing a few data-missing snapshots, and the EM-PCA for an incomplete data set involving many data-missing snapshots. Last, the EM-PCA is applied to two aerospace applications in comparison to gappy POD as a proof of concept: one emphasizing basis extraction and the other focusing on missing data reconstruction for an incomplete data set with scattered missing data. The first application exploits the EM-PCA to efficiently construct reduced-order models of engine deck responses obtained by the numerical propulsion system simulation (NPSS), some of whose results are absent due to failed analyses caused by numerical instability. Model-prediction tests validate that engine performance metrics estimated by the reduced-order NPSS model agree closely with those obtained directly from NPSS. Similarly, the second application illustrates that the EM-PCA is significantly more cost-effective than gappy POD at repairing spurious PIV measurements obtained from acoustically excited, bluff-body jet flow experiments: the EM-PCA reduces computational cost by factors of 8 to 19 compared with gappy POD while producing the same restoration results. All in all, through comprehensive theoretical and numerical investigation, this research establishes that the EM-PCA is an efficient alternative to gappy POD for incomplete data sets whose missing data are spread across the entire set.
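A numpy-only sketch of the EM-PCA idea compared above, alternating between a low-rank (principal-subspace) reconstruction and re-imputation of the missing entries; this illustrates the concept on synthetic low-rank snapshots and is not the thesis implementation.

```python
import numpy as np

rng = np.random.default_rng(4)
data = rng.standard_normal((100, 20)) @ rng.standard_normal((20, 300))  # rank-20 snapshot matrix
mask = rng.random(data.shape) < 0.05          # 5% of entries missing
incomplete = np.where(mask, np.nan, data)

# Initialize missing entries with column means, then iterate: fit subspace, re-impute.
filled = np.where(mask, np.nanmean(incomplete, axis=0), incomplete)
rank = 20
for _ in range(50):
    mean = filled.mean(axis=0)
    U, s, Vt = np.linalg.svd(filled - mean, full_matrices=False)
    recon = mean + (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank-r reconstruction (M-step)
    new = np.where(mask, recon, filled)                   # re-impute only missing entries (E-step)
    if np.abs(new - filled).max() < 1e-8:                 # stop when imputed values settle
        break
    filled = new

print("max error on missing entries:", np.abs(filled[mask] - data[mask]).max())
```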
387

Non-local active contours

Appia, Vikram VijayanBabu 17 May 2012
This thesis deals with image segmentation problems that arise in various computer vision related fields such as medical imaging, satellite imaging, video surveillance, recognition and robotic vision. More specifically, it deals with a special class of image segmentation techniques called Snakes or Active Contour Models. In active contour models, image segmentation is posed as an energy minimization problem: an objective energy function (based on certain image-related features) is defined on the segmenting curve (contour), and typically a gradient descent energy minimization approach is used to drive the initial contour toward a minimum of the defined energy. The drawback of this approach is that the contour tends to get stuck at undesired local minima caused by subtle, spurious image features and edges, making active contour based curve evolution very sensitive to initialization and noise. The central theme of this thesis is to develop techniques that make active contour models robust against certain classes of local minima by incorporating global information into the energy minimization. These techniques lead to energy minimization with global considerations; we call these models 'non-local active contours'. We consider three widely used active contour models: 1) an edge- and region-based segmentation model, 2) a prior-shape-knowledge-based segmentation model, and 3) a motion segmentation model. We analyze the traditional techniques used for these models, establish the need for robust models that avoid local minima, and address the local minima problem for each model by adding global image considerations.
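For contrast with the non-local models proposed here, the following sketch runs a classical local snake, the kind whose sensitivity to initialization motivates the thesis, using scikit-image's `active_contour` on a synthetic blob.

```python
import numpy as np
from skimage.draw import disk
from skimage.filters import gaussian
from skimage.segmentation import active_contour

# Synthetic image: a bright disk with smoothed edges so the snake has a usable gradient.
image = np.zeros((100, 100))
rr, cc = disk((50, 50), 20)
image[rr, cc] = 1.0
image = gaussian(image, sigma=2)

# Initialize the contour as a circle around the object; a poor initialization
# would let this local model settle in an undesired local minimum.
theta = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([50 + 35 * np.sin(theta), 50 + 35 * np.cos(theta)])  # (row, col) points

snake = active_contour(image, init, alpha=0.015, beta=10, gamma=0.001)
print(snake.shape)  # (200, 2): the evolved contour points
```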
388

Power System Data Compression For Archiving

Das, Sarasij 11 1900
Advances in electronics, computer and information technology are fueling major changes in power system instrumentation. More and more microprocessor-based digital instruments are replacing older types of meters, and their extensive deployment is generating vast quantities of data, creating information pressure in utilities. Legacy SCADA-based data management systems do not support the management of such huge volumes of data, so utilities either have to delete the metered information or store it on compact discs or tape drives, which are unreliable. At the same time, the traditionally integrated power industry is going through deregulation. Market principles are forcing competition between power utilities, which in turn demands a stronger focus on profit and competitive edge. To optimize system operation and planning, utilities need better decision-making processes, which depend on the availability of reliable system information. It is becoming clear to utilities that information is a vital asset, and they are now keen to store and use as much of it as they can; existing SCADA-based systems, however, do not allow more than a few months of data to be stored. This dissertation therefore assesses the effectiveness of compression algorithms in compressing real-time operational data. Both lossy and lossless compression schemes are considered. For the lossless case, two schemes are proposed: Scheme 1 is based on arithmetic coding and Scheme 2 on run-length coding. Both schemes have two stages, the first of which is common to both: consecutive data elements are decorrelated using linear predictors. The output of the linear predictor, called the residual sequence, is coded by arithmetic coding in Scheme 1 and by run-length coding in Scheme 2. Three types of arithmetic coding are considered: static, decrement and adaptive. Static and decrement coding are two-pass methods, in which the first pass collects symbol statistics and the second codes the symbols; adaptive coding uses only one pass. With the arithmetic coding based scheme, the average compression ratio achieved is around 30 for voltage data, 9 for frequency data, 14 for VAr generation data, 11 for MW generation data and 14 for line flow data. In Scheme 2, Golomb-Rice coding is used to compress the run lengths; the average compression ratio achieved is around 25 for voltage data, 7 for frequency data, 10 for VAr generation data, 8 for MW generation data and 9 for line flow data. The arithmetic coding based method aims mainly at a high compression ratio, whereas the Golomb-Rice based method does not compress as well but is computationally much simpler. For the lossy case, a principal component analysis (PCA) based compression method is used: from the data set, a few uncorrelated variables are derived and stored. The compression ratio of the PCA-based scheme is around 105-115 for voltage data, 55-58 for VAr generation data, 21-23 for MW generation data and 27-29 for line flow data, showing that the voltage parameter is amenable to better compression than the other parameters.
Data for five system parameters - voltage, line flow, frequency, MW generation and MVAr generation - of the Southern regional grid of India were considered for the study. One aim of this thesis is to argue that collected power system data can be put to other uses as well; in particular, we show that even mining the small amount of practical data collected from SRLDC reveals some interesting system behavior patterns. A noteworthy feature of the thesis is that all the studies were carried out on data from practical systems. It is believed that the thesis opens up new questions for further investigation.
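A compact sketch of Scheme 2's building blocks as described above: an order-1 linear predictor decorrelates consecutive samples, signed residuals are zigzag-mapped to non-negative integers, and the results are Golomb-Rice coded. The sample values and the parameter k are illustrative, not the dissertation's settings.

```python
def golomb_rice_encode(value, k):
    """Encode a non-negative integer: unary-coded quotient + k-bit binary remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def zigzag(n):
    """Map signed residuals to non-negative integers: 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4."""
    return (n << 1) if n >= 0 else ((-n << 1) - 1)

samples = [230, 231, 231, 232, 230, 229, 229, 230]  # e.g., scaled bus-voltage readings
# Order-1 linear predictor: each sample is predicted by its predecessor, leaving deltas.
residuals = [b - a for a, b in zip(samples, samples[1:])]

bitstream = "".join(golomb_rice_encode(zigzag(r), k=2) for r in residuals)
print(residuals, "->", bitstream, f"({len(bitstream)} bits vs {8 * len(residuals)} raw)")
```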
389

A method for reducing dimensionality in large design problems with computationally expensive analyses

Berguin, Steven Henri 08 June 2015
Strides in modern computational fluid dynamics and leaps in high-power computing have led to unprecedented capabilities for handling large aerodynamic problems. In particular, the emergence of adjoint design methods has been a breakthrough in the field of aerodynamic shape optimization: it enables expensive, high-dimensional optimization problems to be tackled efficiently using gradient-based methods in CFD, a task that was previously inconceivable. However, adjoint design methods are intended for gradient-based optimization; the curse of dimensionality is still very much alive when it comes to design space exploration, where gradient-free methods cannot be avoided. This research describes a novel approach for reducing dimensionality in large, computationally expensive design problems to a point where gradient-free methods become possible. This is done through an innovative application of Principal Component Analysis (PCA), applied here to the gradient distribution of the objective function, something that had not been done before. The analysis yields a linear transformation that maps a high-dimensional problem onto an equivalent low-dimensional subspace. None of the original variables are discarded; they are simply combined linearly into a new, smaller set of variables. The method is tested on a range of analytical functions, a two-dimensional staggered airfoil test problem and a three-dimensional Over-Wing Nacelle (OWN) integration problem. In all cases, the method performed as expected and was found to be cost effective, requiring only a relatively small number of samples to achieve a large dimensionality reduction.
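A short sketch of the gradient-PCA idea, assuming a toy quadratic objective whose variation is confined to one hidden direction: gradients sampled across the design space are decomposed by SVD, and the leading right singular vectors define the reduced subspace.

```python
import numpy as np

rng = np.random.default_rng(5)
dim = 50
# Toy objective f(x) = (w . x)^2 varies only along one hidden direction w.
w = rng.standard_normal(dim)
grad = lambda x: 2 * (w @ x) * w          # analytic gradient, standing in for an adjoint solve

# Sample gradients across the design space and run PCA (SVD of the centered matrix).
G = np.array([grad(rng.standard_normal(dim)) for _ in range(200)])
G -= G.mean(axis=0)
_, s, Vt = np.linalg.svd(G, full_matrices=False)

explained = s**2 / (s**2).sum()
print("variance captured by 1st direction:", round(explained[0], 4))  # ~1.0 here

# Vt[:k] is the linear map onto the k-dimensional subspace; no variable is discarded,
# the originals are just linearly combined into fewer new variables.
reduced = rng.standard_normal(dim) @ Vt[:1].T
```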
390

A Fusion Model For Enhancement of Range Images

Hua, Xiaoben, Yang, Yuxia January 2012
In this thesis, we present a new way to enhance "depth map" images, which we call the fusion of depth images. The goal is to enhance depth images through a fusion of different classification methods. To that end, we use three similar but distinct methodologies, the Graph-Cut, Super-Pixel and Principal Component Analysis algorithms, to compute the enhancement and produce our result. We then compare the enhanced result with the original depth images; the comparison indicates the effectiveness of our methodology.
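As a hedged illustration of the PCA ingredient of this fusion, the sketch below combines several noisy depth maps of the same synthetic scene through a rank-1 principal-subspace reconstruction; the thesis additionally uses graph-cut and super-pixel steps not shown here.

```python
import numpy as np

rng = np.random.default_rng(6)
h, w = 48, 64
truth = np.tile(np.linspace(1.0, 3.0, w), (h, 1))  # a sloping depth plane as ground truth
maps = np.stack([truth + 0.05 * rng.standard_normal((h, w)) for _ in range(5)])

X = maps.reshape(5, -1)                    # one flattened depth map per row
mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
# Keep only the first principal component, then average the reconstructions.
fused = (mean + (U[:, :1] * s[:1]) @ Vt[:1]).mean(axis=0).reshape(h, w)

print("noise RMS per map :", np.sqrt(((maps[0] - truth) ** 2).mean()).round(4))
print("RMS after fusion  :", np.sqrt(((fused - truth) ** 2).mean()).round(4))
```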
