741

Saccharomyces cerevisiae: A Platform for Structure-activity Relationship Analysis and High-throughput Candidate Prioritization

Song, Kyung Tae Kevin 17 July 2013 (has links)
The budding yeast Saccharomyces cerevisiae has been an invaluable model organism in shaping the current understanding of cellular biology, owing mainly to its highly tractable genetics and the completion of its genome sequence in 1996. These advances bolstered the development of novel methods that have provided great insights into genetic and protein networks in human cells. With its large collection of datasets, S. cerevisiae has also become an ideal platform for investigating the mechanism of action of novel compounds. The first part of my thesis uses a validated chemogenomic assay to investigate the mechanism of action of structurally related novel DNA-damaging agents, delineating valuable structure-activity relationships in the process. The second part describes the development of a method that uses drug-induced wild-type growth dynamics to characterize novel compounds, which, in combination with the chemogenomic assay, may complement existing high-throughput screening experiments to improve the current drug development process.
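As a rough illustration of scoring compounds from growth dynamics, the sketch below compares a treated and an untreated wild-type growth curve by area under the curve; the logistic curves, time grid, and the AUC-based inhibition score are illustrative assumptions, not the assay used in the thesis.

```python
import numpy as np

def logistic_growth(t, k=1.0, r=0.8, n0=0.05):
    """Logistic growth curve: carrying capacity k, growth rate r, inoculum n0."""
    return k / (1 + ((k - n0) / n0) * np.exp(-r * t))

def auc(od, t):
    """Area under an optical-density growth curve via the trapezoid rule."""
    return np.trapz(od, t)

t = np.linspace(0, 24, 97)            # 24 h sampled every 15 min (assumed)
untreated = logistic_growth(t)
treated = logistic_growth(t, r=0.45)  # slower growth under the compound (assumed)

# A simple drug-response score: fractional loss of growth AUC versus untreated.
score = 1 - auc(treated, t) / auc(untreated, t)
print(f"growth inhibition score: {score:.2f}")
```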
742

Real-time Detection and Tracking of Human Eyes in Video Sequences

Savas, Zafer 01 September 2005 (has links) (PDF)
Robust, non-intrusive human eye detection has been a fundamental and challenging problem in computer vision. Not only is it a problem in its own right; it can also ease the task of locating other facial features for recognition and human-computer interaction purposes. Many previous works can determine the locations of the human eyes, but the main task in this thesis is not only a vision system with eye detection capability; our aim is to design a real-time, robust, scale-invariant eye tracker that indicates human eye movement using the motion of the eye pupil. Our eye tracker is implemented using the Continuously Adaptive Mean Shift (CAMSHIFT) algorithm proposed by Bradski and the Eigenface method proposed by Turk & Pentland. Previous works on scale-invariant object detection with the Eigenface method mostly depend on a limited number of user-predefined scales, which causes speed problems; to avoid this, an adaptive Eigenface method using information extracted from the CAMSHIFT algorithm is implemented for fast, scale-invariant eye tracking. First, the human face in the input image captured by the camera is detected using the CAMSHIFT algorithm, which tracks the outline of an irregularly shaped object that may change size and shape during tracking, based on the object's color. The face area is passed through a number of preprocessing steps, such as color space conversion and thresholding, to obtain better results during the eye search. After these steps, search areas for the left and right eyes are determined from the geometric properties of the human face, and to locate each eye individually the training images are resized using the width information supplied by the CAMSHIFT algorithm. The search regions for the left and right eyes are passed individually to the eye detection algorithm to determine the exact location of each eye. After detection, each eye area is passed to pupil detection and eye area detection algorithms based on the Active Contours method to delineate the pupil and eye area. Finally, by comparing the geometric location of the pupil with the eye area, human gaze information is extracted. As a result of this thesis, a software package named "TrackEye" has been built, with a user interface featuring indicators for the locations of eye areas and pupils, various output screens for human-computer interaction, and controls for testing the effects of color space conversions and thresholding types during object tracking.
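A minimal sketch of the CAMSHIFT face-tracking stage described above, using OpenCV's built-in implementation; the camera source, the hand-picked initial face window, and the HSV histogram settings are assumptions for illustration, and the eigenface eye detection and active-contour steps are not shown.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                      # assumed camera source
ok, frame = cap.read()
x, y, w, h = 300, 200, 100, 120                # assumed initial face window

# Hue histogram of the initial region, used for back-projection.
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, (0, 60, 32), (180, 255, 255))
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
track_window = (x, y, w, h)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    prob = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # CAMSHIFT adapts the window size and orientation as the face moves.
    rot_box, track_window = cv2.CamShift(prob, track_window, term_crit)
    pts = cv2.boxPoints(rot_box).astype(np.int32)
    cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
    cv2.imshow("track", frame)
    if cv2.waitKey(30) == 27:                  # Esc to quit
        break
```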
743

A Study on Applying Data Envelopment Analysis to Reduce the Temperature Rise of Power Converters

Liao, Ho Unknown Date (has links)
From a performance standpoint, quality and cost are conceptually aligned: performance management should use quality and cost as the criteria for judging whether its goals are met. This study takes a performance perspective on the quality-versus-cost dilemma faced by the company. As electronic products grow more multifunctional, heat problems follow, and the steadily rising heat density has made thermal design an increasingly important requirement. The study examines a power converter that has completed its design and passed US UL safety certification but exhibits a large temperature rise with large variation; reducing both is an urgent problem. The goal is an optimal combination of external components that is robust against uncontrollable factors, keeps product variation small, and minimizes the temperature rise and losses of each component. Experiments were planned and conducted, and data collected, using Taguchi methods and design of experiments. The weighted (multi-response) S/N ratio method was applied, with the weights determined by (1) a control chart method and (2) the CCR assurance region model (CCR-AR) of data envelopment analysis (DEA), to select the controllable factors and their levels. The Mahalanobis-Taguchi System (MTS) was used on the matrix-experiment data to screen out the more important characteristic variables, whose data were then analyzed with (1) a back-propagation neural network combined with DEA and (2) DEA combined with principal component analysis, yielding the optimal factor combination of enclosure drilling pattern and silicone pad size. Confirmation experiments after the improvement show that, although the mean temperature rise decreased only slightly, the standard deviation of the temperature rise at most measurement points shrank significantly; the study therefore achieved a significant reduction in the variation of the converter's temperature rise.
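To make the weighted S/N ratio step concrete, here is a small sketch on assumed data: smaller-the-better temperature-rise responses for four experimental runs are converted to Taguchi S/N ratios and combined with assumed weights (in the study the weights came from control charts or the DEA CCR-AR model).

```python
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi S/N ratio for a smaller-the-better response such as temperature
    rise: SN = -10 * log10(mean(y^2)), in dB (higher = more robust)."""
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean(y ** 2))

# Assumed data: temperature rises (deg C) at 3 measurement points, 4 runs.
runs = np.array([
    [42.1, 38.5, 45.0],
    [40.3, 37.9, 43.2],
    [44.8, 41.0, 47.5],
    [39.6, 36.4, 42.1],
])

weights = np.array([0.5, 0.3, 0.2])  # assumed; from control charts or DEA CCR-AR

# One S/N ratio per response (single observation each here), then a weighted
# multi-response S/N value per run.
sn_per_response = np.array([[sn_smaller_the_better([y]) for y in run] for run in runs])
weighted_sn = sn_per_response @ weights
print(weighted_sn, "best run:", int(np.argmax(weighted_sn)))
```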
744

An adaptive modeling and simulation environment for combined-cycle data reconciliation and degradation estimation.

Lin, TsungPo 26 June 2008 (has links)
Performance engineers face a major challenge in modeling and simulation for after-market power systems, due to system degradation and measurement errors. Currently, most of the power generation industry uses deterministic data matching to calibrate the model and cascade system degradation, which causes significant calibration uncertainty and adds risk to providing performance guarantees. In this research work, maximum-likelihood based simultaneous data reconciliation and model calibration (SDRMC) is used for power system modeling and simulation. By replacing the current deterministic data matching with SDRMC one can reduce the calibration uncertainty and mitigate the error propagation to the performance simulation. A modeling and simulation environment for a complex power system with certain degradation has been developed. In this environment multiple data sets are imported when carrying out simultaneous data reconciliation and model calibration. Calibration uncertainties are estimated through error analyses and propagated to the performance simulation using the principle of error propagation. System degradation is then quantified by performance comparison between the calibrated model and its expected new & clean status. To mitigate smearing effects caused by gross errors, gross error detection (GED) is carried out in two stages. The first is a screening stage, in which serious gross errors are eliminated in advance; the GED techniques used here are based on multivariate data analysis (MDA), including multivariate data visualization and principal component analysis (PCA). Subtle gross errors are treated at the second stage, in which serial bias compensation or a robust M-estimator is engaged. To achieve better efficiency in the combined scheme of least-squares based data reconciliation and hypothesis-testing based GED, the Levenberg-Marquardt (LM) algorithm is utilized as the optimizer. To reduce computation time and stabilize problem solving for a complex power system such as a combined cycle power plant, meta-modeling using response surface equations (RSE) and system/process decomposition are incorporated with the simultaneous scheme of SDRMC. The goal of this research work is to reduce the calibration uncertainties and, thus, the risks of providing performance guarantees that arise from uncertainties in performance simulation.
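A toy sketch of the reconciliation idea: noisy measurements are adjusted to satisfy a process model while minimizing the measurement-variance weighted sum of squares, solved with the Levenberg-Marquardt optimizer named in the abstract. The single mass-balance model and the numbers are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

# Assumed measured stream flows (kg/s) around a splitter: in = out1 + out2.
y_meas = np.array([10.3, 6.1, 4.4])   # inconsistent: 6.1 + 4.4 != 10.3
sigma = np.array([0.2, 0.15, 0.15])   # assumed measurement standard deviations

def residuals(x):
    # Weighted adjustments to each measurement ...
    adj = (x - y_meas) / sigma
    # ... plus the mass balance enforced as a stiff penalty residual.
    balance = 1e3 * (x[0] - x[1] - x[2])
    return np.append(adj, balance)

sol = least_squares(residuals, y_meas, method="lm")  # Levenberg-Marquardt
x_rec = sol.x
print("reconciled flows:", x_rec, "balance:", x_rec[0] - x_rec[1] - x_rec[2])
```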
745

In silico tools in risk assessment: of industrial chemicals in general and non-dioxin-like PCBs in particular

Stenberg, Mia January 2012 (has links)
Industrial chemicals produced in or imported into the European Union in volumes above 1 tonne annually must be registered under REACH. A common problem concerning these chemicals is deficient information and lack of data for assessing the hazards posed to human health and the environment. Animal studies for the type of toxicological information needed are both expensive and time-consuming, and raise ethical concerns as well. Alternatives to animal testing are therefore requested. REACH has called for an increased use of in silico tools for non-testing data, such as structure-activity relationships (SARs), quantitative structure-activity relationships (QSARs), and read-across. The main objective of the studies underlying this thesis is to explore and refine the use of in silico tools in a risk assessment context for industrial chemicals, in particular to relate properties of the molecular structure to the toxic effect of the chemical substance using principles and methods of computational chemistry. The initial study was a survey of all industrial chemicals, from which the Industrial Chemical Map was created; a part of this map containing chemicals of potential concern was identified. Secondly, the environmental pollutants polychlorinated biphenyls (PCBs) were examined, in particular the non-dioxin-like PCBs (NDL-PCBs). A set of 20 NDL-PCBs was selected to represent the 178 PCB congeners with three to seven chlorine substituents. The selection procedure combined statistical molecular design, for a representative selection, with expert judgement, to include congeners of specific interest. The 20 selected congeners were tested in vitro in as many as 17 different assays. The data from the screening process were turned into interpretable toxicity profiles with multivariate methods, used to investigate potential classes of NDL-PCBs. It was shown that NDL-PCBs cannot be treated as one group of substances with similar mechanisms of action. Two groups of congeners were identified: one, generally comprising lower-chlorinated congeners with a higher degree of ortho substitution, showed higher potency in more assays (including all neurotoxicity assays); a second included abundant congeners with a similar toxic profile that might contribute to a common toxic burden. To investigate the structure-activity pattern of PCB effects on the dopamine transporter (DAT) in rat striatal synaptosomes, ten additional congeners were selected and tested in vitro. NDL-PCBs were shown to be potent inhibitors of DAT binding. The congeners with the highest DAT-inhibiting potency were tetra- and penta-chlorinated with 2-3 chlorine atoms in the ortho position. The model was not able to distinguish the congeners with activities in the lower μM range, which could be explained by a relatively unspecific response for the lower-ortho-chlorinated PCBs. / The European chemicals legislation REACH stipulates that chemicals produced or imported in quantities above 1 tonne per year must be registered and risk-assessed; an estimated 30,000 chemicals are affected. The problem is that data and information are often insufficient for a risk assessment. Effect data have largely come from animal testing, but animal studies are costly and time-consuming, and raise ethical concerns. REACH has therefore asked for an investigation of the possibility of using in silico tools to supply the requested data and information.
In silico roughly means "in the computer" and refers to computational models and methods used to obtain information about the properties and toxicity of chemicals. The aim of the thesis is to explore and refine the use of in silico tools to generate information for risk assessment of industrial chemicals. The thesis describes quantitative models, developed with chemometric methods, for predicting the toxic effects of specific chemicals. In the first study (I), 56,072 organic industrial chemicals were examined. Multivariate methods were used to create a map of the industrial chemicals describing their chemical and physical properties, and the map was compared against known and potential environmentally hazardous chemicals. The best-known environmental pollutants proved to have similar principal properties and grouped together in the map; by studying that part of the map more closely, further potentially hazardous substances could be identified. Studies two to four (II-IV) focused on the environmental pollutant PCB. Twenty PCBs were chosen to represent, structurally and physicochemically, the 178 PCB congeners with three to seven chlorine substituents. The toxicological effects of these 20 PCBs were examined in 17 different in vitro assays, and their toxicological profiles were established, i.e., which congeners have similar adverse effects and which differ. The profiles were used for classification of the PCBs, and quantitative models were developed to predict the effects of congeners not yet tested and to gain further knowledge of the structural properties that produce undesirable effects in humans and the environment, information that can be used in a future risk assessment of non-dioxin-like PCBs. The final study (IV) is a structure-activity study of the non-dioxin-like PCBs' inhibitory effect on the neurotransmitter dopamine in the brain.
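The representative-selection step (statistical molecular design) can be sketched roughly as follows: run PCA on a congener descriptor matrix, cluster the score space, and keep the congener nearest each cluster centre. The random descriptor matrix and the cluster count are assumptions, not the thesis's actual descriptors or procedure, which also folded in expert judgement.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(178, 12))        # assumed: 12 descriptors per congener

# Principal component scores of the standardized descriptor matrix.
scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))

# Partition the score space and take the congener closest to each centre.
km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(scores)
chosen = [int(np.argmin(np.linalg.norm(scores - c, axis=1)))
          for c in km.cluster_centers_]
print("representative congener indices:", sorted(chosen))
```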
746

Hybrid 2D and 3D face verification

McCool, Christopher Steven January 2007 (has links)
Face verification is a challenging pattern recognition problem. The face is a biometric that we as humans know can be recognised. However, the face is highly deformable and its appearance alters significantly when the pose, illumination or expression changes. These changes in appearance are most notable for texture images, or two-dimensional (2D) data, but the underlying structure of the face, or three-dimensional (3D) data, is not changed by pose or illumination variations. Over the past five years, methods have been investigated to combine 2D and 3D face data to improve the accuracy and robustness of face verification. Much of this research has examined the fusion of a 2D verification system and a 3D verification system, known as multi-modal classifier score fusion. These verification systems usually compare two feature vectors (two image representations), a and b, using distance or angular-based similarity measures. However, this does not provide the most complete description of the features being compared, as the distances describe at best the covariance of the data, or the second-order statistics (for instance, Mahalanobis-based measures). A more complete description would be obtained by describing the distribution of the feature vectors. However, feature distribution modelling is rarely applied to face verification because a large number of observations is required to train the models. This amount of data is usually unavailable, so this research examines two methods for overcoming the data limitation: 1. the use of holistic difference vectors of the face, and 2. dividing the 3D face into Free-Parts. Permutations of holistic difference vectors are formed so that more observations are obtained from a set of holistic features. Alternatively, by dividing the face into parts and considering each part separately, many observations are obtained from each face image; this is referred to as the Free-Parts approach. The extra observations from these two techniques are used to perform holistic feature distribution modelling and Free-Parts feature distribution modelling, respectively. It is shown that feature distribution modelling of these features leads to an improved 3D face verification system and an effective 2D face verification system. Using these two feature distribution techniques, classifier score fusion is then examined. Classifier score fusion attempts to combine complementary information from multiple classifiers. This complementary information can be obtained in two ways: by using different algorithms to represent the same face data (multi-algorithm fusion), for instance the 2D face data, or by capturing the face data with different sensors (multi-modal fusion), for instance capturing 2D and 3D face data. Multi-algorithm fusion is approached as combining verification systems that use holistic features and local features (Free-Parts), while multi-modal fusion examines the combination of 2D and 3D face data using all of the investigated techniques. The fusion experiments show that multi-modal fusion leads to a consistent improvement in performance, attributed to the fact that the data being fused are collected by two different sensors, a camera and a laser scanner. In deriving the multi-algorithm and multi-modal algorithms, a consistent framework for fusion was developed.
The consistent fusion framework, developed from the multi-algorithm and multi-modal experiments, is used to combine multiple algorithms across multiple modalities. This fusion method, referred to as hybrid fusion, is shown to provide improved performance over either fusion system on its own. The experiments show that the final hybrid face verification system reduces the False Rejection Rate from 8.59% for the best 2D verification system and 4.48% for the best 3D verification system to 0.59% for the hybrid verification system, at a False Acceptance Rate of 0.1%.
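A compact sketch of multi-modal score fusion of the kind discussed above: z-normalize each system's match scores and combine them with a weighted sum. The score values, normalization statistics, weights, and threshold are all assumptions; the thesis's actual fusion rules may differ.

```python
import numpy as np

def znorm(s, mu, sd):
    """Normalize match scores with statistics estimated on a development set."""
    return (s - mu) / sd

# Assumed match scores from a 2D and a 3D verification system (higher = match).
s2d = np.array([1.2, -0.3, 2.1, 0.4])
s3d = np.array([0.8, -1.1, 1.7, 0.9])

# Development-set statistics and fusion weights (assumed here).
fused = 0.4 * znorm(s2d, 0.5, 1.0) + 0.6 * znorm(s3d, 0.2, 0.9)
decisions = fused > 0.5          # threshold would be set to the target FAR
print(fused, decisions)
```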
747

The application of time-of-flight secondary ion mass spectrometry (ToF-SIMS) to forensic glass analysis and questioned document examination

Denman, John A January 2007 (has links)
The combination of analytical sensitivity and selectivity provided by time-of-flight secondary ion mass spectrometry (ToF-SIMS), with advanced statistical interrogation by principal component analysis (PCA), has allowed a significant advancement in the forensic discrimination of pen, pencil and glass materials based on trace characterisation.
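As a rough sketch of the PCA interrogation step, the snippet below projects a matrix of (simulated) ToF-SIMS peak intensities onto its first two principal components, in which fragments from a common source should cluster; all data here are simulated assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Simulated peak-intensity matrix: 2 glass sources, 10 fragments each, 50 peaks.
source_a = rng.normal(0.0, 1.0, size=(10, 50))
source_b = rng.normal(0.6, 1.0, size=(10, 50))
spectra = np.vstack([source_a, source_b])

pcs = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(spectra))
print(pcs[:3])   # fragments from one source should cluster in PC space
```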
748

Multi-layer Perceptron Error Surfaces: Visualization, Structure and Modelling

Gallagher, Marcus Reginald Unknown Date (has links)
The Multi-Layer Perceptron (MLP) is one of the most widely applied and researched Artificial Neural Network models. MLP networks are normally applied to supervised learning tasks, which involve iterative training methods to adjust the connection weights within the network. This is commonly formulated as a multivariate non-linear optimization problem over a very high-dimensional space of possible weight configurations. By analogy with the field of mathematical optimization, training an MLP is often described as the search of an error surface for a weight vector which gives the smallest possible error value. Although this presents a useful notion of the training process, there are many problems associated with using the error surface to understand the behaviour of learning algorithms and the properties of MLP mappings themselves. Because of the high dimensionality of the system, many existing methods of analysis are not well suited to this problem. Visualizing and describing the error surface are also nontrivial and problematic. These problems are specific to complex systems such as neural networks, which contain large numbers of adjustable parameters, and the investigation of such systems in this way is largely a developing area of research. In this thesis, the concept of the error surface is explored using three related methods. Firstly, Principal Component Analysis (PCA) is proposed as a method for visualizing the learning trajectory followed by an algorithm on the error surface. It is found that PCA provides an effective method for performing such a visualization, as well as providing an indication of the significance of individual weights to the training process. Secondly, sampling methods are used to explore the error surface and to measure certain of its properties, providing the data needed for an intuitive description of the error surface. A number of practical MLP error surfaces are found to contain a high degree of ultrametric structure, in common with other known configuration spaces of complex systems. Thirdly, a class of global optimization algorithms is developed which is focused on the construction and evolution of a model of the error surface (or search space) as an integral part of the optimization process. The relationships between this algorithm class, the Population-Based Incremental Learning algorithm, evolutionary algorithms and cooperative search are discussed. The work provides important practical techniques for exploring the error surfaces of MLP networks. These techniques can be used to examine the dynamics of different training algorithms and the complexity of MLP mappings, and to give an intuitive description of the nature of the error surface. The configuration spaces of other complex systems are also amenable to many of these techniques. Finally, the algorithmic framework provides a powerful paradigm for visualization of the optimization process and for the development of parallel coupled optimization algorithms which apply knowledge of the error surface to solving the optimization problem.
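A minimal sketch of the trajectory-visualization idea from the first part of the thesis: record the full weight vector after each training step of a tiny MLP, then project the trajectory onto its first two principal components. The network, task, and training loop are simplified assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)   # XOR-like task

# Tiny 2-4-1 MLP trained by full-batch gradient descent.
W1, b1 = rng.normal(size=(2, 4)) * 0.5, np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)) * 0.5, np.zeros(1)
trajectory = []
for _ in range(300):
    h = np.tanh(X @ W1 + b1)
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))
    err = out - y                          # dL/dz for the logistic loss
    gW2, gb2 = h.T @ err, err.sum(0)
    dh = (err @ W2.T) * (1 - h ** 2)       # backprop through tanh
    gW1, gb1 = X.T @ dh, dh.sum(0)
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= 0.05 * g / len(X)
    # Snapshot of the full weight vector: one point on the error surface.
    trajectory.append(np.concatenate([W1.ravel(), b1, W2.ravel(), b2]))

# PCA of the weight trajectory via SVD of the centred snapshot matrix.
T = np.array(trajectory)
Tc = T - T.mean(0)
_, _, Vt = np.linalg.svd(Tc, full_matrices=False)
proj = Tc @ Vt[:2].T        # 2-D view of the path taken on the error surface
print(proj[::60])
```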
749

A Study of Exploratory Data Analysis on Text Data: A Case Study Based on New Youth Magazine

Pan, Yan Yan Unknown Date (has links)
With economic prosperity and the rapid development of the internet, enormous volumes of data are generated online and offline at every moment, roughly 80 percent of which are unstructured data such as text and images; how to quantify such data and choose suitable analysis methods is the key to extracting and exploiting valuable information. For text data, this thesis proposes an exploratory data analysis approach and uses language change in New Youth magazine as a case study to show how textual features are selected, quantified, and analyzed. First, taking the volume as the unit of analysis, the textual structure of each volume of New Youth is quantified from multiple angles, covering character usage, sentence usage, the use of classical and vernacular function words, and the sharing of common words; a combination of charts and tables is used to trace the course and character of the magazine's language change, including both the mechanism of the shift from classical to vernacular Chinese and the evolution of the vernacular itself. Second, based on these exploratory results, textual feature variables that can separate classical from vernacular Chinese are identified; volumes 1 and 7 of New Youth serve as training samples, principal components are combined with logistic regression to train a classifier for the two language forms, and volume 4 is used for testing. The results confirm that the extracted variables effectively distinguish articles in the two forms. Finally, drawing on the exploratory results and the experience of humanities scholars, the thesis explores the later change in the magazine's language, from the vernacular of the May Fourth period to the vernacular characterized as "red Chinese" (the vernacular used in China after World War II). Training on volumes 7 and 11 confirms a clear difference between their language forms; applying the classifier to Taiwan's United Daily News and mainland China's People's Daily reveals clear differences in the two papers' language tendencies that merit further study. / Tremendous amounts of data are produced every day due to the rapid development of computer technology and the economy. Unstructured data, such as text, pictures, and videos, account for nearly 80 percent of all data created. Choosing appropriate methods for quantifying and analyzing this kind of data determines whether we can extract useful information. For that, we propose a standard operating process of exploratory data analysis (EDA) and use a case study of language changes in New Youth magazine as a demonstration. First, we quantify the texts of New Youth magazine from different perspectives, including the uses of words, sentences, function words, and the share of common vocabulary. We aim to detect the evolution of the modern language itself as well as the changes from traditional to modern Chinese. Then, according to the results of the exploratory data analysis, we treat the first and seventh volumes of New Youth magazine as training data to develop a classification model and apply the model to the fourth volume (i.e., testing data). The results show that traditional Chinese and modern Chinese can be successfully classified. Next, we intend to verify the changes from the modern Chinese of the May 4th Movement to that shaped by the advocacy of Socialism. We treat the seventh and eleventh volumes of New Youth magazine as training data and again develop a classification model. We then apply this model to the United Daily News from Taiwan and the People's Daily from Mainland China. We find that these two newspapers differ markedly: the style of the United Daily News is closer to that of the seventh volume, while the style of the People's Daily is more like that of the eleventh volume. This indicates that the People's Daily is likely to have been influenced by the Soviet Union.
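A rough sketch of the classification step described above, assuming character-frequency features reduced by principal components and fed to logistic regression; the toy corpus stands in for the New Youth volumes, and the feature set and pipeline details are assumptions.

```python
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer

# Toy stand-ins for volume-1 (classical) and volume-7 (vernacular) articles.
train_docs = ["之乎者也 其文甚古", "然則不可 蓋因其然",
              "我們認為這是新的想法", "他說的話大家都能聽懂"]
train_labels = [0, 0, 1, 1]          # 0 = classical, 1 = vernacular

clf = make_pipeline(
    CountVectorizer(analyzer="char"),                    # character frequencies
    FunctionTransformer(lambda X: X.toarray(),           # densify sparse counts
                        accept_sparse=True),
    PCA(n_components=2),                                 # principal components
    LogisticRegression(),
)
clf.fit(train_docs, train_labels)
print(clf.predict(["此言差矣", "這句話很容易懂"]))        # assumed volume-4 test lines
```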
750

Detection and 3D localization of moving and stationary obstacles by stereo vision in complex environments: application at level crossings

Fakhfakh, Nizar 14 June 2011 (has links)
The safety of people and equipment is a crucial element in road and rail transport. In recent years, level crossings (LC) have received increased attention in order to improve user safety at this road/rail intersection regarded as dangerous. In this thesis we propose a stereo vision system for the automatic detection of dangerous situations. Such a system enables the detection and localization of obstacles on or around the level crossing. The proposed vision system consists of two cameras supervising the crossing zone. We developed algorithms for both the detection of objects, such as pedestrians or vehicles, and their 3D localization. The obstacle detection algorithm is based on Independent Component Analysis and spatio-temporal belief propagation. The 3D localization algorithm exploits the advantages of local and global methods and consists of three steps: the first estimates a disparity map from a likelihood function based on local methods; the second identifies well-matched pixels with high confidence measures; this pixel subset is the starting point of the third step, which re-estimates the disparities of the remaining pixels by selective belief propagation. Motion is introduced as a constraint in the 3D localization algorithm, improving localization accuracy and reducing processing time. / In recent years, railway undertakings have become interested in assessing the safety of level crossings (LC). We propose in this thesis an Automatic Video-Surveillance system (AVS) at LC for the automatic detection of specific events. The system automatically detects and localizes in 3D the presence of one or more obstacles which are motionless at the level crossing. Our research aims at developing an AVS using passive stereo vision principles. The proposed imaging system uses two cameras to detect and localize any kind of object lying on a railway level crossing. The cameras are placed so that the dangerous zones are well (fully) monitored. The system automatically supervises and assesses critical situations by detecting objects in the hazardous zone, defined as the crossing zone of a railway line by a road or path. The AVS system is used to monitor dynamic scenes where interactions take place among objects of interest (people or vehicles). After a classical image grabbing and digitizing step, the processing consists of the following two modules: detection of moving and stationary objects, and 3D localization. The developed stereo matching algorithm stems from an inference principle based on belief propagation and energy minimization. It takes into account the advantages of local methods for reducing the complexity of the inference step achieved by the belief propagation technique, which leads to an improvement in the quality of results. The motion detection module is treated as a constraint which improves and speeds up the 3D localization algorithm.
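A highly simplified sketch of the first, local stage of the disparity estimation described above: block matching on a rectified pair, followed by a left-right consistency check to keep only confidently matched pixels (the subset that would seed the selective belief-propagation step). The file names and matcher parameters are assumptions, and the belief-propagation refinement itself is not shown.

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # assumed rectified pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

bm = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disp_l = bm.compute(left, right).astype(np.float32) / 16.0   # fixed-point to px

# Right-image disparity via the horizontal-flip trick, for consistency checking.
right_f, left_f = cv2.flip(right, 1), cv2.flip(left, 1)
disp_r = cv2.flip(bm.compute(right_f, left_f), 1).astype(np.float32) / 16.0

# Left-right consistency: d_l(x) should match d_r(x - d_l(x)) within 1 px.
h, w = disp_l.shape
xs = np.arange(w)[None, :].repeat(h, 0)
xr = np.clip(xs - disp_l.astype(int), 0, w - 1)
confident = np.abs(disp_l - np.take_along_axis(disp_r, xr, axis=1)) < 1.0

# Confident pixels would seed the selective belief-propagation re-estimation.
disparity = np.where(confident & (disp_l > 0), disp_l, np.nan)
```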
