  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Studies on SI engine simulation and air/fuel ratio control systems design

Bai, Yang January 2013 (has links)
The more stringent Euro 6 and LEV III emission standards take effect in 2014 and 2015 respectively. Accurate air/fuel ratio (AFR) control can effectively reduce vehicle emissions, and simulation of engine dynamics is a powerful method for developing and analysing engines and engine controllers. Currently, most engine air/fuel ratio control uses look-up tables combined with proportional-integral (PI) control, which is not robust to system uncertainty and time-varying effects. This thesis first develops a simulation package for a port-injection spark-ignition engine; the package includes engine dynamics, vehicle dynamics, and a driving-cycle selection module. The simulation results agree closely with data obtained from laboratory experiments. New controllers are then proposed for air/fuel ratio control in spark-ignition engines, with the aim of maximising fuel economy while minimising exhaust emissions. PID control and fuzzy control are combined into a fuzzy PID controller, whose effectiveness is demonstrated by simulation tests. A neural-network-based predictive controller is then designed for further performance improvement. It combines inverse control and predictive control methods: the network is trained offline, with the control output modified to compensate for control errors. Simulation evaluations show that the new neural controller greatly improves air/fuel ratio control performance. The tests also show that the improved AFR control effectively reduces harmful engine emissions, which is important for meeting the more stringent emission standards.
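For context, a minimal sketch of the kind of discrete PI air/fuel-ratio loop that conventional look-up-table controllers are built around, and that the thesis extends with fuzzy and neural methods. The toy plant model and all gains below are invented for illustration and are not taken from the thesis.

```python
# Illustrative sketch only: a discrete PI AFR controller driving a toy
# engine model toward the stoichiometric ratio. Gains and plant model
# are hypothetical, not from the thesis.

STOICH_AFR = 14.7  # stoichiometric air/fuel ratio for gasoline

def make_pi_controller(kp, ki, dt):
    """Return a PI controller mapping AFR error -> fuel correction."""
    state = {"integral": 0.0}
    def control(error):
        state["integral"] += error * dt
        return kp * error + ki * state["integral"]
    return control

def simulate(steps=200, dt=0.01):
    """Toy plant: measured AFR is inversely proportional to fuel command."""
    fuel = 1.0                        # relative fuel injection quantity
    controller = make_pi_controller(kp=0.02, ki=0.2, dt=dt)
    afr = STOICH_AFR * 1.1            # start 10% lean
    for _ in range(steps):
        error = afr - STOICH_AFR      # positive error = mixture too lean
        fuel += controller(error)     # inject more fuel to enrich
        afr = STOICH_AFR * 1.1 / fuel # more fuel -> lower (richer) AFR
    return afr

final = simulate()  # settles near 14.7
```

A fuzzy PID scheme such as the one in the thesis would additionally adjust `kp` and `ki` online from the error and its rate of change.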
22

Towards new computational tools for predicting toxicity

Chavan, Swapnil January 2016 (has links)
The toxicological screening of the numerous chemicals we are exposed to requires significant cost and the use of animals; more efficient methods for evaluating toxicity are therefore required to reduce both. Computational strategies have the potential to reduce the cost and the use of animal testing in toxicity screening. The ultimate goal of this thesis is to develop computational models for the prediction of toxicological endpoints that can serve as an alternative to animal testing. In Paper I, an attempt was made to construct a global quantitative structure-activity relationship (QSAR) model for the acute toxicity endpoint (LD50 values) using the Munro database, which represents a broad chemical landscape. Such a model could be used for acute toxicity screening of chemicals of diverse structures. Paper II focuses on the use of acute toxicity data to support the prediction of chronic toxicity. The results of this study suggest that, for related chemicals having acute toxicities within a similar range, their lowest observed effect levels (LOELs) can be used in read-across strategies to fill gaps in chronic toxicity data. In Paper III, a k-nearest neighbor (k-NN) classification model was developed to predict human ether-a-go-go related gene (hERG)-derived toxicity. The results suggest that the model has potential for use in identifying compounds with hERG liabilities, e.g. in drug development.
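As an illustration of the k-NN classification used in Paper III, a minimal sketch follows; the two-dimensional "descriptors" and labels below are invented for illustration, not real hERG data.

```python
# Minimal k-NN classifier sketch: majority vote among the k training
# points nearest (Euclidean distance) to the query. Data are made up.
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs."""
    neighbours = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Hypothetical 2-D molecular descriptors with binary activity labels
train = [
    ((0.1, 0.2), "inactive"), ((0.2, 0.1), "inactive"), ((0.3, 0.3), "inactive"),
    ((0.9, 0.8), "blocker"),  ((0.8, 0.9), "blocker"),  ((0.7, 0.7), "blocker"),
]
label = knn_predict(train, (0.85, 0.85), k=3)
```

In a QSAR setting the feature vectors would be computed molecular descriptors and the distance metric and k would be tuned on validation data.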
23

Análise de sentimento em documentos financeiros com múltiplas entidades / Sentiment Analysis in Financial Documents with Multiple Entities

Ferreira, Javier Zambreno 25 February 2014 (has links)
Given the amount of information available on the internet, manual content analysis to identify information of interest becomes unfeasible. One analysis of particular interest is polarity analysis: the classification of a text document as positive, negative, or neutral with respect to a given topic. This task is particularly useful in the finance domain, where news about a company can affect the performance of its stocks. Although most methods in this domain assume that a document carries a single polarity, most documents in fact cite many entities, and these entities are usually the targets of the polarity analysis. In this work we therefore study strategies for polarity detection in financial documents with multiple entities. In particular, we study methods based on learning multiple models, one for each observed entity, using SVM classifiers. We evaluate models based on partitioning the documents according to the entities they cite, and on segmenting documents into fragments according to the entities they cite. To segment documents we use several heuristics based on shallow and deep natural language processing. We found that entity-specific models created by simply partitioning the document collection largely outperformed strategies based on single models.
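The entity-partitioning strategy can be sketched as follows. The keyword-counting "model" is a deliberately trivial stand-in for the SVM classifiers used in the dissertation, and all documents and entity names are invented.

```python
# Sketch: one training subset (hence one model) per entity, as in the
# partitioning strategy above. The toy word-count classifier stands in
# for an SVM; all texts and entities are hypothetical.
from collections import defaultdict, Counter

def partition_by_entity(examples):
    """examples: (text, entity, polarity) triples -> subset per entity."""
    subsets = defaultdict(list)
    for text, entity, polarity in examples:
        subsets[entity].append((text, polarity))
    return subsets

def train(examples):
    """Count word/polarity co-occurrences (toy stand-in for SVM training)."""
    scores = defaultdict(Counter)
    for text, polarity in examples:
        for word in text.lower().split():
            scores[word][polarity] += 1
    return scores

def predict(model, text):
    votes = Counter()
    for word in text.lower().split():
        votes.update(model.get(word, Counter()))
    return votes.most_common(1)[0][0] if votes else "neutral"

examples = [
    ("PetroCo profits soared this quarter", "PetroCo", "positive"),
    ("PetroCo faces a costly lawsuit", "PetroCo", "negative"),
    ("BancoX shares plunged after losses", "BancoX", "negative"),
    ("BancoX reported record dividends", "BancoX", "positive"),
]
models = {ent: train(sub) for ent, sub in partition_by_entity(examples).items()}
result = predict(models["PetroCo"], "profits soared again")
```

The key design point is that each entity's model only ever sees text associated with that entity, which is what let the partitioned models outperform a single global model in the dissertation's experiments.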
24

Data Reduction Techniques in Classification Processes

Lozano Albalate, Maria Teresa 25 July 2007 (has links)
The learning process consists of several steps: building a Training Set (TS), training the system, testing its behaviour, and finally classifying unknown objects. When a distance-based rule such as the 1-Nearest Neighbour (1-NN) classifier is used, the first step (building the training set) includes editing and condensing the data. The main reason is that distance-based rules need considerable time to classify each unlabelled sample x, since the distance from x to every point in the training set must be calculated. Hence, the smaller the training set, the shorter the time needed for each new classification. This thesis focuses mainly on building a training set from given data, and especially on condensing it; different classification techniques are also compared. The aim of any condensing technique is to obtain a reduced training set so that as little time as possible is spent on classification, without a significant loss of classification accuracy. Some new prototype-based approaches to training-set size reduction are presented. These schemes define a small number of prototypes that represent all the original instances, covering both approaches that select among the existing examples (selective condensing algorithms) and approaches that generate new representatives (adaptive condensing algorithms). The new reduction techniques are experimentally compared with traditional ones for data represented in feature spaces, using the classical 1-NN rule.
However, other (fast) classifiers are also considered, namely linear and quadratic classifiers constructed in dissimilarity spaces based on prototypes, in order to see how the editing and condensing concepts carry over to this different family of classifiers. Although the goal of the algorithms proposed in this thesis is to obtain a strongly reduced set of representatives, performance is empirically evaluated over eleven real data sets by comparing not only the reduction rate but also the classification accuracy against other condensing techniques. The ultimate aim is therefore not only a strongly reduced set, but also a balanced one. Several ways of solving the same problem exist: when a distance-based rule is used as classifier, reducing the training set is not the only option, and a different family of approaches applies efficient search methods instead. The results obtained with the algorithms presented here are therefore also compared, in terms of classification accuracy and time, with several efficient search techniques. Finally, the main contributions of this thesis can be summarised in four points. First, two selective algorithms based on the idea of surrounding neighbourhood, which obtain better results than the other algorithms presented here as well as other traditional schemes. Second, a generative approach based on mixtures of Gaussians, which yields better classification accuracy and size reduction than traditional adaptive algorithms, and results similar to those of LVQ. Third, it is shown that classification rules other than 1-NN can be used, even leading to better results. And finally, the experiments show that, for databases such as those used here, the proposed approaches perform classification in less time than the efficient search techniques.
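For context, Hart's classic condensed nearest neighbour (CNN) algorithm, a representative of the selective condensing family discussed above, can be sketched in a few lines; the one-dimensional toy data are invented.

```python
# Hart's condensed nearest neighbour (1968): grow a prototype store by
# keeping only points that the current store misclassifies under 1-NN.
import math

def nn_label(store, x):
    """Label of the nearest stored prototype (1-NN rule)."""
    return min(store, key=lambda p: math.dist(p[0], x))[1]

def condense(training_set):
    store = [training_set[0]]
    changed = True
    while changed:                      # repeat until a full clean pass
        changed = False
        for point, label in training_set:
            if nn_label(store, point) != label:
                store.append((point, label))
                changed = True
    return store

# Two well-separated 1-D classes: interior points are redundant,
# so the store shrinks from 8 points to 2.
training_set = [((x,), "a") for x in (0.0, 0.1, 0.2, 0.3)] + \
               [((x,), "b") for x in (1.0, 1.1, 1.2, 1.3)]
store = condense(training_set)
```

The condensed store still classifies every original training point correctly, which is exactly the consistency property this family of algorithms trades against reduction rate.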
25

Adaptive Neuro Fuzzy Inference System Applications In Chemical Processes

Guner, Evren 01 November 2003 (has links) (PDF)
Neuro-fuzzy systems are fuzzy systems that incorporate neural networks (NNs), so that knowledge can be acquired automatically through NN learning algorithms; they can be viewed as a mixture of local experts. The Adaptive Neuro-Fuzzy Inference System (ANFIS) is one example, in which a fuzzy system is implemented in the framework of adaptive networks. ANFIS constructs an input-output mapping based both on human knowledge (in the form of fuzzy rules) and on generated input-output data pairs. Effective control of distillation systems, one of the most important unit operations in the chemical industry, can easily be designed when composition values are known. Compositions can in principle be measured online with direct composition analyzers, but this is often not feasible, since such analyzers, like gas chromatographs, involve large measurement delays. As an alternative, compositions can be estimated from temperature measurements: an online estimator that uses temperature measurements can infer the product compositions. In this study, ANFIS estimators are designed to infer the top and bottom product compositions in a continuous distillation column, and the reflux drum compositions in a batch distillation column, from measurable tray temperatures. The performance of the designed estimators is compared with that of other estimator types such as NNs and the Extended Kalman Filter (EKF). The performance of ANFIS in the adaptive neuro-fuzzy control of a pH system is also investigated: ANFIS is used as a controller within a specialized learning algorithm. A simple ANFIS structure is designed and implemented in an adaptive closed-loop control scheme, and the performance of the ANFIS controller is compared with that of an NN for the case under study.
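A two-rule first-order Sugeno fuzzy system of the kind ANFIS tunes can be sketched directly: rule firing strengths come from membership functions, and the output is a normalised weighted average of linear consequents. The membership centres and consequent coefficients below are invented, standing in for parameters ANFIS would learn from tray-temperature/composition data.

```python
# Toy first-order Sugeno inference (the functional core of ANFIS):
# two rules over one input, Gaussian memberships, linear consequents.
# All constants are hypothetical placeholders for learned parameters.
import math

def gauss(x, centre, sigma):
    """Gaussian membership function."""
    return math.exp(-((x - centre) ** 2) / (2 * sigma ** 2))

def sugeno(temp):
    """Estimate a 'composition' from one tray temperature (toy model)."""
    # Layers 1-2: firing strengths of the two rules
    w_low = gauss(temp, centre=80.0, sigma=10.0)    # "temperature is low"
    w_high = gauss(temp, centre=100.0, sigma=10.0)  # "temperature is high"
    # Layer 4: first-order (linear) consequents
    f_low = 0.9 - 0.002 * temp
    f_high = 0.5 - 0.004 * temp
    # Layer 5: normalised weighted average
    return (w_low * f_low + w_high * f_high) / (w_low + w_high)

y80 = sugeno(80.0)    # dominated by the "low temperature" rule
y100 = sugeno(100.0)  # dominated by the "high temperature" rule
```

ANFIS training would adjust the membership parameters (backpropagation) and the consequent coefficients (least squares) so this mapping fits measured column data.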
26

MIMPCA: uma abordagem robusta para extração de características aplicada à classificação de faces / MIMPCA: A Robust Feature-Extraction Approach Applied to Face Classification

Francisco Pereira, José 31 January 2010 (has links)
The need to control access to places, services, and information is growing, as is the search for more efficient means of personal identification. In this context, biometrics, the use of biological characteristics as an identification mechanism, has been applied with very promising results. Information used to identify individuals includes the iris, the retina, the face, the fingerprint, and even hand geometry. Among biometric techniques, face recognition stands out for delivering very good results at low deployment cost: it can be used on many kinds of devices and, in its simplest form, requires no dedicated hardware. It also requires no user interaction or physical contact to capture and classify faces. This work focuses on image-based (2D) face recognition; more precisely, it aims to reduce or eliminate the effects of variations in the environment, or in the face itself, that harm the final classification. The techniques examined and proposed use principal component analysis (PCA) to extract features from frontal face images. They build on recent studies aimed at improving classification rates even under adverse image-acquisition conditions or partial occlusion of the faces. The results show higher recognition rates for the proposed approaches than for their base techniques when run on images with some kind of local variation. A large gain in image-processing time was also observed, which makes the proposed techniques applicable on devices with lower computational capacity.
27

Design of intelligent ensembled classifiers combination methods

Alani, Shayma January 2015 (has links)
Classifier ensembling has long been one of the most active areas of machine-learning research. The main aim of combining classifiers into an ensemble is to improve prediction accuracy compared with an individual classifier: a combined ensemble can compensate for the weaknesses of individual classifiers in certain regions of the input space by drawing on members that are more accurate there. In this thesis, different algorithms are proposed for designing classifier-ensemble combiners. Existing methods such as averaging, voting, weighted averaging, and optimised weighting do not increase the combiner's accuracy as much as the advanced methods proposed here, namely genetic programming and the coalition method. The different methods are studied in detail and analysed on several databases, with the aim of increasing the combiner's accuracy relative to standard stand-alone classifiers. The proposed methods are based on generating a combiner formula using genetic programming, while the coalition method estimates the diversity of the classifiers so that a coalition with better prediction accuracy can be formed. Standard performance measures are used, namely accuracy, sensitivity, specificity, and area under the curve, in addition to training-error measures such as the mean square error. The combiner methods are compared empirically with several stand-alone classifiers based on neural-network algorithms, with different network topologies used to generate different models. Experimental results show that the combiner algorithms are superior in creating the most diverse and accurate classifier ensembles. Ensembles of the same model type are also generated to boost the accuracy of a single classifier type: an ensemble of 10 models with different initial weights is used to improve accuracy.
Experiments show a significant improvement over a single-model classifier. Finally, two combining methods are studied in depth: genetic programming and the coalition method. The genetic programming algorithm generates a formula for combining the classifiers, while the coalition method uses a simple algorithm that assigns linear combination weights based on consensus theory. Experimental results on the same databases demonstrate the effectiveness of the proposed methods compared with conventional combining methods, and show that the coalition method outperforms genetic programming.
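Two of the conventional combiners mentioned above, majority voting and weighted averaging, take only a few lines each; the member outputs and weights below are invented values for a binary task.

```python
# Baseline ensemble combiners: hard-label majority vote and
# accuracy-weighted averaging of per-model probabilities.
# Member predictions and weights are illustrative, not real results.
from collections import Counter

def majority_vote(predictions):
    """predictions: list of hard labels from each ensemble member."""
    return Counter(predictions).most_common(1)[0][0]

def weighted_average(probabilities, weights):
    """Combine per-model probabilities of the positive class, weighting
    each model (e.g. by its validation accuracy); threshold at 0.5."""
    total = sum(weights)
    score = sum(p * w for p, w in zip(probabilities, weights)) / total
    return (1 if score >= 0.5 else 0), score

label = majority_vote([1, 0, 1])
combined, score = weighted_average([0.9, 0.4, 0.6], [0.8, 0.6, 0.7])
```

The thesis's genetic-programming combiner generalises this by searching over arbitrary formulas of the member outputs rather than fixing the combination rule in advance.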
28

MIMO Channel Equalization and Symbol Detection using Multilayer Neural Network

Waseem, Athar, Hossain, A.H.M Sadath January 2013 (has links)
In recent years, Multiple Input Multiple Output (MIMO) systems have been employed in wireless communication to achieve high data rates. A MIMO system uses multiple antennas at both the transmitting and receiving ends; these antennas communicate on the same frequency band and help increase channel capacity linearly. Because of multipath propagation, wireless channels suffer from fading, which causes Inter-Symbol Interference (ISI). Each channel path has an independent delay, an independent path loss or gain, and a phase shift; these deform the signal, so the receiver may detect a wrong or distorted symbol. Many neural network (NN) based channel equalizers have been proposed in the literature to remove this fading effect from the received signal. Owing to their highly non-linear nature, NNs can efficiently decode transmitted symbols affected by fading channels. Channel equalization can also be treated as a classification task: in the space of received symbol sequences, an NN can form decision regions with arbitrarily shaped boundaries, by virtue of its universal approximation capability. This property motivates using NNs for channel equalization and symbol detection. This research project implements an NN channel equalizer for Rayleigh fading channels causing ISI in MIMO systems, treating equalization as a classification problem. The equalizer is implemented over MIMO systems of different configurations using quadrature amplitude modulation (4-QAM and 16-QAM) signals. The Levenberg-Marquardt (LM), One Step Secant (OSS), Gradient Descent (GD), Resilient Backpropagation (Rprop), and Conjugate Gradient (CG) algorithms are used to train the NN. The weights calculated during training provide the equalization matrix as an estimate of the channel,
and the output of the NN provides the estimate of the transmitted signals. The equalizer is assessed in terms of Symbol Error Rate (SER) and equalizer efficiency.
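Symbol detection after equalization amounts to assigning each received complex sample to the nearest constellation point; these are the decision regions a trained NN equalizer also learns. A minimal 4-QAM sketch follows, with invented noisy samples standing in for channel output.

```python
# Minimum-distance 4-QAM detection and SER computation.
# The received samples below are hypothetical noisy channel outputs.
CONSTELLATION = {
    (1 + 1j): "00", (-1 + 1j): "01",
    (-1 - 1j): "11", (1 - 1j): "10",
}

def detect(sample):
    """Return the bit pair of the nearest constellation point."""
    point = min(CONSTELLATION, key=lambda c: abs(sample - c))
    return CONSTELLATION[point]

def symbol_error_rate(received, sent_bits):
    errors = sum(detect(r) != b for r, b in zip(received, sent_bits))
    return errors / len(sent_bits)

received = [0.9 + 1.2j, -1.1 + 0.8j, 0.7 - 0.9j, -0.2 - 0.1j]
sent = ["00", "01", "10", "10"]  # last sample crosses a decision boundary
ser = symbol_error_rate(received, sent)
```

An NN equalizer replaces the fixed minimum-distance rule with learned, possibly curved decision boundaries, which is what lets it cope with the ISI that distorts these regions.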
29

Maximum Mass Restraint of Neutron Stars: Quarks, Pion, Kaons, and Hyperons

Ryan, Garrett 01 January 2017 (has links)
This thesis explores the maximum-mass stability of neutron stars. The outer structure is detailed, covering nuclear pasta phases, the neutron drip line, and density transitions of matter in the crust and atmosphere layers. Other discussion points include superfluids in the crust and core, the role of vortices in neutron stars, and magnetic-field effects on the equation of state (EOS). The inner core is studied in much more detail because of its dominant role in the EOS. The variety of stars considered includes pion-condensate stars, kaon-condensate stars, npeμ stars with and without hyperons, quark-hybrid stars, and strange stars. Also included are descriptions of two- and three-nucleon interactions, the factors governing the appearance of hyperon species, and the formation of kaons, pions, quarks, and hyperons. The resulting EOSs are compared through their maximum-mass values to determine which are likely to limit the mass of neutron stars.
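For context, the maximum mass for a given EOS follows from integrating the standard Tolman-Oppenheimer-Volkoff (TOV) equations of relativistic hydrostatic equilibrium (a textbook result, not specific to this thesis):

```latex
\frac{dP}{dr} = -\,\frac{G\left[\rho(r) + P(r)/c^{2}\right]\left[m(r) + 4\pi r^{3} P(r)/c^{2}\right]}{r^{2}\left[1 - 2Gm(r)/(r c^{2})\right]},
\qquad
\frac{dm}{dr} = 4\pi r^{2}\rho(r).
```

Each EOS supplies the closure relation $P(\rho)$; integrating outward from a central density $\rho_c$ yields one star, and the maximum mass of the sequence occurs where $dM/d\rho_c = 0$, which is the quantity compared across the EOSs above.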
30

Applicera maskininlärning på vägtrafikdata för att klassificera gatutyper i Stockholm / Apply Machine Learning on Road Traffic Data in order to Classify Street Types in Stockholm

Engberg, Alexander January 2020 (has links)
In this thesis, two different machine learning models were applied to road traffic data from two large Swedish cities, Gothenburg and Stockholm, and evaluated with regard to the classification of street types in urban environments. When planning and developing road traffic systems it is important to have reliable knowledge about the traffic system. The amount of available traffic data from urban areas is growing, and this data can be used in traffic analysis to gain insight into historical, current, and future traffic patterns. By training machine learning models to predict what type of street a measuring location belongs to, a classification can be made from historical data. This thesis presents and evaluates the performance of two machine learning models in predicting and classifying street types. The algorithms used were K-Nearest Neighbor and Random Forest, applied to different combinations of attributes in order to identify which attributes lead to the best classification of street types in Gothenburg. The training dataset consisted of traffic data collected in Gothenburg; the final model was then applied to traffic data from Stockholm to obtain street-type predictions for that area. The results show that a combination of all tested attributes gives the highest accuracy, with Random Forest as the best-performing model. Even though the two cities differ in topography and size, the study yields relevant insights about traffic patterns in Stockholm.
