361

Smart meter integrado a analisador de qualidade de energia para propósitos de identificação de cargas residenciais / Smart meter integrated to power quality analyzer for identification purposes of residential loads

Sergio Date Fugita 20 November 2014 (has links)
This thesis presents the development of a smart meter, integrated with a power quality analyzer, for the analysis of harmonic distortion using an artificial neural network method embedded in hardware. The smart meter falls within the concepts of the Smart Grid, which are also presented in this work. The purpose of developing the smart meter for harmonic distortion analysis is to help electric utilities identify what type of load a consumer uses at home, in order to support decisions such as reducing harmonic current emission, managing energy demand, detecting faults in the electricity supply, and billing differentially according to the amount of harmonics injected into the power grid. Additionally, the smart meter developed here can also detect short-duration voltage variation (VTCD) phenomena such as swell, sag, and supply interruption. The entire development process of the smart meter is presented over the course of this doctoral thesis.
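As a rough illustration of the pipeline described above, a hypothetical sketch: harmonic magnitudes are extracted from a sampled current waveform via FFT and fed to a small neural-network classifier. The sampling rate, harmonic count, toy waveforms, and the scikit-learn MLP are assumptions for illustration, not the embedded hardware implementation from the thesis.

```python
# Hypothetical sketch: harmonic-feature extraction + ANN load classification.
# Sampling rate, harmonic order, and classifier choice are illustrative
# assumptions, not the embedded hardware design from the thesis.
import numpy as np
from sklearn.neural_network import MLPClassifier

FS = 7680          # samples/s: 128 samples per 60 Hz cycle (assumed)
F0 = 60            # fundamental frequency (Hz)
N_HARMONICS = 15   # magnitudes of harmonics 1..15 form the feature vector

def harmonic_features(current: np.ndarray) -> np.ndarray:
    """Return magnitudes of the first N_HARMONICS harmonics, normalized."""
    spectrum = np.abs(np.fft.rfft(current)) / len(current)
    freqs = np.fft.rfftfreq(len(current), d=1.0 / FS)
    mags = np.array([spectrum[np.argmin(np.abs(freqs - h * F0))]
                     for h in range(1, N_HARMONICS + 1)])
    return mags / (mags[0] + 1e-12)  # normalize by the fundamental

# Toy training data: a "linear" load (pure sinusoid) vs. a harmonic-rich
# "nonlinear" load (square wave), each with small added noise.
t = np.arange(4096) / FS
linear = np.sin(2 * np.pi * F0 * t)
nonlinear = np.sign(np.sin(2 * np.pi * F0 * t)) * 0.8

X = np.array([harmonic_features(w + 0.01 * np.random.randn(len(t)))
              for w in (linear, nonlinear) for _ in range(50)])
y = np.array([0] * 50 + [1] * 50)  # 0 = linear load, 1 = nonlinear load

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(X, y)
print(clf.predict([harmonic_features(nonlinear)]))  # expected: [1]
```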
362

Otimização e análise das máquinas de vetores de suporte aplicadas à classificação de documentos. / Optimization and analysis of support vector machine applied to text classification.

Eduardo Akira Kinto 17 June 2011 (has links)
Analysis of stored data is fundamental to decision-making in any business, but the data must be organized to allow easy access. With very large data volumes, this task becomes computationally hard, so efficient mechanisms for information analysis are essential. Artificial neural networks (ANN), support vector machines (SVM), and other algorithms are frequently used for this purpose. This work explores the Sequential Minimal Optimization (SMO) algorithm, a training algorithm for the SVM, and modifies it to reduce training time while preserving classification capacity. Two modifications are proposed: one to the training algorithm and one to the architecture. The first allows support-vector candidates to be updated in the same cycle in which a pair of Lagrange coefficients is updated, by considering the neighbors of the current working set. Among the algorithms that implement the SVM, SMO is one of the fastest and least memory-consuming, because it does not invert the kernel matrix; that square matrix grows with the number of samples that become support vectors, and its inversion is one of the most time-consuming steps of other SVM solvers. The second modification subdivides the training set in an ordered fashion along the dimension of highest entropy; unlike traditional decomposition approaches, samples are not repeatedly resubmitted to SVM training. Finally, the proposed SMO is applied to document classification through a new approach: one-class classification using binary classifiers. As in any document classification task, feature analysis is a fundamental step, and a further contribution is presented here: pointwise total correlation is used to select the words that form the word-index vector.
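A minimal sketch of the entropy-guided ordered subdivision described above, under assumptions of my own (histogram-based entropy estimate, equal-sized chunks); the thesis's actual criterion and chunk handling may differ.

```python
# Hypothetical sketch of the entropy-guided ordered subdivision: pick the
# feature dimension with the highest (histogram-estimated) entropy, sort
# samples along it, and split into disjoint chunks so that no sample is
# submitted to SVM training more than once.
import numpy as np

def histogram_entropy(column: np.ndarray, bins: int = 32) -> float:
    counts, _ = np.histogram(column, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def ordered_subdivision(X: np.ndarray, n_chunks: int):
    entropies = [histogram_entropy(X[:, j]) for j in range(X.shape[1])]
    dim = int(np.argmax(entropies))              # most informative dimension
    order = np.argsort(X[:, dim])                # sort samples along it
    return dim, np.array_split(order, n_chunks)  # disjoint index chunks

X = np.random.rand(1000, 20)
dim, chunks = ordered_subdivision(X, n_chunks=4)
# Each chunk would then train its own SMO/SVM instance exactly once.
print(dim, [len(c) for c in chunks])
```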
363

Modelo computacional de um rebanho bovino de corte virtual utilizando simulação Monte Carlo e redes neurais artificiais / Computational model of a virtual beef cattle herd applying Monte Carlo simulation and artificial neural networks

Flávia Devechio Providelo Meirelles 04 February 2005 (has links)
In this work, two computational tools were applied to support decision-making in beef cattle production under the extensive management conditions found in Brazil. The first part of the work built software based on Monte Carlo simulation, with two models designed for later fusion, to analyze production traits (weight gain) and reproduction traits (fertility, post-partum anestrus, birth rate, and puberty). The second part applied artificial neural networks to classify animals by weight gain during the growth phases (birth to weaning, weaning to 550 days), comparing the data with the BLUP genetic value for weight gain from weaning to 550 days adjusted to 345 days (GP345). Both models showed potential to support beef cattle production.
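A toy Monte Carlo sketch of the kind of herd simulation described above; every rate and distribution below is an invented placeholder, not a calibrated parameter from the thesis.

```python
# Hypothetical Monte Carlo herd sketch: draw reproductive outcomes and
# weight gains per cow per season. All parameter values are invented
# placeholders, not the calibrated figures from the thesis.
import numpy as np

rng = np.random.default_rng(42)

N_COWS = 500
N_YEARS = 10
FERTILITY = 0.85        # probability a cow conceives in a season (assumed)
CALF_SURVIVAL = 0.95    # probability a calf survives to weaning (assumed)
GAIN_MEAN, GAIN_SD = 160.0, 25.0  # kg gained, birth to weaning (assumed)

weaned_totals = []
for _ in range(N_YEARS):
    conceived = rng.random(N_COWS) < FERTILITY
    survived = rng.random(N_COWS) < CALF_SURVIVAL
    weaned = conceived & survived
    gains = rng.normal(GAIN_MEAN, GAIN_SD, size=weaned.sum())
    weaned_totals.append((weaned.sum(), gains.mean()))

for year, (n, g) in enumerate(weaned_totals, 1):
    print(f"year {year}: {n} calves weaned, mean gain {g:.1f} kg")
```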
364

Deep Learning Black Box Problem

Hussain, Jabbar January 2019 (has links)
The application of neural networks in deep learning is growing rapidly because of their ability to outperform other machine learning algorithms on many kinds of problems. One major disadvantage of deep neural networks, however, is that the internal logic by which they reach their output is neither understandable nor explainable; this behavior is known as the "black box" problem. This raises the first research question: how prevalent has the black box problem been in the research literature over a given period of time? Black box problems are usually addressed by so-called rule extraction, which leads to the second research question: what rule extraction methods have been proposed to solve such problems? To answer these questions, a systematic literature review was conducted, collecting data on the topics of the black box and rule extraction. Printed and online articles published in highly ranked journals and conference proceedings were selected for investigation, and this set of articles formed the unit of analysis. The results show a gradually increasing interest in black box problems over time, driven mainly by new technological developments. The thesis also provides an overview of the different methodological approaches used in rule extraction methods.
365

Implémentation de méthodes d'intelligence artificielle pour le contrôle du procédé de projection thermique / Implementing artificial intelligence methods for controlling the thermal spraying process

Liu, Taikai 09 December 2013 (has links)
Since its creation, thermal spraying has continuously expanded its field of application thanks to its potential to spray very different materials (metals, ceramics, plastics, ...) in very different forms as well (powder, wire, suspension, solution, ...). Several processes have been developed to meet industrial applications, for example HVOF (High Velocity Oxygen Fuel), APS (Atmospheric Plasma Spraying), and VLPPS (Very Low Pressure Plasma Spray). Among these, the APS process is now well established in industry and in the laboratory, successfully producing good-quality coatings at attractive cost. However, the technology suffers from the effects of process instabilities on the quality of the resulting product, and from a lack of understanding of the relationships between the operating parameters and the characteristics of the in-flight particles. As a reminder, during APS spraying the arc-foot instability, electrode erosion, and operating-parameter instabilities cannot be completely eliminated, and it is still difficult today to measure and control these parameters well. Given the progress in diagnostic tools usable in hostile environments (as in the case of APS), effective closed-loop control of the process can now be envisaged; it requires the development of an expert system composed of artificial neural networks and fuzzy logic. Artificial neural networks have been developed in several application fields and are now being applied to thermal spraying; fuzzy logic is an extension of Boolean logic based on the mathematical theory of fuzzy sets. This work aims to build an on-line control model for the APS process based on elements of artificial intelligence, and to build an emulator that reproduces the dynamic behavior of the process as faithfully as possible. The artificial neural networks are then combined with the emulator to form a system that can monitor the process and automatically carry out corrective action; the system is tested off-line and its response time is discussed.
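A minimal sketch of the fuzzy-logic half of such an expert system, with triangular membership functions and a two-rule base invented for illustration; the thesis's actual variables and rule base are not reproduced here.

```python
# Hypothetical sketch of a tiny fuzzy controller: triangular membership
# functions and a two-rule base relating a particle-temperature error to
# an arc-current correction. Rules and ranges are invented placeholders.
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_current_correction(temp_error: float) -> float:
    """Map a particle-temperature error (K) to an arc-current change (A)."""
    # Fuzzify: degrees of membership in "too cold" and "too hot".
    too_cold = tri(temp_error, -400.0, -200.0, 0.0)
    too_hot = tri(temp_error, 0.0, 200.0, 400.0)
    # Rule base: too cold -> raise current (+20 A); too hot -> lower (-20 A).
    # Defuzzify by a weighted average of the rule consequents.
    weight = too_cold + too_hot
    if weight == 0.0:
        return 0.0
    return (too_cold * 20.0 + too_hot * (-20.0)) / weight

print(fuzzy_current_correction(-150.0))  # cold particles -> +20.0 A
```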
366

Cardiac Troponins in Patients with Suspected or Confirmed Acute Coronary Syndrome : New Applications for Biomarkers in Coronary Artery Disease

Eggers, Kai January 2007 (has links)
The cardiac troponins are the biochemical markers of choice for the diagnosis of acute myocardial infarction (AMI) and risk prediction in patients with acute coronary syndrome (ACS). In this thesis, the role of early serial cardiac troponin I (cTnI) testing was assessed in fairly unselected patient populations admitted because of chest pain and participating in the FAST II study (n=197) and the FASTER I study (n=380). Additionally, the importance of cTnI testing in stable post-ACS patients from the FRISC II study (n=1092) was studied. The analyses in chest pain patients demonstrate that cTnI is very useful for early diagnostic and prognostic assessment. As early as 2 hours after admission, cTnI allowed reliable exclusion of AMI and identification of low-risk patients when ECG findings and a renal marker such as cystatin C were added as adjuncts. Other biomarkers such as CK-MB, myoglobin, NT-proBNP, and CRP did not provide superior clinical information. However, myoglobin may be valuable in combination with cTnI results for early prediction of an impending major AMI when used as an input variable for an artificial neural network; such an approach, applying cTnI results alone, may also further improve the early diagnosis of AMI. Persistent cTnI elevation > 0.01 μg/L was detectable with a high-sensitivity assay in 26% of the stable post-ACS patients from the FRISC II study. The NT-proBNP level at 6 months was, besides male gender, the variable most strongly and independently associated with persistent cTnI elevation, indicating a relationship between adverse left ventricular remodeling processes and cTnI leakage. Patients with persistent cTnI elevation had a considerable risk of both mortality and AMI during 5-year follow-up. These analyses thus confirm the value of cTnI for early assessment of chest pain patients and provide new and unique evidence regarding the role of cTnI in risk prediction in post-ACS populations.
368

Unstructured Road Recognition and Following for Mobile Robots via Image Processing Using ANNs

Dilan, Askin Rasim 01 June 2010 (has links) (PDF)
For an autonomous outdoor mobile robot, the ability to detect surrounding roads is a vital capability. Unstructured roads are among the toughest challenges for a mobile robot, both in terms of detection and navigation. Even though mobile robots use various sensors to interact with their environment, cameras are a comparatively low-cost and rich source of information whose potential should be fully utilized. This research systematically investigates the potential use of streaming camera images for detecting unstructured roads, focusing on methods employing Artificial Neural Networks (ANNs). An exhaustive test process is followed in which different kernel sizes and feature vectors are varied systematically, with training carried out via backpropagation in a feed-forward ANN. The thesis also claims a contribution in the creation of test data, where ground-truth images are created almost in real time by exploiting the dexterity of human hands. Various road profiles, ranging from human-made unstructured roads to trails, are investigated. The output of the ANNs indicating road regions is validated against the vanishing point computed in the scene, and a heading vector is computed to keep the robot on the road. As a result, it is shown that, although a robot cannot fully rely on camera images for heading computation as proposed, image-based heading computation can provide useful assistance to the other sensors present on a mobile robot.
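A rough sketch of the patch-classification setup the abstract implies; the patch size, color-mean features, toy data, and scikit-learn MLP are assumptions, not the thesis's kernels or training details.

```python
# Hypothetical sketch: classify square image patches as road / non-road
# with a feed-forward ANN. Patch size, color-mean features, and the
# scikit-learn MLP are illustrative assumptions only.
import numpy as np
from sklearn.neural_network import MLPClassifier

PATCH = 16  # kernel (patch) size in pixels, assumed

def patch_features(img: np.ndarray) -> np.ndarray:
    """Split an HxWx3 image into patches; feature = per-channel mean."""
    h, w, _ = img.shape
    feats = []
    for y in range(0, h - PATCH + 1, PATCH):
        for x in range(0, w - PATCH + 1, PATCH):
            feats.append(img[y:y+PATCH, x:x+PATCH].mean(axis=(0, 1)))
    return np.array(feats)

# Toy data: "road" patches are grayish, "grass" patches are greenish.
rng = np.random.default_rng(0)
road = rng.normal([120, 120, 120], 10, size=(200, 3))
grass = rng.normal([60, 140, 50], 10, size=(200, 3))
X = np.vstack([road, grass])
y = np.array([1] * 200 + [0] * 200)

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000).fit(X, y)

frame = rng.normal(120, 10, size=(64, 64, 3))   # stand-in camera frame
mask = net.predict(patch_features(frame))       # 1 = road patch
print(mask.reshape(4, 4))                       # coarse road mask
```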
369

Customer Load Profiling and Aggregation

Chang, Rung-Fang 28 June 2002 (has links)
Power industry restructuring has created many opportunities for customers to reduce their electricity bills. To facilitate retail choice in a competitive power market, knowledge of the hourly load shape of each customer class is necessary, since requiring a meter as a prerequisite for lower-voltage customers to choose a power supplier is not considered practical at present. Energy Service Providers (ESPs) therefore need a technique, based on load research and customers' monthly energy usage data, for a preliminary screening that assigns customers to specific load profiles with certainty factors. Distribution systems supply electricity to different mixtures of customers, and owing to the lack of field measurements, the load-point data used in distribution network studies carry various degrees of uncertainty. To take the expected demand uncertainties into account, many previous methods have used fuzzy load models, but the issue of deriving these models has not been discussed; an approach for building them is needed. Load aggregation allows customers to purchase electricity at a lower price, and in some contracts the load factor is a critical aspect of aggregation; to facilitate better load aggregation in distribution networks, feeder reconfiguration can be used to improve the load factor of a distribution subsystem. To solve these problems, two data mining techniques, the fuzzy c-means (FCM) method and an Artificial Neural Network (ANN) based pattern recognition technique, are proposed for load profiling and customer class assignment. As a variant of previous load profiling techniques, customer hourly load distributions obtained from load research are converted into fuzzy membership functions based on a possibility-probability consistency principle. With the customer-class fuzzy load profiles, customer monthly power consumption, and feeder load measurements, the hourly load of each distribution transformer on a feeder can be estimated and used in distribution network analysis. After the feeder models are established, feeder reconfiguration based on the binary particle swarm optimization (BPSO) technique is used to improve feeder load factors. Test results on several simple sample networks show that the proposed feeder reconfiguration method can improve customers' position to bargain for good electricity service.
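A minimal sketch of the textbook fuzzy c-means iteration underlying the proposed load profiling; the cluster count, fuzzifier m, and random data are assumptions, and the thesis's possibility-probability conversion step is not reproduced.

```python
# Hypothetical sketch: textbook fuzzy c-means (FCM) on daily load curves.
# Cluster count, fuzzifier m, and the random toy data are assumptions.
import numpy as np

def fcm(X, c=3, m=2.0, iters=100, seed=0):
    """Return cluster centers and the fuzzy membership matrix U (n x c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)               # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Toy data: 200 customers x 24 hourly readings.
X = np.random.rand(200, 24)
centers, U = fcm(X, c=3)
print(U[:3].round(2))  # each row: a customer's membership in 3 profiles
```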
370

Artificial neural network (ANN) based decision support model for alternative workplace arrangements (AWA): readiness assessment and type selection

Kim, Jun Ha 11 November 2009 (has links)
A growing body of evidence shows that globalization and advances in information and communication technology (ICT) have prompted a revolution in the way work is produced. One of the most notable changes is the establishment of alternative workplace arrangements (AWAs), in which workers have more freedom in their work hours and workplaces. Just as not all organizations are good candidates for AWA adoption, not all work types, employees, or levels of facilities support are good candidates either. The main problem is that facility managers have no established tools to assess their readiness for AWA adoption or to select which AWA type is most appropriate given their organization's business reasons or objectives and its current readiness level. This dissertation developed readiness level assessment indicators (RLAI), which measure the initial readiness of high-tech companies for adopting AWAs, and an ANN-based decision model that allows facility managers to predict not only an appropriate AWA type but also the anticipated satisfaction level, considering the objectives and the current readiness level. The research identified significant factors and attributes for facility managers to consider when measuring their organization's readiness for AWA adoption. The robust predictive performance of the ANN model shows that the key determinants have been correctly identified in RLAI and can be used to predict an appropriate AWA type as well as a high-tech company's satisfaction level with the AWA adoption.
