251

Tillämpning av maskininlärning för att införa automatisk adaptiv uppvärmning genom en studie på KTH Live-In Labs lägenheter / Applying machine learning to introduce automatic adaptive heating: a study of KTH Live-In Lab's apartments

Vik, Emil, Åsenius, Ingrid January 2020 (has links)
The purpose of this study is to investigate whether it is possible to decrease Sweden's energy consumption through adaptive heating that uses climate data and machine learning to detect occupancy in apartments. The study was carried out using environmental data from one of the KTH Live-In Lab apartments. The data was first used to investigate whether occupancy can be detected through machine learning, and was then used as input to an adaptive heating model to investigate the potential benefits for heating energy consumption and costs. The results of the study show that occupancy can be detected using environmental data, but not with 100% accuracy. They also show that the features with the greatest impact on detecting occupancy are light and carbon dioxide, and that the best-performing machine learning algorithm for the dataset used is the Decision Tree algorithm. The potential energy savings through adaptive heating were estimated to be up to 10.1%. The final part of the paper discusses how a value-creating service could be built around adaptive heating and its prospects of reaching the market.
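The occupancy-detection step described in this abstract can be sketched with a decision tree on environmental features. Everything below (the feature names, the synthetic light and CO2 readings, the class separation) is an illustrative assumption, not the thesis's data or code:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 400
occupied = rng.integers(0, 2, size=n)
# Assumed behaviour: occupied rooms are brighter and have higher CO2.
light = np.where(occupied == 1, rng.normal(300, 60, n), rng.normal(40, 15, n))
co2 = np.where(occupied == 1, rng.normal(800, 120, n), rng.normal(450, 50, n))
X = np.column_stack([light, co2])

X_tr, X_te, y_tr, y_te = train_test_split(X, occupied, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
accuracy = tree.score(X_te, y_te)  # held-out accuracy on the toy data
```

The fitted tree's thresholds (e.g. on the light feature) can then be inspected, which is one reason decision trees suit this kind of study.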
252

Current based condition monitoring of electromechanical systems. Model-free drive system current monitoring: faults detection and diagnosis through statistical features extraction and support vector machines classification.

Bin Hasan, M.M.A. January 2012 (has links)
This thesis presents a non-invasive, on-line method for detecting mechanical faults (rotor and bearing eccentricity) and stator winding faults in 3-phase induction motors from observation of the motor's line current. The main aim is to avoid the consequences of unexpected failure of critical equipment, which can result in extended process shutdowns, costly machinery repair, and health and safety problems. The thesis looks into the possibility of utilizing machine learning techniques in the field of condition monitoring of electromechanical systems, with induction motors chosen as an example application. Electrical motors play a vital role in everyday life, and induction motors are kept in operation by monitoring their condition continuously in order to minimise downtime. The author proposes a model-free, sensor-less monitoring system in which the only monitored signal is the input to the induction motor. The thesis considers the methods available in the literature for condition monitoring of induction motors and adopts a simple solution based on monitoring the motor current. The proposed method uses feature extraction and Support Vector Machines (SVM) to set the limits for healthy and faulty data on a statistical basis. After an extensive overview of the related literature, the motor, which acts as the virtual sensor in the drive system, is analysed in terms of its construction and principle of operation, and a mathematical model of the motor is used to analyse the system. This is followed by laboratory testing of healthy motors and comparison of their output signals with those of the same motors after faults were intentionally introduced, concluding with the development of a full monitoring system that can detect the presence of a fault in the monitored machine and diagnose its type and severity. / Ministry of Higher Education, Libya; Switchgear & Instruments Ltd.
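The statistical-features-plus-SVM pipeline this abstract describes can be illustrated on a simulated line current. The signal model (a fault as low-frequency sideband modulation), the sampling parameters, and the chosen features are assumptions for the sketch, not the thesis's setup:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
fs, f0 = 1000, 50          # assumed sampling rate and supply frequency (Hz)
t = np.arange(0, 1, 1 / fs)

def current_window(faulty):
    """One second of simulated line current; a fault adds sideband modulation."""
    sig = np.sin(2 * np.pi * f0 * t) + 0.05 * rng.normal(size=t.size)
    if faulty:
        sig = sig * (1 + 0.5 * np.sin(2 * np.pi * 5 * t))
    return sig

def features(sig):
    """Simple statistical features of one current window."""
    centered = sig - sig.mean()
    kurtosis = (centered ** 4).mean() / centered.var() ** 2
    return [sig.std(), np.abs(sig).mean(), kurtosis]

labels = rng.integers(0, 2, size=200)
X = np.array([features(current_window(f)) for f in labels])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
svm_acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
```

Here the SVM's decision boundary plays the role of the statistically derived limits between healthy and faulty data.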
253

Designing Reactive Power Control Rules for Smart Inverters using Machine Learning

Garg, Aditie 14 June 2018 (has links)
Due to the increasing penetration of solar power generation, distribution grids are facing a number of challenges. Frequent reverse active power flows can result in rapid fluctuations in voltage magnitudes. However, under the revised IEEE 1547 standard, smart inverters can actively control their reactive power injection to minimize voltage deviations and power losses in the grid. Globally optimal inverter coordination in real time is demanding both computationally and in communication terms, whereas local Volt-VAR or Watt-VAR control rules are subpar for enhanced grid services. This thesis uses machine learning tools and poses reactive power control as a kernel-based regression task, learning policies that evaluate the reactive power injections in real time. This approach performs inverter coordination through non-linear control policies designed centrally by the operator on a slower timescale, using anticipated scenarios for load and generation. In real time, the inverters feed locally and/or globally collected grid data into the customized control rules. The developed models are highly adjustable to the available computation and communication resources. The control scheme is tested on the IEEE 123-bus system and is seen to efficiently minimize losses and regulate voltage within the permissible limits. / Master of Science / The increasing integration of solar photovoltaic (PV) systems poses both opportunities and technical challenges for the electrical distribution grid. Although PV systems provide more power to the grid, they can also lead to operational problems such as overvoltages and voltage fluctuations. These variations can lead to overheating and burning of electrical devices and to equipment malfunction. Since solar generation is highly dependent on weather and geographical location, its output is uncertain.
The uncertainty in solar irradiance cannot be handled by the existing voltage control devices, as they would need to operate more frequently than usual, causing recurring maintenance needs. Thus, to make solar PV more flexible and grid-friendly, smart inverters are being developed. Smart inverters have advanced sensing, communication, and controllability capabilities that can be utilized for voltage control. This research discusses how the inverters can be used to improve the grid profile by providing reactive power support to reduce power losses and maintain voltages within their limits for safer operation.
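The idea of posing reactive power control as kernel-based regression can be sketched as follows. The scenario data, the Volt-VAR-like target function, and the feature scaling are all assumptions standing in for the operator's offline optimization; this is not the thesis's model:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(2)
# Assumed offline scenarios: local voltage magnitude and active power injection.
voltage = rng.uniform(0.95, 1.05, 300)          # per unit
p_inj = rng.uniform(0.0, 1.0, 300)              # per unit
dv = (voltage - 1.0) * 20.0                     # scaled voltage-deviation feature
X = np.column_stack([dv, p_inj])

# Stand-in for the setpoints an operator would obtain offline: inject
# reactive power against voltage deviations (a Volt-VAR-like nonlinear target).
q_opt = -2.0 * np.tanh(dv) + 0.1 * p_inj

rule = KernelRidge(kernel="rbf", alpha=1e-3).fit(X, q_opt)
# In real time the inverter feeds fresh local measurements into the learned rule.
q_pred = rule.predict([[(0.97 - 1.0) * 20.0, 0.5]])[0]   # undervoltage case
```

For the undervoltage query the learned rule returns a positive reactive power injection, as the Volt-VAR-like target prescribes.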
254

Aplikace umělé inteligence v řízení kreditních rizik / Artificial Intelligence Approach to Credit Risk

Říha, Jan January 2016 (has links)
This thesis focuses on the application of artificial intelligence techniques in credit risk management. These modern tools are compared with the current industry standard, Logistic Regression. We introduce the theory underlying Neural Networks, Support Vector Machines, Random Forests and Logistic Regression, and present a methodology for the statistical and business evaluation and comparison of these models. We find that models based on the Neural Networks approach (specifically the Multi-Layer Perceptron and the Radial Basis Function Network) outperform Logistic Regression on the standard statistical metrics as well as on the business metrics. The performance of the Random Forest and Support Vector Machines is not satisfactory, and these models do not prove superior to Logistic Regression in our application.
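A benchmark of this shape can be sketched in a few lines: several classifiers scored against Logistic Regression on a common metric. The synthetic data and AUC as the sole metric are assumptions for illustration; the thesis uses real credit data and both statistical and business metrics:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for a credit scoring dataset.
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logit": LogisticRegression(max_iter=1000),
    "mlp": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "forest": RandomForestClassifier(random_state=0),
    "svm": SVC(probability=True, random_state=0),
}
auc = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
```

Ranking the resulting `auc` values reproduces the kind of comparison the thesis reports; which model wins depends entirely on the data.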
255

Artificial Intelligence Aided Rapid Trajectory Design in Complex Dynamical Environments

Ashwati Das (6638018) 14 May 2019 (has links)
Designing trajectories in dynamically complex environments is challenging and can easily become intractable via solely manual design efforts. Thus, the problem is recast to blend traditional astrodynamics approaches with machine learning, yielding a rapid and flexible trajectory design framework. This framework incorporates knowledge of the spacecraft performance specifications via the computation of Accessible Regions (ARs) that accommodate specific spacecraft acceleration levels for varied mission scenarios in a complex multi-body dynamical regime. Specifically, pathfinding agents, via Heuristically Accelerated Reinforcement Learning (HARL) and Dijkstra's algorithms, engage in a multi-dimensional combinatorial search to sequence advantageous natural states emerging from the ARs and so construct initial guesses for end-to-end transfers. These alternative techniques incorporate various design considerations, for example prioritizing computational time versus the pursuit of globally optimal solutions to meet multi-objective mission goals. The initial guesses constructed by the pathfinding agents then leverage traditional numerical corrections processes to deliver continuous transport of a spacecraft from departure to destination. Solutions computed in the medium-fidelity Circular Restricted Three-Body Problem (CR3BP) model are then transitioned to a higher-fidelity ephemeris regime, where the impact of time-dependent gravitational influences from multiple bodies is also explored.

A broad trade-space arises in this investigation, in large part due to the rich and diverse dynamical flows available in the CR3BP. These dynamical pathways are included in the search space via: (i) a pre-discretized database of known periodic orbit families; (ii) flow models of these families of orbits/arcs 'trained' via the supervised learning algorithms Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs); and, finally, (iii) a free-form search that permits selection of both chaotic and ordered motion. All three approaches deliver variety in the constructed transfer paths. The first two options offer increased control over the nature of the transfer geometry, while the free-form approach eliminates the need for a priori knowledge about available flows in the dynamical environment. The design framework enables varied transfer scenarios including orbit-to-orbit transport, spacecraft recovery during contingency events, and rendezvous with a pre-positioned object at an arrival orbit. Realistic mission considerations, such as altitude constraints with respect to a primary, are also incorporated.
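The combinatorial-search idea, sequencing discretized states by transfer cost with Dijkstra's algorithm, can be sketched on a toy graph. The node names and edge weights below are invented stand-ins for arcs drawn from periodic orbit families and their delta-v-like costs; they are not CR3BP data:

```python
import heapq

def dijkstra(graph, start, goal):
    """Return (cost, path) of the cheapest route from start to goal."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, edge_cost in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + edge_cost, nxt, path + [nxt]))
    return float("inf"), []

# Toy state graph: nodes stand in for discretized natural arcs, edge
# weights for assumed transfer costs between them.
graph = {
    "departure": {"L1_lyapunov": 0.4, "L2_halo": 0.9},
    "L1_lyapunov": {"L2_halo": 0.3, "arrival": 1.0},
    "L2_halo": {"arrival": 0.2},
}
cost, path = dijkstra(graph, "departure", "arrival")
```

The returned sequence of states is exactly the kind of initial guess that a numerical corrections process would then turn into a continuous transfer.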
256

Máquinas de Vetores Suporte e a Análise de Gestos: incorporando aspectos temporais / Support Vector Machines and Gesture Analysis: incorporating temporal aspects

Madeo, Renata Cristina Barros 15 May 2013 (has links)
Recently, an increasing interest in gesture analysis has been noted in computer science. Some of this research aims to support researchers in "gesture studies", the field that studies the use of body parts for communicative purposes. Researchers in "gesture studies" analyze gestures from transcriptions of conversations and discourses recorded on video. For gesture transcription, segmentation into gesture units and gesture phases is usually employed. This study aims to develop strategies for the automated segmentation of the gesture units and gesture phases contained in a video, in the context of storytelling, formulating the problem as a supervised classification task. Support Vector Machines were selected as the classification method because of their generalization ability and the good results they have obtained on many complex problems. Support Vector Machines, however, do not consider the temporal aspects of the data, characteristics that are important for gesture analysis. Therefore, this work investigates temporal representation methods and variations of Support Vector Machines that incorporate temporal reasoning. Several experiments were performed in this context for gesture unit segmentation; the best results were obtained with traditional Support Vector Machines applied to windowed data. In addition, three multiclass classification strategies were applied to the problem of gesture phase segmentation. The results indicate that good segmentation performance can be obtained by training the strategy on an initial part of the video in order to obtain an automated segmentation of the remainder. Thus, researchers in "gesture studies" could manually segment only part of a video, reducing the time needed to analyze the gestures contained in long recordings.
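The windowed-data trick the abstract highlights, giving a standard SVM temporal context by stacking neighbouring frames into each feature vector, can be sketched on a synthetic signal. The one-dimensional "velocity" feature and the block-shaped labels are assumptions for illustration:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(3)
T = 600
# Assumed signal: motion energy is higher during gesture strokes (label 1).
labels = (np.sin(np.linspace(0, 12 * np.pi, T)) > 0).astype(int)
velocity = labels * rng.normal(1.0, 0.3, T) + (1 - labels) * rng.normal(0.1, 0.1, T)

w = 5  # w frames of context on each side of the frame being classified
X = np.array([velocity[i - w:i + w + 1] for i in range(w, T - w)])
y = labels[w:T - w]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
windowed_acc = SVC().fit(X_tr, y_tr).score(X_te, y_te)
```

Each row of `X` is a small temporal window, so the otherwise time-blind SVM can exploit the frames around a transition.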
257

Reconhecimento de imagens de marcas de gado utilizando redes neurais convolucionais e máquinas de vetores de suporte / Recognition of cattle branding images using convolutional neural networks and support vector machines

Santos, Carlos Alexandre Silva dos 26 September 2017 (has links)
The automatic recognition of cattle branding images is a necessity for the government agencies responsible for this activity. To assist in this process, this work proposes an architecture capable of performing the automatic recognition of these brandings. An architecture was implemented and experiments were carried out with two methods: Bag-of-Features and Convolutional Neural Networks (CNN). For the Bag-of-Features method, the SURF algorithm was used to extract points of interest from the images, and K-means clustering was used to create the visual word vocabulary. The Bag-of-Features method achieved an overall accuracy of 86.02% and a processing time of 56.705 seconds on a set of 12 brandings and 540 images. For the CNN method, a complete network was created with five convolutional layers and three fully connected layers. The first convolutional layer took as input images converted to the RGB color format; the ReLU activation function was used, with max-pooling for reduction. The CNN method achieved an overall accuracy of 93.28% and a processing time of 12.716 seconds on the same set of 12 brandings and 540 images. The CNN method comprises six steps: a) selecting the image database; b) selecting the pre-trained CNN model; c) pre-processing the images and applying the CNN; d) extracting the features from the images; e) training and classifying the images using SVM; f) assessing the classification results. The experiments were performed using the cattle branding image set of a city hall. Overall accuracy, recall, precision, the Kappa coefficient, and processing time were used to assess the performance of the proposed architecture. The results were satisfactory: the CNN method showed the best results compared to the Bag-of-Features method, being 7.26% more accurate and 43.989 seconds faster. Experiments were also conducted with the CNN method on sets of brandings with larger numbers of samples, obtaining overall accuracy rates of 94.90% for 12 brandings and 840 images, and 80.57% for 500 brandings and 22,500 images, respectively.
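The Bag-of-Features pipeline described above (local descriptors clustered into visual words, images represented as word histograms, then SVM classification) can be sketched with random vectors standing in for SURF descriptors. The vocabulary size, descriptor dimensionality, and class structure are all assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(4)
k = 8  # visual vocabulary size (an assumed value)

def fake_descriptors(brand_class, n_desc=30):
    # Stand-in for SURF descriptors; each branding class clusters around its own mean.
    return rng.normal(loc=float(brand_class), scale=0.3, size=(n_desc, 16))

images = [(fake_descriptors(c), c) for c in range(3) for _ in range(20)]
all_desc = np.vstack([d for d, _ in images])
vocab = KMeans(n_clusters=k, n_init=5, random_state=0).fit(all_desc)

def bof_histogram(desc):
    # Each image becomes a normalized histogram of visual-word counts.
    words = vocab.predict(desc)
    return np.bincount(words, minlength=k) / len(words)

X = np.array([bof_histogram(d) for d, _ in images])
y = np.array([c for _, c in images])
bof_acc = SVC().fit(X[::2], y[::2]).score(X[1::2], y[1::2])
```

The CNN variant follows the same final step, replacing the histogram with learned convolutional features before the SVM.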
258

Extração de conhecimento simbólico em técnicas de aprendizado de máquina caixa-preta por similaridade de rankings / Symbolic knowledge extraction from black-box machine learning techniques with ranking similarities

Bianchi, Rodrigo Elias 26 September 2008 (has links)
Non-symbolic Machine Learning techniques such as Artificial Neural Networks, Support Vector Machines and ensembles of classifiers have shown good performance when used for data analysis. The main limitation of these techniques is the lack of comprehensibility of the knowledge stored in their internal structures. This thesis presents an investigation of methods capable of extracting comprehensible representations of the knowledge acquired by these non-symbolic techniques, here called black-box techniques, during their learning process. The main contribution of this work is the proposal of a new pedagogical method for extracting rules that explain the classification process followed by non-symbolic techniques. The new method is based on the optimization (maximization) of the similarity between classification rankings produced by symbolic and non-symbolic (from which the internal knowledge is being extracted) Machine Learning techniques. Experiments were performed on several datasets, and the results obtained suggest good potential for the proposed method.
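The pedagogical setting can be sketched as follows: a black-box model is queried, a symbolic surrogate is fit to its outputs, and the similarity between the two models' score rankings measures fidelity. This sketch uses a plain tree regression plus Spearman correlation; the thesis's actual method optimizes the ranking similarity directly, which this toy does not do:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeRegressor

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                          random_state=0).fit(X, y)
scores = black_box.predict_proba(X)[:, 1]

# Pedagogical step: a symbolic surrogate (decision tree) is trained on the
# black box's scores, so its readable rules approximate the hidden model.
surrogate = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, scores)
rank_similarity, _ = spearmanr(scores, surrogate.predict(X))
```

A high `rank_similarity` means the extracted rules order the examples much as the black box does, which is the fidelity notion the thesis builds on.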
259

Identification et caractérisation des perturbations affectant les réseaux électriques HTA. / Identification and Characterization of Power Quality Disturbances affecting MV Distribution Networks

Caujolle, Mathieu 27 September 2011 (has links)
The recognition of disturbances affecting MV networks is essential both to industrial customers and to distribution system operators. The aim of this thesis work is to design a near real-time automatic system able to detect and identify disturbances from their waveforms. Segmentation methods split the disturbed waveforms into transient and steady-state intervals, using linear Kalman filters or anti-harmonic filters to extract the transient intervals. Adaptive thresholding methods increase the detection capacity, while a posteriori delay-compensation methods improve the accuracy of the decomposition. Indicators adapted to the dynamics of the analyzed operating regimes are used to characterize the steady-state and transient phases of each disturbance. They are robust to segmentation inaccuracies as well as to steady-state disturbances such as harmonics, allowing a reliable description of the disturbance phases. Two distinct decision systems are also studied: expert recognition systems and SVM classifiers. During the learning stage, a large database of simulated events is used to train both systems. Their performance is evaluated on real events: the type and direction of the measured disturbances are determined with an average recognition rate above 98%.
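A heavily simplified version of the segmentation idea can be sketched on a simulated voltage sag: threshold a sliding one-cycle RMS to find the boundaries of the disturbed interval. The thesis uses Kalman/anti-harmonic filters and adaptive thresholds; this toy fixed-threshold detector only shows the principle, and the signal parameters are assumptions:

```python
import numpy as np

fs, f0 = 3200, 50                 # assumed sampling and fundamental frequencies (Hz)
t = np.arange(0, 0.5, 1 / fs)
amplitude = np.where((t >= 0.2) & (t < 0.3), 0.6, 1.0)   # 40% sag lasting 100 ms
v = amplitude * np.sin(2 * np.pi * f0 * t)

win = fs // f0                    # one fundamental cycle per RMS window
rms = np.sqrt(np.convolve(v ** 2, np.ones(win) / win, mode="valid"))
nominal = 1 / np.sqrt(2)
in_sag = rms < 0.9 * nominal      # classic 90% dip threshold

# Segment boundaries: sample indices where the dip flag toggles.
edges = np.flatnonzero(np.diff(in_sag.astype(int)))
start_s, end_s = edges[0] / fs, edges[-1] / fs
```

The detected boundaries land within about one fundamental cycle of the true sag interval, which is the kind of timing error the thesis's delay-compensation methods are designed to reduce.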
260

W-operator learning using linear models for both gray-level and binary inputs / Aprendizado de w-operadores usando modelos lineares para imagens binárias e em níveis de cinza

Igor dos Santos Montagner 12 June 2017 (has links)
Image Processing techniques can be used to solve a broad range of problems, such as medical imaging, document processing and object segmentation. Image operators are usually built by combining basic image operators and tuning their parameters, which requires both experience in Image Processing and trial and error to find the best combination of parameters. An alternative approach to designing image operators is to estimate them from pairs of training images containing examples of the expected input and their processed versions. By restricting the learned operators to those that are translation invariant and locally defined ($W$-operators), we can apply Machine Learning techniques to estimate image transformations. The shape that defines which neighbors are used is called a window. $W$-operators trained with large windows usually overfit due to the lack of sufficient training data, an issue that is even more pronounced when training operators with gray-level inputs. Although approaches such as the two-level design, which combines multiple operators trained on smaller windows, partly mitigate these problems, they also require more complicated parameter determination to achieve good results. In this work we present techniques that increase the window sizes we can use and decrease the number of manually defined parameters in $W$-operator learning. The first one, KA, is based on Support Vector Machines and employs kernel approximations to estimate image transformations; we also present kernels adequate for processing binary and gray-level images. The second technique, NILC, automatically finds small subsets of operators that can be successfully combined using the two-level approach, employing an optimization technique suited to cases in which the number of features is very large. Both methods achieve results competitive with methods from the literature in two different application domains: a binary document processing problem (staff removal) common in Optical Music Recognition, and a blood vessel segmentation problem in gray-level images. The same techniques were applied without modification in both domains.
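The core of the KA idea, learning a locally defined, translation-invariant operator from window-label pairs via an explicit kernel approximation, can be sketched on a toy binary image. The erosion-like target rule, the window size, and the use of `RBFSampler` with a linear classifier are assumptions; the thesis's kernels and training procedure differ:

```python
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
img = (rng.random((40, 40)) < 0.8).astype(int)

def target(im):
    # Toy W-operator to learn: a pixel stays 1 only if its whole 3x3
    # neighbourhood is 1 (an erosion-like rule, chosen for illustration).
    out = np.zeros_like(im)
    h, w = im.shape
    out[1:-1, 1:-1] = (
        sum(im[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)) == 9
    )
    return out

def windows(im, r=1):
    # Extract the 3x3 window around every interior pixel (the W-operator input).
    h, w = im.shape
    return np.array([im[y - r:y + r + 1, x - r:x + r + 1].ravel()
                     for y in range(r, h - r) for x in range(r, w - r)])

X = windows(img)
y = target(img)[1:-1, 1:-1].ravel()
# Explicit kernel approximation + linear model, in place of a full kernel SVM.
model = make_pipeline(RBFSampler(gamma=1.0, n_components=200, random_state=0),
                      LogisticRegression(max_iter=1000))
w_op_acc = model.fit(X, y).score(X, y)
```

Because the randomized feature map is explicit, the fitted operator can be applied pixel-by-pixel to new images of any size, which is what makes the approach practical for large windows.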
