  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Methods for the approximation of network centrality measures / Métodos para a aproximação de medidas de centralidade de redes

Grando, Felipe January 2018 (has links)
Centrality measures are an important analysis mechanism for uncovering vital information about complex networks. However, these metrics have high computational costs that hinder their application to large real-world networks. I propose and show that artificial neural learning algorithms can make the application of such metrics feasible in networks of arbitrary size. Moreover, I identify the best configuration and methodology for neural learning to optimize its accuracy, and present an easy way to acquire and generate plentiful, meaningful training data via a complex-network model that is adaptable to any application. In addition, I compare the proposed neural-learning technique with different centrality approximation methods from the literature, including sampling and other machine learning methodologies, and test the neural model in real-world scenarios. My results show that the regression model generated by the neural network successfully approximates the metric values and is an effective alternative in real-world applications. The proposed methodology and machine learning model use only a fraction of the computing time required by commonly applied sampling-based approximation algorithms, and are more robust than the other machine learning techniques tested.
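The approach the abstract describes can be sketched roughly as follows: train a neural regression model on cheap structural features of nodes from synthetic complex networks, then apply it to estimate an expensive centrality measure (here betweenness) on an unseen graph. The feature choice, network sizes, and model configuration below are illustrative assumptions, not the thesis's exact setup.

```python
# Sketch: approximate betweenness centrality with a neural regression model
# trained on cheap per-node features. Assumptions: degree and eigenvector
# centrality as input features; Barabasi-Albert graphs standing in for the
# adaptable training-data generator described in the abstract.
import networkx as nx
import numpy as np
from sklearn.neural_network import MLPRegressor

def node_features(g):
    """Cheap per-node features: degree centrality and eigenvector centrality."""
    deg = nx.degree_centrality(g)
    eig = nx.eigenvector_centrality_numpy(g)
    return np.array([[deg[v], eig[v]] for v in g])

# Training data from a synthetic complex-network model.
train_graphs = [nx.barabasi_albert_graph(200, 3, seed=s) for s in range(5)]
X = np.vstack([node_features(g) for g in train_graphs])
y = np.concatenate([[nx.betweenness_centrality(g)[v] for v in g]
                    for g in train_graphs])

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)

# On a new, unseen graph only the cheap features need to be computed.
test_g = nx.barabasi_albert_graph(500, 3, seed=99)
approx = model.predict(node_features(test_g))
```

The payoff is that exact betweenness costs roughly O(nm) per graph, while the trained model only needs the (much cheaper) input features at prediction time.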
2

Exploiting diversity for efficient machine learning

Geras, Krzysztof Jerzy January 2018 (has links)
A common practice for solving machine learning problems is currently to consider each problem in isolation, starting from scratch every time a new learning problem is encountered or a new model is proposed. This is a perfectly feasible solution when the problems are sufficiently easy or, if the problem is hard, when a large amount of resources, in terms of both training data and computation, is available. Although this naive approach has been the main focus of machine learning research for a few decades and has had a lot of success, it becomes infeasible when the problem is too hard in proportion to the available resources. Using a complex model in this naive approach requires collecting large data sets (if that is possible at all) to avoid overfitting, and hence also requires large computational resources, first during training to process the data set and then at test time to execute the complex model. An alternative to treating each learning problem independently is to leverage related data sets and the computation encapsulated in previously trained models. By doing so we can decrease the amount of data necessary to reach a satisfactory level of performance and, consequently, improve the achievable accuracy and decrease training time. Our attack on this problem is to exploit diversity - in the structure of the data set, in the features learnt and in the inductive biases of different neural network architectures. In the setting of learning from multiple sources we introduce multiple-source cross-validation, which gives an unbiased estimator of the test error when the data set is composed of data coming from multiple sources and the data at test time come from a new, unseen source. We also propose new estimators of the variance of standard k-fold cross-validation and of multiple-source cross-validation, which have lower bias than previously known ones.
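The multiple-source cross-validation idea can be sketched as leave-one-source-out evaluation: each fold holds out one entire source, mimicking test data that come from a new, unseen source. The synthetic data, per-source shift, and ridge model below are placeholder assumptions, not the thesis's estimator derivations or experiments.

```python
# Sketch of multiple-source cross-validation: average the held-out error over
# folds defined by source membership rather than random splits.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

def multiple_source_cv(X, y, sources, make_model):
    """Mean held-out MSE over leave-one-source-out folds."""
    errors = []
    for s in np.unique(sources):
        held = sources == s
        model = make_model().fit(X[~held], y[~held])
        errors.append(mean_squared_error(y[held], model.predict(X[held])))
    return float(np.mean(errors))

rng = np.random.default_rng(0)
sources = np.repeat([0, 1, 2, 3], 50)        # 4 sources, 50 samples each
X = rng.normal(size=(200, 5))
shift = sources[:, None] * 0.1               # mild per-source distribution shift
y = (X + shift) @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) \
    + rng.normal(scale=0.1, size=200)

err = multiple_source_cv(X, y, sources, lambda: Ridge(alpha=1.0))
```

Unlike random k-fold splits, this estimate reflects the cost of the distribution shift between sources, which is exactly the situation at test time when a new source appears.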
To improve unsupervised learning we introduce scheduled denoising autoencoders, which learn a more diverse set of features than the standard denoising autoencoder. This is thanks to their training procedure, which starts with a high level of noise, when the network learns coarse features, and then lowers the noise gradually, allowing the network to learn more local features. A connection between this training procedure and curriculum learning is also drawn. We develop the idea of learning a diverse representation further by explicitly incorporating the goal of obtaining a diverse representation into the training objective. The proposed model, the composite denoising autoencoder, learns multiple subsets of features focused on modelling variations in the data set at different levels of granularity. Finally, we introduce the idea of model blending, a variant of model compression in which the two models, the teacher and the student, are both strong models but differ in their inductive biases. As an example, we train convolutional networks using the guidance of bidirectional long short-term memory (LSTM) networks. This allows the convolutional network to be trained to be more accurate than the LSTM network at no extra cost at test time.
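The scheduled-noise training procedure can be illustrated with a toy denoising autoencoder: the input corruption starts high and is annealed towards zero across epochs, while the reconstruction target is always the clean input. Everything here (data, sizes, schedule, learning rate) is an illustrative assumption, not the thesis's architecture.

```python
# Toy scheduled denoising autoencoder in NumPy: corrupt the input with noise
# whose level is annealed from high to low over training, reconstruct the
# clean input, and track the reconstruction loss.
import numpy as np

rng = np.random.default_rng(0)
# Low-rank synthetic data so there is structure for the autoencoder to recover.
Z = rng.normal(size=(256, 4))
M = rng.normal(scale=0.5, size=(4, 20))
X = Z @ M                                     # 256 samples, 20 dimensions

n_in, n_hid, lr, epochs = 20, 16, 0.1, 300
W1 = rng.normal(scale=0.1, size=(n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.1, size=(n_hid, n_in)); b2 = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for epoch in range(epochs):
    noise_std = 1.0 * (1 - epoch / epochs)    # scheduled corruption: high -> low
    Xn = X + rng.normal(scale=noise_std, size=X.shape)
    H = sigmoid(Xn @ W1 + b1)                 # encode the corrupted input
    R = H @ W2 + b2                           # reconstruct
    err = R - X                               # target is the *clean* input
    losses.append(float((err ** 2).sum(1).mean()))
    # plain full-batch gradient step
    dW2 = H.T @ err / len(X); db2 = err.mean(0)
    dH = (err @ W2.T) * H * (1.0 - H)
    dW1 = Xn.T @ dH / len(X); db1 = dH.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2
```

The curriculum intuition is visible in the loop: early epochs see heavily corrupted inputs and can only fit coarse structure; as `noise_std` shrinks, finer detail of the clean data becomes learnable.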
3

Community Hawkes Models for Continuous-time Networks

Soliman, Hadeel 15 September 2022 (has links)
No description available.
4

Model za predviđanje količine ambalažnog i biorazgradivog otpada primenom neuronskih mreža / A model for predicting quantities of packaging and biodegradable waste using neural networks

Batinić, Bojan 08 May 2015 (has links)
Using artificial neural networks, models were developed to predict the quantities of packaging and biodegradable municipal waste in the Republic of Serbia through the end of 2030. The models are based on the dependence between total household consumption and the generated quantities of the two observed waste streams. In addition, based on the dependence on Gross Domestic Product (GDP), a model was created to project the share of different municipal solid waste treatment options in the Republic of Serbia over the same period. The results provide a starting point for assessing the recycling potential of packaging waste and for estimating the quantities of biodegradable municipal waste that can be expected to be diverted from landfills in the coming period, in accordance with modern waste management principles and existing EU requirements in this area.
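The core dependence the abstract describes, household consumption driving waste generation, can be sketched as a small neural regression fitted on consumption/waste pairs and then queried at a projected consumption level. The data below are synthetic placeholders with an assumed linear trend, not the Serbian statistics used in the thesis.

```python
# Sketch: an ANN regressor mapping total household consumption (thousand EUR,
# assumed units) to generated packaging-waste quantity (kg, assumed units).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
consumption = rng.uniform(5.0, 20.0, size=(100, 1))          # thousand EUR, assumed
waste = 20.0 * consumption[:, 0] + rng.normal(scale=5.0, size=100)  # kg, assumed trend

model = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                     max_iter=5000, random_state=0)
model.fit(consumption, waste)

# Query the fitted dependence at a projected future consumption level.
forecast = model.predict([[25.0]])
```

In the thesis's setting, the projection horizon (to 2030) would come from forecasting consumption itself and feeding those values through the fitted model.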
5

Optimal operation of RO system with daily variation of freshwater demand and seawater temperature

Sassi, Kamal M., Mujtaba, Iqbal January 2013 (has links)
The optimal operation policy of flexible RO systems is studied in this work. The design and operation of the RO process are optimized and controlled considering variations in water demand and changing seawater temperature throughout the day. A storage tank is added to the system layout to provide additional operational flexibility and to ensure the availability of freshwater to customers at all times. A steady-state model of the RO process is developed and linked with a dynamic model of the storage tank. The membrane modules are divided into a number of groups to add operational flexibility to the RO network. The total operating cost of the RO process is minimized in order to find the optimal layout and operating variables at discrete time intervals for three design scenarios.
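The scheduling idea in this abstract can be sketched as a small linear program: choose RO production in each discrete interval to meet time-varying demand at minimum cost, with the storage tank buffering the difference. The cost profile, demand profile, and capacities below are illustrative assumptions, not the paper's model or data.

```python
# LP sketch of cost-optimal RO operation over 24 hourly intervals with a
# storage tank. Tank balance: level_t = level_0 + cumsum(production - demand),
# kept within [0, tank_cap]; production bounded by plant capacity.
import numpy as np
from scipy.optimize import linprog

T = 24
demand = 80 + 40 * np.sin(np.linspace(0, 2 * np.pi, T))   # m3/h, assumed profile
cost = 1.0 + 0.5 * np.cos(np.linspace(0, 2 * np.pi, T))   # cost per m3, assumed
tank0, tank_cap, p_max = 200.0, 500.0, 150.0

L = np.tril(np.ones((T, T)))        # cumulative-sum operator
cum_d = L @ demand
# 0 <= tank0 + L @ p - cum_d <= tank_cap, expressed as A_ub @ p <= b_ub.
A_ub = np.vstack([L, -L])
b_ub = np.concatenate([tank_cap - tank0 + cum_d, tank0 - cum_d])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, p_max)] * T)
schedule = res.x
```

The optimizer naturally shifts production into the cheap intervals (e.g. when seawater temperature favours lower specific energy) and lets the tank cover the expensive ones, which is the operational flexibility the storage tank is added for.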
