71

Estratégia computacional para avaliação de propriedades mecânicas de concreto de agregado leve / Computational strategy for evaluating mechanical properties of lightweight aggregate concrete

Bonifácio, Aldemon Lage 16 March 2017 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Concrete made from lightweight aggregates, or lightweight structural concrete, is considered a versatile construction material, widely used throughout the world in many areas of civil construction, such as prefabricated buildings, offshore platforms and bridges, among others. However, modeling the mechanical properties of this type of concrete, such as the modulus of elasticity and the compressive strength, is complex, due mainly to the intrinsic heterogeneity of the material's components. A predictive model of the mechanical properties of lightweight aggregate concrete can help reduce project time and cost by providing essential data for structural calculations. To this end, this work aims to develop a computational strategy for the evaluation of mechanical properties of lightweight concrete by combining computational modeling of the concrete via the Finite Element Method (FEM) with computational intelligence methods based on Support Vector Regression (SVR) and Artificial Neural Networks (ANN). In addition, based on the scientific workflow and many-task computing approaches, a computational tool was developed to facilitate and automate the execution of the numerical scientific experiments for predicting the mechanical properties.
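As a rough illustration of the SVR component of such a strategy, the sketch below fits a support vector regressor to a handful of hypothetical lightweight-concrete mix features (aggregate density, water/cement ratio, cement content); the feature names and values are illustrative assumptions, and the thesis's actual pipeline also couples FEM simulations and neural networks.

```python
# Minimal sketch: SVR predicting compressive strength from hypothetical
# lightweight-concrete mix features. Feature names and data are illustrative,
# not taken from the thesis.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Columns: aggregate density (kg/m^3), water/cement ratio, cement content (kg/m^3)
X = np.array([
    [1200, 0.45, 350],
    [1350, 0.40, 400],
    [1500, 0.35, 450],
    [1100, 0.50, 320],
])
y = np.array([28.5, 34.0, 41.2, 24.8])  # compressive strength (MPa), illustrative

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X, y)

print(model.predict([[1300, 0.42, 380]]))  # predicted strength for a new mix
```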
72

Prognostic Health Management Systems for More Electric Aircraft Applications

Demus, Justin Cole 09 September 2021 (has links)
No description available.
73

Rozpoznávání hudební nálady a emocí za pomoci technik Music Information Retrieval / Music mood and emotion recognition using Music information retrieval techniques

Smělý, Pavel January 2019 (has links)
This work focuses on the scientific area called Music Information Retrieval (MIR), more precisely its subdivision concerned with the recognition of emotions in music, called Music Emotion Recognition (MER). The beginning of the work gives a general overview and definition of MER and a categorization of individual methods, offering a comprehensive view of the discipline. The thesis also concentrates on the selection and description of suitable parameters for the recognition of emotions, using the tools openSMILE and MIRtoolbox. The freely available DEAM database was used to obtain the set of music recordings and their subjective emotional annotations. The practical part deals with the design of a static dimensional regression evaluation system for the numerical prediction of musical emotions in music recordings, more precisely their position in the arousal-valence (AV) emotional space. The thesis presents and discusses results both from analyses of the significance of individual parameters and from an overall analysis of the predictions of the proposed model.
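As a rough sketch of a static dimensional MER regressor of this kind, the example below maps pre-extracted acoustic features (as would come from openSMILE or MIRtoolbox) to arousal-valence values; the random features and annotations stand in for the DEAM data and are purely illustrative.

```python
# Sketch of static dimensional MER: regress arousal/valence annotations on
# pre-extracted audio features (e.g. from openSMILE or MIRtoolbox).
# The features and annotations below are random placeholders, not DEAM data.
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 60))          # 200 excerpts x 60 acoustic features
Y = rng.uniform(-1, 1, size=(200, 2))   # columns: arousal, valence in [-1, 1]

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)
model = MultiOutputRegressor(SVR(kernel="rbf", C=1.0))
model.fit(X_tr, Y_tr)

print(r2_score(Y_te, model.predict(X_te), multioutput="raw_values"))
```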
74

Forecasting hourly electricity demand in South Africa using machine learning models

Thanyani, Maduvhahafani 12 August 2020 (has links)
MSc (Statistics) / Department of Statistics / Short-term load forecasting in South Africa using machine learning and statistical models is discussed in this study. The research is focused on carrying out a comparative analysis in forecasting hourly electricity demand, using South Africa's aggregated hourly load data from Eskom. The comparison uses support vector regression (SVR), stochastic gradient boosting (SGB) and artificial neural networks (NN), with a generalized additive model (GAM) as the benchmark model. In both modelling frameworks, variable selection is done using the least absolute shrinkage and selection operator (Lasso). The SGB model yielded the lowest root mean square error (RMSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) on the testing data, and also the lowest RMSE, MAE and MAPE on the training data. The models' forecasts are combined using a convex combination and quantile regression averaging (QRA). QRA was found to be the best forecast combination method based on RMSE, MAE and MAPE. / NRF
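A minimal sketch of two of the steps described above, assuming synthetic data in place of the Eskom load series: Lasso-based variable selection, fitting of a stochastic gradient boosting model, and a simple convex combination of two models' forecasts (QRA would instead regress forecast quantiles on the individual forecasts).

```python
# Sketch: Lasso-based variable selection, stochastic gradient boosting, and a
# convex combination of two models' forecasts. Data is synthetic, not Eskom load.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))                 # hourly predictors (lags, weather, calendar)
y = X[:, 0] * 3 + X[:, 2] ** 2 + rng.normal(scale=0.5, size=500)

# 1) Variable selection with Lasso: keep predictors with non-zero coefficients.
lasso = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(lasso.coef_)
Xs = X[:, selected]

# 2) Fit candidate models on the selected variables (subsample < 1 makes the
#    gradient boosting "stochastic").
sgb = GradientBoostingRegressor(subsample=0.8, random_state=0).fit(Xs, y)
svr = SVR(kernel="rbf").fit(Xs, y)

# 3) Convex combination of the two forecasts; the weight would normally be
#    tuned on a validation set.
w = 0.6
combined = w * sgb.predict(Xs) + (1 - w) * svr.predict(Xs)
print(combined[:5])
```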
75

Dynamic Warning Signals and Time Lag Analysis for Seepage Prediction in Hydropower Dams : A Case Study of a Swedish Hydropower Plant

Olsson, Lovisa, Hellström, Julia January 2023 (has links)
Hydropower is an important energy source since it is fossil-free, renewable, and controllable, characteristics that become especially important as the reliance on intermittent energy sources increases. However, the dams for hydropower plants are also associated with large risks, as a dam failure could have fatal consequences. Dams are therefore monitored by several sensors to follow and evaluate any changes in the dam. One of the most important dam surveillance measurements is seepage, since it can reveal internal erosion. Seepage is affected by several different parameters such as reservoir water level, temperature, and precipitation. Studies also indicate the existence of a time lag between the reservoir water level and the seepage flow, meaning that when there is a change in the reservoir level there is a delay before these changes are reflected in the seepage behaviour. Recent years have seen increased use of AI in dam monitoring, enabling more dynamic warning systems. This master's thesis aims to develop a model for dynamic warning signals by predicting seepage using reservoir water level, temperature, and precipitation. Furthermore, a snowmelt variable was introduced to account for the impact of increased water flows during the spring season. The occurrence of a time lag and its possible influence on the model's performance is also examined. To predict the seepage, three models of different complexity are used: linear regression, support vector regression, and long short-term memory. To investigate the time lag, the linear regression and support vector regression models incorporate a static time lag by shifting the reservoir water level data by up to 14 days. The time lag was further investigated using the long short-term memory model as well. The results show that reservoir water level, temperature, and the snowmelt variable are the combination of input parameters that generates the best results for all three models. Although a one-day time lag between reservoir water level and seepage slightly improved the predictions, the exact duration and nature of the time lag remain unclear. The more complex models (support vector regression and long short-term memory) generated better predictions than the linear regression but performed similarly when evaluated based on the dynamic warning signals. Therefore, linear regression is deemed a suitable model for generating dynamic warning signals from seepage predictions.
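A minimal sketch of the static time-lag idea with an SVR model, using a synthetic data frame in place of the Swedish plant's sensor records; the one-day lag and the variable names follow the description above, while everything else is an illustrative assumption.

```python
# Sketch of the static time-lag idea: shift the reservoir water level by a
# fixed number of days and regress seepage on the lagged level, temperature,
# and a snowmelt proxy with SVR. The data frame below is synthetic.
import numpy as np
import pandas as pd
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 365
df = pd.DataFrame({
    "reservoir_level": rng.normal(100, 2, n).cumsum() / 50 + 100,
    "temperature": 10 * np.sin(np.arange(n) * 2 * np.pi / 365) + rng.normal(0, 1, n),
    "snowmelt": rng.uniform(0, 1, n),
    "seepage": rng.normal(5, 0.5, n),
})

lag_days = 1  # the one-day lag mentioned above
df["level_lagged"] = df["reservoir_level"].shift(lag_days)
df = df.dropna()

X = df[["level_lagged", "temperature", "snowmelt"]]
y = df["seepage"]

model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
model.fit(X, y)
print(model.predict(X.tail(3)))
```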
76

Quantifying urban land cover by means of machine learning and imaging spectrometer data at multiple spatial scales

Okujeni, Akpona 15 December 2014 (has links)
The global dimension of urbanization constitutes a great environmental challenge for the 21st century. Remote sensing is a valuable Earth observation tool, which helps to better understand this process and its ecological implications. The focus of this work was to quantify urban land cover by means of machine learning and imaging spectrometer data at multiple spatial scales. Experiments considered innovative methodological developments and novel opportunities in urban research that will be created by the upcoming hyperspectral satellite mission EnMAP. Airborne HyMap data at 3.6 m and 9 m resolution and simulated EnMAP data at 30 m resolution were used to map land cover along an urban-rural gradient of Berlin. In the first part of this work, the combination of support vector regression with synthetically mixed training data was introduced as a sub-pixel mapping technique. Results demonstrate that the approach performs well in quantifying thematically meaningful yet spectrally challenging surface types. The method proves to be both superior to other sub-pixel mapping approaches and universally applicable with respect to changes in spatial scales. In the second part of this work, the value of future EnMAP data for urban remote sensing was evaluated. Detailed explorations on simulated data demonstrate their suitability for improving and extending the proven vegetation-impervious-soil mapping scheme. Comprehensive analyses of the benefits and limitations of EnMAP data reveal both challenges caused by the high numbers of mixed pixels, when compared to hyperspectral airborne imagery, and improvements due to the greater material discrimination capability when compared to multispectral spaceborne imagery. In summary, the findings demonstrate how combining spaceborne imaging spectrometry and machine learning techniques could introduce a new quality to the field of urban remote sensing.
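A rough sketch of the synthetic-mixing idea, not the thesis implementation: pure endmember spectra are linearly mixed at known fractions, an SVR is trained on the mixtures, and the learned model predicts the fraction of one surface type for an unseen mixed pixel; the spectra here are random placeholders rather than HyMap or EnMAP data.

```python
# Sketch of sub-pixel mapping with synthetically mixed training data: linearly
# mix "pure" endmember spectra at known fractions, train an SVR on the
# mixtures, then predict a class fraction per mixed pixel.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)
n_bands = 100
pure_vegetation = rng.uniform(0, 1, n_bands)   # placeholder endmember spectra
pure_impervious = rng.uniform(0, 1, n_bands)

# Build synthetically mixed training spectra with known vegetation fractions.
fractions = rng.uniform(0, 1, 500)
mixtures = (fractions[:, None] * pure_vegetation
            + (1 - fractions)[:, None] * pure_impervious
            + rng.normal(0, 0.01, (500, n_bands)))   # small noise term

model = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(mixtures, fractions)

# Predict the vegetation fraction of an unseen mixed pixel.
test_pixel = 0.3 * pure_vegetation + 0.7 * pure_impervious
print(model.predict(test_pixel[None, :]))   # expected to be near 0.3
```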
77

REAL-TIME PREDICTION OF SHIMS DIMENSIONS IN POWER TRANSFER UNITS USING MACHINE LEARNING

Jansson, Daniel, Blomstrand, Rasmus January 2019 (has links)
No description available.
78

Nuevas metodologías para la asignación de tareas y formación de coaliciones en sistemas multi-robot / New methodologies for task allocation and coalition formation in multi-robot systems

Guerrero Sastre, José 31 March 2011 (has links)
This work analyzes the suitability of two of the main task allocation methods in environments with temporal constraints. It is shown that both types of mechanisms fall short when dealing with tasks with deadlines, especially when the robots must form coalitions. One of the aspects to which this thesis devotes most attention is the prediction of execution time, which depends, among other factors, on the physical interference between robots. This phenomenon has not been taken into account in current auction-based allocation mechanisms. Thus, this thesis presents the first auction mechanism for coalition formation that takes interference between robots into account. To this end, an execution-time prediction model and a new paradigm called the double auction have been developed. In addition, new swarm-based mechanisms have been proposed
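As a minimal, heavily simplified sketch of auction-based task allocation (ignoring deadlines, coalitions, and interference, which are the thesis's actual concern), each robot bids its predicted execution time and each task goes to the lowest bidder; all names and bid values below are hypothetical.

```python
# Minimal single-round auction sketch: each robot bids its predicted execution
# time for each task and every task goes to the lowest bidder.
from typing import Dict, List, Tuple

def run_auction(bids: Dict[str, Dict[str, float]]) -> List[Tuple[str, str, float]]:
    """bids[task][robot] = predicted execution time; the lowest bid wins the task."""
    assignment = []
    for task, robot_bids in bids.items():
        winner = min(robot_bids, key=robot_bids.get)
        assignment.append((task, winner, robot_bids[winner]))
    return assignment

# Illustrative bids (seconds); values are made up.
bids = {
    "transport_box": {"robot_a": 12.0, "robot_b": 9.5},
    "inspect_area":  {"robot_a": 4.0,  "robot_b": 7.2},
}
print(run_auction(bids))
```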
79

A New Mathematical Framework for Regional Frequency Analysis of Floods

Basu, Bidroha January 2015 (has links) (PDF)
Reliable estimates of design flood quantiles are often necessary at sparsely gauged/ungauged target locations in river basins for various applications in water resources engineering. Development of effective methods for use in this task has been a long-standing challenge in hydrology for over five decades. Hydrologists often consider various regional flood frequency analysis (RFFA) approaches that involve (i) use of a regionalization approach to delineate a homogeneous group of watersheds resembling the watershed of the target location, and (ii) use of a regional frequency analysis (RFA) approach to transfer peak-flow-related information from gauged watersheds in the group to the target location, and considering the information as the basis to estimate flood quantile(s) for the target site. The work presented in the thesis is motivated to address various shortcomings/issues associated with widely used regionalization and RFA approaches. Regionalization approaches often determine regions by grouping data points in a multidimensional space of attributes depicting a watershed's hydrology, climatology, topography, land-use/land-cover and soils. There are no universally established procedures to identify appropriate attributes, and modelers use subjective procedures to choose a set of attributes that is considered common for the entire study area. This practice may not be meaningful, as different sets of attributes could influence the extreme flow generation mechanism in watersheds located in different parts of the study area. Another issue is that practitioners usually give equal importance (weight) to all the attributes in regionalization, though some attributes could be more important than others in influencing peak flows. To address this issue, a two-stage clustering approach is developed in the thesis. It facilitates identification of appropriate attributes and their associated weights for use in regionalization of watersheds in the context of flood frequency analysis. Effectiveness of the approach is demonstrated through a case study on Indiana watersheds. Conventional regionalization approaches could prove effective for delineating regions when data points (depicting watersheds) in the watershed-related attribute space can be segregated into disjoint groups using straight lines or linear planes. They prove ineffective when (i) data points are not linearly separable, (ii) the number of attributes and watersheds is large, (iii) there are outliers in the attribute space, and (iv) most watersheds resemble each other in terms of their attributes. In real-world scenarios, most watersheds resemble each other, regions may not always be segregated using straight lines or linear planes, and dealing with outliers and high-dimensional data is inevitable in regionalization. To address this, a fuzzy support vector clustering approach is proposed in the thesis, and its effectiveness over the commonly used region-of-influence approach and different cluster-analysis-based regionalization methods is demonstrated through a case study on Indiana watersheds. For the purpose of regional frequency analysis (RFA), the index-flood approach has been widely used over the past five decades. The conventional index-flood (CIF) approach assumes that the values of the scale and shape parameters of the frequency distribution are identical across all the sites in a homogeneous region. In real-world scenarios, this assumption may not be valid even if a region is statistically homogeneous.
Logarithmic index-flood (LIF) and population index-flood (PIF) methodologies were proposed to address the problem, but even those methodologies make unrealistic assumptions. PIF method assumes that the ratio of scale to location parameters is a constant for all the sites in a region. On the other hand, LIF method assumes that appropriate frequency distribution to fit peak flows could be found in log-space, but in reality the distribution of peak flows in log space may not be closer to any of the known theoretical distributions. To address this issue, a new mathematical approach to RFA is proposed in L-moment and LH-moment frameworks that can overcome shortcomings of the CIF approach and its related LIF and PIF methods that make various assumptions but cannot ensure their validity in RFA. For use with the proposed approach, transformation mechanisms are proposed for five commonly used three-parameter frequency distributions (GLO, GEV, GPA, GNO and PE3) to map the random variable being analyzed from the original space to a dimensionless space where distribution of the random variable does not change, and deviations of regional estimates of all the distribution’s parameters (location, scale, shape) with respect to their population values as well as at-site estimates are minimal. The proposed approach ensures validity of all the assumptions of CIF approach in the dimensionless space, and this makes it perform better than CIF approach and related LIF and PIF methods. Monte-Carlo simulation experiments revealed that the proposed approach is effective even when the form of regional frequency distribution is mis-specified. Case study on watersheds in conterminous United States indicated that the proposed approach outperforms methods based on index-flood approach in real world scenario. In recent decades, fuzzy clustering approach gained recognition for regionalization of watersheds, as it can account for partial resemblance of several watersheds in watershed related attribute space. In working with this approach, formation of regions and quantile estimation requires discerning information from fuzzy-membership matrix. But, currently there are no effective procedures available for discerning the information. Practitioners often defuzzify the matrix to form disjoint clusters (regions) and use them as the basis for quantile estimation. The defuzzification approach (DFA) results in loss of information discerned on partial resemblance of watersheds. The lost information cannot be utilized in quantile estimation, owing to which the estimates could have significant error. To avert the loss of information, a threshold strategy (TS) was considered in some prior studies, but it results in under-prediction of quantiles. To address this, a mathematical approach is proposed in the thesis that allows discerning information from fuzzy-membership matrix derived using fuzzy clustering approach for effective quantile estimation. Effectiveness of the approach in estimating flood quantiles relative to DFA and TS was demonstrated through Monte-Carlo simulation experiments and case study on mid-Atlantic water resources region, USA. Another issue with index flood approach and its related RFA methodologies is that they assume linear relationship between each of the statistical raw moments (SMs) of peak flows and watershed related attributes in a region. 
Those relationships form the basis to arrive at estimates of SMs for the target ungauged/sparsely gauged site, which are then utilized to estimate parameters of flood frequency distribution and quantiles corresponding to target return periods. In reality, non-linear relationships could exist between SMs and watershed related attributes. To address this, simple-scaling and multi-scaling methodologies have been proposed in literature, which assume that scaling (power law) relationship exists between each of the SMs of peak flows at sites in a region and drainage areas of watersheds corresponding to those sites. In real world scenario, drainage area alone may not completely describe watershed’s flood response. Therefore flood quantile estimates based on the scaling relationships can have large errors. To address this, a recursive multi-scaling (RMS) approach is proposed that facilitates construction of scaling (power law) relationship between each of the SMs of peak flows and a set of site’s region-specific watershed related attributes chosen/identified in a recursive manner. The approach is shown to outperform index-flood based region-of-influence approach, simple-and multi-scaling approaches, and a multiple linear regression method through leave-one-out cross validation experiment on watersheds in and around Indiana State, USA. The conventional approaches to flood frequency analysis (FFA) are based on the assumption that peak flows at the target site represent a sample of independent and identically distributed realization drawn from a stationary homogeneous stochastic process. This assumption is not valid when flows are affected by changes in climate and/or land use/land cover, and regulation of rivers through dams, reservoirs and other artificial diversions/storages. In situations where evidence of non-stationarity in peak flows is strong, it is not appropriate to use quantile estimates obtained based on the conventional FFA approaches for hydrologic designs and other applications. Downscaling is one of the options to arrive at future projections of flows at target sites in a river basin for use in FFA. Conventional downscaling methods attempt to downscale General Circulation Model (GCM) simulated climate variables to streamflow at target sites. In real world scenario, correlation structure exists between records of streamflow at sites in a study area. An effective downscaling model must be parsimonious, and it should ensure preservation of the correlation structure in downscaled flows to a reasonable extent, though exact reproduction/mimicking of the structure may not be necessary in a climate change (non-stationary) scenario. A few recent studies attempted to address this issue based on the assumption of spatiotemporal covariance stationarity. However, there is dearth of meaningful efforts especially for multisite downscaling of flows. To address this, multivariate support vector regression (MSVR) based methodology is proposed to arrive at flood return levels (quantile estimates) for target locations in a river basin corresponding to different return periods in a climate change scenario. 
The approach involves (i) use of MSVR relationships to downscale GCM simulated large scale atmospheric variables (LSAVs) to monthly time series of streamflow at multiple locations in a river basin, (ii) disaggregation of the downscaled streamflows corresponding to each site from monthly to daily time scale using k-nearest neighbor disaggregation methodology, (iii) fitting time varying generalized extreme value (GEV) distribution to annual maximum flows extracted from the daily streamflows and estimating flood return levels for different target locations in the river basin corresponding to different return periods.
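A minimal sketch of the final step described above, assuming a stationary GEV fit with scipy to synthetic annual maxima (the thesis fits a time-varying GEV, which scipy does not provide directly):

```python
# Sketch: fit a GEV distribution to annual maximum flows and read off return
# levels for selected return periods. Flows are synthetic placeholders.
import numpy as np
from scipy.stats import genextreme

annual_maxima = genextreme.rvs(c=-0.1, loc=500, scale=120, size=60, random_state=4)

shape, loc, scale = genextreme.fit(annual_maxima)

for T in (10, 50, 100):                       # return periods in years
    level = genextreme.ppf(1 - 1 / T, shape, loc=loc, scale=scale)
    print(f"{T}-year return level: {level:.1f} m^3/s")
```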
80

Estimation du RUL par des approches basées sur l'expérience : de la donnée vers la connaissance / RUL estimation using experience-based approaches: from data to knowledge

Khelif, Racha 14 December 2015 (has links)
Our thesis work is concerned with the development of experience-based approaches for critical component prognostics and Remaining Useful Life (RUL) estimation. This choice allows us to avoid the problematic issue of setting a failure threshold. Our work was based on Case Based Reasoning (CBR) to track the health status of a new component and predict its RUL. An Instance Based Learning (IBL) approach was first developed, offering two experience formalizations. The first is a supervised method that takes into account the status of the component and produces health indicators. The second is an unsupervised method that fuses the sensory data into degradation trajectories. The approach was then evolved by integrating knowledge. Knowledge is extracted from the sensory data and is of two types: temporal, which completes the modeling of instances, and frequential, which, along with the similarity measure, refines the retrieval phase. The latter is based on two similarity measures: a weighted one between fixed parallel windows, and a weighted similarity with temporal projection through sliding windows, which allows identification of the actual health status. Another data-driven technique was also tested. It is built on features extracted from the experiences, which can be either mono- or multi-dimensional. These features are modeled by a Support Vector Regression (SVR) algorithm. The developed approaches were assessed on two types of critical components: turbofans and "Li-ion" batteries. The obtained results are interesting but depend on the type of data treated.
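A minimal instance-based sketch in the spirit of the trajectory formalization above: the latest window of a new unit's one-dimensional degradation trajectory is matched against run-to-failure library trajectories, and the remaining lives of the closest matches are averaged; the trajectories are synthetic and the distance and aggregation choices are illustrative assumptions, not the thesis's CBR system.

```python
# Minimal instance-based RUL sketch: compare the latest window of a new unit's
# 1-D degradation trajectory to run-to-failure library trajectories and average
# the remaining life of the closest matches. Trajectories are synthetic.
import numpy as np

def rul_estimate(library, new_traj, window=20, k=3):
    """library: list of full run-to-failure trajectories (1-D arrays)."""
    query = new_traj[-window:]
    candidates = []
    for traj in library:
        # Slide the window over the library trajectory to locate the best match.
        dists = [np.linalg.norm(traj[i:i + window] - query)
                 for i in range(len(traj) - window)]
        best = int(np.argmin(dists))
        remaining = len(traj) - (best + window)   # cycles left after the match
        candidates.append((min(dists), remaining))
    candidates.sort(key=lambda t: t[0])
    return float(np.mean([r for _, r in candidates[:k]]))

rng = np.random.default_rng(5)
library = [np.cumsum(rng.uniform(0.5, 1.5, n)) for n in (150, 180, 200)]
new_unit = np.cumsum(rng.uniform(0.5, 1.5, 90))   # still running, RUL unknown
print(rul_estimate(library, new_unit))
```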
