  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Nasazení aplikací zohledňující komunikační zpoždění v prostředí tzv. edge-cloud / Latency aware deployment in the edge-cloud environment

Filandr, Adam January 2020 (has links)
The goal of this thesis is to propose a layer on top of the edge-cloud, in order to provide soft real-time guarantees on the execution time of applications. This satisfies the soft real-time requirements set by the developers of latency-sensitive applications. The proposed layer uses a predictor of execution time to find combinations of processes which satisfy the soft real-time requirements when collocated. To implement the predictor, we are provided with information about the resource usage of processes and the execution times of collocated combinations. We utilize similarity between processes, cluster analysis, and regression analysis to form four prediction methods. We also provide a boundary system of resource usage, used to filter out combinations exceeding the capacity of a computer. Because the metrics indicating the resource usage of a process can vary in their usefulness, we also add a system of weights which estimates the importance of each metric. We experimentally analyze the accuracy of each prediction method, the influence of the boundary detection system, and the effects of the weights.
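The prediction-and-filtering pipeline described above can be sketched as follows. This is an illustrative sketch only: the distance-weighted nearest-neighbour predictor, the function names, and the metric weighting are assumptions, not the thesis's exact four prediction methods.

```python
import math

def weighted_distance(a, b, weights):
    """Weighted Euclidean distance between two resource-usage vectors,
    where each weight estimates the importance of one metric."""
    return math.sqrt(sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights)))

def predict_execution_time(query, history, weights, k=2):
    """Predict the execution time of a process combination from the k most
    similar previously measured combinations (distance-weighted average).
    `history` is a list of (resource-usage vector, measured runtime) pairs."""
    ranked = sorted(history, key=lambda rec: weighted_distance(query, rec[0], weights))
    num = den = 0.0
    for features, runtime in ranked[:k]:
        w = 1.0 / (1e-9 + weighted_distance(query, features, weights))
        num += w * runtime
        den += w
    return num / den

def fits_capacity(combined_usage, capacity):
    """Boundary check: reject combinations whose summed resource usage
    exceeds the capacity of the computer."""
    return all(u <= c for u, c in zip(combined_usage, capacity))
```

For example, a query combination whose usage profile almost matches a measured combination inherits a prediction close to that combination's runtime, while the capacity check filters out infeasible placements before prediction.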
2

Enhancing fuzzy associative rule mining approaches for improving prediction accuracy: integration of fuzzy clustering, apriori and multiple support approaches to develop an associative classification rule base

Sowan, Bilal Ibrahim January 2011 (has links)
Building an accurate and reliable prediction model for different application domains is one of the most significant challenges in knowledge discovery and data mining. This thesis focuses on building and enhancing a generic predictive model for estimating a future value by extracting association rules (knowledge) from a quantitative database. This model is applied to several data sets obtained from different benchmark problems, and the results are evaluated through extensive experimental tests. The thesis presents an incremental development process for the prediction model, with three stages. Firstly, a Knowledge Discovery (KD) model is proposed by integrating Fuzzy C-Means (FCM) with the Apriori approach to extract Fuzzy Association Rules (FARs) from a database, building a Knowledge Base (KB) for predicting a future value. The KD model has been tested with two road-traffic data sets. Secondly, the initial model is further developed by including a diversification method, in order to improve the reliability of the FARs and find the best and most representative rules. The resulting Diverse Fuzzy Rule Base (DFRB) maintains high-quality and diverse FARs, offering a more reliable and generic model. The model uses FCM to transform quantitative data into fuzzy data, while a Multiple Support Apriori (MSapriori) algorithm is adapted to extract the FARs from the fuzzy data. The correlation values for these FARs are calculated, and an efficient orientation for filtering FARs is performed as a post-processing method. The diversity of the FARs is maintained through clustering of the FARs, based on the sharing-function technique used in multi-objective optimization. The best and most diverse FARs are retained as the DFRB, to be utilised within a Fuzzy Inference System (FIS) for prediction. The third stage of development proposes a hybrid prediction model called the Fuzzy Associative Classification Rule Mining (FACRM) model. 
This model integrates the improved Gustafson-Kessel (G-K) algorithm, the proposed Fuzzy Associative Classification Rules (FACR) algorithm, and the proposed diversification method. The improved G-K algorithm transforms quantitative data into fuzzy data, while the FACR algorithm generates significant rules (Fuzzy Classification Association Rules (FCARs)) by employing an improved multiple support threshold, associative classification, and vertical scanning format approaches. These FCARs are then filtered by calculating the correlation value and the distance between them. The advantage of the proposed FACRM model is that it builds a generalized prediction model able to deal with different application domains. The FACRM model is validated using different benchmark data sets from the University of California, Irvine (UCI) machine learning repository and the KEEL (Knowledge Extraction based on Evolutionary Learning) repository, and the results of the proposed FACRM are compared with those of other existing prediction models. The experimental results show that the error rate and the generalization performance of the proposed model are better than those of the commonly used models for the majority of data sets. A new feature selection method entitled Weighting Feature Selection (WFS) is also proposed, aiming to improve the performance of the FACRM model. The prediction performance is improved by minimizing the prediction error and reducing the number of generated rules. The prediction results of FACRM employing WFS have been compared with those of the FACRM and Stepwise Regression (SR) models for different data sets. The performance analysis and comparative study show that the proposed prediction model provides an effective approach that can be used within a decision support system.
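The fuzzification-plus-rule-mining idea at the core of this work can be illustrated with a minimal sketch. Note the assumptions: fixed triangular fuzzy partitions stand in for the FCM/G-K clustering, and the standard min-based fuzzy support stands in for the full MSapriori machinery; the partition bounds and attribute names are invented for the example.

```python
def triangular(x, a, b, c):
    """Triangular membership function with peak at b and feet at a and c."""
    if x <= a or x >= c:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x == b:
        return 1.0
    return (c - x) / (c - b)

# Hypothetical fuzzy partition of a quantitative attribute on [0, 100];
# a real system would derive these shapes from fuzzy clustering.
PARTITIONS = {
    "low": (-1.0, 0.0, 50.0),
    "medium": (0.0, 50.0, 100.0),
    "high": (50.0, 100.0, 101.0),
}

def fuzzify(value):
    """Map a crisp value to its membership degree in each fuzzy term."""
    return {term: triangular(value, *abc) for term, abc in PARTITIONS.items()}

def fuzzy_support(records, itemset):
    """Fuzzy support of an itemset: the mean, over all records, of the
    minimum membership degree across the itemset's (attribute, term) pairs."""
    total = 0.0
    for record in records:
        total += min(fuzzify(record[attr])[term] for attr, term in itemset)
    return total / len(records)
```

An Apriori-style miner would then keep only itemsets whose fuzzy support exceeds a (possibly item-specific, as in MSapriori) minimum support threshold.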
3

Design and performance evaluation of failure prediction models

Mousavi Biouki, Seyed Mohammad Mahdi January 2017 (has links)
Prediction of corporate bankruptcy (or distress) is one of the major activities in auditing firms' risks and uncertainties. The design of reliable models to predict distress is crucial for many decision-making processes. Although a variety of models have been designed to predict distress, the relative performance evaluation of competing prediction models remains an exercise that is unidimensional in nature. To be more specific, although some studies use several performance criteria and their measures to assess the relative performance of distress prediction models, the assessment of competing prediction models is restricted to ranking them by a single measure of a single criterion at a time, which leads to conflicting results. The first essay of this research overcomes this methodological issue by proposing an orientation-free super-efficiency Data Envelopment Analysis (DEA) model as a multi-criteria assessment framework. Furthermore, the study performs an exhaustive comparative analysis of the most popular bankruptcy modelling frameworks for UK data. It also addresses two important research questions: do some modelling frameworks perform better than others by design, and to what extent do the choice and/or design of explanatory variables and their nature affect the performance of modelling frameworks? Further, using different static and dynamic statistical frameworks, this chapter proposes new Failure Prediction Models (FPMs). However, within a super-efficiency DEA framework, the reference benchmark changes from one prediction model evaluation to another, which in some contexts might be viewed as "unfair" benchmarking. The second essay overcomes this issue by proposing a Slacks-Based Measure Context-Dependent DEA (SBM-CDEA) framework to evaluate the competing Distress Prediction Models (DPMs). 
Moreover, it performs an exhaustive comparative analysis of the most popular corporate distress prediction frameworks under both a single criterion and multiple criteria, using data on UK firms listed on the London Stock Exchange (LSE). Further, this chapter proposes new DPMs using different static and dynamic statistical frameworks. Another shortcoming of the existing studies on performance evaluation lies in the use of static frameworks to compare the performance of DPMs. The third essay overcomes this methodological issue by suggesting a dynamic multi-criteria performance assessment framework, namely Malmquist SBM-DEA, which by design can monitor the performance of competing prediction models over time. Further, this study proposes new static and dynamic distress prediction models. The study also addresses several research questions: What is the effect of information on the performance of DPMs? How does the out-of-sample performance of dynamic DPMs compare with that of static ones? What is the effect of the length of the training sample on the performance of static and dynamic models? Which models perform better in forecasting distress during years with a Higher Distress Rate (HDR)? On feature selection, studies have used different types of information, including accounting, market, and macroeconomic variables, as well as management efficiency scores, as predictors. The techniques recently applied to take the management efficiency of firms into account are two-stage models. Two-stage DPMs incorporate multiple inputs and outputs to estimate the efficiency of a corporation relative to the most efficient ones in the first stage, and use the efficiency score as a predictor in the second stage. A survey of the literature reveals that most existing studies fail to provide a comprehensive comparison of two-stage DPMs. 
Moreover, the choice of inputs and outputs for the DEA models that estimate the efficiency of a company has been restricted to accounting variables and features of the company. The fourth essay adds to the current literature on two-stage DPMs in several respects. First, the study proposes to decompose the Slacks-Based Measure (SBM) of efficiency into Pure Technical Efficiency (PTE), Scale Efficiency (SE), and Mix Efficiency (ME), to analyse how each of these measures individually contributes to developing distress prediction models. Second, in addition to the conventional approach of using accounting variables as inputs and outputs of DEA models to estimate management efficiency, this study uses market information variables to calculate the market efficiency of companies. Third, this research provides a comprehensive analysis of two-stage DPMs by applying different DEA models at the first stage (e.g., input-oriented vs. output-oriented, radial vs. non-radial, static vs. dynamic) to compute the management efficiency and market efficiency of companies, and by using dynamic and static classifier frameworks at the second stage to design new distress prediction models.
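The DEA building block underlying these essays can be illustrated with a plain input-oriented CCR model in envelopment form, which is a simpler relative of the super-efficiency and SBM variants the essays actually use. This sketch assumes SciPy is available; the data layout (inputs and outputs as rows, DMUs as columns) is a convention chosen for the example.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU `o` (envelopment form):
    minimise theta subject to X @ lam <= theta * X[:, o] and
    Y @ lam >= Y[:, o], with lam >= 0.
    X is (m inputs x n DMUs); Y is (s outputs x n DMUs)."""
    m, n = X.shape
    s = Y.shape[0]
    # Decision vector: [theta, lam_1, ..., lam_n]; minimise theta.
    c = np.concatenate(([1.0], np.zeros(n)))
    # Input constraints:  -theta * x_o + X @ lam <= 0
    A_in = np.hstack((-X[:, [o]], X))
    b_in = np.zeros(m)
    # Output constraints: -Y @ lam <= -y_o  (i.e. Y @ lam >= y_o)
    A_out = np.hstack((np.zeros((s, 1)), -Y))
    b_out = -Y[:, o]
    res = linprog(c,
                  A_ub=np.vstack((A_in, A_out)),
                  b_ub=np.concatenate((b_in, b_out)),
                  bounds=[(0, None)] * (n + 1),
                  method="highs")
    return res.fun
```

A score of 1.0 marks the DMU as efficient; a score below 1.0 gives the factor by which all its inputs could be shrunk while still producing its outputs, using a convex combination of the other DMUs as benchmark.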
4

Verbesserung der Performance von virtuellen Sensoren in totzeitbehafteten Prozessen / Improvement of performance for virtual sensors in dead time processes

Dementyev, Alexander 12 December 2014 (has links) (PDF)
Modellbasierte virtuelle Sensoren (VS) ermöglichen die Messung von qualitätsbestimmenden Prozessparametern (bzw. Hilfsregelgrößen) dort, wo eine direkte Messung zu teuer oder gar nicht möglich ist. Für die adaptiven VS, die ihr internes Prozessmodell nach Data-Driven-Methode bilden (z. B. durch die Benutzung künstlicher neuronaler Netze (KNN)), besteht das Problem der Abschätzung der Prädiktionsstabilität. Aktuelle Lösungsansätze lösen dieses Problem nur für wenige KNN-Typen und erfordern enormen Entwurfs- und Rechenaufwand. In dieser Arbeit wird eine alternative Methode vorgestellt, welche für eine breite Klasse von KNN gilt und keinen hohen Entwurfs- und Rechenaufwand erfordert. Die neue Methode wurde anhand realer Anwendungsbeispiele getestet und hat sehr gute Ergebnisse geliefert. Für die nicht adaptiven virtuellen Sensoren wurde eine aufwandsreduzierte Adaption nach Smith-Schema vorgeschlagen. Dieses Verfahren ermöglicht die Regelung totzeitbehafteter und zeitvarianter Prozesse mit VS in einem geschlossenen Regelkreis. Im Vergleich zu anderen Regelungsstrategien konnte damit vergleichbare Regelungsqualität bei einem deutlich geringeren Entwurfsaufwand erzielt werden. / Model-based virtual sensors allow the measurement of quality-determining process parameters where a direct measurement is too expensive or not possible at all. For adaptive virtual sensors whose internal process model is built using a data-driven method (e.g., artificial neural networks (ANNs)), there is the problem of estimating prediction stability. Current approaches solve this problem only for a few ANN types and require enormous design and computational effort. In this dissertation an alternative method is presented which is valid for a wide class of ANNs and does not require high design or computational effort. The new method was tested on real application examples and delivered very good results. For non-adaptive virtual sensors, a reduced-effort adaptation based on the Smith scheme is proposed. 
This technique allows the closed-loop control of dead-time and time-variant processes with virtual sensors. Compared to other control strategies, comparable control quality was achieved with significantly less design effort.
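A minimal sketch of the Smith-scheme idea mentioned in the abstract: a PI controller acting on a first-order plant with discrete dead time, where an internal model of the plant lets the controller "see past" the delay. The plant model, gains, and dead time below are illustrative assumptions, not values from the dissertation.

```python
from collections import deque

def simulate_smith_control(a=0.9, b=0.1, d=5, kp=0.5, ki=0.1,
                           setpoint=1.0, steps=300):
    """PI control of the discrete first-order plant
    y[k+1] = a*y[k] + b*u[k-d], using a Smith predictor with a
    perfect internal model of the plant. Returns the final output."""
    y = 0.0           # real plant output
    ym = 0.0          # internal model output WITHOUT dead time
    ym_delayed = 0.0  # internal model output WITH dead time
    u_queue = deque([0.0] * d)  # pipeline realising the dead time
    integral = 0.0
    for _ in range(steps):
        # Smith feedback: measured output plus the model's estimate of
        # what the dead time is currently hiding from the measurement.
        y_fb = y + (ym - ym_delayed)
        e = setpoint - y_fb
        integral += e
        u = kp * e + ki * integral
        u_queue.append(u)
        u_delayed = u_queue.popleft()
        y = a * y + b * u_delayed                    # plant sees u delayed
        ym = a * ym + b * u                          # delay-free model
        ym_delayed = a * ym_delayed + b * u_delayed  # delayed model
    return y
```

With a perfect model, the corrected feedback equals the delay-free model output, so the PI loop behaves as if the dead time were absent; model mismatch is what the dissertation's adaptation mechanism would have to handle.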
