
Decision Support System for Value-Based Evaluation and Conditional Approval of Construction Submittals

Sherbini, Khaled Ali (03 May 2010)
To ensure compliance with specifications during construction, a formal review process, called the submittal process, is typically implemented, whereby the contractor is required to submit proposals for materials, equipment, and processes for the owner’s approval within a short period of time. This procedure can be a difficult task because of lack of time, lack of information in the submittal package, difficulty in retrieving related data, and lack of defined criteria for evaluation. This research introduces the development of a framework for submittal evaluation that considers the operational impact of any minor variation from the required specifications. The evaluation mechanism uses the Multi-Attribute Utility Theory (MAUT) approach, which is adaptable to the varying requirements of organizations. Through analysis of the current submittal mechanism, a list of key submittals is defined, and the top one (the chiller) is selected as the focus of the research. The governing criteria (evaluation parameters) are defined for the selected submittal item and divided into two categories: inflexible and flexible. The inflexible parameters are handled using checklists with predefined thresholds that must be met without tolerance. The flexible parameters are analyzed using utility functions that represent decision-maker preferences and tolerance levels. Accordingly, the evaluation process considers multiple parameters to determine an overall utility for the submittal and the value-based conditions for accepting it, incorporating LEED requirements. The investigation is based on data provided by three main organizations, as well as intensive meetings and interviews with experts from each participating organization. The outcome of this investigation is the development of evaluation criteria and checklist parameters that are used as the basis of a value-based evaluation, which is the core of the developed decision support system.
In summary, it has been demonstrated that a decision support system for the evaluation of construction submittals can be constructed and that it will provide numerous benefits: an expedited decision process, an audit trail for decisions, more consistent and objective decisions, risk identification, internal alignment of organizational values, and improved lifecycle asset performance. The benefits were validated by demonstration, and by experts' evaluations.
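The two-tier evaluation described in the abstract (pass/fail checklists for inflexible parameters, weighted utility functions for flexible ones) can be sketched in a few lines. This is an illustrative reading of the general MAUT scheme, not the thesis's actual system; the chiller parameters, thresholds, weights, and utility curves below are all hypothetical.

```python
# Minimal sketch of a MAUT-style submittal evaluation. All parameter names,
# thresholds, weights, and utility curves here are illustrative assumptions.

def check_inflexible(submittal, thresholds):
    """Inflexible parameters: fixed thresholds that must be met without tolerance."""
    return all(submittal[p] >= t for p, t in thresholds.items())

def overall_utility(submittal, utility_fns, weights):
    """Flexible parameters: additive MAUT, a weighted sum of single-attribute utilities."""
    return sum(weights[p] * utility_fns[p](submittal[p]) for p in weights)

# Hypothetical chiller submittal
submittal = {"capacity_tons": 500, "cop": 6.1, "noise_dba": 78}

thresholds = {"capacity_tons": 480}  # spec minimum, no tolerance
utility_fns = {
    "cop": lambda x: min(1.0, max(0.0, (x - 5.0) / 2.0)),      # higher efficiency is better
    "noise_dba": lambda x: min(1.0, max(0.0, (85 - x) / 10)),  # quieter is better
}
weights = {"cop": 0.6, "noise_dba": 0.4}  # decision-maker preferences, sum to 1

if check_inflexible(submittal, thresholds):
    u = overall_utility(submittal, utility_fns, weights)
    decision = "approve" if u >= 0.7 else "approve with conditions"
else:
    decision = "reject"
```

The threshold of 0.7 on the overall utility stands in for the "value-based condition" of acceptance: a submittal that clears all hard checks but falls short on overall utility is approved conditionally rather than rejected outright.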

Benchmarking Renewable Energy Supply Forecasts

Ulbricht, Robert (19 July 2021)
The ability to generate precise numerical forecasts is important for successful enterprises in order to prepare for uncertain future developments. For utility companies, forecasts of prospective energy demand are a crucial component in maintaining the physical stability and reliability of electricity grids. The constantly increasing capacity of fluctuating renewable energy sources creates a challenge in balancing power supply and demand. To allow for better integration, supply forecasting has become an important topic in the research field of energy data management, and many new forecasting methods have been proposed in the literature. However, choosing the optimal solution for a specific forecasting problem remains a time- and work-intensive task, as meaningful benchmarks are rare and there is still no standard, easy-to-use, and robust approach. Many of the models in use are obtained by executing black-box machine learning tools and then manually optimized by human experts via trial and error toward the requirements of the underlying use case. Due to the lack of standardized evaluation methodologies and access to experimental data, these results are not easily comparable. In this thesis, we address the topic of systematic benchmarks for renewable energy supply forecasts. These usually involve two stages, requiring a weather forecast model and an energy forecast model. The latter can be selected from the classes of physical, statistical, and hybrid models. The selection of an appropriate model is one of the major tasks in the forecasting process. We conducted an empirical analysis to assess the most popular forecasting methods. In contrast to the classical time- and resource-intensive, mostly manual evaluation procedure, we developed a more efficient decision-support solution.
With the inclusion of contextual information, our heuristic approach HMR is able to identify suitable examples in a case base and generates a recommendation from the results of already existing solutions. The use of time series representations reduces the dimensionality of the original data, thus allowing for an efficient search in large data sets. A context-aware evaluation methodology is introduced to assess a forecast’s quality based on its monetary return in the corresponding market environment. Results that would otherwise be evaluated using statistical accuracy criteria become more interpretable when real-world impacts are estimated. Finally, we introduce the ECAST framework as an open and easy-to-use online platform that supports the benchmarking of energy time series forecasting methods. It aids inexperienced practitioners by supporting the execution of automated tasks, thus making complex benchmarks much more efficient and easier to handle. The integration of modules such as the Ensembler, the Recommender, and the Evaluator provides additional value for forecasters. Reliable benchmarks can be conducted on this basis, while analytical functions for output explanation provide transparency for the user.

Table of Contents:
1 Introduction
2 Energy Data Management Challenges
  2.1 Market Relevance
  2.2 EDMS Architecture
    2.2.1 Core Components
    2.2.2 Typical Energy Data Management Processes
    2.2.3 System Integration
  2.3 Challenges
    2.3.1 Smart Metering
    2.3.2 Energy Forecasting
    2.3.3 Energy Saving
    2.3.4 Mobile Consumption Devices
    2.3.5 Smart Grids
  2.4 Summary
3 Energy Supply Forecasting Concepts
  3.1 Energy Supply Forecasting Approaches
    3.1.1 Weather Forecast Models
    3.1.2 Energy Forecast Models
  3.2 Energy Forecasting Process
    3.2.1 Iterative Standard Process Model
    3.2.2 Context-Awareness
  3.3 Model Selection - A Benchmark Case Study
    3.3.1 Use Case Description
    3.3.2 Methodology
    3.3.3 Result Discussion
  3.4 Summary
4 Relevance of Renewable Energy Forecasting Methods
  4.1 Scientific Relevance
    4.1.1 Methodology
    4.1.2 Quantitative Analysis
    4.1.3 Qualitative Analysis
  4.2 Practical Relevance
    4.2.1 Methodology
    4.2.2 Feedback from Software Providers
    4.2.3 Feedback from Software Users
  4.3 Forecasting Competitions
  4.4 Summary
5 Heuristic Model Recommendation
  5.1 Property-based Similarity Determination
    5.1.1 Time Series Similarity
    5.1.2 Reducing Dimensionality with Property Extraction
    5.1.3 Correlation Filter
  5.2 Feature Engineering
    5.2.1 Feature Generation
    5.2.2 Feature Pre-Selection
    5.2.3 Property-based Least Angle Regression
  5.3 HMR Approach
    5.3.1 Formalized Foundations
    5.3.2 Procedure Description
    5.3.3 Quality Estimation
  5.4 Evaluation
    5.4.1 Case Base Composition
    5.4.2 Classifier Performance on Univariate Models
    5.4.3 HMR Performance on Multivariate Models
  5.5 Summary
6 Value-Based Result Evaluation Methodology
  6.1 Accuracy Evaluation in Energy Forecasting
  6.2 Energy Market Models
  6.3 Value-Based Forecasting Performance
    6.3.1 Forecast Benefit Determination
    6.3.2 Multi-dimensional Ranking Scores
  6.4 Evaluation
    6.4.1 Use Case Description
    6.4.2 Methodology
    6.4.3 Result Discussion
  6.5 Summary
7 ECAST Benchmark Framework
  7.1 Preliminaries
    7.1.1 Objective Definition
    7.1.2 Fundamental Design Principles
  7.2 System Description
    7.2.1 Task Automation
    7.2.2 System Architecture
  7.3 Demonstration
    7.3.1 Step 1: Create a New Benchmark
    7.3.2 Step 2: Build Ensembles
    7.3.3 Step 3: Evaluate the Output
  7.4 Summary
8 Conclusions
Bibliography
List of Figures
List of Tables
A List of Reviewed Journal Articles
B Questionnaires
C Standard Errors for Ranking Scores
D Error Distribution for Benchmarked Predictors
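The core idea behind HMR, reducing each time series to a compact property vector and recommending the forecasting method that worked best on the most similar case, can be illustrated briefly. This is a generic case-based sketch of that idea; the specific properties, distance measure, and case base below are assumptions for demonstration, not the thesis's actual configuration.

```python
# Illustrative sketch of case-based model recommendation in the spirit of HMR:
# represent each series by a few properties, then recommend the forecasting
# method of the most similar previously benchmarked case. Feature choice and
# case base contents are assumptions, not taken from the thesis.
import math

def properties(series):
    """Reduce a series to a compact property vector: mean, spread, lag-1 autocorrelation."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    if var == 0:
        acf1 = 0.0
    else:
        acf1 = sum((series[i] - mean) * (series[i + 1] - mean)
                   for i in range(n - 1)) / (n * var)
    return (mean, math.sqrt(var), acf1)

def recommend(series, case_base):
    """Return the method of the nearest case by Euclidean distance in property space."""
    p = properties(series)
    best = min(case_base, key=lambda c: math.dist(p, c["props"]))
    return best["method"]

# Hypothetical case base built from earlier benchmark results
case_base = [
    {"props": (100.0, 5.0, 0.9), "method": "ARIMA"},
    {"props": (50.0, 30.0, 0.1), "method": "random forest"},
]
```

Searching in the low-dimensional property space instead of over the raw series is what makes the lookup cheap on large data sets; the recommendation then reuses results from already existing solutions rather than re-running a full benchmark.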
