1 |
A STUDY OF VHDL-AMS SIMULATION PERFORMANCE AS A FUNCTION OF MODEL COMPLEXITY. Ghali, Kalyan Venkata. 11 October 2001.
No description available.
|
2 |
Deciding among models: a decision-theoretic view of model complexity. Mozano, Jennifer Maile. 11 November 2010.
This research examines the trade-off between the cost of adding complexity to a model and the value that complexity adds to the results within the context of decision-making. It seeks to determine how complex a model should be in order to fit the purpose at hand. The report begins with a discussion of general modeling theory and model complexity. It next considers the specific case of petroleum reservoir models and the existing research comparing modeling results across model complexity levels. Finally, it presents original results applying Monte Carlo sampling to a drilling decision scenario and to a one-dimensional reservoir model in which a cylindrical oil field is represented by different numbers of cells and the results are compared.
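A minimal sketch of the kind of Monte Carlo drilling-decision analysis the report describes. All economics here (success probability, reserve distribution, oil price, drilling cost) are illustrative assumptions, not values from the report:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
N = 100_000  # Monte Carlo samples

# Uncertain inputs for a hypothetical drilling prospect (all values illustrative)
p_success = 0.30                                        # chance the well finds oil
reserves = rng.lognormal(mean=-1.0, sigma=0.8, size=N)  # recoverable MMbbl if successful
price = rng.normal(70.0, 10.0, size=N)                  # $/bbl
drill_cost = 10.0                                       # $MM, assumed known

hit = rng.random(N) < p_success
npv_drill = np.where(hit, reserves * price - drill_cost, -drill_cost)

# Decision rule: drill if the expected NPV beats walking away (NPV = 0)
ev_drill = npv_drill.mean()
print(f"E[NPV | drill] = {ev_drill:.2f} $MM -> {'drill' if ev_drill > 0 else 'walk away'}")
```

Adding model complexity in this setting would mean replacing the single lognormal reserve draw with a multi-cell reservoir simulation; the report's question is when that extra fidelity actually changes the decision.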
|
3 |
Assessment of Watershed Model Simplification and Potential Application in Small Ungaged Watersheds: A Case Study of Big Creek, Atlanta, GA. Comarova, Zoia A. 11 August 2011.
Technological and methodological advances of the past few decades have provided hydrologists with advanced and increasingly complex hydrological models. These models improve our ability to simulate hydrological systems, but they also require extensive, detailed input data and therefore have limited applicability in locations with poor data availability. Through a case study of Big Creek watershed, a 186.4 km² urbanizing watershed in Atlanta, GA, for which continuous flow data have been available since 1960, this project investigates the relationship between model complexity, data availability, and predictive performance in order to provide reliability factors for the use of reduced-complexity models in areas with limited data availability, such as small ungaged watersheds in similar environments. The goal is to identify ways to increase model efficiency, without sacrificing significant model reliability, that will be transferable to ungaged watersheds.
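As a concrete illustration of the complexity-versus-performance comparison described above, a short sketch computing the Nash-Sutcliffe efficiency (NSE, a standard hydrologic skill score) of a complex and a reduced-complexity model run against observed flows; the flow values are made-up placeholders:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect; 0 only matches the mean-flow benchmark."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Illustrative daily flows (m^3/s): observations plus two model runs
obs = np.array([5.1, 6.3, 12.8, 9.4, 7.0, 6.1, 5.5])
sim_full    = np.array([5.0, 6.5, 12.0, 9.8, 7.2, 6.0, 5.6])  # complex model
sim_reduced = np.array([5.4, 6.0, 10.5, 9.0, 7.5, 6.4, 5.8])  # simplified model

print(f"NSE, full model:    {nse(obs, sim_full):.3f}")
print(f"NSE, reduced model: {nse(obs, sim_reduced):.3f}")
```

The gap between the two scores, evaluated across watersheds with different data availability, is the kind of reliability factor the project aims to quantify.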
|
4 |
A Comprehensive Analysis of Deep Learning for Interference Suppression, Sample and Model Complexity in Wireless Systems. Oyedare, Taiwo Remilekun. 12 March 2024.
The wireless spectrum is limited, and demand for it keeps growing with advances in wireless communication, resulting in persistent interference issues. Despite progress in addressing interference, it remains a challenge for effective spectrum usage, particularly in license-free and managed shared bands and other opportunistic spectrum access solutions. Efficient and interference-resistant spectrum usage schemes are therefore critical. In the past, most interference solutions have relied on avoidance techniques and expert-system-based mitigation approaches. Recently, researchers have applied artificial intelligence/machine learning techniques at the physical (PHY) layer, particularly deep learning, to suppress or compensate for the interfering signal rather than simply avoid it. Deep learning has also been used in recent years to address various difficult problems in wireless communications, such as transmitter classification, interference classification, and modulation recognition, among others. To this end, this dissertation presents a thorough analysis of deep learning techniques for interference classification and suppression, and examines the complexity issues, both sample and model, that arise from using deep learning. First, we address the knowledge gap in the literature with respect to the state of the art in deep learning-based interference suppression. To account for the limitations of these techniques, we discuss several challenges, including lack of interpretability, the stochastic nature of the wireless channel, issues with open set recognition (OSR), and challenges with implementation. We also provide a technical discussion of the prominent deep learning algorithms proposed in the literature and offer guidelines for their successful implementation. Next, we investigate convolutional neural network (CNN) architectures for interference and transmitter classification tasks. In particular, we utilize a CNN architecture to classify interference, investigate the model complexity of CNN architectures for classifying homogeneous and heterogeneous devices, and examine the impact of that complexity on test accuracy. We then explore issues of sample size and sample quality in the training data, and from the findings of our sample complexity study we propose a rule of thumb for transmitter classification using CNNs. Finally, in cases where interference cannot be avoided, it must be suppressed. To achieve this, we build upon autoencoder work from other fields to design a CNN-based autoencoder model that suppresses interference, thereby ensuring coexistence of different wireless technologies in both licensed and unlicensed bands.

Doctor of Philosophy
Wireless communication has advanced greatly in recent years, but it is still hard to use the limited available spectrum without interference from other devices. In the past, researchers tried to avoid interference using expert systems. Now researchers are using artificial intelligence and machine learning, particularly deep learning, to mitigate interference in a different way. Deep learning has also been used to solve other tough problems in wireless communication, such as classifying the type of device transmitting a signal, classifying the signal itself, or avoiding it.
This dissertation presents a comprehensive review of deep learning techniques for reducing interference in wireless communication. It also leverages a deep learning model called a convolutional neural network (CNN) to classify interference and investigates how the complexity of the CNN affects its performance. It then looks at the relationship between model performance and dataset size (i.e., sample complexity) in wireless communication. Finally, it discusses a CNN-based autoencoder technique to suppress interference in digital amplitude-phase modulation systems. All of these techniques are important for ensuring that different wireless technologies can work together in both licensed and unlicensed bands.
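A minimal PyTorch sketch of a CNN-based autoencoder for interference suppression over two-channel (I/Q) baseband frames. The layer sizes and random placeholder data are assumptions for illustration, not the dissertation's actual architecture:

```python
import torch
import torch.nn as nn

class InterferenceSuppressor(nn.Module):
    """Toy convolutional autoencoder over 2-channel (I/Q) baseband frames,
    trained to map interference-corrupted frames to clean ones."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, kernel_size=7, stride=2,
                               padding=3, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 2, kernel_size=7, stride=2,
                               padding=3, output_padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = InterferenceSuppressor()
corrupted = torch.randn(8, 2, 256)   # batch of interference-corrupted I/Q frames (placeholder)
clean = torch.randn(8, 2, 256)       # matching clean frames (placeholder)
loss = nn.functional.mse_loss(model(corrupted), clean)
loss.backward()                       # an optimizer step would follow in a real training loop
```

The stride-2 convolutions compress each 256-sample frame to a 64-sample bottleneck; the reconstruction loss against clean frames is what pushes the bottleneck to discard the interference.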
|
5 |
CONTINENTAL SCALE DIAGNOSTIC EVALUATION OF MONTHLY WATER BALANCE MODELS FOR THE UNITED STATES. Martinez Baquero, Guillermo Felipe. January 2010.
Water balance models are important for the characterization of hydrologic systems, to help understand regional scale dynamics, and to identify hydro-climatic trends and systematic biases in data. Because existing models have, to date, only been tested on data sets of limited spatial representativeness and extent, it has not yet been established that they are capable of reproducing the range of dynamics observed in nature. This dissertation develops systematic strategies to guide selection of water balance models, establish data requirements, estimate parameters, and evaluate performance. Through a series of three papers, these challenges are investigated in the context of monthly water balance modeling across the conterminous United States. The first paper reports on an initial diagnostic iteration to evaluate relevant components of model error and to examine details of its spatial variability. We find that to conduct a robust model evaluation it is not sufficient to rely upon conventional NSE and/or r² aggregate statistics of performance; to have reasonable confidence that the model can provide hydrologically consistent simulations, it is also necessary to examine measures of water balance and hydrologic variability. The second paper builds upon the results of the first, and evaluates the suitability of several candidate model structures, focusing specifically on snow-free catchments. A diagnostic Maximum-Likelihood model evaluation procedure is developed to incorporate the notion of 'Hydrological Consistency' and to control for structural complexity. The results confirm that the evaluation of hydrologic consistency, based on benchmark comparisons and on stringent analysis of residuals, provides a robust basis for guiding model selection. The results reveal strong spatial persistence of certain model structures that needs to be understood in future studies. The third paper focuses on understanding and improving the procedure for constraining model parameters to provide hydrologically consistent results. In particular, it develops a penalty-function based modification of the Mean Squared Error estimation to help ensure proper reproduction of system behaviors by minimizing interaction of error components and by facilitating inclusion of relevant information. The analysis and results provide insight into the identifiability of model parameters, and further our understanding of how performance criteria should be applied during model identification.
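A hedged illustration of the third paper's penalty-function idea: the sketch below adds a water-balance (bias) penalty to plain MSE so that minimizing the objective cannot trade a closed water balance for smaller residuals. The exact penalty form and weighting in the dissertation may differ, and the runoff values are placeholders:

```python
import numpy as np

def penalized_mse(obs, sim, lam=1.0):
    """MSE plus a penalty on relative water-balance (bias) error; a simple
    stand-in for the penalty-function modification described above."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    mse = np.mean((obs - sim) ** 2)
    bias = (sim.mean() - obs.mean()) / obs.mean()   # relative water-balance error
    return mse + lam * bias ** 2

obs = np.array([42.0, 55.0, 80.0, 61.0, 47.0])   # monthly runoff (mm), illustrative
sim = np.array([40.0, 58.0, 74.0, 65.0, 50.0])
print(f"penalized objective: {penalized_mse(obs, sim, lam=100.0):.2f}")
```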
|
6 |
EXAMINING THE CONFIRMATORY TETRAD ANALYSIS (CTA) AS A SOLUTION OF THE INADEQUACY OF TRADITIONAL STRUCTURAL EQUATION MODELING (SEM) FIT INDICES. Liu, Hangcheng. 01 January 2018.
Structural Equation Modeling (SEM) is a framework of statistical methods that allows us to represent complex relationships between variables. SEM is widely used in economics, genetics, and the behavioral sciences (e.g., psychology, psychobiology, sociology, and medicine). Model complexity is defined as a model's ability to fit different data patterns, and it plays an important role in model selection when applying SEM. As in linear regression, the number of free model parameters is typically used in traditional SEM fit indices as a measure of model complexity. However, using only the number of free parameters to indicate SEM model complexity is crude, since other contributing factors, such as the type of constraint or the functional form, are ignored.

To solve this problem, a special technique, Confirmatory Tetrad Analysis (CTA), is examined. A tetrad refers to the difference between the products of certain covariances (or correlations) among four random variables. A structural equation model often implies that some tetrads should be zero; these model-implied zero tetrads are called vanishing tetrads. In CTA, goodness of fit can be determined by testing the null hypothesis that the model-implied vanishing tetrads are equal to zero. CTA can help improve model selection because different functional forms may affect the number of model-implied vanishing tetrads (t), and models that are not nested according to the traditional likelihood ratio test may be nested in terms of tetrads.

In this dissertation, an R package was created to perform CTA, a two-step method was developed to determine SEM model complexity using simulated data, and the number of vanishing tetrads is shown to help indicate SEM model complexity in some situations.
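A short sketch of the tetrad computation: for four variables, each tetrad is a difference of products of covariances, and a one-factor model implies that all of them vanish, as the simulated check below illustrates (the loadings and sample size are illustrative assumptions):

```python
import numpy as np

def tetrads(cov, i, j, k, l):
    """The three tetrads for variables (i, j, k, l):
    t1 = s_ij*s_kl - s_ik*s_jl, t2 = s_ij*s_kl - s_il*s_jk,
    t3 = s_ik*s_jl - s_il*s_jk (only two are linearly independent)."""
    s = cov
    t1 = s[i, j] * s[k, l] - s[i, k] * s[j, l]
    t2 = s[i, j] * s[k, l] - s[i, l] * s[j, k]
    t3 = s[i, k] * s[j, l] - s[i, l] * s[j, k]
    return t1, t2, t3

# A one-factor model x_m = lambda_m * f + e_m implies all tetrads vanish,
# since every off-diagonal covariance factors as lambda_i * lambda_j.
rng = np.random.default_rng(1)
n, lam = 5_000, np.array([0.9, 0.8, 0.7, 0.6])
f = rng.normal(size=n)
x = np.outer(f, lam) + 0.5 * rng.normal(size=(n, 4))
t = tetrads(np.cov(x, rowvar=False), 0, 1, 2, 3)
print("sample tetrads (should be near zero):", np.round(t, 4))
```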
|
7 |
Appropriate Modelling Complexity: An Application to Mass Balance Modelling of Lake Vänern, Sweden. Dahl, Magnus. January 2004.
This work is about finding an appropriate modelling complexity for a mass-balance model for phosphorus in Lake Vänern, Sweden. A statistical analysis of 30 years of water quality data shows that the epilimnion and hypolimnion have different water quality and should be treated separately in a model. Further vertical division is not motivated. Horizontally, the lake should be divided into the two main basins, Värmlandssjön and Dalbosjön. Shallow near-shore areas, bays, and areas close to point sources have to be treated as separate sub-basins if they are to be modelled correctly.

These results lead to the use of a model based on ordinary differential equations. The model applied is named LEEDS (Lake Eutrophication Effect Dose Sensitivity) and considers phosphorus and suspended particles. Several modifications were made for the application of the model to Lake Vänern. The two major ones are a revision of the equations governing the outflow of phosphorus and suspended particles through the outflow river, and the inclusion of chemical oxygen demand (COD) in the model, in order to model emissions from pulp and paper mills. The model has also been modified to handle several sub-basins.

The LEEDS model has been compared to three other eutrophication models applied to Lake Vänern. Two were simple models developed as parts of catchment area models, and the third was a lake model with higher resolution than the LEEDS model. The models showed a good fit to calibration and validation data, and were compared in two nutrient emission scenarios and a scenario with increased temperature, corresponding to the greenhouse effect.
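A minimal sketch of an ODE-based, two-basin phosphorus mass balance of the kind described. All parameter values are illustrative placeholders, not calibrated LEEDS values:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-basin total phosphorus (TP) mass balance; basin 0 receives the river load,
# basin 1 feeds the outflow river. Parameters are illustrative only.
V = np.array([80e9, 70e9])        # basin volumes (m^3)
Q_in = 500.0 * 86400 * 365        # river inflow (m^3/yr)
C_in = 0.020                      # inflow TP concentration (g/m^3)
Q_ex = 300.0 * 86400 * 365        # exchange flow between basins (m^3/yr)
k_sed = np.array([0.3, 0.3])      # first-order sedimentation rates (1/yr)

def dMdt(t, M):
    C = M / V                                 # TP concentration per basin (g/m^3)
    load = Q_in * C_in                        # external load into basin 0 (g/yr)
    ex = Q_ex * (C[0] - C[1])                 # net transport, basin 0 -> basin 1 (g/yr)
    out = Q_in * C[1]                         # outflow leaves from basin 1 (g/yr)
    sed = k_sed * M                           # loss to sediments (g/yr)
    return [load - ex - sed[0], ex - out - sed[1]]

M0 = V * 0.008                                # initial TP mass at 8 mg/m^3
sol = solve_ivp(dMdt, (0.0, 50.0), M0)
print("TP after 50 years (mg/m^3):", 1000 * sol.y[:, -1] / V)
```

Each additional sub-basin adds one state variable and a few exchange terms, which is exactly the complexity trade-off the thesis weighs against the data available to constrain those terms.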
|
8 |
Insights and Characterization of l1-norm Based Sparsity Learning of a Lexicographically Encoded Capacity Vector for the Choquet Integral. Adeyeba, Titilope Adeola. 09 May 2015.
This thesis aims to simultaneously minimize function error and model complexity for data fusion via the Choquet integral (CI). The CI is a generator function, i.e., it is parametric and yields a wealth of aggregation operators depending on the specifics of the underlying fuzzy measure. It is often the case that we wish to learn a fusion from data, with the goal of minimizing the sum of squared errors between the trained model and a set of labels. However, we also desire to learn as "simple" a solution as possible. Herein, l1-norm regularization of a lexicographically encoded capacity vector relative to the CI is explored. The impact of regularization is examined in terms of what capacities and aggregation operators it induces under different common and extreme scenarios. Synthetic experiments are provided in order to illustrate the propositions and concepts put forth.
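A minimal sketch of the two ingredients discussed: the Choquet integral with respect to a fuzzy measure (capacity), and a sum-of-squared-errors objective with an l1 penalty on the encoded capacity vector. The encoding order, the measure values, and the omission of the monotonicity constraints are simplifications for illustration:

```python
import numpy as np

def choquet(x, g):
    """Choquet integral of x w.r.t. fuzzy measure g (dict: frozenset of source
    indices -> measure value, with g[frozenset()] = 0 and g of the full set = 1)."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(-x)                # source indices sorted by descending input
    ci, prev, A = 0.0, 0.0, frozenset()
    for i in map(int, order):
        A = A | {i}
        ci += x[i] * (g[A] - prev)        # weight each input by the measure increment
        prev = g[A]
    return ci

# A capacity on 3 sources; the singleton and pair values are the free parameters
# that would be learned (illustrative values, monotonicity constraints omitted)
g = {frozenset(): 0.0,
     frozenset({0}): 0.3, frozenset({1}): 0.4, frozenset({2}): 0.2,
     frozenset({0, 1}): 0.6, frozenset({0, 2}): 0.5, frozenset({1, 2}): 0.7,
     frozenset({0, 1, 2}): 1.0}

def objective(X, y, g, lam):
    """Sum of squared errors plus an l1 penalty on the capacity vector,
    here encoded by subset size then lexicographic order of indices."""
    sse = sum((yi - choquet(xi, g)) ** 2 for xi, yi in zip(X, y))
    u = np.array([v for k, v in sorted(g.items(),
                                       key=lambda kv: (len(kv[0]), sorted(kv[0])))])
    return sse + lam * np.abs(u).sum()

X = [np.array([0.7, 0.2, 0.9]), np.array([0.1, 0.5, 0.4])]
y = [0.8, 0.35]
print(f"fused value:         {choquet(X[0], g):.3f}")
print(f"penalized objective: {objective(X, y, g, lam=0.1):.3f}")
```

Shrinking entries of the encoded vector toward zero is what pushes the learned CI toward simpler aggregation operators, which is the sparsity effect the thesis characterizes.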
|
9 |
Model Complexity in Linear Regression: Extensions for Prediction and Heteroscedasticity. Luan, Bo. 18 August 2022.
No description available.
|