1

Field Evaluation Methodology for Quantifying Network-wide Efficiency, Energy, Emission, and Safety Impacts of Operational-level Transportation Projects

Sin, Heung Gweon 28 September 2001 (has links)
This thesis presents a proposed methodology for the field evaluation of the efficiency, energy, environmental, and safety impacts of traffic-flow improvement projects. The methodology uses second-by-second Global Positioning System (GPS) speed measurements, collected with fairly inexpensive GPS units, to quantify the impacts of traffic-flow improvement projects on the efficiency, energy, and safety of a transportation network. It should be noted that the proposed methodology cannot isolate the effects of induced demand and is not suitable for estimating long-term impacts of projects that involve changes in land use. Instead, it quantifies changes in traffic behavior and changes in travel demand. This thesis also investigates the ability of various data-smoothing techniques to remove erroneous GPS data without significantly altering the underlying vehicle speed profile. Several smoothing techniques are applied to the acceleration profile, including data trimming, Simple Exponential smoothing, Double Exponential smoothing, Epanechnikov Kernel smoothing, Robust Kernel smoothing, and Robust Simple Exponential smoothing. The results of the analysis indicate that applying Robust smoothing (Kernel or Exponential) to vehicle acceleration levels, combined with a technique that minimizes the difference between the integrals of the raw and smoothed acceleration profiles, removes invalid GPS data without significantly altering the underlying measured speed profile. The methodology was successfully applied to two case studies, providing insights into the potential benefits of coordinating traffic signals across jurisdictional boundaries. More importantly, the two case studies demonstrate the feasibility of using GPS second-by-second speed measurements for the evaluation of operational-level traffic-flow improvement projects. To identify any statistically significant differences in traffic demand along the two case-study corridors before and after signal coordination, tube counts and turning counts were collected and analyzed using the ANOVA technique. The ANOVA results for turning volume counts indicated no statistically significant difference in turning volumes between the before and after conditions. Likewise, the ANOVA results for tube counts showed no statistically significant difference (at the 5 percent level of significance) between the before and after conditions. / Ph. D.
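To make the smoothing step concrete, the following Python sketch applies two of the techniques named above, simple exponential smoothing and Epanechnikov kernel smoothing, to an acceleration profile derived from second-by-second speeds, followed by a crude adjustment so the discrete integral of the smoothed accelerations matches that of the raw data. The speed values, parameter choices and the additive integral correction are illustrative assumptions, not the thesis's actual procedure.

import numpy as np

def exponential_smooth(x, alpha=0.3):
    # Simple exponential smoothing: s[t] = alpha * x[t] + (1 - alpha) * s[t-1]
    s = np.empty(len(x))
    s[0] = x[0]
    for t in range(1, len(x)):
        s[t] = alpha * x[t] + (1.0 - alpha) * s[t - 1]
    return s

def epanechnikov_smooth(x, bandwidth=3):
    # Kernel smoothing with K(u) = 0.75 * (1 - u^2) for |u| <= 1, zero otherwise
    n = len(x)
    out = np.empty(n)
    idx = np.arange(n)
    for t in range(n):
        u = (idx - t) / bandwidth
        w = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)
        out[t] = np.sum(w * x) / np.sum(w)
    return out

# Second-by-second GPS speeds in m/s (illustrative values; 25.0 mimics a GPS spike).
speed = np.array([12.0, 12.4, 12.9, 25.0, 13.8, 14.1, 14.5, 14.2, 13.9, 14.0])
accel = np.diff(speed)                                   # acceleration profile at 1 Hz

smoothed_accel = epanechnikov_smooth(exponential_smooth(accel))
# Crude stand-in for the integral-matching step: shift the smoothed accelerations so
# their discrete integral (total change in speed) equals that of the raw profile.
smoothed_accel += (accel.sum() - smoothed_accel.sum()) / len(smoothed_accel)
smoothed_speed = np.concatenate(([speed[0]], speed[0] + np.cumsum(smoothed_accel)))
print(np.round(smoothed_speed, 2))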
2

Robust techniques for regression models with minimal assumptions / M.M. van der Westhuizen

Van der Westhuizen, Magdelena Marianna January 2011 (has links)
Good quality management decisions often rely on the evaluation and interpretation of data. One of the most popular ways to investigate possible relationships in a given data set is to follow a process of fitting models to the data. Regression models are often employed to assist with decision making. In addition to decision making, regression models can also be used for the optimization and prediction of data. The success of a regression model, however, relies heavily on assumptions made by the model builder. In addition, the model may also be influenced by the presence of outliers; a more robust model, which is not as easily affected by outliers, is necessary for making more accurate interpretations of the data. In this research study, robust techniques for regression models with minimal assumptions are explored. Mathematical programming techniques such as linear programming, mixed integer linear programming, and piecewise linear regression are used to formulate a nonlinear regression model. Outlier detection and smoothing techniques are included to address the robustness of the model and to improve predictive accuracy. The performance of the model is tested by applying it to a variety of data sets and comparing the results to those of other models. The results of the empirical experiments are also presented in this study. / Thesis (M.Sc. (Computer Science))--North-West University, Potchefstroom Campus, 2011.
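As an illustration of how mathematical programming can yield a regression model that is robust to outliers, the sketch below fits a least-absolute-deviations (L1) regression by solving a linear program with SciPy. This is a generic formulation offered for illustration only; it is not claimed to be the specific model developed in the dissertation, and the data are synthetic.

import numpy as np
from scipy.optimize import linprog

def lad_regression(X, y):
    # Least-absolute-deviations regression as a linear program:
    # minimise sum(e_plus + e_minus) subject to X @ beta + e_plus - e_minus = y,
    # with e_plus, e_minus >= 0 and beta free.
    n, p = X.shape
    c = np.concatenate([np.zeros(p), np.ones(2 * n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.uniform(0, 10, 50)])     # intercept + one regressor
y = 2.0 + 0.5 * X[:, 1] + rng.normal(0, 0.2, 50)
y[:3] += 15.0                                                  # a few gross outliers
print("LAD coefficients:", np.round(lad_regression(X, y), 3))  # stays close to [2.0, 0.5]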
3

Run-to-run modelling and control of batch processes

Duran Villalobos, Carlos Alberto January 2016 (has links)
The University of Manchester. Carlos Alberto Duran Villalobos. Doctor of Philosophy in the Faculty of Engineering and Physical Sciences. December 2015. This thesis presents an innovative batch-to-batch optimisation technique that was able to improve the productivity of two benchmark fed-batch fermentation simulators: Saccharomyces cerevisiae and Penicillin production. In developing the proposed technique, several important challenges needed to be addressed. For example, the technique relied on a linear Multiway Partial Least Squares (MPLS) model that had to adapt from one operating region to another as productivity increased in order to estimate the end-point quality of each batch accurately. The proposed optimisation technique uses a Quadratic Programming (QP) formulation to calculate the Manipulated Variable Trajectory (MVT) from one batch to the next. The main advantage of the proposed optimisation technique over previously published approaches was the increase in yield and the faster convergence to an optimal MVT. Validity constraints were also included in the batch-to-batch optimisation to restrict the QP calculations to the space described by useful predictions of the MPLS model. The results of experiments on the two simulators showed that the validity constraints slowed the rate of convergence of the optimisation technique and in some cases resulted in a slight reduction in final yield. However, the introduction of the validity constraints did improve the consistency of the batch optimisation. Another important contribution of this thesis was a series of experiments that combined a variety of smoothing techniques used in MPLS modelling with the proposed batch-to-batch optimisation technique. The results of these experiments made clear that the MPLS model's prediction accuracy did not improve significantly with these smoothing techniques; however, the batch-to-batch optimisation technique did show improvements when filtering was implemented.
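A minimal sketch of the MPLS idea referred to above: batch trajectories are unfolded batch-wise into a flat matrix and a partial least squares model predicts end-point quality. The batch-to-batch QP update, validity constraints and smoothing steps are not shown; the data and variable names are invented for illustration.

import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Synthetic batch records: 40 batches, 3 measured variables, 50 time samples each.
rng = np.random.default_rng(1)
n_batches, n_vars, n_time = 40, 3, 50
trajectories = rng.normal(size=(n_batches, n_vars, n_time))
# Hypothetical end-point quality driven by the first variable's average trajectory plus noise.
quality = trajectories[:, 0, :].mean(axis=1) + 0.1 * rng.normal(size=n_batches)

# Batch-wise unfolding (the usual MPLS arrangement): one row of n_vars * n_time features per batch.
X = trajectories.reshape(n_batches, n_vars * n_time)

pls = PLSRegression(n_components=3)
pls.fit(X, quality)

# Estimate the end-point quality of a new batch from its unfolded trajectory.
new_batch = rng.normal(size=(1, n_vars * n_time))
print("predicted end-point quality:", float(np.ravel(pls.predict(new_batch))[0]))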
4

An investigation into Functional Linear Regression Modeling

Essomba, Rene Franck January 2015 (has links)
Functional data analysis, commonly known as "FDA", refers to the analysis of information on curves or functions. Key aspects of FDA include the choice of smoothing techniques, data reduction, model evaluation, functional linear modelling and forecasting methods. FDA is applicable in numerous fields such as bioscience, geology, psychology, sports science, econometrics and meteorology. This dissertation's main objective is to focus more specifically on Functional Linear Regression Modelling (FLRM), which is an extension of multivariate linear regression modelling. The problem of constructing a functional linear regression model with functional predictors and a functional response variable is considered in great detail. Discretely observed data for each variable involved in the modelling are expressed as smooth functions using a Fourier basis, a B-spline basis or a Gaussian basis. The functional linear regression model is estimated by the least squares method, the maximum likelihood method and, more thoroughly, by the penalized maximum likelihood method. A central issue when fitting functional regression models is the choice of a suitable model criterion, as well as the number of basis functions and an appropriate smoothing parameter. Four types of model criteria are reviewed: the Generalized Cross-Validation, the Generalized Information Criterion, the modified Akaike Information Criterion and the Generalized Bayesian Information Criterion. Each of these methods is applied to a dataset and the methods are contrasted based on their respective results.
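The following sketch illustrates the basis-expansion step described above: a discretely observed curve is represented in a cubic B-spline basis and fitted by penalized least squares, with the smoothing parameter chosen by Generalized Cross-Validation over a small grid. The second-difference penalty and the synthetic data are simplifying assumptions rather than the dissertation's exact formulation.

import numpy as np
from scipy.interpolate import BSpline

def bspline_design(x, n_basis=12, degree=3):
    # Design matrix of a cubic B-spline basis with equally spaced, clamped knots.
    inner = np.linspace(x.min(), x.max(), n_basis - degree + 1)
    knots = np.concatenate([[inner[0]] * degree, inner, [inner[-1]] * degree])
    return BSpline(knots, np.eye(n_basis), degree)(x)        # shape (len(x), n_basis)

def penalized_fit(B, y, lam):
    # Penalized least squares with a second-difference (P-spline style) roughness penalty.
    D = np.diff(np.eye(B.shape[1]), n=2, axis=0)
    coefs = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
    hat = B @ np.linalg.solve(B.T @ B + lam * D.T @ D, B.T)  # hat matrix, needed for GCV
    return coefs, hat

def gcv_score(B, y, lam):
    _, hat = penalized_fit(B, y, lam)
    resid = y - hat @ y
    n = len(y)
    return n * np.sum(resid ** 2) / (n - np.trace(hat)) ** 2

# Discretely observed noisy curve (illustrative data).
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 100)
y = np.sin(2 * np.pi * t) + rng.normal(0, 0.2, t.size)

B = bspline_design(t)
lambdas = 10.0 ** np.arange(-6, 3)
best_lam = min(lambdas, key=lambda lam: gcv_score(B, y, lam))
coefs, _ = penalized_fit(B, y, best_lam)
print("GCV-selected smoothing parameter:", best_lam)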
5

Hierarchical Approximation Methods for Option Pricing and Stochastic Reaction Networks

Ben Hammouda, Chiheb 22 July 2020 (has links)
In biochemically reactive systems with small copy numbers of one or more reactant molecules, stochastic effects dominate the dynamics. In the first part of this thesis, we design novel, efficient simulation techniques for reliable and fast estimation of various statistical quantities for stochastic biological and chemical systems under the framework of Stochastic Reaction Networks. In the first work, we propose a novel hybrid multilevel Monte Carlo (MLMC) estimator for systems characterized by having simultaneously fast and slow timescales. Our hybrid multilevel estimator uses a novel split-step implicit tau-leap scheme at the coarse levels, where the explicit tau-leap method is not applicable due to numerical instability issues. In a second work, we address another challenge present in this context, the high-kurtosis phenomenon observed at the deep levels of the MLMC estimator. We propose a novel approach that combines the MLMC method with a pathwise-dependent importance sampling technique for simulating the coupled paths. Our theoretical estimates and numerical analysis show that our method improves the robustness and complexity of the multilevel estimator, with negligible additional cost. In the second part of this thesis, we design novel methods for pricing financial derivatives. Option pricing is usually challenging due to (1) the high dimensionality of the input space and (2) the low regularity of the integrand in the input parameters. We address these challenges by developing different techniques for smoothing the integrand to uncover the available regularity. We then approximate the resulting integrals using hierarchical quadrature methods combined with Brownian bridge construction and Richardson extrapolation. In the first work, we apply our approach to efficiently price options under the rough Bergomi model. This model exhibits several numerical and theoretical challenges that make classical numerical pricing methods either inapplicable or computationally expensive. In a second work, we design a numerical smoothing technique for cases where analytic smoothing is impossible. Our analysis shows that adaptive sparse-grid quadrature combined with numerical smoothing outperforms the Monte Carlo approach. Furthermore, our numerical smoothing improves the robustness and the complexity of the MLMC estimator, particularly when estimating density functions.
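For readers unfamiliar with the MLMC machinery mentioned above, here is a bare-bones multilevel Monte Carlo estimator for a European call under geometric Brownian motion, using coupled fine/coarse Euler-Maruyama paths. It omits the thesis's specific ingredients (tau-leap schemes, importance sampling, numerical smoothing, adaptive sample allocation); all parameters and sample sizes are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(3)
S0, r, sigma, T, K = 100.0, 0.05, 0.2, 1.0, 100.0
payoff = lambda s: np.exp(-r * T) * np.maximum(s - K, 0.0)   # discounted call payoff

def level_estimator(level, n_samples, m=2):
    # Mean of P_l - P_{l-1} using coupled fine/coarse Euler-Maruyama paths (P_{-1} := 0).
    n_fine = m ** level
    dt = T / n_fine
    s_fine = np.full(n_samples, S0)
    s_coarse = np.full(n_samples, S0)
    dw_coarse = np.zeros(n_samples)
    for step in range(n_fine):
        dw = rng.normal(0.0, np.sqrt(dt), n_samples)
        s_fine += r * s_fine * dt + sigma * s_fine * dw
        dw_coarse += dw
        if level > 0 and (step + 1) % m == 0:                # coarse path reuses the summed increments
            s_coarse += r * s_coarse * (m * dt) + sigma * s_coarse * dw_coarse
            dw_coarse = np.zeros(n_samples)
    diff = payoff(s_fine) - (payoff(s_coarse) if level > 0 else 0.0)
    return diff.mean()

# Fixed (non-adaptive) sample sizes per level, for illustration only.
samples = [200_000, 100_000, 50_000, 25_000]
mlmc_price = sum(level_estimator(l, n) for l, n in enumerate(samples))
print("MLMC estimate of the call price:", round(mlmc_price, 3))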
6

Jointly integrating current context and social influence for improving recommendation

Bambia, Meriam 13 June 2017 (has links)
Due to the diversity of alternative contents to choose from and changes in users' preferences, real-time prediction of users' preferences in particular circumstances is increasingly hard for recommender systems. However, most existing context-aware approaches use only the current time and location, separately, and ignore other contextual information on which users' preferences may undoubtedly depend (e.g. weather, occasion). Furthermore, they fail to consider this contextual information jointly with the social interactions between users. On the other hand, solving classic recommender problems (e.g. no items seen by a new user, known as the cold-start problem, and not enough items co-rated with other users of similar preferences, known as the sparsity problem) is of significant importance and has been addressed by several works. In this thesis, we propose a context-based approach that jointly leverages current contextual information and social influence in order to improve item recommendation. In particular, we propose a probabilistic model that aims to predict the relevance of items with respect to the user's current context. We considered several current-context elements such as time, location, occasion, day of the week and weather. To avoid extreme probability estimates, which arise under data sparsity, we used the Laplace smoothing technique. On the other hand, we argue that information from social relationships has a potential influence on users' preferences. Thus, we assume that social influence depends not only on friends' ratings but also on the social similarity between users. We proposed a social-based model that estimates the relevance of an item with respect to the social influence around the user. The user-friend social similarity may be established from the social interactions between users and their friends (e.g. recommendations, tags, comments). Social influence is then integrated, based on a user-friend similarity measure, to estimate users' preferences. We conducted a comprehensive effectiveness evaluation on a real dataset crawled from the Pinhole social TV platform. This dataset includes viewer-video access histories and the viewers' friendship networks. In addition, we collected contextual information for each viewer-video access captured by the platform, which records the latest contextual information to which the viewer is exposed while watching a video. In our evaluation, we adopt Time-aware Collaborative Filtering, Time-Dependent Profile and Social Network-aware Matrix Factorization as baseline models. The evaluation focused on two recommendation tasks: recommending a ranked list of videos and predicting video ratings. We evaluated the impact of each viewing-context element on prediction performance, and tested the ability of our model to handle data sparsity and the viewer cold-start problem. The experimental results highlighted the effectiveness of our model compared to the considered baselines.
Experimental results demonstrate that our approach outperforms time-aware and social network-based approaches. In the sparsity and cold-start tests, our approach returns consistently accurate predictions at different levels of data sparsity.
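A minimal sketch of the Laplace (additive) smoothing step mentioned in the abstract, applied here to estimating P(item | context) from a viewing log. The contexts, catalogue and counts are invented for illustration, and the social-influence component of the model is not shown.

from collections import Counter

# Hypothetical viewing log: (context, video) pairs; contexts combine weekday/weather/occasion.
history = [
    (("saturday", "rainy", "family"), "movie_a"),
    (("saturday", "rainy", "family"), "movie_a"),
    (("saturday", "sunny", "alone"), "series_b"),
    (("monday", "rainy", "alone"), "news_c"),
]
catalogue = {"movie_a", "series_b", "news_c", "movie_d"}

def laplace_item_given_context(history, catalogue, alpha=1.0):
    # P(item | context) with additive (Laplace) smoothing so unseen items keep non-zero mass.
    pair_counts = Counter(history)
    context_counts = Counter(ctx for ctx, _ in history)
    def prob(item, context):
        num = pair_counts[(context, item)] + alpha
        den = context_counts[context] + alpha * len(catalogue)
        return num / den
    return prob

prob = laplace_item_given_context(history, catalogue)
ctx = ("saturday", "rainy", "family")
for item in sorted(catalogue):
    print(item, round(prob(item, ctx), 3))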
