1

Estimating the effect of future oil prices on petroleum engineering project investment yardsticks.

Mendjoge, Ashish V 30 September 2004 (has links)
This study proposes two methods, (1) a probabilistic method based on historical oil prices and (2) a method based on Gaussian simulation, to model future prices of oil. With these models of future oil prices, we can calculate the ranges of uncertainty in traditional profitability indicators based on cash flow analysis, such as net present value, the ratio of net present value to investment, and internal rate of return. We found that conventional methods used to quantify uncertainty, which use high, low and base prices, produce uncertainty ranges far narrower than those observed historically. These methods fail because they do not capture the "shocks" in oil prices that arise from geopolitical events or supply-demand imbalances. Quantifying uncertainty is becoming increasingly important in the petroleum industry, as many current investment opportunities in reservoir development require large investments, many in harsh exploration environments with intensive technology requirements. Insight into the range of uncertainty, particularly on the downside, may influence investment decisions in these difficult areas.
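The abstract does not spell out the simulation mechanics, so the following is only a minimal sketch of the general idea: drawing Gaussian log-price shocks to simulate oil-price paths and propagating them through a simple cash-flow model to obtain a distribution of net present value. Every numeric input (prices, volumes, costs, discount rate) is a made-up placeholder, not a figure from the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical inputs: none of these figures come from the thesis.
p0 = 40.0                   # current oil price, $/bbl
mu, sigma = 0.02, 0.30      # assumed annual drift and volatility of the log price
years = 10
n_paths = 10_000
production = np.full(years, 100_000.0)   # bbl produced each year (assumed flat)
opex = 1.5e6                             # annual operating cost, $
capex = 20e6                             # up-front investment, $
discount = 0.10

# Geometric-Brownian-motion style price paths (one possible "Gaussian simulation").
shocks = rng.normal(mu - 0.5 * sigma**2, sigma, size=(n_paths, years))
prices = p0 * np.exp(np.cumsum(shocks, axis=1))

# Discounted cash flows and NPV for every simulated path.
t = np.arange(1, years + 1)
cash_flows = prices * production - opex
npv = (cash_flows / (1 + discount) ** t).sum(axis=1) - capex

print("P10 / P50 / P90 NPV ($M):",
      np.round(np.percentile(npv, [10, 50, 90]) / 1e6, 1))
print("Probability NPV < 0:", np.mean(npv < 0))
```

The spread between the P10 and P90 NPVs is the kind of uncertainty range the study contrasts with conventional high/low/base price cases.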
2

Machine learning approach for crude oil price prediction

Abdullah, Siti Norbaiti binti January 2014 (has links)
Crude oil prices impact the world economy and are thus of interest to economists and politicians. The oil price's volatile behaviour, which has moulded today's world economy, society and politics, has motivated and continues to motivate further research, and is expected to prompt new and interesting research challenges. In the present research, machine learning and computational intelligence, utilising historical quantitative data together with the linguistic element of online news services, are used to predict crude oil prices via five different models: (1) the Hierarchical Conceptual (HC) model; (2) the Artificial Neural Network-Quantitative (ANN-Q) model; (3) the Linguistic model; (4) the Rule-based Expert model; and, finally, (5) the Hybridisation of Linguistic and Quantitative (LQ) model. First, to understand the behaviour of the crude oil price market, the HC model functions as a platform to retrieve information that explains the behaviour of the market, drawn from Google News articles using the keyword "Crude oil price". Through a systematic approach, price data are classified into categories that explain the crude oil price's level of impact on the market, and this classification distinguishes the crucial behavioural information contained in the articles. These distinguished data features are ranked hierarchically according to their level of impact and used as a reference to discover the numeric data implemented in model (2). Model (2) is developed to validate the features retrieved in model (1). It introduces the Back Propagation Neural Network (BPNN) technique as an alternative to conventional techniques used for forecasting the crude oil market; the BPNN technique is shown in model (2) to produce more accurate and competitive results, and the features retrieved from model (1) are likewise validated as causes of market volatility. In model (3), a more systematic approach is introduced to extract features from the news corpus. This approach applies a content-utilisation technique to news articles and mines news sentiment by applying fuzzy-grammar fragment extraction. To extract features from the news articles systematically, a domain-customised 'dictionary' containing grammar definitions is built beforehand. The retrieved features are used as the linguistic data to predict the market's behaviour with respect to the crude oil price. This model also produces a decision tree that hierarchically delineates the events (i.e., the market's rules) that made the market volatile, which in turn leads to model (4). Model (5) is then built to complement the linguistic character of model (3) with the numeric prediction model of model (2). To conclude, the hybridisation of these two models, and the integration of models (1) to (5) in this research, imitates how the crude oil market's regulators calculate the risk of their actions before executing a price hedge in the market, where the risk calculation is based on the 'facts' (quantitative data) and 'rumours' (linguistic data) collected. The hybridisation of quantitative and linguistic data in this study has shown promising accuracy, evidenced by the optimum value of directional accuracy and the minimum value of errors obtained.
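As an illustration of the BPNN component in model (2), a hedged sketch follows: a backpropagation-trained feed-forward network fitted to lagged prices. The lag features, network size and data file are assumptions for illustration, not the configuration used in the thesis.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Assume `prices` is a 1-D array of monthly crude oil prices (hypothetical file).
prices = np.loadtxt("crude_oil_prices.csv")

# Build lagged features: each row holds the previous `lags` prices,
# and the target is the next price.
lags = 6
X = np.column_stack([prices[i:len(prices) - lags + i] for i in range(lags)])
y = prices[lags:]

# Chronological split: train on the past, test on the most recent 20%.
split = int(0.8 * len(y))
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

# A backpropagation-trained feed-forward network (architecture assumed).
bpnn = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
)
bpnn.fit(X_train, y_train)
print("Test R^2:", bpnn.score(X_test, y_test))
```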
3

The Relevance of Accounting Information for Valuation and Risk

Brimble, Mark Andrew, m.brimble@griffith.edu.au January 2003 (has links)
A key theme in capital markets research examines the relationships between accounting information and firm value. Two concerns relating to the value relevance of accounting information are: (1) concerns over the explanatory and predictive power of the evidence presented in the prior literature (Lev, 1989); and (2) the evidence of a deterioration in the association between accounting information and stock prices over the past four decades (Collins, Maydew and Weiss, 1997; Francis and Schipper, 1999; Lev and Zarowin, 1999). These concerns provide the key motivation for this thesis, which examines: (1) the usefulness of the clean surplus accounting equation in valuation; (2) the role of accounting information in estimating and predicting systematic risk; and (3) the changing nature of the relationship between accounting information, stock prices and risk over time. The empirical research provides evidence of the value-irrelevance of the clean surplus equation and that controlling for the functional form of the earnings-returns relationship is more important. Evidence is also provided that accounting variables are highly associated with M-GARCH risk betas and also possess predictive ability relative to these risk measures. Finally, the relationships between stock prices, risk models and accounting information are shown to have not deteriorated over time, contrary to prior evidence. Rather, the functional form of the relationship has changed from linear to a non-linear arctan association. Overall, accounting information continues to play the central role in the determination of stock prices and risk metrics.
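For readers unfamiliar with the terminology, the short sketch below illustrates, with made-up numbers, the clean surplus relation and the kind of non-linear arctan returns-earnings form referred to above; the coefficients are purely illustrative and are not estimates from the thesis.

```python
import numpy as np

# Clean surplus relation: B_t = B_{t-1} + NI_t - D_t
# (closing book value = opening book value + net income - dividends).
opening_book_value = 100.0   # hypothetical per-share figures
net_income = 12.0
dividends = 5.0
closing_book_value = opening_book_value + net_income - dividends
print("Clean surplus closing book value:", closing_book_value)   # 107.0

# Non-linear arctan returns-earnings association of the general form
#   return = a + b * arctan(c * unexpected_earnings)
# (a, b, c here are illustrative, not estimates from the thesis).
a, b, c = 0.02, 0.15, 4.0
unexpected_earnings = np.linspace(-1.0, 1.0, 5)
expected_return = a + b * np.arctan(c * unexpected_earnings)
print(np.round(expected_return, 3))
```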
4

Predicting low airfares with time series features and a decision tree algorithm

Krook, Jonatan January 2018 (has links)
Airlines try to maximize revenue by letting ticket prices vary over time. This fluctuation contains patterns that can be exploited to predict price lows. In this study, we create an algorithm that decides daily whether to buy a certain ticket or wait for the price to go down. For creation and evaluation, we have used data from online searches for flights on the route Stockholm – New York during 2017 and 2018. The algorithm is based on time series features selected by a decision tree and clearly outperforms the selected benchmarks.
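A minimal sketch of the buy-or-wait idea described above follows; the rolling features, the 14-day labelling horizon and the data file are assumptions, not the thesis's actual feature set.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Assume a daily fare series for one route (e.g. Stockholm - New York); file is hypothetical.
fares = pd.Series(np.loadtxt("fares_arn_nyc.csv"))

df = pd.DataFrame({"price": fares})
df["pct_change_7d"] = df["price"].pct_change(7)
df["rolling_min_14d"] = df["price"].rolling(14).min()
df["days_to_departure"] = np.arange(len(df))[::-1]   # stand-in feature

# Label: "buy" (1) if no lower price appears in the next 14 days, else "wait" (0).
future_min = df["price"][::-1].rolling(14, min_periods=1).min()[::-1].shift(-1)
df["buy"] = (df["price"] <= future_min).astype(int)
df = df.dropna().reset_index(drop=True)

X = df[["pct_change_7d", "rolling_min_14d", "days_to_departure"]]
y = df["buy"]
split = int(0.8 * len(df))

clf = DecisionTreeClassifier(max_depth=4, random_state=0)
clf.fit(X.iloc[:split], y.iloc[:split])
print("Held-out accuracy:", clf.score(X.iloc[split:], y.iloc[split:]))
```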
5

Price Prediction for Used Cars : A Comparison of Machine Learning Regression Models

Collard, Marcus January 2022 (has links)
Cars of a particular make, model, year, and set of features start out with a price set by the manufacturer. As they age and are resold used, they are subject to supply-and-demand pricing for their particular set of features, in addition to their unique history. The more this sets them apart from comparable cars, the harder they become to evaluate with traditional methods. Using machine learning algorithms to better exploit data on all the less common features of a car makes it possible to assess a vehicle's value more accurately. This study compares the performance of Linear Regression, Ridge Regression, Lasso Regression, and Random Forest Regression algorithms in predicting the price of used cars. An important qualification of a price prediction tool is that depreciation can be represented, so that past data can be better utilised for current price prediction; this study therefore also compares the depreciation estimated by the algorithms. The study has been conducted with a large public dataset of used cars. The results show that Random Forest Regression gives the highest price prediction performance across all metrics used. It was also able to represent average depreciation much closer to reality than the other algorithms, at 13.7% predicted annual geometric depreciation for the dataset, independent of vehicle age.
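As a hedged sketch of the model comparison described above (the file name, feature columns and hyperparameters are placeholders, not the study's actual dataset or settings):

```python
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, r2_score

# Hypothetical used-car table with a price target.
df = pd.read_csv("used_cars.csv")
X = df[["make", "model", "year", "mileage", "fuel_type"]]
y = df["price"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# One-hot encode categorical columns; pass numeric columns through unchanged.
preprocess = make_column_transformer(
    (OneHotEncoder(handle_unknown="ignore"), ["make", "model", "fuel_type"]),
    remainder="passthrough",
)

models = {
    "linear": LinearRegression(),
    "ridge": Ridge(alpha=1.0),
    "lasso": Lasso(alpha=0.1),
    "random_forest": RandomForestRegressor(n_estimators=300, random_state=0),
}

for name, model in models.items():
    pipe = make_pipeline(preprocess, model)
    pipe.fit(X_train, y_train)
    pred = pipe.predict(X_test)
    print(f"{name:14s}  MAE={mean_absolute_error(y_test, pred):,.0f}  "
          f"R2={r2_score(y_test, pred):.3f}")
```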
6

Time Series Prediction for Stock Price and Opioid Incident Location

January 2019 (has links)
Time series forecasting is the prediction of future data after analyzing past data for temporal trends. This work investigates two time series forecasting problems: stock data prediction and opioid incident prediction. The stock data prediction problem investigates methods for predicting trends in the NYSE and NASDAQ stock markets for ten different companies, nine of which are part of the Dow Jones Industrial Average (DJIA). A novel deep learning model based on a Generative Adversarial Network (GAN) is used to predict future data, and the results are compared with existing regression techniques such as linear, Huber, and ridge regression, as well as neural network models such as Long Short-Term Memory (LSTM) models. The opioid incident prediction problem investigates methods for predicting the locations of future opioid overdose incidents from data on past incidents. A similar deep learning model is used to predict the locations of future overdose incidents given two datasets of past incidents (the Connecticut and Cincinnati opioid incident datasets) and is compared with existing neural network models such as convolutional LSTMs, attention-based convolutional LSTMs, and encoder-decoder frameworks. Experimental results on these datasets for both problems show the superiority of the proposed architectures over standard statistical models. / Dissertation/Thesis / Masters Thesis Computer Science 2019
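The GAN architecture itself is not detailed in the abstract; as a hedged sketch, the following shows only the kind of LSTM baseline the proposed model is compared against, with an assumed window length and data file.

```python
import numpy as np
import tensorflow as tf

# Assume `closes` is a 1-D array of daily closing prices for one ticker (hypothetical file).
closes = np.loadtxt("closes.csv")
closes = (closes - closes.mean()) / closes.std()   # simple standardisation

# Sliding windows of the last `window` prices predict the next price.
window = 30
X = np.stack([closes[i:i + window] for i in range(len(closes) - window)])
y = closes[window:]
X = X[..., np.newaxis]                             # shape: (samples, window, 1)

split = int(0.8 * len(y))
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=10, batch_size=32, verbose=0)
print("Test MSE:", model.evaluate(X[split:], y[split:], verbose=0))
```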
7

Predicting Airbnb Prices in European Cities Using Machine Learning

Gangarapu, Shalini, Mernedi, Venkata Surya Akash January 2023 (has links)
Background: Machine learning is a field of computer science that focuses on creating models that can predict patterns and relations in data. In this thesis, we use machine learning to predict Airbnb prices in various European cities to help hosts set reasonable prices for their properties. Different supervised machine learning algorithms are used to determine which model provides the highest accuracy, so that hosts can set profitable prices for their housing properties. Objectives: The main goal of this thesis is to use machine learning algorithms to assist hosts in setting reasonable rental prices for their properties, so that they can keep their properties affordable for renters across Europe and achieve maximum occupancy. Methods: The dataset of Airbnb listings in European cities is gathered from Kaggle and pre-processed using techniques such as one-hot encoding, label encoding, StandardScaler and principal component analysis. The dataset is divided into three parts for training, validation and testing. Next, feature selection is performed to determine the most important features that contribute to the pricing, and the dimensionality of the dataset is reduced. Supervised machine learning algorithms are used for training, and the models are evaluated with reliable performance estimates after tuning the hyperparameters using k-fold cross-validation. Results: The feature importance analysis shows that room capacity, room type (shared or not) and country appear in all three algorithms; although scores vary between algorithms, these are among the top five attributes that influence the target variable, together with day, cleanliness rating and attr index. Among the chosen learning algorithms, the random forest regressor gave the best regression model, with an R2 score of 0.70; the second best is the gradient boosting regressor with an R2 score of 0.32, while SVM gave the lowest score of 0.06. Conclusions: The random forest regressor was the best algorithm for predicting Airbnb prices and, compared to the other chosen models, best supports hosts in setting reasonable rental prices with more accurate pricing for renters across Europe. Contrary to our expectations, SVM performed worst on this dataset.
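A minimal sketch of the preprocessing-plus-random-forest pipeline described in the Methods paragraph; the file name and column names are placeholders and may not match the Kaggle files exactly, and the `sparse_output` argument assumes scikit-learn 1.2 or later.

```python
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

# Hypothetical file and column names; adjust to the actual Kaggle data.
df = pd.read_csv("airbnb_european_cities.csv")
numeric = ["person_capacity", "cleanliness_rating", "dist", "attr_index", "day"]
categorical = ["room_type", "room_shared", "country"]
X, y = df[numeric + categorical], df["price"]

pipe = make_pipeline(
    make_column_transformer(
        (OneHotEncoder(handle_unknown="ignore", sparse_output=False), categorical),
        (StandardScaler(), numeric),
    ),
    PCA(n_components=0.95),   # keep components explaining 95% of the variance
    RandomForestRegressor(n_estimators=300, random_state=0),
)

# Reliable performance estimate via k-fold cross-validation on the R^2 score.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(pipe, X, y, cv=cv, scoring="r2")
print("Mean R^2:", scores.mean())
```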
8

Predictions of Electricity Prices in Different Time Periods With Lasso

Manninger, Harriet, Liu, Xue January 2022 (has links)
As the era of big data arrives, people need to keep pace with the times and develop tools that can deal with vast amounts of information. In this project, lasso is applied to build parametric models of electricity prices based on different affecting factors. Thereafter, the models are used to predict the electricity prices 8 days ahead for three different time periods. We compare their prediction performance in terms of normalized mean square error (NMSE) and use lasso to identify the dominant factors behind the electricity prices in the different time periods. The results show that a model that spans a 24-hour-long period gives the lowest NMSE, followed by one spanning a two-hour-long period in which the electricity prices are leading up to a peak value. The model with the highest NMSE is from a two-hour-long period in which the electricity prices have a peak value. We also analyze potential reasons for these results. / Bachelor's degree project in electrical engineering 2022, KTH, Stockholm
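A small sketch of the lasso-plus-NMSE workflow under assumed inputs; the data files, the 5-fold cross-validation and the particular NMSE definition are assumptions, not taken from the project.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Assume X holds hourly explanatory factors (load, wind, temperature, lagged prices, ...)
# and y the corresponding electricity prices; both files are placeholders.
X = np.loadtxt("factors.csv", delimiter=",")
y = np.loadtxt("prices.csv")

# Chronological split: fit on history, predict the following 8 days (192 hours).
horizon = 8 * 24
X_train, X_test = X[:-horizon], X[-horizon:]
y_train, y_test = y[:-horizon], y[-horizon:]

# Lasso with cross-validated regularisation; the non-zero coefficients
# point to the dominant factors for the period.
model = make_pipeline(StandardScaler(), LassoCV(cv=5))
model.fit(X_train, y_train)
pred = model.predict(X_test)

nmse = np.mean((y_test - pred) ** 2) / np.var(y_test)   # one common NMSE definition
coefs = model.named_steps["lassocv"].coef_
print("NMSE:", nmse)
print("Dominant factor indices:", np.flatnonzero(coefs != 0))
```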
9

Automation of price prediction using machine learning in a large furniture company

Ghorbanali, Mojtaba January 2022 (has links)
The accurate prediction of product prices can be highly beneficial for procurers, both business-wise and production-wise. Many companies today, of various sizes and in various fields of operation, have access to vast amounts of data from which valuable information can be extracted. In this master thesis, several large databases of products in different categories have been analyzed. Because of confidentiality, the labels from the database that appear in this thesis are replaced by general titles and the real titles are not mentioned; the company is likewise not referred to by name, but the whole work is carried out on the company's real product data set. As a real-world data set, the data was messy and full of nulls and missing values, so the data wrangling took considerable time. The approaches used for the models were regression methods and gradient boosting models. The main purpose of this master thesis was to build price prediction models based on the features of each item, to assist with the initial positioning of a product and its initial price. The best result achieved during this master thesis was from the XGBoost machine learning model, with about 96% accuracy, which can help the producer accelerate their pricing strategies.
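Since the company's feature set is confidential, the following is only a generic sketch of the gradient-boosting approach described; the column names, hyperparameters and the MAPE-based accuracy figure are assumptions for illustration.

```python
import pandas as pd
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

# Hypothetical product table: category/material codes plus a price target.
df = pd.read_csv("products.csv")
X = pd.get_dummies(df.drop(columns=["price"]), dtype=float)   # one-hot encode categoricals
y = df["price"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = XGBRegressor(n_estimators=500, learning_rate=0.05, max_depth=6, random_state=0)
model.fit(X_train, y_train)

mape = mean_absolute_percentage_error(y_test, model.predict(X_test))
print(f"Approximate accuracy: {100 * (1 - mape):.1f}%")   # loose analogue of the reported figure
```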
