51

Cellular Services Market In India : Predictive Models And Assessing Interventions

Shrinivas, V Prasanna 04 1900
The objective of this thesis is to address some interesting problems in the Indian cellular services market. The first problem relates to identifying the important change points that marked the evolution of the telecom market since Indian independence. We use data on the per-capita availability of telephones in India for this purpose. We identify change points that map to the computerization drive of 1989, the liberalization and globalization policies beginning in 1991, and subsequently the introduction of NTP 1997 and NTP 1999. We also identify the important change points that mark the growth of the cellular services subscriber base in India and map them to some of the major macro-level policy initiatives taken by TRAI.

The second problem is the assessment of policy interventions on the growth of the cellular subscriber base in India. We model the impact of two important interventions, namely NTP 1999 and its spill-over policy, the entry of a fourth player into the market to offer services. We model the abrupt temporary, abrupt permanent and gradual permanent impacts of these interventions both individually and in a coupled manner. We are arguably the first to use intervention analysis and change point analysis to study the Indian telecom market.

The third problem is the challenging task of forecasting the growth of cellular services subscribers in India. We use machine learning techniques such as ε-SVR and ν-SVR and compare their performance with ANN and ARIMA using standard performance metrics. We first predict the aggregate growth of cellular mobile subscribers in India using the SVR techniques, which is of interest to policy makers from a strategic standpoint. We then predict the marginal (monthly) subscriber growth using SVR and tabulate the results for varying forecasting depths, which is of interest to service providers from an operational standpoint. We find that the SVR techniques perform better than ANN and ARIMA, particularly for forward (out-of-sample) forecasting as the horizon increases.

The final problem involves a differential game model in an oligopoly setting, in which the telecom service providers optimize their advertisement-innovation mix in order to maximize their discounted flow of profits. We consider the situation where the service providers make Cournot conjectures about the actions of their rivals: the firms do not enter into agreements or form cartels, and each firm chooses the quantity it wants to sell simultaneously with the others. The essence of the Cournot conjecture is that, although the competition is quantity based, no single firm can unilaterally improve the total quantity sold in the market; every firm makes only one decision, and does so while the other firms are simultaneously making theirs. Papers in the literature consider either advertisement or product/process innovation separately, but not together; we incorporate both control variables, with the inverse demand function as the state variable. We propose an open-loop solution that is dependent on time, and we conduct experiments with various combinations of churn and spill-over rates of advertisement and innovation, thereby obtaining some managerial insights.
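As a minimal sketch of the forecasting approach named above, the example below fits a ν-SVR model to a univariate subscriber series using lagged values as features. The data, the lag order, and the scikit-learn hyper-parameters are illustrative assumptions, not values from the thesis.

```python
# Hypothetical sketch: nu-SVR forecasting of monthly subscriber counts
# using lagged values as features (scikit-learn). The series and the
# hyper-parameters are made up for illustration.
import numpy as np
from sklearn.svm import NuSVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

subscribers = np.array([5.2, 5.9, 6.8, 7.9, 9.3, 10.8, 12.6, 14.9,
                        17.4, 20.1, 23.5, 27.2, 31.4, 36.0])  # millions (made up)

lags = 3
X = np.array([subscribers[i:i + lags] for i in range(len(subscribers) - lags)])
y = subscribers[lags:]

model = make_pipeline(StandardScaler(), NuSVR(nu=0.5, C=10.0, kernel="rbf"))
model.fit(X[:-2], y[:-2])          # hold out the last two points

print("out-of-sample forecasts:", model.predict(X[-2:]))
print("actuals:", y[-2:])
```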
52

Semi-Supervised Classification Using Gaussian Processes

Patel, Amrish 01 1900
Gaussian Processes (GPs) are promising Bayesian methods for classification and regression problems. They have also been used for semi-supervised classification tasks. In this thesis, we propose new algorithms for solving the semi-supervised binary classification problem using GP regression (GPR) models. The algorithms are closely related to semi-supervised classification based on support vector regression (SVR) and maximum margin clustering. The proposed algorithms are simple and easy to implement. Moreover, the hyper-parameters are estimated without resorting to expensive cross-validation techniques. The algorithm based on the sparse GPR model gives a sparse solution directly, unlike the SVR-based algorithm. Use of the sparse GPR model helps in making the proposed algorithm scalable. The results of experiments on synthetic and real-world datasets demonstrate the efficacy of the proposed sparse GP-based algorithm for semi-supervised classification.
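The abstract does not spell out the thesis's semi-supervised algorithms, so the sketch below shows only the assumed underlying building block: GP regression fit on ±1-encoded labelled points and used to assign pseudo-labels to unlabelled points by the sign of the posterior mean.

```python
# Minimal sketch of the GPR building block only: fit GP regression on
# +/-1-encoded labelled points, then label unlabelled points by the sign of
# the posterior mean. This is an illustrative assumption, not the thesis's
# semi-supervised algorithm.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X_labelled = np.array([[-2.0], [-1.5], [1.5], [2.0]])
y_labelled = np.array([-1.0, -1.0, 1.0, 1.0])          # binary labels as +/-1
X_unlabelled = rng.uniform(-3, 3, size=(10, 1))

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
gpr.fit(X_labelled, y_labelled)

pseudo_labels = np.sign(gpr.predict(X_unlabelled))
print(np.column_stack([X_unlabelled.ravel(), pseudo_labels]))
```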
54

Computational tools for soft sensing and state estimation

Balakrishnapillai Chitralekha, Saneej 06 1900
The development of fast and efficient computer hardware technology has resulted in the rapid development of numerous computational software tools for making statistical inferences. The computational algorithms, which are the backbone of these tools, originate from distinct areas in science, mathematics and engineering. The main focus of this thesis is on computational tools which can be employed for estimating unmeasured variables in a process using all the available prior information. Specifically, this thesis demonstrates the application of a variety of tools for soft sensing of process variables and uncertain parameters of physicochemical process models, using routine data available from the process. The application examples presented in this thesis come from broad areas where process uncertainty is inherent and include petrochemical processes, mechanical valve actuators, and upstream production processes in petroleum reservoirs. The mathematical models employed in the different domains vary significantly in their structure and level of complexity. In the petrochemical domain, the focus was on developing empirical soft sensors, which are essentially nonparametric mathematical models identified using routine data from the process. The Support Vector Regression technique was applied for identifying such nonparametric empirical models. On the other hand, in all the other application examples in this thesis the physical parametric models of the process were utilized. The latter application examples, which cover a major portion of this thesis, demonstrate the application of modern state and parameter estimation algorithms which are firmly grounded in Bayesian theory and Monte Carlo techniques. Prior to the chapters on the application of state and parameter estimation techniques, a tutorial overview of Monte Carlo simulation-based state estimation algorithms is provided, with an attempt to throw new light on these techniques. The tutorial is aimed at making these techniques simple to visualize and understand. The application case studies serve to illustrate the performance of the different algorithms. All case studies presented in this thesis are performed on processes that exhibit significant nonlinearity in terms of the relationship between the process input variables and output variables. / Process Control
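As a concrete illustration of the Monte Carlo state-estimation algorithms covered in the tutorial portion, here is a minimal bootstrap particle filter for a scalar nonlinear state-space model. The model, noise levels, and particle count are illustrative assumptions, not taken from the thesis.

```python
# Minimal bootstrap particle filter for a scalar nonlinear state-space model.
# The model x_t = 0.5*x_{t-1} + 8*cos(t) + w_t, y_t = x_t**2/20 + v_t and all
# tuning values are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(1)
T, N = 50, 500                      # time steps, particles
q, r = 1.0, 0.5                     # process / measurement noise std dev

# Simulate a "true" trajectory and noisy measurements.
x_true = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.5 * x_true[t - 1] + 8.0 * np.cos(t) + q * rng.standard_normal()
    y[t] = x_true[t] ** 2 / 20.0 + r * rng.standard_normal()

particles = rng.standard_normal(N)
estimates = np.zeros(T)
for t in range(1, T):
    # Propagate particles through the process model.
    particles = 0.5 * particles + 8.0 * np.cos(t) + q * rng.standard_normal(N)
    # Weight by the measurement likelihood, normalize, estimate, resample.
    weights = np.exp(-0.5 * ((y[t] - particles ** 2 / 20.0) / r) ** 2) + 1e-300
    weights /= weights.sum()
    estimates[t] = np.dot(weights, particles)
    particles = rng.choice(particles, size=N, p=weights)

print("RMSE:", np.sqrt(np.mean((estimates - x_true) ** 2)))
```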
55

Using airborne laser scanning for mountain forests mapping : support vector regression for stand parameters estimation and unsupervised training for treetop detection

Monnet, Jean-Matthieu 25 October 2011
Numerous studies have shown the potential of airborne laser scanning for the mapping of forest resources. However, the application of this remote sensing technique to the complex forests encountered in mountainous areas requires further investigation. In this thesis, the two main methods used to derive forest information are tested with airborne laser scanning data acquired in the French Alps and adapted to the constraints of mountainous environments. In particular, a framework for unsupervised training of treetop detection is proposed, and the performance of support vector regression combined with dimension reduction for forest stand parameters estimation is evaluated.
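A minimal sketch of the combination evaluated above — dimension reduction followed by support vector regression — as a PCA + SVR pipeline in scikit-learn. The synthetic "LiDAR metric" features, the target stand parameter, and the hyper-parameters are assumptions for illustration only.

```python
# Illustrative sketch: dimension reduction + support vector regression for
# stand-parameter estimation. Synthetic "LiDAR metrics" and all settings are
# assumptions; the thesis's actual features and tuning are not reproduced.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_plots, n_metrics = 120, 30
X = rng.normal(size=(n_plots, n_metrics))          # point-cloud height metrics (fake)
basal_area = 5.0 + X[:, :3] @ np.array([4.0, 2.0, 1.0]) + rng.normal(0, 1.0, n_plots)

model = make_pipeline(StandardScaler(), PCA(n_components=5), SVR(C=10.0, epsilon=0.5))
scores = cross_val_score(model, X, basal_area, cv=5,
                         scoring="neg_root_mean_squared_error")
print("cross-validated RMSE:", -scores.mean())
```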
56

Machine learning algorithms for inverse kinematics approximation of robot manipulators: a comparative study

Davyd Bandeira de Melo 06 July 2015
This dissertation reports the results of a comprehensive comparative study involving seven machine learning algorithms applied to the task of approximating the inverse kinematic model of three robotic arms (planar, PUMA 560 and Motoman HP6). The evaluated algorithms are: Multilayer Perceptron (MLP), Extreme Learning Machine (ELM), Least Squares Support Vector Regression (LS-SVR), Minimal Learning Machine (MLM), Gaussian Processes (GP), Adaptive Network-Based Fuzzy Inference System (ANFIS) and Local Linear Mapping (LLM). Each algorithm is evaluated with respect to its accuracy in estimating the joint angles given the Cartesian coordinates which comprise end-effector trajectories within the robot workspace. A comprehensive evaluation of the performance of the aforementioned algorithms is carried out based on correlation analysis of the residuals. Finally, hypothesis testing procedures are executed to verify whether there are significant differences in performance among the best algorithms.
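As a self-contained illustration of the task (not of the thesis's exact experimental protocol), the sketch below trains an MLP to approximate the inverse kinematics of a two-link planar arm, generating training pairs from the analytic forward kinematics. Link lengths, network size, and the sampled joint range are assumptions.

```python
# Illustrative sketch: MLP approximation of the inverse kinematics of a
# two-link planar arm. Training pairs come from the analytic forward
# kinematics; link lengths and network settings are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
l1, l2 = 1.0, 0.8                                     # link lengths (assumed)

theta = rng.uniform(0.0, np.pi / 2, size=(5000, 2))   # sampled joint angles
x = l1 * np.cos(theta[:, 0]) + l2 * np.cos(theta[:, 0] + theta[:, 1])
y = l1 * np.sin(theta[:, 0]) + l2 * np.sin(theta[:, 0] + theta[:, 1])
cartesian = np.column_stack([x, y])

# Map end-effector coordinates back to joint angles.
ik = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
ik.fit(cartesian, theta)

test_angles = np.array([[0.3, 0.9]])
test_xy = np.array([[l1 * np.cos(0.3) + l2 * np.cos(1.2),
                     l1 * np.sin(0.3) + l2 * np.sin(1.2)]])
print("true joints:", test_angles, "predicted:", ik.predict(test_xy))
```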
57

Comparison of different models for forecasting of Czech electricity market

Kunc, Vladimír January 2017
There is a demand for decision support tools that can model electricity markets and forecast the hourly electricity price. Many different approaches, such as artificial neural networks or support vector regression, are used in the literature. This thesis provides a comparison of several different estimators under one setting, using available data from the Czech electricity market. The resulting comparison of over 5000 different estimators led to a selection of several best performing models. The role of historical weather data (temperature, dew point and humidity) is also assessed within the comparison, and it was found that while the inclusion of weather data might lead to overfitting, it is beneficial under the right circumstances. The best performing approach was Lasso regression estimated using a modified LARS.
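A minimal sketch of the winning model family named above — Lasso regression fitted with the LARS algorithm — using scikit-learn's LassoLars on hypothetical hourly-price features. The lagged-price and weather features and the regularization strength are assumptions, not the thesis's setup.

```python
# Illustrative sketch: Lasso regression fitted via LARS (scikit-learn LassoLars)
# for hourly electricity-price forecasting. Features and alpha are assumptions.
import numpy as np
from sklearn.linear_model import LassoLars
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(3)
n = 2000
price_lag24 = rng.normal(40, 10, n)      # price 24 hours earlier (made up)
price_lag168 = rng.normal(40, 10, n)     # price one week earlier (made up)
temperature = rng.normal(10, 8, n)
hour = rng.integers(0, 24, n)

X = np.column_stack([price_lag24, price_lag168, temperature,
                     np.sin(2 * np.pi * hour / 24)])
price = 0.6 * price_lag24 + 0.2 * price_lag168 - 0.3 * temperature + rng.normal(0, 3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, price, test_size=0.25, random_state=0)
model = LassoLars(alpha=0.01).fit(X_tr, y_tr)
print("coefficients:", model.coef_)
print("test MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```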
58

Forecasting hourly electricity consumption for sets of households using machine learning algorithms

Linton, Thomas January 2015
To address inefficiency, waste, and the negative consequences of electricity generation, companies and government entities are looking to behavioural change among residential consumers. To drive behavioural change, consumers need better feedback about their electricity consumption. A monthly or quarterly bill provides the consumer with almost no useful information about the relationship between their behaviours and their electricity consumption. Smart meters are now widely dispersed in developed countries and are capable of providing electricity consumption readings at an hourly resolution, but this data is mostly used as a basis for billing and not as a tool to assist the consumer in reducing their consumption. One component required to deliver innovative feedback mechanisms is the capability to forecast hourly electricity consumption at the household scale. The work presented in this thesis is an evaluation of the effectiveness of a selection of kernel-based machine learning methods at forecasting the hourly aggregate electricity consumption for different sized sets of households. The work demonstrates that k-Nearest Neighbour Regression and Gaussian Process Regression are the most accurate methods within the constraints of the problem considered. In addition to accuracy, the advantages and disadvantages of each machine learning method are evaluated, and a simple comparison of each algorithm's computational performance is made.
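A minimal sketch comparing the two methods the thesis found most accurate — k-nearest-neighbour regression and Gaussian process regression — on an assumed hourly aggregate-consumption series. The synthetic data, lag features, and kernel choice are illustrative assumptions.

```python
# Illustrative sketch: k-NN regression vs Gaussian process regression for
# hourly aggregate household consumption. Data, lags and kernel are assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(11)
hours = np.arange(24 * 60)                               # 60 days of hourly data
load = 50 + 15 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

lags = 24
X = np.array([load[i:i + lags] for i in range(load.size - lags)])
y = load[lags:]
split = int(0.8 * len(y))

models = {
    "k-NN": KNeighborsRegressor(n_neighbors=10),
    "GP": GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(1.0),
                                   normalize_y=True),
}
for name, model in models.items():
    model.fit(X[:split], y[:split])
    print(name, "MAE:", mean_absolute_error(y[split:], model.predict(X[split:])))
```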
59

Machine learning and statistical analysis in fuel consumption prediction for heavy vehicles

Almér, Henrik January 2015
I investigate how to use machine learning to predict fuel consumption in heavy vehicles. I examine data from several different sources describing road, vehicle, driver and weather characteristics, and I fit a regression to fuel consumption measured in liters per distance. The thesis is done for Scania and uses data sources available to Scania. I evaluate which machine learning methods are most successful, how data collection frequency affects the prediction, and which features are most influential for fuel consumption. I find that a lower collection frequency of 10 minutes is preferable to a higher collection frequency of 1 minute. I also find that the evaluated models are comparable in their performance and that the most important features for fuel consumption are related to the road slope, vehicle speed and vehicle weight.
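The abstract does not name the specific models compared, so the following is purely an assumed illustration of how feature influence on fuel consumption can be quantified: a random forest fit to synthetic slope, speed, and weight features, followed by permutation importance.

```python
# Assumed illustration (models not named in the abstract): quantify feature
# influence on fuel consumption with a random forest + permutation importance.
# The synthetic slope/speed/weight data are made up.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(5)
n = 3000
slope = rng.normal(0.0, 2.0, n)        # road slope, percent
speed = rng.uniform(60, 90, n)         # km/h
weight = rng.uniform(20, 60, n)        # tonnes
X = np.column_stack([slope, speed, weight])
fuel = 20 + 3.0 * slope + 0.15 * speed + 0.4 * weight + rng.normal(0, 1.5, n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, fuel)
result = permutation_importance(model, X, fuel, n_repeats=10, random_state=0)
for name, imp in zip(["slope", "speed", "weight"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```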
60

Sales Forecasting by Assembly of Multiple Machine Learning Methods : A stacking approach to supervised machine learning

Falk, Anton, Holmgren, Daniel January 2021
Today, digitalization is a key factor for businesses to enhance growth and gain advantages and insight into their operations. Digitalization plays a key role both in planning operations and in understanding customers, and companies are spending more and more resources in this field to gain critical insights and enhance growth. The fast-food industry is no exception, where restaurants need to be highly flexible and agile in their work. With this comes an immense demand for knowledge and insights to help restaurants plan their daily operations, and a great need for organizations to continuously integrate new technological solutions into their existing processes. Well-implemented machine learning solutions, in combination with feature engineering, are likely to bring value to existing processes. Sales forecasting, the main field of study in this thesis, plays a vital role in planning a fast-food restaurant's operations, both for budgeting and for staffing. The term fast food speaks for itself: with it comes a commitment to provide high-quality food and rapid service to customers. Understaffing risks compromising either the quality of the food or the service, while overstaffing leads to low overall productivity. Generating highly reliable sales forecasts is thus vital to maximizing profits and minimizing operational risk. SARIMA, XGBoost and Random Forest were evaluated on training data consisting of sales numbers, business hours and categorical variables describing date and month. These models served as base learners whose sales predictions on a specific dataset were used as training data for a Support Vector Regression (SVR) meta-model. This stacking approach shows satisfactory results, with a significant gain in prediction accuracy for all investigated restaurants on a 6-week aggregated timeline compared to the existing solution.
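A minimal sketch of the stacking arrangement described above: base learners produce out-of-fold predictions that become training features for an SVR meta-model. Gradient boosting stands in for XGBoost to keep the sketch dependency-free, SARIMA is omitted, and the synthetic daily-sales data and hyper-parameters are assumptions.

```python
# Illustrative sketch of stacking: tree-based base learners whose out-of-fold
# predictions train an SVR meta-model. GradientBoosting stands in for XGBoost;
# SARIMA is omitted; the synthetic sales data are made up.
import numpy as np
from sklearn.ensemble import (RandomForestRegressor, GradientBoostingRegressor,
                              StackingRegressor)
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(2021)
n_days = 730
day_of_week = np.arange(n_days) % 7
month = (np.arange(n_days) // 30) % 12
opening_hours = np.where(day_of_week < 5, 14, 16)
X = np.column_stack([day_of_week, month, opening_hours])
sales = 3000 + 500 * (day_of_week >= 5) + 80 * month + rng.normal(0, 200, n_days)

X_tr, X_te, y_tr, y_te = train_test_split(X, sales, test_size=0.2, shuffle=False)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
                ("gb", GradientBoostingRegressor(random_state=0))],
    final_estimator=SVR(C=100.0),
    cv=5,
)
stack.fit(X_tr, y_tr)
print("MAPE:", mean_absolute_percentage_error(y_te, stack.predict(X_te)))
```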
