691

Time series Forecast of Call volume in Call Centre using Statistical and Machine Learning Methods

Baldon, Nicoló January 2019 (has links)
A time series is a collection of points gathered at regular intervals. Time series analysis explores the correlations over time and tries to model them in terms of trend and seasonality. One of the most relevant tasks in time series analysis is forecasting future values, which is considered fundamental in many real-world scenarios. Nowadays, many companies forecast using hand-written models or naive statistical models. Call centers are the front end of an organization, managing the relationship with its customers. A key challenge for call centers remains the call load forecast and the optimization of the schedule. Call load indicates the number of calls a call center receives. The call load forecast is mostly exploited to schedule the staff: call centers are interested in the short-term forecast to handle the unforeseen and to optimize the staff schedule, and in the long-term forecast to hire staff or assign it to other tasks. Machine learning has been applied to several fields with excellent results, and recently time series forecasting problems have gained high interest thanks to a new recurrent network, the Long Short-Term Memory. This thesis explores the capabilities of machine learning in modeling and forecasting call load time series characterized by strong seasonality, at both the daily and the hourly scale. We compare a Seasonal Artificial Neural Network (SANN) and a Long Short-Term Memory (LSTM) model with a Seasonal Autoregressive Integrated Moving Average (SARIMA) model, one of the statistical methods most commonly utilized by call centers. The primary metric used to evaluate the results is the Normalized Mean Squared Error (NMSE); the secondary is the Symmetric Mean Absolute Percentage Error (SMAPE), utilized to calculate the accuracy of the models. We carried out our experiments on three different datasets provided by Teleopti. Experimental results have proven SARIMA to be more accurate in forecasting at the daily scale across the three datasets.
It performs better than the Seasonal ANN and the LSTM with a limited amount of data points. At the hourly scale, the Seasonal ANN and the LSTM outperform SARIMA, showing robustness across a forecasting horizon of 160 points. Finally, SARIMA has shown no correlation between the quality of the model and the number of data points, while both the SANN and the LSTM improve with the number of samples.
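The two evaluation metrics named in the abstract can be sketched as follows. This is a minimal illustration, not code from the thesis; in particular, normalizing the MSE by the variance of the actual series is one common convention for NMSE, assumed here rather than taken from the thesis.

```python
def smape(actual, forecast):
    # Symmetric Mean Absolute Percentage Error, in percent
    n = len(actual)
    return 100.0 / n * sum(
        abs(f - a) / ((abs(a) + abs(f)) / 2.0)
        for a, f in zip(actual, forecast)
    )

def nmse(actual, forecast):
    # Mean squared error normalized by the variance of the actuals
    # (one common normalization; the thesis may use another)
    n = len(actual)
    mean_a = sum(actual) / n
    var_a = sum((a - mean_a) ** 2 for a in actual) / n
    mse = sum((a - f) ** 2 for a, f in zip(actual, forecast)) / n
    return mse / var_a
```

Both metrics are scale-free, which is what makes them suitable for comparing models across the three datasets.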
692

A Study Of Equatorial Ionospheric Variability Using Signal Processing Techniques

Wang, Xiaoni 01 January 2007 (has links)
The dependence of the equatorial ionosphere on solar irradiances and geomagnetic activity is studied in this dissertation using signal processing techniques. Statistical time series, digital signal processing, and wavelet methods are applied to study the ionospheric variations. The ionospheric data used are the Total Electron Content (TEC) and the critical frequency of the F2 layer (foF2). Solar irradiance data come from recent satellites: the Student Nitric Oxide Explorer (SNOE) satellite and the Thermosphere Ionosphere Mesosphere Energetics Dynamics (TIMED) satellite. The Disturbance Storm-Time (Dst) index is used as a proxy for geomagnetic activity in the equatorial region. The results are summarized as follows. (1) For short-term variations (≤ 27 days), the previous three days' solar irradiances have a significant correlation with the present day's TEC, and may contribute 18% of the total variation in the TEC. The 3-day delay between solar irradiances and TEC suggests the effects of neutral densities on the ionosphere. The correlations between solar irradiances and TEC are significantly higher than those using the F10.7 flux, a conventional proxy for the short-wavelength band of solar irradiances. (2) For variations ≤ 27 days, solar soft X-rays show similar or higher correlations with the ionospheric electron densities than the Extreme Ultraviolet (EUV). The correlations between solar irradiances and foF2 decrease from morning (0.5) to afternoon (0.1). (3) Geomagnetic activity plays an important role in the ionosphere for short-term variations ≤ 10 days. The average correlation between TEC and Dst is 0.4 at the 2-3, 3-5, 5-9, and 9-11 day scales, which is higher than that between foF2 and Dst. The correlations between TEC and Dst increase from morning to afternoon. Moderate/quiet geomagnetic activity plays a distinct role in these short-term variations of the ionosphere (~0.3 correlation).
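The 3-day delay reported between solar irradiance and TEC is the kind of result a lagged cross-correlation reveals. The sketch below is a generic illustration of that technique, not the dissertation's actual analysis code; the function names and the toy data are hypothetical.

```python
def pearson(x, y):
    # Pearson correlation coefficient between two equal-length series
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sx * sy)

def lagged_correlation(driver, response, lag):
    # Correlate the driver at time t with the response at time t + lag;
    # scanning over lags locates the delay with maximum correlation
    if lag > 0:
        return pearson(driver[:-lag], response[lag:])
    return pearson(driver, response)
```

Scanning `lag` from 0 upward and picking the maximum is how a delayed driver-response relationship, like irradiance leading TEC by three days, would show up.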
693

Addressing nonlinear systems with information-theoretical techniques

Castelluzzo, Michele 07 July 2023 (has links)
The study of experimental recordings of dynamical systems often consists in the analysis of signals produced by the system. Time series analysis comprises a wide range of methodologies ultimately aiming at characterizing the signals and, eventually, gaining insight into the underlying processes that govern the evolution of the system. A standard way to tackle this issue is spectrum analysis, which uses Fourier or Laplace transforms to convert time-domain data into a more useful frequency space. These analytical methods make it possible to highlight periodic patterns in the signal and to reveal essential characteristics of linear systems. Most experimental signals, however, exhibit strange and apparently unpredictable behavior which requires more sophisticated analytical tools in order to gain insight into the nature of the underlying processes generating those signals. This is the case when nonlinearity enters the dynamics of a system. Nonlinearity gives rise to unexpected and fascinating behavior, among which is the emergence of deterministic chaos. In the last decades, chaos theory has become a thriving field of research for its potential to explain complex and seemingly inexplicable natural phenomena. The peculiarity of chaotic systems is that, despite being governed by deterministic principles, their evolution shows unpredictable behavior and a lack of regularity. These characteristics make standard techniques, like spectrum analysis, ineffective when trying to study such systems. Furthermore, the irregular behavior gives the appearance of these signals being governed by stochastic processes, even more so when dealing with experimental signals that are inevitably affected by noise. Nonlinear time series analysis comprises a set of methods which aim at overcoming the strange and irregular evolution of these systems, by measuring characteristic invariant quantities that describe the nature of the underlying dynamics.
Among those quantities, the most notable are possibly the Lyapunov exponents, which quantify the unpredictability of the system, and measures of dimension, like the correlation dimension, which unravel the peculiar geometry of a chaotic system's state space. These methods are ultimately analytical techniques, which can often be exactly estimated in the case of simulated systems, where the differential equations governing the system's evolution are known, but can nonetheless prove difficult or even impossible to compute on experimental recordings. A different approach to signal analysis is provided by information theory. Despite being initially developed in the context of communication theory, by the seminal work of Claude Shannon in 1948, information theory has since become a multidisciplinary field, finding applications in biology and neuroscience, as well as in the social sciences and economics. From the physical point of view, the most phenomenal contribution of Shannon's work was the discovery that entropy is a measure of information, and that computing the entropy of a sequence, or a signal, can answer the question of how much information is contained in the sequence. Alternatively, considering the source, i.e. the system that generates the sequence, entropy gives an estimate of how much information the source is able to produce. Information theory comprises a set of techniques which can be applied to study, among other things, dynamical systems, offering a complementary framework to the standard signal analysis techniques. The concept of entropy, however, was not new in physics: it had first been defined in the deeply physical context of heat exchange in 19th-century thermodynamics. Half a century later, in the context of statistical mechanics, Boltzmann revealed the probabilistic nature of entropy, expressing it in terms of the statistical properties of the particles' motion in a thermodynamic system.
A first link between entropy and the dynamical evolution of a system was thus made. In the following years, building on Shannon's work, the concept of entropy was further developed through the contributions of, to cite only a few, von Neumann and Kolmogorov, and came to be used as a tool in computer science and complexity theory. It is in particular in Kolmogorov's work that information theory and entropy are revisited from an algorithmic perspective: given an input sequence and a universal Turing machine, Kolmogorov found that the length of the shortest set of instructions, i.e. the program, that enables the machine to compute the input sequence is related to the sequence's entropy. This definition of the complexity of a sequence already hints at the difference between random and deterministic signals: a truly random sequence requires as many instructions as the size of the sequence itself, since there is no option other than programming the machine to copy the sequence point by point. On the other hand, a sequence generated by a deterministic system simply requires knowing the rules governing its evolution, for example the equations of motion in the case of a dynamical system. It is therefore through the work of Kolmogorov, and independently of Sinai, that entropy came to be directly applied to the study of dynamical systems and, in particular, deterministic chaos. The so-called Kolmogorov-Sinai entropy is in fact a well-established measure of how complex and unpredictable a dynamical system can be, based on the analysis of trajectories in its state space. In the last decades, the use of information theory in signal analysis has contributed to the elaboration of many entropy-based measures, such as sample entropy, transfer entropy, mutual information, and permutation entropy, among others.
These quantities make it possible not only to characterize single dynamical systems, but also to highlight the correlations between systems and even more complex interactions like synchronization and chaos transfer. The wide spectrum of applications of these methods, as well as the need for theoretical studies to provide them a sound mathematical background, make information theory a still-thriving topic of research. In this thesis, I approach the use of information theory on dynamical systems starting from fundamental issues, such as estimating the uncertainty of Shannon entropy measures on a sequence of data, in the case of an underlying memoryless stochastic process. This result, besides giving insight into sensitive and still-unsolved aspects of using entropy-based measures, provides a relation between the maximum uncertainty of Shannon entropy estimations and the size of the available sequences, thus serving as a practical rule for experiment design. Furthermore, I investigate the relation between entropy and some characteristic quantities of nonlinear time series analysis, namely Lyapunov exponents. Some examples of this analysis on recordings of a nonlinear chaotic system are also provided. Finally, I discuss other entropy-based measures, among them mutual information, and how they compare to analytical techniques aimed at characterizing nonlinear correlations between experimental recordings. In particular, the complementarity between information-theoretical tools and analytical ones is shown on experimental data from the field of neuroscience, namely magnetoencephalography and electroencephalography recordings, as well as meteorological data.
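The plug-in (maximum-likelihood) estimate of Shannon entropy from a finite sequence, the basic quantity whose uncertainty the thesis studies, can be sketched as follows. This is a generic textbook estimator, not the thesis's own code.

```python
from collections import Counter
from math import log2

def shannon_entropy(sequence):
    # Plug-in estimate of Shannon entropy in bits:
    # H = -sum over symbols of p * log2(p), with p taken
    # as the empirical frequency of each symbol
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * log2(c / n) for c in counts.values())
```

On short sequences this estimator is biased low, which is precisely why quantifying its uncertainty as a function of sequence length matters for experiment design.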
694

The Patterns and Determinants of Roundwood Exports from United States Pacific Northwest

Ban, Bibek 03 May 2019 (has links)
The Forest Resource Conservation and Shortage Relief Act of 1990 was the first federal attempt to impose a blanket restriction on the export of roundwood, to conserve existing forest cover and to generate economic benefits from exporting processed wood. This study estimates export demand equations for total exports from the United States Pacific Northwest, for major species and destination countries, using Johansen multivariate time series analysis. The cointegration rank is identified using the Johansen cointegration test incorporating structural breaks, and a normalization restriction is imposed to predict the demand function under the framework of a vector error correction model. All the variables under study are statistically significant with the expected signs in the long-run demand estimates. Roundwood export restriction policies are found to have impacted the export demand equation negatively. The study helps in understanding the impact of log export restriction policies along with other economic variables, and assists in future policy formulation.
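The error-correction mechanism at the heart of a vector error correction model can be illustrated in one line: the change in a variable responds to last period's deviation from the long-run (cointegrating) relation. This is a stylized single-equation sketch of the idea, with hypothetical coefficients, not the study's estimated bivariate system.

```python
def ecm_step(y_prev, x_prev, alpha, beta):
    # One error-correction step: the long-run relation is y = beta * x,
    # and the change in y is alpha times last period's disequilibrium.
    # With alpha < 0, y is pulled back toward the long-run relation.
    disequilibrium = y_prev - beta * x_prev
    return alpha * disequilibrium
```

For example, with a long-run relation y = x, an adjustment speed alpha = -0.5 closes half of a disequilibrium each period, which is how cointegrated export demand reverts toward its long-run determinants.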
695

Modeling Organic Installs in a Free-to-Play Game

Prudhomme, Maxime January 2022 (has links)
The Free-To-Play industry relies on a huge inflow of new players that might result in future gross bookings. Consequently, acquiring new players organically is crucial to ensure a game's health, especially as they have no direct associated acquisition cost. In addition, forecasting helps business planning, as future gross bookings result from those new installs. This thesis investigates methods such as linear regression, Ridge and Lasso regularization, time-series analysis, and Prophet to forecast the inflow of organic installs and to understand the factors impacting it. Using data from 3 games on two platforms and in 15 countries, it investigates the differences in behavior observed across the segments. The thesis first focuses on a specific segment by modeling the inflow of organic installs for game number 17 on iOS in the United States of America. On this segment, the best model is a Lasso model using, among others, a Prophet model as a variable. However, generalization to all segments is difficult. On average, exponential decay over time is the best way to forecast the future inflow of organic installs, as it presents the most consistent performance over all segments.
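An exponential-decay forecast like the one the thesis settles on can be fit by ordinary least squares on the log of the installs. The sketch below is a minimal illustration of that technique, with hypothetical function names and synthetic data, not the thesis's pipeline.

```python
from math import exp, log

def fit_exponential_decay(installs):
    # Fit installs_t ≈ a * exp(-b * t) by linear regression
    # of log(installs) on t (requires strictly positive values)
    n = len(installs)
    t = list(range(n))
    logs = [log(v) for v in installs]
    mt, ml = sum(t) / n, sum(logs) / n
    slope = (
        sum((ti - mt) * (li - ml) for ti, li in zip(t, logs))
        / sum((ti - mt) ** 2 for ti in t)
    )
    intercept = ml - slope * mt
    return exp(intercept), -slope  # a, b

def forecast_decay(a, b, t):
    # Extrapolate the fitted curve to a future time step t
    return a * exp(-b * t)
```

The log transform turns the nonlinear fit into a linear one, which is part of why this simple baseline generalizes consistently across segments.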
696

Enhancing Business Support Systems through Data Science and Machine Learning : A study on possible applications within BSS

Castello, Jacopo January 2021 (has links)
The support phase of companies, like all business functional areas and components, has gone through a heavy and rapid digitalization which has unlocked the availability of an unprecedented amount of data. Unlike other relevant business areas and components, the support phase seems to have experienced fewer improvements attributable to data science and machine learning. By focusing on two well-known problems of these fields, time series analysis and regression analysis, this project aims at understanding which techniques are applicable within the support phase and how they can improve the effectiveness and proactiveness of this area. The goal of this project is to apply them to improve the handling of support tickets, the digital entities used to track issues and requests within support systems. Through time series analysis, we aim at forecasting the volume of tickets to be expected in a near-future time frame. Using regression analysis, we intend to estimate the resolution time of a newly submitted ticket. The results produced by the two tasks were satisfactory. On one hand, the time series task produced accurate results, and the models could be directly employed to bring added value to Elvenite's support team. On the other hand, while the regression analysis results were not as good, they nonetheless proved that the task's aim is achievable through improvements to both the data used and the models applied. Finally, both tasks successfully showcased how to investigate and evaluate the application of such techniques within the support phase of a business.
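A common baseline for a ticket-volume forecast with weekly seasonality is the seasonal-naive method: repeat the last observed season forward. This is a generic baseline sketch, offered as an assumption about a reasonable starting point, not the models the project actually evaluated.

```python
def seasonal_naive_forecast(history, season_length, horizon):
    # Forecast by repeating the most recent full season;
    # e.g. season_length=7 for daily ticket counts with a weekly cycle
    last_season = history[-season_length:]
    return [last_season[i % season_length] for i in range(horizon)]
```

Any candidate model for the ticket-volume task should at minimum beat this baseline, which is why it is a useful yardstick when evaluating such techniques in a support system.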
697

A Comparative Study : Time-Series Analysis Methods for Predicting COVID-19 Case Trend

Xu, Chenhui January 2021 (has links)
Since 2019, COVID-19, a new acute respiratory disease, has struck the whole world, causing millions of deaths and threatening the economy, politics, and civilization. An accurate prediction of the future spread of COVID-19 therefore becomes crucial. In this comparative study, four different time-series analysis models, namely the ARIMA model, the Prophet model, the Long Short-Term Memory (LSTM) model, and the Transformer model, are investigated to determine which has the best performance when predicting the future case trends of COVID-19 in six countries. After obtaining the publicly available COVID-19 case data from the Johns Hopkins University Center for Systems Science and Engineering database, we conduct repeated experiments which exploit the data to predict future trends with all models. The performance is then evaluated with the mean squared error (MSE) and mean absolute error (MAE) metrics. The results show that overall the LSTM model has the best performance for all countries, achieving extremely low MSE and MAE. The Transformer model has the second-best performance, with highly satisfactory results in some countries, and the other models perform more poorly. This project highlights the high accuracy of the LSTM model, which can be used to predict the spread of COVID-19 so that countries can be better prepared and aware when controlling the spread.
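The model comparison the study performs reduces to computing MSE and MAE per model and ranking. The sketch below illustrates that evaluation step only; the model names in the test are hypothetical, and the study's actual forecasts are of course produced by the four fitted models.

```python
def mse(actual, pred):
    # Mean squared error over a forecast horizon
    return sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual)

def mae(actual, pred):
    # Mean absolute error over a forecast horizon
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def rank_models(actual, predictions):
    # predictions: dict mapping model name -> forecast list;
    # returns model names sorted from best to worst by MSE
    return sorted(predictions, key=lambda name: mse(actual, predictions[name]))
```

Ranking by MSE penalizes large misses more heavily than MAE does, which is worth keeping in mind when the two metrics disagree across countries.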
698

Electrochemical studies of external forcing of periodic oscillating systems and fabrication of coupled microelectrode array sensors

Clark, David 01 May 2020 (has links)
This dissertation describes the electrochemical behavior of nickel and iron, studied in different acid solutions via linear sweep voltammetry, cyclic voltammetry, and potentiostatic measurements over a range of temperatures at specific potential ranges. The presented work displays novel experiments in which a nickel electrode was heated locally with an inductive heating system, and a platinum (Pt) electrode was used to change the proton concentration at the iron and nickel electrode surfaces to control the periodic oscillations (frequency and amplitude) produced and to gain a greater understanding of the systems (kinetics), oscillatory processes, and corrosion processes. Temperature pulse voltammetry, linear sweep voltammetry, and cyclic voltammetry were used for temperature calibration at different heating conditions. Several other metal systems (bismuth, lead, zinc, and silver) also produce periodic oscillations as corrosion occurs; however, creating these with pure metal electrodes is very expensive. In this work, metal systems were created via electrodeposition by using inexpensive, efficient, coupled microelectrode array sensors (CMASs) as a substrate. CMASs are integrated devices with multiple electrodes that are connected externally in a circuit in which all of the electrodes have the same potential applied or current passing through them. CMASs have been used for many years to study different forms of corrosion (crevice corrosion, pitting corrosion, intergranular corrosion, and galvanic corrosion), and they are beneficial because they can simulate single electrodes of the same size. The presented work also demonstrates how to construct CMASs and shows that the unique phenomenon of periodic oscillations can be created and studied by using coated and bare copper CMASs. Furthermore, these systems can be controlled by implementing external forcing with a Pt electrode at the CMAS surface.
The data from the single nickel electrode experiments and the CMAS experiments were analyzed using a nonlinear time-series analysis approach.
699

PV Module Performance Under Real-world Test Conditions - A Data Analytics Approach

Hu, Yang 12 June 2014 (has links)
No description available.
700

Temporal and Spatial Analysis of Water Quality Time Series

Khalil Arya, Farid January 2015 (has links)
No description available.
