About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
811

Integrated Predictive Modeling and Analytics for Crisis Management

Alhamadani, Abdulaziz Abdulrhman 15 May 2024 (has links)
The surge in the application of big data and predictive analytics in fields of crisis management, such as pandemics and epidemics, highlights the vital need for advanced research in these areas, particularly in the wake of the COVID-19 pandemic. Traditional methods, which typically rely on historical data to forecast future trends, fall short in addressing the complex and ever-changing nature of challenges like pandemics and public health crises. This inadequacy is further underscored by the pandemic's significant impact on various sectors, notably healthcare, government, and the hotel industry. Current models often overlook key factors such as static spatial elements, socioeconomic conditions, and the wealth of data available from social media, which are crucial for a comprehensive understanding and effective response to these multifaceted crises. This thesis employs spatial forecasting and predictive analytics to address crisis management in several distinct but interrelated contexts: the COVID-19 pandemic, the opioid crisis, and the impact of the pandemic on the hotel industry. The first part of the study focuses on using big data analytics to explore the relationship between socioeconomic factors and the spread of COVID-19 at the zip code level, aiming to predict high-risk areas for infection. The second part delves into the opioid crisis, utilizing semi-supervised deep learning techniques to monitor and categorize drug-related discussions on Reddit. The third part concentrates on developing spatial forecasting and providing explanations of the rising epidemic of drug overdose fatalities. The fourth part of the study extends to the realm of the hotel industry, aiming to optimize customer experience by analyzing online reviews and employing a localized Large Language Model to generate future customer trends and scenarios. Across these studies, the thesis aims to provide actionable insights and comprehensive solutions for effectively managing these major crises. 
For the first work, most current research in pandemic modeling relies primarily on historical data to predict dynamic trends such as COVID-19. This work makes the following contributions in spatial COVID-19 pandemic forecasting: 1) the development of a unique model solely employing a wide range of socioeconomic indicators to forecast areas most susceptible to COVID-19, using detailed static spatial analysis, 2) identification of the most and least influential socioeconomic variables affecting COVID-19 transmission within communities, 3) construction of a comprehensive dataset that merges state-level COVID-19 statistics with corresponding socioeconomic attributes, organized by zip code. For the second work, we make the following contributions in detecting the drug abuse crisis via social media: 1) enhancing the Dynamic Query Expansion (DQE) algorithm to dynamically detect and extract evolving drug names in Reddit comments, utilizing a list curated from government and healthcare agencies, 2) constructing a textual Graph Convolutional Network combined with word embeddings to achieve fine-grained drug abuse classification in Reddit comments, identifying seven specific drug classes for the first time, 3) conducting extensive experiments to validate the framework, outperforming six baseline models in drug abuse classification and demonstrating effectiveness across multiple types of embeddings. The third study focuses on developing spatial forecasting and providing explanations of the escalating epidemic of drug overdose fatalities. Current research in this field has shown a deficiency in comprehensive explanations of the crisis, spatial analyses, and predictions of high-risk zones for drug overdoses. Addressing these gaps, this study contributes in several key areas: 1) Establishing a framework for spatially forecasting drug overdose fatalities predominantly affecting U.S. 
counties, 2) Proposing solutions for dealing with scarce and heterogeneous data sets, 3) Developing an algorithm that offers clear and actionable insights into the crisis, and 4) Conducting extensive experiments to validate the effectiveness of our proposed framework. In the fourth study, we address the profound impact of the pandemic on the hotel industry, focusing on the optimization of customer experience. Traditional methodologies in this realm have predominantly relied on survey data and limited segments of social media analytics. Those methods are informative but fall short of providing a full picture because they cannot capture diverse perspectives and broader customer feedback. Our study aims to make the following contributions: 1) the development of an integrated platform that distinguishes and extracts positive and negative Memorable Experiences (MEs) from online customer reviews within the hotel industry, 2) the incorporation of an advanced analytical module that performs temporal trend analysis of MEs, utilizing sophisticated data mining algorithms to dissect customer feedback on a monthly and yearly scale, 3) the implementation of an advanced tool that generates prospective and unexplored MEs by utilizing a localized Large Language Model (LLM) with keywords extracted from authentic customer experiences, to aid hotel management in preparing for future customer trends and scenarios. Building on the integrated predictive modeling approaches developed in the earlier parts of this dissertation, the final section explores the significant impacts of the COVID-19 pandemic on the airline industry. The pandemic has precipitated substantial financial losses and operational disruptions, necessitating innovative crisis management strategies within this sector. 
This study introduces a novel analytical framework, EAGLE (Enhancing Airline Groundtruth Labels and Review rating prediction), which utilizes Large Language Models (LLMs) to improve the accuracy and objectivity of customer sentiment analysis in strategic airline route planning. EAGLE leverages LLMs for zero-shot pseudo-labeling and zero-shot text classification to enhance the processing of customer reviews without the biases of manual labeling. This approach streamlines data analysis and refines decision-making, allowing airlines to align route expansions with nuanced customer preferences and sentiments effectively. The comprehensive application of LLMs in this context underscores the potential of predictive analytics to transform traditional crisis management strategies by providing deeper, more actionable insights. / Doctor of Philosophy / In today's digital age, where vast amounts of data are generated every second, understanding and managing crises like pandemics or economic disruptions has become increasingly crucial. This dissertation explores the use of advanced predictive modeling and analytics to manage various crises, significantly enhancing how predictions and responses to these challenges are developed. The first part of the research uses data analysis to identify areas at higher risk during the COVID-19 pandemic, focusing on how different socioeconomic factors can affect virus spread at a local level. This approach moves beyond traditional methods that rely on past data, providing a more dynamic way to forecast and manage public health crises. The study then examines the opioid crisis by analyzing social media platforms like Reddit. Here, a method was developed to automatically detect and categorize discussions about drug abuse. This technique aids in understanding how drug-related conversations evolve online, providing insights that could guide public health responses and policy-making. 
In the hospitality sector, customer reviews were analyzed to improve service quality in hotels. By using advanced data analysis tools, key trends in customer experiences were identified, which can help businesses adapt and refine their services in real-time, enhancing guest satisfaction. Finally, the study extends to the airline industry, where a model was developed that uses customer feedback to improve airline services and route planning. This part of the research shows how sophisticated analytics can help airlines better understand and meet traveler needs, especially during disruptions like the pandemic. Overall, the dissertation provides methods to better manage crises and illustrates the vast potential of predictive analytics in making informed decisions that can significantly mitigate the impacts of future crises. This research is vital for anyone—from government officials to business leaders—looking to harness the power of data for crisis management and decision-making.
812

Development of empirical ozone models for the East Central Florida and Pensacola, Florida airsheds

Chambers, Rachel 01 April 2001 (has links)
No description available.
813

An investigation of a bivariate distribution approach to modeling diameter distributions at two points in time

Knoebel, Bruce R. January 1985 (has links)
A diameter distribution prediction procedure for single-species stands was developed based on the bivariate S<sub>B</sub> distribution model. The approach not only accounted for and described the relationships between initial and future diameters and their distributions, but also assumed future diameter given initial diameter to be a random variable. While this method was the most theoretically correct, comparable procedures based on the definition of growth equations, which assumed future diameter given initial diameter to be a constant, sometimes provided somewhat better results. Both approaches performed as well as, and in some cases better than, the established methods of diameter distribution prediction such as parameter recovery, percentile prediction, and parameter prediction. The approaches based on the growth equations are intuitively and biologically appealing in that the future distribution is determined from an initial distribution and a specified initial-future diameter relationship. In most cases, a linear growth equation proved most appropriate. While this result simplified some procedures, it also implied that the initial and future diameter distributions differed only in location and scale, not in shape. This is a somewhat unrealistic assumption; however, due to the relatively short growth periods and the alterations in stand structure and growth due to the repeated thinnings, the data did not provide evidence against the linear growth equation assumption. The growth equation procedures not only required the initial and future diameter distributions to be of a particular form, but they also restricted the initial-future diameter relationship to be of a particular form. The individual tree model, which required no distributional assumptions or restrictions on the growth equation, proved to be the better approach in terms of predicting future stand tables, as it performed better than all of the distribution-based approaches. 
For the bivariate distribution, direct fit, parameter recovery, parameter prediction, and percentile prediction techniques, implied diameter relationships were defined. Evaluations revealed that these equations were both accurate and precise, indicating that accurate specification of the initial distribution and the initial-future diameter relationship largely determines the future diameter distribution. / Ph. D.
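The (univariate) Johnson S<sub>B</sub> distribution underlying the bivariate model can be fitted to diameter data by maximum likelihood. A minimal sketch with simulated diameters; all parameter values below are illustrative assumptions, not values from the thesis:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical diameter sample (cm); real data would come from stand measurements.
true_dist = stats.johnsonsb(a=0.5, b=1.2, loc=5.0, scale=30.0)
diameters = true_dist.rvs(size=2000, random_state=rng)

# Fit the four-parameter Johnson SB by maximum likelihood.
a, b, loc, scale = stats.johnsonsb.fit(diameters)
fitted = stats.johnsonsb(a, b, loc=loc, scale=scale)

# Compare fitted and empirical quartiles.
for q in (0.25, 0.50, 0.75):
    print(q, round(fitted.ppf(q), 2), round(np.quantile(diameters, q), 2))
```

The bounded support (loc to loc + scale) is what makes the S<sub>B</sub> family attractive for diameter distributions, which cannot fall below zero or exceed a biological maximum.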
814

Natural gas storage level forecasting using temperature data

Sundin, Daniel January 2020 (has links)
Even though the theory of storage is historically a popular framework for explaining commodity futures prices, many authors focus on the oil price link. Past studies have shown increased futures price volatility on Mondays and on days when natural gas storage levels are released, both of which could indicate that storage levels and temperature data are incorporated in the prices. In this thesis, the change in the U.S. natural gas storage level is studied as a function of consumption and production. Consumption and production are further segmented and separately forecasted by modelling inverse problems that are solved by least squares regression using temperature data and time-series analysis. The results indicate that each consumer consumption segment is highly dependent on temperature, with R² values above 90%. However, modelling each segment entirely by time-series analysis proved to be more efficient, owing to a lack of flexibility in the polynomials, the limited number of weather stations used, and seasonal patterns beyond temperature. Although the forecasting models could not beat analysts' consensus estimates, they identify natural gas storage level drivers and can thus be used to incorporate temperature forecasts when estimating futures prices.
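The temperature-dependence of consumption described above can be illustrated with a least-squares fit of daily consumption on heating degree days. This is a minimal sketch with synthetic data; the base temperature, units, and coefficients are illustrative assumptions, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily data: consumption rises as temperature drops below a base.
temp = rng.uniform(-10, 30, size=365)                       # daily mean temperature, °C
hdd = np.maximum(18.0 - temp, 0.0)                          # heating degree days, base 18 °C
consumption = 50 + 3.2 * hdd + rng.normal(0, 5, size=365)   # arbitrary units

# Least-squares fit of consumption on heating degree days.
X = np.column_stack([np.ones_like(hdd), hdd])
coef, *_ = np.linalg.lstsq(X, consumption, rcond=None)

# R² of the fit.
resid = consumption - X @ coef
r2 = 1 - resid.var() / consumption.var()
print(coef, r2)
```

With a strong linear temperature signal, the fitted R² lands in the same >90% range the thesis reports for consumer consumption segments.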
815

Vooruitberamingsmodelle in die telekommunikasie-omgewing [Forecasting models in the telecommunications environment]

Schoeman, Daniel Frederik 06 1900 (has links)
M.Sc. (Statistics)
816

Non-parametric volatility measurements and volatility forecasting models

Du Toit, Cornel 03 1900 (has links)
Assignment (MComm)--Stellenbosch University, 2005. / ENGLISH ABSTRACT: Volatility was originally seen as constant and deterministic, but it was later realised that return series are non-stationary. Owing to this non-stationary nature of returns, there were no reliable ex-post volatility measurements. Subsequently, researchers focussed on ex-ante volatility models. Only then was it realised that before good volatility models can be created, reliable ex-post volatility measurements need to be defined. In this study we examine non-parametric ex-post volatility measurements in order to obtain approximations of the variances of non-stationary return series. A detailed mathematical derivation and discussion of the already developed volatility measurements, in particular the realised volatility and DST measurements, are given. In theory, the higher the sample frequency of returns, the more accurate the measurements. These volatility measurements, however, all have shortcomings: realised volatility fails if the sample frequency becomes too high, owing to microstructure effects, while the DST measurement cannot handle changing instantaneous volatility. In this study we introduce a new volatility measurement, termed microstructure realised volatility, that overcomes these shortcomings. This measurement, as with realised volatility, is based on quadratic variation theory, but the underlying return model is more realistic. / AFRIKAANSE OPSOMMING: Volatiliteit is oorspronklik as konstant en deterministies beskou; dit was eers later dat besef is dat opbrengste nie-stasionêr is. Betroubare volatiliteits metings was nie beskikbaar nie weens die nie-stasionêre aard van opbrengste. Daarom het navorsers gefokus op vooruitskattingvolatiliteits modelle. Dit was eers op hierdie stadium dat navorsers besef het dat die definieering van betroubare volatiliteit metings 'n voorvereiste is vir die skepping van goeie vooruitskattings modelle. 
Nie-parametriese volatiliteits metings word in hierdie studie ondersoek om sodoende benaderings van die variansies van die nie-stasionêre opbrengste reeks te beraam. 'n Gedetaileerde wiskundige afleiding en bespreking van bestaande volatiliteits metings, spesifiek gerealiseerde volatiliteit en DST-metings, word gegee. In teorie sal opbrengste wat meer dikwels waargeneem word tot beter akkuraatheid lei. Bogenoemde volatiliteits metings het egter tekortkominge, aangesien gerealiseerde volatiliteit faal wanneer die waarnemingsfrekwensie te hoog raak, weens mikrostruktuur effekte. Aan die ander kant kan die DST-meting nie veranderlike oombliklike volatiliteit hanteer nie. Ons stel in hierdie studie 'n nuwe volatiliteits meting bekend, naamlik mikro-struktuur gerealiseerde volatiliteit, wat nie hierdie tekortkominge het nie. Net soos met gerealiseerde volatiliteit sal hierdie meting gebaseer wees op kwadratiese variasie teorie, maar die onderliggende opbrengste model is meer realisties.
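Realised volatility, grounded in quadratic variation theory as the abstract notes, is simply the square root of the sum of squared high-frequency returns. A minimal sketch with simulated intraday returns; the session length and volatility level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate one trading day of 1-minute log returns with constant (known)
# volatility, then recover it via realised volatility.
n = 390                          # minutes in a trading session
sigma_daily = 0.02               # true daily volatility
returns = rng.normal(0.0, sigma_daily / np.sqrt(n), size=n)

# Realised variance = sum of squared intraday returns (quadratic variation).
realized_var = np.sum(returns ** 2)
realized_vol = np.sqrt(realized_var)
print(realized_vol)              # close to sigma_daily
```

In real data, pushing the sampling frequency ever higher makes this estimator blow up because of bid-ask bounce and other microstructure noise, which is exactly the shortcoming the thesis's microstructure realised volatility measurement targets.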
818

Mercado preditivo: um método de previsão baseado no conhecimento coletivo / Prediction market: a forecasting method based on the collective knowledge

Ferraz, Ivan Roberto 08 December 2015 (has links)
Mercado Preditivo (MP) é uma ferramenta que utiliza o mecanismo de preço de mercado para agregar informações dispersas em um grande grupo de pessoas, visando à geração de previsões sobre assuntos de interesse. Trata-se de um método de baixo custo, capaz de gerar previsões de forma contínua e que não exige amostras probabilísticas. Há diversas aplicações para esses mercados, sendo que uma das principais é o prognóstico de resultados eleitorais. Este estudo analisou evidências empíricas da eficácia de um Mercado Preditivo no Brasil, criado para fazer previsões sobre os resultados das eleições gerais do ano de 2014, sobre indicadores econômicos e sobre os resultados de jogos do Campeonato Brasileiro de futebol. A pesquisa teve dois grandes objetivos: i) desenvolver e avaliar o desempenho de um MP no contexto brasileiro, comparando suas previsões em relação a métodos alternativos; ii) explicar o que motiva as pessoas a participarem do MP, especialmente quando há pouca ou nenhuma interação entre os participantes e quando as transações são realizadas com uma moeda virtual. O estudo foi viabilizado por meio da criação da Bolsa de Previsões (BPrev), um MP online que funcionou por 61 dias, entre setembro e novembro de 2014, e que esteve aberto à participação de qualquer usuário da Internet no Brasil. Os 147 participantes registrados na BPrev efetuaram um total de 1.612 transações, sendo 760 no tema eleições, 270 em economia e 582 em futebol. Também foram utilizados dois questionários online para coletar dados demográficos e percepções dos usuários. O primeiro foi aplicado aos potenciais participantes antes do lançamento da BPrev (302 respostas válidas) e o segundo foi aplicado apenas aos usuários registrados, após dois meses de experiência de uso da ferramenta (71 respostas válidas). Com relação ao primeiro objetivo, os resultados sugerem que Mercados Preditivos são viáveis no contexto brasileiro. 
No tema eleições, o erro absoluto médio das previsões do MP na véspera do pleito foi de 3,33 pontos percentuais, enquanto o das pesquisas de opinião foi de 3,31. Considerando todo o período em que o MP esteve em operação, o desempenho dos dois métodos também foi parecido (erro absoluto médio de 4,20 pontos percentuais para o MP e de 4,09 para as pesquisas). Constatou-se também que os preços dos contratos não são um simples reflexo dos resultados das pesquisas, o que indica que o mercado é capaz de agregar informações de diferentes fontes. Há potencial para o uso de MPs em eleições brasileiras, principalmente como complemento às metodologias de previsão mais tradicionais. Todavia, algumas limitações da ferramenta e possíveis restrições legais podem dificultar sua adoção. No tema economia, os erros foram ligeiramente maiores do que os obtidos com métodos alternativos. Logo, um MP aberto ao público geral, como foi o caso da BPrev, mostrou-se mais indicado para previsões eleitorais do que para previsões econômicas. Já no tema futebol, as previsões do MP foram melhores do que o critério do acaso, mas não houve diferença significante em relação a outro método de previsão baseado na análise estatística de dados históricos. No que diz respeito ao segundo objetivo, a análise da participação no MP aponta que motivações intrínsecas são mais importantes para explicar o uso do que motivações extrínsecas. Em ordem decrescente de relevância, os principais fatores que influenciam a adoção inicial da ferramenta são: prazer percebido, aprendizado percebido, utilidade percebida, interesse pelo tema das previsões, facilidade de uso percebida, altruísmo percebido e recompensa percebida. Os indivíduos com melhor desempenho no mercado são mais propensos a continuar participando. Isso sugere que, com o passar do tempo, o nível médio de habilidade dos participantes tende a crescer, tornando as previsões do MP cada vez melhores. 
Os resultados também indicam que a prática de incluir questões de entretenimento para incentivar a participação em outros temas é pouco eficaz. Diante de todas as conclusões, o MP revelou-se como potencial técnica de previsão em variados campos de investigação. / Prediction Market (PM) is a tool which uses the market price mechanism to aggregate information scattered in a large group of people, aiming at generating predictions about matters of interest. It is a low cost method, able to generate forecasts continuously and it does not require random samples. There are several applications for these markets and one of the main ones is the prognosis of election outcomes. This study analyzed empirical evidences on the effectiveness of Prediction Markets in Brazil, regarding forecasts about the outcomes of the general elections in the year of 2014, about economic indicators and about the results of the Brazilian Championship soccer games. The research had two main purposes: i) to develop and evaluate the performance of PMs in the Brazilian context, comparing their predictions to the alternative methods; ii) to explain what motivates people's participation in PMs, especially when there is little or no interaction among participants and when the trades are made with a virtual currency (play-money). The study was made feasible by means of the creation of a prediction exchange named Bolsa de Previsões (BPrev), an online marketplace which operated for 61 days, from September to November, 2014, being open to the participation of any Brazilian Internet user. The 147 participants enrolled in BPrev made a total of 1,612 trades, with 760 on the election markets, 270 on economy and 582 on soccer. Two online surveys were also used to collect demographic data and users' perceptions. 
The first one was applied to potential participants before the launch of BPrev (302 valid answers) and the second was applied only to the registered users after two months of experience using the tool (71 valid answers). Regarding the first purpose, the results suggest that Prediction Markets are feasible in the Brazilian context. On the election markets, the mean absolute error of PM predictions on the eve of the elections was 3.33 percentage points, whereas that of the polls was 3.31. Considering the whole period in which BPrev was running, the performance of both methods was also similar (mean absolute error of 4.20 percentage points for the PM and 4.09 for the polls). Contract prices were also found not to be a simple reflection of poll results, indicating that the market is capable of aggregating information from different sources. There is scope for the use of PMs in Brazilian elections, mainly as a complement to more traditional forecasting methodologies. Nevertheless, some tool limitations and legal restrictions may hinder their adoption. On markets about economic indicators, the errors were slightly higher than those obtained by alternative methods. Therefore, a PM open to the general public, as in the case of BPrev, showed itself more suitable for electoral predictions than for economic ones. On soccer markets, PM predictions were better than chance, although there was no significant difference relative to another forecasting method based on the statistical analysis of historical data. As far as the second purpose is concerned, the analysis of people's participation in PMs indicates that intrinsic motivations are more important than extrinsic motivations in explaining use. In descending order of relevance, the principal factors that influenced the tool's initial adoption are: perceived enjoyment, perceived learning, perceived usefulness, interest in the theme of predictions, perceived ease of use, perceived altruism and perceived reward. 
Individuals with better performance in the market are more inclined to continue participating. This suggests that, over time, participants' average skill level tends to increase, making PM forecasts better and better. Results also indicate that the practice of creating entertainment markets to encourage participation in other subjects is ineffective. Taken together, these conclusions show the PM to be a promising forecasting technique in a variety of research fields.
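The mean-absolute-error comparison between market prices and polls used above can be reproduced in a few lines. The vote shares below are made up for illustration; they are not the 2014 results or the BPrev data:

```python
# Hypothetical final forecasts (vote shares, %) for three candidates,
# alongside the actual result; all numbers are illustrative only.
actual = [51.6, 33.5, 14.9]
market = [48.0, 36.0, 16.0]
poll   = [49.0, 35.0, 16.0]

def mae(pred, obs):
    """Mean absolute error in percentage points."""
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(obs)

print(mae(market, actual))  # ≈ 2.4 percentage points
print(mae(poll, actual))    # ≈ 1.73 percentage points
```

Scoring both methods against the same realized outcome, as done here, is what allows the near-tie (3.33 vs 3.31 points) reported in the study to be stated on a common scale.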
819

Using a logistic phenology model with improved degree-day accumulators to forecast emergence of pest grasshoppers

Irvine, Paul Michael January 2011 (has links)
Many organisms, especially animals like insects, which depend on the environment for body heat, have growth stages and life cycles that are highly dependent on temperature. To better understand and model how insect life history events progress, for example the emergence and initial growth of the grasshoppers studied here, we must first understand the relationship between temperature, heat accumulation, and subsequent development. The measure of the integration of heat over time, usually referred to as degree-days, is a widely used method of forecasting that quantifies heat accumulation based on measured ambient temperature. Two popular methods for calculating degree-days are the traditional sinusoidal method and the average method. The average method uses only the average of the daily maximum and minimum temperature, and has the advantage of being very easy to use. However, this simpler method can underestimate the degree-day accumulation occurring in the environment of interest, and thus has a greater potential to reduce the accuracy of forecasts of insect pest emergence. The sinusoidal method was popularized by Allen (1976, [1]) and gives a better approximation to the actual accumulation of degree-days. Both of these degree-day accumulators are independent of typical heating and cooling patterns over the daily cycle. To address possible non-symmetrical effects, it was deemed prudent to construct degree-day accumulators that take into account phenomena like sunrise, sunset, and solar noon. Consideration of these temporal factors eliminated the assumption that heating and cooling during a typical day in the growth season are symmetric. In some tested cases, these newer degree-day integrators are more accurate than the traditional sinusoidal method, and in all tested cases they are more accurate than the average method. 
After developing the newer degree-day accumulators, we chose to investigate the use of a logistic phenology model similar to one used by Onsager and Kemp (1986, [54]) when studying grasshopper development. One reason for studying this model is that it has parameters that are important when considering pest management tactics, such as the degree-day accumulations required for immature stages (instars) to be completed, as well as a parameter related to the variability of the grasshopper population. Onsager and Kemp used a nonlinear regression algorithm to find parameters for the model. I constructed a simplex algorithm and studied its effectiveness in searching for parameters for a multi-stage insect population model. While investigating the simplex algorithm, it was found that the initial parameter values used to construct the simplex played a crucial role in obtaining realistic and biologically meaningful parameters from the nonlinear regression. It was also found that this downhill simplex method can become trapped in any of many local minima, and thus produce extraneous or incorrectly fitted parameter estimates, although Onsager and Kemp did not mention this problem. In tests of my fitting methods, I used daily weather data from Onefour, AB, with a development threshold of 12 °C and a biofix day of April 1st. The method could be applied to larger, more extensive datasets that include grasshopper population data on numbers per stage, by date, linked to degree-day accumulations based on the non-symmetrical method, to determine whether it would offer a significant long-term improvement in forecasting the timing of spring insect pest events. / xii, 106 leaves ; 29 cm
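The two classical accumulators discussed above can be sketched as follows, using the study's 12 °C development threshold. This is a simplified illustration of the average method and the single-sine method of Allen (1976) with a lower threshold only, not the thesis's non-symmetrical integrators:

```python
import math

BASE = 12.0  # development threshold, °C (as in the study)

def dd_average(tmax, tmin, base=BASE):
    """Average method: degree-days from the daily mean temperature."""
    return max((tmax + tmin) / 2.0 - base, 0.0)

def dd_single_sine(tmax, tmin, base=BASE):
    """Single-sine method (Allen 1976), lower threshold only."""
    mean = (tmax + tmin) / 2.0
    if tmin >= base:                  # whole day above threshold
        return mean - base
    if tmax <= base:                  # whole day below threshold
        return 0.0
    amp = (tmax - tmin) / 2.0
    theta = math.asin((base - mean) / amp)
    return ((mean - base) * (math.pi / 2 - theta) + amp * math.cos(theta)) / math.pi

# A day that dips below the threshold: the average method under-accumulates.
print(dd_average(20.0, 6.0))      # 1.0
print(dd_single_sine(20.0, 6.0))  # ≈ 2.75
```

The gap between the two outputs on cool days is exactly the underestimation the abstract attributes to the average method.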
820

An assessment of scale issues related to the configuration of the ACRU model for design flood estimation

Chetty, Kershani. January 2010 (has links)
There is a frequent need among hydrologists and engineers for estimates of design floods for the design of hydraulic structures. The various techniques for estimating these design floods depend largely on the availability of data. The two main approaches to design flood estimation are methods based on the analysis of floods and methods based on rainfall-runoff relationships. Among the methods based on the analysis of floods, regional flood frequency analysis is seen as reliable and robust and is the recommended approach. Design event models are commonly used for design flood estimation in rainfall-runoff based analyses. However, these make several simplifying assumptions which are important in design flood estimation. A continuous simulation approach to design flood estimation has many advantages and overcomes many of the limitations of the design event approach. A major concern with continuous simulation using a hydrological model is the scale at which modelling should take place. According to Martina (2004), the “level” of representation that will preserve the “physical chain” of the hydrological processes, both in terms of scale of representation and level of description of the physical parameters for the modelling process, is a critical question to be addressed. The objectives of this study were to review the literature on different approaches commonly used in South Africa and internationally for design flood estimation and, based on the literature, to assess the potential of a continuous simulation approach to design flood estimation. The objectives of both case studies undertaken in this research were to determine the optimum level of catchment discretisation, the optimum level of soil and land cover information required, and the optimum use of daily rainfall stations for the configuration of the ACRU agrohydrological model when used as a continuous simulation model for design flood estimation. 
The last objective was to compare design flood estimates from flows simulated by the ACRU model with design flood estimates obtained from observed data. Results obtained for selected quaternary catchments in the Thukela Catchment and the Lions River catchment indicated that modelling at the level of hydrological response units (HRUs), using area-weighted soils information and more than one driver rainfall station where possible, produced the most realistic results when comparing observed and simulated streamflows. Design flood estimates from simulated flows compared reasonably well with those obtained from observed data, but only for QC59 and QCU20B. / Thesis (M.Sc.)-University of KwaZulu-Natal, Pietermaritzburg, 2010.
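Flood frequency analysis, the recommended flood-analysis approach mentioned above, amounts to fitting an extreme-value distribution to annual maximum flows and reading off return-period quantiles. A minimal sketch with synthetic data; the distribution choice and all parameter values are illustrative assumptions, not from the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic annual maximum flows (m^3/s) for a hypothetical gauged catchment.
true_gev = stats.genextreme(c=-0.1, loc=200.0, scale=60.0)
annual_maxima = true_gev.rvs(size=60, random_state=rng)

# Fit a GEV distribution to the annual maxima by maximum likelihood.
c, loc, scale = stats.genextreme.fit(annual_maxima)
gev = stats.genextreme(c, loc=loc, scale=scale)

# Design flood for a T-year return period: the (1 - 1/T) quantile.
for T in (10, 50, 100):
    print(T, round(gev.ppf(1 - 1 / T), 1))
```

Comparing such observed-data quantiles against quantiles of continuously simulated flows is the kind of check the study performs for QC59 and QCU20B.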
