  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
241

Rank statistics of forecast ensembles

Siegert, Stefan 08 March 2013 (has links) (PDF)
Ensembles are today routinely applied to estimate uncertainty in numerical predictions of complex systems such as the weather. Instead of initializing a single numerical forecast, using only the best guess of the present state as initial conditions, a collection (an ensemble) of forecasts whose members start from slightly different initial conditions is calculated. By varying the initial conditions within their error bars, the sensitivity of the resulting forecasts to these measurement errors can be accounted for. The ensemble approach can also be applied to estimate forecast errors that are due to insufficiently known model parameters by varying these parameters between ensemble members. An important (and difficult) question in ensemble weather forecasting is how well an ensemble of forecasts reproduces the actual forecast uncertainty. A widely used criterion to assess the quality of forecast ensembles is statistical consistency, which demands that the ensemble members and the corresponding measurement (the "verification") behave like random independent draws from the same underlying probability distribution. Since this forecast distribution is generally unknown, such an analysis is nontrivial. An established criterion to assess statistical consistency of a historical archive of scalar ensembles and verifications is uniformity of the verification rank: if the verification falls between the (k-1)-st and k-th largest ensemble member, it is said to have rank k. Statistical consistency implies that the average frequency of occurrence should be the same for each rank. A central result of the present thesis is that, in a statistically consistent K-member ensemble, the (K+1)-dimensional vector of rank probabilities is a random vector that is uniformly distributed on the K-dimensional probability simplex. This behavior is universal for all possible forecast distributions.
It thus provides a nonparametric way to describe forecast ensembles, without making any assumptions about the statistical behavior of the ensemble data. The physical details of the forecast model are eliminated, and the notion of statistical consistency is captured in an elementary way. Two applications of this result to ensemble analysis are presented. Ensemble stratification, the partitioning of an archive of ensemble forecasts into subsets using a discriminating criterion, is considered in the light of the above result. It is shown that certain stratification criteria can make the individual subsets of ensembles appear statistically inconsistent, even though the unstratified ensemble is statistically consistent. This effect is explained by considering statistical fluctuations of rank probabilities. A new hypothesis test is developed to assess statistical consistency of stratified ensembles while taking these potentially misleading stratification effects into account. The distribution of rank probabilities is further used to study the predictability of outliers, which are defined as events where the verification falls outside the range of the ensemble, being either smaller than the smallest, or larger than the largest ensemble member. It is shown that these events are more predictable than by a naive benchmark prediction, which unconditionally issues the average outlier frequency of 2/(K+1) as a forecast. Predictability of outlier events, quantified in terms of probabilistic skill scores and receiver operating characteristics (ROC), is shown to be universal in a hypothetical forecast ensemble. An empirical study shows that in an operational temperature forecast ensemble, outliers are likewise predictable, and that the corresponding predictability measures agree with the analytically calculated ones.
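The rank construction described above is easy to demonstrate numerically. The following is an illustrative Python sketch (not the thesis code): a toy ensemble whose members and verification are drawn from the same distribution is statistically consistent by construction, so the verification rank should be uniform over the K+1 ranks and the outlier rate should approach 2/(K+1).

```python
import numpy as np

def verification_rank(ensemble, verification):
    """Rank of the verification among K ensemble members: rank 1 if it
    is below every member, rank K+1 if it is above every member."""
    return 1 + int(np.sum(np.asarray(ensemble) < verification))

K = 9
rng = np.random.default_rng(0)
# Members and verification drawn from the same distribution -> the
# ensemble is statistically consistent by construction.
ranks = [verification_rank(rng.normal(size=K), rng.normal())
         for _ in range(20000)]
counts = np.bincount(ranks, minlength=K + 2)[1:]   # counts of ranks 1..K+1

# Each rank should occur with frequency 1/(K+1); outliers (rank 1 or
# rank K+1, verification outside the ensemble range) with 2/(K+1).
outlier_rate = (counts[0] + counts[-1]) / len(ranks)
print(f"expected outlier rate {2 / (K + 1):.3f}, observed {outlier_rate:.3f}")
```

For K = 9 the expected outlier rate is 0.2, and the observed rate converges to it as the archive grows.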
242

Correlation between SQUID and fluxgate magnetometer data for geomagnetic storms

Phiri, Temwani-Joshua 03 1900 (has links)
Thesis (MScEng)--Stellenbosch University, 2013. / Geomagnetic storms are primarily driven by the rapid transfer of energy from the solar wind to the magnetosphere. The mechanism of energy transfer involves the merging of the interplanetary magnetic field with the geomagnetic field in a process known as magnetic reconnection. This leads to an influx of energetic, charged particles into the magnetosphere, so that current systems are enhanced. Specifically, an increase in the equatorial ring current leads to a decrease in the surface field. Geomagnetic storms are thus characterized by a strong decline in the horizontal components of the geomagnetic field, lasting from several hours to days. The intensity of a storm is described by the disturbance storm-time (Dst) index, which is essentially a measure of the deviation from the typical quiet-day variation along the equator. Severe storms can lead to the disruption of high frequency (HF) communications as a consequence of a strongly perturbed ionosphere. By the same token, the global positioning system (GPS) can become highly unreliable during magnetically disturbed conditions, yielding distance errors as large as 50 meters. The impacts of geomagnetic activity and other solar-driven processes on technological systems are collectively known as space weather. Magnetic field sensing thus forms an important part of space weather forecasting and is vital to space science research as a means of improving our understanding of solar wind-magnetosphere interactions. This study examines the use of magnetometers built as SQUIDs (Superconducting Quantum Interference Devices) for monitoring the geomagnetic field for space weather forecasting purposes. A basic theory of superconductivity is presented and subsequently the key aspects governing the operation of SQUIDs are discussed. Space weather is also introduced with respect to the various processes on the sun that perturb the magnetosphere and hence the geomagnetic field.
The method of analysis was to Fourier-transform the data using the Wiener-Khintchine theorem. A systematic approach to Fourier analysis is thus presented, demonstrating the superiority of the Wiener-Khintchine theorem in noise reduction. The suitability of SQUID magnetometers for space science research is demonstrated by a comparative study between SQUID and fluxgate datasets for magnetic storms during 2011. Strong correlation was observed between the frequency content of the SQUID and fluxgate signals. This result supports South Africa's SQUID project, currently undertaken as a collaborative effort between SANSA Space Science and the Department of Electrical and Electronic Engineering at Stellenbosch University. This thesis thus lays a foundation for future research involving advanced magnetometry using SQUIDs.
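The Wiener-Khintchine theorem referred to above states that the power spectrum of a signal equals the Fourier transform of its autocorrelation function. The following is a minimal numerical sketch of that identity (illustrative only; the random signal and lengths are arbitrary, not the thesis data):

```python
import numpy as np

# Wiener-Khintchine theorem: the power spectrum of a signal equals the
# Fourier transform of its autocorrelation.  Numerical check on a
# random signal of length n.
rng = np.random.default_rng(0)
x = rng.normal(size=256)
n = len(x)

# Linear autocorrelation at lags -(n-1)..(n-1), then rotated so the
# zero lag comes first (the circular ordering the DFT expects).
acf = np.correlate(x, x, mode="full")
acf = np.concatenate([acf[n - 1:], acf[: n - 1]])

psd_wk = np.fft.fft(acf).real / n                        # FFT of autocorrelation
psd_direct = np.abs(np.fft.fft(x, 2 * n - 1)) ** 2 / n   # direct periodogram

print(np.allclose(psd_wk, psd_direct))  # the two routes agree
```

In practice the autocorrelation route has an advantage for noisy records: truncating or windowing the autocorrelation before transforming smooths the spectral estimate, which is the noise-reduction property the abstract alludes to.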
243

De mutacionibus aeris. Kořeny, tradice a vývoj středověké nauky o předpovídání počasí, včetně recepce v bohemikálních rukopisech / De mutacionibus aeris. The roots, traditions and development of the learned medieval weather forecasting, including the reception in the Czech manuscripts

Kocánová, Barbora January 2014 (has links)
The dissertation examines the roots and development of learned medieval weather forecasting in the context of ancient and medieval sources, and its reception in central Europe, particularly in medieval Bohemia. The work can thus enrich our knowledge of the history of natural sciences in the Middle Ages, of medieval erudition, and of written culture in general. At present, weather forecasting is a subject of meteorology, based on the analysis of air pressure, temperature, air density, and the physical conditions of the Earth's surface. A detailed analysis of these factors was practically infeasible in the past, so weather forecasting was achieved by other methods and premises. Nor would we readily find texts on weather forecasting among manuscript treatises on the origin and nature of meteorological phenomena: these surprisingly contain a minimum of forecasting references. At that time weather forecasting was not a part of meteorology; it was the subject of other treatises appearing in the manuscripts, frequently entitled De pluviis. These were primarily based on traditions different from that of Aristotle. The aim of the dissertation is to discover and bring together the various traditions which formed...
244

Physical parameterisations for a high resolution operational numerical weather prediction model / Paramétrisations physiques pour un modèle opérationnel de prévision météorologique à haute résolution

Gerard, Luc 31 August 2001 (has links)
Operational weather prediction models numerically solve the equations of fluid mechanics by computing the evolution of fields (pressure, temperature, humidity, wind) defined as horizontal averages over the cells of a grid (and at different vertical levels).

Processes at scales smaller than the grid cell nevertheless play an essential role in the transfers and budgets of heat, moisture and momentum. Physical parameterisations aim to evaluate the source terms corresponding to these phenomena, which appear in the equations for the mean fields at the grid points.

When the cell size is reduced in order to represent the evolution of atmospheric phenomena more finely, some of the hypotheses used in these parameterisations lose their validity. The problem arises above all when the cell size drops below about ten kilometres, approaching the size of large convective cloud systems (storm systems, squall lines).

This work is part of the development of the fine-mesh model ARPÈGE ALADIN, used by a dozen countries for short-range forecasting (up to 48 hours).

We first describe the model's physical parameterisations as a whole, followed by a detailed analysis of the current parameterisation of deep convection. We also present our personal contribution to it, concerning the entrainment of horizontal momentum into the convective cloud. We bring out the main weak points or hypotheses requiring large cell sizes, and identify directions for new developments.

We then pursue two of the aspects arising from this discussion: the use of prognostic variables for convective activity, and accounting for the differences between the immediate environment of the cloud and the values of the large-scale fields. This leads us to the design and implementation of a prognostic scheme for deep convection. This scheme should eventually be complemented by a prognostic parameterisation of suspended condensed phases (currently under development by others) and a few other improvements that we propose.

Validation and behaviour tests of the prognostic scheme were carried out with a limited-area model at various resolutions and with a global model. In the latter case, the effect of the new scheme on global budgets is also examined. These experiments shed additional light on the behaviour of the convective scheme and on the problem of the partitioning between the deep-convection scheme and the large-scale precipitation scheme.

The present study thus reviews the current status of the model's various parameterisations and proposes practical solutions for improving the quality of the representation of convective phenomena. Finally, the use of cell sizes smaller than 5 km requires abandoning the hydrostatic hypothesis in the large-scale equations, and we sketch the further refinements of the parameterisation that become possible in that case. / Doctorat en sciences appliquées
246

Climate Change Assessment in Columbia River Basin (CRB) Using Copula Based on Coupling of Temperature and Precipitation

Qin, Yueyue 29 May 2015 (has links)
Multi-scenario downscaled products allow us to better assess the uncertainty of variations in precipitation and temperature in the current and future periods. Joint probability distribution functions (PDFs) of the two climatic variables can help in understanding their interdependence, and thus in assessing the future with greater confidence. In the present study, we used a multi-model, statistically downscaled ensemble of precipitation and temperature variables. The dataset is a multi-model ensemble of 10 Global Climate Models (GCMs) downscaled from the CMIP5 daily dataset using the Bias Correction and Spatial Downscaling (BCSD) technique, generated at Portland State University. The multi-model ensemble PDFs of precipitation and temperature are evaluated for the summer (dry) and winter (wet) periods for 10 sub-basins across the Columbia River Basin (CRB). A copula is then applied to establish the joint distribution of the two variables on the multi-model ensemble data. Results indicate that the joint probabilistic distribution helps remove the limitations of the marginal distributions of the variables in question and leads to better prediction. The joint distribution is then used to estimate future changes in the trends of these variables, along with the probabilities of a given change. The joint distribution trends vary by sub-basin but are consistently positive for both the summer and winter time scales. The dry season generally indicates larger positive changes in precipitation than in temperature (compared to the historical period) across sub-basins, while the wet season suggests the opposite. Probabilities of future changes, as estimated from the joint precipitation and temperature distribution, also vary in degree and form during the dry season, whereas the wet season is rather constant across all sub-basins.
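The coupling step can be illustrated with a Gaussian copula, sketched below on synthetic data. This is only an assumption-laden illustration: the study's copula family, fitted marginals, and data differ, and every variable name and number here is invented. The three steps are transforming each margin to pseudo-uniforms via ranks, estimating a latent normal correlation, and sampling the joint distribution back through the empirical marginal quantiles.

```python
import numpy as np
from statistics import NormalDist

nd = NormalDist()
ppf = np.vectorize(nd.inv_cdf)   # standard normal quantile function
cdf = np.vectorize(nd.cdf)

rng = np.random.default_rng(0)
temp = rng.normal(25.0, 3.0, size=500)      # toy temperature marginal (degC)
precip = rng.gamma(2.0, 10.0, size=500)     # toy precipitation marginal (mm)

# 1. Probability integral transform: ranks -> pseudo-uniform margins.
u_t = (np.argsort(np.argsort(temp)) + 1) / (len(temp) + 1)
u_p = (np.argsort(np.argsort(precip)) + 1) / (len(precip) + 1)

# 2. Normal scores -> the Gaussian copula's correlation parameter.
rho = np.corrcoef(ppf(u_t), ppf(u_p))[0, 1]

# 3. Sample the joint: correlated normals -> uniforms -> empirical
#    marginal quantiles, preserving both margins and the dependence.
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=1000)
temp_sim = np.quantile(temp, cdf(z[:, 0]))
precip_sim = np.quantile(precip, cdf(z[:, 1]))
print(f"fitted copula correlation rho = {rho:.2f}")
```

The appeal of the copula route, as the abstract notes, is exactly this separation: each margin can keep its own distribution (here a gamma for precipitation) while the dependence is modeled independently.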
247

Towards Improving Drought Forecasts Across Different Spatial and Temporal Scales

Madadgar, Shahrbanou 03 January 2014 (has links)
Recent water scarcities across the southwestern U.S., with severe effects on the living environment, motivate the development of new methodologies to achieve reliable drought forecasting on a seasonal scale. Reliable forecasts of hydrologic variables are, in general, a prerequisite for appropriate water resources planning and effective allocation policies. This study aims at developing new techniques with specific probabilistic features to improve the reliability of hydrologic forecasts, particularly drought forecasts. The drought status in the future is determined by certain hydrologic variables that are estimated by hydrologic models of rather simple to complex structure. Since the predictions of hydrologic models are prone to different sources of uncertainty, several techniques have been examined over the past years that generally attempt to combine the predictions of single (or multiple) hydrologic models to generate an ensemble of hydrologic forecasts addressing the inherent uncertainties. However, the imperfect structure of hydrologic models usually leads to a systematic bias in hydrologic predictions that further appears in the forecast ensembles. This study proposes a post-processing method that is applied to the raw forecast of hydrologic variables and can develop the entire distribution of the forecast around the initial single-value prediction. To establish the probability density function (PDF) of the forecast, a group of multivariate distribution functions, the so-called copula functions, are incorporated in the post-processing procedure. The performance of the new post-processing technique is tested on 2500 hypothetical case studies and on the streamflow forecast of the Sprague River Basin in southern Oregon.
Judged by several deterministic and probabilistic verification measures, quantile mapping, a traditional post-processing technique, does not generate forecasts of the same quality as the copula-based method. The post-processing technique is then expanded to exclusively study drought forecasts across different spatial and temporal scales. In the proposed drought forecasting model, the drought status in the future is evaluated based on the drought status of past seasons, while the correlations between the drought variables of consecutive seasons are preserved by copula functions. The main benefit of the new forecast model is its probabilistic treatment of future droughts. It develops the conditional probability of drought status in the forecast season and generates the PDF and cumulative distribution function (CDF) of future droughts given the past status. The conditional PDF returns the most probable drought in the future along with an assessment of the uncertainty around that value. Using the conditional CDF for the forecast season, the model can generate maps of drought status across the basin with a particular chance of occurrence in the future. In a different analysis of the conditional CDF developed for the forecast season, the chance of a particular drought in the forecast period can be approximated given the drought status of earlier seasons. The forecast methodology developed in this study shows promising results in hydrologic forecasting, and its probabilistic features are inspiring for future studies.
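As an illustration of how a copula yields the conditional chance of drought next season given the current one, the sketch below uses a Gaussian copula, for which the conditional CDF has a closed form. The copula family, the correlation value, and the percentiles are assumptions for illustration, not values from the thesis.

```python
import math
from statistics import NormalDist

nd = NormalDist()

def conditional_cdf(u_next, u_now, rho):
    """P(U_next <= u_next | U_now = u_now) under a Gaussian copula.

    Under the copula, Z_next | Z_now = z is N(rho*z, 1 - rho^2), which
    gives the conditional CDF in closed form on the uniform scale.
    """
    z_next = nd.inv_cdf(u_next)
    z_now = nd.inv_cdf(u_now)
    return nd.cdf((z_next - rho * z_now) / math.sqrt(1 - rho ** 2))

# Example: consecutive seasons correlate with rho = 0.6; this season's
# drought index sits at the 10th percentile (dry).  The chance that
# next season falls below the 20th percentile is then well above the
# unconditional 20%.
p = conditional_cdf(0.20, 0.10, 0.6)
print(f"P(next season below 20th pct | now at 10th pct) = {p:.2f}")
```

Setting rho = 0 recovers the unconditional probability, which is a quick sanity check on the formula.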
248

Essays on Credit Markets and on Information

Plavsic, Bozidar January 2024 (has links)
In the first chapter of my thesis, titled “Interventions in Credit Markets and Effects on Economic Activity: Evidence from Brazil,” I investigate the impact of a Brazilian government policy implemented in March 2012 that aimed at increasing credit supply through public banks. Using bank-branch-level data, I find that the policy successfully increased overall credit supply, as the increased lending by public banks did not significantly crowd out private lending. On the other hand, there is no evidence of significant client-switching between private and public banks. However, the effects of the policy on economic activity were limited, even negligible. I conduct a series of robustness checks to further explore this puzzling result, and find evidence suggesting that increased lending led to significant increases in deposits, indicating that borrowers leveraged easily accessible credit to take loans and save the funds for future use. In the second chapter, titled “Television Introduction and Agricultural Production,” I investigate how improved information affected agricultural activity in the U.S. Specifically, I argue that the introduction of television brought more comprehensible weather forecast information to farmers, improving their decision-making process. Using data on television entry and county-level farm production in a difference-in-differences methodology, I estimate an economically significant effect of television introduction on crop yields.
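The difference-in-differences logic of the second chapter can be sketched on synthetic data: regressing the outcome on treatment, period, and their interaction, the interaction coefficient recovers the treatment effect. Everything below (variable names, numbers) is illustrative, not the chapter's actual data or specification.

```python
import numpy as np

# Minimal difference-in-differences sketch: crop yields for "treated"
# counties (which gained television coverage) vs. controls, before and
# after entry.  The true effect is planted at 2.0 and recovered below.
rng = np.random.default_rng(0)
n = 4000
treated = rng.integers(0, 2, n)     # county eventually gained TV coverage
post = rng.integers(0, 2, n)        # observation is after the entry year
effect = 2.0                        # true treatment effect on yields
yields = (30 + 3 * treated + 1.5 * post
          + effect * treated * post + rng.normal(0, 1, n))

# OLS with the interaction term: its coefficient is the DiD estimate,
# differencing out both the group gap and the common time trend.
X = np.column_stack([np.ones(n), treated, post, treated * post])
beta, *_ = np.linalg.lstsq(X, yields, rcond=None)
print(f"DiD estimate of the TV effect on yields: {beta[3]:.2f}")
```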
249

Quantifying numerical weather and surface model sensitivity to land use and land cover changes

Lotfi, Hossein 09 August 2022 (has links)
Land surfaces have changed as a result of human and natural processes, such as deforestation, urbanization, desertification, and natural disasters like wildfires. Land use and land cover change impacts local and regional climates through various biogeophysical processes across many time scales. More realistic representation of land surface parameters within land surface models is essential for climate models to accurately simulate the effects of past, current, and future land surface processes. In this study, we evaluated the sensitivity and accuracy of the Weather Research and Forecasting (WRF) model using the default MODIS land cover data and annually updated land cover data over the southeastern United States. Findings of this study indicated that the land surface fluxes and moisture simulations are sensitive to the surface characteristics over the southeastern US. Consequently, we evaluated the WRF temperature and precipitation simulations with more accurate observations of land surface parameters over the study area, comparing the default and updated land cover simulations against observational datasets. Results of the study showed that updating the land cover resulted in substantial variations in surface heat fluxes and moisture balances. Although the updated land use and land cover data provided more representative land surface characteristics, the WRF-simulated 2-m temperature and precipitation did not improve with the updated land cover data. Further, we conducted machine learning experiments to post-process the Noah-MP land surface model simulations to determine whether post-processing the model outputs can improve the simulated land surface parameters. The results indicate that post-processing the Noah-MP simulations with machine learning remarkably improved simulation accuracy: the gradient boosting and random forest models had smaller mean error bias values and larger coefficients of determination over the majority of stations. Moreover, the findings of the current study showed that the accuracy of surface heat flux simulations by Noah-MP is influenced by land cover and vegetation type.
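The machine-learning post-processing step can be sketched as a supervised regression from the raw simulation and surface characteristics to the observed flux. The sketch below uses scikit-learn on synthetic data in which the model bias depends on land cover and vegetation fraction; all predictor names and numbers are invented, not the study's.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
land_cover = rng.integers(0, 5, n)        # categorical land cover class id
veg_frac = rng.uniform(0, 1, n)           # vegetation fraction
sim_flux = rng.normal(100, 20, n)         # raw Noah-MP-style simulated flux
# Synthetic "observed" flux: the simulation bias depends on the
# surface characteristics, which is what the regressors can learn.
obs_flux = sim_flux - 15 * veg_frac + 3 * land_cover + rng.normal(0, 2, n)

X = np.column_stack([sim_flux, veg_frac, land_cover])
X_tr, X_te, y_tr, y_te = train_test_split(X, obs_flux, random_state=0)

for model in (GradientBoostingRegressor(random_state=0),
              RandomForestRegressor(n_estimators=200, random_state=0)):
    model.fit(X_tr, y_tr)
    raw_r2 = r2_score(y_te, X_te[:, 0])           # raw simulation baseline
    post_r2 = r2_score(y_te, model.predict(X_te))  # post-processed output
    print(f"{type(model).__name__}: baseline R2={raw_r2:.2f}, "
          f"post-processed R2={post_r2:.2f}")
```

Because the bias here is a learnable function of the surface predictors, both regressors lift the coefficient of determination above the raw-simulation baseline, mirroring the improvement the study reports.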
250

"There is no bad weather..." : The weather information behaviour of everyday users and meteorologists view on information / "Det finns inget dåligt väder..." : Allmänhetens väderinfromationsvanor och meteorologers syn på information

Thorsson, Petra January 2016 (has links)
The aim of this thesis is to investigate the use of weather information from an information science perspective. By using Everyday Life Information Seeking theories and a qualitative method, this thesis takes a novel approach to how weather information is used and viewed by everyday users and meteorologists. The material, based on seven interviews with everyday users and two focus group interviews with meteorologists, conveys new aspects of how weather information is used in an everyday setting and how meteorologists view their role as information providers. The analysis shows that for everyday users there is a difference in how weather information is used depending on age. While apps on mobile phones are used by both younger and older informants, other media types, such as TV and webpages, tend to be used by either older or younger age groups. The results also show that non-traditional sources are used for weather information among everyday users, such as non-weather web cameras and social media. There is likewise a difference in how meteorological forecasters and researchers view different aspects of weather information: both groups understand information as being dependent on how it is presented, though forecast meteorologists express a more nuanced view. The results from this study show that information science can be a vital tool for studying weather information habits, and it is the firm belief of the author that using information science could yield new insights for the meteorological community in the future. This is a two-year master thesis in Archive, Library and Museum studies.
