541 |
Development of a framework for an integrated time-varying agrohydrological forecast system for southern Africa. Ghile, Yonas Beyene. January 2007
Policy makers, water managers, farmers and many other sectors of society in southern Africa are confronting increasingly complex decisions as a result of the marked day-to-day, intra-seasonal and inter-annual variability of climate. Hence, forecasts of hydro-climatic variables with lead times of days to seasons ahead are becoming increasingly important to them in making more informed, risk-based management decisions. With improved representations of atmospheric processes and advances in computer technology, major improvements have been made by institutions such as the South African Weather Service, the University of Pretoria and the University of Cape Town in forecasting southern Africa’s weather at short lead times and its various climatic statistics over longer time ranges. In spite of these improvements, the operational utility of weather and climate forecasts, especially in agricultural and water management decision making, is still limited. This is mainly because of a lack of reliability in their accuracy and because they are not suited directly to the requirements of agrohydrological models with respect to their spatial and temporal scales and formats. As a result, the need has arisen to develop a GIS-based framework in which the “translation” of weather and climate forecasts into more tangible agrohydrological forecasts, such as streamflows, reservoir levels or crop yields, is facilitated for enhanced economic, environmental and societal decision making over southern Africa in general, and in selected catchments in particular. This study focuses on the development of such a framework. As a precursor to describing and evaluating this framework, however, one important objective was to review the potential impacts of climate variability on water resources and agriculture, as well as to assess current approaches to managing climate variability and minimising risks from a hydrological perspective. With the aim of understanding the broad range of forecasting systems, the review was extended to the current state of hydro-climatic forecasting techniques and their potential applications in order to reduce vulnerability in the management of water resources and agricultural systems. This was followed by a brief review of some challenges and approaches to maximising benefits from these hydro-climatic forecasts. A GIS-based framework was developed to perform all the computations required to translate near real-time rainfall fields estimated by remote sensing, as well as daily rainfall forecasts with a range of lead times provided by Numerical Weather Prediction (NWP) models, into daily quantitative values suitable for application with hydrological or crop models. Another major component of the framework was the development of two methodologies, viz. the Historical Sequence Method and the Ensemble Re-ordering Based Method, for the translation of a triplet of categorical monthly and seasonal rainfall forecasts (i.e. Above, Near and Below Normal) into daily quantitative values, as such a triplet of probabilities cannot be applied in its original published form to hydrological/crop models which operate on a daily time step. The outputs of various near real-time observations, of weather and climate models, and of downscaling methodologies were evaluated against observations in the Mgeni catchment in KwaZulu-Natal, South Africa, both in terms of rainfall characteristics and of streamflows simulated with the daily time step ACRU model.
A comparative study of rainfall derived from daily reporting raingauges, ground-based radars, satellites and merged fields indicated that the raingauge and merged rainfall fields displayed relatively realistic results and may be used to simulate the “now state” of a catchment at the beginning of a forecast period. The performance of three NWP models, viz. the C-CAM, UM and NCEP-MRF, was found to vary from one event to another. However, the C-CAM model showed a general tendency towards under-estimation, whereas the UM and NCEP-MRF models suffered from significant over-estimation of the summer rainfall over the Mgeni catchment. Ensembles of streamflows simulated with the ACRU model, using ensembles of rainfall derived from both the Historical Sequence Method and the Ensemble Re-ordering Based Method, showed reasonably good results for most of the selected months and seasons for which they were tested, which indicates that the two methods of transforming categorical seasonal forecasts into ensembles of daily quantitative rainfall values are useful for various agrohydrological applications in South Africa and possibly elsewhere. The Ensemble Re-ordering Based Method was also found to be quite effective in generating the transitional probabilities of rain days and dry days, as well as the persistence of dry and wet spells within forecast cycles, all of which are important in the evaluation and forecasting of streamflows and crop yields, as well as of droughts and floods. Finally, future areas of research which could facilitate the practical implementation of the framework were identified. / Thesis (Ph.D.)-University of KwaZulu-Natal, Pietermaritzburg, 2007.
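The two downscaling methodologies named above are not specified in this abstract, but the general idea of turning a categorical tercile forecast into an ensemble of daily rainfall sequences by resampling historical seasons can be sketched as follows. This is an illustrative interpretation only; the function names, weighting scheme and data layout are hypothetical and not taken from the thesis.

```python
import random

def tercile_category(total, terciles):
    """Classify a seasonal rainfall total as Below, Near or Above Normal."""
    lower, upper = terciles
    if total <= lower:
        return "below"
    if total >= upper:
        return "above"
    return "near"

def daily_ensemble_from_tercile_forecast(historical_daily, forecast_probs,
                                         terciles, n_members=50, seed=0):
    """Resample historical daily rainfall sequences so that the ensemble's
    category frequencies follow a categorical seasonal forecast.

    historical_daily: {year: [daily rainfall values for the target season]}
    forecast_probs:   {"above": p_A, "near": p_N, "below": p_B}, summing to 1
    terciles:         (lower, upper) climatological tercile boundaries of the
                      seasonal total
    """
    rng = random.Random(seed)

    # Group historical seasons by the tercile category of their total rainfall.
    # (Assumes each category contains at least one historical season.)
    by_category = {"above": [], "near": [], "below": []}
    for year, days in historical_daily.items():
        by_category[tercile_category(sum(days), terciles)].append(year)

    # Draw ensemble members: pick a category with the forecast probabilities,
    # then a historical season from that category, and use its daily sequence.
    categories, weights = zip(*forecast_probs.items())
    members = []
    for _ in range(n_members):
        category = rng.choices(categories, weights=weights)[0]
        year = rng.choice(by_category[category])
        members.append(list(historical_daily[year]))
    return members
```

An ensemble built this way can be fed member by member into a daily-time-step hydrological or crop model, which is the role such downscaled rainfall plays in the framework described above.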
|
542 |
Tight Bernoulli tail probability bounds / Tiksliosios Bernulio tikimybių nelygybės. Dzindzalieta, Dainius. 12 May 2014
The purpose of the dissertation is to prove universal tight bounds for deviation-from-the-mean probability inequalities for functions of random variables. Universal means that the bounds are uniform with respect to a class of distributions, the number of variables and, in some cases, other parameters. The bounds are called tight if a sequence of random variables can be constructed for which the upper bounds are attained. Such inequalities are useful, for example, in insurance mathematics and in the construction of efficient algorithms. The dissertation consists of six chapters. The first chapter is an introduction which informally presents the object of study and gives a general description of the work and its motivation; a more detailed review of results by other authors is given separately in each chapter. The second chapter is devoted to the case of bounded symmetric random variables. The third chapter proves inequalities for random variables satisfying a bounded-variance condition. The fourth chapter considers sums of conditionally bounded random variables. The fifth chapter studies sequences of random variables forming a martingale or supermartingale, obtains universal probability inequalities for them, and constructs a non-homogeneous Markov chain which is a martingale and for which these inequalities become equalities. The sixth chapter generalizes the results to Lipschitz functions of sequences of random variables; the results are also extended to Lipschitz functions on general probability metric spaces.
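As background for the kind of inequality studied, here is a classical, non-tight example stated for orientation only (the dissertation's own bounds are sharper and are not reproduced here): for independent random variables X_1, ..., X_n taking values in [0, 1], Hoeffding's inequality gives

```latex
\begin{equation*}
  \Pr\!\left(\sum_{i=1}^{n} X_i - \mathbb{E}\sum_{i=1}^{n} X_i \ge t\right)
  \le \exp\!\left(-\frac{2t^{2}}{n}\right), \qquad t \ge 0.
\end{equation*}
```

A bound of this type is called tight only if some sequence of random variables attains it; Hoeffding's exponential bound is in general not attained exactly, which is precisely the gap that tight Bernoulli-type bounds are meant to close.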
|
544 |
Yield Curve Estimation And Prediction With Vasicek Model. Bayazit, Dervis. 01 July 2004
The scope of this study is to estimate tomorrow's zero-coupon yield curve by using the Vasicek yield curve model with today's zero-coupon bond yield data. The raw data of this study are the yearly simple spot rates of Turkish zero-coupon bonds with different maturities, observed daily from July 1, 1999 to March 17, 2004. We completed the missing data by using the Nelson-Siegel yield curve model and estimated tomorrow's yield curve with the discretized Vasicek yield curve model.
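For reference, the standard textbook forms of the two models named in the abstract, in generic notation (the thesis's own parameterisation, calibration and discretization details are not reproduced here):

```latex
% Vasicek short-rate dynamics and the resulting zero-coupon bond price and yield
\begin{align*}
  dr_t   &= a\,(b - r_t)\,dt + \sigma\,dW_t, \\
  P(t,T) &= A(t,T)\,e^{-B(t,T)\,r_t}, \qquad
            B(t,T) = \frac{1 - e^{-a(T-t)}}{a}, \\
  A(t,T) &= \exp\!\left[\frac{\bigl(B(t,T) - (T-t)\bigr)\bigl(a^{2}b - \sigma^{2}/2\bigr)}{a^{2}}
            - \frac{\sigma^{2}B(t,T)^{2}}{4a}\right], \\
  y(t,T) &= -\frac{\ln P(t,T)}{T-t}.
\end{align*}
% Nelson-Siegel curve, of the kind used to fill in missing maturities
\begin{equation*}
  y(\tau) = \beta_{0}
          + \beta_{1}\,\frac{1 - e^{-\tau/\lambda}}{\tau/\lambda}
          + \beta_{2}\!\left(\frac{1 - e^{-\tau/\lambda}}{\tau/\lambda} - e^{-\tau/\lambda}\right).
\end{equation*}
```

Fitting the Nelson-Siegel parameters to each day's observed spot rates yields a smooth curve from which missing maturities can be read off, while the affine Vasicek yield y(t,T) provides the model used for one-day-ahead prediction.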
|
545 |
Unsupervised discovery of activity primitives from multivariate sensor data. Minnen, David. 08 July 2008
This research addresses the problem of temporal pattern discovery in real-valued, multivariate sensor data. Several algorithms were developed, and subsequent evaluation demonstrates that they can efficiently and accurately discover unknown recurring patterns in time series data taken from many different domains. Different data representations and motif models were investigated in order to design an algorithm with an improved balance between run-time and detection accuracy. The different data representations are used to quickly filter large data sets in order to detect potential patterns that form the basis of a more detailed analysis. The representations include global discretization, which can be efficiently analyzed using a suffix tree, local discretization with a corresponding random projection algorithm for locating similar pairs of subsequences, and a density-based detection method that operates on the original, real-valued data. In addition, a new variation of the multivariate motif discovery problem is proposed in which each pattern may span only a subset of the input features. An algorithm that can efficiently discover such "subdimensional" patterns was developed and evaluated. The discovery algorithms are evaluated by measuring the detection accuracy of discovered patterns relative to a set of expected patterns for each data set. The data sets used for evaluation are drawn from a variety of domains including speech, on-body inertial sensors, music, American Sign Language video, and GPS tracks.
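The random projection step mentioned above, which locates candidate pairs of similar subsequences after local discretization, can be sketched as follows. This is a minimal, generic illustration; the names, parameters and collision threshold are assumptions and not the thesis's implementation.

```python
import random
from collections import defaultdict

def candidate_motif_pairs(words, num_iterations=25, mask_size=2, min_collisions=5):
    """Locate candidate pairs of similar subsequences via random projection.

    `words` is a list of equal-length symbol strings, one per subsequence,
    e.g. produced by a local discretization of sliding windows over the
    real-valued signal.  In each iteration a few symbol positions are masked
    at random and the remaining symbols are hashed; pairs of subsequences
    that collide in many iterations are likely near-duplicates and are
    returned for verification against the original real-valued data.
    """
    word_len = len(words[0])
    collision_counts = defaultdict(int)

    for _ in range(num_iterations):
        kept_positions = sorted(random.sample(range(word_len), word_len - mask_size))
        buckets = defaultdict(list)
        for index, word in enumerate(words):
            key = tuple(word[p] for p in kept_positions)
            buckets[key].append(index)
        for members in buckets.values():
            for i in range(len(members)):
                for j in range(i + 1, len(members)):
                    a, b = members[i], members[j]
                    if abs(a - b) > 1:  # ignore adjacent (heavily overlapping) windows
                        collision_counts[(a, b)] += 1

    return [pair for pair, count in collision_counts.items() if count >= min_collisions]

# Toy usage: strings 0 and 3 differ in a single symbol, so with high probability
# they are reported as a candidate pair for detailed real-valued comparison.
words = ["abcab", "bcabc", "ccccc", "abcaa", "babab"]
print(candidate_motif_pairs(words, num_iterations=50, mask_size=1, min_collisions=5))
```

The filtering character of the approach is visible here: the cheap symbolic collisions only nominate candidates, and the expensive distance computations on the original multivariate data are reserved for the pairs that survive.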
|
546 |
On the limiting shape of random Young tableaux for Markovian words. Litherland, Trevis J. 17 November 2008
The limiting law of the length of the longest increasing subsequence, LI_n, for sequences (words) of length n arising from iid letters drawn from finite, ordered alphabets is studied using a straightforward Brownian functional approach. Building on the insights gained in both the uniform and non-uniform iid cases, this approach is then applied to iid countable alphabets. Some partial results associated with the extension to independent, growing alphabets are also given. Returning again to the finite setting, and keeping with the same Brownian formalism, a generalization is then made to words arising from irreducible, aperiodic, time-homogeneous Markov chains on a finite, ordered alphabet. At the same time, the probabilistic object, LI_n, is simultaneously generalized to the shape of the associated Young tableau given by the well-known RSK-correspondence. Our results on this limiting shape describe, in detail, precisely when the limiting shape of the Young tableau is (up to scaling) that of the iid case, thereby answering a conjecture of Kuperberg. These results are based heavily on an analysis of the covariance structure of an m-dimensional Brownian motion and the precise form of the Brownian functionals. Finally, in both the iid and more general Markovian cases, connections to the limiting laws of the spectrum of certain random matrices associated with the Gaussian Unitary Ensemble (GUE) are explored.
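As orientation for the Brownian functional approach mentioned above, one standard representation in the simplest, uniform iid case can be written as follows (generic notation, with the usual weakly increasing convention for words; this is background, not the thesis's Markovian result). Writing N_k(j) for the number of occurrences of the k-th smallest letter among the first j letters of a word over an m-letter alphabet,

```latex
\begin{align*}
  LI_n &= \max_{0 = j_0 \le j_1 \le \cdots \le j_m = n}\;
          \sum_{k=1}^{m} \bigl[N_k(j_k) - N_k(j_{k-1})\bigr], \\
  \frac{LI_n - n/m}{\sqrt{n}}
       &\;\Longrightarrow\;
          \max_{0 = t_0 \le t_1 \le \cdots \le t_m = 1}\;
          \sum_{k=1}^{m} \bigl[B_k(t_k) - B_k(t_{k-1})\bigr],
\end{align*}
```

where (B_1, ..., B_m) is a centred Gaussian process with Cov(B_i(s), B_j(t)) = (s ∧ t)(δ_ij/m − 1/m²), the functional central limit of the multinomial letter counts. The Markovian setting studied in the thesis modifies this covariance structure, which is why, as the abstract notes, the results rest on an analysis of the covariance of an m-dimensional Brownian motion and the precise form of the functionals.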
|
547 |
Normiertes Misstrauen: der Verdacht im Strafverfahren [Normed distrust: suspicion in criminal proceedings] / Schulz, Lorenz. January 2001
Habilitation thesis, University of Frankfurt am Main, 1998.
|
548 |
Realität und Wahrheit: zur Kritik des kritischen Rationalismus [Reality and truth: on the critique of critical rationalism] / Keuth, Herbert. January 1978
Habilitation thesis, Mannheim. / Includes indexes. Bibliography: p. [198]-205.
|
549 |
Um modelo de engenharia do conhecimento baseado em ontologia e cálculo probabilístico para o apoio ao diagnóstico / A knowledge engineering model based on ontology and probabilistic calculus to support diagnosis. Lopes, Luiz Fernando. 29 September 2011
Diagnosis, as a knowledge-intensive task, is a complex process, since a wide variety of elements and circumstances must be considered in decision making. Uncertainty generated by subjectivity, imprecision and/or a lack of up-to-date information exists at almost every stage and compromises the safety and efficacy of the outcome. Data and useful information from completed diagnoses (process), when collected and treated appropriately (technique), remain in a latent state and can become a valuable source of knowledge if combined with the experience and observation of the professional (human) who uses them. The goal of this research is therefore to propose a Knowledge Engineering model that enables the generation of new knowledge to support the diagnostic process. The Knowledge Engineering methodologies, methods and techniques used in this model are CommonKADS, Ontologies, Probabilistic Calculus and Literature-Based Discovery Systems. Through the integration of these elements, the proposed model is applied to a case study in which evidence is highlighted and analysed through literature research as potential new knowledge. After a new piece of knowledge is confirmed, with the involvement of the scientific community, the inference process is updated. To verify the consistency of the model, a consensus of opinions was sought among a group of experts using the Delphi method. The results show that acceptance of the concepts, methods and techniques that make up the model is above the minimum established for this study, and the experts' comments generated reflections that compose the final result of this work. It is concluded, therefore, that the proposed model meets the requirements for the generation of new knowledge and contributes to the improvement of the diagnostic task.
|
550 |
Ambiente virtual de aprendizagem para o ensino de probabilidade e estatística nos anos iniciais do ensino fundamental / Virtual learning environment for the teaching of probability and statistics in the early years of elementary school. Dias, Cristiane de Fatima Budek. 26 August 2016
Accompanied by a technical production. This study aimed to develop a Virtual Learning Environment (VLE) for teaching probability and statistics in the early years of elementary school, in the light of the official curriculum documents and of teaching practices. To achieve this goal, an applied study with a qualitative, interpretative approach was carried out with teachers of the early years of elementary school in the Municipal Teaching Network of Ponta Grossa/PR. The teaching of probability and statistics in the early years is understood in this research through the concepts of Lopes (1998, 2003, 2008, 2010); Guimarães (2014); Grando, Nacarato and Lopes (2014); Lopes and Oliveira (2013); Silva (2011); and Borba, Monteiro, Guimarães, Coutinho and Kataoka (2011). The reflections on teaching the topic through the use of technology are guided by Ben-Zvi (2011); Batanero (2001); Estevan (2010); Estevan and Kalinke (2013); Ponte and Fonseca (2001); Lira and Monteiro (2011); among others. Teacher education and teachers' knowledge are understood from the perspective of Nóvoa (2009), Shulman (1986, 2005) and Mishra and Koehler (2006, 2008). To carry out this study, an analysis of the current curriculum documents, the PCN (BRASIL, 1997) and the Municipal Curriculum Guidelines (PONTA GROSSA, 2015), was conducted first; a questionnaire was then applied to ascertain teaching practices for probability and statistics and to seek a possible relationship between those practices and the curriculum documents. After this analysis stage, the development of the VLE began, based on the curriculum proposals, the theoretical framework studied and the teaching practices revealed in the questionnaire. At a later stage, the teachers were invited to interact and take part in the development through a meeting in which the proposal was presented to them and they were encouraged to intervene in it, giving opinions and expressing their expectations for the tool. After this meeting of co-participation in the development, the VLE was finalized, taking into account the proposals made by the teachers. The results show that teachers report working on much of the content proposed in the curriculum documents; the documents, however, have gaps that need to be filled. Regarding interaction in the VLE construction process, the research showed that there are many difficulties of acceptance when other approaches to developing technological tools for teaching, and new training proposals, are sought. Based on the data, there is a need for more effective proposals for training early-years teachers to work with probability and statistics, proposals that take content-specific issues into account. Nevertheless, it can be inferred that the participation of teachers in the process of developing technological resources is fundamental if those resources are to be developed with real attention to teachers' concerns and to classroom reality.
|