61

Estudo de fissão e espalação em núcleos actinídeos e pré-actinídeos a energias intermediárias / Study of fission and spallation of pre-actinide and actinide nuclei at intermediate energies.

Carlos David Gonzales Lorenzo 21 May 2015 (has links)
In this work we present a study of spallation reactions at intermediate energies in actinide and pre-actinide nuclei. For this purpose we used the Monte Carlo model CRISP (Rio-São Paulo Collaboration), which in this study was important for reproducing the mass distribution of residual products and the fission and spallation cross sections. These observables are important for the study of Accelerator Driven System (ADS) reactors, considered promising devices for the transmutation of nuclear waste. The physical models needed for a correct simulation of the experimental data were already implemented in CRISP, such as the evaporation model for particle emission described by Weisskopf in 1937 and, for fission, the classical Bohr/Wheeler model of 1939. To obtain the fission fragment mass distribution, CRISP also includes a model based on the multimodal fission parameterization, which simulates the symmetric and asymmetric fission processes that predominate at high and low energies, respectively. The CRISP results obtained after applying these models were the mass yields of the residual fragments, which were analyzed to calculate the fission and spallation cross sections using a formula implemented in the model. From these results, the mass distribution was obtained for each reaction analyzed. One of the reactions studied was the reaction induced by Bremsstrahlung photons with endpoint energies of 50 and 3500 MeV on a 181Ta target; the calculated fission and spallation mass distributions showed good agreement with the experimental data. For proton-induced reactions, the fission and spallation cross sections were calculated together with the respective mass distributions of the residual products. In this case two reactions were studied: p (1 GeV) + 208Pb and p (660 MeV) + 238U. For the first reaction, with lead, the CRISP results were compared with experimental data and with the results of the MCNPX-Bertini model from the 2003 work of Baylac-Domengetroy, which simulated the same reaction with lead. CRISP gave better results, although it overestimated the data at the tail of the calculated distribution. For uranium, it was necessary to use the so-called superasymmetric fission, because the experimental mass distribution is more complex and the classical multimodal model is not sufficient for a correct simulation. Deuteron-induced reactions were also studied with the CRISP model, showing the mass distributions for 197Au and 208Pb, with some limitations of the model for this type of reaction.
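
The multimodal fission parameterization mentioned in this abstract is commonly written as a sum of Gaussian contributions: one for the symmetric mode and a mirrored pair for each asymmetric mode. The sketch below is an illustrative Python version of that general idea, not the actual CRISP code; the mode weights, centroids and widths are hypothetical placeholder values.

    import numpy as np

    def multimodal_yield(A, modes):
        """Fragment mass yield Y(A) as a sum of Gaussian fission modes.

        Each asymmetric mode is (weight, heavy-fragment centroid, width) and
        contributes two Gaussians mirrored around the symmetric centroid A_sym.
        """
        A = np.asarray(A, dtype=float)
        A_sym = modes["A_sym"]
        # Symmetric mode: a single Gaussian centred on A_sym.
        K, sigma = modes["sym"]
        y = K * np.exp(-((A - A_sym) ** 2) / (2.0 * sigma ** 2))
        # Asymmetric modes: heavy-fragment peak plus its light-fragment mirror.
        for K, A_heavy, sigma in modes["asym"]:
            y += K * np.exp(-((A - A_heavy) ** 2) / (2.0 * sigma ** 2))
            y += K * np.exp(-((A - (2.0 * A_sym - A_heavy)) ** 2) / (2.0 * sigma ** 2))
        return y

    # Hypothetical parameters, roughly shaped like low-energy actinide fission.
    modes = {
        "A_sym": 117.0,              # symmetric centroid (about half the fissioning mass)
        "sym": (0.5, 14.0),          # (weight, sigma) of the symmetric mode
        "asym": [(4.0, 140.0, 5.5),  # standard-I heavy peak
                 (2.0, 134.0, 3.5)], # standard-II heavy peak
    }

    A_grid = np.arange(60, 180)
    yields = multimodal_yield(A_grid, modes)
    print("peak at A =", A_grid[np.argmax(yields)])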
62

A comparative study between algorithms for time series forecasting on customer prediction : An investigation into the performance of ARIMA, RNN, LSTM, TCN and HMM

Almqvist, Olof January 2019 (has links)
Time series prediction is one of the main areas of statistics and machine learning. In 2018, two new algorithms, the higher-order hidden Markov model and the temporal convolutional network, were proposed and emerged as challengers to the more traditional recurrent neural network and long short-term memory network, as well as to the autoregressive integrated moving average (ARIMA) model. In this study, most major algorithms for time series forecasting, together with these recent innovations, are trained and evaluated on two datasets from the theme park industry with the aim of predicting the future number of visitors. The Python libraries Keras and Statsmodels were used to develop the models. The results of this thesis show that the neural network models are slightly better than ARIMA and the hidden Markov model, and that the temporal convolutional network does not perform significantly better than the recurrent or long short-term memory networks, although it has the lowest prediction error on one of the datasets. Interestingly, the Markov model performed worse than all neural network models even when no independent variables were used.
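
As a hedged illustration of the kind of baseline this abstract describes, the sketch below fits an ARIMA model to a daily visitor-count series with Statsmodels and reports the mean absolute error on a hold-out period. The file name, column names and the order (2, 1, 2) are placeholder assumptions, not details taken from the thesis.

    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA
    from sklearn.metrics import mean_absolute_error

    # Hypothetical daily visitor counts; 'visitors.csv' and its columns are assumptions.
    series = pd.read_csv("visitors.csv", parse_dates=["date"], index_col="date")["visitors"]

    # Hold out the last 30 days for evaluation.
    train, test = series[:-30], series[-30:]

    # The ARIMA(p, d, q) order here is a placeholder; in practice it would be chosen
    # from ACF/PACF inspection or an information-criterion search.
    model = ARIMA(train, order=(2, 1, 2)).fit()
    forecast = model.forecast(steps=len(test))

    print("MAE on hold-out:", mean_absolute_error(test, forecast))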
63

Représentation et gestion des connaissances dans un processus d'Extraction de Connaissances à partir de Données multi-points de vue / Knowledge representation and management in a multi-viewpoint Knowledge Discovery from Data process

Zemmouri, El Moukhtar 14 December 2013 (has links) (PDF)
The information systems of today's enterprises are increasingly "flooded" with data of all types: structured (databases, data warehouses), semi-structured (XML documents, log files) and unstructured (text and multimedia). This has created new challenges for companies and for the scientific community, among them how to understand and analyze such masses of data in order to extract knowledge from them. Moreover, within an organization, a Knowledge Discovery from Data (KDD) project is most often carried out by several experts (domain experts, KDD experts, data experts...), each with their own preferences, area of expertise, objectives and view of the data and of the KDD methods. This is what we call a multi-view KDD process (or multi-viewpoint process). Our objective in this thesis is to ease the task of the KDD analyst and to improve coordination and mutual understanding among the different actors of a multi-view analysis, as well as the reuse of the KDD process in terms of viewpoints. We therefore propose a definition that makes the notion of viewpoint in KDD explicit and that takes into account domain knowledge (the domain being analyzed and the analyst's own domain) and the analysis context. Starting from this definition, we propose the development of a set of semantic models, structured in a Conceptual Model, for representing and managing the knowledge brought into play during a multi-view analysis. Our approach relies on a multi-criteria characterization of the viewpoint in KDD. This characterization aims first to capture the expert's objectives and analysis context, then to guide the execution of the KDD process, and finally to keep, in the form of annotations, a trace of the reasoning carried out during multi-expert work. These annotations are shared, compared and reused through a set of semantic relations between viewpoints.
64

De l'information à la prise de décision, analyse du processus de politique publique en Afrique francophone : le cas de la politique des enseignants contractuels de l'Etat / From Knowledge to Action, Analysis of the Decision Making Process in Africa : the Case of the State Contract Teacher Policy

Nkengne Nkengne, Alain Patrick 08 March 2011 (has links)
Policy analysis in Africa has so far focused mainly on evaluating policies. Now that several reforms are under way and the progressive move toward democracy is bringing new actors into the political arena, it is useful to analyze how public policy decisions are made. This dissertation examines the reform of the teacher recruitment policy in francophone Africa. The reform consisted in changing the teachers' status so that the government could pay new teachers a lower salary. The drawbacks of the usual policy, and above all research in the economics of education concluding that the reform was necessary, put the issue on the agenda. While many countries adopted the reform, some did not. The factors that triggered it vary among countries: a severe shortage of teachers, the existence of alternative recruitment schemes such as teachers paid by the communities, and above all pressure from donors through external aid. In some countries, civil society was more involved in the policy process. However, this stronger involvement did not necessarily prevent the reform, which targeted the reduction of new teachers' salaries, from being carried out.
65

Reálná aplikace metod dobývání znalostí z databází na praktická data / The real application of methods knowledge discovery in databases on practical data

Mansfeldová, Kateřina January 2014 (has links)
This thesis deals with a complete analysis of real data from free-to-play multiplayer games. The analysis follows the CRISP-DM methodology and uses the GUHA method and the LISp-Miner system. The goal is to define player churn in the pool game operated by Geewa Ltd. The practical part shows the whole process of knowledge discovery in databases: from theoretical background on player churn and its definition, through data understanding and data extraction, to modeling and finally obtaining the results of the mining tasks. The thesis formulates hypotheses that depend on various factors of the game.
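
A churn definition of the kind described above is typically operationalized as a simple rule over per-player activity logs, for example "no session within N days of the observation date". The sketch below is an illustrative pandas version of such a rule; the file name, column names and the 30-day window are assumptions, not the definition used in the thesis.

    import pandas as pd

    CHURN_WINDOW_DAYS = 30                          # assumed inactivity threshold
    OBSERVATION_DATE = pd.Timestamp("2014-01-01")   # assumed cut-off date

    # Hypothetical session log: one row per played game, with player id and timestamp.
    sessions = pd.read_csv("sessions.csv", parse_dates=["played_at"])

    # Last activity per player before the observation date.
    last_seen = (
        sessions[sessions["played_at"] < OBSERVATION_DATE]
        .groupby("player_id")["played_at"]
        .max()
    )

    # A player is labeled as churned if the last session is older than the window.
    churned = (OBSERVATION_DATE - last_seen).dt.days > CHURN_WINDOW_DAYS
    labels = churned.rename("is_churned").reset_index()

    print(labels["is_churned"].value_counts())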
66

Návrh a implementace Data Mining modelu v technologii MS SQL Server / Design and implementation of Data Mining model with MS SQL Server technology

Peroutka, Lukáš January 2012 (has links)
This thesis focuses on the design and implementation of a data mining solution with real-world data. The task is analysed and processed, and its results are evaluated. The mined data set contains study records of students from the University of Economics, Prague (VŠE) over the past three years. The first part of the thesis covers the theory of data mining: the definition of the term and the history and development of the field. Current best practices and methodology are described, as well as methods for determining data quality and for data pre-processing ahead of the actual data mining task. The most common data mining techniques are introduced, including their basic concepts, advantages and disadvantages. This theoretical basis is then used to implement a concrete data mining solution with educational data. The source data set is described and analysed, and some of the data are chosen as input for the created models. The solution is based on the MS SQL Server data mining platform, and its goal is to find, describe and analyse potential associations and dependencies in the data. The results of the respective models are evaluated, including their potential added value. Possible extensions and suggestions for further development of the solution are also mentioned.
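
The association and dependency analysis described above can be illustrated, outside the MS SQL Server tooling the thesis actually uses, with a generic market-basket-style sketch in Python. The mlxtend calls are standard, but the input file, its one-hot course columns and the thresholds are placeholder assumptions.

    import pandas as pd
    from mlxtend.frequent_patterns import apriori, association_rules

    # Hypothetical one-hot table: one row per student, one boolean column per passed course.
    records = pd.read_csv("study_records_onehot.csv").astype(bool)

    # Frequent course combinations with an assumed minimum support of 10 %.
    frequent = apriori(records, min_support=0.10, use_colnames=True)

    # Rules of the form {course A, course B} -> {course C} above 70 % confidence.
    rules = association_rules(frequent, metric="confidence", min_threshold=0.70)

    print(rules[["antecedents", "consequents", "support", "confidence", "lift"]].head())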
67

Možnosti prezentace výsledků DZD na webu / Options of presentation of KDD results on Web

Koválik, Tomáš January 2015 (has links)
This diploma thesis covers KDD analysis of data and options for presenting KDD results on the Web. The paper is divided into three main sections, which follow the overall process of the thesis. The first section introduces the theoretical basics needed to understand the problem: the notions of data matrix and domain knowledge, the CRISP-DM methodology, the GUHA method, the LISp-Miner system, and the implementation of the GUHA method in LISp-Miner, including a description of the core procedures 4ft-Miner and CF-Miner. The second section is dedicated to the first goal of the paper. It briefly summarizes the analysis made during the pre-analysis phase and then describes the process of analysing domain knowledge in a given data set. The third part focuses on the second goal of the thesis, the presentation of KDD results on the Web. It covers a brief theoretical basis for the technologies used and then describes the development of an export script for the automatic generation of a website from the results found with the LISp-Miner system, including a description of the structure of the output and recommendations for working in LISp-Miner.
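
The export step described above amounts to turning a list of discovered rules into a static web page. The sketch below shows a minimal Python version of that idea; the rule fields and the output file name are assumptions, and the real export script works against LISp-Miner's own output format.

    import html

    # Hypothetical mining results: each rule has an antecedent, a succedent and quantifiers.
    rules = [
        {"antecedent": "Level(high) & Games(>100)", "succedent": "Churn(no)",
         "confidence": 0.82, "support": 0.11},
        {"antecedent": "LastSeen(>30d)", "succedent": "Churn(yes)",
         "confidence": 0.77, "support": 0.09},
    ]

    rows = "\n".join(
        "<tr><td>{}</td><td>{}</td><td>{:.2f}</td><td>{:.2f}</td></tr>".format(
            html.escape(r["antecedent"]), html.escape(r["succedent"]),
            r["confidence"], r["support"])
        for r in rules
    )

    page = (
        "<html><head><title>KDD results</title></head><body>"
        "<h1>Discovered rules</h1><table border='1'>"
        "<tr><th>Antecedent</th><th>Succedent</th><th>Confidence</th><th>Support</th></tr>"
        f"{rows}</table></body></html>"
    )

    with open("results.html", "w", encoding="utf-8") as f:
        f.write(page)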
68

Webový portál pro správu a klasifikaci informací z distribuovaných zdrojů / Web Application for Managing and Classifying Information from Distributed Sources

Vrána, Pavel January 2011 (has links)
This master's thesis deals with data mining techniques and the classification of data into specified categories. The goal of the thesis is to implement a web portal for administering and classifying data from distributed sources. To achieve this goal, it is necessary to test different methods and find the most appropriate one for classifying web articles. Based on the results obtained, an automated application is developed for downloading and classifying data from different sources, which should ultimately be able to substitute for a user who would otherwise process all the tasks manually.
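
As a hedged illustration of the article-classification step, the sketch below builds a simple TF-IDF plus Naive Bayes pipeline with scikit-learn and evaluates it on a labeled sample. The data file and its columns are placeholder assumptions; the thesis compares several methods rather than committing to this particular one.

    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline
    from sklearn.metrics import classification_report

    # Hypothetical labeled articles: 'text' holds the article body, 'category' the label.
    articles = pd.read_csv("articles.csv")

    X_train, X_test, y_train, y_test = train_test_split(
        articles["text"], articles["category"], test_size=0.2, random_state=42)

    # One candidate classifier; the portal would compare several and keep the best.
    model = make_pipeline(TfidfVectorizer(max_features=50_000), MultinomialNB())
    model.fit(X_train, y_train)

    print(classification_report(y_test, model.predict(X_test)))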
69

Metodika vývoje a nasazování Business Intelligence v malých a středních podnicích / Methodology of development and deployment of Business Intelligence solutions in Small and Medium Sized Enterprises

Rydzi, Daniel January 2005 (has links)
This dissertation thesis deals with the development and implementation of Business Intelligence (BI) solutions for small and medium-sized enterprises (SMEs) in the Czech Republic. It represents the culmination of the author's effort to date to complete a methodological model for developing this kind of application for SMEs using in-house skills and a minimum of external resources and costs. The thesis can be divided into five major parts. The first part, which describes the technologies used, is divided into two chapters: the first describes the contemporary state of the Business Intelligence concept and contains an original taxonomy of Business Intelligence solutions; the second describes two Knowledge Discovery in Databases (KDD) techniques that were used to build the BI solutions introduced in the case studies. The second part describes the area of Czech SMEs, the environment in which the thesis was written and to which it is meant to contribute. This part defines the differences between SMEs and large corporations and explains the author's reasons for focusing on this area. The third part introduces the results of a survey conducted among Czech SMEs with the support of the Department of Information Technologies of the Faculty of Informatics and Statistics of the University of Economics, Prague. The survey had three objectives: to map the readiness of Czech SMEs for developing and deploying BI solutions, to determine the major problems and consequent decisions of Czech SMEs that could be supported by BI solutions, and to determine the top factors preventing SMEs from developing and deploying BI solutions. The fourth part is the core of the thesis: in two chapters it describes the original methodology for the development and deployment of BI solutions by SMEs, as well as the other methodologies that were studied. The original methodology is partly based on the well-known CRISP-DM methodology. Finally, the last part describes the particular company that became a testing ground for the author's theories and that supports his research. It introduces case studies of the development and deployment of BI solutions in this company, built using contemporary BI and KDD techniques in accordance with the original methodology. In that sense, these case studies verified the theoretical methodology in real use.
70

Geração de dados espaciais vagos baseada em modelos exatos / Generation of vague spatial data based on exact models

Proença, Fernando Roberto 29 May 2013 (has links)
Geographic information systems, with the aid of spatial databases, store and manage crisp spatial data (or exact spatial data), whose shapes (boundaries) are well defined and which have a precise location in space. However, many real spatial data do not have precisely known boundaries or have an uncertain location in space; these are called vague spatial data. The boundaries of a vague spatial object may shrink or extend and therefore have a minimum and a maximum extension. Pollution clouds, deforestation, fire outbreaks, the route of an airplane, and habitats of plants and animals are examples of vague spatial data. Vague spatial data models exist in the literature, such as Egg-Yolk, QMM and VASA. However, to our knowledge, they focus only on the formal aspect of the model definition, so neither real nor synthetic vague spatial data are available for use. The main objective of this master's thesis is the development of algorithms for the generation of synthetic vague spatial data based on the exact models of vague spatial data Egg-Yolk, QMM and VASA. A tool, called VagueDataGeneration, was also implemented to assist in the process of generating such data. In both the algorithms and the tool, the user is able to set the properties related to the data type of the model, such as size, shape, volume, complexity, location and spatial distribution. Using the proposed algorithms and the VagueDataGeneration tool, researchers can generate large samples of vague spatial data, enabling new research, such as testing indexes for vague spatial data or evaluating query processing over data warehouses that store vague spatial data. The generation of vague spatial data was validated in a case study with data from vague rural phenomena.
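
In the Egg-Yolk model mentioned above, a vague region is represented by two crisp regions: the yolk (minimum, certain extent) and the egg (maximum, possible extent) containing it. The sketch below is a minimal illustration of generating such a pair from a random crisp polygon using Shapely; the buffer distances and point counts are arbitrary placeholder parameters, not those of the VagueDataGeneration tool.

    import random
    from shapely.geometry import MultiPoint

    def random_vague_region(n_points=20, extent=100.0, shrink=3.0, grow=6.0):
        """Generate an Egg-Yolk style vague region as a (yolk, egg) polygon pair."""
        # Start from the convex hull of random points: a crisp 'core' polygon.
        pts = MultiPoint([(random.uniform(0, extent), random.uniform(0, extent))
                          for _ in range(n_points)])
        core = pts.convex_hull
        # Yolk = eroded core (guaranteed extent); egg = dilated core (possible extent).
        yolk = core.buffer(-shrink)
        egg = core.buffer(grow)
        return yolk, egg

    yolk, egg = random_vague_region()
    print("yolk area:", round(yolk.area, 1), "egg area:", round(egg.area, 1))
    print("yolk inside egg:", egg.contains(yolk))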
