231

Intelligent information services in environmental applications

Räsänen, T. (Teemu) 22 November 2011 (has links)
Abstract The amount of information available has increased due to the development of our modern digital society. This has caused an information overflow: there is a lot of data available, but the meaningful information or knowledge is hidden inside the overwhelming data smog. Nevertheless, the large amount of data together with the increased capabilities of computers provides a great opportunity to learn the behaviour of different kinds of phenomena at a more detailed level. The quality of life, well-being and a healthy living environment, for example, are fields where new information services can support proactive decisions to avoid environmental problems caused by industrial activity, traffic, or extraordinary weather conditions. The combination of data coming from different sources such as public registers, companies’ operational information systems, online sensors and process monitoring systems provides a fruitful basis for creating new valuable information for citizens, decision makers or other end users. The aim of this thesis is to present the concept of intelligent information services and the methodological background for adding intelligence, using computational methods to enrich multidimensional data. Moreover, novel examples are presented where new significant information is created and then provided to end users. The data refining process used is called data mining and contains methods for data collection, pre-processing, modelling, visualizing and interpreting the results, and sharing the new information thus created. Information systems form the basis for information services: stakeholder groups have access to the information only, without owning the whole information system comprising measurement systems, data collection, and the technological platform. The intelligence in information services comes from the use of computationally intelligent methods in data processing, modelling and visualization. In this thesis the general concept of such services is presented and concretized through five cases focusing on environmental and industrial examples. The results of these case studies show that the combination of different data sources provides fertile ground for developing new information services. The data mining methods used, such as clustering and predictive modelling, together with effective pre-processing methods, also have great potential to handle large amounts of multivariate data in this environmental context. A self-organizing map combined with k-means clustering is useful for creating more detailed information about personal energy use. Predictive modelling using a multilayer perceptron (MLP) is well suited for estimating the number of tourists visiting a leisure centre and for finding the correspondence between pulp process characteristics and the chemicals used. These results have many indirect effects on reducing negative impacts on our surroundings and on maintaining a healthy living environment. The innovative use of stored data is one of the main elements in the creation of future information services. Thus, more emphasis should be placed on the development of data integration and effective data processing methods. Furthermore, it is noted that the final end users, such as citizens or decision makers, should be involved in the data refining process from the very first stage. In this way, the approach is truly customer-oriented and the results fulfil the concrete needs of specific end users.
/ Tiivistelmä The amount of information has grown significantly with the development of the information society. We thus have at our disposal a considerable amount of data in various forms, of which we can exploit only a part. The large volume of continuously measured data and its scattered location pose challenges to its utilization. In the information society, well-being and the preservation of a healthy living environment are considered more important than before. At the same time, improving the efficiency of companies' operations and promoting sustainable development require continuous improvement. With information technology, multidimensional measurement and register data can be used, for example, for proactive decision-making that advances the goals mentioned above. This thesis presents a concept of intelligent information services for the environmental field, in which the essential elements are identifying the needs of end users and solving their problems with refined information. Behind the intelligent information services lies a unified data-mining-based data refining process in which raw data is refined into a form suitable for end users. The data refining process consists of data collection and pre-processing, modelling, visualization, interpretation of the results, and sharing the essential information with end-user groups. Computationally intelligent methods have been used for data processing and analysis, which is where the title of the work, intelligent information services, comes from. The dissertation is based on five articles that demonstrate the functioning of the data refining process in different cases and present examples of computational methods suitable for each stage of the process. The articles describe information services related to forecasting visitor numbers in a tourist area and to reducing household electricity consumption, as well as an analysis aimed at reducing the amount of chemicals used in a pulp process. The experience and results gained from these have been generalized into the concept of an intelligent information service. A second aim of the dissertation is to encourage organizations to use their data resources more efficiently and in more versatile ways, and to consider using data sources available from outside their own organization. On the other hand, opening up data resources collected with public funds, and partly held by companies, would support the development of new kinds of information services and businesses.
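As an illustration of the energy-use case mentioned in the abstract, the sketch below combines a self-organizing map with k-means clustering on synthetic hourly household load profiles. The library choice (MiniSom), the map size and the data are assumptions made for the example, not details taken from the thesis.

```python
# Hypothetical sketch: cluster household energy-use profiles with a
# self-organizing map followed by k-means. Data and parameters are made up.
import numpy as np
from minisom import MiniSom
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
profiles = rng.random((500, 24))            # 500 households x 24 hourly readings (synthetic)

som = MiniSom(10, 10, 24, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(profiles, 5000)            # fit the 10x10 map to the profiles

codebook = som.get_weights().reshape(-1, 24)    # 100 prototype profiles
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(codebook)

# Map each household to the cluster of its best-matching SOM unit
unit_idx = np.array([np.ravel_multi_index(som.winner(p), (10, 10)) for p in profiles])
household_cluster = labels[unit_idx]
```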
232

Beam position diagnostics with higher order modes in third harmonic superconducting accelerating cavities

Zhang, Pei January 2013 (has links)
Higher order modes (HOMs) are electromagnetic resonant fields. They can be excited by an electron beam entering an accelerating cavity, and constitute a component of the wakefield. This wakefield has the potential to dilute the beam quality and, in the worst case, result in a beam-break-up instability. It is therefore important to ensure that these fields are well suppressed by extracting energy through special couplers. In addition, the effect of the transverse wakefield can be reduced by aligning the beam on the cavity axis, because its strength depends on the transverse offset of the excitation beam. For suitably small offsets the dominant components of the transverse wakefield are dipole modes, with a linear dependence on the transverse offset of the excitation bunch. This fact enables the transverse beam position inside the cavity to be determined by measuring the dipole modes extracted from the couplers, similar to a cavity beam position monitor (BPM), but requires no additional vacuum instrumentation. At the FLASH facility at DESY, 1.3 GHz (known as TESLA) and 3.9 GHz (third harmonic) cavities are installed. Wakefields in 3.9 GHz cavities are significantly larger than in the 1.3 GHz cavities. It is therefore important to mitigate the adverse effects of HOMs on the beam by aligning the beam on the electric axis of the cavities. This alignment requires accurate beam position diagnostics inside the 3.9 GHz cavities, which is the aspect this thesis focuses on. Although the principle of beam diagnostics with HOMs has been demonstrated on 1.3 GHz cavities, the realization in 3.9 GHz cavities is considerably more challenging. This is due to the dense HOM spectrum and the relatively strong coupling of most HOMs among the four cavities in the third harmonic cryo-module. A comprehensive series of simulations and HOM spectrum measurements has been performed in order to study the modal band structure of the 3.9 GHz cavities. The dependencies of various dipole modes on the offset of the excitation beam were subsequently studied using a spectrum analyzer. Various data analysis methods were used: modal identification, direct linear regression, singular value decomposition and k-means clustering. These studies led to three modal options promising for beam position diagnostics, upon which a set of test electronics was built. Experiments with these electronics suggest a resolution of 50 microns in predicting the local beam position within a cavity and a global resolution of 20 microns over the complete module. This constitutes the first demonstration of HOM-based beam diagnostics in a third harmonic 3.9 GHz superconducting cavity module. These studies have finalized the design of the online HOM-BPM for the 3.9 GHz cavities at FLASH.
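The abstract names direct linear regression and singular value decomposition among the analysis methods. As a rough, hedged illustration of that idea (not the thesis code), the sketch below maps synthetic dipole-mode spectra to transverse beam offsets; the data, the number of SVD components and the train/test split are all assumptions.

```python
# Hedged sketch: predict transverse beam position from dipole-mode HOM spectra
# using SVD for dimensionality reduction followed by linear regression. Synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_shots, n_samples = 400, 1024
true_xy = rng.uniform(-1.0, 1.0, size=(n_shots, 2))         # beam offsets in mm (synthetic)
response = rng.normal(size=(2, n_samples))                   # assumed linear mode response
spectra = true_xy @ response + 0.05 * rng.normal(size=(n_shots, n_samples))

# Reduce each spectrum to its leading SVD components
U, s, Vt = np.linalg.svd(spectra - spectra.mean(axis=0), full_matrices=False)
features = U[:, :6] * s[:6]                                   # scores on the first 6 modes

model = LinearRegression().fit(features[:300], true_xy[:300])
pred = model.predict(features[300:])
rms = np.sqrt(np.mean((pred - true_xy[300:]) ** 2))
print(f"RMS position error on held-out shots: {rms:.3f} mm")
```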
233

Text mining se zaměřením na shlukovací a fuzzy shlukovací metody / Text mining focused on clustering and fuzzy clustering methods

Zubková, Kateřina January 2018 (has links)
This thesis is focused on cluster analysis in the field of text mining and its application to real data. The aim of the thesis is to find suitable categories (clusters) in the transcribed calls recorded in the contact center of Česká pojišťovna a.s. by transforming these textual documents into a vector space using basic text mining methods and applying the implemented clustering algorithms. From a formal point of view, the thesis contains a description of the preprocessing and representation of textual data, a description of several common clustering methods, cluster validation, and the application itself.
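A minimal sketch of the pipeline described above (vector-space representation followed by clustering), assuming a few made-up documents in place of the actual call transcripts:

```python
# Illustrative sketch (not the thesis code): convert short documents to a TF-IDF
# vector space and cluster them with k-means. The documents are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "customer asks about car insurance claim",
    "question about life insurance premium payment",
    "car accident claim status inquiry",
    "change of address for life insurance policy",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)                 # sparse document-term matrix

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for doc, label in zip(docs, km.labels_):
    print(label, doc)
```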
234

Statistické vlastnosti mikrostruktury dopravního proudu / Statistical characteristics of the traffic flow microstructure

Apeltauer, Jiří Unknown Date (has links)
Current traffic flow theory assumes interactions only between neighbouring vehicles in the traffic stream. This assumption is reasonable, but it reflects the capabilities of science and technology available decades ago, which have since been surpassed. In general there are evidently interactions between vehicles at greater distances (or among multiple vehicles), but so far no procedure has been put forward to quantify the range of this interaction. This work introduces a method, based on mathematical statistics and precise measurement of the time headways of individual vehicles, that allows these interaction ranges (spanning several vehicles) to be determined, and validates it for narrow bands of traffic flow density. It was found that at high traffic flow densities there is an interaction between at least three consecutive vehicles, and between four and five vehicles at lower densities. The results could be applied in the development and verification of new traffic flow models.
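The abstract does not spell out the statistical procedure; as a heavily simplified stand-in, the sketch below checks how many lags of the time-headway sequence carry a non-negligible partial autocorrelation, which is one way to gauge how far back the vehicle-to-vehicle interaction reaches. The synthetic data and the use of partial autocorrelation are assumptions, not the author's method.

```python
# Rough illustration only: estimate the reach of vehicle-to-vehicle interaction
# from the partial autocorrelation of successive time headways. Synthetic data.
import numpy as np
from statsmodels.tsa.stattools import pacf

rng = np.random.default_rng(2)
n = 2000
headways = np.empty(n)
headways[:2] = 2.0
for i in range(2, n):                      # each headway depends on the two previous ones
    headways[i] = 0.4 * headways[i - 1] + 0.2 * headways[i - 2] + rng.exponential(1.0)

coeffs = pacf(headways, nlags=5)
for lag, c in enumerate(coeffs[1:], start=1):
    print(f"lag {lag}: partial autocorrelation {c:+.3f}")
# Lags with clearly non-zero coefficients hint at how many vehicles back the interaction reaches.
```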
235

Segmentace měkkých tkání v obličejové části myších embryí v mikrotomografických datech / Segmentation of soft tissues in facial part of mouse embryos from X-ray computed microtomography data

Janštová, Michaela January 2019 (has links)
This diploma thesis deals with the segmentation of soft tissues in the facial part of mouse embryos in Matlab. Segmentation of the soft tissues of mouse embryos has not been fully automated, and every case needs a specific solution. Solving parts of this problem can provide valuable data for evolutionary biologists. Issues concerning staining and segmentation techniques are described. On the basis of the available literature, Otsu thresholding, region growing, k-means clustering and atlas-based segmentation were tested. At the end of the thesis, these methods are tested and evaluated on 3D microtomography data.
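The thesis itself works in Matlab; purely for illustration, here is a Python sketch of two of the tested approaches, Otsu thresholding and k-means clustering of voxel intensities, run on a synthetic volume standing in for the micro-CT data.

```python
# Illustrative Python equivalent (not the thesis code): Otsu thresholding and
# k-means clustering of voxel intensities on a synthetic micro-CT-like volume.
import numpy as np
from skimage.filters import threshold_otsu
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
volume = rng.normal(0.2, 0.05, size=(64, 64, 64))
volume[20:40, 20:40, 20:40] += 0.5           # brighter "soft tissue" block (synthetic)

# Otsu: a single global intensity threshold
mask_otsu = volume > threshold_otsu(volume)

# k-means: cluster voxel intensities into background / tissue
labels3d = KMeans(n_clusters=2, n_init=10, random_state=0) \
    .fit_predict(volume.reshape(-1, 1)).reshape(volume.shape)
tissue_label = labels3d[30, 30, 30]           # label of a voxel known to lie in the bright block
mask_kmeans = labels3d == tissue_label

print("Otsu voxels:", mask_otsu.sum(), "k-means voxels:", mask_kmeans.sum())
```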
236

Dynamická faktorová analýza časových řad / Time series dynamic factor analysis

Slávik, Ľuboš January 2021 (has links)
This master's thesis deals with a new approach to clustering time series based on a dynamic factor model. A dynamic factor model is a dimension-reduction technique that extends classical factor analysis by requiring an autocorrelation structure for the latent factors. The model parameters are estimated with the EM algorithm using the Kalman filter and smoother, and the necessary constraints are imposed on the model to make it identifiable. After the theoretical concept of the approach is introduced, the dynamic factor model is applied to real observed time series, and the thesis examines its behaviour and properties on one month of meteorological Fire Weather Index data from 108 fire stations located in British Columbia. The model-fitting procedure estimates the loadings matrix together with a correspondingly small number of latent factors and the covariance matrix of the modelled time series. The thesis applies k-means clustering to the resulting loadings matrix and offers a division of the weather stations into clusters based on the reduced dimensionality of the original data. Thanks to the estimated cluster means and the estimated latent factors, the mean trend of each cluster can also be obtained. The results are then compared with results obtained on data from the same stations but from a different month in order to assess the stability of the clustering. The thesis also examines the effect of a varimax rotation of the loadings matrix. In addition, the thesis proposes a method for detecting outlying time series based on the estimated covariance matrix of the model and discusses the consequences of outliers for the estimated model.
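The thesis estimates a dynamic factor model by EM with a Kalman filter and smoother and then clusters the loadings matrix with k-means. The simplified sketch below substitutes ordinary PCA loadings for the dynamic-factor loadings so that the clustering step can be illustrated; the synthetic data and the PCA stand-in are assumptions, not the thesis method.

```python
# Simplified sketch: cluster stations by their loadings on a small number of
# latent trends. PCA stands in for the dynamic factor model; data are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
n_days, n_stations = 30, 108
factors = np.cumsum(rng.normal(size=(n_days, 2)), axis=0)       # two slow latent trends
loadings_true = rng.normal(size=(n_stations, 2))
series = factors @ loadings_true.T + rng.normal(scale=0.5, size=(n_days, n_stations))

pca = PCA(n_components=2).fit(series)           # stand-in for the estimated factor model
loadings = pca.components_.T                    # one row of loadings per station

station_cluster = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(loadings)
print(np.bincount(station_cluster))             # number of stations per cluster
```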
237

Segmentace obrazu pomocí neuronové sítě / Neural Network Based Image Segmentation

Vrábelová, Pavla January 2010 (has links)
This paper deals with the application of neural networks in image segmentation. The first part is an introduction to image processing and neural networks; the second part describes the implementation of the segmentation system and presents the results of experiments. The segmentation system enables the use of different types of classifiers and various image feature extraction methods, and also evaluates the success of the segmentation. Two classifiers were created: a neural network (self-organizing map) and the K-means algorithm. Colour (RGB and HSV) and texture features and their combinations were used for classification. Texture features were extracted using a set of Gabor filters. Experiments with the designed classifiers and feature extractors were carried out and the results were compared.
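A hedged sketch of one configuration mentioned in the abstract, colour plus Gabor texture features clustered per pixel with K-means; the sample image, filter frequencies and number of clusters are placeholders rather than the thesis settings.

```python
# Hedged sketch: per-pixel colour + Gabor texture features clustered with k-means.
import numpy as np
from skimage import data, color
from skimage.filters import gabor
from sklearn.cluster import KMeans

image = data.astronaut()                          # any RGB image
gray = color.rgb2gray(image)

# Texture: magnitude of two Gabor responses at different orientations
tex = [np.hypot(*gabor(gray, frequency=0.2, theta=t)) for t in (0.0, np.pi / 2)]

features = np.dstack([image / 255.0] + tex).reshape(-1, 5)   # R, G, B + 2 texture channels
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
segmentation = labels.reshape(gray.shape)                     # per-pixel cluster map
```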
238

Comparing unsupervised clustering algorithms to locate uncommon user behavior in public travel data : A comparison between the K-Means and Gaussian Mixture Model algorithms

Andrésen, Anton, Håkansson, Adam January 2020 (has links)
Clustering machine learning algorithms have existed for a long time and there is a multitude of variations of them available to implement. Each of them has its advantages and disadvantages, which makes it challenging to select one for a particular problem and application. This study focuses on comparing two algorithms, K-Means and the Gaussian Mixture Model, for outlier detection within public travel data from the travel planning mobile application MobiTime [1]. The purpose of this study was to compare the two algorithms against each other to identify differences in their outlier detection results. The comparisons were mainly done by comparing the number of outliers detected by each model with respect to the outlier threshold and the number of clusters. The study found that the algorithms differ substantially in their ability to detect outliers. These differences depend heavily on the type of data that is used, but one major difference found was that K-Means was more restrictive than the Gaussian Mixture Model when it comes to classifying data points as outliers. The results of this study could help practitioners determine which algorithm to implement for their specific application and use case.
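A minimal sketch of the comparison idea, assuming synthetic travel-like data: K-Means flags points far from their nearest centroid, while the Gaussian mixture model flags points with low log-likelihood. The thresholds are arbitrary illustrations, not the study's settings.

```python
# Hedged sketch: outlier detection with k-means (distance to nearest centroid)
# versus a Gaussian mixture model (low log-likelihood). Synthetic data, arbitrary thresholds.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
trips = np.vstack([
    rng.normal(0, 1, (500, 2)),
    rng.normal(6, 1, (500, 2)),
    rng.uniform(-10, 16, (20, 2)),        # a few scattered "uncommon" points
])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(trips)
dist = km.transform(trips).min(axis=1)                 # distance to nearest centroid
outliers_km = dist > np.percentile(dist, 98)

gmm = GaussianMixture(n_components=2, random_state=0).fit(trips)
loglik = gmm.score_samples(trips)                      # per-point log-likelihood
outliers_gmm = loglik < np.percentile(loglik, 2)

print("k-means outliers:", outliers_km.sum(), "GMM outliers:", outliers_gmm.sum())
```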
239

ASSESSING THE PERFORMANCE OF PROCEDURALLY GENERATED TERRAINS USING HOUDINI’S CLUSTERING METHOD

Varisht Raheja (8797292) 05 May 2020 (has links)
Terrain generation is a convoluted and popular topic in the VFX industry. Whether you are part of the film/TV or gaming industry, a terrain is a highly nuanced feature that is usually present. Whether walking across a desert-like terrain in the film Blade Runner 2049 or fighting on different planets as in Avatar, 3D terrains are a major part of any digital media. The purpose of this thesis is to develop a workflow for large-scale terrains using complex data sets and to use this workflow to maintain a balance between procedural content and artistic input, aimed especially at smaller companies that cannot afford an enhanced pipeline to deal with major technical complications. The workflow consists of two major elements: the development of the tool used to optimize the workflow, and the recording and tracking of its efficiency in comparison to the older workflow.

My research findings indicate that despite the increase in overall computational capability, one of the many issues still present is generating a highly advanced terrain while retaining the artist's and user's creative variations. Reducing the overall time to simulate and compute a highly realistic and detailed terrain is the main goal; this thesis therefore presents a method to overcome the speed deficiency while preserving the details of the terrain.
240

Quelques exemples de jeux à champ moyen / Some examples of mean field games

Coron, Jean-Luc 18 December 2017 (has links)
La théorie des jeux à champ moyen fut introduite en 2006 par Jean-Michel Lasry et Pierre-Louis Lions. Elle permet l'étude de la théorie des jeux dans certaines configurations où le nombre de joueurs est trop grand pour espérer une résolution pratique. Nous étudions la théorie des jeux à champ moyen sur les graphes en nous appuyant sur les travaux d'Olivier Guéant que nous étendrons à des formes plus générales d'Hilbertien. Nous étudierons aussi les liens qui existent entres les K-moyennes et les jeux à champ moyen ce qui permettra en principe de proposer de nouveaux algorithmes pour les K-moyennes grâce aux techniques de résolution numérique propres aux jeux à champ moyen. Enfin nous étudierons un jeu à champ moyen à savoir le problème "d'heure de début d'une réunion" en l'étendant à des situations où les agents peuvent choisir entre deux réunions. Nous étudierons de manière analytique et numérique l'existence et la multiplicité des solutions de ce problème. / The theory of mean field games was introduced in 2006 by Jean-Michel Lasry and Pierre-Louis Lions. It allows game theory to be studied in configurations where the number of players is too large for a practical solution to be expected. We study mean field games on graphs, building on the work of Olivier Guéant, which we extend to more general Hilbertian forms. We also study the links between K-means and mean field games, which in principle makes it possible to propose new algorithms for K-means using the numerical resolution techniques specific to mean field games. Finally, we study a particular mean field game, the "starting time of a meeting" problem, extending it to situations where the agents can choose between two meetings. We study analytically and numerically the existence and multiplicity of the solutions to this problem.
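For orientation only (notation assumed, not taken from the thesis): in their continuous-state form, mean field games couple a Hamilton–Jacobi–Bellman equation for the value function u with a Kolmogorov–Fokker–Planck equation for the player distribution m; the thesis studies analogues of this system on graphs.

```latex
\begin{aligned}
-\partial_t u - \nu \Delta u + H(x, \nabla u) &= f(x, m), \\
\partial_t m - \nu \Delta m - \operatorname{div}\!\bigl(m\, \partial_p H(x, \nabla u)\bigr) &= 0, \\
m(0, \cdot) = m_0, \qquad u(T, \cdot) &= g\bigl(x, m(T, \cdot)\bigr).
\end{aligned}
```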
