351

Accurate and efficient strategies for the appearance filtering of complex materials

Gamboa Guzman, Luis Eduardo 12 1900 (has links)
Realistic computer-generated images and simulations require physically based models to properly capture and reproduce light-material interactions. The underlying mathematical formulations are complex and mandate the use of efficient numerical methods, since analytic solutions are not available. Monte Carlo integration is one such commonly used numerical method, although alternative approaches leveraging, e.g., basis expansions may also be suitable for these challenging problems. In this thesis by articles, we present two works that devise efficient numerical integration strategies for the rendering of complex materials. First, we propose a method to compute the spatial-angular multi-dimensional integration problem that arises when rendering materials with high-frequency normal variation under large, angularly varying illumination. By computing and manipulating a novel spherical-histogram data representation, we are able to use spherical harmonics to efficiently solve the integral, outperforming the state of the art by a factor of roughly 30×. Our second work describes a high-performance Monte Carlo integration strategy for rendering layered materials. By identifying the best path sampling strategies in the micro-scale light transport context, we tailor an unbiased and efficient path construction method that evaluates high-throughput, low-variance paths through an arbitrary number of layers.
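The spherical-harmonics shortcut mentioned above rests on a standard identity: for functions expanded in an orthonormal SH basis, the integral of their product over the sphere reduces to a dot product of their coefficient vectors. The sketch below illustrates that identity numerically with a hand-rolled band-0/band-1 real SH basis and two invented, band-limited stand-in functions; it is only a generic illustration, not the spherical-histogram representation or the integrator developed in the thesis.

```python
import numpy as np

# Real spherical harmonics, bands 0 and 1 (standard closed forms).
def sh_basis(d):
    x, y, z = d
    return np.array([
        0.2820948,        # Y_0^0
        0.4886025 * y,    # Y_1^-1
        0.4886025 * z,    # Y_1^0
        0.4886025 * x,    # Y_1^1
    ])

def random_dirs(n, rng):
    # Uniform directions on the unit sphere.
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def project(f, dirs):
    # Monte Carlo projection onto the SH basis: c_i = (4*pi / N) * sum_j f(d_j) * Y_i(d_j)
    vals = np.array([f(d) * sh_basis(d) for d in dirs])
    return 4.0 * np.pi * vals.mean(axis=0)

rng = np.random.default_rng(0)
dirs = random_dirs(50_000, rng)

# Two band-limited stand-ins for "material" and "lighting" (linear in direction).
f = lambda d: 1.0 + 0.5 * d[2]
g = lambda d: 0.3 + 0.8 * d[0]

cf, cg = project(f, dirs), project(g, dirs)

# Dot product of SH coefficients vs. direct Monte Carlo estimate of the product integral.
sh_estimate = cf @ cg
mc_estimate = 4.0 * np.pi * np.mean([f(d) * g(d) for d in dirs])
print(sh_estimate, mc_estimate)  # both close to the exact value 1.2*pi ~ 3.77
```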
352

Le plaidoyer de la Coalition montréalaise des tables de quartier dans le débat public montréalais de lutte contre la pauvreté et l’exclusion sociale

Pillet, Amandine 06 1900 (has links)
Montreal is a metropolis where people of all origins, socio-economic backgrounds and levels of education live together. As public health actors, our goal is to ensure that everyone has equal access to both social and health opportunities. In Montreal, inequalities can be observed in areas such as education, work, housing and food security, among many others. This thesis is a case study of the advocacy carried out by the Montreal Coalition of Neighbourhood Round Tables (Coalition montréalaise des tables de quartier, CMTQ) between January 1, 2011 and June 1, 2016 in the public debate on fighting poverty and social exclusion; it explores how CMTQ actors exercise this advocacy. The CMTQ is a non-profit organization (NPO) that places the population at the centre of its concerns, campaigns for Montreal to be a fair and egalitarian city, and counts poverty and social exclusion among the issues on which it is important to act for the well-being of the community.
The Montreal Initiative of Support for Local Social Development (IM), a program of the Montreal Public Health Department (DSP), Centraide of Greater Montreal and the CMTQ, provides financial support to institutions such as local round tables with the purpose of improving the quality of life and living conditions of Montrealers. The CMTQ works in partnership with the neighbourhood tables on issues raised by local social development, on improving the quality of life and living conditions of citizens, and on fighting poverty and social exclusion. This study uses primarily qualitative data derived from content analyses of semi-structured interviews, position statements, briefs and appearances in traditional media (La Presse), as well as a quantitative content analysis of social media, in particular Twitter. The results of this research identify the CMTQ's spokespeople in the public arena and shed light on the strategies and means the CMTQ uses to exercise its advocacy, the messages carried by these strategies, and the way its actors go about it.
353

Dekódování čárového kódu v obraze / Decoding Barcode in Image

Bačíková, Petra January 2011 (has links)
The thesis describes the basic types of barcodes, their development and history. It covers the classification of barcodes by dimension and describes the best-known and most widely used types. The key chapter details the EAN-8, EAN-13 and UPC-A symbologies and the add-on symbol. An algorithm for decoding a barcode in an image is outlined. In conclusion, the results are evaluated and further development of the project is outlined.
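As a concrete example of the structure such a decoder must verify, the last digit of an EAN-13 code is a checksum over the first twelve digits with alternating weights 1 and 3. This sketch shows only that standard checksum rule and is independent of the image-processing pipeline described in the thesis.

```python
def ean13_check_digit(digits12):
    """Compute the EAN-13 check digit from the first 12 digits."""
    assert len(digits12) == 12
    # Weights alternate 1, 3, 1, 3, ... from the leftmost digit.
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits12))
    return (10 - total % 10) % 10

# Example: 590123412345 -> check digit 7, so the full code is 5901234123457.
print(ean13_check_digit([5, 9, 0, 1, 2, 3, 4, 1, 2, 3, 4, 5]))  # 7
```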
354

Aplikace (geo)demografických metod v oblasti vzdělávání / Application of (geo)demographic methods in education

Šebestík, Libor January 2011 (has links)
This master's thesis presents possible applications of demographic, geodemographic and statistical methods to data published by the educational sector. The methods of demographic analysis are represented by the use of rates, the concept of multistate demography (Markov chains) and the application of life tables. These procedures are used to evaluate the enrollment ratio at particular levels of education, the average length of schooling and the number of dropouts from school grades. Markov chains, which are based on the probabilities of transition between grades, are also examined in terms of their use for forecasting purposes. These methods analyze the situation at the preschool, primary and secondary levels and are applied to data from the annual Statistical Yearbooks on Education. In the field of geodemography, the so-called preferential model of migration flows is presented. This model examines how applicants for tertiary education prefer or reject the regions of the Czech Republic for their tertiary studies. The last method is binary logistic regression, which analyzes the inequalities in access to tertiary education. Both the preferential model and the logistic regression are based on data files on the admission process at...
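The grade-transition Markov chain mentioned above can be sketched compactly: given estimated probabilities of repeating a grade, progressing, or leaving school, next year's enrollment is a matrix-vector product. The transition probabilities and head counts below are invented for illustration and are not taken from the Czech yearbook data analyzed in the thesis (new first-grade entrants are also ignored here).

```python
import numpy as np

# States: grades 1-3 plus an absorbing "left school / completed" state.
# Row i gives the probability of moving from state i to each state next year.
P = np.array([
    [0.05, 0.90, 0.00, 0.05],  # grade 1 -> (repeat, grade 2, grade 3, left)
    [0.00, 0.04, 0.92, 0.04],  # grade 2
    [0.00, 0.00, 0.06, 0.94],  # grade 3 -> mostly completes/leaves
    [0.00, 0.00, 0.00, 1.00],  # absorbing state
])

enrollment = np.array([1000.0, 950.0, 900.0, 0.0])  # current-year head counts

# Project enrollment forward: counts_next = counts_now @ P (entrants not modelled).
for year in range(3):
    enrollment = enrollment @ P
    print(f"year +{year + 1}:", np.round(enrollment[:3]))
```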
355

Using the Media as a Means to Develop Students’ Statistical Concepts

Kemp, Marian 02 May 2012 (has links)
In this era of increasingly fast communication, people are exposed to quantitative information from national and international sources through a range of media including newspapers, magazines, television, radio, podcasts, YouTube and other areas of the Internet. Contexts include health statistics, environmental issues, traffic statistics, wars, gun laws and so on. It is becoming more and more important that citizens are able to critically read and interpret this information, and doing so requires an understanding of statistical concepts. Research has shown that students are motivated and engaged in learning through the use of authentic, real-life tasks. The media provide current information that can be used to develop both students' awareness of how social issues are constructed and vital statistical concepts. This paper proposes that secondary school students' application of a model for statistical analysis to material taken from media sources enhances their understanding of statistical concepts. This model, called the Five Step Framework, is described and exemplified for the particular context of opinion polling.
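One statistical concept that recurs when reading media reports of opinion polls is sampling error. The following is a minimal sketch of the usual 95% margin-of-error calculation for a reported poll percentage; it is not part of the Five Step Framework itself, and the headline figures are invented.

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Approximate 95% margin of error for a poll proportion under simple random sampling."""
    return z * math.sqrt(p_hat * (1.0 - p_hat) / n)

# A headline "52% support" from a poll of 1,000 respondents:
p, n = 0.52, 1000
moe = margin_of_error(p, n)
print(f"52% +/- {moe * 100:.1f} points")  # roughly +/- 3.1 points
```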
356

Ökonomische Analyse forstlicher Bestandesbehandlung / Economic Analysis of Forest Stand Management

Koster, Roman 11 September 2020 (has links)
No description available.
357

Wordlength inference in the Spade HDL : Seven implementations of wordlength inference and one implementation that actually works / Ordlängdsinferans i Spade HDL : Sju olika implementationer av ordlängdsinferens och en implementation som faktiskt fungerar

Thörnros, Edvard January 2023 (has links)
Compilers are complex programs with the potential to greatly facilitate software and hardware design. This thesis focuses on enhancing the Spade hardware description language, known for its user-friendly approach to hardware design. In hardware development, data size (for numerical values known as the "wordlength") plays a critical role in reducing hardware resource usage. This study presents an approach that integrates wordlength inference directly into the Spade language, enabling a conservative over-estimate of numeric data sizes to be derived solely from the program's source code. The methodology involves iterative development, incorporating various smaller implementations and evaluations, reminiscent of an agile approach. To assess the efficacy of the wordlength inference, multiple place-and-route runs are performed on identical Spade code using various versions of nextpnr. Surprisingly, the modifications introduced in this thesis show no discernible impact on hardware resource utilization. Nonetheless, the true significance of this work lies in its potential to unlock more advanced language features within the Spade compiler. While the wordlength inference proposed in this thesis shows promise, it requires further integration work to realize its full potential.
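The core of wordlength inference can be illustrated with simple propagation rules over value ranges: track the range each expression can take and derive the number of bits needed to represent it, accepting an over-estimate. The sketch below uses generic textbook rules for unsigned addition and multiplication; it is not the constraint-solving machinery of the Spade compiler, and the `UInt` type and inputs are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class UInt:
    """An unsigned value known only by its value range [lo, hi]."""
    lo: int
    hi: int

    @property
    def wordlength(self) -> int:
        # Bits needed to represent the largest possible value (an over-estimate is safe).
        return max(self.hi.bit_length(), 1)

    def __add__(self, other: "UInt") -> "UInt":
        # The range of a sum is the sum of the ranges; the width grows by at most one bit.
        return UInt(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other: "UInt") -> "UInt":
        # For unsigned operands the product range is the product of the bounds,
        # so the width is at most the sum of the operand widths.
        return UInt(self.lo * other.lo, self.hi * other.hi)

a = UInt(0, 255)    # an 8-bit input
b = UInt(0, 1023)   # a 10-bit input
acc = a * b + a
print(acc.wordlength)  # 18 bits suffice for (a * b + a)
```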
358

Database System Acceleration on FPGAs

Moghaddamfar, Mehdi 30 May 2023 (has links)
Relational database systems provide various services and applications with an efficient means for storing, processing, and retrieving their data. The performance of these systems has a direct impact on the quality of service of the applications that rely on them. Therefore, it is crucial that database systems are able to adapt and grow in tandem with the demands of these applications, ensuring that their performance scales accordingly. In the past, Moore's law and algorithmic advancements have been sufficient to meet these demands. However, with the slowdown of Moore's law, researchers have begun exploring alternative methods, such as application-specific technologies, to satisfy the more challenging performance requirements. One such technology is field-programmable gate arrays (FPGAs), which provide ideal platforms for developing and running custom architectures for accelerating database systems. The goal of this thesis is to develop a domain-specific architecture that can enhance the performance of in-memory database systems when executing analytical queries. Our research is guided by a combination of academic and industrial requirements that seek to strike a balance between generality and performance. The former ensures that our platform can be used to process a diverse range of workloads, while the latter makes it an attractive solution for high-performance use cases. Throughout this thesis, we present the development of a system-on-chip for database system acceleration that meets our requirements. The resulting architecture, called CbMSMK, is capable of processing the projection, sort, aggregation, and equi-join database operators and can also run some complex TPC-H queries. CbMSMK employs a shared sort-merge pipeline for executing all these operators, which results in an efficient use of FPGA resources. This approach enables the instantiation of multiple acceleration cores on the FPGA, allowing it to serve multiple clients simultaneously. CbMSMK can process both arbitrarily deep and wide tables efficiently. The former is achieved through the use of the sort-merge algorithm which utilizes the FPGA RAM for buffering intermediate sort results. The latter is achieved through the use of KeRRaS, a novel variant of the forward radix sort algorithm introduced in this thesis. KeRRaS allows CbMSMK to process a table a few columns at a time, incrementally generating the final result through multiple iterations. Given that acceleration is a key objective of our work, CbMSMK benefits from many performance optimizations. For instance, multi-way merging is employed to reduce the number of merge passes required for the execution of the sort-merge algorithm, thus improving the performance of all our pipeline-breaking operators. Another example is our in-depth analysis of early aggregation, which led to the development of a novel cache-based algorithm that significantly enhances aggregation performance. 
Our experiments demonstrate that CbMSMK performs on average 5 times faster than the state-of-the-art CPU-based database management system MonetDB.

Table of contents:
Part I: Database Systems & FPGAs
1 Introduction: 1.1 Databases & the Importance of Performance; 1.2 Accelerators & FPGAs; 1.3 Requirements; 1.4 Outline & Summary of Contributions
2 Background on Database Systems: 2.1 Databases (2.1.1 Storage Model; 2.1.2 Storage Medium); 2.2 Database Operators (2.2.1 Projection; 2.2.2 Filter; 2.2.3 Sort; 2.2.4 Aggregation; 2.2.5 Join; 2.2.6 Operator Classification); 2.3 Database Queries; 2.4 Impact of Acceleration
3 Background on FPGAs: 3.1 FPGA (3.1.1 Logic Element; 3.1.2 Block RAM (BRAM); 3.1.3 Digital Signal Processor (DSP); 3.1.4 IO Element; 3.1.5 Programmable Interconnect); 3.2 FPGA Design Flow (3.2.1 Specifications; 3.2.2 RTL Description; 3.2.3 Verification; 3.2.4 Synthesis, Mapping, Placement, and Routing; 3.2.5 Timing Analysis; 3.2.6 Bitstream Generation and FPGA Programming); 3.3 Implementation Quality Metrics; 3.4 FPGA Cards; 3.5 Benefits of Using FPGAs; 3.6 Challenges of Using FPGAs
4 Related Work: 4.1 Summary of Related Work; 4.2 Platform Type (4.2.1 Accelerator Card; 4.2.2 Coprocessor; 4.2.3 Smart Storage; 4.2.4 Network Processor); 4.3 Implementation (4.3.1 Loop-based Implementation; 4.3.2 Sort-based Implementation; 4.3.3 Hash-based Implementation; 4.3.4 Mixed Implementation); 4.4 A Note on Quantitative Performance Comparisons
Part II: Cache-Based Morphing Sort-Merge with KeRRaS (CbMSMK)
5 Objectives and Architecture Overview: 5.1 From Requirements to Objectives; 5.2 Architecture Overview; 5.3 Outline of Part II
6 Comparative Analysis of OpenCL and RTL for Sort-Merge Primitives on FPGAs: 6.1 Programming FPGAs; 6.2 Related Work; 6.3 Architecture (6.3.1 Global Architecture; 6.3.2 Sorter Architecture; 6.3.3 Merger Architecture; 6.3.4 Scalability and Resource Adaptability); 6.4 Experiments (6.4.1 OpenCL Sort-Merge Implementation; 6.4.2 RTL Sorters; 6.4.3 RTL Mergers; 6.4.4 Hybrid OpenCL-RTL Sort-Merge Implementation); 6.5 Summary & Discussion
7 Resource-Efficient Acceleration of Pipeline-Breaking Database Operators on FPGAs: 7.1 The Case for Resource Efficiency; 7.2 Related Work; 7.3 Architecture (7.3.1 Sorters; 7.3.2 Sort-Network; 7.3.3 X:Y Mergers; 7.3.4 Merge-Network; 7.3.5 Join Materialiser (JoinMat)); 7.4 Experiments (7.4.1 Experimental Setup; 7.4.2 Implementation Description & Tuning; 7.4.3 Sort Benchmarks; 7.4.4 Aggregation Benchmarks; 7.4.5 Join Benchmarks); 7.5 Summary
8 KeRRaS: Column-Oriented Wide Table Processing on FPGAs: 8.1 The Scope of Database System Accelerators; 8.2 Related Work; 8.3 Key-Reduce Radix Sort (KeRRaS) (8.3.1 Time Complexity; 8.3.2 Space Complexity (Memory Utilization); 8.3.3 Discussion and Optimizations); 8.4 Architecture (8.4.1 MSM; 8.4.2 MSMK: Extending MSM with KeRRaS; 8.4.3 Payload, Aggregation and Join Processing; 8.4.4 Limitations); 8.5 Experiments (8.5.1 Experimental Setup; 8.5.2 Datasets; 8.5.3 MSMK vs. MSM; 8.5.4 Payload-Less Benchmarks; 8.5.5 Payload-Based Benchmarks; 8.5.6 Flexibility); 8.6 Summary
9 A Study of Early Aggregation in Database Query Processing on FPGAs: 9.1 Early Aggregation; 9.2 Background & Related Work (9.2.1 Sort-Based Early Aggregation; 9.2.2 Cache-Based Early Aggregation); 9.3 Simulations (9.3.1 Datasets; 9.3.2 Metrics; 9.3.3 Sort-Based Versus Cache-Based Early Aggregation; 9.3.4 Comparison of Set-Associative Caches; 9.3.5 Comparison of Cache Structures; 9.3.6 Comparison of Replacement Policies; 9.3.7 Cache Selection Methodology); 9.4 Cache System Architecture (9.4.1 Window Aggregator; 9.4.2 Compressor & Hasher; 9.4.3 Collision Detector; 9.4.4 Collision Resolver; 9.4.5 Cache); 9.5 Experiments (9.5.1 Experimental Setup; 9.5.2 Resource Utilization and Parameter Tuning; 9.5.3 Datasets; 9.5.4 Benchmarks on Synthetic Data; 9.5.5 Benchmarks on Real Data); 9.6 Summary
10 The Full Picture: 10.1 System Architecture; 10.2 Benchmarks; 10.3 Meeting the Objectives
Part III: Conclusion
11 Summary and Outlook on Future Research: 11.1 Summary; 11.2 Future Work
Bibliography; List of Figures; List of Tables
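As a rough illustration of the general idea behind cache-based early aggregation discussed in the abstract above (kept deliberately generic; this is not CbMSMK's cache architecture, and the LRU policy and capacity are arbitrary choices), a bounded cache can combine duplicate grouping keys before they reach a pipeline-breaking operator, emitting a partial aggregate whenever it must evict:

```python
from collections import OrderedDict

def early_aggregate(rows, capacity=4):
    """Pre-aggregate (key, value) pairs with a bounded LRU cache.

    Emits partial aggregates on eviction and flushes the cache at the end;
    a downstream full aggregation must still merge entries with equal keys.
    """
    cache = OrderedDict()
    for key, value in rows:
        if key in cache:
            cache[key] += value          # combine with the cached partial sum
            cache.move_to_end(key)       # mark as recently used
        else:
            if len(cache) >= capacity:   # evict the least recently used entry
                yield cache.popitem(last=False)
            cache[key] = value
    yield from cache.items()             # flush the remaining partial aggregates

rows = [("a", 1), ("b", 2), ("a", 3), ("c", 1), ("a", 1), ("d", 5), ("e", 2), ("b", 1)]
print(list(early_aggregate(rows, capacity=3)))
```

Note that a key can still appear more than once in the output (here "b"), which is why early aggregation only reduces, rather than replaces, the work of the final aggregation stage.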
359

Evaluation des sources d'espèces et des déterminants de la diversité végétale des parcelles agricoles : interchamps, stock semencier, pratiques agricoles et paysage de l'Installation Expérimentale Inra ASTER Mirecourt / Assessment of species sources and determinants of plant diversity established in agricultural fields : field boundaries, seed bank, farming practices and landscape of the experimental farm of Inra ASTER Mirecourt

Gaujour, Etienne 11 May 2010 (has links)
One of the means to offset the reduced use of synthetic inputs is to durably favour the agro-ecological services provided by plant diversity. To this end, farmers will have to adapt their management practices. My work addresses the following applied objective: to provide farmers with guidance for managing plant diversity across the farm territory. I set two research objectives: i) to verify whether field boundaries and the soil seed bank are potential sources of plant species for field centres, and ii) to quantify the relative influence on plant diversity of the dynamics of two groups of factors, characterized as field paths: farming practices and the characteristics of the landscape mosaic.
I carried out this study on all the fields of the experimental farm of INRA ASTER Mirecourt, whose farming systems (mixed crop-dairy systems) have been converted to organic farming since 2004. I characterized the vegetation of permanent grasslands and arable fields (the established vegetation in field boundaries and field centres, and the vegetation in the soil seed bank) using two complementary approaches: a taxonomic approach at the species level and a functional approach based on seven properties relating to the dispersal, establishment and persistence of plant species. I characterized field paths over nine years, either from the farming practices applied each year or from the annual characteristics of the landscape mosaic, represented as a mosaic of distinct land uses whose spatialization was determined from farmer surveys and landscape observations.
My results show that the soil seed bank and field boundaries are not potential sources of plant species for field centres, in either permanent grasslands or arable fields. On the other hand, they are important refuges for a large share of grassland species. Based on my results, I hypothesize that field boundaries act as sinks for weed species in arable fields. I also show that the functional gradient of grassland vegetation between the field margin and the field centre extends over only 2 m.
Finally, plant diversity in the studied fields is mainly influenced by the field path with respect to the landscape mosaic and by the farming practices applied in the year of vegetation sampling; soil characteristics play only a minor role. These three groups of factors alone explain more than three quarters of the variability in the functional composition of the vegetation in field centres.
The management of plant diversity in a farm's fields can therefore be carried out in part by the farmer. However, given the effects of field paths with respect to the landscape mosaic, collective management of the vegetation among the different actors sharing the territory is also necessary.
360

Aproksimativna diskretizacija tabelarno organizovanih podataka / Approximative Discretization of Table-Organized Data

Ognjenović Višnja 27 September 2016 (has links)
This dissertation analyses the influence of data distributions on the results of discretization algorithms within the machine learning process. Based on the chosen databases and on discretization algorithms from rough set theory and decision trees, the relationship between the data distribution and the cuts of a given discretization has been investigated.
Changes in the consistency of a discretized table have been monitored as a function of the position of a reduced cut on the histogram. Fixed cuts, dependent on the segmentation of the multimodal distribution, have been defined; on the basis of these, the remaining cuts can be reduced. To determine the fixed cuts, the FixedPoints algorithm has been constructed, which selects them in accordance with a rough segmentation of the multimodal distribution.
An approximate discretization algorithm, APPROX MD, has been constructed for cut reduction. It uses the cuts obtained by the maximum discernibility (MD-Heuristic) algorithm together with parameters for the percentage of imprecise rules, the total classification percentage and the number of reduced cuts. The algorithm has been compared with the MD algorithm and with the MD algorithm with approximate solutions for α = 0.95.
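The notion of consistency monitored here can be made concrete with a small sketch: after mapping each numeric attribute through a set of cuts, a decision table is consistent if no two objects share all discretized attribute values while having different decision classes. The cuts and data below are invented for illustration and are unrelated to the databases used in the dissertation.

```python
from bisect import bisect_right

def discretize(row, cuts_per_attr):
    """Map each numeric attribute to the index of its interval between the cuts."""
    return tuple(bisect_right(cuts, v) for v, cuts in zip(row, cuts_per_attr))

def is_consistent(rows, decisions, cuts_per_attr):
    """A discretized table is consistent if equal condition vectors imply equal decisions."""
    seen = {}
    for row, d in zip(rows, decisions):
        key = discretize(row, cuts_per_attr)
        if seen.setdefault(key, d) != d:
            return False
    return True

rows = [(1.2, 0.4), (2.8, 0.9), (1.1, 0.8), (3.5, 0.2)]
decisions = ["no", "yes", "no", "yes"]

print(is_consistent(rows, decisions, [(2.0,), (0.5,)]))  # True: these cuts separate the classes
print(is_consistent(rows, decisions, [(), (0.5,)]))      # False: dropping the first cut merges rows 2 and 3
```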
