61

Metodologia de análise estrutural e pós-processamento a partir de simulações do comportamento de sistemas oceânicos. / Methodology of structural analysis and post-processing from offshore system simulations.

Henrique Murilo Gaspar 28 June 2007
Este trabalho apresenta uma metodologia capaz de unir a análise hidrodinâmica de um sistema oceânico com sua análise estrutural, assim como o pós-processamento acoplado dos resultados. Foram criadas rotinas e códigos para que a série temporal de forças das linhas de risers e amarração de uma plataforma pudessem tornar-se dados passíveis de entrada num pré-processador de elementos finitos. Com a aplicação destas no modelo, e sua conseqüente análise no domínio do tempo, foi criada uma interface para os resultados do solver, para que pudessem ser importados no pós-processador hidrodinâmico e visualizados com os mesmos movimentos que os obtidos na resposta da análise hidrodinâmica. O TPNView, atual pós-processador do laboratório Tanque de Provas Numérico (TPN), foi quem recebeu por fim as rotinas e interfaces criadas a partir das idéias apresentadas nesta dissertação. Com isso é possível ver em uma única ferramenta de visualização tanto o comportamento hidrodinâmico quanto o estrutural de uma estrutura do sistema de uma só vez. / This work presents a methodology developed to treat the hydrodynamic analysis of an offshore system conjointly with its structural analysis; the same methodology also allows for combined post-processing of the results. Programming routines were created to enable the use of the time series of the forces acting on the risers and mooring lines as input data for a finite element pre-processor. By applying these forces to the finite element model and analysing it in the time domain, it was possible to create an interface for the solver output, so that the structural results could be imported into the hydrodynamic post-processor and visualised with the same motions obtained from the hydrodynamic analysis. TPNView, the current post-processor of the Tanque de Provas Numérico (TPN) laboratory, received the routines and interfaces developed from the ideas presented in this dissertation. Using the resulting visualisation tool, it became possible to monitor at once both the hydrodynamic and the structural behaviour of a system component.
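A minimal sketch of the kind of routine the abstract describes, under our own assumption (not the thesis's) that the finite element pre-processor accepts tabular load-versus-time records; the CSV layout, column names and node IDs below are illustrative only, not the actual TPN tool chain:

    # Illustrative sketch only: turning a time series of mooring/riser line
    # tensions from a hydrodynamic simulation into tabular load-versus-time
    # records that a finite element pre-processor could read.
    import csv

    def export_line_loads(time, tensions, node_ids, path):
        """time: list of instants [s]; tensions: {line_name: [force per instant, N]};
        node_ids: {line_name: FE node where the line attaches} (hypothetical IDs)."""
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["time_s", "node_id", "line", "force_N"])
            for k, t in enumerate(time):
                for line, series in tensions.items():
                    writer.writerow([t, node_ids[line], line, series[k]])

    # Example: two lines sampled at 0.0 s and 0.5 s.
    export_line_loads(
        [0.0, 0.5],
        {"mooring_1": [1.2e6, 1.3e6], "riser_1": [4.0e5, 3.8e5]},
        {"mooring_1": 101, "riser_1": 202},
        "line_loads.csv",
    )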
62

Compositional Decompilation using LLVM IR

Eklind, Robin January 2015
Decompilation or reverse compilation is the process of translating low-level machine-readable code into high-level human-readable code. The problem is non-trivial due to the amount of information lost during compilation, but it can be divided into several smaller problems which may be solved independently. This report explores the feasibility of composing a decompilation pipeline from independent components, and the potential of exposing those components to the end-user. The components of the decompilation pipeline are conceptually grouped into three modules. Firstly, the front-end translates a source language (e.g. x86 assembly) into LLVM IR, a platform-independent low-level intermediate representation. Secondly, the middle-end structures the LLVM IR by identifying high-level control flow primitives (e.g. pre-test loops, 2-way conditionals). Lastly, the back-end translates the structured LLVM IR into a high-level target programming language (e.g. Go). The control flow analysis stage of the middle-end uses subgraph isomorphism search algorithms to locate control flow primitives in CFGs, both of which are described using Graphviz DOT files. The decompilation pipeline has been proven capable of recovering nested pre-test and post-test loops (e.g. while, do-while), and 1-way and 2-way conditionals (e.g. if, if-else) from LLVM IR. Furthermore, the data-driven design of the control flow analysis stage facilitates extensions to identify new control flow primitives. There is huge potential for future development. The Go output could be made more idiomatic by extending the post-processing stage, using components such as Grind by Russ Cox which moves variable declarations closer to their usage. The language-agnostic aspects of the design will be validated by implementing components in other languages; e.g. data flow analysis in Haskell. Additional back-ends (e.g. Python output) will be implemented to verify that the general decompilation tasks (e.g. control flow analysis, data flow analysis) are handled by the middle-end. / BSc dissertation written during an ERASMUS exchange from Uppsala University to the University of Portsmouth.
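The control flow analysis step can be pictured with a small sketch. The report describes primitives and CFGs as Graphviz DOT files; the same idea, locating a pre-test loop by subgraph isomorphism, is illustrated here with networkx (our substitution, not the report's tooling), and the block names are invented:

    # Hedged sketch: find a pre-test loop primitive inside a toy CFG by
    # subgraph isomorphism, in the spirit of the middle-end described above.
    import networkx as nx
    from networkx.algorithms import isomorphism

    # Pattern CFG of a pre-test loop: cond -> body -> cond, cond -> exit.
    primitive = nx.DiGraph()
    primitive.add_edges_from([("cond", "body"), ("body", "cond"), ("cond", "exit")])

    # Toy function CFG with one such loop embedded in it.
    cfg = nx.DiGraph()
    cfg.add_edges_from([("entry", "bb1"), ("bb1", "bb2"), ("bb2", "bb1"), ("bb1", "bb3")])

    matcher = isomorphism.DiGraphMatcher(cfg, primitive)
    for mapping in matcher.subgraph_isomorphisms_iter():
        # mapping: CFG basic blocks -> roles in the primitive (cond/body/exit)
        print(mapping)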
63

Apports et difficultés d’une collecte de données à l’aide de récepteurs GPS pour réaliser une enquête sur la mobilité / Contributions and difficulties of data collection using GPS receivers to conduct a survey on mobility

Pham, Thi Huong Thao 02 May 2016
Les méthodes de collecte de données sur la mobilité basée sur les nouvelles technologies ont évolué au cours des dernières décennies. Le suivi de la mobilité par GPS pour la recherche sur les comportements de déplacement se diffuse, principalement en raison de la couverture mondiale et de la précision du système GPS. Le sous-échantillon d’environ 957 volontaires qui ont accepté de porter un GPS pendant au moins une semaine dans le cadre de l'Enquête Nationale Transport et Déplacements (ENTD) 2007-08 est la première expérience nationale de ce type dans le monde. Cette thèse présente l'intérêt des nouvelles technologies pour les enquêtes de mobilité, leurs avantages et leurs biais. D'abord, en étudiant l'acceptabilité et les biais des enquêtes de mobilité à l'aide de GPS, les résultats confirment que les enquêtés volontaires pour une enquête GPS sont plus consciencieux et décrivent mieux leur mobilité. Ensuite, nous décrivons le profil des enquêtés qui acceptent l’enquête GPS. L’une des raisons principales du faible taux de l’acceptation du GPS dans l’ENTD 2007-2008 est la disponibilité de GPS. Donc pour l'accroissement du taux de disponibilité des GPS, nous avons besoin de calculer les temps et durées des visites de l'enquête pour réduire le taux d’immobilisation forcé des GPS. Indépendamment de ce problème d’acceptabilité du GPS par l’enquêté, un défi dans le traitement a posteriori des données collectées par GPS est la mise au point de méthodes permettant de combler les données manquantes et de reconstituer de manière automatisée des séquences continues, à la fois dans l'espace et dans le temps. Nous présentons l’algorithme de post-traitement et les résultats du logiciel de post-traitement des données GPS de l’ENTD 2007-2008. La validation est effectuée en comparant avec les données collectées par les méthodes classiques, ainsi qu’avec le CAPI-GPS qui fournit des informations supplémentaires sur la fiabilité de l'appareil et des caractéristiques plus détaillées (mode, motif...) pour les déplacements réalisés un jour tiré au sort. Ensuite, nous comparons les descriptions de la mobilité obtenues par deux méthodes, questionnaire classique de l’enquête, d’une part et traces GPS d’autre part, pour les jours couverts par ces deux instruments d’observation. Un panorama de l’appariement des traces GPS et des déplacements quotidiens est effectué en s’appuyant sur leur chronologie de manière automatique mais également par un contrôle manuel. En identifiant les caractéristiques des déplacements quotidiens et traces GPS non appariés, nous estimons les raisons de leur non-appariement. Cette recherche montre que l’enquête à l’aide de GPS peut être utilisée avec succès pour compléter les enquêtes de transport classiques, mais qu’il est encore prématuré d’imaginer la substitution complète des enquêtes classiques de mobilité par l’enquête GPS. Mots-clés : enquête de transports, mobilité, nouvelles technologies, collecte de données, GPS, post-traitement, acceptabilité, acceptation, CAPI-GPS, traces GPS, déplacement quotidien. / Travel survey methods based on new technologies have evolved in the past few decades, shifting from limited experiments to large-scale travel surveys. GPS-based data collection methods have become particularly popular in travel behavior research, mainly because of the worldwide coverage and the accuracy of the GPS system.
We took the opportunity of the French National Travel Survey (FNTS) to carry out the first nationwide experience of embedding such a “GPS package” in a traditional survey, with a sub-sample of approximately 957 voluntary respondents. This thesis begins by reviewing the interest of new technologies for mobility surveys, their advantages and their limitations. Prior to processing the GPS data, it is important to understand the likely acceptability of the GPS device among FNTS respondents – specifically whether a sufficient proportion would be willing to use a GPS device while completing a travel diary and, if not, why not. The findings confirm that the voluntary respondents with a GPS receiver are more conscientious and describe their mobility better. We then describe the profile of respondents who accept the GPS survey. One of the main reasons for the low GPS acceptance rate in the FNTS 2007-08 is the availability of the GPS devices; to increase the availability rate, the times and durations of the survey visits need to be calculated so as to reduce the forced immobilization rate of the devices. A challenge in GPS data post-processing is the development of methods to fill in missing GPS data and to reconstitute continuous sequences automatically, both in space and time. The architecture of the GPS data post-processing for the FNTS 2007-08 is then described, and the results of the post-processing software are validated by comparison with the data collected by conventional methods, as well as with the CAPI-GPS, which provides, for the trips made on a randomly drawn day, additional information on the reliability of the device and on more detailed characteristics (e.g. mode and purpose). Next, we compare the descriptions of mobility obtained by the two methods, CAPI and GPS, for the days covered by both observation tools. GPS traces are matched to daily trips automatically on the basis of their chronologies, and the matching is also checked manually. By identifying the characteristics of unmatched GPS traces and daily trips, we infer the reasons for the mismatches. Finally, we evaluate the contributions and challenges of data collection using GPS devices. This study shows that the GPS survey can be used successfully to complement conventional transport surveys, but it is still too early to predict the complete substitution of conventional mobility surveys by GPS surveys. Keywords: transport survey, mobility, new technologies, data collection, GPS, post-processing, acceptability, GPS traces, daily trip.
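As an illustration of the chronological matching described above, a hedged sketch (not the thesis software) that pairs GPS trace segments with diary trips by temporal overlap; the five-minute threshold and the greedy strategy are assumptions for the example:

    # Illustrative sketch only: match GPS trace segments to diary trips by
    # temporal overlap. Trips and traces are (start, end) datetime pairs.
    from datetime import timedelta

    def overlap(a, b):
        """Length of the time overlap between two (start, end) intervals."""
        start = max(a[0], b[0])
        end = min(a[1], b[1])
        return max(end - start, timedelta(0))

    def match_traces_to_trips(traces, trips, min_overlap=timedelta(minutes=5)):
        """Greedy chronological matching: each GPS trace is assigned to the
        diary trip it overlaps most, if the overlap exceeds the threshold."""
        matches = []
        for i, trace in enumerate(traces):
            best = max(range(len(trips)), key=lambda j: overlap(trace, trips[j]), default=None)
            if best is not None and overlap(trace, trips[best]) >= min_overlap:
                matches.append((i, best))
        return matches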
64

Optimalizace NC programu pomocí CAD/CAM software / Optimization of NC program using CAD/CAM software

Paseka, Jan January 2014
This master's thesis proposes savings in the technological preparation of production in a manufacturing company. The first part elaborates a general theoretical study of the current components and NC programs. Based on this study and the completed analysis, optimization proposals are defined and described in the second part. Together, these proposals lead to savings in the time and money needed to bring a prototype project into serial production.
65

Provinsgenerering med postprocess / Province generation with a post-process

Aldabbagh, Haimen January 2014
This bachelor's thesis is part of a larger project carried out at Paradox Interactive. The project aims to improve a map generator for the strategy game Europa Universalis IV. This work addresses the creation and implementation of a province generator that divides a pregenerated landscape into provinces. The provinces in the game are the regions on the game map that the game mechanics are based upon. The improvements expected of the new province generator include the following properties: • The provinces that are created have more logically placed boundaries that are affected by the structure of the landscape. • The program gives the user more control over what the end result should look like by letting the user set the values of input parameters. • The execution should not exceed an approximate time limit. This time limit is set by Paradox Interactive. The work began with research on the topics of map generation and map division, which gave enough knowledge to plan the implementation of the program. The programming language used is Java. The implementation of the program is based on many well-known algorithms, of which the most notable is Fortune's algorithm, which performs the main task of the provincial division in the program: the creation of Voronoi diagrams. The Voronoi diagrams are used to divide the map into regions which, through a post-process, result in the creation of the provinces. Other well-known algorithms and methods used or addressed in this report include Lloyd relaxation, Bresenham's line algorithm, Scan Line Flood Fill, Delaunay triangulation and the Bowyer-Watson algorithm. The result of this work is a Java application that can load a map file containing information about a landscape structure and create a division into provinces with provincial boundaries that depend on the structure of the landscape. The result of the provincial division may be controlled by a number of user-defined parameters. The program could not be fully calibrated during the project because the landscape generator was not ready in time to provide a map of a generated landscape. The generated provinces can be saved as an image file on the hard disk. / Kandidatexamensarbetet är en del av ett större projekt som utförs på företaget Paradox Interactive. Projektets mål är att förbättra en kartgenerator för strategispelet Europa Universalis IV. Det här arbetet avser skapandet och implementationen av en provinsgenerator som delar in ett färdiggenererat landskap i provinser. Provinserna i spelet är de landsdelar på kartan som spelmekaniken bygger på. Förbättringarna som förväntas av den nya provinsgeneratorn är bland annat att: • Provinserna som skapas ska ha mer logiska gränser som påverkas av landskapets utformning och inte vara alltför orealistiska. • Ge användaren mer kontroll över hur slutresultatet ska se ut genom användarinmatade parametrar. • Inte överstiga en ungefärlig tidsgräns vid programmets exekvering. Tidsgränsen sätts av Paradox Interactive. Arbetet började med forskning kring ämnena kartgenerering och kartindelning vilket gav tillräckligt med kunskap för att planera hur programmet skulle implementeras. Programmeringsspråket som används är Java. Implementationen av programmet bygger på många kända algoritmer där den mest anmärkningsvärda algoritmen är Fortune's algoritm som utför huvuduppgiften för provinsindelningen i programmet, skapandet av Voronoidiagram.
Voronoi-diagramen används för att dela in kartan i ytor som med hjälp av en postprocess resulterar i skapandet av provinserna. Andra kända algoritmer och metoder som används eller tas upp i den här rapporten är bland annat Lloyd relaxation, Bresenham's linjealgoritm, Scanline floodfill, Delaunay triangulering och Bowyer–Watson's algoritm. Resultatet av arbetet är ett Java-program som kan läsa in en kartfil med information om landskapsstruktur och skapa en indelning av provinser med provinsgränser som beror på landskapets utformning. Resultatet av provinsindelningen kan styras med hjälp av ett antal användarinmatade parametrar. Programmet hann inte kalibreras fullt ut under arbetets gång på grund av att landskapsgeneratorn inte blev färdig i tid för att kunna bidra med en genererad landskapskarta. De genererade provinserna kan sparas som en bildfil på hårddisken.
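A hedged, minimal sketch of the core idea described in this entry, written in Python rather than the project's Java, and with a brute-force grid Voronoi standing in for Fortune's algorithm: seeds partition the map into nearest-seed regions, and a few Lloyd-relaxation steps move each seed to its region's centroid before provinces are extracted. Map size and seed count are arbitrary example values.

    # Illustrative sketch only: discrete Voronoi partition plus Lloyd relaxation.
    import numpy as np

    def voronoi_labels(seeds, width, height):
        ys, xs = np.mgrid[0:height, 0:width]
        # Squared distance from every cell to every seed; label = nearest seed.
        d2 = (xs[..., None] - seeds[:, 0]) ** 2 + (ys[..., None] - seeds[:, 1]) ** 2
        return np.argmin(d2, axis=-1)

    def lloyd_relax(seeds, width, height, iterations=3):
        seeds = seeds.astype(float)
        for _ in range(iterations):
            labels = voronoi_labels(seeds, width, height)
            ys, xs = np.mgrid[0:height, 0:width]
            for i in range(len(seeds)):
                mask = labels == i
                if mask.any():  # move the seed to its region's centroid
                    seeds[i] = [xs[mask].mean(), ys[mask].mean()]
        return seeds, voronoi_labels(seeds, width, height)

    rng = np.random.default_rng(0)
    seeds = rng.uniform(0, 64, size=(12, 2))       # 12 province seeds on a 64x64 map
    seeds, provinces = lloyd_relax(seeds, 64, 64)  # provinces[y, x] = province index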
66

Digital Image Processing Algorithms Research Based on FPGA

Xu, Haifeng January 2011
The development of TV systems in America shows that digital broadcasting is the road television is heading down. Digital television is now prevailing in China, and the government is promoting its adoption. Because of the state of economic development, however, analog television will keep its place in the TV market for a long period, and since the broadcasting system has not been reformed, we should not only make use of the traditional analog system we already have but also improve the picture quality it delivers. With the high-speed development of high-end television and the research and application of digital television techniques, the flaws caused by interlaced scanning in traditional analog television, such as color crawling, flicker, and blurring and jagged edges on fast-moving objects, are more and more obvious. The conversion of interlaced scan to progressive scan, that is, de-interlacing, is therefore an important part of current television production. Many of the TV sets now on the market are based on digital processing technology and use various digital methods to process the interlaced, low field-rate video data, including de-interlacing and field-rate conversion. The digital processing chip is the heart of these new-fashioned TV sets and the reason for the improvement in visual quality. To meet the requirements of real-time television signal processing, most of these chips adopt novel hardware architectures or data processing algorithms. So far, the most quality-effective algorithms are based on motion compensation, which inevitably involves motion detection and motion estimation despite the high computation cost. In video processing chips, the performance and complexity of the motion estimation algorithm have a direct impact on the speed, area and power consumption of the chip; motion estimation also determines the efficiency of the coding algorithms in video compression. Taking both the performance and the complexity of motion estimation algorithms into consideration, this thesis proposes a Down-Sampled Diamond NTSS (DSD-NTSS) algorithm based on the New Three Step Search (NTSS) algorithm. The proposed DSD-NTSS algorithm makes use of the similarity of neighboring pixels in the same image and down-samples pixels in the reference blocks in a checkerboard (decussate) pattern to reduce the computation cost. Experimental results show that DSD-NTSS offers a better trade-off between performance and complexity, reducing the computation cost by about half compared with NTSS while delivering equivalent image quality. Compared further with Four Step Search (FSS), Diamond Search (DS), Three Step Search (TSS) and other fast search algorithms, the proposed DSD-NTSS generally comes out ahead in the combination of performance and complexity. This thesis focuses on this computation-reduced motion estimation algorithm for video post-processing systems and studies the FPGA design of the system.
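A sketch of the cost-saving idea behind DSD-NTSS as we read it from the abstract (not the thesis implementation): the block-matching SAD is evaluated only on a checkerboard ("decussate") subset of pixels, roughly halving the work per candidate motion vector. The search pattern itself (the NTSS/diamond steps) is omitted, and the frame data are synthetic.

    # Hedged illustration: SAD cost over a checkerboard subset of block pixels.
    import numpy as np

    def sad_checkerboard(block, candidate):
        """Sum of absolute differences over every other pixel of an NxN block."""
        n = block.shape[0]
        mask = (np.indices((n, n)).sum(axis=0) % 2) == 0   # checkerboard mask
        return np.abs(block.astype(int) - candidate.astype(int))[mask].sum()

    # Toy usage: compare an 8x8 block against one displaced candidate.
    rng = np.random.default_rng(1)
    frame_prev = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    frame_curr = np.roll(frame_prev, shift=(1, 2), axis=(0, 1))   # simulated motion
    block = frame_curr[16:24, 16:24]
    candidate = frame_prev[15:23, 14:22]        # candidate at the true motion vector
    print(sad_checkerboard(block, candidate))   # zero for the true motion vector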
67

Physically-based Real-time Glare

Delavennat, Julien January 2021
The theme of this master’s thesis is the real-time rendering of glare as seen through human eyes, as a post-processing effect applied to a first-person view in a 3D application. Several techniques already exist, and the basis for this project is a paper from 2009 titled Temporal Glare: Real-Time Dynamic Simulation of the Scattering in the Human Eye, by Ritschel et al. The goal of my project was initially to implement that paper as part of a larger project, but it turned out that there might be some opportunities to build upon aspects of the techniques described in Temporal Glare; consequently, these opportunities have been explored and constitute the main substance of this project. / The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
68

IMAGE CAPTIONING FOR REMOTE SENSING IMAGE ANALYSIS

Hoxha, Genc 09 August 2022
Image Captioning (IC) aims to generate a coherent and comprehensive textual description that summarizes the complex content of an image. It combines computer vision and natural language processing techniques to encode the visual features of an image and translate them into a sentence. In the context of remote sensing (RS) analysis, IC has been emerging as a new research area of high interest, since it not only recognizes the objects within an image but also describes their attributes and relationships. In this thesis, we propose several IC methods for RS image analysis. We focus on the design of different approaches that take into consideration the peculiarities of RS images (e.g. spectral, temporal and spatial properties) and study the benefits of IC in challenging RS applications. In particular, we focus our attention on developing a new decoder based on support vector machines. Compared to the traditional decoders based on deep learning, the proposed decoder is particularly interesting in situations where only a few training samples are available, alleviating the problem of overfitting. The peculiarity of the proposed decoder is its simplicity and efficiency: it has only one hyperparameter, does not require expensive processing units, and is very fast in terms of training and testing time, making it suitable for real-life applications. Despite the efforts made in developing reliable and accurate IC systems, the task is far from being solved. The generated descriptions are affected by several errors related to the attributes and the objects present in an RS scene. Once an error occurs, it is propagated through the recurrent layers of the decoders, leading to inaccurate descriptions. To cope with this issue, we propose two post-processing techniques that aim to improve the generated sentences by detecting and correcting the potential errors. They are based on a Hidden Markov Model and the Viterbi algorithm: the former generates a set of possible states, while the latter finds the optimal sequence of states. The proposed post-processing techniques can be injected into any IC system at test time to improve the quality of the generated sentences. While all the captioning systems developed in the RS community are devoted to single RGB images, we propose two captioning systems that can be applied to multitemporal and multispectral RS images. The proposed captioning systems are able to describe the changes that have occurred in a given geographical area through time. We refer to this new paradigm of analysing multitemporal and multispectral images as change captioning (CC). To test the proposed CC systems, we construct two novel datasets composed of bitemporal RS images. The first is composed of very high-resolution RGB images, while the second consists of medium-resolution multispectral satellite images. To advance the task of CC, the constructed datasets are publicly available at the following link: https://disi.unitn.it/~melgani/datasets.html. Finally, we analyse the potential of IC for content-based image retrieval (CBIR) and show its applicability and advantages compared to traditional techniques. Specifically, we focus our attention on developing a CBIR system that represents an image with generated descriptions and uses sentence similarity to search and retrieve relevant RS images. Compared to traditional CBIR systems, the proposed system is able to search and retrieve images using either an image or a sentence as a query, making it more convenient for end-users. The achieved results show the promising potential of our proposed methods compared to the baselines and state-of-the-art methods.
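To make the post-processing idea concrete, here is a generic Viterbi decoder over toy values (our illustration, not the author's code): candidate words act as HMM states, the captioner's scores as emissions, and the decoder returns the most probable corrected word sequence. All probabilities below are made up for the example.

    # Hedged sketch: Viterbi decoding over candidate caption words.
    import numpy as np

    def viterbi(states, start_p, trans_p, emit_scores):
        """states: candidate words; emit_scores: array of shape [T, len(states)]."""
        T, n = len(emit_scores), len(states)
        delta = np.zeros((T, n))
        back = np.zeros((T, n), dtype=int)
        delta[0] = np.log(start_p) + np.log(emit_scores[0])
        for t in range(1, T):
            for j in range(n):
                cand = delta[t - 1] + np.log(trans_p[:, j])
                back[t, j] = np.argmax(cand)
                delta[t, j] = cand[back[t, j]] + np.log(emit_scores[t][j])
        path = [int(np.argmax(delta[-1]))]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t, path[-1]]))
        return [states[i] for i in reversed(path)]

    words = ["road", "river", "bridge"]
    start = np.array([0.5, 0.3, 0.2])
    trans = np.array([[0.6, 0.2, 0.2], [0.2, 0.5, 0.3], [0.3, 0.3, 0.4]])
    emits = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6], [0.2, 0.6, 0.2]])
    print(viterbi(words, start, trans, emits))  # most probable sequence under the toy model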
69

Image processing techniques for sector scan sonar

Hendriks, Lukas Anton 12 1900
Thesis (MScEng (Electrical and Electronic Engineering))--University of Stellenbosch, 2009. / ENGLISH ABSTRACT: Sonars are used extensively for underwater sensing, and recent advances in forward-looking imaging sonar have made this type of sonar an appropriate choice for use on Autonomous Underwater Vehicles. The images received from these sonars do, however, tend to be noisy and, when used in shallow water, contain strong bottom reflections that obscure returns from actual targets. The focus of this work was the investigation and development of post-processing techniques to enable the successful use of the sonar images for automated navigation. The use of standard image processing techniques for noise reduction and background estimation was evaluated on sonar images with varying amounts of noise, as well as on a set of images taken from an AUV in a harbour. The use of multiple background removal and noise reduction techniques on a single image was also investigated. To this end a performance measure was developed, based on the dynamic range found in the image and the uniformity of returned targets. This provided a means to quantitatively compare sets of post-processing techniques and identify the “optimal” processing. The resultant images showed great improvement in the visibility of target areas, and the proposed techniques can significantly improve the chances of correct target extraction. / AFRIKAANSE OPSOMMING: Sonars word algemeen gebruik as onderwater sensors. Onlangse ontwikkelings in vooruit-kykende sonars, maak hierdie tipe sonar ’n goeie keuse vir die gebruik op ’n Outomatiese Onderwater Voertuig. Die beelde wat ontvang word vanaf hierdie sonar neig om egter raserig te wees, en wanneer dit in vlak water gebruik word toon dit sterk bodemrefleksies, wat die weerkaatsings van regte teikens verduister. Die fokus van die werk was die ondersoek en ontwikkeling van naverwerkings tegnieke, wat die sonar beelde bruikbaar maak vir outomatiese navigasie. Die gebruik van standaard beeldverwerkingstegnieke vir ruis-onderdrukking en agtergrond beraming, is geëvalueer aan die hand van sonar beelde met verskillende hoeveelhede ruis, asook aan die hand van ’n stel beelde wat in ’n hawe geneem is. Verdere ondersoek is ingestel na die gebruik van meer as een agtergrond beramings en ruis onderdrukking tegniek op ’n enkele beeld. Hierdie het gelei tot die ontwikkeling van ’n maatstaf vir werkverrigting van toegepaste tegnieke. Hierdie maatstaf gee ’n kwantitatiewe waardering van die verbetering op die oorspronklike beeld, en is gebaseer op die verbetering in dinamiese bereik in die beeld en die uniformiteit van die teiken se weerkaatsing. Hierdie maatstaf is gebruik vir die vergelyking van verskeie tegnieke, en identifisering van die “optimale” verwerking. Die verwerkte beelde het ’n groot verbetering getoon in die sigbaarheid van teikens, en die voorgestelde tegnieke kan ’n betekenisvolle bedrae lewer tot die suksesvolle identifisering van obstruksies.
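A hedged illustration of the kind of processing chain and quality measure described above (the thesis's exact filters and measure are not reproduced): a large median filter estimates the slowly varying bottom-reflection background, the residual is denoised with a small median filter, and a simplified score contrasts a known target region against the remaining background. Filter sizes are assumed values.

    # Illustrative sketch only: background estimation, denoising and a simple score.
    import numpy as np
    from scipy.ndimage import median_filter

    def enhance(sonar_img, background_size=31, noise_size=3):
        img = np.asarray(sonar_img, dtype=float)
        background = median_filter(img, size=background_size)  # slowly varying bottom returns
        residual = np.clip(img - background, 0, None)          # keep returns brighter than background
        return median_filter(residual, size=noise_size)        # suppress speckle noise

    def score(enhanced, target_mask):
        """Simplified quality measure in the spirit of the thesis: contrast between
        the target region and the background, penalised by target non-uniformity."""
        target = enhanced[target_mask]
        background = enhanced[~target_mask]
        contrast = target.mean() - background.mean()
        return contrast / (1.0 + target.std())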
70

Uma metodologia para exploração de regras de associação generalizadas integrando técnicas de visualização de informação com medidas de avaliação do conhecimento / A methodology for exploration of generalized association rules integrating information visualization techniques with knowledge evaluation measures

Fujimoto, Magaly Lika 04 August 2008
O processo de mineração de dados tem como objetivo encontrar o conhecimento implícito em um conjunto de dados para auxiliar a tomada de decisão. Do ponto de vista do usuário, vários problemas podem ser encontrados durante a etapa de pós-processamento e disponibilização do conhecimento extraído, como a enorme quantidade de padrões gerados por alguns algoritmos de extração e a dificuldade na compreensão dos modelos extraídos dos dados. Além do problema da quantidade de regras, os algoritmos tradicionais de regras de associação podem levar à descoberta de conhecimento muito específico. Assim, pode ser realizada a generalização das regras de associação com o intuito de obter um conhecimento mais geral. Neste projeto é proposta uma metodologia interativa que auxilie na avaliação de regras de associação generalizadas, visando melhorar a compreensibilidade e facilitar a identificação de conhecimento interessante. Este auxílio é realizado por meio do uso de técnicas de visualização em conjunto com a aplicação de medidas de avaliação objetivas e subjetivas, que estão implementadas no módulo de visualização de regras de associação generalizadas denominado RulEE-GARVis, que está integrado ao ambiente de exploração de regras RulEE (Rule Exploration Environment). O ambiente RulEE está sendo desenvolvido no LABIC-ICMC-USP e auxilia a etapa de pós-processamento e disponibilização de conhecimento. Neste contexto, também foi objetivo deste projeto de pesquisa desenvolver o Módulo de Gerenciamento do ambiente de exploração de regras RulEE. Com a realização do estudo dirigido, foi possível verificar que a metodologia proposta realmente facilita a compreensão e a identificação de regras de associação generalizadas interessantes. / The data mining process aims at finding implicit knowledge in a data set to aid in a decision-making process. From the user's point of view, several problems can be found at the stage of post-processing and provision of the extracted knowledge, such as the huge number of patterns generated by some of the extraction algorithms and the difficulty in understanding the models extracted from the data. Besides the problem of the number of rules, the traditional association rule algorithms may lead to the discovery of very specific knowledge. Thus, association rules can be generalized in order to obtain more general knowledge. In this project an interactive methodology is proposed to aid in the evaluation of generalized association rules, in order to improve their understandability and to facilitate the identification of interesting knowledge. This aid is accomplished through the use of visualization techniques along with the application of objective and subjective evaluation measures, which are implemented in the generalized association rule visualization module called RulEE-GARVis, integrated into the Rule Exploration Environment RulEE. The RulEE environment is being developed at LABIC-ICMC-USP and supports the post-processing and provision of knowledge. In this context, it was also an objective of this research project to develop the Management Module of the rule exploration environment RulEE. Through the directed study carried out, it was verified that the proposed methodology indeed facilitates the understanding and identification of interesting generalized association rules.
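As an example of the objective evaluation measures such a post-processing environment typically applies when ranking rules (support, confidence and lift are standard choices; the abstract does not name the specific measures used), a small sketch over a toy transaction set:

    # Illustrative sketch only (not RulEE-GARVis): objective measures of a rule A -> B.
    def rule_measures(transactions, antecedent, consequent):
        n = len(transactions)
        a = sum(1 for t in transactions if antecedent <= t)
        b = sum(1 for t in transactions if consequent <= t)
        ab = sum(1 for t in transactions if (antecedent | consequent) <= t)
        support = ab / n
        confidence = ab / a if a else 0.0
        lift = confidence / (b / n) if b else 0.0
        return {"support": support, "confidence": confidence, "lift": lift}

    # Toy usage: rule {milk} -> {bread} over five transactions.
    data = [{"milk", "bread"}, {"milk"}, {"bread", "eggs"}, {"milk", "bread", "eggs"}, {"eggs"}]
    print(rule_measures(data, {"milk"}, {"bread"}))  # support 0.4, confidence ~0.67, lift ~1.11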
