51

The development of an online energy auditing software application with remote SQL-database support

Van der Merwe, Johannes Schalk 2012
Thesis (MScEng)--Stellenbosch University, 2012. / ENGLISH ABSTRACT: Over the last century the earth has experienced an increase in global mean temperature, the main contributing factor being the increase in greenhouse gases. Evidence indicates that the burning of fossil fuels, critical to the supply of energy, contributed towards three quarters of the increase in carbon dioxide (CO2). In 2008 South Africa reached its electricity supply capacity constraints. The subsequent economic downturn, brought about by the worldwide recession, relieved some of the strain on the electricity supply system. However, consumption is returning to 2008 levels and no new base-load power stations have been added. Short-term capacity constraints can be managed by shifting peak demand, but an electricity shortage can only be avoided by adding capacity or reducing overall consumption. Supply-side solutions are both overdue and too expensive; the only solutions that can provide lasting results are on the demand side. Over the past few years the Energy Efficiency and Demand-side Management (EEDSM) programme implemented by South Africa's electricity utility, Eskom, has gained prominence. This programme relies heavily on calculating the savings achieved by any demand-side intervention. Energy audits enable the calculation of various consumption scenarios and can provide valuable insight into load operation and user behaviour. An energy audit is a two-part procedure consisting of a load survey and an analysis. This thesis describes the development of both procedures, combined into a single application. The application has been tested and provides an accurate and effective tool for simulating consumption and quantifying the savings of various load adjustments. The results surpassed expectations and provide the user with a sound baseline consumption estimate. They do not reflect day-to-day variations, but the simulations are sufficient to quantify savings and determine whether demand-side interventions are financially viable. The application also presents a benchmark for the type of application required to successfully implement an EEDSM programme.
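A minimal sketch of the baseline-and-savings arithmetic such an auditing application performs; the loads, run-times, and tariff below are illustrative values, not figures from the thesis:

```python
# Hypothetical sketch: estimate baseline consumption from a load survey
# and quantify the savings of a demand-side intervention.
def baseline_kwh(loads):
    """Daily consumption from a load survey: (rated_kw, hours_on) pairs."""
    return sum(kw * hours for kw, hours in loads)

# Surveyed loads for a small facility (illustrative values only).
loads = [(3.0, 10), (1.5, 24), (0.2, 6)]   # geyser, fridge, lights
daily_baseline = baseline_kwh(loads)        # kWh/day

# Intervention: solar water heating halves the geyser's run-time.
adjusted = [(3.0, 5), (1.5, 24), (0.2, 6)]
daily_adjusted = baseline_kwh(adjusted)

tariff = 2.50                               # currency units per kWh (assumed)
annual_saving = (daily_baseline - daily_adjusted) * 365 * tariff
print(f"Baseline: {daily_baseline:.1f} kWh/day, "
      f"saving ~ {annual_saving:.0f} per year")
```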
52

A pattern-driven corpus to predictive analytics in mitigating SQL injection attack

Uwagbole, Solomon January 2018
The back-end database provides accessible, structured storage for the web traffic exchanges of modern applications, from cloud-hosted web applications to Internet of Things (IoT) smart devices. Structured Query Language Injection Attack (SQLIA) remains an intruder's exploit of choice for stealing confidential information from the databases behind vulnerable front-end web applications, with potentially damaging security ramifications. Existing solutions to SQLIA still follow the on-premise server-hosting model; they were developed before the challenges of big data mining and so lack the ability to cope with new attack signatures concealed in large volumes of web requests. Moreover, most organisations' database and service infrastructure no longer resides on-premise; the growing use of cloud-hosted applications and services limits existing Structured Query Language Injection (SQLI) detection and prevention approaches that rely on source-code scanning. Machine Learning (ML) predictive analytics offers functional, scalable mining of big data for detecting and preventing SQLI while intercepting large volumes of web requests. Unfortunately, the lack of robust, ready-made data sets with patterns and historical items for training a classifier is a well-known problem in SQLIA research that applies ML. Purpose-built, competition-driven test data sets are antiquated and not pattern-driven enough to train a classifier for real-world use, and web applications are too diverse for a single all-purpose data set for ML SQLIA mitigation to exist. This thesis addresses the lack of a pattern-driven data set by deriving one that can predict SQLIA at any scale, and by proposing a technique for obtaining a data set on the fly, breaking the cycle of reliance on the few outdated competition-driven data sets, which were never meant to benchmark real-world SQLIA mitigation. As its contribution, the thesis derives a pattern-driven data set of related member strings used to train a supervised learning model, validated through the Receiver Operating Characteristic (ROC) curve and Confusion Matrix (CM), with low false-positive and false-negative rates. Cross-validation further yields low variance in accuracy, indicating a successfully trained model that can generalise to unknown real-world data with reduced bias. A proof of concept implements ML predictive analytics for SQLIA detection and prevention using this pattern-driven data set in a test web application. The experiments carried out in this thesis show that a data set of related member strings can be generated from a web application's expected input data and SQL tokens, including known SQLI signatures. The data-set extraction ontology proposed here for applied ML in SQLIA mitigation, in the context of big data and cloud-hosted services, sets the proposal apart from existing approaches, which mostly rely on on-premise source-code scanning or query-structure comparison.
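A minimal sketch of the kind of supervised pipeline the abstract describes, assuming scikit-learn and a toy set of benign/injection strings in place of the thesis's derived pattern-driven corpus:

```python
# Sketch: train a classifier to flag SQL-injection-like strings and
# validate with a confusion matrix and ROC AUC (toy data, not the
# thesis's derived corpus).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

benign = ["john.doe@example.com", "O'Brien", "search term", "42"]
attacks = ["' OR 1=1 --", "admin'--", "1; DROP TABLE users",
           "' UNION SELECT password FROM users --"]
X = benign + attacks
y = [0] * len(benign) + [1] * len(attacks)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                          stratify=y, random_state=0)

# Character n-grams capture SQL tokens such as quotes and comment markers.
model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(1, 3)),
    LogisticRegression(max_iter=1000),
)
model.fit(X_tr, y_tr)

probs = model.predict_proba(X_te)[:, 1]
print(confusion_matrix(y_te, model.predict(X_te)))
print("ROC AUC:", roc_auc_score(y_te, probs))
```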
53

Service recommendation for individual and process use / Recommandation de services pour un usage individuel et la conception de procédés métiers

Nguyen, Ngoc Chan 13 December 2012
Web services have been developed as an attractive paradigm for publishing, discovering and consuming services. They are loosely coupled applications that can run alone or be composed to create new value-added services. They can be consumed as individual services that provide a single interface receiving inputs and returning outputs, or as components to be integrated into business processes. We call the first consumption case individual use and the second business process use. The need for dedicated tools to assist consumers in both cases has motivated considerable research in academia and industry. On the one hand, many service portals and crawlers have been developed to help users search for and invoke Web services for individual use. However, current approaches mainly take into account the explicit knowledge presented by service descriptions: they make recommendations without considering data that reflect user interest, and may require additional information from users. On the other hand, mechanisms have been developed to search for similar business process models or to reuse reference models, assisting process analysts in business process design. However, they are labor-intensive, error-prone, time-consuming, and can confuse business analysts. In our work, we aim to facilitate service consumption for individual use and business process use through recommendation techniques. We target recommending to users services that are close to their interests, and recommending to business analysts services that are relevant to the business process being designed. To recommend services for individual use, we take into account the user's usage data, which reflect the user's interests, and apply well-known collaborative filtering techniques. We propose five algorithms and develop a web-based application that allows users to consume the recommended services. To recommend services for business process use, we take into account the relations between services in business processes, recommending relevant services for selected positions in a process. We define the neighborhood context of a service and make recommendations based on neighborhood-context matching. In addition, we develop a query language that allows business analysts to formally express filtering constraints, and we propose an approach to extract a service's neighborhood context from business process logs. Finally, we develop three applications to validate our approach and perform experiments on the data collected by these applications and on two large public datasets. Experimental results show that our approach is feasible, accurate, and performs well in real use cases.
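A minimal sketch of user-based collaborative filtering, the family of techniques the thesis applies for individual use; the usage matrix is a toy example and none of the five proposed algorithms is reproduced:

```python
# Sketch: user-based collaborative filtering over a service-usage matrix.
# Rows are users, columns are services; 1 means the user invoked it.
import numpy as np

usage = np.array([
    [1, 1, 0, 0],   # user 0
    [1, 1, 1, 0],   # user 1
    [0, 0, 1, 1],   # user 2
])

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return a @ b / denom if denom else 0.0

def recommend(user, k=1):
    """Score unused services by usage among the most similar users."""
    others = [v for v in range(len(usage)) if v != user]
    sims = np.array([cosine(usage[user], usage[v]) for v in others])
    scores = sims @ usage[others]            # weighted service popularity
    scores[usage[user] > 0] = -np.inf        # drop services already used
    return np.argsort(scores)[::-1][:k]

print(recommend(0))  # service 2, favoured by the most similar user
```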
54

Framkomlighetsanalys med hjälp av en digital terrängmodell och kartdata / Driveability analysis using a digital terrain model and map data

Edlund, Susanne January 2004
Driveability analysis of terrain data offers an important technique for decision support for all kinds of movement in the terrain. The work described in this report uses a high-resolution digital terrain model generated from laser radar data and further processed by the Category Viewer program, together with information from the Real Estate Map. Properties of features found in a filtering process are calculated and compared with a set of rules in a knowledge base to obtain a driveability cost, which is then visualized in a graphical user interface. An evaluation of what driveability is and what affects it is performed, and a general cost function is developed that can be used even when not all relevant information is available. The methods for property and cost calculation need further development, as do the rules in the knowledge base. However, the implemented program offers a good framework for further research in the area.
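A minimal sketch of a rule-based driveability cost of the kind described, including a fall-back for missing attributes so the function remains usable when not all information is available; the rules and weights are invented for illustration:

```python
# Sketch: rule-based driveability cost for a terrain cell; missing
# attributes fall back to a neutral penalty so the function still works
# when not all information is available (rules are illustrative).
RULES = {
    "slope_deg":  lambda v: 0.0 if v < 10 else (0.5 if v < 25 else 1.0),
    "wetness":    lambda v: v,                # already normalised 0..1
    "vegetation": lambda v: {"open": 0.0, "brush": 0.4, "forest": 0.8}[v],
}
NEUTRAL = 0.5   # assumed cost when a property is unknown

def driveability_cost(cell):
    """Average rule costs over known attributes; 0 = easy, 1 = impassable."""
    costs = [rule(cell[key]) if key in cell else NEUTRAL
             for key, rule in RULES.items()]
    return sum(costs) / len(costs)

print(driveability_cost({"slope_deg": 5, "vegetation": "brush"}))
# wetness is missing -> the neutral 0.5 is used for that term
```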
55

A Methodology for Domain-Specific Conceptual Data Modeling and Querying

Tian, Hao 02 May 2007
Traditional data management technologies originating in the business domain currently face many challenges from other domains such as scientific research. Data structures in databases are becoming more and more complex, and query functionality is moving from the back-end database level towards the front-end user interface. Traditional query languages such as SQL and OQL, and form-based query interfaces, cannot fully meet today's needs. This research is motivated by the data management issues in life-science applications. I propose a methodology for domain-specific conceptual data modeling and querying. The methodology can be applied to any domain to capture more domain semantics and to empower end-users to formulate queries at the conceptual level, using terminologies and functions familiar to them. The query system resulting from the methodology is designed to work on all major types of database management systems (DBMS) and to let end-users dynamically define and add new domain-specific functions. That is, all user-defined functions can either be pre-defined by domain experts and/or data model creators at system creation time, or dynamically defined by end-users from the client side at any time. The methodology comprises a domain-specific conceptual data model (DSC-DM) and a domain-specific conceptual query language (DSC-QL). DSC-QL uses only the abstract concepts, relationships, and functions defined in DSC-DM. It is a user-oriented, high-level query language intentionally designed to be flexible, extensible, and readily usable. DSC-QL queries are much simpler than the corresponding SQL or OQL queries because of advanced features such as user-defined functions, composite and set attributes, dot-path expressions, and super-classes. DSC-QL can be translated into SQL and OQL through a dynamic mapping function, and is automatically updated when the underlying database schema evolves. The operational and declarative semantics of DSC-QL are formally defined in terms of graphs. A normal form for DSC-QL, serving as a standard format for mapping flexible conceptual expressions to restricted SQL or OQL statements, is also defined, together with two translation algorithms from normalized DSC-QL to SQL and OQL. Through comparison, DSC-QL is shown to strike a good balance between simplicity and expressive power and to be suitable for end-users. Implementation details of the query system are reported as well. Two prototypes have been built: one for the neuroscience domain on an object-oriented DBMS, and one for a traditional business domain on a relational DBMS.
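A minimal sketch of the conceptual-to-SQL idea: a dot-path expression over the conceptual model rewritten as a join via a mapping table. The schema, mapping, and query are hypothetical, and DSC-QL's actual grammar is far richer:

```python
# Sketch: rewrite a conceptual dot-path query into SQL using a mapping
# from conceptual names to tables and join conditions (names invented).
MAPPING = {
    "Neuron":            ("neuron", None),
    "Neuron.soma":       ("soma", "soma.neuron_id = neuron.id"),
    "Neuron.soma.volume": "soma.volume",
}

def translate(path, predicate):
    """SELECT a dot-path attribute with a simple WHERE predicate."""
    entity, attr_path = path.split(".", 1)
    table, _ = MAPPING[entity]
    join_table, join_cond = MAPPING[f"{entity}.{attr_path.split('.')[0]}"]
    column = MAPPING[path]
    return (f"SELECT {column} FROM {table} "
            f"JOIN {join_table} ON {join_cond} WHERE {predicate}")

print(translate("Neuron.soma.volume", "neuron.type = 'pyramidal'"))
# SELECT soma.volume FROM neuron JOIN soma
#   ON soma.neuron_id = neuron.id WHERE neuron.type = 'pyramidal'
```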
56

Protein Structure Data Management System

Wang, Yanchao 03 August 2007
With advances in laboratory instruments and experimental techniques, protein data is growing at an explosive rate, so efficiently storing, retrieving and modifying protein data has become a challenge that most biological scientists have to face. Traditional data models such as the relational model lack support for complex data types, a serious limitation for protein data applications. Many scientists have therefore switched to object-oriented databases (OODBs), since the object-oriented nature of life-science data matches their architecture well, but many problems must still be solved before OODB methodologies can be applied to protein data. One major problem is that general-purpose OODBs have no built-in data types or biological domain-specific functional operations. In this dissertation, we present an application system with such built-in data types and operations that extends an OODB by adding the domain-specific layers Protein-QL, Protein Algebra Architecture and Protein-OODB above the OODB to manage protein structure data. The system is composed of three parts: 1) a client API that provides easy usage for different users; 2) middleware, comprising Protein-QL, the Protein Algebra Architecture and Protein-OODB, which implements the protein domain-specific query language, optimizes complex queries, and encapsulates implementation details so that users can easily understand and master Protein-QL; and 3) data storage for the protein data. The system is built for the protein domain but can easily be extended to other biological domains to build a bio-OODBMS. Protein, primary, secondary, and tertiary structures are defined as internal data types to simplify Protein-QL queries, so that domain scientists can easily master the query language and formulate data requests; EyeDB is used as the underlying OODB communicating with Protein-OODB. In addition, protein data is usually stored in the PDB format, which is old, ambiguous, and inadequate; PDB data curation is therefore discussed in detail in the dissertation.
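A minimal sketch of how built-in structure types can simplify domain queries, using a toy in-memory stand-in; Protein-QL's actual syntax and the EyeDB bindings are not reproduced, and all names are hypothetical:

```python
# Sketch: protein structures as first-class types with a thin query API,
# standing in for Protein-QL over an OODB (toy, in-memory).
from dataclasses import dataclass, field

@dataclass
class SecondaryStructure:
    kind: str          # "helix" or "sheet"
    start: int
    end: int

@dataclass
class Protein:
    pdb_id: str
    sequence: str                                   # primary structure
    secondary: list = field(default_factory=list)

    def helix_fraction(self) -> float:
        """Domain-specific operation: residues in helices / total."""
        in_helix = sum(s.end - s.start + 1
                       for s in self.secondary if s.kind == "helix")
        return in_helix / len(self.sequence)

db = [Protein("1ABC", "MKTAYIAKQR", [SecondaryStructure("helix", 2, 7)]),
      Protein("2XYZ", "GGSGGS", [SecondaryStructure("sheet", 1, 4)])]

# "SELECT p FROM Protein p WHERE p.helix_fraction() > 0.5" in spirit:
print([p.pdb_id for p in db if p.helix_fraction() > 0.5])  # ['1ABC']
```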
57

Reduction Of Query Optimizer Plan Diagrams

Darera, Pooja N
Modern database systems use a query optimizer to identify the most efficient strategy, called a "plan", to execute declarative SQL queries. Optimization is a mandatory exercise, since the cost difference between the best plan and a random choice can be orders of magnitude. The role of query optimization is especially critical for the decision-support queries featured in data warehousing and data mining applications. For a query on a given database and system configuration, the optimizer's plan choice is primarily a function of the selectivities of the base relations participating in the query. A pictorial enumeration of the execution plan choices of a database query optimizer over this relational selectivity space is called a "plan diagram". It has been shown recently that these diagrams are often remarkably complex and dense, with a large number of plans covering the space. An immediate research question is whether complex plan diagrams can be reduced to a significantly smaller number of plans without materially compromising query processing quality. The motivation is that reduced plan diagrams provide several benefits, including quantifying the redundancy in the plan search space, enhancing the applicability of parametric query optimization, identifying error-resistant and least-expected-cost plans, and minimizing the overhead of multi-plan approaches. In this thesis, we investigate the plan diagram reduction issue from theoretical, statistical and empirical perspectives. Our analysis shows that optimal plan diagram reduction, with respect to minimizing the number of plans in the reduced diagram, is an NP-hard problem, and remains so even for a storage-constrained variation. We then present CostGreedy, a greedy reduction algorithm with tight and optimal performance guarantees, whose complexity scales linearly with the number of plans in the diagram. Next, we construct an extremely fast estimator, AmmEst, for identifying the best tradeoff between the reduction in plan cardinality and the impact on query processing quality. Both CostGreedy and AmmEst have been incorporated into the publicly available Picasso optimizer visualization tool. Through extensive experimentation with benchmark query templates on industrial-strength database optimizers, we demonstrate that, with only a marginal increase in query processing costs, CostGreedy reduces even complex plan diagrams running to hundreds of plans to "anorexic" levels (a small absolute number of plans). While these results use a highly conservative upper-bounding of plan costs based on a cost-monotonicity constraint, when costing is done on "actuals" using remote plan costing, the reduction obtained is even greater - in fact, often resulting in a single plan in the reduced diagram. We also highlight how anorexic reduction provides enhanced resistance to selectivity estimation errors, a long-standing bane of good plan selection. In summary, this thesis demonstrates that complex plan diagrams can be efficiently converted into anorexic reduced diagrams, a result with useful implications for the design and use of next-generation database query optimizers.
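A minimal sketch of the greedy reduction idea: a plan's region is swallowed by another plan whenever the replacement raises no point's cost by more than a threshold. This is a simplified stand-in for CostGreedy on a toy selectivity grid, not the thesis's algorithm:

```python
# Sketch: greedy plan-diagram reduction on a toy selectivity grid.
# Each point has an assigned optimal plan and a cost under every plan;
# a plan may be "swallowed" by another if replacing it everywhere
# raises no point's cost by more than a factor lam.
def reduce_diagram(assign, costs, lam=1.1):
    """assign: point -> plan id; costs: (point, plan) -> cost."""
    plans = sorted(set(assign.values()),
                   key=lambda p: list(assign.values()).count(p))
    for victim in plans:                     # try rare plans first
        pts = [pt for pt, p in assign.items() if p == victim]
        for other in set(assign.values()) - {victim}:
            if all(costs[pt, other] <= lam * costs[pt, victim]
                   for pt in pts):
                for pt in pts:               # swallow victim's region
                    assign[pt] = other
                break
    return assign

assign = {0: "P1", 1: "P1", 2: "P2", 3: "P1"}
costs = {(0, "P1"): 10, (0, "P2"): 14, (1, "P1"): 11, (1, "P2"): 12,
         (2, "P1"): 13, (2, "P2"): 12.5, (3, "P1"): 9, (3, "P2"): 15}
print(reduce_diagram(assign, costs))  # P2's lone point folds into P1
```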
58

UMA LINGUAGEM ESPECÍFICA DE DOMÍNIO PARA CONSULTA EM CÓDIGO ORIENTADO A ASPECTOS / A DOMAIN SPECIFIC LANGUAGE FOR ASPECT-ORIENTED CODE QUERY

Faveri, Cristiano de 28 August 2013
Ensuring code quality is crucial in software development. Developers often resort to static analysis tools to help them understand pieces of code and to identify defects or refactoring opportunities during development. A critical issue when building such tools is their ability to obtain information about code: static analysis tools depend, in general, on an intermediate program representation to identify locations that meet the conditions described in their algorithms. This challenge grows when techniques for modularizing crosscutting concerns, such as aspect-oriented programming (AOP), are applied, since in AOP applications a piece of code can be systemically affected through both static and dynamic combinations. The main goal of this dissertation is the specification and implementation of AQL, a domain-specific language (DSL) designed to query aspect-oriented code bases. AQL is a declarative language, based on the Object Query Language (OQL), for querying the elements, relationships, and metrics of a program, supporting the construction of static analysis and code-search tools for aspect-oriented programs. The language was designed in two steps. First, we built AOPJungle, a framework responsible for extracting data from aspect-oriented programs; it computes metrics, inferences, and connections between program elements. In the second step, we built an AQL compiler as a reference implementation, adopting a source-to-source transformation in which an AQL query is transformed into HQL (Hibernate Query Language) statements before being executed. To evaluate the reference implementation, we developed a static analysis tool that receives a set of AQL queries and identifies potential refactoring opportunities in aspect-oriented programs.
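A minimal sketch of the source-to-source step: an AQL-style query rewritten into an HQL string before execution. The surface syntax and entity names are invented for illustration, not the dissertation's grammar:

```python
# Sketch: naive source-to-source step turning an AQL-like query into HQL.
# The AQL surface syntax and the Aspect/Advice entity model are invented.
import re

ENTITY_MAP = {"aspect": "Aspect", "advice": "Advice", "pointcut": "Pointcut"}

def aql_to_hql(query: str) -> str:
    """'find <entity> where <cond>' -> 'select e from <Entity> e where ...'"""
    m = re.match(r"find (\w+) where (.+)", query.strip(), re.IGNORECASE)
    if not m:
        raise ValueError("unsupported query form")
    entity, cond = m.groups()
    hql_entity = ENTITY_MAP[entity.lower()]
    # qualify bare attribute names with the alias 'e'
    cond = re.sub(r"\b([a-z]\w*)\s*(=|>|<)", r"e.\1 \2", cond)
    return f"select e from {hql_entity} e where {cond}"

print(aql_to_hql("find aspect where adviceCount > 5"))
# select e from Aspect e where e.adviceCount > 5
```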
59

GeoMiningVisualQL: uma linguagem de consulta visual para mineração de dados geográficos / A visual query language for geographic data mining

Pedrosa, Klebber de Araújo 10 August 2010
Several knowledge domains, such as remote sensing, transportation, telecommunications, and digital cartography, make use of large amounts of geographic data. Typically, these data are stored in geographic database management systems (SGBDGeo) and manipulated through Geographic Information Systems (GIS). However, these systems cannot extract new information, previously unknown to users, that may be embedded in the analysed database and that represents new and useful knowledge, for example for decision-making; this requires specific techniques of Knowledge Discovery in Databases (KDD). Moreover, geographic data have inherently visual characteristics that can often be associated with geometric or pictographic visual representations. In this context, some visual query languages exist for geographic data, but few of them address spatial data mining methods. This work therefore proposes an environment for data mining tasks over geographic domains, together with the formal specification of a visual query language to be used in this environment. Queries are formulated through pictorial representations of geographic features, operators, and the spatial relationships between the data. To this end, metaphorical abstractions over the metadata of the geographic environment are employed, along with a "current flow" approach in which the user focuses attention on particular stages of the mining process, making the queries easier to construct. The proposed environment thus aims to simplify geographic data mining queries, making them more user-friendly and providing more efficiency and speed compared with textual query scripts.