101

A teoria da classificação facetada na modelagem de dados em banco de dados computacionais / Faceted classification theory in data modeling for computational databases

Silva, Márcio Bezerra da 30 March 2011 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / The study presents Ranganathan's Theory of Faceted Classification (TFC) and the computational database (DB) as elements that enable the structuring of knowledge through the organization of concepts and the creation of relationships. The contributions of Information Science (IS) to the research are discussed, defending the importance of organization for effective information retrieval: the descriptive and thematic representation of information. The five categories, or facets, established by Ranganathan and known as PMEST are presented: Personality, Matter, Energy, Space, and Time. The contributions of Computer Science (CS) to this work are also discussed: databases and data modeling. Through applied, exploratory research with a qualitative approach, the objectives are to investigate the applicability of faceted classification to knowledge organization, focusing on information retrieval in databases; to investigate the applicability of faceted classification to data modeling in digital environments; to develop a database prototype based on the faceted classification system; and to validate the prototype with the system's users. The research proceeds in two stages: prototype development and data collection, the latter subdivided into bibliographic research, a questionnaire, functional (software) testing, and usability testing. The results show user satisfaction with the benefits provided by the prototype, named the Faceted System, for organizing and retrieving information at the institution investigated, as well as approval of the system's functionality and usability aspects. Studies like this demonstrate the importance of interdisciplinarity between Information Science (IS) and Computer Science (CS), which can yield numerous contributions both to their own fields and to other areas.
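The prototype itself is not reproduced here, but as a rough sketch of how the five PMEST facets can drive a searchable record structure, consider the following Python fragment; the record fields and sample data are invented for illustration and do not represent the Faceted System's actual schema:

```python
from dataclasses import dataclass

# Hypothetical PMEST-faceted record; the thesis's real database schema may differ.
@dataclass
class FacetedRecord:
    title: str
    personality: str  # core subject (Personality)
    matter: str       # material or property (Matter)
    energy: str       # process or activity (Energy)
    space: str        # geographic facet (Space)
    time: str         # chronological facet (Time)

def search(records, **facets):
    """Return records whose facet values match every given facet."""
    return [r for r in records
            if all(getattr(r, f) == v for f, v in facets.items())]

records = [
    FacetedRecord("Corn drying study", "agriculture", "corn", "drying", "Brazil", "2010"),
    FacetedRecord("Rice storage study", "agriculture", "rice", "storage", "India", "2011"),
]
print(search(records, energy="drying", space="Brazil"))
```

Because every record is indexed along all five facets, retrieval can combine any subset of them, which is the essential advantage faceted schemes bring to database querying.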
102

itSIMPLE: ambiente integrado de modelagem e análise de domínios de planejamento automático. / itSIMPLE: integrated environment for modeling and analysis of automated planning domains.

Tiago Stegun Vaquero 14 March 2007 (has links)
The great advances in Artificial Intelligence planning have given Requirements Engineering and Knowledge Engineering a prominent role among the disciplines that contribute to Engineering Design. The specification, modeling, and analysis of automated planning domains have become fundamental tasks for understanding and classifying planning domains, and they also guide the application of problem-solving techniques. This work presents a proposal for an integrated environment for modeling and analyzing automated planning domains that takes the project life cycle into account, embodied in a graphical modeling tool that uses several representations: UML to model and statically analyze domains; XML to store, integrate, and export information to other languages (e.g., PDDL); Petri Nets for dynamic analysis; and PDDL for testing models with planners.
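As a hedged illustration of the kind of model-to-PDDL export step such an environment automates (the actual itSIMPLE translation works from full UML/XML models and is far richer), here is a toy Python sketch that renders a minimal STRIPS domain as PDDL text:

```python
# Hypothetical sketch of a model-to-PDDL export; domain and action names are invented.
def to_pddl(domain, actions):
    """Render a tiny STRIPS domain description as PDDL text."""
    lines = [f"(define (domain {domain})", "  (:requirements :strips)"]
    for name, spec in actions.items():
        lines.append(f"  (:action {name}")
        lines.append(f"    :parameters ({' '.join(spec['params'])})")
        lines.append(f"    :precondition {spec['pre']}")
        lines.append(f"    :effect {spec['eff']})")
    lines.append(")")
    return "\n".join(lines)

move = {
    "params": ["?from", "?to"],
    "pre": "(and (at ?from) (connected ?from ?to))",
    "eff": "(and (at ?to) (not (at ?from)))",
}
print(to_pddl("navigation", {"move": move}))
```

Keeping the model in a neutral intermediate form (as itSIMPLE does with XML) and generating PDDL at the end is what lets the same domain feed planners, Petri Net analysis, and UML views.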
103

Data Model Proposal to Integrate GIS with PLM for DfS / Proposition de modèle de données pour intégrer les SIG avec PLM pour DfS

Vadoudi, Kiyan 19 June 2017 (has links)
There are different approaches to implementing sustainability; Design for Sustainability (DfS) gives more accurate results by considering both global and regional scales. The integration of Life Cycle Assessment (LCA) into Product Lifecycle Management (PLM) is one example of tool integration in support of sustainability. Within the LCA framework, the Life Cycle Inventory (LCI) is the quantified and classified list of input and output flows of the LCA model, a model of the product system that links the technological system to the ecosphere (the environmental system). Since each region has a unique environmental system, the design characteristics and specifications of the technological system should be adapted to these differences. Implementing this approach requires geographical information about the environmental systems involved, which is a new strategy in DfS. We therefore tested the value of integrating Geographical Information Systems (GIS) with PLM to support geographical considerations during product development activities. The main research question of this work is how to realize this PLM-GIS integration for DfS. A literature review of existing data models for products, the environment, and geography, and of their combination, is key to establishing the link among them; illustrative case studies are constructed to show the impact of geographic information on product definition.
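A minimal sketch of the kind of data link the thesis argues for, with hypothetical class and field names rather than the author's proposed model: each life cycle inventory flow carries a geographic site, so that regionalized assessment becomes possible downstream.

```python
from dataclasses import dataclass

# Illustrative-only data model; not the thesis's actual schema.
@dataclass
class Site:
    name: str
    lat: float          # geographic context supplied by the GIS side
    lon: float

@dataclass
class ElementaryFlow:   # one LCI input/output flow
    substance: str
    amount_kg: float
    site: Site          # where the exchange with the ecosphere occurs

@dataclass
class Product:          # the PLM side: a product aggregating its flows
    name: str
    flows: list

plant = Site("assembly plant", 48.30, 4.08)
widget = Product("widget", [ElementaryFlow("CO2", 12.5, plant)])
# A regionalized impact assessment step could now weight each flow
# by the environmental sensitivity of its site.
```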
104

Graphdatenbanken für die textorientierten e-Humanities / Graph databases for the text-oriented e-Humanities

Efer, Thomas 08 February 2017 (has links)
In light of recent massive digitization efforts, most humanities disciplines are currently undergoing a fundamental transition towards the widespread application of digital methods. Between the traditional scholarly fields and computer science lies a methodological and communicational gap that the so-called "e-Humanities" aim to bridge systematically through interdisciplinary project work. With text being the most common object of study in this field, many approaches from the area of Text Mining have been adapted to problems of the disciplines. While common workflows and best practices are slowly emerging, it is evident that generic solutions are not an ultimate fit for many specific application scenarios. To create custom-tailored digital tools, one of the central issues is to digitally represent the text, as well as its many contexts and related objects of interest, in an adequate manner. This thesis introduces a novel form of text representation based on property graph databases, an emerging technology for storing and querying highly interconnected data sets. Based on this modeling paradigm, a new text research system called "Kadmos" is introduced. It provides user-definable asynchronous web services and is built to allow flexible extension of the data model and system functionality within a prototype-driven development process. With Kadmos it is possible to scale up to text collections containing hundreds of millions of words on a single machine, and to far larger collections on a cluster. It is shown how various Text Mining methods can be implemented with, and adapted for, the graph representation at a very fine level of granularity, allowing the creation of fitting digital tools for different aspects of scholarly work. Extended usage scenarios demonstrate how graph-based modeling of domain data can be beneficial even in research that goes beyond purely text-based study.
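The following sketch illustrates the general idea of a property-graph text representation, using the networkx library as a stand-in; Kadmos's actual schema, storage backend, and query layer are not reproduced here.

```python
import networkx as nx

# Tokens become nodes with properties; reading order becomes typed edges.
G = nx.DiGraph()
tokens = "the quick brown fox".split()
for i, tok in enumerate(tokens):
    G.add_node(i, surface=tok)                  # token node with a property
    if i > 0:
        G.add_edge(i - 1, i, type="NEXT")       # reading-order edge
G.add_node("doc1", title="example text")        # document node
G.add_edge("doc1", 0, type="FIRST_TOKEN")       # link document to its first token

# Traversal-style query: reconstruct the text by following NEXT edges.
node, out = 0, []
while True:
    out.append(G.nodes[node]["surface"])
    nxt = [v for _, v, d in G.out_edges(node, data=True) if d["type"] == "NEXT"]
    if not nxt:
        break
    node = nxt[0]
print(" ".join(out))
```

Because annotations (lemmas, named entities, citations) can be added as further nodes and typed edges without schema migrations, this representation supports the fine-grained, extensible access the abstract describes.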
105

Residential Energy Report Card for University Students for Driving Behavioral Energy Reduction and for Measuring Behavior Impact on Consumption

Bhattarai, Saroj 31 May 2018 (has links)
No description available.
106

A FRAMEWORK FOR IMPROVED DATA FLOW AND INTEROPERABILITY THROUGH DATA STRUCTURES, AGRICULTURAL SYSTEM MODELS, AND DECISION SUPPORT TOOLS

Samuel A Noel (13171302) 28 July 2022 (has links)
The agricultural data landscape is largely dysfunctional because of the industry's high variability in scale, scope, technological adoption, and relationships. Integrated data and models of agricultural sub-systems could be used to advance decision-making, but interoperability challenges prevent successful innovation. In this work, temporal and geospatial indexing strategies and aggregation were explored toward the development of functional data structures for soils, weather, solar, and machinery-collected yield data that enhance data context, scalability, and sharability.

The data structures were then employed in the creation of decision support tools, including web-based applications and visualizations. One such tool leveraged a geospatial indexing technique called geohashing to visualize dense yield data and measure the outcomes of on-farm yield trials. Additionally, the proposed scalable, open-standard data structures were used to drive a soil water balance model that can provide insights into soil moisture conditions critical to farm planning, logistics, and irrigation. The model integrates SSURGO soil data, weather data from the Applied Climate Information System, and solar data from the National Solar Radiation Database in order to compute a soil water balance, returning values including runoff, evaporation, and soil moisture in an automated, continuous, and incremental manner.

The approach leveraged the Open Ag Data Alliance framework to demonstrate how the data structures can be delivered through sharable Representational State Transfer Application Programming Interfaces and to run the model in a service-oriented manner such that it can be operated continuously and incrementally, which is essential for driving real-time decision support tools. The implementations rely heavily on JavaScript Object Notation data schemas leveraged by JavaScript/TypeScript front-end web applications and back-end services delivered through Docker containers. The approach embraces modular coding concepts, and several levels of open-source utility packages were published for interacting with data sources and supporting the service-based operations.

By making use of the strategies laid out by this framework, industry and research can enhance data-based decision making through models and tools. Developers and researchers will be better equipped to take on the data wrangling tasks involved in retrieving and parsing unfamiliar datasets, moving them throughout information technology systems, and understanding those datasets down to a semantic level.
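As an illustration of the continuous, incremental operation described above, here is a minimal daily "bucket" soil water balance step in Python; the parameters are invented, and the thesis's SSURGO/ACIS/NSRDB-driven model is far more detailed.

```python
# Toy single-layer bucket model; real models partition evaporation, drainage,
# and runoff with soil- and weather-specific physics.
def water_balance_step(moisture_mm, precip_mm, et_mm, capacity_mm):
    """Advance soil moisture one day; return (new_moisture, runoff)."""
    moisture = moisture_mm + precip_mm - et_mm
    runoff = max(0.0, moisture - capacity_mm)   # excess beyond capacity runs off
    moisture = min(max(moisture, 0.0), capacity_mm)
    return moisture, runoff

m = 120.0  # mm of stored water (hypothetical initial condition)
for precip, et in [(10.0, 4.5), (0.0, 5.2), (35.0, 3.1)]:
    m, r = water_balance_step(m, precip, et, capacity_mm=150.0)
    print(f"moisture={m:.1f} mm  runoff={r:.1f} mm")
```

Because each step depends only on the previous state and the day's inputs, the model can run incrementally as new weather observations stream in, which is exactly the service-oriented mode of operation the framework targets.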
107

Modelagem computacional de dados e controle inteligente no espaço de estado / State space computational data modelling and intelligent control

Del Real Tamariz, Annabell 15 July 2005 (has links)
Advisor: Celso Pascoli Bottura / Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: This study presents contributions to state-space computational modeling of multivariable data, with both discrete time-invariant and time-varying linear systems. The MOESP_AOKI algorithm is proposed for the deterministic-stochastic modeling of noisy data. Multilayer recurrent neural network approaches are presented for solving the discrete-time algebraic Riccati equation, as well as the associated discrete-time algebraic Riccati inequality, via linear matrix inequalities. A gain-scheduling adaptive control scheme based on neural networks is designed to tune optimal controllers online for multivariable discrete linear time-varying (LTV) systems identified by the MOESP_VAR algorithm, which is also proposed in this thesis. In synthesis, an Intelligent Linear Parameter Varying (ILPV) control approach for multivariable discrete LTV systems is proposed: an intelligent LPV controller for data computationally modeled via the MOESP_VAR algorithm is structured, implemented, and tested with good results. / Doctorate in Electrical Engineering (Automation)
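For reference, the discrete-time algebraic Riccati equation targeted by the proposed recurrent-network solvers has the standard textbook form (the thesis's exact notation may differ):

```latex
P = A^{\top} P A - A^{\top} P B \left(R + B^{\top} P B\right)^{-1} B^{\top} P A + Q
```

Here $A$ and $B$ are the state and input matrices of the discrete-time system, $Q \succeq 0$ and $R \succ 0$ are weighting matrices, and the stabilizing solution $P$ yields the optimal state-feedback gain of the associated LQR problem.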
108

Modeling and optimization of least-cost corridors

Seegmiller, Lindsi January 2021 (has links)
Given a grid of cells, each having a value indicating its cost per unit area, a variant of the least-cost path problem is to find a corridor of a specified width connecting two termini such that its cost-weighted area is minimized. A computationally efficient method exists for finding such corridors, but as with conventional raster-based least-cost paths, their incremental orientations are limited to a fixed number of (typically eight orthogonal and diagonal) directions; therefore, regardless of grid resolution, they tend to deviate from those conceivable on the Euclidean plane. Additionally, these methods are limited to problems on two-dimensional grids and ignore the ever-increasing availability and necessity of three-dimensional raster-based geographic data. This thesis addresses these problems by designing and testing least-cost corridor algorithms. First, a method is proposed for solving the two-dimensional raster-based least-cost corridor problem with reduced distortion, by adapting a distortion-reduction technique originally designed for least-cost paths and applying it to an efficient but distortion-prone least-cost corridor algorithm. The proposed method is, in theory, guaranteed to generate solutions no less accurate than the existing one in polynomial time and, in practice, is expected to generate more accurate solutions, as demonstrated experimentally using synthetic and real-world data. A corridor is then modeled on a three-dimensional grid of cost-weighted cubic cells, or voxels, as a sequence of sets of voxels, called 'neighborhoods,' arranged in a 26-hedral form; a heuristic method is designed to find a sequence of such neighborhoods that sweeps the minimum cost-weighted volume, and its performance is tested with computer-generated random data. Results show that the method finds a low-cost, if not least-cost, corridor of a specified width in a three-dimensional cost grid with reasonable efficiency: its complexity is O(n²), where n is the number of voxels in the input cost grid, independent of corridor width. A major drawback is that the corridor found may self-intersect, which is often undesirable and makes the estimation of its cost-weighted volume inaccurate.
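For context, the conventional raster least-cost path method whose fixed eight directions cause the distortion discussed above can be sketched as Dijkstra's algorithm on an 8-connected cost grid. This is the classic baseline, not the thesis's corridor algorithm; the cost model (average of the two cell costs times step length) is one common convention.

```python
import heapq

def least_cost(grid, start, goal):
    """Least-cost path cost between two cells on an 8-connected cost grid."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist[(r, c)]:
            continue  # stale queue entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                    step = (dr * dr + dc * dc) ** 0.5           # 1 or sqrt(2)
                    nd = d + step * (grid[r][c] + grid[nr][nc]) / 2
                    if nd < dist.get((nr, nc), float("inf")):
                        dist[(nr, nc)] = nd
                        heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

grid = [[1, 1, 5], [1, 9, 5], [1, 1, 1]]
print(least_cost(grid, (0, 0), (2, 2)))
```

Because moves are restricted to the eight fixed directions, the computed route can overestimate the cost of the true Euclidean-plane optimum; the distortion-reduction technique the thesis adapts mitigates precisely this artifact.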
109

IMPLEMENTING NETCONF AND YANG ON CUSTOM EMBEDDED SYSTEMS

Georges, Krister, Jahnstedt, Per January 2023 (has links)
Simple Network Management Protocol (SNMP) has been the traditional approach to configuring and monitoring network devices, but its limitations in security and automation have driven the exploration of alternative solutions. The Network Configuration Protocol (NETCONF) and the Yet Another Next Generation (YANG) data modeling language significantly improve security and automation capabilities. This thesis investigates the feasibility of implementing a NETCONF server on the Anybus CompactCom (ABCC) Industrial Internet of Things (IIoT) Security module, an embedded device with limited processing power and memory, running a custom operating system and using open-source projects with MbedTLS as the cryptographic primitive library. The project assesses implementing a YANG model to describe the ABCC's configurable interface, connecting with a NETCONF client to exchange capabilities, monitoring specific attributes or interfaces on the device, and invoking remote procedure call (RPC) commands to configure the ABCC's settings. The goal is to provide a proof of concept and contribute to the growing industry adoption of NETCONF and YANG, particularly for the IIoT platform of Hardware Meets Software (HMS).
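A minimal sketch of the client side of such a capability exchange, using the ncclient Python library; the host address and credentials are placeholders, and the embedded server must advertise the corresponding capabilities for this to work.

```python
from ncclient import manager

# Hypothetical NETCONF session against a device such as the ABCC module.
with manager.connect(
    host="192.0.2.10", port=830,          # placeholder management address
    username="admin", password="admin",   # placeholder credentials
    hostkey_verify=False,
) as m:
    # Capabilities are exchanged during session setup; list what the server offers.
    for cap in m.server_capabilities:
        print(cap)
    # Retrieve the running configuration; its contents are defined by the
    # device's YANG model.
    print(m.get_config(source="running"))
```

NETCONF runs over SSH (port 830 by default), which is where the MbedTLS-backed cryptographic stack on the embedded side comes into play.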
110

A Framework for Interoperability on the United States Electric Grid Infrastructure

Laval, Stuart 01 January 2015 (has links)
Historically, the United States (US) electric grid has been a stable, one-way power delivery infrastructure that supplies centrally generated electricity to predictable consumer demand. The US electric grid, however, is now undergoing a major transformation from a simple, static system into a complex, dynamic network that is beginning to interconnect intermittent distributed energy resources (DERs), portable electric vehicles (EVs), and load-altering home automation devices, which create bidirectional power flow and stochastic load behavior. For this grid of the future to embrace the high penetration of these disruptive, fast-responding digital technologies without compromising safety, reliability, and affordability, plug-and-play interoperability within the field area network must be enabled between operational technology (OT), information technology (IT), and telecommunication assets, so that they integrate seamlessly and securely into the electric utility's operations and planning systems in a modular, flexible, and scalable fashion. This research proposes an approach to simplifying the translation and contextualization of operational data on the electric grid without routing it to the utility datacenter for a control decision. The methodology integrates modern software technology from other industries with utility industry-standard semantic models to overcome information siloes and enable interoperability. Leveraging industrial engineering tools, a framework is also developed to devise a reference architecture and a use-case application process, which is applied and validated at a US electric utility.
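As a loose illustration of the edge-side translation and contextualization idea, here is a toy normalization of a raw device reading into a common semantic form; the field names and mapping are hypothetical and do not reproduce an actual utility standard model.

```python
# Raw reading as a field device might emit it (invented format).
RAW = {"dev": "xfmr-17", "v1": 7216.4, "ts": 1438387200}

# Hypothetical mapping from vendor field names to a shared semantic vocabulary.
SEMANTIC_MAP = {"v1": ("Voltage", "V", "phase A")}

def contextualize(raw):
    """Translate a raw reading into self-describing semantic records."""
    readings = []
    for key, (measurement, unit, phase) in SEMANTIC_MAP.items():
        if key in raw:
            readings.append({
                "equipment": raw["dev"],
                "measurement": measurement,
                "unit": unit,
                "phase": phase,
                "value": raw[key],
                "timestamp": raw["ts"],
            })
    return readings

print(contextualize(RAW))
```

Performing this translation at the edge is what lets a local controller act on the data without first routing it to the utility datacenter, as the abstract proposes.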
