71 |
A Semantic Web-Based Digital Library Infrastructure to Facilitate Computational Epidemiology / Hasan, S. M. Shamimul. 15 September 2017 (has links)
Computational epidemiology generates and utilizes massive amounts of data. There are two primary categories of datasets: reported and synthetic. Reported data include epidemic data published by organizations (e.g., WHO, CDC, and national ministries and departments of health) during and following actual outbreaks, while synthetic datasets comprise spatially explicit synthetic populations, labeled social contact networks, multi-cell statistical experiments, and output data generated from the execution of computer simulation experiments. The discipline of computational epidemiology encounters numerous challenges because of the size, volume, and dynamic nature of both types of datasets.
In this dissertation, we present semantic web-based schemas to organize diverse reported and synthetic computational epidemiology datasets. These schemas have three layers: conceptual, logical, and physical. The conceptual layer provides data abstraction by exposing common entities and properties to the end user. The logical layer captures data fragmentation and the linking aspects of the datasets. The physical layer covers storage aspects of the datasets. Mapping files can be created from the schemas, which are flexible and can grow.
The schemas presented include data linking approaches that can connect large-scale and widely varying epidemic datasets. This linked data leads to an integrated knowledge base, enabling an epidemiologist to ask complex queries that employ multiple datasets. We demonstrate the utility of our knowledge base by developing a query bank, which represents typical analyses carried out by an epidemiologist during the course of planning for or responding to an epidemic. By running queries with different data mapping techniques, we demonstrate the performance of various tools. The empirical results show that leveraging semantic web technology is an effective strategy for reasoning over multiple datasets simultaneously, developing network queries pertinent to an epidemic analysis, and conducting realistic studies undertaken in an epidemic investigation. Query performance varies with the choice of hardware, underlying database, and resource description framework (RDF) engine. We provide application programming interfaces (APIs) on top of our linked datasets, which an epidemiologist can use for information retrieval without detailed knowledge of the underlying datasets. The proposed semantic web-based digital library infrastructure can be highly beneficial to epidemiologists as they work to comprehend disease propagation for timely outbreak detection and efficient disease control activities. / PHD / Computational epidemiology generates and utilizes massive amounts of data, and the field faces numerous challenges because of the volume and dynamic nature of the datasets utilized. There are two primary categories of datasets. The first contains epidemic datasets tracking actual outbreaks of disease, which are reported by governments, private companies, and associated parties. The second category is synthetic data created through computer simulation. We present semantic web-based schemas to organize diverse reported and synthetic computational epidemiology datasets. The schemas are flexible in use and scale, and utilize data linking approaches that can connect large-scale and widely varying epidemic datasets. This linked data leads to an integrated knowledge base, enabling an epidemiologist to ask complex queries that employ multiple datasets. This ability helps epidemiologists better understand disease propagation, for efficient outbreak detection and disease control activities.
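To give a flavor of the kind of cross-dataset query such a linked knowledge base enables, the following minimal Python sketch uses rdflib. The ontology IRI, predicate names, and numbers are hypothetical placeholders, not the dissertation's actual schema; the point is only that one SPARQL query can join a reported outbreak dataset with a synthetic population dataset once both are expressed as RDF.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical namespace standing in for an epidemic ontology.
EPI = Namespace("http://example.org/epi#")

g = Graph()

# Reported dataset: case counts published for a region.
g.add((EPI.region1, RDF.type, EPI.Region))
g.add((EPI.region1, EPI.reportedCases, Literal(1250)))

# Synthetic dataset: a simulated population linked to the same region.
g.add((EPI.pop1, RDF.type, EPI.SyntheticPopulation))
g.add((EPI.pop1, EPI.locatedIn, EPI.region1))
g.add((EPI.pop1, EPI.populationSize, Literal(98000)))

# One query spanning both datasets: crude attack rate per region.
query = """
PREFIX epi: <http://example.org/epi#>
SELECT ?region ?cases ?size WHERE {
    ?region a epi:Region ;
            epi:reportedCases ?cases .
    ?pop a epi:SyntheticPopulation ;
         epi:locatedIn ?region ;
         epi:populationSize ?size .
}
"""
for region, cases, size in g.query(query):
    print(region, float(cases) / float(size))
```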
|
72 |
Consistently Updating XML Documents Using Incremental Checks With XQueries / Kane, Bintou. 05 May 2003 (has links)
When updating a valid XML document or schema, an efficient yet lightweight mechanism is needed to determine whether the update would invalidate the document. Toward this goal, we have developed a framework called SAXE. First, we analyzed the constraints expressed in the XML Schema specification to establish constraint rules that must be observed when a schema, or an XML document conforming to a given XML schema, is altered. We then classified the rules based on their relevancy for a given update case; that is, we show the minimal set of rules that must be checked to guarantee the safety of each update primitive. Next, we illustrate that this set of incremental constraint checks can be specified using generic XQuery expressions composed of three types of components. Safe updates of XML data involve the following components: (1) XML schema meta-queries to retrieve any constraint knowledge potentially relevant to the given update from the schema or the XML data being altered, (2) retrieval of specific characteristics from the to-be-modified XML, and (3) an analysis of the information collected about the XML schema and the affected XML document to determine the validity of the update. For safe schema alteration, the components are: (1) XML schema meta-queries to retrieve relevant information from the schema, (2) analysis and use of the retrieved information to update the schema, and (3) propagation of the changes to the XML data when necessary. As a proof of concept, we have established a library of these generic XQuery constraint checks for the type-related XML constraints. The key idea of SAXE is to rewrite each XQuery update into a safe XML query by extending it with appropriate constraint-check subqueries. This enhanced XML update query can then safely be executed by any existing XQuery engine that supports updates, thus automatically turning any update engine into an incremental constraint-check engine. To verify the feasibility of our approach, we have implemented a prototype system, SAXE, that generates safe XQuery updates. Our experimental evaluation assesses the overhead of rewriting as well as the relative performance of our loosely coupled incremental constraint-check approach against the more traditional approach of first changing the document and then revalidating it.
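The rewrite pattern can be illustrated outside XQuery. Below is a minimal Python sketch (using lxml, with a made-up schema and update) of the same three-step pattern: a meta-query against the schema fetches the relevant constraint, a check against the target document decides validity, and the update runs only if the check passes. This is an analogy in Python, not SAXE's actual XQuery rewriting.

```python
from lxml import etree

XS = "{http://www.w3.org/2001/XMLSchema}"

# Made-up schema: an <order> may contain at most three <item> children.
schema_doc = etree.XML("""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="order">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="item" maxOccurs="3"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
""")

def safe_insert(target, schema_doc, tag):
    """Apply 'insert <tag> into target' only if the relevant constraint holds."""
    # (1) Meta-query: pull the occurrence constraint for `tag` from the schema.
    decl = schema_doc.find(f".//{XS}element[@name='{tag}']")
    if decl is None:
        raise ValueError(f"<{tag}> is not declared in the schema")
    max_occurs = decl.get("maxOccurs", "1")
    # (2) Retrieve the relevant characteristic of the to-be-modified document.
    current = len(target.findall(tag))
    # (3) Analyze both before touching the data.
    if max_occurs != "unbounded" and current >= int(max_occurs):
        raise ValueError(f"insert would violate maxOccurs={max_occurs}")
    etree.SubElement(target, tag)

order = etree.XML("<order><item/><item/></order>")
safe_insert(order, schema_doc, "item")    # succeeds: 2 -> 3 items
# safe_insert(order, schema_doc, "item")  # would raise: already at maxOccurs
```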
|
73 |
Common Information Model/eXtensible Markup Language (CIM/XML) na troca de informações entre centros de controle de energia elétrica / Common Information Model/eXtensible Markup Language (CIM/XML) to exchange data among Power Utilities' Control Centers / Carlos Augusto Siqueira da Cunha Júnior. 14 July 2005 (has links)
This dissertation analyzes the use of the data model standardized by IEC 61970 (CIM) as a tool for exchanging equipment and operational information among electric power utilities whose computing systems come from different vendors. The purpose of this standard is to create an XML-based mechanism for information exchange, called CIM/XML, a format used specifically by power utilities' control centers for data exchange. The data models of the IEC 61970 standard are presented, along with an evaluation of CIM/XML as a tool for data interoperability among utilities whose databases differ in modeling and implementation. A merit of this model, besides its use of an open technology (XML) available on any type of computer, is that it can store and transfer not only equipment records but also network topology, load curves, generation schedules, scheduled equipment outages, SCADA system measurements, status indications, and alarms. It also allows the storage of simulation results, such as power-flow bus voltages and line loads. Additionally, detailed information is provided about: the implementation of the CIM object-oriented logical model in a relational database; the equipment records and topology of an overhead and underground subtransmission line section; the information (exported and imported in CIM/XML format) inserted into the database; and the generation of the CIM/XML document.
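Because CIM/XML is serialized as RDF/XML, generic RDF tooling can read it. The following minimal Python sketch uses rdflib; the fragment, the namespace version, and the identifiers are illustrative placeholders rather than material from the dissertation.

```python
from rdflib import Graph, Namespace

# Illustrative CIM/XML fragment; class and attribute names follow the
# CIM convention, but the namespace version and data are examples only.
CIM_XML = """<rdf:RDF
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:cim="http://iec.ch/TC57/2001/CIM-schema-cim10#">
  <cim:Breaker rdf:ID="BRK-42">
    <cim:IdentifiedObject.name>Feeder 7 breaker</cim:IdentifiedObject.name>
  </cim:Breaker>
</rdf:RDF>
"""

CIM = Namespace("http://iec.ch/TC57/2001/CIM-schema-cim10#")

g = Graph()
g.parse(data=CIM_XML, format="xml", publicID="http://example.org/grid")

# List each piece of equipment and its name, as an importer might do
# before loading the records into relational tables.
for subject, name in g.subject_objects(CIM["IdentifiedObject.name"]):
    print(subject, name)
```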
|
75 |
Improving Table Scans for Trie Indexed Databases / Toney, Ethan. 01 January 2018 (has links)
We consider a class of problems characterized by the need for string-based identifiers that reflect the ontology of the application domain. We present rules for string-based identifier schemas that facilitate fast filtering in databases used for this class of problems. We provide a runtime analysis of our schema and experimentally compare it with another solution. We also discuss the performance of our solution in a game engine. The string-based identifier schema can be used in additional scenarios such as cloud computing. An identifier schema adds metadata about an element, so the solution costs additional memory; but as long as queries operate only on the embedded metadata, there is no need to load the element from disk, which leads to large performance gains.
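As a rough illustration of the idea (with a made-up identifier scheme, not the thesis's actual rules): when identifiers embed domain metadata as ordered segments, a scan can filter on the identifier alone, without fetching the stored elements. A minimal Python sketch using sorted keys and binary search, which is the access pattern a trie index effectively provides:

```python
import bisect

# Hypothetical identifiers whose segments encode the domain ontology
# (zone / kind / species / instance), mirroring a game-engine use case.
ids = sorted([
    "world.zone1.item.sword.0001",
    "world.zone1.npc.goblin.0001",
    "world.zone1.npc.goblin.0002",
    "world.zone2.npc.orc.0001",
])

def prefix_scan(ids, prefix):
    """Return all identifiers sharing `prefix`, touching only the index."""
    lo = bisect.bisect_left(ids, prefix)
    hi = bisect.bisect_right(ids, prefix + "\xff")  # "\xff" sorts after ASCII
    return ids[lo:hi]

# Every goblin in zone 1, answered from identifier metadata alone.
print(prefix_scan(ids, "world.zone1.npc.goblin"))
```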
|
76 |
An Extended Iterative Location Management Schema for Load-Balancing in a Cellular Network / Subramanian, Shanthi Sridhar. 12 May 2005 (has links)
Location management is the process of tracking the position of a mobile terminal as it moves among location areas within the network, allowing the network to determine the mobile user's location for call delivery. The location management schema in a public land mobile network (IS-41 and GSM) is based on a centralized two-tier database architecture. The root level is the Home Location Register (HLR) and the second level is the Visitor Location Register (VLR). The HLR permanently stores every mobile user's location information and the types of subscribed services in the user's profile database. The VLR stores location information whenever a user registered in the HLR moves into its associated location area. By contacting the HLR, the VLR authenticates and updates a mobile user's current position when the mobile terminal moves from one location area to another. The HLR then records the mobile terminal's new location and removes the mobile terminal from its previous VLR. There can be multiple VLRs under each HLR in a network. In the current location management schema, all information requests, queries, and acknowledgements must go through the HLR, resulting in excessive load there; this load grows as the number of mobile terminals in the network increases. The heavy traffic at the root (HLR) may cause congestion and degrade bandwidth at the root, and hence becomes a major bottleneck for the entire network. To solve this congestion/bottleneck problem, a modified iterative protocol with a VLR cache was introduced, in which the VLRs handle all deregistration, registration, and acknowledgement messages, while the HLR only updates the mobile terminal's location information in its database. This reduced the excess load/traffic at the HLR, improving the network's performance. The modified protocol was tested with different cache replacement policies, such as First-In First-Out (FIFO), Random, and Least Frequently Visited (LFV), under uniform traffic with random mobile terminal movement. In this thesis, we extend the previous work on the modified iterative protocol by 1) enlarging the network topology, to analyze the impact of network size on performance, and 2) changing the mobile terminal traffic pattern from uniform traffic with random movement to non-uniform traffic with unbalanced-probability movement. With these changes, we analyzed the modified protocol's performance with the different cache replacement policies (FIFO, LFV, and Random) under both traffic patterns.
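To make the caching idea concrete, here is a minimal Python sketch of a FIFO-replacement VLR cache (the names and capacity are illustrative; the thesis's simulation is far more detailed). A cache hit lets the VLR resolve a terminal locally instead of querying the HLR, which is exactly where the load reduction comes from:

```python
from collections import OrderedDict

class VLRCache:
    """FIFO-replacement cache of mobile-terminal location records at a VLR."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.records = OrderedDict()  # terminal id -> location area

    def lookup(self, terminal_id):
        # Hit: answer locally, avoiding a query to the HLR.
        return self.records.get(terminal_id)

    def register(self, terminal_id, location_area):
        if terminal_id not in self.records and len(self.records) >= self.capacity:
            self.records.popitem(last=False)  # evict the oldest entry (FIFO)
        self.records[terminal_id] = location_area

cache = VLRCache(capacity=2)
cache.register("MT-1", "LA-7")
cache.register("MT-2", "LA-9")
cache.register("MT-3", "LA-4")  # evicts MT-1 under FIFO
print(cache.lookup("MT-1"))     # None -> the VLR would fall back to the HLR
```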
|
77 |
Exploring Novice Teachers' Cognitive Processes Using Digital Video Technology: A Qualitative Case Study / Sun-Ongerth, Yuelu. 20 December 2012 (has links)
This dissertation describes a qualitative case study that investigated novice teachers’ video-aided reflection on their own teaching. To date, most studies that have investigated novice teachers’ video-aided reflective practice have focused on examining novice teachers’ levels of reflective writing rather than the cognitive processes involved during their reflection. Few studies have probed how novice teachers schematize and theorize their newly acquired and/or existing knowledge during video-aided reflection.
The purpose of this study was to explore novice teachers’ cognitive processes, particularly video-aided schematization and theorization (VAST), which is a set of cognitive processes that help novice teachers construct, restructure and reconstruct their professional knowledge and pedagogical thinking while reflecting on videos of their own teaching. The researcher measured novice teachers’ VAST by examining their schema construction and automation in terms of schema accretion, schema tuning and schema restructuring. The study attempted to answer the following questions: a) What is the focus of novice teachers’ video-aided reflection? and b) How do novice teachers connect the focus of their reflections to their prior knowledge and future actions?
The findings indicate that video-aided reflection could help novice teachers (1) notice what needed improvement in their teaching practice, (2) realize how various elements of teaching were interrelated, and (3) construct, restructure, or reconstruct their professional knowledge; in other words, develop their schemata about teaching and learning through VAST. With more developed and mature schemata, novice teachers may be better able to understand the various elements involved in teaching and learning and to handle the situations they encounter in their teaching. This may be because people's schemata provide the link between concepts and patterns of what they do (Rumelhart, 1980).
This research has provided a new way to look at novice teachers’ video-aided reflection: how the cognitive processes they experience during their reflection can help them develop the knowledge about teaching and learning, and how their cognitive development can help them grow toward becoming teaching experts. The research findings add to the knowledge base about the use of video technology in teachers’ self-reflection and professional development in teacher education.
|
78 |
Duomenų loginių struktūrų išskyrimas funkcinių reikalavimų specifikacijos pagrindu / Data logical structure segregation on the ground of a functional requirements specification / Jučiūtė, Laura. 25 May 2006 (has links)
This master's thesis shows the place of data modelling in the information systems life cycle and the importance of data model quality for effective IS operation. Referring to the results of the literature analysis, the reasons why the process of data modelling must be automated are introduced, and current automation solutions are described. As the main contribution of this work, an original data modelling method is described, and a software prototype that automates one step of that method, schema integration, is introduced.
|
79 |
Ar prisimename tai, ką matome? Regimųjų vaizdų ribų išplėtimo tyrimas / Do we remember what we see? Research of boundary extension of visual images / Jankūnaitė, Jurgita. 22 July 2014 (has links)
The goal of this study was to investigate in which cases boundary extension occurs when pictures of different content are redrawn from memory.
The method used in this study was developed on the basis of the meta-analysis "Boundary extension: Findings and theories" by Hubbard and co-authors (2010). It consists of 12 stimuli (10x15 cm), each showing a photographic image or a sketch of a drawing. The stimulus material contains images of different content: a complete object, an object with cropped edges, emotionally neutral, positive, and negative objects, and a moving object.
120 respondents aged 14 to 45 (mean age 25.6) participated in the study. Subjects were divided into three groups: 1) adolescents aged 14-19 (inclusive); 2) young adults aged 20-30 (inclusive); 3) older adults aged 31-45 (inclusive).
The first hypothesis, that boundary extension is more frequent when pictures of objects with cropped edges are redrawn from memory than pictures of complete objects, was confirmed.
The second hypothesis, that boundary extension is more frequent when pictures of emotionally neutral objects are redrawn from memory than pictures of emotionally positive or emotionally intense objects, was not confirmed. Comparing the pictures by their emotional content showed that boundary extension occurs more often when the pictures redrawn from memory contain emotionally positive or emotionally intense objects... [to full text]
|
80 |
Räumliche Repräsentation, Komplexität und Deduktion: eine kognitive Komplexitätstheorie [Spatial representation, complexity, and deduction: a cognitive complexity theory] / Ragni, Marco. January 2008 (has links)
Also published as: doctoral dissertation, Universität Freiburg (Breisgau), 2008.
|