231

Constructing a Clinical Research Data Management System

Quintero, Michael C. 04 November 2017
Clinical study data is usually collected without knowing in advance what kind of data will be collected. In addition, the set of all possible data points that can apply to a patient in a given clinical study is almost always a superset of the data points actually recorded for any individual patient. As a result, clinical data resembles sparse data with an evolving schema. To help researchers at the Moffitt Cancer Center better manage clinical data, a tool called GURU was developed that uses the Entity-Attribute-Value (EAV) model to handle sparse data and to let users manage a database entity’s attributes without any changes to the database table definition. The EAV model’s read performance improves as the data becomes sparser, but it was observed to perform many times worse than a wide table when the attribute count is not sufficiently large. Ultimately, the design trades read performance for flexibility in the data schema.
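A minimal sketch of the Entity-Attribute-Value pattern the abstract describes, using SQLite; the table and column names are illustrative, not GURU's actual schema. Each fact is one (entity, attribute, value) row, so new attributes need no ALTER TABLE, and reading a record back means pivoting triples into a row, which is the read cost the abstract mentions.

```python
import sqlite3

# EAV: one (entity, attribute, value) triple per row. Names are
# illustrative only; the real GURU schema is not shown in the abstract.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE patient_eav (
        entity_id INTEGER,
        attribute TEXT,
        value     TEXT
    )
""")

# Sparse data: each patient records only the attributes that apply.
conn.executemany("INSERT INTO patient_eav VALUES (?, ?, ?)", [
    (1, "tumor_stage", "II"),
    (1, "biopsy_date", "2017-03-01"),
    (2, "tumor_stage", "I"),
])

# Reading a patient means pivoting triples back into a record.
cur = conn.execute(
    "SELECT attribute, value FROM patient_eav WHERE entity_id = ?", (1,))
patient1 = dict(cur.fetchall())
print(patient1)  # {'tumor_stage': 'II', 'biopsy_date': '2017-03-01'}
```

Adding a new attribute is just another INSERT, which is the schema flexibility the design buys at the price of slower wide reads.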
232

Recovering the Semantics of Tabular Web Data

Braunschweig, Katrin 26 October 2015
The Web provides a platform for people to share their data, leading to an abundance of accessible information. In recent years, significant research effort has been directed especially at tables on the Web, which form a rich resource for factual and relational data. Applications such as fact search and knowledge base construction benefit from this data, as it is often less ambiguous than unstructured text. However, many traditional information extraction and retrieval techniques are not well suited for Web tables, as they generally do not consider the role of the table structure in reflecting the semantics of the content. Tables provide a compact representation of similarly structured data. Yet, on the Web, tables are very heterogeneous, often with ambiguous semantics and inconsistencies in the quality of the data. Consequently, recognizing the structure and inferring the semantics of these tables is a challenging task that requires a designated table recovery and understanding process. In the literature, many important contributions have been made to implement such a table understanding process that specifically targets Web tables, addressing tasks such as table detection or header recovery. However, the precision and coverage of the data extracted from Web tables is often still quite limited. Due to the complexity of Web table understanding, many techniques developed so far make simplifying assumptions about the table layout or content to limit the number of contributing factors that must be considered. Thanks to these assumptions, many sub-tasks become manageable. However, the resulting algorithms and techniques often have a limited scope, leading to imprecise or inaccurate results when applied to tables that do not conform to these assumptions. In this thesis, our objective is to extend the Web table understanding process with techniques that enable some of these assumptions to be relaxed, thus improving both scope and accuracy.
We have conducted a comprehensive analysis of tables available on the Web to examine the characteristic features of these tables, but also to identify unique challenges that arise from these characteristics in the table understanding process. To extend the scope of the table understanding process, we introduce extensions to the sub-tasks of table classification and conceptualization. First, we review various table layouts and evaluate alternative approaches to incorporate layout classification into the process. Instead of assuming a single, uniform layout across all tables, recognizing different table layouts enables a wide range of tables to be analyzed in a more accurate and systematic fashion. In addition to the layout, we also consider the conceptual level. To relax the single-concept assumption, which expects all attributes in a table to describe the same semantic concept, we propose a semantic normalization approach. By decomposing multi-concept tables into several single-concept tables, we further extend the range of Web tables that can be processed correctly, enabling existing techniques to be applied without significant changes. Furthermore, we address the quality of data extracted from Web tables by studying the role of context information. Supplementary information from the context is often required to correctly understand the table content; however, the verbosity of the surrounding text can also mislead table relevance decisions. We first propose a selection algorithm to evaluate the relevance of context information with respect to the table content in order to reduce the noise. Then, we introduce a set of extraction techniques to recover attribute-specific information from the relevant context in order to provide a richer description of the table content. With the extensions proposed in this thesis, we increase the scope and accuracy of Web table understanding, leading to a better utilization of the information contained in tables on the Web.
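The semantic-normalization step described above can be sketched in a few lines: given a multi-concept Web table and a column-to-concept mapping (assumed here to be pre-computed, e.g. by the classification stage), split it into single-concept tables. The example table and the mapping are invented for illustration, not taken from the thesis.

```python
# Hedged sketch of multi-concept table decomposition. The mapping
# concept_of is an assumed input; in the thesis it would come from
# an earlier classification/conceptualization stage.
def decompose(header, rows, concept_of):
    concepts = {}
    for i, col in enumerate(header):
        concepts.setdefault(concept_of[col], []).append(i)
    tables = {}
    for concept, idxs in concepts.items():
        tables[concept] = {
            "header": [header[i] for i in idxs],
            "rows": [[r[i] for i in idxs] for r in rows],
        }
    return tables

# A table mixing two concepts (city facts and person facts).
header = ["City", "Population", "Mayor", "Party"]
rows = [["Dresden", "556000", "D. Hilbert", "X"],
        ["Leipzig", "597000", "E. Noether", "Y"]]
concept_of = {"City": "City", "Population": "City",
              "Mayor": "Person", "Party": "Person"}

split = decompose(header, rows, concept_of)
print(sorted(split))  # ['City', 'Person']
```

Each resulting single-concept table can then be fed to existing extraction techniques that assume one concept per table, which is exactly the point of relaxing the assumption upstream rather than rewriting those techniques.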
233

Managing human-induced material use : adding cyclic inter-sectoral flows to Physical Input-Output Tables to analyse the environmental impact of economic activity

Altimiras-Martin, Aleix January 2016
Current human activity is degrading the environment and depleting biotic and abiotic resources at unprecedented rates, inducing global environmental change and jeopardising the development of humankind. The structure of human activity determines which resources are extracted, how they are transformed, and where and how they are emitted back to the environment. Thus, the structure of human activity ultimately determines the human-Earth System interaction and human-induced environmental degradation. Several theories and empirical findings suggest that a cyclic structure would lower the resource requirements and emissions of the economic system, decoupling production and consumption from their environmental impacts. However, the cyclic structure has not been fully characterised, nor related to the resource requirements or emissions estimated through models representing the physical structure of the economic system. This thesis develops tools to analyse the physical structure of the economic system and, ultimately, a method to identify its cyclic structure and relate it to the environmental impact induced by economic activity. Using this new knowledge, it might be possible to reduce the environmental impact of the economy by altering its physical structure. In chapter 3, the different methods for calculating the emissions and resources associated with a given final demand in physical input-output tables are reviewed, because they yield different results; it is argued that only two are valid. Surprisingly, these two methods reveal different physical structures; these are explored using a backward linkage analysis and their differences explained. It is found that only one method is appropriate for analysing the physical structure of the economic system, and that this method is in fact a new input-output model capable of tracing by-products as final outputs.
Also, since traditional input-output structural analyses provide aggregate measures, a visual representation of input-output tables is developed that enables researchers to perform disaggregated structural analyses and identify intersectoral patterns. In chapter 4, a method to derive the full cyclic structure of the economic system is developed using network analysis within the input-output framework; it identifies the intersectoral cycles and the resources and emissions associated with cycling. It is shown that cyclic flows maximise the system throughput but lower the resource efficiency of the system vis-à-vis the system outputs. It is demonstrated that 1) the complete structure is composed of a cyclic-acyclic and a direct-indirect sub-structure, challenging the common understanding of how the structure functions, and 2) cycling is composed of pre-consumer cycling, post-consumer cycling, re-cycling and trans-cycling. In chapter 5, a set of indicators is developed to capture the weight and emissions associated with each sub-structure, and the sub-structures are related to the economy's resource efficiency and emissions. In chapter 6, it is illustrated how the concepts, indicators and methods developed in previous chapters can be used to identify strategies to improve the resource efficiency of the economy by altering its structure. Finally, in chapter 7, it is suggested that the definition of recycling be refined to integrate the different systemic effects of pre-consumer and post-consumer cycling, and it is argued that the ideal structure of a circular, closed-loop economy should minimise its pre-consumer cycling in favour of more efficient acyclic flows while maximising its post-consumer cycling.
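The input-output machinery underlying such analyses can be illustrated with the standard Leontief model: given a technical-coefficient matrix A (inputs per unit of output) and a final demand y, total output is x = (I − A)⁻¹ y, and the gap between x and y is absorbed by inter-sectoral (possibly cyclic) flows. The two-sector numbers below are invented for illustration and are not from the thesis.

```python
import numpy as np

# Toy physical input-output computation. A[i, j] = input from sector i
# needed per unit of output of sector j (mass terms, invented numbers).
A = np.array([[0.1, 0.3],
              [0.2, 0.1]])
y = np.array([100.0, 50.0])   # final demand per sector

L = np.linalg.inv(np.eye(2) - A)   # Leontief inverse
x = L @ y                          # total output per sector

# Each sector must produce more than its final demand, because part of
# its output feeds other sectors (the intersectoral flows the thesis
# decomposes into cyclic and acyclic parts).
assert np.all(x > y)
print(x)
```

A structural analysis of the kind the thesis extends would then examine the flow matrix itself (e.g. A scaled by x) rather than only these aggregate outputs.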
234

Distributed Algorithms for Networks Formation in a Scalable Internet of Things

Jedda, Ahmed January 2014
The Internet of Things (IoT) is a vision that aims at inter-connecting every identifiable physical object (or "thing") via a global networking infrastructure (e.g., the legacy Internet). Several architectures have been proposed to realize this vision, many of which agree that the IoT should be considered a global network of networks. These networks are used to manage wireless sensors, Radio Frequency IDentification (RFID) tags, RFID readers and other types of electronic devices, and to integrate them into the IoT. A major requirement of IoT architectures is scalability: the capability of delivering high performance even when the input size (e.g., the number of IoT objects) is large. This thesis studies and proposes solutions to meet this requirement, focusing specifically on the scalability issues found in the networks of the IoT. It proposes several network formation algorithms to achieve these objectives, where a network formation algorithm is an algorithm that, applied to a given network, optimizes it to perform its tasks more efficiently by virtually deleting some of its nodes and/or edges. The thesis focuses on three types of networks found in the IoT: 1) RFID reader coverage networks, whose main task is to cover (i.e., identify, monitor, track, sense) IoT objects located in a given area; 2) reader inter-communication networks, whose main task is to guarantee that their nodes are able to inter-communicate with each other and hence use their resources more efficiently (the thesis specifically considers inter-communication networks of readers using Bluetooth); and 3) Object Name Systems (ONS), which are networks of several inter-connected database servers (i.e., distributed databases) whose main task is to resolve an object identifier into an Internet address to enable inter-communication via the Internet. These networks were chosen for several reasons.
For example, the technologies and concepts found in these networks are among the major enablers of the IoT, and these networks solve tasks that are central to any IoT architecture. In particular, the thesis a) studies the data and reader redundancy problems found in RFID reader coverage networks and introduces decentralized RFID coverage and reader collision avoidance algorithms to solve them, b) contributes to the problem of forming multi-hop inter-communication networks of Bluetooth-equipped readers by proposing decentralized, time-efficient Bluetooth Scatternet Formation algorithms, and c) introduces a geographic-aware ONS architecture based on Peer-To-Peer (P2P) computing to overcome weaknesses found in existing ONS architectures.
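The reader-redundancy idea can be sketched simply: a reader is redundant if every tag it covers is also covered by some other active reader, so it can be switched off without losing coverage. The sketch below is a centralized greedy check for illustration only; the thesis proposes decentralized algorithms, and the coverage sets here are invented.

```python
# Hedged sketch of redundant-reader elimination in an RFID coverage
# network. Centralized and greedy (smallest coverage first), unlike the
# decentralized algorithms of the thesis; data is invented.
def redundant_readers(coverage):
    """Return readers that can be switched off without losing coverage."""
    active = set(coverage)
    removed = []
    for reader in sorted(coverage, key=lambda r: len(coverage[r])):
        others = set()
        for other in active:
            if other != reader:
                others |= coverage[other]
        if coverage[reader] <= others:   # fully covered by the rest
            active.remove(reader)
            removed.append(reader)
    return removed

coverage = {
    "r1": {"t1", "t2"},
    "r2": {"t2", "t3"},
    "r3": {"t1", "t2", "t3"},   # covers everything r1 and r2 cover
}
print(redundant_readers(coverage))  # ['r1', 'r2']
```

Switching off r1 and r2 leaves r3 covering all three tags, which is the kind of energy and collision saving the coverage algorithms aim for.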
235

Zkoumání závislosti charakteristik a vybavení bytu na typu domácnosti / Investigation of dependence of the housing characteristics and the amenities in the dwelling on the household type

Čapková, Kateřina January 2010
The aim of this diploma thesis is to provide a comprehensive overview of the survey of income and living conditions of households in the Czech Republic and to propose a feasible approach to exploiting the collected data. The thesis describes the framework of the EU-SILC survey and presents its practical implementation in the Czech Republic. A technique utilizing contingency table analysis is proposed for studying asymmetric relationships between selected housing characteristics and amenities in the dwelling with respect to different household types. The analysis is based on the relative frequencies of housing characteristics observed for particular types of households. The frequencies stem from the sample survey of income and living conditions of households carried out in the Czech Republic under the official title Living Conditions 2008. Asymmetric measures of association for nominal and ordinal variables were used to describe the relationships, as the examined housing characteristics and household types have the character of alternative, nominal or ordinal variables. The analyses show a significant dependence of the selected housing characteristics and amenities in the dwelling on the monitored types of households.
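A standard asymmetric measure of association for nominal variables of the kind the abstract mentions is Goodman-Kruskal lambda; a minimal implementation over a contingency table is sketched below. The counts are invented, not EU-SILC data.

```python
# Goodman-Kruskal lambda (asymmetric, lambda_{column|row}): the
# proportional reduction in prediction error for the column variable
# (e.g. a housing characteristic) once the row variable (e.g. household
# type) is known. Counts below are invented for illustration.
def goodman_kruskal_lambda(table):
    n = sum(sum(row) for row in table)
    # Errors predicting the modal column without knowing the row:
    col_totals = [sum(col) for col in zip(*table)]
    e1 = n - max(col_totals)
    # Errors predicting each row's own modal column:
    e2 = sum(sum(row) - max(row) for row in table)
    return (e1 - e2) / e1 if e1 else 0.0

table = [[30, 10],   # household type A
         [ 5, 25]]   # household type B
lam = goodman_kruskal_lambda(table)
print(round(lam, 3))  # 0.571
```

A lambda of 0 means the row variable does not help predict the column variable at all; 1 means it predicts it perfectly, which is why the measure is asymmetric (swapping rows and columns generally changes the value).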
236

Analýza úmrtnostních tabulek pomocí vybraných vícerozměrných statistických metod / Life tables analysis using selected multivariate statistical methods

Bršlíková, Jana January 2015
Mortality is historically one of the most important demographic indicators and clearly reflects the maturity of each country. The objective of this diploma thesis is to compare mortality rates in the analyzed countries around the world, over time and among each other, using principal component analysis, which allows the data to be assessed in a different way. The big advantages of this method are minimal loss of information and a readily understandable interpretation of mortality in each country. The thesis offers several interesting graphical outputs that, for example, confirm the higher mortality rate in Eastern European countries compared to Western European countries and show that the Czech Republic is the country where mortality fell most among post-communist countries between 1990 and 2010. The source of the data is the Human Mortality Database, and all data were processed in the statistical tool SPSS.
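The principal component analysis used in the thesis can be sketched in a few lines of NumPy: rows are countries, columns are age-specific mortality rates, and the components come from the SVD of the mean-centred data matrix. The data below is synthetic, standing in for the Human Mortality Database input.

```python
import numpy as np

# Synthetic stand-in for a countries-by-age-groups mortality matrix.
rng = np.random.default_rng(0)
base = np.linspace(0.001, 0.1, 8)          # a synthetic age profile
X = base * rng.uniform(0.8, 1.2, (5, 8))   # 5 "countries", 8 age groups

Xc = X - X.mean(axis=0)                    # centre each column
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)            # variance share per component
scores = Xc @ Vt.T                         # country coordinates on the PCs

# Plotting the first two score columns gives the kind of 2-D country
# comparison chart the thesis uses, with minimal loss of information.
print(explained.round(3))
```

SPSS (used in the thesis) performs the equivalent decomposition internally; the SVD form above is just the most compact way to reproduce it.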
237

Využití gravitačních modelů při konstrukci odhadů komoditních toků / Construction of estimates of interregional commodity flows by using gravity model

Kieslichová, Kateřina January 2015
The aim of my thesis is the construction of estimates of interregional commodity flows for the regions of the Czech Republic using a gravity model. The gravity model is based on Newton's law of gravitation. Gravity models can be used in two different information contexts. In the first, the spatial interaction flows are known a priori and the model is used to explain the behaviour of the trade flows. In the second, these interactions are entirely unknown a priori and the flows must be estimated. This thesis focuses on the second context. When estimating commodity flows, we need to know the value of exports and imports for the individual regions. The estimated interregional commodity flows are the main result of this work. They are entered into the regional input-output tables compiled by the Department of Economic Statistics, which are balanced so that resources equal uses. On the basis of the resulting tables for all regions, I conducted an input-output analysis, which examines the impact of modelled changes in investment in selected commodities on the estimated interregional flows and on selected macroeconomic indicators.
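A doubly-constrained gravity model of the kind described above can be sketched as follows: flows are proportional to regional exports, imports, and a distance-deterrence function, with balancing factors fitted iteratively (iterative proportional fitting) so that row sums match exports and column sums match imports. All numbers below are invented; the thesis's actual regions, deterrence function, and calibration are not shown in the abstract.

```python
import numpy as np

O = np.array([100.0, 50.0, 80.0])    # exports by region (invented)
D = np.array([90.0, 70.0, 70.0])     # imports by region (same total)
d = np.array([[1.0, 2.0, 3.0],
              [2.0, 1.0, 2.0],
              [3.0, 2.0, 1.0]])      # interregional distances (invented)
f = d ** -1.0                        # simple power-law deterrence

# Iterative proportional fitting: alternately rescale rows to match
# exports and columns to match imports until the flow matrix balances.
T = f.copy()
for _ in range(100):
    T *= (O / T.sum(axis=1))[:, None]
    T *= (D / T.sum(axis=0))[None, :]

print(np.allclose(T.sum(axis=1), O, atol=1e-6),
      np.allclose(T.sum(axis=0), D, atol=1e-6))
```

The balanced matrix T is exactly the kind of estimate that can then be slotted into regional input-output tables so that resources equal uses.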
238

Temporální rozšíření pro PostgreSQL / A Temporal Extension for PostgreSQL

Jelínek, Radek January 2015
This thesis is focused on the PostgreSQL database system. It introduces temporal databases and the PostgreSQL database system, proposes a temporal extension for PostgreSQL, and presents an implementation chapter with examples of using this extension. The thesis also surveys existing temporal database systems and the use of temporal databases in practice.
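The extension itself is PostgreSQL-specific and not reproduced here, but the core valid-time idea behind temporal databases can be sketched in any SQL engine: store each fact with its validity period and query the state "as of" a date. The sketch below uses SQLite via Python purely so it is self-contained; the schema and data are illustrative, not the thesis's design.

```python
import sqlite3

# Valid-time sketch: each row carries the period during which the fact
# held in the real world. Illustrative schema, not the PostgreSQL
# extension from the thesis.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE salary (
        emp        TEXT,
        amount     INTEGER,
        valid_from TEXT,   -- inclusive
        valid_to   TEXT    -- exclusive; '9999-12-31' means still valid
    )
""")
conn.executemany("INSERT INTO salary VALUES (?, ?, ?, ?)", [
    ("ann", 1000, "2014-01-01", "2015-01-01"),
    ("ann", 1200, "2015-01-01", "9999-12-31"),
])

# Time-slice query: Ann's salary as of 2014-06-15.
row = conn.execute(
    """SELECT amount FROM salary
       WHERE emp = ? AND valid_from <= ? AND ? < valid_to""",
    ("ann", "2014-06-15", "2014-06-15")).fetchone()
print(row[0])  # 1000
```

A real temporal extension wraps exactly this bookkeeping (period columns, time-slice predicates, update-splitting) behind dedicated syntax so applications do not have to write it by hand.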
239

Automatisering av multiplikationstabellerna : En studie om automatisering av multiplikationstabellerna / Automation of the multiplication tables: a study of the automation of the multiplication tables

Abdullahi, Beyar, Nordström, Karin January 2020
Previous research has shown that students lack knowledge of the multiplication tables. Automating the tables gives students good conditions to succeed in other areas of mathematics. The purpose of this study was to create knowledge about teachers' perceptions of automating the multiplication tables and to examine how many of their pupils had automated the tables. The purpose was also to identify what methods teachers use to support students in automating them. The study was conducted with a survey, and the results were followed up with focus-group interviews. The results showed that many teachers value the automation of the multiplication tables because it prepares students for other areas of mathematics and relieves their working memory. The methods most teachers used in their teaching were digital teaching platforms and various worksheets, such as drill training of the tables.
