1

Automatic Creation of Researcher’s Competence Profiles Based on Semantic Integration of Heterogeneous Data sources

Khadgi, Vinaya, Wang, Tianyi January 2012 (has links)
Research journals and publications are a great source of knowledge, produced through the hard work of researchers. Several digital libraries maintain records of such research publications so that the general public and other researchers can find and study previous work in a field of interest. To make searching effective and easy, these digital libraries keep a database of publication metadata. These metadata records are generally well designed to capture the vital details of a publication or article, and so have the potential to yield information about researchers, their research activities, and hence their competence profiles. This thesis is a study of, and a search for, a method for building competence profiles of researchers from the records of their publications in well-known digital libraries. Researchers publish with different publication houses, so, in order to build a complete profile, the data from several of these heterogeneous digital library sources has to be integrated semantically. Several semantic technologies were studied in order to investigate the challenges of integrating heterogeneous sources and modeling a researcher's competence profile. An on-demand approach to profile creation was chosen, in which a user of the system enters basic name details of the researcher whose profile is to be created. In this thesis work, Design Science Research was used as the research methodology, and, to complement it with a working artifact, Scrum, an agile software development methodology, was used to develop a competence profile system as a proof of concept.
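The on-demand integration step described above can be sketched as follows. This is a minimal illustration, not the thesis's actual system: the source names, record layout, and keyword-based notion of "competence" are all assumptions made for the example. The idea is simply to query several metadata sources by researcher name, deduplicate publications by normalized title, and aggregate keywords into a profile.

```python
from collections import Counter

def normalize_title(title):
    # Normalize titles so the same publication listed by two
    # libraries is counted only once.
    return " ".join(title.lower().split())

def build_profile(name, sources):
    # `sources` maps a library name to a function that returns
    # publication records for the given researcher name.
    seen, keywords = set(), Counter()
    for fetch in sources.values():
        for rec in fetch(name):
            key = normalize_title(rec["title"])
            if key in seen:
                continue  # duplicate record from another library
            seen.add(key)
            keywords.update(k.lower() for k in rec.get("keywords", []))
    # The "competence profile" here is simply the researcher's
    # most frequent publication keywords.
    return keywords.most_common()
```

In a real system each fetch function would wrap a digital library's search API and the semantic-integration layer would reconcile richer metadata than titles and keywords.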
2

A data cleaning and annotation framework for genome-wide studies.

Ranjani Ramakrishnan 11 1900 (has links) (PDF)
M.S. / Computer Science and Engineering / Genome-wide studies are sensitive to the quality of the annotation data included in analyses, and they often involve overlaying both computationally derived and experimentally generated data onto a genomic scaffold. A framework for successfully integrating data from diverse sources needs to address, at a minimum, the conceptualization of biological identity in the data sources, the relationship between the sources in terms of the data present, the independence of the sources, and any discrepancies in the data. The outcome of the process should either resolve these discrepancies or incorporate them into downstream analyses. In this thesis we identify factors that are important in detecting errors within and between sources and present a generalized framework to detect discrepancies. An implementation of our workflow is used to demonstrate the utility of the approach in the construction of a genome-wide mouse transcription factor binding map and in the classification of single nucleotide polymorphisms. We also present the impact of these discrepancies on downstream analyses. The framework is extensible, and we discuss future directions, including summarization of the discrepancies in a biologically relevant manner.
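The core discrepancy-detection step can be illustrated with a small sketch. This is not the thesis's framework; it only shows, under the simplifying assumption that each source maps a biological identifier to a single annotated value, the two kinds of discrepancy the abstract names: identifiers present in only one source, and identifiers on which the sources disagree.

```python
def find_discrepancies(source_a, source_b):
    # Compare two annotation sources keyed by biological identifier.
    only_a = set(source_a) - set(source_b)
    only_b = set(source_b) - set(source_a)
    # Identifiers both sources annotate, but with conflicting values.
    conflicts = {k: (source_a[k], source_b[k])
                 for k in set(source_a) & set(source_b)
                 if source_a[k] != source_b[k]}
    return {"only_a": only_a, "only_b": only_b, "conflicts": conflicts}
```

A full framework would also weigh the independence of the sources when deciding whether to resolve a conflict or carry it into downstream analyses.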
3

Vartotojo sąsajos elementų modeliavimas informacijos srautų specifikacijos pagrindu / User Interface Elements Modeling Based on Information Flow Specification

Subačiūtė, Ieva 24 September 2004 (has links)
The goal of this project was to build a prototype that models user interface elements based on an information flow specification. The system is programmed in MS Visual Basic for Applications; its graphical environment is MS Visio 2000, and the GUI elements used for modeling are taken from the MS Visio 2000 stencil WUI (wui.vss). The created module allows easy modeling of graphical user interface elements for a data source selected from the database. The processing stages of the data sources are illustrated using MS Visio 2000 document windows, one window for each stage. In the database, GUI elements are associated with the data source processing stages they are related to, i.e. the ones in which they are present. This means that, after the user selects the desired data source and the processing stages to be modeled, the system generates forms with the associated GUI elements. This work was done at the request of the Information Systems Design scientific group, which studies the design of information systems in the wide context of requirements engineering and of process and data structure specification. The designed and implemented module is to become part of a CASE tool for designing computerized information systems. Once the CASE tool is finished, it is planned to use it for educational purposes in the Department of Information Systems at Kaunas University of Technology. It is going to... [to full text]
4

Emergence of a Cancer Identity in Emerging Adulthood: Weblogs as Illness Narratives

Soltermann, Tanya C. 21 February 2014 (has links)
The focus of this research is on the specific relational and particular circumstances that result in an emerging cancer identity, expressed through the daily lived experiences of emerging adults via personal weblogs. Identity, a complex term in its own right, is discussed here under the rubric of social identity as processual; it is therefore expected that an emerging cancer identity will develop as the participants begin to narrativize their daily experiences with cancer on their weblogs. By critically engaging theories of emerging adulthood with theories on the sociology of death and dying and on illness narratives, this research seeks to understand the specific psychosocial changes that occur as the participants engage with their illness on their weblogs, which arguably contributes to an emerging cancer identity.
6

SDL model pro Source Specific Multicast / SDL model for Source Specific Multicast

Záň, Stanislav January 2008 (has links)
This work deals with communication in IP networks using the Source-Specific Multicast (SSM) method. It focuses on the registration, deregistration, and administration of clients in a multicast group, and on the IGMP protocol, which is designed for this communication. The work also deals with signalization between the data source and the clients of a multicast group. The introduction analyses multicast communication in general. It is followed by a chapter focused on the specific multicast method, Source-Specific Multicast (SSM). The next chapter covers the protocols used in SSM for distributing the data stream in the source-to-client direction and in the reverse direction. The following chapter deals with the signalization of SSM communication, specializing in the reflection and summarization methods used there. This chapter also presents the basic mathematical formulas for sending signalization packets and proposes other solutions and ways to simplify communication and minimize delay, which is very important for signalization. After that, the work discusses the differences between these two signalization methods. In the practical part, an application is built from the theory of the previous chapters; it simulates real communication between a data source and the clients of a multicast group. This communication is explained in MSC diagrams. The application also simulates both signalization methods and a realistic number of clients in a multicast group. The results of the simulation are interpreted in the last part of the work.
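One way the delay-versus-traffic trade-off in group signalization can be pictured is IGMPv2-style report suppression, where a membership query costs a single report regardless of group size. The sketch below is only an illustrative simulation of that summarization effect, not the thesis's application or its signalization formulas; the function name and parameters are assumptions made for the example.

```python
import random

def simulate_query(num_clients, max_delay, seed=0):
    # Each client schedules its membership report after a uniform
    # random delay in [0, max_delay). The client whose timer fires
    # first sends the report; all others hear it and suppress their
    # own, so one query produces one report on the wire no matter
    # how many clients belong to the group.
    rng = random.Random(seed)
    delays = [rng.uniform(0, max_delay) for _ in range(num_clients)]
    reporter = min(range(num_clients), key=delays.__getitem__)
    return reporter, 1  # (index of reporting client, reports sent)
```

The expected response delay shrinks as the group grows (the minimum of more uniform draws), which is the kind of quantity the signalization formulas in the work are concerned with.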
7

Targeted feedback collection for data source selection with uncertainty

Cortés Ríos, Julio César January 2018 (has links)
The aim of this dissertation is to contribute to research on pay-as-you-go data integration by proposing an approach for targeted feedback collection (TFC), which aims to improve the cost-effectiveness of feedback collection, especially when there is uncertainty about the characteristics of the integration artefacts. In particular, this dissertation focuses on the data source selection task in data integration. It is shown how the impact of uncertainty about the evaluation of the characteristics of the candidate data sources, also known as data criteria, can be reduced in a cost-effective manner, thereby improving the solutions to the data source selection problem. This dissertation shows how alternative approaches, such as active learning and simple heuristics, have drawbacks that shed light on the pursuit of better solutions to the problem. It describes the resulting TFC strategy and reports on its evaluation against alternative techniques. The evaluation scenarios range from synthetic data sources with a single criterion and reliable feedback to real data sources with multiple criteria and unreliable feedback (such as can be obtained through crowdsourcing). The results confirm that the proposed TFC approach is cost-effective and leads to improved solutions for data source selection by seeking feedback that reduces uncertainty about the data criteria of the candidate data sources.
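The idea of targeting feedback where uncertainty is greatest can be sketched with a simple Bayesian toy model. This is not the dissertation's TFC strategy; it assumes, purely for illustration, that each (source, criterion) pair has a Beta posterior built from positive/negative feedback counts, and that the next feedback item should go to the pair with the highest posterior variance.

```python
def beta_variance(a, b):
    # Variance of a Beta(a, b) posterior over a criterion's true value.
    return (a * b) / ((a + b) ** 2 * (a + b + 1))

def next_target(posteriors):
    # posteriors: {(source, criterion): [alpha, beta]} counts from
    # feedback collected so far. Target the most uncertain pair, so
    # each feedback item buys the largest reduction in uncertainty.
    return max(posteriors, key=lambda k: beta_variance(*posteriors[k]))

def record_feedback(posteriors, key, positive):
    # Fold one feedback item into the chosen pair's posterior.
    a, b = posteriors[key]
    posteriors[key] = [a + 1, b] if positive else [a, b + 1]
```

With unreliable (e.g. crowdsourced) feedback, each item would update the posterior only fractionally, weighted by the worker's estimated reliability.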
8

A visual query language served by a multi-sensor environment

Camara (Silvervarg), Karin January 2007 (has links)
A problem in modern command and control situations is that much data is available from different sensors. Several sensor data sources also require that the user have knowledge of the specific sensor types to be able to interpret the data.

To ease the working situation for a commander, we have designed and constructed a system that takes input from several different sensors and presents the relevant combined information to the user. The users specify what kind of information is of interest at the moment by means of a query language. The main issues when designing this query language have been that (a) the users should not need any knowledge about sensors or sensor data analysis, and (b) the query language should be powerful and flexible, yet easy to use. The solution has been to (a) use sensor data independence and (b) provide a visual query language.

A visual query language was developed with a two-step interface. First, the users pose a "rough", simple query that is evaluated by the underlying knowledge system. The system returns the relevant information that can be found in the sensor data. Then, the users have the possibility to refine the result by setting conditions on it. These conditions are formulated by specifying attributes of objects or relations between objects.

The problem of uncertainty in spatial data (i.e. location, area) has been considered, along with the question of how to represent potential uncertainties. An investigation has been carried out to find which relations are practically useful when dealing with uncertain spatial data.

The query language has been evaluated by means of a scenario. The scenario was inspired by real events and was developed in cooperation with a military officer to ensure that it was fairly realistic. The scenario was simulated using several tools, of which the query language was one of the more central ones. It proved that the query language can be of use in realistic situations. / Report code: LiU-Tek-Lic-2007:42.
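The two-step interface lends itself to a small sketch: a sensor-independent "rough" query by object type, followed by refinement with attribute conditions. This is only an illustration of the interaction pattern, not the thesis's visual language; the object layout and the `*_min`/`*_max` condition naming are assumptions made for the example.

```python
def rough_query(objects, object_type):
    # Step 1: a "rough" query returns every fused object of the
    # requested type; the user needs no sensor-specific knowledge.
    return [o for o in objects if o["type"] == object_type]

def refine(results, **conditions):
    # Step 2: refine by attribute conditions, e.g. speed_min=10
    # keeps only objects whose "speed" attribute is at least 10.
    def ok(o):
        for attr, bound in conditions.items():
            name, op = attr.rsplit("_", 1)
            if op == "min" and o.get(name, 0) < bound:
                return False
            if op == "max" and o.get(name, 0) > bound:
                return False
        return True
    return [o for o in results if ok(o)]
```

Relations between objects (e.g. "within 1 km of"), and the uncertain spatial variants of such relations, would be expressed as further condition types in the same refinement step.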
10

30 anos de linchamentos na região metropolitana de São Paulo - 1980-2009 / 30 years of lynchings in the Metropolitan Region of São Paulo - 1980-2009

Ariadne Lima Natal 08 February 2013 (has links)
A pesquisa analisa dados sobre linchamentos ocorridos entre 1980 e 2009, na cidade de São Paulo e nos municípios de sua Região Metropolitana, utilizando como fonte primária o material coletado pelo Banco de Dados da Imprensa Sobre as Graves Violações de Direitos Humanos do Núcleo de Estudos da Violência da Universidade de São Paulo. Os procedimentos metodológicos incluíram o uso de técnicas como análise documental e análise de conteúdo para tratar os dados quantitativos e dados qualitativos extraídos das notícias de jornal. O objetivo da análise longitudinal foi observar possíveis mudanças nas características deste fenômeno ao longo do tempo, para responder as questões propostas: O que muda nos casos de linchamentos ao longo das três últimas décadas? e De que forma as transformações socioeconômicas ocorridas na região estudada podem afetar as ocorrências e características dos linchamentos?. Considerando alguns dos principais aspectos e transformações na economia, política, urbanização, criminalidade e padrões de sociabilidade marcantes na região nos últimos 30 anos, a pesquisa busca estabelecer uma conexão entre os dados analisados e a dinâmica macrossocial da cidade, apontando a importância de se considerar elementos contextuais em análises longitudinais de linchamentos. / The research analyzes data on lynchings that occurred between 1980 and 2009 in the city of São Paulo and in the municipalities of its metropolitan region, using as its primary source the material collected by the Press Database on Serious Human Rights Violations of the Center for the Study of Violence at the University of São Paulo. The methodological procedures included techniques such as document analysis and content analysis to treat the quantitative and qualitative data drawn from the news reports. Through a longitudinal analysis, the research observed possible changes in the characteristics of this phenomenon over time, to answer the questions posed: "What changes in the cases of lynchings over the past three decades?" and "How can the socioeconomic transformations in the region studied affect the occurrence and characteristics of lynchings?". Considering some of the main aspects and transformations in the economy, politics, urbanization, crime, and patterns of sociability prominent in the region over the past 30 years, the research seeks to establish a connection between the data analyzed and the macrosocial dynamics of the city, pointing out the importance of considering contextual elements in longitudinal analyses of lynchings.
