  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
541

Vestígios de peixes em sítios arqueológicos de caçadores-coletores do Rio Grande do Sul, Brasil / Fish remains in hunter-gatherers archaeological sites of Rio Grande do Sul, Brazil

Ricken, Claudio January 2015 (has links)
Foram estudados os restos de peixes de três sítios arqueológicos no estado do Rio Grande do Sul, Brasil. Os sítios RS-S-327 e RS-C-61 Pilger estão localizados nas bacias dos rios Sinos e Caí, em abrigos sob rocha formados pela erosão dos arenitos da Formação Botucatu. Foram identificadas 14 espécies de peixes no sítio arqueológico RS-S-327 Sangão: Bunocephalus sp.; Crenicichla sp.; Geophagus sp.; Hoplias sp.; Hypostomus sp.; Hoplosternum sp.; Microglanis sp.; Oligosarcus sp.; Pimelodus sp.; Prochilodus sp.; Rhamdia sp.; Salminus sp.; Synbranchus sp. No sítio arqueológico RS-C-61 Adelar Pilger foram identificadas 12 espécies de peixes: Crenicichla sp.; Geophagus sp.; Hoplias sp.; Hoplosternum sp.; Hypostomus sp.; Leporinus sp.; Oligosarcus sp.; Pimelodus sp.; Prochilodus sp.; Rhamdia sp.; Salminus sp.; Synbranchus sp., e uma espécie marinha: Carcharhinus sp. A presença de espécies que apresentam migração reprodutiva corrobora a hipótese de que esses abrigos sob rocha eram ocupados em períodos mais quentes do ano. A maior exploração de espécies de peixes oriundas de ambientes próximos aos sítios aponta para uma atividade de pesca não especializada, feita dentro da área de influência doméstica dos abrigos. A análise dos vestígios do sítio RS-AS-01, Sambaqui Praia do Paraíso, localizado em Arroio do Sal (RS), demonstrou que o molusco Mesodesma mactroides foi a espécie dominante em todos os níveis estratigráficos, sendo seguida por Donax hanleyanus. Dentre os vertebrados, os peixes apresentaram o maior número de peças identificadas, representados em maior número por Genidens sp., Pogonias chromis, Menticirrhus littoralis e Micropogonias furnieri, e espécies com menor representação: Paralonchurus brasiliensis, Macrodon sp., Cynoscion sp., Mugil sp., Paralichthys sp., Urophycis sp. e duas espécies dulcícolas: Hoplias sp. e Microglanis sp.
A estimativa das dimensões corporais com base nos otólitos das espécies Genidens sp., Menticirrhus littoralis e Micropogonias furnieri conduziu à hipótese do uso de redes com malha padronizada. As experimentações de quebra e seccionamento de esporões de Genidens barbus demonstraram que a quebra de esporões "in natura" e de espécimes assados envoltos em folhas produziu padrões de quebra irregulares. Os esporões dos exemplares assados em forno elétrico mostraram padrões de quebra regulares. Os exemplares expostos ao cozimento apresentaram um padrão de descoloração diretamente proporcional ao tempo de exposição. Os exemplares seccionados com lasca lítica por fricção apresentaram padrões condizentes com aqueles encontrados em esporões procedentes de sítios arqueológicos da cultura Sambaqui. Com o objetivo de fornecer opções para a melhoria das análises arqueofaunísticas, foi desenvolvido um programa para o gerenciamento de dados zooarqueológicos, utilizando a linguagem Pascal e, como compilador/editor, o ambiente de programação Delphi. O banco de dados é formado por lotes numerados sequencialmente, nos quais, além das informações básicas para identificação da origem das peças, é possível incluir informações sobre taxonomia, anatomia e tafonomia das peças. Considerando a grande diversidade de animais, as opções para inclusão de novos táxons estão em aberto a partir do nível de Filo. Diversas opções oferecidas pela bibliografia para os cálculos de NISP (número de espécimes identificados), NMI (número mínimo de indivíduos) e tafonomia foram contempladas pelo programa. O sistema desenvolvido possibilita a tradução do software para qualquer língua com alfabeto latino e a interação entre o usuário remoto e um servidor central. O programa ArchaeoBones demonstrou ser eficiente para o registro de vestígios arqueológicos e para a geração de dados primários e secundários, com confiabilidade e repetibilidade compatíveis com o grande número de dados utilizados.
/ The fish remains of three archaeological sites in the state of Rio Grande do Sul, Brazil, were studied. The RS-S-327 and RS-C-61 Pilger sites are located in the Sinos and Caí river basins, in rock shelters formed by erosion of the Botucatu formation sandstones. Fourteen species of fish were identified at the RS-S-327 Sangão site: Bunocephalus sp.; Crenicichla sp.; Geophagus sp.; Hoplias sp.; Hypostomus sp.; Hoplosternum sp.; Microglanis sp.; Oligosarcus sp.; Pimelodus sp.; Prochilodus sp.; Rhamdia sp.; Salminus sp.; Synbranchus sp. Twelve species of fish were identified at the RS-C-61 Adelar Pilger archaeological site: Crenicichla sp.; Geophagus sp.; Hoplias sp.; Hoplosternum sp.; Hypostomus sp.; Leporinus sp.; Oligosarcus sp.; Pimelodus sp.; Prochilodus sp.; Rhamdia sp.; Salminus sp.; Synbranchus sp., plus one marine species: Carcharhinus sp. The hypothesis that these rock shelters were occupied during warmer periods of the year is supported by the presence of species with reproductive migration. The greater exploitation of fish species from environments close to the sites points to unspecialized fishing carried out within the domestic range of influence of the shelters. The analysis of the remains from the RS-AS-01 Sambaqui Praia do Paraíso site, located in Arroio do Sal (RS), showed that the clam Mesodesma mactroides was the dominant species in all stratigraphic levels, followed by Donax hanleyanus. Among vertebrates, fish had the highest number of identified pieces, represented in greatest numbers by Genidens sp., Pogonias chromis, Menticirrhus littoralis and Micropogonias furnieri, with less-represented species including Paralonchurus brasiliensis, Macrodon sp., Cynoscion sp., Mugil sp., Paralichthys sp. and Urophycis sp., and two freshwater species: Hoplias sp. and Microglanis sp. Based on otolith dimensions, the estimation of the body size of Genidens sp., Menticirrhus littoralis and Micropogonias furnieri led to the hypothesis that nets with standardized mesh were used.
Breaking and sectioning experiments on Genidens barbus spines demonstrated that spines broken "in natura" and those of specimens roasted wrapped in leaves showed irregular break patterns, while spines of specimens roasted in an electric oven showed regular break patterns. Specimens cooked in water showed a discoloration pattern directly proportional to exposure time. Spines sectioned by lithic-flake friction showed patterns consistent with those found on spines from Sambaqui culture archaeological sites. Aiming to provide options for improving archaeofaunal analyses, we developed software for zooarchaeological data management, using the Pascal language with the Delphi programming environment as compiler/editor. The database consists of sequentially numbered lots which, beyond the basic information identifying the origin of the pieces, can include information on their taxonomy, anatomy and taphonomy. Given the great diversity of animals, the options for including new taxa are open from the phylum level down. Several options offered by the literature for NISP (number of identified specimens) and MNI (minimum number of individuals) calculations and for taphonomic characteristics were included in the program. The system allows translation of the software into any language with a Latin alphabet, and interaction between a remote user and a central server. The ArchaeoBones software proved efficient for recording archaeological remains, generating primary and secondary data with reliability and repeatability consistent with the large volume of data used.
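The NISP and MNI measures named above are standard zooarchaeological counts. As a minimal sketch of how such calculations work (the record fields and sample taxa below are illustrative assumptions, not data from the ArchaeoBones database, which is written in Pascal/Delphi):

```python
from collections import Counter, defaultdict

# Hypothetical specimen records: (taxon, element, side).
specimens = [
    ("Genidens sp.", "otolith", "left"),
    ("Genidens sp.", "otolith", "left"),
    ("Genidens sp.", "otolith", "right"),
    ("Hoplias sp.", "dentary", "left"),
    ("Hoplias sp.", "dentary", "right"),
]

def nisp(records):
    """NISP: raw count of identified specimens per taxon."""
    return Counter(taxon for taxon, _, _ in records)

def mni(records):
    """MNI by the most-abundant-element criterion: for each taxon,
    the largest count of any single (element, side) combination."""
    per_taxon = defaultdict(Counter)
    for taxon, element, side in records:
        per_taxon[taxon][(element, side)] += 1
    return {taxon: max(c.values()) for taxon, c in per_taxon.items()}

print(nisp(specimens))  # specimens counted per taxon
print(mni(specimens))   # minimum individuals per taxon
```

Here Genidens sp. yields NISP 3 but MNI 2 (two left otoliths imply at least two fish), which is why the two measures are reported separately.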
542

IMPLEMENTATION OF A STRICT OPTIMISTIC CONCURRENCY CONTROL PROTOCOL

THAKUR, KISHOREKUMAR SINGH 01 January 2008 (has links)
In today's database management systems (DBMS), concurrency control is one of the main issues drawing attention. Concurrency control protocols prevent changes to the database made by one user from interfering with those made by another. During the last couple of decades, many new concurrency control mechanisms were introduced into the study of database management systems. Researchers have designed new concurrency control algorithms and examined their performance in comparison with well-known concurrency control mechanisms that are widely used in today's database management systems. The results reported to date, rather than being definitive, have tended to be quite contradictory [1]. The main cause of such findings is the use of different assumptions and implications when defining a simulation model for database management systems. Different coding schemes and logical program flows play another important role in producing questionable results. In this paper, rather than proposing yet another concurrency control algorithm, I implement a standardized simulation model within a Windows application that can then be used by any researcher to test the performance of their concurrency control protocol. I implement an optimistic concurrency control protocol to validate the functionality of my application and compare it with the two-phase locking protocol.
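Optimistic concurrency control lets transactions run without locks and validates their read/write sets only at commit time. A minimal backward-validation sketch of the idea (an illustration of the general technique, not the thesis's actual Windows simulation model):

```python
class OCCManager:
    """Toy optimistic concurrency control with backward validation."""

    def __init__(self):
        self.committed = []  # list of (commit_timestamp, write_set)
        self.clock = 0

    def begin(self):
        # A transaction records when it started and what it touched.
        return {"start": self.clock, "reads": set(), "writes": set()}

    def read(self, txn, key):
        txn["reads"].add(key)

    def write(self, txn, key):
        txn["writes"].add(key)  # writes are buffered until commit

    def commit(self, txn):
        # Backward validation: abort if any transaction that committed
        # after this one started wrote an item this one read.
        for commit_ts, write_set in self.committed:
            if commit_ts > txn["start"] and write_set & txn["reads"]:
                return False  # conflict detected -> caller must restart
        self.clock += 1
        self.committed.append((self.clock, frozenset(txn["writes"])))
        return True

mgr = OCCManager()
t1, t2 = mgr.begin(), mgr.begin()
mgr.read(t2, "x")        # t2 reads x while t1 is writing it
mgr.write(t1, "x")
ok1 = mgr.commit(t1)     # validates cleanly, commits
ok2 = mgr.commit(t2)     # fails: t1's committed write hits t2's read set
```

Under low contention most validations succeed and no lock overhead is paid; under high contention, restarts like t2's become the dominant cost, which is the trade-off such simulation studies measure against two-phase locking.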
543

An evaluation of non-relational database management systems as suitable storage for user generated text-based content in a distributed environment

Du Toit, Petrus 07 October 2016 (has links)
Non-relational database management systems address some of the limitations relational database management systems have when storing large volumes of unstructured, user-generated text-based data in distributed environments. They follow different approaches through the data model they use, their ability to scale data storage over distributed servers and the programming interface they provide. An experimental approach was followed to measure the capabilities these alternative database management systems present in addressing the limitations of relational databases in terms of their capability to store unstructured text-based data, their data-warehousing capabilities, their ability to scale data storage across distributed servers and the level of programming abstraction they provide. The results of the research highlighted the limitations of relational database management systems. The different database management systems address certain limitations, but not all. Document-oriented databases provide the best results and successfully address the need to store large volumes of user-generated text-based data in a distributed environment. / School of Computing / M. Sc. (Computer Science)
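The document-oriented model the study favours stores each record as a self-describing document, so user-generated items with differing fields need no shared schema. A toy in-memory sketch of the idea (illustrative only; not the API of any specific NoSQL product):

```python
import itertools
import json

class DocumentStore:
    """Toy in-memory document store: schemaless records, query by field."""

    def __init__(self):
        self._docs = {}
        self._ids = itertools.count(1)

    def insert(self, doc):
        doc_id = next(self._ids)
        # Round-trip through JSON to mimic document serialization
        # and to store an independent copy.
        self._docs[doc_id] = json.loads(json.dumps(doc))
        return doc_id

    def find(self, **criteria):
        # Documents may have different fields; a missing field
        # simply fails to match.
        return [d for d in self._docs.values()
                if all(d.get(k) == v for k, v in criteria.items())]

store = DocumentStore()
store.insert({"author": "ann", "text": "first comment", "tags": ["intro"]})
store.insert({"author": "bob", "text": "reply"})  # no 'tags' field: fine
hits = store.find(author="ann")
```

A relational store would force both records into one table with NULL-padded columns or a separate tags table; the document model instead pushes that flexibility to query time, which is the trade-off the evaluation measures.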
544

Desenvolvimento de um banco de dados (HTLV-1 Molecular Epidemiology Database) para data mining e data management de sequências do HTLV-1

Araújo, Thessika Hialla Almeida January 2012 (has links)
Fundação Oswaldo Cruz. Centro de Pesquisas Gonçalo Moniz. Salvador, Bahia, Brasil / As pesquisas biológicas geram uma grande quantidade de informações que devem ser armazenadas e gerenciadas, permitindo que os usuários tenham acesso a dados completos sobre o tema de interesse. O volume de dados não relacionados gerados nas pesquisas com HTLV-1 justifica a criação de um banco de dados que contenha o maior número de informações sobre o vírus e seus aspectos epidemiológicos, para que se possam estabelecer melhores relações sobre infecção, patogênese, origem e, principalmente, evolução. Os dados foram obtidos a partir de pesquisa no GenBank, em artigos relacionados e diretamente com os autores dos dados. O banco de dados foi desenvolvido utilizando o Apache Webserver 2.1.6 e o SGBD MySQL. A webpage foi desenvolvida em HTML e os scripts em PHP. Atualmente temos cadastradas 2435 sequências, sendo que 1968 (80,8%) representam diferentes isolados. Em relação ao status clínico, o banco de dados tem informação de 40,49% das sequências, das quais 43%, 18,69%, 32,7% e 5,61% são TSP/HAM, ATL, assintomático e outras doenças, respectivamente. Quanto ao gênero e à idade, tem-se informação de 15,4% e 10,56%, respectivamente. O HTLV-1 Molecular Epidemiology Database está hospedado no servidor do Centro de Pesquisas Gonçalo Moniz/FIOCRUZ-BA, com acesso em http://htlv1db.bahia.fiocruz.br/, sendo um repositório de sequências do HTLV-1 com informações clínicas, epidemiológicas e geográficas.
Esta base de dados dará apoio às investigações clínicas e às pesquisas para o desenvolvimento de vacinas. / Scientific development has generated a large amount of data that should be stored and managed in order for researchers to have access to complete data sets. Information generated from research on HTLV-1 warrants the design of databases to aggregate data from a range of epidemiological aspects. This database would support further research on HTLV-1 viral infections, pathogenesis, origins, and evolutionary dynamics. All data were obtained from publications available at GenBank or through contact with the authors. The database was developed using Apache Webserver 2.1.6 and the MySQL DBMS. The webpage interfaces were developed in HTML and the server-side scripting written in PHP. There are currently 2,435 registered sequences, with 1,968 (80.8%) of those sequences representing different isolates. Of these sequences, 40.49% are annotated with clinical status (TSP/HAM, 43%; ATLL, 18.69%; asymptomatic, 32.7%; other diseases, 5.61%). Further, 15.4% of sequences contain information on patient gender, while 10.56% provide the age of the patient. The HTLV-1 Molecular Epidemiology Database is hosted on the Gonçalo Moniz/FIOCRUZ-BA research center server with access at http://htlv1db.bahia.fiocruz.br/. Here, we have developed a repository of HTLV-1 genetic sequences with clinical, epidemiological, and geographical information. This database will support clinical research and vaccine development related to viral genotype.
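The abstract describes a MySQL-backed sequence repository with partially annotated clinical metadata. A much-simplified relational sketch of how a coverage figure like the reported 40.49% could be computed (SQLite is used here for portability, and the column names and rows are assumptions, not the real schema or data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE sequences (
    accession        TEXT PRIMARY KEY,
    isolate          TEXT,
    clinical_status  TEXT,   -- e.g. 'TSP/HAM', 'ATLL', 'asymptomatic'
    gender           TEXT,
    age              INTEGER)""")

# Fabricated example rows; real entries come from GenBank and authors.
conn.executemany("INSERT INTO sequences VALUES (?, ?, ?, ?, ?)", [
    ("SEQ001", "iso1", "TSP/HAM", "F", 42),
    ("SEQ002", "iso2", "ATLL", None, None),
    ("SEQ003", "iso3", None, None, None),
])

# COUNT(column) skips NULLs, so the ratio below is the share of
# sequences annotated with a clinical status.
with_status, total = conn.execute(
    "SELECT COUNT(clinical_status), COUNT(*) FROM sequences").fetchone()
print(f"{with_status}/{total} sequences have clinical status")
```

The same NULL-skipping pattern yields the gender and age coverage percentages the abstract reports.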
545

Subarachnoid Hemorrhage: The Ottawa Hospital Experience

English, Shane January 2014 (has links)
Background: Primary subarachnoid hemorrhage (1° SAH) is an important disease that causes significant morbidity and mortality. The sparse Canadian epidemiologic literature on 1° SAH is outdated and relies on diagnostic coding for case ascertainment, which misses true cases and incorrectly labels non-cases. Objectives: The primary objective was to identify all patients with 1° SAH presenting to The Ottawa Hospital (TOH) between July 1, 2002 and June 30, 2011 by deriving and validating a search algorithm using an enriched administrative database. Secondary objectives were: 1) determine incidence and case-fatality rates (CFR) of 1° SAH at TOH; and 2) derive and validate a method to identify 1° SAH using routinely collected administrative data. Methods: A cohort of 1° SAH patients was identified with a case-defining algorithm that was derived and validated using a combination of cerebrospinal fluid analysis results and text-search algorithms applied to both cranial imaging and post-mortem reports. The incidence of 1° SAH was calculated using the total number of hospital encounters over the same time period. CFR was calculated by linking to vital-statistics data of hospitalized patients at discharge. An optimal 1° SAH prediction model was derived and validated using binomial recursive partitioning built with independent variables obtained from routinely collected administrative data. Results: Using the case-defining algorithm, 831 patients were identified with a 1° SAH over the study period. Hospital incidence of 1° SAH was 17.2 events per 10,000 inpatient encounters (or 0.17% of encounters) with a case-fatality rate of 18.1%. A validated SAH prediction model based on administrative data using recursive partitioning had a sensitivity of 96.5% (95% CI 93.9-98.0), a specificity of 99.8% (95% CI 99.6-99.9), and a +LR of 483 (95% CI 254-879). This results in a post-test probability of disease of 45%.
Conclusion: We identified almost all cases of 1° SAH at our hospital using an enriched administrative database. Accurately identifying such patients with routinely collected health administrative data is possible, providing important opportunities to examine and study this patient population. Further studies involving multiple centres are needed to reproduce these results.
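The reported post-test probability follows from Bayes' theorem applied via likelihood ratios: post-test odds = pre-test odds × LR+, with LR+ = sensitivity / (1 − specificity). Plugging in the abstract's own figures (incidence of 17.2 per 10,000 encounters as the pre-test probability, sensitivity 96.5%, specificity 99.8%) reproduces both the +LR near 483 and the 45% post-test probability:

```python
def post_test_probability(pretest_p, sensitivity, specificity):
    """Bayes via likelihood ratios: post-test odds = pre-test odds * LR+."""
    lr_pos = sensitivity / (1.0 - specificity)
    pre_odds = pretest_p / (1.0 - pretest_p)
    post_odds = pre_odds * lr_pos
    return post_odds / (1.0 + post_odds), lr_pos

# Figures from the abstract: 17.2 events per 10,000 inpatient encounters,
# sensitivity 96.5%, specificity 99.8%.
p, lr = post_test_probability(17.2 / 10_000, 0.965, 0.998)
print(f"LR+ ~ {lr:.0f}, post-test probability ~ {p:.0%}")
```

The huge likelihood ratio is what lifts a 0.17% baseline risk to roughly 45% once the algorithm flags an encounter.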
546

A Comparative Analysis of Graph Vs Relational Database For Instructional Module Development System

January 2017 (has links)
abstract: In today's data-driven world, every datum is connected to a large amount of other data. Relational databases have proven themselves pioneers in the field of data storage and manipulation since the 1970s, but more recently they have been challenged by NoSQL graph databases in handling data models that have an inherent graphical representation. Graph databases, with the ability to store physical relationships between nodes and native graph-processing techniques, have been doing exceptionally well in graph data storage and management for applications like recommendation engines, biological modeling, network modeling and social media applications. The Instructional Module Development System (IMODS) is a web-based software system that guides STEM instructors through the complex task of curriculum design, ensures tight alignment between various components of a course (i.e., learning objectives, content, assessments), and provides relevant information about research-based pedagogical and assessment strategies. The data model of IMODS is highly connected and has an inherent graphical representation between all its entities, with numerous relationships between them. This thesis focuses on developing an algorithm to determine the completeness of a course design developed using IMODS. As part of this research objective, the study also analyzes the data model for the best-fit database to run these algorithms. As part of this thesis, two separate applications abstracting the data model of IMODS have been developed - one with Neo4j (graph database) and another with PostgreSQL (relational database). The research objectives of the thesis are as follows: (i) evaluate the performance of Neo4j and PostgreSQL in handling complex queries that will be fired throughout the life cycle of the course design process; (ii) devise an algorithm to determine the completeness of a course design developed using IMODS.
This thesis presents the process of creating a data model for PostgreSQL and converting it into a graph data model to be abstracted by Neo4j, creating SQL and Cypher scripts for undertaking experiments on both platforms, testing, elaborate analysis of the results, and evaluation of the databases in the context of IMODS. / Dissertation/Thesis / Masters Thesis Computer Science 2017
547

Ambiente data cleaning: suporte extensível, semântico e automático para análise e transformação de dados

Jardini, Toni [UNESP] 30 November 2012 (has links) (PDF)
Um dos grandes desafios e dificuldades para se obter conhecimento de fontes de dados é garantir a consistência e a não duplicidade das informações armazenadas. Diversas técnicas e algoritmos têm sido propostos para minimizar o custoso trabalho de permitir que os dados sejam analisados e corrigidos. Porém, ainda há outras vertentes essenciais para se obter sucesso no processo de limpeza de dados, que envolvem diversas áreas tecnológicas: desempenho computacional, semântica e autonomia do processo. Diante desse cenário, foi desenvolvido um ambiente data cleaning que contempla uma coleção de ferramentas de suporte à análise e transformação de dados de forma automática, extensível, com suporte semântico e aprendizado, independente de idioma. O objetivo deste trabalho é propor um ambiente cujas contribuições cobrem problemas ainda pouco explorados pela comunidade científica na área de limpeza de dados, como semântica e autonomia na execução da limpeza, e que possui, dentre seus objetivos, diminuir a interação do usuário no processo de análise e correção de inconsistências e duplicidades. Dentre as contribuições do ambiente desenvolvido, a eficácia se mostra significativa, cobrindo aproximadamente 90% do total de inconsistências presentes na base de dados, com percentual de casos de falsos-positivos de 0%, sem necessidade da interação do usuário / One of the great challenges and difficulties in obtaining knowledge from data sources is ensuring consistency and non-duplication of stored data. Many techniques and algorithms have been proposed to minimize the hard work of allowing data to be analyzed and corrected.
However, there are still other aspects essential to the success of the data cleaning process, involving several technological areas: performance, semantics and process autonomy. Against this backdrop, a data cleaning environment has been developed which includes a collection of tools for automatic data analysis and processing, extensible, with language-independent semantic and learning support. The objective of this work is to propose an environment whose contributions cover problems as yet little explored by the data cleaning scientific community, such as semantics and autonomy in the data cleaning process, and which has, among its objectives, to reduce user interaction in the process of analyzing and correcting data inconsistencies and duplications. Among the contributions of the developed environment, its efficacy proves significant, covering approximately 90% of the inconsistencies present in the database, with 0% false-positive cases and no need for user interaction.
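One common building block of such duplicate-detection pipelines is an accent- and case-insensitive similarity test between candidate values. A small language-independent sketch of that step (illustrative only; the thesis's environment is far richer, with semantic and learning support, and its thresholds are its own):

```python
import unicodedata
from difflib import SequenceMatcher

def normalize(s):
    """Strip accents, case and extra whitespace before comparison."""
    s = unicodedata.normalize("NFKD", s)
    s = "".join(ch for ch in s if not unicodedata.combining(ch))
    return " ".join(s.lower().split())

def is_duplicate(a, b, threshold=0.9):
    """Flag two values as probable duplicates above a similarity cutoff."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

dup = is_duplicate("São Paulo", "SAO  paulo")        # accents/case/spacing differ
distinct = is_duplicate("São Paulo", "Rio de Janeiro")
```

Normalizing before comparing is what makes the check idiom-independent: the 0.9 threshold here is an assumed cutoff, tuned in practice to trade missed duplicates against false positives.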
548

A web-based biodiversity toolkit as a conservation management tool for natural fragments in an urban context

Gibbs, Dalton Jerome January 2017 (has links)
Magister Scientiae (Biodiversity and Conservation Biology) - MSc (Biodiv and Cons Biol) / The collection of biological information has a long history, motivated by a variety of reasons, and in more recent years has largely been driven by research and academic purposes. As a result, biological information is often linked to a specific species or to ecosystem management and is discipline-specific, not relating to general management actions at a specific conservation site. The biological data that exist are often not consolidated in a central place to allow for effective management of conservation sites. Different databases and formats are often used to cover biological, infrastructural, heritage and management information. Biological information has traditionally not influenced real-time site-specific conservation management, with long-term data sets being used to draw conclusions before they can influence management actions. In order to overcome this problem of scattered and unfocused data, a biodiversity database related to specific site management was developed. This study focuses on the development of this database and its links to the management of spatially defined sites. Included in the solution to scattered data are information management tools which interpret data and convert it into management actions, both in terms of long-term trends and immediate real-time management actions as the information is received and processed.
549

'n Studie van 'n aantal gelyktydigheidsbeheerprotokolle vir databasisse

Kruger, Hanlie 18 March 2014 (has links)
M.Sc. (Computer Science) / Concurrency control is the problem that exists in a database management system when more than one transaction or application is executed simultaneously. If transactions or applications are executed sequentially, there will be no problem with the allocation of resources. It is, however, necessary to execute transactions concurrently to utilise computer and resource capacity to its maximum extent. It can lead to inconsistent data if this concurrent execution of transactions is not properly controlled. If this should happen, the data would be of no more use to the users of a system. The thesis is divided as follows. Chapter 1 gives background information on the concurrency control problem. In chapter 2 a couple of mechanisms for solving the concurrency control problem are studied briefly. Chapters 3 and 4 provide a more in-depth study of two specific mechanisms, namely two-phase locking and timestamps. Both of these mechanisms have already been implemented in systems to solve the concurrency control problem. In chapter 5 a comparison is made of the two methods described in chapters 3 and 4. A third method for handling concurrency control, which has not yet received much attention from researchers, is briefly described in chapter 6. In the last chapter, chapter 7, the concurrency control method used in the SDD-1 system is studied in more detail. SDD-1 is a distributed database management system.
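The timestamp mechanism studied in chapter 4 can be sketched as basic timestamp ordering: each data item remembers the largest timestamps that read and wrote it, and any operation arriving "too late" forces its transaction to abort and restart with a fresh timestamp. An illustrative sketch of the rule (not the thesis's own code):

```python
class TimestampOrdering:
    """Toy basic timestamp-ordering scheduler over a table of items."""

    def __init__(self):
        self.read_ts = {}   # largest timestamp that has read each item
        self.write_ts = {}  # largest timestamp that has written each item

    def read(self, ts, key):
        # Reject a read if a younger transaction already wrote the item.
        if ts < self.write_ts.get(key, 0):
            return False  # abort; restart with a fresh timestamp
        self.read_ts[key] = max(ts, self.read_ts.get(key, 0))
        return True

    def write(self, ts, key):
        # Reject a write if a younger transaction already read or wrote it.
        if ts < self.read_ts.get(key, 0) or ts < self.write_ts.get(key, 0):
            return False
        self.write_ts[key] = ts
        return True

sched = TimestampOrdering()
ok_w2 = sched.write(2, "x")   # transaction T2 writes x
ok_r1 = sched.read(1, "x")    # older T1 arrives too late and is rejected
ok_r3 = sched.read(3, "x")    # younger T3 may read
ok_w2b = sched.write(2, "x")  # T2's later write now conflicts with T3's read
```

Unlike two-phase locking, no transaction ever waits here, so deadlock is impossible; the cost is paid instead in restarts, which is the trade-off the thesis's comparison in chapter 5 examines.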
550

A critical review of the IFIP TC11 Security Conference Series

Gaadingwe, Tshepo Gaadingwe January 2007 (has links)
Over the past few decades the field of computing has grown and evolved, and in this time information security research has experienced the same type of growth. The increase in the importance of, and interest in, information security research is reflected by the sheer number of research efforts being produced by different types of organizations around the world. One such organization is the International Federation for Information Processing (IFIP), more specifically the IFIP Technical Committee 11 (IFIP TC11). The IFIP TC11 community has had a rich history of producing high-quality information-security-specific articles for over 20 years now. IFIP TC11 therefore found it necessary to reflect on this history, mainly to try to discover where it came from and where it may be going. The 20th anniversary of its main conference presented an opportunity to begin such a study of its history, the core belief driving the study being that the future can only be realized and appreciated if the past is well understood. The main area of interest was to find topics which may have had prevalence in the past or could be considered "hot" topics. To achieve this, the author developed a systematic process for the study, its underpinning element being the creation of a classification scheme used to aid the analysis of IFIP TC11's 20 years' worth of articles. Major themes were identified and trends in the series highlighted, with further discussion and reflection on these trends. It was found that, not surprisingly, the series covered a wide variety of topics over the 20 years. However, there has been a notable move towards technically focused papers. Furthermore, topics such as business continuity have just about disappeared from the series, while topics related to networking and cryptography continue to gain prevalence.
