About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Empfehlungen zur RDF-Repräsentation bibliografischer Daten

DINI, AG KIM Gruppe Titeldaten, 2 June 2014 (PDF)
In recent years, a large number of datasets from cultural and research institutions have been published as Linked Open Data, and the German library community has participated actively in these developments. Data previously available only in library catalogues can now be opened up to other sectors and embedded in external applications in many ways. A shared goal of publishing library data as Linked Data is to enable interoperability and reuse, and thereby to connect more closely with domains outside the library world. Linked Data services exist both at individual libraries and at the German library consortia. Despite their common goal, however, the existing services do not speak the same language, because they are based on different data models. To ensure the interoperability of these data sources, the services should in future follow a uniform modelling approach. Against this background, a working group was founded in January 2012 in which all German library consortia, the Deutsche Nationalbibliothek, and several other interested and engaged colleagues with relevant expertise are represented. The Gruppe Titeldaten has operated since April 2012 as a subgroup of the Kompetenzzentrum Interoperable Metadaten (DINI-AG KIM); moderation and coordination lie with the Deutsche Nationalbibliothek. The OBVSG joined the working group in December 2012, followed by the Schweizerische Nationalbibliothek in May 2013. The present recommendations are intended to contribute to a harmonization of RDF representations of title data in the German-speaking area and, ideally, to establish a quasi-standard.
Internationally, too, work is under way on the challenge of transferring existing library structures into the concepts of the Semantic Web available today and of exploiting their added value. The latest international developments in publishing bibliographic data in the Semantic Web, such as the Library of Congress Bibliographic Framework Transition Initiative (BIBFRAME), likewise aim to provide a model for the RDF representation of library data. The Gruppe Titeldaten is following these developments and intends to contribute the experience and requirements of the German-speaking library world, taking up internationally developed recommendations on the one hand and feeding impulses from the national cooperation into them on the other. The properties used here could, for example, serve as the basis for a mapping to BIBFRAME.
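To make the idea of an RDF representation of title data concrete, here is a minimal, dependency-free Python sketch that serializes one bibliographic record as Turtle-like triples. The record URI and the choice of Dublin Core terms are illustrative assumptions, not the working group's actual recommendations.

```python
# Minimal illustrative sketch: one bibliographic title record as RDF
# triples, serialized as Turtle-style text. The vocabulary (dcterms)
# and the example URI are assumptions for illustration only.

PREFIXES = {
    "dcterms": "http://purl.org/dc/terms/",
}

# (subject, predicate, object) triples for a hypothetical record URI.
triples = [
    ("<http://example.org/record/1>", "dcterms:title",
     '"Empfehlungen zur RDF-Repräsentation bibliografischer Daten"@de'),
    ("<http://example.org/record/1>", "dcterms:issued", '"2014"'),
]

def to_turtle(prefixes, triples):
    """Render prefix declarations followed by one triple per line."""
    lines = [f"@prefix {p}: <{iri}> ." for p, iri in prefixes.items()]
    lines += [f"{s} {p} {o} ." for s, p, o in triples]
    return "\n".join(lines)

print(to_turtle(PREFIXES, triples))
```

In practice a library such as rdflib would be used instead of hand-rolled serialization; the point here is only that a shared choice of properties (the "uniform modelling" the abstract calls for) is what makes such outputs interoperable across services.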
2

Development of fast magnetic resonance imaging methods for investigation of the brain

Grieve, Stuart Michael, 2000
No description available.
3

Estratégia de manutenção em uma oficina de cilindros de laminação de aços longos

Monteiro, Guilherme Arthur Brunet, 29 July 2013
The steel market as a whole undergoes constant change driven by new entrants and market fluctuations, and in an extremely competitive context steel producers follow an arduous path in the relentless pursuit of globally competitive costs. This pursuit of cost reduction prompts a review of standards and concepts about the business, giving rise to new ideas and new ways of doing what has long been done the same way. This dissertation presents the application of a methodology based on maintenance concepts to a roll shop in a long-steel rolling mill. Centred on assembly, disassembly, calibration, and operational adjustments, a roll shop involves extensive recovery and replacement of items, relying heavily on operational inspections. The proposed model addresses these inspections, making the frequency, parameters, and sequencing of each activity explicit, and applies preventive maintenance to specific assemblies. All of this is based on historical data, obtained as a snapshot, analysing the behaviour of faults and breakdowns so that the type of intervention can be decided in terms of technology, methodology, and periodicity or frequency, the latter two being obtained with the delay time analysis concept.
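The delay time analysis concept the abstract mentions models a failure as a defect that arises at some point and only becomes a breakdown after a random "delay time"; inspections catch defects whose delay has not yet elapsed. A small Monte Carlo sketch can illustrate how the inspection interval trades off against breakdowns. The distributions chosen here (uniform defect arrivals, exponential delay times) are assumptions for illustration, not taken from the dissertation.

```python
# Illustrative Monte Carlo sketch of the delay time concept: a defect
# arises at a random time within an inspection interval T and turns
# into a breakdown after a random delay time. Defects still latent at
# the next inspection are caught; the rest become failures.
# Assumed distributions are illustrative only.
import random

def failure_fraction(interval_t, mean_delay, trials=100_000, seed=42):
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        arrival = rng.uniform(0.0, interval_t)     # defect arises in [0, T)
        delay = rng.expovariate(1.0 / mean_delay)  # time until breakdown
        if arrival + delay < interval_t:           # breaks before inspection
            failures += 1
    return failures / trials

# Shorter inspection intervals catch more defects before they fail.
for t in (10.0, 30.0, 90.0):
    frac = failure_fraction(t, mean_delay=20.0)
    print(f"T={t:5.1f} days -> failure fraction {frac:.3f}")
```

Balancing this failure fraction against the cost of each inspection is what lets the method recommend a periodicity, which is the role delay time analysis plays in the dissertation's maintenance model.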
5

Serializable Isolation for Snapshot Databases

Cahill, Michael James, 2009
PhD thesis. Many popular database management systems implement a multiversion concurrency control algorithm called snapshot isolation rather than providing full serializability based on locking. There are well-known anomalies permitted by snapshot isolation that can lead to violations of data consistency by interleaving transactions that would maintain consistency if run serially. Until now, the only way to prevent these anomalies was to modify the applications by introducing explicit locking or artificial update conflicts, following careful analysis of conflicts between all pairs of transactions. This thesis describes a modification to the concurrency control algorithm of a database management system that automatically detects and prevents snapshot isolation anomalies at runtime for arbitrary applications, thus providing serializable isolation. The new algorithm preserves the properties that make snapshot isolation attractive, including that readers do not block writers and vice versa. An implementation of the algorithm in a relational database management system is described, along with a benchmark and performance study, showing that the throughput approaches that of snapshot isolation in most cases.
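The classic anomaly the abstract alludes to is write skew: two transactions each read a shared invariant from their own snapshot, write disjoint items, and so commit without a write-write conflict even though the serial outcome would differ. A toy in-memory sketch (the standard "doctors on call" example, not the thesis's detection algorithm) makes the interleaving explicit:

```python
# Toy sketch of the write-skew anomaly permitted by snapshot isolation.
# Two doctors each check that at least one doctor remains on call, then
# take themselves off call. Under SI each transaction reads its own
# snapshot, both checks pass, the write sets are disjoint, and both
# commit -- breaking the invariant (>= 1 doctor on call).
# Illustration of the anomaly only, not of the thesis's algorithm.

db = {"alice_on_call": True, "bob_on_call": True}

def run_under_snapshot_isolation(db):
    # Both transactions take their snapshot before either commits.
    snap_t1 = dict(db)
    snap_t2 = dict(db)

    writes = {}
    # T1: if someone else is on call, Alice goes off call.
    if snap_t1["bob_on_call"]:
        writes["alice_on_call"] = False
    # T2: if someone else is on call, Bob goes off call.
    if snap_t2["alice_on_call"]:
        writes["bob_on_call"] = False

    # Disjoint write sets -> no write-write conflict, so both commit.
    db.update(writes)
    return db

result = run_under_snapshot_isolation(db)
print(result)  # {'alice_on_call': False, 'bob_on_call': False}
```

Run serially, whichever transaction went second would see the other's write and leave one doctor on call; the thesis's contribution is detecting such dangerous read-write dependency patterns at runtime so the system can abort one of the transactions.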
7

Multi-Master Replication for Snapshot Isolation Databases

Chairunnanda, Prima, 2013
Lazy replication with snapshot isolation (SI) has emerged as a popular choice for distributed databases. However, lazy replication requires the execution of update transactions at one (master) site so that it is relatively easy for a total SI order to be determined for consistent installation of updates in the lazily replicated system. We propose a set of techniques that support update transaction execution over multiple partitioned sites, thereby allowing the master to scale. Our techniques determine a total SI order for update transactions over multiple master sites without requiring global coordination in the distributed system, and ensure that updates are installed in this order at all sites to provide consistent and scalable replication with SI. We have built our techniques into PostgreSQL and demonstrate their effectiveness through experimental evaluation.
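One generic way to obtain a total commit order across multiple sites without a central coordinator, in the spirit of what the abstract describes, is to tag commits with Lamport-style timestamps (counter, site id) that every replica can sort identically. The sketch below illustrates that general idea only; it is an assumption for illustration, not the specific protocol the thesis built into PostgreSQL.

```python
# Generic sketch: a deterministic total commit order across sites via
# Lamport-style (counter, site_id) timestamps assigned locally and
# merged identically everywhere. Illustrative only -- not the thesis's
# actual PostgreSQL protocol.
from dataclasses import dataclass, field

@dataclass
class Site:
    site_id: int
    clock: int = 0
    log: list = field(default_factory=list)

    def commit(self, txn):
        self.clock += 1
        # (clock, site_id) pairs are unique, so sorting them yields
        # the same total order at every replica.
        self.log.append(((self.clock, self.site_id), txn))

    def observe(self, remote_clock):
        # Keep the local clock ahead of anything already seen,
        # e.g. via clock values piggybacked on replication traffic.
        self.clock = max(self.clock, remote_clock)

a, b = Site(1), Site(2)
a.commit("T1")
b.observe(a.clock)
b.commit("T2")
a.observe(b.clock)
a.commit("T3")

merged = sorted(a.log + b.log)
print([t for _, t in merged])  # every site derives: ['T1', 'T2', 'T3']
```

Installing updates at every replica in this merged order is what gives each site a consistent snapshot sequence, which is the property the abstract's techniques must preserve while letting the master scale across partitions.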
8

Molecular methods for genotyping selected detoxification and DNA repair enzymes / J. Labuschagne

Labuschagne, Jeanine, 2010
The emerging field of personalized medicine and the prediction of side effects caused by pharmaceutical drugs are being studied intensively in the post-genomic era. The molecular basis of inheritance and disease susceptibility is being unravelled, especially through the use of rapidly evolving new technologies, which in turn facilitate analyses of individual variation across the whole genome in both single subjects and large groups. Genetic variation is common, and although most variants have no apparent effect on the gene product, some do, such as an altered ability to detoxify xenobiotics. The human body has a highly effective detoxification system that detoxifies and excretes endogenous as well as exogenous toxins. Numerous studies have shown that specific genetic variations influence the efficacy of drug metabolism and consequently the dosage administered. The primary aim of this project was the local implementation and assessment of two genotyping approaches, namely the Applied Biosystems SNaPshot technique and the Affymetrix DMET microarray. A secondary aim was to investigate whether links could be found between the genetic data and the participants' biochemical detoxification profiles. I investigated both approaches and gained insight into which would be better for specific local applications, taking into consideration robustness and ease of implementation as well as cost effectiveness in terms of data generated. The final study cohort comprised 18 participants whose detoxification profiles were known. Genotyping was performed using the DMET microarray and SNaPshot techniques. The SNaPshot technique, performed locally, was used to genotype 11 SNPs related to DNA repair and detoxification. Each DMET microarray delivers significantly more data, genotyping 1931 genetic markers related to drug metabolism and transport.
In the absence of a local service supplier, the DMET microarrays were outsourced to DNALink in South Korea, which generated the raw data that was then analysed locally. I experienced many problems with the implementation of the SNaPshot technique; numerous avenues of troubleshooting were explored with varying degrees of success, and I concluded that SNaPshot technology is not the best-suited approach for genotyping. Data obtained from the DMET microarray was fed into the DMET Console software to obtain genotypes and was subsequently analysed with the help of the NWU statistical consultation services. Two approaches were followed: first, clustering the data, and second, a targeted-gene approach. Neither method was able to establish a relationship between the DMET genotyping data and the detoxification profiling. For future studies to successfully correlate SNPs or SNP groups with a specific detoxification profile, two key issues should be addressed: i) the procedure for determining the detoxification profile should be refined by more frequent sampling after substrate loading; ii) the number of participants should be increased to provide statistical power that enables a true representation of the relevant genetic markers in the specific population. Statistical analyses such as latent class analysis to cluster the participants will also be far more useful for data analysis and interpretation if the study is not underpowered. Thesis (M.Sc. (Biochemistry))--North-West University, Potchefstroom Campus, 2011.
10

Estudo da estrutura e filogenia da população do Rio de Janeiro através de SNPs do cromossomo Y

Santos, Simone Teixeira Bonecker dos, 2010
The Brazilian population is considered admixed, the product of a relatively recurrent and recent process. Millions of indigenous people lived here when the colonisation process began, involving mostly male European settlers, chiefly Portuguese, which made matings between European men and indigenous women common and started the ethnic heterogeneity found in our population. Later, with the arrival of enslaved Africans during the sugar-cane economic cycle, relationships between European men and African women also became common. It is essentially a tri-hybrid population whose composition today also includes other groups, among them Italians, Spaniards, Syrians, Lebanese, and Japanese. To better understand Brazilian phylogenetic roots, this study used bi-allelic markers from the non-recombining region of the Y chromosome. The objective was to analyse how these heterogeneous groups contributed to the paternally inherited gene pool found in the male population of Rio de Janeiro, and thus to enrich knowledge of the migratory movements that shaped this population. Thirteen single nucleotide polymorphisms (SNPs) were analysed by multiplex minisequencing, allowing the identification of nine haplogroups and four sub-haplogroups in a sample of 200 unrelated individuals resident in the State of Rio de Janeiro, chosen at random among participants in paternity studies at the Public Defender's Office of Rio de Janeiro. Of the haplogroups analysed, only R1a was not observed in our population. The most frequent haplogroup was the European-origin R1b1, at 51%, while the least frequent, at 1%, was Q1a3a, found among Native Americans. About 85% of the Y chromosomes analysed are of European origin, 10.5% of African origin, and 1% of Amerindian origin; the remainder are of undefined origin.
Compared with data from the literature, our population proved similar to the white population of Porto Alegre, and no significant difference was found between the gene pool of the male population of Rio de Janeiro and that of Portugal. The results corroborate the historical record of the founding of the Rio de Janeiro population during the 16th century, a period marked by a significant reduction of the Amerindian population and an important demographic contribution from sub-Saharan Africa and from Europe, mainly from the Portuguese. Given the high degree of admixture of our population and the advances in personalised medicine, studies of human genetic structure have fundamental implications for understanding evolution and its impact on human disease, since for this purpose skin colour is an unreliable predictor of an individual's ethnic ancestry.
