1 |
Development of fast magnetic resonance imaging methods for investigation of the brain. Grieve, Stuart Michael, January 2000
No description available.
|
2 |
Estratégia de manutenção em uma oficina de cilindros de laminação de aços longos. Monteiro, Guilherme Arthur Brunet, 29 July 2013
Previous issue date: 2013-07-29 / The steel market as a whole undergoes constant change driven by new entrants and market fluctuations, and in an extremely competitive context steel producers follow an arduous path in the relentless pursuit of globally competitive costs. This pursuit of cost reduction forces a review of the standards and concepts of the business, giving rise to new ideas and new ways of doing what has long been done in the same way. This dissertation presents the application of a methodology based on maintenance concepts to a roll shop in a long-steel rolling mill. Operating through assembly, disassembly, calibration and operational adjustments, a roll shop involves extensive recovery and replacement of items, with a strong reliance on operational inspections. The proposed model addresses these inspections, making the frequency, parameters and sequencing of the activity explicit, and also applies preventive maintenance to specific assemblies, all based on historical data obtained from snapshots and on an analysis of failure and breakdown behaviour. This allows the type of intervention to be decided, divided into technology, methodology, and periodicity or frequency, the latter two being obtained with the concept of delay time analysis.
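The delay time analysis mentioned above can be illustrated with the standard textbook delay-time model: defects arise at rate lambda, each defect takes a random "delay time" to develop into a breakdown, and the probability that a defect arising in an inspection interval of length T breaks down before the next inspection is b(T) = 1 - (1 - exp(-theta*T)) / (theta*T) for exponentially distributed delay times with mean 1/theta. The sketch below scans candidate inspection intervals for the lowest expected cost rate; all rates and costs are hypothetical placeholders, and this is a generic illustration of the concept rather than the dissertation's own model.

```python
import math

def breakdown_prob(T, theta):
    """P(a defect arising uniformly in (0, T) becomes a breakdown before the
    next inspection), assuming exponential delay times with mean 1/theta."""
    return 1.0 - (1.0 - math.exp(-theta * T)) / (theta * T)

def cost_rate(T, lam, theta, c_insp, c_repair, c_fail):
    """Expected cost per unit time when the roll shop is inspected every T days."""
    b = breakdown_prob(T, theta)
    defects_per_cycle = lam * T
    cycle_cost = c_insp + defects_per_cycle * (b * c_fail + (1 - b) * c_repair)
    return cycle_cost / T

# Hypothetical figures: 0.2 defects/day, mean delay time 10 days, inspection
# cost 50, planned repair 200, breakdown 2000 (arbitrary monetary units).
best_T = min(range(1, 61), key=lambda T: cost_rate(T, 0.2, 0.1, 50, 200, 2000))
print("lowest-cost inspection interval:", best_T, "days")
```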
|
3 |
Fireworks: A Fast, Efficient and Safe Serverless Framework. Shin, Wonseok, 01 June 2021
Serverless computing is a new paradigm, and it is rapidly becoming popular in cloud computing. Serverless computing has the interesting, unique property that the unit of deployment and execution is a serverless function. Moreover, it introduces a new economic model, the pay-as-you-go billing model, and it provides high economic benefit to the application through highly elastic resource provisioning.
However, serverless computing also brings new challenges: (1) a start-up latency problem, because function execution times are relatively short; (2) a high security risk, because of the highly consolidated environment; and (3) a memory efficiency problem, because function invocations are unpredictable. These problems not only degrade performance but also lower the economic benefits for cloud providers.
In this work, we propose the VM-level pre-JIT snapshot and develop Fireworks to solve the three main challenges without any compromises. The key idea behind the VM-level pre-JIT snapshot is to leverage pre-JITted serverless function code to reduce both the start-up time and the execution time of the function, and to improve memory efficiency by sharing the pre-JITted code. Also, Fireworks can provide a high level of isolation by storing the pre-JITted code in the microVM's snapshot. Our evaluation shows that Fireworks outperforms state-of-the-art serverless platforms by up to 20.6× in start-up latency and improves memory efficiency by up to 7.3×. / Master of Science / Serverless computing is rapidly gaining popularity in cloud computing. Contrary to its name, developers write and run their code on servers managed by cloud providers. The number of servers and the required CPU and memory are automatically adjusted in proportion to the incoming traffic. Also, users pay only for what they use, and this pay-as-you-go model is attracting attention as a new form of infrastructure. Serverless computing continues to evolve, and it is an active research topic in both industry and academia. There are many efforts to reduce cold start, the delay in creating the necessary resources when a serverless program first runs. Serverless platforms prepare resources in advance or provide lighter cloud resources; however, this can waste resources or increase security threats. In this work, we propose a fast, efficient, and safe serverless framework. We use Just-In-Time (JIT) compilation, which can improve the performance of the interpreted languages that are widely used in serverless computing. We keep the JIT-generated machine code in a snapshot for reuse. Security is guaranteed by the VM-level snapshot. In addition, the snapshot can be shared, increasing memory efficiency. Through our implementation and evaluation, we have shown that Fireworks improves cold-start performance by up to 20 times and memory efficiency by more than 7 times compared with state-of-the-art serverless platforms. We believe our research opens a new way to use JIT compilation and snapshots in serverless computing.
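As a rough illustration of where the reported gains come from (a simplified model with made-up timings, not Fireworks' implementation), a conventional cold start pays for VM boot, language-runtime initialization, and JIT warm-up on every first invocation, whereas restoring a VM snapshot that already contains the pre-JITted code pays only the restore cost:

```python
# Illustrative cold-start model. All millisecond figures are hypothetical
# placeholders, chosen only to show which components a pre-JIT snapshot removes.
COLD_START_MS = {"vm_boot": 125, "runtime_init": 300, "jit_warmup": 450}
SNAPSHOT_RESTORE_MS = 40

def first_invocation_latency_ms(use_snapshot: bool) -> int:
    """Latency of the first invocation with and without a pre-JIT VM snapshot."""
    return SNAPSHOT_RESTORE_MS if use_snapshot else sum(COLD_START_MS.values())

cold = first_invocation_latency_ms(False)
restored = first_invocation_latency_ms(True)
print(f"cold start: {cold} ms, snapshot restore: {restored} ms, "
      f"speedup: {cold / restored:.1f}x")
```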
|
4 |
Serializable Isolation for Snapshot Databases. Cahill, Michael James, January 2009
PhD / Many popular database management systems implement a multiversion concurrency control algorithm called snapshot isolation rather than providing full serializability based on locking. There are well-known anomalies permitted by snapshot isolation that can lead to violations of data consistency by interleaving transactions that would maintain consistency if run serially. Until now, the only way to prevent these anomalies was to modify the applications by introducing explicit locking or artificial update conflicts, following careful analysis of conflicts between all pairs of transactions. This thesis describes a modification to the concurrency control algorithm of a database management system that automatically detects and prevents snapshot isolation anomalies at runtime for arbitrary applications, thus providing serializable isolation. The new algorithm preserves the properties that make snapshot isolation attractive, including that readers do not block writers and vice versa. An implementation of the algorithm in a relational database management system is described, along with a benchmark and performance study, showing that the throughput approaches that of snapshot isolation in most cases.
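One of the well-known anomalies referred to above is write skew, which can be made concrete with a small simulation. The sketch below illustrates the anomaly class only; it is not the detection algorithm proposed in the thesis, and the on-call scenario is a standard textbook example:

```python
# Write skew under snapshot isolation: two transactions each read a consistent
# snapshot, write disjoint rows, and both commit (no write-write conflict),
# yet the invariant "at least one doctor stays on call" is violated.
db = {"alice": "on_call", "bob": "on_call"}

def run_under_snapshot_isolation(tx1, tx2):
    snap1, snap2 = dict(db), dict(db)      # each transaction reads its own snapshot
    w1, w2 = tx1(snap1), tx2(snap2)        # each transaction computes its writes
    if w1.keys() & w2.keys():              # SI's first-committer-wins rule
        raise RuntimeError("write-write conflict: one transaction aborts")
    db.update(w1); db.update(w2)           # disjoint write sets both commit

def alice_goes_off_call(snapshot):
    # Looks safe from Alice's snapshot: Bob still appears to be on call.
    assert any(v == "on_call" for k, v in snapshot.items() if k != "alice")
    return {"alice": "off_call"}

def bob_goes_off_call(snapshot):
    assert any(v == "on_call" for k, v in snapshot.items() if k != "bob")
    return {"bob": "off_call"}

run_under_snapshot_isolation(alice_goes_off_call, bob_goes_off_call)
print(db)   # {'alice': 'off_call', 'bob': 'off_call'} -- the invariant is broken
```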
|
5 |
Multi-Master Replication for Snapshot Isolation Databases. Chairunnanda, Prima, January 2013
Lazy replication with snapshot isolation (SI) has emerged as a popular choice for distributed databases. However, lazy replication requires the execution of update transactions at one (master) site so that it is relatively easy for a total SI order to be determined for consistent installation of updates in the lazily replicated system. We propose a set of techniques that support update transaction execution over multiple partitioned sites, thereby allowing the master to scale. Our techniques determine a total SI order for update transactions over multiple master sites without requiring global coordination in the distributed system, and ensure that updates are installed in this order at all sites to provide consistent and scalable replication with SI. We have built our techniques into PostgreSQL and demonstrate their effectiveness through experimental evaluation.
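For intuition about the ordering problem (not the mechanism this thesis actually proposes), one generic way to obtain the same total order of update transactions at every site without a central coordinator is to stamp each commit with a Lamport-style logical clock and break ties by site id:

```python
# Generic sketch: a total commit order across master sites derived from
# (logical_clock, site_id) stamps. Sorting the merged logs by this pair yields
# the same order at every replica, with no global coordination.
from dataclasses import dataclass, field
from itertools import chain

@dataclass
class Site:
    site_id: int
    clock: int = 0
    log: list = field(default_factory=list)

    def commit(self, txn: str) -> None:
        self.clock += 1                                 # advance local logical clock
        self.log.append(((self.clock, self.site_id), txn))

    def receive(self, remote_stamp: tuple) -> None:
        self.clock = max(self.clock, remote_stamp[0])   # Lamport merge rule

a, b = Site(1), Site(2)
a.commit("T1"); b.commit("T2"); b.commit("T3")
a.receive(b.log[-1][0]); a.commit("T4")

total_order = sorted(chain(a.log, b.log))   # identical at every replica
print([txn for _, txn in total_order])      # ['T1', 'T2', 'T3', 'T4']
```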
|
6 |
Molecular methods for genotyping selected detoxification and DNA repair enzymes / J. Labuschagne. Labuschagne, Jeanine, January 2010
The emerging field of personalized medicine and the prediction of side effects of pharmaceutical drugs are being studied intensively in the post-genomic era. The molecular basis of inheritance and disease susceptibility is being unravelled, especially through the use of rapidly evolving new technologies. This in turn facilitates analyses of individual variation across the whole genome in both single subjects and large groups of subjects.
Genetic variation is a common occurrence, and although most genetic variations do not have any apparent effect on the gene product, some do exhibit effects, such as an altered ability to detoxify xenobiotics.
The human body has a highly effective detoxification system that detoxifies and excretes endogenous as well as exogenous toxins. Numerous studies have shown that specific genetic variations influence the efficacy of the metabolism of pharmaceutical drugs and consequently the dosage administered.
The primary aim of this project was the local implementation and assessment of two different genotyping approaches, namely the Applied Biosystems SNaPshot technique and the Affymetrix DMET microarray. A secondary aim was to investigate whether links could be found between the genetic data and the biochemical detoxification profile of participants. I investigated the approaches and gained insight into which method would be better for specific local applications, taking into consideration robustness and ease of implementation as well as cost effectiveness in terms of the data generated.
The final study cohort comprised 18 participants whose detoxification profiles were known. Genotyping was performed using the DMET microarray and SNaPshot techniques. The SNaPshot technique was used to genotype 11 SNPs relating to DNA repair and detoxification and was performed locally. Each DMET microarray delivers significantly more data, in that it genotypes 1931 genetic markers relating to drug metabolism and transport. Due to the absence of a local service supplier, the DMET microarrays were outsourced to DNALink in South Korea. DNALink generated raw data, which was analysed locally.
I experienced many problems with the implementation of the SNaPshot technique. Numerous avenues of troubleshooting were explored with varying degrees of success. I concluded that SNaPshot technology is not the best-suited approach for genotyping. Data obtained from the DMET microarray were fed into the DMET Console software to obtain genotypes and subsequently analysed with the help of the NWU statistical consultation services. Two approaches were followed: firstly, clustering the data and, secondly, a targeted gene approach. Neither of the two methods was able to establish a relationship between the DMET genotyping data and the detoxification profiling.
For future studies to successfully correlate SNPs or SNP groups with a specific detoxification profile, two key issues should be addressed: i) the procedure for determining the detoxification profile following substrate loading should be further refined by more frequent sampling after substrate loading; ii) the number of participants should be increased to provide statistical power that will enable a true representation of the particular genetic markers in the specific population. Statistical analyses, such as latent class analysis to cluster the participants, will also be of much more use for data analysis and interpretation if the study is not underpowered. / Thesis (M.Sc. (Biochemistry))--North-West University, Potchefstroom Campus, 2011.
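The point about statistical power can be made concrete with a standard two-proportion sample-size calculation. The sketch below is a generic illustration with hypothetical marker frequencies and conventional significance and power levels; it is not a calculation taken from the thesis:

```python
# Normal-approximation sample size per group for detecting a difference in
# marker frequency between two detoxification-profile groups. The frequencies
# p1, p2 and the alpha/power choices are hypothetical; the point is simply that
# the required n far exceeds the 18 participants available in this study.
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(num / (p1 - p2) ** 2) + 1

print(n_per_group(0.20, 0.45))   # about 54 participants per group
```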
|
7 |
Estudo da estrutura e filogenia da população do Rio de Janeiro através de SNPs do cromossomo Y. Santos, Simone Teixeira Bonecker dos, January 2010
Previous issue date: 2010 / Fundação Oswaldo Cruz. Instituto Oswaldo Cruz. Rio de Janeiro, RJ, Brasil / The Brazilian population is considered admixed, the result of a relatively recurrent and recent process. Millions of indigenous people lived here when the colonization process began, involving European settlers, mainly Portuguese men, which made unions between European men and indigenous women common and thus started the ethnic heterogeneity found in our population. Later, with the arrival of slaves during the sugarcane economic cycle, relationships between Europeans and African women began to occur. Essentially, it is a tri-hybrid population whose composition today also includes other groups, among them Italians, Spaniards, Syrians, Lebanese and Japanese. For a better understanding of Brazilian phylogenetic roots, this study used biallelic markers from the non-recombining region of the Y chromosome. The aim was to analyze how these heterogeneous groups contributed to the paternally inherited gene pool found in the male population of Rio de Janeiro, and thereby enrich our knowledge of the migratory movements involved in the structuring of this population. Thirteen single nucleotide polymorphisms (SNPs) were analyzed by multiplex minisequencing, allowing the identification of nine haplogroups and four sub-haplogroups in a sample of 200 unrelated individuals resident in the State of Rio de Janeiro, chosen at random among participants in paternity studies at the Defensoria Pública do Rio de Janeiro. Of the haplogroups analyzed, only R1a was not observed in our population. The most representative haplogroup was of European origin, R1b1, at 51%, while the least representative, at 1%, was Q1a3a, found among Native Americans. About 85% of the Y chromosomes analyzed are of European origin, 10.5% African and 1% Amerindian, with the remainder of undetermined origin. Compared with data from the literature, our population proved similar to the white population of Porto Alegre, and no significant difference was found between the gene pool of the male population of Rio de Janeiro and that of Portugal. These results corroborate the historical record of the founding of the population of Rio de Janeiro during the 16th century, a period in which a significant reduction of the Amerindian population was observed, together with an important demographic contribution from Sub-Saharan Africa and Europe, mainly from the Portuguese. Given the high degree of admixture of our population and the advances in personalized medicine, studies of human genetic structure have fundamental implications for understanding evolution and the impact on human diseases, since for this purpose skin color is an unreliable predictor of an individual's ethnic ancestry. / The Brazilian population is highly admixed, the result of a relatively recurrent and recent process. Millions of indigenous people were living here when the colonization process began, initially involving mainly Portuguese men. The immigration of European women during the first centuries was insignificant, making marriage between European men and indigenous women common and thus starting the ethnic heterogeneity found in our population today. Subsequently, with the arrival of slaves during the economic cycle of sugarcane, relationships between Europeans and Africans began. Basically, it is a tri-hybrid population with contributions from other groups, such as Italians, Germans, Syrians, Lebanese and Japanese. For a better understanding of Brazilian phylogenetic roots, biallelic markers of the non-recombining region of the Y chromosome were used in this study. The goal was to analyze how these heterogeneous groups contributed to the present-day genetic pool found in the population of Rio de Janeiro, and thus contribute to the understanding of migratory movements in the process of structuring this population. We analyzed 13 single nucleotide polymorphisms (SNPs) through multiplex minisequencing and were able to identify nine haplogroups and four sub-haplogroups in a sample of 200 unrelated individuals, residents of the State of Rio de Janeiro, chosen randomly among participants in paternity studies. Of the haplogroups examined, only R1a was not observed in our population. The most representative haplogroup was of European origin, R1b1, at 51%, while the least representative, at 1%, was Q1a3a, found among Native Amerindians. 85% of the Y chromosomes analyzed were European, 10.5% African and 1% Native Amerindian, and the rest have not had their origin defined. In this study sample, the vast majority of Y chromosomes proved to be of European origin. Indeed, there were no significant differences when the haplogroup frequencies in Brazil and Portugal were compared by means of an exact test of population differentiation. These results corroborate historical data on the foundation of the population of Rio de Janeiro during the 16th century, a period in which a significant reduction of the Amerindian population was observed, with an important demographic contribution from the Sub-Saharan region of Africa and Europe, particularly the Portuguese. In view of the high degree of admixture of the Brazilian population and advances in personalized medicine, research on human genetic structure has fundamental implications for understanding evolution and the impact on human diseases, since for this approach skin color is an unreliable predictor of an individual's ethnic ancestry.
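The population comparison reported above can be sketched, in simplified form, as a contingency-table test on haplogroup counts. The thesis uses an exact test of population differentiation; the sketch below substitutes a chi-square test and entirely hypothetical counts, purely to show the shape of the comparison (small expected counts are precisely why an exact test is preferred in practice):

```python
# Compare haplogroup count distributions between two populations with a
# chi-square contingency test. All counts are hypothetical placeholders.
from scipy.stats import chi2_contingency

haplogroups = ["R1b1", "E1b1", "J", "Q1a3a", "other"]
counts = [
    [102, 21, 18, 2, 57],   # hypothetical Rio de Janeiro sample (n = 200)
    [120, 15, 25, 0, 80],   # hypothetical Portuguese reference sample
]

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
# A large p-value would be consistent with no detectable differentiation
# between the two paternal gene pools, as the abstract reports.
```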
|
8 |
Development of a SNP Assay for the Differentiation of Allelic Variations in the mdx Dystrophic Mouse Model. Misyak, Sarah A., 06 June 2008
The purpose of this study was to develop a SNaPshot® assay to simultaneously discriminate between the dystrophic and wild-type (wt) alleles in mdx mice. The mdx mouse is an animal model for Duchenne muscular dystrophy (DMD), a severe and fatal muscle-wasting disease. To evaluate possible treatments and to carry out genetic studies, it is essential to distinguish between mice that carry the mutant dystrophic or wt allele(s). The current Amplification Refractory Mutation System (ARMS) assay used to genotype mdx mice is labor intensive and sometimes fails to yield typing results, which reduces its efficiency as a screening tool. An alternative assay based on single nucleotide polymorphism (SNP) extension technology (i.e., SNaPshot®) would be advantageous because its specificity and capability to be automated would reduce the labor involved and increase the fidelity of each assay. A SNaPshot® assay has been developed that provides a robust and potentially automatable assay that discriminates between the wt and dystrophic alleles. The assay has been optimized to use undiluted DNA in the PCR, a 0.1 µM PCR primer concentration, the full PCR product for the SNP extension reaction, a 50°C annealing temperature for the SNP extension in accordance with standard SNaPshot® conditions, and a 0.4 µM SNP extension primer concentration. The advantages of the resultant SNaPshot® assay over the ARMS assay include higher fidelity, robustness, more consistent performance within and among laboratories, and reduced risk of human error. / Master of Science
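The optimized primer concentrations translate directly into pipetting volumes via C1V1 = C2V2. The sketch below assumes a 10 µM primer working stock and 25 µL / 10 µL reaction volumes, which are hypothetical choices rather than values from the thesis:

```python
# C1*V1 = C2*V2 dilution arithmetic for the reported target concentrations
# (0.1 uM PCR primer, 0.4 uM SNP extension primer). Stock concentration and
# reaction volumes are hypothetical.
def stock_volume_ul(stock_um: float, final_um: float, reaction_ul: float) -> float:
    """Volume of primer stock that brings the reaction to the target concentration."""
    return final_um * reaction_ul / stock_um

print(stock_volume_ul(10.0, 0.1, 25.0))   # 0.25 uL of stock per 25 uL PCR
print(stock_volume_ul(10.0, 0.4, 10.0))   # 0.40 uL of stock per 10 uL extension reaction
```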
|