481

Use of constructivism in the development and evaluation of an educational game environment.

Seagram, Robert. January 2004 (has links)
Formal learning contexts often present information to learners in an inert and highly abstract form, making it unlikely that learners will ever use this information in their everyday lives. Learners do, however, show a greater propensity for retaining information that is seen as having relevance in their lives. Constructivism is an educational paradigm that has gained popularity amongst educationists. The core tenet of this paradigm is that learners learn through interaction with their environment and that all knowledge construction is based on previous life experience. Information that is presented to learners in a contextualised form not only has a better chance of being retained in long-term memory, but also has a greater likelihood of being applied in relevant life situations. This publication deals with the research, design and delivery of important information concerning diseases that have a major impact in Southern Africa. Firstly, learners at the University of Natal, Durban were polled for their existing knowledge concerning four widespread diseases, namely HIV/AIDS, tuberculosis, malaria and cancer. Aspects of these diseases where learners demonstrated a low level of awareness were defined as the primary learning objectives for an educational 3D-immersive microworld. Areas of knowledge concerning the transmission, symptomatic expression, biology and prevention of these diseases were generally not well represented in the learner sample. Hence, information regarding these aspects is presented to learners in a contextualised form within the microworld. Motivation for learners to play in this microworld is provided by a storyline that was researched and written for the portal. In addition, the model used in the storyline design was evaluated for its effectiveness as a tool for planning future educational games.
A model, the Puzzle Process model, was proposed to inform the design of puzzle interfaces for these types of interactive learning environments, and puzzle interfaces were designed for the virtual environment according to the model guidelines. The learning environment was tested as part of the formative evaluation with a small sample of learners. The testing process made use of both quantitative and qualitative methodologies to evaluate the effectiveness of the learning environment as a possible learning tool. Comparison of pre- and post-gameplay questionnaires showed that learners gained a richer, more in-depth understanding of the topics dealt with in the portal. In particular, the puzzle objects situated in the environment stimulated learners to negotiate meanings for the puzzle interfaces and, in the process, encouraged learners to discuss the topic at hand. Results from this study also show that the longer learners discussed and negotiated a certain knowledge domain, the greater their increase in richness of information for that knowledge domain after gameplay. These results highlight the importance of social dialogue in the knowledge construction process and suggest that environments like these have great potential because of their ability to encourage learners to talk to one another and their facilitators while negotiating mutually acceptable knowledge. The original Puzzle Process model, as well as the Game Achievement model and the Game Object model, were modified to account for the need for social dialogue and content. These more comprehensive models are intended to inform future virtual world environment design. / Thesis (Ph.D.)-University of KwaZulu-Natal, Durban, 2004.
482

Purification and Identification of Cell Surface Antigens using Lamprey Monoclonal Antibodies

Shabab, Ali 20 November 2013 (has links)
The evolutionary distance of lampreys from humans, in conjunction with their distinct antibody architecture, is profound. Thus, lampreys may provide antibodies with specificity for antigens unrecognized by conventional mammalian antibodies. This study investigates lamprey-based monoclonal variable lymphocyte receptor antibodies (VLRs) for purifying and identifying an antigen by tandem mass spectrometry. VLRs specific for clinically relevant cell populations were isolated. Subsequently, the use of intrinsic VLR affinity, with or without covalent cross-linking molecules, to immunoprecipitate VLR protein antigens was tested. In one case, CD5 glycoprotein from Jurkat T cells was purified by a VLR; the antigen was identified by tandem mass spectrometry. Antibody specificity was validated by western blotting and flow cytometry. Furthermore, VLR binding to CD5 required multimerization of the antibody, indicating that the individual VLR units likely bind antigen with low affinity. The study provides ‘proof of concept’ for human biomarker identification using novel lamprey monoclonal antibodies.
483

The Arctic Frontier of Canada: The Expeditions of Joseph-Elzéar Bernier, 1895-1925 (La frontière arctique du Canada : les expéditions de Joseph-Elzéar Bernier (1895-1925))

Minotto, Claude. January 1975 (has links)
No description available.
484

Cooperative Semantic Information Processing for Literature-Based Biomedical Knowledge Discovery

Yu, Zhiguo 01 January 2013 (has links)
Given that data is increasing exponentially every day, extracting and understanding the information, themes and relationships in large collections of documents is increasingly important to researchers in many areas. In this paper, we present a cooperative semantic information processing system to help biomedical researchers understand and discover knowledge in large numbers of titles and abstracts from PubMed query results. Our system is based on a prevalent technique, topic modeling, which is an unsupervised machine learning approach for discovering the set of semantic themes in a large set of documents. In addition, we apply a natural language processing technique to transform the “bag-of-words” assumption of topic models into a “bag-of-important-phrases” assumption, and build an interactive visualization tool using a modified, open-source Topic Browser. In the end, we conduct two experiments to evaluate the approach. The first evaluates whether the “bag-of-important-phrases” approach is better at identifying semantic themes than the standard “bag-of-words” approach. This is an empirical study in which human subjects evaluate the quality of the resulting topics using a standard “word intrusion test” to determine whether subjects can identify a word (or phrase) that does not belong in the topic. The second is a qualitative empirical study to evaluate how well the system helps biomedical researchers explore a set of documents to discover previously hidden semantic themes and connections. The methodology for this study has been used successfully to evaluate other knowledge-discovery tools in biomedicine.
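The “bag-of-important-phrases” transformation described above can be sketched roughly as follows. This is a minimal illustration under assumptions: the phrase list (which the actual system derives from its NLP step) and the document are invented for the example.

```python
# Minimal sketch of a "bag-of-important-phrases" tokenization: multi-word
# phrases are fused into single tokens so that a downstream topic model
# (e.g. LDA) treats each phrase as one vocabulary item.
# The phrase list and document below are invented for illustration.

def to_bag_of_phrases(text, phrases):
    """Replace each known multi-word phrase with an underscore-joined token."""
    # Longest phrases first, so longer matches are not broken by shorter ones.
    for phrase in sorted(phrases, key=len, reverse=True):
        text = text.replace(phrase, phrase.replace(" ", "_"))
    return text.split()

phrases = ["gene expression", "breast cancer"]
doc = "topic models of gene expression in breast cancer cohorts"
tokens = to_bag_of_phrases(doc, phrases)
# tokens: ['topic', 'models', 'of', 'gene_expression', 'in', 'breast_cancer', 'cohorts']
```

The fused tokens are then counted exactly like ordinary words when building the document-term matrix that the topic model consumes.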
485

Nonnegative matrix factorization for clustering

Kuang, Da 27 August 2014 (has links)
This dissertation shows that nonnegative matrix factorization (NMF) can be extended into a general and efficient clustering method. Clustering is one of the fundamental tasks in machine learning. It is useful for unsupervised knowledge discovery in a variety of applications such as text mining and genomic analysis. NMF is a dimension reduction method that approximates a nonnegative matrix by the product of two lower-rank nonnegative matrices, and it has shown great promise as a clustering method when a data set is represented as a nonnegative data matrix. However, challenges to the widespread use of NMF as a clustering method lie in its correctness and efficiency: first, we need to know why and when NMF can detect the true clusters and be guaranteed to deliver good clustering quality; second, existing algorithms for computing NMF are expensive and often take longer than other clustering methods. We show that the original NMF can be improved in both respects in the context of clustering. Our new NMF-based clustering methods can achieve better clustering quality and run orders of magnitude faster than the original NMF and other clustering methods. Like other clustering methods, NMF places an implicit assumption on the cluster structure. Thus, the success of NMF as a clustering method depends on whether the representation of the data in a vector space satisfies that assumption. Our approach to extending the original NMF to a general clustering method is to switch from the vector space representation of data points to a graph representation. The new formulation, called Symmetric NMF, takes a pairwise similarity matrix as input and can be viewed as a graph clustering method. We evaluate this method on document clustering and image segmentation problems and find that it achieves better clustering accuracy. In addition, for the original NMF, it is difficult but important to choose the right number of clusters.
We show that the consensus NMF widely used in genomic analysis for choosing the number of clusters has critical flaws and can produce misleading results. We propose a variation of the prediction strength measure, arising from statistical inference, to evaluate the stability of clusters and select the right number of clusters. Our measure shows promising performance in artificial simulation experiments. Large-scale applications bring substantial efficiency challenges to existing algorithms for computing NMF. An important example is topic modeling, where users want to uncover the major themes in a large text collection. Our strategy for accelerating NMF-based clustering is to design algorithms that better suit the computer architecture and exploit the computing power of parallel platforms such as graphics processing units (GPUs). A key observation is that applying rank-2 NMF, which partitions a data set into two clusters, in a recursive manner is much faster than applying the original NMF to obtain a flat clustering. We take advantage of a special property of rank-2 NMF and design an algorithm that runs faster than existing algorithms due to continuous memory access. Combined with a criterion to stop the recursion, our hierarchical clustering algorithm runs significantly faster and achieves even better clustering quality than existing methods. Another bottleneck of NMF algorithms, which is also a common bottleneck in many other machine learning applications, is multiplying a large sparse data matrix with a tall-and-skinny dense matrix. We use GPUs to accelerate this routine for sparse matrices with an irregular sparsity structure. Overall, our algorithm shows significant improvement over popular topic modeling methods such as latent Dirichlet allocation, and runs more than 100 times faster on data sets with millions of documents.
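The Symmetric NMF formulation mentioned above can be sketched in a few lines: given a pairwise similarity matrix S, find a nonnegative H minimizing ||S - H H^T||_F. The damped multiplicative update, the toy similarity matrix, and the iteration count below are illustrative choices, not the dissertation's tuned algorithms.

```python
import random

def symnmf(S, k, iters=200, seed=0):
    """Symmetric NMF sketch: approximate a nonnegative symmetric similarity
    matrix S (n x n, lists of lists) by H @ H.T with H >= 0 (n x k), using
    the damped multiplicative update H <- H * (0.5 + 0.5 * (S H) / (H H^T H))."""
    rng = random.Random(seed)
    n = len(S)
    H = [[rng.random() for _ in range(k)] for _ in range(n)]
    for _ in range(iters):
        SH = [[sum(S[i][j] * H[j][c] for j in range(n)) for c in range(k)]
              for i in range(n)]
        HtH = [[sum(H[i][a] * H[i][b] for i in range(n)) for b in range(k)]
               for a in range(k)]
        HHtH = [[sum(H[i][a] * HtH[a][c] for a in range(k)) for c in range(k)]
                for i in range(n)]
        for i in range(n):
            for c in range(k):
                H[i][c] *= 0.5 + 0.5 * SH[i][c] / (HHtH[i][c] + 1e-12)
    return H

def recon_error(S, H):
    """Frobenius norm of S - H H^T."""
    n, k = len(H), len(H[0])
    total = 0.0
    for i in range(n):
        for j in range(n):
            approx = sum(H[i][c] * H[j][c] for c in range(k))
            total += (S[i][j] - approx) ** 2
    return total ** 0.5

# Toy similarity graph with two obvious blocks: {0, 1} and {2, 3}.
S = [[1.0, 0.9, 0.0, 0.0],
     [0.9, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.8],
     [0.0, 0.0, 0.8, 1.0]]
H0 = symnmf(S, k=2, iters=0)   # the random initialization, kept for comparison
H = symnmf(S, k=2)             # 200 damped multiplicative updates
labels = [row.index(max(row)) for row in H]  # argmax per row = cluster label
```

Taking the argmax over each row of H assigns a node to a cluster, which is how the factorization doubles as a graph clustering method.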
486

Deriving Semantic Objects from the Structured Web (Inférer des Objects Sémantiques du Web Structuré)

Oita, Marilena 29 October 2012 (has links) (PDF)
This thesis focuses on the extraction and analysis of Web data objects, investigated from different points of view: temporal, structural, semantic. We first survey different strategies and best practices for deriving temporal aspects of Web pages, together with a more in-depth study on Web feeds for this particular purpose. Next, in the context of Web pages dynamically generated by content management systems, we present two keyword-based techniques that perform article extraction from such pages. Keywords, either automatically acquired through a TF-IDF analysis, or extracted from Web feeds, guide the process of object identification, either at the level of a single Web page (SIGFEED algorithm), or across different pages sharing the same template (FOREST algorithm). We finally present, in the context of the deep Web, a generic framework which aims at discovering the semantic model of a Web object (here, data record) by, first, using FOREST for the extraction of objects, and second, representing the implicit rdf:type similarities between the object attributes and the entity of the Web interface as relationships that, together with the instances extracted from the objects, form a labeled graph. This graph is further aligned to a generic ontology like YAGO for the discovery of the graph's unknown types and relations.
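A rough sketch of the keyword-guided idea: rank a page's terms by TF-IDF against a small corpus, then pick the page segment densest in those keywords as the likely article block. The corpus, segments, and scoring below are invented for illustration and are far simpler than the SIGFEED/FOREST algorithms themselves.

```python
import math
from collections import Counter

def tfidf_keywords(docs, doc_index, top_n=3):
    """Rank the terms of one document by TF-IDF against a small corpus."""
    tokenized = [d.lower().split() for d in docs]
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))            # document frequency of each term
    n = len(docs)
    tf = Counter(tokenized[doc_index])  # term frequency in the target document
    scores = {t: tf[t] * math.log(n / df[t]) for t in tf}
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]]

def best_segment(segments, keywords):
    """Pick the page segment densest in the guiding keywords (the article body)."""
    def density(seg):
        toks = seg.lower().split()
        return sum(toks.count(k) for k in keywords) / max(len(toks), 1)
    return max(segments, key=density)

docs = ["volcano eruption ash cloud volcano",
        "football match score report",
        "election results and votes"]
keywords = tfidf_keywords(docs, 0)          # e.g. ['volcano', 'eruption', 'ash']
segments = ["home news contact login subscribe",
            "the volcano eruption sent an ash cloud over the town",
            "copyright and terms of use"]
article = best_segment(segments, keywords)  # the keyword-dense middle segment
```

Navigation and footer segments score near zero keyword density, so the keyword-rich segment is identified as the article body, which is the intuition behind keyword-guided object identification.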
487

Bioinformatics challenges of high-throughput SNP discovery and utilization in non-model organisms

2014 October 1900 (has links)
A current trend in biological science is the increased use of computational tools for both the production and analysis of experimental data. This is especially true in the field of genomics, where advancements in DNA sequencing technology have dramatically decreased the time and cost associated with DNA sequencing, resulting in increased pressure on the time required to prepare and analyze data generated during these experiments. As a result, the role of computational science in such biological research is increasing. This thesis seeks to address several major questions with respect to the development and application of single nucleotide polymorphism (SNP) resources in non-model organisms. Traditional SNP discovery using polymerase chain reaction (PCR) amplification and low-throughput DNA sequencing is a time-consuming and laborious process, which is often limited by the time required to design intron-spanning PCR primers. While next-generation DNA sequencing (NGS) has largely supplanted low-throughput sequencing for SNP discovery applications, the PCR-based SNP discovery method remains in use for cost-effective, targeted SNP discovery. This thesis seeks to develop an automated method for intron-spanning PCR primer design, which would remove a significant bottleneck in this process. This work develops algorithms for combining SNP data from multiple individuals, independent of the DNA sequencing platforms, for the purpose of developing SNP genotyping arrays. Additionally, tools for the filtering and selection of SNPs are developed, providing start-to-finish support for the development of SNP genotyping arrays in complex polyploids using NGS. The result of this work includes two automated pipelines for the design of intron-spanning PCR primers, one which designs a single primer pair per target and another that designs multiple primer pairs per target.
These automated pipelines are shown to reduce the time required to design primers from one hour per primer pair using the semi-automated method to 10 minutes per 100 primer pairs, while maintaining very high efficacy. Efficacy is tested by comparing the number of successful PCR amplifications of the semi-automated method with that of the automated pipelines. Using the chi-squared test, the semi-automated and automated approaches are determined not to differ in efficacy. Three algorithms for combining SNP output from NGS data from multiple individuals are developed and evaluated for their time and space complexities. These algorithms were found to be computationally efficient, requiring time and space linear in the size of the input. The algorithms are then implemented in the Perl language and their time and memory performance profiled using experimental data. Profiling results are evaluated by fitting linear models, which allow predictions of resource requirements for various input sizes. Additional tools for the filtering of SNPs and the selection of SNPs for a SNP array are developed and applied to the creation of two SNP arrays in the polyploid crop Brassica napus. These arrays, when compared to arrays in similar species, show higher numbers of polymorphic markers and better 3-cluster genotype separation, a viable measure of design efficacy in complex genomes.
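The combining step can be illustrated with a linear-time sketch: each SNP call is touched exactly once while grouping calls by position across individuals, after which positions are filtered for polymorphism. The input format and threshold below are invented for the example; the thesis pipelines are platform-independent and implemented in Perl.

```python
from collections import defaultdict

def combine_snps(callsets):
    """Combine per-individual SNP calls, given as (position, allele) pairs,
    into one table keyed by position. Runs in time linear in the total
    number of calls: each call is visited once. Format is illustrative."""
    combined = defaultdict(dict)
    for individual, calls in callsets.items():
        for pos, allele in calls:
            combined[pos][individual] = allele
    return dict(combined)

def polymorphic(combined, min_individuals=2):
    """Keep positions seen in enough individuals with more than one allele."""
    keep = {}
    for pos, obs in combined.items():
        if len(obs) >= min_individuals and len(set(obs.values())) > 1:
            keep[pos] = obs
    return keep

callsets = {"A": [(100, "G"), (250, "T")],
            "B": [(100, "A"), (250, "T"), (300, "C")]}
combined = combine_snps(callsets)
candidates = polymorphic(combined)  # only position 100 shows two alleles
```

Position 250 is monomorphic across individuals and position 300 is seen in only one individual, so both are filtered out, which mirrors the kind of filtering applied before array design.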
488

Using MapReduce to scale event correlation discovery for process mining

Reguieg, Hicham 19 February 2014 (has links) (PDF)
The volume of data related to business process execution is increasing significantly in the enterprise. Many data sources include events related to the execution of the same processes in various systems or applications. Event correlation is the task of analyzing a repository of event logs in order to find the sets of events that belong to the same business process execution instance. This is a key step in the discovery of business processes from event execution logs. Event correlation is a computationally intensive task in the sense that it requires a deep analysis of very large and growing repositories of event logs, and the exploration of various possible relationships among the events. In this dissertation, we present a scalable data analysis technique to support efficient event correlation for mining business processes. We propose a two-stage approach to compute correlation conditions and their entailed process instances from event logs using the MapReduce framework. The experimental results show that the algorithm scales well to large datasets.
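The shape of such a MapReduce job can be sketched as a single-machine simulation: the map phase emits (correlation-key, event) pairs for one candidate correlation condition, and the reduce phase groups events sharing a key into candidate process instances. The events, attribute names, and equality condition below are invented for illustration; the actual approach evaluates many candidate conditions over a distributed cluster.

```python
from collections import defaultdict

# Each event is (event_id, attributes). The candidate correlation condition
# here is equality on one attribute (e.g. a hypothetical "order_id").

def map_phase(events, attr):
    """Map: emit (attribute value, event id) for the candidate attribute."""
    for event_id, attrs in events:
        if attr in attrs:
            yield attrs[attr], event_id

def reduce_phase(pairs):
    """Shuffle + reduce: group event ids sharing a key; keys correlating at
    least two events become candidate process instances."""
    groups = defaultdict(list)
    for key, event_id in pairs:
        groups[key].append(event_id)
    return {k: v for k, v in groups.items() if len(v) > 1}

events = [
    ("e1", {"order_id": "o7", "type": "created"}),
    ("e2", {"order_id": "o7", "type": "shipped"}),
    ("e3", {"order_id": "o9", "type": "created"}),
]
instances = reduce_phase(map_phase(events, "order_id"))
# {'o7': ['e1', 'e2']} - e1 and e2 form one candidate process instance
```

In a real deployment the two phases run as Hadoop mapper and reducer tasks, so the grouping scales with the size of the log repository rather than the memory of one machine.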
489

Tudor English contacts with North Americans, 1497-1603

Sewell, William Kenneth January 1971 (has links)
English exploration in North America before Jamestown has been relatively neglected, except for Sir Walter Raleigh's Lost Colony. This study is a survey of the contacts which the Tudor English, 1497-1603, made with North American natives. John Cabot and his young sons reached North America in 1497. He or one of his successors took three American aborigines to England. Henry VII showed concern for natives of North America and suggested that his explorers make rules designed to protect the aborigines. Henry VIII helped finance voyages to America and indirectly laid foundations for later English discovery and colonization, but his son, Edward VI, and his daughter Mary were little interested in furthering English activities in North America. Elizabeth the Protestant was enthusiastic about America and about Christianizing its natives. She was unlucky in backing Thomas Stuckley in the early 1560's, but involved herself extensively in the three voyages of Martin Frobisher in the late 1570's. These voyages turned into a wild gold chase, but his expeditions returned with much information, not appreciated at the time, about the Arctic regions of North America and its people. The Eskimos captured five of Frobisher's men, whom he was never able to recover. The captain seized several natives and took them to England, where they aroused much curiosity. The Privy Council gave Frobisher specific instructions concerning his future contacts with the aborigines and their welfare. A minister, who accompanied Frobisher's third expedition, was to remain a year with a company of 100, serve them and convert the Eskimos. This colony did not remain, however. Sir Francis Drake made his global circumnavigation during the years Frobisher sailed with his three expeditions. The son of an Anglican rector and an avid Protestant, Drake evidently had a real Christian interest in the Indians whom he encountered, especially in Nova Albion, or California.
He hoped to establish colonies in the Western Hemisphere which would be missions to the pagans. These colonies and their Christian Indians were intended to counter Spanish activities in the New World. Early in the 1580's Sir Humphrey Gilbert sailed with an expedition to Newfoundland. His leading associate, the pro-Catholic Sir George Peckham, wrote a tract to promote this expedition, which was the first to argue extensively that England should colonize in America in order to Christianize and civilize the Indians. Gilbert's half-brother, Sir Walter Raleigh, was long involved in colonization efforts, in Christianizing the Indians, and in extending the English empire. Captain John Davis followed Martin Frobisher a decade later to the Arctic and Sub-Arctic regions. In the 1590's, Davis wrote two books in which he praised the Eskimos as the most blessed of peoples, and asserted that it was England's Christian responsibility to carry the Gospel to these pagans. The Reverend Richard Hakluyt was the younger cousin of the lawyer Richard Hakluyt; as leading geographers during Elizabethan times, they knew most of the great English captains and navigators. The minister was the compiler, editor and publisher of a mass of geographical information often described as the prose epic of the English nation. English Separatists during the 1590's made a colonizing thrust into the St. Lawrence Gulf, and after the turn of the century the English made two forays into the New England area, where the Indians seemed friendly at first. In the south, one of the two voyages sent to look for the lost Roanoke Colony ended in tragedy just after Elizabeth died. By 1603 many of the Indians in the Chesapeake Bay and Roanoke areas were hostile to the English. Spaniards and Frenchmen, as well as Englishmen who had visited there earlier, were in part responsible for this.
Thus by the beginning of the Stuart period the English had secured a comprehensive knowledge of the eastern North American coast, but through their own efforts or those of others, had to some degree alienated its native inhabitants.
490

The discovery of antiviral compounds targeting adenovirus and herpes simplex virus : assessment of synthetic compounds and natural products

Strand, Mårten January 2014 (has links)
There is a need for new antiviral drugs, especially for the treatment of adenovirus infections, since no approved anti-adenoviral drugs are available. Adenovirus infections in healthy persons are most often associated with respiratory disease, diarrhea and infections of the eye. These infections can be severe, but are most often self-limiting. However, in immunocompromised patients, adenovirus infections are associated with morbidity and high mortality rates. These patients are mainly stem cell or bone marrow transplantation recipients, although solid organ transplantation recipients and AIDS patients may be at risk as well. In addition, children are at higher risk of developing disseminated disease. Due to the need for effective anti-adenoviral drugs, we have developed a cell-based screening assay using a replication-competent, GFP-expressing adenovirus vector based on adenovirus type 11 (RCAd11GFP). This assay facilitates the screening of chemical libraries for antiviral activity. Using this assay, we have screened 9800 small molecules for anti-adenoviral activity with low toxicity. One compound, designated Benzavir-1, was identified with activity against representative types of all adenovirus species. In addition, Benzavir-1 was more potent than cidofovir, the antiviral drug used for treatment of adenovirus disease. By structure-activity relationship (SAR) analysis, the potency of Benzavir-1 was improved; the improved compound is designated Benzavir-2. To assess antiviral specificity, the activity of Benzavir-1 and -2 against both types of herpes simplex virus (HSV) was evaluated. Benzavir-2 displayed better efficacy than Benzavir-1 and had activity comparable to acyclovir, the standard antiviral drug used for therapy of herpes virus infections. In addition, Benzavir-2 was active against acyclovir-resistant clinical isolates of both HSV types. To expand our search for compounds with antiviral activity, we turned to natural products.
An ethyl acetate extract library was established, with extracts derived from actinobacteria isolated from sediments of the Arctic Sea. Using our screening assay, several extracts with anti-adenoviral activity and low toxicity were identified. By activity-guided fractionation of the extracts, the active compounds could be isolated. Several of these compounds had previously been characterized as having antiviral activity. However, one compound had uncharacterized antiviral activity, and this compound was identified as a butenolide. Additional butenolide analogues were found, and we propose a biosynthetic pathway for the production of these compounds. The antiviral activity of the analogues was characterized, and substantial differences in their toxic potential were observed. One of the most potent butenolide analogues had minimal toxicity and is an attractive starting point for further optimization of anti-adenoviral activity. This thesis describes the discovery of novel antiviral compounds that target adenovirus and HSV infections, with the emphasis on adenovirus infections. The discoveries in this thesis may lead to the development of new antiviral drugs for clinical use.
