About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
491

Computer model of the exploration of western Oceania

Avis, Christopher Alexander 26 February 2010 (has links)
The initial discovery and settlement of the islands of Oceania is an important issue in Pacific anthropology. I test two methods by which new island groups might have been discovered: drift voyages and downwind sailing. I focus on the region of the initial eastward expansion into Remote Oceania by the Lapita people. Simulations are driven by high-resolution surface wind and current data from atmosphere and ocean models forced by real observations, which capture the high degree of seasonal and interannual variability in the region. Both drift and sailing voyages can account for the discovery of all the islands in the Lapita region from initial starting points in the Bismarck and Solomon archipelagos. Eastward crossings are most probable in the austral summer and fall, when the probability of occurrence of westerly winds is highest. Contact with islands in the arc from Santa Cruz to New Caledonia is viable in all years and is particularly probable in the austral summer. Pathways further east, as far as Tonga and Samoa, are plausible when considering the anomalous westerlies that occur in certain years. Other key crossings in Polynesia are also possible when considering this interannual variability, much of which is associated with El Niño events. Many of my findings differ from an important earlier modelling study by Levison et al. (1973).
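The drift-voyage idea can be illustrated with a toy Monte Carlo sketch. All numbers here (daily drift distance, westerly probabilities, gap width) are hypothetical placeholders; the thesis itself drives its simulations with high-resolution wind and current fields, not this one-dimensional simplification:

```python
import random

def crossing_probability(p_westerly, gap_km, n_days=30,
                         daily_km=60.0, trials=2000, seed=1):
    """Toy drift-voyage model: each day the craft moves daily_km east
    if a westerly blows (probability p_westerly), otherwise west.
    Returns the fraction of voyages whose maximum eastward displacement
    reaches gap_km within n_days."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        east, farthest = 0.0, 0.0
        for _ in range(n_days):
            east += daily_km if rng.random() < p_westerly else -daily_km
            farthest = max(farthest, east)
        if farthest >= gap_km:
            successes += 1
    return successes / trials

# Hypothetical seasonal contrast for a 400 km inter-island gap:
# westerlies more frequent in austral summer than in winter.
summer = crossing_probability(p_westerly=0.55, gap_km=400)
winter = crossing_probability(p_westerly=0.35, gap_km=400)
print(f"summer: {summer:.2f}, winter: {winter:.2f}")
```

Even this crude random walk reproduces the qualitative finding above: eastward crossings become far more likely in seasons (or anomalous years) when the frequency of westerlies rises.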
492

Die Entdeckung des Actiniums / The discovery of actinium

Niese, Siegfried 24 September 2014 (has links) (PDF)
Friedrich Giesel entdeckte im Jahre 1902 das Actinium nach Fällung mit Lanthan aus einer Pechblendelösung. Er hatte den Namen Emanium vorgeschlagen, da es stark emanierte. Lange Zeit wurde nur Andre-Louis Debierne als Entdecker des Actiniums akzeptiert, da er 1904 behauptet hatte, dass die von ihm im Jahr 1900 gefundene von ihm Actinium genannte radioaktive Substanz mit den chemischen Eigenschaften des Thoriums, die hauptsächlich das Thoriumisotop 230Th enthielt, mit dem Emanium von Giesel identisch gewesen sei. In dem Beitrag werden die Entdeckungen von Debierne und Giesel und der Weg bis zur Anerkennung von Giesel als Entdecker vorgestellt. / Friedrich Giesel discovered actinium in 1902 after co-precipitation with lanthanum from a pitchblende solution. He had suggested the name emanium because of its strong emanating properties. For a long time, however, only André-Louis Debierne was accepted as the discoverer of actinium, because in 1904 he claimed that the radioactive substance he had found in 1900 and named actinium, which had the chemical properties of thorium and consisted mainly of the thorium isotope 230Th, was identical with Giesel's emanium. This article presents the discoveries of Debierne and Giesel and traces the path to Giesel's recognition as the discoverer of actinium.
493

Record Linkage for Web Data

Hassanzadeh, Oktie 15 August 2013 (has links)
Record linkage refers to the task of finding and linking records (in a single database or in a set of data sources) that refer to the same entity. Automating the record linkage process is a challenging problem, and has been the topic of extensive research for many years. However, the changing nature of the linkage process and the growing size of data sources create new challenges for this task. This thesis studies the record linkage problem for Web data sources. Our hypothesis is that a generic and extensible set of linkage algorithms combined within an easy-to-use framework that integrates and allows tailoring and combining of these algorithms can be used to effectively link large collections of Web data from different domains. To this end, we first present a framework for record linkage over relational data, motivated by the fact that many Web data sources are powered by relational database engines. This framework is based on declarative specification of the linkage requirements by the user and allows linking records in many real-world scenarios. We present algorithms for translation of these requirements to queries that can run over a relational data source, potentially using a semantic knowledge base to enhance the accuracy of link discovery. Effective specification of requirements for linking records across multiple data sources requires understanding the schema of each source, identifying attributes that can be used for linkage, and their corresponding attributes in other sources. Schema or attribute matching is often done with the goal of aligning schemas, so attributes are matched if they play semantically related roles in their schemas. In contrast, we seek to find attributes that can be used to link records between data sources, which we refer to as linkage points. In this thesis, we define the notion of linkage points and present the first linkage point discovery algorithms. 
We then address the novel problem of how to publish Web data in a way that facilitates record linkage. We hypothesize that careful use of existing, curated Web sources (their data and structure) can guide the creation of conceptual models for semi-structured Web data that in turn facilitate record linkage with these curated sources. Our solution is an end-to-end framework for data transformation and publication, which includes novel algorithms for identification of entity types and their relationships out of semi-structured Web data. A highlight of this thesis is showcasing the application of the proposed algorithms and frameworks in real applications and publishing the results as high-quality data sources on the Web.
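The core record linkage task described above, matching records across sources on a shared attribute (a "linkage point"), can be sketched in miniature. This is not the thesis's declarative framework; it is a naive pairwise illustration, with hypothetical record data, using plain string similarity:

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Normalized string similarity in [0, 1] via difflib's ratio."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def link_records(source_a, source_b, linkage_point, threshold=0.85):
    """Naive record linkage: pair records from two sources whose values
    on the chosen linkage-point attribute are similar enough."""
    links = []
    for ra in source_a:
        for rb in source_b:
            score = similarity(str(ra[linkage_point]), str(rb[linkage_point]))
            if score >= threshold:
                links.append((ra["id"], rb["id"], round(score, 2)))
    return links

# Two hypothetical Web sources that describe the same publication
# with slightly different formatting of the title attribute.
src_a = [{"id": "a1", "title": "Record Linkage for Web Data"}]
src_b = [{"id": "b1", "title": "record linkage for web data."},
         {"id": "b2", "title": "Service Discovery Protocols"}]
print(link_records(src_a, src_b, "title"))  # → [('a1', 'b1', 0.98)]
```

Real frameworks replace the quadratic nested loop with blocking or indexing, and, as the thesis argues, the hard part is discovering which attribute should serve as the linkage point in the first place.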
495

Design and Performance Evaluation of Service Discovery Protocols for Vehicular Networks

Abrougui, Kaouther 28 September 2011 (has links)
Intelligent Transportation Systems (ITS) are gaining momentum among researchers. ITS encompasses several technologies, including wireless communications, sensor networks, data and voice communication, and real-time driving-assistance systems. These state-of-the-art technologies are expected to pave the way for a plethora of vehicular network applications. In fact, we have recently witnessed a growing interest in vehicular networks from both the research community and industry. Several potential applications are envisioned, such as road safety and security, traffic monitoring, and driving comfort, to mention a few. It is critical that the existence of convenience or driving-comfort services does not negatively affect the performance of safety services. In essence, the dissemination of safety services or the discovery of convenience applications requires communication among service providers and service requesters over constrained bandwidth resources. Therefore, service discovery techniques for vehicular networks must use the available shared resources efficiently. In this thesis, we focus on the design of bandwidth-efficient and scalable service discovery protocols for vehicular networks. Three types of service discovery architectures are introduced: infrastructure-less, infrastructure-based, and hybrid. Our proposed algorithms are network-layer based: service discovery messages are integrated into routing messages for lightweight discovery. Moreover, our protocols use channel diversity for efficient service discovery. We describe our algorithms and discuss their implementation. Finally, we present the main results of the extensive set of simulation experiments used to evaluate their performance.
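The key bandwidth-saving idea, piggybacking service advertisements on routing messages instead of flooding them separately, can be sketched as follows. The message and field names here are hypothetical, not the thesis's actual protocol:

```python
from dataclasses import dataclass, field

@dataclass
class RoutingBeacon:
    """A periodic routing message; service advertisements ride along
    in the same packet rather than in a separate discovery flood."""
    node_id: str
    position: tuple
    services: list = field(default_factory=list)  # piggybacked adverts

def handle_beacon(beacon, service_table):
    """On receipt, a node updates its service table (discovery) from
    the same packet it already processes for routing."""
    for svc in beacon.services:
        service_table.setdefault(svc, set()).add(beacon.node_id)
    return service_table

table = {}
handle_beacon(RoutingBeacon("v1", (0, 0), ["traffic-info"]), table)
handle_beacon(RoutingBeacon("v2", (1, 0), ["traffic-info", "parking"]), table)
print(table)  # each service name maps to the vehicles advertising it
```

Because discovery information travels inside packets that the routing layer sends anyway, the per-service overhead on the shared wireless channel is close to zero, which is exactly why the abstract calls the approach lightweight.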
496

Discovery of Deaminase Activities in COG1816

Goble, Alissa M 03 October 2013 (has links)
Improved sequencing technologies have created an explosion of sequence information, and proteins are annotated automatically based on similarity scores to previously annotated sequences; as a result, a single misannotation is propagated throughout databases, and the number of misannotated proteins grows with the number of sequenced genomes. This work describes a systematic approach to correctly identifying the function of proteins in the amidohydrolase superfamily, using Clusters of Orthologous Groups of proteins as defined by NCBI. The focus is COG1816, which contains proteins annotated, often incorrectly, as adenosine deaminase enzymes. Sequence similarity networks were used to evaluate the relationships between proteins. Proteins previously annotated as adenosine deaminases, Pa0148 (Pseudomonas aeruginosa PAO1), AAur_1117 (Arthrobacter aurescens TC1), Sgx9403e, and Sgx9403g, were purified, and their substrate profiles revealed that adenine, not adenosine, is a substrate for these enzymes. All of these proteins deaminate adenine with values of kcat/Km exceeding 10^5 M^-1 s^-1. A small group of enzymes similar to Pa0148 was discovered to catalyze the hydrolysis of N-6-substituted adenine derivatives, several of which are cytokinins, a common type of plant hormone. Patl2390, from Pseudoalteromonas atlantica T6c, was shown to hydrolytically deaminate N-6-isopentenyladenine to hypoxanthine and isopentenylamine with a kcat/Km of 1.2 × 10^7 M^-1 s^-1. This enzyme does not catalyze the deamination of adenine or adenosine. Two small groups of proteins from COG1816 were found to have 6-aminodeoxyfutalosine as their true substrate, a function shared with two small groups of proteins closely related to guanine and cytosine deaminase from COG0402. The deamination of 6-aminofutalosine is part of the alternative menaquinone biosynthetic pathway that involves the formation of futalosine.
6-Aminofutalosine is deaminated with a catalytic efficiency of 10^5 M^-1 s^-1 or greater, Km values of 0.9 to 6.0 µM, and kcat values of 1.2 to 8.6 s^-1. Another group of proteins was shown to deaminate cyclic-3',5'-adenosine monophosphate (cAMP) to cyclic-3',5'-inosine monophosphate, but does not deaminate adenosine, adenine, or adenosine monophosphate. This protein was cloned from a human pathogen, Leptospira interrogans; deamination may function in regulating the signaling activities of cAMP.
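The Km and kcat ranges quoted above combine into the catalytic efficiency kcat/Km; a quick unit check, using only the numbers from the abstract, confirms that every combination clears the stated 10^5 M^-1 s^-1 floor:

```python
def catalytic_efficiency(kcat_per_s, km_micromolar):
    """kcat/Km in M^-1 s^-1, converting Km from µM to M."""
    return kcat_per_s / (km_micromolar * 1e-6)

# Ranges from the abstract: Km 0.9-6.0 µM, kcat 1.2-8.6 s^-1.
low  = catalytic_efficiency(1.2, 6.0)  # slowest turnover, weakest binding
high = catalytic_efficiency(8.6, 0.9)  # fastest turnover, tightest binding
print(f"kcat/Km spans {low:.1e} to {high:.1e} M^-1 s^-1")
```

The lower bound works out to 2 × 10^5 M^-1 s^-1 and the upper to roughly 9.6 × 10^6 M^-1 s^-1, consistent with the "10^5 M^-1 s^-1 or greater" claim.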
497

Teaching Logarithm By Guided Discovery Learning And Real Life Applications

Cetin, Yucel 01 April 2004 (has links) (PDF)
The purpose of the study was to investigate the effects of discovery- and application-based instruction (DABI) on students' mathematics achievement and to explore students' opinions of DABI. The research was conducted with 118 ninth-grade students from Etimesgut Anatolian High School, in Ankara, during the spring semester of the 2001-2002 academic year. During the study, experimental groups received DABI and control groups received traditionally based instruction (TBI). The treatment was completed in three weeks. The Mathematics Achievement Test (MAT) and Logarithm Achievement Test (LAT) were administered as pretest and posttest, respectively. In addition, a questionnaire, Students' Views and Attitudes About DABI (SVA), and interviews were administered to determine students' views and attitudes toward DABI. Analysis of covariance (ANCOVA), independent-sample t-tests, and descriptive statistics were used to test the hypotheses of the study. No significant difference was found between the LAT mean scores of students taught with DABI and those taught with traditionally based instruction when MAT scores were controlled. In addition, neither students' field of study nor gender was a significant factor for LAT scores. Gender was not a significant factor for SVA scores either. However, there was a significant effect of students' math grades and field selections on SVA scores.
498

Indiana Jones and the Mysterious Maya: Mapping Performances and Representations Between the Tourist and the Maya in the Mayan Riviera

Batchelor, Brian 06 1900 (has links)
This thesis is a guidebook to the complex networks of representations in the Cobá Mayan Jungle Adventure and Cobá Mayan Village tours in Mexico's Mayan Riviera. Sold to tourists as opportunities to encounter an authentic Mayan culture and explore the ancient ruins at Cobá, these excursions exemplify the crossroads at which touristic and Western scientific discourses construct a Mayan Other, and can therefore be scrutinized as staged post-colonial encounters mediated by scriptural and performative economies: the Museum of Maya Culture (Castañeda) and the scenario of discovery (Taylor). Tourist and Maya are not discrete identities but rather inter-related performances: the Maya become mysterious and jungle-connected while the tourist plays the modernized adventurer/discoverer. However, the tours' foundations ultimately crumble due to uncanny and partial representations. As the roles and narratives that present the Maya as indigenous Other fracture, so too do those that construct the tourist as authoritative consumer of cultural differentiation.
499

Towards a New Generation of Anti-HIV Drugs : Interaction Kinetic Analysis of Enzyme Inhibitors Using SPR-biosensors

Elinder, Malin January 2011 (has links)
As of today, there are 25 drugs approved for the treatment of HIV and AIDS. Nevertheless, HIV continues to infect and kill millions of people every year. Despite intensive research efforts, both a vaccine and a cure remain elusive, and the long-term efficacy of existing drugs is limited by the development of resistant HIV strains. New drugs and preventive strategies that are effective against resistant virus are therefore still needed. In this thesis an enzymological approach, primarily using SPR-based interaction kinetic analysis, has been used for the identification and characterization of compounds of potential use in next-generation anti-HIV drugs. By screening a targeted non-nucleoside reverse transcriptase inhibitor (NNRTI) library, one novel and highly potent NNRTI was identified. The inhibitor was selected for resilience to drug resistance and for high affinity and slow dissociation – a kinetic profile assumed to be suitable for inhibitors used in topical microbicides. To confirm the hypothesis that such a kinetic profile would result in an effective preventive agent with a long-lasting effect, the correlation between antiviral effect and kinetic profile was investigated for a panel of NNRTIs. The kinetic profiles revealed that NNRTI efficacy depends on slow dissociation from the target, although the induced-fit interaction mechanism prevented quantification of the rate constants. To avoid cross-resistance, next-generation anti-HIV drugs should be based on chemical entities that do not resemble drugs in clinical use, either in structure or in mode of action. Fragment-based drug discovery was used to identify structurally new inhibitors of HIV enzymes. One fragment was identified that was also effective against HIV RT variants carrying resistance mutations. The study revealed the possibility of identifying structurally novel NNRTIs as well as fragments interacting with other sites of the protein.
The two compounds identified in this thesis represent potential starting points for a new generation of NNRTIs. The applied methodologies also show how interaction kinetic analysis can be used as an effective and versatile tool throughout the lead discovery process, especially when integrated with functional enzymological assays.
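SPR-based kinetic analysis of the kind described above typically fits a 1:1 Langmuir binding model to the sensorgram. The sketch below integrates that model with hypothetical rate constants (not values from the thesis) to show how slow dissociation, a small koff, translates into the tight, long-lived binding the work selects for:

```python
def sensorgram(kon, koff, conc, rmax, t_assoc, dt=0.01):
    """Integrate the 1:1 binding model dR/dt = kon*C*(Rmax - R) - koff*R
    over an association phase; returns the final response R (in RU)."""
    r = 0.0
    for _ in range(int(t_assoc / dt)):
        r += (kon * conc * (rmax - r) - koff * r) * dt
    return r

# Hypothetical "slow-dissociation" profile of an NNRTI-like binder.
kon, koff = 1e5, 1e-4       # association (M^-1 s^-1), dissociation (s^-1)
kd = koff / kon             # equilibrium dissociation constant, KD (M)
r_end = sensorgram(kon, koff, conc=1e-6, rmax=100.0, t_assoc=120)
print(f"KD = {kd:.0e} M, response after 120 s = {r_end:.1f} RU")
```

With these placeholder numbers KD comes out at 1 nM, and the simulated response nearly saturates within two minutes; halving koff would halve KD without touching kon, which is why dissociation rate, not just affinity, is the profile the thesis emphasizes.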
500

Creating & Enabling the Useful Service Discovery Experience : The Perfect Recommendation Does Not Exist / Att skapa och möjliggöra en användbar upplevelse för att upptäcka erbjudna servicar och enheter : Den perfekta rekommendationen finns inte

Ingmarsson, Magnus January 2013 (has links)
We are rapidly entering a world with an immense number of services and devices available to humans and machines. This is a promising future; however, there are at least two major challenges to using these services and devices: (1) they have to be found, and (2) after being found, they have to be selected amongst. A significant difficulty lies not only in finding most of the available services, but also in presenting the most useful ones. In most cases, there may be too many discovered services and devices to choose from. Service discovery needs to become more oriented towards humans and less towards machines. The service discovery challenge is especially prevalent in ubiquitous computing, where service and device flux, human overload, and service relevance are crucial. This thesis addresses the quality of use of services and devices by introducing a sophisticated discovery model through new layers in service discovery. This model allows the use of services and devices where current automated service discovery and selection would be impractical, by providing service suggestions based on user activities, domain knowledge, and world knowledge. To explore what happens when such a system is in place, a Wizard of Oz study was conducted in a command-and-control setting. To address service discovery in ubiquitous computing, new layers and a test platform were developed, together with a method for developing and evaluating service discovery systems. The first layer, which we call the Enhanced Traditional Layer (ETL), was studied by developing the ODEN system and including the ETL within it. ODEN extends the traditional, technical service discovery layer by introducing ontology-based semantics and reasoning engines. The second layer, the Relevant Service Discovery Layer, was explored by incorporating it into the MAGUBI system.
MAGUBI addresses the human aspects in the challenge of relevant service discovery by employing common-sense models of user activities, domain knowledge, and world knowledge in combination with rule engines.  The RESPONSORIA system provides a web-based evaluation platform with a desktop look and feel. This system explores service discovery in a service-oriented architecture setting. RESPONSORIA addresses a command and control scenario for rescue services where multiple actors and organizations work together at a municipal level. RESPONSORIA was the basis for the wizard of oz evaluation employing rescue services professionals. The result highlighted the importance of service naming and presentation to the user. Furthermore, there is disagreement among users regarding the optimal service recommendation, but the results indicated that good recommendations are valuable and the system can be seen as a partner. / Vi rör oss snabbt in i en värld med en enorm mängd tjänster och enheter som finns tillgängliga för människor och maskiner. Detta är en lovande framtid, men det finns åtminstone två stora utmaningar för att använda dessa tjänster och enheter: (1) de måste hittas och (2) rätt tjänst/enhet måste väljas. En betydande svårighet ligger i att, inte bara finna de mest lättillgängliga tjänsterna och enheterna, men också att presentera de mest användbara sådana. I de flesta fall kan det vara för många tjänster och enheter som hittas för att kunna välja mellan. Upptäckten av tjänster och enheter behöver bli mer anpassad till människor och mindre till maskiner. Denna utmaning är särskilt framträdande i desktopmetaforens efterföljare Ubiquitous Computing. (Det vill säga en form av interaktion med datorer som blivit integrerad i aktiviteter och objekt i omgivningen.) Framförallt tjänster och enheters uppdykande och försvinnande, mänsklig överbelastning och tjänstens relevans är avgörande utmaningar. 
Denna avhandling behandlar kvaliteten på användningen av tjänster och enheter, genom att införa en sofistikerad upptäcktsmodell med hjälp av nya lager i tjänsteupptäcktsprocessen. Denna modell tillåter användning av tjänster och enheter när nuvarande upptäcktsprocess och urval av dessa skulle vara opraktiskt, genom att ge förslag baserat på användarnas aktiviteter, domänkunskap och omvärldskunskap. För att utforska vad som händer när ett sådant system är på plats, gjordes ett så kallat Wizard of Oz experiment i ledningscentralen på en brandstation. (Ett Wizard Of Oz experiment är ett experiment där användaren tror att de interagerar med en dator, men i själva verket är det en människa som agerar dator.) För att hantera tjänste- och enhetsupptäckt i Ubiquitous Computing utvecklades nya lager och en testplattform tillsammans med en metod för att utveckla och utvärdera system för tjänste- och enhetsupptäckt. Det första lagret, som vi kallar Förbättrat Traditionellt Lager (FTL), studerades genom att utveckla ODEN och inkludera FTL i den. ODEN utökar det traditionella, datororienterade tjänste- och enhetsupptäcktslagret genom att införa en ontologibaserad semantik och en logisk regelmotor. Det andra skiktet, som vi kallar Relevant Tjänst Lager, undersöktes genom att införliva det i systemet MAGUBI. MAGUBI tar sig an de mänskliga aspekterna i den utmaning som vi benämner relevant tjänste- och enhetsupptäckt, genom att använda modeller av användarnas aktiviteter, domänkunskap och kunskap om världen i kombination med regelmotorer. RESPONSORIA är en webbaserad plattform med desktoputseende och desktopkänsla, och är ett system för utvärdering av ovanstående utmaning tillsammans med de tidigare systemen. Detta system utforskar tjänste- och enhetsupptäckt i ett tjänsteorienterat scenario. RESPONSORIA tar ett ledningsscenario för räddningstjänst där flera aktörer och organisationer arbetar tillsammans på en kommunal nivå. 
RESPONSORIA låg till grund för ett Wizard of Oz experiment där experimentdeltagarna var professionella räddningsledare. Resultatet underströk vikten av namngivning av tjänster och enheter samt hur dessa presenteras för användaren. Dessutom finns det oenighet bland användare om vad som är den optimala service-/enhets-rekommendationen, men resultaten visar att goda rekommendationer är värdefulla och systemet kan ses som en partner.
