About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
501

Discovery of Deaminase Activities in COG1816

Goble, Alissa M 03 October 2013 (has links)
Improved sequencing technologies have created an explosion of sequence information, which is analyzed and annotated automatically. Annotations are made on the basis of similarity scores to previously annotated sequences, so a single misannotation is propagated throughout databases, and the number of misannotated proteins grows with the number of sequenced genomes. This work describes a systematic approach to correctly identifying the function of proteins in the amidohydrolase superfamily, using Clusters of Orthologous Groups of proteins as defined by NCBI. The focus of this work is COG1816, which contains proteins annotated, often incorrectly, as adenosine deaminase enzymes. Sequence similarity networks were used to evaluate the relationships between proteins. Proteins previously annotated as adenosine deaminases (Pa0148 from Pseudomonas aeruginosa PAO1, AAur_1117 from Arthrobacter aurescens TC1, Sgx9403e, and Sgx9403g) were purified, and their substrate profiles revealed that adenine, not adenosine, was a substrate for these enzymes. All of these proteins deaminate adenine with kcat/Km values exceeding 10^5 M^-1 s^-1. A small group of enzymes similar to Pa0148 was discovered to catalyze the hydrolysis of N-6-substituted adenine derivatives, several of which are cytokinins, a common type of plant hormone. Patl2390, from Pseudoalteromonas atlantica T6c, was shown to hydrolytically deaminate N-6-isopentenyladenine to hypoxanthine and isopentenylamine with a kcat/Km of 1.2 x 10^7 M^-1 s^-1. This enzyme does not catalyze the deamination of adenine or adenosine. Two small groups of proteins from COG1816 were found to have 6-aminodeoxyfutalosine as their true substrate. This function is shared with two small groups of proteins closely related to guanine and cytosine deaminase from COG0402. The deamination of 6-aminofutalosine is part of the alternative menaquinone biosynthetic pathway that involves the formation of futalosine.
6-Aminofutalosine is deaminated with a catalytic efficiency of 10^5 M^-1 s^-1 or greater, Km values of 0.9 to 6.0 µM, and kcat values of 1.2 to 8.6 s^-1. Another group of proteins was shown to deaminate cyclic-3',5'-adenosine monophosphate (cAMP) to cyclic-3',5'-inosine monophosphate, but will not deaminate adenosine, adenine, or adenosine monophosphate. This protein was cloned from a human pathogen, Leptospira interrogans. Deamination may function in regulating the signaling activities of cAMP.
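As a quick sanity check on the kinetic ranges quoted above (Km of 0.9 to 6.0 µM, kcat of 1.2 to 8.6 s^-1), the catalytic efficiency kcat/Km can be computed directly; a minimal sketch:

```python
# Worked check of the reported catalytic efficiencies (kcat/Km) for
# 6-aminofutalosine deamination, using the ranges quoted in the abstract:
# Km = 0.9-6.0 uM and kcat = 1.2-8.6 s^-1.

def catalytic_efficiency(kcat_per_s, km_molar):
    """Return kcat/Km in M^-1 s^-1."""
    return kcat_per_s / km_molar

# Lowest plausible efficiency: smallest kcat with the largest Km.
low = catalytic_efficiency(1.2, 6.0e-6)   # 2.0e5 M^-1 s^-1
# Highest plausible efficiency: largest kcat with the smallest Km.
high = catalytic_efficiency(8.6, 0.9e-6)  # ~9.6e6 M^-1 s^-1

print(f"{low:.1e}  {high:.1e}")
# Both ends of the range are consistent with the quoted
# "10^5 M^-1 s^-1 or greater".
```
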
502

Teaching Logarithm By Guided Discovery Learning And Real Life Applications

Cetin, Yucel 01 April 2004 (has links) (PDF)
The purpose of the study was to investigate the effects of discovery and application based instruction (DABI) on students' mathematics achievement, and to explore students' opinions of DABI. The research was conducted with 118 ninth-grade students from Etimesgut Anatolian High School in Ankara during the spring semester of the 2001-2002 academic year. During the study, experimental groups received DABI and control groups received Traditionally Based Instruction (TBI). The treatment was completed in three weeks. The Mathematics Achievement Test (MAT) and the Logarithm Achievement Test (LAT) were administered as pretest and posttest, respectively. In addition, a questionnaire, Students' Views and Attitudes About DABI (SVA), and interviews were administered to determine students' views of and attitudes toward DABI. Analysis of Covariance (ANCOVA), independent-samples t-tests, and descriptive statistics were used to test the hypotheses of the study. No significant difference was found between the LAT mean scores of students taught with DABI and those taught with TBI when MAT scores were controlled. In addition, neither students' field of study nor gender was a significant factor for LAT scores. Students' gender was not a significant factor for SVA scores. However, students' math grades and field selections had a significant effect on SVA scores.
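For readers unfamiliar with the independent-samples t-test named above, a minimal pooled-variance sketch follows; the scores are invented for illustration and are not data from this study:

```python
# Student's two-sample t statistic with a pooled variance estimate,
# computed with only the standard library. The LAT-style scores below
# are hypothetical.
from statistics import mean, variance

def pooled_t(sample_a, sample_b):
    """Independent-samples t statistic (equal-variance, pooled form)."""
    na, nb = len(sample_a), len(sample_b)
    # Pool the two sample variances, weighted by degrees of freedom.
    sp2 = ((na - 1) * variance(sample_a)
           + (nb - 1) * variance(sample_b)) / (na + nb - 2)
    return (mean(sample_a) - mean(sample_b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

# Hypothetical posttest scores for a DABI group and a TBI group.
dabi = [72, 68, 75, 80, 66]
tbi = [70, 65, 73, 77, 64]
print(round(pooled_t(dabi, tbi), 3))
```

The statistic would then be compared against a t distribution with na + nb - 2 degrees of freedom; the study additionally used ANCOVA to control for pretest (MAT) scores.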
503

Indiana Jones and the Mysterious Maya: Mapping Performances and Representations Between the Tourist and the Maya in the Mayan Riviera

Batchelor, Brian 06 1900 (has links)
This thesis is a guidebook to the complex networks of representations in the Cobá Mayan Jungle Adventure and Cobá Mayan Village tours in Mexico's Mayan Riviera. Sold to tourists as opportunities to encounter an authentic Mayan culture and explore the ancient ruins at Cobá, these excursions exemplify the crossroads at which touristic and Western scientific discourses construct a Mayan Other, and can therefore be scrutinized as staged post-colonial encounters mediated by scriptural and performative economies: the Museum of Maya Culture (Castañeda) and the scenario of discovery (Taylor). Tourist and Maya are not discrete identities but rather inter-related performances: the Maya become mysterious and jungle-connected while the tourist plays the modernized adventurer/discoverer. However, the tours' foundations ultimately crumble due to uncanny and partial representations. As the roles and narratives that present the Maya as indigenous Other fracture, so too do those that construct the tourist as authoritative consumer of cultural differentiation.
504

Towards a New Generation of Anti-HIV Drugs : Interaction Kinetic Analysis of Enzyme Inhibitors Using SPR-biosensors

Elinder, Malin January 2011 (has links)
As of today, there are 25 drugs approved for the treatment of HIV and AIDS. Nevertheless, HIV continues to infect and kill millions of people every year. Despite intensive research efforts, both a vaccine and a cure remain elusive, and the long-term efficacy of existing drugs is limited by the development of resistant HIV strains. New drugs and preventive strategies that are effective against resistant virus are therefore still needed. In this thesis an enzymological approach, primarily using SPR-based interaction kinetic analysis, has been used for the identification and characterization of compounds of potential use in next-generation anti-HIV drugs. By screening a targeted non-nucleoside reverse transcriptase inhibitor (NNRTI) library, one novel and highly potent NNRTI was identified. The inhibitor was selected for resilience to drug resistance and for high affinity and slow dissociation - a kinetic profile assumed to be suitable for inhibitors used in topical microbicides. In order to confirm the hypothesis that such a kinetic profile would result in an effective preventive agent with long-lasting effect, the correlation between antiviral effect and kinetic profile was investigated for a panel of NNRTIs. The kinetic profiles revealed that NNRTI efficacy depends on slow dissociation from the target, although the induced-fit interaction mechanism prevented quantification of the rate constants. To avoid cross-resistance, the next generation of anti-HIV drugs should be based on chemical entities that do not resemble drugs in clinical use, in either structure or mode of action. Fragment-based drug discovery was used for the identification of structurally new inhibitors of HIV enzymes. One fragment that was also effective against HIV RT variants carrying resistance mutations was identified. The study revealed the possibility of identifying structurally novel NNRTIs as well as fragments interacting with other sites of the protein.
The two compounds identified in this thesis represent potential starting points for a new generation of NNRTIs. The applied methodologies also show how interaction kinetic analysis can be used as an effective and versatile tool throughout the lead discovery process, especially when integrated with functional enzymological assays.
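The kinetic profile argued for above (high affinity via slow dissociation) can be illustrated with the standard 1:1 binding relations KD = kd/ka and complex half-life t1/2 = ln(2)/kd; the rate constants below are illustrative, not values from the thesis:

```python
# Back-of-the-envelope comparison of two hypothetical NNRTIs with the
# same equilibrium affinity but different kinetics, using the standard
# 1:1 binding relations KD = kd/ka and t1/2 = ln(2)/kd.
from math import log

def affinity_kd(ka_assoc, kd_dissoc):
    """Equilibrium dissociation constant KD = kd/ka (in M)."""
    return kd_dissoc / ka_assoc

def residence_half_life(kd_dissoc):
    """Half-life of the inhibitor-target complex: t1/2 = ln(2)/kd (in s)."""
    return log(2) / kd_dissoc

# Two hypothetical inhibitors: fast on/fast off vs slow on/slow off.
fast_off = {"ka": 1e6, "kd": 1e-2}
slow_off = {"ka": 1e4, "kd": 1e-4}

for name, p in (("fast off-rate", fast_off), ("slow off-rate", slow_off)):
    print(name, affinity_kd(p["ka"], p["kd"]),
          round(residence_half_life(p["kd"]), 1))
# Both have KD = 1e-8 M, but the slow-off compound stays bound ~100x
# longer -- the profile argued to suit a long-acting topical microbicide.
```
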
505

Creating & Enabling the Useful Service Discovery Experience : The Perfect Recommendation Does Not Exist

Ingmarsson, Magnus January 2013 (has links)
We are rapidly entering a world with an immense number of services and devices available to humans and machines. This is a promising future; however, there are at least two major challenges in using these services and devices: (1) they have to be found, and (2) after being found, they have to be selected amongst. A significant difficulty lies not only in finding the available services, but in presenting the most useful ones; in most cases, too many services and devices are found to select from. Service discovery needs to become more aimed towards humans and less towards machines. The service discovery challenge is especially prevalent in ubiquitous computing, where service and device flux, human overloading, and service relevance are crucial. This thesis addresses the quality of use of services and devices by introducing a sophisticated discovery model through the use of new layers in service discovery. This model allows the use of services and devices where current automated service discovery and selection would be impractical, by providing service suggestions based on user activities, domain knowledge, and world knowledge. To explore what happens when such a system is in place, a Wizard of Oz study was conducted in a command and control setting. To address service discovery in ubiquitous computing, new layers and a test platform were developed, together with a method for developing and evaluating service discovery systems. The first layer, which we call the Enhanced Traditional Layer (ETL), was studied by developing the ODEN system and including the ETL within it. ODEN extends the traditional, technical service discovery layer by introducing ontology-based semantics and reasoning engines. The second layer, the Relevant Service Discovery Layer, was explored by incorporating it into the MAGUBI system.
MAGUBI addresses the human aspects of the relevant-service-discovery challenge by employing common-sense models of user activities, domain knowledge, and world knowledge in combination with rule engines. The RESPONSORIA system provides a web-based evaluation platform with a desktop look and feel, and explores service discovery in a service-oriented architecture setting. RESPONSORIA addresses a command and control scenario for rescue services in which multiple actors and organizations work together at the municipal level, and was the basis for the Wizard of Oz evaluation employing rescue services professionals. The results highlighted the importance of service naming and presentation to the user. Furthermore, users disagreed about the optimal service recommendation, but the results indicated that good recommendations are valuable and that the system can be seen as a partner.
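The relevance layer described above can be caricatured in a few lines: rank discovered services with simple rules over the user's current activity and context. Everything below (service names, rule weights) is invented for illustration and is not the MAGUBI rule base:

```python
# Toy relevance-layer sketch: filter and rank discovered services by
# how well they match the user's activity and location. Hypothetical
# services and scoring rules only.

SERVICES = {
    "projector": {"activities": {"briefing"}, "domain": {"command-post"}},
    "printer":   {"activities": {"reporting", "briefing"}, "domain": {"office"}},
    "radio-log": {"activities": {"dispatch"}, "domain": {"command-post"}},
}

def suggest(activity, location, services=SERVICES):
    """Order services by rule hits; an activity match counts double."""
    def score(name):
        s = services[name]
        return 2 * (activity in s["activities"]) + (location in s["domain"])
    ranked = sorted(services, key=score, reverse=True)
    # Drop services with no rule hits at all.
    return [n for n in ranked if score(n) > 0]

print(suggest("briefing", "command-post"))
# ['projector', 'printer', 'radio-log']
```

A real relevance layer would, of course, draw the activity model and domain knowledge from ontologies and a rule engine rather than hard-coded sets.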
506

Effective web service discovery using a combination of a semantic model and a data mining technique

Bose, Aishwarya January 2008 (has links)
With the advent of Service Oriented Architecture, Web services have gained tremendous popularity. Because a large number of Web services are available, finding an appropriate Web service that matches the user's requirements is a challenge. This warrants an effective and reliable process of Web service discovery, and a considerable body of research has emerged on methods to improve the accuracy of Web service discovery in matching the best service. The process of Web service discovery results in suggesting many individual services that partially fulfil the user's interest. Considering the semantic relationships of the words used to describe services, together with their input and output parameters, can lead to accurate Web service discovery, and appropriate linking of individually matched services should then fully satisfy the user's requirements. This research proposes to integrate a semantic model and a data mining technique to enhance the accuracy of Web service discovery. A novel three-phase Web service discovery methodology is proposed. The first phase performs match-making to find semantically similar Web services for a user query. In order to perform semantic analysis on the content of the Web Services Description Language documents, a support-based latent semantic kernel is constructed, using an innovative concept of binning and merging, from a large collection of text documents covering diverse domains of knowledge. The use of a generic latent semantic kernel constructed with a large number of terms helps to find hidden meanings of query terms that could not otherwise be found. Sometimes a single Web service is unable to fully satisfy the requirement of the user; in such cases, a composition of multiple inter-related Web services is presented to the user. The task of checking the possibility of linking multiple Web services is done in the second phase.
Once the feasibility of linking Web services is checked, the objective is to provide the user with the best composition of Web services. In the link-analysis phase, the Web services are modelled as nodes of a graph and an all-pairs shortest-path algorithm is applied to find the minimum-cost traversal path. The third phase, system integration, combines the results of the preceding two phases using an original fusion algorithm in the fusion engine. Finally, the recommendation engine, which is an integral part of the system integration phase, makes the final recommendations, including individual and composite Web services, to the user. In order to evaluate the performance of the proposed method, extensive experimentation was performed. Results of the proposed support-based semantic kernel method of Web service discovery are compared with those of a standard keyword-based information-retrieval method and a clustering-based machine-learning method of Web service discovery. The proposed method outperforms both the information-retrieval and machine-learning methods. Experimental results and statistical analysis also show that the best Web service compositions are obtained by considering 10 to 15 of the Web services found in phase I for linking. Empirical results also confirm that the fusion engine boosts the accuracy of Web service discovery by systematically combining the inputs from the semantic analysis (phase I) and the link analysis (phase II). Overall, the accuracy of Web service discovery with the proposed method shows a significant improvement over traditional discovery methods.
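The link-analysis phase described above can be sketched with the classic Floyd-Warshall all-pairs shortest-path algorithm: services become graph nodes, edge weights are linking costs, and the cheapest composition path falls out of the distance table. The service names and costs below are hypothetical:

```python
# Minimal Floyd-Warshall sketch of an all-pairs shortest-path pass over
# a Web-service linking graph. Hypothetical services and edge costs.
INF = float("inf")

def all_pairs_shortest_paths(nodes, edges):
    """edges: {(u, v): cost}. Returns {(u, v): minimum cost} for all pairs."""
    dist = {(u, v): (0 if u == v else edges.get((u, v), INF))
            for u in nodes for v in nodes}
    for k in nodes:          # allow k as an intermediate node
        for i in nodes:
            for j in nodes:
                via_k = dist[i, k] + dist[k, j]
                if via_k < dist[i, j]:
                    dist[i, j] = via_k
    return dist

services = ["search", "geocode", "route", "book"]
costs = {("search", "geocode"): 1.0, ("geocode", "route"): 2.0,
         ("search", "route"): 5.0, ("route", "book"): 1.5}
dist = all_pairs_shortest_paths(services, costs)
print(dist["search", "book"])  # 1.0 + 2.0 + 1.5 = 4.5, beating 5.0 + 1.5
```

For n candidate services this runs in O(n^3), which is consistent with the finding that compositions are best built from a shortlist of 10 to 15 matched services.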
507

Granule-based knowledge representation for intra and inter transaction association mining

Yang, Wanzhong January 2009 (has links)
With the phenomenal growth of electronic data and information, there is a strong demand for efficient and effective systems (tools) to perform data mining tasks on multidimensional databases. Association rules describe associations between items in the same transaction (intra) or in different transactions (inter). Association mining attempts to find interesting or useful association rules in databases; this is the crucial issue for the application of data mining in the real world. Association mining can be used in many application areas, such as the discovery of associations between customers' locations and shopping behaviours in market basket analysis. Association mining includes two phases. The first phase, called pattern mining, is the discovery of frequent patterns. The second phase, called rule generation, is the discovery of interesting and useful association rules among the discovered patterns. The first phase, however, often takes a long time to find all frequent patterns, and these patterns also include much noise. The second phase is also a time-consuming activity that can generate many redundant rules. To improve the quality of association mining in databases, this thesis provides an alternative technique, granule-based association mining, for knowledge discovery in databases, where a granule refers to a predicate that describes common features of a group of transactions. The new technique first transfers transaction databases into basic decision tables, then uses multi-tier structures to integrate pattern mining and rule generation in one phase for both intra- and inter-transaction association rule mining. To evaluate the proposed technique, this research defines the concept of meaningless rules by considering the correlations between data dimensions for intra-transaction association rule mining. It also uses precision to evaluate the effectiveness of inter-transaction association rules.
The experimental results show that the proposed technique is promising.
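The two phases named above (frequent-pattern discovery, then rule generation) can be illustrated on a toy transaction table; item names and thresholds are hypothetical, and the thesis's granule/multi-tier technique integrates these phases rather than running them separately as done here:

```python
# Illustrative two-phase association mining on a toy market-basket
# table: phase 1 finds frequent item pairs by support, phase 2 keeps
# rules x -> y with high confidence.
from itertools import combinations

transactions = [
    {"bread", "milk"}, {"bread", "butter"},
    {"bread", "milk", "butter"}, {"milk", "butter"}, {"bread", "milk"},
]

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

# Phase 1 (pattern mining): keep pairs whose support clears a threshold.
items = sorted(set().union(*transactions))
frequent = [set(p) for p in combinations(items, 2) if support(set(p)) >= 0.4]

# Phase 2 (rule generation): confidence = support({x, y}) / support({x}).
rules = []
for pair in frequent:
    for x in pair:
        for y in pair - {x}:
            conf = support({x, y}) / support({x})
            if conf >= 0.7:
                rules.append((x, y, round(conf, 2)))
print(sorted(rules))
# [('bread', 'milk', 0.75), ('milk', 'bread', 0.75)]
```
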
508

Essays on the dynamic relationship between different types of investment flow and prices

OH, Natalie Yoon-na, Banking & Finance, Australian School of Business, UNSW January 2005 (has links)
This thesis presents three related essays on the dynamic relationship between different types of investment flow and prices in the equity market. These studies attempt to provide greater insight into the evolution of prices by investigating not 'what moves prices' but 'who moves prices', utilising a unique database from the Korean Stock Exchange. The first essay investigates the trading behaviour and performance of online equity investors in comparison to other investors on the Korean stock market. While the use of online resources for trading is becoming more and more prevalent in financial markets, the literature on the role of online investors and their impact on prices is limited. The main finding of this essay supports the claim that online investors are noise traders at an aggregate level. Whereas foreigners show distinct trading patterns as a group, in terms of consensus on the direction of market movements, online investors do not. The essay concludes that online investors do not trade on clear information signals and introduce noise into the market. Direct performance and market-timing measures further show that online investors are the worst performers and market timers, whereas foreign investors consistently show outstanding performance and market-timing ability. Domestic mutual funds in Korea have not been extensively researched. The second essay analyses mutual fund activity and the relations between stock market returns and mutual fund flows in Korea. Although regulatory authorities have been cautious about introducing competing funds, contractual-type mutual funds have not been cannibalized by the US-style corporate mutual funds that started trading in 1998. Negative feedback trading activity is observed between stock market returns and mutual fund flows, measured as net trading volumes using stock purchase and sales volumes.
It is predominantly returns that drive flows, although stock purchases contain information about returns, partially supporting the price pressure hypothesis. After controlling for declining markets, the results suggest Korean equity fund managers tend to swing indiscriminately between increasing purchases and increasing sales in times of rising market volatility, possibly viewing volatility as an opportunity to profit and defying the mean-variance framework that predicts investors should retract from the market as volatility increases. Mutual funds respond indifferently to wide dispersions in investor beliefs. The third essay focuses on the conflicting issue of home bias by looking at the impact on domestic prices of foreign trades relative to locals using high frequency data from the Korean Stock Exchange (KSE). This essay extends the work of Choe, Kho and Stulz (2004) (CKS) in three ways. First, it analyses the post-Asian financial crisis period, whereas CKS (2004) analyse the crisis (1996-98) period. Second, this essay adopts a modified version of the CKS method to better capture the aggregate behaviour of each investor-type by utilising the participation ratio in comparison to the CKS method. Third, this essay does not limit investigation to intra-day analysis but extends to daily analysis up to 50 days to observe the effect of intensive trading activity in a longer horizon than the CKS study. In contrast to the CKS findings, this paper finds that foreigners have a short-lived private information advantage over locals and trades by foreigners have a larger impact on prices using intra-day data. However, assuming investors buy-hold for up to 50 days, the local individuals provide a greater impact and more profitable returns than foreigners. Superior performance is documented for buys rather than sells.
509

Discovery and Validation for Composite Services on the Semantic Web

Gooneratne, Nalaka Dilshan, s3034554@student.rmit.edu.au January 2009 (has links)
Current technology for locating and validating composite services is not sufficient, for the following reasons.
• Current frameworks do not have the capacity to create complete service descriptions, since they do not model all the functional aspects together (i.e. the purpose of a service, state transitions, data transformations). Those that deal with behavioural descriptions are unable to model the ordering constraints between concurrent interactions completely, since they do not consider the time taken by interactions. Furthermore, there is no mechanism to assess the correctness of a functional description.
• Existing semantic-based matching techniques cannot locate services that conform to global constraints. Semantic-based techniques use ontological relationships to perform mappings between the terms in service descriptions and user requests; therefore, unlike techniques that perform either direct string matching or schema matching, semantic-based approaches can match descriptions created with different terminologies and achieve a higher recall. Global constraints relate to restrictions on the values of two or more attributes of multiple constituent services.
• Current techniques that generate and validate global communication models of composite services yield inaccurate results (i.e. detect phantom deadlocks or ignore actual deadlocks), since they either (i) do not support all types of interactions (i.e. only send and receive, not service and invoke) or (ii) do not consider the time taken by interactions.
This thesis presents novel ideas to deal with the stated limitations. First, we propose two formalisms (WS-ALUE and WS-π-calculus) for creating functional and behavioural descriptions, respectively. WS-ALUE extends the Description Logic language ALUE with new predicates and models all the functional aspects together. WS-π-calculus extends π-calculus with Interval Time Logic (ITL) axioms.
ITL axioms accurately model temporal relationships between concurrent interactions. A technique that compares a WS-π-calculus description of a service against its WS-ALUE description is introduced to detect any errors that are not equally reflected in both descriptions. We propose novel semantic-based matching techniques to locate composite services that conform to global constraints. These constraints are of two types: strictly dependent or independent. A constraint is strictly dependent if, once a value is assigned to one of the restricted attributes, the values of all the remaining restricted attributes can be uniquely determined; any global constraint that is not strictly dependent is independent. A complete and correct technique that locates services conforming to strictly dependent constraints in polynomial time is defined using a three-dimensional data cube. The proposed approach for independent constraints is a heuristic approach that is correct but not complete; it incorporates user-defined objective functions, greedy algorithms, and domain rules to locate conforming services. We propose a new approach to generate global communication models (of composite services) that are free of deadlocks and synchronisation conflicts. This approach is an extension of a transitive temporal reasoning mechanism.
510

The performance of multiple hypothesis testing procedures in the presence of dependence

Clarke, Sandra Jane January 2010 (has links)
Hypothesis testing is foundational to the discipline of statistics. Procedures exist which control individual Type I error rates, and more global, family-wise error rates, for a series of hypothesis tests. However, the ability of scientists to produce very large data sets with increasing ease has led to a rapid rise in the number of statistical tests performed, often with small sample sizes. This is seen particularly in the area of biotechnology and the analysis of microarray data. This thesis considers this high-dimensional context, with particular focus on the effects of dependence on existing multiple hypothesis testing procedures.

While dependence is often ignored, many techniques are currently employed to deal with it, but these are typically highly conservative or require difficult estimation of large correlation matrices. This thesis demonstrates that, in the high-dimensional context, when the distribution of the test statistics is light-tailed, dependence is not as much of a concern as in classical contexts. This is achieved with the use of a moving average model. One important implication is that, when this condition is satisfied, procedures designed for independent test statistics can be used confidently on dependent test statistics.

This is not the case, however, for heavy-tailed distributions, where we expect an asymptotic Poisson cluster process of false discoveries. In these cases, we estimate the parameters of this process, along with the tail weight, from the observed exceedances, and attempt to adjust the procedures accordingly. We consider both conservative error rates, such as the family-wise error rate, and more popular methods, such as the false discovery rate. We are able to demonstrate that, in the context of DNA microarrays, it is rare to find heavy-tailed distributions, because most test statistics are averages.
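The false discovery rate mentioned above is usually controlled with the Benjamini-Hochberg step-up procedure, which is simple enough to sketch in full; the p-values below are illustrative:

```python
# Minimal sketch of the Benjamini-Hochberg step-up procedure for
# controlling the false discovery rate at level q. Illustrative p-values.

def benjamini_hochberg(pvals, q=0.05):
    """Return indices of hypotheses rejected at FDR level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k with p_(k) <= k*q/m ...
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:
            k = rank
    # ... and reject the k smallest p-values (step-up).
    return sorted(order[:k])

pvals = [0.001, 0.01, 0.02, 0.5, 0.6]
print(benjamini_hochberg(pvals, q=0.05))  # [0, 1, 2]
```

Note that the standard BH guarantee assumes independent (or positively dependent) test statistics; the thesis's point is precisely about when such procedures remain trustworthy under dependence.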
