161

A Semantic-based Approach to Web Services Discovery

Tsai, Yu-Huai 13 June 2011 (has links)
Service-oriented Architecture is now an important issue in program development. However, there is not yet an efficient and effective way for developers to obtain appropriate components. Current research mostly focuses on either the textual meaning or the ontological relations of services. In this research we propose a hybrid approach that integrates both types of information. It starts by defining important attributes and their weights for web service discovery using Multiple Criteria Decision Making. Then a similarity calculation based on both textual and ontological information is applied. In the experiment, we collect 103 real-world Web services, and the experimental results show that our approach generally performs better than existing ones.
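As a rough illustration of the hybrid idea, the sketch below blends a bag-of-words cosine score with an ontological score under fixed weights. The weights, the dictionary layout, and the toy `onto_sim` function are assumptions for illustration, not the attribute set or MCDM-derived weights of the thesis.

```python
from collections import Counter
from math import sqrt

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Bag-of-words cosine similarity between two service descriptions."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def hybrid_similarity(query, service, onto_sim, w_text=0.6, w_onto=0.4):
    """Weighted blend of textual and ontological similarity.

    onto_sim is a caller-supplied function scoring concept closeness in an
    ontology; the fixed weights stand in for those the thesis derives via
    Multiple Criteria Decision Making.
    """
    return w_text * cosine_similarity(query["text"], service["text"]) \
         + w_onto * onto_sim(query["concept"], service["concept"])

# Toy usage with an assumed two-level ontology score
svc = {"text": "convert currency exchange rates", "concept": "Finance"}
qry = {"text": "currency exchange rate converter", "concept": "Finance"}
same_branch = lambda a, b: 1.0 if a == b else 0.5
print(hybrid_similarity(qry, svc, same_branch))  # -> 0.7
```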
162

The history and development of caravels

Schwarz, George Robert 15 May 2009 (has links)
An array of ship types was used during the European Age of Expansion (early 15th to early 17th centuries), but one vessel in particular emerges from the historical records as a harbinger of discovery: the caravel. The problem is that little is known about these popular ships of discovery, despite the fair amount of historical evidence that has been uncovered. How big were they? How many men did it take to operate such a vessel? What kind of sailing characteristics did they have? How and by whom were they designed? Where did they originate and how did they develop? These questions cannot be answered by looking at the historical accounts alone. For this reason, scholars must take another approach to learning about caravels by examining additional sources, namely ancient shipbuilding treatises, archaeological evidence, surviving archaic shipbuilding techniques, and iconographic representations from the past. Information gained from the available sources reveals many of the caravel’s characteristics through time. This ship type outclassed its contemporaries during the age of exploration because of its highly adaptive characteristics. These traits were, principally, its shallow draught, speed, maneuverability, and ability to sail close to the wind. This combination of attributes made the caravel the ideal ship for reconnaissance along the rocky African coastline, as well as for making the transatlantic voyages to the New World. It was built in the Mediterranean manner during its post-medieval phases, a method that still survives in some parts of the world today. During the Age of Discovery (ca. 1430 to 1530), the caravel sat low in the water, had one sterncastle, and was either lateen-rigged or had a combination of square and lateen sails. This vessel reflects the advanced shipbuilding technology that existed in Europe at this time, and played an important role in the voyages that allowed the Europeans to expand their territories around the world. The results of the studies presented in this thesis provide a history and development of the caravel, which was gradual and often obscure. What has been gained from this work is a body of information that can be applied to other studies of ancient seafaring, and can serve as a starting point for further research.
163

Insider trading at the turn of the century: two essays

Tartaroglu, Semih 15 May 2009 (has links)
Insider trading may convey information to the market and promote accurate pricing of stocks. In this dissertation, I investigate insider trading at the turn of the century. In the first essay, I investigate insider trading activity in technology stocks during the high-price, high-volatility period of the late 1990s. I document that insiders of technology firms were heavy sellers during the ten-month pre-peak period in which stock prices more than doubled. The technology stocks that were sold by insiders more extensively in the pre-peak period had lower returns in the post-peak period. I furthermore investigate the relation between net order flows (buyer-initiated minus seller-initiated trades) and abnormal insider trading activity. I document that the net order flow is positively related to abnormal insider trading activity. However, this positive relation becomes weaker in the peak period, which implies less price discovery through insider trading during the rise of technology stock prices. In the second essay, I document that disclosure requirements significantly affect insider trading behavior. The Sarbanes-Oxley Act of 2002 requires expedited and on-line disclosure of insider transactions. This increase in the visibility of insider trading reduces the informational advantage of insiders and increases the likelihood of their facing legal sanctions. I document that insider purchases significantly declined after the Sarbanes-Oxley Act. In addition, the incidences of insider purchases (sales) prior to positive (negative) earnings surprises declined after the Act. Finally, I document that earnings announcements became more informative after the Act, which is consistent with less price discovery through insider trading prior to earnings announcements. However, the evidence that the decline in insider trading contributes to more informative earnings announcements is pronounced for insider purchases but not for insider sales.
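For readers unfamiliar with the measure, net order flow is simple arithmetic over signed trades. A minimal sketch follows, assuming trades arrive already classified by initiator; the data layout is an illustrative assumption, not the dissertation's dataset.

```python
def net_order_flow(trades):
    """Net order flow: buyer-initiated minus seller-initiated volume.

    trades: iterable of (side, size) pairs, side in {"buy", "sell"},
    as classified by some trade-signing rule.
    """
    return sum(size if side == "buy" else -size for side, size in trades)

# Example: three buyer-initiated and two seller-initiated round lots
print(net_order_flow([("buy", 100), ("buy", 100), ("buy", 100),
                      ("sell", 100), ("sell", 100)]))  # -> 100
```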
164

Missing Link Discovery In Wikipedia: A Comparative Study

Sunercan, Omer 01 February 2010 (has links) (PDF)
The fast-growing online encyclopedia concept presents original and innovative features by taking advantage of information technologies. The links connecting articles are one of the most important of these features. In this thesis, we present our work on discovering missing links in Wikipedia articles. This task is important for both readers and authors of Wikipedia. Readers will benefit from the increased article quality and better navigation support. On the other hand, the system can be employed to support authors during editing. This study combines the strengths of different approaches previously applied to the task, and proposes its own techniques to reach satisfactory results. Because of the subjective nature of the task, automatic evaluation is hard to apply. Comparing approaches seems to be the best way to evaluate new techniques, so we offer a semi-automated method for evaluating the results: recall is calculated automatically using existing links in Wikipedia, while precision is calculated from manual evaluations by human assessors. Comparative results for different techniques are presented, showing the success of our improvements. Our system employs the Turkish Wikipedia (Vikipedi) and, to our knowledge, it is the first study on it. We aim to exploit the Turkish Wikipedia as a semantic resource and to examine whether it is scalable enough for such purposes.
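A minimal sketch of the semi-automated evaluation scheme described above, assuming suggestions and existing links are plain sets of article titles and that assessor judgments arrive as a boolean map; all names are illustrative.

```python
def evaluate_suggestions(suggested, existing_links, assessor_labels):
    """Semi-automated evaluation in the spirit described above.

    Recall is computed automatically against links already present in the
    article; precision comes from manual assessor judgments of the
    suggested links (True = relevant).
    """
    suggested = set(suggested)
    existing = set(existing_links)
    recall = len(suggested & existing) / len(existing) if existing else 0.0
    judged = [assessor_labels[s] for s in suggested if s in assessor_labels]
    precision = sum(judged) / len(judged) if judged else 0.0
    return precision, recall

prec, rec = evaluate_suggestions(
    ["Ankara", "Bosphorus"], ["Ankara", "Istanbul"],
    {"Ankara": True, "Bosphorus": False})
print(prec, rec)  # -> 0.5 0.5
```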
165

A Randomness Based Analysis on the Data Size Needed for Removing Deceptive Patterns

IBARAKI, Toshihide, BOROS, Endre, YAGIURA, Mutsunori, HARAGUCHI, Kazuya 01 March 2008 (has links)
No description available.
166

Mining Workflow Instances to Support Workflow Schema Design

Yang, Wan-Shiou 23 May 2000 (has links)
Facing increasing global competition, modern business organizations have to respond quickly and correctly to a constantly changing environment to ensure their competitive advantages. This goal has led to a recent surge of work on Business Process Reengineering (BPR) and Workflow Management. While most work in these areas assumes that process definitions are known a priori, it is widely recognized that defining a process type that fully represents all properties of the underlying business process is a difficult job, currently practiced in a very ad hoc fashion. In this paper, we propose an algorithm to discover the process definition by analyzing existing process instances. We compare our algorithm with other algorithms proposed in the literature in terms of time complexity, and apply these algorithms to synthetic data sets to measure the quality of the output results. We found that our algorithm returns process definitions closer to the real ones, and does so faster.
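The abstract does not spell out the discovery algorithm, but the general idea of mining a process definition from instance logs can be illustrated with a direct-follows graph. This sketch is a generic baseline, not the algorithm proposed in the paper.

```python
from collections import defaultdict

def direct_follows_graph(traces):
    """Build a directed 'directly follows' graph from workflow instances.

    traces: list of activity sequences, e.g. [["A", "B", "C"], ...].
    Edge counts indicate how often one activity immediately follows
    another across the recorded instances.
    """
    edges = defaultdict(int)
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            edges[(a, b)] += 1
    return dict(edges)

# Example: two recorded instances of the same business process
print(direct_follows_graph([["receive", "check", "approve"],
                            ["receive", "check", "reject"]]))
```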
167

Using Fuzzy Rule Induction for Mining Classification Knowledge

Chen, Kun-Hsien 02 August 2000 (has links)
With the computerization of businesses, more and more data are generated and stored in databases for many business applications. Finding interesting patterns in those data may lead to useful knowledge that provides a competitive advantage in business. Knowledge discovery in databases has thus become an important means of helping businesses acquire knowledge that assists managerial and operational work. Among the many types of knowledge, classification knowledge is widely used. Most classification rules learned by induction algorithms are in crisp form; fuzzy linguistic representation of rules, however, is much closer to the way humans reason. The objective of this research is to propose a method for mining classification knowledge from databases with fuzzy descriptions. The procedure contains five steps, from data preparation to rule pruning. A rule induction algorithm, RITIO, is employed to generate the classification rules. A fuzzy inference mechanism that includes fuzzy matching and output reasoning is specified to yield the output class. An experiment conducted on several databases shows the advantages of this work: the proposed method achieves good system performance and can be easily implemented for classification tasks in various business applications.
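As an illustration of the fuzzy matching step, the sketch below scores rules by the minimum membership of a sample's attribute values in the rules' fuzzy terms. The triangular membership functions and min-based matching are common conventions assumed here, not necessarily those of RITIO or this thesis.

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify(sample, rules, membership):
    """Pick the class of the best-matching fuzzy rule.

    A rule is (conditions, label); its matching degree is the minimum
    membership of the sample's attribute values in the rule's fuzzy terms.
    """
    def degree(conditions):
        return min(membership[term](sample[attr]) for attr, term in conditions.items())
    return max(rules, key=lambda rule: degree(rule[0]))[1]

# Example: classify a loan decision from a single "salary" attribute
membership = {"low": lambda x: triangular(x, 0, 20, 45),
              "high": lambda x: triangular(x, 35, 60, 100)}
rules = [({"salary": "low"}, "reject"), ({"salary": "high"}, "approve")]
print(classify({"salary": 52}, rules, membership))  # -> "approve"
```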
168

Topics in multiple hypotheses testing

Qian, Yi 25 April 2007 (has links)
It is common to test many hypotheses simultaneously in applications of statistics. The probability of making a false discovery grows with the number of statistical tests performed. When all the null hypotheses are true, and the test statistics are independent and continuous, the error rates from the family-wise error rate (FWER)- and false discovery rate (FDR)-controlling procedures are equal to the nominal level. When some of the null hypotheses are not true, both procedures are conservative. In the first part of this study, we review the background of the problem and propose methods to estimate the number of true null hypotheses. The estimates can be used in FWER- and FDR-controlling procedures with a consequent increase in power. We conduct simulation studies and apply the estimation methods to data sets with biological or clinical significance. In the second part of the study, we propose a mixture model approach for the analysis of ChIP-chip high-density oligonucleotide array data to study the interactions between proteins and DNA. If we could identify the specific locations where proteins interact with DNA, we could increase our understanding of many important cellular events. Most experiments to date are performed in culture on cell lines, bacteria, or yeast; future experiments will include those in developing tissues, organs, or cancer biopsies, which are critical in understanding the function of genes and proteins. Here we investigate the ChIP-chip data structure and use a beta-mixture model to help identify the binding sites. To determine the appropriate number of components in the mixture model, we suggest the Anderson-Darling test. Our study indicates that it is a reasonable means of choosing the number of components in a beta-mixture model. The mixture model procedure has broad applications in biology and is illustrated with several data sets from bioinformatics experiments.
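To make the FDR machinery concrete, here is a sketch of a Storey-style estimate of the proportion of true nulls plugged into an adaptive Benjamini-Hochberg procedure. These are standard methods from the literature, not necessarily the estimators proposed in the dissertation.

```python
def storey_pi0(pvalues, lam=0.5):
    """Storey-style estimate of the proportion of true nulls (pi0).

    Counts p-values above lam, where mostly true nulls should fall,
    and rescales by the width of that interval.
    """
    m = len(pvalues)
    return min(1.0, sum(p > lam for p in pvalues) / ((1 - lam) * m))

def benjamini_hochberg(pvalues, q=0.05, pi0=1.0):
    """Adaptive Benjamini-Hochberg: reject H_(i) when p_(i) <= i*q/(m*pi0).

    Plugging in an estimated pi0 < 1 raises the threshold and hence the
    power, which is the gain the first part of the study aims at.
    """
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank * q / (m * pi0):
            k = rank
    return sorted(order[:k])  # indices of rejected hypotheses

pvals = [0.001, 0.008, 0.039, 0.041, 0.2, 0.6, 0.9]
print(benjamini_hochberg(pvals, q=0.05, pi0=storey_pi0(pvals)))  # -> [0, 1, 2, 3]
```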
169

Automated Network Node Discovery and Topology Analysis

Sigholm, Johan January 2007 (has links)
This Master's Thesis describes the design and development of an architecture for automated network node discovery and topology analysis, implemented as an extension to the network management and provisioning system NETadmin. The architecture includes functionality for flexible network model assessment, using a method for versatile comparison between off-line database models and real-world models. These models are populated by current node data collected by network sensors. The presented architecture (1) supports efficient creation and synchronization of network topology information, (2) accurately recognizes new, replaced, and upgraded nodes, including rogue nodes that may exhibit malicious behavior, and (3) extends an existing vendor-neutral enterprise network management and provisioning system. An evaluation of the implementation shows accurate discovery and classification of unmatched hosts in a live customer production network with over 400 nodes, and presents data on performance and scalability. The work was carried out at Netadmin System i Sverige AB, in Linköping, Sweden.
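The off-line versus real-world model comparison can be pictured as a diff over node inventories. A minimal sketch follows, assuming nodes are keyed by MAC address with simple attribute dicts; both assumptions are for illustration and are not NETadmin's actual data model.

```python
def compare_models(db_model, discovered):
    """Classify nodes by comparing an off-line model with sensor data.

    Both arguments map a stable node key (here an assumed MAC address)
    to an attribute dict such as {"ip": ..., "firmware": ...}.
    """
    new = [k for k in discovered if k not in db_model]
    missing = [k for k in db_model if k not in discovered]
    changed = [k for k in discovered
               if k in db_model and discovered[k] != db_model[k]]
    return {"new": new, "missing": missing, "changed": changed}

db = {"aa:bb:01": {"ip": "10.0.0.5", "firmware": "1.2"}}
live = {"aa:bb:01": {"ip": "10.0.0.5", "firmware": "1.3"},  # upgraded node
        "aa:bb:02": {"ip": "10.0.0.9", "firmware": "1.0"}}  # possible rogue
print(compare_models(db, live))
```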
170

Improving drug discovery decision making using machine learning and graph theory in QSAR modeling

Ahlberg Helgee, Ernst January 2010 (has links)
Diss. (summary) Göteborg: Göteborgs universitet, 2010.
