611

High resolution linkage and association study of quantitative trait loci

Jung, Jeesun 01 November 2005 (has links)
As large numbers of single nucleotide polymorphisms (SNPs) and microsatellite markers become available, high resolution mapping employing multiple markers or multiple-allele markers is an important step in identifying quantitative trait loci (QTL) of complex human diseases. For many complex diseases, quantitative phenotype values contain more information than dichotomous traits do. Much research has been done on high resolution mapping using information from both linkage and linkage disequilibrium. The most commonly employed approaches for mapping QTL are pedigree-based linkage analysis and population-based association analysis. As one method for handling multiple-allele markers, mixed models are developed to carry out family-based association studies using information on the alleles transmitted and not transmitted from parent to offspring. For multiple markers, variance component models are proposed to perform association analysis and linkage analysis simultaneously. Linkage analysis provides suggestive linkage over a broad chromosomal region and is robust to population admixture. On the other hand, allelic association due to linkage disequilibrium (LD) usually operates over very short genetic distances but is affected by population stratification. Combining both approaches plays a synergistic role in overcoming their limitations and in increasing the efficiency and effectiveness of gene mapping.
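The transmitted/nontransmitted-allele idea behind the family-based association approach is the basis of the classic transmission disequilibrium test (TDT). As an illustrative sketch only (the thesis develops mixed models, not this simple statistic):

```python
def tdt_statistic(b: int, c: int) -> float:
    """McNemar-style TDT statistic for a biallelic marker.

    b: count of heterozygous parents transmitting the candidate allele
    c: count of heterozygous parents transmitting the other allele
    Under no linkage or association, (b - c)^2 / (b + c) is approximately
    chi-square distributed with 1 degree of freedom.
    """
    return (b - c) ** 2 / (b + c)

# Example: 60 transmissions vs. 40 non-transmissions of the candidate allele
stat = tdt_statistic(60, 40)  # 4.0, above the 3.84 critical value at alpha = 0.05
```

Because only transmissions within families are compared, the statistic is robust to population stratification, which is the property the abstract attributes to linkage-based information.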
612

Discovery of fuzzy temporal and periodic association rules

Lee, Wan-Jui 29 January 2008 (has links)
With the rapidly growing volumes of data from various sources, new tools and computational theories are required to extract useful information (knowledge) from large databases. Data mining techniques such as association rules have proved effective for discovering hidden knowledge in large databases. However, to extract knowledge from data with temporal components, temporal semantics must be incorporated into the traditional mining techniques. As mining techniques evolve, more mathematical machinery is brought in to improve the quality and diversity of the mined results; fuzzy theory is one such tool. Many approaches have been proposed to discover temporal association rules or fuzzy association rules, respectively, but no work so far has addressed mining fuzzy temporal patterns. In this thesis we propose two data mining systems, one for discovering fuzzy temporal association rules and one for discovering fuzzy periodic association rules. The mined patterns are expressed as fuzzy temporal and periodic association rules that satisfy the temporal requirements specified by the user. Temporal requirements specified by human beings tend to be ill-defined or uncertain. To deal with this kind of uncertainty, a fuzzy calendar algebra is developed so that users can describe the desired temporal requirements in fuzzy calendars easily and naturally. The fuzzy calendar algebra also helps construct the desired time intervals in which interesting patterns are discovered and presented as fuzzy temporal and periodic association rules. In our system for mining fuzzy temporal association rules, a border-based mining algorithm is proposed to find association rules incrementally. By keeping useful information about the database in a border, candidate itemsets can be computed efficiently, and the discovered knowledge can be updated efficiently as transactions are added or deleted. The kept information saves counting work and avoids unnecessary scans over the updated database. Simulation results show the effectiveness of the proposed system for mining fuzzy temporal association rules.
In our system for discovering fuzzy periodic association rules, we develop techniques for discovering patterns with periodicity, i.e., patterns that occur at regular time intervals. The problem therefore has two aspects: finding the pattern, and determining the periodicity. The difficulty lies in discovering the regular time intervals, i.e., the periodicity. Periodicities in the database are usually imprecise, subject to disturbances, and may occur at time intervals in multiple time granularities. To discover patterns with fuzzy periodicity, we use the information of crisp periodic patterns to obtain a lower bound for generating candidate itemsets with fuzzy periodicities. Experimental results show that our system is effective in discovering fuzzy periodic association rules.
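A fuzzy calendar lets a user say, for example, "in the evening" instead of a crisp time interval. As a minimal sketch of the idea (the trapezoidal membership shape and the hour cutoffs are illustrative assumptions, not the thesis's calendar algebra), each transaction can then contribute to an itemset's support in proportion to its membership degree:

```python
def trapezoid(x: float, a: float, b: float, c: float, d: float) -> float:
    """Trapezoidal membership: 0 outside [a, d], 1 on [b, c], linear shoulders."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def evening(hour: float) -> float:
    """Fuzzy 'evening': fully evening 18:00-21:00, fading out by 17:00 and 23:00."""
    return trapezoid(hour, 17, 18, 21, 23)

# Fuzzy support: transactions count in proportion to their membership degree
timestamps = [12, 18, 19, 22, 23]   # hours at which the itemset occurred
fuzzy_support = sum(evening(h) for h in timestamps) / len(timestamps)  # 0.5
```

A crisp calendar would count the 22:00 transaction as fully in or fully out; the fuzzy version lets it count half-way, which is how ill-defined user requirements are accommodated.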
613

An Efficient Parameter-Relationship-Based Approach for Projected Clustering

Huang, Tsun-Kuei 16 June 2008 (has links)
The clustering problem has been discussed extensively in the database literature as a tool for many applications, for example bioinformatics. Traditional clustering algorithms consider all dimensions of an input dataset in an attempt to learn as much as possible about each object. In high dimensional data, however, many of the dimensions are often irrelevant, which motivates projected clustering. A projected cluster is a subset C of data points together with a subset D of dimensions such that the points in C are closely clustered in the subspace of dimensions D. Many algorithms have been proposed to find projected clusters, and most fall into three classes: partitioning, density-based, and hierarchical. The DOC algorithm is one of the well-known density-based algorithms for projected clustering. It uses a Monte Carlo procedure to iteratively compute projected clusters, and proposes a formula for the quality of a cluster. The FPC algorithm is an extended version of DOC; it uses a large-itemset mining approach to find the dimensions of a projected cluster. Finding large itemsets is the main goal of mining association rules, where a large itemset is a combination of items whose number of occurrences in the dataset exceeds a given threshold. Although the FPC algorithm uses large-itemset mining to speed up the discovery of projected clusters, it still needs many user-specified parameters. Moreover, in its first step, FPC chooses the medoid by repeated random sampling, which takes a long time and may still produce a bad medoid. Furthermore, the cluster quality measure could be refined by taking the weight of each dimension into consideration. Therefore, in this thesis, we propose an algorithm that remedies these disadvantages.
First, we observe the relationship between the parameters and propose a parameter-relationship-based algorithm that needs only two parameters, instead of the three required by most projected clustering algorithms. Next, our algorithm chooses the medoid using the median; we choose the medoid only once, and the quality of the resulting cluster is better than that of the FPC algorithm. Finally, our quality measure considers the weight of each dimension of the cluster, assigning different values according to how often each dimension occurs. This makes the quality of the projected clustering produced by our algorithm better than that of the FPC algorithm, and avoids clusters that contain too many irrelevant dimensions. Our simulation results show that our algorithm outperforms the FPC algorithm in terms of both execution time and clustering quality.
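For context, the DOC quality function trades off cluster size against the number of relevant dimensions: mu(|C|, |D|) = |C| * (1/beta)^|D|, with beta < 1 rewarding clusters that are tight in more dimensions. A small sketch, together with a hypothetical dimension-weighted variant in the spirit of the thesis's proposal (the exact weighting formula below is our assumption, not the thesis's):

```python
def doc_quality(num_points: int, num_dims: int, beta: float = 0.25) -> float:
    """DOC quality mu(|C|, |D|) = |C| * (1/beta)^|D|.

    num_points: |C|, the number of points in the projected cluster
    num_dims:   |D|, the number of relevant dimensions
    """
    return num_points * (1.0 / beta) ** num_dims

def weighted_quality(num_points: int, dim_weights: list, beta: float = 0.25) -> float:
    """Hypothetical weighted variant: each dimension contributes in proportion to
    its weight (e.g., its frequency of occurrence) instead of counting equally."""
    return num_points * (1.0 / beta) ** sum(dim_weights)

# A 10-point cluster over 2 fully relevant dimensions, beta = 0.25
q = doc_quality(10, 2)   # 10 * 4^2 = 160.0
```

Down-weighting a rarely occurring dimension lowers the quality score, which is how a weighted measure can discourage clusters that drag in irrelevant dimensions.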
614

none

Lo, Hsueh-yun 11 July 2008 (has links)
Traditionally, customers of airlines and travel agencies interacted with agents only at the travel office. Internet travel services have since taken off rapidly: the online channel has grown large for suppliers, employees, sales staff, and end users, and is becoming the main method of delivering travel services. Corporate partners working together can offset one another's production disadvantages. This research focuses on the co-opetition relationship between airlines and travel agencies: airline websites and travel websites working together can build strength in online transactions, and developing a joint strategy yields large economic effects. The research takes the author's travel service company, Comfort Travel Services (Cola Tours), as a case study, and draws on interviews with managers of several Taiwanese airlines about how the two sides can cooperate, with the aim of planning production and sales together. The conclusions are as follows: 1. Airlines rely on travel agencies for more than 90% of their business, a major example of corporate partners working together. 2. Travel services should diversify their internet offerings to satisfy customers' needs more easily. 3. Travel agent websites serve to promote and strengthen the skills of agents, which increases the volume of business. 4. Electronic commerce relationships can be used for prosperous growth. 5. Airline B2B systems can help travel agents smooth the flow of travel business and standardize operations. 6. Direct flights to China bring new business opportunities, and further co-opetition relationships should be explored. 7. Market changes, combined with sound planning, personnel development, and positive cycles, achieve resource sharing and create bilateral development of each brand.
615

Combinational polymorphisms of seven CXCL12-related genes are protective against breast cancer in Taiwan

Tai, Hsiao-ting 14 July 2008 (has links)
Purpose: Many single nucleotide polymorphisms (SNPs) have been found to be associated with breast cancer, but their interactions are seldom addressed. In this study, we focused on the joint effect of SNP combinations of seven CXCL12-related genes involved in major cancer-related pathways. Patients and Methods: SNP genotyping was determined by PCR-restriction fragment length polymorphism (RFLP) (case = 220, control = 334). Different numbers of combined SNPs with genotypes, called pseudo-haplotypes, from different chromosomes were used to evaluate their joint effect on breast cancer risk. Results: Except for VEGF rs3025039-CT, none of these SNPs was found to contribute individually to breast cancer risk. However, for two combined SNPs, the proportion of subjects with breast cancer was significantly lower for the pseudo-haplotype with CC-GG genotypes in rs2228014-rs1801157 (CXCR4-CXCL12) than for those with non-CC-GG genotypes. Similarly, among the three- and four-SNP combinations, the pseudo-haplotypes rs12812942-rs2228014-rs3025039 (CD4-CXCR4-CXCL12) and rs12812942-rs3136685-rs2228014-rs1801157 (CD4-CCR7-CXCR4-CXCL12) with the specific genotype patterns AT-CC-CC and AT-AG-CC-GG were significantly low in breast cancer occurrence. Combinations of five or more SNPs showed a similar effect. After controlling for age and comparing against the corresponding non-pseudo-haplotypes, the estimated odds ratios for breast cancer ranged between 0.20 and 0.71 for specific pseudo-haplotypes with two to seven SNPs. Conclusion: We have identified combined CXCL12-related SNP genotypes that are protective against breast cancer and may help identify a population at low risk of developing breast cancer.
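The protective odds ratios (0.20 to 0.71) come from standard 2x2-table estimation. A minimal sketch of the unadjusted calculation with a Wald confidence interval (the counts below are made up for illustration; the study additionally controlled for age):

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Unadjusted odds ratio for a 2x2 table, with a Wald 95% CI.

    a: cases with the genotype pattern      b: controls with the pattern
    c: cases without the pattern            d: controls without the pattern
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical counts: the pattern is rarer among cases than controls
or_, lo, hi = odds_ratio_ci(20, 40, 200, 100)  # OR = 0.25, i.e. protective
```

An odds ratio below 1 whose confidence interval excludes 1 is the usual evidence for a protective association of this kind.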
616

An Efficient Subset-Lattice Algorithm for Mining Closed Frequent Itemsets in Data Streams

Peng, Wei-hau 25 June 2009 (has links)
Online mining of association rules over data streams is an important issue in data mining, where an association rule states that the presence of some items in a transaction implies the presence of other items in the same transaction. Association rules over data streams have many applications, such as market analysis, network security, sensor networks, and web tracking. Mining closed frequent itemsets goes a step further: it aims to find the subset of frequent itemsets from which all frequent itemsets can be derived. Formally, a closed frequent itemset is a frequent itemset that has no superset with the same support. Since data streams are continuous, high-speed, and unbounded, archiving everything from a data stream is impossible: the data can be scanned only once and must be processed in main memory. Therefore, previous algorithms for mining closed frequent itemsets over traditional databases are not suitable for data streams. On the other hand, many applications are interested only in the most recent data, and the Sliding Window Model meets this need by keeping only the most recent data within a fixed window size. One well-known algorithm for mining closed frequent itemsets based on the sliding window model is the NewMoment algorithm. However, NewMoment cannot mine closed frequent itemsets efficiently, since it generates many unclosed frequent itemsets alongside the closed ones. Moreover, when the data in the sliding window is incrementally updated, NewMoment has to reconstruct the whole tree structure. Therefore, in this thesis, we propose a sliding-window approach, the Subset-Lattice algorithm, which embeds the subset property into the lattice structure to mine closed frequent itemsets efficiently.
When a data item is inserted, our algorithm considers five kinds of set relations: (1) equivalent, (2) superset, (3) subset, (4) intersection, and (5) empty. Using these five relations, we identify closed frequent itemsets without generating unclosed frequent itemsets. Moreover, when the data in the sliding window is incrementally updated, our Subset-Lattice algorithm does not reconstruct the whole lattice structure, making it more efficient than the NewMoment algorithm. Furthermore, we represent itemsets as bit-patterns and use bit-operations to speed up the set-checking. Our simulation results show that the Subset-Lattice algorithm needs less memory and less processing time than the NewMoment algorithm; when the window slides, up to 50% of the execution time can be saved.
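The bit-pattern representation makes the five set relations cheap to test with bitwise operations. A minimal sketch (the item-to-bit mapping is an assumption for illustration; the thesis's lattice maintenance is not shown):

```python
def itemset(*items: int) -> int:
    """Build a bit-pattern itemset from item indices (item i -> bit i)."""
    mask = 0
    for i in items:
        mask |= 1 << i
    return mask

def relation(x: int, y: int) -> str:
    """Classify itemset x against itemset y using bit-operations."""
    if x == y:
        return "equivalent"
    if x & y == x:        # every item of x is also in y
        return "subset"
    if x & y == y:        # every item of y is also in x
        return "superset"
    if x & y:             # some, but not all, items are shared
        return "intersection"
    return "empty"        # no items in common

ab, abc, cd = itemset(0, 1), itemset(0, 1, 2), itemset(2, 3)
```

Each comparison is a single AND plus an equality check on machine words, which is what makes the set-checking during insertion fast.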
617

Study on Service-Oriented Architects Association Website Model

Su, Yu-mei 27 January 2010 (has links)
Enterprises today face a rapidly changing business environment in which the time available for reaction and decision-making has become very short. Any decision may entail changes to the corporate organizational structure and business processes, and such adjustments in turn require coordinated changes to information systems. How to modify information systems quickly has therefore become a very important issue for enterprises. This thesis uses a website as an example to describe how the Service-Oriented Architects Association Website Model (SOAAWM) supports amending enterprise information systems. SOAAWM builds up the website with four tools: the architecture hierarchy diagram, the service operation diagram, the structure-behavior coalescence diagram, and the business process diagram. SOAAWM is based on service-oriented theory and methods. Using the structure-behavior coalescence approach embedded in SOAAWM, we can describe the working situations of organizational structures, business processes, and information systems clearly enough to reduce business risks. By re-planning organizational structures and business processes through this approach, omissions and bias in building highly complex information systems can be avoided, and communication efficiency and maintenance quality after the system goes online are enhanced. This is the major achievement of our research.
618

P-x measurements for 2-ethoxyethanol and four chlorinated hydrocarbons at 303.15 K [electronic resource] / by Salil Milan Pathare.

Pathare, Salil Milan. January 2003 (has links)
Title from PDF of title page. / Document formatted into pages; contains 94 pages. / Thesis (M.S.Ch.E.)--University of South Florida, 2003. / Includes bibliographical references. / Text (Electronic thesis) in PDF format. / ABSTRACT: Total pressure measurements at 303.15 K are reported for the binary systems of 2-ethoxyethanol with carbon tetrachloride, chloroform, 1,2-dichloroethane and dichloromethane. Total pressure measurements for the system of hexane and 2-ethoxyethanol were also made to check the validity of the experimental apparatus and procedure. These measurements were taken according to the static method proposed by Van Ness (1975). Data reduction was accomplished using Barker's Method. The modified Margules equation was used as a model for the excess Gibbs free energy (G^E) and parameter values were obtained by regression of the experimental data. The obtained data were used to test the association model developed by Kretschmer and Wiebe (1967). In its original form, the Kretschmer-Wiebe model assumes self-association between molecules of 2-ethoxyethanol. / ABSTRACT: An extended form of the Kretschmer-Wiebe model, in which cross-association of 2-ethoxyethanol with the halogenated hydrocarbon is postulated, was examined as well. The regular solution model, which results from the above theories when all association is neglected, was also examined. It was found that the Kretschmer-Wiebe model was far superior to the regular solution model. However, the extended form of the Kretschmer-Wiebe model showed less improvement over the original form. / System requirements: World Wide Web browser and PDF reader. / Mode of access: World Wide Web.
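For reference, the base two-parameter Margules form of the excess Gibbs energy (the thesis uses a modified, higher-parameter variant) and the activity coefficients it implies are:

```latex
\frac{G^E}{RT} = x_1 x_2 \left( A_{21} x_1 + A_{12} x_2 \right),
\qquad
\ln \gamma_1 = x_2^2 \left[ A_{12} + 2 \left( A_{21} - A_{12} \right) x_1 \right],
\qquad
\ln \gamma_2 = x_1^2 \left[ A_{21} + 2 \left( A_{12} - A_{21} \right) x_2 \right].
```

In Barker's method, the model parameters are regressed so that the total pressures computed from these activity coefficients reproduce the measured P-x data.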
619

A. C. Bhaktivedanta Swami in Interreligious Dialogue: Biographical Studies on the Encounter of Hinduism and Christianity /

Schmidt, Peter, January 1999 (has links)
Dissertation--Universität Frankfurt am Main, 1998. / Issued as "Theion", ISSN 0943-9587, no. 10. / Bibliography pp. 244-257.
620

Equipping a selected group of pastors in the Gulf Stream Baptist Association, Fort Lauderdale, Florida, in strategic planning skills

Boone, John C., January 1900 (has links)
Thesis (D. Min.)--New Orleans Baptist Theological Seminary, 2007. / Abstract and vita. Includes final project proposal. Includes bibliographical references (leaves 190-196, 74-78).
