  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
591

Breeding Maize for Drought Tolerance: Diversity Characterization and Linkage Disequilibrium of Maize Paralogs ZmLOX4 and ZmLOX5

De La Fuente, Gerald, May 2012
Maize production is limited agronomically by the availability of water and nutrients during the growing season. Of these two limiting factors, water availability is predicted to grow in importance as climate change and the expanding urban landscape continue to stress limited supplies of freshwater. Historically, efforts to breed maize for water-limited environments have been extensive, especially in the areas of root architecture and flowering physiology. As progress has been made and new traits have been discovered and selected for, the responses to drought stress at specific developmental stages of the maize plant have been evaluated collectively when drought tolerance is assessed. Herein we attempt to define the characteristics of the maize drought response at different developmental stages that can be altered through plant breeding. Toward breeding for drought tolerance, 400 inbred lines from a diversity panel were amplified and sequenced at the ZmLOX4 and ZmLOX5 loci to characterize their linkage disequilibrium and genetic diversity. Understanding these characteristics is essential for the association mapping study that accompanies this project, which searches for novel natural allelic diversity to improve drought tolerance and aflatoxin resistance in maize. This study is among the first to investigate genetic diversity at the important paralogs ZmLOX4 and ZmLOX5, believed to be highly conserved among eukaryotes. We show very little genetic diversity and very low linkage disequilibrium in these genes, but also identified one natural variant line missing ZmLOX5 (a knockout) and five lines carrying a duplication of ZmLOX5. Tajima's D test suggests that both ZmLOX4 and ZmLOX5 have been under neutral selection. Further investigation of the haplotype data revealed that ZmLOX12, another member of the ZmLOX family, shows strong LD extending much further than expected in maize.
Linkage disequilibrium patterns at these loci of interest are crucial to quantify for future candidate gene association mapping studies. The knockout and copy number variants of ZmLOX5, while not a surprising find, are under further investigation for crop improvement.
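The pairwise LD statistic at the heart of such a characterization can be sketched in a few lines. This is only the standard r² computation between two biallelic sites; the haplotype strings below are invented for illustration, and the thesis's actual LD pipeline is not specified in this abstract.

```python
# Pairwise linkage disequilibrium (r^2) between two biallelic sites,
# computed from phased haplotype strings. Illustrative sketch only;
# the example haplotypes are invented.

def r_squared(haplotypes, i, j):
    """r^2 between sites i and j across a list of haplotype strings."""
    n = len(haplotypes)
    # Treat the first haplotype's alleles as the reference alleles.
    a = haplotypes[0][i]
    b = haplotypes[0][j]
    pA = sum(h[i] == a for h in haplotypes) / n
    pB = sum(h[j] == b for h in haplotypes) / n
    pAB = sum(h[i] == a and h[j] == b for h in haplotypes) / n
    D = pAB - pA * pB                      # coefficient of disequilibrium
    denom = pA * (1 - pA) * pB * (1 - pB)
    return 0.0 if denom == 0 else D * D / denom

haps = ["ACGT", "ACGA", "TCGA", "TCGT", "ACGT", "TCGA"]
print(round(r_squared(haps, 0, 3), 3))
```

Low r² across a candidate gene, as reported here for ZmLOX4 and ZmLOX5, implies that association mapping needs dense markers within the gene rather than a few tag SNPs.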
592

A Meaningful Candidate Approach to Mining Bi-Directional Traversal Patterns on the WWW

Chen, Jiun-rung, 27 July 2004
Since the World Wide Web (WWW) appeared, more and more useful information has become available on it. To find that information, the application of data mining techniques to the WWW, referred to as Web mining, has become a research area of increasing importance. Mining traversal patterns is one of the important topics in Web mining. It focuses on finding the Web page sequences that users browse frequently. Although algorithms for mining association rules (e.g., the Apriori and DHP algorithms) could be applied to mine traversal patterns, they do not exploit the properties of Web transactions and generate too many invalid candidate patterns, so they cannot provide good performance. Wu et al. proposed an algorithm for mining traversal patterns, SpeedTracer, which utilizes a property of Web transactions: the continuity of traversal patterns in the Web structure. Although SpeedTracer decreases the number of candidate patterns generated in the mining process, it does not efficiently exploit this property to reduce the number of checks performed while examining the subsets of each candidate pattern. In this thesis, we design three algorithms that improve on the SpeedTracer algorithm for mining traversal patterns. The first, SpeedTracer*-I, uses the continuity property of Web transactions to generate and count all candidate patterns directly from user sessions, and also uses it to improve the checking step when candidate patterns are generated. Building on SpeedTracer*-I, we then propose the SpeedTracer*-II and SpeedTracer*-III algorithms, which improve its performance by reducing the number of database scans. In the SpeedTracer*-II algorithm, given a parameter n, we apply SpeedTracer*-I to find Ln first and use Ln to generate all Ck, where k > n.
After generating all candidate patterns, we scan the database once to count them and then determine the frequent patterns. In the SpeedTracer*-III algorithm, given a parameter n, we also apply SpeedTracer*-I to find Ln first, and then generate and count Ck directly from user sessions based on Ln, where k > n. The simulation results show that SpeedTracer*-I outperforms the SpeedTracer algorithm in terms of processing time. They also show that the SpeedTracer*-II and SpeedTracer*-III algorithms outperform both SpeedTracer and SpeedTracer*-I, because the former two require fewer database scans than the latter two. Moreover, our simulation results show that all of the proposed algorithms provide better performance than Apriori-like algorithms (e.g., the FS and FDLP algorithms) in terms of processing time.
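The continuity property the abstract describes can be illustrated with a level-wise miner over contiguous page subsequences: every candidate of length k is a contiguous run of pages, counted directly from the sessions. This is a generic sketch in that spirit, not the SpeedTracer*-I implementation itself; the session data are invented.

```python
# Level-wise mining of frequent contiguous traversal patterns from user
# sessions. Candidates are contiguous page runs, counted directly from
# the sessions rather than joined and pruned Apriori-style.

from collections import Counter

def contiguous_subseqs(session, k):
    """All contiguous length-k page runs in one session."""
    return [tuple(session[i:i + k]) for i in range(len(session) - k + 1)]

def mine_traversal_patterns(sessions, min_support):
    frequent, k = {}, 1
    while True:
        counts = Counter()
        for s in sessions:
            # Count each distinct contiguous k-run once per session.
            for pat in set(contiguous_subseqs(s, k)):
                counts[pat] += 1
        Lk = {p: c for p, c in counts.items() if c >= min_support}
        if not Lk:
            return frequent
        frequent.update(Lk)
        k += 1

sessions = [list("ABCD"), list("ABC"), list("BCD"), list("ABD")]
patterns = mine_traversal_patterns(sessions, min_support=3)
```

Restricting candidates to contiguous runs is what keeps the candidate space small compared with Apriori-like algorithms, which must consider arbitrary page subsets.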
593

Research and Development of DSP Based Human Headtracker

Cheng, Kai-wen, 27 July 2004
This thesis describes the development of a DSP-based "human head-tracker" system. It uses a CCD camera to capture images and detects motion in the image sequence. When someone enters the scene, the system locks onto that person's head and shows the locked image on an LCD screen. The head-tracker system comprises three sub-systems: a "Motion Detector", an "Ellipse Algorithm", and a "Visual Probability Data Association Filter". The experimental results show that the system meets expectations and achieves good performance and robustness.
594

A Sliding-Window Approach to Mining Maximal Large Itemsets for Large Databases

Chang, Yuan-feng, 28 July 2004
Mining association rules is the process of nontrivial extraction of implicit, previously unknown, and potentially useful information from data in databases. Mining maximal large itemsets is a further step beyond mining association rules; it aims to find the set of large (frequent) itemsets whose subsets collectively represent all large itemsets. Previous algorithms for mining maximal large itemsets can be classified into two approaches: exhaustive and shortcut. The shortcut approach generates fewer candidate itemsets than the exhaustive approach, resulting in better performance in terms of time and storage space. On the other hand, when updates to the transaction database occur, one possible approach is to re-run the mining algorithm on the whole database. The other approach is incremental mining, which aims to maintain the discovered association rules efficiently without re-running the mining algorithms. However, previous algorithms for mining maximal large itemsets based on the shortcut approach cannot support incremental mining, while algorithms for incremental mining, e.g., the SWF algorithm, cannot efficiently support mining maximal large itemsets, since they are based on the exhaustive approach. Therefore, in this thesis, we focus on designing an algorithm that performs well for both mining maximal itemsets and incremental mining. Based on observations such as "if an itemset is large, all its subsets must be large; therefore, those subsets need not be examined further", we propose a sliding-window approach, the SWMax algorithm, for efficiently mining maximal large itemsets and supporting incremental mining. Our SWMax algorithm is a two-pass, partition-based approach. In the first pass, we find all candidate 1-itemsets (C1), candidate 3-itemsets (C3), large 1-itemsets (L1), and large 3-itemsets (L3).
We generate the virtual maximal large itemsets after the first pass. Then we use L1 to generate C2, use L3 to generate C4, use C4 to generate C5, and so on until no further Ck is generated. In the second pass, we use the virtual maximal large itemsets to prune Ck and decide the maximal large itemsets. For incremental mining, we consider two cases: (1) data insertion and (2) data deletion. In both cases, if a 1-itemset is not large in the original database, it cannot be found in the updated database by the SWF algorithm; that is, a missing case can occur in the incremental mining process of the SWF algorithm, because it keeps only the C2 information. Our SWMax algorithm, in contrast, supports incremental mining correctly, since C1 and C3 are maintained. In our simulation, we generate synthetic databases to model real transaction databases. The results show that our SWMax algorithm generates fewer candidates and needs less time than the SWF algorithm.
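The notion of maximality the thesis targets can be stated compactly: a frequent itemset is maximal iff no proper superset of it is also frequent. The sketch below illustrates just that definition with a naive level-wise miner and a maximality filter; SWMax's sliding-window, two-pass machinery is not reproduced here, and the transactions are invented.

```python
# Naive illustration of maximal large (frequent) itemsets: mine all
# frequent itemsets level-wise, then keep those with no frequent
# proper superset.

from collections import Counter
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    items = sorted({i for t in transactions for i in t})
    freq = {}
    for k in range(1, len(items) + 1):
        counts = Counter()
        for t in transactions:
            for c in combinations(sorted(t), k):
                counts[frozenset(c)] += 1
        level = {s: n for s, n in counts.items() if n >= min_support}
        if not level:
            break
        freq.update(level)
    return freq

def maximal(freq):
    # s < t is the proper-subset test on frozensets.
    return {s for s in freq if not any(s < t for t in freq)}

txns = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
M = maximal(frequent_itemsets(txns, min_support=3))
```

Because every subset of a maximal large itemset is itself large, the maximal sets are a compact representation of the full collection of frequent itemsets.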
595

An Analysis of Collective Action in the National Teachers' Association, R.O.C.

Hsieh, Pi-Ying, 29 July 2004
Collective Action, National Teachers' Association R.O.C.
596

Targeted Advertising Based on GP-association rules

Tsai, Chai-wen, 13 August 2004
Targeting a small portion of customers for advertising has long been recognized as valuable by businesses. In this thesis we propose a novel approach to promoting products that have no prior transaction records. The approach starts by discovering GP-association rules between customer types and product genres that have occurred frequently in transaction records. Customers are characterized by demographic attributes, some of which have concept hierarchies, and products can be generalized through a product taxonomy. Based on the set of GP-association rules, we developed a comprehensive algorithm for locating a short list of prospective customers for a given promotional product. The new approach was evaluated using patron circulation data from the OPAC system of our university library. We measured the accuracy of the estimation method and the effectiveness of targeted advertising under different parameter settings. The results show that our approach achieves higher accuracy and effectiveness than other methods.
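The final targeting step can be sketched as scoring customers against mined customer-type → product-genre rules. Everything below (the attribute names, the rule set, the confidence values) is invented for illustration; the thesis's GP-association rules additionally generalize over concept hierarchies and a product taxonomy, which this sketch omits.

```python
# Shortlisting prospective customers for a new product via
# customer-type -> product-genre rules. All data here are hypothetical.

# (customer_type, product_genre) -> rule confidence, as mined from
# past transaction records.
rules = {
    (("student", "20s"), "databases"): 0.62,
    (("faculty", "40s"), "databases"): 0.35,
    (("student", "20s"), "networks"): 0.48,
}

customers = [
    {"id": 1, "occupation": "student", "age_group": "20s"},
    {"id": 2, "occupation": "faculty", "age_group": "40s"},
    {"id": 3, "occupation": "student", "age_group": "30s"},
]

def shortlist(customers, genre, top_n=2):
    """Rank customers by the confidence of the rule matching their type."""
    scored = []
    for c in customers:
        key = ((c["occupation"], c["age_group"]), genre)
        conf = rules.get(key, 0.0)
        if conf > 0:
            scored.append((conf, c["id"]))
    return [cid for conf, cid in sorted(scored, reverse=True)[:top_n]]
```

Because the rules are keyed on customer types rather than individual purchase histories, a product with no transaction records can still be matched to customers through its genre.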
597

A Class-rooted FP-tree Approach to Data Classification

Chen, Chien-hung, 29 June 2005
Classification, an important problem in data mining, is a useful technique for prediction. The goal of the classification problem is to construct a classifier from a given training database and to predict the class of new, unlabeled data. Classification has been widely applied in many areas, such as medical diagnosis and weather prediction. The decision tree is the most popular classifier model, since it generates understandable rules and performs classification without requiring heavy computation. However, a major drawback of the decision tree model is that it examines only a single attribute at a time. In the real world, attributes in some databases are dependent on each other, so we may improve the accuracy of the decision tree by discovering the correlations between attributes. The CAM method applies association rule mining, like the Apriori method, to discover attribute dependence. However, traditional methods for mining association rules are inefficient in classification applications and can suffer from five problems: (1) combinatorial explosion, (2) invalid candidates, (3) unsuitable minimal support, (4) ignored meaningful class values, and (5) itemsets without class data. The FP-growth method avoids the first two problems but still suffers from the remaining three. Moreover, one further problem arises: nodes unnecessary for the classification problem make the FP-tree incompact and huge. Furthermore, the CAM method is expensive because it scans the database too many times, and its attribute combination problem causes some misclassification. Therefore, in this thesis, we present an efficient and accurate decision tree building method that resolves the above six problems and reduces the database scanning overhead of the CAM method. We build a structure named the class-rooted FP-tree, which is similar to the FP-tree except that the root of the tree is always a class item.
Instead of using the static minimal support applied in the FP-growth method, we decide the minimal support dynamically, which avoids some misjudgment of the large itemsets used for the classification problem. In the decision tree building phase, we provide a pruning strategy that reduces the number of database scans. We also solve the attribute combination problem in the CAM method and improve the accuracy. Our simulation shows that the proposed class-rooted FP-tree mining method outperforms other association rule mining methods in terms of storage usage, and that it improves on the CAM method in terms of the number of database scans and classification accuracy. The mining strategy of our proposed method is applicable to any decision tree building method and provides high accuracy in real-world applications.
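The core structural idea, a prefix tree whose root is always a class item, can be sketched briefly. This illustrates only the structure: one prefix tree per class, with attribute values ordered by global frequency so shared prefixes merge, as in an FP-tree. The dynamic minimal support and pruning strategies described above are not reproduced, and the training records are invented.

```python
# A minimal sketch of a class-rooted prefix tree: each training record
# is inserted under its class label, so every path from a class root
# encodes attribute values that co-occur with that class.

from collections import Counter

class Node:
    def __init__(self, item):
        self.item, self.count, self.children = item, 0, {}

    def insert(self, items):
        self.count += 1
        if items:
            head, rest = items[0], items[1:]
            child = self.children.setdefault(head, Node(head))
            child.insert(rest)

def build_class_rooted_trees(records):
    """records: list of (class_label, [attribute values])."""
    freq = Counter(i for _, attrs in records for i in attrs)
    roots = {}
    for label, attrs in records:
        root = roots.setdefault(label, Node(label))
        # FP-tree style: order items by global frequency so shared
        # prefixes merge into common paths.
        ordered = sorted(attrs, key=lambda i: (-freq[i], i))
        root.insert(ordered)
    return roots

records = [("yes", ["humid", "warm"]), ("yes", ["humid", "cool"]),
           ("no", ["dry", "warm"])]
trees = build_class_rooted_trees(records)
```

With the class at the root, every itemset read off a path already carries its class label, which is exactly the property that lets the method skip itemsets without class data.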
598

none

Chen, Li-Hui, 26 August 2005
In the past decade, the government has devoted itself to promoting community empowerment so that residents actively take part in the public affairs of their community, develop a sense of community identity, and cultivate their own particular culture to improve quality of life. However, the outcomes have fallen short of the government's expectations. To carry through the concept of community empowerment, governmental and non-governmental parties have acted aggressively, stressing community and adult education grounded in community development and community learning as new strategies of community development. In the non-governmental sphere, community development is mainly implemented by non-profit organizations in community form. The Meinung People's Association was chosen as the case study for this thesis. It not only continues its work of cultural preservation but has also achieved excellent results suited to the characteristics of the population in Meinung, implementing a village-based community college and community education for foreign wives, efforts that have been noticed and approved by others. The functions and roles of community education are addressed together with its patterns of development. Interviews were used to collect first-hand data, and Porter's SWOT analysis and Seetoo Dah-Hsian's CORPS model were used to analyze and evaluate the strategic practices of community education, to further understand what problems a non-profit organization faces while implementing community education, and to find corresponding solutions. Finally, Dr. W. Edwards Deming's PDCA model was used to think through systematic strategies for the future development of the Meinung People's Association. Drawing on the development experience of the Meinung People's Association, I hope to offer practical suggestions and references on how non-profit organizations can implement community education.
599

Constructing Directed Domain Knowledge Structure Map Using Association Rule - An Example of MIS Domain

Cheng, Pai-shung, 31 August 2006
In the coming knowledge-based economy era, the knowledge structure map (KSM) has become more and more important. If learners study without the support of a knowledge structure map, they face the problem of learning in isolation. To construct a real KSM, we targeted the MIS domain. Using the National Dissertation and Thesis Abstract System as the input source, we first extract different research subjects from keywords and then calculate the relation strength between each pair of keywords. An automatic approach has been developed for constructing KSMs for different periods of time. The constructed KSM can help learners avoid learning in isolation and provides a good reference for new researchers seeking related research directions. The proposed method can also be applied in enterprises: they can adopt it to construct a KSM for their professional domain, and the constructed KSM can help new employees learn better. Furthermore, with the support of a KSM, a CEO can make better decisions, as the KSM captures internal and external competitive advantages relevant to future directions.
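The relation-strength step can be sketched as a directed association measure over keyword co-occurrence. The measure below is the confidence of the rule A → B (co-occurrence count over A's count); the abstract does not specify the thesis's exact strength formula, and the keyword data are invented.

```python
# Directed relation strength between keyword pairs from thesis keyword
# lists, using confidence(A -> B) = count(A and B) / count(A).
# Illustrative sketch; data and measure are assumptions.

from collections import Counter
from itertools import permutations

def relation_strengths(keyword_lists):
    single = Counter()
    pair = Counter()
    for kws in keyword_lists:
        kws = set(kws)
        single.update(kws)
        # Count both directions of every co-occurring pair.
        pair.update(permutations(sorted(kws), 2))
    return {(a, b): pair[(a, b)] / single[a] for (a, b) in pair}

theses = [
    {"data mining", "association rules"},
    {"data mining", "classification"},
    {"data mining", "association rules", "e-commerce"},
]
strengths = relation_strengths(theses)
```

Keeping the measure directed matters for a directed KSM: "association rules" may imply "data mining" far more strongly than the reverse, which is what orients the edges of the map.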
600

A Randomness Based Analysis on the Data Size Needed for Removing Deceptive Patterns

IBARAKI, Toshihide, BOROS, Endre, YAGIURA, Mutsunori, HARAGUCHI, Kazuya, 01 March 2008
No description available.
