401

Simulation Study for the Performance of a Large Solar Hot Water System Using Natural Circulation DHW System Modules

Yu, Kuan-Hsiang 16 September 2011 (has links)
This research studies the performance of a large solar hot water system constructed by connecting a series of small domestic natural-circulation systems, a configuration for which few studies are available. The major concern is that when the circulation pump is on, a short-circuit flow forms between the inlet and outlet of each storage tank in a natural-circulation unit. Water then has little chance to flow through the collector by thermosyphon, and system performance can drop drastically. This thesis presents a numerical simulation study of how control and operating parameters affect system performance, providing important information for both users and system designers.
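
The short-circuit concern can be illustrated with a minimal energy-balance sketch (all parameter values and the duty-cycle scheme are hypothetical, not taken from the thesis): each tank module gains collector heat only while thermosyphon flow exists, which the pump disrupts.

```python
# Minimal energy-balance sketch of one natural-circulation tank module.
# All parameter values are illustrative, not from the thesis.

TANK_MASS = 200.0        # kg of water in the storage tank
CP = 4186.0              # J/(kg*K), specific heat of water
COLLECTOR_GAIN = 1500.0  # W delivered by thermosyphon when it is active
LOSS_COEFF = 5.0         # W/K, tank heat loss to ambient
T_AMBIENT = 20.0         # deg C

def step_tank(temp, pump_on, dt=60.0):
    """Advance the tank temperature by dt seconds.

    When the pump is on, forced flow short-circuits the tank's inlet and
    outlet, suppressing the thermosyphon loop, so the collector gain is lost.
    """
    gain = 0.0 if pump_on else COLLECTOR_GAIN
    loss = LOSS_COEFF * (temp - T_AMBIENT)
    return temp + (gain - loss) * dt / (TANK_MASS * CP)

# Compare a day with the pump always on vs. duty-cycled operation.
for label, duty in [("pump always on", 1.0), ("pump on 20% of time", 0.2)]:
    t = 25.0
    for minute in range(8 * 60):                 # 8 hours of sunshine
        t = step_tank(t, pump_on=(minute % 10) < duty * 10)
    print(f"{label}: final tank temperature {t:.1f} C")
```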
402

Influence of hydrological seasonality on sandbank benthos: algal biomass and shrimp abundance in a large neotropical river

Montoya Ceballos, Jose Vicente 15 May 2009 (has links)
In this study, I examined the influence of hydrological seasonality on spatiotemporal variation of algal biomass and shrimp abundance on sandbanks of the Cinaruco River in southwestern Venezuela. Seasonal variation of abiotic and biotic variables in the Cinaruco was driven by the hydrological regime. During high-water periods, river sites in the main channel and lagoon sites were similar in water physicochemical variables and algal biomass. In contrast, physicochemical variables and algal biomass differed between river and lagoon sites during the low-water period. The absence of flow in lagoons and consistently low algal biomass on river sandbanks were the most important features of the spatial variability between main-channel and lagoon sandbanks during low-water phases. Benthic algal biomass was highly uniform at small spatial scales and significantly heterogeneous at large spatial scales. In the second major part of this dissertation, I found a relatively species-rich shrimp assemblage, with seven species inhabiting the sandbanks of the Cinaruco. I also observed clear patterns of temporal and spatial variation in shrimp abundance on the Cinaruco sandbanks. Shrimp abundance on the sandbanks showed remarkable diel variation, with almost exclusive use of this habitat at night. Seasonally, shrimp were more abundant during rising- and falling-water periods, when environmental conditions change rapidly. Shrimp abundance was high on sandbanks lacking troughs and having submerged vegetation, environmental features that presumably promote colonization/establishment and survival/persistence of shrimp on the sandbanks. In a patch-dynamic view of communities, a mobility control model seems to apply to shrimp of the sandbanks in the Cinaruco during the period of rapid changes in hydrology and habitat structure. During low-water periods, when the habitat structure of sandbanks is relatively constant, low shrimp abundance appears to be heavily controlled by high fish predation. The annual flood regime of the Cinaruco, which drives the concentrations of dissolved materials, affects material exchanges between aquatic and terrestrial systems, and modifies aquatic habitat structural complexity, is responsible for creating strong patterns of seasonal and spatial variation in benthic algal crops and shrimp abundance on the sandbanks of this large floodplain river.
403

Transient Analysis of Large-scale Stochastic Service Systems

Ko, Young Myoung May 2011 (has links)
The transient analysis of large-scale systems is often difficult even when the systems belong to the simplest M/M/n type of queues. To address these analytical difficulties, previous studies have been conducted under various asymptotic regimes obtained by suitably accelerating parameters, establishing useful mathematical frameworks and giving insight into important characteristics and intuitions. However, such studies show significant limitations when used to approximate real service systems: (i) they are more relevant to steady-state analysis; (ii) they emphasize proofs of convergence results rather than numerical methods for obtaining system performance; and (iii) they provide only one set of limit processes regardless of actual system size. To overcome these drawbacks, this dissertation studies the transient analysis of large-scale service systems with time-dependent parameters. The research goal is to develop a methodology that provides accurate approximations based on a technique called uniform acceleration, utilizing the theory of strong approximations. We first investigate and discuss the possible inaccuracy of limit processes obtained from this technique. As a solution, we propose adjusted fluid and diffusion limits that are specifically designed to approximate large, finite-sized systems. We find that the adjusted limits significantly improve the quality of the approximations while retaining asymptotic exactness, and several numerical results provide evidence of their effectiveness. We study both a call center, which is a canonical example of a large-scale service system, and an emerging peer-to-peer (P2P) Internet multimedia service network. Based on our findings, we introduce a possible extension to systems exhibiting non-Markovian behavior, which the uniform acceleration technique does not address, by incorporating the denseness of phase-type distributions into the derivation of the limit processes. The proposed method offers great potential to accurately approximate performance measures of non-Markovian systems with less computational burden.
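
As a rough illustration of the kind of fluid approximation involved (a minimal textbook sketch, not the dissertation's adjusted limits): for an M(t)/M/n queue with arrival rate λ(t), service rate μ, and n servers, the fluid limit q(t) satisfies q'(t) = λ(t) − μ·min(q(t), n). The parameter values below are illustrative only.

```python
import math

def fluid_limit(lam, mu, n, q0=0.0, horizon=24.0, dt=0.01):
    """Euler integration of the M(t)/M/n fluid limit
    q'(t) = lambda(t) - mu * min(q(t), n)."""
    q, t, path = q0, 0.0, []
    while t < horizon:
        q += dt * (lam(t) - mu * min(q, n))
        q = max(q, 0.0)
        t += dt
        path.append((t, q))
    return path

# Sinusoidal arrivals peaking mid-day, 100 servers, unit service rate.
lam = lambda t: 80.0 + 40.0 * math.sin(math.pi * t / 12.0)
path = fluid_limit(lam, mu=1.0, n=100)
print("fluid head count at t=6h: %.1f, t=12h: %.1f" % (path[599][1], path[1199][1]))
```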
404

A Sliding-Window Approach to Mining Maximal Large Itemsets for Large Databases

Chang, Yuan-feng 28 July 2004 (has links)
Mining association rules is a process of nontrivial extraction of implicit, previously unknown, and potentially useful information from data in databases. Mining maximal large itemsets is a further step: it aims to find the set of large (frequent) itemsets that have no large superset, which is representative of all large itemsets. Previous algorithms for mining maximal large itemsets can be classified into two approaches: exhaustive and shortcut. The shortcut approach can generate a smaller number of candidate itemsets than the exhaustive approach, resulting in better performance in terms of time and storage space. On the other hand, when updates to the transaction database occur, one possible approach is to re-run the mining algorithm on the whole database. The other approach is incremental mining, which aims for efficient maintenance of the discovered association rules without re-running the mining algorithms. However, previous shortcut-based algorithms for mining maximal large itemsets cannot support incremental mining, while algorithms for incremental mining, e.g., the SWF algorithm, cannot efficiently support mining maximal large itemsets, since they are based on the exhaustive approach. Therefore, in this thesis, we focus on the design of an algorithm that performs well for both mining maximal itemsets and incremental mining. Based on some observations, for example, "if an itemset is large, all its subsets must be large; therefore, those subsets need not be examined further", we propose a sliding-window approach, the SWMax algorithm, for efficiently mining maximal large itemsets and supporting incremental mining. Our SWMax algorithm is a two-pass, partition-based approach. In the first pass, we find all candidate 1-itemsets ($C_1$), candidate 3-itemsets ($C_3$), large 1-itemsets ($L_1$), and large 3-itemsets ($L_3$), and generate the virtual maximal large itemsets. Then, we use $L_1$ to generate $C_2$, use $L_3$ to generate $C_4$, use $C_4$ to generate $C_5$, and so on, until no further $C_k$ is generated. In the second pass, we use the virtual maximal large itemsets to prune $C_k$ and decide the maximal large itemsets. For incremental mining, we consider two cases: (1) data insertion and (2) data deletion. In both cases, if a 1-itemset is not large in the original database, it cannot be found in the updated database by the SWF algorithm; that is, a missing case can occur in the incremental mining process of the SWF algorithm, because it keeps only the $C_2$ information. Our SWMax algorithm supports incremental mining correctly, since $C_1$ and $C_3$ are maintained. In our simulation, we generate synthetic databases to model real transaction databases. The results show that our SWMax algorithm generates fewer candidates and needs less time than the SWF algorithm.
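
For readers unfamiliar with the notion, here is a minimal sketch of maximal large itemset extraction: a plain level-wise Apriori pass followed by a maximality filter, illustrative only and not the SWMax algorithm itself.

```python
from itertools import combinations

def apriori_maximal(transactions, min_support):
    """Find maximal large itemsets: large itemsets with no large superset.
    Plain level-wise Apriori; illustrative only, not the SWMax algorithm."""
    def support(itemset):
        return sum(1 for t in transactions if itemset <= t)

    items = {frozenset([i]) for t in transactions for i in t}
    large, level = [], {s for s in items if support(s) >= min_support}
    while level:
        large.extend(level)
        # Join step: candidates of size k+1 from pairs of large k-itemsets.
        candidates = {a | b for a in level for b in level
                      if len(a | b) == len(a) + 1}
        level = {c for c in candidates if support(c) >= min_support}
    # Keep only itemsets that have no large proper superset.
    return [s for s in large if not any(s < t for t in large)]

db = [frozenset("abc"), frozenset("abd"), frozenset("ab"), frozenset("cd")]
print(apriori_maximal(db, min_support=2))  # e.g. [frozenset({'a','b'}), ...]
```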
405

An Efficient Union Approach to Mining Closed Large Itemsets in DNA Microarray Datasets

Lee, Li-Wen 05 July 2006 (has links)
A DNA microarray is a very good tool for studying gene expression levels in different situations. Mining association rules in DNA microarray datasets can help us understand how genes affect each other and which genes are usually co-expressed. Mining closed large itemsets can reduce the size of the mining result from DNA microarray datasets, where a closed itemset is an itemset that has no superset with the same support value. Since the number of genes, stored in columns, is much larger than the number of samples, stored in rows, traditional mining methods that use column enumeration, i.e., enumerating itemsets from combinations of the items stored in columns, face a great challenge. Therefore, several row enumeration methods, e.g., RERII, have been proposed to solve this problem, where row enumeration means enumerating itemsets from combinations of the items stored in rows. Although the RERII method saves more memory space and performs better than the other row enumeration methods, it needs complex intersection operations at each node of the row enumeration tree to generate the closed itemsets. In this thesis, we propose a new method, UMiner, which is based on union operations, to mine the closed large itemsets in DNA microarray datasets. Our approach is a row enumeration method. First, we add all tuples in the transposed table to a prefix tree, where a transposed table records where each item appears in the original table. Next, we traverse this prefix tree to create a row-node table, which records each node and the related range of its child nodes in the prefix tree. Then we generate the closed itemsets by applying union operations to the itemsets in the item groups stored in the row-node table. Since we do not use intersection operations to generate the closed itemset for each enumeration node, we reduce the time needed at each node of the row enumeration tree. Moreover, we develop four pruning techniques to reduce the number of candidate closed itemsets in the row enumeration tree. By replacing the complex intersection operations with union operations and designing four pruning techniques to reduce the number of branches in the row enumeration tree, our method finds closed large itemsets very efficiently. In our performance study, we use three real datasets: clinical data on breast cancer, lung cancer, and AML-ALL. The experimental results show that our UMiner method is always faster than the RERII method across all support values, regardless of the average length of the closed large itemsets. Moreover, our simulation results show that the processing time of our method increases much more slowly than that of the RERII method as the average number of items in the rows of a dataset increases.
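
A minimal sketch of the closedness criterion itself (brute force over toy data, not the UMiner algorithm): an itemset is closed exactly when it equals its own closure, i.e., the set of items common to every transaction that contains it.

```python
def closure(itemset, transactions):
    """Closure of an itemset: all items common to every transaction
    containing it. An itemset is closed iff it equals its own closure."""
    covering = [t for t in transactions if itemset <= t]
    if not covering:
        return itemset
    common = set(covering[0])
    for t in covering[1:]:
        common &= t
    return frozenset(common)

db = [frozenset("abc"), frozenset("abd"), frozenset("ab")]
print(closure(frozenset("a"), db))   # frozenset({'a','b'}) -> 'a' is not closed
print(closure(frozenset("ab"), db))  # frozenset({'a','b'}) -> 'ab' is closed
```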
406

Application of coupled E/H field formulation to the design of multiple layer AR coating for large incident angles

You, Neng-Jung 17 July 2000 (has links)
Thin-film theory is well developed, and so are the fabrication processes. Yet under some special conditions, traditional methods (such as the ABCD matrix and transmission matrix methods) lead to serious numerical error. In this thesis, we propose a new method, called the coupled E/H field formulation, which overcomes this numerical problem in simulating the characteristics of complex multi-layered structures. We have verified both the algorithm and its results against the traditional techniques. By extending the impedance matching principle, we obtained a multi-layer anti-reflection (AR) coating design optimized for a time-harmonic plane wave at any incident angle. Such a design allows plane waves at adjacent angles to pass through the coating layers with minimal reflection. Furthermore, we apply this AR coating design to the facets of semiconductor lasers. Our calculations show that a multi-layer coating performs better than a single-layer coating: the facet reflectivity of a laser diode drops from 0.085% with a single-layer coating to 0.056% with a 5-layer coating, a 33% improvement.
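
For context, here is a minimal sketch of the traditional characteristic-matrix (transmission matrix) method the thesis compares against, restricted to normal incidence and lossless layers; the material values are illustrative, not from the thesis.

```python
import cmath

def reflectance(n_in, layers, n_sub, wavelength):
    """Thin-film reflectance at normal incidence via the characteristic
    (transmission) matrix method. layers = [(refractive index, thickness)]."""
    M = [[1.0, 0.0], [0.0, 1.0]]
    for n, d in layers:
        delta = 2.0 * cmath.pi * n * d / wavelength   # phase thickness
        L = [[cmath.cos(delta), 1j * cmath.sin(delta) / n],
             [1j * n * cmath.sin(delta), cmath.cos(delta)]]
        M = [[M[0][0]*L[0][0] + M[0][1]*L[1][0], M[0][0]*L[0][1] + M[0][1]*L[1][1]],
             [M[1][0]*L[0][0] + M[1][1]*L[1][0], M[1][0]*L[0][1] + M[1][1]*L[1][1]]]
    B = M[0][0] + M[0][1] * n_sub
    C = M[1][0] + M[1][1] * n_sub
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

# Quarter-wave MgF2 layer on glass at 550 nm (illustrative numbers).
wl = 550e-9
print(reflectance(1.0, [(1.38, wl / (4 * 1.38))], 1.52, wl))  # ~1.3% vs ~4.3% bare
```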
407

On the strong law of large numbers for sums of random elements in Banach space

Hong, Jyy-I 12 June 2003 (has links)
Let $\mathcal{B}$ be a separable Banach space. In this thesis, it is shown that Chung's strong law of large numbers holds for a sequence of independent $\mathcal{B}$-valued random elements and for an array of rowwise independent $\mathcal{B}$-valued random elements under weaker assumptions, by using more general functions $\phi_n$.
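
For orientation, a statement of the classical real-valued form of Chung's strong law, which the thesis generalizes to $\mathcal{B}$-valued random elements; this is the standard textbook version and not the thesis's weakened hypotheses.

```latex
% Classical real-valued Chung SLLN, stated for orientation only.
Let $\{X_n\}$ be independent random variables with $\mathbb{E}X_n = 0$, and let
$\phi$ be a positive, even, continuous function with $\phi(x)/x$ nondecreasing
and $\phi(x)/x^2$ nonincreasing on $(0,\infty)$. If $0 < a_n \uparrow \infty$ and
\[
  \sum_{n=1}^{\infty} \frac{\mathbb{E}\,\phi(X_n)}{\phi(a_n)} < \infty,
\]
then $\sum_n X_n / a_n$ converges almost surely, and hence
$a_n^{-1}\sum_{k\le n} X_k \to 0$ a.s.\ by Kronecker's lemma.
```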
408

Design of large time constant switched-capacitor filters for biomedical applications

Tumati, Sanjay 17 February 2005 (has links)
This thesis investigates various techniques for achieving large time constants and their ultimate limitations. A novel circuit technique for realizing large time constants for high-pass corners in switched-capacitor filters is also proposed and compared with existing techniques. The proposed switched-capacitor technique is insensitive to parasitic capacitances, is area-efficient, and requires only two clock phases. The circuit is used to build a typical switched-capacitor front end with a gain of 10. The low-pass corner is fixed at 200 Hz. The high-pass corner is varied from 0.159 Hz to 4 Hz, and various performance parameters, such as power consumption and silicon area, are compared with conventional techniques, demonstrating the advantages and disadvantages of each technique. The front ends are fully differential and chopper-stabilized to protect against DC offsets and 1/f noise. The front end is implemented in AMI 0.6 µm technology with a supply voltage of 1.6 V, and all transistors operate in weak inversion with currents in the range of tens of nanoamperes.
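
A quick worked sketch of why large time constants are hard in switched-capacitor circuits (textbook relations with illustrative values, not the thesis's circuit): a capacitor C_s switched at f_clk emulates a resistor R_eq = 1/(f_clk·C_s), so pushing a corner toward 1 Hz demands an impractically large capacitor ratio.

```python
import math

F_CLK = 100e3      # Hz, switching frequency (illustrative)
C_INT = 50e-12     # F, integrating capacitor (illustrative)

def corner_from_ratio(c_switched):
    """Corner frequency of a basic SC integrator stage:
    R_eq = 1/(f_clk * C_s), f_c = 1 / (2*pi*R_eq*C_int)."""
    r_eq = 1.0 / (F_CLK * c_switched)
    return 1.0 / (2.0 * math.pi * r_eq * C_INT)

for cs in (1e-12, 10e-15):
    print(f"C_s = {cs:.0e} F -> f_c = {corner_from_ratio(cs):.3f} Hz "
          f"(ratio C_int/C_s = {C_INT / cs:.0f})")
```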
409

An Efficient Parameter-Relationship-Based Approach for Projected Clustering

Huang, Tsun-Kuei 16 June 2008 (has links)
The clustering problem has been discussed extensively in the database literature as a tool for many applications, for example, bioinformatics. Traditional clustering algorithms consider all dimensions of an input dataset in an attempt to learn as much as possible about each object. In high-dimensional data, however, many of the dimensions are often irrelevant; therefore, projected clustering has been proposed. A projected cluster is a subset C of data points together with a subset D of dimensions such that the points in C are closely clustered in the subspace of dimensions D. Many algorithms have been proposed to find projected clusters; most can be classified as partitioning, density-based, or hierarchical. The DOC algorithm is one of the well-known density-based algorithms for projected clustering. It uses a Monte Carlo algorithm to iteratively compute projected clusters and proposes a formula to calculate the quality of a cluster. The FPC algorithm is an extended version of the DOC algorithm; it uses a large-itemset mining approach to find the dimensions of a projected cluster. Finding the large itemsets is the main goal of mining association rules, where a large itemset is a combination of items whose number of occurrences in the dataset is greater than a given threshold. Although the FPC algorithm uses large-itemset mining to speed up the search for projected clusters, it still needs many user-specified parameters. Moreover, in its first step, the FPC algorithm chooses the medoid by repeated random sampling, which takes a long time and may still yield a bad medoid. Furthermore, the quality of a cluster can be measured in more detail if we take the weights of the dimensions into consideration. Therefore, in this thesis, we propose an algorithm that remedies these disadvantages. First, we observe the relationship between parameters and propose a parameter-relationship-based algorithm that needs only two parameters, instead of the three parameters required by most projected clustering algorithms. Next, our algorithm chooses the medoid using the median; the medoid is chosen only once, and the quality of our clusters is better than that of the FPC algorithm. Finally, our quality measure considers the weight of each dimension of the cluster and assigns different values according to the number of occurrences of each dimension. This makes the quality of projected clustering based on our algorithm better than that of the FPC algorithm, and it avoids clusters containing too many irrelevant dimensions. Our simulation results show that our algorithm outperforms the FPC algorithm in terms of both execution time and clustering quality.
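
As background, a minimal sketch of the DOC-style quality trade-off between cluster size and dimensionality; the form μ(a, b) = a·(1/β)^b is from the DOC literature, and the thesis's weighted variant is not reproduced here.

```python
def doc_quality(num_points, num_dims, beta=0.25):
    """DOC-style cluster quality: mu(a, b) = a * (1/beta)**b.
    Rewards clusters that are both large (a points) and defined over
    many relevant dimensions (b); beta in (0, 0.5) sets the trade-off."""
    return num_points * (1.0 / beta) ** num_dims

# A 50-point cluster over 4 dimensions vs. an 80-point cluster over 3.
print(doc_quality(50, 4))   # 12800.0
print(doc_quality(80, 3))   # 5120.0 -> the deeper subspace wins here
```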
410

A Large Itemset-Based Approach to Mining Subspace Clusters from DNA Microarray Data

Tsai, Yueh-Chi 20 June 2008 (has links)
DNA microarrays are one of the latest breakthroughs in experimental molecular biology and have opened the possibility of creating datasets of molecular information that represent many systems of biological or clinical interest. Clustering techniques have proven helpful for understanding gene function, gene regulation, cellular processes, and subtypes of cells. Investigations show that, more often than not, several genes contribute to a disease, which motivates researchers to identify a subset of genes whose expression levels are similar under a subset of conditions. Most subspace clustering models define similarity among different objects by distances over either all or only a subset of the dimensions. However, strong correlations may still exist among a set of objects even if they are far apart from each other as measured by distance functions. Many techniques, such as pCluster and zCluster, have been proposed to find subspace clusters exhibiting coherent expression of a subset of genes on a subset of conditions. However, both contain time-consuming steps: constructing gene-pair MDSs and distributing the gene information across the nodes of a prefix tree. Therefore, in this thesis, we propose a Large Itemset-Based Clustering (LISC) algorithm that avoids these disadvantages of the pCluster and zCluster algorithms. First, we avoid constructing gene-pair MDSs; we construct only the condition-pair MDSs, reducing the processing time. Second, we transform the task of mining the possible maximal gene sets into the problem of mining large itemsets from the condition-pair MDSs. We make use of the concept of the large itemset from mining association rules, where a large itemset is a set of items appearing in a sufficiently large number of transactions. Since we are only interested in subspace clusters with gene sets as large as possible, it is desirable to pay attention to those gene sets that have reasonably large support in the condition-pair MDSs. In other words, we want to find the large itemsets in the condition-pair MDSs, thereby obtaining gene sets supported by enough condition pairs. In this step, we use a revised version of the FP-tree structure, which has been shown to be one of the most efficient data structures for mining large itemsets, to find the large gene sets from the condition-pair MDSs. Thus, we avoid the complex distribution operation and reduce the search space dramatically. Finally, we develop an algorithm to construct the final clusters from the gene sets and condition pairs after searching the FP-tree. Since we are interested in clusters that are as large as possible and not contained in any other cluster, we alternately combine or extend the gene sets and condition sets to construct the interesting subspace clusters. Our simulation results show that the proposed algorithm needs less processing time than the previously proposed algorithms, since they must construct gene-pair MDSs.
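
A minimal sketch of the reduction the thesis describes, mapping per-condition-pair coherent gene sets to a large-itemset problem; the data and the brute-force miner below are toys, not the FP-tree-based LISC algorithm.

```python
from itertools import combinations

# Toy condition-pair MDSs: for each pair of conditions, the genes whose
# expression changes coherently across that pair (hypothetical data).
condition_pair_mds = {
    ("c1", "c2"): {"g1", "g2", "g3"},
    ("c1", "c3"): {"g1", "g2"},
    ("c2", "c3"): {"g1", "g2", "g4"},
}

def large_gene_sets(mds, min_pairs, size):
    """Gene sets of a given size supported by >= min_pairs condition pairs;
    each such set, with its supporting pairs, is a candidate subspace cluster."""
    genes = sorted(set().union(*mds.values()))
    result = {}
    for gene_set in combinations(genes, size):
        pairs = [p for p, g in mds.items() if set(gene_set) <= g]
        if len(pairs) >= min_pairs:
            result[gene_set] = pairs
    return result

print(large_gene_sets(condition_pair_mds, min_pairs=3, size=2))
# {('g1', 'g2'): [('c1', 'c2'), ('c1', 'c3'), ('c2', 'c3')]}
```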
