151

Evaluation and Quantification of Engineered Flocs and Drinking Water Treatability

Arnold, Adam, January 2008
Jar tests are performed to simulate full-scale pre-treatment and particle removal processes. Operators typically conduct them to trial alternative treatment doses and strategies without disturbing the performance of the full-scale drinking water treatment plant. However, information obtained from these tests must be evaluated judiciously, as they currently focus on the reduction of specific water quality parameters (e.g., ultraviolet absorbance at 254 nm (UV254) and turbidity) and on measuring and understanding the effect of coagulant dose on floc size. Aggregate structure has received less attention, mainly because of a lack of appropriate theories to describe complex, random floc structure. Improving the predictive capacity of the bench-scale protocols commonly used to optimize conventional chemical pre-treatment in full-scale drinking water treatment plants is therefore required. Results from settling tests indicated that the production of larger and more settleable flocs could not be described by floc settling velocities and floc sizes alone; settling velocities were not directly related to either UV254 or turbidity reductions. Results of the floc characterization tests indicated that measured UV254 and turbidity of the supernatant were generally inversely proportional to aggregate D90; that is, residual UV254 and/or turbidity decreased as D90 increased, which may have been indicative of flocculent settling. No direct relationship could be discerned between fractal dimension D1 (i.e., floc shape) and the UV254 and turbidity of the supernatant; however, the turbidity after flocculation and a period of settling appeared to be inversely proportional to fractal dimension D2 (i.e., porosity). Overall, these experiments demonstrated that size distributions and fractal dimensions might be used to assess and/or predict pre-treatment and particle removal performance. Specifically, the relationship between D90 values calculated from samples of flocculated water prior to settling and the UV254 and turbidity of that water after a period of settling may serve as a simple tool for describing, and potentially better predicting, flocculent settling performance. This appears to be the first such tool to have been reported.
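As an illustration of the D90 metric and the inverse relationship this abstract reports, here is a minimal sketch (hypothetical values throughout, not data from the thesis): it computes D90 from a set of floc size measurements and crudely fits the suggested residual ≈ k / D90 relationship.

```python
import numpy as np

# Hypothetical floc sizes (equivalent diameters, micrometres) measured
# by image analysis of flocculated water sampled before settling.
floc_sizes_um = np.array([120, 340, 95, 410, 280, 150, 520, 230, 180, 390])

# D90: the size below which 90% of the measured flocs fall.
d90 = np.percentile(floc_sizes_um, 90)
print(f"D90 = {d90:.0f} um")

# The abstract reports residual UV254/turbidity roughly inversely
# proportional to D90, i.e. residual ~ k / D90. With paired
# (D90, residual turbidity) observations, k can be fitted crudely:
d90_obs = np.array([250.0, 310.0, 420.0, 505.0])   # hypothetical
turb_obs = np.array([4.1, 3.2, 2.5, 2.0])          # hypothetical, NTU
k = np.mean(turb_obs * d90_obs)                    # fit residual = k / D90
print(f"k = {k:.0f}; predicted turbidity at D90 = 350 um: {k/350:.2f} NTU")
```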
152

Two- and Three-Dimensional Coding Schemes for Wavelet and Fractal-Wavelet Image Compression

Alexander, Simon, January 2001
This thesis presents two novel coding schemes and applies them to both two- and three-dimensional image compression. Image compression can be viewed as functional approximation under a constraint on the amount of information allowed in specifying the approximation. Two methods of approximating functions are discussed: iterated function systems (IFS) and wavelet-based approximations. IFS methods approximate a function by the fixed point of an iterated operator, using consequences of the Banach contraction mapping principle. Natural images under a wavelet basis exhibit characteristic decays in coefficient magnitude that can be exploited to aid approximation. The relationship between quantization, modelling, and encoding in a compression scheme is examined, and context-based adaptive arithmetic coding, which is used in the coding schemes developed here, is described. A coder with explicit separation of the modelling and encoding roles is presented: an embedded wavelet bitplane coder based on hierarchical context in the wavelet coefficient trees. Fractal (spatial IFSM) and fractal-wavelet (coefficient-tree), or IFSW, coders are discussed, and a second coder is proposed, merging the IFSW approaches with the embedded bitplane coder. The performance of the coders and their applications to two- and three-dimensional images are discussed, including two-dimensional still images in greyscale and colour, and three-dimensional streams (video).
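To make the contraction-mapping idea concrete, the sketch below (illustrative only, not the thesis's coder) iterates a contractive affine operator from two different starting signals; both converge to the same fixed point, which is exactly how a fractal (IFS) decoder reconstructs an image from a stored operator.

```python
import numpy as np

rng = np.random.default_rng(0)

def T(x):
    # Contractive affine operator on a length-64 "signal": a downsampled
    # copy of x, scaled by 0.5 (< 1, hence contractive in the sup norm),
    # plus a fixed offset playing the role of the stored parameters.
    half = 0.5 * (x[::2] + x[1::2])          # averaged "domain" block
    return 0.5 * np.repeat(half, 2) + 0.3    # scaled copy + offset

x = rng.normal(size=64)                      # arbitrary starting signal
y = rng.normal(size=64)                      # a different starting signal
for _ in range(30):                          # iterate x_{n+1} = T(x_n)
    x, y = T(x), T(y)

print(np.max(np.abs(x - y)))                 # ~0: same fixed point
```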
154

Fractal Network Traffic Analysis with Applications

Liu, Jian, 19 May 2006
Today, the Internet is growing exponentially, with traffic statistics that mathematically exhibit fractal characteristics: self-similarity and long-range dependence. With these properties, data traffic shows high peak-to-average bandwidth ratios and makes networks inefficient, which makes it difficult to predict, quantify, and control data traffic. In this thesis, two analytical methods are used to study fractal network traffic: second-order self-similarity analysis and multifractal analysis. First, self-similarity is treated as an adaptive property of traffic in networks, with many factors involved in creating it. A new view of this self-similar traffic structure, related to multi-layer network protocols, is provided; this view improves on the theory used in most current literature. Second, the scaling region for traffic self-similarity divides into two timescale regimes: short-range dependence (SRD) and long-range dependence (LRD). Experimental results show that the network transmission delay separates the two scaling regions, giving a physical source for the periodicity in the observed traffic. Bandwidth, TCP window size, and packet size affect SRD, while statistical heavy-tailedness (the Pareto shape parameter) affects the structure of LRD. In addition, a formula to estimate traffic burstiness is derived from the self-similarity property. Multifractal analysis yields the following results. At large timescales, increasing bandwidth does not improve throughput; the two factors affecting traffic throughput are network delay and TCP window size. On the other hand, more simultaneous connections smooth traffic, which can improve network efficiency. At small timescales, improving network efficiency requires controlling bandwidth, TCP window size, and network delay to reduce traffic burstiness. In general, network traffic processes have a Hölder exponent α ranging between 0.7 and 1.3, and their statistics differ from Poisson processes. From the traffic analysis, a notion of efficient bandwidth, EB, is derived: above that bandwidth traffic appears bursty and cannot be reduced by multiplexing, while below it traffic is congested. An important finding is that the relationship between bandwidth and transfer delay is nonlinear.
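For readers unfamiliar with second-order self-similarity analysis, the aggregated-variance method is a standard way to estimate the Hurst exponent H (H near 0.5 means no long-range dependence; H approaching 1 means strong LRD). A minimal sketch on synthetic data, not on the thesis's traces:

```python
import numpy as np

# For an exactly second-order self-similar process,
# Var(X^(m)) ~ m^(2H - 2), where X^(m) is the series block-averaged
# at aggregation level m; H then comes from a log-log slope.
rng = np.random.default_rng(1)
x = rng.normal(size=2**14)          # stand-in for a packet-count series

ms = [2**k for k in range(1, 8)]    # aggregation block sizes
variances = []
for m in ms:
    blocks = x[: len(x) // m * m].reshape(-1, m).mean(axis=1)
    variances.append(blocks.var())

slope, _ = np.polyfit(np.log(ms), np.log(variances), 1)
H = 1 + slope / 2                   # slope = 2H - 2
print(f"estimated H = {H:.2f}")     # ~0.5 for this white-noise input
```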
155

Stacked Package MIMO Antenna and Isolator Design of MIMO Antenna

Lee, Cheng-Han, 30 July 2012
This thesis is divided into two parts. In the first part, the antenna is integrated into a stacked package, and the antenna and the semiconductor chip are co-designed. We exploit the advantages of IPD manufacturing to develop a strong capacitively coupled-fed miniaturization technique and a fractal-slot miniaturization technique, and design a miniaturized antenna operating in the WLAN 2.4 GHz band. The size of the antenna is only 4 mm × 8.625 mm (0.0327λ × 0.0707λ), the operating bandwidth is over 100 MHz, and the radiation efficiency is over 60%. In the second part, we design a stacked structure using an FR4 substrate. The MIMO antenna is miniaturized by the strong capacitively coupled-fed technique, and we propose an S-shaped isolator with a wider isolation bandwidth to improve isolation between the elements. The separation between the two antennas is only 12 mm, and the isolator measures only 10 mm × 10 mm. The measured operating bandwidth is 200 MHz, and the radiation efficiency is over 60%. We also design a 10 mm × 10 mm MIMO antenna with a 2 mm × 8 mm isolator on the stacked package structure; it operates in the WLAN 2.45 GHz band with an operating bandwidth over 100 MHz and a radiation efficiency over 40%. Finally, we propose two stacked-package antenna applications: a dual-frequency design operating in the GPS (1.57 GHz) and WLAN 2.4 GHz bands, and a broadband design whose IPD measures only 3 mm × 3 mm, with an operating bandwidth of 40% (from 4.8 GHz to 7.2 GHz).
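As a quick sanity check of the quoted electrical size (plain arithmetic, not code from the thesis): at 2.45 GHz the free-space wavelength is c/f, about 122.4 mm, so the 4 mm and 8.625 mm dimensions come out near the quoted 0.0327λ and 0.0707λ.

```python
# Electrical size of the 4 mm x 8.625 mm antenna at the WLAN band centre.
c = 299_792_458.0          # speed of light, m/s
f = 2.45e9                 # assumed band centre, Hz
lam = c / f                # ~0.1224 m, i.e. ~122.4 mm

for dim_mm in (4.0, 8.625):
    print(f"{dim_mm} mm = {dim_mm / 1000 / lam:.4f} wavelengths")
# -> 0.0327 and 0.0705, matching the quoted 0.0327λ and ~0.0707λ
```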
156

Multiple-Instance Learning Image Database Retrieval employing Orthogonal Fractal Bases

Wang, Ya-ling, 08 August 2004
The objective of the present work is to propose a novel method for extracting a stable feature set representative of image content. Each image is represented by a linear combination of fractal orthonormal basis vectors, and the coefficients of the image projected onto each orthonormal basis constitute its feature vector. The set of orthonormal basis vectors is generated by a fractal iterated function scheme through mappings between target and domain blocks. The distance measure remains consistent (i.e., the embedding is isometric) between any pair of images before and after projection onto the orthonormal axes: similar images generate points close to each other in the feature space, and dissimilar ones produce feature points far apart. Equivalently, distant feature points are guaranteed to correspond to images with dissimilar content, while close feature points correspond to similar images. We adopt the multiple-instance learning (MIL) paradigm, using the Diverse Density algorithm to model the ambiguity in images in order to learn the concepts used to classify them. A user labels an image as positive if it contains the concepts and as negative if it is far from them; each example image is a bag of blocks, and only the bag is labelled. The user selects positive and negative example images to train the concepts in feature space, and from this small collection of examples the system learns the concepts and uses them to retrieve images containing those concepts from the database. Blocks similar to a learned concept form a group within each image, and the similarity between positive examples and database images is computed from the groups' location distribution, variation, and spatial relations.
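Below is a minimal sketch of the Diverse Density idea (toy 2-D features and a crude maximiser; the thesis applies the algorithm to fractal-basis feature vectors, and none of the values here come from it). It searches for a concept point close to at least one block in every positive bag and far from all blocks in negative bags.

```python
import numpy as np

def diverse_density(t, pos_bags, neg_bags, s=1.0):
    def p_instance(x):                      # Pr(instance x matches concept t)
        return np.exp(-np.sum((x - t) ** 2) / s**2)
    dd = 1.0
    for bag in pos_bags:                    # noisy-OR: some instance matches
        dd *= 1 - np.prod([1 - p_instance(x) for x in bag])
    for bag in neg_bags:                    # no negative instance matches
        dd *= np.prod([1 - p_instance(x) for x in bag])
    return dd

# Toy bags of 2-D block features; the positive bags share a point near (1, 1).
pos = [np.array([[1.1, 0.9], [5.0, 5.0]]), np.array([[0.9, 1.0], [-3.0, 2.0]])]
neg = [np.array([[5.0, 5.0], [-3.0, 2.0]])]

# Crude maximisation: try every instance of every positive bag as candidate t.
cands = [x for bag in pos for x in bag]
best = max(cands, key=lambda t: diverse_density(t, pos, neg))
print("learned concept near:", best)        # -> close to (1, 1)
```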
157

Video Database Retrieval System

Lin, Chia-Hsuan, 03 July 2006
In the digital era, ever more people work with digital video. As the number of users and the volume of video data grow, managing video data becomes a significant concern, and building video database systems that let users search for and retrieve videos has become an active area of study. In this thesis, a novel method for video scene-change detection and video database retrieval is proposed. Fractal orthonormal bases, which guarantee that similar images receive similar indices, are combined with support vector clustering to split a video into a sequence of shots, and a few representative frames (key-frames) are extracted from each shot to serve as the video database index. At query time, features are extracted using MIL, database videos with similar features are located, their similarity to the query is computed, and the results are ranked accordingly.
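A common baseline for the shot-splitting step is histogram differencing between consecutive frames; the sketch below (synthetic frames and an illustrative threshold; the thesis itself uses fractal orthonormal bases with support vector clustering) declares a scene change wherever the normalised histogram distance exceeds a threshold.

```python
import numpy as np

def histogram(frame, bins=32):
    h, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return h / h.sum()                             # normalise to sum to 1

def shot_boundaries(frames, threshold=0.5):
    cuts = []
    prev = histogram(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        cur = histogram(frame)
        if np.abs(cur - prev).sum() > threshold:   # L1 histogram distance
            cuts.append(i)
        prev = cur
    return cuts

# Synthetic "video": 10 dark frames then 10 bright frames -> one cut at 10.
rng = np.random.default_rng(2)
frames = [rng.integers(0, 80, (48, 64)) for _ in range(10)]
frames += [rng.integers(150, 255, (48, 64)) for _ in range(10)]
print(shot_boundaries(frames))                     # -> [10]
```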
158

Study on Micro-Contact Mechanics Model for Multiscale Rough Surfaces

Lee, Chien, 18 August 2006
The observed multiscale character of rough surfaces, in which smaller asperities mount on successively bigger ones, produces hierarchical structures that are described by fractal geometry. Consequently, when two rough surfaces are pressed together under a higher load, the smaller asperities undergo plastic flow and merge into the bigger asperities below them; in other words, the higher load must be supported by the bigger asperities. However, when the GW (Greenwood-Williamson) model was proposed in 1966, its analysis assumed a fixed asperity length-scale, independent of load (or surface separation). Under this assumption, analytical results for a specific asperity length-scale suit only a certain narrow range of loads. In this research, a new model, called the multiscale GW model, is developed, which accounts for the relationship between load and asperity length-scale. First, based on Nayak's model, multiscale asperity properties for different surface parameters are derived, and based on material-yielding theory a criterion is developed for determining the optimal asperity length-scale, i.e., the one that supports the load. Both are then integrated into the GW model to build the multiscale GW model. The new model is compared with the traditional one qualitatively and quantitatively, showing their essential differences, and the effects of surface and material parameters are discussed. Finally, a comparison with experiment is made, revealing good agreement.
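For context, here is a sketch of the classical GW calculation that the thesis generalises: the total load at a given surface separation integrates Hertzian contact of spherical asperity tips over a Gaussian height distribution. Every parameter value below is an illustrative assumption, not a number from the thesis.

```python
import numpy as np

E_star = 100e9     # composite elastic modulus, Pa (assumed)
R = 10e-6          # asperity tip radius, m (assumed fixed: the GW premise)
eta = 1e10         # areal asperity density, 1/m^2 (assumed)
A_n = 1e-4         # nominal contact area, m^2 (assumed)
sigma = 1e-6       # standard deviation of asperity heights, m (assumed)

def gw_load(d):
    """Total load at separation d: asperities with height z > d deform
    elastically by (z - d), each carrying a Hertzian load."""
    z = np.linspace(d, d + 6 * sigma, 2000)            # contacting heights
    phi = np.exp(-(z / sigma) ** 2 / 2) / (sigma * np.sqrt(2 * np.pi))
    hertz = (4.0 / 3.0) * E_star * np.sqrt(R) * (z - d) ** 1.5
    return eta * A_n * np.sum(hertz * phi) * (z[1] - z[0])

for d in (0.5e-6, 1.0e-6, 1.5e-6):
    print(f"separation {d * 1e6:.1f} um -> load {gw_load(d):.1f} N")
```

The multiscale extension described in the abstract would, in effect, make R and the other asperity statistics functions of the load-determined length-scale rather than constants.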
159

GA-based Fractal Image Compression and Active Contour Model

Wu, Ming-Sheng, 01 January 2007
In this dissertation, several GA-based approaches to fractal image compression and the active contour model are proposed. The main drawback of classical fractal image compression is the long encoding time, and two methods are proposed here to address it. First, a schema genetic algorithm (SGA), in which the Schema Theorem is embedded in the GA, is proposed to reduce the encoding time: the genetic operators are adapted according to the Schema Theorem in the evolutionary process performed on the range blocks. We find that this method can indeed speed up the encoder while preserving image quality. Second, exploiting the self-similarity of natural images, a spatial correlation genetic algorithm (SC-GA) is proposed to reduce the encoding time further. The SC-GA has two stages: the first makes use of spatial correlations in images, for both the domain pool and the range pool, to exploit local optima; the second operates on the whole image to explore more adequate similarities if the local optima are unsatisfactory. Not only is encoding accelerated further, but a higher compression ratio is also achieved: because the search space is limited relative to the positions of previously matched blocks, fewer bits are required to record the offset of the domain block instead of its absolute position. Experimental comparisons of the two methods with full search, traditional GA, and other GA search methods demonstrate that they do reduce the encoding time substantially. The main drawback of the traditional active contour model (ACM) for extracting the contour of an object is that the snake cannot converge to concave regions of the object. An improved ACM algorithm, composed of two stages, is proposed to solve this problem. In the first stage, the ACM with the traditional energy function guides the snake to converge to the object boundary everywhere except the concave regions. In the second stage, for the control points that remain outside the concave regions, a proper energy template is chosen and added to the external energy; the modified energy function moves the snake into the concave regions, so the object of interest can be completely extracted. Experimental results show that with this method the snake can indeed completely extract the boundary of a given object at very low extra cost. In addition, for the case where the snake cannot precisely extract the object contour because it has too few control points, a GA-based ACM algorithm is presented: the improved ACM algorithm first guides the snake to an approximate object boundary, and the evolutionary strategy of the GA then refines the extraction by adding a few control points to the snake. Experimental results are likewise provided to show the performance of this method.
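As a point of reference for the GA machinery, below is a bare-bones GA loop of the kind these encoders build on, with a toy fitness standing in for range-domain block matching (a sketch only; the SGA and SC-GA of the dissertation add schema-guided operators and spatial correlation on top of such a loop, and all names and values here are invented for illustration).

```python
import random

def evolve(fitness, genome_bits=16, pop_size=30, generations=50,
           p_cross=0.8, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(genome_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        nxt = scored[:2]                          # elitism: keep best two
        while len(nxt) < pop_size:
            a, b = random.sample(scored[:10], 2)  # pick parents from the top
            cut = random.randrange(genome_bits) if random.random() < p_cross else 0
            child = a[:cut] + b[cut:]             # one-point crossover
            child = [g ^ (random.random() < p_mut) for g in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy fitness: a 16-bit genome encodes a candidate domain-block position
# (x, y); pretend the best match for the current range block sits at (57, 123).
def fitness(g):
    x = int("".join(map(str, g[:8])), 2)
    y = int("".join(map(str, g[8:])), 2)
    return -(abs(x - 57) + abs(y - 123))          # closer match, higher fitness

print(fitness(evolve(fitness)))                   # -> near 0 (a good match)
```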
160

Fractal Image Coding Based on Classified Range Regions

USUI, Shin'ichi; TANIMOTO, Masayuki; FUJII, Toshiaki; KIMOTO, Tadahiko; OHYAMA, Hiroshi, 20 December 1998
No description available.
