1

Network Initialization Protocol for Smart Grid

Huang, Yao-Chin 15 August 2012 (has links)
In recent years, due to energy-saving concerns, the smart grid has become increasingly important. AMI (Advanced Metering Infrastructure) is the basis of the smart grid. However, AMI network initialization usually incurs large time delays and wasted energy because of the many collisions that occur when initializing a high-density, variable network. In this paper, we propose a Dynamic Contention Slot Initialization Protocol (DCSI Protocol) to reduce time delay and energy waste during network initialization. At the beginning, all nodes in the DCSI protocol are set to the receiving state. The proposed approach reduces not only collisions but also communication failures due to interference from outside the transmission range. We divide time into slots and compose them into superframes. The first slot of each superframe is reserved for the master node's broadcast, and the remaining slots are used by other nodes to join the network. Based on the previous superframe, nodes in the proposed protocol adjust the number of contention slots by detecting collisions, adapting to the high node density and variable network. The simulation results demonstrate the superiority of the DCSI protocol over flooding.
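
A minimal sketch of the contention-slot adjustment idea described above, assuming a simple doubling/halving rule (the abstract does not specify DCSI's exact rule, so the adjustment policy, slot bounds, and node counts here are illustrative):

    import random

    def run_superframe(num_joining, num_contention_slots):
        """Simulate one superframe: slot 0 carries the master node's
        broadcast; the remaining slots are contended by joining nodes."""
        choices = [random.randrange(num_contention_slots) for _ in range(num_joining)]
        joined = sum(1 for s in set(choices) if choices.count(s) == 1)   # lone sender succeeds
        collisions = sum(1 for s in set(choices) if choices.count(s) > 1)
        return joined, collisions

    def adjust_slots(num_slots, collisions, min_slots=2, max_slots=64):
        # Hypothetical rule: widen the contention window when collisions
        # were detected in the previous superframe, shrink it otherwise.
        return min(max_slots, num_slots * 2) if collisions else max(min_slots, num_slots // 2)

    pending, slots = 50, 4
    while pending > 0:
        joined, collisions = run_superframe(pending, slots)
        pending -= joined
        slots = adjust_slots(slots, collisions)

Adapting the window to observed collisions is what lets such a protocol cope with high node density without a fixed, worst-case slot count.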
2

Nonlinear Transformations and Filtering Theory for Space Operations

Weisman, Ryan Michael 1984- 14 March 2013 (has links)
Decisions for asset allocation and protection are predicated upon accurate knowledge of the current operating environment as well as correctly characterizing the evolution of the environment over time. The desired kinematic and kinetic states of objects in question cannot be measured directly in most cases and instead are inferred or estimated from available measurements using a filtering process. Often, nonlinear transformations between the measurement domain and desired state domain distort the state domain probability density function, yielding a form which does not necessarily resemble the form assumed in the filtering algorithm. The distortion effect must be understood in greater detail and appropriately accounted for so that even if sensors, state estimation algorithms, and state propagation algorithms operate in different domains, they can all be effectively utilized without any information loss due to domain transformations. This research presents an analytical investigation into how nonlinear transformations of stochastic, but characterizable, processes affect state and uncertainty estimation, with direct application to space object surveillance and spacecraft attitude determination. Analysis is performed with attention to construction of the state domain probability density function, since state uncertainty and correlation are derived from the statistical moments of the probability density function. Analytical characterization of the effect nonlinear transformations impart on the structure of state probability density functions has direct application to conventional nonlinear filtering and propagation algorithms in three areas: (1) understanding how smoothing algorithms used to estimate indirectly observed states impact state uncertainty, (2) justification or refutation of assumed state uncertainty distributions for more realistic uncertainty quantification, and (3) analytic automation of the initial state estimate and covariance in lieu of user tuning. A nonlinear filtering algorithm based upon Bayes' theorem is presented to account for the impact nonlinear domain transformations impart on probability density functions during the measurement update and propagation phases. The algorithm is able to accommodate different combinations of sensors for state estimation and can also be used to hypothesize system parameters or unknown states from available measurements because information is appropriately accounted for.
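
The density distortion described above follows the standard change-of-variables rule; as a sketch, for an invertible, differentiable mapping y = g(x) between domains (the abstract itself does not state this formula):

    % density of y = g(x), given the density p_x of x
    p_y(y) = p_x\big(g^{-1}(y)\big)\,\left|\det\frac{\partial g^{-1}(y)}{\partial y}\right|

The Jacobian factor is what can make p_y non-Gaussian even when p_x is Gaussian, which is why a filter that assumes a particular distributional form in the state domain may misrepresent the transformed uncertainty.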
3

Assessment and Improvement of Snow Datasets Over the United States

Dawson, Nicholas January 2017 (has links)
Improved knowledge of the cryosphere state is paramount for continued model development and for accurate estimates of fresh water supply. This work focuses on evaluation and potential improvements of current snow datasets over the United States. Snow in mountainous terrain is most difficult to quantify due to the slope, aspect, and remote nature of the environment. Due to the difficulty of measuring snow quantities in the mountains, the initial study creates a new method to upscale point measurements to area averages for comparison to initial snow quantities in numerical weather prediction models. The new method is robust, and cross validation of the method results in a relatively low mean absolute error of 18% for snow depth (SD). Operational models at the National Centers for Environmental Prediction which use Air Force Weather Agency (AFWA) snow depth data for initialization were found to underestimate snow depth by 77% on average. Larger error is observed in areas that are more mountainous. Additionally, SD data from the Canadian Meteorological Center, which is used for some model evaluations, performed similarly to models initialized with AFWA data. The use of constant snow density for snow water equivalent (SWE) initialization for models which utilize AFWA data exacerbates poor SD performance with dismal SWE estimates. A remedy for the constant snow density utilized in NCEP snow initializations is presented in the next study, which creates a new snow density parameterization (SNODEN). SNODEN is evaluated against observations, and performance is compared with offline land surface models from the National Land Data Assimilation System (NLDAS) as well as the Snow Data Assimilation System (SNODAS). SNODEN has less error overall and reproduces the temporal evolution of snow density better than all evaluated products. SNODEN is also able to estimate snow density for up to 10 snow layers, which may be useful for land surface models as well as conversion of remotely-sensed SD to SWE. Due to the poor performance of previously evaluated snow products, the last study evaluates openly-available remotely-sensed snow datasets to better understand the strengths and weaknesses of current global SWE datasets. A new SWE dataset developed at the University of Arizona is used for evaluation. While the UA SWE data has already been stringently evaluated, confidence is further increased by favorable comparison of UA snow cover, created from UA SWE, with multiple snow cover extent products. Poor performance of remotely-sensed SWE is still evident even in products which combine ground observations with remotely-sensed data. Grid boxes that are predominantly tree covered have a mean absolute difference up to 87% of mean SWE, and SWE less than 5 cm is routinely overestimated by 100% or more. Additionally, snow covered area derived from global SWE datasets has mean absolute errors of 20%-154% of mean snow covered area.
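
For context on why the constant-density initialization criticized above is damaging, the standard depth-to-SWE conversion is sketched below; the density value in the example is illustrative, not the operational constant:

    WATER_DENSITY = 1000.0  # kg/m^3

    def swe_from_depth(snow_depth_m, snow_density):
        """Convert snow depth (m) to snow water equivalent (m of water):
        SWE = depth * (rho_snow / rho_water). Holding snow_density constant
        ignores the seasonal densification that a parameterization such as
        SNODEN is designed to capture."""
        return snow_depth_m * snow_density / WATER_DENSITY

    # Example: 1 m of snow at 250 kg/m^3 holds 0.25 m of water.
    print(swe_from_depth(1.0, 250.0))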
4

Synchronous Latency Insensitive Design in FPGA

Sheng, Cheng January 2005 (has links)
A design methodology to mitigate timing problems due to long wire delays is proposed. The timing problems are taken care of at the architecture level instead of the layout level, so that no change is needed when the whole design goes to back-end design; design iterations are thereby avoided. The proposed design method is based on the STARI architecture, and a novel initialization mechanism is proposed in this paper. A low-frequency global clock is used to synchronize the communication, and PLLs are used to provide high-frequency working clocks. The feasibility of the new design methodology is demonstrated on an FPGA test board, and the implementation details are also described in this paper. Only standard library cells are used in this design method, and no change is made to the traditional design flow. The new design methodology is expected to reduce the timing-closure effort in high-frequency, complex digital design in deep-submicron technologies.
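
As a rough illustration of the STARI principle referenced above, a toy Python model of a synchronizing FIFO between sender and receiver clock domains; the half-full initialization target comes from the STARI literature, and the class and parameters here are hypothetical:

    from collections import deque

    class StariFifo:
        """Toy model of a STARI-style synchronizing FIFO. Ending
        initialization with the FIFO about half-full gives maximum slack
        against both underflow (receiver ahead of sender) and overflow
        (sender ahead of receiver) caused by clock skew."""

        def __init__(self, depth=8):
            self.depth = depth
            self.buf = deque()

        def initialize(self, filler=0):
            # Sender inserts items until the FIFO is half-full; only then
            # is the receiver enabled.
            while len(self.buf) < self.depth // 2:
                self.buf.append(filler)

        def put(self, item):
            if len(self.buf) >= self.depth:
                raise OverflowError("sender too far ahead")
            self.buf.append(item)

        def get(self):
            if not self.buf:
                raise IndexError("receiver too far ahead")
            return self.buf.popleft()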
5

Initialization Methods for System Identification

Lyzell, Christian January 2009 (has links)
In the system identification community a popular framework for the problem of estimating a parametrized model structure given a sequence of input and output pairs is given by the prediction-error method. This method tries to find the parameters which maximize the prediction capability of the corresponding model via the minimization of some chosen cost function that depends on the prediction error. This optimization problem is often quite complex with several local minima and is commonly solved using a local search algorithm. Thus, it is important to find a good initial estimate for the local search algorithm. This is the main topic of this thesis.

The first problem considered is the regressor selection problem for estimating the order of dynamical systems. The general problem formulation is difficult to solve and the worst case complexity equals the complexity of the exhaustive search of all possible combinations of regressors. To circumvent this complexity, we propose a relaxation of the general formulation as an extension of the nonnegative garrote regularization method. The proposed method provides means to order the regressors via their time lag and a novel algorithmic approach for the ARX and LPV-ARX case is given.

Thereafter, the initialization of linear time-invariant polynomial models is considered. Usually, this problem is solved via some multi-step instrumental variables method. For the estimation of state-space models, which are closely related to the polynomial models via canonical forms, the state of the art estimation method is given by the subspace identification method. It turns out that this method can be easily extended to handle the estimation of polynomial models. The modifications are minor and only involve some intermediate calculations where already available tools can be used. Furthermore, with the proposed method other a priori information about the structure can be readily handled, including a certain class of linear gray-box structures. The proposed extension is not restricted to the discrete-time case and can be used to estimate continuous-time models.

The final topic in this thesis is the initialization of discrete-time systems containing polynomial nonlinearities. In the continuous-time case, the tools of differential algebra, especially Ritt's algorithm, have been used to prove that such a model structure is globally identifiable if and only if it can be written as a linear regression model. In particular, this implies that once Ritt's algorithm has been used to rewrite the nonlinear model structure into a linear regression model, the parameter estimation problem becomes trivial. Motivated by the above and the fact that most system identification problems involve sampled data, a version of Ritt's algorithm for the discrete-time case is provided. This algorithm is closely related to the continuous-time version and enables the handling of noise signals without differentiations.
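
As a concrete instance of the initialization theme above, the ARX case is one where the prediction-error criterion reduces to ordinary linear regression; a minimal sketch with illustrative orders and simulated data (not code from the thesis):

    import numpy as np

    def estimate_arx(y, u, na=2, nb=2):
        """Least-squares fit of an ARX model
            y[t] = -a1*y[t-1] - ... - a_na*y[t-na]
                   + b1*u[t-1] + ... + b_nb*u[t-nb] + e[t],
        which is linear in the parameters, so the prediction-error
        estimate has a closed form and needs no local search."""
        n = max(na, nb)
        phi = np.array([np.concatenate([-y[t-na:t][::-1], u[t-nb:t][::-1]])
                        for t in range(n, len(y))])
        theta, *_ = np.linalg.lstsq(phi, y[n:], rcond=None)
        return theta[:na], theta[na:]  # (a, b) coefficient vectors

    rng = np.random.default_rng(0)
    u = rng.standard_normal(500)
    y = np.zeros(500)
    for t in range(2, 500):
        y[t] = 1.5*y[t-1] - 0.7*y[t-2] + u[t-1] + 0.5*u[t-2] + 0.01*rng.standard_normal()
    a, b = estimate_arx(y, u)

For richer structures (output-error, Box-Jenkins), the criterion is no longer linear in the parameters, which is exactly when initialization methods such as those in the thesis matter.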
6

Impact of Assimilating Airborne Doppler Radar Winds on the Inner-Core Structure and Intensity of Hurricane Ike (2008)

Gordon, Ronald Walter 26 July 2011 (has links)
Accurate prediction of Tropical Cyclones (TC) is vital for the protection of life and property in areas that are prone to their destructive forces. While significant improvements have been made in forecasting TC track, intensity remains a challenge. It is hypothesized that accurate TC intensity forecasts require, among other things, an adequate initial description of the inner-core region. Therefore, there must be reliable observations of the inner-core area of the TC and effective data assimilation (DA) methods to ingest these data into Numerical Weather Prediction (NWP) models. However, these requirements are seldom met, given the relatively low resolution of operational global prediction models and the lack of routine observations assimilated in the TC inner-core. This study tests the impacts of assimilating inner-core Airborne Doppler Radar (ADR) winds on the initial structure and subsequent intensity forecast of Hurricane Ike (2008). The 4-dimensional variational (4DVar) and 3-dimensional variational (3DVar) methods are used to perform DA, while the Weather Research and Forecasting (WRF) model is used to perform forecasts. It is found that assimilating the data helps to initialize a more realistic inner-core structure with both DA methods. Additionally, the resulting short-term and long-term intensity forecasts are more accurate when data are assimilated than in cases with no DA. In some cases the impact of DA lasts up to 12 hours longer with 4DVar than with 3DVar; it is shown that this is because the flow-dependent 4DVar method produces more dynamically consistent and balanced analysis increments than the static, isotropic increments of 3DVar. However, the impact of both methods is minimal at long range. The analyses show that at longer forecast range the dynamics of Hurricane Ike were influenced more by outer environmental features than by the inner-core winds.
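
For reference, both variational methods above minimize a cost function of the standard form shown here for 3DVar (textbook DA notation, not taken from the thesis):

    J(\mathbf{x}) = \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathsf T}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
                  + \tfrac{1}{2}\big(\mathbf{y}-H(\mathbf{x})\big)^{\mathsf T}\mathbf{R}^{-1}\big(\mathbf{y}-H(\mathbf{x})\big)

where x_b is the background state, B the background-error covariance, y the observations (here the ADR winds), H the observation operator, and R the observation-error covariance. 4DVar extends the observation term over a time window, propagating x with the forecast model, which is the source of the flow-dependent increments noted above.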
7

Partial Sort and Its Applications on Single-Hop Wireless Networks

Shiau, Shyue-Horng 19 January 2006 (has links)
In this dissertation, we focus on the study of the partial sorting (generalized sorting) problem and the initialization problem. The partial sorting problem is to find the first k smallest (or largest) elements among n input elements and to report them in nondecreasing (or nonincreasing) order. The initialization problem on a multiprocessor system is to assign each of n input elements a unique identification number, from 1 to n. This problem can be regarded as a special case of the sorting problem in which all input elements have the same value. We propose some algorithms for solving these problems. The main result is to give a precise analysis of these algorithms. On the traditional model, we modify two algorithms, based on insertion sort and quicksort, to solve the partial sorting problem. Our analysis traces the whole race between the two partial sorting algorithms and shows that the partial insertion sort algorithm holds the leading position from k = 1 (the beginning) until k ≈ (3/5)√n. After that, the partial quicksort algorithm takes the leading position all the way to the end. We also extend the partial sorting problem to the single-hop wireless network with collision detection (WNCD) model. The extension fits in with the wireless trend and may be a foundation for studying divide-and-conquer. With the repeated maximum finding scheme, we propose a partial sorting algorithm and prove that its average time complexity is Θ(k + log(n − k)). For the initialization problem on the WNCD model, we can invoke the sorting algorithms directly to solve it. However, those sorting algorithms would not be better than the method of building a partition tree. We show that the partition tree method requires 2.88n time slots on average. After reconstructing and analyzing the method, we improve this result from 2.88n to 2.46n.
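
A minimal sketch of the partial sorting task itself (report the k smallest in nondecreasing order); this single-pass, keep-the-k-best scan is a generic illustration in the spirit of partial insertion sort, not the dissertation's analyzed algorithm:

    import heapq

    def partial_sort(xs, k):
        """Return the k smallest elements of xs in nondecreasing order,
        scanning the input once and keeping the k best seen so far
        in a max-heap (stored negated, since heapq is a min-heap)."""
        if k <= 0:
            return []
        heap = [-x for x in xs[:k]]
        heapq.heapify(heap)
        for x in xs[k:]:
            if x < -heap[0]:              # better than the current k-th best
                heapq.heapreplace(heap, -x)
        return sorted(-v for v in heap)

    print(partial_sort([9, 1, 8, 2, 7, 3], 3))  # [1, 2, 3]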
8

Region-based Crossover for Clustering Problems

Dsouza, Jeevan 01 January 2012 (has links)
Data clustering, which partitions data points into clusters, has many useful applications in economics, science and engineering. Data clustering algorithms can be partitional or hierarchical. The k-means algorithm is the most widely used partitional clustering algorithm because of its simplicity and efficiency. One problem with the k-means algorithm is that the quality of partitions produced is highly dependent on the initial selection of centers. This problem has been tackled using genetic algorithms (GA) where a set of centers is encoded into an individual of a population and solutions are generated using evolutionary operators such as crossover, mutation and selection. Of the many GA methods, the region-based genetic algorithm (RBGA) has proven to be an effective technique when the centroid was used as the representative object of a cluster (ROC) and the Euclidean distance was used as the distance metric. The RBGA uses a region-based crossover operator that exchanges subsets of centers that belong to a region of space rather than exchanging random centers. The rationale is that subsets of centers that occupy a given region of space tend to serve as building blocks. Exchanging such centers preserves and propagates high-quality partial solutions. This research aims at assessing the RBGA with a variety of ROCs and distance metrics. The RBGA was tested along with other GA methods, on four benchmark datasets using four distance metrics, varied number of centers, and centroids and medoids as ROCs. The results obtained showed the superior performance of the RBGA across all datasets and sets of parameters, indicating that region-based crossover may prove an effective strategy across a broad range of clustering problems.
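
A minimal sketch of the region-based crossover idea, assuming an axis-aligned cut of the space defines the "region" (the dissertation's exact region construction may differ; the function name and data are illustrative):

    import random

    def region_crossover(parent_a, parent_b, dim=0):
        """Swap, as a block, the cluster centers lying on one side of a
        random cut along coordinate `dim`, so spatially coherent subsets
        of centers are preserved as building blocks."""
        all_centers = parent_a + parent_b
        lo = min(c[dim] for c in all_centers)
        hi = max(c[dim] for c in all_centers)
        cut = random.uniform(lo, hi)
        child_a = [c for c in parent_a if c[dim] >= cut] + \
                  [c for c in parent_b if c[dim] < cut]
        child_b = [c for c in parent_b if c[dim] >= cut] + \
                  [c for c in parent_a if c[dim] < cut]
        # Note: child sizes can differ from k; a repair step (not shown)
        # would restore the required number of centers.
        return child_a, child_b

    a = [(0.1, 0.2), (0.8, 0.9), (0.4, 0.5)]
    b = [(0.2, 0.1), (0.7, 0.6), (0.9, 0.3)]
    print(region_crossover(a, b))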
