101

Uncertainty evaluation of delayed neutron decay parameters

Wang, Jinkai 15 May 2009 (has links)
In a nuclear reactor, delayed neutrons play a critical role in sustaining a controllable chain reaction. The relative yields and decay constants of delayed neutrons are very important for modeling reactivity control and have been studied for decades. Researchers have tried different experimental and numerical methods to assess these delayed neutron parameters. The reported parameter values vary widely, much more than the small statistical errors reported with them would suggest. Interestingly, the reported parameters fit their individual measurement data well in spite of these differences. This dissertation focuses on evaluating the errors and methods associated with the delayed neutron relative yields and decay constants for thermal fission of U-235. The numerical methods used to extract delayed neutron parameters from measured data, including the Matrix Inverse, Levenberg-Marquardt, and Quasi-Newton methods, were studied extensively using simulated delayed neutron data, Poisson distributed around Keepin's theoretical data. The extraction methods produced markedly different results for the same data set, and some of them could not find solutions at all for some data sets. Further investigation found that ill-conditioned matrices in the objective function were the reason for the inconsistent results. To find a stable solution with small variation, a regularization parameter was introduced using Ridge Regression. The results from the Ridge Regression method, in terms of goodness of fit to the data, were good and often better than those of the other methods. The regularization introduces a small additional bias into the fitted result, but the method guarantees convergence no matter how large the condition number of the coefficient matrix. Both saturation and pulse modes were simulated to focus on different groups. Several factors that affect solution stability were investigated, including the initial count rate, sample flight time, and initial guess values. Finally, because comparing reported delayed neutron parameters across experiments cannot by itself establish whether the underlying data actually differ, methods are proposed for comparing the delayed neutron data sets directly.
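The stabilization step described in this abstract amounts to ridge-regularized least squares. Below is a minimal sketch, assuming a six-group saturation-mode decay model; the decay constants and yields are Keepin-style illustrative values, not the dissertation's fitted results. Adding a regularization term shifts the eigenvalues of the normal-equations matrix away from zero, which is what guarantees a solution for any condition number, at the cost of a small bias.

```python
import numpy as np

def ridge_solve(A, y, lam):
    """Solve min ||A x - y||^2 + lam ||x||^2 via regularized normal equations.

    Adding lam * I shifts every eigenvalue of A^T A up by lam, bounding
    the effective condition number and guaranteeing a unique solution
    even when A is ill-conditioned.
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Illustrative six-group saturation-mode design matrix: columns are
# exp(-lambda_i * t) decay profiles, which are nearly collinear for
# neighboring groups -- the source of the ill-conditioning noted above.
decay_constants = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])  # s^-1, Keepin-like
t = np.linspace(0.0, 300.0, 600)
A = np.exp(-np.outer(t, decay_constants))
true_yields = np.array([0.033, 0.219, 0.196, 0.395, 0.115, 0.042])      # illustrative
y = A @ true_yields
y_noisy = np.random.default_rng(0).poisson(1e4 * y) / 1e4  # Poisson counting noise

print(np.linalg.cond(A.T @ A))            # very large condition number
print(ridge_solve(A, y_noisy, lam=1e-3))  # stabilized (slightly biased) estimate
```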
102

G7 business cycle synchronization and transmission mechanism

Chou, I-Hsiu 22 June 2006 (has links)
After the breakdown of the Bretton Woods system in 1973, many economists documented increasingly similar business cycles across industrial countries, although Doyle and Faust (2002) report that the business cycle correlation (BCC) between pairs of countries has recently weakened. This study therefore examines two different sets of factors that affect BCCs between countries. The first is the so-called "transmission mechanism": empirical analyses have generally attributed short-run business cycle co-movement to economic channels such as goods markets, financial markets, and the coordination of monetary policies. Using only these proxy variables to explain the BCC between two countries, however, seems too limited. We believe that adding non-economic proxy variables can remedy this defect, so we introduce new political factors and combine them with the transmission mechanisms described above. Five conclusions summarize the economic implications of our research. First, increasing bilateral trade had a significantly positive effect on BCCs among the G7 countries from 1980 to 2002. Because the bilateral trade intensity index is endogenous, we estimate trade using instrumental variables built from exogenous variables, and we use panel methods to expand the data matrix; this corrects estimates that were previously statistically insignificant. Any discussion of the relation between BCCs and goods and services markets must therefore account for these exogenous factors, which yields more detailed results. Second, although trade in financial markets can increase the BCC between two countries, the estimated effect is statistically insignificant. Reasonable explanations appear in the literature (for instance, Imbs, 2004, and Kose et al., 2003): globalization binds financial markets tightly together, which pushes each country toward greater specialization and in turn erodes BCCs, offsetting the positive effect of financial markets on BCCs. Third, the estimated relation between the trade intensity index and specialization is significantly negative, suggesting that more trade in goods leads to less specialization, so the two countries develop similar industrial structures. Imbs (2004) attributes this to the trade intensity index itself, which is affected by the sizes of the two countries; using Clark and van Wincoop's trade intensity index instead reveals significant specialization through a comparative advantage effect. Fourth, a high level of financial integration between two countries leads to different industrial structures, because integration enables international risk sharing. Lastly, the estimated effect of political party variables on BCCs is highly significant, indicating that political factors are an important source of fluctuations in BCCs.
In other words, a discussion of BCCs that omits the contribution of political factors risks omitted-variable bias, which makes the whole estimation inefficient. A further implication follows: to preserve the joint benefit of all member countries in an economic organization, those countries need to be governed by ideologically aligned parties; otherwise the institution will never achieve its intended effect. Combining the conclusions above, we find that BCCs among the G7 countries over 1980-2002 vary not only with the transmission mechanisms but also with political factors. The contribution of the political party variables to BCCs must therefore be considered when this issue is discussed, making the original theoretical model more persuasive. According to IMF statistics, BCCs among the industrial countries have been falling little by little in recent years. Consolidating trade cooperation is therefore essential, in our view, to improving BCCs among the G7. At the same time, a strongly integrated monetary policy can move the incumbent parties of all member countries toward agreement and thus toward a more substantial effect; evidence for this can be found in BCC behavior during the integration process of the European Monetary Union.
103

Least-squares methods for computational electromagnetics

Kolev, Tzanio Valentinov 15 November 2004 (has links)
The modeling of electromagnetic phenomena described by Maxwell's equations is of critical importance in many practical applications. The numerical simulation of these equations is challenging and much more involved than initially believed. Consequently, many discretization techniques, most of them quite complicated, have been proposed. In this dissertation, we present and analyze a new methodology for approximation of the time-harmonic Maxwell's equations. It is an extension of the negative-norm least-squares finite element approach, which has been applied successfully to a variety of other problems. The main advantages of our method are that it uses simple, piecewise polynomial finite element spaces while giving quasi-optimal approximation, even for solutions with low regularity (such as the ones found in practical applications). The numerical solution can be efficiently computed using standard and well-known tools, such as iterative methods and eigensolvers for symmetric positive definite systems (e.g. PCG and LOBPCG) and preconditioners for second-order problems (e.g. multigrid). Additionally, approximation with varying polynomial degrees is allowed, and spurious eigenmodes are provably avoided. We consider the following problems related to Maxwell's equations in the frequency domain: the magnetostatic problem, the electrostatic problem, the eigenvalue problem, and the full time-harmonic system. For each of these problems, we present a natural (very) weak variational formulation assuming minimal regularity of the solution. In each case, we prove error estimates for the approximation with two different discrete least-squares methods. We also show how to deal with problems posed on domains that are multiply connected or have multiple boundary components. Besides the theoretical analysis of the methods, the dissertation provides various numerical results in two and three dimensions that illustrate and support the theory.
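As a pointer to the kind of standard tooling the abstract invokes, here is a minimal preconditioned conjugate gradient (PCG) solve in SciPy. The matrix is a generic SPD 1D Laplacian and the preconditioner is Jacobi; these are stand-ins for the multigrid-preconditioned negative-norm least-squares systems of the dissertation, which are not reproduced here.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

# Illustrative SPD system: a 1D Laplacian stiffness matrix (not the
# negative-norm least-squares operator from the dissertation, which
# requires a multigrid realization of the H^{-1} inner product).
n = 200
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Jacobi (diagonal) preconditioner as a stand-in for multigrid.
M = LinearOperator((n, n), matvec=lambda x: x / A.diagonal())

x, info = cg(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))  # info == 0 signals convergence
```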
104

An improved bus signal priority system for networks with nearside bus stops

Kim, Wonho 17 February 2005 (has links)
Bus Signal Priority (BSP), which has been deployed in many cities around the world, is a traffic signal enhancement strategy that facilitates the efficient movement of buses through signalized intersections. Most BSP systems do not work well in transit networks with nearside bus stops because of the uncertainty in dwell time; unfortunately, most bus stops on arterial roadways in the U.S. are of this type. This dissertation showed that dwell time at nearside bus stops can be modeled using weighted least squares regression. More importantly, the prediction intervals associated with the estimated dwell time were calculated. These prediction intervals were subsequently used in an improved BSP algorithm that attempts to reduce the negative effects of nearside bus stops on BSP operations. The improved BSP algorithm was tested on an urban arterial section of Bellaire Boulevard in Houston, Texas. VISSIM, a microscopic simulation model, was used to evaluate the performance of the BSP operations. Prior to evaluating the algorithm, the parameters of the simulation model were calibrated using an automated Genetic Algorithm-based methodology so that the model accurately represented the traffic conditions observed in the field. It was shown that the improved BSP algorithm significantly improved bus operations in terms of bus delay. In addition, the delay to other vehicles on the network was found to be not statistically different from that under BSP algorithms currently being deployed. It is hypothesized that the new approach would be particularly useful in North America, where many transit systems utilize nearside bus stops in their networks.
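The dwell-time modeling step can be sketched as a weighted least squares fit plus its prediction interval. Everything below is hypothetical: the variables, weights, and data are illustrative, not the dissertation's model; the sketch only shows the mechanics of obtaining the interval that the improved algorithm would consume.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical dwell-time data: passenger load vs. dwell time (s), with
# error variance that grows with load -- the motivation for WLS.
rng = np.random.default_rng(1)
passengers = rng.integers(1, 30, size=120)
dwell = 5.0 + 2.2 * passengers + rng.normal(0, 0.8 * passengers)

X = sm.add_constant(passengers.astype(float))
# Weight each observation by the inverse of its (assumed) error variance.
wls = sm.WLS(dwell, X, weights=1.0 / passengers**2).fit()

# 95% prediction interval for a bus arriving with 15 boarding passengers.
pred = wls.get_prediction(np.array([[1.0, 15.0]]), weights=1.0 / 15**2)
print(pred.summary_frame(alpha=0.05)[["mean", "obs_ci_lower", "obs_ci_upper"]])
```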
105

An assessment of least squares finite element models with applications to problems in heat transfer and solid mechanics

Pratt, Brittan Sheldon 10 October 2008 (has links)
Research is performed to assess the viability of applying the least-squares model to one-dimensional heat transfer and Euler-Bernoulli beam theory problems. Least-squares models were developed for both the full and mixed forms of the governing one-dimensional heat transfer equation, along with weak form Galerkin models. Both least-squares and weak form Galerkin models were also developed for the first-order and second-order versions of the Euler-Bernoulli beam equation. Several numerical examples were presented for heat transfer and for Euler-Bernoulli beam theory. The heat transfer examples included a differential equation having the same form as the governing equation, heat transfer in a fin, heat transfer in a bar, and axisymmetric heat transfer in a long cylinder. These problems were solved using both least-squares models and the full-form weak form Galerkin model. For all four examples, the weak form Galerkin model and the full-form least-squares model produced accurate results for the primary variables; obtaining accurate results with the mixed-form least-squares model requires at least a quadratic polynomial approximation. The least-squares models with appropriate approximation functions yielded more accurate results for the secondary variables than the weak form Galerkin model. The beam examples included a cantilever beam with a linearly varying distributed load and a point load at the end, a simply supported beam with a point load in the middle, and a beam fixed at both ends with a cubically varying distributed load. The first two examples were solved using the least-squares model based on the second-order equation and a weak form Galerkin model based on the full form of the equation; the third was solved with the least-squares model based on the second-order equation. Both models calculated accurate results for the primary variables, while the least-squares model was more accurate for the secondary variables. In general, least-squares finite element models yield more accurate results for gradients of the solution than traditional weak form Galerkin finite element models. Extension of the present assessment to multi-dimensional and nonlinear problems awaits attention.
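To make the mixed-form idea concrete, here is a minimal sketch of a least-squares finite element solve for the 1D model problem -u'' = f (the same form as the heat-conduction equation), recast as the first-order system u' - q = 0, q' + f = 0 and discretized with linear elements, midpoint quadrature, and identity weighting. It illustrates the class of methods assessed, not the thesis's exact formulations; per the abstract, accurate mixed-form results may require quadratic elements, so linear elements here are only the simplest illustration.

```python
import numpy as np

# Least-squares FEM for the mixed form of -u'' = f on (0,1), u(0)=u(1)=0:
# minimize J(u,q) = ∫ (u' - q)^2 + (q' + f)^2 dx over piecewise linears.
n_el = 32
nodes = np.linspace(0.0, 1.0, n_el + 1)
n = nodes.size
f = lambda x: np.pi**2 * np.sin(np.pi * x)   # exact solution u = sin(pi x)

A = np.zeros((2 * n, 2 * n))   # unknowns ordered [u_0..u_n, q_0..q_n]
b = np.zeros(2 * n)
for e in range(n_el):
    h = nodes[e + 1] - nodes[e]
    K = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h    # ∫ φi' φj'
    M = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])  # ∫ φi φj
    D = np.array([[-0.5, -0.5], [0.5, 0.5]])        # ∫ φi' φj
    xm = 0.5 * (nodes[e] + nodes[e + 1])
    F = np.array([-1.0, 1.0]) * f(xm)               # ∫ φi' f (midpoint rule)
    iu, iq = [e, e + 1], [n + e, n + e + 1]
    A[np.ix_(iu, iu)] += K        # normal equations of the LS functional:
    A[np.ix_(iu, iq)] += -D       # [K, -D; -D^T, M+K] [u; q] = [0; -F]
    A[np.ix_(iq, iu)] += -D.T
    A[np.ix_(iq, iq)] += M + K
    b[iq] += -F

for d in (0, n - 1):              # Dirichlet conditions on u only; q is free
    A[d, :] = 0.0; A[d, d] = 1.0; b[d] = 0.0

sol = np.linalg.solve(A, b)
u, q = sol[:n], sol[n:]
print(np.abs(u - np.sin(np.pi * nodes)).max())        # primary variable error
print(np.abs(q - np.pi * np.cos(np.pi * nodes)).max())  # secondary (flux) error
# both errors shrink under mesh refinement
```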
106

A three degrees of freedom test-bed for nanosatellite and Cubesat attitude dynamics, determination, and control

Meissner, David M. January 2009 (has links) (PDF)
Thesis (M.S. in Mechanical Engineering)--Naval Postgraduate School, December 2009. / Thesis Advisor(s): Romano, Marcello ; Bevilacqua, Riccardo. "December 2009." Description based on title screen as viewed on January 27, 2010. Author(s) subject terms: spacecraft, cubesat, nanosat, TINYSCOPE, simulator, test bed, control, system identification, least squares, adaptive mass balancing, mass balancing, three axis simulator, NACL, TAS, CubeTAS, ADCS. Includes bibliographical references (p. 77-82). Also available in print.
107

Latent models for cross-covariance

Wegelin, Jacob A. January 2001 (has links)
Thesis (Ph. D.)--University of Washington, 2001. / Vita. Includes bibliographical references (p. 139-145).
108

Linear estimation for data with error ellipses

Amen, Sally Kathleen 21 August 2012 (has links)
When scientists collect data to be analyzed, regardless of what quantities are being measured, there are inevitably errors in the measurements. When two variables are each measured with errors, many existing techniques can produce an estimated least-squares linear fit to the data, taking into consideration the size of the errors in both variables. Yet some experiments yield data that contain not only errors in both variables but also a non-zero covariance between those errors. In such situations, the measurements carry error ellipses whose tilts are specified by the covariance terms. Following an approach suggested by Dr. Edward Robinson, Professor of Astronomy at the University of Texas at Austin, this report describes a methodology that finds estimates of the linear regression parameters, as well as an estimated covariance matrix, for a dataset with tilted error ellipses. An appendix contains the R code for a program that produces these estimates according to the methodology. The report describes the results of running the program on a dataset of measurements of the surface brightness and Sérsic index of galaxies in the Virgo cluster.
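One standard way to handle tilted error ellipses (a generic sketch under simulated data, not necessarily Robinson's estimator or the report's R program) is to minimize a chi-square whose per-point variance is the variance of the residual y - a - bx under the full error covariance:

```python
import numpy as np
from scipy.optimize import minimize

# Simulated data with correlated x/y errors (hypothetical values).
rng = np.random.default_rng(2)
n = 50
x_true = np.linspace(0.0, 10.0, n)
y_true = 1.5 + 0.8 * x_true
cov = np.array([[0.04, 0.03],   # per-point error covariance matrix:
                [0.03, 0.09]])  # the off-diagonal term tilts the ellipse
err = rng.multivariate_normal([0.0, 0.0], cov, size=n)
x, y = x_true + err[:, 0], y_true + err[:, 1]
sx2, sy2, sxy = cov[0, 0], cov[1, 1], cov[0, 1]

def chi2(params):
    a, b = params
    resid = y - a - b * x
    # Var(y - b x) = sy^2 + b^2 sx^2 - 2 b sxy  (tilted-ellipse variance)
    return np.sum(resid**2 / (sy2 + b**2 * sx2 - 2.0 * b * sxy))

fit = minimize(chi2, x0=[0.0, 1.0])
print(fit.x)               # estimated intercept and slope
print(2.0 * fit.hess_inv)  # rough parameter covariance (BFGS approximation)
```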
109

Adaptive L1 regularized second-order least squares method for model selection

Xue, Lin 11 September 2015 (has links)
The second-order least squares (SLS) method for regression models, proposed by Wang (2003, 2004), is based on the first two conditional moments of the response variable given the observed predictor variables. Wang and Leblanc (2008) show that the SLS estimator (SLSE) is asymptotically more efficient than the ordinary least squares estimator (OLSE) when the third moment of the random error is nonzero. We apply the SLS method to variable selection problems and propose the adaptively weighted L1 regularized SLSE (L1-SLSE). The L1-SLSE is robust against the shape of the error distribution in variable selection problems. Finite sample simulation studies show that the L1-SLSE is more efficient than the L1-OLSE for asymmetric error distributions. A real data application of the L1-SLSE is presented to demonstrate the use of this method.
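A minimal sketch of the two-stage idea follows, under stated assumptions: the moment conditions match the SLS definition above, but the identity weighting, penalty level, and simplex optimizer are illustrative simplifications, not the thesis's implementation. A production implementation would use a coordinate-descent or LARS-type solver to obtain exact zeros; the simplex search here only shrinks the penalized coefficients.

```python
import numpy as np
from scipy.optimize import minimize

# Simulated regression with a skewed (chi-square) error -- the setting in
# which SLS is more efficient than OLS.
rng = np.random.default_rng(3)
n, p = 200, 5
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([1.0, 2.0, 0.0, -1.5, 0.0])    # two truly zero coefficients
y = X @ beta_true + (rng.chisquare(3, size=n) - 3)  # mean-zero, third moment > 0

def sls_loss(theta):
    beta, s2 = theta[:-1], theta[-1]
    mu = X @ beta
    r1 = y - mu              # first conditional moment residual
    r2 = y**2 - mu**2 - s2   # second conditional moment residual
    return np.mean(r1**2 + r2**2)  # identity weighting matrix, for simplicity

# Stage 1: unpenalized SLS fit, used to build the adaptive weights.
init = np.append(np.linalg.lstsq(X, y, rcond=None)[0], y.var())
sls = minimize(sls_loss, init, method="Nelder-Mead",
               options={"maxiter": 20000, "fatol": 1e-10})
w = 1.0 / (np.abs(sls.x[:-1]) + 1e-8)  # adaptive L1 weights
w[0] = 0.0                             # leave the intercept unpenalized

# Stage 2: adaptively weighted L1-penalized SLS.
lam = 0.05
fit = minimize(lambda t: sls_loss(t) + lam * np.sum(w * np.abs(t[:-1])),
               sls.x, method="Nelder-Mead", options={"maxiter": 20000})
print(fit.x[:-1].round(3))  # small coefficients are shrunk toward zero
```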
110

Identifying Housing Patterns in Pima County, Arizona Using the DEYA Affordability Index and Geospatial Analysis

Nevarez Martinez, Deyanira January 2015 (has links)
When the Fair Housing Act of 1968 was passed 47 years ago, the United States was in the midst of the civil rights movement, and fair housing was identified as a pillar of equality. While progress has been made, much work remains in order to achieve integration: the United States is still a highly segregated country. It is important to understand the factors that contribute to this segregation, and the relationships that exist between them, in order to attempt to solve the problem. Although the legal barriers to integration have been lifted, choices remain limited for families of color who lack the resources to live in desirable neighborhoods. The ultimate goal of this study is to examine the relationship between individual indicators and housing patterns in the greater Tucson/Pima County region. An affordability index, the DEYA index, was created to determine where affordability is highest. The index assigns different weights to foreclosure, Pima County spending on affordable housing, the existence of Pima County general obligation bond affordable housing projects, land value, and inclusion in the community land trust. A regression analysis was then used to determine the relationship between affordability and the individual factors that may affect integration. The indicators used were grouped into three categories: education, housing and neighborhoods, and employment and economic health.
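The index construction and follow-up regression can be sketched as below. The indicator names follow the abstract, but the weights, data, and the education indicator are hypothetical stand-ins; the thesis's actual DEYA weighting scheme is not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 100  # hypothetical census tracts

tracts = pd.DataFrame({
    "foreclosure_rate":   rng.uniform(0, 1, n),
    "county_ah_spending": rng.uniform(0, 1, n),
    "go_bond_project":    rng.integers(0, 2, n),  # 0/1 indicator
    "land_value":         rng.uniform(0, 1, n),
    "land_trust":         rng.integers(0, 2, n),  # 0/1 indicator
})

# Illustrative weights (NOT the DEYA weights): spending, bond projects, and
# land-trust inclusion raise affordability; higher land value lowers it.
weights = {"foreclosure_rate": 0.3, "county_ah_spending": 0.2,
           "go_bond_project": 0.2, "land_value": -0.2, "land_trust": 0.1}
tracts["deya_index"] = sum(w * tracts[c] for c, w in weights.items())

# Regress the index on a single hypothetical education indicator.
school_quality = rng.uniform(0, 1, n)
ols = sm.OLS(tracts["deya_index"], sm.add_constant(school_quality)).fit()
print(ols.params)
```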
