1.
Numerical simulation of finite-time blow-up in nonlinear ODEs, reaction-diffusion equations and VIDEs. Dlamini, Phumlani Goodwill. 02 November 2012.
M.Sc. / There has been extensive study of solutions of differential equations, modeling physical phenomena, that blow up in finite time. The blow-up time often marks an important change in the properties of such models, and hence it is very important to compute it as accurately as possible. In this work, an adaptive-in-time numerical method for computing blow-up solutions of nonlinear ODEs is introduced. The method, named the implicit midpoint-implicit Euler (IMIE) method, is based on the implicit Euler and implicit midpoint methods. It is used to compute the blow-up time for different examples of ODEs, PDEs and VIDEs. The PDEs studied are reaction-diffusion equations, for which the method of lines is first used to discretize the equation in space, yielding a system of ODEs; quadrature rules are used to approximate the integral in the VIDEs, again yielding a system of ODEs. The IMIE method is then used to solve the resulting systems. The results are compared with those obtained by the PECE-IE method and the Matlab solvers ode45 and ode15s. They show that the IMIE method gives better results than PECE-IE and ode15s and compares remarkably well with the fourth-order ode45, even though it is of order 1 with order-2 superconvergence at the mesh points.
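As a rough illustration of the adaptive-in-time idea, here is a minimal sketch, not the IMIE scheme itself: an implicit Euler marcher whose step shrinks as the solution grows (the rule h proportional to 1/|y| is one common choice, assumed here) estimates the blow-up time of y' = y^2, y(0) = 1, whose exact blow-up time is t = 1.

```python
# Minimal sketch (not the thesis's IMIE method): adaptive implicit Euler
# estimate of the blow-up time for y' = y^2, y(0) = 1 (exact blow-up at t = 1).
import numpy as np
from scipy.optimize import fsolve

def f(y):
    return y**2

def implicit_euler_step(y, h):
    # Solve y_new = y + h * f(y_new) for y_new, starting the iteration at y.
    return fsolve(lambda z: z - y - h * f(z), y)[0]

def blowup_time(y0, tau=1e-3, ymax=1e8):
    t, y = 0.0, y0
    while y < ymax:
        h = tau / max(abs(y), 1.0)   # step shrinks as the solution grows
        y = implicit_euler_step(y, h)
        t += h
    return t   # t approximates the blow-up time once y exceeds ymax

print(blowup_time(1.0))   # prints a value close to 1.0
```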
2.
A Comparison of Three Time-stepping Methods for the LLG Equation in Dynamic Micromagnetics. Wredh, Simon; Kroner, Anton; Berg, Tomas. January 2017.
Micromagnetism is the study of magnetic materials on the microscopic length scale (nanometers to micrometers). At this scale quantum mechanical effects are not taken into account, but it is small enough that certain macroscopic effects of magnetism in a material can be neglected. The Landau-Lifshitz-Gilbert (LLG) equation is used within micromagnetism to determine the time evolution of the magnetisation vector field in a ferromagnetic solid. It is a highly nonlinear partial differential equation, which makes it very difficult to solve analytically; numerical methods have therefore been developed to approximate the solution on computers. In this report we compare the performance of three numerical methods for the LLG equation: the implicit midpoint method (IMP), the midpoint with extrapolation method (MPE), and the Gauss-Seidel projection method (GSPM). All methods were found to have the expected convergence rates: second order for IMP and MPE, and first order for GSPM. The energy-conserving properties of the schemes were analysed; neither MPE nor GSPM conserves energy. The computational time required by IMP was found to be very large in comparison with the other two methods. Suggestions for different areas of use for each method are provided.
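A minimal sketch of the implicit midpoint rule applied to a single normalized spin (not the report's field solver; the damping constant and effective field are assumed values) shows the scheme's structure and its exact preservation of |m|, which follows because the LLG right-hand side is orthogonal to m at the midpoint:

```python
# Implicit midpoint (IMP) stepping for one normalized spin under the LLG equation
#   dm/dt = -m x h_eff - alpha * m x (m x h_eff),
# with a constant field h_eff = (0, 0, 1). Parameter values are illustrative.
import numpy as np

ALPHA = 0.1
H_EFF = np.array([0.0, 0.0, 1.0])

def llg_rhs(m):
    mxh = np.cross(m, H_EFF)
    return -mxh - ALPHA * np.cross(m, mxh)

def imp_step(m, h, tol=1e-12, max_iter=100):
    # Solve m_new = m + h * f((m + m_new)/2) by fixed-point iteration.
    m_new = m.copy()
    for _ in range(max_iter):
        m_next = m + h * llg_rhs(0.5 * (m + m_new))
        if np.linalg.norm(m_next - m_new) < tol:
            return m_next
        m_new = m_next
    return m_new

m = np.array([1.0, 0.0, 0.0])
for _ in range(1000):
    m = imp_step(m, 0.01)      # spin precesses about z while relaxing toward it
print(np.linalg.norm(m))       # stays 1 to round-off: IMP preserves |m|
```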
3.
Impacts of midpoint FACTS controllers on the coordination between generator phase backup protection and generator capability limits. Elsamahy, Mohamed Salah Kamel. 15 July 2011.
The thesis reports the results of comprehensive studies carried out to explore the impact of midpoint FACTS Controllers (STATCOM and SVC) on the generator distance phase backup protection in order to identify important issues that protection engineers need to consider when designing and setting a generator protection system. In addition, practical, feasible and simple solutions to mitigate the adverse impact of midpoint FACTS Controllers on the generator distance phase backup protection are explored.
The results of these studies show that midpoint FACTS Controllers have an adverse effect on the generator distance phase backup protection. This adverse effect, which can be in the form of underreach, overreach or a time delay, varies according to the fault type, fault location and generator loading. Moreover, it has been found that the adverse effect of the midpoint FACTS Controllers extends to affect the coordination between the generator distance phase backup protection and the generator steady-state overexcited capability limit.
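To make "underreach" concrete, here is a hedged toy calculation, not the thesis's test system: all impedance and current values are invented, and the sign and magnitude of the injected current are chosen only for illustration. It shows how current injected by a midpoint shunt device shifts the apparent impedance seen by a distance relay for a fault beyond the midpoint.

```python
# Toy illustration (invented numbers): a distance relay measures Z = V/I at the
# generator end; current injected at the midpoint changes the apparent
# impedance for faults beyond it, so an in-zone fault can fall outside the zone.
z1 = 0.05 + 0.50j            # relay-to-midpoint line impedance (per unit)
z2 = 0.05 + 0.50j            # midpoint-to-fault line impedance (per unit)
i_relay = 1.0 - 0.8j         # fault current measured at the relay
i_inj = -1.0j                # midpoint injection (sign/magnitude illustrative)

def apparent_z(inj):
    v_relay = z1 * i_relay + z2 * (i_relay + inj)
    return v_relay / i_relay

def inside_mho(z, z_reach):
    # Mho characteristic: a circle through the origin with diameter z_reach.
    return abs(z - z_reach / 2) <= abs(z_reach / 2)

z_reach = 1.2 * (z1 + z2)    # zone set to 120% of the line (assumption)
for label, inj in [("no FACTS", 0j), ("with FACTS", i_inj)]:
    z = apparent_z(inj)
    print(f"{label}: Z = {z:.3f}, relay operates: {inside_mho(z, z_reach)}")
```

With these numbers the relay operates without the injection but fails to operate with it: the apparent impedance is pushed outside the mho circle, i.e. the relay underreaches.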
The Support Vector Machines classification technique is proposed as a replacement for the existing generator distance phase backup protection relay in order to alleviate these potential problems. This technique is shown to be a very promising solution, as it is fast, reliable and highly efficient. It enhances the coordination between the generator phase backup protection and the generator steady-state overexcited capability limit in the presence of midpoint FACTS Controllers.
The thesis also presents the results of investigations into the impact of the generator distance phase backup protection relay on the generator overexcitation thermal capability. These investigations reveal that, with relay settings according to current standards, the generator is over-protected: the relay restricts the generator overexcitation thermal capability during system disturbances, preventing the generating unit from supplying its maximum reactive power during such events. This restriction highlights the need to revise the relay settings. The solution proposed in this thesis is to reduce the reach of the generator distance phase backup protection relay in order to provide secure performance during system disturbances.
4.
Studies on the bid-ask spread component using high-frequency trading data. Wey, An-pin. 18 July 2006.
In this paper, we use high-frequency trading data from the New York Stock Exchange to analyze the bid-ask spread components. We find an exponential relationship between the log returns of quoted midpoints and trade volume, and a negative linear correlation between changes of quoted depth and trade volume. Changes of the quoted ask depth and the quoted bid depth are asymmetric with respect to the trading direction. Furthermore, statistical quality-control charts (p-charts) are built for a fixed number of trades to monitor unusual trades entering the market. Finally, logistic regression models are established to predict the probability of unusual trades entering the market based on the quotes and the quoted-depth adjustments of the market makers.
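As a hedged sketch of the monitoring idea (simulated flags rather than the paper's NYSE data; the window size and baseline rate are assumptions), a p-chart tracks the fraction of "unusual" trades per fixed window of n trades against 3-sigma binomial limits:

```python
# p-chart sketch: fraction of flagged trades per window of n trades,
# with control limits p_bar +/- 3*sqrt(p_bar*(1-p_bar)/n). Data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 200                                    # trades per window (assumed)
flags = rng.random(50 * n) < 0.05          # 5% baseline rate of unusual trades
p_hat = flags.reshape(50, n).mean(axis=1)  # fraction unusual in each window

p_bar = p_hat.mean()
sigma = np.sqrt(p_bar * (1 - p_bar) / n)
ucl, lcl = p_bar + 3 * sigma, max(p_bar - 3 * sigma, 0.0)

out = np.where((p_hat > ucl) | (p_hat < lcl))[0]
print(f"center={p_bar:.3f} UCL={ucl:.3f} LCL={lcl:.3f} signals at windows {out}")
```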
5.
LCA: How is the result of a life cycle assessment affected by the choice of weighting method? Thernström, Jessica. January 2015.
The purpose of this study is to describe a number of methods used for weighting in life cycle assessments, and to discuss how the choice of method may affect the result of the analysis. The methods described are EPS2000, Ecoindicator99, ReCiPe, Stepwise2006 and LIME. Each method is described briefly, with its impact categories, areas of protection and indicators. The methods are based on calculations of emissions and resource use, but the weighting factors are derived from different theories, panel discussions, questionnaires and subjective assumptions. My conclusion is that it is difficult to make the weighting methodology less subjective, and that much of the weighting rests on assumptions. Each method is built on different assumptions, so it is important to be clear about the circumstances under which the calculations are made, and results from different methods should not be compared. By contrast, it works very well to compare different products using the same weighting method, and to find the stage of the life cycle with the greatest environmental impact. I believe it is really the discussion about the product's environmental impact that matters, not the figures obtained from the weighting, since these are so largely based on assumptions and guesses.
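A tiny invented example makes the conclusion concrete: the same normalized indicator results can rank two products differently under two weighting schemes. All numbers below are fabricated for illustration and do not come from any of the methods named above.

```python
# Weighted single scores: score = sum over categories of (indicator * weight).
# The preferred (lower-impact) product flips between the two invented schemes.
products = {"product X": {"climate": 0.8, "acidification": 0.2},
            "product Y": {"climate": 0.3, "acidification": 0.9}}
schemes  = {"scheme A": {"climate": 0.8, "acidification": 0.2},
            "scheme B": {"climate": 0.2, "acidification": 0.8}}

for name, w in schemes.items():
    scores = {p: sum(v[c] * w[c] for c in v) for p, v in products.items()}
    print(name, scores)   # X scores worse under scheme A, Y under scheme B
```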
6.
Successive Backward Sweep Methods for Optimal Control of Nonlinear Systems with Constraints. Cho, Donghyurn. 16 December 2013.
Continuous and discrete-time Successive Backward Sweep (SBS) methods for solving nonlinear optimal control problems involving terminal and control constraints are proposed in this dissertation. They closely resemble the Neighboring Extremals and Differential Dynamic Programming algorithms, which are based on the successive solution of a series of linear control problems with quadratic performance indices. The SBS methods are relatively insensitive to the initial guesses of the state and control histories, which are not required to satisfy the system dynamics. Hessian modifications are utilized, especially for non-convex problems, to avoid singularities during the backward integration of the gain equations. The SBS method requires satisfaction of the Jacobi no-conjugate-point condition and hence produces optimal solutions. The standard implementation of the SBS method for continuous-time systems incurs terminal boundary-condition errors due to an algorithmic singularity as well as numerical inaccuracies in the computation of the gain matrices. Alternatives for reducing these boundary errors are proposed, notably an aiming point and switching between two forms of the sweep expansion formulae. The modified sweep formula expands the domain of convergence of the SBS method and allows a rigorous test for the existence of conjugate points.
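The backward sweep at the heart of such methods can be illustrated on its linear-quadratic core. This is a schematic sketch under assumed linearized dynamics, not the dissertation's algorithm: the Riccati gain equation is swept backward from the terminal condition, and the stored feedback gains are then applied on the forward pass.

```python
# Backward sweep for a discrete-time LQR subproblem with dynamics
# x_{k+1} = A x_k + B u_k and quadratic stage/terminal costs Q, R, Qf.
import numpy as np

def backward_sweep(A, B, Q, R, Qf, N):
    P = Qf                   # sweep variable, initialized at the terminal time
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # feedback gain
        P = Q + A.T @ P @ (A - B @ K)                        # Riccati recursion
        gains.append(K)
    return gains[::-1]       # gains ordered forward in time

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K0 = backward_sweep(A, B, np.eye(2), np.eye(1), 10 * np.eye(2), N=50)[0]
print(K0)                    # forward pass would apply u_k = -K_k x_k
```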
Numerical accuracy of the continuous-time formulation of the optimal control problem can be improved with the use of symplectic integrators, which generally are implicit schemes in time. A time-explicit group preserving method based on the Magnus series representation of the state transition is implemented in the SBS setting and is shown to outperform a non-symplectic integrator of the same order.
Discrete-time formulations of the optimal control problem, directly accounting for a specific time-stepping method, lead to consistent systems of equations, whose solutions satisfy the boundary conditions of the discretized problem accurately. In this regard, the second-order, implicit mid-point averaging scheme, a symplectic integrator, is adapted for use with the SBS method. The performance of the mid-point averaging scheme is compared with non-symplectic schemes of equal and higher order to show its advantages. The SBS method is augmented with a homotopy-continuation procedure to isolate and regulate certain nonlinear effects for difficult problems, in order to extend its domain of convergence. The discrete-time SBS method is also extended to solve problems where the controls are approximated to be impulsive, and to handle waypoint constraints as well.
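A small self-contained check, on a generic harmonic-oscillator test rather than the dissertation's problems, of why a symplectic scheme such as the implicit mid-point rule is attractive here: its energy error stays bounded, while a non-symplectic explicit step drifts.

```python
# Implicit mid-point vs explicit Euler on q' = p, p' = -q (energy = (q^2+p^2)/2).
import numpy as np

def midpoint_step(z, h):
    # For linear dynamics z' = J z the implicit mid-point rule solves exactly:
    # (I - h/2 J) z_new = (I + h/2 J) z.
    J = np.array([[0.0, 1.0], [-1.0, 0.0]])
    return np.linalg.solve(np.eye(2) - 0.5 * h * J,
                           (np.eye(2) + 0.5 * h * J) @ z)

def euler_step(z, h):
    return z + h * np.array([z[1], -z[0]])

energy = lambda z: 0.5 * (z[0]**2 + z[1]**2)
zm = ze = np.array([1.0, 0.0])
for _ in range(10000):
    zm, ze = midpoint_step(zm, 0.05), euler_step(ze, 0.05)
print(energy(zm), energy(ze))   # ~0.5 for mid-point; Euler's energy explodes
```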
A variety of highly nonlinear optimal control problems involving orbit transfer, atmospheric reentry, and the restricted three-body problem are treated to demonstrate the performance of the methods developed in this dissertation.
7.
Redox-active tyrosine residues in biomimetic beta hairpins. Sibert, Robin S. 15 July 2009.
Biomimetic peptides are autonomously folding secondary structural units designed to serve as models for examining processes that occur in proteins. Although de novo biomimetic peptides are not simply abbreviated versions of proteins already found in nature, designing them does require an understanding of how native proteins are formed and stabilized. The discovery of autonomously folding fragments of ribonuclease A and tendamistat pioneered the use of biomimetic peptides for determining how the polypeptide sequence stabilizes formation of alpha helices and beta hairpins in aqueous and organic solutions. A set of rules for constructing stable alpha helices has now been established. There is no exact set of rules for designing beta hairpins; however, some factors that must be considered are the identity of the residues in the turn and the non-covalent interactions between amino acid side chains. For example, glycine, proline, asparagine, and aspartic acid are favored in turns. Non-covalent interactions that stabilize hairpin formation include salt bridges, pi-stacked aromatic interactions, cation-pi interactions, and hydrophobic interactions. The optimal strand length for a beta hairpin depends on the number of stabilizing non-covalent interactions and of high-hairpin-propensity amino acids in the specific peptide being designed. De novo hairpins have not previously been used to examine biological processes other than protein folding. This thesis uses de novo designed biomimetic peptides as tractable models to examine how non-covalent interactions control the redox properties of tyrosine in enzymes.
The data in this study demonstrate that proton transfer to histidine, a hydrogen bond to arginine, and a pi-cation interaction create a peptide environment that lowers the midpoint potential of tyrosine in beta hairpins. Moreover, these interactions contribute equally to control of the midpoint potential. The data also show that hydrogen bonding is not the sole determinant of the midpoint potential of tyrosine. Finally, the data suggest that the pi-cation interaction between Tyr 160 (D2) and Arg 272 (CP47) contributes to the differences in redox properties between Tyr 160 and Tyr 161 of photosystem II.
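For reference, the midpoint potential is defined through the Nernst equation (a textbook relation, not a result of the thesis). For a one-electron oxidation coupled to release of one proton, as in tyrosine oxidation, the midpoint potential shifts by roughly 59 mV per pH unit at 25 °C, which is why proton transfer to a nearby base such as histidine can lower the observed potential:

```latex
E = E_m + \frac{RT}{nF}\,\ln\frac{[\mathrm{ox}]}{[\mathrm{red}]},
\qquad
E_m(\mathrm{pH}) \approx E_m^{\circ} - \frac{2.303\,RT}{F}\,\mathrm{pH}
\approx E_m^{\circ} - (59\ \mathrm{mV})\times\mathrm{pH}
\quad (n = 1,\ T = 298\ \mathrm{K}).
```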
8.
Deploying Monitoring Trails for Fault Localization in All-optical Networks and Radio-over-Fiber Passive Optical Networks. Maamoun, Khaled M. 24 August 2012.
Fault localization is the process of identifying the true source of a failure from a set of collected failure notifications. Isolating failure recovery within the optical domain of the network is necessary to resolve alarm-storm problems. The monitoring trail (m-trail) has been shown to deliver better performance by deploying monitoring resources in the form of optical trails, a monitoring framework that generalizes all previously reported counterparts. In this dissertation, m-trail design is explored, with a focus on using m-trails together with established lightpaths to achieve fault localization. This saves network resources by reducing the number of m-trails required for fault localization, and therefore the number of wavelengths used in the network. A novel approach is introduced based on the Geographic Midpoint Technique, an adapted version of the Chinese Postman Problem (CPP) solution, and an adapted version of the Traveling Salesman Problem (TSP) solution algorithms.

The dissertation also proposes desirable features of network architectures, and enabling technologies, for delivering future millimeter-waveband (mm-WB) Radio-over-Fiber (RoF) systems with wireless services integrated in a Dense Wavelength Division Multiplexing (DWDM) network. For conceptual illustration, a DWDM RoF system with a channel spacing of 12.5 GHz is considered. The mm-WB radio-frequency (RF) signal is obtained at each Optical Network Unit (ONU) by optical heterodyne photodetection between two optical carriers; the generated RF-modulated signal has a frequency of 12.5 GHz. This RoF system is simple, cost-effective, resistant to laser phase noise and, in principle, reduces maintenance needs. A review of related RoF network proposals and experiments is also included.

A number of models for Passive Optical Networks (PON)/RoF-PON that combine both innovative and existing ideas are proposed, along with solutions to the m-trail design problem for these models. The comparison between the models uses the expected survivability function, which shows that they are suitable for implementation in new and existing PON/RoF-PON systems. The dissertation closes with recommendations for future research in this area.
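A hedged sketch of the Geographic Midpoint Technique referenced above (the standard spherical-averaging form; the node coordinates are invented, and the thesis's adaptation for placing monitoring resources is not reproduced):

```python
# Geographic midpoint: convert lat/lon to 3-D unit vectors, average them, and
# project the mean vector back to latitude/longitude.
import numpy as np

def geographic_midpoint(lat_deg, lon_deg):
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    x = np.cos(lat) * np.cos(lon)
    y = np.cos(lat) * np.sin(lon)
    z = np.sin(lat)
    xm, ym, zm = x.mean(), y.mean(), z.mean()
    return (np.degrees(np.arctan2(zm, np.hypot(xm, ym))),   # midpoint latitude
            np.degrees(np.arctan2(ym, xm)))                 # midpoint longitude

print(geographic_midpoint(np.array([45.0, 52.0, 48.5]),
                          np.array([-75.0, -70.0, -80.0])))
```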
9.
A study of Weibull models for censored and truncated data / A Weibull-based proportional hazards model for arbitrarily censored and truncated data. 黃偉傑 (Huang, Wei-Jie). Date unknown.
The proportional hazards regression model is the most commonly used model for lifetime data, and the Weibull model is the only parametric model that has both a proportional hazards representation and an accelerated failure-time representation. This thesis studies the use of a Weibull-based proportional hazards regression model when arbitrarily censored and truncated data are observed. Two methods of analysis are considered. First, the maximum likelihood estimates (MLEs) of the parameters are computed for the observed censoring and truncation pattern. Second, estimates are computed with interval midpoints substituted for left- and interval-censored observations (midpoint estimation, MDE). The MLEs are then compared with the MDEs. Simulation studies indicate that for relatively large samples there are many instances in which the MLE is superior to the MDE. For small samples where the hazard rate is flat or nearly so, and interval-censored observations make up about half of the sample, the MDE is adequate. A goodness-of-fit procedure for the Weibull model with arbitrarily censored and truncated data is also proposed.
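A minimal sketch, not the thesis code, comparing the two estimators on simulated interval-censored Weibull data: the exact censored-data likelihood (MLE) against midpoint substitution (MDE). The shape/scale values and the unit-width censoring intervals are assumptions made for the illustration.

```python
# Weibull fit to interval-censored data: exact censored likelihood vs midpoints.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(1)
t = weibull_min.rvs(1.5, scale=2.0, size=300, random_state=rng)
left = np.floor(t)                 # lifetime known only to lie in [left, left+1)
right = left + 1.0

def neg_loglik(theta):
    k, lam = np.exp(theta)         # log-parameterization keeps k, lam > 0
    surv = lambda x: np.exp(-(x / lam) ** k)
    prob = np.maximum(surv(left) - surv(right), 1e-300)
    return -np.sum(np.log(prob))   # interval-censored log-likelihood

k_mle, lam_mle = np.exp(minimize(neg_loglik, x0=np.log([1.0, 1.0])).x)
k_mde, _, lam_mde = weibull_min.fit((left + right) / 2, floc=0)
print(f"MLE: shape={k_mle:.3f} scale={lam_mle:.3f}")
print(f"MDE: shape={k_mde:.3f} scale={lam_mde:.3f}   (true: 1.5, 2.0)")
```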