Reverse Stress Test Optimization: A study on how to optimize an algorithm for reverse stress testing. Marklund, Sarah. January 2018.
In this thesis we investigate how to optimize an algorithm that determines a scenario multiplier for a reverse stress test, using a method where predefined scenarios are scaled. The scenarios are composed of different risk factors that represent market events. A reverse stress test is used for risk estimation and describes under what market conditions a given portfolio would lose a particular amount. In this study we consider a reverse stress test whose goal is to find the scenario for which a clearing house becomes insolvent, that is, when the clearing house's loss equals its resource pool. The goal of this work is to find a more efficient algorithm than the current bisection algorithm for finding the scenario multiplier in the reverse stress test. The algorithms examined were one bracketing algorithm (the false-position algorithm) and two iterative algorithms (the Newton-Raphson and Halley's algorithms), all implemented in MATLAB. A comparative study was made in which the efficiency of the optimized algorithms was compared with that of the bisection algorithm. The algorithms were evaluated by comparing the running times and the number of iterations needed to find the scenario multiplier in the reverse stress test. Other optimization strategies investigated were reducing the number of scenarios in the predefined scenario matrix, to decrease the running time, and determining an appropriate initial multiplier for the iterative algorithms. The reduction of scenarios consisted of removing scenarios that were multiples of other scenarios, identified by comparing the risk factors in each scenario. We used a Taylor approximation to simplify the loss function and thereby approximate an initial multiplier, which reduces the manual input required from the user. Furthermore, we investigated the running times and numbers of iterations needed to find the scenario multiplier when several initial multipliers were used in the iterative algorithms, to increase the chance of finding a solution. The results show that both the Newton-Raphson algorithm and Halley's algorithm are more efficient and need fewer iterations to find the scenario multiplier than the current bisection algorithm. Halley's algorithm is the most efficient: it is on average 200-470% faster than the current algorithm, depending on how many initial multipliers are used (one, two or three), while the Newton-Raphson algorithm is on average 150-300% faster. Furthermore, the results show that the false-position algorithm is not efficient for this purpose. The scenario reduction shows that scenarios could indeed be removed by this approach, and the actual scenario obtained from performing a reverse stress test was never among those removed. Moreover, the initial-multiplier approximation could be used when the scenario matrix contains a certain type of risk factors. Finally, this study shows that the current bisection algorithm can be improved upon by the Newton-Raphson algorithm and Halley's algorithm.
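The abstract names the root-finding methods but gives no code, and the thesis implementation is in MATLAB. As a minimal Python sketch of why the higher-order methods need fewer iterations, here is a comparison on a stand-in equation f(m) = 0, where f, its derivatives, the starting bracket, and the tolerance are all invented for illustration and are not the thesis's loss function:

```python
# Stand-in for the clearing house loss minus the resource pool; the real f
# comes from scaling the predefined scenario matrix by the multiplier m.
f   = lambda m: m**3 + 2*m - 10         # hypothetical equation f(m) = 0
df  = lambda m: 3*m**2 + 2              # first derivative
d2f = lambda m: 6*m                     # second derivative (needed by Halley)

def bisection(a, b, tol=1e-10):
    it = 0
    while b - a > tol:                  # halve the bracket each iteration
        c = 0.5 * (a + b)
        if f(a) * f(c) <= 0:
            b = c
        else:
            a = c
        it += 1
    return 0.5 * (a + b), it

def newton(m, tol=1e-10):
    it = 0
    while abs(f(m)) > tol:
        m -= f(m) / df(m)               # quadratic convergence near the root
        it += 1
    return m, it

def halley(m, tol=1e-10):
    it = 0
    while abs(f(m)) > tol:
        fm, dfm = f(m), df(m)
        m -= 2 * fm * dfm / (2 * dfm**2 - fm * d2f(m))  # cubic convergence
        it += 1
    return m, it

for name, (root, its) in [("bisection", bisection(0.0, 4.0)),
                          ("Newton-Raphson", newton(1.0)),
                          ("Halley", halley(1.0))]:
    print(f"{name:15s} root={root:.8f} iterations={its}")
```

In this setup Halley's update uses the second derivative and converges cubically, Newton-Raphson quadratically, and bisection only halves the bracket each step, mirroring the ordering of running times reported above.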
Computation as a Model Building Tool in a High School Physics Classroom. Stirewalt, Heather R. 01 August 2018.
The Next Generation Science Standards (NGSS) have established computational thinking as one of the science and engineering practices that should be developed in high school classrooms. Much of the work done by scientists is accomplished through the use of computation, but many students leave high school with little to no exposure to coding of any kind. This study outlines an attempt to integrate computational physics lessons into a high school algebra-based physics course that utilizes Modeling Instruction. Specifically, it aims to determine whether students who complete computational physics assignments demonstrate any difference in understanding of force concepts, as measured by the Force Concept Inventory (FCI), versus students who do not. Additionally, it investigates students' attitudes about learning computation alongside physics. Students were introduced to VPython programs during the course of a semester. The FCI was administered pre- and post-instruction, and the gains were measured against a control group. The Computational Modeling in Physics Attitudinal Student Survey (COMPASS) was administered post-instruction and the responses were analyzed. While the FCI gains were on average slightly larger than those of the control group, the difference was not statistically significant. This at least suggests that incorporating computational physics assignments does not adversely affect students' conceptual learning.
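The classroom programs themselves are not reproduced in the abstract. As a hedged illustration of the kind of VPython motion-model loop typically used in Modeling Instruction lessons, with all initial values invented for the example:

```python
from vpython import sphere, vector, rate, color

# A minimal force-model loop: constant gravitational force on a projectile.
ball = sphere(pos=vector(0, 10, 0), radius=0.5, color=color.red, make_trail=True)
ball.velocity = vector(3, 0, 0)        # illustrative initial velocity (m/s)
g = vector(0, -9.8, 0)                 # gravitational field (N/kg)
dt = 0.01                              # time step (s)

while ball.pos.y > 0:
    rate(100)                                     # cap at 100 loop passes/second
    ball.velocity = ball.velocity + g * dt        # update velocity from net force
    ball.pos = ball.pos + ball.velocity * dt      # update position from velocity
```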
Solutions in generalised linear systems via Maple. Jones, Jonathan. January 1998.
In this thesis we consider several distinct problems in linear systems theory, and encompass the implementation of such work via the symbolic computational language Maple. Our analytical contribution is split into three main areas: the solution of a regular, discrete-time ARMA representation; the computation of the generalised inverse of a rational matrix P(s) ∈ R(s)^(n×m); and the computation of the invariant direction vectors associated with a regular polynomial matrix description (PMD).
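The thesis carries out these computations in Maple; as an illustrative sketch of the generalised-inverse step in Python/SymPy, with a made-up full-row-rank matrix standing in for P(s):

```python
from sympy import symbols, Matrix, simplify

s = symbols('s')

# An illustrative full-row-rank matrix P(s) in R(s)^(2x3); the thesis treats
# general rational matrices, this one is polynomial for readability.
P = Matrix([[1, s, 0],
            [0, 1, s]])

# For a full-row-rank P, one generalised (right) inverse is P^T (P P^T)^(-1),
# satisfying P * P_plus = I.
P_plus = P.T * (P * P.T).inv()

print(simplify(P * P_plus))   # prints the 2x2 identity matrix
print(simplify(P_plus))       # the generalised inverse, entries in R(s)
```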
Stochastic transduction for English grapheme-to-phoneme conversion. Luk, Robert Wing Pong. January 1992.
No description available.
The stability properties of some rheological flows. Demir, Huseyin. January 1996.
The stability of wall-driven and thermally driven cavity flow is investigated for a wide range of viscous and viscoelastic fluids. The effects of inertia, elasticity, temperature gradients, viscous heating and Biot boundary conditions are of particular interest. Both destabilisation and bifurcation phenomena are found. For Newtonian constant-viscosity flow the instabilities are characterised by a critical Reynolds number, which represents the ratio of inertial forces to viscous forces; instability occurs when the inertial forces become large. For non-Newtonian viscoelastic fluids the instability is characterised by a critical Weissenberg number, which represents the ratio of elastic forces to viscous forces; instability occurs when the elastic forces dominate the viscous forces. For thermally driven flow the instability is characterised by a critical Rayleigh number, which represents the ratio of the temperature gradient to the viscosity, and instability occurs when the Rayleigh number becomes large. In this case the instability is also characterised by both the Eckert and Biot numbers. The work has relevance to thermal convection and mixing processes which occur in the viscous and viscoelastic fluid within the Earth's mantle. Three-dimensional steady and transient flow in a cylindrical cavity, and three-dimensional steady flow in a spherical cavity, are also considered for both viscous and viscoelastic fluids. Instabilities in these three-dimensional flows depend on the same parameters as the flow in a square cavity.
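For reference, the dimensionless groups named above have standard textbook definitions; a minimal sketch, with placeholder fluid properties that are not taken from the thesis:

```python
def reynolds(rho, U, L, mu):
    """Re = rho*U*L/mu: inertial forces over viscous forces."""
    return rho * U * L / mu

def weissenberg(relaxation_time, U, L):
    """Wi = lambda*U/L: elastic forces over viscous forces."""
    return relaxation_time * U / L

def rayleigh(rho, g, beta, dT, L, mu, alpha):
    """Ra = rho*g*beta*dT*L**3/(mu*alpha): buoyancy over viscous and thermal diffusion."""
    return rho * g * beta * dT * L**3 / (mu * alpha)

# Placeholder values for a small water-like cavity, purely illustrative.
print(reynolds(rho=1e3, U=0.1, L=0.01, mu=1e-3))                       # Re = 1000
print(weissenberg(relaxation_time=0.5, U=0.1, L=0.01))                 # Wi = 5
print(rayleigh(rho=1e3, g=9.81, beta=2e-4, dT=10, L=0.01,
               mu=1e-3, alpha=1.4e-7))                                 # Ra ~ 1.4e5
```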
Design and implementation of a multi-agent opportunistic grid computing platform. Muranganwa, Raymond. January 2016.
Opportunistic Grid Computing involves joining idle computing resources in enterprises into a converged high-performance commodity infrastructure. The research described in this dissertation investigates the viability of public resource computing in offering seamless access to shared compute and storage resources. The research proposes and conceptualizes the Multi-Agent Opportunistic Grid (MAOG) solution in an Information and Communication Technologies for Development (ICT4D) initiative to address some limitations prevalent in traditional distributed system implementations. Proof-of-concept software components based on JADE (Java Agent Development Framework) validated Multi-Agent Systems (MAS) as an important tool for the provisioning of Opportunistic Grid Computing platforms. Exploration of agent technologies within the research context identified two key components which improve access to extended computing capabilities. The first component is a Mobile Agent (MA) compute component in which a group of agents interact to pool shared processor cycles. The compute component integrates dynamic resource identification and allocation strategies by incorporating the Contract Net Protocol (CNP) and rule-based reasoning concepts. The second service is a MAS-based storage component realized through disk mirroring and the Google file system's chunking with atomic-append storage techniques. This research provides a candidate Opportunistic Grid Computing platform design and implementation through the use of MAS. Experiments conducted validated the design and implementation of the compute and storage services. The results, covering the processing of user applications, resource identification and allocation, and rule-based reasoning, validated the MA compute component. A MAS-based file system that implements chunking optimizations was considered optimal based on the evaluations. The findings from the experiments also validated the functional adequacy of the implementation, and show the suitability of MAS for the provisioning of robust, autonomous, and intelligent platforms. The context of this research, ICT4D, provides a solution to optimizing and increasing the utilization of computing resources that usually lie idle in such settings.
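JADE and the compute component are implemented in Java; purely to illustrate the Contract Net Protocol flow the component builds on (call for proposals, bids, award, result), here is a language-agnostic Python sketch with invented agent names and a trivial bidding rule:

```python
import random

class WorkerAgent:
    """Responds to a call-for-proposals (CFP) with a bid of spare cycles."""
    def __init__(self, name):
        self.name = name
        self.idle_cycles = random.randint(10, 100)  # stand-in resource metric

    def propose(self, task):
        return (self.name, self.idle_cycles)        # the PROPOSE message

    def execute(self, task):
        return f"{self.name} ran {task}"            # the INFORM-result message

class InitiatorAgent:
    """Initiator side of the Contract Net: CFP -> collect bids -> award -> result."""
    def run_contract_net(self, task, workers):
        bids = [w.propose(task) for w in workers]           # collect PROPOSEs
        best_name, _ = max(bids, key=lambda bid: bid[1])    # ACCEPT the best bid
        winner = next(w for w in workers if w.name == best_name)
        return winner.execute(task)

workers = [WorkerAgent(f"node{i}") for i in range(4)]
print(InitiatorAgent().run_contract_net("matrix-multiply", workers))
```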
Generating referring expressions in a domain of objects and processes. Dale, Robert. January 1989.
No description available.
One Shell, Two Shell, Red Shell, Blue Shell: Numerical Modeling to Characterize the Circumstellar Environments of Type I Supernovae. Harris, Chelsea E. 21 November 2018.
Though fundamental to our understanding of stellar, galactic, and cosmic evolution, the stellar explosions known as supernovae (SNe) remain mysterious. We know that mass loss and mass transfer are central processes in the evolution of a star to the supernova event, particularly for thermonuclear Type Ia supernovae (SNe Ia), which are in a close binary system. The circumstellar environment (CSE) contains a record of the mass lost from the progenitor system in the centuries prior to explosion and is therefore a key diagnostic of SN progenitors. Unfortunately, tools for studying the CSE are specialized to stellar winds rather than the more complicated and violent mass-loss processes hypothesized for SN Ia progenitors.

This thesis presents models for constraining the properties of a CSE detached from the stellar surface. In such cases, the circumstellar material (CSM) may not be observed until interaction occurs and dominates the SN light weeks or even months after maximum light. I suggest we call SNe with delayed interaction SNe X;n (i.e. SNe Ia;n, SNe Ib;n). I performed numerical hydrodynamic simulations and radiation transport calculations to study the evolution of shocks in these systems. I distilled these results into simple equations that translate radio luminosity into a physical description of the CSE. I applied my straightforward procedure to derive upper limits on the CSM for three SNe Ia: SN 2011fe, SN 2014J, and SN 2015cp. I modeled interaction to late times for the SN Ia;n PTF11kx; this led to my participation in the program that discovered interaction in SN 2015cp. Finally, I expanded my simulations to study the Type Ib;n SN 2014C, the first optically-confirmed SN X;n with a radio detection. My SN 2014C models represent the first time an SN X;n has been simultaneously modeled at X-ray and radio wavelengths.
Vortex Formation in Free Space. Olsson, Martin. January 2018.
Aircraft trailing vortices are an inevitable side effect of an aircraft generating lift. The vortices represent a danger to following aircraft and force large spacing between landings and take-offs at airports. Detailed knowledge of the dynamics of aircraft trailing vortices is therefore needed to increase airport capacity and aviation safety. In this thesis, an accurate numerical simulation of aircraft trailing vortices is performed. The vortices undergo an expected instability phenomenon followed by a reconnection process. The reconnection process is studied in detail. During the reconnection, structures described by theory can be observed.
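The abstract does not specify the initial vortex model; as a hedged illustration of a classical profile such simulations often start from, here is the Lamb-Oseen vortex, whose core spreads diffusively in time (circulation and viscosity values are arbitrary, not from the thesis):

```python
import numpy as np

def lamb_oseen_utheta(r, t, Gamma=1.0, nu=1e-3):
    """Azimuthal velocity of a Lamb-Oseen vortex:
    u_theta(r, t) = Gamma / (2*pi*r) * (1 - exp(-r**2 / (4*nu*t)))."""
    return Gamma / (2 * np.pi * r) * (1 - np.exp(-r**2 / (4 * nu * t)))

r = np.linspace(1e-3, 1.0, 200)        # radial coordinate, avoiding r = 0
for t in (1.0, 10.0, 100.0):           # the core radius grows like sqrt(4*nu*t)
    u = lamb_oseen_utheta(r, t)
    print(f"t={t:6.1f}  peak u_theta={u.max():.4f} at r={r[u.argmax()]:.3f}")
```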
Spatio-Spectral Interferometric Imaging and the Wide-Field Imaging Interferometry Testbed. Iacchetta, Alexander S. 07 November 2018.
The light-collecting apertures of space telescopes are currently limited in part by the size and weight restrictions of launch vehicles, ultimately limiting the spatial resolution that can be achieved by the observatory. A technique that can overcome these limitations and provide superior spatial resolution is interferometric imaging, whereby multiple small telescopes are combined to produce a spatial resolution comparable to that of a much larger monolithic telescope. In astronomy, the spectra of the sources in a scene are crucial to understanding their material composition. So the ultimate goal is to have high-spatial-resolution imagery with sufficient spectral resolution for all points in the scene. This goal can be accomplished through spatio-spectral interferometric imaging, which combines the aperture-synthesis aspects of a Michelson stellar interferometer with the spectral capabilities of Fourier transform spectroscopy.

Spatio-spectral interferometric imaging can be extended to a wide-field imaging modality, which increases the collecting efficiency of the technique. This is the basis for NASA's Wide-field Imaging Interferometry Testbed (WIIT). In such an interferometer, two light-collecting apertures are separated by a variable distance known as the baseline length. The optical path in one arm of the interferometer is variable, while the other path delay is fixed. The beams from both apertures are subsequently combined and imaged onto a detector. For a fixed baseline length, the result is many low-spatial-resolution images at a range of optical path differences, and the process is repeated for many different baseline lengths and orientations. Image processing and synthesis techniques are required to reduce the large dataset into a single high-spatial-resolution hyperspectral image.

Our contributions to spatio-spectral interferometry include various aspects of theory, simulation, image synthesis, and processing of experimental data, with the end goal of better understanding the nature of the technique. We present the theory behind the measurement model for spatio-spectral interferometry, as well as the direct approach to image synthesis. We have developed a pipeline to preprocess experimental data, removing unwanted signatures and registering all image measurements to a single orientation, which leverages information about the optical system's point spread function. In an experimental setup such as WIIT, the reference frame for the path difference measured at each baseline is unknown and must be accounted for. To overcome this obstacle, we created a phase-referencing technique that uses point sources within the scene at known separation to recover the unknown information about the measurements in a laboratory setting. We also provide a method that allows the measurement of spatially and spectrally complicated scenes with WIIT by decomposing them prior to scene projection.
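The spectral half of the technique is Fourier transform spectroscopy: at each detector pixel, the spectrum is recovered by Fourier-transforming the fringe signal measured across optical path difference. A minimal Python sketch with an invented two-line source spectrum:

```python
import numpy as np

# Invented source: two emission lines at wavenumbers k1, k2 (cycles per unit OPD).
k1, k2 = 12.0, 20.0
opd = np.linspace(0.0, 4.0, 1024, endpoint=False)   # optical path differences
interferogram = (1 + np.cos(2 * np.pi * k1 * opd)) + \
                0.5 * (1 + np.cos(2 * np.pi * k2 * opd))

# The Fourier transform over path difference recovers the spectrum; subtracting
# the mean removes the constant (zero-wavenumber) term.
spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
wavenumbers = np.fft.rfftfreq(opd.size, d=opd[1] - opd[0])

for peak in np.argsort(spectrum)[-2:]:
    print(f"recovered line near k = {wavenumbers[peak]:.2f}")
```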