151 |
A one-group parametric sensitivity analysis for the graphite isotope ratio method and other related techniques using ORIGEN 2.2 / Chesson, Kristin Elaine, 02 June 2009 (has links)
Several methods have been developed previously for estimating cumulative energy production and plutonium production from graphite-moderated reactors. The Graphite Isotope Ratio Method (GIRM) is one well-known technique. This method is based on the measurement of trace isotopes in the reactor’s graphite matrix to determine the change in their isotopic ratios due to burnup. These measurements are then coupled with reactor calculations to determine the total plutonium and energy production of the reactor. To facilitate sensitivity analysis of these methods, a one-group cross section and fission product yield library for the fuel and graphite activation products has been developed for MAGNOX-style reactors. This library is intended for use in the ORIGEN computer code, which calculates the buildup, decay, and processing of radioactive materials. The library was developed using a fuel cell model in Monteburns. This model consisted of a single fuel rod including natural uranium metal fuel, magnesium cladding, carbon dioxide coolant, and Grade A United Kingdom (UK) graphite. Using this library a complete sensitivity analysis can be performed for GIRM and other techniques. The sensitivity analysis conducted in this study assessed various input parameters including 235U and 238U cross section values, aluminum alloy concentration in the fuel, and initial concentrations of trace elements in the graphite moderator. The results of the analysis yield insight into the GIRM method and the isotopic ratios the method uses as well as the level of uncertainty that may be found in the system results.
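To make the flavour of such a one-at-a-time parametric study concrete, the sketch below perturbs a single one-group cross section in a toy trace-isotope burnout model and records how the inferred fluence shifts. It is only an illustration: the model, cross section, fluence, and isotope ratio are assumed values for demonstration, not GIRM, MAGNOX, or ORIGEN data.

```python
import numpy as np

# Toy one-group model (illustration only, not ORIGEN/GIRM data):
# a trace impurity in graphite burns out as N(t) = N0 * exp(-sigma * phi * t),
# so the ratio R = N10/N11 falls with fluence and can be inverted for fluence.

SIGMA_10 = 3.8e-21   # assumed one-group capture cross section, cm^2 (hypothetical)
PHI_T    = 1.0e21    # assumed fluence, n/cm^2 (hypothetical)
R0       = 0.2481    # assumed initial 10B/11B atom ratio (natural-boron-like)

def ratio(fluence, sigma=SIGMA_10, r0=R0):
    """Trace-isotope ratio after a given fluence (11B taken as unchanged)."""
    return r0 * np.exp(-sigma * fluence)

def inferred_fluence(measured_ratio, sigma=SIGMA_10, r0=R0):
    """Invert the ratio for fluence, in the spirit of a ratio-based estimate."""
    return -np.log(measured_ratio / r0) / sigma

measured = ratio(PHI_T)                      # stands in for a measured ratio
for delta in (-0.05, 0.0, +0.05):            # one-at-a-time +/-5% cross-section perturbation
    sigma = SIGMA_10 * (1.0 + delta)
    est = inferred_fluence(measured, sigma)
    print(f"sigma {delta:+.0%}: inferred fluence {est:.3e} n/cm^2 "
          f"({(est - PHI_T) / PHI_T:+.1%} vs. reference)")
```

In this toy model the inferred fluence scales inversely with the perturbed cross section, so a +5 % cross-section change produces roughly a -5 % shift in the estimate; a full study of the kind described above repeats this for every input parameter of interest.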
|
152 |
Edge-degenerate families of ΨDO’s on an infinite cylinder / Abed, Jamil; Schulze, Bert-Wolfgang, January 2009 (has links)
We establish a parameter-dependent pseudo-differential calculus on an infinite cylinder, regarded as a manifold with conical exits to infinity. The parameters are involved in edge-degenerate form, and we formulate the operators in terms of operator-valued amplitude functions.
|
153 |
Stability results for viscous shock waves and plane Couette flow / Liefvendahl, Mattias, January 2001 (has links)
No description available.
|
154 |
Speed Choice : The Driver, the Road and Speed Limits / Haglund, Mats, January 2001 (has links)
Speed choice is one of the more characteristic features of driver behaviour. The speed a driver chooses to travel at determines the degree of difficulty he or she operates under. Higher speeds lead to more accidents, higher accident risk and more severe consequences of an accident. The present thesis examines factors that are associated with drivers’ speed choice. Repeated measures of drivers’ speed showed a reasonably high correlation, but also that stability in speed varied with road layout between measurement sites. Effects of police enforcement were studied on roads with temporarily reduced speed limits (from 50 km/h to 30 km/h) during school hours. Lower speeds were found on roads with enforcement, and drivers observed on one such road showed a higher perceived probability of detection than did drivers observed on a non-enforced road. However, in a laboratory study, higher driving speeds and lower accident risk were associated with enforced roads. Drivers not informed about existing speed limits overestimated the limits to a large extent and chose driving speeds above the limit, as did drivers informed about the limits. In an on-the-road survey, fast drivers reported higher driving speeds, thought a higher percentage of other drivers were speeding and had a more positive attitude towards speeding than did slower drivers. The results suggest that drivers’ travel speed is influenced by road factors, other road users and enforcement. Furthermore, drivers’ own judgements of what is an appropriate speed are also important for speed choice.
|
155 |
Auction Houses and Contemporary Art : A Study of Outstanding Sales in 2007 and 2009 / Kalmykova, Anna, January 2010 (has links)
This thesis aims to analyze the contemporary art market in terms of auction sales carried out by Sotheby’s and Christie’s in London and New York in 2007 and 2009. The study investigates cases in which artworks’ prices exceeded their estimates. A model testing the relationships between hammer price, auction house, artist, form of art, the year of the object’s creation and its current owner, along with the year of sale and the performance of the stock market, was developed according to the theoretical framework, which includes such concepts as art objects and their value, gatekeepers, and investment and auction theories. Regression analysis revealed that the presence of a pre-lot note, published in the auction catalogue and specifying the collector putting the artwork up for sale, and the year of the art piece’s creation make a significant contribution to predicting the hammer price. Moreover, the analysis identified that paintings and sculptures typically reach high prices, while drawings, watercolors and gouaches appeared to be less expensive objects. As far as the artists are concerned, the study showed that pieces by such top artists as Andy Warhol, Jean-Michel Basquiat, Gerhard Richter, Willem de Kooning and Jeff Koons tend to achieve outstanding results more often. The cases of over-performance were identified according to the model, which provided an opportunity to estimate the predicted hammer price and compare it to the one achieved during the sale. The analysis did not reveal a clear pattern among over-performers; however, objects sold at Sotheby’s tend to over-perform slightly more often; Andy Warhol and Damien Hirst appeared to be the artists whose artworks reach prices higher than the estimates; and watercolors, drawings and gouaches, along with sculptures, statues and figures, and photographs and prints, turned out to be the over-performing forms of art.
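As a rough sketch of the kind of regression described above, the snippet below fits log hammer price on dummy-coded sale characteristics. The variable names, the synthetic data and the price-generating process are all invented for illustration; this is not the thesis’s model or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic illustration only: invented data standing in for auction lots.
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "house": rng.choice(["Sothebys", "Christies"], n),
    "form": rng.choice(["painting", "sculpture", "work_on_paper"], n),
    "pre_lot_note": rng.integers(0, 2, n),          # 1 if a provenance note was published
    "creation_year": rng.integers(1950, 2008, n),
    "sale_year": rng.choice([2007, 2009], n),
})
# Invented price-generating process, so the fit has something to recover.
df["hammer_price"] = np.exp(
    10
    + 0.8 * (df["form"] == "painting")
    + 0.5 * df["pre_lot_note"]
    + 0.01 * (df["creation_year"] - 1950)
    - 0.3 * (df["sale_year"] == 2009)
    + rng.normal(0, 0.4, n)
)

model = smf.ols(
    "np.log(hammer_price) ~ C(house) + C(form) + pre_lot_note"
    " + creation_year + C(sale_year)",
    data=df,
).fit()
print(model.summary().tables[1])   # coefficients on the log-price scale
```

Over-performing lots can then be read off as those whose observed hammer price exceeds the model’s prediction, which is the comparison the study describes.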
|
156 |
Boreal vegetation responses to forestry as reflected in field trial and survey data and the quality of cover estimates and presence/absence in vegetation inventory / Bergstedt, Johan, January 2008 (has links)
This thesis examines how harvesting and soil scarification affect the ground-layer flora of the Swedish coniferous forest. In addition, two inventory methods used in plant ecology are evaluated. Both nationwide survey data and field trials were used, and the similar results indicate that nationwide surveys are an underused resource in research. The larger the proportion of trees harvested, the larger the change in the composition of the ground-layer flora. Some species, such as lingonberry and heather, appear largely unaffected, while others, such as bilberry, decline in relation to how much was harvested. Grasses and rosebay willowherb increase after harvesting, although some grasses and rosebay willowherb do not respond unless the harvesting exceeds a threshold of about 80 %. Harvesting has a small but significant effect on the number of species, whereas species turnover, i.e. the establishment of species on and/or their disappearance from the plots, is influenced primarily by the proportion of spruce before harvesting and by site productivity, and only thereafter by the proportion of trees harvested. It was also evident that soil scarification has a strong effect that differs from that of harvesting. Haircap mosses in particular are favoured by scarification, but so are hairy wood-rush, wavy hair-grass and rosebay willowherb. Species disfavoured by scarification included a liverwort, lingonberry, red-stemmed feather moss and crowberry. In plant ecology, visual cover estimation, i.e. how large a part of a plot is covered by a plant species, and recording of presence/absence, i.e. whether a plant species occurs on a plot or not, are the two most common methods in vegetation inventory. When recording presence/absence, up to a third of the occurrences are missed; the most common cause of missed records seems to be that the species is not detected rather than that it cannot be identified. There was large variation among species, with species represented by few individuals on a plot being missed more often. Both visual cover estimation and presence/absence recording turn out to have observer-dependent errors, i.e. different observers consistently give higher or lower values than others. Despite the observer-dependent error, cover estimates prove to carry a higher information value than presence/absence recording when it comes to distinguishing different types of vegetation. Experience has a surprisingly small effect on the quality of cover estimates. / This thesis has two main focuses: first, the response of the forest ground-layer flora to forestry, mainly harvesting, and secondly, the quality of two vegetation assessment methods, cover estimates by eye and presence/absence data. The effect of harvesting intensity was evaluated with survey data from permanent plots as well as vegetation data from a field trial fourteen years after harvesting. Both data sets confirmed that the response of the ground-layer flora increased with increasing logging intensity, indicating that survey data can be used in research. In the survey data set, a time lag was evident for several species, as was a threshold in cutting intensity needed to affect a number of species. Logging had a modest, but significant, positive effect on the change in species number per plot. Species turnover was influenced by the proportion of Picea abies in the tree canopy, site productivity, and logging intensity. In the field trial, scarification had a strong effect that differed from that of cutting.
In plant ecology, cover estimation by eye and presence/absence recording are the two most frequently used methods. The methods were evaluated with survey data and a field trial. In the first data set, vegetation was recorded independently by two observers in 342 permanent 100-m2 plots. Overall, one third of the occurrences were missed by one of the two observers, but with large differences among species. Species occurring at low abundance tended to be frequently overlooked. Observer-explained variance in cover estimates was <10% in 15 of 17 species. In the second data set, 10 observers independently estimated cover in sixteen 100-m2 plots in two different vegetation types. The observer-related bias varied substantially between species. The estimates of the missing field and bottom layers had the highest bias, indicating that missing layers are problematic to use in analyses of change. Experience had a surprisingly small impact on the observer-related bias. Analyses revealed that, in terms of statistical power, cover estimates by eye carry a higher information value than presence/absence data when distinguishing between vegetation types, that differences between observers are negligible in this respect, and that using more than one observer had little effect.
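The two-observer comparison can be pictured with a small simulation: for each species, count the plots where exactly one of two observers recorded it, relative to all plots where at least one did. The occupancy and detection probabilities below are assumptions chosen only to reproduce the general pattern, not the survey data.

```python
import numpy as np

# Hypothetical illustration of a two-observer presence/absence comparison:
# species are overlooked independently by each observer, and we compute the
# share of occurrences recorded by only one of the two observers.
rng = np.random.default_rng(1)
n_plots, n_species = 342, 17
truly_present = rng.random((n_plots, n_species)) < 0.3      # invented occupancy
detect = lambda: truly_present & (rng.random(truly_present.shape) < 0.85)
obs_a, obs_b = detect(), detect()                            # each observer overlooks some

recorded_by_any = obs_a | obs_b
missed_by_one = obs_a ^ obs_b                                # exactly one observer saw it
per_species_missed = missed_by_one.sum(0) / np.maximum(recorded_by_any.sum(0), 1)
print("overall share of occurrences missed by one observer:",
      round(missed_by_one.sum() / recorded_by_any.sum(), 2))
print("range across species:", per_species_missed.min().round(2),
      "to", per_species_missed.max().round(2))
```

With an assumed per-observer detection probability of 0.85, roughly a quarter of the jointly known occurrences end up recorded by only one observer, which is the order of magnitude reported above.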
|
157 |
Asymptotic Estimates for Rational Spaces on Hypersurfaces in Function Fields / Zhao, Xiaomei, January 2010 (has links)
The ring of polynomials over a finite field has many arithmetic properties similar to those of the ring of rational integers. In this thesis, we apply the Hardy-Littlewood circle method to investigate the density of rational points on certain algebraic varieties in function fields. The aim is to establish asymptotic relations that are relatively robust to changes in the characteristic of the base finite field. Most notably, in the case when the characteristic is "small", the results are sharper than their integer analogues.
|
159 |
Designing a cost estimation method for the design of prototype systems / Holmes, Jonathan Frank, 09 April 2012 (has links)
There are unique cases in product design where a prototype is required to demonstrate critical operations of a system or subsystem, such that it serves as a basis for how the design will move forward. These prototypes are oftentimes on the critical design path. Because there is typically some aspect of a prototype that is not well understood, there can be a considerable amount of uncertainty associated with the amount of resources needed to design such a prototype. The goal of this thesis is to address how to systematically reduce uncertainty for the purpose of creating a robust cost estimate. This goal highlights the problem of defining what a robust estimate is, which leads to the key question driving this research: "When is enough information gathered to generate a robust estimate for the design of prototype systems?"
The crux of the problem lies in how to characterize the interactions and uncertainty associated with cost, schedule, and performance. Additionally, the breakdown of a prototype system into its subsystems introduces errors at each division. The result is a cost estimation method generated by leveraging the principles of design methodology. Two test cases are applied: one theoretical model and one project from the Georgia Tech Research Institute (GTRI). The GTRI project was work performed for the Georgia Department of Transportation related to the filling of cracks on asphalt road surfaces. These examples are evaluated from the viewpoint of the Validation Square in order to verify the method's effectiveness beyond the example problems.
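One hedged way to picture how subsystem-level uncertainty rolls up into a prototype-level estimate (not the method developed in the thesis) is a simple Monte Carlo over assumed subsystem cost distributions; the subsystem names and triangular parameters below are invented.

```python
import numpy as np

# Illustrative sketch only (not the thesis's method): propagate subsystem-level
# cost uncertainty to a prototype-level estimate by Monte Carlo. Subsystem names,
# distributions, and parameters are invented for demonstration.
rng = np.random.default_rng(42)
subsystems = {
    # (low, most_likely, high) labor-hour estimates, triangular by assumption
    "sensing":     (80, 120, 250),
    "actuation":   (60, 100, 180),
    "software":    (100, 200, 500),
    "integration": (40, 90, 200),
}
n_samples = 100_000
total = np.zeros(n_samples)
for low, mode, high in subsystems.values():
    total += rng.triangular(low, mode, high, n_samples)

p10, p50, p90 = np.percentile(total, [10, 50, 90])
print(f"median estimate: {p50:.0f} hours")
print(f"80% interval:    {p10:.0f} to {p90:.0f} hours")
# A narrowing P10-P90 band as subsystem ranges tighten is one way to judge
# whether "enough information" has been gathered for a robust estimate.
```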
|
160 |
Kontrolle semilinearer elliptischer Randwertprobleme mit variationeller Diskretisierung [Control of semilinear elliptic boundary value problems with variational discretization] / Matthes, Ulrich, 06 April 2010 (has links) (PDF)
Control problems arise in many applications in science and engineering. This thesis studies optimal control problems with semilinear elliptic partial differential equations as constraints. The control is restricted by control bounds in the form of inequality constraints.
The objective functional is quadratic in the control. The solution of the optimization problem can then be represented through the projection condition in terms of the adjoint state.
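As a hedged illustration, for a quadratic control cost with Tikhonov parameter α > 0 and pointwise bounds a ≤ u ≤ b (notation assumed here, not taken from the thesis), this projection condition takes the standard form

```latex
\[
  \bar{u}(x) = P_{[a,b]}\!\left( -\tfrac{1}{\alpha}\,\bar{p}(x) \right),
  \qquad
  P_{[a,b]}(v) = \max\bigl(a, \min(b, v)\bigr),
\]
```

where \(\bar{p}\) is the adjoint state associated with the optimal state; variational discretization keeps exactly this relation, with \(\bar{p}\) replaced by its finite element approximation.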
A new approach is variational discretization. Here only the state and the adjoint state are discretized, but not the space of controls. For control-constrained problems this approach allows higher convergence rates for the control than a discretization of the control space. The projection condition for the variationally discretized problem uses the same admissible set as in the undiscretized problem.
In the present thesis, the method of variational discretization is applied to semilinear elliptic optimal control problems and error estimates for the controls are proven. The main emphasis is on distributed control, but Neumann boundary control is also treated.
After a survey of the literature, the problem is stated together with its assumptions, and the optimality conditions are given.
Then the existence of a solution and the convergence of the discrete solutions to a continuous solution are shown. In addition, finite element convergence orders are stated.
Optimal error estimates in various norms for the variational control are then proven.
In particular, the error estimates are given in terms of the finite element error of the state and of the adjoint state.
To this end, the nonlinear fixed-point equation is linearized by means of a semismooth Newton method. The Newton method is also used for the numerical solution of the problem. The assumption required for the convergence order is not the SSC, the second-order sufficient condition, which implies local convexity of the objective functional, but the invertibility of the Newton operator. This is a stationarity condition at the optimal control.
All that is required is that the boundary of the active set is a set of measure zero and that the Newton operator is invertible at the optimal solution.
The Schauder fixed-point theorem is used to prove, for the Newton equation, the existence of a fixed point within the desired neighbourhood. Moreover, the uniqueness of such a fixed point is shown for a given triangulation with a sufficiently fine discretization.
The result is that the convergence rate is limited only by the finite element convergence rates of the state and the adjoint state. This rate is bounded not only by the ansatz functions but also by the smoothness of the right-hand side, so that the kink at the boundary of the active set sets a limit here.
Furthermore, the implementation of the semismooth Newton method for the infinite-dimensional control space under variational discretization is explained, with particular attention to the two-dimensional distributed case.
The proven convergence rates are demonstrated on several semilinear and linear examples using variational discretization.
The methods used in the analytical proofs and in the numerical solution correspond to one another: the fixed-point iteration and the Newton method solved for the control or for the adjoint state. Some peculiarities have to be observed in the implementation; for example, the control must not be updated incrementally by the Newton method or the fixed-point iteration, but has to be recomputed in every step.
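A minimal sketch of such a projected fixed-point iteration, on an assumed linear-quadratic toy problem with a 1D Poisson state equation (finite differences standing in for the finite element discretization of state and adjoint; not the thesis code), is given below. As noted above, the control is recomputed from the projection formula in every step rather than updated incrementally.

```python
import numpy as np

# Minimal sketch (assumed toy problem, not the thesis code): projected fixed-point
# iteration for  min 1/2||y - y_d||^2 + alpha/2||u||^2  s.t.  -y'' = u, y(0)=y(1)=0,
# with box constraints a <= u <= b. The state and adjoint are discretized (finite
# differences here); the control is recovered pointwise from the projection formula
# in every step and never updated incrementally.
n, alpha, a, b = 199, 0.1, 0.0, 0.7
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
y_d = np.sin(np.pi * x)                              # assumed desired state

# Discrete 1D Laplacian with homogeneous Dirichlet boundary conditions
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

def solve_state(u):   return np.linalg.solve(A, u)          # -y'' = u
def solve_adjoint(y): return np.linalg.solve(A, y - y_d)    # -p'' = y - y_d
def project(v):       return np.clip(v, a, b)

u = np.zeros(n)
for k in range(100):
    p = solve_adjoint(solve_state(u))
    u_new = project(-p / alpha)          # control recomputed from the projection
    if np.max(np.abs(u_new - u)) < 1e-10:
        u = u_new
        break
    u = u_new
print(f"converged after {k + 1} iterations; control range "
      f"[{u.min():.3f}, {u.max():.3f}] (upper bound {b} active where clipped)")
```

For this assumed setup the iteration is a contraction because the regularization parameter dominates the norm of the solution operator, so it converges in a handful of steps; the upper control bound becomes active in the middle of the domain, which is where the projection matters.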
|