161

Investigation of phytoplankton dynamics using time-series analysis of biophysical parameters in Gippsland Lakes, South-eastern Australia

Khanna, Neha, Neha.Khanna@mdbc.gov.au January 2007 (has links)
There is a need for ecological modelling to help understand the dynamics of ecological systems and thus aid management decisions to maintain or improve their quality. This research focuses on non-linear statistical modelling of observations from an estuarine system, the Gippsland Lakes, on the south-eastern coast of Australia. Feed-forward neural networks are used to model chlorophyll time series from a fixed monitoring station at Point King. The research proposes a systematic approach to modelling in ecology using feed-forward neural networks, to ensure that (a) results are reliable, (b) understanding of the dynamics in the ecological system is improved, and (c) a prediction is obtained where possible. An objective filtering algorithm to enable modelling is presented. Sensitivity analysis techniques are compared to select the most appropriate technique for ecological models. The research generated a chronological profile of relationships between biophysical parameters and chlorophyll level for different seasons. A sensitivity analysis of the models was used to understand how the significance of the biophysical parameters changes as the time difference between the input and the predicted value changes. The results show that filtering improves modelling without introducing any noticeable bias. The partial derivative method is found to be the most appropriate technique for sensitivity analysis of ecological feed-forward neural network models. Feed-forward neural networks show potential for prediction when modelled on an appropriate time series, and they also show the capability to increase understanding of the ecological environment. In this research, vertical gradient and temperature are found to be important for chlorophyll levels at Point King at time scales from a few hours to a few days. The importance of the chlorophyll level at any time to chlorophyll levels in the future diminishes as the time difference between them increases.
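As an illustration of the partial derivative method named above, here is a minimal sketch for a one-hidden-layer feed-forward network; the weights, layer sizes, and input series are random placeholders standing in for a trained chlorophyll model, not the Point King model itself:

```python
import numpy as np

# Partial derivative (PaD) sensitivity for y = W2 @ tanh(W1 @ x + b1) + b2.
# Random weights stand in for a trained network.
rng = np.random.default_rng(0)
n_in, n_hid = 5, 8          # e.g. temperature, salinity, ... -> chlorophyll
W1, b1 = rng.normal(size=(n_hid, n_in)), rng.normal(size=n_hid)
W2, b2 = rng.normal(size=n_hid), rng.normal()

def output_and_sensitivity(x):
    """Return y(x) and dy/dx_j for each input j, via the chain rule."""
    h = np.tanh(W1 @ x + b1)
    y = W2 @ h + b2
    # d tanh(a)/da = 1 - tanh(a)^2, so dy/dx = W2 @ diag(1 - h^2) @ W1
    dy_dx = (W2 * (1.0 - h**2)) @ W1
    return y, dy_dx

X = rng.normal(size=(200, n_in))             # stand-in input time series
S = np.array([output_and_sensitivity(x)[1] for x in X])
# Rank inputs by mean squared partial derivative over the data set
print(np.argsort(-np.mean(S**2, axis=0)))
```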
162

Improved Experimental Agreement of Ionization and Pressure Peak Location by Adding a Dynamical NO-Model / Förbättrad experimentell överenstämmelse med jonström- och trycktoppsläge genom införande av en dynamisk NO-modell

Claesson, Daniel January 2004 (has links)
Modelling combustion engines is an important tool in engine research. The development and modelling of ionization current has potential for developing virtual pressure sensors based on ionization measurements. Previous models have had problems predicting the true relationship between the pressure peak location and the ionization peak location; both too early and too late predictions have been observed. An explanation for these discrepancies is provided, and a model is presented in which the experimental mismatch has been reduced to less than one crank angle degree (CAD). This is well within the measurement uncertainty.
163

Fysikalisk modellering av klimat i entreprenadmaskin / Physical Modeling of Climate in Construction Vehicles

Nilsson, Sebastian January 2005 (has links)
This master's thesis concerns a modelling project performed at Volvo Technology in Gothenburg, Sweden. The main purpose of the project has been to develop a physical model of the climate in construction vehicles that can later be used in the development of an electronic climate controller. The focus of the work has been on one type of wheel loader and one type of excavator. The temperature inside the compartment is used as the measure of climate.

Using physical theories of air flow and heat transfer, relations between the components in the climate unit and the compartment have been derived. Parameters with unknown values have been estimated. The relations have then been implemented in the modelling tool Simulink.

The model has been validated by comparing measured data with modelled values through the root mean square error and the correlation. A sensitivity analysis was performed by varying the estimated parameters and observing the change in the output signal, i.e. the temperature of the compartment.

The validation shows that the factor with the greatest influence on the temperature in the vehicle is the air flow through the climate unit and the outlets: minor changes in air flow result in major changes in temperature. The validation principally shows that the model gives a good estimate of the temperature in the compartment. The static values of the model differ from the measured data but are regarded as being within an acceptable margin of error. The weakness of the model is mainly its prediction of the dynamics, which does not correlate satisfactorily with the data.
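The two validation quantities used above reduce to a few lines; a small sketch with invented compartment-temperature traces (the values are illustrative, not from the thesis):

```python
import numpy as np

def rmse(measured, modeled):
    """Root mean square error between two series."""
    return np.sqrt(np.mean((np.asarray(measured) - np.asarray(modeled))**2))

def correlation(measured, modeled):
    """Pearson correlation coefficient between two series."""
    return np.corrcoef(measured, modeled)[0, 1]

# Hypothetical compartment-temperature traces (degrees C)
t_meas = np.array([21.0, 21.4, 22.1, 22.8, 23.0, 22.6])
t_model = np.array([21.2, 21.5, 22.4, 22.6, 23.3, 22.9])
print(f"RMSE = {rmse(t_meas, t_model):.2f} C, r = {correlation(t_meas, t_model):.3f}")
```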
164

Life Cycle Costing in the evaluation process of new production lines / Livscykelkostnad i utvecklingsprocessen av nya produktionslinor

Ludvigsson, Rebecka January 2010 (has links)
The purpose of this thesis is to develop a Life Cycle Cost model that can be used for investment, budgeting, and comparing alternatives. An evaluation of existing models concluded that there was a need for a model that is easy to use and understand yet economically and technically sophisticated. Theoretical and empirical information was gathered in accordance with this purpose and formed the basis of the model. The model highlights operating, energy, and maintenance costs. A case study to test the model was carried out at Swedwood International AB, which is part of IKEA. Swedwood currently works with payback calculations, which can lead to wrong decisions over the life of an investment. The developed LCC model was tested on different techniques for applying an edge to a substrate. The result of the report is that the user gets a clear and structured overview of an investment over its economic life. A final investment decision demands further tests and evaluations, for example technical tests and MCDM. Further research on the LCC model could investigate whether it lacks any critical aspects that should be included. A recommendation for Swedwood is to follow up the developed standards for collecting data at the factories, in order to facilitate the evaluation of new techniques and comparisons between investment options. / The purpose of this thesis is to develop a life cycle cost model that can be used for investments, budgeting, and comparisons. After an evaluation of available models, it was concluded that there was a need for a model that is economically and technically advanced yet user-friendly. Theory and empirical data were collected in accordance with the purpose and formed a basis for the model. The model particularly highlights cost activities such as operating, energy, and maintenance costs. A case study to test the model was carried out, and the case company was Swedwood International AB, which is part of IKEA. Swedwood currently works with payback calculations, which can lead to wrong decisions over the whole lifetime of the investment. The developed LCC model was tested on different techniques for applying an edge to a workpiece. The result of the report is that by using the model one obtains a clear overview of all costs during an investment's economic lifetime. An investment decision requires further tests and evaluations, such as technical tests and MCDM. A continued development of the model could be to investigate whether it lacks any critical part that should be included. A recommendation to Swedwood is to follow up the centrally developed standards at the factories so that everyone collects data in the same way, which would facilitate the implementation of new techniques and comparisons of investments.
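The discounting at the heart of such an LCC comparison can be sketched in a few lines; the investment outlays, annual cost streams, and discount rate below are invented for illustration and are not Swedwood figures:

```python
def life_cycle_cost(investment, annual_costs, discount_rate):
    """Present value of an initial investment plus discounted annual costs.
    annual_costs: list of per-year totals (operating + energy + maintenance)."""
    pv = investment
    for year, cost in enumerate(annual_costs, start=1):
        pv += cost / (1 + discount_rate) ** year
    return pv

# Hypothetical comparison of two edge-application techniques over an
# 8-year economic life (all figures invented for illustration).
option_a = life_cycle_cost(500_000, [60_000] * 8, 0.08)
option_b = life_cycle_cost(350_000, [95_000] * 8, 0.08)
print(f"A: {option_a:,.0f}  B: {option_b:,.0f}  -> prefer",
      "A" if option_a < option_b else "B")   # lower life cycle cost wins
```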
165

Sensitivity Analysis for Shortest Path Problems and Maximum Capacity Path Problems in Undirected Graphs

Ramaswamy, Ramkumar, Orlin, James B., Chakravarty, Nilopal 30 April 2004 (has links)
This paper addresses sensitivity analysis questions concerning the shortest path problem and the maximum capacity path problem in an undirected network. For both problems, we determine the maximum and minimum weights that each edge can have so that a given path remains optimal. For both problems, we show how to determine these maximum and minimum values for all edges in O(m + K log K) time, where m is the number of edges in the network, and K is the number of edges on the given optimal path.
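A brute-force sketch of the tolerances described above, for the edges on a given shortest s-t path: each path edge's weight can rise until some detour ties. The paper's contribution is computing all tolerances in O(m + K log K); this naive version instead recomputes a shortest path per edge, and the tiny graph is invented:

```python
import networkx as nx

# Naive upper tolerances for edges on the shortest s-t path.
G = nx.Graph()
G.add_weighted_edges_from([("s", "a", 2), ("a", "t", 3), ("s", "b", 4),
                           ("b", "t", 4), ("a", "b", 1)])
path = nx.shortest_path(G, "s", "t", weight="weight")
d_st = nx.shortest_path_length(G, "s", "t", weight="weight")

for u, v in zip(path, path[1:]):
    w = G[u][v]["weight"]
    G.remove_edge(u, v)
    try:
        detour = nx.shortest_path_length(G, "s", "t", weight="weight")
        max_w = w + (detour - d_st)   # weight at which a detour ties
    except nx.NetworkXNoPath:
        max_w = float("inf")          # bridge edge: no alternative path
    G.add_edge(u, v, weight=w)
    print(f"edge ({u},{v}): weight {w} can rise to {max_w}")
```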
166

On an Extension of Condition Number Theory to Non-Conic Convex Optimization

Freund, Robert M., Ordóñez, Fernando, 1970- 02 1900 (has links)
The purpose of this paper is to extend, as much as possible, the modern theory of condition numbers for conic convex optimization, z* := min_x cᵀx s.t. Ax − b ∈ C_Y, x ∈ C_X, to the more general non-conic format (GPd): z* := min_x cᵀx s.t. Ax − b ∈ C_Y, x ∈ P, where P is any closed convex set, not necessarily a cone, which we call the ground-set. Although any convex problem can be transformed to conic form, such transformations are neither unique nor natural given the natural description of many problems, thereby diminishing the relevance of data-based condition number theory. Herein we extend the modern theory of condition numbers to the problem format (GPd). As a byproduct, we are able to state and prove natural extensions of many theorems from the conic-based theory of condition numbers to this broader problem format.
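Restated in LaTeX for readability, since plain text garbles the constraint notation (a reconstruction from the abstract, with C_X and C_Y the cones of the conic format):

```latex
% Conic format: the setting of the existing condition-number theory
\[
  z_* := \min_x \; c^{\mathsf T} x
  \quad \text{s.t.} \quad Ax - b \in C_Y, \; x \in C_X .
\]
% Ground-set format (GPd): the cone C_X is replaced by an arbitrary
% closed convex set P
\[
  z_* := \min_x \; c^{\mathsf T} x
  \quad \text{s.t.} \quad Ax - b \in C_Y, \; x \in P .
\]
```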
167

Evaluation of Capital Investment and Cash Flows for Alternative Switchgrass Feedstock Supply Chain Configurations

Chen, Jie 01 August 2011 (has links)
Biofuels have been widely recognized as a potential renewable energy source, and the United States government has been interested in producing ethanol from lignocellulosic biomass such as switchgrass. To evaluate whether biofuel production from lignocellulosic biomass is economically feasible, this paper estimated the capital investment outlays, operating costs, and net present value of investment in alternative switchgrass feedstock supply chain configurations for a 25-million-gallon-per-year ethanol biorefinery in East Tennessee. Two scenarios are analyzed. The conventional hay harvest scenario includes the production, harvest, storage, and transportation of biomass feedstocks from the fields to the biorefinery. The preprocessing scenario adds preprocessing facilities to the biomass supply chain. Analyses and comparisons were made among systems with different harvest, storage, preprocessing, and harvest equipment options. The capital budgeting model developed in this study generated the optimal feedstock supply chain configuration by determining the largest net present value of cash flow from investment. Results show that with the Biomass Crop Assistance Program (BCAP) incentives, a round bale system using feedstock stored without tarp on pallets and custom-hired equipment had the largest positive net present value. By comparison, if all harvest equipment is purchased rather than custom hired, the stretch wrap baler preprocessing system, using switchgrass harvested by a chopper with a rotary cutter-header, was found to have a cost advantage over conventional hay harvest logistics systems (large round bale and large square bale systems) and pellet preprocessing systems. Assuming most likely values for switchgrass price and production costs, none of the feedstock supply chain configurations evaluated in this study produced a positive net present value when BCAP subsidies were assumed to be unavailable. However, without the BCAP incentives and under a combination of optimistic assumptions, the round bale system using feedstock stored without tarp on pallets and custom-hired equipment still has the largest positive net present value. Without the BCAP incentives, no feedstock supply chain configuration using purchased rather than custom-hired equipment generated a positive net present value.
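A sketch of the capital-budgeting step described above: compute the net present value of each configuration's cash-flow stream and keep the largest. The configuration labels echo the abstract, but every cash-flow figure and the discount rate are invented placeholders:

```python
def npv(cash_flows, rate):
    """Net present value of annual net cash flows,
    with the year-0 capital outlay first in the list."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Invented cash flows for three hypothetical supply-chain configurations
# (year-0 outlay followed by ten years of net returns).
configs = {
    "round bale, custom hired": [-1.2e6] + [2.6e5] * 10,
    "square bale, purchased":   [-2.0e6] + [3.1e5] * 10,
    "pellet preprocessing":     [-3.5e6] + [4.8e5] * 10,
}
for name, cf in configs.items():
    print(f"{name:26s} NPV = {npv(cf, 0.07):12,.0f}")
best = max(configs, key=lambda k: npv(configs[k], 0.07))
print("optimal configuration:", best)
```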
168

A one-group parametric sensitivity analysis for the graphite isotope ratio method and other related techniques using ORIGEN 2.2

Chesson, Kristin Elaine 02 June 2009 (has links)
Several methods have been developed previously for estimating cumulative energy production and plutonium production from graphite-moderated reactors. The Graphite Isotope Ratio Method (GIRM) is one well-known technique. This method is based on the measurement of trace isotopes in the reactor’s graphite matrix to determine the change in their isotopic ratios due to burnup. These measurements are then coupled with reactor calculations to determine the total plutonium and energy production of the reactor. To facilitate sensitivity analysis of these methods, a one-group cross section and fission product yield library for the fuel and graphite activation products has been developed for MAGNOX-style reactors. This library is intended for use in the ORIGEN computer code, which calculates the buildup, decay, and processing of radioactive materials. The library was developed using a fuel cell model in Monteburns. This model consisted of a single fuel rod including natural uranium metal fuel, magnesium cladding, carbon dioxide coolant, and Grade A United Kingdom (UK) graphite. Using this library a complete sensitivity analysis can be performed for GIRM and other techniques. The sensitivity analysis conducted in this study assessed various input parameters including 235U and 238U cross section values, aluminum alloy concentration in the fuel, and initial concentrations of trace elements in the graphite moderator. The results of the analysis yield insight into the GIRM method and the isotopic ratios the method uses as well as the level of uncertainty that may be found in the system results.
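The idea underlying isotope-ratio methods such as GIRM can be sketched with a one-group burnout relation: a trace isotope depletes as exp(-sigma*phi*t), so a measured ratio change yields the neutron fluence. This is a heavily simplified illustration with invented numbers, assuming the reference isotope does not burn; it is not the GIRM procedure or actual library data:

```python
import numpy as np

# One-group burnout: N(t) = N0 * exp(-sigma * phi * t), so the fluence
# (phi * t) follows from the change in a trace/reference isotope ratio.
BARN = 1.0e-24                      # cm^2
sigma = 1000.0 * BARN               # hypothetical one-group cross section
ratio_initial = 0.2480              # trace/reference ratio before irradiation
ratio_measured = 0.2105             # ratio measured in the graphite sample

fluence = -np.log(ratio_measured / ratio_initial) / sigma   # n/cm^2
print(f"inferred fluence ~ {fluence:.3e} n/cm^2")
```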
169

Combining smart model diagnostics and effective data collection for snow catchments

Reusser, Dominik E. January 2011 (has links)
Complete protection against flood risks by structural measures is impossible. Flood prediction is therefore important for flood risk management. Good explanatory power of flood models requires a meaningful representation of bio-physical processes, so there is great interest in improving the process representation. Progress in hydrological process understanding is achieved through a learning cycle whose first step is a critical assessment of an existing model for a given catchment. The assessment highlights deficiencies of the model, from which useful additional data requirements are derived, giving a guideline for new measurements. These new measurements may in turn lead to improved process concepts, which are finally summarized in an updated hydrological model. In this thesis I demonstrate such a learning cycle, focusing on the advancement of model evaluation methods and more cost-effective measurements. For a successful model evaluation, I propose that three questions should be answered: 1) When does a model reproduce observations in a satisfactory way? 2) If model results deviate, of what nature is the difference? 3) Which model components most likely cause these differences? To answer the first two questions, I developed a new method to assess the temporal dynamics of model performance (TIGER - TIme series of Grouped Errors). This method is powerful in highlighting recurrent patterns of insufficient model behaviour over long simulation periods. I answered the third question with an analysis of the temporal dynamics of parameter sensitivity (TEDPAS). Calculating TEDPAS requires an efficient method for sensitivity analysis; I used the Fourier Amplitude Sensitivity Test, which has a smart sampling scheme. Combining TIGER and TEDPAS provided a powerful tool for model assessment. With WaSiM-ETH applied to the Weisseritz catchment as a case study, I found insufficient process descriptions for the snow dynamics and for the recession during dry periods in late summer and fall. Focusing on snow dynamics, the reasons for poor model performance can be a poor representation of snow processes in the model, poor data on snow cover, or both. To obtain an improved data set on snow cover, time series of snow height and temperature were collected with a cost-efficient method based on temperature measurements at multiple levels at each location. An algorithm was developed to simultaneously estimate snow height and cold content from these measurements; both are relevant quantities for spring flood forecasting. Spatial variability was observed at the local and the catchment scale with an adjusted sampling design. At the local scale, samples were collected on two perpendicular transects of 60 m length and analysed with geostatistical methods. The range determined from fitted theoretical variograms was within the range of the sampling design for 80% of the plots. No patterns were found that would explain the random variability and spatial correlation at the local scale. At the watershed scale, locations for the extensive field campaign were selected according to a stratified sampling design to capture the combined effects of elevation, aspect, and land use. Snow height is mainly affected by plot elevation; the expected influence of aspect and land use was not observed.
To better understand the deficiencies of the snow module in WaSiM-ETH, the same conceptual approach, implemented as a simple degree-day model, was checked for its capability to reproduce the data (see the sketch after this entry). The degree-day model was capable of explaining the temporal variability for plots with a continuous snow pack over the entire snow season, if parameters were estimated for single plots. However, the processes described in the simple model are not sufficient to represent multiple accumulation-melt cycles, as observed in the lower catchment. Thus, the combined spatio-temporal variability at the watershed scale is not captured by the model. Further tests of improved concepts for the representation of snow dynamics at the Weißeritz are required. Based on the data, I suggest including at least rain-on-snow and redistribution by wind as additional processes to better describe the spatio-temporal variability. Alternatively, an energy balance snow model could be tested. Overall, the proposed learning cycle is a useful framework for targeted model improvement. The advanced model diagnostics are valuable for identifying model deficiencies and guiding field measurements. The additional data collected throughout this work help to gain a deepened understanding of the processes in the Weisseritz catchment. / Models for flood prediction and warning are based on a bio-physical representation of the relevant hydrological processes. Improving the description of these processes can enable more reliable predictions. To this end, the use of a learning cycle is proposed, consisting of a critical assessment of an existing model, the collection of additional data, the formation of a deepened understanding, and a revision of the model. This thesis takes up such a learning cycle, with the focus on improved model analysis and more cost-efficient measurements. For a successful model assessment, three questions have to be answered: 1) When does a model (not) reproduce the observed values in a satisfactory way? 2) How can the deviations be characterised? And 3) which model components cause these deviations? To answer the first two questions, a new method for assessing the temporal course of model performance is presented. An important strength is that recurring patterns of insufficient model performance can easily be identified even for long simulation runs. The third question is answered by analysing the temporal course of parameter sensitivity. Combining the two methods to answer all three questions provides a comprehensive toolbox for the analysis of hydrological models. As a case study, WaSiM-ETH was used to model the catchment of the Wilde Weißeritz. The model analysis showed that the descriptions of the snow dynamics and of the recession during dry periods in late summer and autumn are not suitable for the processes at the Weißeritz. The collection of additional data for a better understanding of the snow dynamics forms the next step in the learning cycle. Data on snow temperatures and snow heights were collected with a new, low-cost method: the temperature was measured at several distances above the ground at each site and converted into snow height and cold content with a new algorithm. Snow height and cold content are important quantities for the prediction of spring floods.
The spatial variability of the snow cover at the catchment scale was investigated with a sampling stratified by land use, elevation zone, and aspect; only the influence of elevation could be demonstrated, while aspect and land use had no statistically significant influence. To better understand the deficits of the WaSiM-ETH snow module in describing the processes of the Weißeritz catchment, the same conceptual approach was used as a standalone small model to reproduce the dynamics in the snow data. While this degree-day model was able to reproduce the temporal course for areas with a continuous snow cover, it could not capture the dynamics for areas with several accumulation and melt cycles in the lower catchment. Suggestions for improving the model are made in the thesis. In summary, the learning-cycle concept has proven useful for working towards targeted model improvement. The differentiated model diagnosis is valuable for identifying deficits in the model concept. The data collected during this study are suitable for gaining an improved understanding of the snow processes at the Weißeritz.
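A minimal sketch of the degree-day concept tested above: snow accumulates below a temperature threshold and melts proportionally to degrees above it. The degree-day factor, threshold, and forcing series below are invented, not the parameters fitted for the Weisseritz plots:

```python
import numpy as np

def degree_day_swe(temp_c, precip_mm, ddf=3.0, t_thresh=0.0):
    """Daily snow water equivalent (mm) from temperature and precipitation.
    ddf: degree-day melt factor (mm per degree C per day)."""
    swe, out = 0.0, []
    for t, p in zip(temp_c, precip_mm):
        if t <= t_thresh:
            swe += p                            # precipitation falls as snow
        melt = ddf * max(t - t_thresh, 0.0)     # degree-day melt
        swe = max(swe - melt, 0.0)
        out.append(swe)
    return np.array(out)

# Invented daily forcing: one accumulation-melt cycle
temps = np.array([-3.0, -1.0, 0.5, 2.0, -2.0, 4.0, 6.0])
precip = np.array([5.0, 8.0, 0.0, 2.0, 6.0, 0.0, 0.0])
print(degree_day_swe(temps, precip))
```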
170

Block stability analysis using deterministic and probabilistic methods

Bagheri, Mehdi January 2011 (has links)
This thesis presents a discussion of design tools for analysing block stability around a tunnel. First, it was determined that joint length and field stress have a significant influence on estimates of block stability. The results of calculations using methods based on kinematic limit equilibrium (KLE) were compared with the results of filtered DFN-DEM, which are closer to reality. The comparison shows that none of the KLE approaches (conventional, limited joint length, limited joint length with stress, and probabilistic KLE) could provide results similar to DFN-DEM. This is due to KLE's unrealistic assumptions in estimating either volume or clamping forces. A simple mechanism for estimating clamping forces, such as continuum mechanics or the solution proposed by Crawford-Bray, leads to an overestimation of clamping forces and thus unsafe design. The results of such approaches were compared to those of DEM, and it was determined that these simple mechanisms ignore a key stage: the relaxation of clamping forces due to the existence of the joint. The amount of relaxation is a function of many parameters, such as the stiffness of the joint and the surrounding rock, the joint friction angle, and the block half-apical angle. Based on a conceptual model, this key stage was incorporated into a new analytical solution for symmetric blocks, and the amount of joint relaxation was quantified. The results of the new analytical solution were compared to those of DEM, and the model uncertainty of the new solution was quantified. Further numerical investigations based on local and regional stress models were performed to study the initial clamping forces. The numerical analyses reveal that local stresses, which are a product of regional stress and joint stiffness, govern block stability. Models with a block assembly show that the clamping forces in a block assembly are equal to the clamping forces in a regional stress model. Therefore, considering a single block in massive rock results in lower clamping forces, and thus safer design, compared to a block assembly under the same in-situ stress and properties. Furthermore, a sensitivity analysis was conducted to determine the most important parameter by assessing sensitivity factors and studying the applicability of the partial coefficient method for designing block stability. The governing parameter was determined to be the dispersion of the half-apical angle. For a dip angle with high dispersion, the partial factors become very large and the design value for clamping forces approaches zero. This suggests that in cases with a high dispersion of the half-apical angle, the clamping forces could be ignored in a stability analysis, unlike in cases with lower dispersion. The costs of gathering more information about the joint dip angle could be compared to the costs of overdesign. The use of partial factors is uncertain, at least without dividing the problem into sub-classes. The application of partial factors is possible in some circumstances but not always, and a FORM analysis is preferable.
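A toy illustration of why the dispersion of the half-apical angle dominates such a reliability analysis: a Monte Carlo estimate of failure probability for a generic symmetric-block limit state. The limit state, the distributions, and all numbers are placeholders, not the thesis's formulation:

```python
import numpy as np

# Generic limit state g = 2*N*tan(phi - alpha) - W: a Crawford-Bray-type
# frictional resistance from the clamping force N minus the block weight W.
# All values are invented; the point is only to show how the spread of the
# half-apical angle alpha drives the failure probability.
rng = np.random.default_rng(42)
n = 200_000
W = 60.0                                        # block weight, kN

def failure_probability(alpha_std_deg):
    alpha = np.deg2rad(rng.normal(30.0, alpha_std_deg, n))  # half-apical angle
    phi = np.deg2rad(rng.normal(45.0, 2.0, n))              # joint friction angle
    N = rng.normal(200.0, 20.0, n)                          # clamping force, kN
    g = 2.0 * N * np.tan(phi - alpha) - W
    return np.mean(g < 0.0)

for std in (1.0, 5.0):
    print(f"alpha std = {std} deg -> P(failure) ~ {failure_probability(std):.4f}")
```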
