  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
461

Luteria composicional de algoritmos pós-tonais / Compositional lutherie of post-tonal algorithms

Soares, Guilherme Rafael 30 March 2015 (has links)
FAPEMIG - Fundação de Amparo à Pesquisa do Estado de Minas Gerais / This research systematizes a catalog of experiments consisting of musical studies and their generative algorithms, organizing procedures for computer-aided composition guided by rules derived from analyses of post-tonal music. The procedures are inspired by studies of post-tonality in the music of the composer Béla Bartók, found in the works of Lendvai (1971), Antokoletz (1984), Cohn (1991), and Suchoff (2004). The main focus is on interval cycles, axes of symmetry, polymodalism, and the peculiarities of referential pitch-class collections, as suggested by Forte (1973), Straus (2004), and Susanni and Antokoletz (2012). The computational questions of the implementation are detailed, building on the OpenMusic environment and the Python library Music21. An open-source codebase remains available for possible continuations of this work.
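The interval cycles and axes of symmetry mentioned in the abstract are standard pitch-class constructions. The following is a minimal sketch in plain Python (not the thesis's OpenMusic/Music21 code), treating pitch classes as integers mod 12.

```python
def interval_cycle(start_pc, interval):
    """Generate the interval cycle starting at start_pc (mod 12),
    e.g. the IC3 cycle from C yields a diminished-seventh collection."""
    cycle, pc = [], start_pc % 12
    while True:
        cycle.append(pc)
        pc = (pc + interval) % 12
        if pc == cycle[0]:
            return cycle

def axis_partners(axis_sum):
    """Pitch-class pairs symmetric about an axis: all (a, b)
    with a + b congruent to axis_sum (mod 12)."""
    return [(a, (axis_sum - a) % 12) for a in range(12)]

print(interval_cycle(0, 3))   # IC3 cycle from C
print(axis_partners(0)[:4])   # pairs symmetric about the C axis
```

The same arithmetic underlies Bartók-style axis analysis: each of the twelve axis sums partitions the aggregate into six symmetric dyads.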
462

The basics of set theory - some new possibilities with ClassPad

Paditz, Ludwig 20 March 2012 (has links)
No description available.
463

Non-deterministic analysis of slope stability based on numerical simulation

Shen, Hong 29 June 2012 (has links)
In geotechnical engineering, uncertainties such as the variability inherent in geotechnical properties have attracted increasing attention from researchers and engineers, who have found that a single "factor of safety" calculated by traditional deterministic methods cannot represent slope stability exactly. Recently, to provide a more rational mathematical framework for incorporating different types of uncertainty into slope stability estimation, reliability analyses and non-deterministic methods, including probabilistic and non-probabilistic (imprecise) methods, have been widely applied. In short, non-deterministic slope analysis combines a probabilistic or non-probabilistic analysis with a deterministic slope stability analysis. It is not a completely new stability analysis method but an extension of deterministic analysis, and the slope failure probability it yields complements the factor of safety. The accuracy of a non-deterministic analysis therefore depends not only on selecting a suitable probabilistic or non-probabilistic method but also on adopting a rigorous deterministic analysis method and geological model. In this thesis, reliability concepts are reviewed first, and several typical non-deterministic methods, including Monte Carlo Simulation (MCS), the First Order Reliability Method (FORM), the Point Estimate Method (PEM), and Random Set Theory (RST), are described and successfully applied to slope stability analysis based on a numerical simulation technique, the Strength Reduction Method (SRM). All computations were performed in the commercial finite difference code FLAC and the distinct element code UDEC. First, as the foundation of slope reliability analysis, the deterministic numerical simulation method was improved.
This method is more accurate than conventional limit equilibrium methods because the constitutive relationship of the soil is considered and fewer assumptions about the boundary conditions of the slope model are needed. However, constructing slope numerical models, particularly large and complicated ones, has always been difficult and has become an obstacle to applying numerical simulation. In this study, the spatial analysis capabilities of Geographic Information System (GIS) techniques were introduced to support numerical modeling of slopes. In the modeling process, the topographic map of the slope was gridded using GIS software, and the GIS data were then transferred into FLAC through the program's built-in language FISH. The feasibility and efficiency of this technique are illustrated through a case study, the Xuecheng slope, for which both 2D and 3D models were investigated. Subsequently, the three most widely used probabilistic methods, Monte Carlo Simulation, the First Order Reliability Method, and the Point Estimate Method, were studied in combination with the Strength Reduction Method. Monte Carlo Simulation, which requires repeating the deterministic analysis thousands of times, is the most accurate probabilistic method, but it is too time-consuming for practical applications, especially when combined with numerical simulation. To reduce the computational effort, a simplified Monte Carlo Simulation-Strength Reduction Method (MCS-SRM) was developed in this study. It first estimates the probable failure of the slope and the mean safety factor from the soil parameters, and then calculates the variance of the safety factor and the reliability of the slope according to an assumed probability density function of the safety factor.
Case studies confirmed that this method can reduce computation time by about four fifths compared with the traditional MCS-SRM while maintaining almost the same accuracy. The First Order Reliability Method is an approximate method based on the Taylor series expansion of the performance function. A closed-form solution of the partial derivatives of the performance function is needed to calculate the mean and standard deviation of the safety factor; however, numerical simulation provides no explicit performance function, so in this study the derivatives were approximated by equivalent difference quotients. The Point Estimate Method is also an approximate method, involving even fewer calculations than FORM; here it was integrated with the Strength Reduction Method directly. Another important observation concerns the correlation between the soil parameters cohesion and friction angle. Some authors have found a negative correlation between the cohesion and friction angle of soil on the basis of experimental data, yet few probabilistic slope studies in the literature consider it. In this thesis, the influence of this correlation on the slope's probability of failure was investigated using the numerical simulation method. It was found that accounting for a negative correlation between cohesion and friction angle reduces the variability of the safety factor and the failure probability of the slope, thus increasing the reliability of the results. Besides being inter-correlated, soil parameters are always auto-correlated in space, which is described as spatial variability. Because knowledge of this characteristic is rather limited in the literature, it is ignored by most researchers and engineers in geotechnical practice.
In this thesis, the random field method was introduced into slope numerical simulation to represent the spatial variability structure, and a numerical procedure for probabilistic slope stability analysis based on Monte Carlo simulation was presented. Soil properties such as cohesion and friction angle were discretized into continuous random fields using the local averaging method. In the case study, both stationary and non-stationary random fields were investigated, and the influence of spatial variability and of the averaging domain on the convergence of the numerical simulation and on the probability of failure was studied. In rock media, structural faces strongly influence slope stability, and in the distinct element method the rock mass can be modeled as a combination of rigid or deformable blocks with joints. Many more input parameters, such as joint strengths, are then required for the rock slope model, which increases the uncertainty of the numerical results. Furthermore, because of the limitations of current laboratory and in-situ tests, exact values of the geotechnical parameters of rock material, and even the probability distributions of these variables, are often unavailable; engineers can usually only estimate intervals for them from limited tests or expert experience. In this study, to assess the reliability of rock slopes, a Random Set Distinct Element Method (RS-DEM) was developed by coupling Random Set Theory with the Distinct Element Method and applied to a rock slope in Sichuan Province, China.
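The combination of Monte Carlo sampling with negatively correlated soil parameters described in the abstract can be sketched as follows. This is an illustrative stand-in, not the thesis's FLAC/UDEC workflow: the factor-of-safety function below is a textbook infinite-slope formula with hypothetical parameter values, replacing the strength-reduction computation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical soil statistics: mean/std of cohesion (kPa) and friction
# angle (deg), with a negative cross-correlation as reported experimentally.
mean = np.array([15.0, 30.0])
std = np.array([2.0, 3.0])
rho = -0.5
cov = np.array([[std[0] ** 2, rho * std[0] * std[1]],
                [rho * std[0] * std[1], std[1] ** 2]])

def factor_of_safety(c, phi_deg):
    """Stand-in for the strength-reduction computation: infinite-slope
    formula for a 35-degree slope, unit weight 20 kN/m^3, depth 5 m."""
    gamma, depth, beta = 20.0, 5.0, np.radians(35.0)
    phi = np.radians(phi_deg)
    return (c / (gamma * depth * np.sin(beta) * np.cos(beta))
            + np.tan(phi) / np.tan(beta))

samples = rng.multivariate_normal(mean, cov, size=100_000)
fs = factor_of_safety(samples[:, 0], samples[:, 1])
p_failure = np.mean(fs < 1.0)
print(f"mean FS = {fs.mean():.3f}, P(failure) = {p_failure:.4f}")
```

Setting `rho = 0.0` and rerunning shows the effect the thesis reports: removing the negative correlation widens the spread of the safety factor and raises the failure probability.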
464

Mnohost bytí: Ontologie Alaina Badioua / The Multiplicity of Being: The Ontology of Alain Badiou

Pivoda, Tomáš January 2012 (has links)
Tomáš Pivoda, The Multiplicity of Being: The Ontology of Alain Badiou. PhD thesis abstract. The thesis introduces, for the first time in the Czech philosophical context, the ontology of the French philosopher Alain Badiou, as set out in his fundamental work Being and Event (L'être et l'événement, 1988). It first presents the starting point of Badiou's philosophy and the reasons for his identification of ontology with set theory, and it points out Badiou's importance for contemporary philosophy, especially for the so-called speculative realism around Quentin Meillassoux. The main axis of the exposition is then built around Badiou's four fundamental "Ideas": multiplicity, the event, truths, and the subject. In connection with these, it is shown how Badiou constructs his conceptual apparatus out of the individual axioms of set theory, following the basic formal definition of multiplicity based on the membership operator ∈. In connection with the first Idea, multiplicity, the thesis expounds, with references to Martin Heidegger and Plato, Badiou's conceptual transposition of the couple one/multiple onto the couple existence/being, and defines the fundamental concepts of his ontology, the situation, presentation, representation, and the void, with whose help Badiou interprets...
465

Void Evolution and Cosmic Star Formation

Wasserman, Joel January 2023 (has links)
The rate at which stars have formed throughout the history of the universe is not constant: it started out slow, increased until around redshift ∼2, and then slowed again. The reason for this behaviour is still being investigated with various models and simulations, usually based on dark matter halos. The aim of this study is instead to investigate whether there is a correlation between the cosmic star formation rate and the evolution of cosmic voids. This is achieved by comparing the total mass flow out of voids with the amount of matter forming stars. A simple model of void mass flow is created and compared with observational data on star formation. The model is shown to exhibit the same behaviour as the star formation rate, indicating that there is indeed a correlation between void evolution and star formation. This suggests it would be fruitful to create a more involved, alternative model of star formation based on void evolution rather than the common halo evolution.
466

Entwicklung und Anwendung eines Softwaresystems zur Simulation des Wasserhaushalts und Stofftransports in variabel gesättigten Böden / Development and application of a software system for simulating the water balance and solute transport in variably saturated soils

Blankenburg, René 29 April 2020 (has links)
The soil zone, often referred to in the literature as the root zone, aeration zone, or unsaturated zone, is characterized by variably water-saturated conditions and plays an important role in many disciplines. From the groundwater perspective, it is a protection and buffer zone against surface environmental influences, in which infiltrating pollutants can be retarded, partially or completely degraded, or transformed into other substances by the transport, degradation, and sorption processes taking place there, thereby preventing contamination of the groundwater.
To estimate potential threats to groundwater from a contaminated site or a damage event, a leachate forecast is required in Germany under the Federal Soil Protection Act (BBodSchG) and the Federal Soil Protection Ordinance (BBodSchV). Here, the unsaturated zone takes on the function of the source and transport term for the pollutant: the source term describes the temporal discharge of pollutants from the contaminant source with the leachate, while the transport term describes the pathway in the soil from the ground surface to the groundwater table. The requirements and tasks of the BMBF-funded research project "Prognose des Schadstoffeintrags in das Grundwasser mit dem Sickerwasser" (SiWaP; prognosis of pollutant input into groundwater with leachate) motivated the development of the PCSiWaPro program. Within the project, it should become possible to carry out a model-based leachate forecast with little effort while incorporating the SiWaP research results. Commercially available software was ruled out because it does not allow the implementation of custom processes, databases, and parameters. At the same time, a comprehensive treatment of the processes involved was required, as was documentation of the input and output data for verification purposes. This led to the development of a graphical user interface (GUI) with a wizard that guides the user in five sequential steps to a physically based result, including a protocol. All required inputs are pre-assigned sensible default values and checked for plausibility when changed by the user. At the same time, the functionality is not limited to the wizard: all options of the numerical simulation remain available to the experienced modeler. Documentation of the input and output data is ensured through the use of databases.
The GUI is implemented in multiple languages for use in engineering offices, public authorities, and international projects. These requirements justified the development of a simulation system able to compute the water balance and solute transport in unsaturated soils even under complex conditions. PCSiWaPro, which emerged from the BMBF joint project SiWaP mentioned above, became an integral part of subsequent research projects, whose results fed into the program's further development and extended its fields of application beyond the leachate forecast. Required input data such as soil hydraulic and solute transport parameters are often subject to uncertainty or can only be given as value ranges. To express such imprecision in the results of numerical simulations, fuzzy set theory was used, which assigns uncertainties via so-called α-cuts: for each imprecise parameter, its range of variation can be defined and taken into account in the simulation. The imprecision in the result is reported as the minimum and maximum of the computed quantity (pressure head, concentration). Several example applications treat the problems presented in the thesis using PCSiWaPro.
The thesis also gives an outlook on further research and development needs derived from the results obtained and from the demands of daily practice. Contents: List of figures; List of tables; List of abbreviations; List of symbols; 1 Introduction; 2 Water balance computation in variably saturated porous media (2.1 Governing equation; 2.2 Numerical solution); 3 Transport and transformation processes (3.1 Conservation equation; 3.2 Transport processes; 3.3 Transformation processes; 3.4 Basic solute transport equation in PCSiWaPro; 3.5 Numerical solution); 4 Development of the PCSiWaPro program (4.1 Software architecture; 4.2 Database concept; 4.3 Preprocessing user interface; 4.4 Result visualization and postprocessing; 4.5 Parallelization of the computation kernel; 4.6 Dual porosity after DURNER; 4.7 Flow boundary condition as a time-variable polygon function; 4.8 Consideration of uncertainties in the input data); 5 Example applications (5.1 Flow through a dike; 5.2 Model-based leachate forecast with imprecise input data; 5.3 Parallelization test on a synthetic example; 5.4 Summary of example applications); 6 Summary and outlook; 7 Bibliography; 8 Appendix
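The α-cut treatment of imprecise parameters described above can be sketched independently of PCSiWaPro. The following is a hypothetical illustration, assuming symmetric triangular fuzzy numbers and a stand-in seepage model q = K·i; the minimum and maximum of the output over each α-cut are found by a simple grid scan.

```python
import numpy as np

def alpha_cut(peak, spread, alpha):
    """α-cut of a symmetric triangular fuzzy number: the interval of all
    values with membership >= alpha (alpha in (0, 1])."""
    half = spread * (1.0 - alpha)
    return peak - half, peak + half

def propagate(model, fuzzy_params, alpha, n_grid=21):
    """Propagate fuzzy parameters through `model` at one α-level by scanning
    each α-cut interval on a grid and reporting the output's min and max."""
    grids = [np.linspace(*alpha_cut(p, s, alpha), n_grid) for p, s in fuzzy_params]
    mesh = np.meshgrid(*grids)
    out = model(*mesh)
    return float(out.min()), float(out.max())

# Hypothetical stand-in model: steady seepage flux q = K * i, with saturated
# hydraulic conductivity K (m/d) and gradient i both known only imprecisely.
model = lambda K, i: K * i
fuzzy = [(0.5, 0.2), (0.1, 0.05)]   # (peak, spread) for K and i

for alpha in (1.0, 0.5, 1e-9):
    lo, hi = propagate(model, fuzzy, alpha)
    print(f"alpha={alpha:.2f}: q in [{lo:.4f}, {hi:.4f}]")
```

At α = 1 the cuts collapse to the peaks and the result is crisp; as α decreases, the output interval widens, which is exactly how the min/max bands on pressure head or concentration arise.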
467

Hybrid Zonotopes: A Mixed-Integer Set Representation for the Analysis of Hybrid Systems

Trevor John Bird (13877174) 29 September 2022 (has links)
Set-based methods have been leveraged in many engineering applications, from robust control and global optimization to probabilistic planning and estimation. While useful, these methods have most widely been applied to convex sets, owing to their ease of representation and calculation. The representation and analysis of nonconvex sets is inherently complex. When nonconvexity arises in design and control applications, the nonconvex set is often over-approximated by a convex set to provide conservative results. However, the level of conservatism may be large and difficult to quantify, often leading to trivial results and requiring repetitive analysis by the engineer. Nonconvexity is inherent and unavoidable in many applications, such as the analysis of hybrid systems and robust safety constraints.
In this dissertation, I present a new nonconvex set representation named the hybrid zonotope. The hybrid zonotope combines recent advances in the compact representation of convex sets from the controls literature with methods leveraged in solving mixed-integer programming problems. It is shown that the hybrid zonotope is equivalent to the union of an exponential number of convex sets while using a linear number of continuous and binary variables in its representation. I provide identities for, and derivations of, the set operations of hybrid zonotopes for linear mappings, Minkowski sums, generalized intersections, halfspace intersections, Cartesian products, unions, complements, point containment, set containment, support functions, and convex enclosures. I also provide methods for redundancy removal and order reduction to improve the compactness and computational efficiency of the represented sets, thereby demonstrating the hybrid zonotope's expressive power and applicability to many nonconvex set-theoretic methods.
Beyond basic set operations, I show how the exact forward and backward reachable sets of linear hybrid systems may be found using identities that are computed algebraically and scale linearly. Numerical examples show the scalability of the proposed methods and how they may be used to verify the safety and performance of complex systems. These exact methods may also be used to evaluate the level of conservatism of the existing approximate methods in the literature.
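The linear-map and Minkowski-sum identities mentioned above follow directly from the hybrid zonotope's generator-constraint form. Below is a minimal sketch based on the standard definition from the hybrid zonotope literature; the class and method names are illustrative, not the dissertation's code.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class HybridZonotope:
    """Z = { c + Gc xc + Gb xb : ||xc||_inf <= 1, xb in {-1, 1}^nb,
             Ac xc + Ab xb = b }"""
    Gc: np.ndarray  # continuous generators, n x ng
    Gb: np.ndarray  # binary generators, n x nb
    c: np.ndarray   # center, n
    Ac: np.ndarray  # constraint matrices, nc x ng
    Ab: np.ndarray  # nc x nb
    b: np.ndarray   # nc

    def linear_map(self, R):
        """Image R @ Z: map generators and center; constraints unchanged."""
        return HybridZonotope(R @ self.Gc, R @ self.Gb, R @ self.c,
                              self.Ac, self.Ab, self.b)

    def minkowski_sum(self, other):
        """Z1 + Z2: concatenate generators, stack constraints block-diagonally."""
        Gc = np.hstack([self.Gc, other.Gc])
        Gb = np.hstack([self.Gb, other.Gb])
        c = self.c + other.c
        Ac = np.block([[self.Ac, np.zeros((self.Ac.shape[0], other.Ac.shape[1]))],
                       [np.zeros((other.Ac.shape[0], self.Ac.shape[1])), other.Ac]])
        Ab = np.block([[self.Ab, np.zeros((self.Ab.shape[0], other.Ab.shape[1]))],
                       [np.zeros((other.Ab.shape[0], self.Ab.shape[1])), other.Ab]])
        b = np.concatenate([self.b, other.b])
        return HybridZonotope(Gc, Gb, c, Ac, Ab, b)
```

Both operations only rearrange matrices, which is why they are algebraic and scale linearly in the representation size, even though the represented set is a union of exponentially many convex pieces.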
468

Application of new shape descriptors and theory of uncertainty in image processing / Примена нових дескриптора облика и теорије неодређености у обради слике / Primena novih deskriptora oblika i teorije neodređenosti u obradi slike

Ilić Vladimir 20 December 2019 (has links)
The doctoral thesis studies quantitative aspects of shape attributes suitable for numerical characterization, i.e., shape descriptors, as well as the theory of uncertainty, particularly the theory of fuzzy sets, and their application in image processing. The original contributions and results of the thesis fall naturally into two groups, according to the approaches used to obtain them. The first group of contributions relates to the introduction of new shape descriptors (hexagonality and fuzzy squareness) and associated measures that evaluate the extent to which a given shape satisfies these properties. The introduced measures are naturally defined, theoretically well-founded, and satisfy most of the desirable properties expected of any well-defined shape measure. To mention some of them: both range through (0,1] and achieve the largest possible value 1 if and only if the shape considered is a hexagon or a fuzzy square, respectively; there is no non-zero-area shape whose measured hexagonality or fuzzy squareness equals 0; both measures are invariant to similarity transformations; and both give results consistent with theoretically proven properties as well as with human perception and expectation. Numerous experiments on synthetic and real examples illustrate the theoretical considerations and provide clearer insight into the behaviour of the introduced measures. Their advantages and applicability are illustrated in various object recognition and classification tasks on several well-known and frequently used image datasets. In addition, the doctoral thesis contains research on the application of the theory of uncertainty, more narrowly fuzzy set theory, to different tasks of image processing and shape analysis.
We distinguish between tasks relating to the extraction of shape features and those relating to improving the performance of different image processing and image analysis techniques. Regarding the first group, we apply fuzzy set theory to introduce a new fuzzy shape descriptor, named fuzzy squareness, and to measure how fuzzy square a given fuzzy shape is. In the second group, we study improving the performance of estimates of both the Euclidean distance transform in three dimensions (3D EDT) and the centroid distance signature of a shape in two dimensions. The improvement is reflected particularly in the achieved accuracy and precision, increased invariance to geometric transformations (e.g., rotation and translation), and robustness in the presence of noise and uncertainty arising from imperfect devices or imaging conditions. The latter also relates to the second group of original contributions of the thesis, which is motivated by the fact that shape analysis traditionally assumes that the objects in an image have previously been uniquely and crisply extracted from it. This is usually achieved by sharp (i.e., binary) segmentation of the original image, where the decision on whether an image point belongs to an imaged object is made in a crisp manner. Nevertheless, owing to imperfect imaging conditions or devices, the presence of noise, and various types of imprecision (e.g., the absence of a precise object boundary or of clear boundaries between objects, computational errors, lack of information, etc.), different levels of uncertainty and vagueness may arise when deciding on the membership of an image point.
This is particularly noticeable when the continuous image domain is discretized (i.e., sampled) and a single image element, associated with its sample point, is covered by multiple objects in the image. Such segmentation can therefore lead to wrong decisions about the membership of image points and, consequently, to irreversible loss of information about the imaged objects. This stems from the fact that binary segmentation does not allow an image point to belong to an object to some degree, risking that points partially contained in an object before segmentation are not assigned to it afterwards. If, instead of binary segmentation, membership is assigned gradually rather than crisply, allowing a point to belong to an object to some extent, then a sharp membership decision can be avoided at this early step of the analysis, so that a potentially large amount of object information is preserved after segmentation and used in the subsequent analysis steps. In this regard, we are interested in one specific type of fuzzy segmentation, named coverage image segmentation, which results in a fuzzy digital image representation where the membership value assigned to each image element is proportional to its relative coverage by a continuous object present in the original image.
In this thesis, we study the coverage digitization model that provides the coverage digital image representation, and we show how significant improvements in estimating the 3D EDT, as well as the centroid distance signature of a continuous shape, can be achieved if the coverage information available in this type of image representation is appropriately taken into account.</p> / <p>The doctoral dissertation studies quantitative aspects of shape attributes suitable for numerical characterization, that is, shape descriptors, as well as the theory of uncertainty, in particular fuzzy set theory, and their application in image processing. The original contributions and results of the thesis fall naturally into two groups, according to the approach and methodology used to obtain them. The first group of contributions concerns the introduction of new shape descriptors (hexagonality and fuzzy squareness), together with corresponding measures that numerically evaluate the extent to which a considered shape satisfies the considered property. The introduced measures are naturally defined, theoretically well founded, and satisfy most of the desirable properties that every well-defined shape measure should satisfy. To mention some of them: both measures take values in the interval (0,1] and attain the largest possible value 1 if and only if the considered shape is a hexagon or a fuzzy square, respectively; there is no shape of non-zero area whose measured hexagonality or fuzzy squareness equals 0; both measures are invariant under similarity transformations; and they give results that agree with theoretically proven results as well as with human perception and expectations. Numerous experiments on synthetic and real examples are presented in order to illustrate the theoretically proven considerations and to provide clearer insight into the behaviour of the introduced measures. Their advantage and usefulness are illustrated in various recognition and classification tasks on images of objects from several well-known and widely used image databases. 
In addition, the doctoral thesis contains research related to the application of the theory of uncertainty, more specifically fuzzy set theory, in various image processing and shape analysis tasks. We distinguish between tasks relating to the extraction of shape features and tasks relating to improving the performance of different image processing and image analysis techniques. In the first group of tasks, we apply fuzzy set theory to define a new fuzzy shape descriptor, named fuzzy squareness, and to measure how fuzzy-square a considered fuzzy shape is. In the second group of tasks, we study how to improve the performance of estimates of the Euclidean distance transform of an image in three dimensions (3D EDT), as well as of the centroid distance signature of a continuous shape in two dimensions. The improvement is reflected particularly in the achieved accuracy and precision of the estimates, in increased invariance to rotation and translation of the object, and in robustness in the presence of noise and of uncertainty caused by the imperfection of devices or imaging conditions. These last results also belong to the second group of original contributions of the thesis, which are motivated by the fact that shape analysis traditionally assumes that the objects in an image have previously been uniquely and crisply extracted from it. Such extraction is usually achieved by a sharp (i.e., binary) segmentation of the original image, where the decision on the membership of a point in an imaged object is made unambiguously. However, due to the imperfections of imaging conditions or devices, the presence of noise, and various types of imprecision (for example, the absence of a precise object boundary or of clear boundaries between the objects themselves, errors in computation, lack of information, etc.), different levels of uncertainty and vagueness may occur in deciding on the membership of an image point. 
This is particularly visible when the continuous image domain is discretized (i.e., sampled) and an image element, associated with the corresponding sample point of the domain, may be partially covered by several objects in the image. In this respect, such segmentation can lead to wrong decisions on the membership of image points and hence to an irreversible loss of information about the objects in the image. This follows from the fact that segmentation performed in this way does not allow an image point to belong partially, to some extent, to a considered object, which in turn creates the risk that points partially contained in an object before segmentation will not be assigned to that object after segmentation. However, if binary segmentation is replaced by a segmentation in which the membership decision allows a point to belong partially to an object, then a binary decision on the membership of a point can be avoided at this early analysis step. Consequently, a potentially large amount of information about the objects in the image can be preserved after segmentation and used in the subsequent analysis steps. In this regard, of particular interest to us is a special kind of fuzzy image segmentation, segmentation based on the coverage of image elements, which provides a fuzzy digital image representation where the membership value assigned to each element is proportional to its relative coverage by a continuous object in the original image. 
In this thesis, we study the coverage digitization model that provides this kind of image representation, and we show how significant improvements can be achieved in estimating the 3D EDT, as well as the centroid distance signature of a continuous shape, if the coverage information available in this image representation is considered appropriately.</p>
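For reference, the crisp 3D EDT that serves as the baseline for the improvements described above can be stated very compactly: for every voxel, the distance to the nearest foreground voxel. The sketch below computes it by brute force on a tiny volume purely for illustration; the thesis's contribution, using coverage values for subvoxel accuracy, is not reproduced here, and the function name is my own.

```python
import numpy as np

def edt3d_bruteforce(fg):
    """Exact (brute-force) 3D Euclidean distance transform of a binary volume.

    fg: boolean array; True marks foreground voxels. Returns, for every voxel,
    the Euclidean distance (in voxel units) to the nearest foreground voxel.
    Quadratic in the number of voxels -- suitable only for tiny examples.
    """
    pts = np.argwhere(fg)                         # (m, 3) foreground coords
    grid = np.indices(fg.shape).reshape(3, -1).T  # (n, 3) all voxel coords
    d2 = ((grid[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
    return np.sqrt(d2.min(axis=1)).reshape(fg.shape)

# One foreground voxel at the centre of a 7x7x7 volume
vol = np.zeros((7, 7, 7), dtype=bool)
vol[3, 3, 3] = True
edt = edt3d_bruteforce(vol)

print(edt[3, 3, 3])                   # → 0.0
print(edt[3, 3, 0])                   # → 3.0
print(round(float(edt[0, 0, 0]), 4))  # → 5.1962  (sqrt(27))
```

A crisp EDT of this kind is inherently limited to whole-voxel accuracy, which is precisely why a coverage-aware estimate, as studied in the thesis, can improve precision near object boundaries.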
469

Modelling of input data uncertainty based on random set theory for evaluation of the financial feasibility for hydropower projects / Modellierung unscharfer Eingabeparameter zur Wirtschaftlichkeitsuntersuchung von Wasserkraftprojekten basierend auf Random Set Theorie

Beisler, Matthias Werner 24 August 2011 (has links) (PDF)
The design of hydropower projects requires a comprehensive planning process in order to achieve the objective of maximising both the exploitation of the existing hydropower potential and the future revenues of the plant. For this purpose, and to satisfy the approval requirements for a complex hydropower development, it is imperative at the planning stage that the conceptual development contemplate a wide range of influencing design factors and ensure appropriate consideration of all related aspects. Since the majority of the technical and economic parameters required for the detailed and final design cannot be precisely determined at early planning stages, crucial design parameters such as design discharge and hydraulic head have to be examined through an extensive optimisation process. One disadvantage inherent in commonly used deterministic analysis is the lack of objectivity in the selection of input parameters. Moreover, it cannot be ensured that the entire existing parameter ranges and all possible parameter combinations are covered. Probabilistic methods utilise discrete probability distributions or parameter input ranges to cover the entire range of uncertainties resulting from the information deficit of the planning phase, and integrate them into the optimisation by means of an alternative calculation method. The investigated method assists with the mathematical assessment of uncertainties and their integration into the rational economic appraisal of complex infrastructure projects. The assessment includes an exemplary verification of the extent to which random set theory can be utilised to determine the input parameters relevant to the optimisation of hydropower projects, and evaluates possible improvements with respect to the accuracy and suitability of the calculated results.
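The random-set idea can be illustrated in a few lines: each uncertain input is given as a family of intervals with probability masses (focal sets), the intervals are propagated through the design formula, and lower/upper probability bounds (belief and plausibility) are read off the result. The sketch below is only a toy, assuming independence between the inputs; all numbers, names, and the 3.5 MW threshold are hypothetical and are not taken from the thesis, whose actual feasibility model is far richer.

```python
# Hypothetical focal sets (interval, probability mass) for two uncertain
# design parameters of a small hydropower plant -- illustrative values only.
discharge = [((8.0, 10.0), 0.6), ((9.0, 12.0), 0.4)]    # Q in m^3/s
head      = [((45.0, 50.0), 0.7), ((40.0, 48.0), 0.3)]  # H in m

def power_mw(q, h, eta=0.9, rho=1000.0, g=9.81):
    """Hydraulic power P = rho * g * Q * H * eta, converted to MW."""
    return rho * g * q * h * eta / 1e6

# Random-set propagation: since P increases in both Q and H, the image of a
# pair of focal intervals is again an interval, obtained from the endpoints;
# its mass is the product of the input masses (independence assumed).
focal_out = []
for (q_lo, q_hi), mq in discharge:
    for (h_lo, h_hi), mh in head:
        focal_out.append(((power_mw(q_lo, h_lo), power_mw(q_hi, h_hi)), mq * mh))

# Lower/upper probability that the plant reaches at least 3.5 MW:
threshold = 3.5
belief = sum(m for (lo, hi), m in focal_out if lo >= threshold)  # certainly met
plaus  = sum(m for (lo, hi), m in focal_out if hi >= threshold)  # possibly met
print(round(belief, 2), round(plaus, 2))  # → 0.28 1.0
```

Unlike a single deterministic estimate, the belief/plausibility pair makes the information deficit explicit: the wider the gap, the less the available data supports a sharp feasibility verdict.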
/ The design of hydropower plants is a complex planning process whose objective is to exploit the available hydropower potential as fully as possible and to maximise the plant's future economic revenues. To achieve this, and at the same time to ensure that a complex hydropower project can obtain approval, it is imperative to capture a large number of influencing factors relevant to the conceptual design and to take them sufficiently into account during the project planning phase. In early planning stages, most of the technical and economic parameters that are decisive for the detailed design cannot be determined exactly, so the governing design parameters of the plant, such as discharge and head, have to pass through an extensive optimisation process. One drawback of the customary deterministic calculation approaches is the usually insufficient objectivity in determining the input parameters, together with the fact that it cannot be guaranteed that the parameters are captured over their entire spread and in all governing parameter combinations. Probabilistic methods use input parameters in their statistical distribution, or in the form of ranges, with the aim of mathematically capturing the uncertainties that result from the unavoidable information deficit of the planning phase and incorporating them into the calculation by applying an alternative calculation method. The investigated approach helps to capture objectively, i.e. mathematically, the vagueness resulting from an information deficit in the economic appraisal of complex infrastructure projects and to incorporate it into the planning process. 
An assessment and exemplary verification is carried out of the extent to which the random set method can be applied in determining the input variables relevant to the optimisation process of hydropower plants, and of the improvements in the accuracy and significance of the calculation results that can thereby be achieved.
470

Modelling of input data uncertainty based on random set theory for evaluation of the financial feasibility for hydropower projects

Beisler, Matthias Werner 25 May 2011 (has links)
