451 |
Advanced methods and models in uncertainty for the order promising process in supply chain characterized by the lack of homogeneity in product
Grillo Espinoza, Hanzel 22 September 2019 (has links)
Thesis by compendium / The Lack of Homogeneity in the Product (LHP) appears in productive processes whose raw materials stem directly from nature and/or whose production operations confer heterogeneity on the characteristics of the outputs obtained, even when the inputs used are homogeneous. LHP appears in different sectors such as ceramic tiles, horticulture, marble and snacks, among others. LHP becomes a managerial problem when customers require homogeneous product. Supply chains responsible for providing homogeneous product face the need to include classification activities in their productive processes to obtain sub-lots of homogeneous product. Owing to the inherent LHP uncertainty, these homogeneous sub-lots will not be known until the product has been produced and classified. Improper management of the LHP can have a very negative impact on customer satisfaction, due to inconsistencies in the answers to their requirements, and also on the Supply Chain's efficiency.
The Order Promising Process (OPP) appears as a key element for properly managing the LHP, in order to ensure the matching of uncertain homogeneous supply with customer order proposals. The OPP refers to the set of business activities that are triggered to provide a response to customer orders. These activities are related to the acceptance/rejection decision and to the setting of delivery dates. For supply chains affected by the LHP, the OPP must consider homogeneity as another requirement in the answer to orders. Besides, due to the inherent LHP uncertainty, discrepancies between the real and planned homogeneous quantities may mean that previously committed orders cannot be served. The Shortage Planning (SP) process aims to find alternatives that minimise the negative impact on customers and the supply chain.
Considering LHP in the OPP brings a set of new challenging features to be addressed. The conventional approach of assuming homogeneity in the product for the master production schedule (MPS) and for the Available-To-Promise (ATP) quantities derived from it is no longer adequate. Instead, both the MPS and the ATP should be handled in terms of homogeneous sub-lots. Since the quantity of homogeneous product in the planned MPS lots is not known exactly until the classification activities have been performed, the ATP also inherits this uncertainty, bringing a new level of complexity. Non-homogeneous product cannot be accumulated in order to fulfil future incoming orders. Moreover, if the product handled is perishable, homogeneity management becomes considerably more complex, because the state of the product is dynamic and related variables, such as quality and price, can change over time. This situation can bring unexpected waste costs in addition to the shortages already mentioned. The perishability factor is itself another source of uncertainty associated with the LHP.
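To make the sub-lot view of ATP concrete, the short sketch below is an illustration only: the data layout, the names and the single-sub-lot promising rule are assumptions for the example, not the models developed in the thesis. It checks whether an order demanding a homogeneous quantity can be committed from ATP that is recorded per homogeneous sub-lot rather than as one aggregate figure.

```python
from dataclasses import dataclass

@dataclass
class SubLot:
    lot_id: str        # planned MPS lot this sub-lot belongs to
    period: int        # period in which the sub-lot becomes available
    quantity: float    # homogeneous quantity still available to promise

def promise_order(sub_lots, qty_required, due_period):
    """Return the sub-lot used to promise the order, or None.

    Because non-homogeneous quantities cannot be pooled, the whole requested
    quantity must come from a single homogeneous sub-lot that is available
    no later than the due period.
    """
    candidates = [s for s in sub_lots
                  if s.period <= due_period and s.quantity >= qty_required]
    if not candidates:
        return None                                      # reject or re-plan the order
    chosen = min(candidates, key=lambda s: s.quantity)   # waste as little as possible
    chosen.quantity -= qty_required                      # reserve the committed quantity
    return chosen

atp = [SubLot("L1", 1, 120.0), SubLot("L1", 1, 40.0), SubLot("L2", 2, 300.0)]
print(promise_order(atp, 100.0, due_period=1))   # served from the 120-unit sub-lot
print(promise_order(atp, 100.0, due_period=1))   # None: remaining sub-lots too small or too late
```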
This dissertation proposes a conceptual framework and different mathematical programming models and tools, in both deterministic and uncertain environments, to support the OPP and SP under the effect of the LHP. The aim is to provide a reliable commitment to customer orders, seeking a high service level not just in the due date and quantity but also in the homogeneity requirements. The modelling of the characteristics inherent to the LHP under a deterministic context is in itself one of the main contributions of this dissertation. Another novelty is the inclusion of uncertainty in the definition of homogeneous sub-lots, their quantities and their dynamic state and value. The uncertainty modelling approach proposed is mainly based on the application of fuzzy set theory and possibility theory.
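As a rough illustration of the possibilistic treatment of this uncertainty (the triangular membership function, the parameter values and the α-cut rule are assumptions for the example, not the membership functions used in the thesis), a homogeneous sub-lot quantity can be represented as a fuzzy number and an order checked against an α-cut of it.

```python
def triangular(a, m, b):
    """Membership function of a triangular fuzzy quantity (a, m, b)."""
    def mu(x):
        if x <= a or x >= b:
            return 0.0
        return (x - a) / (m - a) if x <= m else (b - x) / (b - m)
    return mu

def alpha_cut(a, m, b, alpha):
    """Interval of quantities whose possibility is at least alpha."""
    return (a + alpha * (m - a), b - alpha * (b - m))

# Assumed example: classification of a planned lot is expected to yield about
# 300 homogeneous units, with anything between 220 and 380 considered possible.
mu = triangular(220, 300, 380)
low, high = alpha_cut(220, 300, 380, alpha=0.8)
order_qty = 260

print("membership (possibility) of 260 homogeneous units:", mu(order_qty))
print(f"0.8-cut of the homogeneous supply: [{low:.0f}, {high:.0f}]")
print("pessimistic promise feasible:", order_qty <= low)    # guaranteed at level 0.8
print("optimistic promise feasible:", order_qty <= high)
```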
The proposed mathematical models and tools have been validated in real cases of SC, specifically in the ceramic tile sector for non perishables, and in the fruit sector for perishables. The results show a ... / La Falta de Homogeneidad en el Producto (LHP, por sus siglas del inglés ``Lack of Homogeneity in the Product'') aparece en procesos productivos con materias primas que derivan directamente de la naturaleza y/o procesos de producción con operaciones que confieren heterogeneidad a las características de los productos obtenidos, incluso cuando los insumos utilizados son homogéneos. La LHP aparece en diferentes sectores como la cerámica, horticultura, mármol, snacks, entre otros. Se convierte en un problema gerencial cuando los clientes requieren homogeneidad en el producto y las cadenas de suministro enfrentan la necesidad de incluir actividades de clasificación en sus procesos productivos para obtener sub-lotes de producto homogéneo. Debido a la incertidumbre inherente a la LHP, los sub-lotes homogéneos y su cantidad no serán conocidos hasta que el producto haya sido producido y clasificado. Una gestión inadecuada de la LHP puede tener un impacto muy negativo en la satisfacción de los clientes debido a inconsistencias en la respuesta a sus requerimientos y también en la eficacia de la Cadena de Suministro.
El Proceso de Comprometer de Pedido (OPP, por sus siglas del inglés ``Order Promising Process'') aparece como un elemento clave para gestionar adecuadamente la LHP, con el fin de asegurar la coincidencia entre el suministro incierto de producto homogéneo y las propuestas de pedido del cliente. El OPP se refiere al conjunto de actividades empresariales realizadas para proporcionar una respuesta a las órdenes de los clientes. Estas actividades están relacionadas con las decisiones de aceptación/rechazo, y establecimiento de fechas de entrega para las órdenes del cliente. En las cadenas de suministro afectadas por la LHP, el OPP debe considerar la homogeneidad como otro requisito adicional en la respuesta a los pedidos. Además, debido a la incertidumbre intrínseca de la LHP, las discrepancias entre las cantidades homogéneas reales y planificadas podrían provocar que las órdenes comprometidas anteriormente no puedan ser completadas debido a la escasez de producto. El proceso de planificación de la escasez (SP, por sus siglas del inglés "Shortage Planning") se encarga de encontrar alternativas para minimizar este impacto negativo en los clientes y la cadena de suministro.
Considerar la LHP dentro del OPP implica un conjunto nuevo de características desafiantes que deben ser abordadas. El enfoque convencional de asumir la homogeneidad en el producto para el programa maestro de producción (MPS, por sus siglas del inglés "Master Production Schedule") y las cantidades disponibles a comprometer (ATP, por sus siglas del inglés "Available-To-Promise") derivadas de él, no es adecuado. En cambio, tanto el MPS como el ATP deben manejarse en términos de sub-lotes homogéneos. Dado que la cantidad exacta de producto homogéneo de los lotes previstos en el MPS no se sabe exactamente hasta que se han realizado las actividades de clasificación, el ATP también hereda esta incertidumbre, trayendo un nuevo nivel de complejidad. El producto no homogéneo no se puede acumular para satisfacer futuras órdenes entrantes. Más aún, si el producto manipulado es perecedero, el manejo de la homogeneidad se vuelve mucho más complejo. Esto se debe a que el estado del producto es dinámico en el tiempo, y variables relacionadas como calidad, precio, etc., podrían también cambiar con el tiempo. Esta situación puede provocar costos inesperados de desperdicio aparte de la escasez ya mencionada. El factor de perecedero es en sí mismo otra fuente de incertidumbre asociada a la LHP.
Esta disertación propone un marco conceptual y diferentes modelos y herramientas de programación matemática, tanto en entornos deterministas como de incertidumbre, para apoyar al OPP y SP considerando el efecto de LHP. El objetivo es proporcionar un compromiso fiable con los pedidos de los clientes en busca de un alto nivel de servicio no s / La Falta d'Homogeneïtat en el Producte (LHP, per les seues sigles de l'anglés ''Lack of Homogeneity in the Product'') apareix en processos productius amb matèries primes que deriven directament de la natura i/o processos de producció amb operacions que conferixen heterogeneïtat a les característiques dels productes obtinguts, fins i tot quan les entrades utilitzades són homogènies . La LHP apareix en diferents sectors com la ceràmica, horticultura, marbre, snacks, entre altres. Es convertix en un problema gerencial quan els clients requereixen homogeneïtat en el producte i les cadenes de subministrament enfronten la necessitat d'incloure activitats de classificació en els seus processos productius per a obtindre sublots de producte homogeni. A causa de la incertesa inherent a la LHP, els sublots homogenis i la seua quantitat no seran coneguts fins que el producte haja sigut produït i classificat. Una gestió inadequada de la LHP pot tindre un impacte molt negatiu en la satisfacció dels clients degut a inconsistències en la resposta als seus requeriments i també en l'eficàcia de la Cadena de Subministrament.
El Procés de Comprometre Comandes (OPP, per les seues sigles de l'anglés ''Order Promising Process'') apareix com un element clau per a gestionar adequadament la LHP, a fi d'assegurar la coincidència entre el subministrament incert de producte homogeni i les propostes de comanda del client. L'OPP es refereix al conjunt d'activitats empresarials realitzades per a proporcionar una resposta a les ordres dels clients. Aquestes activitats estan relacionades amb les decisions d'acceptació/rebuig, i establiment de dates de lliurament per a les ordres del client. En les cadenes de subministrament afectades per la LHP, l'OPP ha de considerar l'homogeneïtat com un altre requisit addicional en la resposta a les comandes. A més, a causa de la incertesa intrínseca de la LHP, les discrepàncies entre les quantitats homogènies reals i planificades podrien provocar que les ordres compromeses anteriorment no puguen ser completades a causa de l'escassetat de producte. El procés de planificació de l'escassetat (SP, per les seues sigles de l'anglés "Shortage Planning") s'encarrega de trobar alternatives per a minimitzar aquest impacte negatiu en els clients i en la cadena de subministrament.
Considerar la LHP dins de l'OPP implica un conjunt nou de característiques desafiants que han de ser abordades. L'enfocament convencional d'assumir l'homogeneïtat en el producte per al programa mestre de producció (MPS, per les seues sigles de l'anglés "Master Production Schedule") i les quantitats disponibles a comprometre (ATP, per les seues sigles de l'anglés "Available-To-Promise") derivades d'ell, no és adequat. En canvi, tant el MPS com l'ATP han de manejar-se en termes de sublots homogenis. Atés que la quantitat exacta de producte homogeni dels lots previstos en el MPS no se sap exactament fins que s'han realitzat les activitats de classificació, l'ATP també hereta aquesta incertesa, portant un nou nivell de complexitat. El producte no homogeni no es pot acumular per a satisfer futures ordees entrants. Més encara, si el producte manipulat és perible, el maneig de l'homogeneïtat es torna molt més complex. Açò es deu al fet que l'estat del producte és dinàmic en el temps, i variables relacionades com qualitat, preu, etc., podrien també canviar amb el temps. Aquesta situació pot provocar costos inesperats de rebuig a banda de l'escassetat ja esmentada. El factor de perible és en si mateix un altra font d'incertesa associada a la LHP.
Aquesta dissertació proposa un marc conceptual i diferents models i eines de programació matemàtica, tant en entorns deterministes com d'incertesa, per a recolzar a l'OPP i SP considerant l'efecte de LHP. L'objectiu és proporcionar un compromís fiable amb les comandes dels clients a la recerca d'un alt nivell de servei no sols en la data i la quantitat esperades, s / Grillo Espinoza, H. (2017). Advanced methods and models in uncertainty for the order promising process in supply chain characterized by the lack of homogeneity in product [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/91110 / Compendio
|
452 |
Aproksimativna diskretizacija tabelarno organizovanih podataka / Approximative Discretization of Table-Organized Data
Ognjenović Višnja 27 September 2016 (has links)
Disertacija se bavi analizom uticaja raspodela podataka na rezultate algoritama diskretizacije u okviru procesa mašinskog učenja. Na osnovu izabranih baza i algoritama diskretizacije teorije grubih skupova i stabala odlučivanja, istražen je uticaj odnosa raspodela podataka i tačaka reza određene diskretizacije. Praćena je promena konzistentnosti diskretizovane tabele u zavisnosti od položaja redukovane tačke reza na histogramu. Definisane su fiksne tačke reza u zavisnosti od segmentacije multimodal raspodele, na osnovu kojih je moguće raditi redukciju preostalih tačaka reza. Za određivanje fiksnih tačaka konstruisan je algoritam FixedPoints koji ih određuje u skladu sa grubom segmentacijom multimodal raspodele. Konstruisan je algoritam aproksimativne diskretizacije APPROX MD za redukciju tačaka reza, koji koristi tačke reza dobijene algoritmom maksimalne razberivosti i parametre vezane za procenat nepreciznih pravila, ukupni procenat klasifikacije i broj tačaka redukcije. Algoritam je kompariran u odnosu na algoritam maksimalne razberivosti i u odnosu na algoritam maksimalne razberivosti sa aproksimativnim rešenjima za α=0,95. / This dissertation analyses the influence of data distribution on the results of discretization algorithms within the process of machine learning. Based on the chosen databases and the discretization algorithms of rough set theory and decision trees, the relation between the data distribution and the cuts of a given discretization has been investigated. Changes in the consistency of a discretized table, depending on the position of the reduced cut on the histogram, have been monitored. Fixed cuts have been defined, depending on the segmentation of the multimodal distribution, on the basis of which the remaining cuts can be reduced. To determine the fixed cuts, an algorithm, FixedPoints, has been constructed, which determines these points in accordance with the rough segmentation of the multimodal distribution. An algorithm for approximate discretization, APPROX MD, has been constructed for cut reduction; it uses the cuts obtained through the maximum discernibility (MD-Heuristic) algorithm and parameters related to the percentage of imprecise rules, the total classification percentage and the number of reduced cuts. The algorithm has been compared with the MD algorithm and with the MD algorithm with approximate solutions for α = 0.95.
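The abstract does not give the algorithm in detail, so the sketch below is only one schematic reading of the idea; the function names, the consistency measure and the greedy removal order are all assumptions. Starting from the cuts of a maximum-discernibility discretization, cuts are dropped one by one as long as the classification consistency of the discretized table stays above a chosen threshold.

```python
from collections import Counter

def discretize(value, cuts):
    """Index of the interval (between sorted cuts) into which the value falls."""
    return sum(value > c for c in cuts)

def consistency(rows, labels, cuts):
    """Fraction of rows whose discretized description determines a single label."""
    groups = {}
    for row, label in zip(rows, labels):
        key = tuple(discretize(v, cs) for v, cs in zip(row, cuts))
        groups.setdefault(key, Counter())[label] += 1
    return sum(c.most_common(1)[0][1] for c in groups.values()) / len(rows)

def reduce_cuts(rows, labels, cuts, min_consistency=0.95):
    """Greedily drop cuts while the table consistency stays above the threshold."""
    cuts = [sorted(cs) for cs in cuts]
    for cs in cuts:
        for c in list(cs):
            cs.remove(c)
            if consistency(rows, labels, cuts) < min_consistency:
                cs.append(c)   # removing this cut hurts consistency: restore it
                cs.sort()
    return cuts

rows = [(1.0,), (2.2,), (2.9,), (4.1,), (5.5,)]
labels = ["a", "a", "b", "b", "b"]
md_cuts = [[1.6, 2.55, 3.5, 4.8]]            # cuts from an MD-style discretization
print(reduce_cuts(rows, labels, md_cuts))    # only the cut separating the classes remains
```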
|
453 |
Evaluating reasoning heuristics for a hybrid theorem proving platform
Ackermann, Jacobus Gideon 06 1900 (has links)
Text in English with abstracts in English, Afrikaans and isiZulu / The formalisation of first-order logic and axiomatic set theory in the first half of the 20th century—along with the advent of the digital computer—paved the way for the development of automated theorem proving. In the 1950s, the automation of proof developed from proving elementary geometric problems and finding direct proofs for problems in Principia Mathematica by means of simple, human-oriented rules of inference. A major advance in the field of automated theorem proving occurred in 1965, with the formulation of the resolution inference mechanism. Today, powerful Satisfiability Modulo Theories (SMT) provers combine SAT solvers with sophisticated knowledge from various problem domains to prove increasingly complex theorems.
The combinatorial explosion of the search space is viewed as one of the major challenges to progress in the field of automated theorem proving. Pioneers from the 1950s and 1960s had already identified the need for heuristics to guide the proof search effort.
Despite theoretical advances in automated reasoning and technological advances in computing, the size of the search space remains problematic when increasingly complex proofs are attempted. Today, heuristics are still useful and necessary to discharge complex proof obligations.
In 2000, a number of heuristics were developed to aid the resolution-based prover OTTER in finding proofs for set-theoretic problems. The applicability of these heuristics to next-generation theorem provers was evaluated in 2009. The provers Vampire and Gandalf required 90% and 80% of the applicable OTTER heuristics, respectively.
This dissertation investigates the applicability of the OTTER heuristics to theorem proving in the hybrid theorem proving environment Rodin—a system modelling tool suite for the Event-B formal method. We show that only 2 of the 10 applicable OTTER heuristics were useful when discharging proof obligations in Rodin. Even though we argue that the OTTER heuristics were largely ineffective when applied to Rodin proofs, heuristics were still needed when proof obligations could not be discharged automatically. Therefore, we
propose a number of our own heuristics targeted at theorem proving in the Rodin tool suite. / Die formalisering van eerste-orde-logika en aksiomatiese versamelingsteorie in die eerste helfte van die 20ste eeu, tesame met die koms van die digitale rekenaar, het die weg vir die ontwikkeling van geoutomatiseerde bewysvoering gebaan. Die outomatisering van bewysvoering het in die 1950’s ontwikkel vanuit die bewys van
elementêre meetkundige probleme en die opspoor van direkte bewyse vir probleme in Principia Mathematica deur middel van eenvoudige, mensgerigte inferensiereëls. Vooruitgang is in 1965 op die gebied van geoutomatiseerde bewysvoering gemaak toe die resolusie-inferensie-meganisme geformuleer is. Deesdae kombineer kragtige Satisfiability Modulo Theories (SMT) bewysvoerders SAT-oplossers met gesofistikeerde kennis vanuit verskeie probleemdomeine om steeds meer komplekse stellings te bewys.
Die kombinatoriese ontploffing van die soekruimte kan beskou word as een van die grootste uitdagings vir verdere vooruitgang in die veld van geoutomatiseerde bewysvoering. Baanbrekers uit die 1950’s en 1960’s het reeds bepaal dat daar ’n behoefte is aan heuristieke om die soektog na bewyse te rig.
Ten spyte van die teoretiese vooruitgang in outomatiese bewysvoering en die tegnologiese vooruitgang in die rekenaarbedryf, is die grootte van die soekruimte steeds problematies wanneer toenemend komplekse bewyse aangepak word. Teenswoordig is heuristieke steeds nuttig en noodsaaklik om komplekse bewysverpligtinge uit te voer.
In 2000 is ’n aantal heuristieke ontwikkel om die resolusie-gebaseerde bewysvoerder OTTER te help om bewyse vir versamelingsteoretiese probleme te vind. Die toepaslikheid van hierdie heuristieke vir die volgende generasie bewysvoerders is in 2009 geëvalueer. Die bewysvoerders Vampire en Gandalf het onderskeidelik 90% en 80% van die toepaslike OTTER-heuristieke nodig gehad.
Hierdie verhandeling ondersoek die toepaslikheid van die OTTER-heuristieke op bewysvoering in die hibriede bewysvoeringsomgewing Rodin—’n stelselmodelleringsuite vir die formele Event-B-metode. Ons toon dat slegs 2 van die 10 toepaslike OTTER-heuristieke van nut was vir die uitvoering van bewysverpligtinge in Rodin. Ons voer aan dat die OTTER-heuristieke grotendeels ondoeltreffend was toe dit op Rodin-bewyse toegepas is. Desnieteenstaande is heuristieke steeds nodig as bewysverpligtinge nie outomaties uitgevoer kon word nie. Daarom stel ons ’n aantal van ons eie heuristieke voor wat in die Rodin-suite aangewend kan word. / Ukwenziwa semthethweni kwe-first-order logic kanye ne-axiomatic set theory ngesigamu sokuqala sekhulunyaka lama-20—kanye nokufika kwekhompyutha esebenza ngobuxhakaxhaka bedijithali—kwavula indlela ebheke ekuthuthukisweni kwenqubo-kusebenza yokufakazela amathiyoremu ngekhomyutha. Ngeminyaka yawo-1950, ukuqinisekiswa kobufakazi kwasuselwa ekufakazelweni kwezinkinga zejiyomethri eziyisisekelo kanye nasekutholakaleni kobufakazi-ngqo bezinkinga eziphathelene ne-Principia Mathematica ngokuthi kusetshenziswe
imithetho yokuqagula-sakucabangela elula, egxile kubantu. Impumelelo enkulu emkhakheni wokufakazela amathiyoremu ngekhompyutha yenzeka ngowe-1965, ngokwenziwa semthethweni kwe-resolution inference mechanism. Namuhla, abafakazeli abanohlonze bamathiyori abizwa nge-Satisfiability Modulo Theories (SMT) bahlanganisa ama-SAT solvers nolwazi lobungcweti oluvela kwizizinda zezinkinga ezihlukahlukene ukuze bakwazi ukufakazela amathiyoremu okungelula neze ukuwafakazela.
Ukukhula ngesivinini kobunzima nobunkimbinkimbi benkinga esizindeni esithile kubonwa njengenye yezinselelo ezinkulu okudingeka ukuthi zixazululwe ukuze kube nenqubekela phambili ekufakazelweni kwamathiyoremu ngekhompyutha. Amavulandlela eminyaka yawo-1950 nawo-1960 asesihlonzile kakade isidingo sokuthi amahuristikhi (heuristics) kube yiwona ahola umzamo wokuthola ubufakazi.
Nakuba ikhona impumelelo esiyenziwe kumathiyori ezokucabangela okujulile kusetshenziswa amakhompyutha kanye nempumelelo yobuchwepheshe bamakhompyutha, usayizi wesizinda usalokhu uyinkinga uma kwenziwa imizamo yokuthola ubufakazi obuyinkimbinkimbi futhi obunobunzima obukhudlwana. Namuhla imbala, amahuristikhi asewuziso futhi ayadingeka ekufezekiseni izibopho zobufakazi obuyinkimbinkimbi.
Ngowezi-2000, kwathuthukiswa amahuristikhi amaningana impela ukuze kulekelelwe uhlelo-kusebenza olungumfakazeli osekelwe phezu kwesixazululo, olubizwa nge-OTTER, ekutholeni ubufakazi bama-set-theoretic problems. Ukusebenziseka kwalawa mahuristikhi kwizinhlelo-kusebenza ezingabafakazeli bamathiyoremu besimanjemanje kwahlolwa ngowezi-2009. Uhlelo-kusebenza olungumfakazeli, olubizwa nge-Vampire kanye nalolo olubizwa nge-Gandalf zadinga ama-90% kanye nama-80%, ngokulandelana kwazo, maqondana nama-OTTER heuristics afanelekile.
Lolu cwaningo luphenya futhi lucubungule ukusebenziseka kwama-OTTER heuristics ekufakazelweni kwamathiyoremu esimweni esiyinhlanganisela sokufakazela amathiyoremu esibizwa nge-Rodin—okuyi-system modelling tool suite eqondene ne-Event-B formal method. Kulolu cwaningo siyabonisa ukuthi mabili kuphela kwayi-10 ama-OTTER heuristics aba wusizo ngenkathi kufezekiswa isibopho sobufakazi ku-Rodin. Nakuba sibeka umbono wokuthi esikhathini esiningi ama-OTTER heuristics awazange abe wusizo uma esetshenziswa kuma-Rodin proofs, amahuristikhi asadingeka ezimweni lapho izibopho zobufakazi zingazenzekelanga ngokwazo ngokulawulwa yizinhlelo-kusebenza zekhompyutha. Ngakho-ke, siphakamisa amahuristikhi ethu amaningana angasetshenziswa ekufakazeleni amathiyoremu ku-Rodin tool suite. / School of Computing / M. Sc. (Computer Science)
|
454 |
Non-deterministic analysis of slope stability based on numerical simulation
Shen, Hong 02 October 2012 (has links) (PDF)
In geotechnical engineering, uncertainties such as the variability inherent in geotechnical properties have attracted more and more attention from researchers and engineers. They have found that a single “Factor of Safety” calculated by traditional deterministic analysis methods cannot represent slope stability exactly. Recently, in order to provide a more rational mathematical framework to incorporate different types of uncertainties in the slope stability estimation, reliability analyses and non-deterministic methods, which include probabilistic and non-probabilistic (imprecise) methods, have been applied widely. In short, slope non-deterministic analysis combines a probabilistic or non-probabilistic analysis with a deterministic slope stability analysis. It cannot be regarded as a completely new slope stability analysis method, but rather as an extension of deterministic slope analysis. The slope failure probability calculated by non-deterministic analysis is a complement to the safety factor. Therefore, the accuracy of non-deterministic analysis depends not only on selecting a suitable probabilistic or non-probabilistic analysis method, but also on adopting a rigorous deterministic analysis method or geological model.
In this thesis, reliability concepts have been reviewed first, and some typical non-deterministic methods, including Monte Carlo Simulation (MCS), the First Order Reliability Method (FORM), the Point Estimate Method (PEM) and Random Set Theory (RSM), have been described and successfully applied to slope stability analysis based on a numerical simulation method, the Strength Reduction Method (SRM). All of the analyses have been performed in the commercial finite difference code FLAC and the distinct element code UDEC.
First of all, as the foundation of slope reliability analysis, the deterministic numerical simulation method has been improved. This method has a higher accuracy than the conventional limit equilibrium methods, because the constitutive relationship of the soil is considered and fewer assumptions on the boundary conditions of the slope model are necessary. However, the construction of slope numerical models, particularly of large and complicated ones, has always been very difficult and has become an obstacle to the application of the numerical simulation method. In this study, the excellent spatial analysis functions of the Geographic Information System (GIS) technique have been introduced to support numerical modeling of the slope. In the modeling process, the topographic map of the slope has been gridded using GIS software, and the GIS data were then transferred smoothly into FLAC through FLAC's built-in language FISH. Finally, the feasibility and high efficiency of this technique have been illustrated through a case study, the Xuecheng slope, for which both 2D and 3D models have been investigated.
Subsequently, the three most widely used probabilistic analysis methods, Monte Carlo Simulation, the First Order Reliability Method and the Point Estimate Method, applied together with the Strength Reduction Method, have been studied. Monte Carlo Simulation, which requires repeating thousands of deterministic analyses, is the most accurate probabilistic method. However, it is too time-consuming for practical applications, especially when it is combined with a numerical simulation method. To reduce the computational effort, a simplified Monte Carlo Simulation-Strength Reduction Method (MCS-SRM) has been developed in this study. This method first estimates the probable failure of the slope and calculates the mean value of the safety factor from the soil parameters, and then calculates the variance of the safety factor and the reliability of the slope according to an assumed probability density function of the safety factor. Case studies have confirmed that this method can reduce computation time by about 4/5 compared with the traditional MCS-SRM, while maintaining almost the same accuracy.
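The following sketch illustrates the two-stage idea under stated assumptions: the closed-form safety-factor function stands in for a FLAC strength-reduction run, the soil statistics are invented, and the lognormal form of the safety-factor distribution is an assumed choice. The failure probability is read from the fitted distribution instead of being counted from thousands of failed runs.

```python
import math, random

random.seed(1)

def safety_factor(cohesion, friction_deg):
    """Stand-in for one strength-reduction (SRM) run of the slope model."""
    return 0.02 * cohesion + 0.03 * friction_deg

# Assumed soil statistics: cohesion (kPa) and friction angle (degrees).
C_MEAN, C_STD = 20.0, 4.0
PHI_MEAN, PHI_STD = 25.0, 3.0

# A modest number of runs: only the mean and variance of the safety factor are needed.
runs = [safety_factor(random.gauss(C_MEAN, C_STD), random.gauss(PHI_MEAN, PHI_STD))
        for _ in range(200)]
mean_fs = sum(runs) / len(runs)
var_fs = sum((r - mean_fs) ** 2 for r in runs) / (len(runs) - 1)

# Assume the safety factor is lognormally distributed and read P(FS < 1) from it,
# instead of counting failed runs as the full Monte Carlo simulation would.
sigma_ln = math.sqrt(math.log(1.0 + var_fs / mean_fs ** 2))
mu_ln = math.log(mean_fs) - 0.5 * sigma_ln ** 2
z = (math.log(1.0) - mu_ln) / sigma_ln
p_failure = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(f"mean FS = {mean_fs:.3f}, std FS = {math.sqrt(var_fs):.3f}")
print(f"failure probability from the assumed lognormal fit = {p_failure:.4f}")
```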
The First Order Reliability Method is an approximate method based on the Taylor series expansion of the performance function. A closed-form expression for the partial derivatives of the performance function is needed to calculate the mean and standard deviation of the safety factor. However, there is no explicit performance function in a numerical simulation method, so in this study the derivative expressions have been replaced with equivalent difference quotients to approximate the differential quotients. The Point Estimate Method is also an approximate method, involving even fewer calculations than FORM. In the present study, it has been integrated with the Strength Reduction Method directly.
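A minimal sketch of the difference-quotient idea follows; the performance function is again a stand-in for the implicit numerical model, and the parameter values and step size are assumptions. The partial derivatives needed by the first-order approximation are estimated numerically, giving the mean and standard deviation of the safety factor and a reliability index.

```python
import math

def safety_factor(c, phi):
    """Stand-in for the implicit performance function of the numerical model."""
    return 0.02 * c + 0.03 * phi

means = {"c": 20.0, "phi": 25.0}   # assumed soil parameter means
stds = {"c": 4.0, "phi": 3.0}      # assumed standard deviations

def dfs_dx(var, h=1e-3):
    """Central difference quotient replacing the unavailable analytic derivative."""
    hi, lo = dict(means), dict(means)
    hi[var] += h
    lo[var] -= h
    return (safety_factor(**hi) - safety_factor(**lo)) / (2.0 * h)

mean_fs = safety_factor(**means)
var_fs = sum((dfs_dx(v) * stds[v]) ** 2 for v in means)   # first order, uncorrelated inputs
std_fs = math.sqrt(var_fs)
beta = (mean_fs - 1.0) / std_fs                           # reliability index against FS = 1
p_failure = 0.5 * math.erfc(beta / math.sqrt(2.0))

print(f"mean FS = {mean_fs:.3f}, std FS = {std_fs:.3f}")
print(f"reliability index = {beta:.2f}, approximate failure probability = {p_failure:.4f}")
```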
Another important observation concerns the correlation between the soil parameters cohesion and friction angle. Some authors have found a negative correlation between the cohesion and friction angle of soil on the basis of experimental data. However, few probabilistic slope studies in the literature consider this negative correlation between soil parameters. In this thesis, the influence of this correlation on the slope probability of failure has been investigated based on the numerical simulation method. It was found that considering a negative correlation between the cohesion and friction angle of the soil reduces the variability of the safety factor and the failure probability of the slope, thus increasing the reliability of the results.
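To illustrate why a negative correlation between cohesion and friction angle narrows the spread of the safety factor (the correlation value and the linear stand-in for the slope model are assumptions), correlated normal samples can be generated with a two-variable Cholesky-type transformation and compared with independent sampling.

```python
import math, random

random.seed(2)

def correlated_pair(mu1, sd1, mu2, sd2, rho):
    """Two jointly normal values with correlation rho (2x2 Cholesky by hand)."""
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    x1 = mu1 + sd1 * z1
    x2 = mu2 + sd2 * (rho * z1 + math.sqrt(1 - rho ** 2) * z2)
    return x1, x2

def fs(c, phi):
    return 0.02 * c + 0.03 * phi          # stand-in for the slope model

def std_of_fs(rho, n=5000):
    vals = [fs(*correlated_pair(20, 4, 25, 3, rho)) for _ in range(n)]
    m = sum(vals) / n
    return math.sqrt(sum((v - m) ** 2 for v in vals) / (n - 1))

print("std of FS, independent parameters:", round(std_of_fs(0.0), 3))
print("std of FS, correlation rho = -0.5:", round(std_of_fs(-0.5), 3))
```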
Besides being inter-correlated, soil parameters are always auto-correlated in space, which is described as spatial variability. Because knowledge of this characteristic is rather limited in the literature, it is ignored by most researchers and engineers in geotechnical engineering. In this thesis, the random field method has been introduced into the slope numerical simulation to model the spatial variability structure, and a numerical procedure for probabilistic slope stability analysis based on Monte Carlo simulation was presented. Soil properties such as cohesion and friction angle were discretized into continuous random fields based on the local averaging method. In the case study, both stationary and non-stationary random fields have been investigated, and the influence of spatial variability and the averaging domain on the convergence of the numerical simulation and the probability of failure was studied.
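A rough sketch of the random-field idea follows, under an assumed exponential autocorrelation and invented parameters; a full local-averaging treatment would in addition reduce the point variance over each element. On a regular one-dimensional grid, an exponentially correlated field can be generated sequentially and one realization mapped onto the slope model per Monte Carlo run.

```python
import math, random

random.seed(3)

MEAN_C, STD_C = 20.0, 4.0    # cohesion statistics (kPa), assumed
ELEM_SIZE = 0.5              # element size along the profile (m), assumed
THETA = 4.0                  # correlation length / scale of fluctuation (m), assumed
N_ELEM = 20

# Exponential autocorrelation between neighbouring elements; on an equally
# spaced 1-D grid the field can then be generated sequentially (AR(1) form).
rho = math.exp(-2.0 * ELEM_SIZE / THETA)

def cohesion_field():
    z = random.gauss(0, 1)
    field = [MEAN_C + STD_C * z]
    for _ in range(N_ELEM - 1):
        z = rho * z + math.sqrt(1 - rho ** 2) * random.gauss(0, 1)
        field.append(MEAN_C + STD_C * z)
    return field

# Each realization would be assigned to the model grid and analysed once
# inside the Monte Carlo loop.
print(["%.1f" % c for c in cohesion_field()])
```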
In a rock medium, the structural planes have a very important influence on slope stability, and in the distinct element method the rock material can be modeled as a combination of rigid or deformable blocks with joints. Therefore, many more input parameters, such as the strength of the joints, are required for the rock slope model, which increases the uncertainty of the results of the numerical model. Furthermore, because of the limitations of current laboratory and in-situ tests, exact values of the geotechnical parameters of rock material, and even the probability distributions of these variables, are often lacking. Most of the time, engineers can only estimate intervals for these variables from limited tests or expert experience. In this study, to assess the reliability of the rock slope, a Random Set Distinct Element Method (RS-DEM) has been developed by coupling Random Set Theory with the Distinct Element Method, and applied to a rock slope in Sichuan Province, China.
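The random-set propagation can be sketched as follows; the focal intervals, their basic probability assignments and the vertex rule are assumptions for the example, and the safety-factor function again stands in for a distinct-element (UDEC) run. Every combination of intervals is propagated through the model, and the failure probability is bounded from below (belief) and above (plausibility).

```python
from itertools import product

def fs(c, phi):
    """Stand-in for one distinct-element run of the rock slope model."""
    return 0.02 * c + 0.03 * phi

# Random sets: list of (interval, basic probability assignment) per parameter.
cohesion_set = [((12.0, 18.0), 0.4), ((20.0, 28.0), 0.6)]
friction_set = [((10.0, 16.0), 0.5), ((22.0, 30.0), 0.5)]

belief = plausibility = 0.0
for (c_int, m_c), (p_int, m_p) in product(cohesion_set, friction_set):
    mass = m_c * m_p                                    # joint focal element mass
    corners = [fs(c, p) for c in c_int for p in p_int]  # vertex method on the box
    lo, hi = min(corners), max(corners)
    if hi < 1.0:        # the whole focal element fails -> counts towards belief
        belief += mass
    if lo < 1.0:        # failure is possible somewhere -> counts towards plausibility
        plausibility += mass

print(f"failure probability bounds: [{belief:.2f}, {plausibility:.2f}]")
```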
|
455 |
模糊卡方適合度檢定 / Fuzzy Chi-square Test Statistic for goodness-of-fit
林佩君, Lin, Pei Chun Unknown Date (has links)
在資料分析上，調查者通常需要決定，不同的樣本是否可被視為來自相同的母體。一般最常使用的統計量為Pearson's χ² 統計量。然而，傳統的統計方法皆是利用二元邏輯觀念來呈現。如果我們想要用模糊邏輯的概念來做樣本調查，此時，使用傳統 χ² 檢定來分析這些模糊樣本資料是否仍然適當？透過這樣的觀念，我們使用傳統統計方法，找出一個能處理這些模糊樣本資料的公式，稱之為模糊 χ²。結果顯示，此公式可用來檢定，模糊樣本資料在不同母體下機率的一致性。 / In the analysis of research data, the investigator often needs to decide whether several independent samples may be regarded as having come from the same population. The most commonly used statistic is Pearson's χ² statistic. However, traditional statistics reflect the result of a two-valued logic concept. If we want to survey sampling with a fuzzy logic concept, is it still appropriate to use the traditional χ²-test for analysing those fuzzy sample data? Through this concept, we try to use a traditional statistical method to find a formula, called the fuzzy χ², that enables us to deal with those fuzzy sample data. The result shows that we can use the formula to test hypotheses about the probabilities of various outcomes in fuzzy sample data.
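As a rough illustration of the underlying idea (the membership-weighted counts and the data are assumptions; the thesis derives its own fuzzy χ² formula), survey answers that spread membership over the outcome categories can be turned into fractional observed counts and compared with the expected counts through the usual χ² statistic.

```python
# Each respondent distributes one unit of membership over the outcome
# categories instead of picking exactly one (fuzzy sample data).
fuzzy_answers = [
    {"agree": 0.7, "neutral": 0.3, "disagree": 0.0},
    {"agree": 0.2, "neutral": 0.5, "disagree": 0.3},
    {"agree": 0.0, "neutral": 0.4, "disagree": 0.6},
    {"agree": 0.9, "neutral": 0.1, "disagree": 0.0},
    {"agree": 0.1, "neutral": 0.3, "disagree": 0.6},
]
categories = ["agree", "neutral", "disagree"]
hypothesised = {"agree": 1 / 3, "neutral": 1 / 3, "disagree": 1 / 3}

n = len(fuzzy_answers)
observed = {c: sum(a[c] for a in fuzzy_answers) for c in categories}
expected = {c: n * hypothesised[c] for c in categories}

chi_square = sum((observed[c] - expected[c]) ** 2 / expected[c] for c in categories)
print("fractional observed counts:", {c: round(v, 2) for c, v in observed.items()})
print("fuzzy chi-square statistic:", round(chi_square, 3))
# Compared against the chi-square distribution with len(categories) - 1 = 2
# degrees of freedom, exactly as in the crisp goodness-of-fit test.
```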
|
456 |
The basics of set theory - some new possibilities with ClassPad
Paditz, Ludwig 20 March 2012 (has links) (PDF)
No description available.
|
457 |
Equações integrais via teoria de domínios: problemas direto e inverso / Integral equations in domain theory: problems direct and inverse
Antônio Espósito Júnior 23 July 2008 (has links)
Apresenta-se um estudo em Teoria de Domínios das equações integrais da forma geral
f(x) = h(x) + λ ∫_{a(x)}^{b(x)} g(x, y, f(y)) dy
com h, a e b definidas para x ∈ [a0,b0], a0 ≤a(x)≤b(x)≤b0 e g definida para x, y ∈ [a0,b0], cujo lado direito define uma contração sobre o espaço métrico de funções reais contínuas
limitadas. O ponto de partida desse trabalho é a reescrita da Análise Intervalar para Teoria de Domínios do problema de valor incial em equações diferenciais ordinárias que possuem
solução como ponto fixo do operador de Picard. Com o conjunto dos números reais interpretados pelo Domínio Intervalar, as funções reais são estendidas para operarem no domínio de funçoes intervalares de variável real. Em particular, faz-se a extensão canônica do campo vetorial em relação à segunda variável. Nesse contexto, pela primeira vez tem-se o estudo das equações integrais de Fredholm e Volterra sobre o domínio de funções intervalares de variável real definida pelo operador integral intervalar com a participação da extensão canônica de g em relação à terceira variável. Adicionando ao domínio de funções intervalares sua função medição, efetua-se a análise da convergência do operador intervalar de Fredholm e Volterra em Teoria de Domínios com o cálculo da sua derivada
informática em relação à medição no seu ponto fixo. Com a representação das funções intervalares em função passo constante a partir da partição do intervalo [a0,b0], reescrevese
o algoritmo da Análise Intervalar em Teoria de Domínios com a introdução do cálculo da aproximação da extensão canônica de g e com o comprimento do intervalo da partição tendendo para zero. Estende-se essa abordagem mais completa do estudo das equações integrais na resolução de problemas de valores iniciais e valor de contorno em equações diferenciais ordinárias e parciais. Uma vez que para uma pequena variação do campo
vetorial v ou do valor inicial y0 da equação diferencial f ′(x) = v(x, f (x)) com a condição inicial f (x0) = y0, pode-se ter uma solução tão próxima da solução f da equação quanto
possível, formaliza-se pela primeira vez em Teoria de Domínios um algoritmo na resolução do problema inverso em que, conhecendo a função f , determina-se uma equação diferencial
ordinária com o cálculo de um campo vetorial v tal que o operador de Picard associado mapeia f tão próxima quanto possível a ela mesma. / We present a study in Domain Theory of integral equations of the form
f(x) = h(x) + λ ∫_{a(x)}^{b(x)} g(x, y, f(y)) dy
for a0 ≤ a(x) ≤ b(x) ≤ b0, with h, a, b defined for x ∈ [a0,b0] and g defined for x, y ∈ [a0,b0], in which the right-hand side defines a contraction on the metric space of continuous real-valued functions on [a0,b0]. The starting point of this work is to revisit Interval Analysis in Domain Theory for the initial-value problem in ordinary differential equations, where a solution is expressed as a fixed point of the Picard operator. With the set of real numbers interpreted as the interval domain, real-valued functions are extended to work in the space of interval-valued functions of the real variable domain. In particular, the vector field is extended in the second argument. Under these conditions, for the first time Fredholm and Volterra integral equations have solutions expressed as fixed points of a contraction mapping in terms of the splitting on interval-valued functions of the real variable domain. The measurement for interval-valued functions of the real variable domain is considered, where we can assess the convergence properties of the interval integral operator by means of the informatic derivative. The proposed techniques are applied to more general methods in ordinary differential equations (ODEs) and partial differential equations (PDEs). For the first time, an algorithm is proposed to provide solutions to the inverse problem for Ordinary Differential Equations where, given a function f, a vector field v is found that defines a Picard operator which maps the solution f as close as possible to itself, such that the ODE f′(x) = v(x, f(x)) admits f as either an exact or, as closely as desired, an approximate solution.
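A minimal sketch of the fixed-point view, under stated assumptions: the thesis works with interval-valued functions and domain-theoretic approximations, whereas the code below applies the Picard operator for the initial-value problem f′(x) = v(x, f(x)), f(x0) = y0 on an ordinary floating-point grid until the iterates stabilise; the grid size, sweep count and example data are assumptions.

```python
def picard_iterate(v, x0, y0, x_end, n=200, sweeps=30):
    """Approximate the fixed point of F(f)(x) = y0 + integral_{x0}^{x} v(t, f(t)) dt."""
    h = (x_end - x0) / n
    xs = [x0 + i * h for i in range(n + 1)]
    f = [y0] * (n + 1)                      # initial guess: the constant function y0
    for _ in range(sweeps):
        g = [y0]
        for i in range(n):                  # trapezoidal quadrature of v(t, f(t))
            g.append(g[-1] + 0.5 * h * (v(xs[i], f[i]) + v(xs[i + 1], f[i + 1])))
        f = g                               # next Picard iterate
    return xs, f

# Example: f'(x) = f(x), f(0) = 1, whose exact solution is exp(x).
xs, f = picard_iterate(lambda x, y: y, 0.0, 1.0, 1.0)
print("approximation at x = 1:", round(f[-1], 5))   # close to e = 2.71828
```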
|
458 |
Problem-Based SRS: método para especificação de requisitos de software baseado em problemas / Problem-Based SRS: method for software requirements specification based on problems
Souza, Rafael Gorski Moreno 23 August 2016 (has links)
Especificação de requisitos é reconhecida como uma atividade critica nos processos de desenvolvimento de software por causa de seu impacto nos riscos do projeto quando mal executada. Um grande conjunto de estudos discute aspectos teóricos, proposições de técnicas e práticas recomendadas para a Engenharia de Requisitos (ER). Para ter sucesso, ER tem que assegurar que requisitos especificados são completos e corretos, o que significa que todas as intenções dos stakeholders são cobertas pelos requisitos e que não existem requisitos desnecessários. Entretanto, a captura precisa das intenções stakeholders continua sendo um desafio e é o maior fator para falhas em projetos de software. Esta dissertação apresenta um novo método denominado “Problem-Based SRS” que tem como objetivo melhorar a qualidade da especificação de requisitos de software (SRS – Software Requirements Specification) no sentido de que os requisitos especificados forneçam uma resposta adequada para os problemas dos clientes reais. Neste método, o conhecimento sobre os requisitos de software é construído a partir do conhecimento sobre os problemas do cliente. O Problem-Based SRS consiste de uma organização de atividades e resultados através de um processo que contem cinco etapas. O método fornece suporte ao time de engenharia de requisitos para analisar sistematicamente o contexto de negócio e especificar os requisitos de software, considerando o vislumbre e a visão do software. Os aspectos de qualidade das especificações são avaliados utilizando técnicas de rastreabilidade e princípios do axiomatic design. Os casos de estudo realizados e apresentados nesta dissertação apontam que o método proposto pode contribuir de forma significativa para uma melhor especificação de requisitos de software. / Requirements specification has long been recognized as a critical activity in software development processes because of its impact on project risks when poorly performed. A large number of studies address theoretical aspects, propositions of techniques, and recommended practices for Requirements Engineering (RE). To be successful, RE has to ensure that the specified requirements are complete and correct, which means that all intents of the stakeholders in a given business context are covered by the requirements and that no unnecessary requirement was introduced. However, accurately capturing the business intents of the stakeholders remains a challenge and is a major factor in software project failures. This master's dissertation presents a novel method referred to as “Problem-Based SRS” aiming at improving the quality of the Software Requirements Specification (SRS) in the sense that the stated requirements provide suitable answers to real customers' business issues. In this approach, the knowledge about the software requirements is constructed from the knowledge about the customer's problems. Problem-Based SRS consists of an organization of activities and outcome objects through a process that contains five main steps. It aims at supporting the software requirements engineering team in systematically analyzing the business context and specifying the software requirements, also taking into account a first glance and vision of the software. The quality aspects of the specifications are evaluated using traceability techniques and axiomatic design principles. The case studies conducted and presented in this document indicate that the proposed method can contribute significantly to improving the software requirements specification.
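As a loose illustration of the traceability-based evaluation (the identifiers, the mapping and the two checks are invented for the example; the thesis combines traceability with axiomatic design principles), a problem-to-requirement mapping can be checked for completeness, so that every customer problem is covered, and for unnecessary requirements, so that every requirement traces back to some problem.

```python
# Hypothetical traceability matrix: which requirements address which problems.
problems = {"P1": "late order confirmation", "P2": "inconsistent delivery dates"}
requirements = {
    "R1": "confirm orders within 24 h",
    "R2": "compute delivery dates from a single ATP source",
    "R3": "export monthly sales report",        # traces to no stated problem
}
trace = {"R1": {"P1"}, "R2": {"P1", "P2"}, "R3": set()}

covered = set().union(*trace.values())
uncovered_problems = sorted(set(problems) - covered)
unnecessary_requirements = [r for r, ps in trace.items() if not ps]

print("problems without a requirement :", uncovered_problems)          # completeness check
print("requirements without a problem :", unnecessary_requirements)    # correctness check
```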
|