451

Evaluating reasoning heuristics for a hybrid theorem proving platform

Ackermann, Jacobus Gideon 06 1900 (has links)
Text in English with abstracts in English, Afrikaans and isiZulu / The formalisation of first-order logic and axiomatic set theory in the first half of the 20th century, along with the advent of the digital computer, paved the way for the development of automated theorem proving. In the 1950s, the automation of proof developed from proving elementary geometric problems and finding direct proofs for problems in Principia Mathematica by means of simple, human-oriented rules of inference. A major advance in the field occurred in 1965, with the formulation of the resolution inference mechanism. Today, powerful Satisfiability Modulo Theories (SMT) provers combine SAT solvers with sophisticated knowledge from various problem domains to prove increasingly complex theorems. The combinatorial explosion of the search space is viewed as one of the major challenges to progress in automated theorem proving; pioneers of the 1950s and 1960s had already identified the need for heuristics to guide the proof search. Despite theoretical advances in automated reasoning and technological advances in computing, the size of the search space remains problematic when increasingly complex proofs are attempted, and heuristics are still useful and necessary to discharge complex proof obligations. In 2000, a number of heuristics were developed to aid the resolution-based prover OTTER in finding proofs for set-theoretic problems, and their applicability to next-generation theorem provers was evaluated in 2009: the provers Vampire and Gandalf required 90% and 80% of the applicable OTTER heuristics, respectively. This dissertation investigates the applicability of the OTTER heuristics to theorem proving in the hybrid theorem proving environment Rodin, a system modelling tool suite for the Event-B formal method. We show that only 2 of the 10 applicable OTTER heuristics were useful when discharging proof obligations in Rodin. Although the OTTER heuristics were largely ineffective when applied to Rodin proofs, heuristics were still needed when proof obligations could not be discharged automatically. We therefore propose a number of our own heuristics targeted at theorem proving in the Rodin tool suite. / School of Computing / M. Sc. (Computer Science)
452

Non-deterministic analysis of slope stability based on numerical simulation

Shen, Hong 02 October 2012 (has links) (PDF)
In geotechnical engineering, the variability and uncertainty inherent in geotechnical properties have attracted increasing attention from researchers and engineers, who have found that a single "Factor of Safety" calculated by traditional deterministic analysis methods cannot represent slope stability exactly. To provide a more rational mathematical framework for incorporating different types of uncertainty into slope stability estimation, reliability analyses and non-deterministic methods, which include probabilistic and non-probabilistic (imprecise) methods, have recently been applied widely. In short, non-deterministic slope analysis combines a probabilistic or non-probabilistic analysis with deterministic slope stability analysis; it is not a completely new method of slope stability analysis but an extension of deterministic analysis, and the failure probability it yields complements the safety factor. The accuracy of non-deterministic analysis therefore depends not only on selecting a suitable probabilistic or non-probabilistic method but also on adopting a rigorous deterministic analysis method and geological model. In this thesis, reliability concepts are reviewed first, and some typical non-deterministic methods, including Monte Carlo Simulation (MCS), the First Order Reliability Method (FORM), the Point Estimate Method (PEM) and Random Set Theory (RST), are described and successfully applied to slope stability analysis based on a numerical simulation method, the Strength Reduction Method (SRM). All computations were performed in the commercial finite difference code FLAC and the distinct element code UDEC.
First, as the foundation of slope reliability analysis, the deterministic numerical simulation method was improved. It is more accurate than conventional limit equilibrium methods because the constitutive relationship of the soil is considered and fewer assumptions about the boundary conditions of the slope model are needed. However, constructing slope numerical models, particularly large and complicated ones, has always been difficult and has been an obstacle to the application of numerical simulation. In this study, the spatial analysis capabilities of Geographic Information System (GIS) techniques were introduced to support numerical modelling of the slope: the topographic map of the slope was gridded using GIS software, and the GIS data were then transferred into FLAC through the built-in scripting language FISH. The feasibility and efficiency of the technique are illustrated through a case study, the Xuecheng slope, for which both 2D and 3D models were investigated.
Subsequently, the three most widely used probabilistic methods, Monte Carlo Simulation, the First Order Reliability Method and the Point Estimate Method, were studied in combination with the Strength Reduction Method. Monte Carlo Simulation, which repeats the deterministic analysis thousands of times, is the most accurate probabilistic method, but it is too time-consuming for practical applications, especially when combined with numerical simulation. To reduce the computational effort, a simplified Monte Carlo Simulation-Strength Reduction Method (MCS-SRM) was developed in this study: it first estimates the probable failure of the slope and the mean safety factor from the soil parameters, and then calculates the variance of the safety factor and the reliability of the slope from an assumed probability density function of the safety factor. Case studies confirmed that this method reduces computation time by about four fifths compared with the traditional MCS-SRM while maintaining almost the same accuracy. The First Order Reliability Method is an approximate method based on the Taylor series expansion of the performance function; a closed-form solution for the partial derivatives of the performance function is needed to calculate the mean and standard deviation of the safety factor, but numerical simulation provides no explicit performance function, so the derivatives were approximated by equivalent difference quotients in this study. The Point Estimate Method is also an approximate method, involving even fewer calculations than FORM, and was integrated with the Strength Reduction Method directly.
Another important observation concerns the correlation between the soil parameters cohesion and friction angle. Some authors have found a negative correlation between the cohesion and friction angle of soil on the basis of experimental data, yet few probabilistic slope studies in the literature consider it. In this thesis, the influence of this correlation on the probability of slope failure was investigated with the numerical simulation method, and it was found that accounting for a negative correlation between cohesion and friction angle reduces the variability of the safety factor and the failure probability of the slope, thereby increasing the reliability of the results. Besides being inter-correlated, soil parameters are usually also auto-correlated in space, a property described as spatial variability. Because knowledge of this property is rather limited in the literature, most researchers and engineers in geotechnical engineering ignore it. In this thesis, the random field method was introduced into the slope numerical simulation to model the spatial variability structure, and a numerical procedure for probabilistic slope stability analysis based on Monte Carlo simulation was presented. Soil properties such as cohesion and friction angle were discretised into continuous random fields by the local averaging method. In the case study, both stationary and non-stationary random fields were investigated, and the influence of spatial variability and of the averaging domain on the convergence of the numerical simulation and on the probability of failure was studied.
In rock media, structural faces have a very important influence on slope stability, and in the distinct element method the rock material can be modelled as a combination of rigid or deformable blocks with joints. A rock slope model therefore requires many more input parameters, such as joint strengths, which increases the uncertainty of the numerical results. Furthermore, because of the limitations of current laboratory and in-situ tests, exact values of the geotechnical parameters of rock material, and even the probability distributions of these variables, are often unavailable; engineers can usually only estimate intervals for them from limited tests or expert experience. In this study, to assess the reliability of rock slopes, a Random Set Distinct Element Method (RS-DEM) was developed by coupling Random Set Theory with the Distinct Element Method, and applied to a rock slope in Sichuan Province, China.
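The core Monte Carlo loop described in this abstract can be sketched compactly. In the Python sketch below, a closed-form infinite-slope factor of safety stands in for the FLAC/UDEC strength-reduction runs, and all parameter values (unit weight, geometry, statistics, the -0.5 cross-correlation) are illustrative assumptions, not the thesis's data.

```python
# Monte Carlo slope reliability: sample negatively correlated cohesion and
# friction angle, evaluate a factor of safety, estimate failure probability.
import numpy as np

rng = np.random.default_rng(42)

def factor_of_safety(c, phi_deg):
    """Hypothetical infinite-slope FS for a 30-degree dry slope
    (placeholder for a strength-reduction numerical run)."""
    gamma, height, beta = 19.0, 10.0, np.radians(30.0)  # kN/m3, m, rad
    phi = np.radians(phi_deg)
    return (c / (gamma * height * np.sin(beta) * np.cos(beta))
            + np.tan(phi) / np.tan(beta))

n = 100_000
mean = [40.0, 25.0]                 # mean cohesion (kPa), friction angle (deg)
sd = [8.0, 4.0]
rho = -0.5                          # assumed negative cross-correlation
cov = [[sd[0]**2, rho * sd[0] * sd[1]],
       [rho * sd[0] * sd[1], sd[1]**2]]
c, phi = rng.multivariate_normal(mean, cov, size=n).T

fs = factor_of_safety(c, phi)
print(f"mean FS = {fs.mean():.3f}, P_f = {np.mean(fs < 1.0):.4f}")
```

Setting `rho = 0.0` and re-running shows the effect the abstract reports: ignoring the negative correlation inflates the spread of the safety factor and hence the estimated failure probability.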
453

模糊卡方適合度檢定 / Fuzzy Chi-square Test Statistic for goodness-of-fit

林佩君, Lin, Pei Chun Unknown Date (has links)
In the analysis of research data, the investigator often needs to decide whether several independent samples may be regarded as having come from the same population. The most commonly used statistic is Pearson's χ² statistic. However, traditional statistics reflect the result of a two-valued logic concept. If we survey sampling with a fuzzy logic concept, is it still appropriate to use the traditional χ²-test to analyse such fuzzy sample data? Starting from this question, we use traditional statistical methods to derive a formula, called the fuzzy χ², that enables us to deal with fuzzy sample data. The result shows that the formula can be used to test hypotheses about the probabilities of various outcomes in fuzzy sample data.
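For reference, the Pearson statistic this abstract starts from has the classical form below; the fuzzy variant indicated on the right is our reading of the abstract (crisp counts replaced by sums of membership degrees), not the thesis's exact formula.

```latex
% Pearson's chi-square goodness-of-fit statistic over k categories with
% observed counts O_i and expected counts E_i; in a fuzzy setting each
% observation x_j contributes its membership degree mu_i(x_j) to category i.
\[
  \chi^2 = \sum_{i=1}^{k} \frac{(O_i - E_i)^2}{E_i},
  \qquad
  O_i^{\text{fuzzy}} = \sum_{j=1}^{n} \mu_i(x_j)
\]
```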
454

The basics of set theory - some new possibilities with ClassPad

Paditz, Ludwig 20 March 2012 (has links) (PDF)
No description available.
455

Equações integrais via teoria de domínios: problemas direto e inverso / Integral equations via domain theory: direct and inverse problems

Antônio Espósito Júnior 23 July 2008 (has links)
We present a study in domain theory of integral equations of the general form
f(x) = h(x) + ∫_{a(x)}^{b(x)} g(x, y, f(y)) dy
for a0 ≤ a(x) ≤ b(x) ≤ b0, with h, a, b defined for x ∈ [a0, b0] and g defined for x, y ∈ [a0, b0], in which the right-hand side defines a contraction on the metric space of bounded continuous real-valued functions. The starting point of this work is to revisit interval analysis in domain theory for the initial-value problem in ordinary differential equations, where a solution is expressed as a fixed point of the Picard operator. With the set of real numbers interpreted as the interval domain, real-valued functions are extended to operate on the domain of interval-valued functions of a real variable; in particular, the vector field is canonically extended in its second argument. Under these conditions, for the first time Fredholm and Volterra integral equations are studied over the domain of interval-valued functions of a real variable, with solutions expressed as fixed points of the interval integral operator defined through the canonical extension of g in its third argument. Equipping the domain of interval-valued functions with its measurement, we analyse the convergence of the interval Fredholm and Volterra operators in domain theory by computing the informatic derivative with respect to the measurement at the fixed point. Representing interval-valued functions as step functions over a partition of [a0, b0], the interval analysis algorithm is recast in domain theory, introducing the computation of an approximation of the canonical extension of g as the mesh of the partition tends to zero. This more complete treatment of integral equations is extended to the solution of initial-value and boundary-value problems for ordinary and partial differential equations. Finally, since a small variation of the vector field v or of the initial value y0 of the differential equation f′(x) = v(x, f(x)) with initial condition f(x0) = y0 can yield a solution as close as desired to the solution f, an algorithm is formalised in domain theory, for the first time, for the inverse problem: given a function f, determine an ordinary differential equation by computing a vector field v such that the associated Picard operator maps f as close as possible to itself, so that the ODE f′(x) = v(x, f(x)) admits f as an exact or arbitrarily close approximate solution.
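The fixed-point construction at the heart of this record can be illustrated numerically. The Python sketch below runs Picard iteration for a toy Volterra instance with a(x) = 0 and b(x) = x; the specific h and g are our own choices (picked so the kernel is a contraction), not examples from the thesis.

```python
# Picard iteration for f(x) = h(x) + int_0^x g(y, f(y)) dy on a fixed grid,
# using cumulative trapezoidal quadrature. With h(x) = 1 and g(y, f) = f/2,
# the exact solution is f(x) = exp(x/2).
import numpy as np

a0, b0, m = 0.0, 1.0, 201
xs = np.linspace(a0, b0, m)

h = lambda x: np.ones_like(x)    # h(x) = 1
g = lambda y, f: 0.5 * f         # contraction: |dg/df| = 0.5 < 1

def cumtrapz(y, x):
    """Cumulative trapezoidal integral, zero at the left endpoint."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    return out

f = h(xs)                        # Picard iterate f_0 = h
for _ in range(40):
    f = h(xs) + cumtrapz(g(xs, f), xs)   # f_{n+1}(x) = h(x) + int_0^x g dy

print(np.max(np.abs(f - np.exp(xs / 2))))  # ~1e-6: converged to grid accuracy
```

The interval/domain-theoretic version the thesis develops replaces these point values with nested intervals that shrink onto this fixed point as the partition is refined.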
456

Problem-Based SRS: método para especificação de requisitos de software baseado em problemas / Problem-Based SRS: method for software requirements specification based on problems

Souza, Rafael Gorski Moreno 23 August 2016 (has links)
Requirements specification has long been recognised as a critical activity in software development processes because of its impact on project risk when poorly performed. A large body of studies addresses theoretical aspects, proposed techniques, and recommended practices for Requirements Engineering (RE). To be successful, RE has to ensure that the specified requirements are complete and correct, meaning that all intents of the stakeholders in a given business context are covered by the requirements and that no unnecessary requirement has been introduced. However, accurately capturing the business intents of stakeholders remains a challenge and is a major factor in software project failures. This master's dissertation presents a novel method, referred to as "Problem-Based SRS", aimed at improving the quality of the Software Requirements Specification (SRS) in the sense that the stated requirements provide suitable answers to real customers' business problems. In this approach, knowledge about the software requirements is constructed from knowledge about the customer's problems. Problem-Based SRS organises activities and outcome objects in a process with five main steps. It supports the software requirements engineering team in systematically analysing the business context and specifying the software requirements, also taking into account a first glance and vision of the software. The quality aspects of the specifications are evaluated using traceability techniques and axiomatic design principles; a toy version of such a check is sketched below. The case studies conducted and presented in this document indicate that the proposed method can contribute significantly to better software requirements specifications.
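The completeness and correctness criteria stated in this abstract lend themselves to a simple set-based check. The Python sketch below is our own toy illustration of a traceability evaluation, with hypothetical problem and requirement identifiers; the dissertation's actual artefacts and criteria are richer.

```python
# Toy traceability check: every customer problem must be covered by some
# requirement (completeness), and every requirement must trace back to some
# problem (no unnecessary requirements).
problems = {"P1", "P2", "P3"}
requirements = {"R1", "R2", "R3", "R4"}
trace = {("R1", "P1"), ("R2", "P2"), ("R3", "P2"), ("R4", "P3")}

covered = {p for (_, p) in trace}   # problems reached by some requirement
traced = {r for (r, _) in trace}    # requirements justified by some problem

complete = problems <= covered      # all problems addressed?
no_orphans = requirements <= traced # every requirement justified?
print(complete and no_orphans)      # True for this toy matrix
```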
459

FORMALIZAÇÃO DA TRANSFORMAÇÃO DE MODELOS UTILIZANDO A LINGUAGEM Z / FORMALISATION OF MODEL TRANSFORMATION USING THE Z LANGUAGE

MENDES, Carlos César Gomes 29 July 2011 (has links)
In this thesis, an approach based on set theory and on the Z formal specification language is presented to formalise transformations between models in the context of Model Driven Engineering (MDE). The motivation for this research lies in the ambiguities and inconsistencies present in the transformation models used to abstract the model transformation process in the MDE context. The lack of precision in these models leads users to misinterpret complex structures present in the mapping of source model elements to target model elements. We therefore propose a formal methodology that eliminates the ambiguities and inconsistencies present in informal representations of model transformation in MDE. To this end, a formal conceptual framework is developed that groups the elements involved in the transformation process, represented by mathematical artefacts from set theory and specified in the Z language. The framework is validated through a case study containing complex transformations, tested in the proof tool Z/EVES, which supports statements made in the Z language.
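As a flavour of the kind of set-theoretic typing involved, the following Z-style fragment is our own schematic illustration, not a schema from the dissertation: a transformation is a function between sets of model elements that must preserve a well-formedness predicate.

```latex
% Schematic, Z-flavoured signature for a model transformation (illustrative
% only). wf_S and wf_T are assumed well-formedness predicates on source and
% target models, each model being a set of elements.
\[
  \begin{array}{l}
    \mathit{transform} : \mathbb{P}\,\mathit{SourceElem} \to \mathbb{P}\,\mathit{TargetElem} \\[2pt]
    \forall s : \mathbb{P}\,\mathit{SourceElem} \mid \mathit{wf_S}(s) \bullet \mathit{wf_T}(\mathit{transform}(s))
  \end{array}
\]
```

Obligations of this shape are what a prover such as Z/EVES is asked to discharge for each concrete transformation.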
460

[en] DATA FUSION OF TIME OF FLIGHT TECHNIQUES USING ULTRASONIC TRANSDUCERS FOR WIND SPEED MEASUREMENT / [pt] FUSÃO DE DADOS DAS TÉCNICAS DE TEMPO DE TRÂNSITO UTILIZANDO TRANSDUTORES ULTRA-SÔNICOS PARA MEDIÇÃO DA VELOCIDADE DO VENTO

JUAN MOISES MAURICIO VILLANUEVA 10 January 2018 (has links)
[en] Flow speed measurement has considerable relevance in industrial and scientific applications where measurements with low uncertainty are required. In this work, a system for wind speed measurement using ultrasonic transducers is designed and modelled. The system applies data fusion techniques to time-of-flight estimation, combining independent information provided by the threshold detection and phase difference methods. The work consists of two main parts. The first part presents an analysis of uncertainty and error propagation for the threshold detection and phase difference techniques, considering two structures for wind speed measurement; the measurement ranges and associated uncertainties are then compared for each structure. In the second part, data fusion techniques applied to instrumentation and measurement are studied, and two main techniques are singled out: (a) Maximum Likelihood Estimation (MLE) and (b) fuzzy compatibility relations with Ordered Weighted Average (OWA) operators and partial aggregation. These fusion techniques are then applied to time-of-flight estimation by considering several independent measurements obtained through the threshold detection and phase difference techniques. Finally, an uncertainty analysis quantifies the influence of each independent measurement on the global fusion result. A case study is presented in which an instrument for wind speed measurement with low uncertainty is designed and modelled, with appropriate data fusion techniques chosen to improve accuracy and reliability. Experiments performed in a low-speed wind tunnel verify the consistency of the theoretical studies.
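For context, the textbook single-path time-of-flight relations underlying ultrasonic anemometry are given below; the two measurement structures analysed in the thesis refine this basic scheme.

```latex
% L is the transducer spacing, c the speed of sound, and v the wind
% component along the path; t_{12} and t_{21} are the downwind and upwind
% transit times.
\[
  t_{12} = \frac{L}{c + v}, \qquad t_{21} = \frac{L}{c - v}
  \quad\Longrightarrow\quad
  v = \frac{L}{2}\left(\frac{1}{t_{12}} - \frac{1}{t_{21}}\right)
\]
% Differencing the reciprocal transit times eliminates c, and with it the
% dependence on air temperature.
```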
