  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Semantic markup for inconsistency handling in requirements engineering

Chen, Zerong. January 2004 (has links)
Thesis (M.Info.Sys.)--University of Wollongong, 2004. / Typescript. Includes bibliographical references: p. 91-100.
12

A Mixed-Method Case Study of Swedish Loan Officers' Mortgage Credit Decisions

Ekberg, Leonard January 2022 (has links)
The purpose of this thesis was to examine to what extent, and why, loan officers’ judgment influences mortgage credit decisions. A mixed method was used to collect primary data from loan officers at a Swedish commercial bank. Quantitative data were collected from 39 loan officers answering an identical mortgage application, and qualitative data were collected from interviews held with five loan officers. The findings showed considerable inconsistency among the loan officers' mortgage credit decisions and that judgment plays a vital role in the decision-making process. The credit amount granted ranged from SEK 4.25 million to SEK 5.6 million, whereas the interest rate charged ranged from 40 to 51 basis points across four fixed-interest periods. Overall, the findings can be explained by the loan officers’ lack of explicit rules governing the relationship between the data presented in the mortgage application and the credit decision they should make. The theoretical contribution of the thesis is to enhance our understanding of how loan officers make credit decisions and of the role of judgment in that process. The research findings imply that households should expect large variations in offered credit amounts and interest rates, depending on which loan officer manages their loan application.
13

The theory and pedagogy of semantic inconsistency in critical reasoning

Dixon, Scott Walton 05 1900 (has links)
One aspect of critical reasoning is the analysis and appraisal of claims and arguments. A typical problem, when analysing and appraising arguments, is inconsistent statements. Although several inconsistencies may have deleterious effects on rationality and action, not all of them do. As educators, we also have an obligation to teach this evaluation in a way that does justice to our normal reasoning practices and judgements of inconsistency. Thus, there is a need to distinguish the acceptable inconsistencies from those that are not, and to impart that information to students. We might ask: what is the best concept of inconsistency for critical reasoning and pedagogy? While the answer might appear obvious to some, the history of philosophy shows that there are many concepts of “inconsistency”, the most common of which comes from classical logic and its reliance on opposing truth-values. The current exemplar of this is the standard truth-functional account from propositional logic. Initially, this conception is shown to be problematic — practically, conceptually and pedagogically speaking. Especially challenging from the classical perspective are the concepts of ex contradictione quodlibet and ex falso quodlibet. These concepts may poison the well against any notion of inconsistency, which is not something that should be done unreflectively. Ultimately, the classical account of inconsistency is rejected. In its place, a semantic conception of inconsistency is argued for and demonstrated to handle natural reasoning cases effectively. This novel conception utilises conceptual antonym theory to explain semantic contrast and gradation, even in the absence of non-canonical antonym pairs. The semantic conception of inconsistency also fits with an interrogative argument model that exploits inconsistency to display semantic contrast in reasons and conclusions. A method for determining substantive inconsistencies follows from this argument model in a straightforward manner. The conceptual fit is then incorporated into the pedagogy of critical reasoning, resulting in a natural approach to reasoning which students can apply to practical matters of everyday life, including inconsistency. Thus, the best conception of inconsistency for critical reasoning and its pedagogy is the semantic, not the classical. / Philosophy, Practical and Systematic Theology / D. Phil
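As a hedged illustration of the standard truth-functional account criticised above (this sketch is not taken from the thesis), the following Python snippet tests a set of propositional formulas for classical inconsistency by exhaustive truth-table search and shows the ex contradictione quodlibet effect: an inconsistent premise set vacuously entails any conclusion.

    from itertools import product

    def inconsistent(formulas, atoms):
        # Classically (truth-functionally) inconsistent iff no valuation satisfies every formula.
        return not any(all(f(dict(zip(atoms, vals))) for f in formulas)
                       for vals in product([False, True], repeat=len(atoms)))

    p     = lambda v: v["p"]
    not_p = lambda v: not v["p"]
    q     = lambda v: v["q"]

    print(inconsistent([p, not_p], ["p"]))   # True: {p, not-p} has no satisfying valuation

    # Ex contradictione quodlibet: since no valuation satisfies the premises, every
    # valuation that does (there are none) trivially satisfies q, so q "follows".
    print(all(q(v) for v in (dict(zip(["p", "q"], vals))
                             for vals in product([False, True], repeat=2))
              if p(v) and not_p(v)))         # True (vacuously)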
14

Product strategies under durability, lock-in and assortment considerations

Jonnalagedda, Sreelata 21 June 2010 (has links)
In this dissertation I focus on two considerations that influence the product strategy of a firm. The first is consumers’ choice and its influence on a firm’s product offering, and the second is the interaction between durable products and their contingent consumables. First, I study the assortment planning problem for a firm; I illustrate the complexity of solving this product selection problem, present simple solutions for some commonly used choice models, and develop heuristics for other practically motivated models. Second, I study the incentives of a durable goods monopolist when she can lock in consumers through a contingent consumable. Adopting a lock-in strategy has two interesting effects on the incentives of a durable goods manufacturer. On one hand, by locking in consumers to its consumable, a durable goods monopolist can curb its temptation to reduce durable prices over time, thereby mitigating the classic time inconsistency problem. On the other hand, lock-in will create a hold-up issue and adversely affect consumers’ expectations of future prices for the consumable. My research demonstrates the trade-off between time inconsistency and hold-up, and derives insights about the conditions under which a lock-in strategy can be effective. I further analyze the trade-off between time inconsistency and hold-up associated with lock-in in the presence of consumable stock-piling. My findings indicate that, in the presence of consumer stock-piling, lock-in has an effect similar to that of competition in the consumables market: both help to dampen the hold-up problem that arises from lock-in and, at the same time, increase the manufacturer’s incentive to reduce durable prices over time. A sketch of the assortment-selection setting appears below. / text
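As a rough, hedged sketch of the kind of product-selection problem described above (the dissertation's own choice models and heuristics are not reproduced here; this assumes only the standard multinomial logit model, for which revenue-ordered assortments are known to contain an optimal one), consider:

    def mnl_revenue(assortment, weights, prices, w0=1.0):
        # Expected revenue per customer under the multinomial logit (MNL) choice model:
        # product j in the assortment is purchased with probability w_j / (w0 + sum of
        # offered weights), where w0 is the attraction weight of the no-purchase option.
        denom = w0 + sum(weights[j] for j in assortment)
        return sum(prices[j] * weights[j] for j in assortment) / denom

    def best_revenue_ordered(weights, prices):
        # Revenue-ordered heuristic: consider only assortments made of the k highest-priced
        # products, k = 1..n; for the plain MNL model one of these is provably optimal.
        order = sorted(range(len(prices)), key=lambda j: -prices[j])
        candidates = [order[:k] for k in range(1, len(order) + 1)]
        return max(candidates, key=lambda S: mnl_revenue(S, weights, prices))

    prices  = [10.0, 8.0, 6.0, 4.0]   # hypothetical product prices
    weights = [0.5, 1.0, 1.5, 2.0]    # hypothetical MNL attraction weights
    print(best_revenue_ordered(weights, prices))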
15

Essays in normative macroeconomics

Brendon, Charles Frederick January 2011 (has links)
This thesis is divided into two main parts. The first provides a novel analysis of dynamic optimal taxation under the assumption that individuals in an economy have ‘hidden’ idiosyncratic productivity levels. Specifically, it shows how to derive a complete set of optimality conditions characterising the solution to a problem of this kind. The method relies on constructing perturbations to the consumption-output allocations of agents in a manner that preserves all relevant incentive compatibility restrictions. We are able to use it to generalise the ‘inverse Euler condition’ to cases in which preferences are non-separable between consumption and labour supply, and to prove a number of novel results about optimal income and savings tax wedges. The second main part investigates a more general problem. When policymakers are constrained in their present choices by expectations of future outcomes a well-known time-inconsistency problem hinders optimal decision-making: the preferences of policymakers who exist at different points in time are not in agreement with one another, because of differences in the constraints faced by each. We present a new approach to determining policy in this setting, based on asking: What policy would be chosen by a decisionmaker who did not know the time period in which their choice was to be implemented? This is akin to designing institutions from behind a Rawlsian ‘veil of ignorance’. The theory is used to obtain qualitative policy prescriptions across a number of environments; these policies have several appealing properties that we outline.
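For orientation, the ‘inverse Euler condition’ referred to above is usually stated, in the separable benchmark case, roughly as follows (the thesis's contribution generalises this beyond separable preferences):

    % Standard inverse Euler condition with separable utility, discount factor \beta and
    % gross savings return R (the separable benchmark that the thesis generalises):
    \frac{1}{u'(c_t)} = \frac{1}{\beta R}\, \mathbb{E}_t\!\left[ \frac{1}{u'(c_{t+1})} \right]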
16

Automating Regression Test Selection for Web Services

Ruth, Michael Edward 08 August 2007 (has links)
As Web services grow in maturity and use, so do the methods used to test and maintain them. Regression testing is a major component of most testing systems but has only begun to be applied to Web services. The majority of tools and techniques applying regression testing to Web services focus on test-case generation, thus ignoring the potential savings of regression test selection. Regression test selection optimizes the regression testing process by selecting a subset of all tests while still maintaining some level of confidence that the system performs no worse than the unmodified system. A safe regression test selection technique implies that, after selection, the level of confidence is as high as it would be if no tests had been removed. Since safe regression test selection techniques generally involve code-based (white-box) testing, they cannot be directly applied to Web services because of their loosely coupled, standards-based, and distributed nature. A framework that automates both the regression test selection and regression testing processes for Web services in a decentralized, end-to-end manner is proposed. As part of this approach, special consideration is given to the concurrency issues that may occur in an autonomous and decentralized system. The resulting synchronization method is presented along with a set of algorithms that manage the regression testing and regression test selection processes throughout the system. A set of empirical results demonstrates the feasibility and benefit of the approach.
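A minimal sketch of the selection step that such a framework automates is shown below; it assumes per-test coverage data is already available (the proposed decentralised, end-to-end mechanism for Web services is considerably more involved) and keeps every test whose covered elements intersect the set of modified elements, the usual basis of safe selection.

    def select_regression_tests(coverage, modified):
        # Safe-selection sketch: keep a test if it exercises any element (e.g. a Web
        # service operation) that changed since the previous version; skip the rest.
        return {test for test, covered in coverage.items() if covered & modified}

    coverage = {                        # hypothetical per-test coverage data
        "test_checkout": {"opA", "opB"},
        "test_search":   {"opC"},
        "test_login":    {"opA", "opD"},
    }
    print(select_regression_tests(coverage, {"opA"}))   # {'test_checkout', 'test_login'}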
17

Measuring inconsistency in probabilistic knowledge bases / Medindo inconsistência em bases de conhecimento probabilístico

De Bona, Glauber 22 January 2016 (has links)
In terms of standard probabilistic reasoning, in order to perform inference from a knowledge base, it is normally necessary to guarantee the consistency of that base. When we come across an inconsistent set of probabilistic assessments, it interests us to know where the inconsistency is, how severe it is, and how to correct it. Inconsistency measures have recently been put forward as a tool to address these issues in the Artificial Intelligence community. This work investigates the problem of measuring inconsistency in probabilistic knowledge bases. Basic rationality postulates have driven the formulation of inconsistency measures within classical propositional logic. In the probabilistic case, the quantitative character of probabilities yields an extra desirable property: inconsistency measures should be continuous. To meet this requirement, inconsistency in probabilistic knowledge bases has been measured via distance minimisation. In this thesis, we prove that the continuity postulate is incompatible with basic desirable properties inherited from classical logic. Since minimal inconsistent sets are the basis for some desiderata, we look for more suitable ways of localising the inconsistency in probabilistic logic, while analysing the underlying consolidation processes. The AGM theory of belief revision is extended to encompass consolidation via probability adjustment. The new forms of characterising the inconsistency that we propose are employed to weaken some postulates, restoring the compatibility of the whole set of desirable properties. Investigations in Bayesian statistics and formal epistemology have been interested in measuring an agent's degree of incoherence. In these fields, probabilities are usually construed as an agent's degrees of belief, determining her gambling behaviour. Incoherent agents hold inconsistent degrees of belief, which expose them to disadvantageous bet transactions, also known as Dutch books. Statisticians and philosophers suggest measuring an agent's incoherence through the guaranteed loss she is vulnerable to. We prove that these incoherence measures via Dutch books are equivalent to the inconsistency measures via distance minimisation from the AI community.
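Schematically, the distance-minimisation measures referred to above take the inconsistency of an assessment to be the least distortion needed to make it coherent; the generic form (not necessarily the exact measure studied in the thesis) is:

    % Generic distance-based inconsistency measure for a vector q of probabilistic assessments,
    % where \Pi denotes the set of coherent probability assignments over the same formulas
    % and d is a distance, e.g. a p-norm:
    I(q) = \min_{\pi \in \Pi} d(q, \pi), \qquad d(q, \pi) = \lVert q - \pi \rVert_p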
18

Psicofísica de percepção em tomadas de decisão: a propriedade aditiva do grau de inconsistência intertemporal e uma nova proposta para a função peso de probabilidades / Psychophysics of perception in decision making: the additive property of the intertemporal degree of inconsistency and a new proposal for the probability weighting function.

Destefano, Natália 20 June 2013 (has links)
Intertemporal decision making is influenced not only by the effect of discounting reward value at different moments, but also by the effect of time perception. One of the main difficulties affecting standard experiments involving these choices is the simultaneity of both effects in the discounting process. By unifying the psychophysical laws of delay perception and associating them with value-discounting functions, we proposed a generalized form for the intertemporal discounting process involving both domains. We also showed that the additive property of the degree of inconsistency, a quantity obtained from the discount functions, allows us to discriminate the influence of each effect on the decision-making process. In a manner similar to that proposed for intertemporal choices, we extended the psychophysical theory of perception to the domain of probabilities. Adopting the perspective that the average delay to receiving a reward is related to its probability of receipt, we obtained a probabilistic discount function covering both value discounting and probability perception. In parallel with the experimental development, we also explored the theoretical (axiomatic) models that underlie probabilistic decision making. We proposed that the shape of the probability weighting function, explored in models such as rank-dependent expected utility theory and cumulative prospect theory, be represented by the generalized discount function we obtained from the phenomenological models. In this case, the weighting function is supported by phenomenological decision models and derives from the assumption that individuals behave similarly when faced with probabilities and delays.
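For orientation, two standard phenomenological value-discounting functions of the kind this work builds on are the exponential (time-consistent) and hyperbolic (time-inconsistent) forms below; the thesis's generalized function and its degree-of-inconsistency measure are not reproduced here, so this is only an assumed benchmark:

    % Exponential and hyperbolic discounting of a reward of value V_0 delayed by t,
    % with discount rates k_e and k_h; deviation from the exponential form is the
    % usual source of intertemporal inconsistency:
    V_{\mathrm{exp}}(t) = V_0\, e^{-k_e t}, \qquad V_{\mathrm{hyp}}(t) = \frac{V_0}{1 + k_h t}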
19

Essays in behavioural and education economics

Carroll, Nathan John 10 July 2013 (has links)
No description available.
20

An Implementation of the USF/ Calvo Model in Verilog-A to Enforce Charge Conservation in Applicable FET Models

Nicodemus, Joshua 11 March 2005 (has links)
The primary goal of this research is to put into code a unique approach to addressing problems with nonlinear FET models that were exposed by Calvo in her 1994 work. Since that time, the simulation software for which her model was appropriate underwent a significant update, necessitating the rewriting of her model code for a few applicable FET models in Verilog-A, making it more compatible with the new versions of the software and simulators. The problems addressed are the inconsistencies between the small-signal model and the corresponding large-signal models due to a factor called transcapacitance. It has been noted by several researchers that the presence of a nonlinear capacitor in a circuit model mathematically implies the existence of a parallel transcapacitor if the value of its capacitance is a function of two bias voltages, the local and a remote voltage. As a consequence, simulating small-signal excursions with the linear model yields results inconsistent with the large-signal model if the linear model does not include the transcapacitance, which is inevitably present. The Calvo model attempted to improve the performance of these models by modifying terms in the charge source equations to minimize these transcapacitances. Thanks to the present effort, Calvo's theory is now incorporated in the Angelov model and can also be implemented in some other popular existing models, such as the Curtice, Statz and Parker-Skellern models.
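To make the transcapacitance point concrete (a generic two-voltage-controlled charge, not Calvo's specific equations): if a model's gate charge depends on two bias voltages, linearising it yields a self-capacitance and a transcapacitance term, and a small-signal model that keeps only the former is inconsistent with the large-signal charge source.

    % Linearising a gate charge Q_g(V_{gs}, V_{ds}) about a bias point gives the
    % small-signal gate current
    i_g = \frac{dQ_g}{dt} = C_{gg}\,\frac{dv_{gs}}{dt} + C_{gd}\,\frac{dv_{ds}}{dt},
    \qquad C_{gg} = \frac{\partial Q_g}{\partial V_{gs}}, \quad
           C_{gd} = \frac{\partial Q_g}{\partial V_{ds}}
    % C_{gd} is the transcapacitance; a small-signal model that omits it is inconsistent
    % with the corresponding large-signal charge source.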
