1 |
Dyads, Rationalist Explanations for War, and the Theoretical Underpinnings of IR Theory. Gallop, Max Blau (January 2015)
Critiquing dyads as the unit of analysis in statistical work has become increasingly prominent; a number of scholars have demonstrated that ignoring the interdependencies and selection effects among dyads can bias our inference. My dissertation argues that the problem is even more serious. The bargaining model relies on the assumption that bargaining occurs between two states in isolation. When we relax this assumption, one of the most crucial findings of these bargaining models vanishes: it is no longer irrational, even with complete information and an absence of commitment issues, for states to go to war. By accounting for the non-dyadic nature of interstate relations, we are better able to explain a number of empirical realities and better able to predict when states will go to war.

In the first chapter of my dissertation I model a bargaining episode between three players and demonstrate its marked divergence from canonical bargaining models. In traditional two-player bargaining models, it is irrational for states to go to war. I find this irrationality of war to be in part an artifact of limiting the focus to two players. In the model in chapter one, three states are bargaining over policy, and each state has a preference over this policy. When these preferences diverge enough, it can become impossible for the players to resolve their disputes peacefully. One implication of this model is that the difference between two- and three-player bargaining is not just a difference in degree but a difference in kind. The model in this chapter forms the core of the writing sample enclosed. Chapter two tests whether my model is an artifact of a particular set of assumptions. I extend the bargaining model to N players and modify the types of policies being bargained over, and I find that not only do the results hold, in many cases they are strengthened. The second chapter also changes chapter one's model so that states bargain over resources rather than policy, which yields a surprising finding: while we might expect states to be more willing to fight in defense of the homeland than over a policy, if more than two states are involved it is in fact the disputes over territory that are significantly more peaceful.

In the final chapter of my dissertation, I attempt to apply the insights from the theoretical chapters to the study of interstate conflict and war. In particular, I compare a purely dyadic model of interstate crises to a model that accounts for non-dyadic interdependencies. The non-dyadic model that I present is an Additive and Multiplicative Effects network model, and it substantially outperforms the traditional dyadic model, both in explaining the variance of the data and in predicting out of sample. By combining the theoretical work in the earlier chapters with the empirical work in the final chapter, I show that dyadic models not only limit our ability to model the causes of conflict, but that by moving beyond the dyad we gain notably in our ability to understand the world and make predictions.
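As a purely illustrative aside (not the dissertation's model), the Python sketch below captures the flavor of the three-player calculation described above: each state has a hypothetical ideal point on a one-dimensional policy space and an invented expected war payoff, and we check whether any policy leaves all three states weakly better off than fighting. The names, the quadratic utility, and all numbers are assumptions made for the example.

```python
# Illustrative only: invented ideal points and war payoffs on a [0, 1] policy space.
ideal_points = {"A": 0.0, "B": 0.5, "C": 1.0}          # hypothetical policy preferences
war_payoffs = {"A": -0.30, "B": -0.10, "C": -0.30}     # hypothetical expected values of war

def utility(state, policy):
    """Quadratic loss in the distance from the state's ideal point."""
    return -(policy - ideal_points[state]) ** 2

def peaceful_policies(grid=[i / 1000 for i in range(1001)]):
    """Policies every state weakly prefers to its war payoff."""
    return [x for x in grid
            if all(utility(s, x) >= war_payoffs[s] for s in ideal_points)]

acceptable = peaceful_policies()
print("bargaining range exists:", bool(acceptable))
if acceptable:
    print(f"mutually acceptable policies: [{acceptable[0]:.2f}, {acceptable[-1]:.2f}]")
```

With these made-up numbers a narrow bargaining range survives; pushing the war payoffs toward zero (cheaper war) or the ideal points further apart empties the set, which is the preference-divergence breakdown the abstract points to.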
|
2 |
Hierarchical Game-Theoretic Models of Transparency in the Administrative State. Tai, Laurence (30 September 2013)
This dissertation develops three game-theoretic models, one in each of its three chapters, to explore the strategic implications of transparency in the administrative state. Each model contains a similar set of three players: a political principal, an agent representing an agency or a bureaucrat, and an interested third party. The models consider the utility of transparency as a tool for mitigating regulatory capture, in which the third party influences the agent to serve its interest rather than the principal's. Chapter 1, "Transparency and Media Scrutiny in the Regulatory Process," models transparency as the volume of records that the media receives from the agent, which raises the likelihood of news alleging low costs to the interest group after the agent's proposal of lax regulation. Such reports are costly to these two players and may deter the group from capturing the agent. Among other things, the model describes costs due to distorted policy proposals and loss of information when greater transparency causes inaccurate reports to increase along with accurate ones. In Chapter 2, "Transparency and Power in Rulemaking," transparency is a requirement for the agent to disclose an item of information, such as his message from the regulated party or his signal about the cost of regulation. The agent can always disclose this information, but doing so may increase the principal's power to set regulation higher than he or the regulated party desires. A key result is that transparency is not necessary for the principal to know as much as the agent does but may discourage the generation of the message or signal. Chapter 3, "A Reverse Rationale for Reliance on Regulators," suggests that an agent can benefit a principal not by gathering information from an outsider that she cannot access, but by preventing her from obtaining or acting on this information. The agent benefits the principal when he induces additional effort in the outside party's information generation, because he is more adversarial toward that party than she is. Mandatory disclosure of the agent's information is harmful because it effectively allows the outsider to communicate directly with the principal and provide lower-quality information.
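As a rough, invented illustration of the deterrence logic sketched for Chapter 1 (not the dissertation's model), the snippet below treats transparency as the number of records released to the media and asks whether the expected cost of a damaging report outweighs the interest group's gain from capturing the agent. The function name, the linear mapping from records to report probability, and all payoffs are assumptions.

```python
# Illustrative only: a one-shot capture decision with hypothetical payoffs.
def group_captures(gain_from_lax_rule, report_cost, records_released, accuracy=0.6):
    # More released records -> higher chance the media can document low costs (toy mapping).
    p_report = min(1.0, records_released * accuracy / 10.0)
    return gain_from_lax_rule > p_report * report_cost

for records in (0, 2, 5, 10):
    outcome = "capture" if group_captures(4.0, 10.0, records) else "deterred"
    print(f"{records:>2} records released -> {outcome}")
```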
|
3 |
A Deontic Analysis of Inter-Organizational Control Requirements. Nguyen, Vu (28 May 2008)
This research focuses on the design and verification of inter-organizational controls. Instead of looking at a documentary procedure, which is the flow of documents and data among the parties, the research examines the underlying deontic purpose of the procedure, the so-called deontic process, and identifies control requirements to secure this purpose. The vision of the research is a formal theory for streamlining bureaucracy in business and government procedures. Underpinning most inter-organizational procedures are deontic relations, which concern the rights and obligations of the parties. When all parties trust each other, they are willing to fulfill their obligations and honor the counterparties' rights; thus controls may not be needed. The challenge is in cases where trust may not be assumed. In these cases, the parties need to rely on explicit controls to reduce their exposure to the risk of opportunism. However, at present there is no analytic approach or technique to determine which controls are needed for a given contracting or governance situation. The research proposes a formal method for deriving inter-organizational control requirements based on static analysis of deontic relations and dynamic analysis of deontic changes. The formal method will take a deontic process model of an inter-organizational transaction and certain domain knowledge as inputs to automatically generate control requirements that a documentary procedure needs to satisfy in order to limit fraud potentials. The deliverables of the research include a formal representation, namely Deontic Petri Nets, which combine multiple modal logics and Petri nets for modeling deontic processes; a set of control principles that represent an initial formal theory of the relationships between deontic processes and documentary procedures; and a working prototype that uses model checking to identify fraud potentials in a deontic process and generate control requirements to limit them. Fourteen scenarios of two well-known international payment procedures -- cash in advance and documentary credit -- have been used to test the prototype. The results showed that all control requirements stipulated in these procedures could be derived automatically.
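A minimal sketch, using an invented two-party "cash in advance" scenario rather than the dissertation's Deontic Petri Net formalism, of what "identifying fraud potentials in a deontic process" can look like: enumerate the parties' possible choices and flag reachable outcomes in which one party's obligation stays unfulfilled after the counterparty has performed. All state structure and action names are assumptions.

```python
# Illustrative only: toy deontic outcomes for cash in advance, not the thesis formalism.
from itertools import product

buyer_actions = ["pay", "skip_payment"]
seller_actions = ["ship", "skip_shipment"]

def outcome(buyer, seller):
    paid = buyer == "pay"
    shipped = seller == "ship" and paid          # seller only ships after payment
    return {"buyer_paid": paid,
            "sellers_obligation_met": (not paid) or shipped}

fraud_potentials = [(b, s) for b, s in product(buyer_actions, seller_actions)
                    if outcome(b, s)["buyer_paid"]
                    and not outcome(b, s)["sellers_obligation_met"]]
print("fraud potentials (buyer paid, goods never shipped):", fraud_potentials)
```

A control requirement derived from such an analysis would be a mechanism (for instance, payment held by a trusted intermediary, as in documentary credit) that removes the surviving action pair from the reachable outcomes.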
|
4 |
Alloy-Guided Verification of Cooperative Autonomous Driving Behavior. VanValkenburg, MaryAnn E. (18 May 2020)
Alloy is a lightweight formal modeling tool that generates instances of a software specification to check properties of the design. This work demonstrates the use of Alloy for the rapid development of autonomous vehicle driving protocols. We contribute two driving protocols: a Normal protocol that represents the unpredictable yet safe driving behavior of typical human drivers, and a Connected protocol that employs connected technology for cooperative autonomous driving. Using five properties that define safe and productive driving actions, we analyze the performance of our protocols in mixed traffic. Lightweight formal modeling is a valuable way to reason about driving protocols early in the development process because it can automate the checking of safety and productivity properties and prevent costly design flaws.
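Alloy itself checks all instances of a specification up to a bound; the Python sketch below mimics that style of exhaustive exploration on an invented two-car, one-lane ring model and asserts a collision-freedom property over every reachable state. The road size, movement rules, and "protocol" are assumptions for illustration, not the thesis's Normal or Connected protocols.

```python
# Illustrative only: exhaustive state exploration of a toy driving model.
from collections import deque

ROAD = 6  # cells on a one-lane ring, two cars

def successors(state):
    a, b = state
    for da in (0, 1):          # each car may stay or advance one cell
        for db in (0, 1):
            na, nb = (a + da) % ROAD, (b + db) % ROAD
            if na != nb:       # the toy "protocol": refuse moves into an occupied cell
                yield (na, nb)

def explore(initial=(0, 3)):
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        assert state[0] != state[1], f"collision state reachable: {state}"
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return len(seen)

print("reachable states:", explore(), "- no collision state among them")
```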
|
5 |
GHENeSys, a unified and high-level net. San Pedro Miralles, José Armando (23 March 2012)
Graph schemas are a strong approach to the representation (in different degrees of formality) of large and complex systems in several areas of knowledge. This fact has led to a continuous growth of methods and new formal schemas, especially in Engineering. Petri Nets (PN) are one of these methods; they appeared in 1962 and since then have improved the representation of discrete control, discrete systems, logistics, workflow, supply chains, computer networks, and a variety of other systems. As with any other representation, the first attempts to use it in practice were always made in a close relation between the representation and the domain of discourse, opening opportunities for several extensions. The need to use it in large systems also brought a discussion about the formalism and the need for high-level nets. However, all this development, besides the broad use in different domains, raised the need for a unified approach. Since 1992 such unification has been addressed by the scientific community and finally, at the beginning of this century, an ISO/IEC standard was proposed. That proposal also brings two new challenges: i) to show that any net proposed as belonging to the Petri Net class satisfies the requirements of the standard; ii) to enter the discussion on the semantics of extensions and also provide practical, unified system environments that can really support the design of large and complex systems. In this work, we present a proposal for the development of an integrated modeling environment for the representation of discrete event systems using Petri Nets. This environment uses an underlying formalism framed within the rules recently defined by the ISO/IEC in standard 15909. The formalism used is the GHENeSys net, conceived and developed at the Design Lab (D-Lab) of the University of São Paulo, whose definition is extended using the definition of Coloured Petri Nets (CPN) as a starting point in order to allow the representation of types within the net tokens. A testing prototype for this integrated modeling environment, the result of integrating several previous works by D-Lab members that were never implemented or integrated in a single formalism, is presented. This prototype is used in a case study to validate, in a practical way, the new elements added to the definition of GHENeSys to allow the modeling of systems using the elements of high-level Petri Nets (HLPN).
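To make the "types within the net tokens" idea concrete, here is a small illustrative Python interpreter for a net whose tokens carry data and whose transitions have guards over that data. It is a toy sketch, not the GHENeSys formalism or an ISO/IEC 15909 implementation; the places, tokens, and the "ship" transition are invented.

```python
# Illustrative only: a minimal net with typed tokens and guarded transitions.
from dataclasses import dataclass, field

@dataclass
class Net:
    marking: dict = field(default_factory=dict)   # place -> list of typed tokens

    def enabled(self, transition):
        inputs, guard, _ = transition
        tokens = [self.marking.get(p, []) for p in inputs]
        return all(tokens) and guard(*[t[0] for t in tokens])

    def fire(self, transition):
        inputs, guard, produce = transition
        assert self.enabled(transition), "transition not enabled"
        consumed = [self.marking[p].pop(0) for p in inputs]
        for place, token in produce(*consumed):
            self.marking.setdefault(place, []).append(token)

net = Net({"orders": [("book", 2), ("cd", 1)], "stock": [("book", 5)]})
ship = (
    ("orders", "stock"),                                                   # input places
    lambda order, item: order[0] == item[0] and item[1] >= order[1],       # guard on token data
    lambda order, item: [("shipped", order), ("stock", (item[0], item[1] - order[1]))],
)

if net.enabled(ship):
    net.fire(ship)
print(net.marking)
```

A plain place/transition net would see only indistinguishable dots in "orders" and "stock"; the typed tokens are what let the guard ask whether the stocked item matches the order and covers the requested quantity.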
|
6 |
Formal Composition and Recovery Policies in Service-Based Business Processes. Hamadi, Rachid, Computer Science & Engineering, Faculty of Engineering, UNSW (January 2005)
Process-based composition of Web services is emerging as a promising technology for the effective automation of integrated and collaborative applications. As Web services are often autonomous and heterogeneous entities, coordinating their interactions to build complex processes is a difficult, error-prone, and time-consuming task. In addition, since Web services usually operate in dynamic and highly evolving environments, there is a need to support flexible and correct execution of integrated processes. In this thesis, we propose a Petri net-based framework for formal composition and recovery policies in service-based business processes. We first propose an algebra for composing Web services. The formal semantics of this algebra is expressed in terms of Petri nets. The use of a formal model allows the effective verification and analysis of properties, both within a service, such as termination and absence of deadlock, and between services, such as behavioral equivalences. We also develop a top-down approach for the correct (e.g., deadlock-free and terminating) composition of complex business processes. The approach defines a set of refinement operators that guarantee correctness of the resulting business process nets at design time. We then introduce the Self-Adaptive Recovery Net (SARN), an extended Petri net model for specifying exceptional behavior in business processes. SARN adapts the structure of the underlying Petri net at run time to handle exceptions while keeping the Petri net design simple and easy. The proposed framework caters for the specification of high-level recovery policies that are associated with either a single task or a set of tasks, called a recovery region. Finally, we propose a pattern-based approach to dynamically restructure SARN. These patterns capture the ways past exceptions have been dealt with. The objective is to continuously restructure recovery regions within the SARN model to minimize the impact of exception handling. To illustrate the viability of the proposed composition and exception handling techniques, we have developed HiWorD (HIerarchical WORkflow Designer), a hierarchical Petri net-based business process modeling and simulation tool.
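The following toy Python sketch conveys the algebra-of-composition idea: services are tiny net-like fragments, seq and alt build larger processes from them, and a crude reachability check stands in for the design-time correctness guarantees (termination, absence of deadlock) mentioned above. The encoding is a simplified place-graph, not the thesis's Petri net semantics, and all service names are invented.

```python
# Illustrative only: sequence/choice composition over toy service fragments.
def atomic(name):
    return {"places": {f"{name}.in", f"{name}.out"},
            "arcs": {(f"{name}.in", f"{name}.out")},
            "start": f"{name}.in", "end": f"{name}.out"}

def seq(a, b):
    # Run a, then b: connect a's end to b's start.
    return {"places": a["places"] | b["places"],
            "arcs": a["arcs"] | b["arcs"] | {(a["end"], b["start"])},
            "start": a["start"], "end": b["end"]}

def alt(a, b, name="choice"):
    # Run a or b: fresh entry/exit nodes branch to either operand.
    start, end = f"{name}.in", f"{name}.out"
    return {"places": a["places"] | b["places"] | {start, end},
            "arcs": a["arcs"] | b["arcs"]
                    | {(start, a["start"]), (start, b["start"]),
                       (a["end"], end), (b["end"], end)},
            "start": start, "end": end}

def always_terminates(net):
    # Every node reachable from the start can still reach the end node.
    succ = {}
    for s, t in net["arcs"]:
        succ.setdefault(s, set()).add(t)
    def reach(src):
        seen, stack = set(), [src]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(succ.get(node, ()))
        return seen
    return all(net["end"] in reach(node) for node in reach(net["start"]))

billing = seq(atomic("invoice"), alt(atomic("card"), atomic("transfer")))
print("always terminates:", always_terminates(billing))
```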
|
7 |
A Resource-Aware Component Model for Embedded Systems. Vulgarakis, Aneta (January 2009)
Embedded systems are microprocessor-based systems that cover a large range of computer systems, from ultra-small computer-based devices to large systems monitoring and controlling complex processes. The particular constraints that must be met by embedded systems, such as timeliness, resource-use efficiency, short time-to-market, and low cost, coupled with the increasing complexity of embedded system software, demand technologies and processes that can tackle these issues. An attractive approach to managing software complexity, increasing productivity, reducing time to market, and decreasing development costs lies in the adoption of the component-based software engineering (CBSE) paradigm. The specific characteristics of embedded systems lead to important design issues that need to be addressed by a component model. Consequently, a component model for the development of embedded systems needs to systematically address extra-functional system properties. The component model should support predictable system development and as such guarantee the absence or presence of certain properties. Formal methods can be a suitable solution to guarantee the correctness and reliability of software systems.

Following the CBSE spirit, this thesis introduces the ProCom component model for the development of distributed embedded systems. ProCom is structured in two layers, in order to support both a high-level view of loosely coupled subsystems encapsulating complex functionality and a low-level view of control loops with restricted functionality. These layers differ from each other in terms of execution model, communication style, synchronization, and so on, but also in the kinds of analysis that are suitable. To describe the internal behavior of a component in a structured way, this thesis proposes the REsource Model for Embedded Systems (REMES), which describes both the functional and extra-functional behavior of interacting embedded components. We also formalize the resource-wise properties of interest and show how to analyze such behavioral models against them.
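As an invented, minimal illustration of what "resource-wise properties" of a component behavior might mean in practice (not REMES or ProCom semantics), the sketch below annotates behavioral modes with time, energy, and memory figures and checks them against a budget; every name and number is an assumption.

```python
# Illustrative only: toy resource-annotated modes checked against a budget.
from dataclasses import dataclass

@dataclass
class Mode:
    name: str
    duration_ms: int
    energy_mj: int
    memory_kb: int

behavior = [Mode("sense", 5, 2, 16), Mode("compute", 20, 9, 64), Mode("actuate", 10, 4, 16)]
budget = {"duration_ms": 40, "energy_mj": 20, "memory_kb": 64}

totals = {"duration_ms": sum(m.duration_ms for m in behavior),
          "energy_mj": sum(m.energy_mj for m in behavior),
          "memory_kb": max(m.memory_kb for m in behavior)}   # peak usage, not cumulative
violations = {k: (totals[k], budget[k]) for k in budget if totals[k] > budget[k]}
print("totals:", totals)
print("within budget" if not violations else f"budget violations: {violations}")
```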
|
8 |
Essays on the Political Economy of Corruption and Rent-Seeking. Popa, Mircea (25 September 2013)
The dissertation is made up of three papers on the political economy of corruption and rent-seeking. Two of the papers make use of the historical experience of Britain to illustrate the theoretical points being made. The first paper shows that eighteenth-century Britain displayed patterns of corruption similar to those of developing countries today. To explain anti-corruption reforms, the paper develops a model in which the political elite is split between government officials and asset-owners. Government officials can act in one of two regimes: a corrupt one in which they are free to maximize their income from the provision of government goods, and one in which a regulated system leaves no room for individual profit maximization. Faced with a change in the level of demand for government goods, officials become able to extract rents at a level that leads other members of the elite to vote to enact reforms. The logic of the model is tested using a new dataset of members of the House of Commons, and its main implications are validated. The second paper develops a model of how the British political class came to give up its power to extract rents from the economy between the 1810s and the 1850s. The key to the explanation lies in understanding the bargaining process between economic agents who seek permission to engage in economic activity and a legislature that can grant such permissions. The third paper analyzes the distributive effects of corrupt interactions between government officials and citizens. Corruption is modeled as a solution to an allocation problem for a generic government good G. Beyond a transfer from citizens to the government, corruption redistributes welfare towards "insiders" who share some natural connection to the government and to other insiders. Corruption also redistributes welfare towards those who are skilled in imposing negative externalities, and encourages the imposition of such negative externalities.
|
9 |
The Extended Maurer Model: Bridging Turing-Reducibility and Measure Theory to Jointly Reason about Malware and its Detection. Elgamal, Mohamed Elsayed Abdelhameed (15 September 2014)
An arms-race exists between malware authors and system defenders, in which defenders develop new detection approaches only to have the malware authors develop new techniques to bypass them. This motivates the need for a formal framework to jointly reason about malware and its detection. This dissertation presents such a formal framework, termed the extended Maurer model (EMM), and then applies this framework to develop a game-theoretic model of the confrontation between malware authors and system defenders.
To be inclusive of modern computers and networks, the EMM has been developed by extending the existing Maurer computer model, a Turing-reducible model of computer operations. The basic components of the Maurer model have been extended to incorporate the structures necessary to model programs, concurrency, multiple processors, and networks. In particular, we show that the proposed EMM remains a Turing-equivalent model that is able to model modern computers and computer networks, as well as complex programs such as modern virtual machines and web browsers.
Through the proposed EMM, we provide formalizations of violations of the standard security policies. Specifically, we provide definitions of violations of confidentiality policies, integrity policies, availability policies, and resource usage policies. Additionally, we propose formal definitions of a number of common malware classes, including viruses, Trojan horses, spyware, bots, and computer worms. We also show that the proposed EMM is complete in terms of its ability to model all implementable malware that could exist within the context of a given defended environment.
We then use the EMM to evaluate and analyze the resilience of a number of common malware detection approaches. We show that static anti-malware signature scanners can be easily evaded by obfuscation, which is consistent with the results of prior experimental work. Additionally, we use the EMM to formally show that malware authors can avoid detection by dynamic system-call-sequence detection approaches, which also agrees with recent experimental work. A measure-theoretic model of the EMM is then developed, by which the EMM is shown to be complete with respect to its ability to model all implementable malware detection approaches.
Finally, using the developed EMM, we provide a game-theoretic model of the confrontation between malware authors and system defenders. Using this game model, under game theory's strict dominance solution concept, we show that rational attackers are always required to develop malware that is able to evade the deployed malware detection solutions. Moreover, we show that the attacker and defender adaptations can be modeled as a sequence of iterative games. Hence, one can ask under what conditions such a sequence (or arms-race) converges towards a defender-advantageous end-game. It is shown via the EMM that, in the general context, this desired situation requires that the next attacker adaptation be at least a computationally hard problem. If this is not the case, then we show, via the EMM's measure-theoretic perspective, that the defender is left needing to track statistically non-stationary attack behaviors. Hence, by standard information-theoretic constructs, past attack histories can be shown to be uninformative with respect to the development of the next required adaptation of the deployed defenses.
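A small, self-contained illustration of the strict-dominance reasoning appealed to here, with a made-up payoff matrix rather than the dissertation's game: against deployed defenses, a non-evading attack is strictly worse for the attacker than an evading one, so iterated elimination of strictly dominated strategies leaves only evading attacks. Strategy names and payoffs are assumptions.

```python
# Illustrative only: strict dominance in a toy attacker-vs-defender payoff table.
attacker_strategies = ["reuse_known_malware", "obfuscate_payload"]
# Attacker payoff against each deployed defense (signature scan, syscall-sequence
# detector); the numbers are purely hypothetical.
attacker_payoff = {
    "reuse_known_malware": [-5, -5],   # caught by either defense
    "obfuscate_payload":   [3, 2],     # evades both in this toy matrix
}

def strictly_dominated(strategies, payoff):
    dominated = []
    for s in strategies:
        for other in strategies:
            if other != s and all(a > b for a, b in zip(payoff[other], payoff[s])):
                dominated.append(s)
                break
    return dominated

for s in strictly_dominated(attacker_strategies, attacker_payoff):
    print(f"'{s}' is strictly dominated; a rational attacker plays an evading strategy")
```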
To our knowledge, this is the first work to: (i) provide a joint model of malware and its detection; (ii) provide a model that is complete with respect to all implementable malware and detection approaches; (iii) provide a formal bridge between Turing-reducibility and measure theory; and (iv) thereby allow game theory's strict dominance solution concept to be applied to formally reason about the requirements for the malware versus anti-malware arms-race to converge to a defender-advantageous end-game.
|
10 |
Formal Approaches for Behavioral Modeling and Analysis of Design-time Services and Service Negotiations. Čaušević, Aida (January 2014)
During the past decade, service-orientation has become a popular design paradigm, offering an approach in which services are the functional building blocks. Services are self-contained units of composition, built to be invoked, composed, and destroyed on (user) demand. Service-oriented systems (SOS) are collections of services developed according to several design principles, such as: (i) loose coupling between services (e.g., inter-service communication can involve either simple data passing or two or more connected services coordinating some activity), which allows services to be independent yet highly interoperable when required; (ii) service abstraction, which emphasizes the need to hide as many implementation details as possible while still exposing the functional and extra-functional capabilities that can be offered to service users; (iii) service reusability, provided by existing services in a rapid and flexible development process; and (iv) service composability, one of the main assets of SOS, which provides a design platform for services to be composed and decomposed. One of the main concerns in such systems is ensuring service quality per se, but also guaranteeing the quality of newly composed services. To accomplish the above, we consider two system perspectives: the developer's view and the user's view, respectively. In the former, one can be assumed to have access to the internal service representation: functionality, enabled actions, resource usage, and interactions with other services. In the latter, one has information primarily on the service interface and exposed capabilities (attributes/features). Means of checking that services and service compositions meet the expected requirements, the so-called correctness issue, enable optimization and the possibility of guaranteeing a satisfactory level of service composition quality. In order to accomplish exhaustive correctness checks of design-time SOS, we employ model checking as the main formal verification technique, which provides the necessary information about quality of service (QoS) already at early stages of system development. As opposed to the traditional approach of software system construction, in SOS the same service may be offered at various prices, QoS levels, and other conditions, depending on user needs. In such a setting, the interaction between the involved parties requires negotiating what is possible at request time, aiming to meet needs on demand. The service negotiation process often proceeds under timing, price, and resource constraints, under which users and providers exchange information on their respective goals until reaching a consensus. Hence, a mathematically driven technique to analyze a priori the various ways of achieving such goals is beneficial for understanding what particular goals can be achieved and how. This thesis presents the research that we have been carrying out over the past few years, which has resulted in methods and tools for the specification, modeling, and formal analysis of services and service compositions in SOS.
The contributions of the thesis consist of: (i) constructs for the formal description of services and service compositions using the resource-aware timed behavioral language called REMES; (ii) deductive and algorithmic approaches for checking the correctness of services and service compositions; (iii) a model of service negotiation that includes different negotiation strategies, formally analyzed against timing and resource constraints; (iv) a tool-chain (REMES SOS IDE) that provides an editor and verification support (by integration with the UPPAAL model-checker) for REMES-based service-oriented designs; and (v) a relevant case study through which we exercise the applicability of our framework. The presented work has also been applied to other smaller examples presented in the published papers. / During the past decade, a service-oriented paradigm has become increasingly popular in the development of computer systems. In this paradigm, so-called services constitute the smallest functional system unit. These services are constructed so that they can be created, used, composed, and terminated separately. They should be independent of each other while still able to work effectively together, and in cooperation with other systems, when needed. Furthermore, services should hide their internal implementation details as far as possible while still exposing their full functionality to the system designer. Services should also be easy to reuse and compose in a rapid and flexible development process. One of the most important aspects of service-oriented computer systems is the ability to ensure system quality. To accomplish this, it is important to gain deeper insight into a service's internal functionality, in terms of possible operations, resource information, and potential interactions with other services. This is especially important when the developer can choose between two functionally equivalent services that differ in other properties, such as response time or other resource requirements. In this context, a mathematical description of a service's behavior can give a better understanding of the service model and help the user connect services correctly. A mathematical description also opens up a way to reason mathematically about services. Methods for checking that composed services meet stated resource requirements also enable resource optimization of services and verification of stated quality requirements. This thesis presents research carried out over the past few years, which has resulted in methods and tools for specifying, modeling, and formally analyzing services and service compositions. The work in the thesis consists of (i) a formal definition of services and service compositions using a resource-aware formal specification language called REMES; (ii) two methods for analyzing services and checking the correctness of service compositions, both deductive and algorithmic; (iii) a model of the negotiation process in service composition that includes different negotiation strategies; and (iv) a number of tools that support these methods. The methods have been applied in a number of case studies presented in the published papers.
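As an illustrative sketch only (invented strategies and numbers, not the REMES-based negotiation model of the thesis), the following shows the kind of deadline-constrained concession exchange the negotiation analysis targets: a requester and a provider concede linearly toward their reservation prices, and agreement is possible only if the zone of agreement is reached before the round limit.

```python
# Illustrative only: linear-concession price negotiation under a round deadline.
def negotiate(rounds=10, requester=(20.0, 55.0), provider=(80.0, 50.0)):
    # Each side concedes linearly from its opening value toward its reservation value.
    req_open, req_reserve = requester
    prov_open, prov_reserve = provider
    for r in range(1, rounds + 1):
        progress = r / rounds
        offer = req_open + (req_reserve - req_open) * progress       # requester's bid
        demand = prov_open + (prov_reserve - prov_open) * progress   # provider's ask
        if offer >= demand:                                          # agreement zone reached in time
            return round((offer + demand) / 2, 2), r
    return None, rounds                                              # deadline reached: failure

price, used_rounds = negotiate()
print(f"agreed price {price} after {used_rounds} rounds" if price
      else "no agreement before the deadline")
```

If the reservation values do not overlap (a provider floor above the requester ceiling), the loop exhausts the deadline and reports failure, which is the kind of constraint-dependent outcome a formal negotiation analysis is meant to expose a priori.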
|