171

Graph Theoretical Modelling of Electrical Distribution Grids

Kohler, Iris 01 June 2021
This thesis deals with applications of graph theory to the electrical distribution networks that carry electricity from the generators that produce it to the consumers that use it. Specifically, we establish the substation and bus networks as graph-theoretical models for this major piece of electrical infrastructure. We also generate substation and bus networks from a wide range of existing data on both synthetic and real grids and establish several properties of these graphs, such as density, degeneracy, and planarity. Finally, we motivate future research into the definition of a graph family containing bus and substation networks and the classification of that family as having polynomial expansion.
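The graph properties this abstract names are straightforward to compute with off-the-shelf tooling. A minimal sketch, assuming networkx and a made-up edge list standing in for a bus network (not the thesis's data or code):

```python
import networkx as nx

# Hypothetical bus network: nodes are buses, edges are lines/transformers.
edges = [(1, 2), (2, 3), (3, 4), (4, 1), (2, 5), (5, 6)]
G = nx.Graph(edges)

density = nx.density(G)                       # |E| / C(|V|, 2)
degeneracy = max(nx.core_number(G).values())  # largest k with a non-empty k-core
is_planar, _ = nx.check_planarity(G)          # Boyer-Myrvold planarity test

print(f"density={density:.3f}, degeneracy={degeneracy}, planar={is_planar}")
```

Degeneracy is recovered here as the maximum core number, its standard characterization.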
172

Experimental Investigation of Drag Reduction by Trailing Edge Tabs on a Square Based Bluff Body in Ground Effect

Sawyer, Scott R 01 May 2015
This thesis presents an experimental investigation of drag reduction devices on a bluff body in ground effect. It has previously been shown that adding end-plate tabs to a rectangular-based bluff body with an aspect ratio of 4 is effective in eliminating vortex shedding and reducing drag at low Reynolds numbers. In the present study, a square-based bluff body, both with and without tabs, is tested under the same conditions, but this time operating in proximity to a ground plane in order to mimic the bounded aerodynamics experienced by a body in ground effect.
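For orientation, the two non-dimensional groups that govern such a test are the Reynolds number and the drag coefficient. A minimal sketch with assumed, illustrative values (none are taken from the thesis):

```python
# Illustrative flow and geometry parameters (assumptions, not thesis data).
RHO = 1.225      # air density, kg/m^3 (sea level)
MU = 1.81e-5     # dynamic viscosity of air, Pa*s
V = 10.0         # freestream velocity, m/s (assumed)
D = 0.05         # base width of the square-based bluff body, m (assumed)

reynolds = RHO * V * D / MU                   # Re = rho*V*D/mu
drag_force = 0.30                             # measured drag, N (assumed)
c_d = drag_force / (0.5 * RHO * V**2 * D**2)  # Cd with frontal area A = D^2

print(f"Re = {reynolds:.0f}, Cd = {c_d:.2f}")
```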
173

Assumption-Based Runtime Verification of Finite- and Infinite-State Systems

Tian, Chun 23 November 2022
Runtime Verification (RV) is usually considered a lightweight automatic verification technique for the dynamic analysis of systems, in which a monitor observes executions produced by a system and analyzes them against a formal specification. If the monitor is synthesized not only from the monitoring specification but also from extra assumptions on the system behavior (typically described by a model such as a transition system), it may output more precise verdicts or even be predictive; at the same time it may no longer be lightweight, since monitoring under assumptions has the same computational complexity as model checking. When suitable assumptions come into play, the monitor may also support partial observability, where non-observable variables in the specification are inferred from observable ones, either present or historical. Furthermore, the monitors are resettable, i.e., able to evaluate the specification at non-initial points of an execution while keeping memory of the input history. This breaks the monotonicity of monitors, which, after reaching a conclusive verdict, can still change their future outputs when their reference time is reset. The combination of these three characteristics (assumptions, partial observability, and resets) in monitor synthesis is called Assumption-Based Runtime Verification, or ABRV. In this thesis, we give a formalism of the ABRV approach and a group of monitoring algorithms for specifications expressed in Linear Temporal Logic with both future and past operators, involving Boolean and possibly other types of variables. When all involved variables have finite domains, the monitors can be synthesized as finite-state machines implemented with Binary Decision Diagrams. With infinite-domain variables, the infinite-state monitors are based on satisfiability modulo theories, first-order quantifier elimination, and various model checking techniques. In particular, Bounded Model Checking is modified to work incrementally, efficiently producing inconclusive verdicts before IC3-based model checkers get involved. All the monitoring algorithms in this thesis are implemented in a tool called NuRV. NuRV supports online and offline monitoring, and can also generate standalone monitor code in various programming languages. In particular, monitors can be synthesized as SMV models, whose behavioral correctness and other properties can be further verified by model checking.
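To make the three-valued verdicts and the reset mechanism concrete, here is a minimal sketch of a resettable monitor for the safety property G p (p holds globally). It is illustrative only and is not NuRV's synthesis algorithm:

```python
from enum import Enum

class Verdict(Enum):
    TRUE = 1
    FALSE = 2
    UNKNOWN = 3  # inconclusive: the trace so far decides neither way

class GloballyMonitor:
    """Three-valued, resettable monitor for 'G p'."""
    def __init__(self):
        self.violated = False

    def step(self, p: bool) -> Verdict:
        # Once p fails, 'G p' is irrevocably false -- unless we reset.
        if not p:
            self.violated = True
        return Verdict.FALSE if self.violated else Verdict.UNKNOWN

    def reset(self):
        # Re-evaluate the specification from the current time point,
        # breaking the monotonicity of the conclusive FALSE verdict.
        self.violated = False

m = GloballyMonitor()
for obs in [True, True, False, True]:
    print(m.step(obs))   # UNKNOWN, UNKNOWN, FALSE, FALSE
m.reset()
print(m.step(True))      # UNKNOWN again after the reset
```

Note that on finite traces a monitor for G p can never return TRUE; only FALSE is conclusive, which is exactly the monotonicity that resets break.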
174

Examining Bounded Rationality Influences on Decisions Concerning Information Security : A Study That Connects Bounded Rationality and Information Security

Malm Wiklund, Oskar, Larsson, Caroline January 2024
This study investigates the impact of bounded rationality on information security decisions in Swedish public authorities. The research addresses how cognitive limitations and organizational dynamics shape decisions in this area. Utilizing qualitative research methods, namely in-depth interviews and document analysis, the study provides nuanced insights into decision-making processes. A thematic analysis identifies six recurring themes influencing decision-making: Awareness & Knowledge, Individual Characteristics, Organizational Culture & Behavioral Patterns, Organization & Execution, Regulatory Frameworks & Management, and Responsibility & Obligation. The findings reveal significant influences and barriers in implementing effective security strategies, making a theoretical contribution to information security management in the public sector. This research highlights the importance of understanding human behavior in information security, offering insights that can shape strategic directions for policy and practical implementation to enhance organizational and national cybersecurity resilience.
175

Resource-Bounded Information Acquisition and Learning

Kanani, Pallika H 01 May 2012
In many scenarios it is desirable to augment existing data with information acquired from an external source. For example, information from the Web can be used to fill missing values in a database or to correct errors. In many machine learning and data mining scenarios, acquiring additional feature values can lead to improved data quality and accuracy. However, there is often a cost associated with such information acquisition, and we typically need to operate under limited resources. In this thesis, I explore different aspects of Resource-Bounded Information Acquisition and Learning. The process of acquiring information from an external source involves multiple steps, such as deciding what subset of information to obtain, locating the documents that contain the required information, acquiring relevant documents, extracting the specific piece of information, and combining it with existing information to make useful decisions. The problem of Resource-Bounded Information Acquisition (RBIA) involves saving resources at each stage of this process. I explore four special cases of the RBIA problem, propose general principles for efficiently acquiring external information in real-world domains, and demonstrate their effectiveness through extensive experiments. For example, in some of these domains I show how interdependency between fields or records in the data can be exploited to reduce cost. Finally, I propose a general framework for RBIA that takes into account the state of the database at each point in time, dynamically adapts to the results and properties of every step of the acquisition process so far, and strives to acquire the most information with the least amount of resources.
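The RBIA setting can be caricatured as a budgeted selection problem: rank candidate acquisitions by expected benefit per unit cost and acquire greedily until the budget runs out. A minimal sketch with hypothetical candidates (the thesis's framework is richer, adapting dynamically after each step):

```python
def acquire_under_budget(candidates, budget):
    """candidates: list of (name, expected_benefit, cost) tuples."""
    # Greedy by benefit-per-cost ratio, a common baseline for budgeted selection.
    ranked = sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)
    acquired, spent = [], 0.0
    for name, benefit, cost in ranked:
        if spent + cost <= budget:
            acquired.append(name)
            spent += cost
    return acquired, spent

# Hypothetical acquisition actions: (name, expected benefit, cost).
candidates = [("fill_missing_zip", 0.9, 2.0),
              ("verify_email", 0.4, 0.5),
              ("fetch_homepage", 0.7, 3.0)]
print(acquire_under_budget(candidates, budget=3.0))
# (['verify_email', 'fill_missing_zip'], 2.5)
```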
176

Numerical Investigation of High-Speed Wall-Bounded Turbulence Subject to Complex Wall Impedance

Yongkai Chen 15 December 2022
Laminar and turbulent flows over porous surfaces have received extensive attention in the past few decades due to their potential for passive flow control. Such surfaces either exhibit roughness naturally or are engineered on purpose, and usually entail special features such as increased or reduced surface drag. Interest has grown in the interaction between these surfaces and high-speed compressible flows, which could inform the next level of flow control studies at supersonic and hypersonic speeds for the design of high-speed vehicles. In this dissertation, the interaction between high-speed compressible turbulent flows and an acoustically permeable surface is investigated. The surface property is modeled via a Time-Domain Impedance Boundary Condition (TDIBC), which avoids including the geometric details in the numerical simulations.

We first perform Large-Eddy Simulations of compressible turbulent channel flows over one impedance wall for three bulk Mach numbers: Mb = 1.5, 3.5, and 6.0. The bulk Reynolds number Reb is tuned to achieve a similar viscous Reynolds number Re∗τ ≈ 220 across all Mb, ensuring a nearly common state of near-wall turbulence structures over impermeable walls. The TDIBC, based on the auxiliary-differential-equations (ADE) method, is applied to the bottom wall of the channel. A three-parameter complex impedance model with a resonant frequency tuned to the large-eddy turnover frequency of the flow is adopted. With sufficiently high permeability, a streamwise-traveling instability wave that is confined in nature and that increases the surface drag is observed in the near-wall region, changing the local turbulent events. As a result, the first- and second-order mean flow statistics are found to deviate from those of a flow over impermeable walls. We then perform a linear stability analysis using a turbulent background base flow and confirm that the instability wave is triggered by sufficiently high permeability and is confined in nature. The critical resistance Rcr (interpreted as the inverse of the permeability), above which the instability is suppressed, is found to be sub-linearly proportional to the bulk Mach number Mb, indicating that less permeability is required to trigger the instability in high-Mach-number flows.

Due to the extremely high computational cost of high-Mach-number wall-bounded flow calculations, the next phase of optimization and flow control design using porous surfaces becomes unaffordable. An 'economical' flow setup that can serve the purpose of rapid flow generation would greatly benefit the planned research. For this reason, we carry out a study of the effect of domain size on near-wall turbulence structures in compressible turbulent channel flows to identify such a setup. Beyond the concept of minimal flow units (MFU, as in the literature), which entails the minimal domain size required for near-wall turbulence to be sustained, we also identify a range of domain sizes that can sustain both inner- and outer-layer turbulence while leading to only small deviations in mean flow statistics from the baseline data; this is herein defined as the minimal turbulent channel (MTC). The motivation for proposing the MTC is to provide a computationally efficient setup for the rapid generation of near-wall turbulence, with minimal compromise in the fidelity of the simulated field, for investigations requiring numerous simulations, such as machine learning and flow control or optimization design. We find that the mean flow statistics from a computational domain spanning 700-1100 and 230-280 local viscous units in the streamwise and spanwise directions, respectively, agree reasonably well with the reference calculations at all three Mach numbers under investigation; this is identified as the range in which the MTC lies. The large-scale near-wall turbulence structures observed in full-scale DNS, and their spatially coherent connections, are roughly preserved in the MTC, as indicated by the existence of grouped, streamwise-aligned hairpin vortices of various sizes and the resulting patterns of uniform momentum zones and thermal zones in the instantaneous flow field. In an MTC, the energy transfer paths among the kinetic energy of the mean field, the turbulent kinetic energy, and the mean internal energy are slightly modified, with the most significant change observed in the viscous dissipation. The mean wall-shear stress and mean wall heat flux show less than 5% error compared to the full-scale simulations. Such a reduced-order flow setup requires less than 3% of the computational resources of the full-scale simulations.
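The MTC criterion above is stated in local viscous (plus) units. A minimal sketch of the conversion and the range check, with assumed flow parameters (not values from the dissertation):

```python
# Convert a channel domain size to viscous units and test it against the
# MTC range quoted above: streamwise 700-1100, spanwise 230-280 plus units.
u_tau = 0.05        # friction velocity, m/s (assumed)
nu = 1.5e-5         # kinematic viscosity, m^2/s (assumed)
Lx, Lz = 0.3, 0.08  # streamwise and spanwise domain extents, m (assumed)

delta_nu = nu / u_tau    # viscous length scale
Lx_plus = Lx / delta_nu  # streamwise extent in plus units
Lz_plus = Lz / delta_nu  # spanwise extent in plus units

in_mtc = 700 <= Lx_plus <= 1100 and 230 <= Lz_plus <= 280
print(f"Lx+ = {Lx_plus:.0f}, Lz+ = {Lz_plus:.0f}, within MTC range: {in_mtc}")
```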
177

Methodological Foundations for Bounded Rationality as a Primary Framework

Modarres-Mousavi, Shabnam 10 January 2003
Experimental observations have shown that economic agents behave in ways that differ from the maximization of any utility function. Herbert Simon sought to deal with this by positing that individuals do not maximize, but rather "satisfice." This was a radical departure from the traditional economic framework, and one that still has not been adequately formalized. But Simon's suggestion is only the smallest part of what is needed for a theory that reflects actual behavior. For instance, Simon's framework cannot deal with the observation that the act of choice changes the chooser. This dissertation further develops Simon's original ideas by embracing John Dewey's transactional thinking to attain an adequate theory of economic choice that accounts for boundedly rational agents. I clarify that substantive rationality and bounded (procedural) rationality share the same basic utilitarian assumption of predetermined goals. In terms of a Deweyan (transactional) analysis, the idea of utilitarian "optimization" ultimately guides and constrains both theories. But empirical studies of choice behavior, and the behavior of subjects in experimental laboratories, indicate that neither substantive nor procedural rationality can effectively account for actual economic choices. I emphasize the importance of treating bounded rationality without reference to the rational framework. To me, bounded rationality implies a realistic picture of behavior, one associated with goals that emerge rather than goals that exist prior to the making of a choice. I consider uncertainty a normal characteristic of the situation, which in turn allows consideration of acting on inconsistent information, just as people actually do. The basis of a systematic approach to behavior that can capture inconsistency has been developed by Tom Burke, who mathematizes Dewey's logic and allows for impossible worlds in the set of states. Thus, not only can the initial state space hold inconsistent states, but the information set can also include mutually inconsistent elements. The current neoclassical paradigm resembles representative realism, but is there any good reason why we as economists should accept this methodology? Whatever one's ultimate metaphysics and epistemology, I want to show that an alternative approach to economic decision-making may prove highly useful in theory and practice. / Ph. D.
178

A Test of Bounded Generalized Reciprocity and Social Identity Theory in a Social Video Game Play Context

Velez, John A. 21 July 2014
No description available.
179

Quasi-isometries of graph manifolds do not preserve non-positive curvature

Nicol, Andrew 15 October 2014
No description available.
180

On Reducing the Trusted Computing Base in Binary Verification

An, Xiaoxin 15 June 2022
The translation of binary code to higher-level models has wide applications, including decompilation, binary analysis, and binary rewriting. This calls for high reliability of the underlying trusted computing base (TCB) of the translation methodology. A key challenge is to reduce the TCB by validating its soundness. Both the definition of soundness and the validation method heavily depend on the context: what is in the TCB and how to prove it. This dissertation presents three research contributions. The first two contributions reduce the TCB in binary verification, and the third presents a binary verification process that leverages a reduced TCB. The first contribution targets the validation of OCaml-to-PVS translation -- commonly used to translate instruction-set-architecture (ISA) specifications to PVS -- where the destination language is non-executable. We present a methodology called OPEV to validate the translation between OCaml and PVS, supporting non-executable semantics. The validation includes generating large-scale tests for OCaml implementations, generating test lemmas for PVS, and generating proofs that automatically discharge these lemmas. OPEV incorporates an intermediate type system that captures a large subset of OCaml types, employing a variety of rules to generate test cases for each type. To prove the PVS lemmas, we develop automatic proof strategies and discharge the test lemmas using PVS Proof-Lite, a powerful proof scripting utility of the PVS verification system. We demonstrate our approach in two case studies that include 259 functions selected from the Sail and Lem libraries. For each function, we generate thousands of test lemmas, all of which are automatically discharged. The dissertation's second contribution targets the soundness validation of a disassembly process, where the source language does not have well-defined semantics. Disassembly is a crucial step in binary security, reverse engineering, and binary verification. Various studies in these fields use disassembly tools and hypothesize that the reconstructed disassembly is correct. However, disassembly is an undecidable problem. State-of-the-art disassemblers suffer from issues ranging from incorrectly recovered instructions to incorrectly assessing which addresses belong to instructions and which to data. We present DSV, a systematic and automated approach to validate whether the output of a disassembler is sound with respect to the input binary. No source code, debugging information, or annotations are required. DSV defines soundness using a transition relation defined over concrete machine states: a binary is sound if, for all addresses in the binary that can be reached from the binary's entry point, the bytes of the (disassembled) instruction located at an address are the same as the actual bytes read from the binary. Since computing this transition relation is undecidable, DSV uses over-approximation, preventing false positives (i.e., a reachable instruction that is disassembled incorrectly yet deemed unreachable) while allowing, but minimizing, false negatives. We apply DSV to 102 binaries of GNU Coreutils with eight different state-of-the-art disassemblers from academia and industry. DSV is able to find soundness issues in the output of all of them. The dissertation's third contribution is WinCheck, a concolic model checker that checks memory-related properties of closed-source binaries.
Memory-access bugs remain a major source of security vulnerabilities. Even a single buffer overflow or use-after-free in a large program may cause a software crash, a data leak, or a hijacking of the control flow. Typical static formal verification tools aim to detect these issues at the source code level. WinCheck is a model checker that is directly applicable to closed-source and stripped Windows executables. A key characteristic of WinCheck is that it performs its execution as symbolically as possible while leaving any information related to pointers concrete. This produces a model checker tailored to pointer-related properties, such as buffer overflows, use-after-free, null-pointer dereferences, and reads from uninitialized memory. The technique thus provides a novel trade-off between ease of use, accuracy, applicability, and scalability. We apply WinCheck to ten closed-source binaries available in a Windows 10 distribution, as well as the Windows version of the entire Coreutils library. We conclude that the approach taken is precise -- it produces only a few false negatives -- but may not explore the entire state space due to unresolved indirect jumps. / Doctor of Philosophy / Binary verification is a process that verifies a class of properties, usually security-related properties, on binary files, and does not need access to source code. Since a binary file is composed of byte sequences and is not human-readable, a number of assumptions are usually made in the binary verification process. The assumptions often involve the error-free nature of a set of subsystems used in the verification process and constitute the verification process's trusted computing base (or TCB). The reliability of the verification process therefore depends on how reliable the TCB is. The dissertation presents three research contributions in this regard. The first two contributions reduce the TCB in binary verification, and the third presents a binary verification process that leverages a reduced TCB. The dissertation's first contribution presents a validation of OCaml-to-PVS translations -- commonly used to translate a computer architecture's instruction specifications to PVS, a language that allows mathematical specifications. To build a reliable semantic model of assembly instructions, which is assumed to be in the TCB, it is necessary to validate the translation. The dissertation's second contribution validates the soundness of the disassembly process, which translates a binary file to the corresponding assembly instructions. Since the disassembly process is generally assumed to be trustworthy in much binary verification work, the TCB of binary verification can be reduced by validating its soundness. With the reduced TCB, the dissertation introduces its third and final contribution, WinCheck: a concolic model checker that validates pointer-related properties of closed-source Windows binaries. The pointer-related properties include the absence of buffer overflows, use-after-free, and null-pointer dereferences.
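DSV's soundness definition reduces, per reachable address, to a byte-level comparison between the instruction a disassembler claims and the bytes actually stored in the binary. A minimal sketch of that core check, with a made-up binary and a deliberately wrong claimed instruction (addresses, bytes, and the claimed output are all illustrative assumptions, not DSV itself):

```python
# Tiny x86-64 function: push rbp; mov rbp,rsp; mov eax,1; pop rbp; ret
binary = bytes.fromhex("554889e5b8010000005dc3")
base = 0x1000

# Output claimed by a hypothetical disassembler under test:
# (address, claimed instruction bytes). The last entry is deliberately
# wrong to show a soundness violation.
claimed = [(0x1000, bytes.fromhex("55")),
           (0x1001, bytes.fromhex("4889e5")),
           (0x1004, bytes.fromhex("b801000000")),
           (0x1009, bytes.fromhex("5d")),
           (0x100a, bytes.fromhex("90"))]  # claims NOP; binary has RET (c3)

for addr, insn_bytes in claimed:
    off = addr - base
    actual = binary[off:off + len(insn_bytes)]
    if actual != insn_bytes:
        print(f"unsound at {addr:#x}: claimed {insn_bytes.hex()}, "
              f"binary has {actual.hex()}")
```

The hard part, which this sketch omits, is computing which addresses are reachable; DSV over-approximates that set precisely because exact reachability is undecidable.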
