1

Ethics and the advocate

Crispin, K. J., January 1995 (has links)
This thesis examines the ethical implications of legal professional advocacy in an adversary system of justice. It identifies a standard conception of the advocate's duty, encapsulated in the various professional codes and in the fundamental principles of partisanship and zealous advocacy. It acknowledges that the standard conception involves a duty to pursue the interests of clients without regard for the interests of others and explores the inevitable moral ambivalence which such absolute loyalty entails. The concept of role morality upon which this conception is based is then explored, through an examination of the adversary system of justice and the extent to which it serves the public interest. It is concluded that the adversary system is of considerable utility in maintaining individual rights, eliciting the truth, providing an important element of ritual and sublimating conflict. Consequently, its value provides ethical justification for lawyers to fulfil the adversarial roles upon which it depends. However, it is contended that it neither requires nor justifies the absolutism inherent in the standard conception of the advocate's duty. A number of alternative paradigms are considered but rejected as inadequate. It is argued that the existing norms of partisanship and zealous advocacy should be retained but relegated to prima facie duties which may have to be balanced against competing ethical demands, such as the need to avoid causing undue harm to others.
2

Worlds Collide through Gaussian Processes: Statistics, Geoscience and Mathematical Programming

Christianson, Ryan Beck 04 May 2023 (has links)
Gaussian process (GP) regression is the canonical method for nonlinear spatial modeling among the statistics and machine learning communities. Geostatisticians use a subtly different technique known as kriging. I shall highlight key similarities and differences between GPs and kriging through the use of large-scale gold mining data. Most importantly, GPs are largely hands-off, automatically learning from the data, whereas kriging requires an expert human in the loop to guide analysis. To emphasize this, I show an imputation method for left-censored values frequently seen in mining data. Oftentimes geologists ignore censored values due to the difficulty of imputing with kriging, but GPs execute imputation with relative ease, leading to better estimates of the gold surface. My hope is that this research can serve as a springboard to encourage the mining community to consider using GPs over kriging, given the diverse utility GPs offer after model fitting. Another common use of GPs that would be inefficient for kriging is Bayesian optimization (BO). Traditionally, BO is designed to find a global optimum by sequentially sampling from a function of interest using an acquisition function. When two or more local or global optima of the function of interest have similar objective values, it often makes sense to target the more "robust" solution, the one with the wider domain of attraction. However, traditional BO weighs these solutions the same, favoring whichever has a slightly better objective value. By combining the idea of expected improvement (EI) from the BO community with mathematical programming's concept of an adversary, I introduce a novel algorithm to target robust solutions called robust expected improvement (REI). The adversary penalizes "peaked" areas of the objective function, making those values appear less desirable. REI performs acquisitions using EI on the adversarial space, yielding data sets focused on the robust solution that exhibit EI's already proven balance of exploration and exploitation. / Doctor of Philosophy / Since its origins in the 1940s, spatial statistics modeling has adapted to fit different communities. The geostatistics community developed with an emphasis on modeling mining operations and has further evolved to cover a slew of different applications, largely focused on two or three physical dimensions. The computer experiments community developed later, when physical experiments started moving into the virtual realm with advances in computer technology. While birthed from the same foundation, computer experimenters often look at ten- or even higher-dimensional problems. Due to these differences, among others, each community tailored its methods to best fit its common problems. My research compares the modern instantiations of the differing methodology on two sets of real gold mining data. Ultimately, I prefer the computer experiments methods for their ease of adaptation to downstream tasks at no cost to model performance. A statistical model is almost never a standalone development; it is created with a specific goal in mind. The first case I show of this is "imputation" of mining data. Mining data often have a detection threshold such that any observation with a very small mineral concentration is recorded at the threshold. Frequently, geostatisticians simply throw out these observations because they cause problems in modeling.
Statisticians try to use the information that there is a low concentration, combined with the rest of the fully observed data, to derive a best guess at the concentration of thresholded locations. Under the geostatistics framework this is cumbersome, but the computer experiments community considers imputation an easy extension. Another common model task is creating an experiment to best learn a surface. The surface may be a gold deposit on Earth, an unknown virtual function, or anything measurable. To do this, computer experimenters often use "active learning": sampling one point at a time, using that point to build a better informed model which suggests the next point to sample, and repeating until a satisfactory number of points have been sampled. Geostatisticians often prefer "one-shot" experiments, deciding all samples before collecting any; thus the geostatistics framework is not appropriate for active learning. Active learning tries to find the "best" location of the surface, the one with either the maximum or minimum response. I adapt this problem, redefining "best" to mean a "robust" location where the response does not change much even if the location is not perfectly specified. As an example, consider setting operating conditions for a factory. If two settings produce a similar amount of product, but one requires an exact pressure setting or else the factory blows up, the other is certainly preferred. To design experiments that find robust locations, I borrow ideas from the mathematical programming community to develop a novel method for robust active learning.
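To make the acquisition ideas above concrete, here is a minimal Python sketch of expected improvement (EI) for minimization, plus an REI-flavored variant in which a stand-in adversary penalty inflates the surrogate mean in "peaked" regions. The penalty form, the `curvature` input, and the weight `eps` are illustrative assumptions, not the thesis's actual adversary construction.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Classic EI for minimization, given the GP posterior mean/sd at candidates."""
    sigma = np.maximum(sigma, 1e-12)              # guard against zero predictive sd
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def robust_ei(mu, sigma, curvature, eps=0.5):
    """REI-flavored sketch: an adversary makes peaked regions look worse,
    then ordinary EI is applied on the adversarial surface.
    `curvature` is any nonnegative measure of how sharply the surrogate
    changes near each candidate -- an assumption of this sketch."""
    mu_adv = mu + eps * curvature                 # penalize sharply peaked optima
    return expected_improvement(mu_adv, sigma, np.min(mu_adv))
```

Acquiring the candidate that maximizes `robust_ei` steers sampling toward wide basins of attraction while keeping EI's balance of exploration and exploitation.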
3

Data Security in Unattended Wireless Sensor Networks

Vepanjeri Lokanadha Reddy, Sasi Kiran 14 January 2013 (has links)
In traditional Wireless Sensor Networks (WSNs), the sink is the only unconditionally trusted authority. If the sink is not connected to the nodes for a period of time, the network is considered unattended. In an Unattended Wireless Sensor Network (UWSN), a trusted mobile sink visits each node periodically to collect data. This differs from traditional multi-hop wireless sensor networks, where the nodes close to the sink deplete their power earlier than the other nodes. A UWSN can prolong the lifetime of the network by saving the nodes' batteries, and it can be deployed in environments where it is not practical for the sink to be online all the time. Keeping data in the memory of the nodes for a long time causes security problems due to the lack of tamper-resistant hardware: data collected by the nodes has to be secured until the next visit of the sink, and securing it against an adversary in a UWSN is a challenging task. We present two non-cryptographic algorithms (DS-PADV and DS-RADV) to ensure data survivability in mobile UWSNs. DS-PADV protects against a proactive adversary, which compromises nodes before identifying its target; DS-RADV secures the network against a reactive adversary, which compromises nodes after identifying the target. We also propose a data authentication scheme against a mobile adversary trying to modify the data. The proposed data authentication scheme uses inexpensive cryptographic primitives and few message exchanges. The proposed solutions are analyzed both mathematically and in simulation, showing that they outperform previous solutions in terms of security and communication overhead.
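The abstract does not spell out the authentication scheme itself; as a generic illustration of securing stored readings with an inexpensive symmetric primitive until the sink's next visit, an HMAC-based tag might look like the sketch below (Python standard library only; the message layout and per-node key are assumptions, not the thesis's construction).

```python
import hmac, hashlib, os

def tag_reading(key: bytes, node_id: int, epoch: int, reading: bytes) -> bytes:
    # Bind the reading to its node and collection epoch so a mobile adversary
    # cannot replay it elsewhere; HMAC-SHA256 is a cheap symmetric primitive.
    msg = node_id.to_bytes(4, "big") + epoch.to_bytes(4, "big") + reading
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify_reading(key: bytes, node_id: int, epoch: int,
                   reading: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(tag_reading(key, node_id, epoch, reading), tag)

key = os.urandom(32)                      # per-node key shared with the sink
t = tag_reading(key, node_id=7, epoch=42, reading=b"23.5C")
assert verify_reading(key, 7, 42, b"23.5C", t)
```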
4

Comparison of adversary emulation tools for reproducing behavior in cyber attacks / Jämförelse av verktyg för motståndaremulering vid återskapande av beteenden i cyberattacker

Elgh, Joakim January 2022 (has links)
As cyber criminals can find many different ways of gaining unauthorized access to systems without being detected, it is highly important for organizations to monitor what is happening inside their systems. Adversary emulation is a way to mimic the behavior of advanced adversaries within cyber security, which can be used to test an organization's ability to detect malicious behavior within its systems. The emulated behavior can be based on what has been observed in real cyber attacks; open source knowledge bases such as MITRE ATT&CK collect this kind of intelligence. Many organizations have in recent years developed tools to simplify emulating the behavior of known adversaries. These tools are referred to as adversary emulation tools in this thesis. The purpose of this thesis was to evaluate how noisy different adversary emulation tools are. This was done through measurements of the number of event logs generated by Sysmon when performing emulations against a Windows system. The goal was to find out which tool was the least noisy. The adversary emulation tools included in this thesis were Invoke-AtomicRedTeam, CALDERA, ATTPwn, and Red Team Automation. To make sure the correlation between the adversary emulation tools and the generated event logs could be identified, a controlled experiment was selected as the method for the study. Five experiments were designed, each including one emulation scenario executed by the adversary emulation tools included in that experiment. After each emulation, event logs were collected, filtered, and measured for use in the comparison. Three experiments compared Invoke-AtomicRedTeam, CALDERA, and a manual emulation. Their results indicated that Invoke-AtomicRedTeam was the noisiest, followed by CALDERA, with the manual emulation the least noisy. On average, the manual emulation generated 83.9% fewer logs than Invoke-AtomicRedTeam and 78.4% fewer logs than CALDERA in experiments 1-3. A fourth experiment compared Red Team Automation and Invoke-AtomicRedTeam, where Red Team Automation was the less noisy tool. The fifth and final experiment compared ATTPwn and CALDERA, and the results indicated that these were similarly noisy but in different ways. It was also concluded that a main difference between the adversary emulation tools was the number of techniques available, which could limit the ability to emulate the behavior of real adversaries. However, as the emulation tools were implemented in different ways, this thesis could be one starting point for future development of silent adversary emulation tools or assist in selecting an existing adversary emulation tool.
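Noisiness here is simply the count of Sysmon events attributable to each emulation run. As a minimal sketch of that measurement, assuming the events have been exported to CSV with an `EventID` column (the export format, file names, and column name are assumptions, not details from the thesis):

```python
import csv
from collections import Counter

def count_sysmon_events(csv_path, event_ids=None):
    """Count Sysmon events in an exported CSV, optionally filtered by Event ID."""
    counts = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            eid = int(row["EventID"])     # column name is an assumption
            if event_ids is None or eid in event_ids:
                counts[eid] += 1
    return counts

# Compare two hypothetical exports on process-creation (ID 1) and
# network-connection (ID 3) events, two common Sysmon event types.
for run in ("atomic_run.csv", "caldera_run.csv"):
    c = count_sysmon_events(run, event_ids={1, 3})
    print(run, "total:", sum(c.values()), "by ID:", dict(c))
```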
5

Cross purposes: a critical analysis of the representational force of questions in adversarial legal examination

Gaines, Phil. January 1998 (has links)
Thesis (Ph. D.)--University of Washington, 1998. / Vita. Includes bibliographical references (p. [167]-174).
6

Semi-Informed Multi-Agent Patrol Strategies

Hardin, Chad E. 01 January 2018 (has links)
The adversarial multi-agent patrol problem is an active research topic with many real-world applications, such as physical robots guarding an area and software agents protecting a computer network. In it, agents patrol a graph looking for so-called critical vertices that are subject to attack by adversaries. The agents are unaware of which vertices are subject to attack, and when they encounter such a vertex they attempt to protect it from being compromised (an adversary must occupy the vertex it targets for a certain amount of time for the attack to succeed). Even though the terms adversary and attack are used, the problem domain extends to patrolling a graph in other interesting noncompetitive contexts such as search and rescue. The problem statement adopted in this work is formulated such that agents obtain knowledge of local graph topology and critical vertices over the course of their travels via an API; there is no global knowledge of the graph and no communication between agents. The challenge is to balance exploration, necessary to discover critical vertices, with exploitation, necessary to protect critical vertices from attack. Four types of adversaries were used for experiments: three from previous research (waiting, random, and statistical) and a fourth, a hybrid of those three. Agent strategies for countering each of these adversaries are designed and evaluated, using benchmark graphs and parameter settings from related research. This research culminates in the design and evaluation of agents to counter these various types of adversaries under a range of conditions. The results of this work are agent strategies in which each agent becomes solely responsible for protecting those critical vertices it discovers. The agents use emergent behavior to minimize successful attacks and maximize the discovery of new critical vertices. A set of seven edge-choosing primitives (ECPs) is defined; these are combined in different ways to yield a range of agent strategies using the chain of responsibility OOP design pattern, as sketched below. Every permutation of them was tested and measured in order to identify the strategies that perform well. One strategy performed particularly well across all adversaries, graph topologies, and other experimental variables. It combines ECPs of: a hard-deadline return to covered vertices to counter the random adversary, efficient checking of vertices to see if they are being attacked by the waiting adversary, and random movement to impede the statistical adversary.
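The thesis's seven ECPs are only summarized above, so the sketch below uses invented placeholder primitives purely to illustrate how the chain of responsibility pattern composes them into a strategy; the `agent` interface (`deadline`, `travel_time`, `visited`) is assumed, not taken from the thesis.

```python
import random
from abc import ABC, abstractmethod

class EdgeChoosingPrimitive(ABC):
    """One link in a chain of responsibility: choose an edge or defer."""
    def __init__(self, successor=None):
        self.successor = successor

    def choose(self, agent, edges):
        edge = self.try_choose(agent, edges)
        if edge is not None:
            return edge
        if self.successor is not None:
            return self.successor.choose(agent, edges)
        return random.choice(edges)       # final fallback: random movement

    @abstractmethod
    def try_choose(self, agent, edges):
        ...

class ReturnToDueVertex(EdgeChoosingPrimitive):
    """Placeholder ECP: hard-deadline return to a covered vertex."""
    def try_choose(self, agent, edges):
        due = [e for e in edges if agent.deadline(e.target) <= agent.travel_time(e)]
        return min(due, key=lambda e: agent.deadline(e.target)) if due else None

class PreferUnexplored(EdgeChoosingPrimitive):
    """Placeholder ECP: favor edges leading to vertices not yet visited."""
    def try_choose(self, agent, edges):
        new = [e for e in edges if e.target not in agent.visited]
        return random.choice(new) if new else None

# Compose a strategy: deadline returns first, then exploration, then the
# built-in random fallback. Reordering or swapping links yields new strategies.
strategy = ReturnToDueVertex(successor=PreferUnexplored())
```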
7

Řízení o omezení svéprávnosti / Procedure to limit legal capacity

Tichá, Tereza January 2021 (has links)
This paper on the topic of procedures to limit legal capacity chiefly concerns the prerequisites, purpose, nature and progress of such procedures, as well as the form of the rulings issued during them. It maps not only the legal regulation of such procedures under Act No. 293/2013 Sb., on special judicial procedures, but also its practical impact, current court practice and, in particular, findings from court practice. This work emphasises and discusses in detail selected issues in the procedure to limit legal capacity, namely the moment of submission of expert opinions, the need to appoint a guardian ad litem in each procedure, and the content, scope and definition of decisions in these matters. The paper also interconnects substantive and procedural law, because their inseparability is highly evident in procedures to limit legal capacity: the purpose pursued by Act No. 89/2012 Sb., the Civil Code, is fulfilled by means of judicial procedures. The paper also briefly compares past regulation of the institute of legal capacity with the contemporary interpretation of the term. The basis of this paper is a comprehensive treatment of judicial procedures in matters of legal capacity....
8

Making decisions about screening cargo containers for nuclear threats using decision analysis and optimization

Dauberman, Jamie 06 August 2010 (has links)
One of the most pressing concerns in homeland security is the illegal passing of weapons-grade nuclear material through the borders of the United States. If terrorists can gather the materials needed to construct a nuclear bomb or radiological dispersion device (RDD, i.e., dirty bomb) while inside the United States, the consequences would be devastating. Preventing plutonium, highly enriched uranium (HEU), tritium gas, or other materials that can be used to construct a nuclear weapon from illegally entering the United States is an area of vital concern. There are enormous economic consequences when our nation's port security system is compromised. Interdicting nuclear material being smuggled into the United States on cargo containers is an issue of vital national interest, since it is a critical aspect of protecting the United States from nuclear attacks. However, the efforts made to prevent nuclear material from entering the United States via cargo containers have been disjoint, piecemeal, and reactive, not the result of coordinated, systematic planning and analysis. Our economic well-being is intrinsically linked with the success and security of the international trade system. International trade accounts for more than thirty percent of the United States economy (Rooney, 2005). Ninety-five percent of international goods that enter the United States come through one of 361 ports, adding up to more than 11.4 million containers every year (Fritelli, 2005; Rooney, 2005; US DOT, 2007). Port security has emerged as a critically important yet vulnerable component of the homeland security system. Applying game-theoretic methods to counterterrorism gives defenders a structured technique for analyzing the way adversaries will interact under different circumstances and scenarios. This way of thinking is somewhat counterintuitive, but it is an extremely useful tool in analyzing potential strategies for defenders. Decision analysis can handle very large and complex problems by integrating multiple perspectives and providing a structured process for evaluating the preferences and values of the individuals involved, while still ensuring that the decision focuses on achieving the fundamental objectives. In the decision analysis process, value tradeoffs are evaluated to review alternatives, and attitudes to risk can be quantified to help decision makers understand which aspects of the problem are not under their control. Most of all, decision analysis provides insight that might not have been captured or fully understood had it not been incorporated into the decision-making process. All of these factors make decision analysis essential to making an informed decision. Game theory and decision analysis both play important roles in counterterrorism efforts, but both have weaknesses. Decision analysis techniques such as probabilistic risk analysis can provide incorrect assessments of risk when modeling intelligent adversaries as uncertain hazards. Game theory analysis also has limitations: when analyzing a terrorist or terrorist group using game theory, we can only optimize one side of the problem at a time, meaning the analysis proceeds either from the defender's perspective or from the attacker's. Parnell et al. (2009) developed a model that simultaneously maximizes the effects of the terrorist and minimizes the consequences for the defender.
The question this thesis aims to answer is whether investing in new detector technology for screening cargo containers is a worthwhile investment for protecting our country from a terrorist attack. This thesis introduces an intelligent adversary risk analysis model for determining whether to use new radiological screening technologies at our nation's ports. This technique provides a more realistic risk assessment of the true situation being modeled and determines whether it is cost effective for our country to invest in new cargo container screening technology. The optimal decision determined by our model is for the United States to invest in a new detector, and for the terrorists to choose the agent cobalt-60, shown in Figure 18. This is mainly due to the prevalence of false alarms and the high costs associated with screening all of them; we assume that every cargo container that triggers an alarm is physically inspected. With the new detector technology, the prevalence of false alarms decreases and the true alarm rate increases; the resulting cost savings outweigh the cost of technical success or failure. Since the United States is attempting to minimize its expected cost per container, the optimal choice is to invest in the new detector. Our intelligent adversary risk analysis model can simultaneously determine the best decision for the United States, which is trying to minimize the expected cost, and for the terrorist, who is trying to maximize the expected cost to the United States. Simultaneously modeling the decisions of the defender and the attacker provides a more accurate picture of reality and could yield important insights that other techniques might miss. The model is extremely sensitive to certain inputs and parameters; even though the values used are in line with what is available in the literature, it is important to understand these sensitivities. Two inputs were found to be particularly important: the expected cost of physically inspecting a cargo container, and the cost of implementing the technology needed for the new screening device. Using this model, decision makers can form more accurate judgments based on the true situation, and the resulting decisions could save lives. The model can also help decision makers understand its interdependencies and see visually how their resource allocations affect the optimal decisions of the defender and the attacker.
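The abstract's simultaneous defender-attacker optimization is not reproduced here; as a toy illustration only, the sketch below brute-forces the sequential version of such a model over a tiny payoff table. Every number, name, and probability is invented for illustration, not taken from the thesis.

```python
# Toy defender-attacker screening model: the defender picks a detector to
# minimize worst-case expected cost per container; the attacker then picks
# the agent that maximizes it. All values below are invented.

detectors = {
    "current": {"false_alarm": 0.10, "true_alarm": 0.80, "capital": 0.00},
    "new":     {"false_alarm": 0.01, "true_alarm": 0.95, "capital": 0.40},
}
agents = {"HEU": 1.0, "cobalt-60": 1.3}   # relative detection difficulty
INSPECT_COST = 5.0                        # cost of one physical inspection
MISS_COST = 1e4                           # consequence of a missed weapon
P_ATTACK = 1e-4                           # per-container attack probability

def expected_cost(det, agent):
    d = detectors[det]
    p_detect = min(1.0, d["true_alarm"] / agents[agent])
    return (d["capital"]
            + d["false_alarm"] * INSPECT_COST      # inspecting false alarms
            + (1 - p_detect) * MISS_COST * P_ATTACK)

# Defender minimizes the attacker's best response (minimax).
best_det = min(detectors, key=lambda d: max(expected_cost(d, a) for a in agents))
best_agent = max(agents, key=lambda a: expected_cost(best_det, a))
print(f"defender: {best_det}; attacker best response: {best_agent}")
```

With these made-up numbers, the sketch reproduces the qualitative result above: the lower false-alarm rate of the new detector outweighs its capital cost, and the attacker's best response is the harder-to-detect agent.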
