341
Matrix Representations and Extension of the Graph Model for Conflict Resolution. Xu, Haiyan, January 2009.
The graph model for conflict resolution (GMCR) provides a convenient
and effective means to model and analyze a strategic conflict.
Standard practice is to carry out a stability analysis of a graph
model, and then to follow up with a post-stability analysis, two
critical components of which are status quo analysis and coalition
analysis. In stability analysis, an equilibrium is a state that is
stable for all decision makers (DMs) under appropriate stability
definitions or solution concepts. Status quo analysis aims to
determine whether a particular equilibrium is reachable from a
status quo (or an initial state) and, if so, how to reach it. A
coalition is any subset of a set of DMs. The coalition stability
analysis within the graph model is focused on the status quo states
that are equilibria and assesses whether states that are stable from
individual viewpoints may be unstable for coalitions. Stability
analysis was first developed for a simple preference structure, which includes a
relative preference relation and an indifference relation.
Subsequently, preference uncertainty and strength of preference were
introduced into GMCR but not formally integrated.
In this thesis, two new preference frameworks, hybrid preference and
multiple-level preference, and an integrated algebraic approach are
developed for GMCR. Hybrid preference extends existing preference
structures to combine preference uncertainty and strength of
preference into GMCR. A multiple-level preference framework expands
GMCR to handle a more general and flexible structure than any
existing system representing strength of preference. An integrated
algebraic approach reveals a link among traditional stability
analysis, status quo analysis, and coalition stability analysis by
using matrix representation of the graph model for conflict
resolution.
To integrate the three existing preference structures into a hybrid
system, a new preference framework is proposed for graph models
using a quadruple relation to express strong or mild preference of
one state or scenario over another, equal preference, and an
uncertain preference. In addition, a multiple-level preference
framework is introduced into the graph model methodology to handle
multiple-level preference information, which lies between relative
and cardinal preferences in information content. Under the existing
three-level structure for strength of preference, a stable state may be
either strongly or weakly stable. However, the three-level structure is
limited in its ability to depict the intensity of relative preference.
In this research, four basic solution concepts
consisting of Nash stability, general metarationality, symmetric
metarationality, and sequential stability, are defined at each level
of preference for the graph model with the extended multiple-level
preference. The development of the two new preference frameworks
expands the realm of applicability of the graph model and provides
new insights into strategic conflicts so that more practical and
complicated problems can be analyzed at greater depth.
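To make the quadruple relation concrete, here is a minimal sketch of how such a hybrid preference relation for one DM could be stored in software; the labels, state names, and dictionary layout are illustrative assumptions, not the thesis's notation:

```python
from enum import Enum

class Pref(Enum):
    STRONG = "strongly preferred"   # row state strongly preferred to column state
    MILD = "mildly preferred"       # row state mildly preferred to column state
    EQUAL = "equally preferred"     # indifference between the two states
    UNCERTAIN = "uncertain"         # preference between the two states is unknown

# Hypothetical hybrid preference relation for one DM over states s1..s4,
# keyed by ordered state pairs (row state, column state).
hybrid_pref = {
    ("s1", "s2"): Pref.STRONG,
    ("s1", "s3"): Pref.MILD,
    ("s2", "s3"): Pref.EQUAL,
    ("s1", "s4"): Pref.UNCERTAIN,
}

print(hybrid_pref[("s1", "s4")].value)
```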
Because a graph model of a conflict consists of several interrelated
graphs, it is natural to ask whether well-known results of Algebraic
Graph Theory can help analyze a graph model. Analysis of a graph
model involves searching paths in a graph but an important
restriction of a graph model is that no DM can move twice in
succession along any path. (If a DM can move consecutively, then
this DM's graph is effectively transitive. Prohibiting consecutive
moves thus allows for graph models with intransitive graphs, which
are sometimes useful in practice.) Therefore, a graph model must be
treated as an edge-weighted, colored multidigraph in which each arc
represents a legal unilateral move and distinct colors refer to
different DMs. The weight of an arc could represent some preference
attribute. Tracing the evolution of a conflict in status quo
analysis is converted to searching all colored paths from a status
quo to a particular outcome in an edge-weighted, colored
multidigraph. Generally, an adjacency matrix can determine a simple
digraph and all state-by-state paths between any two vertices.
However, if a graph model contains multiple arcs between the same
two states controlled by different DMs, the adjacency matrix would
be unable to track all aspects of conflict evolution from the status
quo. To bridge the gap, a conversion function using the matrix
representation is designed to transform the original problem of
searching edge-weighted, colored paths in a colored multidigraph to
a standard problem of finding paths in a simple digraph with no
color constraints. As well, several unexpected and useful links
among status quo analysis, stability analysis, and coalition
analysis are revealed using the conversion function.
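The restriction can be pictured with a small sketch: keep one 0-1 adjacency matrix per DM (one "color" per DM) and search over (state, last mover) pairs so that no DM moves twice in succession. The matrices and state labels below are hypothetical, and the search is a plain breadth-first rendering of the idea rather than the thesis's matrix conversion function:

```python
import numpy as np
from collections import deque

# Hypothetical 4-state graph model; J[dm][s, t] = 1 iff dm can move the conflict
# from state s to state t in one step (each DM's arcs form one "color").
J = {
    "DM1": np.array([[0, 1, 0, 0],
                     [1, 0, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]]),
    "DM2": np.array([[0, 0, 1, 0],
                     [0, 0, 0, 1],
                     [1, 0, 0, 0],
                     [0, 1, 0, 0]]),
}

def reachable(status_quo, movers, J):
    """States reachable from status_quo by legal sequences of unilateral moves
    by the DMs in `movers`, with no DM moving twice in succession."""
    seen = {(status_quo, None)}
    queue = deque([(status_quo, None)])        # (current state, DM who moved last)
    reached = set()
    while queue:
        state, last = queue.popleft()
        for dm in movers:
            if dm == last:                     # the no-consecutive-moves restriction
                continue
            for nxt in np.flatnonzero(J[dm][state]):
                reached.add(int(nxt))
                if (int(nxt), dm) not in seen:
                    seen.add((int(nxt), dm))
                    queue.append((int(nxt), dm))
    return reached

print(reachable(0, ["DM1", "DM2"], J))         # e.g. coalition {DM1, DM2} from state 0
```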
The key input of stability analysis is the reachable list of a DM,
or a coalition, by a legal move (in one step) or by a legal sequence
of unilateral moves, from a status quo in 2-DM or $n$-DM ($n > 2$) models.
A weighted reachability matrix for a DM or a coalition along
weighted colored paths is designed to construct the reachable list
using the aforementioned conversion function. The weight of each
edge in a graph model is defined according to the preference
structure, for example, simple preference, preference with
uncertainty, or preference with strength. Furthermore, a graph model
and the four basic graph model solution concepts are formulated
explicitly using the weighted reachability matrix for the three
preference structures. The explicit matrix representation for conflict
resolution (MRCR) facilitates stability calculations in both 2-DM and
$n$-DM ($n > 2$) models for the three existing preference structures. In addition,
the weighted reachability matrix by a coalition is used to produce
matrix representation of coalition stabilities in
multiple-decision-maker conflicts for the three preference
frameworks.
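As a small illustration of this matrix style of calculation under simple preference (hypothetical matrices; the thesis's MRCR also covers general metarationality, symmetric metarationality, sequential stability, preference uncertainty, and strength, which this sketch omits): a state is Nash stable for a DM exactly when that DM has no unilateral move to a preferred state, which is a row-wise test on the elementwise product of the DM's adjacency matrix and its preference matrix.

```python
import numpy as np

# Hypothetical 3-state, 2-DM model under simple preference.
# J[dm][s, t] = 1 iff dm can move from s to t;  Pplus[dm][s, t] = 1 iff dm prefers t to s.
J = {
    1: np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]]),
    2: np.array([[0, 0, 1], [0, 0, 1], [1, 0, 0]]),
}
Pplus = {
    1: np.array([[0, 0, 1], [1, 0, 1], [0, 0, 0]]),
    2: np.array([[0, 1, 0], [0, 0, 0], [1, 1, 0]]),
}

# Nash stability: state s is stable for dm iff row s of (J[dm] * Pplus[dm]) is all zero,
# i.e. dm has no unilateral improvement from s.
nash = {dm: (J[dm] * Pplus[dm]).sum(axis=1) == 0 for dm in J}

# An equilibrium is a state that is Nash stable for every DM.
equilibria = np.flatnonzero(nash[1] & nash[2])
print("Nash equilibria:", equilibria)
```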
Solution concepts in the graph model were traditionally
defined logically, in terms of the underlying graphs and preference
relations. When status quo analysis algorithms were developed, this
line of thinking was retained and pseudo-codes were developed
following a similar logical structure. However, as was noted in the
development of the decision support system (DSS) GMCR II, the nature
of logical representations makes coding difficult. The DSS GMCR II
is available for basic stability analysis and status quo analysis
under simple preference, but is difficult to modify or adapt to
other preference structures. Compared with existing graphical or
logical representation, matrix representation for conflict
resolution (MRCR) is more effective and convenient for computer
implementation and for adapting to new analysis techniques.
Moreover, because of the inherent link it reveals between stability
analysis and post-stability analysis, the proposed algebraic approach
establishes an integrated matrix-representation paradigm for the graph
model for conflict resolution.
342
Shipment Consolidation in Discrete Time and Discrete Quantity: Matrix-Analytic Methods. Cai, Qishu, 22 August 2011.
Shipment consolidation is a logistics strategy whereby many small shipments are combined into a few larger loads. The economies of scale achieved by shipment consolidation help in reducing the transportation costs and improving the utilization of logistics resources.
The fundamental questions about shipment consolidation are (i) to how large a size should the consolidated loads be allowed to accumulate, and (ii) when is the best time to dispatch such loads? The answers to these questions lie in the set of decision rules known as shipment consolidation policies.
A number of studies have been done in an attempt to find the optimal consolidation policy. However, these studies are restricted to only a few types of consolidation policies and are constrained by the input parameters, mainly the order arrival process and the order weight distribution. Some results on the optimal policy parameters have been obtained, but they are limited to a couple of specific types of policies.
No comprehensive method has yet been developed which allows the evaluation of different types of consolidation policies in general, and permits a comparison of their performance levels. Our goal in this thesis is to develop such a method and use it to evaluate a variety of instances of shipment consolidation problem and policies.
In order to achieve that goal, we will venture to use matrix-analytic methods to model and solve the shipment consolidation problem. The main advantage of applying such methods is that they can help us create a more versatile and accurate model while keeping the difficulties of computational procedures in check.
More specifically, we employ a discrete batch Markovian arrival process (BMAP) to model the weight-arrival process, and for some special cases, we use phase-type (PH) distributions to represent order weights. Then we model a dispatch policy by a discrete monotonic function, and construct a discrete time Markov chain for the shipment consolidation process.
Borrowing an idea from matrix-analytic methods, we develop an efficient algorithm for computing the steady state distribution of the Markov chain and various performance measures such as i) the mean accumulated weight per load, ii) the average dispatch interval and iii) the average delay per order. Lastly, after specifying the cost structures, we will compute the expected long-run cost per unit time for both the private carriage and common carriage cases.
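A minimal sketch of that last step, under strong simplifying assumptions (Bernoulli arrivals of unit-weight orders and a quantity-based policy with a hypothetical dispatch threshold of Q = 5 units, rather than the general BMAP/PH setting of the thesis):

```python
import numpy as np

# In each period one order arrives with probability p; every order weighs 1 unit.
# The state is the weight accumulated so far; a load is dispatched as soon as the
# accumulated weight reaches the threshold Q (a quantity-based policy).
p, Q = 0.6, 5
P = np.zeros((Q, Q))                       # states 0..Q-1 = accumulated weight
for w in range(Q):
    P[w, w] += 1 - p                       # no arrival this period
    P[w, 0 if w + 1 == Q else w + 1] += p  # arrival; dispatch and reset to 0 at Q

# Steady-state distribution: solve pi P = pi with sum(pi) = 1.
A = np.vstack([P.T - np.eye(Q), np.ones(Q)])
b = np.append(np.zeros(Q), 1.0)
pi = np.linalg.lstsq(A, b, rcond=None)[0]

dispatch_rate = pi[Q - 1] * p              # probability a dispatch occurs in a period
print("average dispatch interval:", 1.0 / dispatch_rate)
print("average weight waiting to be shipped:", pi @ np.arange(Q))
```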
343
Skill and knowledge matrix and evaluation tool for CAD-users at Atlas Copco Rock Drills AB. Åberg, Maria, January 2010.
No description available.
344
Simulation of Lidar Return Signals Associated with Water Clouds. Lu, Jianxu, 14 January 2010.
We revisited an empirical relationship between the integrated volume
depolarization ratio and the effective multiple scattering factor, on the
basis of Monte Carlo simulations of spaceborne lidar backscatter associated
with homogeneous water clouds. The relationship is found to be sensitive to
the extinction coefficient and to the particle size. The layer-integrated
attenuated backscatter is also obtained. Comparisons between the simulations
and statistically derived relationships of the layer-integrated
depolarization ratio and the layer-integrated attenuated backscatter, based
on measurements by the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite
Observations (CALIPSO) satellite, show that a cloud with a large effective
size or a large extinction coefficient has a relatively large integrated
backscatter, and a cloud with a small effective size or a large extinction
coefficient has a large integrated volume depolarization ratio. The present
results also show that optically thin water clouds may not obey the
empirical relationship derived by Y. X. Hu and co-authors.
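For reference, the relation of this type most often cited for water clouds, from Hu and co-authors, links the layer-integrated depolarization ratio $\delta$ to the effective multiple scattering factor $\eta$; quoting it here is an assumption about which empirical relationship the abstract refers to:

$$\eta \approx \left(\frac{1-\delta}{1+\delta}\right)^{2}.$$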
345
Biomechanics of common carotid arteries from mice heterozygous for mgR, the most common mouse model of Marfan syndrome. Taucer, Anne Irene, 15 May 2009.
Marfan syndrome, affecting approximately one out of every 5,000 people, is
characterized by abnormal bone growth, ectopia lentis, and often-fatal aortic dilation and
dissection. The root cause is a faulty extracellular matrix protein, fibrillin-1, which
associates with elastin in many tissues. Common carotids from wild-type controls and
mice heterozygous for the mgR mutation, the most commonly used mouse model of
Marfan syndrome, were studied in a biaxial testing device. Mechanical data in the form
of pressure-diameter and force-stretch tests in both the active and passive states were
collected, as well as data on the functional responses to phenylephrine, carbamylcholine
chloride, and sodium nitroprusside. Although little significant difference was found
between the heterozygous and wild-type groups in general, the in vivo stretch for both
groups was significantly different from previously studied mouse vessels. Although the
two groups do not exhibit significant differences, this study comprises a control group
for future work with mice homozygous for mgR, which do exhibit Marfan-like
symptoms. As treatment of Marfan syndrome improves, more Marfan patients will
survive and age, increasing the likelihood that they will develop many of the vascular complications affecting the normal population, including hypertension and
atherosclerosis. Therefore, it is imperative to gather biomechanical data from the Marfan
vasculature so that clinicians may predict the effects of vascular complications in Marfan
patients and develop appropriate methods of treatment.
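For context, pressure-diameter and axial force-length data of this kind are commonly reduced to mean wall stresses and stretches with thin-wall relations such as those below; this is a standard reduction, not necessarily the exact one used in the study:

$$\sigma_\theta = \frac{P\,r_i}{h}, \qquad \sigma_z = \frac{f + P\,\pi r_i^{2}}{\pi\left(r_o^{2} - r_i^{2}\right)}, \qquad \lambda_z = \frac{\ell}{L},$$

where $P$ is the transmural pressure, $r_i$ and $r_o$ are the deformed inner and outer radii, $h = r_o - r_i$ is the wall thickness, $f$ is the applied axial force, and $\ell/L$ is the deformed-to-unloaded axial length ratio.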
346
Estimating and testing of functional data with restrictions. Lee, Sang Han, 15 May 2009.
The objective of this dissertation is to develop a suitable statistical methodology
for functional data analysis. Modern advanced technology allows researchers to collect
samples as functional, meaning the ideal unit of observation is a curve. We consider
each functional observation to be a digitized recording of, or a realization from,
a stochastic process. Traditional statistical methodologies often fail to apply
to such functional data sets because of their high dimensionality.
Functional hypothesis testing is the main focus of my dissertation. We suggest
a testing procedure to assess the significance of the difference between two curves
under an order restriction. This work was motivated by a case study involving high-dimensional
and high-frequency tidal volume traces from the New York State Psychiatric Institute
at Columbia University. The overall goal of the study was to create a model
of the clinical panic attack, as it occurs in panic disorder (PD), in normal human
subjects. We proposed a new dimension reduction technique by non-negative basis
matrix factorization (NBMF) and adapted a one-degree of freedom test in the context
of multivariate analysis. This is important because other dimension reduction techniques, such
as principal component analysis (PCA), cannot be applied in this context due to the
order restriction.
Another area that we investigated was the estimation of functions under shape
restrictions such as convexity and/or monotonicity, together with the development of computationally efficient algorithms to solve the constrained least
squares problem. This study, too, has potential for applications in various fields.
For example, in economics the cost function of a perfectly competitive firm must be
increasing and convex, and the utility function of an economic agent must be increasing
and concave. We propose an estimation method for a monotone convex function
that consists of two sequential shape modification stages: (i) monotone regression
via solving a constrained least square problem and (ii) convexification of the monotone
regression estimate via solving an associated constrained uniform approximation
problem.
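A minimal sketch of the two-stage idea on synthetic data is given below. The monotone stage uses off-the-shelf isotonic regression, and the convexification stage is replaced by a simple greatest-convex-minorant step, so this illustrates the flavor of the procedure rather than the constrained uniform-approximation method proposed in the dissertation:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Synthetic data: the true curve is increasing and convex.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = x**2 + 0.05 * rng.standard_normal(50)

# Stage (i): monotone (isotonic) regression -- a constrained least squares fit.
y_mono = IsotonicRegression(increasing=True).fit_transform(x, y)

# Stage (ii): convexify the monotone fit by taking its lower convex envelope
# (greatest convex minorant), which preserves monotonicity.
def lower_convex_envelope(x, y):
    hull = [0]
    for i in range(1, len(x)):
        while len(hull) >= 2:
            i0, i1 = hull[-2], hull[-1]
            # drop the middle point if it lies on or above the chord (non-convex kink)
            if (y[i1] - y[i0]) * (x[i] - x[i1]) >= (y[i] - y[i1]) * (x[i1] - x[i0]):
                hull.pop()
            else:
                break
        hull.append(i)
    return np.interp(x, x[hull], y[hull])

y_convex = lower_convex_envelope(x, y_mono)
```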
347
Integrated biomechanical model of cells embedded in extracellular matrix. Muddana, Hari Shankar, 15 May 2009.
Nature encourages diversity in life forms (morphologies). The study of morphogenesis
deals with understanding those processes that arise during the embryonic development
of an organism. These processes control the organized spatial distribution of cells,
which in turn gives rise to the characteristic form for the organism. Morphogenesis
is a multi-scale modeling problem that can be studied at the molecular, cellular, and
tissue levels.
Here, we study the problem of morphogenesis at the cellular level by introducing
an integrated biomechanical model of cells embedded in the extracellular matrix.
The fundamental aspects of mechanobiology essential for studying morphogenesis at
the cellular level are the cytoskeleton, extracellular matrix (ECM), and cell adhesion.
Cells are modeled using tensegrity architecture. Our simulations demonstrate cellular
events, such as differentiation, migration, and division using an extended tensegrity
architecture that supports dynamic polymerization of the micro-filaments of the cell.
Thus, our simulations add further support to the cellular tensegrity model. Viscoelastic
behavior of extracellular matrix is modeled by extending one-dimensional
mechanical models (by Maxwell and by Voigt) to three dimensions using finite element
methods. Cell adhesion is described by a general Velcro-type model. We
integrated the mechanics and dynamics of cell, ECM, and cell adhesion with a geometric
model to create an integrated biomechanical model. In addition, the thesis discusses various computational issues, including generating the finite element mesh,
mesh refinement, re-meshing, and solution mapping.
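For reference, the one-dimensional models referred to here are the standard linear viscoelastic elements; with elastic modulus $E$ and viscosity $\eta$ they read

$$\text{Kelvin-Voigt:}\quad \sigma = E\,\varepsilon + \eta\,\dot{\varepsilon}, \qquad \text{Maxwell:}\quad \dot{\varepsilon} = \frac{\dot{\sigma}}{E} + \frac{\sigma}{\eta},$$

and it is their three-dimensional extension that the thesis implements with finite elements.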
As is known from a molecular level perspective, the genetic regulatory network of
the organism controls this spatial distribution of cells along with some environmental
factors modulating the process. The integrated biomechanical model presented here,
besides generating interesting morphologies, can serve as a mesoscopic-scale platform
upon which future work can build correlations with the underlying genetic network.
348
REACTIVE FLOW IN VUGGY CARBONATES: METHODS AND MODELS APPLIED TO MATRIX ACIDIZING OF CARBONATES. Izgec, Omer, May 2009.
Carbonates invariably have small (micron) to large (centimeter) scale
heterogeneities in flow properties that may cause the effects of injected acids to differ
greatly from what is predicted by a model based on a homogeneous formation. To the best
of our knowledge, there are neither theoretical nor experimental studies on the effect of
large scale heterogeneities (vugs) on matrix acidizing. The abundance of carbonate
reservoirs (60% of the world's oil reserves) and the lack of a detailed study on the effect
of multi-scale heterogeneities in carbonate acidizing are the main motivations behind this
study.
In this work, we first present a methodology to characterize the carbonate cores
prior to the core-flood acidizing experiments. Our approach consists of characterization
of the fine-scale (millimeter) heterogeneities using computerized tomography (CT) and
geostatistics, and the larger-scale (millimeter to centimeter) heterogeneities using
a connected component labeling algorithm and numerical simulation.
In order to understand the connectivity of vugs and thus their contribution to flow,
a well-known 2D visualization algorithm, connected component labeling (CCL), was
implemented in the 3D domain. Another tool used in this study to understand the
connectivity of the vugs and its effect on fluid flow is numerical simulation. A 3D finite
difference numerical model is developed based on the Darcy-Brinkman formulation (DBF). Using the developed simulator, a flow-based inversion approach is implemented to
understand the connectivity of the vugs in the samples studied.
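A minimal sketch of the labeling step, with hypothetical data and a 26-connected neighbourhood (the thesis's exact connectivity rule and porosity cutoff may differ):

```python
import numpy as np
from scipy import ndimage

# Stand-in for a CT-derived porosity volume; vug voxels exceed a hypothetical cutoff.
porosity = np.random.rand(64, 64, 128)
vugs = porosity > 0.9

structure = np.ones((3, 3, 3), dtype=bool)        # 26-connected neighbourhood
labels, n_components = ndimage.label(vugs, structure=structure)

# Volume of each connected vug cluster, in voxels -- a proxy for vug connectivity.
sizes = ndimage.sum(vugs, labels, index=np.arange(1, n_components + 1))
print(n_components, "vug clusters; largest spans", int(sizes.max()), "voxels")
```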
After multi-scale characterization of the cores, acid core-flood experiments are
conducted. Cores measuring four inches in diameter by twenty inches in length are used
to decrease the geometry effects on the wormhole path. The post acid injection porosity
distribution and wormhole paths are visualized after the experiments.
The experimental results demonstrate that acid follows not only the high
permeability paths but also the spatially correlated ones. While the connectivity between
the vugs, total amount of vuggy pore space and size of the cores are the predominant
factors, the spatial correlation of the petrophysical properties has a less pronounced effect on
wormhole propagation in the acidization of carbonates.
The fact that acid channeled through the vugular cores, following the path of the
vug system, was underlined with computerized tomography scans of the cores before and
after acid injection. This observation suggests that local pressure drops created by vugs
are more dominant in determining the wormhole flow path than the chemical reactions
occurring at the pore level. Following this idea, we present a modeling study in order to
understand flow in porous media in the presence of vugs. Use of coupled Darcy and
Stokes flow principles, known as Darcy-Brinkman formulation (DBF), underpins the
proposed approach. Several synthetic simulation scenarios are created to study the effect
of vugs on flow and transport.
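For reference, a common single-domain form of the Darcy-Brinkman momentum balance, together with continuity, is

$$\nabla p = -\frac{\mu}{K}\,\mathbf{u} + \mu_{e}\,\nabla^{2}\mathbf{u}, \qquad \nabla\cdot\mathbf{u} = 0,$$

where $\mathbf{u}$ is the superficial velocity, $K$ the permeability, $\mu$ the fluid viscosity, and $\mu_e$ an effective (Brinkman) viscosity. In tight matrix regions the Darcy term dominates, while in open vugs (large $K$) the viscous Brinkman term dominates and Stokes-like flow is recovered; the exact variant used in the thesis may differ in detail.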
The results demonstrate that total injection volume to breakthrough is affected by
spatial distribution, amount and connectivity of vuggy pore space. An interesting finding
is that although the presence and amount of vugs do not change the effective
permeability of the formation, they can strongly affect fluid diversion. We think this is a
very important observation for the design of multilayer stimulation.
349
Experimental and Theoretical Study of Surfactant-Based Acid Diverting Materials. Alghamdi, Abdulwahab, December 2010.
The purpose of matrix stimulation in carbonate reservoirs is to bypass damaged zones and increase the effective wellbore area. This can be achieved by creating highly conductive flow channels known as wormholes. A further injection of acid will follow a wormhole path where the permeability has increased significantly, leaving substantial intervals untreated. Diverting materials such as surfactant-based acids play an important role in mitigating this problem. In this study and for the first time, 20-inch-long cores were used to conduct the acidizing experiments in two configurations: single coreflood and parallel coreflood.
The major findings from performing single coreflood experiments can be summarized as follows: The acid injection rate was found to be a critical parameter in maximizing the efficiency of using surfactant-based acids as a diverting chemical, in addition to creating wormholes. The maximum apparent viscosity, which developed during viscoelastic surfactant acid injection, occurred over a narrow range of acid injection rates. Higher injection rates were not effective in enhancing the acidizing process, and the use of diverting material produced results similar to those of regular acids. The amount of calcium measured in the effluent samples suggests that, if the acid was injected below the optimum rate, it would allow the acid filtrate to extend further ahead of the wormhole; at some point, it would trigger the surfactant and form micelles. When the acid injection rate was lowered further to a value of 1.5 cm3/min, the fluid front developed in a more progressive fashion and the calcium concentration was more significant, continuing to increase until wormhole breakthrough.
On the other hand, the parallel coreflood tests show several periods that can be identified from the shape of the flow rate distribution entering each core. The acid injection rate was confirmed as influencing the efficiency of the surfactant to divert acid. Acid diversion was noted to be most efficient at low rates (3 cm3/min). No significant diversion was noted at high initial permeability ratios, at least for the given core length. The use of surfactant-based acid was also found to be constrained by the scale of the initial permeability ratio. For permeability ratios greater than about 10, diversion was insufficient.
350
Capacitor-Less VAR Compensator Based on a Matrix Converter. Balakrishnan, Divya Rathna, December 2010.
Reactive power, measured in volt-amperes reactive (VARs), is fundamental to ac power systems and is due to the complex impedance of the loads and transmission lines. It has several undesirable consequences, including increased transmission loss, reduction of power transfer capability, and the potential for the onset of system-wide voltage instability, if not properly compensated and controlled. Reactive power compensation is a technique used to manage and control reactive power in the ac network by supplying or consuming VARs from points near the loads or along the transmission lines. Load compensation is aimed at applying power factor correction techniques directly at the loads by locally supplying VARs. Typical loads such as motors and other inductive devices operate with lagging power factor and consume VARs; compensation techniques have traditionally employed capacitor banks to supply the required VARs. However, capacitors are known to have reliability problems with both catastrophic failure modes and wear-out mechanisms. Thus, they require constant monitoring and periodic replacement, which greatly increases the cost of traditional load compensation techniques. This thesis proposes a reactive power load compensator that uses inductors (chokes) instead of capacitors to supply reactive power to support the load. Chokes are regarded as robust and rugged elements, but they operate with lagging power factor and thus consume VARs instead of generating VARs like capacitors. A matrix converter interfaces the chokes to the ac network. The matrix converter is controlled using the Venturini modulation method, which can enable the converter to exhibit a current phase reversal property. So, although the inductors draw lagging currents from the output of the converter, the converter actually draws leading currents from the ac network. Thus, with the proposed compensation technique, lagging power factor loads can be compensated without using capacitor banks.
The detailed operation of the matrix converter and the Venturini modulation method are examined in the thesis. The application of the converter to the proposed load compensation technique is analyzed. Simulations of the system in the MATLAB and PSIM environments are presented that support the analysis. A digital implementation of control signals for the converter is developed which demonstrates the practical feasibility of the proposed technique. The simulation and hardware results have shown the proposed compensator to be a promising and effective solution to the reliability issues of capacitor-based load-side VAR compensation techniques.
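As background, the low-frequency transfer of a three-by-three matrix converter is set by nine duty cycles $m_{kj}$ (output phase $k$, input phase $j$). The basic Venturini solution is often written in the compact form below, valid for voltage ratios $q \le 0.5$; it is quoted here from the general matrix-converter literature and is not necessarily the exact form used in this thesis:

$$m_{kj}(t) = \frac{1}{3}\left[1 + \frac{2\,v_{ok}(t)\,v_{ij}(t)}{V_{im}^{2}}\right],$$

where $v_{ok}$ is the target voltage of output phase $k$, $v_{ij}$ is the voltage of input phase $j$, and $V_{im}$ is the input voltage amplitude. A second basic solution, obtained with the input phase sequence reversed, yields a reversed input displacement angle; that property underlies the current phase reversal exploited by the proposed compensator.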