1 |
Reification of network resource control in multi-agent systems. Liu, Chen. 31 August 2006
In multi-agent systems [1], coordinated resource sharing is indispensable for a set of autonomous agents running in the same execution space to accomplish their computational objectives. This research presents a new approach to network resource control in multi-agent systems based on the CyberOrgs [2] model. The approach aims to offer a mechanism that reifies network resource control in multi-agent systems and to realize this mechanism in a prototype system.

To achieve these objectives, a uniform abstraction, vLink (Virtual Link), is introduced to represent network resources, and a coherent mechanism of vLink creation, allocation, and consumption is developed on top of it. This mechanism is enforced in the network by a fine-grained, flow-based scheduling scheme. In addition, concerns of computations are separated from concerns of the resources required to complete them, which simplifies the engineering of network resource control: application programmers can focus on application development while separately declaring resource requests and defining resource control policies in a simplified way. Furthermore, network resources are bound to computations and controlled in a hierarchy that coordinates network resource usage. A computation and its sub-computations may not consume resources beyond their resource boundary, although resources can be traded between different boundaries.

This thesis also describes the design and implementation of a prototype system: a middleware architecture that can be used to build systems supporting network resource control. The architecture has a layered structure and aims to achieve three goals: (1) providing an interface for programmers to express resource requests for applications and to define resource control policies; (2) specializing the CyberOrgs model to control network resources; and (3) providing carefully designed mechanisms for routing, link sharing, and packet scheduling to enforce the required resource allocation in the network.
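The hierarchical boundary-and-trading discipline can be made concrete with a small sketch. The Python class below is a hypothetical illustration, not the thesis's CyberOrgs/vLink API: each boundary holds a bandwidth budget, a computation may consume only what fits inside its own boundary and every enclosing one, and budget can be traded between boundaries.

```python
class ResourceBoundary:
    """Hypothetical sketch of hierarchical network-resource boundaries.

    Each boundary owns a bandwidth budget (e.g., in kbit/s). A computation
    may consume bandwidth only if every boundary on the path to the root
    still has budget left; boundaries may also trade budget directly.
    """

    def __init__(self, budget, parent=None):
        self.budget = budget          # remaining bandwidth in this boundary
        self.parent = parent

    def can_consume(self, amount):
        # A request must fit inside this boundary and all enclosing ones.
        node = self
        while node is not None:
            if amount > node.budget:
                return False
            node = node.parent
        return True

    def consume(self, amount):
        if not self.can_consume(amount):
            raise RuntimeError("request exceeds a resource boundary")
        node = self
        while node is not None:
            node.budget -= amount
            node = node.parent

    def trade_to(self, other, amount):
        # Resources can move between boundaries by explicit trade.
        if amount > self.budget:
            raise RuntimeError("cannot trade more than remaining budget")
        self.budget -= amount
        other.budget += amount


# Usage: a root boundary enclosing two sub-computations.
root = ResourceBoundary(budget=1000)
a = ResourceBoundary(budget=600, parent=root)
b = ResourceBoundary(budget=400, parent=root)
a.consume(500)        # fits within both a and root
b.trade_to(a, 100)    # b cedes 100 units to a across boundaries
```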
|
2 |
Bandwidth-efficient communication systems based on finite-length low density parity check codes. Vu, Huy Gia. 31 October 2006
Low density parity check (LDPC) codes are linear block codes constructed from pseudo-random parity check matrices. These codes are powerful in terms of error performance and, notably, have low decoding complexity. While infinite-length LDPC codes approach the capacity of communication channels, finite-length LDPC codes also perform well while meeting the delay requirements of many communication applications, such as voice and backbone transmission. Finite-length LDPC codes are therefore attractive for low-latency communication systems. This thesis focuses on bandwidth-efficient communication systems using finite-length LDPC codes. Such systems are realized by mapping a group of LDPC coded bits to a symbol of a high-order signal constellation. Depending on the system's infrastructure and knowledge of the channel state information (CSI), the signal constellations in different coded modulation systems can be two-dimensional multilevel/multiphase constellations or multi-dimensional space-time constellations.
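As an illustration of mapping groups of coded bits onto a high-order constellation, the sketch below Gray-maps four bits at a time to 16-QAM symbols. This is a generic textbook mapping, not one of the specific mappings evaluated in the thesis.

```python
import numpy as np

# Per-axis 2-bit Gray map: 00 -> -3, 01 -> -1, 11 -> +1, 10 -> +3.
GRAY_2BIT = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}

def map_bits_to_16qam(bits):
    """Map a bit sequence (length divisible by 4) to unit-energy 16-QAM symbols."""
    groups = np.asarray(bits).reshape(-1, 4)
    i = np.array([GRAY_2BIT[(int(g[0]), int(g[1]))] for g in groups], float)
    q = np.array([GRAY_2BIT[(int(g[2]), int(g[3]))] for g in groups], float)
    # Average symbol energy of the {+-1, +-3} grid is 10, so scale by sqrt(10).
    return (i + 1j * q) / np.sqrt(10.0)

coded_bits = np.random.randint(0, 2, size=400)   # stand-in for LDPC encoder output
symbols = map_bits_to_16qam(coded_bits)          # 100 complex channel symbols
```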
In the first part of the thesis, two basic bandwidth-efficient coded modulation systems, namely LDPC coded modulation and multilevel LDPC coded modulation, are investigated for both additive white Gaussian noise (AWGN) and frequency-flat Rayleigh fading channels. Bounds on the bit error rate (BER) performance are derived for these systems based on the maximum likelihood (ML) criterion. The derivation relies on union bounding and combinatorial techniques. In particular, for LDPC coded modulation, the ML bound is computed from the Hamming distance spectrum of the LDPC code and the Euclidean distance profile of the two-dimensional constellation. For multilevel LDPC coded modulation, the bound of each decoding stage is obtained for a generalized multilevel coded modulation in which more than one coded bit is considered per level. For both systems, the bounds are confirmed by simulation results of ML decoding and/or the performance of ordered-statistic decoding (OSD) and sum-product decoding. It is demonstrated that these bounds can be used efficiently to evaluate the error performance and to select appropriate parameters (such as the code rate, constellation, and mapping) for the two communication systems.

The second part of the thesis studies bandwidth-efficient LDPC coded systems that employ multiple transmit and multiple receive antennas, i.e., multiple-input multiple-output (MIMO) systems. Two scenarios of CSI availability are considered: (i) the CSI is unknown at both the transmitter and the receiver; (ii) the CSI is known at both the transmitter and the receiver. For the first scenario, LDPC coded unitary space-time modulation systems are most suitable, and the ML performance bound is derived for these non-coherent systems; to derive the bound, a summation of chordal distances is obtained and used in place of Euclidean distances. For the second scenario, adaptive LDPC coded MIMO modulation systems are studied, in which three adaptive schemes with antenna beamforming and/or antenna selection are investigated and compared in terms of bandwidth efficiency. For uncoded discrete-rate adaptive modulation, computation of the bandwidth efficiency shows that the scheme with antenna selection at the transmitter and antenna combining at the receiver performs best when the number of antennas is small. For adaptive LDPC coded MIMO modulation systems, an achievable threshold of the bandwidth efficiency is also computed from the ML bound of LDPC coded modulation derived in the first part.
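The flavor of the first part's ML bounds can be seen in the simplest setting: a linear block code with BPSK on AWGN, where the union-bound estimate is computed directly from the Hamming distance spectrum. The sketch below uses the (7,4) Hamming code's known weight enumerator as a stand-in; the thesis's coded-modulation bounds additionally fold in the Euclidean (or chordal) distance profile of the constellation.

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound_ber(spectrum, n, rate, ebn0_db):
    """Union-bound estimate of BER for a linear code with BPSK on AWGN:
    P_b <~ sum_d A_d * (d/n) * Q(sqrt(2 * d * R * Eb/N0))."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return sum(a_d * (d / n) * q_func(math.sqrt(2.0 * d * rate * ebn0))
               for d, a_d in spectrum.items())

# (7,4) Hamming code: weight enumerator 1 + 7x^3 + 7x^4 + x^7.
spectrum = {3: 7, 4: 7, 7: 1}
for snr_db in (2.0, 4.0, 6.0, 8.0):
    print(snr_db, "dB ->", union_bound_ber(spectrum, n=7, rate=4 / 7, ebn0_db=snr_db))
```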
|
3 |
New Bounding Methods for Global Dynamic Optimization. Song, Yingkai. January 2021
Global dynamic optimization arises in many engineering applications such as parameter estimation, global optimal control, and optimization-based worst-case uncertainty analysis. In branch-and-bound deterministic global optimization algorithms, a major computational bottleneck is generating appropriate lower bounds for the globally optimal objective value. These bounds are typically constructed using convex relaxations for the solutions of dynamic systems with respect to decision variables. Tighter convex relaxations thus translate into tighter lower bounds, which will typically reduce the number of iterations required by branch-and-bound. Subgradients, as useful local sensitivities of convex relaxations, are typically required by nonsmooth optimization solvers to effectively minimize these relaxations. This thesis develops novel techniques for efficiently computing tight convex relaxations with the corresponding subgradients for the solutions of ordinary differential equations (ODEs), to ultimately improve efficiency of deterministic global dynamic optimization.
Firstly, new bounding and comparison results for dynamic process models are developed, which are more broadly applicable to engineering models than previous results. These new results show for the first time that in a state-of-the-art ODE relaxation framework, tighter enclosures of the original ODE system's right-hand side will necessarily translate into enclosures for the state variables that are at least as tight, which paves the way towards new advances for bounding in global dynamic optimization.
Secondly, new convex relaxations are proposed for the solutions of ODE systems. These new relaxations are guaranteed to be at least as tight as state-of-the-art ODE relaxations. Unlike established ODE relaxation approaches, the new ODE relaxation approach can employ any valid convex and concave relaxations for the original right-hand side, and tighter such relaxations will necessarily yield ODE relaxations that are at least as tight. In a numerical case study, such tightness does indeed improve computational efficiency in deterministic global dynamic optimization. This new ODE relaxation approach is then extended in various ways to further tighten ODE relaxations.
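To make "convex and concave relaxations of the right-hand side" concrete, the sketch below gives the textbook McCormick envelope for a bilinear term w = x*y on a box. This is an illustrative example of the kind of relaxations such frameworks consume, not the thesis's new ODE relaxation machinery.

```python
def mccormick_bilinear(x, y, xl, xu, yl, yu):
    """Textbook McCormick convex/concave envelope of w = x*y on [xl,xu] x [yl,yu]."""
    cv = max(xl * y + x * yl - xl * yl,   # convex underestimator
             xu * y + x * yu - xu * yu)
    cc = min(xu * y + x * yl - xu * yl,   # concave overestimator
             xl * y + x * yu - xl * yu)
    return cv, cc

# The envelope sandwiches the true product everywhere on the box.
cv, cc = mccormick_bilinear(0.5, -1.0, 0.0, 1.0, -2.0, 2.0)
assert cv <= 0.5 * (-1.0) <= cc
```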
Thirdly, new subgradient evaluation approaches are proposed for ODE relaxations. Unlike established approaches that compute valid subgradients for nonsmooth dynamic systems, the new approaches are compatible with reverse automatic differentiation (AD). It is shown for the first time that subgradients of dynamic convex relaxations can be computed via a modified adjoint ODE sensitivity system, which could speed up lower bounding in global dynamic optimization.
Lastly, in the situation where convex relaxations are known to be correct but subgradients are unavailable (such as for certain ODE relaxations), a new approach is proposed for tractably constructing useful correct affine underestimators and lower bounds of the convex relaxations just by black-box sampling. No additional assumptions are required, and no subgradients must be computed at any point. Under mild conditions, these new bounds are shown to converge rapidly to an original nonconvex function as the domain of interest shrinks. Variants of the new approach are presented to account for numerical error or noise in the sampling procedure.
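A one-dimensional sketch conveys the sampling idea: for a convex f on [a, b], three samples (the endpoints and the midpoint) yield a provably valid affine underestimator after shifting the secant down by the sampled convexity gap. This simplified construction is only illustrative of the black-box sampling approach; the thesis treats the multivariate case and accounts for sampling error and noise.

```python
import math

def affine_underestimator_1d(f, a, b):
    """Build g(x) = f(w) + s*(x - w) - c <= f(x) on [a, b] for convex f,
    using only three black-box samples and no subgradients.
    w is the midpoint, s the secant slope, and c = (f(a)+f(b))/2 - f(w)
    is the sampled convexity gap (nonnegative whenever f is convex)."""
    w = 0.5 * (a + b)
    fa, fw, fb = f(a), f(w), f(b)
    s = (fb - fa) / (b - a)
    c = 0.5 * (fa + fb) - fw
    return lambda x: fw + s * (x - w) - c

# Check on a convex function: g must sit below f across the interval.
f = math.exp
g = affine_underestimator_1d(f, -1.0, 2.0)
assert all(g(x) <= f(x) + 1e-12
           for x in [i * 0.01 - 1.0 for i in range(301)])
```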
|
4 |
Three dimensional formulation for the stress-strain-dilatancy elasto-plastic constitutive model for sand under cyclic behaviour. Das, Saumyasuchi. January 2014
Recent experiences from the Darfield and Canterbury, New Zealand earthquakes have shown that the soft soil condition of saturated liquefiable sand has a profound effect on the seismic response of buildings, bridges and other lifeline infrastructure. Detailed evaluation of seismic response requires a three-dimensional integrated analysis comprising structure, foundation and soil; such an integrated analysis is referred to in the literature as Soil Foundation Structure Interaction (SFSI). SFSI is a three-dimensional problem for three primary reasons: first, foundation systems are three-dimensional in form and geometry; second, ground motions are three-dimensional, producing complex multiaxial stresses in soils, foundations and structures; and third, soils in particular are sensitive to complex stress states because their heterogeneity leads to highly anisotropic constitutive behaviour. In the literature, the majority of seismic response analyses are limited to the plane-strain configuration because of the lack of adequate constitutive models for both soils and structures, and because of computational limitations. Such two-dimensional analyses do not represent a complete view of the problem, for the three reasons noted above. In this context, the present research develops a three-dimensional mathematical formulation of an existing plane-strain elasto-plastic constitutive model of sand developed by Cubrinovski and Ishihara (1998b). This model has been specially formulated to simulate the liquefaction behaviour of sand under earthquake-induced loading, and has been well validated and widely applied in the verification of shake-table and centrifuge tests, as well as in conventional ground response analysis and the evaluation of case histories.
The approach adopted herein is based entirely on the mathematical theory of plasticity and utilises some unique features of the bounding surface plasticity formalised by Dafalias (1986). The principal constitutive parameters, equations, assumptions and empiricism of the existing plane-strain model are adopted in their exact form in the three-dimensional version. The original two-dimensional model can therefore be considered a true subset of the three-dimensional form; it is retrieved when the tensorial quantities of the three-dimensional version are reduced to the plane-strain configuration. An anisotropic Drucker-Prager-type failure surface has been adopted for the three-dimensional version to accommodate triaxial stress paths. Accordingly, a new mixed hardening rule based on Mroz's approach of homogeneous surfaces (Mroz, 1967) has been introduced for the virgin loading surface. The three-dimensional version is validated against experimental data for cyclic torsional and triaxial stress paths.
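For reference, the sketch below evaluates the classical isotropic Drucker-Prager yield function from a stress tensor. It shows the generic textbook form only; the thesis adopts an anisotropic variant with a mixed hardening rule.

```python
import numpy as np

def drucker_prager_yield(sigma, alpha, k):
    """Classical Drucker-Prager yield function f = sqrt(J2) + alpha*I1 - k.
    sigma: 3x3 Cauchy stress tensor (tension positive);
    yielding is indicated by f >= 0."""
    i1 = np.trace(sigma)                    # first stress invariant
    s = sigma - (i1 / 3.0) * np.eye(3)      # deviatoric stress
    j2 = 0.5 * np.tensordot(s, s)           # second deviatoric invariant
    return np.sqrt(j2) + alpha * i1 - k

# Example: a triaxial-compression-like stress state (units arbitrary).
sigma = np.diag([-100.0, -100.0, -250.0])
print(drucker_prager_yield(sigma, alpha=0.2, k=50.0))
```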
|
5 |
Relationships between selected speed strength performance tests and temporal variables of maximal running velocity. Faccioni, Adrian. January 1995
The relationships between selected sprint-specific bounding exercises and sprint performance were analysed in fourteen sprint athletes (7 elite, 7 sub-elite). Subjects performed sprints over 60m, Counter Movement Jumps with and without a 20kg load, High Speed Alternate Leg Bounding over 30m, and High Speed Single Leg Hopping over 20m. All athletes underwent anthropometric measurement (height, weight and leg length). Of all variables measured, the elite group was significantly better (p<0.001) in Counter Movement Jump, Time to 60m, Time from 30m to 60m, and Maximal Running Velocity. Linear regressions were carried out on all variables that correlated with Time to 30m (Acceleration Phase) and Maximal Running Velocity at the p<0.001 and p<0.01 levels of significance. This allowed several prediction tables to be compiled containing performance measures (sprints and jumps) that can be used as testing measures for sprint athletes to estimate their Acceleration Phase and Maximal Running Velocity. A stepwise multiple regression demonstrated that Time to 60m was the best predictor of Maximal Running Velocity, while Time to 60m, leg length, High Speed Alternate Leg Bounding and Sprint Stride Rate were the best predictors of the Acceleration Phase. A stepwise cross-validated linear discriminant function analysis was used to determine the sprint and jump measures that best classified an athlete as an elite or sub-elite performer. Among the sprint variables, Time to 60m and Time to 30m best assigned a subject to the Elite or Sub-elite group; among the bounding variables, Counter Movement Jump and the Ground Contact Time of the High Speed Alternate Leg Bounding did so. The present study suggests that Time to 60m is the best predictor of Maximal Running Velocity and the Acceleration Phase, and that Counter Movement Jumping and High Speed Alternate Leg Bounding are also useful tools for developing and testing elite sprint athlete performance.
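The prediction-table idea reduces to simple linear regression. The sketch below fits Maximal Running Velocity against Time to 60m with numpy; the numbers are fabricated purely for illustration and are not data from the study.

```python
import numpy as np

# Hypothetical (Time to 60m [s], Maximal Running Velocity [m/s]) pairs,
# invented to illustrate the regression -- NOT the study's data.
t60 = np.array([6.9, 7.0, 7.1, 7.3, 7.5, 7.6, 7.8, 8.0])
vmax = np.array([10.9, 10.7, 10.5, 10.2, 9.8, 9.7, 9.4, 9.1])

slope, intercept = np.polyfit(t60, vmax, 1)   # least-squares line
predict = lambda t: slope * t + intercept

r = np.corrcoef(t60, vmax)[0, 1]
print(f"vmax = {slope:.2f} * t60 + {intercept:.2f}, r = {r:.3f}")
print("predicted vmax for a 7.2 s runner:", round(predict(7.2), 2), "m/s")
```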
|
6 |
Sampling from the Hardcore Process. Dodds, William C. 01 January 2013
Partially Recursive Acceptance Rejection (PRAR) and bounding chains used in conjunction with coupling from the past (CFTP) are two perfect simulation protocols which can be used to sample from a variety of unnormalized target distributions. This paper first examines and then implements these two protocols to sample from the hardcore gas process. We empirically determine the subset of the hardcore process's parameters for which these two algorithms run in polynomial time. Comparing the efficiency of these two algorithms, we find that PRAR runs much faster for small values of the hardcore process's parameter whereas the bounding chain approach is vastly superior for large values of the process's parameter.
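For intuition about the target distribution, here is a plain acceptance-rejection sampler for the hardcore model on a small grid: each site is proposed occupied independently with probability lam/(1+lam), and the configuration is accepted only if no two adjacent sites are occupied, which yields exact draws from pi(s) proportional to lam^|s| over independent sets. This baseline is neither PRAR nor the bounding-chain CFTP algorithm studied in the paper, and it is practical only for small grids and small parameter values.

```python
import numpy as np

def hardcore_rejection_sample(n, lam, rng):
    """Exact sample from the hardcore model on an n x n grid, pi(s) ~ lam^|s|,
    by acceptance-rejection: propose sites i.i.d. Bernoulli(lam/(1+lam)),
    accept iff no two horizontally/vertically adjacent sites are occupied.
    Acceptance probability shrinks quickly with n and lam, so keep both small."""
    p = lam / (1.0 + lam)
    while True:
        s = rng.random((n, n)) < p
        ok_rows = not np.any(s[:, :-1] & s[:, 1:])   # no horizontal neighbors
        ok_cols = not np.any(s[:-1, :] & s[1:, :])   # no vertical neighbors
        if ok_rows and ok_cols:
            return s

rng = np.random.default_rng(0)
sample = hardcore_rejection_sample(4, lam=0.5, rng=rng)
print(sample.astype(int))
```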
|
7 |
Exploration of Deep Learning Applications on an Autonomous Embedded Platform (Bluebox 2.0). Katare, Dewant. 12 1900
An autonomous vehicle depends on a combination of the latest ADAS safety features, such as Adaptive Cruise Control (ACC), Autonomous Emergency Braking (AEB), Automatic Parking, Blind Spot Monitoring, Forward Collision Warning or Avoidance (FCW or FCA), and Lane Departure Warning. The current trend is to implement these technologies using artificial or deep neural networks in place of traditionally used algorithms. Recent research in deep learning and the development of competent processors for autonomous or self-driving cars has shown considerable promise, but hardware deployment faces many complexities because of limited resources such as memory, computational power, and energy. Deploying several of these ADAS safety features with multiple sensors and individual processors increases integration complexity and results in a distributed system, which is a pivotal concern for autonomous vehicles.
This thesis tackles two important ADAS safety features, forward collision warning and object detection, using machine learning and deep neural networks, and their deployment on an autonomous embedded platform:
1. A machine learning based approach for the forward collision warning system in an autonomous vehicle.
2. 3-D object detection using Lidar and camera, primarily based on Lidar point clouds.
The proposed forward collision warning model is based on a forward-facing automotive radar providing sensed input values, such as acceleration, velocity, and separation distance, to a classifier that, on the basis of a supervised learning model, alerts the driver of a possible collision. Decision trees, linear regression, support vector machines, stochastic gradient descent, and a fully connected neural network are used for prediction.
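A minimal sketch of such a classifier appears below, using scikit-learn's decision tree on synthetic radar-like features. The feature names and the time-to-collision labeling rule are assumptions for illustration, not the thesis's dataset or tuned models.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
n = 2000

# Synthetic radar-style features: closing speed [m/s], relative
# acceleration [m/s^2], separation distance [m] (all invented).
closing_speed = rng.uniform(0.0, 30.0, n)
rel_accel = rng.uniform(-5.0, 5.0, n)
distance = rng.uniform(2.0, 150.0, n)

# Label "warn" when time-to-collision (distance / closing speed)
# falls below 2.5 s -- an assumed threshold for this sketch.
ttc = distance / np.maximum(closing_speed, 1e-6)
y = (ttc < 2.5).astype(int)
X = np.column_stack([closing_speed, rel_accel, distance])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=5).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```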
The second proposed method uses an object detection architecture that combines 2D object detectors with contemporary 3D deep learning techniques. In this approach, a 2D object detector first proposes a 2D bounding box on the images or video frames. A 3D object detection technique then instance-segments the point clouds and, based on the density of the raw point clouds, predicts a 3D bounding box around each segmented object.
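As a simplified illustration of the final step, the sketch below fits an axis-aligned 3D bounding box around an already-segmented instance's points. The thesis's pipeline operates on raw Lidar point clouds and is not limited to axis-aligned boxes.

```python
import numpy as np

def axis_aligned_bbox(points):
    """Fit an axis-aligned 3D bounding box to an (N, 3) array of points
    belonging to one segmented instance; returns (center, size)."""
    lo = points.min(axis=0)
    hi = points.max(axis=0)
    return (lo + hi) / 2.0, hi - lo

# Toy "segmented instance": a box-shaped cluster of Lidar-like points.
rng = np.random.default_rng(1)
cluster = rng.uniform([10.0, -1.0, 0.0], [14.0, 1.0, 1.6], size=(500, 3))
center, size = axis_aligned_bbox(cluster)
print("center:", center.round(2), "size (l, w, h):", size.round(2))
```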
|