51 |
Evaluation of Nonlinear Damping Effects on Buildings. Alagiyawanna, Krishanthi. 01 January 2007 (has links)
Analysis of dynamic behavior is a vital aspect of designing structures such as buildings and bridges. Determination of the correct damping factor is of critical importance, as it is the governing factor in dynamic design. Damping in structures exhibits very complex behavior, and different models have been suggested in the literature to explain it. The usefulness of a damping model depends on how easily it can be applied to analyze dynamic behavior: the ease of representing the model mathematically, and the ease of analyzing the dynamic behavior using that representation, are the two determining aspects of a model's utility. This thesis presents a parametric representation of non-linear damping models of the form presented by [Jea86], together with mathematical techniques for using the parametrically represented damping model in dynamic analysis. In the damping model used in this thesis, the damping factor is proportional to the amplitude of vibration of the structure; however, determination of the amplitude in turn depends on the damping of the structure for a given excitation. Moreover, the equations of motion are differential equations in a matrix form that is generally not linearly separable. This thesis addresses these challenges and presents a numerical method to solve the equations of motion using Runge-Kutta techniques. This enables one to use a given non-linear model of the form proposed by [Jea86] to analyze the actual response of a structure to a given excitation from wind, seismic, or any other source. Several experiments were conducted on reinforced concrete and steel-framed buildings to evaluate the proposed framework. The non-linear damping model proposed by [Sat03], which conforms to [Jea86], is used to demonstrate the use of the proposed techniques.
Finally, a new damping model is proposed, based on the actual behavior and the serviceability criteria, which better explains the damping behavior of structures.
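As a rough illustration of the numerical approach, consider a single-degree-of-freedom oscillator whose damping ratio grows with amplitude, integrated with the classical fourth-order Runge-Kutta scheme. The damping law `zeta(x) = zeta0 + alpha*|x|` and all parameter values here are invented stand-ins for an amplitude-dependent model of the [Jea86] type, not the thesis's actual model:

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate(m=1.0, k=100.0, zeta0=0.01, alpha=0.5, x0=0.05, T=10.0, h=1e-3):
    """Free vibration of a SDOF oscillator whose damping ratio depends on
    the instantaneous amplitude: zeta(x) = zeta0 + alpha*|x| (hypothetical)."""
    wn = np.sqrt(k / m)                       # natural frequency [rad/s]

    def f(t, y):
        x, v = y
        zeta = zeta0 + alpha * abs(x)         # amplitude-dependent damping
        return np.array([v, -2 * zeta * wn * v - wn**2 * x])

    y = np.array([x0, 0.0])                   # initial displacement, zero velocity
    ys = [y]
    for i in range(int(T / h)):
        y = rk4_step(f, i * h, y, h)
        ys.append(y)
    return np.array(ys)                       # columns: displacement, velocity

traj = simulate()
```

Because the damping coefficient is re-evaluated inside `f` at every sub-step, the same loop handles any amplitude-dependent damping law without linearization.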
|
52 |
Dynamic Ridesharing: Understanding the Role of Gender and Technology. Siddiqi, Zarar. 26 November 2012 (has links)
Using a case study approach, the thesis examines how dynamic ridesharing (DRS) has evolved over time, in parallel with changes in information and communication technologies (ICTs). DRS is conceptually framed using a socio-ecological modeling approach, the goal being to develop hypotheses regarding factors likely to influence DRS use. This conceptual work forms the foundation for an empirical study of DRS use. Survey data were used in descriptive analysis and logistic regression modeling organized to identify who uses DRS and how. The study reveals that gender may be a central concept for understanding why and how DRS is used by certain segments of the population more than others. With regard to technology, it is found that although technical competencies were enabling, in terms of facilitating rideshares, gender, and perhaps related mobility constraints, emerged as larger issues. The findings also caution against relying solely on technological advancement for the success of ridesharing programs.
|
54 |
Dynamic payload estimation in four wheel drive loaders. Hindman, Jahmy J. 22 December 2008
Knowledge of the mass of the manipulated load (i.e. payload) in off-highway machines is useful information for a variety of reasons, ranging from knowledge of machine stability to ensuring compliance with transportation regulations. However, this knowledge is difficult to ascertain. This dissertation concerns itself with delineating the motivations for, and difficulties in, the development of a dynamic payload weighing algorithm, and describes how the new type of dynamic payload weighing algorithm was developed and progressively overcame some of these difficulties.
The payload mass estimate is dependent upon many different variables within the off-highway vehicle. These include static variability, such as machining tolerances of the revolute joints in the linkage and the mass of the linkage members, as well as dynamic variability, such as whole-machine accelerations, hydraulic cylinder friction, and pin joint friction. Some initial effort was undertaken to understand the static variables first, by studying the effects of machining tolerances on the working linkage kinematics in a four-wheel-drive loader. This effort showed that if the linkage members were machined within the tolerances prescribed by the design of the linkage components, the tolerance stack-up of the machining variability had very little impact on overall linkage kinematics.
Once some of the static dependent variables were understood in greater detail, significant effort was undertaken to understand and compensate for the dynamic dependent variables of the estimation problem. The first algorithm took a simple approach, using the kinematic linkage model coupled with hydraulic cylinder pressure information to calculate a payload estimate directly. This algorithm did not account for many of the aforementioned dynamic variables (joint friction, machine acceleration, etc.) but was computationally expedient. This work, however, produced payload estimates with error far greater than the 1% full scale value being targeted. Since this initial simplistic effort failed, a second algorithm was needed.
The second algorithm was developed from what was known about the limitations of the first: a suitable method of compensating for the non-linear dependent dynamic variables was needed. To address this, an artificial neural network approach was taken. The second algorithm utilised an artificial neural network to capture the kinematic linkage characteristics and all other dynamic dependent variable behaviour, estimating the payload from the linkage position and hydraulic cylinder pressures. This algorithm was trained using empirically collected data and then subjected to actual use in the field. This experiment showed that the dynamic complexity of the estimation problem was too large for a small (and computationally feasible) artificial neural network to characterize such that the estimate error was less than the 1% full scale requirement.
A third algorithm was required due to the failures of the first two. The third algorithm was constructed to take advantage of the kinematic model developed and to utilise the artificial neural network's ability to perform nonlinear mapping. As such, the third algorithm uses the kinematic model output as an input to the artificial neural network. This change keeps the network from having to characterize the linkage kinematics and only requires it to compensate for the dependent dynamic variables excluded by the kinematic linkage model. This algorithm showed significant improvement over the previous two but still did not meet the required 1% full scale requirement. The promise shown by this algorithm, however, was convincing enough that further effort was spent in refining it to improve the accuracy.
The fourth algorithm improved upon the third by adding additional inputs to the artificial neural network that allowed it to better compensate for the variables present in the problem. This effort produced an algorithm that, when subjected to actual field use, produced results very near the 1% full scale accuracy requirement. This algorithm could be improved slightly with better input data filtering and possibly additional network inputs.
The final algorithm produced results very near the desired accuracy. It was also novel in that the artificial neural network was not used solely as the means to characterize the problem for estimation purposes. Instead, much of the responsibility for the mathematical characterization of the problem was placed upon a kinematic linkage model that fed its own payload estimate into the neural network, where the estimate was further refined during network training with calibration data and additional inputs. This method of nonlinear state estimation (i.e. utilising a neural network to compensate for nonlinear effects in conjunction with a first-principles model) has not been seen previously in the literature.
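The hybrid structure of the final algorithm, a first-principles estimate refined by a small neural network trained on calibration data, can be sketched as follows. All geometry, pressure ranges, and the synthetic error model are invented for illustration; only the architecture (kinematic estimate feeding a residual-learning network) follows the description above:

```python
import numpy as np

rng = np.random.default_rng(1)

def kinematic_estimate(p_head, p_rod, boom_angle):
    """First-principles payload estimate from cylinder pressures and geometry.
    Areas and lever ratio are illustrative values, not a real machine's."""
    A_head, A_rod, lever = 0.02, 0.012, 1.5        # m^2, m^2, effective ratio
    force = p_head * A_head - p_rod * A_rod        # net cylinder force [N]
    return force * lever / (9.81 * np.cos(boom_angle))   # mass [kg]

# Synthetic "calibration" data: true mass differs from the kinematic
# estimate by an assumed systematic error (friction, acceleration, etc.)
n = 2000
p_head = rng.uniform(5e6, 2e7, n)
p_rod = rng.uniform(1e5, 1e6, n)
angle = rng.uniform(-0.3, 0.5, n)
m_kin = kinematic_estimate(p_head, p_rod, angle)
m_true = m_kin * (1 - 0.05 * np.sin(angle)) - 50.0

# One-hidden-layer network learns the (scaled) residual between the
# kinematic estimate and the calibration mass.
X = np.column_stack([m_kin / 1e4, angle])          # scaled network inputs
y = (m_true - m_kin) / 1e3
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, 16); b2 = 0.0
lr = 0.05
for _ in range(3000):                              # full-batch gradient descent
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    gW2 = h.T @ err / n; gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h**2)            # backprop through tanh
    gW1 = X.T @ gh / n; gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

# Refined estimate = kinematic model output + learned correction
m_refined = m_kin + (np.tanh(X @ W1 + b1) @ W2 + b2) * 1e3
```

The point of the design is visible in the last line: the network never has to reproduce the linkage kinematics, only the residual the kinematic model cannot explain.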
|
55 |
Implementation of Dynamic Customer Product Information for a Network Router Product. Lotfi, Saghi. January 2011 (has links)
When a telecommunication company delivers a product to a customer, three main pieces are included: software, hardware and Customer Product Information (CPI). The CPI can be thought of as the "user manual" for the product, and it is important to most companies. It matters not only that the final product really corresponds to the needs of the customer, but also that the customer can easily learn how to install, configure and subsequently use the product. To provide this information to the customer, correct content and a good information structure of the CPI are crucial. To ensure this, the studied company has developed a "Customer Product Information Life Cycle Process" to enhance the understanding of customer needs in terms of documentation and training material about the product. Part of this thesis consists of a study evaluating the development parts of the CPI process, carried out in order to find a method and tool for improving the structure, content and usability of the CPI used in a product called Gateway GPRS Support Node (GGSN). The conclusions from the study are implemented with a Content Management System (CMS). One important aim is to use a wiki-type tool with which the customer can make local adaptations to the delivered CPI and add information about their own network, configurations and handling; in this way the CPI structure becomes more user friendly and can be used more efficiently by the customer's staff. As part of this thesis, a test was carried out to suggest a new model to improve the current CPI model used at the company. The test was based on a tool-independent methodology, using a CMS called Drupal. Drupal was used to create test documents in the GGSN-MPG CPI environment, and the quality of the CPI made with the Drupal tool was examined after the change.
The results demonstrate that a new portal platform based on a Drupal-like tool achieves a far more flexible structure and would greatly improve the CPI capabilities. It is preferable to use a Drupal-like portal platform rather than a website when implementing a new CPI structure.
|
56 |
Flocculation of natural organic matter in Swedish lakes. Klemedsson, Shicarra. January 2012 (has links)
Flocculation is an important part of the carbon cycle. It is therefore crucial to understand how flocculation is regulated and how different environmental factors affect it. A difficulty is that flocculation has proven hard to measure experimentally. In this thesis, flocculation of dissolved organic carbon in a Swedish lake was measured in a series of laboratory experiments using Dynamic Light Scattering (DLS). DLS is used to determine the size distribution profile of, for instance, small particles in suspension: it measures Brownian motion and relates it to particle size via the fluctuations in scattering intensity. Because it is not very effective to measure the frequency spectrum contained in the intensity fluctuations directly, a digital autocorrelator is used instead. Since factors such as pH, salinity and calcium chloride content vary between lakes and are thought to affect flocculation, these were investigated as well. As pH was changed over a range of 3 to 9, small changes in size distribution could be detected. Salinity and calcium chloride content have quite an impact on flocculation. Time also has a great impact: samples left to rest for a week showed a significant increase in particle size. For DLS to work, the samples need to be filtered or centrifuged to remove large particles. Different types of filters were tested to see which filter material was best to use. When filtering the water, only the large particles should be removed; natural organic matter has a hydrophobic component which adsorbs to some filter types but not to others. It is crucial to know which filters this hydrophobic component adsorbs to, so that the loss of dissolved organic carbon during filtration can be minimized.
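The size-from-Brownian-motion step can be sketched with the standard DLS relations: the field autocorrelation decays as g1(tau) = exp(-Gamma*tau) with Gamma = D*q^2, and the Stokes-Einstein equation converts the diffusion coefficient D into a hydrodynamic diameter. The instrument settings below (He-Ne laser, 90-degree detection, water at 25 C) are typical defaults, not necessarily those of the thesis experiments:

```python
import numpy as np

def hydrodynamic_diameter(decay_rate, wavelength=633e-9, theta=np.pi / 2,
                          n_medium=1.33, T=298.15, eta=0.89e-3):
    """Stokes-Einstein diameter [m] from the autocorrelation decay rate
    Gamma [1/s], where g1(tau) = exp(-Gamma*tau) and Gamma = D*q^2."""
    kB = 1.380649e-23                                   # Boltzmann constant
    q = 4 * np.pi * n_medium * np.sin(theta / 2) / wavelength  # scattering vector
    D = decay_rate / q**2                               # diffusion coefficient
    return kB * T / (3 * np.pi * eta * D)               # Stokes-Einstein

# Round trip: a 100 nm particle's decay rate should map back to ~100 nm
kB = 1.380649e-23
d_true = 100e-9
D = kB * 298.15 / (3 * np.pi * 0.89e-3 * d_true)
q = 4 * np.pi * 1.33 * np.sin(np.pi / 4) / 633e-9
gamma = D * q**2
d_est = hydrodynamic_diameter(gamma)
```

In practice the decay rate comes from fitting the measured intensity autocorrelation (e.g. by cumulant analysis), which is the job of the digital autocorrelator mentioned above.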
|
57 |
Mechanisms for Dynamic Setting with Restricted Allocations. Yu, Yuxin. 21 October 2011 (has links)
Dynamic mechanism design is an important area of multiagent systems, commonly used in resource allocation where the resources are time-related or the agents come and go dynamically. We focus on a multiagent model in which the agents stay while the resources arrive and depart. The resources are interpreted as work or jobs and are called tasks. The allocation outcome space has a special restriction: every agent can work on only one resource at a time, because every agent has finite computational capability in reality.
We propose a dynamic mechanism and analyze its incentive properties; we show that the mechanism is incentive compatible. Empirically, our dynamic mechanism performs well and is able to achieve high economic efficiency, even outperforming standard approaches when the agents are concerned about future tasks. We also introduce a static mechanism under the setting of a restricted outcome space; it is proved that the static mechanism is incentive compatible, and its computational complexity is much lower than that of the standard VCG mechanism.
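As a point of reference for the VCG comparison, the static one-task-per-agent setting can be cast as an assignment problem, for which standard VCG payments are easy to sketch. This is the baseline being compared against, not the thesis's own mechanism, and the valuation numbers are made up:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def vcg_assignment(values):
    """VCG for a one-task-per-agent allocation (an assignment problem).
    values[i, j] = agent i's reported value for task j. Returns the
    welfare-maximizing assignment and each winner's payment: the
    externality that agent imposes on the others."""
    rows, cols = linear_sum_assignment(-values)        # maximize total value
    total = values[rows, cols].sum()
    payments = {}
    for i, j in zip(rows, cols):
        others = np.delete(values, i, axis=0)          # economy without agent i
        r2, c2 = linear_sum_assignment(-others)
        welfare_without_i = others[r2, c2].sum()
        welfare_others_with_i = total - values[i, j]
        payments[i] = welfare_without_i - welfare_others_with_i
    return dict(zip(rows, cols)), payments

# Two agents, two tasks (values invented for illustration)
vals = np.array([[8.0, 2.0],
                 [6.0, 5.0]])
alloc, pay = vcg_assignment(vals)
```

The computational objection in the abstract is visible here: each payment requires re-solving the allocation problem with one agent removed, so VCG costs one extra optimization per winner.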
|
59 |
Low-frequency variability of currents in the deepwater eastern Gulf of Mexico. Cole, Kelly Lynne. 15 May 2009 (has links)
Vertical structure of the low-frequency horizontal currents at the northern edge of the Loop Current during eddy shedding events is observed using concurrent hydrographic, moored, and satellite altimetry data from 2005. Dynamic modes are calculated at three deep (~3000 m), full water-column moorings in the eastern Gulf of Mexico. Time series of the barotropic and first two baroclinic modes are found using a least squares minimization that fits theoretically derived modes to observed moored velocity data.
EOF analyses show that the majority of observed variance is explained by a surface-trapped mode that is highly coherent with the temporal amplitudes of the first baroclinic mode, and a lower, but significant, percentage of variance is captured in bottom-intensified modes. Amplitudes of the second empirical mode indicate that currents are more coherent in the ocean interior approaching the Loop Current, as more variance is explained by this mode at the southernmost mooring near the Loop Current.
A dynamic mode decomposition of the horizontal currents reveals that the barotropic and first baroclinic modes exhibit low-frequency variability and eddy time scales of 10-40 days. Second baroclinic mode amplitudes show higher-frequency variability and shorter time scales. A model utility test for the least squares fit of modeled to observed velocity shows that the second baroclinic mode is useful to the statistical model during 50-85% of the mooring deployment, and is particularly necessary when cyclonic features are present in the study area. The importance of the second baroclinic mode to the model increases significantly closer to the Loop Current.
High-speed currents associated with the Loop Current and anticyclones stimulate a strong first baroclinic response, but the second baroclinic mode amplitudes are found to be similar in magnitude to the first baroclinic mode amplitudes at times. This happens episodically and could be an indication of higher-order dynamics related to frontal eddies or Loop Current eddy shedding.
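The least-squares mode-fitting step can be sketched as follows. For simplicity this assumes flat-bottom, constant-stratification modes (cosines in depth) rather than the modes the thesis derives from observed stratification, and the mooring depths and amplitudes are made-up numbers:

```python
import numpy as np

def fit_modes(z, u, H, n_baroclinic=2):
    """Least-squares fit of vertical dynamic modes to a velocity profile.
    With constant stratification over a flat bottom the horizontal-velocity
    modes are cos(n*pi*z/H); real modes come from the observed N^2 profile."""
    cols = [np.ones_like(z)]                       # barotropic mode
    for n_mode in range(1, n_baroclinic + 1):
        cols.append(np.cos(n_mode * np.pi * z / H))
    G = np.column_stack(cols)                      # modes evaluated at depths
    amps, *_ = np.linalg.lstsq(G, u, rcond=None)   # modal amplitudes
    return amps, G @ amps                          # amplitudes, fitted profile

H = 3000.0                                          # water depth [m]
z = np.linspace(0.0, H, 30)                         # instrument depths [m]
true_amps = np.array([0.10, 0.35, -0.08])           # m/s, invented values
u = (true_amps[0]
     + true_amps[1] * np.cos(np.pi * z / H)
     + true_amps[2] * np.cos(2 * np.pi * z / H))    # synthetic current profile
amps, u_fit = fit_modes(z, u, H)
```

Repeating this fit at every time step of the mooring record yields the modal amplitude time series described above; the "model utility test" then asks whether dropping the second baroclinic column of `G` significantly degrades the fit.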
|
60 |
Modeling and analyzing spread of epidemic diseases: case study based on cervical cancer. Parvin, Hoda. 15 May 2009 (has links)
In this thesis, health care policy issues for the prevention and cure of cervical cancer are considered. The cancer is typically caused by Human Papillomavirus (HPV), for which individuals can be tested and also vaccinated. Policymakers are faced with the decision of how many cancer treatments to subsidize, how many vaccinations to give, and how many tests to perform in each period of a given time horizon. To aid this decision-making exercise, a stochastic dynamic optimal control problem with feedback was formulated, which can be modeled as a Markov decision process (MDP). Solving the MDP is, however, computationally intractable because of the large state space, as the embedded stochastic network cannot be decomposed. Hence, an algorithm was proposed that initially ignores the feedback and later incorporates it heuristically. As part of the algorithm, alternate methodologies based on deterministic analysis, Markov chains, and simulation were developed to approximately evaluate the objective function.
Upon implementing the algorithm using a meta-heuristic for a case study of the United States population, several measures were calculated to observe the behavior of the system through the course of time under the different proposed policies. The policies compared were static, dynamic without feedback, and dynamic with feedback. It was found that the dynamic policy without feedback performs almost as well as the dynamic policy with feedback, both of them outperforming the static policy. All these policies are applicable and fast enough for easy what-if analysis by policymakers.
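The MDP framing can be illustrated with a deliberately tiny example: coarse disease-prevalence states, two actions (withhold or subsidize vaccination), and value iteration to minimize discounted expected cost. Every transition probability and cost below is invented for illustration; the thesis's model is far larger and is solved approximately, not by exact value iteration:

```python
import numpy as np

# P[a, s, s'] = transition probability under action a (invented numbers)
P = np.array([
    # action 0: no vaccination subsidy -- prevalence tends to rise
    [[0.70, 0.30, 0.00],
     [0.10, 0.60, 0.30],
     [0.00, 0.20, 0.80]],
    # action 1: subsidized vaccination -- prevalence tends to fall
    [[0.95, 0.05, 0.00],
     [0.50, 0.45, 0.05],
     [0.10, 0.50, 0.40]],
])
# cost[a, s]: disease burden per period, plus vaccination cost under action 1
cost = np.array([
    [0.0, 5.0, 20.0],
    [2.0, 7.0, 22.0],
])
gamma = 0.95                               # discount factor

V = np.zeros(3)
for _ in range(1000):                      # value iteration to a fixed point
    Q = cost + gamma * (P @ V)             # Q[a, s]: act, then behave optimally
    V_new = Q.min(axis=0)                  # policymaker minimizes cost
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new
policy = Q.argmin(axis=0)                  # best action per prevalence state
```

The intractability mentioned above comes from the same recursion: with a realistic population model the state space explodes, which is why the thesis resorts to a heuristic that approximates this exact computation.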
|