31

Tělo a jeho manifestace / Body and its manifestation

Havlanová, Michaela January 2011 (has links)
Resumé (En) Key words: body, soma, sarx, pexis, horizon, aesthesis, body schema, body art, body modifications. This thesis deals with the body and its manifestations. The body is treated as a philosophical, anthropological and psychological phenomenon. The philosophical part defines the body as soma, sarx and pexis; the phenomena of body schema, horizon, motion and aesthesis are also used to deepen the understanding. The next part deals with the body as an anthropological phenomenon: body modifications, suspensions and their history are presented, and the research clarifies the motivations and reasoning of heavily modified persons. The last part is psychological; it describes problems related to a distorted body schema as well as the development of body perception.
32

Estimation et diagnostic de réseaux de Petri partiellement observables / Estimation and diagnosis of partially observed Petri nets

Dardour, Amira 17 December 2018 (has links)
With the evolution of technology, engineers have designed systems that are increasingly complex but also increasingly sensitive to the faults that may affect them. A diagnostic procedure that contributes to the smooth running of the process is therefore necessary. In this context, the aim of this thesis is the diagnosis of discrete-event systems modeled by partially observed Labeled Petri Nets (LPNs). Under the assumption that each fault is modeled by the firing of an unobservable transition, two diagnostic approaches based on state estimation are developed. The first approach estimates the set of basis markings on a sliding elementary horizon and is carried out in two steps: the first step determines a set of candidate vectors through an algebraic approach; the second step eliminates the candidate solutions that are not associated with a possible trajectory of the LPN. Since the set of basis markings can be very large, a second diagnostic approach avoids this pitfall by not estimating the markings: a relaxation of the underlying Integer Linear Programming (ILP) problems over a receding horizon is used to obtain a diagnosis in polynomial time.
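To make the relaxation idea concrete, here is a minimal, hypothetical sketch: it checks, via an LP relaxation of the Petri net state equation M0 + C·σ ≥ 0, whether the observed firing counts are consistent with the fault transition having fired or not. The toy net, marking and transition indices are invented for illustration and are not taken from the thesis.

```python
# Hedged sketch: LP-relaxation-based fault diagnosis for a labeled Petri net.
# The net, marking M0, incidence matrix C and transition indices below are
# illustrative placeholders, not the thesis's case study.
import numpy as np
from scipy.optimize import linprog

def feasible(C, M0, obs_counts, obs_idx, fault_idx, fault_fired):
    """Check (via LP relaxation) whether some firing vector sigma >= 0 satisfies
    M0 + C @ sigma >= 0, matches the observed counts on observable transitions,
    and fires the fault transition at least once (fault_fired=True) or never."""
    n_t = C.shape[1]
    A_ub, b_ub = -C, M0                      # non-negativity of the reached marking
    A_eq = np.zeros((len(obs_idx), n_t))     # observable transitions match their counts
    for row, t in enumerate(obs_idx):
        A_eq[row, t] = 1.0
    b_eq = np.asarray(obs_counts, dtype=float)
    bounds = [(0, None)] * n_t
    bounds[fault_idx] = (1, None) if fault_fired else (0, 0)
    res = linprog(np.zeros(n_t), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.status == 0                   # 0 = an admissible relaxed solution exists

# Toy 3-place / 3-transition net: t0, t1 observable, t2 is the unobservable fault.
C = np.array([[-1,  0,  0],
              [ 1, -1, -1],
              [ 0,  1,  0]], dtype=float)
M0 = np.array([1, 0, 0], dtype=float)
normal_ok = feasible(C, M0, obs_counts=[1, 1], obs_idx=[0, 1], fault_idx=2, fault_fired=False)
faulty_ok = feasible(C, M0, obs_counts=[1, 1], obs_idx=[0, 1], fault_idx=2, fault_fired=True)
print("U" if (normal_ok and faulty_ok) else ("F" if faulty_ok else "N"))
```

A "U" verdict means both hypotheses remain admissible under the relaxation; a receding-horizon version would repeat such checks as new labels are observed, in the spirit of the approach described above.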
33

Observateur à horizon glissant pour les systèmes non linéaires : application au diagnostic du Radiotélescope de Nançay / Moving-horizon observer for nonlinear systems: application to the diagnosis of the Nançay radio telescope

Delouche, David 17 December 2009 (has links) (PDF)
The objective of this work was to propose a fault detection method for the longitudinal motion of the mobile carriage of the Radiotélescope de Nançay. The importance of implementing a fault detection procedure was highlighted through a description of the needs of the staff in charge of maintaining this scientific instrument. The thesis begins with a state of the art on different diagnosis methods (fault detection and isolation), together with a critical analysis of these methods. We then recall notions of observability before presenting the Newton observer and the Ciccarella observer; the latter is subsequently extended to MIMO systems. A comparison of these observers concludes Chapter 2. Chapter 3 presents the Radiotélescope de Nançay and, in particular, the modeling of the longitudinal motion of the mobile carriage, followed by a study of the model's properties. The last chapter deals with the partial validation of the model obtained in the previous chapter. The use of analytical redundancy relations then demonstrates the feasibility of diagnosis on this application. Using the extension of the Ciccarella observer for diagnosis, sensor faults are detected by means of a bank of observers. Tracking the model parameters makes it possible to follow the evolution of the system (ageing, for example) and to detect actuator faults. Overall, the results obtained lead to the conclusion that sensor and actuator faults are correctly detected.
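As a rough, hypothetical illustration of the bank-of-observers idea mentioned above (not the thesis's Ciccarella-observer implementation; the linear model, gains and threshold are assumed to be given):

```python
# Hedged sketch of a generalized observer scheme for sensor fault isolation.
# Observer i is driven by every measurement except sensor i, so a fault on
# sensor i leaves residual r_i small while the other residuals grow.
# A, C, the gains in L_list and the threshold are assumed designed offline
# (e.g. by pole placement); the input term of the plant is omitted.
import numpy as np

def observer_bank_flags(A, C, L_list, x0, ys, threshold):
    """ys: list of measurement vectors; returns one 'residual too large' flag
    per observer (the un-flagged observer points to the suspected faulty sensor)."""
    n_sensors = C.shape[0]
    flags = []
    for i in range(n_sensors):
        keep = [j for j in range(n_sensors) if j != i]   # drop sensor i
        Ci, Li = C[keep, :], L_list[i]
        x_hat, resid = x0.astype(float), []
        for yk in ys:
            r = yk[keep] - Ci @ x_hat                    # innovation without sensor i
            x_hat = A @ x_hat + Li @ r                   # discrete observer update
            resid.append(np.linalg.norm(r))
        flags.append(float(np.mean(resid)) > threshold)
    return flags
```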
34

Distorted black holes and black strings

Shoom, Andrey A. 11 1900 (has links)
The main objective of this thesis is to study the behavior of black objects in external fields, namely black holes and black strings in 4- and 5-dimensional spacetimes respectively. In particular, we analyze how external fields affect the horizons and internal structure of such objects, study their properties, and seek to understand how the spacetime fabric works. The thesis contains three chapters. In Chapters 1 and 2 we study the interior of 4-dimensional static, axisymmetric, electrically neutral and electrically charged distorted black holes. We analyze how external static and axisymmetric distortions affect the interior of such black holes. In particular, we study the behavior of the interior solution of an electrically neutral black hole near its horizon and singularity. The analysis shows that there exists a certain duality between the event horizon and the singularity. As a special example, we study the interior of a compactified 4-dimensional Schwarzschild black hole. In the case of an electrically charged black hole, a similar duality exists between its event and Cauchy horizons. The duality implies that the Cauchy horizon remains regular, provided the distortion is regular at the event horizon. Extending general relativity to higher-dimensional spacetimes brings a large variety of black objects whose boundary, the event horizon, may have a complicated structure. One such object is a black string. In Chapter 3 we discuss the so-called Gregory-Laflamme instability of 5-dimensional black strings in a spacetime with one compact dimension, and their topological phase transitions. Here we consider black strings with electric or magnetic charge. Linear static perturbations of these objects indicate the presence of a threshold unstable mode. An analysis of this mode shows that an electrically charged black string is less stable than a neutral one; the situation is opposite for a magnetically charged black string. An analysis of a 5-dimensional extremal black string with electric charge shows a continuous spectrum of unstable threshold modes. The results presented in this thesis may have applications in the theory of classical 4-dimensional black holes and in modern theoretical models of higher dimensions.
35

Exploration of robust software sensor techniques with applications in vehicle positioning and bioprocess state estimation

Goffaux, Guillaume 05 February 2010 (has links)
Abstract: This thesis deals with the synthesis of robust state estimation methods applied to two different domains. The first is safe positioning in transport, where the objective is to compute the vehicle position and velocity as intervals with a high confidence level. The second is software sensor design for bioprocess applications, where component concentrations are estimated from a limited number of measurements and a mathematical model describing the dynamical behavior of the system. The main interest is to design algorithms that achieve good estimation performance while taking into account uncertainties coming from the model parameters and the measurement errors. In this context, several algorithms have been studied and designed. Concerning vehicle positioning, the research has led to robust H-infinity methods and interval estimation methods. The robust H-infinity methods use a linear model that accounts for model uncertainty and perform a min-max optimization, minimizing a cost function that describes the worst-case configuration. The contribution in this domain is an extension to some systems with a weakly nonlinear model and the use of a receding time window to cope with asynchronous data. The developed interval algorithms compute confidence intervals for the vehicle velocity and position; they combine intervals obtained from the sensors through union and intersection operations, together with kinematic and dynamic models of the vehicle. In the context of bioprocesses, three families of state estimation methods have been investigated: particle filtering, interval methods and moving-horizon filtering. Particle filtering uses Monte-Carlo sampling to estimate the posterior probability density function of the state variables given the measurements; a major drawback is its sensitivity to model uncertainties. The proposed algorithm is dedicated to bioprocess applications and takes advantage of the characteristic structure of the models to design an alternative version of the particle filter that is robust to uncertainties in the kinetic terms. Moreover, interval observers are designed for bioprocesses: the objective is to extend existing methods to discrete-time measurements with a low sampling frequency by developing appropriate interval predictors. The use of a bundle of interval predictors, obtained through state transformations, and the coupling of predictors with frequent reinitializations significantly improve the estimation performance. Finally, a moving-horizon filter is designed, based on a min-max optimization problem: the best initial condition is reconstructed for the worst parameter configuration. Additional solutions are also provided to reduce the computational cost. To conclude, the developed algorithms and related results can be seen as improvements in the design of estimation methods that are robust to uncertainties. Depending on the application and the objectives, one family or another will be favored. However, in order to satisfy robustness criteria, it is often useful to provide estimates as intervals together with a confidence level describing the conditions of the estimation; the development of confidence interval observers therefore represents an important topic for future research.
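As a rough, hypothetical illustration of the interval-predictor idea described above (a scalar growth model with a bounded kinetic parameter and reinitialization at sparse measurement times; the model, bounds and data are invented, not the thesis's bioprocess model):

```python
# Hedged sketch of an interval predictor with reinitialization at measurement
# times, for a scalar biomass model dx/dt = mu * x with mu in [mu_min, mu_max].
# For x >= 0 the flow is monotone in mu, so the extreme growth rates bound the
# state; everything below is illustrative only.
import numpy as np

def predict(x_lo, x_hi, mu_min, mu_max, dt):
    # Exact solution of the scalar linear model over one step, per bound.
    return x_lo * np.exp(mu_min * dt), x_hi * np.exp(mu_max * dt)

def interval_observer(n_steps, dt, x0_bounds, mu_bounds, meas, meas_err):
    """meas: dict {step index: measured value}; returns a list of (t, x_lo, x_hi)."""
    (x_lo, x_hi), (mu_min, mu_max) = x0_bounds, mu_bounds
    history = []
    for k in range(1, n_steps + 1):
        x_lo, x_hi = predict(x_lo, x_hi, mu_min, mu_max, dt)
        if k in meas:                        # reinitialize: intersect with the measurement interval
            x_lo = max(x_lo, meas[k] - meas_err)
            x_hi = min(x_hi, meas[k] + meas_err)
        history.append((k * dt, x_lo, x_hi))
    return history

bounds = interval_observer(n_steps=80, dt=0.1, x0_bounds=(0.8, 1.2),
                           mu_bounds=(0.18, 0.25), meas={20: 1.6, 50: 3.1},
                           meas_err=0.2)
print(bounds[-1])
```

This only shows the predict/reinitialize loop; the bundle of predictors and state transformations described in the abstract would be used on top of such a scheme to tighten the bounds.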
36

Nonlinear Estimation for Model Based Fault Diagnosis of Nonlinear Chemical Systems

Qu, Chunyan December 2009 (has links)
Nonlinear estimation techniques play an important role in process monitoring, since some states and most of the parameters cannot be measured directly. Many techniques are available for nonlinear state and parameter estimation, e.g., the extended Kalman filter (EKF), the unscented Kalman filter (UKF), particle filtering (PF) and moving horizon estimation (MHE); however, many issues related to these techniques remain to be solved. This dissertation discusses three important topics in nonlinear estimation: the application of unscented Kalman filters, the improvement of moving horizon estimation via computation of the arrival cost, and different implementations of extended Kalman filters. First, the use of several estimation algorithms such as the linearized Kalman filter (LKF), EKF, UKF and MHE is investigated for nonlinear systems, with special emphasis on the UKF as it is a relatively new technique. Detailed case studies show that the UKF has advantages over the EKF for highly nonlinear unconstrained estimation problems, while MHE performs better for systems with constraints. Moving horizon estimation alleviates the computational burden of solving a full-information estimation problem by considering a finite horizon of the measurement data; however, it is non-trivial to determine the arrival cost. A commonly used approach for computing the arrival cost is to use a first-order Taylor series approximation of the nonlinear model and then apply an extended Kalman filter. The second contribution of this dissertation is an approach to compute the arrival cost for moving horizon estimation based on an unscented Kalman filter. Such a moving horizon estimator is found to perform better in some cases than one based on an extended Kalman filter, and it is a promising alternative for approximating the arrival cost for MHE. Many comparative studies, often based on simulation results, between extended Kalman filters and other estimation methodologies such as moving horizon estimation, the unscented Kalman filter, or particle filtering have been published over the last few years. However, the results returned by the extended Kalman filter depend on the algorithm used for its implementation, and some EKF implementations may lead to inaccurate results. To address this point, this dissertation investigates several different algorithms for implementing extended Kalman filters; their advantages and drawbacks are discussed in detail and illustrated in comparative simulation studies. Continuously propagating the covariance matrix leads to an accurate EKF implementation; evaluating the covariance matrix at discrete times can also be applied. Good performance can be expected if the covariance matrix is obtained by integrating the continuous-time equation, or if the sensitivity equation is used for computing the Jacobian matrix.
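As a rough sketch of the implementation point made above (continuous propagation of the covariance between discrete measurement updates), with an invented toy model rather than the dissertation's chemical case studies:

```python
# Hedged sketch of a continuous-discrete EKF: the state and covariance are
# propagated by integrating their ODEs between measurements, one of the
# implementation variants the abstract compares.  The toy model is illustrative.
import numpy as np

def f(x):            # toy nonlinear dynamics  dx/dt = f(x)
    return np.array([-x[0] + x[0] * x[1], -2.0 * x[1] + np.sin(x[0])])

def F(x):            # Jacobian of f, used in the covariance ODE
    return np.array([[-1.0 + x[1], x[0]],
                     [np.cos(x[0]), -2.0]])

def propagate(x, P, Q, dt, steps=20):
    # Euler-integrate  dx/dt = f(x)  and  dP/dt = F P + P F^T + Q  between samples.
    h = dt / steps
    for _ in range(steps):
        J = F(x)
        x = x + h * f(x)
        P = P + h * (J @ P + P @ J.T + Q)
    return x, P

def update(x, P, y, H, R):
    # Standard discrete measurement update with linear observation y = H x + v.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (y - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

x, P = np.array([1.0, 0.5]), np.eye(2)
x, P = propagate(x, P, Q=0.01 * np.eye(2), dt=0.5)
x, P = update(x, P, y=np.array([0.9]), H=np.array([[1.0, 0.0]]), R=np.array([[0.05]]))
```

The covariance ODE is integrated here with a simple Euler scheme purely for illustration; the dissertation's point is precisely that such implementation details affect the accuracy of the EKF results.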
37

Autonomous Orbit Estimation For Near Earth Satellites Using Horizon Scanners

Nagarajan, N 07 1900 (has links)
Autonomous navigation is the determination of a satellite's position and velocity vectors onboard the satellite, using the measurements available onboard. The orbital information of a satellite needs to be obtained to support different housekeeping operations such as routine tracking for health monitoring, payload data processing and annotation, orbit manoeuvre planning, and prediction of intrusion into various sensors' fields of view by celestial bodies like the Sun, the Moon, etc. Determination of the satellite's orbital parameters is done in a number of ways using a variety of measurements. These measurements may originate from ground-based systems as range and range-rate measurements, from another satellite as in the case of GPS (Global Positioning System) and TDRSS (Tracking and Data Relay Satellite System), or from the same satellite by using sensors like horizon sensors, sun sensors, star trackers, landmark trackers, etc. Depending upon the measurement errors, sampling rates, and adequacy of the estimation scheme, the navigation accuracy can be anywhere in the range of 10 m to 10 km in absolute location. A wide variety of tracking sensors have been proposed in the literature for autonomous navigation. They are broadly classified as (1) satellite-satellite tracking, (2) ground-satellite tracking, and (3) fully autonomous tracking. Of the various navigation sensors, it may be cost effective to use existing onboard sensors which are well proven in space. Hence, in the current thesis, the horizon scanner is employed as the primary navigation sensor. It has been shown in the literature that by using horizon sensors and gyros, a high pointing accuracy of the order of 0.01 to 0.03 deg can be achieved in the case of low Earth orbits. Motivated by this fact, the current thesis deals with autonomous orbit determination using measurements from the horizon sensors, with the assumption that the attitude is known to the above-quoted accuracies. The horizon scanners are mounted on either side of the yaw axis in the pitch-yaw plane, at an angle of 70 deg with respect to the yaw axis. The field of view (FOV) moves about the scanner axis on a cone of 45 deg half-cone angle. During each scan, the FOV generates two horizon points, one at the space-Earth entry and the other at the Earth-space exit. The horizon points, therefore, lie on the edge of the Earth disc seen by the satellite. For a spherical Earth, a minimum of three such horizon points are needed to estimate the angular radius and the center of the circular horizon disc. Since a total of four horizon points are available from a pair of scanners, they can be used to extract the satellite-Earth distance and direction. These horizon points are corrupted by noise due to uncertainties in the Earth's radiation pattern, the detector mechanism, and the truncation and round-off errors due to digitisation of the measurements. Owing to the finite spin rate of the scanning mechanism, the measurements are available at discrete time intervals. Thus a filtering algorithm with appropriate state dynamics becomes essential to handle the noise in the measurements, to obtain the best estimate, and to propagate the state between the measurements. The orbit of a low Earth satellite can be represented either by a state vector (position and velocity vectors in the inertial frame) or by Keplerian elements. The choice depends upon the available processors, functions and the end use of the estimated orbit information.
It is shown in the thesis that the position and velocity vectors in the inertial frame, or the position vector in the local reference frame, result in a simplified state representation. By using the f and g series method for the inertial position and velocity, the state propagation is achieved in linear form, i.e. X(k+1) = A X(k), where X is the state (position, velocity) and A is the state transition matrix derived from the f and g series. The configuration of a 3-axis stabilised spacecraft with two horizon scanners is used to simulate the measurements. As a step towards establishing the feasibility of extracting the orbital parameters, the governing equations are formulated to compute the satellite-Earth vector from the four horizon points generated by a pair of horizon scanners in the presence of measurement noise. Using these derived satellite-Earth vectors as measurements, Kalman filter equations are developed, where both the state and measurement equations are linear. Based on simulations, it is shown that a position accuracy of about 2 km can be achieved. Additionally, the effect of sudden disturbances, like substantial slewing of the solar panels prior to and after payload operations, is also analysed. It is shown that a relatively simple low-pass filter (LPF) in the measurement loop, with a cut-off frequency of 10 Wo (Wo = orbital frequency), effectively suppresses the high-frequency effects of sudden disturbances which otherwise camouflage the navigational information content of the signal. The Kalman filter can then continue to estimate the orbit with the same kind of accuracy as before, without recourse to re-tuning of the covariance matrices. Having established the feasibility of extracting the orbit information, the next step is to treat the measurements in their original, non-linear form. The entry or exit timing pulses generated by the scanner, when multiplied by the scan rate, yield entry or exit azimuth angles in the scanner frame of reference, which in turn represent effective measurement variables. These azimuth angles are obtained as inverse trigonometric functions of the satellite-Earth vector; thus the horizon scanner measurements are non-linear functions of the orbital state. The analytical equations for the horizon points as seen in the body frame are derived, first for the spherical Earth case. To account for the oblate shape of the Earth, a simple one-step correction algorithm is developed to calculate the horizon points. The horizon points calculated from this simple algorithm match well with those from the accurate model, within a bound of 5%. Since the horizon points (measurements) are non-linear functions of the state, an Extended Kalman Filter (EKF) is employed for state estimation. Through various simulation runs, it is observed that the along-track state has poor observability when the four horizon points are treated as measurements in their original form, as opposed to the derived satellite-Earth vector in the earlier strategy. This is also substantiated by means of the condition number of the observability matrix. In order to examine this problem in detail, the observability of the three modes, namely the along-track, radial, and cross-track components (i.e. the local orbit frame of reference), is analysed. This difficulty in observability is obviated when an additional sensor is used in the roll-yaw plane. Subsequently, the simulation studies are carried out with two scanners in the pitch-yaw plane and one scanner in the roll-yaw plane (i.e. a total of 6 horizon points at each time).
Based on the simulations, it is shown that the achievable accuracy in absolute position is about 2 km. Since the scanner in the roll-yaw plane is susceptible to dazzling by the Sun, the effect of data breaks due to sensor inhibition is also analysed. It is further established that such data breaks do not improve the accuracy of the estimates of the along-track component during the transient phase; however, the filter does not diverge during this period. Following the analysis of the filter performance, the influence of the Earth's oblateness on the measurement model is studied. It is observed that the error in the horizon points due to the spherical-Earth approximation behaves like a sinusoid at twice the orbital frequency, along with a bias of about 0.21°, in the case of a 900 km sun-synchronous orbit. The error in the 6 horizon points is shown to give rise to 6 sinusoids. Since the measurement model for a spherical Earth is the simplest one, the feasibility of estimating these sinusoids along with the orbital state forms the next part of the thesis. Each sinusoid, along with the bias, is represented by a 3-state recursive equation, where i refers to the ith sinusoid and T to the sampling interval. The augmented or composite state variable X consists of the bias and the sine and cosine components of the sinusoids. The 6 sinusoids, together with the three-dimensional orbital position vector in the local coordinate frame, then lead to a 21-state augmented Kalman filter. With the 21-state filter, observability problems are experienced. Hence the magnetic field strength, which is a function of the radial distance as measured by an onboard magnetometer, is proposed as an additional measurement. Subsequently, using the 6 horizon-point measurements and the radial distance measurement obtained from a magnetometer, and taking advantage of the relationships between the sinusoids, it is shown that a ten-state filter (i.e. 3 local orbital states, one bias and 3 zero-mean sinusoids) can effectively function as an onboard orbit filter. The filter performance is investigated for circular as well as low-eccentricity orbits. The 10-state filter is shown to exhibit a lag while following the radial component in the case of low-eccentricity orbits. This deficiency is overcome by introducing two more states, namely the radial velocity and acceleration, thus resulting in a 12-state filter. Simulation studies reveal that the 12-state filter performance is very good for low-eccentricity orbits; the lag observed in the 10-state filter is totally removed. Besides, the 12-state filter is able to follow changes in the orbit due to orbital manoeuvres which are part of the orbit acquisition plans for any mission.
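The 3-state recursion referred to above is not reproduced in the abstract; a standard bias-plus-quadrature form consistent with the description (a sinusoid at twice the orbital frequency ω0, sampled with interval T) would be, as a hedged reconstruction rather than the thesis's exact equation:

```latex
% Hedged reconstruction (not quoted from the thesis): 3-state recursion for the
% ith bias + sinusoid at twice the orbital frequency \omega_0, sample time T.
X^{i}_{k+1} =
\begin{pmatrix}
1 & 0 & 0\\
0 & \cos 2\omega_0 T & \sin 2\omega_0 T\\
0 & -\sin 2\omega_0 T & \cos 2\omega_0 T
\end{pmatrix}
X^{i}_{k},
\qquad
X^{i}_{k} = \begin{pmatrix} b^{i}_{k} \\ S^{i}_{k} \\ C^{i}_{k} \end{pmatrix}
```

Here the bias state stays constant and the quadrature pair rotates by 2ω0T per sample, which reproduces a bias plus a sinusoid at twice the orbital frequency, as described in the abstract.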
38

Innovation restrained : unlocking the innovation of acquired software startups

McNutt, Robert Blaine 17 February 2012 (has links)
The ability to exploit disruptive innovation is the main factor in a company’s continued success. The ability to significantly advance a field or create a new field is paramount not only to a venture’s ability to generate revenue, but also to our nation’s economic vitality. Yet today’s business environment is dominated by funding options and exit strategies focused on near-term results and unreasonable profit expectations (Estrin, 2008b). Given these constraints, software startups must focus on incremental innovation to obtain initial funding. The result is an industry focused on short-term strategies that limit the likelihood of developing disruptive innovations and of building companies with a long-term focus. Current business models do not adequately address a key factor in preventing the loss of innovations and stagnancy in industries and their markets. New business frameworks and models are required that focus on preserving the core teams found in software startups and on providing them with the runway they require to develop disruptive innovations. Nowhere is innovation more crucial than in the startup environment, where the abilities to invent, adapt, outwit, and outlast on a shoestring budget predict success. This paper evaluates today’s business models to determine how they help overcome roadblocks faced by software startups in today’s acquisition environment, identifies related research, and recommends new models and adaptations to existing models to overcome these roadblocks. “It is estimated that 70-95 percent of acquisitions fail. A significant percentage is due to the friction that is created by trying to integrate the startup with the large company's financials, HR department, product, market and business model. Most startups when they are acquired are uncertain on many of these dimensions, and forcing them to conform on any one of these dimensions to the large company can stunt their growth and often kill them.” – (Herrmann, September 2011) This paper investigates how to preserve innovation within a startup and within an acquiring company, how innovation is interwoven in team members, the leadership characteristics that inspire innovation, and the importance of balancing the value of innovation against process. The recommended guidance and frameworks focus on preserving the core team and their innovations, as well as generating a strong return on investment, when an established business acquires a startup. The perspectives presented are based on the author’s experiences as a key team member in two startups in the mid-1990s, in multiple failed internal incubation groups within a Fortune 100 company, and in considering a new startup in today’s environment.
39

Solving dynamic repositioning problem for bicycle sharing systems : model, heuristics, and decomposition

Wang, Tan, active 21st century 02 February 2015 (has links)
Bicycle sharing systems (BSS) have emerged as a powerful stimulus to non-motorized travel, especially for short-distance trips. However, imbalances in the distribution of bicycles in BSS are widely observed. It is thus necessary to reposition bicycles so as to reduce, as much as possible, the unmet demand due to such imbalances. This paper formulates a new mixed-integer linear programming model that considers the dynamic nature of the demand to solve the repositioning problem, which is then validated on an illustrative example. Due to the NP-hard nature of this problem, we develop two heuristics (a greedy algorithm and a rolling horizon approach) and one exact solution method (Benders’ decomposition) to obtain an acceptable solution for large instances within a reasonable computation time. We create four datasets based on real-world data, with 12, 24, 36, and 48 stations respectively. Computational results show that our model and solution methods perform well. Finally, this paper gives some suggestions on extensions or modifications that might be added to this work in the future.
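As a rough, hypothetical illustration of the rolling-horizon idea mentioned above (a toy greedy single-vehicle subproblem standing in for the paper's MILP; stations, demand and capacity are invented):

```python
# Hedged sketch of a rolling-horizon repositioning loop: a small subproblem is
# solved over the next H periods, only the first period's move is committed,
# and the window then rolls forward.  The greedy one-move subproblem and the
# toy data are illustrative stand-ins for the paper's MILP formulation.
import numpy as np

def greedy_move(inventory, demand_window, capacity):
    """Pick one transfer: from the station with the largest surplus (relative to
    the aggregated window demand) to the station with the largest shortage."""
    shortage = demand_window - inventory
    src, dst = int(np.argmin(shortage)), int(np.argmax(shortage))
    qty = int(min(capacity, max(0.0, -shortage[src]), max(0.0, shortage[dst])))
    return src, dst, qty

def rolling_horizon(inventory, demand, horizon=3, capacity=10):
    inventory = inventory.astype(float).copy()
    plan = []
    for t in range(demand.shape[0]):
        window = demand[t:t + horizon].sum(axis=0)         # look ahead over H periods
        src, dst, qty = greedy_move(inventory, window, capacity)
        plan.append((t, src, dst, qty))                    # commit only period t's move
        inventory[src] -= qty
        inventory[dst] += qty
        inventory = np.maximum(inventory - demand[t], 0.0)  # serve what can be served
    return plan

demand = np.array([[5, 1, 0], [2, 6, 1], [0, 3, 7], [4, 4, 4]], dtype=float)
print(rolling_horizon(np.array([12, 0, 2]), demand))
```

Swapping the greedy subproblem for an exact optimization over each window would give a rolling-horizon scheme closer in spirit to the heuristic the abstract describes.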
40

Essays on the Economics of Risk and Financial Markets

Turley, Robert Staffan 23 September 2013 (has links)
Prices in financial markets are primarily driven by the interaction of risk and time. The returns to financial assets over long time horizons are primarily driven by fundamental news regarding their promised cash flows. In contrast, short-run price variation is associated with a large degree of predictable, transient investor trading behavior unrelated to fundamental prospects. The quantity of long-run risk directly affects economic well-being, and its magnitude has varied significantly over the past century. The theoretical model presented here shows some success in quantifying the impact of news about future risks on asset prices. In particular, some investing strategies that appear to offer anomalously large returns are associated with high exposures to future long-run risks. The historical returns to these portfolios are partly a result of investors’ distaste for assets whose worth declines when uncertainty increases. The financial sector is tasked with pricing these risks in a way that properly allocates investment resources. Over the past thirty years, this sector has grown much more rapidly than the economy as a whole. As a result, asset prices appear to be more informative. However, the new information relates to short-term uncertainty, not long-run risk. This type of high-frequency information is unlikely to affect real investment in a way that would benefit broader economic growth.
