About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Statistical modelling of return on capital employed of individual units

Burombo, Emmanuel Chamunorwa
Return on Capital Employed (ROCE) is a popular financial instrument and communication tool for the appraisal of companies. Often, company management and other practitioners use untested rules and behavioural approaches when investigating the key determinants of ROCE, instead of the scientific statistical paradigm. The aim of this dissertation was to identify and quantify key determinants of ROCE of individual companies listed on the Johannesburg Stock Exchange (JSE), by comparing classical multiple linear regression, principal components regression, generalized least squares regression, and robust maximum likelihood regression approaches in order to improve companies' decision making. Performance indicators used to arrive at the best approach were the coefficient of determination (R²), adjusted R², and Mean Square Error (MSE). Since the ROCE variable had positive and negative values, two separate analyses were done. The classical multiple linear regression models were constructed using a stepwise directed search with log ROCE as the dependent variable for the two data sets. Assumptions were satisfied and the problem of multicollinearity was addressed. For the positive ROCE data set, the classical multiple linear regression model had an R² of 0.928, an adjusted R² of 0.927 and an MSE of 0.013; the lead key determinant was Return on Equity (ROE), with positive elasticity, followed by Debt to Equity (D/E) and Capital Employed (CE), both with negative elasticities. The model showed good validation performance. For the negative ROCE data set, the classical multiple linear regression model had an R² of 0.666, an adjusted R² of 0.652 and an MSE of 0.149; the lead key determinant was Assets per Capital Employed (APCE), with positive effect, followed by Return on Assets (ROA) and Market Capitalization (MC), both with negative effects. The model showed poor validation performance. The results were in some respects more precise, and in others less precise, than those of previous studies.
This suggested that the key determinants are also important sources of variability in ROCE of individual companies that management needs to work with. To handle the problem of multicollinearity in the data, principal components were selected using the Kaiser-Guttman criterion. The principal components regression model was constructed with log ROCE as the dependent variable for the two data sets. Assumptions were satisfied. For the positive ROCE data set, the principal components regression model had an R² of 0.929, an adjusted R² of 0.929 and an MSE of 0.069; the lead key determinant was PC4 (log ROA, log ROE, log Operating Profit Margin (OPM)), followed by PC2 (log Earnings Yield (EY), log Price to Earnings (P/E)), both with positive effects. The model resulted in a satisfactory validation performance. For the negative ROCE data set, the principal components regression model had an R² of 0.544, an adjusted R² of 0.532 and an MSE of 0.167; the lead key determinant was PC3 (ROA, EY, APCE), followed by PC1 (MC, CE), both with negative effects. The model indicated an accurate validation performance. The results showed that the use of principal components as independent variables did not improve classical multiple linear regression model prediction in our data. This implied that the key determinants are less important sources of variability in ROCE of individual companies that management needs to work with. Generalized least squares regression was used to assess heteroscedasticity and dependence in the data. It was constructed using a stepwise directed search with ROCE as the dependent variable for the two data sets. For the positive ROCE data set, the weighted generalized least squares regression model had an R² of 0.920, an adjusted R² of 0.919 and an MSE of 0.044; the lead key determinant was ROE with positive effect, followed by D/E with negative effect, Dividend Yield (DY) with positive effect and lastly CE with negative effect. The model indicated an accurate validation performance.
For the negative ROCE data set, the weighted generalized least squares regression model had an R² of 0.559, an adjusted R² of 0.548 and an MSE of 57.125; the lead key determinant was APCE, followed by ROA, both with positive effects. The model showed a weak validation performance. The results suggested that the key determinants are less important sources of variability in ROCE of individual companies that management needs to work with. Robust maximum likelihood regression was employed to handle the problem of contamination in the data. It was constructed using a stepwise directed search with ROCE as the dependent variable for the two data sets. For the positive ROCE data set, the robust maximum likelihood regression model had an R² of 0.998, an adjusted R² of 0.997 and an MSE of 6.739; the lead key determinant was ROE with positive effect, followed by DY and lastly D/E, both with negative effects. The model showed a strong validation performance. For the negative ROCE data set, the robust maximum likelihood regression model had an R² of 0.990, an adjusted R² of 0.984 and an MSE of 98.883; the lead key determinant was APCE with positive effect, followed by ROA with negative effect. The model also showed a strong validation performance. The results reflected that the key determinants are major sources of variability in ROCE of individual companies that management needs to work with. Overall, the findings showed that the use of robust maximum likelihood regression provided more precise results than the three competing approaches, because it is more consistent, sufficient and efficient, and has a higher breakdown point while imposing fewer conditions. Company management can establish and control proper marketing strategies using the key determinants, and these strategies can lead to an improvement in ROCE. / Mathematical Sciences / M. Sc. (Statistics)
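The classical-versus-robust comparison in this abstract can be illustrated with a small synthetic example. The sketch below assumes nothing from the thesis's JSE data (all variable names and numbers are invented): it fits the same linear model by ordinary least squares and by a Huber-type M-estimator via iteratively reweighted least squares, showing the resistance to contamination that the abstract attributes to the robust approach.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: a response (think log ROCE) explained by two regressors,
# with 5% of observations grossly contaminated. Illustrative numbers only.
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([0.5, 1.2, -0.8])
y = X @ beta_true + rng.normal(scale=0.1, size=n)
y[:10] += 8.0  # contamination: shift 10 responses upward

def ols(X, y):
    """Classical least-squares fit."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def huber_irls(X, y, k=1.345, iters=50):
    """Robust M-estimation via iteratively reweighted least squares (Huber weights)."""
    beta = ols(X, y)
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745  # robust scale (MAD)
        w = np.minimum(1.0, k * s / np.maximum(np.abs(r), 1e-12))
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

beta_ols, beta_rob = ols(X, y), huber_irls(X, y)
```

The outliers pull the OLS intercept upward, while the reweighting drives their influence toward zero, which is the higher-breakdown behaviour the abstract reports.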

Dynamique et estimation paramétrique pour les gyroscopes laser à milieu amplificateur gazeux / Dynamics and parametric estimation for gas ring laser gyroscopes

Badaoui, Noad 02 December 2016
Les gyroscopes laser à gaz constituent une solution technique de hautes performances dans les problématiques de navigation inertielle. Néanmoins, pour de très faibles vitesses de rotation, les petites imperfections des miroirs de la cavité optique font que les deux faisceaux contra-propageants sont verrouillés en phase. En conséquence, les mesures en quadrature de leur différence de phase ne permettent plus de remonter directement aux vitesses de rotation à l'intérieur d'une zone autour de zéro, dite zone aveugle statique, ou, si l'on utilise une procédure d'activation mécanique, dite zone aveugle dynamique. Ce travail montre qu'il est néanmoins possible, en utilisant des méthodes issues du filtrage et de l'estimation, de remonter aux vitesses de rotation même si ces dernières sont en zone aveugle. Pour cela, on part d'une modélisation physique de la dynamique que l'on simplifie par des techniques de perturbations singulières pour en déduire une généralisation des équations de Lamb. Il s'agit de quatre équations différentielles non-linéaires qui décrivent la dynamique des intensités et des phases des deux faisceaux contra-propageants. Une étude qualitative par perturbations régulières, stabilité exponentielle des points d'équilibre et applications de Poincaré permet de caractériser les zones aveugles statiques et dynamiques en fonction des imperfections dues aux miroirs. Il est alors possible d'estimer en ligne avec un observateur asymptotique fondé sur les moindres carrés récursifs ces imperfections en rajoutant aux deux mesures en quadrature celles des deux intensités. La connaissance précise de ces imperfections permet alors de les compenser dans la dynamique de la phase relative, et ainsi d'estimer les rotations en zone aveugle. Des simulations numériques détaillées illustrent l'intérêt de ces observateurs pour augmenter la précision des gyroscopes à gaz. / Gas ring laser gyroscopes provide a high-performance technical solution for inertial navigation.
However, for very low rotation rates, the mirror imperfections of the optical cavity induce a locking phenomenon between the phases of the two counter-propagating laser beams. Hence, the quadrature measurements of the phase difference can no longer be used when the speed lies within an area around zero, called the lock-in zone, or, if a procedure of mechanical dithering is implemented, the dithering lock-in zone. Nevertheless, this work shows that it is possible, using filtering and estimation methods, to recover the rotation rate even within the lock-in zones. To achieve this result, we exploit a physical modeling of the dynamics that we simplify, using singular perturbation techniques, to obtain a generalization of Lamb's equations. These are four non-linear differential equations describing the dynamics of the intensities and phases of the two counter-propagating beams. A qualitative study by regular perturbation theory, exponential stability of the equilibrium points and Poincaré maps allows a characterisation of the lock-in zones according to the mirror imperfections. It is then possible to estimate these imperfections online, with an asymptotic observer based on recursive least squares, by considering the additional measurements of the beam intensities. Accurate knowledge of these imperfections enables us to compensate for them in the dynamics of the relative phase, and thus to estimate rotation rates within the lock-in zones. Detailed numerical simulations illustrate the value of these observers for increasing the accuracy of gas ring laser gyroscopes.
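The recursive least squares estimation mentioned above can be sketched in its simplest textbook form. The snippet below is a generic RLS parameter estimator on a toy linear measurement model; the "mirror imperfection" interpretation is only an analogy, and none of the thesis's actual gyroscope equations are used.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy recursive-least-squares estimator: recover two unknown coefficients
# (stand-ins for the cavity imperfection parameters) from streaming
# measurements y_t = phi_t . theta + noise. All names are illustrative.
theta_true = np.array([0.3, -0.7])
theta = np.zeros(2)          # current estimate
P = np.eye(2) * 1e3          # covariance (large = uninformative prior)

for _ in range(500):
    phi = rng.normal(size=2)                  # regressor (e.g. measured signals)
    y = phi @ theta_true + 0.01 * rng.normal()
    K = P @ phi / (1.0 + phi @ P @ phi)       # gain
    theta = theta + K * (y - phi @ theta)     # innovation update
    P = P - np.outer(K, phi @ P)              # covariance update
```

Each new sample refines the estimate in O(d²) operations without refitting the whole history, which is what makes the observer usable online.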

Modélisation et affinement des structures locales de matériaux désordonnés à base d'oxyde-hydroxyde de nickel par spectroscopie d'absorption des rayons X / Local Structure Modeling and Refinement of Disordered Materials based on Nickel Oxide-Hydroxide by X-ray Absorption Spectroscopy

Bounif, Mohamed 13 October 2009
Les composés électrochromes changent de couleur en fonction d'une tension qui leur est appliquée. La tenue en cyclage de couches minces à base de NiO, électrochrome cathodique, dépend fortement de la température et de la pression de dépôt. D'autre part, la proportion de phase électrochimiquement active dépend fortement de l'épaisseur des couches. Selon le modèle proposé, NiO en milieu KOH se transforme en hydroxyde de nickel Ni(OH)2 puis, dans la phase de coloration, en oxyhydroxyde NiOOH avant de revenir à sa forme réduite durant la phase de décoloration. À la fin de ce cycle, des traces de phase colorée persistent. Nous avons développé de nouvelles méthodes numériques d'analyse des spectres d'absorption de rayons X, caractérisant la structure locale autour du nickel dans ces phases non cristallisées, afin de déterminer les concentrations des diverses espèces au cours du cycle. Aucune des méthodes habituellement pratiquées, comme la combinaison linéaire d'espèces modèles par la méthode des moindres carrés linéaire, et la méthode d'Analyse en Composantes Principales, n'est adaptée aux cas de spectres fortement corrélés comme les oxydes-hydroxydes de nickel. Nous montrons qu'il est possible d'améliorer la méthode des moindres carrés en utilisant un algorithme original : « la méthode des Moindres Carrés Linéaire Progressive ». La base de cette méthode est fondée sur l'étude statistique des erreurs et corrélations. Ce travail a permis de valider le modèle électrochimique et d'évaluer pour la première fois la concentration en NiOOH dans la phase réduite, signature de l'irréversibilité. Seule la présence avant cyclage de Ni(OH)2 dans les films de NiO n'a pu être expliquée. / Electrochromic materials change their color as a function of an applied electric voltage. The electrochemical study of NiO thin films, which are cathodic electrochromic materials, shows that the lifetime over oxidation-reduction cycles depends strongly on the deposition temperature and pressure of the thin films.
The proportion of electrochemically active material depends on the film thickness. The proposed model suggests that in KOH, NiO is transformed first into nickel hydroxide Ni(OH)2, then into nickel oxyhydroxide NiOOH in the coloration process, before returning to Ni(OH)2 in the decoloration step. At the end of the cycle, traces of the colored phase are still observed. We have developed a new numerical analysis method for X-ray absorption spectra in order to characterize the concentration of each species present in these non-crystalline materials. XAS spectra of mixed Ni oxides and hydroxides are highly correlated, and the linear least squares and Principal Component Analysis methods proved unsuitable. In order to improve the linear least squares method, we have developed an original algorithm named the "Progressive Linear Least Squares method". This method is based on the statistical evaluation of errors and correlations in the spectra. It was then possible to validate the electrochemical model and to evaluate for the first time the concentration of residual NiOOH in the reduced thin-film phases, related to the irreversibility of the electrochromic process. However, we were unable to explain the presence of a small amount of Ni(OH)2 in the NiO films prepared at room temperature, prior to any electrochemical treatment.
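The baseline method this thesis improves on, expressing a measured spectrum as a linear least-squares combination of reference spectra, can be sketched as follows. The Gaussian "spectra" are synthetic placeholders (not real XAS data), deliberately overlapping to mimic the strong correlation between nickel oxide-hydroxide references discussed above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fit a measured "spectrum" as a linear combination of three reference
# "spectra" (stand-ins for NiO, Ni(OH)2 and NiOOH) by ordinary least squares.
E = np.linspace(0.0, 10.0, 400)          # energy grid (arbitrary units)

def peak(center, width):
    return np.exp(-((E - center) / width) ** 2)

refs = np.column_stack([peak(4.0, 1.0), peak(4.5, 1.0), peak(6.0, 1.2)])
c_true = np.array([0.5, 0.3, 0.2])       # species fractions to recover
mixture = refs @ c_true + 1e-3 * rng.normal(size=E.size)

c_fit, *_ = np.linalg.lstsq(refs, mixture, rcond=None)
```

With this much overlap between the first two references, the normal equations are ill-conditioned: the fit still works at low noise, but coefficient uncertainties blow up as noise grows, which is the failure mode that motivates the thesis's progressive variant.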

Soutien social des collègues et stress au travail : une approche par l'analyse des réseaux sociaux / Co-worker social support and workplace stress: a social network approach

Sader, Myra 16 November 2018
La littérature sur le stress au travail considère souvent que les personnes dépourvues de soutien social tendent à avoir un taux de stress plus élevé. Si cette vision est confirmée empiriquement, elle a toutefois une portée limitée : elle ne tient pas toujours compte de l’inégalité d’accès au soutien, inégalité qui affecte la perception de ce soutien. Pourquoi certains salariés ont plus de facilité à accéder au soutien social ? Qu’est-ce qui fait que l’aide est plus disponible et plus variée pour une personne plutôt que pour une autre ? Ces interrogations nous amènent à situer le soutien social perçu, et plus précisément le soutien des collègues perçu, dans un modèle théorique plus large nourri par la théorie des réseaux sociaux. A l’aide d’un modèle explicatif, l’objectif de notre recherche est d’étudier l’impact du positionnement de l’individu dans le réseau social sur le stress au travail perçu. Les hypothèses de recherche ont été testées en utilisant les techniques de régression en moindres carrés partiels pour estimer les équations structurelles. A partir de données de type « réseau complet » collectées auprès d’une entreprise de services de taille moyenne (N=343), nous avons montré que la force des liens favorise l’accès au soutien des collègues et, par conséquent, réduit le stress professionnel. Les résultats indiquent que le soutien des collègues est médiateur total dans cette relation, et que le lien direct entre la force des liens et le stress perçu n’est pas établi. De plus, nous avons confirmé l’ambivalence des bridging ties (liens vers des personnes de départements différents) : ils influencent négativement la perception du soutien social (qui réduit le stress), mais ont aussi un effet négatif direct sur le stress au travail. En soulignant le rôle des relations informelles comme antécédent au soutien social, nous avons contribué à fournir un outil analytique susceptible d’être mis en œuvre dans la sphère managériale. 
/ The literature on workplace stress often considers that people who lack social support tend to have higher levels of perceived stress. While empirically supported, this view has limited scope: it does not always account for unequal access to support, an inequality that shapes how support is perceived. Why do some employees have easier access to social support? What makes help more available and more varied for one person than for another? These questions lead us to situate perceived social support, and more precisely the perceived support of colleagues, within a larger theoretical model informed by social network theory. Through an explanatory model, the objective of this research is to explore the impact of the individual's position in the social network on perceived workplace stress. Based on "complete network" data collected in a medium-sized IT services company (N = 343), we used partial least squares regression to estimate the structural equations and test our hypotheses. The strength of ties affects stress through social support, such that people with stronger ties perceive more support and ultimately exhibit less stress; the direct link between tie strength and stress, however, is not established, so co-worker support fully mediates this relationship. We also confirm the ambivalence of bridging ties (ties to colleagues in other departments): they negatively influence perceived social support (which itself reduces stress), but also have a direct negative effect on workplace stress. By highlighting the role of informal relationships as an antecedent of social support, our results offer an analytical tool and potential managerial actions within organizations.
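A minimal numeric illustration of the partial least squares idea used here: one latent component extracted from correlated indicators and regressed against an outcome. This is a didactic one-component PLS regression on synthetic data, not the PLS structural equation modeling of the thesis; the "tie strength" and "stress" labels are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Four correlated indicators (think tie-strength items) driven by one latent
# factor, and an outcome (think perceived stress) driven by the same factor.
n = 300
latent = rng.normal(size=n)
X = np.column_stack([latent + 0.3 * rng.normal(size=n) for _ in range(4)])
y = -0.9 * latent + 0.2 * rng.normal(size=n)

Xc = X - X.mean(axis=0)
yc = y - y.mean()
w = Xc.T @ yc
w = w / np.linalg.norm(w)        # weight vector: direction of max covariance with y
t = Xc @ w                       # one latent score per respondent
b = (t @ yc) / (t @ t)           # regress the outcome on the score
r2 = 1.0 - np.sum((yc - b * t) ** 2) / np.sum(yc ** 2)
```

Unlike ordinary regression on all four collinear indicators, the single PLS component absorbs their shared variance, which is why PLS path modeling is a common choice for structural equations built from multi-item constructs.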

As tramas e o poder: Jaboticabal 1895-1936 praça, igreja e uma outra história / Urban fabric and the power within: Jaboticabal 1895-1936 central square, church and yet another narrative

Garcia, Valéria Eugênia 24 September 2008
Trata do espaço público como referência material na institucionalização das relações de poder. Aborda as alterações da paisagem urbana central da cidade de Jaboticabal a partir da instalação do regime Republicano (1889-1930), que gradativamente alterou o foco da ação política do meio rural para o ambiente da cidade. O seqüente incremento do modo de vida urbano suscitou transformações na organização das estruturas de poder com conseqüências diretas na concepção de cidade, enquanto artefato e enquanto locus de transmissão de mensagens. Essa situação gerou transformações e, paradoxalmente, permitiu permanências. No âmbito local, isso significou diversas disputas pelos espaços centrais da cidade, no que tange sua propriedade, administração, usos e significados. As implicações diretas remetem-se à organização das construções na região central, com destaque à praça e ao edifício da igreja, assim como os processos de remodelação relacionados a estes espaços. A câmara é, nesse sentido, agente de difusão dos projetos nacionais e a praça central, elemento espacial escolhido como recorte da pesquisa, o veículo portador das mensagens necessárias à fundamentação dessa nova ordem centrada na propagação do ideário de construção da nação brasileira. A discussão, no entanto, atravessa outra faceta do campo da administração da cidade, quando se soma aos valores simbólicos mencionados, as demandas concretas por infra-estrutura, situação inerente a um sistema urbano em expansão. Paralelamente, a adesão ao modo de produção capitalista insere a região na dinâmica mundial de produção, consumo e divisão do trabalho. A manutenção da base agrária, centrada no cultivo do café, fornece os instrumentos para a atuação de uma autoridade local constituída por defensores da República fatalmente ligados à cafeicultura. Finalmente, a imprensa local, que se transforma no veículo portador dos diversos discursos, engendrando a materialização destes na urbes. 
Para tanto, propõe uma análise baseada na coerência entre os discursos de legitimação e o espaço efetivamente construído. À medida que o discurso precisa ser fiel ao contexto pesquisado, opta pela utilização de documentos primários: leis, artigos de jornais, atas, cartas e todo e qualquer material que expresse de forma direta a alocução dos atores envolvidos. Da mesma forma, utiliza o suporte fotográfico com a finalidade de entender a configuração dos espaços e edifícios frente aos dados proporcionados pelos documentos pesquisados. / This dissertation deals with the materialization of power relations in public spaces. It investigates the changes in Jaboticabal's urban landscape after the instatement of the Republican regime (1889-1930), which gradually shifted the focus of political action from the rural to the urban environment. The rise of the urban way of living triggered transformations in the power structures, with direct consequences for the idea of the city, as artifact and as locus for symbolic messages. This circumstance of pressing changes paradoxically permitted a number of permanences. In the local sphere, it meant several disputes over the city's central spaces, with regard to their ownership, management, uses and meanings. The direct implications refer to the hierarchy of buildings downtown, with emphasis on the square and the church, along with their renovation processes. In this sense, the town council is the dissemination agent of national projects, and the central square, our study object, the instrument for delivering the messages needed to configure this new order based on the ideology of constructing the Brazilian nation. This discussion, however, crosses another facet of city administration: alongside the symbolic values mentioned stand concrete demands for infrastructure, inherent to an expanding urban system.
In parallel, the adhesion to the capitalist mode of production inserts the region into the world dynamics of production, consumption and the division of labor. The persistence of the agrarian base, centered on coffee cultivation, provides the tools for the action of a local authority made up of supporters of the Republic, inevitably linked to coffee production. Finally, the local press becomes the vehicle for the various discourses, engendering their materialization in the city. To that end, we propose an analysis based on the consistency between the legitimizing discourses and the space actually built. Since the discourse must be faithful to the context under study, we opt for the use of primary sources: laws, newspaper articles, minutes, letters and any material that directly expresses the positions of the actors involved. We also use photographic sources in order to understand the configuration of spaces and buildings, a resource that enables comparison between images and documentary data.

"Calibração multivariada e cinética diferencial em sistemas de análises em fluxo com detecção espectrofotométrica" / "Multivariate calibration and differential kinetic analysis in flow systems with spectrophotometric detection"

Fortes, Paula Regina 19 June 2006
A associação dos métodos cinéticos de análises e dos sistemas de análises em fluxo foi demonstrada em relação à determinação espectrofotométrica de ferro e vanádio em ligas Fe-V. O método se baseia na influência de Fe2+ e VO2+ na taxa de oxidação de iodeto por dicromato sob condições ácidas; por esta razão o emprego do redutor de Jones foi necessário. Um sistema de análises por injeção em fluxo (FIA) e um sistema multi-impulsão foram dimensionados e avaliados. Em ambos os sistemas, a solução da amostra era inserida no fluxo transportador / reagente iodeto, e a solução de dicromato era adicionada por confluência. Sucessivas medidas eram realizadas durante a passagem da zona de amostra processada pelo detector, cada uma relacionada a uma diferente condição para o desenvolvimento da reação. O tratamento dos dados envolveu calibração multivariada, particularmente o algoritmo PLS. O sistema FIA se mostrou pouco adequado para as determinações multi-paramétricas, uma vez que os elementos de fluido resultantes da natureza de escoamento laminar não continham informações cinéticas suficientes para compor as etapas de modelagem. Por outro lado, o sistema MPFS mostrou que a natureza do fluxo pulsado resulta em melhorias nas figuras de mérito devido ao movimento caótico dos elementos de fluido. O sistema proposto é simples e robusto, capaz de analisar 50 amostras por hora, o que significa um consumo de 48 mg de KI por determinação. As duas primeiras variáveis latentes contêm ca. 94 % da informação analítica, mostrando que a dimensionalidade intrínseca do conjunto de dados é dois. Os resultados se apresentaram concordantes com aqueles obtidos por espectrometria de emissão óptica com plasma induzido em argônio. / Differential kinetic analysis can be implemented in a flow system analyser, and this was demonstrated by designing an improved spectrophotometric catalytic determination of iron and vanadium in Fe-V alloys.
The method relied on the influence of Fe2+ and VO2+ on the rate of iodide oxidation by dichromate under acidic conditions; therefore the Jones reductor was needed. To this end, a flow injection system (FIA) and a multi-pumping flow system (MPFS) were dimensioned and evaluated. In both systems, the alloy solution was inserted into an acidic KI solution that acted also as carrier stream, and a dichromate solution was added by confluence. Successive measurements were performed during sample passage through the detector, each one related to a different yet reproducible condition for reaction development. Data treatment involved multivariate calibration by the PLS algorithm. The FIA system proved less suitable for multi-parametric determination, as the laminar flow regimen could not provide suitable kinetic information. On the other hand, the MPFS demonstrated that pulsed flow led to enhanced figures of merit due to the chaotic movement of its fluid elements. The proposed MPFS system is very simple and rugged, allowing 50 samples to be run per hour and consuming 48 mg KI per determination. The first two latent variables carry ca. 94 % of the analytical information, pointing out that the intrinsic dimensionality of the data set is two. Results are in agreement with those obtained by inductively coupled argon plasma optical emission spectrometry.
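The claim that two latent variables carry about 94 % of the analytical information can be checked, on any data matrix, from the singular value spectrum. The sketch below builds a synthetic two-component "kinetic" data set (the Fe/V interpretation is only an analogy, not the thesis's measurements) and computes the cumulative variance explained.

```python
import numpy as np

rng = np.random.default_rng(4)

# Each row mixes two underlying response shapes (stand-ins for the Fe and V
# kinetic contributions) in random proportions, plus measurement noise.
t = np.linspace(0.0, 1.0, 50)
shape_fe = np.exp(-3.0 * t)
shape_v = t * np.exp(-t)
M = np.array([a * shape_fe + b * shape_v + 0.01 * rng.normal(size=t.size)
              for a, b in rng.uniform(0.5, 2.0, size=(40, 2))])

Mc = M - M.mean(axis=0)                       # center before decomposition
s = np.linalg.svd(Mc, compute_uv=False)       # singular values
explained = np.cumsum(s ** 2) / np.sum(s ** 2)
# explained[1] is the fraction of variance captured by the first two components
```

A sharp drop in the spectrum after the second singular value is the signature of two-dimensional intrinsic structure, matching the two analytes being quantified.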

Itération sur les politiques optimiste et apprentissage du jeu de Tetris / Optimistic Policy Iteration and Learning the Game of Tetris

Thiéry, Christophe 25 November 2010
Cette thèse s'intéresse aux méthodes d'itération sur les politiques dans l'apprentissage par renforcement à grand espace d'états avec approximation linéaire de la fonction de valeur. Nous proposons d'abord une unification des principaux algorithmes du contrôle optimal stochastique. Nous montrons la convergence de cette version unifiée vers la fonction de valeur optimale dans le cas tabulaire, ainsi qu'une garantie de performances dans le cas où la fonction de valeur est estimée de façon approximative. Nous étendons ensuite l'état de l'art des algorithmes d'approximation linéaire du second ordre en proposant une généralisation de Least-Squares Policy Iteration (LSPI) (Lagoudakis et Parr, 2003). Notre nouvel algorithme, Least-Squares [lambda] Policy Iteration (LS[lambda]PI), ajoute à LSPI un concept venant de [lambda]-Policy Iteration (Bertsekas et Ioffe, 1996) : l'évaluation amortie (ou optimiste) de la fonction de valeur, qui permet de réduire la variance de l'estimation afin d'améliorer l'efficacité de l'échantillonnage. LS[lambda]PI propose ainsi un compromis biais-variance réglable qui peut permettre d'améliorer l'estimation de la fonction de valeur et la qualité de la politique obtenue. Dans un second temps, nous nous intéressons en détail au jeu de Tetris, une application sur laquelle se sont penchés plusieurs travaux de la littérature. Tetris est un problème difficile en raison de sa structure et de son grand espace d'états. Nous proposons pour la première fois une revue complète de la littérature qui regroupe des travaux d'apprentissage par renforcement, mais aussi des techniques de type évolutionnaire qui explorent directement l'espace des politiques et des algorithmes réglés à la main. Nous constatons que les approches d'apprentissage par renforcement sont à l'heure actuelle moins performantes sur ce problème que des techniques de recherche directe de la politique telles que la méthode d'entropie croisée (Szita et Lorincz, 2006). 
Nous expliquons enfin comment nous avons mis au point un joueur de Tetris qui dépasse les performances des meilleurs algorithmes connus jusqu'ici et avec lequel nous avons remporté l'épreuve de Tetris de la Reinforcement Learning Competition 2008. / This thesis studies policy iteration methods with linear approximation of the value function for large state space problems in the reinforcement learning context. We first introduce a unified algorithm that generalizes the main stochastic optimal control methods. We show the convergence of this unified algorithm to the optimal value function in the tabular case, and a performance bound in the approximate case where the value function is estimated. We then extend the literature on second-order linear approximation algorithms by proposing a generalization of Least-Squares Policy Iteration (LSPI) (Lagoudakis and Parr, 2003). Our new algorithm, Least-Squares [lambda] Policy Iteration (LS[lambda]PI), adds to LSPI an idea from [lambda]-Policy Iteration (Bertsekas and Ioffe, 1996): the damped (or optimistic) evaluation of the value function, which reduces the variance of the estimation and thereby improves sampling efficiency. Thus, LS[lambda]PI offers a tunable bias-variance trade-off that may improve the estimation of the value function and the performance of the policy obtained. In a second part, we study in depth the game of Tetris, a benchmark application that several works from the literature attempt to solve. Tetris is a difficult problem because of its structure and its large state space. We provide the first full review of the literature, covering reinforcement learning works, evolutionary methods that directly explore the policy space, and hand-tuned controllers. We observe that reinforcement learning is currently less successful on this problem than direct policy search approaches such as the cross-entropy method (Szita and Lorincz, 2006).
We finally show how we built a controller that outperforms the best previously known controllers, and briefly discuss how it allowed us to win the Tetris event of the 2008 Reinforcement Learning Competition.
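The least-squares machinery underlying LSPI and LS[lambda]PI can be shown in its simplest form: LSTD(0) policy evaluation with linear features. The example below uses a 5-state random-walk chain with one-hot features, a standard toy problem; it is not the Tetris setting or the feature sets of the thesis.

```python
import numpy as np

rng = np.random.default_rng(5)
n_states, gamma = 5, 0.95     # states 0..4; 0 and 4 terminal, reward 1 at state 4

def features(s):
    """One-hot feature vector (so LSTD reduces to the tabular case)."""
    phi = np.zeros(n_states)
    phi[s] = 1.0
    return phi

A = np.zeros((n_states, n_states))
b = np.zeros(n_states)
for _ in range(200):                           # episodes of the uniform random policy
    s = 2
    while True:
        s2 = s + rng.choice([-1, 1])
        r = 1.0 if s2 == n_states - 1 else 0.0
        done = s2 in (0, n_states - 1)
        phi = features(s)
        phi2 = np.zeros(n_states) if done else features(s2)
        A += np.outer(phi, phi - gamma * phi2)  # LSTD(0) accumulation
        b += phi * r
        if done:
            break
        s = s2

# Solve A w = b; rows/columns for the never-visited terminal states are zero,
# so the minimum-norm solution leaves their values at 0.
w, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The resulting weights order the non-terminal states by proximity to the rewarding terminal, exactly as the true value function does; LSPI wraps this evaluation step inside a policy improvement loop, and LS[lambda]PI dampens it with an eligibility parameter.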

Numerical methods for backward stochastic differential equations of quadratic and locally Lipschitz type

Turkedjiev, Plamen 17 July 2013
Der Fokus dieser Dissertation liegt darauf, effiziente numerische Methoden für ungekoppelte lokal Lipschitz-stetige und quadratische stochastische Vorwärts-Rückwärtsdifferenzialgleichungen (BSDE) mit Endbedingungen von schwacher Regularität zu entwickeln. Obwohl BSDE viele Anwendungen in der Theorie der Finanzmathematik, der stochastischen Kontrolle und der partiellen Differenzialgleichungen haben, gibt es bisher nur wenige numerische Methoden. Drei neue auf Monte-Carlo-Simulationen basierende Algorithmen werden entwickelt. Die in der zeitdiskreten Approximation zu lösenden bedingten Erwartungen werden mittels der Methode der kleinsten Quadrate näherungsweise berechnet. Ein Vorteil dieser Algorithmen ist, dass sie als Eingabe nur Simulationen eines Vorwärtsprozesses X und der Brownschen Bewegung benötigen. Da sie auf modellfreien Abschätzungen aufbauen, benötigen die hier vorgestellten Verfahren nur sehr schwache Bedingungen an den Prozess X. Daher können sie auf sehr allgemeinen Wahrscheinlichkeitsräumen angewendet werden. Für die drei numerischen Algorithmen werden explizite maximale Fehlerabschätzungen berechnet. Die Algorithmen werden dann auf Basis dieser maximalen Fehler kalibriert und die Komplexität der Algorithmen wird berechnet. Mithilfe einer zeitlich lokalen Abschneidung des Treibers der BSDE werden quadratische BSDE auf lokal Lipschitz-stetige BSDE zurückgeführt. Es wird gezeigt, dass die Komplexität der Algorithmen im lokal Lipschitz-stetigen Fall vergleichbar zu ihrer Komplexität im global Lipschitz-stetigen Fall ist. Es wird auch gezeigt, dass der Vergleich mit bereits für Lipschitz-stetige BSDE existierenden Methoden für die hier vorgestellten Algorithmen positiv ausfällt. / The focus of the thesis is to develop efficient numerical schemes for quadratic and locally Lipschitz decoupled forward-backward stochastic differential equations (BSDEs). The terminal conditions satisfy weak regularity conditions.
Although BSDEs have valuable applications in the theory of financial mathematics, stochastic control and partial differential equations, few efficient numerical schemes are available. Three algorithms based on Monte Carlo simulation are developed. Starting from a discrete-time scheme, least-squares regression is used to approximate the conditional expectations. One benefit of these schemes is that they require as input only simulations of an explanatory process X and a Brownian motion W. Because distribution-free tools are used, only very weak conditions on the explanatory process X are required, so the methods can be applied to very general probability spaces. Explicit upper bounds for the error are obtained. The algorithms are then calibrated systematically against these upper bounds and their complexity is computed. Using a time-local truncation of the BSDE driver, the quadratic BSDE is reduced to a locally Lipschitz BSDE, and it is shown that the complexity of the algorithms for the locally Lipschitz BSDE is the same as for a uniformly Lipschitz BSDE. It is also shown that these algorithms are competitive with other available algorithms for uniformly Lipschitz BSDEs.
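The least-squares regression step at the heart of these schemes can be sketched as follows. This is a minimal illustration of approximating a conditional expectation by regression on simulated paths, not the thesis's calibrated algorithm: the toy model Y = X² + noise, the polynomial basis, and all names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an explanatory process X and a response Y; in the BSDE
# schemes above, Y would be the value carried back one time step and
# X the forward process at that step.
n_paths = 5000
x = rng.normal(size=n_paths)
y = x**2 + rng.normal(scale=0.5, size=n_paths)  # E[Y | X = x] = x^2

# Least-squares regression on a polynomial basis approximates the
# conditional expectation E[Y | X] without knowing the law of X,
# which is what makes the approach distribution-free.
basis = np.vander(x, N=4, increasing=True)      # 1, x, x^2, x^3
coef, *_ = np.linalg.lstsq(basis, y, rcond=None)

def cond_exp(x_new):
    """Approximate E[Y | X = x_new] from the fitted coefficients."""
    return np.vander(np.atleast_1d(x_new), N=4, increasing=True) @ coef
```

In the actual schemes this regression is repeated backward through the time grid, and the error bounds mentioned above control how the basis size and the number of simulated paths must grow together.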
919

Evolution of HIV: methods, models and algorithms

Jung, Matthieu 21 May 2012 (has links)
Nucleotide sequence data enable the inference of phylogenetic trees, or phylogenies, describing the evolutionary relationships among sequences. Combining these sequences with their sampling date or country of origin allows inferring the temporal or spatial localization of their common ancestors. These data and methods are widely used with viral sequences, and particularly with human immunodeficiency virus (HIV), to trace the viral epidemic history over time and throughout the globe. Using sequences sampled at different points in time (heterochronous sequences) is also a means of estimating their substitution rate, which characterizes the speed of evolution. The methods most commonly used for these tasks are accurate but computationally heavy, since they are based on complex models, and can only handle a few hundred sequences. With an increasing number of sequences available in the databases, often several thousand for a given study, the development of fast and accurate methods becomes essential.
Here, we present a new distance-based method, named Ultrametric Least Squares, which is based on the principle of least squares (very popular in phylogenetics) to estimate the substitution rate of a set of heterochronous sequences and the dates of their most recent common ancestors. We demonstrate that the criterion to be optimized is piecewise parabolic, and provide an efficient algorithm to find the global minimum. Using sequences sampled at different locations also helps to trace the transmission chains of an epidemic. In this respect, we used all available sequences (~3,500) of HIV-1 subtype C, responsible for nearly 50% of global HIV-1 infections, to estimate its major migratory flows on a worldwide scale and its geographic origin. Innovative tools, based on the principle of parsimony combined with several statistical criteria, were used to synthesize and interpret the information in a large phylogeny representing all the studied sequences. Finally, the temporal and geographical origins of HIV-1 subtype C in Senegal were further explored, with a particular focus on men who have sex with men.
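The efficiency argument above rests on the criterion being piecewise parabolic: on each piece a convex parabola attains its minimum at its vertex (clamped to the piece), so one linear scan finds the global minimum. A minimal sketch, assuming the criterion has already been broken into explicit `(a, b, c, lo, hi)` pieces; this representation and the function name are illustrative, not the Ultrametric Least Squares criterion itself.

```python
def piecewise_parabola_min(pieces):
    """Global minimum of a piecewise parabolic function.

    `pieces` is a list of (a, b, c, lo, hi): on [lo, hi] the criterion
    equals a*t**2 + b*t + c.  Each piece's minimum lies at its clamped
    vertex or an endpoint, so a single pass over the pieces suffices.
    """
    best_t, best_val = None, float("inf")
    for a, b, c, lo, hi in pieces:
        candidates = [lo, hi]
        if a > 0:  # convex piece: also try the vertex, clamped to [lo, hi]
            candidates.append(min(max(-b / (2 * a), lo), hi))
        for t in candidates:
            val = a * t * t + b * t + c
            if val < best_val:
                best_t, best_val = t, val
    return best_t, best_val
```

For example, two pieces `t**2` on [-1, 1] and `(t-3)**2 + 1` on [1, 4] give a global minimum of 0 at t = 0.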
920

Application of multivariate regression techniques to paint: for the quantitative FTIR spectroscopic analysis of polymeric components

Phala, Adeela Colyne January 2011 (has links)
Thesis submitted in fulfilment of the requirements for the degree Master of Technology: Chemistry in the Faculty of Science. Supervisor: Professor T.N. van der Walt. Bellville campus. Date submitted: October 2011 / It is important to quantify polymeric components in a coating because they greatly influence the performance of the coating. The difficulty with analysing polymers by Fourier transform infrared (FTIR) spectroscopy is that collinearities arise from similar or overlapping spectral features. A quantitative FTIR method with attenuated total reflectance, coupled to multivariate (chemometric) analysis, is presented. It allows simultaneous quantification of three polymeric components (a rheology modifier, an organic opacifier and a styrene acrylic binder) with no prior extraction or separation from the paint. The factor-based methods partial least squares (PLS) and principal component regression (PCR) accommodate collinearities by decomposing the spectral data into smaller matrices of principal scores and loading vectors. For model building, spectral information from calibration and validation samples at different analysis regions was incorporated. PCR and PLS were used to inspect the variation within the sample set, and the PLS algorithms were found to predict the polymeric components best. The concentrations of the polymeric components in a coating were predicted with the calibration model. Three PLS models, each with different analysis regions, yielded a coefficient of determination R² close to 1 for each of the components. The root mean square error of calibration (RMSEC) and root mean square error of prediction (RMSEP) were less than 5%. The best output was obtained where spectral features of water were included (Trial 3). The prediction residuals for the three models ranged from -2 to 2 and from -10 to 10. The method allows paint samples to be analysed in pure form and opens many opportunities for other coating components to be analysed in the same way.
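The score-decomposition idea behind PCR can be sketched in a few lines of NumPy. This is a hedged toy illustration, not the thesis's method: the synthetic "spectra" (three overlapping Gaussian component profiles plus noise, mimicking collinear FTIR bands), the sample sizes, and the choice of three principal components are all assumptions, and the PLS variant is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "spectra": 60 samples x 100 wavenumbers built from three
# overlapping component profiles, mimicking collinear FTIR data.
wav = np.linspace(0, 1, 100)
profiles = np.stack([np.exp(-((wav - c) ** 2) / 0.01) for c in (0.3, 0.45, 0.6)])
conc = rng.uniform(0, 1, size=(60, 3))          # component "concentrations"
spectra = conc @ profiles + rng.normal(scale=0.01, size=(60, 100))

# Principal component regression: project mean-centred spectra onto the
# leading principal components (scores), then regress the centred
# concentrations on those scores.  The small score matrix absorbs the
# collinearity among the overlapping bands.
x_mean = spectra.mean(axis=0)
y_mean = conc.mean(axis=0)
_, _, vt = np.linalg.svd(spectra - x_mean, full_matrices=False)
loadings = vt[:3].T                              # three PCs suffice here
scores = (spectra - x_mean) @ loadings
beta, *_ = np.linalg.lstsq(scores, conc - y_mean, rcond=None)

def predict(new_spectra):
    """Predict component concentrations from spectra via the PCR model."""
    return (new_spectra - x_mean) @ loadings @ beta + y_mean

pred = predict(spectra)
rmsec = np.sqrt(((pred - conc) ** 2).mean())     # calibration error
```

On this synthetic set the calibration error is small because three components generate the data and three scores capture them; real paint spectra would also need validation samples and region selection, as described above.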
