491

Computation of Parameters in some Mathematical Models

Wikström, Gunilla January 2002 (has links)
In computational science it is common to describe dynamic systems by mathematical models in the form of differential or integral equations. These models may contain parameters that must be computed for the model to be complete. For the special type of ordinary differential equations studied in this thesis, the resulting parameter estimation problem is a separable nonlinear least squares problem with equality constraints. This problem can be solved by iteration, but because the derivatives are complicated to compute and several local minima may exist, so-called short-cut methods can be an alternative. These methods are based on simplified versions of the original problem. An algorithm, called the modified Kaufman algorithm, is proposed; it takes the separability into account. Moreover, different discretizations and formulations of the optimization problem are discussed, as well as the effect of ill-conditioning.

Computation of parameters often includes, as one step, the solution of a linear system of equations Ax = b. The corresponding pseudoinverse solution depends on the properties of the matrix A and the vector b. The singular value decomposition of A can then be used to construct error propagation matrices, and with these it is possible to investigate how changes in the input data affect the solution x. Theoretical error bounds based on condition numbers indicate the worst case, but experimental error analysis also gives information about the effect of a more limited set of perturbations and is in that sense more realistic. It is shown how the effect of perturbations can be analyzed by a semi-experimental analysis that combines the theory of the error propagation matrices with an experimental error analysis based on randomly generated perturbations that take the structure of A into account.
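A minimal sketch of the kind of semi-experimental error analysis described above, using NumPy: the pseudoinverse solution of Ax = b is computed via the SVD, and randomly generated perturbations of b are compared against the worst-case condition-number bound. The matrix, right-hand side and perturbation level are invented for illustration and are not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# An ill-conditioned test matrix and a consistent right-hand side
# (b = col1 + col2 exactly; illustrative only).
A = np.array([[1.0, 1.0],
              [1.0, 1.0001],
              [1.0, 0.9999]])
b = np.array([2.0, 2.0001, 1.9999])

# Pseudoinverse solution x = V * Sigma^+ * U^T * b via the SVD of A.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x = Vt.T @ ((U.T @ b) / s)

kappa = s[0] / s[-1]          # condition number of A
eps = 1e-6                    # relative size of the perturbations

# Experimental error analysis: perturb b randomly and record the
# relative change in the solution.
worst = 0.0
for _ in range(1000):
    v = rng.standard_normal(b.shape)
    db = eps * np.linalg.norm(b) * v / np.linalg.norm(v)
    dx = Vt.T @ ((U.T @ db) / s)
    worst = max(worst, np.linalg.norm(dx) / np.linalg.norm(x))

print(f"condition number     : {kappa:.2e}")
print(f"worst observed error : {worst:.2e}")
# Approximate worst-case bound kappa(A)*eps (tight when b lies
# nearly in the range of A, as it does here).
print(f"worst-case bound     : {kappa * eps:.2e}")
```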
493

Utformning av mjukvarusensorer för avloppsvatten med multivariata analysmetoder / Design of soft sensors for wastewater with multivariate analysis

Abrahamsson, Sandra January 2013 (has links)
Every study of a real process or system is based on measured data. In the past, the amount of data available was very limited, but with today's technology measurement data are far more accessible: where there were once only a few, often disconnected, measurements of a single variable, there are now many, virtually continuous, measurements of a large number of variables. This considerably changes the possibilities for understanding and describing processes. Multivariate analysis is often used when large data sets with many variables are evaluated. In this project, the multivariate analysis methods PCA (principal component analysis) and PLS (partial least squares projection to latent structures) were applied to wastewater data collected at the Hammarby Sjöstadsverk wastewater treatment plant. Society places ever stricter demands on treatment plants to reduce their environmental impact. With better process knowledge, the systems can be monitored and controlled so that resource consumption is reduced without degrading treatment performance. Some variables are easy to measure directly in the water, while others require extensive laboratory analysis. Parameters in the latter category that are important for treatment performance include the wastewater's content of phosphorus and nitrogen, which among other things demand resources in the form of chemicals for phosphorus precipitation and energy for aeration of the biological treatment stage. The concentrations of these substances in the incoming water vary over the day and are difficult to monitor.
The purpose of this study was to investigate whether it is possible to obtain information about the variables that are hard to measure from the easily measured ones, by using multivariate analysis methods to build models of the variables. Such models are often called soft sensors, since no physical sensor measures the variable of interest. Measurements of the wastewater in Line 1 were made at several points in the process during the period March 11-15, 2013, and a number of multivariate models were then created to try to explain the hard-to-measure variables. The results show that information about these variables can be obtained with PLS models built on more easily available data. The models performed best at explaining incoming nitrogen, but further validation is needed to firmly establish their accuracy.
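A minimal sketch of the soft-sensor idea using scikit-learn's PLSRegression: a PLS model predicts a hard-to-measure quantity (here standing in for incoming nitrogen) from easy-to-measure signals. The variable roles and the synthetic data are invented for illustration; the thesis's actual measurements and preprocessing are not reproduced.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)

# Synthetic stand-ins for easy-to-measure signals (e.g. flow,
# conductivity, turbidity, temperature -- hypothetical roles).
n = 500
X = rng.normal(size=(n, 4))
# Hypothetical "incoming nitrogen" that depends on the signals plus noise.
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] + 0.3 * rng.normal(size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The soft sensor: a PLS model with a small number of latent components.
pls = PLSRegression(n_components=2)
pls.fit(X_train, y_train)

y_pred = pls.predict(X_test).ravel()
print(f"R^2 on held-out data: {r2_score(y_test, y_pred):.3f}")
```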
494

Nonnegative matrix and tensor factorizations, least squares problems, and applications

Kim, Jingu 14 November 2011 (has links)
Nonnegative matrix factorization (NMF) is a useful dimension reduction method that has been investigated and applied in various areas. NMF is considered for high-dimensional data in which each element has a nonnegative value, and it provides a low-rank approximation formed by factors whose elements are also nonnegative. The nonnegativity constraints imposed on the low-rank factors not only enable natural interpretation but also reveal the hidden structure of the data. Extending the benefits of NMF to multidimensional arrays, nonnegative tensor factorization (NTF) has been shown to be successful in analyzing complicated data sets. Despite this success, NMF and NTF have been actively developed only in the last decade, and algorithmic strategies for computing NMF and NTF have not been fully studied. In this thesis, computational challenges regarding NMF, NTF, and related least squares problems are addressed. First, efficient algorithms for NMF and NTF are investigated based on a connection between the NMF and NTF problems and nonnegativity-constrained least squares (NLS) problems. A key strategy is to observe the typical structure of the NLS problems arising in NMF and NTF computation and to design a fast algorithm that utilizes this structure. We propose an accelerated block principal pivoting method to solve the NLS problems, thereby significantly speeding up NMF and NTF computation. Implementation results with synthetic and real-world data sets validate the efficiency of the proposed method. In addition, a theoretical result on the classical active-set method for rank-deficient NLS problems is presented. Although the block principal pivoting method appears generally more efficient than the active-set method for NLS problems, it is not applicable to rank-deficient cases. We show that the active-set method with a proper starting vector can actually solve rank-deficient NLS problems without ever running into rank-deficient least squares problems during iterations. Going beyond the NLS problems, we show that a block principal pivoting strategy can also be applied to l1-regularized linear regression. The l1-regularized linear regression, also known as the Lasso, has been very popular due to its ability to promote sparse solutions; solving this problem is difficult because the l1-regularization term is not differentiable. A block principal pivoting method and a variant, which overcome a limitation of previous active-set methods, are proposed for this problem with successful experimental results. Finally, a group-sparsity regularization method for NMF is presented. A recent challenge in data analysis for science and engineering is that data are often represented in a structured way; in particular, many data mining tasks have to deal with group-structured prior information, where features or data items are organized into groups. Motivated by the observation that features or data items belonging to a group are expected to share the same sparsity pattern in their latent factor representations, we propose mixed-norm regularization to promote group-level sparsity. Efficient convex optimization methods for dealing with the regularization terms are presented, along with computational comparisons between them. Application examples of the proposed method in factor recovery, semi-supervised clustering, and multilingual text analysis are presented.
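A minimal sketch of the connection the abstract describes: computing an NMF by alternating between nonnegativity-constrained least squares (NLS) subproblems for the two factors. For simplicity this uses SciPy's classic active-set NNLS solver (Lawson-Hanson) on each column; the thesis's contribution is a faster block principal pivoting solver for exactly these subproblems, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import nnls

def nmf_anls(A, rank, iters=50, seed=0):
    """NMF A ~= W @ H by alternating nonnegative least squares.

    Each alternation solves column-wise NLS subproblems with SciPy's
    Lawson-Hanson active-set solver (a slow but simple stand-in for
    the block principal pivoting method proposed in the thesis).
    """
    m, n = A.shape
    rng = np.random.default_rng(seed)
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(iters):
        # Fix W: solve min ||W h_j - a_j|| s.t. h_j >= 0 for each column j.
        for j in range(n):
            H[:, j], _ = nnls(W, A[:, j])
        # Fix H: solve the symmetric problem for the rows of W.
        for i in range(m):
            W[i, :], _ = nnls(H.T, A[i, :])
    return W, H

# Low-rank nonnegative test data.
rng = np.random.default_rng(42)
A = rng.random((30, 4)) @ rng.random((4, 20))
W, H = nmf_anls(A, rank=4)
print("relative error:", np.linalg.norm(A - W @ H) / np.linalg.norm(A))
```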
495

Implicit Least Squares Kinetic Upwind Method (LSKUM) And Implicit LSKUM Based On Entropy Variables (q-LSKUM)

Swarup, A Sri Sakti 07 1900 (has links)
With the increasing demand for computational solutions of fluid dynamical problems, researchers around the world are working on the development of highly robust numerical schemes capable of solving flow problems around the complex geometries arising in aerospace engineering. Considerable time and effort are also devoted to the development of convergence acceleration devices to reduce the computational time required for such numerical solutions. Reduction in run times is vital for production codes that are used many times in a design cycle. In the present work, we consider a numerical scheme called LSKUM, capable of operating on any arbitrary distribution of points. LSKUM is being used at the CFD Centre (IISc) and DRDL (Hyderabad) to compute flows around practical geometries, and at present these LSKUM-based codes are explicit. Earlier researchers have observed that the explicit schemes for these methods are robust; it is therefore essential to consider accelerating explicit LSKUM by making it implicit, and the present thesis focuses on such a study. We start with two kinetic schemes, namely the Least Squares Kinetic Upwind Method (LSKUM) and LSKUM based on entropy variables (q-LSKUM), and develop the following two implicit schemes: (i) a nonlinear iterative implicit scheme, called LSKUM-NII, and (ii) a linearized Beam and Warming implicit scheme, called LSKUM-BW. To demonstrate the efficiency of the newly developed implicit schemes, we consider flow past a NACA0012 airfoil as a test example, in two flow regimes: a subsonic case (M∞ = 0.63, angle of attack 2.0°) and a transonic case (M∞ = 0.85, angle of attack 1.0°). The speedup of the two implicit schemes has been studied by running them on grids of different sizes: a coarse grid (4074 points), a medium grid (8088 points) and a fine grid (16594 points). The results obtained with these implicit schemes are very encouraging: they give as much as 2.8 times speedup compared with their corresponding explicit versions. Further improvement is possible by combining implicit LSKUM with modern iterative methods for solving the resultant algebraic equations; the present work is a first step towards this objective.
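The motivation for implicit time stepping can be seen on a model problem. The sketch below uses 1-D linear advection with first-order upwinding rather than LSKUM itself: the explicit update is stable only for CFL numbers up to 1, while the backward-Euler implicit update remains stable at much larger time steps, which is the source of run-time savings of the kind reported above. All numbers are illustrative.

```python
import numpy as np

N = 100
dx = 1.0 / N
x = np.arange(N) * dx
u0 = np.exp(-100 * (x - 0.5) ** 2)   # initial Gaussian profile, periodic domain

def step_explicit(u, cfl):
    # Explicit first-order upwind: stable only for cfl <= 1.
    return u - cfl * (u - np.roll(u, 1))

def step_implicit(u, cfl):
    # Backward Euler + upwind: ((1+cfl) I - cfl S) u_new = u,
    # where S shifts by one point (periodic). Unconditionally stable.
    M = (1 + cfl) * np.eye(N) - cfl * np.roll(np.eye(N), -1, axis=1)
    return np.linalg.solve(M, u)

# March both schemes to the same final time.
u_exp = u0.copy()
for _ in range(200):                 # 200 steps at cfl = 0.5
    u_exp = step_explicit(u_exp, 0.5)

u_imp = u0.copy()
for _ in range(10):                  # only 10 steps at cfl = 10
    u_imp = step_implicit(u_imp, 10.0)

# Both stay bounded; the implicit run needed 20x fewer steps
# (at the cost of a linear solve per step and extra diffusion).
print("explicit max:", u_exp.max(), " implicit max:", u_imp.max())
```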
496

Weighted Least Squares Kinetic Upwind Method Using Eigendirections (WLSKUM-ED)

Arora, Konark 11 1900 (has links)
Least Squares Kinetic Upwind Method (LSKUM), a grid-free method based on kinetic schemes, has been gaining popularity over conventional CFD methods for the computation of inviscid and viscous compressible flows past complex configurations. The main reason for the growing popularity of this method is its ability to work on any point distribution. Grid-free methods do not require a grid for flow simulation, which is an essential requirement for all other conventional CFD methods; they do, however, require a point distribution, or cloud of points. Point generation is relatively simple and less time-consuming than grid generation. There are various methods for point generation, such as an advancing-front method, a quadtree-based point generation method, a structured grid generator, an unstructured grid generator, or a combination of the above. One of the easiest ways to generate points around complex geometries is to overlap the simple point distributions generated around the individual constituent parts of the complex geometry. The least squares grid-free method has been used successfully to solve a large number of flow problems over the years. However, some problems are still encountered when using this method on point distributions around complex configurations. Close analysis has revealed that bad connectivity of the nodes is the cause, and this leads to connectivity-related code divergence. The least squares (LS) grid-free method called LSKUM discretizes the spatial derivatives using the least squares approach: the formulae for the spatial derivatives are obtained by minimizing the sum of the squares of the error, leading to a system of linear algebraic equations whose solution gives the formulae for the spatial derivatives. The least squares matrices for the 1-D and 2-D cases respectively are A = ∑∆xi² and

A = | ∑∆xi²    ∑∆xi∆yi |
    | ∑∆xi∆yi  ∑∆yi²   |

The 1-D LS formula for the spatial derivatives is always well behaved, in the sense that ∑∆xi² can never become zero; in 2-D, however, problems can arise. The elements of the LS matrix A are functions of the coordinate differentials of the nodes in the connectivity, so bad connectivity of a node can have an adverse effect on the nature of the LS matrices. There are various types of bad connectivity for a node: an insufficient number of nodes in the connectivity, a highly anisotropic distribution of nodes in the connectivity stencil, nodes falling nearly on a line (or a plane in 3-D), and so on. In multidimensions, the case of all nodes on a line makes the matrix A singular, and hence impossible to invert. An anisotropic distribution of nodes in the connectivity can make the matrix A highly ill-conditioned, leading to loss of accuracy or code divergence. To overcome this problem, the approach followed so far has been to modify the connectivity by including more neighbours in the connectivity of the node. In this thesis, we follow a different approach: using weights to alter the nature of the LS matrix A. The weighted LS formulae for the spatial derivatives are obtained in the same way, with each sum weighted by wi (for example, in 1-D the derivative estimate is ∑wi∆xi∆fi / ∑wi∆xi²), where the weights wi are all positive. So we ask a question: can we reduce the multidimensional LS formula for the derivatives to the 1-D type formula, and make use of the advantages of the 1-D type formula in multidimensions?
Taking a closer look at the LS matrices, we observe that they are real, symmetric matrices with real eigenvalues and a real, distinct set of eigenvectors. The eigenvectors of these matrices are orthogonal, and along the eigendirections the corresponding LS formulae reduce to the 1-D type formulae. A problem now arises in combining the eigendirections with upwinding. Upwinding, which in LS is done by stencil splitting, is essential to give stability to the numerical scheme. It involves choosing a direction for enforcing upwinding, along which the stencil is split. But the chosen direction is not necessarily along one of the eigendirections of the split stencil, so in general we cannot use the 1-D type formulae along the chosen direction. This difficulty is overcome by the use of weights, leading to WLSKUM-ED (Weighted Least Squares Kinetic Upwind Method using Eigendirections). In WLSKUM-ED, the weights are chosen so that the chosen direction becomes an eigendirection of A(w). As a result, the multidimensional LS formulae reduce to 1-D type formulae along the eigendirections, and all the advantages of the 1-D LS formulae can be exploited even in multidimensions. A very simple and novel way to calculate the positive weights, utilizing the coordinate differentials of the neighbouring nodes in the connectivity in 2-D and 3-D, has been developed for this purpose. The method is based on the fact that the summations of the coordinate differentials have different signs (+ or -) in the different quadrants or octants of the split stencil. It is shown that the choice of suitable weights is equivalent to a suitable decomposition of the vector space: the weights either fully diagonalize the least squares matrix, decomposing the 3-D vector space R3 as R3 = e1 + e2 + e3, where e1, e2 and e3 are the eigenvectors of A(w), or they make the chosen direction an eigendirection, decomposing R3 as R3 = e1 + (2-D vector space R2). The positive weights not only prevent the denominator of the 1-D type LS formulae from going to zero, but also preserve the LED property of the least squares method. WLSKUM-ED has been applied successfully to a large number of 2-D and 3-D test cases in various flow regimes, for a variety of point distributions ranging from a simple cloud generated by a structured grid generator (the shock reflection problem in 2-D and supersonic flow past a hemisphere in 3-D) to multiple chimera clouds generated from multiple overlapping meshes (the BI-NACA test case in 2-D and the FAME cloud for the M165 configuration in 3-D), demonstrating the robustness of the WLSKUM-ED solver. It must be noted that the second-order accurate computations using this method have been performed without the use of limiters in all flow regimes. No spurious oscillations or wiggles have been observed in the captured shocks, indicating the preservation of the LED property of the method even for second-order accurate computations. Convergence acceleration of the WLSKUM-ED code has been achieved by the use of the LUSGS method. The use of 1-D type formulae simplifies the application of the LUSGS method in the grid-free framework. The advantage of the LUSGS method is that the evaluation and storage of the Jacobian matrices can be eliminated by approximating the split flux Jacobians in the implicit operator itself.
Numerical results reveal a speedup of four with the LUSGS method compared with the explicit time-marching method. The 2-D WLSKUM-ED code has also been used to perform internal flow computations. Internal flows are flows confined within boundaries, on which the inflow and outflow boundaries have a significant effect; accurate treatment of these boundary conditions is essential, particularly if the flow at the outflow boundary is subsonic or transonic. The Kinetic Periodic Boundary Condition (KPBC), developed to enable single-passage (SP) flow computations to be performed in place of multi-passage (MP) flow computations, utilizes the moment method strategy. The state-update formula for the points on the periodic boundaries is identical to the state-update formula for the interior points and, like the interior points, can easily be extended to second-order accuracy. Numerical results show that the MP flow computation results are successfully reproduced by SP flow computations using the KPBC. The inflow and outflow boundary conditions at the respective boundaries are enforced by the Kinetic Outer Boundary Condition (KOBC). These boundary conditions have been validated by performing flow computations for the 3rd test case of the 4th standard blade configuration of the turbine blade; the numerical results compare well with the experimental results.
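A minimal sketch of the weighted least squares derivative formula that WLSKUM-ED builds on: given a node and its connectivity, the derivative estimates (fx, fy) come from the weighted normal equations A(w) d = b, where A(w) contains the weighted sums of coordinate differentials shown above. The point cloud, test function and uniform weights are invented for illustration; choosing the weights so that a desired direction becomes an eigendirection of A(w) is the thesis's contribution and is not reproduced here.

```python
import numpy as np

def wls_gradient(p0, nbrs, f, w):
    """Weighted least squares estimate of grad f at p0.

    Minimizes sum_i w_i (df_i - fx*dx_i - fy*dy_i)^2, giving the
    normal equations A(w) [fx, fy]^T = b with
      A(w) = [[sum w dx^2,  sum w dx dy],
              [sum w dx dy, sum w dy^2 ]].
    """
    dx = nbrs[:, 0] - p0[0]
    dy = nbrs[:, 1] - p0[1]
    df = f(nbrs[:, 0], nbrs[:, 1]) - f(p0[0], p0[1])
    A = np.array([[np.sum(w * dx * dx), np.sum(w * dx * dy)],
                  [np.sum(w * dx * dy), np.sum(w * dy * dy)]])
    b = np.array([np.sum(w * dx * df), np.sum(w * dy * df)])
    return np.linalg.solve(A, b)

# Illustrative scattered connectivity around a node, with unit weights.
rng = np.random.default_rng(0)
p0 = np.array([0.5, 0.5])
nbrs = p0 + 0.01 * rng.standard_normal((8, 2))
f = lambda x, y: x**2 + 3.0 * y          # test function, grad = (2x, 3)
w = np.ones(len(nbrs))

print("WLS gradient :", wls_gradient(p0, nbrs, f, w))
print("exact        :", np.array([2 * p0[0], 3.0]))
```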
497

Multivariate analysis of high-throughput sequencing data / Analyses multivariées de données de séquençage à haut débit

Durif, Ghislain 13 December 2016 (has links)
The statistical analysis of next-generation sequencing (NGS) data raises many computational challenges regarding modeling and inference, especially because of the high dimensionality of genomic data. The research work in this manuscript concerns hybrid dimension-reduction methods that rely on both compression (representation of the data in a lower-dimensional space) and variable selection. Developments are made concerning the sparse Partial Least Squares (PLS) regression framework for supervised classification, and the sparse matrix factorization framework for unsupervised exploration. In both situations, our main purpose is the reconstruction and visualization of the data. First, we present a new sparse PLS approach, based on an adaptive sparsity-inducing penalty, that is suitable for logistic regression to predict the label of a discrete outcome; such a method can be used, for instance, for prediction (fate of patients, or specific type of unidentified single cells) based on gene expression profiles. The main issue in this framework is to account for the response in order to discard irrelevant variables. We highlight the direct link between the derivation of the algorithms and the reliability of the results.
Then, motivated by questions regarding single-cell data analysis, we propose a flexible model-based approach for the factorization of count matrices that accounts for over-dispersion as well as zero-inflation (both characteristic of single-cell data), for which we derive an estimation procedure based on variational inference. In this scheme, we consider probabilistic variable selection based on a spike-and-slab model suitable for count data. The interest of our procedure for data reconstruction, visualization and clustering is illustrated by simulation experiments and by preliminary results on single-cell data analysis. All proposed methods are implemented in two R packages, plsgenomics and CMF, based on high-performance computing.
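As a hedged illustration of how sparsity-inducing penalties perform variable selection in a PLS framework: one standard construction of the first sparse PLS direction soft-thresholds the covariance vector X^T y, zeroing out variables weakly related to the response. This generic sketch is not the adaptive penalty or the logistic-regression extension developed in the thesis; the data and threshold are invented.

```python
import numpy as np

def soft_threshold(v, lam):
    # Proximal operator of the l1 penalty.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sparse_pls_direction(X, y, lam):
    """First sparse PLS direction: an l1-penalized covariance direction.

    The unpenalized first PLS weight vector is proportional to X^T y;
    soft-thresholding it discards variables weakly related to the
    response, which is the variable selection effect described above.
    """
    w = soft_threshold(X.T @ y, lam)
    n = np.linalg.norm(w)
    return w / n if n > 0 else w

# Synthetic data: only the first 3 of 20 variables carry signal.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))
y = X[:, :3] @ np.array([2.0, -1.5, 1.0]) + 0.1 * rng.standard_normal(200)

w = sparse_pls_direction(X - X.mean(0), y - y.mean(), lam=150.0)
print("selected variables:", np.flatnonzero(w))
```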
498

Robust Least Squares Kinetic Upwind Method For Inviscid Compressible Flows

Ghosh, Ashis Kumar 06 1900 (has links) (PDF)
No description available.
499

L'hôpital magnétique : définition, conceptualisation, attributs organisationnels et conséquences perçues sur les attitudes au travail / Magnet hospital : definition, conceptualization, organizational attributes and perceived consequences on work attitudes

Sibé, Matthieu 21 November 2014 (has links)
Many contemporary reports warn of a recurring malaise among hospital human resources, particularly among doctors and nurses, and consequently of the risk of poor quality of patient care. Adopting a more optimistic approach, American nursing researchers have highlighted, since the early 1980s, the existence of so-called magnet hospitals: hospitals that attract and retain staff, and that are good places both to work and to be cared for. This thesis aims to deepen the Magnet Hospital concept and to clarify its definition and its scope for hospital human resource management in France. Following a hypothetico-deductive approach, the conceptualization, based on a review of the literature, begins with an appropriation of the synthetic Magnet Hospital model. From a psychosocial perspective, our original research model focuses on the perception, at the level of care units, of the managerial attributes of hospital magnetism (transformational leadership, perceived empowerment of participation, and a collegial relational climate between doctors and nurses) and its positive attitudinal consequences (satisfaction, commitment, intent to stay, work/non-work emotional balance, and perceived collective efficacy).
A quantitative methodology, using 8 ad hoc scales, surveys a representative sample of 133 doctors, 361 nurses and 362 auxiliary nurses from 36 French multipurpose medicine units. A series of structural equation models, estimated with the Partial Least Squares algorithm, tests the nature and intensity of the direct and indirect relationships of perceived managerial magnetism. The statistical results indicate good construct validity and good model fit. A magnetic managerial context produces its main positive effect on perceived collective efficacy. Differences exist between professional categories in the perception of its composition and in the transmission of its effects through the mediation of perceived collective efficacy, signaling the contingent character of magnetism. These results open managerial and scientific perspectives, underlining the value of positive approaches to hospital organization.
500

Unmanned ground vehicles: adaptive control system for real-time rollover prevention

Mlati, Malavi Clifford 04 1900 (has links)
Real-time rollover prevention for an Unmanned Ground Vehicle (UGV) is paramount to its reliability and survivability, especially when operating on unknown and rough terrains such as mines or other planets. This research therefore presents a method for real-time rollover prevention of UGVs using adaptive control techniques based on Recursive Least Squares (RLS) estimation of unknown parameters, in order to enable UGVs to adapt to unknown harsh terrains and thereby increase their reliability and survivability. The adaptation is achieved with an indirect adaptive control technique in which the controller parameters are computed in real time from the online estimates of the plant's (UGV's) parameters (rollover index and roll angle) and the desired UGV performance, so as to appropriately adjust the UGV speed and suspension actuators to counteract vehicle rollover. A great challenge in indirect adaptive control is online parameter identification; here, an RLS-based estimator is used to estimate the vehicle's rollover index and roll angle from lateral acceleration measurements and the height of the centre of gravity of the UGV. RLS is suitable for online parameter identification because it updates the parameter estimates at each sample time. The performance of the adaptive control algorithms is evaluated using a Matlab Simulink® system model, with the UGV model built on the SimMechanics physical modelling platform; the whole system runs within the Simulink environment to emulate a real-world application. The simulation results of the proposed adaptive control algorithm based on RLS estimation show that the algorithm does prevent, or minimizes the likelihood of, vehicle rollover in real time. / Electrical and Mining Engineering / M. Tech. (Electrical Engineering)
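A minimal sketch of the recursive least squares update at the heart of such an estimator: at each sample, the parameter estimate and covariance are refreshed from one new regressor/measurement pair, which is what makes RLS suitable for online identification. The regression model and data below are invented for illustration and stand in for the UGV's rollover-index and roll-angle models.

```python
import numpy as np

class RLS:
    """Recursive least squares with exponential forgetting."""

    def __init__(self, n_params, lam=0.98, p0=1e3):
        self.theta = np.zeros(n_params)       # parameter estimate
        self.P = p0 * np.eye(n_params)        # estimate covariance
        self.lam = lam                        # forgetting factor

    def update(self, phi, y):
        # Gain, prediction error, then estimate and covariance update:
        #   K = P phi / (lam + phi^T P phi)
        #   theta += K (y - phi^T theta)
        #   P = (P - K phi^T P) / lam
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)
        self.theta += k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta

# Illustrative example: track y = a1*u1 + a2*u2 online, with noise.
rng = np.random.default_rng(0)
true_theta = np.array([0.7, -1.2])
est = RLS(n_params=2)
for _ in range(500):
    phi = rng.standard_normal(2)              # regressor sample
    y = phi @ true_theta + 0.05 * rng.standard_normal()
    est.update(phi, y)
print("estimate:", est.theta, " true:", true_theta)
```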
