321

The changing role of the secondary school principal in building sustainable communities

Souls, Jacobus Abram 30 November 2005 (has links)
The aim of the study was to investigate the changing role of the secondary school principal in building sustainable communities. The premise is that unsustainable communities affect secondary schools. The focus is on how the secondary school principal should go about building, sustaining and uplifting the school community. The direct and indirect involvement of secondary school principals in community issues can contribute to sustainability within the community, which gradually becomes a reality. The task of the secondary school principal is realised through the results of effective educative teaching and learning practices. A literature study found that the principal's role in enhancing sustainable communities would contribute to the upliftment of those communities. The qualitative approach was successful in obtaining information about how this changing role is viewed. Recommendations based on the research findings were made for stakeholders and officials to note. / Educational Studies / M. Ed. (Education Management)
322

Conversions from Islam to Christianity in the Sudan

Straehler, Reinhold 30 November 2005 (has links)
This research project focuses on conversions from Islam to Christianity in the Sudan. It first gives a biblical and theological understanding of conversion and then introduces the sociological and psychological understanding of such a change in religious affiliation. It discusses conversion as a spiritual decision process and develops a spiritual decision matrix for evaluating the conversion processes of Muslims. The heart of the study is an analysis of the conversion processes of six converts with a Northern Sudanese background from different Muslim tribes. The interviews conducted with these converts are analysed in terms of five parameters: reasons for conversion; factors that led to conversion; stages in the conversion processes; problems encountered during the conversion processes; and results of the conversion. These parameters are compared with existing data from six studies of Muslims in other geographical areas who also converted to the Christian faith. / Christian Spirituality, Church History and Missiology / M.Th. (Missiology)
323

Les crises économiques et financières et les facteurs favorisant leur occurrence / Empirical varieties and leading contexts of economic and financial crises

Cabrol, Sébastien 31 May 2013 (has links)
The aim of this thesis is to analyze, from an empirical point of view, both the different varieties of economic and financial crises (typological analysis) and the characteristics of the contexts that can be associated with a likely occurrence of such events, across a sample of 21 advanced economies since 1981. Consequently, we analyze both the years in which a crisis occurs and the years preceding such events (leading-context analysis, forecasting). This study contributes to the empirical literature by focusing exclusively on crises in advanced economies over the last 30 years, by considering several theoretical types of crises, and by taking into account a large number of both economic and financial explanatory variables. As part of this research, we also analyze stylized facts related to the 2007/2008 subprime turmoil and our ability to foresee crises from an epistemological perspective. Our empirical results are based on the use of binary classification trees through the CART (Classification And Regression Trees) methodology. This nonparametric and nonlinear statistical technique can handle large data sets and is suitable for identifying threshold effects and complex interactions among explanatory variables, so that a single event can be explained by several distinct contexts. In the forecasting analysis, we identify as leading indicators of economic and financial crises: the variation and volatility of both gold prices and nominal exchange rates, the current account balance (as % of GDP), and the change in the openness ratio. Regarding the typological analysis, we identify two main empirical varieties of crises. First, we highlight "global type" crises, characterized by a strong slowdown or fall in US economic activity (stressing the role and influence of the USA in global economic conditions) and low GDP growth in the countries affected by the turmoil. Second, we find that country-specific high levels of both inflation and exchange-rate volatility can be taken as evidence of "idiosyncratic type" crises.
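The CART technique named in this abstract is available in standard libraries. As a minimal, hedged illustration of how a binary classification tree isolates distinct crisis contexts, the sketch below uses scikit-learn with invented feature names patterned on the leading indicators listed above; the data, thresholds, and sample size are placeholders, not the thesis's 21-country panel.

```python
# Illustrative sketch of a CART crisis classifier (scikit-learn).
# The feature names mirror the leading indicators cited in the abstract
# (gold price variation/volatility, current account, openness ratio,
# exchange rate); the data are random placeholders, not the author's panel.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
features = [
    "gold_price_change", "gold_price_volatility",
    "current_account_pct_gdp", "openness_ratio_change",
    "fx_rate_change", "fx_rate_volatility",
]
X = rng.normal(size=(500, len(features)))   # country-year observations
y = rng.integers(0, 2, size=500)            # 1 = crisis year, 0 = calm year

# A shallow tree keeps the contexts readable: threshold effects and
# variable interactions appear as successive splits, and each leaf is
# one "context" associated with the same event.
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=25)
tree.fit(X, y)
print(export_text(tree, feature_names=features))
```

Each root-to-leaf path of the printed tree reads as one distinct combination of thresholds and variable interactions associated with the same event, which is the property the typological analysis exploits.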
324

Dinámica de tangibles e intangibles en el desarrollo local. El caso de San Juan Nuevo de Parangaricutiro

Solari Vicente, Andrés 10 April 2018 (has links)
This article explains the particular conformation and dynamics of the integrative elements of the relationship between tangible and intangible aspects in the local development process of San Juan Nuevo de Parangaricutiro (Michoacán, México). These elements include, on one hand, the historical binding factors and their connection to the way the community faces natural adversity; on the other, the creation of a participative, incipiently democratic leadership style and the role of selection in the constitution of social capital and the local endogenous nucleus. Likewise, the decisive role played in this dynamic by the intangible aspects, specifically the meta-local forces, is emphasized. The author supports the idea that local development is the result of a dynamic of feedback and mutual growth between the tangible and intangible aspects, in a sequence that starts with the maturation and accumulation of one of these aspects in the community, widening the possibilities of development. In this dynamic, the role played by the meta-local forces is decisive: operating in the background of collective memory, they link the different moments of the process over the long term and serve as factors of renewed impulse within the feedback. The author also explores the importance of public policies in the successive moments of the dynamic between tangibles and intangibles in local development, as well as the concept of reversibility in the construction of this dynamic.
325

Understanding co-movements in macro and financial variables

D'Agostino, Antonello 09 January 2007 (has links)
Over recent years, the growing availability of large datasets and improvements in computational speed have further fostered research in both macroeconomic modeling and forecasting analysis. A primary focus of these research areas is to improve model performance by exploiting the informational content of several time series. Increasing the dimension of macro models is indeed crucial for a detailed structural understanding of the economic environment, as well as for accurate forecasting analysis. As a consequence, a new generation of large-scale macro models, based on the micro-foundations of a fully specified dynamic stochastic general equilibrium set-up, has become one of the most flourishing research areas in both central banks and academia. At the same time, there has been a revival of forecasting methods dealing with many predictors, such as factor models. The central idea of factor models is to exploit co-movements among variables through a parsimonious econometric structure: a few underlying common shocks, or factors, explain most of the co-variation among variables, while the unexplained component of series movements is due to purely idiosyncratic dynamics. The generality of their framework makes factor models suitable for describing a broad variety of models in both macroeconomic and financial contexts. The recent revival of factor models stems from important developments by Stock and Watson (2002) and Forni, Hallin, Lippi and Reichlin (2000). These authors find the conditions under which certain data averages become collinear to the space spanned by the factors as the cross-section dimension becomes large. Moreover, their factor specifications allow the idiosyncratic dynamics to be mildly cross-correlated (an effect referred to as the 'approximate factor structure' by Chamberlain and Rothschild, 1983), a situation empirically verified in many applications. These findings have relevant implications, the most important being that the use of a large number of series is no longer a dimensional constraint; on the contrary, it helps to identify the factor space. This new generation of factor models has been applied in several areas of macroeconomics and finance as well as for policy evaluation, and it is consequently very likely to become a milestone in the literature on forecasting methods using many predictors. This thesis contributes to the empirical literature on factor models by proposing four original applications.

In the first chapter, the generalized dynamic factor model of Forni et al. (2002) is employed to explore the predictive content of asset returns in forecasting Consumer Price Index (CPI) inflation and the growth rate of Industrial Production (IP). The connection between stock markets and economic growth is well known: in the fundamental valuation of equity, the stock price equals the discounted future stream of expected dividends. Since future dividends are related to future growth, a revision of prices, and hence returns, should signal movements in the future growth path. Other important transmission channels, such as Tobin's q theory (Tobin, 1969), the wealth effect and capital market imperfections, have also been widely studied in this literature. I show that an aggregate index, such as the S&P500, can be misleading if used as a proxy for the informative content of the stock market as a whole. Despite the widespread wisdom of treating such an index as a leading variable, only part of the assets included in its composition lead the variables of interest; its forecasting performance may therefore be poor, leading to sceptical conclusions about the effectiveness of asset prices in forecasting macroeconomic variables. The main idea of the first essay is therefore to analyze the lead-lag structure of the assets composing the S&P500. The classification into leading, lagging and coincident variables is achieved by means of the cross-correlation function cleaned of idiosyncratic noise and short-run fluctuations. I assume that asset returns follow a factor structure; that is, they are the sum of two parts: a common part driven by a few shocks common to all assets, and an idiosyncratic part, which is asset specific. The correlation function, computed on the common part of the series, is not affected by asset-specific dynamics and should provide information only on the series driven by the same common factors. Once the leading series are identified, they are grouped within the economic sectors they belong to, and the predictive content of such aggregates in forecasting IP growth and CPI inflation is explored and compared with the forecasting power of the S&P500 composite index. The forecasting exercise is addressed as follows. First, in an autoregressive (AR) model I choose the truncation lag that minimizes the Mean Square Forecast Error (MSFE) in 11 years of out-of-sample simulations for 1, 6 and 12 steps ahead, for both the IP growth rate and CPI inflation. Second, the S&P500 is added as an explanatory variable to the previous AR specification; repeating the simulation exercise yields only very small improvements in the MSFE statistics. Third, sector averages of the leading stock-return series are added as additional explanatory variables in the benchmark regression. Remarkable improvements over the benchmark specification are achieved, especially at the one-year forecast horizon. Significant improvements are also achieved at the shorter horizons when the leading series of the technology and energy sectors are used.

The second chapter disentangles the sources of aggregate risk and measures the extent of co-movements in five European stock markets. Based on the static factor model of Stock and Watson (2002), it proposes a new method for measuring the impact of international, national and industry-specific shocks. The process of European economic and monetary integration with the advent of the EMU has been a central issue for investors and policy makers, and the number of studies on the integration of and linkages among European stock markets has grown enormously. Given their forward-looking nature, stock prices are considered a key variable for tracking developments in economic and financial markets. Measuring the extent of co-movements between European stock markets has therefore become one of the main concerns both for policy makers, who want to best shape their policy responses, and for investors, who need to adapt their hedging strategies to the new political and economic environment; an optimal portfolio allocation strategy is based on a timely identification of the factors affecting asset returns. The literature dating back to Solnik (1974) identifies national factors as the main contributors to the co-variation among stock returns, with industry factors playing a marginal role. The increasing financial and economic integration of recent years, fostered by the decline of trade barriers and greater policy coordination, should have strongly reduced the importance of national factors and increased that of global determinants, such as industry determinants. Somewhat puzzlingly, however, recent studies demonstrate that country sources are still very important, and generally more important than industry ones. This chapter tries to cast some light on these conflicting results by proposing an econometric estimation strategy that is more flexible and better suited to disentangling and measuring the impact of global and country factors. Results point to a declining influence of national determinants and an increasing influence of industry ones, while international influences remain the most important driving force of excess returns. These findings overturn the results in the literature and have important implications for strategic portfolio allocation policies, which need to be revisited and adapted to the changed financial and economic scenario.

The third chapter presents a new stylized fact that can help discriminate among alternative explanations of U.S. macroeconomic stability. The main finding is that the fall in time-series volatility is associated with a sizable decline, of the order of 30% on average, in the predictive accuracy of several widely used forecasting models, including the factor models proposed by Stock and Watson (2002). This pattern is not limited to measures of inflation but extends to several indicators of real economic activity and interest rates. The generalized fall in predictive ability after the mid-1980s is particularly pronounced for forecast horizons beyond one quarter, and it is not specific to a single method; rather, it is a common feature of all models, including those used by public and private institutions. In particular, the forecasts for output and inflation of the Fed's Greenbook and the Survey of Professional Forecasters (SPF) are significantly more accurate than a random walk only before 1985. After this date, in contrast, the hypothesis of equal predictive ability between naive random-walk forecasts and the predictions of those institutions is not rejected at any horizon, the only exception being the current quarter. The results of this chapter may also be of interest for the empirical literature on asymmetric information. Romer and Romer (2000), for instance, consider a sample ending in the early 1990s and find that the Fed produced more accurate forecasts of inflation and output than several commercial providers. The present results imply that the informational advantage of the Fed and those private forecasters is in fact limited to the 1970s and the beginning of the 1980s. During the last two decades, in contrast, no forecasting model is better than "tossing a coin" beyond the first-quarter horizon, implying that on average uninformed economic agents can anticipate future macroeconomic developments just as effectively. On the other hand, econometric models and economists' judgement remain quite helpful for forecasts over the very short horizon relevant for conjunctural analysis. Moreover, the literature on forecasting methods, recently surveyed by Stock and Watson (2005), has devoted a great deal of attention to identifying the best model for predicting inflation and output, but the majority of studies are based on full-sample periods. The main findings of the chapter reveal that most of the full-sample predictability of U.S. macroeconomic series arises from the years before 1985: long time series appear to attach a far larger weight to the earlier sub-sample, which is characterized by higher volatility of inflation and output. The results also suggest caution in evaluating the performance of alternative forecasting models on the basis of a pool of different sub-periods, as full-sample analyses are likely to miss parameter instability.

The fourth chapter performs a detailed forecast comparison between the static factor model of Stock and Watson (2002) (SW) and the dynamic factor model of Forni et al. (2005) (FHLR). It is not the first work to perform such an evaluation: Boivin and Ng (2005) focus on a very similar problem, while Stock and Watson (2005) compare the performance of a larger class of predictors. The SW and FHLR methods essentially differ in the computation of the forecast of the common component; in particular, they differ in the estimation of the factor space and in the way projections onto this space are performed. In SW, the factors are estimated by static Principal Components (PC) of the sample covariance matrix, and the forecast of the common component is simply the projection of the predicted variable on the factors. FHLR propose efficiency improvements in two directions. First, they estimate the common factors by Generalized Principal Components (GPC), in which observations are weighted according to their signal-to-noise ratio. Second, they impose the constraints implied by the dynamic factor structure when the variables of interest are projected on the common factors; specifically, they take into account the leading and lagging relations across series by means of principal components in the frequency domain, which allows for an efficient aggregation of variables that may be out of phase. Whether these efficiency improvements help forecasting in a finite sample is, however, an empirical question on which the literature has not yet reached a consensus. On the one hand, Stock and Watson (2005) show that both methods perform similarly (although they focus on the weighting of the idiosyncratic component and not on the dynamic restrictions); on the other, Boivin and Ng (2005) show that SW's method largely outperforms FHLR's and, in particular, conjecture that the dynamic restrictions implied by the method are harmful for the forecast accuracy of the model. This chapter tries to shed some new light on these conflicting results. It focuses on the Industrial Production index (IP) and the Consumer Price Index (CPI) and bases the evaluation on a simulated out-of-sample forecasting exercise. The data set, borrowed from Stock and Watson (2002), consists of 146 monthly series for the US economy spanning 1959 to 1999. In order to isolate and evaluate specific characteristics of the methods, a procedure is designed in which the two non-parametric approaches are nested in a common framework. In addition, for both versions of the factor-model forecasts, the chapter studies the contribution of the idiosyncratic component to the forecast, investigates other non-core aspects of the models (robustness with respect to the choice of the number of factors and to variable transformations), and performs a sub-sample analysis of the factor-based forecasts. The purpose of this exercise is to design an experiment for assessing the contribution of the core characteristics of the different models to forecasting performance, while discussing auxiliary issues; hopefully it may also serve as a guide for practitioners in the field. As in Stock and Watson (2005), results show that the efficiency improvements due to the weighting of the idiosyncratic components do not lead to significantly more accurate forecasts; but, in contrast to Boivin and Ng (2005), the dynamic restrictions imposed by the procedure of Forni et al. (2005) are shown not to be harmful for predictability. The main conclusion is that the two methods perform similarly and produce highly collinear forecasts. / Doctorat en sciences économiques, Orientation économie
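As a rough sketch of the diffusion-index idea behind the Stock and Watson (2002) static factor model discussed above (not the thesis's actual implementation; the panel and target below are random placeholders standing in for the 146-series dataset), the factors can be estimated by principal components and the target variable projected on them:

```python
# Minimal diffusion-index forecast in the spirit of Stock and Watson (2002):
# factors are the principal components of a standardized panel, and the
# target is regressed h steps ahead on the factors plus its own lag.
# Panel dimensions and data are placeholders, not the chapter's dataset.
import numpy as np

rng = np.random.default_rng(1)
T, N, n_factors, h = 480, 146, 3, 12       # months, series, factors, horizon
panel = rng.normal(size=(T, N))            # stand-in for the macro panel
target = rng.normal(size=T)                # stand-in for IP growth or CPI inflation

# Standardize, then estimate factors by static principal components (SVD).
Z = (panel - panel.mean(0)) / panel.std(0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
F = Z @ Vt[:n_factors].T                   # T x n_factors factor estimates

# OLS of y_{t+h} on [1, F_t, y_t]; the fitted coefficients applied to the
# latest observation give the h-step-ahead forecast.
X = np.column_stack([np.ones(T - h), F[:-h], target[:-h]])
beta, *_ = np.linalg.lstsq(X, target[h:], rcond=None)
x_now = np.concatenate([[1.0], F[-1], [target[-1]]])
print("h-step-ahead forecast:", x_now @ beta)
```

FHLR's generalized principal components would instead weight the series by their signal-to-noise ratios and exploit frequency-domain information; the sketch covers only the SW-style static step.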
326

The Kodály Method and Tonal Harmony: An Issue of Post-secondary Pedagogical Compatibility

Penny, Lori Lynn January 2012 (has links)
This study explores the topic of music theory pedagogy in conjunction with the Kodály concept of music education and its North-American adaptation by Lois Choksy. It investigates the compatibility of the Kodály Method with post-secondary instruction in tonal harmony, using a theoretical framework derived from Kodály's methodology and implemented as a teaching strategy for the dominant-seventh chord. The customary presentation of this concept is authenticated with an empirical case study involving four university professors. Subsequently, Kodály's four-step instructional process informs a comparative analysis of five university-level textbooks that evaluates the sequential placement of V7, examines the procedure by which it is presented, and considers the inclusion of correlated musical excerpts. Although divergent from traditional approaches to tonal harmony, Kodály's principles and practices are pedagogically effective: by progressing from concrete to abstract and preceding symbolization with extensive musical experience, conceptual understandings are not only intellectualized but also developed and internalized.
327

Etude de l'influence de la dilution du combustible et de l'oxydant dans le processus de décrochage de flammes-jet non-prémélangées et l'émission de polluants / Study of the influence of air-side and fuel-side dilution on the lifting process of an attached non-premixed jet-flame and on pollutant emissions

Marin Ospina, Yohan Manuel 17 November 2016 (has links)
Understanding the main mechanisms piloting non-premixed jet-flame stability is a key point in characterizing the operating modes of industrial burners working in diluted combustion regimes. This work focuses on the experimental study of the influence of air-side and fuel-side dilution on the lifting process and pollutant emissions of a non-premixed jet flame attached to the burner. The investigation is based on a large number of experiments combining the following conditions: i) carbon dioxide (CO2), nitrogen (N2), argon (Ar) or water vapor (H2Ov) used as diluents; ii) two dilution configurations: air-side or fuel-side dilution; iii) pairs of air and fuel velocities covering the entire flame hysteresis domain, from the laminar-jet to the turbulent-jet regime. This allows the influence of the effects intrinsic to the nature of the diluent to be discriminated from that of the aerodynamics of the reactants (fuel and oxidant) in attached-flame stability. In particular, the behavioral differences in the flame response to air-side versus fuel-side dilution are analyzed. These two configurations differ by two mixing effects which are independent of the combustion reaction and which are significant when the fuel is diluted but negligible when the air is diluted: i) an effect due to the change in the stoichiometric mixture fraction; ii) a mechanical impact induced by the addition of matter (the diluents), producing an increase in the bulk velocity of the reactants. The study is composed of three parts. First, the global flame response to dilution is analyzed on the basis of the lifting limits, defined as the critical molar fractions of the diluents in the fuel or in the oxidant measured at liftoff. The fuel Peclet number, Pef, is identified as the dimensionless number that puts these lifting limits in a homothetic order for all diluents. This homothetic behavior allows the introduction of two affinity coefficients, Kd,ox for air-side dilution and Kd,f for fuel-side dilution, defined as the ratio between the lifting limit obtained with a diluent d and that obtained with CO2, at constant Pef. These coefficients allow two generic polynomial laws to be established describing the flame lifting limits for all the diluents tested and over the whole range of aerodynamic conditions studied. Indeed, Kd,ox and Kd,f encompass all the physico-chemical effects of a diluent (pure dilution, thermal effects, transport properties, chemistry), together with the mechanical impacts, that affect flame stability, and they make it possible to obtain self-similarity laws for the lifting limits of any chemically weak diluent from the results obtained in this work. Second, a local and detailed study of the dilution-induced lifting process is carried out. This is based on the flame-leading-edge approach, which describes attached-flame stability as the result of a balance, at the flame base, between the incoming flow velocity and the flame propagation velocity. In order to demonstrate the link between this approach and flame stability, an extensive analysis of the flame-base characteristics (location, CH* emission intensity and velocity field) is performed. The results confirm the pertinence of the propagative flame-leading-edge approach as the mechanism describing attached-flame stability under dilution. Finally, a study characterizing the influence of both the diluent nature and the dilution configuration (air or fuel) on pollutant emissions (soot, NOx and CO) is presented.
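The abstract does not give the exact definitions behind Pef or the affinity coefficients, so the following back-of-the-envelope sketch assumes the standard convective-to-diffusive form Pe = U d / D and the stated ratio-to-CO2 definition of Kd; every number is an illustrative placeholder, not a measurement from the thesis.

```python
# Hedged sketch: a fuel Peclet number and affinity coefficients K_d.
# Assumes the common convective/diffusive form for Pe and the abstract's
# ratio-to-CO2 definition for K_d; all values are placeholders.
U_f = 12.0      # m/s, fuel jet bulk velocity (placeholder)
d = 4e-3        # m, burner inner diameter (placeholder)
D = 2.2e-5      # m^2/s, fuel-air diffusivity (placeholder)

Pe_f = U_f * d / D
print(f"Pe_f = {Pe_f:.0f}")

# Affinity coefficient at fixed Pe_f: critical diluent molar fraction at
# liftoff for each diluent, normalized by the CO2 value (K = 1 for CO2).
x_crit = {"CO2": 0.18, "N2": 0.28, "Ar": 0.33, "H2Ov": 0.21}  # placeholders
K_d = {name: x / x_crit["CO2"] for name, x in x_crit.items()}
print(K_d)
```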
328

Improving the management of the professional development of lecturers at a selected technical and vocational education and training (TVET) college

Motaung, Motselisi Rose 08 1900 (has links)
TVET colleges in South Africa contribute to the social and economic development of the country. For this reason, TVET colleges are expected to provide quality teaching and learning, but this core business of the colleges has been hampered by a lack of professional development, or by irrelevant professional development, of lecturers. The study aims to investigate the relevance of the professional development offered at a selected TVET college in the Free State province and how it can be improved. The study employed a qualitative research design involving 22 participants: two executive managers, two campus managers, four heads of department, six senior lecturers and eight lecturers, selected using purposive sampling. Face-to-face semi-structured interviews were conducted to collect data from the managers, and two focus-group interviews were used to collect data from the lecturers. The findings revealed a need for relevant professional development of lecturers at the selected college. The most important recommendation, with the potential to change the situation at the studied college, is that the planning, organising, leading and control of professional development need to be done more professionally. If managers use these management functions properly to manage lecturers' professional development, lecturers will be in a better position to provide quality teaching. Other relevant recommendations are also provided. / Educational Management and Leadership / M. Ed. (Education Management)
329

高等教育における分野融合アプローチのための要件 : 工学系人材養成に着目して / Requirements for interdisciplinary-integration approaches in higher education: Focusing on the development of engineering human resources

竹永 啓悟, Keigo Takenaga 20 March 2022 (has links)
This is basic research examining the possibility of educational approaches in graduate schools that integrate the humanities and sciences for the development of engineering human resources. Engineering human resources are expected to acquire engineering design ability, engineering ethics, and global competencies. This study analyzes the case of the "Program for Leading Graduate Schools" as a program contributing to this goal. It concludes that it is essential to make the learning outcomes for the above three elements visible, and to set the level of "integration" of the humanities and sciences in student learning as the basis for educational goals and student evaluation. / Doctor of Philosophy in Education and Culture / Doshisha University
330

An active chain process of self-leadership : Dynamically practising self-leading strategies for sustainability

Amilon, Mia, Nguyen, Stephanie January 2022 (has links)
Title: An active chain process of self-leadership: Dynamically practising self-leading strategies for sustainability.
Keywords: Active and dynamic; Chain process of self-leadership; Self-leadership strategies.
Background: Sustainability is important and of current interest, requiring all organisations to be well-functioning, committed to sustainability and able to make strategic decisions for their long-term sustainability. Organisations therefore benefit from training employees to become self-leaders, as this yields beneficial outcomes for organisations and, by extension, for society at large.
Research question: Why do individuals succeed in maintaining and practising an active chain process of self-leadership?
Purpose: This study aims to understand why individuals succeed in maintaining an active chain process of self-leadership by dynamically practising self-leadership strategies, whereby they continue to be self-aware, manage and lead themselves, practise self-leadership strategies, attain self-efficacy, and achieve beneficial outputs that in turn contribute to a more efficient and long-term sustainable society. To clarify what activates the chain process of self-leadership and creates its dynamic, the authors developed a summarising model (see model 5.1 in chapter 5).
Method: The study is qualitative with an abductive research approach. Empirical data were collected through semi-structured interviews in a collective case-study design with ten informants who practise self-leadership, and analysed using the Gioia method.
Findings: The chain process of self-leadership is holistic; what keeps it active are feelings of well-being, competence and efficacy, as well as succeeding, contributing to a greater good and seeing things in a greater context. Being reminded and followed up regularly is significant.
Paper type: Master's thesis
