461

Câmbio e preços no Brasil: uma análise do período 1995-2006 / Exchange rate and prices in Brazil: an analysis of the period 1995-2006

Habe, James Hiroshi 27 April 2009 (has links)
Price stability brought a new reality to the Brazilian economy. What relationship between prices and the exchange rate was necessary to achieve this new scenario? The exchange rate anchor was the instrument used to reach price stability. In 1995, the managed exchange rate regime kept prices tied to the exchange rate. In 1999, the exchange rate anchor was replaced by inflation targeting. This change of regime could have altered the relationship between exchange rates and prices. Econometric tests using monthly exchange rate data provided evidence of a connection between exchange rates and prices. From 1999 onward, the relationship with consumer prices (IPCA) operated through wholesale prices (IPA). Separating the IPCA into two groups showed a larger exchange rate influence on monitored prices than on market prices, raising the IPCA and affecting monetary policy decisions. / A estabilidade nos preços trouxe um novo cenário a economia brasileira e qual foi a relação entre preços e câmbio para atingir a estabilidade? A âncora cambial foi o instrumento para a estabilidade nos preços. A adoção do regime de câmbio administrado, em 1995, manteve os preços atrelados ao câmbio. Em 1999, houve a mudança da âncora cambial para as metas de inflação. A mudança de regime cambial poderia ter alterado a relação entre câmbio e preços no atacado e ao consumidor. Os testes econométricos, utilizando dados de variação cambial mensal, comprovaram a existência da relação entre câmbio e os preços. E, a partir de 1999, a relação existente entre os preços ao consumidor (IPCA) ocorreu através dos preços no atacado (IPA). A separação do IPCA em dois grupos permitiu verificar uma maior influência cambial sobre os preços monitorados do que os preços livres, elevando o valor do IPCA e afetando a decisão da política monetária
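The pass-through relationship described in this abstract is commonly gauged with a distributed-lag regression of price changes on exchange-rate changes. The sketch below is only a minimal illustration of that idea; the variable names, the single lag, and the synthetic data are assumptions, not taken from the thesis.

```python
# Minimal sketch of an exchange-rate pass-through regression: monthly wholesale
# price inflation regressed on current and one-month-lagged exchange-rate changes.
# Variable names, lag length, and the synthetic data are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 144                                    # 12 years of monthly observations
d_fx = rng.normal(0.0, 0.03, n)            # monthly change in the log exchange rate
d_ipa = 0.4 * d_fx + 0.2 * np.roll(d_fx, 1) + rng.normal(0.0, 0.01, n)

# Drop the first observation, whose lag wraps around, then regress.
X = sm.add_constant(np.column_stack([d_fx, np.roll(d_fx, 1)])[1:])
fit = sm.OLS(d_ipa[1:], X).fit()

# The sum of the exchange-rate coefficients approximates the cumulative
# pass-through over the two-month horizon.
print(fit.params, "pass-through ~", fit.params[1:].sum())
```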
462

Recherche d'une description optimum des sources et systèmes vibroacoustiques pour la simulation du bruit de passage des véhicules automobiles / Research for an optimal description of vibro-acoustic sources and systems for the simulation of vehicle pass-by noise

Hamdad, Hichem 20 December 2018 (has links)
Pour commercialiser un véhicule, les constructeurs automobiles doivent se soumettre à la réglementation sur le bruit extérieur. Le règlement de la commission économique pour l'Europe, ECE R51.03, spécifie les niveaux admissibles que peut rayonner un véhicule automobile en roulage. Ce règlement est entré en vigueur depuis le 1er juillet 2016 pour remplacer l'ancien règlement ECE R51.02 (changement de méthode d’essai et sévérisation des niveaux de bruit admissibles). La diminution drastique des niveaux sonores tolérés se fait en trois étapes : passage de 74 dB (A) sous l'ancien règlement, à 68 dB (A) en 2024. Par conséquent, les constructeurs ainsi que les fournisseurs automobiles seront confrontés à un grand défi pour atteindre cet objectif. Ainsi, l'objectif de ces travaux de thèse consiste à développer une aide à la modélisation totale du bruit de passage d’un véhicule, comme le préconisent les essais réglementaires. Le but est de construire des modèles optimaux pour prévoir et évaluer avec précision le bruit que peut rayonner un véhicule en roulage plus tôt dans son cycle de développement, i.e. avant l'étape d'industrialisation. Il faut alors se placer dans la recherche d'un compromis entre précision des estimations, sensibilité aux paramètres, robustesse de la méthode et efficacité numérique. / Currently, to put a vehicle on market, car manufacturers must comply to a certification test of exterior noise. The regulation of the United Nations Economic Commission for Europe, ECE R51-03, specifies permissible levels a rolling motor vehicle can emit. This regulation is applied since July 1st, 2016, to replace the old regulation ECE R51-02 (test method change and tightening of permissible levels). The drastic reduction in noise levels will be done in 3 steps: from 74 dB (A) under the old regulation to 68 dB (A) in 2024. Therefore, manufacturers as well as their suppliers will face a great challenge to achieve this goal. The objective of this thesis is to develop an aid to the modeling of the pass-by noise of a vehicle, as called for in regulatory testing. The goal is to predict and evaluate accurately the noise emissions earlier in the vehicle development cycle, i.e. before the industrialization stage. We must then seek a trade-off between accuracy of estimates, sensitivity to parameters, robustness of the method and numerical efficiency.
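Pass-by noise simulation ultimately combines the contributions of several sources (for example tyre/road, powertrain, exhaust) at the certification microphone, and sound levels combine energetically rather than arithmetically. The sketch below only illustrates that summation; the source levels are invented numbers, not results from the thesis.

```python
# Energetic summation of source contributions in dB(A): a minimal sketch.
# The individual source levels below are illustrative, not thesis data.
import math

contributions_dBA = {"tyre/road": 66.0, "powertrain": 63.0, "exhaust": 58.0}

total = 10.0 * math.log10(sum(10.0 ** (L / 10.0) for L in contributions_dBA.values()))
print(f"overall pass-by level ~ {total:.1f} dB(A)")  # about 68.2 dB(A) here
```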
463

Broadband Phase Shifter Realization With Surface Micromachined Lumped Components

Tokgoz, Korkut Kaan 01 September 2012 (has links) (PDF)
Phase shifters are among the most important building blocks of microwave and millimeter-wave applications, especially for communications and radar, where they steer the main beam for electronic scanning. This thesis covers all stages from theoretical design to measurement of the phase shifters. In particular, an all-pass network phase shifter configuration is used to achieve broadband and ultra-wideband differential phase characteristics. For these reasons, 1 to 2 GHz, 2 to 4 GHz, and 3 to 6 GHz 4-bit phase shifters with 22.5° phase resolution are designed, simulated, fabricated, and measured using surface micromachined lumped components. The basic building blocks of the phase shifters, i.e., surface micromachined lumped components, square planar spiral inductors and metal-insulator-metal capacitors, are designed with EM simulation and lumped equivalent model extraction. The designed square planar spiral inductors are validated through fabrication and measurement, and very low error, below 1%, is observed between the designs and the fabricated samples. Using this knowledge of lumped elements, the phase shifters are then designed with surface micromachined lumped components and fabricated using an in-house technology provided by the METU-MEMS facilities, RF MEMS group. Low rms phase error and good return and insertion loss are targeted and achieved. In addition to the main work of this thesis, a generalized theoretical calculation method for all-pass network phase shifters with 2n-1 stages is presented for the first time in the literature. A different, new, broadband, combined phase shifter topology using two-stage all-pass filters is presented, and its implementation is shown to be practical for a 3 to 6 GHz combined 5.625° and 11.25° phase shifter. A new approach for stage numbers other than powers of 2 is indicated, which differs from what has already been presented in the literature. Example practical implementation results are provided for a three-stage 4-bit 1 to 6 GHz phase shifter. Also, a small improvement in the SRF of high-inductance inductors is achieved by mitering the corners of the square planar spiral inductors. Comparison of the measured data between the normal and mitered inductors shows that the first SRF of the inductors increases by about 80 MHz and the second SRF by about 200 MHz.
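For a 4-bit phase shifter with 22.5° resolution, the ideal states are the sixteen multiples of 22.5°, and the rms phase error quoted for such designs is typically computed against those ideal states, roughly as in the sketch below (the "measured" phases are invented for illustration).

```python
# Ideal states of a 4-bit, 22.5-degree-resolution phase shifter and the rms
# phase error of a set of invented "measured" states at a single frequency.
import numpy as np

bits = 4
lsb = 360.0 / 2**bits                      # 22.5 degrees
ideal = np.arange(2**bits) * lsb           # 0, 22.5, ..., 337.5 degrees

rng = np.random.default_rng(1)
measured = ideal + rng.normal(0.0, 1.5, ideal.size)   # illustrative errors only

rms_error = np.sqrt(np.mean((measured - ideal) ** 2))
print(f"LSB = {lsb} deg, rms phase error ~ {rms_error:.2f} deg")
```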
464

匯率與總體經濟關聯性之實證研究-以中國大陸為例 / The empirical research on the correlation between Foreign exchange rates and Macroeconomics, taking Mainland China as an example

李素英, Lee, Su Ying Unknown Date (has links)
本研究係探討匯率與總體經濟之關聯性,以中國大陸1996第一季至 2013年第一季之總體經濟變數,共計樣本數為69筆季資料。先以1996第一季至 2013年第一季全期數據進行實證分析。再以2005年7月為分界點,分為1996年第一季至2005年第二季及2005年第三季至2013年第一季數據分別進行實證分析。 本論文就REER、GDP、CPI、M2、UNEMP、CHIBOR、FDI、OPEN等總體經濟變數,以單根檢定及建構向量自我迴歸模型進行實證分析,並以Granger因果關係檢定、衝擊反應分析及預測誤差變異數分解,以了解匯率與總體經濟相互間之關係。 實證結果發現,中國大陸匯率與總體經濟間的關係自2005年7月21日匯率改革後逐漸增強,但整體言之匯率與總體經濟間之傳導能力仍然不大,人民幣匯率的變動主要受其自身影響較多,受總體經濟變數的相互影響較小,顯示其外匯市場的開放程度與一個真正開放的經濟體還是有些許差距。 / This research examines the correlation between foreign exchange rates and macroeconomics by using the data of economic variables of China from the 1st quarter of 1996 to the 1st quarter of 2013. The sample contains 69 quarterly data during the entire period, while the reform of Chinese exchange rate on 21st July 2005 is a crucial division. In order to find the correlation between foreign exchange rates and macroeconomics, the research examines the economic variables such as REER, GDP, CPI, M2, UNEMP, CHIBOR, FDI, and OPEN by using unit root test, vector autoregression model, Granger causality test, impulse response function and variance decomposition impulse response function. The result of the tests indicates that after the reform of Chinese exchange rate on 21st July 2005, the correlation between exchange rates and macroeconomics has been enhanced, but the connection is not prominent. In other words, the fluctuation of Renminbi is mainly affected by the nation’s policy instead of its macroeconomic factors. Hence, the openness of the Chinese foreign exchange market is still distant from a real open economy.
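The workflow named in this abstract (unit-root tests, a VAR, Granger causality, impulse responses, and variance decomposition) can be assembled along the following lines with statsmodels; the column names, lag order, and the assumed DataFrame are placeholders, not the thesis's specification.

```python
# Sketch of the VAR-based workflow named above, using statsmodels. `df` is
# assumed to be a quarterly DataFrame with columns such as ['REER', 'GDP',
# 'CPI', 'M2']; the names and lag order are illustrative placeholders.
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.api import VAR

def var_workflow(df: pd.DataFrame, lags: int = 4):
    # 1. Unit-root (ADF) test on each series.
    for col in df.columns:
        stat, pvalue, *_ = adfuller(df[col].dropna())
        print(f"ADF {col}: stat={stat:.2f}, p={pvalue:.3f}")

    # 2. Fit the VAR and run a Granger causality test (does CPI help predict REER?).
    results = VAR(df.dropna()).fit(lags)
    print(results.test_causality("REER", ["CPI"], kind="f").summary())

    # 3. Impulse responses and forecast-error variance decomposition.
    irf = results.irf(10)
    fevd = results.fevd(10)
    return results, irf, fevd
```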
465

Accelerating microarchitectural simulation via statistical sampling principles

Bryan, Paul David 05 December 2012 (has links)
The design and evaluation of computer systems rely heavily upon simulation. Simulation is also a major bottleneck in the iterative design process. Applications that may be executed natively on physical systems in a matter of minutes may take weeks or months to simulate. As designs incorporate increasingly higher numbers of processor cores, it is expected that the time required to simulate future systems will become an even greater issue. Simulation exhibits a tradeoff between speed and accuracy. By basing experimental procedures upon known statistical methods, the simulation of systems may be dramatically accelerated while retaining reliable methods to estimate error. This thesis focuses on the acceleration of simulation through statistical processes. The first two techniques discussed in this thesis focus on accelerating single-threaded simulation via cluster sampling. Cluster sampling extracts multiple groups of contiguous population elements to form a sample. This thesis introduces techniques to reduce sampling and non-sampling bias components, which must be reduced for sample measurements to be reliable. Non-sampling bias is reduced through the Reverse State Reconstruction algorithm, which removes ineffectual instructions from the skipped instruction stream between simulated clusters. Sampling bias is reduced via the Single Pass Sampling Regimen Design Process, which guides the user towards selected representative sampling regimens. Unfortunately, the extension of cluster sampling to include multi-threaded architectures is non-trivial and raises many interesting challenges. Approaches to overcoming these challenges are discussed. This thesis also introduces thread skew, a useful metric that quantitatively measures the non-sampling bias associated with divergent thread progressions at the beginning of a sampling unit. Finally, the Barrier Interval Simulation method is discussed as a technique to dramatically decrease the simulation times of certain classes of multi-threaded programs. It segments a program into discrete intervals, separated by barriers, which are leveraged to avoid many of the challenges that prevent multi-threaded sampling.
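As a rough illustration of the cluster-sampling idea described above, the sketch below draws a few contiguous clusters from a long synthetic per-interval CPI trace and forms a mean estimate with a standard error; the trace and the cluster parameters are invented, and the sketch does not reproduce the thesis's regimen-design or state-reconstruction algorithms.

```python
# Cluster sampling over a synthetic performance trace: contiguous groups of
# population elements form the sample. Trace and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000                                    # per-interval CPI "population"
population = 1.0 + 0.2 * np.sin(np.linspace(0, 40, n)) + rng.normal(0, 0.05, n)

n_clusters, cluster_len = 30, 1_000
starts = rng.integers(0, n - cluster_len, n_clusters)
cluster_means = np.array([population[s:s + cluster_len].mean() for s in starts])

estimate = cluster_means.mean()
std_err = cluster_means.std(ddof=1) / np.sqrt(n_clusters)
print(f"estimated CPI = {estimate:.4f} +/- {1.96 * std_err:.4f} (95% CI)")
```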
466

The study of Optimal Asset Allocation of Banks after Asset-backed Securitization and Writing off NPL with Securitization

Yen, Tsung-Yu 30 May 2003 (has links)
In the financial industry, a typical indirect-finance institution attracts deposits and inter-bank loans or issues negotiable certificates of deposit and bonds. After collecting money from capital-surplus units through an auditing procedure, it lends to those who need funds, acting as a financial intermediary in the market. Financial institutions such as banks play this intermediary role by channelling funds to enterprises or individuals. In doing so, banks take on the full funding-liquidity risk in exchange for their main source of profitability. If a bank fails to manage this risk, or cannot cope with a rapidly changing financial environment, it may run into difficulty and trigger a serious financial crisis. Compared with large international financial institutions, our financial institutions hold a large amount of NPL (non-performing loans; in Taiwan these mostly originate from mortgages), which not only lowers the liquidity of funds and lengthens payment duration but also raises the operational risk of being unable to recover financial assets. Asset quality has also deteriorated rapidly. These phenomena raise the operational risk of financial institutions and affect the stability of the financial system and the development of the financial environment. As the financial environment changes, developed countries have mostly adopted structured finance or financial asset securitization. The purpose of financial asset securitization is, in general, to raise funds for the originator, who is the most important participant in the securitization process. The originator pools and repackages assets that generate cash flows into small-denomination securities and sells them to investors, so that the originator does not have to wait until maturity or buy back those securities. Financial asset securitization therefore helps financial institutions improve asset/liability management, spread asset risk, and increase the ratio of equity to assets. At the same time, it improves the effectiveness and efficiency of a financial institution's operations and opens up the funding market. Mortgage securitization can raise banks' capital adequacy and current ratios. Through asset securitization, originators enjoy higher asset liquidity, lower funding costs, and improved capital ratios, while investors can use mortgage-backed securities to diversify their portfolios, improve liquidity, and enhance yields. For originators, securitization not only lowers the cost of capital and increases net profit but also enhances cash liquidity and balances the asset structure. Asset-backed securitization has been practised in the USA for years; it has effectively controlled the NPL (non-performing loan) problem and stabilized financial management. Using an optimal asset allocation model under financial asset securitization, this thesis reaches the following findings: 1. The funding supply in the financial market shows multiple effects after banks securitize financial assets; in the initial stage of securitization, banks reduce their risky assets and later rebuild them to the original size. 2. After financial asset securitization, the capital adequacy ratio rises at first and then returns to its normal level. 3. Under the assumption that financial asset securitization creates no capital gain or loss, a bank's profitability falls in the initial stage and then increases dramatically later. 4. After taking risk into account, this research finds that securitization steepens the Capital Allocation Line.
This means that each unit of risk taken is compensated with a higher return, improving banking efficiency and profitability. Securitization provides a groundbreaking tool to increase profitability and avoid risk. Under an MBS structure, the commissions and fees, which carry essentially no risk, are a major and stable source of income for the bank. On the other hand, the successful implementation of the US RTC (Resolution Trust Corporation) made another contribution to resolving NPL. In sum, financial asset securitization not only improves the efficiency of financial institutions and promotes more balanced capital markets but also helps avoid financial risk in the banking system. At present, the prime theme for the banking sector should be how to maintain sound operations by strengthening credit risk management and restructuring asset quality. Introducing a system of successful external professional partners is another way to deal with NPL problems.
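Finding 2 above (the capital adequacy ratio rising immediately after securitization) follows from the definition of the ratio: moving risk-weighted assets off the balance sheet shrinks the denominator. The toy figures and the 50% mortgage risk weight below are illustrative assumptions, not numbers from the thesis.

```python
# Toy illustration: securitizing mortgages moves them off the balance sheet,
# lowers risk-weighted assets (RWA) and so raises the capital adequacy ratio.
# All figures and the 50% mortgage risk weight are illustrative assumptions.
capital = 8.0                        # eligible capital
mortgages, other_assets = 60.0, 80.0
securitized = 40.0                   # mortgages moved off the balance sheet

rwa_before = 0.5 * mortgages + 1.0 * other_assets                  # 110.0
rwa_after = 0.5 * (mortgages - securitized) + 1.0 * other_assets   # 90.0

print(f"CAR before securitization: {capital / rwa_before:.1%}")  # ~7.3%
print(f"CAR after securitization:  {capital / rwa_after:.1%}")   # ~8.9%
```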
467

Autonomous Orbit Estimation For Near Earth Satellites Using Horizon Scanners

Nagarajan, N 07 1900 (has links)
Autonomous navigation is the determination of the satellite's position and velocity vectors onboard the satellite, using the measurements available onboard. The orbital information of a satellite needs to be obtained to support different housekeeping operations such as routine tracking for health monitoring, payload data processing and annotation, orbit manoeuvre planning, and prediction of intrusion into various sensors' fields of view by celestial bodies like the Sun, Moon, etc. Determination of the satellite's orbital parameters is done in a number of ways using a variety of measurements. These measurements may originate from ground-based systems, as range and range-rate measurements; from another satellite, as in the case of GPS (Global Positioning System) and TDRSS (Tracking and Data Relay Satellite System); or from the same satellite, by using sensors like the horizon sensor, sun sensor, star tracker, landmark tracker, etc. Depending upon the measurement errors, sampling rates, and adequacy of the estimation scheme, the navigation accuracy can be anywhere in the range of 10 m to 10 km in absolute location. A wide variety of tracking sensors have been proposed in the literature for autonomous navigation. They are broadly classified as (1) satellite-satellite tracking, (2) ground-satellite tracking, and (3) fully autonomous tracking. Of the various navigation sensors, it may be cost effective to use existing onboard sensors which are well proven in space. Hence, in the current thesis, the horizon scanner is employed as the primary navigation sensor. It has been shown in the literature that by using horizon sensors and gyros, high-accuracy pointing of the order of 0.01 - 0.03 deg can be achieved in the case of low earth orbits. Motivated by this fact, the current thesis deals with autonomous orbit determination using measurements from the horizon sensors, with the assumption that the attitude is known to the above-quoted accuracies. The horizon scanners are mounted on either side of the yaw axis in the pitch-yaw plane at an angle of 70 deg with respect to the yaw axis. The field of view (FOV) moves about the scanner axis on a cone of 45 deg half-cone angle. During each scan, the FOV generates two horizon points, one at the space-Earth entry and the other at the Earth-space exit. The horizon points, therefore, lie on the edge of the Earth disc seen by the satellite. For a spherical earth, a minimum of three such horizon points are needed to estimate the angular radius and the center of the circular horizon disc. Since a total of four horizon points are available from a pair of scanners, they can be used to extract the satellite-earth distance and direction. These horizon points are corrupted by noise due to uncertainties in the Earth's radiation pattern, the detector mechanism, and the truncation and round-off errors due to digitisation of the measurements. Owing to the finite spin rate of the scanning mechanism, the measurements are available at discrete time intervals. Thus a filtering algorithm with appropriate state dynamics becomes essential to handle the noise in the measurements, to obtain the best estimate, and to propagate the state between the measurements. The orbit of a low earth satellite can be represented by either a state vector (position and velocity vectors in the inertial frame) or Keplerian elements. The choice depends upon the available processors, functions and the end use of the estimated orbit information.
It is shown in the thesis that position and velocity vectors in the inertial frame, or the position vector in the local reference frame, result in a simplified state representation. By using the f and g series method for inertial position and velocity, the state propagation is achieved in linear form, i.e. X_{k+1} = A X_k, where X is the state (position, velocity) and A the state transition matrix derived from the 'f' and 'g' series. The configuration of a 3-axis stabilised spacecraft with two horizon scanners is used to simulate the measurements. As a step towards establishing the feasibility of extracting the orbital parameters, the governing equations are formulated to compute the satellite-earth vector from the four horizon points generated by a pair of horizon scanners in the presence of measurement noise. Using these derived satellite-earth vectors as measurements, Kalman filter equations are developed, where both the state and measurement equations are linear. Based on simulations, it is shown that a position accuracy of about 2 km can be achieved. Additionally, the effect of sudden disturbances, like substantial slewing of the solar panels prior to and after payload operations, is also analysed. It is shown that a relatively simple low-pass filter (LPF) in the measurement loop with a cut-off frequency of 10 Wo (Wo = orbital frequency) effectively suppresses the high-frequency effects from sudden disturbances which otherwise camouflage the navigational information content of the signal. The Kalman filter can then continue to estimate the orbit with the same kind of accuracy as before, without recourse to re-tuning of the covariance matrices. Having established the feasibility of extracting the orbit information, the next step is to treat the measurements in their original, non-linear form. The entry or exit timing pulses generated by the scanner, when multiplied by the scan rate, yield entry or exit azimuth angles in the scanner frame of reference, which in turn represent an effective measurement variable. These azimuth angles are obtained as inverse trigonometric functions of the satellite-earth vector. Thus the horizon scanner measurements are non-linear functions of the orbital state. The analytical equations for the horizon points as seen in the body frame are derived, first for the spherical earth case. To account for the oblate shape of the earth, a simple one-step correction algorithm is developed to calculate the horizon points. The horizon points calculated from this simple algorithm match well with those from the accurate model, within a bound of 5%. Since the horizon points (measurements) are non-linear functions of the state, an Extended Kalman Filter (EKF) is employed for state estimation. Through various simulation runs, it is observed that the along-track state has poor observability when the four horizon points are treated as measurements in their original form, as against the derived satellite-earth vector in the earlier strategy. This is also substantiated by means of the condition number of the observability matrix. In order to examine this problem in detail, the observability of the three modes, namely the along-track, radial, and cross-track components (i.e. the local orbital frame of reference), is analysed. This difficulty in observability is obviated when an additional sensor is used in the roll-yaw plane. Subsequently, the simulation studies are carried out with two scanners in the pitch-yaw plane and one scanner in the roll-yaw plane (i.e.
a total of 6 horizon points at each time). Based on the simulations, it is shown that the achievable accuracy in absolute position is about 2 km. Since the scanner in the roll-yaw plane is susceptible to dazzling by the Sun, the effect of data breaks due to sensor inhibition is also analysed. It is further established that such data breaks do not improve the accuracy of the estimates of the along-track component during the transient phase; however, the filter does not diverge during this period. Following the analysis of the filter performance, the influence of the Earth's oblateness on the measurement model is studied. It is observed that the error in the horizon points due to the spherical Earth approximation behaves like a sinusoid of twice the orbital frequency, along with a bias of about 0.21°, in the case of a 900 km sun-synchronous orbit. The error in the 6 horizon points is shown to give rise to 6 sinusoids. Since the measurement model for a spherical earth is the simplest one, the feasibility of estimating these sinusoids along with the orbital state forms the next part of the thesis. Each sinusoid, along with the bias, is represented as a 3-state recursive equation (a standard form is sketched below), where i refers to the i-th sinusoid and T to the sampling interval. The augmented or composite state variable X consists of the bias and the sine and cosine components of the sinusoids. The 6 sinusoids, together with the three-dimensional orbital position vector in the local coordinate frame, then lead to a 21-state augmented Kalman filter. With the 21-state filter, observability problems are experienced. Hence the magnetic field strength, which is a function of radial distance and is measured by an onboard magnetometer, is proposed as an additional measurement. Subsequently, using the 6 horizon-point measurements and the radial distance measurements obtained from a magnetometer, and taking advantage of relationships between the sinusoids, it is shown that a ten-state filter (i.e. 3 local orbital states, one bias, and 3 zero-mean sinusoids) can effectively function as an onboard orbit filter. The filter performance is investigated for circular as well as low-eccentricity orbits. The 10-state filter is shown to exhibit a lag while following the radial component in the case of low-eccentricity orbits. This deficiency is overcome by introducing two more states, namely the radial velocity and acceleration, thus resulting in a 12-state filter. Simulation studies reveal that the 12-state filter performance is very good for low-eccentricity orbits; the lag observed in the 10-state filter is totally removed. Besides, the 12-state filter is able to follow the changes in orbit due to orbital manoeuvres which are part of orbit acquisition plans for any mission.
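The 3-state recursive form referred to in the abstract is not reproduced in the record. A standard representation consistent with the description, where b_i is the constant bias and s_{i,k}, c_{i,k} are the sine and cosine components of the i-th sinusoid (frequency omega_i) at sample k, would be the following; this is a reconstruction inferred from the surrounding text, not the thesis's own equation.

```latex
% Reconstructed 3-state recursion for the i-th bias-plus-sinusoid error term
% (inferred from the description above; not quoted from the thesis):
\[
X^{(i)}_{k+1} =
\begin{bmatrix}
1 & 0 & 0 \\
0 & \cos(\omega_i T) & \sin(\omega_i T) \\
0 & -\sin(\omega_i T) & \cos(\omega_i T)
\end{bmatrix}
X^{(i)}_{k},
\qquad
X^{(i)}_{k} =
\begin{bmatrix} b_i \\ s_{i,k} \\ c_{i,k} \end{bmatrix}
\]
```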
468

Architecture et filtres pour la détection des chenaux dans la glace de l'océan Arctique / Architecture and filters for the detection of channels in Arctic Ocean ice

Léonard, Daniel January 2008 (has links)
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
469

匯率轉嫁之時間變動特性-台灣實證研究 / Time-varying nature of exchange rate pass-through for Taiwan

沈睿宸, Shen, Juei Chen Unknown Date (has links)
過去實證研究顯示,匯率轉嫁程度並非一成不變,而是具有隨時間變動的特性。因此,有別於過去文獻大多採用滾動相關係數,本文則是使用Engle(2002)提出的動態條件相關係數模型,估計台灣於1982年至2014年間匯率變動與進口價格變動間的動態條件相關係數;並以其做為匯率轉嫁的代理變數,進而探討台灣匯率轉嫁的時間變動趨勢。我們的實證結果顯示,不論是用滾動相關係數還是動態條件相關係數,台灣的匯率轉嫁都明顯具有隨時間變動的特性。雖然5年期與10年期的滾動相關係數均在1997年前後分別呈現上升與下降的趨勢,動態條件相關係數則無類似的現象。然而,由於滾動相關係數容易受到滾動視窗樣本大小或滾動視窗有無包含極端值的影響,使得此方法較無法看出匯率轉嫁變動的準確時間點,而動態條件相關係數模型則可避免此問題。此外,本文實證發現,通膨環境與匯率波動是造成台灣匯率轉嫁隨時間變動的主要因子,對匯率轉嫁皆有顯著的正向影響。在排除1986年匯率轉嫁與進口滲透率呈現短暫負向關係的資料後,進口滲透率與匯率轉嫁的正向關係變為顯著,而進口滲透率也成為影響匯率轉嫁的原因之一。 / According to past empirical studies, it is believed that exchange rate pass -through (ERPT) has the time-varying nature. In this paper, we apply the Dynamic Conditional Correlation (DCC) model of Engle (2002), rather than the rolling correlation coefficient prevalently used by other studies, to analyze the time trend of ERPT for Taiwan. We estimate the dynamic condition correlation between the changes of exchange rate and the changes of import price using monthly data from 1982 to 2014 and use this correlation as a proxy for the degree of ERPT. Our empirical results show that ERPT for Taiwan, whether measured by the DCC or the rolling correlation coefficient, has a significant time- varying nature. In addition, both 5-year and 10-year window rolling correlation coefficient increase before 1997 and decline after 1997, which does not show in the DCC. However, the rolling correlation coefficient does not provide precise timings in the changes in ERPT, because of the dependence on the size of windows and whether or not outliers exist in the window. In contrast, the DCC does not have this kind of problem. Another important empirical result of this paper is that the inflation environment and the exchange rate volatility are main factors which explain the time-varying ERPT, and both of them have positive relation with ERPT. Moreover, the import penetration becomes positively significant after excluding data which shows temporary negative impact of the import penetration on ERPT in 1986.
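The rolling-correlation benchmark that this abstract contrasts with the DCC estimator can be computed in a few lines with pandas, as sketched below on placeholder series; a full DCC estimate would additionally require fitting univariate GARCH models and the DCC correlation dynamics, which is omitted here.

```python
# Rolling correlation between monthly exchange-rate changes and import-price
# changes, as a crude proxy for exchange-rate pass-through. Series are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
idx = pd.date_range("1982-01-31", periods=396, freq="M")   # 1982-2014, monthly
d_fx = pd.Series(rng.normal(0, 0.02, idx.size), index=idx)
d_import_price = 0.5 * d_fx + pd.Series(rng.normal(0, 0.01, idx.size), index=idx)

# 5-year (60-month) rolling window, analogous to the paper's rolling benchmark.
rolling_corr = d_fx.rolling(window=60).corr(d_import_price)
print(rolling_corr.dropna().tail())
```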
470

Improving Free State matriculation results: a total quality management approach / A. Magadla

Magadla, Andiswa Antonette January 2010 (has links)
The aim of the study was to establish the possible causes of poor Grade 12 results in physical science in South Africa and to apply a total quality management (TQM) approach to suggest a solution. The literature study indicates that resources, preparation or subject knowledge, commitment and support affect the quality of performance. The research was done in one school district (cluster). Following the literature study, a questionnaire was distributed to 150 science teachers from 31 schools, and the response rate was 73% (113 responses). The questionnaire tested the respondents' perceptions of the availability of resources and the support they received, the support given to learners, and their level of preparation and subject knowledge. From this, as well as from the biographical information in the questionnaire, certain conclusions were drawn about the reasons for the poor performance of learners in science examinations. It could be concluded from the analysis of the results that limited support to teachers and to learners are important factors contributing to a poor Grade 12 pass rate. The pass rate also correlates positively with teachers' levels of experience. Although 39% of teachers are unqualified or under-qualified, no significant correlation could be found between pass rate and level of qualification. Analysis of the effect of commitment on pass rate was inconclusive. / Thesis (M.B.A.)--North-West University, Potchefstroom Campus, 2011.
