  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
411

Components of An Effective Workplace Mentorship

Woolwine, Elaine W. 28 April 1999 (has links)
The purpose of the study was to identify the components of an effective workplace mentorship. Twenty-five panelists participated in a three-round Delphi study to reach a consensus on these components. The panelists were (1) local school-site teachers and school-to-work coordinators, (2) community college school-to-work coordinators, (3) directors of tech-prep consortia, and representatives from (4) business and industry, (5) labor and management, (6) corporate research, and (7) federal government. A two-round pilot study was conducted to test the initial open-ended questions for round 1 and the survey instrument developed for round 2. Feedback from the pilot study was used to develop the open-ended questionnaire instrument in round 1 and the Likert scale used in round 2 of the study. Criteria of an effective workplace mentorship were retained in rounds 2 and 3 if 80% of the respondents rated them "important" or "very important." The study produced 93 criteria in five categories necessary for an effective workplace mentorship. The five categories were: (1) program structure; (2) recruitment, selection, and placement; (3) support activities; (4) program outcomes and evaluation; and (5) ethics. A sixth category, barriers and obstacles to an effective workplace mentorship, was included in the survey and contained four responses. These four responses were summarized along with the 93 criteria of an effective workplace mentorship. A checklist of criteria is included for assessing existing programs and to aid those implementing new programs. / Ed. D.
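The retention rule described in this abstract (a criterion survives a round if at least 80% of panelists rate it "important" or "very important") can be sketched as follows; the ratings and panel size below are illustrative, not the study's data.

```python
# Sketch of the Delphi retention rule described above: a criterion is kept
# if at least 80% of panelists give it one of the two top Likert ratings.
# The sample ratings below are invented for illustration.

RETAIN_THRESHOLD = 0.80
TOP_RATINGS = {"important", "very important"}

def is_retained(ratings):
    """Return True if the share of top ratings meets the 80% threshold."""
    top = sum(1 for r in ratings if r in TOP_RATINGS)
    return top / len(ratings) >= RETAIN_THRESHOLD

# 25 panelists: 21 of 25 (84%) -> retained; 19 of 25 (76%) -> dropped.
round2 = ["important"] * 15 + ["very important"] * 6 + ["neutral"] * 4
print(is_retained(round2))  # True
```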
412

Essays on Factor Models

Lin, Chun-Wei 16 May 2024 (has links)
This dissertation consists of three chapters on applications of factor models in different fields of asset pricing. The first chapter addresses the following issue: prominent volatility-based factor pricing models focus exclusively on the second moment of asset returns and hence tend to identify volatile factors that carry little risk premium. This chapter demonstrates that a simple transform of asset returns can arbitrarily upset the ranking of volatility-based factors, but not their prices of risk. Accordingly, we propose a new framework to identify factors based on their prices of risk, the so-called principally priced risk factors (PPRFs). We construct these factors by generalizing the standard Sharpe ratio for a single asset to a set of assets, incorporating information from both the first and second moments of asset returns. The PPRF framework improves out-of-sample pricing performance in both equity and currency markets. The second chapter identifies the origins of covariance in institutional trading. Conceptually, we introduce two perspectives: the asset perspective, which treats assets as the key market fundamentals, and the manager perspective, which treats fund managers as the key market fundamentals driving institutional trading covariance. Empirically, we establish that the asset perspective is the primary driver of covariance in institutional trading. Our analysis documents two further empirical patterns. First, returns stemming from the covariance in institutional trading from the asset perspective have higher volatility, offering valuable insights for the demand-based asset pricing literature. Second, the persistence in trading often breaks down during economic downturns, suggesting potential connections to the uncertainty-based business-cycle literature. Finally, the third chapter examines the impact of changes in monetary policy rules on the asset valuations of firms with different profitability.
This chapter presents two empirical findings. First, during periods of hawkish monetary policy, the "profitability premium" (the expected extra return on investments in more profitable firms) tends to increase. Second, when analyzing the factors mediating this effect, changes in inflation expectations play a more significant role in driving the profitability premium during transitions to a hawkish monetary regime than real interest-rate adjustments to production costs. These observations suggest a mechanism by which monetary policy may have different long-term effects on firms with different characteristics. / Doctor of Philosophy / This dissertation explores factor models in asset pricing across three chapters. The first chapter critiques volatility-based models that focus on asset-return variance and introduces a new framework for identifying factors based on prices of risk, enhancing pricing performance in equity and currency markets. The second chapter investigates the origins of covariance in institutional trading, emphasizing the asset perspective as the dominant influence and documenting higher volatility and breakdowns in trading persistence during economic downturns. The third chapter examines the effects of monetary policy changes on firm asset valuations, finding that hawkish policies increase the profitability premium, driven more by shifts in inflation expectations than by changes in real interest rates. These insights highlight the nuanced impacts of market fundamentals and monetary policy on asset pricing and firm profitability.
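The abstract says the PPRFs generalize the single-asset Sharpe ratio to a set of assets but does not spell out the construction. As a purely illustrative stand-in, the sketch below uses the textbook multi-asset analogue, the maximal Sharpe ratio attainable over all portfolios, sqrt(mu' Sigma^{-1} mu); this is an assumption, not the dissertation's actual PPRF definition.

```python
import numpy as np

# Illustrative multi-asset analogue of the Sharpe ratio: the maximal
# Sharpe ratio over all portfolios of the asset set, sqrt(mu' Sigma^-1 mu).
# This is a standard construction, assumed here for illustration only.

def sharpe_single(mu, sigma):
    """Sharpe ratio of a single asset: mean excess return over volatility."""
    return mu / sigma

def sharpe_set(mu, cov):
    """Maximal Sharpe ratio attainable from a set of assets."""
    mu = np.asarray(mu, dtype=float)
    return float(np.sqrt(mu @ np.linalg.solve(np.asarray(cov, float), mu)))

mu = [0.05, 0.03]
cov = [[0.04, 0.0], [0.0, 0.09]]
# With uncorrelated assets the squared single-asset ratios add:
# (0.05/0.2)^2 + (0.03/0.3)^2 = 0.0625 + 0.01 = 0.0725
print(round(sharpe_set(mu, cov), 4))  # 0.2693
```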
413

Next generation high temperature gas reactors : a failure methodology for the design of nuclear graphite components

Hindley, Michael Philip 03 1900 (has links)
Thesis (PhD)--Stellenbosch University, 2015. / ENGLISH ABSTRACT: This thesis presents a failure evaluation methodology for nuclear graphite components used in high-temperature gas reactors. The methodology aims to predict the failure of real parts based on the mechanical testing of material specimens. It is a statistical methodology for calculating the probability of failure of graphite components, developed and implemented numerically in conjunction with finite element analysis; it can therefore be used on any geometry and load configuration that can be modelled with finite elements. The methodology is demonstrated by mechanical testing of NBG-18 nuclear-grade graphite specimens with varying geometries under various loading conditions. Some tests were developed as an extension of the material characterisation, specifically engineered to assess the effect of stress concentrations on the failure of NBG-18 components. Two relevant statistical distribution functions, a normal distribution and a two-parameter Weibull distribution, are fitted to the experimental material strength data for NBG-18 nuclear graphite. The experimental data are then normalised for ease of comparison and combined into one representative data set. The combined data set passes a goodness-of-fit test, which implies that the mechanism of failure is similar across data sets. A three-parameter Weibull fit to the tensile strength data is used only to predict the failure of independent problems according to the statistical failure methodology. The analysis of the experimental results and a discussion of the accuracy of the failure prediction methodology are presented. The data are analysed at the median failure-load prediction as well as at lower probabilities of failure. The methodology is based on the existence of a "link volume", a volume of material in a weakest-link methodology defined in terms of two grouping criteria.
The process for approximating the optimal size of the link volume required for the weakest-link failure calculation in NBG-18 nuclear graphite is demonstrated, and the influence of the two grouping criteria on the failure-load prediction is evaluated. A detailed evaluation of the failure prediction for each test case is performed for all proposed link volumes. From this investigation, recommended link volumes for NBG-18 are given for accurate or conservative failure prediction. Furthermore, a full-sized specimen test is designed to simulate the failure condition that would be encountered in the reactor. Three specimens are tested and evaluated against the predicted failure. Failure of the full-size component is predicted realistically but conservatively; the prediction using the link-volume values for the test-rig design is 20% conservative. The methodology is based on the Weibull weakest-link method, which is inherently volume dependent. The conservatism shows that the methodology exhibits the volume dependency of classic Weibull theory, but to a far lesser extent.
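The weakest-link calculation described above can be sketched with the standard two-parameter Weibull forms: a single specimen fails with probability 1 - exp(-(sigma/sigma0)^m), and a component discretised into finite elements accumulates the exponent over its stressed volume. Parameter values below are invented; the thesis's link-volume grouping is more involved than this.

```python
import math

# Standard two-parameter Weibull weakest-link forms, as a sketch of the
# calculation the abstract describes. sigma0 (characteristic strength),
# m (Weibull modulus), and v0 (reference volume) are illustrative values.

def weibull_pof(stress, sigma0, m):
    """Probability of failure of a single specimen at a given stress."""
    return 1.0 - math.exp(-((stress / sigma0) ** m))

def weakest_link_pof(elements, sigma0, m, v0):
    """elements: iterable of (volume, stress) pairs from an FE analysis."""
    s = sum((v / v0) * (stress / sigma0) ** m for v, stress in elements)
    return 1.0 - math.exp(-s)

# A larger stressed volume fails more often at the same stress level --
# the volume dependency the abstract attributes to classic Weibull theory.
small = weakest_link_pof([(1.0, 20.0)], sigma0=30.0, m=10.0, v0=1.0)
large = weakest_link_pof([(1.0, 20.0)] * 10, sigma0=30.0, m=10.0, v0=1.0)
print(small < large)  # True
```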
414

Fabrication of advanced LTCC structures for microwave devices

Tick, T. (Timo) 17 November 2009 (has links)
The main objective of this thesis was to research the integration of novel materials and fabrication processes into Low Temperature Co-fired Ceramic (LTCC) technology, enabling the fabrication of Radio Frequency (RF) and microwave components with advanced performance. The research focuses on two integration cases, which divide the thesis into two sections: the integration of tunable dielectric structures and the integration of air-filled waveguides. The first section describes the development and characterization of a low-sintering-temperature Barium Strontium Titanate (BST) thick-film paste. The sintering temperature of BST is decreased from approximately 1350 °C to 900 °C by lithium doping and pre-reaction of the doped composition, which allows co-sintering of the developed BST paste with commercial LTCC materials. Additionally, two integration techniques to embed tunable components in an LTCC substrate using the developed BST paste are presented, and the electrical performance of the components is evaluated. The highest measured tunability was 44% at a bias field of 5.7 V/µm. The permittivity of the films varied between 790 and 190, and the loss tangent between 0.004 and 0.005, all measured unbiased at 10 kHz. The developed LTCC-compatible BST paste and the presented integration techniques for tunable components have not been previously published. In the second section, a fabrication method for LTCC-integrated air-filled rectangular waveguides with solid metallic walls is presented. The method is described in detail and implemented in a set of waveguides used for characterization. A total loss of 0.1–0.2 dB/mm was measured over the 140–200 GHz band. The electrical performance of the waveguides is evaluated, and their use is demonstrated in an integrated LTCC antenna operating at 160 GHz.
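The 44% figure above is consistent with the common definition of tunability for a ferroelectric film: the relative drop in permittivity under a DC bias field. The sketch below assumes that definition; the sample permittivity values are invented, not the thesis's measurements.

```python
# Common definition of ferroelectric tunability, assumed here for
# illustration: n = (eps(0) - eps(V)) / eps(0), the fractional drop in
# permittivity when a bias field is applied. Sample values are invented.

def tunability(eps_unbiased, eps_biased):
    """Relative permittivity drop under bias, as a fraction."""
    return (eps_unbiased - eps_biased) / eps_unbiased

# A film whose permittivity falls from 500 to 280 under bias is 44% tunable.
print(round(tunability(500.0, 280.0), 2))  # 0.44
```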
415

Extensions of principal components analysis

Brubaker, S. Charles 29 June 2009 (has links)
Principal Components Analysis (PCA) is a standard tool in data analysis, widely used in data-rich fields such as computer vision, data mining, bioinformatics, and econometrics. For a set of vectors in n dimensions and a natural number k less than n, the method returns a subspace of dimension k whose average squared distance to that set is as small as possible. Besides saving computation by reducing the dimension, projecting to this subspace can often reveal structure that was hidden in high dimension. This thesis considers several novel extensions of PCA, which provably reveal hidden structure where standard PCA fails to do so. First, we consider Robust PCA, which prevents a few points, possibly corrupted by an adversary, from having a large effect on the analysis. When applied to learning noisy logconcave mixture models, the algorithm requires only slightly more separation between component means than is required for the noiseless case. Second, we consider Isotropic PCA, which can go beyond the first two moments in identifying "interesting" directions in data. The method leads to the first affine-invariant algorithm that can provably learn mixtures of Gaussians in high dimensions, improving significantly on known results. Third, we define the "Subgraph Parity Tensor" of order r of a graph and reduce the problem of finding planted cliques in random graphs to the problem of finding the top principal component of this tensor.
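The abstract's definition of PCA, the k-dimensional subspace minimizing average squared distance to the points, is computed by a truncated SVD of the centered data. A minimal sketch (with random, purely illustrative data):

```python
import numpy as np

# PCA as the abstract defines it: among all k-dimensional subspaces, find
# the one whose average squared distance to the data is smallest. The top-k
# right singular vectors of the centered data matrix span that subspace.

def pca_subspace(X, k):
    """Orthonormal basis (rows) of the top-k principal subspace."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return vt[:k]

def avg_sq_dist(X, basis):
    """Average squared distance from the (centered) points to the subspace."""
    Xc = X - X.mean(axis=0)
    resid = Xc - (Xc @ basis.T) @ basis
    return float((resid ** 2).sum(axis=1).mean())

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
V2, V3 = pca_subspace(X, 2), pca_subspace(X, 3)
# Enlarging the subspace can only reduce the average squared distance.
print(avg_sq_dist(X, V3) <= avg_sq_dist(X, V2))  # True
```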
416

Integration of petrographic and petrophysical log analyses to characterize and assess reservoir quality of the Lower Cretaceous sediments in the Orange Basin, offshore South Africa

Mugivhi, Murendeni Hadley January 2017 (has links)
Magister Scientiae - MSc / Commercial hydrocarbon production relies on porosity and permeability, which define the storage capacity and flow capacity of the reservoir. To assess these parameters, petrographic and petrophysical log analyses have been found to be among the most powerful approaches, and they have become ideal for determining the reservoir quality of uncored reservoirs using regression techniques. It is against this background that a need arises to integrate petrographic and petrophysical well data from the study area. Thus, this project gives first-hand information about reservoir quality for hydrocarbon producibility. Five wells (A-J1, A-D1, A-H1, A-K1, and K-A2) were studied within the Orange Basin, offshore South Africa, and thirty-five (35) reservoirs were defined on the gamma-ray log where sandstone thickness is greater than 10 m. Eighty-three (83) sandstone samples were gathered from these reservoirs for petrographic analyses within Hauterivian to Cenomanian sequences. Thin-section analyses of these sediments revealed pore restriction by quartz and feldspar overgrowths and pore filling by siderite, pyrite, kaolinite, illite, chlorite, and calcite. The occurrence of these diagenetic minerals has reduced intergranular pore space to almost no point-count porosity in well K-A2, whilst in wells A-J1, A-D1, A-H1, and A-K1 porosity increases in some zones due to secondary porosity. Volume of clay, porosity, permeability, water saturation, storage capacity, flow capacity, and hydrocarbon volume were calculated within the pay-sand intervals. The average volume of clay ranged from 6% to 70.5%. The estimated average effective porosity ranged from 10% to 20%. The average water saturation ranged from 21.7% to 53.4%. Permeability ranged from a negligible value to 411.05 mD. Storage capacity ranged from 6.56 scf to 2228.17 scf. Flow capacity ranged from 1.70 mD-ft to 31615.82 mD-ft. Hydrocarbon volume varied from 2397.7 cubic feet to 6215.4 cubic feet. 
Good to very good reservoir qualities were observed in some zones of wells A-J1, A-K1, and A-H1, whereas wells A-D1 and K-A2 presented poor qualities.
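Two of the quantities computed above have simple textbook forms: the volume of clay is commonly estimated from the gamma-ray log with a linear index, and flow capacity is permeability times pay thickness (hence the mD-ft units in the abstract). The thesis may use different or nonlinear variants, so the sketch below is only the standard form with invented readings.

```python
# Textbook forms of two quantities named in the abstract, assumed here for
# illustration (the study's exact equations are not given in the abstract).

def gamma_ray_index(gr, gr_clean, gr_clay):
    """Linear clay-volume index from a gamma-ray reading (API units),
    between the clean-sand and clay baselines."""
    return (gr - gr_clean) / (gr_clay - gr_clean)

def flow_capacity(perm_md, thickness_ft):
    """Flow capacity (kh) of a pay interval, in mD-ft."""
    return perm_md * thickness_ft

# A 75 API reading halfway between clean-sand (15 API) and clay (135 API):
print(gamma_ray_index(75.0, 15.0, 135.0))  # 0.5
# A 20 ft interval at 100 mD has kh = 2000 mD-ft.
print(flow_capacity(100.0, 20.0))  # 2000.0
```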
417

High programmed death ligand 1 expression in carcinomatous components predicts a poor prognosis in pulmonary pleomorphic carcinoma / 肺多形癌の上皮成分におけるプログラム細胞死リガンド1(PD-L1)高発現は予後不良を予測する

Noguchi, Misa 23 March 2022 (has links)
Kyoto University / New-system doctoral program / Doctor of Medical Science / degree no. 甲第23788号 / 医博第4834号 / library call no. 新制||医||1057 (University Library) / Department of Medicine, Graduate School of Medicine, Kyoto University / (Examiners) Prof. 武藤 学, Prof. 小濱 和貴, Prof. 生田 宏一 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Medical Science / Kyoto University / DFAM
418

A Compact Three-Phase Multi-stage EMI Filter with Compensated Parasitic-Component Effects

Chen, Shin-Yu 14 September 2023 (has links)
With the advent of wide-bandgap (WBG) semiconductor devices, electromagnetic interference (EMI) emissions are more pronounced due to high slew rates, in the form of high dv/dt and di/dt, at higher switching frequencies than traditional silicon technology. To comply with stringent conducted-emission requirements, EMI filters are adopted to attenuate the high-frequency common-mode (CM) and differential-mode (DM) noise along the propagation path. However, self and mutual parasitic components are known to degrade EMI filter performance. While parasitic-cancellation techniques have been discussed at length in prior literature, most have focused on single-phase applications. As such, this work extends the pre-existing concepts to three-phase systems. Novel component placement, winding strategies, and shielding and grounding techniques were developed to desensitize a three-phase multi-stage filter to parasitic effects. The effectiveness of the three-phase filter structure employing the proposed methodologies has been validated via noise measurements at the line impedance stabilization network (LISN) in a 15 kW rated motor-drive system. Consequently, general design guidelines have been formulated for filter topologies with different inductor and capacitor form factors. / Master of Science / The adoption of wide-bandgap (WBG) semiconductor devices, such as Silicon Carbide (SiC) or Gallium Nitride (GaN) transistors, improves power density through higher slew rates and switching frequencies than traditional silicon technology. However, the high switching speeds and frequencies generate more electromagnetic interference (EMI) noise in the surroundings. To comply with conducted-emission requirements at the grid terminal, an EMI filter is mandatory to attenuate the high-frequency EMI noise that flows into the grid. 
However, near-field coupling and parasitic components are known to degrade filter performance at the higher end of the frequency spectrum, where the limit lines are typically stringent. While parasitic-cancellation techniques have been discussed at length in prior literature, most have focused on single-phase applications. Therefore, this thesis extends the pre-existing concepts to compensate the mutual and self-parasitic coupling components in a three-phase multi-stage filter. To this end, novel component placement, winding strategies, and shielding and grounding techniques were developed to compensate for the parasitic effects. The effectiveness of the three-phase filter structure employing the proposed methodologies has been validated in a 15 kW rated motor-drive system. Consequently, general design guidelines have been formulated for filter design with minimal parasitic effects.
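A miniature example of why parasitics degrade filters at high frequency: above its self-resonant frequency, a filter capacitor's equivalent series inductance (ESL) dominates and its impedance rises instead of falling, so shunt attenuation degrades. The component values below are invented for illustration.

```python
import math

# Impedance magnitude of a shunt filter capacitor including its parasitic
# equivalent series inductance (ESL). Above the self-resonant frequency
# the ESL term dominates and impedance grows with frequency, which is one
# way parasitics erode high-frequency attenuation. Values are invented.

def cap_impedance(f_hz, c_farad, esl_henry):
    """|Z| of a capacitor in series with its ESL (ESR neglected)."""
    w = 2.0 * math.pi * f_hz
    return abs(esl_henry * w - 1.0 / (w * c_farad))

C, ESL = 1e-6, 10e-9  # a 1 uF capacitor with 10 nH of parasitic ESL
f_res = 1.0 / (2.0 * math.pi * math.sqrt(ESL * C))  # roughly 1.6 MHz
# Near resonance the impedance dips toward zero; a decade higher it has
# risen again because the inductive term dominates.
print(cap_impedance(10 * f_res, C, ESL) > cap_impedance(1.001 * f_res, C, ESL))  # True
```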
419

Uma abordagem na camada de middleware para troca dinâmica de componentes em sistemas multimídia distribuídos baseados no framework Cosmos

Vieira Junior, Ivanilson França 20 March 2009 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / To manage the complexity associated with the management of distributed multimedia systems, a solution must incorporate middleware concepts in order to hide hardware- and operating-system-specific aspects. Applications in these systems can run on different types of platforms, and the components of these systems must interact with each other. Because the state of the execution platforms varies, a flexible approach should allow the dynamic substitution of components in order to maintain the QoS level of the running application. In this context, this work presents a middleware-layer approach for supporting the dynamic substitution of components in the context of the Cosmos framework: choosing the target component, deciding which of the candidate components will replace it, and carrying out the defined exchange process. The approach was defined considering the Cosmos QoS model and how it deals with dynamic reconfiguration.
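The three-step substitution this abstract describes (pick the target component, decide among the candidates by QoS, perform the exchange) can be sketched as below. All names, the registry shape, and the scoring function are hypothetical; the Cosmos framework's real interfaces are not shown here.

```python
# Hypothetical sketch of the three-step dynamic substitution described in
# the abstract. Component names, the dict-based registry, and the QoS
# scoring function are all invented for illustration.

def choose_candidate(candidates, qos_score):
    """Step 2: pick the candidate component with the highest QoS score."""
    return max(candidates, key=qos_score)

def swap_component(system, target, candidates, qos_score):
    """Steps 1-3: replace `target` in `system` with the best candidate."""
    best = choose_candidate(candidates, qos_score)
    system[target] = best  # step 3: the exchange itself
    return best

system = {"decoder": "decoder_v1"}
scores = {"decoder_v2": 0.7, "decoder_v3": 0.9}
chosen = swap_component(system, "decoder",
                        ["decoder_v2", "decoder_v3"],
                        qos_score=lambda c: scores[c])
print(chosen, system["decoder"])  # decoder_v3 decoder_v3
```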
420

Generalized Identification: Individuals' levels of identification with groups and their relation to personality

Manninen, Elina January 2016 (has links)
This thesis investigates a term newly coined by the author, generalized identification: the tendency for people who identify strongly with one group to identify strongly with other groups as well, and how personality variables from the Five-Factor model may relate to this tendency. A common component of identification with 10 preselected groups was extracted (N = 148) using a principal component analysis. The results reveal that generalized identification accounts for 41% of the total variance. A stepwise multiple regression analysis further showed that Openness to Experience and Agreeableness from the Five-Factor model explained 26% of the variance in generalized identification. However, due to low reliability in measuring the personality traits, the relationship between personality and generalized identification could not be interpreted satisfactorily, and it needs to be explored further before firm conclusions are drawn.
