221

Fundamentação eletromiográfica do método de pré-exaustão no treinamento de força / Electromyography as a basis for the pre-exhaustion method in strength training

Leite, Allan Brennecke 03 April 2007 (has links)
In contrast to the traditional strength training recommendation, the pre-exhaustion method proposes beginning a training session with single-joint exercises and finishing it with multi-joint exercises. The aim of this study was to use EMG to investigate temporal and activation-intensity parameters of the pectoralis major (PM), deltoid (DA), and triceps brachii (TB) muscles that could provide a basis for applying the pre-exhaustion method at 10RM in the bench press and fly exercises. Two experimental protocols were compared: P1, the pre-exhaustion method; P2, the traditional recommendation. Activation intensity based on the RMS value, as well as its relationship to the duration of muscular contraction expressed as intensity bands, showed no statistically significant differences for PM. For DA, there were no statistically significant differences between protocols in activation intensity when the repetitions were analyzed together; when each repetition was analyzed separately, however, this muscle showed a statistically significant increase in activation intensity in P1, as well as greater use of the 80–100% MVIC intensity band. For TB, activation intensity was significantly greater in P1 than in P2 in all forms of analysis. The results showed that the locomotor system increased its reliance on TB as an alternative strategy to reach the bench press 10RM in P1. It can therefore be stated that the pre-exhaustion method may be effective at imposing a greater neural stimulus on small accessory muscle groups during the execution of a movement, but not on the intended prime mover. However, these findings also mean that the effects of the pre-exhaustion method cannot yet be asserted categorically: over the course of the set in P1 there was no significant increase in the activation intensity of any single muscle, or in its intensity bands, as there was in P2. That is, the muscles in P1 began the set at a higher intensity level than in P2 because they had been stimulated beforehand.
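The analysis above rests on two computable quantities: a moving-window RMS envelope of the EMG signal and the fraction of contraction time spent in %MVIC intensity bands. A minimal Python sketch of both (not from the thesis; the window length and band edges are assumptions):

    import numpy as np

    def rms_envelope(emg, fs, win_ms=100):
        # Moving-window RMS of a raw EMG signal (window length assumed).
        n = max(1, int(fs * win_ms / 1000))
        return np.sqrt(np.convolve(emg**2, np.ones(n) / n, mode="same"))

    def time_in_bands(emg, mvic_rms, fs,
                      bands=((0, 20), (20, 40), (40, 60), (60, 80), (80, 100))):
        # Fraction of contraction time spent in each %MVIC intensity band;
        # mvic_rms is the RMS recorded during a maximal voluntary
        # isometric contraction (MVIC) trial.
        pct = 100.0 * rms_envelope(emg, fs) / mvic_rms
        return {band: float(np.mean((pct >= band[0]) & (pct < band[1])))
                for band in bands}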
222

Hedging no modelo com processo de Poisson composto / Hedging in the compound Poisson process model

Sung, Victor Sae Hon 07 December 2015 (has links)
An investor who chooses to trade assets in the hope of making a profit is subject to the economic risks of any transaction, since there is no certainty about the appreciation or depreciation of an asset. Hence the futures market, in which contracts can be traded to hedge against the risk of excessive losses or gains, making the purchase or sale of assets fair for both parties. The goal of this work is to study pure-jump Lévy processes of finite activity, also known as the compound Poisson model, and their applications. Introduced by the French mathematician Paul Pierre Lévy, Lévy processes are characterized by admitting jumps in their paths, which is frequently observed in financial markets. We determine a hedging strategy in the market model with a compound Poisson process via the concept of mean-variance hedging and the dynamic programming principle.
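As a rough illustration of the driving process in this model, a minimal Python sketch that simulates a compound Poisson path: jump times arrive at Poisson rate lam on [0, T] and jump sizes are i.i.d. draws (the normal jump distribution and all parameters below are assumptions, not the thesis's choices):

    import numpy as np

    rng = np.random.default_rng(0)

    def compound_poisson_path(T, lam, jump_sampler, n_steps=1000):
        # X_t = sum of i.i.d. jumps whose arrival times are Poisson(lam)
        # on [0, T]; the path is piecewise constant between jumps.
        n_jumps = rng.poisson(lam * T)
        times = np.sort(rng.uniform(0.0, T, n_jumps))
        jumps = jump_sampler(n_jumps)
        grid = np.linspace(0.0, T, n_steps + 1)
        path = np.array([jumps[times <= t].sum() for t in grid])
        return grid, path

    # Example: 5 jumps per unit time on average, normal jump sizes (assumed).
    t, X = compound_poisson_path(1.0, 5.0, lambda n: rng.normal(0.0, 0.1, n))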
223

Dekomponeringsanalys av personbilstrafikens CO2-utsläpp i Sverige 1990–2015 / Decomposition analysis of CO2 emissions from passenger car traffic in Sweden 1990–2015

Kalla, Christelle January 2019 (has links)
By 2045 Sweden is to reach net-zero territorial emissions, and by 2030 emissions from the transport sector are to be reduced by 70% compared with 2010. Road traffic accounts for a third of Sweden's total greenhouse gas emissions. To achieve the climate targets, the best-suited policy instruments and measures should be prioritized. A systematic investigation of the factors that have affected the development of emissions can guide decision makers in allocating resources where they do the most good. Decomposition analysis is a potential method for this purpose, since the effects of several different factors can be separated and measured. Five additive LMDI-I decomposition analyses were performed on the development of fossil CO2 emissions from passenger car traffic between 1990 and 2015. The factors investigated were population, cars per capita, fuel technologies, engine sizes, distance travelled per car, emissions, and biofuel. Data from the emissions model HBEFA, the Swedish Transport Administration, and Statistics Sweden were used in the analyses. Over the period 1990–2015 CO2 emissions decreased, and the decomposition analyses showed that all of the included factors affected the development. Over the whole period, the factors' contributions, in order of size, were: distance travelled per car (35%), fuel technologies (15%), population (15%), cars per capita (13%), emissions (11%), biofuel (7%), and engine sizes (5%). Each percentage is the factor's share of the absolute sum of all the effects. Distance travelled per car, emissions, biofuel, and engine sizes reduced emissions; fuel technologies, population, and cars per capita increased them. The results can serve as an indication of which factors may influence future emissions the most and where measures should be taken. Suggested measures are incentives to choose more sustainable modes of transport, increasing the share of lower-emission cars in the vehicle fleet, and using more biofuel.
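For reference, an additive LMDI-I decomposition attributes the change in total emissions V = Σ_i Π_k x_{k,i} (summed over sub-categories i, e.g. fuel types) to each factor x_k via the logarithmic mean. A generic Python sketch (the thesis's factor definitions differ in detail); the per-factor effects sum exactly to the total emission change:

    import numpy as np

    def log_mean(a, b):
        # Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a.
        a, b = np.asarray(a, float), np.asarray(b, float)
        return np.where(np.isclose(a, b), a,
                        (a - b) / (np.log(a) - np.log(b)))

    def lmdi_additive(factors_0, factors_T):
        # factors_*: dict name -> array over sub-categories; emissions per
        # sub-category are the product of all factors.
        v0 = np.prod(list(factors_0.values()), axis=0)
        vT = np.prod(list(factors_T.values()), axis=0)
        L = log_mean(vT, v0)
        # Effect of factor k; sum over k equals vT.sum() - v0.sum().
        return {k: float(np.sum(L * np.log(factors_T[k] / factors_0[k])))
                for k in factors_0}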
224

A comparison between the effects of black tea and rooibos on the iron status of primary school children / P. Breet

Breet, Petronella January 2003 (has links)
Thesis (M.Sc. (Nutrition))--North-West University, Potchefstroom Campus, 2004.
225

Numerical Methods for Continuous Time Mean Variance Type Asset Allocation

Wang, Jian January 2010 (has links)
Many optimal stochastic control problems in finance can be formulated as Hamilton-Jacobi-Bellman (HJB) partial differential equations (PDEs). In this thesis, a general framework for solving HJB PDEs in finance is developed, with application to asset allocation. The numerical scheme has the following properties: it is unconditionally stable; convergence to the viscosity solution is guaranteed; there are no restrictions on the underlying stochastic process; it can easily be extended to include features such as uncertain volatility and transaction costs; and central differencing is used as much as possible, so that use of a locally second order method is maximized. Continuous time mean variance type strategies for dynamic asset allocation problems are studied. Three mean variance type strategies are investigated: pre-commitment mean variance, time-consistent mean variance, and mean quadratic variation. The numerical method can handle various constraints on the control policy, and the following cases are studied: allowing bankruptcy (the unconstrained case), no bankruptcy, and bounded control. In some special cases where analytic solutions are available, the numerical results agree with them. The three strategies are then compared. When bankruptcy is allowed, analytic solutions exist for all strategies; once additional constraints are applied to the control policy, analytic solutions no longer exist for all strategies. After realistic constraints are applied, the efficient frontiers for all three strategies are very similar, but the investment policies are quite different. These results show that, in deciding which objective function is appropriate for a given economic problem, it is not sufficient simply to examine the efficient frontiers; the actual investment policies must be studied to determine whether a particular strategy is applicable to a specific investment problem.
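The "central differencing as much as possible" property refers to the standard positive-coefficient construction: at each node, central weights are used when they are non-negative, otherwise the scheme falls back to upwind differencing, which preserves monotonicity and hence convergence to the viscosity solution. A minimal sketch for a one-dimensional operator a·V'' + b·V' on a uniform grid, assuming a >= 0 (not the thesis's actual code):

    def fd_weights(a, b, h):
        # Weights for discretizing a*V'' + b*V' at an interior node.
        # Returns (alpha, beta) = weights on V[i-1] and V[i+1]; the
        # diagonal weight is -(alpha + beta) in every case.
        alpha = a / h**2 - b / (2.0 * h)   # central-difference candidates
        beta = a / h**2 + b / (2.0 * h)
        if alpha >= 0.0 and beta >= 0.0:   # positive coefficients: keep central
            return alpha, beta
        if b >= 0.0:                        # otherwise upwind: forward for b >= 0
            return a / h**2, a / h**2 + b / h
        return a / h**2 - b / h, a / h**2   # backward for b < 0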
226

A Hybrid of Stochastic Programming Approaches with Economic and Operational Risk Management for Petroleum Refinery Planning under Uncertainty

Khor, Cheng Seong January 2006 (has links)
In view of persistently fluctuating and high crude oil prices, it is now more important than ever for petroleum refineries to operate at an optimal level in the present dynamic global economy. Acknowledging the shortcomings of deterministic models, this work proposes a hybrid of stochastic programming formulations for optimal midterm refinery planning that addresses three sources of uncertainty, namely the prices of crude oil and saleable products, product demand, and production yields. An explicit stochastic programming technique is utilized by employing compensating slack variables to account for violations of constraints, in order to increase model tractability. Four approaches are considered to ensure both solution and model robustness: (1) the Markowitz mean–variance (MV) model to handle randomness in the objective coefficients of prices, by minimizing the variance of the expected value of the random coefficients; (2) two-stage stochastic programming with fixed recourse via scenario analysis, to model randomness in the right-hand-side and left-hand-side coefficients by minimizing the expected recourse penalty costs due to constraint violations; (3) incorporation of the MV model within the framework developed in Approach 2, to minimize both the expectation and the variance of the recourse costs; and (4) reformulation of the model in Approach 3 using mean absolute deviation (MAD) as the risk metric imposed by the recourse costs, a novel application in the petroleum refining industry. A representative numerical example illustrates that the stochastic models yield higher net profits and more robust solutions.
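To make the risk metrics in Approaches 3 and 4 concrete, the sketch below computes the expected recourse cost together with its variance (MV) and mean absolute deviation (MAD) over a small scenario set; the probabilities, costs, and risk weight are invented for illustration:

    import numpy as np

    # Scenario-based risk metrics over recourse costs (illustrative numbers).
    p = np.array([0.25, 0.50, 0.25])     # scenario probabilities (assumed)
    q = np.array([120.0, 80.0, 150.0])   # recourse cost per scenario (assumed)

    expected = p @ q
    variance = p @ (q - expected) ** 2   # MV risk term (Approach 3)
    mad = p @ np.abs(q - expected)       # MAD risk term (Approach 4)
    lam = 0.1                             # risk-aversion weight (assumed)
    mv_objective = expected + lam * variance
    mad_objective = expected + lam * mad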
228

A Bidirectional LMS Algorithm for Estimation of Fast Time-Varying Channels

Yapici, Yavuz 01 May 2011 (has links) (PDF)
Effort to estimate unknown time-varying channels as a part of high-speed mobile communication systems is of interest especially for next-generation wireless systems. The high computational complexity of the optimal Wiener estimator usually makes its use impractical in fast time-varying channels. As a powerful candidate, the adaptive least mean squares (LMS) algorithm offers a computationally efficient solution with its simple first-order weight-vector update equation. However, the performance of the LMS algorithm deteriorates in time-varying channels as a result of the eigenvalue disparity, i.e., spread, of the input correlation matrix in such channels. In this work, we incorporate the LMS algorithm into the well-known bidirectional processing idea to produce an extension called the bidirectional LMS. This algorithm is shown to be robust to the adverse effects of time-varying channels such as large eigenvalue spread. The associated tracking performance is observed to be very close to that of the optimal Wiener filter in many cases and the bidirectional LMS algorithm is therefore referred to as near-optimal. The computational complexity is observed to increase by the bidirectional employment of the LMS algorithm, but nevertheless is significantly lower than that of the optimal Wiener filter. The tracking behavior of the bidirectional LMS algorithm is also analyzed and eventually a steady-state step-size dependent mean square error (MSE) expression is derived for single antenna flat-fading channels with various correlation properties. The aforementioned analysis is then generalized to include single-antenna frequency-selective channels where the so-called independence assumption is no more applicable due to the channel memory at hand, and then to multi-antenna flat-fading channels. The optimal selection of the step-size values is also presented using the results of the MSE analysis. The numerical evaluations show a very good match between the theoretical and the experimental results under various scenarios. The tracking analysis of the bidirectional LMS algorithm is believed to be novel in the sense that although there are several works in the literature on the bidirectional estimation, none of them provides a theoretical analysis on the underlying estimators. An iterative channel estimation scheme is also presented as a more realistic application for each of the estimation algorithms and the channel models under consideration. As a result, the bidirectional LMS algorithm is observed to be very successful for this real-life application with its increased but still practical level of complexity, the near-optimal tracking performance and robustness to the imperfect initialization.
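A minimal sketch of the bidirectional idea for a single-tap (flat-fading) channel, with real-valued signals for simplicity: run a standard LMS pass forward in time, run a second pass over the time-reversed data, and combine the two estimate trajectories. The simple averaging below is an assumed combining rule, not necessarily the thesis's exact formulation:

    import numpy as np

    def lms_scalar(x, d, mu):
        # Scalar LMS tracking a single-tap channel h[n]:
        # d[n] = h[n] * x[n] + noise. Returns the estimate trajectory.
        h = 0.0
        H = np.zeros(len(d))
        for n in range(len(d)):
            e = d[n] - h * x[n]
            h = h + mu * e * x[n]
            H[n] = h
        return H

    def bidirectional_lms(x, d, mu):
        Hf = lms_scalar(x, d, mu)                    # forward pass
        Hb = lms_scalar(x[::-1], d[::-1], mu)[::-1]  # backward pass, realigned
        return 0.5 * (Hf + Hb)                       # assumed combining: average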
229

Comparing latent means using two factor scaling methods: a Monte Carlo study

Wang, Dandan, 1981- 10 July 2012 (has links)
Social science researchers are increasingly using multi-group confirmatory factor analysis (MG-CFA) to compare different groups' latent variable means. To ensure that a MG-CFA model is identified, two approaches are commonly used to set the scale of the latent variable. The reference indicator (RI) strategy, which involves constraining one loading per factor to a value of one across groups, assumes that the RI has equal factor loadings across groups. The second approach involves constraining each factor's variance to a value of one across groups and, thus, assumes that the factor variances are equal across groups. Latent mean differences may be tested and described using Gonzalez and Griffin's (2001) likelihood ratio test (LRT_k) and Hancock's (2001) standardized latent mean difference effect size measure (δ_k), respectively. Applied researchers using the LRT_k and/or the δ_k when comparing groups' latent means may not explicitly test the assumptions underlying the two factor scaling methods. To date, no study has examined the impact of violating the assumptions associated with the two scaling methods on latent mean comparisons. The purpose of this study was to assess the performance of the LRT_k and the δ_k when violating the assumptions underlying the RI strategy and/or the factor variance scaling method. Type I error and power of the LRT_k as well as relative parameter bias and parameter bias of the δ_k were examined when varying loading difference magnitude, factor variance ratio, factor loading pattern and sample size ratio. Rejection rates of model fit indices, including the χ² test, RMSEA, CFI, TLI and SRMR, under these varied conditions were also examined. The results indicated that violating the assumptions underlying the RI strategy did not affect the LRT_k or the δ_k. However, violating the assumption underlying the factor variance scaling method influenced Type I error rates of the LRT_k, particularly in unequal sample size conditions. Results also indicated that the four factors manipulated in this study had an impact on correct model rejection rates of the model fit indices. It is hoped that this study provides useful information to researchers concerning the use of the LRT_k and δ_k under factor scaling method assumption violations.
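For context, Hancock's effect size is a latent-variable analogue of Cohen's d: the difference between the groups' latent means scaled by the factor standard deviation. A sketch under the assumption of a pooled-variance weighting (see Hancock, 2001, for the exact form):

    import numpy as np

    def latent_mean_effect_size(kappa1, kappa2, phi1, phi2, n1, n2):
        # Standardized latent mean difference: the latent mean difference
        # divided by the pooled factor standard deviation (pooled-variance
        # weighting here is an assumption).
        pooled_phi = ((n1 - 1) * phi1 + (n2 - 1) * phi2) / (n1 + n2 - 2)
        return (kappa1 - kappa2) / np.sqrt(pooled_phi)

    # e.g. latent_mean_effect_size(0.5, 0.0, 1.0, 1.2, 250, 250) ~ 0.48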
230

旋翼UAS影像密匹配建物點雲自動分群之研究 / Automatic clustering of building point clouds from dense matching VTOL UAS images

林柔安, Lin, Jou An Unknown Date (has links)
The demand for 3D city models continues to grow, with applications in urban planning, city navigation, and virtual reality. Past research has mostly targeted LOD2 city models and focused on roof structures. In recent years, vertical and oblique images have increasingly been used as source data for reconstructing building facades. With the development of Unmanned Aircraft Systems (UAS), which can observe a whole 3D scene and capture images of an object of interest from completely different perspectives, high-resolution, high-overlap vertical and oblique images of buildings can be collected and dense image matching used to generate high-density point clouds, quickly yielding 3D information on both roofs and facades. This information can in turn support reconstruction at the LOD3 level, which first requires feature analysis of the data to extract feature points, lines, and surfaces. This study therefore computes features of dense-matching point clouds, obtaining principal orientations by PCA, and uses Mean Shift clustering (Comaniciu and Meer, 2002) to extract building point cloud information and to provide an optimal clustering strategy. A UAS was used to collect high-overlap images of a building, with the camera calibrated on an outdoor calibration field, and dense matching produced a high-density point cloud. After testing degrees of point cloud thinning on a single building's point cloud, point features were computed from the thinned result, the parameters of the Mean Shift procedure (Cheng, 1995) were tested on these data, and a clustering workflow was designed to separate planar from curved point groups while reducing reliance on thresholds. The experimental results show that the proposed clustering strategy automatically distinguishes planar from curved point groups and clusters the planar points onto individual facades.
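A minimal sketch of the core pipeline described above: estimate a per-point normal by PCA over each point's k-nearest-neighbor patch, then cluster with Mean Shift. The file name, neighborhood size, and bandwidth are illustrative assumptions; note that clustering on normals alone would merge parallel facades, so a fuller feature vector would also include point positions:

    import numpy as np
    from sklearn.cluster import MeanShift
    from sklearn.neighbors import NearestNeighbors

    def estimate_normals(points, k=20):
        # Per-point normal via PCA of the k-nearest-neighbor patch: the
        # right singular vector of the smallest singular value of the
        # centered patch.
        _, idx = NearestNeighbors(n_neighbors=k).fit(points).kneighbors(points)
        normals = np.empty_like(points)
        for i, neigh in enumerate(idx):
            patch = points[neigh] - points[neigh].mean(axis=0)
            normals[i] = np.linalg.svd(patch, full_matrices=False)[2][-1]
            if normals[i, 2] < 0:            # flip to a consistent hemisphere
                normals[i] = -normals[i]
        return normals

    points = np.loadtxt("building_points.xyz")   # hypothetical N x 3 input
    labels = MeanShift(bandwidth=0.2).fit_predict(estimate_normals(points))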
