201

Zhodnocení finanční situace Vodní záchranné služby a návrhy jejího financování / Evaluation of the Financial Situation in the Water Rescue Services and Proposals to Its Financing

Novotný, Zbyněk January 2012 (has links)
I chose the diploma thesis "Evaluation of the Financial Situation in the Water Rescue Services and Proposals to Its Financing" because the issue is highly topical and affects a large part of the population. The Water Rescue Service looks after the safety and health of people at more than 80 locations in the Czech Republic. Applying the methods of financial analysis makes it possible, in the practical part, to evaluate the situation of the Water Rescue Service. The core of the thesis is an analysis of one local group, while a further part presents the functioning of a few selected groups. All of the collected data lead to proposals and financing solutions that will help this organization continue to operate and care for the health of many people.
202

The Influence of Nozzle Spacing and Diameter on the Acoustic Emissions of Closely Spaced Supersonic Jet Arrays

Coltrin, Ian S. 02 February 2012 (has links) (PDF)
The acoustic emissions from supersonic jets represent an area of significant research need, not only in the field of aero-acoustics but in industry as well, where high-pressure letdown processes have been known to cause acoustically induced vibrations. A common method to reduce the acoustic emissions of such processes involves dividing the single larger supersonic flow into several smaller ones. Though this is common practice, there is not yet a model which describes the reduction of acoustic emissions from an array of smaller supersonic jets. Current research on supersonic jet arrays is mainly focused on the effects of screech. Though screech is important because of its high-amplitude acoustic pressure, this research focuses on the overall acoustic emissions radiated from supersonic jet arrays, which can cause severe acoustic loadings. This research experimentally investigated the acoustic emissions and shock formations from several eight-by-eight arrays of axisymmetric jets. The array nozzle diameters investigated ranged from 1/8 inch to 1/4 inch and the spacing-over-diameter ratio ranged from 1.44 to 3. The net pressure ratios investigated ranged from 2 to 24. Results revealed a strong correlation between the acoustic emissions and the shock formations of the flow. Up to a critical net pressure ratio, the overall sound pressure levels were comparable to those of a single jet within the array. At net pressure ratios beyond the critical value, the overall sound pressure levels transitioned to higher decibel levels, equivalent to a single jet whose exit area equals that of the entire array. The characteristic acoustic frequency emitted from a nozzle array remained ultrasonic (above 20 kHz) at lower net pressure ratios and then shifted to audible levels (between 20 Hz and 20 kHz) at net pressure ratios beyond the critical value. Likewise, before the critical net pressure ratio the shock cells from the jets within the array remained unmerged, but at net pressure ratios beyond the critical value the shock cells merged and formed lattices of weak oblique shocks at first and then strong oblique shocks as the net pressure ratio continued to increase. The critical net pressure ratio was investigated by non-dimensional analysis, which revealed that it was a strong linear function of the spacing-over-diameter ratio. A linear model was derived which is able to predict the critical net pressure ratio and, in turn, the critical shift in the acoustic emissions of a nozzle array.
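The reported finding, that the critical net pressure ratio is a strong linear function of the spacing-over-diameter ratio, can be sketched as a simple least-squares fit. The data points and fitted coefficients below are hypothetical placeholders (the abstract does not list the thesis's values), so this is only an illustration of the form NPR_crit = a·(S/D) + b, written in Python:

    import numpy as np

    # Hypothetical (S/D, critical NPR) observations; S/D spans the 1.44-3 range
    # mentioned in the abstract, but the NPR values are made up for illustration.
    s_over_d = np.array([1.44, 1.75, 2.0, 2.5, 3.0])
    npr_crit = np.array([6.0, 8.0, 9.5, 12.5, 15.5])

    # Least-squares fit of the linear model NPR_crit = a * (S/D) + b.
    a, b = np.polyfit(s_over_d, npr_crit, 1)

    def predict_critical_npr(spacing_over_diameter):
        """Predict the net pressure ratio at which the acoustic emissions shift."""
        return a * spacing_over_diameter + b

    print(predict_critical_npr(2.25))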
203

A study of the effects of free cash flow and capital structure on profitability of Nasdaq Stockholm companies

Karmestål, Victor, Rzayev, Mahir January 1996 (has links)
Free cash flow and capital structure are widely covered topics, with several studies conducted in previous years and markets. We set out to examine the possible effects of free cash flow and capital structure on the Stockholm Nasdaq OMX between the years 2018 and 2022. For this period, no previous study had been conducted on a population encompassing the entire market. We employed a deductive approach to perform our quantitative research. Using the ORBIS database, we gathered data on the variables free cash flow, debt ratio, debt-equity ratio, asset turnover ratio, return on equity and return on assets. Return on equity and return on assets served as our dependent variables, with free cash flow, debt ratio, debt-equity ratio and asset turnover ratio as independent variables. After testing the data for heteroskedasticity and autocorrelation, a fixed effects regression model was constructed and examined along with a Pearson's correlation test. Our results indicated a significant negative relationship between free cash flow and return on equity, as well as a significant positive relationship between asset turnover ratio and return on equity. From these results, we concluded that we had found evidence to support the financial slack theory, which highlights the importance of keeping an excess of resources to use when needed. The theory advocates using additional resources rather than allowing an overflow of assets to gather dust in inventory.
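As a rough sketch of the method described (an entity fixed-effects regression on panel data plus a Pearson correlation test), the Python fragment below uses statsmodels on a tiny made-up panel; the column names and values are assumptions, not the ORBIS data used in the study:

    import pandas as pd
    import statsmodels.formula.api as smf
    from scipy.stats import pearsonr

    # Hypothetical firm-year panel (the study covers Nasdaq Stockholm, 2018-2022).
    df = pd.DataFrame({
        "firm": ["A"] * 4 + ["B"] * 4 + ["C"] * 4,
        "year": [2018, 2019, 2020, 2021] * 3,
        "roe":  [0.12, 0.10, 0.08, 0.09, 0.15, 0.14, 0.16, 0.17, 0.05, 0.06, 0.07, 0.06],
        "fcf":  [0.30, 0.35, 0.40, 0.42, 0.20, 0.18, 0.15, 0.14, 0.50, 0.55, 0.60, 0.58],
        "debt_ratio":     [0.40, 0.42, 0.45, 0.44, 0.30, 0.31, 0.29, 0.28, 0.60, 0.58, 0.61, 0.62],
        "asset_turnover": [1.10, 1.00, 0.90, 0.95, 1.40, 1.50, 1.50, 1.60, 0.70, 0.80, 0.80, 0.75],
    })

    # Firm (entity) fixed effects via categorical dummies; year effects likewise.
    model = smf.ols("roe ~ fcf + debt_ratio + asset_turnover + C(firm) + C(year)",
                    data=df).fit()
    print(model.summary())

    # Pearson's correlation between free cash flow and return on equity.
    r, p_value = pearsonr(df["fcf"], df["roe"])
    print(r, p_value)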
204

An Analytical Tool for Calculating Co-Channel Interference in Satellite Links That Utilize Frequency Reuse

Chhabra, Saurbh 06 November 2006 (has links)
This thesis presents the results of the development of a user-friendly computer code (in MATLAB) that can be used to calculate co-channel interference, both in the downlink and in the uplink of a single satellite/space-based mobile communications system, due to the reuse of frequencies in spot beams or coverage cells. The analysis and computer code can be applied to any type of satellite or platform elevated at any height above the Earth. The cells or beams are defined in the angular domain, as measured from the satellite or the elevated platform, and cell centers are arranged in a hexagonal lattice. The calculation is performed for a single instant of time, for which the system parameters are input into the program. A single program run yields the overall carrier-to-interference ratio (CIR) along with the CIR for both the uplink and downlink paths. An overall carrier-to-noise-plus-interference ratio (CNIR) is also calculated, which quantifies the degradation in the carrier-to-noise ratio (CNR) of the system. Comparisons of differing system scenarios are also made; for example, overall CIRs are compared for different reuse numbers (3, 4, 7, and 13) in LEO and GEO satellite systems. In conclusion, as expected, it is observed that the co-channel interference generally increases as the reuse number employed in the cells decreases. It is also observed that co-channel interference can cause substantial degradation to the overall CNR of a system. / Master of Science
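Though the thesis's MATLAB code is not reproduced here, the overall ratios it reports are conventionally obtained by combining the individual ratios in linear units (adding reciprocals). The sketch below illustrates that standard link-budget practice with made-up uplink/downlink figures:

    import math

    def db_to_lin(x_db):
        return 10.0 ** (x_db / 10.0)

    def lin_to_db(x_lin):
        return 10.0 * math.log10(x_lin)

    def combine_reciprocal(*ratios_db):
        """Combine ratios (e.g. uplink and downlink CIR, or CNR with CIR)
        by summing their reciprocals in linear units."""
        total = sum(1.0 / db_to_lin(r) for r in ratios_db)
        return lin_to_db(1.0 / total)

    # Hypothetical single-beam figures, in dB.
    cir_up, cir_down, cnr = 18.0, 15.0, 12.0

    cir_overall = combine_reciprocal(cir_up, cir_down)    # overall CIR
    cnir_overall = combine_reciprocal(cnr, cir_overall)   # carrier / (noise + interference)
    print(round(cir_overall, 2), round(cnir_overall, 2))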
205

The Effect of Density Ratio on Steep Injection Angle Purge Jet Cooling for a Converging Nozzle Guide Vane Endwall at Transonic Conditions

Sibold, Ridge Alexander 17 September 2019 (has links)
The study presented herein describes and analyzes a detailed experimental investigation of the effects of density ratio on endwall thermal performance at varying blowing rates for a typical nozzle guide vane platform purge jet cooling scheme. An axisymmetric converging endwall with an upstream purge jet cooling scheme consisting of a double row of staggered cylindrical holes was employed. Nominal exit flow conditions were engine-representative: Ma_exit = 0.85, Re_exit,Cax = 1.5 × 10^6, and large-scale freestream Tu = 16%. Two blowing ratios were investigated, corresponding to the upper and lower engine extrema. Each blowing ratio was investigated at two density ratios: one representing the typical experimental neglect of density ratio, DR = 1.2, and an engine-representative density ratio achieved by mixing foreign gases, DR = 1.95. All tests were conducted on a linear cascade in the Virginia Tech Transonic Blowdown Wind Tunnel using IR thermography and transient data reduction techniques. Oil paint flow visualization techniques were used to gather information regarding the alteration of endwall flow physics due to the two different blowing rates of high-density coolant. High-resolution endwall adiabatic film cooling effectiveness, Nusselt number, and net heat flux reduction (NHFR) contour plots were used to analyze the thermal effects. The effect of density is dependent on the coolant blowing rate and varies greatly from the high to the low blowing condition. At the low blowing condition, better near-hole film cooling performance and heat transfer reduction are obtained with increasing density. However, high-density coolant at low blowing rates is not able to penetrate and suppress the secondary flows, leaving the suction side (SS) and pressure side (PS) largely exposed to high-velocity, high-temperature mainstream gases. Conversely, it is observed that density ratio only marginally affects the high blowing condition, as momentum effects become increasingly dominant. Overall, it is concluded that density ratio has a first-order impact on the secondary flow alterations and subsequent heat transfer distributions that occur as a result of coolant injection and should be accounted for in purge jet cooling scheme design and analysis. Additionally, the effect of increasing the blowing rate of high-density coolant was analyzed. Oil paint flow visualization indicated that significant secondary flow suppression occurs as a result of increasing the blowing rate of high-density coolant. Endwall adiabatic film cooling effectiveness, Nusselt number, and NHFR comparisons confirm this. Low blowing rate coolant has a more favorable thermal impact in the upstream region of the passage, especially near injection. The low-momentum coolant is eventually dominated and entrained by secondary flows, providing less effectiveness near the PS, near the SS, and into the throat of the passage. The higher momentum present at the high blowing rate of high-density coolant suppresses these secondary flows and provides enhanced cooling in the throat and in high secondary flow regions. However, the increased turbulence imparted by lift-off has an adverse effect on the heat load in the upstream region of the passage. It is concluded that only marginal gains near the throat of the passage are observed with an increase in high-density coolant blowing rate, but a severe thermal penalty is observed near the passage onset. / Master of Science / Gas turbine technology is frequently used in burning natural gas for power production.
Increases in engine efficiency are observed with increasing firing temperatures; however, this leads to the potential for overheating in the stages that follow. To prevent failure or melting of components, cooler air is extracted from the upstream compressor section and used to cool these components through various highly complex cooling schemes. The design and operational adequacy of these schemes are highly sensitive to the mainstream and coolant flow conditions, which are hard to represent in a laboratory setting. This experimental study explores the effects of various coolant conditions, and their respective responses, for a purge jet cooling scheme commonly found in engines. This scheme utilizes two rows of staggered cylindrical holes to inject air into the mainstream from the platform, upstream of the nozzle guide vane. The hope is that this air forms a protective layer, effectively shielding the platform from the hostile mainstream conditions. Currently, little research has been done to quantify the effects of a purge flow cooling scheme while mimicking engine geometry and mainstream and coolant conditions. For this study, an endwall geometry like that found in an engine, with a purge jet cooling scheme, is studied. Commonly, an upstream gap is formed between the combustor lining and the first stage vane platform, which is accounted for in this testing. Mainstream and coolant flow conditions can have large impacts on the results gathered, so both were matched to engine conditions. Variation of coolant density and injection rate is studied and quantitative results are gathered. Results indicate that coolant fluid density plays a large role in purge jet cooling and that, if it is neglected, potential thermal failure points could be overlooked. This is exacerbated with less coolant injection. Interestingly, increasing the amount of coolant injected decreases performance across much of the passage, with only marginal gains in regions of complex flow. These results help to better explain the impacts of the experimental neglect of coolant density and aid in the understanding of purge jet coolant injection.
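For reference, the non-dimensional quantities named above (density ratio, blowing ratio, adiabatic film cooling effectiveness) follow standard film-cooling definitions. The sketch below encodes those textbook forms with placeholder numbers; it is not taken from the thesis's data reduction:

    def density_ratio(rho_coolant, rho_mainstream):
        return rho_coolant / rho_mainstream            # DR

    def blowing_ratio(rho_c, u_c, rho_inf, u_inf):
        return (rho_c * u_c) / (rho_inf * u_inf)       # M: coolant mass flux / mainstream mass flux

    def adiabatic_effectiveness(t_inf, t_aw, t_c):
        """eta = (T_inf - T_aw) / (T_inf - T_c); 1 means the wall sits at coolant temperature."""
        return (t_inf - t_aw) / (t_inf - t_c)

    # Placeholder inputs; the study reports DR = 1.2 and DR = 1.95 at engine-representative conditions.
    print(density_ratio(2.20, 1.13))
    print(blowing_ratio(2.20, 60.0, 1.13, 280.0))
    print(adiabatic_effectiveness(420.0, 360.0, 300.0))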
206

South Africa's economic policies on unemployment : a historical analysis of two decades of transition / Lorainne Steenkamp

Steenkamp, Lorainne January 2015 (has links)
After twenty years of democracy, the most pressing problem facing South Africa is the absence of sustainable economic growth and job creation. Since 1994, major economic reforms and adjustments have been made, which were seen as a requirement for achieving economic growth and development. Despite these efforts, however, unemployment in South Africa remains a challenging problem. The main objectives of the study are, firstly, to examine South Africa’s economic policy initiatives implemented since 1994; secondly, to determine, through a review of those policies, whether the unemployment situation has improved; and finally, to examine the changes in employment and, more specifically, the cost-neutral change in the capital/labour (K/L) ratio between 1995 and 2013 by means of a historical Computable General Equilibrium (CGE) modelling approach. The literature study focuses on employment, growth and human capital theories to reflect on the present state of knowledge and to contribute to evidence-based policy debates. It also provides an overview of South Africa’s economic policy, programmes and strategy decisions and of the country’s economic stance since the transition to democracy in 1994, with a specific focus on the labour market. Historical CGE modelling, applied using PEKGEM, a dynamic CGE model of the South African economy, was chosen to examine the relationship between growth and structural changes under the different economic and development policies in South Africa between 1995 and 2013. The primary aim was to determine how the dynamics and structure of South African employment changed during the period in which these policies were implemented, using the historical CGE modelling approach. The focus was primarily on changes in the capital and labour markets across all sectors over this period. The results indicate an increase in capital relative to labour (K/L) over the period 1995 to 2013, despite the increase seen in the rental price of capital relative to wages (PK/PL). To better understand the structural shift, the theoretical specification of the capital/labour preference within PEKGEM was considered. The results suggest that, at any given ratio of real wages relative to the rental price of capital, industries would choose a K/L ratio 8.1 per cent higher in 2013 than they would have in 1995. Considering that South Africa has a comparative advantage in unskilled labour-intensive goods, especially given the country’s abundance of labour and high levels of unemployment, the shortcomings of South Africa’s economic policies in addressing the pressing issue of unemployment are emphasised. / MCom (Economics), North-West University, Potchefstroom Campus, 2015
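The 8.1 per cent capital/labour preference shift can be pictured as a technology "twist" applied to an otherwise unchanged factor-demand rule. The sketch below uses a textbook CES cost-minimising K/L ratio with hypothetical parameter values; it is only an illustration of the idea, not the PEKGEM specification:

    def kl_ratio(wage, rental, delta=0.4, sigma=0.8, twist=1.0):
        """Cost-minimising capital/labour ratio for a CES technology:
        K/L = twist * (delta / (1 - delta))**sigma * (wage / rental)**sigma,
        where 'twist' is a capital-using technical-change multiplier."""
        return twist * (delta / (1.0 - delta)) ** sigma * (wage / rental) ** sigma

    # Hold relative factor prices fixed and apply the estimated preference shift.
    kl_1995 = kl_ratio(wage=1.0, rental=1.0)
    kl_2013 = kl_ratio(wage=1.0, rental=1.0, twist=1.081)   # 8.1 per cent higher K/L preference
    print(kl_2013 / kl_1995 - 1.0)                          # ~0.081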
208

Evaluation of different types of fats for use in high-ratio layer cakes

Zhou, Jianmin January 1900 (has links)
Master of Science / Department of Grain Science and Industry / Jon M. Faubion / Charles E. Walker / Shortening is a major ingredient used in high-ratio layer cakes. Plastic shortenings are most commonly used by the U.S. baking industry, but their high levels of trans or saturated fats cause health concerns. Compared to plastic shortenings, liquid shortenings could significantly reduce the dependence on high-melting-point fats, and the emulsifiers used would enhance the shortening’s functionality. The objective of this research was to compare the influence of different types of fats on the texture and shelf-life of high-ratio layer cakes. Cakes were baked with soybean oil to evaluate the effect of three emulsifiers (PGMS, GMS, and lecithin) on layer cake quality, including volume, cake score, interior visual texture (C-Cell), and firmness (Voland-Stevens). An optimum emulsifier combination was chosen (PGMS 1.8%, GMS 1.0% and lecithin 0.8%) for addition to the liquid oil. Four groups of layer cakes were baked using plastic shortening, liquid shortening, liquid oil, or liquid oil plus the emulsifier combination. Cake performance and firming over time were evaluated. The liquid shortening provided the best fresh cake characteristics and cake firmness performance. Liquid oil combined with the added emulsifiers performed very similarly to the liquid shortening in terms of firmness. This indicated that emulsifiers played an important role in improving cake firmness shelf-life.
209

Fibonacci numbers and the golden rule applied in neural networks

Luwes, N.J. January 2010 (has links)
Published Article / In the 13th century the Italian mathematician Fibonacci, also known as Leonardo da Pisa, identified a sequence of numbers that appeared to recur throughout nature (http://en.wikipedia.org/wiki/Fibonacci) (Kalman, D. et al. 2003: 167). Later, the golden ratio was encountered in nature, art and music. This ratio can be seen in the distances in simple geometric figures. It is linked to the Fibonacci numbers: dividing a Fibonacci number by the one immediately preceding it gives a ratio that settles down to a particular value of about 1.618 (http://en.wikipedia.org/wiki/Fibonacci) (He, C. et al. 2002:533) (Cooper, C et al 2002:115) (Kalman, D. et al. 2003: 167) (Sendegeya, A. et al. 2007). Artificial intelligence, and neural networks in particular, is the science and engineering of using computers to understand human intelligence (Callan R. 2003:2), yet humans and most things in nature conform to Fibonacci numbers and the golden ratio. Since neural networks use algorithms modelled on the human brain, the aim is to prove experimentally that using Fibonacci numbers as weights, and the golden rule as a learning rate, might improve learning curve performance. If the performance is improved, it would suggest that neural network algorithms represent their natural counterpart. Two identical neural networks were coded in LabVIEW, with the only difference being that one had random weights and the other (the adapted one) had Fibonacci weights. The results were that the Fibonacci neural network had a steeper learning curve. This improved performance with the neural algorithm, under these conditions, suggests that the formula is a true representation of its natural counterpart or, vice versa, that if the formula is a simulation of its natural counterpart, then the weights in nature are Fibonacci values.
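A minimal sketch of the experiment's idea (not the author's LabVIEW code): initialise a simple perceptron's weights from scaled Fibonacci numbers and use a golden-ratio-derived learning rate. The learning-rate value 1/1.618 and the toy data are assumptions made for illustration:

    import numpy as np

    def fibonacci(n):
        seq = [1, 1]
        while len(seq) < n:
            seq.append(seq[-1] + seq[-2])
        return np.array(seq[:n], dtype=float)

    rng = np.random.default_rng(0)

    # Toy linearly separable classification data.
    X = rng.normal(size=(200, 5))
    y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]) > 0).astype(float)

    # Fibonacci-based initial weights, scaled into [0, 1]; golden-ratio learning rate.
    w = fibonacci(5) / fibonacci(5).max()
    lr = 1.0 / 1.618

    # Plain perceptron training loop.
    for epoch in range(50):
        for xi, target in zip(X, y):
            pred = float(xi @ w > 0)
            w += lr * (target - pred) * xi

    print(np.mean((X @ w > 0).astype(float) == y))   # training accuracy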
210

TELEMETRY LINK RELIABILITY IMPROVEMENT VIA “NO-HIT” DIVERSITY BRANCH SELECTION

Jefferis, Robert P. 10 1900 (has links)
International Telemetering Conference Proceedings / October 18-21, 2004 / Town & Country Resort, San Diego, California / Multipath propagation consisting largely of specular reflection components is known to be the major channel impairment in many aeronautical mobile telemetry (AMT) applications. Adaptive equalizers are not effective against flat fading commonly created by strong power delay profile components representing small fractions of the transmitted symbol period. Avoidance and diversity techniques are the only practical means of combating this problem. A new post-detection, no-hit diversity branch selector is described in this paper. Laboratory and limited flight test data comparing non-diversity, selection diversity and intermediate frequency (IF) combining techniques are presented.
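The details of the no-hit selector are given in the paper; as a generic, hedged illustration of post-detection selection diversity between two branches, the fragment below picks, block by block, the branch whose quality metric has not registered a dropout ("hit"), falling back to the stronger branch when both are clean. The metric and threshold are assumptions, not the paper's design:

    import numpy as np

    def select_branch(quality_a, quality_b, hit_threshold=0.2):
        """Per-block selection between two diversity branches: 0 -> branch A, 1 -> branch B."""
        choice = np.empty(len(quality_a), dtype=int)
        for i, (qa, qb) in enumerate(zip(quality_a, quality_b)):
            hit_a, hit_b = qa < hit_threshold, qb < hit_threshold
            if hit_a and not hit_b:
                choice[i] = 1                       # A dropped out, take B
            elif hit_b and not hit_a:
                choice[i] = 0                       # B dropped out, take A
            else:
                choice[i] = 0 if qa >= qb else 1    # neither (or both) hit: take the stronger
        return choice

    # Hypothetical per-block quality metrics for two receiver branches.
    qa = np.array([0.9, 0.1, 0.8, 0.7, 0.05])
    qb = np.array([0.6, 0.7, 0.15, 0.9, 0.40])
    print(select_branch(qa, qb))   # -> [0 1 0 1 1]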
