121

Påverkan på aktiekursen vid nyemissions meddelande inom hälsovårdsföretag / The impact on share price at equity issue notice in the health business

Akhavian, Arash, Habtigeorgis, Meron January 2010 (has links)
No description available.
122

The Momentum Effect: Evidence from the Swedish stock market

Vilbern, Marcus January 2008 (has links)
This thesis investigates the profitability of the momentum strategy in the Swedish stock market. The momentum strategy is an investment strategy in which past winners are bought and past losers are sold short. In this paper Swedish stocks are analyzed during the period 1999–2007 with the approach first used by Jegadeesh and Titman (1993). The results indicate that momentum investing is profitable on the Swedish market. The main contribution to the profits comes from investing in winners, while the losers in most cases do not contribute at all to total profits. The profits remain after correcting for transaction costs for longer-term strategies, while they diminish for the shorter-term ones. Compared to the market index, buying past winners yields an excess return, while short selling of losers tends to make index investing more profitable. The analysis also shows that momentum cannot be explained by the systematic risk of the individual stocks. The evidence in support of a momentum effect presented in this thesis also implies that predictable price patterns can be used to make excess returns; this contradicts the efficient market hypothesis.
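The formation-and-holding logic described above (rank stocks on trailing returns, buy the top group, short the bottom group, and hold for a fixed number of months) can be sketched in a few lines. This is only a minimal illustration of a Jegadeesh-Titman style backtest, not the code used in the thesis; the six-month formation and holding windows, the decile cutoff, and the input format are assumptions.

```python
import pandas as pd

def momentum_wml_returns(monthly_returns: pd.DataFrame,
                         formation: int = 6,
                         holding: int = 6,
                         quantile: float = 0.1) -> pd.Series:
    """Winner-minus-loser (WML) returns for a Jegadeesh-Titman style strategy.

    monthly_returns: simple monthly returns, one column per stock,
    indexed by month-end dates.
    """
    # Cumulative return over the formation window, used to rank stocks.
    formation_ret = (1 + monthly_returns).rolling(formation).apply(
        lambda w: w.prod() - 1, raw=True)

    wml = {}
    for t in range(formation, len(monthly_returns) - holding):
        ranks = formation_ret.iloc[t - 1].dropna()
        n = max(int(len(ranks) * quantile), 1)
        winners = ranks.nlargest(n).index
        losers = ranks.nsmallest(n).index
        # Equal-weighted buy-and-hold return over the holding period.
        hold = (1 + monthly_returns.iloc[t:t + holding]).prod() - 1
        wml[monthly_returns.index[t]] = hold[winners].mean() - hold[losers].mean()
    return pd.Series(wml, name="WML")
```

A positive average of the resulting WML series would point in the same direction as the profits reported above.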
123

The stock market and government debt : the impact of government debt changes on the stock market

Gerleman, Wendela January 2012 (has links)
This thesis investigates whether or not changes in a country's government debt could affect its domestic stock market performance. The relationship is investigated by examining three different European countries, Germany, Portugal and Sweden, on the basis of two variables: (1) quarterly government debt changes as a percentage of gross domestic product and (2) quarterly stock market changes over the period 2000:Q2–2011:Q2. The evidence is presented with the help of the Ordinary Least Squares method and a Granger causality test for each respective country. According to the Efficient Market Hypothesis, stock market prices should fully reflect all relevant information, e.g. government debt changes, as soon as it occurs and without any delay, if the market is efficient. Past information should be insignificant and therefore not affect stock market prices in an efficient market. In the cases of Sweden and Germany, the results proved to be ambiguous and thus allow for neither rejection nor acceptance of the Efficient Market Hypothesis with respect to government debt changes. However, some support was found in the case of Germany, since government debt changes and stock market performance were instantaneously correlated. The empirical results presented in this thesis further suggest that Portugal was not able to capture changes in debt levels efficiently, i.e. without any delay. This indicates that the Efficient Market Hypothesis can be rejected for Portugal with respect to government debt changes. Furthermore, since the Portuguese stock market did not capture changes in the government debt level efficiently, stock prices could mislead about the direction of the economy if used to judge economic conditions. Moreover, the results imply that each country faces a different relationship between the variables and that this relationship may depend on the economic health of the country.
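The two tools mentioned above, an OLS regression and a Granger causality test, map directly onto standard statsmodels calls. The sketch below shows that recipe under assumed column names and an assumed lag length of four quarters; it illustrates the test setup, not the author's exact specification.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import grangercausalitytests

def debt_vs_stock_tests(df: pd.DataFrame, maxlag: int = 4):
    """OLS and Granger causality tests between quarterly stock and debt changes.

    df columns (assumed names): 'stock_change' (quarterly % change of the
    stock index) and 'debt_change' (government debt change as % of GDP).
    """
    # Contemporaneous OLS: does the current debt change co-move with returns?
    ols = sm.OLS(df["stock_change"], sm.add_constant(df["debt_change"])).fit()
    # Granger test: do lagged debt changes help predict stock changes?
    # Under the EMH they should not, since that information is already public.
    granger = grangercausalitytests(df[["stock_change", "debt_change"]], maxlag)
    return ols, granger
```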
124

Efficient Algorithms for Parallel Excitation and Parallel Imaging with Large Arrays

Feng, Shuo 16 December 2013 (has links)
During the past two decades, techniques and devices have been developed to transmit and receive signals with a phased array instead of a single coil in the MRI (Magnetic Resonance Imaging) system. The two techniques for simultaneously transmitting and receiving RF signals using phased arrays are called parallel excitation (pTx) and parallel imaging (PI), respectively. These two techniques lead to shorter transmit pulses for higher imaging quality and to faster data acquisition, respectively. This dissertation focuses on improving the efficiency of pTx pulse design and PI reconstruction in MRI. Both PI and pTx benefit from an increased number of array elements. However, efficiency concerns arise, which include: (1) in PI, the computation cost of the reconstructions and the achievable acceleration factors, and (2) in pTx, the pulse design speed and memory cost. The work presented in this dissertation addresses these issues. First, a correlation-based channel reduction algorithm is developed to reduce the computation cost of PI reconstruction. In conventional k-domain methods, the individual channel data is reconstructed via linear interpolation of the neighbourhood data from all channels. In the proposed algorithm, we choose only a subset of the channels based on their spatial correlation. The results show that the computation cost can be significantly reduced with similar or higher reconstruction accuracy. Then, a new parallel imaging method named parallel imaging using localized receive arrays with Sinc interpolation (PILARS) is proposed to improve the actual acceleration factor and to reduce the computation cost. It exploits the local support of individual coils and pre-determines the magnitude of the reconstruction coefficients. Thus, it requires much less auto-calibration signal (ACS) data and achieves higher acceleration factors. The results show that this method can increase the acceleration factor and the reconstruction speed while achieving the same level of reconstruction quality. Finally, a fast pTx pulse design method is proposed to accelerate the design speed. This method is based on the spatial-domain pulse design method and can be used to accelerate similar methods. We substitute the two computationally expensive matrix-vector multiplications in the conjugate gradient (CG) solver with gridding and fast Fourier transform (FFT) operations. Theoretical and simulation results show that the design speed can be improved by 10 times, while the memory cost is reduced by a factor of about 10^3. This removes the memory bottleneck of implementing pulse designs on GPUs, which enables further acceleration.
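The final speed-up comes from replacing dense matrix-vector products inside the CG iterations with FFT-based operations. As a toy illustration of why this helps, the sketch below applies a circulant matrix to a vector via the FFT instead of building the dense matrix; the dissertation's actual method uses gridding/NUFFT on the MRI system matrix, which this simplified example does not reproduce.

```python
import numpy as np

def circulant_matvec_fft(first_col: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Multiply a circulant matrix (defined by its first column) by x via FFT.

    A dense build-and-multiply costs O(N^2) time and O(N^2) memory; the FFT
    route needs only the first column, i.e. O(N) memory and O(N log N) time.
    """
    return np.real(np.fft.ifft(np.fft.fft(first_col) * np.fft.fft(x)))

# Sanity check against the explicit dense product.
rng = np.random.default_rng(0)
c = rng.standard_normal(256)
x = rng.standard_normal(256)
dense = np.array([np.roll(c, k) for k in range(256)]).T @ x
assert np.allclose(dense, circulant_matvec_fft(c, x))
```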
125

Small Residence Multizone Modeling with Partial Conditioning for Energy Efficiency in Hot and Humid Climates

Andolsun, Simge 16 December 2013 (has links)
The purpose of this study is to reduce the energy cost of low-income households in the hot and humid climates of the U.S. and thereby to help them afford comfortable homes. In this perspective, a new HVAC energy saving strategy, i.e. "partial conditioning", was modeled and its potential to reduce the HVAC energy consumption of low-income homes in Texas was quantified. The "partial conditioning" strategy combines three primary ideas: 1) using historic courtyard building schemes to provide a buffer zone between conditioned spaces, 2) zoning and applying occupancy-based heating/cooling in each zone, and 3) reusing the conditioned air returning from the occupied zones in the unoccupied zones before it is returned to the system. The study was conducted in four steps: 1) data collection, 2) baseline design and modeling, 3) partial conditioning design and modeling, and 4) analyses and recommendations. First, a site visit was made to the Habitat for Humanity office in Bryan, Texas to collect data on the characteristics of the Habitat for Humanity houses built in Bryan. Second, a baseline Habitat for Humanity house was designed and modeled based on this information along with multiple other resources, including the International Energy Conservation Code 2012 and the Building America benchmark definitions. A detailed comparison was made between the commonly used energy modeling tools (DOE-2.1e, EnergyPlus and TRNSYS), and a modeling method was developed for the estimation of the baseline energy consumption. Third, the "partial conditioning" strategy was introduced into the baseline energy model to simulate a partially conditioned atrium house. As the occupied zone and the direction of the airflow changed throughout the year in the partially conditioned house, this step required an innovative air loop model with interzonal air ducts that allowed for scheduled bi-directional airflow. This air loop was modeled with the AirflowNetwork model of EnergyPlus. Fourth, the modeling results were analyzed and discussed to determine the performance of the partial conditioning strategy in a hot and humid climate. It was found that the partial conditioning strategy can provide a substantial (37%-46%) reduction in the overall HVAC energy consumption of small residences (∼1,000 ft²) in hot and humid climates while performing better in meeting the temperature set points in each room. It was also found that the amount of energy savings obtainable with the partial conditioning strategy depends significantly on the ground coupling condition of the house for low-rise residential buildings.
126

Finding Value Through Sustainable Performance : A cross-sectional study of the relationship between risk-adjusted return and Environmental, Social and Governance performance on the Indian stock market

Johansson, Christoffer, Lundström, Petter January 2015 (has links)
Problem background and discussion: Emerging countries' economies are growing substantially; one of these is India, whose stock market has been one of the best performing in the world in recent years. Analysts are forecasting further development, and some claim that India has the most business- and investment-stimulating political leadership in the world. However, stock markets in emerging countries are highly volatile and normally riskier than in developed economies. One approach to addressing the risks more common in emerging countries is to include Environmental, Social and Governance (ESG) ratings in the fundamental investment model. However, previous studies conflict on ESG investments: some argue there is a positive relation and others a negative relation between ESG factors and risk-adjusted return. Research question: "Is there a relation between risk-adjusted return and ESG performance on the Indian stock market?" Objective: The objective is to determine if there is a relationship between ESG performance and risk-adjusted return in India. Another objective is to determine if there is such a relationship among companies with high Total ESG ratings as well as among companies with low Total ESG ratings. Theoretical framework: ESG is an established approach to describing sustainability issues, where screening is a process designed to select those companies that meet ESG criteria. A basic description of the Capital Asset Pricing Model (CAPM), which calculates an asset's expected return, has been used to calculate risk-adjusted return. The Efficient Market Hypothesis (EMH) is the basic theory of market efficiency and is used to explain any non-linear relationship between ESG factors and risk-adjusted returns. The Adaptive Market Hypothesis (AMH) has been taken into account as it deals with financial behaviour. Method: A quantitative study using a deductive approach was selected to perform this study. The practical approach is a cross-sectional study in which the relationship on the Indian market was analysed and significance-tested for 2014. ESG information for 126 companies listed on the Bombay Stock Exchange (BSE) was purchased from Sustainalytics, a global leader in research for responsible investment. Empirical findings and analysis: The results of the study demonstrate no significant relationship between Total ESG rating and risk-adjusted return during 2014. In the examination of individual categories, the Environmental and Social ratings do not have a significant association with risk-adjusted return. However, the results display a negative relationship between the Governance rating and risk-adjusted return. This relationship is also found among companies with low Total ESG ratings but not among companies with high ESG ratings. Conclusion: The results imply that investors were not able to use information on Total ESG performance to obtain a better risk-adjusted return on the Indian stock market in 2014; however, this could be achieved by using the Governance rating.
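The CAPM-based risk-adjusted return mentioned in the theoretical framework is commonly operationalised as Jensen's alpha from a time-series regression, followed by a cross-sectional regression of alpha on ESG scores. The sketch below follows that generic recipe; the column layout and the use of alpha (rather than, say, the Sharpe ratio) are assumptions and not necessarily the authors' exact procedure.

```python
import numpy as np
import statsmodels.api as sm

def jensens_alpha(stock_ret, market_ret, rf):
    """CAPM time-series regression: r_i - r_f = alpha + beta * (r_m - r_f) + e."""
    excess_stock = np.asarray(stock_ret) - np.asarray(rf)
    excess_market = np.asarray(market_ret) - np.asarray(rf)
    fit = sm.OLS(excess_stock, sm.add_constant(excess_market)).fit()
    return fit.params[0]  # alpha = risk-adjusted (abnormal) return

def esg_cross_section(alphas, esg_scores):
    """Cross-sectional regression of risk-adjusted return on ESG rating."""
    fit = sm.OLS(np.asarray(alphas), sm.add_constant(np.asarray(esg_scores))).fit()
    return fit.params[1], fit.pvalues[1]  # slope and its p-value
```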
127

Momentum Crashes in Sweden : NASDAQ OMX Stockholm from a Momentum Perspective

Blackestam, Andreas, Setterqvist, Viktor January 2014 (has links)
Momentum, or the basic idea of the momentum effect in finance, is that rising asset prices tend to continue rising while falling prices continue to fall. As such, a momentum strategy is based on the idea that previous returns will predict future returns. Following this line of thought, a momentum strategy is generally based on buying past winners and taking short positions in past losers. This quantitative study addresses the phenomenon of momentum crashes, which are moments in time when a momentum strategy fails and past losers outperform past winners. In our study we set out to study the momentum crash phenomenon during the years 2006-2012 on NASDAQ OMX Stockholm, focusing specifically on the Small and Large Cap segments. As we intend to explore the concept of momentum crashes as thoroughly as possible, we also research momentum itself during this time period, as the two concepts are inevitably intertwined. To do this, we apply portfolio construction methods commonly used in previous momentum research. These portfolios are based on past winners and past losers, and their performance is then tracked over different lengths of time, which allows us to identify points in time where momentum crashes have occurred. What we found was that, while we gathered data indicative of momentum trends during our chosen time period, we could not prove that momentum existed to any statistically meaningful degree. As for momentum crashes, we identified many points in time where the past-loser portfolios outperformed the past-winner portfolios, resulting in negative winner-minus-loser portfolio returns and momentum crashes. The most interesting aspect of these findings was that the highest frequencies of momentum crashes were found in 2008 and 2009, where we made the most negative winner-minus-loser portfolio observations. This finding is in line with similar research on other populations, as momentum crashes are theorized to occur at a higher frequency during times of market stress and high volatility. Furthermore, we also made some interesting connections between our findings and behavioral finance; we identified certain patterns which could indicate a relationship between the two. As for the research gap and the ultimate contribution of this study, we have increased the knowledge, understanding and awareness of momentum crashes in Sweden, and we have shown during which times these are likely to occur in a Swedish context. Additionally, we have also increased the general knowledge of momentum by exploring it from a Swedish perspective.
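Given a monthly winner-minus-loser (WML) return series such as the one sketched under entry 122 above, crash episodes can be flagged simply as months where WML falls below some cutoff and then counted per calendar year to look for clustering (e.g. in 2008-2009). The cutoff and the month-indexed input are illustrative assumptions, not the authors' definition.

```python
import pandas as pd

def flag_momentum_crashes(wml: pd.Series, threshold: float = -0.10):
    """Flag crash months in a winner-minus-loser return series.

    wml: monthly WML portfolio returns indexed by month-end dates.
    The -10% cutoff is an illustrative choice, not the thesis's definition.
    """
    out = pd.DataFrame({"wml": wml})
    out["crash"] = out["wml"] < threshold
    # Crash counts per calendar year, e.g. to check for clustering in 2008-2009.
    crashes_per_year = out.groupby(out.index.year)["crash"].sum()
    return out, crashes_per_year
```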
128

Design and analysis for efficient simulation in petrochemical industry / R.F. Rossouw

Rossouw, Ruan Francois January 2008 (has links)
Building an industrial simulation model is a very time- and cost-intensive exercise because these models are large and consist of complicated computer code. Fully understanding the relationships between the inputs and the outputs is not straightforward, and therefore utilizing these models only for ad hoc scenario testing would not be cost effective. The methodology of Design and Analysis of Simulation Experiments (DASE) is proposed to explore the design space and proactively search for optimization opportunities. The system is represented by the simulation model, and the aim is to conduct experiments on the simulation model. The surrogate models (metamodels) are then used in lieu of the original simulation code, facilitating exploration of the design space, optimization, and reliability analysis. To explore the DASE methodology, different designs and approximation models from the DASE as well as the Design and Analysis of Computer Experiments (DACE) literature were evaluated for modeling the overall availability of a chemical reactor plant as a function of a number of process variables. Both mean square error and maximum absolute error criteria were used to compare the different design-by-model combinations. Response surface models and kriging models were evaluated as approximation models. The best design-by-model combination was found to be the Plackett-Burman design (screening phase), the fractional factorial design (interaction phase) and the response surface model (approximation model). Although this result might be specific to this case study, it is provided as a general recommendation for the design and analysis of simulation experiments in industry. In addition, the response surface model was used to explore the design space of the case study and to evaluate the risks in the design decisions. The factors significant for plant availability were identified for future pilot plant optimization studies. An optimum operating region in the design variables was obtained for maximum plant availability. Future research topics are proposed. / Thesis (M.Sc. (Computer Science))--North-West University, Vaal Triangle Campus, 2009.
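A minimal version of the approximation step reads: fit a full second-order polynomial (response surface) metamodel to availability values from the designed simulation runs, then score it on held-out runs with the two criteria named above, mean square error and maximum absolute error. The quadratic form and variable layout below are assumptions for illustration, not the study's exact model.

```python
import numpy as np

def fit_response_surface(X: np.ndarray, y: np.ndarray):
    """Least-squares fit of a full second-order polynomial metamodel.

    X: (n_runs, n_factors) design matrix of process-variable settings.
    y: (n_runs,) simulated plant availability at those settings.
    Returns a prediction function for new settings.
    """
    n, k = X.shape
    cols = [np.ones(n)]                                   # intercept
    cols += [X[:, i] for i in range(k)]                   # linear terms
    cols += [X[:, i] * X[:, j]
             for i in range(k) for j in range(i, k)]      # squares and interactions
    Z = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)

    def predict(Xnew: np.ndarray) -> np.ndarray:
        m = Xnew.shape[0]
        c = [np.ones(m)] + [Xnew[:, i] for i in range(k)]
        c += [Xnew[:, i] * Xnew[:, j] for i in range(k) for j in range(i, k)]
        return np.column_stack(c) @ beta

    return predict

# Accuracy criteria used to compare design/model combinations.
def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

def max_abs_err(y_true, y_pred):
    return float(np.max(np.abs(y_true - y_pred)))
```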
130

EE-GSEC: An Energy Efficient Diversity Combining Scheme

Bains, Harpreet 23 October 2014 (has links)
An energy-efficient diversity scheme based on the well-researched Generalized Switch-and-Examine Combining (GSEC) scheme is presented. The presented scheme is more efficient in the sense that it provides a better average combined SNR per active path. This results in considerable processing-power savings at the receiver, especially compared to the GSC scheme. EE-GSEC performance in terms of the average combined SNR, outage probability and average bit error rate (BER) is comparable to GSEC under certain conditions. EE-GSEC's complexity is better than that of GSC and the same as that of GSEC, which results in considerable hardware cost savings at the receiver. However, the complexity savings come at the cost of performance when compared to GSC. This is a natural trade-off and needs to be considered when designing a wireless communication system. A thorough statistical analysis of the presented scheme is performed and then used to mathematically formulate the performance and complexity expressions. Using simulations, the performance and complexity of EE-GSEC are examined and compared against other competing schemes. An energy efficiency analysis that validates the efficiency claims of the scheme is also performed. / Graduate
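To make the "average combined SNR per active path" comparison concrete, the toy Monte Carlo sketch below compares classic GSC (always combine the Lc strongest of L Rayleigh-faded branches) with a simplified sequential switch-and-examine selection. It only illustrates the kind of metric being compared; it is not the EE-GSEC algorithm, and the threshold, branch count and fading model are assumptions.

```python
import numpy as np

def simulate_combined_snr(L=4, Lc=2, avg_snr=1.0, threshold=0.8,
                          trials=100_000, seed=0):
    """Toy Monte Carlo: mean combined SNR of GSC vs a simplified
    switch-and-examine selection over i.i.d. Rayleigh-faded branches."""
    rng = np.random.default_rng(seed)
    # Exponentially distributed branch SNRs (Rayleigh fading) with mean avg_snr.
    snr = rng.exponential(avg_snr, size=(trials, L))

    # GSC: always combine the Lc strongest of the L branches.
    gsc = np.sort(snr, axis=1)[:, -Lc:].sum(axis=1)

    # Simplified switch-and-examine: scan branches in order and accept those
    # above the threshold until Lc are connected; if the scan ends short,
    # fill the remaining slots with branches not already chosen.
    sec = np.empty(trials)
    for t in range(trials):
        chosen = [i for i in range(L) if snr[t, i] >= threshold][:Lc]
        for i in range(L - 1, -1, -1):
            if len(chosen) >= Lc:
                break
            if i not in chosen:
                chosen.append(i)
        sec[t] = snr[t, chosen].sum()

    return gsc.mean(), sec.mean()

# Example: gsc_mean, sec_mean = simulate_combined_snr()
```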
