271

Domestic InFlux

Tannenbaum, Samuel 16 September 2013 (has links)
“Domestic InFlux” is a thesis that, through the study of past typologies and modern technologies, creates a platform for producing new forms of privately owned houses that allow users to accommodate their changing needs with minimal effort. “Domestic InFlux” overlaps typologies and program so that the occupant can use the house for any function by collapsing certain programs and allowing others to expand. Although today’s house is larger than in the past, the average family is smaller. Houses have also become more segmented, isolating program that the architecture confines to predefined areas. Technology is partially responsible for this change: with improved technology we have become less interested in taking advantage of the site and environment in our life at home, and more interested in using that technology to block out the rest of the world and disregard the potential of the site. As technology develops and the world becomes more efficient, so should the house. As a society we have conflicting desires: we want to live in the city, but we also want to live in a mansion; we want more stuff, but we don’t want to look at it. Special occasions require increased occupancy in spaces that sit unoccupied for most of the year. With “Domestic InFlux”, a mansion can fit into a row house, turning it into a Swiss Army house, a house for every need. The “Domestic InFlux” house is no longer passive. It is interactive and dynamic, influencing the way we perceive space at every scale, including that of the neighborhood. As these houses aggregate, the residual spaces become outdoor rooms that can be occupied by the community.
272

Piotroski's F-score : A Fundamental Model for Stock Valuation

Falk, Robin, Håkansson, Björn January 2013 (has links)
When analyzing companies, there are several valuation models to choose from. One of these is Piotroski's F-score. This model has previously been used mostly to analyze companies on the American market. The authors now want to examine how applicable the model is to the Swedish stock market, and additionally to combine it with a Magic Sixes valuation to increase its precision. The purpose of this study is to examine whether the F-score, in combination with Magic Sixes, can generate excess returns and outperform the Swedish stock market. The authors have conducted a quantitative study with a deductive approach. Data were collected from the Orbis database and Dagens Industri. The authors compiled identical lists to calculate the companies' F-score and Magic Sixes, and then calculated the profitability of the portfolios. The study shows that the F-score combined with Magic Sixes beats the market in 3 of the 6 years studied. The F-score has an accuracy of 65.15% and Magic Sixes an accuracy of 58.82%. The models should not be used on their own, or in combination with each other, as the sole basis for a decision; rather, they should be seen as a complement to other methods of analysis. The conclusion is that the models can give a good indication of whether investors should examine an investment object more closely.
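The combined screen described above can be illustrated with a short sketch. The nine Piotroski signals and the Magic Sixes thresholds (commonly stated as P/E below 6, price below 60% of book value, and dividend yield above 6%) follow their standard published definitions; the field names, the F-score cut-off of 7, and the helper functions are illustrative assumptions, not taken from the thesis.

```python
# Illustrative sketch of the combined screen described in the abstract.
# Signal definitions follow the standard published Piotroski F-score and
# Magic Sixes criteria; field names and the cut-off are hypothetical.

def piotroski_f_score(cur, prev):
    """cur/prev: dicts of fundamentals for the current and previous year."""
    signals = [
        cur["roa"] > 0,                                        # 1. positive return on assets
        cur["cfo"] > 0,                                        # 2. positive operating cash flow
        cur["roa"] > prev["roa"],                              # 3. improving ROA
        cur["cfo"] > cur["net_income"],                        # 4. cash flow exceeds earnings
        cur["lt_debt_to_assets"] < prev["lt_debt_to_assets"],  # 5. falling leverage
        cur["current_ratio"] > prev["current_ratio"],          # 6. improving liquidity
        cur["shares_outstanding"] <= prev["shares_outstanding"],  # 7. no dilution
        cur["gross_margin"] > prev["gross_margin"],            # 8. improving gross margin
        cur["asset_turnover"] > prev["asset_turnover"],        # 9. improving asset turnover
    ]
    return sum(signals)                                        # 0 (weak) .. 9 (strong)

def magic_sixes(stock):
    """Classic 'Magic Sixes' screen: P/E < 6, P/B < 0.6, dividend yield > 6%."""
    return (stock["pe"] < 6.0 and
            stock["price_to_book"] < 0.6 and
            stock["dividend_yield"] > 0.06)

def passes_screen(cur, prev, stock, min_f_score=7):
    # A candidate enters the portfolio only if both screens agree.
    return piotroski_f_score(cur, prev) >= min_f_score and magic_sixes(stock)
```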
273

A Study of Different Switched Mode Power Amplifiers for the Burst Mode Operation

Parveg, Dristy Rahul January 2008 (has links)
Power-amplifier efficiency is a significant factor in the overall efficiency of most wireless systems. Several kinds of switched-mode power amplifiers have therefore been developed that show very high efficiency even at high frequencies, but all of them must be driven with constant-envelope signals. At the same time, the growing demand for high data rates in wireless communication has introduced new modulation schemes that no longer produce constant-envelope signals but signals with a high peak-to-average power ratio. A new technique, burst mode operation, has recently been proposed for operating switched-mode power amplifiers efficiently while driven by such high peak-to-average power signals.

The purpose of this master's thesis was to review the theory of burst mode operation and to perform some basic investigations of the theory on switched-mode power amplifiers in simulation environments. Amplifiers of class D, inverse D, DE, and J are studied. The work was mainly carried out in ADS and partly in MATLAB/Simulink. Since burst mode operation is a new technique, new harmonic-balance simulation setups were developed in ADS and Microwave Office to generate the RF burst signals.

A class J amplifier based on LDMOS technology was measured with a 16-carrier multi-tone signal with a peak-to-average power ratio of 7 dB and achieved a drain efficiency of 50% with -30 dBc linearity at 946 MHz.
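As a small illustration of the signal statistics mentioned above, the sketch below generates a multi-carrier tone signal and computes its peak-to-average power ratio (PAPR). The carrier count, spacing, sample rate, and random phases are illustrative assumptions, not the measurement setup from the thesis.

```python
# Illustrative sketch: PAPR of a multi-tone signal (assumed parameters,
# not the actual test signal from the thesis).
import numpy as np

rng = np.random.default_rng(0)
n_carriers = 16                 # number of equal-amplitude tones
spacing_hz = 1e6                # assumed tone spacing
fs = 256e6                      # sample rate, well above the occupied bandwidth
t = np.arange(0, 1e-3, 1 / fs)  # 1 ms of signal

# Complex baseband multi-tone with a random phase on each carrier.
phases = rng.uniform(0, 2 * np.pi, n_carriers)
freqs = (np.arange(n_carriers) - (n_carriers - 1) / 2) * spacing_hz
x = sum(np.exp(1j * (2 * np.pi * f * t + p)) for f, p in zip(freqs, phases))

power = np.abs(x) ** 2
papr_db = 10 * np.log10(power.max() / power.mean())
print(f"PAPR = {papr_db:.1f} dB")   # value varies with the drawn phases
```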
274

Implementation and Performance Analysis of Filternets

Einarsson, Henrik January 2006 (has links)
Today, image acquisition equipment produces huge amounts of data that need to be processed. Often the data describe signals with a dimensionality higher than 2, unlike ordinary images. This introduces a problem when processing such high-dimensional data, since ordinary signal processing tools are no longer suitable. New, faster, and more efficient tools need to be developed to fully exploit the advantages of, e.g., a 3D CT scan. One such tool is the filternet, a layered network-like structure through which the signal propagates. A filternet has three fundamental advantages that decrease filtering time: the network structure allows complex filters to be decomposed into simpler ones, intermediate results may be reused, and filters may be implemented with very few nonzero coefficients (sparse filters). The aim of this study has been to create an implementation of filternets and optimize it with respect to execution time. In particular, the possibility of using filternets that approximate a harmonic filter set for estimating orientation in 3D signals is investigated. Tests show that this method is up to about 30 times faster than a full filter set consisting of dense filters. They also show a slightly larger error in the estimated orientation compared with the dense filters; this error should, however, not limit the usability of the method.
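The speedup from decomposing a complex filter into a chain of simpler ones can be illustrated with a familiar special case: a separable 2D Gaussian applied as two 1D passes instead of one dense 2D convolution. This is only a sketch of the decomposition idea on made-up data, not the filternet implementation from the thesis.

```python
# Sketch of filter decomposition: one dense 2D filter vs. a chain of 1D filters.
# Illustrates why network-style decomposition reduces work; not the thesis code.
import numpy as np
from scipy.ndimage import convolve, convolve1d

def gaussian_1d(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

img = np.random.rand(512, 512)
k1d = gaussian_1d(sigma=2.0, radius=6)      # 13 coefficients
k2d = np.outer(k1d, k1d)                    # 13 x 13 = 169 coefficients

dense = convolve(img, k2d, mode="nearest")              # one dense 2D pass
chained = convolve1d(convolve1d(img, k1d, axis=0, mode="nearest"),
                     k1d, axis=1, mode="nearest")       # two cheap 1D passes

# Same result up to floating point, but ~169 vs. 2*13 multiplications per pixel.
print(np.allclose(dense, chained))
```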
275

Energy Efficient Design for Deep Sub-micron CMOS VLSIs

Elgebaly, Mohamed January 2005 (has links)
Over the past decade, low-power, energy-efficient VLSI design has been the focal point of active research and development. Rapid technology scaling, growing integration capacity, and mounting active and leakage power dissipation are contributing to the growing complexity of modern VLSI design, and careful power planning is required at all design levels. This dissertation tackles the low-power, low-energy challenges in deep sub-micron technologies at the architecture and circuit levels. Voltage scaling is one of the most efficient ways of reducing power and energy. For ultra-low voltage operation, a new circuit technique is presented which allows bulk CMOS circuits to work in the sub-0.5 V supply territory. The threshold voltage of the slow PMOS transistor is controlled dynamically to obtain a lower threshold voltage during the active mode. Due to the reduced threshold voltage, switching speed becomes faster while active leakage current increases. A technique to dynamically manage active leakage current is presented. The energy reduction resulting from the proposed structure is demonstrated through simulations of different circuits with different levels of complexity. As technology scales, mounting leakage current and degraded noise immunity impact performance, especially that of high-performance dynamic circuits. Dual-threshold technology shows good potential for leakage reduction while meeting performance goals. A model for optimally selecting threshold voltages and transistor sizes in wide fan-in dynamic circuits is presented. On the circuit level, a novel technique is presented which handles the trade-off between noise immunity and energy dissipation for wide fan-in dynamic circuits. The energy efficiency of the proposed wide fan-in dynamic circuit is further enhanced through efficient low-voltage operation. Another direct consequence of technology scaling is the growing impact of interconnect parasitics and process variations on performance. Traditionally, worst-case process, parasitics, and environmental conditions are assumed. Designing for the worst case guarantees fail-safe operation but requires large delay and voltage margins. These margins can be recovered if the design can adapt to the actual silicon conditions. Dynamic voltage scaling is considered a key enabler in reducing such margins. An on-chip process identifier is described that recovers the margin required due to process variations. The proposed architecture adjusts the supply voltage using a hybrid between the one-time voltage setting and continuous monitoring modes of operation. The interconnect impact on delay is minimized through a novel adaptive voltage scaling architecture. The proposed system recovers the large delay and voltage margins required by conventional systems by closely tracking the actual critical path at any time. By tracking the actual critical path, the proposed system is robust and more energy efficient compared to both conventional open-loop and closed-loop systems.
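To make the leverage of voltage scaling concrete, the sketch below evaluates the standard CMOS dynamic-power expression P = α·C·V²·f for two supply voltages. The parameter values are illustrative assumptions, not figures from the dissertation.

```python
# Illustrative sketch: why voltage scaling is so effective for dynamic power.
# P_dyn = alpha * C * Vdd^2 * f (standard CMOS switching-power expression);
# the parameter values below are assumptions, not results from the thesis.

def dynamic_power(alpha, c_switched, vdd, freq):
    return alpha * c_switched * vdd**2 * freq

alpha = 0.15          # activity factor
c_switched = 2e-9     # switched capacitance per cycle [F]
freq = 500e6          # clock frequency [Hz]

p_nominal = dynamic_power(alpha, c_switched, 1.2, freq)
p_scaled = dynamic_power(alpha, c_switched, 0.9, freq)

# Quadratic dependence on Vdd: a 25% supply reduction cuts dynamic power
# by roughly 44%, before any accompanying frequency reduction.
print(f"{p_nominal:.3f} W -> {p_scaled:.3f} W ({1 - p_scaled / p_nominal:.0%} lower)")
```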
276

Optimal Portfolio Selection Under the Estimation Risk in Mean Return

Zhu, Lei January 2008 (has links)
This thesis investigates robust techniques for mean-variance (MV) portfolio optimization problems under estimation risk in the mean return. We evaluate the performance of the optimal portfolios generated by the min-max robust MV portfolio optimization model. With an ellipsoidal uncertainty set based on the statistics of the sample mean estimates, min-max robust portfolios are equivalent to those from the standard MV model based on the nominal mean estimates, but with larger risk aversion parameters. With an interval uncertainty set for the mean return, min-max robust portfolios can vary significantly with the initial data used to generate the uncertainty set. In addition, by focusing on the worst-case scenario in the mean return uncertainty set, min-max robust portfolios can be too conservative and unable to achieve a high return. Adjusting the conservatism level of min-max robust portfolios can only be achieved by excluding poor mean return scenarios from the uncertainty set, which runs counter to the principle of min-max robustness. We propose a CVaR robust MV portfolio optimization model in which the estimation risk is measured by the Conditional Value-at-Risk (CVaR). We show that, by using CVaR to quantify the estimation risk in the mean return, the conservatism level of CVaR robust portfolios can be adjusted more naturally, by gradually including better mean return scenarios. Moreover, we compare min-max robust portfolios (with an interval uncertainty set for the mean return) and CVaR robust portfolios in terms of actual frontier variation, portfolio efficiency, and portfolio diversification. Finally, a computational method based on a smoothing technique is implemented to solve the optimization problem in the CVaR robust model. We show numerically that, compared with the quadratic programming (QP) approach, the smoothing approach is more computationally efficient for computing CVaR robust portfolios.
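A minimal sketch of the underlying classical problem helps place the robust variants: the standard MV formulation trades expected return against variance through a risk-aversion parameter, and the abstract notes that the ellipsoidal min-max model reduces to exactly this with a larger risk aversion. The closed-form solution below allows short sales, uses made-up inputs, and is not the CVaR robust model proposed in the thesis.

```python
# Sketch of the classical mean-variance problem the robust models build on:
#     max_w  mu' w - lambda * w' Sigma w    s.t.  sum(w) = 1
# Closed-form solution from the equality-constrained KKT conditions; short
# sales are allowed and the inputs are illustrative, not data from the thesis.
import numpy as np

def mean_variance_weights(mu, sigma, risk_aversion):
    ones = np.ones(len(mu))
    sigma_inv = np.linalg.inv(sigma)
    # Lagrange multiplier chosen so the weights sum to one.
    gamma = (ones @ sigma_inv @ mu - 2 * risk_aversion) / (ones @ sigma_inv @ ones)
    return sigma_inv @ (mu - gamma * ones) / (2 * risk_aversion)

mu = np.array([0.08, 0.10, 0.12])                 # estimated mean returns
sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.06, 0.02],
                  [0.00, 0.02, 0.09]])            # covariance of returns

for lam in (1.0, 5.0, 20.0):                      # larger lambda = more conservative
    w = mean_variance_weights(mu, sigma, lam)
    print(lam, np.round(w, 3), round(w.sum(), 6))
```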
277

High-Speed Elliptic Curve and Pairing-Based Cryptography

Longa, Patrick 05 April 2011 (has links)
Elliptic Curve Cryptography (ECC), independently proposed by Miller [Mil86] and Koblitz [Kob87] in the mid-1980s, is gaining momentum to consolidate its status as the public-key system of choice in a wide range of applications and to further expand this position to settings traditionally occupied by RSA and DL-based systems. The non-existence of known subexponential attacks on this cryptosystem directly translates into shorter key lengths for a given security level and, consequently, has led to implementations with better bandwidth usage, reduced power and memory requirements, and higher speeds. Moreover, the dramatic entry of pairing-based cryptosystems defined on elliptic curves at the beginning of the new millennium has opened the possibility of a plethora of innovative applications, solving in some cases longstanding problems in cryptography. Nevertheless, public-key cryptography (PKC) is still relatively expensive in comparison with its symmetric-key counterpart, and it remains an open challenge to further reduce the computing cost of the most time-consuming PKC primitives to guarantee their adoption for secure communication in commercial and Internet-based applications. The latter is especially true for pairing computations. Thus, it is of paramount importance to research methods that permit the efficient realization of elliptic curve and pairing-based cryptography on the many new platforms and applications. This thesis deals with efficient methods and explicit formulas for computing elliptic curve scalar multiplication and pairings over fields of large prime characteristic, with the objective of enabling software implementations at very high speeds. To achieve this main goal in the case of elliptic curves, we accomplish the following tasks: identify the elliptic curve settings with the fastest arithmetic; accelerate the precomputation stage in the scalar multiplication; study number representations and scalar multiplication algorithms for speeding up the evaluation stage; identify the most efficient field arithmetic algorithms and optimize them; analyze the architecture of the targeted platforms to maximize the performance of ECC operations; identify the most efficient coordinate systems and optimize explicit formulas; and realize implementations on x86-64 processors with an optimal algorithmic selection among all studied cases. In the case of pairings, the following tasks are accomplished: accelerate tower and curve arithmetic; identify the most efficient tower and field arithmetic algorithms and optimize them; identify the curve setting with the fastest arithmetic and optimize it; identify state-of-the-art techniques for the Miller loop and final exponentiation; and realize an implementation on x86-64 processors with optimal algorithmic selection. The most outstanding contributions achieved with the methodologies above can be summarized as follows:
• Two novel precomputation schemes are introduced and shown to achieve the lowest costs in the literature for different curve forms and scalar multiplication primitives. Detailed cost formulas of the schemes are derived for the most relevant scenarios.
• A new methodology based on the operation cost per bit is proposed to devise highly optimized and compact multibase algorithms. Derived multibase chains using bases {2,3} and {2,3,5} are shown to achieve the lowest theoretical costs for scalar multiplication on certain curve forms, both with and without precomputations. In addition, the zero and nonzero density formulas of the original (width-w) multibase NAF method are derived using Markov chains. The application of “fractional” windows to the multibase method is described together with the derivation of the corresponding density formulas.
• Incomplete reduction and branchless arithmetic techniques are optimally combined to devise high-performance field arithmetic. Efficient algorithms for “small” modular operations using suitably chosen pseudo-Mersenne primes are carefully analyzed and optimized for incomplete reduction.
• Data dependencies between contiguous field operations are discovered to be a source of performance degradation on x86-64 processors. Three techniques for reducing the number of potential pipeline stalls due to these dependencies are proposed: field arithmetic scheduling, merging of point operations, and merging of field operations.
• Explicit formulas for two relevant cases, namely Weierstrass and Twisted Edwards curves, are carefully optimized employing incomplete reduction, a minimal number of operations, and a reduced number of data dependencies between contiguous field operations.
• The best algorithms for field, point, and scalar arithmetic, studied or proposed in this thesis, are brought together to realize four high-speed implementations on x86-64 processors at the 128-bit security level. The presented results set new speed records for elliptic curve scalar multiplication and introduce up to 34% cost reduction in comparison with the best previous results in the literature.
• A generalized lazy reduction technique that enables the elimination of up to 32% of modular reductions in the pairing computation is proposed. Further, a methodology that keeps intermediate results under Montgomery reduction boundaries, maximizing operations without carry checks, is introduced. Optimized formulas for the popular tower construction are explicitly stated, and a detailed operation count that makes it possible to determine the theoretical cost improvement attainable with the proposed method is carried out for the case of an optimal ate pairing on a Barreto-Naehrig (BN) curve at the 128-bit security level.
• The best algorithms for the different stages of the pairing computation, including the proposed techniques and optimizations, are brought together to realize a high-speed implementation at the 128-bit security level. The presented results on x86-64 processors set new speed records for pairings, introducing up to 34% cost reduction in comparison with the best published result.
From a general viewpoint, the proposed methods and optimized formulas have a practical impact on the performance of cryptographic protocols based on elliptic curves and pairings in a wide range of applications. In particular, the introduced implementations represent a direct and significant improvement that may be exploited in performance-dominated applications such as high-demand Web servers in which millions of secure transactions need to be generated.
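As a small illustration of the number representations mentioned above, the sketch below computes a width-w non-adjacent form (NAF) of a scalar and checks that the recoding is valid. This is the textbook single-base recoding, shown only to indicate the kind of representation the thesis optimizes; it is not the multibase algorithms or explicit formulas developed there.

```python
# Sketch: width-w NAF recoding of a scalar, the textbook single-base method.
# In scalar multiplication, each nonzero digit d selects the precomputed
# point dP during a left-to-right double-and-add pass.

def width_w_naf(k, w=4):
    """Return the digits of k in width-w NAF, least significant first."""
    digits = []
    while k > 0:
        if k & 1:
            d = k % (1 << w)
            if d >= (1 << (w - 1)):      # map to the signed odd residue mod 2^w
                d -= 1 << w
            k -= d
        else:
            d = 0
        digits.append(d)
        k >>= 1
    return digits

def check(k, w=4):
    digits = width_w_naf(k, w)
    # Valid if the digits reconstruct k and every nonzero digit is odd
    # with absolute value below 2^(w-1).
    assert sum(d << i for i, d in enumerate(digits)) == k
    assert all(d == 0 or (d % 2 == 1 and abs(d) < (1 << (w - 1))) for d in digits)
    return digits

print(check(2023))   # [7, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, 1]
```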
278

The Validity of Technical Analysis for the Swedish Stock Exchange : Evidence from random walk tests and back testing analysis

Gustafsson, Dan January 2012 (has links)
In this paper I examine the validity of technical analysis for the Swedish stock index OMXS30 between 2001-12-28 and 2011-12-30. Results indicate that the OMXS30 did not follow a random walk and that technical trading rules had predictive power over future price movements. Results also suggest that technical trading rules could be used to outperform a buy-and-hold strategy.
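One common family of random walk tests is the variance-ratio test: under a random walk, the variance of q-period returns grows linearly in q, so VR(q) = Var(r(q)) / (q · Var(r(1))) should be close to one. The sketch below computes the simple ratio on simulated data; it is not necessarily the exact test specification used in the thesis.

```python
# Sketch: Lo-MacKinlay style variance-ratio check of the random walk hypothesis.
# Under a random walk, q-period return variance is q times the 1-period
# variance, so VR(q) ~ 1. Simulated data; not the thesis test design.
import numpy as np

def variance_ratio(log_prices, q):
    r1 = np.diff(log_prices)                       # 1-period log returns
    rq = log_prices[q:] - log_prices[:-q]          # overlapping q-period returns
    return rq.var(ddof=1) / (q * r1.var(ddof=1))

rng = np.random.default_rng(1)
rw = np.cumsum(rng.normal(0, 0.01, 2500))          # simulated random walk of log prices
for q in (2, 5, 10):
    print(q, round(variance_ratio(rw, q), 3))      # values near 1.0 for a random walk
```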
279

Post Earnings Announcement Drift in Sweden : Evidence and application of theories in Behavioural Finance

Magnusson, Fredrik January 2012 (has links)
The post-earnings announcement drift is a market anomaly that causes a firm's cumulative abnormal returns to drift in the direction of an earnings surprise. Quarterly earnings surprises are measured with two metrics: the first based on a time series prediction and the other based on analyst forecast errors. This study finds evidence that the drift exists in Sweden and that investors systematically underreact to positive earnings surprises. Further, this study shows that the cumulative average abnormal returns are larger for surprises caused by analyst forecast errors. While previous studies have tried to explain the drift through additional risk or illiquidity in the stocks, this study provides evidence that investors' limitations in weighting new information cause an underreaction, and hence a drift in the stock prices.
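The two surprise measures described above are commonly operationalized as standardized unexpected earnings (SUE): the gap between actual and expected EPS, scaled by the dispersion of past surprises or of analyst forecasts. The sketch below shows one conventional way to compute both variants on hypothetical data; it is not necessarily the exact specification used in the thesis.

```python
# Sketch: two standardized-unexpected-earnings (SUE) measures of the kind the
# study compares. Hypothetical data and a conventional specification, not
# necessarily the exact definitions used in the thesis.
import numpy as np

def sue_time_series(quarterly_eps):
    """Seasonal random walk: EPS is expected to equal EPS four quarters ago."""
    eps = np.asarray(quarterly_eps, dtype=float)
    surprises = eps[4:] - eps[:-4]                       # unexpected earnings per quarter
    return surprises[-1] / surprises[:-1].std(ddof=1)    # latest surprise, standardized

def sue_analyst(actual_eps, analyst_forecasts):
    """Surprise relative to the consensus forecast, scaled by forecast dispersion."""
    forecasts = np.asarray(analyst_forecasts, dtype=float)
    return (actual_eps - forecasts.mean()) / forecasts.std(ddof=1)

eps_history = [1.10, 1.25, 0.95, 1.40, 1.20, 1.35, 1.05, 1.52, 1.31]   # 9 quarters
print(round(sue_time_series(eps_history), 2))
print(round(sue_analyst(1.31, [1.18, 1.22, 1.25, 1.20]), 2))
```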
280

PE and EV/EBITDA Investment Strategies vs. the Market : A Study of Market Efficiency

Persson, Eva, Ståhlberg, Caroline January 2007 (has links)
Background: The efficient market hypothesis states that it is not possible to consistently outperform the overall stock market through stock picking and market timing. In an efficient market, all stock prices are at their correct level, and there are no over- or undervalued stocks. Deviations from the true price can occur according to the hypothesis, but when they do, they are always random. Thus, the only way an investor can perform better than the overall stock market is by being lucky. The efficient market hypothesis is, however, very controversial; it is often discussed within modern financial theory, and there are strong arguments both for and against it. Purpose: The purpose of this study was to investigate whether it is possible to outperform the overall stock market by investing in stocks that are undervalued according to the enterprise multiple (EV/EBITDA) and the price-earnings ratio. Realization of the Study: Portfolios were constructed based on information from five years, 2001 to 2005. Each year two portfolios were put together, one consisting of the six stocks with the lowest price-earnings ratio and the other of the six stocks with the lowest EV/EBITDA. Each portfolio was held for one year, and both the unadjusted and the risk-adjusted returns of the portfolios were compared to the returns on the two indexes OMXS30 and AFGX. The sample consisted of the 30 most traded stocks on the Nordic Stock Exchange in Stockholm in 2006. Conclusion: The study shows that it is possible to outperform the overall stock market by investing in stocks that are undervalued according to the price-earnings ratio and the EV/EBITDA. This indicates that the market is not efficient, even in its weak form.
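The portfolio construction described above, holding each year the six stocks with the lowest multiple, can be sketched as a simple annual screen. Column names, the data frame layout, and the example rows are hypothetical; this is not the authors' dataset or exact procedure.

```python
# Sketch of the annual low-multiple screens described above: each year, pick
# the six stocks with the lowest P/E and, separately, the six with the lowest
# EV/EBITDA. Column names and data are hypothetical.
import pandas as pd

def yearly_value_portfolios(fundamentals: pd.DataFrame, n=6):
    """fundamentals: one row per (year, ticker) with 'pe' and 'ev_ebitda' columns."""
    portfolios = {}
    for year, group in fundamentals.groupby("year"):
        positive = group[(group["pe"] > 0) & (group["ev_ebitda"] > 0)]  # drop loss-makers
        portfolios[year] = {
            "low_pe": positive.nsmallest(n, "pe")["ticker"].tolist(),
            "low_ev_ebitda": positive.nsmallest(n, "ev_ebitda")["ticker"].tolist(),
        }
    return portfolios

# Tiny illustrative input; in the study the universe was the 30 most traded
# stocks in Stockholm, with portfolios rebuilt each year from 2001 to 2005.
data = pd.DataFrame({
    "year":      [2001] * 4,
    "ticker":    ["AAA", "BBB", "CCC", "DDD"],
    "pe":        [9.5, 14.2, 7.8, 22.1],
    "ev_ebitda": [6.1, 4.9, 8.3, 11.0],
})
print(yearly_value_portfolios(data, n=2))
```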
