441

追蹤穩定成長目標線的投資組合最佳化模型 / Portfolio optimization models for the stable growth benchmark tracking

謝承哲 (Hsieh, Cheng Che), date unknown
This thesis studies how to construct a portfolio that tracks a benchmark growing at a stable rate. The tracking problem is formulated as a mixed-integer nonlinear programming model. Since the tracking performance of the portfolio may deteriorate as time elapses, the thesis also proposes a mathematical programming model to rebalance the portfolio. These models account for transaction costs as well as restrictions on short-selling stocks, so a futures position is added to the portfolio for hedging. Finally, an empirical study using data from the Taiwan stock and futures markets examines the performance of portfolio construction and rebalancing, and analyzes how different benchmark growth rates and upper limits on the futures position affect the portfolio value.
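The thesis's full formulation is a mixed-integer nonlinear program with transaction costs and short-sale limits; stripped of those features, the core idea of fitting a portfolio to a stable-growth target line can be sketched as a least-squares problem (the assets, growth rate, and data below are illustrative assumptions, not figures from the thesis):

```python
import numpy as np

def stable_growth_benchmark(v0, rate, periods):
    """Target line: initial value v0 compounding at a fixed growth rate."""
    return v0 * (1.0 + rate) ** np.arange(periods)

def tracking_weights(prices, benchmark):
    """Least-squares weights so that prices @ w follows the benchmark.

    prices: (T, n) asset price paths; benchmark: (T,) target values.
    A simplified stand-in for the thesis's MINLP: no integer lots,
    short-sale limits, or transaction costs are modelled here.
    """
    w, *_ = np.linalg.lstsq(prices, benchmark, rcond=None)
    return w

T, v0, g = 12, 100.0, 0.01
bench = stable_growth_benchmark(v0, g, T)
# Two hypothetical assets: one follows the target line exactly, one is flat.
prices = np.column_stack([bench, np.full(T, 50.0)])
w = tracking_weights(prices, bench)
error = np.abs(prices @ w - bench).max()
```

As expected, the fit puts essentially all weight on the asset that already follows the target line; the real model must instead balance tracking error against costs and position limits.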
442

追蹤穩定成長目標線的投資組合隨機最佳化模型 / Stochastic portfolio optimization models for the stable growth benchmark tracking

林澤佑 (Lin, Tse Yu), date unknown
This thesis proposes a two-stage stochastic mixed-integer nonlinear programming model for constructing a portfolio that tracks a specified benchmark. By introducing a scenario tree, the two-stage stochastic program is converted into an equivalent deterministic model. Because price volatility and the interactions among financial instruments can erode the portfolio's tracking ability over time, the thesis also proposes a rebalancing model. To reflect practical considerations, the models incorporate transaction costs and short-selling restrictions and include futures as a hedging position; options and the scenario tree are introduced to capture investors' expectations. Finally, an empirical study using data from the Taiwan stock, futures, and index options markets examines the performance of the proposed models and analyzes how different benchmark growth rates and investment ratios affect the portfolio value.
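The conversion of a two-stage stochastic program into a deterministic equivalent via a scenario tree can be illustrated on a toy problem: each branch carries a probability and an outcome, and the deterministic model minimizes the probability-weighted objective over all branches (the tree, target, and grid-search "solver" below are illustrative stand-ins for the thesis's model):

```python
# Hypothetical one-period scenario tree: each branch is a (probability,
# asset return) pair. The deterministic equivalent of "minimise expected
# squared deviation from a 1% growth target" enumerates every branch.
branches = [(0.3, -0.02), (0.5, 0.01), (0.2, 0.04)]
target = 0.01

def expected_tracking_error(stock_fraction):
    """First-stage decision: fraction in the risky asset (rest earns 0)."""
    return sum(p * (stock_fraction * r - target) ** 2 for p, r in branches)

# A grid search stands in for the solver of the deterministic equivalent.
best = min((expected_tracking_error(x), x)
           for x in [i / 100 for i in range(0, 201)])
```

The point of the scenario tree is exactly this enumeration: once every branch is explicit, the stochastic objective becomes an ordinary weighted sum that a deterministic solver can handle.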
443

Evaluating NMP Quality of Service : Experiment with JackTrip regarding Latency versus Packet Jitter/Dropouts with High Quality Audio via LAN and WAN / Utvärdering av  quality of service vid NMP : Experiment med JackTrip angående Latens kontra Jitter/Tapp av Paket med Högkvalitetsljud via LAN och WAN

Müntzing, Daniel January 2018 (has links)
This study developed a method for building a largely automated test system for NMP (Networked Music Performance) communication over LAN and WAN, in order to benchmark the UDP streaming engine JackTrip in a client-server model. The method is not tied to JackTrip and can be used to run experiments with other engines as well. The study asked whether latency correlates with the amount of correctly aligned audio, and to what extent audio remains correctly aligned within the tolerated latency (based on earlier research) when at least two musicians perform music together remotely. Thirteen buffer settings were tested, with no redundancy and with a redundancy of 2, across four LAN/WAN scenarios, producing a large dataset of about 82 minutes of audio per test. In post-processing, a phase-cancellation method measured the amount of correctly aligned audio, while latency was measured by counting the samples from the start of each audio file to the first sample that was not null or not below a certain threshold. The results showed a clear relationship between buffer size, latency, and the amount of correctly aligned audio sent over the network: larger buffers produce higher latency and more correctly aligned audio, while smaller buffers produce lower latency and less correctly aligned audio. Using higher redundancy had very little impact on either latency or the amount of correct audio. When restricting the analysis to the tolerated latency level, the study found that JackTrip supported up to 65% correctly aligned streamed audio.
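The two post-processing measures described above, counting samples up to the first non-silent one for latency and phase cancellation for the amount of correctly aligned audio, can be sketched roughly as follows (the thresholds and the synthetic signal are illustrative assumptions, not the study's actual parameters):

```python
import numpy as np

def latency_samples(audio, threshold=1e-4):
    """Index of the first sample above the threshold, in the spirit of the
    study's latency measure (samples before the audio becomes non-silent)."""
    idx = np.flatnonzero(np.abs(audio) > threshold)
    return int(idx[0]) if idx.size else len(audio)

def aligned_fraction(received, reference, threshold=1e-4):
    """Phase-cancellation check: subtract the reference; samples that
    cancel (stay near zero) count as correctly aligned audio."""
    residual = np.abs(received - reference)
    return float(np.mean(residual <= threshold))

# Illustrative signal: a reference tone delayed by 48 samples at the receiver.
sr = 48_000
t = np.arange(sr) / sr
reference = np.cos(2 * np.pi * 440 * t).astype(np.float32)
received = np.concatenate([np.zeros(48, np.float32), reference])[:sr]

delay = latency_samples(received)
aligned = aligned_fraction(received[delay:], reference[:sr - delay])
```

Once the measured delay is compensated, the delayed copy cancels perfectly against the reference; comparing without compensation leaves most samples unaligned, which is what the study's metric is designed to expose.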
444

Green Information Systems in the Digital Society: A Multi-Method, Multi-Perspective Analysis of Technology Acceptance

Warnecke, Danielle 02 March 2021 (has links)
This dissertation investigates the sustainability effects of information systems, in particular starting points for the sustainability transformation of society through the methods and artifacts of Green Information Systems (Green IS). Green IS are socio-technical information systems that, beyond the economic criteria of resource-efficient information provision, coordination, and communication, also address the ecological and social dimensions of the "triple bottom line" (the three-pillar model of sustainable development). Green IS are applied to reduce environmental burdens and to tackle complex environmental challenges by means of socio-technical information systems. Alongside research topics from (business) informatics and economics, the dissertation draws on psychology and the social sciences to address questions of digital sustainability transformation, sustainability assessment, and acceptance research. Given the complexity and multi-layered nature of the topic, a multi-method research approach is pursued, employing both qualitative and quantitative methods. The central research questions are: RQ1. What contribution can Green IS make to sustainability assessment at the macro and meso levels, and what maturity level do they exhibit? RQ2. To what extent can digital business models contribute to corporate and societal sustainability transformation? RQ3. Can targeted sustainability marketing promote the acceptance of Green IS in society? Following Design Science Research, methods are constructed for the sustainability assessment of smart-city mobility strategies and of corporate environmental information systems (BUIS) in the manufacturing sector, and a prototype for web-based benchmarking of such smart-city initiatives is implemented.
The business process model developed shows how a transformation to a platform organization within an open-innovation framework can succeed for industrial companies. The quantitative surveys show that high-priced information and communication technology (ICT) in particular is suitable for sharing-economy business models, and that acceptance of sustainable ICT already exists in society, especially among adherents of the "Lifestyle of Health and Sustainability" (LOHAS), but requires further promotion through suitable consumer labels.
445

Second-order FE Analysis of Axial Loaded Concrete Members According to Eurocode 2 / Analys av axial belastade betongkonstruktioner med finita elementmetoden enligt Eurokod 2

Yosef Nezhad Arya, Nessa January 2015 (has links)
A nonlinear finite element analysis was performed for an axially loaded reinforced concrete column subjected to biaxial bending, taking second-order effects into account. According to Eurocode there are two ways to consider second-order effects: nonlinear FE analysis, and hand calculation based on the simplified methods described in Eurocode 2. Since simulating this kind of structure in ABAQUS is difficult, several simulations were made to find a model of satisfactory accuracy. The nonlinear analysis focused on the material modelling of concrete and its nonlinear behaviour. The simulation took into account the inelastic behaviour of concrete along with the confinement effect from transverse reinforcement. The finite element model was verified by comparing the FEA results to those of a benchmark experiment. The mean values needed for simulating the FE model were derived from the mean compressive strength of concrete. After verification, another FE model using design parameters was analysed, and the results were compared with calculations based on the simplified methods of Eurocode 2 to see how well they agreed. In a parametric study, the effects of eccentricity, compressive and tensile strength of concrete, fracture energy, modulus of elasticity, column cross-section dimensions and length, steel yield stress, and stirrup spacing were studied. A comparison between the simplified methods and ABAQUS, calculated with design parameters, showed that the bearing capacity from FE analysis was 21-34 % higher than that obtained with the simplified methods. It is recommended that further studies analyse slender reinforced concrete columns with different L/h ratios using FE simulation to investigate whether FEA always gives more accurate results. For this case, and probably for columns with complex geometries, a finite element analysis is the better choice.
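As a rough illustration of the kind of second-order check the simplified methods perform, the first-order moment can be amplified by a factor depending on the ratio of buckling load to axial load, in the spirit of the nominal stiffness method of Eurocode 2 (the function, β value, and loads below are illustrative assumptions, not the thesis's calculations):

```python
def amplified_moment(m0, n_ed, n_b, beta=1.0):
    """Second-order design moment from the first-order moment m0.

    n_ed: design axial load; n_b: buckling load based on a nominal
    stiffness. Only meaningful while n_ed < n_b.
    """
    if n_ed >= n_b:
        raise ValueError("axial load at or above buckling load")
    return m0 * (1.0 + beta / (n_b / n_ed - 1.0))

# Example: a 100 kNm first-order moment with the axial load at half
# the buckling load doubles under the amplification.
m_ed = amplified_moment(100.0, n_ed=1000.0, n_b=2000.0)
```

The 21-34 % gap reported above is precisely the margin between such closed-form amplification and what the full nonlinear FE model predicts.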
446

Automated Performance Test Generation and Comparison for Complex Data Structures - Exemplified on High-Dimensional Spatio-Temporal Indices

Menninghaus, Mathias 23 August 2018 (has links)
There exist numerous approaches to indexing either spatio-temporal or high-dimensional data, but none of them can efficiently index hybrid data, i.e. data that is both spatio-temporal and high-dimensional. Because the best high-dimensional indexing techniques can only index point data, not now-relative data, and the best spatio-temporal indexing techniques suffer from the curse of dimensionality, this thesis introduces the Spatio-Temporal Pyramid Adapter (STPA). The STPA maps spatio-temporal data to points, maps now-values to the median of the data set, and indexes them with the pyramid technique. For high-dimensional and spatio-temporal index structures, no generally accepted benchmark exists: most index structures are evaluated only with custom benchmarks and compared against a tiny set of competitors. Such benchmarks may be biased, since a structure may be designed to perform well on a particular benchmark, or a benchmark may not cover a particular speciality of the investigated structures. This thesis therefore introduces the Interface Based Performance Comparison (IBPC) technique. It automatically generates test sets with high code coverage of the system under test (SUT), based on all functions defined by an interface that all competitors support. Every test set is run on every SUT, and the performance results are weighted by the achieved coverage and summed; these weighted results are then used to compare the structures. An implementation of the IBPC, the Performance Test Automation Framework (PTAF), is compared to a classic custom benchmark, to a workload generator whose parameters are optimized by a genetic algorithm, and to a PTAF variant that incorporates the specific behaviour of the systems under test. This is done for a set of two high-dimensional spatio-temporal indices and twelve variants of the R-tree. The evaluation indicates that PTAF performs at least as well as the other approaches in terms of minimal test cases with maximized coverage.
Several case studies on PTAF demonstrate its widespread abilities.
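The coverage-weighted scoring idea behind the IBPC can be sketched minimally: each generated test set yields a performance result and a coverage value on the system under test, and the comparison sums the coverage-weighted results (all names and numbers here are made up for illustration, not taken from the thesis):

```python
# Minimal sketch of the IBPC scoring idea. Each auto-generated test set
# produces a (runtime, coverage) pair; high-coverage test sets dominate
# the final comparison because their results carry more weight.

def ibpc_score(results):
    """results: list of (runtime_seconds, coverage in [0, 1]) per test set.
    Lower total is better."""
    return sum(runtime * coverage for runtime, coverage in results)

rtree_results = [(1.2, 0.9), (0.8, 0.6)]   # hypothetical measurements
stpa_results = [(1.0, 0.9), (0.7, 0.6)]
better = "STPA" if ibpc_score(stpa_results) < ibpc_score(rtree_results) else "R-tree"
```

Weighting by coverage guards against a structure "winning" on a test set that barely exercises its code.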
447

Superpixels and their Application for Visual Place Recognition in Changing Environments

Neubert, Peer 01 December 2015 (has links)
Superpixels are the results of an image oversegmentation. They are an established intermediate level image representation and used for various applications including object detection, 3d reconstruction and semantic segmentation. While there are various approaches to create such segmentations, there is a lack of knowledge about their properties. In particular, there are contradicting results published in the literature. This thesis identifies segmentation quality, stability, compactness and runtime to be important properties of superpixel segmentation algorithms. While for some of these properties there are established evaluation methodologies available, this is not the case for segmentation stability and compactness. Therefore, this thesis presents two novel metrics for their evaluation based on ground truth optical flow. These two metrics are used together with other novel and existing measures to create a standardized benchmark for superpixel algorithms. This benchmark is used for extensive comparison of available algorithms. The evaluation results motivate two novel segmentation algorithms that better balance trade-offs of existing algorithms: The proposed Preemptive SLIC algorithm incorporates a local preemption criterion in the established SLIC algorithm and saves about 80 % of the runtime. The proposed Compact Watershed algorithm combines Seeded Watershed segmentation with compactness constraints to create regularly shaped, compact superpixels at the even higher speed of the plain watershed transformation. Operating autonomous systems over the course of days, weeks or months, based on visual navigation, requires repeated recognition of places despite severe appearance changes as they are for example induced by illumination changes, day-night cycles, changing weather or seasons - a severe problem for existing methods. 
Therefore, the second part of this thesis presents two novel approaches that incorporate superpixel segmentations into place recognition in changing environments. The first is the learning of systematic appearance changes. Instead of matching images between, for example, summer and winter directly, an additional prediction step is proposed: based on superpixel vocabularies, a predicted image is generated that shows how the summer scene might look in winter, or vice versa. The presented results show that, if certain assumptions about the appearance changes and the available training data are met, existing holistic place recognition approaches can benefit from this additional prediction step. Holistic approaches to place recognition are known to fail in the presence of viewpoint changes. Therefore, this thesis presents a new place recognition system based on local landmarks and Star-Hough. Star-Hough is a novel approach that incorporates the spatial arrangement of local image features into the computation of image similarities. It is based on star graph models and Hough voting and is particularly suited for local features with low spatial precision and high outlier rates, as are expected in the presence of appearance changes. The novel landmarks combine local region detectors with descriptors based on convolutional neural networks. This thesis presents and evaluates several new approaches for incorporating superpixel segmentations into local region detection. While the proposed system can be used with different types of local regions, the combination with regions obtained from the novel multiscale superpixel grid in particular performs superior to state-of-the-art methods, a promising basis for practical applications.
448

The Relationship Of 10th-grade District Progress Monitoring Assessment Scores To Florida Comprehensive Assessment Test Scores In Reading And Mathematics For 2008-2009

Underwood, Marilyn 01 January 2010 (has links)
The focus of this research was to investigate the use of a district created formative benchmark assessment in reading to predict student achievement for 10th-grade students on the Florida Comprehensive Assessment Test (FCAT) in one county in north central Florida. The purpose of the study was to provide information to high school principals and teachers to better understand how students were performing and learning and to maximize use of the formative district benchmark assessment in order to modify instruction and positively impact student achievement. This study expanded a prior limited study which correlated district benchmark assessment scores to FCAT scores for students in grades three through five in five elementary schools in the targeted county. The high correlations suggested further study. This research focused on secondary reading, specifically in 10th grade where both state and targeted county FCAT scores were low in years preceding this research. Investigated were (a) the district formative assessment in reading as a predictor of FCAT Reading scores, (b) differences in strength of correlation and prediction among student subgroups and between high schools, and (c) any relationships between reading formative assessment scores and Mathematics FCAT scores. An additional focus of this study was to determine best leadership practices in schools where there were the highest correlations between the formative assessment and FCAT Reading scores. Research on best practices was reviewed, and principals were interviewed to determine trends and themes in practice. Tenth grade students in the seven Florida targeted district high schools were included in the study. The findings of the study supported the effective use of formative assessments both in instruction and as predictors of students' performance on the FCAT. 
The results of the study also showed a significant correlation between performance on the reading formative assessment and performance on FCAT Mathematics. The data indicated no significant differences in the strength of correlation between student subgroups or between the high schools included in the study. Additionally, the practices of effective principals in using formative assessment data to inform instruction, gathered through personal interviews, were documented and described.
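At its core, the predictive relationship the study examines is a correlation and linear fit between district benchmark scores and FCAT scores, which can be sketched with synthetic data (the scores below are invented for illustration, not the study's data):

```python
import numpy as np

# Hypothetical benchmark and FCAT scores for a handful of students.
benchmark = np.array([12.0, 18.0, 25.0, 31.0, 40.0])
fcat = np.array([210.0, 245.0, 280.0, 300.0, 355.0])

r = np.corrcoef(benchmark, fcat)[0, 1]             # strength of the relationship
slope, intercept = np.polyfit(benchmark, fcat, 1)  # least-squares prediction line
predicted = slope * 20.0 + intercept               # forecast for a new benchmark score
```

A high correlation like this is what justifies using the formative assessment as an early predictor and acting on it instructionally before the state test.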
449

Reliability prediction of electronic products combining models, lab testing and field data analysis

Choudhury, Noor January 2016 (has links)
At present, different reliability standards are used to carry out reliability prediction. They take different factors, environments, and data sources into consideration to give reliability data for a wide range of electronic components. However, users are unaware of the differences between these standards, because no benchmark exists that would help classify and compare them. This lack of a benchmark denies users a top-down view of the different standards and the chance to choose the appropriate one, based on qualitative judgement, for reliability prediction of a specific system. To address this issue, a benchmark of a set of reliability standards is developed in this dissertation. The benchmark helps users of the selected standards understand the similarities and differences between them and, based on the defined evaluation criteria, easily choose the appropriate standard for reliability prediction in different scenarios. Theoretical reliability prediction of two electronic products at Bombardier is performed using the benchmarked standards. One product is mature, with incident reports available from the field, while the other is a new product under development that has yet to enter service. The field failure data analysis of the mature product is then compared and correlated with the theoretical prediction, and adjustment factors are derived to bridge the gap between the theoretical prediction and the product's reliability in field conditions. Since the theoretical prediction for the product under development could not be compared against field data, an accelerated life test is used instead to determine the product's reliability over its lifetime and to find any failure modes intrinsic to the board.
A crucial objective is realized as an appropriate algorithm/model is found to correlate accelerated-test temperature cycles with real product temperature cycles. The PUT has lead-free solder joints, so it has also been of interest to see whether any failures occur due to solder-joint fatigue. Additionally, a reliability-testing simulation is performed to verify and validate the performance of the product under development during ALT. Finally, the goal of the thesis is achieved as separate models are proposed to predict product reliability for both mature products and products under development. This will assist the organization in predicting product reliability with better accuracy and confidence.
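The abstract does not name the correlation model that was selected; one commonly used relation for mapping accelerated thermal cycles to field cycles in solder-joint fatigue is the Coffin-Manson acceleration factor, sketched here with an assumed exponent (the exponent and temperature swings are illustrative, not figures from the thesis):

```python
def coffin_manson_af(dT_test, dT_field, exponent=2.0):
    """Coffin-Manson acceleration factor between thermal-cycling ranges.

    AF = (dT_test / dT_field) ** exponent. The exponent is material-
    dependent (values near 2 are often quoted for solder joints); it is
    an assumption here, not a value from the thesis.
    """
    return (dT_test / dT_field) ** exponent

# One accelerated cycle at a 100 K swing vs. a 25 K field swing:
af = coffin_manson_af(100.0, 25.0)
field_cycles_per_test_cycle = af  # each test cycle ~ AF field cycles
```

With such a factor, a few weeks of chamber cycling can be translated into an equivalent number of years of field thermal cycles, which is the bridge the thesis needs between ALT results and lifetime prediction.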
450

Active versus passive portfolio management : A study of risk-adjusted return and market fluctuations on short term and long term

Duveskog, Ida; Halldén, Jesper, January 2024
Today, fund saving is a natural part of Swedes' finances and a popular form of saving that involves a large number of investors in the Swedish fund market. This generates increased interest in how portfolio managers should acquire and apply knowledge in portfolio selection, and in how different investment strategies can increase an investor's wealth in the stock market, with a focus on generating as high a risk-adjusted return as possible. The study presents traditional theory and background on modern portfolio theory and the efficient market hypothesis, as well as empirical studies of the financial market that demonstrate the differences of opinion on how actively versus passively managed funds have performed and which investment strategy is most beneficial. The purpose of the study is to compare the realized returns of active versus passive funds over long-term, short-term, and specific periods marked by strong economic fluctuations, such as bear markets. Ten actively managed funds and two index measures are selected and compared based on their performance in both rising and falling Swedish fund markets. Performance measures are then applied to produce the results of the study and to answer whether the active funds show any statistically significant over- or underperformance. After running single index models and t-tests on the 10 active funds, the results show that despite using two benchmark indices, ten different active funds, and long, short, and imbalance-defined time periods, many p-values were not statistically significant. Active funds failed to outperform passive funds, but passive funds also failed to outperform our selection of active funds.
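A single index model with a t-test on the intercept (alpha), the kind of test the study runs, can be sketched on synthetic return data; the fund, benchmark, and tracking noise below are deterministic stand-ins, not the study's data:

```python
import numpy as np
from scipy import stats

def single_index_alpha(fund_ret, bench_ret):
    """Fit the single index model fund = alpha + beta * benchmark and
    return (alpha, beta, p-value) for the two-sided test of alpha = 0."""
    res = stats.linregress(bench_ret, fund_ret)
    n = len(fund_ret)
    resid = fund_ret - (res.intercept + res.slope * bench_ret)
    s2 = resid @ resid / (n - 2)  # residual variance
    x = bench_ret
    se_alpha = np.sqrt(s2 * (1 / n + x.mean() ** 2 / ((x - x.mean()) ** 2).sum()))
    t_stat = res.intercept / se_alpha
    p_value = 2 * stats.t.sf(abs(t_stat), n - 2)
    return res.intercept, res.slope, p_value

# Deterministic toy data: a fund that simply follows its benchmark
# (beta near 1) with a small alternating tracking error and no alpha.
bench = np.linspace(-0.03, 0.05, 60)
fund = bench + 0.004 * np.tile([1.0, -1.0], 30)
alpha, beta, p_alpha = single_index_alpha(fund, bench)
```

A large p-value for alpha, as here, is exactly the "not statistically significant" outcome the study reports: neither over- nor underperformance can be distinguished from noise.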
