571. Valuation of Iveco Czech Republic, a.s. (Ocenenie spoločnosti Iveco Czech Republic, a.s.)
Kornietová, Katarína, January 2017
This diploma thesis deals with the valuation of the company Iveco Czech Republic, a.s. for management needs. The estimated value refers to December 2016. The valuation is based on strategic and financial analysis, whose results feed into a financial plan used in the valuation process. Emphasis is placed on income methods using free cash flow to the firm (FCFF) and economic value added (EVA). Both methods are compared with the book value of the company.
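As an illustration of the income approach described above, a two-stage FCFF valuation can be sketched as follows. All figures are invented placeholders, not Iveco's data; `wacc` and `terminal_growth` are assumed inputs:

```python
# Minimal sketch of a two-stage FCFF valuation: discount the explicit-period
# forecasts, then add a Gordon-growth terminal value, both at the WACC.
# Numbers below are illustrative only.

def fcff_value(fcffs, wacc, terminal_growth):
    """Present value of forecast FCFFs plus a perpetuity terminal value."""
    pv_explicit = sum(f / (1 + wacc) ** (i + 1) for i, f in enumerate(fcffs))
    terminal = fcffs[-1] * (1 + terminal_growth) / (wacc - terminal_growth)
    pv_terminal = terminal / (1 + wacc) ** len(fcffs)
    return pv_explicit + pv_terminal

# Three forecast years of FCFF (in millions), 9% WACC, 2% perpetual growth.
value = fcff_value([100.0, 110.0, 120.0], wacc=0.09, terminal_growth=0.02)
```

The terminal value typically dominates the result, which is why the abstract's emphasis on the financial plan feeding the forecast matters.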
572. The Empirical Study of Weighted-Norm Minimum Variance Portfolios in Taiwan Stock Market (加權範數最小變異數投資組合之實證應用:以台灣股市為例)
莊丹華 (Jhuang, Dan-Hua), date unknown
The asset allocation problem has always been an important issue for investors. Constructing portfolios with different methods to find the optimal asset weights makes managing the held assets easier and more efficient. Among these methods, the minimum variance portfolio satisfies the need to minimize risk. This thesis explores a special portfolio of this kind, the Weighted-Norm Minimum Variance Portfolio (WNMVP), and uses Taiwan stock market data (the Taiwan Top 50 constituents) to undertake an empirical study.
The research measured the performance of the WNMVP, three other benchmark portfolios, and the Taiwan Top 50 ETF (0050) using ten indicators, yielding three findings. First, the WNMVP performs better than most of the other portfolios. Second, adding an estimated mean return vector to the WNMVP does not improve performance. Third, three alternative norm penalties provide performance comparable to the original parameters. The second and third findings are consistent with previous literature.
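The simplest member of the weighted-norm family can be sketched directly: with an L2 (ridge) penalty on the weights, minimizing w'Σw + λ‖w‖² subject to the weights summing to one has a closed-form solution. This is a hedged illustration of the idea, not the thesis's exact estimator, and the covariance numbers are invented:

```python
import numpy as np

# Ridge-penalized minimum-variance portfolio:
#   minimize  w' Sigma w + lam * ||w||_2^2   subject to  sum(w) = 1,
# whose solution is  w  proportional to  (Sigma + lam*I)^{-1} 1.
# The thesis's WNMVP uses more general weighted norms; this is the base case.

def wnmvp_weights(cov, lam):
    n = cov.shape[0]
    ones = np.ones(n)
    w = np.linalg.solve(cov + lam * np.eye(n), ones)
    return w / w.sum()

# Toy 3-asset covariance matrix (annualized, invented).
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = wnmvp_weights(cov, lam=0.05)   # lower-variance assets get larger weights
```

The penalty λ shrinks the weights toward the equal-weight portfolio, which is the usual explanation for the robustness such portfolios show out of sample.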
573. Experimental and Computational Studies on Deflagration-to-Detonation Transition and its Effect on the Performance of PDE
Bhat, Abhishek R, January 2014
This thesis is concerned with experimental and computational studies on the pulse detonation engine (PDE), which has been envisioned as a new concept engine. These engines use the high pressure generated by a detonation wave for propulsion. The cycle efficiency of a PDE is higher than that of conventional jet engines, or at least comparable, while the engine is much simpler in terms of components.
The first part of the work is an experimental study of PDE performance under choked-flame and partial-fill conditions. Detonations in classical PDEs reach Mach numbers of 4-6, whereas choked flames reach roughly half the detonation-wave Mach number. While classical PDE concepts rely on deflagration-to-detonation transition (DDT) and are intensively studied, the operation of a PDE in the choked regime has received little attention in the literature and much remains to be explored. Most earlier studies treat transition to detonation as success and non-transition as failure. After exploring both regimes, the current work shows that the impulse obtained from a wave traveling near the choked-flame velocity is comparable to that of the detonation regime. This is consistent with the understanding from the literature that CJ detonation may not be the optimum condition for maximum specific impulse. The present study examines the details of PDE operation close to the choked regime for different experimental conditions, in comparison with other aspects of PDEs.
The study also examines the transmission of fast flames from a small-diameter pipe into larger ducts; using the smaller pipe for flame acceleration also shortens the time and length of the transition process. The second part of the study aims at elucidating the features of deflagration-to-detonation transition with direct numerical simulation (DNS) accounting for full chemistry. The choice of full chemistry and DNS is based on two features: (a) the induction time at the varying high pressures and temperatures behind the shock can only be estimated with full chemistry, and (b) the complex effects of fine-scale turbulence, which have sometimes been argued to influence the acceleration phase of DDT, cannot be captured otherwise. Turbulence in the early stages wrinkles the flame and aids the acceleration process. The study of flame propagation showed that flame wrinkling has a major effect on the final transition phase as the flame accelerates through the channel, and that the flame becomes corrugated prior to transition. This feature was investigated using non-uniform initial conditions, under which the pressure waves emanating from the corrugated flame interact with the shock moving ahead, and transition occurs between the flame and the forward-propagating shock wave.
The primary contributions of this thesis are: (a) elucidating the phenomenology of choked flames, demonstrating that under partial-fill conditions the specific impulse can be superior to that of detonations, and hence that choked flames may be a more appropriate choice for propulsion than full detonations; (b) the use of a smaller tube to enhance flame acceleration and transition to detonation, with comparison to earlier experiments clearly showing the enhancement achieved by this method; and (c) the importance of the interaction between pressure waves emanating from the flame front and the shock wave, which leads to the formation of hot spots that finally transition to a detonation wave.
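The impulse comparison underpinning contribution (a) rests on integrating the thrust-wall pressure history over a cycle. A minimal sketch of that bookkeeping might look as follows; all numbers are invented for illustration, not measurements from the thesis:

```python
import numpy as np

# Single-cycle specific impulse of a PDE tube from a thrust-wall pressure
# trace:  I_sp = A * integral of (p - p_ambient) dt / (m_propellant * g0).
# The trace below is an artificial constant-overpressure plateau.

def specific_impulse(t, p, p_ambient, area, propellant_mass, g0=9.81):
    """Integrate the wall overpressure to get impulse (N*s), then normalize to seconds."""
    impulse = area * np.trapz(p - p_ambient, t)
    return impulse / (propellant_mass * g0)

t = np.linspace(0.0, 2e-3, 200)        # 2 ms blowdown (assumed)
p = np.full_like(t, 6e5)               # 6 bar absolute wall pressure (assumed)
isp = specific_impulse(t, p, p_ambient=1e5, area=5e-4, propellant_mass=5e-6)
```

Comparing this integral for a detonation trace and a choked-flame trace, at the same propellant mass, is exactly the comparison the abstract describes.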
574. Undersampled Radial STEAM MRI: Methodological Developments and Applications
Merrem, Andreas, 05 March 2018
No description available.
575. Service Level Achievements - Test Data for Optimal Service Selection
Russ, Ricardo, January 2016
This bachelor’s thesis was written in the context of a joint research group that developed a framework for finding and providing the best-fit web service for a user. The research group's problem lies in testing the developed framework sufficiently. The framework can be tested either with test data produced by real web services, which costs money, or with generated test data based on a simulation of web service behavior. The second approach is developed in this thesis in the form of a test data generator. The generator simulates a web service request by defining internal services, where each service has its own internal graph reflecting its structure. A service can be atomic or composed of other services that are called in a specific manner (sequential, loop, conditional). Test data is generated by randomly traversing the services, which results in variable response times, since the graph structure changes every time the system is initialized. The implementation process revealed problems that could not be solved within the time frame; they present interesting challenges for the dynamic generation of random graphs and should be targeted in further research.
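The composition rules described above (atomic, sequential, loop, conditional) can be sketched roughly as follows; class and method names are mine, not the research group's framework:

```python
import random

# A service is either atomic (fixed latency) or composes children that are
# called sequentially, in a loop, or conditionally; the simulated response
# time is the recursive sum over the calls actually made.

class Service:
    def __init__(self, kind, latency=0.0, children=(), loop_max=3):
        self.kind = kind              # 'atomic' | 'seq' | 'loop' | 'cond'
        self.latency = latency        # milliseconds, used by atomic services
        self.children = list(children)
        self.loop_max = loop_max

    def respond(self, rng):
        if self.kind == 'atomic':
            return self.latency
        if self.kind == 'seq':        # call every child in order
            return sum(c.respond(rng) for c in self.children)
        if self.kind == 'loop':       # repeat one child a random number of times
            return sum(self.children[0].respond(rng)
                       for _ in range(rng.randint(1, self.loop_max)))
        return rng.choice(self.children).respond(rng)   # 'cond': pick one branch

rng = random.Random(42)
a = Service('atomic', latency=10.0)
b = Service('atomic', latency=20.0)
svc = Service('seq', children=[a,
                               Service('cond', children=[a, b]),
                               Service('loop', children=[b])])
times = [svc.respond(rng) for _ in range(5)]   # varies per call, 40-90 ms here
```

Repeated calls give the variable response times the abstract mentions, because the loop counts and conditional branches are re-drawn on every request.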
576. Numerical Simulation of a High-speed Jet Injected in a Uniform Supersonic Crossflow Using Adaptively Redistributed Grids
Seshadrinathan, Varun, January 2017
Minimizing numerical dissipation without compromising robust shock-capturing remains an outstanding challenge in the design of numerical methods for high-speed compressible flows. The conflicting dissipation requirements (high enough for robust capture of discontinuities, low enough for accurate resolution of smooth flow features) are the principal reason for this challenge. In this work we pursue a recently proposed strategy of combining adaptive mesh redistribution with a conservative high-order shock-capturing finite-volume discretization to overcome it. In essence, we perform high-order finite-volume WENO (weighted essentially non-oscillatory) reconstruction on a continuously moving grid whose nodes are repositioned adaptively so that maximum spatial resolution is achieved in the regions with the sharpest flow gradients. Moreover, to reduce computational expense, the finite-volume WENO discretization is combined with the midpoint quadrature, so that only one reconstruction at each intercell location is necessary.
To estimate a monotone upwind flux, a rotated HLLC (Harten-Lax-van Leer contact) Riemann solver is employed at each intercell location, with the state variables estimated from the high-order WENO reconstruction procedure. The effectiveness of this adaptive high-order discretization methodology is assessed on the well-known double Mach reflection test case for reconstruction orders ranging from five to eleven. We find that the resolution of intricate flow features such as the wall jet improves progressively with the reconstruction order, which is indicative of the reduced dissipation of the adaptive high-order WENO discretization. The methodology is then applied to a flow configuration consisting of a Mach 3 supersonic jet injected into a Mach 2 supersonic crossflow of a similar ideal gas. It is found that the flow characteristics, especially the features formed by the Kelvin-Helmholtz instability, are strongly influenced by the reconstruction order. The influence of the jet inclination angle on the overall flow features is also analyzed.
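For background on the reconstruction step named above, here is a minimal sketch of the classic fifth-order WENO-JS point-value reconstruction at a cell interface, following the standard Jiang-Shu coefficients. The thesis builds on this with moving grids and a rotated HLLC flux; this sketch covers only the scalar reconstruction:

```python
import numpy as np

# Fifth-order WENO-JS reconstruction: given five cell averages
# v[i-2..i+2], reconstruct the point value at the right interface x_{i+1/2}.

def weno5_right(vm2, vm1, v0, vp1, vp2, eps=1e-6):
    # Candidate third-order reconstructions on the three sub-stencils.
    p0 = (2*vm2 - 7*vm1 + 11*v0) / 6.0
    p1 = (-vm1 + 5*v0 + 2*vp1) / 6.0
    p2 = (2*v0 + 5*vp1 - vp2) / 6.0
    # Jiang-Shu smoothness indicators: large near discontinuities.
    b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 0.25*(vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 0.25*(vm1 - vp1)**2
    b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 0.25*(3*v0 - 4*vp1 + vp2)**2
    # Nonlinear weights: bias toward the smoothest sub-stencils,
    # recovering the optimal linear weights (0.1, 0.6, 0.3) in smooth regions.
    a = np.array([0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2])
    w = a / a.sum()
    return w[0]*p0 + w[1]*p1 + w[2]*p2
```

In smooth regions the three indicators are comparable and the scheme is fifth-order accurate; near a shock the weight of any stencil crossing the discontinuity collapses, which is the shock-capturing mechanism the abstract refers to.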
577. Atlas construction in diffusion-weighted MRI: application to brain maturation study (Construction d'atlas en IRM de diffusion : application à l'étude de la maturation cérébrale)
Pontabry, Julien, 30 October 2013
Diffusion-weighted MRI (dMRI) is an in vivo imaging modality of growing interest to the neuroimaging community. It provides information on the internal structure of cerebral tissues, complementing the morphological information from structural MRI (sMRI). Together, these modalities open a new path for population studies, especially the in utero study of normal human brain maturation. Modeling and characterizing the rapid changes occurring during brain maturation is a current challenge. To this end, this thesis presents a complete processing pipeline, from spatio-temporal modeling of the population to the analysis of shape changes over time. The contributions are threefold. First, particle filters extended to high-order diffusion models are used for tractography, extracting more relevant descriptors of the fetal brain, which are then used to estimate the geometric transformations between images. Second, a non-parametric regression technique models the mean temporal evolution of the fetal brain without imposing prior knowledge. Finally, shape changes are highlighted using feature extraction and selection methods.
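The prior-free temporal modeling mentioned above can be illustrated with the simplest non-parametric regressor, a Nadaraya-Watson kernel estimate. The data here are toy scalars with assumed ages and volumes; the thesis regresses much richer shape descriptors:

```python
import numpy as np

# Nadaraya-Watson kernel regression: the estimate at a query time is the
# kernel-weighted mean of the observations, with no parametric growth model.

def nadaraya_watson(t_query, t_obs, y_obs, bandwidth):
    w = np.exp(-0.5 * ((t_query - t_obs) / bandwidth) ** 2)  # Gaussian kernel
    return np.sum(w * y_obs) / np.sum(w)

ages = np.array([24.0, 26.0, 28.0, 30.0, 32.0])      # gestational weeks (assumed)
vols = np.array([80.0, 110.0, 150.0, 200.0, 260.0])  # brain volume, cm^3 (toy)
est = nadaraya_watson(29.0, ages, vols, bandwidth=1.5)
```

The bandwidth plays the role of the temporal smoothing window: small values track the rapid maturation changes closely, large values average across gestational ages.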
578. Optimization techniques for radio resource management in wireless communication networks
Weeraddana, P. C. (Pradeep Chathuranga), 22 November 2011
Abstract
The application of optimization techniques for resource management in wireless communication networks is considered in this thesis. It is understood that a wide variety of resource management problems of recent interest, including power/rate control, link scheduling, cross-layer control, network utility maximization, beamformer design of multiple-input multiple-output networks, and many others are directly or indirectly reliant on the general weighted sum-rate maximization (WSRMax) problem. Thus, in this dissertation a greater emphasis is placed on the WSRMax problem, which is known to be NP-hard.
A general method, based on the branch-and-bound technique, is developed, which globally solves the nonconvex WSRMax problem with an optimality certificate. Efficient analytic bounding techniques are derived as well. More broadly, the proposed method is not restricted to WSRMax; it can be used to maximize any system performance metric that is Lipschitz continuous and increasing in the signal-to-interference-plus-noise ratio (SINR). The method can be used to find the optimum performance of any network design method that relies on WSRMax, and it is therefore also useful for evaluating the performance loss incurred by any heuristic algorithm. The considered link-interference model is general enough to accommodate a wide range of network topologies with various node capabilities, such as single-packet transmission, multi-packet transmission, simultaneous transmission and reception, and many others.
Since global methods become slow on large-scale problems, fast local optimization methods for the WSRMax problem are also developed. First, a general multicommodity, multichannel wireless multihop network where all receivers perform single-user detection is considered, and algorithms based on homotopy methods and complementary geometric programming are developed for WSRMax; they efficiently exploit the available multichannel diversity. The proposed algorithm based on homotopy methods efficiently handles the self-interference problem that arises when a node transmits and receives simultaneously in the same frequency band. This is very important, since it circumvents the use of supplementary combinatorial constraints to prevent simultaneous transmission and reception at any node. In addition, the algorithm, together with the considered interference model, provides a mechanism for evaluating the gains when the network nodes employ self-interference cancelation techniques of different degrees of accuracy. Next, a similar multicommodity wireless multihop network is considered, but with all receivers performing multiuser detection. Solutions to the WSRMax problem are obtained by imposing additional constraints, such as that only one node can transmit to others at a time, or that only one node can receive from others at a time. The WSRMax problem for downlink OFDMA systems is also considered, and a fast algorithm based on primal decomposition techniques is developed to jointly optimize the multiuser subcarrier assignment and power allocation to maximize the weighted sum-rate (WSR). Numerical results show that the proposed algorithm converges faster than Lagrange-relaxation-based methods.
Finally, a distributed algorithm for WSRMax is derived for multiple-input single-output multicell downlink systems. The proposed method is based on classical primal decomposition methods and subgradient methods. It does not rely on zero-forcing beamforming or a high-SINR approximation, unlike many other distributed variants. The algorithm essentially coordinates many local subproblems (one for each base station) to resolve the inter-cell interference such that the WSR is maximized. The numerical results show that significant gains can be achieved with only a small amount of message passing between the coordinating base stations, though the global optimality of the solution cannot be guaranteed.
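For concreteness, the WSRMax objective that all of these methods target can be sketched on a toy two-link interference channel, with a brute-force power grid standing in for the branch-and-bound machinery. All gains, weights, and noise values below are invented:

```python
import numpy as np
from itertools import product

# Weighted sum-rate for an interference channel:
#   WSR(p) = sum_i w_i * log2(1 + SINR_i),
#   SINR_i = G[i][i]*p[i] / (noise + sum_{j!=i} G[i][j]*p[j]).
# The coupling through the interference term is what makes WSRMax nonconvex.

def weighted_sum_rate(p, G, weights, noise):
    """G[i][j]: gain from transmitter j to receiver i; direct links on the diagonal."""
    wsr = 0.0
    for i in range(len(p)):
        interference = sum(G[i][j] * p[j] for j in range(len(p)) if j != i)
        sinr = G[i][i] * p[i] / (noise + interference)
        wsr += weights[i] * np.log2(1.0 + sinr)
    return wsr

G = [[1.0, 0.2], [0.3, 1.0]]           # toy cross-gains
weights, noise, p_max = [1.0, 1.0], 0.1, 1.0
grid = np.linspace(0.0, p_max, 21)     # exhaustive search over a power grid
best = max(product(grid, repeat=2),
           key=lambda p: weighted_sum_rate(p, G, weights, noise))
```

The grid search is exponential in the number of links, which is exactly why the thesis needs the branch-and-bound bounds for global solutions and the homotopy or decomposition methods for fast local ones.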
579. The Weighted Space Odyssey
Křepela, Martin, January 2017
The common topic of this thesis is the boundedness of integral and supremal operators between weighted function spaces. The first type of result is a characterization of the boundedness of a convolution-type operator between general weighted Lorentz spaces. Weighted Young-type convolution inequalities are obtained and an optimality property of the involved domain spaces is proved. An overview of basic properties of some new function spaces appearing in the proven inequalities is also provided. In the next part, product-based bilinear and multilinear Hardy-type operators are investigated. It is characterized when a bilinear Hardy operator inequality holds either for all nonnegative functions, or for all nonnegative and nonincreasing functions, on the real semiaxis. The proof technique is based on reducing the bilinear problems to linear ones to which known weighted inequalities are applicable. Further objects of study are iterated supremal and integral Hardy operators, a basic Hardy operator with a kernel, and applications of these to more complicated weighted problems and to embeddings of generalized Lorentz spaces. Several open problems related to missing cases of parameters are solved, thus completing the theory of the involved fundamental Hardy-type operators.
(Article 9 is published in the thesis as a manuscript with the same title.)
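For orientation, the prototypical weighted Hardy inequality behind these operators can be stated as follows; this is a standard textbook formulation included only as background, not a result of the thesis:

```latex
Hf(x) = \int_0^x f(t)\,dt, \qquad
\left( \int_0^\infty \bigl(Hf(x)\bigr)^q\, u(x)\,dx \right)^{1/q}
\le C \left( \int_0^\infty f(x)^p\, v(x)\,dx \right)^{1/p}
\quad \text{for all } f \ge 0 .
```

For $1 < p \le q < \infty$, the inequality holds if and only if the Muckenhoupt-type condition
$\sup_{r>0} \bigl( \int_r^\infty u(x)\,dx \bigr)^{1/q} \bigl( \int_0^r v(x)^{1-p'}\,dx \bigr)^{1/p'} < \infty$
is satisfied, where $p' = p/(p-1)$. The weighted Lorentz-space and bilinear results of the thesis generalize characterizations of this form.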
580. Exact algorithms for the maximum-weight clique problem (Algoritmos exatos para problema da clique máxima ponderada / Algorithmes pour le problème de la clique de poids maximum)
Araujo Tavares, Wladimir, 06 April 2016
In this work, we present three new exact algorithms for the maximum-weight clique problem. All three depend on an initial ordering of the vertices; two orderings are considered, by vertex weight or by the weight of the vertex neighborhoods, giving two versions of each algorithm. The first algorithm, called BITCLIQUE, is a combinatorial branch-and-bound. It effectively combines adaptations of several ideas already employed successfully for this problem, such as a weighted integer coloring heuristic for pruning and branching, and bitmaps for simplifying operations on the graph. The proposed algorithm outperforms state-of-the-art branch-and-bound algorithms on most of the considered instances, in terms of both the number of enumerated subproblems and the computational time. The second algorithm, called BITRDS, is a Russian Dolls algorithm incorporating the pruning and branching strategies based on weighted coloring. Computational tests show that BITRDS reduces both the number of enumerated subproblems and the execution time compared to the previous state-of-the-art Russian Dolls algorithm on random graph instances with density above 50%, and this difference grows as the graph density increases. Moreover, BITRDS is competitive with BITCLIQUE, with better performance on random graph instances with density between 50% and 80%. Finally, we present a cooperation between the Russian Dolls method and the Resolution Search method. The proposed algorithm, called BITBR, uses both the weighted coloring and the upper bounds given by the dolls to find a nogood. The hybrid algorithm reduces the number of coloring-heuristic calls by up to one order of magnitude compared with BITRDS, although this reduction decreases the execution time on only a few instances. Extensive computational experiments are carried out with the proposed and state-of-the-art algorithms, and results are reported for each algorithm on the main instances available in the literature. Finally, future research directions are discussed.
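The branch-and-bound skeleton underlying such algorithms can be sketched as follows. This toy version uses the weakest possible upper bound (the sum of the remaining candidate weights), where BITCLIQUE replaces it with a weighted-coloring bound and bitset operations:

```python
# Branch-and-bound for maximum-weight clique: grow a clique, restrict the
# candidate set to common neighbours, and prune a branch whenever even
# taking every remaining candidate could not beat the incumbent.

def max_weight_clique(adj, w):
    """adj: dict vertex -> set of neighbours; w: dict vertex -> weight (positive)."""
    best = [0.0]

    def expand(clique_weight, candidates):
        if clique_weight > best[0]:
            best[0] = clique_weight
        # Trivial upper bound: current weight plus all candidate weights.
        if clique_weight + sum(w[v] for v in candidates) <= best[0]:
            return
        for v in sorted(candidates, key=lambda u: -w[u]):  # heaviest first
            expand(clique_weight + w[v], candidates & adj[v])
            candidates = candidates - {v}   # never revisit v in this subtree

    expand(0.0, set(adj))
    return best[0]

# Triangle {1,2,3} plus a heavy pendant vertex 4 attached to 3:
# best clique is {3,4} with weight 10, beating the triangle's 9.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
w = {1: 2.0, 2: 3.0, 3: 4.0, 4: 6.0}
```

The tightness of the bound is what separates the variants the abstract compares: a better bound (weighted coloring, or the dolls' stored optima) prunes far more subproblems at the cost of more work per node.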