11

Analysis of An Uncertain Volatility Model in the framework of static hedging for different scenarios

Sdobnova, Alena, Blaszkiewicz, Jakub January 2008 (has links)
In the Black-Scholes model, the volatility and the interest rate are assumed to be constant. In this thesis we concentrate on the behaviour of the volatility as a function and look for more realistic volatility models that eliminate the risk connected with the behaviour of the volatility of the underlying asset. That is why we study the Uncertain Volatility Model. In Chapter 1 we give a theoretical introduction to the Uncertain Volatility Model introduced by Avellaneda, Levy and Paras and study how it behaves under different scenarios. In Chapter 2 we choose one of the scenarios. We also introduce the BSB equation and modify it to narrow the uncertainty bands using the idea of static hedging. In Chapter 3 we construct a suitable portfolio for the static hedging and compare the theoretical results with real market data from the Stockholm Stock Exchange.
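For context, the Black-Scholes-Barenblatt (BSB) equation referred to above prices an option under the worst-case volatility inside a band; a sketch of the standard form, with notation assumed here rather than taken from the thesis:

```latex
% Worst-case (upper) price W(S,t) under an uncertain volatility band
% [\sigma_{\min}, \sigma_{\max}]; the realized volatility depends on the
% sign of the option gamma (sketch of the standard BSB form, assumed notation).
\[
\frac{\partial W}{\partial t}
  + \tfrac{1}{2}\,\Sigma\!\left(\frac{\partial^2 W}{\partial S^2}\right)^{2} S^{2}
    \frac{\partial^2 W}{\partial S^2}
  + r S \frac{\partial W}{\partial S} - r W = 0,
\qquad
\Sigma(\Gamma) =
\begin{cases}
  \sigma_{\max}, & \Gamma \ge 0,\\
  \sigma_{\min}, & \Gamma < 0.
\end{cases}
\]
```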
12

Straight-line Coverage in Wireless Sensor Networks

Lee, Tzu-Chen 17 July 2006 (has links)
Wireless sensor networks enable applications such as environmental surveillance, hazard monitoring, and other customized monitoring tasks, especially in military settings. Coverage plays an important role in such networks: good coverage is essential to ensure quality of service. This thesis studies the barrier coverage problem in sensor networks and finds the optimal straight-line path for both the best-case and the worst-case coverage problems. The proposed optimal algorithm has quadratic time complexity and is based on computational geometry. We introduce a distance-function formulation, apply it to our problems, and use a sweep-and-divide strategy to solve them. The correctness of the proposed method is validated through simulation experiments.
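The abstract only names the two coverage measures; as an illustrative sketch (not the thesis's algorithm, and without the geometric sweep/divide machinery), the worst-case and best-case values of a single candidate straight-line path could be evaluated as follows, with sensor positions, endpoints, and the sampling-based support estimate all assumed for illustration:

```python
import math

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to the segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:                      # degenerate segment
        return math.hypot(px - ax, py - ay)
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))              # clamp projection onto the segment
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def breach(sensors, a, b):
    """Worst-case coverage value of the straight-line path a-b:
    the closest any sensor ever gets to the path (larger = easier to sneak through)."""
    return min(point_segment_distance(s, a, b) for s in sensors)

def support(sensors, a, b, samples=200):
    """Best-case coverage value of the path a-b, estimated by sampling:
    the largest gap to the nearest sensor anywhere along the path
    (smaller = better monitored)."""
    worst_gap = 0.0
    for i in range(samples + 1):
        t = i / samples
        p = (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
        worst_gap = max(worst_gap,
                        min(math.hypot(p[0] - s[0], p[1] - s[1]) for s in sensors))
    return worst_gap

# Illustrative sensor positions and path endpoints (not from the thesis).
sensors = [(2.0, 3.0), (5.0, 1.0), (7.0, 4.0)]
print(breach(sensors, (0.0, 0.0), (10.0, 0.0)))
print(support(sensors, (0.0, 0.0), (10.0, 0.0)))
```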
13

Control flow graphs for real-time systems analysis: reconstruction from binary executables and usage in ILP-based path analysis

Theiling, Henrik. Unknown Date (has links) (PDF)
Saarbrücken, University, Dissertation, 2003. / Year of publication on the main title page: 2002.
14

Best Practice in Benchmarking: How small and medium-sized enterprises use role models and comparisons

Eriksson, Robert, Skoglund, Evelina January 2015 (has links)
Title: Best Practice in Benchmarking; How small and medium-sized enterprises use role models and comparisons. Level: Bachelor's thesis (C-level) in business administration. Authors: Robert Eriksson & Evelina Skoglund. Supervisors: Tomas Källqvist & Stig Sörling. Date: January 2015. Aim: According to previous research, SMEs have difficulty applying the benchmarking concept, since existing models and approaches are mainly adapted to larger organisations. The aim of the study is therefore to create an understanding of how SMEs work with role models and comparisons in line with best practice. Method: The study adopts a hermeneutic and social-constructivist perspective and uses a deductive approach. The theoretical frame of reference was built by organising previous research into themes. The empirical material was collected from five SMEs using a qualitative method with a case-study design, based on semi-structured interviews with open-ended questions. The theoretical frame of reference was then set against the empirical material to highlight differences and similarities. Results & conclusions: The study indicates that SMEs constantly work with comparisons and look to role models. Their way of working differs from previous research, however, in that they work in a less structured manner and rely on constant openness to gain new ideas. It also emerges that the role of management is of great importance. The results show that these SMEs use best practice but also look to worst practice. Suggestions for further research: Based on this study, it would be interesting to see both quantitative and qualitative studies focusing on the concept of worst practice; it would also be interesting to examine the role of management within the concept to gain a deeper understanding of its importance. Contribution of the thesis: The study shows that SMEs work with comparisons in a less structured way than the presented models suggest. It also shows that they look to worst practice, and not only best practice, when applying the concept. Keywords: best practice, SME, benchmarking, unstructured, worst practice, openness.
15

Thinking the Worst of Others: Does a Belief in Free Will Increase a Negativity Bias in Motive Attribution?

Gortner, Lindsey 07 June 2022 (has links)
No description available.
16

Prioritizing quality dimensions for a Polymer industry using Best-Worst Method.

Thugudam, Lalith Kumar 01 May 2020 (has links) (PDF)
This research addresses the complex decision-making problem of identifying the quality dimensions in a polymer industry and prioritizing them so as to obtain the best-quality product with minimum expenditure. It makes use of expert opinion and a suitable decision-making model to yield an optimal solution that helps manufacturing plants reduce waste and achieve a more consistent product quality throughout the production process.
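The Best-Worst Method named in the title is not spelled out in the abstract; below is a minimal sketch of the linear BWM weight-derivation model, where the quality dimensions, comparison vectors, and the use of scipy are illustrative assumptions rather than details from the thesis:

```python
import numpy as np
from scipy.optimize import linprog

def bwm_weights(best_to_others, others_to_worst, best_idx, worst_idx):
    """Linear Best-Worst Method: find weights w minimizing xi subject to
    |w_B - a_Bj*w_j| <= xi, |w_j - a_jW*w_W| <= xi, sum(w) = 1, w >= 0."""
    n = len(best_to_others)
    c = np.zeros(n + 1)              # decision vector [w_1 .. w_n, xi]
    c[-1] = 1.0                      # objective: minimize xi
    A_ub, b_ub = [], []

    def add_abs_constraint(coeffs):
        # |coeffs . w| <= xi  ->  coeffs.w - xi <= 0  and  -coeffs.w - xi <= 0
        A_ub.append(np.append(coeffs, -1.0)); b_ub.append(0.0)
        A_ub.append(np.append(-coeffs, -1.0)); b_ub.append(0.0)

    for j in range(n):
        cb = np.zeros(n); cb[best_idx] += 1.0; cb[j] -= best_to_others[j]
        add_abs_constraint(cb)                      # w_B - a_Bj * w_j
        cw = np.zeros(n); cw[j] += 1.0; cw[worst_idx] -= others_to_worst[j]
        add_abs_constraint(cw)                      # w_j - a_jW * w_W
    A_eq = [np.append(np.ones(n), 0.0)]             # weights sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1))
    return res.x[:n], res.x[-1]                     # weights, consistency xi

# Hypothetical quality dimensions and 1-9 scale comparisons (illustrative only).
dims = ["tensile strength", "surface finish", "dimensional accuracy", "rework cost"]
w, xi = bwm_weights(best_to_others=[1, 4, 2, 8],    # best criterion vs. others
                    others_to_worst=[8, 2, 4, 1],   # others vs. worst criterion
                    best_idx=0, worst_idx=3)
print(dict(zip(dims, np.round(w, 3))), "xi =", round(xi, 3))
```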
17

Asymptotic Worst-Case Analyses for the Open Bin Packing Problem

Ongkunaruk, Pornthipa 06 January 2006 (has links)
The open bin packing problem (OBPP) is a new variant of the well-known bin packing problem. In the OBPP, items are packed into bins so that the total content before the last item in each bin is strictly less than the bin capacity. The objective is to minimize the number of bins used. Applications of the OBPP can be found in the subway station systems of Hong Kong and Taipei and in scheduling for manufacturing industries. We show that the OBPP is NP-hard and, rather than solving the problem to optimality, propose two offline heuristic algorithms in which the information about the items is known in advance. First, we consider First Fit Decreasing (FFD), a good approximation algorithm for the bin packing problem, and prove that its asymptotic worst-case performance ratio is no more than 3/2. We observe that its performance on the OBPP is worse than on the BPP. Consequently, we modify it so that the largest items become the last items in their bins, and propose the resulting Modified First Fit Decreasing (MFFD) as an alternative, proving that its asymptotic worst-case performance ratio is no more than 91/80. We conduct empirical tests to show the average-case performance. The results show that, in general, the FFD and MFFD algorithms use no more than 33% and 1% more bins than an optimal packing, respectively. In addition, the MFFD is asymptotically optimal when the item sizes are uniformly distributed on (0,1). / Ph. D.
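As a concrete reading of the packing rule described above, here is a minimal first-fit-decreasing sketch for the open variant, assuming the rule means a bin may still receive one more item as long as its current load is strictly below the capacity; the interpretation, item sizes, and capacity are illustrative and not the dissertation's algorithm:

```python
def ffd_open_bin_packing(sizes, capacity):
    """First Fit Decreasing for the open bin packing problem (OBPP):
    a bin can accept another item while its current load is strictly
    less than the capacity, so the last item in a bin may overflow it."""
    bins = []                                  # each bin is a list of item sizes
    for size in sorted(sizes, reverse=True):   # largest items first
        for b in bins:
            if sum(b) < capacity:              # bin still "open" for one more item
                b.append(size)
                break
        else:
            bins.append([size])                # no open bin found: start a new one
    return bins

# Illustrative item sizes and capacity (not taken from the dissertation).
items = [0.6, 0.5, 0.5, 0.4, 0.3, 0.2, 0.2]
for i, b in enumerate(ffd_open_bin_packing(items, capacity=1.0), 1):
    print(f"bin {i}: {b} (load {sum(b):.1f})")
```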
18

Investigations on CPI Centric Worst Case Execution Time Analysis

Ravindar, Archana January 2013 (has links) (PDF)
Estimating program worst-case execution time (WCET) is an important problem in the domain of real-time and embedded systems that are deadline-centric. If the WCET of a program is found to exceed the deadline, the program is either recoded or the target architecture is modified to meet the deadline. There are three broad approaches to estimating WCET: static WCET analysis, hybrid measurement-based analysis, and statistical WCET analysis. Though measurement-based analyzers benefit from knowledge of run-time behavior, the amount of instrumentation remains a concern. This thesis proposes a CPI-centric WCET analyzer that estimates WCET as the product of the worst-case instruction count (IC), estimated using static analysis, and the worst-case cycles per instruction (CPI), computed as a function of measured CPI. In many programs, IC and CPI values are observed to be correlated, and five different kinds of correlation are found. This correlation lets us tighten the estimate from the product of worst-case IC and worst-case CPI to the product of worst-case IC and the corresponding CPI. A prime advantage of viewing time in terms of CPI is that it enables us to exploit program phase behavior. In many programs, CPI varies in phases during execution: within each phase, the variation is homogeneous and lies within a few percent of the mean, while the coefficient of variation of CPI across phases is much greater than that within a phase. Using this observation, we estimate program WCET in terms of its phases. Because of how CPI varies within a phase in such programs, we can use a simple probabilistic inequality, the Chebyshev inequality, to bound CPI with a desired probability. In some programs that execute many paths depending on if-conditions, CPI variation is observed to be high. The thesis proposes a PC signature, a low-cost way of profiling path information, which is used to isolate points of high CPI variation and to divide a phase into smaller sub-phases of lower CPI variation. Applying the Chebyshev inequality to the sub-phases results in much tighter bounds. A phase can also be divided into sub-phases based on the allowable CPI variance within a sub-phase. The proposed technique is implemented on simulators and on a native platform. Other advantages of phases in the context of timing analysis are also presented, including parallelized WCET analysis and estimation of the remaining worst-case execution time of a particular program run.
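To illustrate the Chebyshev step described above, a minimal sketch of how a per-phase CPI bound and a WCET estimate might be combined; the CPI samples, clock period, and worst-case instruction counts are invented for illustration and this is not the thesis's implementation:

```python
import math
import statistics

def cpi_upper_bound(cpi_samples, probability):
    """Chebyshev bound: P(|X - mu| >= k*sigma) <= 1/k^2, so with
    k = 1/sqrt(1 - p) the CPI stays below mu + k*sigma with probability >= p."""
    mu = statistics.mean(cpi_samples)
    sigma = statistics.pstdev(cpi_samples)
    k = 1.0 / math.sqrt(1.0 - probability)
    return mu + k * sigma

# Hypothetical per-phase data: worst-case instruction count (from static analysis)
# and CPI samples measured over several runs of that phase.
phases = [
    {"worst_case_ic": 2_000_000, "cpi_samples": [1.10, 1.12, 1.09, 1.11, 1.13]},
    {"worst_case_ic":   500_000, "cpi_samples": [2.40, 2.55, 2.47, 2.50, 2.43]},
]
clock_period_ns = 1.0   # 1 GHz core, assumed for illustration

wcet_cycles = sum(p["worst_case_ic"] * cpi_upper_bound(p["cpi_samples"], 0.99)
                  for p in phases)
print(f"estimated WCET ~= {wcet_cycles * clock_period_ns / 1e6:.2f} ms")
```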
19

Scalable Trajectory Approach for ensuring deterministic guarantees in large networks

Medlej, Sara 26 September 2013 (has links) (PDF)
In critical real-time systems, any faulty behavior may endanger lives, so system verification and validation are essential before deployment. In fact, safety authorities require deterministic guarantees. In this thesis we are interested in offering temporal guarantees; in particular, we need to prove that the end-to-end response time of every flow in the network is bounded. This subject has been addressed for many years and several approaches have been developed. After a brief comparison of the existing approaches, the Trajectory Approach appears to be a good candidate because of the tightness of the bound it offers. This method uses results from scheduling theory to derive an upper bound, and we investigate the reasons that make this bound pessimistic. Moreover, since the method must be applied to large networks, it is important to obtain results in an acceptable time frame, so a study of the method's scalability was carried out. The analysis shows that the computational complexity stems from recursive and iterative processes: as the number of flows and switches increases, the total runtime required to compute the upper bound of every flow in the network under study grows rapidly. Building on the Trajectory Approach, we propose the Scalable Trajectory Approach, which computes an upper bound in a reduced time frame without significant loss of precision. Applied to a network, simulation results show that the total runtime was reduced from several days to about a dozen seconds.
20

ROBUST ADAPTIVE BEAMFORMING WITH BROAD NULLS

He, Yudong, Yang, Xianghua, Zhou, Jie, Zhou, Banghua, Shao, Beibei October 2007 (has links)
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada / Robust adaptive beamforming using worst-case performance optimization has been developed in recent years. It performs well against array response errors, but it cannot reject strong interferences. In this paper, we propose a scheme for robust adaptive beamforming with broad nulls to reject strong interferences. We add a quadratic constraint that suppresses the power of the array response over the spatial region of the interferences. The optimal weighting vector is then obtained by minimizing the power of the array output subject to quadratic constraints on the desired signal and the interferences, respectively. We derive the formulation of the optimization problem and solve it efficiently using a recursive Newton algorithm. Numerical examples compare the performance of robust adaptive beamforming with no null constraints, sharp nulls, and broad nulls, and show the proposed method's powerful ability to reject strong interferences.
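A sketch of the kind of constrained problem described above, written in standard beamforming notation; the symbols and the exact constraint forms are assumed here, not taken from the paper:

```latex
% w: weight vector, R: sample covariance of the array output,
% a(theta_0): steering vector of the desired signal,
% Theta_I: angular region of the interferences (broad null region).
% Sketch of a broad-null formulation, assumed notation.
\[
\min_{\mathbf{w}} \; \mathbf{w}^{H} \mathbf{R}\,\mathbf{w}
\quad \text{s.t.} \quad
\bigl|\mathbf{w}^{H}\mathbf{a}(\theta_0)\bigr|^{2} \ge 1,
\qquad
\mathbf{w}^{H} \mathbf{C}\,\mathbf{w} \le \varepsilon,
\quad
\mathbf{C} = \int_{\Theta_I} \mathbf{a}(\theta)\,\mathbf{a}^{H}(\theta)\, d\theta .
\]
```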
