11

Straight-line Coverage in Wireless Sensor Networks

Lee, Tzu-Chen 17 July 2006 (has links)
Wireless sensor networks offer a new way of interacting with our environments, supporting applications such as environmental surveillance and hazard monitoring, and are particularly valuable in military settings. Coverage is a central issue in these networks, and good coverage is essential to ensure quality of service. This paper studies the barrier coverage problems of a sensor network and finds the optimized straight-line path for both the best-case and worst-case coverage problems. The proposed optimal algorithm has quadratic time complexity and is based on computational geometry. We develop a distance-function framework, apply it to these problems, and use a sweep-and-divide strategy to solve them. The correctness of the proposed method is validated through simulation experiments.
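To make the distance-function view concrete, here is a brute-force sketch (our own illustration, not the paper's quadratic computational-geometry algorithm) that scores horizontal straight-line crossings of a sensor field: a worst-case (breach) path maximizes the smallest distance from any point on the path to the nearest sensor.

```python
import math

# Hypothetical illustration of the distance-function view of straight-line
# barrier coverage. The sampling strategy and all names are our own.

def breach(line_y, sensors, x_range, samples=200):
    """Worst-case score of the horizontal line y = line_y: the smallest
    distance from any sampled point on the line to its nearest sensor.
    A maximal-breach path maximizes this value."""
    x0, x1 = x_range
    best = float("inf")
    for i in range(samples + 1):
        x = x0 + (x1 - x0) * i / samples
        d = min(math.hypot(x - sx, line_y - sy) for sx, sy in sensors)
        best = min(best, d)
    return best

sensors = [(2.0, 3.0), (5.0, 1.0), (7.0, 4.0)]
# Scan candidate horizontal crossings and keep the one hardest to detect.
candidates = [y / 10 for y in range(0, 51)]
best_y = max(candidates, key=lambda y: breach(y, sensors, (0.0, 10.0)))
print(best_y, breach(best_y, sensors, (0.0, 10.0)))
```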
12

Control flow graphs for real-time systems analysis: reconstruction from binary executables and usage in ILP-based path analysis

Theiling, Henrik. Unknown Date (has links) (PDF)
Doctoral dissertation, Saarbrücken, 2003. / Publication year on the main title page: 2002.
13

Asymptotic Worst-Case Analyses for the Open Bin Packing Problem

Ongkunaruk, Pornthipa 06 January 2006 (has links)
The open bin packing problem (OBPP) is a new variant of the well-known bin packing problem. In the OBPP, items are packed into bins so that the total content before the last item in each bin is strictly less than the bin capacity. The objective is to minimize the number of bins used. Applications of the OBPP can be found in the subway station systems in Hong Kong and Taipei and in scheduling in manufacturing industries. We show that the OBPP is NP-hard and propose two heuristic algorithms instead of solving the problem to optimality. Both are offline algorithms, in which the information about the items is known in advance. First, we consider First Fit Decreasing (FFD), a good approximation algorithm for the classical bin packing problem. We prove that its asymptotic worst-case performance ratio is no more than 3/2, and we observe that its performance on the OBPP is worse than on the BPP. Consequently, we modify it so that the largest items become the last items in their bins, and propose the resulting Modified First Fit Decreasing (MFFD) as an alternative; we prove that its asymptotic worst-case performance ratio is no more than 91/80. We conduct empirical tests of their average-case performance. The results show that, in general, the FFD and MFFD algorithms use no more than 33% and 1% more bins than an optimal packing, respectively. In addition, MFFD is asymptotically optimal when the item sizes are uniformly distributed on (0,1). / Ph. D.
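A minimal sketch of First Fit Decreasing under the open bin packing rule described above: a bin may receive another item as long as its current content is still strictly below the capacity, so the final item in a bin may overflow. This is plain FFD adapted to the OBPP feasibility rule, not the thesis's MFFD refinement.

```python
# Our own illustration; item sizes and the capacity are arbitrary examples.

def ffd_open(items, capacity=1.0):
    bins = []  # current load of each open bin
    for size in sorted(items, reverse=True):
        for i, load in enumerate(bins):
            if load < capacity:        # strict: content before the last
                bins[i] = load + size  # item must be below capacity
                break
        else:
            bins.append(size)          # open a new bin
    return len(bins)

print(ffd_open([0.6, 0.5, 0.5, 0.4, 0.3, 0.2]))  # -> 3 bins
```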
14

Investigations on CPI Centric Worst Case Execution Time Analysis

Ravindar, Archana January 2013 (has links) (PDF)
Estimating the worst-case execution time (WCET) of a program is an important problem in the domain of real-time and embedded systems that are deadline-centric. If the WCET of a program is found to exceed the deadline, the program is recoded or the target architecture is modified to meet the deadline. There are three broad approaches to estimating WCET: static WCET analysis, hybrid measurement-based analysis, and statistical WCET analysis. Though measurement-based analyzers benefit from knowledge of run-time behavior, the amount of instrumentation remains a concern. This thesis proposes a CPI-centric WCET analyzer that estimates WCET as the product of the worst-case instruction count (IC), estimated using static analysis, and the worst-case cycles per instruction (CPI), computed as a function of measured CPI. In many programs, IC and CPI values are observed to be correlated, and five different kinds of correlation are found. This correlation enables us to tighten the WCET estimate from the product of worst-case IC and worst-case CPI to the product of worst-case IC and the corresponding CPI. A prime advantage of viewing time in terms of CPI is that it lets us exploit program phase behavior. In many programs, CPI varies in phases during execution. Within each phase, the variation is homogeneous and lies within a few percent of the mean; the coefficient of variation of CPI across phases is much greater than within a phase. Using this observation, we estimate program WCET in terms of its phases. Because of the nature of CPI variation within a phase in such programs, a simple probabilistic inequality, the Chebyshev inequality, can be used to bound CPI with a desired probability. In some programs that execute many paths depending on if-conditions, CPI variation is observed to be high. The thesis proposes a PC signature, a low-cost way of profiling path information, which is used to isolate points of high CPI variation and divide a phase into smaller sub-phases of lower CPI variation. Applying the Chebyshev inequality to sub-phases results in much tighter bounds. Provision also exists to divide a phase into smaller sub-phases based on the allowable variance of CPI within a sub-phase. The proposed technique is implemented on simulators and on a native platform. Other advantages of phases in the context of timing analysis are also presented, including parallelized WCET analysis and estimation of the remaining worst-case execution time for a particular program run.
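As a toy illustration of the phase-wise bound (our own numbers and names, not the thesis's tooling): Chebyshev's inequality P(|X - mu| >= k*sigma) <= 1/k^2 lets each phase's CPI be capped at mu + sigma/sqrt(p) with probability at least 1 - p, and a probabilistic WCET bound is the sum over phases of worst-case IC times bounded CPI.

```python
import math, statistics

def phase_cpi_bound(cpi_samples, p=0.01):
    """Chebyshev cap on phase CPI: exceeded with probability at most p."""
    mu = statistics.mean(cpi_samples)
    sigma = statistics.pstdev(cpi_samples)
    k = 1.0 / math.sqrt(p)
    return mu + k * sigma

# Hypothetical (worst-case instruction count, measured CPI samples) per phase.
phases = [
    (1_200_000, [0.92, 0.95, 0.94, 0.93]),
    (  400_000, [1.40, 1.38, 1.43, 1.41]),
]
wcet_cycles = sum(ic * phase_cpi_bound(cpis) for ic, cpis in phases)
print(f"probabilistic WCET bound: {wcet_cycles:,.0f} cycles")
```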
15

Scalable Trajectory Approach for ensuring deterministic guarantees in large networks

Medlej, Sara 26 September 2013 (has links) (PDF)
In critical real-time systems, any faulty behavior may endanger lives, so systems must be verified and validated before deployment. In fact, safety authorities require deterministic guarantees. In this thesis we are interested in offering temporal guarantees; in particular, we need to prove that the end-to-end response time of every flow present in the network is bounded. This subject has been addressed for many years and several approaches have been developed. After a brief comparison of the existing approaches, the Trajectory Approach emerged as a good candidate due to the tightness of the bound it offers. This method uses results established by scheduling theory to derive an upper bound. The reasons leading to a pessimistic upper bound are investigated. Moreover, since the method must be applied to large networks, it is important to be able to deliver results in an acceptable time frame, so a study of the method's scalability was carried out. Analysis shows that the complexity of the computation is due to recursive and iterative processes: as the number of flows and switches increases, the total runtime required to compute the upper bound of every flow in the network under study grows rapidly. Building on the concept of the Trajectory Approach, we propose the Scalable Trajectory Approach, which computes an upper bound in a reduced time frame without significant loss of precision. Simulation results of applying it to a network show that the total runtime was reduced from several days to a dozen seconds.
16

ROBUST ADAPTIVE BEAMFORMING WITH BROAD NULLS

He, Yudong; Yang, Xianghua; Zhou, Jie; Zhou, Banghua; Shao, Beibei October 2007 (has links)
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada / Robust adaptive beamforming using worst-case performance optimization has been developed in recent years. It offers good performance against array response errors, but it cannot reject strong interferences. In this paper, we propose a scheme for robust adaptive beamforming with broad nulls to reject strong interferences. We add a quadratic constraint that suppresses the power of the array response over the spatial region of the interferences. The optimal weight vector is then obtained by minimizing the power of the array output subject to quadratic constraints on the desired signal and the interferences, respectively. We derive the formulation of the optimization problem and solve it efficiently using a recursive Newton algorithm. Numerical examples compare the performance of robust adaptive beamforming with no null constraints, sharp nulls, and broad nulls, and show the proposed scheme's powerful ability to reject strong interferences.
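A plausible way to write the broad-null design described above (our notation and reading, not necessarily the paper's exact formulation): w is the weight vector, R the array output covariance, a(theta) the steering vector, theta_0 the desired direction, and Theta_i the interference region; Q integrates the array response over Theta_i, so a single quadratic constraint caps the power across the whole null region.

```latex
\begin{align*}
\min_{\mathbf{w}} \quad & \mathbf{w}^{H}\mathbf{R}\mathbf{w}
  && \text{(array output power)}\\
\text{s.t.} \quad & \bigl|\mathbf{w}^{H}\mathbf{a}(\theta_0)\bigr| \ge 1,
  && \text{(protect the desired signal)}\\
& \mathbf{w}^{H}\mathbf{Q}\mathbf{w} \le \varepsilon,\quad
  \mathbf{Q}=\int_{\Theta_i}\mathbf{a}(\theta)\mathbf{a}^{H}(\theta)\,d\theta
  && \text{(broad null over } \Theta_i\text{)}
\end{align*}
```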
17

Robust optimization for portfolio risk : a re-visit of worst-case risk management procedures after Basel III award

Özün, Alper January 2012 (has links)
The main purpose of this thesis is to develop methodological and practical improvements to robust portfolio optimization procedures. First, the thesis discusses the drawbacks of classical mean-variance optimization models and examines robust portfolio optimization procedures with CVaR and worst-case CVaR risk models, providing a clear derivation of the robust optimization models from a basic VaR model. For practical purposes, the thesis introduces an open-source software interface called “RobustRisk”, developed to produce empirical evidence for the robust portfolio optimization models. The software, which performs Monte Carlo simulation and out-of-sample performance analysis for portfolio optimization, is demonstrated using hypothetical portfolio data from selected emerging markets. In addition, the performance of the robust portfolio optimization procedures is discussed using empirical evidence from advanced markets in the crisis period. Empirical results show that robust optimization with the worst-case CVaR model outperforms the nominal CVaR model in the crisis period. These results encourage us to construct a forward-looking stress-test procedure based on robust portfolio optimization under regime switches. For this purpose, a Markov chain process is embedded into the robust optimization procedure in order to stress the regime transition matrix. In addition, asset returns, volatilities, the correlation matrix, and the covariance matrix can be stressed under pre-defined scenario expectations. An application is provided for a hypothetical, internationally diversified portfolio. The CVaR efficient frontier and corresponding optimized portfolio weights are obtained under regime-switch scenarios. The research suggests that stressed-CVaR optimization provides a robust and forward-looking stress-test procedure that complies with the regulatory requirements stated in Basel II and the CRD regulations.
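For reference, the CVaR and worst-case CVaR models the thesis builds on are commonly written as follows (standard Rockafellar–Uryasev notation, ours rather than the thesis's): w is the vector of portfolio weights in the feasible set W, r the random return vector, alpha the confidence level, and P the uncertainty set of return distributions.

```latex
\begin{align*}
\mathrm{CVaR}_{\alpha}(\mathbf{w}) &=
  \min_{t\in\mathbb{R}}\; t+\frac{1}{1-\alpha}\,
  \mathbb{E}\!\left[(-\mathbf{w}^{\top}\mathbf{r}-t)^{+}\right],\\
\mathrm{WCVaR}_{\alpha}(\mathbf{w}) &=
  \sup_{\mathbb{P}\in\mathcal{P}}\;
  \mathrm{CVaR}_{\alpha}^{\mathbb{P}}(\mathbf{w}),
\qquad
\min_{\mathbf{w}\in\mathcal{W}}\;\mathrm{WCVaR}_{\alpha}(\mathbf{w}).
\end{align*}
```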
18

Robustní optimalizace portfolia / Robust portfolio selection problem

Zákutná, Tatiana January 2013 (has links)
In this thesis, portfolio optimization with integer variables that influence the optimal asset allocation is studied. Measures of risk are defined and the corresponding mean-risk models are derived. Two methods are used to develop robust models involving uncertainty in the probability distribution: worst-case analysis and contamination. Uncertainty in the scenario values and in the scenario probabilities of the discrete probability distribution is treated separately, followed by their combination. These models are applied to stock market data using the optimization software GAMS.
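For orientation, the contamination technique mentioned above is usually set up as follows (a minimal sketch in our notation, assuming the objective is linear in the probability distribution so the optimal-value function phi(lambda) is concave): the nominal distribution P is mixed with a contaminating distribution Q, and phi(lambda) is bracketed without re-solving the problem for every lambda; x*_0 denotes an optimal solution of the nominal problem.

```latex
\[
P_{\lambda} = (1-\lambda)\,P + \lambda\,Q, \qquad \lambda \in [0,1],
\]
\[
(1-\lambda)\,\varphi(0) + \lambda\,\varphi(1)
\;\le\; \varphi(\lambda) \;\le\;
(1-\lambda)\,\varphi(0) + \lambda\, f(x^{*}_{0}, Q).
\]
```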
19

Low Complexity Space-Time coding for MIMO systems.

Ismail, Amr 24 November 2011 (has links) (PDF)
The last few years have witnessed a dramatic increase in the demand for high-rate, reliable wireless communications. In order to meet these new requirements, resorting to Multiple-Input Multiple-Output (MIMO) techniques was inevitable, as they can offer high-rate, reliable wireless communications without any additional bandwidth. In the case where the transmitter does not have any prior knowledge of the channel state information, space-time coding techniques have proved to efficiently exploit the MIMO channel degrees of freedom while taking advantage of the maximum diversity gain. On the other hand, the ML decoding complexity of Space-Time Codes (STCs) generally increases exponentially with the rate, which poses an important challenge to their incorporation in recent communications standards. Recognizing the importance of the low-complexity criterion in STC design for practical considerations, this thesis focuses on the design of new low-complexity Space-Time Block Codes (STBCs) in which the transmitted code matrix can be expressed as a weighted linear combination of information symbols. We propose new codes that are decoded with lower complexity than that of their rivals in the literature while providing better or only slightly degraded performance.
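The "weighted linear combination" structure is usually written in linear-dispersion form (standard notation, not necessarily the thesis's): the T x Nt code matrix X spreads the real and imaginary parts of K information symbols s_k over space and time through fixed dispersion matrices A_k and B_k. ML decoding complexity is then governed by how many symbols must be jointly detected; orthogonality relations among the dispersion matrices let groups of symbols be decoded independently, which is what lowers the complexity.

```latex
\[
\mathbf{X} \;=\; \sum_{k=1}^{K}
  \Re(s_k)\,\mathbf{A}_{k} + \Im(s_k)\,\mathbf{B}_{k}.
\]
```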
20

Multiprocessor Scheduling with Availability Constraints

Grigoriu, Liliana May 2010 (has links)
We consider the problem of scheduling a given set of tasks on multiple processors with predefined periods of unavailability, with the aim of minimizing the maximum completion time. Since this problem is strongly NP-hard, polynomial approximation algorithms are being studied for its solution. Among these, the best known are LPT (largest processing time first) and Multifit, with their variants. We give a Multifit-based algorithm, FFDL Multifit, which has optimal worst-case performance in the class of polynomial algorithms for same-speed processors with at most two downtimes on each machine, and for uniform processors with at most one downtime on each machine, assuming that P ≠ NP. Our algorithm finishes within 3/2 of the maximum of the end of the last downtime and the end of the optimal schedule. This bound is asymptotically tight in the class of polynomial algorithms assuming that P ≠ NP. For same-speed processors with at most k downtimes on each machine, our algorithm finishes within (3/2 + 1/(2k)) of the end of the last downtime or the end of the optimal schedule. For problems where the optimal schedule ends after the last downtime, and when the downtimes represent fixed jobs, the maximum completion time of FFDL Multifit is within 3/2 or (3/2 + 1/(2k)) of the optimal maximum completion time. We also give an LPT-based algorithm, LPTX, which matches the performance of FFDL Multifit for same-speed processors with at most one downtime on each machine, and is thus optimal in the class of polynomial algorithms for this case. LPTX differs from LPT in that it uses a specific order of processors to assign tasks when two processors become available at the same time. For a similar problem, when there is at most one downtime on each machine and no more than half of the machines are shut down at the same time, we show that a bound of 2 obtained in a previous work for LPT is asymptotically tight in the class of polynomial algorithms assuming that P ≠ NP.
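To make the setting concrete, here is a minimal LPT sketch under an illustrative availability model: a single non-crossable downtime interval per machine and non-resumable tasks (our simplification; FFDL Multifit and LPTX's processor-ordering tie-break are the thesis's contributions and are not reproduced here).

```python
import heapq

def lpt(tasks, downtimes):
    """tasks: processing times; downtimes[i]: (start, end) of machine i's
    unavailable interval. Returns the makespan under plain LPT."""
    free = [(0.0, i) for i in range(len(downtimes))]  # (next free time, machine)
    heapq.heapify(free)
    makespan = 0.0
    for p in sorted(tasks, reverse=True):   # largest processing time first
        t, i = heapq.heappop(free)
        s, e = downtimes[i]
        # a task that would overlap the downtime must start after it
        start = t if (t + p <= s or t >= e) else e
        finish = start + p
        makespan = max(makespan, finish)
        heapq.heappush(free, (finish, i))
    return makespan

# Machine 0 is always available; machine 1 is down during [6, 9).
print(lpt([5, 4, 3, 3, 2], [(0, 0), (6, 9)]))
```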
