71

High Level Techniques for Leakage Power Estimation and Optimization in VLSI ASICs

Gopalakrishnan, Chandramouli 26 September 2003 (has links)
As technology scales down and CMOS circuits are powered by lower supply voltages, standby leakage current becomes significant. A behavioral level framework for the synthesis of data-paths with low leakage power is presented. There has been minimal work done on the behavioral synthesis of low leakage datapaths. We present a fast architectural simulator for leakage (FASL) to estimate the leakage power dissipated by a system described hierarchically in VHDL. FASL uses a leakage power model embedded into VHDL leafcells. These leafcells are characterized for leakage accurately using HSPICE. We present results which show that FASL measures leakage power significantly faster than HSPICE, with less than a 5% loss in accuracy. We present a comprehensive framework for synthesizing low leakage power data-paths using a parameterized Multi-threshold CMOS (MTCMOS) component library. The component library has been characterized for leakage power and delay as a function of sleep transistor width. We propose four techniques for minimization of leakage power during behavioral synthesis: (1) leakage power management using MTCMOS modules; (2) an allocation and binding algorithm for low leakage based on clique partitioning; (3) selective binding to MTCMOS technology, allowing the designer to have control over the area overhead; and (4) a performance recovery technique based on multi-cycling and introduction of slack, to alleviate the loss in performance attributed to the introduction of MTCMOS modules in the data-path. Finally, we propose two iterative search techniques, based on Tabu search, to synthesize low leakage data-paths. The first technique searches for low leakage scheduling options. The second technique simultaneously searches for a low leakage schedule and binding. It is shown that the latter technique of unified search is more robust.
The quality of the results generated by the Tabu-based techniques is superior to that of results generated by a simulated annealing (SA) search.
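The Tabu search idea above can be sketched generically. The toy below (invented module names, leakage numbers, and area penalty — not the thesis's framework) searches over bit-vectors indicating which data-path modules are bound to low-leakage MTCMOS variants, using a tabu tenure and an aspiration criterion:

```python
import random

def tabu_search(cost, n_bits, iters=200, tenure=5, seed=1):
    """Generic Tabu search over bit-vectors (hypothetical sketch)."""
    rng = random.Random(seed)
    cur = [rng.randint(0, 1) for _ in range(n_bits)]
    best, best_cost = cur[:], cost(cur)
    tabu = {}  # bit index -> iteration until which flipping it is tabu
    for it in range(iters):
        cand = None
        # Evaluate all single-bit-flip neighbours.
        for i in range(n_bits):
            nb = cur[:]
            nb[i] ^= 1
            c = cost(nb)
            # Skip tabu moves unless they beat the best (aspiration).
            if tabu.get(i, -1) >= it and c >= best_cost:
                continue
            if cand is None or c < cand[1]:
                cand = (i, c, nb)
        if cand is None:
            continue
        i, c, cur = cand
        tabu[i] = it + tenure
        if c < best_cost:
            best, best_cost = cur[:], c
    return best, best_cost

# Toy leakage model: bit=1 means the module uses an MTCMOS variant,
# cutting leakage to 10% at a fixed area penalty (numbers invented).
leak = [5.0, 3.0, 8.0, 2.0]
area_penalty = 1.0

def cost(bits):
    return sum(l * (0.1 if b else 1.0) + area_penalty * b
               for l, b in zip(leak, bits))
```

With this separable toy cost the search converges to all-MTCMOS quickly; the point is only the move/tenure/aspiration mechanics, not the thesis's scheduling or binding encoding.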
72

A Cost-Benefit Approach to Risk Analysis: Merging Analytical Hierarchy Process with Game Theory

Karlsson, Dennie January 2018 (has links)
In this study, cost-benefit problems concerning the knapsack problem of limited resources are studied, along with how they relate to an attacker's perspective when choosing defense strategies. This is accomplished by adopting a cost-benefit method and merging it with game theory. The cost-benefit method chosen for this study is the Analytical Hierarchy Process, and from the field of game theory the Bayesian Nash Equilibrium is used. The Analytical Hierarchy Process allows the user to determine internally comparable weights between elements; to bring a security dimension into the Analytical Hierarchy Process, a subcategory consisting of confidentiality, integrity and availability is used. To determine the attacker strategy and, in effect, the best defense strategy, the Bayesian Nash Equilibrium is used.
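As a rough illustration of how AHP turns pairwise comparisons into comparable weights, the sketch below uses the row geometric-mean approximation (a common stand-in for the principal-eigenvector method) on an invented confidentiality/integrity/availability comparison matrix:

```python
import math

def ahp_weights(M):
    """Approximate AHP priority weights from a pairwise comparison
    matrix via the row geometric-mean method. M[i][j] says how much
    more important criterion i is than criterion j."""
    n = len(M)
    gm = [math.prod(row) ** (1.0 / n) for row in M]  # row geometric means
    s = sum(gm)
    return [g / s for g in gm]                       # normalize to sum 1

# Hypothetical comparison of security criteria (values invented):
# confidentiality vs integrity vs availability.
M = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
]
w = ahp_weights(M)
```

The resulting weights rank confidentiality above integrity above availability, mirroring the matrix's stated preferences; a full AHP would also check the consistency ratio.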
73

Multi-period optimization of pavement management systems

Yoo, Jaewook 30 September 2004 (has links)
The purpose of this research is to develop a model and solution methodology for selecting and scheduling timely and cost-effective maintenance, rehabilitation, and reconstruction activities (M & R) for each pavement section in a highway network and allocating the funding levels through a finite multi-period horizon within the constraints imposed by budget availability in each period, frequency availability of activities, and specified minimum pavement quality requirements. M & R is defined as a chronological sequence of reconstruction, rehabilitation, and major/minor maintenance, including a "do nothing" activity. A procedure is developed for selecting an M & R activity for each pavement section in each period of a specified extended planning horizon. Each activity in the sequence consumes a known amount of capital and generates a known amount of effectiveness measured in pavement quality. The effectiveness of an activity is the expected value of the overall gains in pavement quality rating due to the activity performed on a highway network over an analysis period. It is assumed that the unused portion of the budget for one period can be carried over to subsequent periods. Dynamic Programming (DP) and Branch-and-Bound (B-and-B) approaches are combined to produce a hybrid algorithm for solving the problem under consideration. The algorithm is essentially a DP approach in the sense that the problem is divided into smaller subproblems corresponding to each single period problem. However, the idea of fathoming partial solutions that could not lead to an optimal solution is incorporated within the algorithm to reduce storage and computational requirements in the DP frame using the B-and-B approach. The imbedded-state approach is used to reduce a multi-dimensional DP to a one-dimensional DP. For bounding at each stage, the problem is relaxed in a Lagrangean fashion so that it separates into longest-path network model subproblems.
The values of the Lagrangean multipliers are found by a subgradient optimization method, while the Ford-Bellman network algorithm is employed at each iteration of the subgradient optimization procedure to solve the longest-path network problem as well as to obtain an improved lower and upper bound. If the gap between lower and upper bound is sufficiently small, then we may choose to accept the best known solutions as being sufficiently close to optimal and terminate the algorithm rather than continue to the final stage.
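The per-iteration longest-path computation mentioned above can be sketched with Bellman-Ford-style relaxation. This is a generic sketch assuming an acyclic staged network (so longest paths are well defined), not the dissertation's exact Lagrangean subproblem:

```python
def longest_path(n, edges, src):
    """Longest-path values from src by Bellman-Ford-style relaxation.
    n: number of nodes (0..n-1); edges: list of (u, v, weight).
    Assumes no positive-weight cycles, e.g., a staged/acyclic network."""
    NEG = float("-inf")
    dist = [NEG] * n
    dist[src] = 0.0
    for _ in range(n - 1):              # at most n-1 relaxation passes
        for u, v, w in edges:
            if dist[u] != NEG and dist[u] + w > dist[v]:
                dist[v] = dist[u] + w
    return dist
```

In a subgradient scheme, each iteration would rebuild the edge weights from the current Lagrangean multipliers, call this routine, and use the resulting path value to update bounds and multipliers.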
74

Cutting planes in mixed integer programming: theory and algorithms

Tyber, Steven Jay 19 February 2013 (has links)
Recent developments in mixed integer programming have highlighted the need for multi-row cuts. To this day, the performance of such cuts has typically fallen short of the single-row Gomory mixed integer cut. This disparity between the theoretical need and the practical shortcomings of multi-row cuts motivates the study of both the mixed integer cut and multi-row cuts. In this thesis, we build on the theoretical foundations of the mixed integer cut and develop techniques to derive multi-row cuts. The first chapter introduces the mixed integer programming problem. In this chapter, we review the terminology and cover some basic results that find application throughout this thesis. Furthermore, we describe the practical solution of mixed integer programs, and in particular, we discuss the role of cutting planes and our contributions to this theory. In Chapter 2, we investigate the Gomory mixed integer cut from the perspective of group polyhedra. In this setting, the mixed integer cut appears as a facet of the master cyclic group polyhedron. Our chief contribution is a characterization of the adjacent facets and the extreme points of the mixed integer cut. This provides insight into the families of cuts that may work well in conjunction with the mixed integer cut. We further provide extensions of these results under mappings between group polyhedra. For the remainder of this thesis we explore a framework for deriving multi-row cuts. For this purpose, we favor the method of superadditive lifting. This technique is largely driven by our ability to construct superadditive under-approximations of a special value function known as the lifting function. We devote our effort to precisely this task. Chapter 3 reviews the theory behind superadditive lifting and returns to the classical problem of lifted flow cover inequalities. For this specific example, the lifting function we wish to approximate is quite complicated. 
We overcome this difficulty by adopting an indirect method for proving the validity of a superadditive approximation. Finally, we adapt the idea to high-dimensional lifting problems, where evaluating the exact lifting function often poses an immense challenge. Thus we open entirely unexplored problems to the powerful technique of lifting. Next, in Chapter 4, we consider the computational aspects of constructing strong superadditive approximations. Our primary contribution is a finite algorithm that constructs non-dominated superadditive approximations. This can be used to build superadditive approximations on-the-fly to strengthen cuts derived during computation. Alternately, it can be used offline to guide the search for strong superadditive approximations through numerical examples. We follow up in Chapter 5 by applying the ideas of Chapters 3 and 4 to high-dimensional lifting problems. By working out explicit examples, we are able to identify non-dominated superadditive approximations for high-dimensional lifting functions. These approximations strengthen existing families of cuts obtained from single-row relaxations. Lastly, we show via the stable set problem how the derivation of the lifting function and its superadditive approximation can be entirely embedded in the computation of cuts. Finally, we conclude by identifying future avenues of research that arise as natural extensions of the work in this thesis.
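A small helper makes the central requirement on approximations concrete: a candidate under-approximation of the lifting function must be superadditive. The scalar grid check below is purely illustrative (the thesis works with richer domains):

```python
def is_superadditive(f, points, tol=1e-9):
    """Check f(a) + f(b) <= f(a + b) for all a, b in a finite grid of
    nonnegative points whose sum is also in the grid. Toy scalar check;
    superadditive lifting uses this property on more general domains."""
    for a in points:
        for b in points:
            if a + b in points and f(a) + f(b) > f(a + b) + tol:
                return False
    return True
```

For example, integer division by a constant is superadditive on the nonnegative integers, while the square root is subadditive and fails the check.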
76

Analysis of Hybrid CSMA/CA-TDMA Channel Access Schemes with Application to Wireless Sensor Networks

Shrestha, Bharat 27 November 2013 (has links)
A wireless sensor network consists of a number of sensor devices and coordinator(s) or sink(s). A coordinator collects the sensed data from the sensor devices for further processing. In such networks, sensor devices are generally powered by batteries. Since wireless transmission of packets consumes a significant amount of energy, it is important for a network to adopt a medium access control (MAC) technology which is energy efficient and satisfies the communication performance requirements. Carrier sense multiple access with collision avoidance (CSMA/CA), which is a popular access technique because of its simplicity, flexibility and robustness, suffers from poor throughput and poor energy efficiency in wireless sensor networks. On the other hand, time division multiple access (TDMA) is a collision-free and delay-bounded access technique but suffers from the scalability problem. For this reason, this thesis focuses on the design and analysis of hybrid channel access schemes which combine the strengths of both the CSMA/CA and TDMA schemes. In a hybrid CSMA/CA-TDMA scheme, the use of the CSMA/CA period and the TDMA period can be optimized to enhance the communication performance in the network. If such a hybrid channel access scheme is not designed properly, high congestion during the CSMA/CA period and wastage of bandwidth during the TDMA period result in poor communication performance in terms of throughput and energy efficiency. To address this issue, distributed and centralized channel access schemes are proposed to regulate the activities (such as transmitting, receiving, idling and going into low power mode) of the sensor devices. This regulation during the CSMA/CA period and allocation of TDMA slots reduce traffic congestion and thus improve the network performance. In this thesis work, time slot allocation methods in hybrid CSMA/CA-TDMA schemes are also proposed and analyzed to improve the network performance.
Finally, such hybrid CSMA/CA-TDMA schemes are used in a cellular layout model for the multihop wireless sensor network to mitigate the hidden terminal collision problem.
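One invented toy policy conveys the flavor of such a hybrid superframe split (this is an illustration, not the thesis's allocation scheme): devices with heavy offered load receive dedicated TDMA slots, and the remaining slots form the CSMA/CA contention period:

```python
def split_superframe(loads, n_slots, tdma_threshold=0.5):
    """Toy hybrid CSMA/CA-TDMA superframe split. loads[i] is the
    (hypothetical) offered load of device i in [0, 1]; devices at or
    above tdma_threshold get one dedicated TDMA slot each, and the
    rest of the superframe is left as a CSMA/CA contention period."""
    tdma = [i for i, l in enumerate(loads) if l >= tdma_threshold]
    tdma = tdma[:n_slots]                 # cannot exceed superframe size
    csma_slots = n_slots - len(tdma)
    return tdma, csma_slots
```

A real design would size the two periods from throughput/energy models rather than a fixed threshold, which is exactly the optimization the thesis analyzes.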
77

Logistical Planning of Mobile Food Retailers Operating Within Urban Food Desert Environments

January 2016 (has links)
abstract: Mobile healthy food retailers are a novel alleviation technique to address disparities in access to urban produce stores in food desert communities. Such retailers, which tend to exclusively stock produce items, have become significantly more popular in the past decade, but many are unable to achieve economic sustainability. Therefore, when local and federal grants and scholarships are no longer available for a mobile food retailer, it must stop operating, which poses serious health risks to consumers who rely on its services. To address these issues, a framework was established in this dissertation to aid mobile food retailers with reaching economic sustainability by addressing two key operational decisions. The first decision was the stocked product mix of the mobile retailer. In this problem, it was assumed that mobile retailers want to balance the health, consumer cost, and retailer profitability of their product mix. The second investigated decision was the scheduling and routing plan of the mobile retailer. In this problem, it was assumed that mobile retailers operate similarly to traditional distribution vehicles with the exception that their customers are willing to travel between service locations so long as they are in close proximity. For each of these problems, multiple formulations were developed which address many of the nuances of most existing mobile food retailers. For each problem, a combination of exact and heuristic solution procedures was developed, many utilizing software-independent methodologies, as it was assumed that mobile retailers would not have access to advanced computational software. Extensive computational tests were performed on these algorithms, with the findings demonstrating the advantages of the developed procedures over other algorithms and commercial software. The applicability of these techniques to mobile food retailers was demonstrated through a case study on a local Phoenix, AZ mobile retailer.
Both the product mix and routing of the retailer were evaluated using the developed tools under a variety of conditions and assumptions. The results from this study clearly demonstrate that improved decision making can result in improved profits and longitudinal sustainability for the Phoenix mobile food retailer and similar entities. / Dissertation/Thesis / Doctoral Dissertation Industrial Engineering 2016
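The product-mix decision described above is knapsack-like; a minimal 0/1 knapsack DP (with invented scores and shelf-budget units, not the dissertation's multi-objective formulation) illustrates the structure:

```python
def product_mix(items, budget):
    """0/1 knapsack sketch for a stocked-product-mix decision: maximize
    a combined health/cost/profit score subject to an integer shelf
    budget. items: list of (score, cost) pairs with integer costs."""
    best = [0.0] * (budget + 1)                 # best score at each budget
    choice = [[] for _ in range(budget + 1)]    # items achieving it
    for idx, (score, cost) in enumerate(items):
        # Iterate budgets downward so each item is used at most once.
        for b in range(budget, cost - 1, -1):
            if best[b - cost] + score > best[b]:
                best[b] = best[b - cost] + score
                choice[b] = choice[b - cost] + [idx]
    return best[budget], choice[budget]
```

In the dissertation's setting the "score" would blend health, consumer cost, and profitability, and the exact/heuristic procedures handle far larger instances than this textbook DP.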
78

Object-based PON Access and Tandem Networking

January 2014 (has links)
abstract: The upstream transmission of bulk data files in Ethernet passive optical networks (EPONs) arises from a number of applications, such as data back-up and multimedia file upload. Existing upstream transmission approaches lead to severe delays for conventional packet traffic when best-effort file and packet traffic are mixed. I propose and evaluate an exclusive interval for bulk transfer (EIBT) transmission strategy that reserves an EIBT for file traffic in an EPON polling cycle. I optimize the duration of the EIBT to minimize a weighted sum of packet and file delays. Through mathematical delay analysis and verifying simulation, it is demonstrated that the EIBT approach preserves small delays for packet traffic while efficiently serving bulk data file transfers. Dynamic circuits are well suited for applications that require predictable service with a constant bit rate for a prescribed period of time, such as demanding e-science applications. Past research on upstream transmission in passive optical networks (PONs) has mainly considered packet-switched traffic and has focused on optimizing packet-level performance metrics, such as reducing mean delay. This study proposes and evaluates a dynamic circuit and packet PON (DyCaPPON) that provides dynamic circuits along with packet-switched service. DyCaPPON provides (i) flexible packet-switched service through dynamic bandwidth allocation in periodic polling cycles, and (ii) consistent circuit service by allocating each active circuit a fixed-duration upstream transmission window during each fixed-duration polling cycle. I analyze circuit-level performance metrics, including the blocking probability of dynamic circuit requests in DyCaPPON through a stochastic knapsack-based analysis. Through this analysis I also determine the bandwidth occupied by admitted circuits. The remaining bandwidth is available for packet traffic and I analyze the resulting mean delay of packet traffic. 
Through extensive numerical evaluations and verifying simulations, the circuit blocking and packet delay trade-offs in DyCaPPON are demonstrated. An extended version of DyCaPPON designed for light-traffic situations is also introduced. / Dissertation/Thesis / Ph.D. Electrical Engineering 2014
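The stochastic-knapsack blocking analysis mentioned above is commonly carried out with the Kaufman-Roberts recursion; the sketch below (generic, with invented class parameters, not the DyCaPPON-specific model) computes per-class blocking probabilities on a link of integer capacity:

```python
def kaufman_roberts(C, classes):
    """Kaufman-Roberts recursion for a stochastic knapsack: per-class
    blocking on a link of capacity C. classes: list of (b_k, rho_k) =
    (bandwidth demand in capacity units, offered load in Erlangs)."""
    q = [0.0] * (C + 1)
    q[0] = 1.0
    for n in range(1, C + 1):
        q[n] = sum(b * rho * q[n - b] for b, rho in classes if b <= n) / n
    total = sum(q)
    q = [x / total for x in q]          # normalized occupancy distribution
    # Class k is blocked when fewer than b_k capacity units remain free.
    return [sum(q[C - b + 1:]) for b, _ in classes]
```

For a single class with unit demand this reduces to the Erlang-B formula, which makes the recursion easy to sanity-check.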
79

Adaptive Sampling Pattern Design Methods for MR Imaging

Chennakeshava, K January 2016 (has links) (PDF)
MRI is a very useful imaging modality in medical imaging for both diagnostic as well as functional studies. It provides excellent soft tissue contrast in several diagnostic studies. It is widely used to study the functional aspects of the brain and to study the diffusion of water molecules across tissues. Image acquisition in MR is slow due to longer data acquisition time, gradient ramp-up and stabilization delays. Repetitive scans are also needed to overcome any artefacts due to patient motion and field inhomogeneity, and to improve the signal to noise ratio (SNR). Scanning becomes difficult in the case of claustrophobic patients, and in younger/older patients who are unable to cooperate and are prone to uncontrollable motions inside the scanner. New MR procedures and advanced research in neuro and functional imaging are demanding better resolutions and scan speeds, which implies there is a need to acquire more data in a shorter time frame. The hardware approach to faster k-space scanning methods involves efficient pulse sequence and gradient waveform design methods. Such methods have reached a physical and physiological limit. Alternately, methods have been proposed to reduce the scan time by under sampling the k-space data. Since the advent of Compressive Sensing (CS), there has been a tremendous interest in developing under sampling matrices for MRI. Mathematical assumptions on the probability distribution function (pdf) of k-space have led researchers to come up with efficient under sampling matrices for sampling MR k-space data. The recent approaches adaptively sample the k-space, based on the k-space of a reference image as the probability distribution instead of a mathematical distribution, to come up with an efficient under sampling scheme. In general, the methods use a deterministic central circular/square region and probabilistic sampling of the rest of the k-space.
In these methods, the sampling distribution may not follow the selected pdf, and the selection of deterministic and probabilistic sampling distribution parameters is heuristic in nature. Two novel adaptive Variable Density Sampling (VDS) methods are proposed to address the heuristic nature of sampling the k-space, such that the selected pdf matches the k-space energy distribution of a given fully sampled reference k-space or MR image. The proposed methods use a novel approach of binning the pdf derived from the fully sampled k-space energy distribution of a reference image. The normalized k-space magnitude spectrum of the reference image is taken as a 2D probability distribution function which is divided into a number of exponentially weighted magnitude bins obtained from the corresponding histogram of the k-space magnitude spectrum. In the first method, the normalized k-space histogram is binned exponentially, and the resulting exponentially binned 2D pdf is used with a suitable control parameter to obtain a sampling pattern of the desired under sampling ratio. The resulting sampling pattern is an adaptive VDS pattern mimicking the energy distribution of the original k-space. In the second method, the binning of the magnitude spectrum of k-space is followed by ranking of the bins by their spectral energy content. A cost function is defined to evaluate the k-space energy being captured by each bin. The samples are selected from the energy rank ordered bins using a Knapsack constraint. The energy ranking and the Knapsack criterion result in the selection of sampling points from the highly relevant bins and give a very robust sampling grid with a well defined sparsity level. Finally, the feasibility of developing a single adaptive VDS sampling pattern for organ-specific or multi-slice MR imaging, using the concept of binning the magnitude spectrum of the k-space, is investigated.
Based on the premise that the k-space of different organs has a different energy distribution structure, the MR images of organs can be classified based on their spectral content, and a single adaptive VDS sampling pattern can be developed for imaging an organ or multiple slices of the same. The classification is done using the k-space bin histograms as feature vectors and k-means clustering. Based on the nearest distance to the centroid of the organ cluster, a template image is selected to generate the sampling grid for the organ under consideration. Using state of the art MR reconstruction algorithms, the performance of the proposed novel adaptive Variable Density Sampling (VDS) methods is evaluated using image quality measures and compared with other VDS methods. The reconstructions show significant improvement in image quality parameters quantitatively, and a visual reduction in artefacts, at 20%, 15%, 10% and 5% undersampling ratios.
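A much-simplified sketch of adaptive variable-density sampling follows: it draws k-space locations with probability proportional to a reference magnitude spectrum, as a stand-in for the binned-pdf and knapsack-ranked schemes described above (all data invented):

```python
import random

def vds_mask(ref_kspace_mag, n_samples, seed=0):
    """Toy adaptive variable-density sampling: pick n_samples k-space
    locations without replacement, with probability proportional to a
    reference magnitude spectrum. ref_kspace_mag: 2D list of magnitudes."""
    rng = random.Random(seed)
    flat = [(abs(v), (r, c))
            for r, row in enumerate(ref_kspace_mag)
            for c, v in enumerate(row)]
    # Weighted sampling without replacement: smallest exponential/weight
    # keys win, so high-magnitude (low-frequency) locations dominate.
    keyed = [(rng.expovariate(1.0) / w if w > 0 else float("inf"), pos)
             for w, pos in flat]
    keyed.sort()
    return {pos for _, pos in keyed[:n_samples]}
```

Because k-space energy concentrates near the center, masks generated this way are dense at low frequencies and sparse at high frequencies, which is the qualitative behavior the thesis's binned schemes control much more precisely.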
80

Optimization Methods Using Simulation in MS Excel

Škulavíková, Štěpánka January 2008 (has links)
The thesis is based on an original application programmed in VBA in MS Excel 2007. The reason for building this application was to integrate Monte Carlo simulation with chosen optimization methods. The application allows simulation of the knapsack problem and of the assignment problem under uncertainty. The parameters of these models can be set up as values that vary according to a chosen probability distribution. The output of the simulation is a probabilistic recommendation of which objects should be used; the choice of objects depends on the optimized models. The results of both models are represented by statistical indexes, tables of parameters and graphs.
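A sketch of the Monte Carlo treatment described above (in Python rather than VBA, with invented item data): draw uncertain weights from given ranges, solve each realization exactly, and report how often each object appears in the optimum:

```python
import random

def monte_carlo_knapsack(values, weight_ranges, capacity, runs=500, seed=7):
    """Monte Carlo treatment of a knapsack with uncertain weights, in
    the spirit of the thesis's Excel/VBA tool. weight_ranges: per-item
    (lo, hi) bounds for uniformly drawn weights. Returns, per item, the
    fraction of realizations in which it belongs to the optimum."""
    rng = random.Random(seed)
    n = len(values)
    counts = [0] * n

    def solve(weights):
        # Exhaustive 0/1 search (fine for small n).
        best_v, best_set = 0, ()
        for mask in range(1 << n):
            w = v = 0
            for i in range(n):
                if mask >> i & 1:
                    w += weights[i]
                    v += values[i]
            if w <= capacity and v > best_v:
                best_v = v
                best_set = tuple(i for i in range(n) if mask >> i & 1)
        return best_set

    for _ in range(runs):
        weights = [rng.uniform(lo, hi) for lo, hi in weight_ranges]
        for i in solve(weights):
            counts[i] += 1
    return [c / runs for c in counts]
```

The per-item frequencies play the role of the thesis's "probability recommendation" of which objects to use; the assignment-problem variant would swap the inner solver.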
