  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
441

Mission Optimized Speed Control

He, Jincan, Bhatt, Sundhanva January 2017 (has links)
Transportation underlines the vehicle industry's critical role in a country's economic future. The amount of goods moved, specifically by trucks, is only expected to increase in the near future. This work tackles the problem of optimizing fuel consumption in Volvo trucks under hard constraints on delivery time and speed limits. Knowledge of the truck, such as its position, state, and configuration, along with the complete route information of the transport mission, is used for fuel optimization. Advancements in computation, storage, and communication in cloud-based systems have made it possible to incorporate such systems in assisting a modern fleet. In this work, an algorithm is developed in a cloud-based system to compute a speed plan for the complete mission that minimizes fuel. This computation is decoupled from the local control operations on the truck, such as predictive control, safety, and cruise control, and serves as a guide for the truck driver to reach the destination on time while consuming minimum fuel. To achieve fuel minimization under hard constraints on delivery (or arrival) time and speed limits, a non-linear optimization problem is formulated for a high-fidelity model estimated from real drive cycles. This optimization problem is solved using a nonlinear programming solver in Matlab. The optimal policy was tested on two drive cycles provided by Volvo and compared against two scenarios, both with hard constraints on travel time and speed limits and no traffic uncertainties (deterministic): (1) a cruise controller running at a constant set speed throughout the mission, where no significant fuel savings are observed; and (2) the maximum possible fuel consumption, achieved without the help of an optimal speed plan (worst case), where a notable improvement in fuel saving is seen.
In a real-world scenario, a transport mission is interrupted by uncertainties such as traffic flow, road blocks, and re-routing. To this end, a stochastic optimization algorithm is proposed to deal with uncertainties modeled using historical traffic flow data, and possible solution methodologies are suggested for this stochastic optimization problem.
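The abstract does not reproduce the thesis's formulation beyond naming Matlab's nonlinear programming tools. As an illustration only, the core trade-off (convex fuel cost versus a hard arrival deadline and per-segment speed limits) can be sketched with a Lagrangian bisection. The fuel-rate model g(v) = c0 + c2*v^2 and its coefficients are hypothetical stand-ins, not the thesis's high-fidelity model:

```python
import math

def speed_plan(dists, vmax, T, c0=3.0, c2=0.002):
    """Per-segment speeds (km/h) minimising fuel subject to a hard arrival
    deadline T (hours) and per-segment speed limits vmax.
    Hypothetical fuel rate c0 + c2*v**2 litres/hour, so fuel per km at
    speed v is c0/v + c2*v, convex with an economical optimum sqrt(c0/c2).
    A deadline multiplier lam shifts that optimum upward as needed."""
    def speeds(lam):
        v_free = math.sqrt((c0 + lam) / c2)
        return [min(v_free, vm) for vm in vmax]

    def total_time(vs):
        return sum(d / v for d, v in zip(dists, vs))

    if total_time(vmax) > T:
        raise ValueError("mission infeasible even at the speed limits")
    lo, hi = 0.0, 1.0
    while total_time(speeds(hi)) > T:   # grow multiplier until deadline is met
        hi *= 2.0
    for _ in range(60):                 # bisect to the tightest feasible lam
        mid = (lo + hi) / 2.0
        if total_time(speeds(mid)) > T:
            lo = mid
        else:
            hi = mid
    vs = speeds(hi)
    fuel = sum(d * (c0 / v + c2 * v) for d, v in zip(dists, vs))
    return vs, fuel
```

With a loose deadline the plan settles at the economical speed; tightening the deadline pushes segment speeds toward their limits and fuel use rises, mirroring the deterministic scenario comparison described above.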
442

A NEAT Approach To Genetic Programming

Rodriguez, Adelein 01 January 2007 (has links)
The evolution of explicitly represented topologies such as graphs involves devising methods for mutating, comparing, and combining structures in meaningful ways, and identifying and maintaining the necessary topological diversity. Research has been conducted in the area of the evolution of trees in genetic programming and of neural networks, and some of these problems have been addressed independently by the different research communities. In the domain of neural networks, NEAT (NeuroEvolution of Augmenting Topologies) has been shown to be a successful method for evolving increasingly complex networks. This system's success is based on three interrelated elements: speciation, marking of historical information in topologies, and initializing the search in a space of small structures. This provides the dynamics necessary for the exploration of diverse solution spaces at once and a way to discriminate between different structures. Although different representations have emerged in the area of genetic programming, the study of the tree representation has remained of interest, in great part because of its mapping to programming languages and also because of the observed phenomenon of unnecessary code growth, or bloat, which hinders performance. The structural similarity between trees and neural networks poses an interesting question: is it possible to apply the techniques from NEAT to the evolution of trees, and if so, how does it affect performance and the dynamics of code growth? In this work we address these questions and present techniques analogous to those in NEAT for genetic programming.
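The abstract's central mechanism, NEAT's historical markings carried over to trees, can be hedged into a small sketch. The marking scheme below (one global innovation id per structural position and symbol) and the resulting compatibility distance for speciation are illustrative guesses at how the idea transfers to trees, not the author's actual operators:

```python
import itertools

_innovations = {}            # (parent id, child slot, symbol) -> innovation id
_counter = itertools.count()

def _mark(parent, slot, symbol):
    """Assign (or reuse) a global innovation number, NEAT-style: the same
    symbol appearing at the same structural position always gets one id."""
    key = (parent, slot, symbol)
    if key not in _innovations:
        _innovations[key] = next(_counter)
    return _innovations[key]

def markers(tree, parent=-1, slot=0):
    """tree = (symbol, [children]); return the set of innovation ids of
    every node, so structurally identical subtrees share their ids."""
    symbol, children = tree
    m = _mark(parent, slot, symbol)
    ids = {m}
    for i, child in enumerate(children):
        ids |= markers(child, m, i)
    return ids

def compatibility(t1, t2, c_disjoint=1.0):
    """Speciation distance: fraction of markings not shared by both trees."""
    a, b = markers(t1), markers(t2)
    return c_disjoint * len(a ^ b) / max(len(a), len(b), 1)
```

Identical trees get distance 0, while trees differing in one leaf still share the rest of their markings, so the distance stays small; this is the property that lets speciation protect new structural innovations while they are refined.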
443

Linking Place Value Concepts With Computational Practices In Third Grade

Cuffel, Terry 01 January 2009 (has links)
In an attempt to examine student understanding of place value with third graders, I conducted action research with a small group of girls to determine if my use of instructional strategies would encourage the development of conceptual understanding of place value. Strategies that have been found to encourage conceptual development of place value, such as use of the candy factory, were incorporated into my instruction. Instructional strategies were adjusted as the study progressed to meet the needs of the students and the development of their understanding of place value. Student explanations of their use of strategies contributed to my interpretation of their understanding. Additionally, I examined the strategies that the students chose to use when adding or subtracting multidigit numbers. Student understanding was demonstrated through group discussion and written and oral explanations. My observations, anecdotal records and audio recordings allowed me to further analyze student understanding. The results of my research seem to corroborate previous research studies that emphasize the difficulty that many students have in understanding place value at the conceptual level.
444

Visual Analytics for High Dimensional Simulation Ensembles

Dahshan, Mai Mansour Soliman Ismail 10 June 2021 (has links)
Recent advancements in data acquisition, storage, and computing power have enabled scientists from various scientific and engineering domains to simulate more complex and longer phenomena. Scientists are usually interested in understanding the behavior of a phenomenon under different conditions. To do so, they run multiple simulations with different configurations (i.e., parameter settings, boundary/initial conditions, or computational models), resulting in an ensemble dataset. An ensemble empowers scientists to quantify the uncertainty in the simulated phenomenon in terms of the variability between ensemble members, parameter sensitivity and optimization, and the characteristics and outliers within the ensemble members, which can lead to valuable insights about the simulated model. The size, complexity, and high dimensionality (e.g., of simulation input and output parameters) of simulation ensembles pose a great challenge to their analysis and exploration. Ensemble visualization provides a convenient way to convey the main characteristics of the ensemble for an enhanced understanding of the simulated model. The majority of current ensemble visualization techniques focus on analyzing either the ensemble space or the parameter space. Most parameter space visualizations are not designed for high-dimensional data sets and do not show the intrinsic structures in the ensemble. Conversely, the ensemble space has been visualized either as a comparative visualization of a limited number of ensemble members or as an aggregation of multiple ensemble members that omits potential details of the original ensemble. Thus, to unfold the full potential of simulation ensembles, we designed and developed an approach to the visual analysis of high-dimensional simulation ensembles that merges sensemaking, human expertise, and intuition with machine learning and statistics.
In this work, we explore how semantic interaction and sensemaking can be used to build interactive and intelligent visual analysis tools for simulation ensembles. Specifically, we focus on the complex processes that derive meaningful insights from exploring and iteratively refining the analysis of high-dimensional simulation ensembles when prior knowledge about ensemble features and correlations is limited or unavailable. We first developed GLEE (Graphically-Linked Ensemble Explorer), an exploratory visualization tool that enables scientists to analyze and explore correlations and relationships between non-spatial ensembles and their parameters. Then, we developed Spatial GLEE, an extension to GLEE that explores spatial data while simultaneously considering spatial characteristics (i.e., autocorrelation and spatial variability) and the dimensionality of the ensemble. Finally, we developed Image-based GLEE to explore exascale simulation ensembles produced by in-situ visualization. We collaborated with domain experts to evaluate the effectiveness of GLEE using real-world case studies and experiments from different domains. The core contributions of this work are a visual approach that enables the simultaneous exploration of parameter and ensemble spaces for 2D/3D high-dimensional ensembles, three interactive visualization tools to explore, search, filter, and make sense of non-spatial, spatial, and image-based ensembles, and the use of real-world cases from different domains to demonstrate the effectiveness of the proposed approach. The aim of the proposed approach is to help scientists gain insights by answering questions or testing hypotheses about different aspects of the simulated phenomenon and to facilitate knowledge discovery in complex datasets. / Doctor of Philosophy / Scientists run simulations to understand complex phenomena and processes that are expensive, difficult, or even impossible to reproduce in the real world.
Current advancements in high-performance computing have enabled scientists from various domains, such as climate, computational fluid dynamics, and aerodynamics, to run more complex simulations than before. However, a single simulation run is not enough to capture all the features of a simulated phenomenon. Therefore, scientists run multiple simulations using perturbed input parameters, initial and boundary conditions, or different models, resulting in what is known as an ensemble. An ensemble empowers scientists to understand the model's behavior by studying relationships between and among ensemble members, the optimal parameter settings, and the influence of input parameters on the simulation output, which can lead to useful knowledge and insights about the simulated phenomenon. To effectively analyze and explore simulation ensembles, visualization techniques play a significant role in facilitating knowledge discovery through graphical representations. Ensemble visualization offers scientists a better way to understand the simulated model. Most current ensemble visualization techniques are designed to analyze and explore either the ensemble space or the parameter space. Therefore, we designed and developed a visual analysis approach for exploring and analyzing high-dimensional parameter and ensemble spaces simultaneously by integrating machine learning and statistics with sensemaking and human expertise. The contribution of this work is to explore how semantic interaction and sensemaking can be used to explore and analyze high-dimensional simulation ensembles. To do so, we designed and developed a visual analysis approach manifested in an exploratory visualization tool, GLEE (Graphically-Linked Ensemble Explorer), that allows scientists to explore, search, filter, and make sense of high-dimensional 2D/3D simulation ensembles.
GLEE's visualization pipeline and interaction techniques use deep learning, feature extraction, spatial regression, and Semantic Interaction (SI) techniques to support the exploration of non-spatial, spatial, and image-based simulation ensembles. GLEE's different visualization tools were evaluated with domain experts from different fields using real-world case studies and experiments.
445

Essays in computational macroeconomics and public finance

Ye, Victor Yifan 21 January 2023 (has links)
This dissertation applies advanced and novel computational tools to two major classes of economic simulations: large-scale, multi-region overlapping generations (OLG) models and models of lifetime consumption optimization. Chapters one and two develop large-scale, global OLG models to simulate, respectively, the future of automation and the consequences of major corporate tax reform. Chapters three and four utilize The Fiscal Analyzer (TFA), a lifetime consumption-smoothing algorithm, to estimate the lifetime marginal tax rates of American workers and evaluate the impact of the Tax Cuts and Jobs Act (TCJA) on residents of red and blue states.
446

Energy Efficient Offloading for Competing Users on a Shared Communication Channel

Meskar, Erfan January 2016 (has links)
In this thesis we consider a set of mobile users that employ cloud-based computation offloading. In computation offloading, user energy consumption can be decreased by uploading and executing jobs on a remote server, rather than processing the jobs locally. In order to execute jobs in the cloud, however, the user uploads must occur over a base station channel which is shared by all of the uploading users. Since the job completion times are subject to hard deadline constraints, this restricts the feasible set of jobs that can be remotely processed, and may constrain the users' ability to reduce energy usage. The system is modelled as a competitive game in which each user is interested in minimizing its own energy consumption. The game is subject to the real-time constraints imposed by the job execution deadlines, user-specific channel bit rates, and the competition over the shared communication channel. The thesis shows that for a variety of parameters, a game where each user independently sets its offloading decisions always has a pure Nash equilibrium, and a Gauss-Seidel method for determining this equilibrium is introduced. Results are presented which illustrate that the system always converges to a Nash equilibrium using the Gauss-Seidel method. Data is also presented which shows the number of Nash equilibria that are found, the number of iterations required, and the quality of the solutions. We find that the solutions perform well compared to a lower bound on total energy performance. / Thesis / Master of Applied Science (MASc)
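The thesis's game model is not reproduced in the abstract, but the Gauss-Seidel idea can be hedged into a toy sketch: offloaders share the channel equally, so each user's upload slows with the number of offloaders, and a user offloads only when that is both deadline-feasible and cheaper than local execution. The energy model and all parameters here are invented for illustration:

```python
def best_response(i, choice, e_local, e_up, data, rate, deadline):
    """User i's best decision given everyone else's current choices.
    Toy channel-sharing model: n offloaders -> n-fold upload slowdown."""
    n = sum(choice) - choice[i] + 1          # offloaders if i joins
    upload_time = data[i] * n / rate[i]
    offload_energy = e_up[i] * upload_time   # e_up: transmit power (assumed)
    feasible = upload_time <= deadline[i]
    return 1 if feasible and offload_energy < e_local[i] else 0

def gauss_seidel(e_local, e_up, data, rate, deadline, max_rounds=100):
    """Cycle through users, letting each best-respond in turn; a full
    round with no change means no user wants to deviate, i.e. a pure
    Nash equilibrium of the offloading game."""
    K = len(e_local)
    choice = [0] * K
    for _ in range(max_rounds):
        changed = False
        for i in range(K):
            new = best_response(i, choice, e_local, e_up, data, rate, deadline)
            if new != choice[i]:
                choice[i], changed = new, True
        if not changed:
            return choice
    return None   # did not settle within max_rounds
```

Note how congestion self-limits offloading: each additional offloader raises everyone's upload time, so users with cheap local execution drop out, which is the competitive effect the abstract describes.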
447

A Computational Approach to Custom Data Representation for Hardware Accelerators

Kinsman, Adam 04 1900 (has links)
This thesis details the application of computational methods to the problem of determining custom data representations when building hardware accelerators for numerical computations. A majority of scientific applications which require hardware acceleration are implemented in IEEE-754 double precision. However, in many cases the error tolerance requirements of the application are much less stringent than the accuracy which IEEE-754 double precision provides. By leveraging custom data representations, a more resource-efficient hardware implementation arises, thereby enabling greater parallelism and thus higher performance of the accelerator.
Existing custom representation methods are unable to guarantee robust representations while at the same time adequately supporting ill-conditioned operators. Support for both of these scenarios is necessary for accelerating scientific calculations. To address this, we propose the use of a computational method based on Satisfiability Modulo Theories (SMT). By capturing a calculation as a set of constraints, an SMT instance can be formulated which provides meaningful bounds even in the presence of ill-conditioned operators. At the same time, the analytical nature of SMT satisfies the need for robustness. Utilizing block vector arithmetic, our SMT approach is extended to provide scalability to the large instances involving vector calculus which arise in scientific calculations. Atop this foundation, a unified error model is proposed which deals simultaneously with absolute and relative error, thereby providing the means of supporting both fixed-point and custom floating-point data types. Iterative algorithm analysis is leveraged to derive constraints for the SMT method. The application of the method to several scientific algorithms is discussed by way of case studies. / Thesis / Doctor of Philosophy (PhD)
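The thesis's SMT formulation needs a solver, but the kind of quantity it bounds can be hedged into a stand-in sketch using worst-case interval propagation, which is simpler and generally looser than SMT. The expression encoding and the error model (round-to-nearest with f fractional bits, contributing at most 2^-(f+1) per quantisation) are illustrative assumptions, not the thesis's unified error model:

```python
def range_and_error(expr, f):
    """Worst-case value range and accumulated rounding error of an
    expression evaluated in fixed point with f fractional bits.
    expr: ('var', lo, hi) | ('+', a, b) | ('*', a, b)."""
    ulp = 2.0 ** -(f + 1)            # max round-to-nearest quantisation error
    op = expr[0]
    if op == 'var':
        _, lo, hi = expr
        return (lo, hi), ulp         # each input is quantised once
    a_rng, a_err = range_and_error(expr[1], f)
    b_rng, b_err = range_and_error(expr[2], f)
    if op == '+':                    # addition is exact in fixed point
        return (a_rng[0] + b_rng[0], a_rng[1] + b_rng[1]), a_err + b_err
    # '*': range over interval corners; error via |a||db| + |b||da| + |da||db|
    corners = [x * y for x in a_rng for y in b_rng]
    amax = max(abs(a_rng[0]), abs(a_rng[1]))
    bmax = max(abs(b_rng[0]), abs(b_rng[1]))
    err = amax * b_err + bmax * a_err + a_err * b_err + ulp
    return (min(corners), max(corners)), err

def min_frac_bits(expr, tol, f_max=64):
    """Smallest fraction width whose worst-case error meets the tolerance."""
    for f in range(f_max + 1):
        if range_and_error(expr, f)[1] <= tol:
            return f
    return None
```

The range bound also fixes the integer width of the representation; the thesis's point is precisely that for ill-conditioned operators such interval bounds blow up, which is where the SMT formulation earns its keep.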
448

Resource Management for Mobile Computation Offloading

Chen, Hong 11 1900 (has links)
Mobile computation offloading (MCO) is a way of improving mobile device (MD) performance by offloading certain task executions to a more resourceful edge server (ES), rather than running them locally on the MD. This thesis first considers the problem of assigning the wireless communication bandwidth and the ES capacity needed for this remote task execution, so that task completion time constraints are satisfied. The objective is to minimize the average power consumption of the MDs, subject to a cost budget constraint on communication and computation resources. The thesis includes contributions for both soft and hard task completion deadline constraints. The soft deadline case aims to create assignments so that the probability of task completion time deadline violation does not exceed a given violation threshold. In the hard deadline case, it creates resource assignments where task completion time deadlines are always satisfied. The problems are first formulated as mixed integer nonlinear programs. Approximate solutions are then obtained by decomposing the problems into a collection of convex subproblems that can be efficiently solved. Results are presented that demonstrate the quality of the proposed solutions, which can achieve near-optimum performance over a wide range of system parameters. The thesis then introduces algorithms for static task class partitioning in MCO. The objective is to partition a given set of task classes into two sets: those that are executed locally and those that are permitted to contend for remote ES execution. The goal is to find the task class partition that gives the minimum mean MD power consumption subject to task completion deadlines. The thesis generates these partitions for both soft and hard task completion deadlines. Two variations of the problem are considered.
The first assumes that the wireless and computational capacities are given and the second generates both capacity assignments subject to an additional resource cost budget constraint. Two class ordering methods are introduced, one based on a task latency criterion, and another that first sorts and groups classes based on a mean power consumption criterion and then orders the task classes within each group based on a task completion time criterion. A variety of simulation results are presented that demonstrate the excellent performance of the proposed solutions. The thesis then considers the use of digital twins (DTs) to offload physical system (PS) activity. Each DT periodically communicates with its PS, and uses these updates to implement features that reflect the real behaviour of the device. A given feature can be implemented using different models that create the feature with differing levels of system accuracy. The objective is to maximize the minimum feature accuracy for the requested features by making appropriate model selections subject to wireless channel and ES resource availability. The model selection problem is first formulated as an NP-complete integer program. It is then decomposed into multiple subproblems, each consisting of a modified Knapsack problem. A polynomial-time approximation algorithm is proposed using dynamic programming to solve it efficiently, by violating its constraints by at most a given factor. A generalization of the model selection problem is then given and the thesis proposes an approximation algorithm using dependent rounding to solve it efficiently with guaranteed constraint violations. A variety of simulation results are presented that demonstrate the excellent performance of the proposed solutions. / Thesis / Doctor of Philosophy (PhD) / Mobile devices (MDs) such as smartphones are currently used to run a wide variety of application tasks. 
An alternative to local task execution is to arrange for some MD tasks to be run on a remote non-mobile edge server (ES). This is referred to as mobile computation offloading (MCO). The work in this thesis studies two important facets of the MCO problem. 1. The first considers the joint effects of communication and computational resource assignment on task completion times. This work optimizes task offloading decisions, subject to task completion time requirements and the cost that one is willing to incur when designing the network. Procedures are proposed whose objective is to minimize average mobile device power consumption, subject to these cost constraints. 2. The second considers the use of digital twins (DTs) as a way of implementing mobile computation offloading. A DT implements features that describe its physical system (PS) using models that are hosted at the ES. A model selection problem is studied, where multiple DTs share the execution services at a common ES. The objective is to optimize the feature accuracy obtained by DTs subject to the communication and computation resource availability. The thesis proposes different approximation and decomposition methods that solve these problems efficiently.
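The DT model-selection problem above is solved in the thesis with a Knapsack-style dynamic program carrying approximation guarantees. A simpler stand-in that captures the max-min objective is a threshold search: for each candidate accuracy level, take the cheapest qualifying model per twin and test the resource budget. The accuracy/cost pairs below are invented, and this sketch does not reproduce the thesis's approximation guarantees:

```python
def max_min_accuracy(twins, capacity):
    """twins: one list of candidate models per digital twin, each model an
    (accuracy, cost) pair. Choose one model per twin with total cost at
    most `capacity`, maximising the minimum chosen accuracy."""
    best = None
    for a in sorted({acc for models in twins for acc, _ in models}):
        total = 0
        feasible = True
        for models in twins:
            costs = [c for acc, c in models if acc >= a]
            if not costs:            # some twin cannot reach level a at all
                feasible = False
                break
            total += min(costs)      # cheapest model meeting the level
        if feasible and total <= capacity:
            best = a                 # thresholds ascend; keep last feasible
    return best
```

The max-min structure is what makes the cheapest-qualifying-model choice safe: once the accuracy floor is fixed, spending more on any single twin cannot raise the minimum.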
449

Binary Multi-User Computation Offloading via Time Division Multiple Access

Manouchehrpour, Mohammad Amin January 2023 (has links)
The limited energy and computing power of small smart devices restricts their ability to support a wide range of applications, especially those needing quick responses. Mobile edge computing offers a potential solution by providing computing resources at the network access points that can be shared by the devices. This enables the devices to offload some of their computational tasks to the access points. To make this work well for multiple devices, we need to judiciously allocate the available communication and computing resources among the devices. The main focus of this thesis is on (near) optimal resource allocation in a K-user offloading system that employs the time division multiple access (TDMA) scheme. In this thesis, we develop effective algorithms for the resource allocation problem that aim to minimize the overall (cost of the) energy that the devices consume in completing their computational tasks within the specified deadlines while respecting the devices' constraints. This problem is tackled for tasks that cannot be divided and hence the system must make a binary decision as to whether or not a task should be offloaded. This implies the need to develop an effective decision-making algorithm to identify a suitable group of devices for offloading. This thesis commences by developing efficient communication resource algorithms that incorporate the impact of integer finite block length in low-latency computational offloading systems with reserved computing resources. In particular, it addresses the challenge of minimizing total energy consumption in a binary offloading scenario involving K users. The approach considers different approximations of the fundamental rate limit in the finite block length regime, departing from the conventional asymptotic rate limits developed by Shannon. Two such alternatives, namely the normal approximation and the SNR-gap approximation, are explored. 
A decomposition approach is employed, dividing the problem into an inner component that seeks an optimal solution for the communication resource allocation within a defined set of offloading devices, and an outer component aimed at identifying a suitable set of offloading devices. Given the finiteness of the block length and its integer nature, various relaxation techniques are employed to determine an appropriate communication resource allocation. These include incremental and independent roundings, alongside an extended search that utilizes randomization-based methods in both rounding schemes. The findings reveal that incremental randomized rounding, when applied to the normal approximation of the rate limits, enhances system performance in terms of reducing the energy consumption of the offloading users. Furthermore, customized pruned greedy search techniques for selecting the offloading devices efficiently generate good decisions. Indeed, the proposed approach outperforms a number of existing approaches. In the second contribution, we develop efficient algorithms that address the challenge of jointly allocating both computation and communication resources in a binary offloading system. We employ a similar decomposition methodology as in the previous work to perform the decision-making, but this is now done along with joint computation and communication resource allocation. For the inner resource allocation problem, we divide the problem into two components: determining the allocation of computation resources and the optimal allocation of communication resources for the given allocation of computation resources. The allocation of the computation resources implicitly determines a suitable order for data transmission, which facilitates the subsequent optimal allocation of the communication resources. In this thesis, we introduce two heuristic approaches for allocating the computation resources. 
These approaches maximize the allowable transmission time for the devices in sequence, starting from the largest, leading to a reduction in total offloading energy. We demonstrate that the proposed heuristics substantially lower the computational burden associated with solving the joint computation and communication resource allocation problem while maintaining a low total energy. In particular, their use results in substantially lower energy consumption than other simple heuristics. Additionally, the heuristics narrow the energy gap in comparison to a fictitious scenario in which each task has access to the whole computation resource without the need for sharing. / Thesis / Master of Applied Science (MASc)
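The incremental rounding named in the first contribution can be hedged into a minimal, deterministic sketch: fractional slot lengths from a relaxed allocation are rounded one at a time while the running residual is carried forward, so the frame total is preserved. The thesis's randomized variants draw each rounding direction probabilistically; this version is illustrative only:

```python
def incremental_round(fractional, total):
    """Round fractional TDMA slot lengths to integers summing to `total`,
    carrying the rounding residual forward so no error accumulates
    (deterministic incremental rounding; a randomized variant would draw
    the rounding direction with probability given by the fraction)."""
    out, carry = [], 0.0
    for x in fractional:
        target = x + carry
        r = int(round(target))
        carry = target - r           # residual passed to the next slot
        out.append(r)
    out[-1] += total - sum(out)      # absorb any leftover unit in the last slot
    return out
```

Carrying the residual keeps every integer slot within one unit of its fractional value while the frame length is met exactly; independent rounding, by contrast, can drift by several units and then needs a repair step.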
450

Scheduling and Resource Efficiency Balancing: Discrete Species Conserving Cuckoo Search for Scheduling in an Uncertain Execution Environment

Bibiks, Kirils January 2017 (has links)
The main goal of a scheduling process is to decide when and how to execute each of the project's activities. Despite the large variety of researched scheduling problems, the majority of them can be described as generalisations of the resource-constrained project scheduling problem (RCPSP). Because of its wide applicability and challenging difficulty, the RCPSP has attracted a vast amount of attention in the research community, and a great variety of heuristics have been adapted for solving it. Even though these heuristics are structurally different and operate according to diverse principles, they are designed to obtain only one solution at a time. Recent research on RCPSPs has proven that these kinds of problems have complex multimodal fitness landscapes, characterised by wide solution search spaces and the presence of multiple local and global optima. The goal of this thesis is twofold. Firstly, it presents a variation of the RCPSP that considers optimisation of projects in an uncertain environment, where resources are modelled to adapt to their environment and, as a result, improve their efficiency. Secondly, a modification of a novel evolutionary computation method, Cuckoo Search (CS), is proposed, which has been adapted for solving combinatorial optimisation problems and modified to obtain multiple solutions. To test the proposed methodology, two sets of experiments are carried out. First, the developed algorithm is applied to a real-life software development project. Second, the performance of the algorithm is tested on universal benchmark instances for scheduling problems, modified to take into account the specifics of the proposed optimisation model. The results of both experiments demonstrate that the proposed methodology achieves a competitive level of performance, is capable of finding multiple global solutions, and is applicable to real-life projects.
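The discrete Cuckoo Search is described only at a high level; a hedged toy sketch on permutations is given below, with random swap walks standing in for Lévy flights and nest abandonment regenerating the worst solutions. The test objective, total weighted completion time on one machine, has a known optimum (Smith's rule), which makes the sketch checkable; none of this reproduces the thesis's species-conserving machinery for holding multiple optima at once:

```python
import random

def cost(order, proc, weight):
    """Total weighted completion time of a job permutation."""
    t = total = 0
    for j in order:
        t += proc[j]
        total += weight[j] * t
    return total

def swap_walk(order, steps):
    """Discrete stand-in for a Levy flight: `steps` random swaps."""
    s = order[:]
    for _ in range(steps):
        i, j = random.sample(range(len(s)), 2)
        s[i], s[j] = s[j], s[i]
    return s

def cuckoo_search(proc, weight, n_nests=15, pa=0.25, iters=300, seed=1):
    random.seed(seed)
    n = len(proc)
    key = lambda s: cost(s, proc, weight)
    nests = [random.sample(range(n), n) for _ in range(n_nests)]
    for _ in range(iters):
        # mostly short walks, occasionally long jumps (heavy-tailed-ish)
        steps = 1 if random.random() < 0.7 else random.randint(2, n)
        new = swap_walk(random.choice(nests), steps)
        j = random.randrange(n_nests)
        if key(new) < key(nests[j]):
            nests[j] = new           # cuckoo egg replaces a worse nest
        nests.sort(key=key)          # keep the best; abandon a tail fraction
        for k in range(int((1 - pa) * n_nests), n_nests):
            nests[k] = random.sample(range(n), n)
    return min(nests, key=key)
```

The abandonment step is what keeps the population diverse on multimodal landscapes; a species-conserving variant would additionally partition the nests by a solution-distance measure so that distinct optima each retain a seed.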
