21

Information Freshness Optimization in Real-time Network Applications

Liu, Zhongdong 12 June 2024 (has links)
In recent years, the remarkable development of ubiquitous communication networks and smart portable devices has spawned a wide variety of real-time applications that require timely information updates (e.g., autonomous vehicular systems, industrial automation systems, and live streaming services). These real-time applications all have one thing in common: they want their knowledge of the information source to be as fresh as possible. To measure this freshness, a new metric called the Age-of-Information (AoI) has been proposed, defined as the time elapsed since the generation time of the freshest delivered update. AoI is influenced by both the inter-arrival time and the delay of the updates; as a result of these dependencies, it exhibits characteristics distinct from those of traditional delay and throughput metrics. In this dissertation, our goal is to optimize AoI in various real-time network applications. First, we investigate the fundamental question of how exactly various scheduling policies impact AoI performance. Although there is a large body of work studying AoI performance under different scheduling policies, the use of update-size information, and its combination with other information (such as arrival-time information and service preemption), to reduce AoI has not yet been explored. Second, because AoI is a recently introduced measure of freshness, its relationship to other performance metrics remains largely ambiguous; we therefore analyze the tradeoffs between AoI and other performance metrics, including service performance and update cost, in real-world applications.

This dissertation is organized into three parts. In the first part, we observe that scheduling policies leveraging update-size information can substantially reduce delay, one of the key components of AoI. However, it remains largely unknown how exactly scheduling policies (especially those making use of update-size information) impact AoI performance. To this end, we conduct a systematic and comparative study of the impact of scheduling policies on AoI performance in single-server queues and provide useful guidelines for the design of AoI-efficient scheduling policies.

In the second part, we analyze the tradeoffs between AoI and other performance metrics in real-world systems, focusing on two important cases. (i) The tradeoff between service performance and AoI that arises in data-driven real-time applications (e.g., Google Maps and stock trading applications). In these applications, computing resources are often shared between processing updates from information sources and processing queries from end users, so there is a natural tradeoff between service performance (e.g., response time to queries) and AoI (i.e., the freshness of the data used to answer user queries). To address this tradeoff, we begin by introducing a simple single-server two-queue model that captures the coupled scheduling of updates and queries; we then design threshold-based scheduling policies that prioritize either updates or queries, and finally conduct a rigorous analysis of the performance of these policies. (ii) The tradeoff between update cost and AoI that appears in crowdsensing-based applications (e.g., Google Waze and GasBuddy). On the one hand, users are not satisfied if the responses to their requests are stale; on the other hand, updating the information about a point of interest is costly for the application, which typically must make monetary payments to incentivize users. To capture this tradeoff, we first formulate an optimization problem whose objective is to minimize the sum of the staleness cost (a function of the AoI) and the update cost, and we then obtain a closed-form optimal threshold-based policy by reformulating the problem as a Markov decision process (MDP).

In the third part, we study the joint minimization of data staleness and transmission costs (e.g., energy cost) over an (arbitrarily) time-varying wireless channel, both without and with machine learning (ML) advice. We consider a discrete-time system in which a resource-constrained source transmits time-sensitive data to a destination over a time-varying wireless channel. Each transmission incurs a fixed cost, while not transmitting results in a staleness cost measured by the AoI, so the source must balance the tradeoff between transmission and staleness costs. To tackle this challenge, we develop a robust online algorithm that minimizes the sum of transmission and staleness costs while ensuring a worst-case performance guarantee. While online algorithms are robust, they tend to be overly conservative and may perform poorly on average in typical scenarios. In contrast, ML algorithms, which leverage historical data and prediction models, generally perform well on average but lack worst-case performance guarantees. To harness the advantages of both approaches, we design a learning-augmented online algorithm that achieves two key properties: (i) consistency: closely approximating the optimal offline algorithm when the ML prediction is accurate and trusted; and (ii) robustness: providing a worst-case performance guarantee even when ML predictions are inaccurate.

/ Doctor of Philosophy / In recent years, the rapid growth of communication networks and smart devices has spurred the emergence of real-time applications like autonomous vehicles and industrial automation systems. These applications share a common need for timely information, whose freshness can be measured by a new metric called Age-of-Information (AoI). This dissertation aims to optimize AoI across various real-time network applications and is organized into three parts. In the first part, we explore how scheduling policies (particularly those considering update size) impact AoI performance; through a systematic and comparative study in single-server queues, we provide useful guidelines for the design of AoI-efficient scheduling policies. The second part explores the tradeoff between update cost and AoI in crowdsensing applications like Google Waze and GasBuddy, where users demand fresh responses to their requests but updating the information incurs a cost for the application. We aim to minimize the sum of the staleness cost (a function of AoI) and the update cost; by reformulating the problem as a Markov decision process (MDP), we design a simple threshold-based policy and prove its optimality. In the third part, we study the minimization of data staleness and transmission costs (e.g., energy cost) over a time-varying wireless channel. We first develop a robust online algorithm that achieves a competitive ratio of 3, ensuring a worst-case performance guarantee. Furthermore, when advice is available, e.g., predictions from machine learning (ML) models, we design a learning-augmented online algorithm that exhibits two desired properties: (i) consistency: closely approximating the optimal offline algorithm when the ML prediction is accurate and trusted; and (ii) robustness: guaranteeing worst-case performance even when ML predictions are inaccurate. While this dissertation marks a significant advance in AoI research, numerous open problems remain. For instance, our learning-augmented online algorithm treats ML predictions as external inputs; exploring the co-design and joint training of ML and online algorithms could yield interesting insights. Additionally, while AoI assesses the importance of an update based solely on its timestamp, the content of the update also matters; incorporating both the age and the semantics of information is an important direction for future research.
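Stated compactly, the AoI definition used in the abstract above admits a standard formalization; a minimal sketch in the notation common to the AoI literature (the symbols U(t) and Δ(t) are conventional, not necessarily this dissertation's own):

```latex
% Age-of-Information at time t: the time elapsed since the generation
% time U(t) of the freshest update delivered to the monitor by time t.
\Delta(t) = t - U(t)

% Time-average AoI over an observation window [0, T], the quantity
% AoI-efficient scheduling policies typically aim to minimize:
\bar{\Delta} = \frac{1}{T} \int_{0}^{T} \Delta(t)\, \mathrm{d}t
```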
22

Impedance Extraction by MATLAB/Simulink and LabView/Multisim

Lin, Jen-Pin 17 April 2014 (has links)
This thesis studies techniques for small-signal impedance measurement in three-phase power systems. Stability has become a critically important issue as power electronics are widely applied in power distribution and conversion systems, and systems with controlled outputs carry a risk of instability. To obtain the impedance model, an impedance-extraction algorithm in the D-Q reference frame is developed, and an Interpolated Fast Fourier Transform (IPFFT) is applied to increase the accuracy of the impedance model. Based on voltage injection, a Phase-Locked Loop (PLL), the Park transform, the D-Q reference frame, and the IPFFT, the three-phase system is realigned onto D-Q coordinates and the impedance model is extracted in that frame. First, the impedance-extraction algorithm is implemented in MATLAB/Simulink; the algorithm includes the PLL and D-Q transform, and the IPFFT is used to obtain magnitude and phase angle in the frequency domain. The impedance matrices in the D-Q frame can be solved from the relation between currents and voltages, and the impedance model is built up through various tests. Second, the algorithm is used to test an RL circuit and is verified against the circuit's true Bode plot; it is then applied to a more sophisticated circuit model. Finally, the algorithm is implemented in LabVIEW/Multisim for future hardware tests. This thesis clearly describes the objective of the research, the research problem and approach, and the experimental setup. The work presented was conducted at the Smart Grid Power Systems Laboratory at the University of South Florida.
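To make the D-Q realignment concrete, here is a minimal sketch of the abc-to-dq step at the core of such an extraction pipeline; the amplitude-invariant scaling and angle convention below are common textbook choices, not necessarily the exact ones used in the thesis:

```python
import numpy as np

def abc_to_dq(v_a, v_b, v_c, theta):
    """Park transform: project three-phase quantities onto the rotating
    D-Q frame at electrical angle theta (amplitude-invariant form)."""
    two_thirds = 2.0 / 3.0
    v_d = two_thirds * (v_a * np.cos(theta)
                        + v_b * np.cos(theta - 2 * np.pi / 3)
                        + v_c * np.cos(theta + 2 * np.pi / 3))
    v_q = -two_thirds * (v_a * np.sin(theta)
                         + v_b * np.sin(theta - 2 * np.pi / 3)
                         + v_c * np.sin(theta + 2 * np.pi / 3))
    return v_d, v_q

# Example: a balanced 60 Hz set maps to a constant (v_d, v_q) pair
# when theta tracks the grid angle (as a PLL would provide).
t = np.linspace(0, 0.1, 1000)
theta = 2 * np.pi * 60 * t
v_a = np.cos(theta)
v_b = np.cos(theta - 2 * np.pi / 3)
v_c = np.cos(theta + 2 * np.pi / 3)
v_d, v_q = abc_to_dq(v_a, v_b, v_c, theta)  # v_d ~ 1, v_q ~ 0
```

In the D-Q frame a balanced steady-state signal becomes a DC quantity, which is what lets the injected small-signal perturbations (and the IPFFT of the responses) isolate the impedance matrix entries.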
23

An instruction systolic array architecture for multiple neural network types

Kane, Andrew January 1998 (has links)
Modern electronic systems, especially sensor and imaging systems, are beginning to incorporate their own neural network subsystems. In order for these neural systems to learn in real time they must be implemented using VLSI technology, with as much of the learning process incorporated on-chip as possible. The majority of current VLSI implementations literally implement a series of neural processing cells, which can be connected together in an arbitrary fashion. Many do not perform the entire neural learning process on-chip, instead relying on other external systems to carry out part of the computational requirements of the algorithm. The work presented here utilises two-dimensional instruction systolic arrays in an attempt to define a general neural architecture which is closer to the biological basis of neural networks - it is the synapses themselves, rather than the neurons, that have dedicated processing units. A unified architecture is described which can be programmed at the microcode level in order to facilitate the processing of multiple neural network types. An essential part of neural network processing is the neuron activation function, which can range from a sequential algorithm to a discrete mathematical expression. The architecture presented can easily carry out the sequential functions, and introduces a fast method of mathematical approximation for the more complex functions. This can be evaluated on-chip, thus implementing the entire neural process within a single system. VHDL circuit descriptions for the chip have been generated, and the systolic processing algorithms and associated microcode instruction set for three different neural paradigms have been designed. A software simulator of the architecture has been written, giving results for several common applications in the field.
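The abstract does not spell out its "fast method of mathematical approximation" for complex activation functions; a common hardware-friendly technique in this family is piecewise-linear approximation, sketched below (the breakpoint placement and the choice of the sigmoid are illustrative assumptions, not the thesis's actual scheme):

```python
import numpy as np

# Hardware-friendly activation: approximate the sigmoid with a small
# table of line segments, avoiding an on-chip exp() evaluation.
BREAKPOINTS = np.array([-4.0, -2.0, -1.0, 0.0, 1.0, 2.0, 4.0])
VALUES = 1.0 / (1.0 + np.exp(-BREAKPOINTS))  # exact values at breakpoints

def sigmoid_pwl(x):
    """Piecewise-linear sigmoid: interpolate between breakpoints,
    saturating to the end values outside [-4, 4]."""
    return np.interp(x, BREAKPOINTS, VALUES)

x = np.linspace(-6, 6, 121)
err = np.abs(sigmoid_pwl(x) - 1.0 / (1.0 + np.exp(-x)))
print(err.max())  # modest worst-case error with only seven breakpoints
```

In hardware, the interpolation reduces to one multiply-add plus a small lookup, which is why such schemes suit on-chip evaluation.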
24

Assessing Cognitive Learning of Analytical Problem Solving

January 2011 (has links)
abstract: Introductory programming courses, also known as CS1, have a specific set of expected outcomes related to learning the most basic and essential computational concepts in computer science (CS). However, two of the most frequently heard complaints about such courses are that (1) they are divorced from the realities of application and (2) they make learning the basic concepts tedious. The concepts introduced in CS1 courses are highly abstract and not easily comprehensible; in general, the difficulty is intrinsic to the field of computing, often described as "too mathematical or too abstract." This dissertation presents a small-scale mixed-methods study conducted during the fall 2009 semester of CS1 courses at Arizona State University. The study explored and assessed students' comprehension of three core computational concepts - abstraction, arrays of objects, and inheritance - in both algorithm design and problem solving. Through this investigation, student profiles were categorized based on their scores, and their mistakes were categorized into instances of five computational thinking concepts: abstraction, algorithm, scalability, linguistics, and reasoning. It was shown that even though the notion of computational thinking is not explicit in the curriculum, participants possessed and/or developed this skill through the learning and application of the CS1 core concepts. Furthermore, problem-solving experiences had a direct impact on participants' knowledge skills, explanation skills, and confidence. Implications for teaching CS1 and for future research are also considered. / Dissertation/Thesis / Ph.D. Computer Science 2011
25

Utváření představ a osvojování vybraných konceptů používaných při sestavování programů s testovacími podmínkami na 1. stupni ZŠ / Examining the formation of ideas and learning about some programming concepts in the primary school

Čuma, Radek January 2018 (has links)
This diploma thesis maps pupils' understanding of the functional principle of using commands with testing conditions (IF, IF-THEN, REPEAT-UNTIL, etc.) when creating algorithms. The main aim of the thesis is to design and implement a set of lessons and a teaching approach, based on a theory of how algorithmic concepts are learned in primary education, for pupils aged 9-11, with the intention of verifying that the designed teaching procedures work and assessing their possible impact on pupils' understanding. Data for verifying the designed teaching approach were collected through continuous monitoring of pupils' behaviour, progress, and solutions to selected tasks; video recordings of task solving within the suggested unplugged activities; the virtual tool Code.org, used to monitor pupils' progress; audio recordings of interviews with pupils; and photographs capturing pupils' creation of their own command blocks by transcription from their mother tongue into a machine (programming) language. By combining the acquired data sets, these procedures were adjusted in order to eliminate the most frequent problems that pupils encountered during teaching. The case study findings revealed that it is important for...
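For readers unfamiliar with the constructs named above, here is a minimal illustration of a post-tested loop guarded by a testing condition; it is purely illustrative (the thesis's activities used Code.org blocks and unplugged materials, not Python, which lacks a native REPEAT-UNTIL and must emulate it):

```python
# REPEAT-UNTIL: repeat a command block, testing the condition *after*
# each pass, so the block always executes at least once.
steps = 0
while True:
    steps += 1          # the repeated command block
    if steps >= 3:      # UNTIL condition, tested after the block
        break
print(steps)  # 3
```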
26

Systolic algorithms and applications

Wan, Chunru January 1996 (has links)
Computer performance has improved tremendously since the development of the first all-purpose, all-electronic digital computer in 1946. However, engineers, scientists and researchers keep working to improve performance further, to meet the demanding requirements of many applications. There are basically two ways to improve computational speed. One is to use faster devices (VLSI chips); although faster and faster VLSI components have contributed a great deal to the improvement of computation speed, breakthroughs in the switching speed and circuit density of VLSI devices will be difficult and costly in the future. The other is to use parallel processing architectures, which employ multiple processors to perform a computation task. When multiple processors work together, an appropriate architecture is very important for achieving maximum performance in a cost-effective manner. Systolic arrays are ideally suited to computationally intensive applications with inherent massive parallelism, because they capitalize on regular, modular, rhythmic, synchronous, concurrent processes that require intensive, repetitive computation. This thesis can be divided into three parts. The first part, comprising Chapters 1 and 2, is introductory. The second part, composed of Chapters 3 and 4, concerns systolic design methodology. The third part presents several systolic array designs for different applications.
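As a flavour of the rhythmic, locally communicating computation the abstract describes, here is a hedged software sketch of a classic one-dimensional systolic schedule for matrix-vector multiplication (an illustration of the general technique, not a design taken from the thesis):

```python
import numpy as np

def systolic_matvec(A, x):
    """Simulate a 1-D systolic array computing y = A @ x.

    Cell i holds a running sum y[i]; on each beat, every active cell
    performs one multiply-accumulate on locally available data -- the
    hallmark of systolic computation (no global memory traffic inside
    the array while it runs).
    """
    n, m = A.shape
    y = np.zeros(n)
    # Beat t: cell i consumes A[i, j] and x[j] with j = t - i, so each
    # x[j] marches through the array one cell per beat (skewed schedule).
    for t in range(n + m - 1):
        for i in range(n):
            j = t - i
            if 0 <= j < m:
                y[i] += A[i, j] * x[j]  # one MAC per cell per beat
    return y

A = np.arange(12, dtype=float).reshape(3, 4)
x = np.array([1.0, 2.0, 3.0, 4.0])
assert np.allclose(systolic_matvec(A, x), A @ x)
```

The inner loop is what the hardware does in parallel: all cells fire on the same beat, each touching only its own accumulator and the operand passing through.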
27

Advanced Concurrency Control Algorithm Design and GPU System Support for High Performance In-Memory Data Management

Yuan, Yuan January 2016 (has links)
No description available.
28

Statistical Methods for Offline Deep Reinforcement Learning

Danyang Wang (18414336) 20 April 2024 (has links)
<p dir="ltr">Reinforcement learning (RL) has been a rapidly evolving field of research over the past years, enhancing developments in areas such as artificial intelligence, healthcare, and education, to name a few. Regardless of the success of RL, its inherent online learning nature presents obstacles for its real-world applications, since in many settings, online data collection with the latest learned policy can be expensive and/or dangerous (such as robotics, healthcare, and autonomous driving). This challenge has catalyzed research into offline RL, which involves reinforcement learning from previously collected static datasets, without the need for further online data collection. However, most existing offline RL methods depend on two key assumptions: unconfoundedness and positivity (also known as the full-coverage assumption), which frequently do not hold in the context of static datasets. </p><p dir="ltr">In the first part of this dissertation, we simultaneously address these two challenges by proposing a novel policy learning algorithm: PESsimistic CAusal Learning (PESCAL). We utilize the mediator variable based on Front-Door Criterion, to remove the confounding bias. Additionally, we adopt the pessimistic principle to tackle the distributional shift problem induced by the under-coverage issue. This issue refers to the mismatch of distributions between the action distributions induced by candidate policies, and the policy that generates the observational data (known as the behavior policy). Our key observation is that, by incorporating auxiliary variables that mediate the effect of actions on system dynamics, it is sufficient to learn a lower bound of the mediator distribution function, instead of the Q-function, to partially mitigate the issue of distributional shift. This insight significantly simplifies our algorithm, by circumventing the challenging task of sequential uncertainty quantification for the estimated Q-function. Moreover, we provide theoretical guarantees for the algorithms we propose, and demonstrate their efficacy through simulations, as well as real-world experiments utilizing offline datasets from a leading ride-hailing platform.</p><p dir="ltr">In the second part of this dissertation, in contrast to the first part, which approaches the distributional shift issue implicitly by penalizing the value function as a whole, we explicitly constrain the learned policy to not deviate significantly from the behavior policy, while still enabling flexible adjustment of the degree of constraints. Building upon the offline reinforcement learning algorithm, TD3+BC \cite{fujimoto2021minimalist}, we propose a model-free actor-critic algorithm with an adjustable behavior cloning (BC) term. We employ an ensemble of networks to quantify the uncertainty of the estimated value function, thus addressing the issue of overestimation. Moreover, we introduce a method that is both convenient and intuitively simple for controlling the degree of BC, through a Bernoulli random variable based on the user-specified confidence level for different offline datasets. Our proposed algorithm, named Ensemble-based Actor Critic with Adaptive Behavior Cloning (EABC), is straightforward to implement, exhibits low variance, and achieves strong performance across all D4RL benchmarks.</p>
29

Design And Optimization Of A Mixed Flow Compressor Impeller Using Robust Design Methods

Cevik, Mert 01 September 2009 (has links) (PDF)
This study focuses on developing an individual design methodology for a centrifugal impeller and on generating a mixed-flow impeller for a small turbojet engine using this methodology. The methodology is structured around design, modeling, and optimization processes operated sequentially. The design process consists of engine-design and compressor-design codes operated together with a commercial design code. Design of Experiments methods and an in-house neural network code are used for the modeling phase. The optimization is based on an in-house code built around a multidirectional search algorithm. The optimization problem is constructed from the in-house parametric design codes of the engine and the compressor, and its goal is to reach an optimum design giving the best possible combination of thrust and fuel consumption for a small turbojet engine. The final combination of design parameters obtained from the optimization study is used to generate the final design with the commercial design code. In the last part of the thesis, the final design is compared with a standard radial-flow impeller in order to clarify the benefit of the study. The results show that the mixed-flow compressor design is superior to a standard radial-flow compressor in a small turbojet application.
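The pipeline described above - sample designs, fit a neural-network surrogate, then search the surrogate - is a standard surrogate-based optimization pattern; a hedged sketch follows (the toy objective, the sampler, and the simple pattern search stand in for the thesis's actual engine/compressor codes and its multidirectional search implementation):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def expensive_objective(x):
    # Stand-in for the coupled engine/compressor design codes: lower is
    # better (e.g., a trade-off of fuel consumption against thrust).
    return np.sum((x - 0.3) ** 2) + 0.05 * np.sin(10 * x).sum()

# 1) Design of Experiments: sample the design space, run the codes.
X = rng.uniform(0, 1, size=(200, 3))
y = np.array([expensive_objective(x) for x in X])

# 2) Modeling: fit a neural-network surrogate of the objective.
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000).fit(X, y)

# 3) Optimization: derivative-free pattern search on the cheap surrogate.
x, step = np.full(3, 0.5), 0.25
while step > 1e-3:
    trials = [np.clip(x + step * d, 0, 1)
              for i in range(3) for d in (np.eye(3)[i], -np.eye(3)[i])]
    best = min(trials, key=lambda t: surrogate.predict(t[None, :])[0])
    if surrogate.predict(best[None, :])[0] < surrogate.predict(x[None, :])[0]:
        x = best          # move to the improving trial point
    else:
        step *= 0.5       # no improvement: contract the pattern
print(x, expensive_objective(x))
```

The point of the surrogate is that step 3 may evaluate the model thousands of times at negligible cost, reserving the expensive design codes for the DOE samples and final verification.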
30

Efficient and Secure Equality-based Two-party Computation

Javad Darivandpour (11190051) 27 July 2021 (has links)
Multiparty computation refers to a scenario in which multiple distinct yet connected parties aim to jointly compute a functionality. Over recent decades, with the rapid spread of the internet and digital technologies, multiparty computation has become an increasingly important topic. In addition to the integrity of computation in such scenarios, it is essential to ensure that the privacy of sensitive information is not violated. Thus, secure multiparty computation aims to provide sound approaches for the joint computation of desired functionalities in a secure manner: not only must the integrity of computation be guaranteed, but also each party must not learn anything about the other parties' private data. In other words, each party learns no more than what can be inferred from its own input and its prescribed output.

This thesis considers secure two-party computation over arithmetic circuits based on additive secret sharing. In particular, we focus on efficient and secure solutions for fundamental functionalities that depend on the equality of private comparands. The first direction we take is providing efficient protocols for two major problems of interest. Specifically, we give novel and efficient solutions for private equality testing and multiple variants of secure wildcard pattern matching over any arbitrary finite alphabet. These problems are of vital importance: private equality testing is a basic building block in many secure multiparty protocols, and secure pattern matching is frequently used in various data-sensitive domains, including (but not limited to) private information retrieval and healthcare-related data analysis. The second direction we take towards a performance improvement in equality-based secure two-party computation is via introducing a generic functionality-independent secure preprocessing that results in an overall computation and communication cost reduction for any subsequent protocol. We achieve this by providing the first precise functionality formulation and secure protocols for replacing original inputs with much smaller inputs such that this replacement neither changes the outcome of subsequent computations nor violates the privacy of sensitive inputs. Moreover, our input-size reduction opens the door to a new approach for efficiently solving Private Set Intersection. The protocols we give in this thesis are typically secure in the semi-honest adversarial threat model.
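As background for the additive secret sharing the thesis builds on, here is a toy sketch of the sharing and reconstruction steps over a prime field (for illustration only: it models the data representation, not a secure equality-testing protocol, and the modulus is an arbitrary choice):

```python
import secrets

P = 2**61 - 1  # a Mersenne prime; shares live in the field Z_P

def share(x):
    """Additively secret-share x: each share alone is uniformly random."""
    r = secrets.randbelow(P)
    return r, (x - r) % P          # share_1 + share_2 = x (mod P)

def reconstruct(s1, s2):
    return (s1 + s2) % P

# Linear operations are done locally on shares, with no interaction:
a1, a2 = share(20)
b1, b2 = share(20)
d1, d2 = (a1 - b1) % P, (a2 - b2) % P   # shares of a - b
print(reconstruct(d1, d2) == 0)          # True: equal inputs, zero difference

# Caveat: opening a - b reveals *how far apart* the values are, not just
# whether they are equal -- which is precisely why dedicated private
# equality-testing protocols, as studied in the thesis, are needed.
```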
