  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
521

Whitehead's Decision Problems for Automorphisms of Free Group

Mishra, Subhajit January 2020 (has links)
Let F be a free group of finite rank. Given words u, v ∈ F, J.H.C. Whitehead solved the decision problem of finding an automorphism φ ∈ Aut(F) carrying u to v. He used topological methods to produce an algorithm. Higgins and Lyndon gave a very concise proof of this result, based on the work of Rapaport. We provide a detailed account of Higgins and Lyndon’s proof of the peak reduction lemma and the restricted version of Whitehead’s theorem, for cyclic words as well as for sets of cyclic words, with a full explanation of each step. Then, we give an inductive proof of Whitehead’s minimization theorem and describe Whitehead’s decision algorithm. Noticing that Higgins and Lyndon’s work is limited to cyclic words, we extend their proofs to ordinary words and sets of ordinary words. In the last chapter, we mention an example given by Whitehead to show that the decision problem for finitely generated subgroups is more difficult, and we outline an approach due to Gersten to overcome this difficulty. We also give an extensive literature survey of Whitehead’s algorithm. / Thesis / Master of Science (MSc)
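To make the notion of cyclic words concrete, here is a minimal sketch (not taken from the thesis) of free and cyclic reduction of words in a free group, with uppercase letters standing for inverses of generators. The function names are illustrative only; the peak-reduction and Whitehead-automorphism machinery discussed above is not reproduced here.

```python
# Toy sketch: words in a free group are strings over generators a, b, ...
# with A, B, ... denoting their inverses.

def inv(ch):
    """Inverse of a single generator letter."""
    return ch.lower() if ch.isupper() else ch.upper()

def free_reduce(word):
    """Cancel adjacent inverse pairs, e.g. 'abBA' -> ''."""
    out = []
    for ch in word:
        if out and out[-1] == inv(ch):
            out.pop()
        else:
            out.append(ch)
    return "".join(out)

def cyclic_reduce(word):
    """Freely reduce, then strip matching first/last letters cyclically.

    Cyclically reduced words are the natural objects in the Higgins-Lyndon
    treatment of Whitehead's theorem for cyclic words.
    """
    w = free_reduce(word)
    while len(w) > 1 and w[0] == inv(w[-1]):
        w = w[1:-1]
    return w

if __name__ == "__main__":
    print(cyclic_reduce("aBbabA"))  # -> 'ab'
```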
522

Non-Wiener Effects in Narrowband Interference Mitigation Using Adaptive Transversal Equalizers

Ikuma, Takeshi 25 April 2007 (has links)
The least mean square (LMS) algorithm is widely expected to operate near the corresponding Wiener filter solution. An exception to this popular perception occurs when the algorithm is used to adapt a transversal equalizer in the presence of additive narrowband interference. The steady-state LMS equalizer behavior does not correspond to that of the fixed Wiener equalizer: the mean of its weights is different from the Wiener weights, and its mean squared error (MSE) performance may be significantly better than the Wiener performance. The contributions of this study serve to better understand this so-called non-Wiener phenomenon of the LMS and normalized LMS adaptive transversal equalizers. The first contribution is the analysis of the mean of the LMS weights in steady state, assuming a large interference-to-signal ratio (ISR). The analysis is based on the Butterweck expansion of the weight update equation. The equalization problem is transformed into an equivalent interference estimation problem to make the analysis of the Butterweck expansion tractable. The analytical results are valid for all step-sizes. Simulation results are included to support the analysis and show that it predicts the simulated behavior very well over a wide range of ISR. The second contribution is a new MSE estimator based on the expression for the mean of the LMS equalizer weight vector. The new estimator shows vast improvement over the Reuter-Zeidler MSE estimator. For the development of the new MSE estimator, the transfer function approximation of the LMS algorithm is generalized for the steady-state analysis of the LMS algorithm. This generalization also revealed the cause of the breakdown of the MSE estimators when the interference is not strong: the assumption that the variation of the weight vector around its mean is small relative to the mean of the weight vector itself no longer holds. Both the expression for the mean of the weight vector and the MSE estimator are first developed for the LMS algorithm. The results are then extended to the normalized LMS algorithm simply by redefining the adaptation step-size. / Ph. D.
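For readers unfamiliar with the setup, the following is a generic LMS/NLMS transversal-equalizer sketch, not the dissertation's analysis or its exact model; the BPSK-plus-sinusoidal-interferer signal, the step sizes, and the tap count below are assumptions chosen only for illustration.

```python
# Generic adaptive transversal equalizer sketch (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)
N, taps, mu, delay = 20000, 11, 0.002, 5

symbols = rng.choice([-1.0, 1.0], size=N)              # BPSK training data
n = np.arange(N)
interference = 5.0 * np.cos(0.3 * np.pi * n + 1.0)     # strong narrowband interferer
received = symbols + interference + 0.05 * rng.standard_normal(N)

w = np.zeros(taps)
sq_err = []
for k in range(taps - 1, N):
    x = received[k - taps + 1:k + 1][::-1]   # regressor, newest sample first
    d = symbols[k - delay]                   # delayed training symbol
    e = d - w @ x
    w += mu * e * x                          # LMS weight update
    # NLMS variant would instead use: w += (0.5 / (1e-8 + x @ x)) * e * x
    sq_err.append(e * e)

print("steady-state MSE ~", np.mean(sq_err[-2000:]))
```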
523

Discrete flower pollination algorithm for resource constrained project scheduling problem

Bibiks, Kirils, Li, Jian-Ping, Hu, Yim Fun 07 1900 (has links)
Yes / In this paper, a new population-based and nature-inspired metaheuristic algorithm, the Discrete Flower Pollination Algorithm (DFPA), is presented to solve the Resource Constrained Project Scheduling Problem (RCPSP). The DFPA is a modification of the existing Flower Pollination Algorithm, adapted for solving combinatorial optimization problems by changing some of the algorithm's core concepts, such as the flower, global pollination, Lévy flight, and local pollination. The proposed DFPA is then tested on sets of benchmark instances and its performance is compared against other existing metaheuristic algorithms. The numerical results show that the proposed algorithm is efficient and outperforms several other popular metaheuristic algorithms, both in terms of the quality of the results and execution time. Being discrete, the proposed algorithm can also be used to solve other combinatorial optimization problems. / Innovate UK / Awarded 'Best paper of the Month'
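For orientation, the sketch below shows the standard *continuous* Flower Pollination Algorithm (Yang's formulation) on a toy objective, to illustrate the core concepts named above — Lévy-flight global pollination and local pollination. It is not the paper's discrete variant for RCPSP; the population size, switch probability, and scaling factor are illustrative assumptions.

```python
# Standard continuous Flower Pollination Algorithm sketch (illustrative only).
import math
import numpy as np

rng = np.random.default_rng(1)

def levy(beta, size):
    """Levy-distributed step lengths via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def fpa(objective, dim=10, n_flowers=25, p_switch=0.8, iters=500):
    pop = rng.uniform(-5, 5, (n_flowers, dim))
    fit = np.array([objective(x) for x in pop])
    best_i = int(np.argmin(fit))
    best, best_f = pop[best_i].copy(), fit[best_i]
    for _ in range(iters):
        for i in range(n_flowers):
            if rng.random() < p_switch:                     # global pollination (Levy flight)
                cand = pop[i] + 0.01 * levy(1.5, dim) * (best - pop[i])
            else:                                           # local pollination
                j, k = rng.choice(n_flowers, 2, replace=False)
                cand = pop[i] + rng.random() * (pop[j] - pop[k])
            f = objective(cand)
            if f < fit[i]:
                pop[i], fit[i] = cand, f
                if f < best_f:
                    best, best_f = cand.copy(), f
    return best, best_f

# Toy usage: minimize the sphere function.
best, val = fpa(lambda x: float(np.sum(x ** 2)))
print("best objective:", val)
```

The DFPA replaces these continuous position updates with discrete, permutation-oriented operators suited to project schedules.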
524

Machine Learning for Malware Detection in Network Traffic

Omopintemi, A.H., Ghafir, Ibrahim, Eltanani, S., Kabir, Sohag, Lefoane, Moemedi 19 December 2023 (has links)
No / Developing advanced and efficient malware detection systems is becoming significant in light of the growing threat landscape in cybersecurity. This work aims to tackle the enduring problem of identifying malware and protecting digital assets from cyber-attacks. Conventional methods frequently prove ineffective in adjusting to the ever-evolving nature of harmful activity. As such, novel approaches are needed that improve precision while taking into account the ever-changing landscape of modern cybersecurity problems. To address this problem, this research focuses on the detection of malware in network traffic. This work proposes a machine-learning-based approach for malware detection, with particular attention to the Random Forest (RF), Support Vector Machine (SVM), and Adaboost algorithms. In this paper, the models’ performance was evaluated using a set of assessment metrics: Accuracy (AC) for overall performance, Precision (PC) for positive predicted values, Recall (RS) for genuine positives, and the F1 Score (SC) for a balanced viewpoint. A performance comparison was carried out, and the results reveal that the model built with Adaboost performs best. The true positive rate (TPR) exceeds 97% and the false positive rate (FPR) is below 4% for each of the three classifiers. The model developed in this paper has the potential to help organizations and experts anticipate and handle malware, and it can be used to make forecasts and provide management solutions in a network’s everyday operational activities.
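A minimal sketch of the kind of pipeline described above follows. The paper's dataset and features are not available here, so a synthetic dataset stands in for extracted network-traffic features; only the three classifiers and four metrics named in the abstract are reproduced.

```python
# Illustrative sketch only: synthetic data stands in for network-traffic features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Stand-in for flow-level features labelled benign (0) / malicious (1).
X, y = make_classification(n_samples=5000, n_features=20, n_informative=10,
                           weights=[0.7, 0.3], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=42)

models = {
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=42),
    "SVM": SVC(kernel="rbf", C=1.0),
    "AdaBoost": AdaBoostClassifier(n_estimators=200, random_state=42),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"{name}: AC={accuracy_score(y_te, pred):.3f} "
          f"PC={precision_score(y_te, pred):.3f} "
          f"RS={recall_score(y_te, pred):.3f} "
          f"SC={f1_score(y_te, pred):.3f}")
```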
525

Exploring the Landscape of Big Data Analytics Through Domain-Aware Algorithm Design

Dash, Sajal 20 August 2020 (has links)
Experimental and observational data emerging from various scientific domains necessitate fast, accurate, and low-cost analysis of the data. While exploring the landscape of big data analytics, multiple challenges arise from three characteristics of big data: the volume, the variety, and the velocity. High volume and velocity of the data warrant a large amount of storage, memory, and compute power, while a large variety of data demands cognition across domains. Addressing domain-intrinsic properties of data can help us analyze the data efficiently through the frugal use of high-performance computing (HPC) resources. In this thesis, we present our exploration of the data analytics landscape with domain-aware approximate and incremental algorithm design. We propose three guidelines targeting three properties of big data for domain-aware big data analytics: (1) explore geometric and domain-specific properties of high dimensional data for succinct representation, which addresses the volume property, (2) design domain-aware algorithms through mapping of domain problems to computational problems, which addresses the variety property, and (3) leverage incremental arrival of data through incremental analysis and invention of problem-specific merging methodologies, which addresses the velocity property. We demonstrate these three guidelines through the solution approaches of three representative domain problems. We present Claret, a fast and portable parallel weighted multi-dimensional scaling (WMDS) tool, to demonstrate the application of the first guideline. It combines algorithmic concepts extended from the stochastic force-based multi-dimensional scaling (SF-MDS) and Glimmer. Claret computes approximate weighted Euclidean distances by combining a novel data mapping called stretching with the Johnson–Lindenstrauss lemma to reduce the complexity of WMDS from O(f(n)d) to O(f(n) log d). In demonstrating the second guideline, we map the problem of identifying multi-hit combinations of genetic mutations responsible for cancers to the weighted set cover (WSC) problem by leveraging the semantics of cancer genomic data obtained from cancer biology. Solving the mapped WSC with an approximate algorithm, we identified a set of multi-hit combinations that differentiate between tumor and normal tissue samples. To identify three- and four-hit combinations, which require orders of magnitude more computational power, we scaled out the WSC algorithm to a hundred nodes of the Summit supercomputer. In demonstrating the third guideline, we developed a tool, iBLAST, to perform an incremental sequence similarity search. Developing new statistics to combine search results over time makes incremental analysis feasible. iBLAST performs (1+δ)/δ times faster than NCBI BLAST, where δ represents the fraction of database growth. We also explored various approaches to mitigate catastrophic forgetting in incremental training of deep learning models. / Doctor of Philosophy / Experimental and observational data emerging from various scientific domains necessitate fast, accurate, and low-cost analysis of the data. While exploring the landscape of big data analytics, multiple challenges arise from three characteristics of big data: the volume, the variety, and the velocity. Here volume represents the data's size, variety represents various sources and formats of the data, and velocity represents the data arrival rate. High volume and velocity of the data warrant a large amount of storage, memory, and computational power.
In contrast, a large variety of data demands cognition across domains. Addressing domain-intrinsic properties of data can help us analyze the data efficiently through the frugal use of high-performance computing (HPC) resources. This thesis presents our exploration of the data analytics landscape with domain-aware approximate and incremental algorithm design. We propose three guidelines targeting three properties of big data for domain-aware big data analytics: (1) explore geometric (pair-wise distance and distribution-related) and domain-specific properties of high dimensional data for succinct representation, which addresses the volume property, (2) design domain-aware algorithms through mapping of domain problems to computational problems, which addresses the variety property, and (3) leverage incremental data arrival through incremental analysis and invention of problem-specific merging methodologies, which addresses the velocity property. We demonstrate these three guidelines through the solution approaches of three representative domain problems. We demonstrate the application of the first guideline through the design and development of Claret. Claret is a fast and portable parallel weighted multi-dimensional scaling (WMDS) tool that can reduce the dimension of high-dimensional data points. In demonstrating the second guideline, we identify combinations of cancer-causing gene mutations by mapping the problem to a well-known computational problem, the weighted set cover (WSC) problem. We scaled out the WSC algorithm to a hundred nodes of the Summit supercomputer to solve the problem in less than two hours instead of an estimated hundred years. In demonstrating the third guideline, we developed a tool, iBLAST, to perform an incremental sequence similarity search. This analysis was made possible by developing new statistics to combine search results over time. We also explored various approaches to mitigate catastrophic forgetting in deep learning models, where a model trained in a streaming setting loses the ability to perform machine learning tasks well on older data.
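As background for the second guideline, the sketch below shows the textbook greedy approximation for weighted set cover on toy data. It is not the thesis's specific approximation algorithm, its genomic data mapping, or its parallel Summit implementation; the gene/sample interpretation in the comments is an assumption for illustration.

```python
# Textbook greedy approximation for weighted set cover (illustrative only).
def greedy_weighted_set_cover(universe, sets, weights):
    """Repeatedly pick the set with the lowest weight per newly covered element."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = min(
            (name for name in sets if sets[name] & uncovered),
            key=lambda name: weights[name] / len(sets[name] & uncovered),
        )
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

# Toy example: elements might correspond to tumour samples, and each set to the
# samples in which a given gene is mutated ("covered" by that gene).
universe = range(1, 8)
sets = {"g1": {1, 2, 3}, "g2": {3, 4, 5, 6}, "g3": {6, 7}, "g4": {1, 4, 7}}
weights = {"g1": 1.0, "g2": 1.5, "g3": 0.5, "g4": 1.0}
print(greedy_weighted_set_cover(universe, sets, weights))  # e.g. ['g3', 'g1', 'g2']
```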
526

Algorithm Visualization: The State of the Field

Cooper, Matthew Lenell 01 May 2007 (has links)
We report on the state of the field of algorithm visualization, both quantitatively and qualitatively. Computer science educators seem to find algorithm and data structure visualizations attractive for their classrooms. Educational research shows that some are effective while many are not. Clearly, then, visualizations are difficult to create and to use well. There is little in the way of a supporting community, and many visualizations are downright poor. Topic distribution is heavily skewed towards simple concepts, with advanced topics receiving little to no attention. We have cataloged nearly 400 visualizations available on the Internet in a wiki-based catalog that includes availability, platform, strengths and weaknesses, responsible personnel and institutions, and other data about each visualization. We have developed extraction and analysis tools to gather statistics about the corpus of visualizations. Based on analysis of this collection, we point out areas where improvements may be realized and suggest techniques for implementing such improvements. We pay particular attention to the free and open source software movement as a model which the visualization community may do well to emulate, from both a software engineering perspective and a community-building standpoint. / Master of Science
527

Information Freshness Optimization in Real-time Network Applications

Liu, Zhongdong 12 June 2024 (has links)
In recent years, the remarkable development in ubiquitous communication networks and smart portable devices has spawned a wide variety of real-time applications that require timely information updates (e.g., autonomous vehicular systems, industrial automation systems, and live streaming services). These real-time applications all have one thing in common: they desire their knowledge of the information source to be as fresh as possible. In order to measure the freshness of information, a new metric called the Age-of-Information (AoI) has been proposed. AoI is defined as the time elapsed since the generation time of the freshest delivered update. This metric is influenced by both the inter-arrival time and the delay of the updates. As a result of these dependencies, the AoI metric exhibits distinct characteristics compared to traditional delay and throughput metrics. In this dissertation, our goal is to optimize AoI under various real-time network applications. Firstly, we investigate the fundamental problem of how exactly various scheduling policies impact AoI performance. Though there is a large body of work studying the AoI performance under different scheduling policies, the use of the update-size information and its combination with other information (such as arrival-time information and service preemption) to reduce AoI has not yet been explored. Secondly, since AoI is a recently introduced measure of freshness, its relationship to other performance metrics remains largely unclear. We analyze the tradeoffs between AoI and additional performance metrics, including service performance and update cost, within real-world applications. This dissertation is organized into three parts. In the first part, we realize that scheduling policies leveraging the update-size information can substantially reduce the delay, one of the key components of AoI. However, it remains largely unknown how exactly scheduling policies (especially those making use of update-size information) impact the AoI performance. To this end, we conduct a systematic and comparative study to investigate the impact of scheduling policies on the AoI performance in single-server queues and provide useful guidelines for the design of AoI-efficient scheduling policies. In the second part, we analyze the tradeoffs between AoI and other performance metrics in real-world systems. Specifically, we focus on the following two important tradeoffs. (i) The tradeoff between service performance and AoI that arises in data-driven real-time applications (e.g., Google Maps and stock trading applications). In these applications, the computing resource is often shared for processing both updates from information sources and queries from end users. Hence, there is a natural tradeoff between service performance (e.g., response time to queries) and AoI (i.e., the freshness of data in response to user queries). To address this tradeoff, we begin by introducing a simple single-server two-queue model that captures the coupled scheduling between updates and queries. Subsequently, we design threshold-based scheduling policies to prioritize either updates or queries. Finally, we conduct a rigorous analysis of the performance of these threshold-based scheduling policies. (ii) The tradeoff between update cost and AoI that appears in crowdsensing-based applications (e.g., Google Waze and GasBuddy).
On the one hand, users are not satisfied if the responses to their requests are stale; on the other hand, there is a cost for the applications to update their information regarding certain points of interest, since they typically need to make monetary payments to incentivize users. To capture this tradeoff, we first formulate an optimization problem with the objective of minimizing the sum of the staleness cost (which is a function of the AoI) and the update cost; we then obtain a closed-form optimal threshold-based policy by reformulating the problem as a Markov decision process (MDP). In the third part, we study the joint minimization of data staleness (measured by the AoI) and transmission costs (e.g., energy cost) under an (arbitrary) time-varying wireless channel, both without and with machine learning (ML) advice. We consider a discrete-time system where a resource-constrained source transmits time-sensitive data to a destination over a time-varying wireless channel. Each transmission incurs a fixed cost, while not transmitting results in a staleness cost measured by the AoI. The source needs to balance the tradeoff between these transmission and staleness costs. To tackle this challenge, we develop a robust online algorithm aimed at minimizing the sum of transmission and staleness costs, ensuring a worst-case performance guarantee. While online algorithms are robust, they tend to be overly conservative and may perform poorly on average in typical scenarios. In contrast, ML algorithms, which leverage historical data and prediction models, generally perform well on average but lack worst-case performance guarantees. To harness the advantages of both approaches, we design a learning-augmented online algorithm that achieves two key properties: (i) consistency: closely approximating the optimal offline algorithm when the ML prediction is accurate and trusted; (ii) robustness: providing a worst-case performance guarantee even when ML predictions are inaccurate. / Doctor of Philosophy / In recent years, the rapid growth of communication networks and smart devices has spurred the emergence of real-time applications like autonomous vehicles and industrial automation systems. These applications share a common need for timely information. The freshness of information can be measured using a new metric called Age-of-Information (AoI). This dissertation aims to optimize AoI across various real-time network applications, organized into three parts. In the first part, we explore how scheduling policies (particularly those considering update size) impact the AoI performance. Through a systematic and comparative study in single-server queues, we provide useful guidelines for the design of AoI-efficient scheduling policies. The second part explores the tradeoff between update cost and AoI in crowdsensing applications like Google Waze and GasBuddy, where users demand fresh responses to their requests; however, updating information incurs costs for the applications. We aim to minimize the sum of staleness cost (a function of AoI) and update cost. By reformulating the problem as a Markov decision process (MDP), we design a simple threshold-based policy and prove its optimality. In the third part, we study the joint minimization of data staleness and transmission costs (e.g., energy cost) under a time-varying wireless channel. We first develop a robust online algorithm that achieves a competitive ratio of 3, ensuring a worst-case performance guarantee.
Furthermore, when advice is available, e.g., predictions from machine learning (ML) models, we design a learning-augmented online algorithm that exhibits two desired properties: (i) consistency: closely approximating the optimal offline algorithm when the ML prediction is accurate and trusted; (ii) robustness: guaranteeing worst-case performance even with inaccurate ML predictions. While this dissertation marks a significant advancement in AoI research, numerous open problems remain. For instance, our learning-augmented online algorithm treats ML predictions as external inputs. Exploring the co-design and training of ML and online algorithms to improve performance could yield interesting insights. Additionally, while AoI typically assesses update importance based solely on timestamps, the content of updates also holds significance. Incorporating both the age and the semantics of information is an important direction for future research.
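To illustrate the AoI definition used above (time elapsed since the generation of the freshest delivered update), here is a small toy computation of an AoI sample path and its time average; it is not the dissertation's scheduling policies or analysis, and the assumption that the age starts at zero and that updates are delivered with increasing generation times is stated in the code.

```python
# Toy Age-of-Information illustration: given (generation_time, delivery_time)
# pairs, AoI at time t is t minus the generation time of the freshest update
# delivered by t. The sample path is a sawtooth: slope 1, dropping at deliveries.

def average_aoi(updates, horizon):
    """Exact time-average AoI over [0, horizon].

    Assumes age 0 at time 0 and updates delivered in order of increasing
    delivery time with increasing generation times.
    """
    updates = sorted(updates, key=lambda u: u[1])
    area, t_prev, age_prev = 0.0, 0.0, 0.0
    for gen, dlv in updates:
        if dlv > horizon:
            break
        age_at_dlv = age_prev + (dlv - t_prev)                 # age just before delivery
        area += (age_prev + age_at_dlv) / 2 * (dlv - t_prev)   # trapezoid area
        t_prev, age_prev = dlv, dlv - gen                      # age drops to dlv - gen
    age_end = age_prev + (horizon - t_prev)                    # tail segment
    area += (age_prev + age_end) / 2 * (horizon - t_prev)
    return area / horizon

updates = [(0.5, 1.0), (2.0, 2.6), (2.4, 4.0), (6.0, 6.3)]
print("time-average AoI:", average_aoi(updates, horizon=8.0))
```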
528

Impact of algorithm design in implementing real-time active control systems

Hossain, M. Alamgir, Tokhi, M.O., Dahal, Keshav P. January 2004 (has links)
Yes / This paper presents an investigation into the impact of algorithm design on real-time active control systems. An active vibration control (AVC) algorithm for flexible beam systems is employed to demonstrate the critical impact of design choices on real-time control applications. The AVC algorithm is analyzed, designed in various forms, and implemented to explore this impact. Finally, the real-time computing performance of the algorithm designs is compared and discussed to demonstrate, through a set of experiments, the merits of the different design mechanisms.
529

A knowledge-based genetic algorithm for unit commitment

Aldridge, C.J., McKee, S., McDonald, J.R., Galloway, S.J., Dahal, Keshav P., Bradley, M.E., Macqueen, J.F. January 2001 (has links)
No / A genetic algorithm (GA) augmented with knowledge-based methods has been developed for solving the unit commitment economic dispatch problem. The GA evolves a population of binary strings which represent commitment schedules. The initial population of schedules is chosen using a method based on elicited scheduling knowledge. A fast rule-based dispatch method is then used to evaluate candidate solutions. The knowledge-based genetic algorithm is applied to a test system of ten thermal units over 24-hour time intervals, including minimum on/off times and ramp rates, and achieves lower cost solutions than Lagrangian relaxation in comparable computational time.
530

Využití distribuovaných a stochastických algoritmů v síti / Application of distributed and stochastic algorithms in network.

Yarmolskyy, Oleksandr Unknown Date (has links)
This thesis deals with distributed and stochastic algorithms, including testing their convergence in networks. The theoretical part briefly describes the above-mentioned algorithms, including their classification, problems, advantages, and disadvantages. Furthermore, two distributed algorithms and two stochastic algorithms are chosen. The practical part compares their speed of convergence on various network topologies in Matlab.
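The abstract does not name the specific algorithms studied; as a generic illustration of how convergence speed depends on network topology, the sketch below runs standard distributed averaging (consensus) on a ring versus a complete graph. Python is used here for consistency with the other examples; the thesis itself used Matlab.

```python
# Distributed averaging sketch: x_i <- x_i + eps * sum over neighbours (x_j - x_i),
# run on a ring vs. a complete graph to compare convergence speed (illustrative only).
import numpy as np

def ring_adjacency(n):
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i - 1) % n] = A[i, (i + 1) % n] = 1
    return A

def complete_adjacency(n):
    return np.ones((n, n)) - np.eye(n)

def consensus_iterations(A, x0, tol=1e-6, max_iter=100000):
    """Iterate distributed averaging until all node values agree within tol."""
    degrees = A.sum(axis=1)
    eps = 0.9 / degrees.max()                            # step size for stability
    W = np.eye(len(x0)) + eps * (A - np.diag(degrees))   # x <- W x  (W = I - eps*L)
    x = x0.copy()
    for k in range(max_iter):
        if x.max() - x.min() < tol:
            return k
        x = W @ x
    return max_iter

n = 20
x0 = np.random.default_rng(3).uniform(0, 10, n)
print("ring:    ", consensus_iterations(ring_adjacency(n), x0), "iterations")
print("complete:", consensus_iterations(complete_adjacency(n), x0), "iterations")
```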
