1. On the Complexity of Scheduling University Courses. Lovelace, April L. 01 March 2010.
It has often been said that the problem of creating timetables for scheduling university courses is hard, even as hard as solving an NP-Complete problem. Many papers in the literature make this assertion, but they rarely provide precise problem definitions, and no papers were found that prove the university course scheduling problem under discussion is NP-Complete.
This thesis defines a scheduling problem with realistic constraints: professors are assigned to sections of courses they are willing to teach, at times when they are available, without overloading them. Both decision and optimization versions are precisely defined. An algorithm is then provided that solves the optimization problem in polynomial time. From this it is concluded that the decision problem is in P and therefore unlikely to be NP-Complete.
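As an illustration of why a problem of this flavor can sit in P, the sketch below (not the thesis's actual algorithm) models a deliberately simplified variant, assuming each section has a fixed meeting time and each professor teaches at most one section, as maximum bipartite matching solved by augmenting paths; the full load-limited version can be cast as a max-flow problem instead. All instance data and function names here are hypothetical.

```python
def max_matching(sections, professors, willing, available, meeting_time):
    """Kuhn's augmenting-path bipartite matching for a toy scheduling instance.
    willing[p]: sections p will teach; available[p]: timeslots p is free;
    meeting_time[s]: the fixed timeslot of section s."""
    # Feasible professors for each section.
    adj = {s: [p for p in professors
               if s in willing[p] and meeting_time[s] in available[p]]
           for s in sections}
    match = {}  # professor -> section currently assigned

    def try_assign(s, seen):
        # Place section s, reassigning professors along an augmenting path if needed.
        for p in adj[s]:
            if p in seen:
                continue
            seen.add(p)
            if p not in match or try_assign(match[p], seen):
                match[p] = s
                return True
        return False

    assigned = sum(try_assign(s, set()) for s in sections)
    return assigned, match

# Tiny hypothetical instance.
sections = ["CS101-01", "CS101-02", "MA200-01"]
professors = ["Ada", "Grace"]
willing = {"Ada": {"CS101-01", "CS101-02"}, "Grace": {"MA200-01", "CS101-02"}}
available = {"Ada": {"MWF9"}, "Grace": {"TR10", "MWF9"}}
meeting_time = {"CS101-01": "MWF9", "CS101-02": "MWF9", "MA200-01": "TR10"}
print(max_matching(sections, professors, willing, available, meeting_time))
```

Under these simplifying assumptions the optimization version maximizes the number of staffed sections, and the size of the matching answers the corresponding decision question in polynomial time.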
A second, more complex timetable design problem, which additionally seeks to assign appropriate rooms in which the professors can teach the courses, is then introduced. Here too, both decision and optimization versions are defined. The second major contribution of this thesis is to prove that this decision problem is NP-Complete and hence that the corresponding optimization problem is NP-Hard.
2. Fractals as Basis for Design and Critique. Driscoll, John Charles. 01 October 2019.
The design profession is responding to the complex systems represented by architecture and planning by increasingly incorporating the power of computer technology into the design process. This represents a paradigm shift and requires that designers rise to the challenge of embracing modern technologies to perform increasingly sophisticated tasks without compromising their objective of creating meaningful and environmentally sensitive architecture. This dissertation investigated computer-based fractal tools applied within a traditional architectural charrette, working toward a design process with the potential to address the complex issues architects and planners face today. We developed and presented an algorithm that draws heavily from fractal mathematics and fractal theory. Fractals offer a quantitative and qualitative relation between nature, the built environment, and computational mechanics, and in this dissertation they serve as a bridge between these realms.
We investigated how qualitative/quantitative fractal tools may inform an architectural design process, both as generative formal solutions and as a metric for assessing the complexity of designs and of historic architecture. The primary research objective was to develop a compelling cybernetic design process and apply it to a real-world, multi-faceted case study project within a formal architectural critique. Jurors were provided a platform for evaluating the design work and weighing in as practicing professional architects. Jurors' comments were documented, discussed, and presented as part of the dissertation. Our intention was to open up the discussion and to document the effectiveness or ineffectiveness of the process we presented.
First we discussed the history of generative and algorithmic design and of fractals in architecture. We began with examples from ancient Hindu temple architecture and Middle Eastern architecture, then Gothic and Art Nouveau, and ended the section with a discussion of fractals in the contemporary architecture of Frank Lloyd Wright and the Organic school.
Next we developed a cybernetic design process incorporating a computer-based tool, termed DBVgen, within a closed designer/algorithm feedback loop. The tool incorporated a genetic algorithm that used fractal dimension (FD) as the primary fitness criterion. We applied our design process with mixed results, as discussed by the jurors, whose feedback was grouped into ten categories and assessed along with the author/designer's own feedback. Generally we found that compelling designs tended to have a higher FD; the converse did not hold, however, in that a higher FD did not consistently lead to more compelling designs.
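For readers unfamiliar with the fitness criterion mentioned above, the fractal dimension of an image is commonly estimated by box counting; the sketch below is a generic estimator under that assumption and makes no claim about how DBVgen actually computes FD.

```python
import numpy as np

def box_counting_dimension(image, box_sizes=(2, 4, 8, 16)):
    """Estimate the box-counting (fractal) dimension of a 2-D binary array:
    count occupied boxes at several scales, then fit log(count) against
    log(1/size). Assumes image sides are divisible by the box sizes and at
    least one box is occupied at every scale."""
    counts = []
    for size in box_sizes:
        h, w = image.shape
        occupied = 0
        for i in range(0, h, size):
            for j in range(0, w, size):
                if image[i:i + size, j:j + size].any():
                    occupied += 1
        counts.append(occupied)
    # The slope of the log-log regression approximates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Hypothetical usage: a filled square should score close to 2.0.
img = np.zeros((64, 64), dtype=bool)
img[16:48, 16:48] = True
print(round(box_counting_dimension(img), 2))
```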
Finally, we further developed fractal theory toward an appropriate consideration of the significance of fractals in architecture. We articulated a nuanced definition of fractals in architecture: designs having multi-scale and multi-functional representations of some unifying organizing principle as the result of an iterative process. We then connected this understanding of fractals in architecture to precedent relevant to the case study project, presenting and discussing fractals in the work of Frank Lloyd Wright and Dean Bryant Vollendorf. We expand on how a theory of fractals in architecture may continue to be developed and applied, both as a critical tool for analyzing historic and contemporary architecture and as a creative framework for designing new architectural solutions that better address the complex world we live in.
3. Statistical and Machine Learning Techniques Applied to Algorithm Selection for Solving Sparse Linear Systems. Fuentes, Erika. 01 December 2007.
There are many applications and problems in science and engineering that require large-scale numerical simulations and computations. Choosing an appropriate method to solve these problems is a common but nontrivial issue, principally because the decision is often too hard for humans to make without a certain degree of expertise and knowledge in the particular discipline or in mathematics. Thus, a methodology that can facilitate or automate this process, and that helps in understanding the problem, would be of great interest and help. The proposal is to utilize various statistically based machine-learning and data-mining techniques to analyze and automate the process of choosing an appropriate numerical algorithm for solving a specific set of problems (sparse linear systems) based on their individual properties.
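A minimal sketch of the kind of pipeline this describes, assuming a feature-vector-per-matrix representation and a random-forest classifier; the feature set, the solver labels, and the use of scikit-learn are illustrative assumptions, not the thesis's methodology.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.ensemble import RandomForestClassifier

def matrix_features(A):
    """A few cheap structural features of a sparse matrix (an illustrative choice)."""
    A = A.tocsr()
    n = A.shape[0]
    nnz_per_row = np.diff(A.indptr)
    diag = np.abs(A.diagonal())
    off_diag_sum = np.asarray(abs(A).sum(axis=1)).ravel() - diag
    return [n,
            A.nnz / float(n * n),                  # density
            int(nnz_per_row.max()),                # densest row
            float((diag >= off_diag_sum).mean())]  # fraction of diagonally dominant rows

# Hypothetical training set: each matrix would be labeled with the solver that
# converged fastest for it in a real experiment; labels here are placeholders.
rng = np.random.default_rng(0)
X, y = [], []
for i in range(200):
    A = sparse_random(50, 50, density=float(rng.uniform(0.02, 0.2)), random_state=i)
    X.append(matrix_features(A))
    y.append("cg" if rng.random() < 0.5 else "gmres")

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
new_matrix = sparse_random(50, 50, density=0.05, random_state=999)
print(model.predict([matrix_features(new_matrix)]))
```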
4. Animating the conversion of nondeterministic finite state automata to deterministic finite state automata. Merryman, William Patrick. January 2007.
Thesis (M.S.), Montana State University, Bozeman, 2007. Typescript. Chairperson, Graduate Committee: Rockford Ross. Includes bibliographical references (leaf 51).
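The conversion being animated is the classical subset construction; a minimal sketch of that algorithm (independent of the thesis and its animation software, with a hypothetical NFA as input) is:

```python
from collections import deque

def nfa_to_dfa(alphabet, delta, start, accepting):
    """Classical subset construction. delta maps (state, symbol) -> set of states;
    the empty-string symbol '' denotes an epsilon move."""
    def eps_closure(states):
        stack, closure = list(states), set(states)
        while stack:
            q = stack.pop()
            for r in delta.get((q, ''), set()) - closure:
                closure.add(r)
                stack.append(r)
        return frozenset(closure)

    start_set = eps_closure({start})
    dfa_delta, seen, queue = {}, {start_set}, deque([start_set])
    while queue:
        S = queue.popleft()
        for a in alphabet:
            moves = set()
            for q in S:
                moves |= delta.get((q, a), set())
            T = eps_closure(moves)
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                queue.append(T)
    dfa_accepting = {S for S in seen if S & accepting}
    return start_set, dfa_delta, dfa_accepting

# Hypothetical NFA for the language (a|b)*ab over {a, b}.
nfa_delta = {(0, 'a'): {0, 1}, (0, 'b'): {0}, (1, 'b'): {2}}
start, table, accepting = nfa_to_dfa('ab', nfa_delta, 0, {2})
print(len({S for S, _ in table}), "DFA states,", len(accepting), "accepting")
```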
5. Learning Accurate and Robust Deep Visual Models. Li, Yandong. 01 January 2021.
Over the last decade, we have witnessed the renaissance of deep neural networks (DNNs) and their successful applications in computer vision. There is still a long way to go toward building intelligent and reliable machine vision systems, but DNNs provide a promising direction. The goal of this thesis is to present a few small steps along this road. We mainly focus on two questions: how to design label-efficient learning algorithms for computer vision tasks, and how to improve the robustness of DNN-based visual models. Concerning label efficiency, we investigate a reinforced sequential model for video summarization, a background hallucination strategy for high-resolution image generation, and a selective module integrated into self-supervised self-training for improving object detection with noisy Web images. In addition, we study how to rank many pre-trained deep neural checkpoints for transfer learning to a downstream task. Concerning robustness, we propose a powerful black-box adversarial attack to facilitate research toward robust DNNs, and we also explore a new threat model in which adversaries can distill the knowledge from a black-box teacher model to harvest a student model that imitates the teacher's characteristics. In each chapter, we introduce the problem and present our solutions using machine learning and deep neural architectures, followed by comparisons with existing baselines and discussions of future research.
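For context on the black-box setting mentioned above, a minimal sketch of a generic score-based attack by random search follows; it illustrates the threat model only (query access to output probabilities, no gradients) and is not the attack proposed in the thesis. The victim model here is a made-up linear classifier.

```python
import numpy as np

def random_search_attack(predict_proba, x, true_label, eps=0.05, steps=500, seed=0):
    """Score-based black-box attack sketch: greedily accept random perturbations,
    clipped to an L-infinity ball of radius eps, that lower the model's
    confidence in the true label. predict_proba is the only access to the model."""
    rng = np.random.default_rng(seed)
    x_adv = x.copy()
    best = predict_proba(x_adv)[true_label]
    for _ in range(steps):
        candidate = x_adv + rng.uniform(-eps / 4, eps / 4, size=x.shape)
        candidate = np.clip(candidate, x - eps, x + eps)   # stay in the eps-ball
        candidate = np.clip(candidate, 0.0, 1.0)           # keep a valid image
        score = predict_proba(candidate)[true_label]
        if score < best:                                   # greedy acceptance
            x_adv, best = candidate, score
        if np.argmax(predict_proba(x_adv)) != true_label:  # misclassified: done
            break
    return x_adv

# Hypothetical victim: a fixed linear "classifier" over flattened pixels.
W = np.random.default_rng(1).normal(size=(10, 28 * 28))
def predict_proba(x):
    logits = W @ x.ravel()
    e = np.exp(logits - logits.max())
    return e / e.sum()

x = np.random.default_rng(2).uniform(size=(28, 28))
adv = random_search_attack(predict_proba, x, true_label=int(np.argmax(predict_proba(x))))
print(np.argmax(predict_proba(x)), "->", np.argmax(predict_proba(adv)))
```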
6. Human Behavior in Domestic Environments: Prediction and Applications. Zehtabian, Sharare. 01 December 2021.
A longstanding goal of human behavior science is to model and predict how humans interact with each other or with other systems. Such models are beneficial and have many applications, including designing and implementing assistive technologies, improving users' experiences and quality of life, and making better decisions to create public policies. Behavior is highly complex, due to uncertainties and a lack of scientific tools to measure it, so prediction of human behavior cannot be 100% accurate. However, prediction is also not hopeless, because biological needs, as well as cultural conventions (for instance, regarding meal times), set the general patterns of humans' daily behavior. Furthermore, while individual humans might adjust these patterns according to their own preferences, they also show some degree of consistency in their daily routine.

In this dissertation, we focus on the interrelated challenges of improving prediction models for human daily activities and developing techniques through which intelligent applications can benefit from this improved prediction. We describe techniques for creating deep learning-based predictive models that can help humans in their daily life. One of the challenges of learning-based approaches in this setting is the scarcity of data: if we are collecting information about a given human in a home, our dataset grows by exactly one sample a day, which is insufficient for deep learning algorithms that are often trained on datasets with millions of samples. We investigate three directions through which the paucity of samples can be overcome.

First, we discuss techniques through which, starting from a small number of representative samples, we can generate much larger synthetic datasets that capture the statistical properties of the real-world data and can be used in training. We consider an application in which we apply human behavior prediction to the practical problem of improving the quality of experience: by learning to predict the experience requested by the user, we are able to perform intelligent pre-caching and achieve a higher average quality of experience for a given available network bandwidth.

The second direction we investigate is the collection of data from multiple users. This creates multiple challenges. First, users would prefer to minimize the personal data they share. This requires us to investigate techniques that learn predictive models from multiple user experiences without requiring the users to upload their data to a common repository. We adapt the technique of federated learning, which requires users to share only the training gradients on a model sent by a central server, not their raw data. We investigate procedures that allow a user to obtain the best possible model for her own prediction while minimizing the amount of data disclosed. The second challenge is that not all users benefit to the same degree from creating a central learning model; by investigating how much a user can benefit, we can stop the learning process, and the implicit privacy loss, earlier.

Finally, we developed predictive models for the spread of pandemics and techniques that use these predictions to recommend Non-Pharmaceutical Interventions (NPIs) to local stakeholders. We find that the prediction of pandemics is also conditioned on the behavior of individual humans and the actions taken by governments and, especially in the early phases of the pandemic, suffers from a lack of data.
We used a combination of a deep learning-based predictive model with a compartmental model, trained on the months elapsed since the start of the pandemic, to predict infection rates for the following months. We used cultural and geographical attributes as constant features, the history of cases and deaths as context features, and NPIs as action features to train a single predictive model that can predict both the infection rate and the stringency of the NPIs deployed by policymakers for all countries and regions. We found that the stringency is not always aligned with the number of cases but also depends on political, economic, and cultural factors.
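A minimal sketch of the compartmental side of such a hybrid model, assuming a discrete-time SIR formulation in which an NPI stringency input scales down the transmission rate; all parameter values and the 0.7 "maximum NPI effect" factor are placeholders, not the dissertation's fitted values.

```python
def sir_step(S, I, R, beta, gamma, stringency, population):
    """One day of a discrete-time SIR model in which NPI stringency (0..1)
    reduces the effective transmission rate. Illustrative only."""
    effective_beta = beta * (1.0 - 0.7 * stringency)   # 0.7: assumed max NPI effect
    new_infections = effective_beta * S * I / population
    new_recoveries = gamma * I
    return (S - new_infections,
            I + new_infections - new_recoveries,
            R + new_recoveries)

# Hypothetical rollout: stringency ramps up after day 30.
S, I, R = 999_000.0, 1_000.0, 0.0
for day in range(120):
    stringency = 0.0 if day < 30 else 0.8
    S, I, R = sir_step(S, I, R, beta=0.3, gamma=0.1,
                       stringency=stringency, population=1_000_000)
print(f"infected at day 120: {I:.0f}, recovered: {R:.0f}")
```

In the hybrid setup described in the abstract, a learned model would supply or correct quantities such as the transmission rate from the constant, context, and action features; the compartmental step above is only the mechanistic half of that combination.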
7. Algorithms and Variations on the Positional Burrows-Wheeler Transform and Their Applications. Sanaullah, Ahsan. 01 January 2023.
In this dissertation, we develop algorithms and variations on the Positional Burrows-Wheeler Transform (PBWT). The PBWT is a data structure that stores M binary strings of length N while allowing efficient search. We develop the dynamic PBWT (d-PBWT), a variation of the PBWT on which the relevant algorithms run with unchanged time complexity but which also allows efficient insertion and deletion of haplotypes; we provide insertion and deletion algorithms with average-case O(N) time complexity. We also improve upon the query algorithms for the PBWT. Durbin described a set maximal match query algorithm on the PBWT and claimed O(N) time complexity. Naseri et al. described a long match query algorithm on the PBWT using additional data structures (LEAP arrays) in a claimed O(N + c) time complexity, where c is the number of matches output. We showed these bounds to be incorrect in the worst case and provided set maximal match and long match query algorithms that do achieve these time complexities. Furthermore, we develop a new formulation of haplotype threading, the Minimal Positional Substring Cover (MPSC), and solve it in O(N) time. We then solve variants of the MPSC problem: leftmost MPSC, rightmost MPSC, and set maximal match only MPSC. Using these variants to bound the solution space, we are able to represent all possible MPSCs efficiently. We then solve variants that may be more biologically useful: length maximal MPSC, h-MPSC, and L-MPSC. All the MPSC problems are solved in O(N) time given a PBWT of the reference panel. Finally, we show the biological usefulness of the MPSC formulation using an imputation benchmark.
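For context, the core PBWT construction that these algorithms build on can be sketched as follows: a simplified rendering in the spirit of Durbin's Algorithm 1, with the divergence arrays omitted; the panel shown is hypothetical.

```python
def build_pbwt(haplotypes):
    """Sketch of the core PBWT construction: at each site, stably partition the
    current haplotype ordering by the allele at that site, so that haplotypes
    sorted by their reversed prefixes end up adjacent. Adjacent rows then share
    long matches, which is what enables the O(N)-per-site match algorithms."""
    M = len(haplotypes)
    N = len(haplotypes[0]) if M else 0
    a = list(range(M))              # positional prefix array before site 0
    prefix_arrays = [a[:]]
    for k in range(N):
        zeros, ones = [], []
        for idx in a:
            (zeros if haplotypes[idx][k] == 0 else ones).append(idx)
        a = zeros + ones            # stable partition keeps reverse-prefix order
        prefix_arrays.append(a[:])
    return prefix_arrays

# Tiny hypothetical panel: M = 4 haplotypes over N = 5 sites.
panel = [
    [0, 1, 0, 1, 0],
    [1, 1, 0, 0, 0],
    [0, 0, 1, 1, 1],
    [0, 1, 0, 1, 1],
]
for k, a in enumerate(build_pbwt(panel)):
    print(f"a_{k} = {a}")
```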
8. Implementing Efficient Algorithms for Computing Runs. Weng, Chia-Chun. 10 1900.
In the first part of this thesis we present a C++ implementation of an improved O(n log n) algorithm to compute runs, the number of primitively rooted distinct squares, and maximal repetitions, based on Crochemore's partitioning algorithm. This is joint work with Mei Jiang and extends her work on the problem. In the second part we present a C++ implementation of a linear algorithm to compute runs based on Main's and on Kolpakov and Kucherov's algorithms, following this strategy:

1. Compute the suffix array and LCP array in linear time;
2. Using the suffix array and LCP array, compute the Lempel-Ziv factorization in linear time;
3. Using the Lempel-Ziv factorization, compute in linear time a set of runs that includes all the leftmost runs, following Main's algorithm;
4. Using Kolpakov and Kucherov's approach, compute the remaining runs in linear time.

For our linear-time implementation, we partially relied on Jonathan Fischer's Java implementation.
Master of Science (MSc)
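As an illustration of step 1 above, the LCP array can be computed from a suffix array in overall linear time with Kasai's algorithm. The sketch below (illustrative, not the thesis's C++ code) uses a naive suffix array construction for brevity, whereas a truly linear-time pipeline would use a construction such as SA-IS.

```python
def suffix_array(s):
    """Naive O(n^2 log n) suffix array, for brevity only; a linear-time
    pipeline would use a construction such as SA-IS instead."""
    return sorted(range(len(s)), key=lambda i: s[i:])

def lcp_array(s, sa):
    """Kasai's algorithm: LCP of each suffix with its predecessor in the
    suffix array, computed in O(n) overall."""
    n = len(s)
    rank = [0] * n
    for r, i in enumerate(sa):
        rank[i] = r
    lcp = [0] * n
    h = 0
    for i in range(n):
        if rank[i] > 0:
            j = sa[rank[i] - 1]
            while i + h < n and j + h < n and s[i + h] == s[j + h]:
                h += 1
            lcp[rank[i]] = h
            if h:
                h -= 1          # LCP can drop by at most 1 at the next position
        else:
            h = 0
    return lcp

s = "abaabab"
sa = suffix_array(s)
print(sa, lcp_array(s, sa))
```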
9. Counting the number of spanning trees in some special graphs. Zhang, Yuanping. January 2002.
Thesis (Ph.D.), Hong Kong University of Science and Technology, 2002. Includes bibliographical references (leaves 70-76). Also available in electronic version. Access restricted to campus users.
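For context, the standard computational tool for this kind of counting is Kirchhoff's matrix-tree theorem: the number of spanning trees of a connected graph equals any cofactor of its Laplacian. A generic sketch, unrelated to the thesis's specific graph families:

```python
import numpy as np

def count_spanning_trees(adjacency):
    """Kirchhoff's matrix-tree theorem: the number of spanning trees of a
    connected graph equals any cofactor of its Laplacian L = D - A."""
    A = np.array(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    minor = L[1:, 1:]                  # delete one row and the matching column
    return round(np.linalg.det(minor))

# Example: the complete graph K4 has 4^(4-2) = 16 spanning trees (Cayley's formula).
K4 = [[0, 1, 1, 1],
      [1, 0, 1, 1],
      [1, 1, 0, 1],
      [1, 1, 1, 0]]
print(count_spanning_trees(K4))
```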
10. Hardcoding finite automata. Ngassam, Ernest Ketcha. January 2005.
Thesis (M.Sc.) (Computer Science), University of Pretoria, 2003. English summary. Includes bibliographical references.
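To illustrate what hardcoding a finite automaton means in practice, the sketch below contrasts a table-driven recognizer with one whose transitions are compiled into control flow; the automaton is hypothetical and the code is not taken from the dissertation.

```python
# Table-driven recognizer: transitions live in a data structure.
TABLE = {(0, 'a'): 1, (1, 'b'): 0, (1, 'a'): 1}
ACCEPTING = {1}

def run_table(s):
    state = 0
    for ch in s:
        state = TABLE.get((state, ch))
        if state is None:       # no transition defined: reject
            return False
    return state in ACCEPTING

# "Hardcoded" recognizer for the same automaton: the transition table is
# unrolled into branches, trading flexibility for potential speed.
def run_hardcoded(s):
    state = 0
    for ch in s:
        if state == 0:
            if ch == 'a':
                state = 1
            else:
                return False
        else:  # state == 1
            if ch == 'a':
                state = 1
            elif ch == 'b':
                state = 0
            else:
                return False
    return state == 1

print(run_table("aab"), run_hardcoded("aab"))   # both False (ends in state 0)
print(run_table("aba"), run_hardcoded("aba"))   # both True  (ends in state 1)
```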