61 |
A mathematical theory of synchronous concurrent algorithms. Thompson, Benjamin Criveli, January 1987
A synchronous concurrent algorithm is an algorithm that is described as a network of intercommunicating processes or modules whose concurrent actions are synchronised with respect to a global clock. Synchronous algorithms include systolic algorithms; these are algorithms that are well-suited to implementation in VLSI technologies. This thesis provides a mathematical theory for the design and analysis of synchronous algorithms. The theory includes the formal specification of synchronous algorithms; techniques for proving the correctness and performance or time-complexity of synchronous algorithms; and formal accounts of the simulation and top-down design of synchronous algorithms. The theory is based on the observation that a synchronous algorithm can be specified in a natural way as a simultaneous primitive recursive function over an abstract data type; these functions were first studied by J. V. Tucker and J. I. Zucker. The class of functions is described via a formal syntax and semantics, and this leads to the definition of a functional algorithmic notation called PR. A formal account of synchronous algorithms and their behaviour is achieved by showing that synchronous algorithms can be specified in PR. A formal account of the performance of synchronous algorithms is achieved via a mathematical account of the time taken to evaluate a function defined by simultaneous primitive recursion. A synchronous algorithm, when specified in PR, can be transformed into a program in a language called FPIT. FPIT is a language based on abstract data types and on the multiple or concurrent assignment statement. The transformation from PR to FPIT is phrased as a compiler that is proved correct; compiling the PR-representation of a synchronous algorithm thus yields a provably correct simulation of the algorithm. It is proved that FPIT is just what is needed to implement PR by defining a second compiler, this time from FPIT back into PR, which is again proved correct; thus PR and FPIT are formally computationally equivalent. Furthermore, an autonomous account of the length of computation of FPIT programs is given, and the two compilers are shown to be performance preserving; thus PR and FPIT are computationally equivalent in an especially strong sense. The theory involves a formal account of the top-down design of synchronous algorithms that is phrased in terms of correctness and performance preserving transformations between synchronous algorithms specified at different levels of data abstraction. A new definition of what it means for one abstract data type to be 'implemented' over another is given. This definition generalises the idea of a computable algebra due to A. I. Mal'cev and M. O. Rabin. It is proved that if one data type D is implementable over another data type D', then there exists a correctness and performance preserving compiler mapping high-level PR-programs over D to low-level PR-programs over D'. The compilers from PR to FPIT and from FPIT to PR are defined explicitly and our compiler-existence proof is constructive, so this work is the basis of theoretically well-founded software tools for the design and analysis of synchronous algorithms.
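The correspondence described here between a PR-style specification and its FPIT-style simulation can be pictured with a small sketch. The Python below is purely illustrative: the two-module network, the function names and the use of Python are assumptions for exposition, not the thesis's PR or FPIT notation. It shows a pair of value functions defined by simultaneous primitive recursion over the clock, and the same network simulated by a loop whose body is one concurrent (multiple) assignment per tick.

```python
# Hypothetical sketch: a synchronous network of two modules specified by
# simultaneous primitive recursion over a clock, then simulated with a loop
# built around a concurrent (multiple) assignment, in the spirit of FPIT.
# The toy network and all names are illustrative only.

def v1_init(a):               # f1: value of module 1 at clock cycle 0
    return a

def v2_init(a):               # f2: value of module 2 at clock cycle 0
    return 0

def v1_step(t, v1, v2, a):    # g1: module 1 reads module 2's previous value
    return v2 + a

def v2_step(t, v1, v2, a):    # g2: module 2 reads module 1's previous value
    return v1 + t

def run_recursive(t, a):
    """Simultaneous primitive recursion: V_i(0,a)=f_i(a); V_i(t+1,a)=g_i(t, V_1(t,a), V_2(t,a), a)."""
    if t == 0:
        return v1_init(a), v2_init(a)
    v1, v2 = run_recursive(t - 1, a)
    return v1_step(t - 1, v1, v2, a), v2_step(t - 1, v1, v2, a)

def run_imperative(t, a):
    """Simulation by a loop whose body is one concurrent assignment per clock tick."""
    v1, v2 = v1_init(a), v2_init(a)
    for clock in range(t):
        v1, v2 = v1_step(clock, v1, v2, a), v2_step(clock, v1, v2, a)  # simultaneous update
    return v1, v2

assert run_recursive(5, 3) == run_imperative(5, 3)
```

The final assertion checks that the recursive specification and the imperative simulation agree, a toy-scale analogue of the compiler-correctness results described in the abstract.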
|
62 |
High performance computing and algorithm development: application of dataset development to algorithm parameterization. Jonas, Mario Ricardo Edward, January 2006
Magister Scientiae - MSc / A number of technologies exist that capture data from biological systems. In addition, several computational tools, which aim to organize the data resulting from these technologies, have been created. The ability of these tools to organize the information into biologically meaningful results, however, needs to be stringently tested. The research contained herein focuses on data produced by technology that records short Expressed Sequence Tags (ESTs). / South Africa
|
63 |
Iterative solution of the Dirac Equation using the Lanczos algorithm. Andrew, Richard Charles, 11 February 2009
Please read the abstract in the dissertation / Dissertation (MSc)--University of Pretoria, 2009. / Physics / unrestricted
|
64 |
Eye array sound source localization. Alghassi, Hedayat, 05 1900
Sound source localization with microphone arrays has received considerable attention as a means for the automated tracking of individuals in an enclosed space and as a necessary component of any general-purpose speech capture and automated camera pointing system. A novel method, computationally efficient compared to traditional source localization techniques, is proposed and is both theoretically and experimentally investigated in this research.
This thesis first reviews the previous work in this area. The evolution of a new localization algorithm accompanied by an array structure for audio signal localization in three-dimensional space is then presented. This method, which has similarities to the structure of the eye, consists of a novel hemispherical microphone array with microphones on the shell and one microphone in the center of the sphere. The hemispherical array provides such benefits as 3D coverage, simple signal processing and low computational complexity. The signal processing scheme utilizes parallel computation of a special and novel closeness function for each microphone direction on the shell. The closeness functions have output values that are linearly proportional to the spatial angular difference between the sound source direction and each of the shell microphone directions. Finally, by choosing directions corresponding to the highest closeness function values and implementing linear weighted spatial averaging in those directions, we estimate the sound source direction. The experimental tests validate the method with less than 3.1° of error in a small office room.
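A minimal sketch of the final estimation step just described, assuming the per-microphone closeness values are already available (the thesis's own closeness function, which also involves the centre microphone, is not reproduced here); the array geometry and all numbers are made up for illustration.

```python
# Sketch of the estimation step: pick the shell directions with the highest
# closeness values and take a closeness-weighted spatial average of them.
# Shell microphone directions are unit vectors; the result is a unit vector
# pointing toward the estimated source.
import numpy as np

def estimate_direction(shell_dirs, closeness, k=4):
    """shell_dirs: (N, 3) unit vectors; closeness: (N,) scores, higher = closer to the source."""
    best = np.argsort(closeness)[-k:]                              # k best-scoring shell directions
    weights = np.clip(closeness[best], 0.0, None)                  # guard against negative toy scores
    estimate = (weights[:, None] * shell_dirs[best]).sum(axis=0)   # linear weighted spatial average
    return estimate / np.linalg.norm(estimate)                     # back to a unit direction vector

# Toy usage: 8 hypothetical shell microphones, closeness peaking near the +x direction.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(8, 3))
dirs[:, 2] = np.abs(dirs[:, 2])                                    # keep directions on the upper hemisphere
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
scores = np.clip(dirs @ np.array([1.0, 0.0, 0.0]), 0.0, None)      # stand-in closeness values
print(estimate_direction(dirs, scores))
```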
Contrary to traditional algorithmic sound source localization techniques, the proposed method is based on parallel mathematical calculations in the time domain. Consequently, it can be easily implemented on a custom designed integrated circuit. / Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Graduate
|
65 |
Trasování pohybujícího se objektu v obrazové scéně / Tracking of moving object in video. Komloši, Michal, January 2019
This master's thesis deals with tracking a moving object in an image scene. The result of the thesis is a designed algorithm implemented in the programming language C#. This algorithm improves the functionality of an existing tracking algorithm.
|
67 |
Progressive Multiple Sequence Alignments from Triplets. Kruspe, Matthias; Stadler, Peter F., 14 December 2018
Motivation:
The quality of progressive sequence alignments strongly depends on the accuracy of the individual pairwise alignment steps since gaps that are introduced at one step cannot be removed at later aggregation steps. Adjacent insertions and deletions necessarily appear in arbitrary order in pairwise alignments and hence form an unavoidable source of errors.
Idea:
Here we present a modified variant of progressive sequence alignments that addresses both issues. Instead of pairwise alignments we use exact dynamic programming to align sequence or profile triples. This avoids a large fraction of the ambiguities arising in pairwise alignments. In the subsequent aggregation steps we follow the logic of the Neighbor-Net algorithm, which constructs a phylogenetic network by stepwise replacing triples with pairs instead of combining pairs into singletons. To this end the three-way alignments are subdivided into two partial alignments, at which stage all-gap columns are naturally removed. This alleviates the “once a gap, always a gap” problem of progressive alignment procedures.
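A minimal sketch of the subdivision step mentioned above, assuming gaps are written as '-'; the function name and the toy example are hypothetical and do not reproduce aln3nn's actual aggregation scheme.

```python
# Illustrative only: split a three-way (triple) alignment into a two-row partial
# alignment and drop the all-gap columns that appear once one row is taken out.
def split_triple_alignment(aligned_triple, keep):
    """aligned_triple: list of 3 equal-length gapped strings; keep: indices of the
    two rows forming one partial alignment.  Returns that partial alignment with
    all-gap columns removed."""
    rows = [aligned_triple[i] for i in keep]
    columns = zip(*rows)
    kept = [col for col in columns if any(c != '-' for c in col)]  # drop all-gap columns
    return [''.join(chars) for chars in zip(*kept)]

# Toy example: dropping the middle row leaves an all-gap third column, which vanishes.
triple = ["AC-GT",
          "ACGGT",
          "A--GT"]
print(split_triple_alignment(triple, keep=(0, 2)))   # -> ['ACGT', 'A-GT']
```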
Results:
The three-way Neighbor-Net based alignment program aln3nn is shown to compare favorably on both protein sequences and nucleic acid sequences to other progressive alignment tools. In the latter case one can easily include scoring terms that consider secondary structure features. Overall, the quality of the resulting alignments in general exceeds that of clustalw or other multiple alignment tools even though our software does not include heuristics for context-dependent (mis)match scores.
|
68 |
A New Adaptive Array of Vibration Sensors. Sumali, Hartono, 05 August 1997
The sensing technique described in this dissertation produces modal coordinates for monitoring and active control of structural vibration. The sensor array is constructed from strain-sensing segments. The segment outputs are transformed into modal coordinates by a sensor gain matrix.
An adaptive algorithm for computing the sensor gain matrix with minimal knowledge of the structure's modal properties is proposed. It is shown that the sensor gain matrix is the modal matrix of the segment output correlation matrix. This modal matrix is computed using new algorithms based on Jacobi rotations. The procedure is relatively simple and can be performed gradually to keep computation requirements low.
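A hedged sketch of this idea in Python/NumPy: estimate the correlation matrix of the segment outputs, diagonalise it with classical cyclic Jacobi rotations, and use the resulting modal (eigenvector) matrix as the sensor gain matrix that maps segment outputs to modal coordinates. This is a textbook Jacobi routine on simulated data, not the gradual, low-cost adaptive algorithm developed in the dissertation.

```python
# Generic Jacobi-rotation eigen-decomposition applied to a segment output
# correlation matrix; all data here are simulated stand-ins.
import numpy as np

def jacobi_eigenvectors(C, sweeps=10, tol=1e-12):
    """Eigenvectors of a symmetric matrix C via cyclic Jacobi rotations."""
    A = C.astype(float).copy()
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(sweeps):
        if np.sqrt(np.sum(np.tril(A, -1) ** 2)) < tol:   # off-diagonal norm small enough
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < tol:
                    continue
                theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q] = s
                J[q, p] = -s
                A = J.T @ A @ J          # zeroes the (p, q) element
                V = V @ J                # accumulate rotations: columns of V become eigenvectors
    return V

# Toy usage: fake segment outputs (segments x time samples) -> correlation -> modal coordinates.
rng = np.random.default_rng(1)
y = rng.normal(size=(6, 500))            # simulated strain-segment outputs
R = (y @ y.T) / y.shape[1]               # segment output correlation matrix
Phi = jacobi_eigenvectors(R)             # sensor gain matrix (modal matrix of R)
eta = Phi.T @ y                          # modal coordinates obtained from segment outputs
```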
The sensor system can also identify the mode shapes of the structure in real time using the Lagrange polynomial interpolation formula.
An experiment is done with an array of piezoelectric polyvinylidene fluoride (PVDF) film segments on a beam to obtain the segment outputs. The results from the experiment are used to verify a computer simulation routine. Then a series of simulations is run to test the adaptive modal sensing algorithms. Simulation results verify that the sensor gain matrix obtained by the adaptive algorithm transforms the segment outputs into modal coordinates. / Ph. D.
|
69 |
The Role of Innovative Elements in the Patentability of Machine Learning Algorithms. Power, Cheryl Denise, 16 December 2022
Advances in data-driven digital innovations during Industrial Revolution 4.0 are the foundation for this patent discussion. In a shifting technological paradigm, I argue for an approach that considers the broader theoretical perspectives on innovation and the place of the term invention within that perspective. This research could inform the assessment of a patent for Machine Learning algorithms in Artificial Intelligence. For instance, inventions may have elements termed abstract (yet innovative) and not previously within the purview of patent law. Emergent algorithms do not necessarily align with existing patent guidance; rather, algorithms are nuanced, which increases support for a refined approach.
In this thesis, I discuss the term algorithm and how a novel combination of elements, or a cooperating set of essential and non-essential elements, can result in a patentable outcome. For instance, a patentable end can include an algorithm as part of an application, whether it is integrated with a functional physical component such as a computer, whether it includes sophisticated calculations with a tangible end, or whether parameters adjust for speed or utility. I plan to reconsider the term algorithm in my arguments by exploring some challenges to section 27(8) of the Patent Act, “What may not be patented,” including that “no patent shall be granted for any mere scientific principle or abstract theorem.” The role of the algorithm in the proposed invention can be determinative of patent eligibility.
There are three lines of evidence used in this thesis. First, the thesis uses theoretical perspectives in innovation, some close to a century old. These are surprisingly relevant in the digital era. I illustrate the importance of considering these perspectives in innovation when identifying key contributing factors in a patent framework. For instance, I use innovation perspectives, including cluster theory, to inform the development of an approach to patentable subject matter and the obviousness standard in AI software inventions. This approach highlights applications of emerging algorithmic technologies and considers the evolving nature of mathematics beyond the basic algorithm, including its role as part of a physical machine or manufacture, which is important in this emerging technological context.
As part of the second line of evidence, I review how existing Canadian Federal Court and Supreme Court cases inform patent assessments for algorithms found in emerging technologies such as Artificial Intelligence. I explore the historical understanding of patent eligibility in software, professional skills, and business methods, and apply cases that use relevant inventions from a different discipline. As such, I reflect upon the differing judicial perspectives that could influence achieving patent-eligible subject matter in the software space and, by extension, how these decisions would hold in current times. Further to patent eligibility, I review the patentability requirements for novelty, utility, and non-obviousness.
As part of the third line of evidence, I reflect on why I collected the interview data and justify why it contributes to a better understanding of the thesis issues and overall narrative. Next, I provide detail and explain why certain questions formed a part of the interview and how the responses helped to synthesize the respective chapters of the thesis. The questions focus on patent drafting, impressions of the key cases, innovation, and the in-depth expertise of the experts on these topics. Finally, I provide recommendations for how the patent office and the courts could explore areas for further inquiry and action.
|
70 |
DEVELOPMENT OF A GENETIC ALGORITHM APPROACH TO CALIBRATE THE EVPSC MODEL. Ge, Hanqing, January 2016
Magnesium is known as one of the lowest-density metals. With the increasing importance of fuel economy and the need to reduce weight, magnesium has proven to be a very important structural material in the transportation industry. However, the use of magnesium alloys has been limited by their tendency to corrode and to creep at high temperature, and by their higher cost compared to aluminium alloys and steels.
Polycrystal plasticity models such as VPSC and EVPSC have been used to study deformation mechanisms of magnesium alloys. However, current polycrystal plasticity models with slip and twinning involve a large number of material parameters, which may not be uniquely determined. Furthermore, determining material parameters using the traditional trial-and-error approach is very time-consuming. Therefore, a genetic algorithm approach is developed in this thesis to optimize these material parameters.
The genetic algorithm approach is evaluated by studying the large-strain behavior of magnesium alloys under different deformation processes. The material parameters for these models are determined by fitting numerical simulations based on the polycrystal model to the corresponding experimental data. The material parameters are then used to predict other deformation behaviours (stress-strain curves, R values, texture evolution and lattice strains), and the performance is judged by how well the predictions match the actual experimental data. The results show that the genetic algorithm approach works well for determining parameters; it obtains reliable results within a relatively short period of time. / Thesis / Master of Applied Science (MASc)
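A minimal sketch of such a calibration loop, under stated assumptions: the EVPSC code is not available here, so `simulate_stress` is a made-up Voce-type hardening stand-in, and the parameter bounds, operators and settings are illustrative rather than those used in the thesis.

```python
# Plain genetic algorithm that searches for material parameters minimising the
# misfit between a model's predicted stress-strain curve and "experimental" data.
import numpy as np

rng = np.random.default_rng(42)
strain = np.linspace(0.0, 0.15, 50)

def simulate_stress(params, strain):
    """Placeholder for an EVPSC run: a Voce-type hardening curve, NOT the real model."""
    tau0, tau1, theta0 = params
    return tau0 + tau1 * (1.0 - np.exp(-theta0 * strain / max(tau1, 1e-9)))

# "Experimental" curve, faked from known parameters plus noise for the demo.
true_params = np.array([60.0, 90.0, 1500.0])
exp_stress = simulate_stress(true_params, strain) + rng.normal(0, 1.0, strain.size)

def fitness(params):
    return -np.mean((simulate_stress(params, strain) - exp_stress) ** 2)  # higher is better

bounds = np.array([[10.0, 200.0], [10.0, 300.0], [100.0, 5000.0]])
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 3))               # initial population

for generation in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[::-1][:20]]                          # truncation selection
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(0, len(parents), size=2)]
        child = np.where(rng.random(3) < 0.5, a, b)                       # uniform crossover
        child += rng.normal(0, 0.02, 3) * (bounds[:, 1] - bounds[:, 0])   # Gaussian mutation
        children.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([fitness(p) for p in pop])]
print("calibrated parameters:", best)
```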
|