251

Implementation of the Weighted Filtered Backprojection Algorithm in the Dual-Energy Iterative Algorithm DIRA-3D

Tuvesson, Markus January 2021 (has links)
DIRA-3D is an iterative model-based reconstruction method for dual-energy helical CT whose goal is to determine the material composition of the patient from accurate linear attenuation coefficients (LACs). Possible applications include aiding radiation transport and dose calculations in brachytherapy with low-energy photons, and in proton therapy. There was a need to replace the current image reconstruction method, the PI-method, with a weighted filtered backprojection (wFBP) algorithm, since wFBP is the reconstruction method used in Siemens CT scanners. The new DIRA-3D algorithm used the program take for cone-beam projection generation and the FreeCT wFBP algorithm for image reconstruction. Experiments showed that the accuracies of the resulting LACs for the DIRA-3D algorithm using wFBP were comparable to those obtained with the PI-method. The relative LAC errors fell below 0.2% after 10 iterations.
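
To illustrate the filtered backprojection principle underlying wFBP, here is a minimal parallel-beam FBP sketch in Python. It is a simplified stand-in, not the DIRA-3D, take, or Siemens wFBP code: the weighting and helical cone-beam geometry are omitted, and the phantom and rotate-and-sum forward projection are toy assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

def ramp_filter(sinogram):
    """Filter each projection row with an ideal ramp (|frequency|) filter."""
    n_det = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n_det))
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

def backproject(filtered, angles, size):
    """Smear each filtered projection back across the image along its angle."""
    recon = np.zeros((size, size))
    mid = size // 2
    y, x = np.mgrid[:size, :size] - mid            # pixel coordinates, origin at center
    for proj, theta in zip(filtered, angles):
        t = x * np.cos(theta) + y * np.sin(theta)  # detector position of each pixel
        idx = np.clip(np.round(t).astype(int) + mid, 0, size - 1)
        recon += proj[idx]                         # nearest-neighbor interpolation
    return recon * np.pi / (2 * len(angles))       # standard FBP scaling

# Toy demo: forward-project a square phantom by rotate-and-sum, then reconstruct.
size = 64
phantom = np.zeros((size, size))
phantom[24:40, 24:40] = 1.0
angles = np.linspace(0, np.pi, 90, endpoint=False)
sinogram = np.stack([rotate(phantom, np.degrees(th), reshape=False).sum(axis=0)
                     for th in angles])
recon = backproject(ramp_filter(sinogram), angles, size)
```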
252

A Burnside Approach to the Termination of Mohri’s Algorithm for Polynomially Ambiguous Min-Plus-Automata

Kirsten, Daniel 06 February 2019 (has links)
We show that the termination of Mohri's algorithm is decidable for polynomially ambiguous weighted finite automata over the tropical semiring, which gives a partial answer to a question by Mohri [29]. The proof relies on an improvement of the notion of the twins property and a Burnside-type characterization of the finiteness of the set of states produced by Mohri's algorithm.
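
For context, automata over the tropical (min-plus) semiring replace sum-over-runs with min and product-along-a-run with +, so each word is assigned the cheapest accepting run. A minimal evaluation sketch in Python (the two-state automaton below is an invented illustration, not an example from the thesis):

```python
INF = float("inf")

def min_plus_value(word, init, trans, final):
    """Value of `word` in a min-plus automaton: the cheapest accepting run.

    init:  {state: initial weight}
    trans: {(state, symbol): [(next_state, weight), ...]}
    final: {state: final weight}
    Semiring: min replaces sum-over-runs, + replaces product-along-a-run.
    """
    costs = dict(init)                            # cheapest cost to reach each state
    for a in word:
        nxt = {}
        for q, c in costs.items():
            for q2, w in trans.get((q, a), []):
                nxt[q2] = min(nxt.get(q2, INF), c + w)
        costs = nxt
    return min((c + final[q] for q, c in costs.items() if q in final), default=INF)

# Invented two-state automaton: one branch counts 'a's, the other counts 'b's,
# so the automaton's value on a word is min(#a, #b).
init = {"qa": 0, "qb": 0}
trans = {("qa", "a"): [("qa", 1)], ("qa", "b"): [("qa", 0)],
         ("qb", "a"): [("qb", 0)], ("qb", "b"): [("qb", 1)]}
final = {"qa": 0, "qb": 0}
assert min_plus_value("aabab", init, trans, final) == 2   # 3 a's, 2 b's -> 2
```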
253

Contributions to formalisms for the specification and verification of quantitative properties

Mazzocchi, Nicolas 09 October 2020 (has links) (PDF)
Reactive systems are computer systems that maintain continuous interaction with the environment in which they operate. Such systems are nowadays part of our daily life: think about common yet critical applications like engine control units in automotive, aircraft autopilots, medical assistive devices, or micro-controllers in mass production. Clearly, any flaw in such critical systems can have catastrophic consequences. However, they exhibit several characteristics, like resource limitation constraints, real-time responsiveness, and concurrency, that make them difficult to implement correctly. To ensure the design of reactive systems that are dependable, safe, and efficient, researchers and industry have advocated the use of so-called formal methods, which rely on mathematical models to precisely express and analyze the behaviors of these systems.

Automata theory provides fundamental notions such as languages of program executions, for which the membership, emptiness, inclusion, and equivalence problems allow us to specify and verify properties of reactive systems. However, these Boolean notions only yield the correctness of the system against a given property, which sometimes falls short of a satisfactory answer when performances are prone to errors. As a consequence, a common engineering approach is not just to verify that a system satisfies a property, but whether it does so within a degree of quality and robustness. This thesis investigates the expressibility, recognition, and robustness of quantitative properties of program executions.

• Firstly, we provide a survey of languages definable by regular automata with Presburger-definable constraints on letter occurrences, for which we provide descriptive complexity results. Inspired by this model, we introduce an expression formalism that mixes formulas and automata to define quantitative languages, i.e., functions from words to integers. These expressions use Presburger arithmetic to combine values given by regular automata weighted by integers. We show that quantitative versions of the classical decision problems, such as emptiness, universality, inclusion, and equivalence, are computable. Then we investigate the extension of our expressions with a "Kleene star" style operator. This allows us to iterate an expression over smaller fragments of the input word, and to combine the results by taking their iterated sum. The decision problems quickly turn out not to be computable, but we introduce a new subclass, based on a structural restriction, for which algorithmic procedures exist.

• Secondly, we consider a notion of robustness that places a distance on words, thus defining neighborhoods of program executions. A language is said to be robust if membership cannot differ for two "close" words, and this leads to robust versions of all the classical decision problems. Our contribution is to study robustness verification problems in the context of weighted transducers with different measures (sum, mean-payoff, and discounted sum). Here, the value assigned by the transducer to rewriting one word into another denotes the cost of the noise that this rewriting induces. For each measure, we provide algorithms that determine whether a language is robust up to a given threshold of error, and we solve the synthesis of the robust kernel for the sum measure. Furthermore, we provide case studies, including modeling human control failures and approximate recognition of type-1 diabetes using robust detection of blood sugar level swings.

• Finally, we observe that algorithms for structural pattern recognition in automata often share similar techniques. So, we introduce a generic logic to express structural properties of automata with outputs in some monoid; in particular, the set of predicates talking about the output values is parametric. Then, we consider three particular automata models (regular automata, transducers, and automata weighted by integers) and instantiate the generic logic for each of them. We give tight complexity results for the three logics with respect to the pattern recognition problem. We study the expressiveness of our logics by expressing classical structural patterns characterizing, for instance, unambiguity and polynomial ambiguity in the case of regular automata, and determinizability and finite-valuedness in the case of transducers and automata weighted by integers. As a consequence of our complexity results, we directly obtain that these classical properties can be decided in logarithmic space. / Doctorat en Sciences / info:eu-repo/semantics/nonPublished
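
To make the robustness notion concrete: under a distance on words, a word is robustly classified if all of its close neighbors receive the same membership verdict. A toy Python sketch using Hamming distance 1 (the thesis defines distances via weighted transducers with sum, mean-payoff, and discounted-sum measures; the language below is an invented example):

```python
def hamming_neighbors(word, alphabet):
    """All words at Hamming distance exactly 1 from `word`."""
    for i, c in enumerate(word):
        for a in alphabet:
            if a != c:
                yield word[:i] + a + word[i + 1:]

def robustly_classified(word, lang, alphabet):
    """True if every distance-1 neighbor gets the same membership verdict."""
    verdict = lang(word)
    return all(lang(v) == verdict for v in hamming_neighbors(word, alphabet))

# Invented regular language: words over {a, b} with an even number of 'a's.
even_a = lambda w: w.count("a") % 2 == 0
print(robustly_classified("abab", even_a, "ab"))   # False: one flip changes membership
```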
254

APPROXIMATION ALGORITHMS FOR MAXIMUM VERTEX-WEIGHTED MATCHING

Ahmed I Al Herz (8072036) 03 December 2019 (has links)
We consider the maximum vertex-weighted matching problem (MVM), in which non-negative weights are assigned to the vertices of a graph, and the weight of a matching is the sum of the weights of the matched vertices. Vertex-weighted matchings arise in many applications, including internet advertising, facility scheduling, constraint satisfaction, the design of network switches, and computation of sparse bases for the null space or the column space of a matrix. Let m be the number of edges, n the number of vertices, and D the maximum degree of a vertex in the graph. We design two exact algorithms for the MVM problem with time complexities of O(mn) and O(Dmn). The new exact algorithms use a maximum cardinality matching as an initial matching, after which the weight of the matching is increased using weight-increasing paths.

Although MVM problems can be solved exactly in polynomial time, exact MVM algorithms are still slow in practice for large graphs with millions and even billions of edges. Hence we investigate several approximation algorithms for MVM in this thesis. First we show that a maximum vertex-weighted matching can be approximated within a ratio arbitrarily close to one, namely k/(k+1), where k is related to the length of augmenting or weight-increasing paths searched by the algorithm. We identify two main approaches for designing approximation algorithms for MVM. The first approach is direct: vertices are sorted in non-increasing order of weights, and then the algorithm searches for augmenting paths of restricted length that reach a heaviest vertex. (In this approach each vertex is processed once.) The second approach repeatedly searches for augmenting paths and weight-increasing paths, again of restricted length, until none can be found. In this second, iterative approach, a vertex may need to be processed multiple times. We design two approximation algorithms based on the direct approach with approximation ratios of 1/2 and 2/3. The time complexity of the 1/2-approximation algorithm is O(m + n log n), and that of the 2/3-approximation algorithm is O(m log D). Employing the second approach, we design 1/2- and 2/3-approximation algorithms for MVM with time complexities of O(Dm) and O(D^2 m), respectively. We show that the iterative algorithm can be generalized to find a k/(k+1)-approximate MVM with a time complexity of O(D^k m). In addition, we design parallel 1/2- and 2/3-approximation algorithms for a shared-memory programming model, and introduce a new technique for locking augmenting paths to avoid deadlock and related problems.

MVM problems may be solved using algorithms for the maximum edge-weighted matching (MEM) problem by assigning to each edge a weight equal to the sum of the vertex weights on its endpoints. However, our results show that this is one way to generate MEM problems that are difficult to solve. On such problems, exact MEM algorithms may require run times that are a factor of a thousand or more larger than the time of an exact MVM algorithm. Our results show the competitiveness of the new exact algorithms by demonstrating that they outperform exact MEM algorithms. Specifically, our fastest exact algorithm runs faster than the fastest MEM implementation by factors of 37 and 18 in geometric mean, using two different sets of weights on our test problems. In some instances, the factor can be higher than 500. Moreover, extensive experimental results show that the MVM approximation algorithm outperforms an MEM approximation algorithm with the same approximation ratio, with respect to matching weight and run time. Indeed, our results show that the MVM approximation algorithm outperforms the corresponding MEM algorithm with respect to these metrics in both serial and parallel settings.
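
The direct approach can be illustrated, in highly simplified form, by a greedy sketch that processes vertices in non-increasing weight order and matches each to its heaviest free neighbor. This is an illustrative stand-in only: the thesis' 1/2- and 2/3-approximation algorithms search restricted-length augmenting paths and carry proven guarantees, which this toy version does not claim.

```python
def greedy_vertex_weighted_matching(weights, adj):
    """Simplified direct-approach sketch: visit vertices in non-increasing
    weight order; match each unmatched vertex to its heaviest free neighbor.
    weights: {vertex: weight}, adj: {vertex: set of neighbors}."""
    mate = {}
    for v in sorted(weights, key=weights.get, reverse=True):
        if v in mate:
            continue
        free = [u for u in adj[v] if u not in mate]
        if free:
            u = max(free, key=weights.get)   # prefer the heaviest free neighbor
            mate[v], mate[u] = u, v
    return mate

# Toy graph: path a - b - c - d with vertex weights.
weights = {"a": 4, "b": 5, "c": 3, "d": 6}
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
mate = greedy_vertex_weighted_matching(weights, adj)
print(mate, sum(weights[v] for v in mate))   # matches d-c and b-a: total weight 18
```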
255

A multiobjective optimization model for optimal placement of solar collectors

Essien, Mmekutmfon Sunday 21 June 2013 (has links)
The aim of this research is to formulate and solve a multi-objective optimization problem for the optimal placement of multiple rows and multiple columns of fixed flat-plate solar collectors in a field, so as to maximize the energy collected from the solar collectors and minimize the investment in terms of field and collector cost. The resulting multi-objective optimization problem is solved using genetic algorithm techniques. It is necessary to consider multiple columns of collectors, as this can result in obtaining higher amounts of energy from these collectors once costs and the maintenance or replacement of damaged parts are taken into account. The formulation of such a problem depends on several factors, which include shading of collectors, inclination of collectors, distance between the collectors, latitude of the location, and the global solar radiation (direct beam and diffuse components). This leads to a multi-objective optimization problem. These kinds of problems arise often in nature and can be difficult to solve; however, evolutionary algorithm techniques have proven effective in solving them. Optimizing the distance between the collector rows, the distance between the collector columns, and the collector inclination angle can increase the amount of energy collected from a field of solar collectors, thereby maximizing profit and improving the return on investment. In this research, the multi-objective optimization problem is solved using two optimization approaches based on genetic algorithms. The first approach is the weighted sum approach, where the multi-objective problem is simplified into a single-objective optimization problem, while the second approach finds the Pareto front. / Dissertation (MEng)--University of Pretoria, 2012. / Electrical, Electronic and Computer Engineering / MEng / Unrestricted
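
The weighted sum approach can be sketched as follows: each candidate layout is scored by a single scalar that combines the two objectives, and an optimizer searches for the best score. In the toy Python sketch below, the energy and cost models, the weights, and the use of random search in place of a full genetic algorithm are all illustrative assumptions, not the thesis' models.

```python
import random

def weighted_sum_fitness(x, w_energy=0.7, w_cost=0.3):
    """Scalarize the two objectives into one fitness value to maximize.
    x = (row_spacing_m, col_spacing_m, tilt_deg); both models are toy stand-ins."""
    row, col, tilt = x
    energy = (1 - abs(tilt - 30) / 90) * min(row, 3.0) / 3.0   # peaks near a 30 deg tilt
    cost = 0.1 * row + 0.05 * col                              # field cost grows with spacing
    return w_energy * energy - w_cost * cost

def random_layout():
    return (random.uniform(0.5, 5), random.uniform(0.5, 5), random.uniform(0, 90))

# Random search stands in for the GA loop here; a real GA would evolve a
# population with selection, crossover, and mutation against this same fitness.
best = max((random_layout() for _ in range(10_000)), key=weighted_sum_fitness)
print("best (row spacing, col spacing, tilt):", tuple(round(v, 2) for v in best))
```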
256

A study of Locations for Mobile Hospitals in Dalarna

Giriraj, Samhita January 2020 (has links)
Due to growing population over the past decades, settlements are scattered in sparse as well as dense clusters across Dalarna County. However, irrespective of any physical, social or economic conditions, free public health care must be available at a minimum and equal distance of travel for all citizens of a region. In the current scenario in Dalarna, around 16% of the population travels beyond 10 km to reach their nearest medical facility. The aim of this study is to suggest the most favorable locations for mobile hospital services across Dalarna County, based on spatial analysis of accessibility and population coverage and, importantly, in a way that travel distance is minimized and equal for all. This study makes use of multi-criteria analysis methods. The problem of mobile hospital site selection is broken down into criteria, and the Analytic Hierarchy Process (AHP) is used to evaluate weights for each criterion. Then, a weighted overlay yields regions with score-based suitability for a mobile hospital. A location-allocation analysis based on maximum population coverage then generates a proposed facility and demand coverage output. The results show an increase in population coverage while meeting the requirements of the criteria set out in the aim.
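
A common way to derive AHP criterion weights is the geometric-mean-of-rows approximation of the principal eigenvector of the pairwise comparison matrix. A minimal Python sketch (the criteria and pairwise judgments below are invented placeholders, not those used in the study):

```python
import numpy as np

def ahp_weights(pairwise):
    """Approximate AHP priority vector via the geometric-mean-of-rows method."""
    A = np.asarray(pairwise, dtype=float)
    gm = A.prod(axis=1) ** (1.0 / A.shape[1])   # geometric mean of each row
    return gm / gm.sum()                        # normalize weights to sum to 1

# Hypothetical 3-criterion comparison (e.g., road access vs. population density
# vs. distance to existing hospitals) on Saaty's 1-9 scale; the lower triangle
# holds the reciprocals of the upper triangle.
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
print(np.round(ahp_weights(A), 3))   # roughly [0.648, 0.230, 0.122]
```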
257

The Validity of the Weighted Application Blank as a Predictor of Tenure in the Nursing Home Industry: A Test of Two Models

Kettlitz, Gary Russell 05 1900 (has links)
The first purpose of this study was to develop and validate a quantitative selection tool, the weighted application blank, tailored to the nursing home industry. The second purpose was to determine whether data scaling and increased statistical rigor can reduce the frequency of type I and type II errors in the weighted application blank.
258

Information Retrieval by Identification of Signature Terms in Clusters

Muppalla, Sesha Sai Krishna Koundinya 24 May 2022 (has links)
No description available.
259

Feasibility study for expansion of the existing intermodal terminal in Jordbro.

Christidi, Ioulia Christina January 2014 (has links)
Freight transportation changes in the Stockholm region are leading municipalities to seek ways of maintaining and strengthening their current logistics role in the area. These changes are the operation of the future freight port in Norvik near Nynäshamn, the expansion of Stockholm leading to new freight transportation networks, and the extension of the double-track line south of Västerhaninge station. Haninge municipality wants to keep the interest of existing companies in its land alive, and perhaps increase its attractiveness, by constructing an extension of the terminal in Jordbro. But before proceeding to that step, the municipality wants to investigate whether companies are aware of these changes and, furthermore, whether they have plans for changing their current goods transportation patterns. It is also interested in finding out which factors companies consider most important when making decisions about their goods transportation plans. That way the municipality knows what is important to companies and can adjust the supply of infrastructure to their demands. The main method used for data collection is the design and administration of a questionnaire; for data analysis it is multi-criteria analysis (MCA). The questionnaire also includes questions about the nature of each company and its current modes of goods transportation. Although the number of respondents is quite low, some general conclusions could be drawn. Two competing alternatives are considered, and multi-criteria analysis leads to the selection of the more suitable one. There are several limitations and assumptions, which can be overcome by further future research.
260

Modeling cross-border financial flows using a network theoretic approach

Sekgoka, Chaka Patrick 18 February 2021 (has links)
Criminal networks exploit vulnerabilities in the global financial system, using it as a conduit to launder criminal proceeds. Law enforcement agencies, financial institutions, and regulatory organizations often scrutinize voluminous financial records for suspicious activities and criminal conduct as part of anti-money laundering investigations. However, such studies are narrowly focused on incidents and triggered by tip-offs rather than data mining insights. This research models cross-border financial flows using a network theoretic approach and proposes a symmetric-key encryption algorithm to preserve information privacy in multi-dimensional data sets. The newly developed tools will enable regulatory organizations, financial institutions, and law enforcement agencies to identify suspicious activity and criminal conduct in cross-border financial transactions. Anti-money laundering, which comprises the laws, regulations, and procedures to combat money laundering, requires financial institutions to verify and identify their customers in various circumstances and to monitor transactions for suspicious activity. Instituting anti-money laundering laws and regulations in a country carries the benefit of creating a data-rich environment, thereby facilitating non-classical analytical strategies and tools. Graph theory offers an elegant way of representing cross-border payments/receipts between resident and non-resident parties (nodes), with links representing the parties' transactions. The network representations provide potent data mining tools, facilitating a better understanding of transactional patterns that may constitute suspicious transactions and criminal conduct. Using network science to detect anomalies in large and complex data sets is fast becoming more important, and more interesting, than merely learning about their structure. This research leverages advanced technology to construct and visualize the cross-border financial flows' network structure, using a directed and dual-weighted bipartite graph. Furthermore, the research develops a centrality measure for the proposed cross-border financial flows network, using a method based on matrix multiplication, to answer the question, "Which resident/non-resident nodes are the most important in the cross-border financial flows network?" The answer to this question provides data mining insights about the network structure. The proposed network structure, centrality measure, and characterization using degree distributions can enable financial institutions and regulatory organizations to identify dominant nodes in complex multi-dimensional data sets. Most importantly, the results showed that the research provides transaction monitoring capabilities that allow the setting of customer segmentation criteria, complementing the built-in transaction-specific trigger methods for detecting suspicious activity transactions. / Thesis (PhD)--University of Pretoria, 2021. / Banking Sector Education and Training Authority (BANKSETA) / UP Postgraduate Bursary / Industrial and Systems Engineering / PhD / Unrestricted
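
The idea of a matrix-multiplication-based centrality on a bipartite flow network can be sketched as follows. The flow matrix and the HITS-like mutual-reinforcement scores below are invented illustrations, not the thesis' measure, and the dual edge weights are simplified away to a single value weight.

```python
import numpy as np

# Invented flow matrix: rows = resident parties, columns = non-resident parties;
# F[i, j] is the total value flowing from resident i to non-resident j.
F = np.array([[100.,  0., 40.],
              [ 10., 80.,  0.],
              [  0.,  5., 60.]])

# HITS-like mutual reinforcement via repeated matrix multiplication: a resident
# is important if it sends large flows to important non-residents, and a
# non-resident is important if it receives large flows from important residents.
res = np.ones(F.shape[0])
non = np.ones(F.shape[1])
for _ in range(50):                # iterate to a (near) fixed point
    non = F.T @ res
    non /= np.linalg.norm(non)
    res = F @ non
    res /= np.linalg.norm(res)
print("resident importance:    ", np.round(res, 3))
print("non-resident importance:", np.round(non, 3))
```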
