801

Algorithms for Matching Problems Under Data Accessibility Constraints

Hanguir, Oussama January 2022 (has links)
Traditionally, optimization problems in operations research have been studied in a complete-information setting: the input/data is collected and made fully accessible to the user before an algorithm is run to generate the optimal output. However, the growing magnitude of treated data and the need to make immediate decisions are increasingly shifting the focus to optimization under incomplete-information settings. The input can be partially inaccessible to the user because it is generated continuously, contains some uncertainty, is too large to be stored on a single machine, or has a hidden structure that is costly to unveil. Many problems providing a context for studying algorithms when the input is not entirely accessible emanate from the field of matching theory, where the objective is to pair clients and servers or, more generally, to group clients in disjoint sets. Examples include ride-sharing and food delivery platforms, internet advertising, combinatorial auctions, and online gaming. In this thesis, we study three novel problems from the theory of matchings, corresponding to situations where the input is hidden, spread across multiple processors, or revealed in two stages with some uncertainty. Chapter 1 presents the necessary definitions and terminology for the concepts and problems we cover. In Chapter 2, we consider a two-stage robust optimization framework that captures matching problems where one side of the input includes some future demand uncertainty; we propose two models to capture the demand uncertainty: explicit and implicit scenarios. In Chapters 3 and 4, we turn our attention to matchings in hypergraphs. In Chapter 3, we consider the problem of learning hidden hypergraph matchings through membership queries. Finally, in Chapter 4, we study the problem of finding matchings in uniform hypergraphs in the massively parallel computation (MPC) model, where the data (e.g., vertices and edges) is distributed across machines and, in each round, a machine performs local computation on its fragment of the data and then sends messages to other machines for the next round.
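As an illustrative aside (not taken from the thesis), the simplest baseline behind many of these settings is the one-pass greedy algorithm, which always produces a maximal matching and is a natural starting point for streaming and MPC variants:

```python
def greedy_matching(edges):
    """One-pass greedy maximal matching: keep an edge iff neither of its
    endpoints has been matched yet (a classic 1/2-approximation to the
    maximum matching)."""
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.add(u)
            matched.add(v)
    return matching

edges = [(1, 'a'), (1, 'b'), (2, 'a'), (3, 'c')]
print(greedy_matching(edges))  # → [(1, 'a'), (3, 'c')]
```

Because the scan needs only one pass over the edges, the same idea adapts to settings where edges arrive as a stream or sit on different machines.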
802

A MULTI-AGENT BASED APPROACH FOR SOLVING THE REDUNDANCY ALLOCATION PROBLEM

Li, Zhuo January 2011 (has links)
The Redundancy Allocation Problem (RAP) is a well-known mathematical problem for modeling series-parallel systems. It is a combinatorial optimization problem that focuses on determining an optimal assignment of components in a system design. Due to the diverse possible selections of components, the RAP has been proven NP-hard. Therefore, many algorithms, especially heuristics, have been proposed and implemented over the past several decades, aiming to provide innovative methods or better solutions. In recent years, multi-agent systems (MAS) have been proposed for modeling complex systems and solving large-scale problems. MAS is a relatively new programming concept with capabilities such as self-organization, self-adaptation, and autonomous administration. These features of MAS inspire us to look at the RAP from another point of view: an RAP can be divided into multiple smaller problems that are solved by multiple agents, which can collaboratively find optimal RAP solutions quickly and efficiently. In this research, we propose to solve the RAP using a MAS. This approach, to the best of our knowledge, has not been proposed before, although multi-agent approaches have been widely used for solving other large and complex nonlinear problems. To demonstrate it, we analyzed and evaluated four benchmark RAP instances from the literature. The results show the MAS approach to be an effective and extendable method for solving RAP instances. / Electrical and Computer Engineering
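To make the problem concrete, here is a hedged toy sketch of a RAP instance, solved by brute force rather than by the thesis's multi-agent approach; the reliabilities, costs, and budget are made-up illustrative numbers:

```python
from itertools import product

def system_reliability(alloc, rel):
    # A subsystem with k parallel copies of a component of reliability r
    # fails only if all k copies fail; the series system needs every
    # subsystem to work.
    out = 1.0
    for k, r in zip(alloc, rel):
        out *= 1 - (1 - r) ** k
    return out

def solve_rap(rel, cost, budget, max_k=3):
    # Brute-force search over redundancy levels, feasible only for tiny
    # instances; real RAPs need heuristics because the space explodes.
    best = (0.0, None)
    for alloc in product(range(1, max_k + 1), repeat=len(rel)):
        if sum(k * c for k, c in zip(alloc, cost)) <= budget:
            best = max(best, (system_reliability(alloc, rel), alloc))
    return best

best_r, best_alloc = solve_rap(rel=[0.9, 0.8], cost=[2, 3], budget=10)
print(best_alloc)  # → (2, 2): duplicating both subsystems fits the budget
```

The exponential number of allocations enumerated here is exactly why the RAP is attacked with heuristics, and why decomposing it into per-subsystem subproblems for separate agents is plausible.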
803

Synthesis and Development of Antibiotic Adjuvants to Restore Antimicrobial Activity Against Resistant Gram-Negative Pathogens / Antibiotic Adjuvants for Resistant Gram-Negative Pathogens

Colden Leung, Madelaine 18 October 2019 (has links)
Widespread antimicrobial resistance, particularly in Gram-negative pathogens, is a serious threat facing the global community. Aminoglycosides are inactivated by enzymes such as aminoglycoside N-acetyltransferase-3 (AAC(3)) and O-nucleotidyltransferase-2” (ANT(2”)), while the New Delhi metallo-β-lactamase-1 (NDM-1) degrades carbapenems. Inhibition of these enzymes should result in bacteria becoming once again susceptible to aminoglycosides and carbapenems. This thesis describes the development of inhibitors of these enzymes, in an effort to rescue the utility of the aminoglycoside and carbapenem drug classes through adjuvant therapy. High-throughput screening of protein kinase libraries identified two AAC(3)-Ia inhibitors with a common 3-benzylidene-2-indolinone core. New methods for purification of AAC(3)-Ia and monitoring its activity were developed. A chemical library was built around this scaffold and assessed for structure-activity relationships (SAR). It was found that the initial hit, (Z)-methyl 3-(3,5-dibromo-4-hydroxybenzylidene)-2-oxoindoline-5-carboxylate, was the most active against AAC(3)-Ia, and alterations to either the 3,5-dibromo-4-hydroxybenzyl warhead or the methyl ester substituent resulted in a decrease in activity. Previous whole-cell screening had identified two protein kinase inhibitors with a biphenyl isonicotinamide scaffold as inhibitors of ANT(2”)-Ia. A convergent parallel synthesis was developed, involving Suzuki and amide couplings and protecting-group strategies. This methodology was used to assemble a focused chemical library for SAR analysis. Stepwise removal of extraneous complexity from the initial hits yielded a selective ANT(2”)-Ia inhibitor which demonstrated in vivo synergy with gentamicin. Aspergillomarasmine A (AMA) is a natural product with activity against NDM-1. Several derivatives of AMA have been synthesized to assess SAR, but the specific contributions of individual carboxylic acids have yet to be determined due to difficulties accessing position 6.
A synthetic approach was developed via reductive amination using Garner's aldehyde as a serine equivalent. This strategy was used to synthesize an AMA analog with a hydroxyl group in place of the carboxylic acid at position 6. Additionally, an imine-promoted isomeric resolution was discovered. / Thesis / Doctor of Philosophy (PhD) / Antibiotics, such as aminoglycosides and carbapenems, are losing their effectiveness against bacteria responsible for deadly diseases. This is often due to resistance enzymes such as aminoglycoside N-acetyltransferase-3 (AAC(3)) and O-nucleotidyltransferase-2” (ANT(2”)), which inactivate aminoglycosides, and the New Delhi metallo-β-lactamase-1 (NDM-1), which destroys carbapenems. If these enzymes are blocked, the antibiotics should work against bacteria again. To develop compounds that will inhibit these enzymes, sets of similar compounds are made and tested. Patterns of which chemical groups improve or worsen inhibitory activity are noted and used to make another set of compounds in an iterative process. This thesis describes the development of inhibitors of AAC(3)-Ia and ANT(2”)-Ia by this process. Additionally, a specific compound was made to test whether a particular chemical group has a role in inhibiting NDM-1.
804

Improved discrete cuckoo search for the resource-constrained project scheduling problem

Bibiks, Kirils, Hu, Yim Fun, Li, Jian-Ping, Pillai, Prashant, Smith, A. 03 May 2018 (has links)
Yes / An Improved Discrete Cuckoo Search (IDCS) is proposed in this paper to solve resource-constrained project scheduling problems (RCPSPs). The original Cuckoo Search (CS) was inspired by the breeding behaviour of some cuckoo species and was designed specifically for continuous optimisation problems, in which it has been demonstrated to be effective. The proposed IDCS aims to adapt the original CS to discrete scheduling problems by reinterpreting its key elements: the solution representation scheme, the Lévy flight, and the solution improvement operators. An event-list solution representation scheme is used to represent projects, and a novel event movement operator and an event recombination operator have been developed to improve both the quality of the results and the efficiency of the algorithm. Numerical results demonstrate that the proposed IDCS achieves a competitive level of performance compared to other state-of-the-art metaheuristics on a set of benchmark instances from the well-known PSPLIB library, especially on complex instances. / Partially funded by the Innovate UK project HARNET – Harmonised Antennas, Radios and Networks under contract no. 100004607.
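The event-list idea can be sketched in a few lines; the operator below is a hypothetical simplification of the paper's event movement operator, included only to illustrate the representation, not the published algorithm:

```python
import random

def event_move(events, rng=random):
    # Represent a schedule as an ordered list of "events" (groups of
    # activities that start together); a neighbouring schedule is made by
    # removing one event and reinserting it at a random position.
    events = list(events)  # copy so the original schedule is untouched
    i = rng.randrange(len(events))
    ev = events.pop(i)
    events.insert(rng.randrange(len(events) + 1), ev)
    return events

schedule = [['A'], ['B', 'C'], ['D']]
print(event_move(schedule, random.Random(0)))
```

A real implementation would also repair precedence and resource-feasibility after the move; that machinery is omitted here.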
805

A comparative study of children enrolled in combination classes and non-combination classes in Fairfax County, Virginia public schools

Spratt, Brenda Roberts January 1986 (has links)
This study compares the scholastic achievement of 2,811 students enrolled in Fairfax County, Virginia, Public Schools for the 1983-1984 school year. Scholastic achievement of an experimental group of 1,068 students enrolled in combination or split-grade classes is compared with that of a control group of 1,743 students enrolled in regular graded classes. Five research questions were developed: three related directly to grade-level scholastic achievement, comparing test results for combination and regular classes, and two examined whether the criteria principals used to select teachers and students for placement in combination classes had a significant effect. / Ed. D.
806

A Convergence Analysis of Generalized Hill Climbing Algorithms

Sullivan, Kelly Ann 21 April 1999 (has links)
Generalized hill climbing (GHC) algorithms provide a unifying framework for describing several discrete optimization problem local search heuristics, including simulated annealing and tabu search. A necessary and a sufficient convergence condition for GHC algorithms are presented. The convergence conditions presented in this dissertation are based upon a new iteration classification scheme for GHC algorithms. The convergence theory for particular formulations of GHC algorithms is presented and the implications discussed. Examples are provided to illustrate the relationship between the new convergence conditions and previously existing convergence conditions in the literature. The contributions of the necessary and the sufficient convergence conditions for GHC algorithms are discussed and future research endeavors are suggested. / Ph. D.
807

Synthesis and Biological Evaluation of Paclitaxel Analogs

Baloglu, Erkan 24 May 2001 (has links)
The complex natural product paclitaxel (Taxol®), first isolated from Taxus brevifolia, is a member of a large family of taxane diterpenoids. Paclitaxel is extensively used for the treatment of solid tumors, particularly those of the breast and ovaries. In order to obtain additional information about the mechanism of action of paclitaxel and the environment of the paclitaxel-binding site, several fluorescent analogs of paclitaxel were synthesized and their biological activities evaluated. To investigate possible synergistic effects, concurrent modifications at selected positions were performed and their biological activities studied. / Ph. D.
808

Assessing the Finite-Time Performance of Local Search Algorithms

Henderson, Darrall 18 April 2001 (has links)
Identifying a globally optimal solution for an intractable discrete optimization problem is often cost prohibitive. Therefore, solutions that are within a predetermined threshold are often acceptable in practice. This dissertation introduces the concept of B-acceptable solutions where B is a predetermined threshold for the objective function value. It is difficult to assess a priori the effectiveness of local search algorithms, which makes the process of choosing parameters to improve their performance difficult. This dissertation introduces the B-acceptable solution probability in terms of B-acceptable solutions as a finite-time performance measure for local search algorithms. The B-acceptable solution probability reflects how effectively an algorithm has performed to date and how effectively an algorithm can be expected to perform in the future. The B-acceptable solution probability is also used to obtain necessary asymptotic convergence (with probability one) conditions. Upper and lower bounds for the B-acceptable solution probability are presented. These expressions assume particularly simple forms when applied to specific local search strategies such as Monte Carlo search and threshold accepting. Moreover, these expressions provide guidelines on how to manage the execution of local search algorithm runs. Computational experiments are reported to estimate the probability of reaching a B-acceptable solution for a fixed number of iterations. Logistic regression is applied as a tool to estimate the probability of reaching a B-acceptable solution for values of B close to the objective function value of a globally optimal solution as well as to estimate this objective function value. Computational experiments are reported with logistic regression for pure local search, simulated annealing and threshold accepting applied to instances of the TSP with known optimal solutions. / Ph. D.
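The finite-time measure can be illustrated empirically; the sketch below is a toy example (not the dissertation's bounds or its logistic-regression procedure) that estimates the B-acceptable solution probability as the fraction of independent pure-local-search runs finishing within the threshold B, on a made-up two-basin objective:

```python
import random

def local_search(cost, x0, iters, rng):
    # Pure local search: accept only strictly improving unit moves.
    x = x0
    for _ in range(iters):
        y = x + rng.choice([-1, 1])
        if cost(y) < cost(x):
            x = y
    return x

def b_acceptable_probability(cost, B, runs, iters, rng):
    # Fraction of independent runs whose final objective value is <= B.
    hits = sum(
        cost(local_search(cost, rng.randrange(-20, 21), iters, rng)) <= B
        for _ in range(runs)
    )
    return hits / runs

# Two-basin toy objective: global minimum 0 at x = 0, local minimum 2 at x = 10.
cost = lambda x: abs(x) if x <= 0 else abs(x - 10) + 2
p = b_acceptable_probability(cost, B=1, runs=200, iters=50, rng=random.Random(0))
print(p)  # roughly the fraction of starts that descend into the global basin
```

Runs that start in the wrong basin get trapped at the local minimum (value 2 > B), so the estimate stays strictly below one; this is exactly the kind of quantity the dissertation bounds analytically.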
809

Generalized hill climbing algorithms for discrete optimization problems

Johnson, Alan W. 06 June 2008 (has links)
Generalized hill climbing (GHC) algorithms are introduced as a tool to address difficult discrete optimization problems. Particular formulations of GHC algorithms include simulated annealing (SA), local search, and threshold accepting (TA), among others. A proof of convergence of GHC algorithms is presented that relaxes the sufficient conditions for the most general proof of convergence for stochastic search algorithms in the literature (Anily and Federgruen [1987]). Proofs of convergence for SA are based on the concept that deteriorating (hill climbing) transitions between neighboring solutions are accepted by comparing a deterministic function of both the solution change cost and a temperature parameter to a uniform (0,1) random variable. GHC algorithms represent a more general model, whereby deteriorating moves are accepted according to a general random variable. Computational results are reported that illustrate relationships between the GHC algorithm's finite-time performance on three problems and the general random variable formulations used. The dissertation concludes with suggestions for further research. / Ph. D.
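The GHC template described above can be sketched as follows; the cooling schedule and toy objective are illustrative assumptions, not taken from the dissertation:

```python
import math
import random

def ghc(x0, neighbor, cost, accept_rv, iters, rng):
    x = best = x0
    for k in range(iters):
        y = neighbor(x, rng)
        d = cost(y) - cost(x)
        # Improving moves are always taken; a deteriorating move of size d
        # is taken when the acceptance random variable R_k is at least d.
        if d <= 0 or accept_rv(k, rng) >= d:
            x = y
            if cost(x) < cost(best):
                best = x
    return best

cost = lambda x: (x - 7) ** 2
neighbor = lambda x, rng: x + rng.choice([-1, 1])
# Choosing R_k = -T_k * ln(U), with U ~ Uniform(0,1) and temperature
# T_k, recovers simulated annealing's exp(-d / T_k) acceptance rule;
# a constant R_k recovers threshold accepting.
sa_rv = lambda k, rng: -(10.0 / (k + 1)) * math.log(rng.random())
print(ghc(0, neighbor, cost, sa_rv, 500, random.Random(1)))
```

The point of the template is that only `accept_rv` changes between SA, TA, and plain local search (where `accept_rv` is identically zero); the convergence conditions are stated in terms of that random variable.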
810

Constructing Covering Arrays using Parallel Computing and Grid Computing

Avila George, Himer 10 September 2012 (has links)
A good strategy to test a software component involves the generation of the whole set of cases that participate in its operation. While testing only individual values may not be enough, exhaustive testing of all possible combinations is not always feasible. An alternative technique to accomplish this goal is called combinatorial testing. Combinatorial testing is a method that can reduce cost and increase the effectiveness of software testing for many applications. It is based on constructing functional test-suites of economical size, which provide coverage of the most prevalent configurations. Covering arrays are combinatorial objects that have been applied to functional testing of software components. The use of covering arrays allows all the interactions of a given size among the input parameters to be tested using the minimum number of test cases. For software testing, the fundamental problem is finding a covering array with the minimum possible number of rows, thus reducing the number of tests, the cost, and the time expended on the software testing process. Because of the importance of constructing (near) optimal covering arrays, much research has been carried out in developing effective methods for constructing them. Several methods have been reported for constructing these combinatorial models, among them: (1) algebraic methods, (2) recursive methods, (3) greedy methods, and (4) metaheuristic methods. Among the metaheuristics, simulated annealing in particular has provided the most accurate results in several instances to date. Simulated annealing is a general-purpose stochastic optimization method that has proved to be an effective tool for approximating globally optimal solutions to many optimization problems. However, one of its major drawbacks is the time it requires to obtain good solutions.
In this thesis, we propose an improved simulated annealing algorithm. / Avila George, H. (2012). Constructing Covering Arrays using Parallel Computing and Grid Computing [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/17027
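To illustrate the object being constructed (not the thesis's annealing algorithm itself), the check below verifies the strength-2 coverage property on a classical 9-row covering array for four ternary parameters:

```python
from itertools import combinations

def covers_all_pairs(rows, v):
    # Strength-2 coverage: for every pair of columns, every ordered pair
    # of symbol values from {0, ..., v-1} must occur in some row.
    k = len(rows[0])
    for c1, c2 in combinations(range(k), 2):
        seen = {(row[c1], row[c2]) for row in rows}
        if len(seen) < v * v:
            return False
    return True

# A classical CA(9; 2, 4, 3): 9 rows cover all pairwise interactions of
# four ternary parameters, versus the 3**4 = 81 rows exhaustive testing
# would need (rows built from the orthogonal array i, j, i+j, i+2j mod 3).
ca = [
    (0, 0, 0, 0), (0, 1, 1, 2), (0, 2, 2, 1),
    (1, 0, 1, 1), (1, 1, 2, 0), (1, 2, 0, 2),
    (2, 0, 2, 2), (2, 1, 0, 1), (2, 2, 1, 0),
]
print(covers_all_pairs(ca, v=3))  # → True
```

A simulated annealing constructor like the one the thesis improves would search over candidate row sets, using the number of still-uncovered pairs as the cost to minimize.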
