301

Integrated Process Planning and Scheduling for a Complex Job Shop Using a Proxy Based Local Search

Henry, Andrew Joseph 10 December 2015 (has links)
Within manufacturing systems, process planning and scheduling are two interrelated problems that are often treated independently. Process planning involves deciding which operations are required to produce a finished product and which resources will perform each operation. Scheduling involves deciding the sequence in which operations should be processed by each resource, where process planning decisions are known a priori. Integrating process planning and scheduling offers significant opportunities to reduce bottlenecks and improve plant performance, particularly for complex job shops. This research is motivated by the coating and laminating (C&L) system of a film manufacturing facility, where more than 1,000 product types are regularly produced each month. The C&L system can be described as a complex job shop with sequence-dependent setups, operation re-entry, minimum and maximum wait time constraints, and a due date performance measure. In addition to the complex scheduling environment, products produced in the C&L system have multiple feasible process plans. The C&L system experiences significant issues with schedule generation and due date performance. Thus, an integrated process planning and scheduling approach is needed to address large-scale industry problems. In this research, a novel proxy measure based local search (PBLS) approach is proposed to address integrated process planning and scheduling for a complex job shop. PBLS uses a proxy measure in conjunction with local search procedures to adjust process planning decisions with the goal of reducing total tardiness. A new dispatching heuristic, OU-MW, is developed to generate feasible schedules for complex job shop scheduling problems with maximum wait time constraints. A regression based proxy approach, PBLS-R, and a neural network based proxy approach, PBLS-NN, are investigated. In each case, descriptive statistics about the active process plan set are used as independent variables in the model. The resulting proxy measure is used to evaluate the effect of process planning local search moves on the objective function, total tardiness. Using the proxy measure to guide a local search reduces the number of times a detailed schedule must be generated, reducing overall runtime. In summary, the proxy measure based local search approach involves the following stages:
• Generate a set of feasible schedules for a set of jobs in a complex job shop.
• Evaluate the parameters and results of the schedules to establish a proxy measure that estimates the effect of process planning decisions on objective function performance.
• Apply local search methods to improve upon feasible schedules.
Both PBLS-R and PBLS-NN are integrated process planning and scheduling heuristics capable of addressing the challenges of the C&L problem. Both approaches show significant improvement in objective function performance when compared to local search guided by random walk. Finally, an optimal solution approach is applied to small data sets and the results are compared to those of PBLS-R and PBLS-NN. Although the proxy based local search approaches investigated do not guarantee optimality, they provide a significant improvement in computational time when compared to an optimal solution approach. The results suggest proxy based local search is an appealing approach for integrated process planning and scheduling in complex job shop environments where optimal solution approaches are not viable due to processing time. / Ph. D.
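A minimal Python sketch of the proxy-guided local search idea described in this abstract. The plan encoding, neighborhood, proxy, and tardiness function below are illustrative stand-ins, not the author's PBLS, PBLS-R, or PBLS-NN implementation; the point is only that a cheap proxy ranks candidate process-plan moves so that the expensive detailed-scheduling evaluation runs once per iteration instead of once per candidate.

```python
# Sketch of proxy-guided local search (assumed names, not the author's code).
import random

def neighbors(plan):
    """All plans differing from `plan` in one routing choice (a local move)."""
    return [plan[:i] + (1 - plan[i],) + plan[i + 1:] for i in range(len(plan))]

def detailed_tardiness(plan):
    """Stand-in for the expensive step: dispatch a detailed schedule and sum
    total tardiness.  Here it is just a synthetic cost."""
    return sum((i + 1) * choice for i, choice in enumerate(plan))

def proxy_score(plan):
    """Cheap proxy (e.g., a fitted regression or neural network) estimating the
    effect of a plan change without building a schedule."""
    return detailed_tardiness(plan) + random.gauss(0, 0.5)  # noisy estimate

def proxy_guided_search(plan, iterations=20):
    cost = detailed_tardiness(plan)                   # expensive call
    for _ in range(iterations):
        best = min(neighbors(plan), key=proxy_score)  # proxy ranks the moves
        best_cost = detailed_tardiness(best)          # schedule only the winner
        if best_cost < cost:
            plan, cost = best, best_cost
    return plan, cost

print(proxy_guided_search((1, 0, 1, 1, 0)))
```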
302

A Greedy Search Algorithm for Maneuver-Based Motion Planning of Agile Vehicles

Neas, Charles Bennett 29 December 2010 (has links)
This thesis presents a greedy search algorithm for maneuver-based motion planning of agile vehicles. In maneuver-based motion planning, vehicle maneuvers are solved offline and saved in a library to be used during motion planning. From this library, a tree of possible vehicle states can be generated through the search space. A depth-first, library-based algorithm called AD-Lib is developed and used to quickly provide feasible trajectories along the tree. AD-Lib combines greedy search techniques with hill climbing and effective backtracking to guide the search process rapidly towards the goal. Using simulations of a four-thruster hovercraft, AD-Lib is compared to existing suboptimal search algorithms in both known and unknown environments with static obstacles. AD-Lib is shown to be faster than existing techniques, at the expense of increased path cost. The motion planning strategy of AD-Lib along with a switching controller is also tested in an environment with dynamic obstacles. / Master of Science
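A simplified sketch of a greedy, depth-first search over a maneuver library with backtracking, in the spirit of the approach described above but not AD-Lib itself: the point-mass "maneuvers" and grid world are assumptions made for brevity.

```python
# Greedy depth-first search over a maneuver library (illustrative, not AD-Lib).
MANEUVERS = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # precomputed motion primitives

def plan(start, goal, obstacles, limit=10_000):
    """Expand the maneuver leading closest to the goal first (hill climbing),
    backtracking when all successors are blocked or already visited."""
    def dist(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    stack, visited, expansions = [[start]], {start}, 0
    while stack and expansions < limit:
        path = stack.pop()
        state = path[-1]
        if state == goal:
            return path
        expansions += 1
        successors = [(state[0] + dx, state[1] + dy) for dx, dy in MANEUVERS]
        successors = [s for s in successors if s not in visited and s not in obstacles]
        # Push the most promising successor last so it is expanded first;
        # the others stay on the stack as backtracking points.
        for s in sorted(successors, key=dist, reverse=True):
            visited.add(s)
            stack.append(path + [s])
    return None

print(plan((0, 0), (3, 3), obstacles={(1, 1), (2, 1), (1, 2)}))
```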
303

Analysis and Abstraction of Parallel Sequence Search

Goddard, Christopher Joseph 03 October 2007 (has links)
The ability to compare two biological sequences is extremely valuable, as matches can suggest evolutionary origins of genes or the purposes of particular amino acids. Results of such comparisons can be used in the creation of drugs, can help combat newly discovered viruses, or can assist in treating diseases. Unfortunately, the rate of sequence acquisition is outpacing our ability to compute on these data. Further, traditional dynamic programming algorithms are too slow to meet the needs of biologists, who wish to compare millions of sequences daily. While heuristic algorithms improve upon the performance of these dated applications, they still cannot keep up with the steadily expanding search space. Parallel sequence search implementations were developed to address this issue. By partitioning databases into work units for distributed computation, applications like mpiBLAST are able to achieve super-linear speedup over their sequential counterparts. However, such implementations are limited to clusters and require significant effort to work in a grid environment. Further, their parallelization strategies are typically specific to the target sequence search, so future applications require a reimplementation if they wish to run in parallel. This thesis analyzes the performance of two versions of mpiBLAST, noting trends as well as differences between them. Results suggest that these embarrassingly parallel applications are dominated by the time required to search vast amounts of data, and not by the communication necessary to support such searches. Consequently, a framework named gridRuby is introduced which alleviates two main issues with current parallel sequence search applications; namely, the requirement of a tightly knit computing environment and the specific, hand-crafted nature of parallelization. Results show that gridRuby can parallelize an application across a set of machines through minimal implementation effort, and can still exhibit super-linear speedup. / Master of Science
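A minimal sketch of the database-partitioning idea behind parallel sequence search: split the sequence database into work units and score each unit in a separate worker process. The exact-substring "score" is a trivial stand-in for BLAST-style alignment, and none of this is mpiBLAST or gridRuby code.

```python
# Partition a sequence database into work units and search them in parallel.
from multiprocessing import Pool

def score(args):
    query, unit = args
    # Stand-in scoring: count sequences in the unit containing the query substring.
    return sum(query in seq for seq in unit)

def parallel_search(query, database, workers=4):
    # Split the database into roughly equal work units, one batch per worker.
    size = max(1, len(database) // workers)
    units = [database[i:i + size] for i in range(0, len(database), size)]
    with Pool(workers) as pool:
        return sum(pool.map(score, [(query, u) for u in units]))

if __name__ == "__main__":
    db = ["ACGTACGT", "TTGACGTA", "CCCCGGGG", "ACGTTTTT"] * 1000
    print(parallel_search("ACGT", db))
```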
304

Blind Comprehension of Waveforms through Statistical Observations

Clark, William H. IV January 2015 (has links)
This thesis proposes a cumulant-based classification method to identify waveforms for a blind receiver in the presence of time-varying channels, building on existing work on cumulants in static channels in the literature. Results show that the classification accuracy is comparable to or better than current methods used in static channels that do not vary over an observation period. This is accomplished by using second- through tenth-order cumulants to form a signature vector that the search engine platform is able to differentiate. A receiver can then blindly and accurately identify waveforms in the presence of multipath Rayleigh fading with additive white Gaussian noise (AWGN). Channel learning occurs prior to classification in order to identify the consistent distortion pattern for a waveform that is observable in the signature vector. Then, using a database look-up method, the observed waveform is identified as belonging to a particular cluster based on the observed signature vector. If the distortion patterns are collected from a variety of channel types, the database can then classify both the waveform and the rough channel type that the waveform passed through. If the exact channel model or channel parameters are known and used as a limiter, significant improvement in the waveform classification can be achieved. Greater accuracy comes from using the exact channel model as the limiter. / Master of Science
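A simplified sketch of cumulant-signature classification by database lookup. It uses only second- and fourth-order statistics (the thesis uses second through tenth order), a toy BPSK/QPSK example, and no channel learning, so it illustrates the signature-vector idea rather than the author's method.

```python
# Toy cumulant-signature classification by nearest stored signature.
import numpy as np

def signature(x):
    """Signature vector from normalized second/fourth-order statistics."""
    x = x / np.sqrt(np.mean(np.abs(x) ** 2))          # unit-power normalization
    c20 = np.mean(x ** 2)                              # second-order, no conjugate
    c42 = (np.mean(np.abs(x) ** 4)                     # fourth-order cumulant C42
           - np.abs(np.mean(x ** 2)) ** 2
           - 2 * np.mean(np.abs(x) ** 2) ** 2)
    return np.array([np.abs(c20), np.abs(c42)])

def classify(x, database):
    """Database lookup: return the label whose stored signature is nearest."""
    sig = signature(x)
    return min(database, key=lambda label: np.linalg.norm(sig - database[label]))

rng = np.random.default_rng(0)
bpsk = rng.choice([-1.0, 1.0], 10_000) + 0j
qpsk = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], 10_000) / np.sqrt(2)
db = {"BPSK": signature(bpsk), "QPSK": signature(qpsk)}
noisy = qpsk + 0.3 * (rng.standard_normal(10_000) + 1j * rng.standard_normal(10_000))
print(classify(noisy, db))   # expected: QPSK
```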
305

Entropy Measurements and Ball Cover Construction for Biological Sequences

Robertson, Jeffrey Alan 01 August 2018 (has links)
As improving technology is making it easier to select or engineer DNA sequences that produce dangerous proteins, it is important to be able to predict whether a novel DNA sequence is potentially dangerous by determining its taxonomic identity and functional characteristics. These tasks can be facilitated by the ever-increasing amounts of available biological data. Unfortunately, though, these growing databases can be difficult to take full advantage of due to the corresponding increase in computational and storage costs. Entropy scaling algorithms and data structures present an approach that can expedite this type of analysis by scaling with the amount of entropy contained in the database instead of scaling with the size of the database. Because sets of DNA and protein sequences are biologically meaningful rather than random, they exhibit structure that can be exploited. As biological databases grow, taking advantage of this structure can be extremely beneficial. The entropy scaling sequence similarity search algorithm introduced here demonstrates this by accelerating the biological sequence search tools BLAST and DIAMOND. Tests of the implementation of this algorithm show that while this approach can lead to improved query times, constructing the required entropy scaling indices is difficult and expensive. To improve performance and remove this bottleneck, I investigate several ideas for accelerating the building of indices that support entropy scaling searches. The results of these tests identify key tradeoffs and demonstrate that there is potential in using these techniques for sequence similarity searches. / Master of Science / As biological organisms are created and discovered, it is important to compare their genetic information to known organisms in order to detect possibly harmful or dangerous properties. However, the collection of published genetic information from known organisms is huge and growing rapidly, making it difficult to search. This thesis shows that it might be possible to use the non-random properties of biological information to increase the speed and efficiency of searches; that is, because genetic sequences are not random but have common structures, an increase in known data does not mean a proportional increase in complexity (entropy). Specifically, when comparing a new sequence to a set of previously known sequences, it is important to choose the correct algorithms for comparing the similarity of two sequences, also known as the distance between them. This thesis explores the performance of the entropy scaling algorithm compared to several conventional tools.
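A minimal sketch of the ball-cover idea underlying entropy scaling search: cluster sequences into balls around representatives, then prune whole balls at query time using the triangle inequality. Hamming distance on equal-length toy sequences stands in for a real sequence distance, and the greedy cover construction is an assumption for brevity; this is not the thesis implementation.

```python
# Ball cover construction and triangle-inequality pruning (illustrative sketch).
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def build_cover(seqs, radius):
    """Greedy ball cover: each ball is (representative, members within radius)."""
    balls = []
    for s in seqs:
        for rep, members in balls:
            if hamming(s, rep) <= radius:
                members.append(s)
                break
        else:
            balls.append((s, [s]))
    return balls

def search(query, balls, radius, threshold):
    """Return sequences within `threshold` of `query`, skipping balls whose
    representative is too far away: d(q, s) >= d(q, rep) - radius for s in the ball."""
    hits = []
    for rep, members in balls:
        if hamming(query, rep) - radius > threshold:
            continue                       # the whole ball can be skipped
        hits.extend(s for s in members if hamming(query, s) <= threshold)
    return hits

seqs = ["ACGTACGT", "ACGTACGA", "TTTTCCCC", "TTTTCCCA", "GGGGAAAA"]
balls = build_cover(seqs, radius=2)
print(search("ACGTACGC", balls, radius=2, threshold=2))
```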
306

Auditors’ Information Search and Documentation: Does Knowledge of the Client Preference Or PCAOB Accountability Pressure Matter?

Olvera, Renee M. 05 1900 (has links)
Auditors regularly make judgments regarding whether a client’s chosen accounting policy is appropriate and in accordance with generally accepted accounting principles (GAAP). However, to form this judgment, auditors must either possess adequate topic-specific knowledge or must gain such knowledge through information search. This search is subject to numerous biases, including a bias toward confirmation of a client’s preference. It is important to further our understanding of bias in auditors’ information search to identify its causes and effects. Furthering our understanding is necessary to provide a basis for recommending and evaluating a potential debiaser, such as accountability. The Public Company Accounting Oversight Board (PCAOB) annually inspects the audit files of selected engagements, which introduces a new form of accountability within the auditing profession. This new form of accountability has come at great cost; however, there is little empirical evidence regarding its effects on auditors’ processes. As such, it is important to understand whether the presence of accountability from the PCAOB is effective in modifying auditors’ search behaviors to diminish confirmation bias. Using an online experiment, I manipulate client preference (unknown vs. known) and PCAOB accountability pressure (low vs. high) and measure search type (information-focused or decision-focused), search depth (shallow or deep), and documentation quality. I investigate whether auditors’ information search behaviors differ based on knowledge of the client’s preference and in the presence of accountability from an expected PCAOB inspection. I also investigate whether differences in auditors’ information search behaviors influence documentation quality, which is the outcome of greatest concern to the PCAOB. I hypothesize and find a client preference effect on information search type, such that auditors with knowledge of the client’s preference consider guidance associated with that preference longer than those without such knowledge. Contrary to expectations, PCAOB accountability pressure does not influence information search depth. With respect to documentation quality, I find that auditors engaged in a more information-focused search have higher documentation quality. Further, as expected, auditors who initially engage in a decision-focused and deep search have higher documentation quality than those who initially engage in a decision-focused but shallow search.
307

DATALOG WITH CONSTRAINTS: A NEW ANSWER-SET PROGRAMMING FORMALISM

East, Deborah J. 01 January 2001 (has links)
Knowledge representation and search are two fundamental areas of artificial intelligence. Knowledge representation is the area of artificial intelligence which deals with capturing, in a formal language, the properties of objects and the relationships between objects. Search is a systematic examination of all possible candidate solutions to a problem that is described as a theory in some knowledge representation formalism. We compare traditional declarative programming formalisms such as PROLOG and DATALOG with answer-set programming formalisms such as logic programming with stable model semantics. In this thesis we develop an answer-set formalism we call DC. The logic of DC is based on the logic of propositional schemata and a version of the Closed World Assumption. Two important features of the DC logic are its support for modeling the cardinalities of sets and its support for Horn clauses. These two features facilitate modeling of search problems. The DC system includes an implementation of a grounder and a solver. The grounder for the DC system grounds problem instances while retaining the structure of the cardinality of sets. The resulting theories are thus more concise. In addition, the solver for the DC system utilizes the structure of cardinality of sets to perform a more efficient search. The second feature, Horn clauses, is used when transitive closure eliminates the need for additional variables. The semantics of the Horn clauses are retained in the grounded theories. This also results in more concise theories. Our goal in developing DC is to provide the computer science community with a system which facilitates modeling of problems, is easy to use, is efficient, and captures the class of NP-search problems. We show experimental results comparing DC to other systems. These results show that DC is always competitive with state-of-the-art answer-set programming systems and that for many problems DC is more efficient.
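To make the cardinality-of-sets idea concrete, here is a hedged illustration of the kind of NP-search problem such a constraint models, written as a brute-force Python search rather than in DC syntax (which is not shown in this abstract): find a vertex cover whose cardinality is at most k.

```python
# Brute-force NP-search with a cardinality bound (illustration only, not DC).
from itertools import combinations

def vertex_cover(vertices, edges, k):
    """Return a set C with |C| <= k (the cardinality constraint) that touches
    every edge, or None if no such cover exists."""
    for size in range(k + 1):
        for cover in combinations(vertices, size):
            if all(u in cover or v in cover for u, v in edges):
                return set(cover)
    return None

print(vertex_cover([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)], k=2))
```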
308

Direct Search Methods for Nonsmooth Problems using Global Optimization Techniques

Robertson, Blair Lennon January 2010 (has links)
This thesis considers the practical problem of constrained and unconstrained local optimization. This subject has been well studied when the objective function f is assumed to be smooth. However, nonsmooth problems occur naturally and frequently in practice. Here f is assumed to be nonsmooth or discontinuous, without forcing smoothness assumptions near, or at, a potential solution. Various methods have been presented by others to solve nonsmooth optimization problems; however, only partial convergence results are possible for these methods. In this thesis, an optimization method which uses a series of local and localized global optimization phases is proposed. The local phase searches for a local minimum and gives the method its numerical performance on parts of f which are smooth. The localized global phase exhaustively searches for points of descent in a neighborhood of cluster points. It is the localized global phase which provides strong theoretical convergence results on nonsmooth problems. Algorithms are presented for solving bound constrained, unconstrained and constrained nonlinear nonsmooth optimization problems. These algorithms use direct search methods in the local phase, as they can be applied directly to nonsmooth problems because gradients are not explicitly required. The localized global optimization phase uses a new partitioning random search algorithm to direct random sampling into promising subsets of ℝⁿ. The partition is formed using classification and regression trees (CART) from statistical pattern recognition. The CART partition defines desirable subsets where f is relatively low, based on previous sampling, from which further samples are drawn directly. For each algorithm, convergence to an essential local minimizer of f is demonstrated under mild conditions: that is, a point x* for which the set of all feasible points with lower f values has Lebesgue measure zero for all sufficiently small neighborhoods of x*. Stopping rules are derived for each algorithm, giving practical convergence to estimates of essential local minimizers. Numerical results are presented on a range of nonsmooth test problems in 2 to 10 dimensions, showing the methods are effective in practice.
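A minimal sketch of the two-phase structure described above: a compass-style direct-search local phase plus a localized random-sampling phase around the incumbent when polling fails. The CART-based partitioning of the thesis is replaced here by plain uniform sampling in a fixed box, so this is an illustration of the idea, not the proposed algorithm.

```python
# Two-phase direct search sketch: compass poll + localized random sampling.
import random

def two_phase_minimize(f, x0, step=1.0, box=0.5, iters=200, samples=20):
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        improved = False
        # Local phase: poll the 2n compass directions from the incumbent.
        for i in range(len(x)):
            for d in (step, -step):
                y = x[:]
                y[i] += d
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5
            # Localized "global" phase: random samples in a box around the
            # incumbent, to escape nonsmooth points where every poll fails.
            for _ in range(samples):
                y = [xi + random.uniform(-box, box) for xi in x]
                fy = f(y)
                if fy < fx:
                    x, fx = y, fy
    return x, fx

nonsmooth = lambda v: abs(v[0] - 1.0) + abs(v[1] + 2.0)   # kink at the minimizer
print(two_phase_minimize(nonsmooth, [5.0, 5.0]))
```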
309

Analýza činnosti záchranářských psů v rámci Integrovaného záchranného systému / Analysis of the Activities of Rescue Dogs within the Integrated Rescue System

Jozífová, Kristýna January 2011 (has links)
Title: The analysis of the activities of rescue dogs in the Integrated Rescue System. Objectives: To analyze the frequency of deploying dogs in action under the Integrated Rescue System for the period 2009 to 2011, and to describe the reasons for deploying, or failing to deploy, dogs within the IRS. Method: Research of available sources, data collection. Results: The use of canine teams increases every year, and service dogs are used most often. Requests for debris search come mainly from the Fire and Rescue Service, while requests for area search come mainly from the Police of the Czech Republic. Keywords: Integrated Rescue System, dog, debris search, area search.
310

Advancing large scale object retrieval

Arandjelovic, Relja January 2013 (has links)
The objective of this work is object retrieval in large scale image datasets, where the object is specified by an image query and retrieval should be immediate at run time. Such a system has a wide variety of applications including object or location recognition, video search, near duplicate detection and 3D reconstruction. The task is very challenging because of large variations in the imaged object appearance due to changes in lighting conditions, scale and viewpoint, as well as partial occlusions. A starting point of established systems which tackle the same task is detection of viewpoint invariant features, which are then quantized into visual words and efficient retrieval is performed using an inverted index. We make the following three improvements to the standard framework: (i) a new method to compare SIFT descriptors (RootSIFT) which yields superior performance without increasing processing or storage requirements; (ii) a novel discriminative method for query expansion; (iii) a new feature augmentation method. Scaling up to searching millions of images involves either distributing storage and computation across many computers, or employing very compact image representations on a single computer combined with memory-efficient approximate nearest neighbour search (ANN). We take the latter approach and improve VLAD, a popular compact image descriptor, using: (i) a new normalization method to alleviate the burstiness effect; (ii) vocabulary adaptation to reduce influence of using a bad visual vocabulary; (iii) extraction of multiple VLADs for retrieval and localization of small objects. We also propose a method, SCT, for extremely low bit-rate compression of descriptor sets in order to reduce the memory footprint of ANN. The problem of finding images of an object in an unannotated image corpus starting from a textual query is also considered. Our approach is to first obtain multiple images of the queried object using textual Google image search, and then use these images to visually query the target database. We show that issuing multiple queries significantly improves recall and enables the system to find quite challenging occurrences of the queried object. Current retrieval techniques work only for objects which have a light coating of texture, while failing completely for smooth (fairly textureless) objects best described by shape. We present a scalable approach to smooth object retrieval and illustrate it on sculptures. A smooth object is represented by its imaged shape using a set of quantized semi-local boundary descriptors (a bag-of-boundaries); the representation is suited to the standard visual word based object retrieval. Furthermore, we describe a method for automatically determining the title and sculptor of an imaged sculpture using the proposed smooth object retrieval system.
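A short sketch of the RootSIFT mapping as commonly described: L1-normalize a SIFT descriptor and take the element-wise square root, so that Euclidean distance on the mapped vectors corresponds to comparing the original descriptors with the Hellinger kernel. The NumPy helper below is an illustrative implementation of that published recipe, not code from the thesis.

```python
# RootSIFT mapping sketch: L1-normalize, then element-wise square root.
import numpy as np

def root_sift(desc, eps=1e-12):
    desc = np.asarray(desc, dtype=np.float64)
    desc = desc / (np.sum(np.abs(desc), axis=-1, keepdims=True) + eps)  # L1 normalize
    return np.sqrt(desc)                                                # Hellinger mapping

# The mapped vectors are unit length, so existing Euclidean/cosine pipelines
# (quantization into visual words, inverted index) can be reused unchanged.
sift = np.random.default_rng(0).integers(0, 256, size=(2, 128))
print(np.linalg.norm(root_sift(sift), axis=1))   # ~[1. 1.]
```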
