371

Development and Validation of a Measure of Algorithm Aversion

Melick, Sarah 15 April 2020 (has links)
No description available.
372

Development of a Shore Profile Algorithm for Tidal Estuaries Dominated by Fine Sediments

Pevey, Kimberly Collins 30 April 2011 (has links)
The purpose of this work is to develop a shore profile algorithm for use in estuaries dominated by fine sediments. Numerical models are continually evolving to enhance the overall accuracy of their results; however, the typical shore profile is defined as a vertical wall. This work instead defines the shore as a nonlinear profile, which yields more realistic models. A variety of shore profile equations were examined and tested against a field site, Weeks Bay, Alabama. The most applicable, an equation by S. C. Lee, was modified to calculate the entire shore profile length. The distance from the land-water interface to the depth at which sedimentation is negligible can now be modeled with a single equation. Recommendations for the practical aspects of implementing the profile in a numerical model are also given.
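The abstract does not reproduce the modified Lee equation, so the sketch below only illustrates the single-equation idea it describes: evaluating depth from the land-water interface out to the distance where sedimentation becomes negligible. The exponential form h(x) = h_c(1 - exp(-kx)), the parameter names, and the cutoff criterion are illustrative assumptions, not the thesis's actual profile equation.

```python
import math

def shore_profile(h_closure, k, dx=0.5, tol=0.01):
    """Evaluate a hypothetical exponential shore profile
    h(x) = h_closure * (1 - exp(-k * x)) from the land-water
    interface (x = 0) out to where the depth is within `tol`
    of the closure depth (sedimentation assumed negligible).

    NOTE: the functional form is an illustrative stand-in,
    not the modified S. C. Lee equation used in the thesis."""
    profile = []
    x = 0.0
    while True:
        h = h_closure * (1.0 - math.exp(-k * x))
        profile.append((x, h))
        if h_closure - h < tol * h_closure:  # effectively at closure depth
            break
        x += dx
    return profile

# Example: 2 m closure depth, 0.05 / m decay rate (made-up values).
for x, h in shore_profile(h_closure=2.0, k=0.05)[::20]:
    print(f"x = {x:6.1f} m  depth = {h:5.2f} m")
```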
373

Image Processing Algorithms for Realizing a Seamless Multi-Projection Screen

Ye, Xiuxian Jr January 2020 (has links)
This is a kind of image processing algorithm in order to realize a seamless video wall and improve the quality of images. / Nowadays, screens are very common in our daily life. There are several di erent kinds of screens, LCD, LED, OLED, ULED and so on. LCD screens can display high-resolution pictures while LED has advantages of low energy consumption and wider color range. This project has two goals. The rst one is to achieve a seamless display screen which consists of 9 LED backlit LCD boards. The second goal is to improve image quality, which is enabled by the combination of LED and LCD. There are two main problems that need to be solved in this project. The rst problem is brightness correction. Because of the projection method and the distance between lights and nal screen, there are di erent kinds of overlapping situations and distinct lines on screen. The other one is the combination of LED and LCD. The algorithms need to be developed to ensure that RGB LEDs and LCD panels display the same picture and to address some problems caused by the LCD module. / Thesis / Master of Applied Science (MASc)
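The thesis's own correction algorithms are not given in the abstract; the sketch below shows one standard way to attack the brightness-correction problem it describes — a linear edge-blending ramp across a horizontal overlap between two adjacent tiles, so summed brightness stays constant through the seam. The tile width and overlap values are made-up parameters.

```python
import numpy as np

def blend_weights(width, overlap):
    """Per-column blending weights for two horizontally adjacent
    tiles whose projections overlap by `overlap` pixels. In the
    overlap region the left tile ramps down linearly while the
    right tile ramps up, so their summed brightness is constant."""
    w_left = np.ones(width)
    w_left[width - overlap:] = np.linspace(1.0, 0.0, overlap)
    w_right = np.ones(width)
    w_right[:overlap] = np.linspace(0.0, 1.0, overlap)
    return w_left, w_right

# Example: 1920-pixel tiles with a 64-pixel overlap (hypothetical).
wl, wr = blend_weights(1920, 64)
# Applied per column to each tile's image before display, e.g.:
# tile_left_out = tile_left * wl[None, :, None]
assert np.allclose(wl[1920 - 64:] + wr[:64], 1.0)  # seam sums to full brightness
```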
374

Equitable Housing Generation Through Cellular Automata

Clark, Molly R 28 June 2022 (has links)
This thesis seeks to experiment with the culmination of social, natural, and built paradigms of sustainability using digital generation as an architectural process. Specifically, this thesis will explore cellular automaton and modular design approaches in the context of multifamily housing, asking if we can quantify the qualities of equitable housing and guide digital algorithms to generate efficient, flexible, human-centered designs. Cellular automaton is a term used to describe a phenomenon in which the growth of one cell in a plant or animal is entirely dependent upon the already existing adjacent cell. Digital cellular automata are mathematical, rule-based tools used to generate patterns or to map complex systems; similarly, the generation of new cells is entirely dependent on the environment they are born into. The aim of this work is to translate human-centered parameters and local architectural guidelines into an algorithm with rules that can be easily manipulated to produce comparable digitally generated forms. The parameters will be based on an architectural program consisting of a multi-unit, mixed-income residential building located in, and designed for the residents of, Northampton, Massachusetts. Northampton is an exemplary small-scale city: a historic New England town with housing problems reminiscent of a larger urban area. The selected site allows for investigations of density, growth, adaptation, and modular design in a way that could be applied not only to similarly sized cities but to regions of varying density based on their own local parameters. For a relevant output, the parameters and data put into the algorithm must be humanized, individualized, or, in the case of this work, curated to reflect and serve a specific community. Cellular automaton allows for varied pattern generation and for the exploration of repeating modules, as well as for future adaptations to evolving housing needs and sustainability targets. The goal is to create a supportive system of habitat that allows for growth potential and flexibility without sacrificing quality of life for the inhabitants.
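As a point of reference for the generative process the abstract describes, here is a minimal sketch of a two-dimensional cellular automaton step in which each cell's next state depends entirely on its existing neighbors. The specific birth/survival rule (Conway's Life rules) is a placeholder, not the thesis's curated housing rule set.

```python
import numpy as np

def ca_step(grid, birth={3}, survive={2, 3}):
    """One update of a 2D cellular automaton on a binary grid.
    A cell's next state depends only on its current state and the
    count of live cells among its eight neighbors (wrapping edges)."""
    # Count neighbors by summing the eight shifted copies of the grid.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    born = (grid == 0) & np.isin(neighbors, list(birth))
    kept = (grid == 1) & np.isin(neighbors, list(survive))
    return (born | kept).astype(grid.dtype)

# Example: seed a small grid and grow it for a few generations.
rng = np.random.default_rng(0)
grid = (rng.random((20, 20)) < 0.3).astype(int)
for _ in range(5):
    grid = ca_step(grid)
```

In the thesis's framing, the `birth` and `survive` sets would be replaced by rules encoding human-centered parameters and local guidelines, so that varying the rules produces comparable generated forms.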
375

A Hierarchical Approach to the Analysis of Intermediary Structures Within the Modified Contour Reduction Algorithm

Wallentinsen, Kristen M 01 January 2013 (has links) (PDF)
Robert Morris’s (1993) Contour-Reduction Algorithm—later modified by Rob Schultz (2008) and hereafter referred to as the Modified Contour Reduction Algorithm (MCRA)—recursively prunes a contour down to its prime: its first, last, highest, and lowest contour pitches. The algorithm follows a series of steps in two stages. The first stage prunes c-pitches that are neither local high points (maxima) nor low points (minima). The second stage prunes pitches that are neither maxima within the max-list (pitches that were maxima in the first stage) nor minima within the min-list (pitches that were minima in the first stage). This second stage is repeated until no more pitches can be pruned. What remains is the contour’s prime. By examining how the reduction process is applied to a given c-seg, one can discern a hierarchy of levels that reveals new types of relationships between c-segs. In this thesis, I aim to highlight relationships between c-segs by analyzing the distinct subsets created by the different levels obtained by applying the MCRA. These subsets, or sub-csegs, can be used to delineate further relationships between c-segs beyond their respective primes. As such, I posit a new method in which each sub-cseg produced by the MCRA is examined to create a system of hierarchical comparison that measures relationships between c-segs, using sub-cseg equivalence to calculate an index value representing degrees of similarity. The similarity index compares the number of levels at which two c-segs are similar to the total number of comparable levels. I then implement this analytical method by examining the similarities and differences between thirteen mode-2 Alleluias from the Liber Usualis that share the same alleluia and jubilus. The verses of these thirteen chants are highly similar in melodic content in that they all have the same prime, yet they are not fully identical. I will examine the verses of these chants using my method of comparison, analyzing intermediary sub-csegs between these thirteen chants in order to reveal differences in the way the primes that govern their basic structures are composed out.
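The pruning described above translates fairly directly into code. The sketch below implements only a simplified version grounded in the abstract's description — retain the first and last pitches, repeatedly prune interior pitches that are neither local maxima nor minima until nothing changes — and does not reproduce the MCRA's separate max-list/min-list bookkeeping or its handling of repeated pitches.

```python
def local_extrema(cseg):
    """Indices of the first pitch, last pitch, and every interior
    pitch that is a local maximum or minimum."""
    keep = {0, len(cseg) - 1}
    for i in range(1, len(cseg) - 1):
        left, here, right = cseg[i - 1], cseg[i], cseg[i + 1]
        if (here >= left and here >= right) or (here <= left and here <= right):
            keep.add(i)
    return sorted(keep)

def contour_prime(cseg):
    """Simplified contour reduction: prune non-extrema repeatedly
    until the c-seg is stable. The result keeps the first, last,
    highest, and lowest pitches, as a prime must."""
    current = list(cseg)
    while True:
        reduced = [current[i] for i in local_extrema(current)]
        if reduced == current:
            return reduced
        current = reduced

# A contour in contour-pitch notation (0 = lowest): the passing
# tones prune away, leaving the prime [0, 4, 1].
print(contour_prime([0, 1, 2, 4, 3, 1]))
```

Recording the intermediate `current` lists at each pass would yield exactly the hierarchy of levels (sub-csegs) that the thesis's similarity index compares.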
376

On-chip Thermal Sensor Placement

Xiang, Yun 01 January 2008 (has links) (PDF)
In the design of modern processors, thermal management has become one of the major challenges. Aggressive technology scaling and the ever-increasing demand for high-performance VLSI circuits have resulted in higher current densities in the interconnect lines and increasingly higher power dissipation in the substrate. The importance of thermal effects on the reliability and performance of integrated circuits increases as the technology advances. Thus a large number of thermal sensors are needed for accurate thermal mapping and thermal management. However, a rise in the number of sensors leads to a large area cost and increases the complexity of routing. So, to accurately calibrate the thermal gradients while reducing the area cost of thermal sensors, a systematic sensor distribution and allocation algorithm is essential. In this paper, we first look into existing thermal sensor placement techniques and add a further optimization technique to an existing temperature-related algorithm. We then propose an algorithm based on the actual thermal gradient of the hotspots to determine the thermal sensor distribution on a microprocessor. The algorithm is designed to find the minimum number of sensors required and their corresponding locations while ensuring that all calibrated temperatures deviate by no more than a predetermined error margin. We use the QT clustering algorithm to partition the hotspots and implement a novel scheme for fast and efficient sensor location calculation. Our simulation targets the Alpha 21364 processor, also known as EV6. Moreover, the anisotropic property of on-chip heat dissipation is also considered. The final piece of this work leads to on-chip thermal management techniques given knowledge of the thermal sensor distribution: an optimized monitor network-on-chip (MNoC) scheme is set up for the non-uniformly distributed sensors returned by the proposed placement algorithm, and two different schemes are proposed and analyzed to address this problem.
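The abstract names QT (quality threshold) clustering as the tool for partitioning hotspots before siting sensors. A minimal sketch of that step follows, assuming hotspots are 2D chip coordinates and a fixed cluster-diameter threshold (values here are made up); the real flow would replace plain Euclidean distance with a measure reflecting the thesis's anisotropic heat dissipation.

```python
from math import dist

def qt_cluster(points, diameter):
    """QT clustering: grow a candidate cluster around every point,
    keep the largest candidate whose diameter stays under the
    threshold, remove its members, and repeat on what remains."""
    remaining = list(points)
    clusters = []
    while remaining:
        best = []
        for seed in remaining:
            cand = [seed]
            pool = [p for p in remaining if p != seed]
            while pool:
                # Greedily add the point that least stretches the cluster.
                nxt = min(pool, key=lambda p: max(dist(p, q) for q in cand))
                if max(dist(nxt, q) for q in cand) > diameter:
                    break
                cand.append(nxt)
                pool.remove(nxt)
            if len(cand) > len(best):
                best = cand
        clusters.append(best)
        remaining = [p for p in remaining if p not in best]
    return clusters

# Hotspot coordinates in mm (hypothetical); one sensor per cluster.
hotspots = [(1.0, 1.2), (1.1, 1.0), (5.0, 5.1), (5.2, 4.9), (9.0, 1.0)]
print(qt_cluster(hotspots, diameter=1.0))
```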
377

Genetic Algorithm-Based Improved Availability Approach for Controller Placement in SDN

Asamoah, Emmanuel 13 July 2023 (has links)
Thanks to the Software-Defined Networking (SDN) paradigm, which segregates the control and data layers of traditional networks, large and scalable networks can now be dynamically configured and managed. It is a game-changing networking technology that provides increased flexibility and scalability through centralized management. The Controller Placement Problem (CPP), however, poses a crucial challenge in SDN because it directly impacts the efficiency and performance of the network. The CPP attempts to determine the ideal number of controllers for a given network and their corresponding relative positioning, generally so as to minimize communication delays between switches and controllers while maintaining network reliability and resilience. In this thesis, we present a modified Genetic Algorithm (GA) technique to solve the CPP efficiently. Our approach makes use of the GA's capabilities to obtain the best controller placement based on important factors such as network delay, reliability, and availability. We further optimize the process by means of certain deduced constraints that allow faster convergence. In this study, our primary objective is to optimize the control plane design by identifying the optimal controller placement, which minimizes delay and significantly improves both switch-to-controller and controller-to-controller link availability. We introduce an advanced genetic algorithm methodology and showcase a precise technique for optimizing the inherent availability constraints. To evaluate the trade-offs between the deployment of controllers and the associated costs of enhancing particular node link availabilities, we performed computational experiments on three distinct networks of varying sizes. Overall, our work contributes to the growth trajectory of SDN research by offering a novel GA-based resolution to the controller placement problem that can improve network performance and dependability.
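To make the GA framing concrete, here is a minimal sketch of a genetic algorithm for controller placement, assuming a made-up fitness that mixes average switch-to-controller delay with an availability penalty; the thesis's actual encoding, constraints, and convergence shortcuts are not reproduced.

```python
import random

def fitness(placement, delay, avail, w=0.5):
    """Lower is better: mean delay from each switch to its nearest
    controller, plus a penalty for siting controllers on
    low-availability nodes. `delay` is an all-pairs delay matrix."""
    n = len(delay)
    d = sum(min(delay[s][c] for c in placement) for s in range(n)) / n
    return d + w * sum(1.0 - avail[c] for c in placement)

def ga_place(delay, avail, k, pop_size=40, gens=200, mut=0.2):
    """Evolve a set of k controller sites on an n-node network."""
    n = len(delay)
    pop = [random.sample(range(n), k) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: fitness(p, delay, avail))
        survivors = pop[: pop_size // 2]           # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            genes = list(set(a) | set(b))          # crossover: draw from union
            random.shuffle(genes)
            child = genes[:k]
            if random.random() < mut:              # mutation: move one controller
                child[random.randrange(k)] = random.choice(
                    [v for v in range(n) if v not in child])
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda p: fitness(p, delay, avail))

# Toy 6-node line topology: delay = hop count, made-up availabilities.
D = [[abs(i - j) for j in range(6)] for i in range(6)]
A = [0.99, 0.95, 0.999, 0.97, 0.999, 0.95]
print(ga_place(D, A, k=2))
```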
378

An Exposition of the Deterministic Polynomial-Time Primality Testing Algorithm of Agrawal-Kayal-Saxena

Anderson, Robert Lawrence 29 June 2005 (has links) (PDF)
I present a thorough examination of the unconditional, deterministic, polynomial-time algorithm, proposed by Agrawal, Kayal, and Saxena in their paper [1], for determining whether an input number is prime or composite. All proofs cited have been reworked with full details for the sake of completeness and readability.
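For orientation, here is a compact, unoptimized sketch of the AKS steps as given in the Agrawal-Kayal-Saxena paper: rule out perfect powers, find a suitable r, check small gcds, and verify the polynomial congruences (x + a)^n ≡ x^n + a mod (x^r − 1, n). It favors readability over the careful complexity bookkeeping the thesis examines.

```python
from math import gcd, isqrt, log2

def is_perfect_power(n):
    """True if n = a**b for some integers a > 1, b > 1."""
    for b in range(2, n.bit_length() + 1):
        a = round(n ** (1.0 / b))
        if any(c > 1 and c ** b == n for c in (a - 1, a, a + 1)):
            return True
    return False

def poly_pow_x_plus_a(a, n, r):
    """(x + a)^n modulo (x^r - 1, n), coefficients as a length-r list."""
    def mul(p, q):
        out = [0] * r
        for i, pi in enumerate(p):
            if pi:
                for j, qj in enumerate(q):
                    out[(i + j) % r] = (out[(i + j) % r] + pi * qj) % n
        return out
    result = [1] + [0] * (r - 1)
    base = [a % n, 1] + [0] * (r - 2)
    e = n
    while e:
        if e & 1:
            result = mul(result, base)
        base = mul(base, base)
        e >>= 1
    return result

def aks(n):
    if n < 2 or is_perfect_power(n):
        return False
    L = log2(n) ** 2
    r = 2                                    # smallest r with ord_r(n) > L
    while True:
        if gcd(r, n) == 1:
            k, v = 1, n % r
            while k <= L and v != 1:
                v, k = (v * n) % r, k + 1
            if k > L:
                break
        r += 1
    for a in range(2, min(r, n - 1) + 1):    # small-factor check
        if gcd(a, n) > 1:
            return False
    if n <= r:
        return True
    phi = sum(1 for t in range(1, r) if gcd(t, r) == 1)  # Euler phi(r)
    for a in range(1, int(isqrt(phi) * log2(n)) + 1):
        lhs = poly_pow_x_plus_a(a, n, r)
        rhs = [0] * r                        # x^(n mod r) + a, reduced mod n
        rhs[n % r] = 1
        rhs[0] = (rhs[0] + a) % n
        if lhs != rhs:
            return False
    return True

print([p for p in range(2, 40) if aks(p)])   # the primes below 40
```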
379

Computation of Weights for Probabilistic Record Linkage Using the EM Algorithm

Bauman, G. John 29 June 2006 (has links) (PDF)
Record linkage is the process of combining information about a single individual from two or more records. Probabilistic record linkage assigns a weight to each field that is compared. The decision of whether the records should be linked is then determined by the sum of the weights, or “score”, over all fields compared. Using methods similar to the simple-versus-simple most powerful test, an optimal record linkage decision rule can be established that minimizes the number of unlinked records when the probabilities of false positive and false negative errors are specified. The weights needed for probabilistic record linkage necessitate linking a “training” subset of records for the computations. This is not practical in many settings, as hand matching requires a considerable time investment. In 1989, Matthew A. Jaro demonstrated how the Expectation-Maximization, or EM, algorithm could be used to compute the needed weights when fields have binomial matching possibilities. This project applies this method of using the EM algorithm to calculate weights for head-of-household records from the 1910 and 1920 Censuses for Ascension Parish, Louisiana, and for church and county records from Perquimans County, North Carolina. This project also expands Jaro's EM algorithm to a multinomial framework. The performance of the EM algorithm for calculating weights is assessed by comparing the computed weights to weights computed by clerical matching. Simulations are also conducted to investigate the sensitivity of the algorithm to the total number of record pairs, the number of fields with missing entries, the starting values of the estimated probabilities, and the convergence epsilon value.
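A minimal sketch of the E- and M-steps for estimating per-field match (m) and non-match (u) probabilities from binary field-comparison vectors, in the spirit of Jaro's approach; the field names, starting values, and the conditional-independence assumption across fields are illustrative. Agreement weights then follow as log2(m/u) per field.

```python
import math

def em_weights(pairs, n_iter=50, p=0.1, m=None, u=None):
    """EM for probabilistic record linkage weights.
    `pairs` is a list of binary comparison vectors (1 = field agrees).
    Returns per-field m- and u-probabilities and the match proportion p.
    Assumes fields are conditionally independent given match status."""
    k = len(pairs[0])
    m = m or [0.9] * k
    u = u or [0.1] * k
    for _ in range(n_iter):
        # E-step: posterior probability that each pair is a true match.
        g = []
        for gamma in pairs:
            pm = p * math.prod(m[i] if gamma[i] else 1 - m[i] for i in range(k))
            pu = (1 - p) * math.prod(u[i] if gamma[i] else 1 - u[i] for i in range(k))
            g.append(pm / (pm + pu))
        # M-step: re-estimate p, m, u from the posteriors.
        p = sum(g) / len(g)
        for i in range(k):
            m[i] = sum(gi * gamma[i] for gi, gamma in zip(g, pairs)) / sum(g)
            u[i] = (sum((1 - gi) * gamma[i] for gi, gamma in zip(g, pairs))
                    / sum(1 - gi for gi in g))
    return m, u, p

# Toy comparison vectors over (surname, given name, birth year):
pairs = [(1, 1, 1), (1, 1, 0), (0, 0, 0), (0, 1, 0), (1, 1, 1), (0, 0, 1)]
m, u, p = em_weights(pairs)
agreement_weights = [math.log2(mi / ui) for mi, ui in zip(m, u)]
```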
380

Packing Virtual Machines onto Servers

Wilcox, David Luke 28 October 2010 (has links) (PDF)
Data centers consume a significant amount of energy. This problem is aggravated by the fact that most servers and desktops are underutilized when powered on, and even when idle still consume a majority of the energy of a fully utilized computer. This problem would be much worse were it not for the growing use of virtual machines. Virtual machines allow system administrators to more fully utilize hardware capabilities by putting more than one virtual system on the same physical server. Many times, however, virtual machines are placed onto physical servers inefficiently. To address this inefficiency, I developed a new family of packing algorithms. This family of algorithms is meant to solve the problem of packing virtual machines onto a cluster of physical servers. This problem differs from the conventional bin packing problem in two ways. First, each server has multiple resources that can be consumed. Second, loads on virtual machines are probabilistic and not completely known to the packing algorithm. We first compare our algorithm with other bin packing algorithms and show that it performs better than state-of-the-art genetic algorithms in the literature. We then show the general feasibility of our algorithm in packing real virtual machines on physical servers.
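The thesis's own algorithm family is not specified in the abstract, but a multi-resource first-fit-decreasing baseline shows the problem shape it generalizes: each VM demands several resources (here, hypothetical CPU and memory fractions), and a server can host a VM only if every resource still fits. The probabilistic-load aspect is omitted from this sketch.

```python
def pack(vms, capacity):
    """First-fit decreasing over multiple resources.
    `vms` maps name -> (cpu, mem) demand; `capacity` is per-server.
    Returns a list of servers, each a list of hosted VM names."""
    servers = []   # contents of each opened server
    loads = []     # running resource totals, parallel to `servers`
    # Place VMs largest total demand first.
    for name, demand in sorted(vms.items(), key=lambda kv: -sum(kv[1])):
        for i, load in enumerate(loads):
            # The VM fits only if every resource dimension fits.
            if all(l + d <= c for l, d, c in zip(load, demand, capacity)):
                loads[i] = tuple(l + d for l, d in zip(load, demand))
                servers[i].append(name)
                break
        else:  # no existing server fits: power on a new one
            loads.append(tuple(demand))
            servers.append([name])
    return servers

# Hypothetical demands as fractions of one server's (cpu, mem).
vms = {"web": (0.5, 0.3), "db": (0.4, 0.7), "cache": (0.2, 0.5), "batch": (0.6, 0.2)}
print(pack(vms, capacity=(1.0, 1.0)))  # packs onto two servers
```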
