441

The process behind the delimbing of trees / Processen bakom kvistning av träd

Sunnälv Persson, Martin January 2021 (has links)
In modern forestry, two types of machines are generally used for thinning: harvesters and forwarders. These machines are large and weigh several tens of tons, which means they damage the ground as they drive through the forest. AirForestry is a startup company that focuses on manufacturing lightweight, electric forest machines intended to do much less damage owing to their lower weight. Because the company's ambition is to develop an all-electric harvester, high efficiency in the work process is required to be competitive. The purpose of this thesis was to identify which factors affect the delimbing process and to estimate the approximate power the harvester head's drive motors would need in order to achieve a stable delimbing process. If the research was successful, a new knife design adapted for thinning would be developed. The work began with a planning report comprising an introduction, a literature study and a method description. The chosen method was first to develop a test rig to measure the amount of work and the maximum impact force needed to cut a single branch. A study of branches and their distribution on trees in the thinning phase was then conducted. The branch distribution, together with the data from the test rig, made it possible to estimate the energy required to delimb a tree. Impact forces along the tree provide an opportunity not only to estimate the life cycle of the knives but also to analyze how the rest of the harvester head holds up. The delimbing process was assumed to be affected both by the design of the knives and by which branches are present and how they are placed. The literature study showed that frozen wood gives higher forces when sheared, so this was investigated as well. The test rig was designed as a guillotine, both to make it easy to manufacture and because potential energy was the only energy source available to power the rig. The work of cutting a branch was measured with a high-speed camera, and the maximum impact force was measured with an accelerometer. Different knife designs were tested together with different angles and branch types, and every variable was tested across several branch diameters. The study of branch distribution along the tree was done on trees considered to be in the thinning stage; every branch with a diameter greater than 5 mm was recorded together with its distance along the stem and whether it was dead. The test rig's data were later used in a multiple linear regression to determine whether each variable had a significant effect. The regression was then used to estimate the power required to delimb a tree: 3.1 kW for a pine and 6.6 kW for a spruce. Only branch diameter and tree species could be shown to have a statistically significant effect on both the work required and the maximum impact force.
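
For illustration, a minimal sketch of the kind of multiple linear regression described above, with hypothetical data and variable names (not the thesis's code): cutting work regressed on branch diameter and a species indicator, mirroring the two factors found significant.

```python
# Hypothetical measurements, purely for illustration of the regression setup.
import numpy as np
import statsmodels.api as sm

diameter = np.array([8, 12, 16, 20, 25, 10, 14, 18, 22, 28])  # branch diameter, mm
spruce   = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])           # 0 = pine, 1 = spruce
work_J   = np.array([4.1, 9.8, 17.5, 27.9, 44.0,
                     6.0, 13.2, 22.4, 34.1, 56.3])            # cutting work, J (made up)

X = sm.add_constant(np.column_stack([diameter, spruce]))
fit = sm.OLS(work_J, X).fit()
print(fit.summary())   # p-values show which predictors are significant
```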
442

On a turbo decoder design for low power dissipation

Fei, Jia 21 July 2000 (has links)
A new coding scheme called "turbo coding" has generated tremendous interest in channel coding of digital communication systems due to its high error-correcting capability. Two key innovations in turbo coding are parallel concatenated encoding and iterative decoding. A soft-in soft-out component decoder can be implemented using the maximum a posteriori (MAP) or the maximum likelihood (ML) decoding algorithm. While the MAP algorithm offers better performance than the ML algorithm, its computation is complex and not suitable for hardware implementation. The log-MAP algorithm, which performs the necessary computations in the logarithm domain, greatly reduces hardware complexity. With the proliferation of battery-powered devices, power dissipation, along with speed and area, is a major concern in VLSI design. In this thesis, we investigated a low-power design of a turbo decoder based on the log-MAP algorithm. Our turbo decoder has two component log-MAP decoders, which perform the decoding process alternately. Two major ideas for low-power design are the employment of a variable number of iterations during the decoding process and the shutdown of inactive component decoders. The number of iterations during decoding is determined dynamically according to the channel condition to save power. When a component decoder is inactive, the clocks and spurious inputs to the decoder are blocked to reduce power dissipation. We followed the standard cell design approach to design the proposed turbo decoder. The decoder was described in VHDL and then synthesized to measure the performance of the circuit in area, speed and power. Our decoder achieves good performance in terms of bit error rate. The two proposed methods significantly reduce power dissipation and energy consumption. / Master of Science
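
For context, a minimal Python sketch (not the thesis's implementation) of the max* operation, the core primitive of the log-MAP algorithm: log(e^a + e^b) computed as a maximum plus a small correction term, avoiding explicit exponentials in hardware.

```python
import math

def max_star(a: float, b: float) -> float:
    """Jacobian logarithm: log(exp(a) + exp(b)), computed stably."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a: float, b: float) -> float:
    """Max-log-MAP approximation: drop the correction term for simpler hardware."""
    return max(a, b)

print(max_star(1.0, 2.0))  # ~2.313
print(max_log(1.0, 2.0))   # 2.0
```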
443

RTL Functional Test Generation Using Factored Concolic Execution

Pinto, Sonal 21 July 2017 (has links)
This thesis presents a novel concolic testing methodology and CORT, a test generation framework that uses it for high-level functional test generation. The test generation effort is visualized as the systematic unraveling of the control-flow response of the design over multiple (factored) explorations. We begin by transforming the Register Transfer Level (RTL) source for the design into a high-performance compiled C++ functional simulator instrumented for branch coverage. An exploration begins by simulating the design with concrete stimuli. Then we perform an interleaved, cycle-by-cycle symbolic evaluation over the concrete execution trace extracted from the Control Flow Graph (CFG) of the design. The purpose of this task is to dynamically discover ways to divert the control flow of the system by mutating primary-input-stimulated control statements in this trace. We record the control-flow response as a Test Decision Tree (TDT), a new representation of the test generation effort. Successive explorations begin at system states heuristically selected from a global TDT, onto which each new decision tree resulting from an exploration is stitched. CORT constructs functional tests for the ITC99 and IWLS-2005 benchmarks that achieve high branch coverage with fewer input vectors and in less time than existing methods. Furthermore, it achieves orders-of-magnitude speedup compared to previous hybrid concrete and symbolic simulation based techniques. / Master of Science
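
To make the concolic idea concrete, a highly simplified Python/z3 sketch of the general technique (not CORT's implementation): record the branch conditions taken on a concrete run, negate one, and ask a solver for a new stimulus that diverts control flow. The symbolic input and conditions are hypothetical.

```python
from z3 import Int, Solver, Not, sat

x = Int('x')                          # symbolic counterpart of a primary input
path_conditions = [x > 10, x < 100]   # branch conditions observed on a concrete run

s = Solver()
s.add(*path_conditions[:-1])          # keep the path prefix as-is...
s.add(Not(path_conditions[-1]))       # ...and negate the last condition taken
if s.check() == sat:
    print("diverting stimulus:", s.model()[x])   # e.g. x = 100
```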
444

A Library of Emotions

Doert, Jillian Elizabeth 26 August 2008 (has links)
This thesis is an investigation into the impact of design elements on human behavior as explored through the design of a library. A library was chosen for its role in the community and for the diverse group of users it affects. A library, defined as a collection of things, is also a metaphor for the role memory plays in determining the emotive response a person has to their surroundings. Memory acts as a collection of internal associations and, when engaged through sensory experience, dictates an emotional reaction to a space based on previous experiences. This project is a discovery of how to engage the senses and the memory in order to evoke an emotive response. / Master of Architecture
445

Improving Branch Coverage in RTL Circuits with Signal Domain Analysis and Restrictive Symbolic Execution

Bagri, Sharad 18 March 2015 (has links)
Considerable research has been directed towards efficient test stimuli generation for Register Transfer Level (RTL) circuits. However, stimuli generation frameworks are still not capable of generating effective stimuli for all circuits. Two limiting factors are that (1) it is hard to ascertain whether a branch in the RTL code is reachable, and (2) some hard-to-reach branches require intelligent algorithms to reach them. Since unreachable branches cannot be reached by any test sequence, we propose a method to deduce the unreachability of a branch by examining the possible values a signal can take in the RTL code, without explicitly unrolling the design. To the best of our knowledge, this method identifies more unreachable branches than any method published in this domain while being computationally less expensive. Moreover, some branches require very specific values on input signals in specific cycles to be reached. Conventional symbolic execution can generate those values but is computationally expensive. We propose a cycle-by-cycle restrictive symbolic execution that analyzes only a selected subset of program statements to reduce the computational cost. Our method gathers information from an initial execution trace, generated by any technique, to intelligently decide the specific cycles where its application will be helpful. It can be hybridized with simulation-based test stimuli generation methods to reduce the cost of formal verification. With this method, we were able to reach previously unreached branches in the ITC99 benchmark circuits. / Master of Science
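
A toy Python sketch of the signal-domain idea described above, under the assumption that static inspection has already bounded the values a signal can ever hold; the value set and condition values are hypothetical, not the thesis's algorithm.

```python
# Suppose inspection of the RTL (without unrolling) shows that `state` is only
# ever assigned the constants below, across its reset value and all assignments.
possible_values = {0, 1, 2, 3}   # hypothetical possible-value set for `state`

def branch_reachable(required_value: int) -> bool:
    """A branch guarded by `state == required_value` can only fire if the
    required value lies inside the signal's possible-value set."""
    return required_value in possible_values

print(branch_reachable(2))  # True  -> cannot rule the branch out
print(branch_reachable(7))  # False -> unreachable; no test sequence can cover it
```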
446

Improving Bio-Inspired Frameworks

Varadarajan, Aravind Krishnan 05 October 2018 (has links)
In this thesis, we provide solutions for two different bio-inspired algorithms. The first is enhancing the performance of bio-inspired test generation for circuits described in RTL Verilog, specifically for branch coverage. We seek to improve upon an existing framework, BEACON, in terms of performance. BEACON is an Ant Colony Optimization (ACO) based test generation framework. Like other ACO frameworks, BEACON has good scope for performance improvement through parallel computing. We exploit the available parallelism using both multi-core Central Processing Units (CPUs) and Graphics Processing Units (GPUs). Using our new multithreaded approach, we reduce test generation time by a factor of 25 compared to the original implementation for a wide variety of circuits. We also provide a two-dimensional factoring method for BEACON to improve the available parallelism and yield additional speedup. The second bio-inspired algorithm we address concerns Deep Neural Networks. With the increasing prevalence of neural networks in artificial intelligence and in mission-critical applications such as self-driving cars, questions arise about their reliability and robustness. We have developed a test-generation-based technique and metric to evaluate the robustness of a neural network's outputs based on their sensitivity to its inputs. This is done by generating inputs that the network finds difficult to classify but that are relatively apparent to human perception. We measure the degree of difficulty of generating such inputs to calculate our metric. / MS / High-level Hardware Design Languages (HDLs) have allowed designers to implement complicated hardware designs with considerably less effort. Unfortunately, design verification for the same circuits has failed to scale gracefully in terms of time and effort. Not only has it become more difficult for formal methods, due to the exponential complexity of increasing path explosion, but concrete test generation frameworks also face new issues such as an increased volume of required simulations. The advent of parallel computing using General Purpose Graphics Processing Units (GPGPUs) has led to improved performance for various applications. We propose to leverage both the multi-core CPU and the GPGPU for RTL test generation. This is achieved by implementing a test generation framework that can utilize the SIMD-type parallelism available in GPGPUs and the task-level parallelism available on CPUs. The speedup achieved comes both from the test generation framework itself and from refactoring the hardware model for multithreaded test generation. For this purpose, we translate the RTL Verilog into a C++ and CUDA compilable program. Experimental results show that considerable speedup can be achieved for test generation without loss of coverage. In recent years, machine learning and artificial intelligence have taken a substantial leap forward with the emergence of Deep Neural Networks (DNNs). Unfortunately, apart from accuracy and FTest numbers, there exist very few metrics to qualify a DNN. This becomes a reliability issue, as DNNs are frequently used in safety-critical applications. It is difficult to interpret how the parameters of a trained DNN store the knowledge from the training inputs, and therefore also difficult to infer whether a DNN has learned parameters that might cause an output neuron to misfire wrongly, a bug. An exhaustive search of the input space of a DNN is not only infeasible but also misleading.
Thus, in our work, we apply test generation techniques to generate new test inputs, based on the existing training and testing sets, to qualify the underlying robustness. Attempts to generate these inputs are guided only by the prediction probability values at the final output layer. We observe that, depending on the amount of perturbation and the time needed to generate these inputs, we can differentiate between DNNs of varying quality.
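
A minimal Python sketch of the probability-guided input generation described above; `model` (a callable returning output-layer probabilities), the random local search, and all parameters are hypothetical stand-ins for illustration, not the thesis's algorithm.

```python
import numpy as np

def robustness_score(model, x, true_class, step=0.01, max_steps=1000, rng=None):
    """Rough perturbation budget needed to flip `model`'s decision on `x`."""
    if rng is None:
        rng = np.random.default_rng(0)
    x_adv = np.array(x, dtype=float)
    for i in range(max_steps):
        candidate = x_adv + step * rng.standard_normal(x_adv.shape)
        # Accept a perturbation only if it lowers confidence in the true class,
        # using nothing but the final-layer probabilities as guidance.
        if model(candidate)[true_class] < model(x_adv)[true_class]:
            x_adv = candidate
        if np.argmax(model(x_adv)) != true_class:
            return (i + 1) * step   # fooled quickly: low score, less robust
    return max_steps * step         # survived the search budget: comparatively robust
```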
447

Career Public-Sector Employee Attitudes About Political Appointments:  A Study of the U.S. Department of State

Boyette, Charity Lynne 14 May 2024 (has links)
Scholars have long examined the inherent trade-offs between control and capability when presidents politicize the executive branch through their appointment powers, including through political appointments to federal agency leadership positions. Empirical research over the past few decades connects high ratios of appointees to career leaders with decreased agency performance and higher voluntary turnover in the career senior ranks. However, less attention has been dedicated to the effects of such appointments on the attitudes of the civil service workforce, factors that have been shown to influence organizational performance. Employing a study of the U.S. Department of State, I evaluate the relationship between the degree of agency politicization and self-reported measures of engagement, motivation, and job satisfaction among civil servants. The analysis suggests that the ongoing reliance on outside political appointees in senior leadership by successive presidents impedes the State Department's efforts to build and sustain positive workforce attitudes. This study examines the effects of the institutionalized use of outside appointments on the broader federal workforce, presenting a new perspective for scholarly understanding of the dynamics at play when presidents politicize the agencies they are entrusted to lead. / Doctor of Philosophy / U.S. presidents frequently use their appointment powers to exert control by placing trusted outsiders in positions of authority in federal government agencies. However, research has repeatedly shown that agencies with large numbers of outside leaders can struggle to perform effectively and lose experienced career civil servants at higher rates. While a connection between appointee leadership and performance is well established, researchers are less certain of which factors actually cause it to develop. In particular, little attention has been given to understanding the opinions of career employees of a federal agency about working within such a system, or how those attitudes might help explain their behaviors at work. Through a study of one agency, the U.S. Department of State, I examine appointee-career relationships by exploring career employees' thoughts on leadership at the State Department, going beyond attitudes about specific leaders to evaluate whether using outside appointees to lead agencies creates barriers to employee recruitment, retention, and performance. The analysis suggests that, while the institutionalization of political appointments provides a president with greater control over an agency, the constant churn created by reliance on outsiders for leadership may harm an agency's ability to achieve its goals by undermining employee trust in leaders and in the agency itself.
448

A Disassembly Optimization Problem

Bhootra, Ajay 10 January 2003 (has links)
The rapid technological advancement of the past century has decreased the life cycle of a large number of products and, consequently, increased the rate of technological obsolescence. The disposal of obsolete products has resulted in rapid landfilling and now poses a major environmental threat. Governments in many countries around the world have started imposing regulations to curb uncontrolled product disposal. Consumers, too, are now aware of the adverse effects of product disposal on the environment and increasingly favor environmentally benign products. In the wake of imminent stringent government regulations and consumer awareness of ecosystem-friendly products, manufacturers need to consider alternatives to product disposal. One way to deal with this problem is to disassemble an obsolete product and utilize some of its components/subassemblies in the manufacturing of new products. This seems a promising solution because products nowadays are made in accordance with the highest quality standards and, although an obsolete product may not be in the required functional state as a whole, several of its components or subassemblies may still be in near-perfect condition. However, product disassembly is a complex task requiring human labor as well as automated processes and, consequently, a large monetary investment. This research addresses a disassembly optimization problem that aims at minimizing the costs associated with the disassembly process (namely, the costs of breaking the joints and the sequence-dependent set-up costs of disassembly operations) while maximizing the benefits resulting from the recovery of components/subassemblies from a product. We provide a mathematical abstraction of the disassembly optimization problem in the form of integer-programming models. One of our formulations includes a new way of modeling the subtour elimination constraints (SECs) usually encountered in the well-known traveling salesman problem. Based on these SECs, a new valid formulation for the asymmetric traveling salesman problem (ATSP) was developed, and this formulation was further extended to obtain a valid formulation for the precedence-constrained ATSP. Detailed experimentation was conducted to compare the performance of the proposed formulations with that of other well-known formulations from the literature. Our results indicate that, in comparison, the proposed formulations are quite promising in terms of the LP relaxation bounds obtained and the number of branch-and-bound nodes explored to reach an optimal integer solution. These new formulations, along with the experimental results, are presented in Appendix A. To solve the disassembly optimization problem, a three-phase iterative solution procedure was developed that can determine optimal or near-optimal disassembly plans for complex assemblies. The first phase obtains an upper bound on our maximization problem through an application of a Lagrangian relaxation scheme. The second phase further improves this bound through the addition of a few valid inequalities to our models. In the third phase, we fix some of the decision variables based on the solutions obtained in the iterations of phases 1 and 2 and then implement a branch-and-bound scheme to obtain the final solution.
We test our procedure on several randomly generated data sets and identify the factors that render a problem computationally difficult. We also establish the practical usefulness of our approach through case studies on the disassembly of a computer processor and a laser printer. / Master of Science
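
For context only, the classical Miller-Tucker-Zemlin (MTZ) subtour-elimination constraints for the ATSP on nodes 1..n are sketched below in LaTeX; the thesis develops a different SEC family, whose exact form the abstract does not give.

```latex
\begin{aligned}
\min\; & \textstyle\sum_{i \neq j} c_{ij} x_{ij} \\
\text{s.t.}\; & \textstyle\sum_{j \neq i} x_{ij} = 1, \quad \sum_{j \neq i} x_{ji} = 1 && \forall i, \\
& u_i - u_j + n\, x_{ij} \le n - 1 && \forall\, i \neq j,\; i, j \in \{2, \dots, n\}, \\
& x_{ij} \in \{0, 1\}.
\end{aligned}
```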
449

The determinants of bank branch location in India: An empirical investigation

Zhang, Q., Arora, Rashmi, Colombage, S. 19 February 2021 (has links)
Bank branching plays a significant role in a wide range of economic activities. Existing studies on the determinants of bank branching activity largely focus on developed countries; studies devoted to developing countries are scant. We present the first study to examine the determinants of bank branching activity in one of the largest developing countries, India. We employ a unique longitudinal dataset, collected at the state level and covering 25 Indian states for the period 2006 to 2017, and use Poisson regression, which is better suited for modelling a count dependent variable. Our findings are twofold. First, region- and bank-specific factors such as population size and bank deposits influence the location of bank branches. Second, the relationship between these factors and branch locations is heterogeneous across different types of banks and across states with different business environments. The implications are likewise twofold. First, from the banks' perspective, considering the factors behind branch location is crucial when setting out a branching strategy; irrespective of policy measures aimed at promoting financial inclusion in India, we show that banks consider economic activity in a region when locating their branches. Second, from the perspective of policy makers and regulators, such branching strategies could potentially contribute to financial exclusion, as the population in less developed regions may be excluded from accessing financial services; policy makers and regulators should take this into account when formulating policies aimed at promoting financial inclusion. The contributions of the study are threefold. First, to the best of our knowledge, we have not come across any study that investigates the determinants of bank branch location in India, so we reasonably believe that ours is a first-of-its-kind. Second, our study provides a new perspective on how regional and bank-specific factors influence banks of different ownership in locating branches. Third, while traditional regression was the method of choice among early studies, we employ Poisson regression, which is better suited to the count nature of the dependent variable.
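
A minimal Python sketch (hypothetical variable names and data, not the paper's code) of the kind of Poisson regression described above: branch counts per state modelled on region- and bank-specific factors such as population and deposits.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# In the paper this would be the 25-state panel for 2006-2017; the numbers
# below are made up purely for illustration.
df = pd.DataFrame({
    "branches":   [120, 95, 210, 60, 180],          # counted dependent variable
    "population": [45.2, 30.1, 80.5, 12.9, 70.3],   # millions (hypothetical)
    "deposits":   [3.4, 2.1, 7.8, 0.9, 6.2],        # trillions INR (hypothetical)
})

model = smf.glm("branches ~ population + deposits",
                data=df, family=sm.families.Poisson()).fit()
print(model.summary())   # coefficients are log incidence-rate ratios
```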
450

Enhanced lower bounds and an algorithm for a water distribution network design model

Totlani, Rajiv 29 August 2008 (has links)
The design of water distribution systems has received a great deal of attention in the last three decades because of its importance to industrial growth and its crucial role in society for community health, firefighting capability, and quality of life. The cost of installing a water distribution system is typically in the tens of millions of dollars. These systems also account for the largest costs in municipal maintenance budgets. Furthermore, existing systems are being burdened by increasing urban development and water use. All these factors make pipe sizing decisions a critical task in designing a cost-effective water distribution system that is capable of handling the demand and satisfying the minimum pressure head and hydraulic redundancy requirements. A number of research efforts have focused on the least-cost pipe sizing decision, each of them generating improved solutions for several standard test problems from the literature, but so far very little work has been done to test the quality of these solutions. In this thesis, two lower bounding schemes are proposed to evaluate the quality of these solutions. These lower bounding schemes make use of the special concave-convex nature of the nonlinear frictional loss terms. We show that the first is a dual to Eiger et al.'s [1994] bounding procedure, while the second method produces far tighter lower bounds with comparable ease. Results on applying these lower bounding schemes to some standard test problems from the literature are presented. The second lower bounding scheme is then embedded in a branch-and-bound procedure along with an upper bounding scheme by suitably restricting the flows at each node of the search tree. By branching successively, we attempt to narrow the gap from optimality to generate near-optimal solutions to the least-cost pipe sizing problem. This results in a comprehensive reduced-cost network design that satisfies all pressure and flow requirements for realistically sized problems. The proposed method is applied to standard test problems from the literature. It is hoped that this method will provide a useful tool for city engineers to design a cost-effective water distribution system that meets specified hydraulic requirements. / Master of Science
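
For context, the Hazen-Williams frictional head-loss relation commonly used in this literature is shown below (SI units); the abstract does not state the thesis's exact loss model, so this is an assumption about the general form of the nonlinear term whose curvature the bounding schemes exploit.

```latex
% h_f : head loss (m), L : pipe length (m), Q : flow (m^3/s),
% C : Hazen-Williams roughness coefficient, D : pipe diameter (m)
h_f \;=\; \frac{10.67\, L\, Q^{1.852}}{C^{1.852}\, D^{4.87}}
```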
