211

Reduction of Printed Circuit Card Placement Time Through the Implementation of Panelization

Tester, John T. 09 October 1999 (has links)
Decreasing the cycle time of panels in the printed circuit card manufacturing process has been a significant research topic over the past decade. The research objective in this literature has been to reduce placement machine cycle times by finding optimal placement sequences and component-feeder allocations for a given, fixed panel component layout on a given machine type. Until now, no research has been found which allows the alteration of the panel configuration itself when panelization is part of the electronic panel design. This research is the first effort to incorporate panelization into the cycle time reduction field. The PCB circuit design is not altered; rather, the panel design (i.e., the arrangement of the PCBs in the panel) is altered to reduce panel assembly time. Component placement problem models are developed for three types of machines: the automated insertion machine (AIM), the pick-and-place machine (PAPM), and the rotary turret head machine (RTHM). Two solution procedures are developed, both based upon a genetic algorithm (GA) approach. One procedure simultaneously produces solutions for the best panel design and component placement sequence. The other procedure first selects a best panel design based upon an estimation of its worth to the minimization problem, then uses a more traditional GA to solve the component placement and component type allocation problem for that panel design. Experiments were conducted to discover situations where the consideration of panelization can make a significant difference in panel assembly times. It was shown that the PAPM scenario benefits most from panelization and the RTHM the least, though all three machine types show improvements under certain conditions established in the experiments. NOTE: An updated copy of this ETD was added on 09/17/2010. / Ph. D.
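A GA-based search over placement sequences of the kind described above can be sketched in miniature. The component coordinates, truncation selection, order crossover, and swap mutation below are illustrative assumptions for a generic placement-sequence GA, not the thesis's machine-specific models or cost functions:

```python
import random

# Illustrative component coordinates on a panel (mm) — hypothetical data.
COMPONENTS = [(5, 5), (20, 5), (20, 25), (5, 25), (40, 15), (60, 30)]

def tour_length(order):
    """Total placement-head travel for a sequence (Euclidean, sketch only)."""
    dist = 0.0
    for a, b in zip(order, order[1:]):
        (x1, y1), (x2, y2) = COMPONENTS[a], COMPONENTS[b]
        dist += ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    return dist

def crossover(p1, p2):
    """Order crossover (OX): keep a slice of p1, fill the rest from p2."""
    i, j = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[i:j] = p1[i:j]
    fill = [g for g in p2 if g not in child]
    for k in range(len(child)):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def evolve(pop_size=30, generations=200):
    pop = [random.sample(range(len(COMPONENTS)), len(COMPONENTS))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=tour_length)
        survivors = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            child = crossover(*random.sample(survivors, 2))
            if random.random() < 0.2:            # swap mutation
                a, b = random.sample(range(len(child)), 2)
                child[a], child[b] = child[b], child[a]
            children.append(child)
        pop = survivors + children
    return min(pop, key=tour_length)

best = evolve()
```

The thesis's procedures additionally evolve the panel design itself; this sketch covers only the classical inner problem of sequencing placements for a fixed layout.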
212

A Distributed Active Vibration Absorber (DAVA) for Active-Passive Vibration and Sound Radiation Control

Cambou, Pierre E. 13 November 1998 (has links)
This thesis presents a new active-passive treatment developed to reduce structural vibrations and their associated radiated sound. It is a contribution to the research of efficient and low-cost devices that combine the advantages of active and passive noise control techniques. A theoretical model has been developed to investigate the potential of this new "active-passive distributed absorber". The model integrates new functions that make it extremely stable numerically. Using this model, a genetic algorithm has been used to optimize the shape of the active-passive distributed absorber. Prototypes have been designed and their potential investigated. The device subsequently developed can be described as a skin that can be mechanically and electrically tuned to reduce unwanted vibration and/or sound. It is constructed from the piezoelectric material polyvinylidene fluoride (PVDF) and thin layers of lead. The tested device is designed to weigh less than 10% of the main structure and has a resonance frequency around 1000 Hz. Experiments have been conducted on a simply supported steel beam (24"x2"x1/4"). Preliminary results show that the new treatment outperforms active-passive point absorbers and conventional constrained-layer damping material. The compact design and its efficiency make it suitable for many applications, especially in the transportation industry. This new type of distributed absorber is entirely original and represents a potential breakthrough in the field of acoustics and vibration control. / Master of Science
213

Experimental Design for Estimating Electro-Thermophysical Properties of a Thermopile Thermal Radiation Detector

Barreto, Joel 10 August 1998 (has links)
As the Earth's atmosphere evolves due to human activity, today's industrial society relies significantly on the scientific community to foresee possible atmospheric complications such as the celebrated greenhouse effect. Scientists, in turn, rely on accurate measurements of the Earth Radiation Budget (ERB) in order to quantify changes in the atmosphere. The Thermal Radiation Group (TRG), a laboratory in the Department of Mechanical Engineering at Virginia Polytechnic Institute and State University, has been at the leading edge of designing and modeling ERB instruments. TRG is currently developing a new generation of thermoelectric detectors for ERB applications. These detectors consist of an array of thermocouple junction pairs based on a new thermopile technology using materials whose electro-thermophysical properties are not completely characterized. The objective of this investigation is to design experiments aimed at determining the electro-thermophysical properties of the detector materials: the thermal conductivity and diffusivity of the materials and the Seebeck coefficient of the thermocouple junctions. Knowledge of these properties will provide fundamental information needed for the development of optimally designed detectors that rigorously meet required design specifications. / Master of Science
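For context, a thermopile's open-circuit output scales with the number of junction pairs, the junction Seebeck coefficient, and the temperature difference across the junctions. A minimal sketch with illustrative values, not TRG's detector parameters:

```python
# Thermopile output sketch: V = N * S * dT, where N is the number of
# thermocouple junction pairs, S the junction Seebeck coefficient (V/K),
# and dT the hot-to-cold junction temperature difference (K).
# All numbers below are illustrative assumptions.

def thermopile_voltage(n_pairs, seebeck_v_per_k, delta_t_k):
    """Open-circuit voltage of an ideal series-connected thermopile."""
    return n_pairs * seebeck_v_per_k * delta_t_k

# 60 junction pairs, 40 microvolts/K per junction, 0.5 K difference:
v_out = thermopile_voltage(60, 40e-6, 0.5)   # approximately 1.2 mV
```

This linear relation is why characterizing the Seebeck coefficient is essential: for a measured voltage, any error in S maps directly into the inferred radiative flux.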
214

Comparative Study of the Effect of Tread Rubber Compound on Tire Performance on Ice

Shenvi, Mohit Nitin 20 August 2020 (has links)
The tire-terrain interaction is complex and tremendously important; it affects the performance and safety of the vehicle and its occupants. Icy roads further compound these complexities and adversely affect the handling of the vehicle. Analysis of the tire-ice contact focusing on individual aspects of tire construction and operation is imperative for the tire industry's future. This study investigates the effects of the tread rubber compound on the drawbar pull performance of tires in contact with an ice layer near its melting point. A set of sixteen tires of eight different rubber compounds was considered. The tires were identical in design and tread pattern but had different tread rubber compounds. To isolate the effect of the tread rubber compound, all operational parameters were kept constant during the testing conducted on the Terramechanics Rig at the Terramechanics, Multibody, and Vehicle Systems laboratory. The tests led to conclusive evidence of the effect of the tread rubber compound on drawbar performance (found to be most prominent in the linear region of the drawbar-slip curve) and on the resistive forces of free-rolling tires. Modeling of the tire-ice contact for estimation of temperature rise and water film height was performed using ATIIM 2.0, and the performance of this in-house model was compared against three classical tire-ice friction models. A parametrization of the Magic Formula tire model was performed using experimental data and a Genetic Algorithm, and the dependence of individual Magic Formula factors on ambient temperature, tire age, and tread rubber compound was investigated. / Master of Science / The interaction between the tire and icy road conditions is, in the context of occupant safety, a demanding test of a driver's skill. A vehicle's response to driver inputs becomes highly unpredictable depending on a variety of factors such as the thickness of the ice, its temperature, the ambient temperature, and the condition of the vehicle and the tires. To address these issues, the development of winter tires received a boost, especially in siping and rubber compounding technology. This research focuses on how variation in tread rubber compounds affects tire performance on ice. Experiments were performed using the Terramechanics Rig at the Terramechanics, Multibody, and Vehicle Systems (TMVS) laboratory. It was found that the effect of the rubber compound is most pronounced in the region where most vehicles operate under normal circumstances. An attempt was made to simulate the temperature rise in the contact patch and the water film that exists due to localized melting of ice caused by frictional heating; three classical friction models were used to compare predictions against ATIIM 2.0, an in-house model. Using a Genetic Algorithm, efforts were made to understand the effects of the tread rubber compound, the ambient temperature, and the aging of the tire on the parameters of the Magic Formula model, an empirical model describing tire performance.
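The Magic Formula mentioned above is a widely used empirical tire model; in its basic form it maps slip to force through four coefficients (stiffness B, shape C, peak D, curvature E). The coefficient values below are illustrative placeholders, not the parameters fitted in this work:

```python
import math

def magic_formula(slip, B, C, D, E):
    """Pacejka's basic Magic Formula:
    F(slip) = D * sin(C * atan(B*slip - E*(B*slip - atan(B*slip))))."""
    bs = B * slip
    return D * math.sin(C * math.atan(bs - E * (bs - math.atan(bs))))

# Force is zero at zero slip and bounded by the peak factor D.
# Illustrative coefficients only; slip swept from 0 to 100%.
forces = [magic_formula(s / 100, B=10.0, C=1.9, D=1.0, E=0.97)
          for s in range(0, 101)]
```

Near the origin the curve is approximately linear with slope B*C*D, which is the "linear region of the drawbar-slip curve" where the abstract reports the compound effect to be most prominent.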
215

Preliminary Design of an Autonomous Underwater Vehicle Using a Multiple-Objective Genetic Optimizer

Martz, Matthew 26 June 2008 (has links)
The process developed herein uses a Multiple Objective Genetic Optimization (MOGO) algorithm. The optimization is implemented in ModelCenter (MC) from Phoenix Integration. It uses a genetic algorithm that searches the design space for optimal, feasible designs by considering three Measures of Performance (MOPs): cost, effectiveness, and risk. The complete synthesis model comprises an input module, the three primary AUV synthesis modules, a constraint module, three objective modules, and a genetic algorithm. The effectiveness rating determined by the synthesis model is based on nine attributes identified in the US Navy's UUV Master Plan and four performance-based attributes calculated by the synthesis model. To solve multi-attribute decision problems, the Analytical Hierarchy Process (AHP) is used. Once the MOGO has generated a final generation of optimal, feasible designs, the decision-maker(s) can choose candidate designs for further analysis. A sample AUV synthesis was performed and five candidate AUVs were analyzed. / Master of Science
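The Analytical Hierarchy Process mentioned above derives attribute weights from a pairwise comparison matrix. A minimal sketch using the common column-normalization approximation of the principal eigenvector, with an illustrative 3x3 matrix rather than the thesis's thirteen effectiveness attributes:

```python
# AHP priority weights from a reciprocal pairwise comparison matrix.
# matrix[i][j] states how much more important attribute i is than j
# (Saaty's 1-9 scale); matrix[j][i] = 1 / matrix[i][j].

def ahp_weights(matrix):
    """Approximate the principal eigenvector: normalize each column
    to sum to 1, then average each row of the normalized matrix."""
    n = len(matrix)
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    normalized = [[matrix[i][j] / col_sums[j] for j in range(n)]
                  for i in range(n)]
    return [sum(normalized[i]) / n for i in range(n)]

# Illustrative judgments: attribute 0 moderately dominates 1 and 2.
weights = ahp_weights([[1,   3,   5],
                       [1/3, 1,   2],
                       [1/5, 1/2, 1]])
```

The resulting weights sum to one and preserve the dominance ordering of the judgments, which is what lets the synthesis model fold multiple attributes into a single effectiveness MOP.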
216

A Genetic Algorithm-Based Place-and-Route Compiler For A Run-time Reconfigurable Computing System

Kahne, Brian C. 14 May 1997 (has links)
Configurable Computing is a technology which attempts to increase computational power by customizing the computational platform to the specific problem at hand. An experimental computing model known as wormhole run-time reconfiguration allows for partial reconfiguration and is highly scalable. In this approach, configuration information and data are grouped together in a computing unit called a stream, which can tunnel through the chip creating a series of interconnected pipelines. The Colt/Stallion project at Virginia Tech implements this computing model in integrated circuits. In order to create applications for this platform, a compiler is needed which can convert a human-readable description of an algorithm into the sequences of configuration information understood by the chip itself. This thesis covers two compilers which perform this task. The first compiler, Tier1, requires a programmer to explicitly describe placement and routing inside of the chip. This could be considered equivalent to an assembler for a traditional microprocessor. The second compiler, Tier2, allows the user to express a problem as a dataflow graph. Actual placing and routing of this graph onto the physical hardware is taken care of through the use of a genetic algorithm. A description of the two languages is presented, followed by example applications. In addition, experimental results are included which examine the behavior of the genetic algorithm and how alterations to various genetic operator probabilities affect performance. / Master of Science
217

Intelligent Parameter Adaptation for Chemical Processes

Sozio, John Charles 23 July 1999 (has links)
Reducing the operating costs of chemical processes is very beneficial to a company's bottom line. Since chemical processes are usually run in steady state for long periods of time, saving a few dollars an hour can have significant long-term effects. However, the nonlinear dynamics involved in most chemical processes make them difficult to optimize. A nonlinear, open-loop-unstable system, the Tennessee Eastman Chemical Process Control Problem, is used as a test-bed problem for minimization routines. A decentralized controller is first developed that stabilizes the plant against set-point changes and disturbances. Subsequently, a genetic algorithm calculates input parameters of the decentralized controller for minimum operating cost. Genetic algorithms use a directed search method based on the evolutionary principle of "survival of the fittest". They are powerful global optimization tools; however, they are typically computationally expensive and have long convergence times. To decrease the convergence time and avoid premature convergence to a local minimum, an auxiliary fuzzy logic controller was used to adapt the parameters of the genetic algorithm. The controller manipulates the input and output data through a set of linguistic IF-THEN rules to respond in a manner similar to human reasoning. The combination of a supervisory fuzzy controller and a genetic algorithm leads to near-optimum operating costs for a dynamically modeled chemical process. / Master of Science
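Supervisory fuzzy adaptation of a GA, as described above, can be sketched as a small rule base that raises the mutation rate when population diversity is low and progress has stalled, and lowers it otherwise. The membership breakpoints and output rates below are illustrative assumptions, not the controller from this work:

```python
# Fuzzy-style GA parameter adaptation (sketch). Linguistic rule:
#   IF diversity is LOW AND search is STALLED THEN mutation is HIGH
#   ELSE mutation is LOW
# Breakpoints and output values are illustrative assumptions.

def low(x, lo, hi):
    """Membership in "LOW": 1 below lo, 0 above hi, linear in between."""
    if x <= lo:
        return 1.0
    if x >= hi:
        return 0.0
    return (hi - x) / (hi - lo)

def adapt_mutation_rate(diversity, stalled_generations):
    # Degree to which the HIGH-mutation rule fires (min = fuzzy AND;
    # "stalled" is the complement of "few stalled generations").
    fire = min(low(diversity, 0.1, 0.4),
               1.0 - low(stalled_generations, 5, 20))
    # Weighted average of rule outputs (simple defuzzification).
    return fire * 0.30 + (1.0 - fire) * 0.05

rate_exploring = adapt_mutation_rate(diversity=0.05, stalled_generations=30)
rate_converging = adapt_mutation_rate(diversity=0.5, stalled_generations=0)
```

The same pattern extends to crossover probability or population size; the point is that the GA's operators become functions of the search state rather than fixed constants.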
218

Homogeneous models of anechoic rubber coatings

Cederholm, Alex January 2003 (has links)
No description available.
220

Comprehensive Study of Meta-heuristic Algorithms for Optimal Sizing of BESS in Multi-energy Systems

Ginste, Joakim January 2022 (has links)
The question of finding the optimal size of battery energy storage systems (BESS) used for energy arbitrage and peak shaving has gained more and more interest in recent years. This is due to the increased variability of electricity prices, caused by the growth of renewable but variable electricity production units in the electricity grid. Finding the optimal size of a BESS is a problem of high complexity, involving many factors that affect the usefulness and economic value of a BESS. This thesis includes a thorough literature study of methods and techniques used for finding the optimal size (both capacity and power) of a BESS. From the literature study, two meta-heuristic algorithms were found to have been used successfully on similar problems: the Genetic Algorithm (GA) and the Firefly Algorithm (FF). These algorithms were tested in a case study optimizing BESS capacity and power either to maximize the net present value (NPV) of investing in a Li-ion BESS of the LFP type or to minimize the levelized cost of storage (LCOS) for the BESS, with a project lifetime of 10 years. The BESS gains monetary value from energy arbitrage by acting as a middleman between the electricity grid and a large residential housing complex, seen as the "user", with a predefined hourly electricity load demand. For the case study, a simplified charge and discharge dispatch schedule was implemented for the BESS, focused on maximizing the value of energy arbitrage. The case study was divided into three cases: a base case in which no BESS was installed; case 2, in which a BESS was installed; and case 3, in which both a BESS and an electrical heater (ELH) were installed. The electrical heater in case 3 shifts a heating load from the user to an electrical load, saving money and reducing CO2 emissions from the pre-installed gas heater used in the base case.
The results showed that overall GA was the better optimization algorithm for the stated problem, with optimization times 60-70% lower than FF, depending on the case. For case 2, GA achieves the best LCOS, 0.225 €/kWh, 11.4% lower than with FF. Regarding NPV for case 2, FF finds its best solution at the lowest possible capacity and power in the search space (0.1 kWh and 0.1 kW), with an NPV of -51.5 €, showing that when optimizing for NPV in case 2 an investment in a BESS is undesirable. GA finds better solutions for case 3 for both NPV and LCOS, at 954,982 € and 0.2305 €/kWh respectively, 35.7% higher and 9.1% lower than with FF. For case 3 it was shown that the savings from installing the ELH account for a large portion of the profits, leading to a positive NPV, in contrast to case 2 where no ELH was installed. Finally, it was found that GA can be a useful tool for finding the optimal power and capacity of BESS installations, whereas FF became stuck at local optima. However, the charge and discharge dispatch schedule was seen to play an important role in the effectiveness of installing a BESS: in some cases the BESS was used only 17% of all hours in a year (case 2, when optimizing for NPV). Further research into the dispatch schedule and its role in finding the optimal BESS size is therefore of interest. / The question of how to find the optimal size of a battery energy storage system (BESS) to be used for energy arbitrage and peak shaving has received more and more attention in recent years. This is due to an increase in the variability of electricity prices, which in turn partly stems from the growing installation of renewable but variable electricity production units in the grid. Finding the optimal size of a BESS is a complex problem, involving many factors that affect the effectiveness and economic value of a BESS. This thesis contains a literature study of techniques and methods used to find the optimal size (capacity and power) of a BESS. From the literature study, two meta-heuristic algorithms were found to have been used successfully on similar problems: the Genetic Algorithm (GA) and the Firefly Algorithm (FF). These algorithms were tested in a case study optimizing the capacity and power of a BESS by either maximizing the net present value (NPV) of investing in a Li-ion BESS of the LFP type or minimizing the levelized cost of storage (LCOS) for a BESS with a lifetime of 10 years. The BESS gains monetary value from energy arbitrage by acting as a middleman between the grid and a large residential complex, regarded as the "user", with a predefined electricity load profile. For the case study, a simple charge and discharge dispatch schedule was used to maximize the value of energy arbitrage. The case study was divided into three cases: a base case with no BESS installed; case 2, in which only a BESS was installed; and case 3, in which both a BESS and an electrical heater (ELH) were installed to shift part of the user's thermal load to an electrical load, saving money and reducing the CO2 emissions that would otherwise come from the gas heater already installed in the base case. The results showed that, overall, GA was the better optimization algorithm for this specific problem, with optimization times 60-70% lower than FF, depending on the case. For case 2, GA found the lowest LCOS, 0.225 €/kWh, 11.4% lower than with FF. Regarding NPV for case 2, FF found the best solution at the smallest possible power and capacity in the search space (0.1 kWh for capacity and 0.1 kW for power), with an NPV of -51.5 €, showing that for case 2, when optimizing for NPV, there is no economic gain from investing in a BESS. GA found the best solutions for case 3, for both NPV and LCOS, at 954,982 € and 0.2305 €/kWh respectively, 35.7% higher and 9.1% lower than with FF. For case 3 the results showed that the savings from installing the ELH accounted for the larger part of the profits, leading to positive NPV values. Finally, the results showed that GA can be a useful tool for finding the optimal BESS size, whereas FF became stuck at locally optimal solutions. However, the results also showed that the charge and discharge dispatch schedule used in the case study played an important role in the effectiveness of installing a BESS: in some cases the BESS was used as little as 17% of all hours in a year (case 2, optimizing for NPV). There is therefore great interest in further research on other charge and discharge schedules and their role in finding the optimal size of a BESS.
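The NPV and LCOS objectives above follow standard definitions: NPV discounts the project's cash flows against the initial investment, while LCOS divides discounted lifetime costs by discounted lifetime energy discharged. A minimal sketch with illustrative numbers, not the thesis's case-study data:

```python
# Standard financial objectives for BESS sizing (sketch).
# All cash flows, rates, and energies below are illustrative assumptions.

def npv(initial_cost, annual_cash_flows, discount_rate):
    """Net present value: -C0 + sum_t CF_t / (1 + r)^t."""
    return -initial_cost + sum(
        cf / (1 + discount_rate) ** t
        for t, cf in enumerate(annual_cash_flows, start=1))

def lcos(initial_cost, annual_costs, annual_discharge_kwh, discount_rate):
    """Levelized cost of storage: discounted costs / discounted kWh out."""
    costs = initial_cost + sum(
        c / (1 + discount_rate) ** t
        for t, c in enumerate(annual_costs, start=1))
    energy = sum(e / (1 + discount_rate) ** t
                 for t, e in enumerate(annual_discharge_kwh, start=1))
    return costs / energy

# 10-year project: 50,000 upfront, 7,000/year arbitrage revenue, 5% rate.
value = npv(50_000, [7_000] * 10, 0.05)          # positive -> invest

# Same asset viewed as cost per kWh discharged:
cost_per_kwh = lcos(50_000, [500] * 10, [20_000] * 10, 0.05)
```

In the sizing problem, capacity and power enter through the cash flows and discharged energy, which is why the meta-heuristics above search over those two variables.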
