111

Bionano Electronics: Magneto-Electric Nanoparticles for Drug Delivery, Brain Stimulation and Imaging Applications

Guduru, Rakesh 27 September 2013 (has links)
Nanoparticles are often considered efficient drug delivery vehicles for dispensing therapeutic payloads specifically to diseased sites in the patient's body, thereby minimizing the toxic side effects of the payloads on healthy tissue. However, the fundamental physics underlying the nanoparticles' intrinsic interaction with the surrounding cells remains inadequately elucidated. The ability of nanoparticles to release their payloads on demand under external control, without depending on the physiological conditions of the target sites, has the potential to enable patient- and disease-specific nanomedicine, also known as Personalized NanoMedicine (PNM). In this dissertation, magneto-electric nanoparticles (MENs) were utilized for the first time to enable important functions, such as (i) a field-controlled, high-efficacy, dissipation-free targeted drug delivery and on-demand release system operating at the sub-cellular level, (ii) non-invasive, energy-efficient stimulation of deep brain tissue at body temperature, and (iii) a high-sensitivity contrast agent for non-invasively mapping neuronal activity in the brain. First, this dissertation focuses on using MENs as energy-efficient, dissipation-free, field-controlled nano-vehicles for targeted delivery and on-demand release of the anti-cancer drug Paclitaxel (Taxol) and the anti-HIV drug AZT 5'-triphosphate (AZTTP) from 30-nm MENs (CoFe2O4-BaTiO3), applying low-energy DC and low-frequency (below 1000 Hz) AC fields to separate the functions of delivery and release, respectively. Second, this dissertation uses numerical simulations to study how MENs can non-invasively stimulate deep-brain neuronal activity through the application of a low-energy, low-frequency external magnetic field that activates intrinsic electric dipoles at the cellular level. Third, this dissertation describes the use of MENs to non-invasively track neuronal activity in the brain with magnetic resonance imaging and magnetic nanoparticle imaging, by monitoring changes in the magnetization of the MENs surrounding the neuronal tissue under different states. The potential therapeutic and diagnostic impact of this study is highly significant not only for HIV/AIDS, cancer, and Parkinson's and Alzheimer's diseases, but also for many CNS and other diseases where the ability to remotely control targeted drug delivery/release and diagnostics is key.
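For background, the physical mechanism exploited by MENs is the linear magnetoelectric effect: in core-shell CoFe2O4-BaTiO3 particles, the magnetostrictive core strains the piezoelectric shell, so an applied magnetic field induces an electric polarization. Shown only as a standard reference relation (not the dissertation's own model), the coupling can be written as

\Delta P_i = \alpha_{ij}\,\Delta H_j, \qquad \alpha_E \equiv \frac{\partial E}{\partial H},

where \alpha_{ij} is the magnetoelectric coupling tensor and \alpha_E is the commonly quoted ME voltage coefficient characterizing the electric field generated per unit applied magnetic field.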
112

Comprehending Performance of Cross-Frames in Skewed Straight Steel I-Girder Bridges

Gull, Jawad H 20 February 2014 (has links)
The effects of support skew in steel bridges can present significant challenges during construction. The tendency of girders to twist, or lay over, during construction poses a particularly challenging problem for detailing the cross-frames that brace the steel girders. Methods of detailing cross-frames have been investigated in the past to identify some of the issues related to the behavior of straight and skewed steel bridges. However, the absence of a complete and simplified design approach has led to disputes between stakeholders, costly repairs, and construction delays. The main objective of this research is to develop a complete and simplified design approach that considers the construction, fabrication, and detailing of skewed bridges. This objective is achieved by comparing different detailing methods, understanding the mechanism by which skew effects develop in steel bridges, recommending simplified methods of analysis to evaluate them, and developing a complete and simplified design procedure for skewed bridges. Girder layovers, flange lateral bending stresses, cross-frame forces, components of vertical deflections, components of vertical reactions, and lateral reactions or lateral displacements are affected by the detailing method and are referred to as lack-of-fit effects. The main conclusion of this research is that the lack-of-fit effects for the Final Fit detailing method at the steel dead load stage are equal and opposite to the lack-of-fit effects for the Erected Fit detailing method at the total dead load stage, as expressed below. This conclusion allows 2D grid analyses to be used for estimating these lack-of-fit effects for different detailing methods. 3D erection simulations are developed for estimating the fit-up forces required to attach the cross-frames to the girders. The maximum fit-up force estimated from the 2D grid analysis shows reasonable agreement with that obtained from the erection simulations. The erection sequence that reduces the maximum fit-up force is also found through erection simulations. Line girder analysis is recommended for calculating cambers for the Final Fit detailing method. A combination of line girder analysis and 2D grid analysis is recommended for calculating cambers for the Erected Fit detailing method. Finally, flowcharts are developed that facilitate the selection of a detailing method and show the necessary design checks.
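Writing the lack-of-fit effects (layovers, flange lateral bending stresses, cross-frame forces, and deflection and reaction components) generically as \Delta_{\mathrm{LOF}}, the main conclusion stated above can be expressed compactly as

\Delta_{\mathrm{LOF}}^{\text{Final Fit, steel dead load}} = -\,\Delta_{\mathrm{LOF}}^{\text{Erected Fit, total dead load}},

i.e., one set of lack-of-fit effects follows from the other by a sign change, which is what permits a single 2D grid analysis to serve both detailing methods.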
113

The Impact of Quantum Size Effects on Thermoelectric Performance in Semiconductor Nanostructures

Kommini, Adithya 24 March 2017 (has links)
An increasing need for effective thermal sensors, together with dwindling energy resources, has created renewed interest in thermoelectric (TE), or solid-state, energy conversion and refrigeration using semiconductor-based nanostructures. Effective control of electron and phonon transport through confinement, interface, and quantum effects has made nanostructures a promising route to more efficient thermoelectric energy conversion. This thesis studies two well-known approaches, confinement and energy filtering, and implements improvements to achieve higher thermoelectric performance. The effect of confinement is evaluated using a gated 2D material and exploiting features in the density of states. In addition, a novel controlled-scattering approach is taken to enhance the device's thermoelectric properties. The shift in the onset of scattering, relative to sharp features in the density of states, creates a window shape in the transport integral. Along with controlled scattering, effective utilization of the Fermi window can provide a considerable enhancement in thermoelectric performance; the standard transport integrals involved are summarized below. The conclusions drawn from these results help in selecting materials to achieve such enhanced thermoelectric performance. In addition, the electron filtering approach is studied using the Wigner approach for treating carrier-potential interactions, coupled with the Boltzmann transport equation solved using Rode's iterative method, particularly in periodic potential structures. This study shows the effect of rapid potential variations in materials, as seen in superlattices, and identifies the parameters that contribute significantly to thermoelectric performance. Parameters such as the period length, height, and smoothness of the periodic potentials are studied, and their effect on thermoelectric performance is discussed. A combination of the above two methods can help in understanding the effect of confinement and the key requirements for designing a nanostructured thermoelectric device with enhanced performance.
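For reference, the transport integrals referred to above take the standard linearized-Boltzmann (Landauer) form; this is textbook background rather than the thesis's specific derivation. With \Sigma(E) a transport distribution function and -\partial f_0/\partial E the Fermi window,

\sigma = q^2 \int \Sigma(E)\left(-\frac{\partial f_0}{\partial E}\right)\mathrm{d}E, \qquad
S = -\frac{1}{qT}\,\frac{\int \Sigma(E)\,(E-E_F)\left(-\frac{\partial f_0}{\partial E}\right)\mathrm{d}E}{\int \Sigma(E)\left(-\frac{\partial f_0}{\partial E}\right)\mathrm{d}E},

so sharp features in \Sigma(E) that fall inside the Fermi window, combined with an abrupt onset of scattering, shape the integrand into the window form mentioned above.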
114

Thermal Numerical Analysis of Vertical Heat Extraction Systems in Landfills

Onnen, Michael Thomas 01 June 2014 (has links)
An investigation was conducted to determine the response of landfills to the operation of a vertical ground source heat pump (i.e., heat extraction system, HES). Elevated landfill temperatures, reported by various researchers, impact the engineering performance of landfill systems. A numerical model was developed to analyze the influence of vertical HES operation on landfills as a function of climate and operational conditions. A 1-D model of the vertical profile of a landfill was developed to approximate fluid temperatures in the HES (an illustrative sketch of such a model follows this abstract). A 2-D model was then analyzed over a 40-year period, using the approximate fluid temperatures, to determine the heat flux applied by the HES and the resulting landfill temperatures. Vertical HES configuration simulations consisted of 15 simulations varying five fluid velocities and three pipe sizes. Operational simulations consisted of 26 parametric evaluations of waste placement, waste height, waste filling rate, vertical landfill expansions, HES placement time, climate, and waste heating. Vertical HES operation in a landfill environment was determined to have three phases: a heat extraction phase, a transitional phase, and a ground source heat pump phase. During the heat extraction phase, the heat extraction rate ranged from 0 to 2550, 310 to 3080, and 0 to 530 W for the first year, peak year, and last year of HES operation, respectively. The maximum total heat energy extracted during the heat extraction phase ranged from 163,000 to 1,400,000 MJ. The maximum difference between baseline landfill temperatures and temperatures 0 m away from the HES ranged from 5.2 to 43.2°C. Climate was determined to be the most significant factor impacting the vertical HES. Trends pertaining to the performance of numerous variables (fluid velocity, pipe size, waste placement, waste height, waste filling rate, vertical landfill expansions, HES placement time, climate, and waste heating) were determined during this investigation. Increasing the fluid velocity until turbulent flow was reached increased the heat extraction rate of the system; once turbulent flow was reached, the increase in heat extraction rate with increasing fluid velocity was negligible. Increasing the pipe diameter increased the heat extraction rate. Wastes placed in warmer months increased the total heat energy extracted. Increasing the waste height increased the peak heat extraction rate by 43 W per meter of waste height. Optimum heat extraction per meter of HES occurred for a 30 m waste height. Increasing the waste filling rate increased the total heat energy extracted. Heat extraction rates decreased as the time between vertical landfill expansions increased. The total heat energy extracted over a 35-year period decreased by approximately 21,500 MJ/year for every year after the final cover was placed until HES operation began. For seasonal HES operation, the total heat energy obtained each year differed, and the fourth year of operation yielded the most energy. Wet climates with higher heat-generating capacities yielded increased heat extraction rates. Maximum temperature differences in the landfill due to the HES increased by 16.6°C for every 1 W/m3 increase in peak heat generation rate. When a vertical HES was used for waste heating, up to a 13.7% increase in methane production was predicted. Engineering considerations (spacing, financial impact, and effect on gas production) for implementing a vertical HES in a landfill were investigated. Spacing requirements between the wells depended on the maximum temperature differences in the landfill. Spacing requirements of 12, 12, 16, and 22 m are recommended for waste heating, winter-only HES operation, maximum landfill temperature differences less than 17°C, and maximum landfill temperature differences greater than 17°C, respectively. A financial analysis was conducted on the cost of implementing a single vertical HES well. The cost per unit of energy extracted ranged from 0.227 to 0.150 $/MJ for a 50.8 mm pipe with a 1.0 m/s fluid velocity and a 50.8 mm pipe with a 0.3 m/s fluid velocity, respectively. A vertical HES could potentially increase revenue from a typical landfill gas energy project by $577,000 per year.
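The following is an illustrative sketch only: the thesis's numerical model is not reproduced here, and every material property, boundary temperature, and HES parameter below is a placeholder assumption. The snippet shows the general shape of a 1-D explicit finite-difference model of a landfill's vertical temperature profile with volumetric heat generation and a distributed heat-extraction sink.

import numpy as np

# Illustrative 1-D explicit finite-difference model of a landfill's vertical
# temperature profile with volumetric heat generation and a simple
# heat-extraction sink.  All property values are placeholders, not the
# values used in the thesis.

L = 30.0            # waste column height (m)
nz = 61             # number of grid points
dz = L / (nz - 1)
k = 1.0             # thermal conductivity of waste (W/m-K), assumed
rho_c = 2.0e6       # volumetric heat capacity (J/m^3-K), assumed
alpha = k / rho_c   # thermal diffusivity (m^2/s)
q_gen = 1.0         # peak volumetric heat generation (W/m^3), assumed
q_hes = 40.0        # linear heat extraction rate of the HES (W/m), assumed
r_eff = 5.0         # effective radius of waste served by one well (m), assumed

dt = 0.4 * dz**2 / alpha              # explicit stability limit
n_steps = int(365 * 24 * 3600 / dt)   # simulate one year

T = np.full(nz, 25.0)                 # initial waste temperature (deg C)
T_surface, T_base = 15.0, 20.0        # cover and liner temperatures (deg C), assumed

sink = q_hes / (np.pi * r_eff**2)     # W/m of well spread over served area -> W/m^3

for _ in range(n_steps):
    T_new = T.copy()
    # interior nodes: conduction + generation - extraction
    T_new[1:-1] = T[1:-1] + dt * (
        alpha * (T[2:] - 2 * T[1:-1] + T[:-2]) / dz**2
        + (q_gen - sink) / rho_c
    )
    # Dirichlet boundaries at the cover and the liner
    T_new[0], T_new[-1] = T_surface, T_base
    T = T_new

print(f"Mid-depth temperature after one year: {T[nz // 2]:.1f} C")

A 2-D (radial-vertical) version of the same scheme, driven by fluid temperatures from a model of this kind, would play the role of the multi-decade simulations described above.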
115

CORGI: Compute Oriented Recumbent Generation Infrastructure

Hunt, Christopher Allen 01 March 2017 (has links)
Creating a bicycle with a rideable geometry is more complicated than it may appear, with today's mainstay designs having evolved through years of iteration. This slow evolution, coupled with the bicycle's intricate mechanical system, has led most builders to base their new geometries on previous work rather than expand into new design spaces. This crutch can lead to slow bicycle iteration rates, often causing bicycles to all look about the same. To combat this, several bicycle design models have been created over the years, each attempting to define a bicycle's handling characteristics given its physical geometry. However, these models often analyze a single bicycle at a time, and as such, using them in an iterative design process can be cumbersome. This work seeks to improve an existing model used by the Cal Poly Mechanical Engineering department so that it can be used in a proactive, iterative fashion (as opposed to the reactive, single-design paradigm it currently supports). This is accomplished by expanding the model's inputs to include more bicycle components as well as differently sized riders. This augmented model is then incorporated into several search platforms, ranging from a brute-force implementation to several variants using genetic algorithm concepts. These platforms allow the designer to specify a bicycle design search space and a set of riders up front, from which the algorithms search for and return strong candidate designs to the user. This in turn reduces the overhead on the designer while also potentially discovering new bicycle designs that had not previously been considered viable. Finally, a front-end was created to make it easier for the user to access these algorithms and their results.
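As a rough sketch of the genetic-algorithm variant of such a search (not CORGI's actual code): the design variables, their ranges, and the handling_score fitness below are all hypothetical placeholders, whereas CORGI evaluates candidates with the Cal Poly handling model.

import random

# Illustrative genetic-algorithm search over a bicycle geometry space.
# Parameter ranges and handling_score() are placeholders for illustration.

BOUNDS = {                      # hypothetical design variables (m, deg)
    "wheelbase":   (0.95, 1.30),
    "head_angle":  (65.0, 75.0),
    "trail":       (0.03, 0.10),
    "seat_height": (0.60, 0.90),
}

def random_design():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}

def handling_score(design):
    # Placeholder fitness: prefer a target trail and head angle.
    return -(abs(design["trail"] - 0.06) * 100 + abs(design["head_angle"] - 72.0))

def crossover(a, b):
    return {k: random.choice((a[k], b[k])) for k in BOUNDS}

def mutate(design, rate=0.1):
    out = dict(design)
    for k, (lo, hi) in BOUNDS.items():
        if random.random() < rate:
            out[k] = random.uniform(lo, hi)
    return out

def evolve(pop_size=50, generations=100):
    population = [random_design() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=handling_score, reverse=True)
        parents = population[: pop_size // 4]          # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=handling_score)

print(evolve())

A brute-force variant would simply enumerate a grid over BOUNDS and rank every design with the same fitness call, which is the trade-off the abstract describes between exhaustive search and evolutionary search.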
116

Towards Autonomous Localization of an Underwater Drone

Sfard, Nathan 01 June 2018 (has links)
Autonomous vehicle navigation is a complex and challenging task. Land and aerial vehicles often use highly accurate GPS sensors to localize themselves in their environments. These sensors are ineffective in underwater environments due to signal attenuation. Autonomous underwater vehicles instead rely on one or more of the following approaches for successful localization and navigation: inertial/dead-reckoning, acoustic signals, and geophysical data. This thesis examines autonomous localization in a simulated environment for an OpenROV Underwater Drone using a Kalman Filter. This filter performs state estimation for a dead-reckoning system exhibiting an additive error in location measurements. We evaluate the accuracy of this Kalman Filter by analyzing the effect each parameter has on accuracy, then choosing the best combination of parameter values to assess the overall accuracy of the filter. We find that the two parameters with the greatest effect on the system are the constant acceleration and the measurement uncertainty of the system. We find that the filter employing the best combination of parameters can greatly reduce measurement error and improve accuracy under typical operating conditions.
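As an illustrative sketch of the kind of filter described (not the thesis's tuned implementation): a linear Kalman filter for 1-D dead reckoning under an assumed constant acceleration, with noisy position measurements. The time step, acceleration, and noise covariances below are placeholder assumptions.

import numpy as np

# Linear Kalman filter for 1-D dead reckoning with noisy position measurements.

dt = 0.1                      # time step (s), assumed
a = 0.05                      # constant acceleration (m/s^2), assumed

F = np.array([[1.0, dt],      # state transition: [position, velocity]
              [0.0, 1.0]])
B = np.array([[0.5 * dt**2],  # control (acceleration) input model
              [dt]])
H = np.array([[1.0, 0.0]])    # only position is measured
Q = 1e-4 * np.eye(2)          # process noise covariance, assumed
R = np.array([[0.5]])         # measurement noise covariance, assumed

x = np.zeros((2, 1))          # initial state estimate
P = np.eye(2)                 # initial estimate covariance

rng = np.random.default_rng(0)
true_pos, true_vel = 0.0, 0.0

for _ in range(200):
    # Simulate the true motion and a noisy position measurement.
    true_vel += a * dt
    true_pos += true_vel * dt
    z = np.array([[true_pos + rng.normal(0.0, np.sqrt(R[0, 0]))]])

    # Predict
    x = F @ x + B * a
    P = F @ P @ F.T + Q

    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(f"estimated position {x[0, 0]:.2f} m vs true {true_pos:.2f} m")

Sweeping the assumed acceleration and the entries of R, in the spirit of the parameter study above, shows how strongly those two quantities drive the filter's accuracy.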
117

Fast, Sparse Matrix Factorization and Matrix Algebra via Random Sampling for Integral Equation Formulations in Electromagnetics

Wilkerson, Owen Tanner 01 January 2019 (has links)
Many systems designed by electrical & computer engineers rely on electromagnetic (EM) signals to transmit, receive, and extract either information or energy. In many cases, these systems are large and complex. Their accurate, cost-effective design requires high-fidelity computer modeling of the underlying EM field/material interaction problem in order to find a design with acceptable system performance. This modeling is accomplished by projecting the governing Maxwell equations onto finite-dimensional subspaces, which results in a large matrix equation representation (Zx = b) of the EM problem. In the case of integral equation-based formulations of EM problems, the M-by-N system matrix, Z, is generally dense. For this reason, when treating large problems, it is necessary to use compression methods to store and manipulate Z. One such sparse representation is provided by so-called H^2 matrices. At low-to-moderate frequencies, H^2 matrices provide a controllably accurate data-sparse representation of Z. The scale at which problems in EM are considered "large" is continuously being redefined upward. This growth in problem scale is happening not only in EM but across all other sub-fields of computational science as well. The pursuit of increasingly large problems is unwavering in all these sub-fields, and this drive has long outpaced the rate of advancement in processing and storage capabilities in computing. As a result, computational science communities now face the computational limitations of the standard linear algebraic methods that have been relied upon for decades to run quickly and efficiently on modern computing hardware. This common set of algorithms can only produce reliable results quickly and efficiently for small to mid-sized matrices that fit into the memory of the host computer. The drive to pursue larger problems has therefore begun to outpace the reasonable capabilities of these common numerical algorithms; the deterministic numerical linear algebra algorithms that have carried matrix computation this far have proven inadequate for many problems of current interest. This has computational science communities focusing on improvements in their mathematical and software approaches in order to push further advancement. Randomized numerical linear algebra (RandNLA) is an emerging area that both academia and industry believe to be a strong candidate for overcoming the limitations faced when solving massive and computationally expensive problems. This thesis presents results of recent work that uses a random sampling method (RSM) to implement algebraic operations involving multiple H^2 matrices. Significantly, this work is done in a manner that is non-invasive to an existing H^2 code base for filling and factoring H^2 matrices. The work presented thus expands the existing code's capabilities with minimal impact on existing (and well-tested) applications. In addition to this work with randomized H^2 algebra, improvements in sparse factorization methods for the compressed H^2 data structure are also presented. The reported developments in filling and factoring H^2 data structures assist in, and allow for, the further pursuit of large and complex problems in computational EM (CEM) within simulation code bases that utilize the H^2 data structure.
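For a flavor of the RandNLA ideas involved, the snippet below sketches the generic randomized range-finder / randomized SVD primitive, which approximates a matrix's dominant action from a small number of random matrix-vector samples. This is standard background only; it is not the thesis's specific random sampling method for H^2 block algebra, and all names and parameters are illustrative.

import numpy as np

# Generic randomized range-finder / randomized SVD, shown as a representative
# RandNLA primitive.  It approximates an operator's dominant singular triplets
# using only matrix-vector product routines.

def randomized_svd(apply_A, apply_At, m, n, rank, oversample=10, rng=None):
    """Approximate the top `rank` singular triplets of an m-by-n operator."""
    rng = np.random.default_rng(rng)
    k = rank + oversample
    # Sample the range of A with random Gaussian test vectors.
    Omega = rng.standard_normal((n, k))
    Y = apply_A(Omega)                    # m-by-k sample of the range
    Q, _ = np.linalg.qr(Y)                # orthonormal basis for the sampled range
    # Project A onto the basis and factor the small matrix B = Q^T A.
    B = apply_At(Q).T                     # k-by-n
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

# Quick check against a dense low-rank matrix.
m, n, r = 400, 300, 8
A = np.random.default_rng(1).standard_normal((m, r)) @ \
    np.random.default_rng(2).standard_normal((r, n))
U, s, Vt = randomized_svd(lambda X: A @ X, lambda X: A.T @ X, m, n, rank=r)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))  # near machine precision

The appeal for H^2-type codes is that such primitives need only the ability to apply a matrix block to random vectors, which is why a sampling-based approach can be layered non-invasively on top of an existing fill/factor code base.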
118

Computer solution to inverse problems of elliptic form: ∇²U(x,y) = g(a,U,x,y)

Jeter, Frederick Alvin 01 January 1971 (has links)
One important aspect of our present age of monolithic high-speed computers is the computer's capability to solve complex problems hitherto impossible to tackle due to their complexity. This paper explains how to use a digital computer to solve a specific type of problem; specifically, to find the inverse solution for a in the elliptic equation ∇²U(x,y) = g(a,U,x,y), with appropriate boundary conditions. This equation is very useful in the electronics field. The knowns are the complete set of boundary values of U(x,y) and a set of observations taken on internal points of U(x,y). Given this information, plus the specific form of the governing equation, we can solve for the unknown a. Once the computer program has been written using the techniques of quasilinearization, Newton's convergence method, discrete invariant imbedding, and sensitivity functions, we take data from the computer results and analyse it for proper convergence. These data show that there are definite limits to the usefulness and capability of the technique. One result of this study is the observation that, for the technique to function properly, the observations taken on U(x,y) must be placed in the most efficient locations, with the most efficient geometry, in the region of largest effectiveness. Another result deals with the number of observation points used: too few give insufficient information for proper program functioning, and too many tend to saturate the effectiveness of the observations. Thus this paper has two objectives: first, to develop the technique, and second, to analyse the results from the realization of the technique through the use of a computer.
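For reference, the quasilinearization (Newton) step mentioned above has a standard form; it is shown here only as textbook background, not as the paper's exact discretization. Linearizing g about the current iterate (a^{(k)}, U^{(k)}) gives

\nabla^2 U^{(k+1)} = g\!\left(a^{(k)},U^{(k)},x,y\right) + \left.\frac{\partial g}{\partial U}\right|_{k}\left(U^{(k+1)}-U^{(k)}\right) + \left.\frac{\partial g}{\partial a}\right|_{k}\left(a^{(k+1)}-a^{(k)}\right),

and the sensitivity function S = \partial U/\partial a, obtained by differentiating the governing equation with respect to a, satisfies the linear problem

\nabla^2 S = \frac{\partial g}{\partial U}\,S + \frac{\partial g}{\partial a},

which links the internal observations of U to updates of the unknown parameter a.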
119

Analytical Modeling of Tree Vibration Generated during Cutting Process

Karvanirabori, Payman 01 January 2009 (has links) (PDF)
There are several ways to cut down a tree; the piece-by-piece cutting method is studied in this research. By representing the cutting process with simple dynamic models and obtaining the governing equations of motion of the tree and the cut piece in each model, the forces generated during the cutting process were calculated. The method was then applied to a set of real data, and the resulting tree vibrations were compared with field measurements. The study is unusual in the variety of topics it covers, from dynamics and mechanics to finite element modeling of a biological system.
120

Using dynamic task allocation to evaluate driving performance, situation awareness, and cognitive load at different levels of partial autonomy

Patel, Viraj R. 08 August 2023 (has links) (PDF)
The state of the art of autonomous vehicles requires operators to remain vigilant while performing secondary tasks. The goal of this research was to investigate how dynamically allocated secondary tasks affected driving performance, cognitive load, and situation awareness. Secondary tasks were presented at rates based on the autonomy level present and on whether the autonomous system was engaged. A rapid secondary task rate was also presented for two short periods, regardless of whether autonomy was engaged. There was a three-minute familiarization phase followed by a data collection phase in which participants responded to secondary tasks while preventing the vehicle from colliding with random obstacles. After data collection, a brief survey gathered data on cognitive load, situation awareness, and relevant demographics. The data were compared to data gathered in a similar study by Cossitt [10], in which secondary tasks were presented at a controlled frequency and at a gradually increasing frequency.
