  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
601

Design of a 3D integrated circuit for manipulating and sensing biological nanoparticles

Dickerson, Samuel J 25 September 2007 (has links)
We present the design of a mixed-technology microsystem for electronically manipulating and optically detecting nanometer-scale particles in a fluid. This lab-on-a-chip is designed using 3D integrated circuit technology. By taking advantage of processing features inherent to 3D chip-stacking technology, we create very dense dielectrophoresis electrode arrays. During the 3D fabrication process, the top-most chip tier is assembled upside down and the substrate material is removed. This puts the polysilicon layer, which is used to create geometries with the process minimum feature size, in close proximity to a fluid channel etched into the top of the stack. This technique allows us to create electrode arrays that have a gap spacing of 270 nm in a 0.18 µm SOI technology. Using 3D CMOS technology also provides the additional benefit of being able to densely integrate analog and digital control circuitry for the electrodes by using the additional levels of the chip stack. For sensing particles that are manipulated by dielectrophoresis, we present a method by which randomly distributed nanometer-scale particles can be arranged into periodic striped patterns, creating an effective diffraction grating. The efficiency of this grating can be used to perform a label-free optical analysis of the particles. The functionality of the 3D lab-on-a-chip is verified with simulations of Kaposi's sarcoma-associated herpes virus particles, which have a radius of approximately 125 nm, being manipulated by dielectrophoresis and detected optically.
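The manipulation mechanism in the abstract above is dielectrophoresis. As an illustrative sketch (not the thesis's model), the standard time-averaged DEP force on a lossless spherical particle is F = 2π·ε_m·r³·Re[K]·∇|E|², with K the Clausius-Mossotti factor; all numerical values below are assumed for illustration.

```python
# Illustrative sketch of the time-averaged dielectrophoresis (DEP) force on a
# spherical particle, assuming real (lossless) permittivities for simplicity.
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def clausius_mossotti(eps_particle, eps_medium):
    """Real-valued Clausius-Mossotti factor for a lossless sphere."""
    return (eps_particle - eps_medium) / (eps_particle + 2.0 * eps_medium)

def dep_force(radius_m, eps_medium_rel, cm_factor, grad_E2):
    """Time-averaged DEP force: F = 2*pi*eps_m*r^3*Re[K]*grad|E|^2 (newtons)."""
    return 2.0 * math.pi * EPS0 * eps_medium_rel * radius_m**3 * cm_factor * grad_E2

# Example: a 125 nm virus-sized particle in water (relative permittivity ~78);
# the particle permittivity and field gradient are made-up illustrative values.
K = clausius_mossotti(eps_particle=60.0, eps_medium=78.0)
F = dep_force(125e-9, 78.0, K, grad_E2=1e15)  # grad|E|^2 in V^2/m^3
print(K, F)
```

A negative K (negative DEP) pushes particles toward field minima, which is one way periodic electrode arrays can gather particles into stripes.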
602

Application of Multiobjective Optimization to Determining an Optimal Left Ventricular Assist Device (LVAD) Pump speed

McConahy, Douglas 25 September 2007 (has links)
A Left Ventricular Assist Device (LVAD) is a mechanical pump used to assist the weakened left ventricle to pump blood to the entire body. One method of controlling pump speed is using a closed-loop controller that changes the pump speed based on the patient's level of activity and demand for cardiac output. An important aspect of the development of a closed-loop controller is the selection of the desired pump speed. Pump speed must be chosen such that the patient receives adequate cardiac output for his/her level of activity. The pump must also operate in a safe physiological operating region, placing constraints on other hemodynamic parameters. This work presents the pump speed selection problem as a multiobjective optimization problem, considering constraints on cardiac output, left atrial pressure, and arterial pressure. A penalty function is assigned to each hemodynamic variable, and a mathematical model of the LVAD and cardiovascular system is used to map the penalty functions as functions of the hemodynamic parameters to penalty functions as functions of pump speed. The penalties for the different variables are combined by forming a weighted sum, and the best set of pump speeds is determined by minimizing the combined penalty functions using different sets of weights. The resulting set of best pump speeds forms the noninferior set (Zadeh, IEEE Trans. on Automatic Control, 1967). It was discovered that the noninferior set contains discontinuities, so the concept of a modified noninferior set known as the Clinician's noninferior set is introduced. A decision support system (DSS) is presented that allows clinicians to determine a single pump speed from the noninferior set by investigating the effects of different speeds on the hemodynamic variables. The DSS is also a tool that can be utilized to help clinicians develop a better understanding of how to assign weights to the different hemodynamic variables.
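The weighted-sum scan described in this abstract can be sketched in a few lines. The quadratic penalty curves and speed range below are made-up stand-ins for the model-derived penalties on cardiac output, left atrial pressure, and arterial pressure; only the mechanism (sweep weights, collect minimizers) reflects the abstract.

```python
# Minimal weighted-sum sketch: sweep weight vectors, minimize the combined
# penalty over candidate speeds, and collect the set of minimizers (an
# approximation of the noninferior set). All curves/values are illustrative.
speeds = [s / 10.0 for s in range(20, 41)]  # candidate pump speeds, arbitrary units

def p_co(s):  return (s - 3.2) ** 2   # penalty for inadequate cardiac output
def p_lap(s): return (s - 2.6) ** 2   # penalty on left atrial pressure
def p_ap(s):  return (s - 2.9) ** 2   # penalty on arterial pressure

def best_speed(weights):
    """Minimize the weighted sum of penalties over the candidate speeds."""
    w1, w2, w3 = weights
    return min(speeds, key=lambda s: w1 * p_co(s) + w2 * p_lap(s) + w3 * p_ap(s))

# Sweep weight combinations; each minimizer is a "best" trade-off speed.
grid = [i / 4.0 for i in range(5)]
noninferior = sorted({best_speed((w1, w2, 1.0)) for w1 in grid for w2 in grid})
print(noninferior)
```

With convex penalties every minimizer lies between the individual optima; the discontinuities the thesis reports arise from the real, non-convex cardiovascular model, which this toy example does not capture.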
603

The Role of Surface Plasmon-Polaritons in the Propagation of Light in Sub-Wavelength Slits

Wuenschell, Jeff 25 September 2007 (has links)
Over the past few decades optics has begun to take over some of the duties of electronics. The primary focus of recent study in electronics has been miniaturization; meanwhile, a similar trend has begun in optics research with the growth of the nano-optics field. One of the key interests of this field is surface plasmon-polaritons: propagating light bound to the interface between a metal and a dielectric. Electromagnetic waves in the form of surface plasmons break some of the rules of classical optics. Classical optics predicts that light cannot propagate through an aperture much smaller than about half its wavelength; the best it can do is tunnel through, which tends to result in weak transmission when the aperture is reasonably long. Experimentally, it has been shown that the wavelength dependence is significantly different from that predicted by classical theory. Some structures have been shown to produce even more astonishing results; for example, a thick nano-slit array can exhibit near 100% normalized peak transmission. The purpose of this thesis will be to analytically and numerically study light propagation in sub-wavelength metallic slits. It will be shown that symmetric surface plasmon modes are the primary carriers of electromagnetic power in a sufficiently small slit. The dynamics of the power and polarization charge inherent in the propagation of these modes will be analyzed. Finally, these topics will be discussed with respect to possible applications, focusing on the utilization of sub-wavelength metallic slits for chemical sensing.
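The binding of light to a metal/dielectric interface mentioned above follows from the textbook single-interface SPP dispersion relation, k_spp = k₀·sqrt(ε_m·ε_d/(ε_m+ε_d)). A small sketch (with an assumed, silver-like permittivity, not a value from the thesis) shows the effective mode index exceeding 1, which is what makes the mode non-radiating:

```python
# Textbook SPP dispersion at a single metal/dielectric interface,
# using real permittivities for simplicity (losses neglected).
import math

def spp_effective_index(eps_metal, eps_dielectric):
    """Effective mode index n_eff = k_spp / k0 for a bound SPP."""
    ratio = eps_metal * eps_dielectric / (eps_metal + eps_dielectric)
    return math.sqrt(ratio)

# Silver-like metal (eps ~ -20 in the visible) against air.
n_eff = spp_effective_index(-20.0, 1.0)
print(n_eff)  # > 1: the mode is bound to the interface and cannot radiate
```

An n_eff above the dielectric's index means the SPP wavevector exceeds that of any propagating plane wave in the dielectric, so the field decays evanescently away from the interface.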
604

Signal design for Multiple-Antenna Systems and Wireless Networks

Song, Xiaofei 25 September 2007 (has links)
This dissertation is concerned with signal design problems for Multiple Input and Multiple Output (MIMO) antenna systems and wireless networks. Three related but distinct problems are considered. The first problem considered is the design of space-time codes for MIMO systems in the case when neither the transmitter nor the receiver knows the channel. We present the theoretical concept of communicating over a block fading channel using Layered Unitary Space Time Codes (LUSTC), where the input signal is formed as a product of a series of unitary matrices with corresponding dimensionality. We show that the channel capacity achieved with isotropically distributed (i.d.) input signaling and optimal decoding can also be achieved by a layered i.d. signaling scheme together with low-complexity successive decoding. The closed-form layered channel capacity is obtained, which serves as a design guideline for practical LUSTC. In the design of LUSTC, a successive design method is applied to manage the problem of optimizing over a large number of parameters. The feedback of channel state information (CSI) to the transmitter in MIMO systems is known to increase the forward channel capacity. A suboptimal power allocation scheme for MIMO systems is then proposed for limited-rate feedback of CSI. We find that the capacity loss of this simple scheme is rather small compared to the optimal water-filling solution. This knowledge is applied to the design of the feedback codebook. In the codebook design, a generalized Lloyd algorithm is employed, in which the computation of the centroid is formulated as an optimization problem and solved optimally. Numerical results show that the proposed codebook design outperforms the existing algorithms in the literature. When it is not feasible to deploy multiple antennas in a wireless node due to space limitations, user cooperation is an alternative way to increase the performance of wireless networks. To this end, a coded user cooperation scheme is considered in the dissertation, which is shown to be equivalent to a coding scheme with the encoding done in a distributed manner. Utilizing a coding-theoretic bound and simulation results, we show that the coded user cooperation scheme has a significant advantage over the non-cooperative scheme.
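The water-filling baseline the abstract compares against can be sketched briefly. The channel gains and power budget below are illustrative, and the water level is found by bisection rather than any scheme from the dissertation:

```python
# Classic water-filling power allocation over parallel channel eigenmodes:
# p_i = max(0, mu - 1/g_i), with the water level mu chosen so the powers
# sum to the total budget. Gains and budget are made-up example values.
import math

def water_filling(gains, total_power, tol=1e-10):
    """Return the water-filling power allocation via bisection on mu."""
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    while hi - lo > tol:
        mu = (lo + hi) / 2.0
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > total_power:
            hi = mu
        else:
            lo = mu
    mu = (lo + hi) / 2.0
    return [max(0.0, mu - 1.0 / g) for g in gains]

gains = [2.0, 1.0, 0.25]
powers = water_filling(gains, total_power=3.0)
capacity = sum(math.log2(1.0 + g * p) for g, p in zip(gains, powers))
print(powers, capacity)
```

Note how the weakest mode (gain 0.25) receives no power at this budget; limited-rate CSI feedback schemes like the one proposed trade a small capacity loss against not having to convey the full gain vector.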
605

System Identification of Wrist Stiffness in Parkinson's Disease Patients

Sprague, Chris 30 January 2008 (has links)
The purpose of this work is to investigate the characteristics of motor control systems in Parkinson's disease patients. ARMAX system identification was performed to identify the intrinsic and reflexive (the non-controllable and controllable) components of wrist stiffness, enabling a better understanding of the problems associated with Parkinson's disease. The results show that the intrinsic stiffness dynamics represent the vast majority of the total stiffness in the wrist joint and that the reflexive stiffness dynamics are attributable to a tremor commonly found in Parkinson's disease patients. It was found that Parkinsonian rigidity, a symptom of Parkinson's disease, interferes with the known and traditional methods for separating intrinsic and reflexive components. Resolving this problem could lead to early detection of Parkinson's disease in patients not exhibiting typical symptoms, analytical measurement of the severity of the disease, and a means of testing the effectiveness of new medicines.
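The regression step behind this kind of identification can be illustrated with a model simpler than the ARMAX structure the thesis uses: a first-order ARX fit, y[k] = a·y[k−1] + b·u[k−1], solved by least squares. The dynamics and input below are invented purely to show the fit recovering known parameters:

```python
# Least-squares fit of a first-order ARX model y[k] = a*y[k-1] + b*u[k-1]
# via the 2x2 normal equations. (The thesis uses ARMAX; this simpler ARX
# sketch only illustrates the regression step of system identification.)

def fit_arx1(u, y):
    """Return (a, b) minimizing the sum of squared one-step prediction errors."""
    s_yy = s_yu = s_uu = s_y1 = s_u1 = 0.0
    for k in range(1, len(y)):
        y1, u1 = y[k - 1], u[k - 1]
        s_yy += y1 * y1
        s_yu += y1 * u1
        s_uu += u1 * u1
        s_y1 += y1 * y[k]
        s_u1 += u1 * y[k]
    det = s_yy * s_uu - s_yu * s_yu
    a = (s_y1 * s_uu - s_u1 * s_yu) / det
    b = (s_yy * s_u1 - s_yu * s_y1) / det
    return a, b

def simulate(a, b, u):
    y = [0.0]
    for k in range(1, len(u)):
        y.append(a * y[k - 1] + b * u[k - 1])
    return y

u = [1.0 if (k // 7) % 2 == 0 else -1.0 for k in range(100)]  # square-wave input
y = simulate(0.8, 0.5, u)   # noiseless data from known parameters
a_hat, b_hat = fit_arx1(u, y)
print(a_hat, b_hat)
```

On noiseless data the fit is exact; the ARMAX extension adds a moving-average noise model, which matters when, as here, tremor contaminates the measurements.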
606

CONFLICT RESOLUTION AND TRAFFIC COMPLEXITY OF MULTIPLE INTERSECTING FLOWS OF AIRCRAFT

Treleaven, Kyle B 30 January 2008 (has links)
This paper proposes a general framework to study conflict resolution for multiple intersecting flows of aircraft in a planar airspace. The conflict resolution problem is decomposed into a sequence of sub-problems, each involving only two intersecting flows of aircraft. The strategy for achieving the decomposition is to displace the aircraft flows so that they intersect in pairs, instead of all at once, and so that the resulting conflict zones have no overlap. A conflict zone is defined as a circular area centered at the intersection of a pair of flows which allows aircraft approaching the intersection to resolve conflict completely within the conflict zone, without straying outside. An optimization problem is then formulated to displace the aircraft flows in a way that keeps airspace demand as low as possible. Although this optimization problem is difficult to solve in general due to its non-convex nature, a closed-form solution can be obtained for three intersecting flows. The metric used for the airspace demand is the radius of the smallest circular region (control space) encompassing all of the non-overlapping conflict zones. This radius can also be used as an indication of traffic complexity for multiple intersecting flows of aircraft. It is shown that the required control-space radius grows as the fourth power of the number of intersecting flows of aircraft in a symmetric configuration.
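Two of the geometric checks in the framework above are easy to sketch: verifying that conflict zones (circles at the pairwise flow intersections) do not overlap, and measuring the enclosing control space. The sketch below simplifies by centering the control space at the origin (the paper's smallest enclosing circle need not be origin-centered), and the zone coordinates are invented:

```python
# Geometry sketch: pairwise non-overlap of circular conflict zones, and the
# radius of the smallest ORIGIN-CENTERED circle containing them all (a
# simplification of the paper's control-space metric). Values are illustrative.
import math

def zones_disjoint(zones):
    """zones: list of (x, y, r). True if no two circles overlap."""
    for i in range(len(zones)):
        for j in range(i + 1, len(zones)):
            (x1, y1, r1), (x2, y2, r2) = zones[i], zones[j]
            if math.hypot(x1 - x2, y1 - y2) < r1 + r2:
                return False
    return True

def control_space_radius(zones):
    """Radius of the smallest origin-centered circle enclosing every zone."""
    return max(math.hypot(x, y) + r for x, y, r in zones)

# Three displaced pairwise intersections, e.g. for three flows.
zones = [(2.0, 0.0, 1.0), (-1.0, 1.8, 1.0), (-1.0, -1.8, 1.0)]
print(zones_disjoint(zones), control_space_radius(zones))
```

The displacement optimization in the paper pushes these circles apart just enough to be disjoint while keeping the enclosing radius, and hence airspace demand, small.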
607

Computational and Robotic Models of Human Postural Control

Mahboobin, Arash 30 January 2008 (has links)
Currently, no bipedal robot exhibits fully human-like characteristics in terms of its postural control and movement. Current biped robots move more slowly than humans and are much less stable. Humans utilize a variety of sensory systems to maintain balance, primary among them being the visual, vestibular and proprioceptive systems. A key finding of human postural control experiments has been that the integration of sensory information appears to be dynamically regulated to adapt to changing environmental conditions and the available sensory information, a process referred to as "sensory re-weighting." In contrast, in robotics, the emphasis has been on controlling the location of the center of pressure based on proprioception, with little use of vestibular signals (inertial sensing) and no use of vision. Joint-level PD control with only proprioceptive feedback forms the core of robot standing balance control. More advanced schemes have been proposed but not yet implemented. The multiple sensory sources used by humans to maintain balance allow for more complex sensorimotor strategies not seen in biped robots, and arguably contribute to robust human balance function across a variety of environments and perturbations. Our goal is to replicate this robust human balance behavior in robots. In this work, we review results exploring sensory re-weighting in humans, through a series of experimental protocols, and describe implementations of sensory re-weighting in simulation and on a robot.
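The sensory re-weighting idea described above can be caricatured in a few lines: each sensory channel's contribution is scaled by its current reliability and the weights are renormalized, so losing one channel (e.g. closing the eyes) redistributes influence to the others. All numbers below are invented for illustration, not taken from the experiments:

```python
# Toy sketch of sensory re-weighting: fuse sway estimates from three channels
# with weights down-regulated when a channel is unreliable, then renormalized.

def reweight(base_weights, reliability):
    """Scale each weight by its channel's reliability in [0, 1]; renormalize."""
    scaled = [w * r for w, r in zip(base_weights, reliability)]
    total = sum(scaled)
    return [s / total for s in scaled]

def fused_estimate(estimates, weights):
    return sum(e * w for e, w in zip(estimates, weights))

base = [0.4, 0.3, 0.3]                            # vision, vestibular, proprioception
w_eyes_closed = reweight(base, [0.0, 1.0, 1.0])   # vision removed entirely
sway = fused_estimate([2.0, 1.0, 1.2], w_eyes_closed)
print(w_eyes_closed, sway)
```

In human experiments the re-weighting is dynamic and gradual rather than an instantaneous renormalization, but the renormalization captures why removing one channel increases the gain attributed to the remaining ones.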
608

SELF-POWERED FIBER BRAGG GRATING SENSORS

McMillen, Benjamin Wesley 30 January 2008 (has links)
Fiber Bragg gratings (FBGs) are key components for optical sensing and communication. Traditionally, fiber grating sensors were purely passive, but recent developments have been made to allow active tuning of these sensors. These tuning methods, though effective, are often bulky, cumbersome, and expensive to package. This thesis demonstrates an approach for tuning in-fiber Bragg grating sensors by optical energy carried by the same optical fiber. Optical energy carried by the fiber was used to heat in-fiber Bragg gratings to alter the grating response to the surrounding media. This tuning technique requires no external actuation or expensive packaging. Through the use of a simple metallic film and the delivery of high-power laser light to the grating, 'active' tuning is obtained.

Two applications of self-powered FBG technology are demonstrated: a level sensor capable of measuring discrete liquid levels, and a vacuum sensor with sensitivity into the milli-torr range. These sensors are also a demonstration of the networkability of FBG sensor arrays, allowing for large multipoint sensor networks. In addition, both sensors have dual functionality, being capable of sensing local temperature in addition to vacuum and liquid levels. These sensors are comparable to or better than most MEMS- and fiber-based technologies. Optical fiber in both these applications serves as a conduit for both the signal-carrying light and the power-delivering light used to tune the gratings. This new self-powered FBG-based technology provides an innovative solution to fiber sensing, allowing the design of versatile sensors without compromising their intrinsic benefits. Not only does the one-fiber solution provide lower design costs by utilizing a single feed-through, but it also boasts simple packaging, long lifetime, reliable operation in harsh environments, and immunity to electromagnetic fields.
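A back-of-the-envelope sketch of the thermal tuning that powers these sensors: the Bragg wavelength is λ_B = 2·n_eff·Λ, and heating shifts it by roughly λ_B·(α + ξ)·ΔT. The coefficients below are typical silica values and the grating parameters are assumed, not the thesis's measurements:

```python
# Bragg wavelength and its thermal shift for a silica FBG.
# alpha: thermal-expansion coefficient; xi: thermo-optic coefficient.
# Typical silica values are assumed; not measured values from the thesis.

def bragg_wavelength(n_eff, period_nm):
    """lambda_B = 2 * n_eff * grating period (both outputs in nm)."""
    return 2.0 * n_eff * period_nm

def thermal_shift_nm(lam_nm, delta_T, alpha=0.55e-6, xi=8.6e-6):
    """Approximate wavelength shift for a temperature change of delta_T (C)."""
    return lam_nm * (alpha + xi) * delta_T

lam = bragg_wavelength(1.447, 535.0)          # ~1548 nm telecom-band grating
shift = thermal_shift_nm(lam, delta_T=50.0)   # heating by 50 C
print(lam, shift)
```

A shift of under a nanometer for a 50 °C rise shows why a metallic film absorbing high-power light in the fiber is a practical tuning actuator: small, localized temperature changes move the reflection peak by spectrally resolvable amounts.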
609

A Physical Implementation with Custom Low Power Extensions of a Reconfigurable Hardware Fabric

Dhanabalan, Gerold Joseph 09 June 2008 (has links)
The primary focus of this thesis is on the physical implementation of the SuperCISC Reconfigurable Hardware Fabric (RHF). The SuperCISC RHF provides a fast time-to-market solution that approximates the benefits of an ASIC (Application Specific Integrated Circuit) while retaining the design flow of an embedded software system. The fabric, which consists of computational ALU stripes and configurable multiplexer-based interconnect stripes, has been implemented in the IBM 0.13 µm CMOS process using Cadence SoC Encounter. As the entire hardware fabric utilizes a combinational flow, glitching power consumption is a potential problem inherent to the fabric. A CMOS thyristor-based programmable delay element has been designed in the IBM 0.13 µm CMOS process to minimize the glitch power consumed in the hardware fabric. The delay element was characterized for use in the IBM standard cell library to synthesize standard-cell ASIC designs requiring this capability, such as the SuperCISC fabric. The thesis also introduces a power-gated memory solution, which can be used to increase the size of an EEPROM memory for use in SoC-style applications. A macromodel of the EEPROM has been used to model the erase, program and read characteristics of the EEPROM. This memory is designed for use in the fabric for storing encryption keys, etc.
610

An Analysis of Minimum Entropy Time-Frequency Distributions

Bradley, Paul Abram 09 June 2008 (has links)
The subject area of time-frequency analysis is concerned with creating meaningful representations of signals in the time-frequency domain that exhibit certain properties. Different applications require different characteristics in the representation. Some of the properties that are often desired include satisfying the time and frequency marginals, positivity, high localization, and strong finite support. Proper time-frequency distributions, which are defined as distributions that are manifestly positive and satisfy both the time and frequency marginals, are of particular interest since they can be viewed as a joint time-frequency density function and ensure strong finite support. Since an infinite number of proper time-frequency distributions exist, it is often necessary to impose additional constraints on the distribution in order to create a meaningful representation of the signal. A significant amount of research has been devoted to finding constraints that produce meaningful representations. Recently, the use of minimum entropy was proposed to create time-frequency distributions that are highly localized and contain a large number of zero-points. The proposed method starts with an initial distribution that is proper and iteratively reduces the total entropy of the distribution while maintaining the positivity and marginal properties. The result of this method is a highly localized, proper TFD. This thesis will further explore and analyze the proposed minimum entropy algorithm. First, the minimum entropy algorithm and the concepts behind the algorithm will be introduced and discussed. After the introduction, a simple example of the method will be examined to help gain a basic understanding of the algorithm. Next, we will explore different rectangle selection methods, which define the order in which the entropy of the distribution is minimized. We will then evaluate the effect of using different initial distributions with the minimum entropy algorithm. Afterwards, the results of the different rectangle selection methods and initial distributions will be analyzed and some more advanced concepts will be explored. Finally, we will draw conclusions and consider the overall effectiveness of the algorithm.
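The objective the algorithm above minimizes is the Shannon entropy of the (positive, normalized) distribution. A tiny sketch with invented 2x2 distributions shows the basic relation driving the method: concentrating probability mass lowers entropy, so entropy minimization favors localized distributions:

```python
# Shannon entropy of a positive, normalized time-frequency distribution,
# treated here as a plain 2-D array of cells. Example arrays are invented.
import math

def entropy(tfd):
    """Shannon entropy in bits of a nonnegative array summing to 1."""
    return -sum(p * math.log2(p) for row in tfd for p in row if p > 0.0)

uniform = [[0.25, 0.25], [0.25, 0.25]]   # maximally spread: 2 bits
peaked  = [[0.70, 0.10], [0.10, 0.10]]   # more localized: lower entropy
print(entropy(uniform), entropy(peaked))
```

The hard part, which this sketch omits, is reducing entropy while preserving both marginals; that is what the rectangle-based update steps analyzed in the thesis accomplish.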
