461

Development of High Throughput Screening Approaches to Target TN1549 and F Plasmid Movement

Hansen, Drew M. January 2019
The antimicrobial resistance (AMR) crisis, in which new antibiotic discovery is not keeping pace with the emergence of resistant pathogens, is driven by mobile genetic elements (MGEs). MGEs can autonomously transfer between bacteria, carrying AMR genes with them. The widespread use of antibiotics in the clinic, in agriculture, and in animal husbandry has accelerated the MGE-mediated transfer of AMR genes in the environment. However, despite playing such an important role in the AMR crisis, the dynamics and mechanisms behind the transmission of these genes are poorly understood. Furthermore, it is unknown which natural and man-made compounds inhibit or promote their movement in these environments. One way to combat the rise in AMR is to identify small molecules as probes to understand the molecular basis of transmission and apply this information to prevent MGE-mediated resistance dissemination. Since conjugation is the main mechanism of AMR gene transfer, targeting MGEs that use conjugation, such as conjugative plasmids (e.g. the F plasmid) and conjugative transposons (e.g. Tn1549), has the potential to prevent the emergence of multi-drug-resistant pathogens. In this work, a high-throughput assay modeled on Tn1549 excision was screened against a library of known bioactive compounds to find modulators of integrase and excisionase activity. Several fluoroquinolone antibiotics, including ciprofloxacin, were identified as dose-dependent inhibitors of excision that act by changing supercoiling levels in the cell. Ciprofloxacin enhanced the conjugation frequency of Tn1549 at sub-MIC concentrations relative to an untreated control and inhibited it at higher concentrations. A second project focused on a high-throughput conjugation assay based on splitting the lux operon between a donor and a recipient cell, such that only transconjugants produce luminescence, providing a readout of active gene transfer. This work furthers our understanding of how to develop assays that target MGEs and screen for inhibitors of their movement. / Thesis / Master of Science (MSc) / Antibiotics are small molecules that cure bacterial infections. However, their efficacy is fading because mobile genetic elements (MGEs) spread antimicrobial resistance genes between bacteria. Conjugative plasmids (CPs) and conjugative transposons (CTns) are two of the major types of MGEs that contribute to the dissemination of antimicrobial resistance in pathogens. The goal of this research is to search for inhibitors of CTns and CPs in order to prevent the emergence of multi-drug-resistant bacteria. High-throughput assays were designed to model both a CTn (Tn1549) and a CP (F plasmid) and to find small molecules targeting their movement. A screen of the Tn1549 excision assay identified fluoroquinolone antibiotics that inhibit excision in a dose-dependent manner and indirectly inhibit the integrase used to excise the CTn. Ciprofloxacin, a fluoroquinolone, inhibited the conjugation frequency of Tn1549. Future work will focus on identifying new inhibitors of these MGEs and characterizing them.
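To make the assay readouts concrete, here is a minimal sketch of how conjugation frequency and a dose-response fit might be computed; this illustrates standard practice, not the analysis code from the thesis, and every number and helper name in it is hypothetical.

```python
# Illustrative sketch (not from the thesis): computing conjugation frequency
# from colony counts and fitting a dose-response curve to excision data.
# All numbers below are invented.
import numpy as np
from scipy.optimize import curve_fit

def conjugation_frequency(transconjugant_cfu, donor_cfu):
    """Standard definition: transconjugants per donor cell."""
    return transconjugant_cfu / donor_cfu

def hill(dose, bottom, top, ec50, n):
    """Four-parameter logistic (Hill) dose-response model."""
    return bottom + (top - bottom) / (1.0 + (dose / ec50) ** n)

# Hypothetical excision signal at increasing ciprofloxacin doses (uM)
doses = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0])
signal = np.array([0.98, 0.95, 0.80, 0.45, 0.15, 0.05])

params, _ = curve_fit(hill, doses, signal, p0=[0.0, 1.0, 0.3, 1.0])
print(f"fitted IC50 ~ {params[2]:.2f} uM")

# Hypothetical plate counts: fold change in frequency vs untreated control
fold = conjugation_frequency(4.2e3, 1.0e8) / conjugation_frequency(1.1e3, 1.0e8)
print(f"fold change at sub-MIC dose ~ {fold:.1f}x")
```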
462

New Strategic and Dynamic Variation Reduction Techniques for Assembly Lines

Musa, Rami 24 May 2007
Variation is inevitable in any process, so it has to be dealt with effectively and economically. In assembly lines, variation can be reduced both strategically and dynamically, and implementing both types of technique is expected to further reduce the number of failed final assemblies. The dissertation is divided into three major parts. In the first part, we propose to reduce variation for assemblies by developing efficient inspection plans based on (1) historical data for existing products, or simulated data for newly developed products; (2) Monte Carlo simulation; and (3) optimization search techniques. The cost function to be minimized is the total of inspection, rework, scrap, and failure costs. The novelty of the proposed approach is three-fold. First, the use of CAD data to develop inspection plans for newly launched products has not been introduced in the literature before. Second, the frequency of inspection is the main decision variable, rather than a binary choice of whether or not to inspect a quality characteristic of a subassembly. Third, we use a realistic reaction plan (rework-scrap-keep) that mimics reality in the sense that not all out-of-tolerance items should be scrapped or reworked. At a certain stage, real-time inspection data for a batch of subassemblies may become available. In the second part of this dissertation, we propose utilizing this data in near real-time to dynamically reduce variation by assigning the inspected subassembly parts together. In proposing mathematical models, we found them hard to solve using traditional optimization techniques; therefore, we propose using heuristics. Finally, we propose exploring opportunities to reduce the aforementioned cost function by integrating the inspection planning model with the Dynamic Throughput Maximization (DTM) model. This hybrid model adds one decision variable to the inspection planning: whether to implement DTM (assemble the inspected subassemblies selectively) or to assemble the inspected items arbitrarily. We expect this hybrid implementation to substantially reduce the failure cost when assembling the final assemblies in some cases, and we solve a numerical example that supports our findings. / Ph. D.
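As a rough illustration of the first part's idea — choosing an inspection frequency by Monte Carlo evaluation of total inspection, rework, scrap, and failure cost — here is a hedged sketch; the distributions, tolerances, and cost figures are invented, not taken from the dissertation.

```python
# Hypothetical Monte Carlo cost evaluation for an inspection plan.
import numpy as np

rng = np.random.default_rng(0)

def total_cost(inspect_freq, n=100_000,
               c_inspect=1.0, c_rework=5.0, c_scrap=20.0, c_fail=200.0,
               tol=3.0, rework_limit=5.0):
    """Expected per-unit cost for a given inspection frequency in [0, 1]."""
    x = rng.normal(0.0, 1.5, n)              # simulated quality characteristic
    inspected = rng.random(n) < inspect_freq
    out = np.abs(x) > tol
    # Reaction plan (rework-scrap-keep): rework moderate deviations, scrap
    # extreme ones; uninspected out-of-tolerance items escape and fail.
    rework = inspected & out & (np.abs(x) <= rework_limit)
    scrap = inspected & out & (np.abs(x) > rework_limit)
    fail = ~inspected & out
    cost = (inspected * c_inspect + rework * c_rework
            + scrap * c_scrap + fail * c_fail)
    return cost.mean()

# Sweep inspection frequency to find the cheapest plan
freqs = np.linspace(0, 1, 11)
best = min(freqs, key=total_cost)
print(f"lowest expected cost at inspection frequency ~ {best:.1f}")
```

A real implementation would replace the normal draw with historical or CAD-simulated tolerance data and search with the heuristics the dissertation proposes; the grid sweep here just shows the shape of the decision problem.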
463

Formal Approaches to Globally Asynchronous and Locally Synchronous Design

Xue, Bin 30 September 2011
The research reported in this dissertation is motivated by two trends in the system-on-chip (SoC) design industry. First, due to relentless technology scaling, interconnect delays are growing relative to gate delays, leading to multi-cycle delays in communication between functional blocks on the chip; this makes implementing a synchronous global clock difficult and power-consuming. As a result, globally asynchronous and locally synchronous (GALS) designs have been proposed for future SoCs. Second, due to time-to-market pressure and productivity gains, intellectual property (IP) block reuse is a rising trend in the SoC design industry. Predesigned IPs may already be optimized and verified for timing at a certain clock frequency, so when they are used in an SoC, GALS offers a good solution that avoids reoptimizing or redesigning the existing IPs. A special case of GALS, known as the Latency-Insensitive Protocol (LIP), lets designers keep the well-understood synchronous design flow while tolerating multi-cycle latency at the interconnects. The communication fabrics for LIP are synchronous pipelines with handshaking. However, handshake-based protocols require complex control logic, and unnecessary handshakes reduce the system's throughput. Scheduling-based LIP was therefore proposed to avoid handshakes by pre-calculating clock-gating sequences for each block; it has been shown to offer better throughput and to be easier to implement. Unfortunately, static schedules exist only for bounded systems, so designs of this type in the literature restrict their discussion to systems whose graph representation has a single strongly connected component (SCC), which is bounded by theory. This dissertation provides an optimization design flow for LIP synthesis with respect to back pressure, throughput, and buffer sizes. It extends the scheduled LIP with minimal modifications to make it general enough to apply to most systems, especially those with multiple SCCs. To guarantee design correctness, a formal framework is required that can analyze concurrency and prevent erroneous behaviors such as overflow and deadlock. Among the many formal models of concurrency previously used in asynchronous system design, marked graphs, the periodic clock calculus, and polychrony are chosen for modeling, analysis, and verification in this work. Polychrony, originally developed for embedded software modeling and synthesis, can specify multi-rate interfaces; a synchronous composition can then be analyzed to avoid incompatibilities and combinational loops, which cause incorrect GALS distribution. The marked graph model is a good candidate for representing the interconnection network and is well suited to modeling the communication and synchronization in LIP. The periodic clock calculus is useful for analyzing clock-gating sequences because it easily captures data dependencies, throughput constraints, and the buffer sizes required for synchronization. Together, these formal methods establish a formally based design flow for creating a synchronous design and then transforming it into a GALS implementation, either using LIP or more general GALS mechanisms. / Ph. D.
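The role of marked graphs can be made concrete with a toy simulator: a node fires when every input edge carries a token, and the sustained firing rate bounds the achievable LIP throughput. This is a hedged sketch under simplified assumptions (unit-delay firing, synchronous rounds), not a tool from the dissertation; the example graph is hypothetical.

```python
# Minimal marked-graph simulator: tokens live on edges; a node fires when
# all of its input edges hold at least one token. Firing consumes one token
# per input edge and produces one per output edge.

def simulate(edges, marking, steps=1000):
    """edges: list of (src, dst) tuples; marking: initial tokens per edge."""
    nodes = {n for e in edges for n in e}
    tokens = dict(zip(edges, marking))
    fires = {n: 0 for n in nodes}
    for _ in range(steps):
        enabled = [n for n in nodes
                   if all(tokens[e] >= 1 for e in edges if e[1] == n)]
        for n in enabled:                    # fire all enabled nodes this round
            for e in edges:
                if e[1] == n:
                    tokens[e] -= 1           # consume from each input edge
                if e[0] == n:
                    tokens[e] += 1           # produce on each output edge
            fires[n] += 1
    return {n: fires[n] / steps for n in nodes}

# Two blocks in a feedback loop with one token of slack in each direction:
edges = [("A", "B"), ("B", "A")]
print(simulate(edges, marking=[1, 1]))       # ~1.0 firing/cycle per node
```

Removing a token of slack (marking=[1, 0]) halves the sustained rate, which is exactly the kind of throughput/buffer-size trade-off the scheduled-LIP analysis quantifies.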
464

Development of high-throughput phenotyping methods and evaluation of morphological and physiological characteristics of peanut in a sub-humid environment

Sarkar, Sayantan 05 January 2021
Peanut (Arachis hypogaea L.) is an important food crop in the USA and worldwide with high net returns, but yields in excess of 4500 kg ha-1 are needed to offset production costs. Because yield is limited by biotic and abiotic stresses, cultivars with stress tolerance are needed to optimize it. The U.S. peanut mini-core germplasm collection is a valuable resource that breeders can use to improve stress tolerance in peanut. Plant height, leaf area, and leaf wilting have been used as phenotyping proxies for the desired tolerance traits. However, proximal data collection, i.e., measurements taken on individual plants or in their proximity, is slow. Remote data collection and machine-learning analysis offer a high-throughput phenotyping (HTP) alternative to manual measurements that could aid breeding for stress tolerance. The objectives of this study were to 1) develop HTP methods using aerial remote sensing; 2) evaluate the mini-core collection in southeastern Virginia; and 3) perform a detailed physiological analysis on a sub-set of 28 accessions from the mini-core collection under drought stress, the sub-set having been selected based on contrasting responses to drought in three states: Virginia, Texas, and Oklahoma. To address these objectives, replicated field experiments were performed at the Tidewater Agricultural Research and Extension Center in Suffolk, VA, in 2017, 2018, and 2019, under rainfed, irrigated, and controlled conditions, using rainout shelters to induce drought. Proximal data collection involved physiological, morphological, and yield measurements. Remote data collection was performed aerially and included red-green-blue (RGB) images and canopy reflectance in the visible, near-infrared, and infrared spectra. This information was used to estimate plant characteristics related to growth and drought tolerance. Under objective 1), we developed HTP methods that estimate plant height with 85-95% accuracy, leaf area index (LAI) with 85-88% accuracy, and wilting with 91-99% accuracy, with a significant reduction in time compared to proximal data collection. Under objectives 2) and 3), we determined that shorter genotypes were more drought tolerant than taller genotypes, and identified accession CC650 as less wilted and with greater carbon assimilation, electron transport, quantum efficiency, and yield than other accessions. / Doctor of Philosophy / Peanut is a profitable food crop in the USA but has high input costs. Pod yield over 4500 kg ha-1 is required for profitable production, which is challenging in dry and hot years and under disease pressure. Varieties tolerant to dry weather conditions (drought) and disease are required to sustain production. A collection of 112 peanut varieties is available for researchers to study the mechanisms of tolerance to drought and disease and to identify varieties tolerant to these stresses. Plant characteristics including height, leaf area, and leaf wilting can be used as proxies to estimate stress tolerance and yield and to identify tolerant varieties. How these characteristics are measured matters: we believe that images collected by a drone, analyzed automatically by specific computer programs, offer the easiest, fastest, and most accurate approach.
Therefore, the objectives of my study were to 1) use drones and cameras to collect images, and computer programs to derive plant characteristics from these images; 2) evaluate the peanut collection to identify varieties with tolerance to drought and disease; and 3) evaluate in depth a sub-set of 28 varieties from this collection under controlled drought conditions to further learn about peanut mechanisms of tolerance to drought and disease. Field experiments were conducted in 2017, 2018, and 2019 at the Tidewater Agricultural Research and Extension Center in Suffolk, VA. For some tests, we used rainout shelters to mimic drought. We measured plant height, leaf area, color, wilting, canopy temperature, photosynthesis, and pod yield. From a drone, we collected images in the visible and non-visible spectra and, using specific computer programs, estimated plant characteristics with 95% accuracy for height, 88% for leaf area, and 91% for leaf wilting under drought. We concluded that taller varieties were more susceptible to drought than shorter varieties. Peanut varieties CC650 and CC068 had higher end-of-season yield. The study showed that drought reduced several key mechanisms of photosynthesis, including electron transport, and reduced end-of-season yield. Variety CC650 performed better under drought than other varieties in the collection.
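One common way such aerial height estimates are computed — offered here only as a hedged sketch of the general technique, not the author's pipeline — is a canopy height model: a digital surface model from drone imagery minus a digital terrain model of bare ground, summarized per breeding plot.

```python
# Illustrative canopy-height-model height estimate; the rasters and plot
# bounds below are synthetic stand-ins for drone-derived elevation models.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical rasters (meters): bare-soil terrain model and a surface model
# as produced by structure-from-motion on overlapping drone RGB images.
dtm = np.full((100, 100), 25.0)
dsm = dtm + np.clip(rng.normal(0.35, 0.1, (100, 100)), 0, None)

chm = dsm - dtm                     # canopy height model
plot_mask = np.zeros_like(chm, bool)
plot_mask[20:40, 10:60] = True      # one plot's pixel bounds (hypothetical)

# A high percentile is more robust to reconstruction noise than the max
plot_height = np.percentile(chm[plot_mask], 95)
print(f"estimated plot height: {plot_height:.2f} m")
```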
465

A Passive Microfluidic Device for Buffer Transfer of Cells

Thattai Sadagopan, Sudharsan 12 November 2021
Buffer transfer of cells is a critical process in many biomedical applications such as dielectrophoresis experiments, optical trapping, and flow cytometry. Existing methods for buffer transfer are time-consuming, require skilled technicians, and involve expensive equipment such as centrifuges and biosafety hoods. Furthermore, even a minute error in transferring the cells can easily result in cell lysis and decreased viability. In this work, a lab-on-a-chip device is proposed that uses a passive microfluidic approach to effectively transfer cells from a growth medium to a desired buffer for downstream contactless dielectrophoresis (cDEP) analysis. This eliminates the need for any external fields and expensive equipment, and significantly reduces manual effort. Computational studies were carried out to analyze the impact of device geometry, channel configuration, and flow rate on the effectiveness of buffer transfer. The proposed device was evaluated through a parametric sweep, and device configurations were identified that induce low fluid shear stress, support high throughput, and maintain minimal diffusion. Finally, a method for fabricating the device in the laboratory using PDMS was illustrated. The outcome of this study furthers the development of highly effective microfluidic devices capable of performing buffer transfer for multiple cell lines. / Master of Science / Prior to performing biomedical experiments, cells often need to be transferred from the chemical solution in which they are grown to a different buffer that is customized for the analysis technique. This process is called buffer transfer, and it must be performed before running many cell experiments. The way buffer transfer is carried out in most labs is time-consuming and requires skilled technicians and expensive machines. Moreover, even a small error while performing buffer transfer can easily cause cells to die, reducing the cell count available for experiments. In this work, we propose an easy-to-use device that can perform the buffer exchange process without expensive technologies or skilled technicians. The device achieves this exchange by leveraging fluid flow in the channel to filter the cells out of the growth medium and transfer them to the desired chemical solution while washing the unwanted solution away. We used CAD modeling and computational analysis to develop the device. Its performance was enhanced through a parametric analysis such that the device induces low shear stress, supports high flow through the channels, and limits mixing between the growth medium and the buffer. Finally, we illustrate a method for building the device in the laboratory. The results of this research will help further current efforts in the buffer transfer of cells.
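For intuition about the shear-stress criterion in the parametric sweep, a quick estimate for laminar flow in a shallow rectangular channel uses the wall shear stress approximation τ ≈ 6μQ/(wh²); the sketch below applies it with hypothetical channel dimensions and flow rate, not values from the thesis.

```python
# Back-of-the-envelope wall shear stress for a shallow rectangular
# microchannel under laminar flow: tau ~= 6*mu*Q/(w*h^2).
# All dimensions and the flow rate are hypothetical.

mu = 1.0e-3          # dynamic viscosity of water, Pa*s
Q_ul_min = 10.0      # volumetric flow rate, uL/min (hypothetical)
w = 500e-6           # channel width, m
h = 50e-6            # channel height, m

Q = Q_ul_min * 1e-9 / 60.0          # convert uL/min -> m^3/s
tau = 6.0 * mu * Q / (w * h**2)     # wall shear stress, Pa
print(f"wall shear stress ~ {tau:.2f} Pa")
```

Sweeping w, h, and Q through such a formula (or a full CFD model, as in the thesis) is what lets configurations be ranked by the shear stress they impose on cells.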
466

Implementation and Analysis of Wireless Local Area Networks for High-Mobility Telematics

Aziz, Farhan Muhammad 26 June 2003
Wireless networks provide communications to fixed, portable, and mobile users and offer substantial flexibility to both end-users and service providers. Current cellular/PCS networks do not offer cost-effective high-data-rate services for applications such as telematics, traffic surveillance, and rescue operations. This research studies the feasibility and behavior of outdoor deployments of low-cost wireless LANs for high-mobility telematics and traffic surveillance. A multi-hop experimental wireless data network was designed and tested for this purpose. Outdoor field measurements show the wireless coverage and throughput patterns for static and mobile users. The results suggest that multi-hop wireless LANs can be used for high-mobility applications if some protocols are improved. / Master of Science
467

Energy-harvested Lightweight Cryptosystems

Mane, Deepak Hanamant 21 May 2014
The Internet of Things will include many resource-constrained lightweight wireless sensing devices, hungry for energy, bandwidth, and compute cycles. The sheer number of devices involved will require new solutions for issues such as identification and power provisioning. First, to simplify identity management, device identification is moving from symmetric-key to public-key solutions. Second, to avoid the endless swapping of batteries, passively powered energy-harvesting solutions are preferred. In this contribution, we analyze some of the feasible solutions from this challenging design space. We have built an autonomous, energy-harvesting sensor node comprising a microcontroller, RF unit, and energy harvester, and we use it to analyze the computation and communication energy requirements of the Elliptic Curve Digital Signature Algorithm (ECDSA) at different security levels. Implementing Elliptic Curve Cryptography (ECC) on small microcontrollers is challenging. Most of the earlier literature has considered optimizing the performance of ECC (with respect to cycle count and software footprint) on a given architecture. This thesis addresses a different aspect of resource-constrained ECC implementation: identifying the most suitable architecture parameters for a given application profile. At a high level, the application profile for an ECC-based lightweight device, such as a wireless sensor node or RFID tag, is defined by the required security level, the signature-generation latency, and the available energy/power budget. The target architecture parameters of interest include core voltage, core frequency, and/or the need for hardware acceleration. We present a methodology to derive and optimize the architecture parameters starting from the application requirements. We demonstrate our methodology on an MSP430F5438A microcontroller and present the energy/architecture design space for 80-bit and 128-bit security levels, for the prime-field curves secp160r1 and nistp256. Our results show that the energy cost per authentication is minimized when the microcontroller is operated at the maximum possible frequency, because the energy consumed by leakage (i.e., static power dissipation) becomes proportionally less important as the runtime of the application decreases. Hence, for a given energy-harvesting method, it is always better to wait as long as possible and then complete the ECC computations at the highest frequency once sufficient energy is available. / Master of Science
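The frequency argument can be captured in a toy energy model: with runtime t = N/f for N cycles, dynamic energy C·V²·N is independent of frequency, while leakage energy P_leak·N/f shrinks as f grows. The sketch below uses invented constants, not measurements from the thesis hardware.

```python
# Toy energy/frequency model (hypothetical constants, for illustration only):
#   E(f) = P_dyn*t + P_leak*t,  t = N/f,  P_dyn = C*V^2*f
#   =>  E(f) = C*V^2*N + P_leak*N/f
# The dynamic term is fixed; the leakage term favors the highest frequency.

def energy_joules(f_hz, n_cycles=5e6, c_eff=0.5e-9, vdd=3.0, p_leak=1e-4):
    t = n_cycles / f_hz                      # runtime in seconds
    return c_eff * vdd**2 * n_cycles + p_leak * t

for f_mhz in (1, 8, 25):
    print(f"{f_mhz:>3} MHz: {1e3 * energy_joules(f_mhz * 1e6):.3f} mJ")
```

Running the sketch shows total energy falling monotonically with frequency, mirroring the thesis's conclusion that harvested energy is best spent in short bursts at maximum clock speed.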
468

Advancing Integrated Membrane Filtration Processes for Treating Industrial Wastewaters with Time Varying Feed Properties / DEVELOPING INTEGRATED MEMBRANE PROCESSES FOR INDUSTRIAL WASTEWATERS

Premachandra, Abhishek January 2024
Wastewaters produced by industrial processes are more challenging to treat than municipal wastewaters, primarily for two reasons. First, industrial wastewaters contain high concentrations of several different contaminants (e.g., metals, nutrients, and organics), which can be challenging for a single process to treat. Second, the compositional properties of these wastewaters can vary significantly because they depend on several upstream processes. Commercial membrane technologies have seen significant adoption in desalination and municipal wastewater treatment, and their favourable selectivity and tunable properties have garnered interest from both academia and industry in pushing these technologies into industrial wastewater treatment. Despite promising contaminant-removal results, current studies show that fouling due to high contaminant loadings, and variable treatment efficacy due to feed-property variations, limit the adoption of commercial membranes in these applications. Current research addresses these challenges through new material development or surface modification; however, there is a need to approach them at the process level by integrating existing membrane technology into adaptive processes. This thesis aims to advance the adoption of commercial membrane technology in ‘tough-to-treat’ industrial wastewater applications. First, the effects of high contaminant concentrations and variable feed properties on membrane treatment are studied using advanced techniques, such as gas chromatography – mass spectrometry, to resolve the composition of feed and permeate streams from membrane processes treating real wastewaters. It was determined that fast and efficient screening tools are required to optimize and adapt membrane processes in response to this variability. The thesis then introduces a high-throughput, miniaturized screening platform that combines analytical centrifugation with filter-plate technology to rapidly optimize two-stage coagulation-filtration processes with extremely low material and time requirements. / Thesis / Doctor of Philosophy (PhD) / Wastewaters sourced from industrial processes are considered ‘tough-to-treat’ due to high contaminant concentrations and time-varying compositional properties. Recent advancements in membrane technologies have demonstrated great promise in treating industrial wastewaters; however, these membranes often need to be integrated with other treatment technologies to overcome the challenges these wastewaters pose. This thesis aims to push the adoption of integrated membrane processes for treating high-strength industrial wastewaters. By utilizing advanced analytical techniques to investigate the effects of high contaminant loadings and variable feed properties on membrane processes, it was determined that screening tools are needed to rapidly design and optimize membrane processes tailored to the properties of the wastewater. The thesis introduces a high-throughput, miniaturized screening platform that combines analytical centrifugation and filter-plate technology to holistically screen two-stage coagulation-filtration processes with minimal time and material requirements.
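The screening idea reduces, at its simplest, to a grid search over process conditions scored by a measured response; the sketch below is purely illustrative, with an invented response function standing in for the analytical-centrifugation and filter-plate measurements described above.

```python
# Purely illustrative two-stage screening sketch: grid over coagulant dose
# and filter pore size, keep the condition with the best response. Ranges
# and the response function are invented, not from the thesis.
import itertools

doses_mg_l = [0, 10, 20, 40, 80]       # coagulant doses (hypothetical)
pore_sizes_um = [0.1, 0.45, 1.0]       # filter-plate pore sizes (hypothetical)

def residual_turbidity(dose, pore):    # stand-in for a measured well score
    return abs(dose - 40) / 40 + pore  # lowest near 40 mg/L and small pores

best = min(itertools.product(doses_mg_l, pore_sizes_um),
           key=lambda cond: residual_turbidity(*cond))
print(f"best condition: dose={best[0]} mg/L, pore={best[1]} um")
```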
469

Designing and modeling high-throughput phenotyping data in quantitative genetics

Yu, Haipeng 09 April 2020
Quantitative genetics aims to bridge the genome-to-phenome gap. The advent of high-throughput genotyping technologies has accelerated progress in genome-to-phenome mapping, but a challenge remains in phenotyping. Various high-throughput phenotyping (HTP) platforms have recently been developed to obtain economically important phenotypes in an automated fashion with less human labor and reduced costs. However, effective ways of designing HTP experiments have not been investigated thoroughly. In addition, high-dimensional HTP data pose a major challenge for statistical analysis by increasing computational demands. A new strategy for modeling high-dimensional HTP data and elucidating the interrelationships among these phenotypes is needed. Previous studies used pedigree-based connectedness statistics to study the design of phenotyping. The availability of genetic markers provides a new opportunity to evaluate connectedness based on genomic data, which can serve as a means to design HTP. This dissertation first discusses the utility of connectedness across three studies. In the first study, I introduced genomic connectedness and compared it with traditional pedigree-based connectedness. The relationship between genomic connectedness and cross-validation-based prediction accuracy was investigated in the second study. The third study introduced a user-friendly connectedness R package, which provides a suite of functions to evaluate the extent of connectedness. In the last study, I proposed a new statistical approach to model high-dimensional HTP data by combining confirmatory factor analysis and Bayesian networks. Collectively, the results of the first three studies suggest the potential usefulness of applying genomic connectedness to the design of HTP. The statistical approach introduced in the last study provides a new avenue for modeling high-dimensional HTP data holistically, helping us understand the interrelationships among phenotypes derived from HTP. / Doctor of Philosophy / Quantitative genetics aims to bridge the genome-to-phenome gap. With the advent of genotyping technologies, the genomic information of individuals can be included in a quantitative genetic model. A new challenge is to obtain sufficient and accurate phenotypes in an automated fashion with less human labor and reduced costs. High-throughput phenotyping (HTP) technologies have emerged recently, opening a new opportunity to address this challenge. However, there is a paucity of research on phenotyping design and on modeling high-dimensional HTP data. The main themes of this dissertation are 1) genomic connectedness, which could potentially be used as a means to design a phenotyping experiment, and 2) a novel statistical approach that aims to handle high-dimensional HTP data. In the first three studies, I first compared genomic connectedness with pedigree-based connectedness. This was followed by investigating the relationship between genomic connectedness and prediction accuracy derived from cross-validation. Additionally, I developed a connectedness R package that implements a variety of connectedness measures. The fourth study investigated a novel statistical approach that leverages the combination of dimension reduction and graphical models to understand the interrelationships among high-dimensional HTP data.
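Genomic connectedness measures build on the genomic relationship matrix; a minimal sketch of the standard VanRaden (2008) construction follows, using random toy genotypes rather than any dataset from the dissertation.

```python
# Sketch of the genomic relationship matrix (VanRaden 2008) that underlies
# genomic connectedness statistics; the marker matrix is random toy data.
import numpy as np

rng = np.random.default_rng(42)
n_ind, n_snp = 20, 500
M = rng.integers(0, 3, size=(n_ind, n_snp)).astype(float)  # 0/1/2 genotypes

p = M.mean(axis=0) / 2.0                 # estimated allele frequencies
Z = M - 2.0 * p                          # center by twice the frequency
G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

print(G.shape, G.diagonal().mean())      # diagonal averages ~1
```

In practice, connectedness statistics are then derived from the inverse coefficient matrix of a mixed model built on G (or on the pedigree relationship matrix A, for the traditional measures).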
470

Interference Measurements and Throughput Analysis for 2.4 GHz Wireless Devices in Hospital Environments

Krishnamoorthy, Seshagiri 25 April 2003
In recent years, advancements in wireless communication have led to more innovative consumer products at reduced cost. Over the next 2 to 5 years, short-range wireless devices such as Bluetooth and Wireless Local Area Networks (WLANs) are expected to become widespread throughout hospital environments for various applications. Consequently, the medical community views wireless applications as inevitable and necessary. However, regulations currently restrict the use of wireless devices in hospitals, and with ever-increasing personal wireless applications, more wireless devices will enter and operate in hospitals unnoticed. It is feared that these devices may cause electromagnetic interference that could alter the operation of medical equipment and negatively impact patient care. Additionally, unintentional electromagnetic radiation from medical equipment may have a detrimental effect on the quality of service (QoS) of these short-range wireless devices. Unfortunately, little is known about the impact of short-range wireless devices on medical equipment or, in turn, about the interference the hospital environment causes to these devices. The objective of this research was to design and develop an automated, software-reconfigurable measurement system (PRISM) to characterize the electromagnetic environment (EME) in hospitals. The portable measurement system has the flexibility to characterize a wide range of non-contiguous frequency bands and can be monitored from a remote location via the internet. In this work, electromagnetic interference (EMI) measurements in the 2.4 GHz ISM band were performed in two hospitals; these are considered the very first effort to analyze the 2.4 GHz ISM band in hospitals. Though the recorded EMI levels were well within the immunity level recommended by the FDA, Bluetooth devices can be expected to suffer a throughput reduction in the presence of major interferers such as WLANs and microwave ovens. A Bluetooth throughput simulator using semi-analytic results was developed as part of this work. PRISM and the Bluetooth simulator were used to predict throughput for the six Bluetooth Asynchronous Connectionless (ACL) packet types as a function of piconet size and interferer distance. / Master of Science
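In the semi-analytic spirit of such a simulator, expected ACL goodput under a given packet error rate can be sketched as follows; the stop-and-wait retransmission model is a simplification and the code is illustrative, not the thesis simulator.

```python
# Hedged semi-analytic sketch: expected Bluetooth ACL goodput vs packet
# error rate, assuming whole-packet retransmission on error.
T_SLOT = 625e-6   # Bluetooth slot duration, seconds

# (max payload bytes, slots occupied incl. the 1-slot return) per ACL type
ACL_TYPES = {
    "DM1": (17, 2),  "DH1": (27, 2),
    "DM3": (121, 4), "DH3": (183, 4),
    "DM5": (224, 6), "DH5": (339, 6),
}

def throughput_kbps(packet_type, per):
    """Expected goodput given a packet error rate 'per' in [0, 1)."""
    payload, slots = ACL_TYPES[packet_type]
    attempts = 1.0 / (1.0 - per)          # mean tries per delivered packet
    return payload * 8 / (slots * T_SLOT * attempts) / 1e3

for t in ACL_TYPES:
    print(f"{t}: {throughput_kbps(t, per=0.1):7.1f} kbps at 10% PER")
```

With per=0 the DH5 figure reproduces the familiar 723.2 kbps ceiling; an interference model (WLAN or microwave-oven duty cycle versus distance) would supply the per value, which is the part the thesis derives semi-analytically.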
