  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
271

A comparative evaluation of high-power laser pulser topologies

Nel, Johannes Jurie 06 September 2012
D.Ing. / This thesis proposes an optimal laser pulser topology for employment in a future commercial molecular laser isotope separation (MLIS) plant. The introduction points out that, in the past, power modulator research and development were performed without much regard to economic constraints. These conditions were mainly caused by international strategic initiatives and spawned a wealth of different circuit topologies and techniques. Many more can be devised by combining the various subsystems of these topologies and techniques in different ways. Under the paradigm of a modern-day commercial application, however, the luxury of trying yet another new topology, merely on the merits of personal preference, does not exist. Therefore, it is proposed that a laser pulser topology be formally selected using suitable criteria derived from the application. Formal definitions are provided for the general subsystems found in all laser excitation systems, as a foundation for the selection process. The available options for each subsystem type, as well as the options for combining them into various topologies, are described. Many examples are quoted from the literature to corroborate the basic descriptions. Practical circuit issues are dealt with in an appendix. Selection criteria are determined by contemplating the theory and practical issues of pulse power technology, transversely excited atmospheric carbon dioxide lasers, and molecular laser isotope separation. It is argued that all of these criteria can be combined into a single economic criterion, namely life cycle cost. This argument is supported by the commercial requirement of economic viability of the future plant. The author formulates a life cycle cost calculation model (LCCCM) from all the technical and economic issues previously mentioned. It includes a flexible design section that can accommodate any of the possible topology options.
Cost functions, which include reliability analysis, are used to calculate capital and operating costs from the design parameters, throughout the life cycle of the plant. Probability theory is used to model parameters with indeterminate values. The use of the LCCCM and its subtleties are demonstrated by comparing two basic options in a case study. It is finally used in a reasoned process of elimination to find the best topology option for the application.
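The combination of cost functions and probability theory described above can be sketched in a few lines. All figures and parameter names below (capital cost, annual operating cost, failure rate, repair cost) are hypothetical placeholders, not values from the thesis; the sketch only illustrates Monte Carlo propagation of an indeterminate parameter through a life-cycle-cost calculation.

```python
import random

def life_cycle_cost(capital, annual_operating, failure_rate, repair_cost, years=20):
    """Toy life-cycle cost: capital plus operating and random repair costs."""
    cost = capital
    for _ in range(years):
        cost += annual_operating
        # Model at most one failure per year as a Bernoulli draw.
        if random.random() < failure_rate:
            cost += repair_cost
    return cost

# Monte Carlo over an uncertain failure rate (uniform between 2% and 10%).
random.seed(0)
samples = [
    life_cycle_cost(capital=1e6, annual_operating=5e4,
                    failure_rate=random.uniform(0.02, 0.10),
                    repair_cost=2e5)
    for _ in range(10_000)
]
expected_lcc = sum(samples) / len(samples)
print(f"Expected life-cycle cost: {expected_lcc:,.0f}")
```

Sampling the failure rate from a distribution, rather than fixing it, is what lets the model express "parameters with indeterminate values" as the abstract describes.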
272

Enzymatic recovery of rhodium(III) from aqueous solution and industrial effluent using sulphate reducing bacteria: role of a hydrogenase enzyme

Ngwenya, Nonhlanhla January 2005
In an attempt to overcome the high maintenance and costs associated with traditional physico-chemical methods, much work is being done on the application of enzymes for the recovery of valuable metals from solutions and industrial effluents. One of the most widely studied enzymatic metal recovery systems uses hydrogenase enzymes, particularly from sulphate reducing bacteria (SRB). While it is known that hydrogenases from SRB mediate the reductive precipitation of metals, the mechanism of enzymatic reduction is not yet fully understood. The main aim of the present study was to investigate the role of a hydrogenase enzyme in the removal of rhodium from both aqueous solution and industrial effluent. The rate of removal of rhodium(III) by a resting SRB consortium was quantified under different initial rhodium and biomass concentrations, pH, and temperature, and in the presence and absence of SRB cells and electron donor. Rhodium speciation was found to be the main factor controlling the rate of removal of rhodium from solution. SRB cells were found to have a higher affinity for anionic rhodium species, as compared to both cationic and neutral species, which became abundant when speciation equilibrium was reached. Consequently, a pH-dependent rate of rhodium removal from solution was observed. The maximum SRB uptake capacity for rhodium was found to be 66 mg rhodium per g of resting SRB biomass. Electron microscopy studies revealed a time-dependent localization and distribution of rhodium precipitates, initially intracellular and then extracellular, suggesting the involvement of an enzymatic reductive precipitation process. A hydrogenase enzyme capable of reducing rhodium(III) from solution was isolated and purified using PEG, a DEAE-Sephacel anion exchanger, and Sephadex G200 gel exclusion. A distinct protein band with a molecular weight of 62 kDa was obtained when the hydrogenase-containing fractions were subjected to 10% SDS-PAGE.
Characterization studies indicated that the purified hydrogenase had an optimum pH and temperature of 8 and 40°C, respectively. A maximum of 88% of the initial rhodium in solution was removed when the purified hydrogenase was incubated under hydrogen. Due to the low pH of the industrial effluent (1.31), the enzymatic reduction of rhodium by the purified hydrogenase was greatly retarded. It was apparent that industrial effluent pretreatment was necessary before the application of an enzymatic treatment method. In the present study, however, it has been established that SRB are good candidates for the enzymatic recovery of rhodium from both solution and effluent.
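As a back-of-the-envelope illustration of the figures reported above (66 mg Rh per g biomass uptake capacity; 88% enzymatic removal), the following sketch uses a hypothetical effluent batch; the concentration and volume are invented for the example and do not come from the study.

```python
def biomass_required(rhodium_mg, capacity_mg_per_g=66.0):
    """Biomass (g) needed to take up a given mass of rhodium,
    assuming uptake at the reported maximum capacity."""
    return rhodium_mg / capacity_mg_per_g

# Hypothetical batch: 50 mg/L Rh(III) in a 10 L effluent volume.
rh_total_mg = 50 * 10
print(f"{biomass_required(rh_total_mg):.1f} g of resting SRB biomass")

# With the reported 88% maximum enzymatic removal, residual Rh:
residual_mg = rh_total_mg * (1 - 0.88)
print(f"{residual_mg:.0f} mg Rh remaining after enzymatic treatment")
```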
273

A knowledge-based system for estimating the duration of cast in place concrete activities

Diaz Zarate, Gerardo Daniel 01 January 1992
No description available.
274

Preparation of photocatalytic TiO₂ nanoparticles immobilized on carbon nanofibres for water purification

Nyamukamba, Pardon January 2011
Titanium dioxide nanoparticles were prepared using the sol-gel process. The effect of temperature and precursor concentration on particle size was investigated. The optimum conditions were then used to prepare carbon- and nitrogen-doped titanium dioxide (TiO2) nanoparticles. Doping was done to reduce the band gap of the nanoparticles in order to utilize visible light in the photocatalytic degradation of organic compounds. A significant shift of the absorption edge to a longer wavelength (lower energy), from 420 nm to 456 nm and from 420 nm to 428 nm, was observed for the carbon-doped and nitrogen-doped TiO2 respectively. In this study, the prepared TiO2 photocatalyst was immobilized on carbon nanofibres to allow isolation and reuse of the catalyst. The photocatalytic activity of the catalyst was tested using methyl orange as a model pollutant and was based on the decolourization of the dye as it was degraded. The doped TiO2 exhibited higher photocatalytic activity than the undoped TiO2. The materials prepared were characterized by XRD, TEM, SEM, FT-IR, DSC and TGA, while the doped TiO2 was further characterized by XPS, ESR and Raman spectroscopy.
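The red-shift of the absorption edge quoted above can be converted to band-gap energies with the standard photon-energy relation E = hc/λ. A minimal sketch using the wavelengths from the abstract:

```python
HC_EV_NM = 1239.84  # Planck constant x speed of light, in eV·nm

def edge_energy_ev(wavelength_nm):
    """Photon energy at the absorption edge: E = hc / lambda."""
    return HC_EV_NM / wavelength_nm

# Absorption edges reported in the abstract.
for label, nm in [("undoped", 420), ("C-doped", 456), ("N-doped", 428)]:
    print(f"{label}: {nm} nm -> {edge_energy_ev(nm):.2f} eV")
```

The longer-wavelength edges of the doped samples correspond to narrower effective band gaps, which is what enables visible-light photocatalysis.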
275

Design and development of an all-optical active Q-switched Erbium-doped fibre ring laser

Kaboko, Jean-Jacques Monga 31 July 2012
M.Phil. / This dissertation describes the design and experimental realization of an all-optical, actively Q-switched Erbium-doped fibre ring laser. The aim of this research is to propose an approach to Q-switching for a fibre laser. The Q-switch device combines a fibre Bragg grating (FBG) and a tunable fibre Fabry-Perot (F-P) filter. The Q-switching principle is based on dynamic spectral overlapping of the two filters, namely the FBG-based filter and the tunable F-P filter. When the spectra overlap, the filter system has maximum transparency, the laser cavity has minimal losses, and the stored energy can be released in the form of a giant pulse. A series of experiments is performed to optimize the all-optical, actively Q-switched Erbium-doped ring laser system in terms of output peak power and time duration of the laser pulses. Two Erbium-doped fibres with different Erbium ion concentrations are used in this experimental investigation. The first fibre, with an Erbium ion concentration of 2200 ppm and pump absorption of 23.4 at 980 nm, is referred to as "high concentration"; the second, with an Erbium ion concentration of 960 ppm and pump absorption of 12.4 at 980 nm, is referred to as "low concentration". To optimize the Q-switched fibre laser system, different parameters were investigated, such as the length of the Erbium-doped fibre, the output coupling ratio, the repetition rate of the pulses, and the concentration of the Erbium-doped fibres. The achieved output laser pulse characteristics, peak power and time duration, were 580 mW and 13 μs respectively, at a repetition rate of 1 kHz. These characteristics were obtained using a 3.5 m length of "low concentration" Erbium-doped fibre in a ring laser cavity, with an output coupling of 90% and a pump power of 80 mW. Employing this all-optical Q-switching approach, a simple, robust, all-optical, actively Q-switched Erbium-doped laser is demonstrated.
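From the quoted pulse characteristics, the pulse energy and average output power follow directly under a rectangular-pulse approximation (an assumption made here for arithmetic convenience; real Q-switched pulses are not rectangular, so these are upper-bound estimates):

```python
peak_power_w = 0.580   # 580 mW peak power
pulse_width_s = 13e-6  # 13 us pulse duration
rep_rate_hz = 1e3      # 1 kHz repetition rate

# Rectangular-pulse approximation: energy = peak power x width.
pulse_energy_j = peak_power_w * pulse_width_s
avg_power_w = pulse_energy_j * rep_rate_hz
duty_cycle = pulse_width_s * rep_rate_hz

print(f"Pulse energy : {pulse_energy_j * 1e6:.1f} uJ")
print(f"Average power: {avg_power_w * 1e3:.2f} mW")
print(f"Duty cycle   : {duty_cycle:.1%}")
```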
276

Synthesis of cubic boron nitride thin films on silicon substrate using electron beam evaporation.

Vemuri, Prasanna 05 1900
Cubic boron nitride (cBN) synthesis has gained considerable interest during the past decade as it offers outstanding physical and chemical properties such as high hardness, high wear resistance, and chemical inertness. Despite these excellent properties, applications of cBN are hindered by high compressive stresses and poor adhesion. The cost of equipment is also high in almost all the techniques used so far. This thesis deals with the synthesis of the cubic phase of boron nitride on Si (100) wafers using an electron beam evaporator, low-cost equipment that is capable of depositing films with reduced stresses. Using this process, the need for the ion beam employed in ion-beam-assisted processes can be eliminated, thus reducing surface damage and enhancing film adhesion. Four sets of samples were deposited by varying the substrate temperature and the deposition time. Scanning electron microscopy (SEM), energy dispersive spectroscopy (EDS), X-ray photoelectron spectroscopy (XPS), and Fourier transform infrared spectroscopy (FTIR) techniques were used to determine the structure and composition of the deposited films. X-ray diffraction (XRD) was performed on one of the samples to determine the thickness of the film deposited for the given deposition rate. Several samples showed dendrites being formed as a stage of film formation. It was found that deposition at a substrate temperature of 400 °C for a period of one hour yielded high-quality cubic boron nitride films.
277

Design-based, Bayesian Causal Inference for the Social Sciences

Leavitt, Thomas January 2021
Scholars have recognized the benefits to science of Bayesian inference about the relative plausibility of competing hypotheses, as opposed to, say, falsificationism, in which one either rejects or fails to reject hypotheses in isolation. Yet inference about causal effects — at least as they are conceived in the potential outcomes framework (Neyman, 1923; Rubin, 1974; Holland, 1986) — has been tethered to falsificationism (Fisher, 1935; Neyman and Pearson, 1933) and difficult to integrate with Bayesian inference. One reason for this difficulty is that potential outcomes are fixed quantities that are not embedded in statistical models. Significance tests about causal hypotheses in either of the traditions traceable to Fisher (1935) or Neyman and Pearson (1933) conceive potential outcomes in this way; randomness in inferences about causal effects stems entirely from a physical act of randomization, like flips of a coin or draws from an urn. Bayesian inferences, by contrast, typically depend on likelihood functions with model-based assumptions in which potential outcomes — to the extent that scholars invoke them — are conceived as outputs of a stochastic, data-generating model. In this dissertation, I develop Bayesian statistical inference for causal effects that incorporates the benefits of Bayesian scientific reasoning, but does not require probability models on potential outcomes that undermine the value of randomization as the “reasoned basis” for inference (Fisher, 1935, p. 14). In the first paper, I derive a randomization-based likelihood function in which Bayesian inference of causal effects is justified by the experimental design. I formally show that, under weak conditions on a prior distribution, as the number of experimental subjects increases indefinitely, the resulting sequence of posterior distributions converges in probability to the true causal effect.
This result, typically known as the Bernstein-von Mises theorem, has been derived in the context of parametric models. Yet randomized experiments are especially credible precisely because they do not require such assumptions. Proving this result in the context of randomized experiments enables scholars to quantify how much they learn from experiments without sacrificing the design-based properties that make inferences from experiments especially credible in the first place. Having derived a randomization-based likelihood function in the first paper, the second paper turns to the calibration of a prior distribution for a target experiment based on past experimental results. In this paper, I show that usual methods for analyzing randomized experiments are equivalent to presuming that no prior knowledge exists, which inhibits knowledge accumulation from prior to future experiments. I therefore develop a methodology by which scholars can (1) turn results of past experiments into a prior distribution for a target experiment and (2) quantify the degree of learning in the target experiment after updating prior beliefs via a randomization-based likelihood function. I implement this methodology in an original audit experiment conducted in 2020 and show the amount of Bayesian learning that results relative to information from past experiments. Large Bayesian learning and statistical significance do not always coincide, and learning is greatest among theoretically important subgroups of legislators for which relatively less prior information exists. The accumulation of knowledge about these subgroups, specifically Black and Latino legislators, carries implications about the extent to which descriptive representation operates not only within, but also between minority groups. In the third paper, I turn away from randomized experiments toward observational studies, specifically the Difference-in-Differences (DID) design. 
I show that DID's central assumption of parallel trends poses a neglected problem for causal inference: counterfactual uncertainty, due to the inability to observe counterfactual outcomes, is hard to quantify since DID is based on parallel trends, not an as-if-randomized assumption. Hence, standard errors and p-values are too small, since they reflect only sampling uncertainty due to the inability to observe all units in a population. Recognizing this problem, scholars have recently attempted to develop inferential methods for DID under an as-if-randomized assumption. In this paper, I show that this approach is ill-suited for the most canonical DID designs and also requires conducting inference on an ill-defined estimand. I instead develop an empirical Bayes procedure that is able to accommodate both sampling and counterfactual uncertainty under DID's core identification assumption. The overall method is straightforward to implement, and I apply it to a study on the effect of terrorist attacks on electoral outcomes.
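The posterior concentration that the Bernstein-von Mises-style result formalizes can be illustrated by simulation. The sketch below uses a simple conjugate Beta-Binomial model for convenience; it is not the randomization-based likelihood the dissertation derives, only a generic demonstration that the posterior standard deviation shrinks as the number of subjects grows, here with a hypothetical true success probability of 0.30.

```python
import math
import random

random.seed(42)
TRUE_PROB = 0.30  # hypothetical true outcome probability

def posterior_sd(n):
    """Simulate n binary outcomes and return the sd of the Beta
    posterior under a uniform Beta(1, 1) prior."""
    successes = sum(random.random() < TRUE_PROB for _ in range(n))
    a, b = 1 + successes, 1 + n - successes
    return math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))

# Posterior uncertainty shrinks roughly as 1/sqrt(n).
for n in (10, 100, 1000, 10000):
    print(f"n={n:>5}: posterior sd = {posterior_sd(n):.4f}")
```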
278

Learning Structured Representations for Understanding Visual and Multimedia Data

Zareian, Alireza January 2021
Recent advances in Deep Learning (DL) have achieved impressive performance in a variety of Computer Vision (CV) tasks, leading to an exciting wave of academic and industrial efforts to develop Artificial Intelligence (AI) facilities for every aspect of human life. Nevertheless, there are inherent limitations in the understanding ability of DL models, which limit the potential of AI in real-world applications, especially in the face of complex, multimedia input. Despite tremendous progress in solving basic CV tasks, such as object detection and action recognition, state-of-the-art CV models can merely extract a partial summary of visual content, which lacks a comprehensive understanding of what happens in the scene. This is partly due to the oversimplified definition of CV tasks, which often ignores the compositional nature of semantics and scene structure. It is even less well studied how to understand the content of multiple modalities, which requires processing visual and textual information in a holistic and coordinated manner, and extracting interconnected structures despite the semantic gap between the two modalities. In this thesis, we argue that a key to improving the understanding capacity of DL models in visual and multimedia domains is to use structured, graph-based representations to extract and convey semantic information more comprehensively. To this end, we explore a variety of ideas to define more realistic DL tasks in both visual and multimedia domains, and propose novel methods to solve those tasks by addressing several fundamental challenges, such as weak supervision, discovery and incorporation of commonsense knowledge, and scaling up the vocabulary. More specifically, inspired by the rich literature on semantic graphs in Natural Language Processing (NLP), we explore innovative scene understanding tasks and methods that describe images using semantic graphs, which reflect the scene structure and interactions between objects.
In the first part of this thesis, we present progress towards such graph-based scene understanding solutions, which are more accurate, need less supervision, and have more human-like common sense compared to the state of the art. In the second part of this thesis, we extend our results on graph-based scene understanding to the multimedia domain, by incorporating the recent advances in NLP and CV, and developing a new task and method from the ground up, specialized for joint information extraction in the multimedia domain. We address the inherent semantic gap between visual content and text by creating high-level graph-based representations of images, and developing a multitask learning framework to establish a common, structured semantic space for representing both modalities. In the third part of this thesis, we explore another extension of our scene understanding methodology, to open-vocabulary settings, in order to make scene understanding methods more scalable and versatile. We develop visually grounded language models that use naturally supervised data to learn the meaning of all words, and transfer that knowledge to CV tasks such as object detection with little supervision. Collectively, the proposed solutions and empirical results set a new state of the art for the semantic comprehension of visual and multimedia content in a structured way, in terms of accuracy, efficiency, scalability, and robustness.
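A semantic scene graph of the kind described above, objects as nodes and subject-predicate-object relations as directed edges, can be represented minimally as follows. The class, labels, and API are invented for illustration and are not from the thesis:

```python
from dataclasses import dataclass, field

@dataclass
class SceneGraph:
    """Minimal scene-graph container: object labels as nodes,
    (subject, predicate, object) index triples as directed edges."""
    objects: list = field(default_factory=list)
    relations: list = field(default_factory=list)

    def add_object(self, label):
        """Add an object node and return its index."""
        self.objects.append(label)
        return len(self.objects) - 1

    def relate(self, subj, predicate, obj):
        """Add a directed subject-predicate-object edge by index."""
        self.relations.append((subj, predicate, obj))

    def triples(self):
        """Resolve edges back to human-readable label triples."""
        return [(self.objects[s], p, self.objects[o])
                for s, p, o in self.relations]

# Hypothetical parse of an image of a person riding a horse in a field.
g = SceneGraph()
person = g.add_object("person")
horse = g.add_object("horse")
meadow = g.add_object("field")
g.relate(person, "riding", horse)
g.relate(horse, "standing in", meadow)
print(g.triples())
```

Unlike a flat list of detected objects, the relation edges capture the interactions between objects that the thesis argues are essential for comprehensive scene understanding.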
279

Searching for Clues for a Matter Dominated Universe in Liquid Argon Time Projection Chambers

Jwa, Yeon-jae January 2022
Liquid Argon Time Projection Chambers (LArTPCs) represent one of the most widely utilized neutrino detection techniques in neutrino experiments, for instance in the Short Baseline Neutrino (SBN) program and the future large-scale LArTPC, the Deep Underground Neutrino Experiment (DUNE). The technique, offering excellent spatial and calorimetric reconstruction resolution, also enables tests of exotic Beyond Standard Model (BSM) theories, such as baryon number violation (BNV) processes (e.g., proton decay and neutron-antineutron oscillation). At the same time, Machine Learning (ML) techniques have become ubiquitous in recent decades and are now among the most powerful tools in high-energy physics (HEP) analyses; the development of algorithms catering to the needs of HEP problems (i.e., triggering, reconstruction, improving sensitivity, etc.) has become an active area of research. By developing a combined approach using Convolutional Neural Network (CNN) and Boosted Decision Tree (BDT) techniques, the sensitivity to neutron-antineutron oscillation in DUNE is evaluated for a projected exposure of 400 kton·years. Additionally, such a rare-event search in DUNE is only feasible with highly efficient self-triggering algorithms. An ML-based self-triggering scheme for large-scale LArTPCs, such as DUNE, is also developed with the intention of implementation on field-programmable gate arrays (FPGAs). The ML-based approach for searching for neutron-antineutron oscillation can be demonstrated and validated on the current LArTPC MicroBooNE. The analysis in MicroBooNE represents the first-ever search for neutron-antineutron oscillation in a LArTPC. DUNE's projected 90% C.L. sensitivity to the neutron-antineutron oscillation lifetime is 6.45×10³² years, assuming 1.327×10³⁵ neutron·years, equivalent to 10 years of DUNE far detector exposure (400 kton·years).
For MicroBooNE, assuming 372 seconds of exposure (equivalent to 3.13×10³⁶ neutron·years), the 90% C.L. lifetime sensitivity is found to be 3.07×10²⁵ years, after accounting for Monte Carlo statistical uncertainty and systematic uncertainty from detector effects.
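A lifetime sensitivity of this form is conventionally computed as τ₉₀ = ε × (neutron·year exposure) / S₉₀, where ε is the signal selection efficiency and S₉₀ is the 90% C.L. upper limit on signal counts. The efficiency and S₉₀ values below are hypothetical, chosen only so the arithmetic reproduces the quoted DUNE figure from its 1.327×10³⁵ neutron·year exposure; the actual values come from the CNN/BDT analysis described above.

```python
def lifetime_sensitivity(exposure_neutron_years, efficiency, s90):
    """90% C.L. lifetime sensitivity: tau = efficiency * exposure / S90,
    with S90 the 90% C.L. upper limit on selected signal counts."""
    return efficiency * exposure_neutron_years / s90

# Hypothetical efficiency and S90, tuned to reproduce the quoted
# DUNE sensitivity of ~6.45e32 years.
tau = lifetime_sensitivity(1.327e35, efficiency=0.10, s90=20.6)
print(f"tau_90 ~ {tau:.2e} years")
```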
280

Assessment of Crosslink Density in Collagen Models and Ultrafast Laser Crosslinking of Corneal and Cartilage Tissues as Novel Treatment Modalities

Wang, Chao January 2022
Osteoarthritis (OA) is a progressive and complex joint disease that results from breakdown of articular cartilage and remodeling of underlying bone, and it affects millions of Americans. While the expected lifetime of the load-bearing cartilage tissue should coincide with the lifespan of an individual, the tissue has a limited ability to self-repair, and damage can accumulate severely. One of the major challenges in OA treatment is its long asymptomatic period. Symptoms usually become noticeable only when the disease is reaching advanced stages, and currently there is no effective intervention for the early stages of OA. This may be due to the lack of a reliable diagnostic method for detecting early OA. While OA is a degenerative joint disorder that may lead to gross cartilage loss and morphological damage to other joint tissues, many subclinical, subtle biochemical changes occur in the early stages of OA progression. The degradation of the collagen type II matrix in the articular cartilage extracellular matrix (ECM) network may correlate with the progression of cartilage OA. During the onset of OA, with the loss of collagen crosslinks, the collagen matrix in the cartilage ECM becomes more disorganized, and the cartilage can become more susceptible to further damage, aggravating the degeneration. Raman spectroscopy has been utilized in studies of connective tissue components, OA, and cartilage degradation. Although studies have demonstrated the potential of applying Raman spectroscopy for diagnosing cartilage degeneration, the analysis of Raman spectra obtained from articular cartilage is rather complicated, and so far there is no generally accepted quantitative analysis for diagnosing the early stages of OA. The first stage of this doctoral study aims to extend the capability of Raman spectroscopy to quantitatively characterize the collagen network in articular cartilage and to investigate a possible correlation with the degeneration of OA.
The first part of this doctoral dissertation is focused on developing a novel, non-destructive, quantitative diagnostic modality, based on Raman spectroscopy, that has the potential to detect changes in the biochemical composition of articular cartilage. The study is focused on the basic research associated with quantification of crosslink density and the kinetics of the crosslinking process. A theoretical and computational framework for characterization of collagen crosslinks has been established and applied to two models: two-dimensional collagen type I thin films, and immature bovine, proteoglycan-depleted articular cartilage. Glutaraldehyde solution has been applied to the models as a convenient method to introduce various levels of collagen crosslinks. Refractive error is a failure of the eye, owing to its shape or other dysfunction, to focus light accurately onto the retina as required for normal vision. The most common types of refractive errors are near-sightedness, far-sightedness, astigmatism, and presbyopia. Refractive errors have become a growing public health problem worldwide. Their incidence has doubled over the last 50 years in the United States and Europe, and they are an even more significant issue in some East Asian countries, where prevalence reaches 70 to 90%. Most affected individuals use spectacles or contact lenses, which generally provide adequate refractive error correction. However, both are subject to limitations. Glasses do not work well in the rain, and mist may form on them following changes in temperature or humidity. Contact lenses improve the field of vision and acuity, but many people find their presence on ocular surfaces intolerable. Over the last two to three decades, refractive surgery for the permanent correction of vision has thus emerged as an attractive choice for many patients. However, such surgery is an invasive procedure that may compromise corneal structure, and postsurgical complications have been reported.
In the second stage of this doctoral work, a novel, non-invasive femtosecond laser method for manipulating collagen crosslinks is studied. This laser collagen crosslinking treatment is applied to corneal tissue for vision correction. Two examples of the laser treatment on an ex vivo porcine eye model are presented in the study: corneal flattening, which is used to correct refractive errors due to myopia, and corneal steepening, which is used to treat hyperopia. The effective refractive power is used to evaluate the effectiveness of the two treatments. The depth of the crosslinked region in the cornea is assessed by two-photon autofluorescence (TPF) imaging. TPF imaging can be used to visualize changes induced in the cornea, because collagen is a primary extracellular source of nonlinear emissions. The safety of the proposed treatment methods is examined by haematoxylin and eosin (H&E) stained histological sections of corneas. The ex vivo porcine corneas are also cultured for one week after treatment, to determine whether crosslink density remains stable, to check for degradation in the crosslinked layers of the stromal matrix, and to further establish the safety of the proposed laser treatment method through the evaluation of cell viability one week after treatment. An in vivo rabbit model, widely used for the correction of refractive errors, is further utilized to demonstrate the stability and safety of the induced changes. The effective refractive power of live rabbits is assessed 24 h, seven days, and then weekly up to three months after the laser crosslinking treatment. The safety of the laser treatment is first evaluated by histology staining, and further confirmed by in vivo confocal laser scanning microscopy. This laser treatment approach could expand the pool of patients eligible for permanent vision correction, while eliminating the adverse effects associated with current forms of surgery.
Furthermore, the approach described is also suitable for the treatment of other diseases of collagenous tissues. The last chapter of this doctoral dissertation discusses the results of applying this laser treatment technique to progressive OA. Finally, in a preliminary study, the proposed femtosecond laser treatment modality developed for corneal tissue has been applied to articular cartilage with the aim of slowing or halting the progression of early osteoarthritis. We hypothesize that degradation of the articular cartilage extracellular matrix can be slowed down or reversed in a collagen network crosslinked with a femtosecond laser. We further theorize that the crosslinking mechanism introduced in the corneal tissue, which relies on laser ionization and dissociation of the tissue interstitial water to produce reactive oxygen species, can increase the crosslink density of the collagen network in articular cartilage. In the study, the treatment has been applied to devitalized and live immature bovine cartilage explants, as well as cartilage plugs obtained from OA-afflicted human cadaver joints. The preliminary results have shown that the proposed treatment has the potential to enhance tissue mechanical properties and increase wear resistance, an important factor in slowing down the progression of OA. Furthermore, preliminary imaging of live/dead stained tissue has shown that the laser treatment has minimal adverse effects up to two weeks after the laser irradiation.
