About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
111

A Software Framework For the Detection and Classification of Biological Targets in Bio-Nano Sensing

Hafeez, Abdul 08 September 2014 (has links)
Detection and identification of important biological targets, such as DNA, proteins, and diseased human cells, are crucial for early diagnosis and prognosis. The key to discriminating healthy cells from diseased cells lies in their biophysical properties, which differ radically. Micro- and nanosystems, such as solid-state micropores and nanopores, can measure these properties of biological targets and translate them into electrical spikes that can be decoded for useful insights. Nonetheless, such approaches produce sizable data streams that are often plagued with inherent noise and baseline wander. Moreover, the existing detection approaches are tedious, time-consuming, and error-prone, and there is no error-resilient software that can analyze large data sets instantly. The ability to effectively process and detect biological targets in large data sets lies in automated and accelerated data-processing strategies built on state-of-the-art distributed computing systems. In this dissertation, we design and develop techniques for the detection and classification of biological targets, along with a distributed detection framework that supports data processing from multiple bio-nano devices. In a distributed setup, the raw data stream collected on a server node is split into data segments and distributed across the participating worker nodes. Each node reduces noise in its assigned data segment using moving-average filtering and detects electrical spikes by comparing the signal against a statistical threshold (based on the mean and standard deviation of the data), in a Single Program Multiple Data (SPMD) style. Our proposed framework enables the detection of cancer cells in a mixture of cancer cells, red blood cells (RBCs), and white blood cells (WBCs), and achieves a maximum speedup of 6X over a single-node machine by processing 10 gigabytes of raw data on an 8-node cluster in less than a minute, a task that would otherwise take hours of manual analysis. Diseases such as cancer can be mitigated if detected and treated at an early stage. Micro- and nanoscale devices, such as micropores and nanopores, enable the translocation of biological targets at fine granularity. These devices are tiny orifices in silicon-based membranes, and their output is a current signal measured in nanoamperes. A solid-state micropore can electrically measure the biophysical properties of human cells when a blood sample is passed through it. The passage of cells through such a pore produces a characteristic pattern (pulse) in the baseline current, which can be measured at very high rates, such as 500,000 samples per second or higher. The pulse is essentially a sequence of temporal data samples that abruptly falls below and then returns to the normal baseline within a predefined time interval, i.e., the pulse width. The pulse features, such as width and amplitude, correspond to the translocation behavior and the extent to which the pore is blocked under a constant potential. These features are crucial in discriminating diseased cells from healthy cells, such as identifying cancer cells in a mixture of cells. / Ph. D.
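As a rough illustration of the per-node detection kernel described above (a minimal sketch, not the dissertation's actual code), the following applies a moving-average filter to one data segment and flags samples that dip below a threshold derived from the segment's mean and standard deviation; the window length and threshold factor are assumed values chosen only for demonstration.

```python
import numpy as np

def detect_pulses(segment, window=50, k=4.0):
    """Per-segment SPMD-style kernel: moving-average smoothing followed by
    a statistical threshold (mean minus k standard deviations).
    Window length and k are illustrative, not taken from the dissertation."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(segment, kernel, mode="same")   # noise reduction

    mu, sigma = smoothed.mean(), smoothed.std()
    below = smoothed < mu - k * sigma                       # pulses dip below the baseline

    # Group consecutive below-threshold samples into individual pulses.
    edges = np.diff(below.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    return [(s, e, smoothed[s:e].min() - mu) for s, e in zip(starts, ends)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    trace = 100.0 + rng.normal(0.0, 0.5, 500_000)   # synthetic nanoampere baseline
    trace[200_000:200_300] -= 20.0                  # one synthetic translocation pulse
    print(detect_pulses(trace))                     # [(start, end, depth relative to baseline)]
```

In the distributed setup described in the abstract, each worker node would run such a kernel on its assigned segment of the raw stream, so only compact pulse summaries need to be gathered back at the server node.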
112

Effect of Corrosion on the Behavior of Reinforced Concrete Beams Subject to Blast Loading

Myers, Daniel Lloyd 13 May 2024 (has links)
Corrosion of reinforcing steel embedded in concrete, due to the presence of moisture, aggressive chemicals, inadequate cover, and other factors, can lead to deterioration that substantially reduces the strength and serviceability of the affected structure. Accounting for corrosion degradation is critical for evaluating and assessing the load-carrying capacity of existing reinforced concrete (RC) structures. However, little is known about the relationship between high-strain-rate blast loading and the degradation effects that govern corrosion-damaged structures, such as concrete cover cracking, reduction in reinforcement area, and deterioration of the bond between concrete and steel. Ten identical RC beams were constructed and tested, half under blast loading conditions produced using the Virginia Tech Shock Tube Research Facility and the other half under quasi-static loading. The blast tests were conducted to investigate how increasing blast pressure and impulse affect the global displacement response and damage modes of beams subjected to blast loads. The quasi-static tests were performed to establish fundamental data on the load-deflection characteristics of corroded RC beams. One beam from each testing group served as a control specimen and was not corroded, while the remaining beams were subjected to varying levels of corrosion (5%, 10%, 15%, and 20%) of the longitudinal reinforcement along the midspan region. The specimens were corroded using an accelerated corrosion technique in a tank of 3% sodium chloride solution with a constant electrical current, creating a controlled environment for varying levels of corrosion. An analytical model was also created using a single-degree-of-freedom (SDOF) approach to predict the performance of corroded RC beams under blast loading. The results of the quasi-static tests revealed that as corrosion levels increased, the load required to cause yielding decreased, the yield displacements decreased, and failure occurred earlier for all specimens. This was accompanied by increased damage to the concrete cover and additional longitudinal corrosion-induced cracking. For the blast-loaded specimens, the results demonstrated that the maximum and residual displacements increased beyond the expected response limits for corrosion levels greater than 5%, while at corrosion levels below 5% there was no significant change in displacements. Damage levels increased by one or more categories with the introduction of even small levels of corrosion, below 5%. At corrosion levels greater than 5%, the specimens exhibited moderate damage before loading was applied, owing to the introduction of corrosion-induced cracking; after loading, they sustained hazardous damage at progressively lower blast volumes. The failure mode changed from ductile to sudden, brittle failure at corrosion levels greater than 5% but remained ductile, with flexural failures, at corrosion levels below 5%. The experimental results could be predicted with a high level of accuracy using the SDOF approach, provided that the degraded strength of the corroded concrete cover, the degraded mechanical properties of the corroded steel, the length of the corroded region, and the presence of either uniform or pitting corrosion were accounted for. Overall, the introduction of corrosion to an RC beam subjected to blast loading resulted in decreased strength and ductility across all specimens, with the most detrimental effects occurring at corrosion levels of 5% or greater.
A recommendation is made to adjust the response limits in ASCE/SEI 59 to account for corrosion in RC beams. / Master of Science / Blast loads, resulting from either terrorist attacks or accidental explosions, pose a significant threat to the structural integrity of buildings, the life safety of occupants, and the functionality of the structure. Corrosion of reinforcing steel embedded in concrete, due to the presence of moisture, aggressive chemicals, and other factors, can lead to deterioration that substantially weakens the affected structure. Accounting for corrosion degradation is critical for evaluating and assessing the strength of existing reinforced concrete structures. However, little is known about how blast loading interacts with the degradation mechanisms that govern corrosion-damaged structures. Ten identical reinforced concrete beams were constructed and tested, half under blast loading and the other half under quasi-static loading. The blast-loaded beams were subjected to a series of increasing blast volumes until failure was reached. Five identical beams were tested under quasi-static loading to provide a baseline comparison for the blast-loaded beams. One beam from each testing group served as a control specimen and was not corroded, while the remaining beams were subjected to varying levels of corrosion of the steel reinforcement. An analytical model was also created to predict the performance of corroded reinforced concrete beams under blast loading. The results of the study showed that as corrosion levels increased, the displacements increased beyond the expected response limits. Damage levels became increasingly severe with the introduction of corrosion at all levels. The behavior changed from ductile to brittle at corrosion levels greater than 5% but remained ductile, with flexural failures, at corrosion levels below 5%. Overall, the introduction of corrosion to a concrete beam subjected to blast loading resulted in decreased strength and ductility across all specimens, with the most detrimental effects occurring at corrosion levels of 5% or greater. A recommendation is made to adjust the response limits in the code to account for corrosion in reinforced concrete beams.
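As a sketch of what an SDOF-based prediction entails (an illustration under assumed values, not the thesis's calibrated model), the code below integrates an equivalent single-degree-of-freedom system with a simplified elastic-plastic resistance function under an idealized triangular blast pulse; corrosion damage would enter such a model by reducing the stiffness and ultimate resistance inputs.

```python
import numpy as np

def sdof_blast_response(mass, k_elastic, r_ultimate, peak_force, duration,
                        dt=1e-5, t_end=0.1):
    """Central-difference integration of an equivalent SDOF system under a
    triangular blast pulse. All input values are hypothetical; a corroded
    beam would be represented by reducing k_elastic and r_ultimate."""
    n = int(t_end / dt)
    x = np.zeros(n)
    x_yield = r_ultimate / k_elastic

    def resistance(disp):
        # Simplified bilinear resistance; neglects hysteresis and permanent set.
        return np.clip(k_elastic * disp, -r_ultimate, r_ultimate)

    def force(t):
        # Idealized triangular pulse: peak at t = 0, linear decay to zero.
        return peak_force * max(0.0, 1.0 - t / duration)

    # Start-up step assuming the system is at rest at t = 0.
    x[1] = 0.5 * (force(0.0) / mass) * dt**2
    for i in range(1, n - 1):
        acc = (force(i * dt) - resistance(x[i])) / mass
        x[i + 1] = 2 * x[i] - x[i - 1] + acc * dt**2

    return x.max(), x.max() / x_yield   # peak displacement and ductility demand

if __name__ == "__main__":
    # Hypothetical inputs: 500 kg equivalent mass, 2e6 N/m stiffness,
    # 50 kN ultimate resistance, 80 kN peak reflected force, 5 ms pulse.
    peak, mu = sdof_blast_response(500.0, 2e6, 5e4, 8e4, 0.005)
    print(f"peak displacement {peak * 1000:.1f} mm, ductility demand {mu:.2f}")
```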
113

Contributions to accelerated reliability testing

Hove, Herbert 06 May 2015 (has links)
A thesis submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Doctor of Philosophy. Johannesburg, December 2014. / Industrial units cannot operate without failure forever. When the operation of a unit deviates from industrial standards, it is considered to have failed. The time from the moment a unit enters service until it fails is its lifetime. Within reliability, and often in life data analysis in general, lifetime is the event of interest. For highly reliable units, accelerated life testing is required to obtain lifetime data quickly. Accelerated tests are considered in which failure is not instantaneous but rather the end point of an underlying degradation process. Failure during testing occurs when the performance of the unit either falls to some specified threshold value, such that the unit fails to meet industrial specifications although it retains some residual functionality (degraded failure), or decreases to a critical level at which the unit cannot perform its function to any degree (critical failure). This problem formulation satisfies the random signs property, a notable competing-risks formulation originally developed in maintenance studies and extended here to accelerated testing. Since degraded and critical failures are linked through the degradation process, the open problem of modelling dependent competing risks is discussed. A copula model is assumed, and expert opinion is used to estimate the copula. Observed occurrences of degraded and critical failure times are interpreted as the times when the degradation process first crosses the corresponding failure thresholds and are therefore postulated to follow inverse Gaussian distributions. Based on the estimated copula, a use-level unit lifetime distribution is extrapolated from the test data. Reliability metrics derived from the extrapolated use-level lifetime distribution are found to differ slightly under different degrees of stochastic dependence between the risks. Consequently, adopting a degree of dependence between the risks that is believed to be realistic is an important consideration when estimating the use-level unit lifetime distribution from test data. Keywords: Lifetime; Accelerated testing; Competing risks; Copula; First passage time.
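The dependence structure described above can be made concrete with a small simulation (a hypothetical illustration, not the thesis's estimated model): degraded and critical failure times are drawn from inverse Gaussian marginals coupled through a Gaussian copula, and only the earlier of the two, together with its cause, is observed. The marginal parameters and the copula correlation below are assumed values.

```python
import numpy as np
from scipy import stats

def simulate_competing_risks(n, rho=0.6, mu_deg=1.0, lam_deg=4.0,
                             mu_crit=1.5, lam_crit=4.0, seed=0):
    """Draw (degraded, critical) failure-time pairs with inverse Gaussian
    marginals linked by a Gaussian copula with correlation rho.
    All parameter values are hypothetical."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n)
    u = stats.norm.cdf(z)                        # dependent uniform marginals via the copula

    # Inverse Gaussian first-passage marginals; scipy's invgauss(mu, scale)
    # corresponds to the usual IG(mean, shape) via mu = mean/shape, scale = shape.
    t_deg = stats.invgauss.ppf(u[:, 0], mu=mu_deg / lam_deg, scale=lam_deg)
    t_crit = stats.invgauss.ppf(u[:, 1], mu=mu_crit / lam_crit, scale=lam_crit)

    observed = np.minimum(t_deg, t_crit)         # only the first crossing is observed
    cause = np.where(t_deg <= t_crit, "degraded", "critical")
    return observed, cause

if __name__ == "__main__":
    t, c = simulate_competing_risks(10_000)
    print("mean observed lifetime:", round(t.mean(), 3))
    print("share of degraded failures:", round((c == "degraded").mean(), 3))
```

Re-running the simulation with different values of rho shows how the assumed degree of dependence shifts the observed lifetime distribution, which is the sensitivity the abstract refers to.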
114

Supernova Cosmology in an Inhomogeneous Universe

Gupta, Rahul January 2010 (has links)
The propagation of light beams originating from synthetic ‘Type Ia’ supernovae through an inhomogeneous universe with simplified dynamics is simulated using a Monte-Carlo ray-tracing method. The accumulated statistical (redshift-magnitude) distribution for these synthetic supernova observations, illustrated in the form of a Hubble diagram, produces a luminosity profile similar to the form predicted for a Dark-Energy-dominated universe. Further, the amount of mimicked Dark Energy is found to increase with the variance in the matter distribution in the universe, converging at a value of Ω_X ≈ 0.7. It can thus be postulated that, at least under the assumption of simplified dynamics, it is possible to replicate the observed supernova data in a universe with an inhomogeneous matter distribution. This also implies that, in an inhomogeneous universe, the observed luminosity and redshift cannot be put in direct correspondence with the distance of a cosmological source and the expansion rate of the universe, respectively, at a particular epoch. Such a correspondence feigns an apparent variation in dynamics, which creates the illusion of Dark Energy.
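For context on the benchmark being matched (not the ray-tracing simulation itself), the sketch below computes the distance modulus for a flat, homogeneous FLRW model with and without a dark-energy component; the Hubble constant is an assumed value. A Dark-Energy-dominated model makes supernovae at a given redshift appear fainter by several tenths of a magnitude, which is the signature the inhomogeneous simulation reproduces.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299_792.458   # speed of light, km/s
H0 = 70.0              # assumed Hubble constant, km/s/Mpc

def distance_modulus(z, omega_m, omega_x):
    """Distance modulus for a flat, homogeneous FLRW model; used only as the
    benchmark curve a Hubble diagram is compared against."""
    e = lambda zp: np.sqrt(omega_m * (1 + zp) ** 3 + omega_x)
    comoving, _ = quad(lambda zp: 1.0 / e(zp), 0.0, z)
    d_l = (1 + z) * (C_KM_S / H0) * comoving      # luminosity distance in Mpc
    return 5.0 * np.log10(d_l) + 25.0

if __name__ == "__main__":
    for z in (0.1, 0.5, 1.0):
        mu_matter = distance_modulus(z, 1.0, 0.0)   # matter-only (Einstein-de Sitter)
        mu_lambda = distance_modulus(z, 0.3, 0.7)   # Dark-Energy dominated
        print(f"z = {z:.1f}: supernovae fainter by {mu_lambda - mu_matter:+.2f} mag with dark energy")
```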
116

Pagreitintas procesas: reglamentavimo ir taikymo problemos / Accelerated process: problems of legal regulation and its application

Roščinienė, Kristina 28 January 2008 (has links)
This Master's thesis analyses problems of the institute of the accelerated process established in the second section of Chapter XXXI of the Code of Criminal Procedure of the Republic of Lithuania (LR BPK). Drawing on Lithuanian and foreign scholarly literature, the concepts of criminal procedure and its forms are first introduced, followed by a closer analysis of the notion and forms of simplified proceedings and of the concept of the accelerated process. The German and French models of accelerated proceedings that influenced the Lithuanian institute are discussed, as is the summary process that was in force in Lithuania until 1 May 2003; the latter is compared with the accelerated process as the factor that most shaped its application in practice. The greatest attention is devoted to the procedure of the accelerated process, examined through the prism of the hypothesis that problems in its practical application arise not only from statutory regulation but also from legal acts intended to unify practice, as well as from human psychology and organizational management. The stages of pre-trial investigation and of trial before the district court are analysed. The thesis discusses theoretical and practical problems on the most contentious questions: collisions of legal norms, safeguarding the principles of criminal procedure and the procedural guarantees of the suspect, the possibility of not presenting any pre-trial investigation material to the court, the understanding of clear circumstances of the commission of an offence, the application of the accelerated process to serious crimes, and the investigation time limit. Other managerial and psychological problems that directly affect the practical issues are also examined.
117

Degradation modeling and monitoring of engineering systems using functional data analysis

Zhou, Rensheng 08 November 2012 (has links)
In this thesis, we develop several novel degradation models based on techniques from functional data analysis. These models are suitable for characterizing different types of sensor-based degradation signals, whether they are censored at a fixed time point or truncated at the failure threshold. Our proposed models can also be easily extended to accommodate the effects of environmental conditions on degradation processes. Unlike many existing degradation models that rely on the existence of a historical sample of complete degradation signals, our modeling framework is well suited for modeling complete as well as incomplete (sparse and fragmented) degradation signals. We utilize these models to predict, and continuously update in real time, the residual life distributions of partially degraded components. We assess and compare the performance of our proposed models and existing benchmark models using simulated signals and real-world data sets. The results indicate that our models provide a better characterization of the degradation signals and a more accurate prediction of a system's lifetime under different signal scenarios. Another major advantage of our models is their robustness to model misspecification, which is especially important for applications with incomplete (sparse or fragmented) degradation signals.
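As a crude stand-in for the functional-data models described above (purely illustrative, with a hypothetical threshold and signal), the sketch below fits a smooth trend to a sparse, fragmented degradation signal and estimates residual life as the time until the extrapolated trend first crosses the failure threshold; the thesis's models would instead yield a full residual life distribution that is updated as new observations arrive.

```python
import numpy as np

FAILURE_THRESHOLD = 10.0   # hypothetical degradation level at which the unit fails

def estimate_residual_life(times, signal, t_now, degree=2, horizon=200.0):
    """Fit a polynomial trend to a partially observed degradation signal and
    return the estimated time remaining until it crosses the failure threshold."""
    trend = np.poly1d(np.polyfit(times, signal, degree))

    # Scan forward from the current time for the first threshold crossing.
    grid = np.linspace(t_now, t_now + horizon, 2_000)
    crossing = grid[trend(grid) >= FAILURE_THRESHOLD]
    return crossing[0] - t_now if crossing.size else np.inf

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Sparse, fragmented observations of a unit degrading roughly quadratically.
    t_obs = np.sort(rng.uniform(0, 60, size=15))
    y_obs = 0.001 * t_obs**2 + 0.05 * t_obs + rng.normal(0, 0.2, size=t_obs.size)
    print(f"estimated residual life: {estimate_residual_life(t_obs, y_obs, 60.0):.1f} time units")
```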
118

Överlevnadsanalys i tjänsteverksamhet : Tidspåverkan i överklagandeprocessen på Migrationsverket / Survival analysis in service : Time-effect in the process of appeal at the Swedish Migration Board

Minya, Kristoffer January 2014 (has links)
The Swedish Migration Board is a government agency that examines applications from people who wish to seek protection, obtain citizenship, study, or work in Sweden. Because these applications have increased sharply in recent years, the time required to reach a decision has grown. Each type of application (for example, citizenship) is a process consisting of several steps, and how a decision moves through these steps is called its flow; the Board therefore wants to increase its flow efficiency. When a decision has been made and the person has been notified of it but is not satisfied, the person may appeal, which is one of the most complex processes at the Board. The aim of this thesis is to analyse how long the appeals process takes and which steps in the process affect that time. One step, which later turns out to have a large effect on the time, is opinions: the court requests a statement on what the person who is appealing has to say about why the decision is being appealed. Two methods were relevant for this analysis: accelerated failure time (AFT) models, which can predict time to event, and multi-state models (MSM), which can analyse the effect of the time spent in each step of the flow. Opinions submitted early in the process are decisive for how quickly an appeal receives a judgment, while the number of opinions increases the time enormously. Other factors also affect the time, but not to the same extent as opinions. Since both the timing and the number of opinions matter, flow efficiency can be improved by taking the time to write a single informative opinion so that the court does not need to request additional ones.
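A minimal sketch of the accelerated failure time idea used above, on synthetic data rather than the Board's case data: a Weibull AFT model in which a covariate (here, a hypothetical count of requested opinions) multiplicatively scales the processing time, fitted by maximum likelihood with right-censored observations.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_aft_neg_loglik(params, t, event, x):
    """Negative log-likelihood of a Weibull AFT model:
    log T = b0 + b1 * x + sigma * e, with e following a minimum Gumbel law,
    allowing right-censored observations (event == 0)."""
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)
    z = (np.log(t) - (b0 + b1 * x)) / sigma
    log_f = -np.log(sigma) - np.log(t) + z - np.exp(z)   # log-density for observed events
    log_s = -np.exp(z)                                    # log-survival for censored cases
    return -np.sum(np.where(event == 1, log_f, log_s))

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    n = 2_000
    n_opinions = rng.integers(0, 4, size=n)               # hypothetical covariate
    true_scale = np.exp(4.0 + 0.35 * n_opinions)          # each opinion lengthens the process
    t = true_scale * rng.weibull(1.5, size=n)              # processing time in days
    censored_at = 400.0                                    # administrative censoring
    event = (t <= censored_at).astype(int)
    t = np.minimum(t, censored_at)

    fit = minimize(weibull_aft_neg_loglik, x0=np.array([3.0, 0.0, 0.0]),
                   args=(t, event, n_opinions), method="Nelder-Mead")
    b0, b1, log_sigma = fit.x
    print(f"estimated time ratio per additional opinion: {np.exp(b1):.2f}")
```

The fitted coefficient exp(b1) is the multiplicative effect of one more opinion on the expected processing time, which is the kind of quantity an AFT analysis of the appeal flow would report.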
119

Performance of geotextile-reinforced bases for paved roads

Saghebfar, Milad January 1900 (has links)
Doctor of Philosophy / Department of Civil Engineering / Mustaque Hossain / Geotextiles have been widely promoted for pavement structures over the past 30 years. However, there is a lack of well-instrumented, full-scale experiments investigating the effect of geotextile reinforcement on pavement design. In this study, full-scale accelerated tests were conducted on eight lanes of pavement test sections. Six of these eight sections had granular bases reinforced with different types of woven geotextiles. The reinforced-base sections and the control sections (with unreinforced bases) were paved with Superpave hot-mix asphalt. The base and subgrade materials were the same for all sections, while the test sections had different asphalt and base layer thicknesses. Each section was instrumented with two pressure cells on top of the subgrade, six strain gages on the geotextile, six H-bar strain gages at the bottom of the asphalt layer, two thermocouples, and one Time Domain Reflectometer (TDR) sensor. The sections were subjected to 250,000 to 500,000 repetitions of an 80-kN single-axle load applied by the accelerated pavement testing machine. The mechanistic response of each section was monitored and analyzed at selected numbers of wheel passes. Results indicate that properly selected and designed geotextile-reinforced bases improve pavement performance in terms of rutting and reduce the pressure at the top of the subgrade. Finite element (FE) models were developed and verified using results from the full-scale accelerated pavement tests. The calibrated model was used to investigate the effects of geotextile properties on the pavement responses. The FE analysis shows that the benefits of reinforcement are more evident when a stiffer geotextile is used.
120

Finite element analysis of hot-mix asphalt layer interface bonding

Williamson, Matthew J. January 1900 (has links)
Doctor of Philosophy / Department of Civil Engineering / Mustaque A. Hossain / A tack coat is a thin layer of asphaltic material used to bind a newly placed lift of hot-mix asphalt (HMA) pavement to a previously placed lift, or a new HMA overlay/inlay to an existing pavement. The purpose of a tack coat is to ensure that a proper bond develops so that traffic loads are carried by the entire HMA structure. Proper bonding exists when the HMA layers act as a monolithic structure, transferring loads from one layer to the next. This depends on appropriate selection of the tack coat material type and application rate, and is essential to prevent slippage failure and premature cracking in the wearing surface. This study focuses on the development of a three-dimensional finite element (FE) model of an HMA pavement structure in order to assess HMA interface bonding. The FE model was constructed using the commercially available ABAQUS software to simulate an Accelerated Pavement Testing (APT) experiment conducted at the Civil Infrastructure Systems Laboratory (CISL) at Kansas State University. Mechanistic responses measured in the CISL experiment, such as localized longitudinal strain at the interface, were used to calibrate the FE model. Once calibrated, the model was used to predict the mechanistic responses of the pavement structure by varying the tack coat properties to reflect the material characteristics of each application. The FE models successfully predicted longitudinal strains that corresponded to the APT results.
