71

Resonator sensor technique for medical use : an intraocular pressure measurement system

Eklund, Anders January 2002
In the work of this doctoral dissertation a new resonator sensor technique, first presented in 1989, has been further developed and evaluated with a focus on technical characteristics and applications within the medical field. In a first part a catheter-type tactile sensor using the resonator sensor technique was evaluated in a silicone model and applied to human prostate in vitro. The main finding was that different histological compositions of prostate tissue correlated with the frequency shift, Δf_S, of the resonator sensor and that the common property was the hardness of the tissue. The results indicated that hardness of the prostate tissue, and maybe hardness of human tissue in general, can be expressed according to a cone penetration standard (DIN ISO 2137) and that the hardness can be measured with this tactile sensor system. The tissue hardness application for the resonator sensor technique has to be further developed and evaluated in a larger study. The study also produced results that have led to a basic understanding of the resonator sensor system. One important result was that Δf_S of the sensor system was related to the contact area between sensor and sample, indicating that the resonator sensor could be used for contact-area measurement.

In a second part, containing three studies, the area-sensing capability from the first study was utilised in the development and evaluation of the applanation resonator sensor (ARS) for measurement of intraocular pressure (IOP). For the purpose of evaluating IOP tonometers, an in vitro pig-eye model was developed, and it was shown that a saline column connected to the vitreous chamber could be used successfully to induce variations in IOP. An ARS sensor with a flat contact surface was applied onto the cornea with constant force and Δf_S was measured. A mathematical model based on the Imbert-Fick law and the assumption that Δf_S is linearly related to contact area was proposed and verified with convincing results. IOP measured with the ARS correlated well (r = 0.92, n = 360) with the IOP elicited by the saline column. The ARS in a constant-force arrangement was then evaluated on healthy human subjects in vivo. The results verified the sensor principle but revealed a non-negligible source of error in off-centre positioning between the sensor and the cornea. The sensor probe was redesigned and evaluated in the in vitro model. The new probe, with a spherical contact surface against the eye, reduced the sensitivity to off-centre positioning. It was also shown that a Δf_S normalisation procedure could reduce the between-eye differences.

The ARS method for IOP measurement was further developed using combined continuous force and area measurement during the dynamic phase when the sensor initially contacts the cornea. A force sensor was included with the resonator sensor in one probe, and evaluation was performed with the in vitro pig-eye model. The hypothesis was that the IOP could be deduced from the differential change of force and area during that phase. The study showed good accuracy and good reproducibility, with a correlation of r = 0.994 (n = 414) between the measured pressure in the vitreous chamber and the IOP according to the ARS. Measurement time was short, 77 ms after initial contact. Problems with inter-eye differences and low resolution at high pressures were reduced. The ARS method is the first to combine simultaneous, continuous sampling of both parameters included in the applanation principle. Consequently, there is potential for reducing errors in clinical IOP tonometry.
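The applanation principle behind the ARS lends itself to a short numerical illustration. The sketch below is not the dissertation's calibration or algorithm; the proportionality constant `k_area` relating Δf_S to contact area and all sample values are invented for the example, and only the two relationships named in the abstract (the Imbert-Fick law and a linear area-to-frequency-shift relation) are taken from the text.

```python
# Minimal sketch of the applanation idea behind the ARS (assumed values throughout).
# Imbert-Fick law: IOP ≈ F / A, with applanation force F and contact area A.
# The abstract reports Δf_S roughly linear in contact area, so assume A ≈ k_area * Δf_S.

import numpy as np

k_area = 0.05e-6   # assumed calibration: contact area per Hz of frequency shift [m^2/Hz]

def iop_constant_force(force_N, delta_f_hz):
    """Constant-force variant: one force reading and one frequency shift."""
    area = k_area * delta_f_hz
    return force_N / area                                  # pressure in Pa

def iop_dynamic(forces_N, delta_f_hz):
    """Dynamic variant: IOP from the slope dF/dA during the initial contact phase."""
    area = k_area * np.asarray(delta_f_hz)
    slope, _ = np.polyfit(area, np.asarray(forces_N), 1)   # least-squares dF/dA
    return slope

# Made-up samples from a short contact phase
forces = [0.002, 0.004, 0.006, 0.008]     # N
shifts = [30.0, 60.0, 90.0, 120.0]        # Hz
print(iop_constant_force(0.005, 75.0))    # ≈ 1333 Pa ≈ 10 mmHg
print(iop_dynamic(forces, shifts))        # ≈ 1333 Pa ≈ 10 mmHg
```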
72

Exact Algorithms for Exact Satisfiability Problems

Dahllöf, Vilhelm January 2006
This thesis presents exact means to solve a family of NP-hard problems. Starting with the well-studied Exact Satisfiability problem (XSAT), parents, siblings and daughters are derived and studied, each with interesting practical and theoretical properties. While developing exact algorithms to solve the problems, we gain new insights into their structure and mutual similarities and differences. Given a Boolean formula in CNF, the XSAT problem asks for an assignment to the variables such that each clause contains exactly one true literal. For this problem we present an O(1.1730^n) time algorithm, where n is the number of variables. XSAT is a special case of the General Exact Satisfiability problem, which asks for an assignment such that in each clause exactly i literals are true. For this problem we present an algorithm which runs in O(2^((1-ε)n)) time, with 0 < ε < 1 for every fixed i; for i = 2, 3 and 4 we have running times in O(1.4511^n), O(1.6214^n) and O(1.6848^n) respectively. For the counting problems we present an O(1.2190^n) time algorithm which counts the number of models for an XSAT instance. We also present algorithms for #2SAT_w and #3SAT_w, two well-studied Boolean counting problems; the algorithms have running times in O(1.2561^n) and O(1.6737^n) respectively. Finally we study optimisation problems: as a variant of the Maximum Exact Satisfiability problem, consider the problem of finding an assignment exactly satisfying a maximum number of clauses while the rest are left with no true literal. This problem is reducible to #2SAT_w without the addition of new variables and thus is solvable in O(1.2561^n) time. Another interesting optimisation problem is to find two XSAT models which differ in as many variables as possible; this problem is shown to be solvable in O(1.8348^n) time.
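To make the "exactly one true literal per clause" condition concrete, here is a brute-force reference in Python. It is only a specification of the counting problem and runs in O(2^n) time; it has nothing to do with the thesis's O(1.2190^n) counting algorithm or its O(1.1730^n) decision algorithm, and the clause encoding (DIMACS-style signed integers) is an assumption of the example.

```python
from itertools import product

# A clause is a list of non-zero ints: +v means variable v, -v means its negation.
def exactly_satisfied(clause, assignment):
    """True iff exactly one literal in the clause is true under the assignment."""
    true_lits = sum(1 for lit in clause if assignment[abs(lit)] == (lit > 0))
    return true_lits == 1

def count_xsat_models(clauses, n):
    """Brute-force #XSAT: count assignments giving every clause exactly one true literal."""
    count = 0
    for bits in product([False, True], repeat=n):
        assignment = {i + 1: bits[i] for i in range(n)}
        if all(exactly_satisfied(c, assignment) for c in clauses):
            count += 1
    return count

# (x1 ∨ x2) ∧ (¬x1 ∨ x3): each clause must contain exactly one true literal.
print(count_xsat_models([[1, 2], [-1, 3]], n=3))   # prints 2
```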
73

Grain hardness and slow dry matter disappearance rate in barley

Camm, Giselle Anne 07 April 2008
Barley grain is an important source of energy and protein for ruminant animals. However, feeding must be carefully managed to avoid maladies caused by the rapid breakdown of barley starch in the rumen. The development of slower degrading barley for ruminants may alleviate health problems associated with barley grain consumption. Selection for hard endosperm may result in slower starch degradation and improved feed quality. The objectives of this study were to: examine the effect of grain hardness, variety and environment on dry matter disappearance rate (DMDR); identify accurate and efficient hardness selection tools; and study environmental effects, inheritance and heritability of hardness.

To study grain hardness and genetic and environmental effects on DMDR, two genotypes grown at multiple locations in 2004 were analyzed for Single Kernel Characterization System (SKCS) hardness, by scanning electron microscopy (SEM), and for in situ DMDR. Genotype-by-environment interaction influenced DMDR, while neither SKCS hardness nor SEM analysis accurately differentiated DMDR between genotypes.

Eight genotypes were grown at multiple locations during 2003 and 2004 to study grain hardness measurement methodology, and genetic and environmental effects on hardness. Genotypes were analyzed for SKCS hardness, milling energy, endosperm light reflectance, feed particle size, protein and beta-glucan. Hardness measurements ranked genotypes similarly across environments. Feed particle size was correlated with milling energy but not with the other hardness measurements. Hardness measurements appeared to be influenced by protein and beta-glucan.

To examine the inheritance and heritability of barley grain hardness, 245 doubled haploid (DH) genotypes and their parents, grown in 2003 and 2004, were analyzed for SKCS hardness, milling energy, protein and beta-glucan, with 100 also evaluated for light reflectance. The population exhibited normal distributions for SKCS hardness, milling energy, protein and beta-glucan, suggesting quantitative inheritance for these traits with no apparent epistatic gene interaction. Narrow-sense heritability was 0.75 for SKCS hardness and 0.41 for protein. Light reflectance was not normally distributed, suggesting complementary gene interaction. Broad-sense heritability was 0.53.

Barley grain hardness is highly heritable and an efficient tool for making selections in a breeding program. However, beta-glucan and protein may be better criteria for indirect selection of DMDR.
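For readers unfamiliar with the heritability figures quoted above, the sketch below shows the textbook variance-component form of a broad-sense heritability estimate on an entry-mean basis. The variance values, number of environments and replicates are invented for illustration and are not data from this thesis.

```python
# Broad-sense heritability from variance components of a multi-environment trial:
#   H^2 = V_G / (V_G + V_GE/e + V_e/(e*r))
# with V_G genotypic, V_GE genotype-by-environment and V_e residual variance,
# e environments and r replicates. All numbers below are made up for illustration.

def broad_sense_heritability(v_g, v_ge, v_e, n_env, n_rep):
    v_p = v_g + v_ge / n_env + v_e / (n_env * n_rep)   # phenotypic variance of genotype means
    return v_g / v_p

print(round(broad_sense_heritability(v_g=8.0, v_ge=4.0, v_e=12.0, n_env=4, n_rep=2), 2))  # 0.76
```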
74

On Covering Points with Conics and Strips in the Plane

Tiwari, Praveen 1985- 14 March 2013
Geometric covering problems have long been a focus of research in computer science. The generic geometric covering problem asks to cover a set S of n objects with another set of objects of minimum cardinality, in a geometric setting. Many versions of geometric cover have been studied in detail, one of which is line cover: given a set of points in the plane, find the minimum number of lines that cover them. In Euclidean space R^m this problem is known as Hyperplane Cover, where lines are replaced by affine hyperplanes of dimension bounded by d. Line cover is NP-hard, and so is its hyperplane analogue. This thesis focuses on a few extensions of hyperplane cover and line cover.

One of the techniques used to study NP-hard problems is fixed-parameter tractability (FPT), where, in addition to the input size, a parameter k is provided with the input instance. We ask to solve the problem with respect to k, such that the running time is a function of both n and k, strictly polynomial in n, while the exponential component is limited to k. In this thesis, we study FPT and parameterized complexity theory, the theory of classifying hard problems involving a parameter k.

We focus on two new geometric covering problems: covering a set of points in the plane with conics (conic cover) and covering a set of points with strips or fat lines of given width in the plane (fat line cover). A conic is a non-degenerate curve of degree two in the plane; a fat line is a strip of finite width w. In this dissertation, we focus on the parameterized versions of these two problems, where we are asked to cover the set of points with k conics or k fat lines. We use the existing techniques of FPT algorithms, kernelization and approximation algorithms to do a comprehensive study of these problems, starting with NP-hardness results and continuing to their parameterized hardness in terms of the parameter k. We show that conic cover is fixed-parameter tractable and give an algorithm with running time O*((k/1.38)^(4k)), where the O* notation suppresses factors polynomial in the input size. Utilizing special properties of a parabola, we are able to achieve a faster algorithm with a running time of O*((k/1.15)^(3k)).

For fat line cover, we first establish its NP-hardness and then explore algorithmic possibilities with respect to parameterized complexity theory. We show W[1]-hardness of fat line cover with respect to the number of fat lines by giving a parameterized reduction from the problem of stabbing axis-parallel squares in the plane. A parameterized reduction is an algorithm which transforms an instance of one parameterized problem into an instance of another parameterized problem using an FPT algorithm. In addition, we show that some restricted versions of fat line cover are also W[1]-hard. Further, we explore a restricted version of fat line cover in which the points have integer coordinates and only axis-parallel fat lines may be used to cover them. We show that this version is still NP-hard, that it is fixed-parameter tractable with a kernel of size O(k^2), and give an FPT algorithm with a running time of O*(3^k). Finally, we conclude our study of this problem by giving an approximation algorithm for this version with a constant approximation ratio of 2.
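A conic a·x² + b·xy + c·y² + d·x + e·y + f = 0 is determined up to scale by five points in general position, which is the fact branching algorithms for conic cover build on. The snippet below only illustrates that fact and is not the thesis's O*((k/1.38)^(4k)) algorithm: it tests whether a group of points admits a common conic by checking the rank of the associated linear system.

```python
import numpy as np

def lie_on_common_conic(points):
    """True iff all points satisfy a common (possibly degenerate) conic
    a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 with (a, ..., f) not all zero."""
    rows = [[x * x, x * y, y * y, x, y, 1.0] for x, y in points]
    # A nontrivial coefficient vector exists iff the 6-column system is rank-deficient.
    return np.linalg.matrix_rank(np.array(rows)) < 6

# Five points always admit a conic; six generic points usually do not.
circle = [(np.cos(t), np.sin(t)) for t in np.linspace(0, 5, 6)]   # six points on the unit circle
print(lie_on_common_conic(circle))                                 # True: x^2 + y^2 - 1 = 0
print(lie_on_common_conic([(0, 0), (1, 0), (2, 1), (3, 3), (0, 2), (5, 1)]))  # False for these
```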
75

Tecnología de detectores de partículas de silicio resistentes a la radiación [Technology of radiation-resistant silicon particle detectors]

Fleta Corral, María Celeste 22 September 2006
Pending
76

Anticipatory Batch Insertion To Mitigate Perceived Processing Risk

Varghese, Smitha January 2004
The literature reviewed on lot-sizing models with random yields is limited to certain random occurrences such as day-to-day administrative errors, minor machine repairs and random supply due to faulty delivery of parts. In reality, however, the manufacturing industry faces other risks that are non-random in nature. One example would be yield discrepancies caused by non-random triggers such as a change in the production process, product or material. Yield uncertainties of these types are temporary in nature and usually persist until the system stabilizes. One way of reducing the implications of such events is to have additional batches processed earlier in production that can absorb the risk associated with the event. In this thesis, this particular approach is referred to as "anticipatory batch insertion" to mitigate perceived risk. This thesis presents an exploratory study analyzing the performance of batch insertion under various scenarios. The scenarios are determined by the sensitivity of products, schedule characteristics and the magnitude of risks associated with causal triggers such as a process change. The results indicate that the highest return from batch insertion can be expected when production schedules are slightly loose, high volumes of sensitive products are produced, the costs associated with the risks are high, and the risks can be predicted with some degree of certainty.
79

Surface Hardness Improvement in Magnesium Alloy by Metallic-Glass Sputtered Film

Chen, Bo-you 21 July 2011
Pd77Cu6Si17 (PCS) thin film metallic glasses (TFMGs), with high glass forming ability and hardness, are selected as a hard coating for improving the surface hardness of the AZ31 magnesium alloy. Both micro- and nano-indentation tests are conducted on specimens with PCS film thicknesses from 30 to 2000 nm. The apparent hardness and the relative indentation depth (β) are integrated in a quantitative model, and the interaction parameters and relative hardness values are extracted from iterative calculations. According to the results, surface hardness can be enhanced greatly by PCS TFMGs in the shallow region, followed by a gradual decrease with increasing β. In addition, specimens with a thinner coating (for example, 200 nm) show greater substrate-film interaction, while those with a thick coating (for example, 2000 nm) become prone to film cracking. The optimum TFMG coating thickness in this study is estimated to be around 200 nm.

Keywords: magnesium alloys, hardness, sputtering, thin film metallic glass, nanoindentation
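The abstract does not spell out the quantitative model linking apparent hardness to the relative indentation depth β. A common choice for film-on-substrate hardness data is a Korsunsky-type composite model, sketched below; it is offered only as an example of such a fit, and the parameter values (substrate hardness, film hardness, fitting constant k) are invented, not taken from this thesis.

```python
import numpy as np

def composite_hardness(beta, h_substrate, h_film, k):
    """Korsunsky-type composite hardness model:
    H(beta) = H_s + (H_f - H_s) / (1 + k * beta**2),
    where beta is indentation depth divided by film thickness."""
    return h_substrate + (h_film - h_substrate) / (1.0 + k * beta**2)

# Invented values: soft Mg-alloy substrate (~0.9 GPa), hard metallic-glass film (~6 GPa).
betas = np.array([0.1, 0.5, 1.0, 2.0, 5.0])
print(composite_hardness(betas, h_substrate=0.9, h_film=6.0, k=1.5))
# Hardness approaches the film value at shallow depths and the substrate value at large beta.
```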
80

none

Chuang, Chia-hao 21 July 2005
Friction stir processing is applied to mix elemental thin sheets of Mg, Al and Zn in various proportions, resulting in hard intermetallic alloys with Vickers hardness in excess of 350. The Mg3Al2Zn3
