181

Developing standards for household latrines in Rwanda

Medland, Louise S. January 2014 (has links)
The issue of standards for household latrines is complex because discussions of standards in the water, sanitation and hygiene (WASH) literature tend to focus on their negative aspects and highlight cases where the misapplication of standards has caused problems. However, despite concerns about the constraints that standards can seemingly impose, there is an acknowledgement that standards can play a more positive role in supporting efforts to increase access to household latrines. The World Health Organisation has long-established and widely recognised standards for water supply quality and quantity, but there are no equivalent standards for sanitation services and currently no guidance that deals with standards for household latrines. Household latrines are a small component of a country's wider sanitation system; by considering how standards for household latrines operate within that wider system, this research aims to understand what influence standards can have on household latrines and to explore how the negative perceptions about standards and latrine building can be overcome. The development of guidance on how to develop well-written standards is the core focus of this research. The research explores the factors that can influence the development and use of a standard for household latrines in Rwanda using three data collection methods: document analysis of 66 documents (including policies and strategies, design manuals and training guides) from 17 countries across Sub-Saharan Africa, the Delphi Method involving an expert panel of 27 from Rwanda, and 38 semi-structured interviews. The research concludes that perceptions about standards for household latrines are fragmented and confused, with little consensus in Rwanda on what need a standard should meet and what role it should play. The study found that the need for a standard must be considered in the context of the wider sanitation system; otherwise it can lead to duplication of effort and increased confusion for all stakeholders. The study also found an assumed link between standards and their enforcement through regulation and punishment, which creates the negative perceptions about standards in Rwanda. Despite this aversion to standards, there are still intentions, led by national government in Rwanda and in other Sub-Saharan African countries, to promote the standardisation of latrine technologies and designs. The contribution to knowledge of this research includes a decision process, presented at the end of the study, which can be used by decision makers interested in developing a standard for household latrines. The decision process acts as a tool for outlining how a standard can operate within the national sanitation system. This understanding gives decision makers a basis for continuing the debate on what a well-written standard looks like in the national context, and supports the development of a standard that is fit for purpose and makes a positive contribution to the sector.
182

Micro-Anatomical Quantitative Imaging Towards Enabling Automated Diagnosis of Thick Tissues at the Point of Care

Mueller, Jenna Lynne Hook January 2015 (has links)
Histopathology is the clinical standard for tissue diagnosis. However, histopathology has several limitations: it requires tissue processing, which can take 30 minutes or more, and it requires a highly trained pathologist to diagnose the tissue. Additionally, the diagnosis is qualitative, and the lack of quantitation leaves room for observer-specific diagnoses. Taken together, these factors make it difficult to diagnose tissue at the point of care using histopathology.

Several clinical situations could benefit from more rapid and automated histological processing, which could reduce the time and the number of steps required between obtaining a fresh tissue specimen and rendering a diagnosis. For example, there is a need for rapid detection of residual cancer on the surface of tumor resection specimens during excisional surgeries, known as intraoperative tumor margin assessment. Additionally, rapid assessment of biopsy specimens at the point of care could enable clinicians to confirm that a suspicious lesion has been successfully sampled, thus preventing an unnecessary repeat biopsy procedure. Rapid and low-cost histological processing could also be useful in settings lacking the human resources and equipment necessary to perform standard histologic assessment. Lastly, automated interpretation of tissue samples could reduce inter-observer error, particularly in the diagnosis of borderline lesions.

To address these needs, high-quality microscopic images of the tissue must be obtained rapidly for a pathologic assessment to be useful in guiding the intervention. Optical microscopy is a powerful technique for obtaining high-resolution images of tissue morphology in real time at the point of care, without the need for tissue processing. In particular, a number of groups have combined fluorescence microscopy with vital fluorescent stains to visualize micro-anatomical features of thick (i.e., unsectioned or unprocessed) tissue. However, robust methods for segmentation and quantitative analysis of heterogeneous images are essential to enable automated diagnosis. Thus, the goal of this work was to obtain high-resolution images of tissue morphology using fluorescence microscopy and vital fluorescent stains, and to develop a quantitative strategy to segment and quantify tissue features in heterogeneous images, such as nuclei and the surrounding stroma, to enable automated diagnosis of thick tissues.

To achieve these goals, three specific aims were proposed. The first aim was to develop an image processing method that can differentiate nuclei from background tissue heterogeneity and enable automated diagnosis of thick tissue at the point of care. A computational technique called sparse component analysis (SCA) was adapted to isolate features of interest, such as nuclei, from the background. SCA has been used previously in the image processing community for image compression, enhancement, and restoration, but had never been applied to separate distinct tissue types in a heterogeneous image. In combination with a high-resolution fluorescence microendoscope (HRME) and the contrast agent acriflavine, the utility of this technique was demonstrated by imaging preclinical sarcoma tumor margins. Acriflavine localizes to the nuclei of cells, where it reversibly associates with RNA and DNA; it also shows some affinity for collagen and muscle. SCA was adapted to isolate acriflavine-positive features, or APFs (which correspond to RNA and DNA), from background tissue heterogeneity. The circle transform (CT) was applied to the SCA output to quantify the size and density of overlapping APFs. The sensitivity of the SCA+CT approach to variations in APF size, density, and background heterogeneity was demonstrated through simulations; specifically, SCA+CT achieved the lowest errors for higher contrast ratios and larger APF sizes. When applied to tissue images of excised sarcoma margins, SCA+CT correctly isolated APFs and showed consistently increased density in tumor and tumor + muscle images compared to images containing muscle. Next, variables were quantified from images of resected primary sarcomas and used to optimize a multivariate model. The sensitivity and specificity for differentiating positive from negative ex vivo resected tumor margins were 82% and 75%, respectively. The utility of this approach was further tested by imaging the in vivo tumor cavities of 34 mice after resection of a sarcoma, with local recurrence as a benchmark. When applied prospectively to images from the tumor cavity, the sensitivity and specificity for differentiating local recurrence were 78% and 82%. These results indicate that SCA+CT can accurately delineate APFs in heterogeneous tissue, which is essential to enable automated and rapid surveillance of tissue pathology.

Two primary challenges were identified in the work of aim 1. First, while SCA can be used to isolate features such as APFs from heterogeneous images, its performance is limited by the contrast between APFs and the background. Second, while it is feasible to create mosaics by scanning a sarcoma tumor bed in a mouse, which is on the order of 3-7 mm in any one dimension, it is not feasible to evaluate an entire human surgical margin this way. Thus, improvements to the microscopic imaging system were made to (1) improve image contrast by rejecting out-of-focus background fluorescence and (2) increase the field of view (FOV) while maintaining the sub-cellular resolution needed for delineation of nuclei. To address these challenges, a technique called structured illumination microscopy (SIM) was employed, in which the entire FOV is illuminated with a defined spatial pattern rather than scanning a focal spot, as in confocal microscopy.

Thus, the second aim was to improve image contrast and increase the FOV by employing wide-field, non-contact structured illumination microscopy, and to optimize the segmentation algorithm for the new imaging modality. Both image contrast and FOV were increased through the development of a wide-field fluorescence SIM system. Clear improvement in image contrast was seen in structured illumination images compared to uniform illumination images, and the FOV is over 13× larger than that of the fluorescence microendoscope used in aim 1. Initial segmentation of SIM images revealed that SCA is unable to segment large numbers of APFs in the tumor images: because the FOV of the SIM system is over 13× larger than that of the fluorescence microendoscope, the dense collections of APFs commonly seen in tumor images could no longer be sparsely represented, and the fundamental sparsity assumption of SCA was no longer met. Thus, an algorithm called maximally stable extremal regions (MSER) was investigated as an alternative approach for APF segmentation in SIM images; MSER was able to accurately segment large numbers of APFs in SIM images of tumor tissue. In addition to optimizing MSER for SIM image segmentation, the frequency of the illumination pattern used in SIM was carefully selected, because the image signal-to-noise ratio (SNR) depends on the grid frequency. A grid frequency of 31.7 mm⁻¹ led to the highest SNR and the lowest percent error in MSER segmentation.

Once MSER was optimized for SIM image segmentation and the optimal grid frequency was selected, a quantitative model was developed to diagnose mouse sarcoma tumor margins imaged ex vivo with SIM. Tumor margins were stained with acridine orange (AO) in aim 2 because AO was found to stain the sarcoma tissue more brightly than acriflavine. Both acriflavine and AO are intravital dyes that have been shown to stain nuclei, skeletal muscle, and collagenous stroma. A tissue-type classification model was developed to differentiate localized regions (75 × 75 µm) of tumor from skeletal muscle and adipose tissue based on the MSER segmentation output. Specifically, a logistic regression model was used to classify each localized region, yielding an output in terms of the probability (0-100%) that tumor was located within each 75 × 75 µm region. Model performance was tested using receiver operating characteristic (ROC) curve analysis, which revealed 77% sensitivity and 81% specificity. For margin classification, the whole margin image was divided into localized regions and the tissue-type classification model was applied. In a subset of 6 margins (3 negative, 3 positive), it was shown that with a tumor probability threshold of 50%, 8% of all regions from negative margins exceeded this threshold, while over 17% of all regions exceeded it in the positive margins. Thus, 8% of regions in negative margins were false positives. These false-positive regions are likely due to the high density of APFs present in normal tissues, which clearly demonstrates a challenge in implementing this automatic algorithm based on AO staining alone.

Thus, the third aim was to improve the specificity of the diagnostic model by leveraging other sources of contrast. Modifications were made to the SIM system to enable fluorescence imaging at a variety of wavelengths; specifically, the system was modified to enable imaging of red fluorescent protein (RFP)-expressing sarcomas, which were used to delineate the location of tumor cells within each image. Initial analysis of AO-stained panels confirmed that there was room for improvement in tumor detection, particularly with regard to false-positive regions that were negative for RFP. One approach to improving the specificity of the diagnostic model was to investigate a fluorophore more specific to tumor. Tetracycline was selected because it appeared to specifically stain freshly excised tumor tissue in a matter of minutes, and was non-toxic and stable in solution. Results indicated that tetracycline staining has promise for increasing the specificity of tumor detection in SIM images of a preclinical sarcoma model, and further investigation is warranted.

In conclusion, this work presents the development of a combination of tools capable of automated segmentation and quantification of micro-anatomical images of thick tissue. When compared to the fluorescence microendoscope, wide-field multispectral fluorescence SIM imaging provided improved image contrast, a larger FOV with comparable resolution, and the ability to image a variety of fluorophores. MSER was an appropriate and rapid approach for segmenting dense collections of APFs in wide-field SIM images. Variables that reflect the morphology of the tissue, such as the density, size, and shape of nuclei and nucleoli, can be used to automatically diagnose SIM images. The clinical utility of SIM imaging and MSER segmentation for detecting microscopic residual disease has been demonstrated by imaging excised preclinical sarcoma margins. Ultimately, this work demonstrates that fluorescence imaging of tissue micro-anatomy, combined with a specialized algorithm for delineation and quantification of features, provides a means for rapid, non-destructive and automated detection of microscopic disease, which could improve cancer management in a variety of clinical scenarios. / Dissertation
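The MSER step described above maps onto widely available tooling. Below is a minimal, hypothetical sketch of APF-style segmentation using OpenCV's MSER detector; the parameter values and the synthetic image are invented for illustration and are not the dissertation's tuned settings.

```python
# Hypothetical sketch of MSER-based segmentation of bright nuclear
# features (APFs) in a grayscale fluorescence image. Parameters are
# illustrative assumptions, not the dissertation's tuned values.
import cv2
import numpy as np

def segment_apfs(image_8bit: np.ndarray):
    """Detect MSER regions in an 8-bit grayscale fluorescence image."""
    mser = cv2.MSER_create()
    mser.setDelta(5)        # stability requirement across intensity levels
    mser.setMinArea(10)     # assumed lower bound on APF size, in pixels
    mser.setMaxArea(500)    # assumed upper bound on APF size, in pixels
    regions, _bboxes = mser.detectRegions(image_8bit)
    return regions

# Synthetic stand-in for a SIM image: dark background with bright blobs.
img = np.zeros((256, 256), dtype=np.uint8)
cv2.circle(img, (64, 64), 6, 255, -1)
cv2.circle(img, (128, 100), 8, 200, -1)
print(f"{len(segment_apfs(img))} candidate APFs found")
# Region size/density statistics would then feed a tissue-type
# classifier such as the logistic regression model described above.
```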
183

STRUCTURED SOFTWARE DESIGN IN A REAL-TIME CONTROL APPLICATION

DeBrunner, Keith E. 10 1900 (has links)
International Telemetering Conference Proceedings / October 22-25, 1984 / Riviera Hotel, Las Vegas, Nevada / Software for real-time (time-critical) control applications has been shown in military and industry studies to be a very expensive type of software effort. This type of software is not typically addressed in discussions of software architecture design methods and techniques, so the software engineer is usually left with a sparse design "tool kit" when confronted with overall system design involving time-critical and/or control problems. This paper outlines the successful application of data flow and transaction analysis design methods to achieve a structured yet flexible software architecture for a fairly complex antenna controller used in automatic tracking antenna systems. Interesting adaptations of, and variations on, techniques described in the literature are discussed, as are issues of modularity, coupling, morphology, global data handling, and evolution (maintenance). Both positive and negative aspects of this choice of design method are outlined, and the importance of a capable real-time executive and of conditional compilation and assembly is stressed.
184

The development, implementation and evaluation of a short course in Objective Structured Clinical Examination (OSCE) skills

De Villiers, Adele 03 1900 (has links)
Thesis (MPhil)--University of Stellenbosch, 2011. / Introduction: Objective Structured Clinical Examination (OSCE) examiner training is widely employed to address some of the reliability and validity issues that accompany the use of this assessment tool. An OSCE skills course was developed and implemented at the Stellenbosch Faculty of Health Sciences, and its influence on participants (clinicians) was evaluated. Method: Participants attended the OSCE skills course, which included theoretical sessions on topics such as standard-setting, examiner influence and assessment instruments, as well as two staged OSCEs, one at the beginning and the other at the end of the course. During the latter, each participant examined a student role-player performing a technical skill while being video recorded. Participants' behaviour and assessment results from the two OSCEs were evaluated, along with feedback from participants about the course and group interviews with student role-players. Results: There was a significant improvement in inter-rater reliability, as well as a slight decrease in inappropriate examiner behaviour, such as teaching and prompting during assessment of students. Furthermore, overall feedback from participants and the perceptions of student role-players were positive. Discussion: In this study, examiner conduct and inter-rater reliability were positively influenced by the following interventions: examiner briefing; involvement of examiners in constructing assessment instruments; and examiners viewing (on DVD) and reflecting on their own assessment behaviour. Conclusion: This study proposes that developing and implementing an OSCE skills course is a worthwhile endeavour for improving the validity and reliability of the OSCE as an assessment tool.
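The abstract reports an improvement in inter-rater reliability without naming the statistic used. One common choice for paired examiner marks is Cohen's kappa; a minimal sketch with invented checklist scores, assuming scikit-learn is available, follows.

```python
# Sketch: quantifying agreement between two OSCE examiners with
# Cohen's kappa. The pass/fail checklist marks below are invented,
# and the study may have used a different reliability statistic.
from sklearn.metrics import cohen_kappa_score

examiner_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
examiner_b = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]

kappa = cohen_kappa_score(examiner_a, examiner_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, ~0 = chance
```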
185

Mass transfer area of structured packing

Tsai, Robert Edison 20 October 2010 (has links)
The mass transfer area of nine structured packings was measured as a function of liquid load, surface tension, liquid viscosity, and gas rate in a 0.427 m (16.8 in) ID column via absorption of CO₂ from air into 0.1 mol/L NaOH. Surface tension was decreased from 72 to 30 mN/m via the addition of a surfactant (TERGITOL™ NP-7). Viscosity was varied from 1 to 15 mPa·s using poly(ethylene oxide) (POLYOX™ WSR N750). A wetted-wall column was used to verify the kinetics of these systems; literature model predictions matched the wetted-wall column data within 10%, and these models were applied in the interpretation of the packing results. The packing mass transfer area was most strongly dictated by geometric area (125 to 500 m²/m³) and liquid load (2.5 to 75 m³/m²·h, or 1 to 30 gpm/ft²). A reduction in surface tension enhanced the effective area; the difference was more pronounced for the finer (higher surface area) packings (15 to 20%) than for the coarser ones (10%). Gas velocity (0.6 to 2.3 m/s), liquid viscosity, and channel configuration (45° vs. 60°, or smoothed element interfaces) had no appreciable impact on the area. Surface texture (embossing) increased the area by 10% at most. The ratio of effective area to specific area (aₑ/aₚ) was correlated within ±13% for the experimental database: [mathematical formula]. This area model is believed to offer better predictive accuracy than the alternatives in the literature, particularly under aqueous conditions. Supplementary hydraulic measurements were obtained. The channel configuration significantly impacted the pressure drop: for a 45°-to-60° inclination change, pressure drop decreased by more than a factor of two and capacity expanded by 20%. Upwards of a two-fold increase in hold-up was observed from 1 to 15 mPa·s. Liquid load strongly affected both pressure drop and hold-up, increasing them severalfold over the operational range. An economic analysis of an absorber in a CO₂ capture process was performed. Mellapak™ 250X yielded the most favorable economics of the investigated packings; the minimum cost for a 7 m MEA system was around $5-7/tonne CO₂ removed for capacities in the 100 to 800 MW range. / text
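For readers unfamiliar with the chemical-absorption method, the sketch below shows the conventional way an effective area is backed out of CO₂ absorption measurements, assuming plug flow of gas, dilute CO₂, and a pseudo-first-order reaction with hydroxide (so the overall coefficient K_G comes from independently verified kinetics, such as the wetted-wall data). The equation form is the standard one; all numerical values are illustrative, not data from this work.

```python
# Hedged sketch: effective area from CO2 absorption into NaOH, using
#   a_e = u_G * ln(y_in / y_out) / (K_G * R * T * Z)
# (plug flow of gas, dilute CO2, negligible CO2 equilibrium backpressure).
# All numbers are illustrative assumptions, not measurements from this work.
import math

u_G = 1.0                      # superficial gas velocity, m/s
y_in, y_out = 400e-6, 160e-6   # CO2 mole fractions entering/leaving
K_G = 5.0e-7                   # overall gas-phase coefficient, mol/(m^2*Pa*s),
                               # taken from wetted-wall kinetics in practice
R = 8.314                      # gas constant, Pa*m^3/(mol*K)
T = 298.15                     # temperature, K
Z = 3.0                        # packed bed height, m

a_e = u_G * math.log(y_in / y_out) / (K_G * R * T * Z)
print(f"effective area ~ {a_e:.0f} m^2/m^3")  # ~246 for these inputs
```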
186

OPTIMAL PHASE MEASURING PROFILOMETRY TECHNIQUES FOR STATIC AND DYNAMIC 3D DATA ACQUISITION

Yalla, Veeraganesh 01 January 2006 (has links)
Phase Measuring Profilometry (PMP) is an important technique used in 3D data acquisition, and many variations of it exist. The technique involves projecting phase-shifted versions of sinusoidal patterns of known frequency; the 3D information is obtained from the amount of phase deviation that the target object introduces into the captured patterns. Using patterns based on a single frequency requires projecting a large number of patterns to achieve minimal reconstruction errors. By using more than one frequency (multi-frequency), the error is reduced for the same total number of projected patterns as in the single-frequency case. The first major goal of this research is to minimize the error in 3D reconstruction for a given scan time using multiple-frequency sine wave patterns. A mathematical model, based on stochastic analysis, is given to estimate the optimal frequency values and the number of phase-shift patterns. Experiments implementing this model to estimate the optimal frequencies and the number of patterns projected at each frequency level show reduced 3D reconstruction errors and improved quality of the 3D data, validating the proposed model. The second major goal of this research is the implementation of a post-processing algorithm based on stereo correspondence matching adapted to structured light illumination. A composite pattern is created by combining multiple phase-shift patterns using principles from communication theory; the composite pattern is a novel technique for obtaining real-time 3D depth information. The depth obtained by demodulating captured composite patterns is generally noisy compared to the multi-pattern approach, so to obtain realistic 3D depth information we propose a post-processing algorithm based on dynamic programming. Two different communication theory principles, Amplitude Modulation (AM) and Double Side Band Suppressed Carrier (DSBSC), are used to create the composite patterns. As a result of this research, we developed a series of low-cost structured light scanners based on the multi-frequency PMP technique and tested their accuracy in different 3D applications. Three such scanners with different camera systems have been delivered to Toyota for vehicle assembly-line inspection; all of the scanners use off-the-shelf components. Two more scanners, a single-fingerprint scanner and a palmprint scanner developed as part of the Department of Homeland Security grant, are in the prototype and testing stages.
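The core PMP computation can be illustrated compactly: project N sinusoids shifted by 2π/N and recover the wrapped phase per pixel with the textbook N-step estimator. The NumPy sketch below shows only that step, not the dissertation's multi-frequency optimization or composite-pattern demodulation.

```python
# Textbook N-step phase-shifting estimator (a sketch, not this work's
# multi-frequency method). images[n] is the camera capture of a pattern
# shifted by 2*pi*n/N; the output is wrapped phase in (-pi, pi].
import numpy as np

def wrapped_phase(images: np.ndarray) -> np.ndarray:
    N = images.shape[0]
    n = np.arange(N).reshape(-1, 1, 1)
    num = np.sum(images * np.sin(2 * np.pi * n / N), axis=0)
    den = np.sum(images * np.cos(2 * np.pi * n / N), axis=0)
    return np.arctan2(num, den)  # unwrapping/multi-frequency fusion follow

# Self-check on a synthetic phase ramp with four shifts:
N, H, W = 4, 4, 4
phi = np.linspace(0, np.pi / 2, H * W).reshape(H, W)
imgs = np.stack([100 + 50 * np.cos(phi - 2 * np.pi * k / N) for k in range(N)])
assert np.allclose(wrapped_phase(imgs), phi)
```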
187

Three-dimensional hybrid grid generation with application to high Reynolds number viscous flows

Athanasiadis, Aristotelis 29 June 2004 (has links)
In this thesis, an approach is presented for the generation of grids suitable for the simulation of high Reynolds number viscous flows in complex three-dimensional geometries. The automatic and reliable generation of such grids is today one of the biggest bottlenecks in the industrial CFD simulation environment. In the proposed approach, unstructured tetrahedral grids are employed for the regions far from the viscous boundaries of the domain, while semi-structured layers of high-aspect-ratio prismatic and hexahedral elements are used to provide the necessary grid resolution inside the boundary layers and normal to the viscous walls. The definition of the domain model is based on the STEP ISO standard, and the topological information contained in the model is used to apply the hierarchical grid generation parameters defined by the user. An efficient, high-quality and robust algorithm is presented for the generation of the unstructured simplicial (triangular or tetrahedral) part of the grid; the algorithm is based on the Delaunay triangulation, and the internal grid points are created following a centroid or frontal approach. For the surface grid generation, a hybrid approach similar to that used for the volume is also proposed: semi-structured grids are generated on the surface (both on the edges and faces of the domain) to improve the grid resolution around convex and concave ridges and corners, by aligning the grid elements with the directions of high solution gradients along the surface. A method is also developed for automatically setting the surface grid generation parameters based on the curvature of the surface, in order to obtain an accurate and smooth surface grid. Finally, a semi-structured prismatic/hexahedral grid generation algorithm is presented for the part of the grid close to the viscous walls of the domain. The algorithm is further extended with improvements intended to increase the grid quality around concave and convex ridges of the domain, where semi-structured grids are known to be inadequate. The combined methodology is demonstrated on a variety of complex examples, mainly from the automotive and aeronautical industries.
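As a small illustration of the unstructured portion of such a pipeline only, the sketch below triangulates scattered points with SciPy's Delaunay wrapper. The thesis's generator adds centroid/frontal point creation, surface gridding, and the semi-structured prism/hex layers, none of which are reproduced here.

```python
# Illustrative sketch: Delaunay triangulation of scattered far-field
# points with SciPy. A 2D stand-in is used; the same call on (n, 3)
# points yields tetrahedra. Boundary-layer resolution would instead
# come from extruded prismatic/hexahedral layers, not shown here.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
points = rng.random((200, 2))
tri = Delaunay(points)
print(f"{len(tri.simplices)} simplices from {len(points)} points")
```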
188

Structured Text Compiler Targeting XML

Hassan, Jawad January 2010 (has links)
No description available.
189

An artefact to analyse unstructured document data stores / by André Romeo Botes

Botes, André Romeo January 2014 (has links)
Structured data stores have been the dominant technologies for the past few decades. Although dominant, structured data stores lack the functionality to handle the 'Big Data' phenomenon. A new technology has recently emerged which stores unstructured data and can handle the 'Big Data' phenomenon. This study describes the development of an artefact to aid in the analysis of NoSQL document data stores in terms of relational database model constructs. Design science research (DSR) is the methodology implemented in the study; it is used to assist in understanding, designing and developing the problem, artefact and solution. The study explores the existing literature on DSR, in addition to structured and unstructured data stores; the literature review formulates the descriptive and prescriptive knowledge used in the development of the artefact. The artefact is developed using a series of six activities derived from two DSR approaches. The problem domain is derived from the existing literature and from a real application environment (RAE): the reviewed literature provided a general problem statement, and a representative from NFM (the RAE) was interviewed for a situation analysis providing a specific problem statement. An objective is formulated for the development of the artefact, and suggestions are made to address the problem domain in support of that objective. The artefact is designed and developed using the descriptive knowledge of structured and unstructured data stores, combined with prescriptive knowledge of algorithms, pseudo-code, continuous design and object-oriented design. The artefact evolves through multiple design cycles into a final product that analyses document data stores in terms of relational database model constructs. The artefact is evaluated for acceptability and utility, which provides credibility and rigour to the research in the DSR paradigm: acceptability is demonstrated through simulation, and utility is evaluated in the real application environment, with a representative from NFM interviewed for the evaluation. Finally, the study is communicated by describing its findings, summarising the artefact and looking into future possibilities for research and application. / MSc (Computer Science), North-West University, Vaal Triangle Campus, 2014
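The artefact's core task, recovering relational-model constructs from schemaless documents, can be sketched in a few lines: scan a collection of JSON-like documents and tally candidate columns, their value types, and whether they are nullable. The documents and field names below are invented; this illustrates the idea, not the artefact itself.

```python
# Invented illustration of the idea behind the artefact: derive
# relational "column" candidates (name, type, nullability) from a
# collection of schemaless JSON-like documents.
from collections import defaultdict

docs = [  # stand-ins for documents pulled from a NoSQL document store
    {"_id": 1, "name": "pump A", "pressure": 4.2},
    {"_id": 2, "name": "pump B", "pressure": 3.9, "serviced": True},
    {"_id": 3, "name": "valve C"},
]

field_types = defaultdict(set)
field_count = defaultdict(int)
for doc in docs:
    for field, value in doc.items():
        field_types[field].add(type(value).__name__)
        field_count[field] += 1

for field in sorted(field_types):
    nullable = "" if field_count[field] == len(docs) else " (nullable)"
    print(f"{field}: {'/'.join(sorted(field_types[field]))}{nullable}")
```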
190

A case study to evaluate the introduction of Objective Structured Clinical Examination (OSCE) within a School of Pharmacy

O'Hare, Roisin January 2014 (has links)
Healthcare education is continually evolving to reflect therapeutic advances in patient management. Society demands assurances regarding the ongoing competence of healthcare professionals (HCPs), including pharmacists. The use of OSCEs to evaluate the competence of medical staff as well as nurses is well documented in the literature; however, evidence of their use with undergraduate pharmacy students is still sparse.
