1

DAS Writeback: A Collaborative Annotation System for Proteins

Salazar O., Gustavo A. 01 January 2010 (has links)
We designed and developed a collaborative annotation system for proteins called DAS Writeback, which extends the Distributed Annotation System (DAS) to provide the functionality of adding, editing and deleting annotations. A great deal of effort has gone into gathering information about proteins over the last few years. By June 2009, UniProtKB/Swiss-Prot, a curated database, contained over four hundred thousand sequence entries, and UniProtKB/TrEMBL, a database with automated annotation, contained over eight million sequence entries. Every protein is annotated with relevant information, which needs to be efficiently captured and made available to other research groups. These annotations include the structure, the function and the biochemically relevant residues. Several research groups have taken on the task of making this information accessible to the community; however, information flow in the opposite direction has not been extensively explored. Users are currently passive actors who consume one or several sources of protein annotations and have no immediate way to provide feedback to the source if, for example, they detect a mistake or want to add information. Any change has to be made by the owner of the database. This project tackles the current inability to feed information back to a database. The solution consists of an extension of the DAS protocol that defines the communication rules between the client and the writeback server following the Uniform Interface of the RESTful architecture. A protocol extension was proposed to the DAS community, and implementations of both server and client were created in order to have a fully functional system. For the server, writing functionality was added to MyDAS, a widely used DAS server. The writeback client is an extended version of the web-based protein client Dasty2. The involvement of the DAS community and other potential users was a fundamental component of this project. The architecture was designed with input from the specialized DAS forum, a prototype was then created, and it was subsequently presented at the DAS workshop 2009. The feedback from the forum and workshop was used to refine the architecture and implement the system. A usability experiment in which potential users of the system emulated a real annotation task demonstrated that DAS Writeback is effective, usable and provides an appropriate environment for the creation and evolution of a protein annotation community. Although the scope of this research is limited to protein annotations, the specification was defined in a general way. It can therefore be used for other types of information supported by DAS, implying that the server is versatile enough to be used in other scenarios without major modifications.
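The abstract describes a REST-style writeback interface. As a loose illustration of what such an exchange might look like, the sketch below creates, edits and deletes an annotation over HTTP; the endpoint URL, resource layout and payload are invented for illustration and are not the actual DAS writeback specification.

```python
# Hypothetical writeback exchange following a REST uniform interface.
# The server URL, resource names and XML payload are illustrative assumptions.
import requests

WRITEBACK = "http://example.org/das/writeback"  # hypothetical writeback endpoint

annotation = """<?xml version="1.0"?>
<DASGFF>
  <GFF>
    <SEGMENT id="P05067" start="672" stop="713">
      <FEATURE id="user-feature-1" label="Community annotation">
        <TYPE id="region">manual annotation</TYPE>
        <METHOD id="community">community curation</METHOD>
      </FEATURE>
    </SEGMENT>
  </GFF>
</DASGFF>"""

headers = {"Content-Type": "application/xml"}

# Create a new annotation: POST to the writeback collection.
resp = requests.post(WRITEBACK, data=annotation, headers=headers)
print(resp.status_code)

# Edit an existing annotation: PUT to the individual resource.
requests.put(f"{WRITEBACK}/user-feature-1", data=annotation, headers=headers)

# Delete the annotation: DELETE the individual resource.
requests.delete(f"{WRITEBACK}/user-feature-1")
```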
2

Can Health Workers capture data using a generic mobile phone with sufficient accuracy for Capture at Source to be used for Clinical Research Purposes?

Workman, Michael L 01 October 2013 (has links)
Objective: To determine the accuracy, measured by error rate, with which Clinical Research Workers (CRWs) with minimal experience in data entry could capture data on a feature phone during an interview, using two different mobile phone applications, compared with the accuracy with which they could record data on paper Case Report Forms (CRFs). Design: A comparative study was performed in which 10 participating CRWs performed 90 mock interviews using either paper CRFs or one of two mobile phone applications. The phone applications were a commonly used open source application and an application custom built for this study that followed a simplified, less flexible user interface paradigm. The answers to the interview questions were randomly generated and provided to the interviewees in sealed envelopes prior to the scheduling of the mock interviews. Error rates of the captured data were calculated relative to the randomly generated expected answers. Results and Conclusion: The study aimed to show that error rates of clinical research data captured using a mobile phone application would not be inferior to those of data recorded on paper CRFs. For the custom application, this desired result was not found unequivocally. An error in judgment when designing the custom phone application resulted in dates being captured in a manner unfamiliar to the study participants, leading to high error rates for this type of data. If this error is allowed for by excluding the date type from the results for the custom application, the custom application is shown to be non-inferior, at the 95% confidence level, to standard paper forms when capturing data for clinical research. Analysis of the results for the open source application showed that using this application for data capture was inferior to paper CRFs. Secondary analysis showed that error rates for data captured on the custom mobile phone application by non-computer-literate users were significantly lower, at the 95% confidence level, than the error rates for data recorded by the same users on paper and for data captured by computer-literate users using the custom application. This result confirms that even non-computer-literate users can capture data accurately using a feature phone with a simplified user interface.
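As a small illustration of the arithmetic behind such error-rate comparisons, the sketch below computes an error rate with a normal-approximation 95% confidence interval; the counts are invented, and the thesis's actual non-inferiority analysis may differ.

```python
# Error rate with a 95% normal-approximation confidence interval.
# The error and field counts below are hypothetical, not study data.
import math

def error_rate_ci(errors, fields, z=1.96):
    p = errors / fields
    half = z * math.sqrt(p * (1 - p) / fields)
    return p, (max(0.0, p - half), min(1.0, p + half))

paper = error_rate_ci(errors=42, fields=2700)   # hypothetical paper-CRF counts
phone = error_rate_ci(errors=35, fields=2700)   # hypothetical phone-app counts
print("paper:", paper)
print("phone:", phone)
```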
3

Developing locally relevant applications for rural South Africa: a telemedicine example

Chetty, Marshini 01 December 2005 (has links)
Within developing countries there is a digital divide between rural and urban areas. In order to overcome this divide, we need to provide locally relevant Information and Communication Technology (ICT) services to rural areas. Traditional software development methodologies are not suitable for developing software for rural and underserviced areas because they cannot take into account the unique requirements and complexities of such areas. We set out to find the most appropriate way to engineer suitable software applications for rural communities, and developed a methodological framework for creating software applications for a rural community. We also critically examined the restrictions that current South African telecommunications legislation places on software development for underserviced areas. Our socially aware computing framework for creating software applications uses principles from Action Research and Participatory Design as well as best-practice guidelines; it helps us address all issues affecting project success. The validity of the framework was demonstrated by using it to create the Multi-modal Telemedicine Intercommunicator (MuTI), a prototype system for remote health consultation for a rural community. MuTI allowed for synchronous and asynchronous communication between a clinic in one village and a hospital in a neighbouring village, nearly 20 kilometres away, in the Eastern Cape province of South Africa. It used Voice over Internet Protocol (VoIP) combined with a store-and-forward approach for communication, and was tested over a Wireless Fidelity (WiFi) network for several months. Our socially aware framework proved to be appropriate for developing locally relevant applications for rural areas in South Africa, and we found that MuTI was an improvement on the previous telemedicine solution in the target community. Using the approach also led to several insights into best practice for ICT development projects. We also found that VoIP and WiFi are relevant technologies for rural regions and that further telecommunications liberalisation in South Africa is required in order to spur technological development in rural and underserviced areas.
4

Algorithms for efficiently and effectively matching agents in microsimulations of sexually transmitted infections

Geffen, Nathan 01 January 2018 (has links)
Mathematical models of the HIV epidemic have been used to estimate incidence, prevalence and life expectancy, as well as the benefits and costs of public health interventions, such as the provision of antiretroviral treatment. Models of sexually transmitted infection epidemics attempt to account for varying levels of risk across a population based on diverse, or heterogeneous, sexual behaviour. Microsimulations are a type of model that can account for fine-grained heterogeneous sexual behaviour. This requires pairing individuals, or agents, into sexual partnerships whose distribution matches that of the population being studied, to the extent this is known. But pair-matching is computationally expensive, and there is a need for computer algorithms that pair-match quickly. In this work we describe the role of modelling in responses to the South African HIV epidemic. We also chronicle a three-decade debate, greatly influenced since 2008 by a mathematical model, on the optimal time for people with HIV to start antiretroviral treatment. We then present and analyse several pair-matching algorithms and compare them in a microsimulation of a fictitious STI. We find that there are algorithms, such as Cluster Shuffle Pair-Matching, that offer a good compromise between speed and approximating the distribution of sexual relationships of the study population. An interesting further finding is that infection incidence decreases as population size increases, all other things being equal. Whether this is an artefact of our methodology or a natural-world phenomenon is unclear and a topic for further research.
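As a rough sketch of the cluster-shuffle idea, the code below sorts agents by a behavioural key, shuffles within clusters, and greedily matches each agent to the best candidate in a small look-ahead window. The clustering key, cluster count, window size and distance function are invented for illustration and are not the thesis's implementation.

```python
# Illustrative cluster-shuffle pair-matching over a synthetic agent population.
import random
from dataclasses import dataclass

@dataclass
class Agent:
    ident: int
    risk: float                     # behavioural attribute driving partner choice
    partner: "Agent | None" = None

def distance(a: Agent, b: Agent) -> float:
    """Lower is a better match; here simply similarity in risk behaviour."""
    return abs(a.risk - b.risk)

def cluster_shuffle_match(agents, clusters=50, neighbours=10, rng=random):
    agents = [a for a in agents if a.partner is None]
    agents.sort(key=lambda a: a.risk)            # order by the clustering key
    size = max(1, len(agents) // clusters)
    for i in range(0, len(agents), size):        # shuffle within each cluster
        chunk = agents[i:i + size]
        rng.shuffle(chunk)
        agents[i:i + size] = chunk
    for i, a in enumerate(agents):               # greedy match in a window
        if a.partner is not None:
            continue
        candidates = [b for b in agents[i + 1:i + 1 + neighbours]
                      if b.partner is None]
        if candidates:
            best = min(candidates, key=lambda b: distance(a, b))
            a.partner, best.partner = best, a

population = [Agent(i, random.random()) for i in range(1000)]
cluster_shuffle_match(population)
print(sum(a.partner is not None for a in population), "agents paired")
```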
5

Design of a prototype mobile application interface for efficient accessing of electronic laboratory results by health clinicians

Chigudu, Kumbirai 01 January 2018 (has links)
There is a significant increase in demand for rapid laboratory diagnoses so that clinicians can make informed medical decisions and prescribe the correct medication within a limited time. Since no further informed action can be taken on the patient until the laboratory report reaches the clinician, delivery of the report becomes a critical path in the value chain of the laboratory testing process. The National Health Laboratory Service (NHLS) currently delivers laboratory results in three ways: as physical paper reports; electronically through a web application; and, for short-turnaround, high-priority test results such as human immunodeficiency virus (HIV) and tuberculosis (TB), via short message service (SMS) printers in remote rural clinics. However, despite its inefficiencies, the paper report remains the most commonly used method. As turnaround times for basic and critical laboratory tests remain a great challenge for the NHLS to meet its specified targets, there is a need to shift the final delivery method from paper to a secure, paperless electronic result delivery system. The recently implemented centralised TrakCare Lab laboratory information system (LIS) makes provision for delivery of electronic results via a web application, 'TrakCarewebview'. However, uptake of TrakCarewebview has been very low because the application is cumbersome: it takes users through nine steps to obtain results and is not designed for mobile devices. In addition, access in remote rural health care facilities is a great challenge because of the lack of supporting infrastructure. There is therefore an obvious gap, and considerable potential, in the diagnostic result delivery system, calling for the design and development of a less complex, cost-effective and usable mobile application for electronic delivery of laboratory results. After research ethics clearance was obtained from the University's Faculty of Science Research Ethics Committee, the study proceeded. A survey of public sector clinicians across South Africa indicated that 98% have access to the internet through smartphones, and 93% indicated that they would use their mobile devices to access electronic laboratory results. A significant number of clinicians believe that the use of a mobile application in health facilities will improve patient care. This belief set a strong basis for designing and developing a mobile application for laboratory results. The study aims to design and develop a mobile application prototype that demonstrates the capability of delivering electronic laboratory test results to clinicians on their smart devices via a usable mobile application. The design of the prototype was driven by user-centred design (UCD) principles in order to develop an effective design. Core to the process is the design step, which establishes the user requirements specifications that meet user expectations. The study substantiated the importance of design as the initial critical step in obtaining a good final product. The prototype was developed through an iterative process alternating between prototype development and evaluation. The iterations consisted of a single paper-prototyping iteration followed by two further iterations using the interactive Justinmind prototyping tool.
Across the development iterations, cognitive walkthroughs and heuristic evaluation were used to assess the usability of the initial prototype. The final prototype was then evaluated using the system usability scale (SUS), a quantitative survey instrument that measures the effectiveness and perceived usability of an application. The application scored an average SUS score of 77, significantly above the commonly cited average of 68; by the standard SUS interpretation, 80 is considered an excellent score, while a score below 68 is below average. The evaluation was conducted by the potential user group that was involved in the initial design process. The ability of the interactive prototyping tool (Justinmind) to mimic the final product gave end users a feel for the actual product, providing the evaluation outcome with a strong basis for developing the actual product.
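For reference, the standard SUS score is computed from ten responses on a 1-5 scale as in the small sketch below; the example responses are hypothetical, not data from the study.

```python
# Standard System Usability Scale (SUS) scoring: odd-numbered items contribute
# (response - 1), even-numbered items contribute (5 - response), and the sum
# is scaled by 2.5 onto a 0-100 range.
def sus_score(responses):
    """responses: list of 10 answers on a 1-5 Likert scale, item 1 first."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

example = [5, 2, 4, 1, 4, 2, 5, 1, 4, 2]   # hypothetical respondent
print(sus_score(example))                   # 85.0 for this made-up answer set
```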
6

A Comparison of Statistical and Geometric Reconstruction Techniques: Guidelines for Correcting Fossil Hominin Crania

Neeser, Rudolph 01 January 2007 (has links)
The study of human evolution centres, to a large extent, on the study of fossil morphology, including the comparison and interpretation of these remains within the context of what is known about morphological variation within living species. However, many fossils suffer from environmentally caused damage (taphonomic distortion) which hinders any such interpretation: fossil material may be broken and fragmented, while the weight and motion of overlying sediments can cause plastic distortion. To date, a number of studies have focused on the reconstruction of such taphonomically damaged specimens. These studies have used myriad approaches to reconstruction, including thin plate spline methods, mirroring, and regression-based approaches. The efficacy of these techniques remains to be demonstrated, and it is not clear how different parameters (e.g., sample sizes, landmark density) might affect their accuracy. In order to partly address this issue, this thesis examines three techniques used in the virtual reconstruction of fossil remains by statistical or geometric means: mean substitution, thin plate spline (TPS) warping, and multiple linear regression. These methods are compared by reconstructing the same sample of individuals using each technique. Samples drawn from Homo sapiens, Pan troglodytes, Gorilla gorilla, and various hominin fossils are reconstructed by iteratively removing and then estimating landmarks. The testing determines the methods' behaviour in relation to the extent of landmark loss (i.e., amount of damage), reference sample size (the data used to guide the reconstructions), and the species of the population from which the reference sample is drawn (which may differ from the species of the damaged fossil). Given a large enough reference sample, the regression-based method is shown to produce the most accurate reconstructions. Various parameters affect this: when using small reference samples drawn from a population of the same species as the damaged specimen, thin plate splines are the better method, but only as long as there is little damage. As the damage becomes severe (30% or more of the landmarks missing), mean substitution should be used instead: thin plate splines are shown to exhibit rapid error growth with the amount of damage. When the species of the damaged specimen is unknown, or it is the only known individual of its species, the smallest reconstruction errors are obtained with a regression-based approach using a large reference sample drawn from a living species. Testing shows that reference sample size (combined with the use of multiple linear regression) is more important than morphological similarity between the reference individuals and the damaged specimen. The main contribution of this work is a set of recommendations to the researcher on which of the three methods to use, based on the amount of damage, the number of reference individuals, and the species of the reference individuals.
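A minimal sketch of the regression-based idea follows: the coordinates of missing landmarks are regressed on the observed ones across a complete reference sample, then predicted for the damaged specimen. The data and dimensions are synthetic, and a real analysis would first place all configurations in a common orientation (e.g., Procrustes superimposition).

```python
# Regression-based estimation of missing landmarks from a reference sample.
import numpy as np

rng = np.random.default_rng(0)
n_ref, n_landmarks = 60, 20
reference = rng.normal(size=(n_ref, n_landmarks * 3))    # flattened (x, y, z)

missing = [4, 11]                                         # damaged landmark indices
miss_cols = [3 * j + d for j in missing for d in range(3)]
obs_cols = [c for c in range(n_landmarks * 3) if c not in miss_cols]

X = np.hstack([np.ones((n_ref, 1)), reference[:, obs_cols]])   # add intercept
Y = reference[:, miss_cols]
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)              # multiple linear regression

damaged = rng.normal(size=n_landmarks * 3)                # stand-in damaged specimen
x = np.hstack([1.0, damaged[obs_cols]])
estimated = x @ coef                                      # predicted missing coordinates
print(estimated.reshape(len(missing), 3))
```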
7

Real-time Generation of Procedural Forests

Kenwood, Julian 01 January 2014 (has links)
The creation of 3D models for games and simulations is generally a time-consuming and labour-intensive task. Forested landscapes are an important component of many large virtual environments in games and film. Creating the many individual tree models required for forests demands a large number of artists and a great deal of time. In order to reduce modelling time, procedural methods are often used. Such methods allow tree models to be created automatically and relatively quickly, albeit at potentially reduced quality. Although the process is faster than manual creation, it can still be slow and resource-intensive for large forests. The main contribution of this work is the development of an efficient procedural generation system for creating large forests. Our system uses L-Systems, a grammar-based procedural technique, to generate each tree. We explore two approaches to accelerating the creation of large forests. First, we demonstrate performance improvements for the creation of individual trees in the forest by reducing the computation required by the underlying L-Systems. Second, we reduce the memory overhead by sharing geometry between trees using a novel branch-instancing approach. Test results show that our scheme significantly improves the speed of forest generation over naive methods: our system is able to generate over 100,000 trees in approximately 2 seconds, while using a modest amount of memory. With respect to improving L-System processing, one of our methods achieves a 25% speed-up over traditional methods at the cost of a small amount of additional memory, while our second method achieves a 99% reduction in memory at the expense of a small amount of extra processing.
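A minimal sketch of how an L-System rewrites a string before it is interpreted as tree geometry is shown below; the axiom and production rule are generic textbook examples, not the rules used in the thesis.

```python
# Parallel rewriting of an L-System string: every symbol with a production
# rule is replaced simultaneously on each iteration.
def expand(axiom, rules, iterations):
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

rules = {"F": "FF+[+F-F-F]-[-F+F+F]"}   # classic bracketed branching rule
tree_string = expand("F", rules, 3)
print(len(tree_string))  # the string is later interpreted (e.g. by a turtle) to build geometry
```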
8

Selecting Biomarkers for Pluripotency and Alzheimer's Disease: The Real Strength of the GA/SVM

Scheubert, Lena 16 October 2012 (has links)
Pluripotency and Alzheimer's disease are two very different biological states. Even so, they are similar in the lack of knowledge about their underlying molecular mechanisms. Identifying important genes well suited as biomarkers for these two states improves our understanding. We use different feature selection methods for the identification of important genes usable as potential biomarkers. Besides identifying biomarkers for these two specific states, we are also interested in general algorithms that perform well in biomarker detection. For this reason we compare three feature selection methods with each other. A rarely noticed wrapper approach combining a genetic algorithm and a support vector machine (GA/SVM) shows particularly good results. More detailed investigation of the results shows the strength of the small gene sets selected by our GA/SVM. In our work we identify a number of promising biomarker candidates for pluripotency as well as for Alzheimer's disease. We also show that the GA/SVM is well suited for feature selection, even though its potential is not yet exhausted.
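A toy sketch of the GA/SVM wrapper idea follows: a genetic algorithm evolves binary masks over the features, and each mask is scored by the cross-validated accuracy of an SVM trained on the selected features. The dataset, population size and GA operators are simplified assumptions, not the thesis's configuration.

```python
# Wrapper feature selection: GA over feature masks, SVM accuracy as fitness.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=120, n_features=40, n_informative=6,
                           random_state=0)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(kernel="linear"), X[:, mask.astype(bool)], y,
                           cv=3).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1]))            # random initial masks
for _ in range(10):                                        # a few GA generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]                # keep the fittest half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, X.shape[1])                  # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.02               # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```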
9

Accelerating Genomic Sequence Alignment using High Performance Reconfigurable Computers

McMahon, Peter 01 January 2008 (has links)
Reconfigurable computing technology has progressed to a stage where it is now possible to achieve orders-of-magnitude gains in performance and power efficiency over conventional computer architectures for a subset of high performance computing applications. In this thesis, we investigate the potential of reconfigurable computers to accelerate genomic sequence alignment, specifically for genome sequencing applications. We present a highly optimized implementation of a parallel sequence alignment algorithm for the Berkeley Emulation Engine (BEE2) reconfigurable computer, allowing a single BEE2 to align hundreds of sequences simultaneously. For each reconfigurable processor (FPGA), we demonstrate a 61X speedup versus a state-of-the-art implementation on a modern conventional CPU core, and a 56X improvement in performance-per-Watt. We also show that our implementation is highly scalable, and we provide performance results from a cluster implementation using 32 FPGAs. We conclude that reconfigurable computers provide an excellent platform on which to run sequence alignment, and that clusters of reconfigurable computers will be able to cope far more easily with the vast quantities of data produced by new ultra-high-throughput sequencers.
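The abstract does not name the specific alignment algorithm; as an illustration of the kind of dynamic-programming kernel that maps well onto FPGA processing elements, the sketch below computes a standard Smith-Waterman local-alignment score with arbitrary scoring parameters.

```python
# Smith-Waterman local-alignment scoring (linear gap penalty), the classic
# dynamic-programming recurrence often parallelised as a systolic array.
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman_score("ACACACTA", "AGCACACA"))
```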
10

Entwicklung und Evaluation eines elektronischen Systems zur Unterstützung der Informationsverarbeitung in pflegerischen Dienstübergaben [Development and Evaluation of an Electronic System to Support Information Processing in Nursing Shift Handovers]

Flemming, Daniel 16 December 2015 (has links)
Nursing shift handovers in healthcare institutions are central but vulnerable communication scenarios for patient safety and the continuity of patient care. The participants hand over not only relevant detailed information but, above all, responsibility for the care of the individual patient. To this end, they agree on a shared picture, or mental model, of the clinical case and its care. Cognitive processes, in addition to communicative ones, are therefore particularly important in shift handovers. Against this background, the present work aims to support human information processing in shift handovers by means of a novel approach: a cognitive map of the clinical case within an extended electronic patient record. The cognitive map is intended to support both early cognitive processes, such as attention and perception, and subsequent cognitive processes, such as decision-making and planning. The thesis describes the requirements analysis, the system development, and an initial evaluation of the usability and cognitive support of the prototype developed for displaying cognitive maps in nursing shift handovers.
