1

DAS Writeback: A Collaborative Annotation System for Proteins

Salazar O., Gustavo A. 01 January 2010 (has links)
We designed and developed a Collaborative Annotation System for Proteins called DAS Writeback, which extends the Distributed Annotation System (DAS) to provide the functionality of adding, editing and deleting annotations. A great deal of effort has gone into gathering information about proteins over the last few years. By June 2009, UniProtKB/Swiss-Prot, a curated database, contained over four hundred thousand sequence entries, and UniProtKB/TrEMBL, a database with automated annotation, contained over eight million sequence entries. Every protein is annotated with relevant information, which needs to be efficiently captured and made available to other research groups. These annotations cover, for example, the structure, the function and the biochemically important residues of a protein. Several research groups have taken on the task of making this information accessible to the community; however, information flow in the opposite direction has not been extensively explored. Users are currently passive actors: they consume one or several sources of protein annotations but have no immediate way to provide feedback to the source if, for example, they detect a mistake or want to add information. Any change has to be made by the owner of the database. This project tackles that inability to feed information back to a database. The solution consists of an extension of the DAS protocol that defines the communication rules between the client and the writeback server, following the uniform interface of the RESTful architectural style. The protocol extension was proposed to the DAS community, and implementations of both server and client were created in order to have a fully functional system. For the server, writing functionality was added to MyDAS, a widely used DAS server. The writeback client is an extended version of the web-based protein client Dasty2. The involvement of the DAS community and other potential users was a fundamental component of this project. The architecture was designed with input from the specialised DAS forum; a prototype was then created and presented at the 2009 DAS workshop. The feedback from the forum and the workshop was used to refine the architecture and implement the system. A usability experiment, in which potential users of the system emulated a real annotation task, demonstrated that DAS Writeback is effective and usable, and that it provides an appropriate environment for the creation and evolution of a protein annotation community. Although the scope of this research is limited to protein annotations, the specification was defined in a general way. It can therefore be used for other types of information supported by DAS, meaning that the server is versatile enough to be used in other scenarios without major modifications.
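To make the writeback idea concrete, the sketch below shows what a RESTful writeback interaction could look like from a client's point of view: POST to create an annotation, DELETE to remove it. The endpoint path, the payload structure and the feature identifiers are hypothetical illustrations, not the published DAS writeback specification or the MyDAS API.

```python
# Illustrative sketch only: the endpoint, parameters and XML payload below
# are hypothetical, not the published DAS writeback specification.
import requests

WRITEBACK_URL = "http://example.org/das/writeback"  # hypothetical server

# A minimal DAS-style feature document describing one new annotation.
new_annotation = """<?xml version="1.0"?>
<DASGFF>
  <SEGMENT id="P05067" start="672" stop="713">
    <FEATURE id="user-feature-1" label="Amyloid-beta region">
      <TYPE id="region">manual annotation</TYPE>
      <NOTE>Added by a community curator</NOTE>
    </FEATURE>
  </SEGMENT>
</DASGFF>"""

# REST uniform interface: POST creates, PUT edits, DELETE removes.
created = requests.post(WRITEBACK_URL, data=new_annotation,
                        headers={"Content-Type": "application/xml"})
print(created.status_code)   # e.g. 201 Created on success

# Removing the same annotation again (hypothetical id-addressed resource).
removed = requests.delete(f"{WRITEBACK_URL}/user-feature-1")
print(removed.status_code)   # e.g. 200 or 204 on success
```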
2

Can Health Workers capture data using a generic mobile phone with sufficient accuracy for Capture at Source to be used for Clinical Research Purposes?

Workman, Michael L 01 October 2013 (has links)
Objective: To determine the accuracy, measured by error rate, with which Clinical Research Workers (CRWs) with minimal experience in data entry could capture data on a feature phone during an interview using two different mobile phone applications, compared to the accuracy with which they could record data on paper Case Report Forms (CRFs). Design: A comparative study was conducted in which 10 participating CRWs performed 90 mock interviews using either paper CRFs or one of two mobile phone applications. The phone applications were a commonly used open source application and an application custom built for this study, which followed a simplified, less flexible user interface paradigm. The answers to the interview questions were randomly generated and provided to the interviewees in sealed envelopes prior to the scheduling of the mock interviews. Error rates of the captured data were calculated relative to the randomly generated expected answers. Results and Conclusion: The study aimed to show that error rates of clinical research data captured using a mobile phone application would not be inferior to those of data recorded on paper CRFs. For the custom application, this result was not found unequivocally. An error in judgment when designing the custom phone application resulted in dates being captured in a manner unfamiliar to the study participants, leading to high error rates for this type of data. If this error is set aside by excluding the date type from the results for the custom application, the custom application is shown to be non-inferior, at the 95% confidence level, to standard paper forms when capturing data for clinical research. Analysis of the results for the open source application showed that using this application for data capture was inferior to paper CRFs. Secondary analysis showed that error rates for data captured on the custom mobile phone application by non-computer-literate users were significantly lower, at the 95% confidence level, than the error rates for data recorded by the same users on paper and for data captured by computer-literate users using the custom application. This result confirms that even non-computer-literate users can capture data accurately using a feature phone with a simplified user interface.
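As a rough illustration of the comparison described above, the sketch below computes error rates for two capture methods and a 95% Wald confidence interval on the difference in error proportions, then checks it against a non-inferiority margin. All counts and the margin are invented placeholders, not figures from the study.

```python
# Hedged sketch of the comparison described above: error rates per capture
# method and a 95% CI on the difference in error proportions. The counts and
# the non-inferiority margin are invented examples, not data from the study.
import math

def error_rate(errors: int, fields: int) -> float:
    """Proportion of captured fields that disagree with the expected answers."""
    return errors / fields

def diff_ci_95(e1, n1, e2, n2):
    """95% Wald CI for the (method1 - method2) difference in error proportions."""
    p1, p2 = e1 / n1, e2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    d = p1 - p2
    return d - 1.96 * se, d + 1.96 * se

# Hypothetical counts: (errors, fields captured)
phone = (42, 3000)   # custom phone application
paper = (38, 3000)   # paper CRFs

low, high = diff_ci_95(*phone, *paper)
margin = 0.02  # hypothetical non-inferiority margin of 2 percentage points
print(f"phone error rate  {error_rate(*phone):.3%}")
print(f"paper error rate  {error_rate(*paper):.3%}")
print(f"difference CI     [{low:.4f}, {high:.4f}]")
print("non-inferior" if high < margin else "non-inferiority not shown")
```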
3

Developing locally relevant applications for rural South Africa: a telemedicine example

Chetty, Marshini 01 December 2005 (has links)
Within developing countries, there is a digital divide between rural and urban areas. In order to overcome this divide, we need to provide locally relevant Information and Communication Technology (ICT) services to these areas. Traditional software development methodologies are not suitable for developing software for rural and underserviced areas because they cannot take into account the unique requirements and complexities of such areas. We set out to find the most appropriate way to engineer suitable software applications for rural communities, and developed a methodological framework for creating software applications for a rural community. We also critically examined the restrictions that current South African telecommunications legislation places on software development for underserviced areas. Our socially aware computing framework for creating software applications uses principles from Action Research and Participatory Design, as well as best practice guidelines; it helps address all of the issues affecting project success. The validity of the framework was demonstrated by using it to create the Multi-modal Telemedicine Intercommunicator (MuTI), a prototype system for remote health consultation for a rural community. MuTI allowed for synchronous and asynchronous communication between a clinic in one village and a hospital in a neighbouring village, nearly 20 kilometres away, in the Eastern Cape province of South Africa. It used Voice over Internet Protocol (VoIP) combined with a store-and-forward approach for communication, and was tested over a Wireless Fidelity (WiFi) network for several months. Our socially aware framework proved to be appropriate for developing locally relevant applications for rural areas in South Africa. We found that MuTI was an improvement on the previous telemedicine solution in the target community, and using the approach led to several insights into best practice for ICT development projects. We also found that VoIP and WiFi are relevant technologies for rural regions and that further telecommunications liberalisation in South Africa is required in order to spur technological development in rural and underserviced areas.
4

Algorithms for efficiently and effectively matching agents in microsimulations of sexually transmitted infections

Geffen, Nathan 01 January 2018 (has links)
Mathematical models of the HIV epidemic have been used to estimate incidence, prevalence and life expectancy, as well as the benefits and costs of public health interventions such as the provision of antiretroviral treatment. Models of sexually transmitted infection (STI) epidemics attempt to account for varying levels of risk across a population based on diverse, or heterogeneous, sexual behaviour. Microsimulations are a type of model that can account for fine-grained heterogeneous sexual behaviour. This requires pairing individuals, or agents, into sexual partnerships whose distribution matches that of the population being studied, to the extent that this is known. But pair-matching is computationally expensive, and there is a need for computer algorithms that pair-match quickly. In this work we describe the role of modelling in responses to the South African HIV epidemic. We also chronicle a three-decade debate, greatly influenced since 2008 by a mathematical model, on the optimal time for people with HIV to start antiretroviral treatment. We then present and analyse several pair-matching algorithms and compare them in a microsimulation of a fictitious STI. We find that there are algorithms, such as Cluster Shuffle Pair-Matching, that offer a good compromise between speed and approximating the distribution of sexual relationships of the study population. An interesting further finding is that infection incidence decreases as population size increases, all other things being equal. Whether this is an artefact of our methodology or a natural-world phenomenon is unclear and a topic for further research.
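The abstract does not spell out the algorithm, so the following is only a hedged sketch of a cluster-shuffle style pair-matcher consistent with the description: agents are sorted so that compatible agents sit near each other, shuffled within small clusters to keep the matching stochastic, and each agent is then paired with its best-scoring unmatched neighbour in a short look-ahead window. The attributes, scoring function and parameter values are illustrative assumptions, not the thesis's exact algorithm.

```python
# Hedged sketch of cluster-shuffle style pair-matching; all attribute names,
# the scoring function and the parameters are illustrative assumptions.
import random
from dataclasses import dataclass

@dataclass
class Agent:
    ident: int
    age: float    # used here as the sorting ("clustering") attribute
    risk: float   # toy behavioural attribute used in the match score

def match_score(a: Agent, b: Agent) -> float:
    """Lower is better: prefer partners with similar age and risk."""
    return abs(a.age - b.age) + abs(a.risk - b.risk)

def cluster_shuffle_pair_match(agents, cluster_size=50, window=10, rng=random):
    pool = sorted(agents, key=lambda a: a.age)        # 1. sort by attribute
    for i in range(0, len(pool), cluster_size):       # 2. shuffle each cluster
        chunk = pool[i:i + cluster_size]
        rng.shuffle(chunk)
        pool[i:i + cluster_size] = chunk
    pairs, used = [], [False] * len(pool)
    for i, a in enumerate(pool):                      # 3. local best match
        if used[i]:
            continue
        candidates = [j for j in range(i + 1, min(i + 1 + window, len(pool)))
                      if not used[j]]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: match_score(a, pool[k]))
        used[i] = used[j] = True
        pairs.append((a.ident, pool[j].ident))
    return pairs

agents = [Agent(i, random.uniform(15, 60), random.random()) for i in range(1000)]
print(len(cluster_shuffle_pair_match(agents)), "pairs formed")
```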
5

Design of a prototype mobile application interface for efficient accessing of electronic laboratory results by health clinicians

Chigudu, Kumbirai 01 January 2018 (has links)
There is a significant increase in demand for rapid laboratory medical diagnoses for various ailments, so that clinicians can make informed medical decisions and prescribe the correct medication within a limited specified time. Since no further informed action can be taken on the patient until the laboratory report reaches the clinician, delivery of the report becomes a critical path in the value chain of the laboratory testing process. The National Health Laboratory Service (NHLS) currently delivers laboratory results in three ways: as a physical paper report; electronically, through a web application; and, for short-turnaround, high-priority test results such as human immunodeficiency virus (HIV) and tuberculosis (TB), via short message service (SMS) printers in remote rural clinics. Despite its inefficiencies, the paper report remains the most commonly used method. As turnaround times for basic and critical laboratory tests remain a great challenge for the NHLS to meet its specified targets, there is a need to shift the method of final delivery from paper to a secure, paperless electronic result delivery system. The recently implemented centralised TrakCare Lab laboratory information system (LIS) makes provision for delivery of electronic results via a web application, 'TrakCarewebview'. However, uptake of TrakCarewebview has been very low because the application is cumbersome: it takes users through nine steps to obtain results and is not designed for mobile devices. In addition, access in remote rural health care facilities is a great challenge because of the lack of supporting infrastructure. There is therefore an obvious gap, and considerable potential, in the diagnostic result delivery system, calling for the design and development of a less complex, cost-effective and usable mobile application for the electronic delivery of laboratory results. After research ethics clearance was obtained from the University's Faculty of Science Research Ethics Committee, the research proceeded. A survey of public sector clinicians across South Africa indicated that 98% have access to the internet through smartphones, and 93% indicated that they would use their mobile devices to access electronic laboratory results. A significant number of clinicians believe that the use of a mobile application in health facilities will improve patient care. This belief set a strong basis for designing and developing a mobile application for laboratory results. The study aims to design and develop a mobile application prototype that can demonstrate the capability of delivering electronic laboratory test results to clinicians on their smart devices via a usable mobile application. The design of the prototype was driven by user-centred design (UCD) principles in order to develop an effective design. Core and critical to the process is the design step, which establishes the user requirements specifications that meet user expectations. The study substantiated the importance of the design aspect as the initial critical step in obtaining a good final product. The prototype was developed through an iterative process alternating prototype development and evaluation. The development iterations consisted of a single paper prototyping iteration, followed by two further iterations using the interactive Justinmind prototyping tool.
Across the development iterations, cognitive walkthroughs and heuristic evaluation were used to assess the usability of the initial prototype. The final prototype was then evaluated using the system usability scale (SUS), a quantitative survey instrument that measures the effectiveness and perceived usability of an application. The application achieved an average SUS score of 77, significantly above the accepted average score of 68; on the standard SUS scale a score of 80 is considered excellent, while a score below 68 is considered below average. The evaluation was conducted by the potential user group that was involved in the initial design process. The ability of the interactive prototyping tool (Justinmind) to mimic the actual final product gave end users a feel for the actual product, giving the outcome of the evaluation a strong basis from which to develop the actual product.
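For context on the SUS figures quoted above, the sketch below applies the standard SUS scoring rule: ten items rated 1 to 5, with odd items contributing (response - 1) and even items (5 - response), and the total scaled by 2.5 onto a 0-100 scale. The sample responses are invented and are not data from the study.

```python
# Standard SUS computation, shown for context; the responses are invented.
def sus_score(responses):
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # i even => items 1,3,5,...
                for i, r in enumerate(responses))
    return total * 2.5

example = [4, 2, 5, 1, 4, 2, 5, 2, 4, 2]   # one hypothetical respondent
print(sus_score(example))                   # 82.5 on the 0-100 SUS scale
```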
6

Identification and functional analysis of single nucleotide polymorphisms that affect human cancer

Grochola, Lukasz Filip January 2011 (has links)
Aims: The p53 regulatory network is crucial in directing the suppression of cancer formation and mediating the response to commonly used cancer therapies. Functional genetic variants in the genes comprising this network could help identify individuals at greater risk for cancer and patients with poorer responses to therapies, but few such variants have been identified as yet. Methods: We first develop and apply three different screens that utilize known characteristics of functional single nucleotide polymorphisms (SNPs) in the p53 network to search for variants that associate with allelic differences in (i) recent natural selection, (ii) chemosensitivity profiles, and (iii) the gender- and age-dependent incidence of soft-tissue sarcoma. Secondly, we study and explore the functional mechanisms associated with the identified variants. Results: We identify SNPs in the PPP2R5E, CD44, YWHAQ and ESR1 genes that associate with allelic differences in the age of tumour diagnosis (up to 32.5 years, p=0.031), cancer risk (up to 8.1 odds ratio, p=0.004) and overall survival (up to 2.85 relative risk, p=0.011) in sarcomas, ovarian and pancreatic cancers, and exhibit allelic differences in the cellular responses to cytotoxic chemotherapeutic agents (up to 5.4-fold, p=5.6×10⁻⁴⁷). Lastly, we identify candidate causal SNPs in those genes and describe the regulatory mechanisms by which they might affect human cancer. Conclusions: Together, our work suggests that the inherited genetics of the p53 pathway have great potential to further define populations in their abilities to react to stress, suppress tumour formation and respond to therapies.
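The odds ratios and p-values quoted above come from association analyses; as a generic illustration (not the thesis's actual analysis pipeline), the sketch below tests a 2x2 table of allele counts in cases versus controls with Fisher's exact test. The counts are invented.

```python
# Generic sketch of an allelic association test; the counts are invented and
# do not correspond to any SNP analysed in the thesis.
from scipy.stats import fisher_exact

#                 minor allele   major allele
cases    = [110, 290]
controls = [60, 340]

odds_ratio, p_value = fisher_exact([cases, controls])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3g}")
```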
7

Genetic determinants of vaccine responses

O'Connor, Daniel January 2014 (has links)
Vaccines have had a profound influence on human health with no other health intervention rivalling their impact on the morbidity and mortality associated with infectious disease. However, the magnitude and persistence of vaccine immunity varies considerably between individuals, a phenomenon that is not well understood. Recent studies have used contemporary technologies to correlate variations in the genome and transcriptome to complex phenotypic traits, and these approaches have started to provide fresh insight into the intrinsic factors determining the generation and persistence of vaccine-induced immunity. This thesis aimed to describe the relationship between genomic and transcriptomic variations, and the immunogenicity of childhood immunisations. Candidate gene and genome-wide genotyping was conducted to evaluate the influence of genetic variants on vaccine-induced immunity following childhood immunisation. Furthermore, contemporary methodologies were used to assess non-coding and coding gene transcript profiles following vaccination, to further dissect the molecular systems involved in vaccine responses. Key findings from this thesis include the description of the first genome-wide association studies into the persistence of immunity to three routine childhood immunisations: capsular group C meningococcal (MenC) conjugate vaccine, Haemophilus influenzae type b (Hib) conjugate vaccine and tetanus toxoid (TT) vaccine. Genome-wide genotyping was completed on over 2000 participants, with an additional 1000 participants genotyped at selected genetic markers. Genome-wide significant associations (p < 5×10⁻⁸) were described between single-nucleotide polymorphisms (SNPs) in two genes, CNTN6 and ENKUR, and the persistence of serological immunity to MenC following immunisation of children 6-15 years of age. In addition, genome-wide significant associations were described between SNPs within an intergenic region of chromosome 10 and the persistence of TT-specific IgG concentrations following childhood immunisations. Furthermore, a number of variants in loci with putative involvement in the immune system such as FOXP1, the human leukocyte antigen locus and the lambda light chain immunoglobulin locus, were shown to have suggestive associations (p < 1×10⁻⁵) with the persistence of vaccine-induced serological immunity. The fundamental challenge will be to describe functional mechanisms associated with these findings, and to translate these into innovative and pragmatic strategies to develop new and more effective vaccines.
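The significance thresholds mentioned above can be illustrated with a small sketch that classifies SNP association p-values as genome-wide significant (p < 5×10⁻⁸), suggestive (p < 1×10⁻⁵) or neither. The SNP identifiers and p-values are invented placeholders.

```python
# Small sketch of the GWAS thresholds quoted above; the SNP ids and p-values
# are invented for illustration only.
THRESH_GENOME_WIDE = 5e-8
THRESH_SUGGESTIVE = 1e-5

def classify(p: float) -> str:
    if p < THRESH_GENOME_WIDE:
        return "genome-wide significant"
    if p < THRESH_SUGGESTIVE:
        return "suggestive"
    return "not significant"

hits = {"rs0000001": 3.2e-9, "rs0000002": 4.7e-6, "rs0000003": 0.012}
for snp, p in hits.items():
    print(snp, p, "->", classify(p))
```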
8

The role of the G-protein subunit, G-α-11, and the adaptor protein 2 sigma subunit, AP2-σ-2, in the regulation of calcium homeostasis

Howles, Sarah Anne January 2015 (has links)
The calcium sensing receptor (CaSR) is a G-protein coupled receptor (GPCR) that plays a central role in calcium homeostasis. Loss-of-function mutations of the CaSR cause familial hypocalciuric hypercalcaemia type 1 (FHH1), whilst gain-of-function mutations are associated with autosomal dominant hypocalcaemia (ADH). However, 35% of cases of FHH and 60% of cases of ADH are not due to CaSR mutations. This thesis demonstrates that FHH type 2 (FHH2) and the new clinical disorder, ADH type 2 (ADH2), are due to loss- and gain-of-function mutations in the G-protein subunit, Gα11, respectively. The CaSR signals through Gα11 and FHH2-associated mutations are shown to exert their effects through haploinsufficiency. Three-dimensional modelling of ADH2-associated Gα11 mutations predicts impaired GTPase activity and increases in the rate of GDP/GTP exchange. Furthermore, mouse models of FHH2 and ADH2 have been identified and re-derived to enable in vivo studies of the role of Gα11 in calcium homeostasis. I also demonstrate that FHH3 is due to loss-of-function mutations in the adaptor protein 2 sigma subunit, AP2σ2, which exert dominant-negative effects. AP2σ2 is a component of the adaptor protein 2 (AP2), which is a crucial component of clathrin-coated vesicles (CCV) and facilitates clathrin-mediated endocytosis of plasma membrane components such as GPCRs. All of the identified FHH3-associated mutations affect the Arg15 residue of AP2σ2, which forms key polar contacts with CCV cargo proteins. This thesis proposes that FHH3-associated AP2σ2 mutations impair CaSR internalisation and thus negatively impact on CaSR signalling. In addition, these studies show that these signalling defects can be rectified by the use of the CaSR allosteric modulator cinacalcet, which may represent a useful therapeutic modality for FHH3 patients. In summary, FHH2 is due to loss-of-function mutations in Gα11 causing haploinsufficiency, whilst FHH3 is due to loss-of-function mutations in AP2σ2, which exert dominant-negative effects. In contrast, ADH2 is due to gain-of-function mutations in Gα11.
9

A Comparison of Statistical and Geometric Reconstruction Techniques: Guidelines for Correcting Fossil Hominin Crania

Neeser, Rudolph 01 January 2007 (has links)
The study of human evolution centres, to a large extent, on the study of fossil morphology, including the comparison and interpretation of these remains within the context of what is known about morphological variation within living species. However, many fossils suffer from environmentally caused damage (taphonomic distortion) which hinders any such interpretation: fossil material may be broken and fragmented, while the weight and motion of overlying sediments can cause plastic distortion. To date, a number of studies have focused on the reconstruction of such taphonomically damaged specimens. These studies have used a variety of approaches to reconstruction, including thin plate spline methods, mirroring, and regression-based approaches. The efficacy of these techniques remains to be demonstrated, and it is not clear how different parameters (e.g., sample sizes, landmark density) might affect their accuracy. To partly address this issue, this thesis examines three techniques used in the virtual reconstruction of fossil remains by statistical or geometric means: mean substitution, thin plate spline (TPS) warping, and multiple linear regression. These methods are compared by reconstructing the same sample of individuals using each technique. Samples drawn from Homo sapiens, Pan troglodytes, Gorilla gorilla, and various hominin fossils are reconstructed by iteratively removing and then estimating landmarks. The testing determines each method's behaviour in relation to the extent of landmark loss (i.e., the amount of damage), the reference sample size (this being the data used to guide the reconstructions), and the species of the population from which the reference sample is drawn (which may be different to the species of the damaged fossil). Given a large enough reference sample, the regression-based method is shown to produce the most accurate reconstructions. Several parameters affect this: when using small reference samples drawn from a population of the same species as the damaged specimen, thin plate spline warping is the better method, but only as long as there is little damage. As the damage becomes severe (30% or more of the landmarks missing), mean substitution should be used instead: thin plate splines show rapid growth in error in relation to the amount of damage. When the species of the damaged specimen is unknown, or it is the only known individual of its species, the smallest reconstruction errors are obtained with a regression-based approach using a large reference sample drawn from a living species. Testing shows that reference sample size (combined with the use of multiple linear regression) is more important than morphological similarity between the reference individuals and the damaged specimen. The main contribution of this work is a set of recommendations to the researcher on which of the three methods to use, based on the amount of damage, the number of reference individuals, and the species of the reference individuals.
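As a rough illustration of two of the reconstruction strategies compared above, the sketch below estimates missing landmark coordinates by mean substitution and by multiple linear regression on the coordinates that survive, using a reference sample. It ignores the Procrustes alignment a real geometric-morphometric analysis would require, and the data are random placeholders rather than anything from the thesis.

```python
# Hedged numpy sketch of mean substitution versus regression-based estimation
# of missing landmark coordinates; the data are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
reference = rng.normal(size=(60, 30))   # 60 reference specimens, 30 coords each
damaged = rng.normal(size=30)           # one damaged specimen
missing = np.array([3, 4, 5])           # indices of the lost coordinates
present = np.setdiff1d(np.arange(30), missing)

# Mean substitution: copy the reference-sample mean into the missing slots.
mean_estimate = reference[:, missing].mean(axis=0)

# Regression: predict missing coordinates from the present ones, using a
# least-squares fit across the reference sample (with an intercept term).
X = np.hstack([reference[:, present], np.ones((len(reference), 1))])
coef, *_ = np.linalg.lstsq(X, reference[:, missing], rcond=None)
reg_estimate = np.append(damaged[present], 1.0) @ coef

print("mean substitution :", np.round(mean_estimate, 3))
print("regression        :", np.round(reg_estimate, 3))
```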
10

Real-time Generation of Procedural Forests

Kenwood, Julian 01 January 2014 (has links)
The creation of 3D models for games and simulations is generally a time-consuming and labour-intensive task. Forested landscapes are an important component of many large virtual environments in games and film, but creating the many individual tree models required for a forest demands a large number of artists and a great deal of time. In order to reduce modelling time, procedural methods are often used. Such methods allow tree models to be created automatically and relatively quickly, albeit at potentially reduced quality. Although the process is faster than manual creation, it can still be slow and resource-intensive for large forests. The main contribution of this work is the development of an efficient procedural generation system for creating large forests. Our system uses L-Systems, a grammar-based procedural technique, to generate each tree. We explore two approaches to accelerating the creation of large forests. First, we demonstrate performance improvements for the creation of individual trees in the forest by reducing the computation required by the underlying L-Systems. Second, we reduce the memory overhead by sharing geometry between trees using a novel branch instancing approach. Test results show that our scheme significantly improves the speed of forest generation over naive methods: our system is able to generate over 100,000 trees in approximately 2 seconds, while using a modest amount of memory. With respect to improving L-System processing, one of our methods achieves a 25% speed-up over traditional methods at the cost of a small amount of additional memory, while our second method manages a 99% reduction in memory at the expense of a small amount of extra processing.
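To illustrate the grammar-rewriting step that L-System tree generation relies on, the sketch below expands an axiom by applying production rules in parallel on each iteration. The rules are a common toy branching grammar, not the productions or optimisations used in the thesis.

```python
# Minimal L-System rewriting sketch; the rules are a common toy branching
# grammar, not the productions used in the thesis.
RULES = {"X": "F[+X][-X]FX", "F": "FF"}   # '[' and ']' push/pop a branch

def expand(axiom: str, iterations: int) -> str:
    s = axiom
    for _ in range(iterations):
        # Every symbol is rewritten in parallel; symbols without a rule pass through.
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

tree_string = expand("X", 4)
print(len(tree_string), "symbols after 4 iterations")
# A renderer would then interpret F as "draw a segment", +/- as rotations,
# and [ ] as pushing/popping the turtle state to create branches.
```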
