31

Developing Statistical Methods for Incorporating Complexity in Association Studies

Palmer, Cameron Douglas January 2017 (has links)
Genome-wide association studies (GWAS) have identified thousands of genetic variants associated with hundreds of human traits. Yet the common variant model tested by traditional GWAS only provides an incomplete explanation for the known genetic heritability of many traits. Many divergent methods have been proposed to address the shortcomings of GWAS, including most notably the extension of association methods into rarer variants through whole exome and whole genome sequencing. GWAS methods feature numerous simplifications designed for feasibility and ease of use, as opposed to statistical rigor. Furthermore, no systematic quantification of the performance of GWAS across all traits exists. Beyond improving the utility of data that already exist, a more thorough understanding of the performance of GWAS on common variants may elucidate flaws not in the method but rather in its implementation, which may pose a continued or growing threat to the utility of rare variant association studies now underway. This thesis focuses on systematic evaluation and incremental improvement of GWAS modeling. We collect a rich dataset containing standardized association results from all GWAS conducted on quantitative human traits, finding that while the majority of published significant results in the field do not disclose sufficient information to determine whether the results are actually valid, those that do replicate precisely in concordance with their statistical power when conducted in samples of similar ancestry and reporting accurate per-locus sample sizes. We then look to the inability of effectively all existing association methods to handle missingness in genetic data, and show that adapting missingness theory from statistics can both increase power and provide a flexible framework for extending most existing tools with minimal effort. We finally undertake novel variant association in a schizophrenia cohort from a bottleneck population. We find that the study itself is confounded by nonrandom population sampling and identity-by-descent, manifesting as batch effects correlated with outcome that remain in novel variants after all sample-wide quality control. On the whole, these results emphasize both the past and present utility and reliability of the GWAS model, as well as the extent to which lessons from the GWAS era must inform genetic studies moving forward.
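To make the notion of per-locus statistical power concrete, the sketch below computes approximate power to detect a common variant affecting a quantitative trait at genome-wide significance, using the standard non-centrality approximation for a 1-degree-of-freedom additive test. The sample size, allele frequency and effect size are hypothetical, and the calculation is a generic illustration rather than the thesis's own analysis pipeline.

```python
# Approximate power to detect a common variant affecting a quantitative trait
# at genome-wide significance. Hypothetical parameters; standard 1-df
# non-centrality approximation, not the thesis's own analysis code.
from scipy.stats import chi2, ncx2

n = 50_000      # per-locus sample size (hypothetical)
maf = 0.30      # minor allele frequency (hypothetical)
beta = 0.05     # additive effect in phenotypic standard deviations (hypothetical)
alpha = 5e-8    # conventional genome-wide significance threshold

var_explained = 2 * maf * (1 - maf) * beta ** 2   # variance explained under an additive model
ncp = n * var_explained                           # non-centrality parameter of the 1-df test
crit = chi2.ppf(1 - alpha, df=1)                  # critical value of the chi-square test
power = ncx2.sf(crit, df=1, nc=ncp)               # P(test statistic exceeds the threshold)
print(f"approximate power: {power:.2f}")
```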
32

Computational strategy to support the reproducibility and reuse of scientific data based on provenance metadata.

Daniel Lins da Silva 17 May 2017 (has links)
Modern science, supported by e-science, has faced challenges in dealing with the large volume and variety of data generated primarily by technological advances in the processes of collecting and processing scientific data. As a consequence, the complexity of analysis and experimentation processes has also increased. These processes currently involve multiple data sources and numerous activities performed by geographically distributed research groups, and they must be understandable, reusable and reproducible. However, initiatives by the scientific community to develop tools and encourage researchers to share the data and source code behind their findings, along with their scientific publications, are often insufficient to ensure the reproducibility and reuse of scientific results. This research aims to define a computational strategy to support the reuse and reproducibility of scientific data through data provenance management across the entire data life cycle. The strategy rests on two principal components: an application profile that defines a standardized model for describing provenance metadata, and a computational architecture for managing provenance metadata that enables the description, storage and sharing of these metadata in distributed and heterogeneous environments. A functional prototype was developed and applied in two case studies covering the management of provenance metadata for species distribution modelling experiments. These case studies validated the computational strategy proposed in the research and demonstrated its potential for supporting the management of scientific data.
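As a rough illustration of what a standardized provenance description might contain, the sketch below encodes one hypothetical species distribution modelling run using W3C PROV-style entity/activity/agent roles; the field names, identifiers and tool choice are invented for illustration and do not reproduce the application profile defined in the thesis.

```python
# Minimal, hypothetical provenance record for one species distribution
# modelling run, loosely following W3C PROV roles (entity, activity, agent).
# Field names and identifiers are illustrative only, not the application
# profile defined in the thesis.
import json

provenance_record = {
    "entity": {
        "id": "dataset:sdm-output-001",
        "type": "species-distribution-map",
        "derived_from": ["dataset:occurrence-records", "dataset:climate-layers"],
    },
    "activity": {
        "id": "run:sdm-2017-05-17",
        "type": "sdm-model-run",
        "software": "Maxent",                       # hypothetical choice of SDM tool
        "parameters": {"replicates": 10, "test_percentage": 25},
        "started_at": "2017-05-17T10:00:00Z",
    },
    "agent": {"id": "researcher:example", "role": "experimenter"},
}

# Serialising to JSON keeps the record portable across the distributed,
# heterogeneous repositories the proposed architecture targets.
print(json.dumps(provenance_record, indent=2))
```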
33

Developing conceptual frameworks for structuring legal knowledge to build knowledge-based systems

Deedman, Galvin Charles 05 1900 (has links)
This dissertation adopts an interdisciplinary approach to the field of law and artificial intelligence. It argues that the conceptual structuring of legal knowledge within an appropriate theoretical framework is of primary importance when building knowledge-based systems. While technical considerations also play a role, they must take second place to an in-depth understanding of the law. Two alternative methods of structuring legal knowledge in very different domains are used to explore the thesis. A deep-structure approach is used on nervous shock, a rather obscure area of the law of negligence. A script-based method is applied to impaired driving, a well-known part of the criminal law. A knowledge-based system is implemented in each area. The two systems, Nervous Shock Advisor (NSA) and Impaired Driving Advisor (IDA), and the methodologies they embody, are described and contrasted. In light of the work undertaken, consideration is given to the feasibility of lawyers without much technical knowledge using general-purpose tools to build knowledge-based systems for themselves.
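As a toy illustration of the kind of system the abstract describes, the sketch below runs a tiny forward-chaining inference over invented, oversimplified rules about nervous shock; it does not reproduce the actual knowledge bases or reasoning of NSA or IDA.

```python
# Toy forward-chaining sketch of a rule-based legal advisor. The rules are
# invented oversimplifications; they do not reproduce the knowledge bases
# or reasoning of NSA or IDA.
RULES = [
    ({"recognizable_psychiatric_illness", "caused_by_sudden_shock"},
     "nervous_shock_established"),
    ({"nervous_shock_established", "harm_reasonably_foreseeable"},
     "liability_in_negligence_arguable"),
]

def infer(facts):
    """Fire every rule whose conditions hold until no new conclusions appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"recognizable_psychiatric_illness",
             "caused_by_sudden_shock",
             "harm_reasonably_foreseeable"}))
```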
34

Packing problems on a PC.

Deighton, Andrew George. January 1991 (has links)
Bin packing is a problem with many applications in various industries. This thesis addresses a specific instance of this problem, known as the Container Packing problem. Special attention is paid to the Pallet Loading problem, which is a restricted sub-problem of the general Container Packing problem. Since the Bin Packing problem is NP-complete, it is customary to apply a heuristic in order to approximate solutions in a reasonable amount of computation time rather than to attempt to produce optimal results with an exact algorithm. Several heuristics are examined for the problems under consideration, and the results produced by each are shown and compared where relevant. / Thesis (M.Sc.)-University of Natal, Durban, 1991.
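As an example of the kind of heuristic such work typically compares, the sketch below implements first-fit decreasing for one-dimensional bin packing; it is a generic textbook heuristic, not one of the container or pallet loading procedures examined in the thesis.

```python
# First-fit decreasing, a classic bin packing heuristic: a generic
# one-dimensional illustration, not one of the container or pallet loading
# heuristics evaluated in the thesis.
def first_fit_decreasing(items, capacity):
    """Place each item, largest first, into the first open bin it fits in."""
    bins = []  # each bin is a list of item sizes
    for size in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])  # no existing bin has room: open a new one
    return bins

print(first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10))
# -> [[8, 2], [4, 4, 1, 1]]: two bins of capacity 10
```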
35

An internship with the Ohio Evaluation & Assessment Center

Marks, Pamela Anne. January 2005 (has links)
Thesis (M.T.S.C.)--Miami University, Dept. of English, 2005. / Title from first page of PDF document. Document formatted into pages; contains [1], vi, 55 p. : ill. Includes bibliographical references (p. 33).
36

Retail audit analysis of the confectionery market in the Czech Republic

Konečná, Eva January 2013 (has links)
The main goal of my master's thesis is to conduct a comprehensive analysis of the confectionery market in the Czech Republic, one of the most profitable and dynamic categories. The methodology used to analyse the market is retail audit analysis, which provides an independent view of the problem and allows the market to be examined from various perspectives. The first part focuses on the total confectionery market, with the goal of introducing the market and its specifics; the analysis then goes deeper into the chocolate confectionery market and is extended by consumer research, which provides useful consumer insight. As a result, the thesis emphasizes the importance of retail audit data while also exposing gaps in that analysis by using consumer research as a secondary methodology.
38

Adopting research data management (RDM) practices at the University of Namibia (UNAM): a view from researchers

Samupwa, Astridah Njala 14 February 2020 (has links)
This study investigated the extent of Research Data Management (RDM) adoption at the University of Namibia (UNAM), viewing it from the researcher's perspective. The objectives of the study were to investigate the extent to which RDM has been adopted as part of the research process at UNAM, to identify challenges encountered by researchers attempting to practice RDM, and to provide solutions to some of the challenges identified. Rogers' Diffusion of Innovation (DOI) theory was adopted to place UNAM within a stage of the innovation-decision process. The study took a quantitative approach, using a survey. A stratified sample was drawn from a list of all 948 faculty members (the number of academics taken from the UNAM annual report of 2016). The Raosoft sample size calculator (Raosoft, 2004) gives 274 as the minimum recommended sample size for a 5% margin of error and a 95% confidence level with a population of 948, and this was the intended sample size. A questionnaire administered via an online survey tool, SurveyMonkey, was used; a series of questions was put to individuals to obtain statistically useful information on the topic under study. The paid version of SurveyMonkey was used for analysis, while graphics and tables were created in Microsoft Excel. The results showed that, for the group that responded to the survey, the extent of RDM adoption is still very low. Although individuals were found to be managing their research data, they did so of their own accord; that is, there was no policy mandating and guiding their practices. The researcher placed most of the groups that responded to the survey at the first stage of the innovation-decision process, the knowledge stage. However, librarians who responded to the survey were found to be more advanced, as they were aware of and engaged in knowledge acquisition regarding RDM practices; the researcher therefore placed them at the second stage of the innovation-decision process (persuasion). Recommendations for the study are based on the analysed data. It is recommended, among other things, that UNAM should give directives in the form of policies to enhance the adoption of RDM practices, and that these should be communicated to the entire UNAM community to create awareness of the concept of RDM.
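For reference, the recommended sample size of 274 quoted above can be reproduced with the standard finite-population formula for estimating a proportion, as sketched below; whether Raosoft implements exactly this formula is an assumption.

```python
# Finite-population sample size for estimating a proportion. With a 95%
# confidence level, 5% margin of error and a population of 948, this
# reproduces the recommended n = 274 quoted above; whether Raosoft uses
# exactly this formula is an assumption.
import math

z, e, p, N = 1.96, 0.05, 0.5, 948     # z-score, margin of error, assumed proportion, population
n0 = (z ** 2) * p * (1 - p) / e ** 2  # infinite-population sample size (about 384)
n = n0 / (1 + (n0 - 1) / N)           # finite-population correction
print(math.ceil(n))                   # -> 274
```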
39

Investigating the relevance of quality measurement indicators for South African higher education libraries

Ntshuntshe-Matshaya, Pateka Patricia January 2021 (has links)
Philosophiae Doctor - PhD / This study investigates the relevance of quality measurement indicators at higher education libraries for faculty academics, librarians, and students. The study followed a mixed-methods design combining quantitative and qualitative data collection. Faculty academics, librarians and students ranked the existing quality measurement indicators for South African higher education libraries. The findings revealed that, for library quality measures to meet the needs of faculty academics, librarians, and students, resources must be accessible both physically and virtually, and staff should be accountable and willing to offer services responsive to users' needs and to their expectations of a safe, secure, and comfortable library space, be it physical or virtual. The qualitative data highlighted the importance of adequate resources and the adoption of new developments as measures of quality. Quality measurement indicators must include elements such as adequate funding; relevant resources aligned with teaching and learning programmes; programmes that are integrated into teaching plans; effective supplier collaboration in the process of acquiring relevant learning materials; effective student training; communication of the value of library services and alignment with student learning outcomes; research support in a digital environment with e-tools and website navigability; research data management; and open access, which is a prominent role of the library. Based on the data, one quality measure (a process) was commendable even though it formed part of neither the existing quality measures nor the services whose relevance was assessed; the separation of undergraduate and postgraduate learning spaces was among the services that ranked highly in the students' qualitative responses. Faculty academics and students not only emphasized different indicators but also interpreted individual quality indicators differently across the study population groups. In terms of research findings, a series of gaps was identified between library expectations and perceptions, and further research is recommended to address these gaps and give library users a stronger voice in the development of higher education library measurement indicators.
40

Developing an implementation plan for research data management (RDM) at the University of Ghana

Avuglah, Bright Kwaku January 2016 (has links)
The current global, data-intensive outlook of research presents HEIs with new opportunities and challenges, including effective and sustainable RDM. RDM is a growing area of interest in the global research arena, but the body of literature has been dominated by experiences from developed countries. This study aims, in part, to fill this gap by assessing the state of RDM and institutional preparedness at the University of Ghana (through existing data management activities and capabilities) in order to develop an implementation plan. The study used a qualitative case study method and gathered data through semi-structured interviews and document analysis; thematic analysis was used to analyse the data collected. A total of seven respondents (five service providers and two senior researchers) were selected purposively using two sampling techniques ("a priori criteria sampling" and snowball sampling). Criteria were set for their inclusion, and each respondent provided information about institutional support, capabilities, policies and expectations on RDM. The findings of the study revealed a number of RDM-related activities; these include support for collaborative research, support for data analysis and computational science, guidance on RDM and grant applications, as well as support for storage and high-speed connectivity to facilitate the research enterprise at UG. In terms of capabilities, no specific RDM policy was identified; the existing infrastructure includes an HPC cluster, a private cloud facility (HP Cloud Matrix), an institutional repository (UGSpace), an institutional Google Drive platform, data analysis packages (NVivo and SPSS) and a robust network and security infrastructure, though these were not necessarily provisioned for RDM purposes. The findings also show that staff do not possess the necessary skills or adequate knowledge to fully support RDM at UG. In terms of the specific objectives of the study, the results of the semi-structured interviews and document analysis provided an understanding of the current situation (i.e. requirements, current activities and capabilities at UG), which is the first objective of the study. These findings were then benchmarked against the EPSRC policy framework, following the outline of the DCC CARDIO Matrix and using the optimal desirable expectation or level of development as the standard for comparison. This was useful in identifying gaps in RDM awareness, support and capabilities at UG, which is the second objective of the study. To achieve the third objective, identifying priority areas for RDM development, the researcher examined the initial findings (i.e. the requirements, current activities and capabilities identified under the first objective, as well as the gaps identified in the second objective) and proposed six broad areas on which UG must focus its RDM development agenda. Finally, the six broad areas proposed under objective three were further cascaded into a number of specific initiatives and tasks to be implemented, taking cognisance of the potential of current infrastructure, the gaps identified in institutional awareness and capabilities, and the essentials of cultural change. The study concluded that RDM at the University of Ghana is currently underdeveloped but has immense potential for growth. While a few RDM-related activities were identified, existing capabilities were generally found to be inchoate, uncoordinated and not formally instituted.
The study recommended six main areas on which UG should focus RDM development: constituting a steering group to spearhead and coordinate RDM development at UG; developing a coordinated policy framework for RDM at UG; streamlining existing technical infrastructure to support data management requirements; creating opportunities for RDM training and capacity development for professional staff, researchers and students; developing services to support requirements; and exploring internal funding strategies to facilitate RDM development and support at UG. The study also recommends that the academic community at UG should be actively engaged throughout the RDM development process, as this is critical to ensure that the eventual solutions are fit for purpose and acceptable. / Mini Dissertation (MIT)--University of Pretoria, 2016. / Information Science / MIT / Unrestricted
