  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Expert decision support system for two stage operations planning.

January 1999 (has links)
by Tam Chi-Fai. Thesis (M.Phil.)--Chinese University of Hong Kong, 1999. Includes bibliographical references (leaves 87-88).

Table of contents:
Abstract --- p.I
Table of Content --- p.II
List of Figures --- p.V
Acknowledgments --- p.VII
Chapter 1: Introduction --- p.1
  1.1 Two Stage Operations Planning --- p.1
  1.2 Iterative Activities in the Two Stage Planning Approach --- p.3
  1.3 Expert Decision Support System for Two Stage Planning --- p.4
  1.4 Scope of the Study --- p.5
  1.5 Organization of the Thesis --- p.6
Chapter 2: Literature Review --- p.7
  2.1 Network Design for Air Express Service --- p.7
  2.2 Integrative Use of Optimization and Simulation Model --- p.8
  2.3 Expert System & Decision Support System --- p.11
    2.3.1 Expert System --- p.11
    2.3.2 Decision Support System --- p.13
    2.3.3 ES / DSS Integration --- p.14
Chapter 3: Research Methodology --- p.19
  3.1 Review on DSS / ES Integration --- p.19
  3.2 System Design --- p.20
  3.3 Prototyping --- p.22
  3.4 Analysis and Evaluation --- p.23
Chapter 4: System Architecture and Knowledge Modeling --- p.24
  4.1 Architecture Overview --- p.24
    4.1.1 System Architecture and Interactions --- p.26
    4.1.2 Decision Support System --- p.27
    4.1.3 Expert System --- p.32
  4.2 System Operations --- p.35
    4.2.1 Operations Flow --- p.35
Chapter 5: Case Study and Prototyping --- p.38
  5.1 Case Background --- p.38
    5.1.1 The Service Network --- p.38
    5.1.2 Objectives of the Project --- p.40
    5.1.3 Network Design Methodology --- p.41
  5.2 Iterative Network Planning --- p.49
    5.2.1 Multi-period Network Planning Feedback --- p.50
    5.2.2 Feedback in Validation and Evaluation --- p.51
  5.3 The System Prototype --- p.57
    5.3.1 Data Management and Model Manipulation --- p.57
    5.3.2 Intelligent Guidance for the Iterations --- p.65
Chapter 6: Evaluation and Analysis --- p.75
  6.1 Test Scenario for Network Planning --- p.75
    6.1.1 Consultation Process --- p.75
    6.1.2 Consultation Results --- p.78
  6.2 Effectiveness of EDSS in Network Planning --- p.81
  6.3 Generalized Advancement and Limitation --- p.82
Chapter 7: Conclusion --- p.85
Bibliography --- p.87
Appendices --- p.89
32

Computational strategy to support the reproducibility and reuse of scientific data based on provenance metadata.

Silva, Daniel Lins da 17 May 2017 (has links)
Modern science, supported by e-science, has faced challenges in dealing with the large volume and variety of data generated primarily by technological advances in the processes of collecting and processing scientific data. As a consequence, the complexity of analysis and experimentation processes has also increased. These processes now involve multiple data sources and numerous activities performed by geographically distributed research groups, and they must be understandable, reusable and reproducible. However, initiatives by the scientific community to provide tools and to encourage researchers to share the data and source code behind their findings, along with their scientific publications, are often insufficient to ensure the reproducibility and reuse of scientific results. This research aims to define a computational strategy to support the reuse and reproducibility of scientific data through the management of data provenance over the entire data life cycle. The proposed strategy rests on two principal components: an application profile, which defines a standardized model for describing data provenance, and a computational architecture for managing provenance metadata, which enables these metadata to be described, stored and shared in distributed, heterogeneous environments. We developed a functional prototype and carried out two case studies on the management of provenance metadata in species distribution modeling experiments. These case studies validated the computational strategy proposed in this research, demonstrating its potential to support the management of scientific data.
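The abstract refers to an application profile for describing data provenance but does not spell out its fields. Purely as an illustrative sketch, a machine-readable provenance record for one step of a species distribution modeling run might look like the following; every field name, identifier and value here is an assumption loosely modeled on W3C PROV terms, not the thesis's actual profile.

```python
# Hypothetical provenance record (illustration only; not the thesis's profile).
# It links a derived dataset (entity) to the modeling run that produced it
# (activity) and to the responsible researcher (agent), PROV-style.
import json
from datetime import datetime, timezone

record = {
    "entity": {
        "id": "dataset:occurrences-cleaned-v2",          # placeholder identifier
        "wasDerivedFrom": "dataset:occurrences-raw-v1",   # upstream dataset
    },
    "activity": {
        "id": "run:sdm-2017-05-17",
        "type": "SpeciesDistributionModeling",
        "startedAtTime": datetime(2017, 5, 17, tzinfo=timezone.utc).isoformat(),
        "parameters": {"algorithm": "example-sdm", "background_points": 10000},
    },
    "agent": {
        "id": "researcher:example-0001",                  # placeholder agent
        "actedOnBehalfOf": "org:example-university",
    },
}

# Serialize so the record can be stored and shared across distributed archives.
print(json.dumps(record, indent=2))
```

Storing such records alongside each dataset is one way an architecture like the one described could make experiments traceable end to end.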
33

Developing Statistical Methods for Incorporating Complexity in Association Studies

Palmer, Cameron Douglas January 2017 (has links)
Genome-wide association studies (GWAS) have identified thousands of genetic variants associated with hundreds of human traits. Yet the common variant model tested by traditional GWAS provides only an incomplete explanation for the known genetic heritability of many traits. Many divergent methods have been proposed to address the shortcomings of GWAS, most notably the extension of association methods to rarer variants through whole exome and whole genome sequencing. GWAS methods feature numerous simplifications designed for feasibility and ease of use rather than statistical rigor. Furthermore, no systematic quantification of the performance of GWAS across all traits exists. Beyond improving the utility of data that already exist, a more thorough understanding of the performance of GWAS on common variants may reveal flaws not in the method but in its implementation, flaws that may pose a continued or growing threat to the utility of the rare variant association studies now underway. This thesis focuses on systematic evaluation and incremental improvement of GWAS modeling. We collect a rich dataset containing standardized association results from all GWAS conducted on quantitative human traits, finding that while the majority of published significant results do not disclose sufficient information to determine whether they are valid, those that do disclose it replicate in concordance with their statistical power when conducted in samples of similar ancestry and reported with accurate per-locus sample sizes. We then address the inability of effectively all existing association methods to handle missingness in genetic data, and show that adapting missingness theory from statistics can both increase power and provide a flexible framework for extending most existing tools with minimal effort. Finally, we undertake novel variant association in a schizophrenia cohort from a bottleneck population. We find that the study itself is confounded by nonrandom population sampling and identity-by-descent, manifesting as batch effects correlated with outcome that remain in novel variants after all sample-wide quality control. On the whole, these results emphasize both the past and present utility and reliability of the GWAS model, as well as the extent to which lessons from the GWAS era must inform genetic studies moving forward.
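The abstract's claim that well-reported associations replicate in concordance with their statistical power lends itself to a small worked example. The sketch below is not code from the thesis; it uses the standard additive-model approximation for a quantitative trait (standardized phenotype, Hardy-Weinberg equilibrium, 1-df chi-square test at the conventional genome-wide threshold), all of which are assumptions for illustration.

```python
# Approximate power of a GWAS test for a quantitative trait (illustration only).
from scipy.stats import chi2, ncx2

def gwas_power(n, maf, beta, alpha=5e-8):
    """Power for a per-allele effect `beta` (in phenotype SDs) at minor allele
    frequency `maf` in `n` samples, via a noncentral chi-square approximation."""
    ncp = n * 2 * maf * (1 - maf) * beta ** 2      # non-centrality parameter
    threshold = chi2.ppf(1 - alpha, df=1)          # genome-wide significance cutoff
    return 1 - ncx2.cdf(threshold, df=1, nc=ncp)   # P(test statistic exceeds cutoff)

# Example: a variant explaining ~0.1% of trait variance in 50,000 samples.
print(round(gwas_power(n=50_000, maf=0.3, beta=0.05), 3))
```

Comparing such predicted power with observed replication rates is the kind of concordance check the abstract alludes to.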
35

Developing conceptual frameworks for structuring legal knowledge to build knowledge-based systems

Deedman, Galvin Charles 05 1900 (has links)
This dissertation adopts an interdisciplinary approach to the field of law and artificial intelligence. It argues that the conceptual structuring of legal knowledge within an appropriate theoretical framework is of primary importance when building knowledge-based systems. While technical considerations also play a role, they must take second place to an in-depth understanding of the law. Two alternative methods of structuring legal knowledge in very different domains are used to explore the thesis. A deep-structure approach is used on nervous shock, a rather obscure area of the law of negligence. A script-based method is applied to impaired driving, a well-known part of the criminal law. A knowledge-based system is implemented in each area. The two systems, Nervous Shock Advisor (NSA) and Impaired Driving Advisor (IDA), and the methodologies they embody, are described and contrasted. In light of the work undertaken, consideration is given to the feasibility of lawyers without much technical knowledge using general-purpose tools to build knowledge-based systems for themselves.
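As a toy illustration of the general shape of a rule-based legal advisor of the kind discussed, the sketch below encodes conditions that together support a conclusion; the rules, predicates and conclusions are invented for illustration and are not taken from NSA or IDA.

```python
# Minimal rule-based advisor (illustration only; invented rules).
RULES = [
    {"conclusion": "claim_may_succeed",
     "if_all": ["recognizable_psychiatric_illness", "proximity_to_event", "foreseeability"]},
    {"conclusion": "claim_likely_fails",
     "if_all": ["no_recognizable_psychiatric_illness"]},
]

def advise(facts):
    """Return the conclusion of every rule whose conditions all hold for `facts`."""
    return [rule["conclusion"] for rule in RULES
            if all(cond in facts for cond in rule["if_all"])]

print(advise({"recognizable_psychiatric_illness", "proximity_to_event", "foreseeability"}))
# ['claim_may_succeed']
```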
36

Packing problems on a PC.

Deighton, Andrew George. January 1991 (has links)
Bin packing is a problem with many applications in various industries. This thesis addresses a specific instance of this problem, known as the Container Packing problem. Special attention is paid to the Pallet Loading problem, a restricted sub-problem of the general Container Packing problem. Since the Bin Packing problem is NP-complete, it is customary to apply heuristics to approximate solutions in a reasonable amount of computation time rather than to attempt to produce optimal results with an exact algorithm. Several heuristics are examined for the problems under consideration, and the results produced by each are shown and compared where relevant. / Thesis (M.Sc.)-University of Natal, Durban, 1991.
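As a small illustration of the heuristic approach the abstract refers to, the sketch below implements first-fit decreasing for the one-dimensional bin packing problem; the thesis itself treats container and pallet loading, which are higher-dimensional variants, and this code is not drawn from it.

```python
# First-fit decreasing (FFD) heuristic for 1D bin packing (illustration only).
def first_fit_decreasing(items, capacity):
    """Pack item sizes into bins of `capacity`, returning the bins found."""
    bins = []        # contents of each opened bin
    remaining = []   # free space left in each opened bin
    for size in sorted(items, reverse=True):   # largest items first
        for i, free in enumerate(remaining):
            if size <= free:                   # place in the first bin that fits
                bins[i].append(size)
                remaining[i] -= size
                break
        else:                                  # no bin fits: open a new one
            bins.append([size])
            remaining.append(capacity - size)
    return bins

print(first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10))
# [[8, 2], [4, 4, 1, 1]] -- a good packing found without an exhaustive search
```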
37

An internship with the Ohio Evaluation & Assessment Center

Marks, Pamela Anne. January 2005 (has links)
Thesis (M.T.S.C.)--Miami University, Dept. of English, 2005. / Title from first page of PDF document. Document formatted into pages; contains [1], vi, 55 p. : ill. Includes bibliographical references (p. 33).
38

Retail audit analysis of the confectionery market in the Czech Republic

Konečná, Eva January 2013 (has links)
The main goal of my master thesis is to conduct a comprehensive analysis of the confectionery market in the Czech Republic, one of the most profitable and dynamic categories. The market is analysed using retail audit data, which provide an independent view of the problem and allow the market to be examined from various perspectives. The first part covers the total confectionery market and introduces its specifics; the analysis then goes deeper into the chocolate confectionery segment and is extended by consumer research, which adds useful consumer insight. As a result, the thesis emphasizes the importance of retail audit data, while gaps in that analysis are exposed by using consumer research as a secondary methodology.
40

Adopting research data management (RDM) practices at the University of Namibia (UNAM): a view from researchers

Samupwa, Astridah Njala 14 February 2020 (has links)
This study investigated the extent of Research Data Management (RDM) adoption at the University of Namibia (UNAM) from the researchers' perspective. The objectives were to investigate the extent to which RDM has been adopted as part of the research process at UNAM, to identify challenges encountered by researchers attempting to practice RDM, and to propose solutions to some of the challenges identified. Rogers' Diffusion of Innovation (DOI) theory was adopted to place UNAM within a stage of the innovation-decision process. The study took a quantitative approach, using a survey. A stratified sample was drawn from a list of all 948 faculty members (the number of academics reported in the UNAM annual report of 2016). The Raosoft sample size calculator (Raosoft, 2004) gives 274 as the minimum recommended sample size for a 5% margin of error and a 95% confidence level with a population of 948, and this was the intended sample size. A questionnaire was administered via SurveyMonkey, an online web-based survey tool, putting a series of questions to individuals to obtain statistically useful information on the topic under study. The paid version of SurveyMonkey was used for analysis, while graphics and tables were created in Microsoft Excel. The results showed that, for the group that responded to the survey, the extent of RDM adoption is still very low. Although individuals were found to be managing their research data, they did so of their own accord; that is, no policy mandated or guided their practices. The researcher placed most of the responding groups at the first stage of the innovation-decision process, the information stage. However, librarians who responded were found to be more advanced, as they were aware of and engaged in knowledge acquisition regarding RDM practices; the researcher therefore placed them at the second stage of the innovation-decision process (persuasion). Recommendations are based on the analysed data. Among other things, it is recommended that UNAM issue directives in the form of policies to enhance the adoption of RDM practices, and that these be communicated to the entire UNAM community to raise awareness of RDM.
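As a quick check of the quoted minimum sample size, the standard Cochran formula with a finite population correction reproduces 274 for a population of 948 at a 5% margin of error and 95% confidence; the assumption here is that Raosoft applies essentially this formula, since its internals are not described in the abstract.

```python
# Minimum sample size via Cochran's formula with finite population correction
# (illustrative check; Raosoft's exact method is assumed, not documented here).
import math

def sample_size(population, margin=0.05, z=1.96, p=0.5):
    n0 = (z ** 2) * p * (1 - p) / margin ** 2            # infinite-population estimate
    return math.ceil(n0 / (1 + (n0 - 1) / population))   # corrected for finite population

print(sample_size(948))  # 274
```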
