101

Towards a Data-Driven Analysis of Programming Tutorials' Telemetry to Improve the Educational Experience in Introductory Programming Courses

Russo Kennedy, Anna 21 August 2015 (has links)
Retention in Computer Science undergraduate education, particularly of underrepresented groups, continues to be a growing challenge. A theme shared by much of the research literature into why this is so is one of a distancing in the relationship between Computer Science professors and students [39, 40, 45]. How, then, can we begin to lessen that distance and build stronger connections between these groups in an era of growing class sizes and technology replacing human interaction? This work presents BitFit, an online programming practice and learning tool, and describes an approach to using the telemetry made possible by deploying this or similar tools in introductory programming courses to improve the quality of instruction and the students' course experiences. BitFit gathers interaction data as students use the tool to actively engage with course material. In this thesis we first explore what kind of quantitative data can be used to help professors gain insights into how students might be faring in their courses, moving the method of instruction towards a data- and student-driven model. Secondly, we demonstrate the capacity of the telemetry to aid professors in more precisely identifying students at risk of failure in their courses. Our goal is to reveal possible reasons these students would be considered at-risk at an early enough point in the course to make interventions possible. Finally, we show how the use of tools such as BitFit within introductory programming courses could positively impact the student experience. Through a preliminary qualitative assessment, we seek to address impact on confidence, metacognition, and the ability of an individual to envision success in Computer Science. When used together within an all-encompassing approach aimed at improving retention in Computer Science, tools such as BitFit can help improve the quality of instruction and the students' experience by building stronger connections, rooted in empathy, between professors and students. / Graduate / 0710 / 0984 / alrusso@uvic.ca
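To make the kind of telemetry analysis described above concrete, the sketch below aggregates hypothetical interaction events (student, problem, attempts, solved) into per-student features and flags students whose solve rate falls below an illustrative threshold. The event schema, field names, and threshold are assumptions for illustration, not BitFit's actual data model.

```python
from collections import defaultdict

# Hypothetical telemetry events: (student_id, problem_id, attempts, solved)
events = [
    ("s01", "p1", 2, True),
    ("s01", "p2", 5, False),
    ("s01", "p3", 4, False),
    ("s02", "p1", 1, True),
    ("s02", "p2", 2, True),
]

def engagement_features(events):
    """Aggregate raw interaction events into simple per-student features."""
    stats = defaultdict(lambda: {"attempted": 0, "solved": 0, "attempts": 0})
    for student, _problem, attempts, solved in events:
        s = stats[student]
        s["attempted"] += 1
        s["solved"] += int(solved)
        s["attempts"] += attempts
    return stats

def flag_at_risk(stats, min_solve_rate=0.5):
    """Flag students whose solve rate falls below an illustrative threshold."""
    flagged = []
    for student, s in stats.items():
        solve_rate = s["solved"] / max(s["attempted"], 1)
        if solve_rate < min_solve_rate:
            flagged.append((student, solve_rate))
    return flagged

print(flag_at_risk(engagement_features(events)))
```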
102

A Data-Driven Approach for System Approximation and Set Point Optimization, with a Focus in HVAC Systems

Qin, Xiao January 2014 (has links)
Dynamically determining input signals to a complex system, to increase performance and/or reduce cost, is a difficult task unless users are provided with feedback on the consequences of different input decisions. For example, users self-determine the set point schedule (i.e., temperature thresholds) of their HVAC system without an ability to predict cost; they select only comfort. Users are unable to optimize the set point schedule with respect to cost because the cost feedback is provided at billing-cycle intervals. To provide rapid feedback (such as expected monthly/daily cost), mechanisms for system monitoring, data-driven modeling, simulation, and optimization are needed. Techniques from the literature require in-depth domain knowledge and/or significant investment in infrastructure or equipment to measure state variables, making these solutions difficult to implement or to scale down in cost. This work introduces methods to approximate complex system behavior prediction and optimization, based on dynamic data obtained from inexpensive sensors. Unlike many existing approaches, we do not extract an exact model to capture every detail of the system; rather, we develop an approximated model with key predictive characteristics. Such a model makes estimation and prediction available to users, who can then make informed decisions; alternatively, these estimates are made available as an input to an optimization tool to automatically provide Pareto-optimized set points. Moreover, the approximate nature of this model makes the determination of the prediction and optimization parameters computationally inexpensive, adaptive to system or environment change, and suitable for embedded system implementation. The effectiveness of these methods is first demonstrated on an HVAC system and then extended to a variety of complex system applications.
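As a rough illustration of the approach, the sketch below fits a simple least-squares cost model to hypothetical set-point/energy-cost observations, pairs it with an illustrative comfort penalty, and scans candidate set points for non-dominated (Pareto) trade-offs. The linear cost model, the quadratic discomfort metric, and the data are assumptions, not the thesis's actual formulation.

```python
import numpy as np

# Hypothetical logged data: cooling set point (deg C) vs. daily energy cost ($)
setpoints = np.array([22.0, 23.0, 24.0, 25.0, 26.0])
daily_cost = np.array([7.9, 6.8, 5.9, 5.2, 4.6])

# Approximate model: least-squares fit of cost as a linear function of set point
A = np.vstack([setpoints, np.ones_like(setpoints)]).T
slope, intercept = np.linalg.lstsq(A, daily_cost, rcond=None)[0]

def predicted_cost(sp):
    return slope * sp + intercept

def discomfort(sp, preferred=22.0):
    # Illustrative comfort penalty: squared deviation from preferred temperature
    return (sp - preferred) ** 2

# Scan candidate set points and keep the non-dominated (Pareto) choices
candidates = np.arange(22.0, 27.0, 0.5)
points = [(sp, predicted_cost(sp), discomfort(sp)) for sp in candidates]
pareto = [p for p in points
          if not any(q[1] <= p[1] and q[2] <= p[2] and q != p for q in points)]
for sp, cost, disc in pareto:
    print(f"set point {sp:.1f} C -> predicted cost ${cost:.2f}/day, discomfort {disc:.1f}")
```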
103

INTEGRATED DECISION MAKING FOR PLANNING AND CONTROL OF DISTRIBUTED MANUFACTURING ENTERPRISES USING DYNAMIC-DATA-DRIVEN ADAPTIVE MULTI-SCALE SIMULATIONS (DDDAMS)

Celik, Nurcin January 2010 (has links)
Discrete-event simulation has become one of the most widely used analysis tools for large-scale, complex and dynamic systems such as supply chains, as it can take randomness into account and accommodate very detailed models. However, major challenges are faced in simulating such systems, especially when they are used to support short-term decisions (e.g., the operational decisions or maintenance and scheduling decisions considered in this research). First, a detailed simulation requires significant amounts of computation time. Second, given the enormous amount of dynamically-changing data that exists in the system, information needs to be updated wisely in the model in order to prevent unnecessary usage of computing and networking resources. Third, there is a lack of methods allowing dynamic data updates during the simulation execution. Overall, in a simulation-based planning and control framework, timely monitoring, analysis, and control is important so as not to disrupt a dynamically changing system. To meet this temporal requirement and address the above-mentioned challenges, a Dynamic-Data-Driven Adaptive Multi-Scale Simulation (DDDAMS) paradigm is proposed to adaptively adjust the fidelity of a simulation model against available computational resources by incorporating dynamic data into the executing model, which then steers the measurement process for selective data update. To the best of our knowledge, the proposed DDDAMS methodology is one of the first efforts to present a coherent, integrated decision-making framework for timely planning and control of distributed manufacturing enterprises. To this end, a comprehensive system architecture and methodologies are first proposed, where the components include 1) a real-time DDDAM-Simulation, 2) grid computing modules, 3) a Web Service communication server, 4) a database, 5) various sensors, and 6) the real system. Four algorithms are then developed and embedded into a real-time simulator to enable its DDDAMS capabilities: abnormality detection, fidelity selection, fidelity assignment, and prediction and task generation. As part of the developed algorithms, improvements are made to the resampling techniques for sequential Bayesian inferencing, and their performance is benchmarked in terms of resampling quality and computational efficiency. Grid computing and Web Services are used for computational resource management and interoperable communications among distributed software components, respectively. A prototype of the proposed DDDAM-Simulation was successfully implemented for preventive maintenance scheduling and part routing scheduling in a semiconductor manufacturing supply chain, where the results look quite promising.
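The resampling step mentioned above is central to sequential Bayesian inferencing (particle filtering). As a generic point of reference only, and not the specific algorithm developed in this dissertation, the sketch below shows standard systematic resampling of a weighted particle set.

```python
import numpy as np

def systematic_resample(particles, weights, rng=np.random.default_rng(0)):
    """Standard systematic resampling: draw N particles using one random offset,
    keeping each particle in proportion to its normalized weight."""
    n = len(particles)
    w = np.array(weights, dtype=float)   # copy so the caller's weights are untouched
    w /= w.sum()
    positions = (rng.random() + np.arange(n)) / n
    cumulative = np.cumsum(w)
    indexes = np.searchsorted(cumulative, positions)
    return particles[indexes]

particles = np.array([0.1, 0.4, 0.9, 1.5, 2.2])
weights = np.array([0.05, 0.1, 0.5, 0.3, 0.05])
print(systematic_resample(particles, weights))
```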
104

Novice, Generalist, and Expert Reasoning During Clinical Case Explanation: A Propositional Assessment of Knowledge Utilization and Application

Mariasin, Margalit January 2010 (has links)
Objectives: The aim of the two exploratory studies presented here was to investigate expert-novice cognitive performance in the field of dietetic counseling. More specifically, the purpose was to characterize the knowledge used and the cognitive reasoning strategies of expert, intermediate and novice dietitians during their assessment of clinical vignettes of simulated dyslipidemia cases. Background: Since no studies have been conducted on expert-novice differences in knowledge utilization and reasoning in the field of dietetics, literature from various domains examining expert-novice decision-making was used to guide the studies presented here. Previous expert-novice research in aspects of health care such as counseling and diagnostic reasoning among physicians and nurses has found differences in the way experts extract and apply knowledge during reasoning. In addition, various studies illustrate an intermediate effect, where generalist performance is somewhat poorer than that of experts and novices. Methods: The verbal protocols of expert (n=4), generalist (n=4), and novice (n=4) dietitians were analyzed using propositional analysis. Semantic networks were generated and used to compare reasoning processes to a reference model developed from an existing dyslipidemia care map by Brauer et al. (2007, 2009). Detailed analysis was conducted on individual networks in an effort to obtain a better understanding of cue utilization, concept usage, and overall cohesiveness during reasoning. Results: The results of the first study indicate no statistical differences in reasoning between novices, generalists and experts with regard to recalls and inferences. Interesting findings in the study also suggest that discussions of the terms “dietary fat” and “cholesterol” by individuals at each level of expertise had qualitative differences. This may be reflective of the information provided in the case scenarios to each participating dietitian. Furthermore, contrary to previous studies in expert-novice reasoning, an intermediate effect was not evident. The results of the second study show a statistical difference in data-driven (forward) reasoning between experts and novices. There was no statistical difference in hypothesis-driven (backward) reasoning between groups. The reasoning networks of experts appear to reveal more concise explanations of important aspects related to dyslipidemia counseling. Reasoning patterns of the expert dietitians appear more coherent, although there was no statistical difference in the length or number of reasoning chains between groups. With previous research focusing on diagnostic reasoning rather than counseling, this finding may be a result of the nature of the underlying task. Conclusion: The studies presented here serve as a basis for future expert-novice research in the field of dietetics. The exploration of individual verbal protocols to identify characteristics of dietitians at various levels of expertise can provide insight into the way knowledge is used and applied during diet counseling. Subsequent research can focus on randomized sample selection, with case scenarios held constant, in order to obtain results that can be generalized to the greater dietitian population.
105

Development of an Implicit Memory Test for Social Cognition Research

堀内, 孝, Horiuchi, Takashi 12 1900 (has links)
This record uses content digitized by the National Institute of Informatics.
106

Developing Materials Informatics Workbench for Expediting the Discovery of Novel Compound Materials

Kwok Wai Steny Cheung Unknown Date (has links)
This project presents a Materials Informatics Workbench that resolves the challenges confronting materials scientists in the assimilation and dissemination of materials science data. It adopts an approach that combines and extends the technologies of the Semantic Web, the Web Services Business Process Execution Language (WSBPEL) and Open Archives Initiative Object Reuse and Exchange (OAI-ORE). These technologies enable the development of novel user interfaces and innovative algorithms and techniques behind the major components of the proposed workbench. In recent years, materials scientists have been struggling with the challenge of dealing with the ever-increasing amount of complex materials science data that are available from online sources and generated by high-throughput laboratory instruments and data-intensive software tools. Meanwhile, funding organizations have encouraged, and even mandated, sponsored researchers across many domains to make scientifically valuable data, together with traditional scholarly publications, available to the public. This open access requirement provides an opportunity for materials scientists who are able to exploit the available data to expedite the discovery of novel compound materials. However, it also poses challenges for them. Materials scientists raise concerns about the difficulties of precisely locating and processing diverse, but related, data from different data sources and of effectively managing laboratory information and data. In addition, they lack simple tools for data access and publication, and require measures for Intellectual Property protection and standards for data sharing, exchange and reuse. The following paragraphs describe how the major workbench components resolve these challenges. First, the materials science ontology, represented in the Web Ontology Language (OWL), enables (1) the mapping between and the integration of disparate materials science databases, (2) the modelling of experimental provenance information acquired in the physical and digital domains and (3) the inferencing and extraction of new knowledge within the materials science domain. Next, the federated search interface based on the materials science ontology enables materials scientists to search, retrieve, correlate and integrate diverse, but related, materials science data and information across disparate databases. Then, a workflow management system underpinned by a WSBPEL engine is able not only to manage a scientific investigation process that incorporates multidisciplinary scientists distributed over a wide geographic region and self-contained computational services, but also to systematically acquire the experimental data and information generated by the process. Finally, the provenance-aware scientific compound-object publishing system provides the scientists with a view of a highly complex scientific workflow at multiple levels of granularity. Thus, they can easily comprehend the science of the workflow, access experimental information and keep confidential information from unauthorised viewers.
It also enables the scientists to quickly and easily author and publish a scientific compound object that (1) incorporates not only the internal experimental data with the provenance information from the rendered view of a scientific experimental workflow, but also external digital objects with their metadata, for example, published scholarly papers discoverable via the World Wide Web (the Web), (2) is self-contained and explanatory with IP protection and (3) is guaranteed to be disseminated widely on the Web. Prototype systems of the major workbench components have been developed. The quality of the materials science ontology has been assessed based on Gruber's principles for the design of ontologies used for knowledge sharing, while its applicability has been evaluated through two of the workbench components, the ontology-based federated search interface and the provenance-aware scientific compound-object publishing system. These prototype systems have been deployed within a team of fuel cell scientists working within the Australian Institute for Bioengineering and Nanotechnology (AIBN) at the University of Queensland. Following the user evaluation, the overall feedback to date has been very positive. First, the scientists were impressed with the convenience of the ontology-based federated search interface because of the easy and quick access to the integrated databases and analytical tools. Next, they were relieved that the complex compound synthesis process could be managed by and monitored through the WSBPEL workflow management system. They were also pleased that the system is able to systematically acquire huge amounts of complex experimental data produced by self-contained computational services, data that no longer has to be handled manually with paper-based laboratory notebooks. Finally, the scientific compound-object publishing system inspired them to publish their data voluntarily, because it provides them with a scientist-friendly and intuitive interface that enables them to (1) intuitively access experimental data and information, (2) author self-contained and explanatory scientific compound objects that incorporate experimental data and information about research outcomes, together with published scholarly papers and peer-reviewed datasets that strengthen those outcomes, (3) enforce proper measures for IP protection, (4) make those objects comply with Open Archives Initiative Object Reuse and Exchange (OAI-ORE) to maximize their dissemination over the Web and (5) ingest those objects into a Fedora-based digital library.
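As a loose illustration of how an OWL/RDF materials science ontology can support queries like those issued by the federated search interface, the sketch below builds a tiny RDF graph with rdflib and runs a SPARQL query over it. The namespace, class, and property names (e.g. FuelCellMembrane, protonConductivity) are invented for the example and are not the workbench's actual ontology.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical materials-science namespace; the real workbench ontology differs.
MAT = Namespace("http://example.org/materials#")

g = Graph()
sample = MAT.Sample_001
g.add((sample, RDF.type, MAT.FuelCellMembrane))
g.add((sample, MAT.hasComposition, Literal("Nafion/SiO2")))
g.add((sample, MAT.protonConductivity, Literal(0.09)))  # S/cm, illustrative value

# A SPARQL query that a federated search layer might issue against such a graph
query = """
PREFIX mat: <http://example.org/materials#>
SELECT ?sample ?cond WHERE {
    ?sample a mat:FuelCellMembrane ;
            mat:protonConductivity ?cond .
    FILTER (?cond > 0.05)
}
"""
for row in g.query(query):
    print(row.sample, row.cond)
```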
108

Data Analytics Methods for Enterprise-wide Optimization Under Uncertainty

Calfa, Bruno Abreu 01 April 2015 (has links)
This dissertation primarily proposes data-driven methods to handle uncertainty in problems related to Enterprise-wide Optimization (EWO). Data-driven methods are characterized by the direct use of data (historical and/or forecast) in the construction of models for the uncertain parameters that naturally arise in real-world applications. Such uncertainty models are then incorporated into the optimization model describing the operations of an enterprise. Before addressing uncertainty in EWO problems, Chapter 2 deals with the integration of deterministic planning and scheduling operations of a network of batch plants. The main contributions of this chapter include the modeling of sequence-dependent changeovers across time periods for a unit-specific general precedence scheduling formulation, a hybrid decomposition scheme using Bilevel and Temporal Lagrangean Decomposition approaches, and the solution of subproblems in parallel. Chapters 3 to 6 propose different data analytics techniques to account for stochasticity in EWO problems. Chapter 3 deals with scenario generation via statistical property matching in the context of stochastic programming. A distribution matching problem is proposed that addresses the under-specification shortcoming of the originally proposed moment matching method. Chapter 4 deals with data-driven individual and joint chance constraints with right-hand-side uncertainty. The distributions are estimated with kernel smoothing and are considered to lie in a confidence set, which is also considered to contain the true, unknown distributions. The chapter proposes calculating the size of the confidence set based on the standard errors estimated from the smoothing process. Chapter 5 proposes the use of quantile regression to model production variability in the context of Sales & Operations Planning. The approach relies on available historical data of actual vs. planned production rates, from which the deviation from plan is defined and treated as a random variable. Chapter 6 addresses the combined optimal procurement contract selection and pricing problems. Different price-response models, linear and nonlinear, are considered in the latter problem. Results show that setting selling prices in the presence of uncertainty leads to the use of different purchasing contracts.
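Chapter 5's use of quantile regression can be illustrated with a small sketch: given hypothetical historical planned vs. actual production rates, conditional quantiles of the deviation from plan are fitted with statsmodels' quantreg. The data and variable names are made up; only the technique (quantile regression) matches the chapter.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical history of planned vs. actual production rates
rng = np.random.default_rng(1)
planned = rng.uniform(80, 120, size=200)
actual = planned - rng.gamma(shape=2.0, scale=3.0, size=200)  # random shortfalls
df = pd.DataFrame({"planned": planned, "deviation": actual - planned})

# Fit the median and the 10th-percentile deviation as functions of the plan
model = smf.quantreg("deviation ~ planned", df)
for q in (0.5, 0.1):
    res = model.fit(q=q)
    print(f"q={q}: deviation ~ {res.params['Intercept']:.2f} "
          f"+ {res.params['planned']:.3f} * planned")
```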
109

Data-Driven Statistical Models of Robotic Manipulation

Paolini, Robert 01 May 2018 (has links)
Improving robotic manipulation is critical for robots to be actively useful in real-world factories and homes. While some success has been shown in simulation and controlled environments, robots are slow, clumsy, and not general or robust enough when interacting with their environment. By contrast, humans effortlessly manipulate objects. One possible reason for this discrepancy is that, starting from birth, humans have years of experience to collect data and develop good internal models of what happens when they manipulate objects. If robots could also learn models from a large amount of real data, perhaps they, too, could become more capable manipulators. In this thesis, we propose to improve robotic manipulation by solving two problems. First, we look at how robots can collect a large amount of manipulation data without human intervention. Second, we study how to build statistical models of robotic manipulation from the collected data. These data-driven models can then be used for planning more robust manipulation actions. To solve the first problem of enabling large-scale data collection, we perform several different robotic manipulation experiments and use these as case studies. We study bin-picking, post-grasp manipulation, pushing, tray tilting, planar grasping, and regrasping. These case studies allow us to gain insights into how robots can collect a large amount of accurate data with minimal human intervention. To solve the second problem of statistically modeling manipulation actions, we propose models for different parts of various manipulation actions. First, we look at how to model post-grasp manipulation actions by modeling the probability distribution of where an object ends up in a robot's hand, and how this affects its success rate at various tasks such as placing or insertion. Second, we model how robots can change the pose of an object in their hand with regrasp actions. Third, we improve on the place-and-pick regrasp action by modeling each step separately with more data. These learned data-driven models can then be used for planning more robust and accurate manipulation actions.
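As an illustration of the kind of statistical model described for post-grasp manipulation, the sketch below fits a Gaussian to hypothetical measured in-hand object poses and estimates, by Monte Carlo sampling, the probability that an illustrative placement tolerance is met. The data, tolerance, and pose parameterization are assumptions, not the thesis's experimental setup.

```python
import numpy as np

# Hypothetical in-hand object poses after grasping: (x offset mm, y offset mm, angle deg)
rng = np.random.default_rng(42)
poses = rng.normal(loc=[1.0, -0.5, 3.0], scale=[2.0, 1.5, 5.0], size=(500, 3))

# Fit a Gaussian model of where the object ends up in the hand
mean = poses.mean(axis=0)
cov = np.cov(poses, rowvar=False)

# Estimate placement success probability against an illustrative tolerance
samples = rng.multivariate_normal(mean, cov, size=10000)
tolerance = np.array([3.0, 3.0, 8.0])          # mm, mm, degrees
success = np.all(np.abs(samples) <= tolerance, axis=1).mean()
print(f"estimated placement success rate: {success:.2%}")
```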
110

Data-driven synthesis of resonant controllers applied to uninterruptible power supplies

Schildt, Alessandro Nakoneczny January 2014 (has links)
This work discusses a controller tuning method based on plant data. The proposal is to tune resonant controllers for application to the frequency inverters found in uninterruptible power supplies, with the goal of tracking a sinusoidal voltage reference. Within this context, the Virtual Reference Feedback Tuning (VRFT) algorithm is used, a data-driven controller identification method that is not iterative and does not require a model of the system to identify the controller. From data obtained from the plant, together with a reference model defined by the designer, the method estimates the parameters of a previously fixed controller structure by minimizing a cost function defined by the error between the desired and actual outputs. In addition, a current feedback is required in the control loop, whose proportional gain is defined by empirical experiment. To demonstrate the method, simulated and experimental results are presented for a 5 kVA uninterruptible power supply operating with linear and nonlinear loads. Performance is assessed in terms of the quality of the actual output signal obtained with controllers tuned from different reference models, and several excitation signals are used to feed the VRFT algorithm. The experimental results are obtained on a single-phase inverter with a real-time platform based on the dSPACE DS1104 data acquisition board. The results show that, with respect to international standards, the proposed control system has good reference-tracking behavior when operating at no load or with a linear load.
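For readers unfamiliar with Virtual Reference Feedback Tuning, the sketch below illustrates the core idea on a toy example: from a single batch of input/output data, a virtual reference is computed by inverting a first-order reference model, and the parameters of a controller that is linear in its parameters are found by least squares. The plant, reference model, and PI-style controller basis are simplifications for illustration; the thesis applies the method to resonant controllers for a UPS inverter.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)

# Toy "plant" data: excite an unknown first-order system and record (u, y)
u = rng.uniform(-1, 1, size=400)
y = lfilter([0.3], [1, -0.8], u)               # plant is unknown to the method

# Reference model M(z) = (1 - a) / (z - a): desired closed-loop behavior
a = 0.6
# Virtual reference r such that y = M r  =>  r[k] = (y[k+1] - a*y[k]) / (1 - a)
r = np.empty_like(y)
r[:-1] = (y[1:] - a * y[:-1]) / (1 - a)
r[-1] = r[-2]
e = r - y                                       # virtual tracking error

# Controller linear in parameters: C(z, theta) = theta0 + theta1 * z/(z - 1) (PI basis)
phi_p = e                                       # proportional channel
phi_i = lfilter([1, 0], [1, -1], e)             # integral (accumulated error) channel
Phi = np.column_stack([phi_p, phi_i])

# Fit u ~ Phi * theta: the controller that would have produced u from the virtual error
theta, *_ = np.linalg.lstsq(Phi, u, rcond=None)
print("identified controller parameters:", theta)
```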
