101

Development of an Implicit Memory Test for Social Cognition Research [社会的認知研究のための潜在記憶テストの作成]

堀内, 孝, Horiuchi, Takashi 12 1900 (has links)
This record uses content digitized by the National Institute of Informatics.
102

Developing Materials Informatics Workbench for Expediting the Discovery of Novel Compound Materials

Kwok Wai Steny Cheung Unknown Date (has links)
This project presents a Materials Informatics Workbench that resolves the challenges confronting materials scientists in materials science data assimilation and dissemination. It adopts an approach that combines and extends the technologies of the Semantic Web, the Web Service Business Process Execution Language (WSBPEL) and Open Archives Initiative Object Reuse and Exchange (OAI-ORE). These technologies enable the development of novel user interfaces and innovative algorithms and techniques behind the major components of the proposed workbench. In recent years, materials scientists have been struggling with the ever-increasing amount of complex materials science data available from online sources and generated by high-throughput laboratory instruments and data-intensive software tools. Meanwhile, funding organizations have encouraged, and even mandated, sponsored researchers across many domains to make scientifically valuable data, together with traditional scholarly publications, available to the public. This open access requirement provides an opportunity for materials scientists who are able to exploit the available data to expedite the discovery of novel compound materials. However, it also poses challenges for them. Materials scientists raise concerns about the difficulties of precisely locating and processing diverse, but related, data from different data sources and of effectively managing laboratory information and data. In addition, they lack simple tools for data access and publication, and require measures for Intellectual Property (IP) protection and standards for data sharing, exchange and reuse. The following paragraphs describe how the major workbench components resolve these challenges. First, the materials science ontology, represented in the Web Ontology Language (OWL), enables (1) the mapping between, and integration of, disparate materials science databases, (2) the modelling of experimental provenance information acquired in the physical and digital domains and (3) the inferencing and extraction of new knowledge within the materials science domain. Next, the federated search interface based on the materials science ontology enables materials scientists to search, retrieve, correlate and integrate diverse, but related, materials science data and information across disparate databases. Then, a workflow management system underpinned by the WSBPEL engine is not only able to manage the scientific investigation process, which involves multidisciplinary scientists distributed over a wide geographic region and self-contained computational services, but also to systematically acquire the experimental data and information generated by that process. Finally, the provenance-aware scientific compound-object publishing system provides scientists with a view of a highly complex scientific workflow at multiple levels of granularity. Thus, they can easily comprehend the science of the workflow, access experimental information and keep confidential information from unauthorised viewers.
It also enables scientists to quickly and easily author and publish a scientific compound object that (1) incorporates not only internal experimental data with provenance information from the rendered view of a scientific experimental workflow, but also external digital objects with metadata, for example published scholarly papers discoverable via the World Wide Web (the Web), (2) is self-contained and self-explanatory with IP protection and (3) is guaranteed to be disseminated widely on the Web. Prototype systems of the major workbench components have been developed. The quality of the materials science ontology has been assessed against Gruber's principles for the design of ontologies used for knowledge sharing, while its applicability has been evaluated through two of the workbench components, the ontology-based federated search interface and the provenance-aware scientific compound-object publishing system. These prototype systems have been deployed within a team of fuel cell scientists working at the Australian Institute for Bioengineering and Nanotechnology (AIBN) at the University of Queensland. Following the user evaluation, the overall feedback to date has been very positive. First, the scientists were impressed with the convenience of the ontology-based federated search interface because of its easy and quick access to the integrated databases and analytical tools. Next, they were relieved that the complex compound synthesis process could be managed by, and monitored through, the WSBPEL workflow management system. They were also excited that the system systematically acquires the huge amounts of complex experimental data produced by self-contained computational services, which no longer need to be handled manually with paper-based laboratory notebooks. Finally, the scientific compound-object publishing system inspired them to publish their data voluntarily, because it provides a scientist-friendly and intuitive interface that enables them to (1) intuitively access experimental data and information, (2) author self-contained, self-explanatory scientific compound objects that incorporate experimental data and information about research outcomes, together with published scholarly papers and peer-reviewed datasets that strengthen those outcomes, (3) enforce proper measures for IP protection, (4) make those objects compliant with Open Archives Initiative Object Reuse and Exchange (OAI-ORE) to maximize their dissemination over the Web and (5) ingest those objects into a Fedora-based digital library.
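As a rough illustration of the ontology-driven, provenance-style querying the abstract describes (not the workbench's actual code), the sketch below uses rdflib with a hypothetical materials-science namespace and invented triples:

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

# Hypothetical namespace standing in for the workbench's OWL materials science ontology.
MS = Namespace("http://example.org/matsci#")

g = Graph()
g.bind("ms", MS)

# A tiny illustrative fragment: a fuel-cell membrane sample produced by a synthesis step.
g.add((MS.FuelCellMembrane, RDFS.subClassOf, MS.Material))
g.add((MS.sample42, RDF.type, MS.FuelCellMembrane))
g.add((MS.sample42, MS.producedBy, MS.solGelStep7))
g.add((MS.solGelStep7, MS.performedBy, Literal("AIBN fuel cell team")))

# Provenance-style query: which samples (of any Material subclass) came from which step?
results = g.query("""
    PREFIX ms:   <http://example.org/matsci#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?sample ?step WHERE {
        ?sample a/rdfs:subClassOf* ms:Material ;
                ms:producedBy ?step .
    }
""")
for sample, step in results:
    print(sample, "was produced by", step)
```

The class names and properties here are assumptions for illustration only; the thesis's actual ontology, federated search interface and OAI-ORE packaging are far richer.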
104

Data Analytics Methods for Enterprise-wide Optimization Under Uncertainty

Calfa, Bruno Abreu 01 April 2015 (has links)
This dissertation primarily proposes data-driven methods to handle uncertainty in problems related to Enterprise-wide Optimization (EWO). Data-driven methods are characterized by the direct use of data (historical and/or forecast) in the construction of models for the uncertain parameters that naturally arise in real-world applications. Such uncertainty models are then incorporated into the optimization model describing the operations of an enterprise. Before addressing uncertainty in EWO problems, Chapter 2 deals with the integration of deterministic planning and scheduling operations of a network of batch plants. The main contributions of this chapter include the modeling of sequence-dependent changeovers across time periods for a unit-specific general precedence scheduling formulation, a hybrid decomposition scheme using Bilevel and Temporal Lagrangean Decomposition approaches, and the solution of subproblems in parallel. Chapters 3 to 6 propose different data analytics techniques to account for stochasticity in EWO problems. Chapter 3 deals with scenario generation via statistical property matching in the context of stochastic programming. A distribution matching problem is proposed that addresses the under-specification shortcoming of the originally proposed moment matching method. Chapter 4 deals with data-driven individual and joint chance constraints with right-hand-side uncertainty. The distributions are estimated with kernel smoothing and are assumed to lie in a confidence set that also contains the true, unknown distributions. The chapter proposes calculating the size of the confidence set from the standard errors estimated in the smoothing process. Chapter 5 proposes the use of quantile regression to model production variability in the context of Sales & Operations Planning. The approach relies on available historical data of actual vs. planned production rates, from which the deviation from plan is defined and treated as a random variable. Chapter 6 addresses the combined optimal procurement contract selection and pricing problems. Different price-response models, linear and nonlinear, are considered in the latter problem. Results show that setting selling prices in the presence of uncertainty leads to the use of different purchasing contracts.
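To make the Chapter 5 idea concrete — quantile regression on historical actual-vs-planned production rates — a minimal sketch with synthetic data (not the dissertation's data or model) could look like this:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic history: planned production rates and the realized deviation from plan.
rng = np.random.default_rng(1)
planned = rng.uniform(50.0, 100.0, size=200)
deviation = -0.1 * planned + rng.normal(0.0, 5.0, size=200)   # actual minus planned

X = sm.add_constant(planned)            # intercept + planned rate as regressor

# Fit several conditional quantiles of the deviation; these can feed the planning
# model as pessimistic / nominal / optimistic production-variability scenarios.
for q in (0.1, 0.5, 0.9):
    res = sm.QuantReg(deviation, X).fit(q=q)
    print(f"q={q}: intercept={res.params[0]:.2f}, slope={res.params[1]:.3f}")
```

The linear relation between planned rate and deviation is purely an assumption for the example; the point is only that each fitted quantile gives a different, data-driven picture of production variability.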
105

Data-Driven Statistical Models of Robotic Manipulation

Paolini, Robert 01 May 2018 (has links)
Improving robotic manipulation is critical for robots to be actively useful in real-world factories and homes. While some success has been shown in simulation and controlled environments, robots are slow, clumsy, and not general or robust enough when interacting with their environment. By contrast, humans effortlessly manipulate objects. One possible reason for this discrepancy is that, starting from birth, humans have years of experience in which to collect data and develop good internal models of what happens when they manipulate objects. If robots could also learn models from a large amount of real data, perhaps they, too, could become more capable manipulators. In this thesis, we propose to improve robotic manipulation by solving two problems. First, we look at how robots can collect a large amount of manipulation data without human intervention. Second, we study how to build statistical models of robotic manipulation from the collected data. These data-driven models can then be used for planning more robust manipulation actions. To solve the first problem of enabling large-scale data collection, we perform several different robotic manipulation experiments and use these as case studies. We study bin-picking, post-grasp manipulation, pushing, tray tilting, planar grasping, and regrasping. These case studies allow us to gain insights into how robots can collect a large amount of accurate data with minimal human intervention. To solve the second problem of statistically modeling manipulation actions, we propose models for different parts of various manipulation actions. First, we look at how to model post-grasp manipulation actions by modeling the probability distribution of where an object ends up in a robot's hand, and how this affects its success rate at various tasks such as placing or insertion. Second, we model how robots can change the pose of an object in their hand with regrasp actions. Third, we improve on the place-and-pick regrasp action by modeling the place and the pick separately with more data. These learned data-driven models can then be used for planning more robust and accurate manipulation actions.
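As a hedged illustration of the first kind of model described — fitting a distribution to where an object ends up in the hand and using it to estimate task success — consider this sketch with fabricated pose data and a simple Gaussian assumption:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Fabricated post-grasp measurements: (x offset in mm, rotation in degrees) of the
# object in the hand after many repeated grasps.
rng = np.random.default_rng(2)
poses = rng.multivariate_normal(mean=[1.0, 3.0],
                                cov=[[4.0, 1.0], [1.0, 9.0]], size=500)

# Data-driven model: a Gaussian fit to the observed in-hand pose distribution.
mu = poses.mean(axis=0)
sigma = np.cov(poses, rowvar=False)
pose_model = multivariate_normal(mean=mu, cov=sigma)

# Example query: probability that the object lies within the tolerance window of an
# insertion task (|x offset| < 2 mm, |rotation| < 5 degrees), estimated by Monte Carlo.
samples = pose_model.rvs(size=100_000, random_state=3)
ok = (np.abs(samples[:, 0]) < 2.0) & (np.abs(samples[:, 1]) < 5.0)
print("Estimated insertion success rate:", ok.mean())
```

The Gaussian form, the tolerance window and all numbers are assumptions for illustration; the thesis builds such models from real robot data and for several distinct actions.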
106

Data-Driven Synthesis of Resonant Controllers Applied to Uninterruptible Power Supplies [Síntese de controladores ressonantes baseado em dados aplicado a fontes ininterruptas de energia]

Schildt, Alessandro Nakoneczny January 2014 (has links)
This work addresses a controller tuning method based on data obtained from the plant. The proposal is to tune resonant controllers for application to the frequency inverters found in uninterruptible power supplies, with the goal of tracking a sinusoidal voltage reference. Within this context, the Virtual Reference Feedback Tuning (VRFT) algorithm is used: a data-driven controller identification method that is not iterative and does not require a model of the system to identify the controller. From data obtained from the plant, together with a reference model defined by the designer, the method estimates the parameters of a previously fixed controller structure by minimizing a cost function defined by the error between the desired and actual outputs. In addition, a current feedback loop is required, whose proportional gain is defined empirically. To demonstrate the method, simulated and experimental results are presented for a 5 kVA uninterruptible power supply operating with linear and nonlinear loads. Performance is evaluated in terms of the quality of the actual output signal obtained with controllers tuned from different reference models and with different excitation signals fed to the VRFT algorithm. The experimental results are obtained on a single-phase inverter with a real-time platform based on the dSPACE DS1104 data acquisition board. The results show that, with respect to international standards, the proposed control system tracks the reference well when operating at no load or with a linear load.
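For readers unfamiliar with VRFT, the sketch below shows the core one-shot identification step on synthetic data. It uses a simple PI controller class and a first-order reference model rather than the resonant structure and reference model of the thesis, and omits the usual pre-filter; all signals and numbers are illustrative:

```python
import numpy as np

# Reference model T_d: y_d[k+1] = a*y_d[k] + (1-a)*r[k]  (first order, assumed).
a = 0.8

# Hypothetical open-loop experiment: recorded plant input u and output y.
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros_like(u)
for k in range(len(u) - 1):          # stand-in plant dynamics, only to make the demo self-contained
    y[k + 1] = 0.9 * y[k] + 0.1 * u[k]

# Virtual reference: invert the reference model, r[k] = (y[k+1] - a*y[k]) / (1 - a).
r = (y[1:] - a * y[:-1]) / (1 - a)
e = r - y[:-1]                        # virtual tracking error

# Controller class: PI, u[k] = Kp*e[k] + Ki*sum(e[0..k]).  One least-squares fit gives theta.
phi = np.column_stack([e, np.cumsum(e)])
theta, *_ = np.linalg.lstsq(phi, u[:-1], rcond=None)
print("Estimated [Kp, Ki]:", theta)
```

The appeal, as the abstract notes, is that no plant model is identified: the controller parameters come directly from one batch of input/output data and the designer's reference model.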
107

Improving processor power demand comprehension in data-driven power and software phase classification and prediction

Khoshbakht, Saman 14 August 2018 (has links)
The single-core performance trend predicted by Moore's law has been impeded in recent years, partly due to the limitations imposed by increasing processor power demands. One way to mitigate this limitation on performance improvement is the introduction of multi-core and multi-processor computation. Another approach to increasing the performance-per-Watt metric is to use the processor's power more efficiently. In a single-core system, the processor cannot sustainably dissipate more than the nominal Thermal Design Power (TDP) limit determined for the processor at design time. It is therefore important to understand and manage the power demands of the processes being executed. This principle also applies to multi-core and multi-processor environments. In a multi-processor environment, knowing the power demands of the workload, the power management unit can schedule the workload to a processor based on the state of each processor and process in the most efficient way; this is an instance of the knapsack problem. Another approach, also applicable to multi-cores, is to reduce a core's power by reducing its operating voltage and frequency, mitigating power bursts, lending more headroom to other cores, and keeping the total power under the TDP limit. The information collected from the execution of the software running on the processor (i.e. the workload) is the key to determining the power-management actions needed at any given time. This work comprises two different approaches to improving the comprehension of software power demands as the software executes on the processor. In the first part of this work, the effects of software data on power are analysed. It is important to be able to model power based on the instructions the software comprises; however, to the best of our knowledge, no prior work has investigated the effect of the values being processed on processor power. Creating a power model capable of accurately reflecting the power demands of the software at any given time is a problem addressed by previous research. The software power model can be used in processor simulation environments, as well as in the processor itself, to estimate power dissipation without the need to measure it physically. To collect the data required for this research, a profiler tool was developed by the author and used in both parts of the work. The second part of this work focuses on the evolution of processor power over time during the execution of the software. Understanding the power demands of the processor at any given time is important for maintaining and managing processor power. Additionally, insight into the future power demands of the software can help the system schedule ahead of time, in order to prepare for any high-power section of the code as well as to plan to use the power headroom made available by an upcoming low-power section. In this part of our work, a new hierarchical approach to software phase classification is developed. The software phase classification problem focuses on determining the behaviour of the software in any given time slice by assigning the time slice to one of a set of pre-determined software phases.
Each phase is assumed to have known behaviour, either previously measured and instrumented from earlier observed instances of the phase or estimated by a model of each phase's behaviour. Using a two-tiered hierarchical clustering approach, our proposed phase classification methodology incorporates the recent performance behaviour of the software in order to determine the power phase. We focus on determining the power phase from performance information because real processor power is usually not available without added hardware, while a large number of performance counters are available on most modern processors. Additionally, based on our observations, the relation between performance phases and power behaviour is highly predictable. This method is shown to provide robust results with little noise compared to other methods, while providing timing accuracy high enough for the processor to act on. To the best of our knowledge, no other existing work provides both the timing accuracy and the noise reduction of our approach. Software phase classification can be used to control processor power based on the software's phase at any given time, but it does not provide insight into the future progression of the workload. Finally, we developed and compared several phase prediction methodologies based on phase-precursor and phase-locality concepts. Phase-precursor-based methods rely on detecting the precursors observed before the software enters a certain phase, while phase-locality methods rely on the locality principle, which postulates a high probability that the current software behaviour will also be observed in the near future. The phase classification and phase prediction methodologies were shown to reduce the power bursts within a workload and so produce a smoother power trace. As the bursts are removed from one workload's power trace, the multi-core processor power headroom can be confidently utilized by another process. / Graduate
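As a rough, single-level simplification of the hierarchical idea described above — grouping fixed-length time slices of performance-counter readings into phases — one might write something like the following; the counter set, slice length and fabricated values are assumptions, and the thesis's actual two-tiered method is not reproduced here:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Fabricated trace: one row per time slice, one column per (normalized) performance
# counter, e.g. instructions retired, cache misses, branch mispredictions.
rng = np.random.default_rng(4)
slices = np.vstack([
    rng.normal([0.9, 0.1, 0.05], 0.02, size=(40, 3)),   # compute-bound stretch
    rng.normal([0.3, 0.7, 0.10], 0.02, size=(40, 3)),   # memory-bound stretch
    rng.normal([0.9, 0.1, 0.05], 0.02, size=(40, 3)),   # compute-bound again
])

# Agglomerative clustering of the slices into a small number of phases.
tree = linkage(slices, method="ward")
phases = fcluster(tree, t=3, criterion="maxclust")

# Each phase label can then be mapped to a power estimate measured (or modeled)
# for previously observed instances of that phase.
print("Phase label per time slice:", phases)
```

The key point mirrored from the abstract is that phases are derived from performance counters, which are cheap to read, and only the per-phase power behaviour needs to be characterized once.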
109

Building Energy Modeling: A Data-Driven Approach

January 2016 (has links)
Buildings consume nearly 50% of the total energy in the United States, which drives the need to develop high-fidelity models for building energy systems. Extensive methods and techniques have been developed, studied, and applied to building energy simulation and forecasting, but most work has focused on developing dedicated modeling approaches for generic buildings. In this study, an integrated, computationally efficient and high-fidelity building energy modeling framework is proposed, with a focus on developing a generalized modeling approach for various types of buildings. First, a number of data-driven simulation models are reviewed and assessed on various types of computationally expensive simulation problems. Motivated by the conclusion that no model outperforms the others when amortized over diverse problems, a meta-learning based recommendation system for data-driven simulation modeling is proposed. To test the feasibility of the proposed framework on building energy systems, an extended application of the recommendation system for short-term building energy forecasting is deployed on various buildings. Finally, a Kalman filter-based data fusion technique is incorporated into the building recommendation system for on-line energy forecasting. Data fusion enables model calibration to update the state estimate in real time, which filters out noise and renders more accurate energy forecasts. The framework is composed of two modules: an off-line model recommendation module and an on-line model calibration module. Specifically, the off-line model recommendation module includes six widely used data-driven simulation models, which are ranked by the meta-learning recommendation system for off-line energy modeling in a given building scenario. Only a selective set of building physical and operational characteristic features is needed to complete the recommendation task. The on-line calibration module effectively addresses system uncertainties: data fusion is applied to the off-line model based on system identification and Kalman filtering methods. The developed data-driven modeling framework is validated on various types of buildings, and the experimental results demonstrate the desired performance in building energy forecasting in terms of accuracy and computational efficiency. The framework could easily be implemented in building energy model predictive control (MPC), demand response (DR) analysis and real-time operation decision support systems. / Dissertation/Thesis / Doctoral Dissertation Industrial Engineering 2016
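The on-line calibration step described above is essentially recursive state estimation. The scalar Kalman-filter sketch below conveys the general idea — not the framework's actual implementation — where the state is a slowly drifting bias correction applied to the off-line model's forecast, and the noise variances are assumed values:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical hourly energy: an off-line model forecast plus an unknown drifting bias.
true_bias = np.cumsum(rng.normal(0.0, 0.2, size=48))        # slow drift (kWh)
forecast = 100.0 + 10.0 * np.sin(np.arange(48) * np.pi / 12)
measured = forecast + true_bias + rng.normal(0.0, 2.0, size=48)

# Scalar Kalman filter tracking the bias: x[k] = x[k-1] + w,  z[k] = x[k] + v.
x, P = 0.0, 1.0          # bias estimate and its variance
Q, R = 0.05, 4.0         # process and measurement noise variances (assumed)
bias_estimates = []
for z in measured - forecast:          # innovation source: observed minus forecast
    P += Q                             # predict
    K = P / (P + R)                    # Kalman gain
    x += K * (z - x)                   # update bias estimate with the new measurement
    P *= (1.0 - K)
    bias_estimates.append(x)

# The calibrated forecast for the next step adds the current bias estimate.
print("Final bias estimate vs. true bias:", bias_estimates[-1], true_bias[-1])
```

This captures the data-fusion benefit the abstract claims: each new measurement nudges the model state, so the forecast tracks the building even as conditions drift.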
110

Run-to-run modelling and control of batch processes

Duran Villalobos, Carlos Alberto January 2016 (has links)
The University of Manchester. Carlos Alberto Duran Villalobos. Doctor of Philosophy in the Faculty of Engineering and Physical Sciences. December 2015. This thesis presents an innovative batch-to-batch optimisation technique that was able to improve the productivity of two benchmark fed-batch fermentation simulators: Saccharomyces cerevisiae and Penicillin production. In developing the proposed technique, several important challenges needed to be addressed. For example, the technique relied on a linear Multiway Partial Least Squares (MPLS) model that had to adapt from one operating region to another as productivity increased, in order to estimate the end-point quality of each batch accurately. The proposed optimisation technique uses a Quadratic Programming (QP) formulation to calculate the Manipulated Variable Trajectory (MVT) from one batch to the next. The main advantages of the proposed optimisation technique compared with other published approaches were the increase in yield and the reduction in the number of batches needed to converge to an optimal MVT. Validity constraints were also included in the batch-to-batch optimisation to restrict the QP calculations to the space described by useful predictions of the MPLS model. The results from experiments on the two simulators showed that the validity constraints slowed the rate of convergence of the optimisation technique and in some cases resulted in a slight reduction in final yield; however, the introduction of the validity constraints did improve the consistency of the batch optimisation. Another important contribution of this thesis was a series of experiments combining a variety of smoothing techniques used in MPLS modelling with the proposed batch-to-batch optimisation technique. From the results of these experiments, it was clear that the MPLS model prediction accuracy did not improve significantly with these smoothing techniques; however, the batch-to-batch optimisation technique did show improvements when filtering was implemented.
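A heavily simplified sketch of the batch-to-batch loop — PLS on unfolded batch trajectories, then a small constrained optimisation for the next manipulated-variable trajectory — is given below. The data, bounds and move-suppression weight are illustrative, and the quadratic programme here is a crude stand-in for the thesis's QP formulation with validity constraints:

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(6)

# Unfolded historical batches: 30 batches x (feed-rate trajectory of 20 intervals),
# with end-of-batch yield as the quality variable.  All data are synthetic.
X = rng.uniform(0.0, 1.0, size=(30, 20))
y = X @ np.linspace(0.2, 1.0, 20) + rng.normal(0.0, 0.1, size=30)

pls = PLSRegression(n_components=3).fit(X, y)
b = pls.coef_.ravel()                      # linearised quality sensitivity w.r.t. the MVT

u_prev = X[-1]                             # MVT of the most recent batch
lam = 0.5                                  # move-suppression weight (illustrative)

# Batch-to-batch step: maximise predicted quality while penalising large moves
# from the previous MVT, subject to actuator bounds.
def cost(u):
    return -(b @ u) + lam * np.sum((u - u_prev) ** 2)

res = minimize(cost, u_prev, bounds=[(0.0, 1.0)] * 20)
u_next = res.x                             # candidate MVT for the next batch
print("Predicted quality change:", float(b @ (u_next - u_prev)))
```

In the thesis the model is re-identified as new batches arrive and the optimisation is further restricted to the region where the MPLS predictions remain valid; this sketch only shows the basic predict-then-optimise cycle.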
