1

Selection of Simplified Models and Parameter Estimation Using Limited Data

Wu, Shaohua 23 December 2009 (has links)
Due to difficulties associated with formulating complex models and obtaining reliable estimates of unknown model parameters, modellers often use simplified models (SMs) that are structurally imperfect and that contain a smaller number of parameters. The objectives of this research are: 1) to develop practical and easy-to-use strategies to help modellers select the best SM from a set of candidate models, and 2) to assist modellers in deciding which parameters in complex models should be estimated, and which should be fixed at initial values. The aim is to select models and parameters so that the best possible predictions can be obtained using the available data and the modeller’s engineering and scientific knowledge. This research summarizes the extensive qualitative and quantitative results in the statistics literature regarding the use of SMs. Mean-squared error (MSE) is used to judge the quality of model predictions obtained from different candidate models, and a confidence-interval approach is developed to assess the uncertainty about whether an SM or the corresponding extended model will give better predictions. Nine commonly applied model-selection criteria (MSC) are reviewed and analyzed for their propensity to prefer SMs. It is shown that there exist preferential orderings for many MSC that are independent of model structure and the particular data set. A new MSE-based MSC is developed using univariate linear statistical models. The effectiveness of this criterion for selecting dynamic nonlinear multivariate models is demonstrated both theoretically and empirically. The proposed criterion is then applied to determine the optimal number of parameters to estimate in complex models, based on ranked parameter lists obtained from estimability analysis. This approach makes use of the modeller’s prior knowledge about the precision of initial parameter values and is less computationally expensive than comparable methods in the literature. / Thesis (Ph.D., Chemical Engineering) -- Queen's University, 2009-12-23
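As an illustration of the MSE-based comparison described in this abstract, here is a minimal sketch, not the thesis's criterion, that contrasts a simplified and an extended linear model by held-out mean-squared prediction error; the data, model forms, and split are invented for the example.

```python
# A minimal sketch: compare a simplified model (SM) and an extended model (EM)
# by held-out mean-squared prediction error. Data and models are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the response depends only weakly on the second regressor.
n = 60
X = rng.normal(size=(n, 2))
y = 2.0 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(scale=0.5, size=n)

# Split the limited data into estimation and validation sets.
X_fit, X_val = X[:40], X[40:]
y_fit, y_val = y[:40], y[40:]

def held_out_mse(cols):
    """Fit ordinary least squares on the chosen regressors; return validation MSE."""
    beta, *_ = np.linalg.lstsq(X_fit[:, cols], y_fit, rcond=None)
    resid = y_val - X_val[:, cols] @ beta
    return float(np.mean(resid ** 2))

print("SM MSE:", held_out_mse([0]))      # one estimated parameter
print("EM MSE:", held_out_mse([0, 1]))   # both parameters estimated
```

With noisy, limited data the simplified model can beat the extended one even though it is structurally wrong, which is the trade-off the thesis's criterion quantifies.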
2

Klasifikace audia hlubokým učením s limitovanými zdroji dat / Audio Classification with Deep Learning on Limited Data Sets

Harár, Pavol January 2019 (has links)
Standard procedures for the diagnosis of dysphonia by a clinical speech-language pathologist have their drawbacks, above all that the process is highly subjective. Recently, however, automatic objective analysis of a speaker's condition has gained popularity. Researchers have successfully based their methods on various machine learning algorithms and hand-crafted features. Unfortunately, these methods do not scale directly to other voice disorders, and the feature-engineering process itself is laborious and demanding in terms of money and talent. Building on previous successes, a deep-learning-based approach can help bridge some of the problems with scalability and generalization, but the limited amount of training data is an obstacle. This is a common denominator in nearly all systems for automated analysis of medical data. The main goal of this thesis is to research new approaches to deep-learning-based predictive modeling on limited audio data sets, with a particular focus on the assessment of pathological voices. This work is the first to experiment with deep learning in this field, and it does so on the largest combined database of dysphonic voices to date, which was assembled as part of this work. It presents a thorough survey of publicly available data sources and identifies their limitations. It describes the design of new time-frequency representations based on the Gabor transform and introduces a new class of loss functions that yield output representations beneficial for learning. In numerical experiments, it demonstrates improved performance of convolutional neural networks trained on limited audio data sets using the so-called "augmented target loss function" and the proposed "Gabor" and "Mel scattering" time-frequency representations.
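A minimal sketch of the kind of time-frequency input such a system consumes, assuming the librosa library is available; a standard log-mel spectrogram stands in here for the thesis's proposed "Gabor" and "Mel scattering" representations, which are not reproduced.

```python
# A minimal sketch: turn a voice recording into a log-mel spectrogram that a
# convolutional network can consume. The file path and parameters are examples.
import numpy as np
import librosa

def log_mel_spectrogram(path, sr=16000, n_mels=64):
    """Load audio at a fixed sample rate and return a log-scaled mel spectrogram."""
    y, _ = librosa.load(path, sr=sr)                               # mono waveform
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)  # power mel bins
    return librosa.power_to_db(S, ref=np.max)                      # (n_mels, frames)
```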
3

Limited angle reconstruction for 2D CT based on machine learning

Oldgren, Eric, Salomonsson, Knut January 2023 (has links)
The aim of this report is to study how machine learning can be used to reconstruct two-dimensional computed tomography (CT) images from limited-angle data. This could be useful in a variety of applications where either the space or the time available for the CT scan limits the acquired data. In this study, three different types of models are considered. The first model uses filtered back projection (FBP) with a single learned filter, while the second uses a combination of multiple FBPs with learned filters. The last model instead uses an FNO (Fourier Neural Operator) layer to both inpaint and filter the limited-angle data, followed by a backprojection layer. The quality of the reconstructions is assessed both visually and statistically, using the PSNR and SSIM measures. The results of this study show that while an FBP-based model using one or more trainable filters can achieve better reconstructions than one using an analytical Ram-Lak filter, its reconstructions still fail for small angle spans. Better results in the limited-angle case can be achieved using the FNO-based model.
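A minimal sketch of the learned-filter idea, assuming PyTorch; the backprojection layer and the FNO variant from the report are omitted, and initializing at a Ram-Lak-style ramp is an illustrative choice.

```python
# A minimal sketch: an FBP-style filter whose 1-D frequency response is a
# trainable parameter instead of the analytical Ram-Lak ramp.
import torch

class LearnedFilter(torch.nn.Module):
    def __init__(self, n_detectors):
        super().__init__()
        n_freq = n_detectors // 2 + 1
        # Start from a Ram-Lak-like ramp |f| and let training reshape it.
        self.response = torch.nn.Parameter(torch.linspace(0.0, 1.0, n_freq))

    def forward(self, sinogram):
        # sinogram: (n_angles, n_detectors); filter each projection row.
        spectrum = torch.fft.rfft(sinogram, dim=-1)
        filtered = spectrum * self.response  # broadcast over angles
        return torch.fft.irfft(filtered, n=sinogram.shape[-1], dim=-1)
```

Chained with a differentiable backprojection layer, such a filter can be trained end-to-end against reference reconstructions, roughly the setup of the first two models in the report.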
4

Using Neural Networks with Limited Data to Estimate Manufacturing Cost

Dowler, John D. 29 July 2008 (has links)
No description available.
5

Resource- and Time-Constrained Control Synthesis for Multi-Agent Systems

Yu, Pian January 2018 (has links)
Multi-agent systems are employed for a group of agents to achieve coordinated tasks, in which distributed sensing, computing, communication and control are usually integrated with shared resources. Efficient usage of these resources is therefore an important issue. In addition, in applications such as robotics, a group of agents may encounter a sequence of task requests, and a deadline constraint on the completion of each task is a common requirement. Thus, the integration of multi-agent task scheduling and control synthesis is of great practical interest. In this thesis, we study control of multi-agent systems under a networked control system framework. The first purpose is to design resource-efficient communication and control strategies that solve the consensus problem for multi-agent systems. The second purpose is to jointly schedule the task sequence and design controllers for multi-agent systems that are subject to a sequence of deadline-constrained tasks. In the first part, a distributed asynchronous event-triggered communication and control strategy is proposed to tackle multi-agent consensus. It is shown that the proposed strategy reduces both the sensor-to-controller and the controller-to-actuator communication rates while excluding Zeno behavior. To further relax the requirement of continuous sensing and computing, a periodic event-triggered communication and control strategy is proposed in the second part. In addition, an observer-based encoder-decoder with a finite-level quantizer is designed to deal with the constraint of limited data rate. An explicit formula for the maximum allowable sampling period is derived first. Then, it is proven that exponential consensus can be achieved in the presence of the data-rate constraint. Finally, in the third part, the problem of deadline-constrained multi-agent task scheduling and control synthesis is addressed. A dynamic scheduling strategy is proposed, and a distributed hybrid control law is designed for each agent that guarantees the completion and deadline satisfaction of each task. The effectiveness of the theoretical results in the thesis is verified by several simulation examples.
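A minimal sketch of event-triggered consensus for single-integrator agents; the static threshold rule below is a common textbook trigger, not the asynchronous strategy developed in the thesis.

```python
# A minimal sketch: three single-integrator agents reach (approximate)
# consensus while rebroadcasting their states only when a trigger fires.
import numpy as np

A = np.array([[0, 1, 0],        # adjacency of a line graph: 1 - 2 - 3
              [1, 0, 1],
              [0, 1, 0]])
L = np.diag(A.sum(axis=1)) - A  # graph Laplacian

x = np.array([1.0, 5.0, 9.0])   # true states
x_hat = x.copy()                # last broadcast states
dt, threshold = 0.01, 0.05

for _ in range(2000):
    # Event trigger: rebroadcast only when the local error grows too large.
    stale = np.abs(x - x_hat) > threshold
    x_hat[stale] = x[stale]
    # Consensus protocol driven by broadcast values only: u = -L @ x_hat.
    x = x - dt * (L @ x_hat)

print(x)  # states close to agreement, without continuous communication
```

The broadcast count (how often `stale` fires) is the resource being saved; the thesis's strategies additionally rule out Zeno behavior, which this sketch does not analyze.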
6

Deep Learning Based User Models for Interactive Optimization of Watershed Designs

Andrew Paul Hoblitzell (8086769) 11 December 2019 (has links)
This dissertation combines stakeholder and analytical intelligence for consensus decision-making via an interactive optimization process. It outlines techniques for developing user models of the subjective criteria of human stakeholders for an environmental decision support system called WRESTORE, compares several user modeling techniques, and develops methods for incorporating such user models selectively into interactive optimization, combining multiple objective and subjective criteria.

This dissertation describes additional functionality for our watershed planning system, called WRESTORE (Watershed REstoration Using Spatio-Temporal Optimization of REsources) (http://wrestore.iupui.edu). Techniques for performing the interactive optimization process in the presence of limited data are described. This work adds a user modeling component that develops a computational model of a stakeholder’s preferences and then integrates the user model component into the decision support system.

Our system is one of many decision support systems and is dependent upon stakeholder interaction. The user modeling component within the system utilizes deep learning, which can be challenging with limited data. Our work integrates user models with limited data with application-specific techniques to address some of these challenges. The dissertation describes steps for implementing accurate virtual stakeholder models based on limited training data.

Another method for dealing with limited data, based upon computing training data uncertainty, is also presented in this dissertation. Results presented show more stable convergence in fewer iterations when using an uncertainty-based incremental sampling method than when using stability-based sampling or random sampling; a sketch of this idea follows below.

The dissertation also discusses non-stationary reinforcement-based feature selection for the interactive optimization component of our system. The presented results indicate that the proposed feature selection approach can effectively mitigate superfluous and adversarial dimensions which, if left untreated, can lead to degradation in both computational performance and interactive optimization performance against analytically determined environmental fitness functions.

The contribution of this dissertation lays the foundation for developing a framework for multi-stakeholder consensus decision-making in the presence of limited data.
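A minimal sketch of uncertainty-based incremental sampling, assuming scikit-learn; a bootstrap-style tree ensemble stands in for the dissertation's deep user models, and the pool of candidate designs is synthetic.

```python
# A minimal sketch: at each round, query the candidate point the current model
# is most uncertain about (largest spread across ensemble members).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_pool = rng.uniform(-3, 3, size=(500, 2))                  # candidate designs
y_pool = np.sin(X_pool[:, 0]) + 0.1 * rng.normal(size=500)  # stand-in ratings

labeled = list(rng.choice(500, size=10, replace=False))     # initial queries
for _ in range(5):
    model = RandomForestRegressor(n_estimators=50, random_state=0)
    model.fit(X_pool[labeled], y_pool[labeled])
    # Spread of per-tree predictions approximates predictive uncertainty.
    per_tree = np.stack([t.predict(X_pool) for t in model.estimators_])
    uncertainty = per_tree.std(axis=0)
    uncertainty[labeled] = -np.inf                # never re-query known points
    labeled.append(int(np.argmax(uncertainty)))  # query most uncertain design
```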
7

Investigating the Correlation Between Marketing Emails and Receivers Using Unsupervised Machine Learning on Limited Data: A comprehensive study using state-of-the-art methods for text clustering and natural language processing / Undersökning av samband mellan marknadsföringsemail och dess mottagare med hjälp av oövervakad maskininlärning på begränsad data

Pettersson, Christoffer January 2016 (has links)
The goal of this project is to investigate any correlation between marketing emails and their receivers using machine learning and only a limited amount of initial data. The data consists of roughly 1,200 emails and 98,000 receivers of these. Initially, the emails are grouped together based on their content using text clustering. They contain no information regarding prior labeling or categorization, which creates a need for an unsupervised learning approach using solely the raw text-based content as data. The project investigates state-of-the-art concepts like bag-of-words for calculating term importance and the gap statistic for determining an optimal number of clusters. The data is vectorized using term frequency-inverse document frequency to determine the importance of terms relative to the document and to all documents combined. An inherent problem of this approach is high dimensionality, which is reduced using latent semantic analysis in conjunction with singular value decomposition. Once the resulting clusters have been obtained, the most frequently occurring terms for each cluster are analyzed and compared. Due to the absence of initial labeling, an alternative approach is required to evaluate the clusters' validity. To do this, the receivers of all emails in each cluster who actively opened an email are collected and investigated. Each receiver has different attributes regarding their purpose of using the service and some personal information. Once gathered and analyzed, conclusions could be drawn that it is possible to find distinguishable connections between the resulting email clusters and their receivers, but only to a limited extent. The receivers from the same cluster did show attributes similar to each other and distinguishable from the receivers of other clusters. Hence, the resulting email clusters and their receivers are specific enough to distinguish themselves from each other but too general to handle more detailed information. With more data, this could become a useful tool for determining which users of a service should receive a particular email to increase the conversion rate and thereby reach out to more relevant people based on previous trends.
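A minimal sketch of the clustering pipeline described above, assuming scikit-learn; the gap statistic for choosing the number of clusters is not implemented, and the toy emails are invented.

```python
# A minimal sketch: TF-IDF vectorization, dimensionality reduction via LSA
# (truncated SVD), then k-means clustering of the emails.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

emails = [
    "spring sale on running shoes",
    "your invoice is ready for download",
    "new running shoes now in stock",
    "reminder: invoice payment is due",
]

tfidf = TfidfVectorizer().fit_transform(emails)          # term importance
lsa = TruncatedSVD(n_components=2).fit_transform(tfidf)  # reduce dimensionality
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(lsa)
print(labels)  # e.g. shoe emails in one cluster, invoice emails in the other
```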
