191

Filter Based Stabilization Methods for Reduced Order Models of Convection-Dominated Systems

Moore, Ian Robert 15 May 2023 (has links)
In this thesis, I examine filtering-based stabilization methods to design new regularized reduced order models (ROMs) for under-resolved simulations of unsteady, nonlinear, convection-dominated systems. The new ROMs proposed are variable delta filtering applied to the evolve-filter-relax ROM (V-EFR ROM), variable delta filtering applied to the Leray ROM, and the approximate deconvolution Leray ROM (ADL-ROM). They are tested in the numerical setting of the Burgers equation, a nonlinear, time-dependent problem in one spatial dimension. Regularization is considered for the low-viscosity, convection-dominated setting. / Master of Science / Numerical solutions of partial differential equations may not be efficiently computable in a way that fully captures the true behavior of the underlying model or differential equation, especially if significant changes in the solution occur over a very small spatial region. In this case, non-physical numerical artifacts may appear in the computed solution. We discuss methods of treating these calculations with the goal of improving the fidelity of numerical solutions with respect to the original model.
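The evolve-filter-relax strategy underlying the V-EFR ROM is simple to state. Below is a minimal sketch, not the thesis's code: `evolve` is a generic single-step ROM solver and `filter_matrix` a precomputed discrete spatial filter, both hypothetical stand-ins for the thesis's differential filters.

```python
def efr_step(u, evolve, filter_matrix, chi):
    """One evolve-filter-relax (EFR) step on the ROM coefficient vector u.
    chi in [0, 1] is the relaxation parameter blending the filtered and
    unfiltered states (chi = 0 recovers the unregularized ROM)."""
    w = evolve(u)                         # evolve: unregularized ROM step
    w_bar = filter_matrix @ w             # filter: smooth the intermediate state
    return (1.0 - chi) * w + chi * w_bar  # relax: convex combination
```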
192

Exploring the Landscape of Big Data Analytics Through Domain-Aware Algorithm Design

Dash, Sajal 20 August 2020 (has links)
Experimental and observational data emerging from various scientific domains necessitate fast, accurate, and low-cost analysis. While exploring the landscape of big data analytics, multiple challenges arise from three characteristics of big data: volume, variety, and velocity. High volume and velocity warrant a large amount of storage, memory, and compute power, while a large variety of data demands cognition across domains. Addressing domain-intrinsic properties of data can help us analyze the data efficiently through the frugal use of high-performance computing (HPC) resources. In this thesis, we present our exploration of the data analytics landscape with domain-aware approximate and incremental algorithm design. We propose three guidelines targeting the three properties of big data: (1) explore geometric and domain-specific properties of high-dimensional data for succinct representation, which addresses the volume property; (2) design domain-aware algorithms by mapping domain problems to computational problems, which addresses the variety property; and (3) leverage the incremental arrival of data through incremental analysis and problem-specific merging methodologies, which addresses the velocity property. We demonstrate these three guidelines through the solution approaches of three representative domain problems. We present Claret, a fast and portable parallel weighted multi-dimensional scaling (WMDS) tool, to demonstrate the application of the first guideline. It combines algorithmic concepts extended from stochastic force-based multi-dimensional scaling (SF-MDS) and Glimmer. Claret computes approximate weighted Euclidean distances by combining a novel data mapping called stretching with the Johnson–Lindenstrauss lemma, reducing the complexity of WMDS from O(f(n)d) to O(f(n) log d). In demonstrating the second guideline, we map the problem of identifying multi-hit combinations of genetic mutations responsible for cancers to the weighted set cover (WSC) problem by leveraging the semantics of cancer genomic data obtained from cancer biology. Solving the mapped WSC with an approximate algorithm, we identified a set of multi-hit combinations that differentiate between tumor and normal tissue samples. To identify three- and four-hit combinations, which require orders of magnitude more computational power, we scaled out the WSC algorithm on a hundred nodes of the Summit supercomputer. In demonstrating the third guideline, we developed a tool, iBLAST, to perform incremental sequence similarity search. Developing new statistics to combine search results over time makes incremental analysis feasible. iBLAST performs (1+δ)/δ times faster than NCBI BLAST, where δ represents the fraction of database growth. We also explored various approaches to mitigate catastrophic forgetting in the incremental training of deep learning models. / Doctor of Philosophy / Experimental and observational data emerging from various scientific domains necessitate fast, accurate, and low-cost analysis. While exploring the landscape of big data analytics, multiple challenges arise from three characteristics of big data: volume, variety, and velocity. Here volume represents the data's size, variety represents the various sources and formats of the data, and velocity represents the data arrival rate. High volume and velocity warrant a large amount of storage, memory, and computational power, while a large variety of data demands cognition across domains. Addressing domain-intrinsic properties of data can help us analyze the data efficiently through the frugal use of high-performance computing (HPC) resources. This thesis presents our exploration of the data analytics landscape with domain-aware approximate and incremental algorithm design. We propose three guidelines targeting the three properties of big data: (1) explore geometric (pairwise-distance and distribution-related) and domain-specific properties of high-dimensional data for succinct representation, which addresses the volume property; (2) design domain-aware algorithms by mapping domain problems to computational problems, which addresses the variety property; and (3) leverage incremental data arrival through incremental analysis and problem-specific merging methodologies, which addresses the velocity property. We demonstrate these three guidelines through the solution approaches of three representative domain problems. We demonstrate the application of the first guideline through the design and development of Claret, a fast and portable parallel weighted multi-dimensional scaling (WMDS) tool that reduces the dimension of high-dimensional data points. In demonstrating the second guideline, we identify combinations of cancer-causing gene mutations by mapping the problem to a well-known computational problem, the weighted set cover (WSC) problem. We scaled out the WSC algorithm on a hundred nodes of the Summit supercomputer to solve the problem in less than two hours instead of an estimated hundred years. In demonstrating the third guideline, we developed a tool, iBLAST, to perform incremental sequence similarity search. This analysis was made possible by developing new statistics to combine search results over time. We also explored various approaches to mitigate catastrophic forgetting in deep learning models, where a model forgets how to perform machine learning tasks on older data in a streaming setting.
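The Johnson–Lindenstrauss step is easy to illustrate: pairwise distances among n points approximately survive a random projection to roughly log(n)/ε² dimensions. A minimal sketch of such a projection follows; Claret additionally combines it with its stretching map, which is not reproduced here.

```python
import numpy as np

def jl_project(X, target_dim, seed=0):
    """Random projection per the Johnson-Lindenstrauss lemma: pairwise Euclidean
    distances among the rows of X are preserved within a (1 +/- eps) factor with
    high probability once target_dim is on the order of log(n) / eps**2."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    R = rng.normal(size=(d, target_dim)) / np.sqrt(target_dim)
    return X @ R
```

The iBLAST speedup formula is likewise easy to sanity-check: with δ = 0.1 (the database grew by 10% since the last search), (1 + δ)/δ = 11, i.e., an elevenfold speedup over recomputing the search from scratch.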
193

Some Advances in Local Approximate Gaussian Processes

Sun, Furong 03 October 2019 (has links)
Nowadays, the Gaussian process (GP) is recognized as an indispensable statistical tool in computer experiments. Due to its computational complexity and storage demands, its application to real-world problems, especially in "big data" settings, is quite limited. Among the many strategies for tailoring GPs to such settings, Gramacy and Apley (2015) proposed the local approximate GP (laGP), which constructs approximate predictive equations from small local designs built around the predictive location under a chosen criterion. In this dissertation, several methodological extensions of laGP are proposed. One contribution is multilevel global/local modeling, which deploys global hyperparameter estimates to perform local prediction. The second contribution extends the laGP notion of "locale" from a single point to a set of predictive locations along paths in the input space. These two contributions are applied to satellite drag emulation, as illustrated in Chapter 3. Furthermore, the multilevel GP modeling strategy, combined with inverse-variance weighting, is also applied to synthesize field data and computer model outputs of solar irradiance across the continental United States, as detailed in Chapter 4. Last but not least, in Chapter 5, laGP's performance is tested on emulating daytime land surface temperatures estimated via satellites at irregularly gridded locations. / Doctor of Philosophy / In many real-life settings, we want to understand a physical relationship or phenomenon. Due to limited resources and/or ethical constraints, it is often impossible to perform physical experiments to collect data, and we must therefore rely on computer experiments, whose evaluation usually requires expensive simulation involving complex mathematical equations. To reduce the computational effort, we look for a relatively cheap alternative, called an emulator, to serve as a surrogate model. The Gaussian process (GP) is such an emulator and has been very popular due to its excellent out-of-sample predictive performance and appropriate uncertainty quantification. However, due to its computational complexity, full GP modeling is not suitable for "big data" settings. Gramacy and Apley (2015) proposed the local approximate GP (laGP), the core idea of which is to use a subset of the data for inference and prediction at unobserved inputs. This dissertation provides several extensions of laGP, which are applied to several real-life "big data" settings. The first application, detailed in Chapter 3, is emulating satellite drag from large simulation experiments. A strategy is developed to capture global input information comprehensively using a small subset of the data, after which local prediction is performed. This method, called "multilevel GP modeling," is also deployed to synthesize field measurements and computational outputs of solar irradiance across the continental United States, as illustrated in Chapter 4, and to emulate daytime land surface temperatures estimated by satellites, as discussed in Chapter 5.
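Gramacy and Apley's laGP grows each local design greedily under a criterion such as ALC or MSPE; the simpler nearest-neighbor variant conveys the core idea. Below is a minimal sketch, with a hypothetical squared-exponential kernel and fixed hyperparameters rather than the estimated ones the method actually uses.

```python
import numpy as np

def sq_exp_kernel(A, B, lengthscale=1.0):
    """Squared-exponential covariance between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * lengthscale ** 2))

def lagp_predict(X, y, x_star, n_local=50, lengthscale=1.0, nugget=1e-6):
    """Predictive mean and variance at x_star using only the n_local nearest
    design points -- cost is O(n_local^3) regardless of the full data size."""
    n_local = min(n_local, len(X))
    # Local design: nearest neighbors of the predictive location.
    idx = np.argsort(((X - x_star) ** 2).sum(axis=1))[:n_local]
    Xl, yl = X[idx], y[idx]
    # Ordinary GP predictive equations on the small local design.
    K = sq_exp_kernel(Xl, Xl, lengthscale) + nugget * np.eye(n_local)
    k_star = sq_exp_kernel(x_star[None, :], Xl, lengthscale).ravel()
    alpha = np.linalg.solve(K, yl)
    mean = k_star @ alpha
    var = 1.0 + nugget - k_star @ np.linalg.solve(K, k_star)
    return mean, var
```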
194

Nutrient cycling by the herbivorous insect Chrysomela tremulae : Nutrient content in leaves and frass and measurements of ingestion, egestion and exuviation rates

Andersson, Sara January 2016 (has links)
Insect herbivory on forest canopies strongly affects ecosystem nutrient cycling and availability in a variety of ways, e.g. by changing the quantity, quality and timing of nutrient input to forest soils. A method for measuring the ingestion, egestion and exuviation rates of the insect Chrysomela tremulae feeding on leaves of the hybrid Populus tremula x tremuloides was tested in this study, with the aim of detecting differences in relative nutrient cycling efficiencies. The assimilation efficiency (AD), efficiency of conversion of digested food (ECD) and efficiency of conversion of ingested food (ECI) increased from the 1st through the 2nd and 3rd larval instars, with generally higher efficiencies for nitrogen than for carbon (the standard definitions of these indices are given below). The effect of nutrient limitation on the insect was also tested by increasing the C:N ratio of its diet. A carbohydrate solution was painted onto the leaves, which resulted in a significant increase in C:N (p < 0.0001). This led to a trend of lengthened developmental time for each ontogenetic stage, as well as a higher ingestion rate and lower egestion and exuviation rates. However, a different method of increasing the C:N ratio is recommended for future experiments, since the leaves never truly absorbed the solution.
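For reference, these nutritional indices are conventionally defined (Waldbauer's nutritional indices; the thesis may compute them per element, e.g. separately for C and N) in terms of the mass ingested I, the mass egested as frass F, and the biomass gained B:

```latex
\mathrm{AD} = \frac{I - F}{I}, \qquad
\mathrm{ECD} = \frac{B}{I - F}, \qquad
\mathrm{ECI} = \frac{B}{I} = \mathrm{AD} \cdot \mathrm{ECD}
```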
195

Towards Extending the Description Logic FL0 with Threshold Concepts Using Weighted Tree Automata

Fernández Gil, Oliver, Marantidis, Pavlos 02 September 2024 (has links)
We introduce an extension of the Description Logic FL0 that allows us to define concepts in an approximate way. More precisely, we extend FL0 with a threshold concept constructor of the form C⋈t, for ⋈ ∈ {≤, <, ≥, >}, whose semantics is given by a membership distance function (mdf). A membership distance function m assigns to each domain element and concept a distance value expressing how "close" the element is to being an instance of the concept. Based on this, a threshold concept C⋈t is interpreted as the set of all domain elements whose distance s from C satisfies s ⋈ t. We provide a framework for obtaining membership distance functions based on functions that compare tuples of languages, and we show how weighted looping tree automata over a semiring can be used to define membership distance functions for FL0 concepts. / This is an extended version of an article accepted at the 36th International Workshop on Description Logics (DL 2023).
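The semantics is straightforward to operationalize over a finite domain; below is a minimal sketch, with the membership distance function `m` as a hypothetical caller-supplied function rather than one derived from the paper's automata construction.

```python
import operator

OPS = {"<=": operator.le, "<": operator.lt, ">=": operator.ge, ">": operator.gt}

def threshold_concept(domain, m, concept, op, t):
    """Interpret the threshold concept C (op) t: the set of domain elements d
    whose membership distance m(d, C) satisfies the comparison with t."""
    return {d for d in domain if OPS[op](m(d, concept), t)}
```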
196

Summary Statistic Selection with Reinforcement Learning

Barkino, Iliam January 2019 (has links)
Multi-armed bandit (MAB) algorithms could be used to select a subset of the k most informative summary statistics from a pool of m candidate summary statistics by reformulating the subset selection problem as a MAB problem. This is suggested by experiments that tested five MAB algorithms (Direct, Halving, SAR, OCBA-m, and Racing) on the reformulated problem and compared the results to two established subset selection algorithms (Minimizing Entropy and Approximate Sufficiency). The MAB algorithms yielded errors on par with the established methods, but in only a fraction of the time. Establishing MAB algorithms as a new standard for summary-statistic subset selection could therefore save numerous scientists substantial amounts of time when selecting summary statistics for approximate Bayesian computation.
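To convey the flavor of these algorithms, here is a minimal sketch of successive halving, one of the five tested; the `noisy_score` evaluator is a hypothetical stand-in for the error criterion used in the experiments.

```python
import numpy as np

def successive_halving(candidates, noisy_score, budget0=10):
    """Successive halving: evaluate every surviving candidate with the current
    budget, keep the best half, double the budget, repeat until one remains.
    `noisy_score(c, n)` averages n noisy evaluations of candidate c (assumed)."""
    pool, budget = list(candidates), budget0
    while len(pool) > 1:
        scores = [noisy_score(c, budget) for c in pool]
        best_first = np.argsort(scores)[::-1]  # higher score = better
        pool = [pool[i] for i in best_first[: max(1, len(pool) // 2)]]
        budget *= 2
    return pool[0]
```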
197

Automated Multi-Objective Parallel Evolutionary Circuit Design and Approximation

Hrbáček, Radek Unknown Date (has links)
Power consumption and energy efficiency are becoming among the most important parameters in computer system design, mainly because of the limited power budget of battery-powered devices and the very high energy consumption of growing data centers and cloud infrastructure. At the same time, users are willing to tolerate inaccurate or erroneous computations to some extent in a growing number of applications, thanks to the imperfections of human senses, the statistical nature of many computations, noise in input data, and so on. Approximate computing, an emerging research area in computer engineering, exploits relaxed functionality requirements to increase the efficiency of computer systems in terms of energy consumption, performance, or complexity. Error-tolerant applications can be implemented more efficiently and still serve their purpose with the same or only slightly reduced quality. Although new methods for designing approximate computing systems are emerging, there is still a lack of automated design methods that offer a large set of trade-off solutions to a given problem. Moreover, conventional methods often produce solutions that are far from optimal. Evolutionary algorithms deliver innovative solutions to complex optimization and design problems, but they suffer from several drawbacks, such as poor scalability and the high number of generations required to reach competitive results. Multi-objective design is particularly relevant for approximate computing, yet most existing methods do not support it. This thesis introduces a new automated multi-objective parallel evolutionary algorithm for the design and approximation of digital circuits. The method is based on Cartesian genetic programming; to improve scalability, a new highly parallelized implementation was developed. The multi-objective design is based on the principles of the NSGA-II algorithm. The performance of the implementation was evaluated on several different tasks, namely the design of (approximate) arithmetic circuits, Boolean functions with high nonlinearity, and approximate logic circuits for triple modular redundancy. Significant improvements over state-of-the-art methods were achieved in these tasks.
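The multi-objective machinery borrowed from NSGA-II rests on Pareto dominance; a minimal sketch of extracting the non-dominated front of a circuit population (with objectives such as error, area, and power, all to be minimized) follows.

```python
def pareto_front(solutions):
    """Return the non-dominated subset of a population. Each solution is a
    tuple of objectives to be minimized. This dominance test is the building
    block of NSGA-II's non-dominated sorting."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [s for s in solutions if not any(dominates(o, s) for o in solutions if o != s)]
```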
198

Approximate Implementation of Arithmetic Operations in Image Filters

Válek, Matěj January 2021 (has links)
This master's thesis deals with the approximate implementation of arithmetic operations in image filters, specifically with the use of approximation techniques to modify multiplication in a non-trivial image filter. Several techniques are employed, such as converting floating-point multiplication to fixed-point multiplication, and using evolutionary algorithms, in particular Cartesian genetic programming, to create new approximate multipliers that exhibit an acceptable error while reducing the computational cost of filtering. The results are evolutionarily designed approximate multipliers that take the data distribution in the image filter into account, their deployment in the image filter, and a comparison of the original filter with the approximated filter on a set of color images.
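The first technique mentioned, replacing floating-point multiplication with fixed-point multiplication, is easy to illustrate; a minimal sketch with an 8-fractional-bit format (chosen arbitrarily here):

```python
def to_fixed(x, frac_bits=8):
    """Quantize a float to a signed fixed-point integer with frac_bits fractional bits."""
    return int(round(x * (1 << frac_bits)))

def fixed_mul(a, b, frac_bits=8):
    """Multiply two fixed-point values; the raw product carries 2*frac_bits
    fractional bits, so shift back down to frac_bits."""
    return (a * b) >> frac_bits

# Example: 1.5 * 2.25 = 3.375, recovered within quantization error.
a, b = to_fixed(1.5), to_fixed(2.25)
print(fixed_mul(a, b) / (1 << 8))  # -> 3.375
```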
199

Deep Learning based Approximate Message Passing for MIMO Detection in 5G : Low complexity deep learning algorithms for solving MIMO Detection in real world scenarios

Pozzoli, Andrea January 2022 (has links)
The Fifth Generation (5G) mobile communication system is the latest technology in wireless communications. It brings several advantages, in particular by using multiple receiver antennas that serve multiple transmitters. This configuration, used in 5G, is called massive Multiple Input Multiple Output (MIMO), and it increases link reliability and information throughput. However, MIMO systems face two challenges at the link layer: channel estimation and MIMO detection. This work focuses only on the MIMO detection problem: retrieving the original messages sent by the transmitters when the received signal is noisy. The optimal technique for solving the problem is Maximum Likelihood (ML) detection, but it does not scale and therefore cannot be used with massive MIMO systems. Several sub-optimal techniques have been tested over the years to solve the MIMO detection problem while balancing the complexity-performance trade-off. In recent years, Approximate Message Passing (AMP) based techniques have brought interesting results. Moreover, deep learning (DL) is spreading across many different fields; applied to MIMO detection, it has been tested with promising results. A neural network called MMNet brought the most interesting results, but new techniques have since been developed and, despite being promising, have not been compared with MMNet. In this thesis, two new AMP- and DL-based techniques, called Orthogonal AMP Network Second (OAMP-Net2) and Learnable Vector AMP (LVAMP), are tested and compared with the state of the art. The aim of the thesis is to discover whether one or both techniques can provide better results than MMNet, in order to identify a valid alternative solution to the MIMO detection problem. OAMP-Net2 and LVAMP were developed and tested on different channel models (i.i.d. Gaussian and Kronecker) and on MIMO systems of different sizes (small and medium-large). OAMP-Net2 proved to be a consistent technique for solving the MIMO detection problem, providing interesting results on both i.i.d. Gaussian and Kronecker channel models and with matrices of different sizes. Moreover, OAMP-Net2 has good adaptability: it provides good results on Kronecker channel models even when trained with i.i.d. Gaussian matrices. LVAMP, instead, has performance similar to MMSE but with lower complexity, and it adapts well to complex channels, as OAMP-Net2 does.
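For context, the linear MMSE baseline mentioned above has a one-line closed form; a minimal sketch under an assumed y = Hx + n signal model with unit-power symbols and noise variance σ²:

```python
import numpy as np

def mmse_detect(y, H, sigma2):
    """Linear MMSE estimate of x from y = H x + n, n ~ CN(0, sigma2 I) --
    the classical baseline that AMP-style detectors aim to beat at lower
    complexity."""
    Nt = H.shape[1]
    G = H.conj().T @ H + sigma2 * np.eye(Nt)
    return np.linalg.solve(G, H.conj().T @ y)
```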
200

Calibration of Breast Cancer Natural History Models Using Approximate Bayesian Computation

Bergqvist, Oscar January 2020 (has links)
Natural history models for breast cancer describe the unobservable disease progression. These models can either be fitted to data on individual tumour characteristics using likelihood-based estimation, or calibrated to fit statistics at the population level. Likelihood-based inference using individual-level data has the advantage of ensuring model parameter identifiability. However, the likelihood function can be computationally heavy to evaluate, or even intractable. In this thesis, likelihood-free estimation using Approximate Bayesian Computation (ABC) is explored. The main objective is to investigate whether ABC can be used to fit models to data collected in the presence of mammography screening. As background, a literature review of ABC is provided. As a first step, an ABC-MCMC algorithm is constructed for two simple models, both describing populations in the absence of mammography screening but assuming different functional forms of tumour growth. The algorithm is evaluated for these models in a simulation study using synthetic data and compared with results obtained using likelihood-based inference. It is then investigated whether ABC can be used for models in the presence of screening. The findings of this thesis indicate that ABC is not directly applicable to these models. However, by including a sub-model for tumour onset and assuming that all individuals in the population have the same screening attendance, it was possible to develop an ABC-MCMC algorithm that carefully takes individual-level data into consideration in the estimation procedure. Finally, the algorithm was tested in a simple simulation study using synthetic data. Future research is still needed to evaluate the statistical properties of the algorithm (using extended simulation) and to test it on observational data where previous estimates are available for reference.
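For orientation, the skeleton of an ABC-MCMC sampler is short. Below is a minimal sketch under simplifying assumptions (a symmetric random-walk proposal and a hard ε-acceptance kernel), not the thesis's algorithm, which additionally handles individual-level screening data; `propose`, `simulate`, and `stats` are hypothetical caller-supplied functions.

```python
import numpy as np

def abc_mcmc(s_obs, prior_logpdf, propose, simulate, stats, eps, theta0, n_iter=5000):
    """Likelihood-free MCMC in the style of Marjoram et al.: a proposal is
    considered only if its simulated summary statistics fall within eps of the
    observed ones; acceptance then reduces to the prior ratio (valid for a
    symmetric proposal)."""
    theta, chain = theta0, [theta0]
    for _ in range(n_iter):
        cand = propose(theta)
        if np.linalg.norm(stats(simulate(cand)) - s_obs) <= eps:
            if np.log(np.random.rand()) < prior_logpdf(cand) - prior_logpdf(theta):
                theta = cand
        chain.append(theta)
    return np.array(chain)
```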
