271

Detekce a rozměření elektronového svazku v obrazech z TEM / Detection and measurement of electron beam in TEM images

Polcer, Simon January 2020 (has links)
This diploma thesis deals with automatic detection and measurement of the electron beam in images from a transmission electron microscope (TEM). The introduction describes the construction and main parts of the electron microscope. The theoretical part summarizes the modes of illumination observed on the fluorescent screen. Machine learning, specifically the convolutional neural network U-Net, is used for automatic detection of the electron beam in the image. The measurement of the beam is based on an ellipse approximation, which defines the size and dimensions of the beam. Training the neural network requires an extensive database of images. For this purpose, a custom augmentation approach is proposed that applies a specific combination of geometric transformations for each mode of illumination. In the conclusion of the thesis, the results are evaluated and summarized. The proposed algorithm achieves a Dice coefficient of 0.815, which describes the overlap between two sets. The algorithm was implemented in the Python programming language.
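As a rough illustration of the measurement step described above (not the author's actual code), the sketch below fits an ellipse to a binary beam mask, such as one predicted by U-Net, and computes the Dice coefficient against a reference mask; the use of OpenCV's fitEllipse and the array shapes are assumptions.

```python
import numpy as np
import cv2


def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0


def measure_beam(mask: np.ndarray):
    """Approximate the beam in a binary mask by an ellipse.

    Returns the ellipse centre (x, y), axis lengths, and rotation angle.
    The largest connected region is assumed to be the beam and must
    contain at least 5 contour points for cv2.fitEllipse.
    """
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    beam = max(contours, key=cv2.contourArea)  # largest connected region
    (cx, cy), (minor_axis, major_axis), angle = cv2.fitEllipse(beam)
    return (cx, cy), (minor_axis, major_axis), angle
```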
272

Relationship between determinants of arterial stiffness assessed by diastolic and suprasystolic pulse oscillometry: comparison of Vicorder and Vascular Explorer

Teren, Andrej, Beutner, Frank, Wirkner, Kerstin, Löffler, Markus, Scholz, Markus January 2016 (has links)
Pulse wave velocity (PWV) and augmentation index (AI) are independent predictors of cardiovascular health. However, the comparability of the multiple oscillometric modalities currently available for their assessment has not been studied in detail. In the present study, we aimed to evaluate the relationship between indices of arterial stiffness assessed by diastolic and suprasystolic oscillometry. In total, 56 volunteers from the general population (23 males; median age 70 years [interquartile range: 65–72 years]) were recruited into an observational feasibility study to evaluate the carotid-femoral/aortic PWV (cf/aoPWV), brachial-ankle PWV (baPWV), and AI assessed by 2 devices: Vicorder (VI), applying diastolic, right-sided oscillometry for the determination of all 3 indices, and Vascular Explorer (VE), implementing single-point, suprasystolic brachial oscillometry (SSBO) pulse wave analysis for the assessment of cfPWV and AI. Within- and between-device correlations of the measured parameters were analyzed. Furthermore, agreement of repeated measurements and intra- and inter-observer concordances were determined and compared for both devices. In VI, baPWV and cfPWV inter-correlated well and showed a good level of agreement with bilateral baPWV measured by VE (baPWV[VI]–baPWV[VE]R: overall concordance correlation coefficient [OCCC] = 0.484, mean difference = 1.94 m/s; cfPWV[VI]–baPWV[VE]R: OCCC = 0.493, mean difference = 1.0 m/s). In contrast, SSBO-derived aortic PWV (cf/aoPWV[VE]) displayed only weak correlation with cfPWV(VI) (r = 0.196; P = 0.04) and ipsilateral baPWV (cf/aoPWV[VE]R–baPWV[VE]R: r = 0.166; P = 0.08). cf/aoPWV(VE) correlated strongly with AI(VE) (right-sided: r = 0.725, P < 0.001). AI exhibited marginal between-device agreement (right-sided: OCCC = 0.298, mean difference: 6.12%). All considered parameters showed good-to-excellent repeatability, giving OCCC > 0.9 for 2-point PWV modes and right-sided AI(VE). Intra- and inter-observer concordances were similarly high, except for AI, which yielded a trend toward better reproducibility in VE (interobserver OCCC[VI] vs [VE] = 0.774 vs 0.844; intraobserver OCCC[VI] vs [VE] = 0.613 vs 0.769). Both diastolic oscillometry-derived PWV modes, and AI measured either with VI or VE, are comparable and reliable alternatives for the assessment of arterial stiffness. Aortic PWV assessed by SSBO in VE is not related to the corresponding indices determined by traditional diastolic oscillometry.
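For readers unfamiliar with the agreement statistic quoted above, the sketch below computes Lin's concordance correlation coefficient for two series of paired measurements; the OCCC reported in the abstract is a generalization of this statistic to repeated measurements. The function name and the example readings are illustrative only.

```python
import numpy as np


def concordance_correlation(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient between paired measurements.

    CCC = 2 * cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    covariance = np.cov(x, y, bias=True)[0, 1]  # population covariance
    return 2 * covariance / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)


# Illustrative use with made-up PWV readings (m/s) from two devices:
pwv_device_a = np.array([8.1, 9.4, 10.2, 7.8, 11.0])
pwv_device_b = np.array([8.5, 9.0, 10.9, 8.2, 10.5])
print(concordance_correlation(pwv_device_a, pwv_device_b))
```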
273

Lithium’s Emerging Role in the Treatment of Refractory Major Depressive Episodes: Augmentation of Antidepressants

Bauer, Michael, Adli, Mazda, Bschor, Tom, Pilhatsch, Maximilian, Pfennig, Andrea, Sasse, Johanna, Schmid, Rita, Lewitzka, Ute January 2010 (has links)
Background: The late onset of therapeutic response and a relatively large proportion of nonresponders to antidepressants remain major concerns in clinical practice. Therefore, there is a critical need for effective medication strategies that augment treatment with antidepressants. Methods: To review the available evidence on the use of lithium as an augmentation strategy to treat depressive episodes. Results: More than 30 open-label studies and 10 placebo-controlled double-blind trials have demonstrated substantial efficacy of lithium augmentation in the acute treatment of depressive episodes. Most of these studies were performed in unipolar depression and included all major classes of antidepressants, although mostly tricyclics. A meta-analysis including 10 randomized placebo-controlled trials has provided evidence that lithium augmentation has a statistically significant effect on the response rate compared to placebo, with an odds ratio of 3.11, which corresponds to a number-needed-to-treat of 5. The meta-analysis revealed a mean response rate of 41.2% in the lithium group and 14.4% in the placebo group. One placebo-controlled trial in the continuation treatment phase showed that responders to acute-phase lithium augmentation should be maintained on the lithium-antidepressant combination for at least 12 months to prevent early relapses. Preliminary studies assessing genetic influences on the probability of response to lithium augmentation have suggested a predictive role of the –50T/C single nucleotide polymorphism of the GSK3β gene. Conclusion: Augmentation of antidepressants with lithium is currently the best-evidenced augmentation therapy in the treatment of depressed patients who do not respond to antidepressants. / This article is freely accessible with the consent of the rights holder on the basis of a (DFG-funded) Alliance or National Licence.
274

Data Quality Evaluation and Improvement for Machine Learning

Chen, Haihua 05 1900 (has links)
In this research, the focus is on data-centric AI, with a specific concentration on data quality evaluation and improvement for machine learning. We first present a practical framework for data quality evaluation and improvement, using the legal domain as a case study, and build a corpus for legal argument mining, starting from an initial corpus of 4,937 manually labeled instances. We define five data quality evaluation dimensions: comprehensiveness, correctness, variety, class imbalance, and duplication, and conduct a quantitative evaluation along these dimensions for the legal dataset and for two existing datasets in the medical domain used for medical concept normalization. The first group of experiments showed that class imbalance and insufficient training data are the two major data quality issues that negatively impacted the quality of the system built on the legal corpus. The second group of experiments showed that overlap between the test datasets and the training datasets, which we define as "duplication," is the major data quality issue for the two medical corpora. We then explore several widely used machine learning methods for data quality improvement. Compared to pseudo-labeling, co-training, and expectation-maximization (EM), a generative adversarial network (GAN) is more effective for automated data augmentation, especially when a small amount of labeled data and a large amount of unlabeled data are available. The data validation process, the performance improvement strategy, and the machine learning framework for data evaluation and improvement discussed in this dissertation can be used by machine learning researchers and practitioners to build high-performance machine learning systems. All materials, including the data, code, and results, will be released at: https://github.com/haihua0913/dissertation-dqei.
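As a point of reference for one of the baseline semi-supervised methods compared above, here is a minimal pseudo-labeling loop; it is a generic sketch (the logistic-regression classifier, the confidence threshold, and the 2D feature arrays are assumptions), not the dissertation's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def pseudo_label(x_labeled, y_labeled, x_unlabeled, threshold=0.95, rounds=5):
    """Iteratively add high-confidence predictions on unlabeled data to the training set."""
    x_train, y_train = x_labeled.copy(), y_labeled.copy()
    pool = x_unlabeled.copy()
    model = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        model.fit(x_train, y_train)
        if len(pool) == 0:
            break
        proba = model.predict_proba(pool)
        confident = proba.max(axis=1) >= threshold        # keep only confident predictions
        if not confident.any():
            break
        y_new = model.classes_[proba[confident].argmax(axis=1)]
        x_train = np.vstack([x_train, pool[confident]])
        y_train = np.concatenate([y_train, y_new])
        pool = pool[~confident]                           # remove newly labeled examples
    return model
```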
275

Advanced Data Augmentation : With Generative Adversarial Networks and Computer-Aided Design

Thaung, Ludwig January 2020 (has links)
CNN-based (Convolutional Neural Network) visual object detectors often reach human-level accuracy but need to be trained with large amounts of manually annotated data. Collecting and annotating this data can be time-consuming and financially expensive. Using generative models to augment the data can help minimize the amount of data required and increase detection performance. Many state-of-the-art generative models are Generative Adversarial Networks (GANs). This thesis investigates whether and how image data can be used to generate new data through GANs to train a YOLO-based (You Only Look Once) object detector, and how CAD (Computer-Aided Design) models can aid in this process. In the experiments, different GAN models are trained and evaluated by visual inspection or with the Fréchet Inception Distance (FID) metric. The data, provided by Ericsson Research, consists of images of antenna and baseband equipment along with annotations and segmentations. Ericsson Research supplied the YOLO detector, and no modifications are made to it. Finally, the YOLO detector is trained on data generated by the chosen model and evaluated by Average Precision (AP). The results show that the generative models designed in this work can produce RGB images of high quality. However, the quality degrades if binary segmentation masks are to be generated as well. The experiments with CAD input data did not result in images that could be used for training the detector. The GAN designed in this work is able to successfully replace objects in images with the style of other objects. The results show that training the YOLO detector with GAN-modified data leads to the same detection performance as training with real data. The results also show that the shapes and backgrounds of the antennas contributed more to detection performance than their style and colour.
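The FID metric mentioned above compares the statistics of Inception features extracted from real and generated images; a minimal sketch of the final distance computation (assuming the feature vectors have already been extracted, e.g. from an Inception-v3 pooling layer) could look like this:

```python
import numpy as np
from scipy import linalg


def frechet_inception_distance(real_feats: np.ndarray, fake_feats: np.ndarray) -> float:
    """FID between two sets of feature vectors of shape (n_samples, n_features).

    FID = ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 * (C_r C_f)^{1/2})
    """
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)
    cov_mean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)
    if np.iscomplexobj(cov_mean):      # numerical noise can produce tiny imaginary parts
        cov_mean = cov_mean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * cov_mean))
```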
276

Uncertainty Estimation in Volumetric Image Segmentation

Park, Donggyun January 2023 (has links)
The performance of deep neural networks, and the estimation of their robustness, has developed rapidly. In contrast, despite the broad use of deep convolutional neural networks (CNNs) [1] for medical image segmentation, far less research has been conducted on their uncertainty estimation. Deep learning tools by their nature do not capture model uncertainty, and in this sense the output of deep neural networks needs to be critically analysed with quantitative measurements, especially for applications in the medical domain. In this work, epistemic uncertainty, one of the two main types of uncertainty (epistemic and aleatoric), is analyzed and measured for volumetric medical image segmentation tasks (and possibly more diverse methods for 2D images) at the pixel level and the structure level. The deep neural network employed as a baseline is the 3D U-Net architecture [2], which shares its essential structural concept with the U-Net architecture [3], and various techniques are applied to quantify the uncertainty and obtain statistically meaningful results, including test-time data augmentation and deep ensembles. The distribution of the pixel-wise predictions is estimated by Monte Carlo simulations, and the entropy is computed to quantify and visualize how uncertain (or certain) the predictions for each pixel are. During the estimation, given the increased network training time in volumetric image segmentation, training an ensemble of networks is extremely time-consuming, and thus the focus is on data augmentation and test-time dropouts. The desired outcome is to reduce the computational cost of measuring the uncertainty of the model predictions while maintaining the same level of estimation performance, and to increase the reliability of the uncertainty estimation map compared to conventional methods. The proposed techniques are evaluated on publicly available volumetric image datasets, Combined Healthy Abdominal Organ Segmentation (CHAOS, a set of 3D in-vivo images) from Grand Challenge (https://chaos.grand-challenge.org/). Experiments with the liver segmentation task in 3D Computed Tomography (CT) show the relationship between the prediction accuracy and the uncertainty map obtained by the proposed techniques. / Prestandan hos djupa neurala nätverk och estimeringar av deras robusthet har utvecklats snabbt. Däremot, trots den breda användningen av djupa konvolutionella neurala nätverk (CNN) för medicinsk bildsegmentering, utförs mindre forskning om deras osäkerhetsuppskattningar. Verktyg för djupinlärning fångar inte modellosäkerheten och därför måste utdata från djupa neurala nätverk analyseras kritiskt med kvantitativa mätningar, särskilt för tillämpningar inom den medicinska domänen. I detta arbete analyseras och mäts epistemisk osäkerhet, som är en av huvudtyperna av osäkerheter (epistemisk och aleatorisk) för volymetriska medicinska bildsegmenteringsuppgifter (och möjligen fler olika metoder för 2D-bilder) på pixelnivå och strukturnivå. Det djupa neurala nätverket som används som referens är en 3D U-Net-arkitektur [2] och olika tekniker används för att kvantifiera osäkerheten och erhålla statistiskt meningsfulla resultat, inklusive testtidsdata-augmentering och djupa ensembler. Fördelningen av de pixelvisa förutsägelserna uppskattas av Monte Carlo-simuleringar och entropin beräknas för att kvantifiera och visualisera hur osäkra (eller säkra) förutsägelserna för varje pixel är.
Under uppskattningen, med tanke på den ökade nätverksträningstiden i volymetrisk bildsegmentering, är träning av en ensemble av nätverk extremt tidskrävande och därför ligger fokus på dataaugmentering och test-time dropouts. Det önskade resultatet är att minska beräkningskostnaderna för att mäta osäkerheten i modellförutsägelserna samtidigt som man bibehåller samma nivå av estimeringsprestanda och ökar tillförlitligheten för kartan för osäkerhetsuppskattning jämfört med de konventionella metoderna. De föreslagna teknikerna kommer att utvärderas på allmänt tillgängliga volymetriska bilduppsättningar, Combined Healthy Abdominal Organ Segmentation (CHAOS, en uppsättning 3D in-vivo-bilder) från Grand Challenge (https://chaos.grand-challenge.org/). Experiment med segmenteringsuppgiften för lever i 3D Computed Tomography (CT) visar sambandet mellan prediktionsnoggrannheten och osäkerhetskartan som erhålls med de föreslagna teknikerna.
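As a rough sketch of the pixel-wise uncertainty estimation described above (not the thesis code), the snippet below runs a segmentation model several times with dropout active at test time, averages the softmax outputs, and computes the per-voxel predictive entropy; the model object and tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F


def predictive_entropy(model: torch.nn.Module, volume: torch.Tensor, n_samples: int = 10):
    """Monte Carlo test-time dropout: mean class probabilities and per-voxel entropy.

    volume: tensor of shape (1, C_in, D, H, W); the model is assumed to output
    logits of shape (1, C_out, D, H, W).
    """
    model.train()                      # keep dropout layers active at inference time
    probs = []
    with torch.no_grad():
        for _ in range(n_samples):
            logits = model(volume)
            probs.append(F.softmax(logits, dim=1))
    mean_probs = torch.stack(probs).mean(dim=0)                         # (1, C_out, D, H, W)
    entropy = -(mean_probs * torch.log(mean_probs + 1e-8)).sum(dim=1)   # (1, D, H, W)
    return mean_probs, entropy
```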
277

Balanserad samordning genom kollektiv kommunikation : En kontextualiserad studie av hyperautomationsprocessen avseende dess utformning och tillämpning / Balanced coordination through collective communication: A contextualized study of the hyperautomation process with regard to its design and application

Axelsson, Matilda January 2023 (has links)
Background and purpose: There is a new generation of organizational automation called hyperautomation. Unlike classical automation, hyperautomation comprises the coordinated application of various advanced automation technologies, such as RPA, AI, and ML, which can be applied to automate both routine and more cognitively demanding tasks. The design and application of the concept are still fairly unexplored areas, which creates uncertainty about its effects and implications in the social contexts where such solutions are created and used. Consequently, there is a need for further studies of humans and automation in hybrid, which is why this thesis aims to explore hyperautomation in general, and its design and application in particular. Particular attention is paid to the activities and implications that arise in the human-machine interactions that the hyperautomation process entails and strives for, and to how different organizational actors experience and reflect on these. This is accomplished by examining how a consulting firm that provides hyperautomation solutions, and one of its client companies, observe the short- and long-term effects of, and goals behind, the introduction of hyperautomation. Literature review: The theoretical starting point of the thesis is underpinned by previous research in four essential areas of interest: humans, digital technology and organization; hyperautomation; HR and digitalization; and digital innovation. Furthermore, the study's analysis and discussion are guided by two theoretical frameworks: distributed cognition and the framework for responsible innovation. Method: The thesis is a qualitative case study with an abductive approach, based on insights from both previous research and empirical results. The empirical material is generated through semi-structured interviews with seven respondents, of whom three belong to the studied client company and the remaining four work at the consulting firm. The collected data are processed using a thematic analysis, which results in four main themes and a total of ten sub-themes. Results: The compilation of results is structured according to the study's four main themes: the decision to hyperautomate, coordination, the process of hyperautomating, and future prospects. In brief, the hyperautomation initiative is based on an ambition to save time and money and to relieve line managers. Furthermore, shortcomings are identified regarding the coordination of the steps and actors involved in the onboarding process. Regarding the process of hyperautomating, it is decided that it should proceed in iterations. While this is considered positive for getting started, the maintenance organization cannot keep up with handling all the problems encountered during the use of the hyperautomation solutions. The studied companies nevertheless view the future positively and have ambitions to extend the hyperautomation with more advanced digital technologies, such as AI. The challenges regarding coordination and communication must, however, be addressed before then. Discussion and conclusions: Hyperautomation is welcome and long-awaited as a concept, but problems arise concerning both its practical design and its application. These concerns are traced to the lack of balanced coordination and communication, with particular emphasis on the absence of early input from end users.
278

Volumetric Image Segmentation of Lizard Brains / Tredimensionell segmentering av ödlehjärnor

Dragunova, Yulia January 2023 (has links)
Accurate measurement of brain region volumes is important in studying brain plasticity, which brings insight into fundamental mechanisms in animal, memory, cognitive, and behavior research. The traditional methods of brain volume measurement are the ellipsoid model or histology. In this study, the micro-computed tomography (micro-CT) method was used to achieve more accurate results. However, manual segmentation of micro-CT images is time-consuming, hard to reproduce, and carries the risk of human error. Automatic image segmentation is a faster method for obtaining the segmentations and has the potential to provide efficiency, reliability, repeatability, and scalability. Different methods are tested and compared in this thesis. In this project, 29 micro-CT scans of lizard heads were used, and measurement of the volumes of 6 different brain regions was of interest. The lizard heads were semi-manually segmented into 6 regions, and three open-source segmentation algorithms were compared: one atlas-based algorithm and two deep-learning-based algorithms. Different amounts of training data were quantitatively compared for the deep-learning methods in all three orientations (sagittal, horizontal, and coronal). Data augmentation was also tested and compared. The comparison shows that the deep-learning algorithms provided more accurate results than the atlas-based algorithm. The results also demonstrated that in the sagittal plane, 5 manually segmented images for training are enough to provide predictions with high accuracy (Dice score 0.948). Image augmentation was shown to improve the accuracy of the segmentations, but a unique dataset still plays an important role. In conclusion, the results show that the manual segmentation work can be reduced drastically by using deep learning for image segmentation. / Noggrann mätning av hjärnregionsvolymer är viktigt för att studera hjärnans plasticitet, vilket ger insikt i de grundläggande mekanismerna inom djurstudier, minnes-, kognitions- och beteendeforskning. De traditionella metoderna för mätning av hjärnvolym är ellipsoidmodellen eller histologi. I den här studien användes mikrodatortomografi (mikro-CT) metoden för att få mer korrekta resultat. Manuell segmentering av mikro-CT-bilder är dock tidskrävande, svår att reproducera och har en risk för mänskliga fel. Automatisk bildsegmentering är en snabb metod för att erhålla segmenteringarna. Den har potentialen att ge effektivitet, tillförlitlighet, repeterbarhet och skalbarhet. Därför testas och jämförs tre metoder för automatisk segmentering i denna studie. I projektet användes 29 mikro-CT-bilder av ödlehuvuden för att få fram volymerna hos 6 olika hjärnregioner. Ödlehuvudena segmenterades halvmanuellt i 6 regioner och tre segmenteringsalgoritmer med öppen källkod jämfördes (en atlasbaserad algoritm och två djupinlärningsbaserade algoritmer). Olika antal träningsdata jämfördes kvantitativt för djupinlärningsmetoder i alla tre plan (sagittal, horisontell och frontal). Även datautökning testades och analyserades. Jämförelsen visar att djupinlärningsalgoritmerna gav mer signifikanta resultat än den atlasbaserade algoritmen. Resultaten visade även att i det sagittala planet räcker det med 5 manuellt segmenterade bilder för träning för att ge segmenteringar med hög noggrannhet (dice värde 0,948). Datautökningen har visat sig förbättra segmenteringarnas noggrannhet, men ett unikt dataset spelar fortfarande en viktig roll.
Sammanfattningsvis visar resultaten att det manuella segmenteringsarbetet kan minskas drastiskt genom att använda djupinlärning för bildsegmentering.
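Once a region has been segmented, its volume follows directly from the voxel count and the scan's voxel spacing; a minimal sketch is shown below (the label values and the voxel spacing are made up for illustration and do not come from the thesis).

```python
import numpy as np


def region_volume(label_map: np.ndarray, label: int,
                  voxel_spacing_mm=(0.01, 0.01, 0.01)) -> float:
    """Volume of one labeled brain region in cubic millimetres.

    label_map: integer array of shape (D, H, W) with one label value per region.
    """
    voxel_volume = float(np.prod(voxel_spacing_mm))        # mm^3 per voxel
    n_voxels = int(np.count_nonzero(label_map == label))
    return n_voxels * voxel_volume


# Illustrative use: label 3 denotes one hypothetical brain region.
segmentation = np.zeros((100, 100, 100), dtype=np.int32)
segmentation[20:40, 30:60, 30:60] = 3
print(region_volume(segmentation, label=3))
```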
279

Compare Accuracy of Alternative Methods for Sound Classification on Environmental Sounds of Similar Characteristics

Rudberg, Olov January 2022 (has links)
Artificial neural networks have in the last decade been a vital tool in image recognition, signal processing, and speech recognition. Because these networks are highly flexible, they suit a vast range of different data. This flexibility is highly sought after in the field of environmental sound classification. This thesis investigates whether audio from three types of water usage can be distinguished and classified. The usage types investigated are handwashing, showering, and WC-flushing. The data originally consisted of sound recordings in WAV format. The recordings were converted into spectrograms, which are visual representations of audio signals. Two neural networks are considered for this image classification problem, namely a Multilayer Perceptron (MLP) and a Convolutional Neural Network (CNN). Further, the spectrograms are subjected to image preprocessing using a Sobel filter, a Canny edge detector, and a Gabor filter, as well as to data augmentation through different brightness and zoom alterations. The results showed that the CNN outperformed the MLP. The image preprocessing techniques did not improve model performance, nor did augmentation or a combination of the two. An important finding was that constructing the convolutional and pooling filters of the CNN with rectangular shapes, alternating between horizontally and vertically oriented filters on the input spectrogram, gave superior results. This seemed to capture more information from the spectrograms, since spectrograms mainly contain information in the horizontal or vertical direction. This model achieved 91.14% accuracy. The result stemming from this model architecture further contributes to the environmental sound classification community. / Master's thesis approved 20 June 2022.
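The alternating rectangular-filter idea mentioned above might be sketched as follows; this is an illustrative PyTorch layout (the class name, channel counts, kernel sizes, and the three output classes are assumptions), not the thesis's exact architecture.

```python
import torch
import torch.nn as nn


class RectangularFilterCNN(nn.Module):
    """Small CNN alternating horizontal and vertical rectangular filters on spectrograms."""

    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 7), padding=(0, 3)),   # horizontal filter (time axis)
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 2)),
            nn.Conv2d(16, 32, kernel_size=(7, 1), padding=(3, 0)),  # vertical filter (frequency axis)
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(2, 1)),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)  # handwashing / showering / WC-flushing

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_mels, n_frames) spectrogram
        h = self.features(x).flatten(1)
        return self.classifier(h)


# Illustrative forward pass on a dummy spectrogram batch.
model = RectangularFilterCNN()
logits = model(torch.randn(4, 1, 128, 256))
```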
280

Tools for fluid simulation control in computer graphics

Schoentgen, Arnaud 09 1900 (has links)
L’animation basée sur la physique peut générer des systèmes aux comportements complexes et réalistes. Malheureusement, contrôler de tels systèmes est une tâche ardue. Dans le cas de la simulation de fluide, le processus de contrôle est particulièrement complexe. Bien que de nombreuses méthodes et outils ont été mis au point pour simuler et faire le rendu de fluides, trop peu de méthodes offrent un contrôle efficace et intuitif sur une simulation de fluide. Étant donné que le coût associé au contrôle vient souvent s’additionner au coût de la simulation, appliquer un contrôle sur une simulation à plus haute résolution rallonge chaque itération du processus de création. Afin d’accélérer ce processus, l’édition peut se faire sur une simulation basse résolution moins coûteuse. Nous pouvons donc considérer que la création d’un fluide contrôlé peut se diviser en deux phases: une phase de contrôle durant laquelle un artiste modifie le comportement d’une simulation basse résolution, et une phase d’augmentation de détail durant laquelle une version haute résolution de cette simulation est générée. Cette thèse présente deux projets, chacun contribuant à l’état de l’art relié à chacune de ces deux phases. Dans un premier temps, on introduit un nouveau système de contrôle de liquide représenté par un modèle particulaire. À l’aide de ce système, un artiste peut sélectionner dans une base de données une parcelle de liquide animé précalculée. Cette parcelle peut ensuite être placée dans une simulation afin d’en modifier son comportement. À chaque pas de simulation, notre système utilise la liste de parcelles actives afin de reproduire localement la vision de l’artiste. Une interface graphique intuitive a été développée, inspirée par les logiciels de montage vidéo, et permettant à un utilisateur non expert de simplement éditer une simulation de liquide. Dans un second temps, une méthode d’augmentation de détail est décrite. Nous proposons d’ajouter une étape supplémentaire de suivi après l’étape de projection du champ de vitesse d’une simulation de fumée eulérienne classique. Durant cette étape, un champ de perturbations de vitesse non-divergent est calculé, résultant en une meilleure correspondance des densités à haute et à basse résolution. L’animation de fumée résultante reproduit fidèlement l’aspect grossier de la simulation d’entrée, tout en étant augmentée à l’aide de détails simulés. / Physics-based animation can generate dynamic systems of very complex and realistic behaviors. Unfortunately, controlling them is a daunting task. In particular, fluid simulation brings up particularly difficult problems to the control process. Although many methods and tools have been developed to convincingly simulate and render fluids, too few methods provide efficient and intuitive control over a simulation. Since control often comes with extra computations on top of the simulation cost, art-directing a high-resolution simulation leads to long iterations of the creative process. In order to shorten this process, editing could be performed on a faster, low-resolution model. Therefore, we can consider that the process of generating an art-directed fluid could be split into two stages: a control stage during which an artist modifies the behavior of a low-resolution simulation, and an upresolution stage during which a final high-resolution version of this simulation is driven. This thesis presents two projects, each one improving on the state of the art related to each of these two stages. 
First, we introduce a new particle-based liquid control system. Using this system, an artist selects patches of precomputed liquid animations from a database, and places them in a simulation to modify its behavior. At each simulation time step, our system uses these entities to control the simulation in order to reproduce the artist’s vision. An intuitive graphical user interface inspired by video editing tools has been developed, allowing a nontechnical user to simply edit a liquid animation. Second, a tracking solution for smoke upresolution is described. We propose to add an extra tracking step after the projection of a classical Eulerian smoke simulation. During this step, we solve for a divergence-free velocity perturbation field resulting in a better matching of the low-frequency density distribution between the low-resolution guide and the high-resolution simulation. The resulting smoke animation faithfully reproduces the coarse aspect of the low-resolution input, while being enhanced with simulated small-scale details.
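The divergence-free perturbation mentioned in the smoke upresolution part can be obtained with a standard pressure-projection step; below is a rough 2D periodic-domain sketch using an FFT-based projection (the grid size and field shapes are illustrative, and this is a generic technique rather than the thesis's exact solver).

```python
import numpy as np


def project_divergence_free(u: np.ndarray, v: np.ndarray):
    """Remove the divergent component of a 2D periodic velocity field (Leray projection via FFT)."""
    ny, nx = u.shape
    kx = np.fft.fftfreq(nx) * 2.0 * np.pi
    ky = np.fft.fftfreq(ny) * 2.0 * np.pi
    KX, KY = np.meshgrid(kx, ky)

    u_hat, v_hat = np.fft.fft2(u), np.fft.fft2(v)
    k_dot_u = KX * u_hat + KY * v_hat
    k2 = KX ** 2 + KY ** 2
    k2[0, 0] = 1.0                      # avoid dividing by zero for the constant mode
    u_hat -= KX * k_dot_u / k2          # subtract the gradient (divergent) part
    v_hat -= KY * k_dot_u / k2
    return np.fft.ifft2(u_hat).real, np.fft.ifft2(v_hat).real


# Illustrative use on a random perturbation field.
rng = np.random.default_rng(0)
du, dv = rng.standard_normal((2, 64, 64))
du_df, dv_df = project_divergence_free(du, dv)
```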
