
Hybrid and data-driven methods for efficient and realistic particle-based liquid simulations

Roy, Bruno 12 1900 (has links)
The approximation of natural phenomena such as liquid simulations in computer graphics requires complex methods that are computationally expensive. Despite recent advances in this field, the gap in realism between a simulated liquid and reality remains considerable. Closing this gap requires numerical models whose complexity continues to grow. The ultimate goal is to give users the capacity and tools to manipulate these liquid simulation models and obtain acceptable realism, in real time, without requiring deep knowledge of the underlying physics. In the last decade, several approaches have been revisited to simplify these models or to make them easier to parameterize. In this dissertation by articles, we present three projects whose contributions improve, and add flexibility to, the generation of liquid simulations for computer graphics. First, we introduce a hybrid approach that separately processes the volume of non-apparent liquid (i.e., in depth) and a band of surface particles using the Smoothed Particle Hydrodynamics (SPH) method. We revisit the particle-band approach, newly applied to the SPH method, which offers a higher level of realism. Second, we propose an approach that improves the level of detail of splashing liquids. By upsampling an existing liquid simulation, our approach generates realistic splash details through ballistic dynamics. In addition, we propose a wave simulation method to reproduce the interactions between the generated splashes and the quasi-static portions of the existing simulation. Finally, the third project introduces an approach that enhances the apparent resolution of liquids through machine learning. We propose a learning architecture inspired by optical flow that generates a correspondence between the displacements of particles in liquid simulations at different resolutions (i.e., low and high). Our model encodes high-resolution features using pre-computed deformations between two liquids at different resolutions and convolution operations based on particle neighborhoods.
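The SPH method referenced above evaluates fluid quantities as kernel-weighted sums over neighboring particles. As a generic illustration (not the thesis's hybrid scheme), here is a minimal density estimate with the common poly6 kernel; the function name and the O(n²) neighbor search are simplifications for clarity:

```python
import math

def sph_density(positions, masses, h):
    """Estimate density at each particle with the poly6 smoothing kernel.

    positions: list of (x, y, z) tuples; masses: list of floats;
    h: smoothing radius. Toy O(n^2) neighbor search for illustration."""
    coef = 315.0 / (64.0 * math.pi * h ** 9)  # poly6 normalization in 3D
    densities = []
    for pi in positions:
        rho = 0.0
        for pj, mj in zip(positions, masses):
            r2 = sum((a - b) ** 2 for a, b in zip(pi, pj))
            if r2 < h * h:
                rho += mj * coef * (h * h - r2) ** 3  # kernel contribution
        densities.append(rho)
    return densities
```

In a real simulator the quadratic neighbor search would be replaced by a spatial hash or grid, which is what makes surface-band approaches like the one above attractive: far fewer particles need this evaluation.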

On Integral Transforms and Convolution Equations on the Spaces of Tempered Ultradistributions / Prilozi teoriji integralnih transformacija i konvolucionih jednačina na prostorima temperiranih ultradistribucija

Perišić Dušanka 03 July 1992 (has links)
This thesis introduces and investigates spaces of Beurling-type and Roumieu-type tempered ultradistributions, which are natural generalizations of the space of Schwartz's tempered distributions within the Denjoy-Carleman-Komatsu theory of ultradistributions. It is proved that the introduced spaces preserve all of the good properties of the Schwartz space; notably, the Fourier transform maps them continuously into themselves.
The first chapter gives the necessary notation and notions. In the second chapter, the spaces of ultrarapidly decreasing ultradifferentiable functions and their duals, the spaces of Beurling and Roumieu tempered ultradistributions, are introduced; their topological properties, their relations with the known distribution and ultradistribution spaces, and their structural properties are investigated; characterizations of Hermite expansions and boundary-value representations of the elements of these spaces are given. The spaces of multipliers of the Beurling-type and Roumieu-type tempered ultradistribution spaces are determined explicitly in the third chapter. The fourth chapter is devoted to the Fourier, Wigner, Bargmann, and Hilbert transforms on the spaces of Beurling-type and Roumieu-type tempered ultradistributions and their test spaces. In the fifth chapter, the equivalence of the classical definitions of the convolution of Beurling-type ultradistributions is proved, as is the equivalence of the newly introduced definitions of ultratempered convolutions of Beurling-type ultradistributions. The last chapter gives a necessary and sufficient condition for a convolutor of a space of tempered ultradistributions to be hypoelliptic in a space of integrable ultradistributions, and hypoelliptic convolution equations are studied in these spaces. The bibliography has 70 items.

Návrh nové metody pro stereovidění / Design of a New Method for Stereovision

Kopečný, Josef January 2008 (has links)
This thesis addresses the problems of photogrammetry. It describes the instruments, theoretical background, and procedures for acquiring, preprocessing, and segmenting input images, and for calculating the depth map. The main content of this thesis is the description of a new stereovision method: its algorithm, its implementation, and the evaluation of experiments. The method belongs to the family of correlation-based methods, with the main emphasis on segmentation, which supports the depth-map calculation.
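Correlation-based stereo methods like the one described compute a depth (disparity) map by comparing local windows between the left and right images. A minimal single-scanline sketch using sum-of-absolute-differences (SAD) block matching, with illustrative names and parameters (not the thesis's actual algorithm):

```python
def disparity_row(left, right, half_win=2, max_disp=4):
    """Per-pixel disparity for one rectified scanline pair via SAD matching.

    left, right: lists of pixel intensities. For each pixel in the left
    image, slide a window over candidate disparities and keep the shift
    with the lowest sum of absolute differences."""
    n = len(left)
    disparities = []
    for x in range(n):
        best_d, best_cost = 0, float("inf")
        for d in range(min(max_disp, x) + 1):
            cost = 0
            for o in range(-half_win, half_win + 1):
                xl, xr = x + o, x - d + o
                if 0 <= xl < n and 0 <= xr < n:
                    cost += abs(left[xl] - right[xr])
            if cost < best_cost:
                best_d, best_cost = d, cost
        disparities.append(best_d)
    return disparities
```

Real systems match 2D windows and, as the abstract notes, use segmentation cues to stabilize the matching in textureless regions.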

Approach for frequency response-calibration for microphone arrays / Metod för kalibrering av frekvenssvar för mikrofonarrayer

Drotz, Jacob January 2023 (has links)
Matched frequency responses are a fundamental starting point for a variety of implementations for microphone arrays. In this report, two methods for frequency-response calibration of a pre-assembled microphone array are presented and evaluated. This is done by extracting the deviation in frequency response of each microphone relative to a selected reference microphone, using a swept sine as a stimulus signal together with an inverse filter. The swept sine covers all frequencies within the bandwidth of human speech, which allows a full frequency-response measurement for every microphone from a single recording.
Using the swept sine, the deviation in frequency response between the microphones can be obtained. This deviation represents the scaling factor with which each microphone must be calibrated to match the reference microphone. Applying the scaling factors to the recorded stimulus signal shows an improvement for both implemented methods, and one method matches the frequency responses of the microphones with high accuracy.
Once the scaling factors of the various microphones are obtained, they can be used to calibrate other recorded signals. This yields only a minor improvement in matching the frequency responses, as the differences in frequency response between the microphones turn out to be signal-dependent and vary between recordings. The response differences between the microphones depend on the design of the array, the speaker, the room, and the acoustic frequency dispersion that occurs with sound waves. This makes it difficult to calibrate the frequency responses of the microphones without appropriate equipment, because the response of each microphone is noticeably affected by these other factors. Proposals to address these problems are discussed in the report as future work.
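The calibration idea above, measuring each microphone's response to a common stimulus and scaling it to match a reference, can be sketched with a naive DFT. The function names and the per-bin magnitude ratio are assumptions for illustration; the actual report uses a swept sine and an inverse filter to obtain the responses:

```python
import cmath
import math

def dft_mag(signal):
    """Magnitude spectrum via a naive DFT (fine for short toy signals)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n)]

def calibration_factors(ref, mics, eps=1e-12):
    """Per-bin scaling factors |H_ref| / |H_mic| for each microphone.

    ref: reference recording of the stimulus; mics: list of recordings of
    the same stimulus. Multiplying a microphone's spectrum by its factors
    matches its magnitude response to the reference."""
    href = dft_mag(ref)
    out = []
    for mic in mics:
        hmic = dft_mag(mic)
        out.append([r / (m + eps) for r, m in zip(href, hmic)])
    return out
```

As the abstract notes, these factors are only valid for the conditions they were measured under; signal-dependent differences limit how well they transfer to other recordings.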

Exploring feasibility of reinforcement learning flight route planning / Undersökning av använding av förstärkningsinlärning för flyruttsplannering

Wickman, Axel January 2021 (has links)
This thesis explores and compares traditional and reinforcement learning (RL) methods for performing 2D flight-path planning in 3D space. A wide overview of natural, classic, and learning approaches to planning is given, together with a review of some recurring problems and trade-offs that appear within planning. This background then serves as a basis for motivating possible solutions to this specific problem. These solutions are implemented, together with a testbed in the form of a parallelizable simulation environment. The environment uses random world generation and physics combined with an aerodynamic model. An A* planner, a local RL planner, and a global RL planner are developed and compared against each other in terms of performance, speed, and general behavior. An autopilot model is also trained and used both to measure flight feasibility and to constrain the planners to followable paths. All planners were partially successful, with the global planner exhibiting the highest overall performance. The RL planners also proved more reliable in terms of both speed and followability because of their ability to leave difficult decisions to the autopilot. From this it is concluded that machine learning in general, and reinforcement learning in particular, is a promising avenue for solving the problem of flight-route planning in dangerous environments.
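The A* baseline planner mentioned above can be sketched on a simple occupancy grid. This is generic textbook A*, not the thesis's 3D implementation; the grid encoding and Manhattan heuristic are illustrative choices:

```python
import heapq

def a_star(grid, start, goal):
    """A* shortest path on a 4-connected grid; grid[r][c] == 1 is an obstacle.

    Returns the list of cells from start to goal, or None if unreachable.
    The Manhattan heuristic is admissible and consistent on a unit grid,
    so the first time the goal is popped the path is optimal."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if best_g.get(node, float("inf")) <= g:
            continue  # already expanded with an equal or better cost
        best_g[node] = g
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None
```

The comparison in the thesis hinges on how such a geometric planner trades off against learned planners when path followability, not just path length, matters.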

Semi-Markov processes for calculating the safety of autonomous vehicles / Semi-Markov processer för beräkning av säkerheten hos autonoma fordon

Kaalen, Stefan January 2019 (has links)
Several manufacturers of road vehicles are today working on developing autonomous vehicles. One subject often up for discussion when it comes to integrating autonomous road vehicles into the infrastructure is safety, and there is in this context no common view of how safety should be quantified. As a contribution to this discussion, we propose describing each potential hazardous event of a vehicle as a semi-Markov process (SMP). A reliability-based method is presented that uses the semi-Markov representation to calculate the probability of a hazardous event occurring. The method simplifies the expression for the reliability using the Laplace-Stieltjes transform and calculates the transform of the reliability exactly. Numerical inversion algorithms are then applied to approximate the reliability up to a desired error tolerance. The method is validated using alternative techniques and is thereafter applied to a system for automated steering based on a real example from industry. A desirable evolution of the method would be a framework for how to represent each hazardous event as an SMP.
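The abstract describes computing the Laplace-Stieltjes transform of the reliability exactly and then inverting it numerically. As one concrete, assumed choice of inversion algorithm (the thesis does not name its algorithms here), this sketch applies Gaver-Stehfest inversion to the transform R̂(s) = 1/(s + λ) of a single exponential sojourn, whose reliability is R(t) = e^(−λt):

```python
import math

def stehfest_coefficients(n):
    """Gaver-Stehfest weights V_k for an even number of terms n."""
    half = n // 2
    v = []
    for k in range(1, n + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j ** half * math.factorial(2 * j)
                  / (math.factorial(half - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        v.append((-1) ** (half + k) * s)
    return v

def invert_laplace(F, t, n=12):
    """Approximate f(t) from its Laplace transform F(s), Gaver-Stehfest style.

    Works well for smooth, non-oscillatory f such as reliability functions;
    n around 12 is a common choice in double precision."""
    ln2 = math.log(2.0)
    v = stehfest_coefficients(n)
    return (ln2 / t) * sum(v[k - 1] * F(k * ln2 / t) for k in range(1, n + 1))
```

For a general SMP the transform R̂(s) is assembled from the sojourn-time distributions of the states; the inversion step is the same.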

SkeMo: A Web Application for Real-time Sketch-based Software Modeling

Sharma Chapai, Alisha 19 July 2023 (has links)
No description available.

Siamese Network with Dynamic Contrastive Loss for Semantic Segmentation of Agricultural Lands

Pendotagaya, Srinivas 07 1900 (has links)
This research delves into the application of semantic segmentation in precision agriculture, specifically targeting the automated identification and classification of irrigation system types within agricultural landscapes using high-resolution aerial imagery. With irrigated agriculture occupying a substantial portion of US land and constituting a major freshwater user, the study's background highlights the critical need for precise water-use estimates in the face of evolving environmental challenges; the study therefore applies advanced computer vision to identify irrigation systems reliably. The outcomes contribute to effective water management, sustainable resource utilization, and informed decision-making for farmers and policymakers, with broader implications for environmental monitoring and land-use planning. In this geospatial evaluation research, we tackle the challenges of intraclass variability and a limited dataset: the research problem centers on maximizing accuracy in geospatial analyses when confronted with intricate intraclass variations and the constraints of limited data. Introducing an approach termed "dynamic contrastive learning," this research refines the existing contrastive learning framework with tailored modifications aimed at improving the model's accuracy in classifying and segmenting geographic features. Various deep learning models, including EfficientNetV2L, EfficientNetB7, ConvNeXtXLarge, ResNet-50, and ResNet-101, serve as backbones to assess their performance in the geospatial context. The data used for evaluation consist of high-resolution aerial imagery from the National Agriculture Imagery Program (NAIP) captured in 2015, with four bands (red, green, blue, and near-infrared) at a 1-meter ground sampling distance. The dataset covers diverse landscapes in Lonoke County, USA, and is annotated with irrigation system types.
The dataset encompasses diverse geographic features, including urban, agricultural, and natural landscapes, providing a representative and challenging scenario for model assessment. The experimental results underscore the efficacy of the modified contrastive learning approach in mitigating intraclass variability and improving performance metrics. The proposed method achieves an average accuracy of 96.7%, a BER of 0.05, and an mIoU of 88.4%, surpassing existing contrastive learning methods. This research contributes a solution to the specific challenges posed by intraclass variability and limited datasets in geospatial feature classification. Furthermore, the investigation extends to prominent deep learning architectures such as SegFormer, Swin Transformer, ConvNeXt, and Convolutional Vision Transformer, shedding light on their impact on geospatial image analysis. ConvNeXtXLarge emerges as a robust backbone, demonstrating high accuracy (96.02%), minimal BER (0.06), and a high mIoU (85.99%).
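For context, the standard pairwise contrastive loss that "dynamic contrastive learning" builds on pulls same-class embeddings together and pushes different-class embeddings apart beyond a margin. The sketch below uses a fixed margin; the thesis's dynamic variant modifies this framework, and its exact form is not reproduced here:

```python
def contrastive_loss(d, same_class, margin=1.0):
    """Pairwise contrastive loss on an embedding distance d.

    Similar pairs are penalized by their squared distance; dissimilar
    pairs are penalized only while they sit inside the margin. A dynamic
    variant would adapt the margin during training instead of fixing it."""
    if same_class:
        return d * d
    return max(0.0, margin - d) ** 2
```

In a Siamese setup, d is the distance between the two branch embeddings of a pair of image patches, and this loss shapes the embedding space before or alongside the segmentation objective.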

ACCELERATING SPARSE MACHINE LEARNING INFERENCE

Ashish Gondimalla (14214179) 17 May 2024 (has links)
Convolutional neural networks (CNNs) have become important workloads due to their impressive accuracy in tasks like image classification and recognition. Convolution operations are compute-intensive, and this cost grows profoundly with newer and better CNN models. However, convolutions exhibit characteristics, such as sparsity, which can be exploited. In this dissertation, we propose three works that capture sparsity for faster performance and reduced energy.
The first work is an accelerator design called SparTen for improving two-sided-sparsity convolutions (i.e., sparsity in both filters and feature maps) with fine-grained sparsity. SparTen identifies the efficient inner join as the key primitive for hardware acceleration of sparse convolution. In addition, SparTen proposes load-balancing schemes for higher compute-unit utilization. SparTen performs 4.7x, 1.8x, and 3x better than a dense architecture, a one-sided architecture, and SCNN, the previous state-of-the-art accelerator. The second work, BARISTA, scales up SparTen (and SparTen-like proposals) to large-scale implementations with as many compute units as recent dense accelerators (e.g., Google's Tensor Processing Unit) to achieve the full speedups afforded by sparsity. At such large scales, however, buffering, on-chip bandwidth, and compute utilization are highly intertwined: optimizing for one factor strains another and may invalidate optimizations proposed in small-scale implementations. BARISTA proposes novel techniques to balance the three factors in large-scale accelerators, and performs 5.4x, 2.2x, 1.7x, and 2.5x better than dense, one-sided, naively scaled two-sided, and iso-area two-sided architectures, respectively. The last work, EUREKA, builds an efficient tensor core that executes dense, structured, and unstructured sparsity without losing efficiency. EUREKA achieves this by proposing novel techniques that improve compute utilization by slightly tweaking operand stationarity. EUREKA achieves speedups of 5x and 2.5x, along with energy reductions of 3.2x and 1.7x, over dense and structured-sparse execution, respectively. EUREKA incurs area and power overheads of only 6% and 11.5%, respectively, over Ampere.
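The "efficient inner join" primitive that SparTen identifies can be illustrated in software as an index intersection of two sparse operands: multiplies are issued only where both the filter and the feature map are nonzero. A minimal sketch with illustrative names (the hardware works on compressed, bitmask-encoded representations rather than Python lists):

```python
def sparse_inner_join(idx_a, val_a, idx_b, val_b):
    """Dot product of two sparse vectors given as (sorted indices, values).

    Walks both index lists in lockstep; a multiply-accumulate happens only
    on a match, which is the two-sided-sparsity saving an accelerator
    exploits."""
    i = j = 0
    acc = 0.0
    while i < len(idx_a) and j < len(idx_b):
        if idx_a[i] == idx_b[j]:
            acc += val_a[i] * val_b[j]  # both operands nonzero here
            i += 1
            j += 1
        elif idx_a[i] < idx_b[j]:
            i += 1
        else:
            j += 1
    return acc
```

The load-balancing problem the abstract mentions arises because different filter/feature-map pairs yield very different numbers of matches, leaving some compute units idle.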

Análise de carteiras em tempo discreto / Discrete time portfolio analysis

Kato, Fernando Hideki 14 April 2004 (has links)
In this thesis, Markowitz's portfolio selection model is extended by means of a discrete-time analysis and more realistic hypotheses. A finite tensor product of Erlang densities is used to approximate the multivariate probability density function of the single-period discrete returns of dependent assets. The Erlang is a particular case of the Gamma distribution; a finite mixture can generate multimodal asymmetric densities, and the tensor product generalizes this concept to higher dimensions. Assuming that the multivariate density was independent and identically distributed (i.i.d.) in the past, the approximation can be calibrated with historical data using the maximum-likelihood criterion. This is a large-scale optimization problem, but one with a special structure. Assuming that this multivariate density will be i.i.d. in the future, the density of the discrete returns of a portfolio of assets with nonnegative weights is a finite mixture of Erlang densities. Risk is calculated with the Downside Risk measure, which is convex for certain parameters, is not based on quantiles, does not cause risk underestimation, and makes the single- and multi-period optimization problems convex.
The discrete return is a multiplicative random variable along time. The multi-period distribution of the discrete returns of a sequence of T portfolios is a finite mixture of Meijer G distributions. After a change of probability measure to the average compound, it is possible to calculate the risk and the return, which leads to the multi-period efficient frontier, on which each point represents one or more ordered sequences of T portfolios. The portfolios of each sequence must be calculated from the future to the present, keeping the expected return at the desired level, which can be a function of time. A dynamic asset-allocation strategy is to redo the calculations at each period, using the newly available information. If the time horizon tends to infinity, then the efficient frontier, in the average compound probability measure, tends to a single point, given by the Kelly portfolio, whatever the risk measure. To select one among several portfolio optimization models, it is necessary to compare their relative performances. The efficient frontier of each model must be plotted in its respective graph; since the asset weights of the portfolios on these curves are known, all the curves can be plotted in the same graph. For a given expected return, the efficient portfolios of the models can be calculated, and the realized returns and their differences along a backtest can be compared.
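The building block of the return model above is the Erlang density (a Gamma distribution with integer shape) and finite mixtures of it. A minimal sketch with illustrative function names; the thesis's tensor-product construction for dependent assets is not reproduced here:

```python
import math

def erlang_pdf(x, k, lam):
    """Erlang(k, lam) density: a Gamma density with integer shape k >= 1."""
    if x < 0:
        return 0.0
    return (lam ** k * x ** (k - 1) * math.exp(-lam * x)
            / math.factorial(k - 1))

def erlang_mixture_pdf(x, weights, shapes, rates):
    """Density of a finite mixture of Erlangs; weights should sum to 1.

    With suitable components such a mixture can be multimodal and
    asymmetric, which is what makes it a flexible return model."""
    return sum(w * erlang_pdf(x, k, lam)
               for w, k, lam in zip(weights, shapes, rates))
```

Calibrating the weights, shapes, and rates to historical returns by maximum likelihood is the large-scale structured optimization problem the abstract refers to.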
