  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
491

Secagem da Mentha piperita em leito fixo utilizando diferentes temperaturas e velocidades de ar. / Mentha piperita drying in a fixed bed using different temperatures and air speeds

Gasparin, Priscila Pigatto 09 February 2012 (has links)
The species Mentha piperita, popularly known as peppermint, is a medicinal plant that can also be used to obtain flavorings, tea infusions and seasonings for pharmaceutical, food and cosmetic applications. A proper drying process is therefore needed to extend the shelf life of the product and to facilitate its transport, handling and storage, and obtaining good-quality mint requires studying both the pre- and post-harvest stages. The objective of this study was to evaluate the drying of mint and to obtain its drying curves through mathematical modeling, for drying-air temperatures from 30 to 70 °C and air velocities of 0.3 and 0.5 m s-1, together with a quality analysis of the dried product. The leaves were dried in a laboratory-scale fixed-bed dryer equipped with a fan for air circulation and a heating system. Two quality analyses were performed: essential oil yield and leaf color. The results showed that the Midilli model best fits the experimental data. Temperature influenced the drying process, whereas air velocity affected neither the essential oil yield nor the color of the leaves. A drying temperature of 50 °C proved the most suitable for this species, as it gave the highest essential oil yield and preserved the green color.
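The Midilli thin-layer model named in the abstract is commonly written as MR(t) = a·exp(-k·t^n) + b·t. The sketch below is illustrative only: it fits that form to placeholder moisture-ratio data with SciPy; the time points, moisture ratios and fitted parameters are assumptions, not values from the dissertation.

```python
# Illustrative sketch: fitting the Midilli thin-layer drying model
# MR(t) = a*exp(-k*t**n) + b*t. The time/moisture-ratio points below are
# placeholders, not measurements from the dissertation.
import numpy as np
from scipy.optimize import curve_fit

def midilli(t, a, k, n, b):
    return a * np.exp(-k * t**n) + b * t

t_obs = np.array([0.25, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0])   # h (assumed)
mr_obs = np.array([0.90, 0.78, 0.60, 0.47, 0.36, 0.22, 0.13, 0.08])

params, _ = curve_fit(midilli, t_obs, mr_obs, p0=[1.0, 0.5, 1.0, 0.0],
                      bounds=([0.5, 0.0, 0.1, -0.1], [1.5, 5.0, 3.0, 0.1]))
a, k, n, b = params
print(f"a={a:.3f}  k={k:.3f}  n={n:.3f}  b={b:.4f}")
```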
492

Matematičko modelovanje sagorevanja pšenične slame u nepokretnom sloju sa aspekta uticaja promene parametara procesa / Mathematical modeling of wheat straw combustion in a fixed bed from the aspect of the influence of process parameter changes

Čepić Zoran 19 March 2018 (has links)
The goal of this doctoral dissertation is to bring together theoretical knowledge in the field of mathematical modeling and experimental investigation of wheat straw combustion in a fixed bed, with the aim of developing a mathematical model that will, through computer simulation, enable the analysis of the effects of operational parameters (bed density, amount of combustion air) on the combustion process, as well as the determination of the burning rate, the bed temperature profile and the concentration of individual gases in the bed. In addition to validating the mathematical model, the experimental measurements were used to analyze and describe the phenomena that occur during the combustion of wheat straw in a fixed bed.
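The bed model itself is not reproduced in the abstract. As a purely illustrative building block of the kind such fixed-bed combustion models assemble, the sketch below integrates a single first-order Arrhenius mass-loss (devolatilization) equation; the kinetic constants and bed temperature are assumed round numbers, not values from the dissertation.

```python
# Illustrative only: first-order Arrhenius mass loss, a typical building
# block in fixed-bed straw combustion models. Constants are assumed, not
# taken from the dissertation.
import numpy as np
from scipy.integrate import solve_ivp

A = 1.0e6    # pre-exponential factor, 1/s (assumed)
E = 8.0e4    # activation energy, J/mol (assumed)
R = 8.314    # universal gas constant, J/(mol K)

def mass_loss(t, m, T):
    return -A * np.exp(-E / (R * T)) * m   # dm/dt at bed temperature T (K)

T_bed = 600.0                               # K (assumed)
sol = solve_ivp(mass_loss, (0.0, 30.0), [1.0], args=(T_bed,), max_step=0.5)
print(f"fuel mass fraction remaining after 30 s: {sol.y[0, -1]:.3f}")
```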
493

Métodos heurísticos aplicados ao problema de programação da frota de navios PLVs. / Heuristic methods applied to a PLV fleet scheduling problem.

Queiroz, Maciel Manoel de 03 October 2011 (has links)
This work addressed a fleet scheduling problem from the offshore oil industry: the laying of pipelines and production lines and their connection to the subsea infrastructure, carried out by pipe-laying vessels (PLVs). Each job is characterized by a release date, which reflects the expected arrival of the necessary material at the port, a duration in days that may differ by vessel, a list of compatible vessels (some vessels cannot perform certain jobs) and a penalty incurred when the job finishes after its due date. The problem is a variant of the unrelated parallel machine scheduling problem with a total weighted tardiness objective. The GRASP metaheuristic with path relinking was employed as the solution method and proved to be a competitive and effective strategy. The method was implemented with multi-threading so that multiple trajectories could be explored simultaneously. Computational experiments were conducted to assess the performance of the proposed heuristics, comparing their solutions with bounds provided by a column generation method.
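As a concrete, deliberately simplified illustration of the problem class described above, the sketch below implements only the greedy randomized construction phase of GRASP for unrelated parallel machines with release dates and total weighted tardiness; the path relinking and local search of the thesis are omitted, and the job data are hypothetical.

```python
# Simplified sketch (not the thesis code): greedy randomized construction
# for unrelated parallel machines with release dates and total weighted
# tardiness. Job and machine data below are hypothetical.
import random

jobs = {  # per-machine processing time p, release date r, due date d, weight w
    "J1": {"p": {"M1": 5, "M2": 7}, "r": 0, "d": 8,  "w": 3},
    "J2": {"p": {"M1": 4, "M2": 3}, "r": 2, "d": 9,  "w": 2},
    "J3": {"p": {"M1": 6, "M2": 5}, "r": 0, "d": 12, "w": 1},
    "J4": {"p": {"M1": 3, "M2": 4}, "r": 1, "d": 6,  "w": 2},
}

def construct(jobs, machines, alpha=0.3, seed=1):
    """Pick (job, machine) assignments from a restricted candidate list
    ranked by earliest completion time."""
    rng = random.Random(seed)
    free = {m: 0 for m in machines}      # time each machine becomes available
    unassigned, schedule, twt = set(jobs), [], 0
    while unassigned:
        cands = []
        for j in unassigned:
            for m in machines:
                finish = max(free[m], jobs[j]["r"]) + jobs[j]["p"][m]
                cands.append((finish, j, m))
        cands.sort()
        cut = cands[0][0] + alpha * (cands[-1][0] - cands[0][0])
        finish, j, m = rng.choice([c for c in cands if c[0] <= cut])
        free[m] = finish
        twt += jobs[j]["w"] * max(0, finish - jobs[j]["d"])
        schedule.append((j, m, finish))
        unassigned.remove(j)
    return schedule, twt

schedule, twt = construct(jobs, ["M1", "M2"])
print(schedule, "-> total weighted tardiness:", twt)
```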
494

Modelagem analítico-numérica do escoamento laminar convectivo em tubos associada à filtração tangencial / Analytical-numerical modeling of convective laminar flow in tubes associated with cross-flow filtration

Venezuela, Antonio Luís 22 April 2008 (has links)
In this doctoral thesis, the hybrid analytical-numerical technique known internationally as the GITT (Generalized Integral Transform Technique) is used to model and simulate the chemical species conservation equation in the investigation of incompressible, Newtonian, steady laminar flow in permeable tubes. The flow is applied to the cross-flow membrane filtration process, and two studies were carried out, one for the elliptic and one for the parabolic form of the convective-diffusive equation, using the same boundary conditions. In the modeling, the velocity at the permeable wall is considered uniform and the velocity profiles for the entrance region are taken from the literature. The mathematical model originally uses an expression for the concentration boundary layer thickness, with a methodology that determines the asymptotic rate from which that thickness is established. The results are presented with a convergence analysis in tables and with graphs of the local and mean transmembrane flux, the Sherwood correlation and the concentration boundary layer thickness, and they are compared with other results and methodologies reported in the literature.
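For readers unfamiliar with the technique, the GITT mentioned above rests on an integral transform pair built from the eigenfunctions ψ_i of an auxiliary Sturm-Liouville problem. The generic form for a dimensionless concentration C(x, r) in cylindrical geometry is sketched below; this is the standard textbook structure of the method, not necessarily the exact notation of the thesis.

```latex
% Generic GITT transform-inverse pair (cylindrical weight r), shown as an
% illustration of the method's structure, not the thesis' own notation.
\bar{C}_i(x) = \int_0^1 r\,\psi_i(r)\,C(x,r)\,\mathrm{d}r
\quad\text{(transform)}, \qquad
C(x,r) = \sum_{i=1}^{\infty} \frac{\psi_i(r)}{N_i}\,\bar{C}_i(x)
\quad\text{(inverse)}, \qquad
N_i = \int_0^1 r\,\psi_i^2(r)\,\mathrm{d}r .
```

Applying the transform to the species-conservation equation turns the PDE into a coupled system of ordinary differential equations for the transformed potentials, which is truncated and solved numerically, giving the technique its hybrid analytical-numerical character.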
495

Sistema para simulação dinâmica de circuitos de britagem. / System for dynamic simulation of crushing circuits.

Deliberato Neto, Octávio 17 December 2007 (has links)
The production of crushed stone aggregates for civil construction in the metropolitan region of São Paulo (RMSP) is a challenging task: on one side, growing pressure from society and environmental constraints and, on the other, the low prices and quality standards imposed by the market push the aggregate industry to pursue ever lower operating costs that translate into competitive advantage. Whether for the optimization of existing plants or for new plant designs, simulators of crushing circuits are increasingly being used, and the automation of the RMSP's aggregate plants has also become an irreversible trend. In this context, this work presents a dynamic simulator of crushing circuits, developed to support the optimization, automation and design of aggregate plants. AggXtream, the new dynamic crushing-circuit simulator, was built with the most modern mathematical models of crushing currently available and incorporates a set of model calibration routines based on artificial intelligence techniques.
496

Modelagem matemática do processo térmico contínuo de alimentos líquidos em trocadores de calor a placas. / Mathematical modeling of the continuous thermal processing of liquid foods in plate heat exchangers.

Benze, Rafael Viana 12 April 2013 (has links)
The main objective of this study was to develop a mathematical model of a plate pasteurizer to determine the temperature and lethality distributions along the channels of the heat exchanger in all of its sections, in the tubular connections and in the holding tube, evaluating the logarithmic decay of the concentration of a target microorganism or of the activity of a target enzyme in a continuous pasteurization process. The model comprises differential energy and lethality balances in the exchanger channels, in the holding tube and in its connections, taking into account heat loss to the environment in the tubes. The result is a set of first-order ordinary differential equations, solved by the finite difference method using the gPROMS software. A case study was analyzed; the model described the thermal process and the lethality coherently and can later be used to optimize pasteurization in plate heat exchangers, aiming at safe, high-quality products. The model was verified experimentally using a time-temperature integrator and laboratory tests with an enzymatic indicator (alkaline phosphatase in phosphate buffer), which showed that the model satisfactorily approximates the experimental results.
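The "logarithmic decay" evaluated by such models is conventionally expressed through the lethality integral F = ∫ 10^((T − T_ref)/z) dt and the decimal reduction time D. The sketch below is a minimal, self-contained illustration of that calculation for an assumed temperature profile; T_ref, z, D and the profile are illustrative values, not parameters from the thesis.

```python
# Minimal illustration of a lethality calculation: integrate the lethal rate
# L(t) = 10**((T(t) - T_ref)/z) over residence time and convert the result
# to decimal (log) reductions. All numbers are assumed, not thesis values.
import numpy as np

T_ref, z = 72.0, 7.0      # degC: assumed reference temperature and z-value
D_ref = 10.0              # s: assumed decimal reduction time at T_ref

t = np.linspace(0.0, 20.0, 201)     # s, residence time in the holding tube
T = 75.0 - 0.1 * t                  # degC, assumed slowly cooling profile

L = 10.0 ** ((T - T_ref) / z)                       # instantaneous lethal rate
F = np.sum(0.5 * (L[1:] + L[:-1]) * np.diff(t))     # trapezoidal integration, s
print(f"F-value = {F:.1f} s  ->  {F / D_ref:.2f} log reductions")
```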
497

Critical states of seismicity : modeling and data analysis

Zöller, Gert January 2005 (has links)
The occurrence of earthquakes is characterized by a high degree of spatiotemporal complexity. Although numerous patterns, e.g. fore- and aftershock sequences, are well known, the underlying mechanisms are not observable and thus not understood. Because the recurrence times of large earthquakes are usually decades or centuries, the number of such events in corresponding data sets is too small to draw conclusions with reasonable statistical significance. The present study therefore combines numerical modeling and the analysis of real data in order to unveil the relationships between physical mechanisms and observable quantities. The key hypothesis is the validity of the so-called "critical point concept" for earthquakes, which assumes large earthquakes to occur as phase transitions in a spatially extended many-particle system, similar to percolation models. New concepts are developed to detect critical states in simulated and in natural data sets. The results indicate that important features of seismicity, like the frequency-size distribution and the temporal clustering of earthquakes, depend on frictional and structural fault parameters. In particular, the degree of quenched spatial disorder (the "roughness") of a fault zone determines whether large earthquakes occur quasiperiodically or in a more clustered fashion. This illustrates the power of numerical models for identifying the regions in parameter space that are relevant for natural seismicity. The critical point concept is verified for both synthetic and natural seismicity in terms of a critical state that precedes a large earthquake: a gradual roughening of the (unobservable) stress field leads to a scale-free (observable) frequency-size distribution. Furthermore, a growth of the spatial correlation length and an acceleration of the seismic energy release prior to large events are found; a large earthquake may occur as a single connected rupture or as an interrupted rupture (foreshocks followed by a mainshock). The predictive power of these precursors is, however, limited: rather than forecasting the time, location, and magnitude of individual events, they are promising as a contribution to a broad multiparameter approach to time-dependent seismic hazard assessment.
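One observable tied above to the critical state is the (scale-free) frequency-size distribution. As a small, self-contained illustration of how that quantity is routinely characterized, the sketch below computes the maximum-likelihood (Aki) estimate of the Gutenberg-Richter b-value for a synthetic catalog; the catalog and the completeness magnitude are assumptions, not data from the study.

```python
# Illustration only: maximum-likelihood (Aki) estimate of the Gutenberg-Richter
# b-value, b = log10(e) / (mean(M) - Mc), for a synthetic catalog of magnitudes
# above an assumed completeness threshold Mc.
import numpy as np

rng = np.random.default_rng(0)
Mc, b_true = 2.0, 1.0
# Gutenberg-Richter implies magnitudes above Mc are exponentially distributed
mags = Mc + rng.exponential(scale=np.log10(np.e) / b_true, size=5000)

b_hat = np.log10(np.e) / (mags.mean() - Mc)
print(f"estimated b-value: {b_hat:.3f} (true value behind the synthetic data: {b_true})")
```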
498

Modeling Collective Decision-Making in Animal Groups

Granovskiy, Boris January 2012 (has links)
Many animal groups benefit from making decisions collectively. For example, colonies of many ant species are able to select the best possible nest to move into without every ant needing to visit each available nest site. Similarly, honey bee colonies can focus their foraging resources on the best possible food sources in their environment by sharing information with each other. In the same way, groups of human individuals are often able to make better decisions together than each individual group member can on his or her own. This phenomenon is known as "collective intelligence", or "wisdom of crowds." What unites all these examples is the fact that there is no centralized organization dictating how animal groups make their decisions. Instead, these successful decisions emerge from interactions and information transfer between individual members of the group and between individuals and their environment. In this thesis, I apply mathematical modeling techniques in order to better understand how groups of social animals make important decisions in situations where no single individual has complete information. This thesis consists of five papers, in which I collaborate with biologists and sociologists to simulate the results of their experiments on group decision-making in animals. The goal of the modeling process is to better understand the underlying mechanisms of interaction that allow animal groups to make accurate decisions that are vital to their survival. Mathematical models also allow us to make predictions about collective decisions made by animal groups that have not yet been studied experimentally or that cannot be easily studied. The combination of mathematical modeling and experimentation gives us a better insight into the benefits and drawbacks of collective decision making, and into the variety of mechanisms that are responsible for collective intelligence in animals. The models that I use in the thesis include differential equation models, agent-based models, stochastic models, and spatially explicit models. The biological systems studied included foraging honey bee colonies, house-hunting ants, and humans answering trivia questions.
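As one small, illustrative example of the differential-equation models mentioned above (an assumption for illustration, not a model taken from the thesis), the sketch below integrates a two-option commitment model in which uncommitted individuals discover option i at rate q_i and are recruited to it by already-committed individuals at rate r_i.

```python
# Illustrative two-option collective-decision ODE (assumed, not the thesis
# model): uncommitted individuals discover option i at rate q_i and are
# recruited to it by committed individuals at rate r_i.
import numpy as np
from scipy.integrate import solve_ivp

q = np.array([0.6, 0.4])   # assumed discovery rates (option qualities)
r = np.array([2.0, 2.0])   # assumed recruitment rates

def rhs(t, x):
    uncommitted = 1.0 - x.sum()
    return (q + r * x) * uncommitted

sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0], max_step=0.1)
x1, x2 = sol.y[:, -1]
print(f"final committed fractions: option 1 = {x1:.2f}, option 2 = {x2:.2f}")
```

Even with equal recruitment rates, the better option ends up with the larger committed fraction because early discoveries compound through recruitment, the kind of emergent amplification from simple individual interactions that the abstract describes.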
499

Coordinated Routing : applications in location and inventory management

Andersson, Henrik January 2006 (has links)
Almost everywhere, routing plays an important role in everyday life. This thesis consists of three parts, each studying a different application in which routing decisions are coordinated with other decisions. A common denominator in all applications is that an intelligent utilization of a fleet of vehicles is crucial for the performance of the system. In the first part, routing and inventory management decisions are coordinated; in the second, routing decisions concerning different modes of transportation are coordinated with inventory management; and in the third, location decisions and routing are coordinated. The first part presents an application in waste management. Many industries generate waste and, instead of handling disposal themselves, rely on companies specialized in garbage collection. Each industry rents one or more containers from a collection company, which is obliged to make sure that the containers never become overfull; in return, the collection company can plan the collection so that the overall cost and the number of overfull containers are minimized. Two models are proposed for the problem facing the garbage collection company: the first is solved using a Lagrangean relaxation approach on a flow-based model, and the second using Benders decomposition on a column-based model. The second part investigates a distribution chain management problem taken from the Swedish pulp industry. Given fixed production plans at the mills and fixed customer demands, the problem is to minimize the distribution cost. Unlike many other models for marine distribution chains, the customers are not located at the harbors, so the proposed model also incorporates the distribution planning from the harbors to the customers. Not all customers are served from the harbors; some are served directly from the mills by truck and train, and these decisions are included as well. The problem is modeled as a mixed integer linear program and solved using a branch-and-price scheme. Due to the complexity of the problem, the solution strategy is divided into two phases: the first emphasizes the generation of schedules for the vessels operated by the company, while the second deals with the chartering of vessels on the spot market. In the third part, routing is combined with location decisions in the location-routing problem. Special emphasis is given to strategic management, where decision makers must make location, capacity and routing decisions over a long planning period. The studied application comes from strategic school management, where the location and capacity of the schools as well as their catchment areas are under consideration. The problem is modeled as a mixed integer linear program. The computational study shows the importance of incorporating a routing component that allows multiple visits, as well as the danger of having too short a planning period.
500

A Two Dimensional Model of a Direct Propane Fuel Cell with an Interdigitated Flow Field

Khakdaman, Hamidreza 18 April 2012 (has links)
Increasing environmental concerns as well as diminishing fossil fuel reserves call for a new generation of energy conversion technologies. Fuel cells, which convert the chemical energy of a fuel directly to electrical energy, have been identified as one of the leading alternative energy conversion technologies. Fuel cells are more efficient than conventional heat engines, with minimal pollutant emissions and superior scalability. Proton Exchange Membrane Fuel Cells (PEMFCs), which produce electricity from hydrogen, have been widely investigated for transportation and stationary applications. The focus of this study is on the Direct Propane Fuel Cell (DPFC), which belongs to the PEMFC family but consumes propane instead of hydrogen as feedstock. A drawback associated with DPFCs is that the propane reaction rate is much slower than that of hydrogen. Two ideas were suggested to overcome this issue: (i) operating at high temperatures (150-230 °C), and (ii) keeping the propane partial pressure at the maximum possible value. An electrolyte material composed of zirconium phosphate (ZrP) and polytetrafluoroethylene (PTFE) was suggested because it is an acceptable proton conductor at high temperatures. In order to keep the propane partial pressure at the maximum value, interdigitated flow-fields were chosen to distribute propane through the anode catalyst layer. A computational approach was chosen to evaluate the performance of a DPFC operating at high temperature with interdigitated flow-fields. Computational Fluid Dynamics (CFD) was used to create two 2-D mathematical models for DPFCs based on differential conservation equations. Two different approaches were investigated to model species transport in the electrolyte phase of the anode and cathode catalyst layers and in the membrane layer. In the first approach, migration was assumed to be the only mechanism of proton transport, whereas in the second both migration and diffusion were considered. Accordingly, Ohm's law was used in the first approach and concentrated solution theory (generalized Stefan-Maxwell equations) in the second. Both models are isothermal. The models were solved numerically by implementing the partial differential equations and boundary conditions in the FreeFEM++ software, which is based on the finite element method. The programming was done in C++, using the existing library of C++ classes and tools in FreeFEM++; the final model contained 60 pages of original code written specifically for this thesis. The models were used to predict the performance of a DPFC under different operating conditions and equipment design parameters. The results showed that a specific combination of interdigitated flow-fields, a ZrP-PTFE electrolyte with a proton conductivity of 0.05 S/cm, and operation at 230 °C and 1 atm produced a performance (polarization curve) that was (a) far superior to anything in the published DPFC literature and (b) competitive with the performance of direct methanol fuel cells; in addition, it was equivalent to that of hydrogen fuel cells at low current densities (30 mA/cm²).
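A back-of-the-envelope check of the first (Ohm's-law) transport approach described above: with the 0.05 S/cm proton conductivity and the 30 mA/cm² current density quoted in the abstract, the ohmic voltage drop across the electrolyte follows directly once a layer thickness is assumed; the thickness below is an illustrative value, not one from the thesis.

```python
# Back-of-the-envelope sketch of the Ohm's-law proton-transport picture.
# Conductivity (0.05 S/cm) and 30 mA/cm2 come from the abstract; the
# electrolyte thickness is an assumed illustrative value.
sigma = 0.05        # S/cm, ZrP-PTFE proton conductivity (from the abstract)
thickness = 0.010   # cm (100 um), assumed electrolyte thickness
i = 0.030           # A/cm2, current density quoted for the hydrogen comparison

area_specific_resistance = thickness / sigma        # ohm*cm2
ohmic_loss = i * area_specific_resistance           # V

print(f"ASR = {area_specific_resistance:.3f} ohm*cm2, "
      f"ohmic loss at {i*1000:.0f} mA/cm2 = {ohmic_loss*1000:.1f} mV")
```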
