1 |
The effect of cash constraints on smallholder farmer revenue. Pay, Kenneth (Wen Hong Kenneth). January 2020 (has links)
Thesis: S.M., Massachusetts Institute of Technology, Center for Computational Science and Engineering, 2020 / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 77-79). / Many smallholder farmers in developing countries struggle to make ends meet. We develop a model that examines how markets catering to numerous smallholder farmers reach an equilibrium, while incorporating real world challenges that smallholder farmers face, namely a lack of long term planning and cash constraints. Through this, we analyze the effectiveness of two common forms of government intervention, storage and loan provision. We fully characterize market equilibrium conditions under the base scenario of no government intervention, analyzing how price conditions, number of farmers, and severity of cash constraints impact farmer behaviour. We then illustrate how these results change when storage and loans are integrated into the model. The analysis demonstrates that myopic optimization and cash constraints induce farmers to make sub-optimal decisions, resulting in farmers not receiving the full benefit of government interventions. We show that while storage is always useful in situations where farmers have excess quantity, providing overly generous loan terms can negatively impact farmer revenue by disincentivizing farmers from selling their produce on the market. We also show that attempting to improve equality by alleviating farmer cash constraints can result in negative externalities like increased wastage. Empirical analysis with Bengal gram farmers in India shows that farmers are in dire need of government assistance to meet their cash constraints. However, improving loan terms only boosts farmer revenue up to a point, after which revenue declines. The analysis shows that while loan schemes are widely popular and sometimes necessary in aiding struggling farmers, governments should be aware that the strategic response of different farmers can result in adverse effects. / by Kenneth Pay. / S.M. / S.M. Massachusetts Institute of Technology, Center for Computational Science and Engineering
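To make the decision problem concrete, the following is a minimal Python sketch of a single myopic, cash-constrained farmer choosing how much of the harvest to sell immediately versus store. It is an illustration only, not the thesis's market equilibrium model; the function, its parameters, and all numerical values are assumptions.

```python
# Illustrative sketch of a myopic, cash-constrained farmer's selling decision.
# All names and numbers are hypothetical; the thesis develops a full market
# equilibrium model rather than this single-agent heuristic.

def myopic_sale(harvest_kg, cash_need, price_today, price_expected_later,
                storage_loss_rate=0.10, loan_available=0.0, loan_rate=0.05):
    """Return (kg sold now, kg stored) for a farmer who looks only one period ahead."""
    # Cash constraint: sell at least enough today (net of any loan) to cover immediate needs.
    must_sell = min(harvest_kg, max(0.0, (cash_need - loan_available) / price_today))
    remaining = harvest_kg - must_sell

    # Myopic comparison: revenue from selling the rest today versus storing it,
    # accounting for storage losses and the interest on any loan taken.
    value_sell_now = remaining * price_today
    value_store = remaining * (1 - storage_loss_rate) * price_expected_later \
                  - loan_available * loan_rate

    if value_sell_now >= value_store:
        return harvest_kg, 0.0      # sell everything now
    return must_sell, remaining     # sell only what the cash constraint forces

# Example: a loan covering the whole cash need removes the forced sale entirely.
print(myopic_sale(harvest_kg=500, cash_need=2000, price_today=10,
                  price_expected_later=12, loan_available=2000))
```

In this toy setting a sufficiently generous loan drives the forced sale to zero and storage dominates, mirroring the disincentive effect on market supply that the abstract describes.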
|
2 |
Learning communication policies for decentralized task allocation under communication constraints. Raja, Sharan. January 2020 (has links)
Thesis: S.M., Massachusetts Institute of Technology, Center for Computational Science and Engineering, 2020 / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 57-60). / Multi-UAS teams often operate in ad-hoc communication networks where blind application of consensus algorithms performs poorly because of the message-intensive nature of such algorithms. Important messages can get lost due to interference or collisions with other messages, and the broadcasting of less important messages can limit the effective bandwidth available for the team. This thesis presents a novel algorithm, Communication-aware CBBA (CA-CBBA), that learns a cooperative communication policy for agents performing decentralized task allocation using the consensus-based bundle algorithm (CBBA), while accounting for these communication issues. In our approach, agents learn to use features, such as local communication graph density and the value of their own messages, to both censor and schedule themselves amongst the other agents competing for the shared communication medium. Experiments show that the learned communication policy enables more efficient utilization of the shared medium by prioritizing agents with important messages and more frequently censoring agents in denser parts of the network to alleviate the "hidden node problem." The approach is shown to lead to better task allocation outcomes, with faster convergence times and conflict resolution rates compared to CBBA in communication-constrained environments. The policy learnt by agents trained on a specific team size and task number is shown to generalize to larger team sizes in task allocation problems with varying task numbers. To our knowledge, this is the first task allocation algorithm to co-design the planning algorithm and communication protocol by incorporating communication constraints into the design process, resulting in better task allocation outcomes in communication-constrained environments. / by Sharan Raja. / S.M. / S.M. Massachusetts Institute of Technology, Center for Computational Science and Engineering
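As a rough illustration of the kind of policy CA-CBBA learns, the sketch below hand-codes a censoring rule over the two features named in the abstract: local communication graph density and the value of the agent's own message. This is not the trained policy from the thesis; the functional form and thresholds are assumptions.

```python
# Hand-written stand-in for the learned communication policy in CA-CBBA.
# The actual policy is learned; this heuristic only illustrates the features
# (local graph density, message value) that the abstract says it conditions on.
import random

def should_broadcast(message_value, local_degree, max_degree, base_prob=0.9):
    """Probabilistically censor low-value messages from agents in dense neighborhoods."""
    density = local_degree / max(1, max_degree)       # crude local density in [0, 1]
    # High-value messages are almost always sent; low-value messages from densely
    # connected agents are censored to reduce contention on the shared medium.
    p_send = base_prob * message_value * (1.0 - 0.5 * density)
    return random.random() < p_send

# Example: an agent holding a nearly stale bid in a dense part of the network
# transmits only rarely, freeing bandwidth for agents with important updates.
print(should_broadcast(message_value=0.2, local_degree=9, max_degree=10))
```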
|
3 |
Estimation of cumulative prospect theory-based passenger behavioral models for dynamic pricing & transactive control of shared mobility on demand. Jagadeesan Nair, Vineet. January 2021 (has links)
Thesis: S.M. in Computational Science and Engineering, Massachusetts Institute of Technology, Department of Mechanical Engineering and Center for Computational Science and Engineering, February, 2021 / Cataloged from the official version of thesis. / Includes bibliographical references (pages 147-155). / This thesis studies the optimal design of large-scale shared mobility on demand services (SMoDS) in urban settings. Specifically, we build upon previous work done in the Active-Adaptive Control Lab on the dynamic pricing and routing of ride sharing services. We develop and characterize a novel passenger behavioral model based on Cumulative Prospect Theory (CPT) to more accurately represent decision making in the presence of significant risks and uncertainty associated with SMoDS' travel times. A comprehensive survey was designed to estimate both the mode-choice and CPT models. The mode choice section consisted of a series of discrete choice experiments created via factorial design, while the CPT section involved carefully constructed lottery questions and travel choice scenarios to elicit risk preferences. After conducting a pilot study and going through several iterations, the survey was launched via a panel firm. / Data was collected from 1000+ respondents in the Greater Boston metro area. This was used to fully characterize the model and estimate parameters through methods including maximum simulated likelihood estimation, nonlinear least squares, and global optimization tools. Other techniques such as regularization and transfer learning were also used to improve the quality of the results obtained. Beyond parameter estimation, the uncertainty associated with such behavioral models was quantified via well-established nonlinear programming methods. Sensitivity and robustness analyses were performed to assess the effects of CPT model parametrization errors on the performance of the SMoDS system and objectives such as expected revenue and average waiting times. These insights were used to design and simulate a closed loop, feedback control mechanism for the SMoDS system to correct modelling errors in real-time, achieve setpoint tracking, and enable parameter estimation. / The scheme uses the dynamic tariff as a transactive control input to influence the passenger's behavior as desired. This was implemented via gradient-descent based control schemes to update the price signals, in order to (i) drive the passengers' probabilities of accepting the SMoDS ride offer towards the desired value while also (ii) learning the true passenger behavioral model parameters. / by Vineet Jagadeesan Nair. / S.M. in Computational Science and Engineering / S.M. in Computational Science and Engineering Massachusetts Institute of Technology, Department of Mechanical Engineering
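For readers unfamiliar with CPT, the sketch below applies the standard Tversky-Kahneman value and probability-weighting functions to a simple two-outcome waiting-time gamble. The functional forms are the textbook ones; the parameter values and the scenario are illustrative, not the estimates obtained from the survey data in this thesis.

```python
# Standard CPT value and probability-weighting functions (Tversky & Kahneman, 1992),
# applied to a two-outcome waiting-time gamble. Parameter values are the commonly
# cited ones, not the estimates from the thesis; the scenario is hypothetical.
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Value function: concave for gains, convex and steeper (loss aversion) for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def weight(p, gamma=0.61):
    """Inverse-S probability weighting: small probabilities are overweighted."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def cpt_utility(outcomes, reference):
    """Subjective utility of [(minutes, probability), ...] relative to a reference wait.
    A single weighting exponent is used for gains and losses; with one gain and one
    loss outcome this simple weighting coincides with the cumulative formulation."""
    return sum(weight(p) * value(reference - minutes) for minutes, p in outcomes)

# Example: an SMoDS ride that usually waits 5 min but occasionally 20 min,
# judged against a 10-minute reference (e.g., an exclusive ride-hailing trip).
print(cpt_utility([(5, 0.9), (20, 0.1)], reference=10))
```

Even though the expected wait (6.5 minutes) beats the reference, loss aversion and the overweighting of the rare 20-minute outcome can make the subjective utility negative, which is exactly the kind of risk attitude the dynamic tariff must account for.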
|
4 |
Playing at Reality: Exploring the potential of the digital game as a medium for science communication. Aitkin, Alexander Lewis, alex.aitkin@dest.gov.au, January 2005 (has links)
Scientific culture is not popular because the essential nature of science – the models and practices that make it up – cannot be communicated via conventional media in a manner that is interesting to the average person. These models and practices might be communicated in an interesting manner using the new medium of the digital game, yet very few digital games based upon scientific simulations have been created and thus the potential of such games to facilitate scientific knowledge construction cannot be studied directly. Scientific simulations have, however, been much used by scientists to facilitate their own knowledge construction, and equally, both simulations and games have been used by science educators to facilitate knowledge construction on the part of their students. The large academic literatures relating to these simulations and games collectively demonstrate that their ability to re-create reality, model complex systems, be visual and interactive, engage the user in the practice of science, and engage the user in construction and collaboration makes them powerful tools for facilitating scientific knowledge construction. Moreover, the large non-academic literature discussing the nature of digital games (which are themselves both simulations and games) demonstrates that their ability to perform the above tasks (i.e. to re-create reality, model complex systems, and so forth) is what makes them enjoyable to play.
Because the features of scientific and educational simulations and games that facilitate knowledge construction are the very same features that make digital games enjoyable to play, the player of a scientific-simulation-based digital game would be simultaneously gaining enjoyment and acquiring scientific knowledge. If science were widely communicated using digital games, therefore, it would be possible for there to be a popular scientific culture.
|
5 |
Physics-based Simulation of Tablet Disintegration and Dissolution. Yue Li (11202198), 29 July 2021 (has links)
Tablets are the most widely used dosage form in the world and are central to the mass production of drugs. The disintegration and dissolution kinetics of tablets play a vital role in the pharmacokinetics and pharmacodynamics of drugs and are also critical for evaluating the quality of drug formulations. This thesis reports a modeling and simulation approach to tablet disintegration and dissolution processes in a dissolution test device. By coupling the lattice Boltzmann method with the discrete element method, we simulate the hydrodynamics as well as the particle dynamics in the dissolution test device. Our computational methods can model the tablet structure, the disintegration of the tablet in the dissolution device, and the dissolution of particles under the influence of hydrodynamics. The simulation results show that our computational methods can reproduce experimental results. Our methods pave the path toward an in-silico platform for tablet formulation design and verification.
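To give a rough sense of the particle-side dissolution step, the sketch below shrinks a single spherical particle with a Noyes-Whitney-type rate law. It is a deliberately simplified stand-in: the thesis resolves the surrounding hydrodynamics with the lattice Boltzmann method and the particle dynamics with the discrete element method, neither of which this sketch attempts, and all parameter values are assumed.

```python
# Simplified Noyes-Whitney-type dissolution update for one spherical particle.
# All parameter values (diffusivity, boundary-layer thickness, solubility, density)
# are assumptions for illustration; the thesis couples LBM hydrodynamics with DEM.
import math

def dissolve(radius, dt, D=7e-10, h=30e-6, Cs=12.0, Cb=0.0, rho=1300.0):
    """One time step of dM/dt = -D * A * (Cs - Cb) / h, returning the new radius (SI units)."""
    area = 4.0 * math.pi * radius ** 2
    dm = D * area * (Cs - Cb) / h * dt                 # mass dissolved this step (kg)
    mass = max(rho * (4.0 / 3.0) * math.pi * radius ** 3 - dm, 0.0)
    return (3.0 * mass / (4.0 * math.pi * rho)) ** (1.0 / 3.0)

# Example: a 50-micron particle after one second in a well-stirred, dilute medium.
r = 50e-6
for _ in range(1000):
    r = dissolve(r, dt=1e-3)
print(r)
```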
|
6 |
A REST model for high throughput scheduling in computational grids. Stokes-Rees, Ian. January 2006 (has links)
Current grid computing architectures have been based on cluster management and batch queuing systems, extended to a distributed, federated domain. These have shown shortcomings in terms of scalability, stability, and modularity. To address these problems, this dissertation applies architectural styles from the Internet and Web to the domain of generic computational grids. Using the REST style, a flexible model for grid resource interaction is developed which removes the need for any centralised services or specific protocols, thereby allowing a range of implementations and layering of further functionality. The context for resource interaction is a generalisation and formalisation of the Condor ClassAd match-making mechanism. This set theoretic model is described in depth, including the advantages and features which it realises. This RESTful style is also motivated by operational experience with existing grid infrastructures, and the design, operation, and performance of a proto-RESTful grid middleware package named DIRAC. This package was designed to provide for the LHCb particle physics experiment’s “off-line” computational infrastructure, and was first exercised during a 6 month data challenge which utilised over 670 years of CPU time and produced 98 TB of data through 300,000 tasks executed at computing centres around the world. The design of DIRAC and performance measures from the data challenge are reported. The main contribution of this work is the development of a REST model for grid resource interaction. In particular, it allows resource templating for scheduling queues which provide a novel distributed and scalable approach to resource scheduling on the grid.
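The ClassAd-style match-making that the resource-interaction model generalizes can be pictured with a small sketch: each party advertises attributes together with a requirements predicate over the other party's attributes, and a match requires both predicates to hold. Condor's actual ClassAd language evaluates a richer expression syntax; the Python lambdas and attribute values below are simplifications and assumptions.

```python
# Simplified ClassAd-style symmetric match-making between a task and a resource.
# In Condor, requirements are ClassAd expressions; plain lambdas stand in for them
# here, and the attribute names and values are hypothetical.

task = {
    "attrs": {"ImageSize": 1800, "Owner": "lhcb", "Cmd": "gauss.sh"},
    "requirements": lambda other: other["Memory"] >= 2048 and other["OpSys"] == "LINUX",
}

resource = {
    "attrs": {"Memory": 4096, "OpSys": "LINUX", "Arch": "X86_64"},
    "requirements": lambda other: other["ImageSize"] <= 2000 and other["Owner"] in ("lhcb", "atlas"),
}

def matches(a, b):
    """Symmetric match: each side's requirements must hold against the other's attributes."""
    return a["requirements"](b["attrs"]) and b["requirements"](a["attrs"])

print(matches(task, resource))  # True: the resource suits the task and the task passes site policy
```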
|
7 |
A Systematic Approach for Obtaining Performance on Matrix-Like Operations. Veras, Richard Michael, 01 August 2017 (has links)
Scientific Computation plays a critical role in the scientific process because it allows us to ask complex queries and test predictions that would otherwise be infeasible to perform experimentally. Because of its power, Scientific Computing has helped drive advances in many fields, ranging from Engineering and Physics to Biology and Sociology to Economics and Drug Development, and even to Machine Learning and Artificial Intelligence. Common among these domains is the desire for timely computational results, thus a considerable amount of human expert effort is spent towards obtaining performance for these scientific codes. However, this is no easy task, because each of these domains presents its own unique set of challenges to software developers, such as domain-specific operations, structurally complex data, and ever-growing datasets. Compounding these problems are the myriad constantly changing, complex, and unique hardware platforms that an expert must target. Unfortunately, an expert is typically forced to reproduce their effort across multiple problem domains and hardware platforms. In this thesis, we demonstrate the automatic generation of expert-level high-performance scientific codes for Dense Linear Algebra (DLA), Structured Mesh (Stencil), Sparse Linear Algebra, and Graph Analytics. In particular, this thesis seeks to address the issue of obtaining performance on many complex platforms for a certain class of matrix-like operations that span many scientific, engineering, and social fields. We do this by automating a method used for obtaining high performance in DLA and extending it to structured, sparse, and scale-free domains. We argue that it is the use of the underlying structure found in the data from these domains that enables this process. Thus, obtaining performance for most operations does not occur in isolation from the data being operated on, but instead depends significantly on the structure of the data.
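The claim that performance follows the structure of the data can be illustrated by writing the same logical operation, a matrix-vector product, against two different data structures. The plain-Python sketch below is for exposition only; it is not the generated high-performance code the thesis describes.

```python
# The same logical operation, y = A x, expressed over two data structures.
# The loop structure follows the data's structure; the thesis automates the
# generation of tuned machine-level code, which this sketch does not attempt.

def dense_matvec(A, x):
    """Dense layout: visit every (row, column) pair."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def csr_matvec(values, col_idx, row_ptr, x):
    """CSR sparse layout: visit only the stored nonzeros of each row."""
    return [sum(values[k] * x[col_idx[k]] for k in range(row_ptr[i], row_ptr[i + 1]))
            for i in range(len(row_ptr) - 1)]

# A 3x3 matrix with four nonzeros, in both representations.
A = [[2, 0, 0],
     [0, 0, 3],
     [1, 0, 4]]
values, col_idx, row_ptr = [2, 3, 1, 4], [0, 2, 0, 2], [0, 1, 2, 4]
x = [1, 1, 1]
print(dense_matvec(A, x), csr_matvec(values, col_idx, row_ptr, x))  # [2, 3, 5] twice
```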
|
8 |
Übungen zur Vorlesung Theoretische Physik III: Elektrodynamik/Computergestützte Elektrodynamik. Löcse, Frank, 17 March 2004 (has links)
Exercises for the lecture Theoretische Physik III: Elektrodynamik/Computergestützte Elektrodynamik (Theoretical Physics III: Electrodynamics/Computational Electrodynamics) in the winter semester 2002/03, for the Physics degree programme and the Bachelor's (Bakkalaureus) degree programme in Computational Science
|
9 |
Übungen zur Vorlesung Theoretische Physik III: Elektrodynamik/Computergestützte Elektrodynamik. Löcse, Frank, 18 March 2004 (has links)
Exercises for the lecture Theoretische Physik III: Elektrodynamik/Computergestützte Elektrodynamik (Theoretical Physics III: Electrodynamics/Computational Electrodynamics) in the winter semester 2003/04, for the Physics degree programme and the Bachelor's (Bakkalaureus) degree programme in Computational Science
|
10 |
Übungen zur Vorlesung Theoretische Physik III: Elektrodynamik/Computergestützte Elektrodynamik. Löcse, Frank, 26 August 2005 (has links)
Exercises for the lecture Theoretische Physik III: Elektrodynamik/Computergestützte Elektrodynamik (Theoretical Physics III: Electrodynamics/Computational Electrodynamics) in the winter semester 2004/05, for the Physics degree programme and the Bachelor's (Bakkalaureus) degree programme in Computational Science
|