1 |
Reglas de predicción aplicables al diseño de un curso de computación / Prediction rules applicable to the design of a computing course
Grossi, María Delia January 2008 (has links) (PDF)
This monograph describes how Data Mining techniques can be used to improve the teaching and learning process in the Computación course taught at the Facultad de Ingeniería of the Universidad de Buenos Aires.
The proposed approach models the student's interaction with the study material using prediction rules whose interpretation makes it possible to detect shortcomings in the educational process and to assess the quality of the study material used.
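As a hedged illustration of what such prediction rules might look like (the event names, records, and thresholds below are invented for the example; the monograph derives its rules from actual course data):

```python
# Toy rule mining over hypothetical student-interaction records:
# each record notes whether a student skipped the unit's worked
# examples and whether they failed the unit's test.
records = [
    {"skipped_examples": True,  "failed_test": True},
    {"skipped_examples": True,  "failed_test": True},
    {"skipped_examples": True,  "failed_test": False},
    {"skipped_examples": False, "failed_test": False},
    {"skipped_examples": False, "failed_test": False},
]

# Prediction rule "skipped_examples -> failed_test", scored by
# support (how often both hold over all records) and confidence
# (how often the consequent holds when the antecedent does).
antecedent = [r for r in records if r["skipped_examples"]]
both = [r for r in antecedent if r["failed_test"]]
support = len(both) / len(records)
confidence = len(both) / len(antecedent)
print(f"support={support:.2f}, confidence={confidence:.2f}")
```

A rule with high confidence on the "skipped examples" antecedent would point the instructor at the study material for that unit, which is the kind of diagnostic reading the abstract describes.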
|
2 |
Un entorno de aprendizaje y una propuesta de enseñanza de Simulación de Eventos Discretos con GPSS / A learning environment and a teaching proposal for Discrete Event Simulation with GPSS
Villarreal, Gonzalo Luján 30 September 2013 (has links)
Teaching discrete event simulation requires integrating a variety of theoretical concepts and putting them into practice through the creation and execution of abstract simulation models, with the goal of gathering information that can be extrapolated to the real systems. To build models, run them, and analyze the results of each run, increasingly sophisticated software tools are used; they express the elements of a model in terms of abstract entities and relationships, and they collect large amounts of data and statistics about each of these model entities. GPSS is one such tool, consisting of a block-based programming language and a simulation engine that translates those blocks into the different model entities. Although its first version dates back to 1961, GPSS is still widely used by practitioners and companies, and it is one of the tools most commonly used for teaching discrete event simulation at academic institutions around the world.
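Since GPSS models are built from blocks, a rough Python analogue may help fix ideas. The sketch below mimics the classic GENERATE -> QUEUE/SEIZE -> ADVANCE -> RELEASE -> TERMINATE pattern for a single-server queue; the distributions and parameters are invented for this sketch and the code is not taken from the thesis or from any GPSS implementation.

```python
import random

random.seed(1)

# GENERATE: 1000 transactions with exponential interarrival times (mean 10).
arrivals, t = [], 0.0
for _ in range(1000):
    t += random.expovariate(1 / 10.0)
    arrivals.append(t)

busy_until, waits = 0.0, []
for arrival in arrivals:
    start = max(arrival, busy_until)      # SEIZE: queue if the facility is busy
    waits.append(start - arrival)         # time spent in the QUEUE
    busy_until = start + random.expovariate(1 / 8.0)  # ADVANCE, then RELEASE

# TERMINATE: report one of the statistics a GPSS run collects automatically.
print(f"mean wait: {sum(waits) / len(waits):.2f} time units")
```

In actual GPSS the engine collects queue lengths, facility utilization, and per-entity statistics without any explicit bookkeeping, which is precisely the machinery the thesis's tool makes visible to students.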
Growing computing power has made it possible to add more tools and features to the various GPSS implementations. While this is an advantage for users, it also demands an ever greater effort from instructors to teach students how to exploit its full potential. Many instructors and researchers have sought to improve the teaching of discrete event simulation from multiple angles: course organization and teaching methodology, the creation of learning objects that help apply the theoretical elements, the development of tools for building GPSS models, and the construction of tools for understanding the simulation engine from the inside.
This thesis introduces a software tool for building GPSS models interactively, designed to integrate the theoretical elements of the course with GPSS objects and entities. The tool can also run these models and analyze their evolution over simulation time in great detail, which lets students understand how the simulation engine works and how the various entities interact with one another. The thesis also includes a teaching proposal based on strong student participation which, by means of this new tool, helps students absorb the concepts more easily. This proposal was tested with students in the systems field, who took a course covering the same theoretical and practical elements as a traditional course but organized differently. Notable results include a reduction of close to 50% in the time required to learn GPSS concepts, and a greater ability of students to assimilate concepts and to derive new concepts on their own from previously acquired ones.
|
3 |
COMPARISON OF TWO AERIAL DISPERSION MODELS FOR THE PREDICTION OF CHEMICAL RELEASE ASSOCIATED WITH MARITIME ACCIDENTS NEAR COASTAL AREAS
KEONG KOK, TEO 11 March 2002 (has links)
No description available.
|
4 |
Schémas d'adaptations algorithmiques sur les nouveaux supports d'exécution parallèles / Algorithmic adaptation schemas on new parallel platforms
Achour, Sami 06 July 2013 (has links)
With the multitude of emerging parallel platforms characterized by hardware heterogeneity (processors, networks, ...), developing high-performance parallel applications and libraries has become a challenge. One method that has proved suitable for meeting this challenge is the adaptive approach, which uses several parameters (architectural, algorithmic, ...) to optimize the execution of the application on the target platform. Applications adopting this approach must take advantage of performance modeling methods to choose among the alternatives available to them (algorithms, implementations, or schedules). The use of these modeling methods in adaptive applications must obey the constraints imposed by this context, namely prediction speed and accuracy. In this work we first propose a framework for developing adaptive parallel applications based on theoretical performance modeling. We then focus on the task of performance prediction for parallel and hierarchical environments, proposing a framework that combines different performance modeling methods (analytical, experimental, and simulation) to guarantee a trade-off between the constraints above.
This framework takes advantage of the application's installation phase to discover the execution platform and the application's traces, in order to model the behavior of its two main components, namely computation kernels and point-to-point communications. For modeling these components, we developed several methods based on experiments and polynomial regression that provide accurate models. The models produced during the installation phase are then used at run time by our performance prediction tool for MPI programs (MPI-PERF-SIM) to predict their behavior. The framework is validated separately for the different modules, then globally on the matrix multiplication kernel.
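A rough illustration of the experiments-plus-polynomial-regression idea follows; the measurements, variable names, and the degree of the fit are assumptions made for the sketch, not output of MPI-PERF-SIM.

```python
import numpy as np

# Hypothetical timing measurements: point-to-point communication time
# (seconds) observed at installation time for a few message sizes (bytes).
sizes = np.array([1e3, 1e4, 1e5, 1e6, 1e7])
times = np.array([1.2e-5, 4.0e-5, 3.1e-4, 2.9e-3, 2.8e-2])

# Fit a low-degree polynomial cost model t(s) ~ a*s + b by least squares
# (degree 1 here; a real framework would select the degree from the data).
coeffs = np.polyfit(sizes, times, deg=1)
model = np.poly1d(coeffs)

# At run time, predict the cost of an unseen message size.
print(f"predicted t(5e6 B) = {model(5e6):.4e} s")
```

The appeal of such closed-form models in this setting is exactly the trade-off the abstract names: they are cheap to evaluate at run time (speed) while being calibrated on real measurements of the target platform (accuracy).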
|
5 |
Gaussian Conditionally Markov Sequences: Theory with Application
Rezaie, Reza 05 August 2019 (has links)
Markov processes have been widely studied and used to model a broad range of problems. A Markov process has two main components: an evolution law and an initial distribution. Markov processes are not suitable for modeling some problems, for example, the problem of predicting a trajectory with a known destination. Such a problem has three main components: an origin, an evolution law, and a destination. The conditionally Markov (CM) process is a powerful mathematical tool for generalizing the Markov process. One class of CM processes, called $CM_L$, fits the above components of trajectories with a destination. The CM process combines the Markov property and conditioning. CM processes come in various classes that are more general and powerful than the Markov process, are useful for modeling various problems, and possess many attractive Markov-like properties.
Reciprocal processes were introduced in connection with a problem in quantum mechanics and have been studied for years. But the existing viewpoint for studying reciprocal processes is not revealing and may lead to complicated results that are not necessarily easy to apply.
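To fix ideas, the three properties at play can be stated compactly for a discrete-time sequence $[x_k]_{k=0}^{N}$. The notation below is ours, summarizing the standard definitions from the CM literature rather than quoting the dissertation:

```latex
\begin{align*}
\text{Markov:} \quad & p(x_{k+1} \mid x_0,\dots,x_k) = p(x_{k+1} \mid x_k) \\
CM_L: \quad & p(x_{k+1} \mid x_0,\dots,x_k,\, x_N) = p(x_{k+1} \mid x_k,\, x_N) \\
\text{Reciprocal:} \quad & p(x_k \mid \{x_i\}_{i \le j},\, \{x_i\}_{i \ge l})
  = p(x_k \mid x_j,\, x_l), \qquad j < k < l
\end{align*}
```

A $CM_L$ sequence is thus exactly a Markov sequence once the destination $x_N$ is pinned down, which is what makes the class natural for destination-constrained trajectory modeling.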
We define and study various classes of Gaussian CM sequences, obtain their models and characterizations, study their relationships, demonstrate their applications, and provide general guidelines for applying Gaussian CM sequences. We develop various results about Gaussian CM sequences to provide a foundation and tools for their general application, including trajectory modeling and prediction.
We initiate the CM viewpoint for studying reciprocal processes, demonstrate its significance, obtain simple, easy-to-apply results for Gaussian reciprocal sequences, and recommend studying reciprocal processes from the CM viewpoint. For example, we present a relationship between CM and reciprocal processes that provides a foundation for studying reciprocal processes from the CM viewpoint. We then obtain a model for nonsingular Gaussian reciprocal sequences with white dynamic noise, which is easy to apply, extend it to the case of singular sequences, and demonstrate its application. A model for singular sequences had long been out of reach under the existing viewpoint, which demonstrates the significance of studying reciprocal processes from the CM viewpoint.
|
6 |
Some methods for reducing the total consumption and production prediction errors of electricity: Adaptive Linear Regression of Original Predictions and Modeling of Prediction Errors
Shovkun, Oleksandra January 2014 (has links)
Balance between energy consumption and production of electricity is very important for electric power system operation and planning. It provides a good principle of effective operation, reduces the generation cost in a power system, and saves money. Two novel approaches to reduce the total errors between forecast and real electricity consumption are proposed. An Adaptive Linear Regression of Original Predictions (ALROP) was constructed to modify the existing predictions by using simple linear regression with estimation by the Ordinary Least Squares (OLS) method. The Weighted Least Squares (WLS) method was also used as an alternative to OLS. The Modeling of Prediction Errors (MPE) was constructed in order to predict the errors of the existing predictions by using Autoregressive (AR) and Autoregressive Moving-Average (ARMA) models. For the first approach it is observed that the last reported value is of main importance. An attempt was made to improve the performance and to obtain better parameter estimates. The separation of concerns and the combination of concerns were suggested in order to extend the constructed approaches and raise their efficacy. Both methods were tested on data for the fourth region of Sweden ("elområde 4") provided by Bixia. The obtained results indicate that all suggested approaches reduce the total percentage errors of consumption prediction by approximately one half. Results indicate that the ARMA model reduces the total errors slightly better than the other suggested approaches. The most effective way to reduce the total consumption prediction errors seems to be to reduce the total errors for each subregion.
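A minimal sketch of the ALROP-style correction described above (the numbers and variable names are invented for illustration; this is not the thesis's code):

```python
import numpy as np

# Hypothetical hourly data: an existing forecast and the observed load (MWh).
forecast = np.array([512.0, 498.0, 530.0, 551.0, 540.0, 525.0])
actual   = np.array([520.0, 505.0, 541.0, 560.0, 552.0, 534.0])

# Regress the actual values on the original predictions,
# y ~ b0 + b1 * forecast, by ordinary least squares, then use the
# fitted line to correct future forecasts from the base model.
X = np.column_stack([np.ones_like(forecast), forecast])
b0, b1 = np.linalg.lstsq(X, actual, rcond=None)[0]

new_forecast = 545.0                 # next prediction from the base model
corrected = b0 + b1 * new_forecast   # adjusted prediction
print(f"corrected forecast: {corrected:.1f} MWh")
```

The MPE variant would instead fit an AR or ARMA model to the residual series `actual - forecast` (for instance with a standard time-series library) and add the predicted error back onto the next base forecast.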
|
7 |
Supervised Machine Learning Modeling in Secondary Metallurgy : Predicting the Liquid Steel Temperature After the Vacuum Treatment Step of the Vacuum Tank Degasser
Vita, Roberto January 2022 (links)
In recent years the steelmaking industry has been subject to continuous attempts to improve its production route, the main goals being to increase competitiveness and to reduce environmental impact. The development of predictive models has therefore been of crucial importance for achieving such optimization. Models are representations or idealizations of reality that can be used to investigate new process strategies without the need to intervene in the process itself. Together with the development of Industry 4.0, Machine Learning (ML) has emerged as a promising modeling approach for the steel industry. However, ML models are generally difficult to interpret, which makes it complicated to investigate whether the model accurately represents reality. The present work explores the practical usefulness of applied ML models in the context of secondary metallurgy processes in steelmaking. In particular, the application of interest is the prediction of the liquid steel temperature after the vacuum treatment step in the Vacuum Tank Degasser (VTD). The choice of the VTD process step is related to its emerging importance in the SSAB Oxelösund steel plant, due to the planned future investment in an Electric Arc Furnace (EAF) based production line. The temperature is an important process control parameter after the vacuum treatment, since it directly influences the castability of the steel. Furthermore, few models are available that predict the temperature after the vacuum treatment step. The thesis first gives a literature background on the statistical modeling approach, mainly addressing ML, and on the VTD process; all the statistical concepts used are explained in the literature section. It then reports the methodology behind the construction of the ML model for the application of interest and the results of the numerical experiments. Using the described methodology, several findings emerged from the resulting ML models predicting the temperature of the liquid steel after the vacuum treatment in the VTD. A highly complex model is not necessary to achieve high predictive performance on the test data; data quality is instead the most important factor for improving predictive performance. Expertise in both metallurgy and machine learning is essential for creating a model that is both relevant and interpretable to domain experts, since this knowledge underpins the selection of the input data and of the ML model framework. The heat status of the ladle, the stirring time, and the temperature measurements before and after the vacuum steps turn out to be crucial inputs for the predictions. However, drawing more specific conclusions requires a higher model predictive performance, which can only be obtained through a significant improvement in data quality.
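A minimal sketch of the kind of supervised regression described above (the feature names, synthetic values, and the gradient-boosting choice are illustrative assumptions; the thesis evaluates its own feature set and model frameworks on plant data):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 500

# Hypothetical heats: ladle heat status (proxy, degrees C), stirring
# time (min), and steel temperature before vacuum treatment (degrees C).
X = np.column_stack([
    rng.uniform(900, 1200, n),    # ladle_heat_status
    rng.uniform(5, 40, n),        # stirring_time
    rng.uniform(1580, 1660, n),   # temp_before_vacuum
])
# Synthetic target: temperature after vacuum treatment (degrees C).
y = X[:, 2] - 0.4 * X[:, 1] + 0.01 * (X[:, 0] - 1000) + rng.normal(0, 3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print(f"MAE on held-out heats: "
      f"{mean_absolute_error(y_te, model.predict(X_te)):.2f} degrees C")
```

Holding out a test set, as above, matches the abstract's point that predictive performance must be judged on data not used to fit the model's parameters.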
|