91

A classifier-guided sampling method for early-stage design of shipboard energy systems

Backlund, Peter Bond, 26 February 2013
The United States Navy is committed to developing technology for an All-Electric Ship (AES) that promises to improve the affordability and capability of its next-generation warships. With the addition of power-intensive 21st-century electrical systems, future thermal loads are projected to exceed current heat-removal capacity. Furthermore, rising fuel costs necessitate a careful approach to total-ship energy management. Accordingly, the aim of this research is to develop computer tools for early-stage design of shipboard energy distribution systems. A system-level model is developed that enables ship designers to assess the effects of thermal and electrical system configurations on fuel efficiency and survivability. System-level optimization and design exploration based on these energy system models are challenging because the models are sometimes computationally expensive and characterized by discrete design variables and discontinuous responses. To address this challenge, a classifier-guided sampling (CGS) method is developed that uses a Bayesian classifier to pursue solutions with desirable performance characteristics. The CGS method is tested on a set of example problems and applied to the AES energy system model. Results show that the CGS method significantly improves the average rate of convergence toward known global optima when compared to genetic algorithms.
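The abstract does not spell out the algorithm, but the core idea admits a compact sketch. The Python toy below (assuming a continuous objective and scikit-learn's GaussianNB; the thesis targets discrete variables and an expensive ship-energy model) trains a Bayesian classifier on already-evaluated designs and spends each new expensive evaluation on the candidate the classifier deems most likely to be "good":

```python
import random
from sklearn.naive_bayes import GaussianNB

def cgs_minimize(objective, bounds, n_init=40, n_iter=200, batch=50):
    """Toy classifier-guided sampling: bias random candidates toward
    the region a Bayesian classifier labels as 'good'."""
    # Initial random designs, evaluated with the expensive objective.
    X = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_init)]
    y = [objective(x) for x in X]
    for _ in range(n_iter):
        cutoff = sorted(y)[len(y) // 4]                # top quartile = "good"
        labels = [1 if v <= cutoff else 0 for v in y]
        clf = GaussianNB().fit(X, labels)
        # Draw a batch of cheap candidates; evaluate only the one the
        # classifier considers most likely to be good.
        cand = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(batch)]
        probs = (clf.predict_proba(cand)[:, 1]
                 if len(set(labels)) > 1 else [0.5] * batch)
        best = cand[max(range(batch), key=lambda i: probs[i])]
        X.append(best)
        y.append(objective(best))
    i = min(range(len(y)), key=y.__getitem__)
    return X[i], y[i]
```

The effect mirrors the abstract's claim: sampling effort concentrates in regions the classifier associates with good performance, which is what improves convergence over unguided search.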
92

ExtractCFG : a framework to enable accurate timing back annotation of C language source code

Goswami, Arindam, 30 September 2011
The current trend in embedded systems design is to move the initial design and exploration phase to a higher level of abstraction in order to tackle the rapidly increasing complexity of embedded systems. One approach to abstracting software development from low-level platform details is host-compiled simulation. Characteristics of the target platform are represented in a host-compiled simulation model by annotating the high-level source code. Compiler optimizations make accurate annotation of the code a challenging task. In this thesis, we describe an approach to enable correct back-annotation of C code at the basic-block level while taking compiler optimizations into account.
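As a hedged illustration of what basic-block back-annotation produces (the helper names here are hypothetical, not the thesis's): each source-level basic block is followed by a call that advances the simulated target clock by that block's estimated cycle cost, with the estimates derived from the optimized target binary and mapped back to source blocks:

```python
def annotate(source_blocks, cycle_estimates):
    """source_blocks: list of (block_id, c_code) tuples in source order.
    cycle_estimates: block_id -> estimated cycles on the target platform,
    measured on the optimized binary and mapped back to source blocks."""
    annotated = []
    for block_id, code in source_blocks:
        annotated.append(code)
        # Hypothetical simulation primitive that advances target time.
        annotated.append(f"consume_cycles({cycle_estimates[block_id]});")
    return "\n".join(annotated)

blocks = [("entry", "int s = 0;"),
          ("body",  "s += a[i];"),
          ("exit",  "return s;")]
print(annotate(blocks, {"entry": 2, "body": 4, "exit": 1}))
```

The hard part the thesis addresses is exactly the mapping step assumed away in `cycle_estimates`: after optimization, the compiled binary's control-flow graph no longer matches the source's, so blocks must be matched between the two before costs can be attributed.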
93

Cave Generation the Game : A Study of Procedural Generation in 2D Platform Games

Johannesson, Nick; Kevin, Simon, January 2018
This study examines design choices in computer games for creating interesting and varied levels that can appeal to both seasoned gamers and newcomers to one of the most popular hobbies in the world. Designing a level is a time-consuming process, and one way to shorten this work is to let a computer create the levels from specific instructions, through a process called procedural generation. The goal of this study is to find out which design choices need to be taken into account when building programs that procedurally generate game levels tailored to a specific target audience. The research questions are: Which elements are required in a 2D platform game for players to enjoy it? And which of the examined PCG algorithms and parameter settings are suitable for creating a game that fulfils these requirements? To answer these questions, a computer game was developed that used various forms of procedural generation; a parameterized generator of this kind is sketched below. Respondents from target audiences defined by their gaming habits were interviewed to find out which aspects people from each group look for in a video game, and the in-house game was designed around their answers. A larger number of participants then tested the game and answered a survey, which was compiled and analysed to determine which aspects of the game's generated levels the players from each target audience felt the computer had handled successfully. Among the results: people who play games often are more motivated by a score system than people who play less often, and all target audiences preferred strong variation between levels.
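A minimal illustration of the kind of tunable generator such a study builds (the parameter names below are invented for illustration, not the authors'): a toy tile-row generator whose knobs could be retuned per target audience:

```python
import random

def generate_level(width=40, gap_rate=0.15, enemy_rate=0.1, seed=None):
    """Toy 1D-terrain platformer generator: '-' ground, ' ' gap, 'E' enemy.
    gap_rate / enemy_rate stand in for the per-audience parameters the
    study varies (illustrative names, not the authors')."""
    rng = random.Random(seed)
    tiles = []
    for _ in range(width):
        if tiles and tiles[-1] != " " and rng.random() < gap_rate:
            tiles.append(" ")   # never two gaps in a row: keeps it jumpable
        elif rng.random() < enemy_rate:
            tiles.append("E")
        else:
            tiles.append("-")
    return "".join(tiles)

print(generate_level(seed=42))
```

Raising `enemy_rate` for frequent players and lowering it for newcomers is the sort of audience-specific retuning the study's interviews were meant to inform.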
94

Agile bandpass sampling RF receivers for low power applications

Lolis, Luis, 11 March 2011
New needs in wireless communications push the development of RF transmission systems toward reconfigurability, multistandard operation and low power consumption. The objective of this work is to propose and design a new receiver architecture capable of addressing these aspects in the context of WPAN networks. The bandpass sampling (BPS) technique is applied and makes it possible to exploit a number of advantages of discrete-time (DT) signal processing, notably filtering and decimation. Compared to software-defined radio (SDR), these techniques relax the ADC constraints while keeping multistandard and reconfigurable features. A wideband system-level simulation tool is developed in MATLAB to address system-level limitations such as spectral aliasing and the gain-bandwidth product. Together with a new system design method, the tool helps separate the constraints of the individual blocks and define the optimum frequency plan and filtering. By separating the different contributions to SNDR degradation (thermal noise, phase noise, nonlinearity and aliasing), critical specifications tied to power consumption can be relaxed. The BPS architecture proposed in the thesis results from a quantitative comparison of different BPS architectures, applying the developed system design method and tool. Aspects such as the optimization of filtering between continuous-time and discrete-time techniques, and the associated frequency plan, lead to the architecture that represents the best trade-off between power consumption and agility in the targeted context. The DT filtering block is identified as critical, and a study of its circuit-level implementation limitations is carried out. Effects such as parasitic capacitances, capacitance mismatch, switch noise, nonlinear distortion and finite OTA gain are evaluated through behavioural VHDL-AMS modelling, showing the robustness of discrete-time-oriented circuits against the constraints of new integrated technologies. Finally, phase noise specifications are given, considering that frequency synthesis circuits may represent up to 30% of the power consumption. For that goal, a new numerical method is proposed that evaluates the signal-to-jitter-distortion ratio (SDjR) in the BPS process. Moreover, a non-intuitive conclusion emerges: reducing the sampling frequency does not increase the constraints in terms of jitter. The proposed architecture is at the circuit-design stage in the LETI project team for a final proof of concept.
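The constraint behind bandpass sampling is standard and worth stating: a band [f_L, f_H] can be sampled without aliasing at rates far below the Nyquist rate of the carrier, provided 2·f_H/n ≤ fs ≤ 2·f_L/(n−1) for some integer n up to floor(f_H/B). A short sketch (the 2.40-2.48 GHz example band is illustrative, not taken from the thesis):

```python
def valid_bandpass_rates(f_low, f_high):
    """Alias-free sampling-rate ranges for a band [f_low, f_high] (Hz),
    from the classic bandpass-sampling condition
    2*f_high/n <= fs <= 2*f_low/(n-1), for n = 2..floor(f_high/B)."""
    bw = f_high - f_low
    ranges = []
    for n in range(2, int(f_high // bw) + 1):
        lo, hi = 2 * f_high / n, 2 * f_low / (n - 1)
        if lo <= hi:
            ranges.append((n, lo, hi))
    return ranges

# Example: a 2.40-2.48 GHz band can be sampled at a few hundred MHz,
# which is exactly what relaxes the ADC constraints mentioned above.
for n, lo, hi in valid_bandpass_rates(2.40e9, 2.48e9):
    print(f"n={n:2d}: {lo / 1e6:8.1f} - {hi / 1e6:8.1f} MHz")
```

Choosing among these permitted ranges is the frequency-planning problem the thesis's system design tool explores jointly with the filtering.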
95

High-Level Approach for Hardware Acceleration of High-Performance Computing in Finance

Mena Morales, Valentin, 12 July 2017
The need for resources in high-performance computing (HPC) is generally met by scaling up server farms, to the detriment of energy consumption. Accelerating HPC applications on heterogeneous platforms, such as FPGAs or GPUs, offers a better architectural compromise, since they can reduce the energy consumption of a deployed system. This acceleration, however, requires a change of programming paradigm, which translates into an increased level of programming complexity for software experts. This is most notably the case for developers in quantitative finance, where applications constantly evolve and grow in complexity to stay competitive and comply with legislative changes, putting even more pressure on the programmability of acceleration solutions. In this context, high-level design flows such as high-level synthesis (HLS) for programming FPGAs are not enough. A domain-specific approach can help reach performance requirements without impairing the programmability of accelerated applications. We propose in this thesis a high-level design approach that relies on OpenCL as a heterogeneous programming standard, and more precisely on Altera's recent implementation of OpenCL for FPGAs. Four main contributions are proposed: (1) an initial study of the integration of hardware computing cores into a software library for quantitative finance (QuantLib); (2) an exploration of different architectures and their respective performance, as well as the design of a dedicated architecture for the pricing of American options and their implied volatility, based on a high-level design flow; (3) a detailed characterization of an Altera OpenCL platform, from elementary operators, memory accesses and control overlays up to the communication links it is made of; (4) a compilation flow specific to the quantitative finance domain, relying on this characterization and on a description of the considered financial applications (option pricing).
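For a feel of the kernels such a flow offloads, here is a hedged host-side sketch: a plain Monte Carlo pricer for a European call under Black-Scholes dynamics. The thesis targets the harder American-option and implied-volatility cases on FPGA via OpenCL; this European toy only illustrates the embarrassingly parallel path-simulation structure that makes option pricing a natural acceleration target:

```python
import math, random

def mc_european_call(s0, k, r, sigma, t, n_paths=100_000, seed=1):
    """Monte Carlo price of a European call under Black-Scholes dynamics.
    Each path is independent, so the loop body maps directly onto a
    work-item in an OpenCL kernel (illustrative, not the thesis's code)."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma * sigma) * t
    vol = sigma * math.sqrt(t)
    payoff_sum = 0.0
    for _ in range(n_paths):
        st = s0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        payoff_sum += max(st - k, 0.0)
    return math.exp(-r * t) * payoff_sum / n_paths

print(mc_european_call(s0=100, k=100, r=0.05, sigma=0.2, t=1.0))  # ~10.45
```

American options break this independence (early exercise couples paths through a backward regression step), which is why they warrant the dedicated architecture of contribution (2).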
96

A Functional Approach to Digital System Modeling and Design Space Exploration

Toczek, Tomasz, 15 June 2011
This work presents a novel system-level design method based on typed functional programming, aiming to mitigate some of the issues that make the development of modern digital systems complex, such as their increasing size and the variety of their subcomponents. We propose a range of mechanisms that allow several description formalisms ("models of computation"), possibly at different abstraction levels, to be mixed within a single design. Moreover, the designer is provided with means to directly express the explorable parameters of each part of the design, and to find acceptable values for them through a partially or fully automated system-wide architectural exploration step. The advantages of these new strategies are illustrated with several examples.
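A rough analogue of the "explicit explorable parameters plus automated exploration" idea, in Python rather than the typed functional language the thesis uses (parameter names and the cost model below are invented for illustration):

```python
from itertools import product

# Hypothetical design with its explorable parameters stated up front.
PARAMS = {"fifo_depth": [8, 16, 32], "lanes": [1, 2, 4], "freq_mhz": [100, 200]}

def cost(cfg):
    # Stand-in cost model: area grows with buffering and parallelism,
    # latency shrinks with lane count and clock frequency.
    area = cfg["fifo_depth"] * cfg["lanes"]
    latency = 1e3 / (cfg["lanes"] * cfg["freq_mhz"])
    return area + 10 * latency

# Fully automated system-wide exploration: enumerate and rank every
# configuration (a real flow would prune or search heuristically).
configs = [dict(zip(PARAMS, vals)) for vals in product(*PARAMS.values())]
best = min(configs, key=cost)
print(best, round(cost(best), 2))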
97

Circuit-level approaches to mitigate process variability and soft errors in FinFET logic cells

Lackmann-Zimpeck, Alexandra, 24 September 2019
Process variability mitigation and radiation hardness are key reliability requirements as chip manufacturing advances deeper into the nanometer regime. Parametric yield loss and critical failures in system behavior are the major consequences of these issues. Related works explore the influence of process variability and single-event transients (SETs) on circuits based on FinFET technologies, but approaches to mitigate them are lacking. For these reasons, from a design standpoint, considerable effort should be made to understand and reduce the impacts introduced by these reliability challenges. The main contributions of this PhD thesis are to: 1) investigate the behavior of FinFET logic cells under process variations and radiation effects; 2) evaluate four circuit-level approaches to attenuate the impact caused by work-function fluctuations (WFF) and soft errors (SE); 3) provide an overall comparison between all techniques applied in this work; 4) trace a trade-off between the gains and penalties of each approach regarding performance, power, area, SET cross-section, and SET pulse width. Transistor reordering, decoupling cells, Schmitt triggers, and sleep transistors are the four circuit-level mitigation techniques explored. The potential of each to make logic cells more robust to process variability and radiation-induced soft errors is assessed by comparing the standard version's results with the design using each approach. This thesis also establishes the mitigation tendency when different levels of variation, transistor sizing, and radiation-particle characteristics such as linear energy transfer (LET) are applied to designs with these techniques. Process variability is evaluated through Monte Carlo (MC) simulations in SPICE with the WFF modeled as a Gaussian function, while SE susceptibility is estimated using the radiation event generator tool MUSCA SEP3 (developed at ONERA), also based on an MC method, which accounts for the radiation environment, layout features, and the electrical properties of the devices. In general, the proposed approaches improve on the state of the art by providing circuit-level options to reduce process variability effects and SE susceptibility at low penalty and design complexity. Transistor reordering can increase the robustness of logic cells under process variations by up to 8%, but this method is not favorable for SE mitigation. The insertion of decoupling cells shows interesting outcomes for power variability control at variation levels above 4%, and it can attenuate delay variability by up to 10% for a manufacturing process with 3% WFF. Depending on the LET, a design with decoupling cells can decrease the SE susceptibility of logic cells by up to 10%. The use of Schmitt triggers at the output of FinFET cells can improve variability sensitivity by up to 50%. The sleep transistor approach improves power variability by around 12% for a WFF of 5%, but its benefit for delay variability depends on how the transistors are arranged with the sleep transistor in the pull-down network. Adding a sleep transistor makes all logic cells studied fault-free even in the near-threshold regime. Thus, the best approach to mitigate process variability is the use of Schmitt triggers, while the sleep transistor technique is the most efficient for SE mitigation. However, the Schmitt trigger technique presents the highest penalties in area, performance, and power, so depending on the application, the sleep transistor technique can be the most appropriate to mitigate process variability effects.
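A toy version of the variability loop described above (the real flow runs SPICE on cell netlists; the alpha-power delay surrogate and all constants below are stand-ins): sample the work-function fluctuation as a Gaussian, propagate it to the threshold voltage, and read off the spread in delay:

```python
import random, statistics

def mc_wff_delay(wff_sigma_pct, n=2000, vth0=0.25, seed=3):
    """Toy Monte Carlo in the spirit of the thesis: WFF modeled as a
    Gaussian (sigma as % of nominal) shifts the threshold voltage; a
    stand-in alpha-power law (Sakurai-Newton) turns that into delay.
    Returns (mean delay, sigma/mu). Not a substitute for SPICE."""
    rng = random.Random(seed)
    vdd, alpha, k = 0.7, 1.3, 1.0        # illustrative constants
    delays = []
    for _ in range(n):
        vth = vth0 * (1.0 + rng.gauss(0.0, wff_sigma_pct / 100.0))
        delays.append(k * vdd / (vdd - vth) ** alpha)
    mean = statistics.mean(delays)
    return mean, statistics.stdev(delays) / mean

for pct in (3, 5, 8):                    # WFF levels echoed in the abstract
    print(f"{pct}% WFF ->", mc_wff_delay(pct))
```

Even this surrogate shows the qualitative effect the thesis quantifies: delay spread grows faster than the WFF level, and worsens as Vdd approaches Vth (the near-threshold regime mentioned above).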
98

Mixed-Initiative Tile-Based Designer : Examining Expressive Range And Controllability For 2D Tile-Based Levels

Dolfe, Rafael, January 2022
This paper investigates the effectiveness of expressive range and controllability for 2-dimensional tile-based procedural content generation. Procedural content generation (PCG) is the automation of content creation, often in games, and tile-based PCG constrains the generated content to a grid structure. Mixed-initiative PCG, the integration of user control into PCG, has been researched previously for tile-based PCG, but implementations have been limited by a lack of breadth and of user control over the algorithm. As a result, the expressive ranges and controllability of those algorithms have not been comprehensive, and in turn the concepts of expressive range and controllability have not been thoroughly scrutinized. To examine these concepts, an implementation of declarative modelling, named Mixed-initiative Tile-based Designer (MTD), is made. The MTD combines the mission and shape grammar algorithm proposed by Dormans with the level generation system of Spelunky. Nine sets of input parameters (scenarios) are tested, each with 2000 levels generated. For each scenario, the expressive range of the output is examined using the standard evaluation metrics linearity and leniency. The results indicate that an analysis based on expressive range needs to be supported by additional analyses for the insights drawn to be more general. In particular, expressive range needs to be complemented by manual inspection, and linearity, when applied to sufficiently complex levels, needs to be complemented by additional evaluation metrics. Controllability, on the other hand, was found to have more significant limitations in its current form because of the normalization of the data that goes into the expressive range; one solution is to visualize the raw data of the expressive range instead and make liberal use of manual inspection. As a secondary study, the feasibility of implementing declarative modelling for 2-dimensional tile-based PCG is investigated by analyzing the MTD; in particular, the MTD's user interface, procedural output and controllability are examined.
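For concreteness, one common reading of the two metrics, after Smith and Whitehead's expressive range analysis (the formulas and the leniency weight table below are illustrative assumptions, not necessarily this paper's; requires Python 3.10+ for statistics.correlation):

```python
import statistics

def linearity(points):
    """How well a level's platform centers fit a straight line: here,
    the absolute Pearson correlation between the x and y coordinates
    of each platform's center point."""
    xs, ys = zip(*points)
    if len(set(xs)) < 2 or len(set(ys)) < 2:
        return 1.0                       # degenerate level: perfectly flat
    return abs(statistics.correlation(xs, ys))

def leniency(tiles, weights={"enemy": -1.0, "gap": -0.5, "powerup": 1.0}):
    """Average hostility/safety weight over a level's notable tiles;
    the weight table is an illustrative assumption."""
    vals = [weights.get(t, 0.0) for t in tiles]
    return sum(vals) / len(vals) if vals else 0.0

print(linearity([(0, 1), (3, 2), (6, 2), (9, 4)]))
print(leniency(["enemy", "gap", "floor", "powerup"]))
```

Plotting each generated level as a (linearity, leniency) point, 2000 per scenario, gives the expressive-range heatmap whose normalization the paper critiques.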
99

Emergent Gameplay and the Affordance of Features in Open-World Video Game Environments

Nur Fauzan, Harits, January 2023
There has been a noticeable increase in interest in open-world games, especially those that incorporate emergent gameplay. Emergent gameplay refers to activities players perform to create properties that cannot arise without the interaction of multiple game components. While it is evident that emergent gameplay can arise from the interaction between multiple game components, the role of the environment in influencing its occurrence remains unclear. This study therefore aims to identify the role of the environment by using the concept of affordance, which describes how an actor can interact with an environment. To gather insights, a small open-world game prototype was developed to collect gameplay data from players, who were interviewed after testing the prototype. A thematic analysis was conducted on the interview data and complemented by visualizing the players' gameplay data. The results indicate that the occurrence of emergent gameplay can be enhanced by concealing the affordances and signifiers of the environment. Additionally, the environment acts as an extension of the players' interaction space when players can only interact with it through other game mechanics.
100

Attributes of Tool Development : Proceduralism for the Environment Artist

Andersson, Karl, January 2023
This paper explores which attributes are important for the creation of environment art tools. The purpose is to ensure that when a tool is developed, it is developed properly within a given time frame. This matters because tool development is costly in both time and capital, so when those resources are spent, the resulting tool should be of high quality and solve the problem the development team set out to address. Through interviews, survey forms and the creation of our own tool, I hope to identify these attributes and provide insights into how a studio or team might apply them for their own purposes.
