11. Task Load Modelling for LTE Baseband Signal Processing with Artificial Neural Network Approach. Wang, Lu. January 2014.
This thesis investigates the development of an automatic or guided-automatic tool to predict hardware (HW) resource occupation, referred to as task load, from the software (SW) application algorithm parameters in an LTE base station. For signal processing in an LTE base station it is important to know how many HW resources will be used when a SW algorithm runs on a specific platform. This information helps one understand the system and platform better, which facilitates a reasonable use of the available resources. Developing the tool is treated as building a mathematical model between HW task load and SW parameters, a process defined as function approximation. According to the universal approximation theorem, the problem can be solved with an intelligent method called artificial neural networks (ANNs): the theorem states that any continuous function can be approximated by a two-layered neural network, provided the activation function and the number of hidden neurons are chosen properly. The thesis documents a workflow for building the model with the ANN method, together with research on data subset selection using mathematical methods, such as Partial Correlation and Sequential Searching, as a data pre-processing step for the ANN approach. To make the data selection method suitable for ANNs, a modification of the Sequential Searching method is proposed, which gives a better result. The results show that it is possible to develop such a guided-automatic tool for prediction in LTE baseband signal processing under specific precision constraints. Compared to other approaches, this model tool has a higher precision level and better adaptivity, meaning it can be used in any part of the platform even though the transmission channels differ.
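The abstract does not reproduce the thesis model itself. As a minimal sketch of the universal approximation idea it invokes, the code below fits a two-layer (one hidden layer) network to a toy one-dimensional function by full-batch gradient descent. The target function, network size, and learning rate are all invented for illustration and are not the thesis's task-load model.

```python
import numpy as np

# Two-layer network (one tanh hidden layer, linear output) fitted by
# full-batch gradient descent. All hyperparameters are illustrative.
rng = np.random.default_rng(0)

# Toy stand-in for the unknown "task load vs. parameter" function.
x = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = np.sin(3.0 * x) + 0.5 * x

n_hidden = 32
W1 = rng.normal(0.0, 1.0, (1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.1, (n_hidden, 1))
b2 = np.zeros(1)

lr = 0.1
for _ in range(10_000):
    h = np.tanh(x @ W1 + b1)            # hidden layer, sigmoidal activation
    y_hat = h @ W2 + b2                 # linear output layer
    err = y_hat - y
    # Backpropagation of the mean-squared error.
    gW2 = h.T @ err / len(x)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)  # tanh'(a) = 1 - tanh(a)^2
    gW1 = x.T @ dh / len(x)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

y_hat = np.tanh(x @ W1 + b1) @ W2 + b2
mse = float(np.mean((y_hat - y) ** 2))
print(f"final MSE: {mse:.4f}")
```

With enough hidden neurons and a suitable activation, the same recipe approximates any continuous target on a bounded interval, which is exactly the property the thesis relies on.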
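The abstract names Sequential Searching (greedy forward selection) as the pre-processing step it modifies, without giving details. Below is a generic forward-selection sketch; the feature names and the toy scoring callback are illustrative assumptions, not the thesis's modified criterion.

```python
# Greedy forward selection: repeatedly add the feature that most improves
# a validation score (lower is better), stopping when no feature helps.
def forward_select(features, score, max_features=None):
    selected = []
    remaining = list(features)
    best = float("inf")
    while remaining and (max_features is None or len(selected) < max_features):
        trial_scores = {f: score(selected + [f]) for f in remaining}
        f_best = min(trial_scores, key=trial_scores.get)
        if trial_scores[f_best] >= best:
            break                      # no remaining feature improves the score
        best = trial_scores[f_best]
        selected.append(f_best)
        remaining.remove(f_best)
    return selected, best

# Toy example: the error drops only when the truly relevant features are
# included; a small per-feature penalty discourages useless additions.
relevant = {"bandwidth", "n_users"}
def toy_score(subset):
    return 1.0 - 0.4 * len(relevant & set(subset)) + 0.01 * len(subset)

chosen, err = forward_select(["bandwidth", "n_users", "mcs", "tti"], toy_score)
print(chosen, round(err, 2))  # → ['bandwidth', 'n_users'] 0.22
```

In the thesis setting the scoring callback would be the validation error of the ANN trained on the candidate subset, which is what makes the selection step expensive and worth modifying.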
12. The Use of Physiological Data and Machine Learning to Detect Stress Events for Adaptive Automation. Falkenberg, Zachary. 26 July 2023.
No description available.
13. Investigating The Universality And Comprehensive Ability Of Measures To Assess The State Of Workload. Abich, Julian. 1 January 2013.
Measures of workload have been developed on the basis of various definitions: some are designed to capture the multi-dimensional aspects of a unitary resource pool (Kahneman, 1973), while others build on multiple resource theory (Wickens, 2002). Although many theory-based workload measures exist, others have been constructed to serve the purpose of specific experimental tasks. As a result, it is likely that not every workload measure is reliable and valid for all tasks, much less for every domain. To date, no single measure, systematically tested across experimental tasks, domains, and against other measures, is considered a universal measure of workload. Most researchers would argue that multiple measures from various categories should be applied to a given task to assess workload comprehensively. Study 1 succeeded in its goal of establishing task load manipulations for two theoretically different tasks that induce distinct levels of workload, as assessed by both subjective and performance measures. The subjective responses support the standardization and validation of the tasks, and of their demands, for investigating workload. After investigating the use of subjective and objective measures of workload to identify a universal and comprehensive measure or set of measures, Study 2 leads to the conclusion that no such measure, or set of measures, exists. This is not to say that one will never be conceived and developed, but at this time none resides in the psychometric catalog. Instead, a more suitable approach appears to be to customize a set of workload measures based on the task. The novel approach of assessing the sensitivity and comprehensive ability of conjointly utilizing subjective, performance, and physiological workload measures for theoretically different tasks within the same domain contributes to the theory by laying a foundation for improved workload research methodology.
The applicable contribution of this project is a stepping-stone towards developing complex profiles of workload for use in closed-loop systems, such as human-robot team interaction. Identifying the best combination of workload measures enables human factors practitioners, trainers, and task designers to improve the methodology and evaluation of system designs, training requirements, and personnel selection.
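Several entries in this listing rely on the NASA Task Load Index. For readers unfamiliar with its scoring, the sketch below implements the standard weighted procedure: six subscale ratings (0-100) combined with weights derived from 15 pairwise comparisons. The sample ratings and comparison outcomes are invented for illustration.

```python
# Standard NASA-TLX weighted scoring (Hart & Staveland): each dimension's
# weight is the number of pairwise comparisons it won (weights sum to 15).
DIMENSIONS = ("mental", "physical", "temporal", "performance", "effort", "frustration")

def tlx_weights(pairwise_winners):
    """Count how often each dimension was chosen across the 15 comparisons."""
    if len(pairwise_winners) != 15:
        raise ValueError("NASA-TLX uses exactly 15 pairwise comparisons")
    return {d: pairwise_winners.count(d) for d in DIMENSIONS}

def tlx_score(ratings, weights):
    """Overall workload: weighted mean of the six subscale ratings."""
    return sum(ratings[d] * weights[d] for d in DIMENSIONS) / 15.0

# Example session (illustrative numbers only, not data from any study above).
ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 40, "effort": 65, "frustration": 30}
winners = (["mental"] * 5 + ["effort"] * 4 + ["temporal"] * 3
           + ["performance"] * 2 + ["frustration"] * 1)
weights = tlx_weights(winners)
print(round(tlx_score(ratings, weights), 2))  # → 59.0
```

The unweighted variant (raw TLX) simply averages the six ratings; studies like those above must state which variant they administered, since the two can rank conditions differently.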
14. Interaktionskvalitet - hur mäts det? (Interaction quality - how is it measured?) Friberg, Annika. January 2009.
Technical developments have led to the broadcasting of massive amounts of information at high velocities. We must learn to handle this flow. To maximize the benefits of new technologies and avoid the problems that this immense information flow brings, interaction quality should be studied. We must adjust interfaces to the user, because the user does not have the ability to adapt to and sort overly large amounts of information. We must develop systems that make the human more efficient when using interfaces. To adjust interfaces to the user's needs and limitations, knowledge about human cognitive processes is required. When cognitive workload is studied, it is important that a flexible, easily accessible, and non-intrusive technique is used to obtain unbiased results; at the same time, reliability is of great importance. To design interfaces with high interaction quality, a technique to evaluate them is required. The aim of this paper is to establish a method well suited for measuring interaction quality. For measuring interaction quality, a combination of subjective and physiological methods is recommended. This comprises Functional near-infrared spectroscopy, a physiological method that measures brain activity using light sources and detectors placed on the frontal lobe; Electrodermal activity, a physiological method that measures skin conductance using electrodes placed on the skin; and the NASA task load index, a subjective, multidimensional method based on card sorting that measures perceived cognitive workload on a continuous scale. Measuring with these methods can increase interaction quality in interactive, physical, and digital interfaces. An estimation of interaction quality can help eliminate interaction errors, thus improving the user's interaction experience.
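The abstract recommends combining subjective (NASA-TLX) and physiological (fNIRS, EDA) measurements but does not specify a fusion rule. One common, assumption-laden approach is to z-score each channel across trials and average the channels into a per-trial composite index:

```python
import statistics

# Hedged fusion sketch: z-score each measurement channel across trials,
# then average the channels per trial into one composite workload index.
# The fusion rule and all numbers below are illustrative assumptions.

def zscores(values):
    mu = statistics.fmean(values)
    sd = statistics.stdev(values)
    return [(v - mu) / sd for v in values]

def composite_workload(tlx, fnirs, eda):
    """Per-trial composite: mean of the z-scored channels."""
    channels = [zscores(tlx), zscores(fnirs), zscores(eda)]
    return [statistics.fmean(trial) for trial in zip(*channels)]

# Illustrative data for five trials (not from the thesis).
tlx = [35, 50, 62, 71, 80]               # NASA-TLX overall scores
fnirs = [0.10, 0.14, 0.18, 0.22, 0.25]   # e.g. relative HbO change
eda = [2.1, 2.6, 3.4, 3.9, 4.8]          # e.g. skin conductance, microsiemens
composite = composite_workload(tlx, fnirs, eda)
print([round(c, 2) for c in composite])
```

Z-scoring puts the incommensurable units (questionnaire points, optical signal change, microsiemens) on one scale; whether equal channel weights are appropriate is itself a research question the abstract leaves open.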
15. Implementation and Analysis of Co-Located Virtual Reality for Scientific Data Visualization. Jordan M. McGraw. 7 May 2020.
Advancements in virtual reality (VR) technologies have drawn both critique and acclaim in recent years. Academic researchers have already begun to take advantage of these immersive technologies across all manner of settings. Using immersive technologies, educators are able to more easily interpret complex information with students and colleagues. Despite the advantages these technologies bring, some drawbacks remain. One particular drawback is the difficulty of engaging in immersive environments with others in a shared physical space (i.e., with a shared virtual environment). A common strategy for improving collaborative data exploration has been to use technological substitutions to make distant users feel they are collaborating in the same space. This research, however, focuses on how virtual reality can build upon real-world interactions that take place in the same physical space (i.e., collaborative, co-located, multi-user virtual reality).

In this study we address two primary dimensions of collaborative data visualization and analysis: (1) we detail the implementation of a novel co-located VR hardware and software system; (2) we conduct a formal user experience study of the novel system using the NASA Task Load Index (Hart, 1986) and introduce the Modified User Experience Inventory, a new user study inventory based upon the Unified User Experience Inventory (Tcha-Tokey, Christmann, Loup-Escande, & Richir, 2016), to empirically observe the dependent measures of Workload, Presence, Engagement, Consequence, and Immersion. A total of 77 participants volunteered to join a demonstration of this technology at Purdue University. In groups ranging from two to four, participants shared a co-located virtual environment built to visualize point cloud measurements of exploded supernovae. This study is observational rather than experimental. We found moderately high levels of user experience and moderate levels of workload demand. We describe the implementation of the software platform and present user reactions to the technology in detail within this manuscript.