351

Data-Driven Simulation Modeling of Construction and Infrastructure Operations Using Process Knowledge Discovery

Akhavian, Reza 01 January 2015 (has links)
Within the architecture, engineering, and construction (AEC) domain, simulation modeling is mainly used to facilitate decision-making by enabling the assessment of different operational plans and resource arrangements that are otherwise difficult (if not impossible), expensive, or time-consuming to evaluate in real-world settings. The accuracy of such models directly affects their reliability as a basis for important decisions such as project completion time estimation and resource allocation. Compared to other industries, this is particularly important in construction and infrastructure projects due to the high resource costs and the societal impacts of these projects. Discrete event simulation (DES) is a decision-making tool that can benefit the design, control, and management of construction operations. Despite recent advancements, most DES models used in construction are created during the early planning and design stage, when the lack of factual information from the project prohibits the use of realistic data in simulation modeling. The resulting models, therefore, are often built using rigid (subjective) assumptions and design parameters (e.g., precedence logic, activity durations). In all such cases, and in the absence of an inclusive methodology to incorporate real field data as the project evolves, modelers rely on information from previous projects (a.k.a. secondary data), expert judgment, and subjective assumptions to generate simulations to predict future performance. These and similar shortcomings have to a large extent limited the use of traditional DES tools to preliminary studies and long-term planning of construction projects. In the realm of business process management, process mining, a relatively new research domain, seeks to automatically discover a process model by observing activity records and extracting information about processes. The research presented in this Ph.D. dissertation was in part inspired by the prospect of construction process mining using sensory data collected from field agents, which enables the extraction of the operational knowledge necessary to generate and maintain the fidelity of simulation models. A preliminary study was conducted to demonstrate the feasibility and applicability of data-driven knowledge-based simulation modeling, with a focus on data collection using a wireless sensor network (WSN) and a rule-based taxonomy of activities. The resulting knowledge-based simulation models performed very well in predicting key performance measures of real construction systems. Next, a pervasive mobile data collection and mining technique was adopted and an activity recognition framework for construction equipment and worker tasks was developed. Data were collected from construction entities using smartphone accelerometers and gyroscopes to generate statistically significant time- and frequency-domain features. The extracted features served as input to different types of machine learning algorithms applied to various construction activities. The trained predictive algorithms were then used to extract activity durations and calculate probability distributions to be fused into corresponding DES models. Results indicated that the generated data-driven knowledge-based simulation models outperform static models created from engineering assumptions and estimates with regard to how closely their performance-measure outputs match reality.
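The sensing-to-simulation pipeline summarized above (time- and frequency-domain features from smartphone inertial sensors, a trained classifier, and activity-duration distributions fed into DES) can be sketched roughly as follows. This is a minimal illustration under assumed choices, not the dissertation's implementation: the 2 s window, the specific feature set, the random-forest classifier, and the lognormal duration fit are all assumptions, and the training data here are synthetic placeholders.

```python
import numpy as np
from scipy import stats
from sklearn.ensemble import RandomForestClassifier

def window_features(signal, fs=100, win_s=2.0):
    """Split a 1-D inertial signal into fixed windows and compute simple
    time- and frequency-domain features for each window."""
    n = int(fs * win_s)
    feats = []
    for start in range(0, len(signal) - n + 1, n):
        w = signal[start:start + n]
        spectrum = np.abs(np.fft.rfft(w))
        feats.append([
            w.mean(), w.std(), w.min(), w.max(),      # time-domain statistics
            stats.skew(w), stats.kurtosis(w),
            spectrum.argmax() * fs / n,               # dominant frequency (Hz)
            float((spectrum ** 2).sum()),             # spectral energy
        ])
    return np.array(feats)

rng = np.random.default_rng(0)

# Hypothetical labelled windows for three activities, e.g. load / haul / idle.
X_train = rng.normal(size=(300, 8))
y_train = rng.integers(0, 3, size=300)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Classify new sensor windows, then turn runs of identical labels into
# activity durations and fit a distribution to feed the DES model.
X_new = window_features(rng.normal(size=100 * 60))    # one minute at 100 Hz
labels = clf.predict(X_new)

durations, run = [], 1
for prev, cur in zip(labels, labels[1:]):
    if cur == prev:
        run += 1
    else:
        durations.append(run * 2.0)                   # seconds per window
        run = 1
durations.append(run * 2.0)

shape, loc, scale = stats.lognorm.fit(durations, floc=0)
print("Activity-duration distribution (lognormal):", shape, scale)
```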
352

Diseño de identidades digitales: metodología iterativa para la creación y desarrollo de marcas

Canavese Arbona, Ana 07 September 2023 (has links)
The advancement of digital media has significantly impacted how we consume in recent decades. The invention of the Internet, its democratization, the emergence of multiple access devices and social networks, the technification of objects, and the arrival of artificial intelligence have had a significant impact on society and on essential entities such as companies and their brands. The full integration of digitalisation into brands is a reality, and this medium is increasingly chosen as a priority space for providing value to the public through products and services. This research explores the meaning of the digital brand and its essential characteristics. To this end, it provides a historical overview of how identity signs have evolved in relation to technology, offering a comprehensive view of their adaptation to each digital advance. It also analyses the multiple meanings of the brand and reviews the specific methodology for brand creation: branding. To understand the particularities and advantages of the frameworks applied in the digital and software-development sectors, the study examines iterative methodologies based on agile systems such as Design Thinking, User-Centered Design, and Atomic Design, among others. Finally, building on this study, a hybrid methodology is proposed for creating digital brands that can better adapt to changes in the context of the medium, drawing on complementary processes, tools, and platforms used in technological fields and incorporating a constant review process to ensure the quality and proper functioning of the brands at all times. / Canavese Arbona, A. (2023). Diseño de identidades digitales: metodología iterativa para la creación y desarrollo de marcas [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/196737
353

Data-Driven Traffic Forecasting for Completed Vehicle Simulation : A Case Study with Volvo Test Trucks

Shahrokhi, Samaneh January 2023 (has links)
This thesis offers a thorough investigation into the application of machine learning algorithms for predicting the presence of vehicles in a traffic setting. The research primarily focuses on enhancing vehicle simulation by employing data-driven traffic prediction methods. The study approaches the problem as a binary classification task. Various supervised learning algorithms, including Random Forest (RF), Gradient Boosting (GB), Support Vector Machine (SVM), and Logistic Regression (LogReg), were evaluated and tested. The thesis encompasses six distinct implementations, each involving different combinations of algorithms, feature engineering, hyperparameter tuning, and data splitting. The performance of each model was assessed using metrics such as accuracy, precision, recall, and F1-score, and visualizations such as ROC-AUC curves were used to gain insights into their discrimination capabilities. While the RF model achieved the highest accuracy at 97%, the AUC score of Combination 2 (RF+GB) suggests that this ensemble model strikes a better balance between high accuracy (86%) and effective class separation (99%). Ultimately, the study identifies an ensemble model as the preferred choice, leading to significant improvements in prediction accuracy. The research also treats the problem as a time-series prediction task, exploring the use of Long Short-Term Memory (LSTM) and Auto-Regressive Integrated Moving Average (Auto-ARIMA) models; however, this approach proved impractical due to the dataset's discrete and non-sequential nature. This research contributes to the advancement of vehicle simulation and traffic forecasting, demonstrating the potential of machine learning in addressing complex real-world scenarios.
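As a rough sketch of the RF + GB ensemble idea reported above (Combination 2), the example below combines the two classifiers with soft voting and reports accuracy and ROC-AUC on synthetic binary data. The soft-voting scheme, hyperparameters, and data are assumptions for illustration, not the thesis's actual configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, VotingClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the vehicle-presence dataset (binary labels, imbalanced).
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.7, 0.3], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

# Soft-voting ensemble of Random Forest and Gradient Boosting.
ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=42)),
        ("gb", GradientBoostingClassifier(random_state=42)),
    ],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)

proba = ensemble.predict_proba(X_te)[:, 1]
pred = ensemble.predict(X_te)
print(f"Accuracy: {accuracy_score(y_te, pred):.3f}")
print(f"ROC-AUC:  {roc_auc_score(y_te, proba):.3f}")
```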
354

Machine Learning Approaches to Develop Weather Normalized Models for Urban Air Quality

Ngoc Phuong, Chau January 2024 (has links)
According to the World Health Organization, almost the entire human population (99%) lives in areas, across 117 countries and more than 6,000 cities, where air pollutant concentrations exceed recommended thresholds. The most common air pollutants affecting human health, the so-called criteria pollutants, are particulate matter (PM) and gas-phase pollutants (SO2, CO, NO2, O3, and others). Therefore, many countries and regions worldwide have imposed regulations or interventions to reduce these effects. Whenever an intervention occurs, air quality changes along with ambient factors such as weather characteristics and human activities. One approach to assessing the effects of interventions or events on air quality is the Weather Normalized Model (WNM). However, current deterministic models struggle to accurately capture the complex, non-linear relationship between pollutant concentrations and their emission sources. Hence, the primary objective of this thesis is to examine the power of machine learning (ML) and deep learning (DL) techniques to develop and improve WNMs. These enhanced WNMs are then employed to assess the impact of events on air quality. Furthermore, such ML/DL-based WNMs can serve as valuable tools for exploratory data analysis (EDA), uncovering the correlations between the independent variables (meteorological and temporal features) and air pollutant concentrations within the models. DL techniques have demonstrated efficiency and high performance in fields such as natural language processing, image processing, biology, and environmental science. Therefore, several suitable DL architectures (Long Short-Term Memory - LSTM, Recurrent Neural Network - RNN, Bidirectional Recurrent Neural Network - BiRNN, Convolutional Neural Network - CNN, and Gated Recurrent Unit - GRU) were tested to develop the WNMs presented in Paper I. When these DL architectures were compared with the Gradient Boosting Machine (GBM), LSTM-based methods (LSTM, BiRNN) obtained superior results in developing WNMs. The study also showed that our DL-based WNMs could capture the correlations between the input variables (meteorological and temporal variables) and five criteria contaminants (SO2, CO, NO2, O3, and PM2.5); the SHapley Additive exPlanations (SHAP) library allowed us to identify the significant factors in the DL-based WNMs. Additionally, these WNMs were used to assess the changes in air quality during the COVID-19 lockdown periods in Ecuador. Existing normalized models operate on the original units of the pollutants and are designed for assessing pollutant concentrations under "average" or consistent weather conditions. Predicting pollution peaks presents an even greater challenge because they often lack discernible patterns. To address this, we enhanced the WNMs to boost their performance specifically under daily concentration-peak conditions. In the second paper, we accomplished this by developing supervised learning techniques, including ensemble deep learning methods, to distinguish between daily peak and non-peak pollutant concentrations. This approach offers flexibility in categorizing pollutant concentrations as either daily concentration peaks or non-peaks, although it may introduce bias when selecting non-peak values. In the third paper, WNMs are applied directly to daily concentration peaks to predict them and to analyse their correlations with meteorological and temporal features.
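In a very reduced form, a weather-normalized model of the kind compared in Paper I can be sketched as a regression from meteorological and temporal features to a pollutant concentration, with SHAP used to inspect feature contributions. The gradient-boosting baseline, the feature list, the resampling-based normalization step, and the synthetic data below are assumptions for illustration only, not the thesis's actual models.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 5000

# Hypothetical hourly meteorological and temporal features.
df = pd.DataFrame({
    "temperature": rng.normal(15, 8, n),
    "wind_speed": rng.gamma(2.0, 1.5, n),
    "relative_humidity": rng.uniform(20, 100, n),
    "pressure": rng.normal(1013, 8, n),
    "hour": rng.integers(0, 24, n),
    "day_of_week": rng.integers(0, 7, n),
})
# Synthetic NO2-like target: diurnal cycle, diluted by wind, plus noise.
no2 = (30 + 15 * np.sin((df["hour"] - 8) / 24 * 2 * np.pi)
       - 2.5 * df["wind_speed"] + rng.normal(0, 3, n))

model = GradientBoostingRegressor(random_state=0)
model.fit(df, no2)

# "Weather-normalized" prediction (one simplified recipe): resample the weather
# variables, keep the temporal features, and predict to strip weather variability.
resampled = df.sample(frac=1.0, replace=True, random_state=0).reset_index(drop=True)
resampled[["hour", "day_of_week"]] = df[["hour", "day_of_week"]].values
normalized = model.predict(resampled)

# SHAP values highlight which meteorological/temporal inputs drive the model.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(df.iloc[:200])
print("Mean |SHAP| per feature:")
print(pd.Series(np.abs(shap_values).mean(axis=0), index=df.columns).sort_values(ascending=False))
```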
355

Controller Design for a Gearbox Oil Conditioning Testbed Through Data-Driven Modeling / Regulatordesign för en växellåda oljekonditionering testbädd genom datadriven modellering.

Brinkley IV, Charles, Wu, Chieh-Ju January 2022 (has links)
With the exponential development of more sustainable automotive powertrains, new gearbox technologies must also be created and tested extensively. Scania employs dynamometer testbeds to conduct such tests, but this plethora of new and rapidly developed gearboxes poses many problems for testbed technicians. Regulating oil temperature during tests is vital, and controllers must be developed for each gearbox configuration; this is difficult given system complexity, nonlinear dynamics, and time limitations. Therefore, technicians currently resort to a manually tuned controller based on real-time observations, a time-intensive process with sub-par performance. This master thesis breaks the predicament down into two research questions. The first employs a replication study to investigate whether linear system identification methods can model the oil conditioning system adequately. A test procedure is developed and executed on one gearbox setup to capture system behavior around a reference point, and the resulting models are compared for best fit. Results from this study show that such data-driven modeling methods can sufficiently represent the system. The second research question investigates whether the derived model can then be used to create a better-performing model-based controller through pole placement design. To compare the old and new controllers, both are implemented on the testbed PLC while conducting a nominal test procedure that varies torque and oil flow. Results from this study show that the developed controller regulates temperature sufficiently, but the original controller is more robust in this specific test case.
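As a rough sketch of the pole-placement step described above, the example below assumes a small discrete-time state-space model (of the kind a linear system-identification routine might return for the oil temperature loop) and computes a state-feedback gain with `scipy.signal.place_poles`. The model matrices and desired pole locations are made up for illustration; they are not the thesis's identified model.

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical identified discrete-time model: x[k+1] = A x[k] + B u[k]
# (e.g. oil temperature plus a slow actuator/heat-exchanger state).
A = np.array([[0.95, 0.10],
              [0.00, 0.80]])
B = np.array([[0.00],
              [0.05]])

# Desired closed-loop poles: faster and well damped compared with the
# open-loop poles at 0.95 and 0.80.
desired_poles = [0.85, 0.60]

result = place_poles(A, B, desired_poles)
K = result.gain_matrix
print("State-feedback gain K:", K)

# Closed-loop check: eigenvalues of (A - B K) should sit at the desired poles.
closed_loop = A - B @ K
print("Closed-loop poles:", np.linalg.eigvals(closed_loop))
```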
356

What would be the highest electrical loads with -20°C in Stockholm in 2022? : A study of the sensitivity of electrical loads to outdoor temperature in Stockholm region.

Mellon, Magali January 2022 (has links)
In the last 10 years, no significant increase in the peak electricity consumption of the Stockholm region has been observed, despite new customers being connected to the grid. But as urbanization continues, and with electrification being a decisive step on decarbonization pathways, more growth is expected in the future. However, the Swedish Transmission System Operator (TSO), Svenska kraftnät, can only supply a limited amount of power to the Stockholm region. Distribution System Operators (DSOs) such as Vattenfall Eldistribution, which operates two-thirds of the Stockholm region's distribution grid, need to find solutions to satisfy increasing demand with a limited power supply. Under these conditions, forecasting the worst-case scenarios, i.e., the highest possible loads, becomes a critical question. In Sweden, peak loads are usually triggered by the coldest temperatures, but recent winters have been mild; this brings uncertainty about a possible underlying temperature-adjusted growth that would be masked by relatively warm winters. Answering the question "What would be the highest loads in 2022 with -20°C in the Stockholm region?" could help Vattenfall Eldistribution estimate the flexibility needed today and design the future grid with the necessary reinforcements. This master thesis uses a data-driven approach based on eleven years of hourly data covering the period 2010-2021 to investigate the temperature sensitivity of the aggregated electricity load in the Stockholm region. First, an exploratory analysis quantifies how large the growth has been over the past ten years and examines how and when peak loads occur. The insights obtained help design two innovative regression techniques that investigate the evolution of the loads across years and provide first estimates of peak loads. Then, a Seasonal Autoregressive Integrated Moving Average with eXogenous regressors (SARIMAX) process is used to model a full winter of load as a function of temperature. This third method provides new and more reliable estimates of peak loads in 2022 at, for example, -20°C. Finally, the SARIMAX estimates are retained, and a synthesis of the overall findings of the three methods and possible extensions of the SARIMAX method is presented in a final section. The results show a significant increase in load levels in southern Stockholm ("Stockholm Södra") between 2010 and 2015 and a stable evolution afterwards, while the electricity consumption in northern Stockholm ("Stockholm Norra") remained stable over the period 2010-2021. During a very cold winter, the electricity demand is expected to exceed the subscription levels for about 300 hours in Stockholm Södra and 200 hours in Stockholm Norra. However, this would be a rare occurrence, which suggests that short-term solutions could be favoured over costly grid-extension work. Many questions remain, and the capability of local heat and power production and of electricity price signals to regulate today's demand is yet to be investigated. Additional work exploring future demand scenarios at a smaller scale could also be contemplated.
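The SARIMAX step can be sketched with `statsmodels`, where outdoor temperature enters as an exogenous regressor for the hourly load. The synthetic data, model orders, and daily seasonality below are assumptions for illustration; the thesis's actual specification may differ.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
hours = pd.date_range("2021-11-01", periods=24 * 60, freq="h")   # one synthetic winter slice

# Synthetic temperature and a load that rises as it gets colder, plus a daily cycle.
temperature = -5 + 8 * np.sin(2 * np.pi * hours.hour / 24) + rng.normal(0, 2, len(hours))
load = (1200 - 15 * temperature
        + 80 * np.sin(2 * np.pi * (hours.hour - 18) / 24)
        + rng.normal(0, 20, len(hours)))

endog = pd.Series(load, index=hours)
exog = pd.DataFrame({"temperature": temperature}, index=hours)

# SARIMAX with a daily (24 h) seasonal component and temperature as exogenous regressor.
model = SARIMAX(endog, exog=exog, order=(1, 0, 1), seasonal_order=(1, 0, 1, 24))
fit = model.fit(disp=False)

# What-if forecast: the next 24 hours at a constant -20 °C.
cold_snap = pd.DataFrame(
    {"temperature": np.full(24, -20.0)},
    index=pd.date_range(hours[-1] + pd.Timedelta(hours=1), periods=24, freq="h"),
)
forecast = fit.get_forecast(steps=24, exog=cold_snap)
print("Predicted peak load at -20 °C:", forecast.predicted_mean.max())
```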
357

Impacts of Participatory Design on Data Driven Decision Making in Organisations

Rovolis, Georgios January 2023 (has links)
This thesis explores the impacts of applying participatory design (PD) to data-driven decision-making (DDDM) in organisations. Despite the extensive examination of PD and DDDM individually, there is a noticeable research gap in understanding their integration and their impact on decision-making processes in organisations. This research aims to fill this gap by investigating the potential impacts, challenges, benefits, and critical success factors associated with incorporating PD activities into DDDM. The study employs a systematic literature review methodology to provide a comprehensive understanding of the topic. The findings contribute to the development of best practices and guidelines for organisations seeking to optimise their decision-making processes by incorporating participatory design principles into their data-driven decision-making strategies. The research also considers the ethical implications of data-driven decision-making. Ultimately, this thesis advances our understanding of how PD and DDDM can be effectively combined to achieve better decision-making outcomes.
358

Datadrivna beslut inom Livslångt lärande : En process för att organisationer ska lyckas med strategisk kompetensförsörjning / Data-driven Decision-making in Lifelong Learning : A Process for Organizations to Succeed with Strategic Competence Provision

Bäckelin, Jonas January 2023 (has links)
The purpose of this study was to develop a process for how modern technology can be used by organizations to succeed in strategic competence provision. The concept of data-driven decisions applies when classification algorithms can help us discover a 'desired competence that is missing' or 'suggest an area that we need to develop'. The method is based on service design, and this study used an empathy map created from a survey on the professional social network LinkedIn using a virtual snowball method (cf. respondent-driven sampling). It builds on qualitative data describing insights into the users' experiences and driving forces. It was then important to define which stakeholders are affected by the challenge in order to describe the steps in a journey map and produce a design sketch. The design process also included interviews with the main stakeholders to investigate root causes and to sort ideas using cluster analysis. Finally, a digital prototype was tested, and a feedback matrix was created to evaluate what worked and to identify improvements. The basis for investigating the problem comes from needs within the user group and perspectives from stakeholders, which were then validated using several different tools taken from service design. The conclusion was that data-driven decision-making involves defining measurable indicators and data to make decisions that are in line with strategic goals for competence provision. This is reported as a user journey consisting of the steps "Initiate & map out", "Implement & follow up" and "Evaluate & reflect".
359

Beyond Disagreement-based Learning for Contextual Bandits

Pinaki Ranjan Mohanty (16522407) 26 July 2023 (has links)
While instance-dependent contextual bandits have been previously studied, their analysis has been exclusively limited to pure disagreement-based learning. This approach lacks a nuanced understanding of disagreement and treats it in a binary and absolute manner. In our work, we aim to broaden the analysis of instance-dependent contextual bandits by studying them under the framework of disagreement-based learning in sub-regions. This framework allows for a more comprehensive examination of disagreement by considering its varying degrees across different sub-regions. To lay the foundation for our analysis, we introduce key ideas and measures widely studied in the contextual bandit and disagreement-based active learning literature. We then propose a novel, instance-dependent contextual bandit algorithm for the realizable case in a transductive setting. Leveraging the ability to observe contexts in advance, our algorithm employs a sophisticated Linear Programming subroutine to identify and exploit sub-regions effectively. Next, we provide a series of results tying previously introduced complexity measures and offer some insightful discussion on them. Finally, we enhance the existing regret bounds for contextual bandits by integrating the sub-region disagreement coefficient, thereby showcasing significant improvement in performance against the pure disagreement-based approach. In the concluding section of this thesis, we do a brief recap of the work done and suggest potential future directions for further improving contextual bandit algorithms within the framework of disagreement-based learning in sub-regions. These directions offer opportunities for further research and development, aiming to refine and enhance the effectiveness of contextual bandit algorithms in practical applications.
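For context, the disagreement coefficient from the active-learning literature, which the sub-region analysis above refines, is commonly defined (in one standard form) as below; this is the classical definition, not the thesis's sub-region variant.

```latex
% Disagreement region of a hypothesis set V, and the ball of radius r around h^*
\[
\mathrm{DIS}(V) = \{\, x : \exists\, h, h' \in V \ \text{such that}\ h(x) \neq h'(x) \,\},
\qquad
B(h^{*}, r) = \{\, h \in \mathcal{H} : \Pr_{x \sim \mathcal{D}}[\, h(x) \neq h^{*}(x) \,] \le r \,\}.
\]
% The disagreement coefficient measures how quickly the disagreement region shrinks
% as the radius r decreases.
\[
\theta_{h^{*}} \;=\; \sup_{r > 0} \, \frac{\Pr_{x \sim \mathcal{D}}\!\left[\, x \in \mathrm{DIS}\!\left(B(h^{*}, r)\right) \,\right]}{r}.
\]
```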
360

Data-Driven Operator Behavior Visualization : Developing a Prototype for Wheel Loader / Datadriven visualisering av operatörsbeteende : Utveckling av en prototyp för hjullastare

Tian, Huahua January 2022 (has links)
To realize key business capabilities and secure long-term growth, Volvo Construction Equipment (Volvo CE) set out to define a vision for digital transformation. The latest trends in AI-powered smart electronics open up endless opportunities to help Volvo CE's operators use wheel loaders, construction machines, to increase productivity. To ensure operators are working in a way that delivers optimum fuel efficiency and productivity on-site, the company aspires to create visual tools to keep track of operator behavior in the operator environment. Operator behavior is monitored with key indicators that are then visualized to show how it affects important results for customers and for Volvo CE. The audience comprises the operators themselves and internal staff such as UX engineers and product owners. Data-driven concept design (DDCD) is a decision-making approach that relies heavily on collected data and highlights the need to plan and design proactively. It is a popular approach to capturing tacit customer needs and makes a great contribution to data-visualization design. Emerging concepts such as the digital twin also provide inspiration for data-visualization conceptual design. However, there is little research on DDCD for data visualization. This work therefore aims to explore appropriate data-visualization techniques within the DDCD framework. The goal is to help Volvo CE, primarily via data visualization, keep track of operator behaviors and how these affect wheel loader productivity and energy-efficiency data at different levels and in a wider context. To this end, a series of DDCD cases for the improvement of wheel loader operator behaviors was researched and designed in order to present data in a clear and concise visual way for both an internal audience and operator training. As a result, a prototype containing a series of visualization techniques is proposed for two target groups and corresponding application scenarios, including coaching and decision support. A series of dashboards with the expected functionality was created, based on an understanding of the current machine. The prototype for the internal audience provides: site and time selection, a weekly overview window, phase selection, cycle thread trace, an insight window, data presentation, and a toolbox. The prototype for operator training provides: site and time selection, opponent selection, phase selection, cycle thread trace, an external data window, an individual comparison section, and an insights block.
