71.
Mediální reprezentace fenoménu deepfakes / Media representation of deepfakes. Janjić, Saška. January 2022.
This master thesis explores the media representation of deepfakes. The first part summarizes previous research, followed by a comprehensive review of deepfakes, including the technology allowing for their emergence, current uses, and methods of regulation and detection. The second part connects the phenomenon with important theoretical concepts such as the social construction of reality and the crucial role of media in this process. The empirical part consists of research combining two methods - quantitative content analysis and qualitative critical discourse analysis. The research analysis is focused on media articles dealing with deepfakes in order to find out how the media represent this phenomenon. The results show that the current media discourse on deepfakes is strongly negative, as the media frame them as a security threat. This negative representation is highly speculative, since journalists often invent their own stories of future disastrous consequences of the technology for national security due to a lack of current examples. The findings show an apparent hierarchy of the harms posed by deepfakes that is present in media coverage and reflects gender stereotypes and inequality in contemporary society. Harm in the form of non-consensual fake pornography targeting women is neglected in the media...
72.
Object detection for autonomous trash and litter collection / Objektdetektering för autonom skräpupplockning. Edström, Simon. January 2022.
Trash and litter discarded on the street is a large environmental issue in Sweden and across the globe. In Swedish cities alone it is estimated that 1.8 billion articles of trash are thrown onto the street each year, constituting around 3 kilotons of waste. One avenue to combat this societal and environmental problem is to use robotics and AI. A robot could learn to detect trash in the wild and collect it in order to clean the environment. A key component of such a robot would be its computer vision system, which allows it to detect litter and trash. Such systems are not trivially designed or implemented and have only recently reached high enough performance to work in industrial contexts. This master thesis focuses on creating and analysing such an algorithm by gathering data for use in a machine learning model, developing an object detection pipeline, and evaluating the performance of that pipeline when its components are varied. Specifically, methods using hyperparameter optimisation, pseudolabeling, and the preprocessing methods tiling and illumination normalisation were implemented and analysed. This thesis shows that it is possible to create an object detection algorithm with high performance using currently available state-of-the-art methods. Within the analysed context, hyperparameter optimisation did not significantly improve performance, and pseudolabeling could only briefly be analysed but showed promising results. Tiling greatly increased mean average precision (mAP) for the detection of small objects, such as cigarette butts, but decreased the mAP for large objects, and illumination normalisation improved mAP for images that were brightly lit. Both preprocessing methods reduced the frames per second that a full detector could run at, whilst pseudolabeling and hyperparameter optimisation greatly increased training times. / Skräp som slängs på marken har en stor miljöpåverkan i Sverige och runtom i världen.
Enbart i svenska städer uppskattas det att 1,8 miljarder bitar skräp slängs på gatan varje år, bestående av cirka 3 kiloton avfall. Ett sätt att lösa detta samhälleliga och miljömässiga problem är att använda robotik och AI. En robot skulle kunna lära sig att detektera skräp i utomhusmiljöer och samla in det för att på så sätt rengöra våra städer och vår natur. En nyckelkomponent i en sådan robot skulle vara dess system för datorseende som tillåter den att se och hitta skräp. Sådana system är inte triviala att designa eller implementera och har bara nyligen påvisat tillräckligt hög prestanda för att kunna användas i kommersiella sammanhang. Detta masterexamensarbete fokuserar på att skapa och analysera en sådan algoritm genom att samla in data för användning i en maskininlärningsmodell, utveckla en objektdetekteringspipeline och utvärdera prestandan när dess komponenter modifieras. Specifikt analyseras metoderna pseudomarkering, hyperparameteroptimering samt förprocesseringsmetoderna kakling och ljusintensitetsnormalisering. Examensarbetet visar att det är möjligt att skapa en objektdetekteringsalgoritm med hög prestanda med hjälp av den senaste tekniken på området. Inom det undersökta sammanhanget gav hyperparameteroptimering inte någon större förbättring av prestandan, och pseudomarkering kunde enbart ytligt analyseras men uppvisade preliminärt lovande resultat. Kakling förbättrade resultatet för detektering av små objekt, som cigarettfimpar, men minskade prestandan för större objekt, och ljusintensitetsnormalisering förbättrade prestandan för bilder som var starkt belysta. Båda förprocesseringsmetoderna minskade bildhastigheten som en detektor skulle kunna köra i, och pseudomarkering samt hyperparameteroptimering ökade träningstiden kraftigt.
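The tiling preprocessing evaluated above can be sketched as follows; a minimal illustration in which the tile size, overlap, and image dimensions are hypothetical values, not the thesis's actual configuration:

```python
def tile_boxes(width, height, tile, overlap):
    """Compute (x1, y1, x2, y2) crop boxes that tile an image.

    Small objects (e.g. cigarette butts) occupy more pixels relative
    to each crop, which is why tiling helps small-object mAP.
    Assumes tile <= width and tile <= height, and overlap < tile.
    """
    step = tile - overlap
    xs = list(range(0, max(width - tile, 0) + 1, step))
    ys = list(range(0, max(height - tile, 0) + 1, step))
    # Add a final tile flush with the border if coverage is incomplete.
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y, x + tile, y + tile) for y in ys for x in xs]


def to_full_image(box, det):
    """Map a detection from tile-local back to full-image coordinates."""
    ox, oy = box[0], box[1]
    x1, y1, x2, y2 = det
    return (x1 + ox, y1 + oy, x2 + ox, y2 + oy)
```

After inference, detections from overlapping tiles would typically be merged with non-maximum suppression before computing mAP.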
73.
<b>Machine-Learning-Aided Development of Surrogate Models for Flexible Design Optimization of Enhanced Heat Transfer Surfaces</b>. Saeel Shrivallabh Pai (20692082). 10 February 2025.
<p dir="ltr">Due to the end of Dennard scaling, electronic devices must consume more electrical power for increased functionality. The increased power consumption, combined with diminishing form factors, results in increased power density within the device, leading to increased heat fluxes at the devices' surfaces. Without proper thermal management, the increase in heat fluxes can cause device temperatures to exceed operational limits, ultimately resulting in device failure. However, the dissipation of these high heat fluxes often requires pumping or refrigeration of a coolant, which, in turn, increases the total energy usage. Data centers, which form the backbone of the cloud infrastructure and the modern economy, account for ~2% of the total US electricity use, of which up to ~40% is spent on cooling needs alone. Thus, it is necessary to optimize the designs of the cooling systems to be able to dissipate higher heat fluxes, but at lower operating powers.</p><p dir="ltr">The design optimization of various thermal management components such as cold plates, heat sinks, and heat exchangers relies on accurate prediction of flow heat transfer and pressure drop. During the iterative design process, the heat transfer and pressure drop are typically either computed numerically or obtained using geometry-specific correlations for Nusselt number (<i>Nu</i>) and friction factor (<i>f</i>). Numerical approaches are accurate for evaluation of a single design but become computationally expensive if many design iterations are required (such as during formal optimization processes). Moreover, traditional empirical correlations are highly geometry-dependent and assume functional forms that could introduce inaccuracies. To overcome these limitations, this thesis introduces accurate and continuous-valued machine-learning (ML)-based surrogate models for predicting Nusselt number and friction factor on various heat exchange surfaces.
These surrogate models, which are applicable to more geometries than traditional correlations, enable flexible and computationally inexpensive design optimization. The utility of these surrogate models is first demonstrated through the optimization of single-phase liquid cold plates under specific boundary conditions. Subsequently, their effectiveness is further showcased in the more practical challenge of designing liquid-to-liquid heat exchangers by integrating the surrogate models with a homogenization-based topology optimization framework. As topology optimization relies heavily on accurate predictions of pressure drop and heat transfer at every point in the domain during each iteration, using ML-based surrogate models greatly reduces the computational cost while enabling the development of high-performance, customized heat exchange surfaces. Thus, this work contributes to the advancement of thermal management by leveraging machine learning techniques for efficient and flexible design optimization processes.</p><p dir="ltr">First, artificial neural network (ANN)-based surrogate correlations are developed to predict <i>f</i> and <i>Nu</i> for fully developed internal flow in channels of arbitrary cross section. This effectively collapses all known correlations for channels of different cross section shapes into one correlation for <i>f</i> and one for <i>Nu</i>. The predictive performance and generality of the ANN-based surrogate models are verified on various shapes outside the training dataset, and then the models are used in the design optimization of flow cross sections based on performance metrics that weigh both heat transfer and pressure drop. The optimization process leads to novel shapes outside the training data, the performance of which is validated through numerical simulations.
Although the ML model predictions lose accuracy outside the training set for these novel shapes, the predictions are shown to follow the correct trends with parametric variations of the shape and therefore successfully direct the search toward optimized shapes.</p><p dir="ltr">The success of ANN-aided shape optimization of constant cross-section internal flow channels serves as a compelling proof-of-concept, highlighting the potential of ML-aided optimization in thermal-fluid applications. However, to address the complexities of widely used thermal management devices such as cold plates and heat exchangers, known for their intricate surface geometries beyond constant cross-section channels, a strategic shift is imperative. With the goal of crafting ML models specifically tailored for practical design optimization algorithms like topology optimization, the thesis next delves into diverse micro-pin fin arrangements commonly employed in applications like cold plates and heat exchangers. This study on pin fins includes the exploration of hydrodynamic and thermal developing effects, as well as the impact of pin fin cross section shape and orientation. The ML-based predictive models are trained on numerically simulated synthetic data. The large amounts of accurate synthetic data required to train machine learning models are generated using a custom-developed simulation automation framework. With this framework, numerical flow and heat transfer simulations can be run on thousands of geometries and boundary conditions with minimal user intervention. The proposed models provide accurate predictions of <i>f</i> and <i>Nu</i>, with a near-exact match on both the training data and unseen testing data. Furthermore, the outputs of the ANNs are inspected to propose new analytical correlations to estimate the hydrodynamic and thermal entrance lengths for flow through square pin fin arrays.
The ML models are also shown to be usable for fluids other than water, employing physics-based, Prandtl-number-dependent scaling relations.</p><p dir="ltr">The thesis further demonstrates the utility of the ML surrogate models to facilitate the design optimization of thermal management components through their integration into the topology optimization (TO) framework for heat exchanger design. Topology optimization is a computational design methodology for determining the optimal material distribution within a design space based on given constraints. The use of topology optimization in the design of heat exchangers and other thermal management devices has been gaining significant attention in recent years, particularly with the widespread availability of additive manufacturing techniques that offer geometric design flexibility. Particularly advantageous for heat exchanger design is the homogenization approach to topology optimization, which represents partial densities in the design domain using a physical unit cell structure to achieve sub-grid resolution features. This approach requires geometry-specific correlations for <i>f</i> and <i>Nu</i> to simulate the performance of designs and evaluate the objective function during the optimization process. Topology optimized pin fin-based component designs rely on additive manufacturing, posing production scalability challenges with current technologies. Furthermore, the demand for flow and thermal anisotropy in several applications adds complexity to the design requirements. To address these challenges, the focus is shifted to traditional heat exchanger surface geometries that can be manufactured using conventional techniques, and which also exhibit pronounced anisotropy in flow and heat transfer characteristics. Traditionally, these geometries are distributed uniformly across heat exchange surfaces.
However, incorporating such geometries into the topology optimization framework merges the strengths of both approaches, yielding mathematically optimized heat exchange surfaces with conventionally manufacturable designs. Offset strip fins, one such commonly used geometry, are chosen as the physical unit cell structure to demonstrate the integration of ML-based surrogate models into the topology optimization framework. The large amount of data required to develop robust machine learning-based surrogate <i>f</i> and <i>Nu</i> models for axial and cross flow of water through offset strip fins is generated through numerical simulations performed for convective flows through these geometries. The data generated are compared against in-house-measured experimental data as well as against data from literature. To facilitate the integration of ML models into topology optimization, a discrete adjoint method was developed to calculate the sensitivities during topology optimization, circumventing the absence of analytical gradients.</p><p dir="ltr">Successful integration of the machine learning-based surrogate models into the topology optimization framework was demonstrated through the design optimization of a counterflow heat exchanger. The topology optimized design outperformed the benchmarks that used uniform, parametrically optimized offset strip fin arrays. The topology optimized design exhibited domain-specific enhancements such as peripheral flow paths for enhanced heat transfer and open channels to minimize pressure drops. This integration showcases the potential of combining ML models with topology optimization, providing a flexible framework that can be extended to a wide range of enhanced surface structure types and geometric configurations for which ML models can be trained.
Thus, by enabling spatially localized optimization of enhanced surface structures using ML models, and consequently offering a pathway for expanding the design space to include many more surface structures in the topology optimization framework than previously possible, this thesis lays the foundation for advancing design optimization of thermal-fluid components and systems, using both additively and conventionally manufacturable geometries.</p>
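As a toy illustration of how such surrogate models plug into design optimization, the sketch below replaces the trained ANN with a small lookup table of classical fully developed laminar duct values for f·Re and Nu, and ranks candidate cross sections by a performance metric weighing heat transfer against a pumping-power penalty. The metric exponent and the restriction to three shapes are illustrative choices, not the thesis's actual models or objective:

```python
# Classical fully developed laminar duct values (constant-heat-flux Nu),
# standing in here for the ANN surrogate's f and Nu predictions.
DUCT_DATA = {
    "circular":   {"fRe": 64.00, "Nu": 4.36},
    "square":     {"fRe": 56.91, "Nu": 3.61},
    "triangular": {"fRe": 53.33, "Nu": 3.11},
}


def performance_metric(Nu, fRe, exponent=1.0 / 3.0):
    """Heat transfer weighed against a pressure-drop penalty (toy objective)."""
    return Nu / fRe ** exponent


def rank_shapes(data):
    """Rank candidate cross sections by the combined metric, best first."""
    return sorted(
        data,
        key=lambda s: performance_metric(data[s]["Nu"], data[s]["fRe"]),
        reverse=True,
    )
```

In the thesis's setting, the lookup table is replaced by continuous-valued ANN predictions, which is what lets an optimizer explore shapes between and beyond the tabulated geometries.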
74.
Aide à la décision médicale et télémédecine dans le suivi de l’insuffisance cardiaque / Medical decision support and telemedicine in the monitoring of heart failure. Duarte, Kevin. 10 December 2018.
Cette thèse s’inscrit dans le cadre du projet "Prendre votre cœur en mains" visant à développer un dispositif médical d’aide à la prescription médicamenteuse pour les insuffisants cardiaques. Dans une première partie, une étude a été menée afin de mettre en évidence la valeur pronostique d’une estimation du volume plasmatique ou de ses variations pour la prédiction des événements cardiovasculaires majeurs à court terme. Deux règles de classification ont été utilisées, la régression logistique et l’analyse discriminante linéaire, chacune précédée d’une phase de sélection pas à pas des variables. Trois indices permettant de mesurer l’amélioration de la capacité de discrimination par ajout du biomarqueur d’intérêt ont été utilisés. Dans une seconde partie, afin d’identifier les patients à risque de décéder ou d’être hospitalisé pour progression de l’insuffisance cardiaque à court terme, un score d’événement a été construit par une méthode d’ensemble, en utilisant deux règles de classification, la régression logistique et l’analyse discriminante linéaire de données mixtes, des échantillons bootstrap et en sélectionnant aléatoirement les prédicteurs. Nous définissons une mesure du risque d’événement par un odds-ratio et une mesure de l’importance des variables et des groupes de variables. Nous montrons une propriété de l’analyse discriminante linéaire de données mixtes. Cette méthode peut être mise en œuvre dans le cadre de l’apprentissage en ligne, en utilisant des algorithmes de gradient stochastique pour mettre à jour en ligne les prédicteurs. Nous traitons le problème de la régression linéaire multidimensionnelle séquentielle, en particulier dans le cas d’un flux de données, en utilisant un processus d’approximation stochastique. 
Pour éviter le phénomène d’explosion numérique et réduire le temps de calcul pour prendre en compte un maximum de données entrantes, nous proposons d’utiliser un processus avec des données standardisées en ligne au lieu des données brutes et d’utiliser plusieurs observations à chaque étape ou toutes les observations jusqu’à l’étape courante sans avoir à les stocker. Nous définissons trois processus et en étudions la convergence presque sûre, un avec un pas variable, un processus moyennisé avec un pas constant, un processus avec un pas constant ou variable et l’utilisation de toutes les observations jusqu’à l’étape courante. Ces processus sont comparés à des processus classiques sur 11 jeux de données. Le troisième processus à pas constant est celui qui donne généralement les meilleurs résultats / This thesis is part of the "Handle your heart" project aimed at developing a drug prescription assistance device for heart failure patients. In the first part, a study was conducted to highlight the prognostic value of an estimation of plasma volume or its variations for predicting major short-term cardiovascular events. Two classification rules were used, logistic regression and linear discriminant analysis, each preceded by a stepwise variable selection. Three indices to measure the improvement in discrimination ability by adding the biomarker of interest were used. In the second part, in order to identify patients at short-term risk of dying or being hospitalized for progression of heart failure, a short-term event risk score was constructed by an ensemble method using two classification rules (logistic regression and linear discriminant analysis of mixed data), bootstrap samples, and randomly selected predictors. We define an event risk measure by an odds-ratio and a measure of the importance of variables and groups of variables using standardized coefficients. We show a property of linear discriminant analysis of mixed data.
This methodology for constructing a risk score can be implemented as part of online learning, using stochastic gradient algorithms to update the predictors online. We address the problem of sequential multidimensional linear regression, particularly in the case of a data stream, using a stochastic approximation process. To avoid the phenomenon of numerical explosion which can be encountered, and to reduce the computing time in order to take into account a maximum of arriving data, we propose to use a process with online standardized data instead of raw data, and to use several observations per step or all observations up to the current step. We define three processes and study their almost sure convergence: one with a variable step-size, an averaged process with a constant step-size, and a process with a constant or variable step-size that uses all observations up to the current step without storing them. These processes are compared to classical processes on 11 datasets. The third defined process, with a constant step-size, typically yields the best results.
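One of the processes described above, constant-step stochastic gradient descent on online-standardized data with averaging of the iterates, can be sketched as follows. This is a minimal illustration: the step size, the synthetic test function, and the accuracy threshold are arbitrary choices, not the thesis's experimental setup.

```python
import random


class OnlineStandardizedSGD:
    """Averaged constant-step stochastic gradient linear regression
    on online-standardized inputs (raw data are never stored)."""

    def __init__(self, dim, step=0.05):
        self.n = 0
        self.step = step
        self.mean = [0.0] * dim             # running means (Welford)
        self.m2 = [0.0] * dim               # running sums of squared deviations
        self.theta = [0.0] * (dim + 1)      # intercept + weights, standardized space
        self.theta_bar = [0.0] * (dim + 1)  # running average of the iterates

    def _standardize(self, x):
        z = []
        for j, xj in enumerate(x):
            sd = (self.m2[j] / max(self.n - 1, 1)) ** 0.5 or 1.0
            z.append((xj - self.mean[j]) / sd)
        return z

    def partial_fit(self, x, y):
        self.n += 1
        for j, xj in enumerate(x):          # update running statistics online
            d = xj - self.mean[j]
            self.mean[j] += d / self.n
            self.m2[j] += d * (xj - self.mean[j])
        z = [1.0] + self._standardize(x)
        err = sum(t * zj for t, zj in zip(self.theta, z)) - y
        for j in range(len(self.theta)):    # constant-step gradient update
            self.theta[j] -= self.step * err * z[j]
        for j in range(len(self.theta)):    # average of the iterates
            self.theta_bar[j] += (self.theta[j] - self.theta_bar[j]) / self.n

    def predict(self, x):
        z = [1.0] + self._standardize(x)
        return sum(t * zj for t, zj in zip(self.theta_bar, z))
```

Standardizing each incoming observation with running statistics keeps the gradient updates well-conditioned, which is what prevents the numerical explosion mentioned above.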
75.
Měření podobnosti obrazů s pomocí hlubokého učení / Image similarity measuring using deep learning. Štarha, Dominik. January 2018.
This master's thesis deals with research into technologies that use deep learning methods for processing image data. The specific focus of the work is to evaluate the suitability and effectiveness of deep learning when comparing two image inputs. The first – theoretical – part consists of an introduction to neural networks and deep learning. It also contains a description of available methods used for processing image data, along with their benefits and principles. The second – practical – part of the thesis contains a proposal of an appropriate Siamese network model to solve the problem of comparing two input images and evaluating their similarity. The output of this work is an evaluation of several possible model configurations and a highlighting of the best-performing model parameters.
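The essence of a Siamese arrangement, two branches sharing one set of weights whose embedding distance is scored by a contrastive loss, can be sketched with a toy linear encoder. The encoder, weights, and margin below are illustrative stand-ins, not the thesis's actual model:

```python
def embed(x, weights):
    """Toy shared encoder: both branches use the *same* weights,
    which is what makes the arrangement 'Siamese'."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in weights]


def distance(a, b):
    """Euclidean distance between two embeddings."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5


def contrastive_loss(d, same, margin=1.0):
    """Pull similar pairs together; push dissimilar pairs beyond the margin."""
    if same:
        return d ** 2
    return max(margin - d, 0.0) ** 2


def pair_loss(x1, x2, same, weights, margin=1.0):
    """Loss for one labelled pair: encode both inputs, compare distances."""
    d = distance(embed(x1, weights), embed(x2, weights))
    return contrastive_loss(d, same, margin)
```

During training, the shared weights would be adjusted to minimize this loss over labelled similar/dissimilar pairs; at inference, the embedding distance itself serves as the similarity score.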
76.
Pulse Repetition Interval Modulation Classification using Machine Learning / Maskininlärning för klassificering av modulationstyp för pulsrepetitionsintervall. Norgren, Eric. January 2019.
Radar signals are used for estimating location, speed and direction of an object. Some radars emit pulses, while others emit a continuous wave. Both types of radars emit signals according to some pattern; a pulse radar, for example, emits pulses with a specific time interval between pulses. This time interval may either be stable, change linearly, or follow some other pattern. The interval between two emitted pulses is often referred to as the pulse repetition interval (PRI), and the pattern that defines the PRI is often referred to as the modulation. Classifying which PRI modulation is used in a radar signal is a crucial component for the task of identifying who is emitting the signal. Incorrectly classifying the modulation used can lead to an incorrect guess of the identity of the agent emitting the signal, and can as a consequence be fatal. This work investigates how a long short-term memory (LSTM) neural network performs compared to a state-of-the-art feature-extraction neural network (FE-MLP) approach for the task of classifying PRI modulation. The results indicate that the proposed LSTM model performs consistently better than the FE-MLP approach across all tested noise levels. The downside of the proposed LSTM model is that it is significantly more complex than the FE-MLP approach. Future work could investigate if the LSTM model is too complex to use in a real world setting where computing power may be limited. Additionally, the LSTM model can, in a trivial manner, be modified to support more modulations than those tested in this work. Hence, future work could also evaluate how the proposed LSTM model performs when support for more modulations is added. / Radarsignaler används för att uppskatta plats, hastighet och riktning av objekt. Vissa radarer sänder ut signaler i form av pulser, medan andra sänder ut en kontinuerlig våg.
Båda typer av radarer avger signaler enligt ett visst mönster, till exempel avger en pulsradar pulser med ett specifikt tidsintervall mellan pulserna. Detta tidsintervall kan antingen vara konstant, förändras linjärt, eller följa ett annat mönster. Intervallet mellan två pulser benämns ofta pulsrepetitionsintervall (PRI), och mönstret som definierar PRIn benämns ofta modulering. Att klassificera vilken PRI-modulering som används i en radarsignal är en viktig del i processen att identifiera vem som skickade ut signalen. Felaktig klassificering av den använda moduleringen kan leda till en felaktig gissning av identiteten av agenten som skickade ut signalen, vilket kan leda till ett dödligt utfall. Detta arbete undersöker hur väl det framtagna neurala nätverket som består av ett långt korttidsminne (LSTM) kan klassificera PRI-modulering i förhållande till en modern modell som använder särskilt utvalda beräknade särdrag från data och klassificerar dessa särdrag med ett neuralt nätverk. Resultaten indikerar att LSTM-modellen konsekvent klassificerar med högre träffsäkerhet än modellen som använder särdrag, vilket gäller för alla testade brusnivåer. Nackdelen med LSTM-modellen är att den är mer komplex än modellen som använder särdrag. Framtida arbete kan undersöka om LSTM-modellen är för komplex för att använda i ett verkligt scenario där beräkningskraften kan vara begränsad. Dessutom skulle framtida arbete kunna utvärdera hur väl LSTM-modellen kan klassificera PRI-moduleringar när stöd för fler moduleringar än de som testats i detta arbete läggs till, detta då stöd för ytterligare PRI-moduleringar kan läggas till i LSTM-modellen på ett trivialt sätt.
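The classification task can be made concrete with a toy PRI simulator and a rule-based baseline over simple features of the interval sequence. The modulation classes, parameters, and thresholds below are illustrative; neither the thesis's LSTM nor its FE-MLP feature set is reproduced here:

```python
import random


def gen_pri(mod, n=64, base=100.0, seed=0):
    """Generate a toy sequence of pulse repetition intervals (microseconds)."""
    rng = random.Random(seed)
    if mod == "stable":
        return [base] * n
    if mod == "linear":
        return [base + 0.5 * i for i in range(n)]   # linearly sliding PRI
    if mod == "staggered":
        pattern = [base, 1.3 * base, 0.8 * base]    # repeating 3-level stagger
        return [pattern[i % 3] for i in range(n)]
    if mod == "jittered":
        return [base + rng.uniform(-5.0, 5.0) for _ in range(n)]
    raise ValueError(mod)


def classify(seq):
    """Rule-based baseline over hand-picked features of the PRI sequence."""
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    if all(abs(d) < 1e-9 for d in diffs):           # constant interval
        return "stable"
    if all(abs(d - diffs[0]) < 1e-6 for d in diffs):  # constant slope
        return "linear"
    if all(abs(seq[i] - seq[i % 3]) < 1e-9 for i in range(len(seq))):
        return "staggered"                          # periodic pattern
    return "jittered"
```

An LSTM-based classifier would instead consume the raw (noisy) interval sequence directly, which is what lets it keep its accuracy at the noise levels where hand-picked features degrade.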
77.
The Shades of Styles: A human search for words communicating all aspects of styles. Hellerslien, Erlend. January 2021.
This research investigates the concept of style and its development, drawing attention to our diverse human history of viewing styles. Part one examines the problem of how styles develop and how they are represented in terms of their messages, realities, semiotics, and human collaboration, working toward a more neutral and therefore more meaningful dialogue about style. Part two shows the potential to build a Digital Style Dictionary and a Digital Visual Compass: a human-centric guide to seeing reality that can support identifying aspects of multiple realities (core reality, abstract reality, surreal reality, and artificial reality). In part three, two cases of visual styles (Transpace and Swisch) are analyzed, discussed, reframed, and presented. Fundamentally, this paper seeks to provoke a discussion about what we humans want the point of seeing styles to be. The complexity is as grand as our diversity, but this research nonetheless aims to respectfully identify the distinctive shades of styles for the sake of a more significant human dialogue and inclusion. The research's ambition is knowingly larger than what it can fully complete right now (2021); it proposes an idea for the near future: to shape a Digital Style Dictionary and a Digital Visual Compass that work for the common human aspect of seeing styles.
This research is a first attempt at shaping a fundamental frame for a spectrum of styles that we can continue to articulate, in order to enable better human communication about distinctiveness, rather than placing styles in a value hierarchy between something "high" or "low." Instead, we can now start to collaborate on shaping and building tools such as a Digital Style Dictionary and a Digital Visual Compass, sharing a more human-centric spectrum of styles to push the human evolution of knowledge further.