731 |
Toward The Horizon: Contemporary Queer Theatre as Utopic Activism / Page, Cody Allyn, 20 May 2021
No description available.
|
732 |
Image generation through feature extraction and learning using a deep learning approach / Bruneel, Tibo, January 2023
With recent advancements, image generation has become increasingly feasible thanks to the introduction of stronger generative artificial intelligence (AI) models. The ability to generate non-existing images that closely resemble real-world images is interesting for many use cases. Generated images could, for example, augment, extend or replace real data sets for training AI models, thereby reducing the cost of data collection and similar processes. Deep learning, a sub-field of AI, has been at the forefront of such methodologies because of its ability to capture and learn highly complex, feature-rich data. This work focuses on deep generative learning approaches within a forestry application, with the goal of generating tree log end images in order to enhance an AI model that uses such images. This approach would reduce data collection costs not only for this model but also for many other information extraction models within the forestry field. The thesis includes research on the state of the art in deep generative modelling and experiments using a full pipeline from a deep generative modelling stage to a log end recognition model. On top of this, a variant architecture and an image sampling algorithm are proposed, added to this pipeline and evaluated. The experiments and findings show that the applied generative model approaches learn features well but fall short of high-quality, realistic generation, producing blurry results. The variant approach yielded slightly better feature learning at the cost of generation quality. The proposed sampling algorithm proved to work well on a qualitative basis. The problems found in the generative models propagated into the training of the recognition model, making the improvement of another AI model based purely on generated data impossible at this point in the research. The results show that more work is needed to improve the application and generation quality so that the output resembles real-world data more closely and other models can be trained on artificial data. The variant approach does not improve much, and its findings, like those for the proposed image sampling algorithm, contribute to the field by demonstrating its strengths and weaknesses. Finally, this study provides a good starting point for research within this application, with many different directions and opportunities for future work.
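As a concrete illustration of the kind of pipeline described above (a deep generative stage producing images that later feed a recognition model), the sketch below trains a tiny GAN on randomly generated placeholder images. The GAN is used only as a familiar example of a deep generative model; the thesis's actual architectures, its proposed variant and its sampling algorithm are not reproduced, and all shapes and hyperparameters are illustrative assumptions.

```python
# Hedged sketch: a minimal GAN training loop standing in for the generative stage
# of a "generate log-end images, then train a recognition model" pipeline.
# Architectures, the 64x64 grayscale image size and hyperparameters are assumptions.
import torch
import torch.nn as nn

latent_dim = 100

generator = nn.Sequential(                       # z (B, 100, 1, 1) -> image (B, 1, 64, 64)
    nn.ConvTranspose2d(latent_dim, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),
    nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.ConvTranspose2d(16, 1, 4, 2, 1), nn.Tanh(),
)

discriminator = nn.Sequential(                   # image (B, 1, 64, 64) -> logit (B, 1, 1, 1)
    nn.Conv2d(1, 16, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 32, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 1, 4, 1, 0),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))
loss_fn = nn.BCEWithLogitsLoss()

real = torch.rand(8, 1, 64, 64) * 2 - 1          # placeholder for real log-end images in [-1, 1]

for step in range(3):                            # a few illustrative steps
    # discriminator update: real images vs. generated ones
    z = torch.randn(8, latent_dim, 1, 1)
    fake = generator(z).detach()
    d_loss = loss_fn(discriminator(real).view(-1), torch.ones(8)) + \
             loss_fn(discriminator(fake).view(-1), torch.zeros(8))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # generator update: try to fool the discriminator
    z = torch.randn(8, latent_dim, 1, 1)
    g_loss = loss_fn(discriminator(generator(z)).view(-1), torch.ones(8))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    print(f"step {step}: d_loss={d_loss.item():.3f} g_loss={g_loss.item():.3f}")

# Generated images could then be fed to a separate log-end recognition model.
```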
|
733 |
Unsupervised Change Detection Using Multi-Temporal SAR Data : A Case Study of Arctic Sea Ice / Oövervakad förändringsdetektion med multitemporell SAR data : En fallstudie över arktisk havsis / Fröjse, Linda, January 2014
The extent of Arctic sea ice has decreased over the years and the importance of sea ice monitoring is expected to increase. Remote sensing change detection compares images acquired over the same geographic area at different times in order to identify changes that might have occurred in the area of interest. Change detection methods have been developed for cryospheric applications, and the Kittler-Illingworth thresholding algorithm has proven to be an effective change detection tool, but it has not previously been used for sea ice. Here it is applied to Arctic sea ice data. The objective is to investigate the unsupervised detection of changes in Arctic sea ice using multi-temporal SAR images. The well-known Kittler-Illingworth algorithm is tested with two density function models, the generalized Gaussian and the log-normal model. The difference image is obtained using the modified ratio operator. The histogram of the change image, which approximates its probability distribution, is considered a combination of two classes, the changed and the unchanged class. Histogram fitting techniques are used to estimate the unknown density functions and the prior probabilities, and the optimum threshold is selected using a criterion function directly related to the classification error. In this thesis three datasets were used, covering parts of the Beaufort Sea from the years 1992, 2002, 2007 and 2009. The SAR and ASAR C-band data came from the ERS and ENVISAT satellites, respectively. All three datasets were interpreted visually. For all three, the generalized Gaussian detected a lot of change, whereas the log-normal detected less. Only one small subset of a dataset was validated against reference data. The log-normal distribution then obtained a 0% false alarm rate across all trials, while the generalized Gaussian obtained false alarm rates around 4% in most trials. The generalized Gaussian achieved detection accuracies around 95%, whereas the log-normal achieved detection accuracies around 70%. The overall accuracies for the generalized Gaussian were about 95% in most trials; the log-normal achieved overall accuracies around 85%. The KHAT for the generalized Gaussian was in the range 0.66-0.93 and for the log-normal in the range 0.68-0.77. Using one additional speckle filter iteration increased the accuracy for the log-normal distribution. Generally, positive change was detected with a higher level of accuracy than negative change. A visual inspection shows that the generalized Gaussian distribution probably overestimates the change, and the log-normal distribution consistently detects less change than the generalized Gaussian. The lack of validation data made validating the results difficult: the performed validation might not be reliable, since the available validation data consisted only of SAR imagery and differentiating change from no-change is difficult in the area. Further, due to the lack of reference data, it could not be determined with certainty which distribution performed best. / Ytan av arktisk havsis har minskat genom åren och vikten av havsisövervakning förväntas öka. Förändringsdetektion jämför bilder från samma geografiska område från olika tidpunkter för att identifiera förändringar som kan ha skett i intresseområdet. Förändringsdetekteringsmetoder har utvecklats för kryosfäriska ämnen. Tröskelvärdesbestämning med Kittler-Illingworth-algoritmen har visat sig vara ett effektivt verktyg för förändringsdetektion, men har inte använts på havsis.
Här appliceras algoritmen på arktisk havsis. Målet är att undersöka oövervakad förändringsdetektion i arktisk havsis med multitemporella SAR bilder. Den välkända Kittler-Illingworth algoritmen testas med två täthetsfunktioner, nämligen generaliserad normaldistribution och log-normal distributionen. Differensbilden erhålls genom den modifierade ratio-operatorn. Histogrammet från förändringsbilden skattar dess täthetsfunktion, vilken anses vara en kombination av två klasser, förändring- och ickeförändringsklasser. Histogrampassningstekniker används för att uppskatta de okända täthetsfunktionerna och a priori sannolikheterna. Det optimala tröskelvärdet väljs genom en kriterionfunktion som är direkt relaterad till klassifikationsfel. I detta examensarbete användes tre dataset som täcker delar av Beaufort-havet från åren 1992, 2002, 2007 och 2009. SAR C-band data kom från satelliten ERS och ASAR C-band data kom från satelliten ENVISAT. Alla tre tolkades visuellt och för alla tre detekterade generaliserad normaldistribution mycket mer förändring än lognormal distributionen. Bara en mindre del av ett dataset validerades mot referensdata. Lognormal distributionen erhöll då 0% falska alarm i alla försök. Generaliserade normaldistributionen erhöll runt 4% falska alarm i de flesta försöken. Generaliserad normaldistributionen nådde detekteringsnoggrannhet runt 95% medan lognormal distributionen nådde runt 70%. Generell noggrannheten för generaliserad normaldistributionen var runt 95% i de flesta försöken. För lognormal distributionen nåddes en generell noggrannhet runt 85%. KHAT koefficienten för generaliserad normaldistributionen var i intervallet 0.66-0.93. För lognormal distributionen var den i intervallet 0.68-0.77. Med en extra speckle-filtrering ökades noggrannheten för lognormal distributionen. Generellt sett detekterades positiv förändring med högre nivå av noggrannhet än negativ förändring. Visuell inspektion visar att generaliserad normaldistribution troligen överskattar förändringen. Lognormal distributionen detekterar konsistent mindre förändring än generaliserad normaldistributionen. Bristen på referensdata gjorde valideringen av resultaten svår. Den utförda valideringen är kanske inte så trovärdig, eftersom den tillgängliga referensdatan var bara SAR bilder och att särskilja förändring och ickeförändring är svårt i området. Vidare, på grund av bristen på referensdata, kunde det inte bestämmas med säkerhet vilken distribution som var bäst.
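For readers unfamiliar with the thresholding step, the sketch below implements the Kittler-Illingworth minimum-error criterion in its classic Gaussian form on a simulated log-ratio change image. The thesis instead fits generalized Gaussian and log-normal densities via histogram fitting and uses a modified ratio operator; the simple log-ratio and the synthetic SAR-like data here are stand-in assumptions.

```python
# Hedged sketch: Kittler-Illingworth minimum-error thresholding with the classic
# Gaussian class models; the thesis fits generalized Gaussian and log-normal
# densities instead. Synthetic data stands in for real SAR scenes.
import numpy as np

def kittler_illingworth_threshold(values, bins=256):
    """Return the threshold minimizing the Kittler-Illingworth criterion J(T)."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_j = edges[1], np.inf
    for t in range(1, bins):
        p1, p2 = p[:t].sum(), p[t:].sum()
        if p1 < 1e-6 or p2 < 1e-6:
            continue
        mu1 = (p[:t] * centers[:t]).sum() / p1
        mu2 = (p[t:] * centers[t:]).sum() / p2
        var1 = (p[:t] * (centers[:t] - mu1) ** 2).sum() / p1
        var2 = (p[t:] * (centers[t:] - mu2) ** 2).sum() / p2
        if var1 <= 0 or var2 <= 0:
            continue
        # J(T) = 1 + 2*(P1*ln s1 + P2*ln s2) - 2*(P1*ln P1 + P2*ln P2), with ln var = 2*ln s
        j = 1 + (p1 * np.log(var1) + p2 * np.log(var2)) \
              - 2 * (p1 * np.log(p1) + p2 * np.log(p2))
        if j < best_j:
            best_j, best_t = j, edges[t]
    return best_t

# Two simulated co-registered SAR intensity images and a log-ratio change image.
rng = np.random.default_rng(0)
img1 = rng.gamma(shape=4.0, scale=1.0, size=(256, 256))
img2 = img1 * rng.gamma(shape=4.0, scale=0.25, size=(256, 256))  # speckle only
img2[100:150, 100:150] *= 4.0                                    # a "changed" patch
log_ratio = np.abs(np.log(img2 + 1e-9) - np.log(img1 + 1e-9))

threshold = kittler_illingworth_threshold(log_ratio.ravel())
change_map = log_ratio > threshold
print(f"threshold={threshold:.3f}, changed pixels={change_map.sum()}")
```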
|
734 |
根拠に基づく保健福祉政策の実現に関する研究 : 新たな指標「健康費」の概念形成について [Research on realizing evidence-based health and welfare policy: forming the concept of a new indicator, the "Health Care Fee"] / 北岡 有喜, Yuki Kitaoka, 21 March 2014
PHRサービス「ポケットカルテ」に集積された個々の住民の生涯の健康医療福祉介護履歴情報は、当該個人のLife-Logといえることが判明し、従来の医療費に加えて、健康維持や「未病」対応のための消費の総和を新たな指標「健康費」と定義した。現在の医療経済施策基盤である国民医療費の上位概念となる「健康費」を最適化することは、医療の質を向上しつつ、国民医療費を適正化し、国民皆保険の維持に寄与すると思われる。 / The lifelong health, medical, welfare and nursing-care history of an individual resident accumulated in the PHR service "Pocket Karte" can be regarded as that resident's Life-Log. Accordingly, a new indicator, the "Health Care Fee", is defined as the sum of spending on health maintenance and "pre-disease" care in addition to conventional medical costs. Optimizing the "Health Care Fee", a broader concept than the National Medical Care Expenditure that currently underpins medical economic policy, is expected to improve the quality of medical care, keep the National Medical Care Expenditure appropriate, and contribute to maintaining Japan's universal health insurance system. / 博士(政策科学) / Doctor of Philosophy in Policy and Management / 同志社大学 / Doshisha University
|
735 |
AN ORGANIC NEURAL CIRCUIT: TOWARDS FLEXIBLE AND BIOCOMPATIBLE ORGANIC NEUROMORPHIC PROCESSING / Mohammad Javad Mirshojaeian Hosseini, 31 July 2023
Neuromorphic computing endeavors to develop computational systems capable of emulating the brain's capacity to execute intricate tasks concurrently and with remarkable energy efficiency. By utilizing new bioinspired computing architectures, these systems have the potential to revolutionize high-performance computing and enable local, low-energy computing for sensors and robots. Organic and soft materials are particularly attractive for neuromorphic computing as they offer biocompatibility, low-energy switching, and excellent tunability at a relatively low cost. Additionally, organic materials provide physical flexibility, large-area fabrication, and printability.

This doctoral dissertation showcases the research conducted in fabricating a comprehensive spiking organic neuron, which serves as the fundamental constituent of a circuit system for neuromorphic computing. The major contribution of this dissertation is the development of an organic, flexible neuron composed of spiking synapses and somas utilizing ultra-low voltage organic field-effect transistors (OFETs) for information processing. The synaptic and somatic circuits are implemented using the physically flexible and biocompatible organic electronics necessary to realize the Polymer Neuromorphic Circuitry. An Axon-Hillock (AH) somatic circuit was fabricated and analyzed, followed by the adaptation of a log-domain integrator (LDI) synaptic circuit and the fabrication and analysis of a differential-pair integrator (DPI). Finally, a spiking organic neuron was formed by combining two LDI synaptic circuits and one AH somatic circuit, and its characteristics were thoroughly examined. This is the first demonstration of the fabrication of an entire neuron using solid-state organic materials over a flexible substrate with integrated complementary OFETs and capacitors.
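To make the soma's behaviour concrete, the sketch below gives a purely behavioural (not transistor-level) model of an Axon-Hillock neuron: an input current charges a membrane node, capacitive positive feedback produces a sharp output transition once a threshold is crossed, and a reset current discharges the node while the output is high, so the spike rate grows with the input current. The ideal comparator abstraction and all component values are assumptions and do not correspond to the dissertation's OFET implementation.

```python
# Hedged sketch: behavioural simulation of an Axon-Hillock spiking soma.
# An ideal comparator with capacitive positive feedback replaces the OFET
# amplifier; all component values are illustrative, not the fabricated circuit's.
import numpy as np

def axon_hillock(i_in, dt=1e-5, t_end=0.5, c_mem=1e-9, c_fb=0.5e-9,
                 v_dd=1.0, v_thresh=0.5, i_reset=30e-9):
    n = int(t_end / dt)
    v_mem = np.zeros(n)                      # membrane (input node) voltage
    v_out = np.zeros(n)                      # spike output
    out_high = False
    kick = c_fb / (c_mem + c_fb) * v_dd      # feedback step when the output switches
    for k in range(1, n):
        dv = i_in * dt / (c_mem + c_fb)               # charging by the input current
        if out_high:
            dv -= i_reset * dt / (c_mem + c_fb)       # reset current discharges the node
        v = v_mem[k - 1] + dv
        if not out_high and v >= v_thresh:            # rising threshold crossing
            out_high, v = True, v + kick              # positive feedback kicks the node up
        elif out_high and v <= v_thresh:              # falling crossing ends the pulse
            out_high, v = False, max(v - kick, 0.0)
        v_mem[k] = min(max(v, 0.0), v_dd)
        v_out[k] = v_dd if out_high else 0.0
    return v_mem, v_out

# Spike rate grows with the input current, as expected for this kind of soma.
for i_in in (2e-9, 5e-9, 10e-9):
    _, v_out = axon_hillock(i_in)
    spikes = np.sum((v_out[1:] > 0) & (v_out[:-1] == 0))
    print(f"i_in={i_in:.0e} A -> {spikes} spikes in 0.5 s")
```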
|
736 |
Data Driven Video Source Camera Identification / Hopkins, Nicholas Christian, 15 May 2023
No description available.
|
737 |
Seismic attributes of the Clinton interval reservoir in the Dominion East Ohio Gabor gas storage field near North Canton, Ohio / Haneberg-Diggs, Dominique Miguel, January 2014
No description available.
|
738 |
How to Estimate Local Performance using Machine learning Engineering (HELP ME) : from log files to support guidance / Att estimera lokal prestanda med hjälp av maskininlärning / Ekinge, Hugo, January 2023
As modern systems become increasingly complex, they also become more cumbersome to diagnose and fix when things go wrong. One domain where it is very important for machinery and equipment to stay functional is medical IT, where technology is used to improve healthcare for people all over the world. This thesis aims to help reduce downtime on critical life-saving equipment by implementing automatic analysis of system logs that, without involving any domain experts, can give an indication of the state the system is in. First, a literature study was performed, in which three potential candidates for suitable neural network architectures were found. Next, the networks were implemented and a data pipeline for collecting and labeling training data was set up. After training the networks and testing them on a separate data set, the best performing model of the three was based on the GRU (Gated Recurrent Unit). Lastly, this model was tested on real-world system logs from two different sites, one without known issues and one with slow image import due to network issues. The results showed that it is feasible to build such a system that can give indications of external parameters such as network speed, latency and packet loss percentage using only raw system logs as input data. GRU, 1D-CNN (1-Dimensional Convolutional Neural Network) and the Transformer encoder are the three models that were tested, and the best performing model was shown to produce correct patterns even on the real-world system logs. / I takt med att moderna system ökar i komplexitet så blir de även svårare att felsöka och reparera när det uppstår problem. Ett område där det är mycket viktigt att maskiner och utrustning fungerar korrekt är inom medicinsk IT, där teknik används för att förbättra hälso- och sjukvården för människor över hela världen. Syftet med denna avhandling är att bidra till att minska tiden som kritisk livräddande utrustning inte fungerar genom att implementera automatisk analys av systemloggarna som utan hjälp av experter inom området kan ge en indikation på vilket tillstånd som systemet befinner sig i. Först genomfördes en litteraturstudie där tre lovande typer av neurala nätverk valdes ut. Sedan implementerades dessa nätverk och det sattes upp en datapipeline för insamling och märkning av träningsdata. Efter att ha tränat nätverken och testat dem på en separat datamängd så visade det sig att den bäst presterande modellen av de tre var baserad på GRU (Gated Recurrent Unit). Slutligen testades denna modell på riktiga systemloggar från två olika sjukhus, ett utan kända problem och ett där bilder importerades långsamt på grund av nätverksproblem. Resultaten visade på att det är möjligt att konstruera ett system som kan ge indikationer på externa parametrar såsom nätverkshastighet, latens och paketförlust i procent genom att enbart använda systemloggar som indata. De tre modeller som testades var GRU, 1D-CNN (1-Dimensional Convolutional Neural Network) och Transformer's Encoder. Den bäst presterande modellen visade sig kunna producera korrekta mönster även för loggdata från verkliga system.
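The sketch below shows the general shape of such a best-performing GRU approach: a recurrent model that maps a sequence of per-time-window feature vectors derived from system logs to estimates of external parameters such as network speed, latency and packet loss. The log parsing and feature extraction are not shown, and the feature dimension, target names and hyperparameters are assumptions rather than the thesis's actual pipeline.

```python
# Hedged sketch: a GRU regressor from log-derived feature sequences to network
# condition estimates. Feature extraction from raw logs is not shown; feature
# dimension, targets and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class LogGRURegressor(nn.Module):
    def __init__(self, n_features=32, hidden=64, n_targets=3):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_targets)   # e.g. speed, latency, packet loss

    def forward(self, x):                # x: (batch, time_steps, n_features)
        _, h = self.gru(x)               # h: (num_layers, batch, hidden)
        return self.head(h[-1])          # regress from the last layer's final state

model = LogGRURegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Random stand-ins for windows of log-derived features and labelled conditions.
x = torch.randn(16, 120, 32)             # 16 sequences of 120 time windows
y = torch.rand(16, 3)                    # normalised speed / latency / packet loss

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: mse={loss.item():.4f}")
```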
|
739 |
Automating debugging through data mining / Automatisering av felsökning genom data mining / Thun, Julia; Kadouri, Rebin, January 2017
Contemporary technological systems generate massive quantities of log messages. These messages can be stored, searched and visualized efficiently using log management and analysis tools. The analysis of log messages offers insights into system behavior such as performance, server status and execution faults in web applications. iStone AB wants to explore the possibility of automating its debugging process. Since iStone does most of its debugging manually, it takes time to find errors within the system. The aim was therefore to find solutions that reduce the time it takes to debug. An analysis of log messages in access and console logs was made in order to choose the most appropriate data mining techniques for iStone's system. Data mining algorithms as well as log management and analysis tools were compared. The comparisons showed that the ELK Stack, together with a mixture of Eclat and a hybrid algorithm (Eclat and Apriori), were the most appropriate choices. To demonstrate their feasibility, the ELK Stack and Eclat were implemented. The results show that data mining and the use of a platform for log analysis can facilitate debugging and reduce the time it takes. / Dagens system genererar stora mängder av loggmeddelanden. Dessa meddelanden kan effektivt lagras, sökas och visualiseras genom att använda sig av logghanteringsverktyg. Analys av loggmeddelanden ger insikt i systemets beteende såsom prestanda, serverstatus och exekveringsfel som kan uppkomma i webbapplikationer. iStone AB vill undersöka möjligheten att automatisera felsökning. Eftersom iStone till mestadels utför deras felsökning manuellt så tar det tid att hitta fel inom systemet. Syftet var därför att finna olika lösningar som reducerar tiden det tar att felsöka. En analys av loggmeddelanden inom access- och konsolloggar utfördes för att välja de mest lämpade data mining-tekniker för iStone’s system. Data mining-algoritmer och logghanteringsverktyg jämfördes. Resultatet av jämförelserna visade att ELK Stacken samt en blandning av Eclat och en hybridalgoritm (Eclat och Apriori) var de lämpligaste valen. För att visa att så är fallet så implementerades ELK Stacken och Eclat. De framställda resultaten visar att data mining och användning av en plattform för logganalys kan underlätta och minska den tid det tar för att felsöka.
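As an illustration of the mining side, the sketch below implements a plain Eclat (vertical tid-list intersection) over a few toy log-message "transactions". The hybrid Eclat/Apriori combination, the actual iStone log format and the ELK Stack integration are not reproduced; the tokenisation and minimum support are assumptions.

```python
# Hedged sketch: plain Eclat frequent-itemset mining over tokenised log messages.
# Toy transactions and minimum support are illustrative; the thesis's hybrid
# Eclat/Apriori variant and ELK integration are not shown.
from collections import defaultdict

def eclat(transactions, min_support=2):
    """Return {itemset: support} for all itemsets with support >= min_support."""
    tidlists = defaultdict(set)                    # item -> ids of transactions containing it
    for tid, items in enumerate(transactions):
        for item in set(items):
            tidlists[item].add(tid)

    frequent = {}

    def extend(prefix, candidates):
        # candidates: list of (item, tidlist) pairs that may still extend `prefix`
        while candidates:
            item, tids = candidates.pop()
            if len(tids) < min_support:
                continue
            itemset = prefix + (item,)
            frequent[itemset] = len(tids)
            # depth-first: intersect tid-lists with the remaining items
            extend(itemset, [(other, tids & other_tids)
                             for other, other_tids in candidates])

    extend((), sorted(tidlists.items(), key=lambda kv: kv[0]))
    return frequent

# Toy "transactions": tokens appearing together in the same log message.
logs = [
    ["ERROR", "timeout", "orderservice"],
    ["ERROR", "timeout", "paymentservice"],
    ["WARN", "retry", "orderservice"],
    ["ERROR", "timeout", "orderservice", "retry"],
]
for itemset, support in sorted(eclat(logs).items(), key=lambda kv: -kv[1]):
    print(support, itemset)
```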
|
740 |
Statistical approaches for natural language modelling and monotone statistical machine translation / Andrés Ferrer, Jesús, 11 February 2010
This thesis gathers several contributions to statistical pattern recognition and, more specifically, to several natural language processing tasks. Several well-known statistical techniques are revisited, namely parameter estimation, loss function design and statistical modelling. These techniques are applied to natural language processing tasks such as document classification, natural language modelling and statistical machine translation.
Regarding parameter estimation, the smoothing problem is addressed by proposing a new constrained-domain maximum likelihood estimation (CDMLE) technique. The CDMLE technique avoids the need for the smoothing step that causes the maximum likelihood estimator to lose its properties. This technique is applied to document classification with the Naive Bayes classifier. The CDMLE technique is then extended to leaving-one-out maximum likelihood estimation and applied to language model smoothing; the results obtained on several natural language modelling tasks show an improvement in terms of perplexity.
Regarding the loss function, the design of loss functions other than the 0-1 loss is studied carefully. The study focuses on loss functions that, while retaining a decoding complexity similar to that of the 0-1 loss, provide greater flexibility. Several loss functions are analysed and presented on several machine translation tasks and with several translation models. Some translation rules that stand out for practical reasons, such as the direct translation rule, are also analysed, and the understanding of log-linear models, which are in fact particular cases of loss functions, is deepened.
Finally, several monotone translation models based on statistical modelling techniques are proposed. / Andrés Ferrer, J. (2010). Statistical approaches for natural language modelling and monotone statistical machine translation [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/7109
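As a rough illustration of the constrained-domain idea only (the thesis's exact CDMLE derivation and its leaving-one-out extension are not reproduced), the sketch below computes multinomial maximum likelihood estimates under a lower-bound constraint p_i >= epsilon, so unseen events receive non-zero probability without a separate smoothing step. The counts are toy data and epsilon is an arbitrary choice.

```python
# Hedged sketch: multinomial MLE under the constraint p_i >= eps, as one reading
# of the constrained-domain idea. Toy word counts illustrate the effect on
# unseen events; this is not the thesis's exact CDMLE formulation.
import numpy as np

def constrained_mle(counts, eps):
    """Maximize sum_i c_i*log(p_i) subject to sum_i p_i = 1 and p_i >= eps."""
    counts = np.asarray(counts, dtype=float)
    assert eps * len(counts) <= 1.0, "floor too large to fit in the simplex"
    clamped = np.zeros(len(counts), dtype=bool)
    while True:
        free_mass = 1.0 - eps * clamped.sum()          # probability left for free items
        free_total = counts[~clamped].sum()
        p = np.where(clamped, eps, counts * free_mass / max(free_total, 1e-300))
        newly_violating = (~clamped) & (p < eps)
        if not newly_violating.any():                  # KKT conditions satisfied
            return p
        clamped |= newly_violating                     # clamp violators to the floor

counts = [50, 30, 12, 5, 2, 1, 0, 0]                   # word counts, two unseen words
plain_mle = np.array(counts) / sum(counts)
cdml = constrained_mle(counts, eps=0.01)
print("plain MLE:      ", np.round(plain_mle, 3))      # zeros for unseen words
print("constrained MLE:", np.round(cdml, 3))           # unseen words get the floor eps
print("sums:", round(plain_mle.sum(), 6), round(cdml.sum(), 6))
```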
|