71

Automated structural software testing of autonomous vehicles to support field testing

Vânia de Oliveira Neves 15 May 2015 (has links)
An intelligent autonomous vehicle (or simply autonomous vehicle, AV) is a type of embedded system that integrates physical (hardware) and computational (software) components. Its main feature is the ability to move and operate partially or fully autonomously. Autonomy grows with the ability to perceive and move within the environment, with robustness, and with the ability to solve and perform tasks under the most diverse situations (intelligence). Autonomous vehicles are an important research topic with a direct impact on society. However, as the field progresses, secondary problems arise, such as how to know whether these systems have been sufficiently tested. One of the testing phases of an AV is field testing, in which the vehicle is taken to a loosely controlled environment and must freely execute the mission for which it was programmed. Field testing is generally used to ensure that autonomous vehicles show the intended behavior, but it uses no information about the structure of the code: the vehicle (hardware and software) may pass the field test even though important parts of the code were never executed. During field testing, the input data are collected in logs that can later be analyzed to evaluate the test results and to perform other types of offline testing. This thesis presents a set of proposals to support the analysis of field testing from the point of view of structural testing. The approach comprises a class model for the field-testing context, a tool that implements this model, and a genetic algorithm for test-data generation. It also presents heuristics to reduce the data set contained in a log without substantially reducing the coverage obtained, as well as the combination and mutation strategies used in the algorithm. Case studies conducted to evaluate the heuristics and strategies are also presented and discussed.
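To make the log-reduction idea concrete, here is a minimal sketch of one plausible heuristic of the kind described: greedy selection of log entries that preserves the structural coverage of the full log. The data model and function names are invented for illustration; the thesis's actual heuristics are not reproduced.

```python
# Illustrative sketch (not the thesis tool): greedily select log entries so
# that the reduced log preserves the structural coverage of the full log.
def reduce_log(entries, coverage_of):
    """entries: list of log records; coverage_of(e) -> set of covered code elements."""
    target = set()
    for e in entries:
        target |= coverage_of(e)          # coverage achieved by the full log
    selected, covered = [], set()
    # At each step, pick the entry adding the most not-yet-covered elements.
    while covered != target:
        best = max(entries, key=lambda e: len(coverage_of(e) - covered))
        gain = coverage_of(best) - covered
        if not gain:                      # defensive: nothing adds coverage
            break
        selected.append(best)
        covered |= gain
    return selected

# Toy usage: each record covers a set of branch IDs.
log = [{"id": 1, "cov": {1, 2}}, {"id": 2, "cov": {2, 3}}, {"id": 3, "cov": {3}}]
reduced = reduce_log(log, lambda e: e["cov"])
print([e["id"] for e in reduced])         # e.g. [1, 2]: same coverage, fewer entries
```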
72

Evaluating the error of measurement due to categorical scaling with a measurement invariance approach to confirmatory factor analysis

Olson, Brent 05 1900 (has links)
It has previously been determined that using 3 or 4 points on a categorized response scale will fail to produce a continuous distribution of scores. However, there is no evidence, thus far, revealing the number of scale points that may indeed possess an approximate or sufficiently continuous distribution. This study provides the evidence to suggest the level of categorization in discrete scales that makes them directly comparable to continuous scales in terms of their measurement properties. To do this, we first introduced a novel procedure for simulating discretely scaled data that was both informed and validated through the principles of the Classical True Score Model. Second, we employed a measurement invariance (MI) approach to confirmatory factor analysis (CFA) in order to directly compare the measurement quality of continuously scaled factor models to that of discretely scaled models. The simulated design conditions of the study varied with respect to item-specific variance (low, moderate, high), random error variance (none, moderate, high), and discrete scale categorization (number of scale points ranged from 3 to 101). A population analogue approach was taken with respect to sample size (N = 10,000). We concluded that there are conditions under which response scales with 11 to 15 scale points can reproduce the measurement properties of a continuous scale. Using response scales with more than 15 points may be, for the most part, unnecessary. Scales having from 3 to 10 points introduce a significant level of measurement error, and caution should be taken when employing such scales. The implications of this research and future directions are discussed. / Faculty of Education / Department of Educational and Counselling Psychology, and Special Education (ECPS) / Graduate
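As a minimal sketch of the simulation idea described above, continuous scores can be generated under the Classical True Score Model (observed = true score + error) and then cut into k categories; the study's MI/CFA comparison is not reproduced here, only the discretization step, and the error variance chosen is an assumption for illustration.

```python
import numpy as np

# Sketch of the general idea (not the study's exact procedure): discretize
# continuous observed scores to k response categories and see how closely the
# discrete scores track the continuous ones as k grows.
rng = np.random.default_rng(0)
N = 10_000
true = rng.normal(0.0, 1.0, N)             # true scores
observed = true + rng.normal(0.0, 0.5, N)  # observed = true score + random error

for k in (3, 5, 11, 15, 101):
    # Cut the observed range into k equal-width categories (a k-point scale).
    edges = np.linspace(observed.min(), observed.max(), k + 1)
    discrete = np.digitize(observed, edges[1:-1])   # category codes 0..k-1
    r = np.corrcoef(observed, discrete)[0, 1]
    print(f"{k:3d} points: r(continuous, discrete) = {r:.4f}")
```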
73

Information-entropy, spectral, and statistical investigations of vehicle-generated traffic data, with particular regard to the evaluation and dimensioning of FCD (floating car data) systems

Gössel, Frank 15 April 2005 (has links)
The subject of this thesis is the interface between the traffic process and the information process in systems for vehicle-generated traffic data acquisition. The investigations concentrate on the primary quantity, speed. The main goal of the theoretical and practical investigations is the qualified determination of macroscopic traffic-flow parameters from microscopic single-vehicle data. One focus of the work is the analysis of microscopic single-vehicle data by means of information-entropy and spectral considerations. These investigations aim to enable optimal use of the limited transmission and processing capacity of real FCD systems, to derive theoretical limits, and to provide theoretical justification for FCD system parameters used in practice. Based on empirical and theoretical investigations, the entropy of the information source "speed profile" is determined. It is shown that speed profiles can be modeled as Markov sources. From the entropy dynamics of speed profiles, an optimal sampling interval is derived. An analysis of the spectral properties of speed profiles shows that relationships exist between the spectra of speed profiles and the traffic state. Speed profiles have low-pass character. A power criterion is introduced for calculating the low-pass cutoff frequencies of empirical speed profiles. From the empirical cutoff frequencies determined in this way, an optimal sampling interval can be derived whose size approximately agrees with the sampling interval derived from the entropy dynamics. A simple indicator of the dynamics of speed profiles is the coefficient of variation of the single-vehicle speed. It is shown that acquiring and transmitting coefficients of variation of single-vehicle speeds is useful in FCD systems. The thesis provides a theoretical justification of the required equipment (penetration) rate in FCD systems. The performance of FCD systems is assessed on the basis of a confidence estimate for the random variable travel speed. The method used is suitable for comparing the performance of FCD systems in different scenarios (urban, rural-road, and motorway traffic). It is shown that in certain scenarios (especially urban traffic) FC data necessarily require fusion with other traffic data. For the statistical dimensioning and evaluation of an FCD system, the coefficient of variation of the mean travel speeds of the vehicles in a vehicle collective (collective coefficient of variation) is an essential parameter. It is shown that the collective coefficient of variation generally depends not only on the traffic state but also on the spatial and temporal structuring of the observation area. Models for the approximate determination of the collective coefficient of variation are derived and verified.
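As an illustration of the entropy analysis described above (the bin width and the simple plug-in estimator are assumptions for the example, not the thesis's exact method), a discretized speed profile can be treated as a first-order Markov source whose entropy rate is estimated from observed transition frequencies:

```python
import numpy as np

# Sketch: estimate the entropy rate (bits per sample) of a speed profile
# modeled as a first-order Markov source over discretized speed states.
def markov_entropy_rate(speeds, bin_width=5.0):
    states = (np.asarray(speeds, float) // bin_width).astype(int)  # 5 km/h bins
    n = states.max() + 1
    counts = np.zeros((n, n))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1                                   # transition counts
    row_sums = counts.sum(axis=1, keepdims=True)
    P = np.divide(counts, row_sums,
                  out=np.zeros_like(counts), where=row_sums > 0)  # transition matrix
    pi = row_sums.ravel() / row_sums.sum()                  # empirical state dist.
    logP = np.where(P > 0, np.log2(P, where=P > 0), 0.0)
    return -(pi[:, None] * P * logP).sum()                  # H = -sum pi_i P_ij log P_ij

speeds = [48, 50, 52, 51, 49, 30, 28, 27, 29, 31, 50, 52]  # km/h, toy profile
print(f"entropy rate ~ {markov_entropy_rate(speeds):.3f} bits/sample")
```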
74

Complex Vehicle Modeling: A Data Driven Approach

Schoen, Alexander C. 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / This thesis proposes an artificial neural network (NN) model to predict fuel consumption in heavy vehicles. The model uses predictors derived from vehicle speed, mass, and road grade, variables that are readily available from the telematics devices becoming an integral part of connected vehicles. The model predictors are aggregated over a fixed distance traveled (i.e., a window) instead of a fixed time interval; 1 km windows were found to be most appropriate for the vocations studied in this thesis. Two vocations were studied: refuse and delivery trucks. The proposed NN model was compared to two traditional models. The first is a parametric model similar to one found in the literature. The second is a linear regression model that uses the same features developed for the NN model. The confidence level of each of the three models was calculated in order to evaluate the models' variances. It was found that the NN models produce lower point-wise error, but their stability is not as high as that of the regression models. In order to improve the variance of the NN models, an ensemble based on the average of five k-fold models was created. Finally, the confidence level of each model was analyzed in order to understand how much error is expected from it; the mean training error was used to correct the ensemble predictions of the five k-fold models. The ensemble k-fold predictions are more reliable than those of the single NN and have a lower confidence interval than both the parametric and regression models.
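As a minimal sketch of the distance-based windowing idea (the column names, the 1 Hz sampling rate, and the aggregated statistics are assumptions for illustration, not the thesis's code), samples can be grouped by cumulative distance rather than by time:

```python
import numpy as np

# Sketch: aggregate telematics samples over fixed distance windows (e.g. 1 km)
# rather than fixed time intervals, producing one feature row per window.
def distance_windows(speed_mps, grade, mass_kg, dt_s=1.0, window_m=1000.0):
    speed_mps = np.asarray(speed_mps, float)
    grade = np.asarray(grade, float)
    mass_kg = np.asarray(mass_kg, float)
    dist = np.cumsum(speed_mps * dt_s)            # cumulative distance (m)
    window_id = (dist // window_m).astype(int)
    rows = []
    for w in np.unique(window_id):
        sel = window_id == w
        rows.append({
            "window": int(w),
            "mean_speed": float(np.mean(speed_mps[sel])),
            "std_speed": float(np.std(speed_mps[sel])),
            "mean_grade": float(np.mean(grade[sel])),
            "mass_kg": float(np.mean(mass_kg[sel])),  # mass ~ constant per trip
        })
    return rows  # feature rows for a regression or NN fuel-consumption model

speed = np.random.default_rng(1).uniform(5, 20, 600)  # 10 min of 1 Hz samples
grade = np.zeros(600)
mass = np.full(600, 12_000.0)
print(distance_windows(speed, grade, mass)[0])
```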
75

A Study on Applying Learning Techniques to Remote Sensing Data

Radhakrishnan, Aswathnarayan 06 October 2020 (has links)
No description available.
76

An approach based on interactive machine learning and natural interaction to support physical rehabilitation

JESSICA MARGARITA PALOMARES PECHO 10 August 2021 (has links)
Physiotherapy aims to improve people's physical functionality, seeking to mitigate the disabilities caused by injury, disorder, or disease. In this context, several computational technologies have been developed to support the rehabilitation process, such as end-user adaptable technologies. These technologies allow the physiotherapist to adapt applications and create activities with personalized characteristics according to the preferences and needs of each patient. This thesis proposes a low-cost approach based on interactive machine learning (iML) that helps physiotherapists create personalized activities for their patients easily and without software coding, from just a few examples in RGB video (captured by a digital video camera). To this end, we take advantage of deep-learning-based pose estimation to track, in real time, the key joints of the human body from image data. These data are processed as time series using the Dynamic Time Warping algorithm in conjunction with the K-Nearest Neighbors algorithm to create a machine learning model. Additionally, we use an anomaly detection algorithm to assess movements automatically. The architecture of our approach has two modules: one in which the physiotherapist presents personalized examples from which the system creates a model to recognize those movements, and another in which the patient performs the personalized movements while the system evaluates the execution. We assessed the usability of our system with physiotherapists from five rehabilitation clinics. In addition, experts clinically evaluated our machine learning model. The results indicate that our approach helps assess patients' movements automatically without direct monitoring by the physiotherapist, in addition to reducing the specialist's time required to train an adaptable system.
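A minimal sketch of the DTW-plus-nearest-neighbor core described above, on one-dimensional joint-angle series (the movement data and angle representation are invented for the example; the thesis works on multi-joint pose keypoints):

```python
import numpy as np

# Sketch: compare time series with Dynamic Time Warping (DTW) and classify a
# movement by its nearest labeled example (k-NN with k = 1).
def dtw(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(query, examples):
    """examples: list of (series, label) recorded by the physiotherapist."""
    return min(examples, key=lambda ex: dtw(query, ex[0]))[1]

# Toy usage: one example per movement; a noisy repetition is still matched.
raise_arm = [0, 30, 60, 90, 60, 30, 0]          # elbow angle over time (degrees)
squat = [0, -20, -45, -20, 0]
examples = [(raise_arm, "arm raise"), (squat, "squat")]
print(classify([0, 28, 65, 88, 55, 25, 2], examples))   # -> "arm raise"
```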
77

A Deep-Learning Approach to Evaluating the Navigability of Off-Road Terrain from 3-D Imaging

Pech, Thomas Joel 30 August 2017 (has links)
No description available.
78

Privacy-preserving Synthetic Data Generation for Healthcare Planning

Yang, Ruizhi January 2021 (has links)
Recently, a variety of machine learning techniques have been applied to different healthcare sectors, and the results appear promising. One such sector is healthcare planning, in which patient data are used to produce statistical models for predicting the load on different units of the healthcare system. This research introduces an attempt to design and implement a privacy-preserving synthetic data generation method adapted explicitly to patients' health data and to healthcare planning. A Privacy-Preserving Conditional Generative Adversarial Network (PPCGAN) is used to generate synthetic healthcare-event data, with carefully designed noise added to the gradients during training. The concept of differential privacy is used to ensure that adversaries cannot recover the exact training samples from the trained model. Notably, the goal is to produce digital patients and model their journey through the healthcare system.
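For intuition, the gradient-noising mechanism mentioned above is commonly realized in the DP-SGD style: clip each example's gradient, then add Gaussian noise before the update. The sketch below shows that pattern in NumPy under assumed parameter values; it is not the thesis's PPCGAN code.

```python
import numpy as np

# Hedged sketch of a DP-SGD-style step: clip per-example gradients to norm C,
# average, and add Gaussian noise so no single patient dominates the update.
def private_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1,
                     rng=np.random.default_rng(0)):
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))  # clip to C
    mean_grad = np.mean(clipped, axis=0)
    # Noise std sigma*C on the sum is sigma*C/B on the mean (B = batch size).
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=mean_grad.shape)
    return mean_grad + noise   # noisy average gradient used for the update

grads = [np.array([3.0, -1.0]), np.array([0.2, 0.4])]   # toy per-example grads
print(private_gradient(grads))
```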
79

Generation of Synthetic Clinical Trial Subject Data Using Generative Adversarial Networks

Lindell, Linus January 2024 (has links)
The development of new solutions incorporating artificial intelligence (AI) within the medical field is an area of great interest. However, access to comprehensive and diverse datasets is restricted due to the sensitive nature of the data. A potential solution is to generate synthetic datasets based on real medical data. Synthetic data could protect the integrity of the subjects while preserving the inherent information necessary for training AI models, and could be generated in greater quantity than otherwise available. This thesis project aims to generate reliable clinical trial subject data using a generative adversarial network (GAN). The main dataset used is a mock clinical trial dataset consisting of multiple subject visits; an additional dataset containing authentic medical data is also used for better insight into the model's ability to learn underlying relationships. The thesis also investigates training strategies for simulating the temporal dimension and the missing values in the data. The GAN model used is an altered version of the Conditional Tabular GAN (CTGAN) made compatible with the preprocessed clinical trial mock data, and multiple model architectures and numbers of training epochs are examined. The results show great potential for GAN models on clinical trial datasets, especially for real-life data. One model, trained on the authentic dataset, generates near-perfect synthetic data with respect to column distributions and correlations between columns. The results also show that classification models trained on synthetic data and tested on real data can match the performance of classification models trained on real data. While the synthetic data replicate the missing values, no definitive conclusion can be drawn regarding the temporal characteristics, due to the sparsity of the mock dataset and the lack of real correlations in it. Although the results are promising, further experiments on less sparse authentic datasets are required.
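For orientation, the open-source `ctgan` package exposes the baseline model the thesis modified; a minimal usage sketch follows, with invented column names and a toy table (a real run would need far more rows and more epochs):

```python
import pandas as pd
from ctgan import CTGAN  # pip install ctgan

# Hypothetical subject-visit table; column names are invented for the example.
data = pd.DataFrame({
    "age": [54, 61, 47, 70],
    "visit": [1, 2, 1, 3],                  # one row per subject visit
    "arm": ["treatment", "placebo", "treatment", "placebo"],
    "lab_value": [4.2, 5.1, 3.8, 6.0],
})
discrete_columns = ["visit", "arm"]          # CTGAN conditions on these

model = CTGAN(epochs=10)                     # small for the toy example
model.fit(data, discrete_columns)
synthetic = model.sample(1000)               # synthetic subject-visit rows
print(synthetic.head())
```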
80

Generation and Detection of Objects in Documents by Deep Learning Neural Network Models (DeepDocGen)

LOICK GEOFFREY HODONOU 06 February 2025 (has links)
The effectiveness of human-machine conversation systems, such as chatbots and virtual assistants, is directly related to the amount and quality of knowledge available to them. In the digital age, the diversity and quality of data have increased significantly, and data are available in many formats. Among these, the PDF (Portable Document Format) stands out as one of the best known and most widely used, serving sectors as varied as business, education, and research. These files contain a considerable amount of structured data, such as text, headings, lists, tables, and images. The content of PDF files can be extracted using dedicated tools, such as OCR (Optical Character Recognition), PdfMiner, Tabula, and others, which have proven suitable for this task. However, these tools can run into difficulties with the complex and varied presentation of PDF documents. Extraction accuracy can be compromised by the diversity of layouts, non-standardized formats, and embedded graphic elements, often forcing manual post-processing. Computer vision, and more specifically object detection, is a branch of machine learning that aims to locate and classify instances in images using models dedicated to the task; it is proving to be a viable way to accelerate the work performed by algorithms like OCR, PdfMiner, and Tabula and to improve their accuracy. Object detection models, being based on deep learning, require not only a substantial amount of training data but, above all, high-quality annotations, which directly determine the achievable accuracy and robustness. The diversity of layouts and graphic elements in PDF documents adds a further layer of complexity, requiring representatively annotated data so that models can learn to handle all possible variations. Given the volume of data needed for training, the annotation process quickly becomes a tedious and time-consuming task requiring human intervention to manually identify and label each relevant element. This task is not only slow but also prone to human error, often requiring additional checks and corrections. To find a middle ground between data quantity, annotation time, and annotation quality, this work proposes a pipeline that, given a limited number of PDF documents annotated with the categories text, title, list, table, and image, can create as many new, similar document layouts as the user desires. The pipeline goes further by filling the newly created layouts with content, providing synthetic document images and their respective annotations. With its simple, intuitive, and scalable structure, the pipeline can contribute to active learning, allowing detection models to be trained continuously and making them more effective and robust on real documents. In our experiments, when evaluating and comparing three detection models, we observed that RT-DETR (Real-Time DEtection TRansformer) achieved the best results, reaching a mean Average Precision (mAP) of 96.30 percent and surpassing Mask R-CNN (Region-based Convolutional Neural Networks) and Mask DINO (Mask DETR with Improved Denoising Anchor Boxes). The superiority of RT-DETR indicates its potential to become a reference solution for detecting features in PDF documents. These promising results pave the way for more efficient and reliable applications in automatic document processing.