261

Consistent and Accurate Face Tracking and Recognition in Videos

Liu, Yiran 23 September 2020 (has links)
No description available.
262

Dimension Reduction for Network Analysis with an Application to Drug Discovery

Chen, Huiyuan January 2020 (has links)
No description available.
263

A Conditional Generative Adversarial Network Demosaicing Strategy for Division of Focal Plane Polarimeters

Sargent, Garrett Craig January 2020 (has links)
No description available.
264

Leveraging attention-based deep neural networks for security vetting of Android applications

Pathak, Prabesh 01 June 2021 (has links)
No description available.
265

LSTMs and Deep Residual Networks for Carbohydrate and Bolus Recommendations in Type 1 Diabetes Management

Beauchamp, Jeremy T. 25 May 2021 (has links)
No description available.
266

Image-Based Biomarker Localization from Regression Networks

Cano Espinosa, Carlos 26 September 2019 (has links)
Artificial intelligence, and in particular deep learning, has become firmly established in our society over the last decade. Technological advances have made its use possible in every domain, driving research and the development of new methods that vastly outperform what until recently was considered the state of the art. Traditional techniques have been replaced by neural networks that deliver higher-quality results far more quickly. This has been possible mainly for two reasons. First, advances in graphics processors, which allow massive parallelization, have enabled techniques that were previously infeasible because of their computational cost. Second, the spread of a "data culture" in which data are treated as an essential resource, together with the falling cost of digital storage, has led to the emergence of large databases of every kind and for every purpose, growing exponentially over time and with ever-higher quality, since they are designed specifically to support these intelligent systems. One of the fields that has benefited most from these techniques is medicine. The era of digitization has made it possible to compile data on patients, diseases, treatments, and so on. However, something that distinguishes the medical setting from many other fields is that properly attending to a patient and assessing their condition requires an expert's judgment, which creates a certain mistrust of intelligent systems, since to date they largely lack explainability.
That is, they can, for example, predict, categorize, or localize a disease, but how they reached that conclusion is not easily interpretable, and understanding this is essential before making a decision that may be critical for the patient. Moreover, these techniques face an additional problem when applied in medicine. Databases usually come from different institutions, with considerable diversity in the acquisition machines used to obtain the data, each with different properties and characteristics. For example, there is often a significant difference in results when a method trained on data from one hospital is applied to data from another. Using these databases therefore requires that they be large enough to generalize adequately across this variability. In addition, databases are usually labeled by several specialists, which introduces a degree of subjective diversity and even some errors that must be taken into account. Nevertheless, in recent years a significant effort has been made to address all of these problems. In the field of interpretability in particular, although the topic is still in its earliest stages, many proposals are emerging that approach it from different perspectives. With this in mind, this research contributes by focusing on neural networks for biomarker regression, proposing a method capable of indicating the location of the structures, organs, tissues, or fluids from which the biomarkers are inferred. To do so, only the final aggregate value of each biomarker is available, and the goal is to obtain a localization that explains that value.
This thesis analyzes direct regression networks, proposing an architecture for scoring Coronary Artery Disease (CAD) and studying the different elements that compose it, including the loss function used and how it affects the result depending on the properties of the training data. The results are validated on two other regression problems: Pectoral Muscle Area (PMA) and Subcutaneous Fat Area (SFA). From this thesis we can conclude that direct regression of biomarkers is an entirely viable task, achieving high correlation between the values computed by these networks and the ground-truth reference obtained from a specialist in the relevant field. Second, the perception of these systems in the medical setting can be improved if, along with the regression result, a localization is provided that guides the specialist in inferring where the value comes from, focusing attention on a specific region of the image. This thesis delves into biomarker regression networks and extends them to localize the structures used to infer the biomarker value.
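The direct-regression setup above can be illustrated with a toy sketch (not the thesis architecture): arrays stand in for network outputs and specialist references, and the loss choices and the correlation index used for validation are computed directly. All values and names here are illustrative assumptions.

```python
import numpy as np

# Hypothetical specialist reference values for a biomarker (e.g., a CAD score)
# and corresponding network predictions; the thesis reports agreement between
# the two via a correlation index.
reference = np.array([1.2, 3.4, 2.1, 5.0, 4.3, 0.8])
predicted = np.array([1.0, 3.6, 2.5, 4.8, 4.1, 1.1])

def mse(y_true, y_pred):
    """Mean squared error: penalizes large deviations quadratically,
    so it is sensitive to outlier labels in the training data."""
    return float(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    """Mean absolute error: more robust to noisy or mislabeled samples."""
    return float(np.mean(np.abs(y_true - y_pred)))

def pearson(y_true, y_pred):
    """Pearson correlation between predictions and the reference,
    the kind of agreement index used to validate the regression."""
    return float(np.corrcoef(y_true, y_pred)[0, 1])
```

The choice between quadratic and absolute losses is exactly the kind of data-dependent trade-off the thesis studies: with noisy labels, MAE degrades more gracefully than MSE.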
267

VISUAL SALIENCY ANALYSIS, PREDICTION, AND VISUALIZATION: A DEEP LEARNING PERSPECTIVE

Mahdi, Ali Majeed 01 August 2019 (has links) (PDF)
In recent years, great progress has been made in the prediction of human eye fixations. Several studies have employed deep learning to achieve high prediction accuracy. These studies rely on deep networks pre-trained for object classification, exploiting them either as a transfer-learning problem or by using the weights of the pre-trained network as the initialization for learning a saliency model. Such pre-trained networks are used because the datasets of human fixations available for training a deep learning model are relatively small. Another, less frequently addressed, problem is that the computational cost of such deep learning models requires expensive hardware. In this dissertation, two approaches are proposed to tackle the problems mentioned above. The first approach, codenamed DeepFeat, incorporates the deep features of convolutional neural networks pre-trained for object and scene classification. It is the first approach to use deep features without further learning. The performance of the DeepFeat model is extensively evaluated over a variety of datasets using a variety of implementations. The second approach is a deep learning saliency model, codenamed ClassNet. Two main differences separate ClassNet from other deep learning saliency models. ClassNet is the only deep learning saliency model that learns its weights from scratch. In addition, ClassNet treats the prediction of human fixations as a classification problem, while other deep learning saliency models treat it as a regression problem or as a classification of a regression problem.
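The DeepFeat idea of building a saliency map from pre-trained features with no further learning can be sketched as follows; the aggregation rule here (channel sum plus min-max normalization) is a simplified stand-in for the model's actual combination scheme, and the shapes are illustrative.

```python
import numpy as np

def saliency_from_features(feature_maps):
    """Aggregate pre-trained CNN feature maps of shape (C, H, W) into a
    single saliency map without any learning: sum the channel activations
    at each spatial location and min-max normalize the result to [0, 1]."""
    agg = feature_maps.sum(axis=0)   # accumulate activation evidence per pixel
    agg = agg - agg.min()            # shift so the minimum is exactly 0
    peak = agg.max()
    return agg / peak if peak > 0 else agg

# Random activations standing in for features from a network pre-trained
# on object/scene classification.
features = np.random.default_rng(0).random((16, 8, 8))
saliency = saliency_from_features(features)
```

In practice the feature maps would come from several layers of a pre-trained network, upsampled to a common resolution before aggregation.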
268

Learning 3D Shape Representations for Reconstruction and Modeling

Biao, Zhang 04 1900 (has links)
Neural fields, also known as neural implicit representations, are powerful for modeling 3D shapes. They encode shapes as continuous functions mapping 3D coordinates to scalar values like the signed distance function (SDF) or occupancy probability. Neural fields represent complex shapes using an MLP. The MLP takes spatial coordinates, undergoes nonlinear transformations, and approximates the continuous function of the neural field. During training, the MLP's weights are learned through backpropagation. This PhD thesis presents novel methods for shape representation learning and generation with neural fields. The first part introduces an interpretable and high-quality reconstruction method for neural fields. A neural network predicts labeled points, improving surface visualization and interpretability. The method achieves accurate reconstruction even with rendered image input. A binary classifier, based on predicted labeled points, represents the shape's surface with precision. The second part focuses on shape generation, a challenge in generative modeling. Complex data structures like oct-trees or BSP-trees are challenging to generate with neural networks. To address this, a two-step framework is proposed: an autoencoder compresses the neural field into a fixed-size latent space, followed by training generative models within that space. Incorporating sparsity into the shape autoencoding network reduces dimensionality while maintaining high-quality shape reconstruction. Autoregressive transformer models enable the generation of complex shapes with intricate details. This research explores the potential of denoising diffusion models for 3D shape generation. The latent space efficiency is improved by further compression, leading to more efficient and effective generation of high-quality shapes. Remarkable shape reconstruction results are achieved, even without sparse structures. 
The approach combines the latest generative model advancements with novel techniques, advancing the field. It has the potential to revolutionize shape generation in gaming, manufacturing, and beyond. In summary, this PhD thesis proposes novel methods for shape representation learning, generation, and reconstruction. It contributes to the field of shape analysis and generation by enhancing interpretability, improving reconstruction quality, and pushing the boundaries of efficient and effective 3D shape generation.
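The MLP skeleton of a neural field described above can be sketched minimally; the random weights below stand in for what backpropagation would learn, and the layer sizes are illustrative assumptions, not the thesis configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

class NeuralField:
    """Tiny MLP f: R^3 -> R, the skeleton of a neural implicit
    representation: spatial coordinates go in, a scalar (e.g., a signed
    distance or occupancy logit) comes out. Weights here are random,
    standing in for values learned by backpropagation."""

    def __init__(self, hidden=32):
        self.w1 = rng.normal(0.0, 0.5, (3, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.5, (hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, xyz):
        h = np.maximum(xyz @ self.w1 + self.b1, 0.0)  # ReLU nonlinearity
        return (h @ self.w2 + self.b2).squeeze(-1)    # one scalar per point

field = NeuralField()
points = rng.uniform(-1.0, 1.0, (5, 3))  # query points in the unit cube
values = field(points)                   # continuous field values
```

Because the function is continuous in the coordinates, the surface can be extracted at any resolution, which is what makes neural fields attractive as a shape representation.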
269

AI-augmented analysis onto the impact of the containment strategies and climate change to pandemic

Dong, Shihao January 2023 (has links)
This thesis uses a multi-task long short-term memory (LSTM) model to investigate the correlation between containment strategies, climate change, and the number of COVID-19 transmissions and deaths. The study examines how well different factors predict the number of daily confirmed cases and deaths, in order to further explore the correlation between those factors and case counts. The initial assessment suggests that containment strategies, specifically vaccination policies, have a more significant impact on the accuracy of predicting daily confirmed COVID-19 cases and deaths than climate factors such as the daily average 2-meter surface temperature. Additionally, the study reveals that interactions among certain impact factors have unpredictable effects on predictive accuracy. However, the lack of interpretability of deep learning models poses a significant challenge for real-world applications. This study provides valuable insights into the correlation between daily confirmed cases, daily deaths, containment strategies, and climate change, and highlights areas for further research. It is important to note that while the study reveals a correlation, it does not imply causation, and further research is needed to understand the trends of the pandemic.
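The multi-task setup can be sketched as one shared recurrent trunk with two output heads, one per prediction target; this hand-rolled LSTM cell and the feature and hidden sizes are illustrative assumptions, not the thesis configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; the four gates are slices of a fused projection."""
    z = x @ W + h @ U + b
    H = h.shape[-1]
    i = 1.0 / (1.0 + np.exp(-z[:, :H]))        # input gate
    f = 1.0 / (1.0 + np.exp(-z[:, H:2*H]))     # forget gate
    o = 1.0 / (1.0 + np.exp(-z[:, 2*H:3*H]))   # output gate
    g = np.tanh(z[:, 3*H:])                    # candidate cell state
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

# Shared trunk over a window of daily features (policy indices, temperature,
# etc.), then two task heads: predicted confirmed cases and predicted deaths.
D, H, T, B = 4, 8, 7, 2   # input features, hidden size, window length, batch
W = rng.normal(0.0, 0.3, (D, 4 * H))
U = rng.normal(0.0, 0.3, (H, 4 * H))
b = np.zeros(4 * H)
head_cases = rng.normal(0.0, 0.3, (H, 1))
head_deaths = rng.normal(0.0, 0.3, (H, 1))

x_seq = rng.normal(size=(T, B, D))
h, c = np.zeros((B, H)), np.zeros((B, H))
for x in x_seq:
    h, c = lstm_step(x, h, c, W, U, b)
cases_pred = h @ head_cases     # task 1: daily confirmed cases
deaths_pred = h @ head_deaths   # task 2: daily deaths
```

Sharing the trunk lets both targets be learned from one representation of the input window, which is what "multi-task" refers to here.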
270

Fusion for Object Detection

Wei, Pan 10 August 2018 (has links)
In a three-dimensional world, to perceive the objects around us, we wish not only to classify them but also to know where they are. The task of object detection combines both classification and localization: in addition to predicting the object category, we also predict where the object is from sensor data. Since it is not known ahead of time how many objects of interest are in the sensor data or where they are, the output size of object detection may change, which makes the problem difficult. In this dissertation, I focus on the task of object detection and use fusion to improve detection accuracy and robustness. More specifically, I propose a method to calculate a measure of conflict. This method does not need external knowledge about the credibility of each source; instead, it uses the information from the sources themselves to help assess the credibility of each source. I apply the proposed measure of conflict to fuse independent sources of tracking information from various stereo cameras. In addition, I propose a computational intelligence system for more accurate object detection in real time. The proposed system uses online image augmentation before the detection stage during testing and fuses the detection results afterward. The fusion method is computationally intelligent, based on a dynamic analysis of agreement among inputs. Compared with other fusion operations such as average, median, and non-maximum suppression, the proposed method produces more accurate results in real time. I also propose a multi-sensor fusion system, which incorporates the advantages and mitigates the disadvantages of each type of sensor (LiDAR and camera). Generally, a camera can provide more texture and color information, but it cannot work in low visibility. LiDAR, on the other hand, can provide accurate point positions and works at night or in moderate fog or rain.
The proposed system uses the advantages of both camera and LiDAR and mitigates their disadvantages. The results show that, compared with LiDAR or camera detection alone, the fused result can extend the detection range up to 40 meters with increased detection accuracy and robustness.
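The agreement-based fusion idea can be illustrated with a toy confidence-weighted box average, in contrast to NMS, which keeps one box and discards the rest; this operator and the box format are a simplified stand-in for the dissertation's conflict-based fusion, not its actual method.

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two [x1, y1, x2, y2] boxes,
    the standard agreement measure between detections."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def fuse_boxes(boxes, scores):
    """Confidence-weighted average of agreeing boxes: unlike NMS,
    detections that agree reinforce each other instead of all but
    one being suppressed."""
    w = np.asarray(scores, dtype=float)
    w = w / w.sum()
    return np.asarray(boxes, dtype=float).T @ w  # weighted corner average

# Two overlapping detections of the same object from different sources.
boxes = [[0.0, 0.0, 2.0, 2.0], [0.2, 0.2, 2.2, 2.2]]
scores = [0.5, 0.5]
fused = fuse_boxes(boxes, scores)
```

A real system would first group boxes by IoU so that only detections of the same object are averaged together.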
