  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
711

Burns Depth Assessment Using Deep Learning Features

Abubakar, Aliyu, Ugail, Hassan, Smith, K.M., Bukar, Ali M., Elmahmudi, Ali 20 March 2022 (has links)
Yes / Burn depth evaluation is a lifesaving and very challenging task that requires objective techniques. Visual assessment is the method most commonly used by surgeons, but its accuracy ranges only between 60 and 80%, and it is subjective, lacking any standard guideline. Currently, the only standard adjunct to clinical evaluation of burn depth is Laser Doppler Imaging (LDI), which measures microcirculation within the dermal tissue and provides the burn's expected healing time, which corresponds to the depth of the injury, achieving up to 100% accuracy. However, the use of LDI is limited by several factors: high purchase and diagnostic costs; sensitivity to movement, which makes paediatric patients difficult to assess; the high level of human expertise required to operate the device; and the fact that 100% accuracy is only possible after 72 h. These shortfalls necessitate an objective and affordable technique. Method: In this study, we leverage deep transfer learning, using two pretrained models, ResNet50 and VGG16, to extract image features (ResFeat50 and VggFeat16) from a burn dataset of 2080 RGB images composed of healthy skin, first-degree, second-degree and third-degree burns, evenly distributed. We then use One-versus-One Support Vector Machines (SVM) for multi-class prediction, trained with 10-fold cross-validation to achieve an optimum trade-off between bias and variance. Results: The proposed approach yields a maximum prediction accuracy of 95.43% using ResFeat50 and 85.67% using VggFeat16. The average recall, precision and F1-score are 95.50%, 95.50% and 95.50% for ResFeat50, and 85.75%, 86.25% and 85.75% for VggFeat16. Conclusion: The proposed pipeline achieves state-of-the-art prediction accuracy and, notably, indicates that the decision on whether an injury requires surgical intervention such as skin grafting can be made in less than a minute.
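The abstract gives no implementation details, but a minimal sketch of the described pipeline, pretrained-CNN features fed to a one-vs-one SVM under 10-fold cross-validation, might look like the following (the synthetic data stands in for the 2080-image burn dataset, and the kernel choice is an assumption):

```python
import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Pretrained ResNet50 as a fixed feature extractor ("ResFeat50"):
# global average pooling yields one 2048-d vector per image.
extractor = ResNet50(weights="imagenet", include_top=False, pooling="avg")

# Synthetic stand-in for the burn dataset: 40 images, 4 balanced classes
# (healthy, first-, second-, third-degree).
X_img = (np.random.rand(40, 224, 224, 3) * 255).astype("float32")
y = np.repeat([0, 1, 2, 3], 10)

X = extractor.predict(preprocess_input(X_img))

# SVC trains one-vs-one binary classifiers internally; the linear kernel
# is an assumption, since the abstract does not specify one.
svm = SVC(kernel="linear", decision_function_shape="ovo")
scores = cross_val_score(svm, X, y, cv=10)  # 10-fold cross-validation
print(f"mean accuracy: {scores.mean():.4f}")
```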
712

Application of Plasticity Theory to Reinforced Concrete Deep Beams

Ashour, Ashraf, Yang, Keun-Hyeok 11 1900 (has links)
yes / This paper reviews the application of plasticity theory to reinforced concrete deep beams. Both the truss analogy and the mechanism approach were employed to predict the capacity of reinforced concrete deep beams. In addition, most current codes of practice, for example Eurocode 1992 and ACI 318-05, recommend the strut-and-tie model for designing reinforced concrete deep beams. Compared with methods based on empirical or semi-empirical equations, the strut-and-tie model and mechanism analyses are more rational, adequately accurate and sufficiently simple for estimating the load capacity of reinforced concrete deep beams. However, selecting the effectiveness factor of concrete remains a problem, as reflected in the wide range of values reported in the literature for deep beams.
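For a sense of where the effectiveness factor enters, the ACI 318 strut-and-tie formulation takes the nominal strut capacity as F_ns = f_ce * A_cs, with an effective strength f_ce = 0.85 * beta_s * f'c in which beta_s plays the role of the effectiveness factor. A minimal sketch with hypothetical member values:

```python
def strut_capacity_kN(fc_prime_MPa, beta_s, width_mm, thickness_mm):
    """Nominal strut capacity per the ACI 318 strut-and-tie model:
    F_ns = f_ce * A_cs, with effective strength f_ce = 0.85 * beta_s * f'c.
    beta_s is the strut effectiveness factor (e.g. 0.75 for a bottle-shaped
    strut with adequate transverse reinforcement)."""
    f_ce = 0.85 * beta_s * fc_prime_MPa          # effective strength, MPa
    area = width_mm * thickness_mm               # strut cross-section, mm^2
    return f_ce * area / 1000.0                  # N -> kN

# Hypothetical deep-beam strut: 30 MPa concrete, 150 x 300 mm strut section.
print(strut_capacity_kN(30.0, 0.75, 150.0, 300.0))  # ~860 kN
```

The wide literature range for the effectiveness factor that the paper highlights corresponds to the spread of plausible beta_s values here; varying it directly rescales the predicted capacity.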
713

Self-supervised monocular image depth learning and confidence estimation

Chen, L., Tang, W., Wan, Tao Ruan, John, N.W. 17 June 2020 (has links)
No / We present a novel self-supervised framework for monocular image depth learning and confidence estimation. Our framework reduces the amount of ground-truth annotation data required for training Convolutional Neural Networks (CNNs), which is often an obstacle to the fast deployment of CNNs in many computer vision tasks. Our DepthNet adopts a novel, fully differentiable, patch-based cost function built on the Zero-Mean Normalized Cross-Correlation (ZNCC), using multi-scale patches as its matching and learning strategy. This approach greatly increases the accuracy and robustness of depth learning. The proposed patch-based cost function naturally provides a 0-to-1 confidence value, which is then used to self-supervise the training of a parallel network for confidence-map learning and estimation, exploiting the fact that ZNCC is a normalized measure of similarity that can be approximated as the confidence of the depth estimate. The confidence-map network therefore also learns in a self-supervised manner and runs in parallel to the DepthNet. Evaluation on the KITTI depth prediction evaluation dataset and the Make3D dataset shows that our method outperforms state-of-the-art results.
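The abstract leans on ZNCC as a similarity measure that doubles as a confidence signal. A minimal NumPy sketch of the measure itself (the patch size and the confidence mapping are illustrative, not the paper's exact formulation):

```python
import numpy as np

def zncc(patch_a, patch_b, eps=1e-8):
    """Zero-Mean Normalized Cross-Correlation between two image patches.
    Returns a value in [-1, 1]; 1 means a perfect match, so (zncc + 1) / 2
    can serve as a rough 0-to-1 confidence, as the abstract suggests."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + eps
    return (a * b).sum() / denom

# Hypothetical 7x7 patches from a stereo pair.
rng = np.random.default_rng(0)
left = rng.random((7, 7))
right = left + 0.05 * rng.random((7, 7))   # nearly matching patch
print(zncc(left, right))                    # close to 1.0
```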
714

Design Methods and Processes for ML/DL models

John, Meenu Mary January 2021 (has links)
Context: With the advent of Machine Learning (ML) and especially Deep Learning (DL) technology, companies are increasingly using Artificial Intelligence (AI) in systems, along with electronics and software. Nevertheless, the end-to-end process of developing, deploying and evolving ML and DL models in companies brings challenges related to the design and scaling of these models. For example, access to and availability of data is often difficult, and activities such as collecting, cleaning, preprocessing and storing data, as well as training, deploying and monitoring the model(s), are complex. Regardless of their level of expertise and/or access to data scientists, companies across the embedded-systems domain struggle to build high-performing models due to a lack of established and systematic design methods and processes. Objective: The overall objective is to establish systematic and structured design methods and processes for the end-to-end process of developing, deploying and successfully evolving ML/DL models. Method: To achieve this objective, we conducted our research in close collaboration with companies in the embedded-systems domain, using different empirical research methods such as case study, action research and literature review. Results and Conclusions: This research provides six main results. First, it identifies the activities that companies undertake in parallel to develop, deploy and evolve ML/DL models, and the challenges associated with them. Second, it presents a conceptual framework for the continuous delivery of ML/DL models to accelerate AI-driven business in companies. Third, it presents a framework based on current literature to accelerate the end-to-end deployment process and advance knowledge on how to integrate, deploy and operationalize ML/DL models. Fourth, it develops a generic framework with five architectural alternatives for deploying ML/DL models at the edge, ranging from a centralized architecture that prioritizes (re)training in the cloud to a decentralized architecture that prioritizes (re)training at the edge. Fifth, it identifies key factors to help companies decide which architecture to choose for deploying ML/DL models. Finally, it explores how MLOps, as a practice that brings together data-science teams and operations, ensures the continuous delivery and evolution of models. / Due to copyright reasons, the articles are not included in the full text online.
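To make the cloud-to-edge spectrum concrete, here is a toy sketch; the five alternative names and the decision factors below are hypothetical placeholders, since the abstract does not enumerate them:

```python
from enum import Enum

class EdgeDeployment(Enum):
    """Hypothetical spectrum from cloud-centric to edge-centric
    (re)training, echoing the framework's five alternatives."""
    CLOUD_TRAIN_CLOUD_SERVE = 1
    CLOUD_TRAIN_EDGE_SERVE = 2
    HYBRID_RETRAIN = 3
    EDGE_FINETUNE = 4
    EDGE_TRAIN_EDGE_SERVE = 5

def choose(edge_compute_ok, data_must_stay_local, link_reliable):
    """Toy decision rule over factors of the kind the thesis identifies
    (compute at the edge, data locality, connectivity)."""
    if data_must_stay_local and edge_compute_ok:
        return EdgeDeployment.EDGE_TRAIN_EDGE_SERVE
    if data_must_stay_local:
        return EdgeDeployment.EDGE_FINETUNE
    if not link_reliable:
        return EdgeDeployment.HYBRID_RETRAIN
    return EdgeDeployment.CLOUD_TRAIN_EDGE_SERVE

print(choose(edge_compute_ok=False, data_must_stay_local=True, link_reliable=True))
```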
715

Chen,W Official Thesis Submission.pdf

Winifred X Chen (14227994) 07 December 2022 (has links)
Identification of the phases of a large-scale natural disaster is often clouded by classes and sources of deep uncertainty, which proliferate further as disaster events unfold. Across the three distinct phases of natural disaster relief operations, it is neither necessary nor viable to eliminate all uncertainty from a natural disaster system. Instead, reducing the time taken to minimize particular uncertainties may be sufficient to execute the preparation phase and carry out a response. The goal of this research is to understand the intricacies associated with forecastable and rapid-onset natural disaster events and to restructure already-established tools to assist first responders and relevant decision-makers in the planning and response phases. Understanding specific foraging actions will support the considerations that must be made during the preparation phase, while tying in other notable concepts, including the use of a problem-structuring technique from the decision-making-under-deep-uncertainty literature to contextualize the system of interest. The restructuring of a planning-based into a response-based problem-structuring tool will also highlight the added value of shifting from a static to a dynamic perspective. Following contextualization, an adaptive pathways approach will serve as a practical decision-support tool, allowing for open and flexible progression through the response phase of a natural disaster as events unfold, inclusive of specific triggers indicating a new event occurrence and thus a new decision point. This paper addresses conditional criterion-based decision-making, focusing on an adaptive pathways approach in response to flooding incidents.
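The adaptive-pathways idea described above (monitor for triggers, each of which opens a new decision point) can be sketched as a simple event loop. Everything below, the trigger names, thresholds, and actions, is hypothetical illustration, not the thesis's actual model:

```python
from dataclasses import dataclass

@dataclass
class Trigger:
    name: str
    threshold: float   # e.g. river gauge level in metres
    next_action: str   # response action unlocked at this decision point

# Hypothetical flood-response pathway: each observed trigger opens
# a new decision point, keeping the response open and adaptable.
PATHWAY = [
    Trigger("river_gauge_warning", 4.0, "pre-position relief supplies"),
    Trigger("river_gauge_flood",   5.5, "evacuate low-lying zones"),
    Trigger("levee_overtopped",    6.5, "switch to rescue operations"),
]

def step(observed_level_m, pathway):
    """Return the actions unlocked by the current observation."""
    return [t.next_action for t in pathway if observed_level_m >= t.threshold]

for level in (3.2, 4.8, 6.7):   # simulated gauge readings over time
    print(level, "->", step(level, PATHWAY))
```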
716

Automated Pre-Play Analysis of American Football Formations Using Deep Learning

Newman, Jacob DeLoy 29 June 2022 (has links)
Annotation and analysis of sports video is a time-consuming task that, once automated, will benefit coaches, players, and spectators. American football, as the most watched sport in the United States, could especially benefit from this automation, since manual annotation and analysis of recorded video of American football games is an inefficient and tedious process. Currently, most college football programs focus on annotating offensive formations. As a first step toward further research in this unique application, we use computer vision and deep learning to analyze an overhead image of a football play immediately before the play begins. This analysis consists of locating and labeling individual football players, as well as identifying the formation of the offensive team. We obtain greater than 90% accuracy on both player detection and labeling, and 84.8% accuracy on formation identification. These results demonstrate the feasibility of building a complete American football strategy analysis system using artificial intelligence.
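A minimal sketch of the two-stage idea (detect players in an overhead pre-snap image, then derive formation cues from their positions); the detector choice and the positional features are assumptions, not the thesis's actual architecture:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Generic pretrained detector standing in for the player-detection stage.
detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def detect_players(image, score_thresh=0.8):
    """image: float tensor (3, H, W) in [0, 1]. Returns Nx4 boxes."""
    with torch.no_grad():
        out = detector([image])[0]
    keep = out["scores"] >= score_thresh
    return out["boxes"][keep]

def formation_features(boxes):
    """Turn detected boxes into a crude positional feature: player
    centroids, which a downstream classifier could map to a formation
    label (e.g. 'I-form', 'shotgun'). Purely illustrative."""
    cx = (boxes[:, 0] + boxes[:, 2]) / 2
    cy = (boxes[:, 1] + boxes[:, 3]) / 2
    return torch.stack([cx, cy], dim=1)

image = torch.rand(3, 480, 640)     # stand-in for an overhead play image
print(formation_features(detect_players(image)).shape)
```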
717

Analysis and Applications of Deep Learning Features on Visual Tasks

Shi, Kangdi January 2022 (has links)
Benefiting from hardware development, deep learning (DL) has become a popular research area in recent decades. The convolutional neural network (CNN) is a critical deep learning tool that has been applied to many computer vision problems, and the data-driven approach has unleashed CNNs' potential to acquire impressive learning ability with minimal human supervision. As a result, many computer vision problems have been brought into the spotlight again. In this thesis, we investigate the application of deep-learning-based methods, particularly the role of deep learning features, in two representative visual tasks: image retrieval and image inpainting. Image retrieval aims to find, in a dataset, images similar to a query image. In the proposed image retrieval method, we use canonical correlation analysis to explore the relationship between matching and non-matching features from a pre-trained CNN, and generate compact transformed features. The level of similarity between two images is determined by a hypothesis test regarding the joint distribution of transformed image feature pairs. The proposed approach is benchmarked against three popular statistical analysis methods: Linear Discriminant Analysis (LDA), Principal Component Analysis with whitening (PCAw), and Supervised Principal Component Analysis (SPCA). Our approach achieves competitive retrieval performance on the Oxford5k, Paris6k, rOxford, and rParis datasets. In addition, an image inpainting framework is proposed to reconstruct the corrupted region of an image progressively. Specifically, we design a feature extraction network inspired by the Gaussian and Laplacian pyramids, which are usually used to decompose an image into different frequency components. We then use a two-branch iterative inpainting network to progressively recover the corrupted region on high- and low-frequency features respectively, fusing the high- and low-frequency features from each iteration. An enhancement model is also introduced that employs neighbouring iterations' features to further improve the features of intermediate iterations. The proposed network is evaluated on popular image inpainting datasets such as Paris StreetView, CelebA, and Places2. Extensive experiments prove the validity of the proposed method and demonstrate competitive performance against the state-of-the-art. / Thesis / Doctor of Philosophy (PhD)
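As an illustration of the retrieval side, a minimal sketch of learning a CCA transform over paired CNN features and scoring a query against the database in the transformed space; the cosine similarity below is a simplified stand-in for the thesis's hypothesis test, and the feature pairing is synthetic:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)

# Hypothetical pre-trained CNN features: rows of X and Y are features
# of image pairs known to depict the same object (noisy matching views).
X = rng.standard_normal((200, 128))
Y = X + 0.1 * rng.standard_normal((200, 128))

cca = CCA(n_components=32)
cca.fit(X, Y)                 # learn the compact transformed feature space

def transformed(feat):
    """Project raw CNN features into the learned CCA space."""
    return cca.transform(feat.reshape(1, -1))[0]

def similarity(query_feat, db_feat):
    """Cosine similarity in CCA space, a simplified stand-in for the
    hypothesis test on the joint distribution of feature pairs."""
    q, d = transformed(query_feat), transformed(db_feat)
    return float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d)))

print(similarity(X[0], Y[0]))   # matching pair -> high similarity
print(similarity(X[0], Y[1]))   # non-matching pair -> lower
```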
718

Effects of pruning timing, leaf removal, and shoot thinning on 'MidSouth' winegrape quality in South Mississippi

Williams, Haley Nicole 13 May 2022 (has links)
‘MidSouth’, a relatively low-maintenance interspecific hybrid bunch grape currently grown in South Mississippi, has low sugar and high acid levels for red wine use. Two studies, conducted at the Mississippi State University McNeill Research Unit in 2020 and 2021, determined the effects of pruning timing, leaf removal, and shoot thinning on ‘MidSouth’ development and fruit and wine quality. Treatments in the first study included early versus normal pruning timing, both with and without leaf removal; treatments in the second study included leaf removal, shoot thinning, and control vines. Data were collected on cluster temperatures, leaf chlorophyll, berries per cluster, berry and cluster weights, crop yield, Ravaz index, total soluble solids, titratable acidity, juice pH, monomeric anthocyanin pigment, and total phenolic content. The studies determined that ‘MidSouth’ fruit quality can be altered through canopy manipulation, but the desired effects were not strong enough for these practices to be recommended.
719

Predicting Transcription Factor Binding in Humans with Context-Specific Chromatin Accessibility Profiles Using Deep Learning

Cazares, Tareian January 2022 (has links)
No description available.
720

Contributions to Document Image Analysis: Application to Music Score Images

Castellanos, Francisco J. 25 November 2022 (has links)
This thesis advances the state of the art in several key processes within the typical workflow of optical music recognition (OMR) systems. Document analysis is an early and crucial stage of that workflow, whose goal is to provide a simplified version of the incoming information, i.e. of the music document images. The remaining OMR processes can exploit this simplification to solve their respective tasks more easily, focusing only on the information they need. A clear example is the process devoted to recognizing the regions where the staves lie: once their coordinates are obtained, the individual staves can be processed to retrieve the musical symbol sequence they contain and thus build a digital version of their content.
The research carried out for this thesis is backed by a series of contributions published in high-impact journals and international conferences. Specifically, the thesis comprises 4 articles published in journals indexed in the Journal Citation Reports and ranked in the first quartiles by impact factor, with a total of 58 citations according to Google Scholar. It also includes 3 papers presented at different editions of an international conference rated Class A in the GII-GRIN-SCIE classification. The publications address closely related topics, focusing mainly on document analysis oriented to OMR, with additional work on music sequence transcription and domain-adaptation techniques. Some publications also show that several of these techniques can be applied to other types of document images, making the proposed solutions more interesting for their ability to generalize and adapt to other contexts. Beyond document analysis, the thesis also studies how these processes affect the final transcription of the music notation, which is, after all, the ultimate goal of OMR systems, but which had not been investigated until now. Finally, given the vast amount of data that neural networks require to build sufficiently robust models, the use of domain-adaptation techniques is also studied, in the hope that their success will open the door to the future applicability of OMR systems in real-world settings. This is especially relevant in OMR because of the large number of documents lacking the ground-truth data needed to train neural network models; a solution that leverages the limited labeled collections to process documents of other kinds would allow a more practical use of these automatic transcription tools. After completing this thesis, it is clear that OMR research has not yet reached the limits of what the technology can achieve, and several avenues remain to be explored. Indeed, the work carried out has opened new horizons that could be studied so that, one day, these systems can be used to automatically digitize and transcribe written and printed musical heritage at large scale and in a reasonable time.
Among these new lines of research, the following stand out:
· This thesis includes published contributions that use a domain-adaptation technique for document analysis with good results. Exploring new domain-adaptation techniques could be key to building robust neural network models without manually labeling part of every music collection to be digitized.
· Applying domain-adaptation techniques to other processes, such as music sequence transcription, could ease the training of models for that task. Supervised learning algorithms require qualified staff to transcribe part of the collections manually, and the time and economic costs of doing so represent a considerable effort if the final goal is to transcribe this entire cultural heritage. It would therefore be interesting to study the applicability of these techniques in order to drastically reduce this need.
· During the thesis, the effect of document scale on the performance of several OMR processes was studied. Besides scale, another important factor is orientation: document images are not always perfectly aligned and may exhibit rotations or deformations that cause errors in detecting the information. It would therefore be interesting to study how these deformations affect transcription and to find solutions viable for the context at hand.
· As a general and more basic case, the thesis studied how different general-purpose object detection models could extract staves for subsequent processing (see the sketch after this list). These elements were treated as axis-aligned rectangles without rotation, but this will not always be the case. Another possible line of research is therefore to study models that detect polygonal, not just rectangular, elements, as well as objects with some inclination, without introducing overlap between consecutive elements, as happens in some manual labeling tools such as the one used in this thesis to obtain labeled data for experimentation: MuRET.
These lines of research are, a priori, feasible, but an exploratory process is needed to identify techniques that can usefully be adapted to the OMR domain. The results obtained during the thesis indicate that these lines may yield new contributions to the field and thus move one step closer to the practical, real-world, large-scale application of these systems.
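As a toy illustration of the staff-localization step discussed above, here is a classic projection-profile baseline for finding staff bands in a binarized page; it is deliberately non-neural and far simpler than the detection models the thesis studies:

```python
import numpy as np

def staff_row_bands(binary_img, win=5, row_thresh=0.5):
    """Locate horizontal bands likely to contain staves in a binarized
    score image (1 = ink, 0 = background) via a smoothed projection
    profile: a classic baseline, not the thesis's neural approach.
    Returns (top, bottom) row indices for each detected band."""
    profile = binary_img.mean(axis=1)                   # ink density per row
    smooth = np.convolve(profile, np.ones(win) / win, mode="same")
    dark = smooth > row_thresh * smooth.max()
    bands, start = [], None
    for i, d in enumerate(dark):
        if d and start is None:
            start = i
        elif not d and start is not None:
            bands.append((start, i - 1))
            start = None
    if start is not None:
        bands.append((start, len(dark) - 1))
    return bands

# Tiny synthetic "page": two 5-line staves drawn as dark rows.
img = np.zeros((60, 100))
for y in (10, 12, 14, 16, 18, 40, 42, 44, 46, 48):
    img[y, :] = 1.0
print(staff_row_bands(img))   # -> [(10, 18), (40, 48)]
```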
