341

Support for Accessible Bitsliced Software

Conroy, Thomas Joseph 05 March 2021 (has links)
The expectations placed on embedded systems have grown enormously in recent years. Not only are there more applications for them than ever, but the applications are increasingly complex and their security is essential. To meet such demanding goals, designers and programmers are always looking for more efficient methods of computation. One technique that has gained attention over the past couple of decades is bitsliced software. In addition to high efficiency in certain situations, including block cipher computation, it has been used in designs that resist hardware attacks. However, this technique requires both program and data to be in a specific format. This requirement makes writing bitsliced software by hand laborious and adds computational overhead to transpose the data before and after computation. This work describes a code generation tool that produces bitsliced software from a higher-level description in Verilog. By supporting the synthesis of sequential circuits, this tool extends bitsliced software to parallel synchronous software. The tool is then used to implement a method for accelerating software neural network processing with reduced-precision computation on highly constrained devices. To address the data transposition overhead and to support a hardware attack-resistant architecture, a custom DMA controller is introduced that efficiently transposes the data as it is transferred, along with dedicated hardware for masking and redundancy generation. In combination, these tools make bitsliced software and its benefits more accessible to system designers and programmers. / Master of Science / Small computers embedded in devices such as cars, smart devices, and other electronics face many challenges. Often, they are pushed to their limits by designers and programmers to reach acceptable levels of performance. The increasing complexity of the applications they run is compounded by the need for these applications to be secure. Programmers are always looking for better, more efficient methods of doing computations. Over the past two decades, bitsliced software has gained attention as a technique that can, in certain situations, be more efficient than standard software. It also has properties that make it useful for designs implementing secure software. However, writing bitsliced software by hand is a laborious task, and the data input to the software needs to be in a specific format. To make writing the software easier, this work discusses a tool that generates it from the well-known Verilog hardware description language. The tool is then used to implement a method to accelerate artificial intelligence calculations on highly constrained computers. A custom hardware module is also introduced to speed up the formatting of data for bitsliced processing. In combination, these tools make bitsliced software and its benefits more accessible.
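To make the data-format requirement concrete, here is a minimal sketch of bitslicing (illustrative only, not the thesis's code generation tool or DMA controller; it assumes 8-bit values and 32 parallel instances):

```python
# Pack bit i of each of 32 independent 8-bit values into one machine word ("slice"),
# so that a single bitwise instruction then operates on all 32 instances at once.

def transpose_to_bitslices(values, width=8):
    """Return `width` slice words; bit j of slice i is bit i of values[j]."""
    slices = [0] * width
    for j, v in enumerate(values):
        for i in range(width):
            slices[i] |= ((v >> i) & 1) << j
    return slices

def transpose_from_bitslices(slices, count):
    """Inverse transform: recover `count` integers from the slice words."""
    values = [0] * count
    for i, s in enumerate(slices):
        for j in range(count):
            values[j] |= ((s >> j) & 1) << i
    return values

# Example: XOR a round key into 32 independent cipher states using 8 bitwise
# operations instead of 32 separate ones (the data-parallel win of bitslicing).
states = list(range(32))
key = 0b10110010
state_slices = transpose_to_bitslices(states)
key_slices = transpose_to_bitslices([key] * 32)
mixed = [s ^ k for s, k in zip(state_slices, key_slices)]
assert transpose_from_bitslices(mixed, 32) == [v ^ key for v in states]
```

The two transpose helpers correspond to the overhead the abstract mentions: data must be rearranged into slice form before computation and back again afterwards, which is what the proposed DMA controller performs in hardware.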
342

Medical image classification based on artificial intelligence approaches: A practical study on normal and abnormal confocal corneal images

Qahwaji, Rami S.R., Ipson, Stanley S., Sharif, Mhd Saeed, Brahma, A. 31 July 2015 (has links)
Corneal images can be acquired using confocal microscopes, which provide detailed images of the different layers inside the cornea. Most corneal problems and diseases occur in one or more of the main corneal layers: the epithelium, stroma and endothelium. Consequently, for automatically extracting clinical information associated with corneal diseases, or for evaluating the normal cornea, it is important to be able to recognise these layers automatically. Artificial intelligence (AI) approaches can provide improved accuracy over conventional processing techniques and save considerable time compared with manual analysis by clinical experts. Artificial neural networks (ANN) and adaptive neuro-fuzzy inference systems (ANFIS) are powerful AI techniques that have the capability to accurately classify the main layers of the cornea. The use of an ANFIS approach to analyse corneal layers is described for the first time in this paper, and statistical features have also been employed in the identification of corneal abnormality. An ANN approach is then added to form a combined committee machine with improved performance, which achieves an accuracy of 100% for some classes in the processed data sets. Three normal data sets of whole corneas, comprising a total of 356 images, and seven abnormal corneal images associated with diseases have been investigated with the proposed system. The resulting system is able to pre-process (quality enhancement, noise removal), classify (whole data sets, not just samples of the images as in previous studies), and identify abnormalities in the analysed data sets. The system output is visually mapped and the main corneal layers are displayed. 3D volume visualisation of the processed corneal images, as well as of each individual corneal cell, is also achieved with this system. Corneal clinicians have verified and approved the clinical usefulness of the developed system, especially in terms of underpinning the expertise of ophthalmologists and its applicability in patient care.
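As a rough illustration of the committee-machine step (not the paper's implementation; the per-class scores, weights and three-class setup are assumptions), the outputs of the two members could be combined as follows:

```python
# Combine per-class scores from two classifiers (stand-ins for the ANFIS and ANN
# committee members) and pick the class with the highest averaged score.
import numpy as np

def committee_predict(scores_anfis, scores_ann, weights=(0.5, 0.5)):
    """scores_*: arrays of shape (n_samples, n_classes) holding class probabilities."""
    combined = weights[0] * np.asarray(scores_anfis) + weights[1] * np.asarray(scores_ann)
    return combined.argmax(axis=1)

# Toy example with three corneal-layer classes: epithelium, stroma, endothelium.
anfis_scores = np.array([[0.7, 0.2, 0.1], [0.3, 0.4, 0.3]])
ann_scores = np.array([[0.6, 0.3, 0.1], [0.1, 0.2, 0.7]])
print(committee_predict(anfis_scores, ann_scores))  # [0 2] -> epithelium, endothelium
```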
343

AN EMPIRICAL STUDY ON SYSTEMATIC BOOST OF CORPORATE VALUE FROM THE PERSPECTIVE OF THE THEORY OF THE FIVE ELEMENTS

LONG, YAN January 2022 (has links)
The paper discusses how to boost corporate value systematically based on the theory of the five elements in traditional Chinese culture. First, the categorisation-and-analogy approach from the theory of the five elements is applied to map the five elements onto the five major factors influencing corporate value: metal, wood, water, fire, and earth correspond to business innovation, financial capability, operational efficiency, competitive barriers, and comprehensive capacity, respectively. The logical relation among these factors is then described and explained according to the principle of inter-promotion among the five elements. Continuous business innovation brings about improvements in operational efficiency, which are in turn reflected directly in the growth of financial indicators, thereby providing an enterprise with performance growth and a cash-flow guarantee. Relying on better profitability, the enterprise can attract and accumulate resources such as talent and capital, gain competitive advantages and form competitive barriers, and promote the improvement of comprehensive capabilities and the cultivation of corporate culture. In addition, an improved corporate atmosphere can promote innovation more effectively and thus boost corporate value systematically. Moreover, inspired by neural networks, the paper establishes an umbrella corporate value scoring system by combining the theory with the M-P (McCulloch-Pitts) neuron model; the system is composed of an umbrella chart of corporate value scores and a five-element neural network evaluation model. For a given enterprise, the five factors can be scored after analysing its financial reports, industry rankings, corporate announcements and other information, and a five-dimensional umbrella radar chart clearly presents the performance of each factor influencing its value. Because the industry average is taken into account in scoring, the umbrella chart of each enterprise is not isolated but comparable, since every chart is anchored to the industry. The five-element neural network evaluation model serves as a tool that helps enterprises calculate a corporate value score from the initial scoring of each element. The score reflects the inter-promotion principle of the five elements and shows how much each element contributes to the improvement of corporate value under their mutual influence, thereby highlighting the concept of a systematic boost of corporate value. The paper applies the five-element analysis to five enterprises: Ali Health, Wal-Mart, Tesla, Kweichow Moutai, and Haidilao. It identifies the advantages of each company by making use of the inter-promoting relationships among the promoting elements, promoted elements and advantageous elements, summarises the companies' shortcomings, and provides development suggestions. Finally, the paper explores how the inter-restricting principle can be applied in case analysis, i.e. developing the elements that restrict the advantageous elements, and promoting a systematic boost of corporate value from the perspective of balance. This chapter also expands the theory of the five elements from a single company to the whole industry and establishes a three-level classification system of "industry-segment-company".
Each company in an industry has its own advantages, so it is not feasible to identify a single advantageous element for the whole industry. However, when the industry is divided into multiple segments, the present research finds that the leading enterprises in each segment represent, to a certain extent, the future development trend of that field, and the advantages of such an enterprise tend to represent the advantages of its field. As a result, the leading enterprises can obtain various high-quality resources in the industry, which helps to further boost their corporate value. / Business Administration/Finance
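The following sketch loosely illustrates how such a five-element score adjustment might be written as a McCulloch-Pitts style weighted sum; the promotion weight, update rule and example scores are assumptions rather than the paper's published model, while the element-to-factor mapping follows the abstract and the generating cycle follows classical five-element theory:

```python
# Each element's adjusted score is its initial score plus a weighted contribution
# from the element that promotes it (generating cycle: wood -> fire -> earth ->
# metal -> water -> wood).

ELEMENTS = ["metal", "wood", "water", "fire", "earth"]
FACTORS = {  # mapping given in the abstract
    "metal": "business innovation",
    "wood": "financial capability",
    "water": "operational efficiency",
    "fire": "competitive barriers",
    "earth": "comprehensive capacity",
}
PROMOTED_BY = {  # x is promoted by PROMOTED_BY[x]
    "fire": "wood", "earth": "fire", "metal": "earth", "water": "metal", "wood": "water",
}

def adjusted_scores(initial, weight=0.3):
    """M-P style update: adjusted[x] = initial[x] + weight * initial[promoter of x]."""
    return {x: initial[x] + weight * initial[PROMOTED_BY[x]] for x in ELEMENTS}

example = {"metal": 8, "wood": 6, "water": 7, "fire": 5, "earth": 6}
for element, score in adjusted_scores(example).items():
    print(f"{FACTORS[element]:<25} {score:.1f}")
```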
344

Reconstructing the Behavior of Turbidity Currents From Turbidites-Reference to Anno Formation and Japan Trench / タービダイトにもとづいた混濁流の挙動の復元-安野層と日本海溝の例

Cai, Zhirong 26 September 2022 (has links)
Kyoto University / New system, course-based doctorate / Doctor of Science / 甲第24174号 / 理博第4865号 / 新制||理||1696 (University Library) / Division of Earth and Planetary Sciences, Graduate School of Science, Kyoto University / (Chief examiner) Associate Professor 成瀬 元, Associate Professor 堤 昭人, Professor 野口 高明 / Fulfils Article 4, Paragraph 1 of the Degree Regulations / Doctor of Science / Kyoto University / DFAM
345

A Self-Organizing Computational Neural Network Architecture with Applications to Sensorimotor Grounded Linguistic Grammar Acquisition

Jansen, Peter 10 1900 (has links)
Connectionist models of language acquisition typically have difficulty with systematicity, the ability of the network to generalize its limited experience with language to novel utterances. In this way, connectionist systems learning grammar from a set of example sentences tend to store a set of specific instances, rather than a generalized, abstract knowledge of the process of grammatical combination. Further, recent models that do show limited systematicity do so at the expense of simultaneously storing explicit lexical knowledge, and also make use of developmentally implausible training data and biologically implausible learning rules. Consequently, this research program develops a novel unsupervised neural network architecture and applies it to the problem of systematicity in language models.

In the first of several studies, a connectionist architecture capable of simultaneously storing explicit and separate representations of both conceptual and grammatical information is developed; this architecture is a hybrid of a self-organizing map and an intra-layer Hebbian associative network. Over the course of several studies, the architecture's capacity to acquire linguistic grammar is evaluated, and the architecture is progressively refined until it is capable of acquiring a benchmark grammar consisting of several difficult clausal sentence structures, though it must acquire this grammar at the level of the grammatical category rather than the lexical level.

The final study bridges the gap between the lexical and grammatical-category levels and develops an activation function based on a semantic feature co-occurrence metric. In concert with developmentally plausible, sensorimotor-grounded conceptual representations, it is shown that a network using this metric is able to undertake a process of semantic bootstrapping and successfully acquire separate explicit representations at the level of the concept, the part-of-speech category, and the grammatical sequence. This network demonstrates broadly systematic behaviour on a difficult test of systematicity and extends its knowledge of grammar to novel sensorimotor-grounded words. / Thesis / Doctor of Philosophy (PhD)
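A minimal sketch of this hybrid (assuming a small map, toy inputs and arbitrary learning rates; it is not the thesis's architecture) is shown below:

```python
# A small self-organizing map (SOM) whose winning units are linked by intra-layer
# Hebbian weights, so that sequences of winners (e.g. the categories of successive
# words in a sentence) strengthen transitions between map units.
import numpy as np

rng = np.random.default_rng(0)
n_units, dim = 16, 8                         # assumed map size and input dimensionality
som = rng.normal(size=(n_units, dim))        # SOM codebook vectors
hebb = np.zeros((n_units, n_units))          # intra-layer associative weights

def best_matching_unit(x):
    return int(np.argmin(np.linalg.norm(som - x, axis=1)))

def present_sequence(xs, som_lr=0.1, hebb_lr=0.05):
    """Present a sequence of input vectors; adapt both the map and the Hebbian links."""
    prev = None
    for x in xs:
        bmu = best_matching_unit(x)
        som[bmu] += som_lr * (x - som[bmu])  # unsupervised: move winner toward the input
        if prev is not None:
            hebb[prev, bmu] += hebb_lr       # Hebbian: units co-active in sequence
        prev = bmu

# Toy "sentence": three random vectors standing in for grounded word representations.
present_sequence([rng.normal(size=dim) for _ in range(3)])
print(hebb.max())                            # strength of the learned transitions
```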
346

Drone Detection and Classification using Machine Learning

Shafiq, Khurram 26 September 2023 (has links)
A UAV (unmanned aerial vehicle) is a source of entertainment and a pleasurable experience, attracting many young people to pursue flying as a hobby. With the potential increase in the number of UAVs, the risk of their use for malicious purposes also increases. In addition, birds and UAVs exhibit very similar maneuvers in flight, and UAVs can carry a significant payload, which can have unintended consequences. Detecting UAVs near red-zone areas is therefore an important problem. Furthermore, small UAVs can record video from large distances without being spotted by the naked eye, so an appropriate network of sensors may be needed to foresee the arrival of such vehicles from a safe distance, before they pose any danger to the surrounding areas. Despite the growing interest in UAV detection, limited research has been conducted in this area due to a lack of data available for model training. This thesis proposes a novel approach to address this challenge by leveraging experimental data collected in real time using high-sensitivity sensors, instead of relying solely on simulations. This approach allows for improved model accuracy and a better representation of the complex and dynamic environments in which UAVs operate, which are difficult to simulate accurately. The thesis further explores the application of machine learning and sensor fusion algorithms to detect UAVs and distinguish them from other objects, such as birds, in real time. Specifically, the thesis combines YOLOv3 with Deep SORT and sensor fusion to achieve accurate UAV detection. YOLOv3, a deep learning model known for its high efficiency, is employed to facilitate real-time drone-versus-bird detection, and the incorporation of sensor fusion leads to a more stable and accurate real-time system while mitigating the incidence of false detections. The study indicates that the YOLOv3 model outperformed state-of-the-art models in terms of both speed and robustness, achieving a high level of confidence with a score above 95%. Moreover, the YOLOv3 model demonstrated a promising capability in real-time drone-versus-bird detection, which suggests its potential for practical applications.
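As a hedged illustration of the sensor-fusion step (not the thesis's pipeline; the second modality, the independence assumption and the smoothing window are hypothetical), per-frame detection confidences could be combined and smoothed like this:

```python
# Fuse per-frame "is a drone" confidences from two independent sensors with
# log-odds combination, then smooth across frames to suppress one-off false detections.
import math

def fuse(p_camera, p_other):
    """Combine two independent probabilities that the detected object is a drone."""
    logit = lambda p: math.log(p / (1 - p))
    combined = logit(p_camera) + logit(p_other)
    return 1 / (1 + math.exp(-combined))

def smooth(probabilities, window=3):
    """Moving-average smoothing over consecutive frames."""
    out = []
    for i in range(len(probabilities)):
        lo = max(0, i - window + 1)
        out.append(sum(probabilities[lo:i + 1]) / (i + 1 - lo))
    return out

frames = [fuse(c, o) for c, o in [(0.90, 0.80), (0.40, 0.30), (0.95, 0.85)]]
print(smooth(frames))  # the isolated low-confidence frame is damped by its neighbours
```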
347

Wildfire Risk Assessment Using Convolutional Neural Networks and Modis Climate Data

Nesbit, Sean F 01 June 2022 (has links) (PDF)
Wildfires burn millions of acres of land each year, leading to the destruction of homes and wildland ecosystems while costing governments billions in funding. As climate change intensifies drought volatility across the Western United States, wildfires are likely to become increasingly severe. Wildfire risk assessments and hazard maps are currently employed by fire services but can often be outdated. This paper introduces an image-based dataset using climate and wildfire data from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS). The dataset consists of 32 climate and topographical layers captured across 0.1° by 0.1° tiled regions in California and Nevada between 2015 and 2020, each labeled with whether the region later saw a wildfire incident. We trained a convolutional neural network (CNN) on the generated dataset to predict whether a region will see a wildfire incident given that region's climate data. Convolutional neural networks are able to find spatial patterns in their multi-dimensional inputs, providing an additional layer of inference compared to logistic regression (LR) or artificial neural network (ANN) models. To further understand feature importance, we performed an ablation study, concluding that vegetation products, fire history, water content, and evapotranspiration products increased model performance, while land-information products did not. While the novel convolutional neural network model did not show a large improvement over previous models, it retained the highest holistic measures, such as area under the curve and average precision, indicating that it is still a strong competitor to existing models. This introduction of the convolutional neural network approach expands the wealth of knowledge for the prediction of wildfire incidents and demonstrates the usefulness of the novel, image-based dataset.
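A minimal sketch of such a tile classifier (assuming PyTorch, a 16x16 tile size and arbitrary layer widths; the paper's actual architecture is not reproduced here) could look like this:

```python
# A small CNN mapping a 32-channel climate/topography tile to a wildfire-risk logit.
import torch
import torch.nn as nn

class WildfireCNN(nn.Module):
    def __init__(self, in_channels=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # collapse the spatial dimensions
        )
        self.classifier = nn.Linear(64, 1)      # one logit: wildfire vs. no wildfire

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = WildfireCNN()
tiles = torch.randn(8, 32, 16, 16)              # batch of 8 assumed 16x16 tiles
labels = torch.randint(0, 2, (8, 1)).float()    # whether each region later burned
loss = nn.BCEWithLogitsLoss()(model(tiles), labels)
loss.backward()                                 # one illustrative training step
```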
348

OFDM Channel Estimation with Artificial Neural Networks

Bednar, Joseph W 01 June 2022 (has links) (PDF)
The use of orthogonal frequency-division multiplexing (OFDM) in wireless standards is often preferred due to its high spectral efficiency and ease of implementation. However, data transmission via OFDM still suffers when passing through a noisy channel. To make full use of OFDM, channel effects must be corrected. Unfortunately, channel estimation is often difficult due to the nonlinearity and randomness present in a practical communication channel. Recently, machine-learning-based approaches have been used to improve existing channel estimation algorithms for more efficient transmission. This thesis investigates the application of artificial neural networks (ANNs) as a means of improving existing channel estimation techniques. Multi-layer feed-forward neural networks (FNNs) and convolutional neural networks (CNNs) are tested on a variety of random fading channels with different signal-to-noise ratios (SNRs) via computer simulations. Compared with the conventional least squares (LS) algorithm, the CNN-based approach reduces the bit error rate (BER) of data transmission by an average of 47.59%.
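For context, a minimal sketch of the conventional LS baseline mentioned above (not the thesis's neural estimator; the pilot pattern, channel model and noise level are assumptions) is shown below:

```python
# Least-squares channel estimation at known pilot subcarriers, followed by linear
# interpolation across the remaining subcarriers of one OFDM symbol.
import numpy as np

rng = np.random.default_rng(1)
n_sc = 64                                    # subcarriers per OFDM symbol
pilot_idx = np.arange(0, n_sc, 8)            # assumed pilot pattern: every 8th subcarrier
pilots = np.ones(pilot_idx.size)             # known pilot symbols (all +1)

# Simulate a smooth frequency-selective channel plus noise on the pilot subcarriers.
h_true = np.fft.fft(rng.normal(size=4) + 1j * rng.normal(size=4), n_sc) / 2
noise = 0.05 * (rng.normal(size=pilot_idx.size) + 1j * rng.normal(size=pilot_idx.size))
rx_pilots = h_true[pilot_idx] * pilots + noise

# LS estimate at the pilots, then interpolate real and imaginary parts separately.
h_ls = rx_pilots / pilots
h_hat = (np.interp(np.arange(n_sc), pilot_idx, h_ls.real)
         + 1j * np.interp(np.arange(n_sc), pilot_idx, h_ls.imag))

print("channel estimation MSE:", np.mean(np.abs(h_hat - h_true) ** 2))
```

A neural approach such as the FNNs or CNNs investigated in the thesis would take the LS estimate (or the raw received pilots) as input and learn to output a refined channel estimate.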
349

Long-term Forecasting Heat Use in Sweden's Residential Sector using Genetic Algorithms and Neural Network

Momtaz, Alireza, Befkin, Mohammad January 2024 (has links)
In this study, the parameters population, gross domestic product (GDP), heat price, U-value, and temperature are used to predict heat consumption in Sweden up to 2050. Heat consumption is considered for multi-family houses (MFH), most of which receive their primary heat from district heating (DH). A literature review of various models and variables was conducted to better understand forecasting and its process; the majority of earlier research has focused on electricity or energy rather than heat. The aim of this study is to fit linear and non-linear models to the 1993-2019 data with as little error as possible, and then to use a genetic algorithm (GA) and a neural network (NN) to predict Sweden's heat consumption up to 2050.
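A minimal sketch of how a genetic algorithm could fit the coefficients of such a linear model (the synthetic data and GA settings below are assumptions, not the authors' implementation) is given here:

```python
# Fit heat ~ w0 + w1*population + w2*GDP + w3*price + w4*U_value + w5*temperature
# with a simple genetic algorithm: selection, uniform crossover, mutation, elitism.
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(27, 5))                       # 27 "years" (1993-2019), 5 predictors
true_w = np.array([2.0, 1.5, -0.8, -1.2, -0.5])
y = 10.0 + X @ true_w + 0.1 * rng.normal(size=27)  # synthetic heat consumption

def mse(pop):
    """Mean squared error of each candidate; rows of pop are [intercept, w1..w5]."""
    preds = pop[:, 0] + X @ pop[:, 1:].T           # shape (n_years, pop_size)
    return np.mean((preds - y[:, None]) ** 2, axis=0)

pop = rng.normal(size=(50, 6))                     # initial population of coefficient sets
for generation in range(300):
    errors = mse(pop)
    parents = pop[np.argsort(errors)[:10]]         # selection: keep the 10 fittest
    idx_a = rng.integers(0, 10, size=40)
    idx_b = rng.integers(0, 10, size=40)
    mask = rng.random((40, 6)) < 0.5               # uniform crossover between parent pairs
    children = np.where(mask, parents[idx_a], parents[idx_b])
    children = children + 0.1 * rng.normal(size=(40, 6))   # mutation
    pop = np.vstack([parents, children])           # elitism: the parents survive unchanged

best = pop[np.argmin(mse(pop))]
print("fitted intercept and coefficients:", np.round(best, 2))
```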
350

Variational networks in magnetic resonance imaging - Application to spiral cardiac MRI and investigations on image quality / Variational Networks in der Magnetresonanztomographie - Anwendung auf spirale Herzbildgebung und Untersuchungen zur Bildqualität

Kleineisel, Jonas January 2024 (has links) (PDF)
Acceleration is a central aim of clinical and technical research in magnetic resonance imaging (MRI) today, with the potential to increase robustness, accessibility and patient comfort, reduce cost, and enable entirely new kinds of examinations. A key component in this endeavor is image reconstruction, as most modern approaches build on advanced signal and image processing. Here, deep learning (DL)-based methods have recently shown considerable potential, with numerous publications demonstrating benefits for MRI reconstruction. However, these methods often come at the cost of an increased risk for subtle yet critical errors. Therefore, the aim of this thesis is to advance DL-based MRI reconstruction, while ensuring high quality and fidelity with measured data. A network architecture specifically suited for this purpose is the variational network (VN). To investigate the benefits these can bring to non-Cartesian cardiac imaging, the first part presents an application of VNs, which were specifically adapted to the reconstruction of accelerated spiral acquisitions. The proposed method is compared to a segmented exam, a U-Net and a compressed sensing (CS) model using qualitative and quantitative measures. While the U-Net performed poorly, the VN as well as the CS reconstruction showed good output quality. In functional cardiac imaging, the proposed real-time method with VN reconstruction substantially accelerates examinations over the gold-standard, from over 10 to just 1 minute. Clinical parameters agreed on average. Generally in MRI reconstruction, the assessment of image quality is complex, in particular for modern non-linear methods. Therefore, advanced techniques for precise evaluation of quality were subsequently demonstrated. With two distinct methods, resolution and amplification or suppression of noise are quantified locally in each pixel of a reconstruction. Using these, local maps of resolution and noise in parallel imaging (GRAPPA), CS, U-Net and VN reconstructions were determined for MR images of the brain. In the tested images, GRAPPA delivers uniform and ideal resolution, but amplifies noise noticeably. The other methods adapt their behavior to image structure, where different levels of local blurring were observed at edges compared to homogeneous areas, and noise was suppressed except at edges. Overall, VNs were found to combine a number of advantageous properties, including a good trade-off between resolution and noise, fast reconstruction times, and high overall image quality and fidelity of the produced output. Therefore, this network architecture seems highly promising for MRI reconstruction.
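As a rough sketch of the general variational-network idea, the following toy example (assuming PyTorch, a single-coil Cartesian toy acquisition and an arbitrary small CNN regularizer; it is not the reconstruction developed in this thesis) unrolls data-consistency gradient steps interleaved with a learned prior:

```python
# One unrolled VN iteration: a gradient step on the k-space data-consistency term
# plus the output of a small learned regularizer network.
import torch
import torch.nn as nn

class VNStep(nn.Module):
    def __init__(self):
        super().__init__()
        self.step_size = nn.Parameter(torch.tensor(0.5))
        self.prior = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1),
        )

    def forward(self, x, y, mask):
        k = torch.fft.fft2(x)                              # forward model: masked FFT
        grad = torch.fft.ifft2(mask * (k - y)).real        # data-consistency gradient
        return x - self.step_size * grad - self.prior(x.unsqueeze(1)).squeeze(1)

# Toy usage: ten unrolled steps on a randomly undersampled "acquisition".
steps = nn.ModuleList([VNStep() for _ in range(10)])
image = torch.randn(1, 32, 32)
mask = (torch.rand(1, 32, 32) < 0.4).float()               # sampled k-space locations
y = mask * torch.fft.fft2(image)                           # undersampled measurements
x = torch.fft.ifft2(y).real                                # zero-filled initial guess
for step in steps:
    x = step(x, y, mask)
```

In a real variational network, the step sizes and regularizer weights of all unrolled iterations are trained end to end against fully sampled reference data.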
