851

Graph Theory for the Discovery of Non-Parametric Audio Objects

Srinivasa, Christopher January 2011 (has links)
A novel framework based on cluster co-occurrence and graph theory for structure discovery is applied to audio to find new types of audio objects which enable the compression of an input signal. These new objects differ from those found in current object coding schemes in that their shape is not restricted by any a priori psychoacoustic knowledge. The framework is novel from an application perspective, as it marks the first time that graph theory has been applied to audio, and with regard to theoretical developments, as it involves new extensions to unsupervised learning algorithms and frequent subgraph mining methods. Tests are performed using a corpus of audio files spanning a wide range of sounds. Results show that the framework discovers new types of audio objects which yield average overall and relative compression gains of 15.90% and 23.53% respectively, while maintaining very good average audio quality with imperceptible changes.
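Neither the clustering features nor the graph construction are spelled out in the abstract, so the following is only a speculative sketch of the cluster co-occurrence idea, assuming spectral frames are vector-quantized first; the use of scikit-learn KMeans and networkx, the cluster count, and the adjacency-based co-occurrence rule are all illustrative assumptions, not the author's implementation.

```python
import numpy as np
import networkx as nx
from sklearn.cluster import KMeans

def cooccurrence_graph(frames: np.ndarray, n_clusters: int = 16) -> nx.Graph:
    """Cluster spectral frames, then connect clusters whose labels
    co-occur in adjacent frames; edge weight = co-occurrence count."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(frames)
    g = nx.Graph()
    g.add_nodes_from(range(n_clusters))
    for a, b in zip(labels[:-1], labels[1:]):
        if a != b:
            w = g.get_edge_data(a, b, {"weight": 0})["weight"]
            g.add_edge(a, b, weight=w + 1)
    return g
```

Frequent, heavily weighted subgraphs of such a graph would then be the candidate "audio objects": recurring spectro-temporal structures not constrained to any psychoacoustically predefined shape.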
852

Depth Map Compression Based on Platelet Coding and Quadratic Curve Fitting

Wang, Han January 2012 (has links)
Due to the fast development of 3D technology in recent decades, many 3D representation approaches have been proposed worldwide. To render an accurate 3D representation, more data must be recorded than for a normal video sequence, so finding an efficient way to transmit 3D representation data has become an important part of the overall technology. In recent years, many coding schemes based on the principle of encoding depth have been proposed; compared to traditional multiview coding schemes, they achieve higher compression efficiency. With the development of depth-capturing technology, the accuracy and quality of the reconstructed depth image have also improved. In this thesis we propose an efficient depth data compression scheme for 3D images: platelet-based coding using Lagrangian optimization, quadtree decomposition, and quadratic curve fitting. We study and improve the original platelet-based coding scheme, achieving a compression improvement of 1-2 dB over the original. The experimental results illustrate this improvement: the reconstructions produced by our curve-fitting-based platelet coding scheme are of better quality than those of the original scheme.
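The core platelet decision is a rate-distortion trade-off: model each quadtree block with a cheap parametric surface, and split only when the Lagrangian cost D + lambda*R favors it. A minimal sketch with a planar platelet model and squared-error distortion (the bit costs, minimum block size, and plane-only model are illustrative assumptions; the thesis's scheme also fits quadratic curves):

```python
import numpy as np

def plane_sse(block: np.ndarray) -> float:
    """Least-squares plane z = a*x + b*y + c over the block; returns SSE."""
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coef, *_ = np.linalg.lstsq(A, block.ravel(), rcond=None)
    return float(np.sum((A @ coef - block.ravel()) ** 2))

def quadtree_cost(block: np.ndarray, lam: float, leaf_bits: float = 96.0,
                  min_size: int = 4) -> float:
    """Lagrangian cost D + lam*R of coding the block: either one
    platelet leaf, or (if cheaper) four recursively coded quadrants."""
    leaf = plane_sse(block) + lam * leaf_bits
    h, w = block.shape
    if min(h, w) < 2 * min_size:
        return leaf
    h2, w2 = h // 2, w // 2
    split = lam * 1.0  # one bit to signal the split
    for ry in (slice(0, h2), slice(h2, h)):
        for rx in (slice(0, w2), slice(w2, w)):
            split += quadtree_cost(block[ry, rx], lam, leaf_bits, min_size)
    return min(leaf, split)
```

Replacing the planar model with a quadratic fit, as the thesis does, costs a few more coefficient bits per leaf but lowers distortion on curved depth surfaces, which is consistent with the reported 1-2 dB gain.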
853

Probing Collective Multi-electron Effects with Few Cycle Laser Pulses

Shiner, Andrew January 2013 (has links)
High Harmonic Generation (HHG) enables the production of bursts of coherent soft x-rays with attosecond pulse duration. This process arises from the nonlinear interaction between intense infrared laser pulses and an ionizing gas medium. Soft x-ray photons are used for spectroscopy of inner-shell electron correlation and exchange processes, and the availability of attosecond pulse durations will enable these processes to be resolved on their natural time scales. The maximum, or cutoff, photon energy in HHG increases with both the intensity and the wavelength of the driving laser. It is highly desirable to increase the harmonic cutoff, as this will allow for the generation of shorter attosecond pulses as well as HHG spectroscopy of increasingly energetic electronic transitions. While the harmonic cutoff increases with laser wavelength, there is a corresponding decrease in harmonic yield. The first part of this thesis describes the experimental measurement of the wavelength scaling of HHG efficiency, which we report as lambda^(-6.3) in xenon and lambda^(-6.5) in krypton. To increase the HHG cutoff, we have developed a 1.8 um source with stable carrier-envelope phase and a pulse duration of <2 optical cycles. The 1.8 um wavelength allowed for a significant increase in the harmonic cutoff compared to equivalent 800 nm sources, while still maintaining reasonable harmonic yield. By focusing this source into neon we have produced 400 eV harmonics that extend into the x-ray water window. In addition to providing a source of photons for a secondary target, the HHG spectrum carries the signature of the electronic structure of the generating medium. In krypton we observed a Cooper minimum at 85 eV, showing that photoionization cross sections can be measured with HHG. Measurements in xenon led to the first clear observation of electron correlation effects during HHG, which manifest as a broad peak in the HHG spectrum centred at 100 eV. This thesis also describes several improvements to the HHG experiment, including the development of an ionization detector for measuring laser intensity and an investigation into the role of laser mode quality in HHG phase matching and efficiency.
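The wavelength trade-off described above follows from the semiclassical three-step model of HHG: the cutoff photon energy grows with the ponderomotive energy U_p, which scales quadratically with wavelength. In standard notation (ionization potential I_p, field amplitude E_0, laser frequency omega, intensity I):

```latex
\hbar\omega_{\mathrm{cutoff}} \approx I_p + 3.17\,U_p,
\qquad
U_p = \frac{e^2 E_0^2}{4 m_e \omega^2} \propto I \lambda^2 .
```

At fixed intensity, moving from 800 nm to 1.8 um therefore raises U_p, and with it the cutoff, by roughly (1.8/0.8)^2 ≈ 5x, while the single-atom yield falls steeply, per the lambda^(-6.3) and lambda^(-6.5) scalings measured in this work.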
854

Fractal application in data compression / Uplatnění fraktálů v kompresi dat

Dušák, Petr January 2015 (has links)
The mission of the Technology Transfer Programme Office is to increase impact on society by transferring technologies developed by the European Space Agency. "Method and Apparatus for Compressing Time Series" is a patented compression algorithm designed to be efficient, as it is intended to run on deep-space probes and satellites. The algorithm is inspired by a method for fractal terrain generation, the midpoint displacement algorithm. This work introduces fractals and their applications, and modifies the patented algorithm's displacement mechanism in order to achieve greater compression. The modified algorithm can reduce data by up to 25% compared to the patented algorithm, although the modification made the algorithm less efficient. In a large-scale test performed on Rosetta spacecraft telemetry, the modified algorithm achieved around 5% higher compression.
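The patented mechanism itself is not reproduced in the abstract, so the following is only a toy sketch of the midpoint-displacement idea applied to compression: predict each midpoint from the average of its two enclosing samples and store quantized residuals, which stay small (and entropy-code well) for smooth telemetry. The quantization step and encoding layout are illustrative assumptions.

```python
import numpy as np

def midpoint_residuals(x: np.ndarray, q: float = 0.01) -> list:
    """Encode a series of length 2**k + 1 as its two endpoints plus
    quantized midpoint residuals, coarsest level first. (A real codec
    would predict from reconstructed values to bound drift.)"""
    n = len(x)
    assert n > 2 and (n - 1) & (n - 2) == 0, "length must be 2**k + 1"
    out, step = [x[0], x[-1]], n - 1
    while step > 1:
        half = step // 2
        for i in range(half, n, step):
            pred = 0.5 * (x[i - half] + x[i + half])
            out.append(round((x[i] - pred) / q))  # quantized displacement
        step = half
    return out
```

Coarsening the quantizer q, or dropping the finest levels entirely, trades reconstruction error for the kind of additional compression a modified displacement mechanism targets.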
855

Studies on the prevention of venous insufficiency and ulceration

Sultan, Muhammad January 2013 (has links)
Introduction: Venous disease impairs quality of life, necessitates time off work and causes venous ulcers. The focus of this thesis is to explore strategies to prevent chronic venous insufficiency (CVI) and venous ulceration.
Aims: 1. To identify a population at risk of developing venous ulcers. 2. To determine the pressure profile required for elastic stockings to halve venous transit time. 3. To explore the role of compression following ankle fracture.
Methods: Data were collected from 231 patients with venous ulcers and 210 age- and sex-matched controls to identify risk factors for venous ulceration. Univariate and multivariate analyses of potential risk factors were undertaken to identify those that independently predict this risk. After identifying the population at risk, prophylactic strategies were developed. The effect of Engineered Compression Stockings (ECS) delivering 15 mmHg, 25 mmHg and 35 mmHg of pressure at the ankle on calf venous transit time and volume was measured to determine the ideal pressure profile required to halve venous transit time, which should be appropriate for DVT prophylaxis. A dorsal foot vein was cannulated in 15 healthy volunteers with no venous disease. The transit time (secs) for ultrasound contrast from a foot vein to the popliteal vein was measured using duplex ultrasound. Calf volumes were recorded by water displacement. ECS delivering 25 mmHg of pressure around the ankle were compared with no compression in a randomised controlled trial (RCT) in 90 patients within 72 hours of ankle fracture. Patients were randomised to either i) ECS and air-cast boot or ii) a liner and air-cast boot, and were followed up at 2, 4, 8 and 12 weeks and 6 months. The primary outcome was functional recovery measured using the Olerud Molander Ankle Score (OMAS). Secondary outcomes were i) the American Orthopaedic Foot and Ankle Score (AOFAS), ii) SF12v2 Quality of Life score (QoL), iii) pain, and iv) frequency of DVT.
Results: The risk factors significantly associated with venous ulceration on multivariate analysis included a history of deep vein thrombosis (DVT), phlebitis, hip replacement, poor mobility, weight over 100 kg, varicose veins (VV), family history of VV, and weight between 75 and 100 kg. A simple diagnostic scoring system was derived from this regression analysis, with scores of ≥ 3 predicting a 6.7% annual risk and scores of < 1 a 0.6% risk. Mean transit time without compression was 35, 32 and 33 secs while standing, sitting and lying, respectively. Transit time was consistently halved by ECS delivering 25 mmHg, to 14, 13 and 14 secs respectively (p<0.001). Mean leg volume whilst standing was reduced significantly from 3447 ml with no ECS to 3259 ml, 3161 ml and 3067 ml with ECS applying 15, 25 and 35 mmHg respectively (p<0.001). ECS in ankle fracture patients reduced ankle swelling at all time points and significantly improved mean OMAS score at six months, to 98 compared with 67 for the liner (p<0.001). AOFAS and SF12v2 scores were also significantly improved (p<0.001, p=0.016). Of 86 patients with duplex imaging at four weeks, only five (12%) of the 43 ECS patients had a DVT compared with 10 (23%) of the 43 controls (p=0.26).
Conclusions: The risk score for venous ulcers will allow RCTs on the prevention of leg ulceration to be undertaken. The pressure profile required to halve venous transit time is 25 mmHg. The frequency of asymptomatic DVT following ankle fracture is sufficient to justify prophylaxis. Compression has a potential role in the management of ankle fractures by improving functional outcome and QoL. These studies facilitate research into the prevention of venous disease.
856

Adaptive compression coding

Nasiopoulos, Panagiotis January 1988 (has links)
An adaptive image compression coding technique, ACC, is presented. This algorithm is shown to preserve edges and give better-quality decompressed pictures and better compression ratios than Absolute Moment Block Truncation Coding (AMBTC). Lookup tables are used to achieve better compression rates without affecting the visual quality of the reconstructed image. Regions with approximately uniform intensities are successfully detected using the range, and these regions are approximated by their average; this procedure leads to a further reduction in the compressed data rates. A method for preserving edges is introduced. It is shown that as more detail is preserved around edges, the pictorial results improve dramatically: the ragged appearance of the edges in AMBTC is reduced or eliminated, leading to images far superior to those of AMBTC. For most images ACC yields a Root Mean Square Error smaller than that obtained by AMBTC. Decompression time is shown to be comparable to that of AMBTC for low threshold values and becomes significantly lower as the compression rate becomes smaller. An adaptive filter is introduced which helps recover lost texture at very low compression rates (0.8 to 0.6 b/p, depending on the degree of texture in the image). The algorithm is easy to implement since no special hardware is needed. / Faculty of Applied Science, Department of Electrical and Computer Engineering / Graduate
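For reference, the AMBTC baseline against which ACC is compared encodes each pixel block with a bitmap and two reconstruction levels chosen to preserve the block mean and first absolute central moment — which works out to the means of the pixels below and at-or-above the block mean. A compact sketch:

```python
import numpy as np

def ambtc_encode(block: np.ndarray):
    """AMBTC: represent a block by (low, high, bitmap), where low/high
    are the means of the pixels below / at-or-above the block mean."""
    bitmap = block >= block.mean()
    high = float(block[bitmap].mean())
    low = float(block[~bitmap].mean()) if (~bitmap).any() else high
    return low, high, bitmap

def ambtc_decode(low: float, high: float, bitmap: np.ndarray) -> np.ndarray:
    return np.where(bitmap, high, low)
```

ACC's gains come from layering edge preservation, range-based uniform-region detection, and lookup tables on top of this kind of two-level block quantizer.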
857

Studies on probabilistic tensor subspace learning

Zhou, Yang 04 January 2019 (has links)
Most real-world data such as images and videos are naturally organized as tensors, and often have high dimensionality. Tensor subspace learning is a fundamental problem that aims at finding low-dimensional representations of tensors while preserving their intrinsic characteristics. By dealing with tensors in the learned subspace, subsequent tasks such as clustering, classification, visualization, and interpretation can be greatly facilitated. This thesis studies the tensor subspace learning problem from a generative perspective, and proposes four probabilistic methods that generalize the ideas of classical subspace learning techniques for tensor analysis. Probabilistic Rank-One Tensor Analysis (PROTA) generalizes probabilistic principal component analysis. It is flexible in capturing data characteristics, and avoids rotational ambiguity. For robustness against overfitting, concurrent regularizations are further proposed to concurrently and coherently penalize the whole subspace, so that unnecessary scale restrictions in regularizing PROTA can be relaxed. Probabilistic Rank-One Discriminant Analysis (PRODA) is a bilinear generalization of probabilistic linear discriminant analysis. It learns a discriminative subspace by representing each observation as a linear combination of collective and individual rank-one matrices. This gives PRODA both the expressiveness to capture discriminative features and non-discriminative noise, and the capability of exploiting (2D) tensor structures. Bilinear Probabilistic Canonical Correlation Analysis (BPCCA) generalizes probabilistic canonical correlation analysis to learning correlations between two sets of matrices. It is built on a hybrid Tucker model in which the two-view matrices are combined in two stages via matrix-based and vector-based concatenations, respectively. This enables BPCCA to capture two-view correlations without breaking the matrix structures. Bayesian Low-Tubal-Rank Tensor Factorization (BTRTF) is a fully Bayesian treatment of robust principal component analysis for recovering tensors corrupted with gross outliers. It is based on the recently proposed tensor-SVD model, and has more expressive modeling power in characterizing tensors with certain orientations, such as images and videos. A novel sparsity-inducing prior is also proposed to provide BTRTF with automatic determination of the tensor rank (subspace dimensionality). Comprehensive validations and evaluations are carried out on both synthetic and real-world datasets. Empirical studies of parameter sensitivities and convergence properties are also provided. Experimental results show that the proposed methods achieve the best overall performance in various applications such as face recognition, photograph-sketch matching, and background modeling. Keywords: tensor subspace learning, probabilistic models, Bayesian inference, tensor decomposition.
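For orientation, the probabilistic PCA model that PROTA generalizes explains each observation as a linear-Gaussian function of a low-dimensional latent vector:

```latex
\mathbf{x}_n = \mathbf{W}\mathbf{z}_n + \boldsymbol{\mu} + \boldsymbol{\varepsilon}_n,
\qquad
\mathbf{z}_n \sim \mathcal{N}(\mathbf{0}, \mathbf{I}), \quad
\boldsymbol{\varepsilon}_n \sim \mathcal{N}(\mathbf{0}, \sigma^2\mathbf{I}),
```

so that marginally x_n ~ N(mu, W W^T + sigma^2 I) and the maximum-likelihood W spans the principal subspace. The rank-one tensor variants above constrain each basis element to an outer product of mode-wise factors (e.g., vec(u_r v_r^T) in the 2D case) — a sketch of the flavor of generalization, not the thesis's exact formulation.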
858

Efficient and Secure Deep Learning Inference System: A Software and Hardware Co-design Perspective

January 2020 (has links)
The advances in Deep Learning (DL) achieved in recent years have demonstrated its great potential for surpassing, or coming close to, human-level performance across multiple domains. Consequently, there is rising demand to deploy state-of-the-art DL algorithms, e.g. Deep Neural Networks (DNNs), in real-world applications to free people from repetitive work. On the one hand, the impressive performance achieved by DNNs is normally accompanied by intensive memory and power usage, owing to enormous model size and high computational workload, which significantly hampers their deployment on resource-limited cyber-physical systems or edge devices; thus, the urgent demand for enhancing the inference efficiency of DNNs has attracted great research interest across various communities. On the other hand, scientists and engineers still have insufficient knowledge of the principles of DNNs, which causes them mostly to be treated as black boxes. Under such circumstances, the DNN is like "the sword of Damocles": its security and fault-tolerance capability are essential concerns that cannot be circumvented. Motivated by these concerns, this dissertation comprehensively investigates the emerging efficiency and security issues of DNNs from both software and hardware design perspectives. From the efficiency perspective, model compression via quantization is elaborated as the foundational technique for efficient inference of a target DNN. To maximize the inference performance boost, the deployment of quantized DNNs on a revolutionary Computing-in-Memory based neural accelerator is presented in a cross-layer (device/circuit/system) fashion. From the security perspective, the well-known adversarial attack is investigated, spanning from its original input-attack form (i.e., adversarial example generation) to its parameter-attack variant. / Doctoral Dissertation, Electrical Engineering, 2020
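As a concrete illustration of the quantization technique named above, here is a minimal post-training symmetric INT8 weight quantizer; the per-tensor scheme and max-based scale are generic textbook choices, not the dissertation's specific method.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor quantization: one float scale maps the
    weight range onto the int8 grid [-127, 127]."""
    scale = float(np.abs(w).max()) / 127.0 or 1.0  # guard all-zero tensors
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, s = quantize_int8(w)
print("max abs error:", np.abs(dequantize(q, s) - w).max())  # <= s/2
```

Storing int8 instead of float32 cuts weight memory 4x, and low-precision integer operands map naturally onto Computing-in-Memory arrays of the kind discussed here.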
859

Green hydrogen production for fuel cell applications and consumption in SAIAMC research facility

Chidziva, Stanford January 2020 (has links)
Philosophiae Doctor - PhD / Today, fossil fuels such as oil, coal and natural gas provide for our ever-growing energy needs. As the world's fossil fuel reserves fast become depleted, it is vital that alternative and cleaner fuels are found; renewable energy sources are the way to meet future energy needs. A solution to the looming energy crisis can be found in the energy carrier hydrogen, which can be produced by a number of production technologies. One hydrogen production method explored in this study is the electrolysis of water.
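For reference, the water-splitting chemistry involved is standard: in an alkaline electrolyser the half-reactions are

```latex
\text{Cathode:}\quad 4\,\mathrm{H_2O} + 4e^- \rightarrow 2\,\mathrm{H_2} + 4\,\mathrm{OH^-}
\qquad
\text{Anode:}\quad 4\,\mathrm{OH^-} \rightarrow \mathrm{O_2} + 2\,\mathrm{H_2O} + 4e^-
```

giving 2 H2O -> 2 H2 + O2 overall, with a minimum cell voltage of 1.23 V at standard conditions; the hydrogen is "green" when that electricity comes from renewable sources.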
860

Acute Effects of Peristaltic Pulse Dynamic Compression on Recovery: Kinetic, Kinematic, and Perceptual Factors

McInnis, Timothy 01 August 2014 (has links)
The purpose of this dissertation was to determine whether peristaltic pulse dynamic compression, administered via the NormaTec recovery system, provides measurable kinetic, kinematic, or perceptual benefits following a weightlifting training session. During two testing sessions separated by one week, six weightlifters performed dynamic mid-thigh pulls on a force plate with potentiometers at loads equal to 50%, 60%, 70%, 80%, 90%, 100%, 110%, and 120% of 1 RM clean, before and after a weightlifting training session consisting of 5 sets of 5 repetitions of clean pulls from the floor at 90% of 1 RM clean, followed by treatment with the NormaTec recovery system or a sham treatment. Following a crossover design, the weightlifters served as their own controls, receiving the NormaTec treatment during one testing session and the sham treatment during the other. Pre- and post-training dynamic mid-thigh pulls were analyzed for peak force; force at 50 ms, 90 ms, and 250 ms; impulse at 50 ms, 90 ms, and 250 ms; rate of force development; peak velocity; peak power; and peak displacement. A 2x2 (treatment x time) repeated-measures ANOVA showed no statistical differences in the treatment-by-time interaction, with the exception of impulse at 250 ms at the 70% load. The minimal effects of NormaTec on recovery from a weightlifting training session demonstrated in this study suggest that the NormaTec recovery system does not provide substantial benefits following clean pulls from the floor. However, it is possible that NormaTec may be more effective following a greater level of eccentric damage or in a test of strength endurance.
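The analysis described — a 2x2 (treatment x time) repeated-measures ANOVA per dependent variable — can be reproduced with standard tools. A sketch using statsmodels, where the file name and column names are hypothetical, since the abstract does not give the data layout:

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long format: one row per athlete x treatment x time cell, e.g. columns
# 'athlete', 'treatment' (normatec/sham), 'time' (pre/post), 'impulse_250ms'.
df = pd.read_csv("impulse_250ms_70pct.csv")  # hypothetical file

res = AnovaRM(df, depvar="impulse_250ms", subject="athlete",
              within=["treatment", "time"]).fit()
print(res)  # the treatment:time interaction is the recovery effect of interest
```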
