531

Topics in Harmonic Analysis on Combinatorial Graphs

Gidelew, Getnet Abebe January 2014 (has links)
In recent years harmonic analysis on combinatorial graphs has attracted considerable attention. The interest is stimulated in part by multiple existing and potential applications of analysis on graphs to information theory, signal analysis, image processing, computer science, learning theory, and astronomy. My thesis is devoted to sampling, interpolation, approximation, and multi-resolution on graphs. The results in the existing literature mainly concern these theories on unweighted graphs. My main objective is to extend existing theories and obtain new results about sampling, interpolation, approximation, and multi-resolution on general combinatorial graphs, including directed, undirected, and weighted graphs. / Mathematics
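The sampling and interpolation questions above can be made concrete through the graph Fourier transform. The sketch below is a minimal illustration, not the thesis's own construction: it builds a small weighted path graph, treats the Laplacian eigenvectors as the Fourier basis, and recovers a bandlimited signal from a subset of vertices by least squares. The graph, bandwidth, and sampling set are all assumptions chosen for the example.

```python
import numpy as np

# Small weighted path graph on 6 vertices (edge weights chosen arbitrarily).
W = np.zeros((6, 6))
weights = [1.0, 0.5, 2.0, 1.0, 0.8]
for i, w in enumerate(weights):
    W[i, i + 1] = W[i + 1, i] = w

L = np.diag(W.sum(axis=1)) - W      # combinatorial graph Laplacian
lam, U = np.linalg.eigh(L)          # eigenvectors = graph Fourier basis

K = 3                               # assumed bandwidth: signal lives in the span of the first K eigenvectors
coeffs = np.array([1.0, -0.5, 0.25])
f = U[:, :K] @ coeffs               # a K-bandlimited graph signal

S = [0, 2, 5]                       # assumed sampling set (must be a uniqueness set for the subspace)
f_sampled = f[S]

# Interpolation: least-squares fit of the samples within the bandlimited subspace.
c_hat, *_ = np.linalg.lstsq(U[S, :K], f_sampled, rcond=None)
f_rec = U[:, :K] @ c_hat

print("reconstruction error:", np.linalg.norm(f - f_rec))
```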
532

An Analysis of Some Properties and the Use of the Twist Map for the Finite Frenkel–Kontorova Model

Quapp, Wolfgang, Bofill, Josep Maria 04 April 2024 (has links)
We discuss the twist map, with a special interest in its use for the finite Frenkel–Kontorova model. We explain the meaning of the tensile force in some proposed models. We demonstrate that the application of the twist map to the finite FK model is not correct, because the procedure ignores the necessary boundary conditions.
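For readers unfamiliar with the map in question, the sketch below iterates the standard (Chirikov) twist map, which arises as the area-preserving map associated with the infinite Frenkel–Kontorova chain; the parameterization and the coupling value are illustrative assumptions, not the authors' own formulation.

```python
import math

def twist_map_orbit(x0, p0, K, n_steps):
    """Iterate the standard (Chirikov) twist map:
       p_{n+1} = p_n + (K / 2*pi) * sin(2*pi * x_n)
       x_{n+1} = x_n + p_{n+1}   (x taken mod 1)."""
    x, p = x0, p0
    orbit = [(x, p)]
    for _ in range(n_steps):
        p = p + (K / (2 * math.pi)) * math.sin(2 * math.pi * x)
        x = (x + p) % 1.0
        orbit.append((x, p))
    return orbit

# Example: a short orbit at moderate coupling (values chosen arbitrarily).
for x, p in twist_map_orbit(x0=0.1, p0=0.3, K=0.9, n_steps=5):
    print(f"x = {x:.4f}, p = {p:.4f}")
```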
533

Tackling the current limitations of bacterial taxonomy with genome-based classification and identification on a crowdsourcing Web service

Tian, Long 25 October 2019 (has links)
Bacterial taxonomy is the science of classifying, naming, and identifying bacteria. The scope and practice of taxonomy has evolved through history with our understanding of life and our growing and changing needs in research, medicine, and industry. As in animal and plant taxonomy, the species is the fundamental unit of taxonomy, but the genetic and phenotypic diversity that exists within a single bacterial species is substantially higher compared to animal or plant species. Therefore, the current "type"-centered classification scheme that describes a species based on a single type strain is not sufficient to classify bacterial diversity, in particular in regard to human, animal, and plant pathogens, for which it is necessary to trace disease outbreaks back to their source. Here we discuss the current needs and limitations of classic bacterial taxonomy and introduce LINbase, a Web service that not only implements current species-based bacterial taxonomy but also complements its limitations by providing a new framework for genome sequence-based classification and identification independent of the type-centric species. LINbase uses a sequence similarity-based framework to cluster bacteria into hierarchical taxa, which we call LINgroups, at multiple levels of relatedness, and crowdsources users' expertise by encouraging them to circumscribe these groups as taxa from the genus level to the intraspecies level. Circumscribing a group of bacteria as a LINgroup, adding a phenotypic description, and giving the LINgroup a name using the LINbase Web interface allows users to instantly share new taxa and complements the lengthy and laborious process of publishing a named species. Furthermore, unknown isolates can be identified immediately as members of a newly described LINgroup with fast and precise algorithms based on their genome sequences, allowing species- and intraspecies-level identification. The employed algorithms are based on a combination of the alignment-based algorithm BLASTN and the alignment-free method Sourmash, which is based on k-mers and the MinHash algorithm. The potential of LINbase is shown using examples of plant pathogenic bacteria. / Doctor of Philosophy / Life is always easier when people talk to each other in the same language. Taxonomy is the language that biologists use to communicate about life by 1. classifying organisms into groups, 2. giving names to these groups, and 3. identifying individuals as members of these named groups. When most scientists and the general public think of taxonomy, they think of the hierarchical structure of “Life”, “Domain”, “Kingdom”, “Phylum”, “Class”, “Order”, “Family”, “Genus”, and “Species”. However, the basic goal of taxonomy is to allow the identification of an organism as a member of a group that is predictive of its characteristics and to provide a name to communicate about that group with other scientists and the public. In the world of micro-organisms, taxonomy is extremely important since there are an estimated 10,000,000 to 1,000,000,000 different bacterial species. Moreover, microbiologists and pathologists need to consider differences among bacterial isolates even within the same species, a level that the current taxonomic system does not cover. Therefore, we developed a Web service, LINbase, which uses genome sequences to classify individual microbial isolates.
The database at the backend of LINbase assigns Life Identification Numbers (LINs) that express how individual microbial isolates are related to each other above, at, and below the species level. The LINbase Web service is designed to be an interactive web-based encyclopedia of microorganisms where users can share everything they know about micro-organisms, be it individual isolates or groups of isolates, for professional and scientific purposes. To develop LINbase, efficient computer programs were developed and implemented. To show how LINbase can be used, several groups of bacteria that cause plant diseases were classified and described.
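The alignment-free side of the identification pipeline rests on k-mer MinHash sketches of genome sequences. The snippet below is a minimal, self-contained illustration of that idea (a bottom-k MinHash over DNA k-mers and the resulting Jaccard estimate); it is not LINbase's or Sourmash's actual code, and the k-mer size, sketch size, and toy sequences are arbitrary choices.

```python
import hashlib
import random

def kmer_minhash(seq, k=21, sketch_size=100):
    """Bottom-k MinHash sketch of the k-mer set of a DNA sequence."""
    hashes = set()
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        h = int.from_bytes(hashlib.sha1(kmer.encode()).digest()[:8], "big")
        hashes.add(h)
    return set(sorted(hashes)[:sketch_size])

def jaccard_estimate(sketch_a, sketch_b, sketch_size=100):
    """Mash-style Jaccard estimate from the bottom-k sketch of the union."""
    union_bottom = set(sorted(sketch_a | sketch_b)[:sketch_size])
    return len(union_bottom & sketch_a & sketch_b) / len(union_bottom)

# Toy "genomes": random sequences that share their first 4000 bases.
random.seed(0)
g1 = "".join(random.choice("ACGT") for _ in range(5000))
g2 = g1[:4000] + "".join(random.choice("ACGT") for _ in range(1000))

print("estimated Jaccard similarity:", jaccard_estimate(kmer_minhash(g1), kmer_minhash(g2)))
```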
534

A Framework for Dynamic Selection of Backoff Stages during Initial Ranging Process in Wireless Networks

Mufti, Muhammad R., Afzal, Humaira, Awan, Irfan U., Cullen, Andrea J. 06 August 2017 (has links)
The only available solution in the IEEE 802.22 standard for avoiding collision amongst various contending customer premises equipment (CPEs) attempting to associate with a base station (BS) is the binary exponential random backoff process, in which the contending CPEs retransmit their association requests. The number of attempts the CPEs make to send their requests to the BS is fixed in an IEEE 802.22 network. This paper presents a mathematical framework that helps the BS determine at which attempt the majority of the CPEs become part of the wireless regional area network from a particular number of contending CPEs. Based on a particular attempt, the ranging request collision probability for any number of contending CPEs with respect to contention window size is approximated. The numerical results validate the effectiveness of the approximation. Moreover, the average ranging success delay experienced by the majority of the CPEs is also determined.
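As a rough companion to the analytical framework, the sketch below Monte-Carlo simulates contending CPEs that pick random slots from a contention window and double the window after each collision; the window size, number of CPEs, and retry limit are illustrative assumptions, not the parameters used in the paper.

```python
import random
from collections import Counter

def simulate_ranging(n_cpes=30, cw0=16, max_attempts=6, trials=2000):
    """Estimate, per backoff stage, the fraction of CPEs whose ranging
    request first succeeds at that attempt (success = unique slot choice)."""
    success_at = Counter()
    for _ in range(trials):
        pending = list(range(n_cpes))
        cw = cw0
        for attempt in range(1, max_attempts + 1):
            slots = {cpe: random.randrange(cw) for cpe in pending}
            slot_counts = Counter(slots.values())
            winners = [cpe for cpe, s in slots.items() if slot_counts[s] == 1]
            success_at[attempt] += len(winners)
            pending = [cpe for cpe in pending if cpe not in winners]
            cw *= 2                      # binary exponential backoff
            if not pending:
                break
    return {a: success_at[a] / (trials * n_cpes) for a in sorted(success_at)}

print(simulate_ranging())
```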
535

Effects of Solar Soft X-rays on Earth's Atmosphere

Samaddar, Srimoyee 06 February 2023 (has links)
The soft x-rays (wavelengths less than 30 nm) emitted by the sun are responsible for the production of high energy photoelectrons in the D and E regions of the ionosphere, where they deposit most of their energy. The photoelectrons created by this process are the main drivers for dissociation of nitrogen ($N_2$) molecules in the altitude range below 200 km. The dissociation of $N_2$ is one of the main mechanisms responsible for the production of nitric oxide (NO) at these altitudes. These processes are important to understand because NO plays a critical role in controlling the temperatures of various regions of Earth's atmosphere. In order to estimate the dissociation rate of $N_2$ we need its dissociation cross-sections. The dissociation cross-sections of $N_2$ due to inelastic collisions with electrons are primarily estimated from the cross-sections of its excitation states (using predissociation factors) and dissociative ionization channels. Predissociation is the transition, without emission of radiation, from a stable excited state to an unstable excited state of a molecule that leads to dissociation. Unfortunately, the lack of cross-section data, particularly at high electron energies and for higher excited states of $N_2$ and $N_2^+$, introduces uncertainty in the dissociation cross-section and subsequently the dissociation rate calculation, which leads to uncertainties in the NO production rate. We have updated a photoelectron model with thoroughly revised electron impact cross-section data for all major species and experimentally determined predissociation factors. The dissociation rates of $N_2$ from this model are compared to the dissociation rates obtained using another existing model (Solomon and Qian [2005]). A parameterized version of the updated dissociation rates is used in a one-dimensional global average thermospheric/ionospheric model, ACE1D (Atmospheric Chemistry and Energetics), to obtain the updated production rates of NO. In the final chapter, we use the ACE1D model to show that the energies deposited by the solar soft x-rays in the lower thermosphere at altitudes between 100-150 km affect the temperature of the Earth's thermosphere at altitudes well above 300 km. By turning off the input solar flux in the different wavelength bins of the model iteratively, we are able to demonstrate that the maximum change in exospheric temperature is due to changes in the soft solar x-ray bins. We also show, using the thermodynamic heat equation, that molecular diffusion via non-thermal photoelectrons is the main source of heat transfer to the upper ionosphere/thermosphere. Moreover, these temperature changes and heating effects of the solar soft x-rays are comparable to those of the much stronger He II 30.4 nm emission. Finally, we show that the uncertainties in the solar flux irradiance at these soft x-ray wavelengths result in corresponding uncertainties in the modeled exospheric temperature, and these uncertainties increase substantially with increased solar activity. / Doctor of Philosophy / The radiation from the sun covers a wide range of the electromagnetic spectrum. The soft x-rays with wavelengths less than 30 nm are the most energetic and variable part of the spectrum, and would have detrimental effects on humans were they not absorbed by the atmosphere. The absorption of soft x-rays by the Earth's atmosphere at altitudes near 100-150 km creates ionized and energized particles.
These energetic changes can affect and even damage satellites in low Earth orbit, and can cause radio communication blackouts and radiation storms (large quantities of energetic particles, protons and electrons accelerated by processes at and near the Sun). Therefore, we need good models that can quantify these changes in order to correctly predict their effects on our atmosphere and help to mitigate any harmful effects. The soft x-rays and the extreme ultraviolet (EUV) are responsible for ionization of the major neutral species, $N_2$, $O_2$, and O, in the Earth's atmosphere, which leads to the production of ions and energetic photoelectrons. These high energy photoelectrons can cause further ionization, excitation, and dissociation. We study the dissociation of $N_2$ by these photoelectrons to create neutral N atoms. The N atoms created via this process combine with the $O_2$ in the atmosphere to produce nitric oxide (NO), which is one of the most important minor constituents because of its role in regulating atmospheric heating and cooling. The production of NO peaks near 106 km altitude, where most of the energy of the soft x-rays is deposited. However, they also affect the temperature of the upper atmosphere well above this altitude. This is because the energy of the photoelectrons is conducted to the upper atmosphere by collisions of electrons and ions with ambient neutral atoms and molecules, thus increasing their temperature. In this study, we use modeling of soft x-ray irradiance, photoelectron ionization, excitation, and dissociation rates, and atmospheric neutral temperature to quantify the effects of soft x-rays on the Earth's atmosphere.
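Schematically, a dissociation rate of the kind discussed above is an integral of an energy-dependent cross-section against the photoelectron flux, scaled by the $N_2$ number density. The snippet below is a toy numerical version of that integral; the cross-section shape, flux spectrum, and density value are invented placeholders, not the thesis's data or model.

```python
import numpy as np

# Energy grid (eV) and invented, roughly shaped inputs for illustration only.
E = np.linspace(15.0, 500.0, 500)                      # electron energy grid
sigma = 1e-16 * np.exp(-((np.log(E) - 4.0) ** 2))      # cm^2, toy dissociation cross-section
phi = 1e8 * E ** -1.5                                  # cm^-2 s^-1 eV^-1, toy photoelectron flux
n_N2 = 1e12                                            # cm^-3, toy N2 number density near 110 km

# Dissociation rate per unit volume: n_N2 * integral of sigma(E) * phi(E) dE
rate = n_N2 * np.trapz(sigma * phi, E)                 # cm^-3 s^-1
print(f"toy N2 dissociation rate: {rate:.3e} cm^-3 s^-1")
```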
536

Control of the Doubly Salient Permanent Magnet Switched Reluctance Motor

Merrifield, David Bruce 21 May 2010 (has links)
The permanent magnet switched reluctance motor (PMSRM) is a hybrid dc motor which has the potential to be more effective than the switched reluctance (SRM) and permanent magnet (PM) motors. The PMSRM has both a salient rotor and stator, with permanent magnets placed directly onto the face of common pole stators. The PMSRM is wound like the SRM and can be controlled by the same family of converters. The addition of permanent magnets creates nonlinearities in both the governing electrical and mechanical equations, which differentiate the PMSRM from all other classes of electric motors. The primary goal of this thesis is to develop a cohesive and comprehensive control strategy for the PMSRM so as to demonstrate its operation and highlight its efficiency. The control of the PMSRM starts with understanding its region of operation and the underlying torque production of the motor. The selection of the operating region is followed by both linear and nonlinear electrical modeling of the motor and the design of current controllers for the PMSRM. The electromechanical model of the motor is dynamically simulated with the addition of a closed-loop speed controller. The speed controller is extended with an efficiency-searching algorithm which finds the operating condition with the highest efficiency online. / Master of Science
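The closed-loop speed control plus online efficiency search described above can be illustrated with a generic sketch: a PI speed loop wrapped around a crude first-order motor model, with a perturb-and-observe search over a notional control parameter (here a turn-on angle). Everything in it — the plant model, gains, and the efficiency function — is an assumption for demonstration, not the thesis's PMSRM model.

```python
def efficiency(theta_on):
    # Toy, unimodal efficiency curve versus a notional turn-on angle (degrees).
    return 0.9 - 0.002 * (theta_on - 30.0) ** 2

def run(speed_ref=100.0, steps=300, dt=0.01):
    kp, ki = 0.8, 2.0                 # PI gains (arbitrary)
    speed, integ = 0.0, 0.0           # motor speed (rad/s) and integrator state
    theta_on, step_size = 25.0, 1.0   # perturb-and-observe search state
    best_eff = efficiency(theta_on)

    for k in range(steps):
        # PI speed loop around a crude first-order plant: dw/dt = torque - damping * w
        err = speed_ref - speed
        integ += err * dt
        torque = kp * err + ki * integ
        speed += (torque - 0.1 * speed) * dt

        # Every 50 steps, perturb the turn-on angle and keep the move if efficiency improved.
        if k % 50 == 49:
            trial = theta_on + step_size
            if efficiency(trial) > best_eff:
                theta_on, best_eff = trial, efficiency(trial)
            else:
                step_size = -step_size          # reverse the search direction
    return speed, theta_on, best_eff

print(run())
```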
537

Metrics, Models and Methodologies for Energy-Proportional Computing

Subramaniam, Balaji 21 August 2015 (has links)
Massive data centers housing thousands of computing nodes have become commonplace in enterprise computing, and the power consumption of such data centers is growing at an unprecedented rate. Exacerbating such costs, data centers are often over-provisioned to avoid costly outages associated with the potential overloading of electrical circuitry. However, such over-provisioning is often unnecessary since a data center rarely operates at its maximum capacity. It is imperative that we realize effective strategies to control the power consumption of the server and improve the energy efficiency of data centers. Adding to the problem is the inability of servers to exhibit energy proportionality, which diminishes the overall energy efficiency of the data center. Therefore, in this dissertation, we investigate whether it is possible to achieve energy proportionality at the server- and cluster-level by efficient power and resource provisioning. Towards this end, we provide a thorough analysis of energy proportionality at the server- and cluster-level and provide insight into the power-saving opportunities and mechanisms to improve energy proportionality. Specifically, we make the following contributions at the server-level using enterprise-class workloads. We analyze the average power consumption of the full system as well as the subsystems and describe the energy proportionality of these components, characterize the instantaneous power profile of enterprise-class workloads using the on-chip energy meters, design a runtime system based on a load prediction model and an optimization framework to set the appropriate power constraints to meet specific performance targets, and then present the effects of our runtime system on energy proportionality, average power, performance, and instantaneous power consumption of enterprise applications. We then make the following contributions at the cluster-level. Using data serving, web searching, and data caching as our representative workloads, we first analyze the component-level power distribution on a cluster. Second, we characterize how these workloads utilize the cluster. Third, we analyze the potential of power provisioning techniques (i.e., active low-power, turbo, and idle low-power modes) to improve energy proportionality. We then describe the ability of active low-power modes to provide trade-offs in power and latency. Finally, we compare and contrast power provisioning and resource provisioning techniques. This thesis sheds light on mechanisms to tune the power provisioned for a system under strict performance targets and opportunities to improve energy proportionality and instantaneous power consumption via efficient power and resource provisioning at the server- and cluster-level. / Ph. D.
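One way to make "energy proportionality" concrete is the commonly used gap between a server's measured power curve and the ideal line running from zero power at idle to peak power at full utilization. The sketch below computes such a score from utilization/power samples; the sample data and the exact normalization are assumptions for illustration rather than the dissertation's metric.

```python
import numpy as np

def energy_proportionality(util, power):
    """Score in [0, 1]: 1 means power scales perfectly linearly with
    utilization, from 0 W at idle to peak power at 100% utilization."""
    util = np.asarray(util, dtype=float) / 100.0
    power = np.asarray(power, dtype=float)
    ideal = util * power.max()                     # perfectly proportional curve
    gap = np.trapz(np.abs(power - ideal), util)    # area between measured and ideal
    ideal_area = np.trapz(ideal, util)
    return 1.0 - gap / ideal_area

# Toy measurements: utilization (%) vs. average power (W).
util = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
power = [95, 110, 130, 150, 170, 190, 210, 230, 250, 270, 290]
print(f"energy proportionality: {energy_proportionality(util, power):.2f}")
```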
538

The data processing to detect correlated movement of Cerebral Palsy patient in early phase

Pyon, Okmin 03 February 2016 (has links)
The early diagnosis of CP (Cerebral Palsy) in infants is important for developing meaningful interventions. One of the major symptoms of CP is a lack of coordinated movements in a baby. Bilateral coordinated movement (BCM) is a pattern that a baby shows in the early development stage. Each limb movement shows various ranges of speed and angle with fluency in a normal infant. When a baby has CP, the movements are cramped and more synchronized. A quantitative method is needed to diagnose the BCM. Data is collected from 3-axis accelerometers, which are connected to each limb of the baby. Signal processing of the collected data using short-time Fourier transforms, along with the formation of time-dependent transfer functions and the coherence property, is the key to the diagnostic approach. Combinations of each limb's movement and their relationships can represent the correlated movement. Data collected from a normal baby is used to develop the technique for identifying fidgety movement. Time histories and the resulting diagnostic tool are presented to show the regions of the described movement. The evaluation of the transduction approach and the analysis is discussed in detail. The application of the quantitative tool for the early diagnosis of CP offers clinicians the opportunity to provide interventions that may reduce the debilitating impact this condition has on children. Tools such as this can also be used to assess motor development in infants and lead to the identification and early intervention for other conditions. / Master of Science
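The coherence-based comparison of two limbs' accelerometer traces can be sketched with standard signal-processing tools; the synthetic signals, sampling rate, and window length below are placeholders, not the study's recordings or parameters.

```python
import numpy as np
from scipy.signal import coherence

fs = 100.0                              # assumed sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)

# Two synthetic limb-acceleration traces sharing a 2 Hz component plus independent noise.
shared = np.sin(2 * np.pi * 2.0 * t)
left_arm = shared + 0.8 * rng.standard_normal(t.size)
right_leg = 0.7 * shared + 0.8 * rng.standard_normal(t.size)

# Magnitude-squared coherence from Welch-averaged spectra.
f, Cxy = coherence(left_arm, right_leg, fs=fs, nperseg=256)
print(f"coherence near 2 Hz: {Cxy[np.argmin(np.abs(f - 2.0))]:.2f}")
```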
539

Integrated Enhancement of Testability and Diagnosability for Digital Circuits

Rahagude, Nikhil Prakash 29 November 2010 (has links)
While conventional test point insertions commonly used in design for testability can improve fault coverage, the test points selected may not necessarily be the best candidates to aid silicon diagnosis. In this thesis, test point insertions are conducted with the aim to detect more faults and also synergistically distinguish currently indistinguishable fault-pairs. We achieve this by identifying those points in the circuit which are not only hard-to-test but also lie on distinguishable frontiers, as Testability-Diagnosability (TD) points. To this end, we propose a novel low-cost metric to identify such TD points. Further, we propose a new DFT + DFD architecture, which adds just one pin (to identify test/functional mode) and a small amount of additional combinational logic to the circuit under test. Our experiments indicate that the proposed architecture can distinguish 4x more previously indistinguishable fault-pairs than existing DFT architectures while maintaining similar fault coverage. Further, the experiments illustrate that quality results can be achieved with an area overhead of around 5%. Additional experiments conducted on hard-to-test circuits show an increase in fault coverage by 48% while maintaining similar diagnostic resolution. Built-in Self Test (BIST) is a technique of adding additional blocks of hardware to circuits to allow them to perform self-testing. This enables the circuits to test themselves, thereby reducing the dependency on expensive external automated test equipment (ATE). At the end of a test session, BIST generates a signature which is a compaction of the obtained output responses of the circuit for that session. Comparison of this signature with the reference signature categorizes the circuit as error free or buggy. While BIST provides a quick and low-cost alternative to check a circuit's correctness, diagnosis in a BIST environment remains poor because of the limited information present in the lossily compacted final signature. The signature does not give any information about the possible defect location in the circuit. To facilitate diagnosis, researchers have proposed the use of two additional on-chip embedded memories: a response memory to store reference responses and a fail memory to store failing responses. We propose a novel architecture in which only one additional memory is required. Experimental results conducted on benchmark circuits substantiate that the same fault coverage can be maintained using just 5% of the available test vectors. This reduces the size of the memory required to store responses, which in turn reduces area overhead. Further, by adding test points to the circuit using our proposed architecture, we can improve the diagnostic resolution by 60% with respect to external testing. / Master of Science
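The BIST signature mentioned above is typically produced by feeding output responses through a linear-feedback shift register, so a whole test session compacts to a few bits. The sketch below models a simple serial-input signature register; the feedback polynomial, register width, and response streams are arbitrary choices, not the thesis's architecture.

```python
def sisr_signature(response_bits, width=8, taps=(0, 2, 3, 4)):
    """Serial-input signature register: XOR each response bit into an
    LFSR defined by the given feedback tap positions (toy polynomial)."""
    state = 0
    for bit in response_bits:
        feedback = bit
        for t in taps:
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & ((1 << width) - 1)
    return state

good_response = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]
faulty_response = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0]   # one flipped bit

print(f"good signature:   {sisr_signature(good_response):#04x}")
print(f"faulty signature: {sisr_signature(faulty_response):#04x}")
```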
540

MODEL-FREE ALGORITHMS FOR CONSTRAINED REINFORCEMENT LEARNING IN DISCOUNTED AND AVERAGE REWARD SETTINGS

Qinbo Bai (19804362) 07 October 2024 (has links)
<p dir="ltr">Reinforcement learning (RL), which aims to train an agent to maximize its accumulated reward through time, has attracted much attention in recent years. Mathematically, RL is modeled as a Markov Decision Process, where the agent interacts with the environment step by step. In practice, RL has been applied to autonomous driving, robotics, recommendation systems, and financial management. Although RL has been greatly studied in the literature, most proposed algorithms are model-based, which requires estimating the transition kernel. To this end, we begin to study the sample efficient model-free algorithms under different settings.</p><p dir="ltr">Firstly, we propose a conservative stochastic primal-dual algorithm in the infinite horizon discounted reward setting. The proposed algorithm converts the original problem from policy space to the occupancy measure space, which makes the non-convex problem linear. Then, we advocate the use of a randomized primal-dual approach to achieve O(\eps^-2) sample complexity, which matches the lower bound.</p><p dir="ltr">However, when it comes to the infinite horizon average reward setting, the problem becomes more challenging since the environment interaction never ends and can’t be reset, which makes reward samples not independent anymore. To solve this, we design an epoch-based policy-gradient algorithm. In each epoch, the whole trajectory is divided into multiple sub-trajectories with an interval between each two of them. Such intervals are long enough so that the reward samples are asymptotically independent. By controlling the length of trajectory and intervals, we obtain a good gradient estimator and prove the proposed algorithm achieves O(T^3/4) regret bound.</p>
