41

The relationship between orthology, protein domain architecture and protein function

Forslund, Kristoffer January 2011 (has links)
In the absence of experimental data, protein function is often predicted from evolutionary and protein-structure theory. Under the 'domain grammar' hypothesis, the function of a protein follows from the domains it encodes. Under the 'orthology conjecture', orthologs, related through species formation, are expected to be more functionally similar than paralogs, which are homologs in the same or different species descended from a gene duplication event. However, these assumptions have not thus far been systematically evaluated. To test the 'domain grammar' hypothesis, we built models for predicting function from the domain combinations present in a protein, and demonstrated that multi-domain combinations imply functions that the individual domains do not. We also developed a novel gene-tree-based method for reconstructing the evolutionary histories of domain architectures, to search for cases of architectures that have arisen multiple times in parallel, and found this to be more common than previously reported. To test the 'orthology conjecture', we first benchmarked methods for homology inference under the obfuscating influence of low-complexity regions, in order to improve the InParanoid orthology inference algorithm. InParanoid was then used to test the relative conservation of functionally relevant properties between orthologs and paralogs at various evolutionary distances, including intron positions, domain architectures, and Gene Ontology functional annotations. We found increased conservation of domain architectures in orthologs relative to paralogs, supporting the 'orthology conjecture' and the 'domain grammar' hypothesis acting in tandem. However, an equivalent analysis of Gene Ontology functional conservation yielded spurious results, which may be an artifact of species-specific biases in functional annotation databases. I discuss possible ways of circumventing this bias so the 'orthology conjecture' can be tested more conclusively. / At the time of the doctoral defense, the following paper was unpublished and had a status as follows: Paper 6: Epub ahead of print.
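A minimal sketch of the 'domain grammar' idea tested above: if functions attach to whole domain combinations rather than to individual domains, a combination-keyed lookup can imply functions that no single-domain lookup would. All identifiers and training pairs below are hypothetical placeholders, not the thesis's models or data.

```python
from collections import defaultdict

def train(annotated_proteins):
    """annotated_proteins: iterable of (domain_architecture, go_terms) pairs,
    where domain_architecture is a tuple of domain identifiers."""
    model = defaultdict(set)
    for arch, go_terms in annotated_proteins:
        model[arch] |= set(go_terms)  # associate terms with the whole combination
    return model

def predict(model, arch):
    # Prefer the exact multi-domain combination; otherwise fall back to the
    # union of what the individual domains imply on their own.
    if arch in model:
        return model[arch]
    return set().union(*(model.get((d,), set()) for d in arch))

# Hypothetical toy data: the two-domain combination implies a function
# that the lone kinase domain does not.
training = [(("PF00069",), {"GO:0004672"}),
            (("PF00069", "PF00433"), {"GO:0004674"})]
print(predict(train(training), ("PF00069", "PF00433")))  # {'GO:0004674'}
```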
42

Low complexity turbo equalization using superstructures

Myburgh, Hermanus Carel January 2013 (has links)
In a wireless communication system the transmitted information is subjected to a number of impairments, among which inter-symbol interference (ISI), thermal noise and fading are the most prevalent. Owing to the dispersive nature of the communication channel, ISI results from the arrival of multiple delayed copies of the transmitted signal at the receiver. Thermal noise is caused by the random fluctuations of electrons in the receiver hardware, while fading is the result of constructive and destructive interference, as well as absorption during transmission. To protect the source information, error-correction coding (ECC) is performed in the transmitter, after which the coded information is interleaved in order to temporally separate the information to be transmitted. Turbo equalization (TE) is a technique whereby equalization (to correct ISI) and decoding (to correct errors) are performed iteratively, exchanging extrinsic information derived from the optimal posterior probabilities produced by each algorithm. The extrinsic information determined from the decoder output is used as prior information by the equalizer, and vice versa, allowing the bit-error rate (BER) performance to improve with each iteration. Turbo equalization achieves excellent BER performance, but its computational complexity grows exponentially with both the channel memory and the encoder memory, and it can therefore not be used in dispersive channels where the channel memory is large. A number of low complexity equalizers have consequently been developed to replace the maximum a posteriori probability (MAP) equalizer in order to reduce the complexity. Some of the resulting low complexity turbo equalizers achieve performance comparable to that of a conventional turbo equalizer that uses a MAP equalizer. In other cases the low complexity turbo equalizers perform much worse than the corresponding conventional turbo equalizer (CTE), because of suboptimal equalization and the inability of the low complexity equalizers to use the extrinsic information effectively as prior information. In this thesis the author develops two novel iterative low complexity turbo equalizers. The turbo equalization problem is modeled on superstructures, where, in the context of this thesis, a superstructure performs the task of both the equalizer and the decoder. The resulting low complexity turbo equalizers process all the available information as a whole, so there is no exchange of extrinsic information between different subunits. The first is modeled on a dynamic Bayesian network (DBN), treating the turbo equalization problem as a quasi-directed acyclic graph by allowing a dominant connection between each observed variable and its corresponding hidden variable, as well as weak connections between the observed variables and past and future hidden variables. The resulting turbo equalizer is named the dynamic Bayesian network turbo equalizer (DBN-TE). The second low complexity turbo equalizer developed in this thesis is modeled on a Hopfield neural network, and is named the Hopfield neural network turbo equalizer (HNN-TE). The HNN-TE is an amalgamation of the HNN maximum likelihood sequence estimation (MLSE) equalizer, developed previously by this author, and an HNN MLSE decoder derived from a single codeword HNN decoder.
Both low complexity turbo equalizers developed in this thesis are able to jointly and iteratively equalize and decode coded, randomly interleaved information transmitted through highly dispersive multipath channels. Their performance is comparable to that of the conventional turbo equalizer while, for channels with long memory, their computational complexity is far lower. Their performance is also comparable to that of other low complexity turbo equalizers, although at higher computational complexity. The computational complexity of both the DBN-TE and the HNN-TE is approximately quadratic at best (and cubic at worst) in the transmitted data block length, exponential in the encoder constraint length, and approximately independent of the channel memory length. The approximately quadratic complexity of both the DBN-TE and the HNN-TE is mostly due to interleaver mitigation, which requires multiplication by matrices whose dimensions equal the data block length; without it, turbo equalization using superstructures is impossible for systems employing random interleavers. / Thesis (PhD)--University of Pretoria, 2013. / gm2013 / Electrical, Electronic and Computer Engineering / unrestricted
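For contrast with the superstructure approach, the sketch below traces the extrinsic-information bookkeeping of the conventional turbo-equalization loop that the DBN-TE and HNN-TE replace. The SISO equalizer and decoder are placeholder callables standing in for MAP/BCJR-style components; this is a schematic illustration under assumed LLR conventions, not the thesis's method.

```python
import numpy as np

def turbo_equalize(rx, siso_equalizer, siso_decoder, perm, iters=5):
    """Conventional turbo-equalization loop (schematic).

    rx: received samples; perm: interleaver permutation (index array);
    siso_equalizer(rx, priors) and siso_decoder(priors) are assumed to
    return posterior log-likelihood ratios (LLRs) per coded bit."""
    inv = np.argsort(perm)                 # deinterleaver indices
    prior_eq = np.zeros(len(perm))         # start with uninformative priors
    for _ in range(iters):
        post_eq = siso_equalizer(rx, prior_eq)
        ext_eq = post_eq - prior_eq        # extrinsic: subtract own prior
        post_dec = siso_decoder(ext_eq[inv])   # decoder works in code order
        ext_dec = post_dec - ext_eq[inv]
        prior_eq = ext_dec[perm]           # reinterleave, feed back as prior
    return post_dec > 0                    # hard bit decisions
```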
43

Jednoduchý průmyslový Ethernet / Industrial low complexity Ethernet system

Šustek, Vladimír January 2019 (has links)
This diploma thesis focuses on building an embedded demonstration application of a proprietary Low Complexity Ethernet module for industrial use, referred to as the LEN/LES 2. First, the main technologies used, such as the MCU and the lightweight IP (lwIP) stack, are discussed. A detailed view of the system hardware architecture, derived from the hardware and software requirements, follows. The thesis then describes the blocks of the embedded system in terms of the specific parts and hardware requirements needed to create a universal board. Subsequent chapters cover the first startup and known hardware bugs, the lwIP implementation, and the MODBUS system implementation. The core of the system is the newly released ADuCM4050 microcontroller, the Low Complexity Ethernet MAC-PHY prototype block, and the supporting MCU peripherals on which the application depends.
44

Realization of a Low Cost Low Complexity Traveling Wave Antenna

Host, Nicholas K. 15 May 2015 (has links)
No description available.
45

The potential role of the multivalent ionic compound PolyP in the assembly of the liquid nature in the cell

Matta, Lara Michel 11 1900 (has links)
Prion-like proteins containing Low Complexity Sequences (LCSs) have the propensity to aggregate and form membrane-less compartments in the cell. These compartments have liquid-like physical properties such as the ability to wet surfaces, drip, and fuse with other liquid bodies. In this study, we demonstrated that the prion domain-containing protein Hrp1 forms droplets of different sizes in vitro, via liquid-liquid phase separation, only in the presence of negatively charged polymers, whereas under the same conditions the prion-like PolyQ/N domain of Hrp1 assembles into a gel-like material. Based on these findings, we hypothesize that droplet formation in vivo could be modulated by negatively charged polyelectrolytes found in the cell, such as DNA, RNA and polyphosphate (PolyP). My goal was to examine the role of the polyanionic nature of PolyP in the assembly of P-bodies, using Saccharomyces cerevisiae as a cellular model and fluorescence microscopy. We chose to study processing (P)-bodies based on previous findings that these cellular subcompartments are formed by liquid-liquid phase separation of component proteins in the cytoplasm. We found that depleting phosphate from the media and deleting the vtc4 gene, which is responsible for PolyP synthesis, had no effect on P-body formation. In addition, we demonstrated that PolyP and the protein Edc3, a core component of P-bodies, do not colocalize. Our data suggest that PolyP does not affect P-body formation. However, further complementary studies are needed to confirm these observations, for example by considering other P-body components, such as Lsm4, or by analyzing the effects of PolyP on other membrane-less, liquid-like organelles in vivo.
46

Algorithmes parallèles et architectures évolutives de faible complexité pour systèmes optiques OFDM cohérents temps réel / Low-Complexity Parallel Algorithms and Scalable Architectures for Real-Time Coherent Optical OFDM Systems

Udupa, Pramod 19 June 2014 (has links)
In this thesis, low-complexity algorithms and efficient, scalable parallel architectures for CO-OFDM systems are explored. First, low-complexity algorithms for the estimation of timing and carrier frequency offset (CFO) in a dispersive channel are studied. A novel low-complexity timing synchronization algorithm, which can withstand a large amount of dispersive delay, is proposed and compared with previous proposals. Then, the problem of realizing a low-complexity parallel architecture is studied. A generalized, scalable parallel architecture, which can be used to realize any auto-correlation algorithm, is proposed. It is then extended to handle multiple parallel samples from the analog-to-digital converter (ADC) and provide outputs matching the input ADC rate. The scalability of the architecture for a higher number of parallel outputs and for different kinds of auto-correlation algorithms is explored. An algorithm-architecture co-design approach is then applied to the entire CO-OFDM transceiver chain. At the transmitter side, a radix-2² algorithm for the IFFT is chosen, and a parallel Multipath Delay Commutator (MDC) Feed-forward (FF) architecture is designed, which consumes fewer resources than radix-2/4 MDC FF architectures. At the receiver side, an efficient algorithm for integer CFO estimation is adopted and realized without the use of complex multipliers. A reduction in hardware complexity is achieved through efficient architectures for timing synchronization, the FFT, and integer CFO estimation. A fixed-point analysis of the entire transceiver chain is performed to identify fixed-point-sensitive blocks that significantly affect the bit error rate (BER). The proposed algorithms are validated through off-line optical experiments using an arbitrary waveform generator (AWG) at the transmitter and a digital storage oscilloscope (DSO) with Matlab at the receiver. BER plots are used to show the validity of the system built. A hardware implementation of the proposed synchronization algorithm is validated on a real-time FPGA platform.
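As background for the auto-correlation architectures above, the sketch below computes a generic auto-correlation timing metric for a preamble whose two halves are identical (Schmidl-Cox style). It illustrates the class of algorithms the proposed parallel architecture realizes, not the thesis's specific low-complexity variant.

```python
import numpy as np

def autocorr_timing_metric(r, L):
    """M(d) = |P(d)|^2 / R(d)^2, where P correlates the two halves of a
    repeated preamble of half-length L and R normalizes by the energy."""
    N = len(r) - 2 * L
    M = np.empty(N)
    for d in range(N):
        P = np.sum(np.conj(r[d:d + L]) * r[d + L:d + 2 * L])
        R = np.sum(np.abs(r[d + L:d + 2 * L]) ** 2)
        M[d] = np.abs(P) ** 2 / (R ** 2 + 1e-12)
    return M  # coarse frame timing estimate: int(np.argmax(M))
```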
47

Using chaos to enhance multi-user time-of-arrival estimation : application to UWB ranging systems / Utilisation du chaos pour améliorer l’estimation du temps d'arrivée dans le cas multi-utilisateur : application à un système de télémétrie de type UWB

Ma, Hang 23 April 2014 (has links)
In the coming decades, highly accurate position information has the potential to create revolutionary applications in the social, medical, commercial and military areas. Ultra-Wideband (UWB) technology is considered a potential candidate for enabling accurate localization capabilities through Time-of-Arrival (TOA) based ranging techniques. Over the past decade, chaotic signals have received significant attention due to a number of attractive features. Chaotic signals are aperiodic, deterministic, random-like signals derived from nonlinear dynamical systems; their good autocorrelation, low cross-correlation and sensitivity to initial conditions make them particularly suitable for ranging systems, and that sensitivity makes it possible to generate a large number of distinct chaotic signals to increase overall system capacity. In this thesis, two new multiuser TOA estimation algorithms are proposed, with low complexity and robustness to multi-user interference (MUI), that support a much larger number of users than current multiuser TOA estimators. However, the use of classic spreading sequences and ranging pulses constrains further improvement of ranging performance and system capacity. To break through the limits imposed by classic signals, selected chaotic signals are employed as the spreading sequences or ranging pulses in the proposed algorithms. With the use of chaotic signals, the proposed algorithms not only achieve additional performance improvements but are also able to support a larger number of users than their counterparts using classic signals.
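To make the sensitivity-to-initial-conditions point concrete, here is a minimal sketch deriving bipolar spreading sequences from the logistic map. The map and its parameter are common textbook choices used purely for illustration; the thesis selects its chaotic signals by their correlation properties, which this sketch does not attempt to reproduce.

```python
import numpy as np

def logistic_spreading_sequence(x0, n, mu=3.9999):
    """Iterate the logistic map x_{k+1} = mu * x_k * (1 - x_k) and
    threshold it into a bipolar (+/-1) spreading sequence."""
    x = np.empty(n)
    x[0] = x0
    for k in range(n - 1):
        x[k + 1] = mu * x[k] * (1.0 - x[k])
    return np.where(x > 0.5, 1.0, -1.0)

# Tiny changes in x0 yield essentially uncorrelated sequences, so many
# users can be assigned distinct codes:
s1 = logistic_spreading_sequence(0.3000000, 1000)
s2 = logistic_spreading_sequence(0.3000001, 1000)
print(np.abs(np.mean(s1 * s2)))  # close to 0: low cross-correlation
```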
48

Régularisations de faible complexité pour les problèmes inverses / Low Complexity Regularization of Inverse Problems

Vaiter, Samuel 10 July 2014 (has links)
This thesis is concerned with recovery guarantees and sensitivity analysis of variational regularization for noisy linear inverse problems. This is cast as a convex optimization problem combining a data fidelity term and a regularizing functional promoting solutions conforming to some notion of low complexity related to their non-smoothness points. Our approach, based on partial smoothness, handles a variety of regularizers including analysis/structured sparsity, antisparsity and low-rank structure. We first give an analysis of the noise robustness guarantees, both in terms of the distance of the recovered solutions to the original object and the stability of the promoted model space. We then turn to the sensitivity analysis of these optimization problems to observation perturbations. With random observations, we build an unbiased estimator of the risk which provides a parameter selection scheme.
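The setting described above can be summarized by the generic variational problem below; the notation is standard and assumed for illustration, not copied from the thesis.

```latex
% Noisy linear inverse problem: y = \Phi x_0 + w.
% Variational recovery with a low-complexity-promoting regularizer J:
\min_{x \in \mathbb{R}^n} \ \frac{1}{2}\|y - \Phi x\|_2^2 + \lambda J(x),
\quad J \in \bigl\{ \|\cdot\|_1 \ (\text{sparsity}),\ \|D^\top \cdot\|_1 \ (\text{analysis sparsity}),\
\|\cdot\|_\infty \ (\text{antisparsity}),\ \|\cdot\|_* \ (\text{low rank}) \bigr\}.
```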
49

Exponential weighted aggregation : oracle inequalities and algorithms / Agrégation à poids exponentiels : inégalités oracles et algorithmes

Luu, Duy tung 23 November 2017 (has links)
In many areas of statistics, including signal and image processing, high-dimensional estimation is an important task for recovering an object of interest. In the overwhelming majority of cases, however, the recovery problem is ill-posed. Fortunately, even if the ambient dimension of the object to be restored (signal, image, video) is very large, its intrinsic 'complexity' is generally small. This prior information can be introduced through two approaches: (i) penalization (very popular) and (ii) aggregation by exponential weighting (EWA). The penalized approach seeks an estimator that minimizes a data loss function penalized by a term promoting objects of low (simple) complexity. The EWA combines a family of pre-estimators, each associated with a weight that exponentially favors the same objects of low complexity. This manuscript consists of two parts: a theoretical part and an algorithmic part. In the theoretical part, we first propose the EWA with a new family of priors promoting analysis-group sparse signals, whose performance is guaranteed by oracle inequalities. Next, we analyze the penalized estimator and the EWA, with a general prior promoting simple objects, in a unified framework for establishing theoretical guarantees. Two types of guarantees are established: (i) prediction oracle inequalities, and (ii) estimation bounds. We then instantiate them for particular cases, some of which have been studied in the literature. In the algorithmic part, we propose an implementation of these estimators that combines Monte-Carlo simulation (Langevin diffusion process) with proximal splitting algorithms, and we show their convergence guarantees. Several numerical experiments illustrate our theoretical guarantees and our algorithms.
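A minimal sketch of the exponential weighting rule described above: pre-estimators are combined with weights proportional to a prior times exp(-loss/beta). It illustrates only the aggregation principle; the thesis computes such aggregates by Langevin-diffusion sampling rather than by enumerating pre-estimators as done here.

```python
import numpy as np

def ewa(pre_estimates, losses, beta=1.0, prior=None):
    """pre_estimates: array of shape (m, n), one pre-estimator per row;
    losses: length-m data losses; beta: temperature; prior: length-m
    prior weights (uniform if None). Returns the aggregated estimate."""
    losses = np.asarray(losses, dtype=float)
    prior = np.ones_like(losses) if prior is None else np.asarray(prior, float)
    w = prior * np.exp(-(losses - losses.min()) / beta)  # shift for stability
    w /= w.sum()
    return w @ np.asarray(pre_estimates)  # convex combination of rows
```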
50

Optimized information processing in resource-constrained vision systems. From low-complexity coding to smart sensor networks

Morbee, Marleen 14 October 2011 (has links)
Vision systems have become ubiquitous. They are used for traffic monitoring, elderly care, video conferencing, virtual reality, surveillance, smart rooms, home automation, sports game analysis, industrial safety, medical care, etc. In most vision systems, the data coming from the visual sensor(s) is processed before transmission in order to save communication bandwidth or achieve higher frame rates. The type of data processing needs to be chosen carefully depending on the targeted application, taking into account the available memory, computational power, energy resources and bandwidth constraints. In this dissertation, we investigate how a vision system should be built under practical constraints. First, the system should be intelligent, so that the right data is extracted from the video source. Second, when processing video data this intelligent vision system should know its own practical limitations and should try to achieve the best possible output within its capabilities. We study and improve a wide range of vision systems for a variety of applications, each with different types of constraints. First, we present a modulo-PCM-based coding algorithm for applications that demand very low complexity coding and need to preserve some of the advantageous properties of PCM coding (direct processing, random access, rate scalability). Our modulo-PCM coding scheme combines three well-known, simple source coding strategies: PCM, binning, and interpolative coding. The encoder first analyzes the signal statistics in a very simple way. Then, based on these statistics, it simply discards a number of bits of each image sample. The modulo-PCM decoder recovers the removed bits of each sample using its received bits and side information generated by interpolating previously decoded signals. Our algorithm is especially appropriate for image coding. / Morbee, M. (2011). Optimized information processing in resource-constrained vision systems. From low-complexity coding to smart sensor networks [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/12126 / Palancia
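The encoder/decoder behavior just described admits a very small sketch: keep only the k least-significant bits of each sample, and let the decoder choose the value consistent with those bits that lies closest to an interpolation-based prediction. This is a minimal illustration of the principle in the abstract, assuming integer PCM samples; it is not the thesis's exact scheme.

```python
import numpy as np

def modulo_pcm_encode(samples, k):
    """Transmit only the k least-significant bits (sample mod 2**k)."""
    return np.asarray(samples) % (1 << k)

def modulo_pcm_decode(residues, predictions, k):
    """Pick the value congruent to each residue mod 2**k that is nearest
    to the side-information prediction (e.g., an interpolation of
    previously decoded samples)."""
    m = 1 << k
    r = np.asarray(residues)
    p = np.asarray(predictions)
    return r + m * np.round((p - r) / m).astype(int)

x = np.array([1000, 1013, 1027])          # original PCM samples
enc = modulo_pcm_encode(x, 4)             # 4 bits per sample on the wire
pred = np.array([998, 1015, 1024])        # decoder-side interpolation
print(modulo_pcm_decode(enc, pred, 4))    # -> [1000 1013 1027]
```

Decoding succeeds as long as each prediction is within 2**(k-1) of the true sample, which is the usual design condition for choosing k.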
