181

Estimativas de máxima verosimilhança e bayesianas do número de erros de um software / Maximum likelihood and Bayesian estimates of the number of errors in a software system

Silva, Karolina Barone Ribeiro da 24 February 2006 (has links)
In this work we present the capture-recapture methodology, under both the classical and the Bayesian approach, to estimate the number of errors in a software system through inspection by distinct reviewers. We present the general statistical model, assuming independence among errors and among reviewers, and consider two particular cases: equally detectable (homogeneous) errors with not equally efficient (heterogeneous) reviewers, and not equally detectable (heterogeneous) errors with equally efficient (homogeneous) reviewers. Then, under the assumption of independence and heterogeneity among errors and independence and homogeneity among reviewers, we suppose that the heterogeneity of the errors is expressed by classifying them as either easy or difficult to detect, with the detection probabilities of an easy and of a difficult error assumed known. Finally, under the hypothesis of independence and homogeneity among errors, we present a new model that allows heterogeneity and dependence among reviewers. We also present examples with simulated and real data.
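For intuition, the simplest classical instance of this methodology is the two-reviewer (two-capture) case, where the total number of errors has a closed-form estimate. A minimal sketch using the Lincoln-Petersen estimator with Chapman's bias correction; this illustrates the general idea only, not the heterogeneous or dependent models developed in the thesis:

```python
def estimate_total_errors(n1, n2, m):
    """Two-reviewer capture-recapture estimate of the total error count.

    n1: errors found by reviewer 1
    n2: errors found by reviewer 2
    m:  errors found by both reviewers (the "recaptures")

    Uses Chapman's bias-corrected form of N = n1 * n2 / m; assumes
    independent, homogeneous errors and reviewers.
    """
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Example: reviewer 1 finds 25 errors, reviewer 2 finds 30, 15 in common;
# 40 distinct errors were seen, roughly 49 are estimated to exist.
print(estimate_total_errors(25, 30, 15))  # ~49.4
```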
182

Řešení parciálních diferenciálních rovnic s využitím aposteriorního odhadu chyby / A posteriori error estimation method for partial differential equations solution

Valenta, Václav Unknown Date (has links)
This thesis deals with the calculation of gradients at triangulation nodes using a weighted average of the gradients of the neighboring elements. The recovered gradient is then used for a posteriori error estimation, which yields a better solution of partial differential equations. The work also presents two common discretization methods: the finite element method and the finite difference method.
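A minimal sketch of this node-wise gradient recovery on a triangular mesh, assuming area weights and piecewise-constant element gradients; the function name and the weighting choice are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def recover_nodal_gradients(n_nodes, triangles, elem_grads, elem_areas):
    """Gradient at each node = area-weighted average of the gradients
    of the triangles sharing that node.

    triangles:  (T, 3) integer array, node indices of each triangle
    elem_grads: (T, 2) constant gradient of the FE solution per triangle
    elem_areas: (T,)   triangle areas, used as the averaging weights
    """
    g = np.zeros((n_nodes, 2))
    w = np.zeros(n_nodes)
    for t, tri in enumerate(triangles):
        for v in tri:
            g[v] += elem_areas[t] * elem_grads[t]
            w[v] += elem_areas[t]
    return g / w[:, None]   # assumes every node belongs to some triangle
```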
183

Local Ill-Posedness and Source Conditions of Operator Equations in Hilbert Spaces

Hofmann, B., Scherzer, O. 30 October 1998 (has links)
The characterization of the local ill-posedness and the local degree of nonlinearity are of particular importance for the stable solution of nonlinear ill-posed problems. We present assertions concerning the interdependence between the ill-posedness of the nonlinear problem and that of its linearization. Moreover, we show that the concept of the degree of nonlinearity, combined with source conditions, can be used to characterize the local ill-posedness and to derive a posteriori estimates for nonlinear ill-posed problems. A posteriori estimates are widely used in finite element and multigrid methods for the solution of nonlinear partial differential equations, but these techniques are in general not applicable to inverse and ill-posed problems. Additionally, we show for the well-known Landweber method and the iteratively regularized Gauss-Newton method that they satisfy a posteriori estimates under source conditions; this can be used to prove convergence rate results.
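The Landweber method referenced here is, in its linear form, plain gradient descent on the least-squares functional; a minimal sketch for a linear ill-posed problem Ax = y (the nonlinear variant replaces A by the local derivative; the default step size is a standard convergent choice, and the concrete stopping rule is left to the caller):

```python
import numpy as np

def landweber(A, y, n_iter, omega=None):
    """Landweber iteration x_{k+1} = x_k + omega * A.T @ (y - A @ x_k)
    for the linear system A x = y. Converges for 0 < omega < 2 / ||A||^2;
    for noisy data the iteration count acts as the regularization
    parameter (stop early, e.g. by the discrepancy principle).
    """
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + omega * A.T @ (y - A @ x)
    return x
```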
184

A posteriori error estimation for anisotropic tetrahedral and triangular finite element meshes

Kunert, Gerd 08 January 1999 (has links)
Many physical problems lead to boundary value problems for partial differential equations, which can be solved with the finite element method. In order to construct adaptive solution algorithms or to measure the error, one aims at reliable a posteriori error estimators. Many such estimators are known, as is their theoretical foundation. Some boundary value problems yield so-called anisotropic solutions (e.g. with boundary layers); anisotropic finite element meshes can then be advantageous. However, the common error estimators for isotropic meshes fail when applied to anisotropic meshes, or have not yet been investigated. For rectangular or cuboidal anisotropic meshes a modified error estimator had already been derived. In this paper, error estimators for anisotropic tetrahedral or triangular meshes are considered; such meshes offer greater geometrical flexibility. For the Poisson equation we introduce a residual error estimator, an estimator based on a local problem, several Zienkiewicz-Zhu estimators, and an L2 error estimator, together with the corresponding mathematical theory. For a singularly perturbed reaction-diffusion equation a residual error estimator is derived as well. The numerical examples demonstrate that reliable and efficient error estimation is possible on anisotropic meshes. The analysis relies on two important tools, namely anisotropic interpolation error estimates and the so-called bubble functions. Moreover, the correspondence between an anisotropic mesh and an anisotropic solution plays a vital role. AMS(MOS): 65N30, 65N15, 35B25
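The anisotropic estimators themselves are beyond a short snippet, but the residual estimator family they extend is easy to state; a minimal one-dimensional sketch for -u'' = f with piecewise-linear elements (isotropic and illustrative only, not the anisotropic estimators of the paper):

```python
import numpy as np

def residual_indicators_1d(x, u, f):
    """Residual a posteriori error indicators for -u'' = f with
    piecewise-linear finite elements on the (sorted) nodes x.

    Returns one indicator per element, from
      eta_T^2 = h_T^2 ||f||_{L2(T)}^2 + flux-jump contributions,
    the 1D analogue of the residual estimators discussed above.
    """
    h = np.diff(x)                   # element lengths h_T
    du = np.diff(u) / h              # piecewise-constant derivative u_h'
    xm = 0.5 * (x[:-1] + x[1:])      # element midpoints
    # interior residual: u_h'' = 0 inside elements, so the residual is f;
    # ||f||_{L2(T)}^2 is approximated by the midpoint rule
    eta2 = h**2 * (f(xm)**2 * h)
    # jumps of u_h' at interior nodes, shared by the adjacent elements
    jump2 = np.diff(du)**2
    eta2[:-1] += 0.5 * h[:-1] * jump2
    eta2[1:] += 0.5 * h[1:] * jump2
    return np.sqrt(eta2)

# Example: f = 1 on a graded mesh with a rough "solution"
x = np.array([0.0, 0.2, 0.5, 1.0])
u = np.array([0.0, 0.3, 0.4, 0.0])
print(residual_indicators_1d(x, u, lambda s: np.ones_like(s)))
```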
185

A posteriori error estimation for non-linear eigenvalue problems for differential operators of second order with focus on 3D vertex singularities

Pester, Cornelia 21 April 2006 (has links)
This thesis is concerned with the finite element analysis and the a posteriori error estimation for eigenvalue problems for general operator pencils on two-dimensional manifolds. A specific application of the presented theory is the computation of corner singularities. Engineers use the knowledge of the so-called singularity exponents to predict the onset and the propagation of cracks. All results of this thesis are explained for two model problems, the Laplace and the linear elasticity problem, and verified by numerous numerical results.
186

Adaptivity in anisotropic finite element calculations

Grosman, Sergey 21 April 2006 (has links)
When the finite element method is used to solve boundary value problems, the corresponding finite element mesh is appropriate if it reflects the behavior of the true solution. A posteriori error estimators are suited to constructing adequate meshes: they are useful for measuring the quality of an approximate solution and for designing adaptive solution algorithms. Singularly perturbed problems in general yield solutions with anisotropic features, e.g. strong boundary or interior layers. For such problems it is useful to use anisotropic meshes in order to reach the maximal order of convergence. Moreover, the quality of the numerical solution rests on the robustness of the a posteriori error estimation with respect to both the anisotropy of the mesh and the perturbation parameters. There exist different possibilities to measure the a posteriori error in the energy norm for the singularly perturbed reaction-diffusion equation. One of them is the equilibrated residual method, which is known to be robust as long as one solves the auxiliary local Neumann problems exactly on each element. We provide a basis for an approximate solution of this auxiliary problem and show that the approximation does not affect the quality of the error estimation. Another approach that we develop for the a posteriori error estimation is the hierarchical error estimator. The robustness proof for this estimator proceeds in several stages, including the strengthened Cauchy-Schwarz inequality and the error reduction property for the chosen space enrichment. In the rest of the work we deal with adaptive algorithms; an outline of such a cycle is sketched below. We provide an overview of the existing methods for isotropic meshes and then generalize the ideas to the anisotropic case. For the resulting algorithm, error reduction estimates are proven for the Poisson equation and for the singularly perturbed reaction-diffusion equation, and convergence for the Poisson equation is also shown. Numerical experiments for the equilibrated residual method, for the hierarchical error estimator, and for the adaptive algorithm confirm the theory. The adaptive algorithm shows its potential by creating an anisotropic mesh for a problem with a boundary layer, starting from a very coarse isotropic mesh.
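A generic adaptive cycle of the kind discussed above, with Dörfler (bulk) marking; `solve`, `estimate`, and `refine` are hypothetical placeholder callables standing in for a FEM solver, an error estimator, and a mesh refiner, so this is a structural sketch rather than the thesis's algorithm:

```python
import numpy as np

def dorfler_mark(eta, theta=0.5):
    """Doerfler (bulk) marking: smallest element set M with
    sum_{T in M} eta_T^2 >= theta * sum_T eta_T^2."""
    order = np.argsort(eta)[::-1]          # indicators, largest first
    csum = np.cumsum(eta[order] ** 2)
    k = np.searchsorted(csum, theta * csum[-1]) + 1
    return order[:k]

def adaptive_loop(mesh, solve, estimate, refine, tol=1e-3, max_iter=30):
    """Generic solve -> estimate -> mark -> refine cycle."""
    for _ in range(max_iter):
        u = solve(mesh)
        eta = estimate(mesh, u)            # one indicator per element
        if np.sqrt(np.sum(eta ** 2)) < tol:
            break
        mesh = refine(mesh, dorfler_mark(eta))
    return mesh, u
```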
187

K efektivním numerickým výpočtům proudění nenewtonských tekutin / Towards efficient numerical computation of flows of non-Newtonian fluids

Blechta, Jan January 2019 (has links)
In the first part of this thesis we are concerned with the constitutive theory for incompressible fluids characterized by a continuous monotone relation between the velocity gradient and the Cauchy stress. We investigate, in particular, a class of activated fluids that behave as the Euler fluid prior to activation, and as the Navier-Stokes or power-law fluid once the activation takes place. We develop a large-data existence analysis for both steady and unsteady three-dimensional flows of such fluids subject either to the no-slip boundary condition or to a range of slip-type boundary conditions, including free slip, Navier's slip, and stick-slip. In the second part we show that the W^{-1,q} norm is localizable provided that the functional in question vanishes on locally supported functions which constitute a partition of unity. This represents a key tool for establishing local a posteriori efficiency for partial differential equations in divergence form with residuals in W^{-1,q}. In the third part we provide a novel analysis for the pressure convection-diffusion (PCD) preconditioner. We first develop a theory for the preconditioner considered as an operator in infinite-dimensional spaces. We then provide a methodology for constructing discrete PCD operators for a broad class of pressure discretizations. The...
188

Algoritmo de reconstrucción analítico para el escáner basado en cristales monolíticos MINDView / Analytical reconstruction algorithm for the monolithic-crystal-based MINDView scanner

Sánchez Góez, Sebastián 17 January 2021 (has links)
Positron Emission Tomography (PET) is a medical imaging technique in which an image is generated from the detection of gamma rays in coincidence. These rays are produced within a patient injected with a positron-emitting radiotracer, whose positrons annihilate with electrons in the surrounding medium. The event acquisition process is centered on the scanner detector, which is in turn composed of a scintillation crystal that transforms the incident gamma rays into optical photons within the crystal. The purpose is then to determine the impact coordinates within the scintillation crystal with the greatest possible precision, so that an image can be reconstructed from these points. Historically, detectors based on pixelated crystals have been the standard choice for PET scanner manufacture. This thesis evaluates the impact on the spatial resolution of the MINDView PET scanner, developed within the Seventh Framework Programme of the European Union (No. 603002), whose detectors are based on monolithic crystals. The use of monolithic crystals facilitates the determination of the depth of interaction (DOI) of the incident gamma rays, increases the precision of the determined impact coordinates, and reduces the parallax error induced in pixelated crystals by the difficulty of determining the DOI. In this thesis we achieved two main goals related to the measurement of the spatial resolution of the MINDView PET scanner: the adaptation of an STIR algorithm for 3D filtered backprojection (FBP3DRP, Filtered BackProjection 3D Reprojected) to a scanner based on monolithic crystals, and the implementation of a backprojection-then-filtered (BPF) algorithm. Regarding the FBP adaptation, we achieved resolutions ranging over the intervals [2 mm, 3.4 mm], [2.3 mm, 3.3 mm] and [2.2 mm, 2.3 mm] in the radial, tangential and axial directions, respectively, for the first MINDView prototype dedicated to brain imaging. In addition, an acquisition of a Derenzo phantom was performed to measure the spatial resolution obtained with three reconstruction algorithms: the BPF-type algorithm, the FBP3DRP algorithm, and an implementation of the list-mode ordered-subsets algorithm (LMOS). With the BPF-type algorithm, peak-to-valley values of 2.4 were obtained along the 1.6 mm diameter rods of the phantom, in contrast to values of 1.34 and 1.44 for the FBP3DRP and LMOS algorithms, respectively. This means that the BPF-type algorithm improves the resolution to an average value of 1.6 mm. / Sánchez Góez, S. (2020). Algoritmo de reconstrucción analítico para el escáner basado en cristales monolíticos MINDView [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/159259
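Peak-to-valley figures like those quoted above are computed from intensity profiles drawn through a row of reconstructed rods; a minimal sketch of that measurement (illustrative, not the thesis's analysis code):

```python
import numpy as np

def peak_to_valley(profile):
    """Peak-to-valley ratio of a 1D intensity profile through phantom
    rods: mean of the local maxima over the mean of the minima lying
    between consecutive maxima."""
    p = np.asarray(profile, dtype=float)
    interior = np.arange(1, len(p) - 1)
    peaks = interior[(p[1:-1] > p[:-2]) & (p[1:-1] >= p[2:])]
    if len(peaks) < 2:
        raise ValueError("need at least two rods in the profile")
    valleys = [peaks[i] + np.argmin(p[peaks[i]:peaks[i + 1]])
               for i in range(len(peaks) - 1)]
    return p[peaks].mean() / p[valleys].mean()

# Example: two Gaussian "rods" on a zero background
x = np.linspace(0, 10, 200)
profile = np.exp(-(x - 3.5) ** 2) + np.exp(-(x - 6.5) ** 2)
print(peak_to_valley(profile))
```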
189

Stabilised finite element approximation for degenerate convex minimisation problems

Boiger, Wolfgang Josef 19 August 2013 (has links)
Infimising sequences of nonconvex variational problems often do not converge strongly in Sobolev spaces due to fine oscillations. These oscillations are physically meaningful; finite element approximations, however, fail to resolve them in general. Relaxation methods replace the nonconvex energy with its (semi)convex hull. This leads to a macroscopic model which is degenerate in the sense that it is not strictly convex and possibly admits multiple minimisers. The lack of control on the primal variable leads to difficulties in the a priori and a posteriori finite element error analysis, such as the reliability-efficiency gap and the absence of strong convergence. To overcome these difficulties, stabilisation techniques add a discrete positive definite term to the relaxed energy. Bartels et al. (IFB, 2004) apply stabilisation to two-dimensional problems and thereby prove strong convergence of gradients. This result is restricted to smooth solutions and quasi-uniform meshes, which prohibits adaptive mesh refinement. This thesis concerns a modified stabilisation term and proves convergence of the stress and, for smooth solutions, strong convergence of gradients, even on unstructured meshes. Furthermore, the thesis derives the so-called flux error estimator and proves its reliability and efficiency. For interface problems with piecewise smooth solutions, a refined version of this error estimator is developed, which provides control of the error of the primal variable and its gradient and thus yields strong convergence of gradients. The refined error estimator converges faster than the flux error estimator and therefore narrows the reliability-efficiency gap. Numerical experiments with five benchmark examples from computational microstructure and topology optimisation complement and confirm the theoretical results.
190

Développement d'un alphabet structural intégrant la flexibilité des structures protéiques / Development of a structural alphabet integrating the flexibility of protein structures

Sekhi, Ikram 29 January 2018 (has links)
The purpose of this PhD is to provide a Structural Alphabet (SA) for a more accurate characterization of protein three-dimensional (3D) structures, integrating the growing amount of protein 3D structure information available in the Protein Data Bank (PDB). The SA also takes into account the logic behind the sequence of structural fragments by using a hidden Markov model (HMM). We describe a new structural alphabet, improving the existing HMM-SA27 structural alphabet, called SAFlex (Structural Alphabet Flexibility), designed to handle the uncertainty of the data (missing data in PDB files) and the redundancy of protein structures. The new SAFlex structural alphabet thus offers a rigorous and robust encoding model. This encoding accounts for uncertainty by providing three encoding options: the maximum a posteriori (MAP), the marginal posterior distribution (POST), and the effective number of letters at each given position (NEFF). SAFlex also builds a consensus encoding from different replicates (multiple chains, monomers and several homomers) of a single protein, which allows the detection of structural variability between chains. These methodological advances and the resulting SAFlex alphabet are the main contributions of this PhD. We also present a new PDB parser (SAFlex-PDB) and demonstrate its interest in both qualitative (detection of various errors) and quantitative (speed and parallelization) terms by comparing it with two well-known parsers in the field (Biopython and BioJava). The SAFlex structural alphabet is made available to the scientific community through a website, which represents the concrete contribution of this PhD, while the SAFlex-PDB parser is an important contribution to the proper functioning of that website. For a protein tertiary structure in a given PDB file, SAFlex can be used in various ways: to encode the 3D structure and to identify and predict missing data; it is, to date, the only alphabet able to encode and predict missing data in a 3D protein structure. Finally, these improvements are promising for exploring the increasing redundancy of protein data and for obtaining a useful quantification of protein flexibility.
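In generic HMM terms, the three encoding options correspond to standard decoding quantities; a minimal sketch, assuming the structural letters are the hidden states and that per-position posterior marginals have already been computed by the forward-backward algorithm (illustrative only, not the SAFlex implementation):

```python
import numpy as np

def viterbi(log_pi, log_A, log_emis):
    """MAP encoding: the single most probable letter sequence.
    log_pi: (S,) initial log-probs; log_A: (S, S) transition log-probs;
    log_emis: (T, S) per-position emission log-likelihoods."""
    T, S = log_emis.shape
    delta = log_pi + log_emis[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A        # scores[i, j]: i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emis[t]
    path = np.empty(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 1, 0, -1):              # backtrack the best path
        path[t - 1] = back[t, path[t]]
    return path

def post_and_neff(gamma):
    """POST: per-position argmax of the posterior marginals gamma (T, S).
    NEFF: effective number of letters, exp of the posterior entropy;
    NEFF near 1 means a confident letter, larger values mean ambiguity."""
    post = gamma.argmax(axis=1)
    ent = -(gamma * np.log(np.clip(gamma, 1e-12, None))).sum(axis=1)
    return post, np.exp(ent)
```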
