21

Izbor parametara kod gradijentnih metoda za probleme optimizacije bez ograničenja / Choice of parameters in gradient methods for the unconstrained optimization problems

Đorđević Snežana 22 May 2015 (has links)
The problem under consideration is unconstrained optimization. Many different methods exist for solving unconstrained optimization problems; the investigation here is motivated by the need for methods that converge quickly. The main goal is the systematization of some known results, together with a theoretical and numerical analysis of the possibilities of introducing parameters into gradient methods.

First, the problem of minimizing a convex function of several variables is considered. This problem is solved here without computing the Hessian matrix, which is especially relevant for large-scale systems, as well as for optimization problems in which neither the exact value of the objective function nor the exact value of its gradient is available. Part of the motivation also comes from problems whose objective function is the result of simulations.

Numerical results, presented in Chapter 6, show that introducing a certain parameter can be useful, i.e., it accelerates the given optimization method. A new hybrid conjugate gradient method is also presented, in which the conjugate gradient parameter is a convex combination of two known conjugate gradient parameters.

The first chapter describes the motivation and the basic notions needed to follow the remaining chapters. The second chapter surveys some first-order and second-order gradient methods. The fourth chapter surveys basic notions and results related to conjugate gradient methods. These chapters review known results, while the original contributions appear in the third, fifth, and sixth chapters.

The third chapter describes a modification of a particular method that uses a multiplicative parameter chosen at random; linear convergence of the new method so obtained is proved. The fifth chapter contains original results on conjugate gradient methods: the new hybrid conjugate gradient method, a convex combination of two known conjugate gradient methods, is presented there. The sixth chapter gives the results of numerical experiments, carried out on a set of test functions, for the methods from the third and fifth chapters. All of the algorithms considered were implemented in Mathematica; the comparison criterion is CPU time.
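To make the convex-combination construction concrete, here is a hedged sketch; the two parameters combined below, Fletcher-Reeves (FR) and Polak-Ribière-Polyak (PRP), are illustrative assumptions, since the abstract does not name the pair used in the thesis. With gradient $g_k$ and search direction $d_k$,

\[
d_k = -g_k + \beta_k^{\mathrm{hyb}} d_{k-1}, \qquad
\beta_k^{\mathrm{hyb}} = \theta_k\,\beta_k^{\mathrm{FR}} + (1-\theta_k)\,\beta_k^{\mathrm{PRP}}, \qquad \theta_k \in [0,1],
\]
\[
\beta_k^{\mathrm{FR}} = \frac{\lVert g_k\rVert^2}{\lVert g_{k-1}\rVert^2}, \qquad
\beta_k^{\mathrm{PRP}} = \frac{g_k^{\top}(g_k - g_{k-1})}{\lVert g_{k-1}\rVert^2}.
\]

The choice of $\theta_k$ (for example, enforcing a conjugacy or descent condition on $d_k$) is what distinguishes one hybrid method of this type from another.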
22

Identification and validation of putative therapeutic and diagnostic antimicrobial peptides against HIV: An in silico approach

January 2013 (has links)
Magister Scientiae (Medical Bioscience) - MSc(MBS) / Background: Despite scientific research efforts on HIV therapies and on reducing the rate of HIV infection, AIDS remains one of the major causes of death in the world, mostly in sub-Saharan Africa. To date, neither a cure nor an HIV vaccine has been found, and the disease can only be managed with Highly Active Antiretroviral Therapy (HAART) if detected early. The need for effective early diagnostics and non-toxic treatment has created the necessity for discovering additional HIV diagnostic methods and treatment regimens to lower mortality rates. Antimicrobial peptides (AMPs) are components of the first line of defense of prokaryotes and eukaryotes and have proven to be promising therapeutic agents against HIV. Methods: Using computational biology, this work proposes profile search methods combined with structural modeling to identify putative AMPs with diagnostic and anti-HIV activity. First, experimentally validated anti-HIV AMPs were retrieved from various publicly available AMP databases (APD, CAMP, Bactibase, and UniProtKB) and classified into super-families. Hidden Markov Model (HMMER) and Gap Local Alignment of Motifs (GLAM2) profiles were built for each super-family of anti-HIV AMPs. Putative anti-HIV AMPs were identified by scanning genome sequence databases with the trained models and were ranked by E-value. The 3-D structures of the 10 highest-ranked peptides were predicted with I-TASSER. These peptides were docked against various HIV proteins using PatchDock, and the putative AMPs showing the highest affinity and the correct orientation toward the HIV-1 proteins gp120 and p24 were selected for future work to establish their function in HIV therapy and diagnosis. Results: The in silico analysis showed that the models constructed with the HMMER algorithm performed better than those built with the GLAM2 algorithm; moreover, the former tool has a better statistical and probabilistic grounding than the latter. Thus only the HMMER scanning results were considered for further study. Out of 1059 species scanned with the HMMER models, 30 putative anti-HIV AMPs were identified from the genome scans with the family-specific profile models after eliminating duplicate peptides. Docking analysis of the putative AMPs against HIV proteins showed that, among the 10 top-ranked anti-HIV AMPs, molecules 1, 3, 8, and 10 bind firmly in the gp120 binding pocket at the V1V2 domain, the point of interaction between gp120 and T cells, with the 1st- and 3rd-ranked anti-HIV AMPs having the highest binding affinities. All 10 putative anti-HIV AMPs, however, bind to the N-terminal domain of p24, with a large interaction surface, rather than to the C-terminal domain. Conclusion: The in silico approach made it possible to construct high-performing computational models, which enabled the identification of putative anti-HIV peptides from genome sequence scans. In silico validation of these putative peptides through docking studies has shown that some of these AMPs may be useful in HIV/AIDS therapeutics and diagnostics. Molecular validation of these findings will be the way forward for developing an early diagnostic tool and, as a consequence, initiating early treatment. Blocking the V1V2 domain could prevent invasion of the immune system and guide the design of a vaccine with broad neutralizing activity against this domain.
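As a hedged illustration of the profile-search-and-rank step, here is a minimal Python sketch assuming HMMER's command-line tools (hmmbuild, hmmsearch) are installed; the file names, the super-family chosen, and the cutoff of 10 hits are placeholders, not the thesis's actual inputs:

    import subprocess

    # Build a profile HMM from a multiple sequence alignment of one
    # anti-HIV AMP super-family (file names are placeholders).
    subprocess.run(["hmmbuild", "defensin.hmm", "defensin_family.sto"], check=True)

    # Scan a sequence database with the trained profile; --tblout writes
    # one hit per line in a parseable table.
    subprocess.run(
        ["hmmsearch", "--tblout", "hits.tbl", "defensin.hmm", "genome_proteins.fasta"],
        check=True,
    )

    # Rank hits by full-sequence E-value (column 5 of the table; lower is better).
    hits = []
    with open("hits.tbl") as f:
        for line in f:
            if line.startswith("#"):
                continue
            fields = line.split()
            hits.append((fields[0], float(fields[4])))

    for name, evalue in sorted(hits, key=lambda h: h[1])[:10]:
        print(f"{name}\t{evalue:.2e}")

In the pipeline described above, the sequences surviving this ranking would then go on to structure prediction and docking.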
23

k-ary search on modern processors

Schlegel, Benjamin, Gemulla, Rainer, Lehner, Wolfgang 19 May 2022 (has links)
This paper presents novel tree-based search algorithms that exploit the SIMD instructions found in virtually all modern processors. The algorithms are a natural extension of binary search: while binary search performs one comparison at each iteration, thereby cutting the search space into two halves, our algorithms perform k comparisons at a time and thus cut the search space into k pieces. On traditional processors, this so-called k-ary search procedure is not beneficial because the cost increase per iteration offsets the cost reduction due to the reduced number of iterations. On modern processors, however, multiple scalar operations can be executed simultaneously, which makes k-ary search attractive. In this paper, we provide two different search algorithms that differ in terms of efficiency and memory access patterns. Both algorithms are first described in a platform-independent way and then evaluated on various state-of-the-art processors. Our experiments suggest that k-ary search provides significant performance improvements (a factor of two or more) on most platforms.
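A scalar sketch of the partitioning idea, in Python for readability; the paper's algorithms perform the k - 1 comparisons with SIMD instructions and use tuned memory layouts, neither of which this sketch attempts. Binary search is the special case k = 2.

    def k_ary_search(arr, key, k=4):
        """Search a sorted list by cutting the search window into k pieces
        per iteration instead of binary search's two."""
        lo, hi = 0, len(arr)  # current window is arr[lo:hi]
        while lo < hi:
            # k - 1 separators cut [lo, hi) into k roughly equal pieces.
            seps = [lo + (hi - lo) * i // k for i in range(1, k)]
            # In the SIMD version, these k - 1 comparisons run in parallel.
            for i, s in enumerate(seps):
                if key == arr[s]:
                    return s
                if key < arr[s]:
                    # The key lies in the piece left of separator i.
                    hi = s
                    if i > 0:
                        lo = seps[i - 1] + 1
                    break
            else:
                lo = seps[-1] + 1  # The key lies in the rightmost piece.
        return -1

For example, k_ary_search([1, 3, 5, 7, 9, 11], 7) returns index 3 after a single iteration with k = 4, where binary search would need two.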
24

迴歸分析與類神經網路預測能力之比較 / A comparison on the prediction performance of regression analysis and artificial neural networks

楊雅媛 Unknown Date (has links)
Both regression analysis and artificial neural networks are major techniques for prediction. In this research, we randomly generated data with different characteristics, under both linear and nonlinear regression models, in order to explore thoroughly how data characteristics affect the predictive performance of regression analysis and artificial neural networks. The characteristics considered include normal distributions, skewed distributions, unequal variances, the Michaelis-Menten relationship model, and the exponential regression model. In addition, we used evolution strategies (ES), one of the local search methods, to train the artificial neural networks and thereby improve their predictive performance; we call this type of network ESNN. Simulation results indicate that ESNN can indeed replace the back-propagation neural network (BPNN), which is commonly compared with regression analysis, as a new choice of artificial neural network. For data of different characteristics we recommend the following: if the original data fit a normal linear regression model, users may consider regression methods for prediction. If graphical analysis or formal tests show that the assumption of equal error variances is violated, weighted least squares can be used when suitable weights can be found; when the weights are hard to determine, ESNN should be used instead. If the data follow a skewed Weibull distribution, ESNN or Weibull regression may be considered. If the data fit a nonlinear regression model, ESNN is the choice for prediction. Keywords: regression analysis, artificial neural networks, local search methods, evolution strategies neural network (ESNN), back-propagation neural network (BPNN)
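As a hedged illustration of training a network with evolution strategies rather than back-propagation, here is a minimal Python sketch of a (mu, lambda) ES on a one-hidden-layer network; the architecture, ES variant, and all parameters are illustrative assumptions, not the thesis's actual settings.

    import numpy as np

    rng = np.random.default_rng(0)
    HIDDEN = 5  # hidden-layer size (an arbitrary choice for this sketch)

    def predict(w, X):
        # w packs the input-to-hidden and hidden-to-output weights.
        n_in = X.shape[1]
        W1 = w[: n_in * HIDDEN].reshape(n_in, HIDDEN)
        W2 = w[n_in * HIDDEN :].reshape(HIDDEN, 1)
        return (np.tanh(X @ W1) @ W2).ravel()

    def mse(w, X, y):
        return float(np.mean((predict(w, X) - y) ** 2))

    def es_train(X, y, mu=5, lam=20, sigma=0.1, generations=200):
        """(mu, lambda) evolution strategy: each generation, every parent
        produces lam/mu offspring by Gaussian mutation, and the mu best
        offspring (lowest training MSE) become the next parents."""
        dim = X.shape[1] * HIDDEN + HIDDEN
        parents = [rng.normal(0.0, 1.0, dim) for _ in range(mu)]
        for _ in range(generations):
            offspring = [p + rng.normal(0.0, sigma, dim)
                         for p in parents
                         for _ in range(lam // mu)]
            offspring.sort(key=lambda w: mse(w, X, y))
            parents = offspring[:mu]
        return parents[0]

    # Toy usage: fit y = x1 - 2*x2 plus noise.
    X = rng.normal(size=(100, 2))
    y = X[:, 0] - 2.0 * X[:, 1] + rng.normal(0.0, 0.05, 100)
    w = es_train(X, y)
    print("training MSE:", mse(w, X, y))

Because selection uses only the fitness (training error) and not its gradient, this scheme needs no back-propagation pass, which is the sense in which ES serves as an alternative training method here.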
