381 |
Inferring Network Status from Partial Observations. Rangudu, Venkata Pavan Kumar, 09 February 2017
In many network applications, such as the Internet and infrastructure networks, nodes fail or get congested dynamically, and tracking this information about all the nodes in a network where dynamical processes are taking place is a fundamental problem. In this work, we study the problem of inferring the complete set of failed nodes when only a sample of the node failures is known; we refer to this problem as NetStateInf. We consider the setting in which node failures in the network are correlated, which has been studied in the case of many infrastructure networks. We formalize the NetStateInf problem using the Minimum Description Length (MDL) principle, show that finding solutions that minimize the MDL cost is in general hard, and develop efficient algorithms with rigorous performance guarantees for finding near-optimal MDL cost solutions. We evaluate our methods on both synthetic and real-world datasets, including one from WAZE, a crowd-sourced road navigation tool that collects and presents traffic incident reports. We found that the proposed greedy algorithm is able to recover, on average, 80% of the failed nodes in a network from a partial sample of input failures, sampled from the true set of failures at a predefined rate. Furthermore, we prove that this algorithm finds a solution whose MDL cost is within an additive log(n) of the optimal. / Master of Science / In many real-world networks, such as Internet and transportation networks, dynamical processes take place, and due to their activity some elements of these networks may fail at random; service node failures in the Internet and traffic congestion in road networks are two such scenarios. Identifying the complete state of such networks is a fundamental problem. In this work, we study the problem of identifying unknown node failures in a network based on partial observations; we refer to this problem as NetStateInf. Similar to previous studies in this area, we assume a setting where node failures are correlated. We approach this problem using the Minimum Description Length (MDL) principle, which states that the information learned from given data can be maximized by compressing it, i.e., by identifying the maximum number of patterns in the data. Using these concepts we develop a mathematical formulation of the NetStateInf problem and propose efficient algorithms with rigorous performance guarantees for finding the set of failed nodes that best explains the observed failures. We evaluated our algorithms against both synthetic data (artificial networks with failures generated from a predefined mathematical model) and real-world data, for example traffic alert data collected by WAZE, a crowd-sourced navigation tool, for the Boston road network. Using this approach we are able to recover around 80% of the failed nodes in the network from the given partial failure data. Furthermore, we proved that our algorithm finds a solution whose cost differs from that of the optimal solution by at most log(n), where the cost of a solution is the MDL measure of how well it explains the observations.
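The greedy procedure can be sketched compactly. The fragment below is a minimal illustration only, not the thesis's algorithm: the specific MDL cost (one log2(n) anchor per connected failure cluster, plus one membership and one observation bit per hypothesized node) and all function names are assumptions chosen to show how correlated failures compress well.

```python
import math
from collections import deque

def components(adj, nodes):
    """Count connected components of the subgraph induced by `nodes`."""
    nodes, seen, comps = set(nodes), set(), 0
    for start in nodes:
        if start in seen:
            continue
        comps += 1
        queue = deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v in nodes and v not in seen:
                    seen.add(v)
                    queue.append(v)
    return comps

def mdl_cost(adj, n, hypothesis, observed):
    """Toy MDL cost: encode the failure set, then the observations given it."""
    model = components(adj, hypothesis) * math.log2(n) + len(hypothesis)
    unexplained = sum(1 for v in observed if v not in hypothesis)
    return model + len(hypothesis) + unexplained * math.log2(n)

def greedy_infer(adj, n, observed):
    """Grow the hypothesis from the observed failures while the cost drops."""
    S = set(observed)
    while True:
        frontier = {v for u in S for v in adj[u]} - S
        best = min(frontier, key=lambda v: mdl_cost(adj, n, S | {v}, observed),
                   default=None)
        if best is None or mdl_cost(adj, n, S | {best}, observed) >= mdl_cost(adj, n, S, observed):
            return S
        S.add(best)

# path graph 0-1-2-3-4-5, failures observed at nodes 0 and 2
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(greedy_infer(adj, n=6, observed={0, 2}))  # {0, 1, 2}: node 1 bridges the clusters
```

Because merging two observed failure clusters saves a full log2(n) anchor at the price of a couple of bits, the greedy step naturally fills in unobserved failures that lie between observed ones.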
|
382 |
Analysis of Hierarchical Structure of Seismic Activity: Bayesian Approach to Forecasting Earthquakes / 地震活動の階層構造の解析:地震予測に向けたベイズ的アプローチ. Tanaka, Hiroki, 25 March 2024
Kyoto University / Doctoral degree (new system) / Doctor of Informatics / Kō No. 25438 / Jōhaku No. 876 / Shinsei||Jō||147 (University Library) / Department of Applied Mathematics and Physics, Graduate School of Informatics, Kyoto University / (Chief examiner) Professor 梅野 健, Professor 辻本 諭, Professor 田口 智清 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
|
383 |
XML-Based Agent Scripts and Inference Mechanisms. Sun, Guili, 08 1900
Natural language understanding has been a persistent challenge to researchers in various computer science fields, in applications ranging from user support systems to entertainment and online teaching. A long-term goal of the Artificial Intelligence field is to implement mechanisms that enable computers to emulate human dialogue. The ALICEbots recently developed by the A.L.I.C.E. Foundation are virtual agents that use AIML scripts - a subset of XML - as the underlying pattern database for question answering. Their goal is to enable pattern-based, stimulus-response knowledge content to be served, received and processed over the Web, or offline, in a manner similar to HTML and XML. In this thesis, we describe a system that converts AIML scripts to Prolog clauses and reuses them as part of a knowledge processor. The inference mechanism developed in this thesis is able to successfully match an input pattern against our clause database even if words are missing. We also emulate the pattern deduction algorithm of the original logic deduction mechanism. Our rules, compatible with Semantic Web standards, bring structure to the meaningful content of Web pages and support interactive content retrieval using natural language.
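As an illustration of the conversion step, the sketch below turns an AIML category into a Prolog-style fact and matches input tolerantly. The pattern/2 encoding and the missing-word threshold are simplified assumptions for illustration, not the representation used in the thesis.

```python
import xml.etree.ElementTree as ET

AIML = """
<aiml>
  <category>
    <pattern>WHAT IS YOUR NAME</pattern>
    <template>My name is Alice.</template>
  </category>
</aiml>
"""

def to_prolog(aiml_text):
    """Emit one pattern/2 fact per AIML category (simplified encoding)."""
    clauses = []
    for cat in ET.fromstring(aiml_text).iter("category"):
        words = cat.findtext("pattern").lower().split()
        answer = cat.findtext("template").strip()
        atoms = "[" + ", ".join(words) + "]"
        clauses.append(f"pattern({atoms}, '{answer}').")
    return clauses

def match(pattern_words, input_words, max_missing=1):
    """Accept the input even if up to `max_missing` pattern words are absent,
    emulating the missing-word tolerance described above."""
    missing = [w for w in pattern_words if w not in input_words]
    return len(missing) <= max_missing

print(to_prolog(AIML))               # ["pattern([what, is, your, name], 'My name is Alice.')."]
print(match("what is your name".split(),
            "what is name".split())) # True: one word missing
```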
|
384 |
Assessing the use of a semisubmersible oil platform as a motion-based sea wave sensor. / Avaliação do uso de uma plataforma de óleo e gás do tipo semi-submersível como um sensor de onda marítimo baseado em movimento. Soler, Jordi Mas, 11 December 2018
This thesis assesses the use of the measured motions of a semisubmersible oil platform as a basis for estimating on-site wave spectra. The inference method follows the wave buoy analogy, which aims at solving the linear inverse problem: estimate the sea state, given the measured motions and the transfer function of the platform. Directional wave inference from records of vessel motions is a technique whose application has grown significantly over recent years. Indeed, its applications to ships with forward speed and to ship-shaped moored platforms (such as FPSOs) have provided good results. However, little research has been done regarding the use of semisubmersible platforms as wave sensors, because these platforms are designed to present no significant responses when excited by waves. Notwithstanding this, semisubmersible platforms exhibit small but measurable motions. Moreover, compared with ship-shaped motion-based wave sensors, the responses of semisubmersibles are in better agreement with the response characteristics estimated by linear hydrodynamic models. In addition, the eminently linear character of the responses often persists even in severe wave conditions. This feature makes semisubmersible platforms a promising wave sensor even for extreme sea states, conditions in which other types of sensors (e.g. buoys, radars) may face difficulties. Throughout the text, the main results of this work are presented and discussed. These results are mainly based on a dedicated experimental campaign, carried out with a scale model of the Asgard B platform, a semisubmersible located in the Asgard field offshore Norway. The sea states tested during the experimental campaign were estimated by means of a motion-based Bayesian inference method, which has been under development for more than ten years at EPUSP. In order to allow the adoption of semisubmersible platforms as motion-based wave sensors, this thesis provides two significant improvements to the method. First, a method to estimate the linearized equivalent external viscous damping is provided; this analytical methodology reduces the uncertainty of the transfer function of the platform close to the motion resonances and, as a consequence, increases the accuracy of the inference approach. The second relevant contribution is the development of an alternative prior distribution, adopted to introduce the prior beliefs regarding the sea state into the Bayesian inference approach. It is shown that, although some aspects of this novel approach require further evaluation in future work, the prior distribution developed has the potential to improve the accuracy of wave estimates and, at the same time, significantly simplifies the calibration procedures followed by other state-of-the-art Bayesian wave inference methods. Summing up, the inference approach proposed in this work provides the basis for using each semisubmersible oil platform, the most common type of oil platform operated offshore Brazil, as a motion-based wave sensor, thus contributing to the possible broadening of the Brazilian oceanographic measurement network. / A presente tese investiga a adoção de plataformas de petróleo semi-submersíveis como base para inferência das condições de onda através do monitoramento de seus movimentos.
O problema em questão consiste na solução do problema inverso de comportamento em ondas; ou seja, uma vez observados os movimentos da unidade flutuante (e conhecidas suas funções de resposta de movimento), estimam-se as condições de ondas que os causaram. Este tipo de método já vem sendo empregado há anos para navios em curso e também para navios convertidos em plataformas de petróleo (os chamados FPSOs) com bons resultados. No entanto, o possível emprego de plataformas semi-submersíveis para o mesmo fim foi muito pouco explorado até o momento. Evidentemente, isso decorre da suposição de que, uma vez que essas estruturas são projetadas com o intuito primeiro de atenuar os movimentos decorrentes das ações de ondas, naturalmente elas não seriam bons sensores para esta finalidade. Os resultados apresentados nesta tese, todavia, contrariam tal suposição. De fato, as semi-submersíveis respondem de forma fraca às ondas, porém esta resposta é mensurável. Não apenas isso, mas, em comparação com os cascos de navios, esta resposta adere melhor às previsões dos modelos hidrodinâmicos lineares a partir dos quais as características da plataforma são estimadas. Ademais, o caráter eminentemente linear da resposta muitas vezes perdura inclusive para condições de ondas severas. Isto, por sua vez, torna as semi-submersíveis promissoras inclusive para a estimação de mares extremos, situações nas quais os outros tipos de sensores (boias, radares) enfrentam dificuldades. Nesta tese, a demonstração destes fatos é sustentada por um extenso conjunto de testes experimentais realizados em tanque de ondas com um modelo em escala reduzida de uma plataforma que hoje opera no Mar do Norte. Para tanto, foi empregado um método de inferência Bayesiana para estimação de ondas em navios que vem sendo desenvolvido na EPUSP há mais de dez anos. Para o estudo das semi-submersíveis o trabalho propõe duas melhorias importantes no método: A primeira consiste em um procedimento analítico para prever o amortecimento hidrodinâmico de origem viscosa dos movimentos observados do casco. Este procedimento permite reduzir as incertezas quanto à função de resposta em condições de ressonância dos movimentos com as ondas e, dessa forma, aumentar a confiabilidade do método. A segunda contribuição relevante é a proposição de uma alternativa para a chamada distribuição a priori originalmente empregada pelo método Bayesiano. Demonstra-se que, embora alguns aspectos desta nova metodologia ainda necessitem de uma avaliação adicional em trabalhos futuros, a nova distribuição tem grande potencial para melhorar a precisão das estimativas de ondas, além de simplificar de maneira significativa os procedimentos atuais de calibração do sistema de inferência. Em suma, o método de inferência aqui proposto abre caminho para tornar cada unidade flutuante de óleo e gás do tipo semi-submersível, um dos sistemas de produção mais frequentes nas costas brasileiras, um eventual ponto de monitoramento de ondas, contribuindo então para a possível ampliação de nossas bases de medição oceanográficas.
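In frequency-domain form, the forward model behind the wave buoy analogy is S_i(w) = |H_i(w)|^2 E(w) for each measured motion i, with E(w) the wave spectrum (long-crested seas assumed). The sketch below recovers E(w) by ridge-regularized least squares, a deliberately simplified stand-in for the Bayesian smoothness prior discussed in the thesis; all names and the regularization weight are illustrative.

```python
import numpy as np

def infer_wave_spectrum(response_spectra, rao_moduli, lam=1e-2):
    """response_spectra: (n_dof, n_freq) measured motion spectra
    rao_moduli:        (n_dof, n_freq) transfer-function moduli |H_i(w)|
    Returns a wave spectrum estimate per frequency bin."""
    n_dof, n_freq = response_spectra.shape
    A = rao_moduli ** 2                      # forward model per frequency bin
    # second-difference operator: penalizes rough spectra (smoothness prior)
    D = np.diff(np.eye(n_freq), 2, axis=0)
    # bins are decoupled in the forward model, the prior couples them:
    # minimize sum_i ||A_i * s - b_i||^2 + lam * ||D s||^2
    lhs = np.diag((A * A).sum(axis=0)) + lam * D.T @ D
    rhs = (A * response_spectra).sum(axis=0)
    s = np.linalg.solve(lhs, rhs)
    return np.clip(s, 0.0, None)             # spectra are nonnegative

# toy check: flat unit wave spectrum observed through two motion channels
w = np.linspace(0.2, 2.0, 50)
H = np.vstack([np.exp(-(w - 0.8) ** 2), 0.5 * np.ones_like(w)])
S_true = np.ones_like(w)
est = infer_wave_spectrum(H ** 2 * S_true, H)
print(np.allclose(est, S_true, atol=0.1))    # True
```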
|
385 |
Modelos para proporções com superdispersão e excesso de zeros - um procedimento Bayesiano. / Models for zero-inflated and overdispersed proportion data - a Bayesian approach. Borgatto, Adriano Ferreti, 24 June 2004
Neste trabalho, três modelos foram ajustados a um conjunto de dados obtido de um ensaio de controle biológico para Diatraea saccharalis, uma praga comum em plantações de cana-de-açúcar. Usando a distribuição binomial como modelo de probabilidade, um ajuste adequado não pode ser obtido, devido à superdispersão gerada pela variabilidade dos dados e pelo excesso de zeros. Nesse caso, o modelo binomial inflacionado de zeros (ZIB) superdisperso é mais flexível e eficiente para a modelagem desse tipo de dados. Entretanto, quando o interesse maior está sobre os valores positivos das proporções, pode-se utilizar o modelo binomial truncado superdisperso. Uma abordagem alternativa eficiente que foi utilizada para a modelagem desse tipo de dados foi a Bayesiana, sendo o ajuste do modelo realizado usando as técnicas de simulação Monte Carlo em Cadeias de Markov, através do algoritmo Metropolis-Hastings, e a seleção dos modelos foi feita usando o DIC (Deviance Information Criterion) e o fator de Bayes. Os modelos foram implementados no procedimento IML (Interactive Matrix Language) do programa SAS (Statistical Analysis System) e no programa WinBUGS, e a convergência das estimativas foi verificada através da análise gráfica dos valores gerados e usando os diagnósticos de Raftery & Lewis e de Heidelberger & Welch, implementados no módulo CODA do programa R. / In general the standard binomial regression models do not fit well to proportion data from biological control assays, mainly when there is an excess of zeros and overdispersion. In this work a zero-inflated binomial model is applied to a data set obtained from a biological control assay for Diatraea saccharalis, a common pest in sugar cane. A parasite (Trichogramma galloi) was put to parasitize 128 eggs of Anagasta kuehniella, an economically suitable alternative host (Parra, 1997), with a variable number of female parasites (2, 4, 8, ..., 128), each with 10 replicates in a completely randomized experiment. When interest is only in the positive proportion data, a model can be based on the truncated binomial distribution. A Bayesian procedure was formulated using a simulation technique (Metropolis-Hastings) for estimation of the posterior parameters of interest. The convergence of the generated Markov chain was monitored by visualization of the trace plots and using the Raftery & Lewis and Heidelberger & Welch diagnostics available in the CODA module of the software R.
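A random-walk Metropolis-Hastings sampler for the zero-inflated binomial likelihood fits in a few lines. The sketch below is illustrative only: the logit parameterization, the implicit flat prior on the logit scale, and the proposal step size are assumptions, not the specification used in the work.

```python
import math, random

def loglik(theta, data, m):
    """Zero-inflated binomial log-likelihood; theta = (logit pi, logit p)."""
    pi = 1 / (1 + math.exp(-theta[0]))   # zero-inflation probability
    p = 1 / (1 + math.exp(-theta[1]))    # per-egg parasitism probability
    ll = 0.0
    for y in data:
        if y == 0:
            # a zero arises either structurally or from the binomial itself
            ll += math.log(pi + (1 - pi) * (1 - p) ** m)
        else:
            ll += (math.log(1 - pi) + math.log(math.comb(m, y))
                   + y * math.log(p) + (m - y) * math.log(1 - p))
    return ll

def metropolis(data, m, iters=5000, step=0.15, seed=1):
    """Random-walk MH on the logit scale (flat prior assumed there)."""
    random.seed(seed)
    theta, chain = [0.0, 0.0], []
    cur = loglik(theta, data, m)
    for _ in range(iters):
        prop = [t + random.gauss(0, step) for t in theta]
        new = loglik(prop, data, m)
        if math.log(random.random()) < new - cur:
            theta, cur = prop, new
        chain.append(tuple(theta))
    return chain

# toy data: counts of parasitized eggs out of m = 128, with many zeros
data = [0, 0, 0, 5, 12, 0, 30, 0, 2, 18]
print(metropolis(data, m=128)[-1])
```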
|
386 |
Regressão binária nas abordagens clássica e Bayesiana / Binary regression in the classical and Bayesian approaches. Fernandes, Amélia Milene Correia, 16 December 2016
Este trabalho tem como objetivo estudar o modelo de regressão binária nas abordagens clássica e bayesiana utilizando as funções de ligação probito, logito, complemento log-log, transformação Box-Cox e probito-assimétrico. Na abordagem clássica apresentamos as suposições e o procedimento para ajustar o modelo de regressão e verificamos a precisão dos parâmetros estimados, construindo intervalos de confiança e testes de hipóteses. Enquanto que, na inferência bayesiana, fizemos um estudo comparativo utilizando duas metodologias. Na primeira metodologia consideramos densidades a priori não informativas e utilizamos o algoritmo Metropolis-Hastings para ajustar o modelo. Na segunda metodologia utilizamos variáveis auxiliares para obter a distribuição a posteriori conhecida, facilitando a implementação do algoritmo do Amostrador de Gibbs. No entanto, a introdução destas variáveis auxiliares pode gerar valores correlacionados, o que leva à necessidade de se utilizar o agrupamento das quantidades desconhecidas em blocos para reduzir a autocorrelação. Através do estudo de simulação mostramos que na inferência clássica podemos usar os critérios AIC e BIC para escolher o melhor modelo e avaliamos se o percentual de cobertura do intervalo de confiança assintótico está de acordo com o esperado na teoria assintótica. Na inferência bayesiana constatamos que o uso de variáveis auxiliares resulta em um algoritmo mais eficiente segundo os critérios: erro quadrático médio (EQM), erro percentual absoluto médio (MAPE) e erro percentual absoluto médio simétrico (SMAPE). Como ilustração apresentamos duas aplicações com dados reais. Na primeira, consideramos um conjunto de dados da variação do Ibovespa e a variação do valor diário do fechamento da cotação do dólar no período de 2013 a 2016. Na segunda aplicação, trabalhamos com um conjunto de dados educacionais (INEP-2013), focando nos estudos das variáveis que influenciam a aprovação do aluno. / The objective of this work is to study the binary regression model under the frequentist and Bayesian approaches using the probit, logit, complementary log-log, Box-Cox transformation and skew-probit link functions. In the classical approach we present the assumptions and procedures used in regression modeling, and we verify the accuracy of the estimated parameters by building confidence intervals and conducting hypothesis tests. In the Bayesian approach we carry out a comparative study using two methodologies. For the first methodology, we consider non-informative prior distributions and the Metropolis-Hastings algorithm to estimate the model. In the second methodology we use auxiliary variables to obtain a known posterior distribution, allowing the use of the Gibbs sampler algorithm. However, the introduction of these auxiliary variables can generate correlated values, which requires grouping the unknown quantities in blocks to reduce the autocorrelation. In the simulation study we use the AIC and BIC information criteria to select the most appropriate model, and we evaluate whether the coverage probabilities of the confidence intervals agree with those expected by the asymptotic theory. In the Bayesian approach we find that the inclusion of auxiliary variables in the model results in a more efficient algorithm according to the MSE, MAPE and SMAPE criteria. In this work we also present applications to two real datasets. The first dataset comprises the variation of the Ibovespa and the variation of the daily closing value of the American dollar from 2013 to 2016. The second is an educational dataset (INEP-2013), where we are interested in studying the factors that influence student approval.
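The auxiliary-variable construction for the probit link is the classic Albert and Chib scheme: latent normals z_i are drawn from truncated normals given y_i, after which beta has a conjugate normal full conditional. The sketch below assumes a flat prior on beta and omits the blocking strategy and the other link functions studied in the work.

```python
import numpy as np
from scipy.stats import truncnorm

def probit_gibbs(X, y, iters=2000, seed=0):
    """Albert-Chib Gibbs sampler for probit regression (flat prior on beta)."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    chol = np.linalg.cholesky(XtX_inv)
    beta = np.zeros(k)
    draws = np.empty((iters, k))
    for t in range(iters):
        mu = X @ beta
        # z_i | y_i, beta ~ N(mu_i, 1) truncated to (0, inf) if y_i = 1, else (-inf, 0)
        lo = np.where(y == 1, -mu, -np.inf)
        hi = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lo, hi, random_state=rng)
        # beta | z ~ N((X'X)^-1 X'z, (X'X)^-1)
        beta = XtX_inv @ X.T @ z + chol @ rng.standard_normal(k)
        draws[t] = beta
    return draws

# toy usage: data generated from a probit model with beta = (-0.5, 1.0)
rng = np.random.default_rng(42)
X = np.column_stack([np.ones(200), rng.standard_normal(200)])
y = (X @ np.array([-0.5, 1.0]) + rng.standard_normal(200) > 0).astype(int)
print(probit_gibbs(X, y)[1000:].mean(axis=0))  # posterior mean near (-0.5, 1.0)
```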
|
387 |
Understanding transcriptional regulation through computational analysis of single-cell transcriptomics. Lim, Chee Yee, January 2017
Gene expression is tightly regulated by complex transcriptional regulatory mechanisms to achieve specific expression patterns, which are essential to facilitate important biological processes such as embryonic development. Dysregulation of gene expression can lead to diseases such as cancers. A better understanding of transcriptional regulation will therefore not only advance the understanding of fundamental biological processes, but also provide mechanistic insights into diseases. The earlier versions of high-throughput expression profiling techniques were limited to measuring average gene expression across large pools of cells. In contrast, recent technological improvements have made it possible to perform expression profiling in single cells. Single-cell expression profiling is able to capture heterogeneity among single cells, which is not possible in conventional bulk expression profiling. In my PhD, I focus on developing new algorithms, as well as benchmarking and utilising existing algorithms, to study the transcriptomes of various biological systems using single-cell expression data. I have developed two different single-cell specific network inference algorithms, BTR and SPVAR, which are based on two different formalisms, the Boolean and autoregressive frameworks respectively. BTR was shown to be useful for improving existing Boolean models with single-cell expression data, while SPVAR was shown to be a conservative predictor of gene interactions using pseudotime-ordered single-cell expression data. In addition, I have obtained novel biological insights by analysing single-cell RNAseq data from the epiblast stem cell reprogramming and leukaemia systems. Three different driver genes, namely Esrrb, Klf2 and GY118F, were shown to drive reprogramming of epiblast stem cells via different reprogramming routes. As for the leukaemia system, FLT3-ITD and IDH1-R132H mutations were shown to interact with each other and potentially predispose some cells to developing acute myeloid leukaemia.
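To make the Boolean formalism concrete, the fragment below scores one candidate Boolean update rule against binarized, pseudotime-ordered expression states. Scoring by simple one-step agreement is an illustrative stand-in, not BTR's actual model-scoring function, and the repressor gene "X" in the demo is hypothetical.

```python
def score_rule(rule, states, target):
    """Fraction of consecutive state pairs where `rule` applied to state t
    predicts the target gene's value in state t+1.
    rule:   function mapping a dict {gene: 0/1} to 0/1
    states: list of dicts, ordered along pseudotime."""
    hits = sum(rule(states[t]) == states[t + 1][target]
               for t in range(len(states) - 1))
    return hits / (len(states) - 1)

# candidate rule: Esrrb is activated by Klf2 unless X represses it (toy logic)
rule = lambda s: int(s["Klf2"] and not s["X"])
states = [{"Klf2": 1, "X": 0, "Esrrb": 0},
          {"Klf2": 1, "X": 0, "Esrrb": 1},
          {"Klf2": 1, "X": 1, "Esrrb": 1},
          {"Klf2": 0, "X": 1, "Esrrb": 1}]
print(score_rule(rule, states, "Esrrb"))  # 2/3 of transitions agree
```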
|
388 |
Integrating remotely sensed data into forest resource inventories / The impact of model and variable selection on estimates of precision. Mundhenk, Philip Henrich, 26 May 2014
The last twenty years have shown that integrating airborne laser scanning (Light Detection and Ranging; LiDAR) into forest resource assessments can help increase the precision of estimates. To make this possible, field data must be combined with LiDAR data, and various modeling techniques offer the possibility of describing this link statistically. While the choice of method usually has only a minor influence on point estimators, it yields different estimates of precision. The present study examined the influence of different modeling techniques and of variable selection on the precision of estimates, focusing on LiDAR applications in the context of forest inventories. The variable selection methods considered were the Akaike information criterion (AIC), the corrected Akaike information criterion (AICc), and the Bayesian (or Schwarz) information criterion (BIC). In addition, variables were selected based on the condition number and the variance inflation factor. Further methods considered in this study include ridge regression, the least absolute shrinkage and selection operator (lasso), and the random forest algorithm. The stepwise variable selection methods were investigated within both model-assisted and model-based inference; the remaining methods were investigated only within model-assisted inference. In an extensive simulation study, the influence of the type of modeling method and the type of variable selection on the precision of estimates of population parameters (above-ground biomass in megagrams per hectare) was determined. Five different populations were used: three artificial populations were simulated, and two further ones were based on forest inventory data collected in Canada and Norway. Canonical vine copulas were used to generate synthetic populations from these forest inventory data. Simple random samples were drawn repeatedly from the populations, and for each sample the mean and the precision of the mean estimate were estimated. While only one variance estimator was investigated for the model-based approach, three different estimators were investigated for the model-assisted approach. The results of the simulation study showed that naively applying stepwise variable selection methods generally leads to an overestimation of precision in LiDAR-assisted forest inventories. The bias in the precision estimates mattered above all for small samples (n = 40 and n = 50); for samples of larger size (n = 400), the overestimation of precision was negligible. Ridge regression, the lasso, and the random forest algorithm showed good results in terms of coverage rates and empirical standard error. From the results of this study it can be concluded that the latter methods should be considered in future LiDAR-assisted forest inventories.
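A model-assisted (generalized regression) estimator is short to state in code: predict the study variable for every population unit from its LiDAR metrics, then correct the synthetic mean by the mean residual on the field plots. The sketch below uses scikit-learn's lasso as one of the methods compared above; the data shapes, the cross-validated penalty, and the toy population are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LassoCV

def model_assisted_mean(X_pop, X_sample, y_sample, seed=0):
    """Generalized regression (GREG) estimator of the population mean of y
    under simple random sampling: mean prediction over all population units
    plus the mean residual on the sampled field plots."""
    model = LassoCV(cv=5, random_state=seed).fit(X_sample, y_sample)
    synthetic = model.predict(X_pop).mean()                   # model part
    correction = (y_sample - model.predict(X_sample)).mean()  # design correction
    return synthetic + correction

# toy population: 10 LiDAR metrics, above-ground biomass driven by 3 of them
rng = np.random.default_rng(1)
X_pop = rng.normal(size=(5000, 10))
agb = (120 + 25 * X_pop[:, 0] + 10 * X_pop[:, 1] - 15 * X_pop[:, 2]
       + rng.normal(0, 20, 5000))
idx = rng.choice(5000, size=50, replace=False)  # n = 50 field plots
print(model_assisted_mean(X_pop, X_pop[idx], agb[idx]), agb.mean())
```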
|
389 |
Omezené Restartovací Automaty / Restricted Restarting Automata. Černo, Peter, January 2015
Restarting automata were introduced as a model for analysis by reduction, a linguistically motivated method for checking the correctness of a sentence. The thesis studies locally restricted models of restarting automata which, in contrast to general restarting automata, can modify the input tape based only on a limited context. The investigation of such restricted models is easier than in the case of general restarting automata. Moreover, these models are effectively learnable from positive samples of reductions, and their instructions are human-readable.
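The limited-context restriction can be made concrete: an instruction rewrites a short factor only when its immediate left and right contexts match, and a word is accepted if repeated reductions reach an accepting form. The toy rule set below, recognizing the language a^n b^n, is an assumption for illustration, not one of the models studied in the thesis.

```python
def reduce_once(word, instructions):
    """Apply the first instruction whose context matches; None if none does.
    An instruction (left, factor, right, replacement) may rewrite `factor`
    only when it is immediately surrounded by `left` and `right`."""
    for left, factor, right, repl in instructions:
        i = word.find(left + factor + right)
        if i >= 0:
            j = i + len(left)
            return word[:j] + repl + word[j + len(factor):]
    return None

def accepts(word, instructions, accepting=("",)):
    """Analysis by reduction: reduce until an accepting form or a dead end."""
    while word not in accepting:
        nxt = reduce_once(word, instructions)
        if nxt is None:
            return False
        word = nxt
    return True

# delete one "ab" at the a/b boundary; the rewrite window never sees more
# than one symbol of context on each side (the local restriction)
rules = [("a", "ab", "b", ""),   # interior boundary
         ("", "ab", "", "")]     # final pair
print(accepts("aaabbb", rules), accepts("aabbb", rules))  # True False
```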