61 |
Bipartite RankBoost+: An Improvement to Bipartite RankBoost
Zhang, Ganqin, 22 January 2021 (has links)
No description available.
|
62 |
A hypothesis generating case study comparing exploratory and pairwise testing in an embedded system environment / En hypotesgenererande fallstudie som jämför utforskande testing och parvis testning i en inbäddad systemmiljö
Falkenstrand, Petter; Gidlöf, Tim, January 2022 (has links)
Software testing has been a subject of research since the introduction of computers. Today, when computers and microprocessors are increasingly integrated into the products surrounding us in our daily lives, the importance of effective and accurate software testing increases. Previous research shows that exploratory testing is an effective method for detecting software bugs. Still, when time and resources are considered, there are other potentially more efficient methods. One such method is pairwise testing, which the literature also shows to be effective at finding software bugs. There is, however, little research comparing the two methods, which is why this study was conducted. This explanatory case study evaluates how exploratory testing performs compared with pairwise testing. Aspects considered in the evaluation were the number of detected defects, the severity distribution of the found defects, and the types of defects found. The data was collected through surveys, interviews, participant observations, and direct observations. From the collected data, it was concluded that the answers to the study's research questions are ambiguous: there are benefits to both techniques, and depending on the conditions for exploratory testing, the pairwise technique can perform comparably. One observation made during the study, outside its original scope, was the strength of exploratory testing as a learning tool. Lastly, some hypotheses were stated, supported by the collected data.
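For readers unfamiliar with the technique being compared: pairwise (all-pairs) testing covers every combination of values for each pair of parameters while using far fewer test cases than exhaustive enumeration. The sketch below is a generic greedy all-pairs generator, not code from the study; the parameter names and values are invented for illustration:

```python
from itertools import combinations, product

def all_pairs(parameters):
    """Greedy all-pairs test generation: keep adding the full test case
    that covers the most still-uncovered value pairs (simple, non-optimal)."""
    names = list(parameters)
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in parameters[a]
        for vb in parameters[b]
    }
    tests = []
    while uncovered:
        def gain(values):
            case = dict(zip(names, values))
            return sum(((a, case[a]), (b, case[b])) in uncovered
                       for a, b in combinations(names, 2))
        best = max(product(*parameters.values()), key=gain)
        case = dict(zip(names, best))
        tests.append(case)
        uncovered -= {((a, case[a]), (b, case[b]))
                      for a, b in combinations(names, 2)}
    return tests

# Hypothetical parameters for an embedded target: 2*2*2 = 8 exhaustive
# combinations, but all value pairs are covered by fewer test cases.
params = {"cpu_load": ["low", "high"],
          "log_level": ["off", "debug"],
          "power": ["battery", "mains"]}
for case in all_pairs(params):
    print(case)
```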
|
63 |
The assessment of the quality of science education textbooks: conceptual framework and instruments for analysis
Swanepoel, Sarita, 04 1900 (has links)
Science and technology are constantly transforming our day-to-day living. Science education has become vitally important to prepare learners for this ever-changing world. Unfortunately, science education in South Africa is hampered by under-qualified and inexperienced teachers. Textbooks of good quality can assist teachers and learners and facilitate the development of science teachers. For this reason, thorough assessment of textbooks is needed to inform the selection of good textbooks.

An investigation revealed that the available textbook evaluation instruments are not suitable for the evaluation of physical science textbooks in the South African context. An instrument is needed that focuses on science education textbooks and that prescribes the criteria, weights, evaluation procedure and rating scheme that can ensure justifiable, transparent, reliable and valid evaluation results. This study utilised elements of the Analytic Hierarchy Process (AHP) to develop such an instrument and verified the reliability and validity of the instrument's evaluation results.

Development of the Instrument for the Evaluation of Science Education Textbooks started with the formulation of criteria. Characteristics that influence the quality of textbooks were identified from the literature, existing evaluation instruments and stakeholders' concerns. In accordance with the AHP, these characteristics or criteria were divided into categories or branches to give a hierarchical structure. Subject experts verified the content validity of the hierarchy.

Expert science teachers compared the importance of the different criteria, and the data were used to derive weights for the criteria with the Expert Choice computer application. A rubric was formulated to act as rating scheme and score sheet. During the textbook evaluation process the ratings were transferred to a spreadsheet that computed quality scores for a textbook as a whole as well as for the different categories.

The instrument was tested on a small scale, adjusted, and then applied on a larger scale. The results of different analysts were compared to verify the reliability of the instrument, and triangulation with the opinions of teachers who had used the textbooks confirmed the validity of the evaluation results obtained with it. Future investigations of the evaluation instrument could include the use of different rating scales and the limiting of criteria. / Thesis (M. Ed. (Didactics))
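The weight-derivation step described above is the core of the AHP: experts fill in a pairwise comparison matrix over the criteria, and the weights are read off its principal eigenvector, which is what tools like Expert Choice compute. A minimal sketch with an invented 3-criterion matrix (not data from the study):

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three textbook criteria
# on Saaty's 1-9 scale: A[i, j] = importance of criterion i over j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# AHP weights: normalised principal eigenvector of A.
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()
print("criterion weights:", np.round(w, 3))

# Consistency check: CR = CI / RI, with random index RI = 0.58 for a
# 3x3 matrix; CR < 0.1 is the conventional acceptability threshold.
n = A.shape[0]
CI = (vals.real[k] - n) / (n - 1)
print("consistency ratio:", round(CI / 0.58, 3))
```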
|
64 |
[en] THE AHP - CONCEPTUAL REVIEW AND PROPOSAL OF SIMPLIFICATION / [pt] O MÉTODO AHP - REVISÃO CONCEITUAL E PROPOSTA DE SIMPLIFICAÇÃO
CRISTINA SANTOS WOLFF, 27 October 2008 (has links)
Several transportation problems, as well as problems in other knowledge areas, involve decision making. In complex decisions, the choice of the best alternative or course of action can involve more than one criterion, and it is necessary to study how each alternative affects each criterion. The AHP (Analytic Hierarchy Process), proposed by Thomas L. Saaty, is a multicriteria decision method that works well for very diverse decision types, solving problems with tangible and intangible factors. It gathers the opinions of decision makers in comparison matrices. This study gives a general review of the basic concepts of the method, showing different ways of calculating the solution. The first is the exact solution using the eigenvalues and eigenvectors of the matrices, computed with the French software Scilab, which is similar to the well-known Matlab but free and distributed on the web. The issue of judgment consistency is discussed, including ways of measuring and improving it. Finally, an approximate solution is proposed, questioning the original idea that a certain level of inconsistency is desirable. It is a simplification that, assuming absolute consistency, facilitates not only the calculations but also the early work of the decision makers when judging the alternatives: instead of making pairwise comparisons of all alternatives with each other, it becomes necessary to compare only one alternative with the others. The new approximate solution is compared to the exact solution in three cases taken from the literature.
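The proposed simplification can be made concrete: under perfect consistency a_1j = w_1/w_j, so the priority vector can be recovered from a single row of comparisons instead of the principal eigenvector of the full matrix. A rough sketch with an invented matrix (not one of the three literature cases used in the study):

```python
import numpy as np

# Invented, mildly inconsistent 4x4 pairwise comparison matrix.
A = np.array([[1.0, 2.0, 4.0, 6.0],
              [1/2, 1.0, 3.0, 4.0],
              [1/4, 1/3, 1.0, 2.0],
              [1/6, 1/4, 1/2, 1.0]])

# Exact solution: normalised principal eigenvector (Saaty's method).
vals, vecs = np.linalg.eig(A)
v = np.abs(vecs[:, np.argmax(vals.real)].real)
exact = v / v.sum()

# Simplified solution: assume absolute consistency, a_1j = w_1 / w_j,
# so only the comparisons of alternative 1 with the others are needed.
row = 1.0 / A[0]
approx = row / row.sum()

print("exact :", np.round(exact, 3))
print("approx:", np.round(approx, 3))
```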
|
65 |
Modelagem estatística de extremos espaciais com base em processos max-stable aplicados a dados meteorológicos no estado do Paraná / Statistical modelling of spatial extremes based on max-stable processes applied to environmental data in the Parana State
Olinda, Ricardo Alves de, 09 August 2012 (has links)
Most mathematical models developed for rare events are based on probabilistic models for extremes. Although the tools for statistical modelling of univariate and multivariate extremes are well developed, the extension of these tools to model spatial extremes is currently a very active area of research. Modelling maxima over a spatial domain, applied to meteorological data, is important for the proper management of risks and environmental disasters in countries where the agricultural sector has a great influence on the economy. A natural approach to such modelling is the theory of spatial extremes and max-stable processes, the infinite-dimensional extension of multivariate extreme value theory; the correlation functions of geostatistics can then be incorporated and extremal dependence checked through the extremal coefficient and the madogram. This thesis describes the application of such processes to modelling the spatial dependence of monthly maximum rainfall in the state of Paraná, based on historical series observed at meteorological stations. The proposed models consider Euclidean space and a transformation called climatic space, which makes it possible to account for directional effects resulting from synoptic weather patterns. The methodology is based on the theorem proposed by De Haan (1984) and on the models of Smith (1990) and Schlather (2002); the isotropic and anisotropic behaviour of these models is also examined via Monte Carlo simulation. Estimates are obtained by maximum pairwise likelihood, and the models are compared using the Takeuchi Information Criterion. The fitting algorithm is fast and robust, giving good statistical and computational efficiency in modelling monthly maximum rainfall and allowing the directional effects of environmental phenomena to be modelled.
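The extremal dependence diagnostics named above have simple empirical versions. As a hedged sketch (synthetic maxima, not the Paraná station data): the F-madogram estimator of pairwise dependence between two stations maps to an extremal coefficient between 1 (complete dependence) and 2 (independence):

```python
import numpy as np

def f_madogram(maxima, i, j):
    """Empirical F-madogram and extremal coefficient between stations
    i and j, from an (n_blocks, n_stations) array of block maxima."""
    zi, zj = maxima[:, i], maxima[:, j]
    n = len(zi)
    # Empirical CDF transform via ranks, scaled into (0, 1).
    fi = (np.argsort(np.argsort(zi)) + 0.5) / n
    fj = (np.argsort(np.argsort(zj)) + 0.5) / n
    nu = 0.5 * np.mean(np.abs(fi - fj))   # F-madogram
    theta = (1 + 2 * nu) / (1 - 2 * nu)   # extremal coefficient in [1, 2]
    return nu, theta

# Hypothetical monthly maxima at 3 stations over 50 periods.
rng = np.random.default_rng(0)
maxima = rng.gumbel(size=(50, 3))
nu, theta = f_madogram(maxima, 0, 1)
print(f"madogram = {nu:.3f}, extremal coefficient = {theta:.2f}")
```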
|
66 |
Développement de représentations et d'algorithmes efficaces pour l'apprentissage statistique sur des données génomiques / Learning from genomic data: efficient representations and algorithms
Le Morvan, Marine, 03 July 2018 (has links)
Since the first sequencing of the human genome in the early 2000s, large endeavours have set out to map the genetic variability among individuals, or DNA alterations in cancer cells. They have laid the foundations for the emergence of precision medicine, which aims at integrating the genetic specificities of an individual with their conventional medical record to adapt treatment or prevention strategies. Translating DNA variations and alterations into phenotypic predictions is, however, a difficult problem. DNA sequencers and microarrays measure more variables than there are samples, which poses statistical issues. The data is also subject to technical biases and noise inherent in these technologies. Finally, the vast and intricate networks of interactions among proteins obscure the impact of DNA variations on cell behaviour, prompting the need for predictive models able to capture a certain degree of complexity. This thesis presents novel methodological contributions to address these challenges. First, we define a novel representation for tumour mutation profiles that exploits prior knowledge of protein-protein interaction networks. For certain cancers, this representation improves survival predictions from mutation data and stratifies patients into meaningful subgroups. Second, we present a new learning framework that jointly handles data normalisation and the estimation of a linear model; our experiments show that it improves prediction performance compared to handling these tasks sequentially. Finally, we propose a new algorithm to scale up the estimation of sparse linear models with two-way interactions. The obtained speed-up makes this estimation possible and efficient for datasets with hundreds of thousands of main effects, thereby extending the scope of such models to data from genome-wide association studies.
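To make the last contribution concrete: a sparse linear model with two-way interactions is a lasso fitted over all products x_i x_j alongside the main effects, and the quadratic blow-up of that feature space is exactly what the thesis's algorithm avoids materialising. A naive small-scale sketch of the model being estimated (synthetic data; an explicit expansion like this would be infeasible at genome-wide scale):

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 30)).astype(float)   # toy 0/1/2 genotypes
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] * X[:, 2] + rng.normal(size=200)

# Main effects plus all pairwise interaction columns: O(p^2) features.
expand = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
X2 = expand.fit_transform(X)

model = Lasso(alpha=0.1).fit(X2, y)
names = expand.get_feature_names_out([f"x{i}" for i in range(X.shape[1])])
print([(n, round(c, 2)) for n, c in zip(names, model.coef_)
       if abs(c) > 0.05])   # should pick out x0 and the x1*x2 interaction
```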
|
67 |
Générateur stochastique de temps multisite basé sur un champ gaussien multivarié / Spatial stochastic weather generator based on a multivariate Gaussian random field
Bourotte, Marc, 17 June 2016 (has links)
Stochastic weather generators are numerical models able to simulate sequences of weather data of any desired length with statistical properties similar to observed data. These models are increasingly used in climate science, hydrology and agronomy. However, few of them can simulate several variables, including precipitation, at different sites of a region. In this thesis, we propose an original stochastic generator based on a spatio-temporal multivariate Gaussian random field. A first methodological contribution was the development of a completely non-separable cross-covariance model suited to the spatio-temporal multivariate nature of the data: a generalisation to the multivariate case of Gneiting's non-separable spatio-temporal covariance, within the Matérn family. The proof of the model's validity and the estimation of its parameters by weighted pairwise maximum likelihood are presented, and an application to weather data shows the interest of the new model compared with existing ones. The multivariate Gaussian random field models the residuals of the weather variables (excluding precipitation), obtained after normalising each variable by seasonal means and standard deviations, themselves modelled by sinusoidal functions. Integrating precipitation into the generator requires transforming one component of the Gaussian field through an anamorphosis function, which handles both the occurrence and the intensity of precipitation. The corresponding component of the Gaussian field thus acts as a rain potential, correlated with the other variables through the cross-covariance function developed in this thesis. The generator was tested on a set of 18 stations distributed over the Mediterranean (or near-Mediterranean) area of France. Conditional and non-conditional simulation of daily weather variables (minimum and maximum temperature, average wind speed, solar radiation and precipitation) at these 18 stations gives good results for a number of statistics.
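The covariance family being generalised can be sketched in its scalar form: Gneiting's non-separable space-time model composes a completely monotone spatial function with a temporal damping term, and the thesis extends this construction to cross-covariances between several variables within the Matérn family. A univariate sketch (parameter values are arbitrary placeholders):

```python
import numpy as np

def gneiting_cov(h, u, sigma2=1.0, a=1.0, alpha=1.0, beta=0.5,
                 c=1.0, gamma=0.5, d=2):
    """Gneiting's non-separable space-time covariance,
    C(h, u) = sigma2 / psi(u^2)^(d/2) * exp(-c * (h^2 / psi(u^2))^gamma),
    with psi(t) = (a * t^alpha + 1)^beta. beta in [0, 1] tunes the
    space-time interaction; beta = 0 gives a separable model."""
    psi = (a * np.abs(u) ** (2 * alpha) + 1.0) ** beta
    return sigma2 / psi ** (d / 2) * np.exp(-c * (np.asarray(h) ** 2 / psi) ** gamma)

# Decay of covariance with time lag for two sites 10 km apart (toy units).
print(np.round(gneiting_cov(h=10.0, u=np.arange(4.0), a=0.1, c=0.01), 4))
```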
|
68 |
Beräkning av pumpkapacitet samt konstruktion av pumpfundament / Calculation of pump capacity and construction of a pump foundation
Beijer, Anton; Lindholm, Magnus, January 2013 (has links)
A development project to investigate why submersible pumps in a water run-off system broke down every two years on average was performed in collaboration with Cementa AB in Skövde. The cause of the pump failures was sought and found to be inadequate procedures and a lack of knowledge about the maintenance the pumps required. To solve this, guidelines for the purchase of new dry pit pumps were developed to enable continuous maintenance, and since no location for a dry pit pump existed at Cementa, a pump foundation was designed. Requirements for the development work were produced in cooperation with Cementa's maintenance department, and the volume flow capacity of the existing submersible drainage pumps was dimensioned theoretically. The requirements were evaluated and weighted using pairwise comparison. The design and strength verification of the developed pump foundation were performed using finite element analysis in Pro/Engineer Creo 1.0 Mechanica; the strength of the foundation's attachment and of the welds was verified analytically. The work resulted in a recommendation to Cementa AB in Skövde to request quotes for new dry pit pumps using the developed guidelines and to manufacture the pump foundation developed within the thesis. Cementa was also recommended to carefully follow the pumps' maintenance instructions and to make it easier for staff to perform this maintenance, to ensure that new pumps achieve a longer and more economical lifetime.
|
69 |
Bit-interleaved coded modulation for hybrid RF/FSO systems
He, Xiaohui, 05 1900 (has links)
In this thesis, we propose a novel architecture for hybrid radio frequency (RF) / free-space optics (FSO) wireless systems. Hybrid RF/FSO systems are attractive since the RF and FSO sub-systems are affected differently by weather and fading phenomena. We give a thorough introduction to RF and FSO technology and review the state of the art of hybrid RF/FSO systems. We show that a hybrid system robust to different weather conditions is obtained by joint bit-interleaved coded modulation (BICM) of the bit streams transmitted over the RF and FSO sub-channels. An asymptotic performance analysis reveals that a properly designed convolutional code can exploit the diversity offered by the independent sub-channels. Furthermore, we develop code design and power assignment criteria and provide an efficient code search procedure. The cut-off rate of the proposed hybrid system is also derived and compared to that of hybrid systems with perfect channel state information at the transmitter. Simulation results show that hybrid RF/FSO systems with BICM outperform previously proposed hybrid systems employing a simple repetition code and selection diversity.
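The joint-BICM idea, that one codeword's bits are interleaved and split across the two sub-channels so the convolutional code sees them as independent diversity branches, can be sketched as follows. This is a generic illustration with a textbook rate-1/2 code, not the thesis's optimised code, power assignment, or modulation:

```python
import numpy as np

def conv_encode(bits, gens=(0b111, 0b101)):
    """Rate-1/2 convolutional encoder, constraint length 3, using the
    textbook (7, 5) octal generator pair."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | int(b)) & 0b111
        out += [bin(state & g).count("1") % 2 for g in gens]
    return np.array(out)

rng = np.random.default_rng(0)
info = rng.integers(0, 2, 100)
coded = conv_encode(info)

# Bit interleaver: a fixed pseudo-random permutation of the codeword.
perm = rng.permutation(len(coded))
interleaved = coded[perm]

# Joint BICM over the hybrid link: alternate interleaved code bits
# between the RF and FSO sub-channels, so consecutive code bits
# experience (nearly) independent fading/weather states.
rf_bits, fso_bits = interleaved[0::2], interleaved[1::2]
print(len(rf_bits), "code bits on RF,", len(fso_bits), "on FSO")
```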
|
70 |
Key Distribution In Wireless Sensor Networks
Gupta, Abhishek, 06 1900 (has links)
In the last few years, wireless sensor networks (WSNs) have become a very actively researched area. The impetus for this spurt of interest was the development of wireless technologies and low-cost VLSI, which made it possible to build inexpensive sensors and actuators. Each such device has limited computational power, memory and energy supply. Nevertheless, because of their low cost, such devices can be deployed in large numbers and can thereafter form a sensor network. Usually, one or more base stations are also present, acting as sink nodes.
When sensors are deployed in hostile environments, security becomes an integral requirement for such networks. A first step in this direction is to provide secure communication between any two nodes and between a node and the base station. Since public key cryptographic techniques are computationally expensive for resource-constrained sensors, one needs to rely on symmetric key cryptography for secure communication. The distribution and management of cryptographic keys pose a unique challenge in sensor networks, and efficient key distribution algorithms are required.
In this thesis, we address the problem of secure path key establishment in wireless sensor networks. We first propose a pairwise key distribution algorithm for probabilistic schemes. Inspired by recent proxy-based schemes, we introduce a friend-based scheme for establishing pairwise keys securely. We show that the chances of finding friends in a neighbourhood are considerably greater than those of finding proxies, leading to lower communication overhead. Further, we prove that the friend-based scheme performs better than the proxy-based scheme both in terms of resilience against node capture and in energy consumption for pairwise key establishment.
A recent study has shown that the advantages of the probabilistic approach over the deterministic approach are not as great as commonly believed. We therefore focus our attention on deterministic schemes. We first discuss why the conventional security measure cannot be used to determine the resilience of key distribution schemes in which nodes share more than one key, and we propose a new, more general security metric for measuring the resilience of a key distribution scheme in wireless sensor networks. Further, we present a polynomial-based scheme and a novel complete-connectivity scheme for distributing keys to sensors, and give an analytical comparison between the schemes in terms of security and connectivity. Motivated by the schemes, we derive general expressions for the new security measure and the connectivity, and a number of conclusions are drawn using these general expressions. A sketch of the polynomial-based idea is given below.
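As background for the polynomial-based approach, the classic construction of this family (due to Blundo et al.) distributes to each node a univariate slice of a symmetric bivariate polynomial, so any two nodes can compute a common key locally. A small sketch with placeholder parameters, not the thesis's exact scheme:

```python
import random

Q = 2_147_483_647          # prime modulus (placeholder parameter)
T = 2                      # degree: a coalition of up to T nodes learns nothing

def setup(node_ids, seed=1):
    """Blundo-style predistribution: pick a symmetric bivariate polynomial
    f(x, y) = sum a_ij x^i y^j (mod Q), with a_ij = a_ji, and give node u
    the coefficients of the univariate share f(u, y)."""
    rnd = random.Random(seed)
    A = [[rnd.randrange(Q) for _ in range(T + 1)] for _ in range(T + 1)]
    for i in range(T + 1):
        for j in range(i):
            A[i][j] = A[j][i]          # enforce symmetry
    return {u: [sum(A[i][j] * pow(u, i, Q) for i in range(T + 1)) % Q
                for j in range(T + 1)]
            for u in node_ids}

def pairwise_key(share, v):
    """Evaluate the node's share at the peer's id: f(u, v) = f(v, u)."""
    return sum(c * pow(v, j, Q) for j, c in enumerate(share)) % Q

shares = setup([11, 42, 99])
assert pairwise_key(shares[11], 42) == pairwise_key(shares[42], 11)
print("key(11, 42) =", pairwise_key(shares[11], 42))
```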
We conclude with a number of directions for future work.
|