71

Fast Split Arithmetic Encoder Architectures and Perceptual Coding Methods for Enhanced JPEG2000 Performance

Varma, Krishnaraj M. 11 April 2006 (has links)
JPEG2000 is a wavelet transform based image compression and coding standard. It provides superior rate-distortion performance when compared to the previous JPEG standard. In addition, JPEG2000 provides four dimensions of scalability: distortion, resolution, spatial, and color. These superior features make JPEG2000 ideal for use in power- and bandwidth-limited mobile applications such as urban search and rescue. Such applications require a fast, low-power JPEG2000 encoder to be embedded on the mobile agent. This embedded encoder also needs to provide superior subjective quality for low-bitrate images. This research addresses these two aspects of enhancing the performance of JPEG2000 encoders. The JPEG2000 standard includes a perceptual weighting method based on the contrast sensitivity function (CSF). Recent literature shows that perceptual methods based on subband standard deviation are also effective in image compression. This research presents two new perceptual weighting methods that combine information from both the human contrast sensitivity function and the standard deviation within a subband or code-block. These two new sets of perceptual weights are compared to the JPEG2000 CSF weights. The results indicate that our new weights performed better than the JPEG2000 CSF weights for high-frequency images. Weights based solely on subband standard deviation are shown to perform worse than the JPEG2000 CSF weights for all images at all compression ratios. Embedded block coding, EBCOT tier-1, is the most computationally intensive part of the JPEG2000 image coding standard. Past research on fast EBCOT tier-1 hardware implementations has concentrated on cycle-efficient context formation. These pass-parallel architectures require that JPEG2000's three mode switches be turned on. While turning on the mode switches allows the arithmetic encoding of each coding pass to run independently of the others (and thus in parallel), it also disrupts the probability estimation engine of the arithmetic encoder, sacrificing coding efficiency for improved throughput. In this research a new fast EBCOT tier-1 design is presented, called the Split Arithmetic Encoder (SAE) process. The proposed process exploits concurrency to obtain improved throughput while preserving coding efficiency. The SAE process is evaluated using three methods: clock cycle estimation, a multithreaded software implementation, and a field-programmable gate array (FPGA) hardware implementation. All three methods achieve throughput improvement; the hardware implementation exhibits the largest speedup, as expected. A high-speed, task-parallel, multithreaded software architecture for EBCOT tier-1 based on the SAE process is proposed. SAE was implemented in software on two shared-memory architectures: a PC using hyperthreading and a multi-processor non-uniform memory access (NUMA) machine. The implementation adopts appropriate synchronization mechanisms that preserve the algorithm's causality constraints. Tests show that the new architecture is capable of improving throughput by as much as 50% on the NUMA machine and by as much as 19% on a PC with two virtual processing units. A high-speed, multirate FPGA implementation of the SAE process is also proposed. The mismatch between the rate at which the context formation (CF) module produces data and the rate at which the arithmetic encoder (AE) module consumes it is studied in detail.
Appropriate choices for FIFO sizes and FIFO write and read capabilities are made based on the statistics obtained from test runs of the algorithm. Using a fast CF module, this implementation was able to achieve as much as 120% improvement in throughput. / Ph. D.
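The abstract does not spell out the exact combination rule for its new weights, so the following is only a rough sketch assuming a hypothetical scheme that scales each subband's CSF weight by that subband's normalized standard deviation; the exponent `alpha` and the function names are illustrative, not taken from the thesis:

```python
import numpy as np

def combined_perceptual_weights(csf_weights, subbands, alpha=0.5):
    """Hypothetical blend of CSF weights with subband standard deviation.

    csf_weights: dict mapping subband name -> JPEG2000 CSF weight
    subbands:    dict mapping subband name -> 2-D array of wavelet coefficients
    alpha:       illustrative exponent controlling how strongly the standard
                 deviation modulates the CSF weight
    """
    stds = {name: float(np.std(coeffs)) for name, coeffs in subbands.items()}
    mean_std = np.mean(list(stds.values())) or 1.0
    weights = {}
    for name, w_csf in csf_weights.items():
        # Subbands with above-average activity get boosted, others attenuated.
        weights[name] = w_csf * (stds[name] / mean_std) ** alpha
    return weights

# Toy usage with random "coefficients" standing in for a real wavelet decomposition.
rng = np.random.default_rng(0)
subbands = {"LH1": rng.normal(0, 5, (64, 64)), "HL1": rng.normal(0, 2, (64, 64))}
csf = {"LH1": 0.56, "HL1": 0.56}
print(combined_perceptual_weights(csf, subbands))
```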
72

Empirical essays on job search behavior, active labor market policies, and propensity score balancing methods

Schmidl, Ricarda January 2014 (has links)
In Chapter 1 of the dissertation, the role of social networks is analyzed as an important determinant of the search behavior of the unemployed. Based on the hypothesis that the unemployed generate information on vacancies through their social network, search theory predicts that individuals with large social networks should experience an increased productivity of informal search and reduce their search through formal channels. Due to the higher productivity of search, unemployed individuals with a larger network are also expected to have a higher reservation wage than those with a small network. The model-theoretic predictions are tested and confirmed empirically. It is found that the search behavior of the unemployed is significantly affected by the presence of social contacts, with larger networks implying a stronger substitution away from formal search channels towards informal channels. The substitution is particularly pronounced for passive formal search methods, i.e., search methods that generate rather non-specific types of job offer information at low relative cost. We also find small but significant positive effects of an increase in network size on the reservation wage. These results have important implications for the analysis of job search monitoring or counseling measures, which are usually targeted at formal search only. Chapter 2 of the dissertation addresses the labor market effects of vacancy information during the early stages of unemployment. The outcomes considered are the speed of exit from unemployment, the effects on the quality of employment, and the short- and medium-term effects on active labor market program (ALMP) participation. It is found that vacancy information significantly increases the speed of entry into employment; at the same time, the probability of participating in ALMP is significantly reduced. Whereas the long-term reduction in ALMP participation arises as a consequence of the earlier exit from unemployment, we also observe a short-run decrease for some labor market groups, which suggests that caseworkers use high- and low-intensity activation measures interchangeably, which is clearly questionable from an efficiency point of view. For unemployed individuals who find a job through vacancy information we observe a small negative effect on the weekly number of hours worked. In Chapter 3, the long-term effects of participation in ALMP are assessed for unemployed youth under 25 years of age. Complementary to the analysis in Chapter 2, the effects of participation in time- and cost-intensive active labor market policy measures are examined. In particular, we study the effects of job creation schemes, wage subsidies, short- and long-term training measures, and measures to promote participation in vocational training. The outcome variables of interest are the probability of being in regular employment and participation in further education during the 60 months following program entry. The analysis shows that all programs except job creation schemes have positive and long-term effects on the employment probability of youth. In the short run, only short-term training measures generate positive effects, as long-term training programs and wage subsidies exhibit significant "locking-in" effects. Measures to promote vocational training are found to significantly increase the probability of attending education and training, whereas all other programs have either no or a negative effect on training participation.
Effect heterogeneity with respect to the pre-treatment level of education shows that young people with higher pre-treatment educational levels benefit more from participation in most programs. However, for longer-term wage subsidies we also find strong positive effects for young people with low initial education levels. The relative benefit of training measures is higher in West than in East Germany. In the evaluation studies of Chapters 2 and 3, the semi-parametric balancing methods of Propensity Score Matching (PSM) and Inverse Probability Weighting (IPW) are used to eliminate the effects of confounding factors that influence both treatment participation and the outcome variable of interest, and to establish a causal relation between program participation and outcome differences. While PSM and IPW are intuitive and methodologically attractive as they do not require parametric assumptions, their practical implementation may become quite challenging due to their sensitivity to various data features. Given the importance of these methods in the evaluation literature, and the vast number of recent methodological contributions in this field, Chapter 4 aims to reduce the knowledge gap between the methodological and applied literature by summarizing new findings of the empirical and statistical literature and deriving practical guidelines for future applied research. In contrast to previous publications, this study does not only focus on the estimation of causal effects, but stresses that the balancing challenge can and should be discussed independently of the question of causal identification of treatment effects in most empirical applications. Following a brief outline of the practical implementation steps required for PSM and IPW, these steps are presented in detail chronologically, outlining practical advice for each step. Subsequently, the topics of effect estimation, inference, sensitivity analysis and the combination with parametric estimation methods are discussed. Finally, new extensions of the methodology and avenues for future research are presented. / In Kapitel 1 der Dissertation wird die Rolle von sozialen Netzwerken als Determinante im Suchverhalten von Arbeitslosen analysiert. Basierend auf der Hypothese, dass Arbeitslose durch ihr soziales Netzwerk Informationen über Stellenangebote generieren, sollten Personen mit großen sozialen Netzwerken eine erhöhte Produktivität ihrer informellen Suche erfahren und ihre Suche in formellen Kanälen reduzieren. Durch die höhere Produktivität der Suche sollte für diese Personen zudem der Reservationslohn steigen. Die modelltheoretischen Vorhersagen werden empirisch getestet, wobei die Netzwerkinformationen durch die Anzahl guter Freunde sowie die Kontakthäufigkeit zu früheren Kollegen approximiert werden. Die Ergebnisse zeigen, dass das Suchverhalten der Arbeitslosen durch das Vorhandensein sozialer Kontakte signifikant beeinflusst wird. Insbesondere sinkt mit der Netzwerkgröße die formelle Arbeitssuche; die Substitution ist besonders ausgeprägt für passive formelle Suchmethoden, d.h. Informationsquellen, die eher unspezifische Arten von Jobangeboten bei niedrigen relativen Kosten erzeugen. Im Einklang mit den Vorhersagen des theoretischen Modells finden sich auch deutlich positive Auswirkungen einer Erhöhung der Netzwerkgröße auf den Reservationslohn. Kapitel 2 befasst sich mit den Arbeitsmarkteffekten von Vermittlungsangeboten (VI) in der frühzeitigen Aktivierungsphase von Arbeitslosen. Die Nutzung von VI könnte dabei eine „doppelte Dividende“ versprechen.
Zum einen reduziert die frühe Aktivierung die Dauer der Arbeitslosigkeit, und somit auch die Notwendigkeit späterer Teilnahme in Arbeitsmarktprogrammen (ALMP). Zum anderen ist die Aktivierung durch Information mit geringeren „locking-in“-Effekten verbunden als die Teilnahme in ALMP. Ziel der Analyse ist es, die Effekte von frühen VI auf die Eingliederungsgeschwindigkeit, sowie die Teilnahmewahrscheinlichkeit in ALMP zu messen. Zudem werden mögliche Effekte auf die Qualität der Beschäftigung untersucht. Die Ergebnisse zeigen, dass VI die Beschäftigungswahrscheinlichkeit signifikant erhöhen, und dass gleichzeitig die Wahrscheinlichkeit in ALMP teilzunehmen signifikant reduziert wird. Für die meisten betrachteten Subgruppen ergibt sich die langfristige Reduktion der ALMP Teilnahme als Konsequenz der schnelleren Eingliederung. Für einzelne Arbeitsmarktgruppen ergibt sich zudem eine frühe und temporäre Reduktion, was darauf hinweist, dass Maßnahmen mit hohen und geringen „locking-in“-Effekten aus Sicht der Sachbearbeiter austauschbar sind, was aus Effizienzgesichtspunkten fragwürdig ist. Es wird ein geringer negativer Effekt auf die wöchentliche Stundenanzahl in der ersten abhängigen Beschäftigung nach Arbeitslosigkeit beobachtet. In Kapitel 3 werden die Langzeiteffekte von ALMP für arbeitslose Jugendliche unter 25 Jahren ermittelt. Die untersuchten ALMP sind ABM-Maßnahmen, Lohnsubventionen, kurz- und langfristige Maßnahmen der beruflichen Bildung sowie Maßnahmen zur Förderung der Teilnahme an Berufsausbildung. Ab Eintritt in die Maßnahme werden Teilnehmer und Nicht-Teilnehmer für einen Zeitraum von sechs Jahren beobachtet. Als Zielvariable wird die Wahrscheinlichkeit regulärer Beschäftigung, sowie die Teilnahme in Ausbildung untersucht. Die Ergebnisse zeigen, dass alle Programme, bis auf ABM, positive und langfristige Effekte auf die Beschäftigungswahrscheinlichkeit von Jugendlichen haben. Kurzfristig finden wir jedoch nur für kurze Trainingsmaßnahmen positive Effekte, da lange Trainingsmaßnahmen und Lohnzuschüsse mit signifikanten „locking-in“-Effekten verbunden sind. Maßnahmen zur Förderung der Berufsausbildung erhöhen die Wahrscheinlichkeit der Teilnahme an einer Ausbildung, während alle anderen Programme keinen oder einen negativen Effekt auf die Ausbildungsteilnahme haben. Jugendliche mit höherem Ausbildungsniveau profitieren stärker von der Programmteilnahme. Jedoch zeigen sich für längerfristige Lohnsubventionen ebenfalls starke positive Effekte für Jugendliche mit geringer Vorbildung. Der relative Nutzen von Trainingsmaßnahmen ist höher in West- als in Ostdeutschland. In den Evaluationsstudien der Kapitel 2 und 3 werden die semi-parametrischen Gewichtungsverfahren Propensity Score Matching (PSM) und Inverse Probability Weighting (IPW) verwendet, um den Einfluss verzerrender Faktoren, die sowohl die Maßnahmenteilnahme als auch die Zielvariablen beeinflussen, zu beseitigen, und kausale Effekte der Programmteilnahme zu ermitteln. Während PSM und IPW intuitiv und methodisch sehr attraktiv sind, stellt die Implementierung der Methoden in der Praxis jedoch oft eine große Herausforderung dar. Das Ziel von Kapitel 4 ist es daher, praktische Hinweise zur Implementierung dieser Methoden zu geben. Zu diesem Zweck werden neue Erkenntnisse der empirischen und statistischen Literatur zusammengefasst und praxisbezogene Richtlinien für die angewandte Forschung abgeleitet.
Basierend auf einer theoretischen Motivation und einer Skizzierung der praktischen Implementierungsschritte von PSM und IPW werden diese Schritte chronologisch dargestellt, wobei auch auf praxisrelevante Erkenntnisse aus der methodischen Forschung eingegangen wird. Im Anschluss werden die Themen Effektschätzung, Inferenz, Sensitivitätsanalyse und die Kombination von IPW und PSM mit anderen statistischen Methoden diskutiert. Abschließend werden neue Erweiterungen der Methodik aufgeführt.
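As a generic illustration of the inverse probability weighting idea used in the dissertation's Chapters 2 and 3 (a sketch under standard assumptions, not the dissertation's actual implementation or data), the snippet below estimates an average treatment effect on the treated with a logistic propensity-score model; all variable names and data are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_att(X, treated, outcome):
    """Inverse-probability-weighted ATT estimate (illustrative sketch).

    X:       covariate matrix (n_samples, n_features)
    treated: binary treatment indicator (1 = program participation)
    outcome: observed outcome (e.g., an employment indicator)
    """
    # 1. Estimate propensity scores with a logistic regression.
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    # 2. Re-weight controls by the odds of treatment; treated units keep weight 1.
    w = np.where(treated == 1, 1.0, ps / (1.0 - ps))
    treated_mean = outcome[treated == 1].mean()
    control_mean = np.average(outcome[treated == 0], weights=w[treated == 0])
    return treated_mean - control_mean

# Toy example with simulated data.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
outcome = 0.5 * treated + X[:, 0] + rng.normal(size=500)
print(ipw_att(X, treated, outcome))
```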
73

Modelos com variáveis latentes aplicados à mensuração de importância de atributos

Samartini, André Luiz Silva 17 February 2006 (has links)
Pesquisas de opinião freqüentemente utilizam questionários com muitos itens para avaliar um serviço ou produto. Nestas pesquisas, cada respondente, além de avaliar os itens segundo seu grau de satisfação ou concordância, deve também atribuir um grau de importância ao item. Com estas duas informações disponíveis para cada item, é possível criar uma medida resumo, na forma de um escore total composto pela avaliação dos itens ponderada pela sua importância. O objetivo desta tese é modelar a importância dos itens por meio de um modelo de Desdobramento Graduado Generalizado, pertencente à família de modelos da Teoria de Resposta ao Item. Resultados de uma pesquisa sobre academia de ginástica mostram que o modelo tem bom ajuste neste caso, e simulações mostram que é possível, com a utilização do modelo, montar desenhos experimentais para diminuir o número de itens ou categorias de resposta a serem perguntados aos respondentes sem perda de informação. / Many surveys use multi-item questionnaires to assess a service or product. In these surveys, each respondent, besides having to evaluate each item (attribute) according to the degree of satisfaction or agreement, must also evaluate its importance. With these two pieces of information it is possible to create a global score for the product or service, weighted by the importance of the items. The goal of this thesis is to model importance through a Generalized Graded Unfolding Model, which is an Item Response Theory model. Results of a survey showed a good fit of the model, and simulations showed that it is possible to design experiments that reduce the number of items or response categories asked of the respondents without loss of information if this model is used.
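A minimal sketch of the importance-weighted summary score described in the abstract (the thesis itself models importance with a Generalized Graded Unfolding Model, which is not reproduced here); all data below are invented:

```python
import numpy as np

def weighted_total_score(satisfaction, importance):
    """Summary score: item evaluations weighted by their stated importance.

    satisfaction: (n_respondents, n_items) ratings, e.g. a 1-5 Likert scale
    importance:   (n_respondents, n_items) importance ratings on the same scale
    Returns one score per respondent, normalized by total importance so that
    respondents who use the importance scale differently remain comparable.
    """
    satisfaction = np.asarray(satisfaction, dtype=float)
    importance = np.asarray(importance, dtype=float)
    return (satisfaction * importance).sum(axis=1) / importance.sum(axis=1)

# Toy example: two respondents, three items.
sat = [[4, 2, 5], [3, 3, 1]]
imp = [[5, 1, 3], [2, 4, 4]]
print(weighted_total_score(sat, imp))
```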
74

Modèles d'appariement du greffon à son hôte, gestion de file d'attente et évaluation du bénéfice de survie en transplantation hépatique à partir de la base nationale de l'Agence de la Biomédecine. / Liver transplantation graft-to-recipient matching models, queue management and evaluation of the survival benefit : study based on the Agency of Biomedicine national database

Winter, Audrey 28 September 2017 (has links)
La transplantation hépatique (TH) est la seule intervention possible en cas de défaillance hépatique terminale. Une des limitations majeures à la TH est la pénurie d'organes. Pour pallier ce problème, les critères de sélection des donneurs ont été élargis avec l'utilisation de foie de donneurs dits à "critères étendus" (extended criteria donor (ECD)). Cependant, il n'existe pas de définition univoque de ces foies ECD. Un score donneur américain a donc été mis en place : le Donor Risk Index (DRI), pour qualifier ces greffons. Mais à qui doit-on donner ces greffons "limites"? En effet, une utilisation appropriée des greffons ECD pourrait réduire la pénurie d'organes. Le but de cette thèse est d'établir un nouveau système d'allocation des greffons qui permettrait à chaque greffon d'être transplanté au candidat dont la transplantation permettra le plus grand bénéfice de survie et d'évaluer l'appariement entre donneurs et receveurs en tenant compte des greffons ECD.La première étape a consisté à effectuer une validation externe du DRI ainsi que du score qui en découle : l'Eurotransplant-DRI. Toutefois la calibration et la discrimination n'étaient pas maintenus dans la base française. Un nouveau score pronostique donneur a donc été élaboré : le DRI-Optimatch, à l'aide d'un modèle de Cox donneur ajusté sur les covariables receveur. Le modèle a été validé par bootstrap avec correction de la performance par l'optimisme.La seconde étape consista à explorer l'appariement entre donneur et receveur afin d'attribuer les greffons ECD de manière optimale. Il a été tenu compte des critères donneurs et receveurs, tels qu'évalués par le DRI-Optimatch et par le MELD (Model for End-stage Liver Disease, score pronostique receveur), respectivement. La méthode de stratification séquentielle retenue s'inspire du principe de l'essai contrôlé randomisé. Nous avons alors estimé, à l'aide de rapport de risques, quel bénéfice de survie un patient donné (repéré à l'aide du MELD) pourrait avoir avec un greffon donné (repéré à l'aide du DRI-Optimatch) en le comparant avec le groupe de référence composé des patients (même MELD), éligibles à la greffe, restés sur liste dans l'attente d'un meilleur greffon (DRI-Optimatch plus petit).Dans une troisième étape, nous avons développé un système d'allocation basé sur le bénéfice de survie alliant deux grands principes dans l'allocation de greffons; l'urgence et l'utilité. Dans ce type de système, un greffon alloué est attribué au patient avec la plus grande différence entre la durée de vie post-transplantation prédite et la durée estimée sur la liste d'attente pour un donneur spécifique. Ce modèle est principalement basé sur deux modèles de Cox : un pré-greffe et un post-greffe. Dans ces deux modèles l'évènement d'intérêt étant le décès du patient, pour le modèle pré-greffe, la censure dépendante a été prise en compte. En effet, sur liste d'attente le décès est bien souvent censuré par un autre évènement : la transplantation. Une méthode dérivée de l'Inverse Probability of Censoring Weighting a été utilisée pour pondérer chaque observation. De plus, données longitudinales et données de survie ont aussi été utilisées. 
Un modèle "en partie conditionnel", permettant d'estimer l'effet de covariables dépendantes du temps en présence de censure dépendante, a été utilisé pour modéliser la survie pré-greffe.Après avoir développé un nouveau système d'allocation, la quatrième et dernière étape, nous a permis de l'évaluer à travers de simulation d'évènement discret ou DES : Discret Event Simulation. / Liver transplantation (LT) is the only life-saving procedure for liver failure. One of the major impediments to LT is the shortage of organs. To decrease organ shortage, donor selection criteria were expanded with the use of extended criteria donor (ECD). However, an unequivocal definition of these ECD livers was not available. To address this issue, an American Donor Risk Index (DRI) was developed to qualify those grafts. But to whom should those ECD grafts be given? Indeed, a proper use of ECD grafts could reduce organ shortage. The aim of this thesis is to establish a new graft allocation system which would allow each graft to be transplanted in the candidate whose LT will allow the greatest survival benefit; and to evaluate the matching between donors and recipients taking into account ECD grafts.The first step was the external validation of the DRI as well as the resultant Eurotransplant-DRI score. However, calibration and discrimination were not maintained on the French database. A new prognostic donor score: the DRI-Optimatch was then developed using a Cox donor model with adjustment on recipient covariates. The model was validated by bootstrapping with correction of the performance by the optimism.The second step was to explore the matching between donors and recipients in order to allocate ECD grafts optimally. Consideration should be given to the donor and recipient criteria, as assessed by the DRI-Optimatch and the Model for End-stage Liver Disease (MELD), respectively. The sequential stratification method retained is based on the randomized controlled trial principle. We then estimated, through hazard ratios, the survival benefit for different categories of MELD and DRI-Optimatch compared against the group of candidates remaining on the wait list (WL) and waiting for a transplant with a graft of better quality (lower DRI-Optimatch).In the third step, we have developed an allocation system based on survival benefit combining the two main principles in graft allocation; urgency and utility. In this system, a graft is allocated to the patient with the greatest difference between the predicted post-transplant life and the estimated waiting time for a specific donor. This model is mainly based on two Cox models: pre-LT and post-LT. In these two models the event of interest being the death of the patient, for the pre-graft model, the dependent censoring was taken into account. Indeed, on the WL, death is often censored by another event: transplantation. A method derived from Inverse Probability of Censoring Weighting was used to weight each observation. In addition, longitudinal data and survival data were also used. A partly conditional model, to estimate the effect of time-dependent covariates in the presence of dependent censoring, was therefore used for the pre-LT model.After developing a new allocation system, the fourth and final step was to evaluate it through Discrete Event Simulation (DES).
75

Comparison of heat maps showing residence price generated using interpolation methods / Jämförelse av färgdiagram för bostadspriser genererade med hjälp av interpolationsmetoder

Wong, Mark January 2017 (has links)
In this report we attempt to provide insights into how interpolation can be used for creating heat maps showing residence prices for different residence markets in Sweden. More specifically, three interpolation methods are implemented and then used on three Swedish residence markets. These three residence markets are of varying characteristics, such as size and residence type. Data on residence sales and the physical definitions of the residence markets were collected. As residence sales are never identical, they were preprocessed to make them comparable. For comparison, a so-called external predictor was used as an extra parameter for the interpolation method. In this report, distance to the nearest public transportation was used as an external predictor. The interpolated heat maps were compared and evaluated using both quantitative and qualitative approaches. Results show that each interpolation method has its own strengths and weaknesses, and that using an external predictor results in better heat maps compared to only using residence price as a predictor. Kriging was found to be the most robust method and consistently resulted in the best interpolated heat maps for all residence markets. On the other hand, it was also the most time-consuming interpolation method. / Den här rapporten försöker ge insikter i hur interpolation kan användas för att skapa färgdiagram över bostadspriser för olika bostadsmarknader i Sverige. Mer specifikt implementeras tre interpolationsmetoder som sedan används på tre olika svenska bostadsmarknader. Dessa tre bostadsmarknader är av olika karaktär med hänsyn till storlek och bostadstyp. Bostadsförsäljningsdata och de fysiska definitionerna för bostadsmarknaderna samlades in. Eftersom bostadsförsäljningar aldrig är identiska, behandlas de först i syfte att göra dem jämförbara. En extern indikator, vilket är en extra parameter för interpolationsmetoder, undersöktes även. I den här rapporten användes avståndet till närmaste kollektiva transportmedel som extern indikator. De interpolerade färgdiagrammen jämfördes och utvärderades både med en kvantitativ och en kvalitativ metod. Resultaten visar att varje interpolationsmetod har sina styrkor och svagheter och att användandet av en extern indikator alltid renderade i ett bättre färgdiagram jämfört med att endast använda bostadspris som indikator. Kriging bedöms vara den mest robusta interpolationsmetoden och interpolerade även de bästa färgdiagrammen för alla bostadsmarknader. Samtidigt var det även den mest tidskrävande interpolationsmetoden.
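The report's three interpolation methods are not restated in this abstract, so as a hedged illustration the sketch below shows inverse distance weighting (IDW), one of the simplest spatial interpolators commonly compared against kriging; coordinates and prices are made up:

```python
import numpy as np

def idw(known_xy, known_values, query_xy, power=2.0):
    """Inverse distance weighting: each query point gets a distance-weighted
    average of the known values (here, residence sale prices)."""
    known_xy = np.asarray(known_xy, dtype=float)
    known_values = np.asarray(known_values, dtype=float)
    out = np.empty(len(query_xy))
    for i, q in enumerate(np.asarray(query_xy, dtype=float)):
        d = np.linalg.norm(known_xy - q, axis=1)
        if np.any(d == 0):                      # query coincides with a recorded sale
            out[i] = known_values[d == 0][0]
            continue
        w = 1.0 / d ** power
        out[i] = np.sum(w * known_values) / np.sum(w)
    return out

# Toy example: three sales (x, y) with prices, and one grid point to interpolate.
sales_xy = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
prices = [3_000_000, 2_500_000, 4_000_000]
print(idw(sales_xy, prices, [(0.4, 0.4)]))
```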
76

With or without context : Automatic text categorization using semantic kernels

Eklund, Johan January 2016 (has links)
In this thesis text categorization is investigated in four dimensions of analysis: theoretically as well as empirically, and as a manual as well as a machine-based process. In the first four chapters we look at the theoretical foundation of subject classification of text documents, with a certain focus on classification as a procedure for organizing documents in libraries. A working hypothesis used in the theoretical analysis is that classification of documents is a process that involves translations between statements in different languages, both natural and artificial. We further investigate the close relationships between structures in classification languages and the order relations and topological structures that arise from classification. A classification algorithm that gets a special focus in the subsequent chapters is the support vector machine (SVM), which in its original formulation is a binary classifier in linear vector spaces, but has been extended to handle classification problems for which the categories are not linearly separable. To this end the algorithm utilizes a category of functions called kernels, which induce feature spaces by means of high-dimensional and often non-linear maps. For the empirical part of this study we investigate the classification performance of semantic kernels generated by different measures of semantic similarity. One category of such measures is based on the latent semantic analysis and random indexing methods, which generate term vectors using co-occurrence data from text collections. Another semantic measure used in this study is pointwise mutual information. In addition to the empirical study of semantic kernels we also investigate the performance of a term weighting scheme called divergence from randomness, which has hitherto received little attention within the area of automatic text categorization. The results of the empirical part of this study show that the semantic kernels generally outperform the “standard” (non-semantic) linear kernel, especially for small training sets. A conclusion that can be drawn with respect to the investigated datasets is therefore that semantic information in the kernel generally improves its classification performance, and that the difference between the standard kernel and the semantic kernels is particularly large for small training sets. Another clear trend in the results is that the divergence from randomness weighting scheme yields a classification performance surpassing that of the common tf-idf weighting scheme.
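A minimal sketch of the semantic-kernel idea, not the thesis code: given a document-term matrix X and a term-by-term semantic similarity matrix S (which the thesis derives from latent semantic analysis, random indexing, or pointwise mutual information), the kernel K = X S S^T X^T can be passed to an SVM with a precomputed kernel; the tiny matrices here are placeholders:

```python
import numpy as np
from sklearn.svm import SVC

# Toy document-term matrix (rows: documents, columns: terms) and class labels.
X = np.array([[2, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 0, 2, 1],
              [0, 1, 1, 2]], dtype=float)
y = np.array([0, 0, 1, 1])

# Placeholder term-by-term semantic matrix S; in practice this would come from
# LSA, random indexing, or pointwise mutual information over a corpus.
S = np.array([[1.0, 0.8, 0.1, 0.0],
              [0.8, 1.0, 0.0, 0.1],
              [0.1, 0.0, 1.0, 0.7],
              [0.0, 0.1, 0.7, 1.0]])

# Semantic kernel: documents are compared in the S-transformed term space.
K = X @ S @ S.T @ X.T

clf = SVC(kernel="precomputed").fit(K, y)
print(clf.predict(K))   # predictions on the training documents

# Predicting for new documents X_new would use K_new = X_new @ S @ S.T @ X.T.
```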
77

Analogy-based software project effort estimation : contributions to projects similarity measurement, attribute selection and attribute weighting algorithms for analogy-based effort estimation

Azzeh, Mohammad Y. A. January 2010 (has links)
Software effort estimation by analogy is a viable alternative to other estimation techniques, and in many cases researchers have found that it outperforms other estimation methods in terms of accuracy and practitioners' acceptance. However, the overall performance of analogy-based estimation depends on two major factors: the similarity measure and attribute selection and weighting. Current similarity measures, such as nearest-neighbor techniques, have been criticized for inadequacies related to attribute relevancy, noise and uncertainty, in addition to the problem of handling categorical attributes. This research focuses on improving the efficiency and flexibility of analogy-based estimation to overcome the abovementioned inadequacies. In particular, this thesis proposes two new approaches to model and handle uncertainty in the similarity measurement method and, most importantly, to reflect the structure of the dataset in similarity measurement using fuzzy modeling based on the Fuzzy C-means algorithm. The first proposed approach, the Fuzzy Grey Relational Analysis method, employs combined techniques of fuzzy set theory and Grey Relational Analysis to improve local and global similarity measurement and to tolerate the imprecision associated with using different data types (continuous and categorical). The second proposed approach uses fuzzy numbers and their concepts to develop a practical yet efficient approach to support analogy-based systems, especially at the early phases of software development. Specifically, we propose a new similarity measure and adaptation technique based on fuzzy numbers. We also propose a new attribute subset selection algorithm and attribute weighting technique based on the hypothesis of analogy-based estimation that projects that are similar in terms of attribute values are also similar in terms of effort values, using row-wise Kendall rank correlation between the similarity matrix based on project effort values and the similarity matrix based on project attribute values. A literature review of related software engineering studies revealed that existing attribute selection techniques (such as brute-force and heuristic algorithms) are restricted to the choice of performance indicators, such as the Mean Magnitude of Relative Error and the Prediction Performance Indicator, and are computationally far more intensive. The proposed algorithms provide a sound statistical basis and justification for their procedures. The performance of the proposed approaches has been evaluated using real industrial datasets. Results and conclusions from a series of comparative studies with conventional estimation by analogy using the available datasets are presented. The studies were also carried out to statistically investigate the significant differences between predictions generated by our approaches and those generated by the most popular techniques, such as conventional analogy estimation, neural networks and stepwise regression. The results and conclusions indicate that the two proposed approaches have the potential to deliver comparable, if not better, accuracy than the compared techniques. The results also show that Grey Relational Analysis tolerates the uncertainty associated with using different data types. As well as the original contributions within the thesis, a number of directions for further research are presented. Most chapters in this thesis have been disseminated in international journals and highly refereed conference proceedings.
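As a generic sketch of plain analogy-based estimation (the baseline the thesis improves on, not its fuzzy or grey-relational variants), the snippet below normalizes project attributes, finds the k most similar historical projects by Euclidean distance, and averages their efforts; the data are invented:

```python
import numpy as np

def analogy_estimate(history_X, history_effort, new_project, k=2):
    """Baseline analogy-based effort estimate: mean effort of the k nearest
    historical projects in normalized attribute space."""
    X = np.asarray(history_X, dtype=float)
    x = np.asarray(new_project, dtype=float)
    # Min-max normalize each attribute so no single attribute dominates the distance.
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    Xn, xn = (X - lo) / span, (x - lo) / span
    dist = np.linalg.norm(Xn - xn, axis=1)
    nearest = np.argsort(dist)[:k]
    return float(np.mean(np.asarray(history_effort, dtype=float)[nearest]))

# Toy history: [team size, KLOC, duration in months] -> effort in person-months.
history_X = [[4, 10, 6], [8, 40, 12], [3, 8, 5], [10, 55, 14]]
history_effort = [24, 120, 18, 160]
print(analogy_estimate(history_X, history_effort, [5, 15, 7], k=2))
```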
78

Automatic Analysis of Facial Actions: Learning from Transductive, Supervised and Unsupervised Frameworks

Chu, Wen-Sheng 01 January 2017 (has links)
Automatic analysis of facial actions (AFA) can reveal a person’s emotion, intention, and physical state, and make possible a wide range of applications. To enable reliable, valid, and efficient AFA, this thesis investigates automatic analysis of facial actions through transductive, supervised and unsupervised learning. Supervised learning for AFA is challenging, in part, because of individual differences among persons in face shape and appearance and variation in video acquisition and context. To improve generalizability across persons, we propose a transductive framework, Selective Transfer Machine (STM), which personalizes generic classifiers through joint sample reweighting and classifier learning. By personalizing classifiers, STM offers improved generalization to unknown persons. As an extension, we develop a variant of STM for use when partially labeled data are available. Additional challenges for supervised learning include learning an optimal representation for classification, variation in base rates of action units (AUs), correlation between AUs and temporal consistency. While these challenges could be partly accommodated with an SVM or STM, a more powerful alternative is afforded by an end-to-end supervised framework (i.e., deep learning). We propose a convolutional network with long short-term memory (LSTM) and multi-label sampling strategies. We compared SVM, STM and deep learning approaches with respect to AU occurrence and intensity in and between BP4D+ [282] and GFT [93] databases, which consist of around 0.6 million annotated frames. Annotated video is not always possible or desirable. We introduce an unsupervised Branch-and-Bound framework to discover correlated facial actions in un-annotated video. We term this approach Common Event Discovery (CED). We evaluate CED in video and motion capture data. CED achieved moderate convergence with supervised approaches and enabled discovery of novel patterns occult to supervised approaches.
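A hedged sketch of the general architecture family described above (a convolutional network followed by an LSTM with multi-label outputs), not the thesis's actual network, hyperparameters, or sampling strategies:

```python
import torch
import torch.nn as nn

class ConvLSTMAU(nn.Module):
    """Toy CNN + LSTM for multi-label action-unit detection on frame sequences."""
    def __init__(self, n_aus=12, hidden=128):
        super().__init__()
        self.features = nn.Sequential(           # per-frame convolutional features
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)   # temporal model
        self.head = nn.Linear(hidden, n_aus)                 # one logit per AU

    def forward(self, clips):                     # clips: (batch, time, 3, H, W)
        b, t = clips.shape[:2]
        frames = clips.flatten(0, 1)              # (batch*time, 3, H, W)
        feats = self.features(frames).flatten(1)  # (batch*time, 32)
        feats = feats.reshape(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out)                     # per-frame AU logits

model = ConvLSTMAU()
clips = torch.randn(2, 8, 3, 64, 64)              # two clips of 8 frames each
logits = model(clips)
# Multi-label training would use BCEWithLogitsLoss against 0/1 AU annotations.
loss = nn.BCEWithLogitsLoss()(logits, torch.randint(0, 2, logits.shape).float())
print(logits.shape, loss.item())
```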
79

Coerência, ponderação de princípios e vinculação à lei: métodos e modelos / Coherence, weighing and balancing and law binding: methods and models

Col, Juliana Sipoli 30 November 2012 (has links)
O objeto da discussão é a racionalidade das decisões judiciais em casos em que se constata conflito de princípios ou entre princípios e regras, casos esses considerados difíceis, uma vez que não há no ordenamento jurídico solução predeterminada que permita mera subsunção dos fatos à norma. São examinados métodos alternativos ao de subsunção. O primeiro é o método da ponderação, difundido principalmente por Robert Alexy, com suas variantes. Entretanto, o problema que surge com a aplicação do método da ponderação é da imponderabilidade entre ponderação e vinculação à lei, ou seja, a escolha dos pesos dos princípios e sua potencial desvinculação da lei. O segundo modelo, chamado de coerentista, busca conferir alguma racionalidade e fornecer critérios que poderiam explicar escolhas entre valores conflitantes subjacentes à legislação e mesmo aos pesos do método de ponderação. Dentro do modelo coerentista, examina-se em particular a versão inferencial que explora a coerência entre regras e princípios pela inferência abdutiva dos princípios a partir das regras. A aplicação dos diferentes modelos é feita em duas decisões prolatadas pelo Supremo Tribunal Federal em casos de conflito de princípio, casos Ellwanger e de aborto de anencéfalos. O que não permite generalização, mas oferece ilustrações específicas das virtudes e vícios desses modelos de decisão. / The subject of this study is the rationality of judicial decisions in cases involving a collision of principles or a conflict between principles and rules. These are hard cases, since the legal system offers no predetermined solution that would allow merely subsuming the facts under the norm. Alternative methods to subsumption are then examined. The first is the method of weighing and balancing proposed mainly by Robert Alexy, with its variants. However, the problem that arises in applying this method is the imponderability between weighing and law binding, that is, the choice of the weights of the principles and their potential detachment from the law. The second, coherentist model seeks to provide some rationality and to supply criteria that could explain choices between conflicting values underlying the legislation, and even the weights assigned in the weighing and balancing method. Within the coherentist model, the inferential version is examined in particular, which explores the coherence between rules and principles through the abductive inference of principles from rules. The different models are applied to two decisions handed down by the Brazilian Supreme Court (Supremo Tribunal Federal) in cases of collision of principles, the Ellwanger case and the anencephalic abortion case. This does not allow generalization, but it offers specific illustrations of the virtues and defects of these models of decision.
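For orientation, the weight formula Alexy attaches to the balancing method is commonly rendered as follows (the standard textbook form, not a formula quoted from the thesis), where I denotes the intensity of interference with each principle, W its abstract weight, and R the reliability of the underlying empirical assumptions:

```latex
W_{i,j} = \frac{I_i \cdot W_i \cdot R_i}{I_j \cdot W_j \cdot R_j}
```

On this reading, principle P_i takes precedence over P_j in the concrete case when W_{i,j} > 1.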
80

A (In)adequada recepção da ponderação Alexyana pelo direito brasileiro

Lopes, Lorena Duarte Santos 02 December 2014 (has links)
A importância do ato de decidir em um Estado Democrático de Direito passa pela perfeita compreensão acerca da diferença existente entre escolher e decidir, de acordo com os termos preconizados por Lenio Streck. Portanto, não deve o juiz, ao tomar suas decisões, intuir de forma parcial ou discricionária, já que não se trata puramente de um ato de escolha: decidir exige verdadeiro compromisso constitucional. Todavia, hodiernamente no Brasil, o que se constata é a recepção de teorias estrangeiras cujos elementos e técnicas enfatizam a discricionariedade judicial - dentre as quais, a teoria da argumentação jurídica de Robert Alexy e sua técnica da ponderação em caso de colisão entre direitos fundamentais. Faz-se necessário então analisar as origens de tal princípio em seu ambiente jusfilosófico de formação - qual seja, a jurisprudência dos valores - para enfim verificar os principais elementos que a constituem. Ademais, se apura a incorporação da teoria no Direito brasileiro por sua constante presença nas mais diversas obras jurídicas nacionais - sobretudo em sede de Direito Constitucional - observando a menção recorrente aos elementos alexyanos nos julgados do Supremo Tribunal Federal (STF). Em razão da alta carga de discricionariedade vinculada à teoria, tais posturas doutrinárias e jurisprudências devem ser combatidas. Como instrumento para o enfrentamento do problema da discricionariedade judicial, escolheu-se a proposta a Teoria da Decisão Judicial, de Lenio Luiz Streck. / The importance of the act of deciding in a Democratic State under the Rule of Law rests on a precise understanding of the difference between choosing and deciding, according to Lenio Streck. To render their decisions, judges cannot act partially or with discretion, because judging is not purely an act of choice: to decide requires a real constitutional commitment. However, at present, foreign theories whose elements and techniques emphasize judicial discretion are being received in Brazil, including the "teoria da argumentação jurídica" of Robert Alexy and his weighing technique for collisions between fundamental rights. It is necessary to analyze the origins of this theory in its legal and philosophical environment, namely the jurisprudence of values in Germany, and to examine its main elements, in order to conclude that its reception was inadequate to Brazilian law. It is possible to observe the incorporation of this theory into Brazilian law through its constant presence in several national legal works, especially in constitutional law books, and in the case law of the Supreme Court. Because of the high degree of discretion linked to this theory, this attitude must be fought. As a way of coping with judicial discretion, the "theory of judicial decision" of Lenio Luiz Streck is proposed.
