441 |
Trade mark use in paid search marketing and direct liability
Mok, Sungho, January 2014
The thesis considers the scope of trade mark protection in the context of paid search marketing. The hypothesis is that ‘fair and efficient competition’ is at the heart of the balance between interested parties and between trade mark protection and free speech. This introduces the concept of a 'virtuous cycle' in the application of trade mark law. The thesis suggests that fair and efficient competition should be the ultimate purpose of trade mark law. That purpose can be furthered by protecting pro-competitive trade mark functions: the intra-trade mark information function and the inter-trade mark differentiation function. Thus, only where third party use is likely to harm the information and differentiation functions of owners' trade marks could the user be liable. In a democratic society, there is an additional consideration: the balance between trade mark protection and free speech. Where third parties use trade marks in non-commercial contexts, liability for likelihood of confusion or dilution should require actual malice or calculated falsehood. These two considerations are tested against the real-world context of paid search marketing. Based on the protection of pro-competitive trade mark functions and speech-restriction standards, and on the actual and direct context and circumstances of paid search marketing, advertisers can be liable for their use of trade marks even when they do not include trade marks in their advertisements. Search engines, however, are not responsible for their use ‘under current practices,’ whether or not trade marks are included in advertisements. The thesis argues that trade mark law and jurisprudence should transform the cycle that starts with the balance of interests and ends with fair and efficient competition into a virtuous spiral where one feeds the other; the two are inextricably linked.
|
442 |
Spatial and Temporal Learning in Robotic Pick-and-Place Domains via Demonstrations and Observations
Toris, Russell C., 20 April 2016
Traditional methods for Learning from Demonstration require users to train the robot through the entire process, or to provide feedback throughout a given task. These previous methods have proved to be successful in a selection of robotic domains; however, many are limited by the ability of the user to effectively demonstrate the task. In many cases, noisy demonstrations or a failure to understand the underlying model prevent these methods from working with a wider range of non-expert users. My insight is that in many mobile pick-and-place domains, teaching is done at too fine-grained a level. In many such tasks, users are solely concerned with the end goal. This implies that the complexity and time associated with training and teaching robots through the entirety of the task is unnecessary. The robotic agent needs to know (1) a probable search location to retrieve the task's objects and (2) how to arrange the items to complete the task. This thesis work develops new techniques for obtaining such data from high-level spatial and temporal observations and demonstrations which can later be applied in new, unseen environments. This thesis makes the following contributions: (1) This work is built on a crowd robotics platform and, as such, we contribute the development of efficient data streaming techniques to further these capabilities. By doing so, users can more easily interact with robots on a number of platforms. (2) The presentation of new algorithms that can learn pick-and-place tasks from a large corpus of goal templates. My work contributes algorithms that produce a metric which ranks the appropriate frame of reference for each item based solely on spatial demonstrations. (3) An algorithm which can enhance the above templates with ordering constraints using coarse and noisy temporal information. Such a method eliminates the need for a user to explicitly specify such constraints and searches for an optimal ordering and placement of items. (4) A novel algorithm which is able to learn probable search locations of objects based solely on sparsely made temporal observations. For this, we introduce persistence models of objects customized to a user's environment.
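To make the frame-of-reference ranking in contribution (2) concrete, here is a minimal sketch, not the thesis's implementation; the variance-based ranking criterion and the toy demonstration data are assumptions.

    import numpy as np

    def rank_reference_frames(demos_by_frame):
        """Rank candidate reference frames for one item.

        demos_by_frame: dict mapping frame name -> list of demonstrated (x, y)
        placements expressed in that frame. Frames in which the demonstrations
        cluster tightly (low total variance) rank best.
        """
        scores = {}
        for frame, points in demos_by_frame.items():
            pts = np.asarray(points, dtype=float)
            # Total variance of placements in this frame (lower = more consistent).
            scores[frame] = float(pts.var(axis=0).sum())
        return sorted(scores.items(), key=lambda kv: kv[1])

    if __name__ == "__main__":
        # Hypothetical demonstrations of placing a fork, relative to two frames.
        demos = {
            "relative_to_plate": [(0.10, 0.02), (0.11, 0.01), (0.09, 0.03)],
            "relative_to_table": [(0.50, 0.90), (1.20, 0.40), (0.20, 1.10)],
        }
        for frame, score in rank_reference_frames(demos):
            print(f"{frame}: variance {score:.4f}")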
|
443 |
Essays on models of the labour market with on-the-job search
Gottfries, Axel, January 2018
In my first chapter, I provide a solution for how to model bargaining when there is on-the-job search and worker turnover depends on the wage. Bargaining is a standard feature in models without on-the-job search, but, due to the endogeneity of the match surplus, a solution does not exist when worker turnover depends on the wage. My solution is based on wages being infrequently renegotiated. With renegotiation, the equilibrium wage distribution and the bargaining outcomes are both unique, and the model nests earlier models in the literature as limit cases when wages are either continuously or never renegotiated. Furthermore, the rate of renegotiation has important implications for the nature of the equilibrium. A higher rate of renegotiation lowers the response of the match duration to a wage increase, which decreases a firm's willingness to accept a higher wage. This results in a lower share of the match surplus going to the worker. Moreover, a high rate of renegotiation also lowers the positive wage spillovers from a minimum wage increase, since these spillovers rely on firms' incentives to use higher wages to reduce turnover. In the standard job ladder model, search is modelled via an employment-specific Poisson rate. The size of the Poisson rate governs the size of the search friction. The Poisson rate can represent the frequency of applications by workers or the rate at which firms post suitable vacancies. In the second chapter, which is co-authored with Jake Bradley, we set up a model which has both of these aspects. Firms infrequently post vacancies and workers occasionally apply for these vacancies. The model nests the standard job ladder model and a version of the stock-flow model as special cases while remaining analytically tractable and easy to estimate empirically from standard panel data sets. The structurally estimated parameters are consistent with recent survey evidence of worker behavior. The model fits moments of the data that are inconsistent with the standard job ladder model and in the process reconciles the level of frictional wage dispersion in the data with replacement ratios used in the macro labor literature. In my third chapter, which is co-authored with Coen Teulings, we develop a simple method to measure the position in the job ladder in models with on-the-job search. The methodology uses two implications from models with on-the-job search: workers gradually select into better-paying jobs until they get laid off, at which point they start to climb the job ladder again. The measure relies on two sources of variation: (i) time-variation in job-finding rates and (ii) individual variation in the time since the last lay-off. We use the method to quantify the returns to on-the-job search and to establish the shape of the wage offer distribution by means of simple OLS regressions with wages as dependent variables. Moreover, we derive a simple prediction on the distribution of job durations. Applying the method to the NLSY 79, we find strong support for this class of models. We estimate the standard deviation of the wage offer distribution to be 12%. On-the-job search accounts for 30% of the experience profile and 9% of the total wage dispersion.
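As a rough illustration of the third chapter's measurement idea, here is a sketch under stated assumptions (simulated data, a cumulated job-finding-rate proxy for ladder position, and plain OLS); it is not the authors' estimator.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical panel: for each worker, the average monthly job-finding rate
    # and the months elapsed since the last lay-off; the ladder-position proxy
    # is the expected number of outside offers received in that spell.
    n = 5000
    months_since_layoff = rng.integers(1, 60, size=n)
    avg_finding_rate = rng.uniform(0.1, 0.4, size=n)
    ladder_proxy = months_since_layoff * avg_finding_rate

    # Simulated log wages rising in ladder position plus noise (illustration only).
    log_wage = 2.0 + 0.05 * ladder_proxy + rng.normal(0, 0.3, size=n)

    # OLS of log wage on the ladder-position proxy.
    X = np.column_stack([np.ones(n), ladder_proxy])
    beta, *_ = np.linalg.lstsq(X, log_wage, rcond=None)
    print(f"intercept = {beta[0]:.3f}, return to ladder position = {beta[1]:.3f}")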
|
444 |
Otimização aplicada ao processo de transmissão de Acinetobacter spp em unidades de terapia intensiva [Optimization applied to the transmission of Acinetobacter spp in intensive care units]
Araújo, Aurélio de Aquino, January 2018
Advisor: Daniela Renata Cantane / Abstract: Originating in the 1970s, hospital-acquired infections have increasingly taken on colossal proportions, causing death in about 30% of patients in Intensive Care Units (ICUs). Patients diagnosed with these infections remain hospitalized for long periods, generating very high costs for hospitals. In the hospital environment, the bacterium Acinetobacter baumannii is the main agent responsible for such infections, owing to its ability to survive in both dry and humid conditions; it can persist in the human body as well as on the surfaces that health workers touch (computers, medical equipment, etc.). The main vectors of this bacterium are the health workers themselves, since the patients in the ICU are all bedridden. Hygiene measures are therefore essential to contain outbreaks of infection; on the other hand, because of the emergencies handled in these units, there is often no time for such procedures. Since full hygiene compliance and a zero rate of staff contact with the ICU environment are impossible, it is important to know the minimum measures necessary to reduce hospital infections. In this context, the objective of this work is to propose and analyze a model that describes the dynamics of transmission of the infection within an ICU, considering patients and health professionals, and to propose an optimization model to determine the minimum hygiene measures needed to minimize the number of infected patients. A Variable Neighborhood Search metaheuristic was proposed to solve the optimization model, and the models were validated through computational simulations... (Complete abstract: click electronic access below) / Master's
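A minimal Variable Neighborhood Search skeleton in the spirit of the metaheuristic mentioned in the abstract; the objective function, neighbourhood moves, and parameter values below are placeholders, not the thesis model.

    import random

    def objective(x):
        # Placeholder objective: expected infections fall with hygiene compliance,
        # while higher compliance levels carry a time/effort cost.
        infections = 30.0 * (1.0 - sum(x) / len(x)) ** 2
        effort = 5.0 * sum(x) / len(x)
        return infections + effort

    def shake(x, k):
        # Perturb k randomly chosen compliance levels (the k-th neighbourhood).
        y = x[:]
        for i in random.sample(range(len(y)), k):
            y[i] = min(1.0, max(0.0, y[i] + random.uniform(-0.3, 0.3)))
        return y

    def local_search(x):
        # First-improvement descent over single-coordinate moves.
        improved = True
        while improved:
            improved = False
            for i in range(len(x)):
                for step in (-0.05, 0.05):
                    y = x[:]
                    y[i] = min(1.0, max(0.0, y[i] + step))
                    if objective(y) < objective(x):
                        x, improved = y, True
        return x

    def vns(n_staff=5, k_max=3, iters=200):
        best = local_search([random.random() for _ in range(n_staff)])
        for _ in range(iters):
            k = 1
            while k <= k_max:
                cand = local_search(shake(best, k))
                if objective(cand) < objective(best):
                    best, k = cand, 1   # accept and restart from the first neighbourhood
                else:
                    k += 1
        return best

    if __name__ == "__main__":
        random.seed(42)
        sol = vns()
        print("compliance levels:", [round(v, 2) for v in sol],
              "cost:", round(objective(sol), 2))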
|
445 |
A Nearest-Neighbor Approach to Indicative Web Summarization
Petinot, Yves, January 2016
Through their role of content proxy, in particular on search engine result pages, Web summaries play an essential part in the discovery of information and services on the Web. In their simplest form, Web summaries are snippets based on a user query and extracted from the content of Web pages. The focus of this work, however, is on indicative Web summarization, that is, on the generation of summaries describing the purpose, topics and functionalities of Web pages. In many scenarios (e.g. navigational queries or content-deprived pages), such summaries represent a valuable commodity to concisely describe Web pages while circumventing the need to produce snippets from inherently noisy, dynamic, and structurally complex content. Previous approaches have identified linking pages as a privileged source of indicative content from which Web summaries may be derived using traditional extractive methods. To be reliable, these approaches require sufficient anchortext redundancy, ultimately showing the limits of extractive algorithms for what is, fundamentally, an abstractive task. In contrast, we explore the viability of abstractive approaches and propose a nearest-neighbors summarization framework leveraging summaries of conceptually related (neighboring) Web pages. We examine the steps that can lead to the reuse and adaptation of existing summaries to previously unseen pages. Specifically, we evaluate two Text-to-Text transformations that cover the main types of operations applicable to neighbor summaries: (1) ranking, to identify neighbor summaries that best fit the target; (2) target adaptation, to adjust individual neighbor summaries to the target page based on neighborhood-specific template-slot models. For this last transformation, we report on an initial exploration of the use of slot-driven compression to adjust adapted summaries based on the confidence associated with token-level adaptation operations. Overall, this dissertation explores a new research avenue for indicative Web summarization and shows the potential value, given the diversity and complexity of the content of Web pages, of transferring, and, when necessary, of adapting, existing summary information between conceptually similar Web pages.
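The ranking transformation (1) can be pictured with a small sketch; the TF-IDF cosine criterion and the toy pages below are assumptions standing in for the thesis's actual ranking model.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def rank_neighbor_summaries(target_page_text, neighbors):
        """neighbors: list of (page_text, existing_summary) pairs.
        Returns summaries ranked by similarity of their page to the target page."""
        docs = [target_page_text] + [page for page, _ in neighbors]
        tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
        sims = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()
        order = sims.argsort()[::-1]
        return [(neighbors[i][1], float(sims[i])) for i in order]

    if __name__ == "__main__":
        # Hypothetical toy pages; real inputs would be crawled page contents.
        target = "open source library for plotting interactive charts in the browser"
        neighbors = [
            ("javascript charting library docs and examples", "A JavaScript charting library."),
            ("recipes for slow cooker meals", "A recipe sharing community."),
        ]
        for summary, score in rank_neighbor_summaries(target, neighbors):
            print(f"{score:.2f}  {summary}")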
|
446 |
Cross-media meta-search engine
Cheng, Tung Yin, January 2005
Thesis (M.Phil.)--Chinese University of Hong Kong, 2005. Includes bibliographical references (leaves 136-141). Abstracts in English and Chinese.
Chapter 1 --- Introduction --- p.1
  1.1 --- Overview --- p.1
    1.1.1 --- Information Retrieval --- p.1
    1.1.2 --- Search Engines --- p.2
    1.1.3 --- Data Merging --- p.3
  1.2 --- Meta-search Engines --- p.3
    1.2.1 --- Framework and Techniques Employed --- p.3
    1.2.2 --- Advantages of meta-searching --- p.8
  1.3 --- Contribution of the Thesis --- p.10
  1.4 --- Organization of the Thesis --- p.12
Chapter 2 --- Literature Review --- p.14
  2.1 --- Preliminaries --- p.14
  2.2 --- Fusion Methods --- p.15
    2.2.1 --- Fusion methods based on a document's score --- p.15
    2.2.2 --- Fusion methods based on a document's ranking position --- p.23
    2.2.3 --- Fusion methods based on a document's URL title and snippets --- p.30
    2.2.4 --- Fusion methods based on a document's entire content --- p.40
  2.3 --- Comparison of the Fusion Methods --- p.42
  2.4 --- Relevance Feedback --- p.46
Chapter 3 --- Research Methodology --- p.48
  3.1 --- Investigation of the features of the retrieved results from the search engines --- p.48
  3.2 --- Types of relationships --- p.53
  3.3 --- Order of Strength of the Relationships --- p.64
    3.3.1 --- Derivation of the weight for each kind of relationship (criterion) --- p.68
  3.4 --- Observation of the relationships between retrieved objects and the effects of these relationships on the relevance of objects --- p.69
    3.4.1 --- Observation on the relationships existed in items that are irrelevant and relevant to the query --- p.68
  3.5 --- Proposed re-ranking algorithms --- p.89
    3.5.1 --- Original re-ranking algorithm (before modification) --- p.91
    3.5.2 --- Modified re-ranking algorithm (after modification) --- p.95
Chapter 4 --- Evaluation Methodology and Experimental Results --- p.101
  4.1 --- Objective --- p.101
  4.2 --- Experimental Design and Setup --- p.101
    4.2.1 --- Preparation of data --- p.101
  4.3 --- Evaluation Methodology --- p.104
    4.3.1 --- Evaluation of the relevance of a document to the corresponding query --- p.104
    4.3.2 --- Performance Measures of the Evaluation --- p.105
  4.4 --- Experimental Results and Interpretation --- p.106
    4.4.1 --- Precision --- p.107
    4.4.2 --- Recall --- p.107
    4.4.3 --- F-measure --- p.108
    4.4.4 --- Overall evaluation results for the ten queries for each evaluation tool --- p.110
    4.4.5 --- Discussion --- p.123
  4.5 --- Degree of difference between the performance of systems --- p.124
    4.5.1 --- Analysis using One-Way ANOVA --- p.124
    4.5.2 --- Analysis using paired samples T-test --- p.126
Chapter 5 --- Conclusion --- p.131
  5.1 --- Implications, Limitations, and Future Work --- p.131
  5.2 --- Conclusions --- p.133
Bibliography --- p.134
Appendix A --- Paired samples T-test for F-measures of systems retrieving all media's items --- p.140
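As a concrete example of the rank-based fusion family surveyed in Chapter 2 of the outline above, a classical Borda-style merge (one well-known scheme, not necessarily the one adopted in the thesis) sums positional scores across engines.

    def borda_fuse(result_lists):
        """Merge ranked result lists from several search engines.

        result_lists: list of lists of document ids, each ordered best-first.
        A document scores (list_length - position) in each list; scores are summed,
        so documents ranked highly by several engines rise to the top.
        """
        scores = {}
        for results in result_lists:
            n = len(results)
            for pos, doc in enumerate(results):
                scores[doc] = scores.get(doc, 0) + (n - pos)
        return sorted(scores, key=scores.get, reverse=True)

    if __name__ == "__main__":
        engine_a = ["d1", "d3", "d2", "d5"]
        engine_b = ["d3", "d1", "d4"]
        engine_c = ["d2", "d3", "d1"]
        print(borda_fuse([engine_a, engine_b, engine_c]))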
|
447 |
Pulsar Search Using Supervised Machine Learning
Ford, John M., 01 January 2017
Pulsars are rapidly rotating neutron stars which emit a strong beam of energy through mechanisms that are not entirely clear to physicists. These very dense stars are used by astrophysicists to study many basic physical phenomena, such as the behavior of plasmas in extremely dense environments, behavior of pulsar-black hole pairs, and tests of general relativity. Many of these tasks require a large sample of known pulsars to answer the scientific questions posed by physicists. In order to provide more pulsars to study, there are several large-scale pulsar surveys underway, which are generating a huge backlog of unprocessed data. Searching for pulsars is a very labor-intensive process, currently requiring skilled people to examine and interpret plots of data output by analysis programs. An automated system for screening the plots will speed up the search for pulsars by a very large factor. Research to date on using machine learning and pattern recognition has not yielded a completely satisfactory system, as systems with the desired near-100% recall have false positive rates that are higher than desired, causing more manual labor in the classification of pulsars. This work set out to research, identify, and develop methods to overcome the barriers to building an improved classification system with a false positive rate of less than 1% and a recall of near 100% that will be useful for the current and next generation of large pulsar surveys. The results show that it is possible to generate classifiers that perform as needed from the available training data. While a false positive rate of 1% was not reached, recall of over 99% was achieved with a false positive rate of less than 2%. Methods of mitigating the imbalanced training and test data were explored and found to be highly effective in enhancing classification accuracy.
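A minimal sketch of the screening setup the abstract describes, using synthetic data in place of pulsar candidate features and class weighting as one common imbalance mitigation; the thesis itself evaluates other methods.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import confusion_matrix

    # Synthetic stand-in for pulsar candidates: roughly 1% positives, as in survey data.
    X, y = make_classification(n_samples=20000, n_features=12,
                               weights=[0.99, 0.01], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # Class weighting is one simple way to counter the heavy imbalance.
    clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                                 random_state=0).fit(X_tr, y_tr)
    y_pred = clf.predict(X_te)

    tn, fp, fn, tp = confusion_matrix(y_te, y_pred).ravel()
    recall = tp / (tp + fn)                # fraction of true pulsars kept
    false_positive_rate = fp / (fp + tn)   # fraction of non-pulsars passed to humans
    print(f"recall = {recall:.3f}, false positive rate = {false_positive_rate:.4f}")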
|
448 |
Labour market policies and unemployment in the presence of search & matching frictions
Onwordi, George Emeka, January 2016
This thesis consists of three theoretical chapters, all related to the response of unemployment to shocks and the role of active and passive labour market policies. Throughout the thesis, unemployment is assumed to evolve as a result of the uncoordinated nature of the labour market along the lines outlined in the Diamond-Mortensen-Pissarides equilibrium search and matching model. Chapter 2 examines the effects of employment policies on vacancy creation and allocation decisions of firms and unemployment across workers with different skills. We develop a partial equilibrium model with heterogeneous high- and low-tech jobs and with skilled and unskilled workers, which we motivate by the stark evidence on the incidence of cross-skill employment (which crowds out unskilled workers; estimates for the US, the UK and the EU put this at 58%, 32%, and 35%, respectively). We show that certain employment protection policies could, in fact, lead to a reduction in job creation and might alter the allocation of vacancies across low- and high-tech job types. We find that: (i) skilled workers benefit while unskilled workers experience a high jobless rate; (ii) policy effects differ when they are skill-specific; (iii) stricter policies can have more severe consequences; and (iv) a vacancy creation subsidy can play a key role in reducing unemployment across worker types as well as alleviating the cross-skill crowding out of jobs. Against conventional wisdom, we demonstrate that severance compensation can have a ‘real’ effect on job creation decisions, provided there is some degree of strictness in its enforcement. Motivated by the extensive use of fiscal stimulus policies and labour market reforms during the last economic crisis, in Chapter 3 we study the implications of labour market regulations in driving the sensitivity of an economy to fiscal spending shocks, in a Dynamic Stochastic General Equilibrium (DSGE) model with job search frictions. We demonstrate that less rigidity in the labour market reduces the impact of a fiscal demand shock on job creation and employment, both at the extensive and intensive margins, whereas higher rigidity amplifies it. We also establish that the extent to which government spending promotes economic activity, job creation and employment depends on the degree of substitutability between private and public consumption. Higher substitutability dampens economic activity and reduces the sizes of output and employment multipliers. Labour market-oriented fiscal spending is found to be the most potent policy instrument for promoting employment, especially in the presence of high labour market rigidities. Finally, in Chapter 4, we study how openness to international trade and capital mobility and their interactions with labour market policies affect the behaviour of an economy, in particular with respect to its unemployment level. We show that the degree of openness to international capital flows is crucial for understanding the response of unemployment to different shocks. In isolation, by raising the incentive to invest, a reduction in capital mobility barriers leads to lower unemployment, both in the long run and the dynamic short run. With limited restrictions on capital movement, unemployment responds faster and with greater magnitude to a domestic productivity shock, and this is further enhanced the more the economy is open to international trade.
A striking finding of this study is that while a higher degree of capital mobility enhances the adjustment of unemployment in response to a domestic productivity shock, it dampens its adjustment to a foreign demand shock. By contrast, higher openness to international trade enhances the adjustment effects of both shocks on unemployment. Finally, we find that heterogeneity in the welfare state systems in the EU can generate substantial differentials in the adjustment of unemployment to various shocks.
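For readers unfamiliar with the Diamond-Mortensen-Pissarides machinery the thesis builds on, a toy calculation with textbook functional forms and made-up parameter values (not the thesis calibration) shows how a matching function delivers a job-finding rate and a steady-state unemployment rate.

    # Cobb-Douglas matching function: m(u, v) = A * u**alpha * v**(1 - alpha)
    A, alpha = 0.6, 0.5        # matching efficiency and elasticity (illustrative)
    s = 0.03                   # monthly job-separation rate (illustrative)

    def steady_state_unemployment(theta):
        """theta = v/u is labour-market tightness; f is the job-finding rate."""
        f = A * theta ** (1 - alpha)   # rate at which an unemployed worker matches
        return s / (s + f), f          # u* solves s*(1 - u) = f*u

    for theta in (0.3, 0.7, 1.2):
        u_star, f = steady_state_unemployment(theta)
        print(f"tightness {theta:.1f}: job-finding rate {f:.2f}, unemployment {u_star:.1%}")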
|
449 |
New local search in the space of infeasible solutions framework for the routing of vehicles
Hamid, Mona, January 2018
Combinatorial optimisation problems (COPs) have been at the origin of the design of many optimal and heuristic solution frameworks such as branch-and-bound algorithms, branch-and-cut algorithms, classical local search methods, metaheuristics, and hyperheuristics. This thesis proposes a refined generic and parametrised infeasible local search (GPILS) algorithm for solving COPs and customises it to solve the travelling salesman problem (TSP), for illustration purposes. In addition, a rule-based heuristic is proposed to initialise infeasible local search, referred to as the parameterised infeasible heuristic (PIH), which allows the analyst to have some control over the features of the infeasible solution he/she might want to start the infeasible search with. A recursive infeasible neighbourhood search (RINS) as well as a generic patching procedure to search the infeasible space are also proposed. These procedures are designed in a generic manner, so they can be adapted to any choice of parameters of the GPILS, where, for simplicity, 'parameters' refers collectively to parameters, components, criteria and rules. Furthermore, a hyperheuristic framework, referred to as HH-GPILS, is proposed for optimising the parameters of GPILS. Experiments have been run for both sequential hyperheuristics (i.e. simulated annealing, variable neighbourhood search, and tabu search) and parallel hyperheuristics (i.e. genetic algorithms / GAs) to empirically assess the performance of the proposed HH-GPILS in solving the TSP using instances from the TSPLIB. Empirical results suggest that HH-GPILS delivers outstanding performance. Finally, an offline learning mechanism is proposed as a seeding technique to improve the performance and speed of parallel HH-GPILS. The proposed offline learning mechanism makes use of a knowledge base to keep track of the best-performing chromosomes and their scores. Empirical results suggest that this learning mechanism is a promising technique to initialise the GA's population.
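For orientation, here is the kind of classical feasible local search (a first-improvement 2-opt descent for the TSP) that GPILS generalises by allowing moves through infeasible solutions; this sketch is purely illustrative and is not the GPILS algorithm itself.

    import random

    def tour_length(tour, dist):
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    def two_opt(tour, dist):
        """First-improvement 2-opt descent: repeatedly reverse a segment if it
        shortens the tour, until no improving move remains."""
        improved = True
        while improved:
            improved = False
            for i in range(1, len(tour) - 1):
                for j in range(i + 1, len(tour)):
                    cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                    if tour_length(cand, dist) < tour_length(tour, dist):
                        tour, improved = cand, True
        return tour

    if __name__ == "__main__":
        random.seed(1)
        pts = [(random.random(), random.random()) for _ in range(12)]
        dist = [[((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 for b in pts] for a in pts]
        start = list(range(len(pts)))
        best = two_opt(start, dist)
        print(round(tour_length(start, dist), 3), "->", round(tour_length(best, dist), 3))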
|
450 |
Unsupervised extraction and normalization of product attributes from web pages
Xiong, Jiani, January 2010
"July 2010." Thesis (M.Phil.)--Chinese University of Hong Kong, 2010. Includes bibliographical references (p. 59-63). Abstracts in English and Chinese.
Chapter 1 --- Introduction --- p.1
  1.1 --- Background --- p.1
  1.2 --- Motivation --- p.4
  1.3 --- Our Approach --- p.8
  1.4 --- Potential Applications --- p.12
  1.5 --- Research Contributions --- p.13
  1.6 --- Thesis Organization --- p.15
Chapter 2 --- Literature Survey --- p.16
  2.1 --- Supervised Extraction Approaches --- p.16
  2.2 --- Unsupervised Extraction Approaches --- p.19
  2.3 --- Attribute Normalization --- p.21
  2.4 --- Integrated Approaches --- p.22
Chapter 3 --- Problem Definition and Preliminaries --- p.24
  3.1 --- Problem Definition --- p.24
  3.2 --- Preliminaries --- p.27
    3.2.1 --- Web Pre-processing --- p.27
    3.2.2 --- Overview of Our Framework --- p.31
    3.2.3 --- Background of Graphical Models --- p.32
Chapter 4 --- Our Proposed Framework --- p.36
  4.1 --- Our Proposed Graphical Model --- p.36
  4.2 --- Inference --- p.41
  4.3 --- Product Attribute Information Determination --- p.47
Chapter 5 --- Experiments and Results --- p.49
Chapter 6 --- Conclusion --- p.57
Bibliography --- p.59
Appendix A --- Dirichlet Process --- p.64
Appendix B --- Hidden Markov Models --- p.68
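Appendix A's Dirichlet Process underpins the proposed graphical model; a small Chinese-restaurant-process simulation (a generic illustration, not the thesis's inference procedure) shows how such a prior lets the number of attribute-value clusters grow with the data.

    import random

    def chinese_restaurant_process(n_items, alpha=1.0, seed=0):
        """Assign n_items to clusters: an item joins an existing cluster with
        probability proportional to its size, or opens a new cluster with
        probability proportional to alpha."""
        random.seed(seed)
        clusters = []                     # current cluster sizes
        assignments = []
        for i in range(n_items):
            weights = clusters + [alpha]  # existing clusters plus a new one
            r = random.uniform(0, i + alpha)
            total, choice = 0.0, len(clusters)
            for k, w in enumerate(weights):
                total += w
                if r <= total:
                    choice = k
                    break
            if choice == len(clusters):
                clusters.append(1)        # open a new cluster
            else:
                clusters[choice] += 1
            assignments.append(choice)
        return assignments, clusters

    if __name__ == "__main__":
        _, sizes = chinese_restaurant_process(200, alpha=2.0)
        print(len(sizes), "clusters with sizes", sorted(sizes, reverse=True))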
|