211

Investigando a operatividade das imunidades tributárias / Investigating the operativity of tax immunities

Braga, Raquel Xavier Vieira January 2013
This dissertation examines the legal institute of tax immunity. Its first part is devoted to definition: it analyses the use of the word "immunity" and investigates the type of norm in which tax immunity is embedded, and then examines how the immunizing rule operates. Once the content, foundations and characteristics of the immunities are understood, the work formulates a conceptual definition of tax immunity, which opens the second part: a classificatory study. There, the operativity of tax immunities is investigated as follows: first, the way the rule is directed is examined; second, how the immunizing rule is manifested in the constitutional text; third, the rule's coverage with respect to the kinds of taxes and the situations it reaches; fourth, the normative force of tax immunities; fifth, the normative efficacy of tax immunity; and finally, the work analyses its mode of interpretation, examining the systematic criteria that control the interpretation of the immunizing rule.
212

Skolbiblioteket : definition och användning: en innehållsanalys av gymnasieskolans styrdokument 2012 / The definition and use of the school library : a content analysis of the secondary upper school policy documents 2012

Magnusson, Emma Linnea January 2013
The purpose of this thesis is to systematically describe and map how the authorities define the school library and its function in the current policy documents for the Swedish upper secondary school. The study is guided by the following questions: • What dimensions does the school library have according to the explicit definitions, descriptions and requirements? • What implicit arguments, references and unspoken assumptions about the school library and its role, function and significance can be derived? The study is delimited to the national policy documents, which have been divided into two groups – structural and teaching-focused – corresponding to the two questions above. Further delimitations were made to avoid noise and to keep a manual content analysis manageable. Each question required its own analysis tool: the first a dimension matrix, the second a set of ideal types adapted from models of the public library's functions. The results show that the policy documents explicitly define the school library as mainly two-dimensional (space and media), but also that other dimensions may be implied depending on how certain active verbs are interpreted. There are several implicit references to the library's ideal types "skills" and "citizens", but also to the ideal types "culture" and "community parlour". The conclusion is that the implicit expectations make it possible for the school library to play a greater role than the explicit defining framework requires. The defining framework is vague and unstructured and would benefit from clarification if the statutory requirement for school libraries is to have any significance. / Program: Bibliotekarie
213

Specialists or specialising generalists : a grounded theory of the role of the clinical pharmacist in neuroscience

Dorward, Benjamin J. January 2014
Neuroscience is a relatively small and emerging clinical pharmacy specialism focusing on drug therapy for neurological disease. Against a professional momentum towards specialist practice within pharmacy, there is a paucity both of relevant research and of a clearly defined role for specialist pharmacy practice in neuroscience. A qualitative research study was undertaken, using a constructivist grounded theory method, to explore how hospital-based pharmacists practising in neuroscience define and develop their role and specialism. Data were concurrently generated and analysed, through verbatim transcription of telephone interviews with fourteen pharmacists. Data analysis resulted in the identification of three processes: (1) acquiring and utilising knowledge in practice; (2) gatekeeping access to drug therapies; (3) integrating into the neuroscience service. The key findings within each process are: (1) pharmacists utilise different forms of knowledge, and there can be barriers to gaining knowledge; pharmacists identify strengths in their breadth of clinical knowledge and their holistic consideration of patients' drug therapy; (2) pharmacists act as barriers to drug therapy, but also act to expedite and secure access to drug therapy; (3) pharmacists act as an organisational nexus between pharmacy and neuroscience services and identify the importance of forming working relationships within neuroscience services, underpinned by trust. The study identified a basic social process: maintaining an overview of drug therapy for patients with neurological disease. This process conceptualises the tensions pharmacists experience between their role as near-patient-facing clinical specialists and their role as pharmacist generalists. The study findings have implications for supporting pharmacy practice in neuroscience.
214

Bortom bilens sfär : En kvalitativ studie om kontextens betydelse för en alternativ verklighetsuppfattning / Beyond the car's sphere : A qualitative study of the contextual significance of an alternative perception of reality

Erström, Elin, Larsson, Kirsi January 2019
The aim of the study is to interpret and understand the individual's experiences and perception of reality: to build an understanding of the car as a context and of how surrounding factors can influence the individual's perception of reality. The study examines how individuals experience their reality and their thoughts within the context of the car, as well as how the surroundings can influence their perception of reality. The study was conducted with a qualitative method and an inductive approach. The theoretical frame of reference is based on theoretical concepts such as social action, understanding, role-taking and the definition of the situation. The empirical material consists of ten semi-structured interviews and forms the basis of the study's results. The results show differing experiences and thoughts among the informants regarding their perceptions of reality, and also that the surroundings influence the informants' perception of reality.
215

Edith Stein's critique of Martin Heidegger : background, reasons and scope

Ripamonti, Lidia January 2013
This thesis is a critical assessment of Edith Stein's critique of Martin Heidegger, which centres on the definition of the human being. I explore Stein's ontology of the person from the point of view of her examination of Heidegger's existential 'Dasein' and the way she reaches a very different answer to the same question Heidegger posed, the question of the meaning of being. To this end I examine key passages of Stein's most important ontological work, Finite and Eternal Being - An Attempt at an Ascent to the Meaning of Being, along with its appendix Martin Heidegger's Philosophy of Existence, in which she directly discussed Heidegger's philosophy, focusing on his work Being and Time. In the first part of this research I present a historico-philosophical overview of the academic and political background of the period between World War I and World War II in Germany, in order to place both authors in context and to investigate their philosophical influences as well as their ambiguous relationship with the phenomenological school. The central part is dedicated to Stein's analysis of Heidegger's Dasein: I compare and explain both authors' approaches to the philosophical understanding of human being, person, life, soul and death. This investigation combines hermeneutical and terminological analysis. I draw upon the results to demonstrate how Stein's phenomenology of life experiences enlarges the borders of human finitude to embrace the possibility of its ontological horizon, while Heidegger restricts and concentrates the entire ontological question on Dasein, its existence and ultimately its finitude. My findings provide an assessment of the limits as well as the strengths of Stein's critique. I demonstrate that Stein attempted to build a bridge between classical ontology and phenomenology, while Heidegger's distance from the philosophical tradition was rooted in his methodological refusal of it. I also show how their opposing methods and findings present unexpected similarities, and how Stein's philosophical significance should be reconsidered in the light of her work. This research has various implications for today's philosophical debate and makes it possible to view Stein's theory of being in a wider ethical context, as presented in the final part of this work. I argue that Heidegger depersonalises and violates traditional ontology to explain the human being only in terms of pure existence, while Stein's portrait of the 'fullness' and the meaning of life contributes to the discussion between philosophy and religion. In the final section I show how elements emerging from Stein's critique of Heidegger can cast light on the current ethical discussion about how death is understood and experienced socially, and how best to care for the dying.
216

Desenvolvimento algorítmico e arquitetural para a estimação de movimento na compressão de vídeo de alta definição / Algorithmic and architectural development for motion estimation on high definition video compression

Porto, Marcelo Schiavon January 2012
Video compression is an extremely relevant topic today, mainly due to the significant growth in digital video applications. Without compression it is almost impossible to send or store digital video, given the large amount of data required, making applications such as high-definition digital television, video conferencing and mobile video calls unviable. The problem grows with the spread of high-definition video applications, where the amount of information is considerably larger. Many video coding standards have been developed in recent years, and all of them can achieve high compression rates. Current video coding standards obtain a significant part of their compression gains by exploiting temporal redundancy through motion estimation. However, the motion estimation algorithms in use today do not consider the variations inherent in high- and ultra-high-definition videos. This work presents an evaluation of motion estimation in high-definition videos, showing that well-known fast algorithms, widely used by the scientific community, do not maintain the same quality results as video resolution increases. This demonstrates the importance of developing new algorithms focused on very-high-definition videos, beyond HD 1080p. This thesis presents new fast motion estimation algorithms focused on high-definition video encoding. The algorithms developed in this thesis are less susceptible to local minima, yielding significant quality gains over conventional fast algorithms when applied to high-definition videos. In addition, this work also develops dedicated hardware architectures for these new algorithms, likewise targeted at high-definition video. The architectural development is extremely relevant, mainly for real-time applications at 30 frames per second, and also for mobile devices, where performance and power requirements are critical. All developed algorithms were evaluated on a set of ten HD 1080p test sequences, and their quality and computational cost were compared with known algorithms from the literature. The dedicated hardware architectures developed for the new algorithms were described in VHDL and synthesized for FPGAs and for ASIC, in standard cells in 0.18 µm and 90 nm technologies. The developed algorithms show quality gains over conventional fast algorithms for high-definition videos, and the developed architectures achieve high processing rates with low hardware resource and power consumption.
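The trade-off described above, between exhaustive search and fast algorithms that risk local minima, can be illustrated with a minimal block-matching sketch. This is generic sum-of-absolute-differences (SAD) matching over a small search window, not the thesis's own algorithms; the block size, window radius and frame layout are arbitrary choices for illustration:

```python
def sad(block_a, block_b):
    """Sum of absolute differences: the usual block-matching cost."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def get_block(frame, top, left, size):
    """Extract a size x size sub-block from a frame (list of rows)."""
    return [row[left:left + size] for row in frame[top:top + size]]

def full_search(ref, cur, top, left, size=8, radius=4):
    """Exhaustive motion search for one block within +/- radius pixels.

    Returns the motion vector (dy, dx) and its SAD cost. Fast algorithms
    (three-step, diamond search, ...) sample this same window sparsely,
    which is where convergence to a local minimum becomes possible.
    """
    block = get_block(cur, top, left, size)
    height, width = len(ref), len(ref[0])
    best_cost, best_mv = float("inf"), (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= height - size and 0 <= x <= width - size:
                cost = sad(get_block(ref, y, x, size), block)
                if cost < best_cost:
                    best_cost, best_mv = cost, (dy, dx)
    return best_mv, best_cost
```

A fast algorithm evaluates only a handful of the (2·radius+1)² candidates visited here, trading quality for speed; on high-definition content that sparse sampling is exactly where the local-minimum problem the thesis targets shows up.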
217

Analysis, coding, and processing for high-definition videos. / CUHK electronic theses & dissertations collection

January 2010
Today, high-definition (HD) videos are becoming more and more popular, with many applications. This thesis analyzes the characteristics of HD videos and accordingly develops appropriate coding and processing techniques for hybrid video coding. / Firstly, the characteristics of HD videos are studied quantitatively. The results show that HD videos are distinguished from lower-resolution videos by higher spatial correlation and a special power spectral density (PSD), mainly distributed along the vertical and horizontal directions. / Secondly, two techniques for HD video coding are developed based on these analysis results. To exploit the spatial property, 2D order-16 transforms are proposed to code the more highly correlated signals more efficiently. Specifically, two series of 2D order-16 integer transforms, named modified integer cosine transform (MICT) and non-orthogonal integer cosine transform (NICT), are studied and developed to provide different trade-offs between performance and complexity. Based on the special PSD property, a parametric interpolation filter (PIF) is proposed for motion-compensated prediction (MCP). Not only can PIF track the non-stationary statistics of video signals, as related work shows, but it also represents interpolation filters by parameters instead of individual coefficients, thus resolving the conflict between coefficient accuracy and side-information size. Experimental results show that the two proposed coding techniques significantly outperform their equivalents in the state-of-the-art international video coding standards. / Thirdly, interlaced HD videos are studied and, to satisfy different delay constraints, two real-time de-interlacing algorithms are proposed specifically for H.264-coded videos. They adapt to local activity according to syntax element (SE) values. Accuracy analysis is also introduced to deal with the disparity between SE values and the real motions and textures. The de-interlacers provide better visual quality than commonly used ones and can de-interlace 1080i sequences in real time on PCs. / Dong, Jie. / Adviser: King Ngi Ngan. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2010. / Includes bibliographical references (leaves 153-158).
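The first finding above, higher spatial correlation in HD material, can be illustrated with a simple neighbour-pixel statistic. The sketch below runs on synthetic data and is not the thesis's measurement protocol, just the generic Pearson correlation between horizontally adjacent pixels:

```python
def horizontal_correlation(frame):
    """Pearson correlation between each pixel and its right-hand neighbour.

    Smoother, more spatially correlated content scores closer to 1.0;
    statistics of this kind are what motivate larger (e.g. order-16)
    transforms for high-definition material.
    """
    xs, ys = [], []
    for row in frame:
        xs.extend(row[:-1])   # each pixel ...
        ys.extend(row[1:])    # ... paired with its right neighbour
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(xs, ys))
    var_x = sum((a - mean_x) ** 2 for a in xs)
    var_y = sum((b - mean_y) ** 2 for b in ys)
    return cov / (var_x * var_y) ** 0.5
```

A smooth gradient scores essentially 1.0, while uncorrelated noise scores near 0; the thesis's claim is that real HD frames sit closer to the first case than lower-resolution material does.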
218

The Effect of Various Word Treatments on Chinese Vocabulary Comprehension During Multimedia Language Learning

Kim, Jeongwoon 01 July 2015
The current study investigated the effect of viewing various types of word definitions (dictionary, glossed, and annotated definitions) on Chinese learners' vocabulary comprehension during video-based language learning. A total of 53 advanced learners of Chinese were given access to randomized word definitions while viewing the subtitled videos used in the experiment. Results showed that learners recalled words for which they had viewed definitions significantly better after the experiment than before. Viewing dictionary, glossed, or annotated definitions yielded comparable levels of word comprehension. In situations where learners did not view word definitions, however, word comprehension was not significantly higher on the posttest measures of recall. Attrition occurred whether or not participants viewed word definitions, but it was more severe for words whose definitions learners chose not to look up. The more often a target word occurred, the better it was comprehended. Survey results showed that participants preferred dictionary and annotated definitions over glossed definitions. The results suggest that word treatment is an effective aid to vocabulary comprehension. If language learners and teachers lack the resources for cost-intensive glosses and annotations, they can consider taking advantage of free automatic online dictionaries, which are as effective as glosses and annotations.
219

Avec ou sans équivalent : le poids de la définition dans une analyse lexicométrique des anglicismes / With or without equivalents: the impact of the definition on a lexicometric analysis of anglicisms

Planchon, Cécile 08 January 2019
This study addresses the impact of a definition paradigm shift on the use of anglicisms in the French-language written press. Built around a diachronic analysis (2000-2015) of the numerous differences linked to a change of definition, it also examines the impact such a change can have on comparative analyses focusing on geographical origin (France vs Quebec) or on the nature of the newspaper ("of record" vs "popular"). It aims to contribute to the literature in two ways. (1) This study is the only conceptual work to show what impact a change in the definition of "English borrowing" can have on the measured frequency of anglicism usage. Through an opposition between anglicisms with and without French equivalents, it shows that an analysis restricted to anglicisms for which French provides an equivalent, those Forest and Boudreau (1998) consider "unnecessary" or Villers (2009) calls "erroneous", yields a much lower usage frequency than an analysis based on all anglicisms. It also reveals that, of every three anglicisms extracted from the corpus, two have no French equivalent and are de facto unavoidable. (2) This research is, to my knowledge, one of the first comparative studies to show, with empirical evidence, how the definition affects not only usage frequency but also the way newspapers evolve over time on different criteria. Based on two angles of comparison (geographical origin and nature of the newspaper), I show that a change in definition leads to different evolution trajectories, different relationships between the newspapers, and different diachronic variations. To answer the research questions, I built one of the largest corpora ever used for a study on anglicisms: more than 330 million words, gathering all the articles published by four newspapers for the years 2000, 2005, 2010 and 2015: Le Monde and Le Parisien for France, and Le Devoir and La Presse for Quebec. I also used two lists of keywords, of 5,416 and 2,934 anglicisms respectively, drawn from two general dictionaries (Le Petit Robert 2016 and Le Multidictionnaire de la langue française, 4th edition, 2013) and two specialized dictionaries (Höfler's Dictionnaire des anglicismes, 1982, and Le Colpron, dictionnaire des anglicismes, 4th edition). First, the results show that usage frequency is generally low (0.72% for all anglicisms; 0.28% for anglicisms with equivalents; 0.44% for anglicisms without equivalents) both in France and in Quebec, which is consistent with the findings of previous studies. However, there are clear differences between the analyses: the frequency rate for anglicisms with equivalents is 1.5 times lower than the rate for anglicisms without equivalents. This shows that the definition of the very concept of "anglicism" directly influences the reported frequency of usage. Second, the change of definition also affects comparative analyses focused on specific criteria such as the geographical origin or the nature of the newspapers. The study thus reveals clear differences in the 15-year trends depending on which definition is used: for instance, results for the Quebec newspapers slowly decrease over time where anglicisms with equivalents are concerned, especially for Le Devoir, whereas they increase for anglicisms without equivalents. As to the nature of the newspaper, Le Parisien, the newspaper that uses the most anglicisms of the four, forms a much more heterogeneous pair with La Presse when only anglicisms with equivalents are taken into account. The results also bring out Le Devoir and Le Monde, whose evolution trends are close to identical under the broader definition of anglicism but somewhat more asymmetric when equivalents come into play.
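The frequency figures above are, at bottom, token counts against a keyword list. A toy sketch of that measure follows; the word list and sentence are hypothetical, whereas the study's real lists hold 5,416 and 2,934 dictionary-checked entries and its corpus exceeds 330 million words:

```python
import re

def anglicism_rate(text, anglicisms):
    """Share of tokens appearing in a given anglicism list (case-insensitive).

    Mirrors the study's frequency measure in spirit only: tokenization
    here is a simple regex over French letters, apostrophes and hyphens.
    """
    tokens = re.findall(r"[a-zàâçéèêëîïôùûüœ'-]+", text.lower())
    hits = sum(1 for token in tokens if token in anglicisms)
    return hits / len(tokens) if tokens else 0.0
```

Running separate counts against a "with equivalents" list and a "without equivalents" list, as the study does, is then just two calls with two different keyword sets over the same corpus.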
220

A Problem Model for Decision Support Systems

Cameron, Mark A., Mark.Cameron@csiro.au, January 2000
This body of research focuses on supporting problem stakeholders, decision-makers and problem-solvers faced with an ill-defined and complex real-world problem. An ill-defined problem has a characteristic trait of continual refinement: the definition of the problem changes throughout the problem investigation and resolution process. The central theme of this research is that a support system should provide problem stakeholders with a problem definition model for constructing and manipulating a representation of the definition of the problem as they understand it. The approach adopted herein is first to develop a problem definition model for ill-defined problems: the 6-Component problem definition model. With this model, it is then possible to identify the types of changes or modifications to the problem definition that problem stakeholders, decision-makers and problem-solvers may wish to explore. Importantly, there must be a connection between the surface representation of the problem and the underlying implementation of the support system. This research argues that by focusing the support system around the problem definition, it is possible to reduce the mismatch between the problem objectives and the representation of the problem that the support system offers. This research uses the Unified Modelling Language to record and explore the requirements that problem stakeholders, faced with an evolving problem definition, place on a support system. The 6-Component problem definition model is then embedded within a design for an evolutionary support system. This embedding, supported by collaboration diagrams, shows how a system using the 6-Component problem definition model will support stakeholders in their exploration, evaluation and resolution of an ill-defined and complex real-world problem. A case study provides validation of the effectiveness of the 6-Component problem definition model proposed and developed in this work.
The case study uses the 6-Component problem definition model as a basis for implementing the Integration Workbench, an evolutionary support system for land-use planning. Stakeholders explore, communicate, evaluate and resolve the Tasmanian Regional Forest Agreement problem with assistance from the Integration Workbench.
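The abstract does not enumerate the six components of the 6-Component model, so no attempt is made to reproduce them here. The sketch below only illustrates, with placeholder component names, the surrounding idea: a problem definition as a first-class, continually refined object whose changes a support system records rather than overwrites:

```python
from dataclasses import dataclass, field

@dataclass
class ProblemDefinition:
    """A mutable problem-definition record for an evolving problem.

    Component names are hypothetical placeholders, not the thesis's six
    components. The point shown is that each refinement is kept as
    history, so stakeholders can trace how the definition evolved.
    """
    components: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

    def refine(self, component, value, author):
        """Record a change to one component instead of silently overwriting it."""
        previous = self.components.get(component)
        self.components[component] = value
        self.history.append((component, previous, value, author))
```

In a system of this shape, the kind of land-use planning exercise the case study describes would be a sequence of `refine` calls from different stakeholders, with the history available for exploration and evaluation.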
