31

Uppfylls vallöften i EU? : En jämförande studie av de svenska riksdagspartiernas uppfyllnadsgrad av vallöften inför Europaparlamentsvalet 2014 / Are election promises fulfilled in the EU? : A comparative study of the Swedish parliamentary parties' level of fulfillment of election promises ahead of the 2014 European Parliament election

Edenmyr, Ester January 2023 (has links)
The European Parliament elections have since the 1980s been described as 'second-order national elections', meaning, among other things, that they matter less to both political parties and voters. Scholars have often described political parties as a weak link between the European Union and its citizens. Previous studies of the fulfillment of election promises have mainly focused on national governments, not on national political parties in the European Parliament. The purpose of this descriptive study is to investigate the level of fulfillment of the election promises that Swedish parliamentary parties made in their election manifestos for the 2014 European Parliament election. Based on previous research, the study tests five hypotheses on the material. Mapping and analyzing 183 election promises from 8 election manifestos, it finds a lower level of fulfillment than Swedish governments usually achieve in the national arena. The results show one possible covariation, between the party group a party belongs to and fulfillment, but no clear patterns between the characteristics of an election promise and its fulfillment. The findings give reason to investigate further and to better understand the election promises made ahead of European Parliament elections.
32

Limit of detection for second-order calibration methods

Rodríguez Cuesta, Mª José 02 June 2006 (has links)
Analytical chemistry can be split into two main types, qualitative and quantitative. Most modern analytical chemistry is quantitative. Popular sensitivity to health issues is reflected in the many government regulations that use science to, for instance, provide public health information to prevent disease caused by harmful exposure to toxic substances. The concept of the minimum amount of an analyte or compound that can be detected or analysed appears in many of these regulations (for example, to rule out the presence of traces of toxic substances in foodstuffs), generally as part of method validation aimed at reliably evaluating the validity of the measurements.

The lowest quantity of a substance that can be distinguished from the absence of that substance (a blank value) is called the detection limit or limit of detection (LOD). Traditionally, in the context of simple measurements where the instrumental signal depends only on the amount of analyte, a multiple of the blank value is taken to calculate the LOD (traditionally, the blank value plus three times the standard deviation of the measurement). However, the increasing complexity of the data that analytical instruments can provide for incoming samples leads to situations in which the LOD cannot be calculated as reliably as before.

Measurements, instruments and mathematical models can be classified according to the type of data they use. Tensorial theory provides a unified language that is useful for describing chemical measurements, analytical instruments and calibration methods. Instruments that generate two-dimensional arrays of data are second-order instruments. A typical example is a spectrofluorometer, which provides a set of emission spectra obtained at different excitation wavelengths.

The calibration methods used with each type of data differ in features and complexity. In this thesis, the most commonly used calibration methods are reviewed, from zero-order (univariate) to second-order (multilinear) calibration models. Second-order calibration models are treated in detail since they are the ones applied in the thesis. Specifically, the following methods are described:

- PARAFAC (Parallel Factor Analysis)
- ITTFA (Iterative Target Transformation Factor Analysis)
- MCR-ALS (Multivariate Curve Resolution-Alternating Least Squares)
- N-PLS (Multi-linear Partial Least Squares)

Analytical methods should be validated. The validation process typically starts by defining the scope of the analytical procedure, which includes the matrix, target analyte(s), analytical technique and intended purpose. The next step is to identify the performance characteristics that must be validated, which may depend on the purpose of the procedure, and the experiments for determining them. Finally, validation results should be documented, reviewed and maintained (if not, the procedure should be revalidated) as long as the procedure is applied in routine work.

The figures of merit of a chemical analytical process are 'those quantifiable terms which may indicate the extent of quality of the process. They include those terms that are closely related to the method and to the analyte (sensitivity, selectivity, limit of detection, limit of quantification, ...) and those which are concerned with the final results (traceability, uncertainty and representativity)' (Inczédy et al., 1998). The aim of this thesis is to develop theoretical and practical strategies for calculating the limit of detection for complex analytical situations.
Specifically, I focus on second-order calibration methods, i.e. those used when a matrix of data is available for each sample.

The methods most often used for making detection decisions are based on statistical hypothesis testing and involve a choice between two hypotheses about the sample. The first is the null hypothesis: the sample is analyte-free. The second is the alternative hypothesis: the sample is not analyte-free. In this hypothesis test there are two possible types of decision error. An error of the first type occurs when the signal for an analyte-free sample exceeds the critical value, leading one to conclude incorrectly that the sample contains a positive amount of the analyte; this is sometimes called a "false positive". An error of the second type occurs if one concludes that a sample does not contain the analyte when it actually does; this is known as a "false negative". In zero-order calibration, this hypothesis test is applied to the confidence intervals of the calibration model to estimate the LOD, as proposed by Hubaux and Vos (A. Hubaux, G. Vos, Anal. Chem. 42: 849-855, 1970).

One strategy for estimating multivariate limits of detection is to transform the multivariate model into a univariate one. This strategy has been applied in this thesis in three practical applications:

1. LOD for PARAFAC (Parallel Factor Analysis).
2. LOD for ITTFA (Iterative Target Transformation Factor Analysis).
3. LOD for MCR-ALS (Multivariate Curve Resolution - Alternating Least Squares).

In addition, the thesis includes a theoretical contribution: the proposal of a sample-dependent LOD in the context of multivariate (PLS) and multi-linear (N-PLS) Partial Least Squares.
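As a concrete point of reference for the zero-order case described in the abstract, the traditional blank-plus-three-standard-deviations rule can be sketched in a few lines of Python. This is only an illustration of the classical univariate rule, not of the multivariate procedures developed in the thesis, and the blank values are invented for the example.

    import statistics

    # Replicate measurements of an analyte-free (blank) sample
    # (hypothetical values, for illustration only).
    blanks = [0.101, 0.098, 0.103, 0.100, 0.097, 0.102]

    mean_blank = statistics.mean(blanks)
    sd_blank = statistics.stdev(blanks)  # sample standard deviation

    # Traditional zero-order LOD: blank value plus three times
    # the standard deviation of the blank measurement.
    lod_signal = mean_blank + 3 * sd_blank

    # Detection decision: a signal above the threshold is taken to
    # indicate that the sample is not analyte-free.
    def analyte_detected(signal: float) -> bool:
        return signal > lod_signal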
33

Fakta och innehåll framför förmågor och analys? : Strategier och metoder för tankebegrepp i historieundervisningen / Facts and content over skills and analysis? : Strategies and methods for thinking concepts in history teaching

Thoresen, Jakob January 2016 (has links)
According to the history syllabus for grades 7-9, students must demonstrate around ten skills, beyond knowledge of the historical content, in order to reach the lowest passing grade, E. Research shows that students find it difficult to construct historical knowledge, and that the subject content is often thinly described. This bachelor's thesis is a literature study of research and literature within a history-didactics context. The material was gathered using a hermeneutic method and has been analyzed through historical thinking theory's focus on thinking concepts. According to historical thinking theory, thinking concepts are tools for organizing and managing historical thinking and its presentation. One example of a concept treated in the study is cause and consequence; six thinking concepts are studied in total. The study aims to examine which teaching methods and strategies can be used in history teaching on the basis of historical thinking theory's focus on thinking concepts. The results show, among other things, that teaching methods and strategies that depict the content from several perspectives, pose questions that open up for analysis rather than mere facts, and teach the concepts explicitly are factors that create the conditions for strengthening students' abilities to present and organize their historical knowledge.
34

An Attempt to Automate NP-Hardness Reductions via SO∃ Logic

Nijjar, Paul January 2004 (has links)
We explore the possibility of automating NP-hardness reductions. We motivate the problem from an artificial intelligence perspective, then propose the use of second-order existential (SO∃) logic as a representation language for decision problems. Building upon the theoretical framework of J. Antonio Medina, we explore the possibility of implementing seven syntactic operators. Each operator transforms SO∃ sentences in a way that preserves NP-completeness. We subsequently propose a program which implements these operators. We discuss a number of theoretical and practical barriers to this task. We prove that determining whether two SO∃ sentences are equivalent is as hard as GRAPH ISOMORPHISM, and prove that determining whether an arbitrary SO∃ sentence represents an NP-complete problem is undecidable.
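As background for the representation language: by Fagin's theorem, the properties expressible in existential second-order logic are exactly those decidable in NP. A standard textbook example (not drawn from the thesis) is 3-COLOURABILITY of a graph with edge relation E, written as an SO∃ sentence:

    ∃R ∃G ∃B [ ∀x (R(x) ∨ G(x) ∨ B(x))
               ∧ ∀x ∀y (E(x,y) → ¬(R(x) ∧ R(y))
                                ∧ ¬(G(x) ∧ G(y))
                                ∧ ¬(B(x) ∧ B(y))) ]

The second-order quantifiers guess three colour classes and the first-order part checks that adjacent vertices never share a class; a graph satisfies the sentence exactly when it is 3-colourable.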
35

Weak cost automata over infinite trees

Vanden Boom, Michael T. January 2012 (has links)
Cost automata are traditional finite state automata enriched with a finite set of counters that can be manipulated on each transition. Based on the evolution of counter values, a cost automaton defines a function from the set of structures under consideration to the natural numbers extended with infinity, modulo a boundedness relation that ignores exact values but preserves boundedness properties. Historically, variants of cost automata have been used to solve problems in language theory such as the star height problem. They also have a rich theory in their own right as part of the theory of regular cost functions, which was introduced by Colcombet as an extension to the theory of regular languages. It subsumes the classical theory since a language can be associated with the function that maps every structure in the language to 0 and everything else to infinity; it is a strict extension since cost functions can count some behaviour within the input. Regular cost functions have been previously studied over finite words and trees. This thesis extends the theory to infinite trees, where classical parity automata are enriched with a finite set of counters. Weak cost automata, which have priorities {0,1} or {1,2} and an additional restriction on the structure of the transition function, are shown to be equivalent to a weak cost monadic logic. A new notion of quasi-weak cost automata is also studied and shown to arise naturally in this cost setting. Moreover, a decision procedure is given to determine whether or not functions definable using weak or quasi-weak cost automata are equivalent up to the boundedness relation, which also proves the decidability of the weak cost monadic logic over infinite trees. The semantics of these cost automata over infinite trees are defined in terms of cost-parity games which are two-player infinite games where one player seeks to minimize the counter values and satisfy the parity condition, and the other player seeks to maximize the counter values or sabotage the parity condition. The main contributions and key technical results involve proving that certain cost-parity games admit positional or finite-memory strategies. These results also help settle the decidability of some special cases of long-standing open problems in the classical theory. In particular, it is shown that it is decidable whether a regular language of infinite trees is recognizable using a nondeterministic co-Büchi automaton. Likewise, given a Büchi or co-Büchi automaton as input, it is decidable whether or not there is a weak automaton recognizing the same language.
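In symbols, the embedding of a language L into the cost setting described above is its characteristic cost function (the notation here is mine, not the thesis's):

    χ_L(t) = 0   if t ∈ L
    χ_L(t) = ∞   otherwise

Two such functions are equivalent modulo the boundedness relation exactly when the underlying languages coincide, so the embedding loses no information, while genuine counting behaviour makes the cost setting strictly larger.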
36

Rethinking Populism: Peak democracy, liquid identity and the performance of sovereignty

Blühdorn, Ingolfur, Butzlaff, Felix January 2019 (has links) (PDF)
Despite the burgeoning literature on right-wing populism, there is still considerable uncertainty about its causes, its impact on liberal democracies and about promising counter-strategies. Inspired by recent suggestions that (1) the emancipatory left has made a significant contribution to the proliferation of the populist right; and (2) populist movements, rather than challenging the established socio-political order, in fact stabilize and further entrench its logic, this article argues that an adequate understanding of the populist phenomenon necessitates a radical shift of perspective: beyond the democratic and emancipatory norms, which still govern most of the relevant literature. Approaching its subject matter via democratic theory and modernization theory, it undertakes a reassessment of the triangular relationship between modernity, democracy and populism. It finds that the latter is not helpfully conceptualized as anti-modernist or anti-democratic but should, instead, be regarded as a predictable feature of the form of politics distinctive of today's third modernity.
37

Contribuição da rigidez transversal à flexão das lajes na distribuição dos esforços em estruturas de edifícios de andares múltiplos, em teoria de segunda ordem / Contribution of the transverse bending stiffness of slabs to the distribution of forces in multistory building structures, in second-order theory

Martins, Carlos Humberto 10 August 1998 (has links)
The main aim of this work is to calculate the stresses and displacements of three-dimensional multistory building structures subjected to vertical and lateral loads, considering the transverse bending stiffness of the slabs, in second-order theory. The plate finite element adopted in the floor discretization, responsible for bringing the bending stiffness of the slabs into the analysis of the building, is the DKT (Discrete Kirchhoff Theory) element. For the columns, force equilibrium is verified in the deformed position, known in the technical literature as second-order analysis, so that geometric non-linearity is taken into account. To calculate the forces and displacements in the structure, serial and parallel substructuring techniques are applied to its global stiffness matrix. A computer program was developed for the calculation process, written in Fortran Power Station 90 with pre- and post-processors in Visual Basic 4.0 for the Windows environment. Finally, some examples are presented to check the validity of the calculation process.
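The substructuring mentioned in the abstract rests on static condensation: a substructure's internal degrees of freedom are eliminated from the global system before assembly. A minimal NumPy sketch of one condensation step is given below; the function name and the partitioning into kept and dropped degrees of freedom are illustrative assumptions, not code from the thesis (whose program was written in Fortran Power Station 90).

    import numpy as np

    def condense(K: np.ndarray, f: np.ndarray, keep, drop):
        """Statically condense the 'drop' DOFs out of K u = f:
        K_red = K_kk - K_kd K_dd^-1 K_dk,  f_red = f_k - K_kd K_dd^-1 f_d."""
        Kkk = K[np.ix_(keep, keep)]
        Kkd = K[np.ix_(keep, drop)]
        Kdk = K[np.ix_(drop, keep)]
        Kdd = K[np.ix_(drop, drop)]
        Kdd_inv_Kdk = np.linalg.solve(Kdd, Kdk)   # K_dd^-1 K_dk
        Kdd_inv_fd = np.linalg.solve(Kdd, f[drop])  # K_dd^-1 f_d
        K_red = Kkk - Kkd @ Kdd_inv_Kdk
        f_red = f[keep] - Kkd @ Kdd_inv_fd
        return K_red, f_red

Solving the condensed system for the retained degrees of freedom and then back-substituting recovers the internal displacements, which is what allows substructures to be processed serially or in parallel.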
38

Where is the mind of the media editor? : an analysis of editors as intermediaries between technology and the cinematic experience

de Selincourt, Chris January 2016 (has links)
What space does the mind occupy? A standard response might be to locate the mind within the brain. However, some argue that our mental processes also extend beyond the boundaries of the brain. Gallagher & Zahavi (2008) have termed these two views of the mind internalism and externalism. In cinema, the editor's role as mediator between the cognitive activities of filmmakers and audiences and the editing equipment makes editing practice particularly well suited to investigating these two seemingly incompatible views. When editors cut or join chunks of sound and image, they assemble externally what some would recognise internally as the mind's fluctuations between one object of attention and the next. Their activities reveal a side of cinema, but also of the mind, that is usually hidden from view. The purpose of this thesis is therefore to show how studying the process of editing contributes to our understanding of the relationship between mind and world. To address the question of where the editor's mental processes are located, this study applies a phenomenographic methodology. Rather than attempt to understand cognition from a preconceived or objectively constituted position, phenomenography starts by examining variation in how a group of individuals view a particular process. This leads to research findings presented from a 'second-order perspective' (Marton, 1981). In this thesis, an understanding of how audiovisual material is selected and sequenced is drawn from fourteen interviews with British editors and directors. From the analysis of these interviews, a framework of five critical, interrelated ways of approaching the editing process emerged. This evidence suggests that the cognitive process occurs in virtue of an editor's physical activities, the editing equipment, and a broader network of social and cultural relations that support the filmmaking environment. Refuting the belief that the mind is separate from the world, the editor's mental processes are found distributed among a variety of internal and external features of their environment. The outcome of this thesis is a phenomenographic perspective on the editing process. This, I conclude, will help inform cognitive scientists about the kinds of mental processes that editors are aware of. It also provides a wider audience of scholars with a framework for further research on variation in the process and practice of editing.
39

Comprimento efetivo de colunas de aço em pórticos deslocáveis / Effective length of steel columns in plane unbraced frames

Antunes, Maurício Carmo 14 September 2001 (has links)
In the practice of analysis and design of steel structures, instability calculations play an important role, since the high strength of steel encourages the use of significantly slender columns. In the analysis of multi-storey steel plane frames, it is usual to employ the well-known K factor, which defines an effective length for a column. This factor is usually obtained from nomograms based on two different hypotheses for the instability mode: a sway mode and a mode with lateral displacements prevented. That division, and the models usually used to deal with it, are incomplete for frames whose behaviour departs from the adopted hypotheses and simplifications, and they can lead to confusion and misunderstanding in the use of the K factor. In this work, alternative models for determining the K factor are presented, aiming at greater generality and at clearing up some possible ambiguities in its use; the models are then applied to a series of examples. As a complement, a computer program was created to determine first- and second-order nodal displacements and member forces, as well as alternative nomograms; the results obtained from the models are contrasted with those given by the program and the nomograms.
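The nomograms mentioned in the abstract encode a transcendental relation between K and the stiffness ratios G_A and G_B at the column ends. For the sway case, the classical alignment-chart equation can be solved numerically instead of read off a chart. The sketch below assumes the standard sway-frame alignment-chart equation; it illustrates the conventional nomogram model, not the alternative models proposed in the work.

    import math
    from scipy.optimize import brentq

    def k_factor_sway(Ga: float, Gb: float) -> float:
        """Effective length factor K for a column in a sway frame,
        from the classical alignment-chart equation
        (Ga*Gb*(pi/K)^2 - 36) / (6*(Ga+Gb)) = (pi/K) / tan(pi/K)."""
        def residual(x: float) -> float:  # x = pi / K
            return (Ga * Gb * x**2 - 36.0) / (6.0 * (Ga + Gb)) - x / math.tan(x)
        x = brentq(residual, 1e-6, math.pi - 1e-6)
        return math.pi / x

    # Example: equal joint stiffness ratios at both column ends.
    print(k_factor_sway(1.0, 1.0))  # about 1.3, matching the chart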
40

Pedagogisk dokumentation : En kvalitativ studie om pedagogers uppfattningar kring arbetet med pedagogisk dokumentation i relation till det systematiska kvalitetsarbetet i Reggio Emiliainspirerade förskolor / Pedagogical documentation : A qualitative study of educators' perceptions of the work with pedagogical documentation in relation to systematic quality work in Reggio Emilia-inspired preschools

Seyhan, Ninva January 2018 (has links)
In this study I have investigated what perceptions educators have of pedagogical documentation and how they perceive that pedagogical documentation helps them discover and visualize learning. I have also investigated how educators perceive that the size of the child group and the systematic quality work affect the work with pedagogical documentation. The purpose is to investigate educators' perceptions of these tools in order to contribute to the development of preschool work. I have carried out a qualitative study in the form of interviews. The study takes the phenomenological research approach as its starting point, and the results show that there are both shared and differing perceptions of the systematic quality work and of the work with pedagogical documentation.
