251

[pt] O FORMALISMO CLOCKWORK PARA HIERARQUIAS NATURAIS DE FÉRMIONS / [en] THE CLOCKWORK APPROACH TO NATURAL FERMION HIERARCHIES

FERNANDO ABREU ROCHA DE SOUZA 02 August 2019
[pt] O Modelo Padrão de física de partículas é uma das teorias mais bem estabelecidas no campo da física, sendo capaz de fazer previsões verificadas experimentalmente até doze algarismos significativos. No entanto, o Modelo deixa algumas perguntas sem resposta, o que vem perturbando os físicos por muitos anos. Uma dessas questões é a estrutura hierárquica presente no setor dos férmions, onde as matrizes de Yukawa possuem autovalores que diferem um do outro por várias ordens de magnitude. Outro aspecto cabível de investigação é relacionado com a matriz CKM, responsável pela mistura entre férmions de sabores distintos. Por que tal matriz é aproximadamente diagonal e por que os ângulos de mistura são tão pequenos? Por que o elétron é muito mais leve que seus primos de outras gerações? A mesma pergunta pode ser feita para os quarks e o Modelo Padrão não seria capaz de responder nenhuma delas. Nesse trabalho, uma explicação proposta vem da utilização de um novo modelo, chamado de Mecanismo Clockwork, que pressupõe a existência de novos férmions pesados, nomeados Clockwork Gears, que são capazes de naturalmente gerar acoplamentos exponencialmente suprimidos a partir de Yukawas de ordem um, após a ocorrência de quebra espontânea de simetria. Além disso, simulações foram feitas com o objetivo de otimizar os parâmetros livres do modelo, assim como confirmar sua eficiência em acomodar os dados experimentais. Por fim, foi feita uma análise de alguns processos, envolvendo correntes neutras que trocam sabor, no regime de teoria efetiva de campo, para poder-se estipular um limite para a escala típica de massa para essas novas partículas. / [en] The Standard Model of particle physics is one of the most well-established theories in the field of physics and is able to make predictions that have been experimentally verified up to twelve significant figures. However, the theory leaves some unanswered questions that have been bothering physicists for many years. One of these questions is the hierarchical structure of the fermion sector, where the Yukawa matrices have eigenvalues that differ from each other by several orders of magnitude. Another aspect concerns the CKM matrix, which dictates the mixing between fermions of distinct flavours: why is this matrix almost diagonal, and why are the mixing angles so small? Why is the electron so much lighter than its cousins from different generations? The same question can be asked of the quarks, and the Standard Model is unable to answer any of them. In this work, an explanation is proposed by employing a novel model, called the Clockwork Mechanism, which assumes the existence of new heavy fermions, named Clockwork Gears, that naturally generate exponentially suppressed couplings out of order-one Yukawas after spontaneous symmetry breaking occurs. In addition, simulations were run in order to optimize the free parameters of the model, as well as to confirm its efficiency at accommodating the experimental data. Lastly, a few processes involving Flavour Changing Neutral Currents were considered in the effective field theory regime as a means to set a bound on the typical mass scale of these new particles.
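As background to the mechanism named in this abstract, the exponential suppression can be summarised in one line. This is a standard back-of-the-envelope sketch with assumed conventions, not a formula quoted from the thesis: with N gear fields coupled with a nearest-neighbour ratio q > 1, the light chiral zero mode is exponentially localised away from the last gear, so an order-one Yukawa attached to that site is suppressed roughly as q^{-N}:

```latex
% Clockwork suppression, sketched with assumed conventions (not quoted from the thesis).
% The zero mode has the site profile f_0 \propto (1, q^{-1}, q^{-2}, \dots, q^{-N}),
% so an O(1) Yukawa coupled to the N-th site is effectively
\[
  y_{\mathrm{eff}} \;\sim\; \frac{y_{\mathcal{O}(1)}}{q^{N}},
  \qquad\text{e.g. } q = 3,\; N = 12 \;\Rightarrow\; y_{\mathrm{eff}} \sim 2\times 10^{-6},
\]
% roughly the size of the electron Yukawa, obtained without any small input parameter.
```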
252

Visualizations for simulation-based training : Enhancing the evaluation of missile launch events during after-action reviews of air combat simulation / Visualiseringar för simulatorbaserad utbildning : Förbättring av utvärderingen av robotskott under after-action reviews för luftstridssimulering

ter Vehn, Pontus January 2016
This thesis work has been part of an effort to improve the after-action reviews of the air combat simulator training sessions conducted at the Swedish Air Force Combat Simulation Centre (FLSC). Initial studies identified three main needs regarding the evaluation of air-to-air missile shots during beyond-visual-range combat simulation. These needs included an improved detection of where and when in the simulation playback a missile shot took place, a collected view of flight parameters to prevent confusion and cross-referencing between the various displays, as well as the ability to review an aircraft’s flight parameters over time in order to discuss alternative shooting opportunities or maneuvering patterns. To fulfill these three needs, design studies were performed iteratively in collaboration with staff at the FLSC. This work has resulted in a design proposal with a prototype based on the design guidelines and recommendations of the study's participants. The purpose of the visualization is to provide support for instructors and promote the individual learning of pilots. Hopefully, this can ultimately help in answering the question regarding why a missile missed its target. For instructors and air units such aids could mean that operating errors can be more easily identified and also form a basis for discussion during the assessment briefings. / Detta examensarbete har haft som syfte att förbättra utvärderingen av luftstridssimuleringar som bedrivs vid det svenska flygvapnets luftstridssimuleringscentrum, FLSC. Inledande studier identifierade tre huvudsakliga behov för utvärderingen av flygplansburna robotskott avfyrade mot luftmål utom synhåll, på långa avstånd. Dessa behov inkluderar en förbättring när det gäller att upptäcka var och när i en simuleringsuppspelning som ett robotskott har skett, en samlad vy över flygparametrar för att förhindra förvirring och korsreferering mellan olika skärmar, samt möjligheten att utvärdera ett flygplans flygparametrar över tid för att kunna diskutera alternativa avfyrningsmöjligheter eller manövreringsmönster. För att fylla dessa tre behov har iterativa designstudier utförts i samarbete med personalen på FLSC. Detta har resulterat i ett designförslag med en prototyp baserad på de designriktlinjer och -rekommendationer som studiens deltagare delgett. Syftet med visualiseringen är att ge stöd till instruktörer och främja piloters individuella inlärning. Förhoppningsvis kan detta i slutändan bidra till att svara på frågan om varför en robot missade sitt mål. För instruktörer och flygförband kan ett sådant hjälpmedel underlätta identifiering av felmanövreringar och även ligga till grund för värdefulla diskussioner under analysen av genomförda luftsstridsimuleringar.
253

Study on Air Interface Variants and their Harmonization for Beyond 5G Systems

Flores de Valgas Torres, Fernando Josue 22 March 2021
[ES] La estandarización de la Quinta Generación de redes móviles o 5G, ha concluido este año 2020. No obstante, en el año 2014 cuando la ITU empezó el proceso de estandarización IMT-2020, una de las principales interrogantes era cuál sería la forma de onda sobre la cual se construiría la capa física de esta nueva generación de tecnologías. El 3GPP se comprometió a entregar una tecnología candidata al proceso IMT-2020, y es así como dentro de este proceso de deliberación se presentaron varias formas de onda candidatas, las cuales fueron evaluadas en varios aspectos hasta que en el año 2016 el 3GPP tomó una decisión, continuar con CP-OFDM (utilizada en 4G) con numerología flexible. Una vez decidida la forma de onda, el proceso de estandarización continuó afinando la estructura de la trama, y todos los aspectos intrínsecos de la misma. Esta tesis acompañó y participó de todo este proceso. Para empezar, en esta disertación se evaluaron las principales formas de onda candidatas al 5G. Es así que se realizó un análisis teórico de cada forma de onda, destacando sus fortalezas y debilidades, tanto a nivel de implementación como de rendimiento. Posteriormente, se llevó a cabo una implementación real en una plataforma Software Defined Radio de tres de las formas de onda más prometedoras (CP-OFDM, UFMC y OQAM-FBMC), lo que permitió evaluar su rendimiento en términos de la tasa de error por bit, así como la complejidad de su implementación. Esta tesis ha propuesto también el uso de una solución armonizada como forma de onda para el 5G y sostiene que sigue siendo una opción viable para sistemas beyond 5G. Dado que ninguna de las forma de onda candidatas era capaz de cumplir por sí misma con todos los requisitos del 5G, en lugar de elegir una única forma de onda se propuso construir un transceptor que fuese capaz de construir todas las principales formas de onda candidatas (CP-OFDM, P-OFDM, UFMC, QAM-FBMC, OQAM-FBMC). Esto se consiguió identificando los bloques comunes entre las formas de onda, para luego integrarlos junto con el resto de bloques indispensables para cada forma de onda. La motivación para esta solución era tener una capa física que fuese capaz de cumplir con todos los aspectos del 5G, seleccionando siempre la mejor forma de onda según el escenario. Esta propuesta fue evaluada en términos de complejidad, y los resultados se compararon con la complejidad de cada forma de onda. La decisión de continuar con CP-OFDM con numerología flexible como forma de onda para el 5G se puede considerar también como una solución armonizada, ya que al cambiar el prefijo cíclico y el número de subportadoras, cambian también las prestaciones del sistema. En esta tesis se evaluaron todas las numerologías propuestas por el 3GPP sobre cada uno de los modelos de canal descritos para el 5G (y considerados válidos para sistemas beyond 5G), teniendo en cuenta factores como la movilidad de los equipos de usuario y la frecuencia de operación; para esto se utilizó un simulador de capa física del 3GPP, al que se hicieron las debidas adaptaciones con el fin de evaluar el rendimiento de las numerologías en términos de la tasa de error por bloque. Finalmente, se presenta un bosquejo de lo que podría llegar a ser la Sexta Generación de redes móviles o 6G, con el objetivo de entender las nuevas aplicaciones que podrían ser utilizadas en un futuro, así como sus necesidades. 
Completado el estudio llevado a cabo en esta tesis, se puede afirmar que como se propuso desde un principio la solución, tanto para el 5G como para beyond 5G, la solución es la armonización de las formas de onda. De los resultados obtenidos se puede corroborar que una solución armonizada permite alcanzar un ahorro computacional entre el 25-40% para el transmisor y del 15-25% para el receptor. Además, fue posible identificar qué numerología CP-OFDM es la más adecuada para cada escenario, lo que permitiría optimizar el diseño y despliegue de las redes 5G. Esto abriría la puerta a hacer lo mismo con el 6G, ya que en esta tesis se considera que será necesario abrir nuevamente el debate sobre cuál es la forma de onda adecuada para esta nueva generación de tecnologías, y se plantea que el camino a seguir es optar por una solución armonizada con distintas formas de onda, en lugar de solo una como sucede con el 5G. / [CA] L'estandardització de la Quinta Generació de xarxes mòbils o 5G, ha conclòs enguany 2020. No obstant això, l'any 2014 quan la ITU va començar el procés d'estandardització IMT-2020, uns dels principals interrogants era quina seria la forma d'onda sobre la qual es construiria la capa física d'esta nova generació de tecnologies. El 3GPP es va comprometre a entregar una tecnologia candidata al procés IMT-2020, i és així com dins d'este procés de deliberació es van presentar diverses formes d'onda candidates, les quals van ser avaluades en diversos aspectes fins que l'any 2016 el 3GPP va prendre una decisió, continuar amb CP-OFDM (utilitzada en 4G) amb numerología flexible. Una vegada decidida la forma d'onda, el procés d'estandardització va continuar afinant l'estructura de la trama, i tots els aspectes intrínsecs de la mateixa. Esta tesi va acompanyar i va participar de tot este procés. Per a començar, en esta dissertació es van avaluar les principals formes d'onda candidates al 5G. És així que es va realitzar una anàlisi teòrica de cada forma d'onda, destacant les seues fortaleses i debilitats, tant a nivell d'implementació com de rendiment. Posteriorment, es va dur a terme una implementació real en una plataforma Software Defined Radio de tres de les formes d'onda més prometedores (CP-OFDM, UFMC i OQAM-FBMC), la qual cosa va permetre avaluar el seu rendiment en termes de la taxa d'error per bit, així com la complexitat de la seua implementació. Esta tesi ha proposat també l'ús d'una solució harmonitzada com a forma d'onda per al 5G i sosté que continua sent una opció viable per a sistemes beyond 5G. Atés que cap de les forma d'onda candidates era capaç de complir per si mateixa amb tots els requeriments del 5G, en compte de triar una única forma d'onda es va proposar construir un transceptor que fóra capaç de construir totes les principals formes d'onda candidates (CP-OFDM, P-OFDM, UFMC, QAM-FBMC, OQAM-FBMC). Açò es va aconseguir identificant els blocs comuns entre les formes d'onda, per a després integrar-los junt amb la resta de blocs indispensables per a cada forma d'onda. La motivació per a esta solució era tindre una capa física que fóra capaç de complir amb tots els aspectes del 5G, seleccionant sempre la millor forma d'onda segons l'escenari. Esta proposta va ser avaluada en termes de complexitat, i els resultats es van comparar amb la complexitat de cada forma d'onda.
La decisió de continuar amb CP-OFDM amb numerología flexible com a forma d'onda per al 5G es pot considerar també com una solució harmonitzada, ja que al canviar el prefix cíclic i el número de subportadores, canvien també les prestacions del sistema. En esta tesi es van avaluar totes les numerologías propostes pel 3GPP sobre cada un dels models de canal descrits per al 5G (i considerats vàlids per a sistemes beyond 5G), tenint en compte factors com la mobilitat dels equips d'usuari i la freqüència d'operació; per a açò es va utilitzar un simulador de capa física del 3GPP, a què es van fer les degudes adaptacions a fi d'avaluar el rendiment de les numerologías en termes de la taxa d'error per bloc. Finalment, es presenta un esbós del que podria arribar a ser la Sexta Generació de xarxes mòbils o 6G, amb l'objectiu d'entendre les noves aplicacions que podrien ser utilitzades en un futur, així com les seues necessitats. Completat l'estudi dut a terme en esta tesi, es pot afirmar que com es va proposar des d'un principi la solució, tant per al 5G com per a beyond 5G, la solució és l'harmonització de les formes d'onda. dels resultats obtinguts es pot corroborar que una solució harmonitzada permet aconseguir un estalvi computacional entre el 25-40% per al transmissor i del 15-25% per al receptor. A més, va ser possible identificar què numerología CP-OFDM és la més adequada per a cada escenari, la qual cosa permetria optimitzar el disseny i desplegament de les xarxes 5G. Açò obriria la porta a fer el mateix amb el 6G, ja que en esta tesi es considera que serà necessari obrir novament el debat sobre quina és la forma d’onda adequada per a esta nova generació de tecnologies, i es planteja que el camí que s’ha de seguir és optar per una solució harmonitzada amb distintes formes d’onda, en compte de només una com succeïx amb el 5G. / [EN] The standardization of the Fifth Generation of mobile networks or 5G is still ongoing, although the first releases of the standard were completed two years ago and several 5G networks are up and running in several countries around the globe. However, in 2014 when the ITU began the IMT-2020 standardization process, one of the main questions was which would be the waveform to be used on the physical layer of this new generation of technologies. The 3GPP committed to submit a candidate technology to the IMT-2020 process, and that is how within this deliberation process several candidate waveforms were presented. After a thorough evaluation regarding several aspects, in 2016 the 3GPP decided to continue with CP-OFDM (used in 4G) but including, as a novelty, the use of a flexible numerology. Once the waveform was decided, the standardization process continued to fine-tune the frame structure and all the intrinsic aspects of it. This thesis accompanied and participated in this entire process. To begin with, this dissertation evaluates the main 5G candidate waveforms. Therefore, a theoretical analysis of each waveform is carried out, highlighting its strengths and weaknesses, both at the implementation and performance levels. Subsequently, a real implementation on a Software Defined Radio platform of three of the most promising waveforms (CP-OFDM, UFMC, and OQAM-FBMC) is presented, which allows evaluating their performance in terms of bit error rate, as well as the complexity of its implementation. This thesis also proposes the use of a harmonized solution as a waveform for 5G and argues that it remains a viable option for systems beyond 5G. 
Since none of the candidate waveforms was capable of meeting all the requirements for 5G on its own, instead of choosing a single waveform, this thesis proposes to build a transceiver capable of generating all the main candidate waveforms (CP-OFDM, P-OFDM, UFMC, QAM-FBMC, OQAM-FBMC). This is achieved by identifying the common blocks between the waveforms and then integrating them with the rest of the essential blocks for each waveform. The motivation for this solution is to have a physical layer that is capable of complying with all aspects of beyond 5G technologies, always selecting the best waveform according to the scenario. This proposal is evaluated in terms of complexity, and the results are compared with the complexity of each waveform. The decision to continue with CP-OFDM with flexible numerology as a waveform for 5G can also be considered as a harmonized solution, since changing the cyclic prefix and the number of subcarriers also changes the performance of the system. In this thesis, all the numerologies proposed by the 3GPP are evaluated on each of the channel models described for 5G (and considered valid for beyond 5G systems), taking into account factors such as the mobility of the user equipment and the operating frequency. For this, a 3GPP physical layer simulator is used, and proper adaptations are made in order to evaluate the performance of the numerologies in terms of the block error rate. Finally, a sketch of what could become the Sixth Generation of mobile networks or 6G is presented, with the aim of understanding the new applications that could be used in the future, as well as their needs. After the completion of the study carried out in this thesis, it can be said that, as stated from the beginning, for both 5G and beyond 5G systems, the solution is waveform harmonization. From the results obtained, it can be corroborated that a harmonized solution yields computational savings of 25-40% for the transmitter and 15-25% for the receiver. In addition, it is possible to identify which CP-OFDM numerology is the most appropriate for each scenario, which would allow optimizing the design and deployment of 5G networks. This would open the door to doing the same with 6G, i.e., a harmonized solution with different waveforms, instead of just one as in 5G. / Flores De Valgas Torres, FJ. (2020). Study on Air Interface Variants and their Harmonization for Beyond 5G Systems [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/164442
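To make the flexible-numerology point above concrete, the scaling defined in 3GPP TS 38.211 can be tabulated in a few lines. The snippet below is an illustrative sketch of that scaling only; it is not code from the thesis or from the 3GPP simulator it mentions.

```python
# Illustrative sketch of 5G NR flexible numerology (per 3GPP TS 38.211);
# not code from the thesis. Subcarrier spacing doubles with each step in mu,
# so the useful OFDM symbol duration (and cyclic prefix) shrinks by the same factor.

def numerology(mu: int) -> dict:
    scs_khz = 15 * 2 ** mu                 # subcarrier spacing: 15 kHz * 2^mu
    symbol_us = 1e3 / scs_khz              # useful symbol duration in microseconds (1/scs)
    slots_per_subframe = 2 ** mu           # a 1 ms subframe always holds 2^mu slots
    return {
        "mu": mu,
        "scs_kHz": scs_khz,
        "symbol_us": round(symbol_us, 2),
        "slots_per_1ms_subframe": slots_per_subframe,
    }

if __name__ == "__main__":
    for mu in range(5):                    # mu = 0..4 covers 15 kHz to 240 kHz
        print(numerology(mu))
```

Picking a larger mu (shorter symbols, wider subcarriers) suits high-mobility or mmWave scenarios, while mu = 0 or 1 suits large-delay-spread, low-band deployments, which is the trade-off the evaluated channel models expose.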
254

Beyond the Standard Model Orders of Charge–Parity Violation

Kley, Jonathan 19 November 2024
In dieser Arbeit verwenden wir Flavourinvarianten, um systematisch Lösungen für Probleme des Standardmodells (SM) der Teilchenphysik mit Hilfe verschiedener effektiver Feldtheorien (EFTs) zu untersuchen. In Teil I untersuchen wir die CP-Verletzung im SM und in der SM EFT erweitert mit leichten, sterilen Neutrinos. Wir konstruieren die erzeugende Menge von Flavourinvarianten im νSM, mit der jede Observable als Polynom der Invarianten, sowie die Bedingungen für die CP-Verletzung auf flavourinvariante Weise ausgedrückt werden können. Anschließend weiten wir die Ergebnisse auf die EFT-Wechselwirkungen für verschiedene Szenarien der Neutrinomassen aus. Hier ändert sich die Form der EFT-Flavourinvarianten und ihre Unterdrückung mit der Skala der neuen Physik drastisch mit der untersuchten Art der Neutrinomassen. In Teil II untersuchen wir verschiedene Aspekte der Symmetriebrechung in EFTs von axionartigen Teilchen (ALPs). Wegen ihrer pseudo-Nambu–Goldstone-Natur ist eine wesentliche Eigenschaft der ALPs ihre Shiftsymmetrie (ShS). Wir formulieren flavourinvariante Ordnungsparameter der ShS, die das Powercounting der EFT führender Ordnung bei einer leicht gebrochenen ShS korrekt implementieren lassen. Mit der Hilbertreihe zählen wir die Anzahl der Operatoren, die in der ALP EFT mit und ohne ShS oberhalb und unterhalb der elektroschwachen Skala auftreten, womit wir Operatorbasen konstruieren, die Beziehungen der ShS auf höhere Ordnung verallgemeinern und die CP-verletzenden Flavourinvarianten führender Ordnung konstruieren. Die Axionlösung des starken CP-Problems kann durch neue CP-Verletzung im Ultravioletten durch kleine Instantonen gestört werden. Mit einer SMEFT-Parametrisierung der neuen CP-Verletzung zeigen wir, dass neu konstruierte CP-verletzende SMEFT-Flavourinvarianten explizit in den Instantonberechnungen auftauchen und zur Systematisierung der Berechnungen verwendet werden können, wodurch wir bessere Limits für kleine Instanton- und Flavourszenarien ableiten. / In this thesis, we use flavour invariants to systematically study solutions to problems of the Standard Model (SM) of particle physics with different effective field theories (EFTs). In Part I, we study Charge–Parity (CP) violation in the SM and SM EFT extended with light sterile neutrinos. We construct the generating set of flavour invariants in the νSM allowing us to express any observable as a polynomial of those invariants. In addition, the invariants enable us to express the conditions for CP violation in a flavour-invariant way. We extend the results to the EFT interactions with different scenarios for the neutrino masses. Here, the form of the EFT flavour invariants and their suppression with the scale of new physics changes drastically depending on the nature of the neutrino masses. In Part II, we study different aspects of symmetry breaking in the EFTs of axionlike particles (ALPs). An essential property of ALPs is their shift symmetry (ShS) due to their pseudo-Nambu–Goldstone nature. We formulate flavour-invariant order parameters of ShS, which allow us to properly impose the power counting of the leading order EFT in the presence of a softly broken ShS. Using the Hilbert series, we count the number of operators appearing in the ALP EFT with and without a ShS above and below the electroweak scale. We use this information to construct operator bases, generalise the relations imposing ShS to higher orders and construct the leading order CP-odd flavour invariants. 
The axion solution to the strong CP problem can be spoiled by new CP violation in the ultraviolet in the presence of small instantons. Parameterising the new CP violation in the SMEFT, we show that newly constructed CP-odd SMEFT flavour invariants, featuring the strong CP angle, explicitly appear in the instanton computations and, vice versa, can be used to systematise these computations. Using these results, we derive bounds on different small instanton and SMEFT flavour scenarios.
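For orientation on what a flavour-invariant CP-odd quantity looks like, the textbook Standard Model example (quoted here as standard background; normalisation conventions may differ from those used in the thesis) expresses the condition for quark-sector CP violation purely in terms of the Yukawa matrices:

```latex
% Classic flavour-invariant order parameter of CP violation in the SM quark sector
% (standard background; conventions may differ from the thesis).
\[
  I_{CP} \;=\; \operatorname{Im}\,\det\!\big[\,Y_u Y_u^{\dagger},\, Y_d Y_d^{\dagger}\,\big]
  \;\propto\; J\,\prod_{i<j}\!\big(y_{u_i}^2 - y_{u_j}^2\big)\,
              \prod_{i<j}\!\big(y_{d_i}^2 - y_{d_j}^2\big),
\]
% where J is the Jarlskog invariant: CP is conserved in the quark sector iff I_CP = 0.
% Generating sets of such invariants extend this statement to the nuSM and its EFT extensions.
```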
255

Le cadre institutionnel de la convention des Nations Unies sur le droit de la mer en quête de son avenir / The Institutional Framework of the United Nations Convention on the Law of the Sea in Search of its Future

Konstantinidis, Ioannis 10 February 2016
Fruit de négociations longues et ardues, la Convention des Nations Unies sur le droit de la mer signée en 1982 est sans doute l’un des traités multilatéraux les plus réussis sur le plan international. Pierre angulaire de la Convention, l’attribution du statut de « patrimoine commun de l’humanité » aux fonds marins et leur sous-sol situés au-delà des limites de la juridiction nationale ainsi qu’à leurs ressources a constitué une innovation majeure dans le domaine du droit international. Le succès de la Convention tient notamment au fait qu’elle a établi un cadre institutionnel sans précédent chargé de la mise en œuvre de la Convention et incarné par trois institutions : l’Autorité internationale des fonds marins, la Commission des limites du plateau continental et le Tribunal international du droit de la mer. Dotées de statuts juridiques divers et de compétences différentes, ces institutions fonctionnent depuis l’entrée en vigueur de la Convention en 1994. Vingt-et-un ans après sa fondation, il convient d’examiner ce cadre institutionnel dans son ensemble et d’évaluer sa mise en œuvre pour mieux comprendre le rôle complémentaire des institutions. Cette étude porte un regard critique sur la genèse, la nature, le fonctionnement et la pratique des institutions, et s’attache à les considérer dans leur interaction et leur interdépendance. Identifier les insuffisances institutionnelles et interinstitutionnelles, ainsi que les défis auxquels les institutions sont confrontées est un préalable indispensable à la recherche de solutions efficaces et viables pour surmonter les difficultés rencontrées, à la mise en œuvre harmonieuse de la Convention et à la concrétisation du concept fondamental de patrimoine commun de l’humanité. Dans cette perspective, l’importance du Tribunal dans son rôle de garant de l’intégrité de la Convention et le pouvoir créateur du juge international face aux lacunes conventionnelles méritent une attention toute particulière. / The result of protracted and arduous negotiations, the United Nations Convention on the Law of the Sea signed in 1982 is undoubtedly one of the most successful multilateral treaties at the international level. The principle of the common heritage of mankind, represented by the seabed, ocean floor and subsoil and their resources beyond the limits of national jurisdiction, is the cornerstone of the Convention and constituted a major innovation in international law. The success of the Convention lies, in particular, in the establishment of an unprecedented institutional framework, which is incarnated by three institutions: the International Seabed Authority, the Commission on the Limits of the Continental Shelf and the International Tribunal for the Law of the Sea. These institutions of diverse legal status are vested with different functions and have been in operation since the entry into force of the Convention in 1994. Twenty-one years following its establishment, it is necessary to review this institutional framework as a whole and to assess its implementation in order to better understand the complementary role of the institutions. This study critically examines the genesis, the nature, the functioning and the practice of the institutions through their interaction and their interdependence.
Identifying institutional and inter-institutional weaknesses, and the challenges that the institutions face is an indispensable prerequisite for ensuring effective and viable solutions, the harmonious implementation of the Convention and for giving substance to the principle of the common heritage of mankind. In this context, the role of the Tribunal as the guarantor of the integrity of the Convention and the creative power of the international judge merit special attention.
256

Ekonomistyrning i PostNord AB Region Växjö : Budget i kombination med prestationsmätning och dess styreffekter i organisationen

Mattelin, Martin, Andersson, Emelie January 2014
Bakgrund: I en konkurrenskraftig miljö med föränderliga villkor krävs en tillämpning av sofistikerade styrverktyg inom ett företags ekonomistyrsystem. Verksamheten PostNord AB har i och med en bolagisering och avreglering genomgått en strukturomvandling men har fortfarande ett statligt uppdrag samtidigt som de konkurrerar med helt kommersiella företag. Problemdiskussion: PostNord AB får i dagsläget inte önskad effekt på styrning i verksamheten då det brister i förhållning till budgeten. Detta har utmynnat i en diskussion kring relationen mellan budget och prestationsmätning och dess styreffekter i organisationen. Syfte: Studiens syfte är att kartlägga PostNord ABs ekonomistyrsystem med särskilt fokus på budget och prestationsmätningar och dess styreffekter i organisationen. Vidare är syftet att ge rekommendationer på förändringar av dagens styrsystem, vilka kan ge en förbättrad styreffekt inom PostNord AB - Region Växjö. Metod: Studien innefattar en fallstudie som forskningsdesign. Insamlingen av empiriskt material har skett genom intervjuer, dokument och observationer. Intervjuerna har utförts semi-strukturerat och respondenturvalet har skett utifrån ett kedje- samt lämplighetsurval. Resultat: Problemområden beträffande ekonomistyrsystemets styreffekter har identifierats där förbättringsförslag rörande företagets mest kritiska områden har rekommenderats. Dessa innefattar en nedtoning av budgeten till förmån för prestationsmätningar, mål kopplade till mått, ökad kommunikation mellan nivåerna samt förhöjd motivation genom ökat deltagande. Slutsats: Marknaden som PostNord AB verkar på kännetecknas av tämligen fasta spelregler med en likartad infrastruktur för distribution. Avgörandet för företagets framgång beror på hur det på mest fördelaktiga sätt kan anpassa och maximera sin verksamhet efter rådande villkor. Detta talar för en ökad användning av processinriktade prestationsmätningar. / Background: In a competitive environment with changing conditions, the use of sophisticated management tools is required within a company’s management control system. The company PostNord AB has, through corporatisation and deregulation, undergone a structural transformation, but it still has a state mandate while competing with fully commercial enterprises. Problem Discussion: PostNord AB is currently not achieving the desired control effect in the business because of shortcomings in adherence to the budget. This has led to a discussion on the relationship between budget and performance measurement and its control effects in the organization. Purpose: The aim of the study is to map PostNord AB’s management control system with particular focus on the budget and performance measurements and their control effects in the organization. A further purpose is to provide recommendations on changes to the current control system, which can enhance the control effect in PostNord AB - Region Växjö. Method: The study uses a case study as its research design. Empirical data were collected through interviews, documents and observations. The interviews were semi-structured, and respondents were selected through chain and convenience sampling. Results: Issues concerning the control effects of the management control system have been identified and improvement proposals have been recommended.
These include a de-emphasis of the budget in favor of performance measurements, goals linked to measures, increased communication between levels and enhanced motivation through increased participation. Conclusion: PostNord AB operates in a market characterized by fairly fixed rules of play and a largely uniform distribution infrastructure. The company’s success essentially depends on how advantageously it can adapt and maximize its business under the prevailing conditions. This suggests an increased use of process-oriented performance measurements.
257

Electroweak radiative B-decays as a test of the Standard Model and beyond / Désintégrations radiatives faibles de mésons B comme un test du Modèle Standard et au-delà.

Tayduganov, Andrey 05 October 2011
La désintégration radiative du méson B en méson étrange axiale, B--> K1(1270) gamma, a été observée récemment avec un rapport d'embranchement assez grand que B--> K* gamma. Ce processus est particulièrement intéressant car la désintégration du K1 dans un état final à trois corps nous permet de déterminer la polarisation du photon, qui est surtout gauche (droit) pour Bbar(B) dans le Modèle Standard tandis que des modèles divers de nouvelle physique prédisent une composante droite (gauche) supplémentaire. Dans cette thèse, une nouvelle méthode est proposée pour déterminer la polarisation, en exploitant la distribution totale du Dalitz plot, qui semble réduire considérablement les erreurs statistiques de la mesure du paramètre de la polarisation lambda_gamma.Cependant, cette mesure de la polarisation nécessite une connaissance détaillée de la désintégration forte K1--> K pi pi : c'est-à-dire l'ensemble complexe des différentes amplitudes d'ondes partielles en plusieurs canaux en quasi-deux-corps ainsi que leurs phases relatives. Un certain nombre d'expériences ont été faites pour extraire ces informations bien qu'il reste divers problèmes dans ces études. Dans cette thèse, nous étudions en détail ces problèmes en nous aidant d'un modèle théorique. Nous utilisons ainsi le modèle 3P0 de création d'une paire de quarks pour améliorer notre compréhension des désintégrations fortes du K1.A partir de ce modèle nous estimons les incertitudes théoriques : en particulier, celle venant de l'incertitude de l'angle de mélange du K1, et celle due à l'effet d'une phase ``off-set'' dans les désintégrations fortes en ondes-S. Selon nos estimations, les erreurs systématiques se trouvent être de l'ordre de sigma(lambda_gamma)^th<20%. D'autre part nous discutons de la sensibilité des expériences futures, notamment les usines SuperB et LHCb, pour lambda_gamma. En estimant naïvement les taux annuels d'évènements, nous trouvons que l'erreur statistique de la nouvelle méthode est sigma(lambda_gamma)^stat<10%, ce qui est deux fois plus petit qu'en utilisant seulement les distributions angulaires simples.Nous discutons également de la comparaison avec les autres méthodes de mesure de la polarisation en utilisant les processus tels que B--> K* e^+ e^-, Bd--> K* gamma et Bs--> phi gamma, pour la détermination du rapport des coefficients de Wilson C7gamma^‘eff/C7gamma^eff. Nous montrons un exemple de contraintes possibles sur C7gamma^‘eff/C7gamma^eff dans plusieurs scénarios de modèles supersymétriques. / Recently the radiative B-decay to strange axial-vector mesons, B--> K1(1270) gamma, was observed with a rather large branching ratio. This process is particularly interesting as the subsequent K1-decay into its three-body final state allows us to determine the polarization of the photon, which is mostly left(right)-handed for Bbar(B) in the Standard Model while various new physics models predict additional right(left)-handed components. In this thesis, a new method is proposed to determine the polarization, exploiting the full Dalitz plot distribution, which seems to reduce significantly the statistical errors on the polarization parameter lambda_gamma measurement.This polarization measurement requires, however a detailed knowledge of the K1--> K pi pi strong interaction decays, namely, the complex pattern of the various partial wave amplitudes into several possible quasi-two-body channels as well as their relative phases. 
A number of experiments have been done to extract all this information, while various problems remain in the previous studies. In this thesis, we investigate the details of these problems. As a theoretical tool, we use the 3P0 quark-pair-creation model in order to improve our understanding of strong K1 decays. Finally, we try to estimate some theoretical uncertainties: in particular, the one coming from the uncertainty on the K1 mixing angle, and the effect of a possible ``off-set'' phase in strong decay S-waves. According to our estimations, the systematic errors are found to be of the order of sigma(lambda_gamma)^th<20%. On the other hand, we discuss the sensitivity of the future experiments, namely the SuperB factories and LHCb, to lambda_gamma. Naively estimating the annual signal yields, we find the statistical error of the new method to be sigma(lambda_gamma)^stat<10%, which turns out to be reduced by a factor of 2 with respect to using the simple angular distribution. We also discuss a comparison with other methods of polarization measurement using processes such as B--> K* e^+ e^-, Bd--> K* gamma and Bs--> phi gamma, for the determination of the ratio of the Wilson coefficients C7gamma^'eff/C7gamma^eff. We show an example of the potential constraints on C7gamma^'eff/C7gamma^eff in several scenarios of supersymmetric models.
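For reference, the polarization parameter lambda_gamma discussed in this abstract is conventionally defined from the left- and right-handed photon amplitudes, which at leading order map onto the Wilson coefficients named above. This is a standard definition from the b -> s gamma literature, with sign conventions assumed here rather than taken from the thesis:

```latex
% Conventional definition of the photon polarization parameter in b -> s gamma
% (standard in the literature; sign conventions assumed, not copied from the thesis).
\[
  \lambda_\gamma \;\equiv\; \frac{|c_R|^2 - |c_L|^2}{|c_R|^2 + |c_L|^2}
  \;\simeq\; \frac{|C_{7\gamma}^{\prime\,\mathrm{eff}}|^2 - |C_{7\gamma}^{\mathrm{eff}}|^2}
                  {|C_{7\gamma}^{\prime\,\mathrm{eff}}|^2 + |C_{7\gamma}^{\mathrm{eff}}|^2}.
\]
% In the Standard Model C7' is suppressed by roughly m_s/m_b, so lambda_gamma is close
% to -1 for Bbar decays (predominantly left-handed photons); a sizeable deviation
% would signal right-handed currents from physics beyond the Standard Model.
```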
258

Limits to surprise of recommender systems / Limites de surpresa de Sistemas de Recomendação

Lima, André Paulino de 15 March 2019
Surprise is an important component of serendipity. In this research, we address the problem of measuring the capacity of a recommender system to embed surprise in its recommendations. We show that changes in the surprise of an item owing to the growth in user experience, as well as to the increase in the number of items in the repository, are not taken into account by the current metrics and evaluation methods. As a result, as the time elapsed between two measurements grows, they become increasingly incommensurable. This poses an additional challenge in assessing the degree to which a recommender is exposed to unfavourable conditions, such as over-specialisation or the filter bubble. We argue that a) surprise is a finite resource in any recommender system, b) there are limits to the amount of surprise that can be embedded in a recommendation, and c) these limits allow us to create a scale in which two measurements that were taken at different moments can be directly compared. By adopting these ideas as premises, we applied the deductive method to define the concepts of maximum and minimum potential surprise and designed a surprise metric called "normalised surprise" that employs these limits. Our main contribution is an evaluation method that estimates the normalised surprise of a system. Four experiments were conducted to test the proposed metrics. The aim of the first and second experiments was to validate the quality of the estimates of minimum and maximum potential surprise values obtained by means of a greedy algorithm. The first experiment employed a synthetic dataset to explore the limits to surprise for a user, and the second one employed the Movielens-1M dataset to explore the limits to the surprise that can be embedded in a recommendation list. The third experiment also employed the Movielens-1M dataset and was designed to investigate the effect that changes in item representation and item comparison exert on surprise. Finally, the fourth experiment compares the proposed and the current state-of-the-art evaluation methods in terms of their results and execution times. The results obtained from the experiments a) confirm that the quality of the estimates of potential surprise is adequate for the purpose of evaluating normalised surprise; b) show that the item representation and comparison model that is adopted has a strong effect on surprise; and c) indicate an association between high degrees of surprise and negatively skewed pairwise distance distributions, and also indicate a significant difference in the average normalised surprise of recommendations produced by a factorisation algorithm when the surprise employs the cosine or the Euclidean distance. / A surpresa é um componente importante da serendipidade. Nesta pesquisa, abordamos o problema de medir a capacidade de um sistema de recomendação de incorporar surpresa em suas recomendações. Mostramos que as mudanças na surpresa de um item, devidas ao crescimento da experiência do usuário e ao aumento do número de itens no repositório, não são consideradas pelas métricas e métodos de avaliação atuais. Como resultado, na medida em que aumenta o tempo decorrido entre duas medições, essas se tornam cada vez mais incomensuráveis. Isso se apresenta como um desafio adicional na avaliação do grau em que um sistema de recomendação está exposto a condições desfavoráveis como superespecialização ou filtro invisível.
Argumentamos que a) surpresa é um recurso finito em qualquer sistema de recomendação; b) há limites para a quantidade de surpresa que pode ser incorporada em uma recomendação; e c) esses limites nos permitem criar uma escala na qual duas medições que foram tomadas em momentos diferentes podem ser comparadas diretamente. Ao adotar essas ideias como premissas, aplicamos o método dedutivo para definir os conceitos de surpresa potencial máxima e mínima e projetar uma métrica denominada "surpresa normalizada", que emprega esses limites. Nossa principal contribuição é um método de avaliação que estima a surpresa normalizada de um sistema. Quatro experimentos foram realizados para testar as métricas propostas. O objetivo do primeiro e do segundo experimentos foi validar a qualidade das estimativas de surpresa potencial mínima e máxima obtidas por meio de um algoritmo guloso. O primeiro experimento empregou um conjunto de dados sintético para explorar os limites de surpresa para um usuário, e o segundo empregou o Movielens-1M para explorar os limites da surpresa que pode ser incorporada em uma lista de recomendações. O terceiro experimento também empregou o conjunto de dados Movielens-1M e foi desenvolvido para investigar o efeito que mudanças na representação de itens e na comparação de itens exercem sobre a surpresa. Finalmente, o quarto experimento compara os métodos de avaliação atual e proposto em termos de seus resultados e tempos de execução. Os resultados que foram obtidos dos experimentos a) confirmam que a qualidade das estimativas de surpresa potencial são adequadas para o propósito de avaliar surpresa normalizada; b) mostram que o modelo de representação e comparação de itens adotado exerce um forte efeito sobre a surpresa; e c) apontam uma associação entre graus de surpresa elevados e distribuições assimétricas negativas de distâncias, e também apontam diferenças significativas na surpresa normalizada média de recomendações produzidas por um algoritmo de fatoração quando a surpresa emprega a distância do cosseno ou a distância Euclidiana.
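As a rough illustration of the normalisation idea described in this abstract, the sketch below rescales an item's raw surprise by the minimum and maximum surprise attainable at the moment of measurement. The cosine-distance definition of raw surprise and the brute-force search over the catalogue are simplifying assumptions made here for illustration; the thesis itself estimates the extremes with a greedy algorithm.

```python
# Illustrative sketch of "normalised surprise": rescale an item's raw surprise by the
# minimum and maximum surprise attainable at that moment, so that measurements taken at
# different times (different user history / catalogue sizes) remain comparable.
# The cosine-distance surprise and the exhaustive min/max search are assumptions for
# illustration, not the thesis's actual implementation (which uses a greedy estimate).
import numpy as np

def raw_surprise(item_vec: np.ndarray, history: np.ndarray) -> float:
    """Surprise of an item as its smallest cosine distance to the user's known items."""
    sims = history @ item_vec / (
        np.linalg.norm(history, axis=1) * np.linalg.norm(item_vec) + 1e-12
    )
    return float(1.0 - sims.max())           # distance to the closest item in the history

def normalised_surprise(item_vec, history, catalogue) -> float:
    """Rescale raw surprise into [0, 1] using the current potential extremes."""
    s = raw_surprise(item_vec, history)
    all_s = np.array([raw_surprise(v, history) for v in catalogue])
    s_min, s_max = all_s.min(), all_s.max()  # minimum / maximum potential surprise now
    return (s - s_min) / (s_max - s_min) if s_max > s_min else 0.0
```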
259

Policy responses by different agents/stakeholders in a transition: Integrating the Multi-level Perspective and behavioral economics

Gazheli, Ardjan, Antal, Miklós, Drake, Ben, Jackson, Tim, Stagl, Sigrid, van den Bergh, Jeroen, Wäckerle, Manuel 12 1900
This short paper considers all possible stakeholders in different stages of a sustainability transition and matches their behavioral features and diversity to policies. This involves an assessment of potential or expected responses of stakeholders to a range of policy instruments. Following the Multi-Level Perspective framework to conceptualize sustainability transitions, we classify the various transition policies at niche, regime and landscape levels. Next, we offer a complementary classification of policies based on a distinction between social preferences and bounded rationality. The paper identifies many barriers to making a sustainability transition and discusses how to respond to them. In addition, lessons are drawn from the case of Denmark. The detailed framework and associated literature for the analysis were discussed in Milestone 31 of the WWWforEurope project (Gazheli et al., 2012). / Series: WWWforEurope
260

Beyond Education and Society : On the Political Life of Education for Sustainable Development

Bengtsson, Stefan L. January 2014
The objective of this dissertation is to develop a theoretical and analytical framework for understanding the political in education from a social and global perspective. With this objective in mind, it employs an empirical engagement and theoretical reflection on how this political can be seen to emerge in policy making on Education for Sustainable Development (ESD). Policy making on ESD is interpreted as engaging in the constitution of the social and globalisation, where the non-determination of this practice is seen to require political acts of identification with particular perspectives on what education, society and, as a result, ESD should be. Book I constitutes a theoretical and analytical framework that outlines central concepts, such as antagonism, temporality, space and rhizomic globalisation, in order to conceive of how the political in education can be understood and analysed in concrete articulations, such as policy making on ESD. The findings of the empirical analysis underlying this dissertation and that address the political in policy making on ESD are presented in the papers that are incorporated into this dissertation as part of Book II. Paper I discusses how we can conceive of the relation between ESD and globalisation and makes an argument that this relation should be seen to be political and characterised by conflicting perspectives on what ESD is. Paper II presents the findings from a comparative study of policy making on ESD that engages with concrete policy on ESD in order to reflect on how globalisation can be seen to emerge in these instances of policy making. Paper III presents the findings of a comprehensive discourse analysis of Vietnamese policy making and shows how the concepts of ESD and Sustainable Development are contested among different perspectives of how Vietnamese society should be constituted. The dissertation as a whole makes an argument for the inescapable political condition for education and how this condition necessitates the articulation of concepts such as ESD that name an inaccessible state beyond conflict and social antagonisms that is to be achieved through education.
