411 |
Abstrakt normprövning : En komparativrättslig studie av det svenska Lagrådet och den azerbajdzjanska författningsdomstolens rättstillämpning av den abstrakta normprövningen / Abstract norm review : A comparative legal study of the Swedish Council on Legislation and the Azerbaijani Constitutional Court's application of abstract norm review
Agharzayeva, Leyla, Bakhtiyarova, Dilnaza, January 2023
This public law study aims to analyze abstract norm review in Sweden and Azerbaijan, conducted by specific entities – the Swedish Council on Legislation (Lagrådet) and the Constitutional Court of Azerbaijan, respectively. The primary focus is to examine and compare the approaches of Lagrådet and the Azerbaijani Constitutional Court in abstract norm control.

In this essay, a comparison is made between abstract norm review in Sweden and Azerbaijan, revealing similarities in purpose but significant differences in the powers and the binding nature of decisions. Historically, Sweden has undergone a longer constitutional development, while Azerbaijan has undergone changes following its independence from the Soviet Union, which is noticeable through the various historical and political contexts shaping their norm review processes.

In practice, Lagrådet in Sweden plays an advisory role during the legislative process. Although its advice carries significant weight, the final decision to follow or deviate from these recommendations lies with the government. Meanwhile, the Constitutional Court in Azerbaijan possesses direct legally binding authority over its decisions, which affect all organs and individuals in the country.

The difference in independence and legitimacy between these institutions is reflected in their impact on legislation. Despite its active role, Lagrådet is subordinate to the government's decisions. Meanwhile, the Constitutional Court in Azerbaijan has a more independent and tangible influence on legislation.
|
412 |
Recommending TEE-based Functions Using a Deep Learning Model
Lim, Steven, 14 September 2021
Trusted execution environments (TEEs) are an emerging technology that provides a protected hardware environment for processing and storing sensitive information. By using TEEs, developers can bolster the security of software systems. However, incorporating TEE into existing software systems can be a costly and labor-intensive endeavor. Software maintenance—changing software after its initial release—is known to contribute the majority of the cost in the software development lifecycle. The first step of making use of a TEE requires that developers accurately identify which pieces of code would benefit from being protected in a TEE. For large code bases, this identification process can be quite tedious and time-consuming. To help reduce the software maintenance costs associated with introducing a TEE into existing software, this thesis introduces ML-TEE, a recommendation tool that uses a deep learning model to classify whether an input function handles sensitive information or sensitive code. By applying ML-TEE, developers can reduce the burden of manual code inspection and analysis. ML-TEE's model was trained and tested on an imbalanced dataset of functions from GitHub repositories that use Intel SGX. The final model used in the recommendation system has an accuracy of 98.86% and an F1 score of 80.00%. In addition, we conducted a pilot study, in which participants were asked to identify functions that needed to be placed inside a TEE in a third-party project. The study found that on average, participants who had access to the recommendation system's output had a 4% higher accuracy and completed the task 21% faster. / Master of Science / Improving the security of software systems has become critically important. A trusted execution environment (TEE) is an emerging technology that can help secure software that uses or stores confidential information. To make use of this technology, developers need to identify which pieces of code handle confidential information and should thus be placed in a TEE. However, this process is costly and laborious because it requires the developers to understand the code well enough to make the appropriate changes in order to incorporate a TEE. This process can become challenging for large software that contains millions of lines of code. To help reduce the cost incurred in the process of identifying which pieces of code should be placed within a TEE, this thesis presents ML-TEE, a recommendation system that uses a deep learning model to help reduce the number of lines of code a developer needs to inspect. Our results show that the recommendation system achieves high accuracy as well as a good balance between precision and recall. In addition, we conducted a pilot study and found that participants from the intervention group who used the output from the recommendation system managed to achieve a higher average accuracy and perform the assigned task faster than the participants in the control group.
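As a rough illustration of the kind of function-level classification ML-TEE performs, the sketch below trains a simple text classifier over raw function source and uses it to flag a candidate for TEE placement. The thesis itself uses a deep learning model trained on mined Intel SGX projects; the TF-IDF baseline, the example snippets, and their labels here are stand-ins chosen only to show the workflow.

```python
# Minimal sketch of a "sensitive function" classifier in the spirit of ML-TEE.
# The thesis uses a deep learning model trained on Intel SGX projects; here a
# simple TF-IDF + logistic regression baseline stands in, and the snippets and
# labels below are invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: raw function source paired with a label that is
# 1 if the function handles sensitive data or code, else 0.
functions = [
    "int seal_key(uint8_t *key, size_t len) { return sgx_seal_data(len, key); }",
    "void log_request(const char *path) { printf(\"GET %s\\n\", path); }",
    "int decrypt_token(uint8_t *buf) { return aes_gcm_decrypt(buf, secret_key); }",
    "int add(int a, int b) { return a + b; }",
]
labels = [1, 0, 1, 0]

# Character n-grams are a crude stand-in for the code representation a real
# deep model would learn from a large corpus.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(max_iter=1000),
)
model.fit(functions, labels)

# Recommend TEE placement for an unseen function.
candidate = "int unseal_key(uint8_t *blob) { return sgx_unseal_data(blob, key); }"
print("recommend TEE placement:", bool(model.predict([candidate])[0]))
```

In practice the labeled corpus would be the mined SGX functions, and the classifier's output would feed the recommendation step described above.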
|
413 |
Ext Enhanced Soergel Diagrammatics for Dihedral Groups
Li, Cailan, January 2024
We compute Ext groups between Soergel Bimodules associated to the infinite/finite dihedral group for a realization in characteristic 0 and show that they are free right 𝖱-modules with an explicit basis. We then give a diagrammatic presentation for the corresponding monoidal category of Ext-enhanced Soergel Bimodules. As applications, we compute the reduced triply graded link homology 𝐇̅𝐇̅𝐇̅ of the connected sum of two Hopf links as an 𝖱-module and show that the Poincaré series for the Hochschild homology of Soergel Bimodules of finite dihedral type categorifies Gomi's trace for finite dihedral groups.
|
414 |
Analysis and Enforcement of Properties in Software Systems
Wu, Meng, 02 July 2019
Due to the lack of effective techniques for detecting and mitigating property violations, existing approaches to ensure the safety and security of software systems are often labor intensive and error prone. Furthermore, they focus primarily on functional correctness of the software code while ignoring micro-architectural details of the underlying processor, such as cache and speculative execution, which may undermine their soundness guarantees.
To fill the gap, I propose a set of new methods and tools for ensuring the safety and security of software systems. Broadly speaking, these methods and tools fall into three categories. The first category is concerned with static program analysis. Specifically, I develop a novel abstract interpretation framework that considers both speculative execution and a cache model, and is guaranteed to be sound for estimating the execution time of a program and detecting side-channel information leaks. The second category is concerned with static program transformation. The goal is to eliminate side channels by equalizing the number of CPU cycles and the number of cache misses along all program paths for all sensitive variables. The third category is concerned with runtime safety enforcement. Given a property that may be violated by a reactive system, the goal is to synthesize an enforcer, called the shield, to correct the erroneous behaviors of the system instantaneously, so that the property is always satisfied by the combined system. I develop techniques to make the shield practical by handling both burst errors and real-valued signals.
The proposed techniques have been implemented and evaluated on realistic applications to demonstrate their effectiveness and efficiency. / Doctor of Philosophy / It is important for everything around us to follow some rules in order to work correctly, and the same holds for software systems, which must satisfy security and safety properties. In particular, software may leak information in unexpected ways, for example through program timing, which makes such leaks more difficult to detect or mitigate. For instance, if the execution time of a program is related to a sensitive value, an attacker may obtain information about that value. On the other hand, due to the complexity of software, it is nearly impossible to fully test or verify it. However, the correctness of software systems at runtime is crucial for critical applications. Because existing approaches to finding or resolving property violations are often labor intensive and error prone, in this dissertation I first propose an automated tool for detecting and mitigating security vulnerabilities that arise through program timing. Programs processed by the tool are guaranteed to be constant-time with respect to sensitive values. I have also, for the first time, taken into consideration the influence of speculative execution, which is the cause behind the recent Spectre and Meltdown attacks. To enforce the correctness of programs at runtime, I introduce an extra component that can be attached to the original system to correct any violation if it happens, so that the entire system remains correct. All proposed methods have been evaluated on a variety of real-world applications. The results show that these methods are effective and efficient in practice.
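The third category above, runtime enforcement via a shield, lends itself to a small illustration. The dissertation synthesizes shields automatically from a property and handles burst errors and real-valued signals; the toy sketch below instead hard-codes one invented property (a bounded, rate-limited output) and simply overrides any violating output, which is the core shielding behavior.

```python
# Toy illustration of the shield idea: a wrapper that monitors a reactive
# system's outputs and overrides any output that would violate a safety
# property, so the combined system always satisfies it. The dissertation
# synthesizes such shields automatically from a specification; this
# hand-written example enforces one invented property: the commanded value
# must stay within [lo, hi] and may change by at most max_step per tick.

def buggy_controller(setpoint: float, measurement: float) -> float:
    """Stand-in for the original system; occasionally commands wild values."""
    return 10.0 * (setpoint - measurement)  # no saturation or rate limiting

class Shield:
    def __init__(self, lo: float, hi: float, max_step: float):
        self.lo, self.hi, self.max_step = lo, hi, max_step
        self.prev = 0.0

    def enforce(self, proposed: float) -> float:
        # Correct the output instantaneously so the property always holds.
        corrected = max(self.lo, min(self.hi, proposed))           # bound it
        step = max(-self.max_step, min(self.max_step, corrected - self.prev))
        corrected = self.prev + step                                # rate-limit it
        self.prev = corrected
        return corrected

shield = Shield(lo=-1.0, hi=1.0, max_step=0.2)
measurement = 0.0
for setpoint in [0.0, 1.0, 1.0, -1.0]:
    raw = buggy_controller(setpoint, measurement)
    safe = shield.enforce(raw)
    print(f"raw={raw:+.2f} shielded={safe:+.2f}")
    measurement += 0.5 * safe  # crude plant model, purely illustrative
```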
|
415 |
CHARACTERIZATION OF GLUCOSE TOLERANCE AND METABOLISM IN A MOUSE MODEL WITH SUPPRESSED ALBUMIN EXPRESSION
Afsoun Abdollahi (17988520), 29 April 2024
<p dir="ltr">In the three conducted studies, we investigated the role of serum albumin in metabolic processes, particularly in lipid metabolism and glucoregulation. The first study explored how disrupting the binding of free fatty acids (FFA) to circulating albumin affects lipid metabolism and glucose control. Male and female albumin knockout mice exhibited significantly reduced plasma FFA levels, hepatic lipid content, and blood glucose during tolerance tests compared to wild-type mice. Additionally, albumin deficiency led to changes in adipose tissue gene expression, indicating the importance of albumin and plasma FFA concentration in metabolic regulation. In the second study, the focus was on determining if impeding serum albumin's function in transporting FFAs could prevent hepatic steatosis and metabolic dysfunction in obesity. Albumin knockout mice, despite being obese due to a high-fat diet, showed lower plasma FFA levels, improved glucose tolerance, and reduced hepatic lipid accumulation compared to wild-type mice. Elevated gene expression in liver and adipose tissues suggested albumin's involvement in hepatic lipid accumulation and glucose metabolism in obesity. Lastly, in the third study, we examined the phenotype of heterozygous albumin knockout mice and compared it to wild-type and homozygous knockout mice. While homozygous knockout mice exhibited improved glucoregulation and reduced plasma FFA concentration, heterozygous knockout mice did not show significant improvements compared to wild-type mice. The findings imply that a minor suppression of albumin expression may not be adequate to enhance glucoregulation. In summary, the studies emphasize the crucial role of serum albumin in metabolic processes, illustrating how disrupting FFA binding to albumin leads to improved glucose control and reduced hepatic lipid accumulation. However, minor suppression of albumin expression may not effectively enhance metabolic health. These findings provide valuable insights into potential therapeutic interventions targeting the albumin-FFA pathway to improve metabolic outcomes.</p><p dir="ltr"><br></p>
|
416 |
Abstract Measure
Bridges, Robert Miller, 08 1900
This study of abstract measure covers classes of sets, measures and outer measures, extension of measures, and planar measure.
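For reference, a standard definition of an outer measure, one of the topics listed; the notation below is assumed rather than quoted from the thesis.

```latex
% Standard definition of an outer measure on a set X; notation assumed here,
% not taken from the thesis.
\begin{quote}
A function $\mu^{*}\colon \mathcal{P}(X)\to[0,\infty]$ is an \emph{outer
measure} if $\mu^{*}(\varnothing)=0$, if $A\subseteq B$ implies
$\mu^{*}(A)\le\mu^{*}(B)$, and if
$\mu^{*}\bigl(\bigcup_{n=1}^{\infty}A_{n}\bigr)\le\sum_{n=1}^{\infty}\mu^{*}(A_{n})$
for every sequence $A_{1},A_{2},\dots\subseteq X$.
\end{quote}
```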
|
417 |
Abstract Logics and Lindström's Theorem / Abstrakta Logiker och Lindströms Sats
Bengtsson, Niclas, January 2023
A definition of abstract logic is presented. This is used to explore and compare some abstract logics, such as logics with generalised quantifiers and infinitary logics, and their properties. Special focus is given to the properties of completeness, compactness, and the Löwenheim-Skolem property. A method of comparing different logics is presented and the concept of equivalent logics is introduced. Lastly, a proof is given for Lindström's theorem, which provides a characterization of elementary logic, also known as first-order logic, as the strongest logic for which both the compactness property and the Löwenheim-Skolem property hold.
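For reference, one common formulation of the characterization proved at the end of the thesis; the definitions of abstract logic and the comparison relations follow the usual textbook setup rather than being quoted from the thesis.

```latex
% One standard formulation of Lindström's theorem; the precise definitions of
% "abstract logic" and the orderings below follow the usual textbook setup
% and are assumed here.
\begin{quote}
\textbf{Theorem (Lindstr\"om).} Let $\mathcal{L}$ be an abstract logic with
$\mathcal{L} \geq \mathcal{L}_{\omega\omega}$ (first-order logic). If
$\mathcal{L}$ satisfies the compactness property and the downward
L\"owenheim--Skolem property (every satisfiable sentence of $\mathcal{L}$ has
a countable model), then $\mathcal{L} \equiv \mathcal{L}_{\omega\omega}$.
\end{quote}
```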
|
418 |
Subtree Hashing of Tests in Build Systems : Rust Tricorder / Subträd Hashing av tester i byggsystem : Rust Tricorder
Capitanu, Calin, January 2023
Software applications are built by teams of developers that constantly iterate over the codebase. Software projects rely on a build system, which handles the management of dependencies, compilation, testing, and deployment of the software. The execution of the tests during each build allows developers to validate that their changes do not introduce regressions. However, the execution of the test suite during each build can take a long time, potentially impacting the development process. To facilitate quicker feedback, build systems use incremental building in order to avoid the reprocessing of unmodified artifacts. This is achieved by maintaining a cache of source files and only rebuilding artifacts that differ from their cached version. Yet, changing any part of a source file invalidates the cache, triggering the re-execution of unmodified tests. This file-level focus can mislead the build system, as it cannot determine whether the actual function being tested has changed, thus triggering redundant re-testing. In this thesis, we propose a finer-grained approach to caching within build systems, by caching components within the Abstract Syntax Tree instead of entire source files. We compare their hashes on subsequent runs in order to identify components that have changed. The potential advantage of this strategy is that re-running a specific test that has not been modified can leverage the cache even if the file that contains it has been modified. We implement our approach in a system called TRICORDER, and integrate it within a build system called WARP. TRICORDER works by analyzing RUST source code in order to identify the test cases that have not been changed, for example through the addition of comments or modifications of unrelated functions. This can benefit developers by avoiding the re-execution of tests that are unmodified. We evaluate our approach against 4 notable, open-source RUST projects, targeting a set of 16 tests within them. We first analyze the accuracy with which TRICORDER detects the internal dependencies of a test function, which is needed for the code slicing done by TRICORDER in order to cache code items related to the target test function. We then introduce artificial changes to our study subjects in order to determine whether or not TRICORDER indicates tests that need to be re-run. Finally, we analyze the ability of TRICORDER to identify real changes based on the commit history of our study subjects. Our results show that the more granular approach to caching can avoid the unnecessary recompilation and re-execution of test cases. An important direction for future work is to extend the current implementation to support the entire set of RUST features in order to evaluate TRICORDER on a larger set of study subjects.
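The core mechanism, hashing each function's AST subtree and comparing it against a cached hash, can be sketched compactly. TRICORDER does this for Rust code inside the WARP build system and also slices each test's dependencies so that a change to a helper selects the tests that use it; the Python sketch below, with an invented JSON cache format, only compares each function's own subtree and is meant purely to illustrate why comment-only edits do not trigger re-runs.

```python
# Sketch of the subtree-hashing idea: hash each function's AST subtree and
# re-run only the tests whose hash changed. Whitespace and comments never
# reach the AST, so comment-only edits leave every hash unchanged. Unlike
# TRICORDER, this sketch does not follow a test's dependencies.
import ast
import hashlib
import json
from pathlib import Path

def function_hashes(source: str) -> dict[str, str]:
    """Map each function name to a hash of its AST subtree."""
    hashes = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            hashes[node.name] = hashlib.sha256(ast.dump(node).encode()).hexdigest()
    return hashes

def tests_to_rerun(source: str, cache_file: Path) -> list[str]:
    """Compare current hashes with cached ones; return changed test_* functions."""
    current = function_hashes(source)
    cached = json.loads(cache_file.read_text()) if cache_file.exists() else {}
    changed = [name for name, h in current.items() if cached.get(name) != h]
    cache_file.write_text(json.dumps(current))
    return [name for name in changed if name.startswith("test_")]

baseline = "def add(a, b):\n    return a + b\ndef test_add():\n    assert add(1, 2) == 3\n"
edited = "def add(a, b):\n    # comment-only edit\n    return a + b\ndef test_add():\n    assert add(1, 2) == 3\n"
cache = Path("hash_cache.json")
tests_to_rerun(baseline, cache)       # first run populates the cache
print(tests_to_rerun(edited, cache))  # prints []: comment-only edit, nothing to re-run
```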
|
419 |
Abstrakt normkontroll : En komparativrättslig studie av det svenska Lagrådet och den tyska författningsdomstolens tillämpning av den abstrakta normprövningen / Abstract norm control : a comparative legal study of the Swedish Council on Legislation and the German Constitutional Court's application of the abstract norm control
Sari, Rukiye, El-Sayed, Rania, January 2015
This thesis in public law discusses the abstract judicial review in Sweden and Germany, which is practiced by a specified organ in each country. In Sweden, the abstract judicial review is practiced by the Council on Legislation, and in Germany by the German Federal Constitutional Court. This study focuses on how the Swedish Council on Legislation and the German Constitutional Court differ in the practice of abstract norm control. Moreover, a theoretical discussion is included regarding whether the Swedish justice system is in need of setting up a constitutional court or whether the Swedish Council on Legislation should be given a stronger position. Throughout this study, we concluded that the Swedish legal system does not need to establish a constitutional court or another organ to maintain an adequate standard of norm control in Sweden. To this end, we suggest that abstract norm control in Sweden, that is, the review of a law's compatibility with the constitution, should be strong, but that there may be reason to further strengthen the review performed by the Council on Legislation. For instance, the council's scrutiny could be enhanced by creating a legal secretariat, thereby emphasizing the council's independence from the parliament and the government. Reinforcing the council with legal expertise, for example by attaching draftsmen to it, could also make the council's opinion legally binding.
|
420 |
Power and narrative in project management : lessons learned in recognising the importance of phronesis
Rogers, Michael David, January 2014
A component part of modern project management practice is the ‘lessons learned’ activity that is designed to transfer experience and best practice from one project to another, thus improving the practice of project management. The departure point for this thesis is: If we are learning lessons from our experiences in project management, then why are we not better at managing projects? It is widely cited in most project management literature that 50–70% of all projects fail for one reason or another, a figure that has steadfastly refused to improve over many years. My contention is that the current rational approach to understanding lessons learned in project management, one entrenched in the if–then causality of first-order systems thinking where the nature of movement is a ‘corrective repetition of the past in order to realise an optimal future state’ (Stacey 2011: 301), does not reflect the actual everyday experience of organisational life. I see this as an experience of changing priorities, competing initiatives, unrealistic timescales, evaporation of resources, non-rational decisions based on power relations between actors in the organisations we find ourselves in; and every other manner of challenge that presents itself in modern large commercial organisations. I propose a move away from what I see as the current reductionist view of lessons learned, with its emphasis on objective observation, to one of involved subjective understanding. This is an understanding rooted in the particular experience of the individual acting into the social, an act that necessarily changes both the individual and the social. My contention is that a narrative approach to sense making as first-order abstractions in the activity of lessons learned within project management is what is required if we are to better learn from our experiences. This narrative approach that I have termed ‘thick simplification’ supports learning by enabling the reader of the lessons learned account to situate the ‘lesson learned’ within their own experience through treating the lessons learned as a potential future understanding. This requires a different view of what is going on between people in organisations – one that challenges the current reliance on detached process and recognises the importance of embedded phronesis, the Aristotelian virtue of practical judgement. It is an approach that necessarily ‘focuses attention directly on patterns of human relating, and asks what kind of power relations, ideology and communication they reflect’ (Stacey 2007: 266).
|