
Neoliberalism, the Environmental Protection Agency, and the Chesapeake Bay

Steffy, Kathryn Marie 30 June 2016 (has links)
Neoliberalism, as the influence of economic considerations within the political process, has impacted environmentalism on a variety of levels. Without regulation, the neoliberal capitalist drive to maximize production, consumption, and profits is antagonistic to environmental sustainability. The influence that corporations and economic elites hold within modern democracies has substantial implications for the rigor and enforcement of environmental policies. In the United States in particular, the Environmental Protection Agency offers numerous illustrations of neoliberal influence within its history and policy practices. These influences inevitably impact the Agency's ability to accomplish the goals of its mission and purpose statements. As seen through regulations such as the Clean Water Act, neoliberal pressure has led the federal government to prioritize economic well-being over other social goods, such as environmental protection. The Clean Water Act prioritizes economic profitability over environmental protection through cap-and-trade policies, such as NPDES permits, and legitimizes pollution-causing behavior through TMDLs. Further, the Act was weakened by neoliberal forces through the non-point source exemption, created to avoid economic harm to large industries, and its shortcomings are visible in many of the nation's waterways, including the Chesapeake Bay. Through a case study, this project demonstrates how the neoliberal influences impacting the Environmental Protection Agency have resonated in its policies, such as in the ability of the Clean Water Act to sufficiently clean up the Chesapeake Bay within its proposed timeline. / Master of Arts

Sediment Management for Aquatic Life Protection Under the Clean Water Act

Govenor, Heather Lynn 19 January 2018 (has links)
Although sediment is a natural component of stream ecosystems, excess sediment presents a threat to natural freshwater ecosystems. Sediment management is complicated because sediment can be dissolved in the water column, suspended as particles in the water column, or rest on the bottom of the stream bed, and can move between these forms (e.g., bedded sediment can be resuspended). Each form of sediment affects aquatic life in a specific way. To manage stream sediment in a way that protects aquatic life, we need to understand the ways different forms of sediment affect living things, and we need to be able to predict how sediment changes form under different stream conditions (for example, during high water events). To improve our understanding of these things, the studies in this dissertation set out to: (1) identify how often sediment is specifically mentioned as the primary pollutant “stressor” of the benthic macroinvertebrate community (primarily aquatic insects); (2) determine which forms of sediment have the largest negative impacts on aquatic insects in Virginia and what levels of sediment may cause harm; and (3) measure the changes of sediment between suspended and bedded forms in a small stream to provide information needed to restore the health of stream ecosystems. An inventory of published US Clean Water Act Total Maximum Daily Load (TMDL) reports, which states write to identify their impaired waters and their plans to improve those waters, revealed that sediment is an important stressor in over 70% of waters that have altered aquatic insect communities. If the language used to describe how waters are evaluated and what is causing the impairments were standardized among states, data collected under the Clean Water Act could be more broadly used to help understand water quality issues and ways to address them. Analysis of 10 years of Virginia Department of Environmental Quality sediment and aquatic insect community data collected within 5 ecoregions of the state indicates that a combination of 9 sediment parameters reflecting dissolved, suspended, and bedded forms explains between 20.2% and 76.4% of the variability in the health of the aquatic insect community within these regions. Embeddedness, which measures the extent to which larger particles such as gravel and cobble are buried by finer particles like sand, and conductivity, which is a measure of dissolved salts in the water column, both have substantial impacts on the aquatic insect community. Sensitivity thresholds for embeddedness and conductivity indicate the levels of these parameters above which 5% of insect families are absent from a stream; these levels are therefore considered protective of 95% of the insect community. Thresholds for embeddedness are 68% for the 5 combined ecoregions, 65% for the Mountain bioregion (comprising the Central Appalachian, Ridge and Valley, and Blue Ridge ecoregions), and 88% for the Piedmont bioregion (comprising the Northern Piedmont and Piedmont ecoregions). Thresholds for conductivity are 366 µS/cm for the combined ecoregions, 391 µS/cm for the Mountain bioregion, and 136 µS/cm for the Piedmont bioregion. These thresholds can be used by water quality professionals to identify waters with sediment impairments and to help set appropriate stream restoration goals. A study of sediment movement within the channel of a small stream indicated average transport speeds of ~0.21 m/s during floods with peak flows of ~55 L/s.
The use of rare earth elements (REE) to trace sediment particles revealed individual particle transport distances ranging from 0 m to >850 m. Deposition on a unit area basis was greater in the stream channel than on the floodplain, and the movement of sediment from the stream bed to the water column and back again during sequential floods was evident. Approximately 80% of the tracer was deposited within the first 66 m of the reach. This information can aid the development of models that predict the impact of stream restoration practices on in-stream habitat and improve predictions of the time between the initiation of stream restoration projects and improvements in the biological community. / PHD
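The reported thresholds lend themselves to a simple screening rule: flag a site when embeddedness or conductivity exceeds the bioregion-specific protective level. The sketch below is illustrative only; the site readings and function names are invented, while the threshold values come from the abstract above.

```python
# Illustrative screening of stream sites against the sensitivity thresholds
# reported above (protective of 95% of insect families). The example site and
# field names are hypothetical; only the threshold values come from the text.

THRESHOLDS = {
    # bioregion: (embeddedness %, conductivity in µS/cm)
    "combined": (68, 366),
    "mountain": (65, 391),   # Central Appalachian, Ridge and Valley, Blue Ridge
    "piedmont": (88, 136),   # Northern Piedmont, Piedmont
}

def screen_site(bioregion: str, embeddedness_pct: float,
                conductivity_us_cm: float) -> list[str]:
    """Return the list of parameters exceeding the protective threshold."""
    emb_limit, cond_limit = THRESHOLDS[bioregion]
    exceedances = []
    if embeddedness_pct > emb_limit:
        exceedances.append(f"embeddedness {embeddedness_pct}% > {emb_limit}%")
    if conductivity_us_cm > cond_limit:
        exceedances.append(f"conductivity {conductivity_us_cm} µS/cm > {cond_limit} µS/cm")
    return exceedances

# Hypothetical Piedmont site: acceptably embedded but with elevated conductivity.
print(screen_site("piedmont", embeddedness_pct=55, conductivity_us_cm=240))
# -> ['conductivity 240 µS/cm > 136 µS/cm']
```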

The Applicability of the Tap-Delay Line Channel Model to Ultra Wideband

Yang, Liu 30 September 2004 (has links)
Ultra-wideband (UWB) communication systems are highly promising because of their capability for high data rate transmission with low power consumption and low interference, and their immunity to multipath fading. More importantly, they have the potential to relieve the "spectrum drought" caused by the explosion of wireless systems in the past decade by operating in the same bands as existing narrowband systems. Given the extremely large bandwidth of UWB signals, we need to revisit UWB channel modeling. Specifically, we need to verify whether or not the traditional tap-delay line channel model is still applicable to UWB. One essential task in channel modeling is deconvolving the channel impulse response from the measurement data. Both frequency domain and time domain techniques were studied in this work. After comparing them, we adopted a time domain technique known as the CLEAN algorithm for our channel modeling analysis. A detailed analysis of the CLEAN algorithm is given, and it is found to be sufficient for our application. The impact of per-path pulse distortion due to various mechanisms on the tap-delay line channel model is discussed. It is shown that, with cautious interpretation of the channel impulse response, the tap-delay line channel model is still applicable to UWB. / Master of Science
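To make the deconvolution step concrete, the sketch below implements a minimal CLEAN loop: correlate the received waveform with the template pulse, pick the strongest path, subtract its scaled and shifted copy, and repeat. It is an illustrative sketch of the generic algorithm under idealized, noise-free assumptions, not the thesis's implementation; the synthetic pulse and channel taps are invented for the example.

```python
import numpy as np

def clean(received, template, n_paths=10):
    """Minimal CLEAN deconvolution sketch: iteratively locate and subtract
    scaled, shifted copies of the template pulse from the residual signal.
    Returns (delays, amplitudes) of the detected multipath taps."""
    residual = received.astype(float).copy()
    energy = float(np.dot(template, template))
    delays, amps = [], []
    for _ in range(n_paths):
        corr = np.correlate(residual, template, mode="valid")
        tau = int(np.argmax(np.abs(corr)))   # strongest remaining path (samples)
        amp = corr[tau] / energy             # least-squares tap amplitude
        delays.append(tau)
        amps.append(amp)
        residual[tau:tau + len(template)] -= amp * template
    return np.array(delays), np.array(amps)

# Synthetic two-path channel: taps at samples 5 (gain 1.0) and 20 (gain -0.6).
pulse = np.array([0.2, 1.0, 0.2])
rx = np.zeros(40)
for d, a in [(5, 1.0), (20, -0.6)]:
    rx[d:d + len(pulse)] += a * pulse
print(clean(rx, pulse, n_paths=2))  # recovers delays [5, 20], amps [1.0, -0.6]
```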

Production of blue ammonia as a clean fuel in Qatar

Al-Shamari, M., Khodary, A., Han, D.S., Mujtaba, Iqbal, Rahmanian, Nejat 03 June 2023 (has links)
The production of blue ammonia offers an alternative fuel for reducing CO2 emissions. Qatar aims to complete the world's largest blue ammonia plant, with an annual capacity of 1.2 million tonnes (MT), in the first quarter of 2026. Blue ammonia is produced by combining nitrogen with "blue" hydrogen derived from natural gas feedstocks, with the carbon dioxide captured and stored safely. Blue ammonia can be transported by conventional ships and used in power stations to produce low-carbon electricity, with potential future applications in decarbonized industries. The new plant will be located in Mesaieed Industrial City (MIC) and operated by QAFCO as part of its integrated facilities. QAFCO is already a significant ammonia and urea producer worldwide, with an annual production capacity of 3.8 MT of ammonia and 5.6 MT of urea, and is the largest producer of urea and ammonia at a single facility worldwide. Qatar Energy Renewable Solutions (QERS) will develop and manage integrated carbon capture and storage facilities to capture and sequester 1.5 MT of CO2 per year for the blue ammonia plant. QERS will also provide more than 35 MW of renewable electricity to the Ammonia-7 facility from its upcoming PV solar power plant in MIC. This project is a step towards reducing the carbon intensity of energy products and is a crucial pillar of Qatar's sustainability and energy transition strategy, aligning with the Qatar National Vision 2030.
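Taken together, the quoted figures imply the plant's capture intensity. A quick back-of-the-envelope check, assuming the 1.5 MT/y of captured CO2 and the 1.2 MT/y ammonia capacity both refer to the same Ammonia-7 facility, as the text suggests:

```python
# Back-of-the-envelope check of the capture figures quoted above. Assumes the
# 1.5 Mt/y CO2 capture and the 1.2 Mt/y ammonia capacity refer to the same
# Ammonia-7 facility; both numbers come from the abstract, the ratio is ours.
ammonia_mt_per_year = 1.2   # million tonnes of blue ammonia per year
co2_mt_per_year = 1.5       # million tonnes of CO2 captured per year

capture_intensity = co2_mt_per_year / ammonia_mt_per_year
print(f"~{capture_intensity:.2f} tonnes of CO2 captured per tonne of NH3")  # ~1.25
```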

Free-field inlet / outlet noise identification on aircraft engines using microphone arrays

Khatami, Iman January 2014 (has links)
Abstract: This thesis considers the discrimination of inlet / exhaust noise of aero-engines in free-field static tests using far-field microphone arrays. Various techniques are compared for this problem, including classical beamforming (CB), the regularized inverse method (Tikhonov regularization), L1-generalized inverse beamforming (L1-GIB), clean-PSF, clean-SC, and two novel methods, called the hybrid method and clean-hybrid. Classical beamforming is disadvantaged by its need for a large number of measurement microphones. Similarly, the inverse method is disadvantaged by its need for a priori source information. Classical Tikhonov regularization improves solution stability, but remains disadvantaged by its requirement of imposing a stronger penalty for undetected source positions. Coherent and incoherent sources are resolved by L1-generalized inverse beamforming (L1-GIB). This algorithm can distinguish multipole sources as well as monopole sources. However, source identification by L1-generalized inverse beamforming takes much time and requires a PC with large memory. The hybrid method is a new regularization method that uses an a priori beamforming measurement to define a data-dependent discrete smoothing norm for the regularization of the inverse problem. Compared to classical beamforming and inverse modeling, the hybrid (beamforming regularization) approach provides improved source strength maps without substantial added complexity. Although the hybrid method largely resolves the disadvantages of the former methods, its application to identifying weaker sources in the presence of strong sources is not satisfactory. This can be explained by the large penalization applied to the weaker source in the hybrid method, which results in underestimation of its source strength. To overcome this defect, the clean-SC method and the proposed clean-hybrid method, a combination of the hybrid method and clean-SC, are applied. These methods remove the effect of the strong sources in source power maps in order to identify the weaker sources. The proposed methods, which represent the main contribution of this thesis, show promising results and open new research avenues. A theoretical study of all approaches is performed for various sources and array configurations. To validate the theoretical study, several laboratory experiments were conducted at Université de Sherbrooke. The proposed methods have further been applied to measured noise data from a Pratt & Whitney Canada turbofan engine and have been observed to provide better spatial resolution and solution robustness with a limited number of measurement microphones compared to the existing methods.
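As a rough illustration of the regularization idea behind the hybrid method, the sketch below builds a classical beamforming map and then uses it as a data-dependent diagonal penalty in a Tikhonov inverse: grid points that look quiet to the beamformer are penalized more heavily. It is a toy sketch of the general concept with a random, invented transfer matrix, not the thesis's implementation.

```python
import numpy as np

def beamforming_map(G, p):
    """Classical beamforming: steer a normalized vector to each grid point."""
    w = G / np.linalg.norm(G, axis=0, keepdims=True)
    return np.abs(w.conj().T @ p) ** 2

def hybrid_inverse(G, p, lam=1e-2, eps=1e-6):
    """Tikhonov inverse with a beamforming-derived diagonal penalty: grid
    points that look quiet in the beamforming map are penalized more,
    concentrating the inverse solution on likely source regions. A toy
    sketch of the idea only, not the thesis's implementation."""
    b = beamforming_map(G, p)
    L = np.diag(1.0 / np.sqrt(b + eps))          # data-dependent smoothing norm
    A = G.conj().T @ G + lam ** 2 * (L.T @ L)
    return np.linalg.solve(A, G.conj().T @ p)

# Toy setup: 8 microphones, 20 candidate grid points, one true source at 7.
rng = np.random.default_rng(0)
G = rng.standard_normal((8, 20)) + 1j * rng.standard_normal((8, 20))
q_true = np.zeros(20, dtype=complex)
q_true[7] = 1.0
p = G @ q_true
q_est = hybrid_inverse(G, p)
print(int(np.argmax(np.abs(q_est))))  # should recover the true source index, 7
```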

Cleantech SMEs’ Expectations and Perceptions of an Established Community-based Intermediary Moving into their Sector

Dahiya, Sushil 07 March 2013 (has links)
Innovation intermediaries provide a range of services to assist firms during the process of innovation. How small and medium enterprises (SMEs) perceive innovation intermediaries is an area of investigation that can provide important information on how intermediaries assist them. This study focuses on the cleantech industry and explores SMEs’ expectations and perceptions of an established community-based intermediary (CBI) moving into their sector. A qualitative research methodology was adopted to collect data from 15 sample SMEs. With regard to SMEs, the findings show that cleantech companies face financing, partnership, marketing, sales, regulatory, and bureaucratic challenges. With regard to innovation intermediaries, the findings show that CBI, a regional intermediary, is not effective in supporting cleantech SMEs with their sector-specific needs and challenges.

Evolving Future Internet clean-slate Entity Title Architecture with quality-oriented control-plane extensions

Lema, José Castillo 31 July 2014 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / The current Internet has confronted quite a few problems in terms of network mobility, quality, scalability, and performance, mainly due to the rapid increase in the number of end-users and various new service demands, requiring new solutions to support future usage scenarios. New Future Internet approaches targeting Information Centric Networking, such as the Entity Title Architecture (ETArch), provide new services and optimizations for these scenarios, using novel mechanisms leveraging the Software Defined Networking (SDN) concept. However, the ETArch transport model is equivalent to the best-effort capability of the current Internet, which limits reliable communication. In this work, ETArch was evolved with both quality-oriented mobility and resilience functions following the over-provisioning paradigm to achieve advanced network resource allocation integrated with OpenFlow. The resulting framework, Support of Mobile Sessions with High Transport Network Resource Demand (SMART), allows the network to semantically define the quality requirements of each session so as to drive network Quality of Service control and keep the best possible Quality of Experience. The evaluation of SMART in both the data and control planes was carried out on a real testbed of the OFELIA Brazilian island, showing support for bandwidth-intensive multimedia applications with guaranteed QoS and QoE through a restricted signalling scheme in comparison with the legacy ETArch.
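To make the control-plane idea concrete, here is a hypothetical sketch of how a controller might map a session's declared quality requirements onto per-switch bandwidth reservations along its path. All class and method names are invented for illustration, and the 20% over-provisioning margin is an assumption; this is not the actual SMART/ETArch code, which would push real OpenFlow flow-mod and queue-configuration messages.

```python
# Hypothetical sketch of a quality-oriented control plane in the spirit of
# SMART: sessions declare their requirements, and the controller reserves
# over-provisioned bandwidth on every switch along the path. Names and the
# 20% margin are invented; a real controller would emit OpenFlow messages.
from dataclasses import dataclass, field

@dataclass
class SessionRequirements:
    title: str                  # ETArch identifies sessions by entity title
    min_bandwidth_mbps: float
    max_latency_ms: float

@dataclass
class QualityOrientedControlPlane:
    overprovision: float = 1.2  # assumed 20% headroom above the declared minimum
    next_queue: int = 1
    reservations: list = field(default_factory=list)

    def admit(self, session: SessionRequirements, path: list) -> None:
        """Reserve a dedicated queue for the session on each switch in its path."""
        reserved = session.min_bandwidth_mbps * self.overprovision
        for switch in path:
            # A real implementation would send OFPT_FLOW_MOD plus queue
            # configuration here; we only record the intended reservation.
            self.reservations.append((switch, session.title, self.next_queue, reserved))
            print(f"{switch}: queue {self.next_queue} <- {reserved:.1f} Mb/s "
                  f"for session '{session.title}'")
        self.next_queue += 1

cp = QualityOrientedControlPlane()
cp.admit(SessionRequirements("video.session.example", 8.0, 50.0), ["s1", "s2", "s3"])
```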

Study and development of a research method for the sources of chemical contamination of vacuum pumps in EUV equipment

Vinci, Andréa 11 July 2013 (has links)
This work presents the study of a research method, suitable for the industrial environment, for finding the contamination sources of EUVL turbomolecular pumps. The study deals with the problem of possible carbon contamination due to the pumping system, and in particular to the rotor and the stator. After a detailed characterisation by TD-GCMS/FID and XPS of the residual contamination from the production process, an RGA (residual gas analysis) procedure suitable for the industrial environment was developed. The possibility of varying the sample temperature during the measurement makes it possible to characterise the residual carbon contamination by investigating the physico-chemical phenomena at its origin. The identification of the whole production-process residual contamination demonstrated the efficacy of the industrial cleaning step in removing lubricant residues. In order to better characterise the residual contamination left by the cleaning step, we developed an "in vitro" copy of the industrial cleaning step. Thanks to temperature-variable RGA analysis of the residual contamination, we could identify several contributions to the carbon contamination and connect these contributions to different cleaning parameters. The impact of detergent concentration, as well as of different drying procedures, on residual carbon contamination was studied; RGA and TD-GCMS/FID analyses as well as XPS surface characterisation were performed. These analyses show a direct connection between the detergent concentration used in the cleaning step and the residual carbon contamination. Furthermore, the importance of a high-temperature drying step was demonstrated.

Clean Code vs Dirty Code: A field experiment to explain how Clean Code affects code comprehension

Hagman, Tobias January 2015 (has links)
Summary: Big and complex codebases with inadequate understandability are an increasingly common problem among companies today. Inadequate understandability leads to greater time requirements when maintaining code, which means increased costs for a company. Clean Code is, according to some, the solution to this problem. Clean Code is a collection of guidelines and principles for writing code that is easy to understand and maintain. A gap in knowledge was identified, as there is little empirical data investigating how Clean Code affects understandability. This led to the following question: How is understandability affected when modifying source code that has been refactored according to the Clean Code principles regarding names and functions? To investigate how Clean Code affects understandability, a field experiment was conducted in collaboration with the company CGM Lab Scandinavia in Borlänge, in which data in the form of completion time and experienced understandability was collected and analyzed. The results of this study do not show any clear signs of immediate improvement or worsening of understandability, because even though all participants preferred Clean Code, this did not show in the measured times of the experiment. This leads to the conclusion that the effects of Clean Code may not be immediate, since developers have not yet adapted to Clean Code and therefore cannot utilize its benefits fully. The study gives a hint of the potential Clean Code has to improve understandability.
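To illustrate what the two manipulated principles look like in practice, here is a small invented before/after example in the spirit of the naming and function guidelines the experiment refactored for; it is not the study's actual test material.

```python
# Illustrative contrast of the two Clean Code principles the experiment
# targeted: intention-revealing names, and small single-purpose functions.
# Invented for illustration; not the study's actual test material.

# "Dirty" style: cryptic names, one function doing several things.
def p(l, t):
    r = []
    for d in l:
        if d[1] > t:
            r.append((d[0], d[1] * 0.9))
    return r

# "Clean" style: descriptive names, each function does one thing.
DISCOUNT_RATE = 0.10

def is_eligible(price: float, threshold: float) -> bool:
    return price > threshold

def apply_discount(price: float) -> float:
    return price * (1 - DISCOUNT_RATE)

def discounted_items(items, threshold):
    return [(name, apply_discount(price))
            for name, price in items
            if is_eligible(price, threshold)]

items = [("book", 120.0), ("pen", 30.0)]
assert p(items, 100) == discounted_items(items, 100)  # identical behavior
```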
