  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Myson Burch Thesis

Myson C Burch (16637289) 08 August 2023 (has links)
<p>With the completion of the Human Genome Project and many additional efforts since, there is an abundance of genetic data that can be leveraged to revolutionize healthcare. There are now significant efforts to develop state-of-the-art techniques that reveal insights into the connections between genetics and complex diseases such as diabetes, heart disease, or common psychiatric conditions that depend on multiple genes interacting with environmental factors. These methods help pave the way towards diagnosis, cure, and ultimately prediction and prevention of complex disorders. As part of this effort, we address high-dimensional genomics questions through mathematical modeling, statistical methodology, combinatorics, and scalable algorithms. More specifically, we develop innovative techniques at the intersection of technology and the life sciences, using biobank-scale data from genome-wide association studies (GWAS) and machine learning to better understand human health and disease. <br> <br> The underlying principle behind GWAS is a test for association between the genotyped variants of each individual and the trait of interest. GWAS have been extensively used to estimate the signed effects of trait-associated alleles and to map genes to disorders; over the past decade, about 10,000 strong associations between genetic variants and one or more complex traits have been reported. One of the key challenges in GWAS is population stratification, which can lead to spurious genotype-trait associations. Our work proposes a simple clustering-based approach that corrects for stratification more effectively than existing methods by taking linkage disequilibrium (LD) into account when computing distances between the individuals in a sample. 
Our approach, called CluStrat, performs Agglomerative Hierarchical Clustering (AHC) using a regularized Mahalanobis distance-based GRM, which captures the population-level covariance (LD) matrix of the available genotype data.<br> <br> Linear mixed models (LMMs) have been a popular and powerful method for conducting GWAS in the presence of population structure, but they are computationally expensive relative to simpler techniques. We implement matrix sketching in LMMs (MaSk-LMM) to mitigate the most expensive computations. Matrix sketching is an approximation technique in which random projections compress the original dataset into a significantly smaller one that still preserves key properties of the original up to a guaranteed approximation ratio. This technique applies naturally to problems in genetics, where a large biobank can be treated as a matrix whose rows represent samples and whose columns represent SNPs; such matrices are very large because of the number of individuals and markers involved, and therefore benefit from sketching. Our approach tackles the bottleneck of LMMs directly by sketching the samples of the genotype matrix, as well as sketching the markers during the computation of the relatedness or kinship matrix (GRM). <br> <br> Predictive analytics have been used to improve healthcare by reinforcing decision-making, enhancing patient outcomes, and providing relief for the healthcare system. The prevalence of complex diseases varies greatly around the world. Understanding the basis of these prevalence differences can help disentangle the interactions among the factors causing complex disorders and identify groups of people who may be at greater risk of developing certain disorders. 
This could become the basis for implementing early intervention strategies in populations at higher risk, with significant benefits for public health.<br> <br> This dissertation broadens our understanding of empirical population genetics. It proposes a data-driven perspective on a variety of problems in genetics, such as confounding factors in genetic structure. It highlights current computational barriers in open problems in genetics and provides robust, scalable, and efficient methods to ease the analysis of genotype data.</p>
82

DATA-CENTRIC DECISION SUPPORT SYSTEM FRAMEWORK FOR SELECTED APPLICATIONS

Xiang Gu (11090106) 15 December 2021 (has links)
<p>Web and digital technologies have grown rapidly over the past five years. The data generated by Internet of Things (IoT) devices are heterogeneous, which increases the difficulty of data storage and management. This thesis develops user-friendly data management system frameworks for both a local environment and a cloud platform. The two frameworks are applied to two industrial applications: an agriculture informatics system and a personal healthcare management system. Both systems support information management and two-way communication through a user-friendly interface. </p>
83

Internet of Things Architecture Design and Implementation for Immersive Interfaces

Javier Belmonte (9193829) 09 September 2022 (has links)
<div>The advent of the Internet of Things (IoT) has enabled manufacturers, teachers, machine operators, makers, and researchers to design and use new workflows, fabricate parts efficiently and effectively, and interact with systems and devices in ways that were not possible before.</div><div>These networked systems have changed the way input is received from, and data is presented to, humans. Context-awareness and autonomy are characteristics of these devices that result in automated processes, faster production times, and more intuitive interfaces. Direct manipulation is an intuitive and natural form of human-computer interaction (HCI) that enables easy and fast learning for its users.</div><div>In this thesis, an Internet of Things architecture is designed and implemented to enable control and data visualization of machines and devices through immersive interfaces using direct manipulation. The proposed architecture and interfaces are tested and validated against three categories of systems: systems that must be modified to become IoT-ready, systems that are already IoT-ready, and systems that have not yet been constructed. For the latter case, a custom system was built to evaluate and test the whole architecture and its implementation. The knowledge acquired in developing this architecture and the design rationale behind the immersive interfaces are summarized and presented as a series of guidelines and recommendations for IoT system manufacturers to follow when including immersive interfaces in their designs.</div>
84

Touching the Essence of Life : Haptic Virtual Proteins for Learning

Bivall, Petter January 2010 (has links)
This dissertation presents research on the development and use of a multi-modal visual and haptic virtual model in higher education. The model, named Chemical Force Feedback (CFF), represents molecular recognition through the example of protein-ligand docking, and enables students to simultaneously see and feel representations of the protein and ligand molecules and their force interactions. The research efforts were divided between educational research and the development of haptic feedback techniques. The CFF model was evaluated in situ through multiple data collections in a university course on molecular interactions. To isolate possible influences of haptics on learning, half of the students ran CFF with haptics, and the others used the equipment with force feedback disabled. Pre- and post-tests showed a significant learning gain for all students. A particular influence of haptics was found on students' reasoning, discovered through an open-ended written probe in which students' responses contained elaborate descriptions of the molecular recognition process. Students' interactions with the system were analyzed using customized information visualization tools. The analysis revealed differences between the groups, for example, in their use of the visual representations on offer and in how they moved the ligand molecule. Differences in representational and interactive behaviours were related to aspects of the learning outcomes. The CFF model was improved in an iterative evaluation and development process. A focus was placed on force model design, where one significant challenge was conveying information from data with large force differences, ranging from very weak interactions to the extreme forces generated when atoms collide. Therefore, a History Dependent Transfer Function (HDTF) was designed, which adapts the translation of data-derived forces into output forces according to the properties of recently derived forces. 
Evaluation revealed that the HDTF improves the ability to haptically detect features in volumetric data with large force ranges. To further enable force models with high fidelity, an investigation was conducted to determine the perceptual Just Noticeable Difference (JND) in force for detection of interfaces between features in volumetric data. Results showed that JNDs vary depending on the magnitude of the forces in the volume and depending on where in the workspace the data is presented.
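The general idea of a history-dependent transfer function can be sketched in a few lines. The scaling rule below is a hypothetical illustration, not Bivall's actual HDTF: each rendered force is rescaled by the largest magnitude seen in a recent window, so weak interactions remain perceivable while extreme collision forces are compressed into the device's output range.

```python
from collections import deque

class HistoryDependentTransfer:
    """Illustrative HDTF-style sketch: scale each raw force by the largest
    magnitude observed in a recent window, so the rendered force uses the
    device's full output range even when the data spans orders of magnitude."""

    def __init__(self, window=100, max_output=1.0, eps=1e-9):
        self.history = deque(maxlen=window)  # recent raw force magnitudes
        self.max_output = max_output         # haptic device force limit
        self.eps = eps                       # avoids division by zero

    def __call__(self, raw_force):
        self.history.append(abs(raw_force))
        scale = max(self.history) + self.eps  # adapt to recent history
        return self.max_output * raw_force / scale
```

After a burst of strong forces passes out of the window, the scale shrinks again and weak features become haptically detectable once more.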
85

兩稅合一制度下「股東可扣抵稅額」於企業評價之角色-Ohlson模型之應用 / The Role of Imputation Credits Disclosure to Firms’ Valuation after the Integration of Individual and Corporate Taxes— An Application of the Ohlson Model

張青霞, Chang, Ching-Hsia Unknown Date (has links)
According to modern accounting theory, footnote disclosures are an integrated part of the overall financial statements. Their purpose is to provide value-relevant information to assist investors' valuation process. After Taiwan's 1998 tax reform, which integrated the individual and corporate taxes, current GAAP requires a footnote disclosure of imputation credits (IC). This provides a good opportunity to test how Taiwan's stock market reacts to such disclosure. The main purpose of this study is to examine the value relevance of IC disclosure to investors' equity valuation. This study uses Ohlson's (1995) model to analyze 317 firms listed on the Taiwan Stock Exchange (TSE) during 1998. To estimate abnormal earnings and other information (captured by analysts' forecasts), this study adopts the methodology of Dechow, Hutton, and Sloan (1999). We also investigate the effects of industry and firm size on the value relevance of IC disclosure. The empirical results reveal three findings. First, there is a positive association between IC and stock price on the TSE, whether IC is measured on a cash or an accrual basis; the IC disclosure is therefore value relevant to investors' equity valuation. Second, abnormal earnings and other information both explain stock price behavior. Finally, when we restrict the sample to the textile and high-tech industries, no significant association between IC disclosure and stock price is found. When we further consider firm size, however, the value relevance of IC disclosure becomes significant. In other words, the value relevance of IC disclosure may be affected by firm size.
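The Ohlson (1995) relation behind such studies expresses price as a linear function of book value, abnormal earnings, and other information. A minimal sketch of the corresponding cross-sectional regression follows; the variable names and synthetic data are illustrative assumptions, not the study's 317-firm sample:

```python
import numpy as np

def ohlson_regression(price, book_value, abnormal_earnings, other_info):
    """Fit P = a0 + a1*BV + a2*x_a + a3*v by ordinary least squares,
    following the linear form of the Ohlson (1995) valuation model."""
    X = np.column_stack([np.ones_like(price), book_value,
                         abnormal_earnings, other_info])
    coef, *_ = np.linalg.lstsq(X, price, rcond=None)
    return coef  # [intercept, a1, a2, a3]
```

Adding the footnote-disclosed imputation credit as a further regressor and testing its coefficient is, in essence, the value-relevance test the abstract describes.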
86

KOLLEKTIVTRAFIKENS GEOGRAFISKA VARIATIONER I TID OCH KOSTNAD – HUR PÅVERKAR DETTA BOSTADSPRISERNA? : Fallstudie Uppsala län med pendlingsomland / Geographic variation in public transport time and cost – how does this affect housing prices? : A case study of Uppsala County and its commuting hinterland

Sognestrand, Johanna, Österberg, Matilda January 2009 (has links)
<p>The distance between home and work has increased in recent decades. The development of infrastructure and public transport has made jobs farther from home more accessible, which in turn has increased commuting. Commuters often cross administrative boundaries, which frequently serve as borders for public transport pricing; market demand also influences prices. Research shows that travel times and costs significantly affect the choice to commute. Many people have an upper limit of about 60 minutes of travel between home and work, and how commuting costs affect an individual's choice varies with, among other things, the individual's income and housing costs. The aim of our study was to examine how public transport costs and travel times vary geographically. A geographic information system (GIS) was used to perform a network analysis that displayed travel times and costs on maps. We also examined whether there was a link between a town's accessibility by public transport and its housing market, using correlation and regression analysis. To answer our questions we used a study area consisting of Uppsala County and its surrounding commuting area. The maps showed how accessibility to larger towns varies among the smaller towns: accessibility is usually best between larger towns and poorer between smaller ones. The distance to bus stops or the railway station also has a significant effect on total travel time. Urban areas with access to rail services had the best opportunities to reach larger cities, which also gives better access to the labour market. From our study of Uppsala County, which has a monocentric structure, we could identify a link between accessibility to the larger cities and housing prices in the surrounding towns: the higher the commuting cost and the longer the travel time to the central town, the lower the housing prices. A similar study of Stockholm, which has a polycentric structure, showed that this relationship between accessibility and house prices does not hold for all regions. We conclude that the housing market depends on many factors other than access to rapid public transport; house prices may also depend on factors such as proximity to nature and water.</p>
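The network analysis behind such travel-time maps reduces to a shortest-path computation over a public transport graph. The sketch below uses Dijkstra's algorithm; the stop names, links, and times are hypothetical, not the study's Uppsala data:

```python
import heapq

def travel_times(graph, origin):
    """Dijkstra's algorithm: minimum travel time in minutes from `origin`
    to every reachable stop. `graph` maps stop -> [(neighbor, minutes), ...]."""
    best = {origin: 0}
    queue = [(0, origin)]
    while queue:
        t, stop = heapq.heappop(queue)
        if t > best.get(stop, float("inf")):
            continue  # stale queue entry, already found a faster route
        for nxt, minutes in graph.get(stop, []):
            nt = t + minutes
            if nt < best.get(nxt, float("inf")):
                best[nxt] = nt
                heapq.heappush(queue, (nt, nxt))
    return best

# Hypothetical network: rail-served towns get shorter links than bus-only ones.
network = {
    "Uppsala": [("Knivsta", 10), ("Morgongava", 25)],
    "Knivsta": [("Uppsala", 10), ("Marsta", 12)],
    "Marsta": [("Knivsta", 12)],
    "Morgongava": [("Uppsala", 25)],
}
```

In a GIS workflow, the resulting per-stop times are joined back to town geometries and rendered as the accessibility maps the study describes; the same times then feed the correlation and regression against housing prices.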
87

EFFICIENTNEXT: EFFICIENTNET FOR EMBEDDED SYSTEMS

Abhishek Rajendra Deokar (12456477) 12 July 2022 (has links)
<p>Convolutional Neural Networks have come a long way since AlexNet. Each year the limits of the state of the art are pushed to new levels. EfficientNet raised the performance metrics to a new high, and EfficientNetV2 even more so. Nevertheless, architectures for mobile applications can still benefit from improved accuracy and a reduced model footprint. The classic Inverted Residual block has been the foundation upon which most mobile networks seek to improve, and the EfficientNet architecture is built from the same Inverted Residual block. In this paper we experiment with Harmonious Bottlenecks in place of the Inverted Residuals and observe a reduction in the number of parameters and an improvement in accuracy. The designed network is then deployed on the NXP i.MX 8M Mini board for image classification.</p>
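The parameter economics that make blocks like the Inverted Residual attractive on embedded boards can be checked with simple arithmetic. The sketch below compares weight counts (channel widths are illustrative; this is not the EfficientNet code): an inverted residual processes features at a 6x-expanded width for a small fraction of the parameters a full 3x3 convolution at that width would need.

```python
def conv_params(c_in, c_out, k):
    """Weights of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def inverted_residual_params(c_in, c_out, k=3, expand=6):
    """Weights of a MobileNetV2-style inverted residual:
    1x1 expand -> k x k depthwise -> 1x1 project (bias omitted)."""
    hidden = c_in * expand
    return (c_in * hidden        # 1x1 expansion conv
            + hidden * k * k     # depthwise k x k: one k x k filter per channel
            + hidden * c_out)    # 1x1 projection conv
```

For 64-channel input and output, the block works at 384 hidden channels yet stays roughly 25x cheaper in weights than a dense 3x3 convolution at 384 channels, which is the kind of saving a mobile-oriented block redesign targets.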
88

AUTOMATING BIG VISUAL DATA COLLECTION AND ANALYTICS TOWARD LIFECYCLE MANAGEMENT OF ENGINEERING SYSTEMS

Jongseong Choi (9011111) 09 September 2022 (has links)
Images have become a ubiquitous and efficient data form for recording information. The use of images for data capture has grown substantially due to the widespread availability of image sensors and sensor platforms (e.g., smartphones and drones), the simplicity of the approach for broad groups of users, and our pervasive access to the internet, itself one class of infrastructure. Such data contains abundant visual information that can be exploited to automate asset assessment and management tasks that are traditionally conducted manually for engineering systems. Automating the data collection, extraction, and analytics is, however, key to using these data for decision-making. Despite recent advances in computer vision and machine learning techniques for extracting information from images, automation of these real-world tasks has so far been limited, partly due to the variety of the data and the fundamental challenges associated with each domain. Because society demands access to, and steady operation of, our infrastructure systems, this class of systems represents an ideal application where automation can have high impact. At present, extensive human involvement is required to perform everyday procedures such as organizing, filtering, and ranking the data before executing analysis techniques, consequently discouraging engineers from even collecting large volumes of data. To break down these barriers, methods must be developed and validated to speed up the analysis and management of data over the lifecycle of infrastructure systems. In this dissertation, big visual data collection and analysis methods are developed with the goal of reducing the burden of manual procedures. The automated capabilities developed herein focus on lifecycle visual assessment and are intended to exploit large volumes of data collected periodically over time. 
To demonstrate the methods, various classes of infrastructure commonly located in our communities are chosen for validation because they: (i) provide commodities and services essential to enable, sustain, or enhance our lives; and (ii) require lifecycle structural assessment as a high priority. To validate these capabilities, infrastructure assessment applications are developed that exercise multiple big-visual-data techniques, such as region-of-interest extraction, orthophoto generation, image localization, object detection, and image organization using convolutional neural networks (CNNs), depending on the domain of lifecycle assessment needed for the target infrastructure. The research can, however, be adapted to many other applications where monitoring and maintenance are required over the lifecycle.
89

<strong>TOWARDS A TRANSDISCIPLINARY CYBER FORENSICS GEO-CONTEXTUALIZATION FRAMEWORK</strong>

Mohammad Meraj Mirza (16635918) 04 August 2023 (has links)
<p>Technological advances have a profound impact on people and the world in which they live. People regularly use a wide range of smart devices, such as Internet of Things (IoT) devices, smartphones, and wearables, all of which store and use location data. With this explosion of technology, these devices have come to play an essential role in digital forensics and crime investigations. Digital forensic professionals are increasingly able to acquire and assess many types of data, including location data, which has become essential for responders, practitioners, and digital investigators handling cases that rely heavily on devices that collect data about their users. When performing a digital/cyber forensic investigation, it is very beneficial to answer the six Ws questions (i.e., who, what, when, where, why, and how) using location data recovered from digital devices, such as where the suspect was at the time of the crime or deviant act; such evidence can help convict a suspect or prove their innocence. However, many digital forensic standards, guidelines, and tools, and even the National Institute of Standards and Technology (NIST) NICE Cybersecurity Workforce Framework, lack full coverage of what location data can be, how to use such data effectively, and how to perform spatial analysis. Although current digital forensic frameworks recognize the importance of location data, they consider only a limited number of data sources (e.g., GPS) as sources of location. Moreover, most digital forensic frameworks and tools have yet to introduce geo-contextualization techniques and spatial analysis into the digital forensic process, which could aid investigations and provide more information for decision-making. 
As a result, significant gaps remain in the digital forensics community, driven by a lack of understanding of how to properly curate geodata. This research therefore develops a transdisciplinary framework that addresses the limitations of previous work and explores opportunities for handling geodata recovered from digital evidence, improving how geodata are maintained and how the best value is extracted from them, using an iPhone case study. The findings demonstrate the potential value of geodata in digital forensic investigations when the created transdisciplinary framework is used. The findings also discuss the implications for digital spatial analytical techniques and multi-intelligence domains, including location intelligence and open-source intelligence, that aid investigators and build a richer understanding of device users' spatial, temporal, and spatio-temporal patterns.</p>
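A basic spatial-analysis primitive for geo-contextualizing recovered evidence is testing whether location fixes fall near a place of interest. The sketch below uses the standard haversine formula; the coordinates and the `points_near` helper are hypothetical illustrations, not case data or a forensic tool's API:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    p1, p2 = radians(lat1), radians(lat2)
    dlat, dlon = p2 - p1, radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(p1) * cos(p2) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def points_near(points, center, radius_km):
    """Filter recovered (lat, lon) fixes to those within radius_km of center."""
    clat, clon = center
    return [p for p in points
            if haversine_km(p[0], p[1], clat, clon) <= radius_km]
```

Applied to timestamped fixes extracted from a device, such a geofence query supports the "where" of the six Ws, e.g., which recovered fixes place the device near a scene of interest.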
