371

Produktutvecklingsprocesser vid digitalisering av hemprodukter : Påverkan på intern struktur, projekttid, användardata och produktutvecklingsmetod / Product Development in Digitalization of Home Products

BRICK, ADÉLE, HABBERSTAD, HELENA January 2020
Under de senaste åren har digitaliseringen av fysiska produkter ökat, och allt fler företag har därmed börjat implementera digitala komponenter i sina produkter. Att implementera mjukvara i en analog produkt innebär nya utmaningar för produktutvecklingsteam som tidigare arbetat med att ta fram analoga produkter. Många företag har i och med digitaliseringen valt att anpassa sina produktutvecklingsmetoder med målet att integrera de digitala och analoga produktutvecklingsprocesserna med varandra. Syftet med studien är att undersöka hur produktutvecklingsprocesserna ser ut idag på produktutvecklande företag som har genomgått en digitalisering. De aspekter som har tagits extra hänsyn till är projekttid, produktutvecklingsmetod, företagets organisering och struktur samt insamling och implementering av användardata i produktutvecklingsprocessen. Företag som utvecklar uppkopplade produkter för hemmet är exempel på företag som just nu genomgår en digitalisering av tidigare analoga produkter, därför har företag med detta spår valts som inriktning vid denna studie. Studien har utförts genom en inledande litteraturstudie följt av kvalitativa intervjuer med fyra responderande företag, som samtliga utvecklar uppkopplade hemprodukter vilka innehåller IoT-teknologi. Studien visar att företagen strävar mot ett agilt arbetssätt, men att det finns svårigheter med att integrera hårdvaru- och mjukvaruutveckling i produktutvecklingsprocesserna. Trots detta upplevs utvecklingstiden i projekt som oförändrad jämfört med innan digitaliseringen. Det framkommer även att tvärfunktionalitet hos utvecklingsteamen är en fördel i samspelet mellan de digitala och analoga delarna av produktutvecklingen. Studien visade slutligen att kunddata som samlas in via digitaliserade produkter används av företag som ett verktyg för att effektivisera produktutvecklingen. / In recent years the digitalization of physical products has increased, and many companies have therefore started to implement digital components in their products. Adding software to an analog product creates new challenges for product development teams that until then have mainly been developing analog products. Many companies have, as a result of the digitalization, chosen to adapt their product development methods to manage the integration between digital and analog development processes. The purpose of this study is to investigate what the product development process looks like today in companies that have digitalized their products. The aspects that are specifically considered are: project duration, product development method, the organizational structure of the company, and the collection and implementation of user data in the product development process. Companies that develop connected products for home use are one example of companies currently going through a digitalization of previously analog products, which is why this type of company is targeted in this study. The study was conducted through an initial literature study, followed by qualitative interviews with four responding companies, all of which develop connected home products containing IoT technology. The study shows that the companies aim for a more agile way of working, but that there are difficulties in integrating hardware and software development in the product development processes. Nonetheless, project duration does not appear to have changed significantly in comparison to pre-digitalization. It is also revealed that cross-functional development teams are an advantage in the interplay between the digital and analog parts of the development process. Finally, the study shows that customer data collected through digitalized products is used by the companies as a tool for making product development more efficient.
372

Case Studies on Fractal and Topological Analyses of Geographic Features Regarding Scale Issues

Ren, Zheng January 2017
Scale is an essential notion in geography and geographic information science (GIScience). However, the complex concepts of scale and traditional Euclidean geometric thinking have created tremendous confusion and uncertainty. Traditional Euclidean geometry uses absolute size, regular shape and direction to describe our surrounding geographic features. In this context, different measuring scales will affect the results of geospatial analysis. For example, if we want to measure the length of a coastline, its length will be different using different measuring scales. Fractal geometry indicates that most geographic features are not measurable because of their fractal nature. In order to deal with such scale issues, topological and scaling analyses are introduced. They focus on the relationships between geographic features instead of geometric measurements such as length, area and slope. A change of scale affects geometric measurements such as length and area but does not affect topological measurements such as connectivity.   This study uses three case studies to demonstrate the scale issues of geographic features through fractal analyses. The first case illustrates that the length of the British coastline is fractal and scale-dependent: the measured length increases as the measuring scale decreases. The yardstick fractal dimension of the British coastline was also calculated. The second case demonstrates that areal geographic features such as the island of Great Britain are also scale-dependent in terms of area. The box-counting fractal dimension, an important parameter in fractal analysis, was also calculated. The third case focuses on the scale effects on elevation and the slope of the terrain surface. The relationship between slope value and resolution in this case is not as simple as in the other two cases; flat and undulating areas generate different results. These three cases all show the fractal nature of geographic features and indicate the fallacies of scale that exist in geography. Accordingly, the fourth case exemplifies how topological and scaling analyses can be used to deal with such otherwise unsolvable scale issues. It analyzes the London OpenStreetMap (OSM) streets with a topological approach to reveal the scaling, or fractal, property of street networks, and further investigates the ability of the topological metric to predict Twitter users’ presence. The correlation between the number of tweets and the connectivity of London’s named natural streets is relatively high, with a coefficient of determination r² of 0.5083.   Regarding scale issues in geography, the specific technology or method used to handle the scale issues arising from the fractal essence of geographic features matters less than the mindset: shifting from traditional Euclidean thinking to novel fractal thinking in the field of GIScience is what is important. The first three cases reveal the scale issues of geographic features under Euclidean thinking, while the fourth case shows that topological analysis can deal with such scale issues under a fractal way of thinking. With the development of data acquisition technologies, data itself becomes more complex than ever before. Fractal thinking effectively describes the characteristics of geographic big data across all scales, overcomes the drawbacks of traditional Euclidean thinking, and provides deeper insights for GIScience research in the big data era.
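For readers unfamiliar with the box-counting dimension mentioned above, the following Python sketch illustrates the general idea on a synthetic, coastline-like curve. It is an illustration under stated assumptions (random-walk stand-in data, hand-picked box sizes, hypothetical function names), not the thesis's own code or data.

```python
# Illustrative sketch of box-counting dimension estimation (assumptions only,
# not the thesis's implementation or data).
import numpy as np

def box_counting_dimension(points, box_sizes):
    """Estimate the box-counting dimension of a 2-D point set.

    points: (n, 2) array of coordinates; box_sizes: box edge lengths.
    Returns the slope of log(occupied boxes) vs. log(1 / box size).
    """
    counts = []
    for size in box_sizes:
        # Snap each point to a grid cell of the given edge length and
        # count how many distinct cells are occupied.
        cells = np.floor(points / size).astype(int)
        counts.append(len({tuple(cell) for cell in cells}))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # A 2-D random walk stands in for a digitised coastline.
    coastline_like = np.cumsum(rng.normal(size=(10_000, 2)), axis=0)
    print(box_counting_dimension(coastline_like, box_sizes=[1, 2, 4, 8, 16, 32]))
```

On a genuinely fractal curve the log-log relationship stays roughly linear as the boxes shrink, which is exactly the scale-dependence described in the first two cases.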
373

Exploring Spatio-Temporal Patterns of Volunteered Geographic Information : A Case Study on Flickr Data of Sweden

Miao, Yufan January 2013
This thesis aims to seek interesting patterns from massive amounts of Flickr data in Sweden with newly proposed clustering strategies. The aim can be further divided into three objectives. The first is to acquire a large amount of timestamped geolocation data from Flickr servers. The second is to develop effective and efficient methods to process the data. More specifically, the methods to be developed are twofold: a preprocessing method to solve the “Big Data” issue encountered in the study, and a new clustering method to extract spatio-temporal patterns from the data. The third is to analyze the extracted patterns with scaling analysis techniques in order to interpret the human social activities underlying the Flickr data within the urban environment of Sweden. During the study, the three objectives were achieved sequentially. The data employed for this study consisted of vector points downloaded through the Flickr Application Programming Interface (API). After data acquisition, preprocessing was performed on the raw data. The whole dataset was first separated by year based on the temporal information. The data for each year was then accumulated with that of the preceding year(s) so that the evolving process could be explored. After that, the large datasets were split into small pieces, and each piece was clipped, georeferenced, and rectified. The pieces were then merged together for clustering. The clustering strategy was developed based on the Delaunay Triangulation (DT) and the head/tail breaks rule. The generated clusters were then analyzed with scaling analysis techniques, and spatio-temporal patterns were interpreted from the analysis results. It was found that the spatial pattern of human social activities in the urban environment of Sweden generally follows a power-law distribution and that the cities defined by human social activities evolve over time. To conclude, the contributions of this research are threefold and fulfill the objectives of this study. Firstly, a large amount of Flickr data was acquired and collated as a contribution to other academic research related to Flickr. Secondly, a clustering strategy based on the DT and the head/tail breaks rule is proposed for seeking spatio-temporal patterns. Thirdly, the evolution of cities in terms of human activities in Sweden is detected from the perspective of scaling. Future work is expected in two major aspects: data and data processing. On the data side, the downloaded Flickr data is expected to be employed by other studies, especially those closely related to human social activities within the urban environment. On the processing side, new algorithms are expected to either accelerate processing or make better use of machines with supercomputing capacities.
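As an illustration of the head/tail breaks rule that the clustering strategy relies on (a minimal sketch under stated assumptions, not the thesis's DT-based implementation), the snippet below derives class breaks for a heavy-tailed list of values such as cluster sizes.

```python
# Minimal head/tail breaks sketch for heavy-tailed data (illustrative only).
import random

def head_tail_breaks(values, head_share_limit=0.4):
    """Split values at the mean repeatedly while the head (values above the
    mean) remains a clear minority, returning the break values."""
    breaks, head = [], list(values)
    while len(head) > 1:
        mean = sum(head) / len(head)
        breaks.append(mean)
        new_head = [v for v in head if v > mean]
        # Stop when the head is no longer much smaller than the whole,
        # i.e. the remaining values are no longer heavy-tailed at this level.
        if not new_head or len(new_head) / len(head) > head_share_limit:
            break
        head = new_head
    return breaks

random.seed(1)
# Synthetic Pareto-like sizes standing in for clusters of geotagged photos.
sizes = [int(random.paretovariate(1.5)) + 1 for _ in range(1000)]
print(head_tail_breaks(sizes))
```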
374

Inhämtning & analys av Big Data med fokus på sociala medier

Åhlander, Niclas, Aldaamsah, Saed January 2015
I en värld som till allt större del använder sig av sociala medier skapas och synliggörs information om användarna som tidigare inte varit enkel att i stor mängd analysera. I det här arbetet visas processen för att skapa ett automatiserat insamlingssätt av specifik data från sociala medier. Insamlad data analyseras därefter med noggrant utformade algoritmer och slutligen demonstreras processens nytta i sin helhet. Datainhämtningen från sociala medier automatiserades med hjälp av en mängd kombinerade metoder. Därefter kunde analysen av det inhämtade datat utföras med hjälp av specifika algoritmer som redovisades i det här arbetet. Tillsammans resulterade metoderna i att vissa mönster framkom i datan, vilket avslöjade en mängd olika typer av information kring analysens utvalda individer. / In a world that increasingly uses social media, information about the users is created and made visible that was previously not easy to analyse in large volumes. This thesis presents the process of creating an automated way of collecting specific data from social media. The collected data is then analysed with carefully designed algorithms, and finally the usefulness of the process as a whole is demonstrated. The data collection from social media was automated using a number of combined methods. The analysis of the collected data could then be carried out with the specific algorithms presented in this thesis. Together, the methods revealed certain patterns in the data, which disclosed a variety of types of information about the individuals selected for the analysis.
375

An Explorative Parameter Sweep: Spatial-temporal Data Mining in Stochastic Reaction-diffusion Simulations

Wrede, Fredrik January 2016
Stochastic reaction-diffusion simulation has become an efficient approach for modelling spatial aspects of intracellular biochemical reaction networks. By accounting for the intrinsic noise due to low copy numbers of chemical species, stochastic reaction-diffusion simulations have the ability to predict and model biological systems more accurately. As with much simulation software, exploration of the parameters associated with a model may be needed to yield new knowledge about the underlying system. The exploration can be conducted by executing parameter sweeps for a model. However, with little or no prior knowledge about the modelled system, the effort required of practitioners to explore the parameter space can become overwhelming. To address this problem, we perform a feasibility study on an explorative behavioural analysis of stochastic reaction-diffusion simulations by applying spatial-temporal data mining to large parameter sweeps. By reducing individual simulation outputs to a feature space based on simple time-series and distribution analytics, we were able to find similarly behaving simulations after performing agglomerative hierarchical clustering.
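To make the workflow concrete, here is a small Python sketch in the spirit of the abstract: simulated trajectories are reduced to a few summary features and grouped with agglomerative hierarchical clustering. The synthetic trajectories, feature choices and cut level are assumptions for illustration, not the thesis's actual pipeline.

```python
# Illustrative only: feature reduction plus agglomerative clustering of
# simulated time series (assumed data and features, not the thesis's).
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def summarise(series):
    """Collapse one trajectory into a small feature vector."""
    return [series.mean(), series.std(), series.min(), series.max()]

rng = np.random.default_rng(42)
# Stand-in for parameter-sweep output: 30 noisy trajectories from two regimes.
trajectories = [rng.normal(loc=level, scale=1.0, size=200)
                for level in (0.0, 5.0) for _ in range(15)]

features = np.array([summarise(t) for t in trajectories])
tree = linkage(features, method="average")          # agglomerative hierarchy
labels = fcluster(tree, t=2, criterion="maxclust")  # cut the tree into 2 groups
print(labels)
```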
376

Sequential estimation in statistics and steady-state simulation

Tang, Peng 22 May 2014
At the onset of the "Big Data" age, we are faced with ubiquitous data in various forms and with various characteristics, such as noise, high dimensionality, autocorrelation, and so on. The question of how to obtain accurate and computationally efficient estimates from such data is one that has stoked the interest of many researchers. This dissertation mainly concentrates on two general problem areas: inference for high-dimensional and noisy data, and estimation of the steady-state mean for univariate data generated by computer simulation experiments. We develop and evaluate three separate sequential algorithms for the two topics. One major advantage of sequential algorithms is that they allow for careful experimental adjustments as sampling proceeds. Unlike one-step sampling plans, sequential algorithms adapt to different situations arising from the ongoing sampling; this makes these procedures efficacious as problems become more complicated and more-delicate requirements need to be satisfied. We will elaborate on each research topic in the following discussion. Concerning the first topic, our goal is to develop a robust graphical model for noisy data in a high-dimensional setting. Under a Gaussian distributional assumption, the estimation of undirected Gaussian graphs is equivalent to the estimation of inverse covariance matrices. Particular interest has focused upon estimating a sparse inverse covariance matrix to reveal insight on the data as suggested by the principle of parsimony. For estimation with high-dimensional data, the influence of anomalous observations becomes severe as the dimensionality increases. To address this problem, we propose a robust estimation procedure for the Gaussian graphical model based on the Integrated Squared Error (ISE) criterion. The robustness result is obtained by using ISE as a nonparametric criterion for seeking the largest portion of the data that "matches" the model. Moreover, an l₁-type regularization is applied to encourage sparse estimation. To address the non-convexity of the objective function, we develop a sequential algorithm in the spirit of a majorization-minimization scheme. We summarize the results of Monte Carlo experiments supporting the conclusion that our estimator of the inverse covariance matrix converges weakly (i.e., in probability) to the latter matrix as the sample size grows large. The performance of the proposed method is compared with that of several existing approaches through numerical simulations. We further demonstrate the strength of our method with applications in genetic network inference and financial portfolio optimization. The second topic consists of two parts, and both concern the computation of point and confidence interval (CI) estimators for the mean µ of a stationary discrete-time univariate stochastic process X ≡ {X_i : i = 1, 2, ...} generated by a simulation experiment. The point estimation is relatively easy when the underlying system starts in steady state; but the traditional way of calculating CIs usually fails since the data encountered in simulation output are typically serially correlated. We propose two distinct sequential procedures that each yield a CI for µ with user-specified reliability and absolute or relative precision.
The first sequential procedure is based on variance estimators computed from standardized time series applied to nonoverlapping batches of observations, and it is characterized by its simplicity relative to methods based on batch means and its ability to deliver CIs for the variance parameter of the output process (i.e., the sum of covariances at all lags). The second procedure is the first sequential algorithm that uses overlapping variance estimators to construct asymptotically valid CI estimators for the steady-state mean based on standardized time series. The advantage of this procedure is that compared with other popular procedures for steady-state simulation analysis, the second procedure yields significant reduction both in the variability of its CI estimator and in the sample size needed to satisfy the precision requirement. The effectiveness of both procedures is evaluated via comparisons with state-of-the-art methods based on batch means under a series of experimental settings: the M/M/1 waiting-time process with 90% traffic intensity; the M/H_2/1 waiting-time process with 80% traffic intensity; the M/M/1/LIFO waiting-time process with 80% traffic intensity; and an AR(1)-to-Pareto (ARTOP) process. We find that the new procedures perform comparatively well in terms of their average required sample sizes as well as the coverage and average half-length of their delivered CIs.
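For orientation, the sketch below shows the textbook setting such procedures are usually benchmarked against: M/M/1 waiting times generated with Lindley's recursion and a plain nonoverlapping batch-means confidence interval for the steady-state mean. It is an illustration with assumed parameters, not the dissertation's sequential procedures.

```python
# Illustration only: classic batch-means CI on M/M/1 waiting times
# (assumed parameters; not the dissertation's sequential procedures).
import numpy as np
from scipy import stats

def mm1_waiting_times(n, arrival_rate, service_rate, rng):
    """Lindley's recursion: W_{i+1} = max(0, W_i + S_i - A_{i+1})."""
    w = np.zeros(n)
    for i in range(1, n):
        w[i] = max(0.0, w[i - 1]
                   + rng.exponential(1.0 / service_rate)
                   - rng.exponential(1.0 / arrival_rate))
    return w

def batch_means_ci(data, n_batches=20, confidence=0.90):
    """CI for the mean from approximately independent nonoverlapping batch means."""
    means = np.array([b.mean() for b in np.array_split(data, n_batches)])
    half = stats.t.ppf((1 + confidence) / 2, n_batches - 1) * means.std(ddof=1) / np.sqrt(n_batches)
    return means.mean() - half, means.mean() + half

rng = np.random.default_rng(7)
w = mm1_waiting_times(100_000, arrival_rate=0.9, service_rate=1.0, rng=rng)
print(batch_means_ci(w[10_000:]))  # drop a warm-up period before estimating
```

The serial correlation mentioned in the abstract is exactly why raw observations cannot be treated as independent: the batches must be long enough for their means to be roughly uncorrelated.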
377

Modern Computing Techniques for Solving Genomic Problems

Yu, Ning 12 August 2016
With the advent of high-throughput genomics, biological big data presents scientists with challenges in handling, analyzing, processing and mining this massive data. In this new interdisciplinary field, diverse theories, methods, tools and knowledge are utilized to solve a wide variety of problems. As an exploration, this dissertation project is designed to combine concepts and principles from multiple areas, including signal processing, information-coding theory, artificial intelligence and cloud computing, in order to solve the following problems in computational biology: (1) comparative gene structure detection, (2) DNA sequence annotation, and (3) investigation of CpG islands (CGIs) for epigenetic studies. Briefly, in problem #1, sequences are transformed into signal series or binary codes. Similar to speech/voice recognition, similarity is calculated between two signal series, and the signals are subsequently stitched/matched into a temporal sequence. Because the operations are binary in nature, all calculations/steps can be performed efficiently and accurately. Improving performance in terms of accuracy and specificity is the key for a comparative method. In problem #2, DNA sequences are encoded and transformed into numeric representations for deep learning methods. Encoding schemes greatly influence the performance of deep learning algorithms, so finding the best encoding scheme for a particular application of deep learning is significant. Three applications (detection of protein-coding splicing sites, detection of lincRNA splicing sites, and improvement of comparative gene structure identification) are used to show the computing power of deep neural networks. In problem #3, CpG sites are assigned a certain energy and a Gaussian filter is applied to the detection of CpG islands. Using the CpG box and a Markov model, we investigate the properties of CGIs and redefine them using emerging epigenetic data. In summary, these three problems and their solutions are not isolated; they are linked to modern techniques in such diverse areas as signal processing, information-coding theory, artificial intelligence and cloud computing. These novel methods are expected to improve the efficiency and accuracy of computational tools and bridge the gap between biology and scientific computing.
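As a sketch of two ingredients the abstract mentions, rather than the dissertation's actual pipeline, the snippet below one-hot encodes a DNA string for a neural network and computes a simple observed/expected CpG ratio over a sequence window, a quantity commonly used when screening for CpG islands; the toy sequence and function names are assumptions.

```python
# Illustrative only: DNA one-hot encoding and an observed/expected CpG ratio
# (toy sequence and names are assumptions, not the dissertation's code).
import numpy as np

BASES = "ACGT"

def one_hot(sequence):
    """Encode a DNA string as a (length, 4) binary matrix; unknown bases stay zero."""
    index = {base: i for i, base in enumerate(BASES)}
    encoded = np.zeros((len(sequence), 4), dtype=np.int8)
    for pos, base in enumerate(sequence.upper()):
        if base in index:
            encoded[pos, index[base]] = 1
    return encoded

def cpg_observed_expected(window):
    """Observed CpG dinucleotides divided by the count expected from C and G frequencies."""
    window = window.upper()
    c, g = window.count("C"), window.count("G")
    observed = window.count("CG")
    expected = c * g / len(window) if c and g else 0.0
    return observed / expected if expected else 0.0

toy = "CGCGGCGCTATATACGCGCGAATTCGCG"
print(one_hot(toy).shape)                    # (28, 4)
print(round(cpg_observed_expected(toy), 2))
```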
378

New regression methods for measures of central tendency

Aristodemou, Katerina January 2014
Measures of central tendency have been widely used for summarising statistical data, with the mean being the most popular summary statistic. However, in real-life applications it is not always the most representative measure of central location, especially when dealing with data that is skewed or contains outliers. Alternative statistics with less bias are the median and the mode. Median and quantile regression have been used in different fields to examine the effect of factors at different points of the distribution. Mode estimation, on the other hand, has found many applications in cases where the analysis focuses on obtaining information about the most typical value or pattern. This thesis demonstrates that the mode also plays an important role in the analysis of big data, which is becoming increasingly important in many sectors of the global economy. However, mode regression has not been widely applied, even though there is a clear conceptual benefit, because of the computational and theoretical limitations of the existing estimators. Similarly, despite the popularity of the binary quantile regression model, computationally straightforward estimation techniques do not exist. Driven by the demand for simple, well-founded and easy-to-implement inference tools, this thesis develops a series of new regression methods for mode and binary quantile regression. Chapter 2 deals with mode regression methods from the Bayesian perspective and presents one parametric and two non-parametric methods of inference. Chapter 3 demonstrates a mode-based, fast pattern-identification method for big data and proposes the first fully parametric mode regression method, which effectively uncovers the dependency of typical patterns on a number of covariates. The proposed approach is demonstrated through the analysis of a decade-long dataset on the Body Mass Index and associated factors, taken from the Health Survey for England. Finally, Chapter 4 presents an alternative binary quantile regression approach, based on nonlinear least asymmetric weighted squares, which can be implemented using standard statistical packages and guarantees a unique solution.
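To ground the terminology, the sketch below contrasts the mean, median and a kernel-density estimate of the mode on skewed data, and writes out the asymmetric squared loss that underlies least asymmetric weighted squares. It is purely illustrative, under assumed data and helper names, and is not one of the estimators developed in the thesis.

```python
# Illustration only: KDE-based mode estimate and asymmetric squared loss
# (assumed data and helper names; not the thesis's estimators).
import numpy as np
from scipy.stats import gaussian_kde

def kde_mode(sample, grid_size=512):
    """Return the grid point where a Gaussian kernel density estimate peaks."""
    kde = gaussian_kde(sample)
    grid = np.linspace(sample.min(), sample.max(), grid_size)
    return grid[np.argmax(kde(grid))]

def asymmetric_squared_loss(residuals, tau):
    """Weight squared residuals by tau above zero and (1 - tau) below."""
    weights = np.where(residuals >= 0, tau, 1 - tau)
    return float(np.sum(weights * residuals**2))

rng = np.random.default_rng(3)
skewed = rng.lognormal(mean=0.0, sigma=0.8, size=5_000)  # mean, median, mode differ
print("mean:", skewed.mean(), "median:", np.median(skewed), "mode:", kde_mode(skewed))
print("asymmetric loss around 1.0:", asymmetric_squared_loss(skewed - 1.0, tau=0.5))
```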
379

Redovisnings- och revisionsbranschens påverkan av digitalisering / The impact of digitization in the accounting and auditing industry

Halvars, Viktoria, Svantorp, Petra January 2016
Tidigare forskning har visat att den teknologiska utvecklingen har påverkat många branscher. Vi har valt att fokusera på en särskild bransch och därmed är syftet med denna studie att förklara och förstå hur redovisnings- och revisionsbranschens har påverkats av digitaliserings framfart. Studien bygger vidare på tre forskningsfrågor, där första undersöker hur redovisnings- och revisionsbranschen utvecklats och förändrats under 2000-talet, den andra undersöker viktiga faktorer att beakta vid implementeringen av digitalisering och den tredje undersöker vilken förändring redovisningskonsulter och revisorer står inför. Studien bygger på empiri insamlat av redovisningskonsulter och revisorer och besvarar de tre forskningsfrågor med utgångspunkt i den teoretiska referensramen. För att uppnå syftet med studien har en kvalitativ studie valts där 11 stycken semistrukturerade intervjuer har genomförts med både redovisningskonsulter och revisorer. För att få en djupare förståelse kring ämnet diskuteras relevanta begrepp och teorier i en teoretisk referensram. Analysen bygger sedan på teori och på citat från informanterna. Utifrån informanternas uppfattningar är vår slutsats att redovisnings- och revisionsbranschen påverkats av digitaliseringens framfart. Framförallt genom förändrade arbetsuppgifter och att tillgängligheten och mobiliteten som digitaliserade arbetsmetoder för med sig, givit informanterna mer frihet i arbetet. Vi har till skillnad från tidigare forskning inom ämnet märkt av att revisionsbranschen ligger något efter redovisningsbranschen i dess arbete med att implementera digitaliserade arbetsmetoder. / Previous research has shown that technological development has affected many industries. We have chosen to focus on one particular industry, and the main purpose of this study is to explain and understand how the accounting and auditing industry has been affected by digitization. The study consists of three research questions: the first explores how the accounting and auditing profession has changed during the 2000s, the second examines the key factors to consider in the implementation of digitization, and the third examines the change that accounting consultants and auditors are facing. The study is based on empirical data collected from accounting consultants and auditors, and the three research questions are answered on the basis of the theoretical framework. To answer the research questions and fulfil the main purpose, the study takes a qualitative approach. To gain a deeper understanding of the topic, we have collected relevant theories in a theoretical framework and conducted eleven semi-structured interviews with both accounting consultants and auditors. The analysis is based on the theoretical framework and the empirical data. Based on the informants' perceptions, our conclusion is that the accounting and auditing industry has been affected by digitization in many ways. Unlike previous research, we have noticed that the auditing industry lags somewhat behind the accounting industry when it comes to digitization of daily work activities.
380

(Mis)trusting health research synthesis studies : exploring transformations of 'evidence'

Petrova, Mila January 2014
This thesis explores the transformations of evidence in health research synthesis studies – studies that bring together evidence from a number of research reports on the same/similar topic. It argues that health research synthesis is a broad and intriguing field in a state of pre-formation, in spite of the fact that it may appear well established if equated with its exemplar method – the systematic review inclusive of meta-analysis. Transformations of evidence are processes by which pieces of evidence are modified from what they are in the primary study report into what is needed in the synthesis study while, supposedly, having their integrity fully preserved. Such processes have received no focused attention in the literature. Yet they are key to the validity and reliability of synthesis studies. This work begins to describe them and explore their frequency, scope and drivers. A ‘meta-scientific’ perspective is taken, where ‘meta-scientific’ is understood to include primarily ideas from the philosophy of science and methodological texts in health research, and, to a lesser extent, social studies of science and psychology of science thinking. A range of meta-scientific ideas on evidence and factors that shape it guide the analysis of processes of “data extraction” and “coding”, during which much evidence is transformed. The core of the analysis involves the application of an extensive Analysis Framework to 17 highly heterogeneous research papers on cancer. Five non-standard ‘injunctions’ complement the Analysis Framework – for comprehensiveness, extensive multiple coding, extreme transparency, combination of critical appraisal and critique, and for first coding as close as possible to the original and then extending towards larger transformations. Findings suggest even lower credibility of the current overall model of health research synthesis than initially expected. Implications are discussed and a radical vision for the future proposed.
