151

Study of the relationship between profit rates and economic concentration in a sample of Canadian industry

Smith, Milo Alastair January 1967 (has links)
The purpose of this thesis is to test the hypothesis, derived from neo-classical microeconomic theory, that, other things being equal, the more concentrated an industry becomes, the more likely it is that firms in that industry can pursue monopolistic practices and thereby earn greater profits than would be possible if there were more firms in that industry. The method of study is the application of regression and correlation analysis to a cross-sectional sample of Canadian industry. The results lead to the conclusion that concentration and profits are positively correlated, thus supporting the hypothesis. However, concentration explains only about 10 per cent of the variation in industry profit rates in the cross-section. / Business, Sauder School of / Graduate
152

"I apologise for my poor blogging": Searching for Apologies in the Birmingham Blog Corpus

Lutzky, Ursula, Kehoe, Andrew 15 February 2017 (has links) (PDF)
This study addresses a familiar challenge in corpus pragmatic research: the search for functional phenomena in large electronic corpora. Speech acts are one area of research that falls into this functional domain and the question of how to identify them in corpora has occupied researchers over the past 20 years. This study focuses on apologies as a speech act that is characterised by a standard set of routine expressions, making it easier to search for with corpus linguistic tools. Nevertheless, even for a comparatively formulaic speech act, such as apologies, the polysemous nature of forms (cf. e.g. I am sorry vs. a sorry state) impacts the precision of the search output so that previous studies of smaller data samples had to resort to manual microanalysis. In this study, we introduce an innovative methodological approach that demonstrates how the combination of different types of collocational analysis can facilitate the study of speech acts in larger corpora. By first establishing a collocational profile for each of the Illocutionary Force Indicating Devices associated with apologies and then scrutinising their shared and unique collocates, unwanted hits can be discarded and the amount of manual intervention reduced. Thus, this article introduces new possibilities in the field of corpus-based speech act analysis and encourages the study of pragmatic phenomena in large corpora.
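The collocate-filtering idea described in the abstract can be sketched in a few lines. The toy corpus lines, the IFID subset, and the window size below are illustrative assumptions, not the authors' actual data or code:

```python
from collections import Counter

# Toy corpus lines standing in for Birmingham Blog Corpus hits (illustrative).
lines = [
    "i am sorry for the late reply to your comment",
    "the garden was in a sorry state after the storm",
    "i apologise for my poor blogging this month",
    "sorry for the mess in my last post",
]

IFIDS = {"sorry", "apologise"}  # Illocutionary Force Indicating Devices (subset)
WINDOW = 3                      # collocate window on either side of the IFID

def collocates(tokens, i, window=WINDOW):
    """Return the tokens within `window` positions of index i, excluding i."""
    lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
    return [t for j, t in enumerate(tokens[lo:hi], start=lo) if j != i]

# 1. Build a collocational profile for each IFID across all hits.
profiles = {ifid: Counter() for ifid in IFIDS}
hits = []
for line in lines:
    toks = line.split()
    for i, t in enumerate(toks):
        if t in IFIDS:
            profiles[t].update(collocates(toks, i))
            hits.append((t, toks, i))

# 2. Keep only hits whose context contains collocates shared across the IFIDs
#    -- a rough proxy for apologetic (rather than polysemous) use.
shared = set(profiles["sorry"]) & set(profiles["apologise"])
apologies = [" ".join(toks) for ifid, toks, i in hits
             if shared & set(collocates(toks, i))]
```

On this toy data the filter keeps the three genuine apologies and discards the polysemous "a sorry state" hit without manual inspection, which is the kind of precision gain the article argues for at corpus scale.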
153

Low temperature tolerance for Artemisia tridentata seedlings over an elevation gradient

Redar, Sean Patrick 01 January 2000 (has links)
No description available.
154

Leveraging big data resources and data integration in biology: applying computational systems analyses and machine learning to gain insights into the biology of cancers

Sinkala, Musalula 24 February 2021 (has links)
Recently, many "molecular profiling" projects have yielded vast amounts of genetic, epigenetic, transcription, protein expression, metabolic and drug response data for cancerous tumours, healthy tissues, and cell lines. We aim to facilitate a multi-scale understanding of these high-dimensional biological data and the complexity of the relationships between the different data types taken from human tumours. Further, we intend to identify molecular disease subtypes of various cancers, uncover the subtype-specific drug targets and identify sets of therapeutic molecules that could potentially be used to inhibit these targets. We collected data from over 20 publicly available resources. We then leverage integrative computational systems analyses, network analyses and machine learning, to gain insights into the pathophysiology of pancreatic cancer and 32 other human cancer types. Here, we uncover aberrations in multiple cell signalling and metabolic pathways that implicate regulatory kinases and the Warburg effect as the likely drivers of the distinct molecular signatures of three established pancreatic cancer subtypes. Then, we apply an integrative clustering method to four different types of molecular data to reveal that pancreatic tumours can be segregated into two distinct subtypes. We define sets of proteins, mRNAs, miRNAs and DNA methylation patterns that could serve as biomarkers to accurately differentiate between the two pancreatic cancer subtypes. Then we confirm the biological relevance of the identified biomarkers by showing that these can be used together with pattern-recognition algorithms to infer the drug sensitivity of pancreatic cancer cell lines accurately. Further, we evaluate the alterations of metabolic pathway genes across 32 human cancers. We find that while alterations of metabolic genes are pervasive across all human cancers, the extent of these gene alterations varies between them. 
Based on these gene alterations, we define two distinct cancer supertypes that tend to be associated with different clinical outcomes, and we show that these supertypes are likely to respond differently to anticancer drugs. Overall, we show that the time has arrived when we can leverage available data resources to potentially elicit more precise and personalised cancer therapies that would yield better clinical outcomes at a much lower cost than is currently achieved.
155

Electronic Evidence Locker: An Ontology for Electronic Evidence

Smith, Daniel 01 December 2021 (has links)
With the rapid growth of crime data, overwhelming amounts of electronic evidence must be stored and shared with the relevant agencies. Without addressing this challenge, the sharing of crime data and electronic evidence will remain highly inefficient, and the resource requirements for the task will continue to increase. Relational database solutions face size limitations when storing large amounts of crime data in which each instance has unique attributes and an unstructured nature. In this thesis, the Electronic Evidence Locker (EEL) was proposed and developed to address these problems. The EEL was built using a NoSQL database and a C# website for querying the stored data. Baseline results were collected to measure the growth of required machine resources (in memory and time) using various test cases and larger datasets. The results showed that search time is affected more by the search direction in the data than by the addition of query search conditions.
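The schema-flexibility argument above can be sketched with document-style records: each evidence item carries its own attributes, which a fixed relational schema handles poorly but a document store queries naturally. The field names and records below are hypothetical illustrations, not the EEL's actual schema or API:

```python
# Schema-less evidence documents: each record may carry unique attributes
# (sender, camera, platform, ...) without a fixed table definition.
evidence = [
    {"case": "2021-044", "type": "email", "sender": "a@example.com"},
    {"case": "2021-044", "type": "image", "camera": "DSC-RX100"},
    {"case": "2021-051", "type": "chat_log", "platform": "IRC"},
]

def query(docs, **conditions):
    """Return documents matching every given field=value condition."""
    return [d for d in docs
            if all(d.get(k) == v for k, v in conditions.items())]

hits = query(evidence, case="2021-044")  # both documents for that case
```

A production NoSQL store evaluates the same kind of conjunctive field filters server-side over indexed collections; the sketch only shows why heterogeneous attributes fit the document model.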
156

Možnosti využití konceptu Big Data v pojišťovnictví / Possibilities of Using the Big Data Concept in the Insurance Industry

Stodolová, Jana January 2019 (has links)
This diploma thesis deals with the phenomenon of recent years called Big Data. Big Data are unstructured data of large volume which cannot be managed and processed by commonly used software tools. The analytical part deals with the concept of Big Data and analyses the possibilities of using this concept in the insurance sector. The practical part presents specific methods and approaches for the use of big data analysis, specifically in increasing the competitiveness of the insurance company and in detecting insurance fraud. Most space is devoted to data mining methods in modelling the task of detecting insurance fraud. This diploma thesis builds on and extends the author's bachelor thesis titled "Modern technology of data analysis and its use in detection of insurance frauds".
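The fraud-detection task mentioned above can be illustrated with the simplest possible scoring rule: flag claims whose amount deviates strongly from the portfolio mean. The claim amounts and the 2-standard-deviation threshold below are illustrative assumptions; the thesis itself applies proper data mining models over many features:

```python
import statistics

# Toy claim amounts (illustrative); one claim is conspicuously large.
claims = [120.0, 95.0, 130.0, 110.0, 105.0, 985.0, 115.0, 100.0]

mean = statistics.fmean(claims)
sd = statistics.stdev(claims)

# Flag claims more than 2 standard deviations above the mean as candidates
# for manual fraud review -- a crude stand-in for trained detection models.
flagged = [c for c in claims if (c - mean) / sd > 2]
```

Even this crude rule isolates the outlying claim; supervised data-mining methods replace the fixed threshold with a model learned from labelled fraud cases.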
157

How to capture that business value everyone talks about? : An exploratory case study on business value in agile big data analytics organizations

Svenningsson, Philip, Drubba, Maximilian January 2020 (has links)
Background: Big data analytics has been referred to as a hype over the past decade, making many organizations adopt data-driven processes to stay competitive in their industries. Many of the organizations adopting big data analytics use agile methodologies where the most important outcome is to maximize business value. Multiple scholars argue that big data analytics leads to increased business value; however, there is a theoretical gap within the literature about how agile organizations can capture this business value in a practically relevant way. Purpose: Building on a combined definition that capturing business value means being able to define, communicate and measure it, the purpose of this thesis is to explore how agile organizations capture business value from big data analytics, as well as to find out what aspects of value are relevant when defining it. Method: This study follows an abductive research approach with a foundation in theory, using a qualitative research design. A single case study of Nike Inc. was conducted to generate the primary data for this thesis: nine participants from different domains within the organization were interviewed, and the results were analysed with a thematic content analysis. Findings: The findings indicate that, in order for agile organizations to capture business value generated from big data analytics, they need to (1) define the value through a synthesized value map, (2) establish a common language with the help of a business translator and agile methods, and (3) measure the business value before, during and after development by using individually identified KPIs derived from the business value definition.
158

Det Balanserade Styrkortet i jämförelseperspektiv : En kvalitativ studie som undersöker hur BSC används av Big Four jämfört med mindre revisionsbyråer. / The Balanced Scorecard in a comparative perspective

Fadel, Mohammed, Salamah, David January 2020 (has links)
Purpose - The purpose of this study is to explore how Big-4 firms use the balanced scorecard to manage their operations in comparison to smaller competitors in the same field. Method - The study is of a qualitative nature and is based on an inductive approach. The empirical data has been collected through documents and a literature content analysis. Conclusion - The results show that the use of the BSC differs not only between Big-4 and non-Big-4 bureaus but also within the groups. In addition, there is no link between the size of the bureau, or belonging to the Big 4 or not, on the one hand, and the use of the BSC on the other. Some bureaus within the Big 4 use the scorecard more extensively than others within the block, while some smaller bureaus focus on more perspectives of the scorecard than some Big-4 bureaus do. This leads to the conclusion that the extent to which the scorecard is used differs drastically between all bureaus. / Syfte - syftet med denna uppsats är att utforska hur Big-4 byråerna använder det balanserade styrkortet för att styra verksamheten i jämförelse med mindre konkurrenter i samma fält. Metod - Studien är av kvalitativ karaktär och har utgått från en induktiv ansats. Empirin har samlats in genom dokument samt litteratur innehållsanalys. Slutsats - Resultaten visar att användningen av BSC skiljer sig inte bara mellan Big 4 och Non-big 4 utan även inom grupperna, samtidigt som det inte förekommer någon koppling mellan byråns storlek, att tillhöra Big 4 eller inte och användningen av BSC. Vissa byråer inom Big-4 använder kortet mer omfattande än dem andra inom blocket. Samtidigt som vissa Non-big 4 byråer tar hänsyn till fler perspektiv av kortet, jämfört med andra byråer inom Big-4. Detta leder till slutsatsen att användning av kortet skiljer sig mellan samtliga ingående byråerna.
159

Exploration of 5G Traffic Models using Machine Learning / Analys av trafikmodeller i 5G-nätverk med maskininlärning

Gosch, Aron January 2020 (has links)
The Internet is a major communication tool that handles massive information exchanges, sees rapidly increasing usage, and offers an increasingly wide variety of services. In addition to these trends, the services themselves have highly varying quality-of-service (QoS) requirements, and network providers must take into account the frequent releases of new network standards like 5G. This has resulted in a significant need for new theoretical models that can capture different network traffic characteristics. Such models are important both for understanding the existing traffic in networks and for generating better synthetic traffic workloads that can be used to evaluate future generations of network solutions, using realistic workload patterns, under a broad range of assumptions, and based on how the popularity of existing and future application classes may change over time. To better meet these changes, new flexible methods are required. In this thesis, a new framework aimed at analyzing large quantities of traffic data is developed and used to discover key characteristics of application behavior in IP network traffic. Traffic models are created by breaking down IP log traffic data into different abstraction layers with descriptive values. The aggregated statistics are then clustered using the K-means algorithm, which results in groups with closely related behaviors. Lastly, the model is evaluated with cluster analysis and three different machine learning algorithms that classify the network behavior of traffic flows. From the analysis framework, a set of observed traffic models with distinct behaviors is derived that may be used as building blocks for traffic simulations in the future. Based on the framework, we have seen that machine learning achieves high performance on the classification of network traffic, with a Multilayer Perceptron obtaining the best results.
Furthermore, the study has produced a set of ten traffic models that have been demonstrated to be able to reconstruct traffic for various network entities. / Due to COVID-19 the presentation was performed over Zoom.
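The "abstraction layers with descriptive values" step above can be sketched as per-flow feature aggregation: raw IP log records are grouped by flow and reduced to descriptive statistics, which a K-means step would then cluster. The record format and field names below are illustrative assumptions, not the thesis's actual log schema:

```python
from collections import defaultdict
import statistics

# Toy IP log records: (flow_id, packet_size_bytes). Format is illustrative.
packets = [
    ("flowA", 1500), ("flowA", 1480), ("flowA", 1500),  # bulk-transfer-like
    ("flowB", 80), ("flowB", 120), ("flowB", 95),       # interactive-like
]

# Group packet records by flow.
flows = defaultdict(list)
for fid, size in packets:
    flows[fid].append(size)

# Reduce each flow to descriptive values -- the per-flow abstraction layer
# that a clustering step (e.g. K-means) would consume as its feature vector.
features = {fid: {"packets": len(s),
                  "mean_size": statistics.fmean(s),
                  "total_bytes": sum(s)}
            for fid, s in flows.items()}
```

On these two toy flows the aggregated vectors already separate the bulk-transfer pattern from the interactive pattern, which is exactly the structure clustering is meant to recover at scale.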
160

Architectural model of information for a Big Data platform for the tourism sector

Mérida, César, Ríos, Richer, Kobayashi, Alfred, Raymundo, Carlos 01 January 2017 (has links)
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publisher. / Abstract. Large technology vendors put their efforts into creating new technologies and platforms to solve the problems faced by the main industry sectors. In recent years, tourism has shown a growing trend, although it lacks integrated technologies for exploiting the large volumes of information it generates. Through an analysis of IBM and Oracle tools, an architecture has been proposed that is capable of considering the sector's own conditions and particularities for real-time decision making. The proposed platform aims to make use of the business processes involved in the tourism sector and to draw on various information sources specialised in providing information to tourists and businesses. The architecture consists of three layers. The first describes the extraction and loading of data from various structured and unstructured information sources and business systems. The second layer, data processing, performs data cleaning and analysis using tools such as MapReduce and stream-computing technologies for real-time processing. The last layer, Delivery and Visualisation, identifies the relevant information, which is presented through various interfaces such as the web or mobile platforms. This proposal seeks to obtain real-time results on the needs of the tourism sector.
