81 |
Developing city-level sustainability indicators in the MENA region with the cases of Benghazi and Amman
El-Hegazi, Serag January 2021 (has links)
The development of a methodological framework for local and institutional
sustainability assessment can be helpful for planners and urban governments.
The aim of this research is to develop an approach to local and institutional
sustainability assessment (ALISA). It is designed to assist in the clarification,
formulation, preparation, selection, and ranking of key indicators to facilitate the
assessment of city sustainability at the local and institutional level in Middle
Eastern and North African (MENA) cities.
The ALISA methodological framework is developed by combining documentary
evidence with data analysed in the two case studies of Benghazi and Amman. The
data also include focus-group discussions, semi-structured interviews, and
questionnaires, reflecting the approach required to develop a combined
framework that supports the development of sustainability indicators.
The initial list of proposed sustainability indicators for Benghazi contains 37
indicators. This list was developed through a structured procedure supported by
consultants and specialists in sustainability and urbanization from the
University of Benghazi, in the form of workshops, together with a search of the
literature on sustainable development. Similarly, with support from consultants
and specialists in sustainability and urbanization at the Applied Science
University, a list of 36 indicators was developed for Amman.
Both lists were given to the local communities in Benghazi and Amman to be
ranked by priority, in order to identify two final lists of sustainability
indicators. The results indicated that economic indicators were ranked highest
in Benghazi and social indicators in Amman.
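The priority-ranking step lends itself to a simple aggregation of the community responses. The sketch below is illustrative only and is not taken from the thesis: it assumes each respondent assigns a rank to every indicator (1 = highest priority) and orders indicators by mean rank; the indicator names are invented.

```python
# Illustrative only: aggregate community priority rankings by mean rank.
# Indicator names and responses are hypothetical, not taken from the thesis.
from statistics import mean

responses = [
    {"employment rate": 1, "access to clean water": 2, "green space per capita": 3},
    {"employment rate": 2, "access to clean water": 1, "green space per capita": 3},
    {"employment rate": 1, "access to clean water": 3, "green space per capita": 2},
]

def rank_indicators(responses):
    """Return indicators ordered by mean rank (lower mean = higher priority)."""
    indicators = responses[0].keys()
    mean_ranks = {ind: mean(r[ind] for r in responses) for ind in indicators}
    return sorted(mean_ranks.items(), key=lambda kv: kv[1])

for indicator, score in rank_indicators(responses):
    print(f"{indicator}: mean rank {score:.2f}")
```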
|
82 |
Feature Extraction using Dimensionality Reduction Techniques: Capturing the Human Perspective
Coleman, Ashley B. January 2015 (has links)
No description available.
|
83 |
Specification, Configuration and Execution of Data-intensive Scientific Applications
Kumar, Vijay Shiv 14 December 2010 (has links)
No description available.
|
84 |
Ecological footprint as an indicator of national and city level sustainability. A case study of Bangladesh with a focus on Narayanganj
Hoque, Muhammad N. January 2023 (has links)
This thesis addresses urban sustainability from an ecological footprint viewpoint to make cities in Bangladesh inclusive, safe and livable under SDG-11 of the 2030 Agenda for Sustainable Development. Previous empirical studies on Bangladesh do not focus on the Ecological Footprint (EF), and Narayanganj City's (NC) EF has not yet been explored. Previous studies have also focused on only specific dimensions of sustainability. This study aims to fill these gaps by linking existing facilities and required initiatives across all dimensions of sustainability, i.e., social, economic, environmental and institutional. Both secondary and primary data were used. Secondary data were collected from the World Bank and the Global Footprint Network. Primary data were collected through a survey of 500 households in NC and twenty in-depth semi-structured interviews with experienced urban experts from academia, NGOs, government, and voluntary and independent urban research organizations in Bangladesh. The STIRPAT model was used to analyze the secondary data, and the interview data were analyzed following theoretical thematic analysis. The empirical findings suggest that Bangladesh can still afford more urbanization; however, the high EF of NC is a concern for this affordability. The interview analysis indicates that the government can promote current opportunities for relatively quick gains in urban sustainability without imposing any extra burden on citizens. A sustainable urban development framework (SUDF) is proposed and, based on this SUDF, a set of policy recommendations. Further research options and limitations of this study are also identified.
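The STIRPAT model mentioned above relates an environmental impact I (here the ecological footprint) to population P, affluence A and technology T via I = aP^b A^c T^d e, which is usually estimated as an ordinary least squares regression in logarithms. The following is a minimal sketch of that estimation on synthetic data; the variable ranges and coefficients are assumptions, not the thesis's results.

```python
# Minimal STIRPAT sketch: ln(I) = a + b*ln(P) + c*ln(A) + d*ln(T) + error.
# Data are synthetic and for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 40                                   # e.g. 40 annual observations
P = rng.uniform(1e8, 1.7e8, n)           # population
A = rng.uniform(500, 2500, n)            # affluence (GDP per capita)
T = rng.uniform(0.1, 0.6, n)             # technology proxy (e.g. energy intensity)
I = 0.002 * P**0.6 * A**0.4 * T**0.3 * rng.lognormal(0, 0.05, n)  # footprint

# Design matrix in logs, with an intercept column.
X = np.column_stack([np.ones(n), np.log(P), np.log(A), np.log(T)])
y = np.log(I)
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)

for name, b in zip(["intercept", "ln(P)", "ln(A)", "ln(T)"], coefs):
    print(f"{name}: {b:.3f}")            # estimated elasticities of the footprint
```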
|
85 |
Implementation of decision trees for embedded systems
Badr, Bashar January 2014 (has links)
This research develops incremental learning decision tree solutions suitable for real-time embedded systems, by virtue of having both a defined memory requirement and an upper bound on the computation time per training vector. In addition, the work provides embedded systems with the capability of rapid processing and training on streamed data problems, and adopts electronic hardware solutions to improve the performance of the developed algorithms. Two novel decision tree approaches, namely the Multi-Dimensional Frequency Table (MDFT) and the Hashed Frequency Table Decision Tree (HFTDT), represent the core of this research work. Both methods successfully incorporate a frequency table technique to produce a complete decision tree. The MDFT and HFTDT learning methods were designed with the ability to generate application-specific code for both training and classification purposes according to the requirements of the targeted application. The MDFT allows the memory architecture to be specified statically before learning takes place, within a deterministic execution time. The HFTDT method is a development of the MDFT in which a reduction in the memory requirements is achieved, again within a deterministic execution time. The HFTDT achieved low memory usage compared to existing decision tree methods, and hardware acceleration improved performance by up to 10 times in terms of execution time.
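The abstract's central idea, a frequency table of bounded size that is updated in constant time per training vector, can be pictured with a small sketch. The code below is not the MDFT or HFTDT implementation; it only illustrates, under assumed bucket and bin counts, how a hashed frequency table keyed on discretized feature vectors gives a fixed memory footprint and O(1) incremental updates.

```python
# Illustration only of the bounded-memory idea behind a hashed frequency table:
# class counts are kept in a fixed number of buckets keyed by a hash of the
# discretized feature vector, so each training vector costs O(1) time and the
# memory footprint is fixed up front. This is not the thesis's MDFT/HFTDT code.
import numpy as np

class HashedFrequencyTable:
    def __init__(self, n_buckets=1024, n_classes=2, n_bins=8):
        self.counts = np.zeros((n_buckets, n_classes), dtype=np.int64)
        self.n_buckets, self.n_bins = n_buckets, n_bins

    def _bucket(self, x):
        # Discretize each feature into n_bins levels, then hash the tuple.
        bins = tuple(int(v * self.n_bins) for v in x)   # assumes features in [0, 1)
        return hash(bins) % self.n_buckets

    def train_one(self, x, label):
        """Incremental update: one counter increment per training vector."""
        self.counts[self._bucket(x), label] += 1

    def classify(self, x):
        """Majority class of the bucket the vector falls into."""
        return int(np.argmax(self.counts[self._bucket(x)]))

# Usage on a toy streamed problem: label is 1 when the first feature exceeds 0.5.
rng = np.random.default_rng(1)
table = HashedFrequencyTable()
for _ in range(5000):
    x = rng.random(3)
    table.train_one(x, int(x[0] > 0.5))
print(table.classify(np.array([0.9, 0.2, 0.4])))  # likely prints 1
```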
|
86 |
A redação de vestibular sob uma perspectiva multidimensional: uma abordagem da linguística de corpus / The college entrance exam essay from a multidimensional perspective: a Corpus Linguistics approach
Barreto, Juliana Pereira Souto 30 June 2016 (has links)
The research reported here analyzes the production and the evaluation of written text in the college entrance essays produced by undergraduate applicants. More specifically, this study verifies how the composition tests written by applicants during the admission process to the Federal University of Rio Grande do Norte (UFRN) relate to the variation dimensions of Brazilian Portuguese presented in Berber Sardinha, Kauffmann and Acunzo (2014). The research uses the theoretical framework of Corpus Linguistics and the methodological approach of Multidimensional Analysis. The study corpus is composed of one hundred essays written by applicants for admission to undergraduate courses, tagged with the Palavras parser and post-processed with a script that calculates the score of each text on each of the six variation dimensions of Brazilian Portuguese. First, the study examines how the applicants' texts relate to the six Brazilian Portuguese variation dimensions. Then, the variation is examined in relation to the grades awarded to these essays by examiners, in order to determine whether and which correction criteria were met, based on the scores of the Brazilian Portuguese multidimensional analysis. The outcomes are thus likely to provide important contributions to the field of textual production in Portuguese in Brazil, given that a more accurate understanding of language in use is needed for the teaching and learning of argumentative text production by applicants for admission to undergraduate courses in Higher Education.
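In Biber-style multidimensional analysis, a text's score on a dimension is essentially a signed sum of standardized feature frequencies. The sketch below illustrates only that calculation; the feature names, rates and loadings are invented and do not correspond to the Brazilian Portuguese dimensions used in the thesis.

```python
# Illustrative dimension-score calculation in the spirit of multidimensional
# analysis: z-score each feature's rate across the corpus, then sum the
# z-scores of positively loading features and subtract the negative ones.
# Feature names, rates and loadings below are hypothetical.
import statistics

# Feature rates per 1,000 words for three hypothetical essays.
corpus = {
    "essay_01": {"first_person_pronouns": 12.0, "nominalizations": 30.5, "passives": 8.1},
    "essay_02": {"first_person_pronouns": 4.2, "nominalizations": 41.0, "passives": 11.3},
    "essay_03": {"first_person_pronouns": 9.7, "nominalizations": 35.2, "passives": 9.9},
}
# Sign of the (hypothetical) loading of each feature on one dimension.
loadings = {"first_person_pronouns": -1, "nominalizations": +1, "passives": +1}

def dimension_scores(corpus, loadings):
    features = loadings.keys()
    stats = {f: (statistics.mean(t[f] for t in corpus.values()),
                 statistics.stdev(t[f] for t in corpus.values())) for f in features}
    scores = {}
    for name, text in corpus.items():
        z = {f: (text[f] - stats[f][0]) / stats[f][1] for f in features}
        scores[name] = sum(loadings[f] * z[f] for f in features)
    return scores

print(dimension_scores(corpus, loadings))
```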
|
87 |
Passage à l'échelle pour les contraintes d'ordonnancement multi-ressources / Scalable multi-dimensional resources scheduling constraints
Letort, Arnaud 28 October 2013 (has links)
Constraint programming is an approach often used to solve combinatorial problems in different application areas. In this thesis we focus on cumulative scheduling problems. A scheduling problem is to determine the starting dates of a set of tasks while respecting capacity and precedence constraints. Capacity constraints cover both conventional cumulative constraints, where the sum of the heights of the tasks intersecting a given time point is limited, and colored cumulative constraints, where the number of distinct colors assigned to the tasks intersecting a given time point is limited. A newly identified challenge for constraint programming is to deal with large problems, usually solved by dedicated algorithms and metaheuristics. For example, the increasing use of virtualized datacenters leads to multi-dimensional placement problems involving thousands of jobs. Scalability is achieved by using a synchronized sweep algorithm over the different cumulative and precedence constraints, which speeds up convergence to the fixpoint. In addition, from these filtering algorithms we derive greedy procedures that can be called at each node of the search tree to find a solution more quickly. This approach makes it possible to handle scheduling problems involving more than one million jobs and 64 cumulative resources. These algorithms have been implemented in the Choco and SICStus solvers and evaluated on a variety of placement and scheduling problems.
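The cumulative constraint at the core of this work bounds, at every time point, the total height of the tasks overlapping that point, and sweep algorithms check or filter such profiles by scanning start and end events in order. The sketch below shows only a basic sweep feasibility check, not the synchronized sweep filtering developed in the thesis.

```python
# Minimal sweep check for a cumulative resource: sort start/end events, keep a
# running sum of heights, and verify the capacity is never exceeded. This is a
# plain feasibility check, not the thesis's synchronized filtering algorithm.
def cumulative_ok(tasks, capacity):
    """tasks: list of (start, duration, height); returns True if the profile fits."""
    events = []
    for start, duration, height in tasks:
        events.append((start, +height))             # task starts consuming resource
        events.append((start + duration, -height))  # task releases it
    # Process releases before starts at the same time point (half-open intervals).
    events.sort(key=lambda e: (e[0], e[1]))
    load = 0
    for _, delta in events:
        load += delta
        if load > capacity:
            return False
    return True

# Usage on toy instances with capacity 2:
print(cumulative_ok([(0, 3, 1), (1, 2, 1), (3, 4, 1)], capacity=2))  # True
print(cumulative_ok([(0, 3, 2), (1, 2, 1)], capacity=2))             # False (load 3 at t=1)
```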
|
88 |
企業建立平衡計分卡之研究 / Research of Balanced ScoreCard Using Case Study
王清弘, Wang, Ching-Hung Unknown Date (has links)
With today's advances in technology, competition among enterprises has become increasingly intense. An enterprise needs a set of performance-evaluation standards to know whether it is growing, whether it is generating profit, and whether it is competitive. In the current information age, however, most enterprises still evaluate performance mainly through traditional financial-accounting models. Yet the financial perspective is only the final measure of business results; the factors that drive those final financial outcomes are very important and need to be examined.
The Balanced Scorecard (BSC) proposed by Kaplan & Norton helps an enterprise realize its vision and meet shareholders' expectations. The BSC comprises four perspectives (financial, customer, internal process, and learning and growth) and emphasizes that final operating performance (profit) does not come only from financial-statement results but is also shaped by the cause-and-effect relationships arising indirectly from the other perspectives (customer, internal process, learning and growth).
On the other hand, an enterprise's information system (IS) can produce useful information to support executives' decision-making, help adjust the enterprise's strategic objectives and vision, and reflect whether its strategy has been soundly formulated. Traditional information systems, however, cannot provide such information promptly and effectively; a multi-dimensional data model should therefore be used to extract the key information required for decision-making.
In the information age, the questions are how to help an enterprise develop a BSC so that it can gain competitive advantage and operate sustainably in line with its strategic objectives and vision, and whether an information system built on a multi-dimensional model can, through analysis, support the strategic indicators designed into the enterprise's BSC.
Accordingly, this study examines a hat-manufacturing company in the manufacturing sector and explores whether the enterprise can establish a BSC system, how well its information system supports the BSC, where the difficulties lie in building a BSC from a conventional information system, and finally how a data structure (a multi-dimensional model) can be used to support the BSC.
The conclusions can be summarized as follows: the case company interviewed accepts the BSC concept; however, implementing the BSC depends on establishing cause-and-effect relationships across the perspectives, which in turn requires a process-oriented, integrated information system; the case company's traditional information system does not readily support the BSC; without a sound data model an enterprise cannot support the BSC; and the BSC needs to be supported by a multi-dimensional model.
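A multi-dimensional model stores measures in a fact table that can be rolled up along dimensions such as time, customer or process, which is what allows a scorecard indicator to be computed at whatever level is needed. The sketch below is a hypothetical illustration, not the case company's system: it aggregates a tiny invented fact table along chosen dimensions to produce a customer-perspective indicator.

```python
# Hypothetical multi-dimensional aggregation: roll up a small fact table along
# any combination of dimensions, the way an OLAP cube would serve a Balanced
# Scorecard indicator. Schema and figures are invented for illustration.
from collections import defaultdict

# Fact table: one row per order, with dimension attributes and a measure.
facts = [
    {"quarter": "2021Q1", "region": "North", "product": "cap",    "on_time": 1},
    {"quarter": "2021Q1", "region": "North", "product": "beanie", "on_time": 0},
    {"quarter": "2021Q1", "region": "South", "product": "cap",    "on_time": 1},
    {"quarter": "2021Q2", "region": "North", "product": "cap",    "on_time": 1},
]

def rollup(facts, dims, measure):
    """Group facts by the given dimensions and average the measure."""
    totals, counts = defaultdict(float), defaultdict(int)
    for row in facts:
        key = tuple(row[d] for d in dims)
        totals[key] += row[measure]
        counts[key] += 1
    return {key: totals[key] / counts[key] for key in totals}

# Customer-perspective indicator: on-time delivery rate per quarter.
print(rollup(facts, ["quarter"], "on_time"))
# Drill down by region as well.
print(rollup(facts, ["quarter", "region"], "on_time"))
```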
|
89 |
Directed Enzyme Evolution of Theta Class Glutathione Transferase : Studies of Recombinant Libraries and Enhancement of Activity toward the Anticancer Drug 1,3-bis(2-Chloroethyl)-1-nitrosourea
Larsson, Anna-Karin January 2003 (has links)
Glutathione transferases (GSTs) are detoxication enzymes involved in the cellular protection against a wide range of reactive substances. The role of GSTs is to catalyze the conjugation of glutathione with electrophilic compounds, which generally results in less toxic products.

The ability to catalyze the denitrosation of the anticancer drug 1,3-bis(2-chloroethyl)-1-nitrosourea (BCNU) was measured in twelve different GSTs. Only three of the enzymes showed any measurable activity with BCNU, of which human GST T1-1 was the most efficient. This is of special interest, since human GST T1-1 is a polymorphic protein and its expression in different patients may be crucial for the response to BCNU.

DNA shuffling was used to create a mutant library by recombination of cDNA coding for two different Theta-class GSTs. In total, 94 randomly picked mutants were characterized with respect to their catalytic activity with six different substrates, expression level and sequence. A clone with only one point mutation compared to wild-type rat GST T2-2 had a significantly different substrate-activity pattern. A high expressing mutant of human GST T1-1 was also identified, which is important, since the yield of the wild-type GST T1-1 is generally low.

Characterization of the Theta library demonstrated divergence of GST variants both in structure and function. The properties of every mutant were treated as a point in a six-dimensional substrate-activity space. Groups of mutants were formed based on Euclidean distances and K-means cluster analyses. Both methods resulted in a set of five mutants with high alkyltransferase activities toward dichloromethane and 4-nitrophenethyl bromide (NPB).

The five selected mutants were used as parental genes in a new DNA shuffling. Addition of cDNA coding for mouse and rat GST T1-1 improved the genetic diversity of the library. The evolution of GST variants was directed towards increased alkyltransferase activity, including activity with the anticancer drug BCNU. NPB was used as a surrogate substrate in order to facilitate the screening process. A mutant from the second generation displayed a 65-fold increased catalytic activity with NPB as substrate compared to wild-type human GST T1-1. The BCNU activity of the same mutant had increased 175-fold, suggesting that NPB is a suitable model substrate for the anticancer drug. Further evolution produced a mutant in the fifth generation of the library with 110 times higher NPB activity than wild-type human GST T1-1.
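The grouping of mutants described above treats each mutant as a point in a six-dimensional substrate-activity space and clusters those points with K-means over Euclidean distances. The sketch below shows only that generic clustering step on synthetic activity profiles; the values and the choice of five clusters are assumptions, not the thesis's measurements.

```python
# Generic K-means grouping of mutants by their six-dimensional activity
# profiles. The activity values and the choice of five clusters are synthetic
# assumptions for illustration; they are not the thesis's measurements.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# 94 mutants x 6 substrates: hypothetical specific activities.
activities = rng.lognormal(mean=0.0, sigma=1.0, size=(94, 6))

# Standardize each substrate axis so Euclidean distance weights them equally.
X = (activities - activities.mean(axis=0)) / activities.std(axis=0)

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
for cluster in range(5):
    members = np.where(kmeans.labels_ == cluster)[0]
    print(f"cluster {cluster}: {len(members)} mutants")
```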
|