1361 |
Temporalised Description Logics for Monitoring Partially Observable Events. Lippmann, Marcel, 01 July 2014
Inevitably, it becomes more and more important to verify that the systems surrounding us have certain properties. This is indeed unavoidable for safety-critical systems such as power plants and intensive-care units. We refer to the term system in a broad sense: it may be man-made (e.g. a computer system) or natural (e.g. a patient in an intensive-care unit). Whereas in Model Checking it is assumed that one has complete knowledge about the functioning of the system, we consider an open-world scenario and assume that we can only observe the behaviour of the actual running system by sensors. Such an abstract sensor could sense e.g. the blood pressure of a patient or the air traffic observed by radar.
Then the observed data are preprocessed appropriately and stored in a fact base. Based on the data available in the fact base, situation-awareness tools are supposed to help the user to detect certain situations that require intervention by an expert. Such situations could be that the heart-rate of a patient is rather high while the blood pressure is low, or that a collision of two aeroplanes is about to happen.
Moreover, the information in the fact base can be used by monitors to verify that the system has certain properties. It is not realistic, however, to assume that the sensors always yield a complete description of the current state of the observed system. Thus, it makes sense to assume that information that is not present in the fact base is unknown rather than false. Moreover, very often one has some knowledge about the functioning of the system. This background knowledge can be used to draw conclusions about the possible future behaviour of the system. Employing description logics (DLs) is one way to deal with these requirements. In this thesis, we tackle the sketched problem in three different contexts: (i) runtime verification using a temporalised DL, (ii) temporalised query entailment, and (iii) verification in DL-based action formalisms.
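To illustrate the open-world reading of the fact base described above, here is a minimal sketch (hypothetical names; plain propositional facts rather than the temporalised DL used in the thesis): a monitor returns a three-valued verdict so that an observation missing from the fact base yields "unknown" rather than "false".

```python
from enum import Enum

class Verdict(Enum):
    TRUE = "true"
    FALSE = "false"
    UNKNOWN = "unknown"

def lookup(fact_base, atom):
    """Three-valued lookup: the fact base stores positive and negative
    observations; anything not observed is unknown (open world)."""
    if atom in fact_base.get("positive", set()):
        return Verdict.TRUE
    if atom in fact_base.get("negative", set()):
        return Verdict.FALSE
    return Verdict.UNKNOWN

def critical_situation(fact_base):
    """Monitor for the abstract's example: high heart rate combined with
    low blood pressure requires intervention by an expert."""
    hr = lookup(fact_base, "HighHeartRate")
    bp = lookup(fact_base, "LowBloodPressure")
    if hr is Verdict.TRUE and bp is Verdict.TRUE:
        return Verdict.TRUE          # situation detected
    if hr is Verdict.FALSE or bp is Verdict.FALSE:
        return Verdict.FALSE         # situation ruled out
    return Verdict.UNKNOWN           # sensors gave no complete picture

# The blood-pressure sensor has not reported anything at this time point,
# so the verdict is UNKNOWN, not FALSE.
observations = {"positive": {"HighHeartRate"}, "negative": set()}
print(critical_situation(observations))   # Verdict.UNKNOWN
```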
|
1362 |
Complexity Management to design and produce customer-specific hydraulic controls for mobile applications. Krüßmann, Martin; Tischler, Karin, January 2016
Complexity management is the key to success for mobile machinery, where the variety of customers and applications requires individual solutions. This paper presents the way Bosch Rexroth supports each OEM with hydraulic controls, from specification and conception through application and production. It gives examples of how platforms and processes are optimized according to customer needs. The demand for flexible, short-term deliveries is met by an agile production using the technologies of Industry 4.0.
|
1363 |
Organisatorische Unterstützung der Produktentwicklung mit SysML-Modellen / Organisational support of product development with SysML models. Paetzold, Kristin, January 2016
From the introduction:
"In the development of technical products, developers are confronted with increasing product complexity. This complexity has various causes, such as a higher number of requirements, a growing number of different domains involved, or shorter development times. In addition, the entire life cycle of the product, up to its disposal, must already be considered during development. ..."
|
1364 |
Mnoho tváří Černobylu: temporalita, toxicita, komplexita v životech environmentálních migrantů / Many faces of Chernobyl: temporality, toxicity, complexity in the lives of environmental migrants. Podlesná, Valentina, January 2020
The diploma thesis focuses on the topic of environmental migration caused by the Chernobyl nuclear accident in 1986. The main questions of the thesis are: Is the Chernobyl accident associated at any social level with environmental migration to the Czech Republic? To what time is the event socially framed? In what areas, and how, does the accident currently affect the lives of the environmental migrants? The diploma thesis has two research parts. The first part is a media analysis, which shows that the Chernobyl accident is mostly socially perceived as a matter of the past and is not associated with environmental migration to the Czech Republic. The second research part is based on anthropological research using semi-structured interviews with respondents. The result of this research is the division of radiation toxicity into five dimensions (political, biological, potential, reproductive, migration), in which the consequences of the Chernobyl accident continue to manifest themselves in the lives of environmental migrants. The dimensions of toxicity take place in different time frames, so the work also focuses on the temporality of modern accidents. The social perception of the Chernobyl accident as a finished event does not mean that it is no longer present in human lives. The aim of the...
|
1365 |
A comparative study of the complexity of trust tax legislation in selected Southern African Development Community countries. Graskie, Michael Andrew, January 2019
For many years tax jurisdictions have encountered challenges with the concept of the taxation of trusts. The trust concept itself conjures up thoughts of complexity. The development of the trust as a structure and the subsequent colonial influence on the development of trusts in Africa, resulted in different applications of the common law trust principle in tax legislation. This study explores the complexity of the trust tax legislation by way of a comparison of the legislation in selected Southern African Development Community (SADC) countries.
The study applies a qualitative approach to compare the trust tax legislation of the selected SADC tax jurisdictions in an effort to assess the complexity involved in the taxation of trusts. The research methodology employed is doctrinal (black letter law) and includes an analysis of the legislation of the selected jurisdictions.
The study reveals substantial similarities in the terminology used in the taxation of trusts in the selected jurisdictions. In comparing the legislation of jurisdictions that make provision for the flow-through principle with that of jurisdictions that do not, a larger element of complexity becomes apparent, along with wider differences in legislative design. Additionally, jurisdictions that make provision for the flow-through principle also introduce anti-avoidance legislation to curb the misuse of income attributed to the beneficiaries of trusts. This anti-avoidance legislation creates additional complexity in the legislation. Dissertation (MCom Taxation), University of Pretoria, 2019.
|
1366 |
Customer-centric Service Management: Conceptualization and Evaluation of Consumer-induced Service Composition. Sachse, Stephan, 05 November 2018
The customer takes center stage in tomorrow's economy. In the wake of customer-centric service industries, traditional intermediaries are becoming increasingly obsolete and are substituted by self-services. Additionally, because of ongoing digitalization, e-services provide various alternatives to the customer. Thus, self-directed customers must keep an overview of and manage an increasingly complex network of services and providers themselves. Technology is a central factor in this context: on the one hand, it is the leading cause of the current challenges; on the other hand, it is the key to solving them.
This work proposes the concept of Customer-centric Service Management (CSM). It is an interdisciplinary approach that adapts the service composition process from the field of business and IT to the particularities of consumers. Its objective is combining modular services into individualized and valuable service bundles. Making this type of interaction accessible for consumers requires a substantial reduction of complexity in the front end. The key to achieving this is taking an outside-in perspective: understanding the decision process of the customer and speaking his language in a field that has long been dominated by formal description standards and product parameters.
This work hypothesizes that a paradigm shift enables consumer-driven service composition. Thus, the concept of customer-centricity is applied to service management. By letting the consumer describe himself, or rather his distinct needs and requirements, a better customer value is achieved than by traditional product-centric approaches. Unlike existing product-centric configuration tools, customer-centric configurators do not elicit product parameters. Instead, they rely on a structured description of customers' intentions and values captured in a domain-specific customer model. Consequently, the concept applies to a more abstract level of service categories instead of specific product instances. This refers to the pre-purchase phase of the consumer journey, a phase that has so far been widely neglected by academia and practice.
This work analyzes the concept of CSM on a technical, process-related, and strategic level. Three elements are identified as the core of CSM: the customer model, the service model, and the composition logic. Each element is elaborated in detail using the example of financial services.
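As a rough illustration of how these three elements might fit together, the following sketch uses hypothetical financial-services names; the thesis's customer model, service model, and composition logic are considerably richer than this. The point is that the customer describes goals and risk tolerance rather than product parameters, and the composition logic maps those onto service categories.

```python
from dataclasses import dataclass, field

@dataclass
class CustomerModel:
    """Structured description of intentions and values, not product parameters."""
    goals: set = field(default_factory=set)   # e.g. "retirement", "home_ownership"
    risk_tolerance: str = "medium"            # "low" | "medium" | "high"

@dataclass
class ServiceCategory:
    name: str
    addresses_goals: set
    min_risk_tolerance: str

RISK_ORDER = {"low": 0, "medium": 1, "high": 2}

def compose(customer: CustomerModel, catalogue: list) -> list:
    """Composition logic: select the service categories whose purpose matches
    the customer's goals and whose risk profile the customer can carry."""
    bundle = []
    for category in catalogue:
        if (category.addresses_goals & customer.goals
                and RISK_ORDER[customer.risk_tolerance] >= RISK_ORDER[category.min_risk_tolerance]):
            bundle.append(category.name)
    return bundle

catalogue = [
    ServiceCategory("savings_account", {"emergency_fund"}, "low"),
    ServiceCategory("equity_fund", {"retirement"}, "medium"),
    ServiceCategory("mortgage", {"home_ownership"}, "low"),
]
customer = CustomerModel(goals={"retirement", "home_ownership"}, risk_tolerance="medium")
print(compose(customer, catalogue))   # ['equity_fund', 'mortgage']
```

The sketch stays at the level of service categories rather than product instances, mirroring the pre-purchase phase the abstract refers to.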
The concept of CSM draws on current knowledge from different fields of research and finally implements it in a prototype. This demonstrator is the basis for a large field experiment that answers two questions: First, does customer-centric service composition provide higher customer value regarding perceived complexity, solution utility, and process utility? Second, does a reduced complexity, in terms of the amount of information that needs to be handled, without changing the configuration paradigm, have a greater impact on customer value?
Empirical validation shows that the customer-centric approach has significant advantages over the product-centric one. It offers higher customer value with respect to perceived complexity, perceived solution utility, and perceived user experience. This proves the high potential of this concept. The findings of this thesis form the basis of a new form of customer interaction and enable new business models.

1 Introduction
1.1 Initial Situation and Problem
1.2 Contribution and Research Question
1.3 Research Approach
1.4 Thesis Structure
2 Foundations
2.1 Services
2.2 Complexity
2.3 Individualization
2.4 Service Management
3 Conceptualization of Customer-centric Service Management
3.1 Customer-centric Service Management
3.2 Customer Model
3.3 Service Model
3.4 Service Composition Logic
4 Empirical Validation
4.1 Objectives
4.2 Conceptualization
4.3 Prototype
4.4 Experiment Design and Empirical Testing
4.5 Data Analysis and Results
5 Results, Evaluation and Outlook
5.1 Summary and Results
5.2 Customer-centric Service Management as a Business Model – Practical Startup Experiences
5.3 Outlook and Impact of CSM
5.4 Limitations and Need for Future Research
6 References
Curriculum Vitae
Bibliographic Data
|
1367 |
Approximation Algorithms for Maximum Vertex-Weighted Matching. Ahmed I Al Herz, 03 December 2019
We consider the maximum vertex-weighted matching problem (MVM), in which non-negative weights are assigned to the vertices of a graph, and the weight of a matching is the sum of the weights of the matched vertices. Vertex-weighted matchings arise in many applications, including internet advertising, facility scheduling, constraint satisfaction, the design of network switches, and the computation of sparse bases for the null space or the column space of a matrix. Let m be the number of edges, n the number of vertices, and D the maximum degree of a vertex in the graph. We design two exact algorithms for the MVM problem with time complexities of O(mn) and O(Dmn). The new exact algorithms use a maximum cardinality matching as an initial matching, after which the weight of the matching is increased using weight-increasing paths.

Although MVM problems can be solved exactly in polynomial time, exact MVM algorithms are still slow in practice for large graphs with millions and even billions of edges. Hence we investigate several approximation algorithms for MVM in this thesis. First we show that a maximum vertex-weighted matching can be approximated within a ratio of k/(k + 1), i.e. arbitrarily close to one, where k is related to the length of the augmenting or weight-increasing paths searched by the algorithm. We identify two main approaches for designing approximation algorithms for MVM. The first approach is direct: vertices are sorted in non-increasing order of weights, and then the algorithm searches for augmenting paths of restricted length that reach a heaviest vertex. (In this approach each vertex is processed once.) The second approach repeatedly searches for augmenting paths and increasing paths, again of restricted length, until none can be found. In this second, iterative approach, a vertex may need to be processed multiple times. We design two approximation algorithms based on the direct approach with approximation ratios of 1/2 and 2/3. The time complexity of the 1/2-approximation algorithm is O(m + n log n), and that of the 2/3-approximation algorithm is O(m log D). Employing the second approach, we design 1/2- and 2/3-approximation algorithms for MVM with time complexities of O(Dm) and O(D^2 m), respectively. We show that the iterative algorithm can be generalized to find a k/(k+1)-approximate MVM with a time complexity of O(D^k m). In addition, we design parallel 1/2- and 2/3-approximation algorithms for a shared-memory programming model, and introduce a new technique for locking augmenting paths to avoid deadlock and related problems.

MVM problems may be solved using algorithms for maximum edge-weighted matching (MEM) by assigning to each edge a weight equal to the sum of the vertex weights on its endpoints. However, our results will show that this is one way to generate MEM problems that are difficult to solve. On such problems, exact MEM algorithms may require run times that are a factor of a thousand or more larger than the time of an exact MVM algorithm. Our results show the competitiveness of the new exact algorithms by demonstrating that they outperform exact MEM algorithms. Specifically, our fastest exact algorithm runs faster than the fastest MEM implementation by a factor of 37 and 18 in the geometric mean, using two different sets of weights on our test problems. In some instances, the factor can be higher than 500. Moreover, extensive experimental results show that the MVM approximation algorithm outperforms an MEM approximation algorithm with the same approximation ratio, with respect to matching weight and run time. Indeed, our results show that the MVM approximation algorithm outperforms the corresponding MEM algorithm with respect to these metrics in both serial and parallel settings.
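As an illustration of the direct approach restricted to augmenting paths of length one (the 1/2-approximation setting described above), here is a minimal sketch; the graph representation and names are illustrative, not the thesis's implementation. Vertices are processed in non-increasing order of weight, and each unmatched vertex is matched to its heaviest unmatched neighbour.

```python
def mvm_half_approx(weights, adj):
    """Greedy sketch of a 1/2-approximation for maximum vertex-weighted matching.

    weights: dict vertex -> non-negative weight
    adj:     dict vertex -> iterable of neighbours
    Returns a dict `mate` with mate[u] = v for matched pairs.
    """
    mate = {}
    # Direct approach: each vertex is considered once, heaviest first.
    for u in sorted(weights, key=weights.get, reverse=True):
        if u in mate:
            continue
        # Augmenting path of length one: pick the heaviest unmatched neighbour.
        candidates = [v for v in adj.get(u, []) if v not in mate]
        if candidates:
            v = max(candidates, key=weights.get)
            mate[u], mate[v] = v, u
    return mate

# Small example graph (a path a-b-c-d with heavy end vertices).
weights = {"a": 10, "b": 1, "c": 2, "d": 9}
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
matching = mvm_half_approx(weights, adj)
matched_weight = sum(weights[v] for v in matching)
print(matching, matched_weight)   # both heavy vertices a and d end up matched
```

The MEM reduction mentioned above would instead assign each edge the weight weights[u] + weights[v] and hand the graph to an edge-weighted matching solver.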
|
1368 |
Handling Complexity via Statistical Methods. Evidence S Matangi, 05 December 2019
Phenomena investigated in complex systems are characteristically dynamic, multi-dimensional, and nonlinear. Their traits can be captured through data generating mechanisms (DGM) that explain the interactions among the systems' components. Measurement is fundamental to advancing science, and handling complexity requires deviating from linear thinking. Simplifying the measurement of complex and heterogeneous data in statistical methodology can compromise accuracy. In particular, conventional statistical methods make assumptions about the DGM that are rarely met in the real world, which can make inference inaccurate. We posit that causal inference for complex-systems phenomena requires at least the incorporation of subject-matter knowledge and the use of dynamic metrics in statistical methods to improve its accuracy.
This thesis consists of two separate topics on handling the complexities of data and data generating mechanisms: the evaluation of bundled nutrition interventions and the modeling of atmospheric data.
Firstly, when a public health problem requires multiple ways to address its contributing factors, bundling the approaches can be cost-effective. Scaling up bundled interventions geographically requires a hierarchical structure in implementation, with central coordination and supervision of multiple sites and staff delivering a bundled intervention. The experimental design to evaluate such an intervention becomes complex to accommodate the multiple intervention components and the hierarchical implementation structure. The components of a bundled intervention may impact targeted outcomes additively or synergistically. However, noncompliance and protocol deviation can impede this potential impact and introduce data complexities. We identify several statistical considerations and recommendations for the implementation and evaluation of bundled interventions.
The simple aggregate metrics used in cluster randomized controlled trials do not utilize all available information, and findings are prone to the ecological fallacy problem, in which inference at the aggregate level may not hold at the disaggregate level. Further, implementation heterogeneity impedes statistical power and consequently the accuracy of inference from conventional comparison with a control arm. The intention-to-treat analysis can be inadequate for bundled interventions. We developed novel process-driven, disaggregated participation metrics to examine the mechanisms of impact of the Agriculture to Nutrition (ATONU) bundled intervention (ClinicalTrials.gov Identifier: NCT03152227). Logistic and beta-logistic hierarchical models were used to characterize these metrics, and generalized mixed models were employed to identify determinants of the study outcome, dietary diversity for women of reproductive age. Mediation analysis was applied to explore the underlying determinants by which the intervention affects the outcome through the process metrics. The determinants of greater participation should be the targets for improving the implementation of future bundled interventions.
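As a simplified stand-in for the modelling step described above (variable names and data are hypothetical, and a linear mixed model is used here instead of the logistic and beta-logistic hierarchical models of the actual analysis), the sketch below relates a disaggregated participation metric to the dietary diversity outcome with a random intercept per cluster.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_clusters, n_per = 20, 30

# Simulated hierarchical data: women nested in villages (clusters).
df = pd.DataFrame({
    "village": np.repeat(np.arange(n_clusters), n_per),
    "treatment": np.repeat(rng.integers(0, 2, n_clusters), n_per),
    # hypothetical disaggregated, process-driven participation metric in [0, 1]
    "participation": rng.beta(2, 2, n_clusters * n_per),
})
village_effect = np.repeat(rng.normal(0, 0.5, n_clusters), n_per)
# Outcome: women's dietary diversity score, driven partly by participation.
df["diet_diversity"] = (4.0 + 1.5 * df["treatment"] * df["participation"]
                        + village_effect + rng.normal(0, 1, len(df)))

# Linear mixed model with a random intercept per village: participation is
# modelled at the individual level rather than as a single cluster aggregate.
model = smf.mixedlm("diet_diversity ~ treatment * participation",
                    df, groups=df["village"])
print(model.fit().summary())
```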
Secondly, observed atmospheric records are often prohibitively short, with only one record typically available for study. Classical nonlinear time series models applied to explain the nonlinear DGM exhibit some statistical properties of the phenomena being investigated, but have nothing to do with their physical properties. The data's complex dependence structure invalidates inference from classical time series models involving strong statistical assumptions rarely met in real atmospheric and climate data. The subsampling method may yield valid statistical inference. Atmospheric records, however, are typically too short to satisfy the asymptotic conditions for the method's validity, which necessitates enhancing subsampling with the use of approximating models (models sharing statistical properties with the series under study).
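For context, here is a minimal sketch of the plain subsampling confidence interval (Politis–Romano style) that such an enhancement builds on; the block-length rule and the AR(1) stand-in series are illustrative choices, not the thesis's data or tuning.

```python
import numpy as np

def subsampling_ci(x, statistic=np.mean, block_len=None, alpha=0.05):
    """Subsampling confidence interval for a time-series statistic.

    The statistic is recomputed on every contiguous block of length b; the
    empirical distribution of sqrt(b)*(theta_block - theta_full) approximates
    the sampling distribution of sqrt(n)*(theta_full - theta_true).
    """
    x = np.asarray(x)
    n = len(x)
    b = block_len or max(1, int(n ** (1 / 3)))   # common rule-of-thumb choice
    theta_full = statistic(x)
    blocks = np.array([statistic(x[i:i + b]) for i in range(n - b + 1)])
    root = np.sqrt(b) * (blocks - theta_full)
    lo, hi = np.quantile(root, [alpha / 2, 1 - alpha / 2])
    # Invert the approximation for the full-sample root.
    return theta_full - hi / np.sqrt(n), theta_full - lo / np.sqrt(n)

# Short, dependent AR(1) series standing in for an atmospheric record.
rng = np.random.default_rng(1)
e = rng.normal(size=300)
series = np.empty_like(e)
series[0] = e[0]
for t in range(1, len(e)):
    series[t] = 0.6 * series[t - 1] + e[t]
print(subsampling_ci(series))
```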
Gyrostat models (G-models) are physically sound low-order models generated from the governing equations for atmospheric dynamics, thus retaining some of their fundamental statistical and physical properties. We have demonstrated that using G-models as approximating models in place of traditional time series models results in more precise subsampling confidence intervals with improved coverage probabilities. Future work will explore other types of G-models as approximating models for inference on atmospheric data. We will adopt this technique for inference on phenomena in astrostatistics and pharmacokinetics.
|
1369 |
From problem-solving to improvisation in filmmaking: Production of "Falling Grief" and "Circadian Anguish". Chow, Jackel (Wing Hang), January 2020
This exposition provides insight into indeterminacy during improvisation, as well as a reflection on how I converted my problem-solving skills into planned improvisation under the adverse filming conditions of my graduation feature film production. I define improvisation as a way to be adaptive and flexible in uncertainty, and problem-solving as a solution to overcome the obstacles faced. I started from an ambitious goal: making a feature-length hybrid film for my graduation showcase within my two years of master study. Facing problems like lack of money, an insufficient network to find talent, and limited time to acquire local knowledge of the working styles in the country, I met a lot of challenges. As I solved the problems one by one within this filmmaking process, I gradually realized I relied quite a lot on improvisation. It applied not only on set when I worked with the actors, but also to scriptwriting, crew recruiting, location scouting, shot creation, etc. The turning point for me to change from coincidental (unplanned) improvisation, needed for problem solving, to deliberate (planned) improvisation came in the second half of the principal photography (or simply production/filming) stage, because more uncertainty emerged and I started to get used to such a style. In the end, I made two versions of the film with different levels of improvisational practice. I reflect on my whole filmmaking process and its connection with improvisation through my film products.
|
1370 |
Developing Oral Reading Fluency Among Hispanic High School English-language Learners: An Intervention Using Speech Recognition Software. Ruffu, Russell, 08 1900
This study investigated oral reading fluency development among Hispanic high school English-language learners. Participants included 11 males and 9 females from first-year, second-year, and third-year English language arts classes. The pre-post experimental study, which was conducted during a four-week ESL summer program, included a treatment and a control group. The treatment group received a combination of components, including modified repeated reading with self-voice listening and oral dictation output from a speech recognition program. Each day, students performed a series of tasks, including dictation of part of the previous day’s passage; listening to and silently reading a new passage; dictating and correcting individual sentences from the new passage in the speech recognition environment; dictating the new passage as a whole without making corrections; and finally, listening to their own voice from their recorded dictation. This sequence was repeated in the subsequent sessions. Thus, this intervention was a technology-enhanced variation of repeated reading with a pronunciation dictation segment. Research questions focused on improvements in oral reading accuracy and rate, facility with the application, student perceptions toward the technology for reading, and the reliability of the speech recognition program. The treatment group improved oral reading accuracy by 50%, retained and transferred pronunciation of 55% of new vocabulary, and increased oral reading rate by 16 words correct per minute. Students used the intervention independently after three sessions. This independence may have contributed to students’ self-efficacy, as they perceived improvements in their pronunciation and reading in general, and reported an increased liking of school. Students initially had a very positive perception toward using the technology for reading, but this perception decreased over the four weeks from 2.7 to 2.4 on a 3-point scale. The speech recognition program was reliable 94% of the time. The combination of the summer school program and intervention component stacking supported students’ gains in oral reading fluency, suggesting that further study into applications of the intervention is warranted. Acceleration of oral reading skills and vocabulary acquisition for ELLs contributes to closing the reading gap between ELLs and native-English speakers. Fluent oral reading is strongly correlated with reading comprehension, and reading comprehension is essential for ELLs to be successful in school. Literacy support tools such as this intervention can play a role in accelerating English acquisition beyond the rate attained through traditional practices.
|