51.
A New Era for the Big 8? Evidence on the Association Between Earnings Quality and Audit Firm Type. Cassell, Cory A. (May 2009)
I examine the association between earnings quality and audit firm type using a three-tiered audit firm classification scheme which allows for an explicit examination of the quality of Second-Tier audited earnings. My tests are motivated by the lack of competition in the market for audit services, theoretical arguments which suggest a positive association between audit firm size and audit quality, evidence pointing to the rapid post-Andersen growth in Second-Tier audit practices, and the lack of empirical research that fully differentiates audit firm type.
My results indicate that the post-Andersen growth of Second-Tier audit firms coincides with improved Second-Tier audit quality relative to the other audit firm types (Big N and other non-Big N). Specifically, Second-Tier client earnings quality was not distinct from that of other non-Big N clients in the pre-Andersen period, but was higher than that of other non-Big N clients in the post-Andersen period. Moreover, the post-Andersen results provide partial evidence that there is no difference between Second-Tier and Big N client earnings quality and thus lend some credence to the notion of a new era for the Big 8. These results convey important information to market participants (e.g., investors, underwriters, analysts) who wish to assess the extent to which financial statements are likely to be free from opportunistic managerial manipulation, to clients contemplating a switch to a Second-Tier audit firm, to government agencies that have expressed concern over the state of competition in the market for audit services, and to those who have promoted the use of Second-Tier audit firms in the wake of SOX-related resource constraints.
52.
The Study of Corruption Prevention and Profits Promotion of Corporate Governance. Chang, Chia-chi (20 June 2007)
This study examines "corruption prevention and profit promotion in corporate governance" in our country. Companies worldwide have recently encountered corporate governance problems, which indicates that corporate governance has not been implemented properly. Such failures not only fall short of maximizing value for shareholders and stakeholders but, even worse, can leave investors unable to recover their capital. They also breed social instability and weaken economic development.
Regarding the causes of corporate scandals, most people attribute them to inadequate monitoring and supervision of companies. But this is only the direct cause; it is compounded by the indirect cause of weak strategic leadership, which in turn produces financial problems. Clearly, monitoring for corruption prevention and leadership for profit promotion are the two essential functions of corporate governance.
On strengthening the monitoring system, this paper investigates the current supervision problems in corporate governance and offers corresponding improvements. In evaluating the two systems of independent directors and existing supervisors, it reviews several fundamental but important points and provides recommendations. To improve leadership performance and build a multi-purpose board, we recommend the British model of a balanced board as a reference. When selecting directors, both independent monitoring and a strong ability to promote the company's profit should be considered simultaneously, so that the board can monitor the company effectively while also acting as a capable navigator that leads the company forward. It is hoped that stressing and developing these dual functions will usher in a new era for our corporate governance.
53.
An empirical approach to automated performance management for elastic n-tier applications in computing clouds. Malkowski, Simon J. (3 April 2012)
Achieving a high degree of efficiency is non-trivial when managing the performance of large web-facing applications such as e-commerce websites and social networks. While computing clouds have been touted as a good solution for elastic applications, many significant technological challenges still have to be addressed in order to leverage the full potential of this new computing paradigm. In this dissertation I argue that the automation of elastic n-tier application performance management in computing clouds presents novel challenges to classical system performance management methodology that can be successfully addressed through a systematic empirical approach. I present strong evidence in support of my thesis in a framework of three incremental building blocks: Experimental Analysis of Elastic System Scalability and Consolidation, Modeling and Detection of Non-trivial Performance Phenomena in Elastic Systems, and Automated Control and Configuration Planning of Elastic Systems. More concretely, I first provide a proof of concept for the feasibility of large-scale experimental database system performance analyses, and illustrate several complex performance phenomena based on the gathered scalability and consolidation data. Second, I extend these initial results to a proof of concept for automating bottleneck detection based on statistical analysis and an abstract definition of multi-bottlenecks. Third, I build a performance control system that manages elastic n-tier applications efficiently with respect to complex performance phenomena such as multi-bottlenecks. This control system provides a proof of concept for automated online performance management based on empirical data.
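To make the control-layer idea concrete, here is a minimal sketch, with invented thresholds, of the kind of per-tier scaling decision such a controller automates; the dissertation's actual control system also reasons about multi-bottlenecks and consolidation effects, which this toy policy omits.

```python
# Illustrative only: thresholds and signature are assumptions, not the
# dissertation's controller. Decides a new replica count for one tier.
def plan_capacity(latency_ms: float, cpu_util: float, replicas: int) -> int:
    if latency_ms > 500 or cpu_util > 0.85:   # assumed SLA bound and saturation limit
        return replicas + 1                   # scale out
    if latency_ms < 100 and cpu_util < 0.30 and replicas > 1:
        return replicas - 1                   # scale in to cut cost
    return replicas                           # hold steady

print(plan_capacity(latency_ms=620, cpu_util=0.91, replicas=2))  # -> 3
```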
54.
Eisengehalt von Fleisch - Ermittlung des Eisengehalts im Fleisch verschiedener Tierarten [Iron Content of Meat: Determining the Iron Content in the Meat of Various Animal Species]. Westphal, Karsten; Klose, Ralf; Golze, Manfred (8 December 2009)
The report presents the results of analyses of meat for iron content.
A total of 308 pork samples and roughly 300 meat samples from cattle, bison, aurochs, buffalo, sheep, goat, rabbit, wild boar, roe deer, red deer, and pheasant were examined.
The Saxon results confirm findings from other German states and document the sharp decline in the iron content of pork, which averaged 4.1 mg/kg fresh weight (FW). Thirty years ago the iron content was still 18-25 mg/kg.
As expected, the meat of so-called red-meat species such as cattle, sheep, buffalo, bison, aurochs, roe deer, and red deer shows a high iron content (17-33 mg/kg FW).
Color analyses showed that the iron content depends on the lightness and red hue of the meat: the darker the meat, or the more intense its red hue, the higher the iron content.
55.
SOQPSK with LDPC: Spending Bandwidth to Buy Link Margin. Hill, Terry; Uetrecht, Jim (October 2013)
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV / Over the past decade, SOQPSK has been widely adopted by the flight test community, and the low density parity check (LDPC) codes are now in widespread use in many applications. This paper defines the waveform and presents the bit error rate (BER) performance of SOQPSK coupled with a rate 2/3 LDPC code. The scheme described here expands the transmission bandwidth by approximately 56% (which is still 22% less than the legacy PCM/FM modulation), for the benefit of improving link margin by over 10 dB at BER = 10⁻⁶.
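As a rough consistency check on the quoted figures (the framing-overhead term ε is an assumption here; the paper quotes only the 56% and 22% results): a rate r = 2/3 code scales the transmitted bit rate, and hence the occupied bandwidth, by 1/r, and legacy PCM/FM occupies roughly twice the bandwidth of SOQPSK at the same data rate:

```latex
\[
\frac{B_{\text{coded}}}{B_{\text{uncoded}}} \approx \frac{1+\varepsilon}{r}
  = \frac{3}{2}\,(1 + 0.04) \approx 1.56,
\qquad
\frac{B_{\text{coded}}}{B_{\text{PCM/FM}}} \approx \frac{1.56}{2} = 0.78,
\]
```

i.e., a 56% expansion relative to uncoded SOQPSK that is still about 22% narrower than PCM/FM.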
56.
ARTM CPM Receiver/Demodulator Performance: An Update. Temple, Kip (October 2013)
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV / Since the waveform was first developed by the Advanced Range Telemetry (ARTM) program and adopted by the Range Commanders Council Telemetry Group (RCC/TG), receiver/demodulators for the ARTM Continuous Phase Modulation (CPM) waveform have undergone continued development by several hardware vendors to improve phase noise, detection performance, and resynchronization time. Performance results were first presented at the International Telemetry Conference (ITC) in 2003, when hardware supporting this waveform, then called ARTM Tier II, first became available. This paper reexamines the state-of-the-art performance of ARTM CPM receiver/demodulators available in the marketplace today.
57.
A study of transient bottlenecks: understanding and reducing latency long-tail problem in n-tier web applications. Wang, Qingyang (21 September 2015)
An essential requirement of cloud computing and data centers is to achieve good performance and high utilization simultaneously for cost efficiency. High utilization through virtualization and hardware resource sharing is critical for both cloud providers and cloud consumers to reduce management and infrastructure costs (e.g., energy and hardware costs) and to increase cost-efficiency. Unfortunately, achieving good performance (e.g., low latency) for web applications at high resource utilization remains an elusive goal. Both practitioners and researchers have experienced the latency long-tail problem in clouds during periods of even moderate utilization (e.g., 50%). In this dissertation, we show that transient bottlenecks are an important contributing factor to the latency long-tail problem. Transient bottlenecks are bottlenecks with a short lifespan, on the order of tens of milliseconds. Though short-lived, transient bottlenecks can cause a long-tail response time distribution that spans two to three orders of magnitude, from tens of milliseconds to tens of seconds, due to queuing-effect propagation and amplification caused by complex inter-tier resource dependencies in the system. Transient bottlenecks can arise from a wide range of factors at different system layers. For example, we have identified transient bottlenecks caused by dynamic voltage and frequency scaling (DVFS) control at the CPU architecture layer, Java garbage collection (GC) at the system software layer, and virtual machine (VM) consolidation at the application layer. These factors interact with naturally bursty workloads from clients, often leading to transient bottlenecks that degrade overall performance even when all system resources are far from saturated (e.g., below 50% utilization). By combining fine-grained monitoring tools with a sophisticated analytical method for generating and analyzing monitoring data, we are able to detect and study transient bottlenecks in a systematic way.
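As an illustration of what the fine-grained monitoring has to catch, the following sketch flags saturation bursts lasting only tens of milliseconds in a resource utilization trace; the 50 ms sampling period and the thresholds are assumptions for the example, not values from the dissertation.

```python
# Hypothetical parameters: 50 ms samples, 95% counts as saturated, and
# bursts no longer than 200 ms count as "transient".
from typing import List, Tuple

SAMPLE_MS = 50
SATURATED = 0.95

def transient_bottlenecks(util: List[float],
                          max_len_ms: int = 200) -> List[Tuple[int, int]]:
    """Return (start_ms, duration_ms) for each short-lived saturation burst."""
    bursts, start = [], None
    for i, u in enumerate(util + [0.0]):      # sentinel closes a trailing burst
        if u >= SATURATED and start is None:
            start = i                         # burst begins
        elif u < SATURATED and start is not None:
            dur_ms = (i - start) * SAMPLE_MS
            if dur_ms <= max_len_ms:          # short-lived, hence transient
                bursts.append((start * SAMPLE_MS, dur_ms))
            start = None
    return bursts

trace = [0.40, 0.98, 0.99, 0.50, 0.97, 0.40]  # toy CPU utilization samples
print(transient_bottlenecks(trace))           # [(50, 100), (200, 50)]
```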
58.
Check, Connect, and Expect in a Self-Contained Setting for Elementary Students with Emotional and Behavioral Disorders. McDaniel, Sara C.; Houchins, David E.; Jolivette, Kristine; Steed, Elizabeth; Gagne, Phil; Henrich, Chris (11 August 2011)
Check, Connect, Expect (CCE) is a secondary-tier behavioral intervention for at-risk students who require targeted behavioral support in addition to school-wide positive behavioral interventions and supports. A full-time coach in the CCE intervention provided behavioral supports, including daily check-in and check-out procedures as well as targeted social skills instruction. This study extended CCE to a self-contained elementary school for students with emotional and behavioral disorders. Twenty-two students participated in the 17-week study, which involved a four-week baseline phase followed by a 13-week intervention phase. The following research questions were addressed: (a) How did CCE affect student behavior?; (b) How did CCE affect student weekly academic engagement?; (c) How did CCE affect student weekly math calculation and oral reading fluency growth?; (d) How did severity of behavior predict student response to CCE?; (e) How did the function maintaining the behavior predict student response to CCE?; (f) How did relationship strength with the coach predict student response to CCE?; and (g) How socially valid was CCE for teachers, paraprofessionals, and students? Two growth curve models were used to analyze the academic and behavioral data. Overall, students displayed significant behavioral growth during the intervention phase and positive growth in academic engagement and achievement. Severity of behavior, behavioral function, and relationship strength were not significant predictors of student response to the CCE intervention. Future directions, limitations, and implications for practice are discussed.
59.
Transforming Medical Imaging Applications into Collaborative PACS-based Telemedical Systems. Maani, Rouzbeh (13 October 2010)
Many medical imaging applications have been developed, but many of them do not support collaboration and are not remotely accessible (i.e., they do not support telemedicine). Medical imaging applications are also not practical for use in clinical workflows unless they can communicate with the Picture Archiving and Communication System (PACS).
This thesis presents an approach based on a three-tier architecture and provides several components to transform medical imaging applications into collaborative, PACS-based, telemedical systems.
A novel method is presented to support PACS connectivity. The method uses the Digital Imaging and Communications in Medicine (DICOM) protocol and reduces transmission time by employing a combination of parallelism and compression. Experimental results show up to 1.63× speedup over Local Area Networks (LANs) and up to 16.34× speedup over Wide Area Networks (WANs) compared to the current method of medical data transmission.
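The following is a minimal sketch, using only the Python standard library, of the general parallelism-plus-compression strategy described above; the chunk size, worker count, and placeholder send step are illustrative assumptions rather than details of the thesis implementation, which transmits over the DICOM protocol.

```python
# Sketch of chunked, compressed, parallel transfer; send_chunk() stands in
# for a real network send (e.g., one DICOM association per worker).
import zlib
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 1 << 20  # 1 MiB chunks (assumed)

def compress_chunk(chunk: bytes) -> bytes:
    return zlib.compress(chunk, 6)

def send_chunk(payload: bytes) -> int:
    return len(payload)  # placeholder: report bytes that would hit the wire

def transfer(data: bytes, workers: int = 4) -> int:
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        compressed = list(pool.map(compress_chunk, chunks))  # compress in parallel
        sent = list(pool.map(send_chunk, compressed))        # send in parallel
    return sum(sent)

raw = bytes(10 * CHUNK_SIZE)  # stand-in for image pixel data
print(f"sent {transfer(raw)} of {len(raw)} bytes")
```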
60.
An automated approach to create, manage and analyze large-scale experiments for elastic n-tier application in clouds. Jayasinghe, Indika D. (20 September 2013)
Cloud computing has revolutionized the computing landscape by providing on-demand, pay-as-you-go access to elastically scalable resources. Many applications are now being migrated from on-premises data centers to public clouds; yet the transition to the cloud is not always straightforward and smooth. An application that performed well in an on-premises data center may not perform identically in a public cloud, because many variables, such as virtualization, can affect its performance. Collecting substantial performance data through experimental study can reveal the cloud's complexity, particularly as it relates to performance. However, conducting large-scale system experiments is challenging because of the practical difficulties that arise during experimental deployment, configuration, execution, and data processing. In spite of these complexities, we argue that a promising way to address these challenges is to leverage automation to facilitate the exhaustive measurement of large-scale experiments.
Automation provides numerous benefits: it removes the error-prone and cumbersome involvement of human testers, reduces the burden of configuring and running large-scale experiments for distributed applications, and accelerates reliable application testing. In our approach, we have automated the three key activities of the experiment measurement process: create, manage, and analyze. In create, we prepare the platform and deploy and configure applications. In manage, we initialize the application components (in a reproducible and verifiable order), execute workloads, collect resource-monitoring and other performance data, and parse and upload the results to the data warehouse. In analyze, we process the collected data using various statistical and visualization techniques to understand and explain performance phenomena. The user provides only the experiment configuration file and receives the results at the end; the framework does everything else. We enable the automation through code generation. From an architectural viewpoint, our code generator adopts the compiler approach of multiple, serial transformative stages; the hallmarks of this approach are that stages typically operate on an XML document as the intermediate representation, and XSLT performs the code generation. Our automated approach has enabled cloud experiments to scale well beyond the limits of manual experimentation, and it has let us identify non-trivial performance phenomena that would not have been discoverable otherwise.
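A minimal sketch of that XML-in, XSLT-out code generation step, assuming the lxml library is available; the configuration schema and the generated script are invented for illustration and are not taken from the thesis framework.

```python
# One transformative stage: an XML experiment config (the intermediate
# representation) is turned into a driver script by an XSLT stylesheet.
from lxml import etree

config = etree.XML(
    "<experiment>"
    "<app name='webapp'/>"
    "<workload users='1000'/>"
    "</experiment>"
)

stylesheet = etree.XML("""\
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="/experiment">
    <xsl:text>#!/bin/sh&#10;run_benchmark --app </xsl:text>
    <xsl:value-of select="app/@name"/>
    <xsl:text> --users </xsl:text>
    <xsl:value-of select="workload/@users"/>
    <xsl:text>&#10;</xsl:text>
  </xsl:template>
</xsl:stylesheet>
""")

generate = etree.XSLT(stylesheet)
print(str(generate(config)))  # emits the generated shell driver script
```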