An Architecture for Fast and General Data Processing on Large Clusters, Matei Zaharia


Purchase options
Price: 51620.00 T
Quantity:
Availability: Supplied to order. In stock at the supplier's warehouse.
US warehouse: 128 pcs.
If the order is placed by:
Estimated delivery date:
Subject to the book being available from the supplier.

Add to cart
Add to My Wishlist

Author: Matei Zaharia
Title: An Architecture for Fast and General Data Processing on Large Clusters
ISBN: 9781970001563
Publisher: Mare Nostrum (Eurospan)
Classification:


ISBN-10: 1970001569
Cover/Format: Paperback
Pages: 141
Weight: 0.25 kg
Publication date: 30.05.2016
Series: ACM Books
Language: English
Dimensions: 235 x 191 x 8 mm
Readership: General (US: trade)
Keywords: Computer science, Computer architecture & logic design
Rating:
Ships from: England
Description: The past few years have seen a major change in computing systems, as growing data volumes and stalling processor speeds require more and more applications to scale out to clusters. Today, a myriad of data sources, from the Internet to business operations to scientific instruments, produce large and valuable data streams. However, the processing capabilities of single machines have not kept up with the size of data. As a result, organizations increasingly need to scale out their computations over clusters. At the same time, the speed and sophistication required of data processing have grown. In addition to simple queries, complex algorithms like machine learning and graph analysis are becoming common. And in addition to batch processing, streaming analysis of real-time data is required to let organizations take timely action. Future computing platforms will need to not only scale out traditional workloads, but support these new applications too. This book, a revised version of the 2014 ACM Dissertation Award-winning dissertation, proposes an architecture for cluster computing systems that can tackle emerging data processing workloads at scale. Whereas early cluster computing systems, like MapReduce, handled batch processing, our architecture also enables streaming and interactive queries, while keeping MapReduce's scalability and fault tolerance. And whereas most deployed systems only support simple one-pass computations (e.g., SQL queries), ours also extends to the multi-pass algorithms required for complex analytics like machine learning. Finally, unlike the specialized systems proposed for some of these workloads, our architecture allows these computations to be combined, enabling rich new applications that intermix, for example, streaming and batch processing. We achieve these results through a simple extension to MapReduce that adds primitives for data sharing, called Resilient Distributed Datasets (RDDs). We show that this is enough to capture a wide range of workloads. We implement RDDs in the open source Spark system, which we evaluate using synthetic and real workloads. Spark matches or exceeds the performance of specialized systems in many domains, while offering stronger fault tolerance properties and allowing these workloads to be combined. Finally, we examine the generality of RDDs from both a theoretical modeling perspective and a systems perspective. This version of the dissertation makes corrections throughout the text and adds a new section on the evolution of Apache Spark in industry since 2014. In addition, editing, formatting, and links for the references have been added.
Additional description: Computer architecture and logic design | Distributed systems | Parallel processing
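The architecture summarized above is built around Resilient Distributed Datasets. As a rough illustration of that programming model, here is a minimal PySpark sketch (not code from the book; the input path and local master setting are placeholder assumptions) that runs a one-pass MapReduce-style job and then caches an RDD so it can be reused across the passes of an iterative computation:

```python
# Minimal, illustrative PySpark sketch of the RDD programming model described above.
# Assumes a local Spark installation; the input path is a placeholder.
from pyspark import SparkContext

sc = SparkContext("local[*]", "rdd-sketch")

# One-pass, MapReduce-style computation: word count over a text file.
lines = sc.textFile("data/sample.txt")          # placeholder path
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))

# Data sharing via caching is what enables multi-pass algorithms (e.g. iterative
# machine learning): the RDD is kept in memory and reused across iterations
# instead of being recomputed or written to distributed storage between steps.
points = sc.parallelize([(1.0, 2.0), (2.0, 1.5), (3.0, 3.5)]).cache()
for _ in range(10):
    total = points.map(lambda p: p[0] * p[1]).reduce(lambda a, b: a + b)

print(counts.take(5), total)
sc.stop()
```

Caching is the data-sharing primitive the description refers to: without it, every pass of the loop would re-read and recompute its input.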


An Architecture for Fast and General Data Processing on Large Clusters

Author: Matei Zaharia
Title: An Architecture for Fast and General Data Processing on Large Clusters
ISBN-10: 1970001593 ISBN-13 (EAN): 9781970001594
Publisher: Mare Nostrum (Eurospan)
Rating:
Price: 62660.00 T
Stock availability: Available from the supplier; supplied to order.
Description: Same text as the description of the edition listed above.

Compiling Algorithms for Heterogeneous Systems

Author: Steven Bell, Jing Pu, James Hegarty, Mark Horowitz
Title: Compiling Algorithms for Heterogeneous Systems
ISBN-10: 1681732637 ISBN-13 (EAN): 9781681732633
Publisher: Mare Nostrum (Eurospan)
Rating:
Price: 72070.00 T
Stock availability: Cannot be supplied.
Description: Most emerging applications in imaging and machine learning must perform immense amounts of computation while holding to strict limits on energy and power. To meet these goals, architects are building increasingly specialized compute engines tailored for these specific tasks. The resulting computer systems are heterogeneous, containing multiple processing cores with wildly different execution models. Unfortunately, the cost of producing this specialized hardware—and the software to control it—is astronomical. Moreover, the task of porting algorithms to these heterogeneous machines typically requires that the algorithm be partitioned across the machine and rewritten for each specific architecture, which is time-consuming and prone to error. Over the last several years, the authors have approached this problem using domain-specific languages (DSLs): high-level programming languages customized for specific domains, such as database manipulation, machine learning, or image processing. By giving up generality, these languages are able to provide high-level abstractions to the developer while producing high-performance output. The purpose of this book is to spur the adoption and the creation of domain-specific languages, especially for the task of creating hardware designs. In the first chapter, a short historical journey explains the forces driving computer architecture today. Chapter 2 describes the various methods for producing designs for accelerators, outlining the push for more abstraction and the tools that enable designers to work at a higher conceptual level. From there, Chapter 3 provides a brief introduction to image processing algorithms and hardware design patterns for implementing them. Chapters 4 and 5 describe and compare Darkroom and Halide, two domain-specific languages created for image processing that produce high-performance designs for both FPGAs and CPUs from the same source code, enabling rapid design cycles and quick porting of algorithms. The final section describes how the DSL approach also simplifies the problem of interfacing between application code and the accelerator by generating the driver stack in addition to the accelerator configuration. This book should serve as a useful introduction to domain-specialized computing for computer architecture students and as a primer on domain-specific languages and image processing hardware for those with more experience in the field.
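As a loose illustration of the algorithm/schedule separation that DSLs such as Darkroom and Halide exploit (a plain Python/NumPy sketch with assumed names, not the real API of either language), the per-pixel rule below is written once and then executed by two different "backends":

```python
# Conceptual sketch of a DSL-style split between algorithm and execution
# strategy (plain Python/NumPy; not Darkroom's or Halide's actual API).
import numpy as np

def blur3(img, x, y):
    """Algorithm: 3-tap horizontal box blur, defined once per output pixel."""
    return (img[y, x - 1] + img[y, x] + img[y, x + 1]) / 3.0

def run_rowmajor(kernel, img):
    """One 'schedule': plain row-major loop nest over the output."""
    out = np.zeros_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(1, w - 1):
            out[y, x] = kernel(img, x, y)
    return out

def run_tiled(kernel, img, tile=4):
    """Another 'schedule': iterate over tiles for locality; same algorithm."""
    out = np.zeros_like(img)
    h, w = img.shape
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            for y in range(ty, min(ty + tile, h)):
                for x in range(max(tx, 1), min(tx + tile, w - 1)):
                    out[y, x] = kernel(img, x, y)
    return out

img = np.random.rand(16, 16)
assert np.allclose(run_rowmajor(blur3, img), run_tiled(blur3, img))
```

In Halide and Darkroom the second half is far richer (vectorization, line buffering, FPGA pipelines), but the principle is the same: one algorithm description, many generated implementations.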

Compiling Algorithms for Heterogeneous Systems

Author: Steven Bell, Jing Pu, James Hegarty, Mark Horowitz
Title: Compiling Algorithms for Heterogeneous Systems
ISBN-10: 162705961X ISBN-13 (EAN): 9781627059619
Publisher: Mare Nostrum (Eurospan)
Rating:
Price: 51750.00 T
Stock availability: Cannot be supplied.
Description: Same text as the description of the edition listed above.

Algorithm-Architecture Matching for Signal and Image Processing

Author: Guy Gogniat; Dragomir Milojevic; Adam Morawiec; Ah
Title: Algorithm-Architecture Matching for Signal and Image Processing
ISBN-10: 9400733925 ISBN-13 (EAN): 9789400733923
Publisher: Springer
Rating:
Price: 139750.00 T
Stock availability: Available from the supplier; supplied to order.
Description: Coverage of Algorithm-Architecture Matching ranges from sensors to architecture design, reflecting a diversity of potential algorithms, including signal, communication, image, video, and 3D-graphics processing, implemented on architectures ranging from FPGAs to multiprocessor systems.

Shared-Memory Parallelism Can Be Simple, Fast, and Scalable

Author: Julian Shun
Title: Shared-Memory Parallelism Can Be Simple, Fast, and Scalable
ISBN-10: 1970001887 ISBN-13 (EAN): 9781970001884
Publisher: Mare Nostrum (Eurospan)
Rating:
Price: 77370.00 T
Stock availability: Available from the supplier; supplied to order.
Description: Parallelism is the key to achieving high performance in computing. However, writing efficient and scalable parallel programs is notoriously difficult, and often requires significant expertise. To address this challenge, it is crucial to provide programmers with high-level tools to enable them to develop solutions easily, and at the same time emphasize the theoretical and practical aspects of algorithm design to allow the solutions developed to run efficiently under many different settings. This thesis addresses this challenge using a three-pronged approach consisting of the design of shared-memory programming techniques, frameworks, and algorithms for important problems in computing. The thesis provides evidence that with appropriate programming techniques, frameworks, and algorithms, shared-memory programs can be simple, fast, and scalable, both in theory and in practice. The results developed in this thesis serve to ease the transition into the multicore era. The first part of this thesis introduces tools and techniques for deterministic parallel programming, including means for encapsulating nondeterminism via powerful commutative building blocks, as well as a novel framework for executing sequential iterative loops in parallel, which lead to deterministic parallel algorithms that are efficient both in theory and in practice. The second part of this thesis introduces Ligra, the first high-level shared-memory framework for parallel graph traversal algorithms. The framework allows programmers to express graph traversal algorithms using very short and concise code, delivers performance competitive with that of highly optimized code, and is up to orders of magnitude faster than existing systems designed for distributed memory. This part of the thesis also introduces Ligra+, which extends Ligra with graph compression techniques to reduce space usage and improve parallel performance at the same time, and is also the first graph processing system to support in-memory graph compression. The third and fourth parts of this thesis bridge the gap between theory and practice in parallel algorithm design by introducing the first algorithms for a variety of important problems on graphs and strings that are efficient both in theory and in practice. For example, the thesis develops the first linear-work and polylogarithmic-depth algorithms for suffix tree construction and graph connectivity that are also practical, as well as a work-efficient, polylogarithmic-depth, and cache-efficient shared-memory algorithm for triangle computations that achieves a 2–5x speedup over the best existing algorithms on 40 cores. This is a revised version of the thesis that won the 2015 ACM Doctoral Dissertation Award.
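Ligra itself is a C++ framework; as a rough Python sketch of the frontier-based traversal style its edgeMap/vertexMap interface exposes (illustrative names only, not Ligra's real API), breadth-first search can be written as repeated mapping over the current frontier:

```python
# Loose Python sketch of the frontier-based traversal style behind Ligra's
# edgeMap/vertexMap interface (Ligra itself is a C++ framework; the names
# below are illustrative, not its real API).
from collections import defaultdict

def edge_map(graph, frontier, update, condition):
    """Apply `update` along edges leaving the frontier; keep targets passing `condition`."""
    next_frontier = set()
    for u in frontier:
        for v in graph[u]:
            if condition(v) and update(u, v):
                next_frontier.add(v)
    return next_frontier

def bfs(graph, source):
    parent = {source: source}
    frontier = {source}
    while frontier:
        frontier = edge_map(
            graph, frontier,
            update=lambda u, v: parent.setdefault(v, u) == u,  # claim v if unvisited
            condition=lambda v: v not in parent,               # only visit new vertices
        )
    return parent

g = defaultdict(list, {0: [1, 2], 1: [3], 2: [3], 3: []})
print(bfs(g, 0))  # {0: 0, 1: 0, 2: 0, 3: 1}
```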

Shared-Memory Parallelism Can Be Simple, Fast, and Scalable

Author: Julian Shun
Title: Shared-Memory Parallelism Can Be Simple, Fast, and Scalable
ISBN-10: 1970001917 ISBN-13 (EAN): 9781970001914
Publisher: Mare Nostrum (Eurospan)
Rating:
Price: 94850.00 T
Stock availability: Available from the supplier; supplied to order.
Description: Same text as the description of the edition listed above.

Design and Analysis of Algorithms: A Contemporary Perspective

Author: Sandeep Sen, Amit Kumar
Title: Design and Analysis of Algorithms: A Contemporary Perspective
ISBN-10: 1108496822 ISBN-13 (EAN): 9781108496827
Publisher: Cambridge Academ
Rating:
Price: 49630.00 T
Stock availability: Available from the supplier; supplied to order.
Description: A valuable text in the field of computer science and engineering, covering fundamental concepts and recent advancements. It helps readers design and redesign algorithms for their own requirements rather than be overawed by the challenges of a new framework.

Fast Techniques for Integrated Circuit Design

Author: Sahrling, Mikael
Title: Fast Techniques for Integrated Circuit Design
ISBN-10: 1108498450 ISBN-13 (EAN): 9781108498456
Publisher: Cambridge Academ
Rating:
Price: 102430.00 T
Stock availability: Available from the supplier; supplied to order.
Description: Learn how to use estimation techniques to solve integrated circuit (IC) design problems, accelerate the design process, and enhance understanding of complex systems with this step-by-step guide. With numerous real-world application examples, it is ideal for early-career and established professionals, researchers, and graduate students working in IC design.

Design and Analysis of Algorithms: A Contemporary Perspective

Author: Sandeep Sen, Amit Kumar
Title: Design and Analysis of Algorithms: A Contemporary Perspective
ISBN-10: 1108721990 ISBN-13 (EAN): 9781108721998
Publisher: Cambridge Academ
Rating:
Price: 47510.00 T
Stock availability: Cannot be supplied.
Description: The text covers important algorithm design techniques, like greedy algorithms, dynamic programming, and divide-and-conquer, and gives applications to contemporary problems. Techniques including the fast Fourier transform, the KMP algorithm for string matching, the CYK algorithm for context-free parsing, and gradient descent for convex function minimization are discussed in detail. The book's emphasis is on computational models and their effect on algorithm design. It gives insights into algorithm design techniques in parallel, streaming, and memory-hierarchy computational models. The book also emphasizes the role of randomization in algorithm design, and gives numerous applications ranging from data structures like skip lists to dimensionality reduction methods.
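As a small, self-contained illustration of one technique named in the description, here is a textbook-style Python sketch of Knuth-Morris-Pratt string matching (not code from the book):

```python
# Illustrative Python sketch of Knuth-Morris-Pratt string matching,
# one of the techniques the book covers (textbook formulation).
def failure_function(pattern):
    """fail[i] = length of the longest proper prefix of pattern[:i+1] that is also a suffix."""
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    return fail

def kmp_search(text, pattern):
    """Return all start indices where pattern occurs in text, in O(len(text) + len(pattern))."""
    fail, matches, k = failure_function(pattern), [], 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            matches.append(i - k + 1)
            k = fail[k - 1]
    return matches

print(kmp_search("abababca", "abab"))  # [0, 2]
```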

Emerging Technology and Architecture for Big-data Analytics

Author: Anupam Chattopadhyay; Chip Hong Chang; Hao Yu
Title: Emerging Technology and Architecture for Big-data Analytics
ISBN-10: 3319548395 ISBN-13 (EAN): 9783319548395
Publisher: Springer
Rating:
Price: 139750.00 T
Stock availability: Available from the supplier; supplied to order.
Description: Part I: State-of-the-Art Architectures and Automation for Data Analytics. Chapter 1. Scaling the Java Virtual Machine on a Many-core System; Chapter 2. Scaling the Java Virtual Machine on a Many-core System; Chapter 3. Least-squares-based Machine Learning Accelerator for Big-data Analytics in Smart Buildings; Chapter 4. Compute-in-memory Architecture for Data-Intensive Kernels; Chapter 5. New Solutions for Cross-Layer System-Level and High-Level Synthesis. Part II: New Solutions for Cross-Layer System-Level and High-Level Synthesis. Chapter 6. Side-Channel Attacks and Efficient Countermeasures on Residue Number System Multipliers; Chapter 7. Ultra-Low-Power Biomedical Circuit Design and Optimization: Catching the Don't Cares; Chapter 8. Acceleration of MapReduce Framework on a Multicore Processor; Chapter 9. Adaptive Dynamic Range Compression for Improving Envelope-based Speech Perception: Implications for Cochlear Implants. Part III: Emerging Technology, Circuits and Systems for Data Analytics. Chapter 10. Emerging Technology, Circuits and Systems for Data Analytics; Chapter 11. Energy-Efficient Spiking Neural Network Design with RRAM Devices; Chapter 12. Efficient Neuromorphic Systems and Emerging Technologies: Prospects and Perspectives; Chapter 13. In-memory Data Compression Using ReRAMs; Chapter 14. In-memory Data Compression Using ReRAMs; Chapter 15. Data Analytics in Quantum Paradigm: An Introduction.


Achieving buzzword compliance: data arch

Author: Hay, David C.
Title: Achieving buzzword compliance: data arch
ISBN-10: 1634623703 ISBN-13 (EAN): 9781634623704
Publisher: Gazelle Book Services
Rating:
Price: 95790.00 T
Stock availability: Available from the supplier; supplied to order.
Description:

Understand the language and vocabulary of Data Architecture.

The Data Architecture field is rife with terms that have become "fashionable." Some of the terms began with very specific, specialized meanings, but as their use spread, they lost the precision of their technical definitions and became, well, "buzzwords."

A buzzword is "a word or expression from a particular subject area that has become fashionable because it has been used a lot." Compliance is "the obeying of an accepted principle or instruction that states the way things are or should be done."

The assignment is to take buzzwords and follow rules to use them correctly. We cut through the hype to arrive at buzzword compliance - the state where you fully understand the words that in fact have real meaning in the data architecture industry. This book will rationalize the various ways all these terms are defined.

Of necessity, the book must address all aspects of describing an enterprise and its data management technologies. This includes a wide range of subjects, from entity/relationship modeling, through the semantic web, to database issues like relational and "beyond relational" ("NoSQL") approaches. In each case, the definitions for the subject are meant to be detailed enough to make it possible to understand basic principles, while recognizing that a full understanding will require consulting the sources where they are more completely described.

The book's Glossary contains a catalogue of definitions and its Bibliography contains a comprehensive set of references.


A VLSI Architecture for Concurrent Data Structures

Author: J. W. Dally
Title: A VLSI Architecture for Concurrent Data Structures
ISBN-10: 1461291917 ISBN-13 (EAN): 9781461291916
Publisher: Springer
Rating:
Price: 104480.00 T
Stock availability: Available from the supplier; supplied to order.
Description: Concurrent data structures simplify the development of concurrent programs by encapsulating commonly used mechanisms for synchronization and communication into data structures.
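As a minimal, generic illustration of that encapsulation idea (a Python threading sketch, unrelated to the VLSI design the book actually develops), a thread-safe counter hides its synchronization behind its interface:

```python
# Minimal sketch of a concurrent data structure: the lock that callers would
# otherwise manage by hand is encapsulated inside the object (generic example,
# not taken from the book).
import threading

class ConcurrentCounter:
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:          # synchronization hidden behind the interface
            self._value += 1

    def get(self):
        with self._lock:
            return self._value

counter = ConcurrentCounter()
threads = [threading.Thread(target=lambda: [counter.increment() for _ in range(10_000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.get())  # 40000, regardless of thread interleaving
```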

