
Thursday, January 13, 2022

Building Big Data Pipelines with Apache Beam: Use a single programming model for both batch and stream data processing

  • Length: 342 pages
  • Edition: 1
  • Publisher: Packt Publishing
  • Publication Date: 2022-01-21

Implement, run, operate, and test data processing pipelines using Apache Beam

Key Features

  • Understand how to improve usability and productivity when implementing Beam pipelines
  • Learn how to use stateful processing to implement complex use cases using Apache Beam
  • Implement, test, and run Apache Beam pipelines with the help of expert tips and techniques

Book Description

Apache Beam is an open source unified programming model for implementing and executing data processing pipelines, including Extract, Transform, and Load (ETL), batch, and stream processing.
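To make the "single programming model" idea concrete, here is a minimal word-count pipeline in the Java SDK, the language the book assumes. This is an illustrative sketch, not code from the book; the input and output paths are placeholders:

import java.util.Arrays;

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.FlatMapElements;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.TypeDescriptors;

public class MinimalWordCount {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    p.apply("Read", TextIO.read().from("input.txt"))   // placeholder input path
        .apply("Split", FlatMapElements.into(TypeDescriptors.strings())
            .via((String line) -> Arrays.asList(line.split("\\s+"))))
        .apply("Count", Count.perElement())            // one KV<word, count> per distinct word
        .apply("Format", MapElements.into(TypeDescriptors.strings())
            .via((KV<String, Long> kv) -> kv.getKey() + ": " + kv.getValue()))
        .apply("Write", TextIO.write().to("counts"));  // placeholder output prefix

    p.run().waitUntilFinish();
  }
}

Swapping TextIO for an unbounded source such as KafkaIO (plus a windowing step before the aggregation) turns the same pipeline shape into a streaming job; the transforms themselves do not change.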

This book will help you to confidently build data processing pipelines with Apache Beam. You’ll start with an overview of Apache Beam and understand how to use it to implement basic pipelines. You’ll also learn how to test and run the pipelines efficiently. As you progress, you’ll explore how to structure your code for reusability and how to use various domain-specific languages (DSLs). Later chapters will show you how to use schemas and query your data using (streaming) SQL, as sketched below. Finally, you’ll understand advanced Apache Beam concepts, such as implementing your own I/O connectors.
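As a taste of the schema and SQL material, the sketch below uses the Java SDK's SQL extension (the beam-sdks-java-extensions-sql module). The field names and sample values are invented for illustration; SqlTransform exposes a single input PCollection under the fixed table name PCOLLECTION:

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.extensions.sql.SqlTransform;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.schemas.Schema;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.Row;

public class StreamingSqlSketch {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // Rows carry a schema, which is what makes a PCollection queryable with SQL.
    Schema schema = Schema.builder()
        .addStringField("userId")
        .addInt64Field("score")
        .build();

    PCollection<Row> events = p.apply(Create.of(
            Row.withSchema(schema).addValues("alice", 3L).build(),
            Row.withSchema(schema).addValues("alice", 5L).build(),
            Row.withSchema(schema).addValues("bob", 2L).build())
        .withRowSchema(schema));

    // The single input PCollection is visible to the query as PCOLLECTION.
    PCollection<Row> totals = events.apply(
        SqlTransform.query(
            "SELECT userId, SUM(score) AS total FROM PCOLLECTION GROUP BY userId"));

    p.run().waitUntilFinish();
  }
}

On an unbounded, windowed input, the same GROUP BY becomes a streaming aggregation whose results are emitted as windows close.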

By the end of this book, you’ll have gained a deep understanding of the Apache Beam model and be able to apply it to solve problems.

What you will learn

  • Understand the core concepts and architecture of Apache Beam
  • Implement stateless and stateful data processing pipelines
  • Use state and timers to process real-time events (see the state and timers sketch after this list)
  • Structure your code for reusability
  • Use streaming SQL to process real-time data, increasing productivity and data accessibility
  • Run a pipeline using a portable runner and implement data processing using the Apache Beam Python SDK
  • Implement Apache Beam I/O connectors using the Splittable DoFn API (see the sketch after this list)
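As a preview of the stateful processing chapters, here is a sketch of a per-key buffered counter built from a ValueState and a processing-time Timer. The class name, the state layout, and the one-minute flush interval are illustrative assumptions, not code from the book:

import org.apache.beam.sdk.state.StateSpec;
import org.apache.beam.sdk.state.StateSpecs;
import org.apache.beam.sdk.state.TimeDomain;
import org.apache.beam.sdk.state.Timer;
import org.apache.beam.sdk.state.TimerSpec;
import org.apache.beam.sdk.state.TimerSpecs;
import org.apache.beam.sdk.state.ValueState;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;
import org.joda.time.Duration;

// Illustrative stateful DoFn: accumulates a count per key and emits it
// one minute of processing time after the latest element for that key.
class BufferedCountFn extends DoFn<KV<String, Long>, KV<String, Long>> {

  @StateId("key")
  private final StateSpec<ValueState<String>> keySpec = StateSpecs.value();

  @StateId("count")
  private final StateSpec<ValueState<Long>> countSpec = StateSpecs.value();

  @TimerId("flush")
  private final TimerSpec flushSpec = TimerSpecs.timer(TimeDomain.PROCESSING_TIME);

  @ProcessElement
  public void process(
      @Element KV<String, Long> element,
      @StateId("key") ValueState<String> key,
      @StateId("count") ValueState<Long> count,
      @TimerId("flush") Timer flush) {
    key.write(element.getKey());
    Long current = count.read();
    count.write((current == null ? 0L : current) + element.getValue());
    // Push the flush one minute into the processing-time future.
    flush.offset(Duration.standardMinutes(1)).setRelative();
  }

  @OnTimer("flush")
  public void onFlush(
      OnTimerContext context,
      @StateId("key") ValueState<String> key,
      @StateId("count") ValueState<Long> count) {
    Long current = count.read();
    if (current != null) {
      context.output(KV.of(key.read(), current));
      count.clear();
    }
  }
}

It would be applied with ParDo.of(new BufferedCountFn()) to a PCollection<KV<String, Long>>; state and timers are only available on keyed input, because the runner partitions both by key.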
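The Splittable DoFn API from the last bullet is the foundation of modern Beam I/O connectors. The following minimal sketch follows the offset-range pattern from the Beam programming guide; the DoFn is hypothetical and simply emits each offset it claims, where a real connector would read the record stored at that offset:

import org.apache.beam.sdk.io.range.OffsetRange;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.splittabledofn.OffsetRangeTracker;
import org.apache.beam.sdk.transforms.splittabledofn.RestrictionTracker;

// Hypothetical splittable DoFn: emits every offset in its input range.
@DoFn.BoundedPerElement
class ReadOffsetRangeFn extends DoFn<OffsetRange, Long> {

  @GetInitialRestriction
  public OffsetRange initialRestriction(@Element OffsetRange element) {
    return element; // the element itself describes the work to be split
  }

  @NewTracker
  public OffsetRangeTracker newTracker(@Restriction OffsetRange restriction) {
    return new OffsetRangeTracker(restriction);
  }

  @ProcessElement
  public void process(
      RestrictionTracker<OffsetRange, Long> tracker, OutputReceiver<Long> out) {
    // tryClaim is the hook that lets the runner split or checkpoint the work.
    for (long i = tracker.currentRestriction().getFrom(); tracker.tryClaim(i); ++i) {
      out.output(i);
    }
  }
}

Because the runner can split or checkpoint the OffsetRange between tryClaim calls, a single large read can be parallelized and resumed without any connector-specific bookkeeping.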

Who this book is for

This book is for data engineers, data scientists, and data analysts who want to learn how Apache Beam works. Intermediate-level knowledge of the Java programming language is assumed.

Table of Contents

  1. Introduction to Data Processing with Apache Beam
  2. Implementing, Testing, and Deploying Basic Pipelines
  3. Implementing Pipelines Using Stateful Processing
  4. Structuring Code for Reusability
  5. Using SQL for Pipeline Implementation
  6. Using Your Preferred Language with Portability
  7. Extending Apache Beam’s I/O Connectors

