Learn how to take full advantage of Apache Kafka, the distributed, publish-subscribe queue for handling real-time data feeds. With this comprehensive book, you will understand how Kafka works and how it is designed. Authors Neha Narkhede, Gwen Shapira, and Todd Palino show you how to deploy production Kafka clusters; secure, tune, and monitor them; write rock-solid applications that use Kafka; and build scalable stream-processing applications. Learn how Kafka compares to other queues, and where it fits in the big data ecosystem. Dive into Kafka's internal design. Pick up best practices for developing applications that use Kafka. Understand the best way to deploy Kafka in production, including monitoring, tuning, and maintenance tasks. Learn how to secure a Kafka cluster.
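The description above centers on writing applications against Kafka's publish-subscribe model. As a rough illustration of that model (not an example taken from the book), here is a minimal producer/consumer sketch using the third-party kafka-python client; the broker address, topic name, and consumer group are placeholder assumptions.

```python
# Minimal publish-subscribe sketch with the kafka-python client.
# Assumes a broker at localhost:9092 and a topic named "events";
# both are illustrative placeholders, not values from the book.
from kafka import KafkaProducer, KafkaConsumer

# Publish one message to the "events" topic.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("events", key=b"user-42", value=b"page_view")
producer.flush()

# Read messages back from the same topic as part of a consumer group.
consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    group_id="demo-group",
    auto_offset_reset="earliest",
)
for record in consumer:
    print(record.key, record.value)
    break  # stop after one message for the sake of the example
```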
Get expert guidance on architecting end-to-end data management solutions with Apache Hadoop. While many sources explain how to use various components in the Hadoop ecosystem, this practical book takes you through the architectural considerations necessary to tie those components together into a complete tailored application, based on your particular use case. To reinforce those lessons, the book’s second section provides detailed examples of architectures used in some of the most commonly found Hadoop applications. Whether you’re designing a new Hadoop application, or planning to integrate Hadoop into your existing data infrastructure, Hadoop Application Architectures will skillfully guide you ...
Every enterprise application creates data, whether it consists of log messages, metrics, user activity, or outgoing messages. Moving all this data is just as important as the data itself. With this updated edition, application architects, developers, and production engineers new to the Kafka streaming platform will learn how to handle data in motion. Additional chapters cover Kafka's AdminClient API, transactions, new security features, and tooling changes. Engineers from Confluent and LinkedIn responsible for developing Kafka explain how to deploy production Kafka clusters, write reliable event-driven microservices, and build scalable stream processing applications with this platform. Throu...
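The updated edition above calls out Kafka's AdminClient API among the new topics. Purely as an illustration of the kind of administrative task that API covers (the book's own examples are not reproduced here), the sketch below creates and lists topics programmatically with the kafka-python client; the broker address, topic name, and partition settings are assumptions.

```python
# Sketch of programmatic topic administration, in the spirit of the
# AdminClient coverage mentioned above. Broker address, topic name,
# and partition counts are illustrative assumptions.
from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers="localhost:9092")
admin.create_topics([
    NewTopic(name="orders", num_partitions=3, replication_factor=1)
])
print(admin.list_topics())  # confirm the new topic is visible
admin.close()
```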
Distributed systems have become more fine-grained as organizations shift from code-heavy monolithic applications to smaller, self-contained microservices. But developing these systems brings its own set of problems. With lots of examples and practical advice, this expanded second edition takes a holistic view of the topics system architects and administrators must consider when building, managing, and evolving microservices architectures. Author Sam Newman provides you with a firm grounding in the concepts while diving into the latest solutions for modeling, integrating, testing, deploying, and monitoring your own autonomous services. Through real-world examples, you'll learn how organizations ...
Dr. P. Pushpa, Lecturer, School of Software Engineering, East China University of Technology, Nanchang, Jiangxi, China. Dr. V. Thamilarasi, Assistant Professor, Department of Computer Science, Sri Sarada College for Women (Autonomous), Salem, Tamil Nadu, India. Dr. S. Lakshmi Prabha, Associate Professor, Department of Computer Science, Seethalakshmi Ramaswami College, Tiruchirappalli, Tamil Nadu, India. Mrs. Sudha Nagarajan, Assistant Professor, Department of Computer Science, Excel College for Commerce and Science, Komarapalayam, Namakkal, Tamil Nadu, India.
Data lakes and warehouses have become increasingly fragile, costly, and difficult to maintain as data gets bigger and moves faster. Data meshes can help your organization decentralize data, giving ownership back to the engineers who produced it. This book provides a concise yet comprehensive overview of data mesh patterns for streaming and real-time data services. Authors Hubert Dulay and Stephen Mooney examine the vast differences between streaming and batch data meshes. Data engineers, architects, data product owners, and those in DevOps and MLOps roles will learn steps for implementing a streaming data mesh, from defining a data domain to building a good data product. Through the course of the book, you'll create a complete self-service data platform and devise a data governance system that enables your mesh to work seamlessly. With this book, you will: design a streaming data mesh using Kafka; learn how to identify a domain; build your first data product using self-service tools; apply data governance to the data products you create; learn the differences between synchronous and asynchronous data services; and implement self-services that support decentralized data.
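One way to picture the "data product" idea this description builds toward is a domain team publishing self-describing records to a domain-scoped topic that other teams can discover and consume. The sketch below is only an illustration under assumed conventions, not the authors' recipe: the topic naming scheme, record fields, and broker address are all placeholders.

```python
# Illustrative "data product" publish: a domain-owned, domain-named Kafka
# topic carrying self-describing JSON records. All names and fields are
# assumptions made for this sketch.
import json
from kafka import KafkaProducer

DOMAIN_TOPIC = "sales.orders.v1"  # assumed <domain>.<entity>.<version> convention

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda obj: json.dumps(obj).encode("utf-8"),
)
producer.send(DOMAIN_TOPIC, {
    "order_id": "A-1001",
    "amount": 49.95,
    "currency": "USD",
    "owner_team": "sales",  # governance metadata travels with the record
})
producer.flush()
```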
Slow systems are frustrating. They waste time and money. But making consistently great decisions about performance can be easy, if you understand what's going on. This book explains in a clear and thoughtful voice why systems perform the way they do. It's for anybody who's curious about how computer programs and other processes use their time and about what you can do to improve them. Through a mix of personal vignettes and technical use cases, Cary Millsap reviews the process of improving performance and provides best practices for optimizing systems efficiently. You'll learn how to identify the information needed to improve a system, how to find the root causes of performance issues, and how to fix them. You'll also learn how performance optimization is both a skill set and a mindset, and how to develop both over time. If you're a computer professional whose success relies on software that goes fast, by the end of this book you'll be able to identify, view, scope, analyze, and remedy performance issues with consistency and confidence.
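The book above is about understanding how programs spend their time. Purely as a generic illustration of taking that first measurement (not a technique drawn from the book), Python's built-in cProfile can show which call sites dominate a routine's runtime; the workload function here is a made-up stand-in.

```python
# Generic illustration of measuring where time goes, using the standard
# library's cProfile and pstats. The workload is a toy stand-in.
import cProfile
import pstats

def workload():
    total = 0
    for _ in range(100_000):
        total += sum(range(100))
    return total

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# Print the few call sites that dominate cumulative time.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```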
Before you can build analytics tools to gain quick insights, you first need to know how to process data in real time. With this practical guide, developers familiar with Apache Spark will learn how to put this in-memory framework to use for streaming data. You’ll discover how Spark enables you to write streaming jobs in almost the same way you write batch jobs. Authors Gerard Maas and François Garillot help you explore the theoretical underpinnings of Apache Spark. This comprehensive guide features two sections that compare and contrast the streaming APIs Spark now supports: the original Spark Streaming library and the newer Structured Streaming API. Learn fundamental stream processing concepts ...
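To make the "streaming jobs look like batch jobs" point above concrete, here is a minimal Structured Streaming sketch in PySpark; the rate source, window size, and console sink are illustrative choices, not examples from the book.

```python
# Minimal PySpark Structured Streaming sketch: the streaming query uses the
# same DataFrame API as a batch job. The "rate" source and console sink are
# placeholder choices for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

# Batch style would be spark.read...; the streaming version is spark.readStream...
events = spark.readStream.format("rate").option("rowsPerSecond", 5).load()

# Count events per 10-second window, exactly as a batch aggregation would.
counts = events.groupBy(F.window("timestamp", "10 seconds")).count()

query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination(30)  # let the demo run for ~30 seconds
query.stop()
```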
Get up to speed with Apache Drill, an extensible distributed SQL query engine that reads massive datasets in many popular file formats such as Parquet, JSON, and CSV. Drill reads data in HDFS or in cloud-native storage such as S3 and works with Hive metastores along with distributed databases such as HBase, MongoDB, and relational databases. Drill works everywhere: on your laptop or in your largest cluster. In this practical book, Drill committers Charles Givre and Paul Rogers show analysts and data scientists how to query and analyze raw data using this powerful tool. Data scientists today spend about 80% of their time just gathering and cleaning data. With this book, you’ll learn how Drill ...
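As a rough illustration of the "query raw files with SQL" idea described above, the sketch below sends a query to a local Drill instance over its REST interface using the requests library. The endpoint path, port, response shape, and file path are assumptions based on a default local install, so treat this as a sketch rather than a reference.

```python
# Sketch: query a raw CSV file through a local Drill instance's REST API.
# Endpoint, port, file path, and response fields are assumptions.
import requests

query = {
    "queryType": "SQL",
    "query": "SELECT * FROM dfs.`/tmp/customers.csv` LIMIT 5",
}
resp = requests.post("http://localhost:8047/query.json", json=query)
resp.raise_for_status()
for row in resp.json().get("rows", []):
    print(row)
```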
Implementing and designing systems that make suggestions to users are among the most popular and essential machine learning applications available. Whether you want customers to find the most appealing items at your online store, videos to enrich and entertain them, or news they need to know, recommendation systems (RecSys) provide the way. In this practical book, authors Bryan Bischof and Hector Yee illustrate the core concepts and examples to help you create a RecSys for any industry or scale. You'll learn the math, ideas, and implementation details you need to succeed. This book includes the RecSys platform components, relevant MLOps tools in your stack, plus code examples and helpful suggestions in PySpark, SparkSQL, FastAPI, and Weights & Biases. You'll learn: the data essential for building a RecSys; how to frame your data and business as a RecSys problem; ways to evaluate models appropriate for your system; methods to implement, train, test, and deploy the model you choose; metrics you need to track to ensure your system is working as planned; and how to improve your system as you learn more about your users, products, and business case.
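The description above names PySpark among the tools covered. As one generic way to stand up a first collaborative-filtering model in that stack (not necessarily the authors' approach), pyspark.ml's ALS can be fit on user/item ratings; the toy ratings and column names below are made up for illustration.

```python
# Generic collaborative-filtering sketch with pyspark.ml's ALS.
# The toy ratings and column names are illustrative, not from the book.
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("recsys-sketch").getOrCreate()

ratings = spark.createDataFrame(
    [(0, 10, 4.0), (0, 11, 1.0), (1, 10, 5.0), (1, 12, 3.0), (2, 11, 2.0)],
    ["user_id", "item_id", "rating"],
)

als = ALS(userCol="user_id", itemCol="item_id", ratingCol="rating",
          rank=8, maxIter=5, coldStartStrategy="drop")
model = als.fit(ratings)

# Top-3 item recommendations per user.
model.recommendForAllUsers(3).show(truncate=False)
```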