
DDIA Chapter 10, 11, 12 Notes

Chapter 10 Batch Processing

  1. Services (online systems): wait for a request from a client and try to handle it and respond as quickly as possible
  2. Batch processing (offline systems): takes a large amount of input data, runs a job to process it, and produces some output data
  3. Stream processing (near-real-time systems): somewhere between online and batch, consuming inputs shortly after they happen

Batch Processing with Unix Tools

Simple Log Analysis

  1. The log is the access log of an nginx web server, one line per request; by chaining a few Unix commands (awk to extract the URL field, sort, uniq -c, sort -r -n, head), you can find the 5 most popular pages on the site.


Chain of commands versus custom program

  1. You can also write a small program to do the same job, for example this Ruby script:

    ```ruby
    counts = Hash.new(0)                      # hash table mapping URL -> number of hits

    File.open('/var/log/nginx/access.log') do |file|
      file.each do |line|
        url = line.split[6]                   # the URL is the 7th whitespace-separated field
        counts[url] += 1
      end
    end

    top5 = counts.map{|url, count| [count, url] }.sort.reverse[0...5]
    top5.each{|count, url| puts "#{count} #{url}" }
    ```

  2. The script is less concise than the Unix pipeline, but more readable, and its execution flow is quite different: it aggregates in an in-memory hash table instead of sorting.

Sorting versus in-memory aggregation

  1. The script may run out of memory if the dataset is too large, because its hash table of URLs must fit in RAM; the Unix sort utility handles larger-than-memory data automatically by spilling sorted chunks to disk, and it can parallelize sorting across multiple CPU cores (a rough sketch of the spill-to-disk idea follows).
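
A rough sketch of the spill-to-disk idea (my own illustration, not how GNU sort is actually implemented): sort fixed-size chunks in memory, write each sorted run to a temporary file, then merge the runs.

```ruby
require 'tempfile'

CHUNK_LINES = 100_000  # how many lines to hold in memory at once (arbitrary)

# Externally sort the lines of a file using bounded memory
# (sketch only; assumes every input line ends with a newline).
def external_sort(input_path, output_path)
  runs = []

  # Phase 1: sort fixed-size chunks in memory and spill each sorted run to disk.
  File.open(input_path) do |f|
    f.each_slice(CHUNK_LINES) do |chunk|
      run = Tempfile.new('sorted-run')
      chunk.sort.each { |line| run.write(line) }
      run.flush
      run.rewind
      runs << run
    end
  end

  # Phase 2: k-way merge of the sorted runs into the final output.
  File.open(output_path, 'w') do |out|
    heads = runs.map(&:gets)                  # current (smallest unread) line of each run
    until heads.compact.empty?
      i = heads.each_index.reject { |j| heads[j].nil? }.min_by { |j| heads[j] }
      out.write(heads[i])
      heads[i] = runs[i].gets                 # advance the run we just consumed
    end
  end
ensure
  runs.each(&:close!) if runs
end
```

GNU sort builds on the same external-merge idea, with refinements such as sorting the in-memory runs on several CPU cores in parallel.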

The Unix Philosophy

> 1. Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new “features”.
> 2. Expect the output of every program to become the input to another, as yet unknown, program. Don’t clutter output with extraneous information. Avoid stringently columnar or binary input formats. Don’t insist on interactive input.
> 3. Design and build software, even operating systems, to be tried early, ideally within weeks. Don’t hesitate to throw away the clumsy parts and rebuild them.
> 4. Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you’ve finished using them.
  1. This approach of automation, rapid prototyping, incremental iteration, being friendly to experimentation, and breaking down large projects into manageable chunks sounds remarkably like the Agile and DevOps movements of today.

A uniform interface

  1. In Unix, that uniform interface is a file, i.e. an ordered sequence of bytes.

Separation of logic and IO wiring

  1. Unix tools read input from stdin and write output to stdout, leaving it to the shell to wire programs together.
  2. This makes them flexible: a program can focus purely on its logic and does not need to care where its input comes from or where its output goes (see the small example below).
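
A tiny sketch of that separation (my own example; `extract_url.rb` is a hypothetical name): the program contains only the logic and talks to the outside world purely through stdin and stdout, so the shell can wire it into any pipeline.

```ruby
#!/usr/bin/env ruby
# extract_url.rb (hypothetical): reads web-server log lines on stdin and writes
# only the requested URL (7th whitespace-separated field) to stdout. It neither
# knows nor cares where its input comes from or where its output goes.
STDIN.each_line do |line|
  fields = line.split
  puts fields[6] if fields.length > 6
end

# The wiring is done by the shell, not by this program, e.g.:
#   cat /var/log/nginx/access.log | ruby extract_url.rb | sort | uniq -c | sort -r -n | head -n 5
```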

Transparency and experimentation

  1. The input files to Unix commands are normally treated as immutable. This means you can run the commands as often as you want, trying various command-line options, without damaging the input files.
  2. You can end the pipeline at any point, pipe the output into less, and look at it to see if it has the expected form. This ability to inspect is great for debugging.
  3. You can write the output of one pipeline stage to a file and use that file as input to the next stage. This allows you to restart the later stage without rerunning the entire pipeline.
  4. However, the biggest limitation of Unix tools is that they run on only a single machine, which is where tools like Hadoop come in.

MapReduce and Distributed Filesystems

  1. MapReduce is a bit like Unix tools, but distributed across potentially thousands of machines: a fairly blunt but effective tool for processing large inputs and producing outputs.
  2. As with Unix tools, running a MapReduce job does not modify the input and has no side effects other than producing the output.
  3. Whereas Unix tools use stdin and stdout, MapReduce jobs in Hadoop read and write files on HDFS (Hadoop Distributed File System), an open source reimplementation of the Google File System.
  4. HDFS is based on the shared-nothing principle: it requires no special storage hardware, just commodity machines connected by a conventional network.
  5. A daemon process (the DataNode) runs on each machine, exposing a network service that allows other nodes to access the files stored on that machine.
  6. A central server called the NameNode keeps track of which file blocks are stored on which machine.
  7. File blocks are replicated on multiple machines so that the loss of one machine or disk does not mean data loss.

MapReduce Job Execution

  1. The pattern of the simple log analysis from earlier in the chapter maps onto MapReduce as follows:
    1. Read a set of input files and break them up into records (e.g. one log line per record).
    2. Call the mapper function to extract a key and a value from each input record.
    3. Sort all of the key-value pairs by key.
    4. Call the reducer function to iterate over the sorted key-value pairs; thanks to the sort, all pairs with the same key are adjacent, so the reducer sees each key together with all of its values.
  2. To create a MapReduce job, you implement two callback functions, the mapper and the reducer (a minimal sketch of the whole dataflow follows this list):
    1. Mapper: called once for every input record; it extracts the key and value from the record and may emit zero or more key-value pairs.
    2. Reducer: the framework collects all the values belonging to the same key and calls the reducer with an iterator over that collection of values.
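
A minimal single-process sketch of that dataflow (my own illustration, not the real Hadoop API), reusing the URL-count example:

```ruby
# Steps 1-4 above, simulated in one process: read records, map, sort by key,
# then call the reducer once per key with all of that key's values.

# Mapper: called once per input record; emits zero or more [key, value] pairs.
def map_record(line)
  url = line.split[6]
  url ? [[url, 1]] : []
end

# Reducer: called once per key with all of the values collected for that key.
def reduce_url(url, counts)
  [url, counts.sum]
end

records = File.readlines('/var/log/nginx/access.log')          # 1. read input records
pairs   = records.flat_map { |line| map_record(line) }          # 2. map
sorted  = pairs.sort_by { |key, _value| key }                   # 3. sort by key

results = sorted                                                # 4. group + reduce
  .chunk_while { |a, b| a[0] == b[0] }                          # adjacent pairs share a key
  .map { |group| reduce_url(group.first[0], group.map { |_k, v| v }) }

results.each { |url, count| puts "#{count} #{url}" }
```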

Distributed execution of MapReduce

  1. The main difference from pipelines of Unix commands is that MapReduce can parallelize a computation across many machines without you having to write code to handle the parallelism explicitly.
  2. A Hadoop MapReduce job's parallelization is based on partitioning: the input is typically a directory in HDFS, and each file or file block is a separate partition that can be processed by a separate map task.
  3. The MapReduce scheduler tries to run each mapper on one of the machines that stores a replica of its input file, provided that machine has enough spare RAM and CPU to run the map task. This is known as putting the computation near the data: it saves copying the input file over the network, reducing network load and increasing locality.
  4. The job code (e.g. a JAR file) is copied to the assigned machines; each map task starts, reads its input file, and passes one record at a time to the mapper callback.
  5. The reduce side is also partitioned: the framework hashes each key to determine which reducer it goes to, so all key-value pairs with the same key end up at the same reducer (see the sketch after this list).
  6. The dataset is usually too large to sort in memory, so each mapper partitions its output by reducer and writes sorted files to its local disk.
  7. Shuffle: once a mapper has finished reading its input and writing its sorted, partitioned output, each reducer connects to every mapper, downloads the files belonging to its partition, and merges them while preserving the sort order.
  8. The reducer is called with a key and an iterator; the iterator scans over all the records with that key, the reducer applies whatever logic it needs, and it can produce any number of output records.
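
A sketch of the partitioning rule mentioned in point 5 (my own illustration): hashing the key decides which reducer a key-value pair is sent to, so all pairs with the same key meet at the same reducer.

```ruby
NUM_REDUCERS = 3   # chosen by the job author; arbitrary here

# Real frameworks use a deterministic hash of the key; Ruby's String#hash is
# randomized per process, but it still illustrates the idea within one run.
def partition_for(key)
  key.hash % NUM_REDUCERS
end

# Each mapper writes one sorted output file per reducer partition to its local
# disk; in the shuffle, reducer i downloads the "partition i" file from every mapper.
mapper_output = [['/home', 1], ['/about', 1], ['/home', 1], ['/jobs', 1]]
partitions = Hash.new { |h, k| h[k] = [] }
mapper_output.each { |key, value| partitions[partition_for(key)] << [key, value] }

partitions.sort.each do |p, pairs|
  puts "partition #{p}: #{pairs.sort.inspect}"   # within a partition, pairs are sorted by key
end
```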

MapReduce workflows

  1. A single MapReduce job can only do so much: in the log analysis example, one job can determine the number of visits per page, but it cannot determine the top 5, which requires a second round of sorting.
  2. Such a chain of jobs is called a workflow: the output of the first job becomes the input of the second (a sketch follows this list).
  3. It is like a chained sequence of commands, except that each job writes its intermediate state to files rather than streaming it through a pipe.
  4. The output of a batch workflow is only considered valid once all of its jobs have completed successfully; because workflows can contain many jobs with dependencies between them, various workflow schedulers have been developed, such as Oozie and Azkaban.
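
A sketch of a two-job workflow for the log example (my own illustration; the file paths are made up): job 1 counts visits per URL and writes its result to files, and job 2 reads those files and computes the top 5.

```ruby
require 'fileutils'

# Job 1: count visits per URL and write the result to an intermediate
# "directory of output files", as a MapReduce job would.
counts = Hash.new(0)
File.foreach('/var/log/nginx/access.log') do |line|
  url = line.split[6]
  counts[url] += 1 if url
end

FileUtils.mkdir_p('/tmp/job1-output')
File.open('/tmp/job1-output/part-00000', 'w') do |out|
  counts.each { |url, count| out.puts "#{url}\t#{count}" }
end

# Job 2: runs only after job 1 has completed successfully (this sequencing is
# what workflow schedulers such as Oozie or Azkaban manage). It reads job 1's
# output as its input and finds the five most visited URLs.
top5 = File.foreach('/tmp/job1-output/part-00000')
  .map { |line| url, count = line.chomp.split("\t"); [count.to_i, url] }
  .sort.reverse.first(5)

top5.each { |count, url| puts "#{count} #{url}" }
```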

Reduce-side Joins and Grouping

  1. Join in batch processing means resolving all occurrences of some association within a dataset.

Example: analysis of user activity events


  1. The company wants to analyze which age groups are most interested in which pages. The activity events contain only a user ID, so they need to be joined with the user profile database.
  2. One implementation of the join would be to query the user database over the network for every user ID the job encounters, but this is far too slow.
  3. To get good throughput, the computation should be as local to one machine as possible: a random-access network request for every record is prohibitively slow, and querying a remote database also makes the job nondeterministic, because the remote data may change while the job is running.
  4. A better approach is therefore to take a copy of the user database and put it in the same distributed filesystem as the activity events, so the join can be processed in one place as a batch job.

Sort-merge joins


  1. Mappers emit key-value pairs keyed by user ID from both inputs (activity events and user profiles); the framework sorts this output by key so that the records to be joined become adjacent, and the reducers merge the sorted runs and perform the join logic (see the sketch below).
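
A sketch of the reducer side of such a join (my own illustration; the field names and values are made up). It assumes the framework has arranged, via a secondary sort, that each user's profile record arrives before that user's activity events:

```ruby
require 'date'

# Reducer for a reduce-side sort-merge join: called once per user_id, with the
# user's profile record first (thanks to the secondary sort) and that user's
# activity events after it.
def join_reducer(user_id, records)
  date_of_birth = nil
  records.each do |record|
    case record[:type]
    when :user_profile
      date_of_birth = record[:date_of_birth]          # keep only this one record in memory
    when :page_view
      # Emit (viewed URL, viewer's year of birth) for a later aggregation step,
      # e.g. "which age groups are most interested in which pages".
      puts "#{record[:url]}\t#{date_of_birth && date_of_birth.year}"
    end
  end
end

# What one reducer invocation might see for a single user_id (made-up data):
join_reducer(104, [
  { type: :user_profile, date_of_birth: Date.new(1991, 3, 29) },
  { type: :page_view,    url: '/updates' },
  { type: :page_view,    url: '/jobs' },
])
```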

From here on I will skip detailed note-taking and just read the original book quickly.

Part III of the book dives deep into implementation details, which is more than I currently need.

GROUP BY

Handling skew

Map-Side Joins

The Output of Batch Workflows

Comparing Hadoop to Distributed Databases

Beyond MapReduce

Chapter 11 Stream Processing

Chapter 12 The Future of Data Systems
