Big Data principles harnessed by Mumbai Dabbawalas

A well-written article chronicles how the Mumbai Dabbawalas use the same principles that power today's Big Data technologies. Titled "What is Common between Mumbai Dabbawalas and Apache Hadoop?", the article details how the entire process of collecting, shuffling, sorting, and delivering tiffin boxes by the Mumbai Dabbawalas mirrors the MapReduce algorithm that is key to processing huge data collections.

Here's the crux of the article:

  • Just as HDFS slices data into chunks and distributes them across individual nodes, each household hands its lunchbox to a Dabbawala.
  • All the lunchboxes are collected at a common place, tagged, and put into carriages with unique codes. This is the Mapper's job!
  • Based on the code, carriages headed for a common destination are sorted and loaded onto the respective trains. This is the Shuffle and Sort phase in MapReduce.
  • At each railway station, a Dabbawala picks up the carriage and delivers each box in it to the respective customer. This is the Reduce phase (see the sketch after this list).
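To make the analogy concrete, here is a minimal sketch in plain Python that mimics the four steps above as map, shuffle-and-sort, and reduce functions. This is an illustration only, not actual Hadoop code, and the lunchbox IDs and station codes are invented for the example.

    # A toy simulation of the dabbawala flow as MapReduce phases.
    # Plain Python, not Hadoop; lunchbox data and codes are made up.

    from collections import defaultdict

    # "HDFS split": each household hands one lunchbox to a local dabbawala.
    lunchboxes = [
        ("box-01", "VLP-3"),   # (lunchbox id, destination code)
        ("box-02", "CST-7"),
        ("box-03", "VLP-3"),
        ("box-04", "DDR-2"),
        ("box-05", "CST-7"),
    ]

    def map_phase(box_id, code):
        """Mapper: tag each lunchbox with its destination code as the key."""
        return (code, box_id)

    def shuffle_and_sort(tagged):
        """Shuffle & Sort: group lunchboxes by destination code,
        like loading coded carriages onto the right trains."""
        groups = defaultdict(list)
        for code, box_id in sorted(tagged):
            groups[code].append(box_id)
        return groups

    def reduce_phase(code, boxes):
        """Reducer: at each station, deliver every box for that code."""
        return f"Station {code}: delivered {len(boxes)} boxes ({', '.join(boxes)})"

    tagged = [map_phase(b, c) for b, c in lunchboxes]
    for code, boxes in shuffle_and_sort(tagged).items():
        print(reduce_phase(code, boxes))

In real Hadoop, the Mapper and Reducer run as separate tasks across a cluster, and the framework itself handles the shuffle and sort between them, which is exactly the coordination the Dabbawalas do by hand.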

What's your take on this?

I strongly recommend reading the full article here.


