So, it has been a month or so since I was last here. I was busy at work and traveling out of the country.
I am starting a new series, which I am calling the Data Engineering Series, in which I will be discussing different tools. Of course, I cannot cover the entire field of Data Engineering, nor do I claim to know it all; I will be learning along the way myself.
What is Data Engineering?
Data Engineering is all about developing and maintaining systems that transfer data in large volumes and make it available to analysts and data scientists for analysis and data modeling. Whether Data Engineering is a superset or a subset of Data Science is not clear to me, but the collaboration of data engineers and data scientists yields useful data-driven solutions.
Data Engineering tools
The field involves several tools. Some deal with data storage, while others handle analysis and ETL. I have already covered Kafka recently. Other tools I might cover are Apache Airflow, an ETL/workflow tool, and Hadoop ecosystem components like HDFS, Hive, YARN, Pig, etc. There is no specific roadmap, so the tools can be covered in any order. Since I mostly work in Python, I will try my best to find ways to interact with these tools from Python, but that is not strictly necessary, as most Hadoop-related systems are written in either Java or Scala.
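Before diving into those tools, here is a rough sketch of what ETL (Extract, Transform, Load) means, in plain Python. This is a toy illustration only; the function names and sample data are made up, and real tools like Airflow add scheduling, retries, and scale on top of steps like these:

```python
# A toy ETL pipeline: extract raw records, transform them,
# and load them into a destination store (here, just a dict).

def extract():
    # In practice this would read from a database, an API, or a Kafka topic.
    return [{"user": "alice", "spend": "10.5"},
            {"user": "bob", "spend": "7.25"}]

def transform(records):
    # Clean and reshape the data: cast the string amounts to floats.
    return [{"user": r["user"], "spend": round(float(r["spend"]), 2)}
            for r in records]

def load(records, store):
    # Write each record into the destination, keyed by user.
    for r in records:
        store[r["user"]] = r["spend"]

store = {}
load(transform(extract()), store)
print(store)  # {'alice': 10.5, 'bob': 7.25}
```

Each of the three steps maps to a stage that dedicated tools orchestrate and scale independently.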
So, stay tuned; I will be back shortly with a new post.
Image Credit: Lynda