Abhishek brings seven years of rich experience in the field of Big Data technology. He has fostered Hadoop architectural design and consulted on technology solutions for a range of organizations. He has conceptualized process optimization by building customized big data solutions using the Hadoop ecosystem, i.e. Hive, Pig, Spark, HBase, MongoDB, Zookeeper, Sqoop, Flume, Ambari and many more.
He holds key responsibilities for deploying cost-effective Hadoop environments, with strategic planning that includes selecting the right Hadoop distribution (HortonWorks, Cloudera, AWS EMR or BigInsights) for an organization.
He is an ardent advocate of hosting live webinars and sharing knowledge on Hadoop.
His corporate experience has led him to explore areas such as Hadoop R&D, Big Data technologies, Amazon Web Services (EC2, S3, IoT, RedShift), Hadoop administration, IBM Netezza database administration, data warehousing, data mining (Netezza, Oracle PL/SQL and Microsoft SQL Server), development, ETL and advanced analytics.
He has extensive exposure to various programming and query languages, including Linux shell scripting, Python, PL/SQL, HQL, Pig, Java, VBA, R, SAS and Awk. He has also conceptualized and developed standardized procedures, SQL queries and VBA code to streamline various processes. During his one-year Big Data (Data Scientist) program at TCS, he gained exposure to LP modelling, operations research, supply chain management, machine learning, natural language processing and many more areas that can add value to any business.