
FAQ:

Big Data refers to data sets whose size is beyond the ability of current software tools to capture, manage, and process within a reasonable time.

There is no absolute answer to this question. Based on the infrastructure currently on the market, the lower threshold is at about 1 to 3 terabytes.
However, using Big Data technologies can make sense for smaller databases as well, for example when complex mathematical or statistical analyses are run against a database. Netezza, for instance, offers about 200 built-in functions and supports languages such as Revolution R and Python, which can be used in such cases.

Eventually the big data hype will wear off, but studies show that big data adoption will continue to grow. With a projected $16.9B market by 2015 (Wikibon goes even further to say $50B by 2017), it is clear that big data is here to stay. However, the big data talent pool is lagging behind and will need to catch up to the pace of the market. McKinsey & Company estimated in May 2011 that by 2018, the US alone could face a shortage of 140,000 to 190,000 people with deep analytical skills as well as 1.5 million managers and analysts with the know-how to use the analysis of big data to make effective decisions.
The emergence of big data analytics has permanently altered many businesses’ way of looking at data. Big data can take companies down a long road of staff, technology, and data storage augmentation, but the payoff – rapid insight into never-before-examined data – can be huge. As more use cases come to light over the coming years and technologies mature, big data will undoubtedly reach critical mass and will no longer be labeled a trend. Soon it will simply be another mechanism in the BI ecosystem.

The Apache Hadoop software library allows for the distributed processing of large data sets across clusters of computers using a simple programming model. The library is designed to scale from single servers to thousands of machines, each offering local computation and storage. Rather than relying on hardware to deliver high availability, the library itself detects and handles failures at the application layer, so a highly available service can be delivered on top of a cluster of machines that may individually fail.
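As an illustration of that simple programming model, here is a minimal sketch (not taken from the Hadoop documentation) of a driver class that configures and submits a MapReduce job. The class names WordCountDriver, WordCountMapper, and WordCountReducer are illustrative; the mapper and reducer themselves are sketched after the MapReduce discussion below, and the same driver runs unchanged whether the cluster has one node or thousands.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Illustrative driver: configures a word-count job and submits it to the
// cluster (or to the local runner when no cluster is configured).
public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCountDriver.class);

        // Mapper and reducer classes are sketched later in this article.
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // Input and output paths, e.g. directories on HDFS.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

On a real cluster this class would typically be packaged into a jar and submitted with the hadoop jar command.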

Hadoop’s popularity is partly due to the fact that some of the world’s largest Internet businesses use it to analyze unstructured data. Hadoop enables distributed applications to handle data volumes on the order of petabytes.

Hadoop originally derives from MapReduce, a programming model published by Google. Google’s MapReduce framework could break a program down into many parallel computations and run them on very large data sets across a large number of computing nodes. A typical use for such a framework is running search algorithms on Web data.
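To make the map and reduce phases concrete, here is a minimal word-count sketch (illustrative, not Google’s or Hadoop’s own code) of the WordCountMapper and WordCountReducer referenced by the driver above: mappers process input splits in parallel and emit (word, 1) pairs, and reducers sum the counts for each word.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Map phase: each mapper handles one split of the input in parallel
// and emits a (word, 1) pair for every token it sees.
class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer tokens = new StringTokenizer(value.toString());
        while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            context.write(word, ONE);
        }
    }
}

// Reduce phase: the framework groups all pairs by word, and each reducer
// sums the counts for its share of the words.
class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable count : values) {
            sum += count.get();
        }
        context.write(key, new IntWritable(sum));
    }
}
```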
Hadoop, initially associated only with web indexing, evolved rapidly to become a leading platform for analyzing big data. Cloudera, an enterprise software company, began providing Hadoop-based software and services in 2008.
In 2012, GoGrid, a cloud infrastructure company, partnered with Cloudera to accelerate the adoption of Hadoop-based business applications. Also in 2012, Dataguise, a data security company, launched a data protection and risk assessment tool for Hadoop.

The short answer: dual-processor/dual-core machines with 4-8 GB of ECC RAM, depending on workflow needs. To be most cost-effective, these should be moderately high-end commodity machines, typically costing one-half to two-thirds as much as normal production application servers, but they are not desktop-class machines. This cost tends to be $2-5K.

  • Basic knowledge of back-end programming languages such as Java, and of OOP
  • Writing reliable and maintainable code
  • Good knowledge of concurrency and multi-threading concepts (see the short sketch after this list)
  • Data processing for generating patterns
  • Sound knowledge of the Hadoop ecosystem and its components: Pig, Hive, Flume, etc.
  • Analytical skills applied to the Big Data domain
  • Data cleansing for verification and validation
It is not mandatory that every Hadoop developer be involved in all of the above; these are simply commonly performed tasks. The job role differs from organization to organization depending on the domain, the plans, and the size of the team.
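As a small illustration of the concurrency bullet above, the following sketch (the class and method names are hypothetical) fans per-record cleansing out over a fixed-size thread pool using Java’s standard ExecutorService:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical example: cleanse a batch of records on several threads.
public class ParallelCleanser {
    public static void cleanse(List<String> records) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        for (String record : records) {
            // Stand-in for a real per-record validation/cleansing step.
            pool.submit(() -> System.out.println(record.trim().toLowerCase()));
        }
        pool.shutdown();                            // stop accepting new work
        pool.awaitTermination(1, TimeUnit.MINUTES); // wait for in-flight tasks
    }
}
```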

When “Big Data” emerged as a problem, Apache Hadoop evolved as a solution to it. Apache Hadoop is a framework that provides various tools and services to store and process Big Data. It helps in analyzing Big Data and making business decisions from it, which cannot be done efficiently and effectively using traditional systems.

The main components of Hadoop are the following; a short HDFS usage sketch follows the list:

  • Storage unit – HDFS (NameNode, DataNode)
  • Processing framework – YARN (ResourceManager, NodeManager)
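Here is a minimal sketch of how client code interacts with these components, assuming a NameNode reachable at the placeholder address hdfs://namenode:8020: the client asks the NameNode where a file’s blocks should live, while the bytes themselves are streamed to and from DataNodes.

```java
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Minimal HDFS client: metadata operations go to the NameNode, block data
// is written to DataNodes.
public class HdfsWriteExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder NameNode address; replace with your cluster's value.
        conf.set("fs.defaultFS", "hdfs://namenode:8020");

        try (FileSystem fs = FileSystem.get(conf);
             FSDataOutputStream out = fs.create(new Path("/user/demo/hello.txt"))) {
            out.write("Hello HDFS".getBytes(StandardCharsets.UTF_8));
        }
    }
}
```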

The core Hadoop components, HDFS and MapReduce, cover data storage and processing: HDFS handles storage and MapReduce handles processing. Ecosystem tools such as Pig and Hive sit on top of them for analytics, and visualization tools such as Tableau and QlikView can be connected to Hadoop for visual reporting.

 

