Big Data poses a problem, and Hadoop provides the solution.
Earlier, data was stored in an RDBMS. The limitation of an RDBMS is that it can only store structured data, yet around 80% of the data generated today is unstructured and cannot be stored in an RDBMS. Hadoop emerged to solve this problem. HDFS, the storage layer of Hadoop, is highly flexible: it can store any kind of data.
The second problem is that an RDBMS cannot store huge volumes of data reliably, and the amount of data we generate is growing day by day. Roughly 90% of the world's data was generated in the last few years, which shows how quickly data is being produced.
Thus, Hadoop solves both problems: it stores huge amounts of data efficiently, and it stores all kinds of data.
Let's learn about Big Data in detail.
According to Gartner:
Big Data is high-volume, high-velocity, and high-variety information assets that demand innovative platforms for enhanced insight and decision making.
Big Data is classified in terms of:
Volume – Today, data sizes have grown to the scale of terabytes of records or transactions.
Variety – Data comes in a huge variety based on internal, external, behavioral, and/or social sources, and it can be structured, semi-structured, or unstructured.
Velocity – Data arriving in huge volumes must be ingested in near or real time.
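To make the variety dimension concrete, here is a small illustrative sketch (not from the original text; the sample contents are invented) showing how differently structured, semi-structured, and unstructured data are handled in code:

```python
import csv
import io
import json

# Structured data: a fixed schema of rows and columns, as in an RDBMS table or CSV.
structured = io.StringIO("id,name\n1,Alice\n2,Bob")
rows = list(csv.DictReader(structured))

# Semi-structured data: self-describing, but with a flexible schema (e.g. JSON).
semi = '{"id": 1, "tags": ["a", "b"], "meta": {"source": "web"}}'
record = json.loads(semi)

# Unstructured data: no schema at all (free text, images, audio, video).
unstructured = "Most of the data generated today looks like this free-form text."

print(rows[0]["name"])           # structured fields are addressable by column name
print(record["meta"]["source"])  # semi-structured fields by nested key path
print(len(unstructured.split())) # unstructured data needs interpretation first
```

An RDBMS handles only the first shape well; HDFS simply stores the bytes of all three, leaving interpretation to the processing layer.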