*Sheng-Jung Hsiao (Chung Yuan Christian University (CYCU)) email@example.com
As technology advances, we are surrounded by omnipresent big data. Every minute, Facebook users share nearly 2.5 million pieces of content and YouTube users upload 72 hours of new video. How can we cope with data at this scale? Apache Hadoop is an open-source software framework for the storage and large-scale processing of data sets on clusters of commodity hardware. Using MapReduce in Hadoop, we can access and mine these data easily.
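To illustrate the MapReduce programming model the text refers to, the sketch below implements the classic word-count example in plain Python. This is only an illustration of the map and reduce phases as a pattern; it is not Hadoop's actual Java API, and the function names (`map_phase`, `reduce_phase`) are hypothetical.

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Shuffle/reduce: group the pairs by key and sum the counts.
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

docs = ["big data big clusters", "data everywhere"]
result = reduce_phase(map_phase(docs))
print(result)
```

In Hadoop, the map and reduce phases run as distributed tasks across the cluster, with the framework handling the intermediate shuffle and sort between them; the sketch collapses all of that into a single process to show the data flow.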