We have designed and implemented the Google File System, a scalable distributed file system for large distributed data-intensive applications. It provides fault tolerance while running on inexpensive commodity hardware, and it delivers high aggregate performance to a large number of clients. While sharing many of the same goals as previous distributed file systems, our design has been driven by observations of our application workloads and technological environment, both current and anticipated, that reflect a marked departure from some earlier file system assumptions. This has led us to reexamine traditional choices and explore radically different design points. The file system has successfully met our storage needs.
It is widely deployed within Google as the storage platform for the generation and processing of data used by our service as well as research and development efforts that require large data sets. The largest cluster to date provides hundreds of terabytes of storage across thousands of disks on over a thousand machines, and it is concurrently accessed by hundreds of clients. In this paper, we present file system interface extensions designed to support distributed applications, discuss many aspects of our design, and report measurements from both micro-benchmarks and real world use.