UNIX and HDFS - file systems on the same partition

I am learning Hadoop. As part of that, I have seen that HDFS (Hadoop Distributed File System) has commands similar to UNIX, with which we can create, copy, and move files between the UNIX/Linux file system and HDFS.
My questions are:
1) How can two file systems (UNIX and HDFS) coexist on the same partition?
2) What if a block used by one FS is overwritten/modified by the other FS?
3) Who handles this block-management mechanism in order to avoid unwanted results?
4) What happens to inodes (UNIX) pointing to data blocks being used/blocked by HDFS?
Please explain what happens at the block level when multiple file systems coexist on the same partition.

Multiple filesystems DO NOT coexist on a partition.

Hadoop works in one of two ways:

1) It creates ordinary files on an underlying UNIX filesystem and uses UNIX system calls to allocate blocks to those files, read blocks from them, write blocks to them, and so on.

2) It creates an HDFS filesystem directly on a block-special (or character-special) UNIX device file, i.e. a partition where no UNIX filesystem is allocated, and it still uses UNIX read and write system calls to read and write HDFS blocks to that underlying device file.

In both cases, the fact that the data written is treated as a filesystem by Hadoop, and may contain many files that Hadoop serves to users of HDFS, is invisible to the underlying UNIX system. UNIX only ever sees ordinary file (or device) reads and writes, so the two filesystems never compete for the same disk blocks, and no special block-arbitration mechanism is needed.
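To make the first (and most common) case concrete: each DataNode stores HDFS block data as ordinary files inside directories on the local UNIX filesystem, configured via the real `dfs.datanode.data.dir` property in `hdfs-site.xml`. A minimal sketch, assuming a hypothetical path:

```xml
<!-- hdfs-site.xml: minimal sketch; the path below is a hypothetical example -->
<configuration>
  <property>
    <!-- Directory on the local (UNIX) filesystem where the DataNode
         stores HDFS block data as ordinary files -->
    <name>dfs.datanode.data.dir</name>
    <value>/data/hadoop/dfs/data</value>
  </property>
</configuration>
```

If you run `ls` in a subdirectory under that path on a DataNode, you will see regular files with names like `blk_1073741825` plus matching `blk_..._*.meta` checksum files. To the UNIX kernel these are just files with their own inodes, which also answers question 4: UNIX inodes point at the data blocks of these files in the normal way, and HDFS never touches raw disk blocks behind the kernel's back.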
