To decide on a mechanism for storing a large number of files and querying them by metadata, we have two options:
a) storing the file as a BLOB in the database alongside its metadata fields, or
b) storing only the metadata in the database and the actual file itself on the filesystem.
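As a minimal sketch of option (b), here is what the split looks like with a SQLite table for the metadata and a plain directory standing in for the filesystem store (the table layout and function names are made up for illustration):

```python
import os
import sqlite3
import tempfile

# Hypothetical layout: metadata rows live in SQLite, file bodies on disk.
store_dir = tempfile.mkdtemp()
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE files (id TEXT PRIMARY KEY, name TEXT, size INTEGER, path TEXT)")

def put_file(file_id, name, data):
    # Write the body to the filesystem, then record its metadata in the database.
    path = os.path.join(store_dir, file_id)
    with open(path, "wb") as f:
        f.write(data)
    db.execute("INSERT INTO files VALUES (?, ?, ?, ?)", (file_id, name, len(data), path))
    db.commit()

def get_file(file_id):
    # Metadata queries hit the database; the body is read back from disk.
    row = db.execute("SELECT path FROM files WHERE id = ?", (file_id,)).fetchone()
    with open(row[0], "rb") as f:
        return f.read()

put_file("f1", "report.pdf", b"hello")
print(get_file("f1"))  # b'hello'
```

Note that the write touches two systems, which is exactly where the consistency questions discussed later come from.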
An excellent source of information on the trade-offs between these two options is the paper "To BLOB or Not To BLOB: Large Object Storage in a Database or a Filesystem?".
Going by the discussion in the paper, the trade-off can be made based on the size of the objects stored. For small files (on the order of a few hundred KB) a database offers higher read throughput. But as file sizes approach the MB range, the throughput of the filesystem grows faster than that of the database.
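The paper's measurements (SQL Server vs. NTFS) suggest rough cutoffs: below about 256 KB the database wins, above about 1 MB the filesystem wins, and in between it depends on the workload. A trivial routing function based on those numbers might look like this; the exact thresholds are assumptions you would want to re-measure for your own stack:

```python
# Cutoffs loosely based on the paper's SQL Server vs. NTFS results;
# treat them as starting points, not universal constants.
DB_WINS_BELOW = 256 * 1024   # 256 KB
FS_WINS_ABOVE = 1024 * 1024  # 1 MB

def pick_store(size_bytes):
    """Route a blob to the database or the filesystem by size."""
    if size_bytes < DB_WINS_BELOW:
        return "database"
    if size_bytes > FS_WINS_ABOVE:
        return "filesystem"
    return "either"  # gray zone: benchmark your own workload

print(pick_store(100 * 1024))        # database
print(pick_store(10 * 1024 * 1024))  # filesystem
```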
A key point the paper brings out is the impact disk fragmentation has on the performance of such a storage solution. One of the main reasons filesystem access is faster is the filesystem's ability to cope with the fragmentation caused by repeated updates. As the "storage age" (the average number of times a file has been replaced) increases, both the read and write performance of a database degrade, mainly because databases have no automated way of dealing with the fragmentation that repeated updates cause. Defragmenting a database requires explicit application logic to copy the BLOBs into a new table (on SQL Server).
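The copy-based defragmentation described above can be imitated in any SQL database: rewrite every BLOB row into a fresh table so the pages are laid out contiguously again, then swap the tables. A rough SQLite sketch of that application logic (table and column names are invented; SQLite is used only to show the shape of the operation):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE blobs (id INTEGER PRIMARY KEY, body BLOB)")
db.executemany("INSERT INTO blobs VALUES (?, ?)", [(i, bytes(64)) for i in range(100)])

def defragment(conn):
    # Copy every row into a new table, then swap it in place of the old one.
    # On SQL Server this bulk rewrite is what forces the BLOB pages to be
    # reallocated contiguously; here it just demonstrates the steps.
    conn.execute("CREATE TABLE blobs_new (id INTEGER PRIMARY KEY, body BLOB)")
    conn.execute("INSERT INTO blobs_new SELECT id, body FROM blobs")
    conn.execute("DROP TABLE blobs")
    conn.execute("ALTER TABLE blobs_new RENAME TO blobs")
    conn.commit()

defragment(db)
print(db.execute("SELECT COUNT(*) FROM blobs").fetchone()[0])  # 100
```

Note that the whole table is rewritten, which is why doing this by hand at scale is painful and why automatic defragmentation would be valuable.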
So one of the requirements a good BLOB database solution needs to address is the ability to defragment automatically. At a minimum it should report how fragmented a BLOB is; on top of that it can offer further optimizations such as in-place defragmentation.
It feels like the idea of scalable "BLOB databases" in general (for lack of a better term) is still nascent. Most BLOB management solutions (for audio, video, or text) rely on distributed object stores like S3 or Ceph. Most of these don't even offer metadata storage alongside the data objects, let alone specialized indexing and search capabilities. It's left to the applications that push data objects into these stores to keep the database metadata and the stored objects synchronized.
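One common way applications cope with that synchronization burden is a periodic reconciliation sweep: find metadata rows with no backing object ("dangling") and objects with no metadata row ("orphaned"). A toy version, using a local directory as a stand-in for the object store (all names here are hypothetical):

```python
import os
import sqlite3
import tempfile

store_dir = tempfile.mkdtemp()
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE objects (key TEXT PRIMARY KEY)")

# Simulate drift between the two systems:
db.execute("INSERT INTO objects VALUES ('a')")    # metadata without data
open(os.path.join(store_dir, "b"), "wb").close()  # data without metadata

def reconcile():
    # Compare the set of keys each side knows about.
    db_keys = {row[0] for row in db.execute("SELECT key FROM objects")}
    fs_keys = set(os.listdir(store_dir))
    dangling = sorted(db_keys - fs_keys)  # repair: delete row or re-upload
    orphaned = sorted(fs_keys - db_keys)  # repair: delete object or re-index
    return dangling, orphaned

dangling, orphaned = reconcile()
print(dangling, orphaned)  # ['a'] ['b']
```

A real sweep against S3 or Ceph would page through object listings instead of a directory, but the set-difference logic is the same.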
Attempts such as HSS from AOL and Haystack are in some ways a start, although they are still far from being a specialized distributed database.