For better or for worse, the amount of data generated in the world grows exponentially. 2012 was dubbed the year of Big Data and the Data Deluge; in 2013 the petabyte scale is referenced matter-of-factly, and the exabyte is now in the vocabulary of storage providers and large organizations. Traditional copy-based technology does not scale into this territory: relational databases give up at many billions of rows per table, and typical file systems are not designed to store trillions of objects. Disks fail and networks are not always available, yet individuals, businesses and academic institutions demand 100% availability with no data loss. Is this a dead end? These lectures will describe a storage system, based on the IDA (Information Dispersal Algorithm), that is unlimited in scale, with a very high level of reliability and availability and unbounded, scalable indexing. And all this without any central facility anywhere in the system, and thus with no single point of failure and no scalability barriers.
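To make the IDA idea concrete, here is a minimal sketch (not the lecturer's implementation) of a Rabin-style (k, n) information dispersal scheme in Python, working over the prime field GF(257): the data is cut into groups of k bytes, each group is interpreted as the coefficients of a polynomial, and the polynomial is evaluated at n distinct points to produce n fragments. Any k fragments reconstruct the data exactly, and each fragment is only about 1/k of the original size, so the total storage overhead is n/k. All names and parameters below are illustrative.

    P = 257  # smallest prime larger than 255, so every byte value is a field element

    def disperse(data, k, n):
        """Split data into n fragments; any k of them can rebuild it."""
        assert 1 <= k <= n < P
        padded = data + bytes((-len(data)) % k)        # zero-pad to a multiple of k
        fragments = [[] for _ in range(n)]
        for i in range(0, len(padded), k):
            group = padded[i:i + k]                    # k bytes = k polynomial coefficients
            for x in range(1, n + 1):                  # evaluate the polynomial at x = 1..n
                y = 0
                for c in reversed(group):              # Horner's rule modulo P
                    y = (y * x + c) % P
                fragments[x - 1].append(y)             # values may reach 256, so keep ints
        return fragments

    def reconstruct(shares, k, length):
        """Rebuild the original bytes from any k fragments.
        shares maps fragment index x (1-based) to its list of values."""
        xs = list(shares)[:k]
        n_groups = len(next(iter(shares.values()))) if shares else 0
        out = bytearray()
        for g in range(n_groups):
            ys = [shares[x][g] for x in xs]
            coeffs = [0] * k                           # Lagrange interpolation in coefficient form
            for xj, yj in zip(xs, ys):
                basis, denom = [1], 1                  # basis polynomial: product of (X - xm), m != j
                for xm in xs:
                    if xm == xj:
                        continue
                    basis = ([(-xm * basis[0]) % P]
                             + [(basis[t - 1] - xm * basis[t]) % P for t in range(1, len(basis))]
                             + [basis[-1]])
                    denom = (denom * (xj - xm)) % P
                scale = yj * pow(denom, P - 2, P) % P  # divide by denom via Fermat inverse
                for t in range(k):
                    coeffs[t] = (coeffs[t] + scale * basis[t]) % P
            out.extend(coeffs)                         # recovered bytes of this group
        return bytes(out[:length])

    # Example: disperse onto 5 storage nodes, lose any 2, still recover the data.
    frags = disperse(b"exabyte-scale storage", k=3, n=5)
    surviving = {1: frags[0], 3: frags[2], 5: frags[4]}
    assert reconstruct(surviving, k=3, length=21) == b"exabyte-scale storage"

With such a scheme, losing up to n - k fragments (disks, nodes or even sites) costs nothing but a rebuild, which is where the reliability and availability claims above come from; production systems use GF(2^8) arithmetic and optimized erasure-coding libraries rather than this toy field.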
In this lecture we discussed:
Sponsor: Maria Dimou