CERN Computing Seminar

Adding Search as a first-class citizen to Hadoop

by Dr Wolfgang Hoschek (Cloudera)

31/3-004 - IT Amphitheatre (CERN)


Apache Hadoop is enabling organizations to collect larger, more varied data - but once it is collected, how will it be found? Your users expect to be able to search for information using simple text-based queries, regardless of data location, size, and complexity. How do they quickly find information that has just been created, or been stored for months or even years? Cloudera Search Senior Software Engineer Wolfgang Hoschek will present a solution to this problem: what architecture is necessary to search HDFS and HBase? How were Apache Solr, Lucene, Flume, and MapReduce integrated to allow for Near Real Time and Batch indexing of data? What are the solved problems and what is still to come? Join us for an exciting discussion of this new technology.
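The batch-indexing idea at the heart of full-text search can be illustrated with a toy inverted index. This is a minimal sketch for intuition only, not Cloudera Search's actual implementation (which builds Lucene/Solr index shards, e.g. via MapReduce jobs over HDFS data); all names and documents below are invented for illustration.

```python
# Toy inverted index: the core data structure behind full-text search.
# Illustration only -- not Cloudera Search's actual code.
from collections import defaultdict

def build_index(docs):
    """Map each term to the set of document ids that contain it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """Return the ids of documents containing every query term (AND query)."""
    terms = query.lower().split()
    if not terms:
        return set()
    result = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        result &= index.get(term, set())
    return result

# Invented sample documents, standing in for files stored in HDFS.
docs = {
    "doc1": "searching large Hadoop clusters",
    "doc2": "near real time indexing with Flume",
    "doc3": "batch indexing Hadoop data with MapReduce",
}
index = build_index(docs)
print(sorted(search(index, "hadoop indexing")))  # → ['doc3']
```

A batch indexer scans all stored data and builds such an index offline; near-real-time indexing instead updates the index incrementally as each new event arrives, which is the role Flume plays in the pipeline the talk describes.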

About the speaker

Wolfgang Hoschek is a Software Engineer at Cloudera working on the Hadoop Platform and Cloudera Search team. He is a committer on the Apache Flume and Apache Lucene/Solr projects, a committer on the Kite project, a committer on the Lily HBase Indexer project, and the lead developer on Morphlines. He is a former CERN fellow, a former Computer Scientist at Lawrence Berkeley National Laboratory, and a former Senior Software Engineer at Skytide. He has 15+ years of experience in large-scale distributed systems, data-intensive computing, and real-time analytics. He received his Ph.D. in Computer Science from the Technical University of Vienna, Austria.

Organised by: Sverre Jarp and Miguel Angel Marquina
Computing Seminars /IT Department

Video in CDS