2009Feb12


Discussed how information is passed around the Hadoop network: a job is sent as a class to the JobTracker via RPC over TCP, and the MapReduce framework communicates with the slave nodes in a similar way. (A sketch of the submission call follows.)
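A minimal sketch of that submission path, assuming the old-style (pre-0.20) mapred API that was current at the time; the job name and the input/output paths taken from the command line are placeholders, and the identity mapper/reducer stand in for real job logic:

 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.io.LongWritable;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.mapred.FileInputFormat;
 import org.apache.hadoop.mapred.FileOutputFormat;
 import org.apache.hadoop.mapred.JobClient;
 import org.apache.hadoop.mapred.JobConf;
 import org.apache.hadoop.mapred.lib.IdentityMapper;
 import org.apache.hadoop.mapred.lib.IdentityReducer;
 
 public class SubmitExample {
     public static void main(String[] args) throws Exception {
         // The JobConf names the classes to run; the job jar containing them
         // is shipped to the JobTracker over Hadoop's RPC-over-TCP protocol,
         // and the JobTracker hands tasks out to the slave nodes.
         JobConf conf = new JobConf(SubmitExample.class);
         conf.setJobName("fp-example"); // placeholder name
         conf.setOutputKeyClass(LongWritable.class);
         conf.setOutputValueClass(Text.class);
         conf.setMapperClass(IdentityMapper.class);
         conf.setReducerClass(IdentityReducer.class);
         FileInputFormat.setInputPaths(conf, new Path(args[0]));
         FileOutputFormat.setOutputPath(conf, new Path(args[1]));
         JobClient.runJob(conf); // blocks until the JobTracker reports completion
     }
 }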

Discussed robustness/failover: the distributed file system can be configured with a master node and a metadata backup node, but the JobTracker remains a single point of failure. Discussed potential layers for adding robustness, one analogous to transactions for MapReduce jobs (see the sketch below).
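A minimal sketch of the kind of transaction-like layer discussed, here just a hypothetical resubmission wrapper around the standard JobClient.runJob call; a real layer would also need to roll back or clean up partial output between attempts:

 import org.apache.hadoop.mapred.JobClient;
 import org.apache.hadoop.mapred.JobConf;
 import org.apache.hadoop.mapred.RunningJob;
 
 public class RetryingSubmitter {
     // Resubmit a job up to maxAttempts times before giving up.
     public static RunningJob runWithRetry(JobConf conf, int maxAttempts)
             throws Exception {
         if (maxAttempts < 1) {
             throw new IllegalArgumentException("maxAttempts must be >= 1");
         }
         Exception last = null;
         for (int attempt = 1; attempt <= maxAttempts; attempt++) {
             try {
                 // runJob throws an IOException if the job fails or the
                 // JobTracker cannot be reached.
                 return JobClient.runJob(conf);
             } catch (Exception e) {
                 last = e;
             }
         }
         throw last;
     }
 }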

Identified that the configuration files sitting on all machines mix the information slave nodes need simply to function as slave nodes (e.g., database configuration) with information about a particular job, as illustrated below.
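A small sketch of that mixing: both kinds of setting are read from the same site configuration files replicated to every machine. The property names "specify.db.url" and "fp.query" are hypothetical, for illustration only:

 import org.apache.hadoop.conf.Configuration;
 
 public class ConfigInspection {
     public static void main(String[] args) {
         // Hadoop layers hadoop-default.xml / hadoop-site.xml from the
         // classpath of whatever machine this code happens to run on.
         Configuration conf = new Configuration();
         // Node-level setting every slave needs just to function
         // (hypothetical property name).
         String dbUrl = conf.get("specify.db.url");
         // Job-level setting that only matters for one particular run
         // (likewise hypothetical).
         String query = conf.get("fp.query");
         System.out.println("db url = " + dbUrl + ", query = " + query);
     }
 }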

Maureen and Bob got Hadoop + Specify + FP running on Bob's laptop and documented the installation at Installing_and_Running_the_Prototype, in preparation for Bob's demo at a GBIF meeting.

Trac is to be set up to manage development priorities.