There are three kinds of artifacts on these diagrams.
- Boxes: system components
- Documents: icons "travel upon the arrows" and represent data flowing between system components
- Data stores: repositories implemented by third-party software such as MongoDB and Mulgara
The SCAN diagram
- we have a test instance that Maureen needs to get up and running so we can see where we stand with it. She needs the RSA app reinstalled so that she can access the VPN; call the RC helpdesk if necessary.
- not implemented: an Annotation Processor to go over MCZBase. This could proceed in three stages:
- have something that can read from prod
- have something that can write to spreadsheets of MCZBase-compatible batch upload files rather than the database itself
- have something that can write new determinations, updates to georeferences, and can display "solve with more data" annotations
Note that the Node in this diagram is FP2. The Annotation Processor for MCZBase would be installed on a machine in the MCZ.
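The second stage above amounts to serializing annotation-derived records into MCZBase-compatible batch-upload files rather than writing to the database. A minimal sketch in Python (the column names are placeholders; the real MCZBase batch-upload templates define the authoritative headers):

```python
import csv
import io

# Hypothetical column set; the actual MCZBase batch-upload templates
# define the real headers.
BATCH_COLUMNS = ["catalog_number", "scientific_name", "determiner", "date_identified"]

def write_batch_upload(records, fileobj):
    """Write annotation-derived records as rows of an MCZBase-style
    batch-upload spreadsheet instead of touching the database directly."""
    writer = csv.DictWriter(fileobj, fieldnames=BATCH_COLUMNS)
    writer.writeheader()
    for record in records:
        # Emit only the columns the upload template expects.
        writer.writerow({col: record.get(col, "") for col in BATCH_COLUMNS})

buf = io.StringIO()
write_batch_upload(
    [{"catalog_number": "MCZ:Ent:12345",
      "scientific_name": "Apis mellifera",
      "determiner": "A. Specialist",
      "date_identified": "2014-05-01"}],
    buf,
)
```

Producing files this way keeps the processor read-only with respect to MCZBase, which makes the first two stages safe to run against prod.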
- From FP2-Node to SCAN Symbiota on Symbiota1
- View Annotations is implemented. We have a tab in Symbiota for reading annotations from FP2-Node.
- not implemented: Taxon Interests -- a tab for specialists to see what's happening in their area of interest
- not on diagram: a way for researchers to query Mongo from Symbiota to obtain analysis results. We need a new page for this in Symbiota; it replaces the "run analysis" page in the Annotation Processor
- From Symbiota to the Harvester
- not explicitly depicted on the diagram: the OAI provider implementations that serve the three kinds of Symbiota records described below
- Taxon Trees into Mulgara: implemented
- Occurrence Records into Mongo: more or less implemented; we need to document the process, settle the format of the ids so that updates can happen, and add timestamp fields
- not implemented: OmOccurEdits into Mongo. Need something (an OAI provider) that converts records in the OmOccurEdits table in Symbiota into annotations to store in Knowledge. We also need the piece that stores those records in Knowledge.
- not implemented: we should update the harvester to output into Camel routes, or have a Camel component listen on the filesystem for newly harvested files
- not implemented: need to have some process invoke analysis after harvested records become available in Mongo.
- not implemented: we need to decide whether to harvest files into one directory and then write a component with the intelligence to route the files to each destination, or to harvest files into multiple directories and write components for each destination that know only about that particular destination.
Note that it seems more flexible to harvest the products of the editorial process and then convert them into annotations, rather than capture annotations at the point of form submission.
Note also that the harvester is on the same machine as Knowledge in this diagram (i.e. for SCAN).
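For the occurrence-record harvest, the open id and timestamp questions amount to choosing a stable key and stamping each harvested copy so that later harvests can supersede earlier ones. A sketch of one possible convention (the id scheme and field names are assumptions, not the project's settled format):

```python
from datetime import datetime, timezone

def occurrence_upsert(record, source="SCAN"):
    """Build an upsert for one harvested occurrence record, keyed on a
    stable id so a later harvest replaces the earlier copy."""
    # Hypothetical id convention: source prefix plus the Symbiota occid.
    stable_id = "{}:occurrence:{}".format(source, record["occid"])
    doc = dict(record)
    doc["_id"] = stable_id
    # Timestamp field so consumers can tell which copy is current.
    doc["harvested_at"] = datetime.now(timezone.utc).isoformat()
    # With pymongo this would be applied as:
    #   collection.replace_one({"_id": stable_id}, doc, upsert=True)
    return {"filter": {"_id": stable_id}, "replacement": doc, "upsert": True}

op = occurrence_upsert({"occid": 98765, "sciname": "Bombus impatiens"})
```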
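The OmOccurEdits conversion is essentially a per-row mapping from an edit record to an annotation document, in line with the note above about harvesting the products of the editorial process. A hedged sketch (field names on both sides are illustrative, not the actual OmOccurEdits schema or the annotation model used in Knowledge):

```python
def edit_to_annotation(edit_row):
    """Convert one OmOccurEdits row into a simple annotation document.
    All field names here are illustrative placeholders."""
    return {
        "type": "Annotation",
        "motivation": "editing",
        "target": {"occid": edit_row["occid"], "field": edit_row["fieldName"]},
        "oldValue": edit_row.get("fieldValueOld"),
        "newValue": edit_row.get("fieldValueNew"),
        "annotator": edit_row.get("uid"),
        "created": edit_row.get("initialTimestamp"),
    }

annotation = edit_to_annotation({
    "occid": 98765,
    "fieldName": "sciname",
    "fieldValueOld": "Bombus sp.",
    "fieldValueNew": "Bombus impatiens",
    "uid": 42,
    "initialTimestamp": "2014-05-01 12:00:00",
})
```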
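The single-directory option for routing harvested files would need one component that knows every destination. One way to sketch it (the filename convention and inbox directory names are invented for illustration, not an agreed format):

```python
import os
import shutil

# Hypothetical convention: the record type is the filename prefix, and each
# destination component watches its own inbox directory.
ROUTES = {
    "taxontree": "mulgara_inbox",
    "occurrence": "mongo_inbox",
    "occuredits": "mongo_inbox",
}

def route_harvested_file(path, base_dir):
    """Single-router option: one component knows all destinations and
    moves each harvested file into the inbox for its destination."""
    kind = os.path.basename(path).split("_", 1)[0].lower()
    dest_dir = os.path.join(base_dir, ROUTES.get(kind, "unrouted"))
    os.makedirs(dest_dir, exist_ok=True)
    return shutil.move(path, os.path.join(dest_dir, os.path.basename(path)))
```

The multiple-directory alternative would drop the `ROUTES` table and instead have the harvester write each record type into its own directory, with a destination-specific component watching each one; the trade-off is routing intelligence in one place versus simpler, decoupled per-destination components.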
- we need to have Akka configured with a single workflow that does all we want to do. We might replace that workflow later. That workflow has an actor to read data from Mongo.
- not implemented: we need a trigger for launching the workflow on newly harvested data in Mongo.
- we need to test StarterActor.
- we need to work out what parameters to pass to Analysis.
- we need an arrow that connects Analysis to Knowledge via generated annotations.
- we also need an agent in Akka that creates the annotations.
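The missing trigger for launching the workflow on newly harvested data in Mongo could start as a simple poll that keeps a watermark of the last harvest timestamp seen. A sketch under that assumption (the `harvested_at` field name is illustrative; with pymongo the filter would be `collection.find({"harvested_at": {"$gt": watermark}})`):

```python
def poll_for_new(docs, watermark):
    """Return records harvested after the watermark, plus the new
    watermark, so the workflow is launched only on fresh data.
    Here docs is any iterable of record dicts with ISO-format timestamps,
    which compare correctly as strings."""
    fresh = sorted((d for d in docs if d["harvested_at"] > watermark),
                   key=lambda d: d["harvested_at"])
    new_watermark = fresh[-1]["harvested_at"] if fresh else watermark
    return fresh, new_watermark

docs = [
    {"_id": "a", "harvested_at": "2014-05-01T00:00:00Z"},
    {"_id": "b", "harvested_at": "2014-05-02T00:00:00Z"},
]
fresh, mark = poll_for_new(docs, "2014-05-01T12:00:00Z")
```

Whatever invokes this poll could then tell the StarterActor to kick off the workflow for the fresh batch.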
Note that the Analysis results arrow is incorrect: it should go to Mongo, not Fedora.