Creating an Anzo Data Store

This topic provides instructions for creating an Anzo Data Store (sometimes called a graph data source). Creating a Data Store means designating a directory on the File Store where file-based linked data sets and other files can be created and shared during the ETL process. All installations require at least one Data Store. You can create one Data Store and configure all pipelines to write to it (each ETL run automatically creates a new subdirectory under the Data Store directory), or you can create multiple Data Stores for different datasets.
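For example, a Data Store whose data location is /opt/shared-files/data/my-store (a hypothetical path; the run-directory names below are also illustrative, not Anzo's actual naming scheme) accumulates one subdirectory per pipeline run:

    /opt/shared-files/data/my-store/
        <first-pipeline-run>/
        <second-pipeline-run>/
        ...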

For information about setting up a connection to the shared file system that will host the data store, see Connecting to a File Store.

  1. In the Administration application, expand the Connections menu and click Anzo Data Store. Anzo displays the Anzo Data Store screen, which lists any existing data stores.

    The Server Anzo Data Store is a default data store that points to the local Anzo file system. This store exists so that first-time users can quickly test the onboarding process. It is not meant to be used in production. Do not change the Data Location to a shared file store; reconfiguring this data store can have unexpected consequences when you upgrade or migrate the system. It is safe to delete this data store so that it is not presented as an option when users configure ingestion pipelines.

  2. On the Anzo Data Store screen, click the Add Anzo Data Store button and select Add Anzo Data Store. Anzo opens the Create Anzo Data Store screen.

  3. Type a Title and optional Description for the Data Store.
  4. Click in the Data Location field. Anzo opens the File Location dialog box.

  5. On the left side of the screen, select the File Store on which to create this Data Store. On the right side of the screen, navigate to the directory that you want to designate as the data location, or click Create New Folder to create a new directory. Select the directory, and then click OK. Each time a pipeline is run for this Data Store, a new subdirectory is created under the specified data location.

    The Data Location needs to be a directory on the file store that is shared between Anzo, AnzoGraph, and any Anzo Unstructured, Elasticsearch, or Spark servers. If you want Anzo to generate files for this data store in one location and then load the files into AnzoGraph from another location, specify the file generation location in this field, and then specify the AnzoGraph load location in the Alternate Data Location field that is displayed on the Details screen after you save the data store.
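    As a hypothetical illustration (the mount points below are assumptions, not defaults), suppose Anzo and AnzoGraph mount the same shared volume at different paths:

        Data Location (where Anzo generates files):     /mnt/shared/data/my-store
        Alternate Data Location (AnzoGraph load path):  /opt/shared-files/data/my-store

    In this scenario, both fields refer to the same physical directory; only the mount point differs between the servers.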

  6. If necessary, you can change the maximum size of the files that are created by pipelines that write to this data store by specifying the size (in bytes) in the Max File Size Before Compression (Bytes) field. The value applies to files before they are compressed. The Spark ETL engine partitions files on output, and the default maximum file size is 100 MB (uncompressed). The Sparkler ETL engine partitions files on input, and the default maximum file size is 128 MB (uncompressed). Because Sparkler partitions files on input and converts the source to Turtle (TTL) format after partitioning, the resulting FLDS output files can be significantly larger than 128 MB.

    Cambridge Semantics recommends that you do not set this value unless instructed to do so by Cambridge Semantics Support.
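    If Support does instruct you to change the limit, keep in mind that the field takes a value in bytes. As a quick reference, the defaults cited above work out as follows (plain Python arithmetic, assuming binary megabytes; the documentation does not state which convention Anzo uses):

        # Default maximum file sizes expressed in bytes, assuming
        # binary megabytes (1 MB = 1024 * 1024 bytes).
        spark_default = 100 * 1024 * 1024     # 104857600
        sparkler_default = 128 * 1024 * 1024  # 134217728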

  7. Specify whether to compress the generated load files. By default, the Compress output checkbox is selected, indicating that Anzo generates .ttl.gz files when writing to this graph data source. If you clear the checkbox, Anzo generates uncompressed .ttl files. To preserve disk space and reduce read times when loading data into memory, Cambridge Semantics recommends that you accept the default configuration and compress load files.
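    Compressed load files are standard gzip files, so you can inspect them with ordinary tools if you need to verify the generated output. For example, a minimal Python sketch (the file path is hypothetical; actual paths are generated by Anzo under the data location) that prints the first few lines of a .ttl.gz file:

        import gzip

        # Open the compressed Turtle file in text mode and print a sample.
        # The path below is illustrative only.
        path = "/opt/shared-files/data/my-store/run-1/part-0001.ttl.gz"
        with gzip.open(path, "rt", encoding="utf-8") as f:
            for i, line in enumerate(f):
                print(line.rstrip())
                if i >= 9:  # show only the first 10 lines
                    break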
  8. By default, the ETL engine does not remove duplicates when running pipelines. If the source contains a significant number of duplicate entities, you have two options for deduplicating the data:
    • Deduplicate the data during the ETL process: To deduplicate the data while running the jobs that will generate this graph source, select the Dedupe output per executor option. Enabling the dedupe option limits the number of duplicates to one duplicate per executor node. For example, if the Spark configuration has 10 executor nodes, the resulting data set can contain a maximum of 10 duplicate entities.

      Deduplication is based on primary keys and URI templates (see the sketch after this list). If the source does not employ templating, do not enable the dedupe option. In addition, enabling this option substantially increases the time it takes to run the jobs for this data store.

    • Deduplicate the data after loading it to AnzoGraph: AnzoGraph deduplicates data during a "vacuum" process that runs automatically after data is loaded into memory. If you leave the Dedupe output per executor option disabled, duplicates will be removed by AnzoGraph.

      Deduplicating data with AnzoGraph streamlines the ETL process but can increase load time and temporary memory usage in AnzoGraph during the load.
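    To illustrate why deduplication depends on primary keys and URI templates, here is a minimal Python sketch (the template syntax and sample data are hypothetical, not Anzo's): rows that share a primary key map to the same templated URI, so duplicates can be recognized by comparing URIs. Each executor applies this logic independently, which is why up to one duplicate per executor can remain.

        # Rows with the same primary key produce the same URI, so keeping
        # one row per URI removes the duplicate within an executor.
        rows = [
            {"customer_id": "1001", "name": "Acme"},
            {"customer_id": "1001", "name": "Acme"},    # duplicate entity
            {"customer_id": "1002", "name": "Globex"},
        ]
        template = "http://example.com/customer/{customer_id}"

        seen = set()
        deduped = []
        for row in rows:
            uri = template.format(**row)
            if uri not in seen:
                seen.add(uri)
                deduped.append(row)

        print(deduped)  # the second "1001" row is dropped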

  9. Click Save to create the data store. Anzo saves the configuration and displays the details view.

    You can click the Edit icon to modify any of the options. Click the check mark icon to save changes to an option, or click the X icon to clear the value for an option.

  10. If you plan to load files into AnzoGraph from a location that is different from the Data Location that you specified, edit the Alternate Data Location field and select the location for AnzoGraph load files.