Data Migration Tools

For customers migrating from Meridian Enterprise Server to Meridian Cloud, we have developed tools to simplify the data migration process. At a high level, this process contains four steps:

  1. Extract your data from your Meridian Enterprise Server vault.

    Using our Export tool, you will receive multiple .xlsx files containing your data, ordered by when it was originally created.

  2. Transform your data into import packages.

    Transform the exported data, either manually or with a tool you create, into one or more import package manifests and the related data stream archives. This step is described in detail in Step 2 below.

  3. Validate your data.

    Use our Validation tool to validate your import package manifests against your selected import profile templates.

  4. Import your data into Meridian Cloud.

Before You Start

Before you start this process:

  1. Review our best practices for importing with packages.

  2. Download the migration tools for these procedures.

  3. Extract the migration tools.

    We recommend saving the extraction tools to a location where you can easily find them. For example, you could create a folder on your C drive named Data Migration.

  4. Release all existing documents and purge all deleted documents.

Step 1: Export Data from the Vault

Important!

We highly recommend that you release all existing documents and purge all deleted documents before starting this process.

To export data from your vault:

  1. Open a Windows Command Prompt and go to the folder where you extracted the tools.

  2. Type a command using the following syntax:

    M360.DataExtractor.exe -RepositoryName [Your repository name] -FilePath "[Output file path here].xlsx" -RevisionsPerFile [Number of Revision Records per File]

    If you do not want your records to be split across multiple files, enter a value for RevisionsPerFile that is higher than the total number of revision records in your vault.

  3. Press the Enter key on your keyboard.

    The Export tool begins the export. Once the progress reaches 100%, two JSON files containing all revisions and references in the vault appear in the output location you specified: [output file path]_Revision.json and [output file path]_References.json.

    Once these have been generated, the Export tool will generate Excel files until all your records have been processed. The Excel files will have the naming convention of [output file path]_[number].xlsx. You will also receive an Excel file with your project information, titled [output file path]_projects.xlsx.

    The number of Excel files generated depends on:

    1. How many documents you have in your vault, and

    2. How many revision records per file you selected

  4. Once the JSON files have been generated, you can interrupt the export process and restart it later.

    • To interrupt the process:

      • Close the Command Prompt window, or

      • Press CTRL + C on your keyboard.

        The process is interrupted.

    • To restart the process:

      1. Type a command using the following syntax:

        M360.DataExtractor.exe -RepositoryName [RepositoryName] -FilePath [output file path].xlsx -RevisionsPerFile [number of revision records per file] -RestoreRevisions true -NextFile [next file number]

        Important!

        If you are restarting the export process after interrupting it, do NOT change the value of the RevisionsPerFile parameter.

      2. Press the Enter key on your keyboard.

        The export process resumes.
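As a rough guide, you can estimate how many revision Excel files the export will produce from your total revision count and the RevisionsPerFile value you chose. A minimal sketch (the function name and the even-split assumption are illustrative, not part of the Export tool):

```python
import math

def expected_revision_files(total_revisions: int, revisions_per_file: int) -> int:
    """Estimate how many [output file path]_[number].xlsx files the export
    produces, assuming records split evenly at the chosen limit."""
    if revisions_per_file <= 0:
        raise ValueError("RevisionsPerFile must be a positive number")
    return math.ceil(total_revisions / revisions_per_file)

# For example, 12,500 revision records split at 5,000 per file
# yields 3 Excel files (plus the separate _projects.xlsx file).
print(expected_revision_files(12_500, 5_000))
```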

Step 2: Transform your Data

You can transform the data manually or create a tool that transforms it into one or more import package manifests and creates the related data stream archives based on the stream information in the original .xlsx files. For each record in the original .xlsx file, you need to provide the document number, property mappings, and import template selection. You should also transform reference information into the import template format. Based on the content file names in the original .xlsx file, copy the stream data from the stream server folder into the package data archive.
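The per-record mapping can be sketched as follows. This is only an illustration of the transform logic, not the actual tool: the column names (DocumentNumber, Title, DocTitle, and so on) and the manifest layout are hypothetical, and must be replaced with the columns from your own export files and your selected import template.

```python
# Hypothetical sketch: turn one exported revision row into a manifest entry.
# All column names are illustrative; real names come from your export .xlsx
# files and from the import template you selected.
PROPERTY_MAP = {
    # export column -> import template column (illustrative only)
    "Title": "DocTitle",
    "Discipline": "Discipline",
}

def to_manifest_entry(export_row: dict, template_name: str) -> dict:
    """Build one import manifest entry: document number, property
    mappings, and the import template selection."""
    entry = {
        "DocumentNumber": export_row["DocumentNumber"],
        "ImportTemplate": template_name,
    }
    # Apply the property mappings required for each record.
    for src, dst in PROPERTY_MAP.items():
        if src in export_row:
            entry[dst] = export_row[src]
    return entry

row = {"DocumentNumber": "DWG-0001", "Title": "Pump layout", "Discipline": "Mechanical"}
print(to_manifest_entry(row, "Drawings"))
```

Reference rows and stream-file copying follow the same pattern: read the relevant tab from the export file, map each row into the import template's layout, and copy the named content files into the package data archive.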

Document Excel Files

Each document Excel file has several tabs:

  • Revisions

  • References

  • Streams

  • Parts

  • Per [Document type] tabs - each of these tabs has properties specific to the document type

The following sections describe each of the tabs, and the columns contained in them.

Revisions

The following columns are in the Revisions tab:

  • DateTimeStamp

  • Revision columns

  • DocumentType, which is used to cross-reference the <Document type> tabs

  • Path location in source vault

  • Project ID

  • Display name

  • Global ID

  • Version name/number

  • WorkFlow status

  • Rendering info

References

The following columns are in the References tab:

  • DateTimeStamp

  • Source revision columns

  • Target revision columns

  • Reference Type

  • Reference Display name

Streams

The following columns are in the Streams tab:

  • Revision columns

  • Stream names columns

    • Content

    • Redlines

    • Rendition

    • RenditionRedlines

    • HotSpots

    • PDFTronXOD

    • DrawingLayout

Parts

The following columns are in the Parts tab:

  • Revision columns

  • Part name

  • Part Content location

Per Document Type

The following columns are in the Per Document Type tabs:

  • Revision columns

  • Columns for properties from property sets assigned to specific document type
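Because the Revisions tab carries a DocumentType value while the type-specific properties live on separate Per [Document type] tabs, assembling a complete record means joining the two on that value. A hedged sketch of that lookup, using plain dictionaries in place of the actual worksheet rows (all field names here are illustrative):

```python
def merge_revision(revision: dict, per_type_tabs: dict) -> dict:
    """Combine a Revisions-tab row with the matching row from its
    Per [Document type] tab, keyed by the DocumentType column."""
    doc_type = revision["DocumentType"]
    type_rows = per_type_tabs.get(doc_type, [])
    # Match on a shared revision key; GlobalId stands in here for the
    # revision columns both tabs carry (illustrative only).
    extra = next(
        (r for r in type_rows if r["GlobalId"] == revision["GlobalId"]), {}
    )
    return {**revision, **{k: v for k, v in extra.items() if k != "GlobalId"}}

revisions_row = {"GlobalId": "G-1", "DocumentType": "Drawing", "DisplayName": "P-101"}
per_type = {"Drawing": [{"GlobalId": "G-1", "SheetSize": "A1"}]}
print(merge_revision(revisions_row, per_type))
```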

Project Excel File

The project Excel file has several tabs:

Projects

The following columns are in the Projects tab:

  • ProjectId

  • ProjectType, which is used to cross-reference the Per Project Type tabs

  • Path location in source vault

  • Display name

  • Global ID

  • ProjectWorkFlow status

Per Project Type

The following columns are in the Per Project Type tabs:

  • ProjectId

  • Columns for properties from property sets assigned to specific project/folder type

Step 3: Validate your Data

To validate your transformed data:

  1. Open a Windows Command Prompt and go to the folder where you extracted the tools.

  2. Type a command using the following syntax:

    M360.DataValidator -Manifest [Package Manifest File].[xls|xlsx] -Template [Import Template file].[xls|xlsx] [options]

    Manifest is the import spreadsheet with the metadata. Template is the spreadsheet that you exported from the system.

    The options you can enter are:

    • -ProfileType [Projects|Masters|Documentation|TagsReferences]

    • -Archive [<Import package content folder>|<Import package content archive>]

  3. Press the Enter key on your keyboard.

    The Validator tool checks the following:

    • All required sheets and columns exist

      You should download an import template so that the tool can validate custom columns.

    • All columns from the template are present in the manifest

    • Valid stream paths:

      • Content path has no invalid characters

      • Content files exist (if content archive is available)

      • Path is not too long

    • All property values are a valid length, not exceeding 255 characters

    Once the Validator tool finishes running, it generates a log file. If any errors are detected, they are displayed on the screen and recorded in the log file.
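If you want to catch problems before running the Validator, the documented checks can be approximated in a pre-flight script of your own. This sketch only mirrors the rules listed above; the invalid-character set and the 260-character path limit are common Windows conventions used here as assumptions, while the 255-character property limit comes from the validation list:

```python
# Sketch of pre-flight checks mirroring the documented validation rules.
# INVALID_CHARS and MAX_PATH are assumptions based on common Windows limits.
INVALID_CHARS = set('<>:"|?*')
MAX_PATH = 260
MAX_PROPERTY_LENGTH = 255  # documented limit for property values

def check_stream_path(path: str) -> list:
    """Flag invalid characters and over-long content paths."""
    errors = []
    if any(ch in INVALID_CHARS for ch in path):
        errors.append(f"invalid character in path: {path}")
    if len(path) > MAX_PATH:
        errors.append(f"path too long ({len(path)} chars): {path}")
    return errors

def check_properties(row: dict) -> list:
    """Flag property values longer than 255 characters."""
    return [
        f"property {name} exceeds {MAX_PROPERTY_LENGTH} characters"
        for name, value in row.items()
        if isinstance(value, str) and len(value) > MAX_PROPERTY_LENGTH
    ]

print(check_stream_path("drawings/P-101.pdf"))     # no issues -> []
print(check_stream_path("drawings/bad|name.pdf"))  # flags the "|"
```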

Step 4: Upload Import Sources to your Repository

Step 5: Import your Data

Step 6: Automatically Generate Renditions

After the import has been completed, renditions will be generated for files that were migrated without a rendition file. The import profile column related to rendition generation will be used to generate the rendition. If you do not provide any rendition settings in your import spreadsheets, your default settings will be used.

Generating renditions may take several days depending on how much data was uploaded. However, this generation will not interfere with the regular rendition generation that is part of the workflow transition process.

Learn more about renditions.