Migrating from a Camunda Framework Database to the DM/Kingbase/Vastbase Database

You can complete the migration either by directly modifying the JAR package that the project depends on, or by modifying the Camunda framework source code and repackaging it into a JAR package. This guide uses the JAR Editor plugin to modify the JAR package directly.

During project deployment, the Camunda framework scans the database to check whether the required table structure exists. If it does not, the framework runs the SQL statements bundled in the JAR package to create the required tables. However, the JAR package does not contain SQL statements for the DM database, so you are advised to migrate the framework only after the database has been migrated.

If you need to deploy your project before the required tables exist in the database, set the database type to oracle for the first deployment so that the tables are created from the Oracle SQL statements in the package, and then change the database type to dm or kingbase8 for later deployments.
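As a sketch of this two-phase approach (assuming the project uses Camunda's Spring Boot starter and the configuration keys shown later in this guide), the first deployment would set:

```yaml
# First deployment only: no tables exist yet, so have the engine
# create them from the Oracle SQL statements bundled in the JAR package.
camunda:
  bpm:
    database:
      type: oracle
```

After the tables have been created, change type to dm (or kingbase8) for all later deployments.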

Prerequisites

  • You are advised to perform the operations in IntelliJ IDEA and install the JAR Editor plugin.
  • It is strongly recommended that you migrate the framework after you have migrated the database or when you have the complete table structure.

Adapting Camunda to DM

  1. Use IntelliJ IDEA to open the dependency package org.camunda.bpm:camunda-engine and use the JAR Editor plugin to modify the JAR package.
  2. Open the org/camunda/bpm/engine/impl/cfg/ProcessEngineConfigurationImpl file in the JAR package.
    Add the DMDBMS mapping (the last setProperty line below) to the getDefaultDatabaseTypeMappings method:
    protected static Properties getDefaultDatabaseTypeMappings() {
        Properties databaseTypeMappings = new Properties();
        databaseTypeMappings.setProperty("H2", "h2");
        databaseTypeMappings.setProperty(MY_SQL_PRODUCT_NAME, "mysql");
        databaseTypeMappings.setProperty(MARIA_DB_PRODUCT_NAME, "mariadb");
        databaseTypeMappings.setProperty("Oracle", "oracle");
        databaseTypeMappings.setProperty(POSTGRES_DB_PRODUCT_NAME, "postgres");
        databaseTypeMappings.setProperty("Microsoft SQL Server", "mssql");
        databaseTypeMappings.setProperty("DB2", "db2");
        databaseTypeMappings.setProperty("DB2", "db2");
        databaseTypeMappings.setProperty("DB2/NT", "db2");
        databaseTypeMappings.setProperty("DB2/NT64", "db2");
        databaseTypeMappings.setProperty("DB2 UDP", "db2");
        databaseTypeMappings.setProperty("DB2/LINUX", "db2");
        databaseTypeMappings.setProperty("DB2/LINUX390", "db2");
        databaseTypeMappings.setProperty("DB2/LINUXX8664", "db2");
        databaseTypeMappings.setProperty("DB2/LINUXZ64", "db2");
        databaseTypeMappings.setProperty("DB2/400 SQL", "db2");
        databaseTypeMappings.setProperty("DB2/6000", "db2");
        databaseTypeMappings.setProperty("DB2 UDB iSeries", "db2");
        databaseTypeMappings.setProperty("DB2/AIX64", "db2");
        databaseTypeMappings.setProperty("DB2/HPUX", "db2");
        databaseTypeMappings.setProperty("DB2/HP64", "db2");
        databaseTypeMappings.setProperty("DB2/SUN", "db2");
        databaseTypeMappings.setProperty("DB2/SUN64", "db2");
        databaseTypeMappings.setProperty("DB2/PTX", "db2");
        databaseTypeMappings.setProperty("DB2/2", "db2");
        databaseTypeMappings.setProperty("DMDBMS", "dm");  // New mapping
        return databaseTypeMappings;
      }
    Use the JAR Editor to save and compile the file, and then rebuild the JAR package. Ensure that the JDK version matches the version that the Camunda framework is compatible with (JDK 11).
    In some cases, compilation errors may occur. The possible causes are as follows:
    • The dependency packages invoked by the Camunda framework are incompatible with those invoked by the migration project source code.
    • The JDK version is incorrect.
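To see how this mapping is consulted, the following standalone sketch mirrors the lookup in miniature: Camunda matches the product name reported by the JDBC driver (via DatabaseMetaData.getDatabaseProductName(); for DM this is DMDBMS, as in the mapping above, and for Kingbase it is KingbaseEs, added in a later section) against this table to select its internal database type key.

```java
import java.util.Properties;

public class DatabaseTypeMappingDemo {

    // A miniature version of getDefaultDatabaseTypeMappings(), containing
    // only the entries relevant to this guide.
    static Properties databaseTypeMappings() {
        Properties mappings = new Properties();
        mappings.setProperty("Oracle", "oracle");
        mappings.setProperty("DMDBMS", "dm");            // line added for DM
        mappings.setProperty("KingbaseEs", "kingbase8"); // line added for Kingbase
        return mappings;
    }

    public static void main(String[] args) {
        Properties mappings = databaseTypeMappings();
        // The engine resolves the reported product name to its type key,
        // which in turn selects the SQL statement set to use.
        System.out.println(mappings.getProperty("DMDBMS"));     // dm
        System.out.println(mappings.getProperty("KingbaseEs")); // kingbase8
    }
}
```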
  3. Open the org/camunda/bpm/engine/impl/db/sql/DbSqlSessionFactory file in the JAR package.
    1. Add the new constants (marked with comments below) to the instance variables of DbSqlSessionFactory:
      public class DbSqlSessionFactory implements SessionFactory {
      
        public static final String MSSQL = "mssql";
        public static final String DB2 = "db2";
        public static final String ORACLE = "oracle";
        public static final String H2 = "h2";
        public static final String MYSQL = "mysql";
        public static final String POSTGRES = "postgres";
        public static final String MARIADB = "mariadb";
        public static final String DMDBMS = "dm";    // New variable
        public static final String[] SUPPORTED_DATABASES = {MSSQL, DB2, ORACLE, H2, MYSQL, POSTGRES, MARIADB, DMDBMS};  // New member
        // ...
        }
    2. Add the following code to the static initialization block of DbSqlSessionFactory:
            databaseSpecificLimitBeforeStatements.put(DMDBMS, "select * from ( select a.*, ROWNUM rnum from (");
            optimizeDatabaseSpecificLimitBeforeWithoutOffsetStatements.put(DMDBMS, "select * from ( select a.*, ROWNUM rnum from (");
            databaseSpecificLimitAfterStatements.put(DMDBMS, "  ) a where ROWNUM < #{lastRow}) where rnum  >= #{firstRow}");
            optimizeDatabaseSpecificLimitAfterWithoutOffsetStatements.put(DMDBMS, "  ) a where ROWNUM <= #{maxResults})");
            databaseSpecificLimitBeforeWithoutOffsetStatements.put(DMDBMS, "");
            databaseSpecificLimitAfterWithoutOffsetStatements.put(DMDBMS, "AND ROWNUM <= #{maxResults}");
            databaseSpecificInnerLimitAfterStatements.put(DMDBMS, databaseSpecificLimitAfterStatements.get(DMDBMS));
            databaseSpecificLimitBetweenStatements.put(DMDBMS, "");
            databaseSpecificLimitBetweenFilterStatements.put(DMDBMS, "");
            databaseSpecificLimitBetweenAcquisitionStatements.put(DMDBMS, "");
      
            databaseSpecificOrderByStatements.put(DMDBMS, defaultOrderBy);
            databaseSpecificLimitBeforeNativeQueryStatements.put(DMDBMS, "");
            databaseSpecificDistinct.put(DMDBMS, "distinct");
            databaseSpecificLimitBeforeInUpdate.put(DMDBMS, "");
            databaseSpecificLimitAfterInUpdate.put(DMDBMS, "");
            databaseSpecificAuthJoinStart.put(DMDBMS, defaultAuthOnStart);
            databaseSpecificNumericCast.put(DMDBMS, "");
            databaseSpecificCountDistinctBeforeStart.put(DMDBMS, defaultDistinctCountBeforeStart);
            databaseSpecificCountDistinctBeforeEnd.put(DMDBMS, defaultDistinctCountBeforeEnd);
            databaseSpecificCountDistinctAfterEnd.put(DMDBMS, defaultDistinctCountAfterEnd);
      
            databaseSpecificEscapeChar.put(DMDBMS, defaultEscapeChar);
      
            databaseSpecificDummyTable.put(DMDBMS, "FROM DUAL");
            databaseSpecificBitAnd1.put(DMDBMS, "BITAND(");
            databaseSpecificBitAnd2.put(DMDBMS, ",");
            databaseSpecificBitAnd3.put(DMDBMS, ")");
            databaseSpecificDatepart1.put(DMDBMS, "to_number(to_char(");
            databaseSpecificDatepart2.put(DMDBMS, ",");
            databaseSpecificDatepart3.put(DMDBMS, "))");
      
            databaseSpecificTrueConstant.put(DMDBMS, "1");
            databaseSpecificFalseConstant.put(DMDBMS, "0");
            databaseSpecificIfNull.put(DMDBMS, "NVL");
      
            databaseSpecificDaysComparator.put(DMDBMS, "${date} <= #{currentTimestamp} - ${days}");
      
            databaseSpecificCollationForCaseSensitivity.put(DMDBMS, "");
      
            databaseSpecificAuthJoinEnd.put(DMDBMS, defaultAuthOnEnd);
            databaseSpecificAuthJoinSeparator.put(DMDBMS, defaultAuthOnSeparator);
      
            databaseSpecificAuth1JoinStart.put(DMDBMS, defaultAuthOnStart);
            databaseSpecificAuth1JoinEnd.put(DMDBMS, defaultAuthOnEnd);
            databaseSpecificAuth1JoinSeparator.put(DMDBMS, defaultAuthOnSeparator);
            databaseSpecificExtractTimeUnitFromDate.put(DMDBMS, defaultExtractTimeUnitFromDate);
      
            addDatabaseSpecificStatement(DMDBMS, "selectHistoricProcessInstanceDurationReport", "selectHistoricProcessInstanceDurationReport_oracle");
            addDatabaseSpecificStatement(DMDBMS, "selectHistoricTaskInstanceDurationReport", "selectHistoricTaskInstanceDurationReport_oracle");
            addDatabaseSpecificStatement(DMDBMS, "selectHistoricTaskInstanceCountByTaskNameReport", "selectHistoricTaskInstanceCountByTaskNameReport_oracle");
            addDatabaseSpecificStatement(DMDBMS, "selectFilterByQueryCriteria", "selectFilterByQueryCriteria_oracleDb2");
            addDatabaseSpecificStatement(DMDBMS, "selectHistoricProcessInstanceIdsForCleanup", "selectHistoricProcessInstanceIdsForCleanup_oracle");
            addDatabaseSpecificStatement(DMDBMS, "selectHistoricDecisionInstanceIdsForCleanup", "selectHistoricDecisionInstanceIdsForCleanup_oracle");
            addDatabaseSpecificStatement(DMDBMS, "selectHistoricCaseInstanceIdsForCleanup", "selectHistoricCaseInstanceIdsForCleanup_oracle");
            addDatabaseSpecificStatement(DMDBMS, "selectHistoricBatchIdsForCleanup", "selectHistoricBatchIdsForCleanup_oracle");
      
            addDatabaseSpecificStatement(DMDBMS, "deleteAttachmentsByRemovalTime", "deleteAttachmentsByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteCommentsByRemovalTime", "deleteCommentsByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteHistoricActivityInstancesByRemovalTime", "deleteHistoricActivityInstancesByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteHistoricDecisionInputInstancesByRemovalTime", "deleteHistoricDecisionInputInstancesByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteHistoricDecisionInstancesByRemovalTime", "deleteHistoricDecisionInstancesByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteHistoricDecisionOutputInstancesByRemovalTime", "deleteHistoricDecisionOutputInstancesByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteHistoricDetailsByRemovalTime", "deleteHistoricDetailsByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteExternalTaskLogByRemovalTime", "deleteExternalTaskLogByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteHistoricIdentityLinkLogByRemovalTime", "deleteHistoricIdentityLinkLogByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteHistoricIncidentsByRemovalTime", "deleteHistoricIncidentsByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteJobLogByRemovalTime", "deleteJobLogByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteHistoricProcessInstancesByRemovalTime", "deleteHistoricProcessInstancesByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteHistoricTaskInstancesByRemovalTime", "deleteHistoricTaskInstancesByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteHistoricVariableInstancesByRemovalTime", "deleteHistoricVariableInstancesByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteUserOperationLogByRemovalTime", "deleteUserOperationLogByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteByteArraysByRemovalTime", "deleteByteArraysByRemovalTime_oracle");
            addDatabaseSpecificStatement(DMDBMS, "deleteHistoricBatchesByRemovalTime", "deleteHistoricBatchesByRemovalTime_oracle");
      
            constants = new HashMap<String, String>();
            constants.put("constant.event", "cast('event' as nvarchar2(255))");
            constants.put("constant.op_message", "NEW_VALUE_ || '_|_' || PROPERTY_");
            constants.put("constant_for_update", "for update");
            constants.put("constant.datepart.quarter", "'Q'");
            constants.put("constant.datepart.month", "'MM'");
            constants.put("constant.datepart.minute", "'MI'");
            constants.put("constant.null.startTime", "null START_TIME_");
            constants.put("constant.varchar.cast", "'${key}'");
            constants.put("constant.integer.cast", "NULL");
            constants.put("constant.null.reporter", "NULL AS REPORTER_");
            dbSpecificConstants.put(DMDBMS, constants);
      

    Use the JAR Editor to save and compile the file, and then rebuild the JAR package. Ensure that the JDK version matches the version that the Camunda framework is compatible with (JDK 11).
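The limit-statement entries registered above are string fragments that the engine concatenates around a query to implement paging with Oracle-style ROWNUM. The sketch below shows how the limitBefore and limitAfter fragments wrap an inner query (the inner query here is an illustrative placeholder, not a statement from the mapping files):

```java
public class DmPagingSketch {

    // The DM fragments registered in DbSqlSessionFactory above.
    static final String LIMIT_BEFORE =
        "select * from ( select a.*, ROWNUM rnum from (";
    static final String LIMIT_AFTER =
        "  ) a where ROWNUM < #{lastRow}) where rnum  >= #{firstRow}";

    // Wraps an arbitrary inner query into a ROWNUM-paged query, the way the
    // engine combines limitBefore + query + limitAfter.
    static String page(String innerQuery) {
        return LIMIT_BEFORE + " " + innerQuery + " " + LIMIT_AFTER;
    }

    public static void main(String[] args) {
        // Illustrative inner query against the Camunda task table.
        String paged = page("select RES.* from ACT_RU_TASK RES order by RES.ID_ asc");
        System.out.println(paged);
    }
}
```

The #{lastRow} and #{firstRow} placeholders are filled in by MyBatis with the requested page bounds at query time.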

  4. Import the JDBC driver package of the DM database.

    Taking Maven as an example, add the following dependency inside the <dependencies></dependencies> element of the pom.xml file:

    <dependency>
       <groupId>com.dameng</groupId>
       <artifactId>Dm8JdbcDriver18</artifactId>
       <version>8.1.1.49</version>
    </dependency>
    

    For the latest version, visit the DM official website.

  5. Configure database information.

    The following uses a YAML configuration file as an example:

    # Database driver
    driver-class-name: dm.jdbc.driver.DmDriver
    # Database IP address and port
    url: jdbc:dm://127.0.0.1:5236/DMSERVER?zeroDateTimeBehavior=convertToNull&useUnicode=true&characterEncoding=utf-8
    username: Database user name
    password: Database password
    
    # Specify the database type for the Camunda framework.
    camunda:
      bpm:
        database:
          type: dm
    

Adapting Camunda to Kingbase

  1. Use IntelliJ IDEA to open the dependency package org.camunda.bpm:camunda-engine and use the JAR Editor plugin to modify the JAR package.
  2. Open the org/camunda/bpm/engine/impl/cfg/ProcessEngineConfigurationImpl file in the JAR package.
    Add the KingbaseEs mapping (the last setProperty line below) to the getDefaultDatabaseTypeMappings method:
    protected static Properties getDefaultDatabaseTypeMappings() {
    Properties databaseTypeMappings = new Properties();
    databaseTypeMappings.setProperty("H2", "h2");
    databaseTypeMappings.setProperty(MY_SQL_PRODUCT_NAME, "mysql");
    databaseTypeMappings.setProperty(MARIA_DB_PRODUCT_NAME, "mariadb");
    databaseTypeMappings.setProperty("Oracle", "oracle");
    databaseTypeMappings.setProperty(POSTGRES_DB_PRODUCT_NAME, "postgres");
    databaseTypeMappings.setProperty("Microsoft SQL Server", "mssql");
    databaseTypeMappings.setProperty("DB2", "db2");
    databaseTypeMappings.setProperty("DB2", "db2");
    databaseTypeMappings.setProperty("DB2/NT", "db2");
    databaseTypeMappings.setProperty("DB2/NT64", "db2");
    databaseTypeMappings.setProperty("DB2 UDP", "db2");
    databaseTypeMappings.setProperty("DB2/LINUX", "db2");
    databaseTypeMappings.setProperty("DB2/LINUX390", "db2");
    databaseTypeMappings.setProperty("DB2/LINUXX8664", "db2");
    databaseTypeMappings.setProperty("DB2/LINUXZ64", "db2");
    databaseTypeMappings.setProperty("DB2/400 SQL", "db2");
    databaseTypeMappings.setProperty("DB2/6000", "db2");
    databaseTypeMappings.setProperty("DB2 UDB iSeries", "db2");
    databaseTypeMappings.setProperty("DB2/AIX64", "db2");
    databaseTypeMappings.setProperty("DB2/HPUX", "db2");
    databaseTypeMappings.setProperty("DB2/HP64", "db2");
    databaseTypeMappings.setProperty("DB2/SUN", "db2");
    databaseTypeMappings.setProperty("DB2/SUN64", "db2");
    databaseTypeMappings.setProperty("DB2/PTX", "db2");
    databaseTypeMappings.setProperty("DB2/2", "db2");
    databaseTypeMappings.setProperty("KingbaseEs", "kingbase8");  // New mapping
    return databaseTypeMappings;
    }

    Use the JAR Editor to save and compile the file, and then rebuild the JAR package. Ensure that the JDK version matches the version that the Camunda framework is compatible with (JDK 11).

    In some cases, compilation errors may occur. The possible causes are as follows:
    • The dependency packages invoked by the Camunda framework are incompatible with those invoked by the migration project source code.
    • The JDK version is incorrect.
  3. Open the org/camunda/bpm/engine/impl/db/sql/DbSqlSessionFactory file in the JAR package.
    1. Add the following code to the instance variables of DbSqlSessionFactory:
      public class DbSqlSessionFactory implements SessionFactory {
      
      public static final String MSSQL = "mssql";
      public static final String DB2 = "db2";
      public static final String ORACLE = "oracle";
      public static final String H2 = "h2";
      public static final String MYSQL = "mysql";
      public static final String POSTGRES = "postgres";
      public static final String MARIADB = "mariadb";
      public static final String KINGBASEES = "kingbase8";   // New variable
      public static final String[] SUPPORTED_DATABASES = {MSSQL, DB2, ORACLE, H2, MYSQL, POSTGRES, MARIADB, KINGBASEES};  // New member
      // ...
      }
    2. Add the following code to the static initialization block of DbSqlSessionFactory:
      databaseSpecificLimitBeforeStatements.put(KINGBASEES, "select * from ( select a.*, ROWNUM rnum from (");
      optimizeDatabaseSpecificLimitBeforeWithoutOffsetStatements.put(KINGBASEES, "select * from ( select a.*, ROWNUM rnum from (");
      databaseSpecificLimitAfterStatements.put(KINGBASEES, "  ) a where ROWNUM < #{lastRow}) where rnum  >= #{firstRow}");
      optimizeDatabaseSpecificLimitAfterWithoutOffsetStatements.put(KINGBASEES, "  ) a where ROWNUM <= #{maxResults})");
      databaseSpecificLimitBeforeWithoutOffsetStatements.put(KINGBASEES, "");
      databaseSpecificLimitAfterWithoutOffsetStatements.put(KINGBASEES, "AND ROWNUM <= #{maxResults}");
      databaseSpecificInnerLimitAfterStatements.put(KINGBASEES, databaseSpecificLimitAfterStatements.get(KINGBASEES));
      databaseSpecificLimitBetweenStatements.put(KINGBASEES, "");
      databaseSpecificLimitBetweenFilterStatements.put(KINGBASEES, "");
      databaseSpecificLimitBetweenAcquisitionStatements.put(KINGBASEES, "");
      
      databaseSpecificOrderByStatements.put(KINGBASEES, defaultOrderBy);
      databaseSpecificLimitBeforeNativeQueryStatements.put(KINGBASEES, "");
      databaseSpecificDistinct.put(KINGBASEES, "distinct");
      databaseSpecificLimitBeforeInUpdate.put(KINGBASEES, "");
      databaseSpecificLimitAfterInUpdate.put(KINGBASEES, "");
      databaseSpecificAuthJoinStart.put(KINGBASEES, defaultAuthOnStart);
      databaseSpecificNumericCast.put(KINGBASEES, "");
      databaseSpecificCountDistinctBeforeStart.put(KINGBASEES, defaultDistinctCountBeforeStart);
      databaseSpecificCountDistinctBeforeEnd.put(KINGBASEES, defaultDistinctCountBeforeEnd);
      databaseSpecificCountDistinctAfterEnd.put(KINGBASEES, defaultDistinctCountAfterEnd);
      
      databaseSpecificEscapeChar.put(KINGBASEES, defaultEscapeChar);
      
      databaseSpecificDummyTable.put(KINGBASEES, "FROM DUAL");
      databaseSpecificBitAnd1.put(KINGBASEES, "BITAND(");
      databaseSpecificBitAnd2.put(KINGBASEES, ",");
      databaseSpecificBitAnd3.put(KINGBASEES, ")");
      databaseSpecificDatepart1.put(KINGBASEES, "to_number(to_char(");
      databaseSpecificDatepart2.put(KINGBASEES, ",");
      databaseSpecificDatepart3.put(KINGBASEES, "))");
      
      databaseSpecificTrueConstant.put(KINGBASEES, "1");
      databaseSpecificFalseConstant.put(KINGBASEES, "0");
      databaseSpecificIfNull.put(KINGBASEES, "NVL");
      
      databaseSpecificDaysComparator.put(KINGBASEES, "${date} <= #{currentTimestamp} - ${days}");
      
      databaseSpecificCollationForCaseSensitivity.put(KINGBASEES, "");
      
      databaseSpecificAuthJoinEnd.put(KINGBASEES, defaultAuthOnEnd);
      databaseSpecificAuthJoinSeparator.put(KINGBASEES, defaultAuthOnSeparator);
      
      databaseSpecificAuth1JoinStart.put(KINGBASEES, defaultAuthOnStart);
      databaseSpecificAuth1JoinEnd.put(KINGBASEES, defaultAuthOnEnd);
      databaseSpecificAuth1JoinSeparator.put(KINGBASEES, defaultAuthOnSeparator);
      databaseSpecificExtractTimeUnitFromDate.put(KINGBASEES, defaultExtractTimeUnitFromDate);
      
      addDatabaseSpecificStatement(KINGBASEES, "selectHistoricProcessInstanceDurationReport", "selectHistoricProcessInstanceDurationReport_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "selectHistoricTaskInstanceDurationReport", "selectHistoricTaskInstanceDurationReport_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "selectHistoricTaskInstanceCountByTaskNameReport", "selectHistoricTaskInstanceCountByTaskNameReport_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "selectFilterByQueryCriteria", "selectFilterByQueryCriteria_oracleDb2");
      addDatabaseSpecificStatement(KINGBASEES, "selectHistoricProcessInstanceIdsForCleanup", "selectHistoricProcessInstanceIdsForCleanup_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "selectHistoricDecisionInstanceIdsForCleanup", "selectHistoricDecisionInstanceIdsForCleanup_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "selectHistoricCaseInstanceIdsForCleanup", "selectHistoricCaseInstanceIdsForCleanup_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "selectHistoricBatchIdsForCleanup", "selectHistoricBatchIdsForCleanup_oracle");
      
      addDatabaseSpecificStatement(KINGBASEES, "deleteAttachmentsByRemovalTime", "deleteAttachmentsByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteCommentsByRemovalTime", "deleteCommentsByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteHistoricActivityInstancesByRemovalTime", "deleteHistoricActivityInstancesByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteHistoricDecisionInputInstancesByRemovalTime", "deleteHistoricDecisionInputInstancesByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteHistoricDecisionInstancesByRemovalTime", "deleteHistoricDecisionInstancesByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteHistoricDecisionOutputInstancesByRemovalTime", "deleteHistoricDecisionOutputInstancesByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteHistoricDetailsByRemovalTime", "deleteHistoricDetailsByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteExternalTaskLogByRemovalTime", "deleteExternalTaskLogByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteHistoricIdentityLinkLogByRemovalTime", "deleteHistoricIdentityLinkLogByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteHistoricIncidentsByRemovalTime", "deleteHistoricIncidentsByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteJobLogByRemovalTime", "deleteJobLogByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteHistoricProcessInstancesByRemovalTime", "deleteHistoricProcessInstancesByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteHistoricTaskInstancesByRemovalTime", "deleteHistoricTaskInstancesByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteHistoricVariableInstancesByRemovalTime", "deleteHistoricVariableInstancesByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteUserOperationLogByRemovalTime", "deleteUserOperationLogByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteByteArraysByRemovalTime", "deleteByteArraysByRemovalTime_oracle");
      addDatabaseSpecificStatement(KINGBASEES, "deleteHistoricBatchesByRemovalTime", "deleteHistoricBatchesByRemovalTime_oracle");
      
      constants = new HashMap<String, String>();
      constants.put("constant.event", "cast('event' as nvarchar2(255))");
      constants.put("constant.op_message", "NEW_VALUE_ || '_|_' || PROPERTY_");
      constants.put("constant_for_update", "for update");
      constants.put("constant.datepart.quarter", "'Q'");
      constants.put("constant.datepart.month", "'MM'");
      constants.put("constant.datepart.minute", "'MI'");
      constants.put("constant.null.startTime", "null START_TIME_");
      constants.put("constant.varchar.cast", "'${key}'");
      constants.put("constant.integer.cast", "NULL");
      constants.put("constant.null.reporter", "NULL AS REPORTER_");
      dbSpecificConstants.put(KINGBASEES, constants);
      

    Use the JAR Editor to save and compile the file, and then rebuild the JAR package. Ensure that the JDK version matches the version that the Camunda framework is compatible with (JDK 11).

  4. Import the JDBC driver package of Kingbase.

    Download the dependency package from the Kingbase official website and import it locally.

  5. Configure database information.

    The following uses a YAML configuration file as an example:

    # Database driver
    driver-class-name: com.kingbase8.Driver
    # Database IP address and port
    url: jdbc:kingbase8://127.0.0.1:54321/Database_name
    username: Database user name
    password: Database password
    
    # Specify the database type for the Camunda framework.
    camunda:
      bpm:
        database:
          type: kingbase8
    

Adapting Camunda to Vastbase

Vastbase is fully compatible with MySQL. To adapt Camunda to Vastbase, you only need to add the MySQL dependencies and modify the database configuration in the configuration file. For example, if Camunda is integrated into a Spring Boot project, modify the configuration file as follows. The following uses a YAML file as an example.

spring.datasource:
  # Configure the MySQL driver.
  driver-class-name: com.mysql.cj.jdbc.Driver
  # Set the database type in the URL to MySQL.
  url: jdbc:mysql://{Vastbase_IP_address}:{Vastbase_port}/{Database_name}
  # Example: jdbc:mysql://127.0.0.1:2881/CamundaProject
  username: {Vastbase_user_name}
  password: {Vastbase_login_password}
# Specify the database type for the Camunda framework.
camunda:
  bpm:
    database:
      # Change the database type to mysql.
      type: mysql

After completing the configuration, start the project. If the Camunda database tables are generated correctly in the database, the adaptation is successful.
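One way to check the result is to list the table names in the target database (for example via DatabaseMetaData.getTables over JDBC) and confirm that the core engine tables appear. The sketch below shows only the check itself; the table list passed to it is a hypothetical example, and the real Camunda schema contains many more ACT_ tables than the few core ones named here.

```java
import java.util.Arrays;
import java.util.List;

public class TableCheckSketch {

    // A few core Camunda engine tables; the full schema contains many more.
    static final List<String> CORE_TABLES = Arrays.asList(
        "ACT_GE_PROPERTY", "ACT_RE_DEPLOYMENT", "ACT_RU_EXECUTION", "ACT_RU_TASK");

    // Given the table names read from the database, report whether
    // the core engine tables are all present.
    static boolean coreTablesPresent(List<String> tableNames) {
        return tableNames.containsAll(CORE_TABLES);
    }

    public static void main(String[] args) {
        // Hypothetical result of listing tables after a successful first start.
        List<String> found = Arrays.asList(
            "ACT_GE_PROPERTY", "ACT_RE_DEPLOYMENT", "ACT_RU_EXECUTION",
            "ACT_RU_TASK", "ACT_HI_PROCINST");
        System.out.println(coreTablesPresent(found)); // true
    }
}
```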