When implementing table sharding rules using ShardingSphere 5.4.1, I encountered an issue. #34559

Open
lhyjava opened this issue Feb 5, 2025 · 0 comments

lhyjava commented Feb 5, 2025

I attempted to check whether the table name is vote_user_choice or vote_visit and, for those tables, use an expression-based algorithm for table sharding.
However, the application failed to start with the following error:
Caused by: groovy.lang.MissingPropertyException: No such property: divcol for class: Script1
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:67) ~[groovy-4.0.23.jar:4.0.23]
at org.codehaus.groovy.vmplugin.v8.IndyGuardsFiltersAndSignatures.unwrap(IndyGuardsFiltersAndSignatures.java:163) ~[groovy-4.0.23.jar:4.0.23]
at org.codehaus.groovy.vmplugin.v8.IndyInterface.fromCache(IndyInterface.java:321) ~[groovy-4.0.23.jar:4.0.23]
at Script1.run(Script1.groovy:1) ~[na:na]
at org.apache.shardingsphere.infra.expr.groovy.GroovyInlineExpressionParser.evaluate(GroovyInlineExpressionParser.java:117) ~[shardingsphere-infra-expr-groovy-5.4.1.jar:5.4.1]
at org.apache.shardingsphere.infra.expr.groovy.GroovyInlineExpressionParser.evaluate(GroovyInlineExpressionParser.java:104) ~[shardingsphere-infra-expr-groovy-5.4.1.jar:5.4.1]
at org.apache.shardingsphere.infra.expr.groovy.GroovyInlineExpressionParser.splitAndEvaluate(GroovyInlineExpressionParser.java:81) ~[shardingsphere-infra-expr-groovy-5.4.1.jar:5.4.1]
at org.apache.shardingsphere.sharding.rule.ShardingRule.getDataSourceNames(ShardingRule.java:205) ~[shardingsphere-sharding-core-5.4.1.jar:5.4.1]
at org.apache.shardingsphere.sharding.rule.ShardingRule.lambda$getDataSourceNames$8(ShardingRule.java:194) ~[shardingsphere-sharding-core-5.4.1.jar:5.4.1]
at java.base/java.lang.Iterable.forEach(Iterable.java:75) ~[na:na]
at org.apache.shardingsphere.sharding.rule.ShardingRule.getDataSourceNames(ShardingRule.java:194) ~[shardingsphere-sharding-core-5.4.1.jar:5.4.1]
at org.apache.shardingsphere.sharding.rule.ShardingRule.&lt;init&gt;(ShardingRule.java:124) ~[shardingsphere-sharding-core-5.4.1.jar:5.4.1]
at org.apache.shardingsphere.sharding.rule.builder.ShardingRuleBuilder.build(ShardingRuleBuilder.java:42) ~[shardingsphere-sharding-core-5.4.1.jar:5.4.1]
at org.apache.shardingsphere.sharding.rule.builder.ShardingRuleBuilder.build(ShardingRuleBuilder.java:36) ~[shardingsphere-sharding-core-5.4.1.jar:5.4.1]
at org.apache.shardingsphere.infra.rule.builder.database.DatabaseRulesBuilder.build(DatabaseRulesBuilder.java:65) ~[shardingsphere-infra-common-5.4.1.jar:5.4.1]
at org.apache.shardingsphere.infra.metadata.database.ShardingSphereDatabase.create(ShardingSphereDatabase.java:88) ~[shardingsphere-infra-common-5.4.1.jar:5.4.1]
at org.apache.shardingsphere.metadata.factory.ExternalMetaDataFactory.createGenericDatabases(ExternalMetaDataFactory.java:85) ~[shardingsphere-metadata-core-5.4.1.jar:5.4.1]
at org.apache.shardingsphere.metadata.factory.ExternalMetaDataFactory.create(ExternalMetaDataFactory.java:72) ~[shardingsphere-metadata-core-5.4.1.jar:5.4.1]
at org.apache.shardingsphere.mode.metadata.NewMetaDataContextsFactory.create(NewMetaDataContextsFactory.java:97) ~[shardingsphere-mode-core-5.4.1.jar:5.4.1]
at org.apache.shardingsphere.mode.metadata.NewMetaDataContextsFactory.create(NewMetaDataContextsFactory.java:71) ~[shardingsphere-mode-core-5.4.1.jar:5.4.1]
at org.apache.shardingsphere.mode.manager.standalone.NewStandaloneContextManagerBuilder.build(NewStandaloneContextManagerBuilder.java:53) ~[shardingsphere-standalone-mode-core-5.4.1.jar:5.4.1]
at org.apache.shardingsphere.driver.jdbc.core.datasource.ShardingSphereDataSource.createContextManager(ShardingSphereDataSource.java:78) ~[shardingsphere-jdbc-core-5.4.1.jar:5.4.1]
at org.apache.shardingsphere.driver.jdbc.core.datasource.ShardingSphereDataSource.&lt;init&gt;(ShardingSphereDataSource.java:66) ~[shardingsphere-jdbc-core-5.4.1.jar:5.4.1]
at org.apache.shardingsphere.driver.api.ShardingSphereDataSourceFactory.createDataSource(ShardingSphereDataSourceFactory.java:95) ~[shardingsphere-jdbc-core-5.4.1.jar:5.4.1]
at org.apache.shardingsphere.driver.api.ShardingSphereDataSourceFactory.createDataSource(ShardingSphereDataSourceFactory.java:78) ~[shardingsphere-jdbc-core-5.4.1.jar:5.4.1]
at com.lnbdy.datasource.utils.DatasourceUtil.getShardingDataSource(DatasourceUtil.java:542) ~[classes/:na]
at com.lnbdy.datasource.utils.DatasourceUtil.getDatasourceBean(DatasourceUtil.java:417) ~[classes/:na]
at com.lnbdy.datasource.DataSourceConfig.dataSource(DataSourceConfig.java:36) ~[classes/:na]
at com.lnbdy.datasource.DataSourceConfig$$SpringCGLIB$$0.CGLIB$dataSource$2(&lt;generated&gt;) ~[classes/:na]
at com.lnbdy.datasource.DataSourceConfig$$SpringCGLIB$$FastClass$$1.invoke(&lt;generated&gt;) ~[classes/:na]
at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:258) ~[spring-core-6.1.14.jar:6.1.14]
at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:348) ~[spring-context-6.1.14.jar:6.1.14]
at com.lnbdy.datasource.DataSourceConfig$$SpringCGLIB$$0.dataSource() ~[classes/:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:568) ~[na:na]
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:146) ~[spring-beans-6.1.14.jar:6.1.14]
... 73 common frames omitted
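For context on what this error means: the `$->{…}` part of an actualDataNodes string is evaluated as a Groovy inline expression when the sharding rule is built, so any identifier inside it must be an enumerable expression known at startup, not a column name. A bare identifier like `divcol` is looked up as a Groovy script variable and fails with the MissingPropertyException above. The sketch below is a hypothetical plain-Java helper (not ShardingSphere's actual parser) illustrating the kind of static expansion that happens at startup:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical helper, not the library API: expands an actualDataNodes
// expression of the form "prefix_$->{m..n}" into its concrete node list,
// mirroring how the Groovy inline expression parser enumerates nodes.
public class DataNodeExpansion {

    static List<String> expand(String expression) {
        Matcher m = Pattern.compile("\\$->\\{(\\d+)\\.\\.(\\d+)\\}").matcher(expression);
        if (!m.find()) {
            // A bare identifier such as "$->{divcol}" is not an enumerable
            // range; Groovy would try to resolve "divcol" as a script variable
            // and throw MissingPropertyException, as in the stack trace above.
            throw new IllegalArgumentException("not an enumerable range: " + expression);
        }
        int from = Integer.parseInt(m.group(1));
        int to = Integer.parseInt(m.group(2));
        List<String> nodes = new ArrayList<>();
        for (int i = from; i <= to; i++) {
            // Replace the range expression with each concrete suffix
            nodes.add(m.replaceFirst(String.valueOf(i)));
        }
        return nodes;
    }

    public static void main(String[] args) {
        // An enumerable range expands to concrete tables at startup
        System.out.println(expand("ds0.vote_visit_$->{0..2}"));
        // → [ds0.vote_visit_0, ds0.vote_visit_1, ds0.vote_visit_2]
    }
}
```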

Here is the source code:
package com.lnbdy.datasource.utils;

import cn.hutool.core.collection.CollectionUtil;
import com.alibaba.druid.pool.xa.DruidXADataSource;
import com.lnbdy.datasource.properties.DBProperties;
import com.lnbdy.datasource.properties.DatasourceProperties;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import jakarta.annotation.PostConstruct;
import jakarta.annotation.PreDestroy;
import org.apache.commons.lang3.StringUtils;
import org.apache.shardingsphere.driver.api.ShardingSphereDataSourceFactory;
import org.apache.shardingsphere.infra.config.algorithm.AlgorithmConfiguration;
import org.apache.shardingsphere.infra.config.mode.ModeConfiguration;
import org.apache.shardingsphere.mode.repository.standalone.StandalonePersistRepositoryConfiguration;
import org.apache.shardingsphere.sharding.api.config.ShardingRuleConfiguration;
import org.apache.shardingsphere.sharding.api.config.rule.ShardingTableRuleConfiguration;
import org.apache.shardingsphere.sharding.api.config.strategy.sharding.ShardingStrategyConfiguration;
import org.apache.shardingsphere.sharding.api.config.strategy.sharding.StandardShardingStrategyConfiguration;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

import javax.sql.DataSource;
import java.sql.SQLException;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.*;

/**
 * @author lhy
 * @Date 2024-04-03 09:21
 */
@Component
public class DatasourceUtil {

    @Autowired
    private DatasourceProperties dbProperties;
    @Autowired
    private DBProperties baseProperties;
    @Value("#{'${database.init-base:}'.empty ? null : '${database.init-base:}'.split(',')}")
    private Set<String> initBase;
    private ShardingRuleConfiguration shardingRuleConfig;
    private Properties properties;
    private static HikariDataSource configDataSource;

    @PostConstruct
    public void initBase() {
        properties = new Properties();
        properties.put("check.table.metadata.enabled", "false");
        properties.put("sql.show", "false");
        if (CollectionUtil.isNotEmpty(initBase)) {
            initBase.forEach(e -> {
                try {
                    refreshSourceNo(e);
                } catch (Exception ex) {
                    throw new RuntimeException(ex);
                } finally {
                    DatabaseContextHolder.clearDataSource();
                }
            });
        }
    }

    /**
     * Builds the table rule configuration.
     * Generates a sharding rule configuration for the given data source,
     * based on the configured sharding table names.
     *
     * @param datasource data source name, used to configure per-source table rules
     * @return ShardingRuleConfiguration containing all table rule configurations
     */
    private ShardingRuleConfiguration getTableRuleConfiguration(String datasource) {
        shardingRuleConfig = new ShardingRuleConfiguration();
        // Read the comma-separated list of sharded table names
        String tablestr = baseProperties.getShardingTables();
        if (StringUtils.isNotEmpty(tablestr)) {
            String[] tables = tablestr.split(",");
            // Configure a rule for each sharded table
            for (String table : tables) {
                if (table.equalsIgnoreCase("vote_user_choice") || table.equalsIgnoreCase("vote_visit")) {
                    shardingRuleConfig.getTables().add(createVoteVisitTableRule(datasource, table));
                } else {
                    shardingRuleConfig.getTables().add(getTableRuleConfiguration(datasource, table));
                }
            }
        }
        // Read the comma-separated list of non-sharded table names
        String tablestr2 = baseProperties.getDefaultTables();
        if (StringUtils.isNotEmpty(tablestr2)) {
            String[] tables = tablestr2.split(",");
            // Configure a rule for each non-sharded table
            for (String table : tables) {
                shardingRuleConfig.getTables().add(getDefaultTableRuleConfiguration(datasource, table));
            }
        }

        // Set the default database sharding strategy
        shardingRuleConfig.setDefaultDatabaseShardingStrategy(getDatabaseShardingStrategy());

        // Set the default table sharding strategy
        // (note: the second call overwrites the first)
        shardingRuleConfig.setDefaultTableShardingStrategy(
                new StandardShardingStrategyConfiguration("divcol", "his-data-spi-based"));
        shardingRuleConfig.setDefaultTableShardingStrategy(
                new StandardShardingStrategyConfiguration("divcol", "vote-id-sharding"));
        // Register the custom sharding algorithms
        shardingRuleConfig.getShardingAlgorithms().put("his-data-spi-based",
                new AlgorithmConfiguration("HIS_DATA_SPI_BASED", new Properties()));
        shardingRuleConfig.getShardingAlgorithms().put("vote-id-sharding",
                new AlgorithmConfiguration("VOTE_ID_BASED", new Properties()));

        return shardingRuleConfig;
    }

    /**
     * Refreshes the data source configuration for the given source name.
     * Does nothing if the name is null. Checks whether data sources named
     * "druid_" + name and "sharding_" + name already exist; any missing one
     * is created and registered with the dynamic data source.
     *
     * @param newsource name of the new data source, used to build the lookup keys
     * @throws IllegalStateException if the dynamic data source has not been initialized
     * @throws SQLException if a SQL error occurs while building a data source
     */
    public void refreshSourceNo(String newsource) throws IllegalStateException, SQLException {
        if (newsource == null) {
            return;
        }
        // Check whether the "druid_" and "sharding_" data sources exist
        boolean druid = DynamicDataSource.isExistDataSource("druid_" + newsource);
        boolean sharding = DynamicDataSource.isExistDataSource("sharding_" + newsource);
        // Nothing to do if both already exist
        if (druid && sharding) {
            return;
        }
        // Get the dynamic data source instance
        DynamicDataSource source = DynamicDataSource.getInstance();
        Map<Object, Object> map = new HashMap<>();

        // Create and register the "druid_" data source if it is missing
        if (!druid) {
            DataSource dataSource = getDataSource(newsource);
            map.put("druid_" + newsource, dataSource);
        }

        // Create and register the "sharding_" data source if it is missing
        if (!sharding) {
            DataSource shardingDataSource = getShardingDataSource(newsource);
            map.put("sharding_" + newsource, shardingDataSource);
        }

        // Update the dynamic data source targets
        source.setTargetDataSources(map);
    }

    /**
     * Refreshes the data source configuration for the given type.
     * If a data source with the given name already exists, returns true immediately;
     * otherwise a new data source of the given type is created and registered.
     * Returns false if the data source name is null.
     *
     * @param type data source type, either "druid" or "sharding"
     * @param newsource name of the new data source, used to build the lookup key
     * @return true if the data source exists or was added, false otherwise
     * @throws IllegalStateException if the dynamic data source has not been initialized
     * @throws SQLException if a SQL error occurs while building the data source
     */
    public boolean refreshSource(String type, String newsource) throws IllegalStateException, SQLException {
        if (newsource == null) {
            return false;
        }
        // Build the data source key
        String name = type + "_" + newsource;
        // Check whether the data source already exists
        boolean druid = DynamicDataSource.isExistDataSource(name);
        if (druid) {
            return true;
        }
        // Get the dynamic data source instance
        DynamicDataSource source = DynamicDataSource.getInstance();
        Map<Object, Object> map = new HashMap<>();

        // Register the data source matching the requested type
        if ("druid".equals(type)) {
            DataSource dataSource = getDataSource(newsource);
            map.put(name, dataSource);
        } else if ("sharding".equals(type)) {
            DataSource shardingDataSource = getShardingDataSource(newsource);
            map.put(name, shardingDataSource);
        }
        // Update the dynamic data source targets
        source.setTargetDataSources(map);
        return true;
    }

    /**
     * Creates a map containing the master database data source.
     *
     * @return Map keyed by the master database name, with a configured
     *         DruidXADataSource as the value
     * @throws SQLException if setting data source properties fails
     */
    public Map<String, DataSource> createDataSourceMap() throws SQLException {
        Map<String, DataSource> dataSourceMap = new HashMap<>();

        // Initialize the DruidXADataSource and configure its connection properties
        DruidXADataSource data = new DruidXADataSource();
        data.setDriverClassName(baseProperties.getMasterDrivername());
        data.setUrl("jdbc:mysql://" + baseProperties.getMasterIp() + "/" + baseProperties.getMasterBasename() + "?serverTimezone=GMT%2B8&characterEncoding=utf-8");
        data.setUsername(baseProperties.getCommonUsername());
        data.setPassword(baseProperties.getCommonPassword());

        // Apply the common pool settings
        setbaseProperties(data);

        // Register the data source under the master database name
        dataSourceMap.put(baseProperties.getMasterBasename(), data);

        return dataSourceMap;
    }

    /**
     * Creates a data source map for the given database.
     *
     * @param database database name
     * @return map of database name to its DataSource, or null if initialization fails
     * @throws SQLException if setting data source properties fails
     */
    public Map<String, DataSource> createDataSourceMap(String database) throws SQLException {
        Map<String, DataSource> dataSourceMap = new HashMap<>();
        DruidXADataSource data = new DruidXADataSource();
        data.setDriverClassName(baseProperties.getCommonDrivername());

        // Build the JDBC URL according to the source type
        if (0 == baseProperties.getSouceType()) {
            data.setUrl("jdbc:mysql://" + database + "?serverTimezone=GMT%2B8&characterEncoding=utf-8");
        } else {
            data.setUrl("jdbc:mysql://" + baseProperties.getCommonIp() + "/" + database + "?serverTimezone=GMT%2B8&characterEncoding=utf-8");
        }

        data.setUsername(baseProperties.getCommonUsername());
        data.setPassword(baseProperties.getCommonPassword());
        setbaseProperties(data);

        // Try to initialize the data source; on failure, log and return null
        try {
            data.init();
        } catch (SQLException e) {
            e.printStackTrace();
            return null;
        }

        dataSourceMap.put(database, data);
        return dataSourceMap;
    }

    /**
     * Creates a data source map for the super database.
     *
     * @return map of database name to its DataSource
     * @throws SQLException if setting data source properties or initializing fails
     */
    public Map<String, DataSource> createDataSourceMapSuper() throws SQLException {
        Map<String, DataSource> dataSourceMap = new HashMap<>();
        DruidXADataSource data = new DruidXADataSource();

        // Driver class name
        data.setDriverClassName(baseProperties.getSuperDrivername());

        // JDBC URL
        data.setUrl("jdbc:mysql://" + baseProperties.getSuperIp() + "/" + baseProperties.getSuperBase() + "?serverTimezone=GMT%2B8&characterEncoding=utf-8");

        // Credentials
        data.setUsername(baseProperties.getSuperUsername());
        data.setPassword(baseProperties.getSuperPassword());

        // Apply the common pool settings
        setbaseProperties(data);

        // Register the super database data source
        dataSourceMap.put(baseProperties.getSuperBase(), data);

        return dataSourceMap;
    }

    /**
     * Builds the super data source: a DruidXADataSource configured with the
     * super database's driver class name, URL, username, and password.
     *
     * @return the configured DruidXADataSource
     * @throws SQLException if setting data source properties fails
     */
    private Object getSuperDataSource() throws SQLException {
        DruidXADataSource druidDataSource = new DruidXADataSource();
        druidDataSource.setDriverClassName(baseProperties.getSuperDrivername());
        druidDataSource.setUrl("jdbc:mysql://" + baseProperties.getSuperIp() + "/" + baseProperties.getSuperBase() + "?serverTimezone=GMT%2B8&characterEncoding=utf-8");
        druidDataSource.setUsername(baseProperties.getSuperUsername());
        druidDataSource.setPassword(baseProperties.getSuperPassword());
        // Apply the common pool settings
        setbaseProperties(druidDataSource);
        return druidDataSource;
    }

    /**
     * Builds the master data source: a DruidXADataSource configured with the
     * master database's driver class name, URL, username, and password.
     *
     * @return the configured DruidXADataSource
     * @throws SQLException if setting data source properties fails
     */
    private Object getMasterDataSource() throws SQLException {
        DruidXADataSource druidDataSource = new DruidXADataSource();
        druidDataSource.setDriverClassName(baseProperties.getMasterDrivername());
        druidDataSource.setUrl("jdbc:mysql://" + baseProperties.getMasterIp() + "/" + baseProperties.getMasterBasename() + "?serverTimezone=GMT%2B8&characterEncoding=utf-8");
        druidDataSource.setUsername(baseProperties.getMasterUsername());
        druidDataSource.setPassword(baseProperties.getMasterPassword());
        // Apply the common pool settings
        setbaseProperties(druidDataSource);
        return druidDataSource;
    }

    /**
     * Builds a DruidXADataSource for the given base.
     * Depending on the source type, the JDBC URL is built either from the base
     * address alone or from the common IP plus the base name.
     * The data source is initialized before being returned.
     *
     * @param base base address or database name used to build the JDBC URL
     * @return the configured and initialized DruidXADataSource
     * @throws SQLException if initialization fails
     */
    public DataSource getDataSource(String base) throws SQLException {
        DruidXADataSource druidDataSource = new DruidXADataSource();
        // Common driver class name
        druidDataSource.setDriverClassName(baseProperties.getCommonDrivername());

        // Build the JDBC URL according to the source type
        if (0 == baseProperties.getSouceType()) {
            druidDataSource.setUrl("jdbc:mysql://" + base + "?serverTimezone=GMT%2B8&characterEncoding=utf-8");
        } else {
            druidDataSource.setUrl("jdbc:mysql://" + baseProperties.getCommonIp() + "/" + base + "?serverTimezone=GMT%2B8&characterEncoding=utf-8");
        }

        // Common credentials
        druidDataSource.setUsername(baseProperties.getCommonUsername());
        druidDataSource.setPassword(baseProperties.getCommonPassword());
        setbaseProperties(druidDataSource);

        // Try to initialize the data source; on failure, rethrow as a runtime exception
        try {
            druidDataSource.init();
        } catch (SQLException e) {
            e.printStackTrace();
            throw new RuntimeException("Data source initialization failed for druid data source: " + base);
        }

        return druidDataSource;
    }

    /**
     * Builds the dynamic data source bean.
     * Creates a DynamicDataSource that manages the master data source and,
     * if configured, the super data source.
     *
     * @return DynamicDataSource containing the master and optional super data sources
     * @throws Exception if building a data source fails
     */
    public DynamicDataSource getDatasourceBean() throws Exception {
        // Get the dynamic data source singleton
        DynamicDataSource dynamicDataSource = DynamicDataSource.getInstance();
        Map<Object, Object> targetDataSource = new HashMap<>();
        // Keys for the master data source and its sharding counterpart
        String masterBasename = "druid_" + baseProperties.getMasterBasename();
        String masterBaseNameSharding = "sharding_" + baseProperties.getMasterBasename();

        // Register the master data source and its sharding counterpart first,
        // so the super data source can reuse the same instances below
        targetDataSource.put(masterBasename, getMasterDataSource());
        targetDataSource.put(masterBaseNameSharding, getShardingDataSource(baseProperties.getMasterBasename()));

        // Check whether a super data source IP is configured
        if (StringUtils.isNotEmpty(baseProperties.getSuperIp())) {
            // Keys for the super data source and its sharding counterpart
            String superBasename = "druid_" + baseProperties.getSuperBase();
            String superBaseNameSharding = "sharding_" + baseProperties.getSuperBase();
            // If the super IP equals the master IP, share the same instances
            if (baseProperties.getSuperIp().equals(baseProperties.getMasterIp())) {
                targetDataSource.put(superBasename, targetDataSource.get(masterBasename));
                targetDataSource.put(superBaseNameSharding, targetDataSource.get(masterBaseNameSharding));
            } else {
                // Otherwise build dedicated instances for the super data source
                targetDataSource.put(superBasename, getSuperDataSource());
                targetDataSource.put(superBaseNameSharding, getShardingDataSource(baseProperties.getSuperBase()));
            }
        }

        // Set the target data sources and default to the master data source
        dynamicDataSource.setTargetDataSources(targetDataSource);
        dynamicDataSource.setDefaultTargetDataSource(getMasterDataSource());
        // Return the configured dynamic data source
        return dynamicDataSource;
    }

    private void setbaseProperties(DruidXADataSource druidDataSource) throws SQLException {
        druidDataSource.setFilters(dbProperties.getFilters());
        druidDataSource.setInitialSize(dbProperties.getInitialSize());
        druidDataSource.setMaxActive(dbProperties.getMaxActive());
        druidDataSource.setMaxWait(dbProperties.getMaxWait());
        druidDataSource.setTimeBetweenEvictionRunsMillis(dbProperties.getTimeBetweenEvictionRunsMillis());
        druidDataSource.setMinEvictableIdleTimeMillis(dbProperties.getMinEvictableIdleTimeMillis());
        druidDataSource.setValidationQuery(dbProperties.getValidationQuery());
        druidDataSource.setTestWhileIdle(dbProperties.getTestWhileIdle());
        druidDataSource.setTestOnBorrow(dbProperties.getTestOnBorrow());
        druidDataSource.setTestOnReturn(dbProperties.getTestOnReturn());
        druidDataSource.setPoolPreparedStatements(dbProperties.getPoolPreparedStatements());
        druidDataSource.setMaxPoolPreparedStatementPerConnectionSize(dbProperties.getMaxPoolPreparedStatementPerConnectionSize());
        druidDataSource.setMinIdle(dbProperties.getMinIdle());
        druidDataSource.setRemoveAbandoned(dbProperties.getRemoveAbandoned());
        druidDataSource.setRemoveAbandonedTimeout(dbProperties.getRemoveAbandonedTimeout());
        druidDataSource.setMaxOpenPreparedStatements(dbProperties.getMaxOpenPreparedStatements());
    }

    /**
     * Creates the ShardingSphere mode configuration.
     *
     * Problems addressed:
     * 1. "Zip file closed" errors: when initializing the JDBC repository,
     *    ShardingSphere reads SQL files from its jar; with the default
     *    configuration this loading can fail with a jar access error.
     * 2. Multiple HikariPool instances: each JDBC repository initialization
     *    creates a new data source; by default these are not reused,
     *    wasting resources.
     *
     * Solutions:
     * 1. For the "Zip file closed" error: provide a complete JDBC repository
     *    configuration, including data source information, and use an in-memory
     *    H2 database for persistence to avoid the file access problem.
     * 2. For the multiple data source problem: manage the config data source
     *    as a static singleton with double-checked locking, and pass the
     *    existing instance through the Properties instead of creating a new one.
     *
     * @return ModeConfiguration the ShardingSphere mode configuration
     */
    private ModeConfiguration createModeConfiguration() {
        Properties props = new Properties();
        // H2 in-memory database connection settings
        props.setProperty("path", "jdbc:h2:mem:config;DB_CLOSE_DELAY=-1;DATABASE_TO_UPPER=false;MODE=MYSQL");
        props.setProperty("username", "sa");
        props.setProperty("password", "");
        props.setProperty("provider", "H2");
        // Initializer used to create the required metadata tables
        props.setProperty("initializer-name",
                "org.apache.shardingsphere.mode.repository.standalone.jdbc.JDBCRepository");
        props.setProperty("max-retries", "3");

        // Initialize or reuse the existing config data source
        initConfigDataSource(props);
        // Pass the existing data source instance through the properties so
        // ShardingSphere reuses it instead of creating a new one
        props.put("dataSource", configDataSource);

        // Standalone mode with JDBC persistence
        return new ModeConfiguration("Standalone",
                new StandalonePersistRepositoryConfiguration("JDBC", props));
    }

    /**
     * Initializes the config data source.
     * Uses double-checked locking to guarantee thread safety and a single instance.
     */
    private void initConfigDataSource(Properties props) {
        if (configDataSource == null) {
            synchronized (DatasourceUtil.class) {
                if (configDataSource == null) {
                    HikariConfig config = new HikariConfig();
                    config.setJdbcUrl(props.getProperty("path"));
                    config.setUsername(props.getProperty("username"));
                    config.setPassword(props.getProperty("password"));
                    config.setPoolName("ShardingSphereConfigPool");
                    config.setMaximumPoolSize(5);
                    config.setMinimumIdle(1);
                    config.setIdleTimeout(300000);
                    config.setConnectionTimeout(20000);
                    config.setAutoCommit(true);
                    // Disable JMX to avoid duplicate MBean registration
                    config.setRegisterMbeans(false);

                    configDataSource = new HikariDataSource(config);
                }
            }
        }
    }

    @PreDestroy
    public void destroy() {
        if (configDataSource != null) {
            configDataSource.close();
        }
    }

    /**
     * Builds the sharding data source for the given database name.
     *
     * @param database database name
     * @return the sharding DataSource, or null if the data source map could not be created
     * @throws SQLException if building the data source fails
     */
    public DataSource getShardingDataSource(String database) throws SQLException {
        // Build the mode configuration
        ModeConfiguration modeConfiguration = createModeConfiguration();
        // Build the table rule configuration for the database
        ShardingRuleConfiguration tableRuleConfiguration = getTableRuleConfiguration(database);

        // Master database: build the master data source
        if (database.equals(baseProperties.getMasterBasename())) {
            return ShardingSphereDataSourceFactory.createDataSource(modeConfiguration,
                    createDataSourceMap(), Collections.singleton(tableRuleConfiguration), properties);
        // Super database (when a super IP is configured): build the super data source
        } else if (database.equals(baseProperties.getSuperBase()) && StringUtils.isNotEmpty(baseProperties.getSuperIp())) {
            return ShardingSphereDataSourceFactory.createDataSource(modeConfiguration,
                    createDataSourceMapSuper(), Collections.singleton(tableRuleConfiguration), properties);
        }

        // Build the DataSource map for the database
        Map<String, DataSource> dataSourceMap = createDataSourceMap(database);
        if (dataSourceMap == null) {
            return null;
        }

        // Build and return the sharding data source from the map
        return ShardingSphereDataSourceFactory.createDataSource(modeConfiguration,
                dataSourceMap, Collections.singleton(tableRuleConfiguration), properties);
    }

    /**
     * Table sharding rule by id.
     *
     * @param datasource data source name
     * @param table logical table name
     * @return ShardingTableRuleConfiguration
     */
    private ShardingTableRuleConfiguration createVoteVisitTableRule(String datasource, String table) {
        // Define the actual data nodes with an expression (to allow dynamic table names)
        String actualDataNodes = datasource + "." + table + "_$->{divcol}";

        ShardingTableRuleConfiguration ruleConfig = new ShardingTableRuleConfiguration(
                table,
                actualDataNodes
        );

        // Table sharding strategy: divcol column with the custom algorithm
        ruleConfig.setTableShardingStrategy(new StandardShardingStrategyConfiguration(
                "divcol",
                "vote-id-sharding"
        ));

        return ruleConfig;
    }

    /**
     * Builds the table sharding rule for the given data source and table.
     * The table is split by date, one physical table per month.
     *
     * @param datasource data source name
     * @param tablename logical table name
     * @return ShardingTableRuleConfiguration
     */
    private ShardingTableRuleConfiguration getTableRuleConfiguration(String datasource, String tablename) {
        // Start and end dates used to enumerate all monthly shard tables
        LocalDate startDate = LocalDate.of(2020, 3, 1);
        LocalDate endDate = LocalDate.now();

        List<String> actualTableNames = new ArrayList<>();
        // Generate a shard table name for every month in the range
        while (!startDate.isAfter(endDate)) {
            String shardTableName = datasource + "." + tablename + "_" + startDate.format(DateTimeFormatter.ofPattern("yyyyMM"));
            actualTableNames.add(shardTableName);
            startDate = startDate.plusMonths(1);
        }

        // Build the table rule configuration
        ShardingTableRuleConfiguration result = new ShardingTableRuleConfiguration(tablename, String.join(",", actualTableNames));
        // Table sharding strategy
        result.setTableShardingStrategy(new StandardShardingStrategyConfiguration(
                "divcol", "his-data-spi-based"));

        return result;
    }

    private ShardingTableRuleConfiguration getDefaultTableRuleConfiguration(String datasource, String tablename) {
        ShardingTableRuleConfiguration result = new ShardingTableRuleConfiguration(tablename, datasource + "." + tablename);
        return result;
    }

    private ShardingStrategyConfiguration getDatabaseShardingStrategy() {
        return null;
    }
}
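One note on the algorithm registration above: in ShardingSphere 5.x, an AlgorithmConfiguration type such as "VOTE_ID_BASED" is resolved through Java SPI, so the implementation class must be listed in a provider file on the classpath. Assuming the VoteIdShardingAlgorithm class from this issue, the file would look roughly like this (verify the exact SPI interface path against your ShardingSphere version):

```
# src/main/resources/META-INF/services/org.apache.shardingsphere.sharding.spi.ShardingAlgorithm
com.lnbdy.datasource.utils.VoteIdShardingAlgorithm
```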

package com.lnbdy.datasource.utils;

import org.apache.shardingsphere.sharding.api.sharding.standard.PreciseShardingValue;
import org.apache.shardingsphere.sharding.api.sharding.standard.RangeShardingValue;
import org.apache.shardingsphere.sharding.api.sharding.standard.StandardShardingAlgorithm;
import org.springframework.stereotype.Component;

import java.util.Collection;
import java.util.Properties;

/**
 * @author lhy
 * @Date 2025-02-05 10:41
 */
@Component
public class VoteIdShardingAlgorithm implements StandardShardingAlgorithm<Long> {

    @Override
    public String doSharding(Collection<String> availableTargetNames, PreciseShardingValue<Long> preciseShardingValue) {
        String logicTableName = preciseShardingValue.getLogicTableName();
        Long divcol = preciseShardingValue.getValue();
        System.out.println("Logic table name: " + logicTableName);
        System.out.println("Divcol value: " + divcol);
        System.out.println("Available target names: " + availableTargetNames);
        return logicTableName + "_" + divcol; // return the actual table name
    }

    @Override
    public Collection<String> doSharding(Collection<String> availableTargetNames, RangeShardingValue<Long> rangeShardingValue) {
        System.out.println("Range sharding called");
        return availableTargetNames; // range queries hit all candidate tables (refine as needed)
    }

    @Override
    public String getType() {
        return "VOTE_ID_BASED";
    }

    public Properties getProps() {
        return new Properties();
    }

    @Override
    public void init(Properties properties) {
        // initialization logic (if needed)
    }
}
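The precise doSharding implementation above returns logicTableName + "_" + divcol unconditionally, but ShardingSphere only routes to names that appear in availableTargetNames. A defensive variant (a hypothetical sketch, not the issue author's code; the pickTarget helper is illustrative) would validate the candidate against the available set and fail fast:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

// Hypothetical sketch: a defensive precise-sharding lookup that only returns
// a table name actually present among the available target names.
public class TargetLookup {

    static String pickTarget(Collection<String> availableTargetNames, String logicTableName, long divcol) {
        String candidate = logicTableName + "_" + divcol;
        return availableTargetNames.stream()
                .filter(candidate::equals)
                .findFirst()
                .orElseThrow(() -> new IllegalStateException(
                        "no actual table " + candidate + " among " + availableTargetNames));
    }

    public static void main(String[] args) {
        List<String> targets = Arrays.asList("vote_visit_0", "vote_visit_1", "vote_visit_2");
        System.out.println(pickTarget(targets, "vote_visit", 1)); // → vote_visit_1
    }
}
```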
