Apollo all-in-one deployment? #4313

Open
nobodyiam opened this issue Apr 14, 2022 · 7 comments
Labels
discussion Categorizes issue as related to discussion

Comments

@nobodyiam
Member

Is your feature request related to a problem? Please describe.
Currently, Apollo suggests deploying 3 different microservices (apollo-configservice, apollo-adminservice, apollo-portal) for the production environment. This makes sense for medium and large companies.
However, for small companies that may run only tens of microservices, deploying Apollo in this structure is a significant cost.

Describe the solution you'd like
Provide a production-ready solution that reduces resource usage, e.g. an all-in-one deployment.

Describe alternatives you've considered
There is a Quick Start version; however, it is not production-ready, and the Apollo Portal still runs as a standalone process.

nobodyiam added the discussion label Apr 14, 2022
@Anilople
Contributor

+1.


Would we keep only one database? How do we resolve the conflict between the table names in ApolloPortalDB and ApolloConfigDB?

@nobodyiam
Member Author

There are 2 tables, App and AppNamespace, that exist in both ApolloPortalDB and ApolloConfigDB. The ones in ApolloConfigDB are actually replicas, since apollo-configservice/apollo-adminservice also need this data but cannot depend on apollo-portal.
So basically there is no conflict if we merge the tables into one database. However, that might cause problems if the user later wants to split the all-in-one deployment into a distributed deployment.
So perhaps we keep the 2 databases and configure 2 data sources in the all-in-one deployment?

@Anilople
Contributor

I think

keep the 2 databases and configure 2 data sources

is better. Users can split the deployment more easily in the future.

How do we handle the configuration of the 2 data sources?

Add new configuration keys and values for it?

spring:
  datasource:
    todos:
      url: ...
      username: ...
      password: ...
      driverClassName: ...
    topics:
      url: ...
      username: ...
      password: ...
      driverClassName: ...

We might need to write some Spring auto-configuration for it?

@nobodyiam
Member Author

Add new configuration keys and values for it?

Yes, we need to configure separate data source properties and the JPA setup for each database, following this tutorial.
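
As a rough sketch (assuming hypothetical property prefixes, not Apollo's actual configuration), the properties side of the all-in-one deployment might expose one prefix per database, which the Java configuration would then bind to separate DataSource/EntityManagerFactory beans as the tutorial describes:

spring:
  portal-datasource:    # hypothetical prefix bound to ApolloPortalDB
    url: jdbc:mysql://localhost:3306/ApolloPortalDB?characterEncoding=utf8
    username: ...
    password: ...
  config-datasource:    # hypothetical prefix bound to ApolloConfigDB
    url: jdbc:mysql://localhost:3306/ApolloConfigDB?characterEncoding=utf8
    username: ...
    password: ...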

@fananchong

To simplify deployment, I directly use 1 all-in-one instance to run the portal, and 2 all-in-one instances for the 2 envs.
But I ran into a problem:
(screenshot omitted)
The all-in-one instance registers its container-internal address, so the portal cannot connect to it.
I could only change the network in the all-in-one docker-compose.yml to host.

Also, since the all-in-one starts portal, config, and admin all together, I had to hack demo.sh into 2 scripts that start the portal and the config/admin services separately, and mount them in docker-compose.yml, like this:

volumes:
      - ./entrypoint.sh:/apollo-quick-start/demo.sh

@fananchong

I then saw a note in the documentation: after adding the parameter below, the host network is no longer needed:

environment:
      JAVA_OPTS: "-Deureka.instance.homePageUrl=http://172.26.144.21:8081"
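
Putting the two fragments together, a minimal docker-compose sketch for one all-in-one container might look like the following; the service name, image, and port mapping are assumptions for illustration, while the volume override and JAVA_OPTS come from the comments above:

services:
  apollo-quick-start:    # hypothetical service name for the all-in-one container
    image: apolloconfig/apollo-quick-start
    # network_mode: host    # alternative workaround: share the host network instead
    ports:
      - "8081:8080"    # example host mapping consistent with the homePageUrl below
    volumes:
      # replace the bundled demo.sh with an entrypoint that starts only the desired services
      - ./entrypoint.sh:/apollo-quick-start/demo.sh
    environment:
      # advertise a host-reachable address to Eureka instead of the container-internal one
      JAVA_OPTS: "-Deureka.instance.homePageUrl=http://172.26.144.21:8081"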

@fananchong

I use Helm to generate the docker-compose yml file from templates.
You only need to fill in the required database and environment information:
https://github.com/fananchong/apollo_helm_script

The project could officially provide something similar for k8s.
