
Data Persistence #16

Closed
radford1 opened this issue Aug 26, 2020 · 8 comments

Comments

@radford1

Is there a way to persist data even after the container is removed and re-deployed? I am able to successfully mount the data directory following your instructions, and the data persists when I stop and start the container, but when I remove and re-deploy the container the existing entities do not appear in the Atlas UI.
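
For context, the mount I am referring to looks roughly like this; a minimal sketch only, with the container-side data path assumed from the 2.1.0 image layout, so exact paths and flags may differ:

docker run -d \
    -p 21000:21000 \
    -v /opt/atlas_docker/data:/opt/apache-atlas-2.1.0/data \
    --name atlas \
    sburn/apache-atlas \
    /opt/apache-atlas-2.1.0/bin/atlas_start.py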

@rodrigoassumpcao

Hi @radford1. I'm trying this approach too.

Actually, the data is not missing; try using this filter:

  • Type: _ALL_ENTITY_TYPES
  • Classification: _NOT_CLASSIFIED

I think the problem is with Solr. I'm trying to set SOLR_DATA_HOME, but without success so far.
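
For reference, this is roughly how I am trying to set it in a plain docker run; a minimal sketch only, and the SOLR_DATA_HOME value and paths here are assumptions on my part (as noted, this has not solved the problem so far):

docker run -d \
    -e SOLR_DATA_HOME=/opt/apache-atlas-2.1.0/data/solr \
    -v /opt/atlas_docker/data:/opt/apache-atlas-2.1.0/data \
    --name atlas \
    sburn/apache-atlas \
    /opt/apache-atlas-2.1.0/bin/atlas_start.py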

@rodrigoassumpcao commented Sep 25, 2020

Hey there.

I solved this problem, in case you are still looking for a solution.

It really is a problem with Solr. Setting SOLR_DATA_HOME should work, but the core.properties file of each index is created in the default folder, without taking the SOLR_DATA_HOME env variable into account.

These steps resolve the problem:

  • Create the volume and volumeMount as usual in the k8s YAML (a minimal sketch follows at the end of this comment)
  • Do not set SOLR_DATA_HOME
  • Create these three folders inside the file system that backs the k8s volume:
    • vertex_index_shard1_replica_n1
    • fulltext_index_shard1_replica_n1
    • edge_index_shard1_replica_n1
  • In the Atlas Dockerfile, create symbolic links to these folders:
RUN ln -sf /mnt/atlas-2.1.0/solr/vertex_index_shard1_replica_n1 $ATLAS_HOME_DIR/solr/server/solr/vertex_index_shard1_replica_n1 && \
    ln -sf /mnt/atlas-2.1.0/solr/fulltext_index_shard1_replica_n1 $ATLAS_HOME_DIR/solr/server/solr/fulltext_index_shard1_replica_n1 && \
    ln -sf /mnt/atlas-2.1.0/solr/edge_index_shard1_replica_n1 $ATLAS_HOME_DIR/solr/server/solr/edge_index_shard1_replica_n1

(/mnt/atlas-2.1.0 is the mountPath of the volumeMounts in the k8s spec)

With these steps in place, the Atlas Pod can be restarted or re-deployed without losing data.
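
A minimal sketch of the volume/volumeMount part of the pod spec, assuming a PersistentVolumeClaim named atlas-data-pvc and a custom image built from the Dockerfile above (both names are placeholders; adapt to your cluster):

    spec:
      containers:
        - name: atlas
          image: my-registry/apache-atlas-custom:2.1.0   # placeholder: the image built from the Dockerfile with the symlinks above
          volumeMounts:
            - name: atlas-data
              mountPath: /mnt/atlas-2.1.0                # must match the symlink targets in the Dockerfile
      volumes:
        - name: atlas-data
          persistentVolumeClaim:
            claimName: atlas-data-pvc                    # placeholder PVC name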

@sburn (Owner) commented Jan 20, 2021

Probably need to add this to the README. Thanks!

@sburn pinned this issue Jan 20, 2021
@TuncTaylan commented Mar 24, 2021

@rodrigoassumpcao Can you share your OpenShift/Kubernetes templates?

@krrgithub00

@rodrigoassumpcao - Thanks for the step-by-step instructions on how to persist the Atlas data. Is there a version for a Windows environment? Much appreciated, thanks!

@rodrigoassumpcao

Hi @TuncTaylan, I will look for the template and post it here.

Hi @krrgithub00, I have only tried this with k8s, using a Linux image.

@krrgithub00 commented Apr 27, 2021 via email

@taizilongxu commented Dec 24, 2021

Hi, the right config may be:

docker run -d \
    -v /opt/atlas_docker/data:/opt/apache-atlas-2.1.0/data \
    -v /opt/atlas_docker/logs:/opt/apache-atlas-2.1.0/logs \
    -v /opt/atlas_docker/conf:/opt/apache-atlas-2.1.0/conf \
    -v /opt/atlas_docker/data/solr/edge_index_shard1_replica_n1:/opt/apache-atlas-2.1.0/solr/server/solr/edge_index_shard1_replica_n1 \
    -v /opt/atlas_docker/data/solr/fulltext_index_shard1_replica_n1:/opt/apache-atlas-2.1.0/solr/server/solr/fulltext_index_shard1_replica_n1 \
    -v /opt/atlas_docker/data/solr/vertex_index_shard1_replica_n1:/opt/apache-atlas-2.1.0/solr/server/solr/vertex_index_shard1_replica_n1 \
    --network=host \
    -p 21001:21000 \
    --name atlas \
    sburn/apache-atlas \
    /opt/apache-atlas-2.1.0/bin/atlas_start.py
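
Two notes on the command above: because --network=host is used, Docker ignores the -p 21001:21000 mapping and Atlas listens on the host's port 21000 directly; and a quick way to confirm persistence after a full remove/re-deploy is a basic search against the REST API (a minimal sketch, assuming the default admin/admin credentials):

docker rm -f atlas    # remove the container completely
# ... re-run the docker run command above and wait for Atlas to come up ...
curl -u admin:admin "http://localhost:21000/api/atlas/v2/search/basic?typeName=_ALL_ENTITY_TYPES"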

@sburn closed this as not planned Dec 12, 2022