
Cannot run symbolicator tests #51241

Closed
armenzg opened this issue Jun 19, 2023 · 15 comments

@armenzg
Member

armenzg commented Jun 19, 2023

I had to refresh some symbolicator tests for #51040 but couldn't, because the tests would not run for me locally.
SENTRY_SNAPSHOTS_WRITEBACK=1 pytest -s -v tests/symbolicator -k test_full_minidump_invalid_extra would fail for me with "relay did not start in time" [1].

This is now heavily impacting my work on #53643. I've tried setting up an M2 machine and I hit the same problem.

I did set symbolicator.enabled: true in ~/.sentry/config.yml.

Running these commands to reproduce:

sentry devservices up --project test
pytest -s -vv tests/symbolicator/test_unreal_full.py -k test_unreal_apple_crash_with_attachments

[1]

    raise ValueError(f"relay did not start in time:\n{container.logs()}") from ex
E   ValueError: relay did not start in time:
E   b'<jemalloc>: MADV_DONTNEED does not work (memset will be used instead)\n<jemalloc>: (This is the expected behaviour if you are running under QEMU)\n2023-06-19T11:09:36.803391Z  INFO relay::setup: launching relay from config folder /etc/relay\n2023-06-19T11:09:36.807498Z  INFO relay::setup:   relay mode: managed\n2023-06-19T11:09:36.807692Z  INFO relay::setup:   relay id: 88888888-4444-4444-8444-cccccccccccc\n2023-06-19T11:09:36.807951Z  INFO relay::setup:   public key: SMSesqan65THCV6M4qs4kBzPai60LzuDn-xNsvYpuP8\n2023-06-19T11:09:36.808608Z  INFO relay::setup:   log level: TRACE\n2023-06-19T11:09:36.808913Z  
INFO relay_server: relay server starting\n2023-06-19T11:09:36.973770Z  
INFO relay_server::actors::upstream: registering with upstream descriptor=http://host.docker.internal:60314/\n2023-06-19T11:09:37.098941Z ERROR r2d2: failed to lookup address information: Name or service not known    \n2023-06-19T11:09:37.098946Z ERROR r2d2: failed to lookup address
...
information: Name or service not known    \n2023-06-19T11:09:37.624999Z ERROR r2d2: failed to lookup address information: Name or service not known    \n2023-06-19T11:09:37.878686Z DEBUG relay_server::actors::upstream: got register challenge token="eyJ0aW1lc3RhbXAiOjE2ODcxNzI5NzcsInJlbGF5X2lkIjoiODg4ODg4ODgtNDQ0NC00NDQ0LTg0NDQtY2NjY2NjY2NjY2NjIiwicHVibGljX2tleSI6IlNNU2VzcWFuNjVUSENWNk00cXM0a0J6UGFpNjBMenVEbi14TnN2WXB1UDgiLCJyYW5kIjoiUmtOUzVuT2QzVnZtNzF4TTRxM0txWXRBSmgyM05jWGZIODBlWUlYZHR2cmF2LW5mdmpOYUhqNHVkV3lqWW1mTFlNX2JyWlEwUndNQldySnNoRmp3V3cifQ:JPI96ALeu7xJUyBvaUMR45mlUCt13SFid0eYyJWtroT1UmcjaA0ZhivKTSRENtcyioinfVhDIguYeWtJBLnd9w"\n2023-06-19T11:09:37.879186Z DEBUG relay_server::actors::upstream: sending register challenge response\n2023-06-19T11:09:37.911804Z  INFO relay_server::actors::upstream: relay successfully registered with upstream\n'
@Swatinem
Member

I believe this rather looks like relay did not start in time or, according to the error message, like it failed to connect to postgres.


@armenzg
Member Author

armenzg commented Aug 11, 2023

Similar to #51471

@joshuarli
Member

You need to set symbolicator.enabled: true in your ~/.sentry/config.yml and rerun sentry devservices up.

Those instructions are in all the files under tests/symbolicator, but it would be better to have the tests detect that symbolicator is not running and guide you on what to do.

What CI does (host networking) is not what should be done locally in this case. Starting services via devservices locally only works with host.docker.internal, which is a Docker for Mac feature.
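A minimal sketch, purely for illustration, of the kind of guard the tests could grow: an autouse pytest fixture that probes the host port the devservices symbolicator publishes (127.0.0.1:3021, per the docker ps output later in this thread) and skips with instructions instead of timing out. The fixture name and hardcoded port are assumptions, not the actual test code.

    import socket

    import pytest

    @pytest.fixture(autouse=True)
    def require_symbolicator():
        # Probe the host port the devservices symbolicator container publishes.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(1)
            if sock.connect_ex(("127.0.0.1", 3021)) != 0:
                pytest.skip(
                    "symbolicator is not running; set symbolicator.enabled: true in "
                    "~/.sentry/config.yml and rerun sentry devservices up"
                )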

armenzg changed the title from "Make sure symbolicator tests can run locally" to "Cannot run symbolicator tests" on Aug 14, 2023
@armenzg
Member Author

armenzg commented Aug 14, 2023

@joshuarli I have also followed the steps mentioned in the source code. My original comment was out of date; I've updated it.

I have tried setting up a new M2 machine and I hit the same problems.

My work on #53643 is now being heavily impacted by this.

@joshuarli
Member

I was able to reproduce this (not the host.docker.internal stuff though; I get a different second error) on macOS 13.3.1 on ARM by forcing colima to use QEMU.

I think neither host.docker.internal nor the config file error is the root cause; it's just that madvise is probably being called near these operations and the ordering is indeterminate (async or different threads). If you look at the sentry_symbolicator logs, it should be failing with MADV_DONTNEED does not work as well.

Are you on an Intel Mac? Can you show me your colima status? scripts/start-colima.py will use QEMU on Intel Macs, but it works fine on my Intel Mac. It seems to only be an issue with colima+QEMU on ARM Macs. My guess is that your colima is somehow using QEMU and you're on an ARM Mac, in which case I'm not sure how you got to that state, as it shouldn't be possible with scripts/start-colima.py...


E   ValueError: relay did not start in time:
E   b'<jemalloc>: MADV_DONTNEED does not work (memset will be used instead)\n<jemalloc>: (This is the expected behaviour if you are running under QEMU)\nerror: could not open config file (file /etc/relay/config.yml)\n  caused by: No such file or directory (os error 2)\n'

colima status:

INFO[0000] colima is running using QEMU
INFO[0000] arch: aarch64
INFO[0000] runtime: docker
INFO[0000] mountType: virtiofs
INFO[0000] socket: unix:///Users/josh/.colima/default/docker.sock

@armenzg
Member Author

armenzg commented Aug 15, 2023

I've switched to colima now.

colima status
INFO[0000] colima is running using macOS Virtualization.Framework
INFO[0000] arch: aarch64
INFO[0000] runtime: docker
INFO[0000] mountType: virtiofs
INFO[0000] socket: unix:///Users/armenzg/.colima/default/docker.sock

I get this error instead:

pytest -s -vv tests/symbolicator/test_unreal_full.py -k test_unreal_apple_crash_with_attachments
============================================================================================= test session starts ==============================================================================================
platform darwin -- Python 3.8.16, pytest-7.2.1, pluggy-0.13.1 -- /Users/armenzg/code/sentry/.venv/bin/python3
cachedir: .pytest_cache
rootdir: /Users/armenzg/code/sentry, configfile: pyproject.toml
plugins: fail-slow-0.3.0, rerunfailures-11.0, sentry-0.1.11, xdist-3.0.2, cov-4.0.0, django-4.4.0
collected 2 items / 1 deselected / 1 selected

tests/symbolicator/test_unreal_full.py::SymbolicatorUnrealIntegrationTest::test_unreal_apple_crash_with_attachments Creating test database for alias 'default' ('test_sentry')...
Operations to perform:
  Synchronize unmigrated apps: activedirectory, analytics, auth0, crispy_forms, discover, drf_spectacular, events, eventstream, fixtures, fly, generic, github, google, incidents, indexer_postgres_config, issues, java, javascript, jira, jumpcloud, messages, monitors, okta, onelogin, opsgenie, redmine, release_health, rest_framework, rippling, search, sentry, sentry_interface_types, sentry_urls, sentry_useragents, sentry_webhooks, sessionstack, snuba, staticfiles, sudo, suspect_resolutions, suspect_resolutions_releases, trello, twilio
  Apply all migrations: auth, contenttypes, nodestore, replays, sessions, sites, social_auth
Synchronizing apps without migrations:
  Creating tables...
    Creating table sentry_activity
...
TRIMMED HERE
...
    Creating table sentry_exporteddatablob
    Running deferred SQL...
Running migrations:
  Applying contenttypes.0001_initial... OK
...
TRIMMED HERE
...
  Applying social_auth.0002_default_auto_field... OK
ERRORDestroying test database for alias 'default' ('test_sentry')...


==================================================================================================== ERRORS ====================================================================================================
_________________________________________________________ ERROR at setup of SymbolicatorUnrealIntegrationTest.test_unreal_apple_crash_with_attachments _________________________________________________________
.venv/lib/python3.8/site-packages/docker/api/client.py:256: in _raise_for_status
    response.raise_for_status()
.venv/lib/python3.8/site-packages/requests/models.py:1021: in raise_for_status
    raise HTTPError(http_error_msg, response=self)
E   requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http+docker://localhost/v1.35/containers/e0f50c8424a74c83b5c689e03167691888b9e22520d27edb44600f564ed27522/start

During handling of the above exception, another exception occurred:
src/sentry/testutils/pytest/relay.py:141: in relay_server
    container = docker_client.containers.run(**options)
.venv/lib/python3.8/site-packages/docker/models/containers.py:791: in run
    container.start()
.venv/lib/python3.8/site-packages/docker/models/containers.py:392: in start
    return self.client.api.start(self.id, **kwargs)
.venv/lib/python3.8/site-packages/docker/utils/decorators.py:19: in wrapped
    return f(self, resource_id, *args, **kwargs)
.venv/lib/python3.8/site-packages/docker/api/container.py:1091: in start
    self._raise_for_status(res)
.venv/lib/python3.8/site-packages/docker/api/client.py:258: in _raise_for_status
    raise create_api_error_from_http_exception(e)
.venv/lib/python3.8/site-packages/docker/errors.py:31: in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation)
E   docker.errors.NotFound: 404 Client Error: Not Found ("network sentry not found")
=========================================================================================== short test summary info ============================================================================================
ERROR tests/symbolicator/test_unreal_full.py::SymbolicatorUnrealIntegrationTest::test_unreal_apple_crash_with_attachments - docker.errors.NotFound: 404 Client Error: Not Found ("network sentry not found")
======================================================================================= 1 deselected, 1 error in 40.30s ========================================================================================
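A hedged reading of this error: the relay test fixture appears to ask Docker for a network literally named sentry, while devservices up --project test creates resources named test_*, so no such network exists. Creating it manually (untested here) might get past this particular failure:

docker network create sentry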

@ashwoods

Basically any test that relies on RelayStoreHelper fails for me with the same problem mentioned above, running on a Mac M1 after a fresh Docker purge and project bootstrap.

@joshuarli
Member

Armen, I can't reproduce that... are you running on the latest master without any modifications?

@armenzg
Member Author

armenzg commented Aug 28, 2023

Hi @joshuarli, with a clean master running on Ventura 13.4:

docker -v
Docker version 24.0.2, build cb74dfcd85
colima --version
colima version 0.5.5
docker ps -a
CONTAINER ID   IMAGE                                                                               COMMAND                  CREATED       STATUS         PORTS                                                                          NAMES
ed0d11b20298   ghcr.io/getsentry/image-mirror-library-postgres:14-alpine                           "docker-entrypoint.s…"   11 days ago   Up 3 minutes   127.0.0.1:5432->5432/tcp                                                       test_postgres
882383557d49   ghcr.io/getsentry/image-mirror-confluentinc-cp-zookeeper:6.2.0                      "/etc/confluent/dock…"   11 days ago   Up 3 minutes   2181/tcp, 2888/tcp, 3888/tcp                                                   test_zookeeper
456eed59e855   ghcr.io/getsentry/image-mirror-altinity-clickhouse-server:21.6.1.6734-testing-arm   "/entrypoint.sh"         11 days ago   Up 3 minutes   127.0.0.1:8123->8123/tcp, 127.0.0.1:9000->9000/tcp, 127.0.0.1:9009->9009/tcp   test_clickhouse
0b473314dd13   ghcr.io/getsentry/image-mirror-confluentinc-cp-kafka:6.2.0                          "/etc/confluent/dock…"   11 days ago   Up 1 second    127.0.0.1:9092->9092/tcp                                                       test_kafka
288b5242ec5a   ghcr.io/getsentry/image-mirror-library-redis:5.0-alpine                             "docker-entrypoint.s…"   11 days ago   Up 3 minutes   127.0.0.1:6379->6379/tcp                                                       test_redis
df6a2dc6209c   ghcr.io/getsentry/snuba:latest                                                      "./docker_entrypoint…"   11 days ago   Up 3 minutes   127.0.0.1:1218-1219->1218-1219/tcp                                             test_snuba
1d6e3ecd6c6e   us.gcr.io/sentryio/symbolicator:nightly                                             "/bin/bash /docker-e…"   11 days ago   Up 3 minutes   127.0.0.1:3021->3021/tcp                                                       test_symbolicator
pytest -s -v tests/symbolicator -k test_full_minidump_invalid_extra
=============================================== test session starts ================================================
platform darwin -- Python 3.8.16, pytest-7.2.1, pluggy-0.13.1 -- /Users/armenzg/code/sentry/.venv/bin/python3
cachedir: .pytest_cache
rootdir: /Users/armenzg/code/sentry, configfile: pyproject.toml
plugins: fail-slow-0.3.0, rerunfailures-11.0, sentry-0.1.11, typeguard-3.0.2, xdist-3.0.2, cov-4.0.0, django-4.4.0
collected 14 items / 13 deselected / 1 selected

tests/symbolicator/test_minidump_full.py::SymbolicatorMinidumpIntegrationTest::test_full_minidump_invalid_extra Creating test database for alias 'default'...
----------------------------------------
Exception happened during processing of request from ('127.0.0.1', 50503)
Traceback (most recent call last):
  File "/Users/armenzg/.pyenv/versions/3.8.16/lib/python3.8/socketserver.py", line 683, in process_request_thread
    self.finish_request(request, client_address)
  File "/Users/armenzg/.pyenv/versions/3.8.16/lib/python3.8/socketserver.py", line 360, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/Users/armenzg/.pyenv/versions/3.8.16/lib/python3.8/socketserver.py", line 747, in __init__
    self.handle()
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/django/core/servers/basehttp.py", line 178, in handle
    self.handle_one_request()
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/django/core/servers/basehttp.py", line 186, in handle_one_request
    self.raw_requestline = self.rfile.readline(65537)
  File "/Users/armenzg/.pyenv/versions/3.8.16/lib/python3.8/socket.py", line 669, in readinto
    return self._sock.recv_into(b)
socket.timeout: timed out
----------------------------------------
Traceback (most recent call last):
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 449, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 444, in _make_request
    httplib_response = conn.getresponse()
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/sentry_sdk/integrations/stdlib.py", line 124, in getresponse
    return real_getresponse(self, *args, **kwargs)
  File "/Users/armenzg/.pyenv/versions/3.8.16/lib/python3.8/http/client.py", line 1348, in getresponse
    response.begin()
  File "/Users/armenzg/.pyenv/versions/3.8.16/lib/python3.8/http/client.py", line 316, in begin
    version, status, reason = self._read_status()
  File "/Users/armenzg/.pyenv/versions/3.8.16/lib/python3.8/http/client.py", line 285, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/requests/adapters.py", line 486, in send
    resp = conn.urlopen(
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 787, in urlopen
    retries = retries.increment(
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/urllib3/util/retry.py", line 550, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/urllib3/packages/six.py", line 769, in reraise
    raise value.with_traceback(tb)
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 449, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 444, in _make_request
    httplib_response = conn.getresponse()
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/sentry_sdk/integrations/stdlib.py", line 124, in getresponse
    return real_getresponse(self, *args, **kwargs)
  File "/Users/armenzg/.pyenv/versions/3.8.16/lib/python3.8/http/client.py", line 1348, in getresponse
    response.begin()
  File "/Users/armenzg/.pyenv/versions/3.8.16/lib/python3.8/http/client.py", line 316, in begin
    version, status, reason = self._read_status()
  File "/Users/armenzg/.pyenv/versions/3.8.16/lib/python3.8/http/client.py", line 285, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/armenzg/code/sentry/src/sentry/testutils/pytest/relay.py", line 149, in relay_server
    requests.get(url)
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/requests/api.py", line 73, in get
    return request("get", url, params=params, **kwargs)
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/requests/api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/requests/adapters.py", line 501, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
13:10:12 [ERROR] sentry.testutils.pytest.relay: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
ERRORDestroying test database for alias 'default'...


====================================================== ERRORS ======================================================
______________ ERROR at setup of SymbolicatorMinidumpIntegrationTest.test_full_minidump_invalid_extra ______________
.venv/lib/python3.8/site-packages/urllib3/connectionpool.py:703: in urlopen
    httplib_response = self._make_request(
.venv/lib/python3.8/site-packages/urllib3/connectionpool.py:449: in _make_request
    six.raise_from(e, None)
<string>:3: in raise_from
    ???
.venv/lib/python3.8/site-packages/urllib3/connectionpool.py:444: in _make_request
    httplib_response = conn.getresponse()
.venv/lib/python3.8/site-packages/sentry_sdk/integrations/stdlib.py:124: in getresponse
    return real_getresponse(self, *args, **kwargs)
../../.pyenv/versions/3.8.16/lib/python3.8/http/client.py:1348: in getresponse
    response.begin()
../../.pyenv/versions/3.8.16/lib/python3.8/http/client.py:316: in begin
    version, status, reason = self._read_status()
../../.pyenv/versions/3.8.16/lib/python3.8/http/client.py:285: in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
E   http.client.RemoteDisconnected: Remote end closed connection without response

During handling of the above exception, another exception occurred:
.venv/lib/python3.8/site-packages/requests/adapters.py:486: in send
    resp = conn.urlopen(
.venv/lib/python3.8/site-packages/urllib3/connectionpool.py:787: in urlopen
    retries = retries.increment(
.venv/lib/python3.8/site-packages/urllib3/util/retry.py:550: in increment
    raise six.reraise(type(error), error, _stacktrace)
.venv/lib/python3.8/site-packages/urllib3/packages/six.py:769: in reraise
    raise value.with_traceback(tb)
.venv/lib/python3.8/site-packages/urllib3/connectionpool.py:703: in urlopen
    httplib_response = self._make_request(
.venv/lib/python3.8/site-packages/urllib3/connectionpool.py:449: in _make_request
    six.raise_from(e, None)
<string>:3: in raise_from
    ???
.venv/lib/python3.8/site-packages/urllib3/connectionpool.py:444: in _make_request
    httplib_response = conn.getresponse()
.venv/lib/python3.8/site-packages/sentry_sdk/integrations/stdlib.py:124: in getresponse
    return real_getresponse(self, *args, **kwargs)
../../.pyenv/versions/3.8.16/lib/python3.8/http/client.py:1348: in getresponse
    response.begin()
../../.pyenv/versions/3.8.16/lib/python3.8/http/client.py:316: in begin
    version, status, reason = self._read_status()
../../.pyenv/versions/3.8.16/lib/python3.8/http/client.py:285: in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
E   urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

During handling of the above exception, another exception occurred:
src/sentry/testutils/pytest/relay.py:149: in relay_server
    requests.get(url)
.venv/lib/python3.8/site-packages/requests/api.py:73: in get
    return request("get", url, params=params, **kwargs)
.venv/lib/python3.8/site-packages/requests/api.py:59: in request
    return session.request(method=method, url=url, **kwargs)
.venv/lib/python3.8/site-packages/requests/sessions.py:589: in request
    resp = self.send(prep, **send_kwargs)
.venv/lib/python3.8/site-packages/requests/sessions.py:703: in send
    r = adapter.send(request, **kwargs)
.venv/lib/python3.8/site-packages/requests/adapters.py:501: in send
    raise ConnectionError(err, request=request)
E   requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

The above exception was the direct cause of the following exception:
src/sentry/testutils/pytest/relay.py:154: in relay_server
    raise ValueError(
E   ValueError: relay did not start in time http://127.0.0.1:33331:
E   2023-08-28T13:09:58.828585Z  INFO relay::setup: launching relay from config folder /etc/relay
E   2023-08-28T13:09:58.834230Z  INFO relay::setup:   relay mode: managed
E   2023-08-28T13:09:58.834269Z  INFO relay::setup:   relay id: 88888888-4444-4444-8444-cccccccccccc
E   2023-08-28T13:09:58.834572Z  INFO relay::setup:   public key: SMSesqan65THCV6M4qs4kBzPai60LzuDn-xNsvYpuP8
E   2023-08-28T13:09:58.835093Z  INFO relay::setup:   log level: TRACE
E   2023-08-28T13:09:58.835768Z  INFO relay_server: relay server starting
E   2023-08-28T13:09:58.925807Z  INFO relay_server::actors::upstream: registering with upstream descriptor=http://host.docker.internal:50486/
E   2023-08-28T13:10:00.600350Z DEBUG relay_server::actors::upstream: got register challenge token="eyJ0aW1lc3RhbXAiOjE2OTMyMjgyMDEsInJlbGF5X2lkIjoiODg4ODg4ODgtNDQ0NC00NDQ0LTg0NDQtY2NjY2NjY2NjY2NjIiwicHVibGljX2tleSI6IlNNU2VzcWFuNjVUSENWNk00cXM0a0J6UGFpNjBMenVEbi14TnN2WXB1UDgiLCJyYW5kIjoiRnNneElKejlSR1ZZODVsa3pFTVJlR1A3UGJWQnptYnZQZElIVUw1UWJRa0djZ1hFTUhoOHl6Q1JGbENicFdUbzhlaWpuQm92emdVVEVKeUIwN0FHZ0EifQ:K6tWlOgFoP48vR3dvJ-AiC7mIgm4kWpI93ic1oXkRaCDxGjTYoizunkPJ_yVq33N6DPp9dkNFOSO0Nv7tyNCTw"
E   2023-08-28T13:10:00.600781Z DEBUG relay_server::actors::upstream: sending register challenge response
E   2023-08-28T13:10:00.627939Z  INFO relay_server::actors::upstream: relay successfully registered with upstream
E   2023-08-28T13:10:01.017381Z ERROR r2d2: failed to lookup address information: Name or service not known
E   2023-08-28T13:10:01.017390Z ERROR r2d2: failed to lookup address information: Name or service not known
E   2023-08-28T13:10:01.017710Z ERROR r2d2: failed to lookup address information: Name or service not known
E   2023-08-28T13:10:03.034401Z ERROR r2d2: failed to lookup address information: Name or service not known
E   2023-08-28T13:10:03.034420Z ERROR r2d2: failed to lookup address information: Name or service not known
E   2023-08-28T13:10:03.034401Z ERROR r2d2: failed to lookup address information: Name or service not known
E   2023-08-28T13:10:05.040685Z ERROR r2d2: failed to lookup address information: Name or service not known
E   2023-08-28T13:10:05.040980Z ERROR r2d2: failed to lookup address information: Name or service not known
E   2023-08-28T13:10:05.041066Z ERROR r2d2: failed to lookup address information: Name or service not known
E   2023-08-28T13:10:07.045685Z ERROR r2d2: failed to lookup address information: Name or service not known
E   2023-08-28T13:10:07.045688Z ERROR r2d2: failed to lookup address information: Name or service not known
E   2023-08-28T13:10:07.046167Z ERROR r2d2: failed to lookup address information: Name or service not known
E   2023-08-28T13:10:09.050923Z ERROR r2d2: failed to lookup address information: Name or service not known
E   2023-08-28T13:10:09.050923Z ERROR r2d2: failed to lookup address information: Name or service not known
E   2023-08-28T13:10:09.051069Z ERROR r2d2: failed to lookup address information: Name or service not known
E   2023-08-28T13:10:11.057129Z ERROR r2d2: failed to lookup address information: Name or service not known
E   2023-08-28T13:10:11.057129Z ERROR r2d2: failed to lookup address information: Name or service not known
E   2023-08-28T13:10:11.057303Z ERROR r2d2: failed to lookup address information: Name or service not known
------------------------------------------------ Captured log setup ------------------------------------------------
ERROR    sentry.testutils.pytest.relay:relay.py:153 ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
Traceback (most recent call last):
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 449, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 444, in _make_request
    httplib_response = conn.getresponse()
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/sentry_sdk/integrations/stdlib.py", line 124, in getresponse
    return real_getresponse(self, *args, **kwargs)
  File "/Users/armenzg/.pyenv/versions/3.8.16/lib/python3.8/http/client.py", line 1348, in getresponse
    response.begin()
  File "/Users/armenzg/.pyenv/versions/3.8.16/lib/python3.8/http/client.py", line 316, in begin
    version, status, reason = self._read_status()
  File "/Users/armenzg/.pyenv/versions/3.8.16/lib/python3.8/http/client.py", line 285, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/requests/adapters.py", line 486, in send
    resp = conn.urlopen(
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 787, in urlopen
    retries = retries.increment(
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/urllib3/util/retry.py", line 550, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/urllib3/packages/six.py", line 769, in reraise
    raise value.with_traceback(tb)
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 449, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 444, in _make_request
    httplib_response = conn.getresponse()
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/sentry_sdk/integrations/stdlib.py", line 124, in getresponse
    return real_getresponse(self, *args, **kwargs)
  File "/Users/armenzg/.pyenv/versions/3.8.16/lib/python3.8/http/client.py", line 1348, in getresponse
    response.begin()
  File "/Users/armenzg/.pyenv/versions/3.8.16/lib/python3.8/http/client.py", line 316, in begin
    version, status, reason = self._read_status()
  File "/Users/armenzg/.pyenv/versions/3.8.16/lib/python3.8/http/client.py", line 285, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/armenzg/code/sentry/src/sentry/testutils/pytest/relay.py", line 149, in relay_server
    requests.get(url)
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/requests/api.py", line 73, in get
    return request("get", url, params=params, **kwargs)
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/requests/api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "/Users/armenzg/code/sentry/.venv/lib/python3.8/site-packages/requests/adapters.py", line 501, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
============================================= short test summary info ==============================================
ERROR tests/symbolicator/test_minidump_full.py::SymbolicatorMinidumpIntegrationTest::test_full_minidump_invalid_extra - ValueError: relay did not start in time http://127.0.0.1:33331:
========================================= 13 deselected, 1 error in 23.96s =========================================

@ashwoods

I migrated to colima and it failed too.
If you add a sleep to the relay setup code, I get a few more lines of relay errors:

src/sentry/testutils/pytest/relay.py:161: in relay_server
    raise ValueError(
E   ValueError: relay did not start in time http://127.0.0.1:33331:
E   2023-08-28T13:15:36.226426Z  INFO relay::setup: launching relay from config folder /etc/relay
E   2023-08-28T13:15:36.229715Z  INFO relay::setup:   relay mode: managed
E   2023-08-28T13:15:36.229740Z  INFO relay::setup:   relay id: 88888888-4444-4444-8444-cccccccccccc
E   2023-08-28T13:15:36.229915Z  INFO relay::setup:   public key: SMSesqan65THCV6M4qs4kBzPai60LzuDn-xNsvYpuP8
E   2023-08-28T13:15:36.230252Z  INFO relay::setup:   log level: TRACE
E   2023-08-28T13:15:36.230726Z  INFO relay_server: relay server starting
E   2023-08-28T13:15:36.289116Z  INFO relay_server::actors::upstream: registering with upstream descriptor=http://host.docker.internal:57511/
E   2023-08-28T13:15:36.748679Z DEBUG relay_server::actors::upstream: got register challenge token="eyJ0aW1lc3RhbXAiOjE2OTMyMjg1MzYsInJlbGF5X2lkIjoiODg4ODg4ODgtNDQ0NC00NDQ0LTg0NDQtY2NjY2NjY2NjY2NjIiwicHVibGljX2tleSI6IlNNU2VzcWFuNjVUSENWNk00cXM0a0J6UGFpNjBMenVEbi14TnN2WXB1UDgiLCJyYW5kIjoia3hCMGJsNEdMMGtvNFVueVotbFRaOG5oMDhfMDNoZGU1bkZJeUVfaUM2Vll1OUNPMTNmdVhHWmx0ajJkWm1nazBwZ0pKWTd6ZGNlNV80NHlVOU1yNncifQ:ea5UFFhV_HdpkgV4x5WDB90C44o0Wbk3bVRnJVR5Bz3WMoKJj7fxFOJ1n4_tl16GIpKlA2jXq6pLxPl_v_a_Cw"
E   2023-08-28T13:15:36.748884Z DEBUG relay_server::actors::upstream: sending register challenge response
E   2023-08-28T13:15:36.766669Z  INFO relay_server::actors::upstream: relay successfully registered with upstream
E   2023-08-28T13:15:39.354437Z ERROR r2d2: failed to lookup address information: Name or service not known    
E   2023-08-28T13:15:39.354478Z ERROR r2d2: failed to lookup address information: Name or service not known    
E   2023-08-28T13:15:39.354508Z ERROR r2d2: failed to lookup address information: Name or service not known    
E   2023-08-28T13:15:42.377920Z ERROR r2d2: failed to lookup address information: Name or service not known    
E   2023-08-28T13:15:42.377979Z ERROR r2d2: failed to lookup address information: Name or service not known    
E   2023-08-28T13:15:42.378098Z ERROR r2d2: failed to lookup address information: Name or service not known    
E   2023-08-28T13:15:45.401866Z ERROR r2d2: failed to lookup address information: Name or service not known    
E   2023-08-28T13:15:45.402095Z ERROR r2d2: failed to lookup address information: Name or service not known    
E   2023-08-28T13:15:45.401856Z ERROR r2d2: failed to lookup address information: Name or service not known    
E   2023-08-28T13:15:48.417629Z ERROR r2d2: failed to lookup address information: Name or service not known    
E   2023-08-28T13:15:48.417629Z ERROR r2d2: failed to lookup address information: Name or service not known    
E   2023-08-28T13:15:48.418258Z ERROR r2d2: failed to lookup address information: Name or service not known    
E   2023-08-28T13:15:51.436133Z ERROR r2d2: failed to lookup address information: Name or service not known    
E   2023-08-28T13:15:51.436133Z ERROR r2d2: failed to lookup address information: Name or service not known    
E   2023-08-28T13:15:51.437033Z ERROR r2d2: failed to lookup address information: Name or service not known    
E   2023-08-28T13:15:54.450423Z ERROR r2d2: failed to lookup address information: Name or service not known    
E   2023-08-28T13:15:54.450423Z ERROR r2d2: failed to lookup address information: Name or service not known    
E   2023-08-28T13:15:54.450423Z ERROR r2d2: failed to lookup address information: Name or service not known    
E   2023-08-28T13:15:57.476189Z ERROR r2d2: failed to lookup address information: Name or service not known    
E   2023-08-28T13:15:57.476480Z ERROR r2d2: failed to lookup address information: Name or service not known    
E   2023-08-28T13:15:57.477903Z ERROR r2d2: failed to lookup address information: Name or service not known    
E   2023-08-28T13:16:00.490813Z ERROR r2d2: failed to lookup address information: Name or service not known    
E   2023-08-28T13:16:00.491071Z ERROR r2d2: failed to lookup address information: Name or service not known    
E   2023-08-28T13:16:00.491076Z ERROR r2d2: failed to lookup address information: Name or service not known    
E   2023-08-28T13:16:03.496104Z ERROR r2d2: failed to lookup address information: Name or service not known    
E   2023-08-28T13:16:03.496104Z ERROR r2d2: failed to lookup address information: Name or service not known    
E   2023-08-28T13:16:03.496122Z ERROR r2d2: failed to lookup address information: Name or service not known    
E   thread 'main' panicked at 'Cannot drop a runtime in a context where blocking is not allowed. This happens when a runtime is dropped from within an asynchronous context.', /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.28.0/src/runtime/blocking/shutdown.rs:51:21
E   stack backtrace:
E      0:     0x555556eb788a - <unknown>
E      1:     0x555556ee02fe - <unknown>
E      2:     0x555556eb3625 - <unknown>
E      3:     0x555556eb7655 - <unknown>
E      4:     0x555556eb8fdf - <unknown>
E      5:     0x555556eb8d1b - <unknown>
E      6:     0x555556eb9588 - <unknown>
E      7:     0x555556eb9442 - <unknown>
E      8:     0x555556eb7cf6 - <unknown>
E      9:     0x555556eb9192 - <unknown>
E     10:     0x555555731d03 - <unknown>
E     11:     0x555556c23b6a - <unknown>
E     12:     0x555556c26188 - <unknown>
E     13:     0x555555d16e9e - <unknown>
E     14:     0x555555d4258d - <unknown>
E     15:     0x555555bff65e - <unknown>
E     16:     0x555555912539 - <unknown>
E     17:     0x5555558d1eff - <unknown>
E     18:     0x5555557d3b4a - <unknown>
E     19:     0x5555557d1534 - <unknown>
E     20:     0x55555579ab2a - <unknown>
E     21:     0x55555579f923 - <unknown>
E     22:     0x5555557c5539 - <unknown>
E     23:     0x555556eac92c - <unknown>
E     24:     0x55555579acbc - <unknown>
E     25:     0x7fffffc6909b - __libc_start_main
E     26:     0x5555557323a9 - <unknown>
E     27:                0x0 - <unknown>
------------------------------------------------------------------------------ Captured log setup --------------------------
ERROR 

@lobsterkatie
Member

lobsterkatie commented Oct 19, 2023

If you add a sleep to the relay setup code, I get a few more lines of relay errors

@ashwoods - How long did you sleep for? I've increased the number of attempts, such that it waits quite a long time now, and still nothing. Here is what I do get:

Test output

============================================= test session starts ==============================================
platform darwin -- Python 3.8.16, pytest-7.2.1, pluggy-0.13.1 -- /Users/Katie/Documents/Sentry/sentry/.venv/bin/python3
cachedir: .pytest_cache
rootdir: /Users/Katie/Documents/Sentry/sentry, configfile: pyproject.toml
plugins: fail-slow-0.3.0, rerunfailures-11.0, sentry-0.1.11, time-machine-2.13.0, xdist-3.0.2, cov-4.0.0, django-4.4.0
collected 2 items / 1 deselected / 1 selected

tests/symbolicator/test_unreal_full.py::SymbolicatorUnrealIntegrationTest::test_unreal_apple_crash_with_attachments ERROR [100%]

==================================================== ERRORS ====================================================
_________ ERROR at setup of SymbolicatorUnrealIntegrationTest.test_unreal_apple_crash_with_attachments _________
[stacktrace]
E ConnectionRefusedError: [Errno 61] Connection refused

During handling of the above exception, another exception occurred:
[stacktrace]
E urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x122625e20>: Failed to establish a new connection: [Errno 61] Connection refused

During handling of the above exception, another exception occurred:
[stacktrace]
E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='127.0.0.1', port=33331): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x122625e20>: Failed to establish a new connection: [Errno 61] Connection refused'))

During handling of the above exception, another exception occurred:
[stacktrace]
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=33331): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x122625e20>: Failed to establish a new connection: [Errno 61] Connection refused'))

The above exception was the direct cause of the following exception:
src/sentry/testutils/pytest/relay.py:151: in relay_server
raise ValueError(
E ValueError: (inside) relay did not start in time http://127.0.0.1:33331:
E 2023-10-19T18:34:24.561478Z INFO relay::setup: launching relay from config folder /etc/relay
E 2023-10-19T18:34:24.562247Z INFO relay::setup: relay mode: managed
E 2023-10-19T18:34:24.562309Z INFO relay::setup: relay id: 88888888-4444-4444-8444-cccccccccccc
E 2023-10-19T18:34:24.562316Z INFO relay::setup: public key: SMSesqan65THCV6M4qs4kBzPai60LzuDn-xNsvYpuP8
E 2023-10-19T18:34:24.562322Z INFO relay::setup: log level: TRACE
E 2023-10-19T18:34:24.562331Z INFO relay_server: relay server starting
E 2023-10-19T18:34:24.585426Z INFO relay_server::actors::upstream: registering with upstream descriptor=http://host.docker.internal:55360/
E 2023-10-19T18:34:24.680403Z ERROR r2d2: failed to lookup address information: Name or service not known
E 2023-10-19T18:34:24.684166Z ERROR r2d2: failed to lookup address information: Name or service not known

[many more copies of the r2d2 error]

E 2023-10-19T18:34:27.658557Z ERROR r2d2: failed to lookup address information: Name or service not known
E 2023-10-19T18:34:28.369209Z DEBUG relay_server::actors::upstream: got register challenge token="eyJ0aW1lc3RhbXAiOjE2OTc3NDA0NzEsInJlbGF5X2lkIjoiODg4ODg4ODgtNDQ0NC00NDQ0LTg0NDQtY2NjY2NjY2NjY2NjIiwicHVibGljX2tleSI6IlNNU2VzcWFuNjVUSENWNk00cXM0a0J6UGFpNjBMenVEbi14TnN2WXB1UDgiLCJyYW5kIjoiLXhuRklrN3pvT0puUWNTcUZlRWlWYUF1VGZQLUJpOXV5ZEFMdWprRzNERXZOeWtTem1JOXhySUVRQXNlSjN3Mk5iVDRhSEZYZ2tVeWxzc0RlcE1fbXcifQ:3CAavH4rSgYNId84MZFFy1kv-4fbs87ZrZDMXF4xvCdQ9I2OvTt6HGWg14BFXdDKZPTJbLDC9oqj-0D7AXpuyQ"
E 2023-10-19T18:34:28.369417Z DEBUG relay_server::actors::upstream: sending register challenge response
E 2023-10-19T18:34:28.453709Z INFO relay_server::actors::upstream: relay successfully registered with upstream
E 2023-10-19T18:34:29.599857Z ERROR relay_log::utils: error=could not initialize redis cluster client error.sources=[failed to pool redis connection, timed out waiting for connection: failed to lookup address information: Name or service not known]
-------------------------------------------- Captured stdout setup ---------------------------------------------
Operations to perform:
Synchronize unmigrated apps: activedirectory, analytics, auth0, crispy_forms, discover, drf_spectacular, events, eventstream, feedback, fixtures, fly, generic, github, google, hybridcloud, incidents, indexer_postgres_config, issues, java, javascript, jira, jumpcloud, messages, monitors, okta, onelogin, opsgenie, redmine, release_health, rest_framework, rippling, search, sentry, sentry_interface_types, sentry_urls, sentry_useragents, sentry_webhooks, sessionstack, snuba, staticfiles, sudo, suspect_resolutions, suspect_resolutions_releases, trello, twilio
Apply all migrations: auth, contenttypes, nodestore, replays, sessions, sites, social_auth
Synchronizing apps without migrations:
Creating tables...

[table creation and migration running]

Applying social_auth.0002_default_auto_field... OK
Waiting for Relay container to start
i = 0 - sleeping for 0.1 seconds
i = 1 - sleeping for 0.13999999999999999 seconds
i = 2 - sleeping for 0.19599999999999998 seconds
i = 3 - sleeping for 0.2743999999999999 seconds
i = 4 - sleeping for 0.38415999999999995 seconds
i = 5 - sleeping for 0.5378239999999999 seconds
i = 6 - sleeping for 0.7529535999999998 seconds
i = 7 - sleeping for 1.0541350399999996 seconds
i = 8 - sleeping for 1.4757890559999993 seconds
i = 9 - sleeping for 2.066104678399999 seconds
i = 10 - sleeping for 2.8925465497599983 seconds
i = 11 - sleeping for 4.049565169663998 seconds
i = 12 - sleeping for 5.669391237529596 seconds
i = 13 - sleeping for 7.937147732541433 seconds
i = 14 - sleeping for 11.112006825558007 seconds
i = 15 - sleeping for 15.556809555781209 seconds
i = 16 - sleeping for 21.77953337809369 seconds
HTTPConnectionPool(host='127.0.0.1', port=33331): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x122625e20>: Failed to establish a new connection: [Errno 61] Connection refused'))
-------------------------------------------- Captured stderr setup ---------------------------------------------
Creating test database for alias 'default' ('test_region')...
Creating test database for alias 'control' ('test_control')...
=========================================== short test summary info ============================================
ERROR tests/symbolicator/test_unreal_full.py::SymbolicatorUnrealIntegrationTest::test_unreal_apple_crash_with_attachments - ValueError: (inside) relay did not start in time http://127.0.0.1:33331:
================================== 1 deselected, 1 error in 96.74s (0:01:36) ===================================
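For context on the "i = N - sleeping for ..." lines above, here is a minimal sketch (assumed, not the actual relay.py fixture code) of a wait loop that would produce that output: exponential backoff starting at 0.1 seconds and growing by a factor of 1.4 per attempt, giving up once the attempts are exhausted.

    import time

    import requests

    def wait_for_relay(url, attempts=17):
        print("Waiting for Relay container to start")
        delay = 0.1
        for i in range(attempts):
            print(f"i = {i} - sleeping for {delay} seconds")
            time.sleep(delay)
            try:
                requests.get(url)
                return
            except requests.ConnectionError as err:
                last_error = err
                delay *= 1.4
        print(last_error)
        raise ValueError(f"relay did not start in time {url}")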

Regardless, I think I know the cause of the problem, though not yet entirely how to fix it. TL;DR, we can't handle it when the devserver or tests are run with a devservices up --project value other than the default sentry.

Things I discovered in my testing/wandering around the code:

  • If you do normal devservices up, the tests all pass.

  • If you use --project test, and try to run the devserver with --ingest, it'll protest that you don't have USE_RELAY set, even if you do. It turns out that's because we've hardcoded the name of the container to look for here:

    if "sentry_kafka" not in containers:

  • If you fix that, the devserver gets as far as starting up relay, but it starts a container called sentry_relay, even though all of the other containers are test_whatever. Changing that isn't enough to fix things, but regardless, the reason it happens is that when we start relay, we don't pass the project as an option:

    daemons += [("relay", ["sentry", "devservices", "attach", "relay"])]

  • If you fix that, it still crashes with much the same errors as with the test - a million r2d2 errors, eventually ending in:

    22:31:11 relay               | 2023-10-19T22:31:10.405577Z ERROR relay_server::actors::upstream: authentication encountered error error=could not send request to upstream error.sources=[error sending request for url (http://host.docker.internal:8000/api/0/relays/register/challenge/): operation timed out, operation timed out]
    22:31:11 relay               | 2023-10-19T22:31:10.408501Z  INFO relay_server::actors::upstream: registering with upstream descriptor=http://host.docker.internal:8000/
    22:31:11 relay               | 2023-10-19T22:31:10.420861Z ERROR relay_log::utils: error=could not initialize redis cluster client error.sources=[failed to pool redis connection, timed out waiting for connection: failed to lookup address information: Name or service not known]
    
  • But: The fact that they're both complaining about redis got me wondering if we're not hard-coding sentry_redis somewhere, and lo and behold, we are, in /src/config/relay/config.yml, where we're not only hard-coding sentry_redis, we're also hardcoding sentry-kafka. Changing them manually to test_x doesn't change the devserver crash, but changing both that and the relay startup call to include --project test does! 🎉 The relay server (along with everything else) starts up normally at that point.

  • Another place I discovered we were hardcoding things is in the setup for the test relay server (a project-aware version is sketched after this list). Those values get fed into the dummy version of the same config.yml as above, but changing them still doesn't make the tests pass.

    "KAFKA_HOST": "sentry_kafka",
    "REDIS_HOST": "sentry_redis",

  • After that I went through and changed the other places I could find, like in /config/cdc/configuration.yaml and in the server itself, but I wasn't able to get the tests to run.

    "KAFKA_ADVERTISED_LISTENERS": "PLAINTEXT://127.0.0.1:29092,INTERNAL://sentry_kafka:9093,EXTERNAL://127.0.0.1:9092",

So, what now?

Option 1: Make everything work with arbitrary project values

  • We know the project when devservices starts up, but how could we persist that value such that the devserver (and wherever else it's needed) would have access to it? (One possible shape for this is sketched below.)

  • We'd need to go through and make as many places in the code as we can project-aware.
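Purely as an illustration of option 1 (the file name and helpers are hypothetical, not an existing mechanism), the project chosen at devservices up time could be written somewhere the devserver and test fixtures can read back later:

    from pathlib import Path

    # Hypothetical location; ~/.sentry already holds config.yml in this setup.
    PROJECT_FILE = Path.home() / ".sentry" / "devservices-project"

    def remember_project(project: str) -> None:
        # Would be called from `sentry devservices up --project <name>`.
        PROJECT_FILE.write_text(project)

    def current_project(default: str = "sentry") -> str:
        # Would be called from the devserver, the relay test fixture, etc.
        return PROJECT_FILE.read_text().strip() if PROJECT_FILE.exists() else default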

Option 2: Make it so that running tests with the default project doesn't nuke existing data

  • IIRC, we already do this with postgres. Could we do it with clickhouse, too?

Naively, I would guess that option 2 is easier (and it would certainly make for better ux on the developer's end), but at this point I will defer to - and hand this off to - my dev-infra colleagues, as I don't have enough context to go beyond all of the above. @joshuarli - can I pass this back to you, please?

[UPDATE] I pushed a branch with the changes I made, in case it's a helpful starting place. I hardcoded test in place of sentry, but we'd obviously need to get the dynamic value of project.

@armenzg
Member Author

armenzg commented Oct 20, 2023

I have verified that if I don't use --project test it works.

@jernejstrasner
Contributor

@armenzg is this still causing you problems?

@armenzg
Member Author

armenzg commented Jan 15, 2024

Hi @jernejstrasner, I have a slightly different issue, possibly due to Colima (CC @joshuarli).

I got a new MBP M2 laptop in January

I added symbolicator.enabled: true to ~/.sentry/config.yml and I ran this command with these results:

pytest -s -vv tests/symbolicator/test_unreal_full.py -k test_unreal_apple_crash_with_attachments
/Users/armenzg/code/sentry/.venv/lib/python3.10/site-packages/trio/_core/_multierror.py:511: RuntimeWarning: You seem to already have a custom sys.excepthook handler installed. I'll skip installing Trio's custom handler, but this means MultiErrors will not show full tracebacks.
  warnings.warn(
============================================== test session starts ===============================================
platform darwin -- Python 3.10.13, pytest-7.2.1, pluggy-0.13.1 -- /Users/armenzg/code/sentry/.venv/bin/python3
cachedir: .pytest_cache
django: version: 3.2.23
rootdir: /Users/armenzg/code/sentry, configfile: pyproject.toml
plugins: fail-slow-0.3.0, rerunfailures-11.0, sentry-0.1.11, time-machine-2.13.0, xdist-3.0.2, django-4.7.0, anyio-3.7.1, cov-4.0.0
collected 2 items / 1 deselected / 1 selected

tests/symbolicator/test_unreal_full.py::SymbolicatorUnrealIntegrationTest::test_unreal_apple_crash_with_attachments Creating test database for alias 'default' ('test_region')...
Creating test database for alias 'control' ('test_control')...
ERRORDestroying test database for alias 'default' ('test_region')...
Destroying test database for alias 'control' ('test_control')...


===================================================== ERRORS =====================================================
__________ ERROR at setup of SymbolicatorUnrealIntegrationTest.test_unreal_apple_crash_with_attachments __________
.venv/lib/python3.10/site-packages/docker/api/client.py:268: in _raise_for_status
    response.raise_for_status()
.venv/lib/python3.10/site-packages/requests/models.py:1021: in raise_for_status
    raise HTTPError(http_error_msg, response=self)
E   requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.43/containers/c3df0b46b7b3a69941de014d61428b9fca92de837e878c424b70b2dc67a2d991/start

The above exception was the direct cause of the following exception:
src/sentry/testutils/pytest/relay.py:130: in relay_server
    container = docker_client.containers.run(**options)
.venv/lib/python3.10/site-packages/docker/models/containers.py:865: in run
    container.start()
.venv/lib/python3.10/site-packages/docker/models/containers.py:406: in start
    return self.client.api.start(self.id, **kwargs)
.venv/lib/python3.10/site-packages/docker/utils/decorators.py:19: in wrapped
    return f(self, resource_id, *args, **kwargs)
.venv/lib/python3.10/site-packages/docker/api/container.py:1127: in start
    self._raise_for_status(res)
.venv/lib/python3.10/site-packages/docker/api/client.py:270: in _raise_for_status
    raise create_api_error_from_http_exception(e) from e
.venv/lib/python3.10/site-packages/docker/errors.py:39: in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation) from e
E   docker.errors.APIError: 500 Server Error for http+docker://localhost/v1.43/containers/c3df0b46b7b3a69941de014d61428b9fca92de837e878c424b70b2dc67a2d991/start: Internal Server Error ("error while creating mount source path '/private/tmp/colima/pytest-of-armenzg/pytest-2/test_relay_config_2024-01-15_13-59-43_059687_0': mkdir /private/tmp/colima/pytest-of-armenzg: no such file or directory")
============================================ short test summary info =============================================
ERROR tests/symbolicator/test_unreal_full.py::SymbolicatorUnrealIntegrationTest::test_unreal_apple_crash_with_attachments - docker.errors.APIError: 500 Server Error for http+docker://localhost/v1.43/containers/c3df0b46b7b3a69941de014...
========================================= 1 deselected, 1 error in 3.07s =========================================
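A hedged guess about this last error: the mount source path under /private/tmp suggests pytest's temporary directory resolves to a path that is not shared with the colima VM, so the Docker daemon inside the VM cannot create it. Pointing pytest's base temp directory somewhere under the home directory, which colima mounts by default, might avoid that (untested):

pytest --basetemp="$HOME/.pytest-tmp" -s -vv tests/symbolicator/test_unreal_full.py -k test_unreal_apple_crash_with_attachments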

github-actions bot locked and limited conversation to collaborators on Aug 1, 2024