
Traceback: <class 'psycopg2.errors.CardinalityViolation'>: more than one row returned by a subquery used as an expression #6696

Closed
anoadragon453 opened this issue Jan 13, 2020 · 13 comments
Labels
z-bug (Deprecated Label)

Comments

@anoadragon453
Member

anoadragon453 commented Jan 13, 2020

Description

I seem to be failing to handle a few PDUs sent to me due to an issue in federation_server

2020-01-13 12:35:13,435 - synapse.federation.federation_server - 246 - ERROR - PUT-280-$yg09yrsnWzyLftl-ygdO78GhGLs0iKOb038VZVNZ8Lg- Failed to handle PDU $yg09yrsnWzyLftl-ygdO78GhGLs0iKOb038VZVNZ8Lg
Traceback (most recent call last):
  File "/home/synapse/synapse/env/lib/python3.7/site-packages/synapse/handlers/federation.py", line 279, in on_receive_pdu
    origin, pdu, prevs, min_depth
  File "/home/synapse/synapse/env/lib/python3.7/site-packages/synapse/handlers/federation.py", line 544, in _get_missing_events_for_pdu
    await self.on_receive_pdu(origin, ev, sent_to_us_directly=False)
  File "/home/synapse/synapse/env/lib/python3.7/site-packages/synapse/handlers/federation.py", line 422, in on_receive_pdu
    await self._process_received_pdu(origin, pdu, state=state)
  File "/home/synapse/synapse/env/lib/python3.7/site-packages/synapse/handlers/federation.py", line 700, in _process_received_pdu
    context = await self._handle_new_event(origin, event, state=state)
twisted.internet.defer.FirstError: FirstError[#0, [Failure instance: Traceback: <class 'psycopg2.errors.CardinalityViolation'>: more than one row returned by a subquery used as an expression

...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/synapse/synapse/env/lib/python3.7/site-packages/synapse/federation/federation_server.py", line 235, in process_pdus_for_room
    await self._handle_received_pdu(origin, pdu)
  File "/home/synapse/synapse/env/lib/python3.7/site-packages/synapse/federation/federation_server.py", line 629, in _handle_received_pdu
    await self.handler.on_receive_pdu(origin, pdu, sent_to_us_directly=True)
  File "/home/synapse/synapse/env/lib/python3.7/site-packages/synapse/handlers/federation.py", line 284, in on_receive_pdu
    % (event_id, e)
Exception: Error fetching missing prev_events for $yg09yrsnWzyLftl-ygdO78GhGLs0iKOb038VZVNZ8Lg: FirstError[#0, [Failure instance: Traceback: <class 'psycopg2.errors.CardinalityViolation'>: more than one row returned by a subquery used as an expression

Version information

Synapse v1.8.0

Edit: It turns out that when this happens, the code that sends dummy events to fix forward extremities can break. Thus, forward extremities can build up to the point where your server starts to churn and consume vast amounts of RAM and CPU.
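If you want to check whether forward extremities are building up, counting them per room should show the worst offenders. This is a rough sketch, not from the original report; event_forward_extremities is the Synapse table that tracks them, and the LIMIT is arbitrary:

SELECT room_id, count(*) AS extremities FROM event_forward_extremities GROUP BY room_id ORDER BY extremities DESC LIMIT 10;

Rooms with large counts are the ones likely to cause the churn described above.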

@anoadragon453
Member Author

This is happening with the first execution of _update_current_state_txn:

    def _update_current_state_txn(self, txn, state_delta_by_room, stream_id):
        for room_id, current_state_tuple in iteritems(state_delta_by_room):
            to_delete, to_insert = current_state_tuple

            # First we add entries to the current_state_delta_stream. We
            # do this before updating the current_state_events table so
            # that we can use it to calculate the `prev_event_id`. (This
            # allows us to not have to pull out the existing state
            # unnecessarily).
            #
            # The stream_id for the update is chosen to be the minimum of the stream_ids
            # for the batch of the events that we are persisting; that means we do not
            # end up in a situation where workers see events before the
            # current_state_delta updates.
            #
            sql = """
                INSERT INTO current_state_delta_stream
                (stream_id, room_id, type, state_key, event_id, prev_event_id)
                SELECT ?, ?, ?, ?, ?, (
                    SELECT event_id FROM current_state_events
                    WHERE room_id = ? AND type = ? AND state_key = ?
                )
            """
            txn.executemany(
                sql,
                (
                    (
                        stream_id,
                        room_id,
                        etype,
                        state_key,
                        None,
                        room_id,
                        etype,
                        state_key,
                    )
                    for etype, state_key in to_delete
                    # We sanity check that we're deleting rather than updating
                    if (etype, state_key) not in to_insert
                ),
            )
            txn.executemany(
                sql,
                (
                    (
                        stream_id,
                        room_id,
                        etype,
                        state_key,
                        ev_id,
                        room_id,
                        etype,
                        state_key,
                    )
                    for (etype, state_key), ev_id in iteritems(to_insert)
                ),
            )

Possibly that inner SELECT statement is returning two rows.
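To inspect the suspected duplicates directly, a query along these lines lists any (room_id, type, state_key) tuples that have more than one row, together with their event_ids. This is a sketch, not something from the original comment:

SELECT room_id, type, state_key, array_agg(event_id) FROM current_state_events GROUP BY room_id, type, state_key HAVING count(*) > 1;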

@anoadragon453 anoadragon453 added the z-bug (Deprecated Label) label Jan 15, 2020
@anoadragon453
Member Author

anoadragon453 commented Jan 15, 2020

So this is due to current_state_events having multiple rows with the same room_id, type, state_key values, even though there is a unique index on those columns. We don't currently know how the database ends up in this state.

To check if you have the same problem, you can run:

select count(*), room_id, type, state_key from current_state_events group by 2,3,4 order by count(*) DESC LIMIT 10;

And see if any of the numbers in the count column are greater than 1. If so, you're affected.

To delete duplicate rows, I first took a backup and then ran:

DELETE FROM current_state_events a USING current_state_events b WHERE a.event_id < b.event_id AND a.room_id = b.room_id AND a.type = b.type AND a.state_key = b.state_key;

This deleted 518 rows for me.

After starting the server, things now work without the original exception. However, the stats.notify_new_event and user_directory.notify_new_event background processes now throw exceptions when they run:

2020-01-15 14:14:06,055 - synapse.metrics.background_process_metrics - 214 - ERROR - user_directory.notify_new_event-4- Background process 'stats.notify_new_event' threw an exception
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/synapse/synapse/env/lib/python3.7/site-packages/synapse/metrics/background_process_metrics.py", line 212, in run
    return (yield result)
  File "/home/synapse/synapse/env/lib/python3.7/site-packages/twisted/internet/defer.py", line 1416, in _inlineCallbacks
    result = result.throwExceptionIntoGenerator(g)
  File "/home/synapse/synapse/env/lib/python3.7/site-packages/twisted/python/failure.py", line 512, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File "/home/synapse/synapse/env/lib/python3.7/site-packages/synapse/handlers/stats.py", line 74, in process
    yield self._unsafe_process()
  File "/home/synapse/synapse/env/lib/python3.7/site-packages/twisted/internet/defer.py", line 1418, in _inlineCallbacks
    result = g.send(result)
  File "/home/synapse/synapse/env/lib/python3.7/site-packages/synapse/handlers/stats.py", line 100, in _unsafe_process
    self.pos, room_max_stream_ordering
  File "/home/synapse/synapse/env/lib/python3.7/site-packages/synapse/storage/data_stores/main/state_deltas.py", line 51, in get_current_state_deltas
    assert prev_stream_id <= max_stream_id
AssertionError

This may cause room stats and the user directory to stop updating. But otherwise things seem to be running fine.

@anoadragon453
Member Author

Some more debugging information:

synapse=# select i.* from pg_class c join pg_index i on i.indexrelid=c.oid  where relname='current_state_events_room_id_type_state_key_key';
 indexrelid | indrelid | indnatts | indisunique | indisprimary | indisexclusion | indimmediate | indisclustered | indisvalid | indcheckxmin | indisready | indislive | indisreplident | indkey | indcollation |    indclass    | indoption | indexprs | indpred 
------------+----------+----------+-------------+--------------+----------------+--------------+----------------+------------+--------------+------------+-----------+----------------+--------+--------------+----------------+-----------+----------+---------
     163692 |    68784 |        3 | t           | f            | f              | t            | f              | t          | f            | t          | t         | f              | 2 3 4  | 100 100 100  | 3126 3126 3126 | 0 0 0     |          | 
(1 row)

@anoadragon453
Member Author

Closing for now as I haven't seen any more notify_new_event tracebacks except for the first time starting my server after deleting the rows. I'll keep my eyes peeled though.

@anoadragon453
Member Author

It may also be a good idea to reindex your current_state_events table:

REINDEX TABLE current_state_events;

as recommended by https://www.postgresql.org/message-id/23007.1473604324%40sss.pgh.pa.us
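If other indexes on the database may also be affected, a broader option is to rebuild every index once the duplicate rows have been removed. A sketch, assuming the database is named synapse; note that reindexing will fail as long as rows still violate a unique index, so clean up duplicates first:

REINDEX DATABASE synapse;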

@richvdh richvdh reopened this Jan 16, 2020
@richvdh
Member

richvdh commented Jan 16, 2020

I've had another report of this from @jomat, which I think makes three now. I'm going to try taking it to the postgres mailing list.

@richvdh
Member

richvdh commented Jan 16, 2020

right, with the help of the postgres-general mailing list, I have an answer here:

As per https://wiki.postgresql.org/wiki/Locale_data_changes, glibc changed the sort order for UTF8 locales in version 2.28. This means postgres can be confused about where in an index a given entry should be stored.

Now, synapse should be safe, because the documentation says that you should set LC_COLLATE='C' on the postgres database; however, empirically, not everybody has read that bit of the documentation.

In short, if \l shows something other than C for the Collate value for the synapse database, and you have ever upgraded your database server from Debian 9 (stretch) to Debian 10 (buster), or from Ubuntu 18.04 (bionic) to Ubuntu 18.10 (cosmic), your indexes will be corrupted and you will need to recreate them.
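If you prefer a query to psql's \l output, the collation can also be read straight from the catalog. A sketch, assuming the database is named synapse; datcollate should come back as C:

SELECT datname, datcollate, datctype FROM pg_database WHERE datname = 'synapse';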

@richvdh
Member

richvdh commented Jan 16, 2020

I'm going to call this resolved, though as per #6722 we could do better at stopping people getting into this mess.

@richvdh richvdh closed this as completed Jan 16, 2020
richvdh pushed a commit that referenced this issue Jul 19, 2020
Instead just warn. This is a temporary workaround to allow
provisioning new Modular hosts.

See #6734
and #6696 (comment)

Signed-off-by: Jason Robinson <[email protected]>
@MurzNN

MurzNN commented Jan 8, 2021

I tried to dump a working database with en_US.UTF-8 collation and import it back, but got several errors when creating indexes, not only on the current_state_events table but on other tables too:

ERROR:  could not create unique index "current_state_events_room_id_type_state_key_key"
DETAIL:  Key (room_id, type, state_key)=(!JoceisEAQRNMtpWVGm:matrix.org, m.room.member, @_oftc_haagch:matrix.org) is duplicated.

ERROR:  could not create unique index "destinations_pkey"
DETAIL:  Key (destination)=(matrix.org) is duplicated.

ERROR:  could not create unique index "receipts_graph_uniqueness"
DETAIL:  Key (room_id, receipt_type, user_id)=(!jJmpwspyTtbvTSfcck:matrix.org, m.read, @n-13:matrix.org) is duplicated.

ERROR:  could not create unique index "receipts_linearized_uniqueness"
DETAIL:  Key (room_id, receipt_type, user_id)=(!zXfJBqSUvXySmsZMtB:jki.re, m.read, @tomtau.:matrix.org) is duplicated.

ERROR:  could not create unique index "device_lists_remote_cache_unique_id"
DETAIL:  Key (user_id, device_id)=(@gauge:matrix.org, ABYXOHMFSM) is duplicated.

ERROR:  could not create unique index "device_lists_remote_extremeties_unique_idx"
DETAIL:  Key (user_id)=(@bewe_:gnuradio.org) is duplicated.

Maybe this happened earlier, with an older version of Synapse, or is the result of a compression tool like #4372.

But the solution is to remove the duplicates. Here are SQL queries to remove them: https://gist.github.com/MurzNN/fa8f3536be21426c55307761d8dcbc90
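For tables such as destinations, where there is no second column to pick a survivor by, the physical row identifier ctid can serve as a tie-breaker instead of event_id. A sketch (take a backup first; this keeps one arbitrary row per key):

DELETE FROM destinations a USING destinations b WHERE a.ctid < b.ctid AND a.destination = b.destination;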

@richvdh
Member

richvdh commented Jan 11, 2021

Maybe this happened earlier, with an older version of Synapse

this is absolutely the sort of symptom you would expect from using a collation setting whose ordering has changed during the lifetime of the database. It is database-level corruption: an application (such as Synapse or a compression tool) cannot cause it.

@MurzNN

MurzNN commented Jan 11, 2021

Maybe this happened earlier, with an older version of Synapse

this is absolutely the sort of symptom you would expect from using a collation setting whose ordering has changed during the lifetime of the database. It is database-level corruption: an application (such as Synapse or a compression tool) cannot cause it.

I can't understand how this can happen, even with the wrong collation, if the database already has a unique index on those fields? For example, in the destinations table I found two rows with destination=matrix.org.
But after manually cleaning up the duplicates, new ones do not seem to be created again.

@richvdh
Member

richvdh commented Jan 11, 2021

I can't understand how this can happen, even with the wrong collation,

https://wiki.postgresql.org/wiki/Locale_data_changes explains it. The problem is that strings may sort in a different order depending on your OS version, which means that postgres cannot reliably find entries in the index.
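If you want to verify whether a particular index has actually been corrupted by the collation change, the amcheck contrib extension (PostgreSQL 10+, if the contrib modules are installed) can check B-tree ordering invariants against the current collation. A sketch using the index named earlier in this thread:

CREATE EXTENSION IF NOT EXISTS amcheck;
SELECT bt_index_check('current_state_events_room_id_type_state_key_key');

This raises an error if the index ordering is inconsistent; a clean return means that particular index looks sound.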

@DMRobertson
Contributor

DMRobertson commented Oct 5, 2023

right, with the help of the postgres-general mailing list

For completeness, Rich's email thread was https://www.postgresql.org/message-id/flat/288488b7-d3b6-905b-3706-43f7a3c7bc52%40matrix.org
