
always return non nil Rows in fakesqldb.Conn#561

Merged
yaoshengzhe merged 1 commit into vitessio:master from yaoshengzhe:fix_always_non_nil_rows_in_fakesqldb on Apr 12, 2015

Conversation

@yaoshengzhe
Contributor

fakesqldb.Conn does not initialize proto.QueryResult.Rows when RowsAffected is zero.
However, according to go/mysql/mysql.go, the response should contain an empty
(non-nil) proto.QueryResult.Rows.
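The distinction the PR description is making can be sketched as below. The struct and helper are simplified stand-ins for illustration, not the actual vitess proto types or fakesqldb API; the point is only that the Rows field is initialized to an empty slice rather than left nil.

```go
package main

import "fmt"

// QueryResult is a simplified stand-in for the vitess proto type
// mentioned in the PR description.
type QueryResult struct {
	RowsAffected uint64
	Rows         [][]string
}

// execute is a hypothetical helper showing the fixed behavior: the
// result's Rows field is always non-nil, even when no rows are
// returned, so callers can range over it without a nil check.
func execute(rowsAffected uint64) *QueryResult {
	return &QueryResult{
		RowsAffected: rowsAffected,
		Rows:         make([][]string, 0), // empty, not nil
	}
}

func main() {
	res := execute(0)
	fmt.Println(res.Rows != nil) // non-nil even when RowsAffected is zero
}
```

In Go, a nil slice and an empty slice behave the same for `len` and `range`, but they marshal and compare differently, which is why a test double like fakesqldb should match the empty-slice behavior of the real connection.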

@sougou
Contributor

sougou commented Apr 11, 2015

lgtm

yaoshengzhe added a commit that referenced this pull request Apr 12, 2015

always return non nil Rows in fakesqldb.Conn
@yaoshengzhe yaoshengzhe merged commit b27c558 into vitessio:master Apr 12, 2015
@yaoshengzhe yaoshengzhe deleted the fix_always_non_nil_rows_in_fakesqldb branch April 12, 2015 01:49
notfelineit pushed a commit to planetscale/vitess that referenced this pull request May 3, 2022

* Only start SQL thread temporarily to WaitForPosition if needed (vitessio#10104)

After vitessio#9512 we always attempted to start the replication SQL_Thread(s) when waiting for a given position. The problem, however, is that if the SQL_Thread is running but the IO_Thread is not, the tablet repair does not try to start replication on a replica tablet. So in certain states, such as when initializing a shard, replication may end up in an unhealthy state and never be repaired.

This changes the behavior so that:
  1. We only attempt to start the SQL_Thread(s) if it's not already running
  2. If we explicitly start the SQL_Thread(s) then we also explicitly reset it to what it was (stopped) as we exit the call

Because the caller should be (or have) a TabletManager, which holds a mutex, the replication manager calls should be serialized; and because we reset the replication state after mutating it, everything should work as it did before vitessio#9512, with the exception that when waiting we ensure the replica at least has the possibility of catching up.
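The conditional start-and-restore behavior described above can be sketched as follows. The `replicator` type and its methods are hypothetical simplifications; the real code goes through the TabletManager and mysqlctl layers rather than a flag on a struct.

```go
package main

import "fmt"

// replicator is a hypothetical stand-in for the replication manager;
// the boolean models whether the SQL_Thread is running.
type replicator struct {
	sqlThreadRunning bool
}

func (r *replicator) startSQLThread() { r.sqlThreadRunning = true }
func (r *replicator) stopSQLThread()  { r.sqlThreadRunning = false }

// waitForPosition starts the SQL thread only if it is not already
// running, and if it did start it, restores the stopped state on exit.
func (r *replicator) waitForPosition() {
	if !r.sqlThreadRunning {
		r.startSQLThread()
		defer r.stopSQLThread() // reset to prior (stopped) state on exit
	}
	// ... wait for the target replication position here ...
}

func main() {
	r := &replicator{sqlThreadRunning: false}
	r.waitForPosition()
	fmt.Println(r.sqlThreadRunning) // back to stopped after the call
}
```

The `defer` inside the `if` is the key detail: the stop only runs when this call was the one that started the thread, so a thread that was already running is left untouched.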

Signed-off-by: Matt Lord <mattalord@gmail.com>

* Use older replication status interface

As vitess-private does not have this:
  vitessio#9853

Signed-off-by: Matt Lord <mattalord@gmail.com>
