Repair doesn't have a mechanism to drop a table.
There are some race conditions in schema creation that can cause
programmatic schema changes (especially when multiple instances of the
app race to create the same table) to put things into a bad state.
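The classic trigger is several app instances running their schema setup
concurrently at startup, e.g. each executing something like this
(keyspace/table names are just placeholders):

    CREATE TABLE IF NOT EXISTS myks.events (
        id uuid PRIMARY KEY,
        payload text
    );

If two coordinators process this at the same time, each can generate a
different table id (cfid) for the same table name, and the cluster can
end up with nodes disagreeing about which id is the real one.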
If this is the problem, you'd want to inspect the cfid in the schema
tables and compare it with what the nodes have on disk.
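For example, roughly like this (keyspace name and data path are
assumptions, adjust for your cluster):

    cqlsh -e "SELECT keyspace_name, table_name, id \
              FROM system_schema.tables \
              WHERE keyspace_name = 'myks';"
    ls /var/lib/cassandra/data/myks/

Each data directory is named <table>-<id without dashes>, so a directory
whose suffix doesn't match the id in system_schema points at a stale or
raced incarnation of the table.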
That's unlikely. We run the repair job from crontab every week when no
application is connected to the cluster (the cron entry is shown after
the log excerpt below). We had the same error for another table for more
than 3 weeks until we recreated it:
ERROR [AntiEntropyStage:1] 2019-04-13 16:00:18,397
RepairMessageVerbHandler.java:177 - Table with id
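For reference, the weekly job is just a cron entry of this shape (the
day/time and log path here are illustrative, not our exact setup):

    # full cross-DC-parallel repair once a week, while no app is connected
    0 3 * * 0  nodetool repair -full -dcpar >> /var/log/cassandra-repair.log 2>&1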
Someone issued a drop table statement?
--
Jeff Jirsa
> On May 20, 2019, at 9:14 AM, Oliver Herrmann wrote:
>
> Hi,
>
> While running 'nodetool repair -full -dcpar' on one node, we got the
> following error:
>
> ERROR [AntiEntropyStage:1] 2019-05-18 16:00:04,808
> RepairMessageVerbHandler.java:177 - Table with id
> 5fb6b730-4ec3-11e9-b426-c3afc7dfebf6 was dropped during prepare phase of
> repair
Hi,
While running 'nodetool repair -full -dcpar' on one node, we got the
following error:
ERROR [AntiEntropyStage:1] 2019-05-18 16:00:04,808
RepairMessageVerbHandler.java:177 - Table with id
5fb6b730-4ec3-11e9-b426-c3afc7dfebf6 was dropped during prepare phase of
repair
It looks like the