It was port 7000 that was my issue. I was thinking everything was going off
9160, and hadn't made sure that port was open.
Thanks Sasha and Jonathan.
On Fri, Jun 24, 2011 at 8:42 AM, Jonathan Ellis wrote:
Did you try netcat to verify that you can get to the internal port on
machine X from machine Y?
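If netcat isn't handy, the same reachability check can be sketched in a few lines of Python. The host name below is a placeholder for the other node's address; 7000 is Cassandra's inter-node storage (gossip) port and 9160 the Thrift client port:

```python
import socket

def port_open(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        # create_connection resolves the name and attempts the connect
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # covers refused connections, timeouts, and DNS failures
        return False

if __name__ == "__main__":
    # "machine-x" is a placeholder -- substitute the other node's address
    for port in (7000, 9160):
        state = "open" if port_open("machine-x", port) else "unreachable"
        print("port %d: %s" % (port, state))
```

Run it from machine Y pointed at machine X; if 9160 is open but 7000 is not, clients can connect while gossip between nodes cannot.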
On Fri, Jun 24, 2011 at 8:20 AM, David McNelis
wrote:
Running on CentOS.
We had a massive power failure and our UPS wasn't up to 48 hours without
power...
In this situation the IP addresses have all stayed the same. I can still
connect to the "other" node from cli, so I don't think it's an issue where
the iptables settings weren't saved and started
Normally, no. What you've done is fine. What is the environment?
On Amazon EC2, for example, the instance could have crashed, and a new
one is brought online and has a different internal IP ...
In the cassandra/logs/system.log, are there any messages on the 2nd
node, and how does it relate to the seed node?
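To act on that suggestion, a small sketch like this can pull the warning- and error-level lines out of system.log. The path follows the cassandra/logs/system.log location mentioned above, and the level strings are generic log4j levels, so adjust both for your install:

```python
import sys

def error_lines(path, levels=("ERROR", "WARN")):
    """Yield log lines containing one of the given level strings."""
    with open(path) as f:
        for line in f:
            if any(level in line for level in levels):
                yield line.rstrip("\n")

if __name__ == "__main__":
    # default path assumes you run this from the Cassandra install dir
    log = sys.argv[1] if len(sys.argv) > 1 else "cassandra/logs/system.log"
    for line in error_lines(log):
        print(line)
```

Gossip and handshake problems between the second node and the seed usually surface as WARN/ERROR entries shortly after startup, so this narrows the log to the interesting lines.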
I am running 0.8.0 on CentOS. I have 2 nodes in my cluster; one is a
seed, the other is autobootstrapped.
After having an unexpected shutdown of both of the physical machines I am
trying to restart the cluster. I first started the seed node, it went
through the normal startup process and finished