Saturday, July 18, 2015

JBoss - HornetQ: ERROR [org.hornetq.core.client] (Old I/O client worker

Topology:
  • JBoss AS 7.1.1 Standalone, profile full-ha
  • JBoss EAP 6.2 Domain, 1 DC + 2 HCs

All nodes run on the same machine, but under different binding addresses. The image in the original post shows the complete topology.

Standalone HornetQ config:
<subsystem xmlns="urn:jboss:domain:messaging:1.1">
 <hornetq-server>
  <clustered>true</clustered>
  <persistence-enabled>true</persistence-enabled>
  <journal-file-size>102400</journal-file-size>
  <journal-min-files>2</journal-min-files>
  <connectors>
   <netty-connector name="netty" socket-binding="messaging"/>
   <netty-connector name="netty-throughput" socket-binding="messaging-throughput">
    <param key="batch-delay" value="50"/>
   </netty-connector>
   <in-vm-connector name="in-vm" server-id="0"/>
  </connectors>
  <acceptors>
   <netty-acceptor name="netty" socket-binding="messaging"/>
   <netty-acceptor name="netty-throughput" socket-binding="messaging-throughput">
    <param key="batch-delay" value="50"/>
    <param key="direct-deliver" value="false"/>
   </netty-acceptor>
   <in-vm-acceptor name="in-vm" server-id="0"/>
  </acceptors>
  <broadcast-groups>
   <broadcast-group name="bg-group1">
    <group-address>231.7.7.7</group-address>
    <group-port>9876</group-port>
    <broadcast-period>5000</broadcast-period>
    <connector-ref>netty</connector-ref>
   </broadcast-group>
  </broadcast-groups>
  <discovery-groups>
   <discovery-group name="dg-group1">
    <group-address>231.7.7.7</group-address>
    <group-port>9876</group-port>
    <refresh-timeout>10000</refresh-timeout>
   </discovery-group>
  </discovery-groups>
  <cluster-connections>
   <cluster-connection name="my-cluster">
    <address>jms</address>
    <connector-ref>netty</connector-ref>
    <discovery-group-ref discovery-group-name="dg-group1"/>
   </cluster-connection>
  </cluster-connections>
  <security-settings>
   <security-setting match="#">
    <permission type="send" roles="guest"/>
    <permission type="consume" roles="guest"/>
    <permission type="createNonDurableQueue" roles="guest"/>
    <permission type="deleteNonDurableQueue" roles="guest"/>
   </security-setting>
  </security-settings>
  <address-settings>
   <!--default for catch all-->
   <address-setting match="#">
    <dead-letter-address>jms.queue.DLQ</dead-letter-address>
    <expiry-address>jms.queue.ExpiryQueue</expiry-address>
    <redelivery-delay>0</redelivery-delay>
    <redistribution-delay>1000</redistribution-delay>
    <max-size-bytes>10485760</max-size-bytes>
    <address-full-policy>BLOCK</address-full-policy>
    <message-counter-history-day-limit>10</message-counter-history-day-limit>
   </address-setting>
  </address-settings>
  <jms-connection-factories>
   <connection-factory name="InVmConnectionFactory">
    <connectors>
     <connector-ref connector-name="in-vm"/>
    </connectors>
    <entries>
     <entry name="java:/ConnectionFactory"/>
    </entries>
   </connection-factory>
   <connection-factory name="RemoteConnectionFactory">
    <connectors>
     <connector-ref connector-name="netty"/>
    </connectors>
    <entries>
     <entry name="RemoteConnectionFactory"/>
     <entry name="java:jboss/exported/jms/RemoteConnectionFactory"/>
    </entries>
   </connection-factory>
   <pooled-connection-factory name="hornetq-ra">
    <transaction mode="xa"/>
    <connectors>
     <connector-ref connector-name="in-vm"/>
    </connectors>
    <entries>
     <entry name="java:/JmsXA"/>
    </entries>
   </pooled-connection-factory>
  </jms-connection-factories>
  <jms-destinations>
   <jms-queue name="testQueue">
    <entry name="queue/test"/>
    <entry name="java:jboss/exported/jms/queue/test"/>
   </jms-queue>
   <jms-topic name="testTopic">
    <entry name="topic/test"/>
    <entry name="java:jboss/exported/jms/topic/test"/>
   </jms-topic>
  </jms-destinations>
 </hornetq-server>
</subsystem>

Domain HornetQ config (full-ha profile):
<subsystem xmlns="urn:jboss:domain:messaging:1.4">
 <hornetq-server name="main-culster">
  <persistence-enabled>true</persistence-enabled>
  <cluster-user>adminCluster</cluster-user>
  <cluster-password>cluster.123</cluster-password>
  <journal-type>NIO</journal-type>
  <journal-min-files>2</journal-min-files>

  <connectors>
   <netty-connector name="netty" socket-binding="messaging"/>
   <netty-connector name="netty-throughput" socket-binding="messaging-throughput">
    <param key="batch-delay" value="50"/>
   </netty-connector>
   <in-vm-connector name="in-vm" server-id="0"/>
  </connectors>

  <acceptors>
   <netty-acceptor name="netty" socket-binding="messaging"/>
   <netty-acceptor name="netty-throughput" socket-binding="messaging-throughput">
    <param key="batch-delay" value="50"/>
    <param key="direct-deliver" value="false"/>
   </netty-acceptor>
   <in-vm-acceptor name="in-vm" server-id="0"/>
  </acceptors>

  <broadcast-groups>
   <broadcast-group name="bg-group1">
    <socket-binding>messaging-group</socket-binding>
    <broadcast-period>5000</broadcast-period>
    <connector-ref>netty</connector-ref>
   </broadcast-group>
  </broadcast-groups>

  <discovery-groups>
   <discovery-group name="dg-group1">
    <socket-binding>messaging-group</socket-binding>
    <refresh-timeout>10000</refresh-timeout>
   </discovery-group>
  </discovery-groups>

  <cluster-connections>
   <cluster-connection name="my-cluster">
    <address>jms</address>
    <connector-ref>netty</connector-ref>
    <discovery-group-ref discovery-group-name="dg-group1"/>
   </cluster-connection>
  </cluster-connections>

  <security-settings>
   <security-setting match="#">
    <permission type="send" roles="guest"/>
    <permission type="consume" roles="guest"/>
    <permission type="createNonDurableQueue" roles="guest"/>
    <permission type="deleteNonDurableQueue" roles="guest"/>
   </security-setting>
  </security-settings>

  <address-settings>
   <address-setting match="#">
    <dead-letter-address>jms.queue.DLQ</dead-letter-address>
    <expiry-address>jms.queue.ExpiryQueue</expiry-address>
    <redelivery-delay>0</redelivery-delay>
    <max-size-bytes>10485760</max-size-bytes>
    <page-size-bytes>2097152</page-size-bytes>
    <address-full-policy>PAGE</address-full-policy>
    <message-counter-history-day-limit>10</message-counter-history-day-limit>
    <redistribution-delay>1000</redistribution-delay>
   </address-setting>
  </address-settings>

  <jms-connection-factories>
   <connection-factory name="InVmConnectionFactory">
    <connectors>
     <connector-ref connector-name="in-vm"/>
    </connectors>
    <entries>
     <entry name="java:/ConnectionFactory"/>
    </entries>
   </connection-factory>
   <connection-factory name="RemoteConnectionFactory">
    <connectors>
     <connector-ref connector-name="netty"/>
    </connectors>
    <entries>
     <entry name="java:jboss/exported/jms/RemoteConnectionFactory"/>
    </entries>
    <ha>true</ha>
    <block-on-acknowledge>true</block-on-acknowledge>
    <retry-interval>1000</retry-interval>
    <retry-interval-multiplier>1.0</retry-interval-multiplier>
    <reconnect-attempts>-1</reconnect-attempts>
   </connection-factory>
   <pooled-connection-factory name="hornetq-ra">
    <transaction mode="xa"/>
    <connectors>
     <connector-ref connector-name="in-vm"/>
    </connectors>
    <entries>
     <entry name="java:/JmsXA"/>
    </entries>
   </pooled-connection-factory>
  </jms-connection-factories>

  <jms-destinations>
   <jms-queue name="testQueue">
    <entry name="queue/test"/>
    <entry name="java:jboss/exported/jms/queue/test"/>
   </jms-queue>
  </jms-destinations>
 </hornetq-server>
</subsystem>

Leaving the rest of the configuration at its defaults, you can get two different errors:
1) If AS 7.1.1 is running and an EAP 6.2 HC starts, you get these errors:
In AS 7.1.1:
WARN  [org.hornetq.core.protocol.core.impl.HornetQPacketHandler] (Old I/O server worker (parentId: 1268068589, [id: 0x4b9530ed, /192.168.56.120:5445])) Client with version 123 and address /192.168.56.120:34031 is not compatible with server version 2.2.13.Final (HQ_2_2_13_FINAL_AS7, 122). Please ensure all clients and servers are upgraded to the same version for them to interoperate properly

ERROR [org.hornetq.core.protocol.core.impl.HornetQPacketHandler] (Old I/O server worker (parentId: 1268068589, [id: 0x4b9530ed, /192.168.56.120:5445])) Failed to create session : HornetQException[errorCode=108 message=Server and client versions incompatible]

In HC:
ERROR [org.hornetq.core.client] (Old I/O client worker ([id: 0xe27d6774, /192.168.56.120:34041 => /192.168.56.120:5445])) HQ214013: Failed to decode packet: java.lang.IndexOutOfBoundsException: Readable byte limit exceeded: 207

2) If the EAP 6.2 HCs are running and AS 7.1.1 starts, you get these errors in the HCs:
In HC1:
ERROR [org.hornetq.core.server] (Old I/O server worker (parentId: -2000364733, [id: 0x88c4db43, /192.168.56.102:5595])) HQ224018: Failed to create session: HornetQException[errorType=SECURITY_EXCEPTION message=HQ119031: Unable to validate user: HORNETQ.CLUSTER.ADMIN.USER]

In HC2:
ERROR [org.hornetq.core.server] (Old I/O server worker (parentId: -1549800239, [id: 0xa39fecd1, /192.168.56.103:5595])) HQ224018: Failed to create session: HornetQException[errorType=SECURITY_EXCEPTION message=HQ119031: Unable to validate user: HORNETQ.CLUSTER.ADMIN.USER]

Well, this is what I think is happening.
When AS 7.1.1 is running and one HC starts, the latter tries to enlist in the existing HornetQ 2.2.13 cluster. But as the versions are different, the message formats don't match.
In the error, the IP address shown belongs to the AS 7.1.1 server.


When the EAP domain is running and AS 7.1.1 starts, the latter tries to enlist in the HornetQ 2.3.12 cluster. In this case the cluster understands the messages, but the credentials sent by AS 7.1.1 are not valid.
You can change the credentials in the AS 7.1.1 standalone-full-ha configuration to match, and in that case you will get the following warning in the HC:
WARN  [org.hornetq.core.client] (hornetq-discovery-group-thread-dg-group1) HQ212034: There are more than one servers on the network broadcasting the same node id. You will see this message exactly once (per node) if a node is restarted, in which case it can be safely ignored. But if it is logged continuously it means you really do have more than one node on the same network active concurrently with the same node id. This could occur if you have a backup node active at the same time as its live node. nodeID=d03a0cf1-2d35-11e5-ae4b-0800276d9dbc

This happens because you are trying to configure two live (master) nodes, one in AS 7.1.1 and one in EAP, and both identify themselves with the same parameters.
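
For reference, this is a minimal sketch of how the cluster credentials could be set in the AS 7.1.1 standalone-full-ha.xml so they match the EAP domain. I believe the messaging:1.1 schema accepts cluster-user and cluster-password, but treat this as an illustration and keep whatever element order the schema expects:
<subsystem xmlns="urn:jboss:domain:messaging:1.1">
 <hornetq-server>
  <clustered>true</clustered>
  <!-- Assumed values: the same credentials used in the EAP 6.2 domain profile above -->
  <cluster-user>adminCluster</cluster-user>
  <cluster-password>cluster.123</cluster-password>
  (...)
 </hornetq-server>
</subsystem>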

But what I want is to keep both clusters isolated, so they do not interfere with each other.
The simplest way is to change the multicast address.

In the domain config, the multicast address is configured as a socket binding referenced from the broadcast-group and the discovery-group. It is named messaging-group:
<socket-binding-group name="full-ha-sockets" default-interface="public">
 (...)
 <socket-binding name="messaging" port="5445"/>
 <socket-binding name="messaging-group" port="0" multicast-address="${jboss.messaging.group.address:231.7.7.7}" multicast-port="${jboss.messaging.group.port:9876}"/>
 <socket-binding name="messaging-throughput" port="5455"/>
 (...)
</socket-binding-group>
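
Because the binding uses the jboss.messaging.group.address and jboss.messaging.group.port expressions, the values can also be overridden per server group in domain.xml instead of editing the profile. A sketch, assuming the default server-group name main-server-group and an arbitrary example port 9877:
<server-group name="main-server-group" profile="full-ha">
 (...)
 <!-- Example override: every server in this group broadcasts and discovers on port 9877 -->
 <system-properties>
  <property name="jboss.messaging.group.port" value="9877"/>
 </system-properties>
 (...)
</server-group>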

In the standalone config, the multicast address is configured directly in the hornetq-server:
<broadcast-groups>
 <broadcast-group name="bg-group1">
  <group-address>231.7.7.7</group-address>
  <group-port>9876</group-port>
  <broadcast-period>5000</broadcast-period>
  <connector-ref>netty</connector-ref>
 </broadcast-group>
</broadcast-groups>
<discovery-groups>
 <discovery-group name="dg-group1">
  <group-address>231.7.7.7</group-address>
  <group-port>9876</group-port>
  <refresh-timeout>10000</refresh-timeout>
 </discovery-group>
</discovery-groups>

So, we have two clusters broadcasting to the same address and the same port. That's why they are colliding.
I tried to change only the multicast address, but they still collided. The solution was to change the multicast port.
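
For example, on the domain side the whole fix comes down to the default multicast port of the messaging-group binding (9877 is an arbitrary example value; the AS 7.1.1 standalone keeps its group-port at 9876):
<socket-binding name="messaging-group" port="0" multicast-address="${jboss.messaging.group.address:231.7.7.7}" multicast-port="${jboss.messaging.group.port:9877}"/>

With each cluster broadcasting and listening on its own port, they no longer see each other's announcements and stay isolated.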
