Spring XD / XD-3613

Multiple module instances consuming from taps or topics get duplicate messages on redis Message Bus


    Details

    • Type: Bug
    • Status: Done
    • Priority: Critical
    • Resolution: Won't Fix
    • Affects Version/s: 1.2.1
    • Fix Version/s: None
    • Component/s: Runtime
    • Labels:
      None
    • Story Points:
      5

      Description

If I deploy more than one instance of a module (e.g. using module.<name>.count > 1, or module.<name>.count=0 to deploy an instance on every container) that consumes from a tap or topic, then I get duplicate messages when using Redis as the message bus. This looks like the same issue as XD-3100, but the fix for that only covered RabbitMQ as the message bus.

      This is easy to reproduce on a 2 container cluster using a Redis Message Bus:

      Create and deploy streams as follows:

      stream create --definition "http | log" --name httpLog
      stream deploy --name httpLog --properties "module.*.count=0"
      stream create --definition "tap:stream:httpLog > transform --expression='payload.toString() + \" TAPPED\"' | log" --name httpLogTap 
      stream deploy --name httpLogTap --properties "module.*.count=0"
      

      On container 1 send a message:

      curl --data "test message 001" http://localhost:9000/httpLog
      

      Container 1 logs are then:

      2015-10-13 14:16:28.853  INFO 22774 --- [ol-28-thread-18] xd.sink.httpLog                          : test message 001
      2015-10-13 14:16:28.855  INFO 22774 --- [enerContainer-4] xd.sink.httpLogTap                       : test message 001 TAPPED
      

      and container 2:

      2015-10-13 14:16:28.859  INFO 22719 --- [enerContainer-4] xd.sink.httpLogTap                       : test message 001 TAPPED
      

I.e. the tapped message is duplicated (picked up by both tap module instances).
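For context on why a Redis-backed bus can behave this way: Redis PUBLISH/SUBSCRIBE is pure fan-out (every subscriber receives every message), whereas competing consumers need to drain a shared queue such as a Redis list. If the bus maps tap consumers directly onto pub/sub subscriptions rather than a shared queue, each instance gets its own copy. The two delivery modes can be modeled in plain Python (no Redis client involved; the class names below are illustrative only, not XD APIs):

```python
from collections import deque

class PubSubBus:
    """Fan-out semantics: every subscriber gets every message (like Redis PUBLISH/SUBSCRIBE)."""
    def __init__(self):
        self.subscribers = []
    def subscribe(self, inbox):
        self.subscribers.append(inbox)
    def publish(self, msg):
        for inbox in self.subscribers:
            inbox.append(msg)

class QueueBus:
    """Competing-consumer semantics: each message reaches exactly one consumer (like a Redis list)."""
    def __init__(self):
        self.queue = deque()
    def publish(self, msg):
        self.queue.append(msg)
    def poll(self):
        return self.queue.popleft() if self.queue else None

# Two tap module instances subscribing via pub/sub: both see the message -> duplicate.
pubsub = PubSubBus()
inst1, inst2 = [], []
pubsub.subscribe(inst1)
pubsub.subscribe(inst2)
pubsub.publish("test message 001")
assert inst1 == ["test message 001"] and inst2 == ["test message 001"]  # duplicated

# The same two instances draining one shared queue: only one of them receives it.
queue = QueueBus()
queue.publish("test message 001")
received = [queue.poll(), queue.poll()]
assert received.count("test message 001") == 1  # delivered exactly once
```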

Similarly, for topics, create and deploy these streams:

      stream create --definition "http > topic:mytopic" --name httpTopic
      stream deploy --name httpTopic --properties "module.*.count=0"
      stream create --definition "topic:mytopic > transform --expression='payload.toString() + \" TOPIC CONSUMER 1\"' | log" --name topicConsumer1
      stream deploy --name topicConsumer1 --properties "module.*.count=0"
      stream create --definition "topic:mytopic > transform --expression='payload.toString() + \" TOPIC CONSUMER 2\"' | log" --name topicConsumer2
      stream deploy --name topicConsumer2 --properties "module.*.count=0"
      

      On container 1 send a message:

      curl --data "test message 002" http://localhost:9000/httpLog
      

      Container 1 logs are then:

      2015-10-13 14:34:23.168  INFO 22774 --- [enerContainer-2] xd.sink.topicConsumer2                   : test message 002 TOPIC CONSUMER 2
      2015-10-13 14:34:23.172  INFO 22774 --- [enerContainer-2] xd.sink.topicConsumer1                   : test message 002 TOPIC CONSUMER 1
      

      and container 2:

      2015-10-13 14:34:23.173  INFO 22719 --- [enerContainer-2] xd.sink.topicConsumer2                   : test message 002 TOPIC CONSUMER 2
      2015-10-13 14:34:23.177  INFO 22719 --- [enerContainer-2] xd.sink.topicConsumer1                   : test message 002 TOPIC CONSUMER 1
      

I.e. the topic message is picked up by every instance of the module in each stream. In this case I would expect each stream to pick up the message exactly once, i.e. a single output per stream:

test message 002 TOPIC CONSUMER 2 once (on either container)
test message 002 TOPIC CONSUMER 1 once (on either container)


              People

              Assignee:
              mbogoevici Marius Bogoevici
              Reporter:
              david_geary David Geary
Votes:
0
Watchers:
4
