Scalability - CometD long polling - does it scale nicely to high traffic?
If we use CometD long polling:
Suppose there are 1000 messages per second sent to subscribers. Does CometD automatically batch them so that each client doesn't have to re-connect for every single message?
Do "lazy channels" (as described here: http://docs.cometd.org/3/reference/#_java_server_lazy_messages) automatically batch queued messages and send them to clients when the timeout expires?
If, on the other hand, we don't use lazy channels, and suppose we "batch-publish" messages on channels 1, 2 and 3:
```javascript
cometd.batch(function() {
    cometd.publish('/channel1', { product: 'foo' });
    cometd.publish('/channel2', { notificationtype: 'all' });
    cometd.publish('/channel3', { update: false });
});
```
(see http://docs.cometd.org/3/reference/#_javascript_batch)
Will a client subscribed to all 3 channels receive them in a batch too? Or will they be sent separately, forcing the client to re-connect after each message (slow)?
CometD gives application developers full control of its batching features, allowing maximum flexibility, performance and scalability.
When using HTTP long-polling transports, there are two places where batching may happen.
From client to server, batching is done with the CometD API explicitly (like in the snippet above). Batching at this level is typically under the control of the application, although CometD does some internal batching to avoid exhausting the connections to the server.
From server to client there are more variations.
For broadcast non-lazy channels there is no automation: the first message for a client (that is not the publisher) triggers a flush of that client's message queue; while it is being sent, other messages queue up on the server side for that client, and on the next /meta/connect the whole queue is flushed. For 10 messages the scheme would be 1-flush-9-flush (enqueue 1, flush the queue, enqueue the other 9 while waiting for the /meta/connect to come back, then flush the other 9).
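To make the 1-flush-9-flush scheme concrete, here is a minimal plain-Java sketch of that behaviour. This is a simplified model written for illustration, not CometD's actual classes: a per-client queue is flushed immediately on the first message, and again when the simulated /meta/connect returns.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Simplified model of a per-client server-side queue (not CometD's real implementation).
class ClientQueueModel {
    private final Queue<String> queue = new ArrayDeque<>();
    private boolean connectOutstanding = true; // the server holds a pending /meta/connect
    int flushes = 0;

    // A broadcast message arrives for this client.
    void enqueue(String message) {
        queue.add(message);
        if (connectOutstanding) {
            flush(); // the first message triggers an immediate flush
        }
        // otherwise the message just queues until the next /meta/connect
    }

    // The client's next /meta/connect arrives: flush whatever queued meanwhile.
    void onMetaConnect() {
        connectOutstanding = true;
        if (!queue.isEmpty()) {
            flush();
        }
    }

    private void flush() {
        System.out.println("flush of " + queue.size() + " message(s)");
        queue.clear();
        connectOutstanding = false; // response sent; wait for the next /meta/connect
        flushes++;
    }
}

public class NonLazyBroadcast {
    public static void main(String[] args) {
        ClientQueueModel client = new ClientQueueModel();
        for (int i = 1; i <= 10; i++) {
            client.enqueue("message-" + i); // message 1 flushes alone, the other 9 queue
        }
        client.onMetaConnect(); // flushes the remaining 9 in one response
        System.out.println("total flushes: " + client.flushes); // prints "total flushes: 2"
    }
}
```

Running the model with 10 messages produces exactly two flushes (1 message, then 9), matching the 1-flush-9-flush scheme described above.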
For broadcast lazy channels there is automation: CometD waits before sending the messages, following the rules for lazy messages. The typical scheme would be 10-flush.
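The lazy-channel idea can be sketched the same way, again as a plain-Java model rather than real CometD internals: messages arriving within the lazy timeout are coalesced and flushed together once the timeout fires. The timeout value and the logical-time API here are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model of lazy-channel coalescing (not CometD's real implementation).
class LazyQueueModel {
    private final List<String> queue = new ArrayList<>();
    private final long lazyTimeoutMillis;
    private long timerDeadline = -1; // -1 means no flush is scheduled
    int flushes = 0;

    LazyQueueModel(long lazyTimeoutMillis) {
        this.lazyTimeoutMillis = lazyTimeoutMillis;
    }

    // A lazy message arrives at logical time 'now': schedule a flush if none pending.
    void enqueue(String message, long now) {
        queue.add(message);
        if (timerDeadline < 0) {
            timerDeadline = now + lazyTimeoutMillis;
        }
    }

    // Advance logical time; flush once the lazy timeout has expired.
    void tick(long now) {
        if (timerDeadline >= 0 && now >= timerDeadline) {
            System.out.println("lazy flush of " + queue.size() + " message(s)");
            queue.clear();
            timerDeadline = -1;
            flushes++;
        }
    }
}

public class LazyBroadcast {
    public static void main(String[] args) {
        LazyQueueModel client = new LazyQueueModel(100); // 100 ms lazy timeout (assumed)
        for (int i = 1; i <= 10; i++) {
            client.enqueue("message-" + i, i); // all 10 arrive within the timeout window
        }
        client.tick(101); // timeout expired: single flush of all 10 messages
        System.out.println("total flushes: " + client.flushes); // prints "total flushes: 1"
    }
}
```

All 10 messages fall inside the timeout window, so the model performs a single flush: the 10-flush scheme. In real CometD the channel is marked lazy on the server side, as described in the lazy-messages reference linked in the question.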
For service channels, it is under the control of the application. A client can send batched messages to the application via a service channel (whose messages are not broadcast automatically by CometD). The application on the server can receive the first message and know that 9 more are coming, so it can wait until the last one has arrived before responding. When the last one arrives, it can use the batching API to batch the responses to clients, like:
```java
List<ServerSession> subscribers = ...;
for (ServerSession subscriber : subscribers) {
    subscriber.batch(() -> {
        subscriber.deliver(sender, "/response", response1);
        subscriber.deliver(sender, "/response", response2);
        subscriber.deliver(sender, "/response", response3);
    });
}
```
Of course the responses may differ from the messages received, both in content and in number. The scheme here can be whatever the application wants, but it is common to have 10-flush, which is the most efficient.
A note regarding the batching of messages sent back to the publisher. This is a special case and it is automated by default: while processing incoming messages from a publisher, CometD starts an internal batch for that particular publisher, so any message delivered to the publisher is batched and flushed at the end of the processing of the incoming messages.
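That publisher special case can also be modelled in a few lines of plain Java. This is an illustration of the behaviour, not CometD's implementation: replies delivered to the publisher while its incoming messages are being processed accumulate in an internal batch and are flushed once at the end.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model of CometD's automatic publisher-side batching (an illustration,
// not the real implementation): replies to the publisher produced while its
// incoming messages are processed are flushed once, at the end of processing.
public class PublisherBatch {
    static final List<String> pending = new ArrayList<>();
    static int flushes = 0;

    // A reply destined for the publisher: batched, not sent immediately.
    static void deliverToPublisher(String reply) {
        pending.add(reply);
    }

    static void processIncoming(List<String> incoming) {
        // An internal batch is open for the whole processing of the incoming messages.
        for (String message : incoming) {
            deliverToPublisher("ack:" + message); // e.g. the application replies to each
        }
        // End of processing: the internal batch is flushed in one response.
        System.out.println("flushing " + pending.size() + " replies in one response");
        pending.clear();
        flushes++;
    }

    public static void main(String[] args) {
        processIncoming(List.of("m1", "m2", "m3"));
        System.out.println("total flushes: " + flushes); // prints "total flushes: 1"
    }
}
```

Three incoming messages produce three replies, but the publisher sees a single flush, so it does not have to re-connect once per reply.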
The bottom line is that CometD is well tuned to give the maximum performance and scalability in the common cases, yet leaves the application room to customize the behaviour and achieve maximum efficiency using application-specific knowledge of its message patterns.
I encourage you to have a look at the CometD documentation, tutorials and javadocs.
Tags: scalability, broadcast, server-push, cometd, batching